For clarity I have divided my question into parts:
1. Let [a,b] and [c,d] be two non-overlapping intervals inside the interval [0,N]. If I randomly select a number inside [0,N], the probability that the selected number lies in either interval is proportional to the length of that interval, (b-a) or (d-c) respectively. However, this seems to conflict with notions of size in set theory. If we define one set to contain the reals in the interval [a,b] and another to contain the reals in the interval [c,d], don't these sets have the same cardinality? In that case it seems they should yield the same probability of containing the random number, since they are the same "size".
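For concreteness, here is the computation I have in mind (X denotes the uniformly selected number):

$$P(a \le X \le b) = \frac{b-a}{N}, \qquad P(c \le X \le d) = \frac{d-c}{N},$$

even though both [a,b] and [c,d], regarded as sets, have the cardinality of the continuum.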
2. Now suppose that instead of the interval [0,N] we consider the entire real number line. It is impossible to randomly select a real number in the intuitive sense of (1) (assigning "equal chance" to equally sized intervals), because the probability density would not be normalizable. Thus we must define a probability distribution that determines how the "random" number is selected. So it is no longer the case that larger intervals have a higher probability of containing the "randomly" selected number; rather, the probability that the number lies in any particular interval depends on the distribution chosen. For example, a continuous probability distribution can be chosen so that both intervals have probability 0, or so that either one has greater probability than the other. In any case, intuitively, the probability distribution "biases" the random number toward certain parts of the real line. (I guess this wasn't really a question; I just want to verify that I am correct.)
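To make the "bias" concrete, here is a quick numerical check I tried; the standard normal is just one arbitrary choice of distribution, and the two intervals are ones I made up:

```python
from scipy.stats import norm  # standard normal N(0, 1)

# A long interval far out in the tail vs. a short interval near the center.
p_long = norm.cdf(110) - norm.cdf(100)   # interval [100, 110], length 10
p_short = norm.cdf(0.5) - norm.cdf(0.0)  # interval [0, 0.5], length 0.5

print(p_long)   # effectively 0
print(p_short)  # about 0.19
```

So under this distribution the interval that is twenty times longer is nonetheless far less likely to contain the selected number.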
3. Given (2), I can't help but feel that mathematics has failed me, for it seems at least conceivable to choose a random real number with all real numbers equally likely. Moreover, it seems that if this were possible, then given two intervals of different sizes, we would be justified in saying that the randomly selected number is more likely to fall in the larger interval. Alternatively, it seems we could say the probability is zero for any finite interval, since there are so many reals. Yet there appears to be no mathematical basis for any such claim. Is developing such a mathematical theory just not of interest to mathematicians, or is it impossible?
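For what it's worth, here is the obstruction as I understand it (this assumes countable additivity, which is built into standard probability theory): a "uniform" distribution on the reals would have to assign the same probability c to every unit interval [n, n+1), and then

$$1 = P(\mathbb{R}) = \sum_{n=-\infty}^{\infty} P\big([n, n+1)\big) = \sum_{n=-\infty}^{\infty} c,$$

which is 0 if c = 0 and infinite if c > 0, so no such c can exist.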
4. Is there any probability distribution over the entire real line that is close to being uniform?
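The closest candidate I can think of is the sequence of uniform distributions on [-n, n] (my own naive attempt): under the n-th of these, any fixed interval [a,b] contained in [-n, n] has probability

$$P_n\big([a,b]\big) = \frac{b-a}{2n} \longrightarrow 0 \quad \text{as } n \to \infty,$$

so in the limit every finite interval gets probability 0 and no genuine distribution remains. Is there a better notion of "approximately uniform"?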
5. If I wanted to explore these mathematical questions rigorously myself, should I pick up an intro probability text, or something else? Would an intro-level text answer all of these questions?
Thanks.