The mathematical relation between these two experiments was recognized in 1909 by the French mathematician Émile Borel, who used the then new ideas of measure theory to give a precise mathematical model and to formulate what is now called the strong law of large numbers for fair coin tossing. His results can be described as follows. Let e denote a number chosen at random from [0, 1], and let Xk(e) be the kth coordinate in the expansion of e to the base 2. Then X1, X2,… form an infinite sequence of independent random variables taking the values 0 or 1 with probability 1/2 each. Moreover, the subset of [0, 1] consisting of those e for which the sequence n⁻¹[X1(e) +⋯+ Xn(e)] tends to 1/2 as n → ∞ has probability 1. Symbolically,
P{e: n⁻¹[X1(e) +⋯+ Xn(e)] → 1/2 as n → ∞} = 1. (14)
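
Borel's equivalence can also be observed numerically. The following Python sketch is purely illustrative (the function name running_averages and the checkpoint values are invented for this example): it uses the fact that the base-2 digits of a uniformly chosen e behave like independent fair coin tosses, simulates them directly, and prints the running average n⁻¹(X1 +⋯+ Xn), which settles near 1/2 as the strong law predicts.

```python
import random

random.seed(0)

# The base-2 digits X_1(e), X_2(e), ... of a number e chosen uniformly from
# [0, 1] are independent fair coin tosses, so they can be simulated directly.
def running_averages(num_digits, checkpoints):
    total = 0
    averages = {}
    for k in range(1, num_digits + 1):
        total += random.getrandbits(1)     # X_k: 0 or 1, each with probability 1/2
        if k in checkpoints:
            averages[k] = total / k
    return averages

for n, avg in running_averages(100_000, {10, 100, 1_000, 10_000, 100_000}).items():
    print(f"n = {n:7d}   average = {avg:.4f}")    # drifts toward 1/2 as n grows
```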

The weak law of large numbers given in equation (11) says that for any ε > 0, for each sufficiently large value of n, there is only a small probability of observing a deviation of X̄n = n⁻¹(X1 +⋯+ Xn) from 1/2 that is larger than ε; nevertheless, it leaves open the possibility that sooner or later this rare event will occur if one continues to toss the coin and observe the sequence for a sufficiently long time. The strong law, however, asserts that the occurrence of even one value of X̄k for k ≥ n that differs from 1/2 by more than ε is an event of arbitrarily small probability, provided n is large enough. The proof of equation (14) and of various subsequent generalizations is much more difficult than that of the weak law of large numbers. The adjectives “strong” and “weak” refer to the fact that the truth of a result such as equation (14) implies the truth of the corresponding version of equation (11), but not conversely.

Measure theory

During the two decades following 1909, measure theory was used in many concrete problems of probability theory, notably in the American mathematician Norbert Wiener’s treatment (1923) of the mathematical theory of Brownian motion, but the notion that all problems of probability theory could be formulated in terms of measure is customarily attributed to the Soviet mathematician Andrey Nikolayevich Kolmogorov in 1933.

The fundamental quantities of the measure theoretic foundation of probability theory are the sample space S, which as before is just the set of all possible outcomes of an experiment, and a distinguished class M of subsets of S, called events. Unlike the case of finite S, in general not every subset of S is an event. The class M must have certain properties described below. Each event is assigned a probability, which means mathematically that a probability is a function P mapping M into the real numbers that satisfies certain conditions derived from one’s physical ideas about probability.

The properties of M are as follows: (i) S ∊ M; (ii) if A ∊ M, then its complement Aᶜ ∊ M; (iii) if A1, A2,… ∊ M, then A1 ∪ A2 ∪ ⋯ ∊ M. Recalling that M is the domain of definition of the probability P, one can interpret (i) as saying that P(S) is defined, (ii) as saying that, if the probability of A is defined, then the probability of “not A” is also defined, and (iii) as saying that, if one can speak of the probability of each of a sequence of events An individually, then one can speak of the probability that at least one of the An occurs. A class of subsets of any set that has properties (i)–(iii) is called a σ-field. From these properties one can prove others. For example, it follows at once from (i) and (ii) that Ø (the empty set) belongs to the class M. Since the intersection of any class of sets can be expressed as the complement of the union of the complements of those sets (De Morgan’s law), it follows from (ii) and (iii) that, if A1, A2,… ∊ M, then A1 ∩ A2 ∩ ⋯ ∊ M.
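
For a finite sample space these requirements are easy to verify directly. The following Python sketch is an illustration only (the three-point sample space is made up, and for a finite class countable unions reduce to finite ones, so this is a toy version of the general definition): it checks properties (i)–(iii) for the class of all subsets of S and confirms the De Morgan argument for closure under intersections.

```python
from itertools import chain, combinations

S = frozenset({1, 2, 3})          # a small, illustrative sample space

# M is the class of all subsets of S (its power set).
M = [frozenset(c)
     for c in chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

def complement(A):
    return S - A

assert S in M                                          # property (i)
assert all(complement(A) in M for A in M)              # property (ii)
assert all((A | B) in M for A in M for B in M)         # property (iii), finite case

# De Morgan's law: the intersection of A and B is the complement of the union
# of their complements, so closure under complements and unions also gives
# closure under intersections.
assert all(A & B == complement(complement(A) | complement(B))
           for A in M for B in M)
print("all sigma-field properties hold for the power set of S")
```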

Given a set S and a σ-field M of subsets of S, a probability measure is a function P that assigns to each set A ∊ M a nonnegative real number and that has the following two properties: (a) P(S) = 1 and (b) if A1, A2,… ∊ M and Ai ∩ Aj = Ø for all i ≠ j, then P(A1 ∪ A2 ∪ ⋯) = P(A1) + P(A2) +⋯. Property (b) is called the axiom of countable additivity. It is clearly motivated by equation (1), which suffices for finite sample spaces because there are only finitely many events. In infinite sample spaces it implies, but is not implied by, equation (1). There is, however, nothing in one’s intuitive notion of probability that requires the acceptance of this property. Indeed, a few mathematicians have developed probability theory with only the weaker axiom of finite additivity, but the absence of interesting models that fail to satisfy the axiom of countable additivity has led to its virtually universal acceptance.

To get a better feeling for this distinction, consider the experiment of tossing a biased coin having probability p of heads and q = 1 − p of tails until heads first appears. To be consistent with the idea that the tosses are independent, the probability that exactly n tosses are required equals qⁿ⁻¹p, since the first n − 1 tosses must be tails, and they must be followed by a head. One can imagine that this experiment never terminates—i.e., that the coin continues to turn up tails forever. By the axiom of countable additivity, however, the probability that heads occurs at some finite value of n equals p + qp + q²p + ⋯ = p/(1 − q) = 1, by the formula for the sum of an infinite geometric series. Hence, the probability that the experiment goes on forever equals 0. Similarly, one can compute the probability that the number of tosses is odd as p + q²p + q⁴p + ⋯ = p/(1 − q²) = 1/(1 + q). On the other hand, if only finite additivity were required, it would be possible to define the following admittedly bizarre probability. The sample space S is the set of all natural numbers, and the σ-field M is the class of all subsets of S. If an event A contains finitely many elements, P(A) = 0, and, if the complement of A contains finitely many elements, P(A) = 1. As a consequence of the deceptively innocuous axiom of choice (which says that, given any collection C of nonempty sets, there exists a rule for selecting a unique point from each set in C), one can show that many finitely additive probabilities consistent with these requirements exist. However, one cannot be certain what the probability of getting an odd number is, because that set is neither finite nor the complement of a finite set, nor can it be expressed as a finite disjoint union of sets whose probability is already defined.
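
These two geometric-series calculations can be checked numerically. The sketch below is illustrative only (the value p = 0.3 is an arbitrary choice, and the infinite sums are truncated): it confirms that the probabilities qⁿ⁻¹p sum to 1 and that the terms with odd n sum to 1/(1 + q).

```python
from math import isclose

p = 0.3                      # probability of heads on a single toss (illustrative)
q = 1 - p

# P(exactly n tosses are required) = q**(n - 1) * p, for n = 1, 2, 3, ...
probs = [q ** (n - 1) * p for n in range(1, 2000)]    # truncated infinite sum

total = sum(probs)           # should be p / (1 - q) = 1
odd = sum(probs[0::2])       # n = 1, 3, 5, ...; should be p / (1 - q**2) = 1 / (1 + q)

print(total, odd, 1 / (1 + q))
assert isclose(total, 1.0, rel_tol=1e-9)
assert isclose(odd, 1 / (1 + q), rel_tol=1e-9)
```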

It is a basic problem, and by no means a simple one, to show that the intuitive notion of choosing a number at random from [0, 1], as described above, is consistent with the preceding definitions. Since the probability of an interval is to be its length, the class of events M must contain all intervals; but in order to be a σ-field it must contain other sets, many of which are difficult to describe in an elementary way. One example is the event in equation (14), which must belong to M in order that one can talk about its probability. Also, although it seems clear that the length of a finite disjoint union of intervals is just the sum of their lengths, a rather subtle argument is required to show that length has the property of countable additivity. A basic theorem says that there is a suitable σ-field containing all the intervals and a unique probability defined on this σ-field for which the probability of an interval is its length. The σ-field is called the class of Lebesgue-measurable sets, and the probability is called the Lebesgue measure, after the French mathematician and principal architect of measure theory, Henri-Léon Lebesgue.

In general, a σ-field need not be all subsets of the sample space S. The question of whether all subsets of [0, 1] are Lebesgue-measurable turns out to be a difficult problem that is intimately connected with the foundations of mathematics and in particular with the axiom of choice.

Probability density functions

For random variables having a continuum of possible values, the function that plays the same role as the probability distribution of a discrete random variable is called a probability density function. If the random variable is denoted by X, its probability density function f has the property that P{a < X ≤ b} = ∫ₐᵇ f(x) dx for every interval (a, b]; i.e., the probability that X falls in (a, b] is the area under the graph of f between a and b (see the figure). For example, if X denotes the outcome of selecting a number at random from the interval [r, s], the probability density function of X is given by f(x) = 1/(s − r) for r < x < s and f(x) = 0 for x < r or x > s. The function F defined by F(x) = P{X ≤ x} is called the distribution function, or cumulative distribution function, of X. If X has a probability density function f(x), the relation between f and F is F′(x) = f(x) or, equivalently, F(x) = ∫ f(y) dy, with the integral taken from −∞ to x.
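
The uniform example can be made concrete with a short numerical check. In this sketch (illustrative only; the endpoints r = 2 and s = 5 and the interval (2.5, 4] are arbitrary choices), the probability P{a < X ≤ b} is computed as the area under f between a and b and compared with F(b) − F(a).

```python
def f(x, r=2.0, s=5.0):
    """Uniform density on [r, s]: 1/(s - r) inside the interval, 0 outside."""
    return 1.0 / (s - r) if r < x < s else 0.0

def F(x, r=2.0, s=5.0):
    """Cumulative distribution function F(x) = P{X <= x} for the same uniform."""
    return min(max((x - r) / (s - r), 0.0), 1.0)

# P{a < X <= b} as the area under the density between a and b (midpoint rule),
# compared with the difference of the distribution function.
a, b, n = 2.5, 4.0, 100_000
dx = (b - a) / n
area = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
print(area, F(b) - F(a))     # both are 0.5
```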

The distribution function F of a discrete random variable should not be confused with its probability distribution f. In this case the relation between F and f is F(x) = Σ f(xi), where the sum is taken over all values xi such that xi ≤ x.
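
For instance, for the number of heads in two tosses of a fair coin, F is obtained from f by accumulating its values, as in this small sketch (the example distribution is chosen only for illustration):

```python
from itertools import accumulate

# Probability distribution f of a discrete random variable:
# here, the number of heads in two tosses of a fair coin.
values = [0, 1, 2]
f = [0.25, 0.50, 0.25]

# Distribution function F(x): the sum of f(x_i) over all values x_i <= x.
F = list(accumulate(f))
for x, Fx in zip(values, F):
    print(f"F({x}) = {Fx}")      # 0.25, 0.75, 1.0
```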

If a random variable X has a probability density function f(x), its “expectation” can be defined by E(X) = ∫ x f(x) dx (15), with the integral extending over the whole real line, provided that this integral is convergent. It turns out to be simpler, however, not only to use Lebesgue’s theory of measure to define probabilities but also to use his theory of integration to define expectation. Accordingly, for any random variable X, E(X) is defined to be the Lebesgue integral of X with respect to the probability measure P, provided that the integral exists. In this way it is possible to provide a unified theory in which all random variables, both discrete and continuous, can be treated simultaneously. In order to follow this path, it is necessary to restrict the class of those functions X defined on S that are to be called random variables, just as it was necessary to restrict the class of subsets of S that are called events. The appropriate restriction is that a random variable must be a measurable function. The definition is taken over directly from the Lebesgue theory of integration and will not be discussed here. It can be shown that, whenever X has a probability density function, its expectation (provided it exists) is given by equation (15), which remains a useful formula for calculating E(X).
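
As an illustration of equation (15), the following sketch approximates the integral numerically for an exponential density (the parameter μ = 2 is an arbitrary choice made only for this example) and compares the result with the known mean 1/μ.

```python
import math

MU = 2.0    # illustrative parameter of an exponential density

def exponential_density(x, mu=MU):
    """Exponential density: mu * exp(-mu * x) for x >= 0, and 0 otherwise."""
    return mu * math.exp(-mu * x) if x >= 0 else 0.0

# E(X) via equation (15): integrate x * f(x) over a long interval [0, upper].
upper, n = 50.0, 200_000
dx = upper / n
expectation = sum((i + 0.5) * dx * exponential_density((i + 0.5) * dx)
                  for i in range(n)) * dx
print(expectation, 1 / MU)   # both are approximately 0.5
```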

Some important probability density functions are the following: the normal density with mean μ and variance σ², f(x) = exp[−(x − μ)²/(2σ²)]/(σ√(2π)) for −∞ < x < ∞; the exponential density with parameter μ, f(x) = μ exp(−μx) for x ≥ 0 and f(x) = 0 for x < 0; and the Cauchy density, f(x) = 1/[π(1 + x²)] for −∞ < x < ∞.

The cumulative distribution function of the normal distribution with mean 0 and variance 1 has already appeared as the function G defined following equation (12). The law of large numbers and the central limit theorem continue to hold for random variables on infinite sample spaces. A useful interpretation of the central limit theorem stated formally in equation (12) is as follows: The probability that the average (or sum) of a large number of independent, identically distributed random variables with finite variance falls in an interval (c1, c2] equals approximately the area between c1 and c2 underneath the graph of a normal density function chosen to have the same expectation and variance as the given average (or sum). The figure illustrates the normal approximation to the binomial distribution with n = 10 and p = 1/2.
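
The quality of the approximation in this example can be checked directly. The sketch below is illustrative (it also applies the standard continuity correction of 1/2, a refinement not discussed above): it compares the exact binomial probability of an interval with the corresponding area under the matching normal curve.

```python
from math import comb, erf, sqrt

n, p = 10, 0.5
mean, var = n * p, n * p * (1 - p)         # expectation and variance of the sum

def binom_cdf(k):
    """P{at most k heads} in n tosses of a fair coin."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def normal_cdf(x):
    """Distribution function of the normal law with the same mean and variance."""
    return 0.5 * (1 + erf((x - mean) / sqrt(2 * var)))

# P{c1 < sum <= c2} versus the area under the matching normal density.
c1, c2 = 3, 7
exact = binom_cdf(c2) - binom_cdf(c1)
approx = normal_cdf(c2 + 0.5) - normal_cdf(c1 + 0.5)   # continuity correction
print(exact, approx)     # roughly 0.773 and 0.772
```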

The exponential distribution arises naturally in the study of the Poisson distribution introduced in equation (13). If Tk denotes the time interval between the emission of the (k − 1)st and the kth particle, then T1, T2,… are independent random variables having an exponential distribution with parameter μ. This is obvious for T1 from the observation that {T1 > t} = {N(t) = 0}. Hence, P{T1 ≤ t} = 1 − P{N(t) = 0} = 1 − exp(−μt), and by differentiation one obtains the exponential density function.
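
The identity P{T1 > t} = P{N(t) = 0} can be illustrated by simulation. In the sketch below (illustrative only; the rate μ = 1.5, the horizon t = 2, and the slot width are arbitrary choices), the Poisson process is approximated by Bernoulli trials in small time slots, and the empirical frequency of "no emission before t" is compared with exp(−μt).

```python
import math
import random

random.seed(1)
mu, t = 1.5, 2.0                  # illustrative emission rate and time horizon
delta, trials = 0.005, 20_000     # slot width and number of simulated runs

# Approximate the Poisson process by Bernoulli trials: in each small slot of
# length delta an emission occurs with probability mu * delta, independently.
def no_emission_up_to(t):
    """One simulated run: True if no particle is emitted in (0, t]."""
    steps = round(t / delta)
    return all(random.random() >= mu * delta for _ in range(steps))

empirical = sum(no_emission_up_to(t) for _ in range(trials)) / trials
# P{T1 > t} = P{N(t) = 0}, which should be close to exp(-mu * t) ≈ 0.0498
print(empirical, math.exp(-mu * t))
```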

The Cauchy distribution does not have a mean value or a variance, because the integral (15) does not converge. As a result, it has a number of unusual properties. For example, if X1, X2,…, Xn are independent random variables having a Cauchy distribution, the average (X1 +⋯+ Xn)/n also has a Cauchy distribution. The variability of the average is exactly the same as that of a single observation. Another random variable that does not have an expectation is the waiting time until the number of heads first equals the number of tails in tossing a fair coin.
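
This stability of the Cauchy distribution under averaging is easy to observe by simulation. The following sketch is illustrative (the sample sizes are arbitrary, and the interquartile range is used as the measure of spread because the variance does not exist): it compares the spread of single Cauchy observations with that of averages of 100 of them.

```python
import math
import random

random.seed(2)

def cauchy():
    """One standard Cauchy variate, generated by the inverse-CDF method."""
    return math.tan(math.pi * (random.random() - 0.5))

def interquartile_range(samples):
    s = sorted(samples)
    return s[3 * len(s) // 4] - s[len(s) // 4]

trials, n = 20_000, 100
singles = [cauchy() for _ in range(trials)]
averages = [sum(cauchy() for _ in range(n)) / n for _ in range(trials)]

# Unlike the normal case, averaging does not reduce the spread at all:
# both interquartile ranges are close to 2, the value for a single observation.
print(interquartile_range(singles), interquartile_range(averages))
```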