A stochastic process is called Markovian (after the Russian mathematician Andrey Andreyevich Markov) if at any time t the conditional probability of an arbitrary future event given the entire past of the process—i.e., given X(s) for all s ≤ t—equals the conditional probability of that future event given only X(t). Thus, in order to make a probabilistic statement about the future behaviour of a Markov process, it is no more helpful to know the entire history of the process than it is to know only its current state. The conditional distribution of X(t + h) given X(t) is called the transition probability of the process. If this conditional distribution does not depend on t, the process is said to have “stationary” transition probabilities. A Markov process with stationary transition probabilities may or may not be a stationary process in the sense of the preceding paragraph. If Y1, Y2,… are independent random variables and X(t) = Y1 +⋯+ Yt, the stochastic process X(t) is a Markov process. Given X(t) = x, the conditional probability that X(t + h) belongs to an interval (a, b) is just the probability that Yt+1 +⋯+ Yt+h belongs to the translated interval (a − x, b − x); and because of independence this conditional probability would be the same if the values of X(1),…, X(t − 1) were also given. If the Ys are identically distributed as well as independent, this transition probability does not depend on t, and then X(t) is a Markov process with stationary transition probabilities. Sometimes X(t) is called a random walk, but this terminology is not completely standard. Since both the Poisson process and Brownian motion are created from random walks by simple limiting processes, they, too, are Markov processes with stationary transition probabilities. The Ornstein-Uhlenbeck process defined as the solution (19) to the stochastic differential equation (18) is also a Markov process with stationary transition probabilities.
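To make the Markov property concrete, here is a minimal Python sketch (not part of the original article) for the simplest case of ±1 steps: conditioned on X(4) = 0, the distribution of X(6) is the same whether the path arrived there through X(3) = 1 or through X(3) = −1, so the extra history carries no additional information.

```python
import random
random.seed(0)

def walk(t):
    """One sample path (X(1), ..., X(t)) of a walk with i.i.d. +/-1 steps."""
    x, path = 0, []
    for _ in range(t):
        x += random.choice((-1, 1))
        path.append(x)
    return path

# Estimate P{X(6) = 2 | X(4) = 0} separately for two different pasts:
# paths that had X(3) = 1 and paths that had X(3) = -1.
counts = {1: [0, 0], -1: [0, 0]}          # past X(3) -> [hits, trials]
for _ in range(400_000):
    p = walk(6)
    if p[3] == 0:                         # X(4) = 0 (note 0-based indexing)
        hits_trials = counts[p[2]]        # X(3) is necessarily +1 or -1 here
        hits_trials[0] += p[5] == 2       # did X(6) = X(4) + 2?
        hits_trials[1] += 1
for past, (hits, trials) in counts.items():
    print(past, hits / trials)            # both ~0.25: the past is irrelevant
```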
The Ornstein-Uhlenbeck process and many other Markov processes with stationary transition probabilities behave like stationary processes as t → ∞. Roughly speaking, the conditional distribution of X(t) given X(0) = x converges as t → ∞ to a distribution, called the stationary distribution, that does not depend on the starting value X(0) = x. Moreover, with probability 1, the proportion of time the process spends in any subset of its state space converges to the stationary probability of that set; and, if X(0) is given the stationary distribution to begin with, the process becomes a stationary process. The Ornstein-Uhlenbeck process defined in equation (19) is stationary if V(0) has a normal distribution with mean 0 and variance σ^2/(2mf).
At another extreme are absorbing processes. An example is the Markov process describing Peter’s fortune during the game of gambler’s ruin. The process is absorbed whenever either Peter or Paul is ruined. Questions of interest involve the probability of being absorbed in one state rather than another and the distribution of the time until absorption occurs. Some additional examples of stochastic processes follow.
The Ehrenfest model of diffusion
The Ehrenfest model of diffusion (named after the Austrian Dutch physicist Paul Ehrenfest) was proposed in the early 1900s in order to illuminate the statistical interpretation of the second law of thermodynamics, that the entropy of a closed system can only increase. Suppose N molecules of a gas are in a rectangular container divided into two equal parts by a permeable membrane. The state of the system at time t is X(t), the number of molecules on the left-hand side of the membrane. At each time t = 1, 2,… a molecule is chosen at random (i.e., each molecule has probability 1/N to be chosen) and is moved from its present location to the other side of the membrane. Hence, the system evolves according to the transition probability p(i, j) = P{X(t + 1) = j|X(t) = i}, where p(i, i + 1) = (N − i)/N, p(i, i − 1) = i/N, and p(i, j) = 0 if |j − i| ≠ 1.
The long-run behaviour of the Ehrenfest process can be inferred from general theorems about Markov processes in discrete time with discrete state space and stationary transition probabilities. Let T(j) denote the first time t ≥ 1 such that X(t) = j and set T(j) = ∞ if X(t) ≠ j for all t. Assume that for all states i and j it is possible for the process to go from i to j in some number of steps—i.e., P{T(j) < ∞|X(0) = i} > 0. If the equations Q(j) = ΣiQ(i)p(i, j) (20) have a solution Q(j) that is a probability distribution—i.e., Q(j) ≥ 0 and ΣQ(j) = 1—then that solution is unique and is the stationary distribution of the process. Moreover, Q(j) = 1/E{T(j)|X(0) = j}; and, for any initial state j, the proportion of time t that X(t) = i converges with probability 1 to Q(i).
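As an illustration (not from the original article), the following Python sketch constructs the solution of equations (20) for a small Ehrenfest model using the balance relation Q(i)p(i, i + 1) = Q(i + 1)p(i + 1, i), a device valid for birth-and-death chains such as this one, and compares it with the observed proportion of time a simulated chain spends in each state; the two agree closely, as the theorem asserts.

```python
import random
random.seed(0)

N, T = 6, 1_000_000

# Solve equations (20) via the birth-and-death balance relation
# Q(i) p(i, i+1) = Q(i+1) p(i+1, i), then normalize to sum to 1.
Q = [1.0]
for i in range(N):
    Q.append(Q[i] * ((N - i) / N) / ((i + 1) / N))
total = sum(Q)
Q = [q / total for q in Q]

# Simulate the chain and record the proportion of time in each state.
x, time_in = 0, [0] * (N + 1)
for _ in range(T):
    if random.random() < (N - x) / N:   # a right-side molecule was chosen
        x += 1
    else:                               # a left-side molecule was chosen
        x -= 1
    time_in[x] += 1

for j in range(N + 1):
    print(j, round(time_in[j] / T, 4), round(Q[j], 4))   # close agreement
```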
For the special case of the Ehrenfest process, assume that N is large and X(0) = 0. According to the deterministic prediction of the second law of thermodynamics, the entropy of this system can only increase, which means that X(t) will steadily increase until half the molecules are on each side of the membrane. Indeed, according to the stochastic model described above, there is overwhelming probability that X(t) does increase initially. However, because of random fluctuations, the system occasionally moves from configurations having large entropy to those of smaller entropy and eventually even returns to its starting state, in defiance of the second law of thermodynamics.
The accepted resolution of this contradiction is that the length of time such a system must operate in order that an observable decrease of entropy may occur is so enormously long that a decrease could never be verified experimentally. To consider only the most extreme case, let T denote the first time t ≥ 1 at which X(t) = 0—i.e., the time of first return to the starting configuration having all molecules on the right-hand side of the membrane. It can be verified by substitution in equation (20) that the stationary distribution of the Ehrenfest model is the binomial distribution Q(j) = [N!/(j!(N − j)!)](1/2)^N for j = 0, 1,…, N, and hence E(T) = 1/Q(0) = 2^N. For example, if N is only 100 and transitions occur at the rate of 10^6 per second, E(T) is of the order of 10^16 years. Hence, on the macroscopic scale, on which experimental measurements can be made, the second law of thermodynamics holds.
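A quick simulation (an illustrative sketch, not part of the article) checks the identity E(T) = 2^N for small values of N, where direct simulation is still feasible:

```python
import random
random.seed(1)

def mean_return_time(N, n_returns=2000):
    """Average time for the Ehrenfest chain, started at 0, to return to 0."""
    total = 0
    for _ in range(n_returns):
        x, t = 0, 0
        while True:
            x += 1 if random.random() < (N - x) / N else -1
            t += 1
            if x == 0:
                break
        total += t
    return total / n_returns

for N in (4, 8, 10):
    print(N, round(mean_return_time(N), 1), 2**N)   # simulated vs exact 2^N
```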
The symmetric random walk
A Markov process that behaves in quite different and surprising ways is the symmetric random walk. A particle occupies a point with integer coordinates in d-dimensional Euclidean space. At each time t = 1, 2,… it moves from its present location to one of its 2d nearest neighbours with equal probabilities 1/(2d), independently of its past moves. For d = 1 this corresponds to moving a step to the right or left according to the outcome of tossing a fair coin. It may be shown that for d = 1 or 2 the particle returns with probability 1 to its initial position and hence to every possible position infinitely many times, if the random walk continues indefinitely. In three or more dimensions, at any time t the number of possible steps that increase the distance of the particle from the origin is much larger than the number decreasing the distance, with the result that the particle eventually moves away from the origin and never returns. Even in one or two dimensions, although the particle eventually returns to its initial position, the expected waiting time until it returns is infinite, there is no stationary distribution, and the proportion of time the particle spends in any state converges to 0!
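The contrast between low and high dimensions is easy to see numerically. The sketch below (illustrative, with an arbitrary step budget of my own choosing) estimates the probability that the walk returns to the origin within a fixed number of steps: in one dimension the estimate is already close to 1, in two dimensions it creeps toward 1 only very slowly as the budget grows, and in three dimensions it stabilizes near Pólya's return probability of about 0.34.

```python
import random
random.seed(0)

def returned(d, max_steps=5000):
    """True if a d-dimensional symmetric walk revisits the origin in time."""
    pos = [0] * d
    for _ in range(max_steps):
        axis = random.randrange(d)
        pos[axis] += random.choice((-1, 1))
        if not any(pos):                 # all coordinates zero: back home
            return True
    return False

for d in (1, 2, 3):
    trials = 1000
    freq = sum(returned(d) for _ in range(trials)) / trials
    print(d, freq)   # d=1: ~0.99; d=2: well below 1 at this horizon; d=3: ~0.34
```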
Queuing models
The simplest service system is a single-server queue, where customers arrive, wait their turn, are served by a single server, and depart. Related stochastic processes are the waiting time of the nth customer and the number of customers in the queue at time t. For example, suppose that customers arrive at times 0 = T0 < T1 < T2 <⋯ and wait in a queue until their turn. Let Vn denote the service time required by the nth customer, n = 0, 1, 2,…, and set Un = Tn − Tn−1. The waiting time, Wn, of the nth customer satisfies the relation W0 = 0 and, for n ≥ 1, Wn = max(0, Wn−1 + Vn−1 − Un). To see this, observe that the nth customer must wait for the same length of time as the (n − 1)th customer, plus the service time of the (n − 1)th customer, minus the time between the arrivals of the (n − 1)th and nth customers, during which the (n − 1)th customer is already waiting but the nth customer is not. An exception occurs if this quantity is negative, in which case the waiting time of the nth customer is 0. Various assumptions can be made about the input and service mechanisms. One possibility is that customers arrive according to a Poisson process and that their service times are independent, identically distributed random variables that are also independent of the arrival process. Then, in terms of Yn = Vn−1 − Un, which are independent, identically distributed random variables, the recursive relation defining Wn becomes Wn = max(0, Wn−1 + Yn). This process is a Markov process. It is often called a random walk with reflecting barrier at 0, because it behaves like a random walk whenever it is positive and is pushed up to 0 whenever it tries to become negative. Quantities of interest are the mean and variance of the waiting time of the nth customer and, since these are very difficult to determine exactly, the mean and variance of the stationary distribution. More realistic queuing models try to accommodate systems with several servers and different classes of customers, who are served according to certain priorities. In most cases it is impossible to give a mathematical analysis of the system, which must be simulated on a computer in order to obtain numerical results. The insights gained from theoretical analysis of simple cases can be helpful in performing these simulations. Queuing theory had its origins in attempts to understand traffic in telephone systems. Present-day research is stimulated by, among other things, problems associated with multiple-user computer systems.
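As a hedged illustration (the parameters and the M/M/1 setting are my own choice, not the article's), the following sketch iterates the recursion Wn = max(0, Wn−1 + Yn) for a queue with Poisson arrivals at rate 0.8 and exponential service at rate 1, and compares the long-run average waiting time with the classical M/M/1 value ρ/(μ − λ):

```python
import random
random.seed(0)

# Illustrative parameters: Poisson arrivals at rate lam (exponential
# interarrival times U) and i.i.d. exponential service times V at rate mu.
lam, mu, n = 0.8, 1.0, 200_000

w, total_wait = 0.0, 0.0                 # w = W_0 = 0
v_prev = random.expovariate(mu)          # V_0, service time of customer 0
for _ in range(n):
    u = random.expovariate(lam)          # U_n = T_n - T_{n-1}
    w = max(0.0, w + v_prev - u)         # Lindley recursion for W_n
    v_prev = random.expovariate(mu)      # V_n, used in the next iteration
    total_wait += w

rho = lam / mu
print(total_wait / n)                    # simulated mean waiting time
print(rho / (mu - lam))                  # M/M/1 stationary mean: 4.0
```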
Reflecting barriers arise in other problems as well. For example, if B(t) denotes Brownian motion, then X(t) = B(t) + ct is called Brownian motion with drift c. This model is appropriate for Brownian motion of a particle under the influence of a constant force field such as gravity. One can add a reflecting barrier at 0 to account for reflections of the Brownian particle off the bottom of its container. The result is a model for sedimentation, which for c < 0 in the steady state as t → ∞ gives a statistical derivation of the law of pressure as a function of depth in an isothermal atmosphere. Just as ordinary Brownian motion can be obtained as the limit of a rescaled random walk as the number of steps becomes very large and the size of individual steps small, Brownian motion with a reflecting barrier at 0 can be obtained as the limit of a rescaled random walk with reflection at 0. In this way, Brownian motion with a reflecting barrier plays a role in the analysis of queuing systems. In fact, in modern probability theory one of the most important uses of Brownian motion and other diffusion processes is as approximations to more complicated stochastic processes. The exact mathematical description of these approximations gives remarkable generalizations of the central limit theorem from sequences of random variables to sequences of random functions.
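A sketch of that limiting construction (with illustrative scaling constants of my own choosing, not from the article): the walk takes spatial steps of size ±√δ plus a drift contribution cδ in each time increment δ and is reflected at 0. For c < 0 the long-run average height settles near σ^2/(2|c|), the mean of the exponential stationary law of reflected Brownian motion with drift, which is the content of the barometric-pressure interpretation.

```python
import random
random.seed(0)

c, delta, steps = -1.0, 0.01, 1_000_000  # drift, time increment, step budget
sq = delta ** 0.5                        # spatial step, so variance rate is 1

x, total, count = 0.0, 0.0, 0
for t in range(steps):
    x += c * delta + random.choice((-sq, sq))
    x = max(x, 0.0)                      # reflecting barrier at 0
    if t > steps // 10:                  # discard the initial transient
        total += x
        count += 1

print(total / count)   # ~0.5 = sigma^2/(2|c|), mean of the exponential law
```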
Insurance risk theory
The ruin problem of insurance risk theory is closely related to the problem of gambler’s ruin described earlier and, rather surprisingly, to the single-server queue as well. Suppose the amount of capital at time t in one portfolio of an insurance company is denoted by X(t). Initially X(0) = x > 0. During each unit of time, the portfolio receives an amount c > 0 in premiums. At random times claims are made against the insurance company, which must pay the amount Vn > 0 to settle the nth claim. If N(t) denotes the number of claims made up to time t, then X(t) = x + ct − (V1 +⋯+ VN(t)), provided that this quantity has been positive at all earlier times s < t. At the first time X(t) becomes negative, however, the portfolio is ruined. A principal problem of insurance risk theory is to find the probability of ultimate ruin. If one imagines that the problem of gambler’s ruin is modified so that Peter’s opponent has an infinite amount of capital and can never be ruined, then the probability that Peter is ultimately ruined is similar to the ruin probability of insurance risk theory. In fact, with the artificial assumptions that (i) c = 1, (ii) time proceeds by discrete units, say t = 1, 2,…, (iii) Vn is identically equal to 2 for all n, and (iv) at each time t a claim occurs with probability q or does not occur with probability p = 1 − q, independently of what occurs at other times, the process X(t) is the same stochastic process as Peter’s fortune, which is absorbed if it ever reaches the state 0. The probability of Peter’s ultimate ruin against an infinitely rich adversary is easily obtained by taking the limit of equation (6) as m → ∞. The answer is (q/p)^x if p > q—i.e., if the game is favourable to Peter—and 1 if p ≤ q. More interesting assumptions for the insurance risk problem are that the number of claims N(t) is a Poisson process and that the sizes of the claims V1, V2,… are independent, identically distributed positive random variables. Rather surprisingly, under these assumptions the probability of ultimate ruin as a function of the initial fortune x is exactly the same as the stationary probability that the waiting time in the single-server queue with Poisson input exceeds x. Unfortunately, neither problem is easy to solve exactly, although there is a very good approximate solution originally derived by the Swedish mathematician Harald Cramér.
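The discrete special case can be checked directly. A minimal sketch (parameters chosen purely for illustration) estimates Peter's ruin probability against an infinitely rich adversary by simulation, truncating each game at a long horizon, and compares it with (q/p)^x:

```python
import random
random.seed(0)

def ruin_prob(x, p, trials=10_000, horizon=1_000):
    """Fortune starts at x and moves +1 w.p. p, -1 w.p. 1 - p each period;
    estimate the chance of hitting 0.  The finite horizon slightly
    underestimates ruin, but for p > q the truncation error is tiny."""
    ruined = 0
    for _ in range(trials):
        f = x
        for _ in range(horizon):
            f += 1 if random.random() < p else -1
            if f == 0:
                ruined += 1
                break
    return ruined / trials

p, x = 0.6, 3
q = 1 - p
print(ruin_prob(x, p))       # simulated ruin probability
print((q / p) ** x)          # (q/p)^x = (2/3)^3 ~ 0.296
```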
Martingale theory
As a final example, it seems appropriate to mention one of the dominant ideas of modern probability theory, which at the same time springs directly from the relation of probability to games of chance. Suppose that X1, X2,… is any stochastic process and, for each n = 0, 1,…, fn = fn(X1,…, Xn) is a (Borel-measurable) function of the indicated observations. The new stochastic process fn is called a martingale if E(fn|X1,…, Xn−1) = fn−1 for every value of n > 0 and all values of X1,…, Xn−1. If the Xs are outcomes in successive trials of a game of chance and fn is the fortune of a gambler after the nth trial, then the martingale condition says that the game is absolutely fair in the sense that, no matter what the past history of the game, the gambler’s conditional expected fortune after one more trial is exactly equal to his present fortune. For example, let X0 = x, and for n ≥ 1 let Xn equal 1 or −1 according as a coin having probability p of heads and q = 1 − p of tails turns up heads or tails on the nth toss. Let Sn = X0 +⋯+ Xn. Then fn = Sn − n(p − q) and fn = (q/p)^Sn are martingales. One of the basic results of martingale theory is that, if the gambler is free to quit the game at any time using any strategy whatever, provided only that this strategy does not foresee the future, then the game remains fair. This means that, if N denotes the stopping time at which the gambler’s strategy tells him to quit the game, so that his final fortune is fN, then E(fN|f0) = f0. (21)
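One consequence of the martingale condition is that E(fn) stays equal to f0 for every fixed n. The sketch below (illustrative, not from the article) checks this numerically for the two martingales just defined:

```python
import random
random.seed(0)

p, x, trials = 0.6, 2, 200_000
q = 1 - p

for n in (1, 5, 10):
    avg_a = avg_b = 0.0
    for _ in range(trials):
        # S_n = x + (sum of n coin-toss steps of +1 or -1)
        s = x + sum(1 if random.random() < p else -1 for _ in range(n))
        avg_a += s - n * (p - q)         # f_n = S_n - n(p - q)
        avg_b += (q / p) ** s            # f_n = (q/p)^S_n
    print(n, avg_a / trials, avg_b / trials)

# Both averages remain at their n = 0 values: x = 2 and (q/p)^x ~ 0.444.
```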
Strictly speaking, this result is not true without some additional conditions that must be verified for any particular application. To see how efficiently it works, consider once again the problem of gambler’s ruin and let N be the first value of n such that Sn = 0 or m; i.e., N denotes the random time at which ruin first occurs and the game ends. In the case p = 1/2, application of equation (21) to the martingale fn = Sn, together with the observation that fN = either 0 or m, yields the equalities x = f0 = E(fN|f0 = x) = m[1 − Q(x)], which can be immediately solved to give the answer in equation (6). For p ≠ 1/2, one uses the martingale fn = (q/p)^Sn and similar reasoning to obtain (q/p)^x = Q(x) + (q/p)^m[1 − Q(x)], from which the first equation in (6) easily follows. The expected duration of the game is obtained by a similar argument.
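To see the optional stopping argument in action, the following sketch (with illustrative parameters of my own choosing) solves the displayed equation for Q(x) and compares it with a direct simulation of the game until absorption at 0 or m:

```python
import random
random.seed(0)

def ruined(x, m, p):
    """Play unit-stake rounds until the fortune reaches 0 or m."""
    f = x
    while 0 < f < m:
        f += 1 if random.random() < p else -1
    return f == 0

p, x, m, trials = 0.45, 5, 10, 50_000
q = 1 - p
r = q / p
# Solving (q/p)^x = Q(x) + (q/p)^m [1 - Q(x)] for the ruin probability:
Q = (r**x - r**m) / (1 - r**m)
sim = sum(ruined(x, m, p) for _ in range(trials)) / trials
print(Q, sim)   # both ~0.73
```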
A particularly beautiful and important result is the martingale convergence theorem, which implies that a nonnegative martingale converges with probability 1 as n → ∞. This means that, if a gambler’s successive fortunes form a (nonnegative) martingale, they cannot continue to fluctuate indefinitely but must approach some limiting value.
Basic martingale theory and many of its applications were developed by the American mathematician Joseph Leo Doob during the 1940s and ’50s following some earlier results due to Paul Lévy. Subsequently it has become one of the most powerful tools available to study stochastic processes.
David O. Siegmund