Poisson process

A Poisson process, one of a variety of things named after the French mathematician Siméon-Denis Poisson (1781–1840), is a stochastic process defined in terms of the occurrences of events in some space. A stochastic process N(t) is a (time-homogeneous, one-dimensional) Poisson process if:

• The numbers of events occurring in two disjoint (non-overlapping) subintervals are independent random variables.
• The probability of the number of events in some subinterval [t, t + τ] being equal to k is given by
[itex] P [(N(t+ \tau) - N(t)) = k] = \frac{e^{-\lambda \tau} (\lambda \tau)^k}{k!} \qquad k= 0,1,\dots[/itex]

where k is the number of events occurring within [t, t + τ] and λ is a parameter known as the rate parameter. The number of events in the subinterval thus has a Poisson distribution with parameter λτ.
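As a quick numerical check, the probability above can be evaluated directly. A minimal sketch; the rate λ = 2 and interval length τ = 3 are arbitrary illustrative values, not from the text:

```python
import math

def poisson_pmf(k, mean):
    """P[N(t + tau) - N(t) = k] for a Poisson process, where mean = lambda * tau."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

# Illustrative values (assumptions): rate lambda = 2 events per unit time,
# over a subinterval of length tau = 3, so the Poisson parameter is 6.
mean = 2 * 3
print(poisson_pmf(0, mean))                          # probability of no events
print(sum(poisson_pmf(k, mean) for k in range(50)))  # probabilities sum to ~1
```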

More generally, a Poisson process is one that assigns to each bounded interval of time or to each bounded region in some space (for example, a Euclidean plane or a 3-dimensional Euclidean space) a random number of events in such a way that

• The number of events in one interval of time or region in space and the number of events in another disjoint (non-overlapping) interval of time or region in space are independent random variables.
• The number of events in each bounded interval of time or region in space is a random variable with a Poisson distribution whose expected value is proportional to the length of the interval or the measure (area or volume) of the region.

Technically, and perhaps more precisely, one should say each set of finite measure is assigned such a Poisson-distributed random variable.

The Poisson process is one of the best-known Lévy processes. (Time-homogeneous) Poisson processes are also examples of (time-homogeneous) continuous-time Markov chains. A time-homogeneous, one-dimensional Poisson process is a pure-birth process, the simplest example of a birth-death process.


Examples

• The number of telephone calls arriving at a switchboard during any specified time interval may have a Poisson distribution, and the number of calls arriving during one time interval may be statistically independent of the number of calls arriving during any other non-overlapping time interval. This is a one-dimensional Poisson process. In simple models, one may assume a constant average rate of arrival, e.g., λ = 12.3 calls per minute. In that case, the expected value of the number of calls in any time interval is that rate times the amount of time, λt. In messier and more realistic problems, one uses a non-constant rate function λ(t). In that case, the expected value of the number of calls between time a and time b is
[itex]\int_a^b \lambda(t)\,dt.[/itex]
• The number of bombs falling on a specified area of London in the early days of the Second World War may be a random variable with a Poisson distribution, and the number of bombs falling on two areas of the city that do not overlap may be statistically independent. The number of bombs observed to have fallen within an area A is a 2-dimensional Poisson process over the space defined by the area A.
• Astronomers may treat the number of stars in a given volume of space as a random variable with a Poisson distribution, and the numbers of stars in any two or more non-overlapping regions as statistically independent. The number of stars observed within some volume V is a 3-dimensional Poisson process over the space defined by the volume V.
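The expected count for a non-constant rate function can be sketched numerically. A minimal sketch using midpoint-rule integration; the time-varying rate function below is a hypothetical example, not from the text:

```python
import math

def expected_arrivals(rate, a, b, n=10_000):
    """Approximate the integral of rate(t) dt from a to b by the midpoint rule."""
    h = (b - a) / n
    return sum(rate(a + (i + 0.5) * h) for i in range(n)) * h

# Constant rate from the text: lambda = 12.3 calls per minute over 5 minutes,
# giving 12.3 * 5 = 61.5 expected calls.
print(expected_arrivals(lambda t: 12.3, 0.0, 5.0))

# A hypothetical time-varying rate (an assumption for illustration): busier midday.
rate = lambda t: 10 + 5 * math.sin(math.pi * t / 12)
print(expected_arrivals(rate, 9.0, 17.0))  # expected calls between hours 9 and 17
```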

1-dimensional Poisson processes

A 1-dimensional Poisson process on the interval from 0 to ∞ (essentially this means that the clock starts at time 0; that is when we begin counting) may thus be viewed as an integer-valued nondecreasing random function of time N(t) that counts the number of "arrivals" before time t. Just as a Poisson random variable is characterized by its scalar parameter λ, a Poisson process is characterized by its rate function λ(t), which is the expected number of "events" or "arrivals" that occur per unit time. A homogeneous Poisson process has a constant rate function λ(t) = λ. If the rate remains constant, then the number N(t) of arrivals before time t has a Poisson distribution with expected value λt.

Let Xt be the number of arrivals before time t. Let Tx be the time of the xth arrival, for x = 1, 2, 3, ... . (We are using capital X and capital T for random variables, and lower-case x and lower-case t for non-random quantities.) The random variable Xt has a discrete probability distribution (a Poisson distribution), and the random variable Tx has a continuous probability distribution.
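The relationship between the count Xt and the arrival time Tx can be checked by simulation. A sketch, assuming an illustrative rate λ = 2 and arbitrary choices t = 3, x = 4; sample paths are built from exponential interarrival gaps, a standard construction for homogeneous Poisson processes:

```python
import random

random.seed(0)
lam = 2.0       # assumed rate: 2 arrivals per unit time
t, x = 3.0, 4   # arbitrary illustrative choices
n = 100_000

count_events = 0   # occurrences of the event [X_t < x]
time_events = 0    # occurrences of the event [T_x > t]
for _ in range(n):
    arrival, k = 0.0, 0
    t_x = None
    while arrival <= t:
        arrival += random.expovariate(lam)  # exponential interarrival gap
        k += 1
        if k == x:
            t_x = arrival   # T_x: time of the x-th arrival
    x_t = k - 1             # X_t: arrivals before time t (the last gap overshot t)
    count_events += x_t < x
    time_events += t_x is None or t_x > t

print(count_events == time_events)   # the two events coincide path by path
```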

Clearly the number of arrivals before time t is less than x if and only if the waiting time until the xth arrival is more than t. In symbols, the event [ Xt < x ] occurs if and only if the event [ Tx > t ] does. Consequently the probabilities of these events are the same:

[itex]P(X_t < x) = P(T_x > t).[/itex]

This fact plus knowledge of the Poisson distribution enables us to find the probability distribution of these continuous random variables. In case the rate, i.e., the expected number of arrivals per unit time, remains constant, this is fairly simple. In particular, consider the waiting time until the first arrival. Clearly that time is more than t if and only if the number of arrivals before time t is 0. If the rate is λ arrivals per unit time, then we have

[itex]P(T_1>t)=P(X_t=0)=e^{-\lambda t}.[/itex]

Consequently, the waiting time until the first arrival has an exponential distribution, with expected value 1/λ. In other words, if the average rate of arrivals is, for example, 6 per minute, then the average waiting time until the first arrival is (unsurprisingly) 1/6 minute. This exponential distribution is memoryless, i.e. we have

[itex]P(T_1>t+s \mid T_1>t)=P(T_1>s).[/itex]

This says that the conditional probability that we need to wait, for example, more than another 10 seconds before the first arrival, given that the first arrival has not yet happened after 30 seconds, is no different from the initial probability that we need to wait more than 10 seconds for the first arrival. This is often misunderstood by students taking courses on probability: the fact that P(T1 > 40 | T1 > 30) = P(T1 > 10) does not mean that the events T1 > 40 and T1 > 30 are independent. To summarize: "memorylessness" of the probability distribution of the waiting time T1 until the first arrival means

[itex]\mathrm{(Right)}\ P(T_1>40 \mid T_1>30)=P(T_1>10).[/itex]

It does not mean

[itex]\mathrm{(Wrong)}\ P(T_1>40 \mid T_1>30)=P(T_1>40).[/itex]

(That would be independence. These two events are not independent.)
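The distinction between the right and wrong readings above takes only a few lines of arithmetic to verify. A small sketch, assuming for illustration a rate of one arrival per 30 seconds:

```python
import math

lam = 1 / 30                          # assumed rate: one arrival per 30 seconds
surv = lambda t: math.exp(-lam * t)   # P(T1 > t) for an exponential waiting time

cond = surv(40) / surv(30)            # P(T1 > 40 | T1 > 30)
print(abs(cond - surv(10)) < 1e-9)    # Right: equals P(T1 > 10) -> True
print(abs(cond - surv(40)) < 1e-9)    # Wrong: does not equal P(T1 > 40) -> False
```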

Characterization of Poisson processes

In its most general form, the only two conditions for a 1-dimensional process to be a (not necessarily homogeneous) Poisson process are:

• Orderliness, which roughly means
[itex]\lim_{\Delta t\to 0} P(X_{t+\Delta t} - X_t > 1 \mid X_{t+\Delta t} - X_t \geq 1)=0, [/itex]
which implies that arrivals don't occur simultaneously (but is actually a stronger statement). Simultaneous arrivals do occur in some compound Poisson processes.
• Memorylessness (also called evolution without aftereffects): the number of arrivals occurring in any bounded interval of time after time t is independent of the number of arrivals occurring before time t.

These seemingly unrestrictive conditions actually impose a great deal of structure on the Poisson process. In particular, they imply that the interarrival times are independent and exponentially distributed (with parameter λ for homogeneous processes). Because the interarrival times are exponentially distributed, the time between the 4th and 9th arrival (for instance) is distributed as the sum of 5 exponential random variables (i.e., it has a 5th-order gamma, or Erlang, distribution). These conditions also imply that the number of events in the interval [a, b), written Xb − Xa, has a Poisson distribution (with parameter λ(b − a) for homogeneous processes).
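Both consequences can be checked by Monte Carlo simulation. A sketch, assuming an illustrative rate λ = 2; the process is built from exponential interarrival times, and the tolerances in the comments are loose sampling tolerances:

```python
import random

random.seed(1)
lam = 2.0         # assumed rate (an illustrative choice)
n_runs = 50_000

# Time between the 4th and 9th arrivals: a sum of 5 exponential gaps,
# i.e. a 5th-order gamma (Erlang) variable with mean 5 / lam.
gap_total = 0.0
for _ in range(n_runs):
    times, t = [], 0.0
    for _ in range(9):
        t += random.expovariate(lam)  # exponential interarrival time
        times.append(t)
    gap_total += times[8] - times[3]  # T9 - T4
print(gap_total / n_runs)             # close to 5 / lam = 2.5

# Count of arrivals in [a, b): Poisson with mean lam * (b - a).
a, b = 1.0, 3.5
count_total = 0
for _ in range(n_runs):
    t, k = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t >= b:
            break
        if t >= a:
            k += 1
    count_total += k
print(count_total / n_runs)           # close to lam * (b - a) = 5.0
```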

[Missing image Sampleprocess.png: a sample path of a one-dimensional homogeneous Poisson process Xt; not to be confused with a density or distribution function.]
