Talk:Stochastic process
The definition given on this page for "stochastic process" as a random function is an elegant mathematical definition if one takes the viewpoint that there is a family of functions on a common domain and range and a probability measure on a sigma algebra of subsets of that family. But might a reader interpret this to mean that there could be a random choice of a function from a set of functions with different domains and ranges? (And would that fall within the intended definition?)
- One way to deal with varying domains is by taking direct sums of the function spaces involved. -- Miguel
Whatever the final verdict on the above point is, it would help to explain how the often seen definition of a stochastic process as "an indexed family of random variables" agrees with the idea of "random function". This would involve explaining that an indexing set can be finite, countable or uncountable. That fact may surprise someone who hasn't studied abstract mathematics.
- This article needs input from other people, please add this definition if you'd like. -- Miguel
There is some inconsistency in the way books treat the term "stochastic process". Some (such as Gardiner) restrict it to a process in time. This appears to be in the same spirit as authors who say, of a vector space, "In this book W will be a finite dimensional vector space" in order to set a context that is more specific than the general definition of "vector". There are books which use "Random Field" to include processes that take place in time. These are arbitrary conventions. It would help the reader to mention that he may encounter these inconsistencies.
- I think it is best to give the broadest possible definition to which the basic techniques can be applied, and list special cases or narrower definitions later in the article. If you can add other definitions with references to the literature that will greatly improve the article. -- Miguel
- I'll attempt to do that, but first I must study how the Wikipedia does things. Is there a way to use standard html editors like Netscape Composer or Mozilla's composer to create Wikipedia pages? Stephen Tashiro
- Just go ahead and edit the page. Under the editing window there is an editing help link you can follow to get instructions on how to format the content and include mathematics. And remember the wikipedia policy: be bold in updating pages. -- Miguel
I can't yet expound upon the QM approach to stochastic processes. Below is what I propose for the traditional approach. If I put it all in at the top of the page, I get a warning about a 33 Kb limit, so I didn't edit the entry yet. Tashiro
- Wow, that's pretty impressive. Why don't you add the content to the article a bit at a time? That way you will get around the 33Kb limit, and if you use sectioning (see the editing help about this) appropriately you will never hit the limit. Also, the rest of us will be able to work on your edits as you add them to the article more easily than if you added the whole thing at once. -- Miguel
OK, I pasted the draft into the beginning of the article. I confess that I don't know html or the wikipedia editing conventions very well. My method is to compose the pages with Mozilla, view source, cut and paste into the Wiki editing window. Some spurious blank lines have been introduced and people who use the Wiki editor may not like the symbols introduced by Mozilla. Is there a consensus about whether using html editors to compose Wiki pages is a good or bad thing? Tashiro
- You really have to read the Editing Help. The point of wiki is that no HTML is necessary to create a page, you just write text and add very little wiki-specific markup.
- I have moved your discussion to the end of the article because it does not really have the structure of an Encyclopedia article, although there is very good content in it. I will work on merging your discussion into the existing content. The first thing I'll do is section your discussion. -- Miguel
Modified Definition
I don't find the given definition very rigorous - it is more of a description than a definition. Here's the beginnings of a new version. I still need to do some work on this as it tails off a bit towards the end...
- A one dimensional stochastic process consists of a set <math>S</math> of random variables together with a bijective map <math>\eta:I\to S</math> assigning one of these random variables to each member of an index set <math>I</math>. All of the random variables share the same domain (a probability space <math>(\Omega,P)</math>) and the same codomain <math>D</math> (a measurable space). Thus each point <math>\omega\in\Omega</math> corresponds to a value for each random variable and hence a function <math>f_\omega : I\to D</math>, known as a realisation of the stochastic process.
- Technically a stochastic process is not itself a function; only a particular realisation <math>f_\omega</math> of it, or the mapping <math>\eta</math>, can be described as such. Despite this, the term random function and notations such as <math>\eta(t)</math> are convenient abbreviations.
- Stochastic processes can then be discrete or continuous: a discrete stochastic process is an indexed collection of random variables
- <math>S_i : \Omega \to D,</math>
- where i runs over the countable index set I, while a continuous stochastic process is an uncountably infinite set of random variables, with an uncountable index set.
- A particular stochastic process is determined by specifying the joint probability distributions of the various random variables f(x).
Comments and opinions please...! SgtThroat 15:21, 12 Dec 2004 (UTC)
- In an encyclopedia article, clarity should come first and rigor later. Michael Hardy 19:56, 12 Dec 2004 (UTC)
I quite agree, but in a mathematical article such as this anything that claims to be a definition should be rigorous; compare for example the articles on rings or sigma-algebras, both of which start off with a clear definition. I suppose that what I am really saying is that I find the definition given unclear as well as un-rigorous, perhaps with the first as a consequence of the second. As an alternative suggestion, how about calling the 'definition' a 'description' (although I still feel that it needs work), then placing a proper definition later on..? SgtThroat 13:37, 13 Dec 2004 (UTC)
- But should one always begin with a definition? Sometimes clarity might be better served by beginning with an intuitive description and putting the rigorous definition after some discussion that enables the reader to understand the definition. Michael Hardy 22:33, 13 Dec 2004 (UTC)
Once again I was not precise enough with my language - I meant 'further down' when I wrote 'later on', so I think we are in agreement. Thus I propose putting the definition further down. This necessitates renaming the existing 'definition' (which I don't think is appropriate anyway) - any suggestions? Also I'd like comment on the definition above - is it completely correct?SgtThroat 01:01, 14 Dec 2004 (UTC)
I have now changed the "definition" section to "Common stochastic processes" and removed some of what I felt was rather confusing language. I've put in an example, probably in the wrong place, which I hope gives a feel for the index set and the domain of the random variables. I plan to rework the above "modified definition" to remove some stuff that is probably redundant due to overlap with the reworked first section of the article, then include it towards the end of the article. I'd like to hear opinions and comments on both parts. SgtThroat 20:30, 15 Dec 2004 (UTC)
- Moved the Brownian motion example into the following section, which I renamed "examples". I then removed the empty examples section that was already there. Forgot to add an edit summary... SgtThroat 20:51, 15 Dec 2004 (UTC)
The last section
The last section is too long. How about spinning it off as a separate article? -wshun 03:07, 14 Nov 2003 (UTC)
I'm putting it here for the time being. This will allow us to fix the formatting problems, and also to move content gradually from here to the main text. I don't think a new article is necessary, but maybe the need for it will become apparent in the process. -- Miguel 17:17, 14 Nov 2003 (UTC)
Informal Discussion
Several different definitions are found for "stochastic process" in the mathematical literature. This is not particularly scandalous. Some mathematical definitions describe things which have many specific properties that are crucial in writing mathematical proofs. (For an example of such a definition, see the entry for vector space in the Wikipedia.) Other mathematical definitions do not provide much specific information. The traditional definitions of "stochastic process" fall in this latter category. Even a book whose subject is "stochastic processes" may treat the definition rather casually. Such texts choose to add details later by defining special cases of the general concept. In order to explain the distinctions among the various definitions of "stochastic process" it is best to begin with an informal discussion.
One purpose of defining the term "stochastic process" is to create terminology that is broad enough to describe a random phenomenon that produces an infinite amount of data each time it occurs. Another goal is to have terminology that is narrow enough to describe situations where the data are, in a manner of speaking, all of the same format (e.g. they might all be prices in dollars, or all measurements of air pressure in millibars). We give some examples of physical situations that can be viewed as stochastic processes.
Example 1: Consider the air pressures in millibars at the local airport from 6:00 AM to 7:00 AM. Assuming that time is a continuum, there are an infinite number of times between 6:00 AM and 7:00 AM. A single occurrence ("realization") of the process is the infinite set of pressures that occurs on a particular day. Each of these pressures is a datum that is in the same format as the others.
Example 2: Suppose we hand out 8.5" by 11" sheets of white paper to each member of an audience and ask them to draw a picture. Let us take the simplistic view that the underlying process that creates the picture is the same for each member of the audience. Each picture we receive is a realization of this random phenomenon. Each picture contains an infinite amount of data if we take the idea of space as a continuum seriously. At each location (x,y) in the picture there is a certain color value that is part of the realized data, and there are an infinite number of such locations (x,y) on a sheet of 8.5" by 11" paper. To describe a color we may need more than a single number. Suppose the color data at location (x,y) is given by a triplet of numbers (r,g,b) that measure the red, green and blue intensities. Since the data have this form at all locations, we can think of them as being "all of the same format".
In practical applications, a theoretical infinity of data is often approximated by a large but finite amount of data. The concept of "stochastic process" includes those cases where the amount of data produced is finite. Suppose that instead of the continuous readings of example 1, the sample of pressure readings is recorded by a device that records one pressure reading per second. Then each realization (recording for an hour) produces 3600 readings. We may consider this situation to be a stochastic process. (We might also consider it to be a multivariate random variable with 3600 components.) Likewise the images in example 2 might be recorded with a scanner. This would produce a file with (r,g,b) data for a large but finite number of pixels.
It is important to understand that the same practical problem can be described in mathematics in different ways. Mathematics itself does not specify a unique translation of a physical situation into mathematical terms. We give some examples of such ambiguity.
Example 3: Suppose we measure the height of a randomly selected person. We may think of this as a process that produces a single datum, the person's height, each time we perform it. The most common mathematical treatment of this situation is to view it as a realization of a random variable. It is also possible to view this as a stochastic process that produces 1 datum on each realization. However it is not customary to do this.
Example 4: Suppose we perform an experiment where we measure the weight, height and temperature of a randomly selected person. This is usually viewed as a realization of a multivariate random variable which has three components. It is also possible to view this as a stochastic process that produces 1 datum, the triplet (weight, height, temperature), on each realization. However it is not customary to do this.
Some authors [Neftci] view a stochastic process as a "random function" in the following manner. A stochastic process is considered to be a function of two variables f(t,w). The first variable t describes which datum we wish to examine from all the data produced by one realization of the process. The second variable w represents which specific realization of the process occurred.
In example 1, we may think of w as a datum that specifies information such as "January 3, 2002, 6:00 AM to 7:00 AM". The variable t would be used to indicate a specific time. For example, one possible value of t is 6:03 AM. From this point of view, the realization of the process consists of picking a specific value of w. Then, with w being fixed, the function f(t,w) becomes a function of t alone. So a single realization of the process is a specific function of t.
To view example 2 as a "random function", we consider it to be f(s,w). We let s be a vector of numbers (x,y) that give a location on the picture. We let w be a datum that describes a specific picture, such as "picture by Wilbur Semismith, completed January 3, 2003 9:42 AM". We consider f(s,w) to be a vector valued function whose range is the set of 3 dimensional vectors that give the (r,g,b) data.
The values of most of the variables in the above examples are familiar mathematical quantities, such as numbers or vectors. But the reader may have difficulty conceptualizing the nature of the variable w and stating exactly what possible values it may have. The possible values of w are taken from a probability space. Roughly speaking, the "probability space" refers to three things: 1) a set of things that we may think of as "primitive" events, 2) a collection of subsets of the primitive events and 3) a function or algorithm that is able to assign a probability to each of these subsets. In the simple examples above, we can only give a partial description of the probability space. In example 1, the primitive events can be described as "all possible 6 to 7 AM time periods at the airport". In example 2, the primitive events can be described as "all possible pictures that people might draw on 8.5" by 11" paper". These descriptions dodge the question of which subsets of primitive events can be assigned a probability and how this might be done. A primitive event in the probability space should determine all the values of the random phenomenon. For instance, in example 1 it would not be correct to say that a pressure reading of 1013.25 millibars at 6:03 AM is a primitive event: giving the air pressure at a single time would not determine it at the others. In example 2, a primitive event would not be "all the colors a person might draw at some location on the paper" or "all the images a person might draw in the upper left hand corner of the picture". A primitive event should determine the whole picture.
In practical applications of stochastic processes, there is often a quantitative description of the probability space. For example, one may assume a specific formula or algorithm generates the occurrences of the process. The algorithm will usually involve taking realizations of random variables and doing certain computations with the results to arrive at the realization of the stochastic process. In such a case, the primitive events are the set of all possible realizations of the random variables employed by the algorithm. The probabilities involved are computed from the joint distribution of these random variables.
Example 5: For the sake of having a simple example, assume that nature generates the air pressures of example 1 according to the following scheme. Pick two air pressures in millibars by independently selecting two random numbers p0 and p1 from a uniform probability distribution on the interval -0.2 to 0.2. Let the resulting pressure readings be given by a pressure-vs-time graph that is the straight line connecting the points (6:00 AM, 1013.25 + p0) and (7:00 AM, 1013.25 + p1).
We may view the stochastic process of example 5 as a function f(t,w) where w is the vector (p0,p1). A primitive event is the selection of specific values for p0 and p1. The probabilities of various subsets of primitive events can be computed from the joint distribution of (p0,p1), which is the product of two uniform distributions since we have assumed p0 and p1 are independent. For example, we can compute the probability of the subset "p0 > 0.0 and p1 > 0.1".
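For readers who like to see things concretely, the "random function" f(t,w) of example 5 can be sketched in a few lines of code. This is only an illustration; the function names are made up for this sketch, and time t is measured in hours past 6:00 AM (so 0 &le; t &le; 1).

```python
import random

# A primitive event w is the pair (p0, p1), drawn independently and
# uniformly from [-0.2, 0.2].  Realizing the process means picking w.
def draw_primitive_event():
    p0 = random.uniform(-0.2, 0.2)
    p1 = random.uniform(-0.2, 0.2)
    return (p0, p1)

# The random function f(t, w): the pressure graph is the straight line
# joining (0, 1013.25 + p0) to (1, 1013.25 + p1).
def f(t, w):
    p0, p1 = w
    return (1013.25 + p0) * (1 - t) + (1013.25 + p1) * t

w = draw_primitive_event()
# With w fixed, f(., w) is an ordinary function of t: one realization.
pressure_at_630 = f(0.5, w)   # the datum indexed by t = 6:30 AM
```

Picking w once and then varying t reproduces exactly the two-variable picture described above: a realization is the function of t obtained by freezing w.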
Many authors ([Doob], [Iyanaga and Kawada], [Karlin and Taylor], [Parzen]) define a stochastic process as an "indexed collection of random variables". The idea of "random function" can be reconciled with this viewpoint. If f(t,w) is a "random function", the variable t is viewed as an index and the possible values of t are viewed as an "index set". One may think of t as providing a way to index a specific datum in all the data that have been produced by one realization of the stochastic process.
In example 1, t is a time and the index set is the set of all times between 6:00 AM and 7:00 AM. (One should not assume that an "index set" must be a set of integers. Indexing via a set of integers is often used in computer programming, as when we index an array A by integers in order to refer to A[1], A[2], etc. However the concept of "index set" in a stochastic process is more general than this. The "index set" can be any set at all.) The pressure at a specific time, such as 6:03 AM, can be viewed as a single random variable since the pressure at this time will be different on different days. We think of the stochastic process as an infinite family of random variables X(t) that are indexed by the time t. A random variable has a domain and a range. A realization of a variable like X(6:03 AM) is a single real number, so we may say its range is the set of numbers that are possible pressure readings. The domain of the random variable X(6:03 AM) is not the set of times, even though the notation X(t) makes it appear this way. The domain of X(6:03 AM) is the probability space for the phenomenon. In this example, a point of the domain is a datum that describes a specific date at the airport, such as "January 3, 2002, 6:00 AM to 7:00 AM".
In example 2, once the random picture is realized, we can refer to a single datum by giving its location as a 2 dimensional vector s = (x,y). The set of all locations on the image is the "index set". (The index set in this example does not consist of integers. A location such as (1.345, 4.019) is considered to be an index.) The color at each location may be viewed as a random variable X(s). The range of each random variable X(s) is the set of all 3 dimensional vectors of color information (r,g,b). A point in the domain of each X(s) is a datum that describes a specific picture, such as "picture by Wilbur Semismith, completed January 3, 2003 9:42 AM".
Another way of looking at example 2 is to let the index set be the set of all (x,y,k) where (x,y) defines a location on the image and k is 1, 2 or 3 depending on whether we wish to index the red, green or blue datum. Then an individual random variable X(s), with s = (x,y,k), has a range that is the set of real numbers that describe a color intensity (instead of the set of 3 dimensional vectors of such numbers). This somewhat goes against the idea of having each datum in the range of the random variables be "all of the same format", since we might not consider "red color intensity" information to have the same physical meaning as "blue color intensity". However, if we decide to think of all these data as random variables whose range is the set of real numbers, then we may do this.
Example 5 can be interpreted as an "indexed" collection of random variables in the same way as example 1, except that the primitive events in the probability space are given by the set of possible values for p0 and p1. (Assigning specific values to p0 and p1 determines all the pressure readings.) This is the domain of each X(t). The range of X(t) is the set of numbers that give a single pressure reading.
In the "indexed collection of random variables" view of a stochastic process, all the variables have the same domain and range. The fact that they have the same range implements the idea that the phenomenon produces data that are "all of the same format". The fact that they have the same domain indicates that they are all realized when a single primitive event in the probability space is realized. For instance, in example 1, X(6:00 AM) and X(6:30 AM) denote the pressure readings at two different times. A realization of the process consists of picking a particular day at the airport. The realization of X(6:00 AM) and X(6:30 AM) gives the pressure readings at those times on that one date. We do not think of a realization of the process as measuring a value for X(6:00 AM) on one day and then picking a different day to measure the value of X(6:30 AM).
It is not correct to think of each variable in the "indexed collection of random variables" as necessarily being the "same" random variable realized over and over again. Two random variables in the collection must be "the same" only in two respects: they must have the same range and the same domain. However they need not be independent of each other. In example 5, the measurement X(6:00 AM) is completely determined by the choice of a value for p0. However the measurement X(6:30 AM) depends on the choices of both p0 and p1. The measurement X(6:00 AM) can be near the maximum pressure of 1013.25 + 0.2 if p0 is near its maximum of 0.2. But the measurement X(6:30 AM) cannot be near its maximum unless both p0 and p1 are near the maximum (i.e. unless the linear graph is high at both ends). This suggests that X(6:30 AM) is not independent of X(6:00 AM), since both depend on p0. It also suggests that their marginal probability distributions are not the same. To write the formula for the joint distribution of X(6:00 AM) and X(6:30 AM) is not a simple task, even for a person experienced in probability theory. However a reader who is familiar with computer programming should be able to write a Monte Carlo simulation of this example and investigate the dependence of the two measurements.
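Such a Monte Carlo simulation might look like the following sketch (helper names are made up; it uses the fact that, for the straight-line graph of example 5, X(6:00 AM) = 1013.25 + p0 and the midpoint gives X(6:30 AM) = 1013.25 + (p0 + p1)/2).

```python
import random

def simulate(n, seed=0):
    """Draw n independent realizations of the example-5 process and
    return the lists of X(6:00 AM) and X(6:30 AM) values."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        p0 = rng.uniform(-0.2, 0.2)
        p1 = rng.uniform(-0.2, 0.2)
        xs.append(1013.25 + p0)             # X(6:00 AM)
        ys.append(1013.25 + (p0 + p1) / 2)  # X(6:30 AM)
    return xs, ys

def correlation(xs, ys):
    """Sample correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

xs, ys = simulate(100_000)
print(correlation(xs, ys))
```

Under these assumptions one can check by hand that the correlation is 1/&radic;2 &asymp; 0.707 (Cov = Var(p0)/2 and Var(X(6:30 AM)) = Var(p0)/2), and the simulation output should land close to that value, confirming the dependence discussed above.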
Consider an attempt to model the process of example 2 by using a single random variable. Suppose we scan a large number of sample pictures into pixels. Then we create a histogram of how often each (r,g,b) datum occurred among these pixels. To realize a random picture, we randomly select an (r,g,b) value for each pixel from this histogram, according to the frequency with which the various (r,g,b) vectors occur. Most of the images that we would create this way would be a cloudy mess. They would lack the images of people, houses, flowers and dogs that appear in pictures drawn by human beings. Stochastic processes like example 2, whose realizations typically contain a high degree of organization and structure, are poorly approximated by making each X(t) an independent realization of the same random variable.
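The single-random-variable model criticized above can be sketched as follows. This is a toy illustration with made-up sample data; a real experiment would build the histogram from scanned pictures.

```python
import random

# Hypothetical (r, g, b) values harvested from "sample pictures".
# Repetitions encode the histogram: a value that occurs twice is
# twice as likely to be drawn.
sample_pixels = [(255, 0, 0), (255, 0, 0), (0, 0, 255), (10, 200, 10)]

def iid_image(width, height, pixels, seed=0):
    """Fill an image by drawing each pixel independently from the
    empirical histogram of sample pixel values."""
    rng = random.Random(seed)
    return [[rng.choice(pixels) for _ in range(width)]
            for _ in range(height)]

img = iid_image(8, 8, sample_pixels)
# Neighbouring pixels are independent, so no large-scale structure
# (faces, houses, ...) can emerge: the result is statistical noise.
```

Because each pixel is drawn independently, the generated image reproduces the color frequencies of the samples but none of their spatial organization, which is exactly the "cloudy mess" described above.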
The following situation is discussed in most probability texts. Certain aspects of it are misleading if the reader erroneously assumes they apply to all stochastic processes.
Example 6: Consider tossing a coin and suppose the probability it lands heads is a known probability p. (The example of tossing a thumbtack is also used by authors who wish to make it clear that p need not be 0.5.) A single toss of a coin is usually viewed as a realization of a random variable. A fixed number of tosses of a coin can be viewed as a multivariate random variable. Suppose we wish to consider a question such as "What is the probability that we must make more than 30 tosses before getting the first occurrence of 'heads'?" Then we must consider the situation where a coin is tossed over and over again an unlimited number of times. The usual way to view this is to consider the repeated tossing of the coin to be a stochastic process. Each toss is a datum that is either "heads" or "tails". A realization of the process is one particular infinite sequence of such data.
We may think of example 6 as a random function f(t,w). The variable t takes on the integer values 1, 2, 3, ... depending on which toss in the infinite sequence of tosses we wish to examine. The variable w must be a primitive event in the probability space. Since an infinite sequence of coin tosses is a conceptual experiment rather than an actual one, we don't define a primitive event in the probability space to be an event like "Tosses begun by Lula Mumshelter on January 3, 2002 9:42 AM". The customary way to define the primitive events for coin tossing is to say that they form the set S of all possible infinite sequences of the form {r1,r2,r3,...} where each ri is either "heads" or "tails". Notice that the general concept of a stochastic process has no requirement that a realization of the process contain exactly the same information as a primitive event; this is a special feature of example 6. In coin tossing both w and f(t,w) describe the results of a particular infinite sequence of coin tosses. To meet all the requirements of a probability space we must be able to compute probabilities. The technical details will not be given here. Probability texts show how to compute the probabilities of various subsets of primitive events. For example, texts show how to answer questions like "What is the probability that there are at least 30 tails before the first head?" This is the probability of the subset of S consisting of all sequences whose first 30 terms are "tails" and whose remaining terms contain at least one "head".
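As a sketch of that computation: since the tosses are independent, the event "at least 30 tails before the first head" means the first 30 tosses all land tails, which has probability (1 - p)^30.

```python
# Probability that the first n tosses are all tails, for a coin
# with P(heads) = p and independent tosses.
def prob_tails_before_first_head(p, n):
    return (1 - p) ** n

print(prob_tails_before_first_head(0.5, 30))  # about 9.3e-10 for a fair coin
```

For a fair coin this is 2^-30, which is vanishingly small; for a strongly biased thumbtack with p = 0.1 it is (0.9)^30, roughly 0.042.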
If we think of example 6 as an "indexed collection of random variables" then the index set can be defined as the set of integers {1,2,3,...}. As mentioned above, the index set for a process need not be a set of integers, but in this particular example it is. We can define the random variable X(k) to be the result of the kth toss of the coin. The range of each random variable X(k) is the set of two things {heads, tails}. The domain of each of these random variables is the set S of infinite sequences of heads or tails. (The domain is not merely the set of two things {heads, tails}. Remember that an event in the domain must determine the entire realization of the random process, which is all the data for the entire collection of random variables.) As remarked above, this example is unusual in that one realization of the indexed collection of random variables can be identified with the primitive event w in the probability space.
In example 6, a reader may think of the process as "realizing the same random variable over and over again". As we pointed out earlier, this is not a correct view of the general stochastic process. However, in example 6, random variables such as X(1) and X(2) are independent, and they do have the same marginal distributions. (The phrase "marginal distribution" is required if we wish to express the idea that the probability that X(1) is "heads" is the same as the probability that X(2) is "heads". To assign probabilities to the events "heads" and "tails" we need a distribution whose domain is the set of two events {heads, tails}. As remarked above, this set of two events is not the domain of the X(i). The marginal distribution of X(1) over S is used to find the probability that X(1) is heads while the other X(i) take any values whatsoever. So we may say the marginal distribution of X(1) has domain {heads, tails}. The marginal distribution of X(2) has the same domain. Saying that the marginal distributions are the same correctly expresses the idea.)
If the reader wishes to be reminded of the general definition of a "stochastic process" by memorizing only one example, it would be best not to choose example 6. Coin tossing has many features that are not typical of more general stochastic processes.
The mathematical literature contains variations of the definitions that we have sketched above. Many authors (e.g. [Doob], [Parzen]) do not explicitly say that each member of the "indexed collection of random variables" must have the same probability space. However, in studying specific stochastic processes they make additional definitions that do require this. Some authors (e.g. [Gardiner]) say the index set must represent time. Some authors say that the random variables must be real valued (e.g. [Iyanaga and Kawada]). The definition we will give is consistent with the above informal discussions.
Definition
Let P be a probability space consisting of (S,E,m) where S is a set, E is a sigma algebra of subsets of S and m is a probability measure defined on E. Let R be a set. Let X be a collection of random variables indexed by some index set T and having the property that each X(t) in X is a random variable whose domain is the probability space P and whose range is R. Then X is a stochastic process on P with index set T.
A stochastic process whose index set represents time is called a Time Series. The use of the word "Series" does not imply that the data must necessarily be indexed by integers. Both the case where T is a set of integers and the case where T is the set of real numbers are studied. Example 1 can be regarded as a Time Series.
The term Random Field is often used as a synonym for "stochastic process", especially when an author wishes to emphasize that the index set T need not represent time. The word "Field" does not necessarily imply that a Random Field represents something like an electric or magnetic field, although there are many publications that do apply the theory of stochastic processes to such topics. Example 2 can be regarded as a random field.
You might want to remove stock markets and heart rate from the list of processes. They show characteristics of chaotic systems. See the article "A Multifractal Walk down Wall Street", February 1999, by Mandelbrot. I have also heard that one reason people like Bach's Brandenburg concertos is that they are fractal and even mimic the heart rate. --66.44.104.246 12:57, 3 Aug 2004 (UTC)
The algebraic approach
The section titled "The Algebraic Approach" seems to contain an attempt to axiomatically define random variables (only for the complex valued case) and expectations for these. This is largely unrelated to the topic of the page, i.e. to "stochastic processes".
Also the paragraph claims "One of the important features of the algebraic approach is that apparently infinite-dimensional probability distributions are not harder to formalize than finite-dimensional ones." Can anybody make sense of this? The approach given seems to be only for one-dimensional random variables. And what might be the meaning of "apparently infinite-dimensional"?
My suggestion would be to simply remove this section titled "The Algebraic Approach". --Jochen 00:45, 28 Nov 2004 (UTC)
Remove it, as it is half-cooked. However, this is what's going on. There are two approaches to measure theory and integration. In the geometric approach, one defines first what is meant by measurable sets and measures on them, and then uses measures to define integrals and integrable functions. In the algebraic approach, one starts by defining an algebra of integrable functions and the integral as a positive linear functional on that algebra. Then, measurable sets are those whose characteristic function is integrable and their measure is the integral of their characteristic function. The geometric approach gets harder and harder when vector measures on high-dimensional measurable sets are involved, while the algebraic approach is no harder for Banach-space valued measures on infinite-dimensional spaces than it is for real random variables on [0,1].
Kolmogorov's axioms for probability theory are analogous to the geometric approach to measure theory, and the Kolmogorov extension theorem constructs a sigma-algebra of measurable sets and a probability measure on it given the finite-dimensional distributions of a stochastic process. There is an alternative algebraic axiomatization of probability theory in terms of algebras, and an alternative "extension theorem" within that framework.
But, like I said, the current section is half-baked, so it would be fair to remove it.
— Miguel 02:02, 2004 Nov 28 (UTC)
- I have moved that material to algebra of random variables and created links from five other pages to that page. Michael Hardy 02:29, 28 Nov 2004 (UTC)