Talk:Bayesian probability
"Bayesian probability is also known as subjective probability, personal probability, or epistemic probability." Is this true? I'm no expert on this, but I thought subjective probability was more or less the "feel" for how probable something was, that personal probability was a very close synonym of subjective probability, and epistemic probability was, well, something different yet: the degree to which a person's belief is probably true given the total evidence that the person has. Again, I could easily be wrong, but wouldn't it be an at least somewhat controversial theory about subjective, personal, and epistemic probability to say that they are each reducible to Bayesian probability? --LMS
- The way you describe it, they all seem like the same thing. Cox's theorem suggests that either they must be the same thing or that one or more of the theories contains inconsistencies or violates "common sense".
They are not the same thing. The mathematics involved is the same thing according to what Richard Cox is saying, but what the mathematics is applied to is the difference. Mathematicians like to think probability theory is only about mathematics and nothing else, but that's just the narrowness of mathematicians. (Shameless plug: See my paper on pages 243-292 of the August 2002 issue of Advances in Applied Mathematics. I assert there that I think Cox's assumptions are too strong, although I don't really say why. I do say what I would replace them with.) -- Mike Hardy
I agree with what Larry Sanger said above. The article was wrong to say the following (which I will alter once I've figured out what to replace it with): "The term 'subjective probability' arises since people with different 'prior information' could apply the theory correctly and obtain different probabilities for any particular statement. However given a particular set of prior information, the theory will always lead to the same conclusions." That people with the SAME prior information could assign DIFFERENT probabilities is what subjectivity suggests; if people with the same prior information were always led to the same epistemic probability assignments when they correctly applied the theory, then those would be epistemically OBJECTIVE assessments of probability. They would not be "objective" in the sense in which that word unfortunately gets used most often; i.e., the probabilities would not be "in the object"; they would not be relative frequencies of successes in independent trials, nor proportions of populations, etc. I distinguish between "logical" epistemic probabilities, which are epistemically objectively assigned (and nobody has any idea how to do that except in very simple cases) and "subjective" epistemic probabilities, which measure how sure someone is of something --- the "feel", as Larry puts it. I have been known to say in public that the words "random" and "event" should be banished from the theory of probability, but we're getting into gigantic Augean stables when we say that. Michael Hardy 22:27 Jan 18, 2003 (UTC)
- It's hard to overstate the difficulties in understanding the foundations of probability. A quick web search turns up [1] (http://cepa.newschool.edu/het/essays/uncert/subjective.htm) which mentions several schools of thought in a few paragraphs, inevitably including several different schools within the "subjective probability" camp! User:(
It's not strictly correct to say that no one knows how to assign logical probabilities in non-trivial cases. For example, the empirical fact of Benford's Law (http://mathworld.wolfram.com/BenfordsLaw.html) can be logically explained using scale invariance, which is a special case of E.T. Jaynes (http://bayes.wustl.edu/etj/etj.html)'s Principle of transformation groups. Another non-trivial case which can be solved with this principle is Bertrand's Problem (http://mathworld.wolfram.com/BertrandsProblem.html). Jaynes's articles are available here (http://bayes.wustl.edu/etj/node1.html). Cyan 22:42 Mar 27, 2003 (UTC)
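To make the scale-invariance argument concrete, here is a minimal Python sketch (my illustration, not Jaynes's own derivation; the decade count and sample size are arbitrary): scale invariance forces a density proportional to 1/x, i.e. log10(x) uniform over many decades, and the leading digit of such samples follows Benford's Law, P(d) = log10(1 + 1/d).

```python
import math
import random

# Minimal sketch: scale invariance implies a density proportional to 1/x,
# i.e. log10(x) uniform over several decades (6 here, chosen arbitrarily).
random.seed(0)
N = 100_000
samples = [10 ** random.uniform(0, 6) for _ in range(N)]

# Tally leading (first significant) digits.
counts = [0] * 10
for x in samples:
    counts[int(f"{x:.6e}"[0])] += 1

# Compare the empirical frequencies with Benford's Law, P(d) = log10(1 + 1/d).
for d in range(1, 10):
    print(f"d={d}: empirical {counts[d] / N:.4f}, Benford {math.log10(1 + 1 / d):.4f}")
```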
Jaynes' arguments are very interesting, but I don't think they have yet reached the stage of proof-beyond-all-reasonable-doubt, and there's a lot of work to be done before many statisticians can reliably apply them in typical applied statistics problems. Michael Hardy 00:31 Mar 28, 2003 (UTC)
But you must admit that while Jaynes' position is not complete, it has wider scope and greater consistency than the frequentist approach (a muddle of ad hoc methods)? For me, Jaynes' recent book makes this case, and does it by focusing on comparison of results (I've added an ext-link to it on his page). (I fear that you may be interpreted as implying 'even the best cars can get stuck in the mud - so you should always walk...'. While researchers rightly focus on what is left to be added, someone reading an encyclopedia is looking to learn what is already established.) 193.116.20.220 16:58 16 May 2003 (UTC)
- It is also at times an attempt to describe the scientific method of starting with an initial set of beliefs about the relative plausibility of various hypotheses, collecting new information (for example by conducting an experiment), and adjusting the original set of beliefs in the light of the new information to produce a more refined set of beliefs on the plausibility of the different hypotheses.
This sentence and the paragraph "Applications of Bayesian Probability" should be removed or moved to Bayes' theorem. In order to avoid confusion it is crucial to distinguish the philosophical interpretation of probability from the mathematical formula developed by Bayes. These are not the same; they are often confused with one another, and the current version of the article makes it easy to get it wrong. Bayes' theorem is a mathematical formula whose truth cannot be reasonably disputed. Bayesian probability is an interpretation of a mathematical construct (probability), and it was the subject of significant dispute in the past. I suggest we discuss the philosophy and the historical dispute in this article, and the math with its applications in Bayes' theorem. 134.155.53.23 14:28, 23 Dec 2003 (UTC)
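For reference, the formula whose truth no one disputes is Bayes' theorem itself; the interpretive question is only what the symbols are allowed to stand for (on the Bayesian reading, H is a hypothesis and E is evidence):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$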
Crow Paradox
Isn't this related to the All Crows are Black Paradox?
- Hempel's paradox says that "All Crows are Black" (uncontroversial) logically implies "All not-Crows are not-Black" (manifestly untrue). What's your point? 217.42.117.73 14:46, 6 Mar 2005 (UTC)
- No, it's equivalent to "all non-black things are not crows". Banno 19:44, Mar 6, 2005 (UTC)
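For anyone who wants to check Banno's correction mechanically, here is a small Python truth-table sketch (illustrative only): "all crows are black" is logically equivalent to its contrapositive "all non-black things are non-crows", but not to its inverse "all non-crows are non-black".

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q
    return (not p) or q

for crow, black in product([False, True], repeat=2):
    original = implies(crow, black)                # all crows are black
    contrapositive = implies(not black, not crow)  # all non-black things are non-crows
    inverse = implies(not crow, not black)         # all non-crows are non-black
    assert original == contrapositive              # always holds
    print(f"crow={crow!s:5} black={black!s:5} original={original!s:5} inverse={inverse}")
```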
Observation and question
I don't know how or if this can be incorporated, but it's been my experience from comparison of frequentist multiple-range tests (e.g., Ryan-Einot-Gabriel-Welsch) with a Bayesian test (e.g., Waller-Duncan) that the former are more subject to perturbation by the overall distribution of the dataset. Specifically, if one mean is of much greater magnitude than all the other means, the frequentist test will assign the extreme mean to one group while assigning all other means to a second group, no matter how much difference there may be among the remaining means and no matter how tightly each treatment's results cluster around their respective means. The Bayesian test, on the other hand, does not do this. While none of us poor molecular biologists in our lab have our heads around the math, the Bayesian outcome "smells better" given our overall understanding of the specific system, developed over multiple experiments. Since we're cautious, we just report both analyses in our papers. Dogface 04:07, 13 Oct 2004 (UTC)
problem of information
I just added a section about the (well-known) problem of conveying information with probabilities; I think it should be better integrated with the rest, but I am not an expert in the rest. :) Samohyl Jan 19:59, 11 Feb 2005 (UTC)
- Looks interesting, but can it be revised to either use "information" in the technical sense, or some other word if not? If possible, we should be able to drop the disclaimer that the use of "information" here is not a strict mathematical one. Regards & happy editing, Wile E. Heresiarch 00:37, 12 Feb 2005 (UTC)
- You're right. I have changed "information" to "evidence", and it looks much better now. Samohyl Jan 10:16, 12 Feb 2005 (UTC)
Disputed
I would dispute the following:
This can lead to paradox situations such as false positives in bayesian inference, where your prior probability can be based on more evidence (contains more information) than your posterior probability, if the conditional probability is not based on enough evidence.
What evidence is there that paradoxes occur? User:Blaise 21:11, 2 Mar 2005 (UTC)
- You are right; I thought there was a connection, but there isn't, so I will remove the sentence. Thanks for the correction. But an example where this is a problem would be good to have, if there is one. Samohyl Jan 19:54, 3 Mar 2005 (UTC)
Another dispute
In the history section, this apparent paraphrasing of Laplace bugs me:
- 'It is a bet of 11000 to 1 that the error in this result is not within 1/100th of its value'
Am I reading this wrong, or is that saying exactly the opposite of what is meant? Shouldn't "not within" be replaced by either "within" or "not more than" in this context? Or am I reading the stating of the odds backwards? I traced it all the way back to the edit (http://en.wikipedia.org/w/index.php?title=Bayesian_probability&diff=52675&oldid=52285) which added that whole section, so it's not just some random minor vandal sticking a "not" in there, at least. --John Owens (talk) 23:28, 2005 Mar 14 (UTC)
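For what it's worth, the odds themselves are unambiguous: a bet of 11000 to 1 corresponds to a probability of 11000/11001 ≈ 0.99991, so Laplace is claiming near-certainty for whichever event the sentence picks out. The question raised above is only whether that event is the error staying within 1/100th of the value or exceeding it.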
Need a better example
In the section titled Bayesian and Frequentist probability, the statement is:
- 'For example, Laplace estimated the mass of Saturn (described above) in this way. According to the frequency probability definition, however, the laws of probability are not applicable to this problem. This is because the mass of Saturn is a constant and not a random variable, therefore, it has no frequency distribution and so the laws of probability cannot be used.'
My reading of this is that the statement refers to the value for the mass arrived at experimentally, not the absolute mass of Saturn. The experimental results will have a frequency distribution, depending on the degree of error associated with the method and apparatus used. I don't have an alternative example, but I find this one less than ideal.
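To illustrate the Bayesian side of this contrast, here is a minimal Python sketch (the prior, noise level, and measurements are all made up, not Laplace's data): the mass is treated as a fixed but unknown constant, and the probability distribution describes our knowledge of it, which each noisy measurement refines.

```python
import math

# Made-up illustration of estimating a fixed constant (a Saturn-like mass,
# in solar masses). The constant has no frequency distribution; the Gaussian
# below represents our uncertainty about it, not variability in the mass.
prior_mean, prior_var = 2.9e-4, 4.0e-10    # assumed prior knowledge
noise_var = 1.0e-10                        # assumed measurement error variance
measurements = [2.86e-4, 2.84e-4, 2.87e-4] # hypothetical observations

mean, var = prior_mean, prior_var
for y in measurements:
    # Conjugate Gaussian update: precisions (inverse variances) add, and the
    # posterior mean is the precision-weighted average of prior and datum.
    precision = 1 / var + 1 / noise_var
    mean = (mean / var + y / noise_var) / precision
    var = 1 / precision

print(f"posterior mean {mean:.4e}, posterior sd {math.sqrt(var):.1e}")
```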