Talk:Doomsday argument
In view of the lively debate conducted by many smart people, we would be wrong to believe the simplistic view that the problem is just caused by an incorrect use of probability.
I would add: there are results of probability that are counter-intuitive, and yet correct. E.g. the Birthday paradox. The challenge with the doomsday argument is to find where it is incorrect, and we can't just say it is incorrect because it is counter-intuitive. User:Pcarbonn 18 Apr 2004
Actually we would be right to believe that the problem is caused by a simplistic use of probability. Take:
- Assuming the following (held out to be valid computations of probability):
- If you pick a number uniformly at random in a set of numbers from 1 to N, where N is unknown to you, and if we name that number j, then:
- Your best estimate of N is 2 × j,
- You can say with 95% confidence that N is between 40/39 × j and 40 × j.
2 × j is not your best estimate of N, it is an unbiased estimate - but a requirement to be unbiased often produces strange results. The maximum likelihood estimate of N is j, which makes the doomsday argument worse. Your best estimate of N will depend on your prior probability distribution for N (and your loss function); assuming an improper prior that all values of N are equally likely and even after seeing j, your expected value for N will be infinite.
Similarly you cannot usually say with 95% confidence that N is between 40/39 × j and 40 × j. You might be able to say that if N is not between 40/39 × j and 40 × j then the probability of seeing j or a more extreme value is less than 5%, but that is not the same; as a statement about N it confuses probability with likelihood. Again, any statement you want to make about N depends on your prior.--Henrygb 13:01, 28 May 2004 (UTC)
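For readers wondering where the 40/39 × j and 40 × j figures come from, here is a sketch of the usual derivation (treating j as continuous purely for simplicity). Given N, the central 95% of the uniform distribution of j satisfies
<math>P\left(\frac{N}{40} \le j \le \frac{39N}{40} \;\Big|\; N\right) = 0.95,</math>
and inverting the two inequalities gives 40/39 × j ≤ N ≤ 40 × j. As noted above, this is a statement about where j falls for a given N; turning it into a probability statement about N itself requires a prior.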
60 billion??
Some scholars have used an estimate of the running total of the human population (perhaps 60 billion people) - this is an incorrect figure. Something like 60-70% of people who have ever lived are still alive, putting the running total of humankind closer to around 10 billion. (If this figure seems shocking, remember that the human population has grown exponentially with a doubling period significantly shorter than a human lifespan.) Anyone got a better figure? Mat-C 19:40, 3 Jul 2004 (UTC)
- I don't have a better figure, but just following your logic (6 billion people now, having doubled every two generations of 25 years or so) backwards seems to put Adam and Eve at around 400 AD. Not even young-earth creationists have quite that time scale. The article Evolution of Homo sapiens suggests something of the order of 250,000-400,000 years ago for the first modern humans, and 2 million years for the first of the genus. --Henrygb 17:15, 20 Jul 2004 (UTC)
From _A Concise History of World Population_ by Massimo Livi-Bacci
[1] (http://www.amazon.com/exec/obidos/tg/detail/-/0631204555/002-6203819-5541644?v=glance)
total historical world population is:
Before 10,000 BC: 9.29 billion
10,000 BC to "0": 33.6 billion
"0" to AD 1750: 22.64 billion
AD 1750 to AD 1950: 10.42 billion
AD 1950 to AD 1990: 4.79 billion
Total: 80.74 billion
--80.188.210.180 13:08, 19 Aug 2004 (UTC)
Major rewrite
I have taken the liberty of a major rewrite of the page. I think I've presented the basic argument in the most straightforward way possible. A more formal version of the argument would have to be made using a bit of Bayesian analysis. I've taken the current cumulative human population to be 50 billion which is a compromise between Gott's 70 billion and the previous page's 20 billion figure. If people really don't think my version of the argument is any good then of course they're free to discard it (and replace it with a previous version if they want).
User:John Eastmond 4 Oct 2004
Added Many-Worlds section
This is my proposed solution to the Doomsday Argument - hope that's ok.
User:John Eastmond 16 Nov 2004
- Sorry, but I'm afraid it's not. Wikipedia is for writing about pre-existing knowledge, not for contributing original research or theories. Additionally, I find your argument unclear and unconvincing, but this isn't the issue that matters. Write up your theory somewhere else, and it will be mentioned if someone else finds it worthy of being in this encyclopedia. -- Schaefer 05:58, 19 Nov 2004 (UTC)
Fair enough
User:John Eastmond 19 Nov 2004
Taken out Singularity section
The only things Heinz von Foerster's argument has in common with Brandon Carter's Doomsday Argument are the words "Doomsday" and "Population" - nothing else. The Doomsday Argument is a probabilistic argument based on cumulative population whereas von Foerster's argument is based on an extrapolation of a particular model of population growth.
User:John Eastmond 30 Nov 2004
The Onion extrapolated the survival of human culture a couple of years ago. They calculated that the earliest date pop-culture is nostalgic about is 9.5 years ago, and that every passing year reduces that by about 4 months, so that the "world will run out of past" circa 2030. The Onion's 'singularity' is probably a lot more credible than von Foerster's, and a better comparison to the probabilistic DA. My comments in the next section (on grouping the von Foerster singularity with this in a category) were meant as a reply to John Eastmond's point here. Wragge 00:56, 2005 Apr 29 (UTC)
Dispute
IMO all info should stay. I only removed some references to the history of the singularity which seemed unnecessary (they should be restored to the singularity article if they are not there already). John Eastmond is not the only person to consider the particular interpretation he presents, and there is no reason to remove it. [[User:Sam Spade|Sam Spade Arb Com election]] 16:50, 30 Nov 2004 (UTC)
- The Singularity information could be made to complement the Doomsday argument page, but John Eastmond's point seems logical; it would be better to split tangentially related subjects across different pages (by name if possible). I don't know if there is a category like "Doomsday predictions (secular)" but if not it should be created (within Eschatology?) and used to group together this page and Heinz von Foerster's Singularity prediction page. Other proposed category members:
- technological singularity
- Doomsday clock
- mean-time-to-asteroid apocalypse calculation
- the Victorian calculation of the survival probability of any surname to infinity.
Furthermore, the page is too long now. I propose a [[Category:Doomsday argument refutations]] category, which will organize these and prevent this page getting too long to present a simple definition of "the Doomsday argument".
Wragge 10:05, 2005 Apr 22 (UTC)
Special Relativity and the Reference Problem
I've been thinking of putting the following paragraph in but I'm not sure about it. I'd be interested in any comments about it:-
There has always been the problem of which observers to include in the definition of "humans": the so-called Reference problem. Should we include just homo sapiens or should our definition include all "intelligent" observers together with any artificial intelligences we might create in the future?
Actually there is a more fundamental constraint on the definition of the class of observers arising from considerations of Special Relativity. The Doomsday argument asks us to consider our position within the chronologically ordered list of all human births. However the human births comprise a set of distinct "events" in spacetime. The order of these events along a "timeline" actually depends on the velocity of a particular observer's frame of reference. Thus different observers will have conflicting ideas about the chronological order of the birth events.
Perhaps the Doomsday argument can only be pursued in terms of the lifetime of the individual observer whose physical states form one continuous worldline of events. Such a worldline does have an invariant "proper" time associated with it. As each event is causally connected to the next in a chain of events there is no ambiguity about their chronological order.
Thus it seems that the only reference class that can be used is the set of days (say) that comprise the lifetime of the individual. When we wake in the morning our experience of "today" selects it from the set of N days that will comprise our life. As each morning awakening is equivalent to any other (apart from arbitrary details), each day has the same prior probability. Thus the prior probability of "today" is always 1/N. One thus deduces that N must be finite in order that the prior probability of today is non-zero. Have we proved that an individual's immortality is impossible? :)
User:John Eastmond 20 Jan 2005
Well maybe. Assuming that the argument is correct, N cannot be infinite but it can still be boundless (in the sense that whichever finite value you choose for N, I can choose a bigger one and we can repeat the process without limit). You might say that the argument would allow us to rule out the possibility of immortality but not the possibility of living forever, one day at a time. If that makes sense, <grin>. -- Derek Ross | Talk 15:43, 2005 Jan 20 (UTC)
What do you think about the special relativity objection to applying the argument to the human race? Apparently a set of birth events in spacetime can only be said to have a time-order if it is possible in principle to send a slower-than-light signal from each event to the next. But this is an unnatural constraint which need not be realised at all. One could imagine the human race colonizing the galaxy. It could easily be the case that a birth event on one side of the galaxy cannot be linked by a slower-than-light signal to a birth event on the other side of the galaxy (in other words each is outside the "light-cone" of the other). If the Doomsday argument is applicable to populations of observers then surely it should be applicable to all populations regardless of the spacetime positions of the individuals' birth events? The fact that it isn't seems to show that it can only be applied to an individual's set of life-events (that are naturally causally connected and thus time-ordered).
User:John Eastmond 21 Jan 2005
- I'll need to think about that, John. However note that this sounds suspiciously like original research and may therefore be irrelevant to this Wikipedia article. -- Derek Ross | Talk 05:46, 2005 Jan 24 (UTC)
You're right - these are original ideas and therefore should be published elsewhere.
User:John Eastmond 2 Feb 2005
Layman edition needed
I hate equations and did not understand the process. Could someone explain with metaphors, ideas, and examples instead of maths? The article should be amended accordingly. [[User:Vrykolaka|Reply to Vrykolaka]] 17:31, 24 Jan 2005 (UTC)
I think you've got a point there. I'm happy for anyone to rewrite the article using examples and thought-experiments.
User:John Eastmond 2 Feb 2005
I added a section with a simplified example. I realise the example comes from a refutation of the argument; however, it did help me to better understand the argument itself. If it does not do justice to the argument, please tell me.
Oops, sorry, forgot to sign! UnHoly 03:39, 14 Feb 2005 (UTC)
Why is N=# of humans?
There seem to be some people here who have studied this argument intensely, so I have a question. Why is it that N was chosen to be the number of humans? For example, I could make the same argument by using N as the number of years since humanity appeared, and I would get a different result because the number of humans who have ever lived is not linear in time.
In fact, I could also make the same argument using a totally arbitrary variable. For example, I could say that N is the total number of humans who ever lived, weighted by how close in time to the dawn of humanity they were born. Then present-day humans would contribute next to nothing to N.
My point is, prior to computing the probability, one should be convinced that the method will yield a probability that has meaning. In this case, there should be an argument that applying this formula yields an estimate for the lifetime of humanity, and that N should be taken to be the number of humans. I am not convinced.
UnHoly 01:05, 6 Feb 2005 (UTC)
I'm beginning to think that there is a problem with the argument when applied to the human race. It seems to make the tacit assumption that the eventual chronologically ordered list of all the human beings ever to live could have had a different order. This is not possible. If I swapped places with one of my ancestors so that I was born in his time and place and he in mine then I would *be* him and vice-versa - there would be no change. If one cannot consider such permutations as possible then I think that one cannot argue that one is equally likely to find oneself at any particular position n.
Normally one would say that, given that there will be N humans, there are N! (i.e. N × (N-1) × (N-2) × ... × 3 × 2 × 1) ways that such a list of humans could be ordered. If we assume that we are at any position n then there are (N-1)! ways in which the other humans could be placed around us consistent with us remaining at position n. Thus the probability that we are at any position n is given by (N-1)! / N! = 1/N, i.e. that, a priori, we are equally likely to be at any position n within the list. But if it is not possible to permute a set of humans along a timeline then we can't derive this uniform prior probability distribution for our position. Without this uniform distribution the argument can't get off the ground.
John Eastmond 17:15, 14 Feb 2005 (UTC)
Well, I am not sure this answers my question, but it touches some of the same points. For example, if you take N to be the lifetime of humanity, it is an ordered set (1881 is not 1534), while if you take N to be the number of humans it is not an ordered set. Then both these possibilities would yield different answers, while they should be the same, since there is a perfect correlation between the number of humans and the number of years (we know how many humans were born every year).
UnHoly 22:45, 14 Feb 2005 (UTC)
Indeed, the reference class is a hugely important factor in this problem. I would argue that the reference class has exactly one member: me, right here, right now. Assuming that the world has some sort of consistency with my memories (I hope!), then I can only "find myself" in the place and time that I'm in, or one with imperceptible differences. Of course, now it becomes a question of metaphysics, time, and identity, but I am sure I can resolve it in a way that leaves everyday life on a firm footing, and that's what matters. --nanite 142.207.92.56 23:53, 27 Apr 2005 (UTC)
- It's certainly true that the Doomsday argument leaves 'everyday life' on a firm footing (in fact, a firmer footing than without it, by Gott's estimates). It calculates a very low chance of extinction within the lifetime of anyone reading this, and if that's 'what matters' then the argument is irrelevant. I can only speculate on why many researchers consider the problem worth thinking about; maybe because they want to challenge ideas of permanence, or because they are worried about their descendants.
- I agree, that in metaphysical terms the only thing that can ever matter is the subjective experience of the (short lived) individual. Widening the reference class beyond the individual is questionable, although Gott's "Copernicus method" approach is agnostic: He's not really invoking a reference class of more than one. All he says is that within the single reference class of your individual lifetime the things you see will probably be typical (a tautology). Wragge 10:44, 2005 Apr 28 (UTC)
A query for Henrygb
Henrygb, I don't understand. In response to this:
If you pick a number uniformly at random in a set of numbers from 1 to N, where N is unknown to you, and if we name that number j, then:
Your best estimate of N is 2 × j,
You can say with 95% confidence that N is between 40/39 × j and 40 × j.
You disagreed, saying first that:
The maximum likelihood estimate of N is j,...
Now, before we start getting all Bayesian, this already strikes me as odd. I can only think that there are two (or more) different understandings of the case. Assume that your situation, as the random selector of the number, is as follows: You know that you are to select with uniform probability (p=1/N) just one natural number j from a set whose lowest member is 1, whose highest member is N, and whose other members (if any) are all the natural numbers that are larger than 1 and no larger than N, but you have no information about the value of N. In this case, surely your best estimate, or maximum likelihood estimate, for N is not j. How could it be? It must indeed be 2 x j. (If not, why not?) But then, what is the alternative understanding of the case that you are working from, if your answer is j? Please explain. --Noetica 14:12, 17 Feb 2005 (UTC)
- OK, let's work out the maximum likelihood. The probability of getting j given N is
- (1/N) × I(1≤j≤N) where I(.)=1 if the inside (.) is true and =0 otherwise
- This is also the likelihood of N given j.
- It is obviously maximised when 1/N is maximised and I(1≤j≤N)=1,
- i.e. when N is minimised subject to the constraint N≥j,
- i.e. when N=j.
- The likelihood when N=j is 1/j, and when N=2j is 1/(2j) and the first is higher.
- Clear enough? --Henrygb 10:12, 18 Feb 2005 (UTC)
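To put the same calculation in concrete numbers (a toy illustration, not part of the comment above): suppose the drawn number is j = 7. Then
<math>L(N=7 \mid j=7) = \tfrac{1}{7} \approx 0.143, \qquad L(N=14 \mid j=7) = \tfrac{1}{14} \approx 0.071,</math>
so a total of 7 is twice as likely to have produced the draw j = 7 as a total of 14 is, and the likelihood is maximised at N = j.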
Thanks for your response, Henry. But alas, it doesn't seem to me that my question (presented in non-symbolic and non-formulaic terms) has been answered. A big part of this is that I don't understand all the symbols you have used. If you have still more patience, you can help me (and possibly others here who are interested in the Doomsday Argument) by addressing the following. (But this will only work if you communicate in something other than the specialised symbolic manner appropriate to well-practised experts in your field.)
I take it that maximum likelihood can deliver, in practical terms, a best bet. (Am I right?) Now, suppose that you have a situation isomorphic to the one I carefully described earlier, but with numbered marbles to select from an urn. You are told that there are exactly N marbles concealed in the urn, that each marble has been uniquely marked with one natural number, and therefore that each natural number from 1 to N is represented by exactly one marble. Beyond this, you have no information about N: you know only that it is some natural number (not excluding 1). You select just one marble from the urn (with all marbles having an equal probability of being chosen). The number on your marble turns out to be j. You are then given a free bet concerning the value of N.
If the above is not isomorphic to what I outlined earlier, please tell me how it is not. Then say what your best bet is for the value of N. My bet would be 2xj. Would yours be j? If so, that seems bizarre to me. It would mean that you think it more probable that the marble you happened to select was the highest-numbered marble than that some marble numbered j+1 (for example) was the highest-numbered. Why? What am I missing? Extra points will be awarded for a response confined to plain but precise English. Thanks again. --Noetica 13:27, 18 Feb 2005 (UTC)
- "Best" is not in itself a well-defined word. So you need some kind of criterion to judge it. You seem to find bias bizarre and a reason for rejecting a method. Fair enough. But you have to recognise that choosing an estimate which has a lower likelihood than another can be seen as peculiar by others, especially if you have a free bet which I assume pays off only if you guess N correctly. The argument goes that the smaller N is, the more likely you are to choose the jth marble, providing that there are at least j marbles in total. So in your free bet you may be wise to guess that N is j. And indeed this is correct and entirely logical, unless there is some external reason why you think N is more likely to be a particular high value than a particular low value.
- Try it yourself by getting your favourite spreadsheet to choose lots of different Ns (so long as no particular high value is more likely than a particular low value), then in each case choosing a random j from 1 through to N, and then seeing how often N is equal to j or to 2×j. (A rough version of this experiment in code is sketched after this comment.)
- Note that 2×j will never be correct when N is an odd number. If you restrict Ns to even numbers then guessing j or j+1 (so your guess is even) will still be better than guessing 2×j.
- Having said all that, my personal view is that you would do better to have some view of how N is chosen (a prior distribution), and to understand what counts as a good guess (a loss function). But this points at Bayesian methods, rather than unbiased or maximum likelihood arguments from ignorance. --Henrygb 16:54, 18 Feb 2005 (UTC)
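For anyone who prefers code to a spreadsheet, here is a rough Python sketch of the experiment described above. The range 1 to 1000 for N is an arbitrary choice, made only so that no particular high value is more likely than a particular low value:
<pre>
import random

trials = 100000
hit_j = 0      # times the guess "N = j" was exactly right
hit_2j = 0     # times the guess "N = 2*j" was exactly right

for _ in range(trials):
    N = random.randint(1, 1000)   # the hidden total, no value favoured
    j = random.randint(1, N)      # the number we happen to draw
    if N == j:
        hit_j += 1
    if N == 2 * j:
        hit_2j += 1

print("guess N = j    exactly right:", hit_j / trials)
print("guess N = 2*j  exactly right:", hit_2j / trials)
</pre>
Guessing j comes out exactly right roughly twice as often as guessing 2 × j, partly for the reason given above: 2 × j can never be right when N is odd.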
Ah yes! I came to the core insight into all of this by myself, a couple of hours after posting what I did, Henry; and now your careful reply confirms and expands things for me. As you suggest, "best" is not as well-defined as I had assumed. In practical terms, given that the bet I speak of pays off only if I get N exactly right, then I should indeed bet on j, and not on 2 x j. I was mistaken. I have now re-found my marble.
But my remaining concern is with the relevance of this to the Doomsday Argument. You write (see your first comment on this page):
2 × j is not your best estimate of N, it is an unbiased estimate - but a requirement to be unbiased often produces strange results. The maximum likelihood estimate of N is j, which makes the doomsday argument worse.
Now, here you yourself use "best" imprecisely (which may have contributed to my own uncertainty). With the Doomsday Argument, isn't an unbiased estimate of N what we're after, rather than a maximum likelihood estimate? To put this in practical terms with bets concerning the marbles (and assuming replacement, for selections by other bettors): if offered a free bet that paid off only when your estimate is no further from the true value of N than some competitor's estimate (which is unknown to you), you'd bet on 2 x j, wouldn't you? Isn't the task in Doomsday to bet on a year that maximises your chances of winning this sort of bet? Picking the exact year of extinction is not the task. So 2 X j seems best for our purpose, and I can't yet see that this is a "strange result". Will you help once more, without yet getting Bayesian (which may in the end be best, I agree)? --Noetica 21:42, 18 Feb 2005 (UTC)
- If I was betting against other people, I would try to get into an area where I thought other people were not betting so as to maximise my chance of winning. But that is a different game, and there are plenty of paradoxes involved in such betting games.
- What I was trying to say at the top of the page was that the Doomsday argument does need Bayesian analysis and some assessment of probable patterns of populations and extinctions. I believe that traditional statistical methods often produce a nonsense, and this is an example. We do have information which might inform the Doomsday argument: we think we know something about past extinctions of other species. We also have little evidence that future population trends are driven by what has happened in the distant past; the present (and perhaps recent past) may be all that matters. And there is a change of scale problem too: the Doomsday argument is based on human population numbers, not length of time the human species has existed (which would give much longer estimates for the remaining time). But this is strange: what it says is that the faster population grows, the sooner we will become extinct. Yet the evidence is that species whose populations are in decline become extinct faster, and that ones which are distributed worldwide with growing populations survive longest. Ignoring all this is just daft and in my view invalidates the argument. --Henrygb 23:55, 18 Feb 2005 (UTC)
Thanks once more, Henry. I agree with a lot of what you say. I also share your reservations about betting games, and that is why I sought to word things carefully: "...a free bet that paid off only when your estimate is no further from the true value of N than some competitor's estimate (which is unknown to you)...". Though this doesn't quite do the job, such games are often a good way to get concrete and clear about things. I'll try a simpler variant on you:
In the marble set-up as described above, with just one selection, you will be fined |N-(your estimate of N)|x$100. What do you give as your estimate of N?
If I were motivated solely to minimise the expected value of my fine, I'd give 2 x j. What would you give, with the same motivation?
I put it to you that, for as long as we are going to resist the entirely necessary Bayesian analysis, this 2 x j estimate is the most apt and relevant one in the case of the Doomsday Argument. And it is probably a good idea to get clear about the non-Bayesian way of viewing things (which may not deliver "nonsense", as you have it, but merely a less accurate result) before proceeding to the Bayesian. --Noetica 01:04, 19 Feb 2005 (UTC)
- How can you resist something necessary?
- You give a particular loss function, based on the absolute deviation. So the answer must be the median of the posterior distribution, though you will need a prior distribution to work that out. --Henrygb 01:59, 19 Feb 2005 (UTC)
To be more precise, then: "...for as long as we are going to resist the indisputably preferable Bayesian analysis,...". And I put the following to you again, still seeking an answer that will not appeal to Bayesian notions:
In the marble set-up as described above, with just one selection, you will be fined |N-(your estimate of N)|x$100. What do you give as your estimate of N?
If I were motivated solely to minimise the expected value of my fine, I'd give 2 x j. What would you give, with the same motivation?
--Noetica 02:16, 19 Feb 2005 (UTC)
- I wouldn't play the game. It is not much better than not being told j and simply having to guess N and face a fine for getting it wrong. What makes one game more acceptable than another?
- Seriously, you haven't given a motivation for your guess. Unbiasedness isn't justified by the structure of the fine. Suppose you knew that N could be any number from 1 to 6 with equal probability, and you were told j=4. Would you guess 8, just because 2×j is the unbiased estimator? Yes and you are throwing money away; no and you concede that unbiased is not particularly desirable.
- There might be a Bayesian way in which 2×j can be justified, I think for example if you use an improper prior where the probability of N is proportional to 1/N and where the fine is proportional to (N-(your estimate of N))^2. If you take the same improper prior and your fine structure then 2×j−1 might be better. But there is no particular justification for that prior; why not use an improper prior where the probability of N is proportional to 1 and get an infinite estimator? One piece of data is not enough to stake your wallet on.--Henrygb 21:20, 19 Feb 2005 (UTC)
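To make the loss-function point concrete, here is a rough Python sketch. The uniform prior on N from 1 to 1000 is purely an illustrative assumption, not something either side has argued for; the sketch computes the posterior for N given a single draw j, finds its median, and compares the expected fine for guessing j, 2 × j, or that median:
<pre>
# Illustrative only: posterior for N given a single draw j, assuming a
# uniform prior on N over 1..1000, with a fine of |N - guess| * $100.
N_MAX = 1000
j = 40  # the observed draw; any value will do

# Posterior is proportional to the likelihood 1/N for N >= j, zero otherwise.
weights = [1.0 / N if N >= j else 0.0 for N in range(N_MAX + 1)]
total = sum(weights)
posterior = [w / total for w in weights]

def expected_fine(guess):
    return 100 * sum(posterior[N] * abs(N - guess) for N in range(j, N_MAX + 1))

# Posterior median: smallest N whose cumulative probability reaches 0.5;
# the median is the guess that minimises the expected absolute-deviation fine.
cumulative = 0.0
for N in range(j, N_MAX + 1):
    cumulative += posterior[N]
    if cumulative >= 0.5:
        median = N
        break

for guess in (j, 2 * j, median):
    print("guess", guess, "-> expected fine:", round(expected_fine(guess), 2))
</pre>
With this particular prior the posterior median comes out well above 2 × j, which illustrates the point above: the best guess depends on the prior and the loss function, and a single draw does not pin it down.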
As I present the example with the fine, "not playing the game" is no option for you. How could we think it is? No one would volunteer to be in a game in which the best outcome is a fine of $0, and there are many outcomes that are ruinous!
I dispute this in your analysis: "It is not much better than not being told j and simply having to guess N and face a fine for getting it wrong." That is quite a different case, as any attentive examination of the way I set things up will show. You continue, a little further on: "Suppose you knew that N could be any number from 1 to 6 with equal probability, and you were told j=4." But this is to ignore the details of the case as I set it up, in which all you know about N is that it is some natural number. If you alter your prior epistemic situation so radically as you propose, of course the estimate 2 x j will not be good!
The rest of your reply goes into Bayesian matters that I am not concerned to address.
To summarise (from my point of view, at least), I originally asked you to explain something in your mention of maximum likelihood estimates. I am satisfied with this, and I thank you for helping me to clear up a confusion I had. But I say that maximum likelihood estimates are not as relevant to Doomsday as unbiased estimates are. While you were happy to say something about maximum likelihood estimates without wheeling in Bayesian notions as a matter of course, I have not succeeded in pinning you down to a similar statement regarding unbiased estimates, in a well-described situation. Nevertheless, I have now gathered all the information I need (including some meta-information about what virtuoso Bayesians are ready to commit themselves to, perhaps!). I have no further questions at this stage. Thanks very much indeed! --Noetica 22:39, 19 Feb 2005 (UTC)
---
Argh, <insert expletives here>. I spent ages tonight thinking about this, and in the end thought I had come up with a great and novel solution. Except now I find it's already been written by Olum. Oh well... Anyway, I'm going to add it, since it's not original research after all. :-) Evercat 02:55, 24 Feb 2005 (UTC)
Did I miss something?
When there were two people, there was only a 0.00000000003 probability that there would ever be 60 billion people. Why doesn't this matter?--66.65.67.135 20:05, 4 Mar 2005 (UTC)
- In one sense it does matter, and the Doomsday argument says it will be wrong for 5% of humans, including with hindsight Adam and Eve. It then says that you have a 5% chance of being in that 5%.
- In another sense it exposes one possible flaw in the Doomsday argument, as there is no particular reason to suppose that the past of humanity affects the future, except through the present. So the prospects for humanity do not depend on whether you are human number 60 thousand million or human number 60 million million. All that matters is that you are here now and the world is as it is (see martingale). --Henrygb 14:44, 23 Mar 2005 (UTC)
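For what it's worth, the figure in the question above looks like the doomsday-style probability of finding oneself among the first two humans out of a total of 60 billion:
<math>P(n \le 2 \mid N = 6\times 10^{10}) = \frac{2}{6\times 10^{10}} \approx 3\times 10^{-11},</math>
which matches the 0.00000000003 quoted.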
Principle of indifference
I've added a couple of references to the Principle of indifference, since this is a crucial assumption in the linear distribution of n. Another reason I added the reference is that I know the Doomsday argument as the principle of indifference (I came across it by that name, but probably I'm unusual).
I'm not sure if I should have made the second reference (under 'Bertrand') to the article's section on the Bertrand Paradox.
The Principle of indifference#Application to continuous variables alludes to the question of whether the 'correct' measure is 'humans born', 'years of human civilization', or '(humans born) * (self-aware life-span)'. Each measure gives a different 2-standard-deviation estimate for N. Although the explanation is fairly clear in the link, it is titled 'Application to continuous variables' which 'humans born' definitely is not.
Should this second reference be fleshed out or removed? (unsigned by User:Wragge 21:52, 7 Apr 2005)
- I suspect that you should make your second point more explicit and if you want to, include the link to Bertrand's paradox (probability) there (Bertrand was not a Bayesian as far as I know), and separate it from the Bayesian point which is slightly different. I wouldn't mention continuous variables at all, which probably should not be mentioned that way in the Principle of indifference.
- The Bayesian point is that although some use what they call uninformative priors, this is not quite the same as the Principle of indifference and even among those who do there is some debate about which is best. But for other Bayesians even such ideas are not acceptable: imagine coming across a tent in a field which you know has been there a day. Would you assign the same personal probability to it being there in a year's time as you would to a brick building which had been finished the day before? I would not, because I have lots of prejudices and indirect information which suggest to me that canvas lasts a shorter time than brick. Ignoring such information might be seen as irrational. --Henrygb 23:59, 7 Apr 2005 (UTC)
Thanks for the advice, Henrygb. If your prior experience of unsigned Wikipedia comments has led you to a high-confidence inference of user-inexperience, I can provide more confirming data, as this was my first discussion page post.
I agree that it's important to be clear, so I've removed the second reference to Principle of indifference. I think that Bertrand may be an example of an economist who has utilized Bayesian probability, but who would have questioned either of the strong inferences necessary to produce a U(0,1] f distribution. I could be wrong, but would suggest adding an example of the 'many Bayesians' who question this step, and why they do so.
I would say that a condition of the Principle of indifference is that either: (1) no priors exist from which inferences can be made, or (2) the existing priors don't argue for differing state probabilities. Since we have no knowledge of previous similar civilizations (to Homo sapiens culture) we might be in the first case with the Doomsday argument. By this line of reasoning the 'uninformative priors' analogy you make between this case and brick houses/tents is a false analogy, because we have personal experience of many brick houses and a good feel for how long they tend to stay up compared with tents. I would feel a lot more confident of a high N estimate if we had experience of several multi-trillion population civilizations (via SETI or pre-historical archeology, say).
The low N "pessimists" could plausibly argue (from the Fermi Paradox) that the very lack of priors is itself evidence which should bias our 2-sd N estimate downwards from the Principle of indifference value of 20n. However, I think we should try to separate these arguments across the relevant entries. What we can agree upon (I think) is that no relevant 'confirming' or 'disconfirming' evidentiary priors exist for comparable intelligent species populations. We are then left with the subjective 'prior' probability of logical extrapolation, which we should combine with the lower boundary evidence of N (that is, n) via Bayesian statistics to produce a 2-sd N estimate. Under what conditions could the forward-looking priors be powerful enough to produce an n multiple above (say) 50 with 95% confidence?
Caves' rebuttal, quoted at the end of the article, seems to rely on the same analogy as the one you make between houses and civilizations. In my opinion it is a misleading analogy since in the first case (human lifespan) we have ample actuarial data, but in the second (civilization survival) we don't have a single complete data point. There is a lot more to Caves' arguments than this element, but I would like to express the opinion that quoting this example will tend to confuse the 'Doomsday argument' definition. Unfortunately, I can offer no better quote from his paper than his conclusion: "the Copernican principle is irrelevant to considerations of the survival of our species".
I acknowledge that 'priors' can relate to inferences not drawn from frequencies (in this case they will have to be logical projections) but I feel that the distinction between evidence-based 'priors' and extrapolation should not be confused by a comparison to a case where evidence is readily available. The precise point of the Gott thesis is that unprecedented cases do not present meaningful evidence, and hence that the principle of indifference is applicable. (For instance, the Berlin Wall was not really comparable to any previous structure when he visited it, and this enabled him to produce his estimate of its survival time.) Wragge 01:43, 8 Apr 2005 (UTC)
New "Other Versions" sub-section
Henrygb suggested that I make the choice of sampling variable point more explicit, so I've added a new sub-section: "Sampling from observer-moments" under "Other Versions" that details an alternative f distribution, by a uniform sampling over (life-span * n). This includes the earlier reference I made to Bertrand's paradox (probability), but I now directly link to the definition.
Unfortunately, some of the Anthropic subjects I refer to in this section aren't defined yet as Wikipedia pages. Rather than make red links I've added cross-references to discussions of these topics in other articles. Is this better style than adding red-links if those red-links already exist (on Anthropic bias)?
I am concerned that this section might be too long, but I wanted to give a full description of this argument. Is it too wordy, or still not explicit enough?
I added the sub-section to "Other Versions" partly because that only had a single subsection. Is this the appropriate place?
Anyway, it should act as a stub for extension of the function-form side of the definition. Talk:Doomsday_argument#Why_is_N.3D.23_of_humans.3F relates to this.
Wragge 18:40, 2005 Apr 8 (UTC)
NPOV?
The Many worlds section starts: "The problem with the argument might lie..." Reading this I would feel that Wikipedia has the POV that the Doomsday argument is flawed. Is this a valid interpretation? Would we have a more NPOV if this section started: "The argument implicitly assumes..."
The Caves' rebuttal section does not have an NPOV; it says: "uses Bayesian arguments to show that...". This is a disputed claim, which would be more neutrally described as: "uses Bayesian theory to argue that...". Wragge 18:55, 2005 Apr 8 (UTC)
Yes, those changes would probably improve the NPOV of the article slightly. If you feel them to be worthwhile, I doubt that anybody will object. -- Derek Ross | Talk 02:31, Apr 21, 2005 (UTC)
- Thanks for confirming what I had thought, I just wanted to clarify how NPOV should be applied. The great thing about Wikipedia is that objections can be made at any time, but I will take this as confirmation of my POV about NPOV, and probably make the changes soon. Wragge 09:57, 2005 Apr 21 (UTC)
Why the pictures?
I'm not really sure the pictures add anything to the understanding of the article Jackliddle 17:28, 6 Jun 2005 (UTC)
- I added the pictures, and I agree they don't add anything to the explanation, but Wikipedia guidelines recommend adding pictures to all articles, even if only connected in an abstract way. If you have better ideas for images that would enhance comprehension, or be more appropriate, please add or suggest them.
- Thanks for reading, Wragge 17:41, 2005 Jun 6 (UTC)
- Fair enough if that's the Wikipedia guideline. Jackliddle 18:46, 6 Jun 2005 (UTC)
Pseudo-scientific hoax??
I suspect this entire article is an academic hoax akin to the postmodern hoax by professor what's-his-name of the university of wherever.
- I'd never heard of this but the maths seems to check out and it's certainly fun trying to refute it. I think it's an interesting problem that teaches us a lot about the application of statistics. Jackliddle 17:16, 15 Jun 2005 (UTC)
the infinite N objection
The article mentions that Andrei Linde objects to the DA based on the idea that N may in fact be infinite. The Wikipedia article has this to say of that objection:
if N really is infinite, any random sample of n will also be infinite, with the chance of finite n being vanishingly small... In fact, Leslie takes finite n to be "superbly strong probabilistic grounds for rejecting the theory" of infinite N.
Isn't this actually very poor reasoning? We pretty much know for a fact right now as of the time I write this, that n is finite. At some point in the past, there was a time before humanity, right? There was a first human born at some point. And if n is finite at any given point in time, then the only way it could ever become infinite is for there to be an interval of time between now and some future point where the human birth rate is infinite. But since the birth rate can never be infinite (for it to be infinite, there would have to be some sort of "factory" capable of cranking out an infinite number of humans in a finite period of time -- not possible in our universe, right?), it is a mathematical impossibility for n to be infinite, regardless of whether N is infinite.
n is a function of time. You can in fact write n(t). It is impossible for the function n(t) to ever take on an infinite value. It is possible for <math>\lim_{t \to \infty}n(t)</math> to be infinite, but that is not the same thing at all!
Therefore, how can our observation of finite n have any influence on the likelihood of finite or infinite N? If N were finite, then n would be finite. If N were infinite, then n would still be finite because at the time any particular human is alive and thinking about n, it is impossible that an infinite number of humans lived before him. We can't prove that humanity won't live forever, but surely we can be confident that humanity hasn't always existed for all time, right?
So, it seems to me that these grounds for arguing that N cannot be infinite use bad reasoning at best. It is not possible to take "any random sample of N" because of the involvement of time.
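In symbols, the point made above is simply that the sample is finite at every moment regardless of what N turns out to be:
<math>n(t) < \infty \ \text{for every finite } t, \qquad \text{even if } N = \lim_{t \to \infty} n(t) = \infty,</math>
so an observation of finite n cannot by itself count against an infinite N.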
---
I was going to comment exactly the same thing. The number of humans already born must (under "reasonable" assumptions) always remain finite, yet if time continues arbitrarily, an infinite number of humans may be born. I think the remark should be removed until someone explains the argument better (if there is indeed a correct argument here). -Benja