A Conversation for The Doomsday Argument

Doomsday? Probably not.

Post 1

Underhill

"Given that there are x numbers of humans that ever lived, that your number in this sequence is virtually near the end of the current sequence"

What do you mean by "virtually"? What do you mean by "near"? What do you mean by "current" and most particularly "sequence"? All a bit loose, don't you think?

Also, would not precisely the same "argument" apply at any previous point in history?



Post 2

raymondo

The Doomsday Argument (hereafter DA), attributed to Brandon Carter, has been described by John Leslie (1992). DA can be described as follows. Consider an event A: the final extinction of the human race will occur before the year 2150. One can estimate the probability of this extinction at 1 chance in 100: P(A) = 0.01. Let ~A be the complementary event: the final extinction of the human race will not occur before 2150. Consider also the event E: I live during the 1990s.

One can estimate at 50 billion the number of humans having existed since the birth of humanity: call this H1997 = 5x10^10. In the same way, the current population can be evaluated at 5 billion: P1997 = 5x10^9. One thus calculates that, if event A occurs, one human in ten will have known the 1990s. The probability of living during the 1990s, given that humanity goes extinct before 2150, is thus P(E|A) = 5x10^9 / 5x10^10 = 0.1. On the other hand, if the human race survives past 2150, one can expect it to be destined for a much more significant expansion, with the total number of humans rising to, say, 5x10^12. In that case, the probability of living during the 1990s, given that the human race is not extinct by 2150, can be evaluated as P(E|~A) = 5x10^9 / 5x10^12 = 0.001.

This now makes it possible to calculate the posterior probability of the human race's extinction before 2150, using Bayes' formula:

P'(A) = [P(A) x P(E|A)] / [P(A) x P(E|A) + P(~A) x P(E|~A)] = (0.01 x 0.1) / (0.01 x 0.1 + 0.99 x 0.001) ≈ 0.5025

Thus, taking into account the fact that I live now has shifted the probability of the human race's extinction before 2150 from 0.01 to roughly 0.5025.
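The arithmetic is easy to check with a few lines of Python; the figures below are the estimates assumed in the argument (50 billion past humans, 5 trillion under the survival hypothesis), not measured values:

```python
# Doomsday Argument posterior, using the estimates assumed in the post above.
p_a = 0.01                    # prior: extinction before 2150
p_not_a = 1 - p_a             # prior: survival past 2150
p_e_given_a = 5e9 / 5e10      # chance of living in the 1990s if ~50bn humans ever
p_e_given_not_a = 5e9 / 5e12  # chance of living in the 1990s if ~5tn humans ever

# Bayes' formula for the posterior probability of extinction before 2150.
posterior = (p_a * p_e_given_a) / (
    p_a * p_e_given_a + p_not_a * p_e_given_not_a)
print(round(posterior, 4))  # 0.5025
```

Conditioning on living in the 1990s lifts the extinction probability from 1% to just over 50%, as claimed.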



Post 3

raymondo

or...

Suppose we come across two indistinguishable urns, both filled with balls. A small dispenser device is found near the base of each urn. We are told (by someone extraordinarily reliable in matters concerning these urns and their contents) that the balls are all identical in mass and shape, and are randomly distributed in their respective urns. Our informant also tells us that, in each urn, there is one red ball, and the rest are blue.

We choose an urn, press the button on the dispenser, and a ball is drawn; it is red. We are then told by our reliable informant that one of the urns contains only ten balls, whereas the other contains a thousand.

Which urn have we drawn from?

In light of the new information obtained (i.e. the number of balls in each urn), the Bayesian should favour the hypothesis about the urns that renders the initial outcome (drawing a red ball) more rather than less probable. Thus we should favour the hypothesis that we have drawn the red ball from the urn with only ten balls, rather than the urn containing a thousand.

We should do so, the Bayesian thinks, because once we are told the number of balls in the urns, we conclude that the chance of the red ball having been at the selection point when we made our draw is p=1/10 for the one urn, but p=1/1000 for the other urn. Until further information is obtained, we should thus favour the hypothesis that the red ball has been drawn from the urn with fewer balls.
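Assuming a uniform 50/50 prior over the two urns (not stated above, but natural since they are indistinguishable), the Bayesian update can be sketched in Python:

```python
# Which urn did the red ball come from? Bayesian update on the draw.
prior_small = 0.5        # assumed prior: we picked the 10-ball urn
prior_big = 0.5          # assumed prior: we picked the 1000-ball urn
p_red_small = 1 / 10     # one red ball among ten
p_red_big = 1 / 1000     # one red ball among a thousand

posterior_small = (prior_small * p_red_small) / (
    prior_small * p_red_small + prior_big * p_red_big)
print(round(posterior_small, 3))  # 0.99
```

The 10-ball urn is favoured 100 to 1, which is just the likelihood ratio (1/10)/(1/1000) applied to even prior odds.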

Now consider a sequence H={1,...,n,...,n+k} of all humans who have lived, are presently living, and ever will live, ordered roughly by time of birth. Imagine that we do not know where we are in the sequence H. We are certain that we occupy some position in H, and we assume that k most likely has some definite magnitude. We further assume that some hypotheses concerning the magnitude of k will be more plausible than others, given available evidence. But in lieu of any information about H, we provisionally conclude, in good Laplacian fashion, that the odds of finding ourselves at any one position in H are p=1/(n+k).

Suppose that we then discover that our position in the sequence H is n. As good Bayesians, we will reason in the same fashion as we did in the urn case: we will favor hypotheses about k that make n more rather than less probable, given the evidence. Thus, upon discovering n, we ought to conclude that k is probably quite small, and certainly not much greater than n.

Reasoning in this fashion, we should expect to be nearer the end of the human race than the beginning. Doomsday should be sooner, rather than later.



Post 4

Atlantic_Cable

Here's a worse one for you. You may not even exist.
The theory goes like this.

At present computers can manage about 10^22 operations per second. (I'm talking about the latest supercomputers, not PCs obviously.)

Once they get to 10^24 operations per second, they will be able to simulate all the matter in your brain.

Once they get to 10^42 operations per second, they will be able to simulate all the matter in the known universe.

Assuming they get to this stage (and Moore's law says they may well do), they would probably become household items. People would start simulating their own worlds. We do this now, playing games, and there's no reason to think this will stop in the near future.

If you were in one of those worlds, you'd never know it.

Now here comes the maths.

Ignoring for the moment parallel universes, there is one "Earth" which is pre-simulation. I.e. an Earth at the point before we can simulate these worlds.

However, as billions of people get these computers and run several simulations on them, the total number of simulated "Earths" would be billions, possibly trillions.

Now, based on classic probability, what is the chance you are living in the pre-simulation Earth?

1 in trillions. A number so small, it's as close to zero as makes no difference.



Post 5

raymondo

I just read about that last week and thought about writing an article on it. There is a name for it; I can't remember it, though. Yes, odds are we are not real, and the omega point A954542 has already occurred, or something even more mundane like your example. If we are all property of Broderbund/Electronic Arts, I for one want a raise.

I have also been writing a short story called [email protected], based very loosely on the joke about the COBOL programmer who had himself frozen after making a bundle on Y2K date conversions. He needed a new liver, but wanted to wait until he could get one without surgery. He was awakened in 2999 and, after a little shock about the date, asked whether they could give him a new liver without surgery yet. He was told no, but that the surgery had already been performed and he was in no pain or distress. He asked why everyone was smiling at him and was told, "We understand you know COBOL?"



Post 6

Underhill

"there is a name for it, can't remember it though"

Neither can I, but the definitive statement of this theory is in a book by Frank Tipler called something like "The Physics of Immortality". I can't remember the title in English because I read it in German translation. The last time I looked it was out of print in English.

I do not accept the red ball / blue ball analogy. To say "we assume that k most likely has some definite magnitude" is not the same thing as "there are ten balls here and 1000 there". k is simply an unknown and there is no way of knowing it.

This whole thing strikes me as strangely similar to the cosmological theories which put human beings at the centre of creation - anthropocentric? Can't remember the word. Lots of jargon, lots of long words (and longer books) doing an Emperor's New Clothes makeover on a basically thin idea.



Post 7

Underhill

I remember now - anthropic principle.

This is the theory which was compared by Douglas Adams to a puddle saying "Wow, this hole in the ground fits me exactly - it must have been made for me - I must be special" and that is as good a refutation of it as I have heard.



Post 8

raymondo

Actually I have a copy of The Physics of Immortality, but there is another name for the "we are a sim" theory. I like DNA's response.



Post 9

Underhill

I have been thinking about the red ball / blue ball analogy and I think it is applicable only in this sense: we are in the position of any one of the balls BEFORE it is drawn. We look around and see only that we are surrounded. The drawing in the analogy provides extra information not available to us in the real-life situation.

Try this analogy: you are standing in a single-file line of everyone who has ever lived, is alive, or will live. The future is ahead of you, the past behind. The number of people behind you is very large but could in theory be counted. The number ahead is unknown, and could be anything from one to just short of infinity. How near the head of the queue are you? No idea. You can't see past the head of the person in front of you.

Then someone who was not in the queue comes up to you - just you - and gives you a lollipop. Wow! Extra information! Unfortunately, although you have been singled out from all these people, you still have no idea how many people are ahead of you.

It will take a lot to convince me that this is not closer to the truth than the previous analogies.



Post 10

Skatehorn

raymondo said: "Now consider a sequence H={1,...,n,...,n+k} of all humans who have lived, are presently living, and ever will live, ordered roughly by time of birth...Reasoning in this fashion, we should expect to be nearer the end of the human race than the beginning."

There is a slight problem with this example. Let N be the random variable denoting the total number of human beings that have ever lived and will ever live. Let S be the sequence (1, 2, 3, ..., N) representing all humans ordered by date of birth (so Adam = 1, Eve = 2, etc). Let X denote the random variable giving our position in the sequence. We assume (modestly) that P(X=x|N=n) = 1/n. (I think the mistake you make is to confuse P(X=x|N=n) with P(X=x).) Now suppose that we are told that X=x; we want to calculate P(N=n|X=x). Now

(1) P(N=n|X=x) = P(N=n & X=x) / P(X=x)

we also know that:

(2) 1/n = P(X=x|N=n) = P(X=x & N=n) / P(N=n)

from (2) we get

(3) P(N=n & X=x) = P(N=n) / n

substituting (3) into (1) we get

P(N=n|X=x) = P(N=n) / (n.P(X=x))

So to calculate the probability that there will be exactly n humans in total, given that we are the xth human to have lived, we need to know the probabilities P(N=n) and P(X=x), which we cannot deduce logically, so we have to guess as best we can from what limited information we have.

The main point is that we can draw no conclusion about the likelihood of doomsday without having some prior notion of the distribution of the total number of humans; such a notion must come from empirical factors and our own judgement. We cannot therefore draw any conclusion - on logical grounds - about how likely the end of humanity is.

However, given such a prior distribution we can still say something interesting. Consider two possibilities: N=n1 (event 1) and N=n2 (event 2). So we are assuming that there are either n1 or n2 humans in total. Now assign probabilities P(N=n1)=p1 and P(N=n2)=p2 (=1-p1), and suppose that n1<n2, so that event 1 means there will be fewer humans in total, event 2 that there will be more.

It turns out that when we learn that extra bit of information, X=x, and recalculate the probabilities, taking account of the new information, we will always find that event 1 has become more likely. So once we have formed our opinion about the likelihood of there being a certain number of humans, subsequently taking account of the fact that we are the xth human will always apparently bring the extinction of humanity forward; ignorance truly is bliss.

I include the argument (which can be generalized to any number of events) justifying this claim in appendix 2; here is an example* exhibiting the effect:

event 1: There will be 200 billion humans (n1=200bn)
event 2: There will be 200 trillion humans (n2=200tr)

event 1 has probability 5% (P(N=n1)=0.05)
event 2 has probability 95% (P(N=n2)=0.95)

We are given that we are the 60 billionth human (X=60bn), and we wish to calculate the probability of event 1 based on this new information. We use Bayes theorem (see appendix 1).

P(N=200bn|X=60bn) = P(X=60bn|N=200bn).P(N=200bn) / (P(X=60bn|N=200bn).P(N=200bn) + P(X=60bn|N=200tr).P(N=200tr))

= (0.05/200bn) / (0.05/200bn + 0.95/200tr) ≈ 0.98

So taking account of the fact that we are the 60 billionth human increases the probability of the extinction of humanity after 200bn individuals from 5% to 98%. In fact this conclusion does not depend in any way on the number 60bn; we would get exactly the same result if we observed that we were the first human or the 200 billionth human.
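This worked example is quick to verify in Python; n1, n2 and the priors are the assumed figures from the example above:

```python
# Verify the posterior in the 200bn vs 200tr example.
n1, n2 = 200e9, 200e12   # hypothesised totals of humans ever to live
p1, p2 = 0.05, 0.95      # assumed priors on the two hypotheses

# P(X=x|N=n) = 1/n for any x <= n1, so x itself cancels out of
# Bayes theorem and only the 1/n factors remain.
q1 = (p1 / n1) / (p1 / n1 + p2 / n2)
print(round(q1, 2))  # 0.98
```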

To recap: we can draw no conclusions based only on the observation of how many people have gone before us; we cannot magically conclude that humanity will go extinct tomorrow from the mere observation that 60 billion people have already lived. We require a prior notion of the probability distribution of the total number of humans. Once we have such a prior, we can show that the information that 60 billion people have already lived increases the probability of a smaller rather than a larger total number of humans, thus apparently bringing doomsday closer.

If you want to find out more about the Doomsday argument, and related arguments and paradoxes, have a look at http://www.anthropic-principle.com


* this example is taken from the PhD thesis "Observational Selection Effects and Probability" by Nick Bostrom. http://www.anthropic-principle.com

Appendix 1:
P(A) denotes the probability of event A occurring.
P(A&B) denotes the probability of events A and B both occurring.
P(A|B) denotes the probability of event A occurring, given that event B has already occurred.
P(A|B) = P(A&B) / P(B).
Bayes theorem: P(B1|A) = P(A|B1).P(B1) / (P(A|B1).P(B1) + P(A|B2).P(B2))

Appendix 2:
From Bayes theorem

(A1) P(N=n1|X=x) = P(X=x|N=n1).P(N=n1) / (P(X=x|N=n1).P(N=n1) + P(X=x|N=n2).P(N=n2))

Now we are assuming that P(X=x|N=n1) = 1/n1, and P(X=x|N=n2) = 1/n2, so (A1) becomes

(A2) P(N=n1|X=x) = (P(N=n1)/n1) / (P(N=n1)/n1 + P(N=n2)/n2)

if we let P(N=n1)=p1, and P(N=n2)=1-p1, and q1=P(N=n1|X=x) then (A2) becomes

(A3) q1 = (p1/n1) / (p1/n1 + p2/n2)

which we can rewrite as

(A4) q1 = n2.p1 / (n2.p1 + n1.p2)

we want to compare the posterior probability q1 with the prior probability p1.

(A5) q1 / p1 = n2 / (n2.p1 + n1.p2)

So q1 / p1 < 1 <=> n2 < n2.p1 + n1.p2 <=> 0 < (n1-n2)(1-p1) <=> n2<n1

so the posterior probability of the event involving more humans is always less than the corresponding prior probability, and the posterior probability of the event involving fewer humans is always greater than the corresponding prior probability.

This argument can be generalized to any number of events and we always find that the posterior probability of the event involving the least number of humans is greater than the corresponding prior probability.
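A numerical illustration of the generalized claim; the three hypotheses and prior weights below are invented purely for the sketch:

```python
# With P(X=x|N=n) = 1/n, conditioning on X=x reweights each prior by 1/n,
# so the smallest-N hypothesis always gains posterior probability.
ns = [200e9, 1e12, 200e12]   # hypothetical totals, smallest first
priors = [0.05, 0.15, 0.80]  # made-up prior distribution

weights = [p / n for p, n in zip(priors, ns)]
posteriors = [w / sum(weights) for w in weights]

print(posteriors[0] > priors[0])    # True: smallest total gains
print(posteriors[-1] < priors[-1])  # True: largest total loses
```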



Post 11

raymondo

...maybe we should ask our local Matrioshka Brain

A998841

