In principle, rational choice under uncertainty is straightforward — maximize expected utility.1 Doing this requires you to know both the probability and utility of all of the outcomes you are considering. Most of the time you don’t.

The problem is particularly clear in the case of low-probability, high-value outcomes. With a little googling I can get a pretty good estimate of the probability that it will rain today and, with a little more, of the probability that I will have a heart attack next year. Estimating the probability that, if I am an Alcor member and die, I will be successfully frozen, preserved, and revived is a harder problem, one that requires me to estimate the odds of winning a sequence of gambles:

1. Dying from something that doesn’t destroy my brain

2. Somebody realizing I am an Alcor member, taking the recommended actions with my body and notifying Alcor

3. Someone from Alcor getting to my body before it suffers irreversible damage

4. Medical progress eventually making it possible to bring my frozen body back to life and cure whatever killed me. “Eventually” is at least decades, could be a century or more, which makes the odds harder to estimate

5. Alcor surviving, maintaining, and being able to act on its commitment to preserve the bodies of members until that happens.

6. Alcor or someone else choosing to revive me.

And, to count as a win, I have to come back in a world I prefer to being dead.

Combining all of those, Alcor membership looks like a low probability gamble. Whether that means ten percent, one percent, or .01 percent I don’t, probably can’t, know.
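The way the stages of the gamble compound can be sketched numerically. Every probability below is an invented placeholder, not an estimate endorsed by the text; the point is only that the stages multiply, so even moderately favorable odds at each step yield a small combined chance.

```python
# Illustrative only: each probability is a made-up guess.
# The stages are sequential, so the chances multiply.
stages = {
    "death leaves brain intact": 0.6,
    "someone notifies Alcor in time": 0.7,
    "body reached before irreversible damage": 0.5,
    "revival technology eventually developed": 0.3,
    "Alcor survives and keeps its commitment": 0.4,
    "someone chooses to revive me": 0.8,
}

p_win = 1.0
for stage, p in stages.items():
    p_win *= p

print(f"combined probability: {p_win:.3f}")  # about 0.02 with these guesses
```

With these particular guesses the combined probability is about two percent, even though no single stage looks worse than a coin flip or two.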

Winning the gamble has a very high payoff, not literal immortality but close, since medical progress sufficient to revive a corpsicle should be sufficient to cure almost all lethal diseases, including aging. At ten percent it is worth the price, at .01 percent it isn’t. To decide whether to pay the substantial cost of signing up for Alcor membership I have to have some idea of how likely it is to pay off.

If I do join — and I did — I am confronted with several much lower probability but also much lower cost gambles. Alcor members get a medical alert medallion to hang around their neck to tell medical personnel in charge of their dead or dying body what number to call and what to do before the Alcor people arrive. I keep mine in the desk drawer of my home office.

I am downstairs about to take a letter to the mailbox, a ten minute walk, when it occurs to me that I am not wearing my medallion. Is it worth going upstairs to my office to get it? The chance of my suffering a lethal accident in the next twenty minutes is tiny but I would feel pretty stupid if the one reason my long-shot gamble didn’t pay off was that I wasn’t wearing the medallion. Or would if I could.

I went back upstairs. Should I have?

If I have a solution to the first problem, whether to sign up with Alcor, it can be used to help with the second. The payoff to wearing the medallion is, roughly speaking,2 the probability of something lethal happening during my walk, p_{w}, times the probability that, when I die, Alcor membership will result in my eventual revival, p_{R}, times the value to me of being revived, V_{R}. If I have already decided that p_{R}V_{R} > C_{A}, the cost of Alcor membership, it follows that if C_{w}, the cost of walking upstairs to get the medallion, is less than p_{w}C_{A}, then, if joining Alcor was a correct decision *ex ante*, so is going up to get the medallion.
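The inequality can be checked mechanically. The numbers below are hypothetical placeholders chosen for illustration; the logic is that if joining was rational ex ante, then p_{R}V_{R} exceeds C_{A}, so p_{w}C_{A} is a lower bound on the expected payoff of wearing the medallion.

```python
# Hypothetical numbers, for illustration only, all in the same units.
p_w = 1e-6      # assumed probability of a lethal accident during the walk
C_A = 100_000   # assumed cost of Alcor membership
C_w = 0.001     # assumed cost of walking upstairs to get the medallion

# If joining was rational ex ante, p_R * V_R > C_A, so the expected
# payoff of wearing the medallion exceeds p_w * C_A. Wearing it is
# worthwhile whenever its cost falls below that bound.
worth_it = C_w < p_w * C_A
print("go get the medallion" if worth_it else "skip it")
```

Note that this only gives a sufficient condition: C_{w} could exceed p_{w}C_{A} and wearing the medallion still be worthwhile if p_{R}V_{R} is well above C_{A}.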

#### Less Exotic Versions

For a less exotic version of the same problem, consider the decision to go on a first date. The chance that it will lead to anything important is very small but the prize, a loving wife, is very large. If you enjoy the process of dating it’s an easy choice — but some people don’t.

An easier one is putting on your seatbelt. The chance of an auto accident is very low, but so is the cost of putting it on.

#### Pascal’s Mugging

Pascal’s wager uses the following logic (excerpts from *Pensées*, part III, §233):

· God is, or God is not. Reason cannot decide between the two alternatives

· A Game is being played... where heads or tails will turn up

· You must wager (it is not optional)

· Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing

· Wager, then, without hesitation that He is. (...) There is here an infinity of an infinitely happy life to gain, a chance of gain against a finite number of chances of loss, and what you stake is finite. And so our proposition is of infinite force when there is the finite to stake in a game where there are equal risks of gain and of loss, and the infinite to gain.

· But some cannot believe. They should then 'at least learn your inability to believe...' and 'Endeavour then to convince' themselves.

(Wikipedia on Pascal’s Wager)

Variants show up in the modern world.3 Suppose there is a very low probability, say one in a thousand, of some catastrophe destroying the human race. If your objective is to maximize total utility summed over all humans ever, and if you believe that, absent the catastrophe, humans will continue to exist for millions of years and spread over many worlds, you should give an enormous weight to the utility of future humans, many thousands of times greater than the cost to present humans of even quite expensive precautions to prevent the catastrophe. Hence any cost imposed on present humans to make the catastrophe less likely is justified.

One example of such an argument would be the claim that research in alignment, finding ways of making sure that future artificial intelligences care enough about human welfare not to destroy us, is far more important than contributions to preventing malaria or famine, hence that all altruistic expenditure should be channeled to such research.4 Another is the argument that even if the likely effects of climate change are well short of catastrophic, there is some non-zero probability of effects so catastrophic that it is hardly worth worrying about how likely they are, hence that any cost of reducing the risk, however high, is worth paying.

It is possible, if unlikely, that global warming is all that is preventing the next glaciation.5 Unlike hypothetical catastrophes from too much heat, glaciations have happened repeatedly within the past million years; judging by the past, the result could be half a mile of ice over the present location of London and Chicago and a drop in sea level of several hundred feet, leaving every port in the world high and dry.

That is an example of a very general problem with such arguments; once we take seriously all very large possible consequences however unlikely, it takes only a little imagination to find examples on both sides of any proposed policy. Pascal, for example, does not consider the possibility of a god who is willing to let an honest atheist into heaven but not an opportunist who believes only because he has been persuaded that doing so is a profitable gamble.

The same problem exists with the precautionary principle. We cannot prove that constructing nuclear reactors will not lead to a disastrous leak of radioactive material or terrorists acquiring the ingredients for a nuclear weapon. Hence the precautionary principle implies that we must ban them. We cannot prove that banning nuclear reactors will not lead to continued production of CO_{2} and catastrophic climate change. Hence the precautionary principle implies that we must build them.

Not doing something is also a choice.


1. Using the Von Neumann-Morgenstern definition of utility.

2. Not precisely, because the probability that I will be revived depends partly on how I die. The odds are probably highest for a slow death in a hospital, since the medical personnel will know about Alcor and Alcor will know about my impending mortality.

3. “Pascal’s Mugging” is a term coined by Eliezer Yudkowsky for arguments of this form.

4. The argument is made by people who believe that both the danger of superintelligent AI destroying the human race and the probability that alignment research could prevent it are substantial, but it gets put in the “however low you think the odds are” form to convince other people.
