27 Comments

"People care about relative as well as absolute outcomes."

I really want to see data on this, I'm skeptical.

To be clear, people surely talk a lot like they care, but talk is cheap. What are people's revealed preferences?

For one data point, take migration. Migrants often trade off an increase in quality of life against relative status. Most migrants move to places where their relative status decreases (from poor countries to rich ones, from the countryside to big cities).

author

Fair point.


There might be a lag: before migrating, and for a few years after, they still mostly compare themselves to their friends and neighbors back in the old place; ten years in this may have changed, but by then it's too late. This is how I imagine my own "comparing myself" activity would play out, should I move to a much richer country for a higher income.


"religious belief is not entirely heritable; if it were, western societies would not have become less religious in recent decades." This is incorrect. Heritability is the proportion of phenotypic variance explained by genetic variance in a given population at a given time. Heritability is not about population means. Suppose the population mean environment changes at time t and this changes the population mean phenotype. This tells us nothing about heritability. Heritability could well be 1 (all phenotypic variance is explained by genetic variance) both before and after t. Under high heritability, those that are more religious than average after t are the descendants of those that were more religious than average before t. Heritability, and not the genetic and environmental effects on population means, is what matters for evolution.


Also, there are now a number of studies showing ca. 25-60% heritability for various religious traits (I suspect mainly dealing with Christianity, but still). Some traits have higher heritability (e.g. the probability of experiencing a religious "awakening") and others much lower (e.g. going to church on Sundays, which can be just a societal norm).


Biopolitical is the perfect name for this post

I have invented Car Attention Deficit Disorder and Logging In And Out Deficiency, since Covid. I take Ritalin for my CADD and Zoloft for my LIAOD.

I am considering a new diagnosis: bipolitical disorder, in which you swing from being left wing to being right wing, depending on the issue.


> "Switching from evolutionary biology to introspection gets me to the question of why I in particular have not donated to a sperm bank and would be unlikely to so for any reasonable reward. I care about my children, identify with them, only want them to exist if they are going to be reared by parents I think suitable for the job. I might be willing to donate sperm for a couple I knew and strongly approved of as parents but not for a random couple. That attitude may be unreasonable, since there is evidence that quite a wide range of child rearing strategies work, but, having been very lucky in my parents, I do not like the idea of another me being much less so."

But what would these counterfactual children prefer? Having suboptimal parents, or not existing at all?


As for why males are unable to identify their own offspring, I think the answer is simple: there is no practical way for an organism to do that. If some allele arose that both gave the bearer some not-too-deleterious marker trait and made the bearer support only offspring with that same trait (which already seems like something that should not arise often; consider the cuckoo), presumably this allele and trait would soon become fixed and thus no longer serve that function.
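A rough simulation of that fixation argument (haploid, one locus, all numbers assumed for illustration): while the marker allele is rare it is informative, but any fitness edge it confers drives it toward fixation, at which point it no longer distinguishes anyone's offspring.

```python
# Rough sketch, assumed parameters: a kin-recognition marker allele that
# gives its bearers a small fitness edge spreads to fixation and thereby
# stops carrying any information about parentage.
p = 0.05                          # initial allele frequency
w_carrier, w_other = 1.05, 1.00   # assumed relative fitnesses

for generation in range(400):
    mean_fitness = p * w_carrier + (1 - p) * w_other
    p = p * w_carrier / mean_fitness  # standard one-locus selection update

print(round(p, 4))  # ~1.0: once everyone carries the marker, it's useless as a cue
```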


> if the income of poor people doubles and the income of rich people triples

On some level people may know the inflation numbers are kinda bullshit, and if that 2x came during a time when housing prices went 3x... well, the instinct seems correct.
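That instinct can be checked with simple arithmetic (the budget shares and price changes below are assumptions for illustration): a household's own deflator depends on what it actually buys, so a 2x nominal raise during a 3x housing run-up can be a real loss.

```python
# Minimal sketch with assumed numbers: nominal income doubles, but a
# housing-heavy consumption basket inflates faster than income.
income_growth = 2.0    # nominal income: 2x
housing_share = 0.6    # assumed budget share spent on housing
housing_growth = 3.0   # housing prices: 3x
other_growth = 1.5     # assumed price change for everything else

# Household-specific price deflator, weighted by its own basket
deflator = housing_share * housing_growth + (1 - housing_share) * other_growth

real_change = income_growth / deflator
print(real_change)  # ~0.83: real purchasing power fell ~17% despite the 2x
```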


It's hard to say how much more males should be concerned with status than females. I've seen quite a bit of data over the years on how many children make it to their reproductive years as a function of income/status in non-modern societies, and the effect seems pretty large. Also, as you mention, one of the things that is inherited is status, which is a further reason for females to seek high-status males and to be concerned with relative status.

An interesting dynamic that you didn't mention is that men seem to have evolved to form close warrior groups, which can increase their reproductive success by raiding other tribes; combine this with polygyny and you have a pretty good explanation for why early humans were so violent. Women, however, benefit more than men from competing within the ingroup, since they can't as easily increase their "reproductive breadth" and instead rely on status gains.


Relative wealth usually corresponds to concrete and productive capital, which provides some people with leverage over others. Additionally, status conditions trust for most people, and trust is an important part of navigating mixed games. No society is constituted entirely of positive-sum exchanges; it's composed of a mixture of positive-sum games and various other games that try to camouflage themselves as positive sum. The game-theoretic optimal approach, per Rubinstein-Osborne, thus deviates from the simpler toy case of "only positive-sum games" in complicated ways.
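For illustration (the payoffs below are made up, not taken from Osborne and Rubinstein): here is the standard way a game that advertises a positive-sum outcome can still have defection as its only equilibrium.

```python
# Toy 2x2 game, assumed payoffs: (trade, trade) is positive-sum, but each
# player's best response is to defect, so camouflaged defection is the
# unique Nash equilibrium.
from itertools import product

TRADE, DEFECT = 0, 1
payoff = {  # (row strategy, col strategy) -> (row payoff, col payoff)
    (TRADE, TRADE):   (3, 3),
    (TRADE, DEFECT):  (-1, 4),
    (DEFECT, TRADE):  (4, -1),
    (DEFECT, DEFECT): (0, 0),
}

def is_nash(r, c):
    # Neither player can gain by unilaterally deviating.
    row_ok = all(payoff[(r, c)][0] >= payoff[(alt, c)][0] for alt in (0, 1))
    col_ok = all(payoff[(r, c)][1] >= payoff[(r, alt)][1] for alt in (0, 1))
    return row_ok and col_ok

print([rc for rc in product((0, 1), repeat=2) if is_nash(*rc)])  # [(1, 1)]
```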


Hypergamy


Your third prediction is seen in bird flocks that have rigid hierarchies. A dominant bird may let the lowest-status birds come quite close and feed next to him, but he ferociously pushes away the birds right below him in the hierarchy, lest they start to think too highly of themselves.


The current concern is only nominally about "robot revolt" and is rather more concerned with sub-goals. See S. Russell, G. Hinton, E. Yudkowsky, R. Yampolskiy, and a host of others in the field.

The idea that robots (AGI) will revolt is a great plot device, so we're used to thinking in those terms, but the real issue is that for a robot (AGI) to achieve ANY goal (fix climate change, grow better crops, end poverty, secure the border, stop terrorists, make markets "fair," adjudicate without bias, help us get to Mars, manage a city's infrastructure; pretty much anything that is not narrow, like "win at chess"), the AGI must prioritize two things:

1. Avoid being turned off, i.e. control the off switch.

2. Gather power and resources, i.e. avoid being unable to pursue the primary goal because it ran out of gas.

It is from those two sub-goals that we end up with the "paperclip maximizer" problem and others; the toy sketch below illustrates the incentive.
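A toy expected-utility calculation (numbers assumed; this is nobody's formal model) shows how both sub-goals fall out of plain goal-maximization:

```python
# Toy sketch, assumed numbers: an agent that scores only progress on its
# primary goal assigns zero value to futures where it has been shut off
# or has run out of resources, so protecting both becomes instrumental.
def expected_value(p_shutdown, p_resources_exhausted, goal_value=10.0):
    p_still_running = (1 - p_shutdown) * (1 - p_resources_exhausted)
    return p_still_running * goal_value

passive = expected_value(p_shutdown=0.3, p_resources_exhausted=0.2)  # 5.6
guarded = expected_value(p_shutdown=0.0, p_resources_exhausted=0.0)  # 10.0
print(guarded > passive)  # True for any nonzero risk: the sub-goals pay
```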

As a sidebar, Sam Harris gave a TED Talk a few years ago on this subject (further digging turned up that he did a podcast with Yudkowsky and has apparently agreed with Yudkowsky's position on this). In the talk he pointed out that the AGI might simply see us the way we see ants when building a road or dam or building. They don't "revolt" or say, "let's kill humans," they just go about their business of building roads and buildings (or, more likely, massive CPU clusters that run better in a very cold and de-oxygenated environment).

Max Tegmark said as much on Lex Fridman's podcast when he said that to eliminate corrosion they might just get rid of that pesky oxygen stuff... and "good luck surviving after that."

After leaving Google, Hinton basically spoke out with the same theme: humanity is over, we're a stepping stone in the evolution towards digital intelligence. That Kurzweil guy is nice, but he's living in a fantasy world if he thinks the AGI will want to bring us along for the ride.

Anyway, it's depressing, as you mentioned before, "there's no getting off this train," and I agree, but it's a sad thing to realize that we're, as a species, so bent on genociding ourselves.

author

I think you are treating a possible negative outcome as if it were certain.


The certainty is that there is risk.

When test pilots died they didn't take down the entire human population (or their town, city, state, nation, hemisphere, or race).

They risked their lives, which were their lives to risk, so a fair play.

Humans are now guinea pigs in an experiment they've not consented to be part of and for the most part don't understand.

Putting aside existential risk of extinction, there's another issue that was well articulated in the Industrial Society and Its Future paper (the Unabomber Manifesto).

Once machines become superintelligent (something nobody in the field argues will never happen or has no chance; they only argue about timelines), the machines will build machines.

Two possibilities: They are not in human control or they are in human control.

If not in human control, all bets are off; nobody knows what will happen, and extinction can't be ruled out, nor can a utopian paradise, nor anything in between. We're flying blind in this case.

If in human control, the elites control the world. As Kaczynski argues with very sound logic, they'll either have to sublimate humanity (they'll have billions and billions of people with nothing to do, no meaningful work required, etc.) or they'll have to eliminate (genocide or sterilize) humanity.

Kaczynski points out if common humans aren't eliminated and we find ourselves happy and in paradise, we'll be analogous to domesticated animals.

Elon Musk tweeted on the day of Kaczynski's death that he might have been right (regarding his predictions).

Again, any number of possibilities are, of course, possible. The problem is we don't know which Everett universe we live in, but in ALL possible worlds, we're being asked to wager our lives without our consent.

My libertarian/anarchist principles tell me this is unethical and evil.

It's like No Country for Old Men: David, flip a coin.

Why you ask....

Don't ask, heads or tails....

What's the wager?

Everything.

author

Humans have always been guinea pigs in an experiment they've not consented to be part of and for the most part don't understand. The world changes, in part as a result of human actions, and there is nobody in charge and never has been.

That doesn't require AI. Both the invention of reliable contraception and of paternity testing changed some of the facts that all past human societies were based on, with unpredictable consequences. So, a little earlier, did the invention of agriculture.


No argument that we're lab rats and I don't even think we have free will...but that's a different rabbit trail.

Neither agriculture nor condoms, the pill, nuclear weapons, cars, guns, or even weaponized smallpox could kill 100% of humans in a programmed instant.

Perhaps international agreements to ban weaponized biological and chemical agents are irrational, or an unethical use of power by the big nation-states; perhaps not; perhaps they're rational.

When the foremost experts in the field of AI say that human extinction is a LIKELY outcome, I'd say it, at the very least, lands in the same discussion as chemical and bioweapons (which would at least be survivable by some percentage of humans).

If your argument is that we'd have a better world if biotech and chemical weapons weren't banned (except by "home owners associations"), and that even if there were a very likely outcome of extinction you'd still not favor stopping it by any means necessary, then that's also another argument.

That argument is unwinnable, as it essentially says there's no threat against which we can defend ourselves until we're already dead (death being the proof that we were aggressed against).

I'm curious: if you were convinced by experts, via math, logic, science, or a god from another dimension, that it was a 50-50 coin toss between human extinction and utopia, would you push the button?

Personally I believe the right to self-defense would allow me to ethically shoot the guy who was about to push that button, on the grounds that I've got a 50% expectation of death, but I can see where it gets stickier if it's 1 in 1,000 or 1 in a million.

Another thought experiment: you're being attacked by a guy who is down to one bullet, and the experts tell you that the shooter misses 50% of the time. He's about to pull the trigger; do you have an ethically defensible position to shoot first in self-defense?

If you say yes, what about if he misses 60% of the time? 90%? What about 99%?

At what expectation of death does self-defense become unreasonable?

Part of the argument here is based on your disbelief (I think, maybe I'm wrong) that the threat is as real as Geoffrey Hinton thinks.

Hinton basically says we're already done, the game is already over. Now, he might be insane, but there are only a few people on the planet in his league in the field of AI and, not surprisingly, they seem to all agree that we're anywhere from probably dead to maybe dead to nearly certainly dead.

Considering that I can assume you love your kids and grandkids as much as I love mine, the only reason I can surmise you're not arguing we should use force to stop these people is that you're disagreeing with all the experts in the field.

I don't get that, honestly. I mean, sure, they could be wrong, experts are wrong all the time, but to this degree? Like absolutely certainly wrong? There's not even a 1 in 10,000 chance?

Seems like the risk here (the bet size being all of humanity, now and forever) is so big that we should err on the side of reasonable caution.

But, since I don't have enough money to buy an army, there's nothing I can do.

I will say I'm questioning my beliefs - if the freedoms we've enjoyed (even being as limited as they are) are going to lead to all my children, grandchildren, and all future people being erased, I'd say the belief system had a flaw.


> From the standpoint of reproduction, being male is a high-risk gamble.

There's at least some evidence, although I haven't tried to evaluate its quality, that mothers are more likely to have boys in times when food is plentiful:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2602810/


Once you have artificial wombs and gene editing, why even bother with parents? Create children using the best genes cribbed from the Human Genome Project. Notice that following this logic, the human race effectively becomes eusocial, with birthing factories taking the role of queens.

author

People want to have their own children; reproduction by parents provides resources to bring children up. In your system what is the substitute?


Children mass produced by hatcheries, gene selected to be loyal to the hatchery, educated in associated schools. Think Brave New World.

Remember that most parents already outsource raising their children to public schools. Also look up what's happening with surrogacy.


I'm surprised you didn't give an "economics" answer to the question of why people care about relative as well as absolute outcomes: surely if poor people's income doubled but rich people's tripled, for example, the poor could end up with a decline in real purchasing power (and that's why they care about relative outcomes).

The nice thing about an economics model is that you could make numerical predictions about how much people would care about relative change; whereas the evo psych model doesn't make any numerical predictions.
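For instance, a minimal sketch of such a model (the log-utility form and the weight beta are assumptions, not anything from the post): utility over own income plus a relative-standing term yields a number for exactly how a 2x-poor / 3x-rich change nets out.

```python
# Minimal sketch, assumed functional form: U = ln(y) + beta * ln(y / y_ref),
# where beta weights concern for relative position.
import math

def utility(own, reference, beta=0.3):
    return math.log(own) + beta * math.log(own / reference)

before = utility(own=1.0, reference=3.0)  # poor at 1, rich reference at 3
after = utility(own=2.0, reference=9.0)   # poor doubled, rich tripled
print(after - before)  # ~+0.57: a net gain at beta = 0.3
# The gain flips negative once beta > ln(2)/ln(1.5) ~ 1.71: a testable number.
```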

author

Economists usually talk in terms of real, not nominal, income, and I was doing so.


I'm pretty sure that whole part was about real changes in income, not nominal.


Great article, thank you. Is relative, as opposed to absolute, happiness that surprising, though? It is literally impossible for an individual human to gain absolute wealth. For a variety of reasons (death, homeostasis, consciousness, the absence of any meaningful absolute scale for anything humans value, etc.) we live in a world without footholds.
