Jun 14, 2023 (edited)

“I am told that the average driver is, by his report, better than average”.

I sometimes joke at parties that I consider myself an excellent exemplar of the Dunning-Kruger effect. To this day, I wonder just how often that joke actually hits. I then wonder what this might imply about my exemplariness.


“I am told that the average driver is, by his report, better than average”

Perhaps. Depends on the average here, I suspect. The median guy is probably better than average if the distribution isn’t a bell curve.

One way to see this: in a system with penalty points, the average is inevitably non-zero - unless nobody has any points at all - but the median driver probably has zero.
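To put toy numbers on it (these counts are invented for the example, not real penalty-point data):

```python
# Invented penalty-point distribution: the median is zero, the mean is not.
from statistics import mean, median

# 100 drivers: 70 with no points, 20 with 3 points, 10 with 9 points
points = [0] * 70 + [3] * 20 + [9] * 10

print(mean(points))    # 1.5 -> the "average" driver has penalty points
print(median(points))  # 0.0 -> the median driver has none
```

In a distribution like that, most drivers really are better than average.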


I am interested in academic work by biblical maximalists, in the spirit of what you wrote above.


This is great! We should always be checking why we believe things.

I want to suggest a third way, which is to learn as much as you can stand about a topic and then declare, "Well, I just don't know." Agnosticism. It's hard to be agnostic; most people want you to take a stand one way or the other. So I'm agnostic about several things: photons, God... Let's just start with those two, at opposite ends of the spectrum. There are arguments both for and against the existence of God and of photons. (Google "Anti-photon" by W. E. Lamb.)

And yet being agnostic about both God and photons, I find both ideas very useful. And so being uncertain of the truth, I still embrace the ideas.

(It's generally hard for males to say, "I don't know".)


Of course, no one prediction that turns out to be false indicates irrationality. Since I am unsure how good a rational person would be at prediction, I cannot evaluate my own record of successes and failures.

author

Proofs, in that context or most others, are scarce. Each confident prediction I make that turns out to be false is evidence against my rationality. If it is a prediction that fits what I want to believe, it is evidence that I engage in wishful thinking. If it is a prediction that fits what the people important to me want me to believe, it is evidence of the Kahan effect. Evidence shifts one's subjective probability; proofs shift it to zero or one.


Perhaps each confident prediction I made that turns out to be false is evidence against my rationality, and each confident prediction I made that turns out to be true is evidence in favor of my rationality. But I must combine all this evidence to reach a conclusion about how rational I have been--and how do I do that? (By the way, it is not easy to enumerate the evidence, because many of my predictions do not get articulated.)

author

I suggested making predictions publicly, which requires you to articulate them and prevents you from retconning them when they turn out to be wrong.

I'm not suggesting a formula for concluding that you are 65% rational, only a way of modifying your priors on the subject.
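A minimal sketch of that kind of prior-shifting, with invented numbers: say there are just two hypotheses about myself, "well calibrated" (my confident predictions come true 90% of the time) and "overconfident" (they come true only 60% of the time), and I update on a public track record of hits and misses.

```python
# Bayesian update over two invented hypotheses about my own calibration,
# given a public track record of confident predictions.

def p_calibrated(prior, hits, misses, p_hit_cal=0.9, p_hit_over=0.6):
    """P(well calibrated | track record), assuming only these two hypotheses."""
    like_cal = p_hit_cal ** hits * (1 - p_hit_cal) ** misses
    like_over = p_hit_over ** hits * (1 - p_hit_over) ** misses
    return prior * like_cal / (prior * like_cal + (1 - prior) * like_over)

# Start agnostic; 8 confident predictions came true, 2 did not.
print(round(p_calibrated(0.5, hits=8, misses=2), 2))  # 0.62
```

The evidence moves the probability; it does not take it to zero or one.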


Making public predictions and then checking them for accuracy enables you to collect evidence of your rationality, but the evidence is thin. To assess your rationality as a whole, you need to consider all your predictions and expectations, not just those that you are motivated to publicize. Admittedly, you can hope that the latter are typical, as regards rationality.

My impression (yes, on thin evidence) is that you rank high among human beings for rationality; grading on the curve, I would give you an A, maybe even an A+. The main counter-evidence is your anarchism, which, as you note, seems to be based in part on wishful thinking about human nature. But, after all, political philosophizing *is* putting forth some sort of *ideal*, and it is not clear what counts as a *realistic* ideal.

By the way, I think it is harder to be rational when very old. A notable feature of old age is the decline in energy. Updating one’s beliefs and practices to accommodate new information requires energy. It is easy, especially when old and tired, not to bother—to keep operating with the views and habits that have served one well enough in the past. This is a corollary of the more general point, that rationality requires intelligence—the more, the better; the decline in mental energy is one sort of decline in intelligence.

author

I think of your final point as the choice between fluid and crystallized intelligence, between finding a new solution to a problem and remembering an old solution. As you get older you have a larger body of old solutions to draw on and a shorter future in which to get the benefit of a better new solution, so it makes sense, energy aside, to shift towards crystallized.

When the pandemic started I was on a speaking trip in Europe. My younger son urged me, by email, to cancel and come home. My initial response was to shrug it off, to assume that he was exaggerating the problem. It eventually occurred to me that I was going on crystallized intelligence, the fact that there had been no pandemics serious enough to make me change my plans in my lifetime, so I cancelled and flew home.

But I agree that age tends to reduce intellectual energy.


I prefer to be an informed voter, but I have sometimes made the rational decision to give up on seeking more data. There are lots of local elections where not much data is available on the candidates. It's particularly bad in elections such as Texas judicial seats, where the official ethics code keeps candidates from taking stands on issues.


I like the idea of making predictions. I think in ten years either guns will be outlawed or people will agree it's something that has a 50% chance of happening.

I have plenty of documented predictions about my kids, for example I am usually able to tell when they're coming down with pneumonia several days to weeks in advance.


Let's see. That got sent before I was ready.

1. I think Spanish speakers will become less anti-Semitic, as a group, over the next five years. (I'm doing my part.)

2. I think the US will still be powerful for another decade at least.

3. I think that people want attention more than they want to be free. So people will be more okay with more surveillance over time. (inspired by something Jeff Jacoby wrote in the Arguable this week)

4. There is no point in predicting an apocalypse, because if it happens, there will be nobody to congratulate me on my accurate predictions.

(I stupidly wrote that with voice-to-text, and the child I am currently trying to put to bed says that if there's a zombie apocalypse, there usually aren't as many people to fight the zombies as there are zombies. But they could hide up ladders, because the physics of zombies prevents them from climbing ladders. So that's it for now. Thanks for a fun exercise though.)


Your article here sounds consistent with you being irrationally bonded to a principle that happens to be core to rationalism. Namely, the Litany of Tarski (which I'm sure you're familiar with). My idea here is that you have an emotional drive to know the correct answer, that is much like, and may even be, the emotional drive other people have to cheer for a specific sports team or political party. (Given that, one could quibble with whether my choice of the word "irrationally" above makes sense.)

I think this might be a useful hack. If we assume that it's impossible to escape into pure robotic reasoning, it might still be possible to intentionally channel our emotional drive to bond to a tribe, by making our tribe that of Tarski. If so, then the usual attack on such bonding - that an emotional investment in a belief will blind us to evidence that counters that belief - is neatly sidestepped. It would now be our emotional drive compelling us to consider conflicting evidence. Any evidence that challenged our core belief in the Litany would itself have to be evaluated in line with that very Litany; anything refuting logic itself could only be evaluated logically.

At that point, one primary hazard is making mistakes in logical evaluation. (Another might be dispensing with other emotional bonds. I don't have a ready rational explanation for familial loyalty beyond something I might have overheard from an evolutionary biologist, but I don't feel ready to reject such bonds, either. More prosaically, I'm also not ready to give up tasty food, even on utilitarian grounds.)

author
Jun 15, 2023 (edited)

Prior to reading your comment, I don't think I had ever heard of the Litany of Tarski. I expect you were assuming that I am a rationalist in a social as well as an intellectual sense, that my views came to a substantial degree from the current rationalist community. I have gotten a few interesting ideas from Scott Alexander (none that I know of from Yudkowsky, although perhaps some via other people), but my basic system of beliefs long predates the existence of that community. I published my first book about seven years before Yudkowsky was born.

I now know what the Litany is and on the whole agree with it, although I could imagine special circumstances in which I wouldn't. But note that Kahan offers a counterexample that might apply to many people, a reason why it might be in their interest to hold false beliefs.


Yudkowsky is becoming a bit famous these days for his position on AI Ruin, something which I, at first, thought he was insane about. Like nutso. Then, being inclined not to make an irrational judgement, I listened to about 4 or 5 hours of him on YT and picked up Rationality: From AI to Zombies (a collection of his blog posts).

Turns out that his basic position is rational and I have yet to find a single AI expert who disagrees with him on principle (the best I can find is things like, "I agree with Eliezer but I think human extinction is not certain, more like 50-50.").

Those who totally disagree with EY are people who are not experts in the field and haven't the slightest clue about the science behind what's being grown inside massive CPU clusters at various large corporations (OpenAI, Meta, Google, Microsoft, Tesla).

Both Sam Altman and Elon Musk, techno-optimists, have said publicly that if we mess up, humanity is, in Altman's words, "lights out."

I listened to an hour and a half of Robin Hanson explaining why EY is wrong, and he sounded like, well, an insane economist who believes theory trumps practice. He obviously doesn't understand EY's position (or he refuses to acknowledge it), and he spent at least a good hour strawmanning and giving meaningless counterexamples of why AI won't kill us.

If there's an expert in the field who doesn't think we're probably doomed, I'd love to hear their position and arguments.

Now, all that said, I noticed something you wrote about nanotech: that some libertarian-minded people are seeing the potential and leaning towards accepting that some government regulation might be best.

What's interesting to me is that that is how I feel about AI technology. I'm a pretty strong ancap, having first read both your positions and Rothbard's back a decade or so ago and studied further (I'd been a libertarian conservative, i.e. voted for Bush, then a libertarian, voted for Ron Paul as a write-in, then a non-voting anarchist).

Today I'm finding myself realizing that if we all die, if humanity goes extinct, then, yeah, anarchy didn't serve us too well...I don't know what to think exactly, but without a powerful one-world agreement, something stronger than nuclear and bio-weapons treaties, humanity is doomed to die out (total extinction and likely inside our children's generation max, maybe sooner).

It sounds like a fantastic claim (made by insane people), but digging into the evidence they present and listening to dozens of talks now, I can't see a rational argument as to why they are wrong.

Connor Leahy went on CNN and said we have little chance on our current course to avoid human extinction and people didn't really hear him, I don't think, or it would be the only topic on anyone's mind. What's more important than realizing that Mark Zuckerberg or Sam Altman are unleashing technology that will likely mean your children or grandchildren will never grow up? I can't think of anything...

So, if true, are we finally proven wrong about freedom and capitalism and anarchy? If humans go extinct, I think so...

author

I raised the possibility that AI might destroy us in _Future Imperfect_, where I wrote: "At least three of the technologies I discuss in this book--nanotech, biotech, and artificial intelligence--have the potential to wipe out our species well before the end of the century." The book was published in 2008, which I think was about the same time Yudkowsky started to write about the subject.

I think, however, that you exaggerate the probability; the future is more uncertain than the argument implies. It might turn out that it isn't possible to create super-intelligent computers, for any of several reasons. It might turn out that Kurzweil's solution, mind-to-machine links that let us do more and more of our thinking in silicon, works and lets us become superintelligent too. It might turn out that superintelligences are benevolent.

And your final points imply that the catastrophe could be prevented by some non-A-C system. We currently have a non-A-C system, yet you think the catastrophe will happen. Why do you assume that the internal politics of a world government wouldn't lead to more and more powerful computers and ultimately an AI catastrophe? You might note that the research which it appears created Covid was government funded — and could have created a much more lethal pandemic.


I do not assume that the internal politics of the world governments won't lead to more and more powerful computers and machine learning and ultimately a catastrophe.

I think that's the most likely thing to happen.

The problem with comparing a totally anarchist world and a totally statist world is that from the computer's perspective, it's all the same.

When we bulldoze a field we don't care so much if the bunnies are "Watership Down" bunnies or mean bunnies or actually rats and mice, we just bulldoze the field.

When I postulate that the problem could be solved with a one-world solution, I'm merely pointing out that if there were a government body/treaty/agreement backed by powerful enough forces to enforce its rules, it could, perhaps, be a solution. This was EY's thought in the Time piece, i.e. we need a world treaty enforced by the threat of violence in the same way we (supposedly) have for bio-weapons.

Perhaps a solution, perhaps not. I do see that the "terrorists" or whatever bad actors haven't killed millions of us yet with anthrax and smallpox, which seems a reasonably easy thing to do (see The Demon in the Freezer by Richard Preston, and compare to what one actor did post-9/11 with anthrax).

In a free (or freeish) market world (let's call the tech world, at least in America, a mostly free market; sure, we can debate what that means, but currently nobody is regulating these LLMs other than their owners, i.e. you can't prompt with certain words/ideas on ChatGPT), the building of a superintelligence is certainly in the realm of the possible; some say inevitable.

The difference between AI and the threats from bio, nano, nuclear, and even conventional warfare on a larger scale is that it's certain this superintelligence will be in a different class: a smallpox virus cannot send out an email or start a chat with a human and convince that human to do something for it. Even the relatively dumb systems we have now can already manipulate humans.

The threat of bio, nano, and to some degree nuclear/conventional weapons is enhanced by AI technology in that it can manipulate so much more information so much faster and thus use these weapons so much more effectively. Once it learned chess (regardless of whether it's thinking about it or not), it wins. Period. It wins even if you bring 1000 humans into the room to decide the next human move.

Could it all end up benevolent? Sure, that's the gamble.

The problem as I see it is bet size.

Any trader or poker player understands that bet size must be appropriate or you'll drunk-walk off the cliff just due to random variation in the game. If you go "all-in" every time you get AA in Texas Hold'em, you'll lose quickly enough and be sent to the rail. It's the same with options, FX, bond trading, whatever: if you bet too much of your stack (in percentage terms), you'll ultimately lose.
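A minimal sketch of that all-in case, assuming an 85% chance of winning each confrontation (roughly the preflop equity of aces against one random hand; the number is only illustrative):

```python
# Surviving n all-in bets requires winning every single one of them,
# so even a large per-bet edge drives the survival probability toward zero.
p_win = 0.85  # illustrative: roughly AA against a random hand

for n in (5, 10, 20, 50):
    print(n, round(p_win ** n, 4))
# 5  -> 0.4437
# 10 -> 0.1969
# 20 -> 0.0388
# 50 -> 0.0003
```

With the whole stack at risk every time, the chance of still being in the game falls toward zero no matter how favorable the edge.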

The current bet size is every single human alive and every single human who might ever live in the future. The bet is "all-in" and the stack size is humanity.

That seems overly aggressive in my opinion, even if the risk of extinction is one in a thousand or one in a million.

The other problem is that the bet is an externality. Sam Altman can only die once, yet he's betting some 8 billion lives.

author

The problem with an argument of the form "if we do this there is a very small probability of an enormous cost, so forbid it" (what Scott calls a Pascal's mugging) is that there is also a very small probability that banning it will result in an enormous cost, in your case that supercomputers are just what we are going to need to save us from some other catastrophe. For a real-world example, versions of the "precautionary principle" have greatly raised the cost of building nuclear reactors, and the lack of nuclear reactors is arguably a major reason for climate change.

Once you start considering one in a million risks, there are no safe options.


I guess it depends, then, on who's actually correct.

Many are saying that developing AGI is essentially guaranteed death for humanity while their reasonable opponents are saying, naw, it's more like a coin toss (I've heard a few estimates at 10-20% as well).

All reasonable opinions, of experts, put the chance of human extinction well above "a very small chance."

In any case, I think it's just mental tennis; there is nothing that can be done to stop what's happening unless you've got an army.

From an ethical position, exposing people, without their consent, to the risk of not only dying but losing all their future children/grandchildren/etc. seems wrong to me.


Michael, thanks for writing this. I am the farthest thing possible from an expert on AI, and have been wondering for a while why so many intelligent people are stressed about it. I had seen the comparison to nanotechnology recently but did not put it together until reading your post.

(Should you want to read a particularly ignorant take on it, this is what I wrote: https://ishayirashashem.substack.com/p/artificial-intelligence-vs-g-d?sd=pf)

Only recently did I figure out how to make ChatGPT helpful to me, and I'm still not super impressed. But it is amazing how fast it works.


When looking at a tool like ChatGPT it is helpful to remember that it might be analogous to cell phones in the '80s or it might be analogous to the telegraph.

We just don't know.

Although it seems logical to believe it's not analogous to the iPhone 13.

Intelligent people are stressed because they're following a logic tree that branches into many bad outcomes among all possible outcomes, some of which are good ones, to be sure.

Eliezer compares it to natural selection. If you're interested, you might find some of his interviews on YouTube and listen to a few of them. It's a bit complex at first (and he's less than the ideal spokesman, as he's a nerdy techno guy with an extremely high IQ and it's hard for him to dumb down his lectures to be understood by normal people).

In the entire spectrum of "what's possible with natural selection" we have something close to an infinite number of species, of which only a tiny percentage came to be, and of those, some 98% or so have gone extinct for one reason or another.

Guys like Turing Award-winning machine learning expert Judea Pearl say things like "we're building a new species," and however you want to take that, whether it's going to be "alive" or "not alive" doesn't matter to the end result, at least in terms of the effect.

If we look at it in those terms, we can then look at what happens when an advanced species with better tech (if available, or just better claws and teeth if it's not using tech) meets a less advanced species (whether using tech or just less effective teeth and claws).

The lesser loses to the better. That's how life evolved, and, well, you're a student of the OT, I see from your blog, so you can appreciate that it's how human societies have evolved as well.

Tribe A murders Tribe B (save maybe the fertile females) and thus Tribe B is erased from history except in some oral and/or written traditions about how they'd followed the wrong deity and got what they deserved.

Tribe A now writes the rules and the new histories, generally ones that showcase how they were in the right and how the murders they committed weren't unlawful murders but, rather, gracious and kind killings in the name of righteousness and good.

So, projecting into the future, we can't know whether (or with what likelihood) one of these two things might be our end:

One. The AI follows a path in which it simply dominates us while pursuing a goal (the goal doesn't matter; it doesn't need to be 'kill humans', it might just be 'acquire power and ensure survival').

Two. The AI is controlled by Tribe X, which, as we see from history, doesn't turn out too well for the other tribes.


I think I don't think of you as a social rationalist (although most of my direct interaction with you is via rationalist forums), but I had also thought of the Litany as something that saw exposure outside that community. I could very well be mistaken. I know of no counterexamples, for instance.

Is Kahan's counterexample in the link you posted? I followed it, and the thing I found was, as I'll try to summarize, that some beliefs compel actions beyond the believer's capability (e.g. belief in global warming compels divesting oneself of all fossil fuel products, or nuking China), but also compel other actions that are not beyond it, such as _calling_ for laws to limit fossil fuels or for a war on China, or recycling more, or other actions that are readily noticed by the believer's acquaintances, interpreted as a tribal alliance, and consequently motivate those acquaintances to be more pleasant around the believer. Or to summarize even more: direct factors dominate, and may dominate even factors that make the believer better off in the long run (such as calling for more nuclear power, or for checking the science more carefully).

Which in turn makes me wonder (as I do on occasion) why it's so hard to think in longer terms... to which my ready answer is "well, multiple reasons, from financial poverty to the need to preserve close social relationships", and then "but these cases also threaten financial and social status in the long term!", and things sort of peter out there due to the game theory getting too complicated to completely hold in my head.

author
Jun 17, 2023 (edited)

Kahan's counterexample, which I sketched in "Why I Believe Things" and discussed at greater length in "Tribal Politics," is:

---

What I believe about evolution or global warming has very little effect on the world, which is a large place, but can have a large effect on me. If I live in a small town where almost everyone is a fundamentalist Christian, announcing that I believe in evolution could have serious negative consequences. If I am a professor at an elite university, announcing that I don't believe in evolution might have even more serious consequences.

It is easier to pretend to believe in something if you really do believe in it, so it is in my interest to persuade myself of whatever views the people who matter to me approve of. One result, according to Dan's research, is that the more intellectually able someone is, the more likely he is to agree with his group's position, whether that means believing in evolution or not believing in it. Smarter people are better at talking themselves into things. (from "Why I Believe Things")

---

It has nothing to do with short run vs long run.


Well, let's see.

The reason I'm inclined to believe the short vs. long run idea (to the extent I actually do, which is not absolute) comes from my inspecting the examples for tribal alignment, and noticing one way in which individuals might make sense. To take your example, someone surrounded by fundamentalist Christians, but undecided on creationism, might reason (as you lay out) that believing in creationism (and thus making it easier to say so) leads to better consequences; however, that individual might also believe that evolution might _also_ have better consequences (by dint of being undecided). The obvious reason to believe in creationism might be harmony with one's neighbors; the obvious reason to believe evolution is the possibility that one's neighbors will change their beliefs or be gradually pushed out by neighbors who believe otherwise, due to the other belief being more predictive in some way. That was what I was thinking of when I thought of the "long term".

This is not the only reasoning process one could use. One could obviously just profess the belief of whichever group seems militarily dominant, for example. (Or culturally dominant, if in a society that is expected to be militarily dominant for a long time and which is itself divided into multiple cultures.) And then the intellectual engine takes it from there and devises the rationalization for the belief. And then, if the dominant belief changes, that intellect engine dutifully crafts the reason that the new dominant belief was clearly correct all along.

I get the sense that Orwell drew much attention to that approach, such that many people recognize it. (One case in point might be the rapid consensus around global warming.) When I consider what would be a superior approach, I'm back at the one that looks first at which belief more accurately models the world - and it looks to me like sacrificing short term harmony with one's neighbors, in exchange for long term harmony with the neighbors one expects to have.

(I hasten to add, if someone's choice was "believe as your current neighbors or have your entire lineage smothered", then I certainly see the advantage in the short term option.)

author

Your "the neighbors one expects to have" seems to assume that truth will win out, and do so in your lifetime. That's optimistic.

In the case of population hysteria fifty years ago, truth won out to the extent that people stopped pushing the claim that population growth would do terrible things in the near future, since it didn't. But in the circles that held that belief, I don't think the people who held it ended up losing status with those who mattered. Even Ehrlich, to take the extreme case, still seems to be taken seriously.

As with religious end of the world movements, when the world doesn't end true believers can preserve their self-regard by shifting the date forward.


I agree it's an optimistic observation. I was thinking that as I wrote it, in fact. But I also remember you noting that a tree you plant that you won't be around to enjoy when it's grown can still be worth planting - at the limit, you can sell the planted tree to someone who _will_ be around. Is that applicable here? It's different in one way I can see: ideas are public goods (right?), not something we can trade the way we would a sapling, so the incentives are different. Different enough? (I also think I remember you stating you were willing to share ideas you had, despite some cost to yourself, in exchange for the knowledge that ideas you believe useful are more widely known.)

It's also possibly more moot, if it turns out that current technology enables ideas (good or bad) to spread more effectively, and evolve faster. A fellow might very well still be around to see the idea landscape change.

I also keep thinking about the other end of this. Suppose the rational option is to do as the Romans do when surrounded by them. If so, then why are there people with heterodox views at all? Ehrlich's views might be taken seriously, but not universally. Why not? There was an apparent homogeneity before there came Jordan Peterson, Jonathan Haidt, Judith Curry, and... well, yourself. And I keep happening upon more and more. I don't believe this cohort is being simply irrational. What's a more likely explanation? Multiple semi-stable factions at startup, so everyone has their own Romans to hang around? A diva effect? Something else?
