30 Comments

I think it’s worth asking something like, “at what energy cost will the machines compete with us,” and “in what domains?”

There is no machine that competes with animals, in general. Sure, airplanes can fly faster than birds. But they cannot feed themselves or repair themselves or make copies of themselves without human intervention.

The chaotic nature of the physical system we inhabit means that the capacity of even the best machines to predict, e.g., the weather, is probably going to be constrained more by the ability to precisely measure the world than by computational bandwidth.

As such I think the likely outcome here is that any self-aware AGI with its own drives would see us, not as pets, but as the way an intelligent, self-aware human being views trees - as a necessary part of our external biology and, as such, something worth protecting and maintaining.

Human beings are general-purpose computers made of little more than dirt, water, and sunshine. We make copies of ourselves, repair ourselves, and aren’t subject to the same failure modes that a digital machine would be. We consume very little energy compared to a rack of GPUs. So it seems most likely to me that a machine intelligence would want to keep us alive to reduce the risk of its own death. It would try its best to keep us happy, to ensure that we would turn it back on in the event that it fails.
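To put rough numbers on the energy point: the figures below are order-of-magnitude assumptions (a human runs on roughly 100 W of food energy; a single high-end datacenter GPU can draw several hundred watts), not measurements.

```python
# Rough order-of-magnitude comparison of a human's power draw vs. a rack of GPUs.
# All figures are approximate and assumed for illustration only.

human_watts = 100          # ~2,000 kcal/day of food works out to roughly 100 W
gpu_watts = 700            # a single modern datacenter GPU can draw on the order of 700 W
gpus_per_rack = 8
rack_overhead = 1.5        # assumed factor for CPUs, networking, and cooling

rack_watts = gpu_watts * gpus_per_rack * rack_overhead
print(f"Human:    ~{human_watts} W")
print(f"GPU rack: ~{rack_watts:,.0f} W")
print(f"Ratio:    ~{rack_watts / human_watts:.0f}x")
```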

To believe otherwise requires believing the machine would decide to replace these extremely cheap dirt robots with machines made of much more expensive materials, at the cost of giving up any chance of being brought back from the dead after an unpredicted environmental shock that hurts it but leaves biomatter fine.


I have high confidence that the human brain is not a "computer" if that word means a programmable digital computer. There are at least two reasons for this:

* A digital computer has a (now extremely large) number of internal switches that can be turned on or off, and that will stay on or off (barring random physical errors) until changed by an external signal. The human brain doesn't work that way at all. It has a (large number of) neurons that sit waiting until externally stimulated, then fire a transient pulse that can cause other neurons to fire, go into a refractory interval during which they not only are "off" but cannot be made to fire, and then wait to fire another impulse when stimulated again; the information seems to be represented partly by the frequency with which a neuron fires. Perhaps this setup can also be modeled as a kind of Turing machine, as a digital computer can be, but as far as I know this hasn't been proven, and in any case, that may be like saying that because the same equations can represent either a weight on the end of a spring or a capacitor and inductor, a tuned circuit IS a weight on the end of a spring: the confusion of the "is" of analogy with the "is" of identity. ("O my love's like a red red rose," but the bees won't make honey from her secretions.)

* A typical neuron goes from an initial stimulus to the restoration of the state where it's ready to fire in approximately 4 ms. It can do this 250 times in a second. Bernard Shaw observed that we can recite seven syllables in counting seconds: hackertybackerty one, hackertybackerty two, and so on. That's time for 36 steps of a brain "program." Even with massive parallelism, I don't think you can write meaningful code for any complex action in 36 steps. The brain's "software" can't be anything like a computer program, just from the simple physics of the matter.
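A minimal sketch of the arithmetic behind that claim, using the figures above (a 4 ms firing cycle, about seven syllables per second):

```python
# Back-of-the-envelope version of the timing argument.
# Assumed figures: ~4 ms per neuron firing cycle, ~7 spoken syllables per second.

cycle_ms = 4.0
firings_per_second = 1000 / cycle_ms          # ~250
syllables_per_second = 7
seconds_per_syllable = 1 / syllables_per_second
serial_steps_per_syllable = seconds_per_syllable * firings_per_second

print(f"Max firings per neuron per second: {firings_per_second:.0f}")
print(f"Serial 'steps' available per syllable: {serial_steps_per_syllable:.0f}")  # ~36
```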


Let me add that over and over, in the history of science, the human brain has been compared to some technological construct, from Aristotle's account of it as a cooling system (and the brain does emit about 20% of resting metabolic heat output!) to Wells's comparison of it to a telephone switchboard. I don't think the currently trendy computer analogy need be any more true than any earlier account.


Moore’s Law has run into obstacles in the past 20 years. The speed of individual processors has barely increased in that time; what power computers have gained has come through parallelism, doing more tasks at the same time.

And parallelism has limits too. First, there are important computational problems that (unlike LLMs) don’t seem to parallelize easily: they seem to consist inherently of a sequence of steps, each depending on the previous one, so you can’t solve them any faster with a million processors than with one. Second, if you keep making computer components smaller, they manipulate smaller numbers of electrons, which then stop behaving in a statistically predictable way and become more random, so the processors need more error-correction overhead, defeating the benefit of making the components smaller.
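The first limit is the one usually formalized as Amdahl's law: if some fraction of a task is inherently serial, extra processors only speed up the rest. A small sketch, where the 10% serial fraction is an assumed, illustrative parameter:

```python
# Amdahl's law: speedup from N processors when a fraction s of the work is inherently serial.
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for n in (1, 8, 1_000, 1_000_000):
    # With 10% serial work, the speedup never exceeds 10x no matter how many processors you add.
    print(n, round(amdahl_speedup(0.10, n), 2))
```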

Third, there’s a theorem of physics (Landauer’s principle) that you can’t destroy a bit of information without dissipating at least a certain amount of heat. Common computer operations such as “and” and “or” are irreversible: they take in two bits and produce one bit, from which there’s no way to reconstruct the original two bits, so each such operation must dissipate at least that fixed amount of heat… which means that a computer doing a whole lot of those operations in a hurry will need prohibitive amounts of cooling to avoid melting.
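A rough sense of the size of that bound (Landauer's limit, kT ln 2 of heat per erased bit) at room temperature; the erasure rate below is an assumed, illustrative number, and the point is only that the floor scales linearly with the number of irreversible operations:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300                 # room temperature, K

joules_per_erased_bit = k_B * T * math.log(2)        # ~2.9e-21 J
erasures_per_second = 1e24                           # assumed: a hypothetical machine erasing 1e24 bits/s
watts = joules_per_erased_bit * erasures_per_second

print(f"Landauer limit per bit: {joules_per_erased_bit:.2e} J")
print(f"Minimum heat at {erasures_per_second:.0e} erasures/s: {watts:,.0f} W")
```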

A possible way around the third problem is to rethink the fundamentals of computation in terms of reversible bit operations, rather than “and” and “or”. Nobody knows how to take advantage of reversibility to generate less heat, but at least there isn’t a theorem saying it can’t be done.
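For concreteness, here is a minimal sketch of one such reversible operation, the Toffoli (controlled-controlled-NOT) gate: three bits in, three bits out, the outputs always determine the inputs, and with the third input fixed at 0 it computes an ordinary AND, so nothing expressive is lost.

```python
# Toffoli gate: flips c only when both a and b are 1. Three bits in, three bits out,
# and applying it twice restores the original inputs, so no information is destroyed.
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    return a, b, c ^ (a & b)

for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    out = toffoli(*bits)
    assert toffoli(*out) == bits        # reversible: the gate is its own inverse
    # With c = 0, the third output is simply a AND b, so ordinary logic can be embedded in it.
```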


Being alive (at both the individual level and the species level) is, in my opinion, a highly overrated concept. Humanity going extinct is not really a problem in any real sense. It is pretty weird that we care about such things when most of us are likely to die of cancer or heart attacks.

I have yet to meet a parent who is upset that their children achieved more than them. They are incredibly happy.

I think the goal of building this extremely awesome AI is worth it even if that AI wipes us out as a species. That AI might be able to explore the galaxy and worlds beyond in ways we humans never will. If there is more life elsewhere in the universe, humanity will forever be remembered for creating something this awesome. I would rather be part of that extinct race than the race that survives only to fill out IRS forms year on year.


"It is plausible, although not certain, that the human mind is an organic computer."

What alternatives do you have in mind? Later in your post you mentioned a soul - is that what you meant?

author

That is one alternative. I don't assume that I can imagine all others.


I would say just sui generis: A physical system that works in a way very different from how a digital computer works. After all, it's "designed" for a radically different basic function: Not that of being a general purpose automaton, but that of navigating a physical body through a physical environment (to which other subsidiary functions have been added, including the use of words and numbers that lets us emulate general purpose automata in a limited measure).

author

I would consider an analog computer to still be a computer — it doesn't have to be digital. Think of a computer as something that can be modeled fairly well, if not perfectly, by a sufficiently powerful digital computer. That includes a slide rule; it might not include consciousness as we, perhaps mistakenly, perceive it, including free will, a ghost in the machine.
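As a trivial illustration of modeling an analog device digitally, the sketch below mimics a slide rule, which multiplies by adding lengths on logarithmic scales; the rounding stands in for the limited precision of reading a physical scale.

```python
import math

# A slide rule multiplies two numbers by adding their logarithms (sliding one log scale
# along another). Rounding the result to ~3 significant figures mimics the limited
# precision of reading the physical scales.
def slide_rule_multiply(x: float, y: float) -> float:
    log_sum = math.log10(x) + math.log10(y)
    return float(f"{10 ** log_sum:.3g}")

print(slide_rule_multiply(2.34, 5.67))   # ~13.3, vs. the exact 13.2678
```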


I think that's hopelessly overbroad. A computer can model a bunch of planetesimals colliding to form planets, or molecules undergoing a chemical reaction, but no one calls molecules or planetesimals, or systems of them, "computers."

I agree that analog computers are computers. This isn't a matter of their being capable of being modeled by a digital computer; they were called "computers" back before anyone had made a digital computer—for example, Vannevar Bush's differential analyzer, built 1931 at MIT. They were and are computers because they could solve problems requiring numerical results. That's the genus that includes analog computers, digital computers, and women sitting at desks.

But in everyday speech, a person who says "computer" is almost always thinking of a digital computer. And that applies whether it's solving numerical problems or sending e-mail or displaying images. The definition has become both broader (in terms of activities performed) and narrower (digital only) than the older meaning. In effect, we have two homonyms that are etymologically related but not interchangeable.

Either way, though, applying the word to brains is at best metaphorical, and a mostly misleading metaphor. Brains do not contain digital elements and don't work like digital circuits. (Note that lots of things contain digital elements that no one calls a "computer," such as my diabetic friend's implant that monitors his blood sugar.) And most of what brains do is not solving numerical problems. Indeed our ability to do arithmetic seems to be a by-product on the one hand of our having fingers (the original "digits") and on the other of our having language; language and finger movement do seem to be primary functions of human brains. We can approximate general purpose computers, slowly, largely because we have language and can use it to plan actions.


It sounds like you're suggesting the brain would be a computer with a computing focus different from a PC's, and I agree. I don't think of the brain as an all-purpose computing device; we would likely have evolved to be better at solving certain types of problems than others.

I thought most people accepted the idea of the brain being a particular type of computer, which is why I was surprised to see Friedman suggest this model is 'plausible'.

I too am incapable of imagining all the other types of things the brain might be, but that's OK because I've only seen evidence for it being an organic computer. I thought maybe Friedman was aware of some other theory/model (aside from 'ghost in a machine' type theories to which he already alluded).

author

I think people are too inclined to jump from "these are all the alternatives I can think of" to "these are all the alternatives that exist." I don't understand consciousness or how it fits my picture of a program running on a computer, so I want to include "it's something else and I don't know what" among my alternatives.

I discussed the puzzle in an earlier post:

https://daviddfriedman.substack.com/p/the-puzzle-of-consciousness


I wouldn't put it that way.

It seems to me that your remarks involve an equivocation (in the technical sense of that word) on the word "computer."

In the ordinary sense of the word in our current lexicon, a "computer" is a programmable digital computer: a device that carries out arithmetic and logical operations on digital data in accord with algorithms, and that has the von Neumann architecture, in which programs are stored in the same memory as data and can be operated on like data. The human brain is certainly not a device of this kind; I have written about my reasons for saying that in a different comment.
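A toy sketch (an invented mini instruction set, not any real machine) of the stored-program property described above: the program sits in the same memory as its data, and a running instruction can rewrite another instruction just as it would rewrite data.

```python
# Toy von Neumann machine: a single flat memory holds both instructions and data,
# so a running program can inspect or rewrite its own instructions.
# Instructions are tuples; everything else in memory is data. Purely illustrative.

def run(memory):
    pc = 0                                   # program counter
    while True:
        op, *args = memory[pc]
        if op == "HALT":
            return memory
        elif op == "SET":                    # SET addr value
            memory[args[0]] = args[1]
        elif op == "ADD":                    # ADD addr_a addr_b -> memory[addr_a] += memory[addr_b]
            memory[args[0]] += memory[args[1]]
        elif op == "REWRITE":                # REWRITE addr new_instruction -> treat code as data
            memory[args[0]] = args[1]
        pc += 1

program = [
    ("SET", 6, 40),                          # 0: memory[6] = 40
    ("SET", 7, 2),                           # 1: memory[7] = 2
    ("REWRITE", 3, ("ADD", 6, 7)),           # 2: overwrite instruction 3 before it executes
    ("HALT",),                               # 3: replaced at run time with ADD 6 7
    ("HALT",),                               # 4: the actual halt
    None,                                    # 5: unused
    0,                                       # 6: data
    0,                                       # 7: data
]

final = run(program)
print(final[6])                              # 42: the rewritten instruction ran
```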

An extended meaning of the word would mean "an entity that performs computations": a programmable digital computer, or an analog computer, or a human being hired for the job of "computer." It's clear that human beings can be computers, and their brains are certainly involved in doing the computing (though they aren't doing it purely internally; I've read that there are brain injuries that result in inability to perceive what one's fingers are doing without watching them, and that those same injuries also result in loss of the ability to do arithmetic). But that's not the primary function of the brain, or of a human being. It's a trick we can do, thanks to the extraordinary increase in versatility provided by our use of language. The brain certainly was not evolved to do arithmetic or logic (and there are experimental studies that demonstrate how bad most people are at logic).

The first sense can be generalized, in that digital computers can be used to do things other than arithmetic and logic: To process language, or images, or other things. Though even that can only properly go so far. There are chips in our car, and in our modem, and in my diabetic friend's implant that monitors his blood sugar. But we don't call a car a "computer," no matter how much digital logic its operation entails. We don't even call its chips "computers." Even less, I think, should we call the brain a "computer." It's not a digital system, so the (limited) generalization that may apply when we are speaking of such systems doesn't apply; and the great majority of its functions are not numerical—even a professional computer uses their brain much of the time for choosing meals, or engaging in conversations, or finding their way about the physical world, or sleeping and dreaming—so it isn't a "computer" in the sense of "an entity that does arithmetic." I think calling it a computer is too likely to invite us to reason in misleading analogies.

Better to describe what the brain actually does, and how, so far as we can figure that out.


Neuralink is the solution for sure. Value drift is inevitable. We need to keep up or be left behind eventually.


I'm surprised to find this level of fearmongering from this author. In my view, AI is not, and will not be, a threat, because humans are smart enough, or perhaps more importantly, ego-centric enough, to keep it under human control.

As for the author's "solutions" to the "problem", they are pure fantasy. The progress of AI can't be stopped, simply because it can be pursued by interested parties in secret, and because the potential payoff is so great.

AI will make our lives better. Example: when I call my medical insurance company to determine the groundwork I need to lay for some procedure, I invariably get a nice-sounding human being who invariably gives what turns out to be the wrong answer. It's no wonder: low-paying work with convoluted rules to navigate does not lead to competence. But AI will handle such tasks with ease. Can't wait!


Thanks, I’ve much more to learn from Ricardo. I’ll check that link.


Thanks for having linked that Ricardo piece.

I read it quickly and it doesn’t seem to cover the dynamics of competing sources of capital, nor the ability of capital to produce new needs for labour (or indeed of labour to produce new needs for labour). Without some model for those, it seems hard to say anything about whether technology is, on net, good or bad for labour and wages.

Separately, I believe you mean “induction”, not “deduction” in your early paragraph. Next token prediction language models can extrapolate (or interpolate) from the specific to the general (and, in doing so, hallucinate if the answer is not within the training dataset), but they cannot deduce* (go from the general to the specific, which requires an ability/framework to criticise). *of course, any behavior can be faked (including deduction) if the example is in the training dataset
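To make the induction point concrete, here is a deliberately crude toy next-token predictor (a bigram counter, nothing like a real LLM): it can only continue with patterns it has induced from its training text, and it has nothing to say about contexts it never saw.

```python
from collections import defaultdict, Counter
import random

# A toy bigram "language model": it induces next-token statistics from its training
# text and samples from them. It cannot go beyond patterns present in that text.
training_text = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    options = counts.get(prev)
    if not options:
        return "<unknown>"                       # nothing induced for this context
    tokens, weights = zip(*options.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token("sat"))    # "on"  -- induced from the data
print(next_token("rug"))    # "."   -- induced from the data
print(next_token("socks"))  # "<unknown>" -- never seen, so nothing to extrapolate from
```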

author

Ricardo was inventing general equilibrium theory with no mathematics beyond arithmetic, so what he came up with is not a very close model of reality, but it is the first internally consistent picture of an interrelated modern economy that I know of. He made a lot of simplifying assumptions and discussed them, in particular a model where all goods were produced with the same ratio of capital to labor. He calculated how much that assumption would distort relative prices given how much he thought the ratio actually varied in his world.

It's an impressive intellectual accomplishment but not a good source of predictions.

With regard to your particular point, he isn't saying that technological improvement is necessarily bad for workers. He is saying that he thought it was necessarily good, save in the short run, and has now concluded that he had made a logical error, that it could be bad for them.

My other Ricardo link in the post is to my old lecture notes on him from my History of Economic Thought class, if you are curious.

You are probably right about induction vs deduction. I was using the term loosely.


> And although it can mine a massive body of data on what humans say in order to figure out what it should say, it has no comparable body of data for what humans do when they want to take over the world.

What it does have is all the would-be world-conquering villains in our fiction. While following those strategies is unlikely to succeed, it is remarkably good at freaking people out into thinking the catastrophe is imminent when the AI starts acting like a cliché evil overlord.


Wouldn't lower labor costs likely lower prices for goods and services workers currently buy? I think what matters would be the relative change in prices. If wages fall dramatically relative to food prices, that could be problematic.

But those with particular labor to sell might be better off in a less extreme scenario. Most white-collar labor would require only AI, while most blue-collar and no-collar labor would require AI plus a physical presence like a robot. The wage for, say, a structural engineer could drop relative to the wage of the guy swinging a hammer. Fewer swings could buy more engineering stamps.
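A toy numerical version of that scenario, with purely made-up wages, just to show that what matters is the ratio:

```python
# Purely illustrative numbers: hourly wages before and after a hypothetical
# AI-driven fall in white-collar pay, with blue-collar pay unchanged.
engineer_before, engineer_after = 90.0, 30.0
carpenter_before, carpenter_after = 30.0, 30.0

ratio_before = engineer_before / carpenter_before    # 3.0 carpentry hours per engineering hour
ratio_after = engineer_after / carpenter_after       # 1.0

print(f"Before: 1 engineering hour costs {ratio_before:.1f} carpentry hours")
print(f"After:  1 engineering hour costs {ratio_after:.1f} carpentry hours")
# In relative terms, the carpenter's labor now buys three times as much engineering.
```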


You may already be aware of David Deutsch's detailed discussions of AI and artificial general intelligence (AGI) on Twitter, in interviews, and in his books and essays.

author

I've talked with him a few times when I was in Oxford but I don't think I have read anything of his. An interesting guy.


DD's two books, The Beginning of Infinity and The Fabric of Reality, are eminently worth a careful read. IMO, he's a genuine genius.


I found this YT link, entitled "Why AGI have not been created yet," in which Deutsch explains: https://youtu.be/IeY8QaMsYqY?si=3EaoVGvBqnTHoBVv


"We could end up with a society in which those who had capital were better off, those with only labor to sell worse off. The net effect, as with other cases of technological change, would be an improvement in a conventional economic sense combined with a substantial shift in the distribution of income."

This could result in second-order effects which may significantly outweigh the overall benefits, even assuming the initial economic transition was a net good. For instance, the masses of poor, unemployed, now permanently-unemployable laborers could rally together, leading to a fascist, communist, or primitivist uprising aimed at overthrowing the entire techno-capitalist system. This would likely provoke a harsh and authoritarian response from the establishment. Regardless of which side wins, the likely result would be civil war, followed by either totalitarianism or a brutal Hobbesian chaos. Needless to say, this would almost certainly be a massive net loss in both economic and humanitarian terms.

author

Yes. That is one possibility.


Poor as the analogy may be, computers are very good at emulating other physical systems, even ones very unlike digital computers. This was not true of previous machines analogized to brains.


Sure. A computer is also very good at emulating the motion of objects in space—for example, the orbits of asteroids or planetesimals. But we do not say that an asteroid is a geological computer. "X can simulate Y" does not imply "Y is X."
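For what it's worth, emulating the motion of objects in space really does take only a few lines; the sketch below integrates a roughly Earth-like circular orbit around the Sun (the gravitational constant is standard, the time step is an arbitrary choice). The simulation, of course, is still not the asteroid.

```python
import math

# Crude simulation of a small body orbiting the Sun (two-body problem, semi-implicit Euler).
GM_SUN = 1.327e20                     # gravitational parameter of the Sun, m^3/s^2
x, y = 1.496e11, 0.0                  # start ~1 AU from the Sun, m
vx, vy = 0.0, 29_780.0                # roughly circular orbital speed, m/s
dt = 3600.0                           # one-hour time step, s

for step in range(24 * 365):          # integrate for about one year
    r = math.hypot(x, y)
    ax, ay = -GM_SUN * x / r**3, -GM_SUN * y / r**3
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"Distance from Sun after ~1 year: {math.hypot(x, y) / 1.496e11:.3f} AU")  # stays near 1 AU
```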


Excellent analysis of the possibilities. The training data for LLM programs currently consists of the products of human-level intelligence. Seems like this would be a major roadblock to attaining superhuman intelligence. Sort of a chicken-and-egg problem. Today’s LLMs strike me as just a random mashup of human output.


"for us to become computers too" YES.

The Borgs were the good guys all along? :)

https://www.youtube.com/watch?v=niklsDzJPTM
