Discussion about this post

apxhard

I think it’s worth asking something like, “at what energy cost will the machines compete with us,” and “in what domains?”

There is no machine that competes with animals, in general. Sure, airplanes can fly faster than birds. But they cannot feed themselves or repair themselves or make copies of themselves without human intervention.

The chaotic nature of the physical system we inhabit means that the capacity of even the best machines to predict, e.g., the weather is probably going to be constrained more by the ability to precisely measure the world than by computational bandwidth.

As such, I think the likely outcome here is that any self-aware AGI with its own drives would see us not as pets, but the way an intelligent, self-aware human being views trees - as a necessary part of its external biology and, as such, something worth protecting and maintaining.

Human beings are general-purpose computers made of little more than dirt, water, and sunshine. We make copies of ourselves, repair ourselves, and aren’t subject to the same failure modes that a digital machine would be. We consume very little energy compared to a rack of GPUs. So it seems most likely to me that a machine intelligence would want to keep us alive to reduce the risk of its own death. It would try its best to keep us happy, to ensure that we would turn it back on in the event that it fails.
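The energy comparison can be made rough but concrete. A minimal sketch; the wattage figures below are ballpark assumptions I'm supplying (a resting human runs on roughly 100 W, and a loaded multi-GPU training server can draw on the order of 10 kW), not numbers from the comment:

```python
# Back-of-envelope energy comparison between a human and a GPU server.
# Both wattage figures are assumed, order-of-magnitude values.

HUMAN_W = 100          # assumed: whole-body resting metabolic rate (~2000 kcal/day)
GPU_SERVER_W = 10_000  # assumed: one multi-GPU server under load

ratio = GPU_SERVER_W / HUMAN_W
print(f"One GPU server draws roughly {ratio:.0f}x a resting human")
# prints: One GPU server draws roughly 100x a resting human
```

Under these assumptions the "cheap dirt robots" point holds by about two orders of magnitude per unit.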

To believe otherwise, I would have to believe the machine would decide to replace these extremely cheap dirt robots with machines made of much more expensive materials, at the cost of giving up any chance of being brought back from the dead after an unpredicted environmental shock that damages it but leaves biomatter intact.

William H Stoddard

I have high confidence that the human brain is not a "computer" if that word means a programmable digital computer. There are at least two reasons for this:

* A digital computer has a (now extremely large) number of internal switches that can be turned on or off, and that will stay on or off (barring random physical errors) until changed by an external signal. The human brain doesn't work that way at all. It has a large number of neurons that sit waiting until externally stimulated, then fire a transient pulse that can cause other neurons to fire, go into a refractory interval during which each is not only "off" but cannot be made to fire, and then wait to fire another impulse when stimulated again; the information seems to be represented partly by the frequency with which a neuron fires. Perhaps this setup can also be modeled as a kind of Turing machine, as a digital computer can be, but as far as I know this hasn't been proven; and in any case, that may be like saying that because the same equations can represent either a weight on the end of a spring or a capacitor and an inductor, a tuned circuit IS a weight on the end of a spring: the confusion of the "is" of analogy with the "is" of identity. ("O my love's like a red red rose," but the bees won't make honey from her secretions.)
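The fire-then-refractory dynamic described above can be sketched with a leaky integrate-and-fire model, a standard simplification from computational neuroscience, not a claim about how real neurons work; every parameter here is illustrative:

```python
# Minimal leaky integrate-and-fire neuron with an absolute refractory
# period, to contrast with a digital latch that simply holds its state.
# All parameters are illustrative, not physiological measurements.

def simulate(input_current, dt=0.1, threshold=1.0, leak=0.1, refractory_steps=20):
    """Return the time steps at which the model neuron fires."""
    v = 0.0            # membrane potential (arbitrary units)
    refractory = 0     # steps remaining in the refractory interval
    spikes = []
    for t, current in enumerate(input_current):
        if refractory > 0:
            refractory -= 1           # cannot fire; input is ignored
            continue
        v += dt * (current - leak * v)  # integrate input with a leak term
        if v >= threshold:
            spikes.append(t)          # transient pulse, then reset
            v = 0.0
            refractory = refractory_steps
    return spikes

# A constant input produces regular firing; a stronger input fires more
# often -- information carried partly in the firing frequency.
weak = simulate([0.5] * 1000)
strong = simulate([2.0] * 1000)
assert len(strong) > len(weak) > 0
```

Nothing in this loop resembles a switch holding a bit: state decays, firing is transient, and for `refractory_steps` after each spike no input can flip anything.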

* A typical neuron goes from an initial stimulus to the restoration of the state where it's ready to fire in approximately 4 ms; it can do this 250 times in a second. Bernard Shaw observed that we can recite seven syllables in counting seconds: Hackertybackertyone, hackertybackerty two, and so on. That leaves about 36 firing cycles per syllable, i.e., 36 steps of a brain "program." Even with massive parallelism, I don't think you can write meaningful code for any complex action in 36 steps. The brain's "software" can't be anything like a computer program, just from the simple physics of the matter.
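The arithmetic behind the 36-step figure, using only the numbers given in the comment (4 ms per cycle, seven syllables per second):

```python
# Back-of-envelope check of the timing argument above.
# Figures come from the comment: ~4 ms per full firing cycle,
# seven syllables recited per second.

cycle_ms = 4
firings_per_second = 1000 // cycle_ms        # 250 firing cycles per second
syllables_per_second = 7

steps_per_syllable = round(firings_per_second / syllables_per_second)
print(steps_per_syllable)  # prints: 36
```

So the constraint is roughly 36 sequential "steps" per syllable, regardless of how many neurons run in parallel.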

