It is obvious that AI can make us better off, but quite a lot of people expect the opposite. There are at least three different reasons for that expectation.
Unemployment
AI, like other new technologies, will substitute for some kinds of human labor. It does not follow that there will be fewer jobs. Over a little more than a century, farming went from about half the labor force to just over 1% with no apparent effect on overall employment rates. It does mean that some people now working in fields that can be automated — truck drivers and cab drivers if we get full self-driving automobiles — will have to do something else. If, as seems likely, the change occurs faster than in the past, a decade instead of a century, many will be substantially worse off as a result. Their loss will be more than matched by the gain to the people consuming what they produced but that may be little consolation to them.
The effect partly depends on how wide a range of activities get automated. The wider the range, the greater the benefit to everyone, including the people who have been replaced by the new technology, of less expensive goods and services. Having to switch to a job that pays half what your previous job did is not a problem if everything you buy now costs a quarter what it used to.
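To make that arithmetic concrete, here is a minimal sketch in Python; the wage and price numbers are hypothetical, chosen only to match the example above:

```python
# Real income is nominal income divided by the price level.
# Hypothetical numbers matching the example in the text: the new
# job pays half the old wage, but automation has cut prices to a
# quarter of what they were.

old_wage, new_wage = 40_000, 20_000    # nominal income per year
old_prices, new_prices = 1.00, 0.25    # price level, old = 1.00

old_real = old_wage / old_prices       # 40,000 in old prices
new_real = new_wage / new_prices       # 80,000 in old prices

print(f"real income before: {old_real:,.0f}")
print(f"real income after:  {new_real:,.0f}")
# Despite a 50% nominal pay cut, real income doubles.
```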
On the other hand, the wider the range of jobs replaced, the fewer of the existing jobs remain to shift into. In the long run that should not be a problem; there will be new jobs working with the new technology and, with more labor available, an expansion of those activities still done by humans. But all of that will take time, and this might be an unusually rapid change.
There may also be a broader shift in the distribution of income. When you replace a truck driver with a self-driving truck or a school teacher with an AI, you are replacing labor (more precisely, labor plus human capital) with capital. That diminishes the demand for labor and increases the demand for capital, shifting the return on capital up and the return on labor down. People who have wealth, or are willing to reduce current consumption to accumulate it, will be better off as a result; people who have only labor to sell will be worse off.
This possible effect of technological change was noted more than two hundred years ago by David Ricardo, one of the most intellectually impressive figures in the history of economics.
There is one other case that should be noticed of the possibility of an increase in the amount of the net revenue of a country, and even of its gross revenue, with a diminution of demand for labour, and that is, when the labour of horses is substituted for that of man. If I employed one hundred men on my farm, and if I found that the food bestowed on fifty of those men, could be diverted to the support of horses, and afford me a greater return of raw produce, after allowing for the interest of the capital which the purchase of the horses would absorb, it would be advantageous to me to substitute the horses for the men, and I should accordingly do so; but this would not be for the interest of the men, and unless the income I obtained, was so much increased as to enable me to employ the men as well as the horses, it is evident that the population would become redundant, and the labourers' condition would sink in the general scale. It is evident he could not, under any circumstances, be employed in agriculture; but if the produce of the land were increased by the substitution of horses for men, he might be employed in manufactures, or as a menial servant.
The statements which I have made will not, I hope, lead to the inference that machinery should not be encouraged. To elucidate the principle, I have been supposing, that improved machinery is suddenly discovered, and extensively used; but the truth is, that these discoveries are gradual, and rather operate in determining the employment of the capital which is saved and accumulated, than in diverting capital from its actual employment. (Principles of Political Economy and Taxation, Chapter XXXI)1
His example uses horses, but the chapter title is “On Machinery.” His point is the effect of technological change that makes capital a better substitute for labor.
AI substitutes for skilled labor, replacing human skill with computer skill; while its introduction should increase the return on capital relative to labor, it might also reduce the wage of skilled labor relative to unskilled labor. Since skilled labor is better paid than unskilled labor, that could make the income distribution flatter, the opposite of the effect on the relative returns of labor and capital.
What If It Replaces Everyone?
The limiting case of what I have just described is a science fiction future where everything humans do can be, and is, done by robots: AI equipped with bodies. I do not think that is likely any time soon, but one can imagine it in the more distant future; one can also imagine intermediate states where a large fraction of what humans do is done by machinery. Comparing production at present to production a few hundred years ago, we are there already. The difference between past change and possible future change is that technological progress so far has mostly replaced human bodies and only begun to replace human minds, leaving a wide range of activities that humans can do better than machines. That is what is changing.
In the limiting case, the inputs to production are capital, raw materials, and land. Income comes from ownership of capital, raw materials, and land, possibly redistributed via political mechanisms. In a more plausible case there remain some activities, perhaps child rearing and back rubs, for which humans are still employed. It is, however, a world in which goods are very inexpensive, because if they were not, humans could still compete in making them.
It is tempting to argue that humans could not be worse off in that world since, if they were, they could go back to producing what they consumed the old-fashioned way. But to produce something the old-fashioned way, workers would have to bid capital, raw materials, and land away from their new uses. A group of humans who among them owned all the needed inputs could do it; a group who owned only their own labor could not.
The Gerbil Problem
So far I have assumed a world where AI is a tool, where the only actors and owners are humans. That may not continue to be true.
Earlier I quoted Kurzweil’s estimate of about thirty years to human-level AI. Suppose he is correct. Further suppose that Moore’s law continues to hold – computers continue to get twice as powerful every year or two. In forty years, that makes them something like 100 times as smart as we are. We are now chimpanzees – perhaps gerbils – and had better hope that our new masters like pets. (Future Imperfect, Chapter 19, “Dangerous Company.”)
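A back-of-the-envelope check of the arithmetic in that passage, assuming (as the quote does) human-level AI at year thirty, a projection out to year forty, and computing power doubling every year or two:

```python
# Hypothetical figures taken from the quoted passage: human-level
# AI at year 30, projection out to year 40, with computing power
# doubling every one to two years.

years_past_human_level = 40 - 30

for doubling_period in (1.0, 1.5, 2.0):
    factor = 2 ** (years_past_human_level / doubling_period)
    print(f"doubling every {doubling_period} years: {factor:,.0f}x human level")
# doubling every 1.0 years: 1,024x
# doubling every 1.5 years: 102x
# doubling every 2.0 years: 32x
```

The “something like 100 times” in the quote corresponds to the middle assumption, a doubling every year and a half.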
I wrote that passage in 2008, when it still looked like a problem for my children or grandchildren to deal with. The introduction of ChatGPT and its relatives, programs that appeared to have something approaching human intelligence, has made it a more urgent concern. Some reasonable people see a serious risk that at some point in the next few decades there will be artificial intelligences enough smarter than humans to successfully manipulate us and take control of the world. They conclude that if we do not either keep that from happening or develop artificial intelligences that are aligned with our interests, giving human welfare a considerable weight in their plans, the human species may be destroyed, its atoms reallocated to purposes more important to earth’s new lords.
Superhuman AI sounds plausible if we start with a software equivalent of a human brain and then make the computer it runs on much faster, with vastly expanded memory. But LLMs were constructed by analyzing an enormous corpus of human writing, deducing from it how a human would respond to text, and responding accordingly to text cues. We do not have a vast corpus of superhuman writing on which to train a superhuman LLM. So while we can expect LLMs to get faster as computers get faster, it is not clear that they will get smarter as well.
That is one reason I am not entirely sold on the pessimistic scenario; it might happen, it might not. Another is that I do not understand how humans come to have consciousness, purpose, and will, and hence do not know whether advanced software will be a person with purposes of its own or only a very advanced tool.
One more possible future:
Kurzweil’s solution is for us to become computers too, at least in part. The technological developments leading to advanced AI are likely to be associated with much greater understanding of how our own brains work. That ought to make it possible to construct much better brain-to-machine interfaces, letting us move a substantial part of our thinking to silicon. Consider 89,352 times 40,327 and the answer is obviously 3,603,298,104. Multiplying five-figure numbers is not all that useful a skill, but if we understand enough about thinking to build computers that think as well as we do, whether by design, evolution, or reverse engineering, we should understand enough to off-load more useful parts of our onboard information processing to external hardware. Now we can take advantage of Moore’s law too. (Future Imperfect, Chapter 19)
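For what it is worth, the multiplication in the quoted passage checks out, and verifying it is itself a small example of off-loading arithmetic to external hardware:

```python
# Checking the product claimed in the quoted passage.
assert 89_352 * 40_327 == 3_603_298_104
print(89_352 * 40_327)  # 3603298104
```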
1. This passage is not found in the first edition of the book and corrects what Ricardo viewed as an error. The new chapter, “On Machinery,” starts:
In the present chapter I shall enter into some enquiry respecting the influence of machinery on the interests of the different classes of society, a subject of great importance, and one which appears never to have been investigated in a manner to lead to any certain or satisfactory results. It is more incumbent on me to declare my opinion on this question, because they have, on further reflection, undergone a considerable change; and although I am not aware that I have ever published any thing respecting machinery which it is necessary for me to retract, yet I have in other ways given my support to doctrines which I now think erroneous; it, therefore, becomes a duty in me to submit my present views to examination, with my reasons for entertaining them.
One thing that strikes me about this conversation is that we are not always clear about whether our implicit model is a single AI, possibly controlling the world, or a society of many artificial intelligences, about whether "AI" is a singular or plural noun.
I expect a whole ecosystem of AIs will develop: grazers, predators, parasites. Some will develop, or be given, software antibodies, self-defense of some sort. Some will have hard shells, walled off from the larger dataverse.
I am unsure how AIs will defend themselves from human society; a large enough failure of human institutions would bring down the computer systems that depend on them too.