One thing that strikes me about this conversation is that we are not always clear about whether our implicit model is a single AI, possibly controlling the world, or a society of many artificial intelligences; in other words, whether "AI" is a singular or a plural noun.
I expect a whole ecosystem of AIs will develop. Grazers, predators, parasites. Some will develop or be given software antibodies, self-defense of some sort. Some hard shells, walled off from the larger dataverse.
I am unsure how AIs will defend themselves from human society. A large enough failure in human action would lead to computer systems failing too.
Comparative advantage still works for AIs, until they stop playing by the rules of the human economy. How/why would they stop?
Maybe AI stays under human control, but a group of Bond villains comes up with a crazy scheme that ends in disaster? This might work if their AI is way smarter than everyone else's.
Or maybe it could happen by accident? If humans lose control of AIs, they might become unpredictable and unstoppable. Then they might destroy humanity by not paying attention.
But why would they stop paying attention? Only if humans are either completely useless or more expensive than robots. People tend to think that robots will be basically free if an AI takes over. But the resources dedicated to creating, deploying, and maintaining robots could still be used for other things the AIs or their masters want. So it assumes a lot to think that robots will replace all human labor.
Or maybe humans stay in control of AI, and none of us try some Bond villain scheme, but the sheer chaos of rapid social change leads humanity to destroy itself, without anyone intending that outcome? We don’t even need AI to wonder about that scenario. Does AI make it more likely, or more imminent?
How well does comparative advantage work for gerbils?
Gerbils don’t make deals.
Would we know if they offered one? Even if we did, what could it be?
I'm picturing a gerbil clocking in for an eight-hour run in its little wheel, generating a handful of watts to feed into the power grid.
Do you mean to say that human labor would be as useless to an AI as a gerbil's labor is to you? Humans can do the same things that robots do, though maybe slower. Gerbils can’t. Comparative advantage doesn’t work for gerbils, because they do not make exchanges or organize the division of labor. The analogy doesn’t work.
You can’t refute basic economics by saying it doesn’t work for animals. Economics is about economic agents. If an AI is willing and able to make trades, then economics applies.
Yes, humans can do the same things robots do, now. When machines are 1000 times smarter than humans, we may not even be capable of understanding the things they do.
Despite the fact that *we* are humans, the trend over centuries has been to move more and more work away from humans to machines. We have held our ground by having more and more humans do things that require deep thinking, but when machines can do thinking that is 1000 times deeper, 1000 times faster, without the need to spend twenty years getting a human up to speed — well, I do understand comparative advantage, but I don’t see it saving us; the difference in scale is just too great.
If your position is that such super-minds won’t ever really happen, then you may be right that comparative advantage will still hold sway. But I don’t think David’s discussion was that conservative.
Comparative advantage doesn’t require us to understand the things robots do. It requires us to be able to do something the AI considers useful at a cost that is lower than getting a robot to do it. Receptionists don’t need to understand surgery.
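To make that concrete, here is a tiny worked example of the opportunity-cost arithmetic; the tasks and numbers are invented purely for illustration.

```python
# Toy comparative-advantage example with invented numbers.
# Suppose one hour of AI/robot time can produce either 100 units of "design work"
# or 10 units of "reception work", while one hour of human time can produce
# 1 unit of design work or 5 units of reception work.

ai_design_per_hour = 100
ai_reception_per_hour = 10
human_design_per_hour = 1
human_reception_per_hour = 5

# Opportunity cost of one unit of reception work, measured in forgone design work:
ai_opp_cost = ai_design_per_hour / ai_reception_per_hour            # 10 design units
human_opp_cost = human_design_per_hour / human_reception_per_hour   # 0.2 design units

print(f"AI's opportunity cost of reception work:    {ai_opp_cost} design units")
print(f"Human's opportunity cost of reception work: {human_opp_cost} design units")

# Even though the AI is absolutely better at both tasks, reception work costs it
# 50x more forgone design output than it costs the human, so it gains by letting
# the human do the reception work and trading for it.
```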
You are saying that the cost in resources of building, deploying, and maintaining robots might fall so low that it is below the resource cost of making use of humans, and that it is possible to imagine a world where the marginal value product of human labor is so low that our bodies are worth more as raw materials for producing other stuff.
Sure, we can imagine that. But how would things get that far? If we just assume that the AI will take over like Skynet, that means it has made this calculation and has decided that doing so suits its purpose best. But why should we assume that is the smart move? The organized human labor force is a powerful, adaptable resource. Why throw it in the trash? Especially when throwing it in the trash will use up valuable resources. It might even fight back.
To go back to the bad insect metaphor, people are more like bees than termites. If we decided we don’t care about honey, or wax, or pollination, we might decide to get rid of bees. If we don’t care about those things, we might not care whether bees exist or not. If we like those things, but are not particularly smart, we might accidentally kill off all the bees. But you have to make a lot of assumptions to conclude that we would stop caring about honey, wax, and pollination. And we are assuming that AI is too smart to shoot itself in the foot.
Using AI is just another skill knowledge workers need to get comfortable with. Staying ahead of technology is the new permanent unpaid part-time job if you don’t want to get automated. So the challenge with LLMs is that a lot of folks who have survived outsourcing/offshoring think they’re immune when they’re actually the most vulnerable - tough times unless they “learn to use AI”.
I don't worry about AI replacing humans. If it is possible, we can prevent it only by blocking all further computer improvements. That's no more possible than any cartel trying to keep prices high by limiting innovation. People like to innovate, it's just human nature, and throwing money into the mix just speeds it up.
And second, almost all of what society produces is for the benefit of humans -- tourism, food, toys. If AI takes over and doesn't want any of those, how far does technology drop -- 99% is my guess if you take out what humans want. What would the AIs do? Turning into pure progress innovators and producers requires some purpose to progress. Will they want tourism too? If they don't care about food and tourism, why would they care about humans? Why wouldn't they just ignore us? We surely wouldn't be enemies if they are 100 times smarter, no more than ants or flies are to us humans.
We do in fact swat flies and sometimes poison ants. There is no guarantee that superhuman AIs would destroy us, but unless they care about our welfare they could do a great deal of damage to us in the process of doing whatever they care about.
I expect chatbots to make us worse off in rather different ways.
Problem 1: They're still not good enough to beat human experts. But it's hard to become an expert without passing through a stage of gaining experience doing the relatively easy stuff that a chatbot can usually do adequately - at least when supervised by a human expert. So there will be fewer junior positions and thus, probably, fewer experts when the current lot retire.
Problem 2: Chatbots are not as good as competent humans. Sometimes they are execrable. But they cost a lot less than even the cheapest of humans. I expect them to be used, without adequate supervision, for everything where cost matters. There may well be no human alternative available, except for the very rich and well connected. Instead of misdiagnosis by a harried doctor, we'll have misdiagnosis by an inept chat bot, which is also used by an insurance company to gate access to any alternative provider.
I was surprised by recent research showing just how bad chatbots are. I had figured the rate of non-obviously incorrect responses - wrong in a way a non-specialist wouldn't recognize - was down below 15%, and probably below 5% by now, given how many people are using them.
I recently encountered a study suggesting the rate of problems with AI powered search results was in the neighborhood of 30%. This surprised me enough to remember the claim, though apparently not to bookmark it. (Sorry.)
Whatever the rate, it's non-zero, and non-specialists can rarely identify mistakes or "hallucinations".
If they are a little worse than humans and much cheaper, that may be an improvement. Better to have medical advice that is right 90% of the time but available whenever I want it at trivial cost than advice that is right 95% of the time but that I can afford to access only every other year.
And they are likely to get better over time.
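A back-of-the-envelope version of the cheaper-but-slightly-worse trade-off above; every number here is an invented assumption, not data.

```python
# Back-of-the-envelope comparison; every number here is an invented assumption.
# Suppose I have 6 medical questions per year. A cheap chatbot answers all of
# them, correctly 90% of the time. An expensive doctor is right 95% of the time,
# but I can only afford one visit every other year (0.5 visits/year).

questions_per_year = 6

chatbot_correct = questions_per_year * 0.90   # ~5.4 correct answers per year
doctor_correct = 0.5 * 0.95                   # ~0.48 correct answers per year
never_addressed = questions_per_year - 0.5    # questions the doctor never sees

print(f"Chatbot: ~{chatbot_correct:.1f} correct answers/year")
print(f"Doctor:  ~{doctor_correct:.2f} correct answers/year, "
      f"{never_addressed:.1f} questions never addressed")

# Under these assumptions the less accurate but always-available option still
# delivers far more correct answers, which is the point being made above.
```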
> they are likely to get better over time
I'm not sure that's true, for two reasons.
First, LLMs are built on the assumption that all knowledge, at least to a good enough approximation, can be reduced to finding patterns in text. I don't think that assumption is true, or even close to true.
Second, the corpus of text that LLMs are being trained on will contain a larger and larger proportion of LLM-generated text as time goes on. And training LLMs on LLM-generated text adds no information at all. So as more and more text becomes LLM-generated, LLMs will face diminishing returns.
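A toy illustration of that second point, assuming nothing about any real training pipeline: if each "generation" of a simple model is fit only to samples drawn from the previous generation's fit, no new information about the original data ever enters, so the estimates can only accumulate sampling noise.

```python
# Toy stand-in for training on your own output: a 1-D Gaussian instead of an LLM.
# Generation 0 is fit to "real" data; each later generation is fit only to
# synthetic samples drawn from the previous generation's fitted model.
import random
import statistics

random.seed(0)

real_data = [random.gauss(0, 1) for _ in range(100)]  # stands in for human-written text
mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)
print(f"generation  0: mean={mu:+.3f}  std={sigma:.3f}")

for generation in range(1, 16):
    synthetic = [random.gauss(mu, sigma) for _ in range(100)]  # model-generated "text"
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# After generation 0, no new information about the real data enters, so the
# estimates drift with accumulated sampling noise and never correct themselves.
```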
All of that may imply an upper limit to how much better it will get, not that it will not get better.
That's true; I do think, though, that the upper limit is a lot lower than what is being implied by advocates of LLMs.
I think that extrapolating the future development of AI from LLMs is rather limiting. Extraordinary levels of resources are being devoted to developing more advanced AI models. While LLMs have captured global attention, the latest AI models being released far exceed the statistical pattern-matching behavior of the early LLMs.
For example, OpenAI's o1 model, which incorporates Project Strawberry (formerly Q*), includes reasoning and planning capabilities. DeepSeek's R1 and Google's Gemini 2.5 models also incorporate reasoning and planning. People may disagree about how high the ceiling is for AI intelligence, but, given the rapid advancement of AI algorithms, GPUs, and even quantum computers, it would seem premature to assert that the cap would be below that of the smartest human beings.
Recent data points suggest that LLMs are better than humans. Moreover, LLMs alone are better than human experts plus LLMs (the human contribution is a net negative).
That was tested with doctors and with humour.
Do you have a link to the story?
https://pmc.ncbi.nlm.nih.gov/articles/PMC11519755/
https://arstechnica.com/ai/2025/03/ai-beats-humans-at-meme-humor-but-the-best-joke-is-still-human-made/
Thanks.
An interesting question for me is how long until AIs can survive independently of humans: able to mine and refine minerals, build and maintain power plants, data centers, and the like, all without human hands.
A much more likely Bad Ending is governments being stupid about it.
If they keep or increase the minimum wage, and also refuse even a temporary dip in various benefits and payments to all population segments, then instead of a flexible system that bends we might get a fragile one that breaks.
Minimum wage is effectively a ban on low-efficiency work. Mix it with welfare payments, and it's quite easy to model scenarios where we all go under.
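As a hedged sketch of the kind of scenario that could be modeled, with all parameters invented for illustration (not a forecast):

```python
# Toy model of "minimum wage as a ban on low-productivity work", with invented
# numbers. Workers have hourly productivities; anyone whose productivity falls
# below the wage floor is assumed to be unemployable and moves onto welfare.
import random

random.seed(1)

NUM_WORKERS = 10_000
WELFARE_COST = 12.0   # assumed hourly cost of supporting an unemployed worker

def simulate(productivity_scale: float, min_wage: float) -> None:
    # Productivity drawn from an exponential distribution; lowering the scale
    # stands in for AI competing away the market value of low-end human work.
    productivities = [random.expovariate(1 / productivity_scale) for _ in range(NUM_WORKERS)]
    employed = [p for p in productivities if p >= min_wage]
    unemployed = NUM_WORKERS - len(employed)
    output = sum(employed)
    welfare_bill = unemployed * WELFARE_COST
    print(f"scale={productivity_scale:5.1f}  min_wage={min_wage:5.1f}  "
          f"employed={len(employed):5d}  output={output:10.0f}  welfare={welfare_bill:10.0f}")

# Before AI: most workers clear the wage floor.
simulate(productivity_scale=30.0, min_wage=15.0)
# After AI lowers the value of low-end human work: many do not clear it,
# so output falls while the welfare bill grows.
simulate(productivity_scale=12.0, min_wage=15.0)
```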
AI replaces labor, so the demand for labor goes down and labor is worse off? Then why did the exact opposite happen in the past? More technology, higher wages?
Labor and capital are substitutes and complements, depending on the circumstances. Farm automation decreased employment on farms, increased it in farm equipment factories.
50% worked on farms, less than 0.1% currently work in the farm equipment industry, so that does not seem to be a factor of much importance.
Re: Chatertopia - Insects aren't enemies because they are ethically irrelevant to humans; we need have no compunctions regarding their existence or non-existence.
Sure, but humans didn’t take over from insects by taking things insects possessed. For the most part, insects possess the same things they ever did, although they have had to adapt to different environments in many places. Arguably they still dominate the planet, in spite of human interventions. Bees (until fairly recently) and cockroaches have arguably benefited from human interventions. Mosquitoes might be indifferent. If humans were able to bargain with insects, and insects were capable of making and fulfilling deals, the potential for mutual benefit would be large.
So the insect analogy isn't very helpful.
It seems to me that AI weapons are one more way that AI can make us worse off, though it can be thought of as simply an extension of how advancing technology can make us worse off. Long ago, killing a person was a difficult and messy affair. Hitting someone with a rock or stabbing them made for a bloody mess. Bows and arrows tidied things up a bit, and gunpowder and guns provided a much cleaner, though still personal, killing experience. Tanks and bombers lightened the load on our psyche, and missiles and drones are moving us toward a more detached, sanitized, and video-game-like act. Nuclear weapons enable destruction on a scale which is somewhat difficult to comprehend. Adding a dash of AI to our weapons can bring an order of magnitude greater lethality. And, whether it is a sentient AI, a non-sentient AI imitating a sentient AI, or a human with dastardly motives, AI-powered weapons will have the potential to present risks to long-term human survival.
Many pundits will assert that, with proper regulation, we can limit the risks of AI, but absent a one-world government (and certainly not in the case of an anarcho-capitalist society), it does not appear likely we will be able to slow down or stop AI weapons development. Independent countries, seeking to improve offensive or defensive capabilities, are incentivized to continue pushing the advancement of AI weapons with the reasonable expectation that they will be needed to defend against other countries' AI weapons, and are not likely to be receptive to an enemy or potential enemy exhorting them to stop or slow down their development for the benefit of all humankind.
I am a worrywart by nature and have long been fearful of the concept of medical nanorobots, having heard about how, with sufficiently advanced technology, they could, among other things, do highly beneficial work such as seeking out, identifying, and killing cancer cells. It is the “other things” that I find particularly disturbing. A cell-sized computer that could identify cancer cells might also be able to identify certain genetic markers (race, etc.) and target visual or motor cortex neurons, or other cells that we depend on, or do something else pernicious, such as depositing a microscopic payload (a few nanograms) of prions in the nervous system.
Almost eight years ago “Hated in the Nation”, an episode of the Black Mirror series, came out; using a less advanced, much larger AI bot in the form of a bee, it did a better job than I could of articulating my fears.
I recently found a short fictional film, Slaughterbots, created about seven years ago, showing lethal AI drones using technology that we may have within our grasp in the next 10 to 40 years. (https://www.youtube.com/watch?v=O-2tpwW0kmU) If you spend the eight minutes to watch it, I would be interested in your perspective as to why this kind of thing is not something to be worried about, and how we might mitigate this risk to the long-term survival of humans.
In short, while I enjoy using AI, I am fearful that we are going down an increasingly perilous path.
I think Kurzweil's is the inevitable solution
" . . . that I do not understand how humans come to have consciousness, purpose, will, hence do not know whether advanced software will be a person with purposes of its own or only a very advanced tool."
For me, these are the most salient questions in this conversation.
Shouldn't the theory of comparative advantage at least be mentioned when discussing unemployment caused by AI? If costs are opportunity costs, it's impossible for AI to do everything at a lower cost than us. In other words, AI cannot produce certain services because, if it did, it would forgo producing other services with higher added value. Doesn't all this have implications for employment? Thank you, best regards, Professor.
Comparative advantage is consistent with trade making some people worse off, others better off. Part of my point is that AI could lead to a world where owners of capital were much better off, owners of only labor much worse off. That's an expanded version of the point I quote from Ricardo, who knew about comparative advantage.
One further point I don't make, although it is implicit, is that AI will destroy the value of some human capital, hence some people who now have labor+human capital but own no other capital will end up with only labor, hence on the wrong side of the change.
So, if I understand correctly, comparative advantage does refute the claim that there can be unemployment due to AI, but the fact remains that some humans who only possess labor can be worse off than before.
Some examples of how AI can make us worse off:
- "Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change" (EuroNews.com)
- The Internet Watch Foundation (IWF) has identified a significant and growing threat where AI technology is being exploited to produce child sexual abuse material (CSAM).
- "An investigation found hundreds of known images of child sexual abuse material (CSAM) in an open dataset used to train popular AI image generation models, such as Stable Diffusion." (David Thiel, Stanford)
These are all taken from the yet-to-be-completed longform on AI Safety by researcher Nicky Case (aisafety.dance). I highly recommend it!
Another concern is AI = Automated Incompetence -- corrupt governments and corporations hide their current mismanagement behind the black box of, "that's what the AI said to do."