Quick clarification: 'Utility' for philosophers means *well-being*, about which there are many different theories, hedonism being just one:
https://www.utilitarianism.net/theories-of-wellbeing
I've always felt this type of philosophical discussion about utilitarianism is like intellectual masturbation.
For example, take Nozick's experience machine. It's so unrealistic that it doesn't have any real-world meaning. It's completely imaginary. It's impossible to use it for anything that has utility in the real world. That's why most people don't answer it rationally. Instead, they search for an answer that feels good and then try to come up with reasons why it's a good answer.
I recommend Henry Hazlitt's book The Foundations of Morality (available online). It's the best book on rule utilitarianism that I've read. Of course, it could also be because it's the only one I've read... But anyway, it's pretty brilliant stuff. I don't know why nobody is familiar with it, even though Hazlitt is very well-known in the libertarian scene.
Utilitarian philosophy is much more utilitarian if you apply it to the rules of society instead of evaluating individual cases.
When you're looking at individual cases, everything quickly becomes a big mess. You can analyze each case forever and find different ways to look at it. There are no right or wrong answers. Does it help you to live a good life? No.
Rule utilitarianism is much better. You can look at and analyze the rules of society. What are their consequences in real life? What happens to most people when they abide by or break these rules? Sometimes the rules create bad results, so you can compare them with good results and see the net effect.
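To make the bookkeeping concrete, here's a toy sketch of netting out a rule's consequences. The rule, the outcomes, and every utility and frequency number are invented purely for illustration:

```python
# Toy model of rule-utilitarian bookkeeping. All numbers are made up;
# the point is the structure of the comparison, not the values.

# Each outcome of a candidate rule: (description, utility change, frequency)
rule_no_lying = [
    ("trust in everyday dealings", +10, 0.90),   # most interactions benefit
    ("white lies that would have spared feelings", -2, 0.08),
    ("truths told to a murderer at the door", -50, 0.02),
]

def net_effect(outcomes):
    """Frequency-weighted sum of utility changes across outcomes."""
    return sum(utility * freq for _, utility, freq in outcomes)

print(f"Net effect of 'don't lie': {net_effect(rule_no_lying):+.2f}")
# A rule utilitarian compares this net figure against alternative rules,
# rather than re-deriving the answer case by case.
```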
Nozick's experience machine is the limiting case of a lot of real world alternatives, ones in which one tricks the mechanisms evolved to reward behavior that benefits you in the long run — in principle measured by reproductive success — into granting rewards for other things. Obvious examples are masturbation, recreational drugs, computer games, especially ones in VR.
I'm a fan of Nozick's experience machine in that it really helps us explore our underlying desires and motivations. I would gladly go into the machine if I knew my wife and kid would be happier if I did, but not otherwise.
What if you did not yet have a wife and kids?
By the way, if anyone is interested in reading a lengthy defense of hedonism, see here. https://benthams.substack.com/p/my-up-to-date-case-for-hedonism I don't think it's at all initially obvious that one should plug into the experience machine, but I think the arguments for hedonism end up being overwhelming.
> Suppose you accept my explanation for the endowment effect, that it exists not because it serves the present interest of the individual but because it served the reproductive interest of other individuals long ago in a very different environment. Should you still take that preference as a given in evaluating economic institutions?
The danger of rejecting the preference is that we miss something in designing the alternative. For example, humans seem to have evolved to be omnivores. Some vegetarians and vegans reject the evolutionary preference to sometimes eat meat because of ethical or environmental considerations. Nutrition science is notoriously bad, but there seems to be a decent argument that vegetarianism and veganism are suboptimal diets (a simple example is vitamin B deficiencies, but there are broader concerns).
Personally, I take preferences that seem to come from evolution or intuition as extremely strong Bayesian priors. Sure, if someone could create a pill that provides all nutrients while letting me avoid killing other animals, that is plausible in principle. In reality, I think it's highly unlikely for at least hundreds of years, and overriding such preferences is more a reflection of hubris. I think this also applies to questions of utility that disregard questions of meaning.
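To put rough numbers on what a "strong prior" cashes out to (a toy sketch with invented probabilities, not real nutrition data):

```python
# Toy Bayesian update: how much evidence it takes to overturn a strong
# prior. All probabilities here are invented for illustration.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Hypothesis: "the evolved preference (e.g. for an omnivorous diet) is
# still good guidance." Start with a strong prior, then observe studies
# that are each twice as likely if the hypothesis is false.
p = 0.95
for study in range(1, 6):
    p = posterior(p, likelihood_if_true=0.3, likelihood_if_false=0.6)
    print(f"after study {study}: P = {p:.3f}")

# Starting from 95%, it takes five such studies before the posterior
# drops below 50% -- the "extremely high burden of proof" in numbers.
```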
I disagree with the claim that veganism is worse for health--see here, for example--though it's certainly true that one ought to supplement B12 if one is vegan. https://benthams.substack.com/p/factory-farming-delenda-est
Additionally, evolution selects for survival, so we should expect it to be usually pretty good at giving you reliable intuitions about which foods are healthy, but if you have some other intuition that evolved for non-truth-tracking reasons, we should expect it to be unreliable.
> I disagree with the claim that veganism is worse for health--see here, for example
That article primarily discusses ethics. I agree factory farming is generally immoral.
> but if you have some other intuition that evolved for non-truth-tracking reasons, we should expect it to be unreliable
My point, though, is that "truth" is a big topic and our understanding of it is limited, primarily to narrow mechanistic science. For example, there may be "truth" in maximizing "meaning" for every lifeform, something that current science has a very poor understanding of. Of course, this isn't to deny the power of science and the fact that it can override our instincts, intuitions, and so on. My point is that the burden of proof required to disregard such instincts and intuitions is plausibly extremely high.
//That article primarily discusses ethics. I agree factory farming is generally immoral.//
It also discusses health.
//My point, though, is that "truth" is a big topic and our understanding of it is limited, primarily to narrow mechanistic science. For example, there may be "truth" in maximizing "meaning" for every lifeform, something that current science has a very poor understanding of.//
I don't think our understanding of truth--the concept--is that limited, though it's certainly the case that our knowledge of the truth is often wrong. Why would there be truth in maximizing meaning for every life form? What does that even mean? If we're ignorant of what truth is, we shouldn't expect evolution to be truth tracking.
> It also discusses health.
I didn't find much. Nutrition science is a dismal science (so much confounding, few RCTs, small sample sizes, etc.); if a nutrition argument doesn't steelman the other side(s), in my opinion, it is most likely motivated reasoning. I found little health discussion, and no steelmanning, although I only skimmed the first sentence of every paragraph. Feel free to summarize here.
> What does that even mean? If we're ignorant of what truth is, we shouldn't expect evolution to be truth tracking.
Exactly, this gets to foundational philosophical questions about what "truth" means. My argument is that simply asserting that "truth tracking" is whatever modern science currently espouses is a hubristic mental model, and a potentially dangerous one for personal and societal health. The Soviet Union asserted that they were "truth tracking".
The farther we get from physics and chemistry, the more my Bayesian prior flips from assuming the science is true (physics, chemistry, etc.) to assuming the science is unlikely to be true if it doesn't align with evolutionary instincts and intuitions. A simple heuristic is any field that needs to add "science" to its name: nutrition science, political science, climate science, social science, etc.
//I didn't find much. Nutrition science is a dismal science (so much confounding, few RCTs, small sample sizes, etc.)//
It's true we should have some uncertainty, but I think the studies supporting veganism are pretty good--e.g. the Casini meta-analysis.
//Exactly, this gets to foundational philosophical questions about what "truth" means. My argument is that simply asserting that "truth tracking" is whatever modern science currently espouses is a hubristic mental model, and a potentially dangerous one for personal and societal health.//
This isn't what was asserted, though I would take modern science to be our best attempt at modeling reality. I think truth is that which corresponds to reality.
> I think the studies supporting veganism are pretty good--e.g. the Casini meta-analysis.
And studies to steelman the other side?
> I think truth is that which corresponds to reality
Agreed, and evolutionary instincts and intuitions are part of reality. Unfortunately, they're murky and hopefully we'll understand them better over time.
I don't believe the experience machine is a sufficient representation of the idea of pure utility preference, given its illusory nature, as you stated. But I think a similar point can be made if you imagine, say, a utility coach who can force a person to maximize their own welfare. I argue that it would not be a moral duty to impose this coach on others (rather, it would be a moral wrong), despite it being a moral duty to do so under utilitarianism.
Additionally, I believe that given the amoral selection pressures that crafted our utility functions, leading to arbitrary preferences, our utility functions lack inherent moral weight. I think Moore sufficiently refuted ethical naturalism, leaving utilitarianism on some very shaky ground.
I make these arguments in the links below, if you happen to be curious.
https://neonomos.substack.com/p/the-utility-coach-thought-experiment
https://neonomos.substack.com/p/freedom-vs-utility
A big problem with the experience machine thought experiment, for me, is the plausibility of the offer. Suppose I were really interested in trading my real life in for an improved virtual one; the reality of my experience isn't the only issue I face. What would be the point of going in if I died immediately, or if something else went wrong and my virtual life became real or imagined torment? I would need a great deal of trust in the process for it to be a serious alternative to my real life. What is sustaining my consciousness after I go into the machine? Whose resources are being used, and why? What prevents them from changing their minds, or running out of resources? It's just not plausible. So I don't know what I would actually think about a truly plausible version of the offer. Could I take my friends and loved ones with me?
It is a bit like cryonics, except cryonics represents a tiny chance to cheat death, a way to prolong life rather than improve it.