Utility in philosophy means something close to pleasure or happiness. According to the philosophy of utilitarianism, one ought to act so as to maximize the total, or in some versions the average1, of human utility.
In economics, utility describes not what makes someone happy but what he chooses to do. "The utility to me of an apple is greater than that of an orange" means that, given the choice, I will choose the apple. We expect that people will usually choose what makes them happy, hence we expect a close correlation between utility in the economist's sense and utility in the philosopher's sense, but not an identity.
One of the things economists do, when they are not making a point of being objective, value-free scientists, is to draw conclusions about what people ought to do, for instance that they ought to abolish tariffs and price controls.2 Those conclusions usually depend on the assumption, stated or unstated, that maximizing utility in the economist's sense will also maximize it in the philosopher's sense. That was clearer a little over a century ago when the economic arguments were being made by an economist, Alfred Marshall, who was not afraid to make explicit the utilitarian foundations of his economic conclusions.
The two senses of utility are correlated but not the same. Suppose you are going to die six months from now. Is your utility greater if you have several months advance warning, as cancer patients often do, or if your death comes as a complete surprise? Spending several months knowing that you are about to die would be, for most of us, a very unpleasant experience, so if utility is another word for happiness, imagined as a characteristic of what is going on inside your head, the second alternative is almost certainly preferable to the first.
But happiness is not all that matters to people. If we could choose in advance, perhaps by instructions to our doctor, many of us would prefer to know. We have things we would like to get done before dying: things to be said to children, wife, friends; projects to be completed whose completion matters, if only to our sense of having lived a life worth living; arrangements to be made for the future of those dear to us. A close friend spent a good deal of his last few months reducing his crowded and cluttered house to something more like order for the benefit of his wife and daughters.
For another example of tension between what makes you happy and what you choose, consider Robert Nozick’s experience machine.3
Someone invents an experience machine; get into it and you will have a fully convincing illusion of experience. The inventor, who somehow knows what your life is going to be like, makes you the following offer:
Get into my experience machine, spend the rest of your life there, and I will give you the illusion of a life slightly better than the one you would otherwise live. Your average income in the illusion will be a few thousand dollars higher than it would have been in reality, your wife a little prettier, your children slightly better behaved, your promotions a little prompter. Your illusory summers won't be quite as hot or winters quite as cold. Once you are in the machine you will not know that it is an illusion.
Assume you believe his offer is genuine. Do you accept it? If not, why not?
If all that matters to you about the world is how it impinges on you, what effect it has on your sensations, you should accept the offer. But for me and, I suspect, many other people, that is not all that matters. I do not merely want the illusion of having written an interesting, enjoyable and original book, I want to have actually done it. I do not just want to think people read my books and are affected by the ideas in them, I want them to actually read and be affected. I don't want just the illusion of wonderful children, I want my wonderful children to actually exist.
I wouldn't touch that machine with a ten-foot pole.
The question is relevant to things more realistic than Nozick's hypothetical. Consider recreational drugs. A lot of us have a gut level feeling that the pleasure from being high on a drug, however intense, is somehow less valid, less real, than the pleasure from accomplishing something, even if it is only winning a game of tennis or climbing a mountain. Feeling good about yourself because you are drunk is somehow less valid than feeling good about yourself because you have just saved someone's life at risk of your own or solved an important problem.
Virtual reality gets us closer to Nozick's experience machine. Why do I feel better about making Germanic lyres in my basement than Whitesoul Helms in World of Warcraft? Why do I feel less comfortable about spending many hours online fighting computer-generated monsters than about spending a similar amount of time, also online, arguing with people about subjects of interest to me?
The problem is older than World of Warcraft. I know some very smart people who put substantial amounts of time and effort into playing games: chess, bridge, poker. As long as one views it as recreation, there is no problem. But what about someone who treats the game as his real life and whatever he does to earn money for food and rent as an annoying distraction? Something about that feels wrong to me, feels as though he would be leading a better life if he put the same talent and passion into building better houses or writing better computer programs. But I can’t prove it; I might be wrong about values or he might be someone for whom the alternative I thought better was, for one reason or another, not an option.
Should Irrational Preferences Count?
The field of behavioral economics deals with predictable patterns of behavior that appear inconsistent with rationality. My one contribution to the field is a chapter, "Economics and Evolutionary Psychology," in the book Evolutionary Psychology and Economic Theory; a draft is available on my web page. In it I try to show that some patterns of behavior which are puzzling in terms of the assumptions of economics make sense in terms of evolutionary psychology: they can be explained as behavior that got hardwired into us because it increased an individual's reproductive success in the hunter-gatherer societies where our species spent most of its history.
Consider as one example the endowment effect, the observation that individuals value items that belong to them more than items that do not, even if, as in the classic Cornell coffee cup experiment, who owns what is the result of random chance. I explain this as a commitment strategy that acts to enforce property rights in a world without police and courts, the human elaboration of the territorial behavior observed in many animal species. If I am willing to fight harder to defend what is mine than to take what is yours, if you are similarly willing, and if both of us know it, then most of the time ownership will not have to be fought for.4
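To see how the commitment works, here is a minimal sketch with made-up payoff numbers (the specific values, and the simplifying assumption that a committed owner keeps the item if challenged, are mine, added purely for illustration). Because the would-be taker knows the owner will bear a fighting cost larger than the item is worth, trying to take it does not pay, so in practice the owner rarely has to fight at all.

```python
# Minimal sketch of the commitment argument behind the endowment effect.
# All payoff numbers are illustrative assumptions, not from the text.

OBJECT_VALUE = 10      # value of the disputed item to either party
FIGHT_COST_TAKER = 15  # expected cost to an intruder of fighting a committed owner
FIGHT_COST_OWNER = 12  # cost the owner is committed to bearing to keep what is his

def intruder_payoff(tries_to_take: bool) -> float:
    """Intruder's payoff when the owner is known to be committed to fighting."""
    if not tries_to_take:
        return 0.0                          # walk away: keep nothing, pay nothing
    return OBJECT_VALUE - FIGHT_COST_TAKER  # fight a committed defender: net loss

def owner_payoff(intruder_tries: bool) -> float:
    """Owner's payoff given his commitment to defend (assumed to succeed)."""
    if not intruder_tries:
        return OBJECT_VALUE                 # keeps the item without a fight
    return OBJECT_VALUE - FIGHT_COST_OWNER  # fights and keeps it, at a cost

if __name__ == "__main__":
    # Taking does not pay once the owner's commitment is known...
    print(intruder_payoff(True), intruder_payoff(False))  # -5.0 vs 0.0
    # ...so the intruder stays away and the owner keeps the item for free.
    print(owner_payoff(False))                            # 10
```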
Economists define efficiency in terms of people getting what they want, not what they should want. Suppose you accept my explanation for the endowment effect, that it exists not because it serves the present interest of the individual but because it served the reproductive interest of other individuals long ago in a very different environment. Should you still take that preference as a given in evaluating economic institutions?
Why Risk Aversion Isn't
Many fields use technical terms that sound self-explanatory and aren't; many people believe they know what those terms mean and don't. Millions of people believe that they understand the Theory of Relativity, even if not the mathematical details. The theory says that everything is relative. Surely that is clear enough.
Clear but wrong; that is not what the theory says. One of the things the Theory of Relativity tells us is that the speed of light is, quite impossibly, absolute: the same relative to you however fast you are moving. Economics has similar problems with terms such as efficiency and competition.
And risk aversion. It sounds as though it means aversion to risk; one might expect a risk averse person to avoid dangerous hobbies and a risk preferring person to be drawn to them. Neither follows. There is nothing in the definition of risk aversion that implies that a risk averse person is less likely to take up hang gliding or mountain climbing than a risk preferrer.
The definition of risk aversion, as any good textbook that covers the subject will explain, is that a risk averse person, faced with the choice between an uncertain set of monetary payments and a certain payment with the same expected value, will prefer the latter. That is a statement not about his taste for risk but about his taste for money.
To see why we would expect people to be risk averse, imagine that you are faced with two possible jobs. One pays you $60,000/year. The other has equal odds of paying you $20,000/year or $100,000/year. The expected value of pay is the same for both, but we expect most people to prefer the former job, all else being equal. Why?
If you accept the risky alternative you are giving up dollars in the future where you lose the bet in order to get dollars in the future where you win the bet. You are giving up (probabilistic) dollars used to buy things you would get as your income increased from $20,000 to $60,000 in order to get (probabilistic) dollars to buy things you would get as it increased from $60,000 to $100,000. As your income increases you buy the more important things first, so we would expect the gain from getting a dollar at the high end to be less than the loss from losing one at the low end.
As this (entirely conventional) exposition shows, risk aversion is simply declining marginal utility of income. The fact that your marginal utility of income decreases as your income increases tells us nothing at all about how the marginal utility of other things changes as the amount you have of them changes, hence the fact that you are risk averse does not tell us what your attitude will be to risks that involve non-monetary payoffs. For example ...
Your doctor calls you into his office to give you some very bad news. You have been diagnosed with a disease that, if untreated, will kill you in fifteen years. There is an operation which, if it succeeds, will let you live thirty years — but half the time it kills the patient. You have a choice of a certainty of fifteen years or a fifty/fifty gamble between thirty and zero.
As it happens, the one thing in life you most want to do is to produce and bring up children. Thirty years is long enough to do that, fifteen is not. You grit your teeth and sign up for the operation. You may be, probably are, risk averse in dollars, which have decreasing marginal utility to you. But you are risk preferring in years of life, because years of life have increasing marginal utility to you; thirty years are worth more than twice as much as fifteen.
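To make the contrast concrete, here is a minimal numeric sketch. The particular utility functions (square root of dollars, square of years of life) are my own illustrative assumptions, chosen only because one is concave and the other convex; the argument does not depend on these specific forms.

```python
import math

# Illustrative utility functions (assumptions, not from the text):
# concave in dollars -> declining marginal utility -> risk averse in money;
# convex in years    -> increasing marginal utility -> risk preferring in years.

def u_money(dollars: float) -> float:
    return math.sqrt(dollars)

def u_years(years: float) -> float:
    return years ** 2

# The two jobs: a sure $60,000 vs a 50/50 gamble between $20,000 and $100,000.
sure_job  = u_money(60_000)
risky_job = 0.5 * u_money(20_000) + 0.5 * u_money(100_000)
print(sure_job > risky_job)    # True: the certain income is preferred

# The operation: a sure 15 years vs a 50/50 gamble between 30 years and 0.
no_surgery = u_years(15)
surgery    = 0.5 * u_years(30) + 0.5 * u_years(0)
print(surgery > no_surgery)    # True: the gamble on years of life is preferred
```

The same person, with the same attitude toward risk as such, takes the safe option in the first choice and the gamble in the second; all that differs is how marginal utility changes with the quantity at stake.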
Risk preference is not about risk.
1 Utility and Utilitarianism were invented by Jeremy Bentham. Henry Sidgwick seems to be the first person to have raised the question of whether it was the total or average that should be maximized, relevant if one is comparing alternatives with different numbers of people in them. I offer a partial solution to that problem in "What Does Optimum Population Mean?" Research in Population Economics, Vol. III (1981), Eds. Simon and Lindert.
2 I discuss the issue in Chapter 15 of Price Theory.
3 Nozick describes the experience machine in Anarchy, State and Utopia. The version of the idea presented here is based on his but modified for my purposes.
4 I discuss this approach to enforcing rights in much greater detail in chapters 51 and 52 of the third edition of The Machinery of Freedom.
Quick clarification: 'Utility' for philosophers means *well-being*, about which there are many different theories, hedonism being just one:
https://www.utilitarianism.net/theories-of-wellbeing
I've always felt this type of philosophical discussion about utilitarianism is like intellectual masturbation.
For example, take Nozick's experience machine. It's so unrealistic that it has no real-world meaning; it's completely imaginary, impossible to use for anything that has utility in the real world. That's why most people don't answer it rationally. Instead, they search for an answer that feels good and then come up with reasons why it's a good answer.
I recommend Henry Hazlitt's book The Foundations of Morality (available online). It's the best book on rule utilitarianism that I've read. Of course, it could also be because it's the only one I've read... But anyway, it's pretty brilliant stuff. I don't know why nobody is familiar with it, even though Hazlitt is very well-known in the libertarian scene.
Utilitarian philosophy is much more utilitarian if you apply it to the rules of society instead of evaluating individual cases.
When you're looking at individual cases, everything quickly becomes a big mess. You can analyze each case forever and find different ways to look at it. There are no right or wrong answers. Does it help you live a good life? No.
Rule utilitarianism is much better. You can look at and analyze the rules of society. What are their consequences in real life? What happens to most people when they abide by or break these rules? Sometimes a rule produces bad results as well as good ones, so you can weigh the two and see the net effect.