Reading comments on my recent post on the implications of different versions of utilitarianism, I was struck by the number of commenters who identified utilitarianism with central planning, who assumed that anyone who took utilitarianism seriously must be in favor of the government ordering people around, redistributing income, controlling society.
There is an unfortunate tendency among some libertarians to attack any idea that could form a building block for a statist argument. (I've probably done it myself.) I've been criticized for using empirical analysis in economics, because that might enable central planning. But their criticisms are usually poorly founded, and my work has withstood a market test. (Greedy businesses pay me for my service.)
But this motivation for attack is common to many ideologies. Many leftists would attack spontaneous order because it's a building block to free market thought.
I think Scott Alexander has a post somewhere on "arguments as soldiers," but a quick search didn't find it. Many, perhaps most, people are more interested in whether an argument has a conclusion they like than whether it is correct. As I pointed out in an earlier comment thread, I devote a chapter in Machinery to Rand doing it, my exchanges with the Bleeding Heart Libertarians to their doing it, and several things I have written critiquing Rothbard to his doing it.
I think the best compliment I have ever received was from a commenter a while back who said that the best arguments he had found against my positions were ones I had written.
Might it be https://www.astralcodexten.com/p/book-review-the-scout-mindset?
Very likely. Thanks.
The first "arguments are soldiers" I remember is Yudkowsky in 2007, which I would assume Galef was building on: https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer
I don't think you need to take utilitarianism as a premise in order to get libertarianism as a conclusion, which is a good thing, since, in my view, people's views are quite distant from utilitarianism (which, and this is my main complaint, is also quite alien to the spirit of libertarianism, since it asserts that what we should do is determined by some total good of which one's own good is a minute part).
At the very closest, people "substantially value" the idea that /they/ should act to maximize total (or average) utility /of their nearest and dearest/, and, if they are prepared to universalize this, that individuals should do the same for /their/ nearest and dearest. This is more like egoism than utilitarianism (and, we may note, more in the spirit of libertarianism—I wish to be left alone to live my life as I think best, and I am prepared to extend the same right to others).
But it doesn't matter for your overall conclusion. For suppose, as you argue, that libertarian political institutions maximize total utility. Well, since all people basically seek only their own good (and that of their nearest and dearest), and libertarian institutions do the greatest good for the greatest number, then the greatest number of people have good reason to support libertarian policies. The argument is pretty much the same, but the rhetoric is different.
Re: “I could not do it in the large, could not know enough about eight billion strangers to measure their utility.
That version of the argument about the impossibility of utilitarianism depends on assuming that nothing is worth doing if it is not done precisely.”
I do not think this is a true statement. Regardless of whether something is done precisely or not, at some point the claim that one is doing something depends on whether it is actually happening. For example, if you claim to be going to the grocery store but in fact have been driving circles around the Home Depot parking lot for two hours, I think one can reasonably say that you are not going to the grocery store imprecisely; you are simply not going.
To claim one tries to take into account the utility of 8 billion, or just 8, people in decisions suggests that one should roughly attempt to find out what that utility might be. Often it probably rounds to “Zero… who the hell even are you?” on almost all decisions. Not all of them, however; and I think most people don't even try to figure it out to a reasonable degree when a decision might affect a lot of people they don't know. Those who make handwavy gestures toward such considerations don't seem to try hard enough to credibly claim they are doing it.
I think all you're saying is that if you are so far from perfect that you're just as likely to shoot yourself in the foot as to hit the bullseye, then maybe don't do it. I think we can all agree on that, but if you're a utilitarian, it doesn't matter how accurate you currently are: attempting to be more accurate on more questions is the only possible way to get to a better truth and a more optimal world. To a utilitarian, you're basically just saying "give up on figuring out what is right and wrong".
But of course, you still believe in deciding what is right or wrong; you just have a different method you think is better. The fact of the matter is that any determination of what choice is better or worse (i.e. "right" or "wrong") must be judged based on the results of making that choice, which is exactly what consequentialism and utilitarianism do. There can be no other basis for justification of a morality, as I argue here: https://governology.substack.com/p/a-defense-of-utilitarianism .
I feel we are talking about two different things and mixing them up a bit. There is on the one hand the judgement of what would be the better (more moral) action to take (1), and on the other there is actually deciding to take an action (2). Then there is a third thing that you bring up, correctly I think, which is attempting to get more accurate judgements of what would be better, based in part on examining the results of trial-and-error actions (3).
My argument is that outside of our relatively immediate environment (people we know well or are otherwise close to us physically or metaphorically) we do not have any real ability to do (1), and can only do (3) if we really try hard. For most actions we assume the effects on everyone outside of the immediate/proximate people are zero, because otherwise we could never act; even trying to figure out the net effect of buying one product over another would take all week. So when we get to doing (2) we are doing a really fast and loose utilitarian calculus at best, and only paying much attention to those around us.
That is perfectly correct and fine. It is what we can do, being mere humans and all. I would add that there is some awkward grey area around how we discount possible outcomes' effects on others and ourselves based on our certitude about their magnitude, but whatever, it is a pretty good way to do things. We get better at it the more we try.
Where I think people go wrong is in deciding that "we" should do (2) because the math has been done and the far-reaching action has positive net utility. The questions that immediately raises are "How do you know?" and "What's your margin of error on your estimates?" and "Did you seriously try to learn about all the people affected to see what the results might be?" Those are not questions that can even be answered for most actions affecting a broad range of strangers, except in extreme cases like "I think we should nuke a city". (And that is even if you agree on subjective values and utility; things get more awkward when people disagree about utility changes.) The notion that one is doing math gives a veneer of scientism to the enterprise that tricks one into thinking one is being more rigorous, when in fact one has no idea whether the action is net utility improving or not.
Likewise, even measuring the outcome and learning from it (3) is extremely difficult. What is the relevant time span? How do you control for confounders? What are you measuring by? The early Effective Altruists did good work on this, but it is worth noting that there was lots of good work left to do despite centuries of charity aimed at trying to improve the overall utility of people. The fact that most attempts to do good outside of one's immediate environs seem to backfire and make things worse implies that either A) charities are all run by evil people posing as good people, or B) we are really bad at identifying what improves utility for strangers.
Although A is tempting, I am inclined towards B, especially seeing as how social scientists can't agree whether giving people aid in-kind or as cash transfers is better.
So yes, I believe we have to make judgements on what is right and wrong, but I think that putting a false pretense of accuracy and objectivity on that judgement leads us far astray, particularly when we get into the realm where we cannot reasonably make accurate judgements on the outcomes. By tricking ourselves into believing we can make such accurate and objective measures of outcomes we give ourselves license to take actions when it is irresponsible to do so, when not getting involved is the better course.
(As a side note, I disagree that the morality of a choice is determined by the results of the choice alone. That's another matter, however, and I am willing to go with the assumption here for now.)
I definitely agree that many people have a false sense of certainty about many things. But of course, our certainty about things should affect what actions we take. We don't jump out the second-story window just in case downstairs is on fire. What this means is that a good collective decision-making process is one that can properly take uncertainty into account. This is why I advocate for supermajority decision making over bare-majority decision making, for example (a toy sketch follows). There should be a high bar for decisions that are inherently more difficult to get right (either because the effects are farther from us or because there are inherent biases in which effects would actually be good).
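A minimal sketch of that threshold idea, with the function and all numbers being my own illustrative assumptions rather than anything proposed above: the only difference between the two regimes is how much agreement is demanded before acting, which is one crude way of pricing in uncertainty.

```python
# Hedged sketch, assuming a simple yes/no vote. The 2/3 threshold is an
# arbitrary illustrative choice, not a figure from the discussion above.
def passes(votes_for: int, votes_total: int, threshold: float) -> bool:
    """Act only if the share of support clears the required threshold."""
    return votes_for / votes_total >= threshold

# The same 55%-in-favor vote under the two rules:
print(passes(55, 100, 0.5))    # True:  a bare majority acts
print(passes(55, 100, 2 / 3))  # False: a supermajority rule waits
```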
This is also why local governance is preferable to larger jurisdictions: people can more easily move to places where the systems are working better. Some things need to be an experiment with a potentially unknown result. But it can be catastrophic if that experiment is performed on everyone rather than just a small group, because not only do you have negative effects on far more people, you don't even have a control to compare against, so you can't reasonably know whether the outcome was better or worse than without the experimental change.
> the morality of a choice is determined by the results of the choice alone
More accurately, I'm saying that the morality of a choice is determined by the *expected* results of the choice, not necessarily the actual result of a specific choice. E.g., it's practically always a mistake to play roulette if you're trying to make money (because it has a negative expected return), and even if you win, that doesn't make the choice any wiser in retrospect.
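To see that negative expected return concretely, here is a minimal sketch assuming American roulette (38 pockets, a straight-up bet paying 35 to 1; those rules are my assumption, not something stated above):

```python
# Expected return of a $1 straight-up bet in American roulette.
p_win = 1 / 38                # one winning pocket out of 38
payout = 35                   # profit on a winning $1 bet
ev = p_win * payout - (1 - p_win) * 1

print(f"{ev:.4f}")            # about -0.0526: you lose ~5.3 cents
                              # per dollar bet, on average
```

The expected value is negative regardless of any single night's outcome, which is the sense in which winning doesn't vindicate the choice.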
What else could it be based on?
I don't think you are really coming to grips with how that uncertainty affects utilitarian decision making. You bring up some points about how governance can work around that, which I agree with, but not how that ties back to utilitarianism's central claims and shortcomings. At the risk of putting too fine a point on it, at the large scale the honest utilitarian does exactly the same thing as the non-utilitarian and basically says "Look, I can't demonstrate this in an objective or rigorous way, but I think this is the better thing to do." People were doing that long before Bentham.
Regarding morality of choices, one would do well to study Adam Smith's Theory of Moral Sentiments. He outlines four sources of moral approval that people care about when it comes to actions: intention, the effect on other people (with caveats), how well the act comports with the rules and norms of behavior, and whether that behavior, if generally done that way, would be good or bad for society. You touch on intention mattering as well as results (e.g. just because someone got lucky or unlucky doesn't mean we don't judge what they did), but the other two count quite a bit too. For example, if I think my neighbor is a serial killer and I go over and kill him, my intent to prevent murders won't count for much, and even if the result is that I prevented more (because it is discovered that, hey, he did have a bunch of bodies under the porch), that probably won't save me from jail time, along with the moral disapproval of others. It might, but more likely I am going to be condemned for not calling the cops to investigate instead of just offing the guy myself. Good intent, good results, but it doesn't comport with the rules, and generally we don't feel good about people deciding to kill neighbors on suspicion.
> at the large scale the honest utilitarian does exactly the same thing as the non-utilitarian and basically says "Look, I can't demonstrate this in an objective or rigorous way, but I think this is the better thing to do."
I think "objective or rigorous" is doing too much work in your assertion. That doesn't sound like a very utilitarian thing to do, but you didn't really give reasons for your assertion.
> You touch on intention mattering as well as results (e.g. just because someone got lucky/unlucky doesn't mean we don't judge what they did)
My point was not that intention matters. My point was that expected results matter. I would say the intention is irrelevant except insofar as it may indicate the expected result (e.g., intent to murder is much more likely to result in an actual murder than no such intent). If you punch someone in the face in malice, with the intention to hurt, your action was wrong even if you accidentally prevented the person from being shot (by moving his head out of the way with your punch). It was wrong not because of the intention, but because of the expected result: harming another person's face. The luck of the matter is irrelevant to its moral quality.
> four sources of moral approval that people care about when it comes to actions: intention, the effect on other people (with caveats), how well the act comports with the rules and norms of behavior, and whether or not that behavior, if generally done that way, would be good or bad for society.
Yes, these are ways people think about morality; however, I maintain that only one is the ultimate source of morality: the result. You can justify a good intention by the logic that a good intention is likely to lead to a good result. You can justify rules and norms by the logic that those rules are likely to lead to a good result. I don't see the difference between "the effect on other people" and "good or bad for society", but both of those seem to be bare utilitarianism. You cannot justify something as good or bad without thinking about the expected result. But you can justify something without thinking about norms or intentions. That is why consequentialism is the logical foundation of morality.
> probably won't save me from jail time
There is a difference between what is right and what is legal. But what is legal often approximates what is right. I would argue that killing a murderer deprives them of their right to due process and potentially also exacts a punishment not prescribed by law. Both of those are reasons to make that kind of vigilantism illegal. But beyond that, the moral reason to make extrajudicial killings illegal, regardless of whether the law would have found capital punishment justified, is that you should expect that vigilantes deciding secretly whom to kill would not be likely to produce justice. Here again, what is important is the expected result, not the actual result.
I find it strange that you are posting under two different user names. I am also noting that you are engaging with the shape but not the content of the conversation at hand, which is to say I am pretty sure you are an AI. So, thanks, but I am done.
Great little piece. Students of mine have also, for some reason, assumed that support for utilitarianism entails support for central planning, which is... odd, to say the least. I'm not a libertarian myself, just a boring moderate liberal. But I think market freedom is crucially important, and to the extent that I do, it's for the kind of utilitarian reasons you allude to.
When "we" collectively want to make a change, that seems to implicate the need for some kind of authority capable of implementing that change. If "we" should start using price signals, what does that look like? Either a hodge podge of individual decisions, some of which will clearly not be doing that, or some kind of centralized approach to nudge/push/force people in that direction.
I think you can be libertarian and try to convince people to accept your position without any coercion. "We" cannot really do that, though. Collective action in the "we" sense is not really possible in a decentralized system. I'm comfortable pursuing libertarian goals that way, but many people seem to think of that as a contradiction in terms. Or simply ineffective.
Warranting a guess:
If there is some shared conception of value - any shared conception at all - it seems to warrant collective enforcement of that shared conception. I think this is the intuition people are relying on when they reject utilitarianism - and my guess is they’d reject _any_ conception of the good (except for libertarianism) for the same reason.
What if the conception of value that is shared is that collective enforcement is wrong? And in general, shared conceptions of value do not imply collective enforcement.
Those are good questions. As a practicing Roman Catholic I’d say that whenever you try to use violence to enforce values beyond the basic legal structures necessary for civilization, you’re going against God’s will and thus in the process of destroying yourself.
What I’m pointing at is what I think the shared intuition is - although I think we are likely in agreement that this intuition is wrong.
Yes to all that.
> If there is some shared conception of value - any shared conception at all - it seems to warrant collective enforcement of that shared conception
Why? I see no reason for that conclusion. Any shared conception of value would need no enforcement because people would be in alignment. Enforcement is only necessary when values are not aligned. Regardless, you may value a banana exactly as much as I do, and we can still disagree who gets the banana. Enforcement should be done if and only if the result is a better world. If enforcement results in a worse world, of course we shouldn't do it, regardless of any shared values.
Only if the process of that collective enforcement of the shared conception didn't introduce its own costs and distortions, overwhelming the benefits of the commonly-shared good being in a state of "enforcement".
Oh, I agree. Hence the word “seems.” I think the claim about markets being better about this is right.
David, sorry, but with all respect, you are exasperatingly stubborn. Let's go at this a different way.
1. What is the meaning of "utility," or "total utility," or "average utility" (or other lawyerly form) other than "what's best"?
>>Kind of woolly, eh? Ya gonna build a society on that?
2. Can "what's best" be quantified, or measured in any way that's useful for ordinal ranking of "utilities"?
>>No, it is UTTERLY SUBJECTIVE.
3. Does any utilitarian, or any kinda-sorta-maybe utilitarian hereabouts, assert that this "what's best" is permanent?
>>No, it is as evanescent as a maiden's daydream on a summer day.
4. While nobody has suggested a utilitarian standard requiring it be "done perfectly" for "eight billion strangers" (can you say "straw man"?), how is a subjective, evanescent, botched standard going to provide any meaningful measure?
>>Impossible.
5. If "utilitarian" applies not just to Bentham, but to Marshall, von Mises – actually anybody on the face of the earth who opines as to "what's best" – how useful is that term in identifying anyone's position?
>>It isn't.
6. You say:
"[I] use signals I get through the price system to provide information about indirect effects of my actions"
and
"The concept of economic efficiency, the nearest thing modern economics offers to a way of evaluating economic outcomes"
>>Here you want to roll market pricing under the rubric of "utilitarianism" – this is really beyond too slippery. You are making EVERYBODY AND HIS DOG a "utilitarian," which by that inclusiveness renders the term useless.
** UTILITARIANISM IS !_NOT_! MARKET PRICING ** Jeremy Bentham's "hedonistic calculus" is specifically NON-MARKET, resting on measuring pleasure/satisfaction/what's best by totally goofy categories like intensity, duration, certainty, propinquity, etc., etc. etc.
** THE WORD 'UTILITARIANISM' HAS SIGNIFICANCE ONLY IN SOME NON-MARKET SENSE LIKE "HEDONISTIC CALCULUS" **
7. Is Walter Block mistaken in saying that you stake libertarianism on a utilitarian defense? Is that why you are determined to defend it, no matter how far out of recognition or meaning it must be stretched?
>>Either Block was a fabricator or that seems to be the case.
> Is Walter Block mistaken in saying that you stake libertarianism on a utilitarian defense? …
He is mistaken.
I offer consequentialist arguments for libertarianism, not utilitarian ones. The objective is to persuade people that they would prefer the consequences of a libertarian society to those of the alternatives, in terms of their values. I do not assume that they are utilitarians — most people are not — only that there is enough correlation of values among most people and the consequences of a libertarian society are enough better than the consequences of the alternatives that most people would prefer the former.
> Either Block was a fabricator or that seems to be the case.
Walter is a nice man and, so far as I know, honest, but he is sometimes mistaken.
I think I have provided adequate answers to the rest of your points in our exchange in an earlier comment thread.
Ummmm ... it takes two to be exasperatingly stubborn, and I would say the last to post in a stubborn thread is more exasperating and more stubborn.
Ha ha... an excellent reason for me to be quiet! – Although the brass ring of stubbornness has an allure!
The great thing about David's substack is his wonderful loftiness of mind – never tendentious or partisan but truly the mind of a philosopher. He truly believes in the DEMOCRACY OF REASON and properly reigns here.
As someone who has disagreed with David as well - my goodness he’s patient. I aspire to such patience.
> What is the meaning of "utility," or "total utility," or "average utility" (or other lawyerly form) other than "what's best"?
Utility of an individual is not "what's best" but rather "how good is their experience in each moment" for that individual. It's a measure, hypothetical though it may be. Total utility is simply the sum of the utilities of many individuals.
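To make the additive picture explicit (this formalization is mine, not wording from the thread), the "total" and "average" forms mentioned elsewhere in this discussion differ only by a division:

```python
# Toy illustration; the utility numbers are invented for the example.
utilities = [2.0, 5.5, 3.0]                        # hypothetical per-person utilities
total_utility = sum(utilities)                     # "total utility" is just the sum
average_utility = total_utility / len(utilities)   # "average utility"
print(total_utility, average_utility)              # 10.5 3.5
```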
> Can "what's best" be quantified, or measured in any way that's useful for ordinal ranking of "utilities"?
Yes. We can and should assume utility between humans is quite similar, because of our similar nature and nurture.
You seem to merely be making claims, but not providing logical support.
Regardless, if you can't even estimate a quantification of what's best, then how can you have any morality whatsoever? How can you determine what is justice? Shall we simply let everyone do whatever they want, including murder and pillage, since we can't quantify which is best, a world with 1000 murders this month or a world with 10?
Which do you believe is more nearly libertarian--act or rule utilitarianism?
Probably rule.
I agree.
I would argue that case utilitarianism is not different from rule utilitarianism. It's, I think, a misleading way to think about it. Utilitarianism is about the goal: improving/maximizing utility. And that goal doesn't change whether you consider cases or rules.
Case vs rule is not about the goal, but about the method. Focusing on cases implies that we can figure out for each one what is best to do. However, in reality, considering each case on its own without any rules is incredibly expensive and time-consuming, and therefore impractical. You could do it, but it wouldn't be worth doing in most cases. Rules let you use heuristics to reduce the cost of deciding what is likely to be better; the trade-off is probably slightly less accurate decisions than if you spent an infinite amount of time considering each case independently. A toy sketch of that cost/accuracy trade-off follows.
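Here is a minimal sketch of that trade-off; every number and function here is an invented assumption for illustration, since nothing above specifies costs or accuracies. Both methods chase the same goal, a high net payoff, but differ in how much deliberation each decision costs.

```python
import random

# Invented illustrative assumptions: per-decision deliberation cost and
# accuracy for each method. These are not real figures.
CASE_COST, CASE_ACCURACY = 100.0, 0.95   # reason through each case from scratch
RULE_COST, RULE_ACCURACY = 1.0, 0.90     # apply a cheap precomputed heuristic

def net_payoff(n_decisions, cost, accuracy, benefit_per_correct=10.0):
    """Value of the decisions gotten right, minus total deliberation cost."""
    correct = sum(random.random() < accuracy for _ in range(n_decisions))
    return correct * benefit_per_correct - n_decisions * cost

random.seed(0)
print("per-case:", net_payoff(1000, CASE_COST, CASE_ACCURACY))  # deeply negative
print("by rule: ", net_payoff(1000, RULE_COST, RULE_ACCURACY))  # comes out ahead
```

Under these made-up numbers the slightly less accurate rule wins on net, which is the sense in which case vs rule is a question of method, not of goal.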