Some years ago I got involved in an online argument on the subject of rational ignorance. My claim was that rational voters, knowing that their vote had a negligible probability of altering the outcome of an election, had no incentive to pay the substantial cost of learning enough about political alternatives to have a well informed opinion as to which candidate was better.
The friend I was arguing with raised the obvious counterargument—if I was right, why do people bother to vote at all? I made my usual response. People enjoy the pleasure of partisanship, as demonstrated at football games. Every four years a game is played out across the nation with the future of the world at stake. For the cost of an hour or so of your time, you can not only cheer for your team, you can even, in at least a token sense, play on it. Who could resist?
Enjoying the pleasure of partisanship does not require the partisan to have a well informed opinion of which side he should be a partisan for. Acquiring such an opinion might even make cheering less fun, since in some cases, on some issues, you would have to face the fact that you were cheering for the bad guys. Better, more fun if less realistic, to believe that your people are all good, their opponents bad.
His response, at least as I interpreted it, was that he rejected the rational ignorance argument because he saw it as a tactic of the bad guys, a way of undercutting support for democracy; our exchange had grown out of a disagreement on the relative merits of government vs. private production of things such as health care. The problem with that response is that, until you know whether the rational ignorance argument is correct, you do not know who the bad guys are. If democracy really works as badly as the argument predicts, that is a reason to switch sides on some political issues, and so to change which side's arguments you approve of.
I concluded (and said) that he was offering evidence for my view of democracy, picking what to believe not on the basis of arguments or evidence but on the basis of partisanship. My argument undercut the position of his team. His conclusion was not that he should reconsider which side he supported but that he should reject the argument.
Which started me thinking about to what degree my own views are based on reason, to what degree something else.
I think I can fairly claim to be more familiar with the arguments for and against my political positions than most people are with the arguments for and against theirs; to that extent my position is based on reason. I think I have some evidence from my past behavior that when I am faced with a strong argument against my views to which I can find no plausible rebuttal, I eventually change the views.
On the other hand, there is a significant range of political positions that are defensible, positions I disagree with but cannot claim to have adequate arguments to refute. At the very least it covers the range from my father's limited government views to my anarchism, and arguably quite a lot more than that. Why, within that range, do I believe what I do?
I think at least part of the answer is wishful thinking.
I would like to believe in a world where people are primarily rational and benevolent, a world where political conflict ultimately comes down to trying to figure out what is true, not to which side can force the other to give in. Looking at the same question on a smaller scale, I cannot ever remember a conversation with any of my children that came down, on either side, to "I don't care what the arguments are, I want ... ." If I participated in such a conversation I would find it upsetting; it would be, on a small scale, the same problem that makes me see the biggest risk of having children as the risk of having children who don't like you, something that I am very glad never happened to me.
The same attitude shows up in my fiction. One common criticism of my Salamander is that the characters are implausibly rational and reasonable. My response is that some people are like that. Those are the people I prefer to interact with and, by extension, to imagine and write about.
Which gets me back to my political beliefs. I prefer to believe that people are fundamentally rational and benevolent, that they would, on the whole, prefer that good things rather than bad things happen to other people. I think it is clear that some people are like that and reasonably clear that practically everyone is to some degree like that. But it is not a full description of human beings and I have no good basis to estimate how good a description it is, how many people to what degree fit my preferred pattern. My political beliefs come in part from modeling the world on the assumption that rationality and benevolence are the norm, the signal, and everything else something more like random noise.
Which is to say that they come in part from wishful thinking.
How Do I Kahan Myself?
In an earlier post I discussed Dan Kahan's explanation for why we believe things. What I believe about evolution or global warming has very little effect on the world, which is a large place, but can have a large effect on me. If I live in a small town where almost everyone is a fundamentalist Christian, announcing that I believe in evolution could have serious negative consequences. If I am a professor at an elite university, announcing that I don't believe in evolution might have even more serious consequences.
It is easier to pretend to believe in something if you really do believe in it, so it is in my interest to persuade myself of whatever views the people who matter to me approve of. One result, according to Dan's research, is that the more intellectually able someone is, the more likely he is to agree with his group's position, whether that means believing in evolution or not believing in it. Smarter people are better at talking themselves into things.
It's easy enough to believe that other people behave that way, but what about yourself? How can you figure out whether your beliefs are the result of a rational examination of arguments and evidence? People on the other side of whatever the issue is will be happy to tell you that they are not and offer arguments to prove it.
One approach would be to find a body of neutrals whose opinions you have reason to trust. If all the people who are well informed on the subject and have no reason to identify with either side agree, that is pretty good evidence for the position they agree on. If it is not your position, perhaps you hold it for the wrong reason. With big issues, such as climate change, it may be hard to find many people who qualify, but it might work for a controversy where the partisan groups represent a small part of the population. A sports fan can check whether his belief in his team's superiority is rational calculation or partisanship by comparing his estimate of the chance they will win their next game with the betting odds.
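One way to make that last check concrete is to turn the posted odds into an implied probability and set it next to your own estimate. The sketch below is mine, not the post's; the decimal odds and the fan's number are invented, and the normalization step is just the standard way of stripping out the bookmaker's margin.

```python
# A sketch of my own, not from the post: compare a fan's gut probability that
# his team wins with the probability implied by the betting odds.
# The decimal odds and the fan's estimate below are invented numbers.

def implied_probabilities(decimal_odds):
    """Convert decimal odds on all outcomes into implied probabilities,
    normalizing away the bookmaker's margin."""
    raw = [1.0 / o for o in decimal_odds]   # raw implied probabilities
    total = sum(raw)                        # sums to > 1 because of the margin
    return [p / total for p in raw]

# Hypothetical two-outcome game: our team quoted at 2.60, the opponent at 1.55.
market_p_win, _ = implied_probabilities([2.60, 1.55])
fan_estimate = 0.65                         # the fan's "we'll probably win"

print(f"Market-implied win probability: {market_p_win:.2f}")
print(f"Fan's estimate:                 {fan_estimate:.2f}")
print(f"Gap (possible partisanship):    {fan_estimate - market_p_win:+.2f}")
```

If your gut estimate sits well above the market's number year after year, that gap is a rough measure of how much of your belief is cheering rather than calculation.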
If you cannot find neutrals, perhaps you can find someone with known biases and conclusions that don’t fit them. I have been involved in discussions of nanotechnology for a long time and know some of the people at the Foresight Institute, the group that pushed the idea for many years before it became suddenly fashionable. One thing I know about them is that their general political biases are libertarian. Hence when I observe them expressing serious concerns about the dangers of unregulated nanotech, I am inclined to take it seriously. They may be wrong, but they aren't believing it because they want to believe it. Generalizing the argument, it is worth weighting information in part according to the incentives of the source, being more skeptical of people who tell you things they want to believe than of those who tell you things they do not want to believe.
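That last sentence can be read as a point about likelihood ratios: a statement that runs against the speaker's incentives is stronger evidence than one that runs with them. Here is a minimal Bayes'-rule sketch of my own; every probability in it is an assumption chosen only to show the direction of the effect.

```python
# A hedged illustration, not from the post, of weighting testimony by the
# speaker's incentives, written as a simple Bayes update. Every probability
# here is an assumption chosen only to show the direction of the effect.

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | this source asserts it), by Bayes' rule."""
    numerator = p_assert_if_true * prior
    denominator = numerator + p_assert_if_false * (1.0 - prior)
    return numerator / denominator

prior = 0.5  # start out agnostic about the claim

# A source that wants the claim to be true asserts it readily either way.
motivated = posterior(prior, p_assert_if_true=0.9, p_assert_if_false=0.6)

# A source asserting something against its own biases is unlikely to say it
# unless it is actually true.
reluctant = posterior(prior, p_assert_if_true=0.9, p_assert_if_false=0.1)

print(f"After a motivated source asserts it: {motivated:.2f}")   # ~0.60
print(f"After a reluctant source asserts it: {reluctant:.2f}")   # ~0.90
```

With the numbers above, the same assertion moves an agnostic from 0.5 to about 0.6 when it comes from a motivated source, but to about 0.9 when it comes from a reluctant one.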
Another approach is to look for propositions that do not follow from the arguments for my side but do follow from identification with it. It is possible to make bad arguments for true conclusions. Suppose someone provides me with clear and unambiguous evidence that one of our people is a fraud or a liar, or that some claimed fact that supports our position is not true. If I hold my beliefs for rational reasons, I should accept the conclusion. If I hold them for partisan reasons, on the other hand, I very likely will not.
You can also look at the consistency of your own reasoning. One of my rules of thumb is to distrust any anecdote, historical or current, that makes a good enough story to have survived on its literary merit. Perhaps the clearest example is H. L. Mencken’s bathtub hoax, an invented piece of history designed to play into people’s desire to believe in their own superiority to their ancestors. Other examples, such as medieval knights needing a crane to get on their horses because of the weight of their armor, may be conjectures converted into facts by a process of memetic evolution, a pattern I have observed at first hand. It helps if the anecdote not only fits what people expect but is argumentatively useful, such as the claim that Herbert Hoover responded to the beginning of the Great Depression by slashing government expenditure.[1]
Do I apply my rule of thumb to anecdotes I want to believe in? I can think of one case where I did but there are probably others where I didn’t.
Another approach is to use the present to judge your past. Were you confident of things you wanted to believe that turned out not to be true? To avoid retconning past beliefs, write them down: make predictions with statements of how confident you are.[2] Look back at them a year or two later and see how they fit what happened. A blog is useful for the purpose; readers are invited to search mine for evidence that I was less rational than I thought I was. If you don’t have a blog of your own, make your predictions as comments on someone else’s.
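If you want to do that bookkeeping explicitly, something like the following is enough; it is a sketch of my own, with invented predictions, using the Brier score as one common way of grading stated confidences against outcomes.

```python
# A sketch of my own, not from the post, of the record-and-check method:
# write predictions down with a stated confidence, then score them once the
# outcomes are known. The example predictions are invented.

from dataclasses import dataclass

@dataclass
class Prediction:
    statement: str
    confidence: float   # stated probability that the statement is true
    came_true: bool     # filled in a year or two later

predictions = [
    Prediction("Candidate X wins the election", 0.80, False),
    Prediction("Bill Y fails to pass this session", 0.70, True),
    Prediction("Unemployment is below 5% by December", 0.60, True),
]

# Brier score: mean squared gap between stated confidence and outcome.
# 0.0 is perfect; always saying 50% earns 0.25.
brier = sum((p.confidence - float(p.came_true)) ** 2
            for p in predictions) / len(predictions)

hit_rate = sum(p.came_true for p in predictions) / len(predictions)
avg_conf = sum(p.confidence for p in predictions) / len(predictions)

print(f"Brier score:        {brier:.3f}")
print(f"Average confidence: {avg_conf:.2f} vs. actual hit rate: {hit_rate:.2f}")
```

A partisan talking himself into things tends to show up as an average confidence noticeably higher than the hit rate.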
Am I rational? More rational than most people, yes — but I would believe that.[3] As rational as I should be, probably not.
[1] From 1929 to 1932 federal spending increased by 50% in nominal terms, doubled in real terms, and tripled as a fraction of national income.
[2] An idea I got from Scott Alexander, who routinely does so.
[3] I am told that the average driver is, by his report, better than average. But I have not checked that factoid to be sure it is true.