You come across the claim that some contentious issue has been settled, that it has been shown that capital punishment deters (or does not deter) crime, that right wingers are authoritarian and left wingers are not, that legalizing concealed carry reduces crime, that increasing the minimum wage reduces employment opportunities for unskilled workers, that the U.S. army deliberately spread diseases in order to kill off Indian tribes. Following up the claim you come across an article, perhaps even a book, which does indeed support that claim. Should you believe it?
The short answer for all of those examples, some of them claims I agree with, is that you should not. As I think I have demonstrated in past posts, claimed proofs of contentious issues are quite often wrong, biased, even fraudulent.
More examples from previous posts, here or on my old blog:
An estimate of the cost of a ton of carbon dioxide in which about half the total depended on the implicit assumption of no progress in medicine for the next three centuries.
A factbook on state and local finance that deliberately omitted the most important relevant fact that readers were unlikely to know.
A textbook, in its third edition, with multiple provably false claims.
The important question is how to tell. There are three answers:
1. Read the book or article carefully, check at least some of its claims — easier now that the Internet provides you with a vast searchable library accessible from your desktop — and evaluate its argument for yourself. Doing this is costly in time and effort and requires skills you may not have; depending on the particular issue, that might include near-professional expertise in statistics, history, physics, economics, or any of a variety of other fields. I have taught elementary statistics at various points in my career, both in an economics department and a law school, but gave up on a controversy of considerable interest to me (concealed carry) when the statistical arguments got above the level I could readily follow.
2. Find one or more competent critiques of the argument and see if you find them convincing. This is the previous answer on easy mode. You still have to think things through, but you don’t have to search out mistakes in the argument for yourself, because the critic will point you at them and, with luck, offer evidence for them.
There are three possible conclusions that exercise may support: that the argument is wrong, that it might be wrong, that it is probably right. The way you reach the last conclusion is from the incompetence or dishonesty of the critique; I am thinking of a real case.
John Boswell, a gay historian at Yale, argued that both the scriptures and early Christianity, unlike modern Christian critics of homosexual sex, treated it as no worse than other forms of non-marital intercourse. What convinced me that Boswell had a reasonable case was reading an attack on him by a prominent opponent which badly misrepresented the contents of the book I had just read. People who have good arguments do not need bad ones.
Of course, there might be other critics with better arguments.
An entertaining version of this approach is to find an online conversation with intelligent people covering a wide range of views and follow discussions of whatever issues you are interested in. With luck all of the good arguments for both sides will get made and you can decide for yourself whether one side, the other, or neither is convincing. Forty years ago I could do it in the sf groups on Usenet, which contained lots of smart people who liked to argue. Five years ago I could do it in the comment threads of Slate Star Codex. Currently Data Secrets Lox works for a few controversies but the range of views represented on it is too limited to provide a fair view of most.
The comment threads of this blog are at present too thin for the purpose, with between one and two orders of magnitude fewer comments than the SSC average used to be, but perhaps in another few years … .
3. Recognize that you don’t know whether the claim is true and have no practical way of finding out, at least no way that costs less in time and effort than it is worth. This is the least popular answer but probably the most often correct.
Official Scientific Truth
A pattern I have observed in a variety of public controversies is the attempt to establish some sort of official scientific truth proclaimed by a suitable authority: a committee of the National Academy of Sciences, the Centers for Disease Control, or the equivalent — Australia has an official “chief scientist.” It is a mistake based on a fundamental misunderstanding of how science works. Scientific truth is established not by the vote of an authoritative committee but by a decentralized process which, with luck, eventually results in almost everyone in the field agreeing. There is no single source, however authoritative, that defines it or, in controversial cases, can be trusted to report it.
The more the official truth approach is followed, the less well it will work. You start out with a body that exists to let experts interact with each other and so really does represent more or less disinterested expert opinion. It is asked to offer an authoritative judgement on some controversy such as the origin of Covid. The first time it might work, although there is the risk that the committee established to give judgement will end up dominated not by the most expert but by the most partisan. But the more times the process is repeated, the greater the incentive for people who want their views to get authoritative support to get themselves or their friends into positions of influence within the organization, to keep those they disapprove of out of such positions, and so to divert it from its original purpose into a rubber stamp for their views. The result is to subvert both the organization and the scientific enterprise, especially if support by official truth becomes an important determinant of research funding.
A case which struck me some time ago had to do with second-hand smoke. A document defending a proposal for a complete smoking ban on my campus was supported by a claim cited to the Centers for Disease Control. Following the chain of citations, it turned out that the CDC was basing the claim on something published by the California EPA which cited no source at all for it. As best I could determine, the claim originated with research that was probably fraudulent, using cherry-picked data to claim enormous and rapid effects from smoking bans. Pretty clearly, the person on my campus who was most responsible for the document had made no attempt to verify the claim himself, merely taken it on the authority of the CDC. For more details see my post on the case.
An interesting older case involved Cyril Burt, a very prominent British psychologist responsible for, among many other things, early studies of the heritability of I.Q., a highly controversial subject. After his death he was accused of academic fraud of various sorts. The British Psychological Society concluded that he was guilty, a conclusion that many people then took, and some still take, for gospel. Subsequently, two different authors published books arguing convincingly that some or all of the charges against him were bogus. Interested readers can find a detailed discussion of the case in Cyril Burt: Fraud or Framed?, which concludes that much, at least, of the case against Burt was in error.
It is natural enough that observers of such controversies want an authoritative answer from an authoritative source — quoting the CDC is much less work than actually looking at the research a claim is based on. But treating such answers as authoritative is a mistake — and a pattern of people treating them that way a dangerous one.
P.S. Thanks to a helpful reader, I now have a search page on my site that lets you search my substack, my blog, and lots of stuff on my site.
P.P.S. Three days later:
A commenter pointed me at a substack post by Michael Huemer from a little more than a year ago:
Is Critical Thinking Epistemically Responsible?
He asks the same question I do in this post but gets a different answer: he concludes that the right policy is to believe the experts because, for some of the same reasons I discuss, figuring out what is true for yourself is hard.
He underestimates, in my view, the difficulty of figuring out what experts to trust. He writes:
For most controversial issues, it is a lot easier to judge who is an expert on X than it is to judge the truth of X itself. That’s why you don’t need meta-experts to tell you who the ordinary experts are. E.g., you could look for people who have PhD’s and have written books and articles on a subject, as part of their academic research.
That works very poorly for any issue where people have strong non-scientific reasons, political, ideological, or religious, to favor one side or the other. Paul Krugman has not only a PhD but a Nobel prize. His view of the effects of minimum wage laws switched when he went from being an academic economist to being a left-wing public intellectual.
In an earlier post I showed that a Nature article on the cost of carbon depended for its result on the implicit assumption of no medical progress for the next three centuries. The article had 24 authors, most of whom, I expect, had PhDs — and all of whom were in an environment where it was in their interest to conclude that the cost of CO2 was high.
From a comment I put on his post:
The individually rational choice, in the typical case such as your "abortion, gun control, global warming," is to find experts who support the conclusion popular with the people around you and believe them. Your belief, after all, will have no significant effect on how the issue gets dealt with but a significant effect on how you interact with the people who matter to you. Dan Kahan, as you may know, has offered empirical evidence that that is how people behave, specifically that the more intellectually able you are, the more likely you are to agree with your group's views, whether that means believing in evolution or not believing in evolution.
If you ask instead how it is in the general interest for you to behave, the problem with your answer is that the more people follow the "believe the experts" rule, the greater the incentive for partisans to try to control who counts as an expert — avoid funding research or giving promotions or publishing articles by people on the wrong side of a controversy. The result is to corrupt the scientific enterprise.
I trace my skepticism about AGW, since rebranded as Climate Change, to the fact that its main proponent refused to share his data with all and sundry.
In science, especially a non-experimental science like climatology, one must share everything: the raw data and the metadata. We should accept nothing less.
Regarding John Boswell and his critics, I call it the liars' method: which critics take the most shortcuts with what passes for the truth? If they prefer to lie rather than rebut the arguments, my instinct is to believe they are on the wrong side of the argument.
Case in point, global warming. When I first heard of it, I remember how odd it was that global cooling and imminent ice ages had reversed course and now the world was doomed to bake to death, but I mostly ignored it; global cooling had never gone anywhere, why should global warming?
But the more strident they got, the more I paid attention, and I came to the conclusion at least a decade ago that they were out and out liars. Michael Mann's hockey stick, the Climategate emails, and most especially all those hysterical predictions of no snow, no Arctic ice, no more coral reefs, no more polar bears or penguins ... one hysterical lie after another. By now I am absolutely convinced that whatever role humans play is too insignificant to measure, CO2 is good plant food and better for us all, and however you measure "global temperature", even a 5°C rise would be overall beneficial, although of course adjustments would have to be made.
Just look for the lies, that's my motto when the subject is too deep for the amount of time it would take to really study it.