Professionally Correct Speech
When the question of alcohol and health came up on “Doctor Radio,” a satellite radio program, all of the participants agreed that evidence showed that consuming a moderate level of alcohol, something like one beer a day for a woman, one or two for a man, or the equivalent in other drinks, was good for you, better than no alcohol at all. All of them also agreed that they would not advise their patients to do so.
Why? They mentioned that there were problems with prescribing something that depended on the exact dosage and that a higher level of consumption was likely to lead to auto accidents, but distinguishing one beer a day from three is not a difficult problem even for those who are not doctors. My conjecture was that the real explanation was the reluctance of doctors to appear to be on the wrong side. Everyone knew that alcohol was a bad thing, a source of auto accidents and various medical (and other) problems. By giving a truthful account of the medical evidence the doctors on the program might appear to be pro-alcohol; all good people are anti. Hence they had to qualify their conclusion as a purely theoretical matter, not something that would affect what they told their patients. Think of it as a different version of PC — Professionally Correct speech.
A similar pattern exists for ice cream. Multiple independent studies have found evidence that consuming ice cream reduces the chance of getting diabetes — and found ways of explaining the evidence away. In several cases the authors have gone so far, in public statements, as to report that yogurt is protective against diabetes while other dairy products are not, even though in their data ice cream showed an effect as strong as yogurt's, in some studies a stronger one.
Yogurt, as everyone knows, is a healthy food. Ice cream, as everyone knows, is bad for you.
From time to time I see a news story on some piece of scientific research that somewhat weakens the case for taking strong action against global warming. I believe that every time I have seen such a report it was accompanied by a quote from the researchers to the effect that global warming was a serious problem and their work should not be taken as a reason to be less worried about it. They almost certainly believed the first half of that, but their work was a reason to be less worried even if not to stop worrying.
Good people are on the side that believes that warming is happening, is anthropogenic, is a serious problem that needs to be dealt with immediately. Bad people deny one or more of those claims. If that is what all the people who matter to you, such as the fellow members of your profession, believe, and you are so unfortunate as to produce results that strengthen the bad people's case, it is prudent to make it clear that you are still on the side of the angels. Just as, if you are so unfortunate as to be an honest doctor aware of the evidence in favor of alcohol, it is prudent to make it clear that you have not transferred your allegiance to demon rum.
An article in the Journal of the American Medical Association reported that being overweight, as defined by Body Mass Index, may be good for you: people in the recommended BMI range are more likely to die ("all cause mortality") than people whose weight classifies them as overweight but not obese. What I found most interesting about the news coverage of the article was the reaction reported: people quoted as criticizing the article without offering any good reason to think it was wrong. From the USA Today story:
Walter Willett, head of the department of nutrition at the Harvard School of Public Health, says the findings are "complete rubbish" because the methodology used in the analysis seriously underestimates "the hazards of being overweight and obese."
Steven Heymsfield, one of the authors of the accompanying editorial in the journal and the executive director of the Pennington Biomedical Research Center in Baton Rouge:
"We don't really know the ideal weight for a long life and optimal health. Science is still working that out. But falling in the normal, healthy weight range is still the safest place to be."
Other people offered possible ways of explaining away the result, such as the suggestion that overweight people got more medical attention, but no actual evidence.
My impression was that in this case, as in the case of evidence in favor of the moderate consumption of alcohol, there is an official truth and a tendency to discount evidence against it. The Heymsfield quote is a nice example of one way of doing so, since it appears to endorse the orthodox view of what people should do while actually saying nothing. That falling in the healthy weight range is the safest place to be is the definition of the healthy weight range, but does not tell us where it is.
A related point is the popularity of the Body Mass Index, along with the use of objective sounding terms ("overweight," "obese," "obesity grade 1," ...) for its categories. The nice thing about BMI is that it is easy for anyone to calculate, since it is merely weight divided by height squared. The problem is that it is a poor measure, since people differ in other relevant dimensions, most notably in how wide they are; if your objective is to produce accurate information relevant to health, you would want to take that into account. But not if your objective is to pressure people into losing weight, since the more complicated the measure, the easier it is for people who don't want to diet to explain away their weight.
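To make the measure's simplicity concrete, here is a minimal sketch of the calculation along with the conventional WHO cutoffs for the labels mentioned above (the cutoff values are the standard published ones, not taken from the JAMA article):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def category(b):
    """Map a BMI value to the conventional WHO label."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal"
    if b < 30:
        return "overweight"
    if b < 35:
        return "obesity grade 1"
    return "obesity grade 2+"

# Example: 77 kg at 1.75 m gives a BMI just over 25,
# which the standard cutoffs label "overweight".
print(round(bmi(77, 1.75), 1))   # 25.1
print(category(bmi(77, 1.75)))   # overweight
```

Note what the calculation does not ask for: frame size, build, or body composition. That is the point of the criticism above — two people of the same height and weight get the same label however differently that weight is distributed.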
I have come across at least one case where the argument for suppressing correct arguments that support the wrong side was made explicit, although by people who decided not to do it. The subject was an article which pointed out a reason to think the requirement for herd immunity to Covid was being overstated.
we were concerned that forces that want to downplay the severity of the pandemic as well as the need for social distancing would seize on the results to suggest that the situation was less urgent. We decided that the benefit of providing the model to the scientific community was worthwhile. (from the editor’s blog¹ of Science Magazine, brought to my attention by a commenter on a blog post in which I offered a simpler version of the same argument as the article.)
That implies that the editors believe in filtering the scientific literature in order to bias the public perception in the direction they approve of, although in this case they decided not to. It follows that one cannot take the published scientific literature on any controversial issue as giving an unbiased picture of the actual science.
Official Scientific Truth
A pattern I have observed in a variety of public controversies is the attempt to establish some sort of official scientific truth, proclaimed by a suitable authority — a committee of the National Academy of Sciences, the Centers for Disease Control, or the equivalent. Trying to do so is a mistake, one based on a misunderstanding of how science works. Truth is not established by an authoritative committee but by a decentralized process which (sometimes) results in everyone or almost everyone in the field ending up in agreement.
Part of the problem with the attempt to establish official truth is that the longer it is followed, the worse it works. You start with a body that exists to let experts interact with each other and so really does represent more or less disinterested expert opinion. It is asked to offer an authoritative judgement on some controversy: whether capital punishment deters murder, the effect on crime rates of permitting concealed carry of handguns, the effect of second hand smoke on human health.
The first time it is asked to rule on some such question it might produce an answer representing a balanced professional judgement, although even then there is the risk that the committee established to give judgement will end up dominated not by the most expert but by the most partisan. The more times the process is repeated, the greater the incentive for people who want their views to get authoritative support to maneuver themselves or their friends into positions of influence within the organization, to keep those they disapprove of out of such positions, and to divert the organization from its original purpose into being a rubber stamp for their views. The result is to subvert both the organization and the scientific enterprise, especially if support by official truth becomes an important determinant of research funding.
An old example involved Cyril Burt, a very prominent British psychologist responsible for early studies of the heritability of I.Q., a controversial subject. After his death he was accused of various sorts of academic fraud. The British Psychological Society announced Burt’s guilt in a booklet entitled A balance sheet on Burt, the seven authors of which consisted of four of Burt’s attackers and three people who had been uninvolved in the controversy. Many people took, some still take, their conclusion for gospel.²
Subsequently, two different authors working independently, neither having any prior connection with Burt, published books arguing convincingly that some or all of the charges against him were bogus. Interested readers can find a detailed discussion of the case in Cyril Burt: Fraud or Framed, which concludes that most of the case against Burt is at best unproven.
Critics claimed that some or all of his data on identical twins separated at birth was invented. Evidence offered included the fact that the final report, with a considerable number of added twins, had correlation coefficients many of which were identical to three decimal places to those in a previous report with a smaller number. It was pointed out, correctly, that that was a very unlikely coincidence. It was, however, an almost equally unlikely fraud, since a statistician inventing new data would have taken the obvious precaution of inventing new and slightly different correlation coefficients. The obvious explanation was that Burt, then in his eighties and working by himself in a world without desktop computers, statistical programs, or word processors, managed to confuse some of his old correlation coefficients with some of his new ones when copying numbers into the final text of his article.
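How unlikely both the coincidence and the hypothetical fraud-by-copying are is easy to check by simulation. The sketch below uses illustrative numbers I have assumed for the purpose (a true correlation of 0.77 and a sample of 21 twin pairs growing to 53; Burt's actual figures differ): it estimates how often a correlation honestly recomputed after adding new pairs happens to agree with the old one to three decimal places.

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

random.seed(0)
RHO = 0.77              # assumed true twin correlation (illustrative)
N_OLD, N_NEW = 21, 53   # assumed sample sizes before and after adding twins

trials, matches = 2000, 0
for _ in range(trials):
    xs, ys = [], []
    for _ in range(N_NEW):
        # draw a pair of scores with correlation RHO
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        xs.append(z1)
        ys.append(RHO * z1 + math.sqrt(1 - RHO ** 2) * z2)
    r_old = corr(xs[:N_OLD], ys[:N_OLD])   # correlation in the first report
    r_new = corr(xs, ys)                   # after the new pairs are added
    if round(r_old, 3) == round(r_new, 3):
        matches += 1

print(matches / trials)  # agreement to three decimals is rare, well under 5%
```

The same rarity cuts both ways: an honest recomputation almost never reproduces the old coefficient exactly, but a competent fabricator would never report one either, which is why carelessly recycled numbers fit the facts better than deliberate fraud.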
Some critics, eager to deny heritability of intelligence, claimed that Burt had invented all of his twin data. Later researchers, working independently, duplicated his work and reached similar results.
Many of the questions about puzzling features in his articles might have been answered from his notes. Unfortunately …
A few days after the news of Burt’s death in 1971, I wrote to Miss Creed Archer, who was Burt’s private secretary for over twenty years, to request that she preserve the two or three tea crates of old raw data that Burt had once told me he still possessed. I told Miss Archer that I would travel to London the following summer to go through this material. I supposed it included IQ test data on twins, in which I had an interest and which I thought could be used in certain newer kinds of genetic analysis that Burt had not attempted. Miss Archer replied that all of those data had been destroyed within days after Burt’s death, on the advice of Dr. Liam Hudson, professor of educational psychology at Edinburgh University. He had come to Burt’s home soon after the announcement of Burt’s death. Miss Archer, distraught and anxious to vacate Burt’s large and expensive flat … expressed concern to Hudson about what to do with these boxes of old data. Hudson looked over their contents and advised that she burn them, as being no longer of any value. Miss Archer said she believed the boxes included the data on twins … (A. R. Jensen in Fraud or Framed.)
It is natural enough that observers of such controversies want an authoritative answer from an authoritative source — quoting the CDC or the BPS is much less work than looking through the evidence and arguments. But treating such answers as authoritative is a mistake, a pattern of treating them that way a dangerous mistake.
¹ When I quoted this on my blog I had a link to the passage. Currently it is gone from not only the web but the Internet Archive as well.
² Including the authors of the Wikipedia page on Burt, who mention some of the arguments in Burt’s defense but pretty clearly believe in his guilt.
“On the other side of the Atlantic, Liam Hudson, Professor of psychology at Edinburgh University, considers that the inconsistencies in Burt’s data and the difficulty in tracking down his coauthors put the question of Burt’s fraudulence ‘beyond argument.’” (Oliver Gillie, “Did Sir Cyril Burt Fake His Research on Heritability of Intelligence? Part I,” The Phi Delta Kappan, Vol. 58, No. 6, Technology and Education (Feb., 1977), pp. 469-471.)
My own conclusion on the controversy was that Burt almost certainly did not invent his original data, might have invented the later data — there are puzzles as to how he obtained them, although at least one possible solution has been offered — and certainly engaged in some minor dishonesty, such as publishing book reviews under fictitious names in a journal he controlled. His attackers took advantage of the fact that he was no longer around to defend himself, either because they disliked him — he was a powerful and controversial figure in the world of British psychology — or because they were committed to the nurture side of the nature/nurture controversy and so were eager to debunk his work showing intelligence to be in large part heritable.
The day after Gillie’s sensational charges of fraud in The Sunday Times, there appeared in the Times an interview with Tizard, titled “Theories of IQ pioneer ‘completely discredited.’” It began: “The theory of Sir Cyril Burt … that man’s intelligence is largely caused by heredity was now completely discredited, Professor Jack Tizard, Professor of Child Development at London University, said yesterday …” (Fraud or Framed)