One thing I liked about Trump becoming president was that he got lots of lay people to realize fake news was a thing. I think at first lots of people thought fake news was basically something Trump made up, but pretty quickly people started to realize that the media couldn't be trusted on a wide range of issues. Interestingly, the same has yet to happen with respect to science/academia and the replication crisis; to the extent that people do seem sceptical, it seems mostly driven by politics as opposed to an understanding of p-hacking, publication bias, base rate neglect, etc. There have been some fairly prominent, high-status meta-science and open-science people in recent years, but none has really reached mainstream appeal. I also think it's funny that in common parlance doing "research" consists of reading media articles as opposed to becoming familiar with a body of academic literature. My estimation is that most lay people have a very romanticized view of how science is done and view "scientists" as very high-status people to be trusted, much as they do doctors.
One point you mentioned in another piece is that there is a general tendency to use behavioural economics to find examples of market failure and then argue for government intervention, while failing to apply the same behavioural-economics insights to governments, which are subject to analogous failures.
This is a fantastic post about the replication failures / Texas sharpshooter fallacy. IMO, it would be great to have that as a stand-alone w/o the Danny stuff.
>a psychologist who won, and probably deserved, a Nobel prize
To my eye, this ("probably") sounds like petty jealousy. Like you're smarter than the Nobel committee.
>I made the same point more than twenty years before Kahneman’s book was published
Yeah, but Danny and Amos were doing this work many years before your bit. ;-)
Danny only got to writing his magnum opus later.
>results that played a substantial role in Kahneman’s work.
This is most definitely wrong -- priming was just one small part of his work.
Since I was first exposed to their work in the 90s, I have found evolutionary psychology to be the main source of insight. We're "rational" in that our minds work to get our genes to the next generation ... in our evolutionary setting.
IMHO.
Personally I interpret the "probably deserved" in a positive light. But then, I don't have a lot of respect for any institution.
It was intended to be positive. I don't assume that everyone who gets a Nobel prize, or some other high profile award, deserves it.
When my father got the Nobel he commented that it was less important to him than having gotten the Clark medal, which is given to the best economist under forty, because that was the judgement of his peers.
Dan Ariely's book _Predictably Irrational_ makes the same point as the first half of your post: that not only is a lot of human economic behavior "irrational" in the economist's sense, but it's _consistently_ and _predictably_ irrational, not just noise. For example, under certain predictable circumstances, people treat "free" as dramatically less expensive than even the smallest positive price, and under other predictable circumstances, people treat "free" as equivalent to quite a high price. ("Why are you doing this for free? I'm willing to pay you." "Because if you paid me what I'm worth, you couldn't afford me.")
It, too, cites a lot of experiments, and it would be interesting to see how those experiments have fared under replication.
Sounds interesting.
I have long thought that the most promising use of behavioral economics would be macro. It's not my field, but my impression is that most or all of the theories involve people repeatedly making mistakes, and behavioral economics is a theory of mistakes.
Would the Ariely book work for that? Does he mention it?
Dan Ariely's work was found to be fraudulent, at least with respect to his paper on people lying (ironic, I know). This isn't to say he doesn't have any insights; I am reminded that Rob Bensinger's post in Rationality: A-Z ends with a summary of Ariely's point that humans are not simply irrational but predictably irrational.
Shouldn’t predictable irrationality be fairly easy to turn into a money machine? It’s not like people are too ethical to exploit such an advantage. How many times can the same person make the same mistake without going bankrupt? Seems like it might be a lot. Perhaps I am being uncharitable, but the way enthusiasts talk about it, no one ever learns how to stop making such mistakes. There has to be some moderating influence. Maybe we even learn from experience? People do lots of stupid things, but typically do not do the same one again and again, never stopping. What makes the predictable action stop? Or is it only predictable the first time? Or is it still predictable, but the second time there is a different prediction? Or am I full of crap, attributing group characteristics to individuals?
I don't think predictable irrationality is any more of a "money machine" than predictable rationality (which people have been monetizing for hundreds or thousands of years). Remember, "irrational" in this case doesn't mean "making wrong decisions", it means "making decisions that violate certain standard economic assumptions". A "rational" decision can turn out just as wrong as an "irrational" one, and yes, we're capable of learning from our mistakes in both cases.
One can profit from predictably rational persons primarily by serving their interests. One can profit from predictably irrational behavior by exploiting the predictable errors.
But maybe I was not interpreting irrationality as you were. I tend to think of something like intransitive preferences, where, depending on the framing, a consumer prefers A to B and B to C, but with a new framing might prefer C to A. But this is a rather theoretical concern, difficult to imagine being exploited in practice. After a round or two, the consumer would probably twig to the fact that they are paying to go around in a circle.
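To make the circle concrete, here is a toy sketch in Python of that classic "money pump"; the goods, the fee, and the preference cycle are all invented for illustration:

```python
# Toy "money pump" against intransitive preferences (all values invented).
# The agent prefers A to B, B to C, and C to A, and will pay a small fee
# to swap what it holds for something it prefers.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x preferred to y
WANTS = {"B": "A", "C": "B", "A": "C"}          # holding -> item preferred to it
FEE = 1.00                                      # hypothetical per-swap fee

def run_pump(holding: str, swaps: int) -> float:
    """Walk the agent around its preference cycle, collecting a fee per swap."""
    extracted = 0.0
    for _ in range(swaps):
        offer = WANTS[holding]
        assert (offer, holding) in PREFERS  # the agent genuinely wants this trade
        holding, extracted = offer, extracted + FEE
    return extracted

# After three swaps the agent holds exactly what it started with, minus
# three fees; presumably that is the point at which it "twigs".
print(run_pump("C", swaps=3))  # 3.0
```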
Funny you should mention intransitive preferences, since they’re a good example of how group behavior works differently from individual behavior. Saari proved that, under certain assumptions that are almost always true of committees discussing complex issues, a committee really can have circular preferences: a majority prefers B over A, a majority prefers C over B, … and a majority prefers A over K, without any one member of the committee changing their mind or having “irrational” preferences.
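A minimal concrete instance (the textbook three-voter Condorcet cycle, rather than Saari's general theorem; the ballots are invented for illustration):

```python
# Three voters, each with a perfectly transitive individual ranking
# (listed best to worst), yet the majority preference runs in a circle.
ballots = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} to {y}: {majority_prefers(x, y)}")
# All three lines print True: the group prefers A to B, B to C, and C to A,
# even though no individual voter holds a circular preference.
```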
But yes, it’s probably best not to interpret “irrational” economic behavior as “errors”. When a scientific theory doesn’t correctly describe a naturally occurring phenomenon, nature isn’t wrong; the theory is.
If we do not interpret “irrational” economic behavior as “errors,” what is the alternative? Irrationality involves either using means that don’t accomplish your ends, or having ends that are themselves in some sense irrational. If the latter even makes sense, both sound like mistakes.
I guess our host would put it in terms of not maximizing expected utility. Why do I resist that? Both maximizing and doing statistics are difficult for the conscious mind. It doesn’t seem unreasonable to be skeptical of the thought that the unconscious mind might be better at it. So we are either irrational all the time, or the phrase “maximizing expected utility” has to be interpreted fairly weakly. Hence his emphasis on empirical tests, rather than logic alone, I suppose.
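For what it's worth, "maximizing expected utility" just means choosing the option with the highest probability-weighted utility. A minimal sketch, with an assumed log utility function and made-up gambles:

```python
import math

# A minimal expected-utility comparison; the utility function and the
# gambles are assumed purely for illustration.

def utility(wealth: float) -> float:
    return math.log(wealth)  # a standard risk-averse utility function

def expected_utility(gamble) -> float:
    """gamble is a list of (probability, wealth) outcomes."""
    return sum(p * utility(w) for p, w in gamble)

safe = [(1.0, 100_000)]                   # $100k for certain
risky = [(0.5, 40_000), (0.5, 200_000)]   # coin flip, expected value $120k

# The risky gamble has the higher expected money but the lower expected
# utility: log utility weights the downside heavily.
print(expected_utility(safe) > expected_utility(risky))  # True
```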
Are you sure that all the priming results turned out to be non-replicable, rather than just those that were spectacularly unexpected, and hence more attractive to would-be replicators?
I don't have knowledge either way, but my fast system has classed them all as "under suspicion" rather than "definitely wrong" ;-) Obviously there are lots of possibilities. With enough replication attempts, easiest with small samples, some predictable proportion of results will replicate at whatever statistical level one uses, even if the effect sought is non-existent. So cherry picking a few successful replications won't help. But I'd be unsurprised if priming effects (or context effects) were sometimes real, if only because it fits my model of human behaviour, and I'd hate to overcorrect from "not proven" to "definitely false".
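A quick simulation of that point, assuming the conventional 5% significance threshold, small samples, and no real effect at all:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Many "replication attempts" of a non-existent effect: both groups are
# drawn from the same distribution, so every significant result is a
# false positive.
attempts, n = 10_000, 20  # small per-group samples, as in many priming studies
hits = 0
for _ in range(attempts):
    a = rng.normal(size=n)
    b = rng.normal(size=n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits += 1

print(hits / attempts)  # roughly 0.05: one attempt in twenty "replicates" by chance
```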
My impression is that almost all of them failed to replicate, but that's from reading other people's comments. It isn't a subject I have any expertise in. Googling, it looks as though some in the field believe it is almost all bogus:
Seven years later, the storm has uprooted many of social priming’s flagship findings. Eric-Jan Wagenmakers, a psychologist at the University of Amsterdam, says that when he read the relevant part of Kahneman’s book, “I was like, ‘not one of these studies will replicate.’ And so far, nothing has.”
some that "social priming might survive as a set of more modest, yet more rigorous, findings."
https://www.nature.com/articles/d41586-019-03755-2
The question leaping to my mind is: did they fail to replicate because all the studies were repeated and came up negative, or because replications cost a researcher more to conduct than a positive result would gain them, so that the few times the cost/benefit comparison favored running one, it came up negative by pure chance?
(And maybe the even fewer that did come up positive weren't published, or otherwise never reached Wagenmakers' attention?)