However, as documented in a recent critical review of loss aversion by Derek Rucker of Northwestern University and me, published in the Journal of Consumer Psychology, loss aversion is essentially a fallacy. That is, there is no general cognitive bias that leads people to avoid losses more vigorously than to pursue gains. Contrary to claims based on loss aversion, price increases (i.e., losses for consumers) do not impact consumer behavior more than price decreases (i.e., gains for consumers). Messages that frame an appeal in terms of a loss (e.g., “you will lose out by not buying our product”) are no more persuasive than messages that frame an appeal in terms of a gain (e.g., “you will gain by buying our product”).
People do not rate the pain of losing $10 as more intense than the pleasure of gaining $10. People do not report that their favorite sports team losing a game will be more impactful than their favorite sports team winning a game. And people are not particularly likely to sell a stock they believe has even odds of going up or down in price (in fact, in one study I performed, over 80 percent of participants said they would hold on to it).
I have not dug into the paper, so… who knows?! But I find it plausible.
This is super annoying. The ego depletion case was extra annoying because the literature had seemed good. The loss of loss aversion is annoying because the concept is used so pervasively: it was the canonical example of behavioral economics.
We really need to separate out the work that is inherently high risk in fields like psychology and nutrition.
Note: when looking up the ego depletion stuff I came across a post touting recent “strong” evidence for ego depletion in the form of two sort-of-large studies with preregistration. That’s prima facie interesting, but I’m going to retain a pretty high level of skepticism. Certainly when folks write (emphasis added):
Moreover, combining results from the two studies, there was an overall small, but statistically significant, ego depletion effect even after removing outlier participants (and this was after only a five-minute self control challenge, so you can imagine the effects being larger after more arduous real life challenges).
Arrrrrgh! Two studies with a combined n of around 1000 yield a small but “statistically significant” (I presume p < 0.05) effect. No no no no. That’s super dangerous.
Worse, speculating about how much bigger the effects would be with a stronger manipulation is super duper dangerous. This is stoking confirmation bias. And we shouldn’t be looking at current tiny effects as evidence for future awesome effects.
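To see why a barely-significant small effect from a study of this size is so fragile, here’s a toy simulation of my own (the numbers are illustrative assumptions, not anything from the papers): a two-group comparison with 500 people per group and a true standardized effect of 0.1. Conditioning on “statistical significance” both misses most replications and exaggerates the effect when it does hit.

```python
# Toy simulation (hypothetical numbers): a small true effect with combined
# n ~ 1000 is underpowered, and significant estimates overstate the effect.
import math
import random

random.seed(1)

TRUE_EFFECT = 0.1                    # assumed "small" true standardized effect
N_PER_GROUP = 500                    # combined n ~ 1000, as in the studies discussed
SE = math.sqrt(2 / N_PER_GROUP)      # standard error of the difference in means
Z_CRIT = 1.96                        # two-sided 5% threshold

sims = 20_000
significant = []
for _ in range(sims):
    # Draw an estimate from the sampling distribution of the effect estimate
    est = random.gauss(TRUE_EFFECT, SE)
    if abs(est) / SE > Z_CRIT:       # "statistically significant" replication
        significant.append(est)

power = len(significant) / sims
mean_sig_est = sum(abs(e) for e in significant) / len(significant)

print(f"power: {power:.2f}")                              # only ~1 in 3 replications hit p < 0.05
print(f"true effect: {TRUE_EFFECT}")
print(f"mean |estimate| when significant: {mean_sig_est:.3f}")
print(f"exaggeration factor: {mean_sig_est / TRUE_EFFECT:.1f}x")
```

Under these assumptions, most replications fail to reach significance, and the ones that do overestimate the true effect by a large factor. That’s the type M (magnitude) error problem: selecting on significance at this power level means the published estimate is almost guaranteed to be inflated.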