The Loss of Loss Aversion

As with ego depletion, loss aversion turns out to probably not be a thing:

However, as documented in a recent critical review of loss aversion by Derek Rucker of Northwestern University and myself, published in the Journal of Consumer Psychology, loss aversion is essentially a fallacy. That is, there is no general cognitive bias that leads people to avoid losses more vigorously than to pursue gains. Contrary to claims based on loss aversion, price increases (ie, losses for consumers) do not impact consumer behavior more than price decreases (ie, gains for consumers). Messages that frame an appeal in terms of a loss (eg, “you will lose out by not buying our product”) are no more persuasive than messages that frame an appeal in terms of a gain (eg, “you will gain by buying our product”).

People do not rate the pain of losing $10 to be more intense than the pleasure of gaining $10. People do not report their favorite sports team losing a game will be more impactful than their favorite sports team winning a game. And people are not particularly likely to sell a stock they believe has even odds of going up or down in price (in fact, in one study I performed, over 80 percent of participants said they would hold on to it).

I haven’t dug into the paper, so…who knows?! But I find it plausible.

This is super annoying. The ego depletion one was extra annoying because the literature had seemed good. The loss of loss aversion is annoying because of how pervasively the concept is used. It was the go-to example of behavioral economics.

We really need to separate out the work that is inherently high risk in fields like psychology and nutrition.

Note: when looking up the ego depletion stuff I came across a post touting recent “strong” evidence for ego depletion in the form of two sorta-large preregistered studies. That’s prima facie interesting, but I’m going to retain a pretty high level of skepticism. Certainly when folks write (emphasis added):

Moreover, combining results from the two studies, there was an overall small, but statistically significant, ego depletion effect even after removing outlier participants (and this was after only a five-minute self control challenge, so you can imagine the effects being larger after more arduous real life challenges).

Arrrrgh! The result of two studies with a combined n of around 1000 is a small but “statistically significant” (I presume at the 0.05 level) effect. No no no no. That’s super dangerous.

Worse, speculating about how much bigger the effects would be with a bigger manipulation is super duper dangerous. This is stoking confirmation bias. And we shouldn’t be looking at current tiny effects as evidence for future awesome effects.
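To make the worry concrete, here’s a back-of-the-envelope sketch in Python (assuming two equal groups of about 500 each, which is not necessarily the studies’ actual design) of how small an effect can sit right at the p = 0.05 line with a combined n of around 1000:

```python
# Sketch: the smallest standardized effect that is "just significant" at
# p = .05 for a two-sample comparison with ~1000 participants total.
# Group sizes are assumed for illustration; this is not the studies' data.
import math
from scipy import stats

n_per_group = 500                                    # ~1000 participants total
crit_t = stats.t.ppf(0.975, df=2 * n_per_group - 2)  # two-sided .05 cutoff

# For a two-sample t-test with equal groups, t ≈ d * sqrt(n_per_group / 2),
# so the minimal "significant" standardized effect size is:
d_min = crit_t / math.sqrt(n_per_group / 2)
print(f"critical t ≈ {crit_t:.2f}, minimal 'significant' d ≈ {d_min:.2f}")
# ≈ 0.12 standard deviations: small enough that publication bias, flexible
# analysis, or a handful of outliers could plausibly account for it.
```

An effect hovering around a tenth of a standard deviation is exactly the sort of thing I want replicated hard before updating, never mind extrapolating to “more arduous real life challenges”.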


Trust Not In Pundits with Too Much Bonkers Confidence

I put all my blogging energy into an LGM comment about the problems with Seth Abramson, Twitter legal conspiracist (albeit one aimed at Trump these days):

Seth has done an outstanding job on this.

Really?

First, some context:

https://www.huffingtonpost.com/seth-abramson/5-things-weve-learned-abo_b_9772408.html

https://www.huffingtonpost.com/entry/in-sanders-wins-californi_b_10252990.html

https://www.huffingtonpost.com/entry/how-to-explain-the-sanders_b_10206250.html

He wrote a lot of bonkers stuff during the election. Not in the league of HA Goodman or even Greenwald, but a lot of silly stuff all with the same breathless, relentless, overconfident air.

This should give us a bit of pause. It doesn’t mean he didn’t change for the better, but it means we should take care.

Second, that thread:

Trump didn’t “as good as” out himself as a Russian agent—he *literally* did, and his statement about DNI Coats literally proves it.

“Literally” eh?

Trump’s hand-picked Republican Director of National Intelligence, Dan Coats, has the *same* intel on Russia’s attack on America Trump has—by definition. It’s undisputed.

Say what? I mean, the only sense in which this is true is that Trump has *formal* access to everything Coats has. We also know that Trump reads almost nothing, cannot sit through most briefings, retains little, ignores everything, and makes up a ton more.

So, it’s not undisputed that Trump is working rationally and knowingly off the same information. In fact, that’s certainly not true!

This is the key for some wacko inferences:

Or *would* be, if we didn’t know that there is *one* difference between the two men’s intelligence sources.

Trump doesn’t read, listen, or care about them?

The *one* difference between the intel Coats has and the intel Trump has is that *Trump has met privately with Russians on multiple occasions*. He did so—without the necessary meeting attendees, including advisers and witnesses—in the Oval Office, at a prior summit, and today.

See, this is very silly. This is obviously a difference, but it’s one of a multitude. It’s significant, but not necessarily dominant.

So the only explanation for Trump saying that he “disagrees” with intel his own DNI is 100% on is if he’s relying on the sources *he* has that DNI Coats doesn’t

I mean, what? The right explanation for Trump saying that he disagrees is that the US intelligence community’s information is really bad for Trump. In all sorts of ways. That’s totally sufficient for Trump to say it’s wrong. He said that the content of an interview he gave to a conservative UK paper was false mere hours afterwards. He didn’t have “intel” the paper didn’t. He just didn’t like the fallout.

So what did Trump do with the additional intelligence he received from the Russians in his three (at a minimum) protocol-busting meetings with hostile foreign actors? He used that intelligence—and the disagreement with Coats it bred in him—to *attack the United States* on TV.

We think that he got “intelligence” from the Russians? I mean, this stretches the meaning of the term out of shape. The Russians aren’t sharing intelligence with Trump! They may be lying to him, blackmailing him, or colluding with him, but not by sharing intelligence that he acts upon. I mean come the fuck on.

And now:

In doing so, Trump *literally* was acting as an agent of Russia, relying on Russian intelligence as his marching orders in spreading dangerous propaganda on international television. *That’s* why Brennan called his actions treasonous—because they *literally* (by law) are.

“Intelligence as marching orders” is incoherent and redundant. Whatever this is, it isn’t a proof that Trump “literally” confessed or that Coats’s statement “literally” proves it. It just isn’t. It’s a bizarre inference which isn’t necessary to reach a reasonably analogous conclusion.

And it’s really doubtful that his actions now are “literally” by law treasonous. The president has enormous power and wide latitude, especially in foreign affairs. It’s not determined that he’s given “aid and comfort” to an “enemy”, because he has a great deal of influence over who counts as an enemy! So, like with impeachment, whether what he’s doing is treason will be primarily a political determination.

If Seth’s hackery bore useful political fruit, I’d wince and be ok. It’s not at all clear that it does. I’ll leave you with these sage words:

I’ve been a metamodernist creative writer for many years now, but had not seen an opportunity to bring this earnest, optimistic, and loving art practice into my professional writing activities until Bernie Sanders came along. Not only do I fully support and endorse Senator Sanders’ agenda, I see in his political methodology evidence of the metamodern, just as I know for certain when I hear Clinton’s cynical incrementalism that I am in the presence of a postmodern political ethos. The reason we think of Bernie Sanders as impractical or even naive is that he is; what most fail to see, however, is that his is the “informed naivete” of metamodernism. He sees that our economic and cultural markets are in a terminal state of deconstruction, and yes, this makes him angry and “negative” in a certain respect, but he sees too that the opportunity this deconstruction affords us all is a moment in which we can reconstruct everything we’ve known in a way that better reflects our values.

So when I wrote that “Bernie Sanders Is Currently Winning the Democratic Primary Race, and I’ll Prove It to You,” I was offering a “minority report” of the Real:

Nonsense presented as analysis is a problem (cf. Glenn Greenwald). It often is concealed under a torrent of “evidence”. This irks me, as I’m a torrent-of-evidence sort of guy, and I resent people using the form poorly for bad ends.

Making Principled Unprincipled Choices

I like principled decision making. Indeed, few things inspired me as much as this quote from Leibniz:

if controversies were to arise, there would be no more need of disputation between two philosophers than between two calculators. For it would suffice for them to take their pencils in their hands and to sit down at the abacus, and say to each other (and if they so wish also to a friend called to help): Let us calculate.

Alas, there’s no decision making situation where this vision holds, even in principle. But still, I like my decisions to conform to some articulable rationale, preferably in the form of some set of general rules.

But some of my rules are meta-rules which focus on resource use. Obviously, one goal of decision-making rules is to maximise the chances of making the “right” choice. But for any metric of rightness (let’s say, an appliance with the best value for money) there’s a cost in the effort to assure the maximum (e.g., research, testing, comparing…lots of shopping). That cost can be quite large and interact with subsequent satisfaction in a variety of ways. I’m prone to this and, indeed, often end up in decision paralysis.

In response to this, one of my meta-rules is “don’t over-sweat it”. So, for small stuff, this reduces to “don’t sweat the small stuff”. But, because of my anxiety structures, I tend to see certain classes of small stuff as big stuff. So, I dedicate some effort to seeing small stuff as small. Sometimes, this means making it invisible to me. Poor Zoe often has to make the actual purchase after I’ve done the research, or even make the decision itself. For various classes of minor, irrevocable sub-optimal decisions, I prefer not to know about them. I will obsess, and that doesn’t help anyone.

When the decision is essentially arbitrary (because all choices are incommensurable in toto, or their value is unknowable at the moment), I try to make myself flip a coin (metaphorically, at least). What I try to avoid is building a fake rationale (except when that enables the choosing or makes me happier with the arbitrary choice).

Technical (or teaching) decisions often are best treated as arbitrary, but we have tons of incentives to treat them as requiring a ton of analysis to make the “right” choice. At the moment, I’m evaluating which Python testing framework to use and teach in my software engineering class. I currently use doctest and unittest and have a pretty decent lesson plan around them. doctest is funky and unittest is bog standard. I’d consider dropping doctest because I need room, and we don’t do enough xUnit-style testing for them to really grasp it. Both are built into the standard library.
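For concreteness, here’s a minimal sketch of the two built-in styles side by side, using a toy add function (the function and tests are invented for illustration):

```python
# Toy example: the same behaviour tested doctest-style and unittest-style.
import doctest
import unittest

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b

class TestAdd(unittest.TestCase):      # xUnit style: classes of test methods
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    doctest.testmod()                  # runs the examples in the docstring
    unittest.main()                    # runs the TestAdd cases
```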

But then there’s pytest, which seems fairly popular. It has some technical advantages, including a slew of plugins (including ones for regression testing and BDD-style testing). It scales in complexity nicely…you can just write a test function and you’re done.
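For comparison, the pytest flavour of the same toy tests really is just a couple of functions (assuming pytest is installed; the file and names are illustrative):

```python
# test_add.py -- pytest discovers test_* functions automatically, and plain
# assert statements are rewritten to produce informative failure messages.

def add(a, b):   # same toy function as above, repeated to keep this file self-contained
    return a + b

def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0

# Run with:  pytest test_add.py
```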

But, of course, it’s a third-party thing and needs to be installed. Any plugins would have to be installed too. Is it “better enough” to ignore the built-in libraries? Or should I add it on top of the built-in libraries? AND THERE MIGHT BE SOMETHING YET BETTER OUT THERE OH NOES!!!!

No. The key principle here is a meta-principle: don’t invest too much more effort. Make a decision and stick with it. In the end, any of the choices will do, and one big determiner will be “does it spark my interest now?” while another will be “how much extra work is that?”

And that’s fine.

 

Countering (Massive Numbers of) Lies Doesn’t Work

Lies are dangerous in a number of ways. Putting aside the fact that there are lots of situations where a false belief leads to very bad action (e.g., believing homeopathy is an effective cancer treatment leads to forgoing treatment that would have saved one’s life or mitigated suffering), lies are also dangerous because people with bad agendas tend to resort to lying because they can’t win on the merits. And they don’t just resort to a bit of deception, or even clever deception. It turns out that wholesale, massive, shameless, easily rebutted lies are pretty effective, at least for some purposes.

Consider the decades long attack on the EU:

But Britain has a long and well-observed tradition of fabricating facts about Europe—so much so that the European Commission (EC) set up a website to debunk these lies in the early 1990s. Try our interactive quiz below and see if you can spot the myths.

Since then the EC has responded to over 400 myths published by the British media. These range from the absurd (fishing boats will be forced to carry condoms) to the ridiculous (zippers on trousers will be banned). Some are seemingly the result of wilful misunderstandings.

Sadly, for all the commission’s hard work, it is unlikely to be heard. The average rebuttal is read about 1,000 times. The Daily Mail’s website, by contrast, garners 225m visitors each month.

And, of course, the Leave campaign itself was almost wholly lie-based. Remain made some (economic) predictions that were falsified (and that needs to be understood), but it didn’t traffic in wholesale lies, to my knowledge.

Similarly, we have a decades-long campaign against Hillary Clinton based almost entirely on easily debunked lies. Just take claims about her honesty (esp. next to Trump). Robert Mann produced a very interesting graph of PolitiFact’s fact checking of a selection of politicians:

It isn’t even close! HRC is one of the most honest politicians (in terms of the rate of falsehoods) and Trump is one of the most dishonest.

Yet, when I was debating folks on Democratic leaning blogs, I had people saying that Clinton was a pathological liar. When presented with this chart, they stuck to their guns. (Note, they didn’t think Obama was a liar.)

You can quibble with the methodology (see Mann’s blog post for a discussion), but PolitiFact’s fact checkers try to be evenhanded. One should be at least a little struck by this evidence.

But correction often just doesn’t work, backfires, or isn’t effective in changing attitudes and behavior. For example,

Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

Or consider Emily Thorson’s concept of belief echoes:

However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction–even when it is fully believed–does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call “belief echoes,” are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking.

Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person’s pre-existing political views

We see this in the various formulations of the Clinton Rules.

One major harm of such mechanisms is that they open up a line of defense for very bad people, e.g., Trump, to wit: that there are “Trump rules” and the bad things pointed out about him are fake. They aren’t, but why trust a gullible media about it?

I’ve had personal experience of this. I used to comment a lot on LGM. One commenter with a propensity for persistently saying very silly things (about, e.g., statistics, causality, politics, and even the law (they are a lawyer)) got to a point where they couldn’t stand my repeated refutations (including pointing out how they’d been refuted before). They embarked on a pretty systematic campaign to lie about me, primarily about my mental health: that I was “stalking” them, that I was on the verge of a breakdown, that they were frightened of me, that I had no sex life or other kind of life, that I spent large periods of time looking things up about them (stalking!), etc. These were transparent lies and obvious gaslighting. No one took them directly seriously, but they did have effects. People would see an exchange and assume that there was some fault on my part (however mild). This would pop up elsewhere, in other comments. Some of these people were more sympathetic to a gaslighting liar than they had any right to be.

So, pretty exemplary behavior and a sterling reputation vs. transparent lies and extremely bizarre slanders and…well, I’m the one not commenting any more. It worked, in a way. (Trump winning had an effect too. It’s not solely due to this bad behavior.)

Given sufficient shamelessness and no structural counter (e.g., moderation) and no big effort on my part (e.g., an active campaign), there’s little penalty for such lying and it advances their noxious cause.

These examples can be multiplied easily (anti-vaccine, pro-tobacco, climate change denial campaigns come to mind).

It’s very difficult to deal with. We need to.

Update:

How severe is the problem? I just saw a report on a survey using Trump’s and Obama’s inauguration crowd photos:

For the question about which image went with which inauguration, 41 percent of Trump supporters gave the wrong answer; that’s significantly more than the wrong answers given by 8 percent of Clinton voters and 21 percent of those who did not vote.

But what’s even more noteworthy is that 15 percent of people who voted for Trump told us that more people were in the image on the left — the photo from Trump’s inauguration — than the picture on the right. We got that answer from only 2 percent of Clinton voters and 3 percent of nonvoters.

The article discusses the idea of “expressive responding”:

Why would anyone give the wrong answer to a pretty simple question?

To many political psychologists, this exercise will be familiar. A growing body of research documents how fully Americans appear to hold biased positions about basic political facts. But scholars also debate whether partisans actually believe the misinformation and how many are knowingly giving the wrong answer to support their partisan team (a process called expressive responding).

Expressive responding is yet another form of lying with potentially far-reaching consequences.

On Calling Out a Lie

Given the massive amount of un-, anti-, and non-truth spewed by Trump, his minions, and the Republican Party, the media has had a lot of trouble coping with it. Trumpsters and their ilk have even started complaining about “fake news”, by which they don’t mean actual fake news but true news that they don’t like.

The media needs to deal with the situation better. There are lots of vulnerable points (e.g., the need for access, the cult of balance, the shamelessness of the deception). But one problem is a strong unwillingness to call a lie a lie (well, except for the liars, who are quite willing to call anything they don’t like a lie).

There’s a fairly narrow idea of a lie making its way around that’s used to justify this. Take Kevin Drum (who’s on the pro-call-out-lies side):

The problem with branding something a lie is that you have to be sure the speaker knew it was wrong. Otherwise it’s just ignorance or a mistake.

Arrrgh! Even Drum falls into a pretty obvious error! Just because you don’t utter a deliberate, explicit, knowing falsehood doesn’t mean you are innocently making some sort of error (i.e., acting from ignorance or making a mistake)! Simple contemplation of lies of omission reveals that. Or recall standard tricks such as:

Is there anything else material that you want to tell us?

No.

But it says here that you did X and X is material! Why did you lie?!

I didn’t lie. I didn’t want to tell you about X.

Lots of people have come to rely on Frankfurt’s notion of “bullshit” (utterances made without regard for the truth) and “lie” (utterances made with a regard for falsity). I remember when Frankfurt’s article came out and I enjoyed it. It’s a nice distinction, but it’s been misused. A bullshitter is a kind of liar (or, if you want to be annoying, a deceiver). (Wikipedia correctly puts Frankfurtian “bullshit” as a topic on the “lie” page.)

Frankfurt spends a great deal of time trying to suss out the distinction between lying and bullshitting:

The elder Simpson identifies the alternative to telling a lie as bullshitting one’s way through. This involves not merely producing one instance of bullshit; it involves a program of producing bullshit to whatever extent the circumstances require. This is a key, perhaps, to his preference. Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth. On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared to fake the context as well, so far as need requires.

Meh. When you have enough fabrication and one of your targets is yourself, this idea of focus isn’t pertinent. One way of lying is being a shameless liar most of the time so when one speaks the truth one isn’t believed.

It is sometimes worth figuring out the etiology of someone’s false (or otherwise wrong) utterances. It can make a difference in how you counter them. If someone is mistaken, they may be amenable to correction. If they are a “true believer”, it may be quite difficult to merely correct them (so maybe you don’t bother).

But, with the Trumpians and other Republicans, come on. There needs to be some strict liability here. Lying so well that you convince even yourself that it’s true is a kind of lying. Coming to believe your own lies (supposedly) doesn’t get you off the hook for all that lying nor does it make it not lying.

I’m sorta ok with Drum’s desire to focus on deception rather than (narrow) lying. But…in ordinary vernacular, deception is lying. A lie of omission is a lie. If you bullshit me, you are lying to me. If you lie to yourself, you are lying.

With Trump, it’s super easy: it’s almost all straightforward lies.

Update: LGM caught up with the NYT finally putting “lie” in the headline with appropriate skepticism.

Experiments vs. Case Studies

My recent post on validities was motivated by John Protevi posting a draft of an abstract he was submitting about the Salaita affair. John focused on exploring the use of case studies in moral analysis. This prompts me to write up (again) my spiel on experiments and case studies.

The primary aim of a controlled experiment is internal validity, that is, demonstrating causal relationships. The primary tool for this is isolation, that is, we try to remove as much as possible so that any correlations we see are more likely to be causal. If you manipulate variable v1 and variable v2 responds systematically and there are no other factors that change through the manipulation, then you have a case that changes in v1 cause those changes in v2. (Lots of caveats. You want to repeat it to rule out spontaneous changes to v2. Etc.) Of course, you have lots of problems holding everything except v1 and v2 fixed. It’s probably impossible in almost all cases. You may not know all the factors in play! This is especially true when it comes to people. So, you control as much as you can and use a large number of randomly selected participants to smooth out the unknowns (roughly). But critically, you shrink the v and up the n (i.e., repetitions).
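Here’s a toy simulation of that logic, with entirely made-up variables and effect size, just to illustrate how randomization plus a large n lets a simple difference in means stand in for the causal effect:

```python
# Toy illustration: manipulate v1, measure v2, and let randomization plus a
# large n smooth out an unknown factor we cannot control for.
import numpy as np

rng = np.random.default_rng(42)
n = 5000                               # lots of repetitions ("up the n")
true_effect = 0.5                      # made-up causal effect of v1 on v2

v1 = rng.integers(0, 2, n)             # randomized manipulation: 0 or 1
unknown = rng.normal(0.0, 2.0, n)      # unmeasured factors we can't hold fixed
v2 = true_effect * v1 + unknown        # observed response

estimate = v2[v1 == 1].mean() - v2[v1 == 0].mean()
print(f"estimated effect of v1 on v2: {estimate:.2f} (true value {true_effect})")
# Because v1 was assigned at random, the unknown factor averages out across
# the two groups, so the difference in means recovers the causal effect.
```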

Low v tends to hurt both external and ecological validity. In other circumstances, other factors might produce the changes in v2 (or block them!). For other controlled circumstances, it might be fairly easy to find the interaction. But in field circumstances, the number of factors potentially in play explodes.

Thus, the case study, where we lower the n (to n=1) in order to explore arbitrary numbers of factors. Of course, the price we pay for that is weakening internal and external validity, indeed, any sort of generalisability.

Of course, in non-experimental philosophy, the main form of experiment is the thought experiment. But you can see the logic of the experiment at work: the reason philosophers dream up outlandish circumstances is to isolate and amplify the target v1 and v2. Thus, in the trolley problem, you have a simple choice. No one else is involved, and we pit the number of lives against omission or commission, with death as the result. That the example is hard to relate to is a perfect example of a failure of ecological validity. But philosophers get so used to intuiting under thought-laboratory conditions that they become a bit like mice that have been bred to be susceptible to cancer: their reactions and thinking are suspect. (That it is all so clean and clever and pure makes it seem like one is thinking better. Bad mistake!)

Of course, we can have thought case studies as well. This is roughly what I take Martha Nussbaum to claim about novels in “Flawed Crystals: James’s The Golden Bowl and Literature as Moral Philosophy”:

To show forth the force and truth of the Aristotelian claim that “the decision rests with perception,” we need, then-either side by side with a philosophical “outline” or inside it—texts which display to us the complexity, the indeterminacy, the sheer difficulty of moral choice, and which show us, as this text does concerning Maggie Verver, the childishness, the refusal of life involved in fixing everything in advance according to some system of inviolable rules. This task cannot be easily accomplished by texts which speak in universal terms—for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular.  Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style—for how can this style at all convey the way in which the “matter of the practical” appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face? And how, without conveying this, can it convey the active adventure of the deliberative intelligence, the “yearnings of thought and excursions of sympathy” (p. 521) that make up much of our actual moral life?

I take this as precisely the point that more abstract explorations of moral reasoning lack ecological validity.

This, of course, has implications both for moral theorising and for moral education. Our moral theories are likely to be wrong about moral life in the field (and, I would argue, in the lab as well!). (I think this is what Bernard Williams was partly complaining about in Utilitarianism: For and Against.) But further, learning how to reason well about action in the circumstances of our lives won’t work by ingesting abstract moral theories (even if they are more or less true). We still need to cultivate moral judgement.

I think we can do philosophical case studies that are not thought case studies just as we can do experimental philosophy without thought experiments. Indeed, I recommend it.

On Validities

In an Introduction to Symbolic Logic class offered by a philosophy department, you will probably learn:

  1. An argument is valid if, when the premises are all true, the conclusion is (or must be) true.
  2. An argument is sound if it is valid and the premises are all true.

In such a class with a critical reasoning component, you will also learn about various common logical fallacies, that is, arguments which people take as valid but which are not (e.g., affirming the consequent, which is basically messing up modus ponens).
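You can see the asymmetry by brute-forcing the four truth-value assignments (a tiny sketch):

```python
# Modus ponens is valid: no assignment makes (p -> q) and p true while q is
# false. Affirming the consequent is not: (p -> q) and q can hold with p false.
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([True, False], repeat=2):
    if implies(p, q) and p and not q:
        print("modus ponens counterexample:", p, q)   # never printed
    if implies(p, q) and q and not p:
        print("affirming the consequent counterexample: p =", p, ", q =", q)
```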

You might also get some discussion of “invalid but good” arguments, namely, various inductive arguments. (Perhaps these days texts include some proper statistical reasoning.) This notion is passé. I think reserving “validity” for “deductive validity” is unhelpful. In many scientific papers, there will be a section on “threats to validity” where the authors address various issues with the evidence they provide, typically:

  1. Internal validity (the degree to which the theory, experimental design, and results support concluding that there is a causal relationship between key correlated variables)
  2. External validity (the degree to which the theory, experimental design, and results generalise to other (experimental) populations and situations)
  3. Ecological (or field) validity (the degree to which the theory, experimental design, and results generalise to “real world” conditions)

There are dozens of other sorts of validity. Indeed, the Wikipedia article presents deductive validity as restricted:

It is generally accepted that the concept of scientific validity addresses the nature of reality and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the truth of inferences made from premises.

I like the general idea that the validity of an argument is the extent to which the argument achieves what it is trying to achieve. Typically, this is to establish the truth (or likelihood) of a conclusion. Deductions are useful, but they aren’t what you need most of the time. Indeed, per usual, establishing the truth of the premises is critical! And we usually can’t fully determine the truth of the premises! So, we need to manage lots of kinds of evidence in lots of different ways.

An argument is a relationship between evidence and a claim. The case where the relationship is deductive is wonderful and exciting and fun, but let’s not oversell it.