Archive for the 'Critical Reasoning' Category

Countering (Massive Numbers of) Lies Doesn’t Work

January 25, 2017

Lies are dangerous in a number of ways. Put aside, for the moment, that there are lots of situations where a false belief leads to very bad action (e.g., believing homeopathy is an effective cancer treatment leads to forgoing treatment that would have saved one’s life or mitigated suffering). Lies are also dangerous because people with bad agendas tend to resort to lying because they can’t win on the merits. And they don’t just resort to a bit of deception, or even clever deception. It turns out that wholesale, massive, shameless, easily rebutted lies are pretty effective, at least for some purposes.

Consider the decades-long attack on the EU:

But Britain has a long and well-observed tradition of fabricating facts about Europe—so much so that the European Commission (EC) set up a website to debunk these lies in the early 1990s. Try our interactive quiz below and see if you can spot the myths.

Since then the EC has responded to over 400 myths published by the British media. These range from the absurd (fishing boats will be forced to carry condoms) to the ridiculous (zippers on trousers will be banned). Some are seemingly the result of wilful misunderstandings.

Sadly, for all the commission’s hard work, it is unlikely to be heard. The average rebuttal is read about 1,000 times. The Daily Mail’s website, by contrast, garners 225m visitors each month.

And, of course, the Leave campaign itself was almost wholly lie-based. Remain made some (economic) predictions that were falsified (and that needs to be understood), but it didn’t traffic in wholesale lies, to my knowledge.

Similarly, we have a decades-long campaign against Hillary Clinton, based almost entirely on easily debunked lies. Just take claims about her honesty (especially next to Trump). Robert Mann produced a very interesting graph of PolitiFact’s fact-checking of a selection of politicians:

It isn’t even close! HRC is one of the most honest politicians (in terms of how rarely she is caught in falsehoods) and Trump is one of the most dishonest.

Yet, when I was debating folks on Democratic-leaning blogs, I had people saying that Clinton was a pathological liar. When presented with this chart, they stuck to their guns. (Note: they didn’t think Obama was a liar.)

You can quibble with the methodology (see Mann’s blog post for a discussion), but PolitiFact’s fact-checkers try to be evenhanded. One should be at least a little struck by this evidence.

But correction often just doesn’t work, backfires, or isn’t effective in changing attitudes and behavior. For example,

Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

Or consider Emily Thorson’s concept of belief echoes:

However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction–even when it is fully believed–does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call “belief echoes,” are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking.

Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person’s pre-existing political views.

We see this in the various formulations of the Clinton Rules.

One major harm of such mechanisms is that they open up a line of defense for very bad people, e.g., Trump: to wit, that there are “Trump rules” and the bad things pointed out about him are fake. They aren’t, but why trust a gullible media about it?

I’ve had personal experience of this. I used to comment a lot on LGM. One commenter with a propensity for persistently saying very silly things (about, e.g., statistics, causality, politics, and even the law (they are a lawyer)) got to a point where they couldn’t stand my repeated refutations (including my pointing out how they’d been refuted before). They embarked on a pretty systematic campaign of lying about me, primarily about my mental health: that I was “stalking” them, that I was on the verge of a breakdown, that they were frightened of me, that I had no sex life or any other kind of life, that I spent large periods of time looking things up about them (stalking!), etc. These were transparent lies and obvious gaslighting. No one took them seriously at face value, but they did have effects. People would see an exchange and assume that there was some fault on my part (however mild). This would pop up elsewhere, in other comments. Some of these people were more sympathetic to a gaslighting liar than they had any right to be.

So, pretty exemplary behavior and a sterling reputation vs. transparent lies and extremely bizarre slanders and…well, I’m the one not commenting any more. It worked, in a way. (Trump winning had an effect too. It’s not solely due to this bad behavior.)

Given sufficient shamelessness, no structural counter (e.g., moderation), and no big effort on my part (e.g., an active counter-campaign), there’s little penalty for such lying, and it advances the liar’s noxious cause.

These examples can be multiplied easily (anti-vaccine, pro-tobacco, climate change denial campaigns come to mind).

It’s very difficult to deal with. We need to.

Update:

How severe is the problem? I just saw a report on a survey using Trump’s and Obama’s inauguration crowd photos:

For the question about which image went with which inauguration, 41 percent of Trump supporters gave the wrong answer; that’s significantly more than the wrong answers given by 8 percent of Clinton voters and 21 percent of those who did not vote.

But what’s even more noteworthy is that 15 percent of people who voted for Trump told us that more people were in the image on the left — the photo from Trump’s inauguration — than the picture on the right. We got that answer from only 2 percent of Clinton voters and 3 percent of nonvoters.

The article discusses the idea of “expressive responding”:

Why would anyone give the wrong answer to a pretty simple question?

To many political psychologists, this exercise will be familiar. A growing body of research documents how fully Americans appear to hold biased positions about basic political facts. But scholars also debate whether partisans actually believe the misinformation and how many are knowingly giving the wrong answer to support their partisan team (a process called expressive responding).

Expressive responding is yet another form of lying with potentially far-reaching consequences.

On Calling Out a Lie

January 24, 2017

Given the massive amount of un-, anti-, and non-truth spewed by Trump, his minions, and the Republican Party, the media has had a lot of trouble coping with it. Trumpsters and their ilk have even started complaining about “fake news,” by which they don’t mean actual fake news but rather true news that they don’t like.

The media needs to deal with the situation better. There are lots of vulnerable points (e.g., the need for access, the cult of balance, the shamelessness of the deception). But one problem is a strong unwillingness to call a lie a lie (well, except for the liars, who are quite willing to call anything they don’t like a lie).

There’s a fairly narrow idea of a lie making its way around that’s used to justify this. Take Kevin Drum (who’s on the pro-call-out-lies side):

The problem with branding something a lie is that you have to be sure the speaker knew it was wrong. Otherwise it’s just ignorance or a mistake.

Arrrgh! Even Drum falls into a pretty obvious error! Just because you don’t utter a deliberate, explicit, knowing falsehood doesn’t mean you are innocently making some sort of error (i.e., acting from ignorance or making a mistake)! Simple contemplation of lies of omission reveals that. Or recall standard tricks such as:

Is there anything else material that you want to tell us?

No.

But it says here that you did X and X is material! Why did you lie?!

I didn’t lie. I didn’t want to tell you about X.

Lots of people have come to rely on Frankfurt’s notions of “bullshit” (utterances made without regard for the truth) and “lies” (utterances made with a regard for falsity). I remember when Frankfurt’s article came out, and I enjoyed it. It’s a nice distinction, but it’s been misused. A bullshitter is a kind of liar (or, if you want to be annoying, a deceiver). (Wikipedia correctly lists Frankfurtian “bullshit” as a topic on the “lie” page.)

Frankfurt spends a great deal of time trying to suss out the distinction between lying and bullshitting:

The elder Simpson identifies the alternative to telling a lie as bullshitting one’s way through. This involves not merely producing one instance of bullshit; it involves a program of producing bullshit to whatever extent the circumstances require. This is a key, perhaps, to his preference. Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth. On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared to fake the context as well, so far as need requires.

Meh. When you have enough fabrication, and one of your targets is yourself, this idea of focus isn’t pertinent. One way of lying is being a shameless liar most of the time, so that when one does speak the truth one isn’t believed.

It is sometimes worth figuring out the etiology of someone’s false (or otherwise wrong) utterances. It can make a difference in how you counter them. If someone is mistaken, they may be amenable to correction. If they are a “true believer”, it may be quite difficult to merely correct them (so maybe you don’t bother).

But, with the Trumpians and other Republicans, come on. There needs to be some strict liability here. Lying so well that you convince even yourself that it’s true is a kind of lying. Coming to believe your own lies (supposedly) doesn’t get you off the hook for all that lying nor does it make it not lying.

I’m sorta ok with Drum’s desire to focus on deception rather than (narrow) lying. But…in ordinary vernacular, deception is lying. A lie of omission is a lie. If you bullshit me, you are lying to me. If you lie to yourself, you are lying.

With Trump, it’s super easy: it’s almost all straightforward lies.

Update: LGM caught up with the NYT finally putting “lie” in the headline with appropriate skepticism.

Experiments vs. Case Studies

January 4, 2016

My recent post on validities was motivated by John Protevi posting a draft of an abstract he was submitting about the Salaita affair. John focused on exploring the use of case studies in moral analysis. This prompts me to write up (again) my spiel on experiments and case studies.

The primary aim of a controlled experiment is internal validity, that is, demonstrating causal relationships. The primary tool for this is isolation, that is, we try to remove as much as possible so that any correlations we see are more likely to be causal. If you manipulate variable v1, variable v2 responds systematically, and there are no other factors that change through the manipulation, then you have a case that changes in v1 cause those changes in v2. (Lots of caveats. You want to repeat it to rule out spontaneous changes to v2. Etc.) Of course, you have lots of problems holding everything except v1 and v2 fixed. It’s probably impossible in almost all cases. You may not know all the factors in play! This is especially true when it comes to people. So, you control as much as you can and use a large number of randomly selected participants to smooth out the unknowns (roughly). But critically, you shrink the number of variables (v) and up the number of repetitions (n).
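
Here is a minimal sketch of that idea, purely illustrative and not from the original post: we manipulate v1 (control vs. treatment), let v2 respond to v1 plus noise standing in for all the factors we cannot hold fixed, and watch the estimated effect stabilise as n grows. The function name, effect size, and noise level are all invented for the example.

```python
# Toy randomized experiment (illustrative sketch; names and numbers are made up).
import random

def run_experiment(n, true_effect=2.0):
    random.seed(0)
    control, treatment = [], []
    for _ in range(n):
        hidden = random.gauss(0, 5)         # unknown factors we can't hold fixed
        v1 = random.choice([0, 1])          # random assignment: manipulate v1
        v2 = true_effect * v1 + hidden      # changes in v1 cause changes in v2
        (treatment if v1 else control).append(v2)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment) - mean(control)  # estimated effect of v1 on v2

print(run_experiment(n=50))     # small n: noise can swamp the true effect
print(run_experiment(n=20000))  # large n: the estimate settles near 2.0
```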

Low v tends to hurt both external and ecological validity. In other circumstances, other factors might produce the changes in v2 (or block them!). In other controlled circumstances, it might be fairly easy to find the interaction. But in field circumstances, the number of factors potentially in play explodes.

Thus, the case study, where we lower n (to n = 1) in order to explore arbitrary numbers of factors. Of course, the price we pay for that is weakening internal and external validity, indeed, any sort of generalisability.

Of course, in non-experimental philosophy, the main form of experiment is the thought experiment. But you can see the logic of experiments at work: the reason philosophers dream up outlandish circumstances is to isolate and amplify the target v1 and v2. Thus, in the trolley problem, you have a simple choice. No one else is involved, and we pit number of lives against omission or commission, and the result is death. That the example is hard to relate to is a perfect example of a failure of ecological validity. But philosophers get so used to intuiting under thought-laboratory conditions that they become a bit like mice who have been bred to be susceptible to cancer: their reactions and thinking are suspect. (That it is all so clean and clever and pure makes it seem like one is thinking better. Bad mistake!)

Of course, we can have thought case studies as well. This is roughly what I take Martha Nussbaum to claim about novels in “Flawed Crystals: James’s The Golden Bowl and Literature as Moral Philosophy”:

To show forth the force and truth of the Aristotelian claim that “the decision rests with perception,” we need, then-either side by side with a philosophical “outline” or inside it—texts which display to us the complexity, the indeterminacy, the sheer difficulty of moral choice, and which show us, as this text does concerning Maggie Verver, the childishness, the refusal of life involved in fixing everything in advance according to some system of inviolable rules. This task cannot be easily accomplished by texts which speak in universal terms—for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular.  Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style—for how can this style at all convey the way in which the “matter of the practical” appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face? And how, without conveying this, can it convey the active adventure of the deliberative intelligence, the “yearnings of thought and excursions of sympathy” (p. 521) that make up much of our actual moral life?

I take this as precisely the point that more abstract explorations of moral reasoning lack ecological validity.

This, of course, has implications both for moral theorising and for moral education. Our moral theories are likely to be wrong about moral life in the field (and, I would argue, in the lab as well!). (I think this is what Bernard Williams was partly complaining about in Utilitarianism: For and Against.) But further, learning how to reason well about action in the circumstances of our lives won’t work by ingesting abstract moral theories (even if they are more or less true). We still need to cultivate moral judgement.

I think we can do philosophical case studies that are not thought case studies just as we can do experimental philosophy without thought experiments. Indeed, I recommend it.

On Validities

January 2, 2016

In an Introduction to Symbolic Logic class offered by a philosophy department, you will probably learn:

  1. An argument is valid if, whenever the premises are all true, the conclusion is (or must be) true.
  2. An argument is sound if it is valid and the premises are all true.

In such a class with a critical reasoning component, you will also learn about various common logical fallacies, that is, arguments which people take as valid but which are not (e.g., affirming the consequent, which is basically messing up modus ponens).
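
To make that textbook definition concrete (a sketch of my own, not from the post or any course materials): an argument form is valid just in case no truth assignment makes all the premises true and the conclusion false. For simple propositional forms you can brute-force this; the helper names below are invented for the example.

```python
# Check propositional validity by enumerating truth assignments for P and Q.
from itertools import product

def valid(premises, conclusion):
    # Valid iff every assignment that makes all premises true
    # also makes the conclusion true.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: P -> Q, P, therefore Q -- valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True

# Affirming the consequent: P -> Q, Q, therefore P -- invalid.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False (counterexample: P false, Q true)
```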

You might also get some discussion of “invalid but good” arguments, namely, various inductive arguments. (Perhaps these days texts include some proper statistical reasoning.) But this way of carving things up is passé: I think reserving “validity” for “deductive validity” is unhelpful. In many scientific papers, there will be a section on “threats to validity” where the authors address various issues with the evidence they provide, typically:

  1. Internal validity (the degree to which the theory, experimental design, and results support concluding that there is a causal relationship between key correlated variables)
  2. External validity (the degree to which the theory, experimental design, and results generalise to other (experimental) populations and situations)
  3. Ecological (or field) validity (the degree to which the theory, experimental design, and results generalise to “real world” conditions)

There are dozens of other sorts of validity. Indeed, the Wikipedia article presents deductive validity as the narrower usage:

It is generally accepted that the concept of scientific validity addresses the nature of reality and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the truth of inferences made from premises.

I like the general idea that the validity of an argument is the extent to which the argument achieves what it is trying to achieve. Typically, this is to establish the truth (or likelihood) of a conclusion. Deductions are useful, but they aren’t what you need most of the time. Indeed, per usual, establishing the truth of the premises is critical! And we usually can’t fully determine the truth of the premises! So, we need to manage lots of kinds of evidence in lots of different ways.

An argument is a relationship between evidence and a claim. The case where the relationship is deductive is wonderful and exciting and fun, but let’s not oversell it.

Thinking critically about critical reasoning

February 10, 2014

After observing a comment exchange with a classic (to my eye) misuse of an ad hominem accusation, I was moved to ask whether there were critical reasoning classes that looked at more than good vs. fallacious arguments and took a more dialectical or even broader view. This got turned into a NewAPPS post by Ed Kazarian. There are some interesting suggestions for books in the comments, and I hope to follow up on them. There is a ton of work and books on things like cognitive blindnesses, discussion, etc., but I don’t know if there’s a good text/handbook on, well, roughly, being a good (in a general sense) cognitively driven social being. Let me try an example. What’s wrong with:

If the moon is made of cheese, then it’s delicious.
The moon is made of cheese.
Therefore it is delicious.

If I think back to my very old critical reasoning days, the diagnosis would be that this is valid but unsound, with the second premise being definitely false and the first being dubious (but it might be hard to articulate exactly what’s dubious about it). But let’s compare it to:

We all agree, I’m sure, that if the moon is made of cheese, then it’s delicious.
As everyone knows, the moon is made of cheese.
Therefore it is delicious.

This version is somehow worse, though the logic is probably the same. It depends a bit on the reading of “We all agree, I’m sure” and the “As everyone knows”. It might be false that we all agree, that I’m sure we do, and that everyone knows without the core argument being either unsound or invalid. None of these decorations alter the logical force or truth status of the moon/cheese argument. Yet, the second could be much worse, or, at least, irritating to me. The decorations are designed to incline us against questioning the premises. This could be good if it would be a waste of time to investigate them, or a big problem if they are in contention. They could be a problem just by triggering a lot of discussion about the truth (or appropriateness) of the decorations and thus potentially derailing the discussion.

Consider:

Look, you fatuous idiot, if the moon is made of cheese, then it’s delicious.
Even a monster asshole like you knows that the moon is made of cheese.
Therefore it is delicious. I also hate you and all your works.

Of course, this does not instantiate the ad hominem fallacy. Indeed, there is no attempt at refutation. There is a lot of insult and denigration in this presentation, however. The first (“Look, you fatuous idiot”) speaks against the intellect of the interlocutor; the second speaks against their character; and the third just expresses what we might have figured out from the first two, i.e., that the speaker has a wee touch of animosity toward the listener.

But from what I recall of a standard critical reasoning perspective, the main thing you can say about this is that it’s a bit obscure and perhaps inciting. Yet these features might matter from a dialectical or community-oriented point of view. They may even, perhaps, be appropriate (it at least partially depends on 1) whether it is meant and taken as ironic and even friendly, 2) whether the hearer “deserves” it, or 3) whether the overall effect is positive).

There are clearly some dialectical considerations in standard critical reasoning discussions. Burden of proof is a dialectical notion. But consider this (glib) presentation of the related “fallacy“:

The burden of proof lies with someone who is making a claim, and is not upon anyone else to disprove. The inability, or disinclination, to disprove a claim does not render that claim valid, nor give it any credence whatsoever. However it is important to note that we can never be certain of anything, and so we must assign value to any claim based on the available evidence, and to dismiss something on the basis that it hasn’t been proven beyond all doubt is also fallacious reasoning.

This does not take into account anything about the relative positions of the discussants, the prima facie nature of the claims, or the ways that demands might be unfair or otherwise problematic (except to point out that the demand for conclusiveness is another fallacy).

In fact, we often do earn a presumptive credibility for unsupported claims and can lose the benefit of presumptive credibility for unsupported doubts. We can gain or lose these credibilities unjustly or ineffectively (e.g., many problematic uses of authority involve giving too much presumptive credibility).

I feel that a lot of the philosophising I’ve encountered (and promulgated!) is not very sensitive to important aspects of individual and joint cognition over the mid to long haul. (Lots of people have observed this; cf., for example, loads of feminist critiques, among others.)

This makes me sad. When I was a brash young philosophy major, I told people that I liked philosophy because it was the best way to learn how to think. This is so obviously bonkers that I’m rather ashamed to have thought it. Yet this attitude doesn’t seem that uncommon.