Degrees of Belief

I’m not sure why this paper showed up and lingered in my tabs, but it did. I vaguely recall thinking “oh that sounds interesting!” and then being disappointed.

It starts with a weird argument for why the topic (the metaphysical status of beliefs) is worth exploring. But the arguments seem pretty…weird. One is to help formal epistemologists avoid having to say “all out belief” or “binary belief” instead of just “belief” and then talking about degrees of confidence rather than degrees of belief. I guess I’m losing some aspect of being a philosopher, because that sounds like a really dumb reason to write a paper.

We then see one rebuttal of a supposedly common argument:

Assumption 1: The property of having confidence that p is identical to the property of having belief that p.
Assumption 2: ‘Belief’ and ‘confidence’ pick out the same thing.

They then infer that since the property of having confidence, or the thing picked out by ‘confidence’, comes in degrees, it follows that belief comes in degrees.

However, no reasons are given for Assumptions 1 and 2. They seem to just be assumed. Now, on the face of things, belief and confidence do seem to be similar sorts of mental entities; perhaps they are identical. On the other hand, our having formed different words for them is some evidence that they are distinct. So, as it stands, I see no convincing argument here that beliefs come in degrees. We will have to look elsewhere for better arguments.

Now I want to say, “are you kidding me”. First, I want to know how common this argument is. Next, I want to know what problems this eliding causes, if it exists. Finally, I want to know whether the author has even seen a thesaurus. Multiple words for the same thing happen all the time.

But it gets worse:

Consider (i). One can talk of much hope, little confidence, much desire, and so on. For any paradigm propositional attitude that comes in degrees, higher or lower degrees of that attitude can be attributed to a person by way of an occurrence of a mass noun. This is inductive evidence for (i).

Consider (ii). One cannot ascribe higher or lower degrees of belief to a person with ‘belief’. (5) does ascribe belief by way of a mass noun, but this only ascribes a number of single beliefs to a population, not a degree of belief to a single individual. Whenever belief is ascribed to a single person by way of a noun, it is by the occurrence of a count noun and not a mass noun. That is why (3) and (4) do not make sense. From (i) and (ii), it follows that beliefs do not come in degrees.

Say what? We easily say that I have a strong or weak belief or that this belief is stronger than that one. And language is quirky! Consider temperature! It canonically comes in degrees! But I can’t say that I have much or little temperature!

Just no.

And, you know, people ask “Ok you believe P, but how much do you believe it?” “100%!”

“Do you believe it more or less than you believe the earth is round?” “Oh much less.”

So I remember now why I gave up with irritation. If you are going to argue from natural language to metaphysics (which I find weird in this day and age) and even if we accept confining yourself to English (which is bad) a minimal constraint should be a systematic linguistic analysis! Not a couple of cherry picked examples and some blather about mass vs count terms!

(Note that I don’t believe my examples prove that belief does have degrees, because I am not a silly person. I recognize that people might well talk about things in funny ways!)

In any case, I would have thought a metaphysical paper would have explored the, you know, metaphysics. E.g., looked at the ontological aspects of beliefs. One might explore whether neuroscience dictates some aspect of the metaphysics of belief. (If beliefs supervene on excitation dispositions, they have a natural degree aspect in us independent of evidential strength.)

I’m so grouchy.

RIP Lynne Rudder Baker

She died last month.

I first heard of her cognitive suicide argument at a Wesleyan Philosophy of Mind symposium (a futile attempt to keep Ken Taylor there, IIRC). Jay Garfield discussed it (his talk also was my introduction to Sellarsian dot quotes). I was fascinated by it. My thesis proposal came out of that fascination (and because of a different sort of cognitive suicide).

(In the attic purge, I found lots of thesis printouts. I may be able to look at them.)

I didn’t realise how theistic she was! That’s interesting. It puts some of her intellectual history in a new light, although I don’t think she was a supernaturalist in any real way. I suspect that she’s a bit Kantian in wanting to make room for God, freedom, and immortality (though again, not in a supernatural way).

I never met her or saw her talk, but Saving Belief was a friend. We grew apart, but I still have fond college memories of it.

MLK on Love and Power

Scribd threw up, in its unlimited tier, an audiobook with bits of MLK’s recorded speeches. Needless to say, it’s pretty fabulous.

MLK wrote some pretty amazing books and I wish they were more widely read.

One bit I found arresting (partial transcript):

What is needed is a realization that power without love is reckless and abusive, and that love without power is sentimental and anemic. (Yes) Power at its best [applause], power at its best is love (Yes) implementing the demands of justice, and justice at its best is love correcting everything that stands against love. (Speak) And this is what we must see as we move on.

This is pretty clearly bound up in King’s theological/philosophical perspective, which I’m definitely not in tune with, but I still find it pretty cool. There’s a koan-ness to the line “power at its best”. Power (at its best) is love implementing justice, and justice is love…er…crushing its enemies? (Justice is love correcting antilove. Simple substitution clearly doesn’t work.) Interestingly, the goal here seems to be attacking (in part) the black power movement:

Now what has happened is that we’ve had it wrong and mixed up in our country, and this has led Negro Americans in the past to seek their goals through love and moral suasion devoid of power, and white Americans to seek their goals through power devoid of love and conscience. It is leading a few extremists today to advocate for Negroes the same destructive and conscienceless power that they have justly abhorred in whites. It is precisely this collision of immoral power with powerless morality which constitutes the major crisis of our times. (Yes)

I generally don’t quite agree with King about Black Power, but I’d love to see a full scholarly treatment about his relationship to the movement.

This latter quote also makes me think of the very cool Bernie Boxill essay “Fear and Shame as Forms of Moral Suasion in the Thought of Frederick Douglass“. I’ve had my quarrels with Bernie over this article (more on whether fear is either a necessary or sufficient condition, or even a promoter, of moral suasion than on Douglass exegesis), but it seems to engage the King above on several fronts. Worth a read (always).

Countering (Massive Numbers of) Lies Doesn’t Work

Lies are dangerous in a number of ways. Put aside that there are lots of situations where a false belief leads to very bad action (e.g., believing homeopathy is an effective cancer treatment leads to forgoing treatment that would have saved one’s life or mitigated suffering). Lies are also dangerous because people with bad agendas tend to resort to lying because they can’t win on the merits. And they don’t just resort to a bit of deception, or even clever deception. It turns out that wholesale, massive, shameless, easily rebutted lies are pretty effective, at least for some purposes.

Consider the decades-long attack on the EU:

But Britain has a long and well-observed tradition of fabricating facts about Europe—so much so that the European Commission (EC) set up a website to debunk these lies in the early 1990s. Try our interactive quiz below and see if you can spot the myths.

Since then the EC has responded to over 400 myths published by the British media. These range from the absurd (fishing boats will be forced to carry condoms) to the ridiculous (zippers on trousers will be banned). Some are seemingly the result of wilful misunderstandings.

Sadly, for all the commission’s hard work, it is unlikely to be heard. The average rebuttal is read about 1,000 times. The Daily Mail’s website, by contrast, garners 225m visitors each month.

And, of course, the Leave campaign, itself, was almost wholly lie based. Remain made some (economic) predictions that were falsified (and that needs to be understood), but it didn’t traffic in wholesale lies, to my knowledge.

Similarly, we have a decades-long campaign, almost entirely easily-debunked-lie based, against Hillary Clinton. Just take claims about her honesty (esp. next to Trump). Robert Mann produced a very interesting graph of PolitiFact’s fact checking of a selection of politicians:

It isn’t even close! HRC is one of the most honest politicians (in terms of not telling falsehoods) and Trump is one of the most dishonest.

Yet, when I was debating folks on Democratic leaning blogs, I had people saying that Clinton was a pathological liar. When presented with this chart, they stuck to their guns. (Note, they didn’t think Obama was a liar.)

You can quibble with the methodology (see Mann’s blog post for a discussion), but PolitiFact’s fact checker tries to be evenhanded. One should be at least a little struck by this evidence.

But correction often just doesn’t work, backfires, or isn’t effective in changing attitudes and behavior. For example,

Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

Or consider Emily Thorson’s concept of belief echoes:

However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction–even when it is fully believed–does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call “belief echoes,” are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking.

Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person’s pre-existing political views.

We see this in the various formulations of the Clinton Rules.

One major harm of such mechanisms is that it opens up a line of defense for very bad people, e.g., Trump, to wit, that there are “Trump rules” and the bad things pointed out about him are fake. They aren’t, but why trust a gullible media about it?

I’ve had personal experience of this. I used to comment a lot on LGM. One commenter with a propensity for persistently saying very silly things (about, e.g., statistics, causality, politics, and even the law (they are a lawyer)) got to a point where they couldn’t stand my repeated refutations (including pointing out how they’d been refuted before). They embarked on a pretty systematic campaign to lie about me, primarily about my mental health: that I was “stalking” them, that I was on the verge of a breakdown, that they were frightened of me, that I had no sex life or other kind of life, that I spent large periods of time looking things up on them (stalking!), etc. These were transparent lies and obvious gaslighting. No one took them directly seriously, but they did have effects. People would see an exchange and assume that there was some fault on my part (however mild). This would pop up elsewhere, in other comments. Some of these people were more sympathetic to a gaslighting liar than they had any right to be.

So, pretty exemplary behavior and a sterling reputation vs. transparent lies and extremely bizarre slanders and…well, I’m the one not commenting any more. It worked, in a way. (Trump winning had an effect too. It’s not solely due to this bad behavior.)

Given sufficient shamelessness and no structural counter (e.g., moderation) and no big effort on my part (e.g., an active campaign), there’s little penalty for such lying and it advances their noxious cause.

These examples can be multiplied easily (anti-vaccine, pro-tobacco, climate change denial campaigns come to mind).

It’s very difficult to deal with. We need to.

Update:

How severe is the problem? I just saw a report on a survey using Trump’s and Obama’s inauguration crowd photos:

For the question about which image went with which inauguration, 41 percent of Trump supporters gave the wrong answer; that’s significantly more than the wrong answers given by 8 percent of Clinton voters and 21 percent of those who did not vote.

But what’s even more noteworthy is that 15 percent of people who voted for Trump told us that more people were in the image on the left — the photo from Trump’s inauguration — than the picture on the right. We got that answer from only 2 percent of Clinton voters and 3 percent of nonvoters.

The article discusses the idea of “expressive responding”:

Why would anyone give the wrong answer to a pretty simple question?

To many political psychologists, this exercise will be familiar. A growing body of research documents how fully Americans appear to hold biased positions about basic political facts. But scholars also debate whether partisans actually believe the misinformation and how many are knowingly giving the wrong answer to support their partisan team (a process called expressive responding).

Expressive responding is yet another form of lying with potentially far reaching consequences.

On Calling Out a Lie

Given the massive amount of un-, anti-, and non-truth spewed by Trump, his minions, and the Republican Party, the media has had a lot of trouble coping with it. Trumpsters and their ilk even have started complaining about “fake news” by which they don’t mean actual fake news, but instead they mean true news that they don’t like.

The media needs to deal with the situation better. There are lots of vulnerable points (e.g., the need for access, the cult of balance, the shamelessness of the deception). But one problem is a strong unwillingness to call a lie a lie (well, except for the liars, who are quite willing to call anything they don’t like a lie).

There’s a fairly narrow idea of a lie making its way around that’s used to justify this. Take Kevin Drum (who’s on the pro-call-out-lies side):

The problem with branding something a lie is that you have to be sure the speaker knew it was wrong. Otherwise it’s just ignorance or a mistake.

Arrrgh! Even Drum falls into a pretty obvious error! Just because you don’t utter a deliberate, explicit, knowing falsehood doesn’t mean you are innocently making some sort of error (i.e., acting from ignorance or making a mistake)! Just simple contemplation of lies of omission reveals that. Or recall standard tricks such as:

Is there anything else material that you want to tell us?

No.

But it says here that you did X and X is material! Why did you lie?!

I didn’t lie. I didn’t want to tell you about X.

Lots of people have come to rely on Frankfurt’s notion of “bullshit” (utterances made without regard for the truth) and “lie” (utterances made with a regard for falsity). I remember when Frankfurt’s article came out and I enjoyed it. It’s a nice distinction, but it’s been misused. A bullshitter is a kind of liar (or, if you want to be annoying, a deceiver). (Wikipedia correctly puts Frankfurtian “bullshit” as a topic on the “lie” page.)

Frankfurt spends a great deal of time trying to suss out the distinction between lying and bullshitting:

The elder Simpson identifies the alternative to telling a lie as bullshitting one’s way through. This involves not merely producing one instance of bullshit; it involves a program of producing bullshit to whatever extent the circumstances require. This is a key, perhaps, to his preference. Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth. On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared to fake the context as well, so far as need requires.

Meh. When you have enough fabrication and one of your targets is yourself, this idea of focus isn’t pertinent. One way of lying is being a shameless liar most of the time so when one speaks the truth one isn’t believed.

It is sometimes worth figuring out the etiology of someone’s false (or otherwise wrong) utterances. It can make a difference in how you counter them. If someone is mistaken, they may be amenable to correction. If they are a “true believer”, it may be quite difficult to merely correct them (so maybe you don’t bother).

But, with the Trumpians and other Republicans, come on. There needs to be some strict liability here. Lying so well that you convince even yourself that it’s true is a kind of lying. Coming to believe your own lies (supposedly) doesn’t get you off the hook for all that lying nor does it make it not lying.

I’m sorta ok with Drum’s desire to focus on deception rather than (narrow) lying. But…in ordinary vernacular, deception is lying. A lie of omission is a lie. If you bullshit me, you are lying to me. If you lie to yourself, you are lying.

With Trump, it’s super easy: it’s almost all straightforward lies.

Update: LGM caught up with the NYT finally putting “lie” in the headline with appropriate skepticism.

The Muddling of the Mental and the Physical

Nature also teaches me, through these sensations of pain, hunger, thirst and so on, that I (a thinking thing) am not merely in my body as a sailor is in a ship. Rather, I am closely joined to it—intermingled with it, so to speak—so that it and I form a unit. If this were not so, I wouldn’t feel pain when the body was hurt but would perceive the damage in an intellectual way, like a sailor seeing that his ship needs repairs. And when the body needed food or drink I would intellectually understand this fact instead of (as I do) having confused sensations of hunger and thirst. These sensations are confused mental events that arise from the union—the intermingling, as it were—of the mind with the body. Descartes, Meditation 6

Descartes is, of course, the arch-dualist. Mind and body are different substances with entirely different natures and can exist independently. Human beings, on the other hand, are not just their minds (even though the mind is the ego whose existence we know first, and best). The things that teach us that we form a kind of unit — pain, hunger, thirst, etc. — are perceptions of the body which differ from how we experience the rest of the world.

I was thinking about this because I’ve been feeling like crap for months now. Clearly there is a strong physical element, but equally so, there’s a strong mental component. They go back and forth in a complex dynamic but it’s not always clear which is which or even if they are fully separable. If I dry heave, it could be pure anxiety, a stomach virus, or a side effect of medication (perhaps for anxiety).

The most striking (for me) example in my personal history was the interaction between my inner ear issues and social anxiety. When I was a teenager, I developed an inner ear disorder that ranged from subtle to extremely overt (i.e., spinning for three days at a shot). But the effect of the subtle variant was that, in noisy environments with a fair bit of motion, my ability to distinguish my movement from other objects’ movement was diminished. (Think of being on a smooth, slow-moving train when it just starts up and you’ve been distracted.) This can make you feel very uneasy and off balance and…anxious.

This inflected my experience of social gatherings…dances, parties, etc. When this really got going I would feel unsettled and uncomfortable and usually seek a quiet berth (kitchen, outside, or…not there). Part of this was undoubtedly due to this inner ear phenomenon, but I had no idea that it even existed. So I interpreted this mostly physiological reaction as a dislike of parties or part of my social anxiety. Which didn’t help the anxiety at all. On the contrary.

We know that many physical illnesses tend to have certain mental comorbidities. Being sick sucks, so depression isn’t uncommon.

Our Cartesian unity…the fact that we are a big muddle of a complex system…makes life difficult. Our parts don’t swap easily.

Bernard Williams on Scribd

I’m a Scribd subscriber, though slightly sad that they killed (had to kill, I warrant) their all you can eat audiobook thing for a (fairly stingy by comparison) credit based system. Ah well. Free/super cheap things are hard and they still have a ton of books and their book selection is getting better.

In fact, they have a pretty good selection of Williams! Which is good, because basically all I’ve ever read of his is the Utilitarianism paper, and my nosing around suggested that there might be some interesting tensions between that and others of his works. Now I can read a big chunk without having to work hard to find/purchase/take it all out. It’s sort of the future! (Only sort of because the Scribd app/website experience is pretty horrendous.)

Alas, they seem to have no Langer. Oh well.

(Blogging everyday is hard. Esp. as I don’t have lots of filler ready to go and I quickly can get into writing a piece that will take hours.)

Bernard Williams on Case Studies

From “A critique of utilitarianism” (in Utilitarianism: For and Against, pp 96-96):

For a lot of the time so far we have been operating at an exceedingly abstract level. This has been necessary in order to get clearer in general terms about the differences between consequentialist and other outlooks, an aim which is important if we want to know what features of them lead to what results for our thought.

I found this a bit confusing, but I think the point here is conceptual clarity. Somehow, being clear in general terms helps us understand causal (or conceptual) relationships. I’m not convinced (or even convinced I understand it), but ok. Clear formulation of the manipulations or treatments we are comparing is a good idea. Whether we need to do this in general terms or not isn’t critical. We want to know exactly how each moral theory works in the cases under examination. At least, enough to “run the simulation”.

Now, however, let us look more concretely at two examples, to see what utilitarianism might say about them, what we might say about utilitarianism and, most importantly of all, what would be implied by certain ways of thinking about the situation.

At this point, I don’t know that it matters whether the cases are experiments or case studies. There are uses for either with these specific goals.

The examples are inevitably schematized, and they are open to the objection that they beg as many questions as they illuminate. There are two ways in particular in which examples in moral philosophy tend to beg important questions. One is that, as presented, they arbitrarily cut off and restrict the range of alternative courses of action…The second is that they inevitably present one with the situation as a going concern, and cut off questions about how the agent got into it, and correspondingly about moral considerations which might flow from that…

I’m not sure that these are quite matters of question begging. In general, moral reasoning (like most normal reasoning) is heavily non-monotonic: that is, the conclusion might change as you add new information (and change back as you add still more). And, with respect to the first, it’s clear that if we add a new possibility to a scenario that might change what’s right! (A moral dilemma is solved by finding a third, permitted, option, after all.) With respect to the second, obviously, backstory can matter quite a lot to our judgment: If a child takes a toy that another child is playing with, we might chide them, but it is a reasonable defense if the first child says, “This is my toy. I brought it here. They took it and won’t let me or anyone else play with it.”

These are threats to external and ecological validity if there is never a reasonable attenuation of factors to consider. (Williams makes this point later, sort of, as I will quote.) We never know all the backstory or are aware of all the options, so the mere fact that a scenario necessarily elides some options or backstory details is not itself a reasonable objection. These specific ones might fail because, say, no conclusion can be drawn without some backstory (whose toy is it?) or because there’s an obvious possible action not mentioned. But that’s a different problem.

I think these are different worries than the ones Nussbaum raised. To requote:

This task cannot be easily accomplished by texts which speak in universal terms—for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular.  Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style—for how can this style at all convey the way in which the “matter of the practical” appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face?

The second problem (hardness and plainness) is clearly not a matter of missing propositions (as with Williams’ problems), but of richness of form. (In a future post, I’ll use Suzanne Langer to articulate this a bit more.) Obviously, Nussbaum can live with finite presentations, but she thinks that philosophical writing fails in some ways when compared to novelistic writing.

These difficulties, however, just have to be accepted, and if anyone finds these examples cripplingly defective in this sort of respect, then he must in his own thought rework them in richer and less question-begging form.

I kinda agree and am kinda annoyed by this. In one sense, Williams is correct. If these examples don’t suit, one response is to enrich them. On the other, there’s no justification of his examples. Are they sufficiently rich as not to be cripplingly defective? And there are other respects in which they may be problematic (e.g., are they typical? representative? do they cover problems in non-utilitarian theories?). Philosophy of this era isn’t stylised in the way many scientific papers have become, but I kinda want a “materials” section that discusses the corpus of examples!

If he feels that no presentations of any imagined situation can ever be other than misleading in morality, and that there can never be any substitute for the concrete experienced complexity of actual moral situations

Note! Nussbaum thinks there is a substitute! But Williams isn’t writing a novel, and his examples are pretty abstract and weird, so he can still fail in Nussbaumian terms.

then this discussion, with him, must certainly grind to a halt: but one may legitimately wonder whether every discussion with him about conduct will not grind to a halt, including any discussion about the actual situations, since discussion about how one would think and feel about situations somewhat different from the actual (that is to say, situations to that extent imaginary) plays an important role in discussion of the actual.

One may legitimately wonder whether anyone would or has held such a silly position! Williams spends much more time defending against an extreme position that is so implausible he says that there is no talking to people who hold it than actually defending his actual examples. Indeed, he spends zero time defending his actual examples.

I, in general, love this essay. But whenever I dig in I really hate it. This is not good form. It gives the impression of giving due consideration as to whether the examples are useful and legit without even starting to do so.

I mean, consider that the imaginariness bit is just a red herring: We never have full knowledge of a situation. So we’re always working with an incomplete description even “in the moment”. So the real question is are we dealing with case descriptions of sufficient detail to allow for reasonably accurate simulation of moral deliberation. And I think we can answer that question, fallibly, partially, with the expectation that we can always do better. The Williams examples are not the worst ever, but they are much closer to thought experiments than thought case studies for all that he gives actors cute names (the wife and older friend don’t get names, nor does the captain or Indians, but Pedro does).

(I find the universal “he” pretty damn distracting, fwiw! I’m glad we’re past that.)

Experiments vs. Case Studies

My recent post on validities was motivated by John Protevi posting a draft of an abstract he was submitting about the Salaita affair. John focused on exploring the use of case studies in moral analysis. This prompts me to write up (again) my spiel on experiments and case studies.

The primary aim of a controlled experiment is internal validity, that is, demonstrating causal relationships. The primary tool for this is isolation, that is, we try to remove as much as possible so that any correlations we see are more likely to be causal. If you manipulate variable v1 and variable v2 responds systematically, and there are no other factors that change through the manipulation, then you have a case that changes in v1 cause those changes in v2. (Lots of caveats. You want to repeat it to rule out spontaneous changes to v2. Etc.) Of course, you have lots of problems holding everything except v1 and v2 fixed. It’s probably impossible in almost all cases. You may not know all the factors in play! This is especially true when it comes to people. So, you control as much as you can and use a large number of randomly selected participants to smooth out the unknowns (roughly). But critically, you shrink the v and up the n (i.e., repetitions).
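The logic above can be sketched in a few lines of code. This is purely a toy illustration (nothing from any actual study; the function name, the “true” effect size, and the hidden factor are all made up): we manipulate v1 by random assignment, an unknown factor also pushes v2 around, and upping n smooths that unknown out so the estimated effect converges on the true one.

```python
import random

def run_experiment(n, effect=2.0, noise=1.0, seed=0):
    """Toy randomized experiment.

    v1 is a 0/1 treatment assigned at random; v2 responds to v1
    plus a hidden factor we don't control plus measurement noise.
    Returns the estimated effect: mean(treated) - mean(control).
    """
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        v1 = rng.random() < 0.5            # random assignment of the manipulation
        hidden = rng.gauss(0, 1)           # unknown factor, washed out by randomization + n
        v2 = effect * v1 + hidden + rng.gauss(0, noise)
        (treated if v1 else control).append(v2)
    return sum(treated) / len(treated) - sum(control) / len(control)

# With small n the estimate is noisy; with large n it homes in on the true effect.
print(run_experiment(20))
print(run_experiment(50000))
```

The point of the sketch is just the “up the n” move: randomization doesn’t eliminate the hidden factor, it distributes it evenly across both groups, and a large n makes that evening-out reliable.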

Low v tends to hurt both external and ecological validity. In other circumstances, other factors might produce the changes in v2 (or block them!). For other controlled circumstances, it might be fairly easy to find the interaction. But for field circumstances, the number of factors potentially in play explodes.

Thus, the case study, where we lower the n (to n=1) in order to explore arbitrary numbers of factors. Of course, the price we pay for that is weakened internal and external validity, indeed, any sort of generalisability.

Of course, in non-experimental philosophy, the main form of experiment is the thought experiment. But you can see the logic of the experiment at work: the reason philosophers dream up outlandish circumstances is to isolate and amplify the target v1 and v2. Thus, in the trolley problem, you have a simple choice. No one else is involved, and we pit the number of lives against omission or commission, and the result is death. That the example is hard to relate to is a perfect example of a failure of ecological validity. But philosophers get so used to intuiting under thought-laboratory conditions that they become a bit like mice who have been bred to be susceptible to cancer: their reactions and thinking are suspect. (That it is all so clean and clever and pure makes it seem like one is thinking better. Bad mistake!)

Of course, we can have thought case studies as well. This is roughly what I take Martha Nussbaum to claim about novels in "Flawed Crystals: James's The Golden Bowl and Literature as Moral Philosophy":

To show forth the force and truth of the Aristotelian claim that "the decision rests with perception," we need, then – either side by side with a philosophical "outline" or inside it – texts which display to us the complexity, the indeterminacy, the sheer difficulty of moral choice, and which show us, as this text does concerning Maggie Verver, the childishness, the refusal of life involved in fixing everything in advance according to some system of inviolable rules. This task cannot be easily accomplished by texts which speak in universal terms – for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular. Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style – for how can this style at all convey the way in which the "matter of the practical" appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face? And how, without conveying this, can it convey the active adventure of the deliberative intelligence, the "yearnings of thought and excursions of sympathy" (p. 521) that make up much of our actual moral life?

I take this as precisely the point that more abstract explorations of moral reasoning lack ecological validity.

This, of course, has implications both for moral theorising and for moral education. Our moral theories are likely to be wrong about moral life in the field (and, I would argue, in the lab as well!). (I think this is what Bernard Williams was partly complaining about in Utilitarianism: For and Against.) But further, learning how to reason well about action in the circumstances of our lives won't work by ingesting abstract moral theories (even if they are more or less true). We still need to cultivate moral judgement.

I think we can do philosophical case studies that are not thought case studies just as we can do experimental philosophy without thought experiments. Indeed, I recommend it.

On Validities

In an Introduction to Symbolic Logic class offered by a philosophy department, you will probably learn:

  1. An argument is valid if, whenever the premises are all true, the conclusion must also be true.
  2. An argument is sound if it is valid and the premises are all true.

In such a class with a critical reasoning component, you will also learn about various common logical fallacies, that is, arguments which people take as valid but which are not (e.g., affirming the consequent, which is basically messing up modus ponens).
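For propositional arguments, both the definition of validity and the diagnosis of a fallacy can be checked mechanically by brute-forcing truth tables. Here's a minimal sketch; the `valid` helper and the lambda encoding of the connectives are my own illustration, not from any particular textbook:

```python
from itertools import product

def valid(premises, conclusion, atoms):
    """An argument is valid iff every truth assignment that makes all
    the premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: premises true, conclusion false
    return True

# Modus ponens: P -> Q, P, therefore Q  (valid)
mp = valid([lambda e: (not e["P"]) or e["Q"], lambda e: e["P"]],
           lambda e: e["Q"], ["P", "Q"])

# Affirming the consequent: P -> Q, Q, therefore P  (invalid)
ac = valid([lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]],
           lambda e: e["P"], ["P", "Q"])

print(mp, ac)  # prints "True False"
```

The counterexample the checker finds for affirming the consequent is the assignment P=False, Q=True: both premises hold but the conclusion fails.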

You might also get some discussion of "invalid but good" arguments, namely, various inductive arguments. (Perhaps these days texts include some proper statistical reasoning.) But that framing is passé: I think reserving "validity" for "deductive validity" is unhelpful. In many scientific papers, there will be a section on "threats to validity" where the authors address various issues with the evidence they provide, typically:

  1. Internal validity (the degree to which the theory, experimental design, and results support concluding that there is a causal relationship between key correlated variables)
  2. External validity (the degree to which the theory, experimental design, and results generalise to other (experimental) populations and situations)
  3. Ecological (or field) validity (the degree to which the theory, experimental design, and results generalise to “real world” conditions)

There are dozens of other sorts of validity. Indeed, the Wikipedia article presents deductive validity as restricted:

It is generally accepted that the concept of scientific validity addresses the nature of reality and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the truth of inferences made from premises.

I like the general idea that the validity of an argument is the extent to which the argument achieves what it is trying to achieve. Typically, that is to establish the truth (or likelihood) of a conclusion. Deductions are useful, but they aren't what you need most of the time. Indeed, per usual, establishing the truth of the premises is critical! And we usually can't fully determine the truth of the premises! So, we need to manage lots of kinds of evidence in lots of different ways.

An argument is a relationship between evidence and a claim. The case where the relationship is deductive is wonderful and exciting and fun, but let’s not oversell it.