Archive for the 'Philosophy' Category

Countering (Massive Numbers of) Lies Doesn’t Work

January 25, 2017

Lies are dangerous in a number of ways. Put aside the fact that there are lots of situations where a false belief leads to very bad action (e.g., believing homeopathy is an effective cancer treatment leads to forgoing treatment that would have saved one’s life or mitigated suffering). Lies are also dangerous because people with bad agendas tend to resort to them because they can’t win on the merits. And they don’t just resort to a bit of deception, or even clever deception. It turns out that wholesale, massive, shameless, easily rebutted lies are pretty effective, at least for some purposes.

Consider the decades-long attack on the EU:

But Britain has a long and well-observed tradition of fabricating facts about Europe—so much so that the European Commission (EC) set up a website to debunk these lies in the early 1990s. Try our interactive quiz below and see if you can spot the myths.

Since then the EC has responded to over 400 myths published by the British media. These range from the absurd (fishing boats will be forced to carry condoms) to the ridiculous (zippers on trousers will be banned). Some are seemingly the result of wilful misunderstandings.

Sadly, for all the commission’s hard work, it is unlikely to be heard. The average rebuttal is read about 1,000 times. The Daily Mail’s website, by contrast, garners 225m visitors each month.

And, of course, the Leave campaign itself was almost wholly lie-based. Remain made some (economic) predictions that were falsified (and that needs to be understood), but it didn’t traffic in wholesale lies, to my knowledge.

Similarly, we have a decades-long campaign against Hillary Clinton based almost entirely on easily debunked lies. Just take claims about her honesty (esp. next to Trump). Robert Mann produced a very interesting graph of PolitiFact’s fact checking of a selection of politicians:

It isn’t even close! HRC is one of the most honest politicians (in terms of how rarely she tells falsehoods) and Trump is one of the most dishonest.

Yet, when I was debating folks on Democratic leaning blogs, I had people saying that Clinton was a pathological liar. When presented with this chart, they stuck to their guns. (Note, they didn’t think Obama was a liar.)

You can quibble with the methodology (see Mann’s blog post for a discussion), but PolitiFact’s fact checkers try to be evenhanded. One should be at least a little struck by this evidence.

But correction often just doesn’t work, backfires, or isn’t effective in changing attitudes and behavior. For example,

Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

Or consider Emily Thorson’s concept of belief echoes:

However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction–even when it is fully believed–does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call “belief echoes,” are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking.

Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person’s pre-existing political views

We see this in the various formulations of the Clinton Rules.

One major harm of such mechanisms is that they open up a line of defense for very bad people, e.g., Trump: to wit, that there are “Trump rules” and the bad things pointed out about him are fake. They aren’t, but why trust a gullible media about it?

I’ve had personal experience of this. I used to comment a lot on LGM. One commenter with a propensity for persistently saying very silly things (about, e.g., statistics, causality, politics, and even the law (they are a lawyer)) got to a point where they couldn’t stand my repeated refutations (including pointing out how they’d been refuted before). They embarked on a pretty systematic campaign to lie about me, primarily about my mental health: that I was “stalking” them, that I was on the verge of a breakdown, that they were frightened of me, that I had no sex life or any other kind of life, that I spent large periods of time looking up things about them (stalking!), etc. These were transparent lies and obvious gaslighting. No one took them directly seriously, but they did have effects. People would see an exchange and assume that there was some fault on my part (however mild). This would pop up elsewhere, in other comments. Some of these people were more sympathetic to a gaslighting liar than they had any right to be.

So, pretty exemplary behavior and a sterling reputation vs. transparent lies and extremely bizarre slanders and…well, I’m the one not commenting any more. It worked, in a way. (Trump winning had an effect too. It’s not solely due to this bad behavior.)

Given sufficient shamelessness, no structural counter (e.g., moderation), and no big effort on my part (e.g., an active counter-campaign), there’s little penalty for such lying, and it advances the liar’s noxious cause.

These examples can be multiplied easily (anti-vaccine, pro-tobacco, climate change denial campaigns come to mind).

It’s very difficult to deal with. We need to.

Update:

How severe is the problem? I just saw a report on a survey using Trump’s and Obama’s inauguration crowd photos:

For the question about which image went with which inauguration, 41 percent of Trump supporters gave the wrong answer; that’s significantly more than the wrong answers given by 8 percent of Clinton voters and 21 percent of those who did not vote.

But what’s even more noteworthy is that 15 percent of people who voted for Trump told us that more people were in the image on the left — the photo from Trump’s inauguration — than the picture on the right. We got that answer from only 2 percent of Clinton voters and 3 percent of nonvoters.

The article discusses the idea of “expressive responding”:

Why would anyone give the wrong answer to a pretty simple question?

To many political psychologists, this exercise will be familiar. A growing body of research documents how fully Americans appear to hold biased positions about basic political facts. But scholars also debate whether partisans actually believe the misinformation and how many are knowingly giving the wrong answer to support their partisan team (a process called expressive responding).

Expressive responding is yet another form of lying with potentially far reaching consequences.

On Calling Out a Lie

January 24, 2017

Given the massive amount of un-, anti-, and non-truth spewed by Trump, his minions, and the Republican Party, the media has had a lot of trouble coping with it. Trumpsters and their ilk have even started complaining about “fake news”, by which they don’t mean actual fake news but rather true news that they don’t like.

The media needs to deal with the situation better. There are lots of vulnerable points (e.g., the need for access, the cult of balance, the shamelessness of the deception). But one problem is a strong unwillingness to call a lie a lie (well, except for the liars, who are quite willing to call anything they don’t like a lie).

There’s a fairly narrow idea of a lie making its way around that’s used to justify this. Take Kevin Drum (who’s on the pro-call-out-lies side):

The problem with branding something a lie is that you have to be sure the speaker knew it was wrong. Otherwise it’s just ignorance or a mistake.

Arrrgh! Even Drum falls into a pretty obvious error! Just because you don’t utter a deliberate, explicit, knowing falsehood doesn’t mean you are innocently making some sort of error (i.e., acting from ignorance or making a mistake)! Simple contemplation of lies of omission reveals that. Or recall standard tricks such as:

Is there anything else material that you want to tell us?

No.

But it says here that you did X and X is material! Why did you lie?!

I didn’t lie. I didn’t want to tell you about X.

Lots of people have come to rely on Frankfurt’s notions of “bullshit” (utterances made without regard for the truth) and “lie” (utterances made with a regard for falsity). I remember when Frankfurt’s article came out and I enjoyed it. It’s a nice distinction, but it’s been misused. A bullshitter is a kind of liar (or, if you want to be annoying, a deceiver). (Wikipedia correctly puts Frankfurtian “bullshit” as a topic on the “lie” page.)

Frankfurt spends a great deal of time trying to suss out the distinction between lying and bullshitting:

The elder Simpson identifies the alternative to telling a lie as bullshitting one’s way through. This involves not merely producing one instance of bullshit; it involves a program of producing bullshit to whatever extent the circumstances require. This is a key, perhaps, to his preference. Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth. On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared to fake the context as well, so far as need requires.

Meh. When you have enough fabrication and one of your targets is yourself, this idea of focus isn’t pertinent. One way of lying is being a shameless liar most of the time, so that when one does speak the truth one isn’t believed.

It is sometimes worth figuring out the etiology of someone’s false (or otherwise wrong) utterances. It can make a difference in how you counter them. If someone is mistaken, they may be amenable to correction. If they are a “true believer”, it may be quite difficult to merely correct them (so maybe you don’t bother).

But, with the Trumpians and other Republicans, come on. There needs to be some strict liability here. Lying so well that you convince even yourself that it’s true is a kind of lying. Coming to believe your own lies (supposedly) doesn’t get you off the hook for all that lying nor does it make it not lying.

I’m sorta ok with Drum’s desire to focus on deception rather than (narrow) lying. But…in ordinary vernacular, deception is lying. A lie of omission is a lie. If you bullshit me, you are lying to me. If you lie to yourself, you are lying.

With Trump, it’s super easy: it’s almost all straightforward lies.

Update: LGM caught up with the NYT finally putting “lie” in the headline with appropriate skepticism.

The Muddling of the Mental and the Physical

September 4, 2016

Nature also teaches me, through these sensations of pain, hunger, thirst and so on, that I (a thinking thing) am not merely in my body as a sailor is in a ship. Rather, I am closely joined to it—intermingled with it, so to speak—so that it and I form a unit. If this were not so, I wouldn’t feel pain when the body was hurt but would perceive the damage in an intellectual way, like a sailor seeing that his ship needs repairs. And when the body needed food or drink I would intellectually understand this fact instead of (as I do) having confused sensations of hunger and thirst. These sensations are confused mental events that arise from the union—the intermingling, as it were—of the mind with the body. Descartes, Meditation 6

Descartes is, of course, the arch-dualist. Mind and body are different substances with entirely different natures and can exist independently. Human beings, on the other hand, are not just their minds (even though the mind is the ego whose existence we know first, and best). The things that teach us that we form a kind of unit — pain, hunger, thirst, etc. — are perceptions of the body which differ from how we experience the rest of the world.

I was thinking about this because I’ve been feeling like crap for months now. Clearly there is a strong physical element, but equally so, there’s a strong mental component. They go back and forth in a complex dynamic but it’s not always clear which is which or even if they are fully separable. If I dry heave, it could be pure anxiety, a stomach virus, or a side effect of medication (perhaps for anxiety).

The most striking (for me) example in my personal history was the interaction between my inner ear issues and social anxiety. When I was a teenager, I developed an inner ear disorder that ranged from subtle to extremely overt (i.e., spinning for three days at a shot). But the effect of the subtle variant was that, in a noisy environment with a fair bit of motion, my ability to distinguish my movement from other objects’ movement was diminished. (Think of being on a smooth, slow-moving train when it just starts up and you’ve been distracted.) This can make you feel very uneasy and off balance and…anxious.

This inflected my experience of social gatherings…dances, parties, etc. When this really got going I would feel unsettled and uncomfortable and usually seek a quiet berth (kitchen, outside, or…not there). Part of this was undoubtedly due to this inner ear phenomenon, but I had no idea that it even existed. So I interpreted this mostly physiological reaction as a dislike of parties or part of my social anxiety. Which didn’t help the anxiety at all. On the contrary.

We know that many physical illnesses tend to have certain mental co-morbidities. Being sick sucks, so depression isn’t uncommon.

Our Cartesian unity…the fact that we are a big muddle of a complex system…makes life difficult. Our parts don’t swap easily.

Bernard Williams on Scribd

January 8, 2016

I’m a Scribd subscriber, though slightly sad that they killed (had to kill, I warrant) their all-you-can-eat audiobook offering for a (fairly stingy by comparison) credit-based system. Ah well. Free/super cheap things are hard, and they still have a ton of books, and their book selection is getting better.

In fact, they have a pretty good selection of Williams! Which is good, because basically all I’ve ever read of his is the Utilitarianism paper, and my nosing around suggested that there might be some interesting tensions between that and other parts of his work. Now I can read a big chunk without having to work hard to find/purchase/check it all out. It’s sort of the future! (Only sort of, because the Scribd app/website experience is pretty horrendous.)

Alas, they seem to have no Langer. Oh well.

(Blogging every day is hard. Esp. as I don’t have lots of filler ready to go and I can quickly get into writing a piece that will take hours.)

Bernard Williams on Case Studies

January 6, 2016

From “A critique of utilitarianism” (in Utilitarianism: For and Against, pp 96-96):

For a lot of the time so far we have been operating at an exceedingly abstract level. This has been necessary in order to get clearer in general terms about the differences between consequentialist and other outlooks, an aim which is important if we want to know what features of them lead to what results for our thought.

I found this a bit confusing, but I think the point here is conceptual clarity. Somehow, being clear in general terms helps us understand causal (or conceptual) relationships. I’m not convinced (or even convinced I understand it), but ok. Clear formulation of the manipulations or treatments we are comparing is a good idea. Whether we need to do this in general terms or not isn’t critical. We want to know exactly how each moral theory works in the cases under examination. At least, enough to “run the simulation”.

Now, however, let us look more concretely at two examples, to see what utilitarianism might say about them, what we might say about utilitarianism and, most importantly of all, what would be implied by certain ways of thinking about the situation.

At this point, I don’t know that it matters whether the cases are experiments or case studies. There are uses for either with these specific goals.

The examples are inevitably schematized, and they are open to the objection that they beg as many questions as they illuminate. There are two ways in particular in which examples in moral philosophy tend to beg important questions. One is that, as presented, they arbitrarily cut off and restrict the range of alternative courses of action…The second is that they inevitably present one with the situation as a going concern, and cut off questions about how the agent got into it, and correspondingly about moral considerations which might flow from that…

I’m not sure that these are quite matters of question begging. In general, moral reasoning (like most normal reasoning) is heavily non-monotonic: that is, the conclusion might change as you add new information (and change back as you add still more). And, with respect to the first, it’s clear that if we add a new possibility to a scenario that might change what’s right! (A moral dilemma is solved by finding a third, permitted, option, after all.) With respect to the second, obviously, backstory can matter quite a lot to our judgment: If a child takes a toy that another child is playing with, we might chide them, but it is a reasonable defense if the first child says, “This is my toy. I brought it here. They took it and won’t let me or anyone else play with it.”

These are threats to external and ecological validity only if there is never a reasonable attenuation of the factors to consider. (Williams makes this point later, sort of, as I will quote.) We never know all the backstory or are aware of all the options, so the mere fact that a scenario necessarily elides some option or backstory detail is not itself a reasonable objection. These specific examples might fail because, say, no conclusion can be drawn without some backstory (whose toy is it?) or because there’s an obvious possible action not mentioned. But that’s a different problem.

I think these are different worries from the ones Nussbaum raised. To requote:

This task cannot be easily accomplished by texts which speak in universal terms—for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular.  Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style—for how can this style at all convey the way in which the “matter of the practical” appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face?

The second problem (hardness and plainness) is clearly not a matter of missing propositions (as with Williams’ problems), but of richness of form. (In a future post, I’ll use Suzanne Langer to articulate this a bit more.) Obviously, Nussbaum can live with finite presentations, but she thinks that philosophical writing fails in some ways when compared to novelistic writing.

These difficulties, however, just have to be accepted, and if anyone finds these examples cripplingly defective in this sort of respect, then he must in his own thought rework them in richer and less question-begging form.

I kinda agree and am kinda annoyed by this. In one sense, Williams is correct. If these examples don’t suit, one response is to enrich them. On the other, there’s no justification of his examples. Are they sufficiently rich as not to be cripplingly defective? And there are other respects in which they may be problematic (e.g., are they typical? representative? do they cover problems in non-utilitarian theories?) Philosophy of this era isn’t stylised in the way many scientific papers have become, but I kinda want a “materials” section that discusses the corpus of examples!

If he feels that no presentations of any imagined situation can ever be other than misleading in morality, and that there can never be any substitute for the concrete experienced complexity of actual moral situations

Note! Nussbaum thinks there is a substitute! But Williams isn’t writing a novel, and his examples are pretty abstract and weird, so he can still fail in Nussbaumian terms.

then this discussion, with him, must certainly grind to a halt: but one may legitimately wonder whether every discussion with him about conduct will not grind to a halt, including any discussion about the actual situations, since discussion about how one would think and feel about situations somewhat different from the actual (that is to say, situations to that extent imaginary) plays an important role in discussion of the actual.

One may legitimately wonder whether anyone would or has held such a silly position! Williams spends much more time defending against an extreme position that is so implausible he says that there is no talking to people who hold it than actually defending his actual examples. Indeed, he spends zero time defending his actual examples.

I, in general, love this essay. But whenever I dig in I really hate it. This is not good form. It gives the impression of giving due consideration as to whether the examples are useful and legit without even starting to do so.

I mean, consider that the imaginariness bit is just a red herring: We never have full knowledge of a situation. So we’re always working with an incomplete description, even “in the moment”. So the real question is whether we are dealing with case descriptions of sufficient detail to allow for reasonably accurate simulation of moral deliberation. And I think we can answer that question, fallibly, partially, with the expectation that we can always do better. The Williams examples are not the worst ever, but they are much closer to thought experiments than thought case studies, for all that he gives the actors cute names (the wife and older friend don’t get names, nor do the captain or the Indians, but Pedro does).

(I find the universal “he” pretty damn distracting, fwiw! I’m glad we’re past that.)

Experiments vs. Case Studies

January 4, 2016

My recent post on validities was motivated by John Protevi posting a draft of an abstract he was submitting about the Salaita affair. John focused on exploring the use of case studies in moral analysis. This prompts me to write up (again) my spiel on experiments and case studies.

The primary aim of a controlled experiment is internal validity, that is, demonstrating causal relationships. The primary tool for this is isolation, that is, we try to remove as much as possible so that any correlations we see are more likely to be causal. If you manipulate variable v1, variable v2 responds systematically, and there are no other factors that change through the manipulation, then you have a case that changes in v1 cause those changes in v2. (Lots of caveats. You want to repeat it to rule out spontaneous changes to v2. Etc.) Of course, you have lots of problems holding everything except v1 and v2 fixed. It’s probably impossible in almost all cases. You may not know all the factors in play! This is especially true when it comes to people. So, you control as much as you can and use a large number of randomly selected participants to smooth out the unknowns (roughly). But critically, you shrink the number of variables (v) and up the number of repetitions (n).

A low v tends to hurt both external and ecological validity. In other circumstances, other factors might produce the changes in v2 (or block them!). In other controlled circumstances, it might be fairly easy to find the interaction. But for field circumstances, the number of factors potentially in play explodes.

Thus, the case study, where we lower the number of n (to n=1) in order to explore arbitrary numbers of factors. Of course, the price we pay for that is weakening internal and external validity, indeed, any sort of generalisability.

Of course, in non-experimental philosophy, the main form of experiment is the thought experiment. But you can see the logic of the experiment at work: the reason philosophers dream up outlandish circumstances is to isolate and amplify the target v1 and v2. Thus, in the trolley problem, you have a simple choice. No one else is involved, we pit number of lives vs. omission or commission, and the result is death. That the example is hard to relate to is a perfect example of a failure of ecological validity. But philosophers get so used to intuiting under thought-laboratory conditions that they become a bit like mice who have been bred to be susceptible to cancer: their reactions and thinking are suspect. (That it is all so clean and clever and pure makes it seem like one is thinking better. Bad mistake!)

Of course, we can have thought case studies as well. This is roughly what I take Martha Nussbaum to claim about novels in “Flawed Crystals: James’s The Golden Bowl and Literature as Moral Philosophy“:

To show forth the force and truth of the Aristotelian claim that “the decision rests with perception,” we need, then—either side by side with a philosophical “outline” or inside it—texts which display to us the complexity, the indeterminacy, the sheer difficulty of moral choice, and which show us, as this text does concerning Maggie Verver, the childishness, the refusal of life involved in fixing everything in advance according to some system of inviolable rules. This task cannot be easily accomplished by texts which speak in universal terms—for one of the difficulties of deliberation stressed by this view is that of grasping the uniqueness of the new particular. Nor can it easily be done by texts which speak with the hardness or plainness which moral philosophy has traditionally chosen for its style—for how can this style at all convey the way in which the “matter of the practical” appears before the agent in all of its bewildering complexity, without its morally salient features stamped on its face? And how, without conveying this, can it convey the active adventure of the deliberative intelligence, the “yearnings of thought and excursions of sympathy” (p. 521) that make up much of our actual moral life?

I take this as precisely the point that more abstract explorations of moral reasoning lack ecological validity.

This, of course, has implications both for moral theorising and for moral education. Our moral theories are likely to be wrong about moral life in the field (and, I would argue, in the lab as well!). (I think this is what Bernard Williams was partly complaining about in Utilitarianism: For and Against.) But further, learning how to reason well about action in the circumstances of our lives won’t work by ingesting abstract moral theories (even if they are more or less true). We still need to cultivate moral judgement.

I think we can do philosophical case studies that are not thought case studies just as we can do experimental philosophy without thought experiments. Indeed, I recommend it.

On Validities

January 2, 2016

In an Introduction to Symbolic Logic class offered by a philosophy department, you will probably learn:

  1. An argument is valid if, when the premises are all true, the conclusion is (or must be) true.
  2. An argument is sound if it is valid and the premises are all true.

In such a class with a critical reasoning component, you will also learn about various common logical fallacies, that is, arguments which people take as valid but which are not (e.g., affirming the consequent, i.e., inferring P from “if P then Q” together with Q, which is basically messing up modus ponens).

You might also get some discussion of “invalid but good” arguments, namely, various inductive arguments. (Perhaps these days texts include some proper statistical reasoning.) But that restricted notion of validity is passé: I think reserving “validity” for “deductive validity” is unhelpful. In many scientific papers, there will be a section on “threats to validity” where the authors address various issues with the evidence they provide, typically:

  1. Internal validity (the degree to which the theory, experimental design, and results support concluding that there is a causal relationship between key correlated variables)
  2. External validity (the degree to which the theory, experimental design, and results generalise to other (experimental) populations and situations)
  3. Ecological (or field) validity (the degree to which the theory, experimental design, and results generalise to “real world” conditions)

There are dozens of other sorts of validity. Indeed, the Wikipedia article presents deductive validity as restricted:

It is generally accepted that the concept of scientific validity addresses the nature of reality and as such is an epistemological and philosophical issue as well as a question of measurement. The use of the term in logic is narrower, relating to the truth of inferences made from premises.

I like the general idea that the validity of an argument is the extent to which the argument achieves what it is trying to achieve. Typically, this is to establish the truth (or likelihood) of a conclusion. Deductions are useful, but they aren’t what you need most of the time. Indeed, per usual, establishing the truth of the premises is critical! And we usually can’t fully determine the truth of the premises! So, we need to manage lots of kinds of evidence in lots of different ways.

An argument is a relationship between evidence and a claim. The case where the relationship is deductive is wonderful and exciting and fun, but let’s not oversell it.

An Alternative Reproductive Imperative Argument

September 6, 2015

The Challenge

There was some buzz about an essay by Torbjörn Tännsjö which was commissioned by Vox and then rejected. I heard about it via LGM and got into quite a bit of discussion in the comments. Tännsjö was trying to present an extraction of an argument from his recent book in about 1000 words for a general audience. He is a pretty hard-core utilitarian with some focus on the Repugnant Conclusion.

It’s easy to see that this could go pear-shaped.

The essay is a disaster, I think. But what I read of his other work is not so disastrous. This is a hard form to write in, and his is a hard topic to fit into this form.

OK: having lots of important, time-critical stuff to do, I naturally am going to give it a shot. Yay me!

Constraints:

  1. It must be about 1000 words
  2. It must argue that it is plausible that we have a moral duty to produce a lot of children…close to resource limitations all other things being equal.
  3. It must go by way of the Repugnant Conclusion.

Whew!

The Reproductive Imperative

While there are many belief systems which impose a duty to have children, even lots of children or as many children as you can produce, many of these justify that duty by a command (“Be fruitful and multiply!”) or by subgroup survival preferences. And there are clearly strong personal, partially biologically supported reasons that many, perhaps most, people want kids. But is there any positive duty to have kids, or a lot of kids? Is the choice whether or not to have children fundamentally subject to a direct (though perhaps overridable) moral imperative? I think it’s fair to say that in advanced industrial societies there’s a strong current of thought that says that having children is a free choice, though having a lot of children is problematic.

There is a somewhat surprising argument that we all have a positive duty to produce as many children as reasonably possible. Indeed, the argument holds that it’s quite likely we should have more children rather than fewer but happier children. The surprising part is that the argument is quite difficult to resist and follows from straightforward moral intuitions.

A Fantasy Scenario

Imagine a world somewhat like ours. It has 10 billion people living on it but it has solved the sustainability issues of such a population and, furthermore, the standard of living of 99% of the population is equivalent to living in a US household with an income of $300k/year. This is a rich rich world full of happy people.

Now, suppose a disaster was about to strike this world which would wipe out the whole population. You have two possible solutions: one (A) will kill off half the population, but leave the rest untouched. The other (B) will maintain the whole population, but half would see their income reduced to about $50k a year.

I trust that most people think that saving 5 billion people is the clearly right thing to do. That is, it’s hard to see how to ethically prefer A to B. Indeed, if you thought that A was the only solution, it would be a profound relief that someone came up with B.

It’s not hard to see why! Those 5 billion people almost certainly prefer to live, even under reduced circumstances. Their lives are filled with good things and thus are a good thing we should preserve.

This all seems very straightforward. But what if our world has only 5 billion people on it (with $300k incomes) and we know we could add another 5 billion (though with only $50k incomes)? How would we add them? Probably by having more babies, but we could imagine that we had the technology to produce babies without anyone having to take on the risk of pregnancy. So, we have worlds A (5 billion, all rich) and B (10 Billion, half rich, half ok) again. If we preferred B to A before, shouldn’t we do the same now? After all, they are exactly the same worlds. The only difference in the scenarios is how we get to them.

My personal gut reaction is that, in the “saving” scenario, I have a strong moral obligation to get to B. Lots of sacrifices would be justified, and if I picked A over B then I would be a monster.

But with the multiplying option, I feel a lot more indifferent. I don’t think B should be blocked, but I don’t feel that I’m a monster for not picking it. Yet it’s very unclear, from a moral perspective, why I should distinguish the two cases. In the first, I’ve saved a lot of good, but in the second I’ve produced a lot of good! If I value good enough to save it, why doesn’t that push me to produce it?

This preference for worlds with more and more people (even if their individual happiness is lower than in worlds with smaller populations) is known as the Repugnant Conclusion, and it’s one of the trickier bits of moral theory.

Now we see, roughly, that we can get from the Repugnant Conclusion to the Reproductive Imperative: The “easiest” way for fertile people to produce significantly more good in the world is to have another baby (assuming that we aren’t at carrying capacity, or it won’t destroy the parents’ lives). It’s pretty hard to do more good than creating another person (just as it’s hard to do more bad than killing one).

Note that we don’t need to produce the “best” babies or babies in the best circumstances. It’s hard to see that the best-off person in the world is better off than two other people (at a reasonable level of welfare) combined. People grow up, fall in love, have families, enjoy food, watch movies, hang out with friends, and do a myriad of other things that imbue a life with value. In general, it’s very hard to make a person twice as happy as they were, but it’s pretty easy to produce two people who are pretty happy. Thus, if we have obligations to produce “better” worlds, we have some obligation to have as many children as we reasonably can.

Yick

My gut reaction to this conclusion is extremely negative. After all, the burden of duties to have children has typically fallen heavily on women, including through feelings of guilt. It’s easy to see that adding a secular pro-having-children argument could have some nasty effects in current societies. Similarly, I suspect that slowing down population growth might be helpful given some of our more extreme resource-bound problems. It’s not a given, of course, because we can up our consumption levels pretty quickly even without population growth. But it’s an intuition that many share.

But these don’t attack the argument directly. They show that the argument may not be applicable to our society as it stands, but then that just provides an argument that we should be pursuing changes to our society so as to make increased population feasible.

Resisting the argument on its own terms is difficult. We might try to make a distinction between “saving” existing people and “producing” future ones (who currently don’t exist). We might claim that while we may have strong obligations toward existing people, we can have no obligations toward non-existing people, since they aren’t there to be owed anything. This would justify a difference in preference between saving and producing.

But there are costs to this solution. In particular, it seems to ignore something essential about the psychology of preferring B to A in the saving case: We cherish not the bare lives shorn of the living which make them worth living, but the living itself. And that cherishing is similar to the cherishing we have for doing good for others, for bringing more good into existence. This suggests that the repugnance we feel might come from a systematic misunderstanding we have about the case.

Reflections

I think this is a better essay with close enough content. It’s not a rewrite of Tännsjö’s obviously, but it seems to be a near neighbour.

Part of the problem with Tännsjö’s essay is that it simultaneously does too much and too little, and in a very disjointed way. I just re-read it and felt a sense of content whiplash. Now, for an audience that is already inclined to be pro-maximal reproduction, perhaps the initial shock would be lower, so they’d find the rest of the ride a bit smoother. But if you already agree that we should maximise baby production, why is the rest of the article needed? The target audience has to be one who isn’t on board from the start. But then you really need to address the fact that there will be a lot of issues that aren’t purely argument related. That is, the dialectic is complex. Throwing out a wacky conclusion as if it were immediate and obvious, then dealing (poorly) with two objections that are not the immediate reactions people would have, just makes for confusion.

I tried to address this in several ways. First, I put the authorial voice in the same situation as the reluctant reader. It helps that I am such a reader, but that doesn’t matter per se. I want the reluctant reader to come with me through the argument, not reject it reflexively. Second, I separate out the Repugnant Conclusion from the Reproductive Imperative and start with what I think is a very strong presentation of the Repugnant Conclusion. This lets me push the idea that even if we can ultimately reject the RI, the RC makes it difficult. And our rejection of the RI might be different than we thought it was. I certainly now think a bit differently about the RI than I was used to. Perhaps I’ll get back to my earlier perspective eventually, but it’s not as easy to support as I’d like. (More on this in a subsequent post.)

On the plus side, I think we have a good basis in this argument for a better utilitarian handling of disability. Some of Tännsjö’s papers touch on this and I think a direct argument against eugenics is well worth having. More on this later as well.

The shortness of the form is very challenging (mine was 1179 words and ends abruptly). These are not easy waters.

The Liberty Principle, Gay Marriage, and Sleeping Under Bridges

January 6, 2015

There is much to dislike about McAdams’s bog-standard right-wing “omg, PCness in the university” attack on Cheryl Abbate, with a fair number of the issues articulated in several Daily Nous posts. There are a lot of academic freedom bits to think about in everything from how Abbate handled the student, to McAdams’s response, to the university’s response to McAdams. At first blush, basically everyone except Abbate has behaved rather badly. (Really, Mr. Undergrad? You secretly taped your instructor during a fishing expedition? Sheesh.)

I do think the question she raised in class (roughly, what are some positions that conflict with Rawls’ Liberty principle?) and the particular proposition (that gay marriage bans, or the lack of gay marriage, conflict with the Liberty principle) are pretty interesting. So that’s what this blog post is about. I’m going to go with the minimal level of scholarship I can get away with, as I don’t have any texts handy and don’t feel like futzing around to get them.

Rawls’ Liberty principle goes roughly (since there are some variants):

Each person has the same indefeasible claim to a fully adequate scheme of equal basic liberties, which scheme is compatible with the same scheme of liberties for all;

Now, there are a range of anti-gay marriage legal situations possible. Gay marriage might be unrecognised by the state in a variety of ways (e.g., there’s a legally identical status which is not called “marriage”; there’s a related status, but it doesn’t function the same way e.g., it allows for joint tax returns but only overridable next of kin status). Gay marriage or gay marriage recognition might be affirmatively banned (again, in a variety of ways up to making any sort of homosexual relationship illegal and harshly punished). The basic situation I’ll consider is that we have a legally recognised relationship called “marriage” which has roughly the set of formal and informal benefits and privileges that marriage in the US has and is restricted to opposite sex couples. (I’ll call this the Moderately Sucky Regime (MSR). It’s only moderately sucky because there aren’t punishments for being in a gay relationship and yes this is grading on a curve.) Is this permitted by the Liberty principle?

The “Duh It’s Incompatible” Line

I think this should be the obvious, default starting place. Take two women, Mary1 and Mary2, who differ only in that Mary1 loves Juan (a cis-hetero man) and Mary2 loves Juanita (a cis-lesbian woman). In the MSR, Mary1 has the right to marry Juan (assuming, e.g., they both want to get married, neither is otherwise currently married, etc., so ceteris paribus), but Mary2 does not have the right to marry Juanita. Marrying is either a fairly basic liberty, or it’s heavily implicated in a number of basic liberties, or it is implied by some basic liberties (various forms of association, for example).

I take it most people think it’s a basic liberty these days. So this argument sets the burden appropriately.

The Majestic Awesomeness of Freedom to Marry Only Outside Your Orientation

There is the oft-quoted Faux Liberty Principle (Anatole France):

The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, beg in the streets or steal bread.

This is a principle driven by formalist equality: as long as the law makes no formal (or perhaps explicit) group distinction, it treats those groups equally. The application of this variant of the principle to gay marriage would be something like:

Hey! Mary2 can get married…to a person of the opposite sex. EVERYONE can get married to someone of the opposite sex. Even straight folks can’t marry people of the same sex. So everyone has exactly the same rights!!!

I think this is a possibly non-homophobic attempt to reconcile opposition to gay marriage with the Liberty principle. Indeed, it could be offered as a reductio of the Liberty principle as a sufficient or correct or useful principle of justice.

Now, with respect to the Abbate case, it’s important to note that the gay marriage instance of the Majestic Equality reading, while justifying the MSR, is not the only instance. The original one will do nicely. One can run it for less controversial marriage situations as well as many other disparate-impact laws. The gay marriage version is merely timely, not uniquely good. Timely topics can be pedagogically effective, but they can also be a pedagogic disaster. This is easily seen when the learning outcome has little to do with the timely topic per se. With a timely topic, you run the risk that people will be too engaged with it, either because they have settled and passionate opinions or because they just can’t easily separate out the public focus from what’s needed to make the classroom point. So the benefit (the students have knowledge and interest) can be a problem.

This is putting aside the possibility that people might behave badly to the detriment of other students, or that relentlessly hammering on even the non-homophobic variant might be unduly and pointlessly upsetting to other students. You don’t have to think that one must shield students from every uncomfortable thing to acknowledge that upsetting students in a class when there is no pedagogic benefit attached to it is something that should be avoided. Confusing students can be pedagogically useful as well, but that doesn’t justify all confusings.

The Inadequacy of Majestic Equality

Majestic equality fails because a majestically equal scheme of basic liberties might not be a fully adequate scheme of equal basic liberties. Indeed, it’s trivial to generate loads of obviously bonkers schemes of majestically equal basic liberties: e.g., consider a law which forbids advocacy of Republican (or Democratic) political positions. Hey! It affects everyone equally! Or consider a law forbidding belonging to a Christian religion. Hey! Muslims and atheists are forbidden from joining Catholicism as well! EQUALITY!!! Etc. etc. etc.

Clearly, that a law doesn’t carve out a set of persons by name for specifically restricted liberty doesn’t mean it doesn’t, essentially, restrict liberty for some group. I don’t think it’s at all a stretch to read “fully adequate scheme of equal basic liberties” as excluding such shenanigans. It’s unlikely that purely formal criteria will do the job. (I feel like there must be a theorem to this effect somewhere.)

More Iterations

There are definitely more moves to be made, or these can be deepened. However, it’s really easy to get sucked into a US legal discussion or just go into a general discussion of gay marriage. For example, if an anti-er goes for a definitional move, “But ‘marriage’ just MEANS 1 man-1 woman because procreation” (or the “compelling interest” variant), it’s not going to illuminate the Liberty principle very much. Similarly, denying that marriage is a basic right does mean that banning gay marriage might not violate the Liberty principle per se (though it probably dies on the second principle), but then it’s a bad example. If you do concede it’s a basic right, then it’s hard to see how bans aren’t an immediate clash with the Liberty principle. If you don’t concede that, then it’s irrelevant. Debating whether it is a basic right is also irrelevant (much of the time) to a discussion of Liberty principle applicability.

Some Philosophy Hiring Data Analysis

December 29, 2014

I got involved in a discussion (on Daily Nous) of Carolyn Dicey Jennings’ data about US (I think) philosophy hires. It was in the context of a characterisation of “the New Consensus”. This all seems somewhat mixed up with the recent Leiter events, but a lot of the themes remind me of stuff I heard in graduate school in the 1990s.

Background

In any case, the initial claim is:

who suggests that Carolyn Dicey Jennings’ data (that women who receive TT jobs have on average half the publications of men who receive TT jobs) indicates that women get preferential treatment

With a follow up by a different commentator:

@JT – CDJ’s attempt to provide an alternative explanation for her data seems rather tortured, and has widely been recognised as such. I agree that we don’t know whether AA in hiring overcompensates for other, previous discrimination.

The alternative explanation (at least the first move):

What is the mean number of publications for women and men in this data set? For all of the jobs (tenure-track, postdoctoral, and VAP) and for all peer-reviewed publications, placed women have an average of 1.13 publications, whereas placed men have an average of 2.17 publications. Thus it looks as though placed men have one more publication, on average, than placed women. Yet, if we look at median number of publications, this difference evaporates: the midpoint of publications by both women and men is 1 publication. (The mode is 0 for each.) Why this difference between mean and median? The difference comes down to those at the extremes: 15% of men and 5% of women have 5+ publications.

Roughly, if you have a distribution of quantities with no upper bound and a long right tail, the mean as a measure of central tendency is vulnerable to outliers. (This is roughly what I was saying here.)
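Here’s a minimal sketch of that point in Python, using made-up publication counts (not CDJ’s data): a handful of high-end outliers pulls the mean up while the median stays put.

```python
from statistics import mean, median

# Hypothetical publication counts for two placed cohorts (made-up numbers,
# NOT CDJ's data). Both are mostly 0s and 1s, but cohort_a has a few
# high-end outliers.
cohort_a = [0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 8, 12, 14]   # long right tail
cohort_b = [0, 0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 4, 4]     # no extreme values

for name, counts in [("with outliers", cohort_a), ("without outliers", cohort_b)]:
    print(f"{name}: mean={mean(counts):.2f}, median={median(counts)}")

# Both medians are 1, but the outliers drag the first mean well above the
# second -- the same mean/median split CDJ describes.
```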

There are several other interesting posts by Philippe Lemoine. I owe them a response, but I’ve started but not finished a line of analysis and want to get an interim report out on that, so I won’t really engage his points yet. Sorry Philippe!

Some Considerations

First, it’s clear that this discussion could get pretty cantankerous esp. as things fit or fail to fit various political/policy positions. I’m not yet ready to discuss policy recommendations, but I want to get clear on the data.

Second, my bias is to suspect that the market is disproportionately adversarial to women. Considerations of implicit bias (though in conflict with positive action), etc., would suggest this straight off. However, I don’t know that initial tenure-track hiring is a place where this plays out strongly. Regardless, I will definitely be inclined to keep looking when analysis suggests otherwise, and this raises a risk of confirmation bias even if I don’t delude myself about any piece of analysis. Fortunately, Philippe seems to have different priors, which might help. I’m pretty cognisant of this problem, which can also help.

Also, the current analyses are really just too shallow to say much in any direction. I think, e.g., Philippe, CDJ, and I all agree on this.

Third, if it turns out that the TT job market isn’t unduly adversarial for women, I will be delighted. This is a great outcome. If it is unduly and unjustifiably adversarial for other groups that will not be good, but I don’t want to ignore the good. Lack of negative bias against women in hiring is a good thing.

Fourth, the data are probably not even close to sufficient for drawing strong conclusions, if only because we don’t know what the unsuccessful candidate pool looks like. But also,

  • I’m pretty sure publication numbers are not the only consideration in determining a good candidate. Indeed, there are plenty of prima facie reasons to suppose they’re not even correlated with overall quality, e.g., possible trade-offs between teaching and research or quantity and quality.
  • Gender might be correlated with other properties, e.g., program ranking which might dominate. I.e., when you control for the other factor, differences seemingly due to gender might disappear.
  • Most of these candidates (I think!?) didn’t compete with each other as they weren’t all applying for the same pool of jobs. Some jobs might be out of reach due to AOS or might have been less desirable due to location or dept. We need a model of how the decision making might be unduly influenced and preferably at least an operational notion of problematic bias.
  • And, just to repeat and follow on from the prior point: without some idea about the unsuccessful pool, it’s hard to draw conclusions about why the current set got in. After all, you don’t need to beat out the other successful candidates for other jobs, just the unsuccessful ones for your job. If the whole pool of unsuccessful candidates is worse than the whole pool of successful candidates (and the head-to-heads are appropriately distributed), then the differences between the male and female pools of successful candidates are not evidence of bias in selection, just differences in the cohort.

Toward the “weight at the high end” hypothesis

So, my first move, I’ve broken out the data in two ways:

  1. I separate out by year (2012 and 2013). There are three reasons for this: a) candidates primarily compete within a year (esp. successful ones…I presume most successful candidates for a TT position don’t go on the job market the very next year; if you did so, I’d love to hear why!), b) the selection committees and positions are different from year to year, and c) the first time through I tried to do it in Excel and for some reason I found it easier to start with 2012 alone, and it kinda stuck. What? Analysis isn’t always pretty, y’know!
  2. Within each year I break down the male and female cohorts by number of publications so we can get a more precise view of the distribution.

Method: I imported Data 2 from CDJ’s spreadsheet into BaseX and ran some queries to extract the first three columns for each set, then did the rest in Excel. I’ll release the whole thing when I have it a bit further along. I’m using the “PR_Pubs” (peer-reviewed publications) numbers.

Here are the tables (sorry for the screenshots, but WP is sucking for me now; I need to decide whether to go premium or just move the blog):

screenshot_04

Key: Pub Ct = the number of publications for a candidate. PubTotal = the cumulative number of publications for the cohort up to that row. CumAvg = the average number of publications for the cohort up to that row. Cum% = the percentage of the cohort up to that row. % of all = the percentage of the cohort appearing in that row. The totals are the total number of candidates.
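For anyone who’d rather not do this in Excel, here’s a rough sketch of the same breakdown in Python/pandas. The columns mirror the Key above, but the input counts and the gender split are hypothetical stand-ins, not the actual PR_Pubs values from CDJ’s spreadsheet.

```python
import pandas as pd

# Stand-in data: one year's placed candidates with gender and publication count.
# These numbers are invented for illustration, not taken from CDJ's Data 2.
pubs = pd.DataFrame({
    "gender": ["F"] * 6 + ["M"] * 8,
    "pub_ct": [0, 0, 1, 1, 2, 5, 0, 0, 1, 1, 2, 3, 7, 12],
})

def cohort_table(df):
    # Count candidates at each publication level, then build the cumulative
    # columns described in the Key: PubTotal, CumAvg, Cum%, % of all.
    t = df["pub_ct"].value_counts().sort_index().rename("n").to_frame()
    t["% of all"] = 100 * t["n"] / t["n"].sum()
    t["Cum%"] = t["% of all"].cumsum()
    t["PubTotal"] = (t.index.to_numpy() * t["n"]).cumsum()
    t["CumAvg"] = t["PubTotal"] / t["n"].cumsum()
    return t.round(2)

for gender, group in pubs.groupby("gender"):
    print(f"\nCohort {gender} (total candidates: {len(group)})")
    print(cohort_table(group))
```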

I didn’t break out the medians per se, though you can sorta see where they’ll be. The first thing I noticed is where the “CumAvg”s diverge: In 2012…huh! Well, in one version, when I rounded to one decimal place, they didn’t diverge until 3 pubs (whereas in 2013, they diverge after 0). Here, they diverge at 1 because I’m not rounding/truncating/whatever Excel does before then. Hmmm. And of course, if you round to the integer, divergence starts happening at 7 (2012) or 6 (2013).

I’m really not sure how to go here. On the one hand, a difference in averages of 0.03 papers doesn’t seem very meaningful. On the other hand, a difference of 0.6 does seem meaningful. I guess the key thing is that there is a lean toward the male cohort even when the differences aren’t very meaningful. So I’ll leave that as it is for the moment.

In 2012, 94% of the women and 73% of the men had 3 or fewer publications. 2013 was a higher publication year for both cohorts. What’s interesting to me is that the 0-pub percentage stays roughly at 1/3 for the men and a bit under 1/2 for the women across both years. There’s a bit of shuffling at the 1s and 2s, with the 2012 women outperforming the men (as percentages) in 1s and 2s (which helps explain why the divergence is delayed in 2012).

Overall, men outnumber women 2 to 1. This means there’s more “room” for more exceptional candidates (publication-wise), in a sense.

So what does this mean? Got me. In these years, the cohort of successful women candidates had more 0s and fewer candidates at the high end. But it’s not clear what the “natural” rate should be. (John Protevi mentioned that if we have a lot of female continental candidates, they may be more book- than paper-oriented, and that might make a difference.)

The shift in the % of women with 1 pub between 2012 (30%) and 2013 (22%) makes me a bit wary (esp. when it’s the same number of women :))

Conclusions

Well, it’s all rather tentative at the moment. I guess my first thought is that these data don’t show any evidence that women are being discriminated against at the TT hiring level. If only, say, 2% of women had 0 publications and most had 1, while the male numbers stayed the same, that would be pretty striking. Similarly in the reverse. But that’s not the case. What we have is a lot of 0s, a fair number of 1s and maybe 2s, and then a lot of variation. The curves look pretty similar:

screenshot_05

My second thought is that I find the gap in the 0s more concerning than the gap at the high end. I’m not quite sure whether this is well grounded or not. My intuition is that large numbers of publications aren’t really typical, but 0 vs. 1 might be significant. Either way, I want to know what’s going on and whether this is predictive of publication in the future (or of success in getting tenure).

My third thought is that I still don’t know whether sex is a source of selection bias, but these data don’t rule it out for sure. Whether you find them suggestive of pro-woman bias depends at this point, I’d warrant, on your priors more than anything else. But I think I agree with Philippe that my simple conceptual example (where a couple of outliers at the high end really mess things up) is probably not what’s going on here, though I don’t quite see this:

Of course, when the mean number of publications is greater for men than for women even though the median is the same, it’s also conceivable that it’s because a handful of men have a very large number of publications. But, for this to explain a difference between the mean numbers of publications as significant as that which Carolyn found, the number of publications of those men would really have to be ridiculous. So ridiculous that we can pretty much rule out this possibility at the outset, because we know that nobody goes on the market with that many publications.

I’m not sure what would count as a “handful”, but at least in 2012 we have 3 people with 12 publications and 1 with 14. If we added 3 more with 14 (for 4 in total), we would move the cumulative average for men from 2.06 to 2.27. So significant movement can be made with small numbers within the bounds of what existed. Now, that’s not the full difference, but it’s non-negligible. So I’m not sure it would be “ridiculous”. Of course, it’s not quite the actual case, so I’m happy to concede the point in this instance for the moment. (Hedge!)
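As a back-of-envelope check on that arithmetic, here’s a tiny sketch. The male cohort size is a placeholder I’m assuming for illustration; it is not the actual 2012 count from CDJ’s spreadsheet.

```python
def shifted_mean(current_mean, cohort_size, added_counts):
    """Mean after adding candidates with the given publication counts to a
    cohort whose current mean and size are known."""
    total = current_mean * cohort_size + sum(added_counts)
    return total / (cohort_size + len(added_counts))

# Placeholder cohort size -- an assumption, NOT the actual 2012 male cohort.
n_men_2012 = 170

print(round(shifted_mean(2.06, n_men_2012, [14, 14, 14]), 2))  # -> 2.27
# With a cohort of roughly this size, three extra 14-publication candidates
# are enough to move the cumulative average from about 2.06 to about 2.27.
```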

I would hope this is “needless to say”, but all this is rather preliminary and there may be all sorts of errors not least in the translation to blog post. Corrections and suggestions most welcome.