Quantitative Social Sciences vs. the Humanities

December 29, 2016

Post Mortems

As we inch closer to realizing the Trump disaster, the election post-mortems continue. Obama has claimed that he would have beaten Trump. I’m unsure about the wisdom of that from either an analytical or political perspective. Qua analysis, it could be banal, reasonable, or silly:

  1. Banal: Your take on the election could be, roughly, that while the fundamentals favored a generic Republican, Trump was an unusually bad candidate running an unusually bad campaign, so that, absent extraordinary interventions esp. from the FBI, a reasonable Democrat would have won. A bit more subtly, he could be claiming that Democrats can win when they aren’t also running against the press plus FBI plus Russia plus Wikileaks and he is a candidate that the press (a key enabler of the others) doesn’t run against.
    This isn’t quite as banal as “A major party candidate always has a good shot in this polarised age” in that it posits that Clinton-specific features strengthened the Trump campaign just enough. However, it doesn’t posit any Obama-specific features, hence the banality.
  2. Reasonable: Your take on the election could be, roughly, that given the closeness of Trump’s victory, a bit more juicing of Democratic turnout would have been sufficient for victory (esp. when combined with all the items under the banal scenario). Obama has a good record on turnout, which seems to be some combination of his personal qualities and his GOTV operation. If we posit that Clinton had an equivalent GOTV operation, then we’re left with his personal qualities, which are a superset of “not having the Clinton failings”. I think you can probably make a case like this based on the exit polls. While reasonable, it’s highly defeasible. What’s more, it’s not clear that it adds much over the banal case. You need something like what’s in the reasonable case to distinguish Obama vs. Sanders.
  3. Silly: Obama would have crushed Trump because Trump is an extremely bad candidate while Obama is an extremely good candidate. I feel like both those statements are true, but we really need to take seriously the idea that candidate quality matters at best at the margins. It’s not just that fundamentals models tend to do well empirically, but that the causal mechanisms for candidate or even campaign quality mattering are opposed by a lot of evidence and a lot of alternative causal stories. What voters hear, how they come to make decisions, the small number of “true independents”, etc. tend to point toward the partisan identity thesis of voting, to wit, voters tend to vote their party identity regardless of the policy implications or political behavior of the candidate. Voters’ own attributions of their decisions to campaign specifics can plausibly be explained (for many voters) by things like (supported) rationalisation.

Politically, all this seems to do is set up Clinton as a scapegoat or perhaps, better, set up Obama as the leader of the opposition. The former is pointless. The latter is perhaps worthwhile. It’s clear that Obama campaigning on behalf of others isn’t effective (he’s not had notably strong coattails, for example). More significantly, I rather suspect he’s going to take a traditional ex-president role and be relatively quiet about Trump. If that’s the case, it would be bad for him to become leader of the opposition.

There’s lots to unpack about the election, and we have the problem that, on the one hand, good analysis and data gathering take time while, on the other hand, the further the election recedes into the past, the more evidence evaporates. This is all next to the fact that post mortems serve political goals and thus are subject to motivated distortion.

The Loomis Hypotheses

Ok, that was a digression. What prompted this more directly is Erik Loomis’ latest entry in his war/trolling on the scientific status of social sciences like economics and political science. This is a bit more general than attempts to use the election outcome against specific models/prognosticators/etc. and, of course, Erik is provocatively overstating:

It’s time to put my humanities hat on for a bit. Obviously there are political scientists and economists who do good work. And we need people studying politics and economics, of course. But the idea that there is anything scientific about these fields compared to what historians or philosophers or literature critics do is completely laughable. As I tweeted at some point right after the election, the silver lining to November 8 is that I never have to even pretend to take political science seriously as a field ever again. Of course that’s overstated, but despite the very good political scientists doing good work (including my blog colleagues!) the idea that this field (Sam Wang, Nate Silver, etc., very much included) had some sort of special magic formula to help us understand politics this year, um, did not turn out to be true. They are just telling stories like I do, but with the pretense of scientific inquiry and DATA(!!!) around it. It’s really the same with economists, far too many of whom are completely deluded by their own models and disconnected from the real life of people.

Before trying to structure these a bit, I want to point out that we have some serious challenges to making either a defensive or offensive claim about methodological validity or superiority based on prognostic outcomes of elections: all the models are probabilistic with extremely small test cases. So even Sam Wang’s prediction of a 99% chance of a Clinton win is consistent with what happened. Silver’s higher odds for Trump aren’t necessarily validated by Trump’s winning! You have to dig into the details in order to find grounds for determining which one actually overstated the odds, and your arguments are going to be relatively weak. But conversely, your argument that these models serve no useful purpose has to do more than say, “They got the election outcome wrong!!!” Highly accurate models might be only “empirically valid”, that is, they succeed but provide no insight and don’t track the underlying causal structure. Highly uncertain models might tell you a lot about why certain outcomes aren’t easily predictable.
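
To make the small-sample point concrete, here’s a minimal sketch in Python. The headline probabilities are rough, from-memory placeholders rather than the published figures; the point is just that scoring each forecast against a single binary outcome is about all that outcome lets you do.

    import math

    # Illustrative placeholders only: approximate headline win probabilities,
    # not the exact published 2016 figures.
    forecasts = {
        "Wang (~99% Clinton)": 0.99,
        "Silver (~71% Clinton)": 0.71,
        "Coin toss": 0.50,
    }

    clinton_won = False  # the single observed outcome: Trump won

    for name, p_clinton in forecasts.items():
        # Probability each model assigned to what actually happened.
        p_outcome = p_clinton if clinton_won else 1.0 - p_clinton
        # Surprisal in bits: how "surprised" each model was by the outcome.
        surprisal = -math.log2(p_outcome)
        print(f"{name}: P(observed outcome) = {p_outcome:.2f}, "
              f"surprisal = {surprisal:.1f} bits")

With these placeholder numbers the single outcome favours Silver’s forecast over Wang’s by a likelihood ratio of roughly 30 to 1, which sounds like a lot until you remember it’s evidence from exactly one trial; anything stronger requires digging into the details (state-level calibration, correlated polling errors, and so on), as above.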

Overall, I think the burden of argument is on the model proposers rather than the skeptics. First, this is the natural placement of burden: the person making the claim has to defend it. Second, models need content, and if you rely on the fact that both Wang and Silver had a Trump win as a possibility, then you risk making them all essentially equivalent to coin-toss models. In which case, Erik’s attack gets some purchase.

There seem to be three rough claims:

  1. (Quantitative) Social Science is no more scientific than history, philosophy, or literary criticism.
  2. (Quantitative) Social Science wrongly claims to have a “formula” that provides superior understanding of politics. Instead, they are “just telling stories.”
  3. The problem with (Quantitative) Social Science is that its practitioners are deluded by their models and thus disconnected from the real lives of people.
    This could mean many things, including: current models are oversimplistic (i.e., disconnected) yet treated as gold, models oversimplify in principle so will never be a good tool, or models are only useful in conjunction with other (qualitative) methods.

Claim 2 can be seen as a refinement of claim 1: the way in which (Quantitative) Social Science is no more scientific than history, philosophy, or literary criticism is that it doesn’t do anything more than “tell stories,” albeit with a quantitative gloss. Obviously, there’s some difference in what they do, as a novel about lost love is topic-distinct from a history of glass blowing in Egypt. Even when topic congruent, we expect a novel about the Civil War to be a different kind of thing than a history of the Civil War. Not all stories have the same structure or purpose or value for a task, after all.

A Standard Caveat

Many debates about the “scienciness” of a field are prestige fights and as a result tend to be pretty worthless. That something is or isn’t a science per se doesn’t necessarily tell you about the difficulty or significance of it or much at all about its practitioners. There are sensible versions but they tend to be more focused on specific methodological, evidential, sociological, or ontological questions.

Comparative Scientivisity

(I’m not going to resolve this issue in this post. But here are some gestures.)

While there’s some degree of “qualitative humanities is superior” in Erik’s posts (cf. claim 3 and, wrt 1 and 2, the idea that they at least know their limits), let’s stick to the comparative scienciness claim. These points (the categorical and the superiority) aren’t fully separable. (I.e., science is successful in certain enviable ways, thus other fields try to glom on.)

Let’s pick a distant pair: election forecasting and interpretative literary criticism. It does seem that these two things are really different. If a piece of literary criticism teases out a possible interpretation of, say, a poem, then the evaluative criterion for the interpretation is whether it is “correct”, or “valid”, or “insightful”, and the evaluative mechanism is (typically) either brute human judgement or more criticism (i.e., the presentation of other interpretations either of the criticism or of the original poem). The most obvious evaluative criterion for election forecasts is predictive success (and usually rather easily verified predictive success). Prediction, of course, is a key indicator of science, so the fact that election forecasting (inherently) aims at prediction might be enough to cast a sciency feel on its parent discipline, political science.

Of course, astrology and tarot also aim at prediction. Their lack of science status doesn’t solely rest on their predictive failure. Indeed, predictive failure alone won’t give us a categorical judgement (science/nonscience) since it could just as easily indicate bad or failing science. Throwing in some math won’t do the job either, as astrology and numerology are happy to generate lots of math. The fact that the math tends to generate models that reasonably cohere with other knowledge of the physical world is a better indicator.

If we move over to history, it’s tempting to say that the main difference is analogous to autopsy vs. diagnosis: it’s much easier to figure out what killed someone (and when) than what will kill someone (and when). Even the existence of epistemically or ontologically ambiguous cases (e.g., we can’t tell which bullet killed them, or multiple simultaneous bullets were each sufficient to kill them) doesn’t make autopsy the harder task. (For one, it’s generally easier to tell when one is in such a situation.)

But there’s plenty of backward-looking science. Cosmology and palaeontology and historical climate studies come to mind. They do try to predict things we’ll find (if we look in the right place), but it’s hard to say that they are fundamentally easier. What’s more, they all rest on a complex web of science.

I feel confident that history could (and probably does) do a lot of that as well. Surely more than most literary criticism would or perhaps should (even granting that some literary criticism, such as author attribution, has become fairly sciency).

What does this mean for Erik’s claims?

I’m not sure. A lot of what we want from an understanding of phenomena is the ability to manipulate them. But one thing we can learn is that we don’t have the capacity to manipulate something the way we’d like. This goes for buildings as well as elections.

(Oops. Gotta run to a play. But I don’t want to leave this hanging, so I’ll leave it with a hanging ending. But I’m also genuinely unsure where to go with this. I still have trouble finding an interpretation of Erik’s claims that leads me to any action.)

3 Responses to “Quantitative Social Sciences vs. the Humanities”

  1. halfspin Says:

    If I remember correctly (and I haven’t checked since soon after the election), Sam Wang’s critical error was modeling the variance of the actual election from the polls with the variance among the polls. It was an assumption that fit well with the results in the last few US elections before 2016, but last year was a big outlier. Nate Silver’s more complex model incorporated his special sauce, which might be nothing more than a gut feeling, but as it’s proprietary we can’t really judge its scientific merit, and it doesn’t really contribute to political science as a field.

    Now what makes Wang’s work at least qualify as possibly scientific is that any random idiot can take his formulae, plug them into R or SPSS or Pandas or whatever the kids are using these days, and produce the same results. Reproducibility is a necessary condition of science. Compare that with Nate Silver’s secret sauce, or with Loomis’s narrative approach. Wang might not have any more insight into the election than Loomis, but I can’t reproduce one of Loomis’s narratives even if I wanted to. Since they aren’t reproducible it also means they’re much harder to evaluate, since our ability to test their accuracy depends on whether Loomis is around and feels like researching and writing narratives that offer testable predictions.

    I don’t remember enough of my Popper or Kuhn to really make this argument about science, so feel free to fill in the rest of the details. I’ll have to ask my friend Noah Smith what he thinks of the economics argument that Loomis makes. I generally tend to think that Erik overemphasizes the importance of narrative power over scientific accuracy or explanatory modeling, but he’s a(n) historian and I’m a quasi-engineer, so what else would you expect?


  2. Hi, Bijan!

    Nate Silver and Sam Wang don’t do political science; they do statistical analysis. Political science purports to provide understandings that go beyond what one could learn from just feeding numbers into a formula. It is supposed to tell us what the formula should be, from a basis that draws on more than just the outcome of past number crunching. What Silver’s greater success, over the course of years, demonstrates is the lack of added value the process gains from the injection of poli-sci theories.

    Political science that purports to be a hard science, as opposed to that which is similar to history and anthropology, is a science in much the same way that alchemy was a science. It uses rigorous methods within an established framework that allows for comparisons (assuming here that alchemists freely shared their lab reports) and the drawing of conclusions from a body of results. The problem is, it has at its basis a set of assumptions that turn out to be quite subjective, but which the political scientists convince themselves are objective.

  3. Bijan Parsia Says:

    Nate Silver and Sam Wang don’t do political science

    I would dispute that. Silver, for example, clearly looks for deeper understanding and so does Wang. They both try to build informed models.

    It is supposed to tell us what the formula should be

    Sure, and Wang and Silver both have fairly extensive theory on that.

    What Silver’s greater success, over the course of years, demonstrates is the lack of added value the process gains from the injection of poli-sci theories.

    Oh, I see. Poly sci is the pejorative term here 🙂

    Well, the Keys theory was correct in its prediction 🙂

    I don’t think very many polysci folks think that political science is a hard science. They seem well aware of the limits of their methods.

    Erik was attacking your version of the statistical analysis folks.

