Archive for the 'Politics' Category

The Great Unraveling

February 18, 2017

(I’ve not been very productive. My nearly year-long cough is gone, but the dizziness, light-headedness, and exhaustion persist. Oh, and I’m having hand surgery on Wed…yeek!)

I’m not sure that “The Great Unraveling” is quite the right phrase, but in the US and the UK we’re at the cusp of massive transformations that, to be frank, are going to suck. Both involve attempts to dismantle much of the corresponding states in a very short time. (The Great Dismantling? Perhaps what’s Unraveled is the consensus which leaves us open to dismantling?)

Brexit at all, much less the hard and hostile Brexit we risk, is not a simple matter. For ≈40 years we’ve been a member state, and that has had consequences for everything we’ve done and for how the whole constitutional order was structured. Note that I don’t think Brexit will net us more sovereignty, but it will net us more local “control” and responsibility. I don’t think it nets us sovereignty because while we get more local say we lose EU say. We lose power, in general.

We had an outsized influence in the EU and were shielded from the worst bit (the Euro). So, this will be a pretty big loss.

Trump and the Republican Congress (and likely Republican Supreme Court) and the Republican states are, with varying degrees of effectiveness, trying to break down the post-LBJ welfare state and the international order (well, the latter is mostly President Bannon). The badness of their aims is matched only by their general ineptness at governance. Since destruction is the goal and they’re pretty happy with a lot of collateral damage, this isn’t a big deal for them, except it might result in counterbalancing wave elections (I hope, I hope).

There’s no attempt to produce a better alternative. The ACA “repeal and uh…something…something there’s a replacement but it’s a secret” move is pretty telling. There are no positive goals, no acknowledgement of the strengths of the ACA, no accurate assessment of the weaknesses (and ways to fix them). It’s just hack it and give the money to rich people. The end.

Let’s not even get into the norms of governance that Trump blithely shits on.

So here we are. Two “conservative” governments engaged in a destructive spree with barely any recognition that what they’re doing is destructive at all. Strange times.

Countering (Massive Numbers of) Lies Doesn’t Work

January 25, 2017

Lies are dangerous in a number of ways. There are lots of situations where a false belief leads to very bad action (e.g., believing homeopathy is an effective cancer treatment leads to forgoing treatment that would have saved one’s life or mitigated suffering). But lies are also dangerous because people with bad agendas tend to resort to lying because they can’t win on the merits. And they don’t just resort to a bit of deception, or even clever deception. It turns out that wholesale, massive, shameless, easily rebutted lies are pretty effective, at least for some purposes.

Consider the decades-long attack on the EU:

But Britain has a long and well-observed tradition of fabricating facts about Europe—so much so that the European Commission (EC) set up a website to debunk these lies in the early 1990s. Try our interactive quiz below and see if you can spot the myths.

Since then the EC has responded to over 400 myths published by the British media. These range from the absurd (fishing boats will be forced to carry condoms) to the ridiculous (zippers on trousers will be banned). Some are seemingly the result of wilful misunderstandings.

Sadly, for all the commission’s hard work, it is unlikely to be heard. The average rebuttal is read about 1,000 times. The Daily Mail’s website, by contrast, garners 225m visitors each month.

And, of course, the Leave campaign, itself, was almost wholly lie based. Remain made some (economic) predictions that were falsified (and that needs to be understood), but it didn’t traffic in wholesale lies, to my knowledge.

Similarly, we have a decades-long campaign, almost entirely easily-debunked-lie based, against Hillary Clinton. Just take claims about her honesty (esp. next to Trump). Robert Mann produced a very interesting graph of PolitiFact’s fact checking of a selection of politicians:

It isn’t even close! HRC is one of the most honest politicians (in terms of how rarely she tells falsehoods) and Trump is one of the most dishonest.

Yet, when I was debating folks on Democratic leaning blogs, I had people saying that Clinton was a pathological liar. When presented with this chart, they stuck to their guns. (Note, they didn’t think Obama was a liar.)

You can quibble with the methodology (see Mann’s blog post for a discussion), but PolitiFact’s fact checkers try to be evenhanded. One should be at least a little struck by this evidence.

But correction often just doesn’t work, backfires, or isn’t effective in changing attitudes and behavior. For example,

Facts don’t necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

Or consider Emily Thorson’s concept of belief echoes:

However, through a series of experiments, I find that exposure to a piece of negative political information persists in shaping attitudes even after the information has been successfully discredited. A correction–even when it is fully believed–does not eliminate the effects of misinformation on attitudes. These lingering attitudinal effects, which I call “belief echoes,” are created even when the misinformation is corrected immediately, arguably the gold standard of journalistic fact-checking.

Belief echoes can be affective or cognitive. Affective belief echoes are created through a largely unconscious process in which a piece of negative information has a stronger impact on evaluations than does its correction. Cognitive belief echoes, on the other hand, are created through a conscious cognitive process during which a person recognizes that a particular negative claim about a candidate is false, but reasons that its presence increases the likelihood of other negative information being true. Experimental results suggest that while affective belief echoes are created across party lines, cognitive belief echoes are more likely when a piece of misinformation reinforces a person’s pre-existing political views.

We see this in the various formulations of the Clinton Rules.

One major harm of such mechanisms is that they open up a line of defense for very bad people, e.g., Trump: to wit, that there are “Trump rules” and the bad things pointed out about him are fake. They aren’t, but why trust a gullible media about it?

I’ve had personal experience of this. I used to comment a lot on LGM. One commenter with a propensity for persistently saying very silly things (about, e.g., statistics, causality, politics, and even the law (they are a lawyer)) got to a point where they couldn’t stand my repeated refutations (including pointing out how they’d been refuted before). They embarked on a pretty systematic campaign to lie about me, primarily about my mental health: that I was “stalking” them, on the verge of a breakdown, that they were frightened of me, that I had no sex life or other kind of life, that I spent large periods of time looking things up on them (stalking!), etc. These were transparent lies and obvious gaslighting. No one took them directly seriously, but they did have effects. People would see an exchange and assume that there was some fault on my part (however mild). This would pop up elsewhere, in other comments. Some of these people were more sympathetic to a gaslighting liar than they had any right to be.

So, pretty exemplary behavior and a sterling reputation vs. transparent lies and extremely bizarre slanders and…well, I’m the one not commenting any more. It worked, in a way. (Trump winning had an effect too. It’s not solely due to this bad behavior.)

Given sufficient shamelessness and no structural counter (e.g., moderation) and no big effort on my part (e.g., an active campaign), there’s little penalty for such lying and it advances their noxious cause.

These examples can be multiplied easily (anti-vaccine, pro-tobacco, climate change denial campaigns come to mind).

It’s very difficult to deal with. We need to.

Update:

How severe is the problem? I just saw a report on a survey using Trump’s and Obama’s inauguration crowd photos:

For the question about which image went with which inauguration, 41 percent of Trump supporters gave the wrong answer; that’s significantly more than the wrong answers given by 8 percent of Clinton voters and 21 percent of those who did not vote.

But what’s even more noteworthy is that 15 percent of people who voted for Trump told us that more people were in the image on the left — the photo from Trump’s inauguration — than the picture on the right. We got that answer from only 2 percent of Clinton voters and 3 percent of nonvoters.

The article discusses the idea of “expressive responding”:

Why would anyone give the wrong answer to a pretty simple question?

To many political psychologists, this exercise will be familiar. A growing body of research documents how fully Americans appear to hold biased positions about basic political facts. But scholars also debate whether partisans actually believe the misinformation and how many are knowingly giving the wrong answer to support their partisan team (a process called expressive responding).

Expressive responding is yet another form of lying with potentially far reaching consequences.

On Calling Out a Lie

January 24, 2017

Given the massive amount of un-, anti-, and non-truth spewed by Trump, his minions, and the Republican Party, the media has had a lot of trouble coping with it. Trumpsters and their ilk have even started complaining about “fake news”, by which they don’t mean actual fake news, but instead true news that they don’t like.

The media needs to deal with the situation better. There are lots of vulnerable points (e.g., the need for access, the cult of balance, the shamelessness of the deception). But one problem is a strong unwillingness to call a lie a lie (well, except for the liars, who are quite willing to call anything they don’t like a lie).

There’s a fairly narrow idea of a lie making its way around that’s used to justify this. Take Kevin Drum (who’s on the pro-call-out-lies side):

The problem with branding something a lie is that you have to be sure the speaker knew it was wrong. Otherwise it’s just ignorance or a mistake.

Arrrgh! Even Drum falls into a pretty obvious error! Just because you don’t utter a deliberate, explicit, knowing falsehood doesn’t mean you are innocently making some sort of error (i.e., acting from ignorance or making a mistake)! Simple contemplation of lies of omission reveals that. Or recall standard tricks such as:

Is there anything else material that you want to tell us?

No.

But it says here that you did X and X is material! Why did you lie?!

I didn’t lie. I didn’t want to tell you about X.

Lots of people have come to rely on Frankfurt’s notion of “bullshit” (utterances made without regard for the truth) and “lie” (utterances made with a regard for falsity). I remember when Frankfurt’s article came out and I enjoyed it. It’s a nice distinction, but it’s been misused. A bullshitter is a kind of liar (or, if you want to be annoying, a deceiver). (Wikipedia correctly puts Frankfurtian “bullshit” as a topic on the “lie” page.)

Frankfurt spends a great deal of time trying to suss out the distinction between lying and bullshitting:

The elder Simpson identifies the alternative to telling a lie as bullshitting one’s way through. This involves not merely producing one instance of bullshit; it involves a program of producing bullshit to whatever extent the circumstances require. This is a key, perhaps, to his preference. Telling a lie is an act with a sharp focus. It is designed to insert a particular falsehood at a specific point in a set or system of beliefs, in order to avoid the consequences of having that point occupied by the truth. This requires a degree of craftsmanship, in which the teller of the lie submits to objective constraints imposed by what he takes to be the truth. The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of that truth. On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a certain falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared to fake the context as well, so far as need requires.

Meh. When you have enough fabrication and one of your targets is yourself, this idea of focus isn’t pertinent. One way of lying is being a shameless liar most of the time so when one speaks the truth one isn’t believed.

It is sometimes worth figuring out the etiology of someone’s false (or otherwise wrong) utterances. It can make a difference in how you counter them. If someone is mistaken, they may be amenable to correction. If they are a “true believer”, it may be quite difficult to merely correct them (so maybe you don’t bother).

But, with the Trumpians and other Republicans, come on. There needs to be some strict liability here. Lying so well that you convince even yourself that it’s true is a kind of lying. Coming to believe your own lies (supposedly) doesn’t get you off the hook for all that lying nor does it make it not lying.

I’m sorta ok with Drum’s desire to focus on deception rather than (narrow) lying. But…in ordinary vernacular, deception is lying. A lie of omission is a lie. If you bullshit me, you are lying to me. If you lie to yourself, you are lying.

With Trump, it’s super easy: it’s almost all straightforward lies.

Update: LGM caught up with the NYT finally putting “lie” in the headline with appropriate skepticism.

We can’t have nice things

January 22, 2017

Like the Conservative 2015 win, Trump’s win and the Republican control of government will lead to lots and lots of bad things and the destruction of lots of good things. Some of the loss is direct and immediate (e.g., people will lose their health insurance). Some of the loss is less direct and immediate (e.g., people will die unnecessarily because they lost their health insurance). And some of the loss is indirect and diffuse (e.g., the economy will suffer because health insurance and health care are messed up).

You can hope that breaking things as hard and as fast as the Trumpublicans seem anxious to do will result in accountability from the voters. But this isn’t a net good:

The one silver lining here is that all this is certain to be spectacularly unpopular. This combination of spending-side austerity and huge tax cuts will likely create major economic problems, as similar policies at the state level (and during Bush’s time as well) have shown. Trump is already the most disliked president-elect in the history of polling, and what little support he does have is partly the result of a campaign whose major message was the precise opposite of what’s about to happen.

Bush took a budget surplus and turned it into giant deficits, something we are still dealing with. Breaking things tends to lead to more broken things and easier-to-break things.

Plus, voters seem to have short memories. Trump looks to be worse than Bush by some orders of magnitude and this didn’t deter lots of people from voting for Trump and the Republicans. Austerity in the UK imposed pointless misery and the Tories won in 2015 on austerity.

Conservative parties have been taking fairly narrow wins of dubious contests wherein they lied like hell, and have decided to go maximal in their execution of disastrous policies. In the UK, it seems very unlikely that the electorate will turn on them anytime soon for this. In the US, it will be challenging to win one house of Congress, much less both, in 2018. There’s a lot more amok that can be run.

The Comey II Effect

January 16, 2017

Background

US presidential elections are extremely complex events. There are a lot of moving parts, from candidate selection to the Electoral College. They occur only every four years and there have been under 60 in total, with maybe a third in a reasonably modern era (e.g., with mass communication). Furthermore, the US, in general and in the political scene, exhibits complex dynamics. It’s changing all the time! This makes elections hard to study. This XKCD cartoon is a good reminder to be epistemically humble.

However, this doesn’t mean we should just shrug our shoulders. We can predict that predicting winners is going to be hard, because increasing polarization, plus some structural features, means elections will be close. When elections are close, they are hard to call (at least individually). That doesn’t mean we aren’t right to predict that they will generally be close and that gaps between the popular vote and the electoral vote will increase and tend to benefit Republicans. These seem quite true.

However, autopsy is different from predicting. When we look backwards, we are dealing with one set of events, not many possible ones. Now, of course, we can’t go back in time and rerun stuff. Plus evidence degrades quickly with time. But we are looking at something static. And by careful study we can learn surprising things! For example, the consensus is that Ross Perot did not spoil the 1992 election for George Bush. See this excellent blog post by Samuel T. Coop that has good links to the literature as well as a nice discussion both of the direct and possible indirect effects of Perot on the 1992 and 1996 elections. Note that the kind of study one does (and confidence one has) depends critically on how one frames the question. “Did Perot spoil the election for Bush?” has one kind of answer if you operationalise it as “Were Perot voters such that they would have voted for Bush if Perot had been removed from the ballot the week before the election, in numbers and patterns sufficient to change the election outcome?” A much harder question to answer is, “If Perot never entered the race, would Clinton still have won?”

The former question might nevertheless be unanswerable or difficult to answer if we don’t have good data. The latter question might well be simply unanswerable absent singularity level simulations.

In general, effect questions about events close to an election are more tractable than ones about events further away. This shouldn’t be surprising.

Comey had two big interventions in the election: The first was in July 2016 when he cleared Clinton of wrongdoing, closed the investigation, and (wrongly and inappropriately) bad-mouthed her. The second was when he released a letter a week before the election saying, “Hey! We might have something to look at!” (then another letter just before the election saying, “Oops, nothing here”). There is no disputing the wrongness and inappropriateness of the second intervention. The “best” interpretation is that he was trying to get ahead of rogue agents in New York. Of course, this is several levels of failure, including that rogue agents shouldn’t be rogue, and the way he did things was strongly biased against (a totally innocent!) Clinton. In any case, it was against policy and precedent, and it was definitely biased toward Trump. We can see this in his current refusal to discuss the FBI’s investigation of the Trump-Russia issue with Congress, even in private. This is the Bush v. Gore “We only intend this discussion/principle to hold for this one case where it steals the election” bit all over again.

One thing that is conclusively established is that Comey did some very wrong things and that Comey’s FBI is in the running for one of the worst FBIs ever (which is saying something).

But not all wrong actions have bad consequences. (Luck can intervene.) A critical question, thus, is whether Comey threw the election to Trump. Scott Lemieux thinks that the answer is yes, partly based on a recent Vox article by McElwee, McDermott, and Jordan which looks at 4 pieces of evidence that “the Comey effect was real, it was big, and it probably cost Clinton the election”.

Some Qualifications

Both Scott and the Vox folks try to disarm one of the standard counterarguments, to wit, that even if Comey II had an effect, it was dwarfed/only made possible by the badness of Clinton and/or her campaign. Their attempt is roughly, “Yes, the campaign made mistakes, but all campaigns do.” Meh. I want to see more affirmative evidence that the Clinton campaign was bad other than “it shouldn’t have been close enough for Comey to affect” or even “She lost to Trump.” The overall evidence is that Clinton is a pretty good candidate (she nearly won against Obama and won the popular vote in a year when the fundamentals, the press, the FBI, and Russia/WikiLeaks, etc. were against her). In 2008, we had very specific evidence that campaign competency was a problem (critically, they didn’t pay enough attention to caucuses and delegate math). There’s little such evidence this year. So, pfft. She ran a pretty good campaign in adverse circumstances. Qua candidate, she has vulnerabilities, but clearly also strengths: It seems that it took fundamentals, plus the press, plus Russia, plus WikiLeaks, plus the FBI to beat her narrowly.

So, meh, to that.

We’re looking for a change in voter behavior from what it would have otherwise been without Comey II but with no new events. Thus, if some hackers held back some Comey II equivalent info because they thought Comey II was sufficient, then the counterfactual “But for Comey II, Clinton wins” is false. But this isn’t the right standard since we are trying to determine the effect of Comey II on the actual election. This is an autopsy. If someone gets hit in the brain by three bullets spaced three seconds apart, each of which were sufficient to kill them, we don’t say that the first bullet wasn’t the cause of death just because if it hadn’t been fired the person would still be dead.

Finally, even if Comey II had no effect, it would still be unjustified and a serious failing on Comey’s part. If it had some effect that doesn’t seem quite enough to throw the election, it’s still very bad. However, obviously, throwing the election aligns intent and horrific effect.

The Vox Case

The conclusion is:

the Comey effect was real, it was big, and it probably cost Clinton the election

The evidence is in four “exhibits” (and I grabbed their headings to make this list):

  1. “Exhibit 1: the state polls.”
    This is a weird one because it should say something like, “Looking at the state polls, we see a decisive, unusually large shift toward Trump in key states. Indeed, the average moves from Clinton +3 on the 28th (Comey II day) to Trump +1.2: a swing of over 4 points in a week!” However, they are focused on the “surprise” aspect of the election and how some polls just before the election undershot Trump’s actual win. I think this is important (see below), but it’s not part of the first-order case for the Comey Victory. They are convincing that some states where the polling didn’t show the trend were underpolled. But this is sorta beside the point. The idea is that there was a swing in votes that was captured by a swing in the polls (and between the polls and the actual outcome).
  2. “Exhibit 2: the national polls”
    Basically, every account of the national polls showed a big hit (2-3 points) against Clinton in reaction to Comey II. Comey I also produced a direct swing in the polls against her. So a Comey announcement affecting the polls seems reasonable. Note that this does not yet generate a Trump win…national polls still had Clinton up. But, she did win the popular vote!
  3. “Exhibit 3: The early voting numbers compared with the late deciders”
    Clinton led in a lot of the early voting. That could be a biased sample, but Clinton’s huge drop between early and election-day voting occurred even in blue states like RI. Obama saw gains in such circumstances.
  4. “Exhibit 4: media coverage of email, email, and more email”
    EMAILZ!!! dominated the news coverage and, correlatively, voter perceptions of Clinton: “While 79 percent of registered voters had heard “a lot” about Clinton’s emails, only 23 percent heard “a lot” about Trump’s housing discrimination, 27 percent heard “a lot” about the Donald J. Trump Foundation’s illegal political contribution to the Florida attorney general, and, surprisingly, only 59 percent had heard “a lot” about the Access Hollywood tape.”

Only 59% of voters heard “a lot” about the Trump tape?! Whoa.

We see some undeniable Comey II effects. 4 (media coverage) is just plainly evident. (Note: Obviously the media were complicit. They could have treated Comey II correctly and didn’t. How this absolves Comey is a mystery to me. He knew or should have known what would happen.) 1 & 2 seem probable both based on the timing and on past effects. 3 needs a bit of work to directly establish the relationship, but as a supporting consideration is quite alright. That all these things march together strengthens the story. As the Vox piece puts it:

Instead, the evidence is clear, and consistent, regarding the Comey effect. The timing of the shift both at the state and national levels lines up very neatly with the publication of the letter, as does the predominance of the story in the media coverage from the final week of the campaign. With an unusually large number of undecided voters late in the campaign, the letter hugely increased the salience of what was the defining critique of Clinton during the campaign at its most critical moment.

Challenges

Let’s recall what has to have happened for a Comey Victory:

  1. Before Oct 28th, Clinton had to have been really ahead. That is, enough people in the right places would have voted for her or, at least, not voted for Trump (e.g., by splitting their ticket or staying home).
  2. On election day, we have the result we have.
  3. People changed their voting behavior (e.g., stayed home, changed their vote, or came out for Trump).

And, of course, this change had to be caused by Comey II. But I think if we can establish 1-3 then we’ve done rather well. Direct polling on this (i.e., “Did Comey II change your voting behavior”) would be welcome but will get less reliable the further out we go (due to recall bias).

So, we have a shift in the polls with the right timing plus a mechanism (coverage). Isn’t this enough?

There are some possible alternatives.

  1. The shift was due to some other factor like awesome Trump ground game. (Unlikely.)
  2. Before or around Oct 28th, Trump was ahead or close to being so and the Clinton lead was a polling illusion.
  3. Trump mostly was ahead, but there was a strong, consistent polling illusion.

1 suggests a different dynamic. 2 and 3 suggest that the race was more static than a Comey Victory requires.

Could the race have been more favourable to Trump than the polls suggested? Event-driven swings in polling, esp. large ones in early polls, have been viewed with skepticism for quite some time. Convention bounces, for example, tend to be bounces: They boost the candidate’s polls for a short while, then fade. The idea that voting intention would be so fickle seems improbable, esp. in this day and age. Recall the myth of the independent voter, that is, in spite of increasing numbers of Americans identifying themselves as independent, only a small fraction are “true” independents, that is, exhibit voting behavior markedly different from some partisan’s. (And of course, “independent” doesn’t mean “indecisive”.) One rising explanation of such poll volatility without real change is “differential response rates”.

Roughly speaking, given the low response rates typical of modern polling, shifts in polling results can come from a systematic change in who is likely to respond. Note that this is different from standard sampling issues and isn’t addressed by, for example, larger sample sizes. It’s also different from problems in your likely voter model. (Assuming your likely voter model is stable, it could give a consistent error but is unlikely to yield big swings.)
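To make the mechanism concrete, here’s a minimal sketch (in Python, with made-up numbers) of how differential response alone can produce an apparent swing: the electorate’s true split never moves, but one side’s supporters become less willing to answer pollsters after a bad news week, and the topline shifts anyway.

```python
import random

random.seed(0)

# Hypothetical electorate: candidate A's true support is fixed at 50%
# for the entire campaign. All rates below are invented for illustration.
TRUE_SHARE_A = 0.50

def poll(n_contacts, resp_rate_a, resp_rate_b):
    """Contact n_contacts random voters; each side's supporters answer
    with their own response rate. Return A's share among respondents."""
    a_resp = b_resp = 0
    for _ in range(n_contacts):
        if random.random() < TRUE_SHARE_A:              # an A supporter
            a_resp += random.random() < resp_rate_a
        else:                                           # a B supporter
            b_resp += random.random() < resp_rate_b
    return a_resp / (a_resp + b_resp)

# Week 1: both camps equally willing to talk, so the poll tracks the true 50/50.
week1 = poll(20000, 0.09, 0.09)
# Week 2: after bad news for A, A's supporters answer a third less often.
# Nobody's vote has changed, but A's topline drops toward
# 0.06 / (0.06 + 0.09) ≈ 0.40 — an apparent 10-point collapse.
week2 = poll(20000, 0.06, 0.09)
print(round(week1, 2), round(week2, 2))
```

Note that making `n_contacts` bigger doesn’t help: a larger sample just tightens the error bars around the same wrong answer, which is why this mechanism is distinct from ordinary sampling noise.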

Exhibit 3 suggests that the change was real: We have an unusual difference in actual voting behavior from early voting to election day. We have a large number of “undecided” who broke strongly for Trump. So we have a clear causal story.

One challenge with that story is whether the shift from undecided to Trump would have happened anyway. This is similar to the “shy Trump” effect sometimes posited. But basically, people tend to vote identity. Identity doesn’t tend to shift over a campaign, so who people will end up voting for is pretty predictable. But who people think or say they are going to vote for is more flexible. And this isn’t because they are shy (shy people lie) but because they are genuinely conflicted.

Imagine a Bernie-supporting, Green-curious Democrat. They might well flirt with the idea of voting Green the whole election season, but when it comes down to the crunch they vote Clinton because, well, Trump! (I was that sort of Nader supporter in 2000. Even though I thought Gore would win and had bought some of the “not a dime’s worth of difference” line, I couldn’t risk being part of a Bush win.)

So it is possible that Republicans who were “concerned” by Trump always were going to vote for him. We saw this in a lot of “Never Trumper” elites who declined to support him until they did.

Worse, experientially, many of these folks will take Comey II as the causal reason for their vote. But, the argument goes, that’s a cognitive illusion. In reality, they would have always found a way to vote for Trump. This distinction will be hard to sort out, if even possible.

What Should We Believe About Comey II?

Scott advances a strong (but slippery) version of the Vox conclusion:

As I’ve said before, at this point to deny the effects of Comey’s interventions is essentially trooferism. There is no serious alternative explanation that can account for the data. The “durrrr, correlation is not causation, durrr” argument loses any plausibility when you consider that every Comey intervention caused a wave of negative media coverage about Clinton and was followed by a significant decline in national polls numbers. The “polls can’t account for Trump being a celebrity” response fails to explain why Election Day voters were more affected by Trump’s celebrity status than early voters although he didn’t become more famous in the interim (but people were treated to an obsessive wave of negative coverage about Clinton.) Even if Comey had not sent the letter on October 28, we can be as confident that Clinton would have won as we could ever be confident in such a counterfactual.

The strongest line against it is that voting intentions are fairly stable and we have some mechanisms to explain how poll-measured shifts can be illusory. But let’s note that there is no direct evidence for this in the current situation. Yes, differential response, shy Trump, and dithery Trump could explain everything, but even for early event responses evidence is thin on the ground. It’s mostly the general thesis that voting intention is stable. And clearly, there’s some truth to that! I was never going to vote for the Republican candidate, ever, in 2016. Ever. There are lots of similar sorts. The general closeness of elections is suggestive, as is the research on nominally independent voters.

One interpretation of the election is that campaign quality doesn't matter at all, given Trump's weak traditional campaign. Specifically, campaign quality doesn't have a causal effect (as opposed to being part of a balance of causes). However, given Russian hacking, Wikileaks dribbling, Comey, and a compliant press, a rival interpretation is that narrowly construed Republican campaign quality matters less because the Republican campaign, broadly construed, included parts of the US government, state and non-state actors, and the press.

The very closeness of the results in key states is also suggestive. Comey doesn't have to swing or consolidate a lot of votes in the Clinton firewall to break it. Unlike conventions, which are 1) standard, expected events and 2) way early in the campaign before voter intentions have solidified, Comey II came right on the eve of the election. So it's more reasonable to suspect real changes rather than differential response.

So, I lean toward the view that the Comey II effect was real and likely made the difference.

Now, there are some definite bias risks here. Confirmation bias (partly due to anchoring) is strong. I’ve been a proponent of a version of the stability hypothesis (though, usually of the “polls far out aren’t super reliable; polls in Nov tend to be” sort). I think differential response rates are fascinating and provide an elegant explanation of convention bounces. (I think all polls should publish their response rates!) I feel Trump is illegitimate on many fronts. Comey’s actions (and the press reaction) are clearly indefensible. So there’s a lot of room for motivated reasoning here.

That being said, it's clearly possible to swing too far the other way. The evidence for a Comey Steal is more direct and multifaceted. (Early-to-late voting behavior isn't subject to polling illusion!) The idea that the coverage had no effect seems pretty bonkers. You'd want some very strong evidence for that.

So, trooferism? Maybe? The Vox picture (however awkwardly put) is pretty compelling. We’ll see how the evidence evolves. It’s still the case that saying, “Pretty compelling, but we still need some details to know how big the Comey effect was” is reasonable, but perhaps on the edge of reason. The “you can’t know!!!!!” folks are clearly way out of line.

Quantitative Social Sciences vs. the Humanities

December 29, 2016

Post Mortems

As we inch closer to realizing the Trump disaster, the election post-mortems continue. Obama has claimed that he would have beaten Trump. I'm unsure about the wisdom of that from either an analytical or political perspective. Qua analysis, it could be banal, reasonable, or silly:

  1. Banal: Your take on the election could be, roughly, that while the fundamentals favored a generic Republican, Trump was an unusually bad candidate running an unusually bad campaign so that, absent extraordinary interventions esp. from the FBI, a reasonable Democrat would have won. A bit more subtly, he could be claiming that Democrats can win when they aren't also running against the press plus FBI plus Russia plus Wikileaks, and that he is a candidate that the press (a key enabler of the others) doesn't run against.
    This isn’t quite as banal as “A major party candidate always has a good shot in this polarised age” in that it posits that Clinton specific features strengthened the Trump campaign just enough. However, it doesn’t posit any Obama specific features, hence the banality.
  2. Reasonable: Your take on the election could be, roughly, that given the closeness of Trump’s victory, a bit more juicing of Democratic turnout would have been sufficient (esp. when combined with all the items under the banal scenario) for victory. Obama has a good record of turnout which seems to be some combination of his personal qualities as well as his GOTV operation. If we posit that Clinton had the equivalent GOTV operation, then we’re left with his personal qualities which are a superset of “not having the Clinton failings”. I think you can probably make a case like this based on the exit polls. While reasonable, it’s highly defeasible. What’s more, it’s not clear that you add much over the banal case. You need something like what’s in the reasonable case to distinguish Obama vs. Sanders.
  3. Silly: Obama would have crushed Trump because Trump is an extremely bad candidate while Obama is an extremely good candidate. I feel like both those statements are true, but we really need to take seriously the idea that candidate quality matters at best at the margins. It's not just that fundamentals models tend to do well empirically, but that the causal mechanisms for candidate or even campaign quality mattering are opposed by a lot of evidence and a lot of alternative causal stories. What voters hear, how they come to make decisions, the small number of "true independents", etc. tend to point toward the partisan-identity thesis of voting, to wit, that voters tend to vote their party identity regardless of the policy implications or political behavior of the candidate. Voter attributions of their decisions to campaign specifics can plausibly be chalked up (for many voters) to things like (motivated) rationalisation.

Politically, all this seems to do is set up Clinton as a scapegoat or perhaps, better, set up Obama as the leader of the opposition. The former is pointless. The latter is perhaps worthwhile. It's clear that Obama campaigning on the behalf of others isn't effective (he's not had notably strong coattails, for example). More significantly, I rather suspect he's going to take a traditional ex-president role and be relatively quiet about Trump. If that's the case, it would be bad for him to become leader of the opposition.

There’s lots to unpack about the election and we have the problem that, on the one hand, good analysis and data gathering takes time while, on the other hand, the further the election recedes into the past, the more evidence evaporates. This is all next to the fact that post mortems serve political goals thus are subject to motivated distortion.

The Loomis Hypotheses

Ok, that was a digression. What prompted this more directly is Erik Loomis' latest entry in his war/trolling on the scientific status of social sciences like economics and political science. This is a bit more general than attempts to use the election outcome against specific models/prognosticators/etc. and, of course, Erik is provocatively overstating:

It’s time to put my humanities hat on for a bit. Obviously there are political scientists and economists who do good work. And we need people studying politics and economics, of course. But the idea that there is anything scientific about these fields compared to what historians or philosophers or literature critics do is completely laughable. As I tweeted at some point right after the election, the silver lining to November 8 is that I never have to even pretend to take political science seriously as a field ever again. Of course that’s overstated, but despite the very good political scientists doing good work (including my blog colleagues!) the idea that this field (Sam Wang, Nate Silver, etc., very much included) had some sort of special magic formula to help us understand politics this year, um, did not turn out to be true. They are just telling stories like I do, but with the pretense of scientific inquiry and DATA(!!!) around it. It’s really the same with economists, far too many of whom are completely deluded by their own models and disconnected from the real life of people.

Before trying to structure these a bit, I want to point out that we have some serious challenges to making either a defensive or offensive claim about methodological validity or superiority based on prognostic outcomes of elections: all the models are probabilistic with extremely small test cases. So, even Sam Wang's prediction of a 99% chance of a Clinton win is consistent with what happened. Silver's higher odds for Trump aren't necessarily validated by Trump's winning! You have to dig into the details in order to find grounds for determining which one actually overstated the odds, and your arguments are going to be relatively weak. But conversely, an argument that these models serve no useful purpose has to do more than say, "They got the election outcome wrong!!!" Highly accurate models might be only "empirically valid", that is, they succeed but provide no insight and don't track the underlying causal structure. Highly uncertain models might tell you a lot about why certain outcomes aren't easily predictable.

Overall, I think the burden of argument is on the model proposers rather than the skeptics. First, this is the natural placement of burden: the person making the claim has to defend it. Models need content, and if you rely merely on the fact that both Wang and Silver allowed a Trump win as a possibility, then you risk making them all essentially equivalent to coin-toss models. In which case, Erik's attack gets some purchase.
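To make the single-outcome point concrete, here's a minimal Bayes-factor sketch (the 99% and ≈70% Clinton probabilities are the Wang and Silver figures; the coin-toss model is added purely for comparison, and the calculation is illustrative, not an evaluation of either model):

```python
# One observed outcome (a Trump win); how strongly does it favor one
# forecast over another? Likelihood of the outcome under each model:
models = {"Wang": 0.99, "Silver": 0.70, "coin toss": 0.50}  # P(Clinton win)

p_trump = {name: round(1 - p, 2) for name, p in models.items()}

def bayes_factor(a, b):
    """Ratio of likelihoods: evidence from this one event for a over b."""
    return p_trump[a] / p_trump[b]

print(round(bayes_factor("Silver", "Wang"), 1))       # 30.0
print(round(bayes_factor("coin toss", "Silver"), 2))  # 1.67
```

One Trump win does shift the odds substantially toward Silver over Wang, but it barely separates Silver from a contentless coin toss, which is the sense in which a single election result is weak grounds for crowning any model.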

There seem to be three rough claims:

  1. (Quantitative) Social Science is no more scientific than history, philosophy, or literary criticism.
  2. (Quantitative) Social Science wrongly claims to have a “formula” that provides superior understanding of politics. Instead, they are “just telling stories.”
  3. The problem with (Quantitative) Social Science is that its practitioners are deluded by their models and thus disconnected from the real lives of people.
    This could mean many things including: current models are oversimplistic (i.e., disconnected) yet treated as gold, models in principle are oversimplifying so will never be a good tool, or models are only useful in conjunction with other (qualitative) methods.

2 can be seen as a refinement of 1; that is, the way in which (Quantitative) Social Science is no more scientific than history, philosophy, or literary criticism is that it doesn't do anything more than "tell stories," albeit with a quantitative gloss. Obviously, there's some difference in what they do, as a novel about lost love is topic-distinct from a history of glass blowing in Egypt. Even when topic-congruent, we expect a novel about the Civil War to be a different kind of thing than a history of the Civil War. Not all stories have the same structure or purpose or value for a task, after all.

A Standard Caveat

Many debates about the “scienciness” of a field are prestige fights and as a result tend to be pretty worthless. That something is or isn’t a science per se doesn’t necessarily tell you about the difficulty or significance of it or much at all about its practitioners. There are sensible versions but they tend to be more focused on specific methodological, evidential, sociological, or ontological questions.

Comparative Scientivisity

(I’m not going to resolve this issue in this post. But here’s some gestures.)

While there’s some degree of “qualitative humanties is superior” in Erik’s posts (cf claim 3 and, wrt 1 and 2, the idea that they at least know their limits), let’s stick to the comparative scienciness claim. These points (the categorical and the superiorness) aren’t fully separable. (I.e., science is successful in certain enviable ways thus other fields try to glom on.)

Let’s pick a distant pair: election forecasting and interpretative literary criticism. It does seem that these two things are really different. If the literary criticism teases out a possible interpretation of, say, a poem, then the evaluative criteria for  the interpretation is whether it is “correct”, or “valid”, or “insightful” and the evaluative mechanism is (typically) either brute human judgement or more criticism (i.e., the presentation of other interpretations either of the criticism or of the original poem). The most obvious evaluative criterion for election forecasts is predictive success (and usually rather easy to verify predictive success). Prediction, of course, is a key indicator of science, so the fact that election forecasting (inherently)aims at prediction might be enough to cast a sciency feel on its parent discipline, political science.

Of course, astrology and tarot also aim at prediction. Their lack of science status doesn't rest solely on their predictive failure. Indeed, predictive failure alone won't give us a categorical judgement (science/nonscience), since it could just as easily indicate bad or failing science. Throwing in some math won't do the job either, as astrology and numerology are happy to generate lots of math. The fact that the math tends to generate models that reasonably cohere with our other knowledge of the physical world is a better indicator.

If we move over to history, it's tempting to say that the main difference is analogous to autopsy vs. diagnosis: it's much easier to figure out what killed someone (and when) than what will kill someone (and when). Even the existence of epistemically or ontologically ambiguous cases (e.g., we can't tell which bullet killed them, or multiple simultaneous bullets were each sufficient to kill them) doesn't make autopsy harder. (For one, it's generally easier to tell when one is in such a situation.)

But there’s plenty of backward looking science. Cosmology and palentology and historical climate studies come to mind. They do try to predict things we’ll find (if we look at the right place), but it’s hard to say that they are fundamentally easier. What’s more, they all rest on a complex web of science.

I feel confident that history could (and probably does) do a lot of that as well. Surely more than most literary criticism would or perhaps should (even granting that some literary criticism, such as author attribution, has become fairly sciency).

What does this mean for Erik’s claims?

I’m not sure. A lot of what we want from understanding of phenomena is how to manipulate those phenomena. But one thing we can learn is that we don’t have the capacity to manipulate something the way we’d like. This goes for buildings as well as elections.

(Oops. Gotta run to a play. But I don’t want to leave this hanging, so I’ll leave it with a hanging ending. But I’m also genuinely unsure where to go with this. I still have trouble interpreting Erik’s claims that leads me to any action.)

Starting to Sort Out the Election

November 29, 2016

There’s lots to sort out. Obviously, that it was a disaster is not something we need to sort out: We know that it was a disaster. The precise contours of the disaster will only become clear over time, but it will be pretty damn bad.

We do need to come to an understanding of what happened, including what we can or cannot know about it. Joe from lowell suggests that the election was analogous to the Iraq war from a punditry (and maybe political science?) perspective: That is, people got it very wrong and those people should have some humility and be treated as at least somewhat unreliable.

I said stuff before the election about the election outcome, so I need to figure out if I’m unreliable (or to what degree I’m unreliable). Note that this isn’t a flagellation exercise, or at least, not intended to be primarily one. If I’m relying on flawed models I should update my models! This election does seem to potentially provide some interesting new data, in any case.

Before the election:

And I really don’t think Karen24’s Llama Drama is because she’s a Rat Rogerer. I think she’s a Nervous Nellie who keeps panicking in the comments here at least partly because she can be so authoritatively reassured by other commenters. For those of us who are freaked out by the reminders that it’s not an automatic total blowout even with Donald Trump on the ballot, because (1) the MSM won’t do its fucking job and (2) so many American voters are still such horrid reactionary dumbshits, I think you underestimate how soothing it is for Bijan Parsia et al. to set us straight.

Here’s an example in that thread of me “setting them straight” with “authoritativeness”:

Look at the pattern though. “Trend” doesn’t mean “current slope”.

And look at several:

http://polltracker.talkingpointsmemo.com/contests/us-president-2016

http://elections.huffingtonpost.com/pollster/2016-general-election-trump-vs-clinton

It’s annoying, but there’s no evidence that he’s winning or that his line will cross hers. If it does, that will be a surprise.

I think that what I said is consistent with what happened, so I guess I would say it again in relevantly similar situations. Should the people who trusted me before trust me in the future? I…guess? I’m not going to overpredict worst case scenarios, but I don’t think I was overconfident.

One thing that did come out of this election was the idea that many aspects of campaigns matter even less than we thought. The classic view about “fundamentals” based models of elections is that modern campaigns are fairly evenly balanced so tend to cancel each other out. So all that’s left are things like the state of the economy.

One thing that was clear in this election was that the campaigns were very lopsided on some key features like get-out-the-vote operations. This suggested that it would be a powerful natural experiment!

I would guess this range underestimates Clinton’s chances, because the models can’t account for Trump’s unusually unprofessional campaign

I’ve felt this a few times, but then I start to worry that perhaps an unusual (or unusually unprofessional) campaign might not be the sort of drag we’ve thought it would be.

Just consider two factors:

1) Advertising, esp. television
2) GOTV, esp. day of ground game

Up until now, HRC has had the airwaves to herself. It’s not clear that it’s done any good. Or, I’d like to know how it’s done good. Maybe it’s too early. Maybe it just hasn’t shown up in polling. Who knows. In the primaries, the rest of the candidates held off and Trump never collapsed. So we don’t really know what advertising will do to this heavily exposed, extremely strange candidate.

What is the effect of GOTV efforts? I believe that it will have an effect, but it might be pretty small compared to random turnout effects. It's a good bit of body English, but it doesn't seem to be the makings of a blowout. (I'd love pointers to literature on this.)

Maybe campaigns *really* don’t matter?

wjts formulated an interpretation of possible outcomes:

wjts says:

But what would show us that the inept vs. ept campaign didn’t matter?

Certainly a Trump win (absent a major exogenous shock like 9/11) would. A win for Clinton along the lines of Obama’s reelection might. A convincing win for Clinton in the popular vote plus a sweep or near-sweep of the toss-up states and a win in one or two “lean R” states in the Electoral College would be pretty good evidence that campaigns do matter.

So, there we go?

One potential confounding factor is "the media" and, frankly, the FBI. One alternative story is that the last-minute Comey letters were fairly strong events (given all the media priming on EMAILZ!!!) that worked to the Trump campaign's advantage.

In general, Democrats seem to have a disadvantage in turnout (as we see dramatically in midterm elections). There are almost certainly loads of factors, but they might start the scales tilted strongly against them. This would explain why the "vote against Trump" effect wasn't so very strong.

Of course, the simplest explanation is just that fundamentals set the stage and a certain amount of randomness completes the job. If you make it to a major party nomination, you can win. Trump was slightly favored by the fundamentals, strongly hampered by his campaign, but had a bit of luck.

This isn’t very satisfying! It’s particularly unhelpful for any future planning. (Lots of post mortem analysis is actually quite worthless for future planning. “Nominate a better candidate” doesn’t really help as no one is really in control of the nominations! This conclusion suggests that Bernie would not have done better. If campaign effects are small they are small in both directions. My guess is that he would have been “as likely” to lose…that is, not very likely, but it would be possible.)

Turnout is obviously critical, but our understanding of how to goose turnout is really poor. Voter suppression efforts are pure evil, but it's unclear that they are turning elections (yet). I can't find the reference, but apparently the Clinton campaign contacted twice as many voters as the Trump campaign did. Now, some are suggesting that they ended up turning out Trump voters, but I don't think we know the actual effects yet. If we go back to Romney-Obama, we see a large gap in effort but not a dramatic advantage:

We estimate that the presidential campaigns increased turnout by more than 10 percentage points among targeted subgroups, indicating that modern campaigns can significantly alter the size and composition of the voting population.

In this paper, we exploit the 2012 presidential campaign to assess the aggregate effects of a large-scale campaign on the size and composition of the voting population. We take advantage of variation in ground campaigning across state boundaries, extensive information on Romney and Obama campaign tactics, and detailed information on every voter in the United States to estimate the effects of the entire campaign. Our results suggest that the aggregate mobilizing effect of a presidential campaign is quite large. We estimate that the 2012 campaign increased aggregate turnout by approximately 7 percentage points

This analysis also allows us to compare the relative effectiveness of the Obama and Romney campaigns. The Obama campaign of 2012 has been championed as the most technologically-sophisticated, evidence-based campaign in history while the Romney campaign was more traditional (e.g., Issenberg 2013). When we began this project, we surveyed 46 academics, and they predicted that Obama's campaign was almost 3 times as effective as Romney's in mobilizing supporters. Do these perceptions manifest themselves in the data?

As discussed above, this analysis allows us to roughly compare the effectiveness of the Obama and Romney campaigns in mobilizing their respective supporters. Despite the purported technological sophistication of the Obama campaign and its devotion to a data-driven, evidence-based campaign, we see similar mobilization effects on both sides of Figure 3. The two campaigns were roughly comparable in their ability to turn out supporters.

One interpretation of the Romney campaign's slight advantage with their own partisans is that Democrats are simply harder to mobilize than Republicans. Indeed, previous research suggests that, on average, GOTV interventions are more effective for conservative and high-socioeconomic-status citizens (Enos, Fowler, Vavreck 2014). The amount of effort and resources needed to mobilize Democratic supporters may be greater than that needed to mobilize Republican supporters. With this in mind, even if the Obama campaign was more advanced than the Romney campaign, this difference was not great enough to overcome this structural disadvantage.

If this is confirmed in the 2016 data, it points a direction, at least, for the Democratic party: It needs needs needs to solve the GOTV problem. This would solve all sorts of problems including midterm elections, statehouse controls, etc. etc.

The problem is that we really don’t know how.


On a related note, Nate Silver is sorta claiming that his prediction was more accurate because it gave more weight to Trump's chances of winning (plus some details of why his model gave such weight…roughly, the scenario that played out was one his model could and did contemplate). I thought that 538's coverage and the Silver models were poorer than the alternatives, esp. Sam Wang's. Now, Wang's prediction was much more heavily weighted toward Clinton (i.e., a 99% chance vs. ≈70% in Silver's).
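One standard way to compare forecasters after the fact is a proper scoring rule. Here's a minimal sketch using the Brier score on just the single national outcome (a real evaluation would score state-level calls across many elections; the probabilities are the figures above):

```python
# Brier score for a single binary event: (forecast_prob - outcome)**2,
# where outcome = 1 if the event happened. Lower is better.
def brier(p_event: float, happened: bool) -> float:
    return (p_event - (1.0 if happened else 0.0)) ** 2

# Forecast probabilities of a Trump win: Wang's 99% Clinton implies 1%
# Trump; Silver's ~70% Clinton implies ~30% Trump; plus a coin toss.
forecasts = {"Wang": 0.01, "Silver": 0.30, "coin toss": 0.50}

for name, p in forecasts.items():
    print(name, round(brier(p, happened=True), 4))
# Wang 0.9801, Silver 0.49, coin toss 0.25
```

Note the perverse-looking result: on this one event the coin toss scores best of all. Proper scores only discriminate between forecasters over many events, which is part of why a single election can't settle the Wang vs. Silver question.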

I need to think more about this.

At long last…

November 8, 2016

My vote was cast weeks ago by absentee ballot. I’ve contributed a bit to various House and Senate campaigns. I feel like shit but that’s only partly related to the election.

2016 has been a pretty awful year. I’m definitely not over the Brexit referendum.

My guess is that the results today will be between my worst and best. The odds are strongly for Hillary Clinton as Prez. That’s a big deal. Trump is a cataclysm in several dimensions even above having a unified Republican government. So, whew!

Democratic control of the Senate is likely, but not as sure a bet. Clinton + Democratic Senate = pretty damn good.

Democratic control of the House is very unlikely. We’ll make some gains for sure, but it’s a long shot. HRC + Dem Senate + Dem House = me dancing around. That would take some of the sting out of Brexit follies.

So, I’m expected Clinton + Dem Senate which means, finally, a sane Supreme Court. 2018 will be ugly, but we can make some progress with this combo.

Go out and vote. I would prefer that you didn’t vote stupid and evil, so I encourage you to vote a straight Democratic ticket.

Update:

Well, that went poorly. Extremely poorly. We hit the worst case scenario. That sucks.

So, for maybe the first time in my life I’m thinking of going low-information on politics and policy.

1) There seems to be little point. Nothing good is coming down the pike in the US or UK (my polities). I’m not sure much good will happen in the EU. I don’t know how being well informed about a shitstorm does much.

2) There’s a pretty big emotional and time toll.

3) For politics, at least, it seems that all the stuff I’ve read is pretty wrong. We’re in a strange era. While some bits remain interesting (e.g., differential non-response) it doesn’t seem to add up to anything helpful. Maybe a new consensus will emerge, but it’s not like I’m going to do research in the area.

4) There’s a lot of wonderful people talking politics and policy but also a lot of nasty ones. It doesn’t take a lot for it to be annoying. This adds pointlessly to the emotional and time toll.

In the end, I know how I’m going to vote. I will vote. I’m happy to give some money. But I feel pretty hopeless otherwise. Even if I had the temperament to do canvassing and stuff, the benefit levels seem pretty low.

Should you prefer sensitive (noisy) or insensitive (lagging) poll aggregation?

September 22, 2016

There are quite a few poll aggregators and predictive models based on poll aggregation. This is a huge improvement on the status quo ante where our basic access to polling data was at the individual poll level.

Polls have error. Polls have biases (hidden and otherwise). Polls are a snapshot.

When you see a headline number of a poll, remember there are at least three factors: the poll's data-acquisition methodology (their sampling strategy, the questions they ask, etc.), the actual data gathered, and the interpretation of that data. Each of these can have a very large effect on the headline numbers, and any of them could easily reverse the rank order of the candidates. (See the wonderful Upshot article wherein they gave the same gathered data to 4 pollsters and got 4 different results, which included a Trump lead and several Clinton leads. These pollsters were all doing a defensible job! No hackery there!)
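The interpretation step is easy to illustrate with a toy sketch: the same raw responses weighted to two different assumed electorates. (All counts and electorate shares below are invented for illustration; real likely-voter models are far more elaborate.)

```python
# Raw sample shared by two hypothetical pollsters: (group, candidate, count).
raw = [
    ("college",     "Clinton", 340), ("college",     "Trump", 260),
    ("non-college", "Clinton", 180), ("non-college", "Trump", 220),
]

def headline(electorate_share):
    """Weight each group's responses to a pollster's assumed electorate
    and return the Clinton lead in percentage points."""
    totals = {"Clinton": 0.0, "Trump": 0.0}
    for group, share in electorate_share.items():
        group_rows = [r for r in raw if r[0] == group]
        group_n = sum(count for _, _, count in group_rows)
        for _, cand, count in group_rows:
            totals[cand] += share * count / group_n
    return round(100 * (totals["Clinton"] - totals["Trump"]), 1)

# Pollster A assumes college grads are 55% of the electorate; B assumes 40%.
print(headline({"college": 0.55, "non-college": 0.45}))  # 2.8
print(headline({"college": 0.40, "non-college": 0.60}))  # -0.7
```

With these invented numbers, pollster A reports Clinton +2.8 while pollster B reports Trump +0.7: the leader flips purely on the turnout assumption, with identical raw data.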

Poll aggregation is, in effect, a poll of polls. So the same factors feed in: methodology (do you include 4-way race polls?), actual data, and interpretation (do you weight your averages?). As a result, aggregators can give you different results. For example, Talking Points Memo's PollTracker:

http://core.talkingpointsmemo.com/pt/charts/contest/us-president-2016?f=%7B%22methodology%22%3A%5B%5D%2C%22new%22%3A1%2C%22r%22%3A%5B0%2C1%5D%2C%22population%22%3A%5B%5D%2C%22with_candidates%22%3A%5B%224e8b71050a30d83b5587ba54%22%2C%224e8b71050a30d83b5587c275%22%5D%2C%22pollsters%22%3A%5B%5D%7D

Is generally a bit more pessimistic about Clinton than the HuffPost Pollster

http://elections.huffingtonpost.com/pollster/2016-general-election-trump-vs-clinton/embed.js

And the RealClearPolitics one is more pessimistic about Clinton:

http://www.realclearpolitics.com/scripts/widget_embed.js?id=5491&width=450&height=338&key=general_election_trump_vs_clinton

(I’m going roughly by the number of times Trump’s trend line touches or crosses Clinton’s.)

When we get to forecasting models, we get even more variance. A forecasting model is a prediction of a candidate's chances of winning, usually expressed as a probability. So if you see that Clinton has a 65% chance of winning, it's not that she's polling at 65%, but that she has a 65% chance of winning the election (which she might do by a razor-thin margin!). For win probability, a very stable razor-thin margin is better than a highly volatile large margin. Or it should be!
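The "stable thin margin beats volatile large margin" claim can be checked with a tiny Monte Carlo sketch. (The margins and volatilities are invented; a real forecasting model also has to handle correlated state-level errors.)

```python
import random

def win_probability(mean_margin, volatility, trials=100_000, seed=0):
    """Share of simulated elections the candidate wins, with the final
    margin drawn from Normal(mean_margin, volatility). Illustrative only."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(mean_margin, volatility) > 0 for _ in range(trials))
    return wins / trials

# A stable 1-point lead vs. a volatile 5-point lead (invented numbers).
print(win_probability(mean_margin=1.0, volatility=0.5))
print(win_probability(mean_margin=5.0, volatility=6.0))
```

The stable one-point lead comes out around a 98% win probability, while the volatile five-point lead comes out around 80%: volatility, not just the headline margin, drives the win probability.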

Some predictive models are more volatile than others. You can see this most easily on FiveThirtyEight's prediction page because they have convenient radio buttons for selecting between three models with different levels of sensitivity to the polls (with the "nowcast" being a "straightforward" poll aggregation). In contrast, Sam Wang's model tends to move more slowly, by design.
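The sensitive-vs-lagging distinction can be illustrated with exponentially weighted averages over the same invented series of daily poll margins (real aggregators are more sophisticated, but the smoothing tradeoff is the same):

```python
def ewma(series, alpha):
    """Exponentially weighted moving average.
    alpha near 1 = reactive, nowcast-like; alpha near 0 = slow-moving."""
    avg = series[0]
    out = [avg]
    for x in series[1:]:
        avg = alpha * x + (1 - alpha) * avg
        out.append(avg)
    return out

margins = [4, 4, 5, 1, 0, 4, 5]  # invented Clinton lead, with a mid-series dip

sensitive = ewma(margins, alpha=0.6)
lagging = ewma(margins, alpha=0.1)
print([round(x, 1) for x in sensitive])
print([round(x, 1) for x in lagging])
```

The reactive series plunges to about a one-point lead during the dip and then recovers; the lagging one drifts down by less than a point the whole time. Which you prefer depends on whether you think the dip reflects real movement or noise.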

So, which should you prefer?

In general, just as with polls, it’s good to look at multiple models. It gives you more information and reminds you that prediction is a tough tough game.

I think, in general, it’s worth being stable rather than highly reactive, so I tend to lean on less volatile models. There are several reasons:

  1. We’re still pretty far out. Getting worked up about something that might be a statistical blip or a cyclic movement is pretty unwise. If some movement in the averages or forecasts is worth worrying about, then it will be durable and show up in all the models. Getting a “jump” on bad (or good) news isn’t really helpful, esp. as there’s little to do in response (for most of us). It’s similar to the stock market: Most of us aren’t equipped to do much short term trading efficiently, so it’s better off thinking long.
  2. We really don’t know the underlying causal structure. One phenomenon that has been shown in the lab is “differential (non-)response”, that is, it is common that people respond (at all!) to polls depending on “(de)energising” events. Thus, consider convention bumps. Each candidate typically gets a boost in the polls that then fades after their convention. Why? Are people changing their mind? Are they really that fickle? Perhaps, but it also could be the case that there voting intentions (which is what we care about) don’t change, but whether and how they respond to polls changes. Thus, in addition to sampling error and other methodological and interpretive biases, we have the possibility that salient events might change polling results without there being a change in the phenomenon we’re trying to measure.
  3. Given the strong negatives associated with a Trump victory, anything from a 10% on up is extremely worrisome. It’s worth being worried. If you can use that worry to prompt action, you should do it regardless of the current state of the polls.
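Differential response (point 2 above) is easy to simulate: hold everyone's voting intention fixed and vary only the probability of answering the pollster. (All response rates and the dead-even electorate are invented for illustration.)

```python
import random

random.seed(1)
# Fixed voting intentions: a dead-even electorate of 100,000 voters.
electorate = ["D"] * 50_000 + ["R"] * 50_000

def poll(response_rate):
    """Simulate one poll given per-party response rates; return the
    D margin in percentage points among those who respond."""
    sample = [v for v in electorate if random.random() < response_rate[v]]
    d = sample.count("D")
    return round(100 * (2 * d / len(sample) - 1), 1)

# Before a bad-news cycle for D: both sides equally likely to respond.
print(poll({"D": 0.05, "R": 0.05}))  # margin near zero
# After: demoralised D supporters answer the phone a bit less often.
print(poll({"D": 0.04, "R": 0.05}))  # large apparent R swing, no minds changed
```

The electorate is identical in both polls; only willingness to respond differs, yet the second poll shows a large apparent swing toward R. That's the mechanism behind the illusory-shift hypothesis (and why published response rates would be so useful).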

So, prefer the more stable aggregators and forecasts. Also prefer the ones that are most inclusive of polls and minimise the "special sauce" in their models. If you want to know what a fundamentals model predicts, just use a separate prediction rather than trying to weave it into your polls-based predictor. There's enough interpretative variability that adding things which aren't really made to work together is a bad idea. Better that each sort of evidential base has its own predictive model so you can compare them more or less directly.

Note: WordPress.com scrubbed all the embedding code for the aggregations. I’ll try to update with screenshots later. Sigh.

Update: PollyVote is a forecast-model aggregator! So it saves you the work 🙂 (It seems to have two levels of aggregation: it aggregates within a type of forecasting method, e.g., prediction market vs. econometric, and it aggregates over those types.) One interesting thing is that it provides popular and EV vote totals, as opposed to a win probability. Another is that it doesn't incorporate error estimates (indeed, it's hard to see how it could). OTOH, it's super simple and straightforward and covers the main sources of evidence. It will be interesting to see how it does in this weird, weird year.

Blog Shout Out for Now Face North

September 21, 2016

Now Face North is a blog by long-time LGM most-valued commenter JL. Any JL comment is worth reading. A JL comment about sexual or domestic abuse or rape victims (esp. about support) or activism is worth spending some serious time with. As we go into the election, her stuff will present a side of electioneering that you won't typically see. Whether it's about political trials and public defenders, the differential treatment of pro-Trump and anti-Trump protesters by police at an RNC protest, or the ins and outs of being a street medic, there's a lot there. It's experientially grounded but clear, coherent, and thoughtful. And sometimes pretty funny:

Do you know where this march is going?

Okay, seriously, undercover/plainclothes cops, I don’t know why you all always seem to think that medics will know the answer to this question, but we usually don’t. Please stop asking me. Also, most of you are bad at pretending to be protesters. There are notable exceptions, but they are generally not the ones who meander up to street-medics fake-casually to ask where the march is going. If you’re not a cop and you’re asking me this question anyway, I still probably don’t know. Ask an organizer.

Add it to your rotation this election season. JL doesn’t post that frequently, but binge reading is a delight.