Why did so many pollsters miss the Republican wave?

Like most forecasters, we were overly cautious in predicting the magnitude of the Republican midterm victory last week.

However, our caveats did give readers a good guide to interpreting the results. Thom Tillis upset Kay Hagan in North Carolina, Pat Roberts hung on in Kansas, and David Perdue won outright (averting a run-off) in Georgia. We didn’t predict any of those outcomes, but we raised all of them as distinct possibilities, and explained what those victories would mean.

The size of the Republican wave was a surprise to forecasters for a simple reason. The polling this cycle was unusually bad. Mark Blumenthal and Ariel Edwards-Levy, the Huffington Post’s polling analysts, observed after the election that, per an analysis of historical partisan bias in Senate polls by Nate Silver, the error in polling averages was the worst since 1998.

In that year — the first in decades in which the president’s party gained seats in a midterm — polls understated Democrats’ performance by an average of 4.9 percentage points. This year the polls understated Republican performance by a similar margin.

Political scientists will be combing through the data for years to pin down what went wrong, but we can get some idea by looking at the few pollsters who outperformed the rest. One of them is Ann Selzer of Selzer & Company, the Iowa-based firm that conducts polls for the Des Moines Register.

Most pollsters showed a close race in Iowa, with Joni Ernst ahead by 3 points or fewer. When the Des Moines Register poll showed Ernst ahead by 7 points, many assumed the result was an outlier. But Ernst won by 8.5 points — within the margin of error for Selzer’s poll, and no one else’s.
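To put those numbers in perspective, here is a back-of-the-envelope margin-of-error calculation in Python. The sample size is an assumption for illustration, not the Register poll’s actual figure, and this uses the conventional per-candidate margin of error that poll write-ups typically report.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Conventional 95% margin of error, in percentage points, on one candidate's share."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample of 800 likely voters -- not the actual Iowa Poll sample.
moe = margin_of_error(800)
print(f"margin of error: ±{moe:.1f} points")   # about ±3.5 points

# Ernst won by 8.5 points. That result sits within 3.5 points of Selzer's
# 7-point lead, but more than 3.5 points away from polls showing a 3-point race.
print(abs(8.5 - 7.0) <= moe, abs(8.5 - 3.0) <= moe)   # True, False
```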

What did Selzer do right?

“My method has been the same throughout my time with the Iowa Poll,” Selzer told the Register for a post-election story on her success.

“I have not varied, I have not changed the way we identify likely voters. We don’t take a look at the most recent election and say here’s what that means we should do differently.”

Instead of previous election results, Selzer uses census data as her starting point and balances demographics based on how different groups respond to her likely voter screen. She explained to Nate Silver how this worked for young and minority voters in 2008, when her results correctly showed better numbers for Obama than other polls did.
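The sketch below shows the general idea of weighting a sample to census demographics rather than to a model of past turnout. It is a simplified illustration in Python: the age groups, shares, and respondents are invented, and it is not Selzer’s actual procedure or likely voter screen.

```python
# Post-stratification sketch: weight respondents so the sample's demographic
# mix matches census targets. All numbers below are illustrative only.

census_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # population targets
sample_share = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}   # who actually answered

# Every respondent in a group gets the same weight: target share / sample share.
weights = {g: census_share[g] / sample_share[g] for g in census_share}

def weighted_support(respondents):
    """respondents: list of (age_group, supports_candidate) pairs."""
    favorable = sum(weights[group] for group, supports in respondents if supports)
    total = sum(weights[group] for group, _ in respondents)
    return favorable / total

# Tiny made-up sample: young respondents get up-weighted, older ones down-weighted.
sample = [("18-34", True), ("35-54", False), ("55+", False), ("55+", True)]
print(weights)
print(f"weighted support: {weighted_support(sample):.0%}")
```

The point of the approach is that the targets come from who lives in the state, not from a guess about who will show up based on the last election.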

The Register continues:

Did she have doubts about those Senate results? No, not really, she said.

Selzer parsed the data and found no evidence that one group or another was overrepresented. And Braley’s deficit in his own congressional district was a strong indicator for her that the overall results were on target.

“Internally, there was a consistent story,” she said. “It was hard to find any little piece of light that said Braley could pull this out.”

Perhaps Selzer’s confidence, born of decades of experience, can’t be replicated, but some parts of her methodology can be. The bad news is that it can’t necessarily be done cheaply.

Selzer’s sample includes cell phones, which requires live interviewers: laws aimed at telemarketers generally forbid calling mobile phones with recordings that ask respondents to push buttons. As more and more people give up their landlines, automated polling becomes increasingly problematic.

The demand for polls and the budgetary constraints of the media outlets that commission them mean that there are likely to be a lot of low-quality polls. Prediction models try to account for this by putting more weight on high-quality polls like Selzer’s, but there are only so many of them.
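As a rough illustration of how that weighting can work, here is a toy quality-weighted average in Python. The polls, sample sizes, and quality scores are invented, and this is not any particular forecaster’s model.

```python
# Toy quality-weighted polling average. All figures are hypothetical.

polls = [
    # (lead in points, sample size, quality score between 0 and 1)
    (7.0, 800, 0.95),   # a high-quality live-caller poll
    (2.0, 600, 0.60),   # a cheaper automated poll
    (3.0, 500, 0.55),
]

def weighted_average(polls):
    # Weight each poll by its quality score times the square root of its
    # sample size, so larger and better polls count for more.
    weights = [quality * (n ** 0.5) for _, n, quality in polls]
    return sum(w * lead for (lead, _, _), w in zip(polls, weights)) / sum(weights)

print(f"weighted average lead: {weighted_average(polls):.1f} points")
```

Even with sensible weights, the average can only be as good as the polls feeding into it.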

Data analysis, it turns out, has its limits. Election results are not going to be completely predictable any time soon. The good news is that when politics is less predictable, it’s also more fun.
