What did we learn about polling in 2016? And how can we do better going forward?

By Angus Reid, Chairman, and Ian Holliday, Research Associate

December 9, 2016 – In the world of public opinion research, 2016 was a year marked by two big surprises: First, in June, most polls leading up to the “Brexit” referendum showed the British public narrowly leaning toward a vote for the U.K. to remain in the European Union, only to see the “leave” side post a narrow victory in the end.

Then, just last month, polling aggregators gave Hillary Clinton between a 70 and 99 per cent chance of winning the U.S. presidential election, only to see Donald Trump win 306 electoral votes and the presidency.

These were significant polling and projection misses, and much has been written about what went wrong and how the industry could avoid such mistakes in the future.

It’s worth noting, of course, that neither of these misses was necessarily as big as it has been made out to be. On Brexit, for example, regardless of which side they projected to win, most pollsters had the race getting closer as referendum day drew nearer and undecided voters began to choose sides.

And in the U.S., Clinton’s lead in the popular vote has risen to more than 2.5 million votes since election day – good for a 2-percentage-point victory. While this is still short of the 4-point lead many pollsters showed Clinton enjoying nationally, it’s within the margin of error for most.
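To illustrate why a two-point miss can sit within sampling error: for a simple random sample, the conventional 95-per-cent margin of error on a proportion follows from the standard formula below (the sample size shown is typical of national polls, not taken from any specific 2016 survey):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A national poll of ~1,000 respondents with support near 50%:
print(round(margin_of_error(0.50, 1000), 1))  # ~3.1 points
```

With a margin of roughly plus or minus three points on each candidate’s share, a poll showing a 4-point lead is statistically compatible with a true 2-point result.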

These near misses show that public opinion polling is not broken, but it is in dire need of improvement, particularly given the paramount importance of public opinion in creating such seismic changes as Brexit and Trump.

So, how can we better measure and report on public opinion in 2017? Here are some suggestions:

  1. The media infatuation with polling aggregation must end.
  2. Polling reports must link outcome projections to different turnout scenarios.
  3. Private foundations and major associations must engage respected polling organizations to provide funding for high-quality, multi-modal research.

In a world of aggregated polling averages and omnipresent media coverage, pollsters need to be more conscious of the effect their research can have on election outcomes.

In science, the “observer effect” is the idea that certain things cannot be measured without being influenced by the very act of measuring them. The same may be true of democracy. When polls consistently agree on an outcome, and data journalists predict one result has a 70 or 80 or 90 per cent chance of occurring, those who prefer that outcome may be lulled into a false sense of security. If they believe their preferred outcome (i.e. staying in the E.U. or electing Hillary Clinton) is a fait accompli, they may see less reason to turn out and vote.

Polling aggregators like FiveThirtyEight and The Upshot exacerbate this problem by combining numerous polls into a single prediction metric, flattening disagreement between individual pollsters and lumping together both high- and low-quality research. Whether intended or not, this increases the certainty people feel about election polling, which can galvanize those who see themselves on the losing end, and create complacency among those who anticipate victory.
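The flattening effect can be seen in a toy example. The poll names and leads below are invented for illustration, and the simple mean is a deliberately naive stand-in for an aggregator’s model, but the point survives: one headline number hides substantial disagreement between surveys of very different quality.

```python
from statistics import mean, stdev

# Hypothetical final-week polls (candidate's lead, in points); quality varies widely.
polls = {
    "high-quality phone poll": 4.0,
    "online panel": 3.0,
    "IVR robo-poll": -1.0,
    "partisan poll": 6.0,
}

leads = list(polls.values())
print(f"aggregate lead: {mean(leads):+.1f} points")        # one headline number...
print(f"spread between polls: {stdev(leads):.1f} points")  # ...conceals this disagreement
```

A reader who sees only the aggregate lead of +3.0 never learns that one poll in the set had the other candidate ahead.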

One way pollsters – and aggregators, for that matter – can counteract this observer effect in election polling is to publish turnout-specific estimates of candidate support, building on known tendencies in the electorate. For example: In democracies around the world, young people tend to be less likely to vote, while older people are more likely to do so. A high-turnout election virtually always includes more young voters, while a low-turnout election virtually always includes fewer of them.

Pollsters could use their data to produce projections based on turnout matching the previous election, as well as estimates of what the result would be if turnout rose or fell by a significant margin.
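A turnout-scenario projection of this kind might be sketched as follows. The age groups, support levels, and turnout rates are invented for illustration (they are not drawn from any real poll), but the mechanics mirror the approach described above: weight each group’s measured support by how likely that group is to show up.

```python
def project(groups, turnout):
    """Project overall candidate support from per-group support and turnout.

    groups:  {name: (share_of_eligible_voters, support_for_candidate)}
    turnout: {name: expected_turnout_rate}
    """
    voters = {g: share * turnout[g] for g, (share, _) in groups.items()}
    total = sum(voters.values())
    return sum(voters[g] * support for g, (_, support) in groups.items()) / total

# Hypothetical electorate: young voters favour the candidate but vote less often.
groups = {"18-34": (0.30, 0.60), "35-54": (0.35, 0.50), "55+": (0.35, 0.40)}

baseline = {"18-34": 0.45, "35-54": 0.60, "55+": 0.75}  # last election's rates
surge    = {"18-34": 0.60, "35-54": 0.65, "55+": 0.78}  # higher-turnout scenario

print(f"baseline turnout: {project(groups, baseline):.1%}")
print(f"youth surge:      {project(groups, surge):.1%}")
```

Publishing both numbers would tell readers not just who is ahead, but under which turnout conditions the lead holds or evaporates.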

Another development that would help improve the measurement of public opinion in 2017 would be to find a new model for funding public opinion polling.

Newspapers and television networks, by and large, can no longer afford to fund high-quality public opinion research in the way they once did, and the flattening effect of aggregators makes it harder for firms doing high-quality research to capitalize on their success and build brand recognition.

Meanwhile, with free long-distance calling and new technologies for conducting polls, virtually anyone with a $3,000 auto-dialer can be “polling” via Interactive Voice Response (IVR) robo-calls.

There’s no easy solution to the funding puzzle, but as 2016 has shown, finding a way to accurately and reliably gauge public opinion is essential to the future of democratic societies. With this in mind, charitable foundations and think tanks with an interest in democracy should consider funding high-quality public opinion research.

There have been lots of complaints directed at pollsters in 2016. The industry needs to listen and learn, but equally, those who would disparage polling need to consider what a major election or referendum would look like without any reliable advance notice of what the voting public might be thinking.

Today’s polling industry has its roots in the 1930s – a time when Fascism was on the rise in many parts of the world, and when public opinion was often gauged by the size of crowds at rallies and the number of people marching in the streets. For all its flaws, contemporary polling remains a vastly superior means of measuring the will of the people, and a safeguard against powerful interests who would claim to know what that will is.
