Judge online polling by real-world accuracy, not academic theory

By Angus Reid

My commitment to the online methodology came from an epiphany of sorts that I had at the turn of the millennium, when non-participation rates for conventional polling were starting to top 90 per cent. It became clear to me that we must find a better way to bring potential survey respondents into the research process – especially in a world that places more emphasis than ever on personal privacy and where robo-calls and even live interviewer engagement are seen as spam.

If enough care and attention (and dollars) are committed to this task, it is possible to build very large panels of “double opt-in” respondents – that is, people who choose to be part of the panel and then choose whether to participate in the surveys offered to them. With the proper investment, these panels can be large enough to represent each of the major regions and segments of a given country. Following in the footsteps of the very successful YouGov operation in the U.K., I started building panels in Canada, the U.S. and Britain.

The online panel offers some major advantages over other forms of research. Surveys can be completed on mobile devices at the respondent’s convenience and can include pictures and video to obtain a more realistic response context. Most of the investment in online research goes into maintaining and growing a large group of potential respondents – rather than paying for interviewers in call centres.

Because online polls sample from a pre-recruited group of people willing to take surveys, some critics have attacked the method, claiming that the sample is not truly random and therefore the margin of error typically quoted when a poll is released (e.g. “accurate to within ±4 per cent, 19 times out of 20”) cannot be used.

While this criticism is technically correct, it is arguable that no poll today should quote a margin of error, given the very serious problems of low completion rates and high refusal rates that prevent a truly random sample.
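For context, the conventional figure critics invoke comes from the standard formula for a simple random sample. A minimal sketch (my own illustration, not part of the original argument) shows where the familiar “±4 per cent, 19 times out of 20” wording comes from:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error for a simple random sample.
    z = 1.96 corresponds to 95% confidence ("19 times out of 20");
    p = 0.5 maximizes p*(1-p), giving the most conservative figure."""
    return z * math.sqrt(p * (1 - p) / n)

# A simple random sample of about 600 respondents yields roughly the
# +/-4-point margin commonly quoted in media reports.
print(round(margin_of_error(600) * 100, 1))  # ~4.0
```

The point of contention is that this formula presumes every member of the population had an equal chance of selection – an assumption no modern polling method, online or telephone, can honestly claim.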

Adding to the complexity and confusion is a lack of understanding on the part of many reporters and editors who cite polls and are invariably skeptical of any poll that doesn’t report a margin of error.

In this environment, it is more appropriate to judge pollsters on their real-world performance than through the use of abstract mathematical models. We are living in a time of rapidly changing communication technology and, unfortunately, the standards used to assess polling are rooted in the wrong century.

In the polling world, there are two types of measures to assess the quality of election polling. The first, and most important, is picking the eventual winner. The second involves the level of accuracy surrounding the final projection. In golf parlance, it’s “how close did we get to the pin?”
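These two measures can be expressed as a simple scoring function. The sketch below is my own illustration, and the vote shares in it are entirely hypothetical:

```python
def score_poll(projection, result):
    """Score one election poll on the two measures described above.
    projection and result map party -> vote share in percentage points.
    Returns (called_winner, avg_abs_error)."""
    # Measure 1: did the projected winner match the actual winner?
    called_winner = max(projection, key=projection.get) == max(result, key=result.get)
    # Measure 2: average absolute gap between projected and actual shares.
    avg_abs_error = sum(abs(projection[p] - result[p]) for p in result) / len(result)
    return called_winner, avg_abs_error

# Hypothetical projection vs. hypothetical official result:
proj = {"A": 39.0, "B": 31.0, "C": 19.0, "D": 11.0}
res  = {"A": 39.6, "B": 30.6, "C": 18.9, "D": 10.9}
winner_ok, avg_err = score_poll(proj, res)
print(winner_ok, avg_err)
```

In this invented example the poll both calls the winner and lands within a fraction of a point on average – the “close to the pin” standard.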

On the first of the standards – picking the eventual winner – my accuracy has been 95 per cent. Ironically, the two elections we missed were the closest to my home in Vancouver: Alberta in 2012 and British Columbia in 2013. (Pollsters across all methods missed B.C. in 2013, suggesting something other than polling-method problems was at work in that unusual election.)

In terms of precision, my average error is better than three percentage points. In some cases, such as the 2011 federal contest and the 2012 U.S. presidential election, we were off by one point or less. In others, we projected the winner but our error was considerably larger (Alberta in 2008, for example).

With the plethora of polling methods currently being deployed, it can be difficult to sort out results based on quality. Rather than leaving this determination to theoretical models, it makes more sense to judge pollsters by their records.

Here you can view my “election biography,” so readers can draw their own conclusions. I can’t speak for the entire polling industry, nor for others in the online field, but I’m quite happy with what we’ve been able to accomplish in eight years of working with a completely new research technology.
