Make Questionnaires Great Again

By Shachi Kurl, Executive Director & Ian Holliday, Research Associate

February 17, 2017 – Pollsters and polling organizations are frequently the target of criticism, derision and disdain. People don’t like the data, so folks get mad. We sometimes get it wrong, so folks get mad. And that’s fair play.

Most major pollsters really do everything in their power to conduct public opinion research in a way that is accurate and fair. And while observers spend a lot of time dissecting the methodology pollsters use to collect data, an equally important but often-overlooked part of the survey process is the way the questions are written.

We can tell you, at the Angus Reid Institute, we take this part very seriously. We sweat over vocabulary. We fact-check our questions. How much explanation is too much? Have we now created a leading question? How much explanation is not enough? Will our respondents fully understand the context around the issues we’re canvassing? Then it’s back to the drawing board.

We’ve been on conference calls with colleagues that have lasted hours, trying to get the phrasing just right. And we publish our survey questionnaires along with the data, so people can see for themselves what we asked and why we asked it.

And then… there’s the Trump poll. Or rather, the “Mainstream Media Accountability Survey,” published on the Donald Trump campaign website Thursday afternoon.

When it started making the rounds on social media, and then around our offices, reactions ranged from horror to humor. This is, in our humble but professional opinions, a really, supremely, incredibly badly written poll.

But hey, we’re all about education and furthering polling discourse at the Angus Reid Institute, so let’s just turn this into a teachable moment, and walk through all the ways this survey could have been written better.

First of all, there’s nothing wrong with canvassing American trust in mainstream media organizations. It’s a legitimate subject. One that Pew Research tackles ably every day.

But these questions in the Trump survey? Let’s start by considering question 5, which asks: “On which issues does the mainstream media do the worst job of representing Republicans?” and then instructs respondents to “select as many that apply.” How can they be the things the media is “worst” at if we can pick them all?

Question 6 suffers from almost the opposite problem. It asks respondents which television source they primarily get their news from, then lists a grand total of four options: Fox News, CNN, MSNBC, and “local news.” There are other channels, Mr. President.

The rest of the 25-question survey, aside from a pair of open-ended prompts, follows the same pattern: A yes or no question is asked, and respondents are given four possible response choices: “yes,” “no,” “no opinion,” and “other, please specify.”

Here’s why that’s a terrible response metric:

First, it leads to nonsensical situations like this one:

[Image: screenshot of a Trump survey question that offers a “no opinion” option on a question about the respondent’s own experience]

Sometimes it’s good to have a “no opinion” option. Questions about personal experiences are not those times.

Second, phrasing each question as a yes or no question removes nuance, and adding “no opinion” and “other” only partially addresses this problem.

If a respondent agrees with the premise of each question, they will answer yes to every one, but there’s no way for the people analyzing the data later to know which of those views, if any, mattered most to the respondent. All “yeses” are equal.

If you’ve ever taken a survey and wondered why questions tend to have four- (or more) point scales (“strongly agree,” “moderately agree,” etc.), that’s why.
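
To see the difference concretely, here’s a toy sketch (in Python, with invented answers rather than real survey data) of what an analyst can recover from binary responses versus scaled ones:

    from collections import Counter

    # Five hypothetical respondents answering "Is media coverage unfair?"
    # Binary coding: every degree of agreement collapses into the same "yes."
    binary = ["yes", "yes", "yes", "no", "no"]

    # The same five respondents on a four-point scale: intensity is preserved.
    scaled = ["totally unfair", "mostly unfair", "mostly unfair",
              "mostly fair", "totally fair"]

    print(Counter(binary))  # Counter({'yes': 3, 'no': 2}) -- all "yeses" are equal
    print(Counter(scaled))  # the full distribution shows how strongly people feel

The binary tally can tell you that three of five respondents said yes; only the scaled version can tell you whether those three are mildly annoyed or furious.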

Another problem with yes or no questions is that they can be leading. Asking a respondent “do you believe X” has a suggestive effect. It implies that “X” is a reasonable, believable thing, and that makes it easier to say yes to it.

Of course, most of the questions in this Trump survey are leading in much more obvious ways. For instance, many of the questions are framed in the negative. Right from the start, this survey is not about whether the media treats the Trump administration fairly; it’s about whether you believe the mainstream media has reported “unfairly” about “our movement”:

[Image: the survey’s opening question, asking whether the mainstream media has reported “unfairly” about “our movement”]

For one thing, it’s bad practice to assume that your respondents all belong to the same “movement.” This alienates respondents who don’t feel like they’re part of the movement, and sets up a clear expected response for all involved. If you’re a member of the movement, you say “yes, we are treated unfairly,” and if you’re not, you say “no, I don’t think you folks in the movement are treated unfairly, get over yourselves.”

Here’s a better way to phrase that question:

Do you agree or disagree with the following statement? “In general, the mainstream media covers the Trump administration fairly.”

Here’s an even better way:

How would you describe media coverage of the Trump administration? Would you say it is totally fair, mostly fair, mostly unfair, or totally unfair?

You could even throw in a “not sure” option for good measure.
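
If that improved question were being loaded into survey software, the definition might look something like this hypothetical sketch (in Python; the field names are our invention, not any real platform’s format):

    # A hypothetical question record: a balanced four-point scale with the
    # same number of "fair" and "unfair" options, plus an explicit escape
    # choice so no respondent is forced to pick a side.
    question = {
        "text": "How would you describe media coverage of the Trump administration?",
        "options": [
            "Totally fair",
            "Mostly fair",
            "Mostly unfair",
            "Totally unfair",
            "Not sure",
        ],
    }

The point of the symmetry is that the wording itself doesn’t nudge respondents toward one side, which is exactly what the original survey’s framing does.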

By and large, the questions are all framed this way. Here are some other examples:

[Images: screenshots of two more survey questions framed in the negative]

The data from these kinds of questions will be totally useless, even more so than data from the average unscientific quick-poll on a news website.

Of course, we suspect the Trump administration wasn’t attempting to get academically rigorous data from this survey anyway. Still, we thought it would be valuable to demonstrate some ways to Make this Questionnaire Great Again.

