GUIDE: Understanding and reporting on opinion polls

Not all opinion polls and surveys are created equal. But all too often news websites, newspapers and radio and television stations fail to interrogate them properly. Just as a single-source news article lacks credibility, so does a news report based solely on the results of a snapshot poll or survey.

Journalists should always question how a poll or survey was done, and dig deeper. Context, additional comment and analysis are vital.

Some polls and surveys are designed to mislead. Others claim to be more representative of public opinion than they really are. And some are little more than conjecture.

Take this “prediction” by the financial services group Nomura South Africa. It stated that the African National Congress’s share of the vote in South Africa was “expected” to fall from the 65.9% it obtained in the 2009 election to 56.2% in the forthcoming national election. Nomura also “predicted” that support for the opposition Democratic Alliance would rise sharply from 2009’s 16.7% to 27% in this year’s election.

Business Day newspaper reported on the Nomura press release even though it included this remarkable quote, which called the data into question and should have set alarm bells ringing: “Nomura acknowledges that given a total lack of publicly available detailed polling data at this stage, the margin for error around these forecasts is clearly quite large.”

The Business Day report was picked up by the New Statesman. And the Mail & Guardian website ran a similar report headlined “Analyst: No rosy elections outlook for the ANC”. An important caveat was buried at the end of the article: “The UK-based analyst had not based his predictions on any detailed public polling information, as none has been made available yet.”
 

Never take the findings of an opinion poll for granted


To distinguish between scientific and unscientific polls, it is crucial to know how the respondents were selected and who selected them.

The results of a well-conducted scientific poll can provide a reliable guide to the opinions of certain segments of society, or even an entire nation, while the results of an unscientific poll tell you nothing more than what the respondents had to say.

A useful handbook aimed at journalists covering polls explains what distinguishes scientific from unscientific polls: “In a scientific poll, the pollster identifies and seeks out the people to be interviewed. In an unscientific poll, the respondents usually ‘volunteer’ their opinions, selecting themselves for the poll.”

At the very least, a news article about an opinion poll should tell you who was surveyed, when the survey was done, where it was done and how it was conducted. Were the respondents interviewed face-to-face or by phone, or did they complete a survey online? What questions were asked and how were they phrased? Were there any limitations to the survey or any problems that could skew the data?

When evaluating a poll or survey, journalists should always ask the following questions:

 

1. Who conducted the poll?


How reputable is the organisation that conducted the poll? Do they have a proven track record? Are they willing to answer questions and provide detailed information about sample sizes, the manner in which the poll was conducted and the limitations of the poll?

 

2. Who commissioned the poll and why?


If the poll was commissioned by a political party in the run-up to an election, the party might have had a specific goal in mind, and the findings could be biased because of the sampling and the questions asked. It is not uncommon for political parties to commission pre-election polls that are favourable to them and portray their opponents in a bad light.

 

3. How were the survey questions phrased?


The way in which a question is phrased can fundamentally affect the result of a survey. In some instances, the questions are specifically designed to elicit a particular result. In the United States, for example, respondents were asked if they favoured a crackdown on illegal gun sales. The vast majority responded in the affirmative, leading the survey researchers, who were working for a gun-control lobby group, to conclude that three-quarters of Americans favour gun control. A separate survey, conducted at the behest of the National Rifle Association, which champions the right of Americans to bear arms, asked respondents if they would favour a law giving police the power to decide who could or could not own a firearm. Most respondents opposed the idea. The association subsequently claimed that most Americans opposed gun control.

 

4. How were the questions asked?


Even the sequence in which questions are posed to respondents is critical. If someone is asked how a country’s flagging economy has personally affected them and then whether they think the president is doing a good job, there is a good chance that they will comment negatively on the president’s performance. But if the questions are reversed, they may answer differently.

 

5. How were the respondents chosen?


Is the sample a random one, and does it reflect, as closely as possible, the diversity and distribution of the population? Or did the respondents choose to participate in the poll?
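
To make the distinction concrete, here is a minimal Python sketch of simple random sampling, the selection method that underpins a scientific poll. The sampling frame and sample size are hypothetical stand-ins, not figures from any real poll.

```python
import random

# Hypothetical sampling frame: in practice this might be a voters' roll
# or a database of telephone numbers.
population = [f"person_{i}" for i in range(10_000)]

# The pollster selects the respondents: every member of the frame has
# the same known chance of inclusion (1,000 / 10,000 = 10%).
sample = random.sample(population, k=1_000)

print(len(sample))   # 1000
print(sample[:3])    # three randomly chosen respondents
```

In a self-selected poll there is no equivalent selection step: whoever chooses to click, phone in or fill out the form becomes the sample, and no known chance of inclusion exists.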

 

6. Who was surveyed and how representative was the survey?


An opinion poll that samples the views of a few dozen – or even a few hundred – people is unlikely to be representative of the views of a population of millions. Most polling organisations suggest that a well-chosen, random sample of around 1,000 people is the minimum required to produce accurate results; the sketch below shows why. That said, even large-scale surveys can be skewed, so the most important consideration is whether the sample is representative. If thousands of readers respond to an online survey on a popular news website, can it be said to be representative? No. At best the survey can be said to be representative of the views of the readers who took the time to complete it. Nothing more. The sample, however large, is not representative of the greater population.
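
Why roughly 1,000? For a simple random sample, the margin of error at 95% confidence shrinks with the square root of the sample size. The sketch below illustrates the standard back-of-the-envelope formula; it is not a description of how any particular polling firm weights its data.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample of
    size n, at the worst-case observed proportion p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1_000, 2_000):
    print(f"n = {n:>5,}: ±{margin_of_error(n):.1%}")
# n =   100: ±9.8%
# n =   500: ±4.4%
# n = 1,000: ±3.1%
# n = 2,000: ±2.2%
```

The formula assumes random selection, which is why a self-selected web poll with 50,000 responses can still be far less reliable than a random sample of 1,000.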

 

7. What was the margin of error?


Researchers typically allow a margin of error of up to five percentage points either way. This is particularly important when survey results are very narrow: for example, if 51% of respondents said they believed crime levels had fallen and 49% said they did not, the two-point gap falls well within the margin of error.
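
A small worked example in Python, using the hypothetical crime figures above, shows why such a narrow result is effectively a statistical tie:

```python
# Hypothetical figures from the example above.
moe = 5.0                  # margin of error, in percentage points
fell, did_not = 51.0, 49.0

fell_range = (fell - moe, fell + moe)            # (46.0, 56.0)
did_not_range = (did_not - moe, did_not + moe)   # (44.0, 54.0)

# The two intervals overlap almost completely, so the poll cannot tell
# us which opinion actually commands a majority.
overlap = fell_range[0] <= did_not_range[1] and did_not_range[0] <= fell_range[1]
print(overlap)  # True
```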

 

8. How was the data collected?


By phone, face to face or online? Beware of surveys conducted in shopping centres, at taxi ranks or any other public place, as they cannot be random or representative. Treat surveys conducted on the Internet with scepticism. The online polls that we see daily on news websites can be entertaining, but they are unscientific, skewed and tell you nothing about broader public opinion.

 

9. When was the survey conducted?


This can be important. A question about safety in Kenya before the Westgate Mall terror attack could elicit a very different answer to the same question asked after the event. Similarly, a question about police brutality in South Africa would almost certainly have elicited different answers before and after the death of protester Andries Tatane, the murder of taxi driver Mido Macia and the Marikana massacre.
 

A glossary of important terms


Exit poll

An opinion poll of voters taken as they leave a polling station. Voters are asked how they voted and the results are used in an effort to predict the outcome of an election or determine the opinions and characteristics of the candidates' supporters. In South Africa, where a national election is due to take place on 7 May, exit polls are banned under the 1998 Electoral Act, which states: “During the prescribed hours for an election, no person may print, publish or distribute the result of any exit poll taken in that election.” There is no prohibition on the publication of polls and surveys conducted before the election.

Representative sample

A sample that accurately, and without bias, reflects the population being studied. For example, a representative sample of South African youth aged between 18 and 20 would mirror the make-up of that entire age group.

Random survey

This means that everyone in the population being surveyed has an equal and known chance of being included in the opinion poll.

Margin of error

The number of percentage points by which a result may vary either way. If there is a five-point margin of error in a survey that finds the ANC will get 52% of the vote compared to the DA’s 48%, the results cannot be taken to mean that the ANC is ahead in the race. True ANC support could be as high as 57% or as low as 47%, and support for the DA could be as high as 53% or as low as 43%. The two ranges overlap, so the race is too close to call.

 

Additional reading

The Reporters Without Borders handbook for journalists during elections.

IDASA’s toolkit for reporting on public opinion surveys.

What is a survey? By Fritz Scheuren.

Twenty questions journalists should ask about poll results.

Covering polls: A handbook for journalists, published by the Media Studies Center.
