Kenyans vote on 9 August 2022 to pick a new president, lawmakers and county officials.
But the value of opinion polling is much debated in Kenya.
Following the violent aftermath of elections in 2007, an independent review said some voters claimed that opinion polls “were manufactured by partisan pollsters” to sway how they voted. It added that many Kenyans perceived the election as rigged because their preferred presidential candidate did not win, despite opinion polls predicting victory.
In 2012, a law to govern electoral opinion polls was passed.
A 2014 study by three Kenyan academics on the law’s impact found politicians felt opinion polls “were flawed, manipulative and biased exercises – even publicity stunts” if the data did not back their views.
The law requires that for every opinion poll published up to a year before an election, pollsters must “together with the results” include information such as who conducted it, how it was done, who and what was asked and the educational level of the participants.
It also bans the publication of poll results fewer than five days before an election or on voting day. Penalties include a fine of up to KSh1 million (roughly US$10,000) or a year in prison.
What did pollsters find in 2007, 2013 and 2017?
In 2013, Odinga again polled ahead of Uhuru Kenyatta, the president he is now looking to succeed. Opinion polls suggested no candidate could get the 50% plus one of valid votes needed to win outright and avoid a run-off.
Ahead of 2017 elections, pollsters were split. Infotrak, a leading pollster in Kenya, gave Odinga a slight lead of 47% against 46% for Kenyatta, a statistical tie. Another prominent pollster, Ipsos, gave Kenyatta the lead with 47% to Odinga’s 43%, predicting “a first round victory” for Kenyatta “if the turnout [was] right”.
The country’s electoral commission declared Kenyatta the winner, with 54.2% of the vote against 44.7% for Odinga. Odinga returned to court. The Supreme Court nullified the election, citing widespread “irregularities and illegalities”. Odinga lost the re-run.
“Of course, given the disputed nature of the official results in all three of these elections, it is impossible to know which survey firm’s results were closer to ‘the truth’,” Dr Tom Wolf, an independent research analyst, wrote in a May 2022 article, part of a series he is producing for the Elephant publication.
Wolf has been involved with opinion surveys in Kenya since 2005. He is currently working with Trends and Insight for Africa, a market research firm that has published polls on public sentiment ahead of the August 2022 election.
He has also sought to explain why pollsters’ 2013 data differed from election results.
Polls can get outcomes (badly) wrong
But it’s not only in Kenya that polls are scrutinised. Polling firms elsewhere have made predictions not borne out by the final result – such as for the 2016 US presidential election and the UK referendum to leave the European Union the same year.
Given this and the slew of opinion polling, what should Kenyans look out for?
The handbook has 20 questions you should ask when vetting an opinion poll, a few of which we highlight in the five questions below. The British Polling Council, which promotes transparency in polling, also has a variation on these five questions.
1) Who did the poll?
Kenya’s opinion polls law requires that published results must give details on who sponsored a poll and which research firm conducted it.
“Polls are not conducted for the good of the world,” the US handbook says. “They are conducted for a reason – either to gain helpful information or to advance a particular cause.”
So knowing who is footing the bill and why it was commissioned is a good place to begin examining an opinion poll.
2) How many people were interviewed for the survey, and how were they chosen?
Generally, the more people interviewed in a scientific poll, the smaller the likely error from the size of the sample, all other things being equal.
But be careful – more is not automatically better, as other factors come into play. The handbook explains some of these.
“In scientific polls, the pollster uses a specific method for picking respondents. In unscientific polls, the person picks himself to participate.”
In a reliable survey, the chance of selecting each person in the target population is also known – what is referred to as a random sample.
This means that the results of the sample can be said to reflect the views of the entire population. So interviews of 1,000 or so well-chosen people can give a credible insight into the views of 50 million Kenyans.
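The relationship between sample size and sampling error can be illustrated with the standard formula for a simple random sample. This is a minimal sketch, not taken from the handbook; the function name `margin_of_error` and the chosen sample sizes are our own, and real pollsters adjust further for weighting and design effects.

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case sampling error for a simple random sample of n people,
    at roughly 95% confidence (z = 1.96), assuming a 50/50 split (p = 0.5).
    Note the population size does not appear: 1,000 respondents give about
    the same precision whether the population is 5 million or 50 million."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    # prints roughly 4.4, 3.1 and 2.2 percentage points respectively
    print(n, round(100 * margin_of_error(n), 1))
```

Doubling the sample from 1,000 to 2,000 only shrinks the error from about ±3.1 to ±2.2 points, which is why well-run national polls rarely interview many more than 1,000 or 2,000 people.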
Where the sample is drawn from also matters. For example, a poll on voting intentions in Kenya would interview registered voters.
Only if the interviewees were chosen from all Kenyan adults would their views represent the opinions of all these adults.
If you phone 1,000 Kenyans, your sample would be limited to those who own mobile phones. Not all Kenyans have mobile phones, will pick up, or will answer survey questions.
And while this sample could work if these Kenyans resembled the general population, you may have to rethink if your survey is aimed at a certain demographic, such as the poor.
3) What questions were asked?
A question about perceptions of Kenya’s Independent Electoral and Boundaries Commission (IEBC) would have to be masked to remove bias, Ambitho said.
“If we say ‘do you think IEBC is not a trustworthy body’, then we have asked a biased question because the likelihood then is I would start thinking about whether IEBC is trustworthy or not,” she said.
Rather, the question to ask is: “On a scale of 1-10, how would you rate your trust or confidence in the following institutions that are mandated to ensure that we have free and fair elections?”
How the sample was interviewed also matters – was there a language barrier, or was the interviewer perceived differently, for example culturally? And in what order were the questions asked?
4) When was the poll done?
Political events can have a dramatic impact on opinion poll results, a factor polling firms in Kenya have been keen to highlight.
Ambitho said the data her firm released, especially on the popularity of candidates, was a “moving variable”.
The results are not predictive and are only valid on the day the data was collected, she said.
“Any poll you see today is just a number that is giving you information for that specific moment in time. We are very clear that that information or those statistics can change even within 48 hours depending on what happens.”
Pollsters should not be held to account if polling data differs from the election results, Ambitho added.
“The only time that you can actually hold us to a poll is when we tell you this is a predictive poll and it is predictive in its nature only because with some sort of certainty [we] know that very little can change between that window and the voting time.”
5) What was the margin of error?
Kenyan news publications rarely give the margin of error when reporting opinion poll results.
Researchers allow a margin of error, given in percentage points either way. This is particularly important when survey results are narrow.
Two scientific polls, say of 1,000 Kenyans, could yield slightly different results. This range of possible results, the handbook says, is called the error due to sampling, often called the margin of error.
To grossly simplify this, if the gap between two candidates, say Ruto and Odinga, is within the error margin, you should not say one candidate is ahead of the other.
Only if the gap is equal to or more than twice the margin of error – and there are only two candidates and no undecided voters – should you say one is ahead of the other. Otherwise, say the race is “close” or that “there is little to choose” between the candidates.
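The rule of thumb above can be sketched in a few lines. This is an illustrative helper of our own, not a pollsters’ standard; the function name `call_race` and its inputs are hypothetical, and the 2017 figures used in the example are the Infotrak and Ipsos numbers quoted earlier.

```python
def call_race(lead_pct, trail_pct, margin_pct):
    """Hypothetical helper: only declare a leader when the gap between the
    top two candidates is at least twice the poll's margin of error."""
    gap = lead_pct - trail_pct
    if gap >= 2 * margin_pct:
        return "leader ahead"
    return "too close to call"

# Infotrak's 47% vs 46% with a typical ~3-point margin: a statistical tie.
print(call_race(47, 46, 3))   # too close to call
# Ipsos's 47% vs 43% with the same margin: still short of twice the margin.
print(call_race(47, 43, 3))   # too close to call
```

On this rule, neither of the 2017 polls cited above justified a headline saying one candidate was ahead – which is exactly why reporting the margin of error matters.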
Polls can’t be fact-checked, but can be vetted
In the end, any opinion poll has a lot of assumptions embedded in its design. Its accuracy is dependent on the sampling, the weighting of the sample, the framing of the questions, and changes in the external environment.
So while it is impossible to fact-check an opinion poll, it is useful to understand how to vet it so as to make sense of the political conversation it invariably generates.