How can we understand and trust the results of polls?

In recent months and years, the Thai public has been exposed to a growing number of opinion poll results. There is a clear appetite for those polls from policy and decision makers, the media and the general public. Indeed, public opinion can be a critical force in shaping and transforming society, giving the general public an opportunity both to have a voice and to hear the opinions of fellow citizens.

There is also a growing level of scepticism, particularly when results look too positive or seem to back the strategy or interests of a particular group.

In the end, citizens are split and simply do not know if they can trust poll results. If they don't trust the results, they may also be discouraged from taking part in opinion polls.

We believe that opinion polls can play a positive role in a democratic society as long as the public and the media are aware of some limitations and get some ways to evaluate if the poll has been conducted, interpreted and reported properly.

A bit of theory: As described by Robert Worcester, the founder of MORI and one of the foremost experts in opinion research, opinion polling is the marriage of the science of sampling and the art of asking questions.

You do not need to drink a full bottle to know the taste of wine. Similarly, you do not need to interview the whole population of a country to assess public opinion. The secret is to define a representative sample and it is more about the quality of sampling than the quantity of respondents.

To keep it simple, the sample needs to reflect the profile of the group being surveyed. There are two main methods. The first is "random" sampling, the second "quota sampling". Both methods aim to provide a representative sample of the target population. Some polls reported in the press are not conducted by professionals and do not follow scientifically valid approaches. Phone-in polls conducted by television programmes or links on websites asking viewers to participate in a survey suffer from the same problem -- no control over who is responding. This often means you only get the opinions of those who feel really strongly about an issue.

A large sample without control of design can be misleading. In a famous historical case, a well-established US magazine, the Literary Digest, polled two million of its subscribers in 1936 and concluded that Republican challenger Alf Landon would triumph over President Franklin D Roosevelt, the Democratic incumbent. The sample size was huge but not representative. A young rival pollster, George Gallup, made his own prediction before the magazine issued its poll: he said the Literary Digest would get it all wrong. Gallup used a random poll sample of 50,000 people. Roosevelt won with a convincing 63% of the vote and the Literary Digest was out of business the next year.

A small-sample survey based on a scientific sampling approach is better than a large-sample self-selecting survey.
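The Literary Digest lesson can be illustrated with a simple simulation. The numbers below are hypothetical (a population of one million voters with 55% true support); the point is that a small random sample lands near the truth, while a huge self-selecting sample drawn mostly from one opinionated group does not.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 1,000,000 voters; 55% support candidate A (coded 1).
population = [1] * 550_000 + [0] * 450_000

# Scientific approach: a small simple random sample of 1,000 people.
random_sample = random.sample(population, 1_000)
random_estimate = sum(random_sample) / len(random_sample)

# Self-selecting approach: a huge sample of 200,000 drawn only from a pool
# dominated by opponents -- analogous to the Digest's subscriber list.
biased_pool = [0] * 450_000 + [1] * 50_000   # only 10% supporters respond
biased_sample = random.sample(biased_pool, 200_000)
biased_estimate = sum(biased_sample) / len(biased_sample)

print(f"True support:            55.0%")
print(f"Random sample (n=1,000): {random_estimate:.1%}")   # close to 55%
print(f"Biased sample (n=200k):  {biased_estimate:.1%}")   # close to 10%
```

Despite being 200 times larger, the self-selecting sample misses the true figure by a wide margin, while the small random sample comes within a few percentage points.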

The result of an opinion poll is not an absolute number. It has to be interpreted as an estimate within a margin of error. The precision depends mainly on the sample size and on the percentage observed. Assuming the sample design is appropriate, a result of 50% from a random sample of 1,000 members of the Thai population has to be interpreted as 50% +/- 3.1%, at a 95% confidence level.
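The 3.1% figure comes from the standard formula for the margin of error of a proportion under simple random sampling, z * sqrt(p(1-p)/n) with z = 1.96 at the 95% level. A minimal sketch (the function name is ours, not from any polling standard):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for a proportion p
    observed in a simple random sample of size n (95% level by default)."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is p = 0.5: a 50% result among n = 1,000 respondents.
moe = margin_of_error(0.5, 1000)
print(f"50% +/- {moe:.1%}")  # -> 50% +/- 3.1%
```

Note that quadrupling the sample size only halves the margin of error: at n = 4,000 the same 50% result carries a margin of roughly +/- 1.5%.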

This can be important in the case of a close political contest or a topic splitting the population. However, there is a more insidious aspect linked to the wording of survey questions. What you ask is what you get.   

Questions should be simple, clear and precise. They should not deliberately or inadvertently lead the respondent to answer in a particular way. They should not assume knowledge that the respondent may not have. Opinion polls are often conducted in times of crisis and researchers need to be sensitive to respondent concerns and their willingness and ability to answer certain questions.

The impact of the wording of the questions on approval ratings is well documented and it must be carefully considered when interpreting the results of polls. 

One group of US poll agencies led by Gallup traditionally asked respondents if they approve or disapprove of the way the US president is handling his job as president.

Another group, including GfK and Ipsos, preferred to ask respondents if they approve, disapprove, or have mixed feelings about the way the president is handling his job as president. Then, in the case of mixed feelings, they ask respondents whether they lean more towards approval or disapproval.

Results are, on average, 4 percentage points more positive with the second wording.

On the issue of happiness, one agency asked respondents:

Are you very happy?

Are you pretty happy?

Are you not too happy?

While another preferred:

Are you very happy?

Are you fairly happy?

Are you not very happy?

The second scale generates 15% more happiness!

The media needs to be careful when reporting poll data. Researchers and those publishing survey data must make available sufficient information to enable the public and other stakeholders to evaluate the results.

Eight questions have to be asked:

Who conducted the poll?

Who paid for the poll and why was it done?

What group of people is the survey intended to represent?

How many people were interviewed for the survey?

How were those people chosen?

When was the poll done?

How were the interviews conducted (what interviewing method was used)?

Has the data been weighted to correct any imbalances in who was interviewed?
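The last question, weighting, deserves a brief illustration. With hypothetical numbers (a sample that over-represents women relative to a 50/50 population), each group's answers are reweighted to its true population share before results are reported:

```python
# Hypothetical raw sample: 700 women, 300 men, but the population is 50/50.
sample = {
    "women": {"n": 700, "approve": 420},  # 60% of women approve
    "men":   {"n": 300, "approve": 120},  # 40% of men approve
}

# Unweighted, women dominate the result.
unweighted = (420 + 120) / 1000  # 54.0%

# Weight each respondent so their group counts as its population share (50%).
w_women = 0.5 / 0.7  # each woman counts a bit less
w_men = 0.5 / 0.3    # each man counts a bit more
weighted = (w_women * 420 + w_men * 120) / 1000  # 50.0%

print(f"Unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```

After weighting, the result is simply the average of the two groups' approval rates, as it would be in a balanced sample.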

The media should also report the exact questions and scales used in a survey. Newspapers reported the results of a recent poll conducted by the National Statistical Office, claiming that 99.3% of Thais were satisfied with the government's overall performance. Another result of the same poll mentioned 59.4% highly satisfied in the Northeast. The poll clearly used different levels of satisfaction (such as highly satisfied and rather satisfied), which should be reported, as they reflect quite different attitudes.

We should not expect any decline in the number of polls in the future. A total of 17,058 polls were published during the 2012 US election. There will be many more coming out this year.

We have to get ready and develop a critical eye for the methods and wording used. After all, polls may not be perfect, but they are the best, or least bad, way of measuring what the public thinks.


Jerome Hervio, President of TMRS; Craig Griffin, representative of ESOMAR in Thailand.
