Surveys and their subliminal bias

Do you believe in surveys? Can surveys be trusted?

For the longest time, I couldn't make heads or tails of the conflicting preference surveys done by Kantar Media Philippines and the Nielsen Company (Philippines) for broadcast companies ABS-CBN and GMA Network, which, before the cancellation of the former's franchise, fought tooth and nail for viewer ratings. Nielsen often placed GMA on top of its nationwide ratings, while Kantar favored ABS-CBN.

Why the two ratings firms couldn't reconcile their respective surveys is beyond me, considering that both were presumably using the same polling science in their field of work. Naughty minds couldn't help but cast aspersions on the reliability of their findings. Were these ratings firms conducting objective, non-partisan surveys, or were they in the employ of the network that their survey favored? In either case, the results of their respective surveys did not serve the viewers whose loyal patronage the two networks were seeking.

I raise this point in connection with what many people thought was an improbable 91-percent approval rating for President Duterte in the September 14-20, 2020 survey conducted by Pulse Asia, reportedly among 1,200 adults and with a margin of error of only 2.8 percent. The result was hard to believe because the survey was conducted amid a growing perception that the government had dropped the ball in handling Southeast Asia's worst coronavirus outbreak, and in the thick of a months-long community lockdown, the longest in the world. The Philippines, which to date has recorded the most Covid-19 infections in the region, plunged into recession in the second quarter of the year and now faces its deepest economic contraction in decades.

Did Pulse Asia err? What methodology was used in coming up with such a conclusion? Unfortunately, pollsters keep their methods close to their chests, and we can only speculate.

Statistics is valid math. It is how manufacturers make sure their products are safe for the public without testing every single one. Mars Inc., for instance, can't check every M&M chocolate candy that leaves its machines, so it tests samples from each manufactured batch and uses statistics to conclude that the whole batch meets its standards and is safe to eat. Counting and record-keeping have been around for thousands of years, but statistics as a science took a great leap forward starting in the 18th century and had many of its principles well established by the 1930s. By testing around 2,000 samples, a manufacturer can be reasonably confident that the million, or even 10 million, products it puts on supermarket shelves are safe to eat.
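To make that arithmetic concrete, here is a minimal sketch in Python of the textbook margin-of-error formula for a sampled proportion (this is standard statistics, not any manufacturer's or pollster's actual procedure):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion estimated from a
    simple random sample of n items or people. z = 1.96 gives 95%
    confidence; p = 0.5 is the worst case (largest variance)."""
    return z * math.sqrt(p * (1 - p) / n)

# The batch or population size never enters the formula, so 2,000
# samples are as informative for 10 million products as for 1 million.
print(f"n = 2,000 -> about {margin_of_error(2000):.1%} either way")
print(f"n = 1,200 -> about {margin_of_error(1200):.1%} either way")  # ~2.8%
```

Notably, a sample of 1,200 does reproduce the 2.8-percent margin of error Pulse Asia reported. The formula, however, assumes the sample is truly random, which is precisely where opinion polls struggle, as discussed next.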

The principles of statistics were eventually applied to opinion surveys to create a snapshot of people’s views, opinions or preferences. At the heart of any valid opinion survey is the same math that works in a manufacturing plant: random sampling. The challenge is that people aren’t M&Ms. While a manufacturer has control over where to get the test samples, an opinion pollster can’t line up people on a conveyor belt and randomly choose survey participants. They have to find a way to get their samples, which is not an easy task.

Lists of people (such as a voters' list) can be incomplete, unreliable and out of date. Finding people and securing their permission is not easy either, especially during a pandemic. So pollsters really have to go out of their way to find a set of respondents who can stand in for 110 million Filipinos. It is not clear whether a standard sample-size calculation, such as Slovin's formula, was even used here. To many, polling a mere 0.0011 percent of the entire population already looks like a red flag, though in sampling theory it is the absolute sample size, not its share of the population, that determines precision. You'll often see online surveys using people who simply signed up to participate. That, too, is questionable, given how easily one person can create multiple accounts.
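For reference, Slovin's formula mentioned above is a one-line calculation; a quick sketch, assuming the 110-million population figure cited in this column:

```python
def slovin(population, error):
    """Slovin's formula: recommended sample size for a given
    population and tolerated margin of error (as a fraction)."""
    return population / (1 + population * error ** 2)

N = 110_000_000  # approximate Philippine population, per the column
print(round(slovin(N, 0.05)))   # ~400 respondents at a 5% margin of error
print(round(slovin(N, 0.028)))  # ~1,275 respondents at a 2.8% margin
```

For a large population, the result approaches 1/e², which is why national polls everywhere survey only hundreds or low thousands of respondents.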

The second challenge is asking the right question in a way that elicits a truthful answer. There is no way to chemically test whether a person is a Trump supporter or a Biden supporter, or approves of President Rodrigo Duterte. People's answers can be shaped by the environment, whether it be fear, the presence of government representatives during the survey, or the promise of compensation in cash or in kind.

Surveys can fail on both fronts, no matter how well-intentioned or meticulously designed they are, as we saw with Brexit and the 2016 US presidential election.

In its most recent survey, Pulse Asia did not divulge how many randomly selected people refused to participate, or which survey questions were left unanswered. This would have given the general public an idea of how its sampling and results were affected by fear of the President, or of landing on the wrong side of the administration, especially among respondents dependent on food or cash aid.

The bigger challenge may be the questionnaire items. How reliable and valid were the questions asked? Pulse Asia asked people whether they approved of the President's performance from June to August, a very narrow time frame. People were terrified of the virus and, for them, simply being alive probably outweighed any misgivings about the President's performance. Thus, they may have been only too willing to say that they were fine with the administration's record.

Could this crisis have been the reason for Duterte's over-the-top trust rating? Answering a survey would certainly not be at the top of anyone's priority list. Putting food on the table is what drives people to scrounge for whatever sustenance they can get, and with the government providing them with relief goods, would they bite the hand that feeds them?

However you look at it, talk is cheap. Misrepresenting yourself as a survey respondent carries no immediate or dire consequence. Worse, there is such a thing as the Hawthorne effect: people change their behavior, including their answers, when they know they are being observed. Because you cannot force people to be honest, it becomes doubtful whether surveys can really reflect what people truly believe and value. Sadly, there is ample evidence that surveys are often unreliable and give a slanted picture of the real situation (Kantar and Nielsen come to mind).

Beliefs and partialities are hidden by default. We haven't come close to creating a gadget, or an app if you will, that can peek into anyone's mind. So we resort to what seems to be the next best thing: simply asking people what they believe and value. Pollsters call this survey research. But it only makes sense if people are honest, and we are all capable of faking what we feel, sometimes for valid reasons. What is worse, surveys during an election period can become a self-fulfilling prophecy, rightly or wrongly spawning subliminal bias.

For comments and suggestions, e-mail me at mvala.v@gmail.com
