Society falls prey to disinformation

At the end of September, a seminar on fake news and disinformation will be held in Bangkok by the Southeast Asian Press Alliance (Seapa), a regional NGO that promotes press freedom. (Creative Commons)

As Facebook executives appeared last Thursday before a Senate hearing in the United States to defend the world's most widely used communication platform against accusations of promoting disinformation, a vigorous debate about fake news was taking centre stage at the Communication Policy Research South conference in Maputo, Mozambique, funded by the Canada-based International Development Research Centre (IDRC).

Less than two weeks earlier, another symposium on disinformation was organised in Jakarta by Digital Asia Hub, a Hong Kong-based think tank founded by the Berkman Klein Centre for Internet and Society at Harvard University. At the end of this month, another seminar will be held in Bangkok on the same theme by the Southeast Asian Press Alliance (Seapa), a regional NGO that promotes press freedom.

Over the past year or so, no topic has been more hotly pursued in academic and professional seminars on information than fake news, hoaxes, misinformation and disinformation.

In November 2017, the Council of Europe published its Information Disorder report, which pointed out how the discourse on fake news conflates three notions: misinformation, disinformation and malinformation, all of which constitute the growing "information disorder". This disorder is shaped largely by the open, primarily user-generated platforms of the online world.

Disinformation has received the most emphasis in academic and policy circles. Unlike misinformation, which is false but spread by people who do not know it is false, disinformation is deliberately and usually covertly spread in order to influence public opinion or obscure the truth. Malinformation, on the other hand, is based on genuine information but is spread with the explicit intent to cause harm.

While the world, largely prompted by the "fake news" scandals of the 2016 US presidential election, appears to have only recently awoken to disinformation, it is hardly a new phenomenon.

Political rumours were common in the 1800s, while dezinformatsiya, as the Russians call it, was a key tactic used during the Cold War to confuse the public. Similar Russian operations reportedly played an important role in the more recent Russo-Georgian war and the Crimea crisis, albeit with more advanced technologies such as bots and automated trolls.

Apart from its falseness, disinformation may also entail withholding some of the facts or covertly emphasising only one way of looking at them, making it akin to propaganda or public relations spin. In some contexts, hate speech is also construed as disinformation.

What sets disinformation apart in this day and age is the speed with which it travels and the sophistication of its presentation, which can make it seem so real that even professional journalists are duped.

Worse yet, in today's digital world, where people filter their information exposure through Google and Facebook, the algorithms these platforms use tend to favour the most-viewed content without verifying its accuracy, thereby amplifying disinformation and its impact. Intensifying polarisation and inciting violence were the two main charges the US Senate Intelligence Committee flagged at last week's hearing with Facebook.

Disinformation can have more far-reaching consequences where media literacy is low and in information-saturated periods, such as elections or other events that capture the public's attention.

In the recent 18-day rescue of the young footballers from the Tham Luang cave in Mae Sai, Chiang Rai, for instance, a considerable amount of misinformation and disinformation went viral on both traditional and online media as the world watched in awe, courtesy of the 1,500-plus international reporters who flocked to the area.

Prior to the boys' discovery, even the Thai Public Broadcasting Service (Thai PBS), Thailand's only public TV channel, misinformed the public through widely shared social media reports that the US could use military satellite-imaging technology to help locate the boys. A local space authority later debunked this.

Later, more disinformation went viral, claiming the trapped boys had been kidnapped by drug lords because the Tham Luang cave was allegedly a drug transportation route.

This fake news was promoted by an anti-government, anti-coup Facebook page that accused the military government of a major cover-up. It rode on a conspiracy theory that a deal was being cut between a powerful military figure and the drug lords in exchange for the kidnapped boys. Within the same day, a police commissioner gave a press conference dispelling the news as groundless.

Prime Minister Prayut Chan-o-cha was also hit with fake news when he was quoted as saying people could fill their car tanks with water if they could not afford oil. After announcing the news was fake, the authorities came down hard on those who shared the news, charging them with violations of the computer crime law and sedition.

Imposing legal sanctions on people who share disinformation is draconian and not a sustainable measure, given the prevalence of disinformation and the lack of malicious intent on the part of most sharers, the majority of whom do not know the information is false.

To fight disinformation, experts are proposing a multi-sector approach involving self-regulation by information intermediaries, improving information literacy, and empowering high-quality news sources. The last of these has been championed by veteran journalists now embattled by an economic crisis.

Another, more provocative recommendation is to impose stricter legal accountability for third-party content on platforms like Facebook and Twitter. This option would need to be carefully designed so as not to curb free speech.

However, these intermediaries also have a moral responsibility toward their users. Greater transparency and awareness are needed with respect to how personal data is handled and valued on their platforms. While this can probably only be achieved through regulatory intervention, it remains to be seen how far the relevant policymakers and regulators will go beyond policy lip service.

Pirongrong Ramasoota, PhD, is a professor of communication at Chulalongkorn University and a senior research fellow at LIRNEasia.
