From Nigeria to Myanmar, the spread of ‘fake news’ has led to social unrest and mob violence, polarising views and exacerbating enmities. In this context, there is a real danger that fake news will permeate and influence political discourse in India’s upcoming elections.
With about 900 million people eligible to vote, the elections are both a bureaucratic and logistical feat. Considering there are some 560 million Internet users in India, there is a great overlap between those who will be casting their vote and those who have access to at least one form of social media.
The issue stems from the fact that the leap in access to social media has not always been accompanied by equal levels of digital literacy, leaving a large percentage of Indians dependent on WhatsApp and Facebook for news and information. This in turn could hinder their ability to differentiate between legitimate political discussions and intentionally incendiary misinformation circulated via social media.
In preparation for the elections, tightened regulations have pushed foreign Internet companies operating in India to implement a number of measures to limit the reach and impact of malicious disinformation. This includes deleting fake user accounts, curbing the virality of fake news, increasing transparency in paid advertisements, checking online abuse, and launching educational campaigns to raise awareness.
How companies are fighting misinformation
Facebook, which counts India as its largest market by number of users, has already heightened its election efforts, with 40 teams working on the polls scheduled to begin in April. In December 2018, Facebook began an offline verification process for all political advertisers. To increase ad transparency, it will place electoral ads in India in a searchable online library containing contact information for ad buyers or their official regulatory certificates, and will verify individuals buying political ads by matching their listed names against government-issued documents.
Starting in March, the company will publish a weekly ad archive report for India listing the number of, and amount spent on, ads related to politics and issues of national importance. Facebook users can also see each ad’s creative, its start and end dates, and performance data, including the range of impressions, the ad-spend, and the age, gender and location across India of the ad’s viewers.
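Facebook has not published a formal schema for these archive entries. Purely to make the listed fields concrete, one could model an entry roughly as below; every name, type and field here is an assumption for illustration, not Facebook’s actual Ad Library format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, Tuple

@dataclass
class AdArchiveEntry:
    """Hypothetical model of one political-ad archive entry (illustrative only)."""
    advertiser_name: str                    # verified ad buyer
    creative_url: str                       # link to the ad's creative
    start_date: date
    end_date: date
    spend_range_inr: Tuple[int, int]        # spend shown as a range in this sketch
    impressions_range: Tuple[int, int]      # impressions are reported as a range
    viewer_age_breakdown: Dict[str, float] = field(default_factory=dict)     # e.g. {"18-24": 0.31}
    viewer_gender_breakdown: Dict[str, float] = field(default_factory=dict)  # e.g. {"female": 0.46}
    viewer_region_breakdown: Dict[str, float] = field(default_factory=dict)  # share of viewers per state
```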
As part of its election integrity efforts, Facebook created a training module titled the “Facebook Cyber Security Guide for Politicians and Political Parties”, which was shared with over 850 policy-makers in India including Parliamentarians, Chief Ministers of states, and Chief Electoral Officers appointed by the Election Commission of India (ECI). Facebook is now planning to set up an operations centre in Delhi to monitor election content on its platform. To reduce the virality of fake news, the company has partnered with independent fact checkers to help identify false news across languages such as English, Hindi, Bengali, Telugu, and Malayalam.
WhatsApp, meanwhile, is using artificial intelligence (AI) to monitor and flag suspicious activity, such as bulk registration of similar accounts and accounts that send a high volume of messages within a short span of time. Used by an estimated 200 million people in India, WhatsApp has also warned Indian political parties that their accounts could be blocked if they abuse the platform during the election campaign.
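WhatsApp has not disclosed how its detection works, and end-to-end encryption means it can rely only on behavioural metadata. Purely as an illustration of the two signals described above, a rule-based approximation (not a learned model, and not WhatsApp’s implementation; all thresholds and names are hypothetical) might look like this:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical thresholds, chosen only for illustration.
BULK_REGISTRATION_WINDOW = timedelta(minutes=10)  # window in which sign-ups count as "bulk"
BULK_REGISTRATION_LIMIT = 20                      # similar accounts created within that window
MESSAGE_RATE_WINDOW = timedelta(minutes=1)
MESSAGE_RATE_LIMIT = 100                          # messages per minute from one account

def flag_bulk_registrations(registrations):
    """registrations: iterable of (timestamp, number_prefix) for new sign-ups.
    Returns the prefixes that saw a burst of similar registrations."""
    by_prefix = defaultdict(list)
    for ts, prefix in registrations:
        by_prefix[prefix].append(ts)
    flagged = []
    for prefix, times in by_prefix.items():
        times.sort()
        for i, start in enumerate(times):
            burst = [t for t in times[i:] if t - start <= BULK_REGISTRATION_WINDOW]
            if len(burst) >= BULK_REGISTRATION_LIMIT:
                flagged.append(prefix)
                break
    return flagged

def is_high_volume_sender(message_timestamps):
    """message_timestamps: sorted datetimes of one account's outgoing messages.
    True if the account exceeds the rate limit inside any single window."""
    for i, start in enumerate(message_timestamps):
        in_window = [t for t in message_timestamps[i:] if t - start <= MESSAGE_RATE_WINDOW]
        if len(in_window) >= MESSAGE_RATE_LIMIT:
            return True
    return False
```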
WhatsApp launched an integrated campaign in December 2018 – in multiple languages and across television, print, online media and radio – to help prevent the spread of fake news. Titled “Share Joy, Not Rumours”, the campaign depicts real scenarios of dangerous rumours spreading through family and school groups. Prior to this, WhatsApp had curtailed the number of times a message could be forwarded and introduced a label that clearly distinguishes a “forwarded” message from an original one.
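The forward cap and the “forwarded” label are the simplest of these interventions to reason about. As a rough sketch (the cap value, field names and function are all hypothetical, not WhatsApp’s code), the behaviour amounts to:

```python
FORWARD_LIMIT = 5  # illustrative cap on how many chats a single forward can reach

def forward_message(message, target_chats):
    """Mark a message as forwarded and refuse to send it to more chats than the cap allows.
    `message` is a plain dict in this sketch."""
    if len(target_chats) > FORWARD_LIMIT:
        raise ValueError(f"A message can be forwarded to at most {FORWARD_LIMIT} chats at a time")
    forwarded = dict(message)
    forwarded["is_forwarded"] = True                                   # distinguishes it from an original message
    forwarded["forward_count"] = message.get("forward_count", 0) + 1   # how many hops the content has made
    return [(chat, forwarded) for chat in target_chats]
```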
Twitter and Google have both updated their ads policies to increase transparency in the way political organisations and personalities get their messages across on their respective platforms. In November 2018, Twitter launched a social initiative called #PowerOf18 to encourage Indian youth to contribute to public debate and civic engagement during the election season. Additionally, Twitter’s Ads Transparency Centre (ATC) for India went live in March to provide details on paid ads and messages – including billing information, ad-spend, and impressions data for each sponsored tweet.
Google updated its election ads policy to include an India-specific political advertising transparency report and a political ads library, providing information on political ad-spend on Google platforms in India. Advertisers running election ads will be required to provide a “pre-certificate” issued by the ECI, or by anyone authorised by it, for each ad they wish to run. Google will further verify the identity of advertisers and, following confirmation, display a “Paid for by” disclosure using the information provided during the verification process.
How the government curbs the impact of fake news
The question of who is at fault for the spread of misinformation is contentious and politically charged – politicians call on internet companies to trace the source of malicious content, while these companies maintain that they cannot access the encrypted information sent via their platforms and, in turn, blame political parties for misusing social media during the election period. Nevertheless, it is increasingly clear that fake news can, and has, become a political tool, able to tilt public sentiment and opinion as quickly as it takes to type a distorted or uninformed headline.
In this context, government bodies have taken proactive steps to not only curb the use of online platforms by malicious groups, but also improve accountability measures that punish those who intentionally subvert democracy.
The ECI announced that the model code of conduct, which came into effect with the announcement of the 2019 Lok Sabha polls, will apply to social media networks as well. Intermediaries will have to ensure that all political ads published on their platforms are certified by the Media Certification & Monitoring Committees (MCMC), disclose ad expenditure to the ECI, and adhere to the “silence period” that comes into effect 48 hours before the polls. Simultaneously, the ECI launched the cVIGIL app to allow citizens to report any violation of the poll code, including the spreading of fake news.
The ECI also instituted a requirement for candidates to give details of their social media accounts. They must, for instance, divulge the amounts spent on social media campaigning, and these expenses will be counted towards their election expenditure limit. In the absence of specific laws to regulate social media, it has been proposed that intermediaries be asked to respond effectively to misuse of their platforms and remove damaging content within two to three hours. Intermediaries have assured the ECI that they will appoint grievance officers to take necessary action against any poll-related violations. Additionally, the Internet and Mobile Association of India is formulating a code of ethics for intermediary online platforms.
The Ministry of Electronics and Information Technology (MeitY), in its draft Intermediary Guidelines 2018, proposes making it mandatory for companies to remove “unlawful” content, including material which affects the “sovereignty and integrity of India”, within 24 hours. It further stipulates that companies with more than 5 million users set up a local office and appoint a nodal person of contact to co-ordinate with law enforcement agencies.
It remains to be seen, however, whether these efforts are “too little, too late”. Launching pilot projects several months before the elections would have provided invaluable insights into the measures’ strengths and weaknesses. For example, apart from monitoring accounts directly linked to political parties, there is no clear mechanism for checking accounts that propagate political narratives but cannot be traced back to specific candidates.
People tend to share news without fact-checking it, and to trust news that affirms their beliefs – two factors that do not necessarily make users malicious per se. There is also the practical difficulty of identifying threats in real time, which makes a time gap between the detection and the removal of fake news items hard to avoid. A large-scale test during the recent state elections would have helped both the government and internet companies improve the way they monitor the volume and impact of fake news.