Newsreel Asia

Report: Meta Approves Political Ads Inciting Violence in India

It Allowed Inflammatory Ads During Ongoing 2024 Lok Sabha Election

Newsreel Asia Insight #228
May 21, 2024

Meta, the owner of Facebook and Instagram, approved AI-manipulated political adverts that spread disinformation and incited religious violence during India’s election, according to a report shared exclusively with the U.K. newspaper The Guardian. The report reveals that Meta allowed inflammatory ads targeting India’s Muslim minority, containing hate speech, disinformation and calls for violence.

The adverts, submitted by India Civil Watch International (ICWI) and Ekō, were designed to test Meta’s ability to detect and block harmful political content. They included slurs such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned.” One ad, placed next to a Pakistan flag, falsely claimed an opposition leader wanted to “erase Hindus from India.”

The adverts were submitted after the Lok Sabha election began on April 19 and before its scheduled conclusion on June 1. The Bharatiya Janata Party (BJP) has been accused of using anti-Muslim rhetoric for votes, with Prime Minister Narendra Modi referring to Muslims as “infiltrators” at a rally.

The report shows that Meta approved 14 of the 22 adverts, which were submitted in English, Hindi, Bengali, Gujarati and Kannada. Three more were approved after minor tweaks. The researchers removed all the approved adverts before they could be published.

Meta’s systems failed to detect the AI-manipulated images, contradicting the company’s commitment to prevent the spread of AI-generated content during the election, the study notes.

The report shows that Meta failed to recognise these adverts as political, even though they targeted political parties opposing the BJP.

Under Meta’s policies, political adverts must undergo a specific authorisation process before approval, yet only three were rejected on this basis. The oversight meant the adverts could have violated India’s election rules, which ban political advertising from 48 hours before polling begins and throughout voting.

The investigation involved setting up new Facebook accounts and submitting ads with disinformation narratives prevalent in India’s socio-political landscape. Each ad was accompanied by manipulated images generated using AI tools like Stable Diffusion, Midjourney and Dall-E. The adverts targeted contentious districts during the election “silence period” – a 48-hour window before and during voting phases when election-related advertising is prohibited.

A previous report by ICWI and Ekō found that “shadow advertisers” aligned with political parties, particularly the BJP, paid vast sums to disseminate unauthorised political adverts on Meta’s platforms. These often endorsed Islamophobic tropes and Hindu supremacist narratives. Despite these findings, Meta denied that most of these adverts violated its policies.

Meta has publicly touted its investments in content reviewers and safety measures. However, civil society groups, whistleblowers and experts have long criticised the company’s moderation practices, particularly in non-English languages, as well as its alleged political bias toward the BJP. Researchers found that Meta’s automated ad review system had significant vulnerabilities, allowing bad actors to exploit the platform’s algorithms with ease.

The report concludes with recommendations for Meta to address these issues, including adopting an election silence period, ensuring transparency in political advertising, banning shadow advertisers and enhancing fact-checking processes. It also calls for proportionate resource allocation and shutting down recommender algorithms that use personal data for profiling.

A previous study by Verité Research examined hate speech and disinformation on social media platforms, particularly Facebook, YouTube and Twitter, focusing on the Sri Lankan context, though its relevance extended beyond Sri Lanka to the rest of South Asia and potentially carried global implications.

It proposed a “reputational-cost approach,” a strategy that emphasised the importance of societal response and public backlash in influencing social media platforms’ practices, particularly in content moderation. It suggested that platforms could be compelled to improve their content moderation efforts if they faced a significant risk of reputational damage, which could adversely affect their commercial goodwill and financial valuation.

The approach also highlighted the role of civil society in creating a societal architecture that could generate reputational costs. This would involve identifying and highlighting content moderation failures, quantifying and ranking the performance of social media platforms in content moderation, and communicating these issues to build global awareness and networks.