Newsreel Asia

Compelling Social Media to Better Moderate Hate Speech

Without Relying on Harsh Legislation

Newsreel Asia Insight #153
March 7, 2024

A study by a think tank in Sri Lanka suggests methods to encourage social media platforms to undertake content moderation more responsibly, thus tackling hate speech and disinformation more effectively. The study offers an alternative to stringent legislative measures, which frequently result in excessive government control over content regulation.

The study, “Better Moderation of Hate Speech on Social Media: A Sri Lankan Case Study for Reputational-Cost Approaches,” by Verité Research examines hate speech and disinformation on social media platforms, particularly Facebook, YouTube and Twitter, focusing on the Sri Lankan context. Its relevance, however, extends beyond Sri Lanka to South Asia and potentially to other regions.

The study describes content moderation as the process through which social media platforms monitor and assess published content to determine its potential harm. It defines hate speech as expressions that instil hatred against individuals or groups based on their identity, urging others towards harm and inciting violence. It defines disinformation as false information that is intentionally misleading or deceptive and purposefully crafted to manipulate public opinion, in contrast to misinformation, which may be circulated unintentionally through error or misunderstanding.

Multi-billion-dollar social media organisations employ algorithms and technology to harvest personal data and deliver curated, personalised content to users, the study points out. “The use of such algorithms and technology have enabled the phenomenon of ‘echo-chambers’ that amplify certain types of content among a certain group of users. Inflammatory content and disinformation (which are contrary to the Community Standards of social media platforms) can also be amplified by such algorithms and technology, thereby increasing the potential of such content to result in problematic-behaviours, to gain ‘viral’ traction, and thereby to even catalyse violence.”

The study examines the two-fold problem of content moderation: the inadequacy of voluntary content moderation by social media companies and the risks of government overreach in content regulation.

India, the study says, follows the “safe harbour regime” for regulating social media, granting service providers immunity from liability for third-party content, conditional on due diligence under the Information Technology (Intermediary Guidelines) Rules 2011 (IT Rules). These rules require platforms to publish transparent usage rules, warn users against prohibited content (such as obscene or hateful material), cooperate with government agencies and appoint a Grievance Officer.

Non-compliance with these conditions removes the safe harbour protection. The IT Act also empowers government officials to block public access to certain information, with penalties for non-compliance. This has raised concerns about content censorship and the vague definition of “prohibited” content, reflected in criticism from organisations such as the Committee to Protect Journalists and in the Indian Supreme Court’s observations in the Shreya Singhal case, which acknowledged the government’s expansive censorship authority under the IT Act.

In that case, the Court struck down Section 66A of the Information Technology Act, 2000, which had been widely criticised as vague and overbroad, leading to arbitrary enforcement and a chilling effect on free speech. The section criminalised sending information of an “annoying, inconvenient, or grossly offensive” nature through computers and communication devices.

Societal regulation, therefore, can be more effective in ensuring responsible content moderation, the study suggests.

“Service providers require positive public perceptions of their platform to attract and retain users. Retaining a high number of users and engagement allows the service provider to accumulate larger volumes of user data, and explore wider revenue generation methods, such as personalised advertising and selling licenses to access and re-use user data. Therefore, a loss of reputation that results in reducing users or reducing engagement can significantly disrupt the operations of a service provider,” the study says.

It proposes a “reputational-cost approach,” a strategy that emphasises the importance of societal response and public backlash in influencing social media platforms’ practices, particularly in content moderation. It suggests that social media platforms can be compelled to improve their content moderation efforts if they face a significant risk of reputational damage, which could adversely affect their commercial “good-will” and financial valuation.

The approach relies on generating a broader societal discussion and criticism, especially when online behaviour leads to offline harm. This societal pressure can lead to a reputational cost for social media platforms, pushing them to take proactive measures to improve content moderation.

While acknowledging the global influence of social media platforms, the approach emphasises addressing local issues, since an effective local societal response can create a global reputational impact.

The approach also highlights the role of civil society in creating a societal architecture that can generate reputational costs. This involves identifying and highlighting content moderation failures, quantifying and ranking the performance of social media platforms in content moderation, and communicating these issues to build global awareness and networking.