New IT Rules Order 3-Hour Content Removal, Threatening Free Speech and Privacy
From the Editor’s Desk
February 16, 2026
The central government has notified amendments to the Information Technology Rules, requiring online platforms to remove content within three hours of receiving an official government order and introducing new legal requirements for detecting, labelling and tracing AI-generated or altered media. The design risks rapid removal of lawful speech and deeper intrusion into user privacy, because platforms that fail to follow government orders can lose the legal protection that shields them from liability for what users post.
The amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, were notified on February 10 and are scheduled to take effect on February 20. They cut the timeline for removing unlawful content after a government notice from 36 hours to three hours, and they shorten some grievance redressal timelines to two hours, as reported by The Print.
The amendments apply to major online platforms including Meta, YouTube and X. Officials have described the changes as a response to risks linked to deepfakes, synthetic media and AI-generated content that can enable sexual abuse, harassment, or threats to public order and national security.
The amendments expand the meaning of synthetically generated information to include AI-created or -altered audio and video that appears real. They require platforms to use automated systems to block unlawful synthetic content, ask users to declare AI-generated material, verify and clearly label such content, attach traceable data to it and, in some cases, reveal the identity of the user to the person who files a complaint.
There are at least five bases for concern.
The first concern is rule-making through extreme deadlines: a three-hour legal clock turns careful judgement about legality into an emergency response.
German sociologist Max Weber explained that the authority of the modern state rests on stable rules and fair procedures, and that public trust grows from consistent and reasoned process. Extremely short timelines push officials and companies to remove content quickly in order to avoid legal trouble, even in cases where the lawfulness of the content needs careful examination. This pressure can create a system where speed becomes the main sign of obedience to the law, while careful reasoning, evidence and context receive less attention.
A second concern is that legal pressure hands private technology companies the power to decide which speech stays online and which is removed, a role that normally belongs to the state, altering how control and responsibility work in everyday public life.
Michael Lipsky, an American political scientist who developed the idea of street-level bureaucracy, showed that people working on the frontlines of institutions end up shaping real policy because they decide how rules are applied in daily situations. In this case, the frontline consists of platform moderation teams and automated systems that choose which content disappears first, often before any careful human review takes place. The government’s main tool is the legal shield that normally protects platforms from being held responsible for user posts, and the risk of losing that shield pushes companies to remove content quickly in order to protect themselves rather than to balance rights and fairness.
A third concern involves fairness in the legal process, because the three-hour deadline may apply to many different types of online content and the rule does not clearly say it is limited to deepfakes. Judging whether online material is illegal in India often requires careful reading of context, including questions of defamation, obscenity, public order, satire, journalism, political speech and local meaning. A three-hour window encourages platforms to remove content quickly in doubtful situations, since legal risk falls on the company while the user has little time to challenge the decision. Over time this pattern can discourage lawful expression, as creators, journalists and ordinary users see that disputed content can vanish rapidly while restoration takes much longer.
A fourth concern comes from the limits of current technology in a country with many languages and social contexts, especially where rules depend on automated systems to detect and label AI-generated content. Tools that try to identify deepfakes and synthetic media still make frequent mistakes and often work unevenly across different languages, dialects and mixed forms of speech. Systems that must act within hours rely on rough signals instead of careful cultural reading across many languages, which can lead to uneven enforcement and unequal limits on speech across different communities.
A fifth concern involves pressure on privacy created by rules that require verification, clear labelling, traceable data attached to content and, in some situations, disclosure of a user’s identity to the person who files a complaint.
Lawrence Lessig, an American legal scholar who studies how digital systems shape behaviour, has argued that control in the online world often comes from the design of technology itself rather than from written law alone. These rules make identity tracking a normal condition for speaking online. Systems that store traceable data or allow identity disclosure can place users at risk of retaliation, harassment or repeated complaints, especially in tense political settings and in cases involving gender-based abuse. Linking speech to real-world identity also changes who feels safe to speak, because anonymity often protects whistleblowers, survivors, dissidents and people from marginalised communities even as societies try to prevent genuine abuse.
A workable solution would keep strong protection against clearly harmful synthetic content while restoring time and process for careful legal judgement. Rules could create separate tracks, one for urgent harms such as non-consensual sexual imagery or credible threats, where rapid temporary blocking is allowed, and another for complex or disputed speech, where platforms receive more time and users gain a clear right to challenge removal before any final decision.
Independent review bodies, transparent reporting of takedown orders and limits on identity disclosure could protect privacy and public trust. Technical duties could focus on improving detection accuracy across Indian languages and require human review for sensitive categories, so that speed does not replace fairness.
Some countries in Europe offer at least partial examples through risk-based regulation of online platforms.
The European Union’s Digital Services Act sets graded duties based on platform size, requires transparency about moderation decisions and gives users a formal path to appeal removals through independent dispute bodies. Emergency action is allowed for serious harm, yet longer procedures exist for contested speech, which helps balance safety with freedom of expression.
You have just read a News Briefing, written by Newsreel Asia’s text editor, Vishal Arora, to cut through the noise and present a single story for the day that matters to you. We encourage you to read the News Briefing each day. Our objective is to help you become not just an informed citizen, but an engaged and responsible one.