
Published on 25 October 2025

Intermediary Liability and AI Regulation

India's New AI Rules Challenge Safe Harbour Protections for Intermediaries

Subject: Technology, Media, and Telecommunications Law - Information Technology and Cyber Law


New Delhi – The Indian government has unveiled sweeping draft amendments to its Information Technology (IT) Rules, 2021, aimed at regulating AI-generated content, but the proposal has triggered significant legal debate over its potential to erode the "safe harbour" protections that have long shielded online intermediaries from liability for user-generated content. Legal experts are closely examining the new obligations, which they argue could fundamentally alter the liability landscape for social media platforms and challenge the principles established by the Supreme Court in the landmark case of Shreya Singhal v. Union of India.

The draft rules, released by the Ministry of Electronics and Information Technology (MeitY) on October 22, 2025, introduce a stringent framework to combat the proliferation of "synthetically generated information," including deepfakes. The proposal mandates clear labelling, technical verification, and user declarations for all AI-created or modified content. While the government's stated intent is to enhance transparency and user safety, the mechanism for enforcement places a heavy burden on intermediaries, potentially forcing them into the role of proactive content monitors—a position the judiciary has previously sought to limit.

The Erosion of a Foundational Legal Precedent

At the heart of the legal controversy is the potential conflict with Section 79 of the Information Technology Act, 2000, and its interpretation by the Supreme Court. Section 79 grants intermediaries immunity from liability for third-party content, provided they adhere to due diligence requirements. In Shreya Singhal, the Court established the "actual knowledge" standard, ruling that an intermediary's obligation to take down content arises only upon receiving a specific order from a court or a government agency. This precedent was designed to prevent private censorship and protect free expression, ensuring platforms were not compelled to arbitrate the legality of speech themselves.

The proposed amendments appear to circumvent this standard. By mandating that platforms "deploy ‘reasonable and proportionate technical measures’, such as automated detection tools, to verify the accuracy of user declarations," the rules arguably shift the standard from "actual knowledge" to a form of constructive knowledge.

As one analysis notes, "by mandating that SSMIs [Significant Social Media Intermediaries] deploy verification tools, the law presumes they have the means of knowledge. Consequently, if an unlabelled deepfake is found on a platform, the law will impute knowledge to the intermediary." This shift forces platforms to proactively monitor content, a departure from their traditional role as passive conduits. Failure to comply could result in the loss of safe harbour protection, exposing them to legal and financial risks for content they host but do not create.

Key Provisions of the Draft Amendments

The draft rules introduce a multi-pronged approach to regulating synthetic media:

  • Mandatory Labelling: All AI-generated content must be clearly labelled. For visual media, the label must cover at least 10% of the display area, and for audio, it must be audible for 10% of the duration. A permanent metadata identifier or watermark is also required to ensure traceability.
  • User Declaration: Platforms must require users to declare if their content has been generated or modified by AI.
  • Technical Verification: Intermediaries are obligated to use technical measures to verify these user declarations, effectively requiring them to scan and analyze uploaded content.
  • Shared Responsibility: Both content creators and hosting platforms are held equally responsible for compliance, with the explicit threat of losing safe harbour immunity for failure to do so.

This framework represents a significant expansion of the due diligence obligations under the existing IT Rules. IT Minister Ashwini Vaishnaw has framed the rules as a necessary response to the misuse of AI for impersonation and misinformation. However, the legal community is concerned that the cure may be more damaging than the disease.

Chilling Effects and the Risk of Over-Censorship

A primary concern among legal and civil liberties experts is that the revised liability structure will lead to a "chilling effect" on free speech. Faced with the risk of losing immunity, platforms are likely to adopt overly cautious moderation policies.

As one analysis points out, the new rules could lead platforms to "err on the side of caution and purge content—no matter the merits—rather than face enforcement heat." This concern is amplified by provisions that appear to allow social media companies to take down content based solely on user complaints, without the need for a court or government order. This grants significant power to platforms to act as arbiters of online speech, a role that parliamentary committees have recently expressed bipartisan concern over.

This dynamic creates a precarious balance. While the goal of curbing harmful deepfakes—used in financial scams, election manipulation, and personal harassment—is widely supported, the method could inadvertently suppress legitimate forms of expression, including parody, artistic creation, and political satire that utilize AI tools.

An Integrated but Contentious Regulatory Approach

India's decision to embed AI regulation within the existing IT Rules, rather than creating standalone legislation like the EU's AI Act, reflects a strategy of agile governance. This allows for faster rule-making and integrates AI content under the same due diligence framework as other user-generated material.

However, this approach also blurs the lines between platform-specific and AI-specific obligations, potentially creating compliance complexities. The broad definition of "synthetically generated information"—any content created or altered by a computer resource that "reasonably appears to be authentic or true"—encompasses a vast range of media but leaves ambiguity around text-based outputs from generative AI.

The rules mark a critical inflection point in India's digital governance. MeitY Secretary S. Krishnan clarified the intent is transparency, not censorship, stating, "You can post AI content — just label it clearly." Yet, for legal practitioners advising tech companies, the operational challenges of implementing immutable metadata, verifying content at scale, and navigating inconsistent international standards remain formidable.

As the government considers stakeholder feedback before finalizing the rules, the legal community will be watching closely. The outcome will not only define the future of AI regulation in India but will also serve as a crucial test for the durability of the safe harbour principles that have underpinned the growth of the internet for over two decades. The central question remains: can India build a framework that ensures accountability without dismantling the legal architecture that enables a free and open digital ecosystem?

#IntermediaryLiability #ITRules #ShreyaSinghal
