Published on 25 October 2025
Intermediary Liability and AI Regulation
Subject: Technology, Media, and Telecommunications Law — Information Technology and Cyber Law
Description:
New Delhi – The Indian government has unveiled sweeping draft amendments to its Information Technology (IT) Rules, 2021, aimed at regulating AI-generated content, but the proposal has triggered significant legal debate over its potential to erode the "safe harbour" protections that have long shielded online intermediaries from liability for user-generated content. Legal experts are closely examining the new obligations, which they argue could fundamentally alter the liability landscape for social media platforms and challenge the principles established by the Supreme Court in the landmark case of Shreya Singhal v. Union of India.
The draft rules, released by the Ministry of Electronics and Information Technology (MeitY) on October 22, 2025, introduce a stringent framework to combat the proliferation of "synthetically generated information," including deepfakes. The proposal mandates clear labelling, technical verification, and user declarations for all AI-created or modified content. While the government's stated intent is to enhance transparency and user safety, the mechanism for enforcement places a heavy burden on intermediaries, potentially forcing them into the role of proactive content monitors—a position the judiciary has previously sought to limit.
At the heart of the legal controversy is the potential conflict with Section 79 of the Information Technology Act, 2000, and its interpretation by the Supreme Court. Section 79 grants intermediaries immunity from liability for third-party content, provided they adhere to due diligence requirements. In Shreya Singhal, the Court established the "actual knowledge" standard, ruling that an intermediary's obligation to take down content arises only upon receiving a specific order from a court or a government agency. This precedent was designed to prevent private censorship and protect free expression, ensuring platforms were not compelled to arbitrate the legality of speech themselves.
The proposed amendments appear to circumvent this standard. By mandating that platforms "deploy ‘reasonable and proportionate technical measures’, such as automated detection tools, to verify the accuracy of user declarations," the rules arguably shift the standard from "actual knowledge" to a form of constructive knowledge.
As one analysis notes, "by mandating that SSMIs [Significant Social Media Intermediaries] deploy verification tools, the law presumes they have the means of knowledge. Consequently, if an unlabelled deepfake is found on a platform, the law will impute knowledge to the intermediary." This shift forces platforms to proactively monitor content, a departure from their traditional role as passive conduits. Failure to comply could result in the loss of safe harbour protection, exposing them to legal and financial risks for content they host but do not create.
The draft rules introduce a multi-pronged approach to regulating synthetic media, combining mandatory labelling of AI-generated content, technical verification measures, and user declarations at the point of upload.
This framework represents a significant expansion of the due diligence obligations under the existing IT Rules. IT Minister Ashwini Vaishnaw has framed the rules as a necessary response to the misuse of AI for impersonation and misinformation. However, the legal community is concerned that the cure may be more damaging than the disease.
A primary concern among legal and civil liberties experts is that the revised liability structure will lead to a "chilling effect" on free speech. Faced with the risk of losing immunity, platforms are likely to adopt overly cautious moderation policies.
As one analysis points out, the new rules could lead platforms to "err on the side of caution and purge content—no matter the merits—rather than face enforcement heat." This concern is amplified by provisions that appear to allow social media companies to take down content based solely on user complaints, without the need for a court or government order. This grants significant power to platforms to act as arbiters of online speech, a role that parliamentary committees have recently expressed bipartisan concern over.
This dynamic creates a precarious balance. While the goal of curbing harmful deepfakes—used in financial scams, election manipulation, and personal harassment—is widely supported, the method could inadvertently suppress legitimate forms of expression, including parody, artistic creation, and political satire that utilize AI tools.
India's decision to embed AI regulation within the existing IT Rules, rather than creating standalone legislation like the EU's AI Act, reflects a strategy of agile governance. This allows for faster rule-making and integrates AI content under the same due diligence framework as other user-generated material.
However, this approach also blurs the lines between platform-specific and AI-specific obligations, potentially creating compliance complexities. The broad definition of "synthetically generated information"—any content created or altered by a computer resource that "reasonably appears to be authentic or true"—encompasses a vast range of media but leaves ambiguity around text-based outputs from generative AI.
The rules mark a critical inflection point in India's digital governance. MeitY Secretary S. Krishnan clarified that the intent is transparency, not censorship, stating, "You can post AI content — just label it clearly." Yet, for legal practitioners advising tech companies, the operational challenges of implementing immutable metadata, verifying content at scale, and navigating inconsistent international standards remain formidable.
As the government considers stakeholder feedback before finalizing the rules, the legal community will be watching closely. The outcome will not only define the future of AI regulation in India but will also serve as a crucial test for the durability of the safe harbour principles that have underpinned the growth of the internet for over two decades. The central question remains: can India build a framework that ensures accountability without dismantling the legal architecture that enables a free and open digital ecosystem?
#IntermediaryLiability #ITRules #ShreyaSinghal
Copyright © 2023 Vikas Info Solution Pvt Ltd. All Rights Reserved.