Published on 28 October 2025
Intermediary Liability and AI Regulation
Subject: Technology, Media, and Telecommunications - Information Technology Law
Description:
New Delhi – The Ministry of Electronics and Information Technology (MeitY) has ignited a critical legal debate with its proposed 2025 amendments to the Information Technology (IT) Rules, 2021. Aimed at curbing the proliferation of "synthetically generated information" or deepfakes, the draft rules impose significant new obligations on digital intermediaries. However, legal experts warn that in doing so, the government may be fundamentally reshaping intermediary liability through subordinate legislation, creating a direct conflict with the safe harbour provisions enshrined in the parent Information Technology Act, 2000.
The proposed amendments mandate that all social media users declare and label AI-generated content, with such disclaimers required to cover at least 10% of the visual display area or the initial 10% of an audio clip's duration. The rules go further for Significant Social Media Intermediaries (SSMIs), requiring them to implement "reasonable and appropriate technical measures" to verify the accuracy of user declarations.
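The quantitative thresholds above lend themselves to a simple calculation. As a purely illustrative sketch (the function names and example values are ours, not part of the draft rules), the minimum disclaimer footprint for a given video frame or audio clip works out as follows:

```python
def min_visual_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area: at least 10% of the visual display area, in pixels."""
    return 0.10 * width_px * height_px

def min_audio_label_duration(clip_seconds: float) -> float:
    """Minimum disclaimer duration: the initial 10% of the clip's length, in seconds."""
    return 0.10 * clip_seconds

# Example: a 1920x1080 video frame and a 60-second audio clip
area = min_visual_label_area(1920, 1080)    # 207360.0 square pixels
duration = min_audio_label_duration(60.0)   # 6.0 seconds
```

For a full-HD frame, the label would thus have to occupy over two hundred thousand square pixels, and a one-minute clip would need a six-second disclaimer at its start, figures that give a sense of how prominent the mandated labeling is.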
While the policy goal of enhancing transparency and combating misinformation is widely supported, particularly in the wake of high-profile cases such as Sadhguru Jagadish Vasudev & Anr v. Igor Isakov & Ors, in which deepfakes were used to violate personality rights, the chosen legal mechanism has raised serious questions about the permissible scope of delegated legislation.
The Core Conflict: Redefining Safe Harbour by Rule
At the heart of the controversy is the potential clash with Section 79 of the IT Act, which grants "safe harbour" protection to intermediaries. This immunity is statutorily conditioned on the intermediary playing a passive role—specifically, that it does not initiate a transmission, select the receiver, or select or modify the information contained in the transmission.
The draft amendments, however, compel intermediaries to transition from a passive conduit to an active moderator. By requiring them to verify user declarations, ensure prominent labeling, and embed "permanent unique metadata or identifier," the rules mandate a level of intervention that appears to directly contravene the conditions for safe harbour under Section 79(2).
One legal commentary articulates the central paradox: "The Act conditions immunity on inaction, the Rules demand intervention and then declare it harmless. Such reversal of statutory architecture cannot safely be done through subordinate legislation."
Recognizing this conflict, MeitY has included a new proviso to Rule 3(1)(b), which states that an intermediary's good faith efforts to remove or disable synthetic information will not result in a loss of safe harbour protection. Yet this attempt to provide assurance via a rule is seen by legal analysts as a doctrinal problem. The core argument is that the executive cannot use its rule-making power under Section 87 of the IT Act to substantively alter the conditions for safe harbour that were established by Parliament in the primary statute. Such a move, critics argue, risks being struck down in court as ultra vires, that is, beyond the powers conferred by the parent Act.
If Parliament intends to modernize Section 79 to accommodate proactive content moderation in the age of AI, legal scholars contend this must be achieved through a direct legislative amendment, not through executive rule-making.
Heightened Compliance and Operational Burdens for Intermediaries
The proposed framework places a substantial compliance load on a wide range of technology firms. A MeitY official clarified the expansive scope, stating, "Any software, database or computer resource that is used to generate synthetic content would be covered under the mandate, to make the effort of labelling AI fool proof. The rules are not only for social media platforms."
This brings a vast ecosystem of popular AI tools under scrutiny, including OpenAI’s ChatGPT and Sora, Google’s Gemini, and Microsoft’s Copilot. These companies, many of which are already involved in global initiatives like the Coalition for Content Provenance and Authenticity (C2PA) to develop technical standards for content origin, are now studying the specific implications of India's mandate.
For SSMIs, the burden is heavier. They must not only facilitate user declarations but also develop and deploy technical systems to verify them. This requires significant investment in technology and human resources to review and classify vast volumes of user-generated content, moving them further away from their traditional role as neutral platforms.
A Piecemeal Approach in a Global Context
India’s regulatory action is being compared to international approaches, revealing different philosophies on AI governance. The European Union's landmark AI Act employs a comprehensive, risk-based framework. It categorizes AI applications based on their potential for harm, imposing the strictest requirements on "high-risk" systems while categorizing deepfake labeling under "limited risk" transparency obligations.
In contrast, China has adopted a more assertive stance, issuing regulations that require both explicit labels and embedded, machine-readable metadata to identify synthetic content.
Commentators have described India's strategy as "piecemeal," addressing the immediate threat of deepfakes through amendments to existing IT rules rather than formulating a broad national AI framework. While officials have indicated that these labeling standards will eventually be part of a larger, innovation-focused AI framework, the current approach prioritizes immediate intervention over a comprehensive regulatory architecture.
Clarifications on Content Takedown Orders
In a related but significant development, the amendments to Rule 3(1)(d) seek to streamline the process for content removal. MeitY has clarified that from November 15, 2025, only senior government officials (at the Joint Secretary level and above) and senior law enforcement personnel (at the Deputy Inspector General of Police rank and above) will be authorized to issue takedown notifications to intermediaries. These orders must specify the legal basis and the exact URL of the content to be removed, a move intended to bring greater accountability and precision to the takedown process.
The Road Ahead: A Call for Deliberation
The draft amendments, open for public feedback until November 6, 2025, represent a crucial step in India's efforts to govern the complex challenges posed by generative AI. The objective of safeguarding democratic integrity and protecting individuals from malicious deepfakes is undisputed.
However, the legal community and technology industry will be closely watching how MeitY addresses the fundamental concerns regarding the scope of its rule-making power and the potential erosion of the safe harbour doctrine. As one expert noted, "The next step should be to establish clear implementation standards and collaborative frameworks between government and industry, to ensure the rules are practical, scalable, and supportive of India's AI leadership ambitions."
Ensuring that this regulatory framework is both effective and legally sound will require careful deliberation to strike the right balance between proactive moderation, intermediary protection, and adherence to the statutory architecture of the IT Act. The outcome of this process will have profound implications for the future of digital governance and innovation in India.
#ITRules #AIregulation #SafeHarbour