Artificial Intelligence Regulation
Subject: Technology, Media, and Telecommunications – Information Technology Law
New Delhi – In a significant move to combat the proliferation of deepfakes and AI-generated misinformation, India's Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the nation's IT rules, proposing a stringent regulatory framework for "synthetically generated information." The proposed changes, titled the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, introduce mandatory labelling, user declaration, and technical verification obligations for online intermediaries, with severe consequences for non-compliance, including the potential loss of crucial safe harbour protections.
The draft rules, now open for public consultation until November 6, represent India's most direct legislative attempt to govern the rapidly evolving landscape of generative AI. The government's stated aim is to enhance the accountability of social media platforms and curb the potential for AI-generated content to "spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud."
At the heart of the proposed amendments is the introduction of a legal definition for "synthetically generated information." The draft inserts Rule 2(1)(wa) to define this as: “Information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.” This broad definition is designed to encompass all forms of AI-manipulated content, from deepfake videos to synthetic audio and digitally altered images.
The draft significantly expands the due diligence obligations for intermediaries under Rule 3 of the IT Rules, 2021. Intermediaries that provide tools or resources for creating or modifying synthetic content will be required to label such content as synthetically generated and to embed that disclosure from the point of creation.
These requirements place a direct onus on platforms that enable AI content creation, such as those offering generative AI models like OpenAI's Sora or Google's Gemini, to build traceability and transparency into their systems from the point of creation.
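The draft does not prescribe a specific technical format for such labels. Purely as an illustration of what an embedded, verifiable disclosure might look like, the sketch below binds a "synthetically generated" flag to a content hash; every field name and both functions (`make_synthetic_label`, `verify_label`) are hypothetical and are not drawn from the proposed rules.

```python
import hashlib

def make_synthetic_label(content: bytes, tool_name: str) -> dict:
    """Build an illustrative provenance label for AI-generated content.

    Hypothetical sketch only: the field names are assumptions for
    illustration, not the format mandated by the draft amendments.
    """
    return {
        "synthetically_generated": True,   # mandatory disclosure flag
        "generator": tool_name,            # tool that produced the content
        # Hash binds the label to this exact content, so the label
        # cannot simply be copied onto different, unlabelled content.
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_label(content: bytes, label: dict) -> bool:
    """Check that a label matches the content it claims to describe."""
    return (
        label.get("synthetically_generated") is True
        and label.get("sha256") == hashlib.sha256(content).hexdigest()
    )
```

A scheme of this kind only verifies that a label travels with unmodified content; it cannot, by itself, detect synthetic content that a user never declared, which is precisely the gap critics of the draft point to.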
The amendments single out Significant Social Media Intermediaries (SSMIs)—platforms with over 5 million registered users, such as Meta, Google's YouTube, and X—for additional and more rigorous obligations. Under a proposed new sub-rule (1A) to Rule 4, before allowing any content to be published, SSMIs must obtain a user declaration on whether the content is synthetically generated and deploy reasonable technical measures to verify that declaration.
Crucially, the draft clarifies that an intermediary will be deemed to have failed its due diligence obligations if it "knowingly permits, promotes, or fails to act upon the publication of synthetically generated content that misleads or deceives users." This provision directly challenges the passive-host defence, signalling that a more proactive gatekeeping role is expected.
The most significant legal implication of the proposed rules is the potential loss of safe harbour immunity under Section 79 of the Information Technology Act, 2000. This provision shields intermediaries from liability for content posted by third-party users. The amendments explicitly state that non-compliant platforms risk losing this protection, which would expose them to a barrage of civil and criminal lawsuits over user-generated content.
Pavan Duggal, a Supreme Court advocate specialising in cyberlaw, highlighted the gravity of this change. “If an intermediary knowingly permits or ignores unmarked synthetic content, it is deemed to have failed in due diligence—risking the vital Section 79 safe harbour immunity,” he noted. This shift transforms the regulatory framework from a reactive takedown model to a proactive verification and labelling regime, fundamentally altering the risk calculus for major tech companies operating in India.
The legal and tech communities have reacted with a mix of cautious optimism and significant concern. Supporters view the draft as a necessary and historic step towards digital accountability. “For the first time, Indian cyber law draft amendments recognised and clearly defined ‘synthetically generated information’ as computer-altered content masquerading as genuine—a much-needed shift aligning law with digital realities,” said Duggal.
However, critics and industry executives have raised serious questions about the technical feasibility and potential for overreach. The obligation for platforms to use "technical measures" to verify user declarations is seen as particularly challenging.
"The obligations are easy to write into the rules but very difficult to implement technically — and even easier to circumvent," stated a senior executive at a social media company. The sheer volume of content, coupled with the increasing sophistication of AI generation tools, makes accurate, large-scale verification a formidable technical and financial hurdle.
Furthermore, there are concerns that the rules could stifle legitimate forms of expression. Dhruv Garg, of the India Governance and Policy Project, warned that "regulatory safeguards must be carefully designed to prevent misuse of such provisions in ways that could inadvertently restrict legitimate expression or artistic, satirical, and creative uses of synthetic media."
N.S. Nappinai, a senior counsel at the Supreme Court, argued that while the amendments amplify intermediary obligations, they may not be sufficient. "AI deepfakes proliferation, impact and harm...has now reached a critical scale, sufficient for the Centre to consider more robust and standalone AI laws," she commented, suggesting that specific criminal provisions may be more effective deterrents.
The proposed rules are now subject to a stakeholder consultation process, where tech companies, civil society, and legal experts will have the opportunity to provide feedback. The final form of the regulations will depend heavily on this feedback and the government's willingness to address concerns about implementation and the delicate balance between preventing harm and protecting free speech in the digital age.
#ITRules #AIregulation #IntermediaryLiability
Copyright © 2023 Vikas Info Solution Pvt Ltd. All Rights Reserved.