Artificial Intelligence Regulation

India Proposes Strict AI Content Rules, Threatening Social Media's Safe Harbour - 2025-10-24

Subject : Technology, Media, and Telecommunications - Information Technology Law

Supreme Today News Desk

New Delhi – In a significant move to combat the proliferation of deepfakes and AI-generated misinformation, India's Ministry of Electronics and Information Technology (MeitY) has released draft amendments to the nation's IT rules, proposing a stringent regulatory framework for "synthetically generated information." The proposed changes, titled the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, introduce mandatory labelling, user declaration, and technical verification obligations for online intermediaries, with severe consequences for non-compliance, including the potential loss of crucial safe harbour protections.

The draft rules, now open for public consultation until November 6, represent India's most direct legislative attempt to govern the rapidly evolving landscape of generative AI. The government's stated aim is to enhance the accountability of social media platforms and curb the potential for AI-generated content to "spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud."

A New Legal Definition and Expanded Due Diligence

At the heart of the proposed amendments is the introduction of a legal definition for "synthetically generated information." The draft inserts Rule 2(1)(wa) to define this as: “Information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.” This broad definition is designed to encompass all forms of AI-manipulated content, from deepfake videos to synthetic audio and digitally altered images.

The draft significantly expands the due diligence obligations for intermediaries under Rule 3 of the IT Rules, 2021. Intermediaries that provide tools or resources for creating or modifying synthetic content will be required to:

  • Embed Permanent Identifiers: Label or embed a permanent unique metadata or identifier on all synthetically generated information.

  • Ensure Prominent Visibility: The label must be conspicuously displayed, covering at least 10% of the visual surface area or announced during the initial 10% of an audio clip's duration.

  • Prevent Removal: Intermediaries must ensure these labels or identifiers cannot be modified, suppressed, or removed.

These requirements place a direct onus on platforms that enable AI content creation, such as those offering generative AI models like OpenAI's Sora or Google's Gemini, to build traceability and transparency into their systems from the point of creation.
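The visibility thresholds above translate into simple arithmetic. The following is an illustrative sketch (the function names are mine, not from the draft rules) of the minimum disclosure sizes the 10% thresholds would imply:

```python
import math

def min_banner_height(frame_height_px: int) -> int:
    """Minimum height in pixels of a label banner spanning the full frame width.

    For a full-width banner, its share of the visual surface area equals its
    share of the frame height, so covering at least 10% of the area means
    covering at least 10% of the height.
    """
    return math.ceil(0.10 * frame_height_px)

def min_audio_disclosure_seconds(clip_seconds: float) -> float:
    """Duration of a disclosure spanning the initial 10% of an audio clip."""
    return 0.10 * clip_seconds
```

Under this reading, a 1080-pixel-tall video would need a full-width banner at least 108 pixels tall, and a 60-second audio clip a disclosure lasting its first 6 seconds.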

Heightened Responsibility for Major Social Media Platforms

The amendments single out Significant Social Media Intermediaries (SSMIs)—platforms with over 5 million registered users, such as Meta, Google's YouTube, and X—for additional and more rigorous obligations. Under a proposed new sub-rule (1A) to Rule 4, before allowing any content to be uploaded, SSMIs must:

  • Obtain User Declarations: Require users to affirmatively declare whether the content being uploaded is synthetically generated.

  • Implement Technical Verification: Deploy "reasonable and appropriate technical measures," including automated tools, to verify the accuracy of these user declarations.

  • Display Clear Labels: If content is confirmed to be synthetic, the platform must ensure a clear label or notice is prominently displayed to all users viewing the content.
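The three SSMI obligations above amount to a pre-upload pipeline: collect a declaration, verify it, and label confirmed synthetic content. A minimal sketch of that flow follows; all names are hypothetical, and the draft rules do not prescribe any particular API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Upload:
    """One piece of user-submitted content awaiting publication."""
    content_id: str
    user_declared_synthetic: bool
    labelled: bool = False

def process_upload(upload: Upload, detector: Callable[[str], bool]) -> Upload:
    """Apply the declaration-verification-labelling flow to one upload.

    `detector` stands in for the "reasonable and appropriate technical
    measures" (e.g. an automated classifier) a platform might use to
    verify user declarations.
    """
    detected_synthetic = detector(upload.content_id)
    # Content is treated as synthetic if either the user declared it or the
    # platform's own verification flags it; either way a label is attached.
    if upload.user_declared_synthetic or detected_synthetic:
        upload.labelled = True
    return upload
```

The design point the draft implies is that a user declaration alone is not sufficient: the platform's own verification runs regardless of what the user declared.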

Crucially, the draft clarifies that an intermediary will be deemed to have failed its due diligence obligations if it "knowingly permits, promotes, or fails to act upon the publication of synthetically generated content that misleads or deceives users." This provision directly challenges the passive-host defence, signalling that a more proactive gatekeeping role is expected.

The Specter of Lost Safe Harbour

The most significant legal implication of the proposed rules is the potential loss of safe harbour immunity under Section 79 of the Information Technology Act, 2000, which shields intermediaries from liability for content posted by third-party users. The amendments explicitly state that non-compliant platforms risk losing this protection, exposing them to civil and criminal liability for user-generated content.

Pavan Duggal, a Supreme Court advocate specialising in cyberlaw, highlighted the gravity of this change. “If an intermediary knowingly permits or ignores unmarked synthetic content, it is deemed to have failed in due diligence—risking the vital Section 79 safe harbour immunity,” he noted. This shift transforms the regulatory framework from a reactive takedown model to a proactive verification and labelling regime, fundamentally altering the risk calculus for major tech companies operating in India.

Legal and Industry Reactions: A Balancing Act

The legal and tech communities have reacted with a mix of cautious optimism and significant concern. Supporters view the draft as a necessary and historic step towards digital accountability. “For the first time, Indian cyber law draft amendments recognised and clearly defined ‘synthetically generated information’ as computer-altered content masquerading as genuine—a much-needed shift aligning law with digital realities,” said Duggal.

However, critics and industry executives have raised serious questions about the technical feasibility and potential for overreach. The obligation for platforms to use "technical measures" to verify user declarations is seen as particularly challenging.

"The obligations are easy to write into the rules but very difficult to implement technically — and even easier to circumvent," stated a senior executive at a social media company. The sheer volume of content, coupled with the increasing sophistication of AI generation tools, makes accurate, large-scale verification a formidable technical and financial hurdle.

Furthermore, there are concerns that the rules could stifle legitimate forms of expression. Dhruv Garg, of the India Governance and Policy Project, warned that "regulatory safeguards must be carefully designed to prevent misuse of such provisions in ways that could inadvertently restrict legitimate expression or artistic, satirical, and creative uses of synthetic media."

N.S. Nappinai, a senior counsel at the Supreme Court, argued that while the amendments amplify intermediary obligations, they may not be sufficient. "AI deepfakes proliferation, impact and harm...has now reached a critical scale, sufficient for the Centre to consider more robust and standalone AI laws," she commented, suggesting that specific criminal provisions may be more effective deterrents.

The proposed rules are now subject to a stakeholder consultation process, where tech companies, civil society, and legal experts will have the opportunity to provide feedback. The final form of the regulations will depend heavily on this feedback and the government's willingness to address concerns about implementation and the delicate balance between preventing harm and protecting free speech in the digital age.

#ITRules #AIregulation #IntermediaryLiability
