India Proposes New AI Content Rules: A Deep Dive into Intermediary Liability and Free Speech Concerns - 2025-10-30

Subject : Technology, Media, and Telecommunications - Information Technology and Data Protection

Supreme Today News Desk

New Delhi – The Ministry of Electronics and Information Technology (MeitY) has initiated a significant regulatory shift with its proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The draft notification, issued on October 22, aims to tackle the proliferation of deepfakes and AI-generated content by mandating a stringent labeling and verification regime, placing a heavy compliance burden on digital intermediaries and raising critical questions about free speech, technical feasibility, and the scope of intermediary liability in India.

The amendments introduce the concept of "synthetically generated information" (SGI) and seek to fortify the due diligence obligations for platforms, from major social media giants to services that enable content creation. As stakeholders rush to submit feedback by the November 6 deadline, the legal community is closely examining the potential ramifications of these rules on innovation, user expression, and the existing safe harbour protections under the Information Technology Act, 2000.


Defining the Undefinable: The Ambiguous Scope of 'Synthetically Generated Information'

At the heart of the proposed changes is the new definition for SGI under Rule 2(1)(wa), which broadly encompasses any information "artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears authentic or true." While clearly intended to target malicious deepfakes and misinformation, legal experts argue the definition’s breadth is a significant cause for concern.

A literal interpretation could sweep in a vast array of digital content, from AI-generated art and marketing campaigns to innocuous photo filters and automated video enhancements. As the Internet Freedom Foundation (IFF) warns, this ambiguity risks overreach. "The definition of ‘synthetically generated information’ includes any content ‘algorithmically created or modified in a manner that appears authentic or true’; it could cover everything from satire and remix videos to harmless filters," the IFF noted, raising the spectre of over-censorship.

This expansive definition sets the stage for a compliance minefield, forcing intermediaries to make difficult judgment calls on what constitutes SGI and potentially chilling lawful creative expression to avoid regulatory penalties.

Recalibrating Intermediary Liability and Safe Harbour

The draft amendments represent a direct challenge to the traditional safe harbour protections afforded to intermediaries under Section 79 of the IT Act. The new rules introduce a tiered system of obligations that significantly increases the due diligence burden.

  1. For Content Generation Platforms: Intermediaries offering tools that "enable, permit, or facilitate" the creation of SGI must embed a permanent unique identifier or metadata. Furthermore, such content must be conspicuously labeled, with the draft prescribing that the label cover "at least 10% of the visual surface" or the "first 10% of audio." Platforms must also ensure these labels and identifiers are not removed, an obligation that effectively places a proactive monitoring burden on the services where content is created.

  2. For Significant Social Media Intermediaries (SSMIs): The proposed Rule 4(1A) imposes a multi-layered verification duty on platforms with over five million users. SSMIs must:

    • Obtain a declaration from users as to whether their uploaded content is synthetically generated.
    • Deploy "reasonable and proportionate technical measures" to verify these user declarations.
    • Ensure all identified SGI is clearly labeled.

A failure to perform these duties is explicitly linked to a breach of due diligence, thereby jeopardizing the platform's safe harbour status. While the language "reasonable and proportionate" offers some flexibility, it also creates legal uncertainty. Without clear industry standards or technological benchmarks, platforms may be pressured into adopting an "overly cautious approach, resulting in excessive removal of content or self-censorship to mitigate perceived legal risks," as noted in one analysis.

Apar Gupta, founder of the IFF, highlighted the practical flaws in this approach, telling Inc42, "Mandates are technically easy to evade... metadata watermarks are routinely stripped during cross-platform reposting... the burden falls largely on good-faith users and platforms, while determined offenders migrate to tools and channels with minimal oversight."

Compelled Speech vs. Reasonable Restriction: The Constitutional Dimension

The IFF has characterized the mandatory labeling requirement as a form of "compelled speech," arguing it forces both creators and platforms to carry a government-mandated message on potentially lawful and harmless content. This raises a fundamental question under Article 19 of the Constitution: does this regulation constitute a "reasonable restriction" in the interest of public order and preventing incitement?

The government's move is contextualized by a surge in high-profile deepfake incidents, such as the case brought by actor Aishwarya Rai Bachchan, in which the Delhi High Court granted injunctions to protect her personality and publicity rights. Proponents of the rules argue that they are a necessary proactive measure to combat reputational harm, fraud, and election interference, and some legal experts contend that the rules fall within the ambit of reasonable restrictions.

However, critics maintain that the broad, undifferentiated application of labeling to all SGI—from dangerous political deepfakes to AI-assisted art—fails the test of proportionality. A risk-based approach, as suggested by some experts, would involve stricter rules for high-risk content like election material while applying lighter regulations to creative or commercial uses.

Global Context and Implementation Challenges

India’s regulatory push aligns with a global trend toward AI transparency. The European Union’s AI Act, for instance, mandates that AI-generated content be tagged, and the United States has issued executive orders for watermarking standards. However, India's unique digital landscape—characterized by massive user volumes, linguistic diversity, and varying levels of digital literacy—presents distinct challenges.

The technical and financial costs of implementation are substantial. Deploying automated detection tools is expensive and their accuracy is not guaranteed, especially against sophisticated adversarial techniques. The new rules could disproportionately impact smaller startups and individual creators who may lack the resources to comply with complex labeling and verification requirements.

There is also a debate over where accountability should ultimately lie. Divya Agarwal of Bingelabs suggested that the obligation should be on the foundational model providers. "If you generate something using OpenAI, Gemini, or any other LLM, the output could include a hidden code or identifier... When that content is uploaded to a platform like Meta, the system could automatically detect that it’s AI-generated," Agarwal proposed. This would shift the primary technical burden from content-hosting platforms to the AI developers themselves—a possibility hinted at by government sources who confirmed consultations with major AI labs like OpenAI and Google.
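For readers who want a concrete picture of the provenance flow Agarwal describes, the sketch below models it in miniature: a model provider attaches a cryptographically signed tag at generation time, and a hosting platform checks for it at upload time. Everything here is illustrative — the key, the tag format, and the function names are hypothetical, and real provenance standards (such as C2PA) embed signed manifests inside the media file itself rather than alongside it. The sketch also makes visible the evasion gap critics raise: content with the tag stripped simply reads as untagged.

```python
import hashlib
import hmac

# Hypothetical shared key standing in for the provider's real key material.
PROVIDER_KEY = b"demo-shared-secret"

def tag_output(content: bytes, model_id: str) -> dict:
    """Model provider side: wrap generated content with a signed provenance tag."""
    sig = hmac.new(PROVIDER_KEY, content + model_id.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "provenance": {"model": model_id, "sig": sig}}

def is_ai_generated(upload: dict) -> bool:
    """Hosting platform side: report whether an upload carries a valid provenance tag."""
    prov = upload.get("provenance")
    if not prov:
        # No tag at all: the platform cannot tell. This is the gap Apar Gupta
        # points to -- stripping metadata defeats detection entirely.
        return False
    expected = hmac.new(PROVIDER_KEY, upload["content"] + prov["model"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, prov["sig"])

tagged = tag_output(b"synthetic video bytes", "example-llm-v1")
print(is_ai_generated(tagged))                          # True
print(is_ai_generated({"content": b"camera footage"}))  # False
```

The design point the sketch surfaces is the one at issue in the debate: verification is only as good as the upstream tagging, which is why some argue the obligation belongs on foundational model providers rather than on hosting platforms.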

Conclusion: A Precarious Balance

The Draft Amendments to the IT Rules signal India's intent to be at the forefront of regulating AI-generated content. The policy goal—to empower users to distinguish between authentic and synthetic information—is laudable. However, the proposed framework raises profound legal and operational challenges.

For legal professionals advising tech clients, the key concerns will revolve around the definitional ambiguity of SGI, the heightened risk of losing safe harbour protection, and the practicalities of implementing costly and technically complex compliance systems. The coming weeks will be crucial as industry stakeholders, civil society, and legal experts provide feedback to MeitY. The final text of the rules will need to strike a delicate balance: fostering transparency and accountability without imposing disproportionate burdens that stifle innovation, chill free expression, and ultimately prove ineffective against determined malicious actors.

#TechLaw #AIregulation #IntermediaryLiability
