
Information Technology Act, 2000 - Synthetic Content Regulation

Centre Amends IT Rules to Regulate Synthetic AI Content and Impose 2-Hour Deepfake Takedowns: MeitY Notification - 2026-02-11

Subject : Cyber Law - Intermediary Guidelines

Supreme Today News Desk

India Tightens Grip on AI: New IT Rules Mandate Swift Deepfake Removals and Content Labeling

Introduction

In a significant move to curb the rising menace of artificial intelligence-generated misinformation and deepfakes, the Indian government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Issued by the Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, and set to take effect from February 20, 2026, these changes introduce stringent obligations on social media platforms and intermediaries to label synthetic content, expedite the removal of harmful AI-generated material, and enhance user awareness. The amendments target the misuse of AI tools for creating deceptive audio-visual content that could undermine privacy, incite violence, or spread falsehoods, reflecting growing global concerns over technology's dark side. While hailed by some as a balanced regulatory step, industry voices warn of added compliance pressures on digital platforms like X, Facebook, and Instagram. This development builds on earlier drafts from October 2025, dropping controversial watermarking proposals but reinforcing due diligence requirements under the Information Technology Act, 2000.

Regulatory Background

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, were originally introduced to foster accountability among online intermediaries, balancing free speech with the need to prevent illegal content dissemination. These rules, notified under Section 87 of the IT Act, 2000, defined intermediaries—such as social media companies and internet service providers—as entities shielded from liability for user-generated content provided they exercise "due diligence" by promptly removing unlawful material upon awareness.

The 2021 framework already mandated timelines for content takedowns (e.g., 36 hours for general unlawful content, 24 hours for non-consensual intimate imagery) and required grievance redressal mechanisms. However, the explosive growth of generative AI technologies, particularly deepfakes—synthetic media that convincingly alters videos or audio to depict false events or statements—exposed gaps. High-profile incidents, including AI-manipulated videos of public figures and non-consensual deepfake pornography, prompted MeitY to draft AI-specific rules in October 2025. That draft controversially proposed watermarking 10% of all online content, drawing sharp criticism from tech lobbies like Nasscom for being overly broad and technically challenging.

The final amendments, published in the Gazette of India on February 10, 2026, refine these proposals. They introduce a precise definition of "synthetically generated information" while aligning with broader legal frameworks like the Bharatiya Nyaya Sanhita, 2023 (replacing the Indian Penal Code), the Protection of Children from Sexual Offences (POCSO) Act, 2012, and the Explosive Substances Act, 1908. The 10-day compliance window from notification to enforcement underscores the urgency, giving platforms limited time to overhaul systems. This regulatory evolution mirrors international efforts, such as the EU's AI Act, but tailors them to India's digital ecosystem, where over 800 million internet users amplify the risks of AI-driven disinformation.

Key Provisions of the Amendments

The core of the 2026 amendments lies in expanding intermediary obligations to address AI-specific threats. A pivotal addition is the definition under Rule 2(1)(wa): “‘synthetically generated information’ means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.”

Exceptions carve out benign uses, such as routine editing for clarity or educational content, ensuring the rules target malicious deepfakes without stifling legitimate innovation. Under the new Rule 3(3) , intermediaries offering AI tools must deploy "reasonable and appropriate technical measures, including automated tools," to prevent users from generating content that violates laws—encompassing child sexual abuse material, false documents, explosive instructions, or deceptive portrayals of individuals.
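
To illustrate the kind of automated pre-generation gate Rule 3(3) contemplates, the minimal Python sketch below screens a generation prompt against a blocklist of prohibited categories. Everything here (the pattern list, the function name) is our own hypothetical shorthand; production systems would rely on trained classifiers and human review, not keyword matching.

```python
# Illustrative only: a real deployment would use trained classifiers,
# not substring matching. The categories loosely track the prohibited
# uses described in the amended Rules.
PROHIBITED_PATTERNS = (
    "child sexual abuse",
    "explosive instructions",
    "forged document",
)

def is_generation_allowed(prompt: str) -> bool:
    """Return False if the prompt matches an obviously prohibited category."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in PROHIBITED_PATTERNS)

print(is_generation_allowed("a watercolor landscape"))  # True
```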

Labeling is another cornerstone. Non-prohibited synthetic content must be "prominently labelled" for easy identification, embedded with permanent metadata and unique identifiers tracing back to the platform's resources. Significant social media intermediaries face additional pre-upload checks: users must declare whether content is synthetic, with platforms verifying and displaying warnings if confirmed, per Rule 4(1A). Knowing failure to act could deem them non-compliant with due diligence under Section 79(2) of the IT Act.
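
The Rules require permanent metadata and a unique identifier but prescribe no schema. As a sketch of one hypothetical shape such a provenance record could take, consider the following; every field name here is an assumption for illustration, not part of the notification.

```python
import uuid
from datetime import datetime, timezone

def build_synthetic_label(platform: str, tool: str) -> dict:
    """Build an illustrative provenance record for synthetic content.

    Field names are hypothetical; the amended Rules mandate permanent
    metadata and a unique identifier but do not fix a format.
    """
    return {
        "synthetic": True,                     # drives the visible label
        "label_text": "AI-generated content",  # prominently displayed to users
        "identifier": str(uuid.uuid4()),       # unique ID tracing to the platform
        "platform": platform,
        "generating_tool": tool,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

record = build_synthetic_label("ExamplePlatform", "example-image-model")
print(record["label_text"])  # AI-generated content
```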

Takedown timelines have been drastically shortened to reflect AI's speed. Rule 3(1)(d) now requires removal of unlawful content "within three hours" of a complaint or order, down from 36 hours. For non-consensual sexual imagery, including deepfakes, the window shrinks to two hours from 24, while general grievances must be resolved in seven days instead of 15. Complaints involving defamation or harassment get 36 hours for resolution, reduced from 72.
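
The tiered deadlines above reduce to simple arithmetic on the complaint timestamp. The Python sketch below computes the latest permissible removal time per category; the category keys and function name are our own, illustrative only.

```python
from datetime import datetime, timedelta, timezone

# Takedown windows under the amended Rules, as described above (in hours).
TAKEDOWN_WINDOWS = {
    "non_consensual_intimate_imagery": 2,  # down from 24 hours
    "unlawful_content": 3,                 # down from 36 hours
    "defamation_or_harassment": 36,        # down from 72 hours
    "general_grievance": 7 * 24,           # 7 days, down from 15
}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest permissible removal time for a complaint."""
    return received_at + timedelta(hours=TAKEDOWN_WINDOWS[category])

received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline("non_consensual_intimate_imagery", received))
# 2026-02-20 11:00:00+00:00
```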

User notifications are beefed up under Rule 3(1)(c), mandating quarterly alerts (every three months, rather than annually) about compliance consequences, potential penalties, and mandatory reporting of offenses under laws like the Bharatiya Nagarik Suraksha Sanhita, 2023, and the POCSO Act. Platforms must warn that violations could lead to account suspensions, content removals, and identity disclosures to victims or authorities.

Enforcement ties into criminal provisions, with non-compliance attracting penalties under the IT Act and the referenced statutes. The notification clarifies that proactive removals of synthetic content do not violate safe harbor protections, encouraging swift action.

Stakeholder Reactions and Rationale

While the government positions these rules as a "reasonable obligation" leveraging platforms' technological prowess, reactions are mixed. A senior official, speaking anonymously, emphasized that "platforms have demonstrated the capacity to act within minutes," citing automated filtering capabilities. The official dismissed censorship fears, noting that government takedown requests account for only 1-2% of total actions, with most content handled via community guidelines.

Industry bodies like Nasscom , through Vice-President Ashish Aggarwal, praised the removal of the 10% watermarking clause as a "victory for the industry," calling the rules "fairly balanced." Aggarwal highlighted that AI content generation happens in seconds, justifying tight timelines, and noted that filtering is largely automated, minimizing manual burdens. However, he urged platforms to assess implications, as all intermediaries now fall under the net.

Lawyers and experts express caution. Supratim Chakraborty of Khaitan & Co. critiqued the "convoluted" intermediary definition, arguing it blankets diverse AI businesses under one umbrella, complicating compliance. He advocated for separate categories for AI-focused entities to create a "progressive regime." Rutuja Pol from Ikigai Law flagged the 10-day rollout as forcing "overnight" workflow changes, perpetuating flaws in classifying all AI tools as intermediaries despite 2025 AI guidelines recognizing nuances.

Broader concerns include overreach: could mandatory labeling chill satire or artistic AI use? Enforcement challenges in a vast, multilingual digital space also loom, with platforms like Meta and Google yet to respond publicly. Queries to these companies went unanswered at publication, but past pushback on similar rules suggests potential legal challenges on free speech grounds, invoking Article 19(1)(a) of the Constitution.

Legal Analysis

These amendments operationalize Section 79 of the IT Act by evolving "due diligence" into proactive AI governance. Unlike the passive, awareness-based removals of the 2021 regime, intermediaries must now preempt violations through technical safeguards, echoing judicial interpretations in cases like Shreya Singhal v. Union of India (2015), where the Supreme Court struck down Section 66A for vagueness but upheld intermediary accountability if narrowly tailored.

The synthetic information definition draws from global standards, distinguishing deceptive deepfakes from harmless edits, akin to U.S. DEEP FAKES Accountability Act proposals. Relevance to referenced laws is clear: Deepfakes enabling identity misrepresentation could invoke Bharatiya Nyaya Sanhita provisions on forgery (Sections 336-342) or defamation (Section 356), while POCSO integrations target child exploitation.

Labeling and metadata requirements address provenance, ensuring that AI-generated fabrications cannot evade attribution or liability. This aligns with the IT Act's prohibition on false electronic records, extending it to synthetic media. Takedown accelerations prioritize harm prevention but risk errors in automated systems, potentially raising due process issues.

For significant social media intermediaries (platforms with over 5 million users), Rule 4(1A)'s verification duties could invite scrutiny if declarations are falsified, shifting some liability upstream. The proviso deeming knowing failures a breach of due diligence reinforces Christian Louboutin SAS v. Nakul Bajaj (Delhi HC, 2018), where a platform lost safe harbor for inaction.

No direct precedents are cited in the notification, but the framework implicitly builds on K.S. Puttaswamy v. Union of India (2017) for privacy protections against non-consensual deepfakes invading bodily privacy under Article 21. Distinctions are made: benign enhancements (e.g., color correction) escape regulation, unlike manipulative alterations, ensuring proportionality.

Key Observations

The notification's language underscores intent: "For the removal of doubts, it is hereby clarified that the removal of, or disabling of access to, any information, including synthetically generated information ... shall not amount to a violation of the conditions specified under clauses (a) or (b) of sub-section (2) of section 79 of the Act." This shields proactive platforms.

On labeling: "Every such information not covered under sub-clause (i) of clause (a) is prominently labelled in a manner that ensures prominent visibility in the visual display that is easily noticeable and adequately perceivable."

A government official noted: "Platforms have demonstrated the capacity to act within minutes—tech companies have very clever technical features and resources... The three-hour window is a reasonable obligation given their technological capabilities."

Industry expert Ashish Aggarwal observed: "The timeline revisions, while potentially subject to pushback... is in keeping with the idea that it only takes a few seconds or minutes for AI platforms to generate explicit or unlawful content."

Lawyer Supratim Chakraborty warned: "The definition and coverage of intermediary in India has become heavily convoluted and complicated."

These excerpts highlight the balance between enforcement and feasibility.

Implications and Future Outlook

The amendments culminate in a comprehensive regime: Intermediaries must integrate AI detection tools, update terms of service, and train grievance officers, with non-compliance risking safe harbor loss and criminal probes. Practically, this could reduce deepfake proliferation—vital ahead of elections under the Representation of the People Act, 1951 —but may strain smaller platforms, fostering a market favoring Big Tech with resources.

For users, quarterly warnings empower informed behavior, while victim disclosures aid remedies under laws like the Sexual Harassment Act, 2013 . Broader effects include bolstering digital trust, potentially inspiring state-level adaptations or international alignments.

Yet, challenges persist: technical feasibility in detecting subtle synthetics, appeals against erroneous takedowns, and balancing enforcement with fundamental rights. Future cases may test the rules' constitutionality, much like past intermediary battles. As India positions itself as an AI leader—aiming for a $1 trillion digital economy by 2026—these rules signal a maturing regulatory landscape, prioritizing safety without stifling innovation. Legal professionals should monitor compliance advisories, as the 10-day sprint to February 20 tests the ecosystem's resilience.

synthetic content - takedown timelines - intermediary liability - AI misuse - user notifications - labeling requirements - compliance burden

#ITRulesAmendment #DeepfakeRegulation
