Delhi High Court Shields Tharoor from Deepfake Abuse
In a significant ruling for the digital age, the Delhi High Court has indicated it will pass interim orders to remove AI-generated deepfake videos falsely depicting Congress MP Shashi Tharoor praising Pakistan's diplomatic prowess. Justice Mini Pushkarna, hearing Tharoor's urgent petition, issued summons to the social media platforms concerned as well as the government respondents, signaling a robust judicial stance on personality rights amid surging AI misuse. The case underscores the growing threat of deepfakes to public figures, particularly during election cycles, and highlights the court's readiness to wield injunctive relief against fabricated content that could erode national discourse and diplomatic standing.
Tharoor, a prominent parliamentarian and former Minister of External Affairs, argued that the videos not only tarnish his personal reputation but also pose risks to India's international image, potentially exploitable by foreign actors. This development comes as Indian courts grapple with an influx of similar pleas from celebrities and politicians, positioning the Delhi High Court at the forefront of battles against unchecked AI technologies.
Background of the Petition
The controversy erupted with the circulation of sophisticated AI-generated videos during the recent Kerala election campaign. These deepfakes portrayed Tharoor lauding Pakistan's "absolute brilliance" in diplomacy and claiming that
"Pakistan is faring much better diplomatically than India."
Fabricated with uncanny realism, the clips cloned not just Tharoor's facial features and voice but also his distinctive oratorial cadence, manner of speaking, and refined vocabulary—hallmarks of his public persona.
According to the petition, these videos were strategically deployed to damage Tharoor's public image, mislead political observers, journalists, and the electorate, and sway public opinion. Tharoor emphasized that despite prior complaints to authorities, the content proliferated relentlessly online, resurfacing like
"the ten heads of Ravana,"
as his counsel, Senior Advocate Sibal, put it.
The plea sought comprehensive restraint on the
"misappropriation of his name, likeness, image, voice, signature, oratorial cadence, manner of speaking and highly refined vocabulary"
for any deepfake creation. It framed the issue as a direct assault on his personality rights, invoking equitable remedies to prevent irreparable harm.
Court Proceedings and Indications of Relief
The matter first came up before Justice Pushkarna on Thursday and was listed for an early further hearing, reflecting the court's urgency. During the proceedings, the bench heard Tharoor's submissions and promptly issued summons to the respondents, directing replies within four weeks. Critically, the court orally indicated: “(Blocking) orders will be passed,” assuring interim protection.
Counsel for one of the platforms informed the court that specific Instagram links flagged by Tharoor had already been taken down. Undeterred, Justice Pushkarna affirmed that formal orders for content removal would still be issued, underscoring proactive judicial oversight. The hearing highlighted the platforms' roles as gatekeepers under India's intermediary liability framework, potentially invoking obligations under the Information Technology Act, 2000 and the IT Rules, 2021.
Tharoor's Compelling Arguments
Tharoor's advocacy was poignant, blending personal grievance with national interest.
"I am a former external affairs minister. It matters to India's standing as well... It is liable to be misused by foreign states,"
he stated in court. Elaborating in the petition: “They have misappropriated my personality and created these videos praising another country to my detriment. I have been the External Affairs Minister. It matters to India’s standing as well.”
Senior Advocate Sibal reinforced this, noting the futile cycle of complaints: “We have complained to the authorities, but these deepfakes keep coming back like ten heads of Ravana.” These statements elevated the case beyond individual redress, framing deepfakes as vectors for geopolitical misinformation.
The Deepfake Menace at the Core
Deepfakes leverage generative AI to synthesize hyper-realistic media, often evading casual detection. In this instance, the videos' fidelity—mimicking Tharoor's eloquence—amplified their deceptive potential, fooling even informed viewers. Circulated on platforms like Instagram and X, they exploited algorithmic amplification during a high-stakes election, illustrating how AI can weaponize personality for political sabotage.
Broader Context: AI Cases on the Rise
This petition is no outlier. The Delhi High Court has emerged as a vanguard in protecting personality rights against AI encroachments. Last month, it granted actor Allu Arjun interim relief against AI tools, fake voice generators, chatbot profiles, and sexually explicit deepfakes using his identity without consent. The order mandated intermediaries to remove infringing content upon notification.
Precedents abound: Anil Kapoor secured safeguards against voice cloning and morphed imagery, while Amitabh Bachchan contested unauthorized use of his name and likeness. These cases trace the evolution of personality rights in India, from privacy roots in Article 21 (right to life and dignity) to a robust right of publicity, recognized in judgments like R. Rajagopal v. State of Tamil Nadu (1994) and Justice K.S. Puttaswamy v. Union of India (2017).
Legal Analysis: Personality Rights in the AI Era
Under Indian jurisprudence, personality rights are not statutorily codified but are firmly entrenched in common law, equity, and constitutional protections. Courts grant injunctions to prevent the misappropriation of one's persona and distinctive attributes. Here, Tharoor's claim aligns with the misappropriation doctrine: unauthorized commercial or defamatory exploitation of a person's identifying characteristics.
Remedies include ex parte ad-interim stays, John Doe orders against unknown infringers, and dynamic injunctions covering evolving URLs, tools increasingly deployed against digital fugitives. Intermediaries risk losing safe-harbour protection under Section 79 of the IT Act if they fail to exercise due diligence, especially after the IT Rules, 2021 mandated traceability and misinformation curbs.
Globally, parallels exist: the U.S. recognizes a right of publicity (e.g., California's statutory celebrity protections), while the EU's AI Act imposes transparency and labeling obligations on deepfakes. India's judiciary is bridging this gap, but Sibal's "Ravana" analogy spotlights enforcement hurdles: AI's hydra-like regeneration demands tech-savvy policing.
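As one illustration of what such tech-savvy policing can look like in practice, below is a minimal, hypothetical sketch of content fingerprinting with an average hash (aHash), a common technique for flagging re-uploads of known infringing clips even after re-encoding. The frame data, threshold, and helper names here are illustrative assumptions, not any platform's actual takedown API.

```python
# Hypothetical sketch: fingerprint a flagged video frame with an average
# hash (aHash) so near-identical re-uploads can be detected automatically.

def average_hash(pixels):
    """Compute a 64-bit perceptual hash from an 8x8 grayscale frame.

    `pixels` is a flat list of 64 brightness values (0-255). Each bit is 1
    if the pixel is brighter than the frame's mean brightness, else 0.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(a, b):
    """Count differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_reupload(known_hash, candidate_hash, threshold=5):
    # A small Hamming distance means the frames are visually near-identical,
    # so the candidate likely reuses the flagged clip despite re-encoding.
    return hamming_distance(known_hash, candidate_hash) <= threshold

# Usage: hash a frame from a flagged clip, then test a mildly re-encoded
# copy (brightness shift) and an unrelated frame. Toy data for illustration.
flagged = [10 * (i % 16) for i in range(64)]
reencoded = [min(255, p + 3) for p in flagged]
unrelated = [255 - 10 * (i % 16) for i in range(64)]

h1 = average_hash(flagged)
print(is_reupload(h1, average_hash(reencoded)))  # True: near-duplicate
print(is_reupload(h1, average_hash(unrelated)))  # False: different content
```

Real deployments layer sturdier signals (video-level hashes, audio fingerprints, ML deepfake detectors) on top of this idea, but the principle is the same: once a court-ordered takedown identifies a clip, its fingerprint can catch the "Ravana heads" as they resurface.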
Implications for Legal Practice and the Justice System
For IP litigators, this heralds a boom in deepfake suits, necessitating expertise in forensic AI analysis and platform subpoenas. Public figures must now proactively monitor digital footprints, leveraging automated takedown tools. Social media counsel will refine compliance protocols, anticipating stricter judicial timelines.
Broader impacts ripple into policy: the case amplifies calls for deepfake-specific legislation, perhaps amending the IT Act to mandate watermarking or AI disclosure. During elections, it spotlights risks to democratic integrity, urging the Election Commission of India to integrate deepfake guidelines. Nationally, Tharoor's foreign policy angle invokes cybersecurity imperatives, potentially spurring regulatory interventions.
Practitioners should note the Delhi HC's pattern: swift interim relief favors petitioners with clear prima facie cases of identity misappropriation, lowering thresholds for political personalities.
Conclusion: A Judicial Firewall Against AI Deception
The Delhi High Court's proactive shield for Shashi Tharoor marks a pivotal assertion of personality rights in India's AI-infused landscape. By targeting deepfakes' pernicious spread, Justice Pushkarna's bench not only safeguards individual dignity but fortifies public discourse against manipulation. As similar disputes proliferate, this ruling sets a blueprint for balancing innovation with accountability, urging lawmakers to codify protections before deepfakes erode trust further. Legal professionals must adapt, ensuring equity tempers technology's unchecked march.