Supreme Court Flags "Alarming" AI Fake Citations Trend

In an extraordinary convergence of global ambition and domestic caution, India's Supreme Court issued a stark warning about the misuse of artificial intelligence (AI) in legal practice on the very day 88 countries gathered outside its doors to endorse a pledge for "trusted and human-centric" AI. A bench comprising Chief Justice Surya Kant, Justice BV Nagarathna, and Justice Joymalya Bagchi flagged what the Chief Justice described as an "alarming" trend of lawyers filing petitions drafted with AI tools that cite wholly non-existent judgments. Justice Nagarathna highlighted a petition invoking a fictional case titled "Mercy vs Mankind" as a binding authority, while CJI Kant noted that in Justice Dipankar Datta's court, "not one but a series of such judgments were cited", all fabricated. This revelation, coinciding with the India AI Impact Summit's New Delhi Declaration, underscores a critical governance gap as India positions itself as a leader in global AI regulation.

The incident is not merely anecdotal; it represents the first major confrontation in India's apex court with the pitfalls of agentic AI (systems capable of autonomous action, such as generating legal arguments) in the legal profession. Judges lamented that even citations to real judgments often featured invented passages, imposing "an additional burden on the part of the judges", who must now verify basic references before addressing substantive merits. This development demands immediate attention from legal professionals, bar associations, and policymakers.

The Bench's Stark Warning in the Supreme Court

The remarks emerged during hearings in the Supreme Court of India (SCI), where the bench was hearing unrelated matters but took the opportunity to confront a growing menace. Chief Justice Surya Kant, leading the bench, labeled the practice an "alarming" trend, emphasizing that AI-drafted petitions were infiltrating the highest court with fabrications that undermine foundational trust in judicial proceedings. Justice BV Nagarathna provided a concrete example: a petition that "placed before the Court a case titled 'Mercy vs Mankind' as a binding authority". No such judgment exists in Indian legal databases or records, exposing a hallucination typical of generative AI models such as ChatGPT and similar tools.

CJI Kant extended the critique, revealing that "in Justice Dipankar Datta's court, 'not one but a series of such judgments were cited', all fabricated". Justice Joymalya Bagchi added that the problem extends beyond phantom cases: even authentic citations are marred by fabricated quotes, forcing judges into a preliminary verification exercise. This "additional burden," as Justice Nagarathna phrased it, diverts precious judicial resources from merits adjudication to fact-checking basics, a role traditionally reserved for advocates bound by duties of candor.

This is not hyperbole. Under Section 35 of the Advocates Act, 1961, lawyers face disciplinary action for professional misconduct, including misleading the court. Rule 12 of the Bar Council of India Rules mandates scrupulous accuracy in citations. The SCI's intervention signals that AI misuse could trigger contempt proceedings under the Contempt of Courts Act, 1971, especially if deliberate.

Echoes from Justice Datta's Court and Broader Patterns

Justice Dipankar Datta's courtroom has become a flashpoint, with multiple instances of serial fabrications. These are not isolated errors but symptoms of over-reliance on unregulated AI drafting tools. Legal practitioners, under pressure from caseloads exceeding 50 million pending cases nationwide (per the National Judicial Data Grid), are turning to AI for efficiency. However, tools trained on vast but imperfect datasets produce hallucinations: confident yet false outputs, a known flaw documented in studies such as Stanford's HELM benchmark.

The bench's concerns mirror global precedents. In the U.S., the 2023 case Mata v. Avianca saw New York lawyers and their firm sanctioned with a joint $5,000 penalty for citing six non-existent cases generated by ChatGPT. Similar incidents in Canada and the UK have prompted judge-led advisories. In India, the SCI's proactive flagging elevates this to a systemic alert.

The India AI Impact Summit: A Global Pledge Nearby

Just outside the courtroom building, the India AI Impact Summit unfolded with fanfare. Hosted in New Delhi, it saw representatives from 88 countries endorse the New Delhi Declaration, committing to "inclusive, trusted and human-centric AI for all of humanity". India, through its AI Mission (a ₹10,000 crore allocation) and National Strategy for Responsible AI, announced ambitions to lead global governance, hosting the Global Partnership on AI (GPAI) and contributing to UN frameworks.

Key summit themes included ethical AI, bias mitigation, and regulatory sandboxes. Prime Minister Narendra Modi's government positioned India as a bridge between Global North tech giants and developing nations, advocating "AI for All." Yet, the declaration's emphasis on trustworthiness rang hollow amid the SCI's revelations mere meters away.

Collision of Ambition and Reality: An Instructive Juxtaposition

The simultaneity is not ironic; it is instructive, as observers note. India cannot credibly lead global AI norms while its own legal fraternity grapples with governance failures. Agentic AI's incursion into advocacy (drafting petitions, researching precedents) amplifies risks in high-stakes litigation. The SCI incident exposes a "first serious governance failure," challenging India's narrative of harmonious AI adoption.

This collision highlights a paradox: while the summit pledged safeguards, courts face immediate harms. It calls for bridging rhetoric with regulation, integrating lessons from the legal sector into national policy.

Ethical and Professional Ramifications

Legally, lawyers owe a duty of candor under common law principles and the SCI's own jurisprudence on misleading affidavits. AI-generated content, if unverified, breaches this duty. Ethically, the Bar Council of India (BCI) must update its standards, perhaps by mandating AI disclosure, as proposed in U.S. ABA Formal Opinion 512 (2024).

Potential sanctions include fines, suspensions, or disbarment. Courts may evolve practices: requiring sworn verification of AI use or citations, or leveraging tools like AI detectors (e.g., OpenAI's classifier).

Reshaping Legal Practice in India

The impact on India's 1.7 million lawyers is profound. Junior advocates, reliant on AI for speed, face skill gaps in verification. Courts, already overburdened (the SCI alone handles 40,000+ cases yearly), risk docket delays. Proposals include:

- BCI Guidelines: mandatory AI training and disclosure rules.
- Judicial Protocols: pre-listing citation checks via the e-Courts platform.
- Tech Integration: the SCI's SUPACE (AI for research) with human oversight.

Law firms like Cyril Amarchand Mangaldas are piloting AI with audits, but small practices lag. This could spur a two-tier system unless democratized.

Lessons from Global Precedents

Internationally, responses vary. The EU's AI Act (2024) classifies AI used in the administration of justice as high-risk, demanding transparency. U.S. courts have issued standing orders (e.g., Judge Brantley Starr's directive in the Northern District of Texas requiring certification of AI use). Singapore's judiciary mandates AI disclosure. India can draw from these, perhaps piloting via the Indian Law Institute.

Comparatively, India's scale, with its vast bar and digitizing courts, positions it uniquely. The summit offers a platform to export "legal AI safeguards" globally.

Path Forward for AI Governance

To harness AI's promise (e.g., predictive analytics reducing case pendency), India must act. A joint BCI-SCI task force could draft rules by 2025, aligning with the New Delhi Declaration. Investments in AI literacy, via the National Judicial Academy, are essential. Policymakers should embed legal use cases in the India AI Mission.

Failure risks eroded public trust: If apex courts question filings, faith in justice wanes.

Conclusion

The SCI's warning amid the AI Summit is a clarion call. India stands at a crossroads: lead AI governance by confronting its shadows in the legal arena, or risk hypocrisy. Legal professionals must prioritize verification over velocity, embracing AI as a tool, not a crutch. By weaving these lessons into policy, India can truly pioneer trusted AI for humanity, starting in its own courtrooms.