Supreme Court Warns Lawyers on Fake Judgments and AI
In a stark reminder of the evolving challenges posed by artificial intelligence in the legal domain, a Supreme Court of India bench comprising Justices BV Nagarathna and Ujjal Bhuyan has underscored the imperative duty of lawyers to meticulously verify judgments before citing them in court. Dismissing a Special Leave Petition (SLP) that relied on fabricated or misrepresented judicial authorities, the court orally cautioned the Bar against complacency, particularly in an era where AI-generated "deep fakes" threaten the integrity of legal research. Justice Nagarathna's pointed query—"Is this artificial intelligence or natural intelligence?"—captured the bench's frustration, signaling a new frontier in professional responsibility.
This incident not only highlights a procedural lapse by petitioner's counsel but also amplifies broader concerns about the authenticity of legal sources amid rapid technological advancements. With High Courts now publishing judgments in the official Indian Law Reports (ILR) and the Supreme Court in the Supreme Court Reports (SCR), the bench emphasized reliance on these authenticated repositories over secondary articles or unverified websites.
Background: The Rise of AI in Legal Research and Its Pitfalls
The legal profession worldwide is grappling with the double-edged sword of generative AI tools like ChatGPT, Gemini, and specialized platforms such as Harvey AI. These tools promise efficiency in drafting, research, and analysis but have repeatedly produced hallucinatory outputs—fabricated case citations, altered quotes, or nonexistent precedents. In India, the integration of technology through e-Courts, NJDG (National Judicial Data Grid), and digital libraries has accelerated this shift, yet it has also opened doors to misuse.
This SLP episode is not isolated. Petitioner's counsel admitted to sourcing judgments from "articles on websites," bypassing primary verification—a common pitfall in fast-paced litigation. Globally, precedents abound: in the U.S., a New York federal court sanctioned lawyers in the 2023 case Mata v. Avianca for citing fake cases generated by ChatGPT. Canadian courts have issued similar warnings, and the UK Bar Standards Board has updated guidance on AI use. India's Supreme Court, alive to these trends, is now mandating proactive diligence, potentially reshaping how advocates approach case law.
The context is particularly resonant post-pandemic, with virtual hearings and AI assistants becoming staples. However, as Justice Nagarathna noted, this imposes an "additional burden on lawyers and judges," necessitating robust verification protocols to safeguard judicial proceedings.
Court Proceedings: Unmasking the Fake Citations
The matter came before Justices Nagarathna and Bhuyan during the hearing of the SLP. Respondent's counsel flagged anomalies: one cited judgment was entirely fictitious, while others existed but did not contain the quoted passages. This revelation prompted rigorous scrutiny from the bench.
When questioned, petitioner's counsel owned up to drafting the petition himself, relying on online articles rather than official texts. The court refused to let this slide, with Justice Bhuyan retorting firmly: “You should have cross verified. That is the duty of the lawyer.”
Justice Nagarathna elaborated, advising: “Don't go by articles, go to the real judgement and verify.” She highlighted the availability of standardized official citations—ILR for High Courts and SCR for the Supreme Court—as foolproof antidotes to such errors. After counsel tendered an unconditional apology, the bench closed the SLP but issued a general caution to all advocates present, reinforcing that lapses in verification undermine the court's trust in the Bar.
Present in court was Vikas Singh, President of the Supreme Court Bar Association (SCBA). Justice Nagarathna directly appealed to him: “What to do about this problem? You have a conference on this. This is an additional burden on lawyers and judges now.” This public directive underscores the judiciary's expectation of institutional response from the Bar.
Judicial Observations: AI vs. "Natural Intelligence"
The bench's remarks were laced with rhetorical flair, blending concern over technology with condemnation of human error. Justice Nagarathna's verbatim observation cut deep: “What is this? Is this artificial intelligence or natural intelligence? Artificial intelligence is a different thing but natural intelligence doing this we cannot condone. Because of artificial intelligence the lawyers and the judges, we have an additional duty to see whether it is a real or deep fake.”
This distinction—AI as a tool gone awry versus deliberate or negligent human folly—elevates the discourse beyond technology to core ethical tenets. It echoes Bar Council of India (BCI) Rules under the Advocates Act, 1961, particularly Rule 11 (duty to the court) mandating utmost good faith, and principles of competence under BCI's Standards of Professional Conduct. While no formal sanctions were imposed here (owing to the apology), future instances could invite costs, contempt proceedings, or disciplinary referrals.
Legal Analysis: Reinforcing the Duty of Due Diligence
At its heart, this ruling reaffirms the foundational principle of due diligence in advocacy. Indian jurisprudence has long imposed on lawyers a duty of candor toward the tribunal, implicit in Articles 19(1)(a) and 21 of the Constitution (fair trial rights) and explicit in judicial precedents like Rishi Raj & Ors. v. Union of India (on accurate pleadings). Citing fake authorities not only misleads the court but erodes public confidence in justice delivery.
The AI angle introduces novel dimensions. Unlike traditional research errors, AI "hallucinations" are probabilistic outputs lacking intent but demanding heightened skepticism. Courts may now expect advocates to disclose AI assistance (aligning with ABA Model Rule 1.1 on competence) and demonstrate verification steps. In India, this could spur amendments to BCI Rules or SCBA advisories, mandating affidavits of verification for citations.
Moreover, the bench's nod to official reporters institutionalizes best practices: Platforms like SCC Online, Manupatra, and the SCI website offer searchable, authenticated texts. Lawyers ignoring these for blogs or summaries risk professional repercussions, potentially under Section 35 of the Advocates Act (professional misconduct).
Broader Implications for Legal Practice and the Justice System
This pronouncement reverberates across the Bar. Junior advocates, often reliant on quick web searches, must pivot to rigorous protocols: cross-checking via multiple official sources, using Ctrl+F for quotes, and peer reviews for complex matters. Firms may invest in AI-detection tools like Originality.ai or blockchain-verified case law databases.
For judges, the "additional duty" means preliminary scrutiny of citations, possibly delaying proceedings but enhancing accuracy. The SCBA's role is pivotal—President Vikas Singh's conference could yield guidelines, training modules, or collaborations with tech firms for AI-proof research.
Institutionally, it bolsters India's digital justice infrastructure. With over 5 crore cases pending (NJDG data), efficient yet reliable research is key. This could accelerate adoption of unified digital repositories, reducing reliance on disparate sources.
Globally, it positions India as a leader: while U.S. sanctions were punitive, the Supreme Court's approach is educative, fostering a culture of verification without stifling innovation.
Comparative Context and Emerging Global Standards
Internationally, responses vary. The U.S. Federal Judiciary's 2023 advisory mandates human verification of AI outputs; Australia's courts require disclosure. In the EU, GDPR-aligned AI regulations emphasize transparency. India's organic judicial intervention—sans legislation—mirrors its common law roots, potentially influencing Commonwealth jurisdictions.
Similar Indian incidents, though underreported, include High Court rebukes for mis-citations. This SC stance may standardize responses, with cascading effects on moot courts, law schools, and CLE programs emphasizing AI ethics.
Recommendations for the Bar
To mitigate risks:
- Adopt Verification Checklists: Cite only official reporters; verify quotes verbatim.
- AI Literacy Training: SCBA/BCI webinars on prompt engineering and hallucination detection.
- Tech Solutions: Integrate APIs from SCI for real-time authenticity checks.
- Ethical Guidelines: Update BCI rules for AI disclosure.
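The verbatim-quote check in the checklist above can be automated in a few lines. The sketch below is purely illustrative (the function names and sample text are hypothetical); it assumes the official judgment text has already been obtained from an authenticated source such as the SCR, ILR, or the SCI website, and simply automates the manual "Ctrl+F" comparison.

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace, unify curly quotes, and lowercase,
    so formatting differences don't mask a genuine verbatim match."""
    text = text.replace("\u2018", "'").replace("\u2019", "'")
    text = text.replace("\u201c", '"').replace("\u201d", '"')
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears_verbatim(judgment_text: str, quoted_passage: str) -> bool:
    """Return True only if the quoted passage occurs word for word
    in the official judgment text (the Ctrl+F check, automated)."""
    return normalize(quoted_passage) in normalize(judgment_text)

# Hypothetical usage: judgment_text would be the authenticated official copy.
official_text = "The duty of counsel is to verify every authority cited before the Court."
print(quote_appears_verbatim(official_text, "duty of counsel is to verify"))   # True
print(quote_appears_verbatim(official_text, "counsel may rely on summaries"))  # False
```

A check like this catches altered or invented quotations, but not a wholly fictitious judgment; confirming that the case itself exists still requires looking it up in the official reporter.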
Law firms should audit research workflows, prioritizing quality over speed.
Conclusion: Navigating the AI Frontier with Vigilance
The Supreme Court's admonition in this SLP is a clarion call: Technology augments, but does not supplant, human judgment. By mandating cross-verification, Justices Nagarathna and Bhuyan have fortified the Bar's ethical ramparts against AI perils. As Vikas Singh and the SCBA deliberate, the legal fraternity must embrace this "additional duty" not as a burden, but as a badge of professionalism. In an age of deep fakes, authenticity remains the true hallmark of justice.
This episode ensures that "natural intelligence" prevails, upholding the sanctity of citations and the rule of law.