Supreme Court Flags Alarming AI Drafting Trend

In a stark warning to the legal fraternity, a Supreme Court of India bench led by Chief Justice Surya Kant has highlighted an "alarming" surge in lawyers employing artificial intelligence (AI) tools to draft petitions, resulting in fabricated case citations and nonexistent judicial extracts. During a public interest litigation (PIL) hearing, Justices BV Nagarathna and Joymalya Bagchi joined the Chief Justice in expressing grave concerns over AI-generated "hallucinations" infiltrating court filings, underscoring the urgent need for rigorous verification to safeguard judicial integrity.

This development, observed on February 17, 2026, amid a case seeking guidelines on political speeches, reveals deepening anxieties within India's apex court about technology's double-edged impact on legal practice. As AI tools proliferate, the bench cautioned that unverified outputs not only mislead proceedings but also impose undue burdens on judges tasked with sifting fact from fiction.

Context of the Hearing

The remarks emerged during the hearing of a PIL filed by academician Roop Rekha Verma, petitioning for regulatory guidelines on inflammatory political rhetoric. While critiquing the petition's hasty drafting, the bench—comprising Chief Justice Surya Kant, Justice BV Nagarathna, and Justice Joymalya Bagchi—veered into a broader critique of emerging practices in legal drafting.

Chief Justice Kant set the tone, stating verbatim: “We have been alarmingly told that some lawyers have started using AI for drafting.” This observation was not isolated; it reflected a pattern the court had noted repeatedly, where pleadings bore hallmarks of AI generation—lengthy, unoriginal compilations of precedents riddled with errors—without human oversight.

The bench lamented the petition's quality, remarking, "We are alarmed to reflect that some lawyers have started using AI to draft petitions. It is absolutely uncalled for." This candid intervention signals the judiciary's growing impatience with technological shortcuts that compromise professionalism.

Stark Examples of AI 'Hallucinations'

Justice BV Nagarathna provided concrete illustrations of the perils, drawing from her own courtroom experiences. "There was a case of Mercy vs Mankind which does not even exist," she noted, highlighting how AI confidently invents entire precedents. This fictitious citation exemplifies "AI hallucinations"—a well-documented phenomenon where generative models fabricate plausible but false information to fulfill output demands.

The issue extends beyond phantom cases. Justice Nagarathna elaborated: “Then some are citing real supreme court cases, but those quoted portions do not even exist in the judgment.” Such fabricated excerpts from genuine rulings complicate verification, as judges must cross-check against authorized reports like SCR or SCC Online, exacerbating docket pressures.

Chief Justice Kant referenced a parallel incident before Justice Dipankar Datta's bench: “All precedents cited in the petition never existed.” In that matter, an entire series of citations proved illusory, forcing the court to navigate a minefield of misinformation.

These examples are not anomalies. Multiple sources report recurrent instances where AI-drafted special leave petitions (SLPs) feature voluminous, unattributed quotes with scant original analysis, deviating from the precision exemplified by stalwarts like Ashoke Kumar Sen.

Judicial Frustrations and Broader Concerns

The bench's dismay transcended specific errors, touching on systemic vulnerabilities. Justice Nagarathna emphasized the practical fallout: "Attribution of fake quotes... makes verification a major challenge and puts additional burden on the part of the judges." In a precedent-driven system like India's, where stare decisis reigns, such lapses erode foundational trust.

Justice Joymalya Bagchi mourned the "decline in the art of legal drafting," observing that modern SLPs often devolve into "lengthy quotations from prior judgments, with little articulation of legal grounds." He contrasted this with historical benchmarks, invoking Sen's era of concise, accurate advocacy. The CJI echoed this, linking hasty AI reliance to a broader erosion of advocacy standards.

These concerns align with prior judicial admonitions. Last year, the Supreme Court and high courts encountered similar AI artifacts in pleadings and even orders, prompting repeated directives for accuracy against official records.

Echoes in High Courts and Precedents

The problem is not confined to the apex court. The Bombay High Court recently imposed a fine on a petitioner for an AI-generated incorrect citation, signaling zero tolerance at intermediate levels. This mirrors global trends, though India's context—marked by voluminous filings and resource constraints—amplifies risks.

Judges have consistently stressed that while AI aids research and case management, ultimate responsibility rests with counsel. Courts mandate checking every citation against primary sources before filing, a duty rooted in the Advocates Act, 1961, and Bar Council rules on professional misconduct.

Legal and Ethical Implications

For legal professionals, this episode invokes core ethical tenets under Chapter II, Part VI of the Bar Council of India Rules: diligence, candor toward the tribunal, and avoidance of deceit. Submitting unverified AI outputs risks contempt proceedings or disciplinary action, potentially violating Order XVIII Rule 3 CPC or SLP formatting norms.

AI hallucinations stem from models trained on vast but imperfect datasets, prioritizing fluency over fidelity. In law, where precision is paramount, this mismatch poses existential threats. Legal scholars argue for hybrid workflows: AI for initial drafts, human vetting for finals, augmented by tools like Westlaw's AI verifiers.
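The vetting half of such a hybrid workflow can also be partly mechanised for the second failure mode the bench described, where quoted portions do not exist in a genuine judgment. The minimal sketch below checks whether each quoted passage appears verbatim (after whitespace and case normalisation) in the judgment text; the judgment and quotes here are invented purely for illustration.

```python
def unverified_quotes(quotes: list[str], judgment_text: str) -> list[str]:
    """Return quoted passages that do not appear, after whitespace and
    case normalisation, anywhere in the judgment text."""
    def normalise(s: str) -> str:
        return " ".join(s.split()).lower()
    body = normalise(judgment_text)
    return [q for q in quotes if normalise(q) not in body]

# Both the judgment text and the quotes are invented for illustration;
# a real workflow would load the authorised report of the judgment.
judgment = "The right to life under Article 21 includes the right to dignity."
quotes = [
    "the right to life under Article 21",
    "liberty is the soul of the law",
]
print(unverified_quotes(quotes, judgment))  # the fabricated quote is flagged
```

Exact substring matching is deliberately strict: a quote that fails the check may still be a legitimate paraphrase, so flagged passages go to human review rather than automatic rejection.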

The bench's intervention may catalyze formal guidelines, akin to the US federal judiciary's standing orders on AI disclosure (e.g., Northern District of Texas). In India, it could spur e-Committee initiatives for AI literacy and verification protocols, ensuring tech enhances rather than undermines justice.

Impact on Legal Practice and Justice Delivery

The ramifications ripple across the bar and bench. Lawyers face heightened scrutiny, with courts likely to demand affidavits attesting to source authenticity. Junior advocates, reliant on AI for efficiency, must prioritize training in manual research to hone drafting acumen.

Judges, already overburdened by the Supreme Court's pendency of roughly 80,000 cases, endure amplified workloads verifying dubious claims, delaying justice in a system averaging 3-5 years per case. This erodes public confidence, particularly in PILs shaping policy.

Chambers risk obsolescence without AI integration, yet unchecked adoption invites sanctions. Firms may invest in compliant tools, fostering a premium on verified expertise.

Recommendations and Future Outlook

To mitigate risks, the SCI could issue practice directions mandating AI disclosure in filings, coupled with Bar Council CLE modules on ethical AI use. Tech solutions like blockchain-verified citations or AI detectors (e.g., OpenAI's classifiers) offer promise.

Internationally, the ABA's Formal Opinion 512 endorses AI with safeguards; India might adapt similarly. The e-Committee's ongoing digital push positions it to lead.

Conclusion

The Supreme Court's unflinching critique marks a pivotal moment: AI's promise in democratizing legal access must not compromise veracity. As Chief Justice Kant's alarm resonates, the bar must reclaim diligence, ensuring technology serves justice rather than subverting it. In an era of rapid innovation, balanced adoption that is verified, ethical, and human-centered will define the profession's future.