
Regulation of AI in Judicial Proceedings

CJI Surya Kant Stresses Careful AI Use in Judiciary to Avoid Overpowering Decisions

2025-12-05

Subject: Technology Law - AI and Legal Practice


Supreme Today News Desk


In a recent Supreme Court hearing, Chief Justice of India (CJI) Surya Kant underscored the judiciary's cautious approach to artificial intelligence (AI), emphasizing that AI tools must assist rather than dominate judicial decision-making. During proceedings on a public interest litigation (PIL) seeking guidelines to regulate AI's use in courts, the bench allowed the petitioner to withdraw the plea while permitting submissions on the administrative side. This development highlights ongoing concerns about AI's integration into legal processes, particularly the risks of misuse such as fabricated precedents, amid the rapid evolution of technology in the legal sector.

The case, W.P.(C) No. 1041/2025, Kartikeya Rawal vs. Union of India, brought to light apprehensions over the "unregulated" deployment of generative AI in court proceedings. Represented by Senior Advocate Anupam Lal Das, the petitioner argued for comprehensive guidelines to prevent errors stemming from AI-generated content. CJI Surya Kant, leading a bench with Justice Joymalya Bagchi, firmly rebutted notions of unregulated use, stating, "There is no question of unregulated use by us. I, my brothers and sisters have spoken on this—that we are using it in a very careful manner." The hearing, held on December 5, reflects the judiciary's proactive stance on balancing technological innovation with the integrity of human judgment in legal adjudication.

Background: The Rise of AI in Indian Courts

The integration of AI into the judicial system is not a novel concept but has accelerated in recent years, driven by the need for efficiency in an overburdened legal framework. India's courts handle millions of cases annually, with backlogs exceeding 50 million as per recent National Judicial Data Grid reports. AI tools promise to streamline processes like legal research, case management, and document analysis, potentially reducing pendency and enhancing access to justice.

However, this technological shift is not without pitfalls. The petitioner's counsel highlighted instances where advocates cited AI-generated "fake precedents"—non-existent case laws that could mislead proceedings. Such errors underscore the double-edged nature of generative AI, which can produce convincing but inaccurate outputs if not verified. CJI Surya Kant acknowledged this, noting, "AI tools must have generated fake precedents because advocates appeared to have cited such fictitious case laws somewhere." He cautioned lawyers to remain vigilant, emphasizing that reliance on fabricated material contravenes professional ethics under the Advocates Act, 1961, and Bar Council rules.

The Supreme Court itself has been at the forefront of AI adoption. In 2023, it issued a white paper on AI's responsible use, outlining principles for ethical deployment. This document, referenced during the hearing, stresses transparency, accountability, and human oversight—core tenets echoed by the CJI. Meanwhile, the Kerala High Court recently introduced a policy for AI tools in its district judiciary, mandating guidelines for responsible usage. CJI Surya Kant, responding to the counsel's mention of this policy, remarked, "You are speaking as if we do not know what is happening in Kerala High Court," and revealed ongoing consultations with the high court's leadership. He explained that such policies necessitate broad stakeholder input, including judges, lawyers, and technologists, before implementation.

The bench's decision to dismiss the PIL as withdrawn—while allowing suggestions on the administrative side—signals a preference for internal, consultative mechanisms over adversarial litigation. The order stated: "Sr counsel seeks permission to withdraw the matter, and is permitted to withdraw this petition; however, the petitioner is allowed to submit the suggestions to us on the administrative side." This approach aligns with the Supreme Court's administrative role under Article 145 of the Constitution, enabling policy formulation without judicial overreach.

Key Highlights from the Hearing

The hearing on December 5 was brief but revealing, lasting mere minutes. Advocate Subhash Chandran, appearing for the petitioner in related submissions, pointed to errors in lower courts where orders cited phantom Supreme Court precedents. "Lower courts have passed orders citing Supreme Court precedents that don't even exist," he argued, attributing this to unchecked AI reliance.

CJI Surya Kant countered by affirming the judiciary's safeguards: "We use AI in a very conscious manner. We don’t want it to overpower judicial decision making." He highlighted judicial training programs that emphasize verification of citations, stating, "Judges must cross check. This is already part of judicial training. With time, both the Bar and the Bench will learn. That does not mean we should issue directions." This reflects a pragmatic view: AI as a tool for augmentation, not automation, preserving the Article 50 directive for an independent judiciary free from algorithmic bias.

The CJI's objection to the "unregulated use" label was pointed: "We don't want AI and machine learning to overpower the judicial decision-making process—many times we have highlighted." His remarks serve as a reminder that while AI can accelerate research—platforms like LexisNexis and indigenous tools like Lexlegis.ai offer rapid access to case laws—final decisions remain a human domain, rooted in constitutional principles of fairness and equity.

Legal Implications: Balancing Innovation and Integrity

For legal professionals, this hearing raises critical questions about AI's role in practice. Under the Indian Evidence Act, 1872, and principles of natural justice, any reliance on unverified sources could vitiate proceedings, potentially leading to appeals or mistrials. The emergence of "hallucinated" AI outputs—where tools invent facts—poses risks akin to perjury if not disclosed. Lawyers must now incorporate AI literacy into their due diligence, perhaps updating continuing legal education (CLE) mandates to include technology ethics.

From a broader perspective, the judiciary's cautious stance could influence regulatory frameworks. The government's impending AI legislation, teased as "AI For All," may draw from judicial insights, focusing on sector-specific guidelines. Internationally, parallels exist: the U.S. Federal Judiciary's AI principles and the EU's AI Act classify high-risk applications like judicial tools under strict scrutiny. India's approach, emphasizing administrative evolution, avoids hasty directives but risks uneven adoption across high courts.

Ethically, the Bar's responsibility intensifies. Rule 11 of the Bar Council of India Rules prohibits misleading the court, extending to AI-assisted advocacy. Firms adopting tools like CaseMine or Kira Systems for contract analysis must implement verification protocols to mitigate liability under tort law for negligence.

AI's Transformative Potential in Legal Practice

Despite the cautions, AI's benefits for the Indian legal ecosystem are undeniable. In document review and e-discovery, natural language processing (NLP) sifts through vast datasets, cutting costs by up to 50%—vital for pro bono and small-firm practitioners. Predictive analytics, analyzing historical data, aids in forecasting outcomes, informing settlement strategies under Section 89 of the Code of Civil Procedure, 1908.

Platforms like SpotDraft automate contract drafting, flagging risks via machine learning, while chatbots handle client queries under data protection norms of the Digital Personal Data Protection Act, 2023. For research, AI enhances precision: Lexlegis.ai, India's pioneering tool, contextualizes statutes like the Bharatiya Nyaya Sanhita, 2023, successor to the Indian Penal Code.

Yet, challenges persist. Bias in training data could perpetuate inequalities, contravening Article 14's equality guarantee. The Kerala High Court's policy addresses this by requiring audits and human vetoes, a model other courts might emulate. As CJI Surya Kant noted, evolution is key: "If you have suggestions, give them on the administrative side." This invites collaboration, potentially birthing national standards via the e-Committee of the Supreme Court.

Impacts on the Justice System and Legal Community

The withdrawal of the PIL does not end the discourse; it pivots it toward constructive input. For judges, it reinforces training imperatives—workshops on AI verification could become standard, akin to those under the National Judicial Academy. Advocates face a learning curve: ignoring AI risks obsolescence, but blind trust invites sanctions. Law schools must revise curricula, integrating tech law modules to prepare the next generation.

Broader societal impacts include accelerated justice delivery, aligning with Sustainable Development Goal 16 on access to justice. However, without guidelines, disparities may widen—tech-savvy urban courts versus resource-scarce rural ones. The CJI's vision of "conscious" use could foster equitable adoption, ensuring AI serves rather than supplants the human element in adjudication.

In conclusion, CJI Surya Kant's observations encapsulate a judiciary at a crossroads: embracing AI's efficiencies while safeguarding its soul. As submissions flow to the administrative side, legal stakeholders must engage actively. The message is clear—AI is a servant, not a sovereign, in the temple of justice. This balanced path promises a resilient legal future, where technology enhances, but never eclipses, the wisdom of the bench and bar.

#AIinLaw #JudicialTechnology #SupremeCourtAI
