Don't Become Artificial Lawyers: Judges Warn on AI Misuse in Courts
In a pointed admonition to the legal fraternity, judges from India, echoing sentiments from their UK counterparts, have issued a stern warning: the judiciary must not surrender its core functions to artificial intelligence. Justice Hakesh Manuja articulated this balanced yet cautious stance, emphasizing that while AI holds transformative potential for judicial efficiency, it cannot replace human judgment.
"Don't become an 'artificial lawyer',"
the judges implore, highlighting risks of over-reliance amid growing AI adoption in courtrooms worldwide.
This call comes at a pivotal moment for India's overburdened judiciary, where over 4.4 crore cases are pending across courts, creating urgent demand for technological aids. Justice Manuja's remarks, delivered at a recent judicial forum, underscore a nuanced approach: embrace AI as a supportive tool, but with rigorous safeguards to preserve justice's human essence.
Justice Manuja's Balanced Perspective
At the heart of these warnings are Justice Hakesh Manuja's forthright views. Speaking on AI's role, he stated unequivocally:
"It is definitely going to help the judicial system … but till the point we don’t hand over the … itself to the AI."
This delineates a clear boundary—AI as assistant, not arbiter.
Manuja highlighted practical applications, noting AI's utility for the government, often the nation's largest litigator. Predictive analytics could forecast appeal outcomes, streamlining litigation strategies. He specifically referenced an "Indian bail predictive system" where case inputs yield probabilities of bail success:
"There is a platform. Indian bail predictive system has also been developed, where you can give an … and it can predict the success of bail,"
Justice Manuja said.
Such tools exemplify AI's promise in tackling India's case backlog, but Manuja was quick to temper enthusiasm with pragmatism.
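The "predictive" mechanics behind such platforms can be pictured as a scoring model that converts case features into a probability. The sketch below is a toy illustration only: the feature names, weights, and logistic form are invented for demonstration and do not describe the actual Indian platform.

```python
import math

# Hypothetical feature weights: more priors and graver charges lower the
# odds of bail; prolonged undertrial detention raises them. All values
# are invented for illustration.
WEIGHTS = {
    "prior_convictions": -0.8,
    "offence_severity": -0.5,
    "months_in_custody": 0.3,
}
BIAS = 1.0

def bail_probability(case: dict) -> float:
    """Map case features to a probability in (0, 1) via the logistic function."""
    score = BIAS + sum(WEIGHTS[k] * case.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

p = bail_probability(
    {"prior_convictions": 1, "offence_severity": 2, "months_in_custody": 6}
)
print(f"Estimated bail-success probability: {p:.2f}")
```

Even this toy model shows why Manuja's caution matters: the output is only as fair as the chosen features and weights, which is precisely where unverified training data can smuggle in bias.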
Highlighted Benefits and Real-World Examples
Justice Manuja pointed to innovations already in play. Judges in some courts abroad, for instance, utilize AI to generate summaries of pleadings and entire case files.
"Now there we are lagging behind. We are not having that kind of facility. We have to work a lot on this, because if the pleadings are shortened, we are provided with the summary of the pleadings. We are provided with the summary of the complete file. Of course, it will have to be verified or cross checked at some level, but that will help a judge, that will give him more time to record judgment, to give reasons, to give conclusions,"
he elaborated.
This aligns with India's national AI push in the judiciary. A national portal for assistance by AI aids legal research, while a companion tool translates judgments into regional languages. Bail prediction platforms, though nascent, draw on data analytics akin to public bail algorithms used in U.S. jurisdictions. These could reduce pre-trial detention disparities, a pressing criminal justice reform issue.
For legal professionals, AI summaries mean faster case prep; for judges, more time for reasoned orders—potentially cutting disposal times amid mounting arrears.
The Imperative for Caution and Verification
Yet, optimism is laced with peril. Justice Manuja issued a clarion call:
"We have to be cautious, verify information. We have to be doubly sure before putting it in pleadings or arguments before Court."
This resonates amid reports of AI "hallucinations"—fabricated citations plaguing U.S. lawyer briefs, leading to sanctions.
In India, unverified AI outputs risk perpetuating biases in training data, disproportionately affecting marginalized litigants. Predictive tools, if unchecked, could entrench systemic inequities, echoing COMPAS algorithm critiques in U.S. sentencing.
Training: Starting with the Bench
A novel emphasis: prioritize judges over juniors for AI literacy.
"My opinion is rather than starting with youngsters, it should start with us only,"
Manuja asserted. This inverts typical tech-upskilling, recognizing judges' gatekeeping role.
Much of the country lags Kerala, but nascent AI workshops signal momentum.
Legal educators must follow suit, integrating AI ethics into bar exams and CLE programs.
India's AI Judicial Landscape
India's e-Courts project (Phase III, ₹7,000 crore allocation) embeds AI deeply. Beyond these national tools, district courts are piloting virtual assistants. The IT Ministry's 2023 AI framework mandates judicial oversight for high-risk applications, and adjudication qualifies.
The headline's nod to UK parallels is apt: British judges, via the Judicial AI Inventory, test tools like Pegasus for transcription but ban generative AI in judgments without review, mirroring Manuja's caution.
Global Parallels and Lessons
Globally, AI-judiciary tensions mount. In the U.S., ABA Formal Opinion 512 urges lawyer verification of AI outputs. Canada's law society guidelines demand disclosure of AI use. In the EU, the AI Act classifies judicial AI as "high-risk," requiring transparency.
The UK's Lord Chancellor warned in 2023 against AI drafting judgments, aligning with Manuja's view that human accountability is paramount. These convergences suggest an emerging body of AI-specific judicial norms.
Legal and Ethical Implications
Core question: who is liable for AI errors? Under evidence law, expert opinions need scrutiny, and AI output lacks "opinion" status absent validation. Tortious negligence could snag lawyers submitting unchecked AI pleadings.
Ethically, AI erodes the adversarial process if parties gain unequal access. Should bias audits be mandatory? Predictive tools risk self-fulfilling prophecies: low bail odds deter applications.
Constitutionally, the guarantee of fair procedure demands human deliberation; the opacity of AI "black boxes" threatens it.
Impacts on Legal Practice
For advocates: AI drafts motions faster, but verification initially doubles the workload. Firms investing in tools like Harvey.ai gain an edge; solo practitioners risk malpractice exposure.
For judges: more time for complex reasoning and backlog reduction (target: 90-day disposal). For government litigators: predictive insights could trim fiscal losses (₹1 lakh crore annual litigation cost).
Litigants benefit from efficiency, but digital divide exacerbates—rural advocates lack AI access.
Looking Ahead: Responsible AI Integration
Justice Manuja's vision of AI as accelerator, not usurper, charts a prudent path. Policymakers must fund verification protocols, standardized training, and bias-mitigated datasets. Collaborative forums such as an India-Judicial AI Alliance could harmonize adoption.
Ultimately, these warnings safeguard justice's soul. As AI evolves, legal professionals must wield it judiciously, ensuring technology serves, not supplants, the human judge. The message is clear: innovate boldly, but verify relentlessly.