
The role of humans: Integrating human judgment in court systems in the AI era

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 6 minute read


Using the judicial system is rarely anyone's first choice, so at the very least we should be able to rely on some human interaction to make the process more palatable, and to be secure in the knowledge that there will be a human somewhere in the loop.

Interacting with the court as a pro se litigant is inherently challenging, and it would be significantly more difficult without compassion. While the law is written in black and white, its applications often involve nuanced interpretations.

Relying on an entity that operates strictly within these binary constraints to make complex decisions affecting real lives underscores the necessity of human involvement in the legal system. Any technology integrated into the judicial process must therefore incorporate human oversight, ensuring that no matter how far AI integration goes, human participation remains central to respecting these nuances and upholding the integrity of legal proceedings.

The judicial process encompasses various roles, some of which could theoretically be replaced or supported by AI. That prospect becomes even more plausible with generative AI (GenAI), given its ability to complete, with little to no human assistance, tasks that people have traditionally performed.

While clerks, paralegals, and court administrators might employ AI to direct individuals to necessary information more promptly and consistently, it is conceivable that technology could be developed through which judges, juries, and mediators are replaced by algorithms capable of analyzing facts and rendering decisions. Further, case workers, forensic experts, and probation officers could use AI to standardize pretrial decision-making. Transcripts may also achieve higher accuracy if generated by AI rather than by traditional court reporters. Of course, all of this raises the question of whether the benefits outweigh the risks.


While each of these options presents certain advantages, it is crucial to maintain human involvement in the judicial process, explains Judge Scott Schlegel of Louisiana's Fifth Circuit Court of Appeal. “The practice of law isn’t simply about applying rules to facts,” he says. “Similar to the intricate political maneuvering depicted in Dune, it requires a nuanced understanding, careful navigation of precedent, and the ability to craft arguments that resonate with human experience. It necessitates what lawyers often refer to as the feel of a case — an intuitive grasp of the issues derived from years of experience and critical analysis.”

How many humans?

In the current economic and political climate, courts — like many other government departments and agencies — face the dilemma of providing greater service with fewer fiscal resources. Quite simply, staff numbers are declining while caseloads hold steady or, in some jurisdictions, increase. Enter the shiny new toy of GenAI, developed in private industry and seemingly ripe to help the public sector. As adoption begins, the first question is a compound one: What ought to be done? And how many people are necessary to do it properly?

The initial step is to ensure a fully staffed IT department and a budget sufficient to develop and implement an effective program within the court system. This means allocating resources to build technology that can function properly within government programs. A needs assessment of this sort will also give a clearer indication of how many personnel are required to implement the programs that could benefit each system.

Where in the loop?

The application of GenAI in the legal sector is demonstrated, for example, by Beagle+, a chatbot created by the People's Law School in Canada. The chatbot can answer basic legal questions, directing a person to the correct statute, rule, form, or other resource in seconds. Although the People's Law School is not a court system, its chatbot shows how the workings of the court system can be made easier for users to navigate. Human involvement in the development and testing of such GenAI systems helps ensure that AI supports legal processes while maintaining human oversight.

As the use of chatbots expands, their outputs require human review and verification. This may involve programming the bots to refer specific issues to humans for resolution, together with periodic human assessment of the chatbot's output. This places humans at the beginning of the loop, giving them control over the output, as sketched below.
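To make this concrete, here is a minimal sketch of that escalate-and-audit pattern in Python. Everything in it is an illustrative assumption rather than a feature of Beagle+ or any real court system: the topic list, the generate_answer callable standing in for a GenAI service, and the queue and log structures are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Topics that, by policy, a human should handle (hypothetical list).
ESCALATE_TOPICS = {"custody", "eviction", "criminal charge", "restraining order"}

@dataclass
class HumanInTheLoopBot:
    generate_answer: Callable[[str], str]            # stand-in for any GenAI call
    human_queue: list = field(default_factory=list)  # stand-in for a staff ticket queue
    audit_log: list = field(default_factory=list)    # sampled periodically by reviewers

    def respond(self, question: str) -> str:
        # Route sensitive questions to a person instead of answering them.
        if any(topic in question.lower() for topic in ESCALATE_TOPICS):
            self.human_queue.append(question)
            return "This question needs a person; a court staff member will follow up."
        answer = self.generate_answer(question)
        # Log every exchange so humans can spot-check outputs later.
        self.audit_log.append((question, answer))
        return answer

# Usage with a dummy model standing in for the real GenAI service:
bot = HumanInTheLoopBot(generate_answer=lambda q: f"See the self-help guide for: {q}")
print(bot.respond("Where do I file a small claims form?"))
print(bot.respond("My landlord filed an eviction notice. What do I do?"))
print(f"{len(bot.human_queue)} escalated, {len(bot.audit_log)} logged for review")
```

The design choice worth noting is that escalation happens before generation: flagged questions never reach the model at all, so a human decision is guaranteed rather than merely requested.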

In other instances, AI can serve as a research or drafting aid. In these situations, the AI acts like a paralegal or first-year associate, meaning the human in the loop should be a more experienced, well-trained individual. Humans serve as intermediaries in this process, and it is crucial that they not become complacent about the work the system performs; they must always apply their own expertise in the justice system.
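One structural way to enforce that review step, again with every name below being hypothetical: an AI-produced draft simply cannot be filed until a named human reviewer signs off, so diligence is built into the workflow rather than left to habit.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    author: str = "ai-drafting-tool"   # hypothetical drafting aid
    reviewed_by: Optional[str] = None  # set only by a human reviewer

def sign_off(draft: Draft, reviewer: str, approved: bool) -> Draft:
    # The reviewer's own judgment, not the AI's output, decides what moves forward.
    if not approved:
        raise ValueError("Draft rejected; revise before filing.")
    draft.reviewed_by = reviewer
    return draft

def file_with_court(draft: Draft) -> str:
    # Hard gate: unreviewed AI work product cannot be filed.
    if draft.reviewed_by is None:
        raise PermissionError("No human sign-off; filing blocked.")
    return f"Filed. Reviewed by {draft.reviewed_by}."

motion = Draft(text="Draft motion for continuance ...")
motion = sign_off(motion, reviewer="supervising attorney", approved=True)
print(file_with_court(motion))
```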

Closing the loop

The fear most people voice in this process concerns the other instances in which AI can be used in the legal process. One key fear, for example, is that AI could become the final arbiter of a case. That is not a likely outcome, but it is one that runs contrary to what most judges want.


Judge Schlegel makes the point eloquently, noting that every day, courts determine who raises children, whether someone is evicted, and who goes to jail or receives a second chance at life. These aren’t abstract data points or business metrics; rather, they’re profound decisions that demand empathy, experience, and the kind of nuanced judgment that comes only from years of practice.

To this end, we have to be careful with new iterations of AI, such as agentic AI — which operates autonomously, making decisions and adapting to changes, similar to a human employee, while performing tasks with minimal supervision. Indeed, we have to prevent agentic AI from taking over the final part of the litigation process. The growth of agentic AI alone necessitates an important discussion about maintaining human oversight of AI operations.
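As a rough illustration, assuming entirely hypothetical action names for an agent assisting with case administration, the guardrail sketched below lets the agent run routine steps on its own but raises a hard stop on any dispositive step until a human approves it.

```python
# Hypothetical action names; no real court software is being described here.
ROUTINE_STEPS = {"retrieve_docket", "summarize_filing", "draft_scheduling_order"}
DISPOSITIVE_STEPS = {"rule_on_motion", "enter_judgment", "set_sentence"}

class HumanApprovalRequired(Exception):
    """Raised when the agent reaches a step only a human may decide."""

def execute(action: str, human_approved: bool = False) -> str:
    # The agent may propose a dispositive step, but never take it on its own.
    if action in DISPOSITIVE_STEPS and not human_approved:
        raise HumanApprovalRequired(f"'{action}' must be decided by a judge.")
    if action not in ROUTINE_STEPS | DISPOSITIVE_STEPS:
        raise ValueError(f"Unknown action: {action}")
    return f"executed: {action}"

for step in ["retrieve_docket", "summarize_filing", "rule_on_motion"]:
    try:
        print(execute(step))
    except HumanApprovalRequired as stop:
        print("HARD STOP:", stop)
```

The point of the pattern is that the stop is enforced in code, not in the agent's prompt, so the final part of the litigation process stays with a person no matter what the model produces.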

Examples of agentic AI include autonomous vehicles, virtual assistants, robotic process automation, AI-driven game characters, industrial robots, and algorithmic trading systems. Although these programs are advancing, their involvement in courts remains, for now, a distant prospect.

Conclusion

There will always be a human involved in the judicial process. From technical support to referral attorneys, human presence is essential to verify the work completed by AI. It is therefore crucial to train attorneys not only in their legal profession but also as proficient users of new technologies. Indeed, new AI models must prioritize both clarity and user-friendliness; this is not optional, but imperative for an effective system.


You can find more about how courts are using AI-driven technology here.
