Judges and court administrators must understand the capabilities, limitations, and ethical considerations of generative AI-driven technologies to effectively integrate them into the judicial system, ensuring fairness and enhancing efficiency
As technologies powered by generative artificial intelligence (GenAI) continue to evolve, their implications for the judicial system become increasingly significant. For judges and court administrators, understanding the capabilities, limitations, and ethical considerations of these technologies is crucial.
In a new series of blog posts, we summarize the most important lessons about GenAI for judicial professionals, drawing on insights from experts like Jake Heller, Head of Product, CoCounsel, at Thomson Reuters; and Jake Porway, Technology Expert at the National Center for State Courts. Both Heller and Porway recently led an education session on the fundamentals of AI in the US court system.
GenAI models, particularly large language models (LLMs) such as OpenAI’s GPT-4, are designed to generate human-like text based on the data on which they’ve been trained. These models are fed vast amounts of text from various sources; they then process that data and learn to predict and generate coherent text in response to the prompts they’re given.
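To make “predict the next word” concrete, consider a deliberately tiny Python sketch of the idea (our own illustration, not an example from the session). It counts which word follows which in a miniature corpus, then generates text by repeatedly choosing the most frequent next word. Real LLMs such as GPT-4 learn far richer patterns over sub-word tokens using neural networks with billions of parameters, but the underlying task is the same.

```python
# Toy next-word predictor: NOT a real LLM, just the core idea in miniature.
from collections import Counter, defaultdict

corpus = (
    "the court reviewed the motion . the court granted the motion . "
    "the clerk filed the order ."
).split()

# "Training": tally how often each word follows each preceding word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def generate(prompt_word: str, length: int = 6) -> str:
    """Generate text by repeatedly picking the most frequent next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Greedy generation quickly falls into loops ("the court reviewed the
# court reviewed the"), hinting at why real models need vastly more data,
# context, and parameters to stay coherent.
print(generate("the"))
```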
Current applications in the judicial system
Within the nation’s court system, GenAI offers the opportunity to improve efficiency and increase service to the stakeholders of judicial operations, especially members of the general public accessing the courts. Of course, it is important to emphasize that any responsible use of GenAI within the courts requires human oversight that carefully vets, reviews, and approves every research result and output. Inappropriate uses of GenAI in the courts — such as allowing AI to offer strategic legal advice, make legal and judicial decisions, or solely provide legal representation — should be avoided at all costs.
Yet, there are currently several responsible applications of AI and GenAI within the courts, including:
Conducting legal research and document review — GenAI can significantly expedite legal research and document review processes. By processing and summarizing large volumes of text, these models can assist in identifying relevant case law, statutes, and legal precedents. “We’re at a place today in which AI can read, understand, and write at a post-graduate level,” says Heller. This capability can be leveraged to handle lengthier legal documents, reducing the time and effort required for manual review (a minimal sketch of such a workflow follows this list).
Drafting legal documents — GenAI can assist in drafting legal documents, providing initial drafts that can be refined by legal professionals. This can help streamline the preparation of briefs, memos, and other legal texts. “With the help of AI, tasks that used to take an entire day can now be completed in minutes,” explains Porway. This allows legal professionals to focus on more complex and strategic aspects of their work.
Frontline services and virtual assistants — AI-powered virtual assistants can aid court clerks and other frontline workers in managing inquiries and procedural tasks. For example, the Orange County Superior Court’s virtual assistant, Eva, helps clerks by providing information and guidance based on the court’s specific protocols. “They’re finding about the same level of quality with folks with just three months of experience as they previously did with those having three years of experience,” explains Porway. This demonstrates how AI can enhance the efficiency and effectiveness of court operations.
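As a concrete (and deliberately simplified) picture of the human-in-the-loop workflow these examples share, the Python sketch below chunks a long filing, asks a GenAI service to summarize each chunk, and queues every summary for mandatory human sign-off. The `call_genai_service` function is a hypothetical placeholder, not a real API; a court would substitute its own vetted provider.

```python
# Sketch of an AI-assisted document-review workflow with mandatory human
# sign-off. `call_genai_service` is a hypothetical stand-in for whatever
# vetted GenAI endpoint a court has approved; it is not a real library call.
from dataclasses import dataclass

CHUNK_SIZE = 4000  # characters per chunk; real systems usually chunk by tokens

def call_genai_service(prompt: str) -> str:
    """Placeholder for a court-approved GenAI completion endpoint."""
    raise NotImplementedError("wire up your vetted provider here")

@dataclass
class ReviewItem:
    source_excerpt: str
    ai_summary: str
    approved: bool = False   # flipped only by a human reviewer
    reviewer_notes: str = ""

def summarize_for_review(document: str) -> list[ReviewItem]:
    """Split a long filing into chunks, summarize each, and queue every
    summary for human review; no AI output is used unreviewed."""
    chunks = [document[i:i + CHUNK_SIZE]
              for i in range(0, len(document), CHUNK_SIZE)]
    items = []
    for chunk in chunks:
        summary = call_genai_service(
            "Summarize the following excerpt from a court filing. Quote "
            "statutes and case citations exactly as written, and do not "
            f"infer facts that are not stated.\n\n{chunk}"
        )
        items.append(ReviewItem(source_excerpt=chunk, ai_summary=summary))
    return items  # a clerk or attorney must vet and approve each item
```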
“GenAI is a legal assistant, not a lawyer”
Ethical concerns limiting adoption
At the same time, there are still concerns about the use of GenAI within court operations and the justice system, including:
Accuracy and reliability — While GenAI can perform many tasks efficiently, it is not infallible. The technology should be viewed as an assistant to court operations and procedures, not a replacement for human judgment, and it is essential to verify the outputs of AI models in legal contexts where accuracy is paramount. Heller cautions that GenAI “is an assistant, not a lawyer. Treat AI like it’s your most junior person on staff.” While AI can provide valuable assistance and a great head start on tasks, its output requires oversight and validation by experienced legal professionals.
Bias and fairness — AI models can inadvertently perpetuate biases present in the data on which they are trained. This is a critical concern in the legal field, where impartiality and fairness are fundamental, and it is one reason Porway emphasizes the need for rigorous, transparent evaluation of any AI tool before courts come to rely on it.
To learn more about the ethics behind GenAI use in the courts, you can access the Ethics of GenAI: A Guide for Judges and Legal Professionals webinar here.
Data privacy — Protecting personal information is an important ethical consideration in the court system. It ensures that sensitive information about individuals involved in legal proceedings is protected from unauthorized access, upholding the integrity of the judicial process. Safeguarding data privacy also helps maintain public trust in the legal system by preventing misuse of personal information, which could lead to harm or bias in judicial outcomes. Consequently, it is vital that AI and GenAI models used by courts have rigorous safeguards to protect personal and sensitive information. “What we need are better guideposts on which services or homegrown solutions honor the data privacy that we want,” says Porway.
Actions to address critical ethical concerns
Heller and Porway highlight several ways to address these drawbacks in the use of GenAI within the court system:
Develop the capability to craft excellent prompts to mitigate bias — Heller acknowledges the potential for bias in GenAI outputs, given that these models are trained on vast datasets that reflect societal biases. The key to mitigating bias, he emphasizes, lies in carefully crafting prompts and providing the AI with specific context, focusing on objective, fact-based tasks rather than open-ended ones that leave room for subjective interpretation (see the example prompts following this list).
Require open and reliable appraisal processes — Porway stresses the need for transparent and robust evaluation methods for GenAI tools, similar to the rigorous assessments applied to human professionals. This ensures a solid understanding of each tool’s limitations and its potential issues around bias, fairness, privacy, accuracy, and reliability. Indeed, increasing adoption of AI in the legal field necessitates a deeper examination of how we define and measure good outcomes in a justice system.
For more on how courts can best leverage AI-driven technologies and tools, check out the new white paper, Artificial Intelligence Guidance for Use of AI and Generative AI in Courts here.
Evaluate vendor practices — Assessing vendor practices around data privacy is crucial when adopting AI tools in legal and court settings. Key steps include assessing how vendors handle data security and privacy, understanding whether and which data is used to train and re-train AI models, exploring options such as local AI deployment to maintain control of sensitive data, and asking vendors detailed questions about data flows and privacy protections.
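To illustrate the prompt-crafting recommendation above, here is a hypothetical pair of prompts (ours, not the presenters’). The open-ended prompt invites subjective judgment, exactly the kind of use the session warned against, while the constrained prompt pins the model to an objective, fact-based task grounded in supplied text.

```python
# Hypothetical illustration of constrained vs. open-ended prompting.

# Risky: open-ended, asks the model to exercise judgment on its own and
# strays into the "legal decisions" territory courts should avoid.
open_ended_prompt = "Is this defendant likely to reoffend?"

# Better: an objective, verifiable task scoped to supplied material.
filing_text = "..."  # the actual document text would be supplied here
constrained_prompt = (
    "You are assisting with document review. Using ONLY the filing text "
    "below, list every statute cited and the sentence in which each "
    "citation appears. If no statutes are cited, reply 'none found'. "
    "Do not add analysis or recommendations.\n\n"
    f"FILING TEXT:\n{filing_text}"
)
```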
Immediate next steps
Heller and Porway make several recommendations for courts that may be considering AI adoption. First, court officials should start experimenting and gaining hands-on experience with the technology now; practicing with AI and GenAI tools will build a working understanding of the technology’s capabilities and limitations.
Importantly, courts should treat AI as an assistant, not a replacement for human judgment. As part of that, they should develop internal tests and evaluation processes to assess AI tools, checking for accuracy, bias, and other potential issues, and they should subject every GenAI output to thorough human review.
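One possible shape for such internal testing is sketched below: the court curates a small “gold set” of questions whose answers staff already know, runs the tool against it, and tracks accuracy over time. The `ask_genai` function and the sample questions are hypothetical placeholders; real evaluations should also audit for bias across case types and litigant demographics, not just raw accuracy.

```python
# Minimal internal evaluation harness for a GenAI tool. `ask_genai` is a
# hypothetical stand-in for the tool under evaluation, and the gold-set
# entries below are illustrative placeholders, not real court data.

def ask_genai(question: str) -> str:
    raise NotImplementedError("call the AI tool being evaluated")

# Court-curated questions with answers staff already know to be correct.
GOLD_SET = [
    ("What is the filing deadline for a civil appeal in this state?",
     "30 days"),
    ("Which form initiates a small-claims case?", "Form SC-100"),
]

def evaluate(gold_set: list[tuple[str, str]]) -> float:
    """Return the fraction of gold-set questions answered correctly,
    using a simple substring match against the known answer."""
    correct = 0
    for question, expected in gold_set:
        answer = ask_genai(question)
        if expected.lower() in answer.lower():
            correct += 1
    return correct / len(gold_set)
```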
Finally, court workers should initially focus their AI and GenAI use on constrained tasks, such as document review, research, and summarization, rather than on open-ended generation of documents or legal analysis.
Looking forward
To prepare for this rapidly evolving landscape, judges and court administrators should actively explore GenAI tools, experiment with different applications, and develop robust evaluation frameworks, all the while being mindful of potential digital divides in AI access, as well as ethical concerns around the technology.
Indeed, by looking for ways to ensure equitable adoption across the justice system and by embracing a critical and thoughtful approach, the legal profession can harness the power of GenAI to improve efficiency, enhance accuracy, and ultimately strengthen the pursuit of justice within the nation’s court system.
The Establishing GenAI literacy in courts series examines how to implement GenAI in courtrooms to improve the constituent experience while managing key concerns, especially around ethics.
This series is part of the AI Policy Consortium partnership between Thomson Reuters Institute and the National Center for State Courts.