
How to navigate ethics for common AI use cases in courts

Natalie Runyon  Director / ESG content / Thomson Reuters Institute

· 6 minute read


As judicial systems navigate the ethical challenges of integrating artificial intelligence, it is essential to carefully evaluate AI tools for language interpretation, legal research, and transcription in order to uphold the principles of fairness and due process in the courts.

The integration of AI into court proceedings requires a careful balance between technological advancement and ethical considerations in order to maintain the integrity of the justice system.

Judicial and legal professionals must stay informed about AI’s capabilities and limitations, ensure human oversight (often called the human-in-the-loop approach), address potential biases, and carefully evaluate AI tools for language interpretation, legal research, and transcription to uphold the principles of fairness and due process in the courts.

Codes of conduct help navigate the ethical maze

The adoption of AI into judicial operations requires competence and fairness to uphold the trustworthiness of the court system, says David Sachar, Director of the Center for Judicial Ethics at the National Center for State Courts (NCSC). “AI might be scary to some of us, and we don’t understand it, but we are required to know about it as lawyers and as judges,” Sachar says. “We have ethical responsibilities that are written into our codes.”

More specifically, judges have an ethical responsibility to stay informed about technological advancements, including the benefits and risks associated with AI, as outlined in legal and judicial codes of conduct, Sachar explains. This often involves understanding how generative AI (GenAI) operates, what its drawbacks are, and how to mitigate biases, all while carefully monitoring and supervising outputs.

Additionally, judges must carefully evaluate AI tools, considering their accuracy, data privacy implications, and the potential impact on judicial fairness. By doing so, they can effectively integrate AI into their work while safeguarding the principles of justice.

Ethical inquiries for GenAI use cases

A recent webinar hosted by the NCSC and the Thomson Reuters Institute as part of their partnership on the AI Policy Consortium examined how court and legal professionals should weigh the ethics of using AI in court operations across several use cases, including language interpretation, legal research, and transcription.

Some of the webinar’s more detailed analysis of these use cases included:

Using AI for language interpretation — The use of AI for language interpretation in courts raises several critical ethical questions, including translation accuracy and reliability (especially for rare languages), fairness and the avoidance of bias, privacy and data security, and adherence to legal requirements for interpreters. “Even the best AI interpretation tools will make errors, so courts must establish robust mechanisms of human oversight that can account for these limitations,” says Lea Strohm, Lead for Data & Model Ethics at Thomson Reuters. “This could include a panel of multilingual experts to test AI interpretation tools, clear procedures for challenging and correcting AI-generated translations, and establishing processes for giving involved parties the opportunity for post-hoc corrections.” This underscores the vital importance of human review for any AI output, including translations.
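
To make the oversight workflow Strohm describes concrete, here is a minimal sketch in Python of routing AI translations for human review. The confidence threshold, the TranslationRecord structure, and the function names are illustrative assumptions, not any vendor’s actual API; it simply assumes an AI interpretation service that reports a confidence score alongside its output.

```python
from dataclasses import dataclass, field

# Illustrative threshold: outputs below it go to a certified human interpreter.
REVIEW_THRESHOLD = 0.90

@dataclass
class TranslationRecord:
    source_text: str
    ai_translation: str
    confidence: float
    needs_human_review: bool = False
    corrections: list = field(default_factory=list)  # post-hoc challenges and corrections

def route_translation(source_text: str, ai_translation: str, confidence: float) -> TranslationRecord:
    """Auto-accept high-confidence output; flag everything else for human review."""
    record = TranslationRecord(source_text, ai_translation, confidence)
    record.needs_human_review = confidence < REVIEW_THRESHOLD
    return record

def record_correction(record: TranslationRecord, corrected_text: str, reviewer: str) -> None:
    """Log a challenge or post-hoc correction so the official record can be amended."""
    record.corrections.append({"reviewer": reviewer, "text": corrected_text})

# A low-confidence output is routed to a human interpreter before it is used.
rec = route_translation("¿Dónde estaba usted anoche?", "Where were you last night?", 0.82)
assert rec.needs_human_review
```

The key design point is that the flag defaults toward human review: the system never decides on its own that an interpretation is final, which mirrors the post-hoc correction processes described above.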

Legal research by clerks — Courts and legal professionals can ethically use AI tools, but with important caveats. Any AI-assisted legal research system should use Retrieval-Augmented Generation (RAG), so that specific, trusted legal databases and domain-engineered systems, rather than the open internet, are consulted when the system generates its outputs, according to Sachar and Strohm. Users must be trained on and understand the drawbacks of AI, including potential biases in training data, and why they must always review any output from an AI system.
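
As a rough illustration of the RAG pattern Sachar and Strohm describe, the Python sketch below retrieves from a small, closed set of trusted documents and generates only from that context. The toy corpus, the keyword-overlap retriever, and the placeholder answer() function are assumptions for illustration; a production system would use a vetted legal database, embedding-based retrieval, and a governed language model.

```python
# Minimal RAG sketch: retrieval is restricted to a closed, trusted corpus.
# The two-entry corpus and naive scorer below are illustrative stand-ins.

TRUSTED_CORPUS = {
    "smith_v_jones": "Summary judgment is proper when there is no genuine dispute of material fact.",
    "doe_v_roe": "Hearsay is inadmissible at trial unless a recognized exception applies.",
}

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank trusted documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus.values(),
                    key=lambda doc: len(terms & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Generate only from retrieved, trusted context, never from the open internet."""
    context = "\n".join(retrieve(query, TRUSTED_CORPUS))
    # Placeholder for a call to a governed LLM constrained to the context above.
    return f"Based on trusted sources:\n{context}\n(A clerk must verify before citing.)"

print(answer("What is the standard for summary judgment?"))
```

The point of the pattern is that the generation step sees only the retrieved context, which is why restricting the corpus to trusted legal sources directly constrains what the system can produce.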

AI-generated court transcripts — Current ethical frameworks generally do not support fully replacing human court reporters with AI systems. While AI may assist human transcriptionists, it cannot yet match the accuracy and nuanced understanding provided by trained professionals, especially for complex legal proceedings. It is important for courts to maintain transcript accuracy and integrity, protect the privacy of court proceedings, and uphold existing professional standards and certifications for court reporters.

Steps to evaluate GenAI for common use cases

Implementing AI solutions requires careful planning and management to ensure ethical and effective use. The first step is to identify the specific areas where AI can enhance processes within the courts, such as automating repetitive tasks or improving decision-making accuracy.

“Courts should adopt a proactive stance in managing AI risks, from data protection to ethical implementation,” says Carter Cousineau, Vice President of Data & Model Governance at Thomson Reuters. “This approach should include rigorous vetting of AI vendors, requiring them to demonstrate robust data measures, transparent AI governance frameworks, and adherence to established ethical standards. By doing so, courts can ensure the responsible use of AI technologies while maintaining the integrity of the judicial process.”

Once potential GenAI-assisted tasks are identified, the second step is selecting the right tools, including evaluating their compatibility with courts’ existing systems. It is also important to consider the ethical implications, such as data privacy and potential biases, and to ensure that chosen tools comply with relevant regulations and standards.


Managing AI tools involves continuous monitoring and evaluation to ensure that they deliver the expected benefits without unintended consequences. The third step, therefore, is to establish clear guidelines for AI usage, including human oversight and accountability, which helps courts maintain control over their AI operations.
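
One way to operationalize the oversight-and-accountability guideline is an audit trail in which no AI output is relied upon until a named reviewer signs off. The sketch below is a minimal, hypothetical illustration: the tool name, log structure, and reviewer fields are assumptions, and a real court system would use a secured, tamper-evident store rather than an in-memory list.

```python
import datetime
import json

AUDIT_LOG: list[dict] = []  # in practice, a tamper-evident store controlled by the court

def log_ai_output(tool: str, task: str, output: str) -> dict:
    """Record every AI output with a timestamp; entries start unapproved."""
    entry = {
        "tool": tool,
        "task": task,
        "output": output,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "approved_by": None,
    }
    AUDIT_LOG.append(entry)
    return entry

def approve(entry: dict, reviewer: str) -> None:
    """A named court employee signs off before the output may be relied upon."""
    entry["approved_by"] = reviewer

entry = log_ai_output("draft-summary-tool", "case summary", "Draft summary text...")
approve(entry, "Clerk J. Rivera")
print(json.dumps(entry, indent=2))
```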

Finally, courts should provide regular staff training and encourage feedback loops among staff, as these can enhance understanding of and proficiency with AI tools. Maintaining open communication channels for feedback ensures that issues are promptly identified and addressed, leading to ongoing improvements and successful implementation of AI-powered solutions.

The incorporation of GenAI into court proceedings demands thoughtful examination of the ethical duties of those presiding over court cases. Remaining knowledgeable about AI’s capacities and limitations, subjecting the outputs of GenAI-driven tools to human analysis, and regularly checking those outputs for potential bias, prejudice, and unjust results are all part of a comprehensive ethical framework.

Utilizing this approach can help courts and court professionals positively impact decision-making and case outcomes, while also maintaining the human element that is so crucial to interpreting laws and administering justice.


You can register for the upcoming webinar, Getting the Best of GenAI — How to Use Prompt Engineering, in the TRI/NCSC AI Policy Consortium series, here.
