Best Practices in Courts & Administration

Humanizing Justice: The transformational impact of AI in courts, from filing to sentencing

Allyson Brunette  Workplace Consultant

· 5 minute read


AI tools have enabled court and government employees to focus more on human interactions in their work and provided learning opportunities to improve equity and reduce bias in judicial outcomes

Artificial intelligence (AI) tools are being introduced at every step of client interactions with the criminal and civil justice systems — well beyond the courtroom. These tools have improved efficiency and equity for defendants and their legal representation from court filings to parole boards. They’ve also enabled employees to focus more on human interactions in their work and provided learning opportunities to improve equity and reduce bias in judicial outcomes.

Clerks’ offices: Streamlining document processing

AI-driven tools have been deployed in courts and clerks’ offices over the past five years, allowing clerks to reduce inefficiencies and errors that may occur in a largely human-run filing process. Palm Beach County, for example, received a national digital innovation award in 2018 for its use of a Lights-Out Document Processing program, which allows users to seamlessly analyze document filings as well as tag and index them with appropriate case information. Palm Beach County employees tested the software on a limited number of document types as a low-risk pilot in 2018. And after training the software on hundreds of documents, the team audited all of the documents that were organized by the software.

With a 98% to 99% accuracy rate, the machines far outperformed the accuracy of their human counterparts. This deployment of five robotic document management systems was equivalent to the workload capacity of 19 human employees and freed up Palm Beach County workers for more thoughtful jobs that enabled them to grow both their skillsets and their earning potential.
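The accuracy figure above comes from comparing machine-assigned tags against a clerk-verified answer key. As an illustration only, a minimal sketch of such an audit might look like this (the function name and sample tags are invented, not from Palm Beach County's actual system):

```python
# Hypothetical audit of an automated document-indexing pilot: compare the
# machine-assigned case tags against a clerk-verified answer key and report
# the share that match (the article cites 98% to 99% in the county's audit).

def audit_accuracy(machine_tags, verified_tags):
    """Fraction of documents the software tagged identically to the human audit."""
    matches = sum(1 for m, v in zip(machine_tags, verified_tags) if m == v)
    return matches / len(verified_tags)

machine = ["summons", "motion", "judgment", "motion", "summons"]
verified = ["summons", "motion", "judgment", "order", "summons"]
print(f"{audit_accuracy(machine, verified):.0%}")  # → 80%
```

In a real deployment the "verified" column is produced by the human audit the article describes, and the accuracy rate is tracked per document type before the software is trusted with the full filing stream.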

Public defender offices: Enhancing legal advocacy

Also in Florida, Miami-Dade County Public Defender Carlos Martinez (previously featured in our Revolutionizing Rights series) has advocated for using large language model AI tools to aid the public defender's office in the initial drafting of legal documents and in research. His office is one of the first in the United States to use AI tools that aid attorneys and their teams in research, document preparation, and memo drafting.

Technology can also help to improve case outcomes by diverting people from prison. Like Miami-Dade, the office of the Los Angeles County Public Defender also has integrated AI solutions into their toolkit. Chief Information Officer Mohammad Al Rawi helped migrate 24 legacy, in-house systems to cloud-based platforms — fortuitously, right before the global Covid-19 pandemic.

The efforts of the office centered on humanizing the indigent and treating the people intersecting with the criminal justice system as just that, people rather than simply case numbers. With AI tools to manage the troves of data within the office’s systems and then organize them by person (rather than by case), data can be transformed into a human narrative. This narrative approach helps legal teams to advocate for alternative treatments and is part of the office’s larger goal to reduce incarceration.

In the non-criminal landscape, generative AI (GenAI) tools can help both legal professionals and pro se litigants.

In fact, some organizations funded by the Legal Services Corp. have used AI tools to generate pleadings and other filings for high-frequency, low-complexity case types, such as workers' compensation claims, landlord-tenant disputes, and consumer debt filings. Additionally, for people whom these organizations are not able to help individually, AI tools can provide do-it-yourself resources for pro se litigants, including document assembly tools and consumer-focused AI chatbots.
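At their simplest, document assembly tools of this kind fill a vetted template with the litigant's answers to an intake questionnaire. A minimal sketch, with an invented template and field names rather than any real legal-aid product:

```python
# Template-based document assembly for a high-frequency filing type.
# The template text, court name, and parties below are purely illustrative.
from string import Template

ANSWER_TEMPLATE = Template(
    "IN THE $court\n"
    "$tenant, Defendant, answers the complaint of $landlord and states:\n"
    "1. Defendant denies owing $amount in back rent.\n"
)

# Intake answers supplied by the pro se litigant.
filing = ANSWER_TEMPLATE.substitute(
    court="County Court of Example County",
    tenant="Jane Roe",
    landlord="Acme Properties LLC",
    amount="$1,200",
)
print(filing)
```

GenAI tools extend this pattern by drafting or adapting the template language itself, which is where review by a licensed attorney and the ghostwriting disclosure discussed below become important.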

Some are wary that these AI-driven tools could veer into the unauthorized practice of law. However, if a program does not provide individually tailored legal services and notice is given that ghostwriting was used, its output generally would not be considered the unauthorized practice of law.

AI’s role in reducing incarceration and predicting recidivism

A Tulane University study assessed how AI tools informed judges' decision-making in more than 50,000 convictions in the State of Virginia. AI software was used to score each offender's risk of re-offending and then advise judges on sentencing options: incarceration or alternative punishments. The study found that the AI tool could help correct gender and racial bias that may come into play in judges' discretionary decisions.

The tools would generate recommendations, but final sentencing decisions were made by human judges. The study found that judges disproportionately declined to offer alternative punishments to defendants of color, even when the tools suggested the alternative punishments.

Still, there is ample opportunity for AI tools to be utilized in this way. For example, the New York State Parole Board is one of several state parole boards that use the actuarial COMPAS Risk and Needs Assessment tool to support decisions on parole and assess the likelihood of recidivism. The actuarial score for each incarcerated person is based on factors such as education level, age at the time of conviction, and their individual plans for re-entry into society.
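To make the idea of an actuarial score concrete, here is a toy sketch that weights the three factors named above. The weights, scale, and thresholds are entirely invented for illustration; COMPAS's actual model is proprietary and far more complex, which is precisely why the studies below scrutinize it:

```python
# Toy actuarial score over the factors the article names: education level,
# age at conviction, and re-entry planning. All weights are invented.

def toy_risk_score(has_diploma: bool, age_at_conviction: int,
                   has_reentry_plan: bool) -> int:
    """Return a 0-10 score; higher means higher modeled risk of recidivism."""
    score = 5  # arbitrary baseline
    if has_diploma:
        score -= 2
    if age_at_conviction >= 30:
        score -= 1
    if has_reentry_plan:
        score -= 2
    return max(score, 0)

print(toy_risk_score(True, 34, True))    # lower modeled risk
print(toy_risk_score(False, 19, False))  # higher modeled risk
```

Even this trivial version shows why such tools attract scrutiny: every weight encodes a policy judgment, and correlations between the inputs and protected characteristics can reintroduce the very bias the tool is meant to remove.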

A University of California-Davis study employed an AI tool to analyze data from more than 4,000 individuals released on parole between 2012 and 2015. This research evaluated the outcomes for these individuals, considering their COMPAS scores and subsequent parole board decisions. The findings indicated that parole was often denied to those with low-risk COMPAS scores on account of the severity of their initial offenses. Conversely, a Dartmouth College study and a ProPublica investigation have critiqued COMPAS, suggesting that its algorithm might be no better than human judgment and may suffer from bias.

We understand that the use of AI tools comes with concerns. Responsible utilization of AI technology demands the formulation of ethical GenAI principles, the establishment of governance frameworks, and continuous engagement with interdisciplinary experts to tackle ethical issues pertaining to bias, fairness, accuracy, reliability, and data privacy. Moreover, human oversight remains essential, as AI serves as a tool rather than as a lawyer itself.

None of these studies suggests that AI tools should replace human judgment, of course, but these tools can be used to evaluate and identify the unconscious bias that often works directly against efforts to reduce rates of incarceration.


You can find out more about how AI is being leveraged in courts here.
