Law firms and corporate legal departments may be making the Gen AI leap, but more than half of court personnel aren’t sure whether this kind of advanced technology can or should be used in the court setting.
To say that generative artificial intelligence (Gen AI) has impacted the legal industry might be a bit of an understatement. Fully 70% of legal professionals said they believe advancements in AI and Gen AI will have a transformative or high impact on the profession within the next five years, according to Thomson Reuters’ recent Future of Professionals Report.
Already, law firms and corporate legal departments alike have experimented with and begun integrating AI tools, while state bars and regulators have begun instituting rules for how Gen AI should properly be used in legal work settings.
However, there is one crucial element of the legal ecosystem that has, for the most part, sat on the Gen AI sidelines: the nation’s court system. Indeed, courts generally are not utilizing Gen AI technologies, with less than 10% either actively using the tools or planning to use them within the next year, according to the Thomson Reuters Institute’s recent State of the Courts survey report. Further, many judges and court professionals at state, municipal, and county courts throughout the United States who responded to this survey said they do not feel they know enough to have an opinion on whether Gen AI should be used in court settings.
“I just don’t know enough about the systems to say that I am confident [these technologies] are safe and secure,” said one state court judge. Another state court judge added they have “concerns the information provided will not be accurate and will confuse litigants.”
As more and more legal parties begin to rely on Gen AI, however, courts risk falling out of step with those appearing in their courtrooms. “It’s just so new,” explained one county staff attorney. “Court systems are notoriously behind the times, and with AI technology evolving so quickly, I worry the courts won’t be able to remain caught up to the latest AI safety and system requirements.”
Many are still learning
Although public versions of Gen AI technology have been available in the market for more than a year, many court personnel still desire more education around its use. When asked whether Gen AI can be used in a court setting, 68% of survey respondents said they were unsure or did not have enough information to answer the question. Similarly, when asked whether it should be used in a court setting, 58% said they were unsure.
“Not enough info. Just don’t trust it yet,” answered one state judge when asked about their concerns with the technology. Another noted that they have “general anxiety and uncertainty about using technology which replaces human judgment and intuition with programmed algorithms for decision-making purposes.”
Notably, judges might actually be more unsure about Gen AI’s implications in court than other court professionals. In fact, 67% of judges said they were unsure or did not have enough information about whether Gen AI should be used in courts, compared to 50% of other personnel, according to the report. As to whether Gen AI can be used, 69% of judges were unsure, compared to 66% of other court personnel surveyed.
This uncertainty was also more pronounced at the county and municipal court level as a whole than at the state level. For instance, 75% of respondents in county/municipal courts said they did not know whether Gen AI can be used in a court setting, versus 65% of state court respondents. As to whether Gen AI should be used, more than two-thirds (67%) of county/municipal court respondents said they were unsure, compared to just more than half (54%) of state court respondents.
Part of the reason for this viewpoint may be a lack of training with Gen AI tools and a need for more targeted education. “Chambers staff never receives adequate training when our IT department makes new technology available to us. We just get to go figure it out,” said one state law clerk.
Others feel there simply has not been the time — or budget — to properly invest in Gen AI at this stage. “Court staff will never have the expertise to manage AI to keep it within its assigned tasks and effectively monitor its effectiveness and accuracy,” explained one state court judge. “We would have to rely on outside help, and we do not have the budget or staff to do it.”
What’s holding them back?
Among those respondents who did have firm opinions on Gen AI, however, the outlook tended to skew negative. When asked whether Gen AI can be used in a court setting, a roughly equal portion of respondents said yes (15%) and no (17%); but when asked whether it should be used in a court setting, the no answers (33%) far outweighed the yes answers (9%).
The reasons varied, but one in particular was paramount: Gen AI’s perceived inaccuracies, cited by 51% of those who said they had risk concerns. A number of respondents pointed specifically to court cases in which public-facing Gen AI programs such as ChatGPT hallucinated, producing incorrect or completely fabricated case citations during research.
“Garbage in, garbage out. Data on which artificial intelligence (hah!) functions rely upon the honesty and capability of those who enter it into digital storage,” answered one state magistrate judge. “I do not trust those who create the data banks and programs. When I read of lawyers asking artificial intelligence to write briefs for them — and the AI citing non-existent cases — I mind the wisdom of [Harry Potter character] Arthur Weasley: ‘Never trust anything that can think for itself if you can’t see where it keeps its brain.’”
However, the risk of hallucinations continues to decrease, both as proprietary legal AI tools introduce checks to prevent incorrect answers, and as the large language models underpinning these tools become more accurate themselves. As a result, an emerging concern for some courts is not the accuracy of the technology itself, but whether its results will be interpreted or applied incorrectly.
This could be a concern internally as well, as one state court judge noted: “I am concerned that my law clerks will rely too extensively on AI research and fail to corroborate such research with their own independent research via traditional Westlaw search methods.” However, some respondents also pointed to the effect of Gen AI tools on the public, such as one county law clerk who answered: “Lay people will use it to draft their own court documents, not understanding the nuances of specific words in a court setting.”
The report’s respondents also noted other potential barriers to AI adoption. More than 10% of judges cited each of three categories — data protection and security; needing more experience and training; and the potential for misuse and manipulation — as pain points for current AI usage.
But for some, there remains the simple barrier that a machine is not a human. This aversion to technology has been mirrored across various populations in Thomson Reuters Institute studies, but it is perhaps particularly acute in the court setting — 19% cited the lack of an emotional component as a concern with Gen AI, the second-most cited concern after inaccuracies.
In that way, the biggest barrier to Gen AI’s growth in the court setting may not be the technology itself, but how to best marry the technology with the human focus of law to provide the best experience for all.
“One word, artificial,” said one state court judge. “The courts deal with real people with real issues. Being in the court system is dehumanizing currently, and we fight to minimize that effect.”
You can download a full copy of the Thomson Reuters Institute’s recent State of the Courts survey report here.