
Generative AI and the courts: Balancing efficiency and legal obligations

Rabihah Butler  Manager for Enterprise content for Risk, Fraud & Government / Thomson Reuters Institute

· 5 minute read


How our nation's court system regards and rules on issues involving generative AI, and the ethical questions it raises, will require a delicate balance between the benefits of the technology and the legal obligations to which users must adhere.

As part of the judicial process, each jurisdiction makes its own rules and regulations covering how attorneys and petitioners present their concerns before the court. Over time, each county, state, or country develops clear and consistent standards.

When a new, particularly innovative technology emerges, smaller jurisdictions are often best positioned to quickly implement rules and regulations covering unanticipated situations; however, those rules can become inconsistent across jurisdictions as the innovation develops, sometimes unintentionally conflicting with pre-existing obligations.

Such is the case with artificial intelligence (AI) and its latest iteration, generative AI, which can create wholly new content, such as documents, legal summaries, or answers to questions, based on often-opaque algorithms.

The lightning-speed development of gen AI has made it necessary for courts to address the proverbial elephant in the room: What are the ethics and legality of AI-created work product, especially in a legal setting?

Indeed, the case of Mata v. Avianca, Inc. demonstrated the embarrassing possibilities that come with lawyers' improper use of gen AI. In that case, a lawyer submitted filings that were created with gen AI and were found to contain incorrect and even imaginary case citations and opinions. The case highlights the importance of counsel properly reviewing court filings and raises concerns over the ethical implications of using gen AI in this way.

Not surprisingly, individual courts are attempting to prevent such inappropriate behavior in the future.

Technically competent

Across the United States, attorneys are required to provide competent representation to their clients, and more recently that requirement was expanded to include an awareness of the benefits and risks associated with new and relevant technology.

This means that attorneys cannot simply bury their heads in the sand and act as though new technology does not exist or assume it has no benefits for their clients. On the other hand, it would be problematic to use this powerful technology without fully understanding the benefits and risks.

Cognizant of this problem with gen AI, many law firms already are warning their attorneys about the use of gen AI in a professional capacity. According to a recent Thomson Reuters Institute survey report on gen AI use among law firms, 15% of law firm respondents said their firms had issued a warning around generative AI or ChatGPT usage at work, including 21% at large law firms and 11% at midsize law firms. Two-thirds (66%) indicated they had not received such warnings, and 19% said they did not know whether their firm had issued a warning.

Beyond that, attorneys must consider the position that the court has taken on use of such technology.

Courts’ choices

Several courts, including those in Texas, Illinois, and Manitoba, Canada, have issued rulings or standing orders on the use of gen AI in their courtrooms. Each court’s ruling places the onus on the attorney to notify the court of their use of gen AI in detail. The courts also require that the attorney review and confirm the accuracy of the work done by gen AI.

Judges also feel that they are being asked to parse through cases and technicalities to see when the use of gen AI is appropriate. “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them,” wrote Judge Brantley Starr in his order. “Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up — even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath.”

Other judges go a bit further. Judge Stephen Vaden of the U.S. Court of International Trade ordered that attorneys must also certify that confidential information was not disseminated. His order exceeds other orders and rulings in that it also requires attorneys to identify each section of a filing that uses generative AI.

In the Thomson Reuters Institute survey, attorneys expressed concern that using a generative AI solution that still requires manual review does little to increase efficiency, a concern that raises the question of whether these rulings limit the use of gen AI to the point of all but prohibiting any substantive use.

The larger concern is that attorneys have an inherent obligation to be competent in new technologies and make use of them in ways that benefit their clients. However, these new rulings could serve as a barrier to upholding this obligation.

Beyond these concerns, only a minority of courts have put forth rules expressing an opinion on this matter. The majority of courts, attorneys, and law firms have yet to reach a conclusion as to how to actively and properly use gen AI technologies. It is clear that a significant amount of discourse is necessary to balance the individual attorney's obligations to their clients (especially around privacy, competency, and fiscal responsibility) against their equally weighty obligation to comply with local court rules.