While many law firms are excited about the prospect of generative AI for legal work, a sizeable number simply don’t know what the technology can do or how firms are approaching it, making adoption as much about education as about technological fit.
Many law firm attorneys feel positively about the prospect of generative artificial intelligence (AI) and AI-enabled tools such as ChatGPT, according to the Thomson Reuters Institute’s ChatGPT and Generative AI within Law Firms report released in April. Indeed, the report revealed that more than 80% of law firm leaders surveyed said they believed generative AI can be applied to legal work now, and more than half believed it actively should be applied to legal work.
Among the remaining respondents, however, the prevailing feeling was not necessarily distaste for generative AI, although some of that did exist. Many are simply still unsure exactly what generative AI is or what it can do. Considering how gradually law firms have adopted new technology in the past, and that OpenAI’s ChatGPT was only publicly released in November 2022, that uncertainty is understandable.
Overall, the report found uncertainty across the board among law firm respondents: 25% said they did not know whether generative AI should be applied to legal work, and 21% did not know whether it should be applied to non-legal work. These feelings even extended to other forms of AI outside of generative AI/ChatGPT, as 24% did not know whether their firm uses AI outside of the generative context.
Jason Adaska, Director of the Innovation Lab at Holland & Hart, has a team of data scientists working on potential generative AI applications for the firm. Adaska says that because generative AI has appeared on the scene so quickly, there is “an increasing bifurcation in the conversations that I have” between people interested in using it and those unaware of its existence.
“Some people, they’ve seen it in the media, they’re kind of up to date with it,” he adds. “They may not come from a technology perspective, but at least they know about the conversation. Even in March I had some conversations with people who say, ‘I didn’t catch that. What is ChatGPT, what is this word you’re throwing at me?’”
Discovering how generative AI & ChatGPT can help
Similarly, Arsen Shirokov, National Director, Information Technology at McMillan, has already begun having conversations with internal stakeholders and external vendors about ways generative AI can be applied in his firm. A sticking point he’s run into, however, is that unlike previous legal technologies that have had a distinct use case, generative AI’s applications are so expansive that they can be hard to nail down.
“Almost everywhere else in technology, you say what this product is: this is an IG solution, this is a business workflow solution, this is an architecture solution, right? …With generative AI, I think we haven’t figured that out yet,” Shirokov says. “We don’t necessarily know which generative AI solutions are for research, for example. Take ChatGPT: It can also draft things for you, but for review, you cannot feed the bunch of documents to ChatGPT yet and just say, review this.”
Until those questions are answered, many lawyers also remain unsure of how their firms will handle generative AI on a wider scale. Our report found that 36% of respondents said they did not know whether their law firm had risk concerns around generative AI usage. Additionally, 19% did not know whether their firm had issued warnings against unauthorized generative AI use, and 22% did not know whether their firm had banned unauthorized generative AI use outright.
Even those respondents who reported that their firm had underlying risk concerns over these advanced technologies counted the technology’s lack of maturity among those concerns. “A lack of understanding of the underlying risks,” wrote one respondent when asked why their firm had concerns around generative AI.
“Lack of insight/ability to control algorithms, data sets, and assumptions/biases of generated results. Lack of disclosure of disclaimers, boundaries, and assumptions when results return. Lack of ability to assess confidence in generated results,” wrote another.
As a result, for those law firms actively considering embracing generative AI (the report found 40% of firms were at least considering its use), encouraging adoption may be as much a knowledge and informational issue as a technological one. To that end, Jessica Lipson, Partner and Co-Chair of the Technology, Data & IP Department at Morrison Cohen, says her firm has been treating communication as a “high strategic issue” in potentially adopting generative AI technologies. “How are we going to get people comfortable not just with the technology, but with the fact that they are interacting with a machine, and yet it doesn’t feel like you’re interacting with a machine?” Lipson asks.
Part of the answer may take a cue from the 1989 film Field of Dreams: “If you build it, they will come.”
Holland & Hart’s Adaska says he has drummed up interest in his team’s generative AI efforts simply by letting attorneys play around with the tools themselves. “I think that’s the story of the last few months in this,” Adaska adds. “A number of people who maybe would have either not paid attention or have been skeptical are being won over by actually trying things they thought weren’t possible and being pleasantly surprised.”