Law firms and corporate legal departments that simply watch and wait to see how generative AI performs may miss out, says Northwestern Professor Daniel Linna
Many law firms and legal departments are watching and waiting to see how generative AI performs and how it answers their queries. While they wait for the technology to be perfected, however, they may miss out on a great opportunity, argues Northwestern Professor Daniel Linna.
Here’s what should be an easy question: Should you, as a legal professional, know how to use artificial intelligence (AI)?
The answer is that you likely already do, whether you realize it or not. Spell check, autocomplete, legal research or document management tools, even Google – just about every modern piece of professional software has some element of AI and machine learning baked in.
However, here is a potentially scarier follow-up: Should you, as a legal professional, know how to use generative artificial intelligence?
That question may be trickier to answer. Generative AI refers to a subset of AI that generates new data – text, images, and even videos – that mimics the characteristics of human-generated content. This output is created by ingesting tons and tons of data – in some cases, much of it publicly available on the internet – and using it to compute an answer to a given prompt. Generative AI programs, headlined by OpenAI’s free ChatGPT application, have become a buzzy topic in business circles.
To many legal professionals, tackling generative AI sounds daunting, perhaps impossible. However, according to Daniel Linna, Senior Lecturer & Director of Law and Technology Initiatives at Northwestern Pritzker School of Law and McCormick School of Engineering, the answer is a clear yes: You, as a legal professional, should know how generative AI works, at least at a functional level. It’s not a technology issue, Linna argues; it’s a client-service issue.
“If your client says, ‘You can use my contracts to train your system, but you need to guarantee me that no content in my contracts will ever be output from your system for a different client,’ do you understand the concern and risks? Do you understand what your client is asking, and do you know how to go to the vendor, describe the concern, and ask follow-up questions to be able to provide assurances to your client?” Linna asks. “If you don’t understand how the system works, you probably don’t really understand the concern, or the potential risks and benefits.”
The first step toward answering those questions, then, is figuring out what problems different generative AI programs can actually solve and how they solve them.
What Gen AI does (and doesn’t do)
Many in the legal profession are in wait-and-see mode with generative AI technologies. Thomson Reuters Institute surveys conducted in April and May found that although more than 80% of respondents said generative AI could be used for legal work, and half said it should be, fewer than 10% were actually using it. In his conversations with legal professionals, Linna says, some attorneys tell him they will wait until tools like ChatGPT are perfected by others and then adopt them.
There are a couple of problems with this line of thinking, Linna says. For one, ChatGPT and other public generative AI programs, like Anthropic’s Claude, draw on the internet as a data source, but specialized case law often isn’t publicly available. Without that training data, these general-purpose platforms are unlikely to be perfected for specialized legal use cases.
“How are we going to solve the hallucination problem with ChatGPT and tools like that? Well, I think the answer is you’re not going to solve it, because a large language model is not designed to store cases and other facts about the world,” Linna explains, adding that many attorneys are “unlikely to get what they need from a tool like ChatGPT, so that’s why these specialized tools are being created for legal tasks.”
Indeed, a host of legal technology providers – specializing in everything from legal research to contract management to electronic discovery – have hopped aboard the generative AI bandwagon. They aim to develop tools that provide more accurate responses by training their applications on law-specific data and by cross-referencing answers against legal sources before presenting them. Many of these providers also partner with large technology companies such as Microsoft and Google, whose respective Copilot and Bard applications draw on exceptionally large amounts of data in ways similar to OpenAI’s ChatGPT.
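To make that cross-referencing idea concrete, the sketch below shows the retrieval-grounded pattern such tools broadly follow: find relevant passages in a curated legal corpus first, then answer only from what was found, declining when nothing relevant turns up. Everything in it is a simplified assumption for illustration – the toy corpus, the keyword-overlap scoring, and the answer function standing in for an actual large language model call – not any vendor’s actual implementation.

```python
"""Illustrative sketch of retrieval-grounded answering.

Everything here is a simplified stand-in: real legal tools index
licensed, law-specific databases, use semantic search rather than
keyword overlap, and call a large language model to draft answers.
"""

# A toy in-memory "legal corpus" with invented, illustrative entries.
CORPUS = [
    ("Smith v. Jones (2019)",
     "A contract requires offer, acceptance, and consideration."),
    ("Doe v. Acme Corp. (2021)",
     "Liquidated damages clauses are unenforceable if punitive."),
    ("In re Widget LLC (2020)",
     "Electronic signatures satisfy the statute of frauds."),
]

# Common words ignored when matching, so stray "the"s don't count as hits.
STOPWORDS = {"a", "an", "and", "are", "if", "is", "of", "the", "what"}


def significant_terms(text: str) -> set[str]:
    """Lowercase, split, and drop punctuation and stopwords."""
    words = (word.strip(".,?!").lower() for word in text.split())
    return {word for word in words if word and word not in STOPWORDS}


def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank corpus passages by naive term overlap with the query."""
    query_terms = significant_terms(query)
    scored = sorted(
        ((len(query_terms & significant_terms(text)), source, text)
         for source, text in CORPUS),
        reverse=True,
    )
    return [(source, text) for score, source, text in scored[:k] if score > 0]


def answer(query: str) -> str:
    """Answer only from retrieved passages; refuse when none match.

    In a real tool, the retrieved passages would be handed to a large
    language model with instructions to answer strictly from them and
    cite sources – the cross-referencing step described above.
    """
    passages = retrieve(query)
    if not passages:
        return "No supporting authority found; declining to answer."
    return "; ".join(f"{text} [{source}]" for source, text in passages)


if __name__ == "__main__":
    print(answer("Are liquidated damages clauses enforceable?"))  # cites Doe v. Acme
    print(answer("What is the capital of France?"))  # declines: nothing retrieved
```

The order of operations is the design point: retrieval and refusal come before generation, which is what lets a specialized tool offer assurances that a general-purpose chatbot cannot.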
Even then, however, generative AI is just that – a tool. What these tools can and cannot do is also a function of what you want a tool to do, and that analysis falls to the lawyers and legal professionals who deliver legal solutions. People and process, it turns out, still need to underpin any piece of technology, no matter how powerful the tech may be. A client-service element, in this case, needs to be baked into the technology-adoption decision.
“You don’t just come with technology and then instantly transform something like drafting contracts or doing legal research,” Linna says. “There are other parts to it. You need to understand the process, need to understand what are our goals, what are we really trying to accomplish, how do people actually use this tool, and how are we going to build systems that empower people.”
Of course, Linna adds, this will mean that people will need to change the way they work. “If you want that to happen, you need to engage them early and empower them in the innovation process,” he says. “Organizations that start to do this now will have a sizable advantage over organizations that try to wait and ‘buy the perfected technology’ down the road.”
Embracing the uncertainty
Figuring out how technology can empower people is one thing; law firms and corporate law departments actually empowering people to use technology for better client service is another. In the Thomson Reuters Institute survey on generative AI in the workplace, roughly as many legal respondents said their organizations had banned unauthorized use of generative AI as said they were using the technology at all.
Linna says he has heard from some law firms that are banning anything related to generative AI because of bad press around ChatGPT, such as Mata v. Avianca, the New York federal court case in which attorneys were sanctioned for submitting a brief that cited fictitious cases generated by ChatGPT. However, those policies often fail on two levels, Linna says. The first is practical: Ban the technology on company laptops, and people can still pull out their phones to use it. “When people need to get the work done, they often figure out ways to work around these barriers,” he says.
But perhaps more pressing are the business implications of not adopting the technology. Linna fears that those waiting until the technology is perfect may be left watching as a golden opportunity passes them by. “I’m talking to lots of law firms and some of them say things to me like, ‘We don’t need to be at the bleeding edge, we’ll let other people figure out what tools to buy,’” Linna explains. “And that’s like saying I’m not going to practice playing the game of golf, I’ll just figure out the best set of clubs to buy next year and then I’ll go out there and I’ll compete in the Tour.”
As the adage goes, practice makes perfect. And right now, just about everybody is still practicing. Even so, law firms and corporate law departments often don’t want to admit they don’t have all of the answers – in our surveys, for example, a massive 83% of corporate law respondents said they don’t know how their outside law firms are handling generative AI on client matters.
This isn’t the time for reticence, Linna insists, even if the conversation is initially uncomfortable. “I think these are challenging conversations to have because when you talk to your client, when you go pitch for business, when you do work for your client, you want to be able to be there as the expert,” he explains. “In this area, there’s a ton of uncertainty, so it’s even more difficult to have those conversations with the client. But that also means there are tremendous opportunities for law firms to talk to their clients, learn about their pain points, and find ways to collaborate on technology and analytics projects. Not enough of these conversations are happening.”
That means the hard work of planning, of course – policies around proper generative AI usage, training on different types of tools, identifying use cases throughout the organization. However, it also means a truly collaborative effort to push generative AI initiatives forward. Proper generative AI adoption, Linna believes, starts with education, but with listening happening at the leadership level as much as at the organizational level.
“More than ever what we see with these tools is that it’s not going to be a top-down sort of thing,” Linna says. “I mean, the firm could decide, let’s use this Thomson Reuters tool or let’s use some other generative AI tool. But it’s really the people who are closest to the work who know best the kind of work they do and how these tools can be helpful. Empower people closest to the work to transform how they deliver value to clients. That’s where we’re really going to see innovation.”