Artificial intelligence applications have held the attention of the legal industry in recent years, but has AI’s success matched its hype? That depends...
Artificial intelligence (AI) applications have held law firms, legal software vendors, and the legal press in thrall over the past few years. But has success matched the hype? Yes and no. Some commercial solutions, particularly for eDiscovery and contract analysis, have certainly proved their worth. But many law firms that counted on AI to raise their innovation initiatives to the next level have seen little or no return on their investments. Why is that?
Some commentators claim AI adoption has been slow because lawyers view the technology as a threat to their billable hours. I don’t believe that’s true. In my experience, lawyers will happily adopt a new technology that meets four criteria:
- It addresses an important problem;
- It presents an effective solution to that problem;
- It fits into the lawyer’s overall workflow; and
- It’s affordable.
Sadly, most legal AI applications fail to meet at least one of these criteria.
1. Does the application address an important problem? Of course, there are plenty of examples of AI applications built to perform important functions, such as drafting legal documents, predicting case outcomes, or answering complex legal questions. But some AI tools — and I would put many legal chatbots in this category — seem more like solutions in search of a problem.
2. Does the application present an effective solution? Many applications based on supervised machine learning fall short in two ways. First, some require extensive training and feedback from subject matter experts, whose time is expensive and limited; without that training, they solve only part of the problem. Second, these applications rely on probabilistic methods, so they sometimes make mistakes (the short sketch after this list illustrates both points). Humans make mistakes too, of course, but we tend to expect more from software.
3. Does the application fit into the lawyer’s workflow? That’s a big challenge, because most AI tools address only one task within a much larger work process. Many lawyers will not take the time to learn a new technology, let alone fold it into their habitual workflow, when it delivers only an incremental benefit and forces them to step out of that workflow to use it.
4. Is the application affordable? Many AI applications are priced for customers with a high volume of the tasks they were designed to address; they are not cost-effective when only a small group of lawyers in an organization finds them useful, or uses them only occasionally. Other applications rely on training that is highly fact-specific, so extending them to new contexts requires more machine training than the resulting benefits can justify.
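To make point 2 concrete, here is a minimal sketch of the kind of supervised classifier many of these tools are built around, written with scikit-learn. Everything in it is hypothetical: the privilege-review task, the handful of labeled examples, and the test document are invented, and a real product would need far more expert-labeled data and a far more sophisticated model.

```python
# A minimal supervised text classifier: expert-labeled examples in, a
# probability out. With so little training data the prediction can easily
# be confidently wrong, which is the point made above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical examples labeled by a subject matter expert:
# 1 = privileged, 0 = not privileged.
train_texts = [
    "Email from outside counsel discussing litigation strategy",
    "Quarterly sales report for the northeast region",
    "Memo from general counsel analyzing settlement exposure",
    "Invitation to the annual holiday party",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The output is a probability, not a certainty.
new_doc = ["Draft letter to opposing counsel about discovery deadlines"]
probability_privileged = model.predict_proba(new_doc)[0][1]
print(f"P(privileged) = {probability_privileged:.2f}")
```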
So where does this leave us? I can think of four categories of legal AI applications that routinely satisfy these tests and add value to my law firm every day.
eDiscovery — This category of AI-enabled software analyzes and categorizes documents in the eDiscovery process, and can be taught to find documents that are relevant to a discovery request. Given the high cost of document review, and the increasingly large body of documents to be reviewed, AI tools that can add efficiency to this process are in high demand. They aren’t infallible, but they have been shown to be less fallible than human reviewers.
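A back-of-the-envelope calculation shows why the demand is so high. Every figure below is invented purely to illustrate the arithmetic; none of it is drawn from any real matter or product.

```python
# Rough comparison of fully manual review vs. review after AI-assisted culling.
# All numbers are hypothetical, chosen only to make the arithmetic visible.
corpus_size = 500_000            # documents collected for review
docs_per_reviewer_hour = 50      # assumed manual review pace
reviewer_hourly_rate = 60        # assumed cost of a contract reviewer, USD/hour
culled_fraction = 0.70           # assumed share the software confidently sets aside

manual_cost = corpus_size / docs_per_reviewer_hour * reviewer_hourly_rate
assisted_cost = (corpus_size * (1 - culled_fraction)
                 / docs_per_reviewer_hour * reviewer_hourly_rate)

print(f"Fully manual review:  ${manual_cost:,.0f}")
print(f"AI-assisted review:   ${assisted_cost:,.0f}")
```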
Contract analysis — Another category of AI tools, useful in due diligence reviews and contract management, categorizes contracts and extracts their important terms. In a legal organization that performs a high volume of contract review work, these tools can add substantial value. And if the tools require additional training, that effort can be recouped over a large base of work.
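Commercial contract-analysis products rely on trained models, but even a crude pattern-based sketch shows the shape of the output they produce. The sample clause and the two fields below are invented for illustration.

```python
# Toy term extraction: pull a governing-law clause and a termination-notice
# period out of contract text. Real tools use trained models rather than
# hand-written patterns; this only illustrates the structured output.
import re

contract_text = """
This Agreement shall be governed by the laws of the State of New York.
Either party may terminate this Agreement upon ninety (90) days' written notice.
"""

patterns = {
    "governing_law": r"governed by the laws of ([^.\n]+)",
    "termination_notice": r"terminate this Agreement upon ([^.\n]+?) notice",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, contract_text, re.IGNORECASE)
    extracted[field] = match.group(1).strip() if match else None

print(extracted)
# {'governing_law': 'the State of New York',
#  'termination_notice': "ninety (90) days' written"}
```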
Enterprise search — Many firms use enterprise search tools to find relevant information across their otherwise siloed document management systems, financial systems, intranets, and more. Enterprise search is not always thought of as AI, but many of these tools rely on the same algorithms that power machine-learning applications, including Bayesian inference.
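To give a flavor of the Bayesian idea, the toy query-likelihood model below ranks indexed snippets by how probable the query terms are under each document's word distribution; with a uniform prior over documents, that ordering matches the Bayesian posterior P(document | query). The three file snippets are invented, and real enterprise search engines are far more elaborate.

```python
# Toy query-likelihood ranking: score each document by the probability of the
# query terms under its unigram word distribution, with add-one smoothing.
import math
from collections import Counter

# Invented snippets standing in for indexed enterprise content.
docs = {
    "engagement_letter.docx": "fee engagement letter client retainer terms",
    "q3_budget.xlsx": "budget forecast revenue expense quarter totals",
    "retainer_policy.pdf": "policy retainer fee billing client guidance",
}

VOCAB = {word for text in docs.values() for word in text.split()}

def query_log_likelihood(query, text, alpha=1.0):
    """Log P(query | document) under a smoothed unigram model."""
    words = text.split()
    counts = Counter(words)
    return sum(math.log((counts[term] + alpha) / (len(words) + alpha * len(VOCAB)))
               for term in query.split())

query = "client retainer fee"
ranked = sorted(docs, key=lambda name: query_log_likelihood(query, docs[name]),
                reverse=True)
print(ranked)  # documents ordered from most to least probable given the query
```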
Legal research — We have also seen the emergence of several powerful legal research applications that rely on natural-language processing, an AI technology that enables these products to ingest, analyze, and draw statistical inferences from the entire body of US case law.
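A simple statistical-retrieval sketch hints at how such tools connect a natural-language question to relevant authority. The three abbreviated "opinions" are invented, and TF-IDF with cosine similarity is only a stand-in for the much richer NLP these products actually use.

```python
# Rank invented case snippets against a natural-language question using
# TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

opinions = {
    "Smith v. Jones": "summary judgment requires no genuine dispute of material fact",
    "Doe v. Roe": "personal jurisdiction requires minimum contacts and purposeful availment",
    "Acme v. Widget Co.": "contract damages protect the expectation interest of the non-breaching party",
}

question = "What must a party show to defeat a motion for summary judgment?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(opinions.values()) + [question])
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for case, score in sorted(zip(opinions, similarities), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {case}")
```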
What I have not yet seen is any legal application of AI as impressive as the programs that have beaten the world’s top chess and Go players. Those programs rely on deep learning to find novel solutions of their own, with little human guidance. I think the reason such deep-learning tools haven’t appeared in the legal context is that they work best where there is a single goal — e.g., to checkmate your opponent — and clear, unvarying rules. Legal problems tend to fail one or both of those conditions.
That’s why I have been closely following the work of the UK neuroscientist Karl Friston. Dr. Friston’s Free Energy Principle, which was inspired by discussions with the computer scientist Geoffrey Hinton, one of the fathers of deep learning, could explain how human beings manage to operate so well in a world of constantly shifting goals and rules.
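For readers curious about the underlying math, the quantity at the center of the principle is the variational free energy. One standard way of writing it, with q(s) for the agent's beliefs about hidden states s and o for observations, is shown below.

```latex
% Variational free energy: q(s) encodes the agent's beliefs about hidden
% states s; o denotes observations. Minimizing F drives the beliefs toward
% the true posterior (the KL term) while making the observations
% unsurprising (the log-evidence term).
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] \;-\; \ln p(o)
```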
Leading AI researchers have begun to apply Friston’s equations to their AI models. Perhaps one day, for example, we will see an AI application that regards any litigation contest as a game and devises creative and decisive strategies to win. The combination of the Free Energy Principle and deep learning could usher in a new and more exciting era of legal AI.