
Building your legal practice’s AI future: Beginning with strategy and people

Toby Brown  CEO / DV8 Legal Strategies

· 7 minute read


The strategy driving a law firm’s AI future must focus on growth in key areas of strength as well as on having the right team of people to move the initiative forward

You have already heard this one: Generative artificial intelligence (GenAI) is changing the legal industry. However, what you probably have not heard is how — as in how you and your law firm should be pursuing this.

This is a complicated question — enough so that it will take a few installments of this blog series to cover. Predicting the future when it changes once a day is challenging at best. That being said, this series should, at a minimum, give law firms (and, to some degree, corporate legal departments) some guidance for navigating the challenges of charting their own GenAI path.

In a previous paper, I explored how economic forces produced by GenAI would impact the large law firm sector. The core of that paper focused on two basic dynamics: First, GenAI is poised to bring material gains in productivity to the legal industry. (And by productivity, I mean producing more legal work product output in fewer hours, not more billable hours as the industry has traditionally used this term.) Second, this increase will lead to lower revenue per matter, and more importantly, lower profit margins. The primary recommendation from this paper was that law firms should choose where they invest their AI dollars wisely.

And this is where we begin this current exploration.

The strategic investment opportunity

Studies of GenAI clearly demonstrate how this innovative technology will increase efficiencies in legal services. However, the studies also demonstrate that quality improves — an aspect that is often overlooked. In practice, this means that effective AI investments will create competitive advantage for many law firms. In whichever practices firms choose to invest, they will have a distinct advantage in winning work from their competitors.

So, logically, firms should invest their AI dollars in those practices in which they are already strong, or in which they want to be strong. Of course, this means they should align AI with their firm’s strategic plan, which sounds simple, but strategic planning for most firms is not that strategic.

In a recent webinar, a law firm innovation leader said the firm had asked for lawyers to volunteer to work on AI projects. And while the firm was looking for lawyers willing and able to participate, this was a random appeal rather than a strategic one, and it carried the likely outcome that AI investments would be made in sub-optimal practices. Firms should, in fact, resist these approaches and instead have an articulated strategy for AI.

Another reason to have a strategic approach is that too often, technology and innovation initiatives are deemed successful when they are accessible to only a small fraction of a law firm’s workforce. Having a team of several dozen AI users may create pockets of competitive advantage, but it falls short of enabling the law firm to holistically keep pace with the businesses it serves. Over time, a lack of widespread access to AI could also lead to a productivity imbalance with a devastating impact on firm culture, which, of course, relies on productivity as the great equalizer among the firm’s workforce.

Perhaps the biggest driver for being strategic is that AI is expensive — very expensive. The tech alone costs big dollars, but that will not be the biggest cost — that will be the people.

Many law firms also have the option of making AI investments that are focused on administrative work done by timekeepers or back-office work done by internal business departments, such as marketing. Those options may be good opportunities for firms to learn how to implement AI effectively, but in the long term I do not suggest that firms focus their AI investments here. Firms do not need competitive advantage in how they open matters. They should leave that to the firm’s outside software vendors. In fact, I would push those vendors to offer these administrative work solutions — and if they don’t or won’t, find someone who will.

The people part

To create successful use cases for AI development, firms will need to focus on their people and establish certain roles with multiple skillsets, many of which may not currently exist. Some of these new roles may include:

1. Subject matter experts (SMEs) — also known as lawyers

This role, the SME, fortunately already exists. Going forward, however, SMEs will be leveraged in a brand-new way. These roles will focus on the various stages and tasks performed in a chosen matter type. SMEs will need deep knowledge, so this role will likely need to be filled by at least a senior-level associate. (Remember when we touched on how expensive AI will be? We have now arrived there.) The SME will identify the points in the life of a matter at which AI might be best utilized and can also perform quality control on the outputs.

2. The use case expert (UCE)

To be fair, I have made this name up since I have not heard a good one used yet. The skills needed here are a solid understanding of how GenAI works and of the situations in which it works best. Once SMEs have identified possible task options, UCEs can weigh in on which of these are best suited for AI and how to approach them. GenAI is better at some things and does poorly at others — and the UCE will be the person with that knowledge. A good place to start when developing this role will be within the firm’s knowledge management team.

3. The commercial role

As noted earlier, there are commercial impacts to be considered when implementing GenAI. I ran an analysis on an M&A matter to determine the impacts on revenue and margin. In the model, I projected that AI could disrupt 5% of partner tasks and 20% of associate tasks. That assumption comes from an analysis of $500 million of legal billings, showing that 40% of time entries contain the words “draft” or “review,” which are likely targets of GenAI. My analysis showed that revenue goes down 13% and profit goes down between 8% and 11%. The point here is that AI investments should not be made in a profit vacuum. Someone on the team needs to understand this and be able to model the impacts in order to better guide investment decisions (a minimal sketch of this kind of model appears after this list). A good place to find these commercially oriented people is on the firm’s pricing team.

4. The security role

As most people have likely heard, the current publicly available large language models (LLMs) have some issues around security. Submitting confidential client information into any LLM needs to be done with a full understanding of the security issues. Most of the big providers have tried to provide assurances via contracts and terms of service (ToS). However, security reviews need to be more than technical. I watched one program in which a company (not a law firm) had people designated to monitor all relevant ToS for any changes that could expose information, because ToS are apparently changing more frequently these days. Not surprisingly, a firm will need someone in a security role to oversee these tasks.

5. The tech role

Of course, firms will need to find a deeper bench of GenAI tech nerds to help evaluate the available technologies and identify how best to implement them. This person will work hand-in-hand with the UCE to deepen the team’s collective understanding of the technology’s capabilities. This role will also be tasked with keeping up on the latest available options.
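To make the commercial role’s work more concrete, here is a minimal sketch of the kind of revenue-and-margin calculation described in role 3 above. The 5% partner and 20% associate disruption rates are the figures cited in this post; everything else — the hours, rates, cost ratios, and the simplifying assumption that direct costs scale with hours billed — is a hypothetical placeholder, not the actual model behind the 13% and 8%–11% results.

# Hypothetical illustration only: disruption rates come from the post above;
# hours, rates, and cost ratios are placeholders, and direct costs are assumed
# to scale with hours billed (a simplification of real firm economics).

def matter_economics(partner_hours, associate_hours,
                     partner_rate, associate_rate,
                     partner_cost_ratio, associate_cost_ratio,
                     partner_disruption=0.05, associate_disruption=0.20):
    """Compare a matter's revenue and profit before and after GenAI disruption.

    Disruption is modeled as billed hours that disappear; direct costs are
    assumed to be a fixed ratio of fees for each timekeeper class.
    """
    def economics(p_hours, a_hours):
        revenue = p_hours * partner_rate + a_hours * associate_rate
        cost = (p_hours * partner_rate * partner_cost_ratio
                + a_hours * associate_rate * associate_cost_ratio)
        return revenue, revenue - cost

    base_revenue, base_profit = economics(partner_hours, associate_hours)
    ai_revenue, ai_profit = economics(
        partner_hours * (1 - partner_disruption),
        associate_hours * (1 - associate_disruption),
    )

    return {
        "revenue_change_pct": round(100 * (ai_revenue - base_revenue) / base_revenue, 1),
        "profit_change_pct": round(100 * (ai_profit - base_profit) / base_profit, 1),
    }


# A made-up M&A matter: 200 partner hours at $1,000/hr, 600 associate hours at $500/hr,
# with direct costs assumed at 40% of partner fees and 60% of associate fees.
print(matter_economics(200, 600, 1000, 500,
                       partner_cost_ratio=0.4, associate_cost_ratio=0.6))
# -> {'revenue_change_pct': -14.0, 'profit_change_pct': -12.5}

In this made-up example, revenue falls somewhat more than profit because the disrupted hours are weighted toward lower-margin associate work. The actual impacts will depend on each firm’s leverage, rates, and cost structure, which is exactly what the commercial role exists to model.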


This is the first in a series of three blog posts about building your legal practice’s AI future. In the next installment, we will look at the actual technologies involved.
