
Building your legal practice’s AI future: Understanding the actual technologies

Toby Brown  CEO / DV8 Legal Strategies

· 5 minute read

Implementing a successful AI strategy for a law firm depends not only on having the right people, but also on understanding the tech and how to make it work for the firm

Setting the landscape of your legal practice’s artificial intelligence-driven future is no small task. As I explained in the first part of this series, you must start with a strategic focus on the areas in which the firm is already strong and the areas in which it wants to be strong. And the first key step in implementing that strategic vision is assembling the right people.

Yet, one of the key roles for any successful artificial intelligence (AI) deployment team will necessarily be a tech role — someone who understands the abilities of the technology and is willing to be the go-to source to determine which technologies are available to meet which identified needs of the firm.

Looking at the existing GenAI platforms

While we’re not delving deep here into how generative artificial intelligence (GenAI) and large language models (LLMs) work, we will talk generally about different categories of tech and emerging GenAI functionalities that are specific to legal.

Indeed, this is another place where you will want to choose wisely and why you will need to have the GenAI tech nerd on the team. The underlying tech for AI is changing rapidly. For example, one of my non-legal AI newsletters has been showing two photos side-by-side for the past several months, asking readers to guess which one is GenAI-created and which is an actual photo. A few months before I wrote this piece, I was pretty good at picking which one was real. Then I had to give up because the GenAI photos became that good.

The point here is that you will want to select a GenAI technology whose core tech is evolving with the market. Otherwise, you could end up with a dead-end AI project when the market tech leapfrogs over what you have.


While there are categories of GenAI that include general tools like Microsoft’s Copilot or OpenAI’s ChatGPT, there are also legal-specific LLMs on the market. These are LLMs tuned with legal content, as opposed to the general content used by OpenAI, Google, and others. The hoped-for outcome is that an LLM trained on legal content will produce better legal-specific outputs, and at least one of these can be deployed behind your firm’s firewall to address client data security concerns. It’s also important to remember that these tools are mostly in their early stages, so there is some gamble on how well they will evolve.

Further, there are point solutions entering the market that are designed to address one task or a defined set of tasks. The advantage of this type of tool is that it’s purpose-built and should require fewer people resources to deploy. The downside is that a firm could end up with a long list of point solutions to manage, including dealing with data moving all over the place.

While this is just a rough look at the types of tools out there, it underscores how important it is to consult with your tech people on these issues, especially since this list will continue to grow and change.

Tuning and managing GenAI

Two other tech functionalities also need to be considered: fine tuning and agentic AI. Fine tuning describes additional training applied to an LLM beyond its initial training. Companies like OpenAI spend a lot of time and resources training their models, but that training gives them broad, general knowledge. If you want an LLM further trained on your own content, that additional training is called fine tuning.

Let’s use an M&A transaction as an example: A law firm may want to fine tune an LLM to the way the firm drafts its agreements; for instance, the firm may have a playbook on how to handle certain agreement clauses. To fine tune for this, a firm might need to submit a few hundred document examples to teach the model. Based on my experience with law firms, your lawyers will want to do this because a big part of their value is in the knowledge built into their documents. Your AI plans need to account for this and ensure that it happens.
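To make that a bit more concrete, here is a minimal sketch of what submitting those playbook examples could look like, assuming an OpenAI-style fine-tuning workflow via the openai Python SDK; the file name, example content, and base model are illustrative placeholders rather than a recommendation of any particular vendor.

```python
# Minimal sketch of fine tuning with firm playbook examples (illustrative only).
# Assumes the OpenAI Python SDK (v1.x); the example content, file name, and base
# model below are hypothetical placeholders, not firm guidance.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each training example pairs a drafting instruction with the firm's preferred clause.
playbook_examples = [
    {
        "messages": [
            {"role": "system", "content": "You draft clauses following the firm's M&A playbook."},
            {"role": "user", "content": "Draft an indemnification clause for a mid-market asset purchase."},
            {"role": "assistant", "content": "<firm-approved indemnification clause text>"},
        ]
    },
    # ...a few hundred more examples drawn from the firm's own agreements
]

# Write the examples to a JSONL training file, one example per line.
with open("ma_playbook.jsonl", "w") as f:
    for example in playbook_examples:
        f.write(json.dumps(example) + "\n")

# Upload the training file and start the fine-tuning job.
training_file = client.files.create(file=open("ma_playbook.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed fine-tunable base model
)
print("Fine-tuning job started:", job.id)
```

The API call itself is trivial; the real work, and the real value, is in assembling the few hundred firm-approved examples, which is exactly where your lawyers’ knowledge comes in.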

The other tech functionality, agentic AI, is an emerging approach in which a set of tasks is handled by an agent. Consider our M&A example: Some diligence needs to be done, which leads to a negotiation strategy, which in turn leads to the use of certain clause types in an agreement. Each of these steps will be a separate AI function, but you will want them done in a holistic fashion, and that requires an agent or some other way to ensure a set of tasks is carried out across several separate AI functions. Again, we’re not diving deep into the technical aspects here, but hopefully this gives you an idea of the concept.
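As a rough illustration of the concept, here is a minimal sketch of an agent-style orchestrator for those M&A steps; the call_llm helper, the prompts, and the step names are hypothetical placeholders meant only to show how separate AI functions can be chained together, not how any particular product works.

```python
# Minimal sketch of an agentic workflow for the M&A example (illustrative only).
# call_llm is a hypothetical stand-in for whichever GenAI platform the firm
# deploys; the point is the orchestration across steps, not any specific API.
from typing import Callable


def run_ma_workflow(deal_documents: str, call_llm: Callable[[str], str]) -> dict:
    """Chain separate AI functions so each step feeds the next."""
    results = {}

    # Step 1: diligence review of the deal documents.
    results["diligence"] = call_llm(
        f"Summarize key risks and findings from these deal documents:\n{deal_documents}"
    )

    # Step 2: negotiation strategy informed by the diligence findings.
    results["strategy"] = call_llm(
        f"Given these diligence findings, propose a negotiation strategy:\n{results['diligence']}"
    )

    # Step 3: clause selection driven by the negotiation strategy.
    results["clauses"] = call_llm(
        f"Recommend agreement clause types that implement this strategy:\n{results['strategy']}"
    )

    return results


if __name__ == "__main__":
    # Stand-in LLM call so the sketch runs without any external service.
    def fake_llm(prompt: str) -> str:
        return f"[model output for: {prompt[:60]}...]"

    print(run_ma_workflow("(deal documents would go here)", call_llm=fake_llm))
```

In a real deployment, each step might use a different tool or model, and the agent might also decide when a step needs to be rerun or handed to a lawyer for review.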

I point out these two concepts to help broaden your horizons on both the technology and the possible uses your firm might find for them. Next comes the final part: compiling the right data.


This is the second in a series of three blog posts about building your legal practice’s AI future. In the final installment, we will look at data concerns and other key considerations.