AI & Future Technologies

Agentic AI in legal: What it is and why it may appear in law firms soon

Zach Warren  Manager for Enterprise Content for Technology & Innovation / Thomson Reuters Institute

· 7 minute read

Legal teams of the future will have attorneys and AI agents working hand-in-robot-hand to achieve success for clients. Sound futuristic? A recent panel revealed it’s closer than you might think

NEW YORK — Since the introduction of generative artificial intelligence (GenAI) into the legal sphere, many law firm leaders have pondered what legal teams of the future will look like. Some believe there will be fewer associates, as GenAI automates a number of low-level tasks typically assigned to new attorneys. Others predict they’ll see an influx of technologists and data scientists into the legal profession, with new technology-centric skills becoming increasingly necessary to handle tasks at scale.

And some legal technologists are even looking into the future and seeing a slightly different team, prompting the question: What if the ideal legal team isn’t entirely made up of people at all?

That’s the idea behind agentic AI, a concept that has taken hold within many technology companies and is beginning to make its way into the legal industry. Within agentic AI, GenAI developers create an autonomous agent that is assigned a specific goal, such as research or document drafting. Given the power of GenAI technology, these agents can accomplish specified goals with little human oversight, containing the power to check their own work before turning it over to human attorneys or legal professionals for a final review.
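For readers who think in code, a minimal sketch of that loop might look like the following: take a goal, produce a draft, critique it, revise, then stop and wait for human review. This is purely illustrative; the llm callable, prompt wording, and function names are hypothetical placeholders, not any vendor's actual product.

```python
# A minimal sketch of one agentic loop: take a goal, draft, self-check,
# revise, then stop and wait for human review. The `llm` callable and
# prompt wording are hypothetical placeholders, not any product's API.
from typing import Callable

def run_agent(goal: str, llm: Callable[[str], str], max_revisions: int = 3) -> dict:
    draft = llm(f"Complete this legal task: {goal}")
    for _ in range(max_revisions):
        critique = llm(f"Check this work for errors or gaps. Say 'NO ISSUES' if none:\n\n{draft}")
        if "no issues" in critique.lower():
            break
        draft = llm(f"Revise the work to address this critique:\n{critique}\n\nWork:\n{draft}")
    # The agent never acts on its own output; the final step is always
    # handing the draft back to an attorney or legal professional for review.
    return {"goal": goal, "draft": draft, "status": "awaiting human review"}
```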

Sound futuristic? Perhaps. But according to law firm and technology panelists at InsidePractice’s Legal AI conference, agentic AI testing is already underway at some large law firms, and AI agents could be members of legal teams sooner rather than later.

The role of the agent

For those having difficulty wrapping their heads around where an AI agent might fit into a legal matter, the panel suggested thinking about an executive assistant. A long-standing executive assistant can be life-changing, anticipating the executive’s preferences and needs and double-checking work along the way. But if that assistant is sick and a new person fills in, things may fall apart, not necessarily because of the quality of the replacement, but because the fill-in can’t anticipate those preferences and needs as well.

That consistency is the solution today’s GenAI-enabled agents could provide. AI technology has advanced to the point where natural-language conversation is relatively easy; the trick now is giving AI agents the context to understand what attorneys need, along with the ability to tell right from wrong.


Legal technologists are approaching this point. The conference’s panelists said some law firms are in the beginning stages of building these agents and integrating them into their own workflows. They explained that engineers are currently building out the thickness of the AI wrapper around these agents — essentially, how many inputs go into the agent’s decision making. By increasing these inputs, even across multiple databases and resources, the agent becomes more complex to engineer but also increasingly able to reason in context.

Plus, the panel noted, a thicker wrapper also allows AI agents more ability to check and explain their own work, introducing more autonomy to the workflow and differentiating these agents from large language models (LLMs) that produce just one answer to a prompt. Or to put it another way: LLMs can write an essay, but agentic AI can research an essay.
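To make the wrapper-thickness idea concrete, here is a hedged sketch: a thin wrapper passes a single prompt straight through, while a thick wrapper assembles context from several sources before the model is asked to reason, leaving room for a self-check and explanation pass. The source names and functions below are invented for illustration.

```python
# Sketch of a "thin" vs. "thick" wrapper: the thicker the wrapper, the more
# context sources are assembled around each call, and the more room there is
# for a self-check pass. All source and function names are invented.
from typing import Callable

def thin_wrapper(question: str, llm: Callable[[str], str]) -> str:
    # Closer to a bare LLM: one prompt, one answer.
    return llm(question)

def thick_wrapper(question: str, llm: Callable[[str], str],
                  sources: dict[str, Callable[[str], str]]) -> str:
    # Pull context from each configured source (a matter database, a document
    # management system, a citation index, etc.) before asking the model.
    context = "\n\n".join(
        f"[{name}]\n{fetch(question)}" for name, fetch in sources.items()
    )
    answer = llm(f"Context:\n{context}\n\nQuestion: {question}")
    # The extra context also lets the agent explain its answer, part of what
    # separates an agent from a single-shot LLM response.
    explanation = llm(f"Explain how this answer follows from the context above:\n{answer}")
    return f"{answer}\n\nExplanation: {explanation}"
```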

That type of technological capability could be good news for attorneys, whose daily work regularly mixes routine tasks with complex reasoning. The panel mentioned potential use cases including reviewing large numbers of documents for privilege, checking citations for accuracy, or comparing contracts to one another. One panelist noted their firm has begun deploying two agentic AI-assisted contract tools that have achieved up to 92% accuracy, a higher rate than most LLM-based tools.

Breaking down the workflows

However, there is a downside to deploying agentic AI: These complex agents first need to be built. Even the conference panelists called agentic AI “not necessarily for the weak,” noting that it takes a whole host of clean data and difficult engineering to make agentic AI function. Plus, given that an AI agent’s success often correlates directly with the thickness of its wrapper — the inputs into its system — the work it takes to achieve that complexity may not always be worth the payoff.

Beyond the engineering, the panel noted that the typical workflow of legal tasks may present a barrier that is perhaps unique to the industry. Because of the complex nature of legal work, many attorneys don’t fully think through all the steps it takes to complete a specific task. “Conduct legal research,” for instance, is treated as one task rather than its many parts, which can include understanding the case, determining the jurisdiction, identifying where in the case timeline the research will occur, sorting through past precedent, verifying case citations, and more.
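One way to picture that decomposition is as an explicit, ordered pipeline in which each step is owned by a purpose-built agent. The sketch below simply mirrors the steps listed above; the names and structure are illustrative, not a description of how any particular firm has built it.

```python
# Illustrative decomposition of "conduct legal research" into discrete steps,
# mirroring the breakdown above. Each step could be owned by a specialized
# agent; the step names and structure are for illustration only.
from typing import Callable

RESEARCH_STEPS = [
    "understand the case",
    "identify the jurisdiction",
    "place the research in the case timeline",
    "sort through past precedent",
    "verify case citations",
]

def run_research_pipeline(matter: str, agents: dict[str, Callable]) -> dict:
    results: dict[str, str] = {}
    for step in RESEARCH_STEPS:
        agent = agents[step]                     # one specialized agent per step
        results[step] = agent(matter, results)   # later steps see earlier output
    return results
```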

AI agents don’t function the same way that human beings do. These agents specialize in one specific task and follow discrete steps — and those building agentic AI tools need to discern what all of those steps actually are before a tool can be built. Plus, not every attorney works in the same fashion, meaning that an agent may fit differently into each attorney’s individual workflow.


This creates even more work and complexity for tech developers, who need to sit down with attorneys and sort through how exactly they come to a conclusion. “You’re not going to be able to automate it without understanding it,” one panelist noted. “That might mean standardizing — good luck, it’s hard.”

As a result, law firms on the forefront of exploring agentic AI in legal are developing parallel paths. There is the technology piece, which many firms are beginning to experiment with in earnest; but there is also a crucial project management component that requires an understanding of both the workflows for each piece of legal work and the teams needed to complete those tasks.

Work is being done right now to inch law firms closer to that reality, but it remains in its early stages. Agentic AI for legal is currently in the testing phase, and it exists only in law firms or legal organizations with thoroughly developed technology functions and the data scientists and engineers able to build it. However, given the rapid pace of technology advancement, the panel said it shouldn’t surprise anybody to see AI agents pop up on legal teams sometime soon.

In the meantime, technologically advanced law firms may want to begin deciphering what agentic AI may mean for their own organizations so that they can be prepared when these AI agents are finally built. “It will be a gradual change, as everything is in our industry,” another panelist said, adding, however, that “in the midst of that, we need to train.”


You can find more about the impact of GenAI in legal here.
