Legal teams often feel they’re in the middle of a maelstrom – and this has never been more true than with recent advances in artificial intelligence (AI), specifically generative AI. Like so many others, legal professionals are trying to understand how AI will affect their work.
Generative AI has placed corporate lawyers on the front lines of AI decision-making. In-house and external lawyers are fielding questions from clients about when and how AI may be used in different contexts. These lawyers are getting – and asking – questions about a technology that continues to evolve, and whose capabilities even some of its creators don’t fully understand.
At IBM, we’ve been in the business of AI for more than a decade. While we’re always working to improve our understanding of AI, its capabilities, and the role it plays in our business and for our clients, that experience has allowed us to approach this rapidly changing landscape with a level head. Not only do we have a long history of developing and deploying this technology for clients, but we also have a long-standing commitment to using AI responsibly and ethically. Together, that history and commitment have helped us create a holistic approach to AI.
Pulling from our own playbook, the guide below can help serve as a starting point for legal leaders who are looking to meet the moment for their clients and organizations.
Developing principles from the outset
At IBM, we’ve been able to take this holistic approach to AI because our products and services are governed by our principles for trust and transparency.
At the core of our business, those principles say:
- the purpose of AI is to augment human intelligence, not replace it
- data and insights belong to their creators
- new technology, including AI systems, must be transparent and explainable
Elements of this framework apply to any organization endeavoring to use AI responsibly. Companies must begin laying out a set of values and principles on paper to help guide decision-making around this technology. These are the building blocks needed to develop internal governance structures and a playbook for mitigating risk.
Understanding data use and prioritizing transparency
While AI isn’t new, many lawyers are grappling with it for the first time because of generative AI. This technology has moved AI from IT departments into the hands of anyone with an internet browser, elevating questions of data use and transparency. Those questions will exist across all industries, and they’re bound to hit lawyers’ desks in the immediate future, if they haven’t already.
Data is the critical ingredient of AI, and data privacy and security are a big part of using AI responsibly. For example, what risk is posed if employees enter confidential data into chatbot tools? Doing so may streamline their workflows, but it raises serious issues of intellectual property exposure. A similar risk exists with personally identifiable information.
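To make that exposure concrete, below is a minimal sketch, assuming a hypothetical screening step, of the kind of check a deployer might place between employees and an external chatbot. The patterns and the flag_sensitive helper are illustrative assumptions, not an IBM tool or a complete control.

```python
import re

# Illustrative patterns only: a real deployment would rely on the
# organization's own data-classification rules and PII-detection tooling.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidentiality marking": re.compile(r"\b(confidential|attorney[- ]client)\b", re.IGNORECASE),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a chatbot prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this CONFIDENTIAL term sheet and email it to jane.doe@example.com."
findings = flag_sensitive(prompt)
if findings:
    print("Hold before sending to an external chatbot:", "; ".join(findings))
```

A production control would live in a gateway or browser extension and combine classification rules with human review, but the underlying question is the same: what leaves the organization’s systems, and who decided it could?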
Like data privacy, ensuring that AI is transparent and not an unexplainable black box is a long-standing and difficult problem that is tightly linked to the quest for responsible AI. For AI to be responsible, we need to be able to explain its decisions. The importance of explainability varies widely by use case, and the risk involved needs to be understood. For example, if an AI model is being used to recommend a movie, we’re probably less concerned if the AI can’t explain how it reached a particular recommendation. However, if AI is being used to determine whether someone can get a mortgage, then it’s important to understand how that algorithm works, both from a legal and an ethical point of view.
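To make that contrast concrete, here is a deliberately simple, hypothetical scoring sketch in which every factor’s contribution to the outcome can be read off directly. The factor names and weights are assumptions for illustration; real credit models are far more complex, which is exactly why explainability becomes hard and why it matters more for a mortgage than for a movie recommendation.

```python
# A toy, hypothetical mortgage-scoring model: because the score is a weighted
# sum, each factor's contribution to a given decision can be reported directly.
WEIGHTS = {
    "income_to_debt_ratio": 0.6,
    "years_of_credit_history": 0.3,
    "missed_payments": -0.8,
}

def score_applicant(applicant: dict) -> tuple[float, dict]:
    """Return the overall score and each factor's contribution to it."""
    contributions = {factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score_applicant(
    {"income_to_debt_ratio": 2.1, "years_of_credit_history": 7, "missed_payments": 1}
)
print(f"overall score: {total:+.2f}")
for factor, contribution in parts.items():
    print(f"  {factor}: {contribution:+.2f}")  # which factor drove the outcome
```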
As one method of tackling this challenge, IBM Research created AI FactSheets. Just as nutrition labels tell you what’s inside your food, an AI FactSheet tells you what’s inside an AI solution. These FactSheets provide transparency around the data used to train the model and its parameters, giving users a view of the inputs that affect decision-making and outputs. They also enable customers to evaluate a model and decide whether it is appropriate for a particular application.
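As a rough illustration of the idea, not IBM’s actual FactSheet schema, the sketch below shows the kinds of facts such a record might capture. Every field name and value here is a hypothetical example.

```python
# Hypothetical illustration of the kinds of facts a model factsheet can record.
# Field names and values are assumptions, not IBM's actual FactSheet format.
factsheet = {
    "model_name": "mortgage-prescreen-classifier",  # hypothetical model
    "intended_use": "Pre-screening applications for human review",
    "out_of_scope_uses": ["Final lending decisions without human review"],
    "training_data": {
        "sources": ["Internal application records, 2015-2022 (hypothetical)"],
        "contains_personal_information": True,
    },
    "evaluation": {
        "metric": "accuracy",
        "result": 0.91,  # hypothetical figure
        "test_set": "2023 hold-out sample",
    },
    "known_limitations": ["Not evaluated on self-employed applicants"],
}

# A legal or compliance reviewer can read the sheet before approving a use case.
for field, value in factsheet.items():
    print(f"{field}: {value}")
```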
Using AI as a force for good
While this all might feel outside some comfort zones, it’s important to remember there are AI upsides for legal teams as well. At IBM, for example, we’re using AI to help decrease our risk by applying the technology to our compliance programs. Our AI solution for compliance encompasses data, AI governance, and privacy, and it helps us manage the incredible amount of regulation coming at companies globally, particularly new rules addressing privacy and data. It also helps us scale compliance with those laws across our global operations.
Developing responsible AI for clients was a key reason IBM created its own AI platform, watsonx, and being able to use it as part of our privacy, AI, and data governance program is a critical benefit. Using our technologies and governance model, we put into one enterprise-ready platform a set of AI assistants designed to help our clients scale and accelerate the impact of responsible AI with data they can trust across their businesses.
Understanding how current legal frameworks apply
While organizations may be concerned about new AI regulations, it’s important to recognize that many existing legal frameworks already apply. Many of the laws that currently address AI are privacy laws with provisions for tools like automated decision-making or other existing and emerging tech.
An example is the EU’s General Data Protection Regulation (GDPR). Although commonly thought of as a privacy law, the regulation also addresses algorithmic and automated decision-making and gives individuals rights to opt out of certain automated decision-making. Most chief privacy offices and general counsel departments started by focusing on personal information and the GDPR. To know whether you are managing and processing personal information, you need to understand the data you’re managing and processing more broadly. Most organizations already have a program in place that addresses data as it relates to privacy risk, so rather than reinvent the wheel, it makes sense to tune that program to address AI as well.
For example, the GDPR contains a principle called data minimization, which says that any personal information collected by organizations must be relevant and necessary to its intended goals. When considering an AI use case, data minimization starts with the AI model and its training. Lawyers should ask whether personal information was used to train the model and whether the model could be trained without that information. If any personal information is collected to train the model, lawyers should ask whether it’s possible to filter or mask that information to make it less sensitive.
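As a minimal sketch of what those questions look like in practice, assuming a hypothetical tabular training set, the example below keeps only the fields the model actually needs and drops the direct identifiers. The field names and the choice of what to drop are illustrative; in a real project that call would typically be made jointly by legal, privacy, and data science teams.

```python
# Hypothetical application records; the field names are illustrative assumptions.
records = [
    {"applicant_name": "A. Example", "email": "a@example.com", "income": 58000, "loan_amount": 200000},
    {"applicant_name": "B. Example", "email": "b@example.com", "income": 72000, "loan_amount": 310000},
]

# Data minimization in practice: the model only needs the financial fields,
# so the direct identifiers are removed before the data is used for training.
FIELDS_NEEDED_FOR_TRAINING = {"income", "loan_amount"}

training_rows = [
    {field: value for field, value in row.items() if field in FIELDS_NEEDED_FOR_TRAINING}
    for row in records
]

print(training_rows)  # identifiers removed; only what the purpose requires remains
```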
Advocating for precision regulation
Corporate accountability and understanding compliance with current regulations are critical starting points. Still, regulators in Washington, D.C., and around the world have made clear their desire to ensure the responsible use of AI. So, lawyers will need to understand the repercussions of those regulatory decisions as well.
At IBM, we welcome government involvement in this space and have released a set of recommendations for the most effective ways governments and companies can work together. Precision regulation is ultimately how we’ll mitigate risk while still allowing innovation and discovery to flourish.
We believe governing bodies worldwide should focus on regulating risk and end uses – places where the technology touches consumers’ lives – rather than the technology and AI algorithms themselves. We are advocating for precision regulation that addresses the use of AI in high-risk scenarios. That’s where regulation should focus – with the most stringent regulations applied to the highest-risk applications of AI, such as credit, housing, and employment determination.
We should also hold those who create and deploy AI accountable. Legislation should consider the different roles of AI creators and deployers and hold them accountable in the context in which they develop or deploy AI.
And finally, lawmakers should prioritize open innovation rather than a licensing regime for AI. Licensing would increase costs, hinder innovation, disadvantage smaller players and open-source developers, and cement the market power of a few bigger players.
New opportunities
The sudden advance of AI, especially generative AI, heaps new pressures and expectations on lawyers. Like any dramatic shift, however, this advancement also produces new opportunities – not only to use AI to make our own work more efficient, but to solidify our roles as strategic allies and forward-thinking advisors to our clients in even the most turbulent of times.
This article was written by Christina Montgomery, the Chief Privacy & Trust Officer for IBM.