Data security: A critical factor for trustworthy AI
It should be clear by now that AI is here to stay for professionals in numerous industries. Need evidence? In July, Thomson Reuters published its Future of Professionals Report, which examines the state of AI among professionals in practices including legal, risk, and tax.
Of the 2,200-plus professionals surveyed for the report, 77% believe that AI tools will have a high or transformational impact on their work over the next five years, especially in managing and analyzing ever-increasing volumes of data.
However, those same respondents express serious concerns about AI’s ability to protect sensitive company and client data from being improperly shared or even stolen. They say that demonstrable data security, or the lack of it, will influence how quickly their organizations adopt AI tools.
Data security and the ethical use of AI have been critical considerations for Thomson Reuters as we develop solutions such as CoCounsel, our generative AI (GenAI) assistant for legal, tax, and accounting professionals.
We talked with Carter Cousineau, Thomson Reuters Vice President of Data and Model Governance, about how the company assures users of its AI technology that their data is kept safe from cybercriminals. Carter’s work includes building out Thomson Reuters’ AI data security and ethics programs.
Q&A with Carter
What measures does Thomson Reuters implement to protect user data and comply with data privacy regulations?
Any project that involves creating or using AI and data goes through what we call a “data impact assessment.” The term makes it sound simpler than everything it actually covers. The data impact assessment model we’ve developed incorporates data governance, model governance, privacy issues, questions raised by Thomson Reuters’ General Counsel, intellectual property questions, and information security risk management. We started our development process by embedding our privacy impact assessment into the first version.
In a data impact assessment, we use the term “use case” for a Thomson Reuters business’s project or initiative. We’ll ask the business several questions in our assessment process, such as:
- What are the types of data in this use case?
- What are the types of algorithms?
- What is the jurisdiction where you’re trying to apply this use case?
- Ultimately, what are the intended purposes of the product?
Identifying risks is where many privacy and governance issues come into play.
We then build out clear mitigation plans and techniques for each of the identified risks. This process includes ensuring the data is anonymized where necessary, appropriate access and security controls are in place, and data-sharing agreements have been established. From a privacy perspective, we work to understand the sensitivities of the data when a use case involves, for example, personal data. Then we apply the needed controls.
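To make the flow Carter describes concrete, here is a minimal sketch of how a use-case assessment record and its risk-to-mitigation mapping might be modeled in Python. Every name in it, from the fields to the risk labels and controls, is a hypothetical assumption for illustration, not Thomson Reuters’ actual assessment model.

```python
from dataclasses import dataclass, field

# Hypothetical data impact assessment record. Field names, risk
# labels, and controls are assumptions for this sketch only.

@dataclass
class UseCaseAssessment:
    name: str
    data_types: list[str]        # e.g., "personal", "financial", "public"
    algorithm_types: list[str]   # e.g., "generative", "classification"
    jurisdictions: list[str]     # where the use case will operate
    intended_purpose: str
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

def assess(use_case: UseCaseAssessment) -> UseCaseAssessment:
    """Flag risks from the intake answers and attach mitigations."""
    if "personal" in use_case.data_types:
        use_case.risks.append("personal data exposure")
        use_case.mitigations += [
            "anonymize or pseudonymize personal fields",
            "restrict access to least privilege",
            "establish a data-sharing agreement",
        ]
    if "generative" in use_case.algorithm_types:
        use_case.risks.append("sensitive data surfacing in model output")
        use_case.mitigations.append("filter prompts and outputs for sensitive data")
    if "EU" in use_case.jurisdictions:
        use_case.risks.append("GDPR obligations")
        use_case.mitigations.append("complete a privacy impact assessment")
    return use_case

example = assess(UseCaseAssessment(
    name="contract-review assistant",
    data_types=["personal", "contractual"],
    algorithm_types=["generative"],
    jurisdictions=["EU", "US"],
    intended_purpose="summarize client contracts",
))
print(example.risks)
print(example.mitigations)
```

The point of the sketch is the shape of the process: intake questions become structured fields, and each flagged risk carries its own mitigation plan.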
How often do you audit and update security measures?
When generative AI emerged, we developed specific guidance for Thomson Reuters. We have procedural documents that we update constantly throughout the year, and those documents also include mitigation responses. We mapped each of our standard statements to the set of controls that would or could be applied in a given risk scenario, and each of those statements undergoes much more frequent review and assessment.
We also have what we call the Responsible AI Hub, which captures everything in a centralized view to build trust. Some of our audits and updates happen annually, while many others are far more frequent. We track mitigations weekly, if not daily, depending on the task and the team.
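The mapping Carter describes, from standard statements to controls that are reviewed on different cadences, might look something like the following minimal sketch. The statements, controls, and review intervals are illustrative assumptions, not Thomson Reuters’ actual procedures.

```python
from datetime import date, timedelta

# Hypothetical mapping of standard statements to controls, each with
# its own review cadence. All values are illustrative assumptions.

REVIEW_CADENCE_DAYS = {"annual": 365, "weekly": 7, "daily": 1}

STATEMENTS = [
    {
        "statement": "Personal data must be anonymized before model training",
        "controls": ["anonymization pipeline", "pre-training data scan"],
        "cadence": "weekly",
        "last_reviewed": date(2024, 6, 3),
    },
    {
        "statement": "Access to client data follows least privilege",
        "controls": ["role-based access", "access recertification"],
        "cadence": "annual",
        "last_reviewed": date(2023, 9, 1),
    },
]

def due_for_review(today: date) -> list[str]:
    """Return the statements whose review window has lapsed."""
    due = []
    for item in STATEMENTS:
        window = timedelta(days=REVIEW_CADENCE_DAYS[item["cadence"]])
        if today - item["last_reviewed"] >= window:
            due.append(item["statement"])
    return due

# The weekly-cadence statement comes due first.
print(due_for_review(date(2024, 6, 14)))
```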
What safeguards do you use to prevent unauthorized access or misuse of data?
Our data access security and management standard feeds directly into our data governance policy. Simply put, we make sure that the owner granting access to a data set gives out only the least amount of information the requester needs. We’ve built many of our data security controls into our data platform environment, and we have a specific tool that creates role-based security access.
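Least-privilege, role-based access of the kind Carter describes can be sketched in a few lines. The roles, column names, and check below are hypothetical illustrations, not the actual Thomson Reuters tool.

```python
# Hypothetical least-privilege, role-based access check: a role is
# granted only the columns it needs, and anything beyond the grant
# is refused. Role and column names are illustrative assumptions.

ROLE_GRANTS = {
    "billing-analyst": {"client_id", "invoice_total"},
    "support-agent": {"client_id", "contact_email"},
}

def fetch_columns(role: str, requested: set[str]) -> set[str]:
    """Return the requested columns only if the role is granted them."""
    granted = ROLE_GRANTS.get(role, set())
    denied = requested - granted
    if denied:
        raise PermissionError(f"{role} is not granted: {sorted(denied)}")
    return requested

# A request within the grant succeeds; one column too many is refused.
print(fetch_columns("billing-analyst", {"client_id", "invoice_total"}))
try:
    fetch_columns("billing-analyst", {"client_id", "contact_email"})
except PermissionError as err:
    print(err)
```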
What accomplishments would you like to emphasize?
I'm very proud of our team for how it has defined these ethical concepts and AI risks. Data risks in the ethics space are especially difficult to identify clearly. They're difficult to define all the way through end-to-end risk management, and the team built the Responsible AI Hub pretty much from the ground up. We spent a lot of time identifying and talking through the breadth and depth of AI risks. We’ve spent even more time bringing those risks to life: deciding how we can take action on them and what that action looks like from a risk-mitigation perspective.
I think the work we've put in over the past three years has allowed us to get a handle on AI risks a little quicker than most companies.
You can learn more about how AI is changing the future of professionals and the way they work in the Future of Professionals Report.