Leveraging artificial intelligence to aid financial services firms with their compliance obligations may require bringing in expert help and proprietary technology
Expert artificial intelligence (AI) systems are needed to deliver the level of accuracy required to automate complex compliance tasks and help financial services firms manage regulatory change programs. Indeed, firms will only get so far with generic generative AI models such as ChatGPT, according to several regtech leaders.
Regulators and financial services firms have clamored to find compliance uses for applications such as ChatGPT since it captured the public imagination in late 2022; however, generative AI alone will not drive the kinds of efficiencies financial services firms would like to achieve through automation.
“While generative AI is incredibly powerful, it is inherently inadequate to disrupt regulatory compliance fundamentally because more than perfect accuracy is needed,” says Sumeet Singh, founder and chief of LighthouseAI in Los Angeles. “Generative AI can therefore only augment — not replace — human compliance efforts. As a result, generative AI is relegated to creating greater efficiencies (reducing costs), but not actively mitigating risk (protecting revenue).”
Proof of concept
Many regtech companies are running proofs of concept to train generative AI on regulatory rule sets, creating chatbot-like interfaces that can answer compliance questions and summarize texts accurately. The approach to generative AI, in other words, is becoming bespoke.
“We’ve got an internal proof of concept running with generative AI that’s trained on compliance models. It’s not just a generic model. It is working well so far at summarizing compliance and regulatory texts, for example. It works very well generating the first draft of, say, a compliance manual or a policy for a specific topic, and it can even take specificities from a client and their content,” says Evgeny Likhoded, chief executive and founder of Clausematch in London, which was recently acquired by Corlytics, a regtech company based in Dublin.
Likhoded and his Clausematch team have been running another proof of concept on a generative AI tool that allows users to ask specialized compliance questions and receive accurate answers.
Raj Bakhru, chief strategy officer at ACA Group in New York, says the industry has high expectations for these new technologies. “The hopes and expectations for what [ChatGPT] can do are incredible,” says Bakhru. “I believe this is not a trend or hype, that it will be a material change to the way we do things. It’s been on the cusp of coming for a long time, and ChatGPT has opened everyone’s eyes to what’s potentially possible. But it’s not all the way there yet.”
ChatGPT is good at content generation and auto-summarization and could be useful for running compliance checks on disclosures in marketing material, Bakhru adds. “It’s great at identifying that type of stuff, making sure it has the right disclosures, disclaimers, the right sentiment, appropriate language. It’s great at electronic communication surveillance. Compliance officers need to surveil emails, text messages, WhatsApp, or WeChat. ChatGPT has the ability to parse all of that content and identify potential concerns.
“That could be material non-public information or bribery, or a gift that shouldn’t be exchanged. It can do a really great job at identifying those types of things in content,” Bakhru explains.
Running these kinds of checks requires firms to load their content into a generative AI model on a secure platform. Firms can also run local models (open-source generative AI models rather than ChatGPT) in their own environment, which offers one way to address the privacy and security risks firms face with ChatGPT, Bakhru notes.
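The local-model pattern Bakhru describes can be sketched as follows. This is a minimal illustration, not a real surveillance product: the `SimpleFlagger` class is a hypothetical rule-based stand-in for a locally hosted open-source language model, and the phrases and concern labels are invented for the example. The point it shows is architectural: all parsing happens inside the firm's own environment, so message content is never sent to an external API.

```python
# Minimal sketch of in-house communication surveillance: message content
# stays local, and a locally hosted model (stubbed here) flags concerns.
from dataclasses import dataclass


@dataclass
class Finding:
    message_id: str
    concern: str


class SimpleFlagger:
    """Hypothetical stand-in for a local open-source model."""

    # Illustrative phrase-to-concern mapping, not a real rule set.
    CONCERNS = {
        "keep this between us": "possible MNPI sharing",
        "gift": "possible gifts-and-entertainment breach",
    }

    def review(self, message_id: str, text: str) -> list[Finding]:
        lowered = text.lower()
        return [
            Finding(message_id, concern)
            for phrase, concern in self.CONCERNS.items()
            if phrase in lowered
        ]


def surveil(messages: dict[str, str]) -> list[Finding]:
    # Everything runs locally; nothing leaves the firm's environment.
    model = SimpleFlagger()
    findings: list[Finding] = []
    for msg_id, text in messages.items():
        findings.extend(model.review(msg_id, text))
    return findings
```

In practice the stub would be replaced by an open-source model served inside the firm's own infrastructure, but the data-flow boundary shown here is the part that addresses the privacy concern.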
Deterministic AI drives accuracy
The accuracy of off-the-shelf AI models is between 16% and 50%, and when it comes to summarizing complex regulatory documents and rules, off-the-shelf accuracy falls below 20%, says John Byrne, chief executive at Corlytics.
“When we train a model, we get it to about 85% accuracy, but we have a lot of lawyers who check the accuracy and make suggestions. When we get those errors, we put those errors back into the model to retrain the model, and over a period of time you get towards 99% accuracy. But most of what we’re doing in AI is what’s called deterministic, and that comes from testing and is model-driven as opposed to a machine-learning approach,” Byrne explains.
It takes about five years to train the models and achieve the level of precision that compliance clients require. Corlytics worked on auto-summarization for enforcement notices and other regulatory documents, and it took about six years to get it to work, Byrne adds.
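The review-and-retrain loop Byrne describes can be illustrated with a toy example. Everything here is hypothetical: the "model" is just a lookup table and the documents and labels are invented, but the loop's shape is the same one he outlines — reviewers correct the model's errors, the corrections are folded back into the model, and measured accuracy climbs with each pass.

```python
# Toy human-in-the-loop retraining: reviewer corrections are fed back
# into a (stubbed) model, and accuracy rises over successive passes.

def accuracy(model: dict, labelled: dict) -> float:
    correct = sum(
        1 for doc, label in labelled.items() if model.get(doc) == label
    )
    return correct / len(labelled)


def review_and_retrain(model: dict, labelled: dict, batch: int) -> dict:
    # Reviewers inspect a batch of the model's errors and supply the
    # right label, which is fed straight back into the model.
    errors = [doc for doc, label in labelled.items() if model.get(doc) != label]
    for doc in errors[:batch]:
        model[doc] = labelled[doc]
    return model


# Ten documents with ground-truth labels supplied by reviewers.
labelled = {f"doc{i}": f"obligation{i % 3}" for i in range(10)}
model: dict = {}  # starts out knowing nothing

history = []
for _ in range(4):
    model = review_and_retrain(model, labelled, batch=3)
    history.append(accuracy(model, labelled))
# history climbs: 0.3, 0.6, 0.9, 1.0
```

A real system would retrain a statistical model rather than memorize labels, and convergence would take years rather than four passes, as Byrne notes, but the feedback structure is the same.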
“ChatGPT has uses for interpreting Twitter feeds and so on, but for regulatory documents of hundreds of pages in length, it doesn’t have the precision. That’s a problem,” he says.
Automation, assurance & attestation
Summarizing documents and answering compliance queries is only part of what regtech companies hope AI can bring to the compliance function. The real prize is end-to-end automation of the compliance process that could improve firms’ ability to provide assurance around systems, controls, and policies while generating an audit trail to back up attestation reports.
“The European Banking Authority and the Fed want attestation reports from all the large banks they regulate,” Byrne notes. “The regulator now says, ‘Well tell me from start to finish, how you have complied and what your process is.’ The key thing is to have everything audit trailed.”
Companies such as Corlytics have used AI to power automation in regulatory horizon-scanning, which feeds into a regulatory change-management system connected to a digital regulation library. The library digitizes regulation and breaks it down into obligations to inform the policy management process. Firms can then formulate and document their policy changes and, finally, generate an attestation report when the process is complete.
This kind of system should allow firms to get on top of the regulatory change process. When a rule change comes into the library, it shows as a pending change with a markup of what that change is going to be. Firms can then start formulating their policies early in the cycle, in anticipation of a final draft.
“What this does is give firms much more notice, so that they can start drafting their policies and changing their controls way in advance of a change going live,” Byrne says.
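The pipeline described above can be sketched as a small data model. The class and field names here are illustrative, not Corlytics's actual schema: a rule change lands in the digital regulation library as a pending change with its markup, policy drafting can begin before the change goes live, and every step is recorded in an audit trail from which an attestation report is generated.

```python
# Sketch of a regulatory change-management library: ingest pending rule
# changes, draft policies early, and keep an audit trail for attestation.
from dataclasses import dataclass, field


@dataclass
class RuleChange:
    rule_id: str
    markup: str              # description of what is going to change
    status: str = "pending"  # pending until the final draft goes live


@dataclass
class RegulationLibrary:
    changes: list[RuleChange] = field(default_factory=list)
    audit_trail: list[str] = field(default_factory=list)

    def ingest(self, change: RuleChange) -> None:
        # A new rule change shows up as pending, with its markup.
        self.changes.append(change)
        self.audit_trail.append(
            f"ingested {change.rule_id} ({change.status}): {change.markup}"
        )

    def draft_policy(self, rule_id: str, policy_text: str) -> None:
        # Firms can start drafting well before the change goes live.
        self.audit_trail.append(f"policy drafted for {rule_id}: {policy_text}")

    def attestation_report(self) -> str:
        # Start-to-finish record of how the firm complied.
        return "\n".join(self.audit_trail)
```

The design choice the article points to is that attestation falls out of the audit trail for free: because every ingest and drafting step is logged as it happens, the start-to-finish report regulators ask for is just a readout, not a separate reconstruction exercise.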