Artificial intelligence has an important role to play in corporations' governance, risk management, and compliance efforts, especially in the areas of fraud detection and prevention
Artificial intelligence (AI) has emerged as a new fascination and topic of discussion in countless areas of society and businesses, across virtually all industries. Although generative AI could assist with the challenging tasks of drafting and maintaining internal documents and policies, other forms of AI or machine learning are already being incorporated into countless other areas related to risk and compliance.
Governance, risk management, and compliance (GRC) software platforms are vital tools used by almost all compliance and risk departments — and these will be at the core of AI adoption. Historically, most businesses managed these areas separately; now, however, GRC platforms combine various software programs and datasets into one coordinated platform, which can help companies increase efficiency, reduce noncompliance risk, and share information more effectively.
Indeed, AI is finding its way into GRC platforms in ways far more complex and well beyond the use of large language models (LLMs) or chatbots that can answer questions or help draft policies.
Potential AI use cases
The list of areas ripe for AI-based improvements is endless. Areas specific to compliance include regulatory change management, horizon scanning, obligation libraries, policy management, control management, third-party risk management, anti-money laundering (AML), know-your-customer (KYC) obligations, enhanced detections, and monitoring capabilities, to name just a few.
Many GRC software solutions offer advanced capabilities for quickly identifying and harmonizing risk and control libraries; locating missing relationships between risks, controls, and processes; and proactively identifying issue trends, emerging risks, and control failures. Adding AI to these programs will make them even more powerful.
Some of the key improvements and benefits of adding AI to GRC platforms include:
- detection of risk, audit, and control deficiencies;
- detection of duplicate risks and controls;
- detection of patterns of over-testing and under-testing of controls;
- potential reduction of false positives, which is valuable in AML/KYC applications; and
- predictive planning and prioritization of risk assessments.
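To illustrate one of these capabilities, duplicate-risk and duplicate-control detection can be approximated with simple text similarity over control descriptions. The sketch below uses only Python's standard library; the control descriptions and the 0.6 threshold are hypothetical placeholders, and a real GRC platform would use far more sophisticated matching.

```python
from difflib import SequenceMatcher

# Hypothetical control descriptions drawn from a risk/control library.
controls = [
    "Review user access rights quarterly",
    "Quarterly review of user access rights",
    "Encrypt customer data at rest",
]

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two normalized descriptions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of controls whose descriptions are suspiciously alike.
THRESHOLD = 0.6  # illustrative cutoff, not a calibrated value
duplicates = [
    (controls[i], controls[j])
    for i in range(len(controls))
    for j in range(i + 1, len(controls))
    if similarity(controls[i], controls[j]) >= THRESHOLD
]
```

Here the first two controls would be flagged as likely duplicates, while the unrelated encryption control would not.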
There are many use cases for AI, and no single solution covers everything. However, incorporating AI across a GRC architecture that scans multiple data sets and systems offers incredible potential for improvements in various areas of risk and compliance, including:
Horizon scanning — AI can be used to better scan and evaluate pending legislation, proposed rules, enforcement actions, speeches, and public comments made by regulators to detect future risks and concerns.
Obligation libraries and regulatory change management — AI can monitor current regulatory obligations while comparing and tracking regulatory changes and notifications, helping organizations stay current. Many financial institutions receive hundreds of change alerts daily, each of which must be manually reviewed, prioritized, and delegated. AI can dramatically improve this process by shortening reaction and adoption times, which could minimize fines and compliance risk.
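A minimal sketch of how automated alert triage might work, assuming alerts arrive as plain-text summaries; the keywords and weights below are illustrative assumptions, not a real regulatory taxonomy:

```python
# Illustrative keyword weights for prioritizing regulatory change alerts.
PRIORITY_KEYWORDS = {
    "enforcement": 3,
    "final rule": 3,
    "deadline": 2,
    "proposed rule": 1,
    "comment period": 1,
}

def score_alert(text: str) -> int:
    """Sum the weights of priority keywords found in an alert."""
    lowered = text.lower()
    return sum(w for kw, w in PRIORITY_KEYWORDS.items() if kw in lowered)

def triage(alerts: list[str]) -> list[str]:
    """Return alerts ordered highest-priority first."""
    return sorted(alerts, key=score_alert, reverse=True)
```

In practice, a learned model would replace the hand-tuned keyword list, but the triage pattern — score, rank, route — is the same.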
Policy management — AI will help map regulations and regulatory changes to an organization's current policies and procedures. It will better detect gaps and necessary policy changes, and it may also suggest language for updates to fill such gaps.
Internal controls, finance risk, and resilience management — AI offers the potential to integrate and improve other aspects of businesses, including finance and internal controls. Legacy systems in use today are often independent and siloed; incorporating them into a more holistic, AI-enabled GRC platform can improve them. Efforts to evaluate and optimize controls using AI can provide insight into control effectiveness by analyzing data and identifying trends. Detecting failed controls, controls most likely to fail, and duplicate controls can also expose weaknesses and save costs.
Additional areas ripe for AI improvement
Beyond change management, AI will dramatically alter third-party and vendor-risk management, cyber and IT risk management, and other areas, such as financial risk, environmental, social & governance (ESG), and AML/KYC functions.
From a cybersecurity perspective, the growing frequency, complexity, and sophistication of cyber threats make enhanced defense capabilities a necessity. AI-powered or AI-enhanced defense efforts can help organizations augment their cyber capabilities through advanced threat detection, predictive analytics, and real-time monitoring. Continuous AI monitoring in coordination with relevant regulations can help firms better comply with applicable IT, privacy, and cyber regulations.
Banks and financial institutions can use AI to leverage large, complex data sets and build risk models that may be more accurate than historical models based on standard statistical analysis. An effective AI tool may also scan for patterns and potential causes of risk events and recommend controls to mitigate such risks.
From an AML/KYC or financial crime perspective, AI can better scan for sanctioned or politically exposed persons, detect suspicious activity, and better connect the dots across multiple data sets. AI can also analyze larger volumes of financial data and customer behavior patterns to identify suspicious activity.
Most importantly, from a financial crime or AML perspective, AI's ability to reduce false positives — and to do so faster than manual review — can deliver enormous cost savings for an organization.
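One simple way to picture false-positive reduction is a scoring function that ranks alerts so low-risk ones can be deprioritized before manual review. The sketch below is purely illustrative: the features, weights, and threshold are hypothetical, and a production system would use a trained model over far richer data.

```python
# Hypothetical risk features for an AML alert; weights are illustrative only.
def risk_score(alert: dict) -> float:
    """Combine a few simple signals into a 0.0-1.0 risk score."""
    score = 0.0
    if alert.get("amount", 0) > 10_000:
        score += 0.4
    if alert.get("counterparty_on_watchlist"):
        score += 0.5
    if alert.get("customer_tenure_years", 0) < 1:
        score += 0.2
    return min(score, 1.0)

def suppress_likely_false_positives(alerts: list[dict],
                                    threshold: float = 0.4) -> list[dict]:
    """Keep only alerts at or above the review threshold."""
    return [a for a in alerts if risk_score(a) >= threshold]
```

The cost savings come from the filter: analysts review only the alerts that clear the threshold instead of the full queue.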
Finally, generative AI provides the ability to ask difficult questions and receive plain language answers, which could minimize risks and improve communications between senior executives, such as chief technology officers (CTOs), chief information security officers (CISOs), and their organizations’ boards of directors.
AI challenges and best practices
AI will be transformative in many businesses, but it will require its own compliance policies and procedures, because the technology poses ethical, legal, and compliance risks when not appropriately governed.
However, AI’s benefits, particularly from a risk mitigation perspective, outweigh its risks and consequences. Regulations and laws surrounding AI are presently scarce and in their infancy, but more complex laws and regulations are inevitable and will require thoughtful governance by organizations.
The challenge will be to create internal policies, procedures, and oversight mechanisms to harness AI effectively. To this end, best practices to consider include appointing a dedicated AI leader, akin to a CISO, or, at a minimum, creating a senior working group that includes an AI ethicist.
Firms should also map all uses of AI across their organizations and adopt a set of governing principles, intended benefits, and risks. AI systems should be transparent, explainable, and understandable; and they should be continuously tested, validated, and monitored, with the entire process thoroughly documented.