How will new iterations of generative AI and next-generation ChatGPT tools impact how companies address their compliance challenges?
The public-facing generative artificial intelligence (AI) tool ChatGPT has dominated the news since its launch in late 2022. It has proven useful in everything from customer service bots to writing assistance.
There are use cases for this technology in the financial sector as well; in fact, both bad actors and corporate compliance professionals are finding uses for it.
ChatGPT, developed by OpenAI, is a natural-language processing tool driven by AI technology that allows the user to have human-like conversations and to specify the desired length, format, style, level of detail, and language of its responses. The language model can answer questions and assist with tasks such as composing emails, essays, and code.
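For illustration, here is a minimal sketch of how a developer might ask such a model to draft an email through the OpenAI Python SDK; the model name and prompt are assumptions chosen for the example, not a recommendation:

```python
# A minimal sketch of asking a generative AI model to draft an email.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a concise business-writing assistant."},
        {"role": "user", "content": "Draft a short, formal email rescheduling a client meeting to Friday."},
    ],
)
print(response.choices[0].message.content)
```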
According to an analysis by Swiss bank UBS, ChatGPT is the fastest-growing app of all time, with an estimated 100 million active users in January 2023, only two months after its launch. That rapid growth has outpaced regulators' capacity to respond; in the absence of standards, people can exploit the gaps left open to them.
Historically, AI has been the domain of specialists, but generative AI is a new category of more easily accessible tools, including ChatGPT, Microsoft's Copilot, and others. That accessibility means people with nefarious intentions, good intentions, or no particular intentions at all can take advantage of the technology, which may be a dangerous prospect.
Use by fraudsters
The primary concern is what bad actors can do with this new technology and who they may target in these scams. “Scammers have historically been on the cutting edge of technology, and I don’t see this being any different,” explains Tom Bartholomy, CEO of the Better Business Bureau of Southern Piedmont and Western North Carolina. “As they see that work, as they see people engaging with it, they’re just going to continue to refine it and continue to find other scams that they can feed that same technology into.”
Fraudsters have been known to capitalize on new technology, especially in its infancy, which is where ChatGPT stands today. Here are some areas where we can expect fraudsters to exploit generative AI tools:
- more persuasive phishing emails;
- more convincing impersonators scamming for information by phone;
- fake ChatGPT (or similar) browser extensions and apps;
- malware created by ChatGPT (or similar tools); and
- data breaches that threaten the release of personally identifiable information.
While each of these scams is a concern today, they will become more prevalent and more difficult to identify as ChatGPT and similar technology evolve. Individuals will have to remain on higher alert and exercise skepticism beyond what was previously recommended.
Beyond outright crimes, generative AI also carries the risk of deficiencies, such as providing inaccurate, unreliable, or insufficient information. And in cases where actual human contact is necessary, those deficiencies can become an additional hurdle that compounds the damage already done.
Use by fraud fighters
The most obvious use of generative AI is in chatbots and virtual assistants, which can be programmed to provide real-time support and manage account activity. Chatbots use small bits of code programmed to perform straightforward functions, the basic idea being that the code can deliver better customer service in a more timely fashion. Indeed, it often satisfies the customer's need for immediate assistance.
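As an illustration of those "small bits of code," here is a hypothetical sketch of simple keyword-based routing for straightforward account questions; the intents and handlers are invented for the example:

```python
# A hypothetical sketch of simple intent routing in a service chatbot.
# The intent keywords and handler functions are illustrative only.
def handle_balance(account_id: str) -> str:
    return f"Your current balance for account {account_id} is available in the app."

def handle_hours(_: str) -> str:
    return "Our branches are open 9am-5pm, Monday through Friday."

INTENT_HANDLERS = {
    "balance": handle_balance,
    "hours": handle_hours,
}

def respond(message: str, account_id: str) -> str:
    text = message.lower()
    for keyword, handler in INTENT_HANDLERS.items():
        if keyword in text:
            return handler(account_id)
    return "Let me connect you with a representative."  # escalate to a human

print(respond("What are your hours?", "12345"))
```

In practice, a production bot layers natural-language understanding on top of routing like this, but the design choice is the same: handle the routine cases in code and hand everything else to a person.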
In this customer-service role, AI has clear utility from a business perspective, but the technology also has uses that matter from a regulatory perspective. Banks and other financial institutions can now use it to build programs that fulfill regulatory requirements or bridge gaps in oversight: monitoring transactions, automating onboarding processes, and properly identifying the individuals with whom they do business.
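For instance, here is a hypothetical, highly simplified sketch of rule-based transaction monitoring; the thresholds, field names, and country codes are illustrative assumptions, not regulatory guidance:

```python
# A hypothetical sketch of rule-based transaction monitoring.
# Thresholds and country codes below are placeholders, not guidance.
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str

HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder jurisdiction codes
REPORTING_THRESHOLD = 10_000.00      # illustrative currency-reporting threshold

def flag_transaction(tx: Transaction) -> list[str]:
    """Return a list of red flags raised by a single transaction."""
    flags = []
    if tx.amount >= REPORTING_THRESHOLD:
        flags.append("amount at or above reporting threshold")
    if tx.country in HIGH_RISK_COUNTRIES:
        flags.append("counterparty in high-risk jurisdiction")
    return flags

alerts = flag_transaction(Transaction("C-001", 12_500.00, "XX"))
print(alerts)
```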
While many regulators have yet to publish guidelines on the use of bots or AI in financial transactions, recently published legal guidance advises that financial institutions take internal measures, such as updating their anti-money laundering and client relationship policies and procedures, when introducing chatbots and other AI technology to automate customer-facing interactions. "These new policies and procedures should take into account that customers will no longer be interacting with staff trained to look for suspicious behaviors and red flags and ensure that the technology used is sophisticated enough to continue to spot and alert supervisory teams to potential red flags."
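To make that guidance concrete, here is a hypothetical sketch of how an automated conversation might spot and escalate potential red flags to a supervisory team; the flagged phrases and the alerting function are invented for the example:

```python
# A hypothetical sketch of red-flag escalation in an automated channel.
# The phrase list and notify function are illustrative assumptions.
RED_FLAG_PHRASES = [
    "third party asked me to",
    "split the deposit",
    "avoid reporting",
]

def notify_supervisors(conversation_id: str, phrase: str) -> None:
    # In a real system this would alert a supervisory queue, not print.
    print(f"[ALERT] Conversation {conversation_id} flagged: '{phrase}'")

def screen_message(conversation_id: str, message: str) -> bool:
    """Return True and alert supervisors if the message contains a red flag."""
    lowered = message.lower()
    for phrase in RED_FLAG_PHRASES:
        if phrase in lowered:
            notify_supervisors(conversation_id, phrase)
            return True
    return False

screen_message("conv-42", "They told me to split the deposit into smaller amounts.")
```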
To stay ahead of cyber-criminals, here are some steps that banks and financial institutions can take now:
- begin constant and consistent employee training;
- vigilantly report scams and other suspicious activity;
- incorporate new technology using a risk-based approval and monitoring process; and
- monitor new or changing regulatory requirements and technological advancements.
Generative AI, like any other technology, has the potential to do good and move society forward, but it also has the potential to do great harm. That is especially true for financial services organizations and others working in the financial sector. Just as ChatGPT-generated content can fool its victims, it can also be used to protect assets and strengthen compliance; the task ahead is to balance that potential.