In the final part of our blog series, we examine how financial institutions can navigate the AI-driven landscape and find the right tech solutions to detect and prevent AI-based fraud.
Digital channels are widely used today to create efficiency in the on-boarding process for various financial products, including commercial and retail checking accounts, credit cards, automotive loans, and commercial loans. However, these channels also present opportunities for fraudsters, particularly in committing new account fraud, often due to the remoteness and anonymity they offer.
With the rapid acceleration of AI in various business processes and workflows, AI-generated fraud is changing how banks and insurance companies must approach fraud prevention and detection.
After describing the technological considerations these institutions must manage and how they can identify the types of fraud they are up against, this final article in our series explores the implications of AI-generated fraud and how financial institutions and insurance companies are responding to these new challenges.
The impact of AI-generated fraud
One of the most notable examples of deceptive use of artificial intelligence involves a tech portal author who downloaded an AI voice cloner to trick his wife — with great success.
This easily accessible technology highlights the potential risks for banks that rely on biometric identifiers for security and verification, particularly voice recognition. If a bank uses the prompt "My voice is my password; please verify me," it can become vulnerable to voice-cloning attacks. To commit this fraud, illicit actors need only an audio file of the victim's voice, which can be obtained from a phone's automatic answer service or from social media content available online.
Bypassing voice recognition is just the beginning. Visual identifiers, such as facial verification and visual liveness checks, are also at risk due to the explosion of deepfake technology. The innovation in creating realistic-looking deepfakes is astonishing, with some being so authentic that they deceive even the most discerning viewers. For instance, a website featuring a deepfake of Keanu Reeves was so convincing that many fans believed it was the real actor.
In the value chain of a fraud operation, all other components needed for verification — such as the victim’s name, personal information, email account access, and bank details — must also be in place, especially if a bank relies on two-factor authentication.
With the continued acceleration of data breaches, however, these components can be at risk as well, and one can assume that personally identifiable information for all American citizens is available on the dark web and ready to be purchased. On platforms such as Telegram, fraud service providers create the necessary components to help fraudsters bypass know your customer (KYC) identity controls. For example, to open an account, a fraudster might use forged state-issued documents, fake identification, and even a cloned voice to impersonate a real person or existing client. One service, called Docs 4 You, enables the creation of a completely new identity, complete with a driver’s license, selfie videos, and a passport. The goal is to cultivate an identity for the long term and then establish a credit history that can later be maxed out. In one such advertisement, a seller claims their deepfakes can bypass at least five of the largest liveness-detection software packages.
The insurance industry is also affected by AI-generated images, which are used to simulate car accidents, for example. If it is easy to clone voices and faces, it is even easier to create fake accident images, leading to fraudulent claims that are difficult to detect without thorough investigation and personal inspection of the affected property or vehicle.
How financial institutions and insurance companies can respond
While machine learning, predictive analytics, and behavioral biometrics are effective for detecting ongoing account fraud, illicit actors seek to use AI-driven fraud to bypass security protocols such as liveness checks and voice verification during customer verification processes in both new and existing account fraud cases.
AI-generated fraud largely falls into three categories, involving the use of:

- AI-generated videos and images to bypass liveness detection;
- AI-generated voices to bypass voice verification; and
- AI-generated documents and pictures as supporting documentation (such as IDs, financial records, and insurance claims).
To combat these threats, financial institutions, insurance companies, and corporations must upgrade their detection and prevention capabilities. This includes implementing the latest technologies and introducing new measures during their customer on-boarding and claims management processes to counter AI-generated fraud.
Despite being a target of AI-generated fraud, biometric information remains a crucial component of any on-boarding or verification solution. However, its limitations as a standalone verifier mean it must be combined with existing customer data from robust public sources. For example, if the identity of a customer cannot be verified using public records, a visit to a branch or in-person verification process may be necessary, even if a liveness check is confirmed by a biometric provider.
If an organization relies solely on digital channels, remote verification may be the only option. In such cases, the location of the individual can offer additional insight. For instance, a US-based institution might block account openings or credit card limit expansion requests if the online session or call originates from outside the United States or a specific region within the country.
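A location rule of this kind can be expressed as a simple policy check. The sketch below is purely illustrative; the action names, country codes, and allowlist are hypothetical assumptions, not any institution's actual policy:

```python
# Illustrative sketch: block high-risk account actions when the online
# session originates outside an allowed region. The allowlist and action
# names are hypothetical examples, not a real institution's policy.

ALLOWED_COUNTRIES = {"US"}          # e.g., a US-based institution
HIGH_RISK_ACTIONS = {"open_account", "raise_credit_limit"}

def should_block(action: str, session_country: str) -> bool:
    """Return True if the requested action should be blocked
    based on where the online session or call originates."""
    if action not in HIGH_RISK_ACTIONS:
        return False
    return session_country not in ALLOWED_COUNTRIES

# A credit limit increase requested from abroad is blocked,
# while a routine balance check from the same location is not.
print(should_block("raise_credit_limit", "RU"))  # True
print(should_block("check_balance", "RU"))       # False
```

In practice such rules would sit alongside device fingerprinting and velocity checks rather than act as the sole signal, since location data alone can be spoofed via VPNs or proxies.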
Combining data, technology, and personal interactions
The field of AI detection and prevention technology is rapidly evolving, offering innovative capabilities. Advanced liveness detection now utilizes 3D depth sensing and multi-angle face scans with anti-spoofing algorithms. Deepfake detection AI analyzes frame-level inconsistencies and employs neural networks trained on datasets of authentic versus deepfake videos.
As voice verification becomes more common in the financial industry, anti-spoofing systems can detect audio spectrum inconsistencies and synthetic overtones, which are typical of AI-generated voices. These technologies are particularly effective in call center operations.
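To make the idea of spectral inconsistencies concrete, the toy heuristic below compares how much signal energy sits in the upper frequency band: natural speech concentrates energy at lower frequencies, while some synthetic audio spreads energy unusually evenly. Real anti-spoofing systems rely on trained models over far richer features; this ratio test and its 4 kHz threshold are illustrative assumptions only:

```python
import numpy as np

# Toy heuristic: flag audio whose energy above 4 kHz is a suspiciously
# large share of the total. This is an illustrative sketch, not a
# production anti-spoofing detector.

def high_band_energy_ratio(signal: np.ndarray, sample_rate: int = 16000) -> float:
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    high = spectrum[freqs >= 4000].sum()   # energy above 4 kHz
    return float(high / spectrum.sum())

def looks_synthetic(signal: np.ndarray, threshold: float = 0.3) -> bool:
    return high_band_energy_ratio(signal) > threshold

rate = 16000
t = np.arange(rate) / rate
voice_like = np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
flat_noise = np.random.default_rng(0).normal(size=rate)  # energy spread evenly

print(looks_synthetic(voice_like))  # False: almost no energy above 4 kHz
print(looks_synthetic(flat_noise))  # True: roughly half the energy is above 4 kHz
```

Deployed systems combine many such spectral and temporal cues with models trained on known spoofed audio, which is why they perform well in call center settings.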
For document validation, authentication solutions using optical character recognition and image forensics are essential for detecting fraud. Digital watermarking, for instance, adds invisible pixels or audio patterns to documents or files that computers can detect but humans cannot. Innovations in pixel veracity techniques, for example, can further help uncover document and image alterations.
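The watermarking idea can be illustrated with a minimal least-significant-bit scheme: a short bit pattern is hidden in pixel values, changing each by at most one intensity step, which is invisible to a viewer but detectable by software. Production watermarking is far more robust; the pattern and pixel values below are hypothetical:

```python
# Illustrative sketch of digital watermarking: hide a short bit pattern
# in the least significant bits of pixel values. The mark and pixels
# are toy examples; real schemes survive compression and editing.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit mark

def embed(pixels: list) -> list:
    """Overwrite the lowest bit of the first len(WATERMARK) pixels."""
    marked = list(pixels)
    for i, bit in enumerate(WATERMARK):
        marked[i] = (marked[i] & ~1) | bit   # alters each value by at most 1
    return marked

def is_marked(pixels: list) -> bool:
    """Check whether the hidden bit pattern is present."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

image = [120, 121, 119, 118, 122, 120, 121, 119, 130, 131]
marked = embed(image)

print(is_marked(marked))   # True: the pattern is recoverable by software
print(is_marked(image))    # False: unmarked images do not carry the pattern
```

A document produced by a trusted capture app could carry such a mark, so its absence in a submitted ID or claim photo becomes one more signal for investigators.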
Document verification systems and deepfake detection tools are poised to become essential components of the anti-fraud arsenal in financial institutions and insurance companies. Combining the capacity and power of these tools is critical and is achieved through multimodal verification methods. Given the rapid pace of innovation in AI, it is essential to calculate returns on investment over shorter time spans.
Conclusion
Obviously, financial institutions and insurance companies should not rely solely on technology in their fight against AI-driven fraud.
The financial implications of this innovative type of fraud may necessitate additional steps in the account-opening process. For instance, live verification steps — such as face-to-face verification conducted by local branches or notaries — could serve as a deterrent to fraudsters.
By combining advanced technology with personal interactions and robust data analysis, financial institutions and insurance companies can better protect themselves against the evolving threat of AI-generated fraud. This multi-faceted approach ensures that while technology plays a crucial role, human oversight and interaction remain integral to the fraud prevention and detection process.
You can read our three-part blog series on the technological considerations financial institutions and insurance companies must manage in fraud detection and prevention here.