Generative AI (GenAI) is a subset of artificial intelligence (AI) that can create new, original content, such as text, images, synthetic data, and even complex models, based on a given prompt or input and the data it has been trained on.
GenAI uses machine learning techniques to analyze and learn patterns from large datasets and generate new content or data. New-age threat actors and criminals are also armed with generative AI capabilities, which has led to a sudden spike in financial fraud. They are exploiting GenAI to create hyper-realistic deepfakes, falsify documents, generate synthetic media such as images, audio, and video, and execute highly sophisticated schemes.
In this blog, you will learn the various ways in which threat actors are exploiting GenAI to carry out financial fraud in banking and finance and how financial institutions can safeguard themselves and combat such risks.
Types of GenAI-Enabled Financial Frauds
As GenAI evolves, it introduces new methods for committing financial fraud, pushing banks and financial institutions to continuously adapt and upgrade their defenses against these sophisticated tactics.
1. Deepfakes
With AI-generated, hyper-realistic deepfake videos and images, criminals can impersonate key individuals in positions of authority to deceive financial institutions. They can use these to carry out illegal activities such as money laundering, financial fraud, blackmail, fake investment schemes, and more.
In early 2024, a striking incident occurred when a finance employee at the Hong Kong office of a UK-based multinational transferred $25 million after being deceived by a deepfake video call that convincingly mimicked the company's chief financial officer (CFO).
2. KYC Impersonation
With GenAI capabilities, it has become much easier to impersonate someone's identity. By generating facial features and forged but realistic-looking documents, criminals can bypass know-your-customer (KYC) protocols, especially video-based verification methods, also known as Video KYC.
According to Deloitte's 2024 Financial Services Industry Predictions report, generative AI is among the largest threats to the business and may increase fraud losses in the US from $12.3 billion in 2023 to $40 billion by 2027.
3. Spoofing Voice Calls
Criminals can leverage GenAI to clone and mimic a person's voice. They can imitate the accent, timbre, intonation, and emotions inherent to human speech and exploit them for scams. For example, offenders can deceive individuals into inadvertently authorizing fraudulent transactions through voice-based payment requests. As the Federal Trade Commission warns, scammers are using AI to enhance their family emergency schemes.
One of the most recent known incidents of voice spoofing occurred in Newfoundland, where at least eight senior citizens were conned out of $200,000 over three days using AI-powered voice cloning—referred to as the “grandchild scheme.”
In this case, fraudsters mimicked the voices of grandchildren or other loved ones in distress, claiming they had been in a car accident or had been arrested, to convince the victims that urgent financial help was needed.
4. Synthetic Identity Fraud
Synthetic IDs are created by combining real and fake personal information and can be used to open bank accounts or obtain credit and loans illicitly. With the wide availability of GenAI, it has become easier for criminals to create synthetic IDs and scale their synthetic identity fraud capabilities.
According to a Wakefield Research Survey report, 76% of respondents believe that their organization has customers using synthetic identities that have been approved for an account.
With GenAI, creating a synthetic identity is as simple as asking, “Create an accurate New York driver’s license.” The GenAI application can generate a fake ID using real people’s photos that can be readily found online. What’s alarming is that these GenAI apps enable fraudsters to fabricate documents quickly and with a high level of realism. This makes it harder for banks and financial institutions to detect fraudulent or suspicious activities.
5. GenAI-Based Phishing Attacks
Using GenAI, criminals can generate compelling phishing emails and messages, increasing the effectiveness and impact of phishing scams. Spear-phishing attacks have become easier to mount: criminals use social engineering to target specific or key individuals with information gleaned from their social media profiles, data breaches, and other sources. These AI-generated spear-phishing emails are often very convincing and achieve high click-through rates.
For instance, Singapore's Government Technology Agency presented the findings of an experiment at Black Hat USA 2021 in which its security team sent spear-phishing emails to internal users. Some were written by hand, while others were produced using OpenAI's GPT-3. The AI-generated phishing emails achieved a far higher click-through rate than the human-written ones.
Detecting and Preventing Generative AI-Enabled Financial Fraud
With the rise of GenAI-driven threats, financial institutions must adopt advanced detection tools and strategies to combat the growing risk of financial fraud. Below are some techniques and strategies banks and financial institutions can employ.
1. Leverage AI Against AI
The fight against AI-enabled fraud is increasingly becoming an AI-versus-AI showdown, and AI is needed to build the defense systems of tomorrow. Banks and financial institutions can use the same AI capabilities accessible to fraudsters to build, power, and enhance their fraud detection systems, helping them identify suspicious behavior and fraudulent activities at scale.
By training machine learning models on vast datasets that include both real and fake (deepfake) media, banks and financial institutions can detect subtle artifacts in synthetic videos and images that the human eye cannot recognize. Such systems can spot irregularities in digital content and differentiate between real and synthetic media with high accuracy.
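To make this concrete, here is a minimal, illustrative sketch of how such a deepfake image classifier might be wired up, assuming PyTorch and torchvision are available. The model choice, file name, and review threshold are hypothetical, and the classification head would first need to be fine-tuned on labeled real/fake media before its scores are meaningful:

```python
# Minimal sketch of a deepfake image classifier (illustrative, not production).
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a pretrained backbone; replace the head with a binary classifier.
# In practice, this head is fine-tuned on a labeled dataset of real/fake media.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: 0 = real, 1 = fake

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def score_image(path: str) -> float:
    """Return the model's estimated probability that the image is synthetic."""
    model.eval()
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    return probs[0, 1].item()  # probability of the "fake" class

# Frames scoring above a review threshold get routed to a human analyst.
if score_image("kyc_selfie.jpg") > 0.8:  # hypothetical file and threshold
    print("High deepfake likelihood; route to manual review.")
```

In a real deployment, such a classifier would typically be one signal among several (liveness checks, metadata analysis, device fingerprinting) rather than a standalone gate.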
However, this is a cat-and-mouse game where AI systems designed to detect deepfakes are also used by criminals to improve or train their deepfake models. As a result, they can make the fakes more realistic and harder to detect. As detection improves, deepfake techniques evolve, continuing the cycle.
2. Use Multi-Factor and Biometric Authentication
Combining multi-factor authentication with biometric verification, such as facial recognition and fingerprints, can make it significantly harder for fraudsters to use generative AI to bypass identity verification measures.
Also, add extra layers of authentication as a defense against deepfakes. For instance, deploy monitoring systems that detect synthetic audio and assign a 'confidence score', based on which agents may ask additional questions for authentication.
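As an illustration, the escalation logic around such a confidence score might look like the following sketch; the detector fields, score thresholds, and actions are assumptions for demonstration, not any specific vendor's API:

```python
# Sketch of confidence-score-based step-up authentication for voice channels.
from dataclasses import dataclass

@dataclass
class VoiceCheck:
    liveness_score: float   # 0.0 (likely synthetic) .. 1.0 (likely live human)
    voiceprint_match: bool  # did the voice match the enrolled customer?

def authentication_decision(check: VoiceCheck) -> str:
    """Map a synthetic-audio confidence score to an escalation action."""
    if not check.voiceprint_match or check.liveness_score < 0.3:
        return "block_and_flag"   # treat as a probable spoofing attempt
    if check.liveness_score < 0.7:
        return "step_up_auth"     # agent asks knowledge-based questions,
                                  # or the system sends an out-of-band OTP
    return "proceed"              # high confidence the caller is genuine

# A borderline score triggers additional verification, not outright approval.
print(authentication_decision(VoiceCheck(liveness_score=0.55,
                                         voiceprint_match=True)))
# -> step_up_auth
```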
3. Leverage Behavioral Analytics
Monitoring and examining customer behavior patterns can help banks and financial institutions identify unusual behaviors indicative of fraud and flag suspicious activities for manual review. Examples include unusually high-value transactions and changes in login patterns.
Banks and financial institutions can leverage AI-based behavioral analytics, combining artificial intelligence with big data analytics, to build automated fraud detection systems capable of screening large volumes of transactions and detecting trends, anomalies, and suspicious activities in real time so that appropriate action can be taken.
An example of unusual behavior that could signal fraud is a consumer buying groceries in New York and, just 15-20 minutes later, making another purchase for luxury goods in Tokyo. Such a rapid change in location is physically implausible and raises suspicion.
Another example is a sudden high-value transaction in a completely unfamiliar place, for instance, someone's financial credentials being used to book ten first-class flights from Dubai when they have never traveled outside their hometown in Australia.
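The first pattern can be captured with a simple "impossible travel" geo-velocity rule. The sketch below assumes each transaction record carries a timestamp and coordinates; the 900 km/h speed threshold (roughly a commercial jet) is an illustrative choice:

```python
# Sketch of an "impossible travel" rule from behavioral analytics.
from math import radians, sin, cos, asin, sqrt
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev, curr, max_speed_kmh=900):
    """Flag if the implied speed between two card-present transactions
    exceeds what even a commercial flight could achieve."""
    hours = (curr["time"] - prev["time"]).total_seconds() / 3600
    if hours <= 0:
        return True  # same-instant transactions in two places are suspicious
    distance = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    return distance / hours > max_speed_kmh

# Example: groceries in New York, luxury goods in Tokyo 20 minutes later.
ny = {"time": datetime(2024, 5, 1, 12, 0), "lat": 40.71, "lon": -74.01}
tokyo = {"time": datetime(2024, 5, 1, 12, 20), "lat": 35.68, "lon": 139.69}
print(impossible_travel(ny, tokyo))  # -> True: flag for manual review
```

In practice, rules like this feed into a broader ML-based risk score rather than acting alone, since legitimate card-not-present activity can also appear geographically dispersed.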
4. Collaborate with Industry and Third Parties
By partnering with trusted third-party vendors and managed services providers, banks and financial institutions can co-develop advanced fraud detection systems capable of catching sophisticated GenAI-enabled financial fraud. This gives financial institutions immediate access to specialized expertise that can help them design, develop, and maintain robust AI algorithms and models for financial applications.
Similarly, they can collaborate with other financial organizations, such as banks and investment firms, to share threat intelligence and best practices and build a collective defense against emerging fraud techniques.
For instance, by sharing data and insights on GenAI-enabled fraud, such as AI-generated phishing or synthetic identity fraud, banks and other financial institutions can quickly identify and address evolving threats.
5. Educate and Engage Customers
Educating and engaging customers about the risks of generative AI-enabled fraud is one of the key strategies for creating a proactive defense. Banks and financial institutions can use various touchpoints, such as push notifications, emails, SMS, and social media, to inform customers about potential risks and best practices for protecting their accounts.
For example, banks could:
- Send a push notification warning about AI-generated phishing emails and offer guidance on how to recognize them.
- Post a short video on social media explaining how fraudsters use voice cloning and deepfakes to impersonate trusted contacts.
By engaging customers in these ways, financial institutions empower them to identify suspicious activity, such as receiving a phone call that sounds like a family member urgently asking for money but is actually a scam carried out with the help of GenAI.
Additionally, banks could offer webinars or interactive sessions for customers where they can learn about advanced fraud techniques, helping them protect themselves as well as financial institutions.
6. Continuous Training and Talent Development
Banks and financial institutions must invest in talent acquisition, development, and continuous employee training to stay ahead of AI-driven fraud. To combat sophisticated GenAI-enabled fraud, such as deepfakes, it is crucial to build in-house expertise in fraud detection and AI technology. With that expertise in place, institutions can respond more swiftly and effectively to evolving threats.
For example, banks and financial institutions can leverage e-learning platforms, such as Fluent, to deliver accessible, outcome-based training tailored to their workforce without disrupting daily operations.
Such platforms can help boost workforce training, augment skills, and build hands-on capabilities in AI, cybersecurity, and more within a few weeks to months.
With regular training on the signs of deepfakes, synthetic identities, and AI-manipulated communications, and on the relevant detection tools, institutions can prepare their staff to recognize fraud patterns and take appropriate action.
Conclusion
In today’s rapidly evolving landscape of AI-driven fraud, financial institutions must stay proactive in defending against sophisticated threats like deepfakes, synthetic identities, and AI-generated phishing.
By investing in advanced fraud detection technologies, continuous employee training, and customer education, banks can strengthen their defenses and reduce risk.
In this AI-versus-AI battle, continuous learning, adaptation, collaboration with industry partners, and leveraging AI-powered tools will be the keys to success.
Ready to co-create AI-driven fraud prevention solutions? Contact us at info@anaptyss.com to build the future of secure financial operations together.