
The best way to fight AI fraud is with AI defense

Since ChatGPT first hit the scene in November 2022, anyone with a stake in cybersecurity must have had sleepless nights mulling over the question: “How does this threaten what we do?”
The good news is, we can use artificial intelligence (AI), like ChatGPT, to our advantage. The bad news, however, is that fraudsters are already using it to their advantage too, and we are playing catch-up.

Artificial intelligence’s threat to cybersecurity

So, how do you start writing on the topic of AI? Why, you consult AI, of course, which is exactly what I did. My question was: “How can artificial intelligence threaten cybersecurity?” The answer that came back was a list of six fear-inducing attack mechanisms, followed by what should have been the starting point: “To counter these threats, it is essential to develop robust AI-based cybersecurity systems that can detect and mitigate AI-powered attacks. Employing AI techniques for defense, such as anomaly detection, behavioral analysis, and intelligent threat hunting, can help organizations stay ahead of evolving threats. Additionally, ongoing research, collaboration between experts, and ethical guidelines for AI development can contribute to minimizing the risks associated with AI in cybersecurity.”
“To counter these threats, it is essential to develop robust AI-based cybersecurity systems that can detect and mitigate AI-powered attacks.”
Let’s unpack that. As revealed in our post earlier this year about banking security trends to watch out for in 2023, artificial intelligence is not new to anyone in technology. In financial technology, we have been using it to help us defend consumers for quite some time now. Incorporating risk signals into an authentication ecosystem to determine which challenges to apply to keep the customer safe is stock standard for any good security provider. We have also been employing factors, and combinations thereof, including geolocation pinning, device fingerprinting, gyroscope usage, call-in-progress signals, device biometric changes, mule account detection, and even server-side selfies and liveness detection techniques, just to name a few.

While these all have merit in their own right and protect the attack surface in their own way, the challenge we now face is that fraudsters have easy access to the same technology we have been using to outsmart them.
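For readers who like to see the mechanics, here is a minimal sketch of how such a risk engine might combine signals into a single score that decides which authentication challenge to apply. The signal names, weights, and thresholds are illustrative assumptions for the example, not production logic.

```python
# Minimal sketch of a risk-signal-driven step-up authentication decision.
# Signal names, weights, and thresholds are illustrative assumptions only.

RISK_WEIGHTS = {
    "new_device_fingerprint": 0.30,   # device not seen before for this customer
    "geolocation_mismatch": 0.25,     # login location far from usual locations
    "call_in_progress": 0.25,         # active call while approving a payment
    "biometric_change": 0.20,         # device biometrics recently re-enrolled
}

def risk_score(signals: dict) -> float:
    """Combine boolean risk signals into a weighted score between 0 and 1."""
    return sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))

def choose_challenge(signals: dict) -> str:
    """Pick an authentication challenge proportionate to the computed risk."""
    score = risk_score(signals)
    if score >= 0.60:
        return "block_and_review"        # too risky: decline and route to fraud ops
    if score >= 0.30:
        return "liveness_check_selfie"   # step up to a stronger challenge
    return "device_biometric"            # low risk: frictionless local biometric

# Example: a payment approved mid-call from a new device gets stepped up.
print(choose_challenge({"new_device_fingerprint": True, "call_in_progress": True}))
```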

How to mitigate the risk of social engineering and other attacks

Coming in at numbers 4 and 6 in the six-point list were the risks posed by social engineering and deepfakes, described as: “AI can enhance phishing and social engineering attacks by generating highly convincing and personalized messages. AI algorithms can analyze vast amounts of data, including social media profiles, emails, and online activity, to craft targeted and persuasive phishing emails or messages. This can increase the success rate of phishing attacks and make them more difficult to detect.”
And: “Deepfake technology, which uses AI to manipulate or generate realistic audio, video, or images, poses a significant threat to cybersecurity. Attackers can use deepfakes to impersonate individuals, forge identities, or create misleading content. This can lead to reputational damage, fraud, or misinformation campaigns.”

So, how do you ensure your customers are protected against what they perceive to be their manager requesting them to do something, or their friend asking for a financial transfer to help in a sticky situation? The answer lies in the same methodology used to craft the attack – the machine. Using enhanced risk signals to mitigate payer manipulation can go a long way in ensuring fraud attempts fail.
"Using enhanced risk signals to mitigate payer manipulation can go a long way in ensuring fraud attempts fail."
Let’s look at an example. You receive a voice note from a friend who lives in another state asking for a short-term loan of $500. They have included a link to their personal bank account in the form of a QR code because they need the money immediately. Because you “recognize” the (AI deepfake) voice, you click and pay, and you never see your $500 again. If the payment had been with a provider that was ahead of the curve and protecting its customers against these new threats, the authentication would have failed and you would still have your $500.
Defenses such as destination account validation, mule account detection, and enhanced call signals, used in combination, increase the likelihood that one of them triggers a red flag to challenge the transaction and, in doing so, protect your customer.
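To illustrate that layering, here is a minimal sketch, assuming hypothetical checks for destination account validation, a mule account watch list, and a call-in-progress signal; any single flag is enough to challenge the payment before the money moves. The account data and function names are invented for the example.

```python
# Illustrative sketch of layering independent fraud signals on a payment.
# Account lists, signal names, and logic are assumptions for illustration.

KNOWN_MULE_ACCOUNTS = {"ACC-991-223"}                 # e.g. fed by shared fraud intelligence
REGISTERED_PAYEES = {"ACC-104-887": "Jane Smith"}     # stand-in for a confirmation-of-payee lookup

def payment_red_flags(payment: dict) -> list:
    """Return every signal that fires; any single flag triggers a challenge."""
    flags = []
    # 1. Destination account validation: does the account resolve to the expected payee?
    expected = REGISTERED_PAYEES.get(payment["destination_account"])
    if expected is None or expected.lower() != payment["payee_name"].lower():
        flags.append("destination_account_mismatch")
    # 2. Mule account detection: is the destination on a known mule list?
    if payment["destination_account"] in KNOWN_MULE_ACCOUNTS:
        flags.append("known_mule_account")
    # 3. Enhanced call signal: is the payer on a call while approving the payment?
    if payment.get("call_in_progress"):
        flags.append("call_in_progress_during_approval")
    return flags

payment = {
    "destination_account": "ACC-991-223",   # scanned from the QR code in the voice note
    "payee_name": "My Friend",
    "amount": 500.00,
    "call_in_progress": True,
}

flags = payment_red_flags(payment)
action = "challenge_transaction" if flags else "approve"
print(action, flags)
```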
The other four results ranged from the hazards that real-time mechanisms can cause to how malware can mature, how systems can be bypassed, and how data can be compromised. The reality is that technology is ever-evolving, but now it is as accessible to fraudsters as it is to the people protecting us. The only way to stay ahead is through continuous innovation and exploration to enhance defenses and minimize the attack surface. And, most importantly, to use a combination of protections, because no single defense mechanism, no matter how robust, is strong enough to guard against the rapid fraud growth we are experiencing.

ChatGPT’s fraud prevention advice to banks

Lastly, when I asked ChatGPT “How can banks protect themselves from Generative AI fraud?”, it had this to say:
1. Implement advanced machine learning algorithms: Use advanced machine learning algorithms to detect and flag suspicious transactions in real-time. These algorithms can be trained on historical data to identify patterns and anomalies that may indicate fraudulent activity.
2. Use natural language processing (NLP): NLP analyzes the language used in customer communications and detects signs of fraud, such as the use of certain keywords or phrases.
3. Monitor social media and the dark web: Constantly check social media and the dark web marketplaces for signs of fraud, such as the sale of stolen identities or the use of stolen credit card information.
4. Monitor for Generative AI: Banks can monitor for Generative AI model usage in communications and transactions. If the model is not recognized or the use is not authorized, the bank can flag it as suspicious.
5. Implement multi-factor authentication (MFA): Banks can implement MFA, such as requiring customers to provide a fingerprint or facial recognition in addition to a password or PIN, to verify their identity.
6. Collaborate with other organizations: Banks can collaborate with other organizations, such as other financial institutions and law enforcement, to share information and stay up to date on the latest fraud trends.
7. Train employees: Banks can train employees to recognize and respond to signs of fraud, including Generative AI fraud, to ensure they are able to detect and flag suspicious activity.
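As a rough sketch of the first recommendation, the example below trains an unsupervised anomaly detector (scikit-learn's IsolationForest, one possible model among many) on historical transaction features and scores an incoming transaction in real time. The features and figures are invented for illustration.

```python
# Illustrative sketch of point 1: train an anomaly detector on historical
# transactions and flag outliers. Features and data are invented examples;
# scikit-learn's IsolationForest is just one possible model choice.

import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions: [amount, hour_of_day, is_new_payee (0/1)]
history = np.array([
    [42.0, 12, 0], [18.5, 9, 0], [75.0, 18, 0], [33.0, 14, 0],
    [60.0, 20, 0], [25.0, 11, 1], [90.0, 19, 0], [15.0, 8, 0],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# A new transaction: large amount, 3 a.m., never-seen payee.
incoming = np.array([[500.0, 3, 1]])
verdict = model.predict(incoming)           # -1 = anomaly, 1 = normal
print("challenge" if verdict[0] == -1 else "approve")
```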
With that sound advice on stopping AI fraud from AI itself, we leave you with this: Talk to us, not only to the bots. Our strong multi-factor customer authentication and payment authentication technology can – and will – help you reduce the risk of banking and payment fraud, freeing you up to focus on expanding your customer base, enhancing your business, and growing your revenue.