In Chapter 2 of our “Fraud Fighters Manual for Fintech, Crypto, and Neobanks,” Unit21’s Alex Faivusovich explores the concept of stolen and fake identities, with particular emphasis on synthetic identities. He looks at how fraudsters find the sensitive information they need to conduct this type of fraud, and then explains how to detect and prevent synthetic ID fraud.
In the chapter, we touch on why it’s so challenging for businesses to mitigate risk when they don’t know who they are actually dealing with, and try to help teams navigate these challenges.
The following responses were collected from an audience of respondents, all of whom are leading experts in the risk management industry.
Artificial Intelligence (AI) and Machine Learning (ML) technologies are powering a wide array of new and innovative solutions. Unfortunately, bad actors are already finding ways to use these technologies to perfect their fraud and money laundering efforts so they can bypass anti-fraud measures.
In this installment of Fraud Fighters Manual: Community Insights, we expand on the core concept of compromised and synthetic identities to dive deeper into the threats that AI and ML pose for identity verification. We’ll look specifically at how fraudsters leverage advanced tech like deep fakes to beat fraud prevention solutions.
Let’s dive in.
New Challenges for Identifying Compromised, Fake, & Synthetic IDs: Deep Fakes and AI
Because AI, ML, and deep fake technology can be used to alter images and videos, they’re already becoming core tools for fraudsters—and a sharp thorn in the side of risk managers.
And these technologies spell serious complications for identity verification, especially when it comes to more complicated cases, like those involving compromised or synthetic identities. Since they use some legitimate credentials that can pass basic KYC checks, these identities are hard to detect.
Currently, this threat comes from sophisticated, savvy criminals who already understand how to leverage these technologies effectively. But as these technologies become more popular and more readily available to the average person, this type of fraud will only become more common.
Shivi Sharma, a Data Scientist at Varo, notes that “deep fake can empower fraudsters in impersonating real people,” making it more important than ever “for technology to advance and identify the difference between a real person and a deep fake.”
To help teams manage this risk, we look at some of the best practices for detecting and preventing compromised and synthetic IDs, especially in the context of increasing risks from AI and ML technologies.
Best Practices for Detecting and Preventing Identity Fraud in the Modern Era
An anonymous respondent noted that “given that these identities use a variety of legitimate credentials, they are often more challenging to detect. Fortunately, there are several methods organizations can use to detect and prevent them.”
While each individual method may have some weaknesses—particularly in relation to AI and ML technology—a combination of these methods can be extremely effective. Different organizations may find that different solutions are more effective at mitigating fraud, and will need to try different measures to see what works best.
Baptiste Forestier, Head of Compliance at Hero, says their “advice is to stay up to date with the latest types of AI powered frauds and to ask providers how their tools can prevent them.”
Below, we cover some of the best practices for detecting and preventing identity fraud according to our audience of respondents.
Require Customer Authentication

It’s important that organizations not only verify customers during onboarding but also each time they access their product. Otherwise, it’s really challenging to know if the account is still being used by the true owner.
Customer authentication is a solid foundational check for stolen, fake, and synthetic IDs in almost any situation. It keeps users’ accounts safe by stopping fraudsters from being able to sign into—and compromise—accounts they shouldn’t have access to.
While it does add friction to the customer’s experience, it’s usually worth it to validate that it’s the true account holder that’s using your product. Modern customers are used to 2FA, MFA, and other customer authentication processes when accessing services through an app or website, and most won’t be deterred by a form of authentication for each login attempt.
Enhanced information can be invaluable in improving the performance of customer authentication processes. If the system can also monitor for additional signals, teams may be able to tell when a user completed authentication from a new device, location, or IP address, all of which could signal an account takeover.
While AI and ML technology means customer authentication isn’t foolproof, it still mitigates fraud by making it harder for fraudsters to authenticate accounts successfully. For some, this added layer of protection will deter them from trying to commit fraud on your platform. While it shouldn’t be the last measure used to prevent compromised and synthetic IDs, it should be one of the first that teams consider using as a basic check that the user is who they say they are.
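To make the signal monitoring described above concrete, here is a minimal Python sketch (all names are hypothetical illustrations, not any vendor’s implementation) that compares each login against the devices, IP addresses, and countries a user has authenticated from before:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user_id: str
    device_id: str
    ip_address: str
    country: str

# Hypothetical in-memory login history per user; a real system would
# back this with a store of previously authenticated sessions.
history: dict[str, list[LoginEvent]] = {}

def login_risk_signals(event: LoginEvent) -> list[str]:
    """Return signals indicating this login differs from the user's history."""
    past = history.get(event.user_id, [])
    signals = []
    if past:
        if event.device_id not in {e.device_id for e in past}:
            signals.append("new_device")
        if event.ip_address not in {e.ip_address for e in past}:
            signals.append("new_ip")
        if event.country not in {e.country for e in past}:
            signals.append("new_country")
    # Record the event so future logins are compared against it.
    history.setdefault(event.user_id, []).append(event)
    return signals
```

In practice, any non-empty signal list might trigger step-up authentication (for example, an extra MFA challenge) rather than an outright block, keeping friction proportional to risk.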
Use Liveness Detection

Baptiste Forestier states, “liveness detection is definitely the best tool for us at the moment to prevent identity theft.”
Liveness checks are extremely hard to fake, making them an ideal layer to add to fraud prevention programs. While identity documents, images, and personal information can be faked rather easily, it’s incredibly challenging for these visual identifiers to pass a liveness check.
However, liveness technology shouldn’t be your only prevention method, as some deep fake technologies can alter image, audio, and video. That being said, current deep fake technology has shortcomings and struggles to produce live output that is indistinguishable from the real thing. Until this technology improves, liveness detection is a great tool for preventing risk, or at the very least, hindering fraudsters from successfully exploiting your organization.
Even now, but especially as deep fake and AI tech improves, teams will want to leverage additional preventative measures.
Deploy Tamper-Detect Tools

Deep fake technology powered by AI or ML is a serious threat when it comes to verifying identity documents, as it can enable fraudsters to tamper with documents in ways that were previously impossible.
To mitigate the risk of approving tampered documents, Pratik Zanke from PayMate suggests “having a robust OCR [optical character recognition] tech to read the data from the document” and “having a tamper detect tool to verify the genuinity of the document.” After deploying this into their own anti-fraud program, Pratik’s organization saw a “success rate of over 90%”, showing just how powerful these tools are in the fight against financial crime.
Whether it’s ID documents, images, or live video, tamper-detect tools examine documents and other identifying information for forgery or alteration. Applied during onboarding, this technology lets teams identify when someone is trying to maliciously gain access and stop the fraudster from entering the platform in the first place. It can also stop fraudsters who control existing customer accounts from applying for new products, which would otherwise expand the organization’s exposure to risk.
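One simple consistency check of the kind a tamper-detect pipeline might run, sketched below with hypothetical field names, is comparing OCR output from a document’s visual zone against its machine-readable zone (fraudsters often edit the printed fields but leave the machine-readable zone untouched):

```python
def mrz_mismatches(visual_zone: dict, mrz_zone: dict,
                   fields=("name", "dob", "doc_number")) -> list[str]:
    """Return the fields where the OCR'd visual zone and the
    machine-readable zone disagree -- a common tamper signal.
    Comparison is case- and whitespace-insensitive."""
    return [
        f for f in fields
        if visual_zone.get(f, "").strip().upper() != mrz_zone.get(f, "").strip().upper()
    ]
```

A non-empty result would route the document to manual review rather than automatic approval.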
Analyze User Behavior
Behavioral analysis is an invaluable tool in a fraud prevention expert’s toolkit, empowering teams to spot anomalies that deviate from typical user behavior.
In most cases, fraudsters behave very differently from legitimate customers. They have different objectives and motivations for engaging with your platform, and that is typically reflected in their behavior.
One anonymous respondent noted that “real people usually change their contact information, address, or phone number; while criminals using synthetics remain static and unchanged. They have actual social media pages showing communication with family and friends, and they maintain public tax records, get parking tickets, and have matchable income. Authentic people leave information trails throughout their life.”
When teams have systems that track user behavior as part of their risk program, they can create rules that flag customers that aren’t behaving like legitimate users would. Accounts that haven’t had any credential changes in a specified period can immediately be flagged for review—and further investigated for suspicious activity.
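A rule like the one described above could be sketched as follows. This is a hypothetical illustration (field names and the one-year threshold are assumptions, not a recommended policy): flag accounts past a certain age whose contact details have never changed since onboarding, since synthetic identities tend to stay unnaturally static.

```python
from datetime import datetime, timedelta

def flag_static_accounts(accounts: list[dict], now: datetime,
                         max_age_days: int = 365) -> list[str]:
    """Return IDs of accounts older than max_age_days whose credentials
    have never been updated since creation -- a possible synthetic-ID signal."""
    flagged = []
    for acct in accounts:
        age = now - acct["created_at"]
        never_changed = acct["last_credential_change"] == acct["created_at"]
        if age >= timedelta(days=max_age_days) and never_changed:
            flagged.append(acct["id"])
    return flagged
```

Flagged accounts would go to an investigator for review, not be blocked automatically, since some legitimate customers are simply stable.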
Additional Background Checks
Verification and liveness checks are great ways of stopping fraudsters, and using a combination of solutions is best when fraudsters are using ML and AI technology to bypass fraud prevention systems. Unfortunately, they always involve adding an additional check (and friction) to the customer experience.
Fortunately, analyzing behavior and using additional data signals from customers doesn’t always mean teams need to add this friction. Modern fraud prevention solutions empower teams to perform checks that operate entirely in the background, without requiring any action from the customer. Instead, these checks validate users based on other factors, leveraging a mix of user data to enhance the information available to risk teams.
According to Shivi Sharma, “it is important to leverage insights from the email, phone, IP address, and behavioral patterns on the application to stop those using stolen identities, and flag fewer good customers. Model scores that combine these attributes are helpful in flagging users with stolen identities and synthetic identities.” The more information that can be ingested into your system, the more signals your team will have to verify customers.
Another anonymous respondent states that “the onboarding process should always be linked to a cell phone number” and organizations should “prevent the use of prepaid phones and Internet phone numbers.” Additional features like “active phone monitoring for real-person activity and biotechnology for individual identity and documentation” are also good checks to use, making it harder for fraudsters to access platforms, even when using ML and AI technologies.
After onboarding, background checks can be used to identify and authenticate users that are signing in and accessing services. These solutions can compare the credentials of the current users against previously used credentials, and can even analyze differences in user behavior based on historical data.
The user’s device, IP address, and geographic location are all signals that can be used to verify a customer’s identity that don’t involve adding friction to their customer experience. With this information at their disposal, investigators have more information to determine if the person accessing an account is the true account holder, using this information to shut down a compromised account. In some cases, teams can even use this information during onboarding to stop fake accounts from being created in the first place.
These solutions empower teams to perform fewer ID verification checks on users, or simply enhance the data available to make their determinations.
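The idea of combining email, phone, IP, and behavioral attributes into a single score, as Shivi Sharma describes, can be sketched as a simple weighted average. This is a toy illustration (the signal names and weights are assumptions); production systems would typically use a trained model rather than hand-picked weights:

```python
def identity_risk_score(signals: dict[str, float],
                        weights: dict[str, float]) -> float:
    """Combine per-signal risk values (each in 0..1) into one score in 0..1.
    Missing signals are treated as zero risk."""
    total_weight = sum(weights.values())
    weighted = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return weighted / total_weight
```

A team could then alert on scores above a tuned threshold, trading off catch rate against false positives on good customers.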
Monitor for New Account Velocity
Rapid customer growth is usually a good thing for businesses. But sometimes, it can be a signal of fake and synthetic ID fraud.
A wave of new users—while great from a product adoption standpoint—could instead be a huge influx of fraudsters. And this wave—if verified and approved to use your service—could pose a serious (and expensive) threat to any business.
One anonymous respondent isn’t taking any chances, stating that at their organization, “we monitor account creation velocity in real-time. Our technology looks for a spike in new account activity and alerts our team to get eyes on it ASAP.” With a rule that monitors for spikes in new account creation velocity in real-time, teams can quickly get investigators on the case and immediately take action.
When done with precision, teams will be able to mitigate fraud losses by reducing the number of fraudsters that successfully onboard while still limiting false positives. Teams can then analyze the spike itself to get insights on why it occurred, and potentially develop protection methods that prevent their platform from being vulnerable to these attacks in the future.
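A real-time velocity rule like the one the respondent describes can be sketched with a sliding window. This is a minimal, hypothetical illustration (the window and threshold are placeholder values to be tuned per platform):

```python
from collections import deque

class VelocityMonitor:
    """Alert when signups within a sliding time window exceed a threshold."""

    def __init__(self, window_seconds: float, max_signups: int):
        self.window = window_seconds
        self.max = max_signups
        self.events: deque[float] = deque()  # timestamps of recent signups

    def record(self, ts: float) -> bool:
        """Record a signup at timestamp ts (seconds, monotonically increasing).
        Return True if the signup rate over the window now exceeds the threshold."""
        self.events.append(ts)
        # Drop signups that have fallen out of the window.
        while self.events and ts - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max
```

A `True` return would page investigators to get eyes on the spike, per the respondent’s workflow.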
Detect and Prevent Stolen and Synthetic IDs with Unit21
There is no doubt that AI, ML, and deep fake technologies will make it exceedingly difficult for fraud prevention systems and specialists to root out—and stop—identity fraud.
While these technologies will make the job of risk managers and investigators harder, there are still methods (and solutions) available to mitigate the risk associated with the use of AI and ML for identity theft.
The process should always start with onboarding. It’s best to keep these criminals off your platform in the first place—and that starts with adequate KYC procedures, including customer due diligence (CDD) and enhanced due diligence (EDD). But verification and authentication checks shouldn’t stop at onboarding; instead, they should continue throughout the customer’s life with your product.
Customer authentication checks serve as a great initial touchpoint, stopping less intentional fraudsters—think The Thief or The Opportunist from Chapter 1—from successfully committing fraud. Liveness checks, tamper-detect tools, behavior analysis, and background verification checks are all great solutions for more savvy criminals like The Con Artist, The Disguise Artist, The Impersonator, and even Organized Criminals.
Learn how Unit21’s Transaction Monitoring Solution—which offers true data monitoring of all user activity—can help your team stop identity fraud rings from taking advantage of your Fintech platform. With enhanced data about your customers and the ability to create and update detection and prevention rules quickly and easily, you’ll be able to root out fake and synthetic IDs with ease.
We’re not done; this is just Chapter 2! Go check out the Community Insights from Chapter 3—How to Adapt Fraud Prevention for Crypto Fraud. In it, we dive into cryptocurrency fraud with insights from Mastercard and our audience of fraud professionals.