It's hard to apply for much of anything online these days without first proving your identity with a photo of yourself. With online identity verification now woven into everyday tasks, from opening a bank account to completing a routine purchase, fraudsters have become increasingly interested in outwitting the system.
As a result, deepfakes have become their weapon of choice, imitating real people using artificial intelligence (AI). The million-dollar question is whether businesses can effectively use AI to fight back with the tools they already have.
According to a Regula identity verification report, a whopping one-third of global businesses have already fallen victim to deepfake fraud, with deepfake voice and video posing a particular threat to the banking sector. A fraudster can, for instance, impersonate you to gain access to your bank account.
And as the AI technology for creating deepfakes becomes more accessible, the risk to businesses grows, raising the question of whether identity verification methods need to be rethought.
The race to detect deepfake videos
Fortunately, we are not yet at our wits' end: most deepfakes can still be detected, either by eagle-eyed humans or by the AI technologies that have been built into ID verification solutions for quite some time. But don't let your guard down. Deepfake threats are evolving quickly, and we are on the verge of seeing samples so convincing that they barely raise doubt, even under careful scrutiny.
The good news is that AI can be trained to detect bogus content created by its AI peers. How does it perform this feat? To begin with, AI models are not created in a vacuum; they are shaped by human-fed data and carefully designed algorithms, and researchers can develop AI-powered strategies to screen out synthetic fraud and deepfakes.
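To make that concrete, here is a minimal sketch of the "AI trained to catch AI" idea: a binary classifier fit on labeled genuine and deepfake feature vectors. The features and data below are random placeholders, and production systems use deep networks on raw video rather than logistic regression, so treat this purely as an illustration.

```python
# Toy sketch: fit a classifier on labeled real/deepfake feature vectors.
# Features here are synthetic stand-ins, not real detection features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_real = rng.normal(0.0, 1.0, size=(200, 8))   # stand-in features of genuine sessions
X_fake = rng.normal(0.7, 1.0, size=(200, 8))   # stand-in features of deepfake sessions
X = np.vstack([X_real, X_fake])
y = np.array([0] * 200 + [1] * 200)            # 0 = genuine, 1 = deepfake

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:1]))                # probabilities for [genuine, deepfake]
```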
The core idea of this protective technology is to watch for anything suspicious or inconsistent during ID liveness checks and "selfie" sessions (in which you take a live photo or video with your ID). An AI-powered identity verification system can detect inconsistencies that unfold over time, such as unnatural shifts in lighting or movement, as well as artifacts within the image itself, such as clever copy-pasting or image stitching.
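As a simplified illustration of the temporal side of such checks, the sketch below scores a selfie video by how abruptly consecutive frames change; a sudden spike can betray a splice or an injected frame. It assumes OpenCV and NumPy are available, and the spike threshold is illustrative rather than tuned.

```python
# Minimal temporal-consistency sketch for a selfie video.
import cv2
import numpy as np

def frame_change_scores(video_path: str) -> list[float]:
    """Score each frame transition by mean absolute pixel change.
    Smooth drift is expected from natural lighting and head movement;
    abrupt spikes can flag splices or injected frames."""
    cap = cv2.VideoCapture(video_path)
    scores = []
    ok, prev = cap.read()
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        diff = cv2.absdiff(
            cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
        )
        scores.append(float(np.mean(diff)))
        prev = frame
    cap.release()
    return scores

def looks_spliced(scores: list[float], spike_factor: float = 6.0) -> bool:
    """Flag a video whose largest inter-frame change dwarfs the median,
    a crude stand-in for the learned detectors production systems use."""
    if len(scores) < 2:
        return False
    median = float(np.median(scores))
    return median > 0 and max(scores) > spike_factor * median
```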
Fortunately, AI-generated fraud still has blind spots that businesses can exploit. Deepfakes, for example, frequently fail to render shadows correctly and produce oddly inconsistent backgrounds. Fake documents often lack optically variable security elements and fail to reproduce the specific images that genuine security features reveal at certain viewing angles.
Another key challenge criminals face is that many AI models are trained primarily on static face images, mainly because those are what is most readily available online. These models struggle to deliver realism in live "3D" video sessions, where the person must turn their head.
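One practical counter, then, is simply to verify that a genuine head turn occurred during the session. The sketch below does this with MediaPipe's FaceMesh landmarks, using the nose tip's position between the cheek landmarks as a crude yaw proxy; the landmark indices are standard FaceMesh points, but the 0.25 span threshold is an assumption for illustration only.

```python
# Sketch: confirm a liveness clip contains a genuine head turn.
# Assumes frames are RGB NumPy arrays; the threshold is illustrative.
import mediapipe as mp

_face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def yaw_proxy(rgb_frame) -> float | None:
    """Where the nose tip (landmark 1) sits between the cheek landmarks
    (234 and 454): ~0.5 is frontal; values near 0 or 1 mean a turned head."""
    result = _face_mesh.process(rgb_frame)
    if not result.multi_face_landmarks:
        return None  # no face found in this frame
    lm = result.multi_face_landmarks[0].landmark
    left, right, nose = lm[234].x, lm[454].x, lm[1].x
    return (nose - left) / (right - left) if right != left else None

def has_real_head_turn(frames, min_span: float = 0.25) -> bool:
    """Require the yaw proxy to sweep a meaningful range across the clip."""
    yaws = [y for y in map(yaw_proxy, frames) if y is not None]
    return bool(yaws) and (max(yaws) - min(yaws)) >= min_span
```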
Another vulnerability organizations can exploit is that modifying a document to pass authentication is far harder for fraudsters than presenting a false face (a "face swap") during a liveness session.
This is because criminals often have access only to flat, two-dimensional ID scans. Modern IDs, meanwhile, incorporate dynamic security features that become visible only when the document is in motion. And because the industry keeps evolving in this field, it is practically impossible to create a convincing fake document that can pass a capture session with liveness validation, in which the document must be rotated through multiple angles. Requiring a physical ID for a liveness check can therefore dramatically improve an organization's security.
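One simple cue such a document-liveness check can use: an optically variable element like a hologram shifts color as the ID rotates, while a flat scan or screen replay stays nearly constant. The sketch below compares the mean hue of a pre-cropped security region across capture angles; the cropping step and the shift threshold are assumptions for illustration.

```python
# Sketch of one document-liveness cue, assuming BGR frames already
# cropped to the region holding a hologram/OVI element.
import cv2
import numpy as np

def region_varies_with_angle(region_frames, min_hue_shift: float = 12.0) -> bool:
    """A real hologram's mean hue should swing noticeably as the
    document rotates; a printed copy or flat scan barely changes."""
    hues = [
        float(np.mean(cv2.cvtColor(f, cv2.COLOR_BGR2HSV)[:, :, 0]))
        for f in region_frames
    ]
    return (max(hues) - min(hues)) >= min_hue_shift
```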
While AI training for ID verification systems is constantly improving, it remains a cat-and-mouse game with fraudsters, often with unforeseen results. Criminals, after all, are training their own AI to outwit ever-better detection, creating a loop of detection and evasion.
Take age verification, for example. During a liveness check, fraudsters might use masks and filters to make a person appear older. Researchers are therefore under constant pressure to uncover new cues of altered media and train their algorithms to detect them. It is a never-ending struggle, with each side attempting to outwit the other.
Achieving the maximum level of security
First, to achieve the highest level of security in ID verification, embrace a liveness-centric approach to identity checks.
While most AI-generated forgeries still lack the naturalness needed to survive a convincing liveness session, organizations seeking maximum security should work exclusively with physical objects: no scans, no photos, just real documents and real people.
In the ID verification process, the solution must validate the liveness and authenticity of both the document and the individual presenting it.
This should be backed by an AI verification model trained to detect even the most subtle video or image manipulations, ones imperceptible to the human eye. Such a model can also help flag anomalous user behavior by checking the device used to access a service, its location, interaction history, image stability, and other factors that help confirm the authenticity of the identity in question.
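As a rough illustration of how those session signals might be combined, here is a toy risk-scoring sketch. The signal names, weights, and threshold idea are all assumptions for the example, not any vendor's actual model.

```python
# Toy risk-scoring sketch over the session signals described above.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    new_device: bool          # first time this device is seen for the account
    location_mismatch: bool   # IP geolocation far from the account's history
    erratic_interaction: bool # unusual typing or navigation pattern
    unstable_image: bool      # feed too static (replay) or jittery (injection)

WEIGHTS = {
    "new_device": 0.2,
    "location_mismatch": 0.3,
    "erratic_interaction": 0.2,
    "unstable_image": 0.3,
}

def risk_score(signals: SessionSignals) -> float:
    """Sum the weights of every triggered signal: 0.0 is clean, 1.0 means all fired."""
    return sum(w for name, w in WEIGHTS.items() if getattr(signals, name))

# A score above a tuned threshold would route the session to manual review.
print(risk_score(SessionSignals(True, True, False, False)))  # 0.5
```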
The final tip is to ask customers to use their mobile phones during liveness sessions rather than a computer's webcam. It is generally much harder for fraudsters to swap images or videos in a mobile camera feed, since virtual-camera software that injects pre-recorded or synthetic footage is far easier to set up on a desktop.