Deepfake video in KYC: Bypassing biometric identity verification

How deepfake video injection attacks defeat KYC identity verification: an analysis of the threat to liveness detection and the case for hardware-level defenses.


William Blondel

Hi there 👋! I'm a Senior Full Stack Web Developer. I dedicate my free time to uncovering family history through genealogy.

The end of “selfie verification”

In the digital banking era, opening an account rarely involves visiting a branch. Instead, users undergo a “Know Your Customer” (KYC) process via a smartphone app: they scan their ID card and then record a short video selfie, often following instructions like “turn your head left” or “blink.”

This process relies on “Liveness Detection”—the assumption that a video feed represents a live human present at that moment. Generative AI has shattered this assumption. We are now seeing “Deepfake Injection Attacks” where criminals use real-time face-swapping technology to pass these biometric checks, opening mule accounts to launder money or access credit in someone else’s name.

The attack vector: camera injection

The vulnerability is not necessarily in the bank’s AI algorithm, but in the data pipeline that feeds it.

Virtual Cameras: Attackers do not hold a picture up to a webcam (which is easily detected). Instead, they use software like OBS (Open Broadcaster Software) or specialized root tools on Android to create a “Virtual Camera.”

Real-Time Swap: They feed a pre-generated deepfake video, or a real-time face-swap stream (using tools like DeepFaceLive), directly into that virtual camera. The banking app believes it is receiving data from the physical camera sensor, but it is actually receiving a synthetic video stream.
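
To see why this is so hard to catch in software, consider what a KYC app can actually observe. Here is a minimal Kotlin sketch using Android’s standard Camera2 API (the function name is illustrative): once a virtual camera is registered at the driver level, it reports the same metadata as a physical sensor.

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Sketch: what a KYC app actually "sees" when it enumerates cameras.
// A virtual camera injected at the driver/HAL level reports the same
// metadata as a physical sensor, so nothing here flags it.
fun listCameras(context: Context) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in manager.cameraIdList) {
        val chars = manager.getCameraCharacteristics(id)
        val facing = when (chars.get(CameraCharacteristics.LENS_FACING)) {
            CameraCharacteristics.LENS_FACING_FRONT -> "front"
            CameraCharacteristics.LENS_FACING_BACK -> "back"
            CameraCharacteristics.LENS_FACING_EXTERNAL -> "external"
            else -> "unknown"
        }
        // On a rooted device running an injection tool, the synthetic feed
        // typically still reports "front": at the app level, real photons
        // and a replayed video stream look identical.
        println("camera $id: facing=$facing")
    }
}
```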

Defeating “active liveness” challenges

Early KYC systems used “Active Liveness” challenges (e.g., “Smile,” “Look up”). Because a static photo cannot respond to a random prompt, these were long considered secure.

However, modern generative AI models can animate a static photo in real time.

One-Shot Learning: Motion-transfer models that work from a single image (such as the First Order Motion Model) can take one ID photo of a victim and animate it to follow the movements of the attacker’s face. If the attacker smiles or nods, the stolen ID photo smiles or nods on the screen, satisfying the liveness challenge.
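
For context, here is roughly what the challenge side of an active-liveness flow looks like, as a hedged Kotlin sketch (all names are hypothetical). Randomizing the sequence defeats pre-recorded videos, but not a face swap puppeted live by the attacker:

```kotlin
import java.security.SecureRandom

// Sketch of a typical active-liveness challenge flow (simplified,
// illustrative names). The server issues a random sequence so a
// pre-recorded video cannot anticipate the prompts.
enum class Challenge { SMILE, BLINK, TURN_LEFT, TURN_RIGHT, LOOK_UP }

data class LivenessSession(val nonce: ByteArray, val sequence: List<Challenge>)

fun issueSession(length: Int = 3): LivenessSession {
    val rng = SecureRandom()
    val nonce = ByteArray(16).also { rng.nextBytes(it) }
    val sequence = List(length) { Challenge.values()[rng.nextInt(Challenge.values().size)] }
    return LivenessSession(nonce, sequence)
}

// The weakness: a real-time face swap driven by the attacker's own face
// performs SMILE or TURN_LEFT on demand, so randomness alone no longer
// proves the subject on screen is a live, un-synthesized human.
```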

The scale of the threat

This is not a theoretical risk. In early 2024, a sophisticated group used deepfakes to bypass the KYC checks of a major crypto exchange, creating thousands of verified accounts. These accounts were then sold on the Dark Web as “Fully Verified” identities for up to $200 each. This “Identity-as-a-Service” market fuels ransomware cash-outs and terrorism financing.

Defense: passive liveness & hardware attestation

The defense must move from analyzing the image to analyzing the device and the light.

Passive Liveness: Instead of asking the user to move (which AI can fake), modern systems analyze the screen reflection in the user’s eyes or the subtle color changes in the skin caused by the heartbeat (rPPG, remote photoplethysmography). These micro-signals are currently very difficult for AI to replicate in real time.
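
As a rough illustration of the rPPG idea, here is a minimal Kotlin sketch, assuming we already have the mean green-channel intensity of the face region for each frame (names are hypothetical; production systems use face tracking, chrominance models, and far more robust spectral analysis):

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin
import kotlin.math.sqrt

// Sketch of the core rPPG idea. Input: mean green-channel intensity of
// the face region, one value per frame, sampled at `fps` frames/second.
// Blood flow modulates skin color at the heart rate, so a live face
// shows a spectral peak in the plausible heart-rate band.
fun estimateHeartRateBpm(greenMeans: DoubleArray, fps: Double): Double? {
    val n = greenMeans.size
    if (n < fps * 5) return null // need at least a few seconds of signal

    // Detrend: remove the DC component so lighting level doesn't dominate.
    val mean = greenMeans.average()
    val signal = DoubleArray(n) { greenMeans[it] - mean }

    // Naive DFT scan over the heart-rate band (0.7–4.0 Hz ≈ 42–240 bpm).
    var bestFreq = 0.0
    var bestPower = 0.0
    var freq = 0.7
    while (freq <= 4.0) {
        var re = 0.0
        var im = 0.0
        for (t in 0 until n) {
            val angle = 2 * PI * freq * t / fps
            re += signal[t] * cos(angle)
            im += signal[t] * sin(angle)
        }
        val power = sqrt(re * re + im * im)
        if (power > bestPower) { bestPower = power; bestFreq = freq }
        freq += 0.02
    }

    // A synthetic video usually lacks a coherent pulse peak; a real system
    // would also test the peak's prominence against the rest of the spectrum
    // and treat a weak or absent peak as a liveness red flag.
    return bestFreq * 60
}
```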

Device Attestation: Apps must verify that the camera feed is coming from genuine hardware, not a software driver. On Android, this involves checking for “Root” access or “Hooking” frameworks. If a virtual camera driver is detected, the KYC process should be instantly terminated.
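
A hedged sketch of what such app-level checks might look like on Android (heuristic and deliberately incomplete; the function name is hypothetical, and a production app would back these up with a platform service such as Google’s Play Integrity API):

```kotlin
import java.io.File

// Sketch of app-level integrity heuristics (illustrative, not exhaustive;
// a determined attacker can evade each of these individually, which is
// why platform attestation should back them up).
fun deviceLooksCompromised(): Boolean {
    // 1. Common locations of the `su` binary on rooted devices.
    val rootPaths = listOf(
        "/system/bin/su", "/system/xbin/su",
        "/sbin/su", "/data/local/bin/su"
    )
    if (rootPaths.any { File(it).exists() }) return true

    // 2. Hooking frameworks (e.g., Xposed) leave classes on the classpath.
    try {
        Class.forName("de.robv.android.xposed.XposedBridge")
        return true
    } catch (_: ClassNotFoundException) {
        // Not present: nothing to do.
    }

    return false
}
```

Crucially, the verdict must be enforced server-side: a hooked client can simply be patched to skip any check that runs only on the device.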

Conclusion

The “Video Selfie” is no longer proof of life. As deepfakes become indistinguishable from reality, identity verification must rely on a multi-layered approach combining device fingerprinting, behavioral biometrics, and possibly hardware-backed cryptographic attestation (like Apple’s Secure Enclave) to ensure the camera itself has not been hijacked.
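
On Android, the closest counterpart to the Secure Enclave flow is hardware-backed key attestation through the AndroidKeyStore. A minimal sketch (the key alias and function name are illustrative, and real code must handle API levels, exceptions, and server-side chain validation):

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Sketch of Android hardware-backed key attestation. The attestation
// certificate chain proves to the server that the key lives in secure
// hardware on an unmodified device; the server-supplied challenge
// prevents replay of an old attestation.
fun generateAttestedKey(serverChallenge: ByteArray) {
    val spec = KeyGenParameterSpec.Builder(
        "kyc_session_key", // hypothetical alias
        KeyProperties.PURPOSE_SIGN
    )
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setAttestationChallenge(serverChallenge) // binds the cert chain to this session
        .build()

    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"
    )
    generator.initialize(spec)
    generator.generateKeyPair()
    // The X.509 chain for "kyc_session_key" (read back from the KeyStore)
    // is then sent to the server, which verifies it up to Google's root.
}
```

None of these layers is sufficient on its own, but together they shift the question from “does this video look human?” to “can this device prove it has not been tampered with?”, which is far harder for an injection attack to fake.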
