Parya Lotfi is CEO & Cofounder of DuckDuckGoose, helping lead AI-driven deepfake detection in the fight against crime.
Financial institutions are increasingly being targeted by deepfake-enabled fraud during know-your-customer (KYC) processes. These sophisticated attacks threaten the integrity of identity-verification frameworks that support anti-money laundering (AML) and counter-terrorism financing (CTF) systems.
The U.S. Treasury’s FinCEN has reported an increase in suspicious activity involving AI-generated media. It warns that “bad actors are seeking to exploit [generative AI] to defraud … financial institutions and their customers.”
Meanwhile, Wall Street’s FINRA has issued its own warning: Deepfake audio and video scams could cost the financial sector as much as $40 billion by 2027, according to research from Deloitte’s Center for Financial Services cited by The Wall Street Journal.
Biometric checks can no longer be relied on as the sole defense. A 2024 survey by Regula found that 49% of businesses across industries, including banking and fintech, have already encountered fraud schemes using audio or video deepfakes, with average losses approaching $450,000 per incident.
As these figures escalate, understanding the anatomy of a deepfake intrusion becomes critical for safeguarding customers, reputations and the global financial system.
Real-World Breach: Over 1,100 Deepfake Attempts In Indonesia
In late 2024, an Indonesian bank saw more than 1,100 attempts to bypass its digital KYC loan-application process in just three months, according to cybersecurity firm Group-IB.
Fraudsters combined AI-powered face-swapping with virtual-camera tools to spoof the bank’s liveness-detection controls, despite the institution’s “robust, multi-layered security measures.” Potential losses from these intrusions have been estimated at $138.5 million in Indonesia alone.
As Group-IB stated, “AI-driven face-swapping tools enabled fraudsters to replace a victim’s facial features with those of another person,” which in turn allowed them to exploit “virtual camera software to manipulate biometric data … deceiving institutions into approving fraudulent transactions” during KYC processes.
Inside The Deepfake KYC Fraud Playbook
Deepfake-enabled KYC fraud follows a methodical, multistage process:
1. Data Acquisition: Fraudsters begin by collecting personal data, in many instances using malware, social networking sites, phishing scams or the dark web. This data is then used to create convincing fake identities.
2. Manipulation: Deepfake technology is then used to alter identity documents. Fraudsters swap photos, adjust details or even re-create entire identities to bypass traditional KYC checks.
3. Exploitation: Fraudsters use virtual cameras or prerecorded deepfake videos to supply spurious biometric data to verification systems. This helps them evade liveness detection by simulating real-time interactions.
4. Execution: With these tools in place, fraudsters can open fraudulent accounts, apply for loans and carry out high-value transactions, all while appearing completely legitimate.
This points to a tough reality: Conventional authentication procedures, including facial recognition and document verification, are no longer sufficient to counter these advanced attacks. Consider that, on average, there was one deepfake attempt every five minutes over the past 12 months, while a recent 2025 study found that only 0.1% of people could reliably spot deepfakes.
Fortifying KYC: A Multilayer Defense Strategy
Together, these issues highlight an urgent need for financial institutions to evolve from reactive incident response toward proactive, AI-powered detection and multilayer defenses.
Some of the technologies that companies should be considering in the fight against deepfakes include:
1. Multimodal Biometrics: Combine facial recognition with voice biometrics, behavioral patterns (e.g., typing rhythms) and advanced liveness cues to create overlapping verification barriers.
2. Explainable-AI Detection: Deploy AI tools trained to spot deepfake artifacts, such as unnatural flickering, mismatched body movement or inconsistencies between speech and facial expressions.
3. Layered Verification: Integrate document-authenticity checks, geolocation validation and transaction-pattern analytics alongside biometric scans to catch anomalies before account approval (a simplified scoring sketch follows this list).
4. Continuous Monitoring: Extend fraud detection beyond onboarding. Real-time AI monitoring of account behavior can detect suspicious transfers or device changes indicative of post-onboarding compromise.
5. Employee Training: Arm employees with deepfake-awareness training so they can spot red flags, such as off-sync audio or unnatural facial movement, in live or recorded customer interactions.
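To make the layered approach concrete, below is a minimal sketch, in Python, of how overlapping signals from these checks might be fused into a single onboarding decision. The signal names, weights and thresholds are illustrative assumptions rather than a reference implementation of any vendor’s stack; the point is that no single biometric score should be able to approve an applicant on its own.

```python
from dataclasses import dataclass, asdict

@dataclass
class VerificationSignals:
    """Illustrative per-applicant scores in [0, 1]; higher means more trustworthy."""
    face_match: float             # facial-recognition similarity to the ID document photo
    liveness: float               # liveness / anti-spoofing confidence
    voice_match: float            # voice-biometric confidence (if a voice step is used)
    document_authenticity: float  # ID document tamper and template checks
    behavior: float               # behavioral cues, e.g., typing-rhythm consistency
    geolocation: float            # consistency of device location with declared address

# Hypothetical weights: overlapping checks, none decisive on its own.
WEIGHTS = {
    "face_match": 0.25,
    "liveness": 0.25,
    "voice_match": 0.15,
    "document_authenticity": 0.15,
    "behavior": 0.10,
    "geolocation": 0.10,
}

APPROVE_THRESHOLD = 0.80  # illustrative cut-offs, not calibrated values
REVIEW_THRESHOLD = 0.60   # below this, reject; in between, send to manual review


def score_applicant(signals: VerificationSignals) -> tuple[float, str]:
    """Fuse the overlapping checks into one weighted score and a decision."""
    values = asdict(signals)
    score = sum(WEIGHTS[name] * values[name] for name in WEIGHTS)

    # A hard floor on liveness: deepfake injection attacks target this control,
    # so strong scores elsewhere must not be able to compensate for it.
    if signals.liveness < 0.5 or score < REVIEW_THRESHOLD:
        return score, "reject"
    if score < APPROVE_THRESHOLD:
        return score, "manual_review"
    return score, "approve"


if __name__ == "__main__":
    applicant = VerificationSignals(
        face_match=0.92, liveness=0.40, voice_match=0.88,
        document_authenticity=0.90, behavior=0.75, geolocation=0.95,
    )
    # Low liveness forces a reject despite a strong face match.
    print(score_applicant(applicant))
```

The hard floor on the liveness score reflects the lesson from the Indonesian case: a fraudster who passes face matching with a swapped face should not be able to offset a failed liveness check with strong scores elsewhere.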
Beyond technology, institutions must establish robust internal protocols and cross-functional collaboration.
Traditional presentation- and injection-attack detection methods are no longer adequate on their own, as deepfakes convincingly mimic human behaviors, even replicating nuanced physiological signals such as the subtle changes in skin color driven by a person’s heartbeat.
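For context, that heartbeat cue is typically measured via remote photoplethysmography (rPPG): faint, periodic color changes in facial skin caused by blood flow. A rough sketch of the idea, assuming face-cropped video frames and illustrative sampling parameters, is shown below; the fact that modern generators can learn to reproduce this very signal is why it cannot serve as a standalone liveness check.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_pulse_bpm(face_frames: np.ndarray, fps: float = 30.0) -> float:
    """Rough rPPG estimate: track the mean green-channel intensity of face-cropped
    frames over time, band-pass filter it to plausible heart-rate frequencies,
    and read the dominant spectral peak in beats per minute.

    face_frames: array of shape (num_frames, height, width, 3), RGB, already
    cropped to the face region (face detection is assumed to happen upstream).
    """
    # 1. Spatially average the green channel (most sensitive to blood-volume changes).
    green_trace = face_frames[:, :, :, 1].mean(axis=(1, 2)).astype(np.float64)

    # 2. Remove the slow-varying mean so only the pulsatile component remains.
    green_trace -= green_trace.mean()

    # 3. Band-pass filter to 0.7-4.0 Hz (roughly 42-240 bpm).
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 4.0 / nyquist], btype="band")
    filtered = filtfilt(b, a, green_trace)

    # 4. Dominant frequency via FFT, converted to beats per minute.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return float(freqs[np.argmax(spectrum)] * 60.0)


if __name__ == "__main__":
    # Synthetic stand-in: 10 seconds of dummy "face" frames with a 72 bpm pulse baked in.
    fps, seconds = 30.0, 10
    t = np.arange(int(fps * seconds)) / fps
    pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz = 72 bpm
    frames = np.full((len(t), 8, 8, 3), 128.0)
    frames[:, :, :, 1] += pulse[:, None, None]
    print(round(estimate_pulse_bpm(frames, fps)))  # ~72
```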
Given these limitations, it’s imperative that dedicated fraud-response teams, comprising compliance officers, cybersecurity analysts and customer-relations managers, regularly analyze fraud patterns and update KYC procedures. Regular onboarding audits and deepfake attack simulations can proactively identify vulnerabilities, and clear escalation pathways ensure rapid, consistent responses to suspicious activity.
Implementing comprehensive governance policies is also essential for securely integrating new detection methodologies, ensuring compliance with emerging regulations such as the EU AI Act and privacy laws. Regular risk assessments and tabletop exercises stress-test KYC and AML protocols against evolving deepfake scenarios, allowing ongoing strategic adjustments.
Future Challenges And Evolution
Looking ahead, deepfake technologies will continue to evolve rapidly, driven by innovations like real-time voice cloning, hyper-realistic lip syncs and advanced text-to-video models such as Google’s Veo 3 or OpenAI’s Sora. Meanwhile, the increasing digitization of financial interactions and growing consumer demand for convenience inadvertently open new avenues for fraudsters using unpredictable, sophisticated generative AI methods.
To stay ahead, organizations must invest in cutting-edge research and collaborate with industry and academia to anticipate and adapt to these continually evolving threats.
Conclusion: A Continuous Battle For Digital Integrity
As deepfakes grow more sophisticated and widespread, financial institutions face a critical juncture: proactively adapting to new technological threats or risking severe financial and reputational damage. By adopting multilayered defenses, fostering continuous innovation and promoting internal readiness, banks and fintech firms can build resilient strategies capable of addressing the evolving threat landscape.
Staying ahead in the AI arms race is not just beneficial; it’s essential to preserving digital integrity and customer trust.
