A new report from S&P Global Market Intelligence (paywall) highlights deepfakes as "a growing threat in fintech and payments," where fraudsters continue to grow more sophisticated. The report, released earlier this week, warns that widely available generative AI tools have transformed deepfake fraud from a cottage industry into a "rapidly scalable, enterprise-grade operation," creating what S&P calls an "existential challenge to biometric and document-based verification in banking and payments."
The report's warnings are supported by alarming statistics. According to Deloitte, 25.9% of organizations experienced one or more deepfake incidents targeting financial and accounting data in the past 12 months, and 51.6% expect such attacks to increase in the coming year. The financial impact is staggering: Experian reports that deepfake scams, combined with other forms of identity-related fraud, cost U.S. adults a total of $43 billion in 2023.
As S&P Global notes, the threat is particularly acute for financial institutions' critical communication channels. A 2023 University College London study cited in the report found that listeners correctly identified deepfake audio only 73% of the time, underscoring how convincing modern AI-generated content has become. This vulnerability has already led to significant losses: in 2024, British engineering firm Arup reportedly lost $25 million to an AI-enabled social engineering scam.
Multiple Attack Vectors
The S&P report outlines several key areas where financial institutions face deepfake threats:
Customer Onboarding: Fraudsters use deepfakes to bypass photo and document ID checks, potentially allowing money launderers or sanctioned individuals to open accounts.
Authentication: Deepfakes can be used to spoof biometric verification systems, enabling account takeovers.
Social Engineering: Perhaps most concerning, deepfakes enable sophisticated impersonation attacks. CFO fraud, in which manipulated video or audio is used to trick executives into transferring money, has become increasingly common.
The stakes are particularly high given increasing regulatory scrutiny. The S&P report emphasizes that regulators worldwide are assessing large fines for failures in anti-money laundering controls or sanctions breaches. Financial institutions that unknowingly onboard bad actors through compromised verification systems could face severe consequences.
While technological solutions are crucial, organizational culture and awareness play an equally important role. According to CyberArk, 70% of surveyed security leaders are confident their employees can identify deepfakes of their leaders. Yet this confidence may be misplaced: 34% of workers reported they would struggle to distinguish a real phone call or email from their boss from a fake one.
The industry is responding to these challenges with AI-powered solutions. The deepfake detection market is expected to grow 42% annually through 2026, according to Mastercard. This growth reflects the industry's understanding that AI-powered threats require AI-powered defenses.
Leading the Charge Against Deepfakes
As financial institutions grapple with this rapidly evolving threat landscape, Reality Defender stands out for its comprehensive approach to deepfake detection. Unlike solutions focused solely on document verification, Reality Defender's technology detects deepfaked audio and video in real time across contact centers and videoconferencing platforms, precisely the attack surfaces where financial institutions are most vulnerable.
With deepfake fraud increasingly threatening both customer trust and institutional security, Reality Defender's proven real-time detection capabilities provide the multi-modal protection that modern financial institutions need.
To learn how Reality Defender can help secure your institution's critical communications against deepfake impersonation attacks, schedule a conversation with our team.