Mitigating AI-Powered Attacks Against Identity and Authentication

March 6, 2026 | Better Identity Coalition
This paper is intended for financial institutions (FIs), cybersecurity and fraud professionals, AI service providers, telecommunications companies, and policymakers at regulatory agencies and in legislative bodies who are responsible for safeguarding identity systems and mitigating the risks posed by generative AI (Gen AI).

The purpose of this paper is to highlight three current and emerging attack vectors powered by the malicious use of Gen AI, along with ten concrete examples of how adversaries are using these attack vectors to compromise the identity and authentication tools used by many financial institutions. The paper outlines potential mitigations that FIs can deploy to guard against each of these attacks. It also includes a maturity model for identity controls against malicious use of Gen AI, laying out high-level technologies, ideas, and frameworks that financial institutions can work towards to mitigate Gen AI-powered attacks.

The emergence of Gen AI has supercharged attackers' ability to fake likenesses and identities. Attacks that were once resource-intensive and difficult to execute have become commoditized, with cheap or free Gen AI-powered deepfake tools now able to spoof video, images, and voices. By examining these threats and exploring how both Gen AI and traditional AI can be leveraged for defense, the paper delivers practical insights to help financial institutions protect their systems and consumers from increasingly sophisticated fraud and identity risks.

This paper is a deliverable of the Financial Services Sector Coordinating Council’s Artificial Intelligence and Identity and Authentication Workstream (AI-IA), which was co-chaired by the American Bankers Association and Better Identity Coalition.
