Combatting AI-Enabled Fraud – A Top Financial Crime Threat

Summary

Situation Overview: In January 2026, the Association of Certified Anti-Money Laundering Specialists (ACAMS) published its 2026 Global Anti-Financial Crime Threats Report (ACAMS Report). The ACAMS Report identified artificial intelligence (AI)-enabled fraud as a top financial crime threat. Intergovernmental agencies and the U.S. Department of the Treasury (Treasury) are also focused on the risks of AI-enabled fraud.

What: As criminals increasingly utilize AI to commit crimes, firms must ensure they are prepared to identify and mitigate the risks of AI-enabled fraud using effective risk management frameworks and AI threat detection tools.

Who: All financial institutions, including bank and non-bank institutions.

Background

AI-enabled fraud can occur throughout the client lifecycle – at customer onboarding, through identity theft and fraudulent identity documentation, and post-account opening, through account takeover. It can also be carried out at scale: perpetrators can create voluminous synthetic identities and deploy AI-enabled face generation, voice clones, and AI-generated fraudulent documentation.

In Depth

AI-Driven Scams Are on the Rise

The ACAMS Report identified AI-enabled fraud as the number one threat institutions will face in 2026. AI-driven scams are schemes in which criminals use generative AI and large language models (LLMs) to commit crimes. These scams are increasingly popular among bad actors because they are both sophisticated and scalable. AI lets criminals hyper-personalize scams to their targets – for example, by creating an AI-generated phishing e-mail in a target’s native language or using deepfake voice cloning to impersonate a target’s family members. Employees can also exploit this technology, for example, by generating fake IDs and documents to open fake accounts. AI and LLMs have also made these scams scalable: criminals often use social media and messaging platforms to reach more victims, such as by mass-generating highly convincing fraudulent messages that appear to be legitimate alerts from a target’s bank.

Additional Focus on AI-Driven Scams

In December 2025, the Financial Action Task Force, a global intergovernmental body, published “Horizon Scan – Artificial Intelligence and Deepfakes,” a report that provides a forward-looking view of AI-related threats and trends, and mitigations to those threats, such as using information from independent sources to confirm a customer’s identity.

The Treasury has also issued multiple reports relating to AI risks. In November 2024, Treasury’s Financial Crimes Enforcement Network (FinCEN) issued a “FinCEN Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions” to “help financial institutions identify fraud schemes associated with the use of deepfake media…” To detect and mitigate deepfake identity documents, financial institutions can re-review a customer’s account opening documents, examine an image’s metadata, and use software designed to detect possible deepfakes. Red flags that should trigger re-review of identity documents include inconsistencies among multiple identity documents submitted by a customer, a customer’s inability to authenticate their identity, and inconsistencies between the identity document and other aspects of the customer’s profile.
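To make the re-review triggers concrete, the red flags above can be sketched as a simple rule-based screen. This is purely illustrative – the field names, document structure, and rules are hypothetical, not part of the FinCEN alert – but it shows how such checks can be automated to queue accounts for manual re-review.

```python
# Illustrative sketch only: a rule-based screen that flags identity
# submissions for manual re-review. Field names and rules are hypothetical.

def identity_red_flags(documents, profile):
    """Return red flags warranting re-review of identity documents.

    documents: list of dicts, each with 'name' and 'date_of_birth' keys
    profile:   dict with the customer's stated 'name' and 'date_of_birth'
    """
    flags = []

    # Red flag: inconsistencies among multiple identity documents
    names = {d["name"] for d in documents}
    dobs = {d["date_of_birth"] for d in documents}
    if len(names) > 1 or len(dobs) > 1:
        flags.append("inconsistent identity documents")

    # Red flag: identity document does not match the customer's profile
    if documents and (documents[0]["name"] != profile["name"]
                      or documents[0]["date_of_birth"] != profile["date_of_birth"]):
        flags.append("document/profile mismatch")

    return flags
```

In practice such rules would sit alongside metadata examination and deepfake-detection software, routing any flagged account to a human reviewer rather than making an automated decision.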

How to Combat AI-Driven Scams

As AI creates an increasingly complex threat environment for financial institutions, firms must stay abreast of developments in AI technology. Patomak recommends that financial institutions keep their risk management frameworks current to detect and mitigate AI-enabled fraud, and that they use well-governed AI models to detect it.

Risk management frameworks can identify and mitigate AI-enabled fraud by:

  1. Maintaining a comprehensive inventory of the AI-enabled fraud risks and vulnerabilities to which the firm may be exposed;
  2. Deploying multi-layered controls and mitigants for identified risks, such as multi-factor authentication, advanced biometrics, and geolocation analytics;
  3. Reviewing and addressing regulatory guidance as it is released by federal and state regulators;
  4. Educating board members, senior executives, employees, and customers about AI-enabled fraud risks; and
  5. Engaging regularly with industry groups and external subject matter experts to maintain awareness of AI developments and foster intelligence sharing.
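The multi-layered controls in item 2 can be sketched as a simple decision rule in which each layer contributes independent evidence about a session. Everything here – the control names, scores, and thresholds – is a hypothetical illustration, not a production policy.

```python
# Hypothetical sketch of multi-layered controls: each control layer adds
# risk when it disagrees with the customer's normal pattern. All names,
# weights, and thresholds are illustrative only.

def assess_session(mfa_passed, biometric_score, geo_country, home_country):
    """Combine control signals into an action: allow / step_up / block."""
    risk = 0
    if not mfa_passed:
        risk += 2                    # failed or skipped multi-factor auth
    if biometric_score < 0.80:       # biometric match below threshold
        risk += 1
    if geo_country != home_country:  # geolocation analytics mismatch
        risk += 1

    if risk >= 3:
        return "block"
    if risk >= 1:
        return "step_up"             # require additional verification
    return "allow"
```

The design point is layering: no single control decides the outcome, so defeating one layer (for example, a cloned voice beating a biometric check) is not enough to take over an account.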

Well-governed AI models can be used to detect AI-enabled fraud. AI models should be subject to the firm’s model risk management framework, including:

  1. Model identification and use cases;
  2. Clear model development and validation documentation, which explains how the AI-enabled model works, and the model’s data lineage, including where data is sourced from, how the model uses the data, and where the model sends the data;
  3. Model monitoring, including defined performance metrics, such as the AI model’s false positive rate and its accuracy in detecting fraud; and
  4. Model output that includes a documented rationale for why the AI model made its decision.

Put Patomak’s Expertise to Work

As financial crime threats evolve over time, firms must regularly assess their vulnerabilities and opportunities amid these changes. Patomak has deep expertise in risk management, model risk management, and financial crimes compliance. If you would like to learn how Patomak can partner with you to navigate these and other areas, please reach out to Partner Diane Daley at ddaley@patomak.com, Senior Director Heather Espinosa at hespinosa@patomak.com, or Associate Stephanie Moore at smoore@patomak.com.