
FinCEN's AI Provision Is a Signal, Not a Solution. Here's What Practitioners Should Do With It.

Published: April 28, 2026
Read time: 8 mins
Kunal Datta, Chief Product Officer, Unit21

On April 7, FinCEN proposed a rule to “fundamentally reform” AML/CFT programs. Within days, the practitioner community lit up. AML officers called out the vague language. Veteran compliance leaders pointed out that the main beneficiaries of regulatory pageantry tend to be "regulators, lawyers, consultants, and professional money launderers." One attorney put it best: it "could all have been said in so. many. fewer. words."

They're not wrong. The proposed rule uses "risk-based," "effectiveness," and "reasonably designed" like load-bearing walls, but never defines what those walls are made of. If you've spent any time operationalizing AML programs, you know the pattern: regulators publish aspirational language, practitioners scramble to figure out what it actually means for their Monday morning, and the gap between intent and implementation becomes someone else's problem.

But buried in 200+ pages of familiar regulatory prose, there is one provision that deserves a closer look. Not because the language is precise (it isn't), but because of what it signals about where the industry is heading.

FinCEN explicitly states that institutions that "responsibly experiment with innovative technologies in their AML/CFT programs will not incur any additional risk of being subject to a significant supervisory AML/CFT action or AML/CFT enforcement action solely based on the use of innovative technologies."

And it goes further. When evaluating enforcement actions, FinCEN will consider whether an institution is "employing innovative tools such as artificial intelligence that demonstrate the effectiveness" of the institution's AML/CFT program.

For the first time, a federal regulator is saying: using AI in your compliance program is a positive factor, not a risk factor.

Before You Celebrate: The Language Problem

I want to be careful here, because the practitioner's skepticism is earned.

"Responsibly experiment" is doing a lot of work in that sentence. What does "responsibly" mean? What counts as "experiment" versus "production deployment"? If your AI agent misses a case, does the fact that you were "responsibly experimenting" protect you in an exam? The rule doesn't say.

This is the same problem practitioners have identified across the entire proposed rule: terms like "effectiveness" and "reasonably designed" mean different things to different people. To a board, an effective program is one without exam findings. To an AML officer, it's when law enforcement calls you because your SARs are that good. To a regulator, it could mean anything from clean CTRs to zero audit findings.

FinCEN is asking institutions to build effective, risk-based programs without defining indicators of effectiveness. That's the gap. And it matters, because compliance officers are the ones who have to operationalize this language into actual decisions on a Tuesday afternoon.

So let me make the case for why the AI provision matters anyway, not because the language is airtight, but because of what AI can actually deliver to practitioners trying to close exactly this gap.

The Indicators of Effectiveness Problem

Practitioners have been pushing FinCEN to define "indicators of effectiveness" since at least 2020. The argument is straightforward: don't just tell institutions to be "effective." Show them what effective looks like. Give them measurable, operational indicators they can point to when the examiner walks in.

This is exactly where AI changes the game, and not in the way most RegTech marketing pitches frame it.

Manual compliance programs struggle to produce indicators of effectiveness because the work is inherently unstructured. An analyst opens tabs, does searches, reads articles, maybe uses Google Translate, and pulls findings into a narrative. The quality varies analyst to analyst, day to day. When someone asks "how effective is your program?", the honest answer is often "we think it's pretty good, but we don't have great data on that."

AI-first compliance workflows produce indicators of effectiveness by design. Every alert processed by an AI investigation agent generates a structured record: what evidence was gathered, what sources were checked, what reasoning led to the disposition, how long it took, what confidence level the system assigned. You can measure accuracy against eval sets benchmarked to your best analysts. You can sample and QA decisions systematically. You can show, with data, that your program reviewed 100% of alerts at a consistent depth, not just the ones your team had time to get to.
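To make that concrete, here is a minimal sketch of what one such record might look like. The schema is hypothetical, assembled from the list above; it is not Unit21's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical schema for illustration only -- not a real vendor data model.
@dataclass
class InvestigationRecord:
    alert_id: str
    evidence_gathered: list[str]   # e.g. transaction histories, KYC documents
    sources_checked: list[str]     # e.g. watchlists, adverse media, OSINT
    reasoning: str                 # the agent's written rationale for its call
    disposition: str               # e.g. "escalate", "close", "file_sar"
    confidence: float              # system-assigned confidence, 0.0 to 1.0
    started_at: datetime
    completed_at: datetime

    @property
    def duration_seconds(self) -> float:
        """How long the investigation took -- a metric an examiner can sample."""
        return (self.completed_at - self.started_at).total_seconds()
```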

The irony is that the "indicators of effectiveness" practitioners have been asking regulators to define are the same metrics that AI systems produce naturally: throughput, accuracy, consistency, auditability, and coverage. A five-person team using AI investigation agents doesn't just process more alerts. It produces a measurably more defensible program.
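Once records like that exist, the indicators fall out of a few lines of aggregation. A rough sketch, assuming each record is a dict carrying a duration, a confidence score, and (for a QA-sampled subset) a correctness label:

```python
from statistics import mean, pstdev

def effectiveness_indicators(records: list[dict], total_alerts: int) -> dict:
    """Aggregate program-level indicators from per-alert investigation records.

    Illustrative only. Assumes each record has 'duration_seconds',
    'confidence', and, for a QA-sampled subset, 'correct' (True/False
    as judged against human analyst review).
    """
    qa = [r for r in records if r.get("correct") is not None]
    return {
        # Coverage: share of all alerts that received a full-depth review.
        "coverage": len(records) / total_alerts,
        # Throughput: average time to complete an investigation.
        "avg_duration_seconds": mean(r["duration_seconds"] for r in records),
        # Accuracy: agreement with human QA labels on the sampled subset.
        "accuracy": sum(r["correct"] for r in qa) / len(qa) if qa else None,
        # Consistency: low spread in confidence suggests uniform treatment.
        "confidence_stddev": pstdev(r["confidence"] for r in records),
    }
```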

This doesn't require FinCEN to define the indicators first. Teams can start producing them now. And when the final rule lands, the institutions that already have measurable, auditable, AI-assisted workflows will be the ones best positioned to demonstrate "effectiveness," whatever definition FinCEN eventually settles on.

From Letter to Spirit (And Why That Requires AI)

The deeper shift in the proposed rule is philosophical. FinCEN is moving from compliance-as-checklist to compliance-as-effectiveness. Do you have a BSA officer? Check. Written policies? Check. File SARs? Check. Under the old model, if you had the boxes checked, you were largely fine, even if your program wasn't particularly effective at finding the nefarious activity moving through your institution.

The proposed rule says that's not enough anymore. FinCEN wants programs that work, not just programs that exist.

Here's the uncomfortable truth about why most programs don't work as well as they could: it's not incompetence. It's operational capacity. Institutions have finite headcount. They define what's "high risk," do deep-dive investigations on that population, and apply lighter treatment to everything else. It's a rational response to limited resources. But it means vast swaths of transaction activity get minimal scrutiny.

The spirit behind AML regulations is to ensure the financial system is not being used to fund human trafficking, terrorism financing, political corruption, and other serious crimes. The money moves through fintechs, crypto platforms, banks, and credit unions. With limited human capacity, most institutions can only achieve the letter of the law: the regulatory floor.

AI changes this equation. An AI investigation agent can conduct the same depth of review on every alert, not just the high-risk ones: pulling transaction histories, checking watchlists, searching open-source intelligence, drafting investigation narratives, and presenting a complete evidence package for human review. This isn't about replacing analysts. It's about giving a five-person team the investigative depth of a fifty-person team, so institutions can pursue the spirit of the regulation, not just the letter.
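As a sketch of that loop, with every function name below an illustrative stub rather than a real vendor API:

```python
# Skeleton of the alert-investigation loop described above. All names are
# placeholders; the stubs stand in for the agent's actual tools.

def pull_transaction_history(subject_id: str) -> list[dict]:
    return []  # stub: would query the institution's transaction store

def check_watchlists(subject_id: str) -> list[str]:
    return []  # stub: would screen against sanctions/PEP/watchlists

def search_open_source_intel(subject_id: str) -> list[str]:
    return []  # stub: would run adverse-media and OSINT searches

def draft_narrative(alert: dict, evidence: dict) -> str:
    return "..."  # stub: the agent drafts the investigation narrative here

def investigate_alert(alert: dict) -> dict:
    """Run the same depth of review on every alert, then hand off to a human."""
    subject = alert["subject_id"]
    evidence = {
        "transactions": pull_transaction_history(subject),
        "watchlists": check_watchlists(subject),
        "osint": search_open_source_intel(subject),
    }
    # The agent assembles a complete evidence package; the final disposition
    # (escalate / close / file a SAR) remains a human decision.
    return {"alert": alert, "evidence": evidence,
            "narrative": draft_narrative(alert, evidence)}
```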

And that's what FinCEN's effectiveness-based framework is ultimately asking for, even if the language doesn't spell it out yet.

The Accountability Question Hasn't Gone Away

Let me be direct: FinCEN's proposed rule is not a blank check for AI adoption.

If an AI agent misses a case that should have been flagged, "my AI decided not to" is not a defensible answer, just as "my analyst decided not to" isn't defensible if the analyst had no training, no process, and no quality controls.

What FinCEN is signaling is that the presence of AI itself is not the problem. Poor implementation is. And that distinction matters.

The institutions that will benefit most from this provision are the ones that treat AI adoption the way they'd treat any other program component: with governance, testing, monitoring, and continuous improvement. Run eval sets against a golden standard (not average analyst performance, but your best analysts). Sample AI decisions for QA review. Maintain complete audit trails. Keep humans in the loop for final disposition on every case.
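Those first two controls are straightforward to mechanize. A hedged sketch with hypothetical inputs: scoring agent dispositions against a golden set labeled by your best analysts, and pulling a reproducible random sample of AI-closed alerts for human QA:

```python
import random

def eval_against_golden_set(agent_dispositions: dict, golden_labels: dict) -> float:
    """Score agent dispositions against a golden set labeled by your best analysts.

    Illustrative sketch: both arguments map alert_id -> disposition string.
    """
    shared = set(agent_dispositions) & set(golden_labels)
    if not shared:
        return 0.0
    hits = sum(agent_dispositions[a] == golden_labels[a] for a in shared)
    return hits / len(shared)

def sample_for_qa(closed_alert_ids: list, rate: float = 0.05, seed: int = 0) -> list:
    """Pull a reproducible random sample of AI-closed alerts for human QA review."""
    if not closed_alert_ids:
        return []
    rng = random.Random(seed)
    k = max(1, int(len(closed_alert_ids) * rate))
    return rng.sample(closed_alert_ids, k)
```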

Do that, and you're not just protected under the proposed rule. You're building the kind of measurably effective program that will hold up regardless of how FinCEN's final language lands.

What Practitioners Should Actually Do

The comment period runs through June 9, 2026. Here's what I'd suggest:

Submit comments. Practitioners need to be in the conversation, not just regtechs and lobby firms. If the AI provision matters to you, tell FinCEN what "responsible" AI adoption looks like from the practitioner's seat. Pick one or two questions from the NPRM and respond. The people operationalizing these rules should be the ones shaping them.

Start building your indicators of effectiveness now. Don't wait for FinCEN to define them. If you're running AI-assisted workflows, you already have the data: alert coverage, investigation throughput, accuracy rates, reasoning audit trails. If you're not running AI yet, start by identifying the highest-volume workflows that consume analyst time (L1 alert triage and initial investigation are the obvious candidates) and evaluate how AI can handle those at scale.

Ask your vendors hard questions. Can the AI explain its reasoning for every decision? Can you audit every disposition? Can your analysts override and correct the AI? Is the system learning from your team's feedback? A vendor that tells you "don't worry, if the new rules apply to you, we'll let you know" is not the right fit. You need technology that gives your team control over operationalizing your institution's specific threats, not a vendor that makes those decisions for you.

Don't forget the other two NPRMs. The AI provision in the Reform rule gets the headlines, but the Whistleblower rule and the GENIUS Act rule both landed in the same nine-day window. Board updates in April and May should cover at minimum the Whistleblower provisions and the Reform NPRM. If you're in stablecoin or digital assets, the GENIUS Act NPRM requires a close read.

Prepare a new risk assessment methodology. The proposed rule's risk assessment language is vague, but the direction is clear: your risk assessment needs to inform your program, not just sit in a binder. Threats plus vulnerabilities equals risk. You don't need an expensive consulting firm for this. You need access to your data and people who understand it.
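As a back-of-the-envelope illustration of that arithmetic (every category and score below is invented; yours should come from your own data and your own people):

```python
# The rule's shorthand made concrete: combine a threat likelihood with a
# control vulnerability per category, then rank to direct program resources.
# All numbers here are invented for illustration.

threats = {            # how likely your institution is to face this threat, 0-1
    "structuring": 0.6,
    "elder_fraud": 0.4,
    "trade_based_laundering": 0.2,
}
vulnerabilities = {    # how exposed your current controls are to it, 0-1
    "structuring": 0.3,
    "elder_fraud": 0.7,
    "trade_based_laundering": 0.5,
}

# Threats plus vulnerabilities equals risk: rank categories by combined score.
risk = {cat: threats[cat] + vulnerabilities[cat] for cat in threats}
for cat, score in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cat}: {score:.2f}")
```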

The Signal Through the Noise

I build AI for compliance. I'm not going to pretend I'm a neutral observer here. But I've also spent enough time in this industry to know that regulatory language alone doesn't change programs. Practitioners change programs.

What FinCEN's proposed rule does is remove the single biggest objection compliance teams had to AI adoption: "What will the examiner say?" For years, that question had no good answer. Now it does. Not a perfect answer. Not a fully defined answer. But a clear directional signal that using AI responsibly is a positive factor, not a liability.

The practitioners who were already leaning into AI now have regulatory cover. The ones who were waiting for a signal just got one. And the ones who continue relying on purely manual processes will eventually need to explain, in an effectiveness-based framework, why they chose not to use tools that could have made their programs measurably better.

The fear of AI in compliance was always about the regulatory unknown. FinCEN just made it less unknown. The question now is what you do with that clarity.

Kunal Datta
Chief Product Officer, Unit21

Kunal Datta is the Chief Product Officer at Unit21. Prior to Unit21, he led the Product team for Checkout at Fast, and prior to that, led the Product teams responsible for automating aerial wildfire safety inspections at Pacific Gas & Electric.

He has a background leading Product teams using AI to automate processes at regulated entities, as well as financial products, machine learning products, web applications, mobile applications, hardware products, and data products. Kunal is a Fulbright Scholar and studied Civil and Environmental Engineering and Music Science Technology at Stanford University.

Learn more about Unit21
Unit21 is the leader in AI Risk Infrastructure, trusted by over 200 customers across 90 countries, including Sallie Mae, Chime, Intuit, and Green Dot. Our platform unifies fraud and AML with agentic AI that executes investigations end-to-end—gathering evidence, drafting narratives, and filing reports—so teams can scale safely without expanding headcount.