FinCEN's Proposed Rule 2026: What the Two-Prong Framework Actually Means for BSA Officers

Published May 12, 2026 · 6 min read
Gal Perelman, Product Marketing Lead, Unit21

For decades, BSA program evaluations have been primarily a paperwork exercise. If you had written policies, a designated BSA officer, a training schedule, and independent testing on the books, you could pass an exam, even if your program wasn't catching real financial crime.

FinCEN's proposed rule 2026 changes that. The Notice of Proposed Rulemaking (NPRM), published under Docket FINCEN-2026-0034, introduces what compliance practitioners are calling the "two-prong framework": a new evaluation structure that separates how your program is designed from how it actually performs. For BSA officers, this is the most significant shift in program evaluation methodology in years.

The comment period closes June 9, 2026. Here's what you need to understand before then.

The Shift: From Checking Boxes to Proving Results

Under the current BSA framework, the four pillars of a compliant program are well-known: a system of internal controls, independent testing, a designated BSA officer, and training. Examiners evaluate whether these components exist and whether they're documented.

The problem is that this approach rewards the appearance of compliance without necessarily measuring whether a program actually works. An institution can maintain pristine documentation while its detection rules haven't been tuned in three years and its SAR narratives are boilerplate.

FinCEN's proposed rule flips this. The NPRM introduces an evaluation model that asks two distinct questions: Is your program well-designed? And does it actually produce results?

How the Two-Prong Framework Works

The FinCEN proposed rule 2026 structures program evaluation into two separate assessments.

Prong 1: Program Design. This covers the structural elements: your written policies, risk assessment methodology, training plan, governance structure, and the overall architecture of your AML/CFT program. Think of this as the blueprint. Examiners evaluate whether the design is sound, risk-based, and aligned with the institution's actual risk profile.

Prong 2: Operational Execution. This is where the NPRM breaks new ground. Prong 2 evaluates whether your program delivers measurable outcomes: detection effectiveness, investigation quality, filing accuracy, and how well your operational controls match the risks identified in Prong 1. This is no longer about whether you have monitoring rules. It's about whether those rules are catching what they should.

The critical detail: examiners evaluate each prong separately. A well-designed program that has a single operational miss doesn't automatically trigger enforcement. Under the proposed rule, only "significant or systemic failures" in execution would rise to that level. This distinction matters enormously for institutions that are doing the work but aren't perfect.

The Buried Lede: Auditors Can No Longer Substitute Their Own Judgment

This may be the most consequential provision in the entire NPRM for working BSA officers, and it's getting far less attention than it deserves.

Under the proposed framework, if your institution's risk-based approach is documented and defensible, an examiner who would have made different choices cannot penalize you simply because they disagree. Your risk assessment drives your program. Your rules reflect your risk assessment. If the logic holds and the outcomes are reasonable, that's sufficient.

In practical terms: if your institution serves a particular customer base and you've documented why certain risk areas take priority over others based on your product mix and geography, an examiner can't substitute their own risk priorities for yours. Two institutions with different risk profiles can legitimately run very different programs, and both can be compliant.

This is meaningful protection for BSA officers who have long operated under the unwritten expectation that they must anticipate and mirror their examiner's preferences rather than build a program that reflects their institution's actual risks.

What "Demonstrate Effectiveness" Means in Practice

The FinCEN NPRM 2026 repeatedly uses the phrase "demonstrate effectiveness," and for BSA officers, the natural question is: demonstrate it how?

The answer is quantitative evidence. The proposed rule signals that examiners will look for measurable indicators that your program is working, not just that it exists. That means you need to be tracking and ready to present metrics like:

  • False positive rates and how they've changed over time as you've tuned detection rules
  • Investigation cycle times from alert generation to disposition
  • SAR quality indicators, including narrative completeness and consistency
  • Rule tuning history, showing that detection logic is actively maintained and adapted to emerging risks
  • Coverage alignment, demonstrating that your monitoring rules map to the risks identified in your risk assessment

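As an illustration of the kind of baseline tracking described above, the first two metrics can be computed from alert records in a few lines of Python. This is a minimal sketch, not anything prescribed by the NPRM; the field names and disposition labels are hypothetical stand-ins for whatever your case management system exports.

```python
from datetime import datetime

# Hypothetical alert records exported from a case management system.
# "disposition" is one of: "sar_filed", "escalated", "closed_no_action".
alerts = [
    {"opened": datetime(2026, 1, 3), "closed": datetime(2026, 1, 8), "disposition": "closed_no_action"},
    {"opened": datetime(2026, 1, 4), "closed": datetime(2026, 1, 6), "disposition": "sar_filed"},
    {"opened": datetime(2026, 1, 5), "closed": datetime(2026, 1, 12), "disposition": "closed_no_action"},
    {"opened": datetime(2026, 1, 7), "closed": datetime(2026, 1, 9), "disposition": "escalated"},
]

# False positive rate: share of alerts closed with no action taken.
false_positives = sum(a["disposition"] == "closed_no_action" for a in alerts)
fp_rate = false_positives / len(alerts)

# Investigation cycle time: mean days from alert generation to disposition.
cycle_days = [(a["closed"] - a["opened"]).days for a in alerts]
mean_cycle_days = sum(cycle_days) / len(cycle_days)

print(f"False positive rate: {fp_rate:.0%}")        # 50%
print(f"Mean cycle time: {mean_cycle_days:.1f} days")  # 4.0 days
```

Even a simple snapshot like this, recomputed monthly, gives you the trend line examiners would look for: a falling false positive rate after a tuning pass is direct evidence that detection logic is actively maintained.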
The institutions that will be best positioned under this framework are the ones that can tell a clear story: here's what our risk assessment says, here's how our program is designed to address those risks, and here's the evidence that the program is actually delivering.

What to Do Before June 9

The comment period deadline is June 9, 2026 (Docket: FINCEN-2026-0034). Whether or not the final rule changes from the proposed version, the direction is clear. Here's how to start preparing now.

Inventory your program against the two-prong split. Separate your design documentation (policies, risk assessment, governance) from your operational evidence (metrics, tuning logs, investigation outcomes). Most programs have these intermingled. Splitting them now will clarify where your gaps are.

Start collecting effectiveness metrics. If you aren't already tracking false positive rates, investigation times, and rule performance data, start now. You don't need a perfect dashboard on day one. You need a baseline you can show improvement against.

Pressure-test your risk assessment rationale. Under the new framework, your risk assessment isn't just a compliance document. It's the foundation that justifies every downstream decision in your program. Make sure you can articulate why your program is designed the way it is, and make sure the connection between your risk assessment and your detection rules is explicit and documented.
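One way to make that connection explicit and documented is to keep a machine-readable mapping from assessed risks to the detection rules that cover them, then flag any risk with no active rule behind it. A minimal sketch, using hypothetical risk and rule identifiers (nothing here comes from the proposed rule itself):

```python
# Hypothetical risk assessment entries: identifier -> assessed severity.
risks = {
    "R1-structuring": "high",
    "R2-wire-fraud": "medium",
    "R3-crypto-offramp": "high",
}

# Hypothetical detection rules, each declaring which risks it covers.
rules = [
    {"id": "rule_cash_velocity", "covers": ["R1-structuring"], "active": True},
    {"id": "rule_wire_patterns", "covers": ["R2-wire-fraud"], "active": True},
    {"id": "rule_crypto_exits", "covers": ["R3-crypto-offramp"], "active": False},
]

# Coverage check: every assessed risk should map to at least one active rule.
covered = {risk for rule in rules if rule["active"] for risk in rule["covers"]}
gaps = sorted(set(risks) - covered)

print("Uncovered risks:", gaps)  # ['R3-crypto-offramp']
```

Run as part of a periodic review, a check like this turns "our rules reflect our risk assessment" from an assertion into an artifact you can hand an examiner.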

Submit a comment. The comment period exists for a reason. If aspects of the proposed rule would create unintended consequences for your institution type or size, FinCEN needs to hear that. Comments submitted through the Federal Register under Docket FINCEN-2026-0034 are part of the public record that shapes the final rule.

Looking Ahead

The two-prong framework represents a fundamental shift in how BSA programs will be evaluated: from compliance theater to demonstrable effectiveness. For BSA officers who have built strong, risk-based programs, this is a welcome change. It means your work gets evaluated on its merits, not on whether it matches an examiner's subjective preferences.

For institutions still operating on autopilot, with untuned rules, boilerplate SARs, and risk assessments that haven't been updated in years, the message is clear: the paperwork alone won't be enough anymore.

The proposed rule also signals a broader modernization of BSA oversight. FinCEN's willingness to formally recognize innovative approaches to AML program effectiveness, including the use of AI and advanced analytics, suggests an examination framework that rewards institutions for investing in tools that deliver better outcomes, not just more documentation.

The compliance programs that will thrive under this framework are those built on infrastructure that gives BSA officers direct control over their detection logic, transparent evidence for every decision, and operational data to prove their programs work. Self-service rules, auditable alert-to-filing chains, and real-time performance analytics aren't nice-to-haves under the two-prong framework. They're how you demonstrate effectiveness.

Gal Perelman, Product Marketing Lead, Unit21

Gal Perelman is the Product Marketing Lead at Unit21, where she spearheads go-to-market strategies for AI-driven risk and compliance solutions. With over a decade of experience in the fintech and fraud sectors, she has led high-impact launches for products like Watchlist Screening and AI Rule Recommendations.

Previously, Gal held marketing leadership roles at Design Pickle, Sightfull, and Lusha. She holds a Master’s degree from American University and a Bachelor’s from UCLA, and is dedicated to helping banks and fintechs navigate complex regulatory landscapes through innovative technology.
