FinCEN Just Rewrote the Rules: Why Effectiveness Now Demands a Unified Platform

By Whitney Anderson

Yesterday, the Financial Crimes Enforcement Network effectively told the entire financial services industry that decades of compliance theater are over.

On April 7, 2026, FinCEN issued a Notice of Proposed Rulemaking that fundamentally restructures AML/CFT program requirements under the Bank Secrecy Act, superseding the 2024 proposed rule in its entirety. And it replaces the question every examiner has asked for thirty years, "Did you follow the checklist?" with a far more consequential one: "Is your program actually effective at identifying, preventing, and reporting financial crime?"

That single shift changes everything about the technology financial institutions need.

The End of Checklist Compliance

For as long as most compliance officers can remember, the operating model has been defensive. Build a program that satisfies the four pillars. Document your policies. Run your transaction monitoring. File your SARs. Survive your exam. Repeat.

If you checked the boxes, you were safe. It didn't matter much whether your program actually caught financial criminals. What mattered was that the program existed and could be demonstrated to an examiner. The NPRM dismantles that assumption.

The proposed rule explicitly recenters AML/CFT programs on effectiveness, defined as the actual identification, prevention, and reporting of money laundering, terrorist financing, and other financial crime. FinCEN draws a deliberate distinction between program establishment (design) and program maintenance (implementation), making clear that having a well-designed program on paper is necessary but nowhere near sufficient. The program must work in practice, continuously, adaptively.

FinCEN will evaluate not just whether you built the machine, but also whether it is running and whether it's catching what it's supposed to.

What the Rule Actually Says

The NPRM consolidates and replaces separate program rules for banks, casinos, and money services businesses into a single harmonized standard. The four core pillars remain: internal policies, procedures, and controls (now explicitly including risk assessment and ongoing customer due diligence); independent testing; a designated BSA/AML compliance officer based in the United States; and ongoing employee training. However, they operate under a fundamentally different enforcement philosophy, with three key provisions. 

First, the risk-based approach is now formalized and specific. Financial institutions must allocate more attention and resources to higher-risk customers and activities, and proportionally less to lower-risk ones. Risk assessments must cover products, services, distribution channels, customers, intermediaries, and geographic locations. And they must be "updated promptly upon significant changes." Not annually, not at the next exam cycle, but promptly.

While this approach sounds intuitive, most institutions today cannot operationalize a genuinely risk-based approach because their compliance infrastructure is fragmented. Their screening system doesn't talk to their transaction monitoring system. Their case management platform doesn't feed back into their risk assessment. They have no unified view of an entity across the compliance lifecycle. They can tell you whether a customer was screened at onboarding. They cannot tell you, in real time, how that customer's risk profile has evolved based on the totality of their screening results, transaction patterns, and investigative history.

You cannot allocate "more attention and resources toward higher-risk customers" if your systems don't share a unified customer view. Full stop.
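To make the unified-view requirement concrete, here is a minimal sketch of what a single entity record might look like when screening results, monitoring alerts, and investigative history all live on one profile. Everything here is illustrative: the class, field names, and scoring weights are hypothetical and not drawn from any FinCEN rule text or vendor specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a unified entity record. In a fragmented stack,
# these three signal streams live in three separate vendor systems; a
# unified platform keeps them on one record keyed by entity.
@dataclass
class EntityProfile:
    entity_id: str
    screening_hits: list = field(default_factory=list)      # sanctions / PEP / adverse media
    transaction_alerts: list = field(default_factory=list)  # behavioral monitoring
    case_history: list = field(default_factory=list)        # investigative outcomes

    def risk_score(self) -> float:
        """Synthesize one score from all signal streams (weights are illustrative)."""
        score = 0.0
        score += 30.0 * len(self.screening_hits)
        score += 10.0 * len(self.transaction_alerts)
        # Prior confirmed-suspicious cases weigh heaviest.
        score += 50.0 * sum(1 for c in self.case_history if c.get("outcome") == "sar_filed")
        return min(score, 100.0)

# Usage: one query answers "how risky is this customer right now?"
profile = EntityProfile("cust-001")
profile.screening_hits.append({"type": "adverse_media"})
profile.transaction_alerts.append({"rule": "structuring", "amount": 9800})
print(profile.risk_score())  # 40.0
```

The point of the sketch is not the scoring formula, which any real program would replace with calibrated models, but the data shape: every downstream decision can read one current, entity-level risk picture instead of reconciling three exports.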

Second, FinCEN has explicitly incentivized the use of artificial intelligence. The NPRM states that in determining enforcement and supervisory actions, the Director will consider whether a financial institution is "employing innovative tools such as artificial intelligence" that demonstrate program effectiveness. This is not a mandate. It is something potentially more powerful: a positive factor in the exercise of enforcement discretion.

FinCEN is telling institutions, on the record, that deploying AI effectively will be considered a mitigating factor in enforcement decisions. This is the first time a federal financial regulator has offered what amounts to an explicit enforcement incentive for AI adoption in compliance.

The operative phrase is "demonstrate program effectiveness." FinCEN isn't rewarding AI for its own sake. It's rewarding AI that makes programs better at catching financial crime. Rules-based alert engines that generate thousands of false positives don't demonstrate effectiveness; they demonstrate activity. 

Third, the enforcement threshold has been meaningfully raised. Under the proposed rule, if a financial institution has established its AML/CFT program (meaning the design meets the required standard), FinCEN would generally not take enforcement action. Significant supervisory action would require a "significant or systemic failure to maintain" the program.

This is a profound rebalancing. It tells institutions: if you build the program right and operate it in good faith, we won't penalize you for imperfections. We're going to focus enforcement on institutions that either don't build the program at all or allow systemic failures in how they run it.

Combined with the 30-day advance notice requirement for federal banking supervisors to notify FinCEN before taking significant AML/CFT supervisory actions, this creates a more predictable, less punitive enforcement environment, designed to encourage investment and innovation rather than defensive minimalism.

Why Point Solutions Can't Deliver Effectiveness

The compliance technology stack at most financial institutions was built for the checklist era, often a collection of point solutions acquired to satisfy specific regulatory requirements. One vendor for sanctions screening. Another for transaction monitoring. A third for case management. A fourth for SAR filing. Maybe a fifth for customer risk rating. Each system checks its respective box.

In a checklist world, this architecture works. Each tool can be validated independently, produces its own audit trail, and is reviewed separately by examiners. The fragmentation is an inconvenience rather than a disqualifier. Under an effectiveness standard, it is precisely the problem.

Effectiveness means catching actual financial crime, following the thread from an initial screening hit through behavioral monitoring, through investigation, and through reporting. That thread doesn't respect vendor boundaries. A sanctions screening alert on an entity is meaningless without the context of that entity's transaction history. A transaction monitoring alert is meaningless without the context of that entity's screening results and risk profile. A case investigation is meaningless if the investigator can't see the full picture.

When FinCEN says institutions must direct "more attention and resources toward higher-risk customers," it's describing an entity-level intelligence requirement. Higher-risk customers aren't identified by any single system. They're identified by the convergence of signals across screening, monitoring, behavioral analysis, and investigative history. A customer might clear initial screening but develop a concerning transaction pattern. Another might trigger a transaction alert but have a well-documented risk profile that explains the activity. The risk-based approach demands that institutions synthesize these signals, which means their systems must be capable of synthesis.

Point solutions, by design, are not.

The Architecture Effectiveness Demands

If you accept that effectiveness requires connected intelligence across the compliance lifecycle, then the technology architecture becomes clear. Financial institutions need platforms, not toolkits: systems that unify the following pillars.

1. Entity screening as the front door: determining who you're doing business with, not just at onboarding, but continuously. Sanctions, PEPs, adverse media, beneficial ownership structures, and corporate hierarchies must be evaluated before a relationship begins and reassessed as conditions change. Under the NPRM's risk-based framework, this is where the institution's first-line risk judgments are made. Screening results must persist as part of the entity's living risk profile, informing every downstream decision.

2. Entity monitoring as the connective tissue: tracking how an entity's risk posture evolves over the life of the relationship. New sanctions designations, changes in ownership structure, geographic expansion, adverse media triggers, and regulatory actions against related parties all can alter the risk calculus. Entity monitoring ensures that the risk assessment the NPRM demands is genuinely ongoing rather than a snapshot frozen at onboarding. This is the layer that makes "updated promptly upon significant changes" operationally possible.

3. Transaction monitoring as the behavioral layer: detecting suspicious patterns in how entities actually use the financial system. More critically, it must operate with full awareness of the entity's risk context. A $9,800 wire from a newly onboarded entity in a high-risk jurisdiction with recent adverse media hits is a fundamentally different signal than the same transaction from a ten-year client with a clean profile. Effective monitoring (the kind the NPRM now demands) requires that the system automatically know the difference, without an analyst having to manually pull context from three separate tools.

4. Compliance and reporting as outcomes: case management, SAR filing, and regulatory reporting that draw on the unified record built by the preceding layers. When an institution reports suspicious activity, it should report with the full context of what it knows about that entity, including screening history, monitoring alerts, transactional patterns, and automatically assembled investigative findings, replacing manual reconstruction by an analyst piecing together exports from disconnected systems.

5. AI agents as the analyst's force multiplier: embedded throughout the workflow as both an intelligence layer and as practical assistants to the human investigators and compliance officers who do this work every day. AI agents that triage the alert queue by prioritizing the 12 genuine threats in a sea of 10,000 false positives so analysts spend their time actually investigating. Agents that build cases automatically by pulling entity histories, assembling transaction timelines, cross-referencing screening results, and drafting narrative summaries that an investigator can review and refine rather than create from scratch. Agents that prepare regulatory filings, pre-populating SAR narratives with the relevant facts, flagging incomplete fields, and ensuring consistency across submissions. And agents that learn from every investigative outcome, feeding resolved cases back into the detection models so the system gets smarter with every decision. This is what FinCEN means when it cites "innovative tools such as artificial intelligence" as a positive factor. Instead of AI for its own sake, it’s AI that demonstrably makes human compliance professionals more effective. 
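The context-aware triage described in pillars 3 and 5 can be sketched in a few lines. This is an illustrative toy, not any vendor's detection logic: the scoring function, field names, and weights are hypothetical, and a production system would use trained models rather than hand-set thresholds. The key idea is that the same $9,800 wire ranks very differently depending on the entity context attached to the alert.

```python
# Illustrative sketch of context-aware alert triage. The same transaction
# scores differently depending on the entity context riding with the alert.

def triage_score(alert: dict) -> float:
    """Rank an alert using entity context, not just the transaction itself."""
    score = 0.0
    if alert["amount"] >= 9000:            # near-reporting-threshold amount
        score += 20.0
    ctx = alert["entity_context"]
    if ctx["tenure_days"] < 90:            # newly onboarded relationship
        score += 30.0
    if ctx["high_risk_jurisdiction"]:
        score += 25.0
    score += 15.0 * ctx["adverse_media_hits"]
    return score

def triage(queue: list) -> list:
    """Surface the highest-risk alerts first so analysts investigate, not sift."""
    return sorted(queue, key=triage_score, reverse=True)

# The same $9,800 wire from two very different entities:
new_entity = {"amount": 9800, "entity_context":
              {"tenure_days": 14, "high_risk_jurisdiction": True, "adverse_media_hits": 2}}
tenured_client = {"amount": 9800, "entity_context":
                  {"tenure_days": 3650, "high_risk_jurisdiction": False, "adverse_media_hits": 0}}
ranked = triage([tenured_client, new_entity])
print(triage_score(new_entity), triage_score(tenured_client))  # 105.0 20.0
```

In a point-solution stack, the `entity_context` dictionary simply does not exist at scoring time; an analyst reconstructs it by hand, alert by alert. Building it automatically is what moves a queue of 10,000 alerts toward the 12 that matter.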

Each of those capabilities exists in isolation at most institutions today. The NPRM doesn't ask whether you have them; it asks whether they work together. You cannot demonstrate effectiveness with disconnected tools any more than you can conduct an orchestra with musicians who can't hear each other.

The Adaptive Imperative

The NPRM's requirement that risk assessments be "updated promptly upon significant changes" introduces another dimension that most current compliance architectures cannot satisfy.

Consider what constitutes a significant change. A new sanctions designation. A shift in typology patterns. A change in a customer's business model. Entry into a new market or product line. A geopolitical event that alters country risk. Each of these should promptly and automatically trigger a reassessment.

Static, annually updated risk assessments have been the industry standard for decades. They've survived because the regulatory framework tolerated them. This NPRM does not. "Promptly upon significant changes" means the risk assessment must be a living process rather than a periodic document. It means the technology underlying that process must be able to incorporate new information in near-real time and propagate its implications across the program.
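"Updated promptly upon significant changes" is, at bottom, an event-driven requirement. The sketch below shows the shape of that process under stated assumptions: the event type names, the `RiskAssessment` class, and the handler are all hypothetical, standing in for whatever event taxonomy and reassessment pipeline a real program defines.

```python
# Hypothetical sketch of event-driven reassessment: significant-change
# events trigger an update as they arrive, instead of waiting for an
# annual review cycle. Event type names are illustrative.

SIGNIFICANT_EVENTS = {
    "new_sanctions_designation",
    "ownership_change",
    "new_market_entry",
    "country_risk_change",
}

class RiskAssessment:
    def __init__(self):
        self.version = 0
        self.last_updated = None

    def reassess(self, event: dict) -> None:
        """Incorporate the new information and propagate it across the program."""
        self.version += 1
        self.last_updated = event["timestamp"]
        # ...re-score affected entities, re-tune monitoring thresholds, etc.

def on_event(assessment: RiskAssessment, event: dict) -> bool:
    """Return True if the event triggered a prompt reassessment."""
    if event["type"] in SIGNIFICANT_EVENTS:
        assessment.reassess(event)
        return True
    return False

assessment = RiskAssessment()
on_event(assessment, {"type": "new_sanctions_designation",
                      "timestamp": "2026-04-08T09:00Z"})
print(assessment.version)  # 1
```

The versioned, timestamped assessment is the demonstrable part: it gives an examiner a record showing that each significant change produced a reassessment, and when.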

This is where the distinction between establishment and maintenance becomes operationally critical. Establishing a program means designing it to be adaptive. Maintaining it means actually adapting continuously and demonstrably. An institution that designs an adaptive program but runs it statically has established itself but failed to maintain it. Under this rule, that's where enforcement risk lives.

The AI Incentive Is Bigger Than It Looks

The initial commentary on the NPRM has largely focused on the compliance obligations, but FinCEN's explicit recognition of AI as a positive enforcement factor is the more significant development.

Most regulatory guidance on AI in financial services has been cautionary, focusing on model risk management, algorithmic bias, and explainability requirements. The interagency guidance has emphasized guardrails and governance while stopping short of actively encouraging adoption. This NPRM breaks from that pattern by offering something affirmative: use AI effectively, and it counts in your favor.

Notably, FinCEN also acknowledged industry concerns about applying traditional model risk management principles to AML/CFT programs and is soliciting feedback on the topic. This signals an awareness that the existing MRM framework, designed primarily for credit and market risk models, may not translate cleanly to compliance AI, where the objectives, data structures, and validation challenges are fundamentally different.

For institutions that have been hesitant to deploy AI for compliance purposes due to regulatory uncertainty, this NPRM substantially changes the calculus. The risk of under-deploying AI is now as real as the risk of deploying it poorly, and that has significant implications for how institutions evaluate their compliance technology partners. Vendors that can demonstrate AI-driven effectiveness (measurable improvements in detection, investigative efficiency, and reporting quality) rather than mere AI features will have a material advantage. Vendors offering rules-based engines labeled "AI" will be left behind.

What Happens Next

The NPRM opens a comment period closing June 9, 2026, and the final rule will reflect industry input, but the direction is unmistakable. FinCEN is moving the regulatory framework from process compliance to outcome effectiveness, from checklist validation to risk-based intelligence, from technology agnosticism to explicit AI encouragement.

Financial institutions that move now aren't just positioning for the final rule; they're building the capability the rule is designed to incentivize. Those that invest in unified platforms, entity-level intelligence, and embedded AI will be better positioned to do what the rule actually asks: catch financial crime.

---

R. Whitney Anderson is CEO of FraudNet, a unified platform for entity screening, entity monitoring, transaction monitoring, compliance reporting, and AI-driven financial crime detection.
