
For years, financial crime technology has been built to improve how humans do investigations. Each new generation of tools promised better versions of the same thing: stronger rule engines, faster case management systems, or cleaner dashboards for analysts. These tools helped teams organize work, but they didn’t fundamentally change how investigations happened.
Then AI arrived. The industry did what it often does with major shifts: it turned something transformative into a checkbox. Vendors started adding AI features to product pages, and buyers began asking a simple question: “Do you have AI?” The problem is that this question misses the real shift happening underneath.
Today, almost every financial crime platform claims to include AI. Some tools summarize cases. Others suggest dispositions or help rewrite investigation narratives. These capabilities can make analysts more efficient, but they don’t fundamentally change the system.
From what we're seeing among today's most innovative risk and compliance teams, the shift is much bigger. AI is beginning to change how financial crime operations are structured. When AI moves beyond assisting analysts and starts performing parts of the investigation itself, it stops being a feature and becomes part of the operating model.
Most compliance and fraud organizations still run investigations using the same basic structure that has been in place for years: detection systems generate alerts, those alerts are routed into a queue, and human analysts pick them up one by one, investigate manually, and document their findings.
When alert volumes increase, organizations typically respond by hiring more analysts, expanding offshore teams, or relying on BPO providers. Over time, this creates operational strain and growing alert backlogs that make it harder to focus on real risk.
Many AI tools today still assume that humans are responsible for the investigation. They summarize data, provide suggestions, or automate small pieces of the workflow. Those improvements matter, but they do not change the structure of the work.
The transformation occurs when AI begins to perform the fraud investigation itself. Instead of assisting an analyst, the system can take an alert and execute the investigative process—gathering context, reviewing transactions, checking signals, and assembling findings before presenting the results to a human reviewer.
At that point, AI is no longer sitting alongside the workflow; it is the workflow.
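To make the distinction concrete, the steps described above (gather context, review transactions, check signals, assemble findings, hand off to a reviewer) can be sketched as a pipeline. This is a toy illustration only; every name here (`Alert`, `Finding`, `investigate`, the $10,000 threshold) is hypothetical and not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    account_id: str
    amount: float

@dataclass
class Finding:
    alert_id: str
    summary: str
    risk_signals: list = field(default_factory=list)
    recommended_disposition: str = "needs_review"

def gather_context(alert):
    # A real system would pull KYC data, account history, prior alerts, etc.
    return {"account_id": alert.account_id, "prior_alerts": 0}

def review_transactions(alert, context):
    # Placeholder heuristic: treat large transfers as a risk signal.
    signals = []
    if alert.amount > 10_000:
        signals.append("large_transfer")
    return signals

def investigate(alert):
    """Run the investigative steps end to end, then hand off to a human."""
    context = gather_context(alert)
    signals = review_transactions(alert, context)
    disposition = "escalate" if signals else "close_as_routine"
    return Finding(alert.alert_id,
                   f"{len(signals)} risk signal(s) found",
                   signals, disposition)

finding = investigate(Alert("A-1", "ACC-9", 25_000.0))
print(finding.recommended_disposition)  # escalate
```

The key point is the last step: the system produces a structured finding with a recommended disposition, and a human reviews the output rather than performing each step by hand.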
One way to think about this shift is through the idea of digital workers. An agentic worker is an AI system capable of performing operational tasks that previously required human analysts.
Instead of simply helping analysts work faster, agentic workers can execute investigations at scale. They can review alerts continuously and in parallel with analysts, collect context, analyze patterns, and produce structured findings.
For organizations, this changes the scaling model entirely. Instead of adding more headcount every time volumes grow, teams can deploy an agentic workforce to handle routine investigations while analysts focus on the hardest cases.
The rise of an agentic workforce does not eliminate the role of analysts. Instead, it changes how analysts spend their time. The goal of risk and compliance teams is to identify and prevent financial crime, and agentic workers drastically improve that ability by removing the busywork. Much of the operational workload in financial crime investigations involves repetitive tasks: reviewing alerts, gathering context, documenting findings, and closing routine cases. These tasks are exactly where AI performs well.
As those responsibilities shift to agentic workers, analysts can focus on the work that benefits most from human judgment, including complex investigations, strategy decisions, and improving detection approaches.
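The division of labor described above amounts to a routing decision: routine alerts go to agentic workers (whose findings a human still reviews), while complex or high-risk alerts go straight to analysts. A minimal sketch, with entirely hypothetical alert types and thresholds:

```python
# Hypothetical routing rule for illustration only; real triage logic
# would be far richer and governed by institution-specific policy.
ROUTINE_TYPES = {"structuring", "velocity", "dormant_account"}

def route(alert_type: str, risk_score: float) -> str:
    if alert_type in ROUTINE_TYPES and risk_score < 0.8:
        return "agentic_worker"   # agent investigates; human reviews findings
    return "human_analyst"        # complex or high-risk: analyst-led

print(route("velocity", 0.3))   # agentic_worker
print(route("velocity", 0.95))  # human_analyst
```

The design choice worth noting is that nothing is fully unattended: the agentic path still ends in human review, which is what preserves governance and auditability.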
One of the clearest signals that AI is becoming more than a feature is how platforms are starting to price these capabilities. Traditional financial crime software typically follows a SaaS pricing model (per-seat, per-license, or per-user tier).
When AI systems begin performing investigative work directly, pricing often shifts toward operational outcomes. Instead of paying for software access, institutions start paying for work completed.
This can include metrics such as the number of alerts investigated or cases resolved, rather than the number of seats licensed.
The shift matters because it reflects what customers are actually buying: operational capacity.
One thing I hear constantly from buyers is how much noise there is in the market. Almost every vendor now claims to offer “agentic AI,” but those claims often mean very different things.
It takes only minutes to add the words “AI-powered” to a product page. Building a system that can reliably execute investigations in production, while maintaining governance, auditability, and regulatory confidence, takes far longer.
For buyers evaluating solutions, the most useful question is simple: does the AI actually do the work? A real operational AI system should be able to demonstrate that it executes investigations end to end in production, with the governance and auditability to back it up.
If AI is only summarizing what an analyst already sees, it is a feature. If it is executing the investigation, it represents something fundamentally different.
The previous generation of financial crime technology focused on organizing investigations. Case management platforms helped teams centralize workflows and manage regulatory reporting.
The next generation will go further. Instead of simply managing investigations, platforms will increasingly execute them. This shift, from human-driven operations to agentic workflows with human oversight, changes how compliance teams scale, how vendors are evaluated, and what “best-in-class” technology actually means.
Over the last year, I’ve watched the industry focus on one question: “Does the platform have AI?” That question is quickly becoming outdated.
AI is not just another feature to check off in a product comparison; it marks the beginning of a new operating model for financial crime prevention. The teams that recognize this shift early will be best prepared for what comes next.
If you're exploring how agentic workers could fit into your investigation workflows, schedule a demo today to see how our AI Agents review alerts, automate investigations, and help your team scale without expanding analyst headcount.

Trisha Kothari is the co-founder and CEO of Unit21, a solution that proactively mitigates risks tied to money laundering, fraud, and other illicit activities. Trisha is driven by a powerful mission to empower the fight against financial crimes. Her professional journey, deeply rooted in engineering and product management, includes significant roles at companies such as Google, LinkedIn, and Affirm. During her tenure as an early engineer and product manager at Affirm, Trisha gained firsthand insight into the gross inefficiencies and siloed ways in which AML and fraud teams operated. This experience served as the catalyst for the innovative AML and fraud solutions that Unit21 offers today.
After leaving Affirm in 2018, Trisha joined South Park Commons, a community of builders, tinkerers, and domain experts. There, she met her co-founder and began tinkering with the concept of Unit21. Follow Trisha on LinkedIn, where she usually discusses new regulatory changes to be aware of, reacts to industry trends, and shares Unit21 product enhancements.