
I joined Unit21 as the company's first hire in 2019, fresh out of four years in tech, stepping into my first real software engineering job. We were three people with an idea: give compliance teams the ability to build their own detection rules without needing a team of SQL developers. No more waiting on vendors. No more static detection logic that couldn't adapt. Just a self-service, no-code rules engine for AML.
At the time, that felt revolutionary. Financial institutions had been operating on the same rigid model for decades: hand over your data to a legacy provider, wait weeks for rule changes, and accept whatever false positive rate they handed you. We broke that model. The market responded.
But I'm writing this from a very different vantage point than 2019. The financial crime landscape has changed dramatically and not in our favor.
I recently sat down with Jason Mikula on his Fintech Business Weekly podcast to unpack what an AI-first approach to compliance really looks like. Below are some of my key takeaways, but tune into the full episode as well.
Financial crime is not holding pace with the growth of financial services. It is outpacing it. Fraud attacks have surged. Impersonation scams have become terrifyingly sophisticated. I was at a Bank of America conference recently where even the team that built their own alert system couldn't distinguish which messages were from the bank versus a fraudster. Deepfakes have eliminated the last line of "just hop on a video call to confirm." A CFO wired $20 million to a fraudster last year after a deepfake of the CEO passed every human check: voice, image, talk track. All of it.
Meanwhile, compliance teams are being asked to operate with constrained headcount against a threat landscape that can fail 99 times out of 100 and still generate meaningful ROI on that one success. We're playing an asymmetric game. The fraudster gets one win; we have to be right every time.
The traditional answer to this problem has always been the same: hire more people, or outsource to a BPO. And yes, companies like Genpact and AML RightSource serve a real need right now. But hiring is slow, outsourcing is expensive, and this approach fundamentally cannot scale. When you're facing a 90% false positive rate, meaning nine out of ten alerts your analysts review are noise, and backlogs stretching weeks, you're not just dealing with an efficiency problem. You're creating real harm. That alert sitting in a queue for 30 days might be the only signal that a human being is being trafficked. That's another month of harm because we couldn't get to it.
When we launched our rules engine, the unlock was control. Compliance teams could finally build and test detection logic themselves, without writing SQL or waiting on a vendor. That was exactly the right product for 2019.
But the world has changed. The problem isn't that compliance teams lack control anymore. It's that the volume of work has outpaced what any team of humans can meaningfully execute, no matter how much control they have. You can give someone a faster car, but at some point the road itself is the bottleneck.
The next unlock isn't tools that help people work faster. It's tools that do the work.
I have more conversations about AI in financial crime than almost anyone. My calendar regularly runs 50 meetings in four days. And I've watched the market converge around three meaningfully different approaches, all of which get labeled "AI" in the marketing materials.
The first is the chat interface model. Think of a ChatGPT-style assistant embedded into your compliance platform with access to your alert data. It's genuinely useful for one-off questions. But it fundamentally relies on a human to drive every interaction, which means it doesn't scale. There's also a repeatability problem: chatbots are designed for dynamic, open-ended conversations. In financial crime compliance, you don't want dynamic. You need consistent, auditable decisions that a regulator can reconstruct three years later. Being accurate 80% of the time is not an acceptable outcome in this space.
The second is what I think of as the LLM wrapper: a newer provider making external LLM calls on top of someone else's infrastructure. They'll call Claude or GPT, return a paragraph, and frame it as an AI agent. The output is only as good as the data passed to it via API, and the orchestration work still largely falls on the buyer. It makes for an impressive demo. It's not an infrastructure decision.
The third approach is what we've built at Unit21: long-running, background agents that are pre-configured to your standard operating procedures, assigned to your queues, and automatically process alerts as they arrive. When an alert comes in, an agent picks it up, pulls transaction history, checks watchlists, analyzes counterparties, and writes a complete investigation narrative before any human has touched it. The analyst's job becomes quality review, not primary research.
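For the technically curious, here's a minimal sketch of what that background-agent loop looks like conceptually. Every name in it (fetch_transactions, screen_watchlists, and so on) is a hypothetical stand-in for internal services, not our actual API:

```python
# Rough sketch of a long-running background agent; all names are
# hypothetical stand-ins, not Unit21's production code.
import time
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    entity_id: str
    counterparties: list

def fetch_transactions(entity_id):
    return []  # stand-in: would query the transaction store

def screen_watchlists(entity_id):
    return []  # stand-in: would call the screening provider

def analyze_counterparty(party):
    return f"{party}: no derogatory findings"  # stand-in

def write_narrative(alert, history, hits, findings):
    return f"Reviewed {len(history)} transactions, {len(hits)} watchlist hits."

def investigate(alert: Alert) -> dict:
    """Run the full investigation before any human touches the alert."""
    history = fetch_transactions(alert.entity_id)
    hits = screen_watchlists(alert.entity_id)
    findings = [analyze_counterparty(p) for p in alert.counterparties]
    return {
        "alert_id": alert.alert_id,
        "narrative": write_narrative(alert, history, hits, findings),
        "recommendation": "escalate" if hits else "close_as_false_positive",
    }

def run_agent(queue):
    """Agent assigned to a queue: picks up alerts as they arrive."""
    while True:
        alert = queue.next_alert()
        if alert is None:
            time.sleep(5)  # idle until new work lands in the queue
            continue
        queue.submit_for_review(investigate(alert))  # analyst does QA, not research
```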
This is the operational difference between a tool that makes the job easier and a system that actually does the job.
Our AI product strategy is built around two problems: investigation and detection. Both matter. Neither is sufficient alone.
On the investigation side, the efficiency gains are real and substantial. Traditional AML alert reviews take 30 to 60 minutes of analyst time. Our agents complete them in under five minutes. But what excites me more is that the reviews are actually more comprehensive, not just faster. When an analyst manually clears a sanctions alert, the write-up is often two sentences: "not a clear match, closing as false positive." Our agent documents the name reviewed, the name on the list, every address compared, the provider's match score, the relevant list details, and a full rationale for the conclusion. The agent does all of this in parallel, in seconds, and creates a defensibility record that a human write-up almost never achieves.
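To make that concrete, here is roughly the shape of the record an agent produces for a single sanctions review. The schema and every value in it are illustrative, not our actual data model:

```python
# Illustrative sanctions-review record; field names and values are
# hypothetical, not Unit21's production schema.
from dataclasses import dataclass

@dataclass
class SanctionsReview:
    name_reviewed: str           # customer name the alert fired on
    list_name: str               # which watchlist produced the hit
    listed_name: str             # the name as it appears on the list
    addresses_compared: list     # every (customer, listed) address pair checked
    provider_match_score: float  # the screening provider's similarity score
    list_details: str            # relevant details from the list entry
    rationale: str               # full reasoning behind the disposition
    disposition: str             # "false_positive" or "escalate"

review = SanctionsReview(
    name_reviewed="Jon A. Smith",
    list_name="OFAC SDN",
    listed_name="John Smith",
    addresses_compared=[("12 Main St, Austin, TX", "Tehran, Iran")],
    provider_match_score=0.71,
    list_details="Entry designated 2019; no US addresses on record",
    rationale="Name similarity only: date of birth, nationality, and every "
              "address compared all differ. No corroborating identifiers.",
    disposition="false_positive",
)
```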
Scale that to a structuring case with five counterparties, where a truly comprehensive investigation would require 10 minutes on each (nearly an hour of work that simply doesn't get done), and you start to see the ceiling the current model has always had. The agent runs all five in parallel. It's genuinely a have-your-cake-and-eat-it-too moment: more ROI, faster resolution, and simultaneously a more thorough investigation that catches more of the real crime.
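For readers who like to see the mechanics, fanning those five reviews out concurrently looks something like the sketch below. The function names are hypothetical, and the sleep stands in for real I/O-bound record pulls and screening calls:

```python
# Sketch of parallel counterparty reviews; hypothetical names throughout.
import asyncio

async def review_counterparty(party: str) -> str:
    await asyncio.sleep(0.1)  # stands in for record pulls and screening calls
    return f"{party}: reviewed, no derogatory findings"

async def review_all(counterparties: list[str]) -> list[str]:
    # Five sequential 10-minute reviews become one concurrent pass that
    # finishes in roughly the time of the slowest lookup.
    return list(await asyncio.gather(
        *(review_counterparty(p) for p in counterparties)))

findings = asyncio.run(review_all(
    ["Acme LLC", "Beta Corp", "Gamma Ltd", "Delta Inc", "Epsilon Co"]))
```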
On the detection side, the unlock is automating something that currently happens once or twice a year at best: rule tuning. Right now, most organizations revisit their detection rules quarterly if they're disciplined, annually if they're not. It's a painful, weeks-long process: pulling alert data, manually evaluating false positive patterns, identifying what looks like true risk versus noise for your specific customer population.
We've replaced this with a continuous background process. As our agents complete reviews, they build up pattern recognition of what constitutes a false positive versus a real escalation for your organization. That same logic feeds daily automated backtesting. The system surfaces recommendations without anyone having to ask: "this tuning change would eliminate 20% of your false positives with zero missed escalations. Do you want to deploy it?" Detection logic that used to improve twice a year now improves every day.
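Under the hood, the daily pass is conceptually a replay: take a candidate tuning change, run it against alerts the agents have already dispositioned, and only surface it if it cuts noise without suppressing a single real escalation. A minimal sketch, with made-up names and a made-up 20% threshold:

```python
# Sketch of a daily backtesting pass; names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PastAlert:
    features: dict
    disposition: str  # "false_positive" or "escalated", from completed reviews

@dataclass
class CandidateRule:
    name: str
    fires: Callable[[PastAlert], bool]  # would the tuned rule still alert?

def backtest(rule: CandidateRule, history: list[PastAlert]):
    suppressed = [a for a in history if not rule.fires(a)]
    missed = sum(1 for a in suppressed if a.disposition == "escalated")
    fp_cut = sum(1 for a in suppressed if a.disposition == "false_positive")
    return missed, fp_cut

def daily_tuning_pass(candidates: list[CandidateRule], history: list[PastAlert]):
    fp_total = sum(1 for a in history if a.disposition == "false_positive")
    for rule in candidates:
        missed, fp_cut = backtest(rule, history)
        if missed == 0 and fp_cut >= 0.20 * max(fp_total, 1):
            print(f"{rule.name}: cuts {fp_cut}/{fp_total} false positives, "
                  f"zero missed escalations. Deploy?")
```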
Before we shipped a single AI agent to a production environment, we demoed our approach to the OCC and state regulators in California. We maintain a quarterly standing sync with FinCEN specifically around AI products. I was the person who launched most of this work, and knowing the regulatory environment well meant we could anticipate the objections before they came.
What regulators care about, distilled to its essence: what is your human-in-the-loop policy, what does your audit trail look like, and how is customer data being handled?
The human-in-the-loop question is simpler than people fear. Most organizations, even the most tech-forward fintechs, aren't ready to remove humans from every alert decision, and they don't need to be. The architecture that makes sense for most companies today is a QA model: AI handles the investigation and writes the recommendation, a human reviews and approves. That's defensible, it's scalable, and it's what we've built toward.
The audit trail question is actually where AI creates a structural advantage over human-only processes. An agent produces a complete trace of everything it accessed, every step it took, every source it cited. That's a more complete record than most human investigators generate. When a regulator asks you to reconstruct a decision three years later, that matters enormously.
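As an illustration (this is a made-up schema, not our actual trace format), the kind of record an agent emits looks like this:

```python
# Hypothetical audit-trail entries: every step, every source, every
# parameter, so a decision can be reconstructed years later.
trace = [
    {"step": 1, "action": "fetch_transactions", "source": "txn-store",
     "params": {"entity_id": "ent_123", "window_days": 90},
     "timestamp": "2026-01-12T09:14:03Z"},
    {"step": 2, "action": "screen_watchlists", "source": "screening-provider",
     "result": {"matches": 1, "top_score": 0.71},
     "timestamp": "2026-01-12T09:14:05Z"},
    {"step": 3, "action": "write_narrative",
     "cited_sources": ["txn-store", "screening-provider"],
     "timestamp": "2026-01-12T09:14:09Z"},
]
```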
On data handling: we run our entire LLM stack inside AWS. Customer data never leaves that environment. We don't use it for model training. For the InfoSec and IT teams that are often the real gatekeepers for AI adoption in regulated industries (not the compliance leaders, not the executives), this is usually the question that has to be answered first. We've also built to the EU AI Act, which comes into full effect in August 2026 and requires (among other things) that no AI agent makes a final compliance decision without human review and that organizations have policies addressing model bias.
I'm also proud that we're the only PCI-compliant AI agent in the market. That didn't happen by accident. It happened because we had clients for whom it was non-negotiable, and we built to that standard. That's what "regulatory ready" actually means: not checking a box, but anticipating what the most demanding environment in your customer base will require.
The near-term trajectory is clear: more fintechs moving quickly, more traditional financial institutions making 2026 the year they commit to AI in compliance. The tier-one banks I'm in conversations with have been in planning discussions for over a year; we're starting to get to actual pilots. Financial institutions that are still navigating the on-prem to cloud transition are simultaneously being asked to figure out AI. That's a real challenge, but it's also a forcing function.
The bigger picture is what truly excites me. We started with no-code rules in 2019 because the unlock was control: giving compliance teams the ability to act without depending on engineering. We're now in the era of AI agents because the unlock is scale: doing the work, not just supporting it. The next era, already beginning to take shape, is what I think of as an observability model: systems that continuously learn and self-correct, where the human's primary job is to instruct and evaluate, not to execute at volume.
When I think about what's possible, I genuinely believe we're at the first point in the industry's history where making a meaningful dent in financial crime is achievable. Not because any single company has solved it (the competition in this space is healthy, and it pushes all of us) but because the technology layer has finally caught up with the scale of the problem.
Financial crime is outpacing the economy. The old answer was more people. The new answer is systems that scale the best people we already have.
That's what brought me back to building.
For more content on our strategy, read my previous post, Why We Rebuilt Everything Around AI Agents. You can also explore our core offerings on our AML solutions page and read Trisha's recent post: AI in financial crime prevention: why it's more than just a checkbox.

Tyler Allen is the COO of Unit21 and was the company's first hire, writing some of the first lines of code seven years ago. He is a driving force behind Unit21's vision as the leader in AI risk infrastructure, having led the AI team before becoming COO. A deep technical leader, Tyler recently returned to the codebase to personally build AI agent configurations, pairing his technical expertise with seven years of experience observing how compliance teams operate.