
Since the earliest days of fraud detection, one debate has never gone away: Do you trust the models—or the rules?
For years, teams have lived between two poles: the automation of machine learning scores and the precision of human-written rules. Both deliver value. Both have limits.
At Unit21, we’ve found that the real power comes from a third path—blending the intuition of rules with the reasoning of large language models (LLMs) to get the best of both worlds.
Machine learning (ML) scores have long been the default answer for scale.
Why teams love them:
But here’s the catch:
Rules are the original intuition engine. They turn human expertise into crisp logic: “If X happens and Y is true, trigger an alert.”
Why teams love rules:
Why they’re hard:
Rules give you total control, but they need constant care to stay sharp.
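To make the "If X happens and Y is true, trigger an alert" shape concrete, here is a minimal sketch of such a rule in Python. The transaction fields and the $10,000 threshold are illustrative assumptions, not Unit21's actual rule schema.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    # Hypothetical fields, chosen for illustration only.
    amount: float
    country: str
    is_new_account: bool

def high_risk_rule(txn: Transaction) -> bool:
    """If the amount exceeds $10,000 (X) and the account is new (Y),
    trigger an alert."""
    return txn.amount > 10_000 and txn.is_new_account

# A transaction that trips the rule:
alert = high_risk_rule(Transaction(amount=15_000, country="US", is_new_account=True))
```

The appeal is obvious: the logic is readable, auditable, and instantly changeable. The cost is equally obvious: every threshold and condition is something a human must notice has gone stale and then tune by hand.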
Imagine rules that evolve automatically. AI that reasons like an analyst. A system that adapts in real time without losing transparency or control. That’s where LLMs come in.
Before we had AI Agents, we had Matt, our Head of Forward Deployed Engineering. An all-around rule-tuning legend.
Matt spent years manually improving detection: digging into noisy rules, comparing true vs. false positives, and adding business context until signal quality soared.
The results were undeniable. The only problem? We can’t clone Matt (we checked).
So we did the next best thing: we built an AI system that could think like him.
We studied Matt’s process, step by step. How he reviewed outcomes, analyzed data, and refined logic.
Then we taught an LLM to replicate that reasoning at scale.
The result: the AI Rule Recommendation Agent (a.k.a. MattAI).
It doesn’t rely on slow-to-learn ML models or static patterns. Instead, it uses language models that can reason over contextual data, dispositions, and alerts—and suggest smarter rules that humans can review, edit, and deploy instantly.
Think of it as an AI collaborator that bridges rules and reasoning.
It scans historical outcomes, looks for patterns, and proposes new rule logic. Each recommendation includes an explanation and the underlying evidence, so analysts stay fully in control.
With it, teams can:
Once validated, rules can be updated instantly. No retraining or waiting for quarterly model updates.
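As a rough illustration of the evidence-gathering step, the sketch below summarizes per-rule dispositions from alert history and flags noisy rules for review. The data, field names, and the 50% false-positive threshold are hypothetical; in the actual product, an LLM reasons over this kind of evidence to propose the new rule logic and its rationale.

```python
from collections import Counter

# Hypothetical alert history: (rule_id, disposition) pairs.
history = [
    ("R1", "true_positive"), ("R1", "false_positive"), ("R1", "false_positive"),
    ("R2", "true_positive"), ("R2", "true_positive"), ("R2", "false_positive"),
]

def rule_evidence(history):
    """Summarize per-rule outcomes: the evidence a reviewer, human or
    LLM, would reason over before proposing a rule change."""
    counts = {}
    for rule_id, disposition in history:
        counts.setdefault(rule_id, Counter())[disposition] += 1
    summary = {}
    for rule_id, c in counts.items():
        total = sum(c.values())
        summary[rule_id] = {
            "alerts": total,
            "false_positive_rate": c["false_positive"] / total,
        }
    return summary

evidence = rule_evidence(history)

# Each recommendation pairs the proposed action with its supporting evidence,
# so an analyst can accept, edit, or reject it.
recommendations = [
    {"rule": r, "action": "review threshold", "evidence": stats}
    for r, stats in evidence.items()
    if stats["false_positive_rate"] > 0.5
]
```

Keeping the evidence attached to each recommendation is what preserves auditability: the analyst sees not just the suggested change but the outcome data that motivated it.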
Traditionally, teams had to pick: speed (automation) or safety (human control). With the AI Rule Recommendation Agent, you get both.
The system continuously learns from your alert history and investigations, generating explainable recommendations that analysts can trust. Every suggestion comes with context and rationale, so decisions are fast, auditable, and transparent.
The AI Rule Recommendation Agent isn’t alone. It’s part of a growing family of AI Agents inside Unit21 that handle the repetitive, high-volume work analysts do daily.
Together, they create a continuous intelligence loop:
It’s a closed loop of reasoning, action, and learning, powered by LLMs, guided by humans.
As the system learns from real outcomes, it sharpens over time. That means fewer unnecessary alerts, faster investigations, and stronger detection coverage. No black boxes. No retraining cycles. Just explainable AI that improves continuously.
Detection is only half the story.
The AI Investigation Agents handle the other side: investigations. They review and summarize alerts, assemble context, and even draft narratives, reducing handle time by up to 90% while maintaining accuracy above 99%.
Over 150,000 alerts have already been processed by these agents across 100+ financial institutions worldwide.
Fraud and compliance teams shouldn’t have to choose between speed and control, or between transparency and automation.
By combining rules with LLM-powered reasoning, Unit21 delivers:
It’s not about replacing humans. It’s about amplifying their impact.
The AI Rule Recommendation Agent represents a new chapter in fraud detection. One where LLMs reason alongside humans to make smarter, faster, and more explainable decisions.
With Unit21, your rules don’t just run, they evolve. And your analysts don’t just react, they lead.
The future of risk operations isn’t rule-based or machine-learning-based. It’s LLM-driven and human-guided. A collaboration that turns every decision into intelligence.
See our AI Detection and Investigation Agents for yourself! Request a demo.

Kunal Datta is the Chief Product Officer at Unit21. Prior to Unit21, he led the Product team for Checkout at Fast, and prior to that, led the Product teams responsible for automating aerial wildfire safety inspections at Pacific Gas & Electric.
He has a background leading Product teams using AI to automate processes at regulated entities, as well as financial products, machine learning products, web applications, mobile applications, hardware products, and data products. Kunal is a Fulbright Scholar and studied Civil and Environmental Engineering and Music, Science, and Technology at Stanford University.