
Rules vs. Machine Learning: Finding the Best of Both Worlds

Published
October 27, 2025
Kunal Datta
Chief Product Officer, Unit21

Since the earliest days of fraud detection, one debate has never gone away: Do you trust the models—or the rules?

For years, teams have lived between two poles: the automation of machine learning scores and the precision of human-written rules. Both deliver value. Both have limits.

At Unit21, we’ve found that the real power comes from a third path—blending the intuition of rules with the reasoning of large language models (LLMs) to get the best of both worlds.

Traditional ML Scores: Smart, But Stiff

Machine learning (ML) scores have long been the default answer for scale.

Why teams love them:

  • The score just shows up: no setup, no tuning.
  • They learn over time as new data arrives.
  • One model can capture thousands of behavioral signals in a single number.

But here’s the catch:

  • They struggle with emergent behavior. You need hundreds or thousands of examples before they notice new patterns.
  • Changing anything requires retraining, validation, and redeployment.
  • Analysts can’t easily see or adjust the logic behind the score.

Rules: Transparent, Fast, and Human

Rules are the original intuition engine. They turn human expertise into crisp logic: “If X happens and Y is true, trigger an alert.”

Why teams love rules:

  • They react instantly to new behavior.
  • They’re transparent and explainable.
  • Analysts can create, tune, and retire them anytime.

Why they’re hard:

  • They take time to write and maintain.
  • They don’t evolve automatically.
  • They can grow messy or duplicative over time.

Rules give you total control, but they need constant care to stay sharp.
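The "if X happens and Y is true" shape above can be sketched as a small predicate. This is a hypothetical illustration, not Unit21's rule engine; the fields, thresholds, and function names are all made up.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    account_age_days: int

def high_risk_wire_rule(txn: Transaction) -> bool:
    """Hypothetical rule: flag large transfers from new accounts.

    Here X is a large amount and Y is a young account; both the
    $10,000 and 30-day thresholds are purely illustrative.
    """
    return txn.amount > 10_000 and txn.account_age_days < 30

txn = Transaction(amount=12_500, country="US", account_age_days=7)
print(high_risk_wire_rule(txn))  # True: this transaction would alert
```

Because the logic is plain code, an analyst can read it, explain it to a regulator, and tune either threshold in seconds, which is exactly the transparency rules are prized for.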

What If You Could Have the Best of Both?

Imagine rules that evolve automatically. AI that reasons like an analyst. A system that adapts in real time without losing transparency or control. That’s where LLMs come in.

The Birth of the AI Rule Recommendation Agent

Before we had AI Agents, we had Matt, our Head of Forward Deployed Engineering. An all-around rule-tuning legend.

Matt spent years manually improving detection: digging into noisy rules, comparing true vs. false positives, and adding business context until signal quality soared.

The results were undeniable. The only problem? We can’t clone Matt (we checked).

So we did the next best thing: we built an AI system that could think like him.

From Human Intuition to LLM-Powered Reasoning

We studied Matt’s process, step by step. How he reviewed outcomes, analyzed data, and refined logic.

Then we taught an LLM to replicate that reasoning at scale.

The result: the AI Rule Recommendation Agent (a.k.a. MattAI).

It doesn’t rely on slow-to-learn ML models or static patterns. Instead, it uses language models that can reason over contextual data, dispositions, and alerts—and suggest smarter rules that humans can review, edit, and deploy instantly.

How the AI Rule Recommendation Agent Works

Think of it as an AI collaborator that bridges rules and reasoning.

It scans historical outcomes, looks for patterns, and proposes new rule logic. Each recommendation includes an explanation and the underlying evidence, so analysts stay fully in control.

With it, teams can:

  • Reduce false positives by refining outdated rules.
  • Surface emerging fraud signals early.
  • Test new logic safely in shadow mode before deploying live.

Once validated, rules can be updated instantly. No retraining or waiting for quarterly model updates.
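Shadow mode can be pictured as replaying a candidate rule over labeled history and checking how many of its hits analysts actually dispositioned as fraud. A simplified sketch, with assumed field names and a toy dataset rather than the product's actual API:

```python
# Hypothetical shadow-mode check: replay a candidate rule over
# historical events that already have analyst dispositions.
history = [
    {"amount": 12_000, "new_account": True,  "disposition": "fraud"},
    {"amount": 15_000, "new_account": False, "disposition": "legit"},
    {"amount": 11_000, "new_account": True,  "disposition": "fraud"},
    {"amount": 200,    "new_account": True,  "disposition": "legit"},
]

def candidate_rule(event: dict) -> bool:
    """Proposed logic under test; nothing alerts in production yet."""
    return event["amount"] > 10_000 and event["new_account"]

hits = [e for e in history if candidate_rule(e)]
true_positives = sum(1 for e in hits if e["disposition"] == "fraud")
precision = true_positives / len(hits)
print(f"shadow precision: {precision:.0%} over {len(hits)} hits")
```

If the backtested precision beats the rule it replaces, the team promotes it; if not, nothing ever alerted in production, which is the safety shadow mode buys.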

From Insight to Action

Traditionally, teams had to pick: speed (automation) or safety (human control). With the AI Rule Recommendation Agent, you get both.

The system continuously learns from your alert history and investigations, generating explainable recommendations that analysts can trust. Every suggestion comes with context and rationale, so decisions are fast, auditable, and transparent.

Part of a Larger AI Suite

The AI Rule Recommendation Agent isn’t alone. It’s part of a growing family of AI Agents inside Unit21 that handle the repetitive, high-volume work analysts do daily.

Together, they create a continuous intelligence loop:

  1. The Rule Recommendation Agent refines rule logic.
  2. Alerts are generated and reviewed by AI Investigation Agents.
  3. Investigation outcomes feed back to improve the next set of recommendations.

It’s a closed loop of reasoning, action, and learning, powered by LLMs, guided by humans.
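The three-step loop above can be sketched as a simple cycle in which each turn's investigation outcomes become the next turn's input. The functions here are stand-in stubs to show the data flow, not real Unit21 components:

```python
# Hypothetical skeleton of the recommend -> investigate -> feed-back loop.
def recommend_rules(outcomes: list[dict]) -> list[str]:
    """Step 1 (stub): propose rule changes from past dispositions."""
    return ["tighten amount threshold"] if outcomes else []

def investigate(alerts: list[str]) -> list[dict]:
    """Step 2 (stub): review alerts and record a disposition each."""
    return [{"alert": a, "disposition": "fraud"} for a in alerts]

outcomes: list[dict] = []
for _ in range(2):  # two turns of the loop
    recommendations = recommend_rules(outcomes)  # step 1: refine logic
    alerts = ["alert-1", "alert-2"]              # rules fire on live traffic
    outcomes = investigate(alerts)               # step 2: review alerts
    # step 3: outcomes feed the next recommend_rules call
```

On the first turn there is no history, so nothing is recommended; from the second turn on, dispositions start shaping the recommendations, which is the "continuous intelligence" the loop describes.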

Smarter Rules, Fewer False Positives

As the system learns from real outcomes, it sharpens over time. That means fewer unnecessary alerts, faster investigations, and stronger detection coverage. No black boxes. No retraining cycles. Just explainable AI that improves continuously.

AI-Powered Investigations

Detection is only half the story.

The AI Investigation Agents handle the other side: investigations. They review and summarize alerts, assemble context, and even draft narratives — reducing handle time by up to 90% and maintaining accuracy above 99%.

Over 150,000 alerts have already been processed by these agents across 100+ financial institutions worldwide.

Why It Matters

Fraud and compliance teams shouldn’t have to choose between speed and control, or between transparency and automation.

By combining rules with LLM-powered reasoning, Unit21 delivers:

  • Faster adaptation to new fraud patterns.
  • Continuous learning without retraining cycles.
  • Full transparency for analysts and regulators alike.

It’s not about replacing humans. It’s about amplifying their impact.

Take Your Risk Operations Toward a Smarter, Safer Future

The AI Rule Recommendation Agent represents a new chapter in fraud detection. One where LLMs reason alongside humans to make smarter, faster, and more explainable decisions.

With Unit21, your rules don’t just run, they evolve. And your analysts don’t just react, they lead.

The future of risk operations isn’t rule-based or machine-learning-based. It’s LLM-driven and human-guided. A collaboration that turns every decision into intelligence.

See our AI Detection and Investigation Agents for yourself! Request a demo.

Kunal Datta
Chief Product Officer, Unit21

Kunal Datta is the Chief Product Officer at Unit21. Prior to Unit21, he led the Product team for Checkout at Fast, and prior to that, led the Product teams responsible for automating aerial wildfire safety inspections at Pacific Gas & Electric.

He has a background leading Product teams using AI to automate processes at regulated entities, as well as financial products, machine learning products, web applications, mobile applications, hardware products, and data products. Kunal is a Fulbright Scholar and studied Civil and Environmental Engineering and Music Science Technology at Stanford University.

Learn more about Unit21
Unit21 is the leader in AI Risk Infrastructure, trusted by over 200 customers across 90 countries, including Sallie Mae, Chime, Intuit, and Green Dot. Our platform unifies fraud and AML with agentic AI that executes investigations end-to-end—gathering evidence, drafting narratives, and filing reports—so teams can scale safely without expanding headcount.