

The landscape of financial crime detection and investigations has changed dramatically over the last few decades. We moved from manual, paper-driven processes and rigid transaction-monitoring systems to deeper analytics and early machine learning initiatives.
Today, the combination of scale, complexity, and fragmentation — more data, more transactions, more channels, more customers — has made it harder than ever to run effective programs. At Unit21, we believe that the arrival of custom AI agents marks a turning point.
They represent a fundamental shift toward more intelligent, dynamic, and context-aware AI compliance tools for financial institutions. Below, we explain what these custom AI agents do, why they complement traditional approaches, how they should be governed, and what to expect when deploying them.
For compliance teams, the promise of custom AI agents lies in both productivity and precision. Financial institutions spend heavily to maintain compliance and strengthen fraud operations, yet traditional methods often struggle to keep pace.
According to a McKinsey analysis, financial institutions that have adopted AI-driven operations report productivity gains of 200-2,000%, faster case resolution, and improved consistency across investigations.
Custom AI agents allow organizations to handle more data, process complex transaction patterns, and manage multiple channels while maintaining human oversight. Early adopters of Unit21’s AI agents are already seeing measurable results:
These numbers illustrate what we’ve long hoped for: custom AI agents that don’t just speed up processes but enhance the accuracy and integrity of decision-making.
To understand what makes custom AI agents so powerful, we need to look at the technology behind them — Large Language Models (LLMs). LLMs excel at predicting language patterns, understanding nuance, and generating contextually accurate responses.
In simple terms, these models process text like we process conversations: they listen, interpret, and respond intelligently. This capability makes them ideal for analyzing unstructured data, like narratives, customer communications, and case notes that legacy systems often overlook.
In fraud and AML operations, this means custom AI agents can:
Financial institutions have used machine learning (ML) models for nearly two decades. These models typically produce a risk score, a numerical indicator of the probability of fraud or money laundering.
But traditional ML models require structured, labeled data. They depend heavily on consistency, predefined parameters, and constant retraining. As many of us have experienced, this makes them both time-intensive and limited in adaptability.
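To make the contrast concrete, here is a minimal sketch of the kind of supervised risk-scoring model described above. The feature names, weights, and bias are illustrative stand-ins; in a real system the weights would be learned from structured, labeled historical cases, which is exactly the data dependency the paragraph describes.

```python
import math

# Illustrative features a traditional ML model might use.
# In practice these weights are learned from labeled data; the
# values below are hypothetical, chosen only to show the mechanics.
WEIGHTS = {
    "amount_zscore": 1.2,     # how unusual the amount is for this customer
    "new_counterparty": 0.8,  # 1 if the counterparty is new, else 0
    "velocity_24h": 0.6,      # normalized transaction velocity
}
BIAS = -3.0

def risk_score(features: dict) -> float:
    """Return a fraud/AML risk probability in [0, 1] via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

routine = {"amount_zscore": 0.1, "new_counterparty": 0, "velocity_24h": 0.2}
unusual = {"amount_zscore": 3.5, "new_counterparty": 1, "velocity_24h": 2.0}

print(risk_score(routine))  # low score for routine activity
print(risk_score(unusual))  # high score for anomalous activity
```

The limitation the paragraph points to is visible here: every input must be a predefined numeric feature, so anything unstructured (a narrative, a case note) is invisible to the model until someone engineers a feature for it.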
Custom AI agents, on the other hand, leverage LLMs that can interpret context, work with messy data, and reason dynamically, much like human investigators. Even so, the two approaches complement rather than replace each other.
LLMs complement ML by enabling:
For example, Unit21 combines ML for pattern detection and alert scoring with LLMs for contextual understanding, creating a comprehensive approach to fraud and AML investigations.
Today’s AI compliance tools for financial institutions are often built on top of “foundation models” developed by major providers such as:
Each of these models has strengths suited to different use cases. Some are designed for deeper reasoning and complex review, while others specialize in faster, cost-effective processing. At Unit21, we use a mix of these technologies, selecting models based on task relevance, performance, and data security considerations.
The key for teams is not to chase the newest model, but to understand its performance, data security, and policy implications, especially how it handles and protects sensitive financial data.
A key innovation in AI compliance tools for financial institutions is Retrieval-Augmented Generation (RAG). The principle is simple: before generating an answer, the AI retrieves relevant information from available data sources, just like a human would perform a quick search before responding.
In fraud and AML investigations, this function is critical. Every alert review involves referencing multiple datasets, such as customer history, transaction records, previous alerts, and external sources. With RAG, custom AI agents can access the right data at the right time, producing contextually accurate outputs that investigators can trust.
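The retrieve-then-generate flow can be sketched in a few lines. Everything here is a hypothetical stand-in: the in-memory data sources represent customer history and transaction stores, and `call_llm` stubs out a hosted model call. The point is the ordering, in which context is fetched before the model is asked to reason.

```python
# Minimal RAG sketch: retrieve relevant records, then build the prompt.
# Data sources and the model call are hypothetical stand-ins.

CUSTOMER_HISTORY = {
    "cust-42": ["2023: two prior alerts, both closed as false positives"],
}
TRANSACTIONS = {
    "cust-42": ["$9,800 wire to a new counterparty on 2024-05-01"],
}

def retrieve(customer_id: str) -> list:
    """Gather the context an investigator would pull up for this alert."""
    return CUSTOMER_HISTORY.get(customer_id, []) + TRANSACTIONS.get(customer_id, [])

def build_prompt(alert: str, context: list) -> str:
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Alert: {alert}\nRelevant records:\n{ctx}\nAssess the risk and explain."

def call_llm(prompt: str) -> str:
    # Stand-in for a real foundation-model call.
    return "Draft assessment grounded in the retrieved records."

prompt = build_prompt("Structuring pattern on cust-42", retrieve("cust-42"))
assessment = call_llm(prompt)
```

Because the prior-alert history and the recent wire are injected into the prompt, the model's answer is grounded in the same records a human would have open, rather than in whatever the base model happens to recall.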
Just as structured training programs guide new human agents, the same kinds of instructions can be encoded for custom AI agents. When properly designed, this ensures the AI consistently follows compliance logic, significantly improving the quality and reliability of automated investigations.
Quality matters more in fraud and AML than in many other domains. Two foundational practices make it achievable: prompting and eval sets.
Unit21 integrates both RAG and eval sets into our AI agents, enabling teams to configure tasks, select relevant data, and achieve high accuracy at scale.
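An eval set is, at its core, a labeled test file for the agent: alerts with known-correct dispositions, scored before and after any prompt or configuration change. The sketch below uses a trivial rule-based stand-in for the agent; the alerts and expected labels are invented for illustration.

```python
# Hypothetical eval set: labeled alerts with expected dispositions,
# used to measure agent accuracy whenever prompts or config change.
EVAL_SET = [
    {"alert": "wire just under reporting threshold, new account", "expected": "escalate"},
    {"alert": "recurring payroll deposit, long-tenured customer", "expected": "close"},
    {"alert": "rapid in-and-out transfers across three accounts", "expected": "escalate"},
]

def agent_decide(alert: str) -> str:
    """Trivial stand-in for the AI agent under evaluation."""
    suspicious = ("threshold", "rapid", "in-and-out")
    return "escalate" if any(word in alert for word in suspicious) else "close"

hits = sum(agent_decide(case["alert"]) == case["expected"] for case in EVAL_SET)
print(f"accuracy: {hits}/{len(EVAL_SET)}")  # prints "accuracy: 3/3"
```

Real eval sets are larger and drawn from historical cases, but the discipline is the same: no prompt change ships without its accuracy being measured against known outcomes.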
Unit21’s AI agents are configurable, integrated, no-code tools designed for specific fraud and AML tasks. Examples include:
As a vendor in the space, we handle RAG, prompting, and eval sets for our clients. In practice, that means clients can easily select the data they need, rely on AI that is already configured to access and analyze that data correctly, toggle features on and off as needed, and choose which alerts to review and deploy against.
Unit21 offers a variety of AI agents designed for specific compliance and risk tasks, including:
AI Agents for AML:
AI Agents for Fraud:
At Unit21, our approach to responsible AI comes down to a simple formula: Accuracy + Oversight = Trust.
So how does oversight come into play? You’ll often hear the term “human-in-the-loop”: keeping a person in the review process ensures that every AI-driven decision remains explainable and transparent for regulators.
The general concept comes down to policy:
That balance between automation and human verification is what builds trust in AI compliance tools for financial institutions.
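One common way to express that balance as policy is a routing rule: agent decisions above a confidence threshold proceed automatically, everything else goes to a human, and certain high-impact actions always get a human regardless of confidence. The threshold, decision names, and the always-review rule below are illustrative assumptions, not a description of any specific product configuration.

```python
# Hedged sketch of a human-in-the-loop routing policy.
# Threshold and decision categories are hypothetical.
REVIEW_THRESHOLD = 0.90
ALWAYS_HUMAN = {"file_sar"}  # high-impact actions always get a human

def route(decision: str, confidence: float) -> str:
    """Return who acts on an agent decision under this policy."""
    if decision in ALWAYS_HUMAN:
        return "human_review"
    if confidence >= REVIEW_THRESHOLD:
        return "auto_apply"
    return "human_review"

print(route("close_alert", 0.97))  # auto_apply
print(route("close_alert", 0.62))  # human_review
print(route("file_sar", 0.99))     # human_review
```

The policy itself is what gets shown to auditors: a few explicit lines that state exactly when automation acts alone and when a person must sign off.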
We also use another concept called tracing: every AI input and output is logged to a database and stored. A second LLM, which we call an LLM-as-a-judge, then reviews and rates each agent’s inputs and outputs, giving us a built-in second pair of eyes on quality.
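Mechanically, tracing is just structured logging plus a second scoring pass. The sketch below uses an in-memory SQLite table and stubs the judge with a placeholder function; in a real pipeline the judge would be an actual model call scoring each trace against a rubric.

```python
import sqlite3

# Minimal tracing sketch: log every agent input/output pair, then have a
# second model ("LLM-as-a-judge", stubbed here) rate the pair's quality.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE traces ("
    "  id INTEGER PRIMARY KEY, prompt TEXT, output TEXT, judge_score REAL)"
)

def judge(prompt: str, output: str) -> float:
    """Stand-in for a second LLM that rates the agent's work on a 0-1 scale."""
    return 1.0 if output else 0.0

def trace(prompt: str, output: str) -> float:
    """Persist the input/output pair together with its judge rating."""
    score = judge(prompt, output)
    db.execute(
        "INSERT INTO traces (prompt, output, judge_score) VALUES (?, ?, ?)",
        (prompt, output, score),
    )
    return score

trace("Summarize alert 123", "Customer activity is consistent with payroll.")
rows = db.execute("SELECT COUNT(*), MIN(judge_score) FROM traces").fetchone()
print(rows)  # (1, 1.0)
```

Because every pair is persisted with its rating, low-scoring traces can trigger alerts and be pulled up later for audit, which is what makes the monitoring described below possible.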
At Unit21, we not only support this model through our technology but also proactively monitor and receive notifications if anything goes off track with our agents, ensuring responsible, compliant AI oversight from end to end.
Our custom AI agents are built for flexibility. They can be configured to execute end-to-end investigations or assist human reviewers through configurable workflows, all without any coding.
Inside the Unit21 dashboard, teams can:
For example, Unit21 custom AI agents include:
Every Unit21 custom AI agent operates with full transparency. Analysts can track every AI action, review how outputs are generated, and edit narratives as needed, ensuring both auditability and regulatory traceability.
Because these AI compliance tools for financial institutions are fully customizable, teams can design agents that align with their risk appetite and operational workflows. Whether it’s counterparty risk analysis or behavior deviation detection, Unit21 enables teams to automate routine tasks while keeping human oversight where it matters most.
In practice, these custom AI agents have reduced investigation times by over 90%, turning hours of manual work into near-instant outputs and freeing compliance teams to focus on decisions, not data gathering.
Traditional systems have reached their limits, detecting only a fraction of illicit activity despite massive investments. Custom AI agents represent the next evolution, moving from static, rule-based systems to intelligent, adaptive tools that learn, reason, and act with human-like precision.
Through technologies such as LLMs, RAG, and human-in-the-loop oversight, AI compliance tools for financial institutions can finally close the gap between compliance effort and effective risk detection.
At Unit21, we’re proud to be at the forefront of that change. Request a demo today and let us help you transition from reactive programs to proactive, data-driven defenses against fraud and AML!

Tyler Allen is the CEO of Unit21 and was the company's first hire, writing some of the first lines of code seven years ago. He is a driving force behind Unit21's vision as the leader in AI risk infrastructure, having led the AI team before becoming COO. A deep technical leader, Tyler recently returned to the codebase to personally build AI agent configurations, pairing his technical expertise with seven years of experience observing how compliance teams operate.