How AI for Credit Unions Is Transforming Risk Strategy Today

September 17, 2025
Tyler Pihl
VP, Enterprise Risk, Service Credit Union
Ho Yin (Kenneth) Leung
Director of Fraud, Teachers Federal Credit Union

Artificial intelligence (AI) has become one of the most powerful forces shaping the future of financial services. For credit unions navigating increasingly complex threats, from synthetic identities and deepfakes to faster payment rails and sophisticated scams, the time to adopt AI for credit unions is not tomorrow. It’s today.

In this blog, we’ll share how we at Teachers Federal Credit Union and Service Credit Union are transforming credit union risk management through practical, responsible adoption of AI.

We’ll unpack what we’re seeing from fraudsters, explain how we’re deploying AI for credit unions, and offer tactical advice to help your teams modernize their AML and fraud detection programs without compromising on trust, transparency, or member service.

The New Reality: Criminals Are Already Using AI

If there’s one takeaway we want to reinforce: criminals are already using AI. Every day.

Over the past two years, we’ve seen fraud become exponentially more sophisticated. Phishing emails used to be easy to spot, as they were filled with typos and obvious red flags. Now, they’re nearly indistinguishable from legitimate communication, thanks to generative AI tools like ChatGPT. 

The same applies to deepfakes and voice cloning. Bad actors can now convincingly impersonate colleagues, executives, or even family members to commit fraud. They can spin up hundreds of synthetic identities or impersonate a trusted contact over video calls in real time, all powered by AI.

And this is not hypothetical.

At Service Credit Union, we’ve seen impersonation scams increase not just in complexity, but in volume. In a single day, we saw over 2,000 new account applications, a spike of more than 1,000%. Many were linked to romance scams, lottery fraud, and multi-year deception campaigns. These scams targeted our members en masse, exploiting trust and access in ways that wouldn’t have been possible just a few years ago.

The reality is simple: legacy systems can’t keep up. Just as fraudsters are leveraging AI to scale their operations, credit unions must also use AI to defend their members and operations.

Rethinking KYC in the Age of AI

Historically, KYC (Know Your Customer) has served as the gatekeeper to financial services. But today, it’s no longer sufficient as a standalone checkpoint. Identity fraud has evolved into a dynamic, digital lifecycle, and fraud often occurs well after onboarding.

Bad actors can easily acquire real personal data from dark web sources and bypass outdated controls. The digital nature of today's services makes it easier than ever to present false but convincing identities. That’s why effective credit union risk management today means going beyond the initial KYC moment. 

Identity verification must be continuous, dynamic, and behavior-based. It’s not enough to verify someone once and hope they’re safe. We need to monitor patterns across the full customer lifecycle, from onboarding to ongoing activity, across devices, behaviors, and transaction histories. 
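To make the idea of continuous, behavior-based verification concrete, here is a minimal sketch in Python. The profile fields, thresholds, and function names are illustrative assumptions, not any institution's actual model; the point is that every session is re-scored against the member's observed baseline rather than trusting a one-time KYC check.

```python
from dataclasses import dataclass, field

@dataclass
class MemberProfile:
    """Rolling baseline of a member's observed behavior (hypothetical schema)."""
    known_devices: set = field(default_factory=set)
    typical_max_amount: float = 0.0

def session_risk(profile: MemberProfile, device_id: str, amount: float) -> float:
    """Score a single session from 0.0 (normal) to 1.0 (highly anomalous)."""
    score = 0.0
    if device_id not in profile.known_devices:
        # New device: re-verify rather than assume the onboarding check still holds
        score += 0.5
    if profile.typical_max_amount and amount > 3 * profile.typical_max_amount:
        # Transaction far outside this member's historical range
        score += 0.5
    return min(score, 1.0)

# Continuous verification: every session is re-scored, not just account opening
profile = MemberProfile(known_devices={"dev-123"}, typical_max_amount=500.0)
print(session_risk(profile, "dev-999", 5000.0))  # 1.0 -> trigger step-up authentication
```

In practice these signals would feed a trained model rather than fixed rules, but the lifecycle framing is the same: the score follows the member across devices, behaviors, and transaction history.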

Why Traditional AML and Fraud Detection Must Converge

In many institutions, fraud and AML (Anti-Money Laundering) teams have operated in silos. But the lines are blurring. Real-time fraud tactics feed directly into downstream laundering efforts. That’s why we believe credit unions must modernize their entire approach, not by layering more systems, but by unifying fraud and AML within an intelligent platform.

The future of AML and fraud detection lies in merging behavioral signals, identity intelligence, and transactional data into one cohesive risk framework. 

Building the Foundation: Incorporating AI for Credit Unions

We know criminals are using AI. The question is — how do we use it better? If you're new to AI, start small. 

Data Integrity

For us at Teachers Federal Credit Union, the strategy has been focused on credit union risk management as a lifecycle, not a point-in-time effort. We're using AI for credit unions to understand member behavior holistically, from onboarding to transactional habits and device usage. With the proper labeling and clean data inputs, AI models can help detect outliers, adapt to evolving threats, and prioritize the alerts that truly matter.

That said, AI is only as good as the data it ingests. If you feed in biased or incomplete data, your model will return poor results. That’s why choosing a vendor that offers explainability and transparency is non-negotiable. In fraud, precision matters. We need to trust that the AI isn’t just accurate, but auditable.

AI Governance Committee

At Service Credit Union, we embraced explainability and compliance from the start by forming an AI governance committee to establish internal guardrails and ensure safe data flows. Anticipating that vendors would embed AI into their tools, we took a proactive approach to understanding how and where AI would be used, especially with sensitive data.

To maintain control and minimize risk, we adopted Microsoft Copilot, which provides a secure and controlled environment for our teams to work efficiently. This AI-powered tool has helped automate tasks like calendar scheduling, drafting policies, and summarizing emails. Most importantly, it gave us a low-risk way to start exploring the power of AI.

Real-Time Prevention

The need for real-time prevention is urgent.

As payment rails like FedNow and RTP (real-time payments) continue to grow, we’re entering a 24/7 financial system. That means the traditional model, where alerts are generated in batches once a day, just doesn’t work. By the time an alert is triggered the next morning, the money is gone.

Credit union risk management must evolve. At Service Credit Union, we made real-time monitoring a prerequisite before launching FedNow send functionality. We’re actively scoring payments before they leave our institution. That capability didn’t exist with older batch-based systems, so we had to rethink both our vendor strategy and internal data access.

You don’t have to go all-in at once. We started by categorizing our data streams based on what needed to be real-time (e.g., online banking activity) versus what could be updated every 20 minutes (e.g., core data). It’s about prioritizing and focusing where the risk is highest.
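The stream-categorization step above can be sketched as a simple latency-tier map. The stream names and tier labels here are hypothetical placeholders, not an actual configuration; the design point is that only real-time streams gate a payment before it leaves the institution, while everything else refreshes on a micro-batch cadence.

```python
# Hypothetical latency tiers for risk data streams, mirroring the prioritization above
STREAM_TIERS = {
    "online_banking_events": "real_time",       # must be scored before a FedNow/RTP send
    "payment_instructions":  "real_time",
    "core_banking_data":     "micro_batch_20m",  # refreshed roughly every 20 minutes
    "kyc_profile_updates":   "micro_batch_20m",
}

def must_block_until_scored(stream: str) -> bool:
    """Only real-time streams hold a payment until a risk score is returned."""
    return STREAM_TIERS.get(stream) == "real_time"

print(must_block_until_scored("online_banking_events"))  # True
print(must_block_until_scored("core_banking_data"))      # False
```

Starting from a map like this makes the trade-off explicit: you pay the engineering cost of real-time pipelines only where the money can actually leave in seconds.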

The biggest lesson? Centralized, unified data is essential. Whether it’s fraud detection, AML, or sanctions screening, everything works better when the signals come together in one place. Fragmented data leads to fragmented insights, and that’s a gap fraudsters will exploit.

Why Explainable AI for Credit Unions Matters for Compliance

AI can feel risky, especially in highly regulated environments. But the risk isn't in AI itself. It’s in not understanding how it works. That’s why explainability is non-negotiable.

Every AI solution we use or evaluate must be auditable, transparent, and accurate. We run pilots. We test outputs. We use thumbs-up/thumbs-down features and team feedback to refine models over time.

We also frame risk based on the use case. Using AI to summarize a policy? Low risk. Using AI to disposition a SAR (Suspicious Activity Report)? That demands stricter QA. By tailoring oversight to each AI application, we maintain trust and meet regulatory expectations.

Vendors play a big role here, too. If your provider doesn’t offer explainability, or worse, doesn’t have an AI roadmap, it’s time to ask why.

Addressing the Fear of Job Loss with AI for Credit Unions

Let’s address the elephant in the room: Will AI take your job?

We don’t believe it will. In fact, we’ve seen the opposite. AI has helped our lean teams keep up with increasing alert volumes without adding headcount. Fraud professionals bring a sixth sense that’s hard to train into machines — a gut feeling that something’s off even when everything looks normal. These are jobs only human analysts can do well, and AI helps us spend more time on them, not less.

That’s why the goal isn’t to remove people from the loop, but to free them from paper-pushing so they can apply their intuition and judgment where it counts. More importantly, AI gives us back the time to focus on what truly matters: protecting our members. 

With routine tasks automated, we can prioritize efforts like educating scam victims, identifying large-scale attack patterns, and investigating complex money laundering schemes. Because at the end of the day, no machine can replace a phone call to a scammed senior, and no chatbot can offer the comfort a victim needs.

5 Tips for Getting Started with AI for Credit Unions

If you’re not using AI yet, don’t worry. You don’t have to do everything at once. But you do have to start. Here’s our tactical advice for credit unions just getting started:

  1. Demystify AI: Start with safe, contained tools like ChatGPT or Copilot. Try simple prompts. Write fake member letters. Summarize a policy. Understand what the technology can (and can’t) do.
  2. Start Small, Scale Smart: Don’t automate everything at once. Choose one or two low-risk areas to begin. Measure outcomes and build from there.
  3. Set Governance Early: Create an internal AI committee. Understand your risk appetite. Set guardrails around data access and usage.
  4. Centralize and Label Your Data: The better your data, the better your AI. Invest in systems that unify data from across silos and make it actionable.
  5. Choose Explainable Vendors: Work with partners who offer clear audit trails, regulatory alignment, and human-level accuracy, especially in AML and fraud detection.

A Future-Ready Risk Strategy with AI for Credit Unions and Unit21

We’re optimistic about where this is heading. Credit unions have always stood out for their member-first approach. Now, with the right technology and a thoughtful AI strategy, we can continue to protect those members — faster, smarter, and more effectively.

At Unit21, we launched our L1 AI Agent to help automate alert investigations, handling tasks like running online searches, detecting structuring, and flagging impossible transactions. 88 customers are already using it; together they have reviewed 7,000+ alerts and cut alert handling times by 60–80%, with zero hallucination escalations.

The AI handles the busywork, while analysts stay focused on what matters: using human judgment to assess risk. If your current vendor doesn’t offer AI, it might be time to rethink. Get a demo today!
