We Warned You: The Billion-Agent Threat Is Here
In early 2024, I published a projection that drew equal parts attention and skepticism across the financial services industry: by the end of 2024, we would see more than one billion nefarious AI agents operating across the global financial system.
Malicious bots, I argued, already outnumbered legitimate ones. Months later, in September 2024, I brought that same thesis to a room full of payments executives at the New York City Real-Time Payments Conference. The audience was polite. The skepticism was not.
Most of the people in that room were still debating whether large language models could help their analysts write better SARs. The idea that autonomous AI systems would be weaponized at an industrial scale, not by nation-states but by garden-variety fraud rings, seemed to many like science fiction with a short timeline.
The timeline was aggressive by roughly two years. But the trajectory, in its direction, velocity, and implications, has proven accurate. And the financial services industry's collective response has not kept pace with any of it.
The Projection and the Pushback
Precision matters when you're asking an industry to rethink its entire defensive posture. So let's be precise about what the projection actually said.
The claim was that autonomous malicious agents, not simple bots running credential-stuffing scripts, but AI-driven systems capable of multi-step financial crimes, would proliferate to the billion-unit threshold by the end of 2024. The basis wasn't speculation. It was math.
At the time, Imperva's annual Bad Bot Report had already documented that automated bot traffic accounted for nearly half of all internet traffic, with "bad bots" (those designed for scraping, fraud, account takeover, and abuse) making up the larger share. The report confirmed that bad bots alone accounted for 32% of all internet traffic, marking the fifth consecutive year of growth. Financial services were among the most targeted verticals.
Cloudflare's data told a similar story. By early 2024, their network was seeing automated traffic account for roughly a third of all HTTP requests, with a significant and growing share classified as malicious. Akamai's threat research documented that bot attacks against financial services had increased by over 60% year-over-year.
The numbers were already there. Instead of being a discontinuity, the projection was the logical extension of a trend that was compounding quarter over quarter. The only question was velocity: how fast would it grow?
What Changed Between Then and Now
The two years since that initial publication have been a masterclass in the exponential evolution of threats:
Late 2024
Generative AI tools became commoditized. Open-source models capable of producing synthetic identity documents, such as driver's licenses, utility bills, and bank statements, reached a quality threshold that enabled them to defeat most automated document verification systems. The barrier to entry for identity fraud dropped from "need a skilled forger" to "need a laptop and an afternoon."
Early 2025
The first documented cases of coordinated agent swarms in financial fraud emerged in FinCEN advisories. These weren't isolated bots. They were orchestrated systems, with one agent generating synthetic identities, another submitting account applications, a third building transaction histories that mimicked legitimate customer behavior, and a fourth executing the actual fund extraction. The division of labor was automated. The coordination was machine-speed.
Mid-2025
The FBI's Internet Crime Complaint Center (IC3) reported that losses from AI-enabled financial fraud had more than doubled year over year. Europol's Internet Organized Crime Threat Assessment flagged autonomous-agent-based fraud as an "escalating and priority threat" for the first time. The language in these reports shifted from "emerging" to "established."
Late 2025 through early 2026
The qualitative leap. The industry began seeing (and the FraudNet platform began detecting) agent systems that do more than execute pre-programmed fraud playbooks. They adapt, testing which synthetic identities pass KYC at which institutions. They learn which transaction patterns trigger alerts and which don't. They modify their behavior in real time based on outcomes, without human intervention. Agentic AI has not merely increased the speed and scale of fraud; it has made fraud autonomous.
The Numbers Today
Here is the picture as it stands in early 2026, drawn from public threat intelligence and operational data across the industry.
Imperva's most recent Bad Bot Report documents that automated bot traffic now accounts for more than half of all internet traffic directed at financial services APIs. The majority of that traffic is classified as "advanced," meaning it exhibits behaviors designed to evade detection, including browser fingerprint spoofing, residential proxy rotation, and human-like interaction patterns. These are not the primitive bots of five years ago.
Cloudflare's 2025 threat data showed AI-augmented bot traffic growing at roughly 2x the rate of overall bot traffic growth, with financial services and payments among the top three targeted sectors. Their researchers specifically called out the emergence of "agent chains," or sequences of automated systems that hand off tasks to each other to accomplish multi-step objectives.
Akamai's State of the Internet reports have documented a sustained surge in sophisticated bot attacks targeting account-opening, authentication, and payment endpoints across the banking and fintech sectors. The attacks targeting these endpoints have grown more sophisticated faster than the attack surface has expanded.
The billion-agent threshold from that early 2024 projection? Whether we've crossed it depends on how you count. If you're counting distinct autonomous malicious software agents operating globally across all industries, including agent instances spun up, used for a campaign, and destroyed, the number is likely already there or within striking distance by year-end 2026. If you're counting persistent, continuously operating agents, the number is in the hundreds of millions and accelerating. Either way, the order of magnitude has surpassed controversy and become consensus.
What "Nefarious Agents" Actually Means in 2026
The terminology matters, so let's be specific about what we're defending against. When we talk about "nefarious AI agents," we are not talking about:
- Credential-stuffing bots that spray stolen username/password pairs at login pages
- Simple web scrapers harvesting pricing data
- Click-fraud bots inflating ad metrics
Those threats still exist and still matter. They are also years behind the current attack surface: they describe the 2020 threat model, not the 2026 one.
What we're talking about, and what financial institutions are seeing in production every day, are autonomous systems that conduct multi-step financial crimes end-to-end. Here is a real attack pattern, composited from multiple incidents to protect confidentiality:
Step 1: Identity Fabrication.
An agent generates a synthetic identity: not a stolen identity but a fabricated one. It combines a real Social Security number (purchased from a dark-web breach dump) with a fictitious name, a generated address, and AI-produced supporting documents. The identity has no prior history, which is actually an advantage: there is nothing to contradict it.
Step 2: Account Opening.
A second agent simultaneously submits account applications across multiple financial institutions. It doesn't reuse the same browser fingerprint or IP address. It completes KYC flows, including document upload, selfie verification (using a generated face that matches the generated ID), and knowledge-based authentication (using data from the breached SSN's real history). Success rates vary by institution, but they're higher than most compliance officers want to admit.
Step 3: Seasoning.
A third agent operates the accounts over a period of weeks, making small purchases, receiving direct deposits (from other controlled accounts), and paying bills. It builds a transaction history that looks like a real, if boring, consumer. This is the patience that distinguishes agentic fraud from traditional fraud. Humans don't have the discipline to maintain dozens of synthetic accounts with normal-looking activity for weeks, but agents do.
Step 4: Extraction.
Once the accounts are seasoned, a coordinated set of agents executes the actual theft with structured transactions designed to stay below reporting thresholds, peer-to-peer payments to mule accounts, and purchases of convertible assets. The timing is coordinated across accounts to maximize extraction before any single institution detects anomalous behavior.
Step 5: Cleanup.
The agents close accounts, delete digital footprints, and cycle the infrastructure (IP addresses, device fingerprints, phone numbers) for reuse. By the time a human investigator is assigned to a case (if one ever is), the trail is cold.
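From the defender's side, the seasoning-then-extraction arc in Steps 3 and 4 is itself a detectable signature. The following is a minimal Python sketch, not a production detector; the function name, the field layout (a list of daily outflow totals), and both thresholds are illustrative assumptions. It flags an account whose history is weeks of unnaturally uniform small activity followed by a sudden burst:

```python
from statistics import mean, stdev

def flag_seasoning_then_burst(daily_totals, burst_factor=5.0, min_history_days=21):
    """Flag an account whose outflow history looks like weeks of small,
    uniform 'seasoning' activity followed by a sudden extraction burst.

    daily_totals: list of daily outflow totals, oldest first.
    All thresholds here are illustrative, not tuned values.
    """
    if len(daily_totals) < min_history_days + 1:
        return False  # not enough history to judge
    history, latest = daily_totals[:-1], daily_totals[-1]
    baseline = mean(history)
    if baseline == 0:
        return latest > 0
    # Low variance in the history is itself a signal: real consumers are noisy.
    uniform = stdev(history) < 0.5 * baseline
    burst = latest > burst_factor * baseline
    return uniform and burst

# A synthetic "seasoned" account: ~$40/day for a month, then a $2,500 pull.
seasoned = [40 + (i % 3) for i in range(30)] + [2500]
print(flag_seasoning_then_burst(seasoned))  # True
```

The point of the sketch is the shape of the signal, not the specific cutoffs: agent-operated accounts are often too disciplined, and that discipline is measurable.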
This is not theoretical. Every step described above has been documented in regulatory filings, law enforcement advisories, or detection data across the industry. The only novelty is the degree of autonomy and coordination, which continues to increase.
The Uncomfortable Question
Here is what should keep every financial institution's board and C-suite up at night:
If this threat was foreseeable two years ago, and many people across the industry saw it coming, why are most financial institutions still defending with the same architecture they had in 2023?
Talk to bank CTOs, CCOs, and CISOs across the industry, and a consistent picture emerges. The vast majority are running fraud detection systems built on rule engines and supervised machine learning models trained on historical human-generated fraud patterns. Their transaction monitoring generates alerts that are routed to case management queues for human analysts to work at a rate of 20-40 cases per day. Their KYC processes assume that document verification and selfie matching are sufficient for identity proofing, but these assumptions break down against an agentic adversary.
Rules engines miss coordinated attacks that stay within individual parameter thresholds. ML models trained on last year's fraud produce false negatives on patterns they've never seen. Case management queues that assume a manageable alert volume collapse when an agent swarm can trigger thousands of account applications in an afternoon. Document verification that relies on visual inspection (even automated visual inspection) fails against AI-generated documents that are pixel-perfect.
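The first of those failure modes can be made concrete. In this minimal Python sketch (hypothetical field names, an illustrative $10,000 threshold), a classic per-transaction rule sees nothing, while aggregating across accounts by a shared linkage key, a device fingerprint in this example, surfaces the coordinated total:

```python
from collections import defaultdict

REPORTING_THRESHOLD = 10_000  # illustrative per-transaction threshold

def per_transaction_rule(txns):
    """Classic rules-engine check: flag any single transaction at or above
    the threshold. Structured transactions sail straight through."""
    return [t for t in txns if t["amount"] >= REPORTING_THRESHOLD]

def cross_account_rule(txns):
    """Aggregate by a shared linkage key (a device fingerprint here) instead
    of per account: coordinated agents reuse infrastructure even when every
    individual transaction stays under the threshold."""
    totals = defaultdict(float)
    for t in txns:
        totals[t["device_id"]] += t["amount"]
    return {dev: amt for dev, amt in totals.items() if amt >= REPORTING_THRESHOLD}

# Five "unrelated" accounts, one device, each moving $9,500.
txns = [{"account": f"acct-{i}", "device_id": "dev-X", "amount": 9_500}
        for i in range(5)]
print(per_transaction_rule(txns))  # [] -- nothing flagged per transaction
print(cross_account_rule(txns))    # {'dev-X': 47500.0}
```

Real linkage is harder than a single device ID, of course, but the asymmetry holds: rules scoped to one account or one transaction cannot see behavior that is only anomalous in aggregate.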
The industry has, for the most part, responded to an exponential threat with linear improvements. Better rules, better models, more analysts, and faster case management are all real improvements, and they make a real difference at the margin. But the adversary is not operating at the margin.
What Comes Next
There is a fundamental architectural question the financial services industry needs to confront, and it needs to confront it now, not in next year's budget cycle.
When your adversary operates autonomously, at machine speed, across every boundary of your fraud and compliance infrastructure, your defense must do the same.
That means detection systems that identify coordinated behavior across accounts, channels, and time horizons, not just anomalies in individual transactions. It means investigation capabilities that can pull data, analyze patterns, and surface evidence at the speed the threat demands. It means compliance workflows that can scale to volumes that would bury any human team. And it means all of these capabilities share intelligence in real time, because the adversary's agents certainly do.
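As an illustrative sketch of cross-account correlation (not a description of any production system, and with all field names and thresholds assumed), the following Python clusters account applications by a shared linkage key and flags bursts that no per-account rule would ever see:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_application_swarm(applications, window=timedelta(hours=1), min_cluster=5):
    """Flag linkage keys (a shared IP subnet here, purely illustrative) whose
    densest burst of account applications inside `window` reaches `min_cluster`.
    Each application is a dict with "ip_subnet" and a datetime "ts"."""
    by_key = defaultdict(list)
    for app in applications:
        by_key[app["ip_subnet"]].append(app["ts"])
    flagged = {}
    for key, stamps in by_key.items():
        stamps.sort()
        left, densest = 0, 0
        for right in range(len(stamps)):
            # shrink from the left until the span fits inside `window`
            while stamps[right] - stamps[left] > window:
                left += 1
            densest = max(densest, right - left + 1)
        if densest >= min_cluster:
            flagged[key] = densest
    return flagged

# Synthetic data: one subnet files 8 applications in under 30 minutes;
# another files 6 spread across 15 hours (ordinary traffic).
t0 = datetime(2026, 1, 15, 9, 0)
apps = [{"ip_subnet": "203.0.113.0/24", "ts": t0 + timedelta(minutes=4 * i)}
        for i in range(8)]
apps += [{"ip_subnet": "198.51.100.0/24", "ts": t0 + timedelta(hours=3 * i)}
         for i in range(6)]
print(detect_application_swarm(apps))  # {'203.0.113.0/24': 8}
```

The sliding-window clustering is the essential move: each application looks unremarkable on its own, and the swarm is only visible when signals are correlated across accounts and time.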
When the billion-agent projection first appeared in early 2024, it was easy to dismiss. By the time it was repeated from the keynote stage in September of that year, the early signals were already confirming the thesis. Now, in 2026, agents are here, operating at a scale and sophistication that has outpaced most institutional defenses. What the industry does about that—and how quickly—is the only question left worth asking.
---
R. Whitney Anderson is CEO of FraudNet, where he leads the development of AI-native fraud and compliance infrastructure for financial institutions. He has spent over two decades at the intersection of financial services, technology, and regulatory compliance.

