Market · April 5, 2026 · 13 min read

Fraud Prevention in Emerging Markets: The 3 LatAm Patterns You'll See First

Fraud in LatAm doesn't look like Europe or the US. The three patterns we see first in real deployments across Brazil, Mexico, Argentina, Colombia.

Aryanne Reis

Customer Success Brazil

Fraud in LatAm does not look like fraud in Europe or the US. I've spent the last year watching customer integrations across Brazil, Mexico, Argentina and Colombia — sitting in war rooms, reviewing fraud tickets line by line, debriefing with risk teams after a bad week — and the same three patterns keep coming up. Every fintech eventually hits all three. Most only recognize one of them at first, and they usually recognize the wrong one.

This post is what I wish I could send every new client during onboarding. It's not theory. It's what we see in real deployments, in the first ninety days, when the first fraud wave lands and everyone in the room is suddenly very interested in the risk dashboard.

Let me start with the numbers, because the size of the problem here is something a lot of founders coming from European or US fintech still underestimate.

The numbers first#

LatAm fraud grew 32% in the first half of 2024 versus the prior year (Veriff). That's the backdrop. Everything below happens inside that curve.

A few more to anchor this:

  • In Mexico, account takeover rose 324% in fifteen months (BioCatch).
  • Brazil booked R$2.7 billion in PIX fraud in 2024, up 43% year over year (Febraban).
  • Scam attempts across the region jumped 155% in 2025 — that's across 36 institutions and roughly 300 million customers (BioCatch).
  • Malware attacks climbed 225%, stolen-device fraud 344%, and remote-access tool usage went up by 5x (BioCatch).
  • More than half of all fraud in the region now involves AI on the attacker side (Feedzai).
  • The LatAm fraud detection market itself is scaling to match: $1.74B in 2025, projected to $9.14B by 2034, a 20.2% CAGR.

When I show these numbers to a client who's just had their first big fraud month, the reaction is almost always the same: "We thought we were being careful." They were. Careful doesn't cut it here, because the attack surface is different. The ID systems are different. The payment rails are different (PIX alone is a category of attack). And the fraudsters have had years to specialize on these specific failure modes.

OK. On to the patterns.

1. Synthetic identity fraud#

Every time we onboard a new Brazilian client, this is the one that eats their first month. It's also the one they're least likely to recognize, because on paper every application looks clean.

Synthetic identity fraud is real data fragments from multiple real people stitched into a "clean" identity. The CPF exists — it's a real person's CPF. The name exists — it's a different real person. The address exists — a third real person. The email was generated to match the name. The phone was activated last month on a prepaid SIM. The photo is AI-generated or pulled from a stock source.

Now here's the part that makes this pattern devastating: credit models approve it. Human reviewers approve it. Nothing about the application fails a check, because every individual component is real. The person is not.

What happens next is what the industry calls bust-out. The synthetic customer behaves perfectly for ninety to one hundred and eighty days. They pay on time. They grow their limit. They open additional products. Then one day they max out every available line of credit, execute a few large PIX transactions to accounts that are also synthetic, and disappear.

Brazil synthetic identity fraud rose 140% last year (Sumsub). Globally, synthetic identity fraud grew 8x in 2025, with LatAm accounting for roughly half of that growth (LexisNexis).

Why does LatAm see it more? Three reasons, in order of impact:

  1. Fragmented ID systems. CPF in Brazil, CURP plus INE in Mexico, DNI in Argentina, Cédula in Colombia. There's no federated identity layer. A fraudster can harvest fragments from leaked databases in country A and combine them with fragments from country B and nothing in the verification chain will catch the mismatch.
  2. Data breach volume. The breached-data markets feeding LatAm synthetics are very active. I've had clients show me leaked datasets with tens of millions of CPF-name-address tuples for sale at commodity prices. The supply of raw material is effectively unlimited.
  3. Credit bureau gaps. Informal economy, thin files, no historical behavior on most new identities. The bureaus can't tell you much about someone who "just started their credit life" — and synthetics are, by definition, someone who just started their credit life.

What actually catches it, in deployment? Not document verification alone. Not OCR. Not selfie + liveness. All of those pass, because the data is real and the biometric is a human face (sometimes even a real one).

What catches it is the combination:

  • Behavioral biometrics during onboarding. A real first-time user fills the form differently from someone who has typed this exact CPF-name-address combo 14 times this week.
  • Velocity checks on device, IP, SIM, and behavioral fingerprint. How many "new" identities has this device seen in the last 30 days? How many of those shared at least one attribute?
  • Network analysis. Is this device linked to 14 other "new" accounts opened in the last 60 days? Does this address cluster with a bust-out cohort from Q3? Is this phone number one hop from a known mule account in our graph?
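The velocity and network checks above can be sketched in a few lines. This is an illustrative toy, not our product's API: the signup tuples, `device_velocity`, and `shared_attribute_clusters` are all hypothetical names, and a real graph layer would run over far richer attributes.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative signups: (device_id, identity_id, attributes, timestamp).
signups = [
    ("dev-1", "id-A", {"cpf": "111", "addr": "Rua X"}, datetime(2026, 3, 1)),
    ("dev-1", "id-B", {"cpf": "222", "addr": "Rua X"}, datetime(2026, 3, 10)),
    ("dev-1", "id-C", {"cpf": "333", "addr": "Rua Y"}, datetime(2026, 3, 20)),
    ("dev-2", "id-D", {"cpf": "444", "addr": "Rua Z"}, datetime(2026, 3, 5)),
]

def device_velocity(signups, device_id, now, window_days=30):
    """How many 'new' identities has this device seen in the window?"""
    cutoff = now - timedelta(days=window_days)
    return sum(1 for dev, _, _, ts in signups
               if dev == device_id and ts >= cutoff)

def shared_attribute_clusters(signups):
    """Group identities sharing at least one attribute value (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    by_value = defaultdict(list)
    for _, ident, attrs, _ in signups:
        find(ident)
        for key, val in attrs.items():
            by_value[(key, val)].append(ident)
    for members in by_value.values():
        for other in members[1:]:
            parent[find(members[0])] = find(other)  # union
    clusters = defaultdict(set)
    for ident in parent:
        clusters[find(ident)].add(ident)
    return [c for c in clusters.values() if len(c) > 1]

now = datetime(2026, 3, 25)
print(device_velocity(signups, "dev-1", now))  # 3 "new" identities, one device
print(shared_attribute_clusters(signups))      # id-A and id-B share an address
```

Even this naive version surfaces the shape of the signal: one device minting several identities, and identities that should be strangers sharing an address.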

When we turn on network analysis for a new client, the first week is always loud. Accounts that cleared initial KYC light up as members of clusters the client didn't know existed. That's the signal. The existing stack wasn't wrong — it just wasn't looking at the right layer.

For more on how we think about the KYC side of this, the KYC in Latin America complete guide is the right next read.

2. Account takeover (ATO)#

The second pattern is the one that scares our Mexican clients most, and for good reason.

Account takeover (ATO) rose 324% in Mexico in the last fifteen months. Regionally, remote-access tool usage is 5x what it was. These two numbers are the same story from different angles.

Here's the pattern we see in real deployments:

The victim receives a phone call, a WhatsApp message, or both. The social engineering is sophisticated — the attacker knows the victim's bank, their approximate balance, often their last transaction. They pretend to be from fraud prevention. They create urgency. They walk the victim through "verifying" their account, which in practice means installing AnyDesk or a similar remote-access app "so we can protect you." Once the app is installed, the attacker has the real device.

From the fintech's side, everything looks legitimate. Real device. Real IP. Real SIM. Same behavioral fingerprint the user always has. Same geo. Same hour of day. Because the victim is literally sitting there, sometimes watching the screen while the attacker operates.

Legacy fraud stacks — the rule-based ones — cannot see this. Every signal they check comes back green. The transaction is from a known device in a known location at a known time by a known user, and it goes through.

What catches it is behavioral biometrics in real time, applied to the session itself, not just onboarding. The attacker can take over the device, but they cannot take over the victim's motor pattern. The typing rhythm changes. The mouse dynamics change. Scroll behavior changes. Pauses between fields change. Session timing changes. A human's motor pattern is as much a signature as their face, and unlike their face it doesn't transfer with credentials.

The cleanest way to introduce this layer, in our experience, is to run it in shadow mode first — scoring every session without blocking anyone. Sessions that clear every other legitimate-session signal but score anomalous on motor patterns are almost always ATOs the existing stack was missing. Legitimate outliers do exist — a user who lent their phone to a family member for ten minutes will look anomalous too — but they are a small minority of the flagged set and they resolve cleanly with a lightweight step-up challenge.
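A minimal sketch of the shadow-mode idea, assuming you log inter-keystroke intervals per session. The baseline numbers and the threshold here are made up for illustration; real deployments score many motor-pattern features, not just typing speed, and tune the threshold during the shadow period.

```python
import statistics

def session_anomaly_score(baseline_intervals_ms, session_intervals_ms):
    """Z-score of the session's mean inter-keystroke interval against
    the user's own historical baseline. Higher = more anomalous."""
    mu = statistics.mean(baseline_intervals_ms)
    sigma = statistics.stdev(baseline_intervals_ms)
    session_mean = statistics.mean(session_intervals_ms)
    return abs(session_mean - mu) / sigma

# The user's historical typing rhythm (ms between keystrokes)...
baseline = [180, 195, 170, 210, 188, 176, 202, 190]
# ...vs. a normal session and a session driven by a remote operator.
normal_session = [185, 192, 179, 205]
ato_session = [95, 80, 110, 72]

THRESHOLD = 3.0  # illustrative; tuned per deployment in shadow mode
print(session_anomaly_score(baseline, normal_session) > THRESHOLD)  # False
print(session_anomaly_score(baseline, ato_session) > THRESHOLD)     # True
```

In shadow mode this score is logged, never enforced, which is what lets you measure the overlap between "anomalous motor pattern" and "confirmed ATO" before any customer feels friction.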

The attacker can have the device, the credentials, the SIM, and the geolocation. They cannot have the user's motor pattern. That gap is where behavioral biometrics lives.

Why LatAm sees this one more than Europe or the US: cash is still king culturally, which means users are used to being contacted by humans to "verify" things. The cold WhatsApp call from "the bank" lands on an audience that hasn't been pre-trained by a decade of "never click this link" PSAs. The social engineering works.

If you want the deeper architectural piece on why this kind of defense needs to be in the core stack rather than bolted on top, read The AI-native compliance stack.

3. First-payment default (intent fraud)#

The third pattern is my personal least favorite, because it's the quietest and the hardest to price.

First-payment default — also called intent fraud — is when a real customer, with real KYC, clean credit check, no red flags, takes out a loan or opens a credit line and simply never pays. Not once. The intent was never to pay.

This is different from synthetic identity fraud. The person is real. The identity is verified. No stitching, no bust-out cycle, no network cluster. It's just a human being who filled out the form knowing they would default, took the cash or the credit, and walked away.

It's more common in markets without robust credit bureaus, which is most of LatAm for a large segment of first-time borrowers. If you can't see historical behavior — because the person has no credit file, or a very thin one, or a file that was created ten minutes ago for the purpose of applying to you — you cannot distinguish intent-to-pay from no-intent-to-pay using credit data alone.

The second complication: from a reporting perspective, first-payment default looks almost identical to genuine early-stage delinquency. The customer misses the first payment. You send a reminder. They don't respond. You escalate. They're gone. For the risk team, the only difference is retrospective — and by the time you know, the money is gone.

What catches it, in practice, in our deployments:

  • Onboarding behavior analysis. How long did the user spend on each screen? Did they correct typos in their name? Did they hesitate on income? Did they paste or type? Did they tab through fields in the order a first-time user tabs, or in the order of someone who has filled this form before? Real customers take real time. Intent-fraud applicants are usually efficient in a telltale way.
  • Open banking data where it's available. Brazil's open finance regime is genuinely useful here — if a customer consents to share their banking data, you can see whether their cash flow is consistent with the loan they're asking for. Mexico is catching up. Argentina and Colombia are earlier.
  • Network signals on the device. Is this device associated with other accounts that went into first-payment default in the last 90 days? Is this phone number linked to a cluster of default cases?
  • Geographic and temporal anomalies. Application filed at 3am in a region with no local branch of the employer the user listed. Small signals. Stacked together they're predictive.

No single one of these catches first-payment default cleanly. It's a classifier problem, and the classifier needs all of them together. The lift grows over time: the first ninety days are about building the base signal set, and performance improves as the model learns each client's specific customer base.
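A minimal sketch of that stacking: each signal normalized to [0, 1], combined through a logistic function. The signal names, weights, and bias below are invented for illustration; in a real deployment they come out of a model fitted on labeled default cases.

```python
import math

# Hypothetical per-application signals, each normalized to [0, 1].
# Weights are illustrative only, not fitted values.
WEIGHTS = {
    "onboarding_too_efficient": 1.8,  # form filled fast, no corrections
    "cashflow_inconsistent":    2.2,  # open-banking data vs. requested loan
    "device_default_cluster":   2.6,  # device linked to prior default cases
    "geo_temporal_anomaly":     0.9,  # 3am application, implausible geo
}
BIAS = -3.5

def fpd_score(signals):
    """Logistic combination of stacked signals -> probability-like score."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
    return 1 / (1 + math.exp(-z))

clean = {"onboarding_too_efficient": 0.1, "cashflow_inconsistent": 0.0,
         "device_default_cluster": 0.0, "geo_temporal_anomaly": 0.2}
risky = {"onboarding_too_efficient": 0.9, "cashflow_inconsistent": 0.8,
         "device_default_cluster": 1.0, "geo_temporal_anomaly": 0.7}

print(round(fpd_score(clean), 3))  # low score: signals individually weak
print(round(fpd_score(risky), 3))  # high score: weak signals, stacked
```

The point the toy makes is the same one the deployments make: each signal alone moves the score only slightly, and it's the combination that separates intent-to-pay from no-intent-to-pay.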

For the AML side of how this interacts with reporting and mule-account networks, AML challenges for LatAm fintechs covers the rest of the picture.

The AI arms race#

One thread runs through all three patterns: the attackers are using AI, and the defenders who are not using AI are losing.

Some numbers from our deployment data and from the broader industry:

  • Legacy rule-based fraud detection stacks in LatAm deployments run under 70% accuracy on the fraud patterns above. I've seen them as low as 40%.
  • AI-powered fraud detection on equivalent traffic reaches 95%+ accuracy.
  • Detection latency drops from hours (human review queue) to seconds.
  • False positive rates on older rule stacks can reach 98%. AI models get under 5%.

That last one is worth sitting with. A 98% false positive rate means that for every real fraudster you catch, you've blocked 49 legitimate customers. That is a conversion rate killer. I've had clients tell me their fraud team was "working" — catching fraud — while their product team was simultaneously trying to figure out why onboarding drop-off was 40%. Those two teams were describing the same problem from opposite sides.
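Making that arithmetic explicit, reading "false positive rate" as the share of flagged accounts that turn out to be legitimate:

```python
def legit_blocked_per_fraudster(false_positive_rate):
    """If this share of flagged accounts is legitimate, how many
    legitimate customers are blocked per real fraudster caught?"""
    true_positive_share = 1.0 - false_positive_rate
    return false_positive_rate / true_positive_share

print(round(legit_blocked_per_fraudster(0.98)))      # 49 on a legacy stack
print(round(legit_blocked_per_fraudster(0.05), 2))   # 0.05 on an AI model
```

At 98%, you block 49 legitimate customers per fraudster; at 5%, you block one legitimate customer for every 19 fraudsters. Same fraud team, opposite funnel economics.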

More than half of fraud attempts in the region now involve AI on the attacker side (Feedzai). Deepfake attempts in Brazil are up 700% (Sumsub). Identity theft in Colombia is up 400% since 2020. The fraudsters are running generative models to produce synthetic documents, synthetic faces, synthetic voices, and synthetic behavioral patterns that can fool rule-based checks.

There's no opting out of this fight. A rule-based stack from 2019 is not going to hold the line against an AI-assisted fraud ring in 2026. It's not about team size or effort — it's that the attackers' tooling has moved and the defenders' tooling has to move with it.

What "good" looks like in deployment#

Based on what we see across our 54 live customers in Brazil, Mexico, Argentina and Colombia, here's what healthy fraud prevention looks like in numbers:

  • Fraud catch rate above 90% on the first 30 days. If you're under 70%, something in the stack is missing a pattern.
  • Onboarding drop-off under 10%. If catching fraud costs you more than 10% of your legitimate funnel, you've added friction on top instead of precision underneath.
  • False positive review queue stable or shrinking as the model learns the client's traffic. Month 2 should require less manual review than month 1, not more.
  • SAR cycle time within regulatory SLA, every jurisdiction you operate in. In Brazil that's the COAF window. In Mexico, the UIF window. Argentina and Colombia have their own. Missing any of these is a finable event independent of how well the fraud layer is working.
  • Behavioral signals integrated end-to-end — onboarding, session, transaction. Not three separate tools with three separate dashboards. One continuous signal.
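The five targets above lend themselves to a simple automated check. The field names and thresholds below just restate the list; the SAR SLA in particular has to be set per jurisdiction (COAF, UIF, and so on), so it's passed in rather than hardcoded.

```python
def deployment_health(metrics):
    """Return the targets a deployment is currently missing.
    Thresholds mirror the five targets above; illustrative only."""
    checks = {
        "fraud catch rate >= 90%":   metrics["catch_rate"] >= 0.90,
        "onboarding drop-off < 10%": metrics["dropoff"] < 0.10,
        "review queue not growing":  metrics["review_queue_trend"] <= 0,
        "SAR cycle within SLA":      metrics["sar_cycle_days"]
                                     <= metrics["sar_sla_days"],
        "behavioral signals end-to-end":
            metrics["behavioral_coverage"]
            == {"onboarding", "session", "transaction"},
    }
    return [name for name, ok in checks.items() if not ok]

month_two = {
    "catch_rate": 0.93, "dropoff": 0.08, "review_queue_trend": -0.15,
    "sar_cycle_days": 12, "sar_sla_days": 30,
    "behavioral_coverage": {"onboarding", "session", "transaction"},
}
print(deployment_health(month_two))  # [] -> all five targets met
```

An empty list is the steady state; anything else names exactly which target the risk team should be looking at this week.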

When we've got those five moving in the right direction, the risk team stops firefighting and starts doing the work they actually want to do: investigating cases that matter, tuning the model, feeding the learning loop. That's the steady state. It's achievable. We see it in most deployments by day 90.

Compliance is projected to be 15-20% of fintech operational budget in 2026. If that spend is going into firefighting the same three patterns with the wrong tools, it's wasted. If it's going into infrastructure that handles all three at the behavioral and network layer, it compounds — your stack gets smarter, your team gets faster, and your unit economics start to work.

If you're reading this and you're about to launch in LatAm, or you're two months in and the fraud numbers are not where you hoped, reach out. We've watched a lot of first ninety days and we know what the early signals look like. The sooner you see the pattern, the cheaper it is to fix. And if you're still getting oriented on the broader stack, the welcome post has the rest of what we're publishing here.
