Collusion is the perfect crime for fraudsters.
It evades traditional detection systems, hiding across user roles and blending into normal platform activity. By the time you realize it’s happening, the damage is done.
To better understand how collusion is evolving and what platforms can do about it, we brought together three experts on the frontlines of fraud prevention.
Their main takeaway? You can’t catch collusion by looking at individuals. You have to zoom out and connect the signals—linking devices, locations, and behaviors.
Collusion isn’t new, but it’s evolving, and it has become the #1 fraud concern for gig economy platforms going into 2026.
Several trends are accelerating its spread.
Many platforms have separate systems for drivers/couriers, riders/consumers, and merchants.
And many fraud tools look at one user at a time: one courier account, one consumer complaint, one merchant dashboard. But collusion doesn’t stay in one lane; it crosses them.
Matheus noted that this was a challenge at iFood, where courier and merchant fraud teams were “handling the same collusion in isolation,” each seeing only a piece of the network.
That separation gives fraudsters room to operate. Each actor looks clean, and the connections between them stay invisible.
In the past, fraudsters worked more independently to come up with fraud methods.
But today, online platforms like Telegram, WhatsApp, and even TikTok spread fraud strategies in seconds.
They don’t know each other, but they work together.
— Aditya Ananda Uttama, Careem
Both Aditya and Matheus described multi-accounting as a core enabler of collusion.
It’s easier than ever for fraudsters to create and cycle through multiple accounts. Things like app cloners, virtual devices, and Fraud-as-a-Service lower the barrier to entry.
Someone who once used a few accounts for fraud can now use dozens.
When those accounts are running on tampered devices, the problem compounds.
Manipulated environments distort the very signals platforms rely on to prevent abuse—device IDs, location data, and behavioral insights.
Collusion is a range of coordinated behaviors, not a single tactic. And because each user looks legitimate in isolation, the patterns only appear when signals are linked across roles.
This pattern applies to both delivery and ride sharing.
It’s one of the most common entry points—simple, profitable, and easy to hide behind normal customer support flows.
A driver or courier marks a trip or order as completed while a cooperating rider or consumer (or controlled account) claims it never happened or never arrived as expected. The platform issues a refund or adjustment, and the two split the payout.
Case by case, nothing looks unusual. Refunds and fare adjustments happen, and delivery or trip confirmations are routine.
But when you link repeated interactions between the same driver/courier and a small set of riders/consumers—or accounts tied to multi-accounting and device manipulation—the pattern emerges.
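To make “linking repeated interactions” concrete, here’s a minimal Python sketch that counts refunded orders per courier-consumer pair and surfaces pairs that trade refunds unusually often. The record fields and threshold are illustrative, not a real schema, and a production check would also weigh device and location links.

```python
from collections import Counter

# Hypothetical order records; the field names are illustrative, not a real schema.
orders = [
    {"order_id": 1, "courier_id": "c_17", "consumer_id": "u_42", "refunded": True},
    {"order_id": 2, "courier_id": "c_17", "consumer_id": "u_42", "refunded": True},
    {"order_id": 3, "courier_id": "c_17", "consumer_id": "u_42", "refunded": True},
    {"order_id": 4, "courier_id": "c_09", "consumer_id": "u_77", "refunded": False},
]

# Count refunded orders per (courier, consumer) pair.
refund_pairs = Counter(
    (o["courier_id"], o["consumer_id"]) for o in orders if o["refunded"]
)

# Pairs that trade refunds far more often than the platform baseline deserve review.
REVIEW_THRESHOLD = 3  # illustrative; tune against real refund rates
for pair, count in refund_pairs.items():
    if count >= REVIEW_THRESHOLD:
        print("Review courier/consumer pair:", pair, "refunded orders:", count)
```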
This pattern appears on delivery platforms where merchants participate in fulfillment flows.
In this pattern, couriers and merchants coordinate to boost earnings or merchant metrics. Orders or pickups move faster than physically possible or repeatedly cycle between the same courier/driver and merchant.
Aditya has seen this appear in markets where courier and merchant ecosystems are tightly connected, and the behavior blends into normal workflows.
Operationally, everything looks fine. Orders or pickups are placed, accepted, and completed. But behind the scenes, the flow exploits gaps in fulfillment or incentive logic.
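One simple check that helps surface this pattern is a feasibility test on reported movement: if the implied speed between pickup and drop-off exceeds what’s physically possible, the completion was likely faked. A rough sketch, assuming hypothetical event records with coordinates and timestamps:

```python
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def implied_speed_kmh(pickup, dropoff):
    """Speed implied by the reported pickup and drop-off events (hypothetical record shape)."""
    dist = haversine_km(pickup["lat"], pickup["lon"], dropoff["lat"], dropoff["lon"])
    hours = (dropoff["ts"] - pickup["ts"]).total_seconds() / 3600
    return float("inf") if hours <= 0 else dist / hours

MAX_PLAUSIBLE_KMH = 80  # illustrative cap for urban courier travel

pickup = {"lat": -6.2000, "lon": 106.8167, "ts": datetime(2025, 1, 5, 12, 0)}
dropoff = {"lat": -6.1200, "lon": 106.9000, "ts": datetime(2025, 1, 5, 12, 2)}

if implied_speed_kmh(pickup, dropoff) > MAX_PLAUSIBLE_KMH:
    print("Flag: order completed faster than physically possible")
```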
This pattern appears primarily with grocery, convenience, and retail merchants.
The merchant uses a consumer account they control to order from their own store. After receiving the payout, they file a false complaint—expired, missing, or damaged item—and get a refund.
Because the merchant controls both sides, they keep the payout and the refund, and still hold the inventory.
The platform pays twice, while the merchant’s revenue stays intact. At scale, it’s a costly abuse loop.
This example came directly from Aditya’s experience, especially in markets with large numbers of micro-retailers.
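A sketch of one way to surface this loop, assuming the platform can link accounts to device fingerprints: flag refund claims where the complaining consumer account shares a device with the merchant it ordered from. All identifiers and field names below are hypothetical.

```python
# Hypothetical device-linkage data; identifiers and field names are illustrative.
merchant_devices = {"m_101": {"dev_a", "dev_b"}}
consumer_devices = {"u_555": {"dev_b", "dev_c"}}

refund_claims = [
    {"consumer_id": "u_555", "merchant_id": "m_101", "reason": "item expired"},
]

# Flag claims where the complaining consumer shares a device with the merchant.
for claim in refund_claims:
    shared = (consumer_devices.get(claim["consumer_id"], set())
              & merchant_devices.get(claim["merchant_id"], set()))
    if shared:
        print("Review claim:", claim, "shared devices:", shared)
```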
This is one of the most sophisticated patterns because it involves the entire order cycle. A merchant fakes an order, a courier “delivers” it, and a consumer confirms or disputes it strategically.
Triads are especially hard to detect without strong device and location intelligence, and graph-based analysis.
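As a rough illustration of what graph-based analysis means here, the sketch below builds a small interaction graph with networkx and looks for triangles that touch all three roles. The node IDs, edges, and role labels are invented for the example; real systems enumerate triangles far more efficiently than this brute-force loop.

```python
from itertools import combinations

import networkx as nx

# Hypothetical interaction graph: nodes are typed accounts, edges are orders,
# deliveries, or disputes between them. IDs and edges are illustrative.
G = nx.Graph()
G.add_node("m_7", role="merchant")
G.add_node("c_3", role="courier")
G.add_node("u_9", role="consumer")
G.add_edges_from([("m_7", "c_3"), ("c_3", "u_9"), ("u_9", "m_7")])

# A candidate triad is a triangle that touches all three roles.
for a, b, c in combinations(G.nodes, 3):
    if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):
        roles = {G.nodes[n]["role"] for n in (a, b, c)}
        if roles == {"merchant", "courier", "consumer"}:
            print("Candidate collusion triad:", (a, b, c))
```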
Some collusion happens entirely within the courier or driver network itself.
Couriers may share devices, rotate accounts, swap assignments, or coordinate cancellations to trigger incentives and compensation.
Because couriers often work in proximity and follow similar routines, this collusion is hard to spot without strong device identity and integrity controls. It mimics normal variance until the device links reveal the overlap.
This is less common but highly damaging.
In these cases, the platform’s support agents or operational staff may reinstate banned accounts, override safeguards, or process fraudulent refunds in coordination with external actors.
Internal collusion amplifies external fraud, making fraud rings more resilient and harder to contain.
Traditional fraud controls on gig economy platforms are built to evaluate individual users or single transactions. They work well for one-off issues like account takeovers or stolen payment methods, but they fall short when multiple accounts coordinate across roles.
Rules engines, payment fraud tools, and even many machine learning models focus on whether a single action looks risky.
Collusion hides in the relationships between actions—who interacts with whom, how often, under what conditions.
Each role’s behavior can look completely normal until you zoom out and map the cross-role interactions.
Many gig economy platforms operate courier, consumer, and merchant systems separately, with different teams and data pipelines.
A courier may look clean in courier data. A consumer may look clean in consumer data. The fraud only appears when those profiles interact.
iFood experienced this firsthand.
Their courier fraud team and merchant fraud team were each investigating what looked like separate issues — but they were actually looking at different parts of the same collusion network. Because the signals weren’t connected, the broader pattern stayed hidden.
We were handling collusion cases in isolation. My team was dealing with collusion on the courier side, while the merchant fraud prevention team was tackling the same issues with different strategies. We were solving the symptoms, but we weren’t solving the root cause.
— Matheus Vieira Costa, iFood
Without shared signals, teams often end up resolving incidents one at a time while the broader network continues operating untouched.
Even when platforms use device-based risk signals, fraudsters can spoof them with tools like app cloners and emulators.
If the device environment isn’t trustworthy, the identity and location signals coming from it aren’t trustworthy either.
Without reliable device signals, platforms may mistake coordinated behavior for unrelated activity.
Collusion is difficult to spot, but it’s not impossible to stop.
The most effective path forward is strengthening the signals that uncover relationships, not just isolated behaviors.
Collusion crosses roles, so detection should too.
Using the same device identity signals across user types lets platforms see when a fraudster uses one device to access accounts across multiple roles, or to create and cycle through multiple accounts.
That connection often exposes the collusion itself: unified device identity signals turn isolated behaviors into visible patterns.
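A minimal sketch of that idea, assuming hypothetical session events keyed by a unified device identifier: group accounts by device, then flag devices that span multiple roles or touch an unusual number of accounts.

```python
from collections import defaultdict

# Hypothetical session events keyed by a unified device identifier.
events = [
    {"device_id": "dev_42", "account_id": "courier_881", "role": "courier"},
    {"device_id": "dev_42", "account_id": "consumer_204", "role": "consumer"},
    {"device_id": "dev_42", "account_id": "consumer_377", "role": "consumer"},
    {"device_id": "dev_77", "account_id": "courier_112", "role": "courier"},
]

accounts_by_device = defaultdict(set)
for e in events:
    accounts_by_device[e["device_id"]].add((e["role"], e["account_id"]))

# Flag devices that span multiple roles or touch an unusual number of accounts.
for device, accounts in accounts_by_device.items():
    roles = {role for role, _ in accounts}
    if len(roles) > 1 or len(accounts) > 3:  # thresholds are illustrative
        print("Review device", device, "accounts:", sorted(accounts), "roles:", sorted(roles))
```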
Device intelligence is only useful if the device environment is trustworthy.
Without tamper detection, fraudsters can spoof attributes and hide connections between activity on different accounts.
When you can trust the environment, you can trust its signals—crucial for catching multi-accounting and device-based coordination.
Location is one of the strongest signals for detecting coordinated activity.
High-precision location signals make it possible to see when supposedly unrelated accounts repeatedly show up in the same places at the same times, and when reported pickups and drop-offs don’t match physically plausible movement.
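As a rough sketch of co-presence detection, the snippet below buckets hypothetical GPS pings into coarse space-time cells and reports cells where more than one account appears. The cell sizes are illustrative, and a real system would handle grid-boundary effects and location precision far more carefully.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical GPS pings: (account_id, lat, lon, timestamp).
pings = [
    ("courier_881", -6.2001, 106.8168, datetime(2025, 1, 5, 12, 1)),
    ("consumer_204", -6.2002, 106.8169, datetime(2025, 1, 5, 12, 4)),
    ("courier_112", -6.3000, 106.9000, datetime(2025, 1, 5, 12, 5)),
]

def cell(lat, lon, ts):
    """Coarse space-time bucket: ~100 m grid cell plus a 10-minute window."""
    bucket = ts.replace(minute=ts.minute // 10 * 10, second=0, microsecond=0)
    return (round(lat, 3), round(lon, 3), bucket)

copresent = defaultdict(set)
for account, lat, lon, ts in pings:
    copresent[cell(lat, lon, ts)].add(account)

# Cells where more than one account appears suggest coordinated activity.
for key, accounts in copresent.items():
    if len(accounts) > 1:
        print("Co-present accounts", sorted(accounts), "in cell", key)
```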
Aditya specifically described the value of mapping relationships when trying to uncover collusion clusters.
Graph modeling connects devices, accounts, merchants, locations, and behaviors in one view. This can make coordinated patterns—triads, courier-consumer pairs, merchant loops—stand out more clearly.
Graph analysis helps teams target the central nodes driving the network instead of simply chasing single incidents.
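Here’s a small sketch of that approach using networkx: connected components group linked accounts and devices into clusters, and degree centrality points at the node holding each cluster together. The graph below is hypothetical; in practice the edges would come from the device, location, and interaction signals described above.

```python
import networkx as nx

# Hypothetical cross-role interaction graph; in practice the edges would carry
# order counts, refunds, shared devices, and other relationship evidence.
G = nx.Graph()
G.add_edges_from([
    ("dev_42", "courier_881"), ("dev_42", "consumer_204"), ("dev_42", "courier_650"),
    ("consumer_204", "merchant_101"), ("courier_881", "merchant_101"),
    ("courier_112", "consumer_990"),
])

# Connected components group accounts and devices that are linked, directly or indirectly.
for component in nx.connected_components(G):
    if len(component) >= 4:  # illustrative size cutoff for ring-like clusters
        sub = G.subgraph(component)
        centrality = nx.degree_centrality(sub)
        central = max(centrality, key=centrality.get)
        print("Cluster:", sorted(component), "| most central node:", central)
```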
Collusion thrives on fast payouts and low friction. Slowing or holding payments for high-risk accounts disrupts the economics.
Targeted payout holds, extra verification, or temporary friction can hinder or collapse networks built on fast transaction flow.
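As a minimal illustration of risk-tiered payout friction, here is a tiny rule sketch; the thresholds, tiers, and hold durations are invented for the example, not recommendations.

```python
from datetime import timedelta

def payout_delay(risk_score: float, flagged_links: int) -> timedelta:
    """Return how long to hold an account's payout; tiers are illustrative."""
    if risk_score >= 0.9 or flagged_links >= 3:
        return timedelta(days=7)   # hold for manual review before any payout
    if risk_score >= 0.6 or flagged_links >= 1:
        return timedelta(days=2)   # enough time to claw back confirmed fraud
    return timedelta(hours=0)      # normal fast payout for low-risk accounts

print(payout_delay(risk_score=0.75, flagged_links=1))  # 2 days, 0:00:00
```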
Platforms will be best positioned to fight collusion when fraud, ops, support, identity, payments, and product teams share signals and work from a common dataset.
Collusion is designed to exploit silos. The more aligned your internal teams are, the easier it becomes to dismantle coordinated networks instead of chasing individual cases.
Collusion isn’t new, but it’s becoming a top priority for fraud teams at gig economy platforms.
The good news: once platforms strengthen the signals that expose the connections between accounts and activity—unified device identity signals, device integrity checks, precise location data, and graph analysis—collusion becomes visible.
When those technical capabilities are paired with cross-team collaboration, fraud teams can move from reacting to isolated cases to dismantling entire collusion rings.
The platforms investing in these capabilities today are already uncovering patterns they couldn’t see before—and fighting back more effectively against coordinated abuse.
As collusion continues to evolve, that combination of strong signals and connected teams will make all the difference.