Acoru Blog · Classify Accounts. Trace Mules. Stop Scams.

Why Collaboration is Key in Fraud Detection

Written by Acoru | Dec 4, 2025 8:17:34 AM


If you’re trying to stop fraud but you can only see your own data, you’re fighting the battle half-blind. Fraudsters today open accounts across multiple institutions, move money through layers of mules, and vanish before anyone connects the dots.
Financial institutions know collaborative intelligence is vital. But many remain hesitant to share data across borders and with competitors. The rules on what can and can’t be shared are often murky, and the liabilities tied to disclosure make collaboration feel risky.

Yet as regulators begin to clarify those boundaries, and privacy-preserving technologies make secure data collaboration possible, the question is changing from “Can we collaborate?” to “Can we afford not to?” Here’s why collaboration is key in modern fraud detection. 

The Importance of Collaboration 

To fight mule account activity effectively, it is critical to understand that not all mules are created equal: some orchestrate the fraud, while others are manipulated or deceived. Distinguishing between them requires context that rarely sits within a single institution.

Fraud is now a global, borderless enterprise that starts and finishes faster than individual institutions can respond. Criminal networks no longer operate within a single jurisdiction or channel. Instead, they exploit the fragmented visibility between banks, fintechs, and payment providers. 

A scam that begins in one country can be routed through multiple institutions across borders in seconds. When each organisation only sees a sliver of that activity, the pattern might look benign. But viewed collectively, it tells the full story.

Modern fraud is also increasingly synthetic. Attackers use a blend of real and fabricated data to create identities that pass KYC checks and open accounts undetected. They link these synthetic profiles to webs of devices, IP addresses, and mule accounts, which makes rule-based monitoring far less effective. As a result, the industry’s prevention strategy can no longer rely solely on internal data or static controls.

Within privacy-preserving data- and intelligence-sharing arrangements, banks can enrich their fraud detection models with digital footprint intelligence. In practice, this means correlating external indicators such as device fingerprints, IP addresses, and email domains with their own internal transaction or account signals, without exposing raw customer data.

A collaborative approach connects what each institution sees in isolation into a multi-dimensional risk picture. This helps teams better distinguish genuine users from organized criminal activity far earlier in the process.

Recent experiments by Swift showed how powerful a collective view can be. In trials involving ten million artificial transactions, a collaborative model trained across multiple institutions was twice as effective in detecting fraud in real time compared with models trained on data from a single bank. 

Using privacy-enhancing technologies (PETs), participants securely shared artificial transaction data. These PETs are usually cryptographic tools that enable data to be analyzed or compared without revealing its underlying contents. Rather than exchanging raw transaction data, institutions share encrypted or synthetic representations of it, allowing patterns to be identified collectively while maintaining end-to-end privacy. 

In a second use case, the team combined PETs with federated learning, a machine-learning technique that trains algorithms locally within each institution, allowing models to learn from shared experience without customer data ever leaving the bank.

Regulators Are Pushing Collaboration Too

The regulatory tide is also shifting to recognize the power of collaboration in fighting fraud. In the UK, the Economic Crime and Corporate Transparency Act 2023 introduced new information-sharing measures that enable AML-regulated private-sector entities to share customer information with one another (or via intermediaries) for the purpose of detecting, preventing, or investigating economic crime.

Meanwhile, in the EU, the upcoming Anti-Money Laundering measures (effective July 2027) introduce a formal legal framework for firms to share financial crime intelligence through authorised ‘partnerships for information sharing’. By establishing consistent standards for cross-sector and cross-border data exchange and embedding data-protection safeguards, the regulation is expected to reduce the legal and operational uncertainty that has long restrained collaboration across institutions.

The Promise and the Pitfalls of Collaboration in Fraud and AML 

Promise

  • Increased speed in detecting fraud: Sharing signals and intelligence across institutions reduces the “blind spots” that fraudsters exploit — enabling faster detection of patterns that cross organisational boundaries.
  • Improved accuracy of detection: With a broader dataset (accounts, devices, IPs, email addresses, phone numbers, behaviour), multiple institutions can distinguish genuine users from organised fraud rings more reliably.
  • Cost reduction and efficiency gains: Fraud is expensive: one study found that for North American financial institutions, every $1 of fraud loss ends up costing approximately $4.41 when you include legal, processing, investigation and recovery costs.
  • Broader defence ecosystem: Collaboration builds a collective shield that no single institution can build alone. This means smaller players can participate, and networks of banks/fintechs raise the bar for fraudsters.
  • Pre-emptive defence rather than reactive: Shared intelligence allows earlier detection of emerging fraud typologies (e.g., synthetic identity, cross-border mule networks) before significant losses occur.


Pitfalls

  • Privacy and data-protection concerns: Sharing information, even anonymised or tokenised, might raise questions over what kind of data can legally be shared under frameworks like GDPR, and how to ensure customer rights are respected.
  • Commercial sensitivity and trust barriers: Institutions may be reluctant to share internal metrics, risk data, or exposure details with competitors or external parties, fearing loss of competitive advantage or reputational damage.
  • Technical integration and standardisation hurdles: Data formats, real-time pipelines, device/mobile/behaviour signals differ widely between institutions. This can make meaningful sharing complex and sometimes costly to implement.
  • Liability and governance ambiguity: When multiple parties share data or intelligence, there’s often uncertainty around who is responsible for what if the shared signal fails, or if erroneous blocking occurs.

Why Generic Privacy Platforms Fall Short

Even as regulators encourage information sharing, technical and operational barriers have kept large-scale collaboration rare. The industry’s first response has been to experiment with privacy-enhancing technologies (PETs) and generic privacy platforms designed to make data sharing safe. These tools show important progress because they prove that banks can collaborate without breaching confidentiality or regulatory boundaries.

Yet most of these platforms were never built for fraud detection. They were created to solve generic data-sharing challenges, for example, enabling research or marketing teams to analyze anonymized datasets, not to detect real-time financial crime. And while they protect privacy, they often fail to deliver actionable fraud intelligence or reflect how complex and fast-moving modern scams really are.

Fraud detection demands real-time, contextual, and highly specialized intelligence. Generic privacy platforms don’t understand the nuances of how fraud teams work — the evolving typologies, behavioral red flags, and real-world response requirements. 

Also, every bank structures and labels its data differently, from account hierarchies to transaction types and device identifiers. In fraud detection, those differences matter. A privacy platform might anonymize the data but still fail to align the meaning behind it.

Traditional “consortia” models often amount to little more than shared blacklists of flagged accounts or entities that quickly become outdated. Meanwhile, scammers adapt in real time, creating new synthetic identities and mule accounts that easily bypass these static defences. Because these systems depend so heavily on the skill and availability of each institution’s data scientists, collaboration remains fragmented and inconsistent.

The Acoru Approach to Collaboration

Generic privacy platforms protect data but rarely stop fraud. They anonymize and transport information safely, yet they don’t understand the context or velocity of criminal behaviour. Acoru closes that gap by combining privacy-preserving architecture with the fraud-detection expertise and real-time orchestration that those platforms lack.

Instead of pooling sensitive customer data into a shared database, Acoru’s Consortium Manager enables participating institutions to consult one another in real time: securely, selectively, and under each bank’s own policy controls. Each request stays privacy-first: no personally identifiable information (PII) is ever pooled or exposed.

Each response is explainable, signed, and auditable, so banks can act confidently, knowing they have verifiable intelligence that meets regulatory standards. Anonymized or tokenized account and behavioural signals are exchanged under strict policy enforcement, meaning collaboration doesn’t come at the expense of compliance.

Where generic privacy tools stop at anonymization, Acoru builds in fraud-specific intelligence:

  • Beyond destination reputation: members can share and correlate pre-fraud signals such as new-payee behaviour, transaction sequencing, or sudden asset liquidation.
  • Omnichannel enrichment: combine device and network intelligence with feedback from confirmed fraud cases and even AML lookups, for example, compliant cross-branch searches for sanctioned entities.
  • Configurable and lawful by design: each bank decides which fields to share; the consortium enforces purpose limitation, minimization, and residency rules automatically at request time.
  • No data-model headaches: Acoru’s schema abstraction layer normalizes each institution’s structure, avoiding the endless standardization projects that stall other privacy platforms.

Beyond data sharing, Acoru’s Consortium Manager also enables federated model training, so all participants’ fraud-detection models improve through shared learning.

Ready to harness the value of collaboration in fighting fraud?

Learn more about the Acoru Consortium Manager.