Financial institutions know collaborative intelligence is vital. But many remain hesitant to share data across borders and with competitors. The rules on what can and can’t be shared are often murky, and the liabilities tied to disclosure make collaboration feel risky.
Yet as regulators begin to clarify those boundaries, and privacy-preserving technologies make secure data collaboration possible, the question is changing from “Can we collaborate?” to “Can we afford not to?” Here’s why collaboration is key in modern fraud detection.
To fight mule account activity effectively, it is critical to understand that not all mules are created equal: some orchestrate the fraud, while others are manipulated or deceived into taking part.
Fraud is now a global, borderless enterprise that starts and finishes faster than individual institutions can respond. Criminal networks no longer operate within a single jurisdiction or channel. Instead, they exploit the fragmented visibility between banks, fintechs, and payment providers.
A scam that begins in one country can be routed through multiple institutions across borders in seconds. When each organisation only sees a sliver of that activity, the pattern might look benign. But viewed collectively, it tells the full story.
Modern fraud is also increasingly synthetic. Attackers use a blend of real and fabricated data to create identities that pass KYC checks and open accounts undetected. They link these synthetic profiles to webs of devices, IP addresses, and mule accounts, which makes rule-based monitoring far less effective. As a result, the industry’s prevention strategy can no longer rely solely on internal data or static controls.
Within privacy-preserving data- and intelligence-sharing arrangements, banks can enrich their fraud detection models with digital footprint intelligence. In practice, this means correlating external indicators such as device fingerprints, IP addresses, and email domains with their own internal transaction or account signals, without exposing raw customer data.
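As a minimal sketch of how such enrichment might look, the snippet below tokenizes external indicators with a one-way hash before matching them against intelligence contributed by peers. The shared salt, the indicator values, and the scoring weights are all hypothetical; a real deployment would use a proper privacy-preserving scheme rather than a single shared salt.

```python
import hashlib

# Hypothetical consortium-wide salt agreed out of band; illustrative only.
SHARED_SALT = b"consortium-demo-salt"

def tokenize(indicator: str) -> str:
    """One-way token for an external indicator (device ID, IP, email domain)."""
    return hashlib.sha256(SHARED_SALT + indicator.encode()).hexdigest()

# Illustrative set of tokens flagged by other institutions.
shared_risk_tokens = {tokenize("fp-9f2c"), tokenize("mule-domain.example")}

def enrich_score(base_score: float, indicators: list[str]) -> float:
    """Raise the internal risk score when tokenized indicators match
    consortium intelligence; only tokens, never raw values, are compared."""
    hits = sum(1 for i in indicators if tokenize(i) in shared_risk_tokens)
    return min(1.0, base_score + 0.3 * hits)

print(enrich_score(0.2, ["fp-9f2c", "203.0.113.7"]))  # one match raises the score
```

The point of the sketch is the shape of the flow: raw identifiers never leave the institution, yet a match against externally contributed tokens still lifts the internal risk signal.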
A collaborative approach connects what each institution sees in isolation into a multi-dimensional risk picture. This helps teams better distinguish genuine users from organized criminal activity far earlier in the process.
Recent experiments by Swift showed how powerful a collective view can be. In trials involving ten million artificial transactions, a collaborative model trained across multiple institutions was twice as effective in detecting fraud in real time compared with models trained on data from a single bank.
Using privacy-enhancing technologies (PETs), participants securely shared artificial transaction data. These PETs are usually cryptographic tools that enable data to be analyzed or compared without revealing its underlying contents. Rather than exchanging raw transaction data, institutions share encrypted or synthetic representations of it, allowing patterns to be identified collectively while maintaining end-to-end privacy.
In a second use case, the team combined PETs with federated learning. This is an AI technique that trains algorithms locally within each institution, which allows models to learn from shared experience without ever exposing customer data.
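A toy federated-averaging sketch illustrates the mechanic: each institution fits a model on its own data, and only the model weights, never the underlying transactions, are averaged into a global model. The synthetic per-bank datasets and learning settings are invented for illustration.

```python
# Minimal federated-averaging sketch: local training plus weight averaging.

def local_train(weights, data, lr=0.05, epochs=50):
    """One institution's local gradient-descent pass for y = w*x + b."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def federated_average(models):
    """Server-side step: average the locally trained weights."""
    n = len(models)
    return (sum(w for w, _ in models) / n, sum(b for _, b in models) / n)

# Synthetic per-bank datasets drawn from the same underlying pattern y = 2x + 1.
bank_data = [[(x, 2 * x + 1) for x in range(i, i + 5)] for i in range(3)]

global_model = (0.0, 0.0)
for _ in range(5):  # a few federation rounds
    local_models = [local_train(global_model, d) for d in bank_data]
    global_model = federated_average(local_models)
print(global_model)  # converges near (2, 1) without pooling any raw data
```

Production federated learning adds secure aggregation so that even individual weight updates are hidden from the coordinator, but the core property is the same: models learn from shared experience while raw data stays local.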
The regulatory tide is also shifting to recognize the power of collaboration in fighting fraud. In the UK, the Economic Crime and Corporate Transparency Act 2023 introduced new information-sharing measures that enable AML-regulated private-sector entities to share customer information with one another (or via intermediaries) for the purpose of detecting, preventing, or investigating economic crime.
Meanwhile, in the EU, the upcoming Anti-Money Laundering measures (effective July 2027) introduce a formal legal framework for firms to share financial crime intelligence through authorised ‘partnerships for information sharing’. By establishing consistent standards for cross-sector and cross-border data exchange and embedding data-protection safeguards, the regulation is expected to reduce the legal and operational uncertainty that has long restrained collaboration across institutions.
Promise and pitfalls
Even as regulators encourage information sharing, technical and operational barriers have kept large-scale collaboration rare. The industry’s first response has been to experiment with PETs and generic privacy platforms designed to make data sharing safe. These tools mark important progress: they prove that banks can collaborate without breaching confidentiality or regulatory boundaries.
Yet most of these platforms were never built for fraud detection. They were created to solve generic data-sharing challenges, for example, enabling research or marketing teams to analyze anonymized datasets, not to detect real-time financial crime. And while they protect privacy, they often fail to deliver actionable fraud intelligence or reflect how complex and fast-moving modern scams really are.
Fraud detection demands real-time, contextual, and highly specialized intelligence. Generic privacy platforms don’t understand the nuances of how fraud teams work — the evolving typologies, behavioral red flags, and real-world response requirements.
Also, every bank structures and labels its data differently, from account hierarchies to transaction types and device identifiers. In fraud detection, those differences matter. A privacy platform might anonymize the data but still fail to align the meaning behind it.
Traditional “consortia” models often amount to little more than shared blacklists of flagged accounts or entities that quickly become outdated. Meanwhile, scammers adapt in real time, creating new synthetic identities and mule accounts that easily bypass these static defences. Because these systems depend so heavily on the skill and availability of each institution’s data scientists, collaboration remains fragmented and inconsistent.
Generic privacy platforms protect data but rarely stop fraud. They anonymize and transport information safely, yet they don’t understand the context or velocity of criminal behaviour. Acoru closes that gap by combining privacy-preserving architecture with the fraud-detection expertise and real-time orchestration that those platforms lack.
Instead of pooling sensitive customer data into a shared database, Acoru’s Consortium Manager enables participating institutions to consult one another in real time: securely, selectively, and under each bank’s own policy controls. Each request stays privacy-first: no personally identifiable information (PII) is ever pooled or exposed.
Each response is explainable, signed, and auditable, so banks can act confidently, knowing they have verifiable intelligence that meets regulatory standards. Anonymized or tokenized account and behavioural signals are exchanged under strict policy enforcement, meaning collaboration doesn’t come at the expense of compliance.
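A hedged sketch of what a signed, auditable response could look like: the responding institution signs a tokenized verdict so the requester can verify integrity and retain an audit trail. The field names, the verdict values, and the HMAC signing scheme are illustrative assumptions, not Acoru’s actual protocol.

```python
import hashlib
import hmac
import json

# Illustrative signing key held by the responding institution.
SIGNING_KEY = b"responder-demo-key"

def sign_response(payload: dict) -> dict:
    """Attach an integrity signature over a canonical JSON serialization."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_response(message: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

response = sign_response({
    "query_token": "tok_3f9a",                 # tokenized account, never raw PII
    "verdict": "high_risk",
    "reason": "linked to known mule network",  # explainable outcome
})
print(verify_response(response))  # True for an untampered response
```

Any tampering with the payload breaks verification, which is what makes such responses usable as verifiable evidence in a compliance review.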
Where generic privacy tools stop at anonymization, Acoru builds in fraud-specific intelligence.
Beyond data sharing, Acoru’s Consortium Manager also enables federated model training, so all participants’ fraud-detection models improve through shared learning.
Ready to harness the value of collaboration in fighting fraud?
Learn more about the Acoru Consortium Manager.