Social Media Platforms Face Conditional Scam Liability Under the EU's PSD3 and PSR
Acoru · Mar 25, 2026
Social media has woven itself so tightly into everyday life that it’s easy to forget how much power it now holds. It’s where people maintain relationships, discover opportunities, seek advice, and increasingly, make decisions that carry real financial consequences. Unsurprisingly, this means social media is also a major starting point for online scams.
For fraud teams, this creates a familiar tension. Many scams originate on social platforms, but the financial loss often crystallises when a payment is executed through a bank or PSP. Until now, responsibility has largely fallen on financial institutions.
This is why the EU’s recent provisional agreement on PSD3 and PSR introduces conditional liability for online platforms in specific fraud scenarios, particularly where reported scam content is not acted upon. Read on to unpack what the legislative changes involve, and what they could mean in practice for fraud managers overseeing APP risk, mule activity, and regulatory exposure.
At a high level, the proposed EU rules introduce something genuinely new into the fraud equation: conditional liability for social media platforms. Where a platform fails to act on reported scam content and allows it to persist, it may be liable to compensate financial institutions for the resulting losses. The change comes as part of the payment services deal that will bring the Third Payment Services Directive (PSD3) and the Payment Services Regulation (PSR) into force. No implementation date has been fixed yet, though.
These broader changes in EU payment regulation emphasise greater consumer protection, transparency, and reimbursement obligations. Banks are expected to intervene earlier, absorb more liability, and prevent harm rather than simply resolve disputes after the fact.
The notable development is that platforms like TikTok, Instagram, Facebook, LinkedIn, and X are now being pulled into that same accountability loop. For fraud managers, the significance lies in how regulators are formalising the idea that fraud responsibility doesn't begin at the point of payment.
This has practical implications. Disputes may no longer focus solely on whether a transaction was authorised, but on whether reported scam activity was allowed to persist on social media and whether reasonable steps were taken across the ecosystem to disrupt it.
Industry representatives argue that the law sets a “dangerous precedent” by shifting responsibility away from those best positioned to prevent fraud. Platforms owned by Meta, X, and others, which host vast volumes of user-generated content, maintain that banks should continue to bear responsibility for fraud.
But these platforms are not neutral or powerless bystanders. They design recommendation systems, monetise reach, and control takedown processes. As accountability expands, the question becomes whether the wider fraud journey showed signals that should have triggered earlier action.
It would be short-sighted to view this legislation as a niche intervention aimed solely at the social media platforms owned by Big Tech. What it really reflects is a broader shift in how early companies are expected to act when combating fraud.
Regulators are no longer satisfied with post-event explanations about where fraud technically occurred. The question increasingly being asked is whether warning signs existed earlier, and whether those signals were ignored, fragmented, or simply invisible across organisational boundaries.
In practice, this shift reflects a growing expectation that organisations can demonstrate the following (a short sketch after this list illustrates the idea):
Earlier awareness of risk, not just post-transaction detection
Clear reasoning for intervention, rather than opaque automated decisions
Proportionate responses, avoiding unnecessary friction for legitimate users
Cross-channel visibility, where fraud signals rarely stay confined to one system
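To make those expectations concrete, here is a minimal Python sketch of what an auditable, proportionate decision record could look like. Everything in it is hypothetical: the signal names, weights, and thresholds are invented for illustration and are not any regulator's or vendor's actual scheme.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical signal names and weights, for illustration only.
SIGNAL_WEIGHTS = {
    "reported_scam_counterparty": 40,  # payee flagged in an upstream scam report
    "new_payee_high_value": 25,        # first payment to this payee, above a threshold
    "rapid_in_out_flow": 20,           # funds forwarded soon after arrival (mule pattern)
    "unrecognised_device": 10,         # session from a device not seen before
}

@dataclass
class Decision:
    account_id: str
    score: int
    action: str                        # "allow" | "warn" | "step_up" | "hold"
    reasons: list[str] = field(default_factory=list)
    decided_at: str = ""

def assess(account_id: str, signals: set[str]) -> Decision:
    """Score pre-transaction signals and choose a proportionate response.
    Reason codes keep the decision explainable for reviewers and regulators."""
    reasons = sorted(s for s in signals if s in SIGNAL_WEIGHTS)
    score = sum(SIGNAL_WEIGHTS[s] for s in reasons)
    if score >= 60:
        action = "hold"      # strongest friction, reserved for strong evidence
    elif score >= 40:
        action = "step_up"   # e.g. extra authentication or payee confirmation
    elif score >= 20:
        action = "warn"      # low-friction scam warning shown to the customer
    else:
        action = "allow"
    return Decision(account_id, score, action, reasons,
                    datetime.now(timezone.utc).isoformat())

print(assess("acct-123", {"reported_scam_counterparty", "new_payee_high_value"}))
```

The point of the reason codes is defensibility: a reviewer, customer, or regulator can see exactly which signals drove an intervention, and the tiered actions keep friction proportionate to the evidence.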
Social platforms are being pulled into scope precisely because they sit upstream in many fraud journeys. For banks and payment service providers, this should feel familiar. Similar logic already underpins APP reimbursement rules and emerging payment regulation: if an institution could reasonably have detected risk before funds moved, liability follows.
What we’re seeing here is that fraud prevention is a shared, cross-ecosystem responsibility. This is where the conversation shifts to pre-fraud signal intelligence. Whether it’s a social media company receiving a report about a marketplace scam, or a bank monitoring accounts to predict future fraud, the common requirement is the ability to recognise risk before harm occurs.
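As a hedged illustration of that pre-fraud idea, the sketch below joins a hypothetical upstream scam report to pending bank-side payments by payee identifier, surfacing risk before any funds move. All field names and data shapes are invented for the example; real report-sharing between platforms and PSPs would involve far more structure and governance.

```python
from collections import defaultdict

# Invented data shapes; field names are illustrative, not a real schema.
platform_reports = [
    {"handle": "@quick-returns-fx", "payee_iban": "DE89370400440532013000",
     "category": "investment_scam", "reported_at": "2026-03-20"},
]

pending_payments = [
    {"account_id": "acct-123", "payee_iban": "DE89370400440532013000", "amount": 950.0},
    {"account_id": "acct-456", "payee_iban": "NL91ABNA0417164300", "amount": 40.0},
]

def pre_fraud_alerts(reports, payments):
    """Flag pending payments whose payee already appears in an upstream
    scam report, so a bank can intervene before funds move."""
    reports_by_payee = defaultdict(list)
    for report in reports:
        reports_by_payee[report["payee_iban"]].append(report)
    return [
        {"account_id": p["account_id"], "amount": p["amount"],
         "matched_reports": reports_by_payee[p["payee_iban"]]}
        for p in payments if p["payee_iban"] in reports_by_payee
    ]

for alert in pre_fraud_alerts(platform_reports, pending_payments):
    print(alert)  # acct-123's payment matches the reported payee
```

However crude, the join captures the core requirement: a signal raised on one channel (a platform report) becomes actionable on another (a pending payment) before harm occurs.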
The move to impose liability on social media platforms didn't come out of nowhere. It reflects a simple reality: scams are no longer a peripheral misuse of social platforms; they are everywhere. Data from the US Federal Trade Commission (FTC) found that one in four people who reported losing money to fraud said the scam started on social media.
For fraudsters, social media offers a uniquely efficient environment. It combines reach, targeting, and trust in a way no other channel can. Messages arrive in familiar interfaces from accounts that appear genuine and contextual, and ads surface that seem algorithmically relevant. The result is a steady stream of fraud that feels personal rather than criminal.
In practice, the most visible scams on social platforms tend to come in a handful of different forms.
Impersonation: Fake profiles/pages posing as banks, customer support, employers, or trusted public figures.
Investment and crypto scams: Sponsored posts, direct messages, or group invitations promising insider access or time-limited opportunities.
Mule recruitment: “Side-hustle” or payment-task offers that persuade users to move funds on someone else’s behalf.
It is scams like these, previously reported yet allowed to persist on social media platforms, that the EU's new regulatory changes aim to address.
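These typologies are stable enough that even a toy rules-based classifier can separate them, which is part of why "reported yet allowed to persist" is hard to defend. The sketch below is illustrative only: the keyword patterns are invented, and a production system would rely on far richer features and models.

```python
import re

# Toy keyword rules for the three typologies above; patterns are invented.
TYPOLOGY_RULES = {
    "impersonation": re.compile(
        r"official support|verify your account|account suspended", re.I),
    "investment_scam": re.compile(
        r"guaranteed returns?|insider access|limited[- ]time opportunity", re.I),
    "mule_recruitment": re.compile(
        r"side[- ]hustle|payment task|receive and forward", re.I),
}

def classify(report_text: str) -> list[str]:
    """Return every typology whose pattern matches the reported content."""
    matches = [label for label, pattern in TYPOLOGY_RULES.items()
               if pattern.search(report_text)]
    return matches or ["unclassified"]

print(classify("DM offering a side hustle: receive and forward payments"))
print(classify("Sponsored post promising guaranteed returns with insider access"))
```

In practice, platforms and fraud teams layer models, network analysis, and human review on top of rules like these; the sketch simply shows how little is needed to triage an explicit report.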
Those who act pre-emptively by identifying patterns, correlating signals, and classifying risk early are better positioned to intervene proportionately and defensibly. Those who wait for confirmation, completion, or loss will increasingly find themselves on the wrong side of both regulators and business outcomes.
As fraud responsibility spreads across platforms and providers, opaque, last-minute decision-making becomes harder to defend. Visibility, context, and clear classification are becoming just as important as stopping the fraud itself.
Online payment scams develop across interactions, accounts, and behaviours, often long before a payment is initiated. Rather than focusing solely on transaction-level alerts, institutions need account-level intelligence. Acoru’s account monitoring solution helps your business ingest relevant fraud data, unify signals across disparate channels, and take action before money moves and someone gets scammed.