Typical pretexts cite “suspicious transfers” or “account verification,” then steer victims to share credentials or one-time passcodes, approve a login or payment, install “security tools” for remote access, or, most commonly, move funds to accounts that fraudsters control.
What distinguishes these scams is a stacked playbook that combines multi-channel social engineering (phone calls, SMS, websites) with technical tactics such as number and domain spoofing, reverse-proxy pages that relay MFA, session token capture, device-ID evasion, and rapid money-mule routing.
Bank impersonation scams can play out quite differently depending on what pretext the fraudster uses, but here are some general steps common to most of them:
Fraud groups assemble target lists from breached datasets, lead brokers, and recycling of older phishing logs, then enrich with phone numbers, email addresses, and bank brands. In parallel, they build the pretext and infrastructure: a short SMS script, a look-alike website (often with valid TLS and cloned UI), and local VoIP numbers that match the victim’s region.
Pretexts aren’t only “bank” pages. Parcel delivery, retailer refunds, investment portals, and tax notices are common on-ramps designed to harvest credentials, one-time codes, device info, and personal details that will stand up in later “verification” steps.
The opening touch is usually an SMS or email containing a link to a lure site. Texts often use sender-ID spoofing so messages appear in the existing thread titled with the bank’s name on the handset. The landing page requests logins, DOB/postcode, card fragments, or prompts for codes under the guise of “cancelling a suspicious payment.”
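Defenders can spot some of this infrastructure before the first lure lands. As a minimal sketch, a string-similarity check against a bank's legitimate domains can flag newly observed look-alikes; the brand domains and threshold below are illustrative assumptions, and real monitoring would also cover homoglyphs, alternative TLDs, and certificate-transparency feeds.

```python
import difflib

# Hypothetical list of a bank's legitimate domains (assumption for illustration).
LEGIT_DOMAINS = {"examplebank.com", "examplebank.co.uk"}

def lookalike_score(candidate: str, legit: set[str]) -> float:
    """Return the highest string similarity between a newly observed
    domain and any legitimate brand domain (1.0 = identical)."""
    return max(difflib.SequenceMatcher(None, candidate, d).ratio() for d in legit)

def is_suspicious(candidate: str, threshold: float = 0.8) -> bool:
    # Close-but-not-exact matches are the classic look-alike pattern:
    # the domain echoes the brand but is not one of its real domains.
    return candidate not in LEGIT_DOMAINS and lookalike_score(candidate, LEGIT_DOMAINS) >= threshold
```

A feed of fresh registrations run through a check like this is one of the earliest pre-fraud signals available, since the domain typically goes live days before the first SMS wave.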
A live caller phones from a number that appears to be the bank (cheap to do with VoIP caller-ID spoofing, especially when calls originate outside jurisdictions enforcing strict attestation). Using data already collected, the caller “verifies” the account, then steers the victim through actions the attacker needs: reading back a code, approving a login/push, enrolling a “secure device,” installing a “security tool,” or moving funds to a “safe account.”
With control of the session (and the narrative), attackers usually finish by executing authorised push payments. They get victims to add a new payee, raise limits, and send funds. Payments are split to stay below review thresholds and routed through staged mule accounts, then cashed out, forwarded across banks, or converted via crypto on-ramps.
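From the defender's side, the threshold-splitting described above is itself a detectable pattern. The sketch below flags payees that receive several sub-threshold payments whose combined value would have triggered review; the review threshold, time window, and data model are illustrative assumptions, not any bank's real policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative parameters (assumptions for this sketch).
REVIEW_THRESHOLD = 10_000.0   # single payments at or above this get manual review
WINDOW = timedelta(hours=24)  # look for splitting within this period

@dataclass
class Payment:
    payee: str
    amount: float
    timestamp: datetime

def flag_structuring(payments: list[Payment]) -> set[str]:
    """Flag payees that received multiple sub-threshold payments whose
    combined value inside WINDOW meets or exceeds the review threshold."""
    by_payee: dict[str, list[Payment]] = {}
    for p in payments:
        by_payee.setdefault(p.payee, []).append(p)

    flagged = set()
    for payee, ps in by_payee.items():
        ps.sort(key=lambda p: p.timestamp)
        for i, first in enumerate(ps):
            # All payments to this payee within WINDOW of the i-th one.
            window = [p for p in ps[i:] if p.timestamp - first.timestamp <= WINDOW]
            if (len(window) > 1
                    and all(p.amount < REVIEW_THRESHOLD for p in window)
                    and sum(p.amount for p in window) >= REVIEW_THRESHOLD):
                flagged.add(payee)
    return flagged
```

In practice this heuristic would be one feature among many (new-payee age, device change, session risk), but it captures why splitting below review limits is a signal rather than camouflage.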
| # | Category | Score (/10) | Key Insights |
|---|----------|-------------|--------------|
| 1 | Initial Investment (Scammer Setup Cost) | | Bank-impersonation runs on readily available parts: leaked data, cheap domains with passable branding, SMS routes, and VoIP numbers. Kits for fake banking pages and scripted lures are widely sold and reused, so setup is mostly integration and rehearsal, not R&D. Costs rise when you add highly accurate voice cloning, better hosting hygiene, or mule onboarding. Still, it’s a mid-cost campaign to run, not something requiring high investment. |
| 2 | Exposure Risk (Likelihood of Getting Caught) | | Phone and SMS leave traces like caller records, message routes, takedown trails, and contact-center recordings. Money mules add another point of failure if they’re identified. That said, VoIP churn, cross-border routing, and quick cash-out keep personal exposure manageable. |
| 3 | Success Rate (Likelihood of Scamming a Victim) | Moderate · 6/10 | The scam hinges on timing and persuasion: catching customers during real activity, sounding credible, and driving a few critical steps (new payee, limit change, high-value transfer). Education, name-check warnings, and in-app approvals blunt many attempts, yet multi-stage pretexts still convert often enough, especially off-hours, with first-time payees, or when a live caller manages the script. |
| 4 | Return on Investment (ROI) | Moderate to High · 9/10 | When a bank impersonation scam lands, payout is strong. Authorised push payments clear quickly, and reversals are hard. Infrastructure is reusable across brands and regions, and harvested data fuels further fraud long after the first hit. With instant rails, funds move before scrutiny catches up. |
For financial institutions, the response must be to treat these scams as a kill chain and move detection upstream to the pre-fraud stages. That means watching for pre-fraud signals: indicators that something nefarious is underway. These include a fresh look-alike domain tied to the brand, reverse-proxy fingerprints and OTP-relay behavior in the web session, customer credential exposure, confirmation-of-payee mismatches, and sudden contact-detail changes.
None of these on their own proves fraud. Together, and in correlation, they describe an attack forming. Financial institutions ultimately need pre-fraud signal intelligence to be able to take action early enough to interrupt bank impersonation scams before money moves.
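As a toy illustration of that correlation idea, individually weak signals can be combined into a single actionable score. The signal names and weights below are hypothetical; a production system would calibrate them against labelled fraud cases rather than hand-pick them.

```python
# Hypothetical weights per pre-fraud signal (assumptions for this sketch).
# Names mirror the signals discussed above.
SIGNAL_WEIGHTS = {
    "lookalike_domain_registered": 0.25,
    "reverse_proxy_fingerprint": 0.30,
    "credential_exposure": 0.20,
    "cop_mismatch": 0.15,          # confirmation-of-payee mismatch
    "contact_detail_change": 0.10,
}

def correlate(signals: set[str], intervene_at: float = 0.5) -> tuple[float, bool]:
    """Combine observed pre-fraud signals into one score.
    No single signal crosses the intervention threshold on its own,
    but a correlated cluster does."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    return round(score, 2), score >= intervene_at
```

A reverse-proxy fingerprint alone scores 0.30 and triggers nothing; the same fingerprint seen together with exposed credentials and a confirmation-of-payee mismatch crosses the threshold, which is exactly the "attack forming" picture described above.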
Moreover, checks like caller ID, SMS OTPs, and knowledge-based prompts no longer signal safety when fraudsters can so easily control both the conversation and its context. The anchor of trust may need to shift away from customers’ phone networks and toward bank-controlled channels.
Bank impersonation scams are a global issue. Here are several recent examples from across Europe.
In March 2025, elEconomista reported the arrest of a 24-year-old who defrauded victims of more than €200,000 by posing as a bank security employee. The report describes how the fraudster used previously gathered information about the victims to make the later stages of the scam more convincing. Those later stages involved phone calls and text messages in which he posed as a bank employee and requested that victims take actions that transferred money to accounts under his control.
A South London man received a jail sentence of five years after defrauding bank customers of £988,719 (over €1 million) through bank impersonation. In a classic case of the multi-channel nature of these scams, the perpetrator first called victims, pretending to be investigating fraudulent activity. Then, victims would get directed to a fake website, where they’d unknowingly disclose their important account details. The fraudster then used those details to transfer funds to mule accounts under his control.
In 2025, a French court ordered BNP Paribas to reimburse a customer who had previously fallen victim to a bank impersonation scam. This particular scam exploited the trust relationship between the customer and his bank advisor at BNP Paribas. The fraudster was able to spoof his telephone number to appear as the customer’s usual bank advisor. The fake advisor directed the customer to transfer funds totalling €54,000.
In July 2025, the Bavarian Police in Germany issued a press release that described bank impersonation scams conning customers out of €100,000. The tactics rested on fraudulent phone calls posing as bank employees. In an interesting exploitation of real time payments using push-based approval, the fraudsters told victims they were hacked and that to confirm the return of funds to their accounts, they needed to approve push notifications on their banking apps. In fact, these push notifications ended up approving real-time transfers to the fraudsters’ accounts.
In a demonstration of the scale of these scams, Dutch police arrested eight people suspected of running a bank impersonation operation in September 2025. The eight posed as bank helpdesk employees: typically, they’d spoof their phone numbers to appear as the customers’ banks, inform customers of an account hack, and advise transferring funds to supposedly safe accounts that were in fact under the fraudsters’ control. Up to 150 people lost a total of €1.6 million to the gang.
Generative AI will keep tilting the field in fraudsters’ favour, supplying natural-language scripts in any dialect, cloned voices that mimic local branch staff, and cheap automation that coordinates SMS, calls, and web lures at scale.
The operating model needs to shift from “prove the loss after” to “see the setup before.” That means continuous account classification across the lifecycle: onboarding, device changes, contact-detail edits, beneficiary creation, and outbound transfers. The institutions that best prevent these scams will pair that with a pre-fraud signal intelligence fabric: omnichannel intelligence that connects the dots across every channel.
Want to learn more about how Acoru does this? Request a demo here.