Oana Ifrim
26 Feb 2026 / 5 Min Read
Oana Ifrim, Lead Editor at The Paypers, outlines the growth of industrialised, AI-powered fraud and its worldwide effects on consumers, businesses, and digital platforms.

Fraud is accelerating in scale, sophistication, and automation. From amateur phishing kits to fully managed Fraud-as-a-Service (FaaS) platforms, criminals are leveraging automation, AI, and social engineering to exploit both technical and behavioural vulnerabilities. Fraud losses continue to rise sharply, with Juniper Research projecting a 153% increase from USD 23 billion in 2025 to USD 58.3 billion by 2030. The Global State of Scams 2025 report – based on 46,000 adults across 42 markets – puts direct consumer losses at USD 442 billion. Fifty-seven percent of adults encountered a scam in the past year. Shopping scams accounted for 54% of cases, investment scams 48%, and unexpected money scams 48%. In lower-GDP regions across South America and Oceania, exposure rates reached 72–73%. Sixty-four percent of scams concluded within a day, often within minutes. Seventy-three percent of respondents believed they could identify scams, yet 23% still lost money.
Corporate losses mirror the consumer crisis. TransUnion’s H2 2025 Global Fraud Report found that firms lost an average of 7.7% of revenue to fraud, totalling USD 534 billion across surveyed companies. In the US, the average loss was 9.8% of revenue, equating to USD 114 billion, while online payment fraud globally is projected to cost merchants over USD 362 billion from 2023 to 2028. In 2024, the FBI recorded nearly 860,000 internet crime complaints with over USD 16 billion in losses, driven mainly by phishing, extortion, and investment fraud, disproportionately affecting older adults and certain US states.
Fraud in 2026 is industrialised. Scam operations function as coordinated networks, supported by subscription-based FaaS platforms. For as little as USD 50 per month, low-skilled actors access enterprise-grade phishing kits, mule networks, automation frameworks, and synthetic identity tools. Campaigns are optimised, outsourced, and scaled across borders. AI is embedded throughout this ecosystem: deepfake voice and video impersonation fuel investment, romance, and impersonation scams; AI-generated phishing mimics tone and context with high precision; automated systems scan dark web markets, test credentials at scale, and pivot to secondary targets; and synthetic identities are gradually built, sometimes over years, before activation. Autonomous malware adapts to evade detection, while voice cloning requires only minimal audio samples.
Fraud is increasingly intent-driven rather than purely technical. Authorised transactions now account for a growing share of losses, as victims themselves approve payments or disclose credentials. These interactions appear legitimate to institutions, underpinning surges in authorised push payment (APP) fraud, account takeovers (ATO), remote-access scams, and post-compromise credit abuse. Controls optimised for perimeter breaches often fail to detect intent-based manipulation until funds are moved.
Additional pressure points include rising first-party fraud, insider threats, crypto laundering, decentralised finance platforms, and evolved ransomware targeting both operational disruption and data exposure. While financial institutions remain primary targets, ecommerce and marketplaces face account compromise, payment abuse, and return fraud, with small and medium-sized businesses disproportionately vulnerable.
Fraud risk is not a proxy for intelligence. Behavioural and contextual factors dominate over income, education, or professional status. Impulsivity, optimism bias, heavy online activity, prior victimisation, and overconfidence increase exposure. Younger, digitally fluent, highly educated individuals are frequently overrepresented in scam exposure data, often engaging faster and questioning less.
APP scams, particularly in Europe, have overtaken traditional card fraud in value. Victims authorise transfers to fraudsters, often exceeding EUR 2,000 per incident. EU regulators now treat APP scams as both fraud and AML issues, driving real-time detection, suspicious activity reporting, and unified FRAML monitoring. Deloitte predicts APP fraud could cost US institutions USD 15–18 billion by 2028, fuelled by instant payments and advanced social engineering. Recovery is difficult once funds are transferred, and AI-generated communications exacerbate the threat.
Synthetic identity fraud is increasingly sophisticated, leveraging AI to create hyper-realistic ‘Frankenstein identities’ with fabricated jobs, credit histories, and social media activity. These identities bypass traditional KYC checks, leaving no immediate victim and forcing layered risk signals, cross-institutional data sharing, and forensic pattern recognition. In the US, synthetic fraud causes USD 30–35 billion in annual losses, with lenders losing USD 3.3 billion from H1 2025 new accounts alone. Alloy’s 2026 State of Fraud Report notes 8.3% of digital account creations were suspicious, with 44% of firms ranking synthetics as their top threat. Mitigation requires dynamic, continuous identity verification, behavioural biometrics, and real-time monitoring.
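The layered risk signals mentioned above can be illustrated with a minimal scoring sketch. The signal names, weights, and thresholds below are purely illustrative assumptions, not a production model or any vendor's actual scoring logic; real systems combine far richer data and machine-learned weights.

```python
# A minimal sketch of layered risk scoring for new-account screening.
# Signal names, weights, and thresholds are illustrative assumptions.
SIGNAL_WEIGHTS = {
    "thin_credit_file": 0.30,    # identity has little or no history
    "id_age_mismatch": 0.25,     # identifier inconsistent with stated age
    "shared_device": 0.20,       # device already linked to other applicants
    "no_social_footprint": 0.15, # no organic online presence
    "disposable_email": 0.10,    # throwaway email domain
}

def risk_score(signals):
    """Sum the weights of the signals that fired (0.0 to 1.0)."""
    return sum(SIGNAL_WEIGHTS[s] for s in signals if s in SIGNAL_WEIGHTS)

def decision(signals, review_at=0.3, decline_at=0.6):
    """Route an application based on its combined risk score."""
    score = risk_score(signals)
    if score >= decline_at:
        return "decline"
    return "manual_review" if score >= review_at else "approve"

print(decision([]))                                     # approve
print(decision(["thin_credit_file", "shared_device"]))  # manual_review (0.50)
print(decision(["thin_credit_file", "id_age_mismatch",
                "shared_device"]))                      # decline (0.75)
```

The point of layering is that no single signal is decisive: a thin credit file alone is common for legitimate young applicants, but combined with a shared device and an age-inconsistent identifier, the pattern resembles a 'Frankenstein identity'.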
Generative AI enables hyper-tailored resumes and deepfake candidates, increasing employment fraud risks in remote workforces. Employers may inadvertently onboard impostors, exposing sensitive systems to unauthorised access. Meanwhile, smart home devices (virtual assistants, locks, appliances, and emerging humanoid robots) expand attack surfaces, allowing data harvesting, surveillance, and account takeovers. Both vectors require enhanced identity verification, device monitoring, and access controls.
Long-term social engineering combined with fake crypto investment platforms is increasing. Scammers groom victims over weeks or months via social media and dating apps, often extracting six-figure sums. AI tools enable hyper-personalised messaging, making fraud harder to detect.
ATO fraud has evolved via credential stuffing, infostealer malware, and AI-driven session hijacking. Global digital ATO volume rose 21% from H1 2024 to H1 2025 and 141% since 2021. Behavioural biometrics tracking keystroke dynamics, swipe patterns, and device interactions are essential to shift from reactive to proactive defences.
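To make the behavioural-biometrics idea concrete, here is a minimal sketch of keystroke-dynamics anomaly scoring. Everything here (the z-score approach, the interval values, the threshold) is an illustrative assumption; production systems model many more features, such as swipe patterns and device interactions, with far more sophisticated statistics.

```python
from statistics import mean, stdev

def keystroke_baseline(sessions):
    """Build a per-user baseline from enrolled typing sessions.

    Each session is a list of inter-key intervals in milliseconds.
    Returns the mean and standard deviation of the user's rhythm.
    """
    intervals = [t for s in sessions for t in s]
    return mean(intervals), stdev(intervals)

def session_anomaly_score(baseline, session):
    """Average absolute z-score of a new session against the baseline."""
    mu, sigma = baseline
    return sum(abs(t - mu) / sigma for t in session) / len(session)

# Enrolled sessions for a legitimate user (illustrative values).
enrolled = [[110, 120, 115, 108], [118, 112, 109, 121]]
baseline = keystroke_baseline(enrolled)

# A scripted account-takeover attempt often types with machine-like cadence.
genuine = [112, 117, 110, 119]
scripted = [40, 41, 40, 42]

THRESHOLD = 3.0  # assumed cut-off; tuned per population in practice
print(session_anomaly_score(baseline, genuine) > THRESHOLD)   # False
print(session_anomaly_score(baseline, scripted) > THRESHOLD)  # True
```

This is why behavioural signals shift defence from reactive to proactive: the scripted session presents valid credentials, yet its cadence deviates sharply from the account holder's established rhythm, flagging the takeover before funds move.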
Scammers increasingly target emotions (vulnerability, empathy, urgency) rather than purely financial transactions. The European Payments Council (EPC) highlights methods including phishing, vishing, smishing, quishing, APP fraud, bank employee impersonation, safe account transfers, remote support schemes, family emergency fraud, recruitment scams, ghost tap attacks, and malware targeting mobile and banking systems. Emerging methods like IVR phishing, SEO poisoning, typosquatting, and NFC relay attacks bypass traditional controls, compounding financial and emotional impacts.
Fraudsters increasingly exploit verified customer accounts. First-party fraud includes fabricated disputes, inflated returns, or ghost payments on BNPL schemes, causing EUR 2.5 billion in losses in Europe alone. More than 90% of money mule transactions identified through the European Money Mule Actions are linked to cybercrime. Money mule networks recruit through TikTok, LinkedIn, and Telegram, laundering billions via unsuspecting participants. These schemes mimic legitimacy, blending synthetic identities and AI-generated alibis. Hyper-normalcy, rather than anomaly, is now the defining detection challenge.
Cybercrime has shifted from opportunistic attacks to industrialised, service-based operations. Advanced Persistent Threat (APT) actors – state-linked groups and organised criminals – conduct sustained campaigns, targeting critical infrastructure, IoT, operational technology, and financial institutions. AI accelerates the threat: personalised phishing, deepfake impersonation, autonomous malware, and self-propagating ransomware increase both frequency and severity.
Ransomware-as-a-Service (RaaS) platforms have matured, offering affiliate programs, revenue sharing, and technical support. Multi-stage extortion (encryption, data exfiltration, public disclosure threats, and pressure on suppliers or clients) is standard. Subscription models enable rapid scaling, while public shaming and leak sites amplify coercion. Treating ransomware solely as a backup issue underestimates its operational, reputational, and regulatory risks.
FaaS platforms replicate SaaS models, providing dashboards, phishing kits, mule recruitment, and AI-powered deepfake tools. Services bundle phishing, ransomware, and synthetic identity capabilities into subscription tiers, accessible to low-skilled actors. Affiliates execute campaigns globally, blurring attribution and complicating law enforcement. By 2026, AI-driven fraud exposure is expected to intensify across financial services, payments, ecommerce, and digital platforms.
Generative AI amplifies identity fraud, targeting video verification, customer support, and executive communications. Attackers fabricate identities that bypass static checks, compromising MFA via fatigue attacks, session hijacking, and phishing kits. Defence requires behavioural and contextual analysis beyond traditional credential-based verification.
Third-party and supply chain risk
Interconnected digital ecosystems create systemic risk. Supply chain attacks include software compromise, hardware tampering, and exploitation of outsourced services such as call centres or payment processors. Cloud concentration magnifies impact; a single breach in a shared provider can cascade across multiple institutions. Continuous vendor monitoring, contractual risk allocation, and real-time threat intelligence integration are now baseline expectations.
Financial crime is transnational by default. Instant payment rails, cross-border platforms, digital assets, and DeFi reduce friction for commerce—and illicit flows. Key patterns include trade-based money laundering (TBML), supply chain finance abuse, sanctions evasion via layered shell structures, and cryptocurrency-enabled laundering. Detection requires intelligence sharing and analytics linking fragmented signals across systems.
Static, perimeter-based security models are insufficient. Dynamic, distributed defences are required.
Regulation is tightening in response to systemic cyber risk. Breach reporting, resilience standards, zero-trust mandates, real-time monitoring, enhanced vendor oversight, and board-level accountability are becoming standard. Geopolitical tensions introduce state-sponsored cyber threats, blurring lines between crime and warfare. Organisations must adopt intelligence-led, risk-based frameworks integrating compliance, security, and governance.
Effective defence requires multi-layered architectures: behavioural biometrics, real-time transaction analytics, adaptive authentication, and AI-driven anomaly detection. Zero-trust principles must extend across internal networks, cloud environments, and third-party connections. Cybersecurity is a core operational capability tied to resilience, trust, and continuity. Institutions relying on reactive controls face compounding exposure; those embedding intelligence-led prevention will define the next standard of resilience.
This article is part of The Paypers' Money Movement in 2026: Trends in AI, Payments & Regulation Newsletter, a source of expert insights on the forces reshaping fintech, payments, and banking, covering fraud and financial crime, AI in fraud prevention and risk intelligence, real-time and cross-border payments, European payments sovereignty and the future of instant rails, stablecoins, agentic commerce, compliance and the evolving regulatory landscape, payments fragmentation driven by geopolitics and regulation, infrastructure bottlenecks in banking modernisation, the shift from generic scale to verticalised, value-added payment models, and digital wallets changing consumer payment behaviour – delivering a clear, data-backed view of what will shape strategy and innovation in 2026 and beyond.
Oana Ifrim is Lead Editor at The Paypers, keeping a close pulse on the banking and fintech sectors. She brings passion for content strategy and narrative design, along with rigorous trend analysis and industry research, to fintech, banking, and payments coverage, delivering clarity, depth, and strategic insight. Oana conducts expert interviews and thought leadership content, moderates webinars and conference panels, leads research projects and industry reports, and represents The Paypers at key industry events.
She can be reached at oana@thepaypers.com or on LinkedIn.
The Paypers is a global hub for market insights, real-time news, expert interviews, and in-depth analyses and resources across payments, fintech, and the digital economy. We deliver reports, webinars, and commentary on key topics, including regulation, real-time payments, cross-border payments and ecommerce, digital identity, payment innovation and infrastructure, Open Banking, Embedded Finance, crypto, fraud and financial crime prevention, and more – all developed in collaboration with industry experts and leaders.