Irina Ionescu
27 Feb 2026 / 10 Min Read
Irina Ionescu, Senior Editor at The Paypers, tackles the new wave of age verification legislation and what it takes for these initiatives to be successful.
In the early decades of the Internet, anonymity was celebrated as liberation. Age was a number easily fabricated, and identity a costume easily worn. But as online platforms evolved into marketplaces, social ecosystems, and entertainment hubs, that early liberty soon clashed with darker realities – organised fraud rings, industrial-scale scams, and identity theft targeting ever-younger victims.
Today, age verification has emerged as a controversial tool in the fight against online dangers. Governments are tightening rules around minors’ access to social media and adult content. Platforms are experimenting with facial age estimation, document uploads, and third-party verification providers. However, determined teens can still find workarounds, routing around restrictions with VPNs or fake credentials, or migrating to the darker corners of the web.
Can age verification truly reduce exposure to various types of fraud, including pig butchering, identity theft, and sextortion? Or does it create more friction without real protection? When integrated into broader regulatory, technological, and educational frameworks, age verification could serve as a powerful gatekeeping mechanism against some of the most predatory forms of digital fraud that affect millions of youngsters globally.
Pig butchering, crypto scams, identity theft, and sextortion are among the most common types of fraud online targeting minors.
Pig butchering describes long-con investment scams in which victims are groomed over weeks or months, often through social media or messaging platforms. Scammers cultivate trust and emotional intimacy before persuading victims to invest in fraudulent cryptocurrency schemes.
While often associated with adult victims, teenagers are increasingly targeted because of their enthusiasm for digital tools and the lure of easy earnings that promise a sense of autonomy. Young people accustomed to digital intimacy may be particularly susceptible to online relationships that quickly shift into financial manipulation. A 16-year-old experimenting with cryptocurrency or trading apps can be drawn into a scam without recognising the warning signs of manipulated returns or fake investment dashboards.
The stats are discouraging worldwide: a report from Chainalysis notes that, in 2024, pig butchering scams grew nearly 40% YOY as the fraud industry leveraged AI and grew in sophistication. An estimated USD 12.4 billion was reported as crypto scam revenue in 2024, a figure that includes pig butchering scams. Unfortunately, these numbers account only for reported losses; analysts suggest the real amount could be significantly higher, as many scams go unreported due to victims’ fear of exposure and ostracisation.
Teenagers also represent prime targets for identity theft. Their credit histories are often blank, meaning fraudulent activity may go undetected for years. Fraudsters harvest personal data through phishing links, gaming platforms, fake scholarship offers, or influencer recruitment schemes. Once acquired, that identity data can be used to open credit lines, launder money, or fuel larger fraud operations.
At the same time, sextortion has become one of the fastest-growing threats facing minors. In this type of fraud, offenders coerce teens into sharing explicit images and then threaten to distribute them unless payment is made. Social media and messaging apps are primary vectors.
A study conducted by Thorn on 1,200 young people aged 13-20 showed that one in seven victims was driven to self-harm as a result of their experience. One in five teens reports experiencing sextortion, while a staggering 81% of these threats happen exclusively online. The abuse takes many forms, at times overlapping, ranging from relational sextortion to exploitative content or financial threats.
In the US, the FBI and other agencies have warned about organised sextortion networks that operate at scale, often targeting boys between 14 and 17, with operations being streamlined – fake profiles, scripted conversations, rapid coercion, and digital payment demands. What’s even more shocking is the young age at which this type of fraud begins – the study conducted by Thorn revealed that one in six victims were age 12 or younger when they first experienced sextortion. Across all these categories, one pattern emerges: social media and online platforms represent the entry points.
What makes teenagers particularly vulnerable? Adolescence is a developmental phase defined by experimentation, risk-taking, and identity formation. Fraudsters prey on young people by exploiting emotional vulnerability and loneliness, the desire for validation and connection, limited financial literacy, and inexperience with online manipulation tactics. Organised fraud rings understand these psychological levers and engineer interactions accordingly.
Thus, age verification is not simply about restricting access to adult content, but also about limiting early exposure to high-risk digital environments where manipulation thrives.
In recent years, policymakers across various jurisdictions have moved decisively to restrict minors’ access to certain online platforms.
Australia has been at the forefront of digital safety reform. The Online Safety Amendment Act, passed by the Australian Parliament in November 2024, strengthened the eSafety Commissioner’s powers and introduced more robust mechanisms to remove harmful content, including cyber-abuse material targeting children. Taking effect on 10 December 2025, the act saw popular social media platforms such as Facebook, Instagram, Reddit, Snapchat, TikTok, X, Threads, Twitch, and YouTube restricting access for minors under the age of 16, with the possibility of adding more platforms in the future.
The broader regulatory direction in Australia reflects a growing willingness to hold platforms accountable for protecting minors – although Meta, the parent company of Facebook, Instagram, and WhatsApp, has said it would prefer mobile app store operators such as Apple’s App Store or Google’s Play Store to verify a user’s age, rather than the liability falling on individual platforms.
In the UK, the Economic Crime and Corporate Transparency Act 2023 (ECCTA) represents a major legislative overhaul aimed at fighting corporate fraud, improving beneficial ownership transparency, and strengthening financial crime enforcement. As of 18 November 2025, ECCTA introduced mandatory identity verification requirements for all new individual company directors and people with significant control.
Alongside the Online Safety Act, also adopted in 2023, the UK has established stricter obligations for platforms to protect children from harmful content and criminal exploitation. Under the act, platforms must prevent children from accessing harmful content, including adult content, and must implement highly effective age-assurance measures to comply. Services are also legally required to remove illegal content, such as terrorist material or child exploitation material. Major social media platforms like X, Meta, and Discord are adjusting their policies to comply.
To remain compliant, Discord announced on 9 February 2026 that it will roll out teen safety features globally, creating a safer, more inclusive experience for users over the age of 13. Key privacy protections of Discord’s age assurance approach include the guarantee that video selfies for facial age estimation won’t leave a user’s device and that all identity documents submitted to vendor partners will be deleted quickly, often immediately after age confirmation. At the same time, to enforce more protection for minors navigating Discord channels online, a user’s age group status will not be seen by other users.
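To illustrate the data-minimisation principle behind such age assurance approaches, here is a minimal Python sketch. All names and bands are hypothetical, not Discord’s actual implementation: the idea is simply that the service derives a coarse age band from the verified input and retains only that band, never the document or date of birth.

```python
def confirm_age_band(birth_year: int, current_year: int) -> str:
    """Derive a coarse age band from a verified birth year.
    Only the band is retained; the raw input is discarded."""
    age = current_year - birth_year
    if age < 13:
        return "under-13"
    if age < 18:
        return "teen"
    return "adult"

# The account record stores only the band -- no document, no birthdate --
# and, as with Discord's approach, the band is never shown to other users.
account = {"user_id": "u_123", "age_band": confirm_age_band(2010, 2026)}
```

Keeping only the derived band is what allows a platform to delete identity documents immediately after confirmation while still enforcing age-appropriate experiences.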
Beyond Australia and the UK, multiple countries and jurisdictions are pushing for laws requiring age checks for access to adult content or social media.
Age verification mechanisms are increasingly viewed as foundational infrastructure for compliance, especially since nearly 40% of children aged 8-12 in the US use some form of social media. In the US, the Children’s Online Privacy Protection Act (COPPA) has been in force since 2000 and acts as a digital guardian for children under the age of 13. COPPA restricts data collection from users under 13 but doesn’t require strict age verification from platforms. However, states like Nebraska have introduced laws requiring platforms to verify age and obtain parental consent for minors. Nebraska’s Parental Rights in Social Media Act (LB 383) mandates that social media platforms verify users’ ages and obtain parental consent before allowing minors under the age of 18 to create accounts. The act is set to take effect on 1 July 2026.
In the European Union, the Digital Services Act (DSA) requires very large online platforms to mitigate risks to minors, including through effective age verification where appropriate. Under the DSA, the age limit for minors to hold a social media account is generally 13, although some countries are looking into raising it to 16. For instance, France mandates compulsory age verification and parental consent for all users under 15, while Spain and Norway are moving to raise or enforce age limits of 15-16 for minors seeking access to social media platforms.
In Brazil, the Digital Child and Adolescent Statute (Digital ECA), enacted on 17 September 2025, establishes a comprehensive regulatory framework to protect children and adolescents in digital environments. The law applies to social media, games, websites with adult content, and apps accessed by minors, and self-declaration of age is no longer accepted. Instead, digital platforms must use reliable methods – including identity checks, biometrics, or advanced age estimation techniques – before displaying content meant for adults. The law also protects minors’ personal data, stipulating that it cannot be used for targeted advertising or in ways that violate minors’ privacy. Strict, non-self-declaration age verification must be in place by March 2026.
Malaysia is also set to ban children under age 16 from social media by mid-2026. Platforms operating in the country must enforce eKYC to check for official IDs or passports and, thus, move away from self-declaration of age.
These regulations and initiatives across the world share a core premise: unrestricted access creates measurable fraud risk, and platforms must implement age assurance systems proportionate to that risk.
Critics often frame age verification as solely about blocking minors from viewing explicit content. But its fraud-prevention implications are broader.
Many scams originate in loosely moderated communities where anonymity flourishes, and social media platforms have become a haven for fraudsters worldwide. By restricting minors’ access to certain environments and especially those associated with adult interactions, platforms can reduce the probability of contact with organised fraud networks. For instance, pig butchering operations and sextortion frequently begin with unsolicited messages on social apps. If minors face higher barriers to account creation or certain platform features, the attack surface shrinks.
Robust age verification often involves identity verification. While privacy safeguards are essential, tying accounts to validated identities can deter large-scale scam operations that rely on disposable accounts.
At the same time, fraud rings depend on scale for their success. If creating thousands of fake profiles becomes more complex or expensive, then operational costs rise, and some attacks might become economically unviable.
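A back-of-the-envelope illustration of this point, with entirely hypothetical figures: if every account must pass a verification check with a meaningful rejection rate and a per-attempt cost, the expected cost of a campaign that needs thousands of profiles rises sharply.

```python
def expected_campaign_cost(accounts_needed: int, pass_rate: float,
                           cost_per_attempt: float) -> float:
    """Expected cost for a fraud ring to obtain the target number of
    accounts, given a verification pass rate and a per-attempt cost
    (e.g. a purchased ID). All figures here are illustrative."""
    expected_attempts = accounts_needed / pass_rate
    return expected_attempts * cost_per_attempt

# Near-free, unverified sign-ups: 10,000 accounts at USD 0.01 each.
unverified = expected_campaign_cost(10_000, pass_rate=1.0, cost_per_attempt=0.01)
# Gated sign-ups: only 1 in 5 attempts passes, each attempt costs USD 2.
gated = expected_campaign_cost(10_000, pass_rate=0.2, cost_per_attempt=2.00)
```

Under these assumed numbers, the campaign cost jumps from roughly USD 100 to USD 100,000 – which is the sense in which some attacks become economically unviable.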
Accurate age data enhances fraud detection algorithms. Platforms can better flag suspicious adult-minor interactions, detect grooming patterns, and escalate potential sextortion cases to trust and safety teams.
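As a minimal sketch of what such flagging might look like – the rules, names, and thresholds below are hypothetical, not any platform’s actual system – verified age bands let a trust and safety pipeline treat an unsolicited adult-to-minor first contact, or a payment request sent to a minor, as signals worth escalating:

```python
from dataclasses import dataclass

@dataclass
class MessageEvent:
    sender_age: int         # verified age of the sender
    recipient_age: int      # verified age of the recipient
    is_first_contact: bool  # no prior conversation between the two users
    has_payment_request: bool

def risk_flags(event: MessageEvent) -> list[str]:
    """Return the hypothetical risk flags raised by one message event."""
    flags = []
    adult_to_minor = event.sender_age >= 18 and event.recipient_age < 18
    if adult_to_minor and event.is_first_contact:
        flags.append("unsolicited-adult-to-minor-contact")
    if adult_to_minor and event.has_payment_request:
        flags.append("payment-request-to-minor")
    return flags
```

With self-declared ages, both checks degrade, since either field may be fabricated; verified age data is what makes such signals trustworthy enough to escalate to trust and safety teams.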
Opponents may argue that age verification simply drives minors underground. It is true that tech-savvy teens can still use VPNs, borrow IDs, falsify information, or migrate to less regulated spaces, including the Dark Web – which could prove more harmful than simply accessing age-restricted content.
In fact, research released by Shufti Pro in 2025 shows that one in four would-be sign-ups at age-gated sites is still a suspected minor. The most common tactic, spotted in 38% of cases, was using borrowed or purchased adult IDs. VPNs and proxies masking the user’s location were detected in 33% of cases, while deepfake or AI-aged selfies appeared in only 11% – leaving minors exposed not only to adult content but to a wide array of potential fraud types.
Those criticising age verification laws raise legitimate concerns around privacy risks, false positives, and migration risks. Privacy risks are often associated with collecting ID data from minors, which creates potential data breach vulnerabilities, while AI-based age estimation may misclassify users as adults, increasing false-positive rates. At the same time, minors who cannot access their preferred social media platforms due to parental consent or age verification laws in their jurisdictions may migrate towards less regulated platforms (including Telegram), increasing their exposure to greater harm.
And, as verification tends to apply to all users, not just minors, there are wider social concerns revolving around data collection linked to surveillance or the misuse of personal information.
However, this argument assumes that deterrence must be 100% effective to be worthwhile. Public policy rarely operates on absolutes, and instilling safer browsing habits in the younger generation could produce educated adults who are less prone to becoming fraud victims.
According to Sumsub, ‘to balance safety with privacy, platforms must partner with trusted verification providers that offer secure, compliant, and user-friendly solutions, ensuring strong protection for minors without compromising user rights or digital access’.
From a macroeconomic perspective, fraud has become one of the most lucrative criminal industries across the globe. Organised scam networks operate similarly to multinational corporations, complete with HR departments, scripts, and performance metrics.
In this environment, teenagers don’t just become victims – they are sometimes recruited as money mules. Fraud rings exploit young users to transfer funds, open accounts, or launder cryptocurrency.
When done properly, age verification, combined with identity verification, can help reduce mule recruitment by making disposable accounts harder to create and by tying account activity to validated identities.
Age verification will not eliminate pig butchering, account takeover, deepfakes, sextortion, money muling, scams, or other types of fraud. Determined actors will adapt, leveraging constantly evolving technologies and preying on misinformed web users of all ages. Minors simply tend to be more susceptible and, therefore, easier victims for fraud ring operators.
What age verification can do, when implemented effectively, is raise operational costs for scammers, reduce exposure to high-risk environments, improve detection of adult-minor exploitation, signal societal norms about child protection, and provide enforcement leverage for regulators. Combined, these measures can translate into thousands of potential victims removed from fraud rings and harmful situations, online and offline, and help create a generation of adults more resilient to fraudsters.
The Internet’s original architecture prioritised openness over accountability. That design choice now collides with a reality in which international fraud rings prey on minors with industrial efficiency.
Age verification is not the Holy Grail, but a mere friction point that reshapes the risk landscape. It can deter casual access, complicate fraud scale, and empower platforms to act more intelligently. The real question is not whether age verification alone can stop online scams. It cannot. The question is whether, in a world where teenagers are targeted by pig butchering schemes, identity thieves, and sextortion networks, we can afford not to use the reasonable safeguards available.
In that context, age verification is less about restriction and more about responsibility. It acknowledges that vulnerability is predictable, exploitation is organised, and prevention can be taught.
The future of digital safety will not be defined by a single tool – and no fraud-prevention tool, no matter how sophisticated, can educate generations of web users, regardless of their age, about the dangers of living a digital life. However, age verification, thoughtfully designed and proportionately implemented, will likely remain a central pillar in the architecture of online trust.

Irina is Senior Editor at The Paypers, primarily specialising in online payments and fraud prevention. She has a Ph.D. in Economics and a strong economic academic background, with interests in fraud prevention, chargebacks, fintech, ecommerce, and online payments. Reach out to her via LinkedIn or email at irina@thepaypers.com.
The Paypers is a global hub for market insights, real-time news, expert interviews, and in-depth analyses and resources across payments, fintech, and the digital economy. We deliver reports, webinars, and commentary on key topics, including regulation, real-time payments, cross-border payments and ecommerce, digital identity, payment innovation and infrastructure, Open Banking, Embedded Finance, crypto, fraud and financial crime prevention, and more – all developed in collaboration with industry experts and leaders.