Darwinium, a US-based AI fraud prevention company, has published research examining the structural risks that agentic commerce introduces into the digital economy, revealing significant gaps between organisations' perceived readiness for AI-driven fraud and their actual detection and prevention capabilities.
The research found that 97% of organisations report an increase in AI-facilitated attacks over the past year, driven by improvements in fraud-as-a-service automation kits, increased attacker targeting precision, and more sophisticated evasion techniques. Despite this, only 36% of organisations can stop fraud as it arises across the customer journey, with most limited to catching threats at isolated checkpoints such as login or checkout. A further 52% cannot explicitly track or label AI-assisted fraud, relying instead on broad security measures that generate significant false positives.
Financial impact and the cost of false positives
The research quantifies the dual cost of inadequate fraud controls. Organisations report an average of USD 4.5 million in annual losses from AI-enabled fraud, alongside USD 3.1 million in revenue impact from false positives: instances where legitimate customers are blocked by blunt-force fraud tools. The combined effect is an annual blind spot of roughly USD 7.6 million, in which businesses lose revenue both to fraudsters and to the collateral damage of their own controls. Some 60% of companies report losing more than 25% of the accounts affected by fraud events.
Agentic commerce and governance gaps
As autonomous AI agents become more prevalent in commerce, organisations face a structural identity problem. While 89% of respondents expect non-human traffic to increase, the market is split on how to handle legitimate agentic activity: 48% allow it by default with monitoring, while 31% proactively block it. Authentication and identity binding are the top barriers to managing agentic traffic, cited by 46% of respondents.
The research also identifies an unresolved liability question when AI agent-driven transactions go wrong: 39% of respondents believe the AI or agent provider should be liable, 20% point to the customer, and only 15% support a shared liability model, leaving a governance vacuum at the centre of agentic commerce.
Deepfakes are also now commonplace: 93% of organisations report encountering deepfake-style attempts in the past 12 months, with the highest incidence at payments and checkout, customer support and call centres, and onboarding and identity verification.