The rapid growth of agentic commerce—where AI systems autonomously search, compare, and complete purchases on behalf of consumers—is creating a significant and largely overlooked risk for global merchants. According to Chargebacks911, legitimate AI-initiated purchases are being misclassified as fraudulent bot activity, triggering a wave of false declines and mounting revenue losses.
While industry conversation around agentic commerce has focused heavily on disputed AI purchases and fraud risk, the reverse problem is an equally significant threat: fraud systems built for human behaviour are struggling to distinguish legitimate AI-driven transactions from malicious automated bots.
The scale of the AI commerce shift
The warning arrives as agentic commerce moves rapidly from a futuristic concept to commercial reality. According to The Paypers Global Ecommerce Report 2026, agentic commerce could account for 25% to 30% of all global online purchases by 2030.
Major industry players are already leaning heavily into this ecosystem:
- Visa and Mastercard are currently piloting agent-initiated transactions alongside major banking partners.
- Platforms including Perplexity, Walmart, and Amazon are actively enabling AI agents to transact directly on consumers’ behalf.
However, fraud detection systems have largely failed to keep pace with this rapid shift. According to Imperva’s 2025 Bad Bot Report, 51% of internet traffic is now generated by bots, of which 37% is considered malicious.
Historically, merchants could reliably distinguish between automated bot traffic and genuine consumer behaviour. Today, however, AI shopping agents operate seamlessly within browsers, generating traffic patterns that appear increasingly human, thereby triggering fraud systems designed for a world where a human being was always behind the keyboard.
A structural threat to merchant revenue
Monica Eaton, founder and CEO of Chargebacks911, highlighted the immediate threat these false declines pose to businesses.
“The fraud systems most merchants rely on today were built to detect bad human behaviour,” Eaton stated. “They were not designed for a world where a legitimate AI agent and a malicious bot look almost identical.”
Eaton warned that merchants face a clear choice as agentic commerce scales: adapt their detection and evidence infrastructure now, or watch a growing share of legitimate revenue get declined by their own legacy systems. A false decline carries no chargeback, but the immediate cost in lost sales, damaged brand trust, and reduced visibility to AI agents is severe.
Traditional fraud prevention relies on behavioural signals tied directly to human interaction, such as device fingerprints, session patterns, click sequences, and authentication flows. Agent-initiated transactions inherently disrupt each of these markers. Without a clear evidence trail showing what the agent was authorised to do, what it actually executed, and on whose behalf it acted, merchants lack the essential data needed to accurately classify the transaction.
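The evidence trail described above—what the agent was authorised to do, what it executed, and on whose behalf—can be pictured as a small data structure plus a classification check. The sketch below is illustrative only: the dataclass names, fields, and limits are assumptions for explanation, not Chargebacks911's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch: field names and checks are assumptions,
# not any vendor's real evidence format.

@dataclass
class AgentMandate:
    """What the agent was authorised to do, and on whose behalf."""
    principal_id: str            # the consumer the agent acts for
    agent_id: str
    allowed_merchants: set       # merchants the agent may buy from
    spend_limit: float           # per-transaction cap, account currency

@dataclass
class AgentAction:
    """A timestamped record of what the agent actually executed."""
    merchant_id: str
    amount: float
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def within_mandate(mandate: AgentMandate, action: AgentAction) -> bool:
    """Classify an agent-initiated transaction against prior consent
    rather than point-of-transaction behavioural signals."""
    return (action.merchant_id in mandate.allowed_merchants
            and action.amount <= mandate.spend_limit)
```

With a record like this, a transaction that falls inside the mandate can be cleared as legitimate automation even though its session signals look nothing like a human shopper.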
Adapting infrastructure for an agentic future
To address this gap, Chargebacks911 utilises its Unified Dispute Management System (UDMS) and ResolveLab, which leverage AI and machine learning to build and analyse the exact evidence architecture that agentic transactions require. Rather than relying on point-of-transaction signals, UDMS captures the full consent and permission trail. This includes what the agent was authorised to do, the limits in place, and a timestamped record of each action taken.
Donald Kossmann, chief technology officer at Chargebacks911, noted that merchants must shift their perspective on transaction validation.
“In an agentic commerce environment, the evidential anchor shifts from a real-time human action to a prior consent framework,” Kossmann explained. “Merchants need systems that can read that framework accurately and quickly. The organisations that build that capability now will not only reduce false declines; they will have a structural advantage as AI-driven purchasing becomes the norm.”
To safeguard revenue, Chargebacks911 recommends that merchants take three immediate actions:
- Establish highly granular permission frameworks for AI agents that transact on their platforms.
- Invest in evidence capture infrastructure that logs agent authorisation directly alongside transaction data.
- Review fraud detection thresholds and rules to account for the behavioural differences between human and agent-initiated purchases.
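The third recommendation—separating human and agent traffic in the rules layer—might look like the following in a simple decision function. The field names and thresholds are hypothetical, chosen only to show agent traffic being judged against its consent evidence while legacy behavioural checks still apply to human sessions.

```python
# Hypothetical rule sketch. Field names ("agent_initiated",
# "mandate_verified", etc.) and thresholds are illustrative
# assumptions, not a real fraud engine's configuration.

def score_transaction(txn: dict) -> str:
    """Return 'approve' or 'review' for a transaction record."""
    if txn.get("agent_initiated"):
        # Agent traffic: judge against the declared consent
        # framework, not human session signals.
        if (txn.get("mandate_verified")
                and txn.get("amount", 0) <= txn.get("mandate_limit", 0)):
            return "approve"
        return "review"  # missing evidence trail -> manual review

    # Human traffic: legacy behavioural checks still apply.
    if txn.get("velocity_per_hour", 0) > 10 or txn.get("device_mismatch"):
        return "review"
    return "approve"
```

The point of the split is that an agent transaction with a verified mandate is approved on its evidence trail, while the same traffic pattern without one is routed to review instead of being silently declined.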
“The industry has rightly focused on what happens when an AI agent makes a purchase the customer did not want,” Eaton concluded. “The question that remains largely unasked is what happens when a merchant’s fraud system refuses the purchase the customer did want. Both problems need solving, and both require the same thing: a clear, auditable record of what was authorised and what happened.”