
Conny Dorrestijn Prins
25 Feb 2026 / 5 Min Read
As AI agents move from recommending to transacting, trust becomes the new checkout: not a feeling, but a question of accountability, compliance, and the foundation of economic growth.
The remedy for unpredictability, for the chaotic uncertainty of the future, is contained in the faculty to make and keep promises.
Hannah Arendt, The Human Condition (1958)
In an era where “trust” has become both a buzzword and a battleground, it’s worth pausing to ask: what is trust — really? And perhaps more importantly: how do we rebuild it when our systems, technologies and worldviews keep pulling us apart?
I was recently reminded of this during a Good Morning NED webinar by Hemingway — the boardroom whisperers — on the topic of trust. It sent me back to first principles. Trust is that intangible element which, since the beginning of mankind, has brought us together and torn us apart. It has made us dare and achieve the unimaginable — and it has destroyed more than we can imagine too. Most pointedly, it often stops us from even trying to build something greater together.
Turning to philosophy, I found Hannah Arendt — the twentieth-century thinker best known for her work on totalitarianism — who observed that “Human action is, by its nature, unpredictable.” That unpredictability is precisely why trust is never a given. Trust is a conscious act of risk, sustained through promises, accountability, and people choosing to act in concert.
This insight feels particularly prescient today. Distrust is not only a social malaise; it has become an organisational bottleneck. In business, when trust erodes, we pile on regulation, compliance and legal safeguards. We drift into what I called in a recent talk a culture of “lawyering up” rather than one of collaboration. We rely less on social intelligence — our ability to read signals of intention, character and credibility — and more on documents, boxes ticked, and audit trails.
Of course, that instinct is understandable. We need to keep bad actors out. But at the same time, a lack of trust is bad for business. Empirical research over the years suggests economies with higher levels of social trust are significantly more entrepreneurial and innovative, because people take collective action rather than retreat into siloed self-protection. Trust is not a soft value; it is an economic growth factor.
And now we are entering a new chapter in the trust story: Agentic Commerce.
In this emerging world, an AI agent doesn’t just recommend a product. It acts. It compares, decides, negotiates, switches providers, and initiates transactions — on behalf of a consumer, a merchant, or a business. Suddenly, the most important question is no longer “Do I trust this brand?” but:
Do I trust the agent?
This is not a theoretical question. It is already becoming practical.
Imagine an agent that buys insurance for you. It scans policies, fine print, exclusions, pricing, and claims reputations — then selects and purchases the best option. It might even switch providers automatically next year. That sounds efficient. But it also raises a chillingly modern concern: if something goes wrong, who is accountable? The insurer? The agent provider? The bank? The platform? Or the user who clicked “approve” without understanding what was approved?
Or consider subscriptions. An agent might cancel, renegotiate, or switch services on your behalf: broadband, SaaS tools, streaming, utilities. A small example, perhaps — but it points to a much bigger shift. Agents will increasingly “act” in the economy, and their actions will create winners, losers, and consequences.
Arendt helps us see the core issue: trust is not primarily about intelligence. It is about governability: actions and behaviour that stay in step with promises made.
A payment service provider can be regulated. A human can be held accountable. A contract can be enforced. But an autonomous agent? The moment it becomes a participant in commerce, we must ask: can it operate within the compliance frameworks that keep markets safe?
This is where a deeper fracture emerges. The old world of payments builds trust through audited accounts, regulated intermediaries, enforceable legal structures, and supervision. The new world of agents assumes that speed and automation are inherently good, that “the system” will optimise outcomes. Yet both worlds claim to seek trust — and again, they risk talking past one another.
The temptation will be to hide behind technical language: models, tokens, autonomy, orchestration. But trust cannot be outsourced to jargon. In regulated finance, trust is built through behaviour visible over time — and through the ability to evidence that behaviour.
In my own thinking, I describe the evolution of trust as a progression from Trust 0.0 to Trust 3.0.
Agentic Commerce is, in many ways, a stress test of Trust 2.0 and Trust 3.0.
Because in payments, the question is not simply: can the agent do the job? It is: can the agent operate under constraints — sanctions, fraud controls, consumer protection, auditability, dispute handling, and accountability? Can it be governed? Can it be supervised? Can it be challenged?
This is where trust becomes the bridge between innovation and sustainable growth.
If we get it right, agents could remove friction from commerce and free up human capacity for creativity, entrepreneurship and progress. But if we get it wrong — if we treat trust as an afterthought — we will not get acceleration. We will get backlash. More fear. More regulation. More “seeking cover”. And, inevitably, less growth.
In the end, trust is not naïve, and it is not soft. Trust is not about eliminating risk; on the contrary, it is about allowing strangers, institutions and innovators to act together despite uncertainty — so we can build more together than we ever could alone.
So here’s the uncomfortable question Arendt would force us to ask: If an agent can act, can it also promise? Can it be held to its word? Can it be accountable — not just technically, but socially and legally?
Because that is what trust ultimately is: not optimism, but responsibility to oneself and to others. And if we ignore that, we will discover — yet again — that the real cost of distrust is not merely emotional. It is economic. It is societal. It is the slow, quiet shrinking of what we could dare to build together.
Try it: in your next product meeting, compliance review, or board discussion, ask one simple question: Where is trust actually created here — and where are we merely documenting its absence?
Trust me, you might be surprised.
Conny Dorrestijn Prins is a Fintech Mentor and Non-Executive Director at Augmentum Fintech, Singer Capital Markets, and Worldpay BV. Named one of the ‘2021 Top Ten Voices in European Fintech,’ Conny Dorrestijn is a trusted advisor to payments and fintech firms looking to scale or reposition internationally. With a career dedicated to innovation in financial technology, she also champions empowerment, mentoring through Women in Payments, Money2020’s Rise Up program, and the Global Give Back Circle. A frequent international speaker (EBA, Money2020, Citi, Techleap, and more), she brings a unique mix of experience and curiosity, having introduced Europe’s first internet banking tech, driven a payments hub to successful exit, and advised on positioning and strategy across the industry. She is known for combining clear business focus with an inclusive, future-driven voice.
The Paypers is a global hub for market insights, real-time news, expert interviews, and in-depth analyses and resources across payments, fintech, and the digital economy. We deliver reports, webinars, and commentary on key topics, including regulation, real-time payments, cross-border payments and ecommerce, digital identity, payment innovation and infrastructure, Open Banking, Embedded Finance, crypto, fraud and financial crime prevention, and more – all developed in collaboration with industry experts and leaders.