Paula Albu
29 Apr 2026 / 8 Min Read
How should payment firms respond to Singapore’s agentic AI governance framework? Tim Khamzin, Vivox AI, explains what is changing and why.
Singapore is known for consistently developing guardrails and frameworks to promote responsible AI use and governance. However, the January 2026 release of Singapore’s governance framework for agentic AI, the Model AI Governance Framework for Agentic AI (the ‘MGF’), marks a clear turning point in how artificial intelligence is understood in financial services. It moves the discussion beyond models and into systems that act.
The relatively new agentic AI governance framework explicitly addresses AI systems that are autonomous, goal-directed, and capable of taking actions across workflows. And in doing so, it challenges the adequacy of traditional model risk management approaches, introducing a new set of operational expectations for firms deploying AI in production.
This distinction is, in fact, crucial for payments firms: AI is no longer confined to assisting with decisions; it is increasingly ‘responsible’ for executing them. From onboarding and KYB to AML investigations and operational workflows, AI agents are now embedded directly into core processes in payments. The question is no longer only whether these systems are accurate, but whether they are governable.
Conventional AI governance in financial services has focused on models: their training data, validation processes, and performance metrics. This approach assumes relatively bounded systems, where inputs and outputs can be clearly defined and monitored.
Agentic AI does not operate within those boundaries. These systems are designed to pursue objectives, dynamically selecting actions based on context. An AI agent conducting onboarding, for example, may determine which data sources to query, how to resolve discrepancies, and whether to escalate or approve a case. The risk is no longer limited to incorrect outputs but extends to inappropriate sequences of actions.
Singapore’s framework highlights this distinction: it notes that agentic systems ‘may exhibit emergent behaviours not explicitly programmed’, and therefore require governance mechanisms that go beyond model validation. In practice, this means firms must shift from assessing model accuracy to controlling system behaviour.
Rather than treating explainability as a model-level feature, Singapore positions auditability as a system-wide requirement, where every action taken by an AI agent must be recorded, reconstructable, and attributable. This introduces a significant structural change in how AI risk is managed: governance must account for decision pathways, not just outcomes.
One of the most significant implications of the framework is its emphasis on traceability: the need for organisations to maintain ‘sufficient records of agent decisions and actions to enable review and accountability.’
In payments, where regulatory scrutiny around AML and sanctions compliance is already high, this requirement becomes critical. AI agents operating across onboarding or transaction monitoring workflows may interact with multiple data sources and execute a series of decisions before reaching an outcome.
Without comprehensive auditability, firms cannot reconstruct how a decision was made. This is not merely a technical limitation; it is a regulatory risk. Auditability must be embedded at the system level. Every action taken by an AI agent, including data retrieval, model invocation, decision logic, and escalation, must be recorded in a structured and interpretable way.
This represents a departure from traditional logging practices, requiring firms to design AI systems with auditability as a core capability, not an afterthought.
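To make the idea concrete, the requirement above can be sketched as an append-only audit trail in which every agent action carries its inputs, outcome, and rationale. This is a minimal illustration, not a production design; the field names and action types are assumptions chosen for readability.

```python
import json
import uuid
from datetime import datetime, timezone

class AgentAuditLog:
    """Append-only record of every action an agent takes, so a decision
    can later be reconstructed step by step and attributed to an agent."""

    def __init__(self):
        self.records = []

    def record(self, agent_id, action_type, inputs, outcome, rationale):
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            # e.g. data_retrieval, model_invocation, decision, escalation
            "action_type": action_type,
            "inputs": inputs,
            "outcome": outcome,
            "rationale": rationale,
        }
        self.records.append(entry)
        return entry

    def reconstruct(self, agent_id):
        """Return the full, ordered decision pathway for one agent."""
        return [r for r in self.records if r["agent_id"] == agent_id]

log = AgentAuditLog()
log.record("agent-42", "data_retrieval", {"source": "company_registry"},
           "match_found", "Primary KYB source queried first")
log.record("agent-42", "escalation", {"case": "C-991"},
           "routed_to_analyst", "Conflicting ownership data")
print(json.dumps(log.reconstruct("agent-42"), indent=2))
```

The key design choice is that the rationale is captured at the moment of action, not reconstructed afterwards; this is what makes the pathway reviewable rather than merely logged.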
The framework also places strong emphasis on human oversight, noting that organisations should ensure ‘appropriate human involvement in the oversight of agentic systems, particularly in high-risk contexts.’
In practice, this reframes human-in-the-loop from a safeguard to a control mechanism. For firms, this means defining precisely where human intervention is required, how it is triggered, and how it interacts with automated processes. In an AML investigation workflow, for instance, an AI agent may handle the majority of cases autonomously but escalate complex or ambiguous scenarios to a human analyst.
The challenge lies in the calibration of the approach for each case: excessive reliance on human review undermines the efficiency gains of automation, while insufficient oversight introduces accountability risks. The framework recognises this balance, encouraging firms to design systems where human involvement is proportionate to risk. This requires more than policy. It demands operational design: clear escalation thresholds, structured review processes, and integration between human and machine decision-making.
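The proportionate-oversight calibration described above can be sketched as explicit routing logic. The thresholds and signals here are purely illustrative assumptions; real values would come from a firm’s own risk policy and model outputs.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    risk_score: float      # 0.0-1.0, assumed to come from upstream models
    data_conflicts: int    # unresolved discrepancies across data sources

# Illustrative thresholds, not regulatory values.
AUTO_APPROVE_MAX_RISK = 0.3
HUMAN_REVIEW_MIN_RISK = 0.7

def route(case: Case) -> str:
    """Proportionate oversight: the agent acts alone only on clear cases;
    ambiguity or elevated risk triggers escalation to a human analyst."""
    if case.data_conflicts > 0 or case.risk_score >= HUMAN_REVIEW_MIN_RISK:
        return "escalate_to_analyst"
    if case.risk_score <= AUTO_APPROVE_MAX_RISK:
        return "auto_approve"
    return "agent_review_with_sampled_human_check"
```

Making the escalation thresholds explicit code (rather than analyst discretion) is what allows them to be audited, tuned, and defended to a supervisor.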
Explainability has long been a focus of AI governance, but typically at the level of individual models. Agentic AI expands the scope of this requirement.
The Singapore framework also highlights the importance of being able to ‘explain the rationale behind agent decisions and actions in a manner that is understandable to stakeholders.’ For firms, this means explaining not just what decision was made, but how it was reached across a sequence of steps.
For instance, an AI agent flagging a customer for additional due diligence must be able to demonstrate which data sources were used, how conflicting information was resolved, and why a particular course of action was chosen.
This shifts explainability from a technical feature to an operational capability. Firms must develop tools that allow compliance teams to interrogate entire decision flows, rather than isolated model outputs. In effect, explainability becomes a property of the system, not just the model.
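One way to picture system-level explainability is a function that renders an ordered decision trace into a stakeholder-readable narrative. This is a minimal sketch; the trace structure and step contents are assumed for illustration.

```python
def explain(trace):
    """Render an ordered agent decision pathway as a numbered,
    human-readable summary for compliance review."""
    return "\n".join(
        f"{i}. {step['action']}: {step['rationale']}"
        for i, step in enumerate(trace, 1)
    )

# Hypothetical trace for a customer flagged for enhanced due diligence.
trace = [
    {"action": "Queried sanctions list",
     "rationale": "Name similarity above matching threshold"},
    {"action": "Resolved data conflict",
     "rationale": "Registry date of birth matched passport record"},
    {"action": "Flagged for enhanced due diligence",
     "rationale": "Residual PEP association could not be ruled out"},
]
print(explain(trace))
```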
The emergence of Singapore’s framework (which is guidance rather than binding regulation) adds to an already complex regulatory landscape. Payments firms operating internationally must also consider the EU AI Act, with its risk-based classification and requirements for high-risk systems, as well as evolving UK expectations around AI governance and operational resilience. The challenge now is not compliance with individual regimes, but efficient alignment across them.
A fragmented approach, i.e., developing separate governance processes for each jurisdiction, drives duplication, inconsistency, and operational inefficiency. A more effective strategy is to build a unified governance architecture that can accommodate multiple regulatory requirements within a single coherent framework.
At a minimum, this architecture should include: system-level auditability, so every agent action is recorded and reconstructable; risk-proportionate human oversight, with clearly defined escalation triggers; and explainability that covers full decision flows rather than isolated model outputs.
Such an approach allows companies to map regulatory requirements onto a consistent operational model. Singapore’s focus on agent behaviour, the EU’s emphasis on risk categorisation, and the UK’s principles-based supervision can all be addressed within this structure.
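The mapping exercise described above can be sketched as a control catalogue in which each internal capability is tagged with the regimes it helps satisfy. Regime and control names here are hypothetical labels, not official identifiers.

```python
# Hypothetical catalogue: one internal control, many regimes served.
CONTROLS = {
    "system_level_audit_trail": ["SG_MGF", "EU_AI_ACT", "UK_PRINCIPLES"],
    "risk_proportionate_human_oversight": ["SG_MGF", "EU_AI_ACT"],
    "decision_flow_explainability": ["SG_MGF", "UK_PRINCIPLES"],
}

def coverage(regime: str) -> list[str]:
    """List the controls that contribute to a given regulatory regime."""
    return [c for c, regimes in CONTROLS.items() if regime in regimes]

print(coverage("SG_MGF"))
```

The point of the structure is that a single well-designed control is evidenced once and reused across jurisdictions, instead of being rebuilt per regime.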
Singapore’s agentic AI framework does more than introduce new guidance. It reflects a broader recognition that AI in financial services is entering a new phase, one in which systems are not only intelligent but autonomous.
In payments, this changes the very nature of deployment: success is no longer defined solely by performance metrics, but by the ability to demonstrate control, accountability, and transparency in live environments.
Firms that rely on legacy model risk frameworks will find them increasingly insufficient. Those that treat governance as an integral part of system design will be better positioned to scale AI across critical workflows.
According to McKinsey, only around 5% of organisations have successfully scaled AI into production. In this sense, Singapore’s framework does not simply raise the bar for compliance: it redefines what it means for AI to be production-ready.

Tim Khamzin is the founder and CEO of Vivox AI, developing technology for financial crime operations through atomic AI agents that combine automation and intelligence. His work applies AI to real-world financial settings, guided by close collaboration with leading global financial institutions. Previously, he led a digital transformation at Central Europe’s largest bank, reducing the workforce from 24,000 to 5,800 and demonstrating how technology enhances efficiency, control, and resilience.
Vivox AI builds trusted, explainable AI agents for AML, KYB, and financial crime compliance workflows, enabling payments companies, fintechs, and banks to accelerate onboarding from days to minutes. Vivox automates up to 90% of KYB and AML workflows, reducing due diligence to under 30 minutes. Built on domain-trained models with full auditability, traceability, and regulatory alignment, the platform securely and efficiently supports financial institutions globally.
The Paypers is a global hub for market insights, real-time news, expert interviews, and in-depth analyses and resources across payments, fintech, and the digital economy. We deliver reports, webinars, and commentary on key topics, including regulation, real-time payments, cross-border payments and ecommerce, digital identity, payment innovation and infrastructure, Open Banking, Embedded Finance, crypto, fraud and financial crime prevention, and more – all developed in collaboration with industry experts and leaders.