By Gal Helemski, Co-founder and CPO, PlainID
Across every industry, the proliferation of artificial intelligence is fundamentally reshaping the workforce, elevating consumer expectations, and redefining the value and vulnerability of data. We are shifting into an era where agentic AI becomes an active participant in the core of enterprise systems, including those in financial services. These systems are the new class of “digital employees,” accessing databases, invoking tools and services, and acting with increasing autonomy to perform tasks from executing trades to assessing loan applications. This evolution from a supportive tool to an autonomous actor promises unprecedented efficiency, but it also introduces a critical new challenge. For an industry built on trust and regulatory oversight, the question is not if we will adopt agentic AI, but how we will build the necessary trust into these systems from the ground up.
The challenge is magnified in a multi-agent system (MAS), where one AI agent’s decisions affect another’s actions, creating a complex, high-speed web of interactions and an entirely new chain of command. A recent, landmark paper from the Cloud Security Alliance (CSA), “Agentic AI Identity and Access Management: A New Approach,” confirms that legacy governance models are fundamentally unfit for this new reality. As the paper notes, a multi-agent system can exhibit a "Confused Deputy" problem, in which an agent with broad permissions systematically explores the limits of its access to perform its task, potentially misusing that access in ways its creators never intended.
The CSA paper attributes this governance breakdown to key challenges inherent in traditional approaches.
To solve this, a strategic pivot is required. The focus must shift from merely verifying an AI’s identity—for example, knowing that a specific AI agent can access a private customer account knowledgebase—to governing its authorization: knowing precisely what actions it is allowed to perform, when, and under what specific conditions, consistently across every part of the agentic flow, including data and tool usage.
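To make the pivot concrete, here is a minimal sketch of the difference between an identity check and an authorization decision. The agent IDs, resources, and task names are hypothetical, and the policy logic is deliberately simplified; the point is that the decision considers the action, the agent's current task, and runtime conditions, not just who the agent is.

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical request shape: identity alone is one field among several.
@dataclass
class AccessRequest:
    agent_id: str    # who (identity)
    action: str      # what it wants to do, e.g. "read"
    resource: str    # what it wants to do it to
    task: str        # the business task it is currently performing
    local_time: time # a runtime condition

def authorize(req: AccessRequest) -> bool:
    """Toy policy decision: identity is necessary but never sufficient."""
    # Identity check: is this a known agent at all?
    if req.agent_id not in {"support-agent-7", "loan-agent-2"}:
        return False
    # Authorization check: the action must be tied to the agent's task.
    allowed_tasks = {
        ("support-agent-7", "read", "customer_kb"): {"answer_customer_query"},
        ("loan-agent-2", "read", "loan_applications"): {"assess_loan"},
    }
    if req.task not in allowed_tasks.get(
        (req.agent_id, req.action, req.resource), set()
    ):
        return False
    # Conditional check: only during business hours.
    return time(9, 0) <= req.local_time <= time(17, 0)
```

A known agent reading the knowledgebase for a customer query at 10:30 passes; the same agent attempting the same read for an unrelated task, or outside business hours, is denied even though its identity is valid.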
Think of it as the evolution of a passport. A traditional passport is a static form of identity; it confirms who you are. In the context of financial services security, a next-generation "dynamic passport," by contrast, provides real-time authorization for specific activities at a highly granular level, based on changing conditions. This dynamic, action-level, real-time authorization is crucial for managing risk and ensuring compliance.
Fortunately, a modern architecture for this challenge is gaining consensus. The CSA paper calls for a radical paradigm shift, as the agentic AI era requires a purpose-built end-to-end, multi-layered approach to security. A cornerstone of this new model is a dynamic access control layer, including a robust, centralized policy-based framework for authorization.
This approach brings AI operations into the light by externalizing the rules of operation, creating both clarity and traceability: the rules live outside the agents that they govern, and every decision can be recorded with its full context.
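The externalization can be sketched as follows. This is an illustrative pattern, not any specific product's API: policies are data held in a central store, a single decision function evaluates them, and each verdict is logged with the policy that produced it, so auditors can see not just that access occurred but why.

```python
import json
from datetime import datetime, timezone

# Policies live outside the agents' code, as data. Changing a rule
# means editing this store, not recoding or redeploying agents.
POLICIES = [
    {
        "id": "pol-loan-pii",
        "effect": "allow",
        "role": "loan_processor",
        "action": "read",
        "resource": "loan_applications",
        "condition": {"purpose": "assess_loan"},  # access tied to purpose
    },
]

def decide(role: str, action: str, resource: str, context: dict) -> dict:
    """Central policy decision: returns a verdict plus the policy that drove it."""
    for pol in POLICIES:
        if (pol["role"], pol["action"], pol["resource"]) == (role, action, resource):
            if all(context.get(k) == v for k, v in pol["condition"].items()):
                return {"decision": pol["effect"], "policy": pol["id"]}
    return {"decision": "deny", "policy": None}  # default deny

def audited_decide(role: str, action: str, resource: str, context: dict) -> dict:
    verdict = decide(role, action, resource, context)
    # Every decision is logged with who, what, and why attached.
    print(json.dumps({
        "at": datetime.now(timezone.utc).isoformat(),
        "role": role, "action": action, "resource": resource,
        "context": context, **verdict,
    }))
    return verdict
```

Because the decision function names the matching policy in its verdict, the audit trail carries the reason for every grant or denial, rather than a bare record that a role touched a file.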
To understand the impact, consider an AI-powered loan origination system.
Using traditional static guardrails, an AI agent holds the broad role of "Loan Processor." A new data privacy regulation is introduced. The IT team must now manually recode, redeploy, and verify every service that touches that data—a process that is slow, expensive, and prone to error. In the interim, the AI operates under its old permissions, creating a compliance gap. If auditors come calling, the team can likely produce only a simple log showing that the "Loan Processor" accessed a file, with no context as to why, or which steps, entities, and downstream decisions were tied to that access.
With dynamic access controls, an administrator updates a single, human-readable policy in the central authorization platform. The policy can enforce fine-grained rules, such as tying data access to the agent's specific task, limiting it to certain database schemas (down to the cell level), restricting it based on the user's region and business hours, or enforcing 'Just-in-Time' access for a limited window. The change is enforced instantly across the entire ecosystem, while providing precise, irrefutable proof of compliance for every AI-driven decision.
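The fine-grained conditions described above can be sketched as a single grant evaluated against a data-access request. The grant structure and field names here are hypothetical, invented for illustration; a real platform would express them in its own policy language. Each line of the check corresponds to one rule from the paragraph: task-tied access, column-level scoping, region, business hours, and a Just-in-Time expiry.

```python
from datetime import datetime, timedelta, timezone

def evaluate(grant: dict, request: dict, now: datetime) -> bool:
    """Evaluate one Just-in-Time grant against a data-access request."""
    checks = [
        request["task"] == grant["task"],           # tied to the agent's task
        request["column"] in grant["columns"],      # scoped to specific columns
        request["region"] == grant["region"],       # restricted by region
        9 <= now.astimezone(timezone.utc).hour < 17,  # business hours (UTC here)
        now < grant["expires_at"],                  # Just-in-Time: time-boxed
    ]
    return all(checks)

now = datetime(2024, 5, 6, 10, 0, tzinfo=timezone.utc)
grant = {
    "task": "assess_loan",
    "columns": {"income", "credit_score"},          # not the whole schema
    "region": "EU",
    "expires_at": now + timedelta(minutes=30),      # 30-minute JIT window
}
request = {"task": "assess_loan", "column": "income", "region": "EU"}
print(evaluate(grant, request, now))  # True: every condition holds
```

When the regulation changes, only the grant's data changes: shrink the column set or the time window, and the new constraint applies to the very next request, with no agent redeployment.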
For financial institutions, embracing AI and satisfying regulatory demands are not opposing forces. The same technological advancements that enable powerful autonomous systems can also provide the transparent, granular governance that regulators and customers have always demanded. The path to innovation runs through modern authorization. By building a dynamic and compliant access control layer for AI-dominant systems, AI architects and financial security leaders can ensure that as agentic AI evolves and promotes innovation and business growth, it also fosters trust and confidence at every level.