Agentic AI · Enterprise Strategy · 2026

Agentic AI Is Already in Your Organization.
You Just Don’t Know It Yet.

The agents are not coming. They are already here, running workflows, accessing data, and making decisions. The question is whether anyone is watching.

Mechie Nkengla, Ph.D.
Chief AI & Data Strategist · Data Products LLC
May 2026 · 9 min read
82% of companies already have AI agents in active use (SailPoint / Master of Code Global, 2026)
80% have seen an agent act outside its intended boundaries (Master of Code Global, 2026)
40% of enterprise apps will embed AI agents by end of 2026 (Gartner, August 2025)

Most conversations about agentic AI start in the future tense. Leaders talk about what they plan to deploy, what pilots are being considered, what budget has been earmarked. The reality in most organizations is considerably further along than that conversation suggests.

Across the organizations I work with, a pattern keeps repeating. The executive team is discussing agentic AI strategy at the board level. Meanwhile, three floors down, a department has already connected an AI agent to the CRM, given it access to the contract database, and pointed it at the customer communications inbox. No IT review. No risk assessment. No one outside that department knows it is running.

This is not a technology problem. It is a visibility problem, and it is far more common than anyone wants to admit. SailPoint and Master of Code Global found that 82% of companies now have AI agents in active use. More pointed is what happens next: 80% of those same organizations have experienced an agent acting outside its intended scope, accessing data it was not meant to access, triggering workflows it was not designed to trigger, or producing outputs that no one expected and no one reviewed.

If you are a CDO, CRO, or chief compliance officer and those numbers surprise you, that is exactly the problem.

What Makes Agents Different From Every AI System You Have Governed Before

Traditional AI systems, the ones most risk and governance frameworks were built around, are essentially reactive. You give them an input, they return an output. The decision space is bounded. The failure modes are predictable. A model that misclassifies a transaction or returns a wrong prediction is a problem, but it is a contained one.

Agents are architecturally different. They are designed to act, not just respond. A well-built agent can receive a goal, break it into steps, decide which tools to use, call external APIs, read and write data, and loop back on its own output before a human sees any of it. In a customer service context, that might mean an agent reading a complaint, checking an order system, issuing a refund, and sending a confirmation, all without a human in the loop. When that sequence works perfectly, it is genuinely impressive. When it goes wrong at step two, the damage compounds through steps three and four before anyone notices.
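The customer service sequence above can be reduced to a minimal sketch. Everything here is illustrative: the tool names, the plan, and the return values are invented for the example, not taken from any real agent framework. The point is the control flow, and where the human sits in it.

```python
def traditional_ai(payload):
    """One bounded step: input in, output out, a human reviews what comes next."""
    return {"label": "refund_request", "confidence": 0.93}  # output only, no side effects

def agentic_ai(goal, tools):
    """Plan, act, and loop. Each step executes before any human sees the sequence."""
    plan = ["check_order", "issue_refund", "send_confirmation"]  # the agent's own plan
    actions_taken = []
    for step in plan:
        result = tools[step]()           # in a real system, each call has side effects
        actions_taken.append((step, result))
    return actions_taken                 # a human sees this only after the fact

# Stand-in tools; in production these would be live API calls.
TOOLS = {
    "check_order":       lambda: "order #1042 found",
    "issue_refund":      lambda: "refunded $59.00",
    "send_confirmation": lambda: "email sent",
}
```

Note where review happens in each path: `traditional_ai` stops at the output, while `agentic_ai` has already issued the refund and sent the email by the time anyone can inspect `actions_taken`. A failure at `check_order` propagates straight into the refund and the confirmation.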

MIT Sloan Management Review and BCG put it plainly in their 2025 research: executives have long relied on a simple framework where tools automate tasks and people make decisions. That framework is no longer adequate. Agents blur the line between the two in ways that most organizations have not yet reckoned with.

"The fast-paced development of agentic AI requires organizations to be agile while consistently upholding their data and AI governance standards."

Margery Connor, Chief Data & Analytics Officer, Chevron — MIT Sloan Management Review, 2025
Figure 1 — How an AI Agent Differs From Traditional AI
Traditional AI: a single prompt or data payload goes in, the model processes it in one bounded step, an output is returned, and a human reviews it and acts. Agentic AI: a goal is received, the agent plans independently, executes multiple steps (calling APIs, reading and writing data, looping on its own output), and takes action in the world, often before any human review.
Traditional AI waits to be reviewed. Agentic AI acts. That shift in sequencing is where the governance gap lives.

The Three Ways Agents Go Wrong Before Anyone Notices

Not all agentic failures are dramatic. Most are quiet, incremental, and only visible in retrospect. Based on what the research documents and what I see in practice, the failure modes cluster into three patterns.

Scope creep at the data layer

Agents are given access to systems to do their jobs. The problem is that the boundaries of that access are rarely specified with enough precision to hold up against a determined goal. An agent tasked with drafting client summaries may have legitimate access to the CRM. If that CRM connects to a contract database, and the contract database connects to billing history, the agent will follow the path to get the information it thinks it needs. 53% of organizations confirm their agents have access to sensitive data, and 58% say that access is occurring daily, according to SailPoint research published in 2026.

Unintended downstream actions

Modern enterprise systems are deeply connected. An agent that can send an email can, in many architectures, also trigger a Salesforce update, a Slack notification, or a workflow in a connected system. Each individual permission seems reasonable. The combination creates a blast radius that no one mapped when the agent was configured. Gartner has flagged this as the primary reason more than 40% of agentic AI projects are at risk of being cancelled by 2027, not because the technology fails, but because the organizational controls around it were never designed for autonomous multi-step execution.

Governance frameworks that were built for a different problem

McKinsey's 2026 AI Trust Maturity Survey found that only around 30% of organizations reach meaningful maturity in agentic AI governance specifically. Most have adapted their existing AI governance frameworks, which were designed to evaluate models that answer questions, not agents that take actions. The gap between those two problems is substantial.

Figure 2 — Agent Failure Modes by Frequency
Acted outside intended scope: 80%
Unauthorized data access: 39%
Restricted info handling errors: 33%
Source: Master of Code Global / SailPoint, 2026. Among organizations with AI agents in active use.

What a Real Agent Governance Framework Looks Like

The organizations getting this right share a specific discipline: they govern agents before they deploy them, not after something goes wrong. That sounds obvious. In practice, it requires three things that most governance programs do not currently have.

Agent Inventory

You cannot govern what you cannot see. A central registry of every active agent, its permissions, its connected systems, and its intended scope is the minimum starting point. Most organizations discovering their agent exposure do so during an audit, not from proactive monitoring.
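A registry does not need to be elaborate to be useful. A minimal sketch, with entirely hypothetical field names and an invented example agent, might look like this:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str                  # an accountable team, not "whoever set it up"
    intended_scope: str         # what the agent is supposed to do, in plain language
    connected_systems: list     # every system it can reach, directly or via integration
    permissions: list           # the specific grants, not "admin"
    last_reviewed: str          # ISO date of the last permission review

registry = [
    AgentRecord(
        name="client-summary-agent",
        owner="sales-ops",
        intended_scope="Draft client summaries from CRM records",
        connected_systems=["crm"],
        permissions=["crm:read"],
        last_reviewed="2026-04-01",
    ),
]

def agents_touching(system):
    """The minimum useful query: which agents can reach a given system?"""
    return [a.name for a in registry if system in a.connected_systems]
```

Even this small a structure answers the question most organizations cannot currently answer during an audit: `agents_touching("crm")` lists every agent with a path into that system.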

Scoped Permissions

Every agent should operate on least-privilege principles: access only to the systems and data it actually needs for its defined task. Permissions should be time-bound where possible and reviewed on a regular cycle, not granted once and forgotten.
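Least-privilege with expiry can be expressed as a default-deny check. This is a sketch under assumptions, not a real policy engine: the grant tuples, agent names, and the expiry date are invented for illustration.

```python
from datetime import date

# Hypothetical grant records: (agent, resource, action, expiry date).
# Anything not listed here is denied by default.
GRANTS = [
    ("client-summary-agent", "crm", "read", date(2026, 9, 30)),
]

def is_allowed(agent, resource, action, today=None):
    """Default deny; every grant is explicit, scoped to one action, and time-bound."""
    today = today or date.today()
    return any(
        a == agent and r == resource and act == action and today <= expiry
        for a, r, act, expiry in GRANTS
    )

# The granted, unexpired path works:
assert is_allowed("client-summary-agent", "crm", "read", date(2026, 5, 1))
# The transitive path the agent "thinks it needs" is simply never granted:
assert not is_allowed("client-summary-agent", "billing", "read", date(2026, 5, 1))
# And access dies with the grant, not with someone remembering to revoke it:
assert not is_allowed("client-summary-agent", "crm", "read", date(2026, 10, 1))
```

The time-bound grant is the piece most programs skip: it turns "granted once and forgotten" into an access that expires unless a review deliberately renews it.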

Human-in-the-Loop Design

For any agent that triggers consequential actions, such as financial transactions, customer communications, or data modifications, there should be a defined point at which a human reviews before execution. The threshold for what counts as consequential should be set deliberately, not defaulted.
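The deliberate part is writing the threshold down as an explicit rule rather than a default. A minimal sketch, with an assumed refund limit and invented action names standing in for whatever the business actually decides:

```python
# Actions that always pause for human review before execution.
REQUIRES_REVIEW = {"issue_refund", "send_customer_email", "modify_record"}

# An assumed policy value: refunds at or below this execute automatically.
AUTO_APPROVE_REFUND_LIMIT = 50.00

def needs_human(action, amount=0.0):
    """Return True when an agent action must be held for human approval."""
    if action == "issue_refund":
        return amount > AUTO_APPROVE_REFUND_LIMIT
    return action in REQUIRES_REVIEW

assert needs_human("issue_refund", amount=59.00)      # held for review
assert not needs_human("issue_refund", amount=12.00)  # below the deliberate limit
assert needs_human("send_customer_email")             # customer-facing, always gated
assert not needs_human("check_order")                 # read-only, no gate needed
```

The specific values matter less than the fact that they exist as named, reviewable constants: "consequential" is a decision someone made, not a behavior the agent fell into.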

The ROI Case for Getting This Right

Agents that successfully reach production deliver an average 171% ROI (192% in the US), according to research compiled by Digital Applied in 2026. The agents that fail share a common trait: insufficient governance infrastructure before deployment. The 12% of organizations that succeed in scaling agentic AI documented their governance approach before go-live, not after.

Gartner's best-case projection has agentic AI driving roughly 30% of enterprise application software revenue by 2035, approaching $450 billion. The organizations positioned to capture that value will be the ones that built the oversight structures now, before agent sprawl made retroactive governance impractical.

"Are we simply adding a new tool to our business, or are we introducing a new, nonhuman actor into our organization? How we respond will define the next era of management."

MIT Sloan Management Review / BCG, The Emerging Agentic Enterprise, 2025

That question does not have a comfortable answer for most leadership teams right now. But it is the right question to be asking, and asking it now, before the agents are embedded deeply enough that unraveling them becomes its own problem, is the only move that leaves you in control of the outcome.

Not sure how many agents are running in your organization?

We help organizations build agent inventories, define permission frameworks, and establish oversight structures before the sprawl sets in.

Request an Agentic AI Assessment