Two findings landed within a fortnight of each other, and they read like the same story told twice.
The first: SailPoint research found that 67% of organisations cannot account for what their staff share with AI tools, and that 35% of staff admit to using unapproved AI services for work. The second concerns agentic AI, autonomous tools that can act on systems on a user's behalf, and Ping Identity CEO Andre Durand puts it directly: "Enterprises are deploying autonomous AI faster than they can govern it."
The two findings are the same finding. Your organisation already has AI use it doesn't see. Adding agents on top doesn't create a new governance problem. It scales the one you already have.
The same gap, two faces
Shadow AI is a person typing a meeting transcript into a chatbot. Or a contract. Or a customer list. Or a piece of source code. The data leaves the organisation through a browser tab, with no log, no policy enforcement, no record of what went where. The IT department finds out about it the next time someone asks why the AI tool already knows things it shouldn't.
Agentic AI is a tool that holds a credential and clicks the buttons itself. Read email. Send email. Update a CRM record. Move money. Spawn another agent that does the next step. Where shadow AI risks data leaving, agentic AI risks data, money, and decisions moving without anyone watching.
Both fail in the same place: identity and authorisation. The organisation can't say with confidence who or what is doing what, with which credential, on whose behalf, and whether they should be.
What goes wrong specifically
The failure modes are worth naming, because they don't fit neatly into existing security categories:
- Permission drift. A human user has reasonable permissions. The agent acting on their behalf inherits them. Then it combines several reasonable permissions in a way no human ever would, producing an unreasonable outcome that nothing flags because each step looked fine. A sketch of this follows the list.
- Sub-agents and chained calls. An agent spawns another agent to do part of the work. The audit trail shows the original action and not the chain underneath it. By the time something goes wrong, "who did this" has stopped being a question with a clean answer.
- Static credentials, dynamic context. Traditional identity systems check you at the door, give you a token, and trust it for an hour. Agents need continuous evaluation. The session that was fine ten minutes ago might not be fine now: the user's risk score changed, the device fell out of compliance, the threat intel updated. Without re-evaluation, the token is a master key. This too is sketched after the list.
- Shadow consumption. People paste internal data into models the organisation does not control, with terms of service it has not read.
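To make permission drift concrete, here is a minimal sketch in Python. The permission names and the combination rule are invented for illustration; the point is that the per-step check every IAM system already does passes, while only a session-level check sees the pattern.

```python
# Toy illustration of permission drift. Permission names and the
# combination rule are hypothetical, not a real IAM API.

SENSITIVE_READS = {"crm:export_contacts", "finance:read_ledger"}
OUTBOUND_ACTIONS = {"email:send_external", "http:post_external"}

def step_allowed(identity_perms: set[str], action: str) -> bool:
    """Per-step check: the kind every IAM system already does."""
    return action in identity_perms

def session_flagged(actions_so_far: list[str]) -> bool:
    """Combination check: flag a sensitive read followed by an
    outbound action in the same session. No single step reveals it."""
    seen_sensitive = False
    for action in actions_so_far:
        if action in SENSITIVE_READS:
            seen_sensitive = True
        elif action in OUTBOUND_ACTIONS and seen_sensitive:
            return True
    return False

user_perms = {"crm:export_contacts", "email:send_external"}  # each reasonable alone
agent_session = ["crm:export_contacts", "email:send_external"]

assert all(step_allowed(user_perms, a) for a in agent_session)  # every step passes
assert session_flagged(agent_session)  # the combination is the problem
```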
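And the continuous-evaluation point, sketched with hypothetical stand-ins for the risk and compliance signals an identity provider would actually supply. The shape is what matters: re-evaluate on every request, not once at issuance.

```python
import time
from dataclasses import dataclass

@dataclass
class Token:
    subject: str
    issued_at: float
    ttl_seconds: int = 600  # short-lived by default

def current_risk(subject: str) -> float:
    return 0.2  # stand-in for a live risk feed

def device_compliant(subject: str) -> bool:
    return True  # stand-in for an MDM/compliance check

def authorise(token: Token) -> bool:
    """Re-evaluate on every request, not just at issuance."""
    if time.time() - token.issued_at > token.ttl_seconds:
        return False                       # token expired: re-authenticate
    if current_risk(token.subject) > 0.7:
        return False                       # risk rose since issuance
    if not device_compliant(token.subject):
        return False                       # device fell out of compliance
    return True

token = Token(subject="agent:invoice-bot", issued_at=time.time())
print(authorise(token))  # True now; may be False in ten minutes
```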
What actually helps
Most organisations don't need a new framework. They need to apply the things they already do for human users and machine accounts to agents and AI consumption, and accept that the line between "service account" and "AI agent" has blurred.
- Inventory the AI use you already have. Browser telemetry, network egress, finance records (look for unfamiliar SaaS subscriptions). You can't govern what you can't see. A sketch of egress-log matching follows this list.
- Approve a small number of AI tools and make them easier to use than the alternatives. Shadow AI is mostly people doing their jobs. Give them sanctioned options and they'll use them.
- Treat agents as first-class identities. Each agent gets its own credential, its own scope, its own audit trail. Not a copy of an admin's password.
- Short-lived credentials, narrow scope, explicit delegation. A long-lived API key for an agent is the same risk as a long-lived API key for anything else, only the agent makes more requests with it. This point and the previous one are sketched together below.
- Make humans approve the consequential actions. Sending money, deleting data, communicating externally. The agent can prepare the action; a person should still confirm it (sketched below).
- Log the chain, not just the final action. If an agent spawned a sub-agent, you need to be able to reconstruct what happened, end to end (sketched below).
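On the inventory point, a minimal sketch, assuming egress logs in a simple "timestamp user host bytes" format and an illustrative list of AI endpoints. In practice the endpoint list would come from a CASB or threat-intel feed.

```python
from collections import Counter

# Illustrative list; a real one comes from a curated feed.
KNOWN_AI_HOSTS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def ai_traffic(egress_log_lines):
    """Yield (user, host) for each connection to a known AI endpoint.
    Assumes lines of the form 'timestamp user host bytes'."""
    for line in egress_log_lines:
        _ts, user, host, _size = line.split()
        if host in KNOWN_AI_HOSTS:
            yield user, host

log = [
    "2025-01-10T09:01 alice api.openai.com 48213",
    "2025-01-10T09:02 bob internal.example.com 1011",
    "2025-01-10T09:04 alice claude.ai 90210",
]
print(Counter(ai_traffic(log)))  # who is talking to which AI service, how often
```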
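The identity and credential points, sketched together. The fields loosely mirror the idea behind OAuth 2.0 token exchange (an actor acting on a subject's behalf), but this is an illustration of the shape, not a real token format.

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    agent_id: str            # the agent's own identity, not the admin's
    on_behalf_of: str        # the human whose task this is
    scopes: frozenset        # only what this task needs
    issued_at: float = field(default_factory=time.time)
    ttl_seconds: int = 300   # minutes, not months
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, scope: str) -> bool:
        fresh = time.time() - self.issued_at < self.ttl_seconds
        return fresh and scope in self.scopes

cred = AgentCredential(
    agent_id="agent:crm-updater",
    on_behalf_of="user:alice",
    scopes=frozenset({"crm:update_record"}),
)
assert cred.allows("crm:update_record")
assert not cred.allows("crm:export_contacts")  # out of scope, denied
```

The useful property: revoking or expiring the agent's credential touches nothing belonging to the human it acted for.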
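The approval gate, sketched with invented action names. The agent can queue a consequential action; only a named person can release it.

```python
# Actions on this list are prepared by the agent but held for a human.
CONSEQUENTIAL = {"payments:send", "data:delete", "email:send_external"}

pending: list[dict] = []

def submit(action: str, payload: dict) -> str:
    """Agent-facing: consequential actions are queued, not executed."""
    if action in CONSEQUENTIAL:
        pending.append({"action": action, "payload": payload})
        return "queued for human approval"
    return execute(action, payload)

def approve(index: int, approver: str) -> str:
    """Human-facing: a named person releases the queued action."""
    item = pending.pop(index)
    return execute(item["action"], item["payload"], approver=approver)

def execute(action: str, payload: dict, approver: str = "auto") -> str:
    return f"executed {action} (approved by {approver})"

print(submit("crm:update_record", {"id": 42}))   # runs immediately
print(submit("payments:send", {"amount": 950}))  # held
print(approve(0, approver="user:alice"))         # human releases it
```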
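And the audit chain. Every event records its parent, so the full story behind a final action can be reconstructed; the field names are illustrative.

```python
audit_log = [
    {"id": "a1", "parent": None, "actor": "agent:planner", "action": "read:inbox"},
    {"id": "a2", "parent": "a1", "actor": "agent:drafter", "action": "draft:reply"},
    {"id": "a3", "parent": "a2", "actor": "agent:sender",  "action": "email:send_external"},
]

def chain(event_id: str) -> list[dict]:
    """Walk parent links back to the root: the full story, not the last step."""
    by_id = {e["id"]: e for e in audit_log}
    out = []
    while event_id is not None:
        event = by_id[event_id]
        out.append(event)
        event_id = event["parent"]
    return list(reversed(out))

for e in chain("a3"):
    print(f'{e["actor"]} -> {e["action"]}')
# agent:planner -> read:inbox
# agent:drafter -> draft:reply
# agent:sender -> email:send_external
```

Without the parent links, the log shows only the last line, which is exactly the "who did this" gap described above.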
The reason these findings keep landing is not that agentic AI is uniquely dangerous. It's that AI use has outpaced the controls organisations had in place for ordinary identity and access management. The fix is not new. It's catching up.
How Steelwise can help
Working out what AI use is already happening, what it would take to bring it under sensible controls, and what a defensible policy for agents looks like is the kind of review we run for clients. Get in touch.
Further reading
- ITPro: UK firms left in the dark over what workers are sharing with AI
- ITPro: Enterprises are adopting agents faster than they can secure them
- Computer Weekly: Why AI agents are triggering a rethink of enterprise identity