Organizations that give AI systems more access than they need are reporting far more security incidents, according to a Teleport study covered by Infosecurity Magazine. The survey found that companies running “over-privileged” AI had a 76% incident rate, roughly 4.5 times that of organizations with tighter controls.
The report focuses on enterprise AI assistants, copilots and agents connected to internal tools and infrastructure. The risk is not simply model error. It is what happens when an AI system has broad access to cloud environments, source code, secrets, internal databases or admin functions. In those setups, prompt injection, tool abuse, stolen credentials or unsafe automation can turn a bad output into a real security event.
The findings add to a growing body of guidance warning companies not to treat AI like a low-risk productivity tool once it can take actions inside corporate systems. Security teams have been pushing for least-privilege access, short-lived credentials, approval gates for sensitive actions and detailed logging. Those controls matter even more for agentic AI, which can operate at machine speed and across multiple connected services.
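Those controls can be enforced at the point where an agent invokes tools. Below is a minimal sketch, not any vendor's actual implementation: the tool names, allowlist, and gateway function are all hypothetical, but the pattern — an allowlist for least privilege, an approval gate for high-impact actions, and a log entry for every call — mirrors the guidance described above.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Hypothetical policy: tools this agent may call, and which require a human sign-off.
ALLOWED_TOOLS = {"read_ticket", "search_docs", "restart_service"}
NEEDS_APPROVAL = {"restart_service"}  # high-impact actions gated on human review

def dispatch(tool: str, args: dict, approved: bool = False) -> dict:
    """Route an agent tool call through least-privilege and approval checks."""
    if tool not in ALLOWED_TOOLS:
        # Least privilege: anything outside the allowlist is denied and logged.
        log.warning("denied tool=%s (not in allowlist)", tool)
        raise PermissionError(f"tool {tool!r} is not permitted for this agent")
    if tool in NEEDS_APPROVAL and not approved:
        # Approval gate: high-impact actions pause until a human confirms.
        log.info("pending approval tool=%s", tool)
        return {"status": "pending_approval", "tool": tool}
    # Detailed logging: every executed action leaves an audit trail.
    log.info("executing tool=%s args=%s", tool, args)
    return {"status": "executed", "tool": tool, "args": args}
```

In a sketch like this, a low-risk call such as `dispatch("read_ticket", {"id": 42})` executes immediately, while `dispatch("restart_service", {})` returns a pending status until someone approves it.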
Teleport’s data should be read with some caution. The figures come from a vendor-backed survey, not a public breach dataset, and the summary report does not fully answer key questions such as how “over-privileged” was defined, what qualified as an “incident,” or how large the respondent pool was. That means the study shows a strong correlation, but not proof that broad AI permissions directly caused every incident.
Even so, the message is clear: the old identity and access management problem is now showing up in AI deployments. A chatbot with read-only access is one thing. An AI agent with the keys to production is another. For organizations rolling out internal AI tools, the safer default is narrow permissions, segmented environments and human review before high-impact actions are allowed. For staff accessing AI tools remotely, basic protections such as a trusted VPN can help reduce exposure, but they do not solve overbroad permissions inside the enterprise.
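The "narrow permissions by default" idea can also be sketched at the credential level. The snippet below is an illustrative toy, not a real IAM library: it mints a short-lived token scoped to specific actions, so an AI agent holding it can read but never touch production unless that scope was explicitly granted.

```python
import secrets
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedToken:
    """A hypothetical short-lived credential carrying an explicit scope set."""
    value: str
    scopes: frozenset
    expires_at: float  # Unix timestamp after which the token is dead

def mint_token(scopes, ttl_seconds: float = 300) -> ScopedToken:
    # Short-lived by construction: default lifetime is five minutes.
    return ScopedToken(secrets.token_urlsafe(16), frozenset(scopes),
                       time.time() + ttl_seconds)

def authorize(token: ScopedToken, scope: str) -> bool:
    # Deny expired tokens outright, then check the requested scope.
    if time.time() >= token.expires_at:
        return False
    return scope in token.scopes
```

A read-only chatbot would get a token scoped to `"read:tickets"`; a request for `"deploy:prod"` fails closed because the scope was never granted, and even granted scopes stop working once the TTL lapses.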