Over-privileged AI tied to 4.5 times higher incident rates, study finds

March 21, 2026 · 2 min read · 2 sources
Organizations that give AI systems more access than they need are reporting far more security incidents, according to a Teleport study covered by Infosecurity Magazine. The survey found that companies running "over-privileged" AI reported a 76% incident rate, 4.5 times that of organizations with tighter controls.

The report focuses on enterprise AI assistants, copilots and agents connected to internal tools and infrastructure. The risk is not simply model error. It is what happens when an AI system has broad access to cloud environments, source code, secrets, internal databases or admin functions. In those setups, prompt injection, tool abuse, stolen credentials or unsafe automation can turn a bad output into a real security event.

The findings add to a growing body of guidance warning companies not to treat AI like a low-risk productivity tool once it can take actions inside corporate systems. Security teams have been pushing for least-privilege access, short-lived credentials, approval gates for sensitive actions and detailed logging. Those controls matter even more for agentic AI, which can operate at machine speed and across multiple connected services.
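One of those recommended controls, short-lived credentials, can be illustrated with a minimal sketch. The function names and the five-minute TTL below are illustrative assumptions, not any vendor's API:

```python
# Sketch of short-lived credential issuance for an AI agent.
# issue_token / is_valid and the TTL are hypothetical, for illustration only.
import secrets
import time

TTL_SECONDS = 300  # five-minute credential lifetime


def issue_token() -> dict:
    """Mint a random token that expires after TTL_SECONDS."""
    return {
        "token": secrets.token_urlsafe(16),
        "expires_at": time.time() + TTL_SECONDS,
    }


def is_valid(cred: dict) -> bool:
    """A credential is usable only until its expiry timestamp."""
    return time.time() < cred["expires_at"]


cred = issue_token()
print(is_valid(cred))  # True while fresh; False once the TTL lapses
```

The point of the short lifetime is that a credential leaked through prompt injection or log exposure becomes useless within minutes rather than remaining a standing key.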

Teleport’s data should be read with some caution. The figures come from a vendor-backed survey, not a public breach dataset, and the summary report does not fully answer key questions such as how “over-privileged” was defined, what qualified as an “incident,” or how large the respondent pool was. That means the study shows a strong correlation, but not proof that broad AI permissions directly caused every incident.

Even so, the message is clear: the old identity and access management problem is now showing up in AI deployments. A chatbot with read-only access is one thing. An AI agent with the keys to production is another. For organizations rolling out internal AI tools, the safer default is narrow permissions, segmented environments and human review before high-impact actions are allowed. For staff accessing AI tools remotely, basic protections such as a trusted VPN can help reduce exposure, but they do not solve overbroad permissions inside the enterprise.
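The "narrow permissions plus human review" default can be sketched as a deny-by-default gate on agent tool calls. All tool names and the approval flag below are hypothetical examples, not a specific product's interface:

```python
# Minimal sketch of a least-privilege gate for AI agent tool calls.
# Tool names and the human-approval flag are illustrative assumptions.

HIGH_IMPACT = {"delete_records", "deploy_to_prod"}      # require human review
ALLOWED = {"search_docs", "read_ticket"} | HIGH_IMPACT  # anything else is denied


def authorize(tool: str, approved_by_human: bool = False) -> bool:
    """Deny by default: unknown tools are rejected outright,
    and high-impact tools also need an explicit human approval."""
    if tool not in ALLOWED:
        return False
    if tool in HIGH_IMPACT and not approved_by_human:
        return False
    return True


print(authorize("read_ticket"))                             # True: read-only
print(authorize("deploy_to_prod"))                          # False: needs review
print(authorize("deploy_to_prod", approved_by_human=True))  # True
print(authorize("drop_database"))                           # False: not allowlisted
```

The key design choice is that the allowlist, not the model's output, decides what the agent may do; a prompt-injected request for an unlisted tool fails closed.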
