Over-privileged AI tied to 4.5 times higher incident rates, study finds

March 21, 2026 · 2 min read · 2 sources

Organizations that give AI systems more access than they need are reporting far more security incidents, according to a Teleport study covered by Infosecurity Magazine. The survey found that companies running “over-privileged” AI had a 76% incident rate and saw incidents at 4.5 times the rate of organizations with tighter controls.

The report focuses on enterprise AI assistants, copilots and agents connected to internal tools and infrastructure. The risk is not simply model error. It is what happens when an AI system has broad access to cloud environments, source code, secrets, internal databases or admin functions. In those setups, prompt injection, tool abuse, stolen credentials or unsafe automation can turn a bad output into a real security event.

The findings add to a growing body of guidance warning companies not to treat AI like a low-risk productivity tool once it can take actions inside corporate systems. Security teams have been pushing for least-privilege access, short-lived credentials, approval gates for sensitive actions and detailed logging. Those controls matter even more for agentic AI, which can operate at machine speed and across multiple connected services.
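The controls listed above can be made concrete. The following is a minimal sketch, not taken from the study or any specific product, of what a least-privilege gate around an agent's tool calls might look like: an explicit allowlist (deny by default), a human-approval hook for high-impact actions, and logging of every decision. The tool names, policy fields, and `request_human_approval` stub are all invented for illustration.

```python
# Hypothetical least-privilege gate for an AI agent's tool calls.
# Anything not explicitly allowlisted is denied; high-impact tools
# additionally require human sign-off, and every decision is logged.

import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent-gate")

# Explicit allowlist: tools absent from this table are denied by default.
TOOL_POLICY = {
    "search_docs":    {"allowed": True, "needs_approval": False},
    "read_ticket":    {"allowed": True, "needs_approval": False},
    "deploy_service": {"allowed": True, "needs_approval": True},  # high impact
    # "drop_table" is intentionally absent -> denied
}

def request_human_approval(tool: str, args: dict) -> bool:
    """Stand-in for a real approval flow (ticketing, chat prompt, etc.)."""
    log.info("approval requested for %s(%s)", tool, args)
    return False  # deny until a human explicitly signs off

def gated_call(tool: str, args: dict) -> dict:
    """Check policy before dispatching a tool call; log every decision."""
    policy = TOOL_POLICY.get(tool)
    if policy is None or not policy["allowed"]:
        log.info("DENY %s: not in allowlist", tool)
        return {"ok": False, "reason": "tool not permitted"}
    if policy["needs_approval"] and not request_human_approval(tool, args):
        log.info("DENY %s: approval not granted", tool)
        return {"ok": False, "reason": "awaiting human approval"}
    log.info("ALLOW %s(%s)", tool, args)
    return {"ok": True}  # dispatch to the real tool implementation here
```

The default-deny shape matters: a prompt-injected request for an unlisted tool fails closed instead of depending on the model to refuse.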

Teleport’s data should be read with some caution. The figures come from a vendor-backed survey, not a public breach dataset, and the summary report does not fully answer key questions such as how “over-privileged” was defined, what qualified as an “incident,” or how large the respondent pool was. That means the study shows a strong correlation, but not proof that broad AI permissions directly caused every incident.

Even so, the message is clear: the old identity and access management problem is now showing up in AI deployments. A chatbot with read-only access is one thing. An AI agent with the keys to production is another. For organizations rolling out internal AI tools, the safer default is narrow permissions, segmented environments and human review before high-impact actions are allowed. For staff accessing AI tools remotely, basic protections such as a trusted VPN can help reduce exposure, but they do not solve overbroad permissions inside the enterprise.
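For the credential side of that safer default, one common pattern is to hand the agent a short-lived token scoped to read-only operations rather than a standing admin credential. This is a hedged sketch under assumed names; `mint_token`, the scope strings, and the TTL value are illustrative, not any vendor's API.

```python
# Hypothetical short-lived, narrowly scoped credential for an AI agent.
# The token carries an explicit scope set and a fast expiry, so a leaked
# or misused credential has limited blast radius.

import secrets
import time

def mint_token(scopes: set[str], ttl_seconds: int = 300) -> dict:
    """Issue a token limited to specific scopes that expires quickly."""
    return {
        "token": secrets.token_urlsafe(16),
        "scopes": frozenset(scopes),
        "expires_at": time.monotonic() + ttl_seconds,
    }

def authorize(token: dict, required_scope: str) -> bool:
    """Refuse the action if the token is expired or lacks the scope."""
    if time.monotonic() >= token["expires_at"]:
        return False
    return required_scope in token["scopes"]

# The agent gets read-only scopes; a production deploy is refused
# even if the agent is tricked into attempting one.
agent_token = mint_token({"repo:read", "tickets:read"})
```

A chatbot holding `repo:read` for five minutes is a very different risk than one holding the keys to production indefinitely, which is the distinction the article draws.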
