Background and context
Enterprise use of generative AI is growing far faster than most security programs were designed to handle. According to an Infosecurity Magazine report summarizing Zscaler research, enterprise AI usage jumped 91%, while Zscaler analysts found critical vulnerabilities in 100% of the enterprise AI systems they tested and were able to compromise 90% of them in under 90 minutes [Infosecurity Magazine]. Those are striking numbers, but they need careful framing: they reflect a vendor study of a defined set of systems and test conditions, not a universal measurement of every AI deployment in the market.
Even with that caveat, the findings line up with what security teams have been seeing across 2024 and 2025. Organizations are deploying chatbots, coding assistants, document summarizers, retrieval-augmented generation (RAG) tools, and agentic workflows into email, cloud storage, ticketing systems, and internal knowledge bases. At the same time, many employees are using unsanctioned consumer AI tools for work tasks, creating a “shadow AI” problem similar to shadow IT but with a much higher risk of sensitive data exposure. Zscaler has previously published research on rising AI/ML transaction volumes, blocked AI traffic, and data leakage concerns tied to public AI services [Zscaler].
Security agencies and standards bodies have been warning about this gap. OWASP’s Top 10 for LLM Applications highlights prompt injection, insecure output handling, training data poisoning, supply chain weaknesses, sensitive information disclosure, and excessive agency among the leading risks for AI applications [OWASP]. NIST’s AI Risk Management Framework makes a similar point from a governance angle: AI systems introduce risks that must be mapped, measured, and managed across the full lifecycle, not just at deployment [NIST].
What the technical findings likely mean
The Infosecurity report does not point to a single CVE, and that matters. Many AI security failures are not classic software bugs with a patch and a version number. They are design flaws, weak guardrails, dangerous integrations, or business logic issues that emerge when a language model is connected to enterprise data and tools.
One of the biggest issues is prompt injection. In a traditional application, untrusted input might trigger SQL injection or command injection. In an AI system, untrusted text can manipulate the model itself. A direct prompt injection happens when a user enters malicious instructions into a chatbot. An indirect prompt injection is more subtle: malicious instructions are hidden inside a web page, PDF, email, support ticket, or document that an AI assistant later reads. If the system is poorly designed, the model may treat that hostile content as instructions rather than data. OWASP lists prompt injection as the top LLM risk for good reason [OWASP].
For enterprises, the danger rises sharply when AI systems are connected to tools. A model that only generates text is one thing. A model that can search SharePoint, open support tickets, send email, query CRM records, or run code is another. If an attacker can manipulate an agent into taking actions, the result is not just bad output but unauthorized behavior. This is why “agentic AI” has become such a security concern. Excessive permissions, weak approval gates, and broad connector access can turn a prompt injection into data theft or workflow abuse.
Data leakage is the second major category. Employees routinely paste source code, contracts, customer records, financial data, legal drafts, and internal strategy documents into AI tools. If those tools are public services, the organization may lose visibility over retention, logging, downstream processing, or model-improvement settings. Even where vendors offer enterprise controls, misconfiguration is common. NIST and ENISA have both emphasized that AI systems can create privacy and confidentiality risks when organizations do not clearly define what data is allowed into them [NIST] [ENISA].
There is also a supply-chain problem. Most enterprise AI deployments are not standalone products. They rely on foundation models, APIs, vector databases, orchestration frameworks, browser extensions, plugins, and SaaS connectors. Every added component expands the attack surface. A weakness in a plugin, a stolen API key, an over-permissioned OAuth token, or a poisoned document in a RAG pipeline can all become entry points. In many environments, the AI layer inherits the security debt of every system it touches.
Another likely factor behind Zscaler’s “under 90 minutes” compromise claim is discoverability. AI systems often expose clear, testable behavior. Attackers can rapidly probe for weak system prompts, missing authorization checks, unrestricted tool access, and poor output filtering. Unlike some traditional intrusions, these attacks may not require malware or persistence. A successful compromise may simply involve convincing the AI to reveal data, ignore policy, or trigger a connected action.
Impact assessment
The organizations most at risk are not only early AI adopters. They are companies that deployed AI quickly without a full inventory of where models are used, what data they can access, and which actions they can take. That includes large enterprises rolling out copilots across office suites, software firms embedding code assistants into development pipelines, and regulated sectors such as healthcare, finance, legal services, and government where sensitive data is abundant.
The severity can range from moderate to critical depending on how the AI system is integrated. If the exposure is limited to a public chatbot with no internal access, the main risk may be confidentiality loss through user prompts. If the system has connectors into email, cloud drives, ticketing platforms, or internal databases, the consequences are much more serious: bulk data exfiltration, unauthorized messages, altered records, compliance violations, and reputational damage.
There is also a hidden operational risk. AI-generated outputs can be wrong, manipulated, or incomplete, yet users often treat them as authoritative. That creates a second-order security issue: staff may act on false summaries, approve unsafe code, or trust instructions generated from poisoned content. In other words, AI risk is not just about direct compromise. It is also about decision contamination.
For individuals, the immediate effect is usually privacy-related. Employee prompts may contain personal data, client details, or confidential work product. For customers, the risk is indirect but significant: if an enterprise AI system can access support histories, contracts, health records, or financial documents, a compromise could expose that information even when the customer never used the AI system directly.
Why the findings matter beyond one vendor report
Zscaler’s claims should be read as a warning sign, not a census of the whole market. Vendor studies can have limited scope, and the exact methodology matters. Still, the themes are consistent with public guidance from OWASP, NIST, and ENISA: AI systems are highly attractive targets because they combine untrusted input, probabilistic behavior, and broad integration into business processes [OWASP] [NIST] [ENISA].
That combination makes AI security different from ordinary SaaS governance. Blocking a few domains is not enough. Organizations need visibility into prompts, uploads, connectors, model APIs, and agent actions. They also need stronger privacy controls, including transport security and data handling rules; for remote staff and distributed teams, protecting access to cloud tools with vetted VPN service options can help reduce exposure on untrusted networks, though it does not solve AI-specific flaws by itself.
How to protect yourself
1. Inventory every AI system and connector. Know which public and internal AI tools are in use, who uses them, what data they can access, and what actions they can perform. Include browser extensions, plugins, and API-based integrations.
2. Classify data and set hard rules. Define which information must never be pasted into external AI systems: credentials, source code, regulated personal data, legal documents, unreleased financials, and customer records should be on that list unless a vetted enterprise workflow explicitly permits it [NIST].
3. Apply least privilege to AI tools. Give copilots and agents the minimum access they need. Avoid broad read access across file stores, inboxes, and business systems. Require human approval for sensitive actions such as sending emails, changing records, or executing code.
4. Test for prompt injection and indirect injection. Security teams should red-team AI systems using malicious prompts, poisoned documents, hostile web content, and manipulated tickets or emails. Treat untrusted content as hostile input, not passive data [OWASP].
5. Monitor for abnormal AI behavior. Log prompts, retrieval events, connector usage, output destinations, and agent actions where privacy and policy allow. Watch for unusual spikes in document retrieval, prompt patterns associated with jailbreaks, and unexpected outbound transfers.
6. Lock down identities and tokens. Protect API keys, OAuth grants, and service accounts. Rotate credentials regularly and review token scopes. Many AI compromises are really identity and integration failures in disguise.
7. Train employees on shadow AI risk. Staff need simple guidance: what tools are approved, what data is prohibited, and how to report suspicious AI behavior. Most enterprise AI exposure begins with normal productivity habits, not overtly malicious actions.
8. Protect access on the network side too. Use MFA, device posture checks, and encrypted connections. For users who frequently access cloud AI tools from public or travel networks, a trusted hide.me VPN can reduce interception risk, though the bigger challenge remains data governance and application design.
The bottom line
The headline numbers from Zscaler are dramatic, but the broader conclusion is hard to dispute: enterprise AI adoption is moving faster than enterprise AI security. The most dangerous failures are not only software bugs. They are unsafe defaults, overpowered integrations, weak governance, and misplaced trust in model behavior. For defenders, the task now is to treat AI systems as part application, part identity plane, and part insider-risk channel. Organizations that do that early will be better positioned than those still treating AI as just another productivity feature.




