Risk of AI model collapse will push zero trust data governance, Gartner says

March 23, 2026 · 2 min read · 2 sources

Gartner predicts that by 2028, half of organizations will adopt zero trust data governance to reduce the risk of AI model collapse, according to reporting by Infosecurity Magazine. The forecast reflects growing concern that enterprise AI systems will be trained on polluted, low-quality, or synthetic data that weakens model performance over time.

Model collapse describes a failure mode in which models trained on increasing amounts of AI-generated content lose accuracy, diversity, and fidelity to real-world data. In practice, that can mean more hallucinations, amplified bias, weaker performance on edge cases, and less reliable outputs. Gartner’s framing treats this as a data integrity problem: organizations should not implicitly trust data simply because it comes from internal systems, known pipelines, or widely available online sources.

Zero trust data governance applies familiar security principles to AI data pipelines. That includes verifying data provenance, maintaining lineage, classifying data by trust level and sensitivity, enforcing access controls, and continuously checking datasets for contamination or drift. For security teams, the issue sits close to data poisoning and supply chain risk, even if it does not map to a specific vulnerability or CVE.
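To make that concrete, here is a minimal sketch of what a zero trust admission check on a training pipeline might look like. Everything here is hypothetical: the `provenance_manifest.json` schema, the trust tiers, and the `admit_for_training` helper are illustrative names, not a standard or a Gartner-specified control.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical provenance manifest: maps each dataset file to its expected
# SHA-256 digest, recorded source, and an assigned trust tier.
# (Illustrative schema only -- not a real standard.)
MANIFEST = json.loads(Path("provenance_manifest.json").read_text())
ALLOWED_TIERS = {"verified-internal", "vetted-vendor"}  # example policy

def sha256_of(path: Path) -> str:
    """Stream the file so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def admit_for_training(path: Path) -> bool:
    """Zero trust check: no manifest entry, wrong hash, or low tier -> reject."""
    entry = MANIFEST.get(path.name)
    if entry is None:
        return False                        # unknown provenance: never trust by default
    if sha256_of(path) != entry["sha256"]:
        return False                        # tampered or contaminated file
    return entry["trust_tier"] in ALLOWED_TIERS

training_set = [p for p in Path("datasets").glob("*.jsonl") if admit_for_training(p)]
```

The point of the sketch is the default-deny posture: anything without verifiable provenance never reaches the training set, mirroring how zero trust networking treats unauthenticated traffic.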

The timing matters because generative AI use has surged since 2022, while the public web is filling with machine-generated text, images, and code. Researchers have warned that recursive training on generated data can cause models to “forget” rare but important patterns and drift away from the original distribution. A widely cited paper, “The Curse of Recursion: Training on Generated Data Makes Models Forget,” helped establish model collapse as a serious technical concern.
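The mechanism can be shown with a toy simulation (a sketch, not the paper’s actual experiment): fit a Gaussian to a small sample, generate the next "training set" from the fit, and repeat.

```python
import numpy as np

# Toy sketch of recursive training, in the spirit of the paper: each
# generation fits a Gaussian to samples drawn from the previous generation's
# fit. With finite samples, the estimated scale tends to drift downward, so
# tail events (the "rare but important patterns") gradually disappear.
rng = np.random.default_rng(0)
n = 25                                       # small samples exaggerate the effect

data = rng.normal(0.0, 1.0, size=n)          # generation 0: "real" data
for gen in range(1, 41):
    mu, sigma = data.mean(), data.std()      # "train": fit mean and scale
    data = rng.normal(mu, sigma, size=n)     # next generation sees only model output
    if gen % 10 == 0:
        tail = np.mean(np.abs(data - mu) > 2.0)
        print(f"gen {gen:2d}: sigma={sigma:.3f}  frac beyond 2.0 of mean={tail:.3f}")
```

On most seeds the fitted scale shrinks steadily across generations; the exact trajectory is noisy, but the downward drift is the systematic effect the paper analyzes, and real training pipelines are only more complex, not immune.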

For enterprises, the likely impact is higher spending on governance tooling, provenance tracking, and policy controls around what data can be used for training and fine-tuning. Regulated sectors such as finance, healthcare, and government may move first, especially where AI outputs affect high-stakes decisions. The broader message is that AI security is expanding beyond model behavior and application controls into the trustworthiness of the data itself.

