Security researchers have detailed a multi-stage exploit named “GrafanaGhost” that chains a prompt-injection vulnerability in an integrated AI assistant with a flaw in the Grafana data visualization platform. The attack can silently exfiltrate sensitive data, including session tokens and authentication cookies, from any user who views a compromised dashboard.
The exploit, discovered by researchers at Horizon3.ai, begins with a prompt injection attack targeting a Large Language Model (LLM) integrated with a Grafana instance. An attacker crafts a malicious prompt that bypasses the AI’s safety guardrails, tricking it into generating markdown that carries a hidden payload. That payload then leverages a second vulnerability, tracked as CVE-2024-34862: an incomplete URL sanitization flaw in Grafana's markdown renderer.
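The advisory does not publish Grafana's sanitizer, so the snippet below is only a hypothetical sketch of the bug class described: a blocklist-based URL check that rejects obviously dangerous schemes such as javascript: but lets data: URLs pass through untouched. The function and scheme list are invented for illustration.

```python
# Hypothetical illustration of an "incomplete URL sanitization" flaw;
# this is NOT Grafana's actual code.
BLOCKED_SCHEMES = ("javascript:", "vbscript:")  # incomplete blocklist (assumption)

def naive_sanitize_url(url: str) -> str:
    """Return the URL if it looks safe, or an empty string otherwise."""
    normalized = url.strip().lower()
    if any(normalized.startswith(s) for s in BLOCKED_SCHEMES):
        return ""
    return url

print(naive_sanitize_url("javascript:alert(1)"))                 # stripped
print(naive_sanitize_url("data:image/svg+xml;base64,PHN2Zy8+"))  # slips through
```

A blocklist like this fails open: any scheme the author did not anticipate is treated as safe, which is why allowlisting known-good schemes (https:, relative paths) is the usual fix for this class of bug.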
The LLM-generated code includes a specially crafted image tag with a data: URL that embeds JavaScript inside an SVG file. When an unsuspecting user views the Grafana dashboard containing the malicious markdown, their browser executes the hidden JavaScript. This cross-site scripting (XSS) attack allows the threat actor to steal information accessible within the browser's context without any further user interaction.
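The exact GrafanaGhost payload has not been published, so the following is a benign reconstruction of the payload shape the researchers describe: a markdown image whose data: URL wraps an SVG containing script. All names and the script body are invented stand-ins.

```python
import base64

# Benign, illustrative payload shape only; the real exfiltration code
# and image alt text are assumptions.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg">'
    '<script>console.log("script ran")</script>'  # stand-in for token-stealing JS
    "</svg>"
)
data_url = "data:image/svg+xml;base64," + base64.b64encode(svg.encode()).decode()
malicious_markdown = f"![metrics]({data_url})"  # what the LLM is tricked into emitting
print(malicious_markdown[:45])
```

Because the script is base64-encoded inside the data: URL, nothing in the rendered markdown looks like JavaScript to a casual reviewer or a naive scheme filter, which is what makes the payload easy to hide.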
The impact of a successful GrafanaGhost attack is significant. The silent nature of the data exfiltration makes it difficult to detect, posing a risk to organizations that use Grafana to display operational metrics, user data, or other sensitive business intelligence. Stolen session tokens could allow an attacker to hijack user accounts and gain unauthorized access to the platform.
Grafana Labs has addressed the URL sanitization flaw and released patches. Administrators are urged to update their instances immediately to versions 10.4.5, 10.5.2, 11.0.0-beta.2, or newer to mitigate the vulnerability. The incident highlights the growing security challenges of integrating AI models into existing applications, demonstrating how weaknesses in one system can be amplified by flaws in another.
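As a rough triage aid, an administrator could compare a reported Grafana version string against the patched minimums named above. This is a simplified sketch: it drops pre-release suffixes before comparing (so any 11.0.0 pre-release is treated as patched, an assumption), and it treats unlisted older series as unpatched.

```python
# Sketch: is a given Grafana version at or above the patched releases
# (10.4.5, 10.5.2, 11.0.0-beta.2) named in the advisory?

def parse(version: str) -> tuple[int, ...]:
    core = version.split("-")[0]  # drop pre-release suffix like "-beta.2" (simplification)
    return tuple(int(p) for p in core.split("."))

PATCHED = {(10, 4): (10, 4, 5), (10, 5): (10, 5, 2), (11, 0): (11, 0, 0)}

def is_patched(version: str) -> bool:
    v = parse(version)
    minimum = PATCHED.get(v[:2])
    if minimum is None:
        return v >= (11, 0, 0)  # unlisted series: assume only 11.x and later are fixed
    return v >= minimum

print(is_patched("10.4.4"))  # False: one release short of the fix
print(is_patched("10.5.2"))  # True
```

In practice the running version can be read from the instance itself (for example, Grafana's health endpoint reports a version field), then fed to a check like this.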




