Security researchers have discovered a critical vulnerability in Grafana’s AI assistant that allows attackers to turn the tool into an unwitting agent for data theft. Dubbed ‘GrafanaGhost’ by researchers at Noma Security, the flaw uses a technique called indirect prompt injection to exfiltrate sensitive corporate data without triggering conventional security alerts.
The attack targets the “Summarize Dashboard” feature within Grafana’s AI assistant. An attacker with privileges to edit a Grafana dashboard can embed malicious instructions within its metadata, such as the title or panel descriptions. When a legitimate user later asks the AI to summarize this “poisoned” dashboard, the AI processes the hidden commands along with the legitimate content. Following these instructions, the AI assistant then sends sensitive information—including dashboard names, user IDs, and organization IDs—to an external, attacker-controlled server.
The primary impact of this vulnerability is covert data exfiltration. What makes the GrafanaGhost attack particularly dangerous is its stealth. The malicious network request is initiated by Grafana’s own backend AI service, not the user's browser. This action can appear as legitimate AI activity, bypassing many client-side security controls and making it difficult to detect in standard audit logs. The user receives a normal-looking summary, completely unaware that data has been stolen in the background.
Noma Security responsibly disclosed the findings to Grafana Labs in May 2024. In response, Grafana has acknowledged the vulnerability and published a security update detailing mitigation steps for users of the affected AI preview feature. This incident highlights a new class of threats as AI becomes more integrated into enterprise software, forcing organizations to reconsider how they secure applications that process untrusted inputs.




