The Open Source Security Foundation (OpenSSF), in collaboration with Google and Anthropic, has announced Project Glasswing, an initiative that will use artificial intelligence to find and fix security flaws in open-source software.
The project will use large language models (LLMs), including Google’s Gemini and Anthropic’s Claude, to automatically scan source code for vulnerabilities. The primary goal is to identify complex security bugs before malicious actors can discover and exploit them. By automating this discovery process, the initiative aims to secure the foundational open-source components that underpin countless applications and digital services worldwide.
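OpenSSF has not published Glasswing’s internal pipeline, so the following is only a minimal sketch of the general approach: sending a single source file to an LLM and asking it to flag likely flaws. The Anthropic Python SDK call shown is real; the model choice, prompt wording, and file path are illustrative assumptions, not details of the actual project.

```python
# Illustrative sketch only: Project Glasswing's real pipeline is not public.
# Sends one source file to an LLM and asks it to flag likely vulnerabilities.
# Requires `pip install anthropic` and an ANTHROPIC_API_KEY in the environment.
import pathlib

import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically


def review_file(path: str) -> str:
    """Ask the model to report likely security flaws in a single file."""
    source = pathlib.Path(path).read_text(encoding="utf-8", errors="replace")
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # model choice is illustrative
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Review the following code for security vulnerabilities "
                "(injection, memory safety, logic flaws). For each finding, "
                "report the line, the flaw class, and a suggested fix.\n\n"
                + source
            ),
        }],
    )
    # The first content block of the reply is the model's text answer.
    return response.content[0].text


if __name__ == "__main__":
    print(review_file("src/example.c"))  # hypothetical target file
```

A production system would batch files, deduplicate findings, and route confirmed issues to maintainers, but the core interaction is a code-plus-prompt request along these lines.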
This defensive effort comes as the cybersecurity community anticipates that attackers will increasingly use AI offensively, for example to find zero-day vulnerabilities at an accelerated pace. Project Glasswing is a direct response, aiming to equip defenders with equivalent AI-powered capabilities so software can be fortified before it is deployed. The 2024 discovery of a backdoor in the XZ Utils library highlighted the severe risks lurking in the software supply chain, underscoring the need for more advanced, scalable security tooling.
Project Glasswing will focus on identifying a range of flaws, from injection vulnerabilities to memory safety issues. Findings will be shared with the maintainers of the affected open-source projects to enable timely patching. While AI can analyze code at a scale no human team could match, experts caution that human oversight will remain essential for validating findings and weighing the nuanced context of complex vulnerabilities. The project represents a significant investment in strengthening the security of shared digital infrastructure.
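For concreteness, below is a small, hypothetical example of the injection class of flaw that such scanning targets; the function names, schema, and query are invented for illustration. The unsafe version splices user input directly into the SQL text, while the fixed version uses a parameterized query.

```python
# Hypothetical example of an injection flaw; names and schema are invented.
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # VULNERABLE: user input is spliced into the SQL text, so a value like
    # "x' OR '1'='1" rewrites the query and returns every row.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # FIXED: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.org')")
    print(find_user_unsafe(conn, "x' OR '1'='1"))  # leaks every row
    print(find_user_safe(conn, "x' OR '1'='1"))    # returns nothing
```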




