AI-powered attack scans thousands of GitHub repositories for misconfigurations

April 10, 2026 · 2 min read · 1 source

A sophisticated attack campaign is using automation to scan thousands of public GitHub repositories for a common security misconfiguration, with the goal of stealing access tokens and compromising the software supply chain. Security researchers at Checkmarx have named the campaign "PRT-scan," noting it is the second such large-scale attack identified in recent months.

The threat actors are targeting repositories that use GitHub Actions, the platform's continuous integration and delivery (CI/CD) service. Specifically, the automated scans search for workflows with overly permissive GITHUB_TOKEN settings. When a vulnerable repository is found, the attacker submits a seemingly innocuous pull request containing a malicious workflow change; if a project maintainer runs it, the workflow exfiltrates the repository's access token to an attacker-controlled server.
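The misconfiguration pattern described above typically combines a broadly scoped token with a workflow trigger that outside contributors can fire. The sketch below is a hypothetical, illustrative example of such a risky configuration — the workflow name, trigger, and steps are assumptions for the sake of illustration, not details from the actual campaign:

```yaml
# .github/workflows/build.yml — illustrative ONLY; not from the campaign.
# This shows the *pattern* automated scans can look for.
name: build

on:
  pull_request_target:    # runs in the base repo's context with its secrets,
                          # even when the PR comes from an untrusted fork

permissions: write-all    # GITHUB_TOKEN gets full write access to the repo

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # checks out the untrusted PR code into a privileged context
          ref: ${{ github.event.pull_request.head.sha }}
      - run: ./build.sh   # attacker-controlled script runs with a powerful token
```

The danger lies in the combination: `pull_request_target` exposes the base repository's secrets, `write-all` makes the token maximally powerful, and checking out and executing the pull request's code hands that context to an outsider.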

A stolen GITHUB_TOKEN can grant an attacker significant control over a repository. This access could be used to inject malicious code into the project's source, tamper with software releases, or steal proprietary data. Because many open-source projects are dependencies for other software, a single compromised repository can have a cascading effect, leading to a widespread supply chain attack.

This campaign exploits a user configuration error rather than a vulnerability within the GitHub platform itself. It highlights a growing trend of adversaries using automation to exploit common developer missteps at scale. Researchers note this campaign follows a similar operation from 2023 called "repo-scout," indicating a persistent and evolving threat. Developers are advised to audit their GitHub Actions workflows and ensure they follow the principle of least privilege, granting tokens only the minimum permissions necessary to function.
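The least-privilege advice maps directly onto the `permissions` block in workflow YAML. A minimal sketch of a tightly scoped job (the job name and script are placeholders; the permission keys are GitHub's documented `permissions` scopes):

```yaml
# Scope the GITHUB_TOKEN to the minimum the job actually needs.
permissions:
  contents: read        # read-only checkout; a leaked token cannot push code
  pull-requests: write  # include only if the job must comment on or label PRs

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./run-tests.sh   # placeholder for the project's test command
```

Setting `permissions` at the workflow or job level overrides the repository default, so even if a malicious workflow change slips through review, the token it can steal carries far less authority.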
