Musician admits using AI bots to steal $10 million in streaming royalties

March 22, 2026 · 2 min read · 1 source

North Carolina musician Michael Smith has pleaded guilty to a scheme that fraudulently collected more than $10 million in music royalty payments by using AI-generated tracks and automated bot activity to inflate stream counts across Spotify, Apple Music, Amazon Music, and YouTube Music.

According to BleepingComputer, Smith uploaded or controlled a large catalog of songs, including tracks reportedly generated with AI, then used bots to simulate legitimate listening at scale. The fake plays triggered royalty payouts from major streaming services, turning manipulated engagement into real revenue. The case is a criminal fraud matter rather than a software exploit, with no CVEs or malware tied to the operation.

The guilty plea highlights a growing problem for digital platforms: abuse of business logic instead of direct network intrusion. In this case, the target was the royalty system itself. By pairing low-cost AI music generation with automated streaming, the scheme created a scalable way to siphon money from payout pools intended for legitimate artists and rights holders.
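The "payout pool" dynamic is worth making concrete. Under the pro-rata model most major streaming services use, a fixed royalty pool is divided in proportion to stream counts, so botted streams do not create new money but divert it from everyone else's share. A minimal sketch, using hypothetical figures not drawn from the case:

```python
def pro_rata_payout(streams_by_artist, pool):
    """Split a fixed royalty pool in proportion to each party's stream count."""
    total = sum(streams_by_artist.values())
    return {artist: pool * n / total for artist, n in streams_by_artist.items()}

# Hypothetical $1,000,000 pool split between two legitimate artists.
honest = {"artist_a": 600_000, "artist_b": 400_000}
print(pro_rata_payout(honest, 1_000_000))
# → {'artist_a': 600000.0, 'artist_b': 400000.0}

# 250,000 botted streams added to the same fixed pool shrink everyone else's cut.
with_bots = {**honest, "bot_catalog": 250_000}
print(pro_rata_payout(with_bots, 1_000_000))
# → {'artist_a': 480000.0, 'artist_b': 320000.0, 'bot_catalog': 200000.0}
```

The zero-sum structure is why this kind of fraud harms real artists directly: every fraudulent dollar paid out is a dollar removed from legitimate rights holders.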

The broader impact extends beyond the defendant. Fraudulent streams can distort recommendation systems, rankings, and shared royalty calculations, potentially reducing payouts for real musicians while undermining trust in platform metrics. The case also adds pressure on streaming companies, distributors, and anti-fraud teams to improve uploader verification, behavior analysis, and detection of synthetic listening patterns. Common warning signs in this kind of abuse include repetitive playback behavior, suspicious account creation patterns, and unusual geographic distribution of streams, though investigators have not publicly released detailed indicators in this case.
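One of the warning signs above, repetitive playback behavior, lends itself to a simple heuristic: flag accounts whose listening history is dominated by a single looped track. This is only an illustrative sketch, not the platforms' actual detection logic, which has not been made public; the threshold and minimum-play cutoff are arbitrary assumptions.

```python
from collections import Counter

def repetition_score(play_history):
    """Fraction of an account's plays taken by its single most-played track.
    Scores near 1.0 suggest looped, bot-like listening."""
    if not play_history:
        return 0.0
    counts = Counter(play_history)
    return counts.most_common(1)[0][1] / len(play_history)

def flag_suspicious(accounts, threshold=0.8, min_plays=50):
    """Return IDs of accounts whose listening is dominated by one track."""
    return [acct for acct, plays in accounts.items()
            if len(plays) >= min_plays and repetition_score(plays) >= threshold]

# Hypothetical data: a bot loops one track; a human rotates through twenty.
accounts = {
    "bot-1": ["track_x"] * 100,
    "human-1": [f"track_{i % 20}" for i in range(100)],
}
print(flag_suspicious(accounts))  # → ['bot-1']
```

Real anti-fraud systems would combine many such signals (account age, device fingerprints, geographic clustering) rather than rely on any single score, since sophisticated bots can randomize playback to evade one-dimensional checks.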

For cybersecurity and fraud teams, the case is a reminder that automated abuse does not need a code vulnerability to cause significant financial damage; exploiting weak controls around identity, engagement, and monetization is enough. As platforms weigh stronger anti-bot defenses, users and operators may also be pushed toward better privacy and account protection practices, though client-side measures such as VPN use would not prevent platform-side royalty manipulation.

Sentencing details and any restitution or forfeiture terms were not included in the initial report.
