Anthropic says Chinese AI firms used Claude in distillation attacks

March 21, 2026 · 2 min read · 2 sources

Anthropic has accused Chinese AI companies DeepSeek, Moonshot AI and MiniMax of using its Claude models for unauthorized “distillation” attacks, according to reporting by Infosecurity Magazine. The company says the firms queried Claude at scale to help train rival systems that could mimic some of Claude’s capabilities, a practice that Anthropic says violated its terms of service and was designed to bypass its safeguards.

The allegation centers on model distillation, a standard machine learning technique that becomes contentious when developers use outputs from a proprietary system without permission to build a competing product. In this case, Anthropic is framing the issue less as a conventional software exploit and more as model extraction through API abuse: repeated prompts, automated collection of responses and use of those outputs as training data.
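The extraction pattern described above can be sketched in a few lines. This is an illustrative toy, not Anthropic's API or any named firm's pipeline: the `teacher_model` function is a hypothetical stand-in for a proprietary model endpoint, and the harvested pairs stand in for supervised training data for a competing "student" model.

```python
def teacher_model(prompt: str) -> str:
    # Hypothetical stand-in for a proprietary model API endpoint.
    return f"answer({prompt})"

def harvest_training_data(prompts):
    # The pattern described as model extraction through API abuse:
    # repeated prompts plus automated collection of the responses.
    return [(p, teacher_model(p)) for p in prompts]

prompts = [
    "What is model distillation?",
    "Summarize this paragraph.",
    "Translate: bonjour",
]
dataset = harvest_training_data(prompts)

# Each (prompt, response) pair would then serve as a training
# example for the imitating "student" model.
for prompt, response in dataset:
    print(prompt, "->", response)
```

Distillation itself is a legitimate technique when a developer owns both teacher and student; the dispute here is over doing this against someone else's model without permission.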

Anthropic’s warning adds to a growing security concern for major AI providers. Unlike a traditional breach, there are no CVEs, malware indicators or publicly disclosed infrastructure compromises tied to the claim. The attack surface is the model interface itself. Providers typically respond with rate limits, account monitoring, prompt-pattern analysis and stronger abuse detection, but those controls can be difficult to enforce if actors spread activity across accounts or infrastructure.
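As a rough sketch of one of those controls, a per-account sliding-window rate limiter might look like the following. The window size and threshold are invented for illustration; real providers layer this with prompt-pattern analysis and cross-account correlation.

```python
from collections import defaultdict, deque

# Illustrative thresholds, not any provider's actual limits.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_request_log = defaultdict(deque)  # account_id -> request timestamps

def allow_request(account_id, now):
    """Return True if the account is under its sliding-window limit."""
    log = _request_log[account_id]
    # Evict timestamps that have fallen out of the window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= MAX_REQUESTS:
        return False  # candidate for throttling or abuse review
    log.append(now)
    return True
```

As the article notes, per-account controls like this are easy to sidestep by spreading traffic across many accounts or IP ranges, which is why detection tends to move toward behavioral signals rather than simple counters.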

The broader impact is commercial as much as technical. Frontier AI models are expensive to build, and large-scale extraction can let competitors reproduce useful behaviors at a fraction of the cost. The case also raises questions about whether model outputs should be treated more like protected intellectual property and whether AI vendors will further restrict access, logging and customer verification. For enterprise users, that could mean tighter controls around how models are accessed, tested and integrated, including over remote connections where teams may already rely on a VPN.

As of the source report, none of the named firms had publicly responded to the allegations. Anthropic’s claims arrive amid intensifying US-China competition in generative AI, where disputes over model theft, training data and API misuse are becoming part of the wider cybersecurity and policy debate.
