Anthropic has accused Chinese AI companies DeepSeek, Moonshot AI and MiniMax of using its Claude models for unauthorized “distillation” attacks, according to reporting by Infosecurity Magazine. The company says the firms queried Claude at scale to help train rival systems that could mimic some of Claude’s capabilities, a practice Anthropic says violated its terms of service and attempted to bypass its safeguards.
The allegation centers on model distillation, a standard machine learning technique that becomes contentious when developers use outputs from a proprietary system without permission to build a competing product. In this case, Anthropic is framing the issue less as a conventional software exploit and more as model extraction through API abuse: repeated prompts, automated collection of responses and use of those outputs as training data.
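In mechanical terms, the extraction pattern described above amounts to a scripted loop: send prompts to the proprietary "teacher" model, harvest the responses, and save the pairs as training data for a "student" model. The sketch below is purely illustrative and contacts no real service; `query_teacher` is a hypothetical stand-in for a proprietary model's API, and the dataset format is a generic instruction-tuning layout, not any vendor's actual schema.

```python
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical placeholder for an API call to a proprietary model.
    # A real extraction attempt would issue this call at scale.
    return f"teacher answer to: {prompt}"

def collect_distillation_pairs(prompts):
    """Automate prompt/response collection into a fine-tuning dataset."""
    pairs = []
    for prompt in prompts:
        response = query_teacher(prompt)
        # Store in a typical instruction-tuning format for a student model.
        pairs.append({"instruction": prompt, "output": response})
    return pairs

dataset = collect_distillation_pairs(["Explain TCP handshakes", "Summarize RSA"])
print(json.dumps(dataset[0]))
```

The point of the sketch is how little tooling the technique requires: the "attack" is ordinary API usage repeated at volume, which is why it is hard to distinguish from legitimate traffic.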
Anthropic’s warning adds to a growing security concern for major AI providers. Unlike a traditional breach, there are no CVEs, malware indicators or publicly disclosed infrastructure compromises tied to the claim. The attack surface is the model interface itself. Providers typically respond with rate limits, account monitoring, prompt-pattern analysis and stronger abuse detection, but those controls can be difficult to enforce if actors spread activity across accounts or infrastructure.
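One of the defensive controls mentioned above, per-account rate limiting, can be sketched as a sliding-window counter. This is a hypothetical toy, not any provider's actual implementation; real systems layer this with prompt-pattern analysis and cross-account correlation, which this example deliberately omits.

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most max_requests per account within a rolling time window."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # account_id -> request timestamps

    def allow(self, account_id: str, now: float) -> bool:
        q = self.history[account_id]
        # Evict timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: throttle or flag this account
        q.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=3, window_seconds=60)
results = [limiter.allow("acct-1", t) for t in (0, 1, 2, 3)]
print(results)  # fourth request inside the window is rejected
```

The toy also illustrates the enforcement gap the article notes: because the counter is keyed per account, an actor who spreads the same query volume across many accounts stays under every individual limit.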
The broader impact is commercial as much as technical. Frontier AI models are expensive to build, and large-scale extraction can let competitors reproduce useful behaviors at a fraction of the cost. The case also raises questions about whether model outputs should be treated more like protected intellectual property and whether AI vendors will further restrict access, tighten logging and strengthen customer verification. For enterprise users, that could mean stricter controls on how models are accessed, tested and integrated, including over remote connections.
As of the source report, none of the named firms had publicly responded to the allegations. Anthropic’s claims arrive amid intensifying US-China competition in generative AI, where disputes over model theft, training data and API misuse are becoming part of the wider cybersecurity and policy debate.