Anthropic confirmed on Tuesday that the internal code for its AI coding assistant, Claude Code, was accidentally leaked due to human error.
“No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said in a statement shared with CNBC.
“This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
The issue was discovered after the AI startup published version 2.1.88 of the Claude Code npm package. Users noticed that the release included a source map file exposing Claude Code's source code: nearly 2,000 TypeScript files totaling more than 512,000 lines of code. That version has since been removed from npm.
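This kind of exposure is possible because a JavaScript source map is plain JSON, and when a build is configured to embed `sourcesContent`, the map carries the full text of every original source file. The sketch below illustrates the general mechanism with a hypothetical, made-up map; it does not reproduce anything from the actual Claude Code package.

```python
import json

# Hypothetical example of a source map (.map) file. When a bundler is
# configured to include "sourcesContent", the map embeds the original
# source files verbatim -- which is how shipping a .map file can expose
# an entire TypeScript codebase. File names and contents here are invented.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/agent.ts", "src/memory.ts"],
  "sourcesContent": [
    "export function runAgent() { /* original TypeScript */ }",
    "export class Memory { /* original TypeScript */ }"
  ],
  "mappings": "AAAA"
}
""")

# Recover the original files directly from the map, no bundle needed.
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))
for path, code in recovered.items():
    print(path, len(code))
```

Builds intended for distribution typically either omit source maps or strip `sourcesContent`, which is why shipping a complete map in a public npm release points to a packaging mistake rather than a deliberate choice.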
Security researcher Chaofan Shou was the first to publicly highlight the leak on X, posting, “Claude code source code has been leaked via a map file in their npm registry!” The post has since garnered over 28.8 million views.
The exposed codebase remains available on a public GitHub repository, where it has accumulated more than 84,000 stars and 82,000 forks.
A source code leak of this scale is significant because it provides software developers and Anthropic’s competitors with a detailed blueprint of how the popular coding tool operates.
Users who have explored the code have shared findings on its internal components, including a self-healing memory architecture that works around the model's fixed context window limits.
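The article does not detail how that memory architecture works. One common pattern in coding agents, which Claude Code's approach may or may not resemble, is to compact older conversation turns into a summary once the transcript nears the token budget. A minimal, purely illustrative sketch, with an invented tokenizer and invented thresholds:

```python
# Illustrative sketch only: one generic way an agent can work around a
# fixed context window. The limit, threshold, tokenizer, and summarizer
# below are all assumptions, not Claude Code's actual implementation.

CONTEXT_LIMIT = 100          # assumed token budget (tiny, for illustration)
COMPACT_THRESHOLD = 0.8      # assumed: compact when 80% of the budget is used

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per whitespace word.
    return len(text.split())

def compact(history: list[str]) -> list[str]:
    # Replace all but the most recent turn with a short summary line.
    # A real system would summarize with a model call, not a placeholder.
    summary = f"[summary of {len(history) - 1} earlier turns]"
    return [summary, history[-1]]

def add_turn(history: list[str], turn: str) -> list[str]:
    history = history + [turn]
    used = sum(count_tokens(t) for t in history)
    if used > CONTEXT_LIMIT * COMPACT_THRESHOLD:
        history = compact(history)
    return history
```

With this scheme the transcript can grow indefinitely while the token count stays bounded, at the cost of losing detail from older turns each time they are folded into the summary.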
