Anthropic Confirms Claude Code Source Leak, Says User Data Isn’t Exposed


Anthropic confirmed on Tuesday that Claude Code, its AI coding assistant, had its internal source code leaked. The company clarified that it was a mistake by an employee and that customer data wasn’t compromised, but developers are already digging through the code to learn how the tool works.

Following Anthropic’s Claude Code 2.1.88 update, users discovered that the package contained a sourcemap file, exposing over 500,000 lines of code.

Anthropic appears to have already removed the npm package, which allegedly contained the full TypeScript source code.

“This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again,” Anthropic stated in an emailed statement to The Verge.

While Anthropic claims that no sensitive data was revealed, the source code leak allows developers and competitors to study how the AI assistant is built. One of the GitHub repositories hosting the exposed codebase remains public and has gained over 91,300 stars in the last 17 hours.

The Claude Code leak was first flagged by security researcher Chaofan Shou, who posted his findings and a link to the code on X yesterday. The post, published on March 31st, currently has over 31 million views.

“Claude code source code has been leaked via a map file in their npm registry,” Shou said on X.

Claude Code leak X post

The leak comes just days after Anthropic exposed over 3,000 Content Management System (CMS) files, including details about an upcoming AI model and a private CEO event, as first reported by Fortune.

Many developers have already started exploring the code to understand how the AI assistant works, finding upcoming features and insights into its memory architecture.

One Reddit user claimed to have found an entire Tamagotchi-like system, with a pet that “sits beside your input box and reacts to your coding.”

Another developer claimed to have learned that Claude Code uses a “self-healing memory” system, addressing the context entropy problem that affects other AI systems. While other Large Language Model (LLM) tools risk a gradual decline in output quality as they accumulate too much context, Claude Code reportedly prioritizes efficiency over storing everything in a database.

Users are also sharing ways to build a functional Claude Code from the leaked sourcemap. However, downloading code from unknown repositories, especially from accounts with no GitHub history, is risky and can infect your device with malware.

Some GitHub users are already promoting their own Claude Code builds. However, the leaked source code doesn’t make it open source, so there’s a risk of legal action under Anthropic’s terms of use.

“Just because the source is now “available” *DOES NOT MEAN IT IS OPEN SOURCE*. You are violating a license if you copy or redistribute the source code, or use their prompts in your next project!” full-stack developer Justin Schroeder said on X.

Overall, the incident didn’t expose enough of the source code for Claude Code to be fully replicated. However, it raises broader questions about the security practices of an AI company that presents itself as focused on safety.