For weeks, whispers of “AI shrinkflation” surrounded Anthropic’s Claude models, with developers and AI enthusiasts alleging a decline in performance. Reports flooded GitHub, X, and Reddit, suggesting Claude had become less reliable, more error-prone, and wasteful with tokens. Today, Anthropic broke its silence, acknowledging that recent changes to Claude’s operating instructions were behind the degradation.
Anthropic, known for its AI models designed to assist with complex tasks, has been a staple for developers seeking robust AI solutions. However, users noticed a shift, claiming the models had become less capable of handling intricate engineering challenges. The controversy gained traction when high-profile users and third-party benchmarks echoed these concerns, creating a trust gap that Anthropic could no longer ignore.
In a detailed post-mortem, Anthropic identified three product-layer changes that inadvertently degraded Claude’s performance. First, a change to the “reasoning effort” setting, intended to improve UI latency, reduced the model’s reasoning depth on complex tasks. Second, a caching logic bug caused the model to lose track of previous interactions, producing repetitive outputs. Finally, new verbosity limits on system prompts degraded coding quality.
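The caching failure described above is a classic cache-keying pitfall. As a minimal, hypothetical sketch (this is not Anthropic’s actual code, and the function names are invented for illustration): if a response cache is keyed only on the latest message rather than the full conversation, two different conversations that happen to end with the same message will collide, and one will receive the other’s stale reply — which reads to the user as the model “forgetting” context and repeating itself.

```python
def make_reply(history):
    # Stand-in for a model call: the reply depends on the full conversation.
    return f"reply considering {len(history)} prior turns"

cache = {}

def cached_reply_buggy(history):
    key = history[-1]        # BUG: keys only on the last turn, ignoring context
    if key not in cache:
        cache[key] = make_reply(history)
    return cache[key]

def cached_reply_fixed(history):
    key = tuple(history)     # Fix: key on the entire conversation history
    if key not in cache:
        cache[key] = make_reply(history)
    return cache[key]

short_chat = ["hi"]
long_chat = ["hi", "tell me more", "hi"]  # different context, same final turn

# The buggy cache serves the short conversation's reply to the long one:
assert cached_reply_buggy(short_chat) == cached_reply_buggy(long_chat)

cache.clear()
# Keying on the whole history distinguishes the two conversations:
assert cached_reply_fixed(short_chat) != cached_reply_fixed(long_chat)
```

The fix is conceptually simple but easy to miss in production, since the buggy version behaves correctly whenever conversations happen not to collide.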
The impact was felt across various Anthropic products, including the Claude Agent SDK and Claude Cowork, although the Claude API remained unaffected. To address these issues, Anthropic has promised several operational changes: more internal testing on public builds, expanded evaluation suites, and tighter controls on prompt changes. Additionally, subscribers will receive compensation for the token waste caused by these bugs.
For engineers and founders relying on AI for development, this incident underscores the importance of transparency and reliability in AI tools. Anthropic’s commitment to regaining user trust through operational improvements is a critical step. The company will also use its new @ClaudeDevs account on X and GitHub to maintain open communication with its developer community.
As Anthropic moves forward, the industry will be watching closely. The implications extend beyond just one company; they highlight the challenges of maintaining AI performance amid rapid changes. For those in the tech world, this serves as a reminder of the delicate balance between innovation and stability.