After a brief flirtation with Claude Code, I decided it was time to cancel my subscription. Initially, Claude seemed promising, offering a fair token allowance and solid performance. But the honeymoon period didn’t last, and the cracks began to show.
Claude Code, Anthropic's AI coding assistant, is designed to streamline developer workflows and boost productivity through code suggestions and refactoring help. The reality of using it, however, was less impressive than the pitch. The token system that governs usage limits began misbehaving: after a routine break, just two small queries inexplicably pushed my usage to 100%, locking me out of my own work.
Support was another sore point. The AI support bot was unhelpful, and when I finally reached a human, the response was a generic, copy-paste explanation of usage limits. The support ticket was closed without resolving my issue, leaving me frustrated and without answers. It’s a reminder that while AI can automate many tasks, customer support still requires a human touch.
The quality of Claude’s code suggestions also began to decline. Initially, I could juggle multiple projects, but soon the token limits were exhausted after just a couple of hours on a single project. The AI’s suggestions, once helpful, began to feel like shortcuts rather than solutions. For instance, Claude Opus suggested a lazy workaround for a coding task, which I had to correct manually, wasting precious tokens and time.
Anthropic’s handling of token limits made matters worse. The conversation cache, which should save tokens by reusing previously processed context instead of reprocessing it, frequently reset, forcing the same context to be re-sent and billed a second time. This inefficiency, combined with unannounced changes to the weekly token windows and misleading warnings about monthly limits, made the service unreliable.
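To see why a resetting cache stings, here is a toy back-of-the-envelope sketch. All the numbers (context size, query size, query count) are invented for illustration and don't reflect Anthropic's actual pricing or cache mechanics:

```python
# Hypothetical illustration of why cache resets inflate token spend.
# Numbers are made up; real pricing and cache behavior vary.

CONTEXT_TOKENS = 50_000   # prior conversation that could be cached
QUERY_TOKENS = 500        # new tokens per follow-up query
QUERIES = 4

def tokens_billed(cache_survives: bool) -> int:
    """Total input tokens billed across all follow-up queries."""
    if cache_survives:
        # Context is processed and paid for once; each query
        # then adds only its own new tokens.
        return CONTEXT_TOKENS + QUERIES * QUERY_TOKENS
    # Every cache reset forces the full context to be re-sent
    # and re-billed on every single query.
    return QUERIES * (CONTEXT_TOKENS + QUERY_TOKENS)

with_cache = tokens_billed(True)       # 52_000 tokens
without_cache = tokens_billed(False)   # 202_000 tokens
print(without_cache / with_cache)      # roughly a 3.9x multiplier
```

Even with these made-up figures, the shape of the problem is clear: when the cache resets, the dominant cost is re-billing the large static context, not the new work you're actually asking for.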
In the competitive landscape of AI coding assistants, Claude faces stiff competition from products like GitHub’s Copilot and OpenAI’s Codex. These alternatives offer more consistent performance without the token headaches. For founders and engineers, the lesson is clear: a product’s initial appeal can quickly sour if the user experience isn’t maintained.
Anthropic’s challenges in scaling Claude Code highlight a broader issue in AI services. Unlike traditional software, where serving an additional user costs almost nothing, every AI query consumes real GPU time, so compute costs scale roughly linearly with usage. That makes it difficult to maintain quality as the user base grows, and it is a crucial consideration for startups entering the AI space, where managing growth without sacrificing user experience is key.
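The contrast can be sketched with a toy cost model. Both cost figures below are invented placeholders; the point is the structural difference, not the values:

```python
# Toy model contrasting near-zero-marginal-cost software with
# per-query AI inference. All numbers are invented for illustration.

FIXED_COST = 100_000.0          # hypothetical monthly fixed cost, either way
INFERENCE_COST_PER_USER = 20.0  # hypothetical GPU spend per active user

def cost_per_user_traditional(users: int) -> float:
    # Fixed costs amortize: each new user makes everyone cheaper to serve.
    return FIXED_COST / users

def cost_per_user_inference(users: int) -> float:
    # GPU time is burned per query, so per-user cost has a floor
    # that no amount of growth amortizes away.
    return FIXED_COST / users + INFERENCE_COST_PER_USER

for n in (1_000, 100_000):
    print(n, cost_per_user_traditional(n), cost_per_user_inference(n))
```

For traditional software, growth drives per-user cost toward zero; for inference-heavy products, it only drives it toward the GPU floor, which is why heavy users end up rationed by token limits.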
What happens next for Claude Code is uncertain. Anthropic needs to address these growing pains if they hope to retain users and compete effectively. For now, I’ve taken the load off their servers by canceling my account, a decision that reflects a broader skepticism among tech-savvy users. If Claude wants to win back trust, it needs to offer more than just a promising start.




















