The recent RSA Conference 2026 highlighted a critical issue in AI security, as leading tech companies converged on the need for zero trust in AI systems. Keynotes from Microsoft, Cisco, CrowdStrike, and Splunk emphasized the urgency of extending zero trust principles to AI agents, a move driven by the growing deployment of AI in enterprises. With 79% of organizations already using AI agents but only 14.4% reporting full security approval for them, the industry faces a significant governance gap.
Anthropic’s Managed Agents
Anthropic has introduced Managed Agents, a new architecture that separates AI agents into three distinct components: the brain, hands, and session. This design prevents the execution environment from accessing sensitive credentials, thereby reducing the risk of credential exposure. By storing OAuth tokens in an external vault and using session-bound tokens for external calls, Anthropic ensures that compromised sandboxes do not yield reusable credentials. This architecture not only enhances security but also improves performance, with a 60% reduction in median time to first token. The approach offers a compelling solution for enterprises concerned about the security of their AI deployments.
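The core idea behind this separation can be sketched in a few lines: a credential vault holds the long-lived OAuth token and mints short-lived, session-bound tokens for the execution environment, so the sandbox never sees a reusable credential. This is an illustrative sketch only; the class and method names below (`CredentialVault`, `issue_session_token`, `proxy_call`) are hypothetical and do not correspond to any published Anthropic API.

```python
import secrets
import time

class CredentialVault:
    """Hypothetical broker: holds long-lived credentials outside the sandbox."""

    def __init__(self):
        self._oauth_tokens = {}    # agent_id -> long-lived OAuth token
        self._sessions = {}        # session token -> (agent_id, expiry)

    def register(self, agent_id, oauth_token):
        self._oauth_tokens[agent_id] = oauth_token

    def issue_session_token(self, agent_id, ttl_seconds=300):
        # Mint a short-lived token bound to one session; the raw OAuth
        # token never leaves the vault, so a leaked session token expires
        # quickly and cannot be exchanged for the underlying credential.
        token = secrets.token_urlsafe(32)
        self._sessions[token] = (agent_id, time.time() + ttl_seconds)
        return token

    def proxy_call(self, session_token, request):
        # External calls are made *by the vault* on the session's behalf.
        entry = self._sessions.get(session_token)
        if entry is None:
            raise PermissionError("unknown session token")
        agent_id, expiry = entry
        if time.time() > expiry:
            del self._sessions[session_token]
            raise PermissionError("session token expired")
        # A real broker would attach self._oauth_tokens[agent_id] to an
        # outbound HTTPS request here; we simulate the upstream call.
        return f"ok: {request} (as {agent_id})"

vault = CredentialVault()
vault.register("agent-1", "oauth-secret-never-seen-by-sandbox")
session = vault.issue_session_token("agent-1")
print(vault.proxy_call(session, "GET /calendar"))
```

The design choice the sketch captures is that compromise of the execution environment yields only an expiring, vault-scoped handle rather than the OAuth token itself.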
Nvidia’s NemoClaw Approach
Nvidia’s NemoClaw takes a different path by embedding AI agents within a tightly controlled sandbox environment. This architecture employs multiple security layers, including kernel-level isolation and intent verification, to monitor and restrict agent actions. While this provides strong runtime visibility, it also demands significant operator involvement, which can increase costs in production environments. NemoClaw’s approach emphasizes security through observation, although it lacks the session durability found in Anthropic’s design. Organizations must weigh the trade-offs between security and operational overhead when considering NemoClaw for their AI deployments.
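The "security through observation" pattern described above can be illustrated with a minimal policy gate: every action the sandboxed agent proposes is checked against an operator-approved allowlist and recorded in an audit log before it may execute. The names here (`IntentVerifier`, `ProposedAction`) are hypothetical stand-ins, not NemoClaw's actual interfaces.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProposedAction:
    tool: str      # e.g. "read_file", "http_get"
    target: str    # e.g. a path or URL the agent wants to touch

class IntentVerifier:
    """Hypothetical intent-verification gate: allow, deny, and audit."""

    def __init__(self, allowed):
        # allowed: set of (tool, target-prefix) pairs approved by the operator
        self._allowed = allowed
        self.audit_log = []

    def check(self, action):
        ok = any(
            action.tool == tool and action.target.startswith(prefix)
            for tool, prefix in self._allowed
        )
        # Every decision is logged, giving the runtime visibility the
        # article attributes to this style of architecture.
        self.audit_log.append((action, "allow" if ok else "deny"))
        return ok

verifier = IntentVerifier({
    ("read_file", "/workspace/"),
    ("http_get", "https://api.internal/"),
})
print(verifier.check(ProposedAction("read_file", "/workspace/data.csv")))  # True
print(verifier.check(ProposedAction("read_file", "/etc/passwd")))          # False
```

The operational cost noted above follows directly from this shape: someone has to author and maintain the allowlist and review the audit log, which is where the ongoing operator involvement comes from.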
Industry Implications
The introduction of these architectures marks a pivotal shift in AI security, addressing the risks associated with traditional monolithic agent patterns. As more companies adopt AI, the pressure to secure these systems intensifies. The divergence between Anthropic’s and Nvidia’s approaches highlights the ongoing debate over credential proximity and execution environment security. With the gap between AI deployment velocity and security readiness remaining wide, these developments underscore the need for robust governance frameworks. Enterprises must evaluate their AI strategies to mitigate potential breaches and ensure compliance with emerging security standards.
As the industry moves forward, the focus will likely remain on refining AI security architectures and developing comprehensive governance policies. Companies adopting AI must prioritize zero trust principles to safeguard their systems against evolving threats. The ongoing advancements in AI security will play a crucial role in shaping the future landscape of enterprise technology.