Insights from a recent Citi dinner with CrowdStrike leadership reveal a clear strategy: fortify its AI defenses through elite partnerships while deliberately keeping humans in control of critical actions.
The discussion confirmed that the cyber threat landscape has fundamentally changed. First, AI-powered offense is no longer theoretical: adversaries are using AI to find vulnerabilities faster than ever. CrowdStrike's own 2026 Global Threat Report noted that the average 'breakout time' (the time an attacker needs to move from an initial compromise to other systems on the network) has shrunk to just 29 minutes. This puts immense pressure on security teams and makes proactive defense essential, which is where techniques like AI-assisted fuzzing become critical: they help defenders discover flaws before attackers can.
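The core loop of a fuzzer is simple enough to sketch. The example below is a hypothetical illustration, not any vendor's tool: the target parser and its planted bug are invented, and the random `mutate` function stands in for the step where an AI model would propose promising inputs based on what it has learned about the input format.

```python
import random

def parse_record(data: bytes) -> dict:
    """Toy target with a planted bug, standing in for real parsing code."""
    fields = {}
    for part in data.split(b";"):
        key, _, value = part.partition(b"=")
        fields[key] = value[0]  # bug: an empty value raises IndexError
    return fields

def mutate(seed: bytes) -> bytes:
    """Randomly flip, delete, or insert bytes.

    In an AI-assisted fuzzer, a model would propose mutations likely to
    reach unusual program states; random mutation is the simplest
    stand-in for that step.
    """
    data = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        if not data:
            break
        pos = random.randrange(len(data))
        op = random.choice(("flip", "delete", "insert"))
        if op == "flip":
            data[pos] ^= 1 << random.randrange(8)  # flip one random bit
        elif op == "delete":
            del data[pos]
        else:
            data.insert(pos, random.randrange(256))
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to the target and record any crashes."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:  # any unhandled exception is a finding
            crashes.append((candidate, type(exc).__name__))
    return crashes

random.seed(1)  # fixed seed so the run is reproducible
crashes = fuzz(parse_record, b"name=alice;role=admin")
print(f"found {len(crashes)} crashing inputs")
```

Over a thousand mutated inputs, some will corrupt the `=` separator or empty out a value, triggering the planted `IndexError`; a real fuzzer would then minimize and triage those inputs before an attacker finds the same flaw.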
This is why CrowdStrike's partnership with Anthropic, called Project Glasswing, is so significant. Second, the collaboration is a strategic defensive play, not an immediate commercial product. CrowdStrike gains privileged early access to Anthropic's powerful Claude Mythos AI model. However, that access is gated and comes with usage credits, indicating the initial goal is to harden CrowdStrike's own platform and co-develop sophisticated defense methods. It is about building a long-term competitive moat by staying at the forefront of AI safety and security research, rather than quickly monetizing a new feature.
Finally, the conversation reinforced a crucial principle: full automation is not yet safe. While AI can automate many tasks, the idea of 'agentic patching' (an AI autonomously deciding on and applying security patches) is still considered too risky. The memory of major outages, like the one in July 2024, is a powerful reminder of what can go wrong, and attackers could even exploit a flawed automated patch. A human-in-the-loop approach therefore remains mandatory. CrowdStrike's strategy aligns with this: AI assists human experts in prioritizing vulnerabilities, such as those listed in CISA's Known Exploited Vulnerabilities (KEV) catalog, but the final decision stays with them. This balance of speed and safety is one the market seems to appreciate, despite the rich valuation.
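The human-in-the-loop pattern described above can be sketched in a few lines. This is a hypothetical illustration, not CrowdStrike's implementation: the `Vuln` fields and the KEV-plus-severity ranking heuristic are assumptions standing in for a real model-driven prioritizer.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Vuln:
    cve_id: str
    cvss: float    # CVSS base severity score, 0.0-10.0
    in_kev: bool   # listed in CISA's KEV catalog of known-exploited flaws

def prioritize(vulns: List[Vuln]) -> List[Vuln]:
    """AI-assisted triage, stubbed as a heuristic: KEV-listed flaws
    first, then by severity. A real system would use model-driven
    scoring, but it would produce the same kind of ranked worklist."""
    return sorted(vulns, key=lambda v: (v.in_kev, v.cvss), reverse=True)

def remediate(vulns: List[Vuln], approve: Callable[[Vuln], bool]) -> List[str]:
    """Human-in-the-loop: the system recommends an order, but each
    patch is applied only if the human operator approves it."""
    applied = []
    for v in prioritize(vulns):
        if approve(v):  # the final, critical decision stays with a human
            applied.append(v.cve_id)
    return applied

backlog = [
    Vuln("CVE-2024-0001", cvss=7.5, in_kev=False),
    Vuln("CVE-2024-0002", cvss=6.0, in_kev=True),
    Vuln("CVE-2024-0003", cvss=9.8, in_kev=True),
]
# Stand-in for an interactive approval prompt: patch only high-severity flaws.
applied = remediate(backlog, approve=lambda v: v.cvss >= 7.0)
print(applied)  # ['CVE-2024-0003', 'CVE-2024-0001']
```

The design point is the `approve` callback: the AI side can rank and recommend as aggressively as it likes, but nothing is applied without a human decision, which is exactly the boundary 'agentic patching' would erase.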
- AI-assisted Fuzzing: A software testing technique where AI is used to intelligently generate and input a wide range of invalid or unexpected data into a program to find security flaws and bugs.
- Moat: In business, a competitive moat is a distinct advantage a company has over its competitors, which allows it to protect its market share and profitability.
- Human-in-the-loop: An operating model in which automated systems assist but humans retain decision authority. In this context, AI systems provide security professionals with data and recommendations, but humans make the final, critical decisions.
