The conversation around agentic AI has fundamentally shifted from 'what can it do?' to 'can we control it?'. This change marks a critical maturation point for the industry, driven by enterprises demanding safety and accountability before scaling these powerful tools.
This trend began with a stark warning. In mid-2025, the research firm Gartner predicted that over 40% of corporate agentic AI projects would be canceled by the end of 2027. The reasons were clear: escalating costs, unclear returns, and, most importantly, inadequate risk controls. At the same time, AI models were rapidly becoming more capable of complex 'computer use'—performing tasks across multiple desktop applications—which only amplified corporate anxiety about deploying uncontrollable technology.
This created a clear causal chain. First, the capability of AI agents outpaced the controls available to manage them. Second, this gap validated Gartner's warning, making businesses hesitant to adopt agents widely despite their potential. Third, leading technology companies recognized this gap as a major business opportunity. In early 2026, a wave of announcements confirmed this pivot to governance.
OpenAI launched 'Frontier', a platform built around identity and permissions. Snowflake previewed 'Project SnowWork', tying agents directly to its governed data systems. And at its GTC conference, NVIDIA unveiled a security-focused runtime for agents called 'NemoClaw', designed to enforce policies and sandbox agent activities. All these platforms share a common goal: to make AI agents auditable, permissioned, and secure.
Therefore, Gartner's 40% cancellation forecast should not be seen as a failure of agentic AI. Instead, it signals a consolidation. Projects built on insecure, experimental foundations will likely be canceled, but the investment and effort will shift toward these new, governed platforms. For businesses, the key question is no longer just about performance benchmarks, but about whether an AI agent can operate safely within their existing security and compliance frameworks.
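To make the shared goal of 'auditable, permissioned, and secure' concrete, here is a minimal sketch of what a governed agent action can look like. It is purely illustrative: the names (`AgentIdentity`, `run_tool`, the tool names) are hypothetical and do not reflect the actual APIs of Frontier, Project SnowWork, or NemoClaw.

```python
# Hypothetical sketch of a permissioned, audited agent action: every tool
# call is checked against the agent's identity and written to an audit log
# before it runs. Not any vendor's API; names are illustrative only.
import json
import time
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    """The agent acts under an identity, much like a human user would."""
    agent_id: str
    allowed_tools: set[str]  # tools this identity may invoke


def audit(entry: dict) -> None:
    """Append an entry to the audit trail.

    A real platform would use durable, tamper-evident storage; printing
    JSON lines keeps the sketch self-contained.
    """
    entry["ts"] = time.time()
    print(json.dumps(entry))


def run_tool(identity: AgentIdentity, tool: str, args: dict) -> str:
    """Gate the call, record the decision, then (and only then) execute."""
    allowed = tool in identity.allowed_tools
    audit({"agent": identity.agent_id, "tool": tool, "args": args,
           "decision": "allow" if allowed else "deny"})
    if not allowed:
        raise PermissionError(f"{identity.agent_id} may not call {tool}")
    # A real system would dispatch to the tool here; a stub suffices.
    return f"executed {tool}"


# Usage: the agent may read data but is denied a destructive action.
agent = AgentIdentity("invoice-bot", allowed_tools={"read_invoices"})
run_tool(agent, "read_invoices", {"quarter": "Q1"})  # allowed, logged
try:
    run_tool(agent, "delete_records", {"table": "invoices"})  # denied, logged
except PermissionError as err:
    print(err)
```

The design point is that the permission check and the audit record happen before the tool executes, so even denied attempts leave a trace for compliance review.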
- Glossary
- Agentic AI: AI systems that can proactively and autonomously take actions to achieve goals, such as booking travel or managing inventory, rather than just responding to direct commands.
- Governance (in AI): The framework of rules, policies, standards, and processes for directing and controlling the development, deployment, and use of AI systems to ensure they are safe, ethical, and accountable.
- RBAC (Role-Based Access Control): A security method that restricts system access to authorized users based on their roles within an organization. For example, an analyst can view data, but only a manager can approve changes.
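
To ground the RBAC definition, here is a minimal sketch of the analyst/manager example above. The names (`ROLE_PERMISSIONS`, `can`) are illustrative and not taken from any particular library.

```python
# Minimal RBAC sketch: roles map to sets of permissions, and access
# checks consult the role rather than the individual user.
ROLE_PERMISSIONS = {
    "analyst": {"view_data"},
    "manager": {"view_data", "approve_changes"},
}


def can(role: str, permission: str) -> bool:
    """Return True if the given role grants the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


assert can("analyst", "view_data")            # analysts can view data
assert not can("analyst", "approve_changes")  # ...but cannot approve changes
assert can("manager", "approve_changes")      # managers can approve
```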
