Check Point Unveils AI-Powered Security Solution for Enterprise AI Systems
Enterprise AI Systems Face New Security Risks as Autonomy Grows
As artificial intelligence (AI) evolves from assisting tasks to autonomously performing functions, organizations face a growing risk of uncontrolled activity within their systems.
To address this concern, Check Point has introduced the AI Defense Plane, a comprehensive security solution designed to govern and secure enterprise AI systems throughout their entire lifecycle.
"The enterprise is entering the agentic era," said David Haber, Vice President of AI Security at Check Point Software Technologies. "AI is no longer limited to generating content; it is accessing systems, invoking tools, chaining actions, and operating with increasing autonomy. This changes the security model."
The Growing Attack Surface
With AI systems expanding beyond traditional boundaries, the attack surface widens to include agentic workflows, delegated actions, non-human access, and shadow agents operating within real-world business environments.
The AI Defense Plane addresses this challenge by providing runtime control over how AI behaves inside real environments, combining discovery, governance, observability, and continuous validation across the AI execution lifecycle.
The AI Defense Plane Modules
The AI Defense Plane consists of three primary modules:
- Workforce AI Security: Provides visibility, governance, and runtime safeguards for how employees use AI-powered applications, enforcing policy in real-time and reducing the risk of sensitive data exposure.
- AI Application Agent Security: Offers discovery, posture, and runtime control for AI applications and agentic systems embedded across the business, allowing organizations to identify, evaluate, and govern the permissions and trust relationships shaping agentic execution.
- AI Red Teaming: Enables continuous adversarial testing of prompts, reasoning paths, workflows, tool use, and agent behavior, helping organizations uncover exploitable weaknesses early and strengthen resilience as AI systems transition from prototype to production.
"Red teaming has become essential for agentic systems," said George Davis, Product Leader at Sierra. "When AI can query infrastructure, trigger workflows, and interact with sensitive data, the risk is no longer theoretical. Organizations need continuous testing to understand how these systems can be manipulated, where controls break down, and how resilient they are in production."