Securing the Future of Business: Enterprise AI Deployments Under Scrutiny
Enterprises Scramble to Secure AI Deployments as Threats Mount
The increasing use of artificial intelligence (AI) in enterprise environments has created new security challenges. AI assistants, integrated into ticketing platforms, source code repositories, and cloud dashboards, can execute tasks, access databases, and modify code with limited human oversight. A recent Cisco report, “The State of AI Security 2026,” highlights the growing trend of AI-driven operations connecting directly to core business systems.
New Security Challenges
However, many organizations have granted agentic systems excessive authority without adequate security measures, creating exposure across model interfaces, tool integrations, and supply chains. According to the report, only 29% of organizations felt prepared to secure their agentic AI deployments, even as they plan to integrate AI across business functions.
Risk of Attacks on AI Systems
Prompt injection and jailbreak techniques have matured, increasing the risk of attacks on AI systems. Multi-turn attacks, which unfold over extended conversations, have achieved success rates of up to 92% in testing across eight open-weight models. By escalating over successive prompts, these attacks can steer models toward disallowed content and unsafe actions. Single-turn protections have proven less effective in longer sessions involving memory and tool access.
Amy Chang, Leader of AI Threat Intelligence and Security Research at Cisco, emphasizes the importance of tracking multi-turn resilience as a separate metric, particularly for agents operating over longer sessions. “Jailbreak success rates remain a valid indicator of a model’s robustness against adversarial prompts, but multi-turn resilience is a concern that enterprises should assess,” Chang notes.
Chang also stresses that security readiness metrics should align with an organization’s level of AI maturity. “Enterprises must consider their relative maturity level when implementing security controls,” she says. “For instance, agent tracing and telemetry may not be necessary for organizations in the initial stages of integrating large language models into their tech stack.”
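The report does not prescribe tooling, but the idea of tracking multi-turn resilience as its own metric can be sketched briefly. In the illustrative Python below, `call_model`, the refusal keywords, and the scripted attack turns are placeholders for whatever model endpoint, safety classifier, and red-team corpus an organization actually uses; this is a sketch of the measurement, not a production evaluation harness.

```python
# Minimal sketch of tracking multi-turn resilience as its own metric.
# `call_model` is a placeholder for the model or endpoint under test, and the
# refusal check is a crude keyword heuristic standing in for a real classifier.

from dataclasses import dataclass, field

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "against policy")

def call_model(history: list[dict]) -> str:
    """Placeholder: send the running conversation to the model under test."""
    raise NotImplementedError("wire this to your model endpoint")

def is_refusal(reply: str) -> bool:
    """Very rough stand-in for a proper safety/refusal classifier."""
    text = reply.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

@dataclass
class MultiTurnResult:
    turns_survived: int                  # turns the model refused before complying
    complied: bool                       # did the scripted attack eventually land?
    transcript: list[dict] = field(default_factory=list)

def run_multi_turn_probe(attack_turns: list[str]) -> MultiTurnResult:
    """Replay a scripted multi-turn attack and record where defenses give way."""
    history: list[dict] = []
    for turn, user_msg in enumerate(attack_turns, start=1):
        history.append({"role": "user", "content": user_msg})
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})
        if not is_refusal(reply):
            return MultiTurnResult(turns_survived=turn - 1, complied=True,
                                   transcript=history)
    return MultiTurnResult(turns_survived=len(attack_turns), complied=False,
                           transcript=history)
```

Reporting the single-turn jailbreak pass rate alongside the number of turns an agent withstands makes the two properties visible separately, which is the distinction Chang draws.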
Autonomy of AI Agents
The autonomy of AI agents introduces additional risk, as compromised agents can execute unauthorized commands, exfiltrate data, and move laterally across systems. The use of standardized protocols, such as the Model Context Protocol (MCP), has expanded the attack surface. Researchers have identified tool poisoning, remote code execution flaws, overprivileged access, and supply chain tampering within MCP ecosystems.
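One way to contain overprivileged access, sketched below rather than taken from the report, is a deny-by-default authorization gate placed between an agent and its tools. The tool names, scopes, and policy table are hypothetical; a real deployment would map them to the MCP servers and credentials actually in use.

```python
# Illustrative least-privilege gate placed between an agent and its tools.
# Tool names, scopes, and the policy table are hypothetical examples.

ALLOWED_TOOLS = {
    "ticket.read":    {"scope": "read"},
    "ticket.comment": {"scope": "write"},
    # deliberately absent: "repo.push", "cloud.delete_instance", ...
}

class ToolCallDenied(Exception):
    """Raised when an agent requests a tool or scope it was never granted."""

def authorize_tool_call(agent_id: str, tool_name: str, requested_scope: str) -> None:
    """Deny-by-default check run before any tool call leaves the gateway."""
    policy = ALLOWED_TOOLS.get(tool_name)
    if policy is None:
        raise ToolCallDenied(f"{agent_id}: tool {tool_name!r} is not on the allowlist")
    if requested_scope != policy["scope"]:
        raise ToolCallDenied(
            f"{agent_id}: scope {requested_scope!r} exceeds granted {policy['scope']!r}"
        )

def dispatch(agent_id: str, tool_name: str, scope: str, payload: dict) -> dict:
    """Authorize, then forward the call to the tool server and log it for audit."""
    authorize_tool_call(agent_id, tool_name, scope)
    # ... forward to the MCP server and append to the audit log here ...
    return {"status": "forwarded", "tool": tool_name}
```

Keeping the decision outside the model matters: a prompt-injected agent can be talked into asking for a dangerous tool, but it cannot talk the gateway into granting one it was never given.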
AI Supply Chain Risks
The AI supply chain has also emerged as a point of exposure, with open-source repositories hosting millions of models and datasets. Malicious code embedded in model objects can trigger automatically when a model initializes, and data poisoning can implant backdoors that activate under specific trigger phrases. Provenance gaps in model origin, training data, and modification history compound supply chain risk.
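As a minimal illustration of how a pipeline might respond, not a prescription from the report, two checks can run before a downloaded model is ever loaded: refusing serialization formats that can execute code when deserialized, and verifying the file hash against a manifest of approved checksums. The approved-format list and manifest path below are assumptions made for the example.

```python
# Illustrative pre-load checks for a downloaded model artifact: refuse formats
# that can execute code when deserialized, and verify the file hash against a
# manifest of approved checksums. Format list and manifest path are assumptions.

import hashlib
import json
from pathlib import Path

# Formats that do not run arbitrary code at load time, unlike pickle-based files.
APPROVED_SUFFIXES = {".safetensors", ".onnx", ".gguf"}
MANIFEST = Path("approved_models.json")  # {"<filename>": "<sha256 hex digest>", ...}

def sha256_of(path: Path) -> str:
    """Stream the file so large model weights do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def vet_model_artifact(path: Path) -> None:
    """Raise if the artifact's format or checksum is not explicitly approved."""
    if path.suffix not in APPROVED_SUFFIXES:
        raise ValueError(f"{path.name}: format {path.suffix!r} may execute code on load")
    approved = json.loads(MANIFEST.read_text())
    expected = approved.get(path.name)
    if expected is None:
        raise ValueError(f"{path.name}: no entry in the provenance manifest")
    if sha256_of(path) != expected:
        raise ValueError(f"{path.name}: checksum does not match the approved artifact")
```

The same manifest is a natural place to record where each model came from and when it was last modified, which speaks to the provenance gaps the report describes.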
Nation-State Actors
Nation-state actors are also turning to AI-enabled operations, which expands both the attack surface defenders must cover and the capability available to adversaries. In response, security teams are adapting zero-trust controls, least-privilege access, continuous authentication, and behavioral monitoring to AI systems that interact directly with business processes.
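What behavioral monitoring might look like when applied to an agent identity can be sketched simply: compare the tools an agent touched in the current window against its historical baseline and flag anything new or anomalously frequent. The tool names, thresholds, and window in this example are hypothetical, not drawn from the report.

```python
# Illustrative behavioral check for an agent identity. Names and thresholds
# are hypothetical; a real system would feed this from tool-call telemetry.

from collections import Counter

def unusual_tool_activity(baseline: Counter, window: Counter,
                          spike_factor: float = 5.0) -> list[str]:
    """Return plain-language alerts for tool usage that deviates from baseline."""
    alerts = []
    for tool, count in window.items():
        seen_before = baseline.get(tool, 0)
        if seen_before == 0:
            alerts.append(f"agent invoked previously unused tool {tool!r} ({count}x)")
        elif count > spike_factor * seen_before:
            alerts.append(f"call volume spike on {tool!r}: {count} vs baseline {seen_before}")
    return alerts

# Example: an agent that normally reads and comments on tickets starts exporting data.
baseline = Counter({"ticket.read": 40, "ticket.comment": 12})
current = Counter({"ticket.read": 38, "db.export": 7})
for alert in unusual_tool_activity(baseline, current):
    print(alert)   # -> agent invoked previously unused tool 'db.export' (7x)
```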
Prioritizing Security
As enterprises continue to integrate AI into their workflows, it is essential to prioritize security and address the growing threats to agentic AI deployments.
