A Critical Analysis: Why the Focus on AI Safety is Misdirected at the Wrong Layer


Challenges with Identity Systems for AI Agents

Organizations have invested heavily in identity systems to secure their operations, yet these efforts have produced fragmentation: a sprawl of roles, credentials, and disconnected tools.

A Unified Identity Layer for All Actors

According to Teleport’s Chief Executive Officer, Ev Kontsevoy, a unified identity layer is necessary to treat every actor – human, machine, or AI agent – as a first-class identity.

This would involve tying non-human identities to verifiable attributes, such as workloads, devices, or agents, and granting access based on policy-driven constraints.
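As an illustrative sketch of that idea, the snippet below models access checks driven by identity attributes and policy constraints. All names here (`Identity`, `Policy`, `is_allowed`) are hypothetical for illustration, not Teleport's API.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Identity:
    # A first-class identity for any actor: human, machine, or AI agent.
    name: str
    kind: str                      # e.g. "human", "workload", "device", "agent"
    attributes: frozenset = field(default_factory=frozenset)

@dataclass(frozen=True)
class Policy:
    # Grants a resource/action pair when all required attributes are present.
    resource: str
    action: str
    required_attributes: frozenset

def is_allowed(identity: Identity, resource: str, action: str,
               policies: list) -> bool:
    """Allow access only if some policy for this resource/action
    is fully satisfied by the identity's verified attributes."""
    return any(
        p.resource == resource
        and p.action == action
        and p.required_attributes <= identity.attributes
        for p in policies
    )

policies = [
    Policy("prod-db", "read", frozenset({"team:payments", "env:prod"})),
]

agent = Identity("report-agent", "agent",
                 frozenset({"team:payments", "env:prod"}))
print(is_allowed(agent, "prod-db", "read", policies))   # True
print(is_allowed(agent, "prod-db", "write", policies))  # False
```

The key design point is that the agent carries no standing permissions of its own; every decision is recomputed from verifiable attributes against policy at request time.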

Addressing Regulatory Challenges

Regulated industries, including finance, healthcare, and critical infrastructure, face challenges in adapting to the rapid adoption of agentic AI.

Kontsevoy suggests that regulators need to shift their focus from governance and risk classification to operational accountability in agentic environments.

Ultimately, operational accountability depends on control over identity and the policies governing it.

Actionable Steps for Security Leaders

  • Establishing identity as the control plane across the entire infrastructure.
  • Eliminating static, long-lived credentials and replacing them with short-lived, dynamically issued credentials tied to a verifiable identity.
  • Continuously hardening the environment using visibility gained from the first two steps.
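The second step above, replacing static secrets with short-lived, dynamically issued credentials, can be sketched roughly as follows. This is a toy HMAC-signed token for illustration only (hypothetical helper names; a real deployment would use a certificate authority or an OIDC/STS token service, and the signing key would never live in application code):

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-only-key"  # in practice, held only by a central issuer
TTL_SECONDS = 600               # short-lived: minutes, not months

def issue_credential(identity: str, now: float) -> str:
    """Mint a signed, expiring credential bound to a verified identity."""
    claims = {"sub": identity, "exp": now + TTL_SECONDS}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_credential(token: str, now: float) -> bool:
    """Reject tampered or expired tokens; nothing long-lived exists to steal."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return now < claims["exp"]

now = time.time()
token = issue_credential("agent:report-bot", now)
print(verify_credential(token, now))                    # True: valid and fresh
print(verify_credential(token, now + TTL_SECONDS + 1))  # False: expired
```

Because every credential expires within minutes, a leaked token loses its value quickly, which is the property static, long-lived secrets cannot provide.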

Practical Advice for Security Leaders

Kontsevoy advises against creating new service accounts as shortcuts, embedding credentials into scripts and workflows, and assuming internal environments are inherently safe.

He stresses that the current public conversation about AI risk focuses on model behavior, but the more significant risk lies in the identity and authorization layer, not in the models themselves.

If a model provides incorrect information, the mistake can be recovered from; if an agent with improperly scoped access takes an incorrect action, the consequences are severe and far harder to undo.



