AI Adoption Gains Pace as Safety Measures Struggle to Keep Up
The Rapid Advancement of Artificial Intelligence
The rapid advancement of artificial intelligence (AI) is outpacing the preparedness of its stakeholders, including individuals, organizations, and governments. Unlike previous technological revolutions, AI adoption is happening at an unprecedented pace, and the safeguards surrounding its development, deployment, and use are struggling to keep up.
A Growing Number of Incidents
According to the latest data from Stanford University's Institute for Human-Centered Artificial Intelligence, the number of reported AI incidents has risen sharply year over year: 362 incidents were reported in 2025, up from 233 in 2024, an increase of roughly 55 percent.
Incident Types and Causes
These incidents range from unintended outputs to misuse and operational failures, and they highlight the complexities of AI systems, which often operate in customer-facing channels or internal automation pipelines.
As a result, small errors can surface quickly and be observed across multiple environments, making it challenging for teams to interpret system behavior that does not always map cleanly to defined failure states.
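To make the cross-environment point concrete, the sketch below correlates recurring error signatures across deployment channels; the record fields, signature names, and environments are hypothetical, not taken from any particular incident tracker.

```python
from collections import defaultdict

# Hypothetical incident records; the field names and signatures are
# illustrative, not drawn from any real incident database.
reports = [
    {"signature": "truncated-output", "environment": "customer-chat"},
    {"signature": "truncated-output", "environment": "internal-pipeline"},
    {"signature": "refusal-loop", "environment": "customer-chat"},
]

# Group the environments in which each error signature has been observed.
envs_by_signature = defaultdict(set)
for report in reports:
    envs_by_signature[report["signature"]].add(report["environment"])

# A signature seen in more than one environment hints at a systemic fault
# rather than a channel-specific glitch.
for signature, envs in sorted(envs_by_signature.items()):
    if len(envs) > 1:
        print(f"{signature}: observed in {sorted(envs)}")
```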
Restricted Access and Disclosure Gaps
Most models now come from industry and are delivered through application programming interfaces (APIs), shifting the field toward restricted access.
This limited access also shapes how organizations evaluate vendors and tools before deployment, as they have less visibility into training processes or model architecture.
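From the consumer's side, API-mediated access typically reduces to the pattern sketched below, here against a hypothetical endpoint and key (no real vendor's interface is implied): the caller sees inputs and outputs, and nothing of what lies behind them.

```python
import requests

# Hypothetical endpoint and key; no real vendor's API is implied.
API_URL = "https://api.example-vendor.com/v1/generate"
API_KEY = "sk-..."  # in practice, loaded from a secrets manager

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Summarize this support ticket.", "max_tokens": 200},
    timeout=30,
)
response.raise_for_status()

# The caller receives generated text only; weights, architecture, and
# training data stay behind the API boundary.
print(response.json().get("text", ""))
```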
Capability Testing vs. Safety Testing
Capability testing remains more visible than safety testing, with model developers publishing results on benchmarks that measure reasoning, coding, and general task performance.
However, safety-related benchmarks are reported less consistently and cover a narrower set of models, reducing the ability to compare systems on how they behave under risk conditions.
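To illustrate the comparability gap, the sketch below checks which benchmarks are reported by every model in a small, entirely invented results table.

```python
# Hypothetical published results: model -> benchmarks with reported scores.
# All model and benchmark names are invented for the example.
reported = {
    "model-a": {"reasoning-suite", "coding-suite", "harm-eval"},
    "model-b": {"reasoning-suite", "coding-suite"},
    "model-c": {"reasoning-suite", "coding-suite", "jailbreak-eval"},
}

capability = {"reasoning-suite", "coding-suite"}
safety = {"harm-eval", "jailbreak-eval"}

# A benchmark supports direct comparison only if every model reports it.
common = set.intersection(*reported.values())

print("comparable capability benchmarks:", sorted(common & capability))
print("comparable safety benchmarks:", sorted(common & safety))
# Every model reports both capability suites, but no safety benchmark is
# covered by all three, so risk behavior cannot be compared head to head.
```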
Oversight Practices and Vendor Relationships
Oversight practices are adapting to limited visibility, with security and risk teams placing more emphasis on continuous monitoring and internal validation.
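What continuous monitoring might look like at its simplest is sketched below; the rolling window, threshold, and quality scores are illustrative assumptions rather than any established standard.

```python
from collections import deque

# A minimal continuous-monitoring sketch: keep a rolling window of
# per-request quality scores (0.0-1.0) and alert when the average dips.
# Window size and threshold are illustrative choices, not standards.
WINDOW = 50
THRESHOLD = 0.80
scores: deque = deque(maxlen=WINDOW)

def record(score: float) -> None:
    scores.append(score)
    if len(scores) == WINDOW:
        avg = sum(scores) / WINDOW
        if avg < THRESHOLD:
            print(f"ALERT: rolling average {avg:.2f} below {THRESHOLD}")

# Simulate a gradual degradation in output quality.
for s in [0.95] * 40 + [0.40] * 20:
    record(s)
```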
Teams are developing processes to classify and respond to AI-related issues that do not fit neatly into established categories such as software bugs or security vulnerabilities.
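A minimal sketch of such a triage process follows, with an invented category set and placeholder keyword rules; real taxonomies will differ by organization.

```python
from enum import Enum, auto

class IssueCategory(Enum):
    SOFTWARE_BUG = auto()
    SECURITY_VULNERABILITY = auto()
    AI_BEHAVIOR = auto()  # harmful or wrong outputs with no code defect
    UNCLASSIFIED = auto()

def triage(description: str) -> IssueCategory:
    """Route a report; the keyword rules are placeholders for real criteria."""
    text = description.lower()
    if "crash" in text or "stack trace" in text:
        return IssueCategory.SOFTWARE_BUG
    if "cve" in text or "exploit" in text:
        return IssueCategory.SECURITY_VULNERABILITY
    if "hallucination" in text or "unexpected output" in text:
        # The category that traditional bug/vulnerability pipelines lack.
        return IssueCategory.AI_BEHAVIOR
    return IssueCategory.UNCLASSIFIED

print(triage("Model returned an unexpected output in production"))
# -> IssueCategory.AI_BEHAVIOR
```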
Vendor relationships are also changing, with organizations relying more heavily on contractual terms, usage controls, and service-level expectations to define accountability.
Conclusion
The rapid advance of AI demands a coordinated effort to build safeguards and oversight mechanisms that keep pace with the technology, ensuring its safe and responsible development, deployment, and use.
