Cybersecurity Professionals Face Unsustainable Workloads as Industry Shifts Towards AI-Driven Governance
A recent survey of 300 cybersecurity and IT leaders in the United States reveals that professionals in the field are working an average of 10.8 extra hours per week beyond their contracted schedules. This equates to a de facto sixth working day, with nearly half of respondents logging 11 or more overtime hours weekly. The psychological toll is significant, with many experiencing emotional exhaustion and anxiety.
Despite the pressures, 94% of respondents would choose cybersecurity as a career again, indicating a deep commitment to the field. However, the industry is undergoing a significant shift towards AI-driven governance, which is altering the skills profile of cybersecurity professionals.
The Increasing Adoption of AI Tools
The adoption of AI tools is driving a greater need for interpersonal, communication, and business skills. Over 80% of leaders reported that people skills, including stakeholder management and influence, are now more central to their effectiveness than they were five years ago. This shift is especially pronounced in smaller enterprises, where leaders are more likely to cite the growing importance of people skills.
The Pressures of Managing Automated Systems
As AI becomes more integral to cybersecurity operations, leaders are facing new pressures to manage automated systems, audit AI outputs, and connect security decisions to organizational objectives. However, many organizations are adding AI governance responsibilities to security leaders without adjusting their job structures, leading to burnout.
Ravid Circus, CPO at Seemplicity, notes that the organizational chart itself needs to be reworked to accommodate AI governance. Dedicated AI governance functions should be embedded within security teams, with defined accountability and formal ownership of AI outputs. This includes establishing clear escalation paths and decision frameworks for human intervention.
The Need for Practical Training and Structured Frameworks
The survey found that nearly two-thirds of respondents have sufficient budget to implement AI features, but over half described the training available for human-AI collaboration as limited or insufficient. Circus argues that the budget is not the problem, but rather the lack of practical, role-specific enablement. Security leaders need training that answers questions about validating AI system reports, overriding AI decisions, and explaining AI-driven decisions to stakeholders.
The absence of structured frameworks for human-in-the-loop workflows is also a significant issue. Most teams are improvising accountability in real-time, leading to decision fatigue and operational friction. Circus emphasizes the need for clear lines of accountability and transparency in AI decision-making.
Building Trust in AI Systems
Cybersecurity leaders prioritize consistent, measurable accuracy over time as the primary factor for trusting AI systems. They also value clear accountability, human override controls, and transparent explanations of decision-making processes.
