
Exabeam has released new research highlighting the gap between executive confidence in artificial intelligence and the daily reality experienced by front-line security analysts. The report, From Hype to Help: How AI Is (Really) Transforming Cybersecurity in 2025, reveals that while AI adoption is widespread, its impact on productivity, trust, and team structure varies sharply by role and region.
The findings confirm a critical divide. In Asia Pacific and Japan, 71% of executives believe AI has significantly improved productivity across their security teams, yet only 5% of security analysts (the people closest to the tools) agree. This perception gap reveals more than a difference of opinion; it underscores a deeper issue with operational effectiveness and trust.
Executives often focus on AI’s potential to reduce costs, streamline operations, and improve strategy. But security analysts on the front lines report a very different experience – one shaped by false positives, increased alert fatigue, and the ongoing need for human oversight.
For many, AI hasn’t eliminated manual work; it’s simply reshaped it, often without reducing the burden. This disconnect suggests that some organisations may be overestimating the maturity and reliability of AI tools and underestimating the complexity of real-world implementation.
“There’s no shortage of AI hype in cybersecurity, but ask the people actually using the tools, and the story falls apart,” said Exabeam Chief AI and Product Officer Steve Wilson. “Analysts are stuck managing tools that promise autonomy but constantly need tuning and supervision. Agentic AI flips that script. It doesn’t wait for instructions. It takes action, cuts through the noise, and moves investigations forward without dragging teams down.”
AI Delivers Most Impact in Threat Detection, Investigation, and Response
While the findings reveal a difference in perception, they also demonstrate AI’s positive impact, most consistently in threat detection, investigation, and response. In Asia Pacific and Japan, 46% of security teams report that AI has improved productivity in these areas by offloading repetitive analysis, reducing alert fatigue, and improving time to insight. AI-driven solutions are strengthening security operations with better anomaly detection, faster mean time to detect, and more effective user behaviour analytics.
Still, trust in AI autonomy remains low. Only 23% of Asia Pacific and Japan teams trust AI to act on its own. The industry is aligned on one thing: performance precedes trust. In security operations, organisations aren’t looking to hand over the reins; they’re counting on AI to exceed the limits of the human mind at scale. By consistently delivering accurate outcomes and automating tedious workflows, AI can become a force multiplier for security analysts, enabling faster, smarter threat detection and response.
Security Teams Are Restructuring in Response to AI
AI adoption is driving structural shifts in the security workforce. More than half of the surveyed Asia Pacific and Japan organisations have restructured their teams due to AI implementation. While 31% report workforce reductions tied to automation, 23% are expanding hiring for roles focused on AI governance, automation oversight, and data protection.
These changes reflect a new operational model for modern security operations centres, one where agentic AI supports faster decisions, deeper investigations, and higher-value human work.
Regional Gaps Signal Uneven Adoption
The report also surfaces regional disparities in AI adoption and its impact on productivity. Organisations in India, the Middle East, Turkey, and Africa report the highest productivity gains (81%), followed by those in the United Kingdom, Ireland, and Europe (60%). Organisations in Asia Pacific and Japan see the third-highest AI-driven productivity improvements at 46%, slightly ahead of North American organisations at 44%.
Bridging Strategy and Execution
As AI continues to reshape the cybersecurity landscape, organisations must reconcile leadership ambition with operational execution. Those looking to close the gap between vision and reality can consider adopting agentic AI for its proactive, action-based capabilities.
Successful strategies will be defined by their ability to align AI capabilities with front-line needs, involve analysts in deployment decisions, and prioritise outcomes over hype.
You can read the full report here.