Anthropic recently announced Project Glasswing. A coalition of major technology companies and financial institutions will gain access to Claude Mythos Preview, a model that has reportedly found thousands of high-severity vulnerabilities across every major operating system and web browser.
The headlines are dramatic: a 27-year-old flaw in OpenBSD, a 16-year-old bug in FFmpeg that survived five million automated tests, and Linux kernel chains that allow complete system takeover. All discovered by an AI system operating with limited human intervention.
The immediate question is whether these advances in AI security research amount to a security crisis. The more useful question may be what they mean for the economics of cyber defence.
The Capabilities Are Real
It is no longer speculative to say that frontier AI models can find and exploit software vulnerabilities at a level competitive with elite human security researchers. Anthropic's announcement is the most public confirmation so far, but the trajectory has been visible for some time.
The implications are straightforward: these are not marginal improvements. They represent a shift in the fundamental nature of security research, for attackers and defenders alike.
The Talent Shortage Makes It Harder to Respond
There is a second structural pressure that receives less attention but matters just as much. According to the 2026 SANS / GIAC Cybersecurity Workforce Research Report, skills gaps have overtaken headcount as the industry's top workforce challenge. For the first time, 60% of organisations identified skills gaps as the greater problem versus 40% citing staff shortages.
The report finds that 74% of organisations say AI is already impacting team size and role structures, yet only 21% have a comprehensive AI security framework in place. Meanwhile, 27% of organisations have experienced actual security breaches as a direct result of workforce capability gaps, and 47% report slower incident response due to shortages.
What this means in practice is that even if AI gives defenders better tools, many organisations lack the people who can deploy, interpret, and act on them. The tools are advancing faster than the workforce can absorb them. This strengthens the case for models that do not depend on building a large internal team: fractional expertise, managed services, and automation-first strategies.
Exploitation Speed Has Already Outpaced Most Patching Cycles
Anthropic's announcement is alarming, but it sits on top of a problem that already existed. According to multiple 2025 studies, the gap between disclosure and exploitation has collapsed, in many cases to less than a day.
The old model of monthly patch cycles was already failing. AI-augmented discovery simply makes the failure more visible and more costly.
Edge devices and security appliances are bearing the brunt. Google's data shows operating systems accounted for 44% of all zero-day exploitation in 2025, with security and networking products making up half of enterprise-targeted zero-days. These are the systems that often lack endpoint detection coverage and remain unpatched the longest.
What Changes for CISOs
If you are leading security for an organisation, this shift demands attention in four areas.
1. Patching Must Become a Continuous Process
Most organisations operate with a patching cadence measured in weeks or months. Critical patches might be deployed in days. That timeline was built around human-paced vulnerability discovery.
When exploits appear within 24 hours of disclosure, a 30-day patching window becomes indefensible. CISOs need to ask whether their vulnerability management programmes can compress to days, and whether their change control processes are the bottleneck.
This is not a call to patch recklessly. It is a call to align change management, testing, and deployment pipelines with a world where unpatched vulnerabilities are discovered and exploited faster than before.
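As a concrete illustration of the metric involved, here is a minimal sketch of how a team might track mean time to patch against a days-not-weeks target. The record timestamps and the 72-hour SLA are illustrative assumptions, not figures from this article:

```python
from datetime import datetime, timedelta

# Hypothetical vulnerability records: (disclosed, patched) timestamps.
# In practice these would come from your vulnerability management tooling.
records = [
    (datetime(2025, 11, 3, 9), datetime(2025, 11, 4, 17)),
    (datetime(2025, 11, 10, 8), datetime(2025, 11, 21, 12)),
    (datetime(2025, 12, 1, 14), datetime(2025, 12, 2, 10)),
]

TARGET = timedelta(hours=72)  # illustrative days-not-weeks SLA for critical patches

def mean_time_to_patch(records):
    """Average gap between public disclosure and deployed patch."""
    gaps = [patched - disclosed for disclosed, patched in records]
    return sum(gaps, timedelta()) / len(gaps)

mttp = mean_time_to_patch(records)
breaches = [r for r in records if r[1] - r[0] > TARGET]
print(f"Mean time to patch: {mttp}")
print(f"Records breaching the 72-hour target: {len(breaches)}")
```

Tracking the breach count against the SLA, rather than only the average, surfaces the long-tail patches that a mean can hide.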
2. Technical Debt Is Now a Material Risk
The vulnerabilities found by Mythos Preview were not in obscure corners of the codebase. They were in FFmpeg, OpenBSD, and the Linux kernel — software that has been reviewed by humans and tested by automation for decades.
If decades-old, well-scrutinised code contains critical flaws, the implications for your own legacy systems are obvious. Every line of unmaintained code, every forgotten integration, and every shortcut taken under deadline pressure is now more likely to be found and exploited.
CISOs should be having direct conversations with their CTOs and engineering leads about technical debt reduction. This is no longer purely a productivity issue. It is a board-level risk issue.
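One practical starting point for that conversation is an inventory of dependency age. The package names, dates, and five-year threshold below are illustrative assumptions; in practice the inventory would come from an SBOM or package manifests:

```python
from datetime import date

# Hypothetical dependency inventory: name -> date of last upstream release.
inventory = {
    "ffmpeg-wrapper": date(2009, 6, 1),
    "legacy-auth-lib": date(2014, 3, 12),
    "json-parser": date(2024, 9, 30),
}

STALE_AFTER_YEARS = 5  # illustrative threshold; tune to your risk appetite

def stale_dependencies(inventory, today):
    """Return dependencies whose last release predates the staleness cutoff."""
    cutoff = date(today.year - STALE_AFTER_YEARS, today.month, today.day)
    return sorted(name for name, last in inventory.items() if last < cutoff)

flagged = stale_dependencies(inventory, date(2026, 1, 15))
print(f"Dependencies overdue for review: {flagged}")
```

Even a crude report like this turns "we have technical debt" into a ranked list a board can ask questions about.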
3. Detection and Response Must Operate at Machine Speed
Prevention is ideal, but it is not sufficient. If AI-assisted attackers can develop novel exploits faster than your patching cycle, your ability to detect anomalous behaviour and respond to incidents becomes the last line of defence.
The question for CISOs is whether their security operations centre can detect and contain an intrusion in hours rather than days. Human-paced triage and manual investigation are no longer adequate. Detection pipelines need to be automated. Response playbooks need to be pre-approved and executable without waiting for a committee.
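To make "pre-approved and executable" concrete, here is a minimal sketch of a containment playbook encoded as data rather than a meeting. The action names and alert fields are hypothetical; real steps would call your EDR, firewall, or identity-provider APIs:

```python
from dataclasses import dataclass

# Hypothetical containment actions. Each returns a record of what it did,
# which would feed the incident log in a real deployment.
def isolate_host(alert):
    return f"isolated host {alert['host']}"

def disable_account(alert):
    return f"disabled account {alert['user']}"

def snapshot_for_forensics(alert):
    return f"captured forensic snapshot of {alert['host']}"

@dataclass
class Playbook:
    name: str
    steps: list  # pre-approved: executable without waiting for a committee

    def run(self, alert):
        # Steps execute immediately, in order, the moment the alert fires.
        return [step(alert) for step in self.steps]

CREDENTIAL_THEFT = Playbook(
    name="credential-theft-containment",
    steps=[isolate_host, disable_account, snapshot_for_forensics],
)

actions = CREDENTIAL_THEFT.run({"host": "srv-042", "user": "jdoe"})
print(actions)
```

The point of the structure is governance: the approval debate happens once, when the playbook is written, not during the incident.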
4. Talent Strategy Must Change
With an estimated global shortfall of more than 4.7 million skilled cybersecurity professionals and skills obsolescence accelerating, the assumption that you can hire your way out of the problem is no longer realistic. The SANS report notes that 60% of organisations cite lack of time due to workload as their greatest training barrier. Teams caught in operational firefighting cannot pause to develop new capabilities.
CISOs need to rethink how security capability is sourced: automation for repetitive tasks, fractional expertise to augment strategic leadership or provide burst capacity, and structured upskilling for existing team members. Hiring junior analysts into an already overwhelmed team is not a solution.
What Boards Should Ask
For non-executive directors and audit committees, there are four questions worth asking at the next board meeting:
What is our mean time to patch for critical vulnerabilities? If the answer is measured in weeks, ask what would be required to reduce it to days.
Do we know where our highest-risk legacy systems are? Not just the crown jewels, but the forgotten integrations and unmaintained dependencies that attackers often use as pivot points.
Can we respond to an incident without waiting for human approval at every step? Automated containment is essential when the attacker operates at machine speed.
Do we have the right talent model for the next two years? Given the skills gap and cost of senior hires, is our security capability built around the right mix of internal team, automation, and external expertise?
Is This a Crisis?
Not yet. The sky is not falling. But the ground is shifting beneath our feet.
For the past two decades, cybersecurity has been a game of incremental improvement: better tools, better training, slightly faster patching. The emergence of AI-augmented vulnerability discovery is changing the underlying economics. The cost of finding flaws has dropped by orders of magnitude. The cost of exploiting them will follow.
The organisations that adapt their defensive posture now, by compressing patching timelines, reducing technical debt, automating detection and response, and rethinking talent strategy, will be in a far better position when these capabilities become widely available to adversaries, if that day has not already arrived.
The ones that treat this as just another vendor announcement will discover, too late, that their defensive assumptions were built for a different era.
The Bottom Line
The good news is that there is no new class of attack to defend against. The vulnerabilities are the same ones we have always faced. What has changed is the speed and scale at which they can be found and exploited, combined with a worsening shortage of the people needed to defend against them.
For CISOs, this is a strategic signal, not a tactical emergency. Use the current window to fix the fundamentals that have been deferred for too long. The organisations that do so will find that AI amplifies whatever security posture they already have. Make sure yours is worth amplifying.

