Yesterday, Anthropic announced something unusual: a new AI model so capable at finding and exploiting software vulnerabilities that they’ve decided not to release it to the public.

Instead, they built a controlled program around it — Project Glasswing — and gave access to a curated group of companies and organizations whose job it is to use it defensively, to find and fix critical vulnerabilities before attackers can exploit them.

This is a significant moment. It signals that AI has crossed a threshold in cybersecurity — and it raises immediate questions for security professionals: What exactly can this model do? Who gets access? And how do you get involved?

TL;DR

  • Anthropic launched Project Glasswing on April 7, 2026 — a defensive cybersecurity initiative built around a new frontier model called Claude Mythos Preview
  • The model surpasses human experts at finding software vulnerabilities, including zero-days that had gone undetected for decades
  • It’s deliberately not being released publicly — access is restricted to vetted partners and organizations working on defensive security
  • Partners include AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, Microsoft, NVIDIA, and ~40 additional critical infrastructure organizations
  • Open-source maintainers and security organizations can apply for access through the Claude for Open Source program
  • Anthropic has committed $100M in model usage credits and $4M in direct funding to open-source security orgs

What Is Project Glasswing?

Project Glasswing is a partnership program launched by Anthropic to channel their most capable AI security model toward defensive use by organizations that can deploy it responsibly.

The name comes from the glasswing butterfly — nearly invisible, able to move through environments undetected. The parallel is intentional: the program’s goal is to let defenders find vulnerabilities before attackers do, ideally without anyone noticing the vulnerabilities existed.

The centerpiece is Claude Mythos Preview — a frontier AI model that Anthropic describes as performing “strikingly” at computer security tasks. More specifically: it can find and exploit zero-day vulnerabilities in real codebases, reverse-engineer exploits in closed-source software, and turn known-but-unpatched (N-day) vulnerabilities into working exploits.

The launch partners read like a Who’s Who of enterprise security:

  • Amazon Web Services
  • Apple
  • Broadcom
  • Cisco
  • CrowdStrike
  • Google
  • JPMorganChase
  • Linux Foundation
  • Microsoft
  • NVIDIA
  • Palo Alto Networks

Beyond these eleven named partners, approximately 40 additional organizations responsible for building or maintaining critical software infrastructure have been granted access to scan and secure both their own systems and open-source code they depend on.


What Claude Mythos Preview Can Actually Do

This is the part that matters. Anthropic isn’t vague about capabilities — and the numbers are striking.

It finds vulnerabilities humans miss. Mythos Preview has already discovered thousands of critical vulnerabilities across major operating systems and web browsers — including some that had evaded detection for decades. These aren’t obscure edge cases; they’re in widely deployed, heavily reviewed code that security researchers have been looking at for years.

It discovers zero-days. A zero-day is a vulnerability that’s unknown to the vendor and therefore unpatched — the most valuable category of security flaw. Mythos Preview can find them autonomously in real codebases.

It can build exploits. The model can take a known vulnerability and generate a working exploit. This is the capability that makes Anthropic unwilling to release it publicly — in the wrong hands, it would dramatically accelerate the timeline from vulnerability disclosure to weaponized attack.

The pace is uncomfortable. Over 99% of the vulnerabilities Mythos has found in testing remain unpatched. That’s not a criticism — it reflects how fast the model works compared to human patch development cycles. It’s generating findings faster than they can be fixed.

The comparison Anthropic uses is telling: Mythos Preview “matches or exceeds human experts” at finding security flaws. Not junior researchers — expert-level vulnerability researchers with years of experience. The model operates at that level, at scale, continuously.


Why This Is a Turning Point

For decades, the asymmetry in cybersecurity has favored attackers. Defenders must protect everything; attackers only need to find one way in. Defenders work business hours; attackers work around the clock. Defenders are constrained by budget and headcount; attackers scale cheaply.

AI changes that calculus — but only if defenders get access to it first.

Project Glasswing directly addresses a looming concern: AI models with sophisticated security capabilities are going to proliferate, and the question is whether defenders or attackers get meaningful access first. By creating a structured program that channels the most capable model toward defensive work, Anthropic is explicitly trying to tilt that advantage toward defenders.

As one partner noted in the announcement: “AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure.”

The same model capability that lets Mythos find unpatched zero-days in open-source libraries also means that, eventually, models with similar capabilities will be available to threat actors. The window to get ahead of that — to find and patch the most critical vulnerabilities before they’re weaponized — is the explicit mission of Project Glasswing.


The Risk Side: A Model Too Dangerous to Release

Anthropic is unusually direct about why Mythos Preview isn’t publicly available: it’s too capable at offense.

The same features that make it useful for finding vulnerabilities — autonomous code analysis, exploit generation, reverse engineering of closed-source software — make it a potentially catastrophic tool in adversarial hands. Unlike most AI capabilities where the defensive and offensive applications can be somewhat separated, vulnerability exploitation is inherently dual-use.

The decision to restrict access rather than release broadly is a deliberate risk management choice. Anthropic is betting that maintaining strict control over who uses the model, with requirements for defensive intent and accountability, produces better outcomes than open access.

Not everyone agrees this is sufficient. Critics note that a model this capable, once it exists, will eventually leak or be independently replicated — and that the $100M in credits going to ~50 organizations may not be enough to patch critical infrastructure at the scale and speed the model can find vulnerabilities.

These are legitimate concerns. For now, controlled access is the chosen approach.


How to Get Involved

There are three routes to accessing Project Glasswing resources, depending on your organization’s profile:

Route 1: Open-Source Maintainers

If you maintain open-source software that’s part of critical infrastructure, you can apply directly through the Claude for Open Source program at anthropic.com/glasswing.

Anthropic has committed $2.5M to Alpha-Omega and the OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation — specifically to fund security scanning of open-source projects. Open-source maintainers are explicitly in scope.

This is the most accessible route for independent security researchers and smaller organizations.

Route 2: Critical Infrastructure Organizations

Organizations that build or maintain critical software infrastructure — think operating systems, cloud platforms, network equipment firmware, industrial control systems — are being considered for the ~40 additional partner slots.

There’s no public application form for this tier. The path is direct contact with Anthropic through anthropic.com/glasswing, with a clear articulation of what infrastructure you’re responsible for and how you’d use the access.

Route 3: API Access with Usage Credits

For organizations that don’t qualify for free access but want to use Mythos Preview for defensive security work, API access is available:

  • Input: $25 per million tokens
  • Output: $125 per million tokens
  • Available via: Claude API directly, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry

Anthropic’s $100M in committed usage credits covers participants throughout the research preview — if you qualify as a partner, substantial usage is covered.
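To put those rates in concrete terms, here is a back-of-envelope cost sketch. The token counts in the example are hypothetical; only the per-token rates come from the published pricing above.

```python
# Rough cost estimate for Mythos Preview API usage, using the
# published rates: $25 per 1M input tokens, $125 per 1M output tokens.

INPUT_RATE = 25.0 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125.0 / 1_000_000  # dollars per output token

def scan_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost in dollars for a single scanning job."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical job: a ~200k-token codebase in, a ~20k-token report out.
cost = scan_cost(200_000, 20_000)
print(f"${cost:.2f}")  # → $7.50
```

Even large scanning jobs land in the tens of dollars, which is why the $100M credit pool stretches across roughly 50 partner organizations.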

What Anthropic Is Looking For

Based on the partner profile and program structure, successful applicants will likely demonstrate:

  • Clear defensive security use case (scanning code, finding vulnerabilities to patch — not offensive research)
  • Responsibility for software or infrastructure that’s widely deployed or considered critical
  • Capacity to actually act on findings (patch development, coordinated disclosure process)
  • Accountability structures appropriate for access to a dual-use capability

Security consultancies and penetration testing firms are not the target audience here — the program is focused on organizations that own and can fix the code, not those paid to find problems in others’ code.


What This Means for Security Teams

Even if your organization doesn’t qualify for direct access, Project Glasswing will affect you — because it will change the vulnerability landscape.

More patches, faster. As Mythos findings flow through partner organizations, expect an accelerated pace of critical CVEs being disclosed and patched in major open-source libraries, operating systems, and cloud platforms. The velocity of security updates from major vendors may increase substantially.

Higher baseline of hardened code. If the model is scanning the critical open-source libraries that the entire industry depends on (OpenSSL, curl, glibc, the Linux kernel), the overall security baseline of software that everyone uses improves.

The arms race accelerates. The flip side: as AI-powered vulnerability discovery becomes normalized, the expectation will shift. Security teams that aren’t using AI-assisted scanning will fall behind. The question of “should we invest in AI-powered security tooling” has an increasingly obvious answer.

Watch for Mythos-adjacent capabilities in commercial tools. CrowdStrike, Palo Alto Networks, and others are named partners. Expect capabilities developed through this program to eventually appear in their commercial products — AI-powered vulnerability scanning baked into EDR platforms and cloud security tooling.


Our Take

Project Glasswing is a serious initiative by a serious organization taking a responsible approach to a genuinely difficult problem.

The framing — AI has reached a threshold where it must be controlled, not released broadly — is credible given what Mythos Preview demonstrably does. Finding zero-days that escaped human detection for decades is not hype. The decision to channel that capability toward defensive work first, with strict access controls, is the right call given the dual-use nature of the capability.

For security professionals, the immediate action is to evaluate whether your organization or projects qualify for access. If you maintain open-source security-critical software, apply. If you're responsible for critical infrastructure software, contact Anthropic directly.

For everyone else: patch faster than usual in the coming months. A model that can find vulnerabilities at this scale is about to generate a lot of coordinated disclosure notifications.



Sources