Anthropic launches Project Glasswing with Amazon, Apple, and Google

Anthropic has launched Project Glasswing, a major cybersecurity initiative bringing together over 45 organizations, including tech giants AWS, Apple, Google, and Microsoft. The project uses Anthropic's new AI model, Claude Mythos Preview, to identify and address security vulnerabilities in software. The goal is to put advanced AI to proactive, defensive use, preparing the industry for future AI-powered threats.

The Claude Mythos Preview model has already discovered thousands of security flaws, including critical zero-day vulnerabilities and long-standing issues in operating systems and browsers. Anthropic is not releasing Mythos publicly because of its advanced ability to find and exploit software weaknesses, instead providing limited access to partners. This strategy aims to give security professionals a crucial head start in defending against potential AI-driven cyberattacks. AWS supports the effort by providing the cloud infrastructure for Anthropic's model development.

In other significant AI news, Nvidia's acquisition of SchedMD, the company behind the widely used Slurm workload management software, has raised concerns within the AI community. Slurm schedules workloads on most of the world's supercomputers and AI clusters. Experts worry that Nvidia could leverage this control to favor its own GPUs, potentially impacting competition. Nvidia, however, states that Slurm will remain open-source and hardware-agnostic.

Meanwhile, Microsoft is actively working to expand AI language support globally through projects like Gecko and Paza, aiming to make AI accessible beyond its current English-centric training. The broader impact of AI is also being explored, with "vertical AI" promising to transform industries by reasoning and executing specialist tasks, moving beyond traditional SaaS limitations. While AI coding tools simplify software creation, they also introduce risks of increased complexity and expanded attack surfaces. The fashion industry is evaluating AI's role in sustainability, balancing potential efficiency gains against the energy demands of generative AI.

Current AI limitations are also evident, as OpenAI CEO Sam Altman noted that ChatGPT's voice model still struggles with basic timekeeping functions, like starting a timer, and may require another year for reliable performance. Cybersecurity remains a critical focus, with discussions at GEANT Security Days 2026 addressing AI, internet resilience, and agentic LLMs. Additionally, a webinar highlights how disconnected enterprise applications create "dark matter" attack surfaces, which AI agents could inadvertently exploit, underscoring the need for robust identity management. Clemson University is also hosting a forum on April 9, 2026, to discuss its human-centered approach to AI in learning, research, and public engagement.

Key Takeaways

  • Anthropic launched Project Glasswing with over 45 partners, including AWS, Apple, Google, and Microsoft, to enhance cybersecurity.
  • Anthropic's Claude Mythos Preview AI model has found thousands of security flaws, including zero-day vulnerabilities, in operating systems and browsers.
  • Access to Claude Mythos Preview is restricted to partners due to its advanced capabilities and potential for misuse by malicious actors.
  • Nvidia acquired SchedMD, the developer of Slurm workload management software, sparking concerns about Nvidia's potential control over AI computing systems.
  • Microsoft is developing projects like Gecko and Paza to expand AI language support and make AI more accessible globally, addressing English-centric training.
  • OpenAI CEO Sam Altman stated that ChatGPT's voice model currently struggles with reliable timekeeping and may take another year to improve.
  • AI coding tools simplify software creation but can lead to increased complexity, errors, and an expanded attack surface for hackers.
  • Vertical AI is emerging to reason and execute specialist tasks in industries, aiming to impact labor costs and operational efficiency beyond traditional SaaS.
  • AI offers potential for fashion sustainability through automation and supply chain efficiency, but generative AI's energy consumption raises environmental concerns.
  • Enterprise security faces risks from "dark matter" applications and identity gaps, which AI agents could inadvertently amplify, necessitating improved identity management.

Anthropic's Mythos AI finds security flaws with top tech partners

Anthropic has launched Project Glasswing, a cybersecurity initiative involving major tech companies like AWS, Apple, and Google. They are using Anthropic's new AI model, Claude Mythos Preview, to find security vulnerabilities in software. The model has already discovered thousands of flaws, including long-standing ones in operating systems and browsers. The goal is to put AI to defensive use before attackers can, with partners testing the model for security work.

AI rivals unite to find software security weaknesses

Anthropic is leading Project Glasswing, a new initiative with over 45 organizations, including competitors like Apple and Google. They will use Anthropic's Claude Mythos Preview AI model to test and improve cybersecurity. The model excels at finding software flaws, and the project aims to prepare for a future in which such AI capabilities are widely available. Partners will use the model for defensive security and share their findings to help the industry.

Anthropic's Mythos AI targets enterprise security defenses

Anthropic has released Mythos, a new AI model focused on cybersecurity, for a limited preview with major companies. This marks Anthropic's move into specialized AI solutions beyond general assistants like Claude. The model aims to help enterprise security teams by finding vulnerabilities and anomalies faster than human analysts. This limited release strategy allows for rigorous feedback in high-stakes environments.

Project Glasswing unites tech giants to secure software for AI era

Anthropic, AWS, Apple, Google, Microsoft, and others have formed Project Glasswing to secure critical software using Anthropic's new AI model, Claude Mythos Preview. This model can find and exploit software vulnerabilities at a level surpassing most humans. The initiative aims to use these advanced AI capabilities for defensive purposes, finding and fixing thousands of security flaws before they can be exploited by malicious actors. Anthropic is providing credits and donations to support this effort.

AWS and Anthropic build AI defenses against future threats

AWS is partnering with Anthropic on Project Glasswing, using Anthropic's advanced AI model, Claude Mythos Preview, to enhance cybersecurity. The model is highly capable at reasoning and cybersecurity tasks, helping AWS identify weaknesses even in well-tested code. The collaboration focuses on building defenses proactively, using AI to find and fix vulnerabilities at scale before attackers can exploit them. AWS provides the cloud infrastructure for Anthropic's model development.

Anthropic's Mythos AI: A cybersecurity 'reckoning' for good actors

Anthropic has developed a powerful new AI model called Claude Mythos Preview, which is too advanced for public release due to potential misuse. Instead, over 40 tech companies, including Apple, Amazon, and Microsoft, will use it through Project Glasswing to find and fix software security flaws. Anthropic aims to raise awareness about the evolving AI threat landscape and give security professionals a head start in defending against future attacks.

Anthropic limits Mythos AI access due to hacker concerns

Anthropic is restricting access to its advanced AI model, Claude Mythos Preview, due to fears that hackers could exploit its ability to find software weaknesses. The model, which excels at identifying security flaws, is being used by about 40 companies, including Microsoft and Apple, through a cybersecurity initiative called Project Glasswing. Anthropic believes this limited release gives defenders a head start against potential AI-powered cyberattacks.

Anthropic's Mythos AI scans for vulnerabilities in new security initiative

Anthropic has released a preview of its new AI model, Mythos, for cybersecurity work through its Project Glasswing initiative. Twelve partner organizations, including Amazon, Apple, and Microsoft, will use the model to scan software for code vulnerabilities. Anthropic claims Mythos has already found thousands of critical zero-day flaws, some dating back decades. The model, while not specifically trained for cybersecurity, shows strong coding and reasoning skills.

Mythos AI finds thousands of software bugs, sparking security race

Anthropic's new AI model, Claude Mythos Preview, can identify thousands of zero-day vulnerabilities in major operating systems and web browsers. The model is so powerful that Anthropic is not releasing it publicly, instead using it with partners in Project Glasswing to proactively fix critical bugs. This initiative includes companies like Amazon, Apple, and Microsoft, aiming to patch flaws before AI-powered attacks become widespread. Mythos can even chain vulnerabilities to gain system control.

Nvidia's SchedMD acquisition raises AI software control concerns

Nvidia has acquired SchedMD, the company behind Slurm, the open-source workload manager used by most of the world's supercomputers and AI clusters. Experts worry the deal could give Nvidia more influence over how global AI computing systems operate. Concerns exist that Nvidia might optimize Slurm to favor its own GPUs, creating an unfair advantage over rival chipmakers. Nvidia states Slurm will remain open-source and hardware-agnostic.
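For readers unfamiliar with Slurm, the concern is easier to grasp by seeing how clusters depend on it: users describe jobs in batch scripts with `#SBATCH` directives, and Slurm decides where and when they run. A minimal sketch follows; the job name, partition, and script path are illustrative, not from any specific cluster:

```shell
#!/bin/bash
#SBATCH --job-name=train-model    # name shown in the job queue
#SBATCH --nodes=1                 # number of compute nodes to allocate
#SBATCH --gres=gpu:4              # request 4 generic GPU resources (vendor-neutral)
#SBATCH --time=02:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --partition=gpu           # target partition; names are site-specific

# srun launches the job step under Slurm's resource controls
srun python train.py
```

The script is submitted with `sbatch script.sh`. Note that the `--gres` GPU request is deliberately hardware-agnostic, which is exactly the neutrality observers hope survives the acquisition.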

Nvidia deal sparks AI ecosystem control worries

Nvidia's acquisition of SchedMD, which manages the widely used Slurm software for AI clusters and supercomputers, is raising concerns in the AI community. Experts fear Nvidia could leverage its control over this key software layer to favor its own hardware, potentially impacting competition. While Nvidia assures Slurm will remain open-source and neutral, many are watching closely to see if the company's influence over both hardware and software management grows.

AI coding tools create risks alongside benefits

AI tools are making it easier for people without coding experience to create websites and apps. However, experts are concerned that this could lead to an explosion of complex and error-prone software. While AI can help review code and find security vulnerabilities faster, it also makes mistakes and can create readability issues. The increased complexity from AI-generated code also expands the potential attack surface for hackers.

Clemson University hosts forum on human-centered AI approach

Clemson University is holding a forum on April 9, 2026, to introduce its 'human-centered' approach to artificial intelligence. The event will cover how AI will be used in learning, teaching, research, and public engagement at the university. Leaders will discuss the AI Initiative's role and expectations for students, faculty, and staff. Future sessions will focus on AI tools, applications, and security guidelines.

Vertical AI promises to transform industries beyond software

Vertical AI is set to revolutionize industries by moving beyond the limitations of traditional vertical SaaS. Unlike SaaS, which mainly records information and assists workers, vertical AI reasons about and executes tasks at a specialist level. This shift means AI will directly impact labor costs and operational efficiency in sectors like healthcare. Key capabilities include compounding learning, contextual reasoning, and concurrent execution, enabling AI to perform complex work and create more durable businesses.

AI in fashion: Boosting sustainability or increasing energy use?

Artificial intelligence offers potential benefits for fashion sustainability teams by automating reporting and improving supply chain efficiency. Brands like H&M and Kering are using AI for tasks like demand planning and traceability. However, generative AI is energy-intensive, raising concerns about its environmental footprint. While AI can boost productivity and reduce resource consumption, its overall impact on sustainability is still being evaluated.

GEANT Security Days 2026 to focus on AI and internet resilience

GEANT Security Days 2026 in Utrecht will address key challenges in AI, internet resilience, and cybersecurity. Topics include agentic LLMs, automation, and securing networked systems. Discussions will also cover building local internet resiliency clubs and using playfulness in security practices. The event will explore AI's role in incident response, cloud security, and the operational pressures on security teams.

Microsoft works to make AI accessible in more languages

Microsoft is developing tools and partnerships to expand AI language support, aiming to make it accessible to more people worldwide. Currently, AI models are heavily trained on English content, creating a disparity in who benefits. Projects like Gecko and Paza are focused on building AI that understands diverse languages, accents, and cultural contexts, ensuring technology empowers communities rather than excluding them. This effort is crucial as AI becomes more integrated into daily life.

ChatGPT still can't reliably track time, says CEO

OpenAI CEO Sam Altman stated that ChatGPT's voice model may take another year to reliably perform simple functions like starting a timer. Despite their advanced capabilities, AI models notoriously struggle with timekeeping. Altman acknowledged this as a known issue, indicating that adding time-tracking intelligence to voice models is a priority. This highlights a current limitation in AI's practical application.

Webinar addresses AI risks from identity gaps

A new webinar will discuss how disconnected applications within enterprises create security risks, especially with the rise of AI. These 'dark matter' applications are outside centralized identity systems, forming an unmanaged attack surface. AI agents, while increasing productivity, can inadvertently amplify these risks by accessing unsecured systems. The session will offer a roadmap for closing these identity gaps and securing organizations against AI-exploited vulnerabilities.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI, Cybersecurity, Vulnerability Detection, Software Security, Anthropic, Project Glasswing, Claude Mythos Preview, AI Model, Enterprise Security, AWS, Apple, Google, Microsoft, Zero-day Vulnerabilities, AI Threats, Defensive AI, AI for Good, Nvidia, SchedMD, Slurm, AI Software Control, AI Ecosystem, AI Coding Tools, AI Risks, Human-Centered AI, Clemson University, Vertical AI, AI in Industries, AI in Fashion, Sustainability, Energy Consumption, GEANT Security Days 2026, Agentic LLMs, Internet Resilience, AI Incident Response, AI Language Support, Microsoft AI, ChatGPT, OpenAI, AI Limitations, Identity Gaps, AI Security Risks
