Anthropic finds Firefox flaws as OpenAI launches GPT-5.4

Anthropic's AI model, Claude Opus 4.6, recently identified 22 vulnerabilities in the Firefox browser over a two-week period in February 2026. Of these, 14 were classified as high-severity security flaws, more than were reported in any single month during 2025. Mozilla has since addressed most of these issues in Firefox version 148.0, released on February 24, 2026, demonstrating AI's growing capability to uncover software weaknesses.

OpenAI has advanced AI agent capabilities with its new GPT-5.4 model, which enables AI agents to operate autonomously across computers, applications, and the internet to perform a wide range of tasks. OpenAI states that GPT-5.4 is its most factual model to date, producing 33% fewer false claims than its predecessor, GPT-5.2. The enhanced GPT-5.4 Thinking model is currently available to ChatGPT Plus, Team, and Pro users and aims to improve efficiency and user control in AI interactions.

The competitive nature of the AI field is evident: OpenAI's GPT-5.4 debuted on the Artificial Analysis Intelligence Index with a score of 57, tying Google's Gemini 3.1 Pro Preview. This is the first time a new OpenAI model has not topped the index outright. While GPT-5.4 excels at computer use and professional knowledge work, Gemini 3.1 Pro offers better cost efficiency. Meanwhile, Liquid AI introduced LocalCowork, an open-source desktop application that runs private AI agent workflows locally using the LFM2-24B-A2B model, which is optimized for consumer hardware such as the Apple M4 Max.

Beyond these developments, AI is finding diverse applications and sparking broader discussions. Cyolo PRO version 7.0 now integrates AI-powered session intelligence and OT asset discovery to bolster industrial access security. In cybersecurity, AI-augmented teams completed 73% of challenges in a 72-hour competition, significantly outperforming human-only teams, which completed 46%. Furthermore, new AI solutions are making customer service bots sound more human, potentially enhancing user experience.

The societal implications of AI continue to be a topic of discussion. Pope Leo XIV offered a balanced perspective on AI's future, comparing its potential impact to historical technological shifts and emphasizing human responsibility in its use. In Goodyear, Arizona, a class for seniors explored AI's uses and risks, including a recent ruling that AI prompts can serve as court evidence. Additionally, the 10th annual Naval Applications of Machine Learning workshop in San Diego gathered experts to discuss AI advancements for naval capabilities.

Key Takeaways

  • Anthropic's Claude Opus 4.6 identified 22 vulnerabilities, including 14 high-severity flaws, in Firefox during February 2026.
  • Mozilla fixed over 100 bugs, including 22 security flaws found by Claude, in Firefox version 148.0, released February 24, 2026.
  • OpenAI launched GPT-5.4, enhancing AI agent capabilities to autonomously use computers and applications, and reducing false claims by 33% compared to GPT-5.2.
  • The GPT-5.4 Thinking model is available for ChatGPT Plus, Team, and Pro users.
  • OpenAI's GPT-5.4 scored 57 on the Artificial Analysis Intelligence Index, tying with Google's Gemini 3.1 Pro Preview, indicating increased competition.
  • Liquid AI released LocalCowork, an open-source desktop application for private, local AI agent workflows, optimized for consumer hardware like the Apple M4 Max.
  • Cyolo PRO v7.0 integrates AI for session intelligence and OT asset discovery, enhancing industrial access security.
  • AI-augmented teams completed 73% of challenges in a cybersecurity competition, significantly outperforming human-only teams (46%).
  • New AI solutions are making customer service bots sound more human, potentially eliminating hold music.
  • Pope Leo XIV presented a balanced view on AI's future, emphasizing human responsibility, while a class in Goodyear, Arizona, discussed AI risks and uses, including AI prompts as court evidence.

AI finds over 100 bugs in Firefox, Mozilla fixes 22

Mozilla announced it fixed over 100 bugs in its Firefox browser, with 22 of them being security flaws found by Anthropic's AI. This highlights how AI can quickly find vulnerabilities, even in well-tested software. Anthropic's AI, Claude, identified 14 high-severity security bugs in areas like memory storage and access boundaries. Mozilla released version 148 of Firefox on February 24, 2026, which includes fixes for these issues. This event shows how AI is changing bug detection and may require open-source projects to adapt.

Anthropic's AI finds 22 Firefox security flaws

Anthropic's AI model, Claude Opus 4.6, discovered 22 vulnerabilities in the Firefox browser over two weeks in February 2026. Mozilla confirmed 14 of these were high-severity flaws. This collaboration shows AI's growing ability to find serious bugs in complex software. Claude identified issues in Firefox's JavaScript engine and other areas. Mozilla has fixed most of these bugs in Firefox version 148, with remaining ones to be addressed in future updates.

Claude AI discovers 22 Firefox bugs in two weeks

Anthropic's AI, Claude Opus 4.6, found 22 vulnerabilities in Firefox during a two-week period in February 2026, 14 of which were classified as high-severity. That is more high-severity bugs than were reported in any single month in 2025. The AI also identified other, non-security-related bugs. Most of these issues have been fixed in Firefox version 148.0, demonstrating AI's powerful capability to find software flaws.

OpenAI's GPT-5.4 model boosts AI agent capabilities

OpenAI has released its new GPT-5.4 model, a significant step towards creating autonomous AI agents. This model can now use computers, applications, and the internet to perform tasks. GPT-5.4 also shows improved reasoning and factuality, being 33% less likely to make false claims than GPT-5.2. The GPT-5.4 Thinking model is available for ChatGPT Plus, Team, and Pro users, offering better guidance for complex queries. This update aims to enhance user control and efficiency in AI interactions.

OpenAI launches GPT-5.4, enhancing AI agents

OpenAI has launched its new GPT-5.4 model, designed to advance AI agent capabilities. The model can now autonomously operate across devices and applications, write code, and issue commands. OpenAI claims GPT-5.4 is its most factual model yet, reducing false claims by 33% compared to GPT-5.2. The GPT-5.4 Thinking model is available to premium ChatGPT users, allowing for better task completion and user guidance. This release aims to improve AI's performance in professional services and computer-use tasks.

Liquid AI's LocalCowork runs AI agents privately on your device

Liquid AI has released LocalCowork, an open-source desktop application that runs AI agent workflows locally and privately. It uses the LFM2-24B-A2B model, optimized for low-latency tool use on consumer hardware like the Apple M4 Max. LocalCowork operates offline, using the Model Context Protocol (MCP) to execute tools for tasks like file operations, security scanning, and document processing. The system logs all actions locally for an audit trail, ensuring data privacy for enterprise environments.
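To make the MCP-driven design concrete, here is a minimal sketch of what a tool invocation and its local audit record might look like. MCP is built on JSON-RPC 2.0 with a `tools/call` method; the tool name `read_file`, its arguments, and the audit-log format below are illustrative assumptions, not LocalCowork's actual implementation.

```python
import json
import datetime

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP-style JSON-RPC 2.0 tools/call request.
    The specific tool and arguments are hypothetical examples;
    real tools depend on the MCP servers an agent connects to."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

def audit_entry(call):
    """Hypothetical local audit-log line (timestamp plus the call),
    mirroring the article's description of local action logging."""
    ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return json.dumps({"ts": ts, "call": call})

# Example: an agent requesting a (hypothetical) file-read tool.
req = make_tool_call(1, "read_file", {"path": "report.txt"})
print(req["method"])          # tools/call
print(req["params"]["name"])  # read_file
```

Because every request and response is a plain JSON message, appending each one to a local file yields a complete, offline audit trail without any data leaving the machine.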

Goodyear seniors explore AI uses and risks

A class in Goodyear, Arizona, aimed at seniors, explored the uses and risks of artificial intelligence. The session addressed how AI is becoming more prevalent and the questions surrounding its application. One key point discussed was a recent ruling that AI prompts can be used as evidence in court. The class provided a space for older adults to learn about and discuss this rapidly evolving technology.

Pope Leo XIV offers balanced view on AI's future

Pope Leo XIV, in his message for the 60th World Day of Social Communications, presented a balanced perspective on artificial intelligence. He compared AI's potential impact to historical technological revolutions like writing and the printing press, noting that while risks exist, the technology itself is not inherently good or evil. The Pope cautioned against blindly trusting AI as an infallible source of knowledge. He emphasized that humanity is responsible for how AI is used, whether for good or ill.

Naval AI workshop held in San Diego

The Naval Information Warfare Center Pacific, with AFCEA International San Diego, hosted the 10th annual Naval Applications of Machine Learning workshop in San Diego from March 2-5, 2026. The event gathered researchers, engineers, and military personnel to discuss AI and machine learning advancements for naval use. Key topics included autonomous systems, data analytics, and cybersecurity. The workshop aimed to foster collaboration and speed up the development of AI/ML technologies to improve naval capabilities.

Cyolo PRO v7.0 enhances industrial security with AI

Cyolo PRO version 7.0 now includes AI-powered session intelligence and OT asset discovery to improve industrial access security. The new version analyzes recorded sessions using AI, creating transcripts and categorizing user actions for faster incident response. It also passively discovers OT assets and traffic without needing agents, providing better visibility into operational technology networks. Enhanced dashboards offer a unified view of activity, simplifying management and strengthening governance in industrial environments.

AI customer service bots sound more human

New artificial intelligence solutions are making customer service bots sound much more human. These advancements could potentially eliminate the need for 'hold music' in customer service interactions. The improved AI technology aims to create more natural and less frustrating experiences for customers interacting with automated systems.

GPT-5.4 ties Gemini 3.1 Pro on AI index

OpenAI's new GPT-5.4 model has debuted on the Artificial Analysis Intelligence Index, scoring 57, which ties it with Google's Gemini 3.1 Pro Preview. This is the first time a new OpenAI model has not topped the index outright. While GPT-5.4 leads in specific areas like computer use and professional knowledge work, Gemini 3.1 Pro offers better cost efficiency. The index results show the AI field is becoming increasingly competitive, with leading models closely matched.

AI teams outperform humans in cybersecurity competition

A 72-hour cybersecurity competition called NeuroGrid showed AI-augmented teams completing challenges at a significantly higher rate than human-only teams. AI teams successfully finished about 73 percent of challenges compared to 46 percent for humans. The AI advantage was strongest in medium-difficulty tasks and for lower-ranked teams, narrowing at the elite level. AI teams were also faster at the elite tier, highlighting their potential to change threat assessments and security operations.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI, Artificial Intelligence, Bug Detection, Software Vulnerabilities, Firefox, Mozilla, Anthropic, Claude AI, Security Flaws, OpenAI, GPT-5.4, AI Agents, Autonomous Systems, Liquid AI, LocalCowork, Private AI, Cybersecurity, Industrial Security, Customer Service Bots, AI Ethics, Naval AI, Machine Learning, Google Gemini 3.1 Pro
