Meta has reportedly delayed the launch of its new foundational AI model, codenamed Avocado, from March to at least May. This postponement stems from the model's underperformance in internal tests for reasoning, coding, and writing when compared to rivals like Google, OpenAI, and Anthropic. While Avocado did surpass Meta's previous model and Google's Gemini 2.5, it did not match Gemini 3.0. Consequently, Meta's AI leaders are considering temporarily licensing Google's Gemini model to power their AI products, despite the company's significant capital spending plans of $115 billion to $135 billion this year for its "superintelligence" ambitions.
Meanwhile, OpenAI is preparing to release GPT-5, a substantial upgrade beyond GPT-4o, promising enhanced reasoning capabilities, a larger context window, and a significant reduction in factual errors. GPT-5 will also feature improved native understanding of multiple formats like text, vision, and audio, aiming to be a reliable partner for complex problem-solving. In a different application, Palantir's software demonstrations reveal how military officials can use AI chatbots, such as Anthropic's Claude, to assist in generating war plans. Palantir's AI Platform (AIP) integrates third-party large language models, allowing analysts to query classified data and receive tailored responses, though the AI does not make final decisions.
An Amazon tech lead, recognized for building AI products, shared insights into "vibe coding," where AI now writes about 95% of their code. This approach emphasizes understanding Large Language Model limitations and critical review at each step. Beyond coding, AI is increasingly integrating with the physical world, making hardware the primary interface, a trend observed at events like CES and Davos. However, this shift presents challenges due to the differing development cycles of rapid AI evolution and slower hardware production.
The broader implications of AI continue to be explored, from legal questions surrounding AI "deepfakes" of individuals' minds, such as Grammarly offering editing suggestions in writer Julia Angwin's style without her consent, to concerns about inherent biases. Artist Nouf Aljowaysir's project, for instance, highlights how AI models misidentify subjects in historical photographs, revealing the potential for severe misinterpretations. Addressing the gender gap in AI, where women constitute only 22% of the global AI workforce, the Silicon Valley nonprofit Technovation, founded by Tara Chklovski, empowers young women through STEM curricula and entrepreneurship competitions. Meanwhile, the Harvard Business Review's Executive Agenda for March 2026 addresses AI's impact on entry-level jobs, and F5 CEO François Locoh-Donou notes that AI is accelerating application delivery and security while increasing complexity for enterprises managing AI workloads.
Key Takeaways
- Meta delayed its new AI model, Avocado, from March to at least May due to underperformance against Google, OpenAI, and Anthropic models, and is considering licensing Google's Gemini.
- Meta plans significant capital spending of $115 billion to $135 billion this year for its AI ambitions.
- OpenAI is set to release GPT-5, featuring enhanced reasoning, a larger context window, reduced factual errors, and improved multimodal understanding (text, vision, audio).
- Palantir's AI Platform (AIP) integrates third-party LLMs like Anthropic's Claude to help military officials generate war plans from classified data, though AI does not make final decisions.
- An Amazon tech lead reports AI now writes about 95% of their code, emphasizing critical understanding of LLM limitations and continuous code review.
- AI's integration with the physical world makes hardware the primary interface, posing challenges due to differing AI and hardware development cycles.
- Legal questions arise from AI deepfakes of individuals' minds, highlighting the need for frameworks to protect intellectual property and personal rights, as seen with Grammarly's use of Julia Angwin's style.
- Artist Nouf Aljowaysir's project demonstrates AI biases, showing models misidentifying subjects in historical photos, which could lead to dangerous misinterpretations in critical applications.
- Technovation, a Silicon Valley nonprofit, addresses the gender gap in AI by empowering young women through STEM education and entrepreneurship, as women currently make up only 22% of the global AI workforce.
- F5 CEO François Locoh-Donou notes AI is accelerating application delivery and security, increasing complexity for enterprises managing AI workloads, with F5 consolidating solutions in its Advanced Delivery Services Platform (ADSP).
Meta delays new AI model launch due to performance issues
Meta has reportedly delayed the release of its new AI model, code-named Avocado, from its original March target to at least May. The model's performance in internal tests of reasoning, coding, and writing fell short of expectations compared with rivals like Google, OpenAI, and Anthropic, and Meta is now considering temporarily licensing Google's Gemini model. The company continues to invest heavily in its AI ambitions, with capital spending plans of $115 billion to $135 billion for the year in pursuit of "superintelligence".
Meta's AI model Avocado delayed due to poor performance
Meta has reportedly postponed the launch of its new foundational AI model, codenamed Avocado, from March to at least May. Sources indicate the model underperformed in internal tests for reasoning, coding, and writing when compared to competitors like Google, OpenAI, and Anthropic. Due to these performance concerns, Meta's AI leaders have discussed potentially licensing Google's Gemini model for their AI products. This delay occurs as Meta invests significantly in its AI development, with capital expenditures totaling nearly $107 billion over the past two years.
Meta delays Avocado AI model release over performance concerns
Meta has delayed the release of its new foundational AI model, codenamed Avocado, from March to at least May due to performance issues. Internal tests showed the model fell short of rivals like Google, OpenAI, and Anthropic in reasoning, coding, and writing. While Avocado outperformed Meta's previous model and Google's Gemini 2.5, it did not match Gemini 3.0. Meta's AI leaders are considering temporarily licensing Gemini to power their AI products, a setback for CEO Mark Zuckerberg's heavily funded AI ambitions.
Silicon Valley program empowers young women in AI
A Silicon Valley program is working to address the gender gap in artificial intelligence by encouraging young women to become leaders in the field. Technovation, a nonprofit founded by Tara Chklovski, provides STEM curriculum and runs entrepreneurship competitions for girls up to age 18. The program aims to create a more inclusive environment and boost women's participation in AI, where they currently make up only about 22% of the workforce globally. This initiative seeks to build confidence and inspire a new generation of women to shape AI development.
Technovation nonprofit boosts women's leadership in AI
The nonprofit Technovation, based in Silicon Valley, is actively encouraging young women to take on leadership roles in artificial intelligence. Founder Tara Chklovski believes social norms often prevent girls from pursuing tech paths, leading to missed potential. Technovation offers STEM education and runs the world's largest tech entrepreneurship competition for young women. The program aims to build confidence and community, addressing the disparity where women make up only 22% of the global AI workforce. This effort is crucial for ensuring women help design and control the future of AI.
OpenAI's GPT-5 promises advanced reasoning and multimodal understanding
OpenAI is set to release GPT-5, a significant upgrade beyond its previous models like GPT-4o. This new AI model is designed for complex reasoning and real-world interaction, aiming to be a reliable partner that can solve problems autonomously. Key improvements include greatly enhanced reasoning capabilities, a much larger context window for processing extensive data, and a significant reduction in factual errors or 'hallucinations'. GPT-5 also boasts improved native understanding of multiple formats like text, vision, and audio, enabling integrated analysis of mixed media inputs.
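As a rough illustration of what "integrated analysis of mixed media inputs" means in practice, the sketch below assembles a chat-style request that mixes text and an image in one message. The payload shape follows the widely used multimodal chat-message format; the `gpt-5` model name and the example URL are placeholders, not a confirmed GPT-5 interface.

```python
import json

def build_multimodal_request(model: str, question: str, image_url: str) -> dict:
    """Assemble a chat-style request mixing text and image inputs.

    The message format mirrors the common multimodal chat schema;
    the model name here is a placeholder, not a confirmed identifier.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "gpt-5",  # hypothetical model name
    "What failure mode does this chart suggest?",
    "https://example.com/chart.png",  # placeholder image
)
print(json.dumps(request, indent=2))
```

The point of the structure is that text and vision arrive as parts of a single message, so the model can reason over both together rather than in separate passes.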
AI's impact on entry-level jobs discussed
The Harvard Business Review's Executive Agenda for March 2026 addresses the evolving landscape of entry-level jobs in the age of artificial intelligence. The publication provides leaders with insights and strategies to navigate consequential decisions and emerging trends. The HBR Executive Agenda includes playbooks, masterclasses, a weekly newsletter, and live conversations with experts. It also features 'The Strategy Lab,' an AI-powered platform designed to assist C-suite leaders in developing and aligning business strategies.
AI deepfakes of minds raise legal questions
The use of AI to create 'deepfakes' of individuals' minds, like Grammarly's use of writer Julia Angwin's name and style, raises concerns about intellectual property and personal rights. Grammarly offered editing suggestions supposedly from various writers, including Angwin and Stephen King, without their explicit consent. This practice highlights the need for legal frameworks to address AI's exploitation of personal identity and creative work. Angwin suggests that New York's century-old Right of Publicity Law could be applied to protect individuals from unauthorized commercial use of their name, likeness, or voice by AI companies.
Hardware becomes key interface as AI moves physical
Artificial intelligence is increasingly integrating with the physical world, making hardware the primary interface for user interaction. While software remains the core engine, its impact is now experienced through tangible hardware. This shift was evident at events like CES and Davos, showing a market trend towards physical engagement for building trust and demonstrating value. However, companies face challenges as AI evolves rapidly while hardware development cycles are much longer, creating a gap. Manufacturers must decide whether to embed third-party AI or compete in software, with partnerships often being a strategic choice.
Palantir demos show military AI chatbots creating war plans
Palantir's software demonstrations reveal how military officials could use AI chatbots, such as Anthropic's Claude, to help generate war plans. These chatbots can analyze vast amounts of intelligence data and suggest courses of action for military analysts. Palantir's AI Platform (AIP) integrates third-party large language models, allowing users to query classified data and receive tailored responses. Demos show the AIP Assistant helping analysts interpret threats, identify enemy units, and generate potential targeting strategies, although the AI does not make final decisions.
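The integration pattern described, a platform that routes an analyst's query plus retrieved context to a pluggable third-party model while leaving decisions with the human, can be sketched generically. Everything below (the function names, the toy document store, the stub model) is hypothetical and illustrates the retrieval-augmented pattern, not Palantir's actual AIP internals.

```python
from typing import Callable

# Toy in-memory document store; a real platform would query an
# access-controlled index of classified data instead.
DOCUMENTS = {
    "sigint-042": "Three vehicle convoy observed moving north at 06:00.",
    "humint-117": "Local sources report a supply depot near the river crossing.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval standing in for a real search backend."""
    terms = query.lower().split()
    return [text for text in DOCUMENTS.values()
            if any(term in text.lower() for term in terms)]

def answer_query(query: str, call_llm: Callable[[str], str]) -> str:
    """Build a grounded prompt from retrieved context and hand it to a
    pluggable third-party model. The human analyst, not this function,
    acts on the response."""
    context = "\n".join(retrieve(query)) or "No matching documents."
    prompt = (f"Context:\n{context}\n\n"
              f"Analyst question: {query}\n"
              "Suggest courses of action; flag uncertainty explicitly.")
    return call_llm(prompt)

# Stub model so the sketch runs without any external service.
echo_model = lambda prompt: f"[model response to {len(prompt)} chars of prompt]"
print(answer_query("convoy movement north", echo_model))
```

Swapping `echo_model` for a real API client is the "third-party LLM integration" step; the retrieval layer and the human-in-the-loop boundary stay the same.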
Amazon tech lead shares AI coding tips for product development
An Amazon tech lead, who was promoted for building AI products, shares key strategies for effective AI-assisted coding, or 'vibe coding.' The lead emphasizes understanding the inner workings of Large Language Models (LLMs) to anticipate their limitations and effectively prompt them. Key advice includes thinking critically before coding, prompting for challenging scenarios like error handling and scalability, and reviewing code at each step to catch errors early. While AI now writes about 95% of their code, understanding the generated code remains crucial for responsibility and debugging.
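The workflow described, prompting for hard cases up front and reviewing each generated step, can be illustrated with a small assumed example; the function below is hypothetical, not the tech lead's actual code. The idea is that the prompt asks the LLM for explicit error handling and input validation rather than the happy path, and the reviewer then exercises the edge cases before trusting the result.

```python
def parse_port(value: str) -> int:
    """Example of the kind of code one would prompt an LLM to harden:
    explicit error handling and range validation, not just int(value)."""
    try:
        port = int(value)
    except ValueError as exc:
        raise ValueError(f"port must be an integer, got {value!r}") from exc
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Review step: exercise edge cases before trusting the generated code.
for candidate in ["8080", "0", "notaport"]:
    try:
        print(candidate, "->", parse_port(candidate))
    except ValueError as err:
        print(candidate, "-> rejected:", err)
```

Running the edge cases at each step is the "catch errors early" part of the advice: the human stays responsible for verifying behavior even when the AI wrote the code.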
Artists highlight AI dangers and biases
Artist Nouf Aljowaysir's photography project at the Moody Center for the Arts exposes the dangers and biases of artificial intelligence. By overlaying AI-generated outlines and certainty percentages onto historical photographs of the Middle East, the project reveals how AI models misidentify subjects with alarming inaccuracy. For example, camels are labeled as horses, and structures are misidentified as military equipment. This highlights the potential for deadly consequences if AI is relied upon for critical decisions, as human biases embedded in algorithms can lead to severe misinterpretations and tragic errors.
F5 CEO sees AI accelerating app delivery and security
F5 President and CEO François Locoh-Donou views the current era as the most exciting in two decades for application delivery and security, driven by the acceleration of AI. As enterprises increasingly run AI workloads across various environments, the complexity of delivering and securing apps and APIs grows. Locoh-Donou highlights that enterprises are managing AI workloads themselves, requiring robust delivery and security solutions, which presents a significant opportunity for partners. He emphasizes the consolidation of delivery and security capabilities into F5's Advanced Delivery Services Platform (ADSP) to simplify operations in complex hybrid and multi-cloud environments.
Sources
- Meta delays rollout of new AI model, NYT reports
- Meta reportedly delays the launch of its new AI model because it's just not that good
- Meta Delays Rollout of New A.I. Model After Performance Concerns
- Silicon Valley program encourages young women to lead in AI
- Silicon Valley nonprofit encourages young women to lead in AI
- GPT-5: Everything we know about OpenAI's next model (2026)
- AI and the Entry-Level Job
- Opinion | Me, Myself and My A.I. Sloppelgänger
- Hardware Is the New Software
- Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans
- An Amazon tech lead's top tips for vibe coding with AI
- Artists Lay Bare The Dangers And Biases Of Artificial Intelligence
- F5 CEO On 'The Most Exciting Time' In Two Decades As AI Accelerates App Delivery, Security: Exclusive