OpenAI recently revised its artificial intelligence agreement with the Pentagon, adding explicit prohibitions against using its technology for domestic surveillance of U.S. persons and nationals. CEO Sam Altman acknowledged that the initial deal, struck quickly after Anthropic's contract ended, looked "opportunistic and sloppy." The updated language also restricts the use of commercially acquired personal information for surveillance, bringing the agreement in line with constitutional protections and addressing public and employee concerns.
Meanwhile, Anthropic faced a widespread outage for its Claude AI chatbot on March 2, 2026, affecting consumer services due to unprecedented demand. Several U.S. cabinet agencies, including the Departments of State, Treasury, and Health and Human Services, have stopped using Anthropic's AI products, shifting to competitors like OpenAI's ChatGPT and Google's Gemini. This follows President Trump's order to phase out Anthropic's products amid a dispute with the Pentagon over AI technology safeguards. Despite these challenges, Anthropic's refusal to compromise on ethical safeguards has boosted its consumer reputation, with Claude surpassing ChatGPT in app downloads.
Beyond these government contracts, employees at both Google and OpenAI are advocating for stricter limits on military AI use, citing concerns about mass surveillance and autonomous weapons. Google is also reportedly negotiating with the Pentagon over its Gemini AI. In other enterprise news, Intel and Infosys are collaborating to accelerate AI adoption for businesses, integrating Infosys' Topaz Fabric with Intel's hardware to create a unified AI framework. Meta Platforms is testing a new AI shopping tool for its chatbot, aiming to compete with OpenAI's ChatGPT and Google's Gemini, and is also forming a new AI engineering team within its Reality Labs division to integrate AI into metaverse technologies.
Globally, China's leading AI models, such as Kimi, MiniMax, and DeepSeek, scored below 12 percent on the ARC-AGI-2 benchmark, lagging significantly behind top U.S. AI labs in reasoning capabilities. AI inference security is emerging as a critical concern for businesses in 2026, with experts noting risks like data extraction and leakage during the operational phase of AI models. Additionally, researchers have developed a highly efficient AI model inspired by macaque monkey neurons, shrinking it to 10,000 variables to mimic the human brain's visual system, potentially leading to more energy-efficient and humanlike AI. AHCA/NCAL has also submitted comments to the Department of Health and Human Services, urging careful consideration of AI in healthcare, focusing on patient safety, data privacy, and ethical issues.
Key Takeaways
- OpenAI revised its Pentagon AI deal to explicitly prohibit domestic surveillance of U.S. persons and nationals, with CEO Sam Altman admitting the initial agreement was "sloppy."
- Anthropic's Claude chatbot experienced a widespread outage on March 2, 2026, due to high demand.
- U.S. cabinet agencies, including State, Treasury, and HHS, are phasing out Anthropic's AI products and directing staff to use OpenAI's ChatGPT and Google's Gemini.
- Anthropic's stance on ethical safeguards, despite government disputes, has boosted its consumer reputation, with Claude surpassing ChatGPT in app downloads.
- Employees at Google and OpenAI are urging stricter limits on military AI use, concerned about surveillance and autonomous weapons.
- Intel and Infosys are partnering to accelerate enterprise AI adoption by integrating Infosys' Topaz Fabric with Intel's hardware for scalable and secure deployments.
- Meta Platforms is testing an AI shopping tool for its chatbot to compete with OpenAI's ChatGPT and Google's Gemini, and is forming a new AI engineering team for Reality Labs.
- Chinese AI models lag significantly behind U.S. counterparts in reasoning tests, with scores below 12 percent on the ARC-AGI-2 benchmark.
- AI inference security is a growing concern for businesses in 2026, with risks of data extraction and leakage during model operation.
- Researchers developed an energy-efficient AI model inspired by macaque monkey neurons, mimicking the human brain's visual system with only 10,000 variables.
OpenAI adds surveillance limits to Pentagon AI deal
OpenAI has updated its artificial intelligence deal with the Pentagon to include stronger protections against using its technology for mass surveillance of Americans. The original agreement allowed the Pentagon to use OpenAI's AI systems for any lawful purpose. The updated deal specifically states that the AI systems shall not be intentionally used for domestic surveillance of U.S. persons and nationals. This amendment also prohibits the deliberate tracking or monitoring of U.S. persons, even when it involves commercially obtained personal information. The Defense Department said it is open to further discussions and considers the agreements reached reasonable.
Anthropic's Claude chatbot faces widespread outage
Thousands of users reported disruptions with Anthropic's Claude AI chatbot on March 2, 2026. The outage affected consumer-facing services like claude.ai and the company's apps, but businesses using Claude's AI models were not impacted. Anthropic cited unprecedented demand for the service as a reason for the disruption. The company announced that the issue was resolved later that day, with all systems returning to normal operation. This outage occurred during a period of high usage for Anthropic, which has been in a dispute with the U.S. Defense Department.
OpenAI and Pentagon revise AI deal with new surveillance safeguards
OpenAI and the Pentagon have updated their artificial intelligence agreement to include stricter protections against domestic surveillance. OpenAI CEO Sam Altman worked with Pentagon officials to rework the contract after initial concerns were raised. The revised language explicitly states that the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals, in line with constitutional protections. The agreement also prohibits the use of commercially acquired personal information for surveillance. Separately, the Pentagon has not yet formally designated Anthropic a supply chain risk, as discussions continue.
US agencies shift from Anthropic to OpenAI for AI tools
Several U.S. cabinet agencies, including the Departments of State, Treasury, and Health and Human Services, have stopped using Anthropic's AI products. They are now directing staff to use AI models from competitors like OpenAI and Google. This action follows President Trump's order to phase out Anthropic's products. The move is seen as a significant rebuke to Anthropic, which has been in a dispute with the Pentagon over its AI technology safeguards. These agencies will now use platforms such as OpenAI's ChatGPT and Google's Gemini.
OpenAI revises Pentagon deal after CEO calls it 'sloppy'
OpenAI is amending its agreement with the Pentagon to add specific prohibitions against using its artificial intelligence for domestic surveillance. CEO Sam Altman admitted the initial deal, made quickly after Anthropic's contract ended, appeared "opportunistic and sloppy." The revised contract language ensures OpenAI's AI systems will not be intentionally used for domestic surveillance of U.S. persons and nationals, in line with U.S. law. It also restricts the use of commercially purchased data for surveillance. Legal experts are reviewing the enforceability of these new restrictions.
OpenAI amends Pentagon deal, admits initial agreement was 'sloppy'
OpenAI is changing its artificial intelligence deal with the U.S. Department of War following criticism that the initial agreement was rushed and potentially allowed for domestic surveillance. CEO Sam Altman acknowledged the deal looked "opportunistic and sloppy" and stated the company would explicitly prohibit its technology from being used for domestic surveillance or by intelligence agencies like the NSA. This revision comes after Anthropic, the previous AI contractor, was dropped over disagreements about safeguards. The changes aim to address public backlash and employee concerns.
Google and OpenAI employees urge military limits on AI
Employees at Google and OpenAI are calling for stricter limits on the military's use of artificial intelligence. They have circulated letters expressing concerns about AI being used for mass surveillance and autonomous weapons. This comes after the Defense Department blacklisted Anthropic's technology and amid U.S. military strikes on Iran. Google is reportedly in negotiations with the Pentagon over its Gemini AI, while OpenAI has amended its deal to add surveillance protections. The employees aim to build solidarity and prevent companies from being pressured into accepting unfavorable terms.
Pentagon AI dispute boosts Anthropic's image but questions military readiness
Anthropic's refusal to compromise on ethical safeguards for its AI technology in dealings with the Pentagon has boosted its consumer reputation, with its Claude chatbot surpassing ChatGPT in app downloads. However, the dispute also highlights concerns about the readiness of AI for military applications. Experts question whether current AI chatbots are reliable enough for high-stakes tasks like warfare, citing potential errors and unreliability. The Pentagon has ordered agencies to phase out Anthropic's products, while Anthropic plans to challenge the decision legally.
OpenAI revises Pentagon deal, admits it was 'sloppy'
OpenAI CEO Sam Altman stated the company rushed its deal with the U.S. Department of Defense and is revising the terms to include explicit prohibitions against domestic surveillance. The updated agreement clarifies that OpenAI's AI systems will not be intentionally used for domestic surveillance of U.S. persons and nationals. It also prevents the use of commercially acquired personal information for surveillance and confirms that OpenAI's tools will not be used by intelligence agencies like the NSA. These changes follow the Pentagon's dispute with Anthropic over AI safeguards.
Infosys and Intel partner to speed up enterprise AI adoption
Infosys and Intel are collaborating to help businesses implement artificial intelligence initiatives more effectively, moving them from pilot projects to full production. The partnership combines Infosys' Topaz Fabric, an AI services suite, with Intel's hardware and software. This creates a unified AI framework designed to connect infrastructure, models, data, and applications. The goal is to enable scalable, secure, and cost-efficient AI deployments for enterprises across various industries. Both companies aim to optimize AI workloads on Intel processors and accelerators.
Infosys and Intel deepen AI collaboration for global enterprises
Infosys and Intel are expanding their strategic partnership to help global enterprises scale their AI deployments from pilot to production. The collaboration integrates Infosys Topaz, an AI services suite, with Intel's high-performance compute platforms. This aims to create a unified AI fabric that connects infrastructure, models, data, and applications for secure and efficient AI use. The companies are co-innovating on AI workloads across Intel's processors and accelerators to deliver measurable business outcomes. This partnership focuses on making AI solutions more accessible and cost-effective for businesses worldwide.
Meta tests AI shopping tool to compete with ChatGPT and Gemini
Meta Platforms is testing a new artificial intelligence feature for its chatbot that helps users research products for shopping. The tool aims to compete with similar features offered by OpenAI's ChatGPT and Google's Gemini. The feature, available to some U.S. users of Meta AI on the web, provides product suggestions with brand, website, and price information. Meta AI may tailor recommendations based on user location and inferred gender. While there is no direct purchasing option, users can click links to merchant websites.
China's AI models lag behind US counterparts in reasoning tests
New results from the ARC-AGI-2 benchmark show that leading Chinese AI models, including Kimi, MiniMax, and DeepSeek, scored below 12 percent. These scores fall well short of results posted by top U.S. frontier labs as far back as July 2025. The ARC-AGI benchmark evaluates a model's ability to generalize and solve unfamiliar problems, testing reasoning rather than memorized knowledge. Analysts suggest the gap may stem from different design priorities, with U.S. labs prioritizing reasoning architecture while Chinese developers focus on efficiency and rapid iteration.
Scientists create tiny AI brain inspired by monkey neurons
Researchers have developed a highly efficient artificial intelligence model that mimics the human brain's visual system, using data from macaque monkeys. This AI model was shrunk to use only 10,000 variables, a tiny fraction of its original size, making it significantly more energy-efficient than current AI systems. The compact model functions more like a living brain, which could aid in studying brain diseases like Alzheimer's. This biologically inspired approach may lead to more powerful and humanlike artificial intelligence.
AI inference security is a growing concern for businesses
Experts are highlighting AI inference, the operational phase in which trained AI models serve requests, as a critical security frontier for 2026. Unlike model training, inference exposes proprietary logic and sensitive prompts to risks such as data extraction and leakage. Panelists noted that many organizations lack confidence that their AI systems meet current security standards. They also recommend preparing for quantum-era threats by inventorying cryptographic dependencies and adopting quantum-resistant techniques. Securing AI inference is becoming crucial as AI becomes more deeply integrated into business operations.
Meta forms new AI engineering team for Reality Labs
Meta Platforms is establishing a new organization focused on applied artificial intelligence within its Reality Labs division, which develops metaverse technologies. This new team will operate with a flat structure, aiming to foster innovation and speed up decision-making. The move underscores Meta's increasing focus on AI as a key driver for its metaverse ambitions. Reality Labs has experienced significant operating losses, but Meta continues to invest heavily in AI research and development. The new AI engineering group will work on integrating AI into VR, AR, and social metaverse experiences.
AHCA/NCAL comments on AI in healthcare
AHCA/NCAL has submitted comments to the Department of Health and Human Services regarding the use of artificial intelligence (AI) in health care. They emphasized the need for careful consideration of AI implementation, focusing on patient safety, data privacy, and workforce impact. While AI could improve resident care through personalized plans and better diagnostics, concerns exist about ethical issues, potential bias in algorithms, and staff training. AHCA/NCAL urged HHS to develop clear guidelines for responsible AI development in long-term care settings.
Relational intelligence is key in the age of AI
In an era dominated by artificial intelligence, the importance of human connection and 'relational intelligence' is highlighted. This involves the ability to build trust, navigate relationships, and create meaning with others, skills that cannot be automated. The article argues that strengthening human relationships and 'relational infrastructure' is crucial for societal well-being, impacting learning, work, and democracy. It suggests that investing in environments that foster connection is as important as technological advancement for human flourishing.
Sources
- OpenAI Amends A.I. Deal With the Pentagon
- Anthropic’s Claude Chatbot Goes Down For Thousands of Users
- Scoop: OpenAI, Pentagon add more surveillance protections to AI deal
- US agencies switch from Anthropic to OpenAI for AI services
- Sam Altman says OpenAI is renegotiating with the Pentagon after an ‘opportunistic and sloppy’ deal
- OpenAI amends Pentagon deal as Sam Altman admits it looks ‘sloppy’
- Google employees call for military limits on AI amid Iran strikes, Anthropic fallout
- Pentagon dispute boosts Anthropic reputation but raises questions about AI readiness in military
- OpenAI's Sam Altman admits ‘rushed’ deal with Defense Department after backlash
- Infosys collaborates with Intel to accelerate enterprise AI adoption
- Infosys and Intel Deepen Strategic Collaboration to Unlock AI Value for Enterprises Globally
- Meta Tests AI Shopping Research Tool to Rival ChatGPT, Gemini
- Exclusive | Meta to Create New Applied AI Engineering Organization in Reality Labs Division
- AHCA/NCAL Submits Comments on Artificial Intelligence Use RFI
- Welcome to the Era of Relational Intelligence (SSIR)
- Chinese Models Including Kimi, MiniMax And DeepSeek Score Lower Than 12% On ARC-AGI 2, Lesser Than US Frontier Labs’ Scores From July 2025
- Scientists make a pocket-sized AI brain with help from monkey neurons
- Securing AI Inference: The Overlooked Security Frontier in 2026