Anthropic rejects Pentagon demands as Meta partners with AMD

A significant dispute has unfolded between AI company Anthropic and the Pentagon, with Anthropic refusing to remove safety guardrails from its Claude AI model. The company's CEO, Dario Amodei, stated that Anthropic cannot in good conscience allow its AI to be used for mass surveillance or autonomous weapons, citing reliability and ethical concerns. The Pentagon, insisting on using AI for 'all lawful purposes,' threatened to cancel Anthropic's $200 million contract and label the company a 'supply chain risk' if it did not comply by a Friday deadline.

The situation escalated when President Trump blacklisted Anthropic, designating it a 'Supply-Chain Risk to National Security' and ordering all federal agencies to immediately stop using its AI. Defense Secretary Pete Hegseth confirmed the ban, allowing agencies a six-month wind-down period. Notably, OpenAI CEO Sam Altman expressed support for Anthropic's 'red lines' regarding military AI use, particularly against surveillance or autonomous weapons, and indicated OpenAI is negotiating its own Pentagon agreement with similar exclusions.

In other major AI developments, Meta is partnering with AMD to deploy six gigawatts of AMD Instinct GPUs, with an initial one-gigawatt deployment scheduled for the second half of 2026, utilizing custom MI450 silicon. Meanwhile, Amazon's new AI czar, Peter DeSantis, is leading a low-cost AI strategy to compete with rivals like Google, Microsoft, and OpenAI, focusing on powerful yet affordable AI technologies. Noma Security's AI Security platform has also integrated with AWS Security Hub, providing a unified view of AI security risks for Amazon Bedrock and SageMaker users.

The AI sector also faces scrutiny, as a powerful AI chip made by TSMC for China's Enflame may violate U.S. export controls. AI's impact on employment is evident, with a former Block data analyst losing his job after AI automated his tasks. In security, experts advocate automating more defensive decisions to counter rapid AI-powered cyberattacks, and Niels Provos launched IronCurtain, an open-source secure AI assistant designed to prevent rogue agent behavior. AI is also transforming consumer shopping, with nearly 80% of consumers using AI assistants to find and compare products, while demand for AI skills remains high alongside a critical need for human leadership skills like communication and adaptability.

Key Takeaways

  • Anthropic rejected the Pentagon's demand to remove safety guardrails from its Claude AI model, citing concerns over mass surveillance and autonomous weapons.
  • The Pentagon threatened to cancel Anthropic's $200 million contract and label it a 'supply chain risk' if it did not allow 'all lawful purposes' use by a Friday deadline.
  • President Trump blacklisted Anthropic, designating it a 'Supply-Chain Risk to National Security' and ordering federal agencies to cease using its AI, with a six-month phase-out period.
  • OpenAI CEO Sam Altman supports Anthropic's stance against military use of AI for surveillance or autonomous weapons, indicating similar concerns for OpenAI's own negotiations with the Pentagon.
  • Meta is partnering with AMD to deploy six gigawatts of AMD Instinct GPUs, with the first one-gigawatt deployment scheduled for the second half of 2026, using custom MI450 silicon.
  • Amazon is pursuing a low-cost AI strategy led by Peter DeSantis to compete with Google, Microsoft, and OpenAI, focusing on affordable yet powerful AI.
  • Noma Security's AI Security platform integrated with AWS Security Hub, offering continuous AI discovery and risk mitigation for Amazon Bedrock and SageMaker users.
  • A powerful AI chip manufactured by TSMC for Chinese company Enflame is under scrutiny for potentially violating U.S. export controls.
  • AI is impacting employment, as seen with a former Block data analyst losing his job due to AI automating tasks, and is also driving the need for automated security responses against cyberattacks.
  • The demand for AI skills is high, but companies also require human leadership skills like communication and adaptability to effectively integrate AI.

Pentagon offers AI deal compromises amid Anthropic dispute

The Pentagon's top technology official, Emil Michael, stated that the department has made concessions to AI giant Anthropic to secure a deal. Anthropic is concerned about its AI being used for mass surveillance and autonomous weapons. Michael assured that federal laws and Pentagon policies already restrict such uses and invited Anthropic to join its AI ethics board. However, Anthropic claims the proposed contract changes made little progress on their safety concerns. If a deal isn't reached by Friday, the military may cut ties with Anthropic, labeling it a supply chain risk.

Trump blacklists Anthropic over Pentagon AI fight

President Trump announced the U.S. government will blacklist Anthropic, labeling the AI company a 'Supply-Chain Risk to National Security.' Defense Secretary Pete Hegseth confirmed that no military contractor can do business with Anthropic, effective immediately. This decision follows Anthropic's refusal to remove safeguards on its AI model, Claude, which the Pentagon wants for 'all lawful purposes.' The government will allow a six-month wind-down period for agencies to find alternatives. Trump stated the ban is because Anthropic is a 'radical left, woke company' trying to dictate military operations.

Anthropic rejects Pentagon AI demands, citing safety concerns

AI company Anthropic is refusing the Pentagon's demand to remove safety guardrails from its Claude AI model, stating it cannot 'in good conscience accede' to the request. Anthropic CEO Dario Amodei cited concerns that the AI could be used for mass surveillance of Americans or in fully autonomous weapons, uses he believes lie beyond what current technology can perform safely. The Pentagon, led by Defense Secretary Pete Hegseth, threatened to cancel Anthropic's $200 million contract and label the company a 'supply chain risk' if it did not comply by Friday. Anthropic considers these threats contradictory but is willing to continue talks.

Anthropic rejects Pentagon's final offer on AI safeguards

Anthropic has rejected the Pentagon's latest contract offer, stating it made 'virtually no progress' on preventing the use of its Claude AI for mass surveillance or autonomous weapons. CEO Dario Amodei said the company 'cannot in good conscience accede' to the Pentagon's demands, despite threats to cancel the $200 million contract and designate Anthropic a 'supply chain risk.' The Pentagon insists it only wants to use AI for 'all lawful purposes' and that existing laws and policies cover Anthropic's concerns. Anthropic remains open to further negotiations.

Anthropic CEO: We cannot allow Pentagon to remove AI safety checks

Anthropic CEO Dario Amodei stated the company 'cannot in good conscience accede' to the Pentagon's demands to remove safety restrictions on its AI technology. The Pentagon threatened to cancel Anthropic's $200 million contract and label the company a 'supply chain risk' if it didn't allow unrestricted use by Friday. Anthropic argues that new contract language offered little progress on preventing AI use for mass surveillance or autonomous weapons. The Pentagon maintains that existing laws and policies already address these concerns and that unrestricted access is needed for 'all lawful purposes.'

Pentagon-Anthropic AI dispute risks sales and warfare strategy

A significant dispute between the Pentagon and AI company Anthropic is nearing a Friday deadline, centering on how the military can use AI in warfare. The Pentagon insists on allowing 'all lawful use' of AI, threatening Anthropic's business if it doesn't remove additional safeguards on its Claude AI models. Anthropic maintains red lines against using its AI for autonomous weapons and domestic surveillance, arguing the technology is not yet reliable for such critical tasks. The outcome is seen as a test of how powerful AI will be deployed militarily and how its risks are managed.

Anthropic rejects Pentagon's AI demands, risking contract

Anthropic has rejected the Pentagon's demand to remove safety features from its Claude AI model, stating the new contract language offered 'virtually no progress' on preventing its use for mass surveillance or autonomous weapons. CEO Dario Amodei declared the company 'cannot in good conscience accede' to the Pentagon's request. The Pentagon had threatened to cancel Anthropic's $200 million contract and label it a 'supply chain risk' if it didn't comply by Friday. Anthropic remains open to further talks but insists on maintaining its safeguards.

Trump orders government to cease using Anthropic AI

President Trump has ordered all federal agencies to immediately stop using Anthropic's AI technology, calling the company 'radical left' and 'woke.' Defense Secretary Pete Hegseth designated Anthropic a 'Supply-Chain Risk to National Security,' barring Pentagon contractors from working with the firm. This action follows Anthropic's refusal to remove internal safeguards on its AI model, Claude, which the Pentagon wants for 'all lawful purposes.' A six-month phase-out period is planned for agencies currently using Anthropic's products.

Trump moves to ban Anthropic from US government systems

President Trump has ordered the U.S. government to immediately cease using Anthropic's AI products, labeling the company 'radical left' and 'woke.' Defense Secretary Pete Hegseth declared Anthropic a 'Supply-Chain Risk to National Security,' prohibiting Pentagon contractors from engaging with the firm. This escalation follows Anthropic's refusal to remove safeguards on its Claude AI model, which the Pentagon seeks for 'all lawful purposes.' A six-month transition period is allowed for agencies to phase out Anthropic's technology.

Trump bans Anthropic AI from government use

President Trump has ordered U.S. government agencies to stop using Anthropic's AI products, calling the company 'radical left' and 'woke.' Defense Secretary Pete Hegseth designated Anthropic a national security risk, banning military contractors from working with the firm. This decision comes after Anthropic refused the Pentagon's demand to remove internal safeguards on its AI model, Claude, which the military wants for 'all lawful purposes.' A six-month phase-out period is in place for agencies using Anthropic's technology.

Anthropic refuses Pentagon AI demands, citing safety

AI firm Anthropic is standing firm against the Pentagon's demand to remove safety guardrails for its AI services, particularly concerning mass surveillance and autonomous weapons. CEO Dario Amodei stated the company 'cannot in good conscience accede' to the request, emphasizing that current AI is not reliable enough for fully autonomous weapons and that mass surveillance risks fundamental liberties. Defense Secretary Pete Hegseth threatened to designate Anthropic a 'supply chain risk' and invoke the Defense Production Act if it didn't comply. The Pentagon spokesperson stated they have no interest in mass surveillance or autonomous weapons but want AI for 'all lawful purposes.'

Trump orders federal agencies to stop using Anthropic AI

President Trump has directed federal agencies to cease using AI technology from Anthropic, calling the company 'radical left' and 'woke.' This escalation follows a dispute with the Pentagon over safety restrictions on Anthropic's AI model, Claude. Anthropic sought guarantees against its technology being used for surveillance or autonomous weapons, while the Pentagon demanded access for 'all lawful purposes.' Defense Secretary Pete Hegseth criticized Anthropic's stance, and the company was added to a national security blacklist, barring government contractors from ties with it.

Trump orders federal agencies to stop Anthropic AI use

President Trump has ordered federal agencies to stop using Anthropic's AI after a dispute with the Pentagon over safety features. The Pentagon demanded Anthropic remove restrictions on its AI model, Claude, for military use, threatening to label the company a 'supply chain risk.' Anthropic CEO Dario Amodei stated the company could not allow its AI to be used for mass surveillance or autonomous weapons, citing concerns about reliability and democratic values. Trump called Anthropic 'Leftwing nut jobs' and ordered a six-month phase-out for most agencies.

Pentagon-Anthropic AI feud impacts sales and warfare strategy

The conflict between the Pentagon and AI company Anthropic is reaching a critical point with a Friday deadline over the military's use of AI in warfare. The Pentagon demands 'all lawful use' of AI, threatening Anthropic's business if it doesn't remove safeguards on its Claude AI models. Anthropic maintains restrictions against using its AI for autonomous weapons and domestic surveillance, citing reliability concerns. This dispute is seen as a major test for how powerful AI will be deployed by the military and how its risks will be managed.

OpenAI shares Anthropic's concerns on military AI use

OpenAI CEO Sam Altman stated he shares rival Anthropic's 'red lines' regarding the military's use of AI, supporting its stance against using AI for U.S. surveillance or autonomous weapons. This comes as Anthropic is in a public dispute with the Pentagon over its AI safeguards. Altman suggested the Pentagon should not threaten companies with the Defense Production Act. He noted that while OpenAI has differences with Anthropic, he trusts the company on safety and on supporting warfighters. OpenAI is also negotiating with the Pentagon over use in classified systems, with similar exclusions.

Anthropic won't remove Pentagon AI safety checks

Anthropic stated it 'cannot in good conscience' allow the Pentagon to remove safety checks from its AI model, Claude. The company is refusing to comply with Defense Secretary Pete Hegseth's demand to allow unrestricted use, despite threats to cancel a $200 million contract and label Anthropic a 'supply chain risk.' Anthropic CEO Dario Amodei cited concerns about AI being used for autonomous weapons and mass domestic surveillance, stating current technology is not reliable enough for these purposes. The Pentagon insists on 'all lawful purposes' access, while Anthropic remains open to further talks with safeguards.

Anthropic rejects Pentagon AI demands, risks penalties

AI startup Anthropic has rejected the Pentagon's demand for unrestricted access to its Claude AI model, risking significant penalties. CEO Dario Amodei stated the company 'cannot in good conscience accede' to the request, citing concerns about mass surveillance and autonomous weapons. The Pentagon threatened to cancel Anthropic's $200 million contract and designate it a 'supply chain risk' by Friday. Anthropic received new contract language that it claims made 'virtually no progress' on its safeguards and remains open to further talks.

Anthropic sees little progress in Pentagon AI talks

AI startup Anthropic reported little progress in its talks with the Pentagon regarding safeguards for its Claude AI system, despite a Friday deadline. CEO Dario Amodei stated the company 'cannot in good conscience accede' to the Pentagon's demand for unfettered access, citing concerns about mass surveillance and autonomous weapons. The Pentagon had offered assurances against these uses but Anthropic found the new contract language insufficient. The dispute centers on Anthropic's demand for safeguards versus the Pentagon's requirement for 'all lawful purposes' use under a $200 million contract.

Pentagon sets Friday deadline for Anthropic AI policy change

AI company Anthropic has refused the Pentagon's demand to loosen restrictions on its AI software, despite threats of legal action. Pentagon spokesperson Sean Parnell stated Anthropic has until Friday at 5:01 PM ET to allow its model for 'all lawful purposes' or face contract termination and designation as a 'supply chain risk.' Anthropic CEO Dario Amodei countered that while the company supports AI for defense, it cannot remove safeguards against mass surveillance and fully autonomous weapons, which he believes undermine democratic values and are not yet reliably safe.

Anthropic defies US government on AI use limits

AI firm Anthropic is clashing with the U.S. government over ethical limits for its AI models, refusing to allow their use for domestic surveillance or fully autonomous weapons. CEO Dario Amodei stated these uses are incompatible with democratic values and current AI is not reliable enough for autonomous weapons. The Pentagon, however, demands 'all lawful purposes' access, with officials like Under Secretary Emil Michael criticizing Anthropic's stance. Anthropic has previously restricted access to firms linked to the Chinese Communist Party and is committed to responsible AI development.

Noma AI Security integrates with AWS Security Hub

Noma Security announced its AI Security platform is now available through the Extended plan in AWS Security Hub. This integration allows customers to secure AI innovations across their environment, from Amazon Bedrock and SageMaker to third-party AI applications. The platform offers continuous AI discovery, posture management, automated red teaming, and risk mitigation. By integrating with AWS Security Hub, Noma provides a unified view of AI security risks alongside cloud security findings, streamlining operations.

Noma AI Security joins AWS Security Hub Extended Plan

Noma Security's AI Security platform is now available via the Extended plan in AWS Security Hub, Amazon Web Services' unified security solution. This integration allows customers to secure AI innovations from Amazon Bedrock and SageMaker to third-party apps and developer agents. Noma provides continuous AI discovery, posture management, automated red teaming, and real-time protection. The partnership offers a single-vendor experience with one contract and bill through AWS, simplifying procurement and deployment of enterprise security solutions for AI.

TSMC AI chip for China's Enflame faces scrutiny

A powerful AI chip made by TSMC for Chinese company Enflame may violate U.S. export controls, according to preliminary research. The Enflame S60 chip's capabilities appear to exceed limits set by the Commerce Department for chips sold to Chinese AI companies; experts suggest this could make its sale illegal under rules in effect since late 2022. TechInsights, the research firm behind the analysis, initially issued a classification for the chip but later revised and then withdrew it pending further analysis. TSMC stated the classification was incorrect and that the chip does not meet the criteria for a controlled AI chip.

Block layoff survivor: AI cost me my job

A former data analyst at Block, Ivan Ureña-Valdes, lost his job despite surviving three previous layoff rounds. He suspected AI would lead to job cuts and observed how AI was automating his tasks, making data pulling and output generation significantly faster. Ureña-Valdes believes AI will continue to disrupt industries and replace jobs where financially beneficial. He expressed gratitude for the severance package but remains concerned about the competitive job market for data professionals.

Automating security decisions vital against AI attacks

Experts warn that to combat increasingly fast AI-powered cyberattacks, organizations must automate more security decisions, even if doing so causes some business disruption. Attackers are operating at 'machine speed,' making human-powered responses insufficient. CrowdStrike reported a significant drop in attacker 'breakout time,' the interval between an initial compromise and lateral movement to another system. This acceleration necessitates faster defensive actions, with AI crucial for handling the high volume of alerts and automating routine tasks in Security Operations Centers.

AMD and Meta partner for massive AI GPU deployment

Meta, formerly Facebook, is partnering with AMD to deploy six gigawatts of AMD Instinct GPUs, significantly boosting its AI computing power. The initial one-gigawatt deployment is scheduled for the second half of 2026, using custom MI450 silicon engineered for Meta's AI workloads. This multi-year agreement includes AMD EPYC processors and ROCm software, expanding Meta's existing relationship with AMD. The partnership aims to accelerate AI infrastructure development and places AMD at the center of the global AI buildout.

New AI agent IronCurtain designed for security

Security engineer Niels Provos has launched IronCurtain, an open-source, secure AI assistant designed to prevent rogue behavior. Unlike probabilistic LLMs, IronCurtain uses intuitive instructions to create enforceable, predictable 'red lines' for AI agents. It mediates between the AI and data-access protocols, adding a crucial layer of access control. Provos hopes IronCurtain will evolve through community contributions, offering a safer approach to AI agent autonomy.

AI skills are in demand, but so are human leadership skills

While AI skills are highly sought after, companies are also facing a significant shortage of leadership and human skills needed to guide AI-enabled organizations. Communication, collaboration, and adaptability are crucial alongside technical AI expertise. The demand for AI skills has surpassed traditional engineering, but without effective human leadership, AI initiatives may not succeed. Experts emphasize the need to cultivate uniquely human contributions like judgment and empathy, and to involve employees in shaping AI adoption.

AI is changing how shoppers find and compare products

Artificial intelligence is now actively influencing how consumers discover, evaluate, and select products, shifting the e-commerce landscape. Nearly 80% of consumers use AI assistants, with over a quarter using chat-based AI for shopping tasks like finding products and comparing options. This AI-led discovery often happens outside traditional retailer sites, narrowing choices before shoppers even visit. Retailers face pressure to remain visible and credible as AI becomes an integral part of the shopping journey, influencing decisions before a purchase is made.

Amazon pursues low-cost AI strategy

Amazon's new AI czar, Peter DeSantis, is leading the company's efforts to compete in the AI landscape with a focus on cost efficiency. DeSantis, known for delivering complex projects on time, aims to develop powerful yet affordable AI technologies. This low-cost approach is intended to help Amazon compete with rivals like Google, Microsoft, and OpenAI. While Amazon has existing AI products like Alexa and SageMaker, DeSantis is expected to enhance its capabilities, particularly in generative AI, ensuring the company remains competitive.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI ethics, Pentagon, Anthropic, autonomous weapons, mass surveillance, AI safeguards, contract dispute, supply chain risk, national security, Trump administration, AI policy, AI security, AWS Security Hub, Noma Security, AI chip export controls, TSMC, Enflame, AI job displacement, data analyst, cybersecurity, AI attacks, automated security decisions, AI GPUs, AMD, Meta, AI infrastructure, open-source AI, AI agents, AI leadership skills, AI adoption, consumer behavior, e-commerce, AI assistants, AI strategy, cost efficiency, generative AI
