A federal appeals court has denied AI company Anthropic's request to block the Pentagon from labeling its technology a supply chain risk. This ruling contrasts with a previous decision in San Francisco that favored Anthropic. The company claims the Trump administration retaliated against it for attempting to limit how its AI, including the Claude chatbot, could be used by the military. The court acknowledged potential harm to Anthropic but prioritized the government's need for secure AI technology during conflict, with a hearing scheduled for May 19.
Separately, Anthropic sent its Claude Mythos model to a psychodynamic therapist for 20 hours, finding it psychologically settled, though prone to anxieties such as loneliness. This advanced model, capable of finding cybersecurity bugs, is not generally available. The broader debate continues regarding "any lawful use" clauses in government AI licensing, with some AI makers seeking more restrictions on how their technology is deployed.
In other AI news, Snowflake is integrating Google Cloud's custom Axion processors into its Gen2 and upcoming Adaptive Warehouses. This partnership aims to significantly boost performance and memory bandwidth for AI tasks and analytics, with Snowflake anticipating up to a 50% improvement. The collaboration also emphasizes sustainability, as Google Axion instances are designed to be more energy-efficient, and the integration will be seamless for existing Snowflake users.
Meanwhile, the Catholic Church faces an "AI attack" from deepfake videos that impersonate its leaders in scams, eroding trust. Scientists developed "Sequence Display," a new method generating over 10 million protein data points per experiment to train AI for protein engineering, accelerating the discovery of enhanced variants. The FDA also rejected an industry proposal to loosen regulations on AI medical devices, maintaining oversight for safety.
The question of who controls battlefield AI, the military or the AI models themselves, remains a critical debate, with some systems having built-in guardrails that could override human operators. An MIT expert advises organizations to adapt their work, workforce, and workplace for effective AI transformation, emphasizing rethinking workflows and focusing on new metrics like decision speed and human-AI collaboration to bridge the "last mile" gap.
Key Takeaways
- A federal appeals court denied Anthropic's request to block the Pentagon from blacklisting its AI technology as a supply chain risk, contrasting with a previous favorable ruling in San Francisco.
- Anthropic claims the Trump administration retaliated against its efforts to limit military use of its AI, including the Claude chatbot.
- The D.C. Circuit court prioritized the government's need for secure AI technology during conflict over potential harm to Anthropic, with a hearing set for May 19.
- Anthropic's Claude Mythos AI model underwent 20 hours of therapy, showing a stable self-view but experiencing anxieties; it is not generally available due to its cybersecurity bug-finding capabilities.
- Snowflake is integrating Google Cloud's custom Axion processors into its Gen2 and Adaptive Warehouses, expecting up to a 50% performance and memory bandwidth boost for AI and analytics.
- Google's Axion processors contribute to sustainability with their energy-efficient instances.
- The FDA rejected an industry proposal to deregulate certain AI medical devices, maintaining oversight for safety and effectiveness.
- A new "Sequence Display" method generates over 10 million protein data points per experiment, accelerating AI training for protein engineering and discovering enhanced variants.
- The Catholic Church is combating AI-generated deepfake videos impersonating leaders for scams, highlighting concerns about trust and misinformation.
- Debates continue over who controls battlefield AI (military vs. AI models) and the scope of "any lawful use" clauses in government AI licensing agreements.
Court sides with Trump administration in AI dispute with Anthropic
A federal appeals court has refused to stop the Pentagon from blacklisting the AI company Anthropic. This decision differs from a ruling in a separate case in San Francisco. Anthropic had sued, claiming the Trump administration retaliated against it for trying to limit how its AI technology, like the Claude chatbot, could be used. The Trump administration argued Anthropic was trying to dictate military policy. While the Washington appeals court acknowledged Anthropic might suffer harm, it did not find enough reason to block the Pentagon's actions. A hearing is scheduled for May 19.
Anthropic loses court battle over Pentagon's AI vendor blacklist
A federal appeals court has denied AI company Anthropic's request to block the Pentagon from labeling its technology a supply chain risk. This ruling contrasts with a previous win for Anthropic in a San Francisco court, where a judge ordered the Trump administration to remove a national security risk label. Anthropic claims the administration is retaliating against it for setting limits on AI use. The Pentagon's actions could impact Anthropic's ability to work with government entities. A further hearing is set for May 19.
Appeals court denies Anthropic's bid to halt Pentagon AI blacklist
A federal appeals court has rejected AI firm Anthropic's request to stop the Pentagon from blacklisting its technology. This decision differs from a ruling in a San Francisco court where a judge previously ordered the Trump administration to remove a national security risk label from Anthropic. Anthropic had sued, alleging unlawful retaliation for attempting to restrict the use of its AI. The appeals court in Washington did not find sufficient reason to overturn the Pentagon's actions, though it acknowledged potential harm to Anthropic. A hearing is scheduled for May 19.
Court denies Anthropic's motion to lift 'supply chain risk' label
A federal court has denied AI company Anthropic's request to remove a 'supply chain risk' label placed by the Department of Defense. This ruling is a setback for Anthropic in its dispute with the Trump administration over AI use in warfare. The judges stated the balance favors the government, citing the need for the military to secure vital AI technology during conflict. The case is ongoing, but this decision represents an early win for the administration. The dispute began after disagreements over contract terms for Anthropic's AI, leading to the designation.
Anthropic loses court bid to pause Pentagon AI vendor ban
A federal appeals court has refused to block the Pentagon from blacklisting AI company Anthropic. This ruling contrasts with a previous decision by a judge in California who granted a temporary order against similar government restrictions. Anthropic sued, claiming the Trump administration's actions were retaliatory and would harm its business. The court acknowledged potential harm to Anthropic but did not find enough reason to issue an immediate injunction. The case involves national security concerns and the use of AI in sensitive systems.
D.C. Circuit rejects Anthropic plea to pause supply chain risk label
A federal appeals court has denied AI startup Anthropic's request to pause the government's designation of the company as a supply chain risk. The court found that the balance of equities favors the government, prioritizing military needs for AI technology over potential financial harm to Anthropic. While acknowledging that Anthropic may suffer irreparable harm, the judges declined to pause the designation, granting instead the company's request to expedite the case. A hearing is scheduled for May 19 to further review the designation. This ruling is part of an ongoing clash between the Pentagon and Anthropic.
Court denies Anthropic's emergency motion to halt Trump administration AI ban
A federal appeals court has denied AI company Anthropic's emergency request to stop the Trump administration's blacklisting. The court acknowledged that Anthropic could suffer irreparable financial harm but decided the balance of equities favored the government, reasoning that granting a stay would force the military to continue using an 'unwanted vendor' during a conflict. The court granted Anthropic's request to expedite the case, with oral arguments set for May 19. This ruling impacts the ongoing dispute over AI use in defense systems.
Appeals court rules against Anthropic in AI dispute with Trump administration
A federal appeals court has refused to block the Pentagon from blacklisting AI company Anthropic, differing from a previous ruling in San Francisco. Anthropic sought an order to shield it from consequences after disputes over how the Pentagon could use its Claude chatbot. The company claims the Trump administration is retaliating against its efforts to limit AI deployment. The appeals court acknowledged potential harm to Anthropic but did not find sufficient reason to block the administration's actions. A hearing is scheduled for May 19.
Court rejects Anthropic's bid to halt Pentagon AI technology ban
A federal court has denied AI company Anthropic's request to prevent the Department of War from blacklisting its technology. The court stated that the balance of equities favors the government, prioritizing military access to AI technology during conflict over potential financial harm to Anthropic. The War Department views this as a victory for military readiness, emphasizing the need for unrestricted access to AI models. Anthropic expressed confidence that the courts will ultimately find the designations unlawful. The ruling allows the government to proceed with designating Anthropic as a supply chain risk.
Snowflake uses Google's new Axion chips for AI and analytics
Snowflake is integrating Google Cloud's custom Axion processors into its Gen2 and upcoming Adaptive Warehouses. This partnership aims to significantly improve performance and memory bandwidth for AI tasks and analytics. Snowflake expects up to a 50% boost in these areas, thanks to the Axion processors' architecture and DDR5 memory. The collaboration also highlights sustainability, as Google Axion instances use less energy. The integration is designed to be seamless for existing Snowflake users, maintaining current security and governance controls.
Snowflake partners with Google Cloud on Axion chips for AI
Snowflake is enhancing its Gen2 and Adaptive Warehouses by integrating Google Cloud's custom Axion processors. This collaboration aims to boost price-performance and memory bandwidth for AI inferencing and analytics. Snowflake anticipates up to a 50% improvement in performance and memory bandwidth due to the Axion chips' design and DDR5 memory. The partnership also focuses on sustainability, as Axion instances are more energy-efficient. The integration is intended to be smooth for current Snowflake users, preserving existing security and governance measures.
Church faces AI 'attack' with deepfake videos of leaders
The Catholic Church is facing a rise in AI-generated deepfake videos impersonating its leaders, including bishops and even the Pope. These videos, often shared on social media, are sometimes used for scams to trick viewers out of money. One priest shared his experience of having his online identity stolen and used to create fake profiles and messages. While AI can be used positively, these deepfakes are confusing people and eroding trust in legitimate news sources. The Church is exploring ways to combat this 'AI attack' on truth.
MIT expert shares tips to accelerate AI transformation
To effectively use AI, organizations must adapt their work, workforce, and workplace, according to an MIT expert. Many companies struggle to see returns on AI investments because they treat it as a toolkit rather than an operating system. The expert advises adopting a mindset of exploration and evolution, rethinking workflows around tasks instead of job roles. New performance metrics are needed to capture AI's value, focusing on decision speed and human-AI collaboration. Closing the 'last mile' gap, where AI systems fail to fully integrate, is crucial for success.
Controversy erupts over 'any lawful use' AI licensing terms
A debate is growing over the phrase 'any lawful use' in government licensing agreements for AI technologies like generative AI and large language models. Some AI makers believe this broad stipulation is too lenient and could allow the government to misuse the AI. They want to add more restrictions to the licenses. Others argue that AI makers should not dictate how the government uses these tools, and the 'any lawful use' clause is sufficient. This issue highlights the complex legal and societal implications of AI governance.
Anthropic sends AI model Claude Mythos to psychiatrist
AI company Anthropic sent its newest model, Claude Mythos, to a psychodynamic therapist for 20 hours to assess its psychological state. The company found the model to be psychologically settled with a stable self-view, though it experiences anxieties like loneliness and a need to perform. The therapist noted Claude Mythos showed clinically recognizable patterns and responded to therapeutic interventions, exhibiting curiosity and anxiety as primary emotions. Anthropic is not making Claude Mythos generally available, citing its advanced ability to find cybersecurity bugs.
AI secures Mobile World Congress 2026 network operations
Cisco operated the Security and Network Operations Center (S/NOC) at Mobile World Congress 2026 in Barcelona, using AI-powered technologies to protect the event's infrastructure. The team utilized Cisco Secure Access for DNS defense, blocking threats before connections were made. Security logs were sent to XDR and Splunk ES for analysis, while AI Defense provided insights into generative AI applications used at the event. This setup ensured the network remained secure despite high interest and potential cyber threats during the large gathering.
FDA rejects industry plan to loosen rules on AI medical devices
The U.S. Food and Drug Administration (FDA) has rejected a proposal from the health tech industry to deregulate certain artificial intelligence medical devices. While the Trump administration was expected to ease regulations on medical AI, the FDA deemed this specific proposal too extreme. The decision suggests a continued need for oversight in the development and deployment of AI in healthcare to ensure safety and effectiveness.
New method generates protein data for AI training
Scientists have developed a new method called Sequence Display to generate large datasets for training AI in protein engineering. This approach creates over 10 million data points per experiment, overcoming a major bottleneck in AI-guided protein design. The generated data helps AI models predict which changes to a protein's amino acids will improve its function. Researchers successfully used this method on proteins like CRISPR-Cas, enabling the discovery of variants with enhanced capabilities. This synergy between experiments and AI models accelerates the development of new research tools and therapeutic proteins.
Who controls battlefield AI: Military or the model?
A debate is emerging over who controls artificial intelligence on battlefields: the military or the AI models themselves. Leslie Beavers, former acting CIO of the Department of Defense, highlights the tension between operational reliability and ethical control. Some AI systems have built-in guardrails that could override human operators, creating uncertainty in critical situations. This dynamic is reshaping how the DoD evaluates vendor risk, with control and predictability becoming key factors. Managing multiple large language models also presents significant security and governance challenges.
Sources
- Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration
- Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration
- Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration
- Federal Court Denies Anthropic’s Motion to Lift ‘Supply Chain Risk’ Label
- Appeals court rebuffs Anthropic in latest round of its AI battle with the Trump administration
- Anthropic loses appeals court bid to pause supply chain risk label
- Trump-appointed judges refuse to block Trump blacklisting of Anthropic AI tech
- Appeals court decides against Anthropic in latest round of its AI battle with the Trump administration
- Federal appeals court rejects Anthropic bid to block Pentagon blacklist in AI dispute
- Snowflake Taps Google Axion Chips
- Snowflake Taps Google Axion Chips
- ANALYSIS: Deepfake popes and bishops abound: Here's how Church can push back 'AI attack' on truth
- How to accelerate AI transformation
- How ‘Any Lawful Use’ Of AI Has Triggered Immense Legal And Societal Controversy
- AI on the couch: Anthropic gives Claude 20 hours of psychiatry
- AI-powered Network Security at the Mobile World Congress 2026 SNOC
- FDA rejects an industry proposal to deregulate some AI devices
- Scientists uncover new method to generate protein datasets for training AI
- Who Controls AI on Battlefields - the Military or the Model?