OpenAI secures Pentagon deal as Anthropic faces federal ban

OpenAI, the company behind ChatGPT, has secured a significant agreement with the Pentagon to deploy its AI models on the military's classified network. OpenAI CEO Sam Altman announced that this deal includes key safety principles, such as prohibiting domestic mass surveillance and ensuring human responsibility for the use of force. The Department of War has agreed to incorporate these principles into law and policy, with OpenAI also developing technical safeguards to ensure its models operate as intended.

This partnership comes after President Trump ordered all federal agencies to cease using AI technology from rival company Anthropic. The Pentagon designated Anthropic a 'supply chain risk to national security,' citing disagreements over the terms of use for its Claude AI model. Anthropic had refused to allow its technology to be used for mass domestic surveillance or in fully autonomous weapons, a stance Secretary of War Pete Hegseth deemed incompatible with American principles. Anthropic plans to legally challenge this designation, which it calls unprecedented for an American company.

In other AI developments, Salesforce is emerging as a central hub for businesses managing multiple AI agents, transforming its platform into a critical operational tool. Integrations like Momentum, which automatically logs sales calls, allow AI agents to handle tasks such as booking meetings and qualifying leads, giving agents shared context and enabling smoother operations. Meanwhile, the Ohio Department of Job and Family Services received a national award for using four AI tools to improve state unemployment services, including a multilingual virtual assistant and an intelligent document processing system.

Amidst these advancements, David W. Bates, a cybernetics specialist, argues that comparing the brain to a computer is misleading. He believes the term 'artificial intelligence' contributes to a philosophical crisis where humans perceive themselves as inferior to machines, leading to a loss of human agency. Bates emphasizes the need to move beyond a simplistic opposition between human and AI, suggesting that the crisis lies in humans becoming more automated in their thinking due to digital infrastructures.

Key Takeaways

  • OpenAI has reached an agreement with the Pentagon to deploy its AI models on the military's classified network.
  • The deal includes OpenAI's safety principles, prohibiting domestic mass surveillance and ensuring human control over the use of force.
  • President Trump ordered federal agencies to stop using Anthropic's AI technology.
  • The Pentagon designated Anthropic a 'supply chain risk to national security' due to disagreements over AI use terms.
  • Anthropic refused to allow its Claude AI model for mass domestic surveillance or fully autonomous weapons.
  • Anthropic plans to legally challenge the 'supply chain risk' designation.
  • Salesforce is becoming a central platform for businesses to manage multiple AI agents, streamlining operations.
  • The Ohio Department of Job and Family Services received an award for using AI to enhance unemployment services.
  • Cybernetics specialist David W. Bates argues that comparing the brain to a computer is misleading and contributes to a loss of human agency.
  • OpenAI will implement technical safeguards and deploy employees to work with government personnel on classified projects.

Pentagon partners with OpenAI for AI use, bans Anthropic

The Pentagon has agreed to use OpenAI's AI models on its classified network. This deal comes after President Trump ordered all federal agencies to stop using Anthropic's AI technology. OpenAI CEO Sam Altman stated that the company will implement technical safeguards to ensure its models function correctly, aligning with the Pentagon's safety requirements. These principles include prohibitions against domestic mass surveillance and ensuring human control over the use of force. The Pentagon's decision to ban Anthropic followed disagreements over the terms of use for its AI models, particularly alleged ties to China and the designation of the company as a supply chain risk.

Trump administration bans Anthropic AI, favors OpenAI

The Trump administration has banned the use of Anthropic's AI products by federal agencies, escalating a conflict over how the US military can use AI systems. This decision came shortly after OpenAI's CEO, Sam Altman, announced a deal to supply the Pentagon with its technology. The Pentagon designated Anthropic a 'supply chain risk,' requiring a six-month transition away from its products, including the Claude AI model used in sensitive intelligence and weapons development. Anthropic argued against certain uses like mass domestic surveillance and autonomous weapons, while the Pentagon insisted it cannot have mission decisions limited by vendor terms.

OpenAI's AI models to be used by Pentagon

OpenAI has reached an agreement with the US Department of War to deploy its AI models on the military's classified network. CEO Sam Altman stated that the agreement includes key safety principles, such as prohibiting domestic mass surveillance and ensuring human responsibility for the use of force, including autonomous weapon systems. The Department of War has agreed to these principles and will incorporate them into law and policy. OpenAI will also develop technical safeguards to ensure its models operate as intended, a measure the Department of War also requested.

OpenAI secures Pentagon deal as Trump removes Anthropic

OpenAI has agreed to deploy its AI models on the Pentagon's classified network, announced CEO Sam Altman. This deal follows President Trump's order for federal agencies to phase out rival AI company Anthropic due to military AI safety concerns. Altman emphasized that the agreement includes OpenAI's core safety principles, such as prohibiting domestic mass surveillance and ensuring human control over the use of force, which the Department of War has accepted. Secretary of War Pete Hegseth designated Anthropic a 'supply-chain risk to National Security,' initiating a six-month phase-out period.

OpenAI and Pentagon agree on AI use after Anthropic dispute

OpenAI, the creator of ChatGPT, has reached an agreement with the Pentagon to provide its AI technologies for classified systems. This deal was finalized just hours after President Trump ordered federal agencies to stop using AI technology from Anthropic. OpenAI CEO Sam Altman stated that the Pentagon respected safety concerns and agreed to principles prohibiting domestic mass surveillance and requiring human responsibility for the use of force. OpenAI will implement technical safeguards to ensure its AI models function as intended, and some OpenAI employees will work with government personnel on classified projects.

OpenAI gains Pentagon AI access post-Anthropic ban

OpenAI CEO Sam Altman announced an agreement with the Pentagon to deploy its AI models on classified networks, incorporating principles against domestic mass surveillance and ensuring human responsibility for the use of force. These terms were agreed upon by the Department of War, which also requested technical safeguards from OpenAI. This development occurred shortly after the Pentagon declared Anthropic a supply-chain risk, prompting President Trump to order federal agencies to stop using Anthropic's technology. Anthropic stated it would legally challenge the designation, which is typically reserved for foreign adversaries.

Pentagon accepts OpenAI's AI safety limits, drops Anthropic

OpenAI has reached an agreement with the Pentagon to use its AI models, accepting specific safety principles that Anthropic had also requested. CEO Sam Altman announced that the Department of War agreed to prohibit domestic mass surveillance and to ensure human responsibility for the use of force. OpenAI will also implement technical safeguards. This deal follows President Trump's order to ban Anthropic from federal contracts, with Secretary of War Pete Hegseth designating the company a 'supply-chain risk.' Altman hopes these terms will be offered to all AI companies.

Pentagon labels Anthropic a supply chain risk

US Secretary of Defense Pete Hegseth has designated Anthropic, an AI company, as a 'supply chain risk,' prohibiting military contractors from conducting business with the company. This action follows failed negotiations over the use of Anthropic's AI models, specifically Claude, with the company refusing to allow use for mass domestic surveillance or fully autonomous weapons. Anthropic called the designation 'legally unsound' and unprecedented for an American company, stating it would challenge the move in court. Meanwhile, OpenAI announced an agreement with the Department of Defense to deploy its AI models with similar safety principles.

OpenAI secures Pentagon AI deal after Anthropic ban

OpenAI has signed a deal with the Pentagon to use its AI tools in classified systems, with safety guardrails similar to those requested by rival Anthropic. CEO Sam Altman stated that the Department of War agreed to principles prohibiting domestic mass surveillance and ensuring human responsibility for the use of force. OpenAI will also implement technical safeguards and deploy engineers to the Pentagon. This agreement came hours after President Trump ordered federal agencies to stop using Anthropic's technology, and the Pentagon declared Anthropic a 'supply-chain risk.'

OpenAI gets Pentagon AI access after Anthropic dispute

OpenAI will deploy its AI models on the Defense Department's classified network following an agreement that includes principles against domestic mass surveillance and for human responsibility in the use of force. CEO Sam Altman announced the deal, noting that the Department of War agreed to these principles and requested technical safeguards. This comes after the Pentagon declared Anthropic a supply-chain risk, leading President Trump to order federal agencies to stop using Anthropic's technology. Anthropic has stated its products should not be used for surveillance or autonomous weapons.

US government partners with OpenAI after banning Anthropic AI

OpenAI has reached an agreement with the Department of War to deploy its AI models on classified networks, including protections against domestic mass surveillance and for human responsibility in the use of force. CEO Sam Altman stated that the Department of War agreed to these principles and that OpenAI will build technical safeguards. This deal follows President Trump's order for federal agencies to stop using Anthropic's technology, with Secretary of War Emil Michael welcoming OpenAI as a reliable partner. The agreement raises questions about the Pentagon's confrontation with Anthropic, as OpenAI secured similar safeguards without public conflict.

AI debate: Brain is not a computer, says expert

David W. Bates, a cybernetics specialist and professor of rhetoric, argues that the term 'artificial intelligence' is misleading and that the brain should not be compared to a computer. He believes that the current conceptualization of human intelligence in relation to computers leads to a loss of human agency, as people begin to see themselves as inferior to prediction machines. Bates emphasizes the need to move beyond a simplistic opposition between human and AI, suggesting that the crisis lies in humans becoming more automated in their thinking due to digital infrastructures.

Pentagon designates Anthropic a security risk over AI dispute

The Pentagon has designated Anthropic, an AI company, as a 'supply chain risk to national security,' a move that could prevent contractors from doing business with them. This decision stems from a dispute over the use of Anthropic's AI model, Claude, with the company refusing to allow its use for mass surveillance of Americans or for autonomous weapons. Secretary of Defense Pete Hegseth stated that Anthropic's stance is incompatible with American principles. Anthropic plans to challenge the designation in court, calling it legally unsound and unprecedented for an American company.

Trump orders federal agencies to stop using Anthropic AI

President Trump has ordered all federal agencies to cease using artificial intelligence technology developed by Anthropic. This directive follows weeks of tension between the Pentagon and the AI startup over ethical guardrails for the technology. Defense Secretary Pete Hegseth subsequently declared Anthropic a 'supply chain risk to national security,' a designation typically reserved for foreign adversaries. Anthropic is expected to challenge this decision in court, as it could significantly impact the company's government contracts and business operations.

Ohio wins award for AI use in job and family services

The Ohio Department of Job and Family Services has received a national award for its use of artificial intelligence to improve state unemployment services. The state implemented four AI tools, including a multilingual virtual assistant for filing claims, an intelligent document processing system, an AI bot to speed up call center resolutions, and a tool to simplify policy manuals. These advancements aim to provide faster, more reliable, and accessible services for Ohio residents. The award, the Merrill Baumgardner Award, recognizes innovative uses of technology in government services.

Anduril founder critiques Anthropic's AI use restrictions

Palmer Luckey, founder of Anduril, argues that Anthropic's restrictions on AI use for national security are untenable and potentially dangerous. He believes that allowing a private corporation to dictate the limits of AI applications, even with seemingly reasonable terms, creates significant risks. Luckey highlights practical and geopolitical complications, questioning who determines definitions like 'civilian' or 'target' and how corporate interests might influence critical decisions. He asserts that these issues apply to any ethically fraught capability, not just AI, and that relying on corporate judgment over government or legal frameworks undermines democratic principles.

Salesforce becomes AI agent hub for businesses

Businesses are increasingly using Salesforce as a central hub for managing multiple AI agents, transforming the platform from dormant software into a critical operational tool. With the integration of AI tools like Momentum, which automatically logs sales calls, companies are seeing AI agents handle tasks such as booking meetings, sending emails, and qualifying leads. Without a central system like Salesforce, managing numerous autonomous agents can lead to chaos, conflicting data, and duplicate efforts. Salesforce's ability to integrate data from various AI agents provides context and enables smoother operations.

Ethical AI scaling responsibly with intelligent agents

As artificial intelligence systems become more autonomous, ethical considerations are crucial for their responsible scaling. Ethical AI principles guide how systems behave, ensuring fairness, transparency, and accountability, especially with agentic AI that pursues goals independently. Risks often originate during the AI system's development, influenced by data sources and design choices. Conversational AI also presents ethical challenges, as the way information is communicated can impact user trust and understanding. Building trust requires clear communication and ensuring systems help people make informed choices rather than replacing human judgment.

AI firm Anthropic declared national security risk

The Pentagon has declared Anthropic, a leading AI company, a 'supply chain risk to national security.' This designation follows a breakdown in negotiations over the use of Anthropic's AI model, Claude, by the military. Anthropic has refused to allow the technology's use for autonomous weapons or mass surveillance of Americans, citing ethical concerns. Secretary of Defense Pete Hegseth stated Anthropic's position is incompatible with American principles. Anthropic plans to legally challenge the designation, which could prevent contractors from doing business with the company.

Trump to end government use of Anthropic AI

President Donald Trump has ordered an end to the government's use of AI models from Anthropic, following weeks of tension between the Pentagon and the AI startup over technology guardrails. The Pentagon has been exploring AI for military decision-making and was working with Anthropic on a pilot program. Anthropic has pushed for stronger guardrails to prevent malicious use, while the Pentagon worried these could hinder effectiveness in combat. Trump's order adds to the debate on AI use in government and national security.

AI debate: The brain is not a computer

David W. Bates, a cybernetics specialist, argues that comparing the brain to a computer is misleading and contributes to a philosophical crisis where humans see themselves as inferior to machines. He believes the term 'artificial intelligence' reinforces this comparison, leading to a loss of human agency as people become more automated in their thinking. Bates suggests moving beyond a simple opposition between human and AI, emphasizing that human intelligence is not simply a simulation of artificial intelligence.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

