The US military reportedly used Anthropic's Claude AI for intelligence, target identification, and battle simulations during strikes on Iran. The deployment came just hours after President Trump ordered federal agencies to stop using the AI system. Trump criticized Anthropic's stance as a "disastrous mistake" that could risk American lives. Defense Secretary Pete Hegseth labeled Anthropic a "supply chain risk," though the Pentagon was granted a six-month transition period. Meanwhile, rival OpenAI's ChatGPT will continue to be used by the Pentagon.
Anthropic CEO Dario Amodei has strongly defended the company's refusal to allow Claude to be used for mass surveillance or fully autonomous weapons, stating these are core principles it will not compromise on. Amodei called the Trump administration's actions "unprecedented," "retaliatory," and "punitive," asserting that disagreeing with the government is patriotic. In other AI news, Nvidia is partnering with telecom companies to ensure future 6G networks can effectively support artificial intelligence, aiming to create new markets for its AI hardware and software.
Indian investment platform Groww has launched new AI-powered tools to enhance its trading and wealth management services, offering insights and recommendations to users. The broader application of AI, however, comes with challenges. AI expert Geoffrey Hinton suggests that when AI generates false information, it should be termed "confabulation" or "lies" rather than "hallucinations," since the system reconstructs information from learned patterns rather than retrieving stored facts. The ease of spreading AI-driven misinformation was underscored when an AI-powered news aggregator amplified a fabricated hot dog contest story.
New services allow people to receive mental health advice via phone calls that connect them to AI, offering accessible support, though experts warn of risks such as unsuitable advice. In the legal field, Vermont attorneys face few restrictions on AI use, but a recent case involving fabricated AI-generated quotes showed the need for guardrails. On the organizational side, AI also holds the potential to dismantle rigid corporate structures by enabling decentralized decision-making and fostering cross-functional collaboration.
Key Takeaways
- The US military used Anthropic's Claude AI for intelligence and target selection in Iran strikes, hours after President Trump ordered a ban due to safety concerns.
- Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk" after it refused unrestricted access to Claude AI, a decision the company plans to challenge in court.
- Anthropic CEO Dario Amodei defends the company's refusal to allow Claude for mass surveillance or autonomous weapons, citing core safety principles.
- OpenAI's ChatGPT will continue to be used by the Pentagon despite the ban on Anthropic's Claude.
- Nvidia is partnering with telecom companies to develop 6G networks capable of supporting advanced AI applications like robots and self-driving cars.
- Indian investment platform Groww launched new AI-powered tools to provide insights and recommendations for trading and wealth management.
- AI expert Geoffrey Hinton suggests calling AI-generated false information "confabulation" or "lies" instead of "hallucinations," as it's a reconstructive process.
- AI-powered phone services now offer free or low-cost mental health advice, though experts caution about potential risks of unsuitable guidance.
- A fake news story about a hot dog eating champion was easily amplified by an AI-powered news aggregator, highlighting AI's vulnerability to misinformation.
- AI has the potential to dismantle traditional corporate hierarchies by enabling decentralized decision-making and fostering cross-functional collaboration.
US military used Claude AI in Iran strikes despite Trump ban
The US military reportedly used Anthropic's Claude AI to help with intelligence and target selection during strikes on Iran. This happened just hours after President Trump ordered federal agencies to stop using the AI over safety concerns. Defense Secretary Pete Hegseth labeled Anthropic a "supply chain risk" but allowed a six-month transition period. Rival OpenAI's ChatGPT will continue to be used by the Pentagon.
Trump bans military use of Claude AI over safety dispute
President Trump ordered US agencies to stop using Anthropic's Claude AI, citing national security risks and the company's refusal to grant unrestricted access. Defense Secretary Pete Hegseth labeled Anthropic a "supply chain risk," a designation usually reserved for foreign adversaries. Anthropic plans to fight the order in court, stating the government's demands would override its safety safeguards. The ban allows the Pentagon a six-month phase-out period.
US military used Claude AI in Iran strikes after Trump ban
The US government reportedly used Anthropic's Claude AI for intelligence, target identification, and battle simulations during strikes on Iran, just hours after President Trump directed federal agencies to cease using the system. Trump criticized Anthropic, calling its stance a "disastrous mistake" that risks American lives. Anthropic plans to legally challenge the "supply chain risk" designation, asserting it will not compromise on safeguards against mass surveillance or autonomous weapons.
Nvidia partners to ensure 6G networks support AI
Nvidia is collaborating with telecom companies to ensure future 6G networks can effectively support artificial intelligence. Current 5G networks were not designed for the complex demands of AI-driven devices and services. Nvidia's Ronnie Vasishta stated that 6G networks must deliver intelligence for both people and machines, which will require far greater network efficiency. The initiative aims to create a new market for Nvidia's AI hardware and software in telecommunications, paving the way for applications such as robots and self-driving cars.
Anthropic CEO Dario Amodei defends AI safety red lines
Anthropic CEO Dario Amodei explained his company's refusal to allow its AI, Claude, to be used for mass surveillance or fully autonomous weapons. He stated these are core principles the company will not compromise on, even in the face of a government ban. Amodei believes Anthropic is a good judge of its AI's capabilities and limitations. He called the Trump administration's actions "unprecedented," "retaliatory," and "punitive," asserting that disagreeing with the government is patriotic.
Groww launches AI tools for trading and wealth management
Indian investment platform Groww has introduced new AI-powered products to enhance its trading and wealth management services. These AI features will offer investors insights and recommendations through a consent-based approach, ensuring users retain decision-making power. Groww aims to help users navigate financial markets more effectively with personalized guidance and trend analysis. This move is part of Groww's strategy to become a comprehensive financial provider by leveraging advanced technology.
Geoffrey Hinton: AI 'lies' due to confabulation, not hallucination
AI expert Geoffrey Hinton suggests that false information generated by AI systems should be called "confabulation" or "lies," not "hallucinations." He explains that AI, much like human memory, reconstructs information from learned patterns rather than retrieving stored facts, which is why it can confidently present incorrect details. Hinton argues this is a fundamental aspect of how AI learns: the challenge is not fixing a bug but understanding a generative property of these systems.
AI offers free mental health advice via phone calls
A new kind of service lets people receive mental health advice by calling a phone number that connects them to AI. Large language models like ChatGPT, or specialized AI systems, can provide guidance on mental health topics 24/7, often free or at low cost. While this makes support more accessible, experts warn of risks, including unsuitable advice and the potential to reinforce delusional thinking. General-purpose AI models differ significantly from human therapists, though specialized versions are in development.
Vermont lawyers face few rules on AI use
Vermont attorneys face few restrictions on using AI in their legal work, provided they follow professional conduct rules. A recent case highlighted the risks when a lawyer submitted a brief containing fabricated quotes generated by AI. While the state judiciary offers guidance, concrete "guardrails" are limited. Some lawyers find AI unreliable and struggle to use it responsibly, while others see it as a potential efficiency tool. Irresponsible AI use that violates conduct rules could lead to complaints and disciplinary action.
AI easily believed fake hot dog contest story
A writer easily created a fake news story about a fictional hot dog eating champion named "Joey Chestnut Jr." The article was then amplified by an AI-powered news aggregator, which apparently published it without fact-checking. Although the story was eventually removed, its initial spread demonstrates how quickly AI-assisted misinformation can circulate online.
AI can dismantle rigid corporate structures
Artificial intelligence has the potential to break down traditional, hierarchical corporate structures, sometimes called the "corporate phalanx." AI can enable decentralized decision-making by giving employees at all levels access to data and analysis. It also facilitates cross-functional collaboration by connecting departments and automating routine tasks, freeing employees to focus on strategic work and fostering innovation and agility. While the shift requires cultural change and investment, AI promises more adaptive and human-centric organizations.
Sources
- US military reportedly used Claude in Iran strikes despite Trump’s ban
- Trump orders military to stop using Claude chatbot in clash over AI safety
- US Used Anthropic's Claude AI In Iran Strikes Hours After Trump's Ban: Report
- Nvidia Forms Alliance to Make Sure 6G Networks Embrace AI
- AI executive Dario Amodei on the red lines Anthropic would not cross
- Groww rolls out AI products to expand trading, wealth tools
- Geoffrey Hinton Explains That AIs Hallucinate Because They Recreate Information Like Humans
- Getting Free Mental Health Advice By Calling A Phone Number That Connects You To AI-Generated Psychological Guidance
- Vermont has few guardrails to restrict how lawyers use AI
- It Was Frighteningly Easy to Make AI Believe a Hot Dog Lie
- Opinion | AI Frees the Corporate Phalanx