Amazon blocks Perplexity shopping bots, Pentagon drops Anthropic, and Nvidia expands its AI platform

Amazon recently secured a temporary court order against Perplexity AI's shopping bots. A federal judge ruled that Perplexity's Comet browser accessed Amazon user accounts without permission. Amazon, which initiated the lawsuit in November, stated this action is crucial for protecting its customers' shopping experience. The order mandates Perplexity to cease accessing protected Amazon accounts and destroy any collected data, with a one-week delay to allow for an appeal. Perplexity, however, plans to challenge the ruling, asserting users' right to choose their AI tools.

In a significant development for national security, the Pentagon has ordered U.S. military commanders to remove Anthropic's AI technology from critical national security systems within 180 days. An internal memo from Defense Department CIO Kirsten Davies highlighted potential risks from adversaries exploiting vulnerabilities. The directive also requires any company working with the Pentagon to stop using Anthropic products on Defense Department contracts, with exemptions being rare. This marks the first time a U.S. company has been designated a supply chain risk. Furthermore, a lawsuit filed by Anthropic against the Trump administration reveals ongoing tensions: Anthropic refuses to allow its AI, Claude, to be used for mass surveillance or fully autonomous weapons, while the Pentagon seeks broader control.

Nvidia has made a notable investment in Mira Murati's new AI startup, Thinking Machines Lab. As part of this collaboration, Thinking Machines Lab will deploy at least one gigawatt of Nvidia's Vera Rubin systems. Nvidia CEO Jensen Huang expressed enthusiasm for partnering with the team to advance AI. Additionally, NVIDIA AI introduced Nemotron-Terminal, a data engineering pipeline designed to scale Large Language Model terminal agents. This framework addresses the challenge of data scarcity for training AI agents that can execute commands in terminal environments, utilizing a 'coarse-to-fine' strategy and pre-built Docker images.

Beyond these major developments, recent analyses indicate that AI is not yet causing widespread job disruption, though Anthropic's research identifies computer programmers and customer service representatives as roles with high exposure to automation. Cognitive scientist Joscha Bach suggests that Large Language Models may simulate a self when asked to describe mental states, indicating a deeper level of processing. New applications are also emerging, such as Gate's 'Gate for AI' platform, which integrates AI models like Claude and ChatGPT for complex crypto trading strategies, and Spelman College students' PlantGPT, an AI tool providing personalized plant care instructions. The new U.S. National Cyber Strategy also emphasizes the security of blockchain and AI, focusing on securing AI data centers and responsible AI deployment.

Key Takeaways

  • Amazon secured a temporary court order against Perplexity AI's Comet shopping bots for unauthorized access to user accounts.
  • The Pentagon ordered the removal of Anthropic's AI technology from critical U.S. military systems within 180 days, citing supply chain risks.
  • Anthropic is suing the Trump administration over the use of its AI, Claude, in warfare and surveillance, refusing to allow it to be used for mass surveillance or fully autonomous weapons.
  • Nvidia invested in Mira Murati's Thinking Machines Lab, which will deploy at least one gigawatt of Nvidia's Vera Rubin systems.
  • NVIDIA AI launched Nemotron-Terminal, a data engineering pipeline for scaling Large Language Model terminal agents.
  • AI job disruption remains limited, but Anthropic's research indicates computer programmers and customer service roles have high exposure to automation.
  • Gate introduced 'Gate for AI,' a platform integrating AI agents like Claude and ChatGPT for complex crypto trading strategies.
  • Spelman College students developed PlantGPT, an AI tool using soil sensors to provide personalized plant care instructions.
  • The U.S. National Cyber Strategy prioritizes the security of blockchain and AI, focusing on data centers and responsible deployment.
  • Cognitive scientist Joscha Bach suggests LLMs simulate a self to describe mental states, implying deeper processing beyond simple pattern matching.

Amazon wins court order against Perplexity AI shopping bots

Amazon has won a temporary court order to stop Perplexity's AI shopping bots from accessing its website. A federal judge ruled that Perplexity's Comet browser accessed Amazon's user accounts without permission. Amazon stated this is a key step to protect its customers' shopping experience. Perplexity plans to fight the ruling, asserting users' right to choose their AI tools. The order requires Perplexity to stop accessing protected Amazon accounts and destroy any collected data, with a one-week delay for appeal.

Amazon blocks Perplexity AI shopping agent with court order

A federal judge has temporarily blocked Perplexity's AI shopping agent, Comet, from accessing Amazon's website. Amazon sued Perplexity in November, claiming its AI browser scraped its site without authorization. The judge found strong evidence that Comet accessed user accounts without Amazon's permission. Amazon stated the injunction helps maintain a trusted shopping experience, while Perplexity vows to defend users' AI choices. The order takes effect in a week, allowing Perplexity time to appeal.

Judge orders Perplexity AI agents to stop shopping on Amazon

A federal judge has ordered Perplexity's AI agents to stop making purchases on Amazon on behalf of users. Amazon sued the AI startup in November, and the judge found strong evidence that Perplexity's Comet AI browser accessed the site without authorization. The preliminary injunction requires Perplexity to cease accessing Amazon and destroy any obtained data. Perplexity stated it will continue to defend users' rights to choose their AI tools. The order is set to take effect in seven days, allowing Perplexity to appeal.

Pentagon orders military to remove Anthropic AI from key systems

The Pentagon has ordered U.S. military commanders to remove Anthropic's AI technology from critical national security systems within 180 days. An internal memo signed by Defense Department CIO Kirsten Davies cited potential risks from adversaries exploiting vulnerabilities. The order also requires any company working with the Pentagon to stop using Anthropic products on Defense Department contracts. Exemptions will be rare and require a strong risk mitigation plan. This action marks the first time a U.S. company has been designated a supply chain risk.

Anthropic lawsuit highlights AI's future in warfare

A lawsuit filed by Anthropic against the Trump administration reveals tensions over the use of its AI, Claude, in warfare and surveillance. Anthropic refuses to allow Claude to be used for mass surveillance or fully autonomous weapons, while the Pentagon seeks broad control. The government designated Anthropic a supply chain risk, banning it from government contracts, a move experts find unprecedented for a U.S. company. This conflict raises questions about the government's relationship with the AI industry and the ethical boundaries of AI in military applications.

AI job disruption still limited, new research suggests

Recent analyses indicate that AI is not yet causing widespread job disruption, and traditional metrics may not fully capture its impact. Studies show AI-related job cuts remain low, with other economic factors also affecting the tech sector. However, new research from Anthropic uses an 'observed exposure' method to identify roles most vulnerable to AI. Computer programmers and customer service representatives are among those with the highest exposure, as AI can automate a significant portion of their tasks.
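As a toy illustration of what an exposure-style metric could look like (this is a hypothetical sketch, not Anthropic's actual methodology), one can score an occupation by the share of its observed task activity that falls into categories AI is seen handling:

```python
def exposure_score(task_counts: dict[str, int], automatable: set[str]) -> float:
    """Fraction of observed task instances in automatable categories.

    task_counts: observed task -> how often it appears for this occupation.
    automatable: the subset of tasks AI is observed handling (toy proxy).
    """
    total = sum(task_counts.values())
    if total == 0:
        return 0.0
    hit = sum(n for task, n in task_counts.items() if task in automatable)
    return hit / total

# Hypothetical example: a programmer whose observed work is mostly code edits.
programmer = {"write code": 60, "debug": 25, "design review": 15}
score = exposure_score(programmer, {"write code", "debug"})  # 0.85
```

Under this toy scoring, an occupation whose observed tasks are dominated by AI-automatable categories (like the hypothetical programmer above) scores high, which mirrors the article's claim about programmers and customer service representatives.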

AI simulating self to describe mental states, says Joscha Bach

Cognitive scientist Joscha Bach suggests that Large Language Models (LLMs) may create a simulation of a self when asked to describe mental states. He argues that LLMs don't just string words together but reproduce deeper structures. When an LLM writes about its own mental states, it must construct something that functions like a self to maintain coherence. This process is similar to how LLMs simulate spatial reasoning when writing about spatial tasks, indicating a deeper level of processing beyond simple pattern matching.

Gate launches AI trading infrastructure for agents

Gate has launched 'Gate for AI,' a new platform that integrates AI agents directly with its exchange infrastructure. This allows AI models like ChatGPT and Claude to go beyond simple queries and execute complex trading strategies. The platform combines five core capabilities: centralized trading, on-chain trading, wallet management, news feeds, and on-chain information tools. Gate for AI aims to enable AI systems to operate within real market conditions, moving towards AI-native trading systems in the crypto space.

Spelman students create AI tool to help people talk to plants

Students at Spelman College are developing PlantGPT, an AI tool designed to help people care for their plants. The system uses soil sensors to collect data on a plant's health, such as humidity, light, and moisture levels. The AI then processes this information to provide personalized care instructions. The students aim to make plant care more accessible and eventually expand the tool for use in local farms and larger plant environments. They hope PlantGPT will give plants a 'voice' and help even those without a green thumb succeed.
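The sensor-to-advice loop the students describe could look roughly like this minimal sketch; all sensor fields, thresholds, and advice strings here are hypothetical illustrations, not PlantGPT's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class SoilReading:
    """One snapshot from a soil sensor (hypothetical units)."""
    moisture_pct: float   # volumetric soil moisture, 0-100
    light_lux: float      # ambient light level
    humidity_pct: float   # air humidity, 0-100

def care_instructions(reading: SoilReading) -> list[str]:
    """Map raw sensor values to plain-language care tips.

    Thresholds are illustrative placeholders; a real system would tune
    them per species and could pass the reading to an LLM to phrase the
    advice conversationally, giving the plant a 'voice'.
    """
    tips = []
    if reading.moisture_pct < 20:
        tips.append("Soil is dry: water the plant.")
    elif reading.moisture_pct > 80:
        tips.append("Soil is waterlogged: hold off on watering.")
    if reading.light_lux < 500:
        tips.append("Light is low: move closer to a window.")
    if reading.humidity_pct < 30:
        tips.append("Air is dry: consider misting the leaves.")
    return tips or ["All readings look healthy."]
```

The design choice worth noting is the split between sensing and advice: the rule (or model) layer only ever sees a structured reading, so the same pipeline could scale from a houseplant to the local-farm deployments the students envision.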

US cyber strategy prioritizes blockchain and AI security

The new U.S. National Cyber Strategy emphasizes the security of blockchain and AI as key priorities for protecting digital infrastructure. The strategy outlines measures to secure AI data centers and deploy AI responsibly. It also highlights the importance of blockchain technology for financial systems and critical infrastructure. This framework aims to strengthen the nation's cybersecurity defenses against evolving threats in emerging technologies.

Nvidia invests in Mira Murati's Thinking Machines Lab

Nvidia has made a significant investment in Mira Murati's new AI startup, Thinking Machines Lab. As part of the partnership, Thinking Machines Lab will deploy at least one gigawatt of Nvidia's Vera Rubin systems. Nvidia CEO Jensen Huang expressed excitement about partnering with the world-class team at Thinking Machines to advance AI. Murati, formerly OpenAI's CTO, co-founded Thinking Machines Lab to push the boundaries of artificial intelligence.

NVIDIA releases Nemotron-Terminal for scaling AI agents

NVIDIA AI has introduced Nemotron-Terminal, a data engineering pipeline designed to scale Large Language Model (LLM) terminal agents. The framework addresses the challenge of data scarcity for training AI agents that can execute commands in terminal environments. Nemotron-Terminal uses a 'coarse-to-fine' strategy, adapting existing datasets and generating new tasks, and utilizes pre-built Docker images to reduce infrastructure overhead, enabling more efficient training of capable terminal agents.
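The 'coarse-to-fine' idea can be illustrated with a toy sketch: start from a small set of coarse task templates and progressively specialize each one into many concrete terminal tasks. Everything below (the templates, fillers, and function names) is a hypothetical illustration, not NVIDIA's actual pipeline:

```python
import itertools

# Coarse stage: broad templates describing categories of terminal work.
COARSE_TEMPLATES = [
    "search for {pattern} in {path}",
    "count lines in {path}",
]

# Fine stage: fillers that specialize a template into concrete tasks.
FILLERS = {
    "pattern": ["TODO", "ERROR"],
    "path": ["/var/log/app.log", "src/"],
}

def refine(template: str) -> list[str]:
    """Expand one coarse template into every concrete task string."""
    slots = [s for s in FILLERS if "{" + s + "}" in template]
    return [
        template.format(**dict(zip(slots, combo)))
        for combo in itertools.product(*(FILLERS[s] for s in slots))
    ]

def generate_tasks() -> list[str]:
    """Coarse-to-fine: enumerate templates, then specialize each one."""
    return [task for tpl in COARSE_TEMPLATES for task in refine(tpl)]
```

In a real pipeline each generated task would then be executed and verified inside a pre-built container image, which is where the article's point about Docker reducing infrastructure overhead comes in.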

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

