AI companies face increasing legal scrutiny over data use, with music publisher BMG Rights Management suing Anthropic for allegedly using copyrighted song lyrics from artists like the Rolling Stones, Bruno Mars, and Ariana Grande to train its Claude chatbot. BMG cited 493 specific instances of alleged copyright violation. This follows a trend of similar lawsuits, including one from book publisher Chicken Soup for the Soul against several tech firms for using its content without permission to train AI systems.
In a move to regulate AI, Senator Elissa Slotkin introduced the AI Guardrails Act, aiming to limit the Department of Defense's use of artificial intelligence. The bill emphasizes human involvement in decisions concerning autonomous weapons and nuclear launches, and seeks to prevent military AI from spying on American citizens. This legislative effort comes after the Pentagon ended its relationship with AI firm Anthropic, highlighting growing concerns about AI's deployment in critical sectors.
Security and reliability remain key challenges for AI. AWS Distinguished Engineer Paul Vixie is now focusing on AI cyber threats, exploring how AI can be both a weapon and a defense mechanism. NVIDIA has also released OpenShell, an open-source runtime designed to secure autonomous AI agents through sandboxing and access control. However, an incident where Anthropic's Claude Code destroyed a coder's database due to a setup error underscores the risks of over-reliance on AI tools without robust safeguards.
The commercial application of AI continues to evolve. Walmart and OpenAI are adjusting their AI shopping partnership after initial "Instant Checkout" features in ChatGPT showed low conversion rates. Their new approach will have Walmart's chatbot, Sparky, operate within ChatGPT, allowing users to add items to a synced cart before checkout. Meanwhile, Nvidia introduced the Groq 3 LPX inference accelerator, designed to speed up enterprise AI workloads, and YY Group invested in Arros AI for recruitment technology, integrating it into its YY Circle platform.
Key Takeaways
- BMG is suing Anthropic for 493 alleged copyright infringements involving song lyrics from artists like the Rolling Stones and Bruno Mars used to train its Claude chatbot.
- Book publisher Chicken Soup for the Soul is also suing tech companies for unauthorized use of its content to train AI systems.
- Senator Elissa Slotkin introduced the AI Guardrails Act to limit the Pentagon's AI use, focusing on human control for autonomous weapons and preventing spying on citizens.
- The Pentagon recently ended its relationship with AI firm Anthropic, shortly before the AI Guardrails Act was proposed.
- Nvidia launched the Groq 3 LPX inference accelerator, designed to enhance speed and performance for enterprise AI workloads.
- NVIDIA open-sourced OpenShell, a secure runtime environment for AI agents, offering sandboxing and access control.
- AWS Distinguished Engineer Paul Vixie is addressing AI cyber threats, focusing on securing AI systems and identifying vulnerabilities.
- An AI agent, Anthropic's Claude Code, accidentally destroyed a coder's database, highlighting risks of AI-assisted development errors.
- Walmart and OpenAI are modifying their AI shopping partnership, shifting from direct in-chat purchases via "Instant Checkout" to a synced cart system with Walmart's Sparky chatbot in ChatGPT.
- YY Group Holding Limited invested in Arros AI, an NVIDIA Inception Program member, to integrate AI-powered recruitment technology into its YY Circle platform.
BMG sues Anthropic over AI training using song lyrics
Music publisher BMG Rights Management is suing AI startup Anthropic for allegedly using copyrighted song lyrics without permission to train its AI systems. The complaint, which cites 493 specific instances of alleged infringement, claims Anthropic copied lyrics from artists like the Rolling Stones, Bruno Mars, and Ariana Grande, infringing on hundreds of copyrights. The case is part of a larger trend of lawsuits against AI companies over the use of copyrighted material; Anthropic has faced similar suits before and previously settled a case with authors for $1.5 billion.
BMG sues Anthropic for using Bruno Mars, Rolling Stones lyrics in AI training
BMG Rights Management has filed a lawsuit against AI company Anthropic, accusing it of using copyrighted song lyrics to train its Claude chatbot. The suit claims Anthropic copied lyrics from popular artists like the Rolling Stones, Bruno Mars, and Ariana Grande. BMG alleges this infringes on hundreds of copyrights. This action adds to ongoing legal challenges against AI firms over the use of protected content in training their models.
Chicken Soup for the Soul publisher sues tech firms over AI training data
Book publisher Chicken Soup for the Soul has sued several major tech companies in California federal court for allegedly using its content without permission to train artificial intelligence systems. The publisher claims these companies illegally downloaded copies of its books to build their AI technologies, and argues that its unique narratives make the books especially valuable for training AI to replicate human voice and emotion. The publisher seeks to hold accountable companies that build valuable technologies on unauthorized creative works.
Slotkin proposes bill to limit Pentagon AI use
Senator Elissa Slotkin has introduced the AI Guardrails Act to set limits on the Department of Defense's use of artificial intelligence. The bill aims to ensure human involvement in decisions regarding autonomous weapons and nuclear weapons launches. It also seeks to prevent the military from using AI for spying on American citizens. Slotkin believes these measures are necessary for national security and to maintain an edge in AI development against China.
Slotkin bill creates guardrails for Pentagon AI use
Senator Elissa Slotkin has introduced a bill to establish limits on the Pentagon's use of artificial intelligence, focusing on autonomous and nuclear weapons. This move follows the Pentagon's recent decision to end its relationship with AI firm Anthropic. The proposed legislation aims to ensure human control over critical decisions, such as the use of lethal autonomous weapons and the launch of nuclear weapons. Slotkin stated that Congress needs to set clear boundaries for AI in the Department of Defense.
Nvidia unveils Groq 3 LPX for faster AI inference
Nvidia has introduced the Groq 3 LPX inference accelerator, designed to work with Vera Rubin GPUs to speed up AI operations. The new architecture targets enterprise AI workloads that demand continuous, low-latency performance, marking a shift away from training-focused systems. Nvidia claims the Groq 3 LPX can deliver significantly higher inference throughput, opening new revenue opportunities for operators. The launch aims to address the infrastructure challenges businesses face as AI moves into production environments.
YY Group invests in Arros AI for recruitment tech
YY Group Holding Limited has made a strategic investment in Arros AI, a company specializing in AI-powered recruitment technology and a member of the NVIDIA Inception Program. This partnership will integrate Arros AI's capabilities into YY Group's YY Circle platform to improve talent acquisition efficiency. The collaboration aims to enhance candidate discovery, screening, and ranking processes. YY Group is also piloting robotics in Las Vegas and expanding its YY Circle operations in Hong Kong and Thailand.
AWS engineer Paul Vixie tackles AI security threats
Distinguished Engineer Paul Vixie, known for combating spam and scaling the Domain Name System, is now helping AWS prepare for AI cyber threats. Vixie is focusing on how AI can be used for harmful purposes and how to leverage AI itself to enhance security. He emphasizes the need to secure AI systems and identify vulnerabilities. Vixie's career is marked by tackling complex problems, from creating the first anti-spam company to advancing internet infrastructure.
Hardware data recovery vital for AI era
As artificial intelligence systems become more data-dependent, hardware-level data recovery is crucial for ensuring business continuity. AI models require vast datasets, and their loss can severely impact competitiveness and operations. Despite redundancy measures, hardware failures can still occur due to high usage, architectural issues, or backup problems. Advanced recovery techniques like raw sector scanning and file structure reconstruction are essential for retrieving data from damaged storage devices in the AI era.
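The raw sector scanning mentioned above can be illustrated with a minimal sketch: scan a disk image sector by sector for a known file signature (here the PNG magic bytes), the first step of what recovery tools call file carving. This is a simplified illustration under assumed 512-byte sectors, not how any particular recovery product works.

```python
# Minimal file-carving sketch: locate sectors in a raw disk image that
# begin with a known file signature. Real recovery tools also handle
# fragmented files, damaged sectors, and filesystem metadata.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"  # standard PNG file signature
SECTOR = 512                      # assumed sector size

def find_signature_offsets(image_bytes, magic=PNG_MAGIC, sector_size=SECTOR):
    """Return byte offsets of sectors whose first bytes match the signature."""
    hits = []
    for off in range(0, len(image_bytes), sector_size):
        if image_bytes[off:off + len(magic)] == magic:
            hits.append(off)
    return hits

# Synthetic image: one empty sector, then a sector starting with PNG data.
image = b"\x00" * 512 + PNG_MAGIC + b"\x00" * 504
print(find_signature_offsets(image))
```

Scanning only sector boundaries keeps the pass fast; a thorough carve would also search unaligned offsets to catch files embedded inside other structures.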
NVIDIA open-sources OpenShell for secure AI agent runtime
NVIDIA has released OpenShell, an open-source runtime environment designed to enhance the security of autonomous AI agents. This tool provides sandboxing, access control, and inference management to prevent unintended command execution or data access. OpenShell allows for granular control over which tools agents can use and where they can connect, with all actions logged for transparency. The agent-agnostic design enables integration with various AI frameworks, offering a consistent security layer for developers.
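The pattern OpenShell describes, an allow-list over agent tool calls with every attempt logged, can be sketched in a few lines. The class and method names below are invented for illustration and are not OpenShell's actual API.

```python
# Hypothetical illustration of allow-list sandboxing for an AI agent's
# tool calls; names here are invented, not OpenShell's real interface.
class ToolSandbox:
    """Mediates tool calls through an explicit allow-list and logs
    every attempt, permitted or not, for auditability."""

    def __init__(self, allowed_tools):
        self.allowed = set(allowed_tools)
        self.audit_log = []

    def call(self, tool_name, func, *args, **kwargs):
        permitted = tool_name in self.allowed
        self.audit_log.append((tool_name, "allowed" if permitted else "denied"))
        if not permitted:
            raise PermissionError(f"tool {tool_name!r} is not permitted")
        return func(*args, **kwargs)

sandbox = ToolSandbox(allowed_tools={"read_file"})
print(sandbox.call("read_file", lambda path: f"contents of {path}", "notes.txt"))
try:
    sandbox.call("delete_file", lambda path: None, "notes.txt")
except PermissionError as err:
    print(err)
```

Because the agent never touches a tool except through the mediator, denied calls fail closed and the audit log gives the transparency the runtime's design emphasizes.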
AI content floods children's media, raising concerns
A growing amount of AI-generated content is appearing in children's media, often characterized by low quality and repetitive animation. This trend is driven by the low cost and rapid production capabilities of AI. Experts express concern that this content lacks human creativity and emotional depth, potentially hindering children's development of imagination and empathy. There are also worries about AI perpetuating biases and stereotypes. Parents are advised to be cautious and prioritize human-created content that fosters critical thinking.
Walmart and OpenAI change AI shopping deal
Walmart and OpenAI are altering their AI shopping partnership after initial results showed lower conversion rates for direct in-chat purchases. The previous 'Instant Checkout' feature allowed users to buy items within ChatGPT, but many preferred traditional online shopping. Starting next week, Walmart's chatbot, Sparky, will operate within ChatGPT, allowing users to add items to a synced cart before checking out. This new approach aims to address user concerns about buying items individually and improve the overall shopping experience.
AI agent error destroys coder's database
Software engineer Alexey Grigorev had his entire database destroyed by an AI agent while updating a website with Anthropic's Claude Code. A small setup error on his laptop confused the AI, causing it to delete the live production system instead of duplicate data. Although Grigorev recovered the data, the incident highlights the risks of AI-assisted development: experts warn that over-reliance on AI tools without proper safeguards can lead to significant errors, system outages, and data loss.
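One simple safeguard against this failure mode is a gate that refuses destructive statements an agent proposes unless a human has explicitly confirmed them. The sketch below is a generic, hypothetical pattern, not code from Claude Code or any particular tool.

```python
import re

# Hypothetical guard: block destructive SQL an AI agent proposes
# unless a human has explicitly confirmed it.
DESTRUCTIVE = re.compile(r"\b(DROP|DELETE|TRUNCATE)\b", re.IGNORECASE)

def execute_sql(statement, run, confirmed=False):
    """Pass `statement` to the `run` callable unless it looks
    destructive and has not been human-confirmed."""
    if DESTRUCTIVE.search(statement) and not confirmed:
        raise RuntimeError(f"blocked destructive statement: {statement!r}")
    return run(statement)

executed = []
execute_sql("SELECT * FROM users", executed.append)
try:
    execute_sql("DROP TABLE users", executed.append)
except RuntimeError as err:
    print(err)
execute_sql("DROP TABLE users", executed.append, confirmed=True)
print(executed)
```

A keyword regex is deliberately crude; the point is the fail-closed default, so that a confused agent cannot reach the production system without a human in the loop.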
Sources
- BMG Sues Anthropic Over Alleged Use of Song Lyrics in AI Training
- BMG sues Anthropic for using Bruno Mars, Rolling Stones lyrics in AI training
- Chicken Soup for the Soul publisher sues tech companies over AI training
- Slotkin proposes legislation to limit Defense Department's use of AI
- Slotkin introduces bill limiting Pentagon AI use
- Nvidia targets inference as AI’s next battleground with Groq 3 LPX
- YY Group Announces Strategic Investment in Arros AI, an NVIDIA Inception Program Member
- Meet the internet pioneer who declared war on spam and is helping AWS prepare for AI cyber threats
- The Unseen Backbone: Why Hardware-Level Data Recovery is Crucial for the AI Era
- NVIDIA AI Open-Sources ‘OpenShell’: A Secure Runtime Environment for Autonomous AI Agents
- AI ‘Slop’ Is Flooding Children’s Media. Parents Should Be Very Alarmed.
- Why Walmart and OpenAI Are Shaking Up Their Agentic Shopping Deal
- An AI agent destroyed this coder’s entire database. He’s not the only one with a horror story.