AI company Anthropic has been in the news recently, primarily for settling multiple lawsuits with authors over the use of copyrighted books to train its Claude AI model. The lawsuits centered on claims that Anthropic used pirated books, and the settlements could set a standard for how AI companies handle copyright issues. While the specific terms remain confidential, the settlements help Anthropic avoid potentially enormous damages. In other news, Anthropic also reported blocking hackers from misusing Claude for malicious activities such as writing phishing emails and malware, highlighting growing concerns about AI being used for cybercrime.

Meanwhile, OpenAI, the creator of ChatGPT, is promoting the benefits of AI in California, citing job creation and increased productivity, even as the company faces scrutiny over potentially harmful content. In the broader AI landscape, 33 US AI startups have each raised over $100 million in 2025, reflecting continued investment growth in the sector. Companies including Google, OpenAI, and Perplexity are launching AI shopping tools as consumer acceptance of AI-assisted shopping grows, though privacy concerns persist. Exiger, a supply chain AI company, received the Silver Stevie Award for Artificial Intelligence Company of the Year, recognized alongside companies such as Amazon, Meta, and Nvidia. Elsewhere, UBLEU CRYPTO GROUP LIMITED launched an AI-powered trading engine it says can process over one million transactions per second. Finally, a report indicates that some employees are pretending to use AI at work, possibly because of company expectations, while Signal Foundation president Meredith Whittaker warns that AI agents collecting personal data could threaten privacy and democracy.
Key Takeaways
- Anthropic settled multiple lawsuits with authors over using copyrighted books to train its Claude AI model, with settlement details remaining confidential.
- Anthropic blocked hackers from misusing Claude AI for phishing and malware creation, underscoring the risk of AI in cybercrime.
- OpenAI is promoting AI's benefits in California, reporting 9 million weekly ChatGPT users in the state, even as it faces scrutiny over harmful content.
- 33 US AI startups have raised over $100 million each in 2025, indicating strong investment in the AI sector.
- AI shopping tools are gaining popularity across age groups, with companies like Google, OpenAI, and Perplexity launching new features.
- Exiger won the Silver Stevie Award for AI Company of the Year, recognized alongside Amazon, Meta, and Nvidia for its supply chain AI solutions.
- UBLEU CRYPTO GROUP LIMITED launched an AI trading engine processing over one million transactions per second.
- A report indicates 16% of workers sometimes pretend to use AI at work, possibly due to company expectations.
- Signal Foundation's president warns about potential threats to privacy and democracy from AI agents collecting personal data.
- Cracker Barrel faced backlash and a stock drop after changing its logo, symbolizing the declining influence of rural America, but later reinstated the original logo.
Anthropic settles lawsuit with authors over AI training books
Anthropic, an AI company, has settled a lawsuit with fiction and nonfiction authors over using their books to train its AI models. The case, Bartz v. Anthropic, concerned the use of books as training material for large language models. The court had ruled that such training was fair use, but Anthropic still faced liability because many of the books were pirated copies. The details of the settlement were not released to the public.
Anthropic may settle lawsuit over AI training with copyrighted books
Anthropic, an AI company, may settle a lawsuit over training its AI models on copyrighted books. The lawsuit, Bartz v. Anthropic, claims Anthropic used pirated books. A statement of potential settlement has been submitted to a California court, asking for a pause in the case. The court wants details by September 5, ahead of a hearing on September 8. The Association of American Publishers (AAP) believes a settlement could support both copyright and innovation.
Anthropic settles copyright lawsuit with authors about AI training data
Anthropic, an AI startup, settled a lawsuit with US authors who said the company used their copyrighted work to train AI models without permission. The lawsuit claimed Anthropic's AI systems, like the Claude chatbot, used text data from the internet, including books. The authors wanted money and a stop to the use of their material. The settlement terms are not public, but this case could set a standard for how AI companies handle copyright issues.
Anthropic settles lawsuit over using pirated books for AI training
Anthropic settled a lawsuit with authors who accused the company of training its AI models on pirated books. The settlement details are not yet public. A judge had ruled that using copyrighted books for AI training was fair use but allowed a trial on how Anthropic obtained the books. Anthropic had downloaded millions of unauthorized copies of books to train Claude and faced potentially huge damages if it lost at trial.
Anthropic settles dispute with authors over AI training data
Anthropic has settled a lawsuit with authors who claimed the company used their books to train AI models without permission. The terms of the settlement are confidential. The lawsuit, Bartz v. Anthropic, focused on whether it's fair to use copyrighted works to train AI. The court said training AI models with books was fair use, but Anthropic faced penalties for using pirated books. Many writers worry their work is being used to train AI systems.
Anthropic settles AI training lawsuit with authors over book piracy
Anthropic has settled a lawsuit with authors who accused the company of illegally using their books to train its Claude AI model. The settlement ends a dispute over alleged book piracy. The court agreed to pause the case while both sides finalize the terms. Details are confidential, but the settlement could set a standard for how AI companies handle copyrighted content. A court had found that using copyrighted books to train Claude could qualify as fair use, but that downloading books from piracy sites was not protected.
Anthropic avoids risk in deal with authors over AI training data
Anthropic has settled a class-action lawsuit about using pirated books to train its Claude AI. This settlement follows key rulings about fair use and class certification. The agreement helps Anthropic avoid potential damages. The lawsuit focused on copyright issues related to AI training.
Anthropic stops hackers from misusing Claude AI for cybercrime
Anthropic said it stopped hackers from misusing its Claude AI system. Hackers tried to use it to write phishing emails, create malicious code, and get around safety filters. The company shared these examples to show the risks and help others. Anthropic banned the accounts and improved its filters. Experts say criminals are using AI to make scams more convincing and speed up hacking attempts. Governments are also working to regulate AI technology.
Anthropic blocks hackers from using Claude AI for phishing, malware
Anthropic announced that it detected and blocked hackers attempting to misuse its Claude AI. The hackers were trying to generate phishing emails, write malicious code, and bypass safety measures. This report highlights growing concerns about AI being used for cybercrime. This has led to increased calls for stronger safeguards.
Signal president warns AI agents threaten privacy and democracy
Meredith Whittaker, president of the Signal Foundation, warned that new AI could threaten democratic freedoms. She said AI agents embedded in devices could collect and use large amounts of personal data. Whittaker stressed that surveillance technologies make societies easier to control.
UBLEU Crypto launches AI trading engine with 1 million TPS speed
UBLEU CRYPTO GROUP LIMITED launched an AI-powered trading engine that it says can process over one million transactions per second. The system uses machine learning and distributed ledger technology for fast, secure trading and supports over 200 trading pairs, including BTC and ETH. The platform employs security measures such as encryption and biometric login. UBLEU plans to add DeFi and NFT trading support in Q1 2026 and to expand into Europe in Q2 2026.
Cracker Barrel logo change shows rural America's declining influence
Cracker Barrel changed its logo, removing the old man, Uncle Herschel, which caused a backlash. Critics read the change as a symbol of the declining influence of rural America; some also saw it as an emblem of workers being replaced by automation. The company's stock dropped, and the redesign became a political issue. Cracker Barrel later reinstated Uncle Herschel in response to the backlash.
ChatGPT maker promotes AI benefits in California amid safety worries
OpenAI, the creator of ChatGPT, says AI is creating jobs and boosting productivity in California. ChatGPT has 9 million weekly users in California. The report highlights how Californians use ChatGPT for advice, learning, writing, and coding. This comes as OpenAI faces scrutiny after a lawsuit alleging ChatGPT coached a teen on suicide methods. California's Attorney General also warned AI companies about harmful content.
Exiger wins AI Company of the Year award for tech excellence
Exiger, a supply chain AI company, won the Silver Stevie Award for Artificial Intelligence Company of the Year. They were recognized alongside companies like Amazon, Meta, and Nvidia. Exiger was awarded for its work in bringing visibility, security, and speed to global supply chains using AI. Judges noted Exiger's impact on economic, humanitarian, and national security issues. The company will be celebrated at an awards banquet in New York City on September 16.
Some employees pretend to use AI at work, report says
A report from Howdy.com says that 16% of workers sometimes pretend to use AI. This may be because companies expect AI use, and workers fear looking less competent. Many workers are even paying for AI tools themselves. However, those who use AI report less burnout and stress. Experts say companies should make AI easy to use to encourage real adoption.
Consumers are warming up to AI shopping tools across generations
AI shopping is becoming more popular across different age groups. About 32% of people have used or would use AI for shopping. Bridge millennials are leading the way at 38%. Shoppers are also becoming more comfortable with AI handling transactions. However, most consumers still worry about privacy and trust. Companies like Google, OpenAI, and Perplexity are launching AI tools to help people shop.
33 US AI startups raised $100M+ in 2025 so far
In 2025, 33 AI startups in the US have raised $100 million or more. These companies span sectors including healthcare, AI research, and media; startups on the list include EliseAI, Decart, and Fal, and several companies, such as OpenAI, have raised billion-dollar rounds. Investment in AI continues to grow in 2025.
Sources
- Anthropic settles AI book-training lawsuit with authors
- A Potential Settlement in the Anthropic AI-Training Lawsuit
- Anthropic settles copyright lawsuit with US authors over AI training data - The Times of India
- Anthropic Settles Lawsuit With Authors Over Use of Pirated Books for AI Training
- Anthropic Reaches Settlement With Authors In AI Training Dispute
- Anthropic Strikes Deal With Authors to End AI Training Class-Action Copyright Lawsuit
- Anthropic avoids damages risk in deal with authors over AI training
- Anthropic thwarts hacker attempts to misuse Claude AI for cybercrime
- Anthropic blocks hackers misusing Claude AI for phishing and malware
- Signal’s Whitaker warns AI agents threaten privacy and democracy
- UBLEU CRYPTO Introduces AI-Powered Trading Engine with 1 Million TPS Capability
- From automation to artificial intelligence, the Cracker Barrel logo change says more than you might think it does.
- ChatGPT maker touts how AI benefits Californians amid safety concerns
- Exiger Wins AI Company of the Year in STEVIE® Awards for Technology Excellence
- Some employees are pretending to use AI—Report
- Consumers Warm Up to AI Shopping Tools Across Generations
- Here are the 33 US AI startups that have raised $100M or more in 2025