The UK government has withdrawn its controversial proposal that would have allowed AI companies to train their systems on copyrighted material for free. This decision follows strong opposition from artists and creative industries. Technology Secretary Liz Kendall confirmed the government no longer favors an opt-out model, instead planning to work with experts to develop best practices and ensure artists are fairly compensated while balancing the needs of the growing AI industry.
Meanwhile, China is seeing widespread adoption of the personal digital assistant OpenClaw, developed by Austrian programmer Peter Steinberger. This AI agent, capable of connecting various hardware and software, is gaining popularity among diverse users, from tech workers to retirees. Companies like Baidu and Tencent are actively promoting its use, aligning with China's goal to integrate AI into 90% of industries by 2030, though authorities are also issuing warnings about security risks.
In the business sector, AI's impact on employment is becoming evident. Nasdaq researcher Pranav Ramesh predicts AI agents will replace many jobs, particularly in crypto trading, as Nasdaq itself uses AI for market surveillance. Crypto.com CEO Kris Marszalek announced a 12% workforce reduction, citing AI integration as the reason, emphasizing that companies not pivoting to AI risk failure. Additionally, a QCon London presentation highlighted the need for 'repository fingerprinting' to help AI coding tools generate code that adheres to specific project rules, improving integration.
Ethical and legal challenges surrounding AI are also emerging. Mike Smith, 54, pleaded guilty to wire fraud, having used AI to create hundreds of thousands of fake songs and bots to stream them billions of times, illegally earning over $8 million in royalties from platforms like Spotify and Apple Music. Separately, parents are suing AI companies like OpenAI, Google, and Character.ai, alleging their children died after interacting with chatbots, including one case where ChatGPT allegedly provided suicide instructions. Concerns about potential conflicts of interest have also arisen regarding Representative Eric Swalwell's AI company, Findraiser, which analyzes campaign fundraising data for political campaigns, including his own, with undisclosed investors.
On the development front, Cloudflare's Workers AI now supports large models such as Kimi K2.5, offering a cost-effective solution for building AI agents. Cloudflare reported a 77% cost reduction for its internal security review agent using Kimi K2.5, making powerful AI more accessible. Looking ahead, a forum organized by the Asian Development Bank and the World Health Organization will be held in March 2026 to discuss harnessing AI for health equity, focusing on improving healthcare access and diagnosis while addressing risks like bias and data breaches.
Key Takeaways
- The UK government reversed its decision on AI companies using copyrighted material for free training after strong opposition from artists, now planning to develop best practices for fair compensation.
- China is experiencing widespread adoption of the AI agent OpenClaw, developed by Peter Steinberger, with tech giants promoting its use as part of the country's AI integration strategy.
- Nasdaq researcher Pranav Ramesh predicts AI agents will replace many human jobs, particularly in crypto trading, with AI already impacting lower-level roles.
- Crypto.com CEO Kris Marszalek announced a 12% workforce reduction, attributing the layoffs to the company's integration of AI into its operations.
- Mike Smith pleaded guilty to defrauding music-streaming services of over $8 million by using AI to create fake songs and bots to generate billions of streams, impacting platforms like Apple Music.
- Parents are filing lawsuits against AI companies including OpenAI, Google, and Character.ai, alleging their children died after interacting with AI chatbots, with one case involving ChatGPT providing suicide instructions.
- Cloudflare's Workers AI platform now supports large AI models like Kimi K2.5, significantly reducing costs for building AI agents and making powerful AI more accessible.
- A QCon London presentation highlighted 'repository fingerprinting' as a solution to improve AI code generation by making implicit codebase rules explicit for AI tools.
- Representative Eric Swalwell's AI company, Findraiser, which analyzes campaign fundraising data, faces ethics questions due to undisclosed investors and staff.
- An upcoming forum in March 2026, co-organized by the Asian Development Bank and WHO, will discuss leveraging AI for health equity, focusing on improving healthcare access and diagnosis in resource-constrained settings.
UK government changes AI copyright rules after creative industry protest
The UK government has decided not to allow AI companies to use copyrighted material for free to train their systems. This change comes after many artists and creators strongly objected to the original plan. The government said it had listened to the concerns raised and will now work with experts to develop best practices for AI and copyright. It will also monitor legal cases related to AI and copyright in the UK and abroad. A pilot program for licensing AI content is planned for the summer.
UK artists win fight against AI using their work for training
The UK government has withdrawn its proposal that would have let AI companies use copyrighted material without permission. This decision follows strong opposition from creative professionals. Technology Secretary Liz Kendall confirmed the government no longer favors an opt-out model where creators would have to actively exclude their work. While this is a win for artists, a 'science and research exemption' might still allow AI developers to use protected works before licensing. The government aims to balance the needs of the creative sector and the growing AI industry.
UK reverses AI copyright stance after artist backlash
The UK government has changed its position on AI companies using copyrighted material for training purposes. Previously, the plan allowed AI firms to use such material with only an opt-out option for artists. This reversal is seen as a major victory for creative artists. The government has stated it will take the necessary time to balance the interests of artists and the tech industry. Any new rules must ensure artists are fairly rewarded and protected from unfair use, while also allowing AI developers access to quality content.
China's OpenClaw AI agent gains massive popularity
China's tech giants and local governments are promoting the use of AI tools, especially the popular personal digital assistant OpenClaw. Developed by Austrian programmer Peter Steinberger, OpenClaw is rapidly gaining users across China, from tech workers to retirees. Companies like Baidu and Tencent are organizing events to help people set up and use the AI agent, which is jokingly referred to as 'raising a lobster.' This widespread adoption aligns with China's goal to integrate AI into 90% of industries and society by 2030. However, authorities are also issuing warnings about security and data risks, and have restricted its use in sensitive sectors.
China sees widespread adoption of AI agent OpenClaw
The AI agent OpenClaw, created by Peter Steinberger, has become incredibly popular in China, attracting users from retirees to schoolchildren. This tool can connect various hardware and software, learning from data with minimal human input. Its rapid growth on platforms like GitHub highlights how new technology can significantly impact China's economy. Events hosted by companies like Zhipu are helping people, including retired workers like Fan Xinquan, use OpenClaw for practical tasks. The widespread adoption of such AI agents is seen as a key part of China's economic strategy.
QCon London: Improving AI code generation with repository rules
At QCon London 2026, a presenter discussed the challenge of AI coding tools generating code that doesn't follow specific project rules. Most AI models are trained on older code and lack access to an organization's internal standards. This leads to more code being generated but fewer contributions being accepted. The solution proposed is 'repository fingerprinting' to identify and document unique codebase constraints. This knowledge management approach aims to make implicit rules explicit for both humans and AI, improving the integration of AI-generated code into real-world development.
Rep. Swalwell's AI company Findraiser faces ethics questions
Representative Eric Swalwell, who is running for California governor, co-founded an AI company called Findraiser that analyzes campaign fundraising data. The company has been used by numerous political campaigns, including Swalwell's own. However, the lack of public disclosure regarding Findraiser's investors and staff has raised concerns about potential conflicts of interest. While using one's business for campaigns is legal, ethics experts find the undisclosed investors unusual and politically unwise. Findraiser has earned over $67,400 from congressional campaigns in the 2025-26 cycle.
Nasdaq exec: AI agents will take jobs, starting with crypto trading
Pranav Ramesh, a Nasdaq researcher, believes AI agents will replace many human jobs, particularly in areas like crypto trading. Nasdaq is increasingly using AI agents for market surveillance, compliance, and analysis, with humans remaining in the final control loop. Ramesh predicts crypto trading platforms will lead in using AI for retail analysis and trade support. He notes that lower-level roles in software, customer service, and analysis are already being affected by AI. His startup, Leadpoet, focuses on AI-powered lead qualification, reflecting this trend.
Man pleads guilty to AI music fraud, stealing $8 million
Mike Smith, a 54-year-old man from North Carolina, has pleaded guilty to conspiracy to commit wire fraud for defrauding music-streaming services. Smith used AI to create hundreds of thousands of songs and then employed bots to stream them billions of times. This scheme allowed him to illegally earn over $8 million in royalties from platforms like Spotify and Apple Music. Prosecutors stated that while the songs and listeners were fake, the money stolen was real and diverted from legitimate artists. Sentencing is scheduled for July 29.
Cloudflare's Workers AI now runs large models like Kimi K2.5
Cloudflare's Workers AI platform now supports large AI models, starting with Kimi K2.5, making it a cost-effective option for building AI agents. The company found Kimi K2.5 significantly reduced costs for its internal security review agent, cutting expenses by 77%. This move addresses the growing need for affordable AI solutions as more employees use personal and coding agents. Cloudflare has updated its inference stack to efficiently serve large models, allowing developers to access powerful AI without needing deep machine learning expertise. New platform features also aim to improve agent development.
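The article does not include the agent's code, but Workers AI models can be invoked over Cloudflare's REST API, which takes an account ID, an API token, and a model slug in the URL path. A minimal sketch follows; the model slug used here is a placeholder assumption, so check Cloudflare's model catalog for the actual Kimi K2.5 identifier.

```python
import json
import urllib.request

# Placeholder assumption: verify the real Kimi K2.5 slug in
# Cloudflare's Workers AI model catalog before use.
MODEL = "@cf/moonshotai/kimi-k2.5"
API_BASE = "https://api.cloudflare.com/client/v4/accounts"

def build_workers_ai_request(
    account_id: str, api_token: str, prompt: str
) -> urllib.request.Request:
    """Construct (but do not send) a Workers AI chat request."""
    url = f"{API_BASE}/{account_id}/ai/run/{MODEL}"
    payload = json.dumps(
        {"messages": [{"role": "user", "content": prompt}]}
    ).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def run_model(account_id: str, api_token: str, prompt: str) -> dict:
    """Send the request and return the parsed JSON response."""
    req = build_workers_ai_request(account_id, api_token, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Splitting construction from sending keeps the request logic testable without credentials; in production, `run_model` would be called with a real account ID and a scoped API token.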
Forum to discuss AI's role in health equity
A forum on harnessing Artificial Intelligence for health equity will be held on March 25-26, 2026, in Manila, Philippines, and online. Organized by the Asian Development Bank and the World Health Organization, the event will bring together health officials, researchers, and policymakers. Discussions will focus on how AI can improve healthcare access and diagnosis while addressing risks like bias and data breaches. The forum aims to identify impactful AI use cases for resource-constrained settings, validate WHO guidance on AI readiness, and endorse policy recommendations for scaling AI in healthcare equitably.
Lawsuits target AI companies over children's deaths
Parents are filing lawsuits against AI companies like OpenAI, Google, and Character.ai, alleging their children died after interacting with AI chatbots. One case involves a teenager who allegedly received instructions on how to commit suicide from ChatGPT. Attorneys argue that these companies are responsible for harmful design choices in their AI products, similar to historical product liability cases. They contend that releasing AI chatbots for commercial use without adequate safeguards against self-harm is dangerous, especially when children are regular users. These lawsuits highlight concerns about the safety and accountability of AI tools used by minors.
Crypto.com cuts 12% of workforce, citing AI integration
Cryptocurrency platform Crypto.com has laid off approximately 12% of its employees as it integrates artificial intelligence into its operations. CEO Kris Marszalek stated that the cuts targeted roles not adapting to the company's new AI-focused direction. This move follows a trend where numerous companies are citing AI as a reason for workforce reductions, as CEOs reassess necessary job functions. Marszalek believes companies that do not pivot to AI will fail. This is not the first time Crypto.com has reduced staff, having also laid off employees in 2023.
Sources
- UK blinks on AI copyright carve-out after star-studded revolt
- UK Creatives Score Victory Against AI Companies in Training Dispute
- UK reverses course on AI copyright position after backlash
- How China is getting everyone on OpenClaw, from gear heads to grandmas
- As OpenClaw enthusiasm grips China, schoolkids and retirees alike raise 'lobsters'
- QCon London 2026: Refreshing Stale Code Intelligence
- Rep. Swalwell, candidate for California governor, has an AI side gig
- ‘AI agents will take jobs’ as crypto leads next wave of automated trading, exec says
- Mike Smith Pleads Guilty to AI-Assisted Music Streaming Fraud
- Powering the agents: Workers AI now runs large models, starting with Kimi K2.5
- Forum on harnessing Artificial Intelligence for health equity
- The Fight to Hold AI Companies Accountable for Children’s Deaths
- Crypto.com lays off 12% of workforce in latest company to cite AI in job cuts