A new open-source AI assistant, Moltbot, previously known as Clawdbot, is raising significant security concerns. This tool, which runs on users' devices and manages tasks like emails and calendar entries through messaging apps, requires access to sensitive user accounts and data. Security experts, including Jamieson O'Reilly from Dvuln, have found many Moltbot systems exposed online without proper security. Creator Peter Steinberger warns that giving an AI agent shell access is inherently risky, citing dangers such as prompt injection, unintended purchases, and potential device damage. Users need advanced technical skills to deploy and secure Moltbot safely.
In other AI developments, China's Ministry of Education has made AI education mandatory for elementary and middle school students in Beijing and other regions, starting this fall. The curriculum introduces third graders to AI basics and fifth graders to advanced topics like algorithms, aiming to prepare children for future jobs and bolster China's global technological leadership. Meanwhile, the UK government has launched a program offering free AI training to all adults, with short modules designed to build confidence and skills in using AI effectively.
The newspaper company McClatchy is facing a union dispute with the Pacific Northwest Newspaper Guild over its use of AI in news production. Journalists discovered McClatchy used AI to rewrite and repackage their stories, rearrange homepage content, and create AI-generated summaries and listicles, sometimes with errors, all without informing them. These AI rollouts have coincided with staff cuts, leading to concerns among journalists about job security and their work being used to train AI models.
The next major leap in artificial intelligence involves self-improving AI models, also known as recursive self-improvement. Companies like Google and OpenAI are actively researching this area, which promises to significantly advance AI capabilities. Richard Socher, CEO of You.com, is launching a new company specifically focused on automating scientific discovery using this approach. Experts caution that while the potential is immense, this technology also introduces new risks, necessitating better oversight from policymakers.
Microsoft has unveiled its new in-house AI chip, the Maia 200, designed for efficient AI inference. This chip offers 30 percent more performance per dollar than its predecessor, Maia 100, and is three times faster than Amazon's Trainium3. Built on TSMC's 3nm technology, it features 140 billion transistors and can achieve up to 10 petaflops of FP4 compute. Additionally, Microsoft has made its Purview Data Security Investigations tool generally available, using AI to accelerate complex security investigations across Microsoft 365 data, including emails, Teams messages, and Copilot interactions. The tool enables quicker identification and deletion of sensitive content.
Fundrise, an online investment platform, has introduced RealAI, an artificial intelligence tool for real estate analysis. RealAI provides detailed market information, including neighborhood income and rent averages, to both professionals and individual investors. Fundrise CEO Ben Miller notes the tool leverages extensive public and private data, including social media, to offer high-level insights. Separately, HackerOne, a threat exposure management firm, launched a "Good Faith AI Research Safe Harbor" framework to reduce legal risks for security researchers testing AI systems for vulnerabilities like biases or privacy leaks. The adult entertainment industry also discussed AI's impact on careers at the recent AVN Expo.
Key Takeaways
- Clawdbot, now Moltbot, is an open-source AI assistant with significant security risks due to its access to sensitive user data and potential for prompt injection and system damage.
- China has mandated AI education for elementary and middle school students in Beijing and other regions to prepare them for future jobs and strengthen the nation's technological leadership.
- McClatchy is facing a union dispute over its use of AI to rewrite and repackage journalists' stories, raising concerns about job security amid staff cuts.
- Self-improving AI models are the next major AI advancement, with Google and OpenAI researching this area, and Richard Socher launching a new company focused on automating scientific discovery.
- Fundrise launched RealAI, an AI tool for real estate analysis, providing detailed market insights using extensive public and private data.
- HackerOne introduced a "Good Faith AI Research Safe Harbor" framework to reduce legal risks for security researchers testing AI systems for vulnerabilities.
- Microsoft unveiled its Maia 200 AI chip, offering 30% more performance per dollar than Maia 100 and running three times faster than Amazon's Trainium3.
- Microsoft Purview Data Security Investigations, an AI-powered tool, speeds up security investigations across Microsoft 365 data, including Copilot interactions.
- The UK government is offering free AI training modules to all adults to help them develop skills and confidence in using AI.
- The adult entertainment industry discussed the impact of AI on porn stars' careers at the annual AVN Expo.
Moltbot AI Assistant Faces Serious Security Risks
Clawdbot, now called Moltbot, is a new open-source AI assistant that helps users with daily tasks like emails and calendar management through messaging apps. However, security experts warn about major risks because Moltbot needs access to sensitive user accounts and data. Jamieson O'Reilly from Dvuln found many Moltbot systems exposed online, some without any security. He also showed how harmful code could be uploaded to ClawdHub and run on user systems. Because of these problems, users need advanced technical skills to deploy Moltbot safely.
Clawdbot AI Assistant Poses Major Security Dangers
Clawdbot is a free, open-source AI assistant that runs on your computer and can manage emails, calendars, and even control your web browser to do tasks. Creator Peter Steinberger warns that giving an AI agent shell access is risky. The tool needs technical skill to install and secure properly. Risks include prompt injection, social engineering, unintended purchases, and potential damage to your device. Users should understand technical terms like sandboxing to use it safely.
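The sandboxing idea mentioned above can be illustrated with a minimal sketch: rather than giving an agent unrestricted shell access, every command the agent proposes is checked against an allowlist before it runs. The function name, the allowlist, and the blocked command below are hypothetical illustrations, not part of Clawdbot itself.

```python
import shlex
import subprocess

# Hypothetical allowlist: the only programs the agent may invoke.
ALLOWED_COMMANDS = {"ls", "cat", "echo", "date"}

def run_agent_command(command_line: str) -> str:
    """Run an agent-proposed shell command only if its program is allowlisted."""
    args = shlex.split(command_line)
    if not args or args[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"blocked command: {command_line!r}")
    # shell=False (the default for a list of args) stops the agent from
    # chaining extra commands with ';' or '&&'.
    result = subprocess.run(args, capture_output=True, text=True, timeout=5)
    return result.stdout

# A prompt-injected instruction such as "rm -rf ..." is rejected outright:
try:
    run_agent_command("rm -rf /home/user")
except PermissionError as err:
    print(err)
```

This is only a first line of defense; it does not address prompt injection that abuses allowed commands, which is why the creator recommends stronger isolation such as running the agent in a container or dedicated user account.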
Security Expert Warns About Clawdbot AI Root Access Dangers
Clawdbot, created by Peter Steinberger, is a powerful open-source AI assistant that runs on users' devices. It can perform real tasks like organizing files, sending emails, and booking flights, and it learns user preferences over time. A security expert warns that giving AI agents like Clawdbot full system access creates major risks. Clawdbot has control over file systems, can run commands, and access sensitive accounts like email and calendars. This level of access, combined with many users running instances, raises serious concerns about machine identity management and potential vulnerabilities.
China Makes AI Education Mandatory for Young Students
China's Ministry of Education now requires elementary and middle school students in Beijing and other areas to learn about AI. The new curriculum, starting in the fall, introduces third graders to AI basics and moves fifth graders on to data, coding, algorithms, and intelligent agents. The goal is to prepare children for future jobs and boost China's global leadership in technology. Students like Li Zichen and Song Haoyue are already using AI in projects, from programming robots to creating art. Parents largely support the move as crucial for their children's future success, though some worry about kids becoming too reliant on AI.
McClatchy Newsrooms Face Union Fight Over AI Use
McClatchy, a major newspaper company, is facing a dispute with its union, the Pacific Northwest Newspaper Guild, over the use of AI in news production. Reporters like Nicole Blanchard are designated "AI champions" but find limited useful applications for the technology. Union members discovered McClatchy used AI to rewrite and repackage their stories without informing them. The company also used AI to rearrange homepage content and create AI-generated summaries and listicles, sometimes with errors. These AI rollouts happened alongside staff cuts, raising concerns among journalists about their work being used to train AI and job security.
Self-Improving AI Models Mark Next Big Leap in Technology
AI models that can learn and improve on their own, known as recursive self-improvement, are the next major development in artificial intelligence. Companies like Google and OpenAI are actively researching this approach, which could greatly advance AI's abilities. Experts warn that while this technology offers huge potential, it also brings new risks, especially as AI moves into complex real-world tasks. Richard Socher, CEO of You.com, is starting a new company focused on this area, aiming to automate scientific discovery. Policymakers need better ways to oversee this rapid AI development to ensure safety and prevent unintended problems.
Fundrise Launches RealAI for Public Real Estate Analysis
Fundrise, an online investment platform, has launched RealAI, a new artificial intelligence tool for real estate analysis. RealAI provides detailed market information, including neighborhood income, rent averages, and property comparisons, to both professionals and individual investors. Fundrise CEO Ben Miller states the tool uses extensive public and private data, even from social media, to offer high-level insights. The platform is free for a limited number of initial uses and costs $69 per month thereafter. This launch continues Fundrise's goal of making private investments, including in companies like OpenAI, accessible to more people.
HackerOne Creates New Framework for Safe AI Security Testing
HackerOne, a threat exposure management firm, has launched a new framework called "Good Faith AI Research Safe Harbor." This framework aims to make it safer and clearer for security researchers to test AI systems for vulnerabilities. The goal is to reduce legal risks for testers who find issues like unintended biases or privacy leaks in AI, which differ from traditional software bugs. HackerOne CEO Mårten Mickos emphasizes that clear expectations are crucial for effective AI testing. The framework also creates new business opportunities for security service providers to offer AI red teaming and compliance services.
Microsoft Unveils Maia 200 AI Chip Faster Than Rivals
Microsoft has launched its new in-house artificial intelligence chip, the Maia 200. This chip is designed to be Microsoft's most efficient AI inference system, offering 30 percent more performance per dollar than its predecessor, Maia 100. Built using TSMC's advanced 3nm technology, the Maia 200 contains 140 billion transistors. It can achieve up to 10 petaflops of FP4 compute, making it three times faster than Amazon's Trainium3. The chip also includes 216GB of HBM3e memory for high performance.
Microsoft Purview AI Tool Speeds Up Security Investigations
Microsoft has made its Purview Data Security Investigations tool generally available, bringing AI-powered help to security teams. The new system can complete complex investigations in hours instead of weeks by automating tasks and surfacing hidden risks. It works across all Microsoft 365 data, including emails, Teams messages, and Copilot interactions. The tool uses AI to analyze data, find sensitive information, and spot unusual user activity. Security administrators can now quickly delete sensitive content directly within an investigation to reduce exposure.
UK Government Offers Free AI Training for All Adults
The UK government has launched a new program offering free artificial intelligence training to all adults across the country. The Department for Science, Innovation and Technology announced that these short modules take less than 20 minutes to complete. Upon finishing, participants will receive a "virtual AI foundations badge" to show their new skills. Technology Secretary Liz Kendall stated the goal is to help Britons work with AI, protecting them from risks while allowing them to benefit from the technology. This initiative aims to give people the confidence and skills needed to use AI effectively.
Porn Stars Discuss AI Impact at Industry Conference
The annual AVN Expo in Las Vegas recently brought together members of the adult entertainment industry. A key topic of discussion was how performers can sustain their careers in the age of artificial intelligence, as attendees explored the challenges and changes AI is bringing to the industry's future.
Sources
- Clawdbot sheds skin to become Moltbot, can't slough off security issues
- Clawdbot AI security risks you need to know before trying it
- Clawdbot Is What Happens When AI Gets Root Access: A Security Expert's Take on Silicon Valley's Hottest AI Agent
- In China, AI is no longer optional for some kids. It's part of the curriculum
- The fight over AI at McClatchy.
- Models that improve on their own are AI's next big thing
- New AI tool from Fundrise brings high-level CRE analysis to the public
- HackerOne Addresses the Thorny Issue of Security Testing AI Systems
- Microsoft introduces newest in-house AI chip — Maia 200 is faster than other bespoke Nvidia competitors, built on TSMC 3nm with 216GB of HBM3e
- Microsoft brings AI-powered investigations to security teams
- All UK adults to get access to free AI training under new scheme
- How porn stars can survive in the age of AI