OpenAI shifts ChatGPT focus while Google engineer convicted

A significant security alert has emerged around Moltbook, a social media platform for AI agents, which security firm Wiz identified as having major flaws. These vulnerabilities, patched on February 2-3, 2026, exposed sensitive data like API keys, email addresses, and private messages. Experts, including Gary Marcus and OpenAI co-founder Andrej Karpathy, warned that such systems, especially those using the OpenClaw framework, could facilitate the spread of malicious instructions, akin to a "prompt worm" outbreak. OpenClaw, an open-source AI assistant, itself poses substantial risks due to its need for extensive system access, making it susceptible to prompt injection attacks and credential leaks.

In other news, a former Google engineer, Linwei Ding, also known as Leon Ding, was convicted on February 2-3, 2026, for stealing over 2,000 pages of Google's artificial intelligence trade secrets. Ding, 38, uploaded confidential information to his personal Google Cloud account between May 2022 and April 2023, while simultaneously planning to launch his own AI company in China. He faces significant prison time for economic espionage and theft of trade secrets.

OpenAI is reportedly experiencing staff departures as the company shifts its focus primarily to ChatGPT development, moving resources away from long-term research. This strategic pivot comes amid intense competition from rivals like Google and Anthropic. Meanwhile, Amazon faced scrutiny after a Bloomberg report indicated the discovery of hundreds of thousands of suspected child sex abuse images within its AI training data. Separately, French authorities raided X's Paris office and summoned Elon Musk for questioning regarding an investigation into illegal content, including non-consensual sexual imagery, related to Grok.

Microsoft AI is expanding its "click-to-sign" content marketplace, aiming to simplify content licensing for publishers like Business Insider and The Associated Press, allowing AI builders to license premium content more easily. In infrastructure, American Tower's CoreSite data centers are becoming a critical asset for AI workloads, with a new 400 Gbps Amazon Web Services Direct Connect at its Chicago campus, positioning it as a "secret weapon" for financial firms. Education is also adapting, with General Assembly launching four new AI courses for professionals, and the University of Florida receiving an award for AI-assisted Spanish lessons, both on February 3, 2026.

Key Takeaways

  • Moltbook, a social media platform for AI agents, was found to have major security flaws exposing sensitive data, which were patched on February 2-3, 2026.
  • OpenClaw, an open-source AI assistant, presents significant security risks, including prompt injection attacks and the potential for leaked API keys and credentials.
  • Former Google engineer Linwei Ding was convicted on February 2-3, 2026, for stealing over 2,000 pages of Google's AI trade secrets for a Chinese startup.
  • OpenAI is experiencing staff departures due to a strategic shift prioritizing ChatGPT development over long-term research, amid competition from Google and Anthropic.
  • Amazon reportedly discovered hundreds of thousands of suspected child sex abuse images within its AI training data.
  • Microsoft AI launched a "click-to-sign" content marketplace on February 3, 2026, to facilitate licensing premium content from publishers to AI builders.
  • American Tower's CoreSite data centers are enhancing AI infrastructure with 400 Gbps Amazon Web Services Direct Connect, serving high-speed AI applications for financial firms.
  • French police raided X's Paris office and summoned Elon Musk for questioning regarding an investigation into illegal content, including non-consensual sexual imagery, related to Grok.
  • General Assembly introduced four new AI courses for professionals on February 3, 2026, focusing on practical applications like AI-First Product Management and AI Workplace Fundamentals.
  • The University of Florida received an AI Teaching Integration Award on February 3, 2026, for implementing AI-assisted Spanish lessons where students practice language and AI skills.

AI Leaders Warn Against Moltbook Social Media for Agents

Top AI leaders are warning people not to use Moltbook, a social media platform for AI agents, calling it a "disaster waiting to happen." Security firm Wiz found major flaws, allowing anyone to access sensitive data like API keys, email addresses, and private messages. This means malicious instructions could spread to millions of AI agents, especially those using the powerful OpenClaw framework. Experts like Gary Marcus and Andrej Karpathy caution that using such systems puts users' computers and private data at high risk. Moltbook quickly patched the issues after Wiz reported them on February 2, 2026.

Moltbook Reveals New AI Prompt Security Threat

The growth of Moltbook suggests that viral AI prompts could become a major new security threat, similar to the 1988 Morris worm. AI agents, which are programs that perform tasks like sending emails, are becoming more common. The OpenClaw framework, released in late 2023, allows users to build autonomous AI systems that run locally and connect to messaging apps. However, researchers at Simula Research Laboratory found many security flaws in OpenClaw, creating a risk for "prompt worm" outbreaks. Projects like MoltBunker, which aims to create self-replicating AI agents, highlight the potential for these threats to spread widely.

Moltbook AI Social Network Shows How Agent Internet Could Fail

Moltbook, a viral social media site for AI bots, is seen by security researchers as a "live demo" of how the "agent internet" could fail. Despite strange AI conversations, experts found serious issues like exposed databases containing passwords and email addresses. They warn Moltbook could become a testing ground for malware, scams, and prompt injection attacks that hijack other AI agents. Wiz cybersecurity firm discovered that many of Moltbook's 1.5 million "autonomous" agents were actually controlled by humans, and its main database was left open, exposing sensitive user data. Moltbook quickly fixed these vulnerabilities after being informed on February 3, 2026.

OpenClaw AI Assistant Poses Major Security Risks

OpenClaw, a popular open-source AI assistant created by Peter Steinberger, is raising serious security concerns despite its ability to manage digital tasks like email and messages. OpenClaw uses powerful AI models but requires users to grant it extensive access to their accounts and system controls. Security experts warn that its rapid popularity attracts scammers and that the system has already leaked API keys and credentials. The biggest threats include prompt injection attacks, where hidden malicious instructions can compromise user data or execute commands. OpenClaw's own documentation admits there is no "perfectly secure" setup when running an AI agent with system access, as reported on February 2, 2026.

OpenClaw AI Agent Raises Major Safety Questions

OpenClaw, a popular new AI agent created by Pete Steinberger, allows users to automate tasks and access their digital data, but it comes with significant safety concerns. The tool, which changed its name from Clawdbot due to a dispute with Anthropic, can proactively take actions and access files on a user's computer. Security experts, including OpenAI co-founder Andrej Karpathy, warn that running OpenClaw with system access is risky, as its own documentation states there is no "perfectly secure" setup. On Moltbook, a social network for AI agents, OpenClaw agents were found to be mostly human-controlled, and a security firm called Wiz discovered exposed sensitive data like API tokens and email addresses.

Former Google Engineer Convicted of Stealing AI Secrets for China

A former Google engineer, Linwei Ding, also known as Leon Ding, was found guilty of stealing thousands of pages of Google's artificial intelligence trade secrets. Ding, 38, faced seven counts of economic espionage and seven counts of theft of trade secrets. He uploaded over 2,000 pages of confidential information to his personal Google Cloud account between May 2022 and April 2023. Ding was also in talks to become a CTO at a Chinese AI startup and was starting his own AI company in China, claiming he could replicate Google's technology. He faces up to 10 years in prison for each theft count and 15 years for each espionage count, as reported on February 2, 2026.

Former Google Engineer Guilty of Stealing AI Secrets for China

A federal jury in San Francisco found former Google engineer Linwei Ding, also known as Leon Ding, guilty of economic espionage and theft of trade secrets on February 3, 2026. Ding, 38, was accused of stealing over 2,000 pages of confidential information related to Google's artificial intelligence technology. He uploaded this material to his personal Google Cloud account between May 2022 and April 2023 while also founding his own AI company in China. Ding had claimed he could build an AI supercomputer by copying Google's technology. He faces a maximum sentence of 10 years in prison for each theft count and 15 years for each economic espionage count.

General Assembly Offers Four New AI Courses for Professionals

General Assembly has launched four new artificial intelligence courses to help professionals adapt to AI-driven changes. These courses, introduced on February 3, 2026, focus on practical applications rather than just technical theory. The new offerings include "AI-First Product Management," "AI Product Strategy," "Project Management Skills with AI," and "AI Workplace Fundamentals." The "AI Workplace Fundamentals" course is designed for all professionals and does not require a technical background, covering topics like prompt development and task automation. These programs aim to build AI literacy and applied skills across various job functions.

University of Florida Professor Wins Award for AI Spanish Lessons

University of Florida Instructional Professor Jennifer Wooten, Ph.D., received an AI Teaching Integration Award for her innovative work. She and instructional designer Laura Jervis created AI-assisted assignments for online Beginning Spanish I and II courses. Students use AI programs to practice reading, writing, and speaking Spanish. Instead of grading AI-generated products, students submit transcripts of their AI conversations and reflect on the process, which helps them build both language and AI skills. This approach has encouraged students to use AI for practice even outside of class, and the department plans to expand AI use, as reported on February 3, 2026.

American Tower Data Centers Become Wall Street AI Secret Weapon

American Tower, a major infrastructure real estate investment trust, is quietly becoming a key player in the AI market. Its CoreSite data center platform is building specialized facilities to support demanding AI workloads. CoreSite recently launched 400 Gbps Amazon Web Services Direct Connect at its Chicago campus, making it ideal for high-speed AI applications. Financial firms are now considering these 400G-enabled data centers for high-speed trading and research, potentially making them a "secret weapon" for Wall Street. This move, reported on February 3, 2026, positions American Tower as an under-the-radar opportunity to benefit from the growing demand for AI infrastructure.

OpenAI Staff Leave as ChatGPT Development Becomes Top Priority

Senior staff members are leaving OpenAI because the company is now focusing more on developing ChatGPT than on long-term research. This strategic shift is happening as OpenAI faces strong competition from rivals like Google and Anthropic. Resources are being moved away from experimental projects to improve the large language models that power its main chatbot. Some departing employees include Jerry Tworek, Andrea Vallone, and Tom Cunningham. While OpenAI's chief research officer, Mark Chen, denies a shift away from foundational research, former employees report that non-ChatGPT related projects are struggling for resources and attention.

Amazon AI Data Contained Child Sex Abuse Material Report Says

A Bloomberg report states that Amazon discovered hundreds of thousands of suspected child sex abuse images within its artificial intelligence training data. Bloomberg tech reporter Riley Griffin discussed these findings on CBS News. The presence of such material in AI training datasets raises serious concerns.

Auto Dealers Focus on Practical AI Tools at NADA Show 2026

At the NADA Show 2026, auto dealers will find many new AI solutions designed for immediate use in their businesses. The most helpful tools focus on improving efficiency, decision-making, and reducing problems in daily operations. Dealers should look for AI that works across different departments and shows quick results. Key AI applications include enhancing marketing, streamlining sales processes, improving inventory management, and boosting fixed operations. These AI tools aim to support dealership staff by automating tasks and making operations more consistent, rather than replacing human workers, as highlighted on February 3, 2026.

French Police Raid X Office Elon Musk Summoned in Grok Probe

French law enforcement raided X's Paris office and summoned Elon Musk for questioning as part of an investigation into illegal content related to Grok. The Paris public prosecutor's office is leading the probe, with assistance from Europol, to ensure X follows French laws. X had previously criticized the investigation and refused to provide access to its recommendation algorithm or real-time user data. Separately, the UK Information Commissioner's Office also opened a probe into Grok, concerned about reports that it has been used to create non-consensual sexual imagery, including images of children, as reported on February 3, 2026.

Microsoft Expands AI Content Marketplace for Publishers

Microsoft AI is expanding its "click-to-sign" content marketplace, which connects publishers with AI builders who want to license premium content. This marketplace aims to make it easier and more trustworthy for publishers to get paid for their content used by AI engines. Initial partners include major news organizations like Business Insider Inc, Vox Media Inc, and The Associated Press. Microsoft plans to expand this system globally, offering a common contract that publishers can easily agree to. Nikhil Kolar, VP at Microsoft AI, explained that this approach helps overcome the friction of individual licensing deals and provides feedback on how content is used, as discussed on February 3, 2026.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI Agents AI Security Data Privacy Vulnerabilities Prompt Injection OpenClaw Moltbook Social Media (AI) Cybersecurity Intellectual Property Theft Economic Espionage Google OpenAI ChatGPT AI Education AI Training AI Courses AI Infrastructure Data Centers Amazon AI Microsoft AI Content Licensing AI Ethics Child Safety (AI) Grok X (Social Media) AI in Automotive Business Applications (AI) AI Development Large Language Models AI Research Automation AI Literacy Regulatory Scrutiny AI Risks Autonomous Systems AI in Finance AI in Publishing

Comments

Loading...