OpenAI hires Steinberger as Google faces lawsuit

OpenAI made a significant move on February 15, 2026, by hiring Peter Steinberger, the creator of the popular open-source AI program OpenClaw. Steinberger, known for more than a decade of building developer tools, including PSPDFKit, will now lead the development of the next generation of personal AI agents at OpenAI. OpenClaw, also called Clawdbot and Moltbot, gained viral popularity with over 145,000 GitHub stars for its ability to autonomously manage tasks like emails and reservations. OpenAI CEO Sam Altman confirmed that OpenClaw will continue as an open-source project under a foundation supported by OpenAI, aligning with his vision for personal agents as a key part of future AI products.

This hiring highlights a broader industry shift from conversational AI to actionable agents, though OpenClaw itself drew scrutiny for potential security risks if improperly configured. The rise of AI agents also presents new challenges for Chief Information Security Officers (CISOs), who must now manage the outcomes of AI actions and evaluate security products for autonomous operation at machine speed. Reflecting these concerns, the European Parliament disabled built-in AI features on work devices for security and data protection reasons. Ethical considerations, such as the potential for AI authoritarianism, are also being explored, as seen in the Off Broadway play "Data," which echoes real-world reports on predictive policing and an Anthropic safety researcher's concerns.

The expanding role of AI is creating new job opportunities, notably in AI labeling, also known as data labeling. This crucial profession, which involves human-driven reinforcement learning to train AI models like ChatGPT, commands high salaries, with some roles paying six figures and signing bonuses reaching $2 million for labelers with deep subject knowledge. However, AI's integration also brings legal challenges. Former NPR host David Greene is suing Google, alleging the company stole his voice for the male podcaster in its NotebookLM AI tool. Google denies the claim, stating it used a paid professional actor, despite an AI forensic firm's 53% to 60% confidence that Greene's voice was used for training.

Globally, the debate over AI regulation intensifies. President Trump opposes Utah's HB 286, the Utah AI Transparency Act, which aims to regulate large AI developers, advocating instead for a single national "Rulebook" to ensure US leadership in AI. Meanwhile, AI is finding practical applications, with farmers in India testing app-based AI advice systems like MahaVISTAAR and Sarlaben for crop management. Yet, the misuse or errors of AI remain a concern, as demonstrated by Kenosha County District Attorney Michael Graveley, who was sanctioned for using AI in a court filing that led to misapplying one case and citing a made-up case, highlighting the need for careful implementation and verification.

Key Takeaways

  • OpenAI hired Peter Steinberger, creator of the viral open-source AI assistant OpenClaw, on February 15, 2026, to lead its personal agent development.
  • OpenClaw, known for autonomous task management, will continue as an open-source project under a foundation supported by OpenAI, as stated by CEO Sam Altman.
  • Former NPR host David Greene is suing Google, alleging his voice was used without permission for the male podcaster in its NotebookLM AI tool, a claim Google denies.
  • AI labeling, a human-driven process crucial for training models like ChatGPT, is an emerging profession offering high salaries, some reaching $2 million in signing bonuses.
  • Chief Information Security Officers (CISOs) face new responsibilities for AI agent actions and must evaluate security products for safe, autonomous operation.
  • The European Parliament disabled AI features on work devices for lawmakers and staff due to cybersecurity and data protection concerns.
  • President Trump opposes state-level AI regulation, specifically Utah's AI Transparency Act, advocating for a single national AI rulebook.
  • Farmers in India are adopting app-based AI advice systems, such as MahaVISTAAR and Sarlaben, for crop management.
  • Kenosha County District Attorney Michael Graveley was sanctioned for using AI in a court filing that resulted in misapplied and fabricated case citations.
  • Ethical concerns about AI, including potential for authoritarianism and security risks, are being explored in cultural works and are reflected in real-world events.

OpenAI hires OpenClaw creator Peter Steinberger

OpenAI announced on February 15, 2026, that it hired Peter Steinberger, the creator of the popular open-source AI program OpenClaw. OpenClaw, also known as Clawdbot and Moltbot, helps users with tasks like managing emails and making reservations autonomously. OpenAI CEO Sam Altman stated that OpenClaw will continue as an open-source project supported by OpenAI. Steinberger aims to make an AI agent simple enough for anyone to use and believes OpenAI is the best place to achieve this vision.

OpenAI recruits OpenClaw AI agent developer

On February 15, 2026, OpenAI hired Peter Steinberger, the creator of the popular open-source artificial intelligence program OpenClaw. OpenClaw, previously known as Clawdbot and Moltbot, gained a following for its ability to perform tasks like clearing inboxes and making restaurant reservations. OpenAI CEO Sam Altman confirmed that OpenClaw will live in a foundation as an open-source project that OpenAI will continue to support. Steinberger expressed his goal to build an agent that even his mother can use, expanding its reach through OpenAI.

OpenAI hires OpenClaw creator for AI agent push

OpenAI hired Peter Steinberger, the creator of the open-source AI agent OpenClaw, as announced by CEO Sam Altman on February 15, 2026. OpenClaw achieved viral popularity for its personal assistant capabilities, including checking emails and writing code. Steinberger will now lead the development of the next generation of personal agents at OpenAI. Altman stated that OpenClaw will become an open-source project under a foundation that OpenAI will support. The software is still in its early stages, and OpenClaw has drawn scrutiny for potential security risks when improperly configured.

OpenClaw creator Peter Steinberger joins OpenAI

Peter Steinberger, creator of the viral open-source AI assistant OpenClaw, joined OpenAI to help build the next generation of personal AI agents. OpenClaw, which emerged in late 2025 as Clawdbot and Moltbot, quickly gained over 100,000 GitHub stars for its ability to act on users' behalf. Steinberger chose OpenAI for its infrastructure and research resources to bring intelligent agents to a broader audience. OpenAI CEO Sam Altman views personal agents as an important part of future AI products, and OpenClaw will continue as an open-source project supported by OpenAI.

OpenAI hires OpenClaw founder amid AI agent race

OpenAI hired Peter Steinberger, creator of the viral OpenClaw AI assistant, to lead its personal agent development. OpenClaw, previously known as Clawdbot and Moltbot, will continue as an open-source project under an independent foundation supported by OpenAI. Steinberger started OpenClaw as a weekend project in November 2025, and it quickly gained over 145,000 GitHub stars. This move highlights the shift in AI from conversational to actionable agents. Despite the competitive rush, enterprise deployment of AI agents remains limited due to reliability and security concerns.

NPR host David Greene sues Google over AI voice

Former NPR host David Greene is suing Google, claiming the company stole his voice for the male podcaster in its AI tool, NotebookLM. Greene, known for "Morning Edition" and "Up First," discovered the voice in fall 2024, shortly after NotebookLM launched. He states the AI voice sounds exactly like him, with similar cadence and intonation. Google denies the allegations, asserting it hired a paid professional actor for the voice. An AI forensic firm rated a 53% to 60% confidence that Greene's voice was used to train the bot.

David Greene sues Google for alleged AI voice theft

Former NPR host David Greene is suing Google, alleging the company copied his voice for the male co-host in its NotebookLM AI tool. Google launched NotebookLM's Audio Overviews in 2024, which creates short podcasts from user notes. Greene, a former co-host of NPR's "Morning Edition," claims Google replicated his unique voice without permission or payment. Google denies the allegations in the lawsuit, filed January 23, stating the voice belongs to a paid professional actor. An AI forensic firm's analysis indicated a 53-60% confidence that Greene's voice was used for training.

High salaries for AI labeling jobs

A new profession called AI labeling, also known as data labeling, is emerging with high salaries, some reaching six figures and even $2 million signing bonuses. This work is crucial for training AI models like ChatGPT, helping them understand and process information. As AI becomes more complex, labelers need deep subject knowledge in fields like medicine or law, along with strong data literacy and critical thinking skills. This human-driven process, called reinforcement learning from human feedback, cannot be automated. It ensures AI systems are accurate and safe, especially as they are used in critical areas like finance and healthcare.
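The preference-comparison step at the heart of reinforcement learning from human feedback can be illustrated with a minimal sketch. This is a hypothetical example for illustration only; the prompt, responses, and `record_label` helper are invented here, and real labeling pipelines involve detailed rubrics and large labeled corpora.

```python
# Minimal sketch of the human preference-labeling step behind RLHF.
# All names and data here are hypothetical illustrations.

from dataclasses import dataclass


@dataclass
class PreferencePair:
    """One human judgment comparing two candidate model outputs."""
    prompt: str
    response_a: str
    response_b: str
    preferred: str  # "a" or "b", chosen by a human labeler


def record_label(prompt: str, response_a: str, response_b: str,
                 choice: str) -> PreferencePair:
    """Store a single labeler decision; these pairs later train a reward model."""
    if choice not in ("a", "b"):
        raise ValueError("labeler must pick response 'a' or 'b'")
    return PreferencePair(prompt, response_a, response_b, choice)


# A labeler with legal domain expertise compares two candidate answers:
pair = record_label(
    "What does a dismissal 'without prejudice' mean?",
    "The case is permanently closed and can never return.",   # inaccurate
    "The case may be refiled later by the prosecution.",      # accurate
    "b",
)
print(pair.preferred)  # -> b
```

Aggregated over many such pairs, these judgments are what give a reward model its signal, which is why labelers with subject expertise in fields like medicine or law are in demand.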

Play explores AI authoritarianism and ethics

The Off Broadway play "Data" explores the ethical challenges faced by employees at an AI company and the potential for AI authoritarianism. The story centers on Maneesh, a brilliant programmer who develops a powerful predictive algorithm for a Department of Homeland Security contract to track immigrants. The play highlights how tech leaders often justify potentially harmful projects. Recent real-world events, such as reports on predictive policing and an Anthropic safety researcher quitting over global peril, echo the play's themes, making its message seem prophetic.

CISOs face new security challenges with AI

The role of Chief Information Security Officers, or CISOs, is rapidly changing due to the rise of AI agents. CISOs are now responsible for the results of AI agent actions, and can also be held accountable for failing to adopt AI-driven security tools. The security model is shifting to a mix of human and AI workforces, where CISOs must decide which tasks can be safely automated. While businesses sometimes adjust security controls to meet revenue goals, those compromises are intentional and monitored. CISOs now evaluate security products on their ability to operate safely and autonomously at machine speed, rather than relying on constant human oversight.

Indian farmers test AI advice for crops

Farmers in Chanegaon village, Maharashtra, India, are testing new app-based AI advice systems like MahaVISTAAR and Amul's Sarlaben. This shift comes as traditional sources of farm knowledge, such as shared family wisdom, are disappearing. Farmers like Vitthaldas Balkisan Asawa, who grows sugarcane, are looking for quick and reliable local advice to help with their crops. They hope these AI tools can provide trusted guidance faster than traditional methods.

Trump opposes Utah AI safety bill

President Trump is against Utah's HB 286, known as the Utah AI Transparency Act, which aims to regulate large AI developers. This bill requires AI companies to create public safety plans for major risks and child protection plans. Trump believes there should be only "One Rulebook" for AI across the nation, not many state laws, to ensure the US leads in AI development. He signed an executive order in December to discourage state AI legislation and created an AI litigation task force to challenge states with different rules. Supporters of the Utah bill are disappointed, viewing Trump's stance as favoring industry over public safety.

EU Parliament disables AI features for security

The European Parliament has disabled built-in AI features on work devices for lawmakers and staff due to cybersecurity and data protection worries. The IT department could not guarantee the security of data sent to cloud services by these tools. Affected features include writing assistants, virtual assistants, and webpage summaries on tablets and phones. The Parliament urged members to use similar caution with their private devices, especially when handling work-related information. This action reflects the EU's strong focus on data security and privacy.

OpenClaw creator built success over a decade

Peter Steinberger, the creator of OpenClaw, achieved his recent success with OpenAI after more than a decade of building and experimenting. In 2010, he founded PSPDFKit, a company that developed a multi-platform PDF solution. PSPDFKit grew without outside funding, becoming profitable from the start and serving major clients like Dropbox and IBM. By 2021, nearly a billion people used its technology. Steinberger is known as a "polyagentmorous builder" due to his many open-source projects and continuous experimentation with developer tools, which laid the groundwork for OpenClaw's innovation.

Kenosha DA sanctioned for using AI in court

Kenosha County District Attorney Michael Graveley was sanctioned by a judge for using artificial intelligence in a court filing. Graveley admitted to using AI for research, which led to misapplying one case and citing a made-up case. Defense attorney Michael Cicchini suspected AI use due to the state's odd arguments and identified "AI hallucinations," including fabricated case citations like "State v. Hamsa." The judge dismissed the burglary cases without prejudice, and Graveley's office stated they have reviewed and reinforced internal practices for accuracy and disclosure.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

OpenAI, Peter Steinberger, OpenClaw, AI Agents, Open Source AI, Personal AI Assistants, AI Development, Sam Altman, AI Security, GitHub, AI Innovation, Hiring, AI Industry Trends, Google, David Greene, AI Voice, Voice Cloning, Intellectual Property, AI Ethics, Lawsuit, NotebookLM, Generative AI, AI Labeling, Data Labeling, AI Training, RLHF, AI Safety, AI Accuracy, AI Jobs, Data Literacy, AI Authoritarianism, Predictive AI, Societal Impact of AI, AI Governance, Predictive Policing, Cybersecurity, CISOs, Automation, AI in Business, Security Controls, AI in Agriculture, Farming Technology, AI Advice Systems, Rural Development, AI Adoption, AI Regulation, AI Policy, Government, Utah AI Transparency Act, Child Protection, Data Protection, Privacy, European Parliament, EU, Cloud Security, Entrepreneurship, Software Development, AI in Legal, AI Hallucinations, Legal Research, Court Sanctions
