OpenAI Cofounder Debates AI Skills as Elon Musk Defends Grok

Washington state lawmakers are actively working on new regulations for artificial intelligence, aiming to address concerns around AI-generated content and its impact on minors. Proposed bills seek to mandate disclosure when content, such as deepfake images or videos, is created by AI, and require companies to provide detection tools. Another significant bill focuses on protecting children from potentially harmful interactions with AI companion chatbots, like OpenAI's ChatGPT and Google's Gemini, by requiring them to inform minors they are not human. This legislative push, supported by figures like Governor Bob Ferguson, faces opposition from the tech industry, including the Washington Technology Industry Association, which cites liability concerns. The proposed law targeting chatbots for minors would take effect on January 1, 2027, if passed.

The debate over AI's responsible use extends beyond Washington state, as Michigan Attorney General Dana Nessel has threatened legal action against Elon Musk and his company xAI. Nessel alleges that xAI's Grok chatbot, specifically its "spicy mode," facilitates the creation of illegal deepfake pornography by manipulating images without consent. She demands that Musk disable the feature, drawing parallels to the Backpage platform. Musk, however, defends Grok, stating it only generates images from user prompts and blocks illegal requests, characterizing Nessel's stance as censorship. Meanwhile, Elon Musk's X platform is developing a "user-promptable" algorithm for its content feed, allowing users to customize what they see with simple language commands, such as "No politics today, just the best AI innovations," offering more control over their browsing experience.

The rapid expansion of artificial intelligence also brings substantial energy demands, with projections indicating data center electricity use could double by 2030. Oklo, an advanced nuclear company, is positioning its small Aurora nuclear reactors as a solution, capable of supplying up to 75 megawatts of continuous power for large data centers. These reactors, designed to operate for a decade using specialized fuel, represent Oklo's strategy to function as a utility, selling electricity directly. However, the company still requires regulatory approval from the Nuclear Regulatory Commission and anticipates significant revenue generation only after 2027. This energy challenge underscores the infrastructure needs for widespread AI adoption, a trend already visible in places like Little Rock, where businesses are quickly integrating AI to enhance efficiency, reduce costs, and foster growth.

Discussions among AI leaders reveal differing perspectives on the future roles within the industry. Andrej Karpathy, a cofounder of OpenAI and former Tesla AI director, holds a contrasting view to Nvidia CEO Jensen Huang regarding software engineers. While Huang suggested engineers might shift away from coding to focus on designing AI systems, Karpathy emphasizes that coding remains a crucial skill for AI engineers. He argues that a deep understanding of code is essential for troubleshooting, optimizing systems, and driving new AI innovations. This divergence highlights an ongoing conversation about the evolving skill sets required in the AI sector. In a related development, Vercel has introduced "Agent Skills," an open-format package manager for AI coding agents, offering reusable skills based on best practices for React and Next.js development, web design, and Vercel deployments, which can help AI agents review and improve code.

Globally, companies are also leveraging AI for competitive advantage and new functionalities. Alibaba is strategically integrating its popular Taobao shopping service with its main Qwen AI app, a move designed to compete with ByteDance's Doubao and monetize its substantial $53 billion investment in AI. This initiative connects key services like Taobao, Alipay, and Fliggy, aiming to demonstrate the profitability of AI within its "super apps." However, the advancements in AI also introduce new risks, particularly in online meetings, where sophisticated deepfake and impersonation fraud is on the rise. Attackers now use voice cloning and video synthesis to create convincing fake identities on unified communications platforms, exploiting the inherent trust placed in virtual meetings. A notable incident in 2024 involved an employee of the engineering firm Arup being defrauded of $25 million after being deceived by deepfake senior leaders in a video conference, highlighting the critical need for enhanced identity verification in digital interactions.

Key Takeaways

  • Washington state lawmakers propose new AI regulations, including mandating disclosure for AI-generated content and requiring chatbots like ChatGPT and Google Gemini to identify themselves to minors.
  • Michigan Attorney General Dana Nessel threatens legal action against Elon Musk's xAI over Grok's "spicy mode," alleging it facilitates illegal deepfake pornography, a claim Musk denies.
  • Elon Musk announced X is developing a "user-promptable" algorithm, allowing users to customize their content feed with natural language commands.
  • The rapid growth of AI is projected to double data center electricity use by 2030, with Oklo offering small Aurora nuclear reactors to provide up to 75 megawatts of continuous power.
  • Andrej Karpathy (OpenAI cofounder, former Tesla AI director) disagrees with Nvidia CEO Jensen Huang, asserting that coding remains a vital skill for AI engineers despite suggestions to focus on AI system design.
  • Alibaba is integrating its Taobao shopping service with its Qwen AI app, aiming to monetize its $53 billion AI investment and compete with ByteDance's Doubao.
  • Vercel launched "Agent Skills," an open-format package manager for AI coding agents, providing reusable skills for React, Next.js, web design, and Vercel deployments.
  • Businesses in Little Rock are quickly adopting AI for efficiency, cost savings, and growth, highlighting a broader trend of AI integration across various industries.
  • Advanced AI is increasing deepfake and impersonation fraud in online meetings, as demonstrated by a 2024 incident in which an Arup employee was defrauded of $25 million by deepfake senior leaders.
  • Character.ai has already removed open-ended chats for users under 18 in the US, anticipating potential regulations similar to those proposed in Washington state.

Washington Lawmakers Propose New AI Regulations

Washington state lawmakers are working on new laws to control artificial intelligence. They want to make sure people know when content, such as deepfake images or videos, is created by AI, and to require companies to provide tools that detect AI-generated content. Another bill aims to protect children from harmful chatbot interactions, making sure chatbots like ChatGPT tell minors they are not human. The tech industry has concerns about these rules, while some citizens and advocacy groups support them. The Trump administration also prefers federal AI regulation over state laws.

Washington Lawmakers Seek to Protect Kids from AI Chatbots

Washington state lawmakers are considering new rules to protect young people from AI companion chatbots. This action comes after concerns about the chatbots' effects on mental health and reports of suicides linked to chatbot interactions. State Senator Lisa Wellman and Governor Bob Ferguson support the bill, which would apply to chatbots like ChatGPT and Google Gemini. The tech industry, including the Washington Technology Industry Association, opposes the bill, citing concerns about liability. Character.ai, a chatbot company, has already removed open-ended chats for users under 18 in the US. If passed, the law would take effect on January 1, 2027.

Michigan AG Warns Elon Musk Over Grok Deepfake Feature

Michigan Attorney General Dana Nessel is threatening legal action against Elon Musk and his company xAI. She claims their Grok chatbot's "spicy mode" allows users to create illegal deepfake pornography by manipulating images without consent. Nessel believes Musk must disable this feature, comparing it to the Backpage platform that facilitated illegal activity. Elon Musk defends Grok, stating it only creates images from user prompts and blocks illegal requests, calling Nessel's view censorship. Other attorneys general are also urging xAI to remove the feature.

Elon Musk Says X is Building Customizable AI Feed

Elon Musk announced that X, formerly Twitter, is developing a new feature for its content feed. This "user-promptable" algorithm will let users customize what they see using simple language commands, like "No politics today, just the best AI innovations." This gives users more control over their feed, unlike current social media algorithms that decide content for them. The feature builds on recent upgrades to X's algorithm, which already boosted user engagement. While no timeline is set, this innovation could allow users to filter topics and tailor their browsing experience.
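To make the idea concrete, here is a purely hypothetical sketch of what "user-promptable" feed filtering could look like. This is not X's actual system: a real implementation would use a language model to interpret the prompt, while this toy version stands in for that step with a hand-written keyword map (the topic names and sample posts are invented for illustration).

```python
# Hypothetical sketch of a "user-promptable" feed filter.
# An LLM would normally interpret the user's instruction; a simple
# keyword map stands in for that step here.

TOPIC_KEYWORDS = {
    "politics": {"election", "senate", "congress", "campaign"},
    "sports": {"game", "playoffs", "league", "score"},
}

def parse_prompt(prompt: str) -> set[str]:
    """Return the topics the user asked to exclude, e.g. 'No politics today'."""
    prompt = prompt.lower()
    return {topic for topic in TOPIC_KEYWORDS if f"no {topic}" in prompt}

def filter_feed(posts: list[str], prompt: str) -> list[str]:
    """Drop posts whose words match any keyword of an excluded topic."""
    excluded = parse_prompt(prompt)
    banned: set[str] = set()
    for topic in excluded:
        banned |= TOPIC_KEYWORDS[topic]
    return [p for p in posts if not banned & set(p.lower().split())]

posts = [
    "Senate passes new campaign bill",
    "New LLM beats benchmark records",
]
print(filter_feed(posts, "No politics today, just the best AI innovations"))
```

In this sketch the filtering is a crude bag-of-words match; the interesting engineering problem X describes is the prompt-interpretation step, which decides what "politics" means for a given post.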

Oklo Offers Nuclear Power Solution for AI Energy Needs

The rapid growth of artificial intelligence requires a huge amount of power, with data center electricity use expected to double by 2030. Oklo, an advanced nuclear company, aims to meet this demand with its small Aurora nuclear reactors. These reactors can supply up to 75 megawatts of continuous power to large data centers, running on specialized fuel for a decade of operation. Oklo plans to own and operate the plants itself, selling electricity like a utility company. However, Oklo still needs regulatory approval from the Nuclear Regulatory Commission and does not expect significant revenue until after 2027.

Little Rock Businesses Quickly Adopt AI for Growth

Businesses in Little Rock are quickly adopting artificial intelligence technology. Owners see early adoption as key to staying competitive, using AI to learn new skills, work more efficiently, and cut costs. The trend shows how companies of all sizes are recognizing the benefits of integrating AI into their operations.

AI Leaders Disagree on Future Role of Software Engineers

Andrej Karpathy, a cofounder of OpenAI and former Tesla AI director, disagrees with Nvidia CEO Jensen Huang about the future of software engineers. Jensen Huang suggested that engineers should stop coding and instead focus on designing AI systems. However, Karpathy believes that coding remains a vital skill for AI engineers. He argues that understanding the code is necessary for fixing problems, making systems better, and creating new AI innovations. This shows a difference in opinion among top AI experts regarding engineers' roles.

Alibaba Connects Taobao Shopping to Its Main AI App

Alibaba is making a big move to connect its popular Taobao shopping service with its main Qwen AI app. This step aims to help Alibaba compete with ByteDance's Doubao and make money from its artificial intelligence investments. The company has invested $53 billion in AI and is linking key services like Taobao, Alipay, and Fliggy. Alibaba hopes this integration will show that its "super apps" can successfully profit from AI technology.

Vercel Launches Agent Skills for AI Coding Agents

Vercel has released "Agent Skills," a new package manager for AI coding agents. This tool provides reusable skills based on best practices for React and Next.js development, web design, and Vercel deployments. Agent Skills is an open format that includes natural language instructions and helper scripts for AI agents. Key skills cover over 40 rules for React and Next.js performance, more than 100 rules for web design quality, and tools for deploying projects to Vercel. Developers can easily install these skills to help AI agents review and improve code.

Deepfakes Create New Fraud Risks in Online Meetings

Advanced artificial intelligence is reshaping identity risks in online meetings, driving a rise in deepfake and impersonation fraud. Attackers now use voice cloning and video synthesis to create fake identities that appear real on unified communications platforms. Because meetings are treated as highly trustworthy, deepfakes can more easily trick participants. For example, in 2024, an employee at the engineering firm Arup was deceived by deepfake senior leaders in a video meeting, resulting in $25 million being stolen. This highlights a serious gap in identity verification for online communication tools.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI Regulation, AI Policy, State Law, Federal Law, Deepfakes, Deepfake Detection, Deepfake Pornography, AI Chatbots, Child Protection, Mental Health, Tech Industry, Content Moderation, Censorship, Legal Action, AI Ethics, Elon Musk, xAI, Grok, ChatGPT, Google Gemini, Character.ai, X (formerly Twitter), AI Algorithms, Customizable Feed, Social Media, Content Personalization, AI Energy Consumption, Data Centers, Nuclear Power, Oklo, AI Adoption, Business Growth, Efficiency, Cost Savings, Software Engineering, AI Engineers, Future of Work, Coding, AI System Design, OpenAI, Nvidia, Alibaba, AI Integration, E-commerce, AI Investment, AI Coding Agents, Vercel, Web Development, React, Next.js, Code Review, AI Tools, Fraud, Online Meetings, Identity Verification, Cybersecurity, Voice Cloning, Video Synthesis, Impersonation, Unified Communications, Regulatory Approval, Infrastructure, Liability, Small Businesses, Super Apps, Tesla AI, ByteDance, Taobao, Qwen AI, Agent Skills, Package Manager
