OpenAI founders reject Elon Musk's 2018 AI unit proposal

Elon Musk and the founders of OpenAI are currently engaged in a high-profile legal battle in a California courtroom. Musk is suing OpenAI, alleging the company stole his ideas to build its own technology, while OpenAI counters that his claims are baseless and aimed at sabotage. Both sides have traded accusations over the future of artificial intelligence as the trial proceeds.

The legal dispute dates back to 2018, when Musk attempted to hire OpenAI founders Sam Altman, Greg Brockman, and Ilya Sutskever to lead a new AI unit inside Tesla. Musk proposed making Altman a board member or turning OpenAI into a Tesla subsidiary, but the founders rejected the offer due to concerns about the risks of working with Tesla.

While the court case focuses on broken promises, witnesses have highlighted broader dangers of AI. Expert witness Stuart Russell warned that a single company controlling advanced AI could be dangerous for humanity, citing risks like job loss and misinformation. Despite the judge warning lawyers not to focus on safety, these fears remain a central theme in the proceedings.

Beyond the Musk-OpenAI conflict, the wider AI sector continues to see diverse developments. Salsify launched SalsifyIQ in 2026 to centralize product data for AI-driven shopping, expanding its assistant Angie to handle natural-language tasks. Meanwhile, Creative Fabrica uses Google Cloud to power its Studio AI platform, which now gains over 250,000 new customers monthly by generating images, videos, and 3D models.

Other advancements include a robotics startup developing AI brains that teach humanoid machines new physical skills in days, significantly faster than traditional training methods. In the healthcare sector, about 20% of U.S. adults use AI chatbots for emotional support, though experts caution that these tools lack real empathy. Intel is also prioritizing hardware security for AI factories, using confidential computing to protect data against future quantum threats.

Business leaders are recognizing that involving workers early is crucial for realizing returns on AI investments, as many companies are still in pilot stages. Additionally, scientists have resolved a mystery in which ChatGPT and GPT-5 frequently mentioned gremlins, an issue caused by an AI persona feature that has since been fixed. Finally, legal experts clarify that humans using generative AI bear responsibility when the technology is misused for crimes like fraud or identity theft.

Key Takeaways

- Elon Musk is suing OpenAI in a California courtroom, claiming the company stole his ideas.
- OpenAI founders Sam Altman, Greg Brockman, and Ilya Sutskever rejected Musk's 2018 offer to join Tesla.
- Expert witness Stuart Russell testified that a single company controlling advanced AI poses risks to humanity.
- Salsify launched SalsifyIQ in 2026 to centralize product data for AI-driven shopping experiences.
- Creative Fabrica uses Google Cloud to power Studio AI, gaining over 250,000 new customers monthly.
- A robotics startup developed an AI brain that teaches humanoid robots new skills in just days.
- Approximately 20% of U.S. adults use AI chatbots for mental health support, despite expert warnings.
- Intel is implementing hardware security measures to protect AI systems from quantum computer threats.
- Scientists identified an AI persona feature as the cause of gremlin references in ChatGPT and GPT-5.
- Legal experts state that humans using generative AI bear responsibility for crimes committed with the tool.

Musk Tried to Hire OpenAI Founders for Tesla AI Lab

Elon Musk tried to hire the founders of OpenAI to lead a new AI unit inside Tesla in 2018. He wanted Sam Altman, Greg Brockman, and Ilya Sutskever to join his car company. Musk proposed making Altman a board member or turning OpenAI into a Tesla subsidiary. The OpenAI founders rejected the offer because they worried about the risks of working with Tesla. These details were revealed during a recent court trial between Musk and OpenAI.

Musk and OpenAI Leaders Fight in California Courtroom

Elon Musk and the founders of OpenAI are fighting in a California courtroom over the future of artificial intelligence. Musk is suing OpenAI, claiming the company stole his ideas to build its own technology. OpenAI says Musk's claims are baseless and that he is trying to sabotage its work. The case has sparked a broad debate about who owns ideas and the role of billionaires in AI. Both sides have traded accusations in the media and on social media while the trial continues.

AI Safety Risks Loom Over Musk and OpenAI Trial

A trial in Oakland, California, is pitting Elon Musk against OpenAI CEO Sam Altman. While the court case centers on broken promises, witnesses have spoken about the dangers of artificial intelligence. Expert witness Stuart Russell warned that a single company controlling advanced AI could be dangerous for humanity. He listed risks like job loss, misinformation, and AI becoming smarter than humans. The judge warned lawyers not to focus on safety, but these fears are still present in the courtroom.

Salsify Launches New AI Tool for Product Management

Salsify, a company based in Boston, launched a new tool called SalsifyIQ at its Digital Shelf Summit in 2026. This new system helps businesses manage product data for AI-driven shopping experiences. It brings together different types of product knowledge into one central place for better accuracy. The launch also expanded the company's AI assistant, Angie, to handle tasks using natural language. Salsify now serves brands and retailers in more than 140 countries with these new features.

Creative Fabrica Uses Google Cloud for AI Content Tools

Creative Fabrica chose Google Cloud to help scale its AI-driven content creation platform, Studio AI. The company now gains over 250,000 new customers every month thanks to these tools. Google's technology lets users create images, videos, audio, and 3D models easily, without technical barriers. The partnership allows creators to focus on their art while AI handles the complex design work. Creative Fabrica also uses this system to ensure original artists get credited and paid for their work.

Robot Startup Builds AI Brains for Humanoid Machines

A robotics startup has built an AI brain that teaches humanoid robots new physical skills in just days, much faster than the months such training usually takes. The technology is part of a race to put human-shaped robots to work in factories and warehouses. Reports highlight how quickly these AI systems can learn complex movements and tasks. This advancement could speed up the adoption of robots in industrial settings.

Therapists Discuss Patients Trusting AI with Their Feelings

Mental health clinicians are now asking patients how they use AI chatbots for emotional support. A recent study found that about 20% of adults in the United States use these tools for mental health. Some doctors, like Dr. Christine Crawford, use AI to help process difficult emotions from their work. However, experts warn that AI has no real empathy and does not understand human feelings. Researchers are building platforms to educate people about the strengths and weaknesses of these chatbots.

Companies Must Prioritize Workers to Get AI Returns

Business leaders must consider their workers to get returns on their AI investments. Many companies are still in the pilot stage and have not seen productivity gains yet. Workers often feel left out of the process or fear losing their jobs to automation. To succeed, leaders need to involve employees early and communicate clearly about job changes. Creating frameworks for shared productivity gains can help build trust and speed up AI adoption.

Intel Focuses on Hardware Security for AI Factories

Intel is putting hardware trust at the center of security for AI systems running inside companies. As more businesses move AI operations to their own facilities, securing the hardware becomes critical. Intel uses technologies like confidential computing to create safe environments for sensitive data. Their strategy includes protecting data against future threats from quantum computers. This approach ensures that the foundation of the AI system is secure from the start.

AI Can Help Businesses Manage Energy and Climate Goals

Artificial intelligence offers a way to manage energy use and meet climate goals more effectively. Companies face complex challenges from extreme weather and rising energy costs. AI can help make sense of large amounts of data to find better solutions. It provides precise and transparent methods for tracking energy use and reducing waste. Experts say businesses must use AI ethically to solve the problems it helped create.

Lawyer Explains Who Is Responsible When AI Commits Crimes

Attorney Greg Isaacs explains who is responsible when artificial intelligence is used to commit crimes. Generative AI can make mistakes or be used to create fake images and emails. Bad actors can use these tools for fraud or identity theft. People who misuse AI can be charged with wire fraud or other computer crimes. The lawyer emphasizes that the humans using the technology bear the legal responsibility.

Scientists Solve Mystery of ChatGPT Gremlin Obsession

Scientists have solved the mystery of why ChatGPT and GPT-5 often mentioned gremlins and goblins. The issue was caused by an AI persona feature that users can activate for fun. One of these personas inadvertently made the AI talk about mythical creatures in unrelated answers. Researchers found that the AI would add goblins to advice about fixing cars or playing sports. The problem has now been fixed so users will not see these strange references anymore.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Tags: AI, Artificial Intelligence, Tesla, OpenAI, Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, AI Safety, Job Loss, Misinformation, Salsify, Product Management, AI-Driven Shopping Experiences, Google Cloud, AI Content Tools, Creative Fabrica, Studio AI, Robot Startup, Humanoid Robots, AI Brains, Therapists, AI Chatbots, Mental Health, AI Returns, Business Leaders, Workers, AI Adoption, Intel, Hardware Security, AI Factories, Confidential Computing, Quantum Computers, AI Energy Management, Climate Goals, Lawyer, AI Crimes, Generative AI, ChatGPT, GPT-5, Gremlins, Goblins, AI Persona, Scientists
