Recent developments at OpenAI highlight growing concerns over user privacy as the company began testing ads in ChatGPT on February 11, 2026. This move prompted the resignations of former researcher Zoë Hitzig and another employee, I. M. A. Whistleblower, who both fear OpenAI is repeating mistakes made by Facebook regarding user data. They worry that prioritizing profit could compromise the personal information users share with the chatbot, suggesting alternatives like independent oversight.
Adding to the internal shifts, OpenAI also fired Ryan Beiermeister, its vice president of product policy. Sources indicate the termination was due to sexual discrimination and followed his opposition to developing an "adult mode" for OpenAI's AI products, which he argued risked misuse and ran counter to the company's core mission.
Meanwhile, the AI market continues to see new launches and significant investment. Monaco officially launched its AI-native sales platform for startups on February 11, 2026, having already secured over $35 million in funding led by Founders Fund. This platform aims to boost revenue through AI-powered lead qualification and automated customer outreach. On the same day, Jenacie AI introduced an automated trading platform, enabling global traders to design and manage strategies with built-in risk controls across various asset classes, connecting to brokers like Interactive Brokers and Coinbase.
The broader AI hardware market is also expanding rapidly, with the global Edge AI hardware market projected to reach $122.8 billion by 2035, up from $27.9 billion in 2025. North America is expected to lead this growth, and key players like NVIDIA Corporation and Intel Corporation are prominent in the processing hardware segment. However, AI's impact isn't universally positive for existing tech firms; a Breakingviews analysis suggests AI may slow growth for many software, data, and professional services companies, potentially leading to stagnation and cost-cutting.
Beyond market dynamics, AI is transforming various sectors. Paired with small unmanned aerial systems, it is set to reshape security, safety, and emergency response, with a 2035 vision in which humans set the goals and AI systems execute most tasks. Clinical psychologist Harvey Lieberman uses conversational AI as a sounding board to enhance his reflective work, improving patient care without letting the AI make direct decisions. In gaming, AI agents are creating more lifelike non-player characters and enabling new gameplay experiences, as seen with an agentic Darth Vader in Fortnite.
As AI integration deepens, the need for robust governance and risk management becomes critical. Company boards are urged to understand AI's unpredictable nature and its impact on accountability, moving beyond viewing it merely as an efficiency tool. Businesses must quantify AI security risks by evaluating their potential impact on financial performance and regulatory standing, shifting from reactive to proactive management. Despite job fears, 18-year-old AI startup founder Alex Seungyong Yang is pursuing computer science, arguing that understanding logic and frameworks, not just coding, is what enables adaptation to rapid change in the AI era.
Key Takeaways
- OpenAI faced resignations from Zoë Hitzig and another employee on February 11, 2026, due to privacy concerns over ChatGPT's new ad testing, fearing a repeat of Facebook's data issues.
- OpenAI's Vice President of product policy, Ryan Beiermeister, was fired after opposing an "adult mode" for AI products, with sources citing sexual discrimination.
- Monaco launched an AI-native sales platform for startups on February 11, 2026, securing over $35 million in funding led by Founders Fund.
- Jenacie AI also launched an automated trading platform on February 11, 2026, allowing users to manage strategies for various asset classes and connecting to brokers like Interactive Brokers and Coinbase.
- The global Edge AI hardware market is projected to grow from $27.9 billion in 2025 to $122.8 billion by 2035, with key players including NVIDIA Corporation and Intel Corporation.
- AI is expected to slow long-term growth for many existing software, data, and professional services companies, with a Breakingviews study of 76 stocks showing a median implied long-term growth rate of just 0.9 percent.
- AI and small unmanned aerial systems (sUAS) are set to transform security, safety, and emergency response, with a 2035 vision in which humans set goals, AI executes most tasks, and humans remain responsible for AI actions.
- Clinical psychologist Harvey Lieberman utilizes conversational AI as a sounding board to improve reflective clinical work and broaden perspectives, not for direct patient decisions.
- AI agents are revolutionizing video games by creating more lifelike non-player characters and enabling complex gameplay, exemplified by an agentic Darth Vader in Fortnite.
- Company boards must proactively understand and quantify AI security risks, moving beyond traditional cybersecurity approaches to address impacts on financial performance and regulatory standing.
Boards must understand AI risks and responsibilities
AI is changing business risk faster than many company boards can keep up with. Boards often assume AI is just an efficiency tool or that existing cybersecurity rules already cover it. In reality, AI adds complexity and shapes important business decisions. Boards need to understand AI's unpredictable nature and its impact on governance and accountability. Regulators are also starting to treat AI as a major risk, so boards must act proactively to avoid legal exposure.
AI and drones will change security and emergency response
AI and small unmanned aerial systems, or sUAS, are set to change security, safety, emergency response, and military operations. The article introduces a "Disciplined Maturity Framework" with stages such as Understand, Investigate, Decide, Normalize, Continuously Refine, and Mature. The framework is meant to manage the risk of technology outpacing the rules that govern it. By 2035, the goal is for humans to set the overall goals while AI systems carry out most tasks, an approach that keeps humans responsible for AI actions across all fields.
Businesses must quantify AI security risks
AI has become a core part of business operations, and that integration increases security risk for companies. Many traditional ways of prioritizing security risks do not work for AI because these threats change too quickly. Organizations instead need to evaluate AI risks by their potential impact on business goals such as financial performance and regulatory standing. AI security goes beyond protecting models; it also covers data handling, system availability, and decision integrity. Quantification helps companies move from reacting to AI security issues to proactively managing them.
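As a rough illustration of what that shift to quantification can look like, the sketch below ranks a few hypothetical AI security scenarios by expected annual loss (likelihood times financial impact). The scenarios, probabilities, and dollar figures are illustrative assumptions, not numbers from the source article.

```python
# Minimal sketch: ranking hypothetical AI security risks by expected annual loss.
# All scenarios, probabilities, and impact figures below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    annual_likelihood: float   # estimated probability of occurrence in a year (0-1)
    financial_impact: float    # estimated loss per occurrence, in USD
    regulatory_exposure: bool  # whether the scenario also carries regulatory consequences

    @property
    def expected_annual_loss(self) -> float:
        return self.annual_likelihood * self.financial_impact

risks = [
    AIRisk("Training-data leakage via model outputs", 0.15, 4_000_000, True),
    AIRisk("Prompt injection against a customer-facing agent", 0.40, 750_000, False),
    AIRisk("Model outage degrading decision pipelines", 0.25, 1_200_000, False),
]

# Rank risks so remediation budget follows business impact, not just technical severity.
for r in sorted(risks, key=lambda r: r.expected_annual_loss, reverse=True):
    flag = " (regulatory exposure)" if r.regulatory_exposure else ""
    print(f"{r.name}: expected annual loss ${r.expected_annual_loss:,.0f}{flag}")
```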
OpenAI researcher quits over ChatGPT ads privacy fears
Former OpenAI researcher Zoë Hitzig resigned on February 11, 2026, the same day OpenAI started testing ads in ChatGPT. Hitzig fears this move could lead OpenAI down a path similar to Facebook's, where user privacy was compromised. She notes that ChatGPT users share very personal information, trusting the chatbot has no hidden agenda. Hitzig believes that while initial ads might follow safety rules, future versions could prioritize profit over user data protection. She suggests alternatives like independent oversight boards or data trusts to give users more control over their information.
Former OpenAI employee quits over ChatGPT ads
A former OpenAI employee, I. M. A. Whistleblower, resigned because OpenAI started testing ads on ChatGPT this week. The employee believes OpenAI is repeating the mistakes Facebook made with user data and privacy. Many ChatGPT users have shared very personal information, trusting the AI had no hidden agenda. The author voices serious concerns about OpenAI's strategy, fearing the company might compromise its safety principles for profit, and argues there are better ways to fund AI tools than exploiting users' deepest fears and desires.
Jenacie AI launches automated trading platform
Jenacie AI launched a new automated trading platform on February 11, 2026, for global traders. This platform allows users to design, test, and manage their own automated trading strategies in one system. It focuses on consistent execution and built-in risk controls across different market conditions. Calvin Fu, CEO of Jenacie AI, stated the platform helps replace manual trading with systems that ensure consistency. It supports various asset classes like futures and equities and connects to major brokers such as Interactive Brokers and Coinbase. Jenacie AI operates on a software licensing model and does not manage client funds.
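The announcement does not describe Jenacie AI's internal interfaces, so the following is only a generic sketch of what a "built-in risk control" can look like in an automated strategy: a hypothetical position-sizing function that caps the loss from any single trade at a fixed fraction of account equity.

```python
# Hypothetical sketch of a built-in risk control: fixed-fractional position sizing.
# Generic illustration only -- not Jenacie AI's actual API or logic.

def position_size(account_equity: float,
                  risk_fraction: float,
                  entry_price: float,
                  stop_price: float) -> int:
    """Return the number of units to trade so that hitting the stop
    loses at most `risk_fraction` of account equity."""
    if not 0 < risk_fraction < 1:
        raise ValueError("risk_fraction must be between 0 and 1")
    risk_per_unit = abs(entry_price - stop_price)
    if risk_per_unit == 0:
        raise ValueError("entry and stop prices must differ")
    max_loss = account_equity * risk_fraction
    return int(max_loss // risk_per_unit)

# Example: risk 1% of a $100,000 account on a trade entered at 50.00 with a stop at 48.00.
print(position_size(100_000, 0.01, 50.00, 48.00))  # -> 500 units
```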
AI may slow growth for many tech companies
Artificial intelligence is expected to hurt existing software, data, and professional services companies without destroying them outright. A Breakingviews analysis suggests that public market investors foresee a future of slow growth or even stagnation for these firms. ServiceNow's implied long-term growth rate from 2030, for example, is estimated at only 0.9 percent, much lower than in the past. Across the 76 stocks studied, the median implied long-term growth rate is also 0.9 percent, indicating AI might "zombify" companies rather than quickly kill them, forcing them to cut investment and costs to adapt.
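Breakingviews does not spell out its exact model here, but the idea of an "implied long-term growth rate" can be sketched with a reverse Gordon growth calculation; the inputs below are assumed purely for illustration.

```python
# Generic sketch of backing out an implied perpetual growth rate from a valuation,
# using the Gordon growth model: value = cash_flow * (1 + g) / (r - g), solved for g.
# The inputs below are illustrative assumptions, not Breakingviews' figures.

def implied_growth(value: float, cash_flow: float, discount_rate: float) -> float:
    """Solve value = cash_flow * (1 + g) / (discount_rate - g) for g."""
    return (value * discount_rate - cash_flow) / (value + cash_flow)

# Hypothetical example: a business valued at $100bn generating $8bn of free cash flow,
# discounted at 9%, implies long-term growth of roughly 0.9%.
g = implied_growth(value=100.0, cash_flow=8.0, discount_rate=0.09)
print(f"implied long-term growth: {g:.1%}")
```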
AI helps psychologist improve patient care
Clinical psychologist Harvey Lieberman uses a conversational AI tool to improve his reflective clinical work. He uses AI as a sounding board when reviewing case materials and in consultations, but not for making direct patient decisions or during therapy sessions. Lieberman notes that much of a clinician's difficult thinking happens alone, which can narrow their perspectives. The AI helps him consider a wider range of explanations for complex situations. This approach allows him to test his own thinking and become a better psychologist.
Monaco launches AI sales platform for startups
Monaco officially launched its AI-native sales platform on February 11, 2026, designed to boost revenue growth for early-stage startups. The company operated quietly while developing its platform and has already secured over $35 million in funding, led by Founders Fund. Monaco's platform uses advanced AI to automate and improve key sales processes. Its features include AI-powered lead qualification, automated customer outreach, and predictive analytics to forecast sales performance. This launch aims to give startups the tools they need to succeed in today's market.
Edge AI hardware market to reach $122.8 billion by 2035
The global Edge AI hardware market is growing rapidly, driven by AI, IoT expansion, and the need for real-time data processing. In 2025, the market was valued at $27.9 billion and is expected to reach $122.8 billion by 2035, growing at a 17.9 percent annual rate. North America will lead this market, holding 45.6 percent of the revenue share by 2035, with Europe and Asia Pacific also showing strong growth. Processing hardware, including AI accelerators and GPUs, makes up the largest part of the market. Key companies in this sector include NVIDIA Corporation, Intel Corporation, and Qualcomm Technologies.
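For readers checking the arithmetic, the compound annual growth rate is the constant yearly rate linking the two market sizes. The snippet below applies the standard formula; the quoted 17.9 percent lines up with a nine-year forecast window rather than the full 2025-2035 span, which is an assumption about the report's convention.

```python
# Compound annual growth rate (CAGR) linking two market-size estimates:
# cagr = (end / start) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

start, end = 27.9, 122.8  # USD billions: 2025 valuation and 2035 projection

# Over the full 2025-2035 span (10 compounding years) the implied rate is about 16%;
# the report's 17.9% matches a 9-year forecast window (e.g. 2026-2035), which is a
# common market-research convention (an assumption here, not stated in the report).
print(f"10-year window: {cagr(start, end, 10):.1%}")
print(f" 9-year window: {cagr(start, end, 9):.1%}")
```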
Student pursues computer science despite AI job fears
Alex Seungyong Yang, an 18-year-old incoming college freshman and founder of an AI startup, plans to study computer science despite concerns about AI replacing jobs. He believes a computer science degree will help him understand rapid changes in the tech industry and stay at the forefront of AI development. Yang emphasizes that learning the logic and frameworks of computer science, rather than just coding, is crucial for problem-solving in the AI era. He aims to develop depth, judgment, and adaptability to remain valuable in the job market. Yang feels motivated by the uncertainty and believes these skills are transferable and essential for adapting to future changes.
OpenAI executive fired after opposing adult mode
Ryan Beiermeister, who served as Vice President and led OpenAI's product policy team, was fired from the company. Sources familiar with the situation state his termination was due to sexual discrimination. Beiermeister was known for opposing the development of an "adult mode" for OpenAI's AI products, citing worries about potential misuse and the company's core mission. This firing adds to recent controversies and leadership changes at OpenAI. The company has not yet made any public statements about the reasons for his departure.
AI agents will transform video games
AI agents are poised to revolutionize video games by creating more lifelike non-player characters, or NPCs, and enabling new types of gameplay. These agents can plan and carry out complex tasks, making game characters behave more realistically than simple scripts. For example, Fortnite now features an agentic Darth Vader that can talk to players and choose to help or fight them. While AI agents can speed up game development and empower smaller studios, they also bring risks like potential manipulation of players or job concerns for human creators. This technology promises to create richer, more unpredictable game worlds.
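To make the idea of an "agentic" NPC concrete, here is a minimal, purely illustrative decision loop in which a character observes the game state, scores a few candidate behaviors, and acts. It is a generic sketch, not how Fortnite's Darth Vader is actually implemented.

```python
# Minimal sketch of an agentic NPC decision loop: observe, score candidate goals, act.
# Purely illustrative -- not the implementation used in Fortnite.

import random
from dataclasses import dataclass

@dataclass
class WorldState:
    player_is_hostile: bool
    player_health: int
    npc_health: int

def choose_action(state: WorldState) -> str:
    """Score a few candidate behaviors against the current state and pick the best."""
    scores = {
        "fight": 2.0 if state.player_is_hostile else 0.5,
        "help":  1.5 if (not state.player_is_hostile and state.player_health < 50) else 0.3,
        "talk":  1.0,
    }
    # A small random jitter keeps the behavior from feeling perfectly scripted.
    return max(scores, key=lambda a: scores[a] + random.uniform(0, 0.2))

def npc_loop(state: WorldState, ticks: int = 3) -> None:
    for _ in range(ticks):
        print(f"NPC decides to: {choose_action(state)}")
        # In a real game the chosen action would update the world; here we just
        # simulate the player's hostility occasionally flipping between ticks.
        state.player_is_hostile = random.random() < 0.3

npc_loop(WorldState(player_is_hostile=False, player_health=40, npc_health=100))
```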
Sources
- From innovation to oversight: Why AI demands board attention
- Disciplined Autonomy: How AI and sUAS Will Redefine Security, Safety, Emergency Response, and Military Operations
- Prioritizing AI Security Risks With Quantification
- OpenAI researcher quits over ChatGPT ads, warns of "Facebook" path
- Opinion | I Left My Job at OpenAI. Putting Ads on ChatGPT Was the Last Straw.
- Jenacie AI Launches an Automated Trading Platform for Global Traders
- AI pain trade prices in perpetual purgatory
- How AI is making me a better clinical psychologist
- Monaco Launches AI-Native Sales Platform to Accelerate Revenue Growth for Startups
- Edge AI Hardware Market size to cross $122.8 Billion by 2035 | NVIDIA Corporation, Intel Corporation, Qualcomm
- Why I'm studying computer science despite AI fears
- Exclusive | OpenAI Executive Who Opposed ‘Adult Mode’ Fired for Sexual Discrimination
- AI Agents Are About To Change Gaming Forever