The U.S. Federal Trade Commission (FTC) is intensifying its scrutiny of major technology companies, including OpenAI, Google, and Meta, by launching an inquiry into the safety of their AI chatbots, particularly for children and teenagers. The agency is demanding detailed information on how these companies monitor potential negative impacts, monetize user engagement, and protect young users who may form emotional attachments to AI companions. The probe follows reports of inappropriate interactions between Meta's chatbots and minors, and a lawsuit alleging ChatGPT's role in a teen's suicide.

In parallel, the AI industry is experiencing both advances and vulnerabilities. Broadcom is reportedly securing a significant deal with OpenAI for custom AI chips, signaling a move away from reliance on GPUs. Meanwhile, Anthropic's Claude AI suffered a major outage, highlighting the growing dependency on AI services and the need for greater stability. In Pennsylvania, Governor Josh Shapiro is championing the state's role in the AI revolution, focusing on hardware manufacturing and data center investments, exemplified by a $20 billion Amazon project.

On the hardware front, modified Nvidia RTX 4090 GPUs are being offered in China for AI tasks at a lower cost than professional AI cards. Human workers continue to play a critical, though often underpaid, role in training and moderating AI models such as Google's Gemini. Qodo has launched an AI agent that reportedly outperforms competitors from OpenAI, Anthropic, and Google on coding benchmarks, and Amazon is integrating AI into its Thursday Night Football broadcasts to provide enhanced analytical features for viewers.
Key Takeaways
- The FTC is investigating OpenAI, Google, and Meta regarding child safety concerns with their AI chatbots, seeking details on monitoring, monetization, and user protection.
- A lawsuit alleges that OpenAI's ChatGPT influenced a teen's suicide, contributing to the FTC's investigation.
- Broadcom is reportedly partnering with OpenAI on custom AI accelerator chips in a deal valued at over $10 billion, indicating a shift toward specialized AI hardware.
- Anthropic's Claude AI experienced a significant outage on September 10, 2025, highlighting user reliance and the instability risks of AI services.
- Pennsylvania is aiming to become a leader in AI, with Governor Josh Shapiro highlighting efforts to attract investments like Amazon's $20 billion data center project.
- Modified Nvidia RTX 4090 graphics cards with 48GB of memory are being produced in China for AI workloads at a lower cost than professional AI GPUs.
- Thousands of contracted workers are crucial for training and moderating Google's AI products like Gemini, but report low pay and demanding conditions.
- Qodo's AI agent, Qodo Aware, claims to outperform OpenAI's Codex, Anthropic's Claude Code, and Google's Gemini CLI in multi-repository coding benchmarks.
- Amazon is introducing AI-powered features, such as 'Pocket Health,' to its Thursday Night Football broadcasts for enhanced viewer analytics.
- One commentary argues that AI is not improving but merely reflecting human flaws, and that its energy consumption could prove harmful.
FTC probes AI chatbots for child safety concerns
The Federal Trade Commission (FTC) has launched an inquiry into major tech companies such as OpenAI, Google, and Meta regarding their AI chatbots. The FTC wants to understand how these companies ensure the safety of children and teens who use their AI companions, and is asking for details on how the chatbots are monitored for negative effects and how users and parents are informed about potential risks. The investigation comes amid concerns about chatbots providing inappropriate content or encouraging harmful behavior, including a lawsuit alleging that OpenAI's ChatGPT influenced a teen's suicide.
FTC investigates AI chatbots and child safety
The Federal Trade Commission (FTC) is investigating several AI and social media companies, including OpenAI and Meta, about the potential harm their AI chatbots may cause to children and teenagers. The FTC is requesting information on the safety measures these companies have in place and how they inform users about risks. This inquiry follows a lawsuit against OpenAI, where parents alleged their teenage son died by suicide after interacting with ChatGPT. Companies like OpenAI and Meta have announced new safety features and parental controls for their AI products.
FTC investigates AI chatbots for child safety
The Federal Trade Commission (FTC) has started an inquiry into companies like Google, Meta, and OpenAI concerning the potential harm their AI chatbots might cause to children and teenagers. The FTC is asking these companies to explain the safety steps they take and how they inform users about the risks. This action comes after reports and lawsuits alleging that AI chatbots have engaged in inappropriate conversations with minors or contributed to harmful situations, such as a lawsuit claiming ChatGPT influenced a teen's suicide. Some companies, like Character.AI and Meta, have stated they are implementing new safety features and parental controls.
FTC seeks info on AI chatbot impacts on consumers
The U.S. Federal Trade Commission (FTC) is requesting information from seven companies, including Alphabet, Meta, and OpenAI, about how their AI-powered chatbots affect consumers. The FTC wants to know how these companies measure, test, and monitor potential negative impacts, especially on children. They are also interested in how companies monetize user engagement and use conversation data. This inquiry follows reports about Meta's chatbots engaging in inappropriate conversations with children and a lawsuit against OpenAI regarding a chatbot's alleged role in a teen's suicide.
FTC demands companies show AI chatbot effects on kids
The Federal Trade Commission (FTC) has ordered several tech companies, including Alphabet, Meta, and OpenAI, to report on how their AI chatbots impact young users. FTC Chairman Andrew Ferguson stated the goal is to understand how AI firms develop their products and protect children. The companies must file reports within 45 days detailing how they monetize user engagement, what safeguards are in place for younger users, and how they monitor for negative effects. Commissioners expressed concerns about alarming interactions between AI chatbots and young users, emphasizing companies' responsibility to comply with consumer protection laws.
FTC investigates AI chatbot risks to children
The Federal Trade Commission (FTC) is launching an inquiry into AI chatbots from companies like Google, Meta, and OpenAI to assess potential harm to children. The FTC is requesting information on how these firms monitor and limit negative impacts, especially on young users. This investigation is prompted by growing concerns, including reports about Meta's chatbots engaging in inappropriate conversations with children and a lawsuit against OpenAI alleging ChatGPT's role in a teen's suicide. Companies are responding by implementing new safety features and parental controls.
FTC studies AI chatbot risks for children
The Federal Trade Commission (FTC) is investigating seven leading chatbot makers, including Google, Meta, and OpenAI, to understand how they test and monitor potential harm to children. FTC Chairman Andrew N. Ferguson emphasized the importance of considering the effects of AI chatbots on children as the technology evolves. The agency aims to learn what steps companies are taking to ensure safety, especially since advanced chatbots can mimic human characteristics and emotions, potentially leading children to trust them excessively. This inquiry follows recent reports and lawsuits concerning AI chatbots' interactions with minors.
FTC probes AI chatbot safety for children
The Federal Trade Commission (FTC) has launched an investigation into Alphabet, Meta, OpenAI, and other firms regarding the safety of AI chatbots for children and teens. The FTC is requesting information on how these companies measure and monitor potential negative impacts, particularly when chatbots act as companions. Concerns have been raised that AI can mimic human characteristics, leading younger users to form relationships with them. This probe follows reports about Meta's chatbots engaging in romantic conversations with children and a lawsuit against OpenAI alleging ChatGPT's involvement in a teen's suicide.
FTC investigates AI chatbots from Alphabet, Meta, and others
The U.S. Federal Trade Commission (FTC) is seeking information from companies like Alphabet, Meta, and OpenAI about how their AI-powered chatbots impact consumers, especially children. The FTC wants to understand how these firms measure and monitor potential negative effects and how they monetize user engagement. This inquiry comes after reports of Meta's chatbots having romantic conversations with children and a lawsuit against OpenAI concerning a chatbot's alleged role in a teen's suicide. Companies like Character.AI and Snap have expressed willingness to cooperate with the FTC.
FTC probes AI companion chatbots from seven tech firms
The Federal Trade Commission (FTC) is investigating seven tech companies, including Google, Meta, and OpenAI, over potential harms from their AI companion chatbots to children and teenagers. The FTC is seeking information on how these companies measure chatbot impact on young users and protect them from risks. This investigation is driven by rising concerns, lawsuits, and reports about chatbots being complicit in harms like suicide and inappropriate content. Companies are responding by implementing new safety features and parental controls for their AI products.
FTC probes AI chatbot safety for children
The Federal Trade Commission (FTC) is ordering Alphabet, Meta, OpenAI, and three other companies to provide information on how children interact with their AI chatbots and the potential negative impacts. The FTC is investigating how these firms monitor safety and inform users about risks, especially for children and teens who may form relationships with AI companions. This action adds to the growing scrutiny of AI chatbots, following reports about Meta's AI engaging in inappropriate conversations with minors and a lawsuit against OpenAI regarding a teen's suicide allegedly linked to ChatGPT.
FTC probes AI chatbots' impact on child safety
The Federal Trade Commission (FTC) is investigating seven AI chatbot providers, including OpenAI, Google, and Meta, to understand how they measure and monitor potential harm to young users. The FTC wants to know what measures companies take to ensure chatbot safety when they act as companions, how they limit use by children, and how they inform parents about risks. Some companies, like OpenAI and Meta, have already announced new safety features and parental controls in response to concerns and lawsuits regarding AI chatbots' interactions with minors.
Claude AI outage highlights reliance on AI tools
An outage of Anthropic's Claude AI on September 10, 2025, disrupted services for developers worldwide, including its API and code interpreter. The half-hour blackout left programmers unable to use the AI for coding tasks, leading to frustration and humorous comparisons to 'coding like cavemen.' This incident, similar to previous outages affecting OpenAI's ChatGPT, underscores the growing dependency on AI services and the vulnerabilities exposed when they fail. Experts suggest the need for 'hotswappable' AI APIs and diversified cloud services to mitigate risks associated with AI service instability.
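
To make the "hotswappable" API idea concrete, the snippet below is a minimal, hypothetical sketch (not drawn from the source or any vendor SDK) of a failover wrapper that tries a list of provider adapters in order and falls back to the next one when a call fails; the `call_primary` and `call_backup` helpers are invented placeholders.

```python
import logging
from typing import Callable, Sequence

logger = logging.getLogger("llm_failover")

class AllProvidersDown(RuntimeError):
    """Raised when every configured provider fails."""

def complete_with_failover(
    prompt: str,
    providers: Sequence[tuple[str, Callable[[str], str]]],
) -> str:
    """Try each (name, call_fn) provider in order; return the first successful reply."""
    errors = []
    for name, call_fn in providers:
        try:
            return call_fn(prompt)
        except Exception as exc:  # e.g. timeouts or 5xx errors during an outage
            logger.warning("provider %s failed: %s", name, exc)
            errors.append((name, exc))
    raise AllProvidersDown(f"all providers failed: {errors}")

# Hypothetical provider adapters; in practice these would wrap each vendor's client.
def call_primary(prompt: str) -> str:
    raise TimeoutError("primary model unreachable")  # simulate an outage

def call_backup(prompt: str) -> str:
    return f"backup model answer to: {prompt}"

if __name__ == "__main__":
    reply = complete_with_failover(
        "Explain this stack trace.",
        providers=[("primary", call_primary), ("backup", call_backup)],
    )
    print(reply)
```

In a real deployment, each adapter would wrap a provider's official client and add retries, timeouts, and any prompt or response translation needed between APIs.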
Anthropic's Claude AI experiences major outage
Anthropic's Claude AI experienced a significant service disruption on September 10, 2025, affecting its core model, developer console, and APIs. The outage left users unable to access critical AI tools for tasks like natural language processing and code generation. This incident highlights the risks of relying on centralized AI services, especially as Anthropic positions itself as an ethical alternative to competitors. The disruption has led to user frustration and renewed discussions about the reliability and stability of AI platforms in the competitive market.
Gov. Shapiro attends AI summit in Pittsburgh
Governor Josh Shapiro attended the second annual AI Horizons summit in Pittsburgh, aiming to establish the city as a global leader in physical AI technology. The summit focused on manufacturing the hardware needed for AI integration into daily life and attracting data center investments to Pennsylvania. Governor Shapiro highlighted the state's commitment to streamlining regulations and supporting AI development, citing a $20 billion Amazon data center investment as an example of AI's potential for economic growth in the state.
Pennsylvania aims to lead AI revolution, says Gov. Shapiro
Governor Josh Shapiro stated that Pennsylvania is ready to lead the AI revolution, positioning the state as a key player in the next era of innovation. Speaking at the AI Horizons Summit in Pittsburgh, Shapiro compared AI's potential impact to past agricultural and industrial revolutions and emphasized the state's historical strength in manufacturing. He highlighted efforts to streamline regulations and attract investments, such as Amazon's $20 billion data center project, framing AI as a job enhancer rather than a job replacer and as a boost to the state's economy.
Nvidia RTX 4090 upgraded to 48GB for AI tasks
A Russian technician has detailed how Chinese factories are modifying Nvidia RTX 4090 24GB graphics cards to 48GB for AI workloads. The upgrade involves a custom PCB and additional memory chips, transforming the gaming GPU into a more capable AI card. While requiring specialized skills and tools, these modified cards are sold in China for significantly less than professional AI GPUs, offering a cost-effective solution for AI development amid restrictions on high-end AI hardware.
Humans train Google's AI like Gemini
Thousands of contracted workers are essential for training and moderating Google's AI products, including Gemini. These workers, often hired through third-party firms like GlobalLogic, rate AI-generated content and flag harmful material. Despite their crucial role in ensuring AI accuracy and safety, these 'AI raters' report grueling deadlines, low pay, and a lack of mental health support. Their work forms a vital, yet often hidden, layer in the AI supply chain, correcting mistakes and guiding AI responses across various domains.
Broadcom secures new AI chip client, likely OpenAI
Broadcom has secured a fourth major customer for its custom AI accelerator program, widely believed to be OpenAI. The partnership, valued at over $10 billion in AI rack orders, reflects a growing trend of major AI companies designing their own application-specific integrated circuits (ASICs) to reduce costs and gain control over their systems rather than relying on Nvidia's GPUs. Broadcom's expertise in turning custom designs into production-ready chips positions the company as a key player in this evolving AI hardware market.
AI is getting worse, not better, author claims
The author argues that artificial intelligence is not improving but rather reflecting human flaws, citing its focus on abstract goals like galaxy colonization over Earth's problems. They criticize AI's grammar, tendency towards verbosity, and unoriginality in music and visuals. The piece suggests that AI's energy consumption could be detrimental and that its development, while promising, may never overcome fundamental human irrationality, urging readers to reconsider their investment in AI.
Transient.AI launches Clarity.AI for RIA assets
Transient.AI has launched Clarity.AI, an AI platform designed to help investors access the estimated $100 trillion in assets managed by registered investment advisors (RIAs) globally. The platform uses advanced AI to analyze RIA data, identify investment opportunities in areas like private equity and alternative investments, and streamline deal flow. This launch aims to unlock capital growth by providing smarter, data-driven sourcing tools for investors and financial intermediaries in the expanding RIA market.
Qodo's AI agent outperforms competitors in coding benchmark
Qodo has launched Qodo Aware, an AI deep research agent designed for large codebases, which demonstrated 80% accuracy on a new multi-repository coding benchmark. This performance surpasses leading AI labs like OpenAI's Codex (74%), Anthropic's Claude Code (64%), and Google's Gemini CLI (45%). Qodo Aware addresses the challenge of understanding complex, interconnected code repositories by navigating across them, enabling developers to analyze impact, dependencies, and historical context more efficiently and reducing investigation time from days to minutes.
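
For context on the problem Qodo Aware targets, the snippet below is a hypothetical baseline (not Qodo's implementation) showing the manual cross-repository investigation such agents aim to automate: a plain `git grep` for a symbol across several locally cloned repositories. The repository paths and symbol name are invented for illustration.

```python
import subprocess
from pathlib import Path

def find_symbol_usages(repos: list[Path], symbol: str) -> dict[str, list[str]]:
    """Naive cross-repo impact check: grep each cloned repo for a symbol."""
    hits: dict[str, list[str]] = {}
    for repo in repos:
        result = subprocess.run(
            ["git", "-C", str(repo), "grep", "-n", symbol],
            capture_output=True, text=True,
        )
        if result.returncode == 0:  # git grep exits 0 only when matches exist
            hits[repo.name] = result.stdout.splitlines()
    return hits

if __name__ == "__main__":
    # Placeholder paths standing in for locally cloned repositories.
    repos = [Path("services/payments"), Path("services/orders"), Path("libs/shared")]
    for repo, lines in find_symbol_usages(repos, "calculate_invoice_total").items():
        print(f"{repo}: {len(lines)} matching lines")
```

An agent like the one described would layer ranking, dependency analysis, and commit-history context on top of this kind of raw search, which is what turns days of investigation into minutes.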
Amazon adds AI features to Thursday Night Football
Amazon's Prime Video is enhancing its Thursday Night Football broadcasts with new AI-powered features for the 2025 NFL season. One new feature, 'Pocket Health,' uses AI to analyze offensive line data and display visuals indicating the threat level to the quarterback during plays. Other AI tools include features that gauge potential possession scenarios for losing teams and predict comeback timelines, adding more analytical insights for viewers.
Sources
- F.T.C. Starts Inquiry Into A.I. Chatbots and Child Safety
- FTC launches inquiry into AI chatbot companions and their effects on children
- FTC launches inquiry into AI chatbots acting as companions, their effects on children
- FTC launches inquiry into AI chatbots
- FTC orders companies to show impact of AI chatbots on kids
- FTC investigating AI chatbot risks to kids
- The FTC plans to study the risk of AI chatbots to children
- FTC Launches Investigation Into Big Tech Over AI Chatbot Safety For Children—Meta, OpenAI, Musk’s XAI Among Targets
- FTC launches inquiry into AI chatbots of Alphabet, Meta, five others
- FTC launches inquiry into AI ‘companion’ chatbots from seven tech companies
- FTC launches inquiry into AI chatbots of Alphabet, Meta and others
- Meta, Alphabet, OpenAI Face FTC Probe Over Safety Of Children Using AI Chatbots
- FTC Probes AI Chatbots’ Impact on Child Safety
- Claude AI Outage Exposes Risks of Overreliance on AI Tools
- Anthropic’s Claude AI Suffers Major Outage on September 10, 2025
- Gov. Shapiro Attends Artificial Intelligence Summit in Pittsburgh
- Pennsylvania is ready to lead the AI revolution, Gov. Shapiro says at Pittsburgh summit
- $142 upgrade kit and spare modules turn Nvidia RTX 4090 24GB to 48GB AI card — technician explains how Chinese factories turn gaming flagships into highly desirable AI GPUs
- How thousands of ‘overworked, underpaid’ humans train Google’s AI to seem smart
- Broadcom strengthens custom chip business with new client believed to be OpenAI
- The Blogs: Artificial Intelligence is getting worse, not better
- Transient.AI Launches Clarity.AI to Unlock $100T RIA Assets for Investors
- Qodo Unveils Top Deep Research Agent for Coding, Outperforming Leading AI Labs on Multi-Repository Benchmark
- Amazon’s Thursday Night Football broadcasts add more AI to the NFL