AI Chatbot Limits Minors to 2 Hours Daily

Character.AI is implementing new restrictions on users under 18, ending their ability to hold open-ended conversations with its AI chatbots by November 25. In the interim, minors face a daily two-hour chat limit, and the company is rolling out age verification tools to enforce the policy. Character.AI cites growing concerns about the impact of AI interactions on teen mental health as the primary driver for the decision. The company is also developing new features specifically for children and establishing an AI safety lab. While it emphasizes its commitment to teen safety, including rules against promoting self-harm, some critics argue these measures may not fully resolve privacy or psychological-impact concerns for young users.

Key Takeaways

  • Character.AI is ending open-ended chatbot conversations for users under 18, with an interim limit of two hours of daily chatbot interaction.
  • This new policy will be fully in effect by November 25.
  • The company is introducing age verification tools to identify underage users.
  • Character.AI cites concerns about the impact of AI on teen mental health as the reason for the restrictions.
  • The platform is developing new features tailored for children.
  • An AI safety lab is being established by Character.AI.
  • The company has implemented rules against promoting self-harm in AI interactions.
  • Critics argue the measures may not fully address privacy or psychological impacts on young users.

Character.AI limits teen chatbot access starting November 25

Character.AI, a platform for creating and interacting with AI characters, will stop allowing users under 18 to have open-ended chats with its bots. The change takes full effect by November 25, with a two-hour daily limit starting immediately. The company is also developing new features for kids and establishing an AI safety lab, and it is introducing age-verification tools to identify underage users. Critics acknowledge the move but say it does not fully address privacy concerns or the psychological impact on young users.

Character.AI bars minors from open-ended AI chatbot conversations amid safety concerns

Character.AI is blocking minors from extended conversations with its AI chatbots amid growing concerns about the technology's impact on teen mental health. The Menlo Park, California-based company stated the decision is necessary given the questions raised about teen interactions with AI. The move comes as tech companies face increased pressure from parents and politicians over chatbot safety. Character.AI is implementing age verification and has reiterated its commitment to teen safety, including rules against promoting self-harm.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI chatbots, Character.AI, teen safety, minors, AI ethics, mental health, age verification, privacy, AI regulation, chatbot safety
