Meta Platforms announced on January 23, 2026, that it will temporarily stop teens from accessing its AI characters across all apps globally, with the change starting in the coming weeks. This decision comes amid growing concerns about AI's impact on young people and precedes a trial regarding app harms to children. Meta plans to release an updated version of AI characters for teens that will include parental controls, though teens can still use Meta's general AI assistant.
Meanwhile, the security landscape for AI faces significant challenges. A study published in October 2025 by researchers from OpenAI, Anthropic, and Google DeepMind revealed that all 12 tested AI defenses failed against adaptive attacks, with bypass rates ranging from 95% to 100%. This highlights that traditional security methods struggle with AI's semantic-level attacks, and AI deployment is outpacing security measures. Cybercriminals are already leveraging AI for new attack types.
Concerns about AI extend to other sectors, as a December 2025 Wolters Kluwer Health survey indicated that larger health systems, particularly those with over 25,000 employees, worry more about AI privacy and data breaches. CFOs, in particular, showed greater concern for privacy than providers. This aligns with a broader trend in 2026 where CFOs are demanding clear business cases for AI investments, moving away from experimentation. While global AI spending may hit $200 billion, only a small fraction of companies currently report financial benefits from their AI projects.
In response to these evolving challenges, new legislative and technological solutions are emerging. On January 22, 2026, House Representatives Madeleine Dean and Nathaniel Moran introduced the TRAIN Act, aiming to provide copyright owners with a subpoena process to determine if their work was used to train AI models. On the security front, several companies launched new tools this week. HackerOne introduced the Good Faith AI Research Safe Harbor, Obsidian Security released a SaaS supply chain solution, Sophos launched Sophos Workspace Protection, and Vectra AI updated its platform for preemptive security against AI-powered attacks.
Legit Security also introduced a new AI tool integrated into its ASPM platform, designed to enhance software supply chain safety. This tool uses a threat feed to identify and prioritize vulnerabilities based on real threats to applications, a crucial feature given that AI coding tools are generating many new vulnerabilities. Legit Security CTO Liav Caspi noted that the platform also uses AI to suggest fixes, identify AI models, and reduce false positives, aiming for future self-fixing applications while acknowledging the need to build trust in these AI-driven solutions.
Key Takeaways
- Meta will stop teens from accessing AI characters globally starting in the coming weeks, with plans for an updated version including parental controls.
- A study by OpenAI, Anthropic, and Google DeepMind in October 2025 found that all 12 tested AI defenses failed against adaptive attacks with 95-100% success rates.
- Larger health systems, especially CFOs, are increasingly concerned about AI privacy and data breaches, according to a December 2025 Wolters Kluwer Health survey.
- CFOs in 2026 are demanding clear business cases for AI investments, as global AI spending may reach $200 billion but few companies see financial benefits.
- The TRAIN Act, introduced on January 22, 2026, aims to provide copyright owners with a subpoena process to verify AI training data usage.
- New security tools include HackerOne's Good Faith AI Research Safe Harbor, Obsidian Security's SaaS supply chain solution, Sophos Workspace Protection, and Vectra AI's updated platform.
- Legit Security launched a new AI tool for its ASPM platform that uses a threat feed to prioritize software supply chain vulnerabilities and suggests AI-driven fixes.
- AI coding tools are contributing to a rise in new software vulnerabilities, making advanced security solutions critical.
- The rapid deployment of AI is outpacing the development of effective security measures, creating significant risks.
- The focus for AI investments is shifting from experimentation to achieving practical, measurable business returns.
Meta stops teens from using AI characters
Meta Platforms Inc. will temporarily stop teens from using its AI characters. This change starts in the coming weeks and applies to anyone Meta identifies as a minor. The company says this is until an updated experience is ready. This move comes as concerns grow about AI's effects on children and before a trial about app harms to children. Teens can still use Meta's general AI assistant.
Meta stops global teen access to AI characters
Meta Platforms announced on January 23, 2026, that it will stop teens from accessing AI characters across its apps globally. This change will begin in the coming weeks. Meta plans to release an updated version of AI characters for teens that will include parental controls. This decision follows increased scrutiny from U.S. regulators regarding the potential negative effects of AI chatbots on young people.
Large hospitals worry more about AI privacy
A new survey by Wolters Kluwer Health shows that larger health systems worry more about AI privacy and data breaches. The survey, done in December 2025, asked over 500 healthcare leaders about "shadow AI" and other AI risks. While patient safety was the top concern for everyone, 57% of leaders at systems with over 25,000 employees worried about data breaches. Administrators, especially CFOs, showed more concern for privacy than providers. Experts suggest better training and clear rules for AI use to make sure it is safe and responsible.
CFOs face new AI challenges in 2026
In 2026, companies will focus on making AI investments show clear business results and accountability. CFOs are key in this shift, as many earlier AI projects had mixed outcomes. While global AI spending may hit $200 billion, only a small number of companies currently see financial benefits from AI. Experts like Swami Chandrasekaran from KPMG note that deploying AI is complex, leading to a pause for better planning. CFOs are now demanding clear business cases for AI spending, showing a move from experimentation to practical, measurable returns.
New bill seeks AI training data transparency
On January 22, 2026, House Representatives Madeleine Dean and Nathaniel Moran introduced the TRAIN Act. This new bill aims to help copyright owners learn if their work was used to train AI models. It creates a new subpoena process where owners can ask AI developers for proof of training materials. If a developer does not comply, it could mean they copied the work. This act is a major step to make AI training data more open, unlike current state laws that offer only general information.
AI defenses fail against new attacks
Researchers from OpenAI, Anthropic, and Google DeepMind found that all 12 AI defenses they tested failed against adaptive attacks. These defenses, which claimed near-zero risk, were bypassed with 95% to 100% success rates. The study, published in October 2025, showed that traditional security methods cannot handle AI attacks because they assume static behavior. AI attacks work at a deeper, semantic level that current defenses cannot understand. This is concerning because AI deployment is happening faster than security measures, and cybercriminals are already using AI to create new types of attacks.
Top security companies launch new AI tools
This week, several security companies announced new products and initiatives. HackerOne introduced the Good Faith AI Research Safe Harbor, a framework to protect researchers who test AI systems. Obsidian Security launched a new solution to secure SaaS supply chains, offering full visibility and breach detection. Sophos released Sophos Workspace Protection, which includes a secure browser and other tools to protect hybrid work and AI use. Vectra AI also updated its main platform to offer preemptive security and defense against AI-powered cyber-attacks.
Legit Security AI tool boosts software supply chain safety
Legit Security launched a new AI tool that uses a threat feed to find risks in the software supply chain. This tool, added to their ASPM platform, helps DevSecOps teams prioritize which vulnerabilities to fix first based on real threats to their applications. Legit Security CTO Liav Caspi explained that this is crucial because AI coding tools are creating many new vulnerabilities. The platform also uses AI to suggest fixes, identify AI models, and reduce false positives. The goal is to move towards applications that can automatically fix themselves, but building trust in these AI-driven solutions remains a key challenge.
Sources
- Meta pauses teen access to AI characters
- Meta halts teens' access to AI characters globally
- Health system size impacts AI privacy and security concerns
- Top 5 AI adoption challenges facing CFOs in 2026
- New House Bill on AI Transparency Aims to Pull Back the Curtain on AI Training Data
- Researchers broke every AI defense they tested. Here are 7 questions to ask vendors.
- Endpoint Security and Network Monitoring News for the Week of January 23rd: Sophos, Obsidian Security, Vectra AI, and More
- Legit Security AI Tool Uses Threat Feed to Identify Risks to Software Supply Chain
Comments
Please log in to post a comment.