Mind charity probes AI mental health advice as US-China AI competition intensifies

Mind, a mental health charity, has launched a year-long investigation into artificial intelligence and mental health, prompted by a Guardian investigation. This inquiry follows revelations that Google's AI Overviews provided dangerous and incorrect medical advice. Rosie Weatherley, an expert from Mind, specifically criticized Google's AI Overviews for offering "very dangerous" mental health advice, noting they flatten complex issues and gave inaccurate information, such as suggesting starvation is healthy. Mind aims to establish strong safeguards and regulation to ensure AI's responsible use in mental health.

Globally, the competition in AI is intensifying, particularly between the U.S. and China. The Trump administration is establishing a new Tech Corps, planning to send up to 5,000 U.S. science and math graduates abroad over five years. This initiative aims to encourage partner nations to adopt American AI technology and reduce reliance on Chinese products. An analyst predicts that China's rapid AI advancements, focusing on efficient model development and open-source options, could lead to most of the world's population using a Chinese tech stack within five to ten years, challenging U.S. dominance despite export controls on advanced chips.

In the commercial sector, Sarvam AI, an Indian startup from IIT Madras, is preparing to launch a new application similar to ChatGPT, aiming to differentiate itself from major players like OpenAI, Google, Meta, and Anthropic. This move follows the introduction of three foundational AI models by Sarvam AI. Meanwhile, Bell Cyber and Radware are enhancing their AI-driven security services, offering a managed solution to protect web applications, APIs, and infrastructure from automated attacks. The growing demands of AI data centers, which require immense energy and water, are also sparking futuristic proposals, including the idea of moving these centers into orbit to leverage constant solar energy and natural cooling.

AI's practical applications are expanding into various fields. A two-part AI training series has equipped educators with tools for lesson planning, classroom management, and personalized learning, emphasizing responsible use. In security, AI-based screening is becoming standard, as seen at the Minnesota State Capitol, where AI analyzes object composition to enhance human judgment rather than replace it. The broader societal impact of AI, particularly its potential to affect white-collar jobs, remains a significant topic of discussion, prompting calls to refocus AI investments in light of these potential changes.

Key Takeaways

  • Mind charity is investigating AI's impact on mental health, prompted by Google's AI Overviews providing dangerous and incorrect advice.
  • Google's AI Overviews offered inaccurate mental health advice, such as suggesting starvation is healthy, drawing criticism from Mind experts.
  • The U.S. is launching a Tech Corps to send up to 5,000 science and math graduates abroad over five years to promote American AI technology and counter China's influence.
  • An analyst predicts China's AI advancements could lead to most of the world using a Chinese tech stack within five to ten years, challenging U.S. dominance.
  • Indian startup Sarvam AI plans to launch a ChatGPT-like application, aiming to differentiate its approach from OpenAI, Google, Meta, and Anthropic.
  • AI training programs are equipping educators with tools for responsible AI use in lesson planning, classroom management, and personalized learning.
  • Bell Cyber and Radware are enhancing AI-driven managed security services to protect web applications and infrastructure from automated attacks.
  • The high energy and water demands of AI data centers are leading to proposals for moving them into space to utilize constant solar power and natural cooling.
  • AI security screening, like that at the Minnesota State Capitol, enhances human judgment by analyzing object composition, rather than replacing human oversight.
  • Discussions continue regarding AI's potential impact on white-collar jobs, suggesting a need to refocus AI investments.

Mind charity probes AI dangers in mental health advice

The mental health charity Mind is launching a year-long investigation into artificial intelligence and mental health. This comes after a Guardian investigation revealed that Google's AI Overviews provided dangerous and incorrect medical advice. The inquiry will involve experts, policymakers, and people with lived experience to create safer digital mental health resources. Mind aims to ensure AI's potential benefits for mental health are realized responsibly, with strong safeguards and regulation. They want to prevent innovation from harming well-being and prioritize the voices of those with mental health challenges.

Expert warns Google AI gives 'very dangerous' mental health advice

Rosie Weatherley, an expert from the mental health charity Mind, has stated that Google's AI Overviews provide "very dangerous" advice on mental health. She explained that these AI summaries flatten complex issues into simple answers, which can be harmful, especially to those in distress. During a test, Mind experts found AI Overviews gave inaccurate information, such as suggesting starvation is healthy or that mental health problems are solely due to chemical imbalances. Weatherley criticized Google's reactive approach to fixing errors, calling it insufficient for a company profiting from AI Overviews.

US launches Tech Corps to counter China in AI race

The Trump administration is creating a new initiative called the Tech Corps to compete with China in artificial intelligence. This program plans to send up to 5,000 U.S. science and math graduates abroad over five years. Their goal is to encourage partner nations to use American AI technology and reduce reliance on Chinese products. The Tech Corps aims to update the Peace Corps for the digital age and promote U.S. AI adoption globally. This initiative is part of a larger effort to prioritize U.S. leadership in AI technology.

China's AI surge challenges US dominance

An analyst predicts that China's AI advancements could lead to most of the world's population using a Chinese tech stack within five to ten years. China is rapidly developing AI, focusing on efficient model development and open-source options, which could give it an edge despite U.S. export controls on advanced chips. While the U.S. still holds advantages in areas like semiconductor technology and research, China's progress in cost-effective AI and increasing power capacity for data centers poses a significant challenge. This competition is shifting from model performance to value realization, potentially benefiting Chinese AI companies.

Indian startup Sarvam AI to launch ChatGPT-like app

Sarvam AI, an artificial intelligence startup from IIT Madras, is preparing to launch a new application similar to ChatGPT. This move is part of their strategy to generate revenue and monetize their services. The company recently introduced three foundational AI models. Sarvam AI aims to differentiate its approach from major tech companies like OpenAI, Google, Meta, and Anthropic. The upcoming app will initially have limited access as the company works on its commercialization efforts.

AI training equips educators with new teaching tools

A two-part AI training series, organized by the Merkos Chinuch Office and the Menachem Education Foundation, has provided educators with practical tools to enhance their teaching. Rabbi Mendel Blau discussed the responsible use of AI in education, emphasizing its potential to support teachers while upholding traditional values. Rabbi Shneur Zalman Munitz demonstrated AI tools for lesson planning and classroom management. Rabbi Shmuly Gniwisch led an advanced session on integrating AI for personalized learning and engagement. Educators found the training inspiring and highly practical, offering skills they can immediately apply.

Bell Cyber and Radware boost AI security services

Bell Cyber and Radware are enhancing their AI-driven security services, now offered as a managed solution. This combined service protects web applications, APIs, and infrastructure against automated attacks, bots, and DDoS threats. It reframes security from managing tools to achieving operational outcomes. The service offers early anomaly detection without increasing staff and adapts to new attack patterns. Bell Cyber's Canadian operations center provides monitoring and response under Canadian governance, ensuring data sovereignty and compliance.

Could AI data centers launch into space?

The immense energy and water demands of AI data centers are causing environmental concerns and local opposition. One proposal suggests moving these data centers into orbit to address these issues. In space, solar panels could provide constant energy, and the cold environment would eliminate cooling problems. Processing could occur in orbiting data centers, with results beamed back to Earth. While this concept is futuristic, it offers a potential solution to the growing environmental impact of AI infrastructure.

AI security screening enhances human judgment

Dr. Manjeet Rege, a professor at the University of St. Thomas, explained how AI security detection works at the Minnesota State Capitol. AI analyzes an object's shape, density, and material composition as people pass through scanners. Rege stated that AI-based screening is becoming standard in many areas. He emphasized that AI enhances, rather than replaces, human judgment in security processes, suggesting a hybrid model that includes metal detection and human oversight.

Will AI eliminate white-collar jobs?

This article discusses the potential impact of artificial intelligence on white-collar jobs. It is part of a broader collection of topics covered in the February 21st, 2026 edition, which also includes discussions on Admiral Sam Paparo, French electricity, marriage, passive investments, HS2, sad songs, and management doublespeak. The piece suggests a need to refocus AI investments in light of these potential job market changes.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Tags: AI safety, mental health, AI ethics, Google AI, AI regulation, AI competition, US AI policy, China AI, AI startups, AI in education, AI security, AI infrastructure, AI data centers, AI job displacement, AI applications
