Google Gemini App Verifies Videos as Amazon Investment in OpenAI Stirs Debate

SoundThinking Inc. launches its new Alert Review Center (ARC) in downtown Orlando, using AI for real-time weapons detection via its SafePointe system in places like hospitals and casinos nationwide. This expansion brings about 30 new public safety jobs to Central Florida. Concurrently, the U.S. Air Force will shut down its experimental NIPRGPT AI chatbot by December 31. Users must migrate to the Defense Department's new GenAI.mil platform, which utilizes Google Cloud's Gemini for Government products. Over 700,000 DoD personnel used NIPRGPT during its pilot, but chat history will be lost.

Financial analyst Jim Cramer criticizes a potential $10 billion Amazon investment in OpenAI. This deal reportedly requires OpenAI to purchase Amazon's custom AI chips, Trainium. Cramer compares this "circular" arrangement to risky investments from the 1990s dotcom bubble, noting OpenAI's existing substantial infrastructure commitments with Nvidia and Oracle.

Addressing ethical AI use, Anthropic works to protect users' well-being with its Claude AI, especially for emotional support. The Safeguards team ensures Claude directs users to human helplines and mental health professionals, partnering with organizations like the International Association for Suicide Prevention. New Hampshire lawmakers are discussing the integration of AI tools into state government, with Rep. Keith Ammon advocating for AI agents like Anthropic's Claude model to streamline tasks. However, concerns exist regarding potential irreversible mistakes and cybersecurity risks, necessitating clear rules.

For content verification, Google's Gemini app now allows users to upload videos up to 100 MB and 90 seconds long to verify if they were created by Google AI, scanning for hidden signs of AI generation. This feature is available in all supported languages and countries.
In scientific advancements, researchers are increasingly employing new AI tools and machine learning to study and design proteins, predicting their shapes and properties to create novel proteins. This represents a significant shift in biological research. Amidst these developments, the Rolling Stone Culture Council emphasizes that AI should adapt to human needs, not the other way around. The focus remains on preserving human perspective and using AI as a tool for direction. Spain's AI agency, AESIA, released detailed guidance for the EU AI Act in December 2025, assisting companies with high-risk AI systems.

Key Takeaways

  • SoundThinking Inc. launched its AI-powered Alert Review Center (ARC) in Orlando, creating 30 new public safety jobs and using its SafePointe system for real-time weapons detection.
  • The U.S. Air Force is phasing out its NIPRGPT AI chatbot by December 31, directing over 700,000 DoD users to the new GenAI.mil platform, which integrates Google Cloud's Gemini for Government products.
  • Jim Cramer criticized a potential $10 billion Amazon investment in OpenAI, which would reportedly obligate OpenAI to purchase Amazon's Trainium AI chips, calling it a "circular" and risky deal.
  • OpenAI has existing significant infrastructure commitments with companies like Nvidia and Oracle.
  • Anthropic is enhancing its Claude AI to protect user well-being, especially for emotional support, by directing users to human helplines and partnering with organizations like ThroughLine and the International Association for Suicide Prevention.
  • New Hampshire lawmakers are considering integrating AI agents, such as Anthropic's Claude model, into state government for tasks like calendar management, while also addressing risks like irreversible mistakes and cybersecurity.
  • Google's Gemini app now offers a feature allowing users to verify if videos (up to 100 MB, 90 seconds) were generated by Google AI, scanning for hidden AI signs.
  • Scientists are leveraging AI and machine learning tools to significantly advance the study and design of new proteins, predicting their structures and properties.
  • Spain's AI agency, AESIA, released non-binding guidance for the EU AI Act in December 2025, providing support for companies developing or using high-risk AI systems.
  • A Rolling Stone Culture Council article advocates for AI to adapt to human needs, emphasizing the importance of maintaining human perspective and using AI as a tool for direction.

SoundThinking opens AI weapons detection center in Orlando

SoundThinking Inc. launched its new Alert Review Center, or ARC, in downtown Orlando. The center uses AI to monitor real-time weapons-detection alerts from its SafePointe system at places like hospitals and casinos nationwide. The SafePointe system uses sensors and 3D imaging to screen for weapons. This expansion brings about 30 new public safety jobs to Central Florida. The company's systems are mainly used in Florida, California, and Texas.

Orlando welcomes new AI security monitoring hub

An AI-based security company opened a new monitoring hub in downtown Orlando. The center will help detect weapons in real time at busy places like airports and schools across the country. The company uses AI to identify security risks and respond faster than traditional methods. The expansion also creates new tech and security jobs in the Orlando area.

Humans should not try to fit into an AI world

An article from the Rolling Stone Culture Council discusses how humans should interact with AI. It argues that AI should fit into our world, not the other way around. The real threat is losing our human perspective, not being replaced. Humans have a unique ability to create space and perspective, unlike AI, which only converges on answers. We must focus on our capacity to think and feel, and use AI as a tool to set direction, not just speed.

Air Force retires NIPRGPT AI chatbot for new platform

The Air Force will shut down its experimental AI chatbot, NIPRGPT, by December 31. The shutdown comes earlier than planned because the Defense Department released its new GenAI.mil platform. Users must transition to GenAI.mil, which uses Google Cloud's Gemini for Government products. Over 700,000 people across the Department of Defense used NIPRGPT during its pilot phase. The Air Force will not provide a tool to export data, so chat history will be lost after the shutdown.

Jim Cramer warns Amazon about risky AI deal

Jim Cramer criticized Amazon over a potential $10 billion investment in OpenAI. The deal would reportedly require OpenAI to buy Amazon's custom AI chips, called Trainium. Cramer believes this "circular" deal resembles the risky investments seen during the 1990s dotcom bubble. He warned that such arrangements do not reflect genuine demand and could lead to broader market problems. OpenAI has already made huge infrastructure commitments with companies like Nvidia and Oracle.

New Hampshire lawmakers discuss AI tools and risks

New Hampshire representatives met with AI developer Anthropic to discuss using AI tools in state government. Rep. Keith Ammon wants the state to use AI agents to make work easier, but also sees risks that need rules. Anthropic showed how its Claude model could help a lawmaker with tasks like managing calendars and mapping districts. Several state departments, including the Department of Justice and Veterans Home, already use AI tools like Lexis+ AI and ChatGPT. However, experts like Gabriel Nicholas warned about AI agents making irreversible mistakes and cybersecurity concerns.

AI advances help scientists design new proteins

Scientists are using new artificial intelligence tools to study and design proteins. Machine learning has changed how researchers understand protein structures. Cecilia Clementi, Bruno Correia, and Peilong Lu discussed how these programs predict protein shapes and properties, and how the tools have become especially valuable for creating novel proteins. They also discussed the improvements they hope to see in this field.

Spain releases AI Act guidance for high-risk systems

In December 2025, Spain's AI agency, AESIA, released detailed guidance for the EU AI Act. This non-binding advice helps companies that make or use high-risk AI systems. The guidance came from Spain's AI regulatory sandbox, with help from experts and industry. It includes introductory guides, technical recommendations on things like risk management and human oversight, and a toolkit with templates. AESIA plans to update these documents regularly.

Gemini app now verifies Google AI videos

Google's Gemini app now lets users verify if a video was created by Google AI. Users can upload a video up to 100 MB and 90 seconds long. Gemini will then scan the video for hidden signs of AI generation. This new feature works for both images and videos. It is available in all languages and countries where the Gemini app is supported.

Xbox Era Headlines cover AI debate and Killzone DLC

Xbox Era's Headlines for December 18th, 2025, covered several topics. The show opened with a discussion of new Killzone DLC, as reported by Windows Central, along with other Sony-related news. The headlines also highlighted a rapidly growing discussion around AI.

Anthropic ensures Claude protects user well-being

Anthropic is working to protect users' well-being when they use its AI, Claude, for emotional support. The Safeguards team ensures Claude responds with empathy and honesty, especially regarding suicide and self-harm. Claude is trained to direct users to human support like helplines and mental health professionals. New product features include a crisis banner that appears with resources from ThroughLine, a global crisis support network. Anthropic also partners with the International Association for Suicide Prevention to improve Claude's handling of sensitive conversations.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Tags: AI weapons detection, Security technology, Public safety, Real-time monitoring, AI security, Human-AI interaction, AI ethics, AI as a tool, AI chatbot, Military AI, Defense Department, Google Cloud, Gemini for Government, AI investment, Amazon, OpenAI, AI chips, Market risks, AI in government, AI agents, AI risks, AI regulation, Anthropic, Claude AI, Cybersecurity, ChatGPT, Lexis+ AI, AI in science, Protein design, Machine learning, Biotechnology, EU AI Act, High-risk AI systems, Risk management, Human oversight, Google Gemini, AI video verification, AI image verification, AI generation detection, User well-being, Emotional support AI, Mental health support, Crisis support, Orlando, Spain
