OpenAI, Anthropic, and Meta Updates

New York Governor Kathy Hochul signed the Responsible Artificial Intelligence Safety and Education Act, or RAISE Act, on December 19, 2025. This landmark legislation establishes stringent safety rules for advanced AI models, often called frontier models, and requires major companies such as OpenAI, Anthropic, Meta, Google, and Microsoft to report critical safety issues within 72 hours. The law imposes significant penalties, including $1 million for a first violation and $3 million for subsequent breaches, making it stricter than California's existing regulations. Following lobbying by tech companies for less restrictive measures, lawmakers are expected to approve Governor Hochul's requested changes early next year.

Amid these regulatory developments, AI innovation continues at a rapid pace. Google's AI efforts are expanding notably: Josh Woodward was promoted in April to lead the Gemini app while continuing to oversee Google Labs. The Gemini app now boasts 650 million monthly active users, AI Overviews reaches 2 billion monthly users, and the image generator Nano Banana has proved particularly popular. Meanwhile, Anthropic launched Claude Opus 4.5, its newest and most advanced AI model. The model sets new benchmarks in coding, autonomous agents, and enterprise tasks, offering top performance at a lower cost than older Opus versions. It also introduces an "Effort Parameter" that lets users control the AI's thinking depth, and it boosted scores on complex research tasks from 70.48% to 85.3%.

The rapid integration of AI also raises significant societal and ethical concerns. Studies from MIT, Carnegie Mellon University, and Microsoft suggest that relying on AI tools like ChatGPT might reduce critical thinking and independent problem-solving among students and white-collar workers. In the entertainment industry, actress Natasha Lyonne, cofounder of Asteria Film Co., advocates for ethical AI in filmmaking. Her company partnered with Moonvalley AI to develop Marey, an AI video generation model trained exclusively on properly licensed human-created content, aiming to avoid the copyright issues faced by models trained on scraped web data. YouTube, for its part, recently acted against misinformation by removing two major channels, Screen Culture and KH, which had over 2 million subscribers combined, for creating fake AI-generated movie trailers.

The expansion of AI also presents infrastructure challenges and new job opportunities. Maryland farmers are opposing a proposed 67-mile high-voltage power line needed to supply electricity to the many new data centers supporting AI's growth. Looking ahead, Robert Seamans, a professor at NYU Stern School of Business, predicts the emergence of new AI-focused jobs starting in 2026, including "AI explainers" and "AI auditors" who will test AI systems for fairness and bias. Economically, traditional diversified investment strategies delivered double-digit gains in 2025, their best returns since 2019, yet this success was largely overshadowed by the intense hype surrounding AI and cryptocurrency, prompting experts to warn against abandoning diversification.

Key Takeaways

  • New York's RAISE Act, signed December 19, 2025, mandates strict safety reporting for frontier AI models from companies like OpenAI, Anthropic, Meta, Google, and Microsoft, with penalties up to $3 million.
  • Google's Gemini app, led by Josh Woodward, now serves 650 million monthly active users, while AI Overviews reaches 2 billion monthly users.
  • Anthropic launched Claude Opus 4.5, an advanced and more affordable AI model that boosts complex research task scores from 70.48% to 85.3% and introduces an "Effort Parameter" for controlling thinking depth.
  • Studies from MIT, Carnegie Mellon University, and Microsoft suggest that reliance on AI tools like ChatGPT may reduce critical thinking and independent problem-solving skills.
  • New AI-focused jobs, such as "AI explainers" and "AI auditors," are predicted to emerge starting in 2026, according to NYU Stern Professor Robert Seamans.
  • Maryland farmers are currently opposing a proposed 67-mile high-voltage power line needed to supply electricity to new AI data centers.
  • Actress Natasha Lyonne's Asteria Film Co., in partnership with Moonvalley AI, developed Marey, an AI video generation model trained on properly licensed content to ensure ethical and copyright-friendly generative AI.
  • YouTube removed two major channels, Screen Culture and KH, with over 2 million subscribers combined, for creating fake AI-generated movie trailers.
  • In 2025, traditional diversified investment strategies achieved their best returns since 2019, delivering double-digit gains, though this success was largely overshadowed by AI and cryptocurrency hype.
  • Experts warn against ignoring diversification in investments, especially with high market valuations in tech, suggesting a potential shift towards value-oriented assets.

New York Governor Hochul signs strict AI safety law

Governor Kathy Hochul signed New York's RAISE Act into law on December 19, 2025. This landmark bill creates strong safety rules for advanced AI models, known as frontier models. It requires companies like OpenAI, Anthropic, Meta, Google, and Microsoft to report critical safety issues within 72 hours. The law also sets penalties of $1 million for a first violation and $3 million for later ones. New York's law is stricter than California's and aims to prevent large-scale harm from AI.

New York Governor Hochul signs AI regulation bill

New York Governor Kathy Hochul signed the Responsible Artificial Intelligence Safety and Education Act, or RAISE Act, on Friday, December 19, 2025. This new law sets state rules for the safe development of advanced AI models. Tech companies had lobbied to make the law less strict, wanting it to be more like California's regulations. Governor Hochul agreed to sign the original bill, and lawmakers are expected to approve her requested changes early next year.

Google boosts AI efforts with Josh Woodward leading Gemini

Josh Woodward, a 16-year Google veteran, became a key figure in Google's AI race after being promoted in April to lead the Gemini app. He also continues to run Google Labs. Woodward's role is crucial for growing the Gemini app, which is at the heart of Google's AI plans, and for keeping users safe. His ability to remove roadblocks helps his team develop products quickly. Google's AI standing was uncertain earlier this year, but the Gemini app has since seen huge growth, with its image generator Nano Banana becoming very popular. The Gemini app now has 650 million monthly active users, and AI Overviews has 2 billion monthly users.

Anthropic launches advanced AI model Claude Opus 4.5

Anthropic has launched Claude Opus 4.5, its newest and most advanced AI model. This model sets new benchmarks in coding, autonomous agents, and enterprise tasks, offering top performance at a more affordable cost than older Opus versions. A key new feature is the "Effort Parameter," which lets users control the AI's thinking depth to balance speed and accuracy. Claude Opus 4.5 shows significant improvements in complex research tasks, boosting scores from 70.48% to 85.3%. It also excels in managing large codebases, long-term reasoning, and advanced computer and vision tasks, making it highly reliable for various business uses.

Studies suggest AI may reduce critical thinking skills

New studies suggest that relying on AI tools like ChatGPT might reduce the mental effort we apply. MIT researchers found that students using ChatGPT for essays showed less brain activity in areas linked to thinking. Another study by Carnegie Mellon University and Microsoft revealed that white-collar workers who trusted AI more applied less critical-thinking effort. While AI can make work faster, experts worry it could harm independent problem-solving and critical thinking over time. Some students feel AI makes schoolwork too easy, and researchers like Professor Wayne Holmes are calling for more studies on AI's long-term effects on learning and safety.

New AI jobs like explainers and auditors coming soon

Robert Seamans, an NYU Stern School of Business professor, predicts new AI-focused jobs will emerge starting in 2026. He believes AI will change most jobs, similar to how the internet did. People who understand AI and can use it to improve their work, or who can test and train AI, will be in high demand. New roles like "AI explainers" or "AI translators" will help managers understand AI tools simply. "AI auditors" will also be needed to test AI systems for fairness and bias, possibly requiring a legal background. Seamans advises everyone to experiment with AI in different ways.

Maryland farmers oppose power line for AI data centers

Maryland farmers are fighting power companies over a proposed 67-mile high-voltage power line. The new line is needed to supply electricity to the many new data centers that support the growing use of AI. Farmers argue that the power line, which would cross their land, threatens their way of life. NBC News' Stephanie Gosk reported on this conflict on December 19, 2025.

Diversified investments thrive in 2025 despite AI hype

In 2025, traditional diversified investment strategies, like those split between stocks and bonds, saw their best returns since 2019. These "old-school" approaches delivered double-digit gains, but this success was largely overshadowed by the excitement around AI and cryptocurrency. Despite strong performance, investors continued to pull money from these balanced funds for most of the year. Experts warn that ignoring diversification now could be risky, especially with high market valuations in tech. However, some see a shift towards value-oriented investments and alternative assets, showing that while the 60/40 portfolio is changing, the core idea of diversification remains important.

Natasha Lyonne calls for ethical AI in filmmaking

Actress Natasha Lyonne, cofounder of Asteria Film Co., believes AI has a major ethics problem. Her company aims to create high-quality, copyright-friendly generative AI content for films, unlike other models that face issues for using scraped web data. Asteria partnered with Moonvalley AI to develop Marey, an AI video generation model trained on properly licensed human-created content. Lyonne stresses the importance of careful AI inputs and using AI to improve human lives, not just to save money. She emphasizes that humans must guide AI tools to prevent them from taking over.

YouTube removes channels making fake AI movie trailers

YouTube has finally taken action against channels creating fake AI-generated movie trailers. The platform removed two major channels, Screen Culture and KH, which had over 2 million subscribers and a billion views combined. These channels were known for making false trailers for popular movies like Fantastic Four: First Steps and TV shows such as Squid Game. YouTube decided to act after an investigation by Deadline, and the channels' pages are no longer available.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

