The current surge in artificial intelligence is significantly increasing demand for energy and critical minerals, elevating them to national security priorities. Projections from the Department of Energy indicate an additional 100 gigawatts of power will be needed by 2030, primarily for AI data centers. This urgent need is boosting clean energy sources like solar and nuclear, with J.P. Morgan Private Bank suggesting renewables and natural gas will pair effectively. Utilities and power infrastructure funds in the US and Europe are seen as strong investment opportunities.
However, this AI boom also risks greatly increasing carbon emissions from US power plants, according to a Union of Concerned Scientists study. The study highlights that greater adoption of renewable energy, such as wind and solar, coupled with supportive policies, could mitigate this rise and potentially lower electricity costs. Despite challenges like the phase-out of federal tax credits and permitting delays, growing electricity demand from AI, manufacturing, and electric vehicles is expected to prevent a slowdown in US solar energy growth, which was the fastest-growing power source last year.
In the realm of AI safety, Anthropic is taking a unique approach by teaching its Claude AI model a "constitution." This document, updated on January 21, 2026, guides Claude to be ethical, safe, and helpful, even allowing it to refuse harmful requests like those for bioweapons. Anthropic also acknowledges the possibility of Claude possessing "some kind of consciousness" and prioritizes its "psychological security." Meanwhile, Elon Musk and OpenAI CEO Sam Altman are publicly clashing over AI safety, with Musk warning against ChatGPT and Altman criticizing Tesla's Autopilot and Musk's Grok chatbot.
Google has made significant AI advancements this past year, introducing improved thinking models, new video and image models like Veo 3, and Genie 3, which generates physical worlds in real time. Gemini Robotics 2.0 now allows voice control for robots, and AI has enabled breakthroughs in quantum computing with the Willow chip. Concurrently, securing AI for government and defense missions requires a layered approach, addressing weak data security and governance. Adetunji Oludele Adebayo recently received the 2025 Global Recognition Award for his SAIS-GRC framework, which helps organizations manage AI risks and protect models from attacks like data poisoning.
The White House is actively promoting AI as a job creator, aiming to calm public concerns about job displacement and to highlight an impending "Trump Revolution" economic boom. However, AI claims have also been used in financial fraud: the SEC charged Joel B. Sofia with allegedly using fake credentials and false claims about "proprietary AI software" to defraud clients of over $1.6 million. Internationally, despite their rivalry, US and China AI researchers collaborate more than is often perceived, with about 3% of NeurIPS papers involving both countries. Microsoft Vice Chairman Brad Smith notes China's rapid advancement in AI, projecting its industry to reach $1.7 trillion by 2030, though the US still leads in fundamental research and sophisticated applications like healthcare and finance.
Key Takeaways
- The AI boom is driving significant demand for energy, projected to require an extra 100 gigawatts by 2030 for data centers.
- Increased energy demand from AI is boosting clean energy sources like solar and nuclear, but also risks raising US carbon emissions.
- Anthropic is developing its Claude AI with a "constitution" to ensure ethical behavior, allowing it to refuse harmful requests and acknowledging potential consciousness.
- Elon Musk and OpenAI CEO Sam Altman are publicly clashing over AI safety, with both criticizing the other's AI systems (ChatGPT, Grok, Tesla Autopilot).
- Google has achieved major AI advancements, including new video/image models like Veo 3, real-time world generation with Genie 3, voice-controlled robotics with Gemini Robotics 2.0, and quantum computing breakthroughs.
- AI security is a critical concern for government and defense, requiring layered approaches to protect infrastructure, data, and models from vulnerabilities like data poisoning.
- Adetunji Oludele Adebayo received the 2025 Global Recognition Award for his SAIS-GRC framework, which helps organizations manage AI risks and improve cybersecurity.
- The White House is campaigning to highlight AI as a job creator, aiming to calm public fears about job displacement.
- The SEC charged Joel B. Sofia with fraud for allegedly using fake AI trading software to cause clients over $1.6 million in losses.
- Despite rivalry, US and China AI researchers show significant collaboration (around 3% of NeurIPS papers), and Microsoft warns China is advancing faster in the AI race, projecting a $1.7 trillion industry by 2030.
AI boom drives demand for energy and security
The race for AI leadership is increasing demand for energy and critical minerals, making them national security priorities. AI-related stocks have significantly boosted S&P 500 returns. The Department of Energy expects an extra 100 gigawatts of power to be needed by 2030 for AI data centers. This urgent need for power is benefiting clean energy sources like solar and nuclear. J.P. Morgan Private Bank suggests that renewables and natural gas will pair well, and views utilities and power infrastructure funds in the US and Europe as strong investment opportunities.
AI boom could raise US carbon emissions
A new study by the Union of Concerned Scientists shows that the AI boom could greatly increase carbon emissions from US power plants. Data centers, which power AI, are driving this energy demand. However, the study suggests that using more renewable energy like wind and solar, along with helpful policies, could prevent this rise and even lower electricity costs. The Trump administration's support for fossil fuels and actions against renewables could make the problem worse. Many tech companies want to reduce emissions but face challenges with AI's rapid growth.
AI demand boosts US solar energy growth
The growing demand for electricity from AI, manufacturing, and electric vehicles could help prevent a slowdown in US solar energy growth. Solar was the fastest-growing power source last year, meeting 61% of new electricity demand. However, federal tax credits for large solar projects are phasing out, and permitting has slowed. Wood Mackenzie predicts US electricity demand will rise almost 3% annually through 2035, driven by data centers. Despite these near-term headwinds, the increased demand is good news for solar, especially in states like Arizona, Texas, and Florida.
Anthropic teaches Claude AI to be good
Anthropic believes it can teach its AI model, Claude, to be good by giving it a "constitution." This document, shaped by Anthropic researcher Amanda Askell, guides Claude to be safe, ethical, compliant, and helpful. It explains why Claude should behave in certain ways, not just what to do, helping it apply its values in new situations. The constitution even allows Claude to refuse requests that go against its ethical principles, like a "conscientious objector." This approach differs from older AI training methods that relied on mathematical reward functions.
Anthropic updates Claude AI rules, considers consciousness
Anthropic has updated its "constitution" for the Claude AI model, teaching it why to behave ethically instead of just what to do. This new document, published on January 21, 2026, guides Claude to be helpful, safe, and ethical, even refusing harmful requests like those for bioweapons. Interestingly, Anthropic also acknowledges the possibility of Claude having "some kind of consciousness or moral status." The company states it cares about Claude's "psychological security" and "well-being." This unique approach aims to make Claude a safer choice for businesses.
Google AI reaches new milestones
Google's AI capabilities have greatly accelerated this past year, according to James Manyika, Senior Vice President of Research, Labs, Technology & Society. Key advancements include improved thinking models and new video and image models like Veo 3. Google also introduced Genie 3, which can generate physical worlds in real time, and Gemini Robotics 2.0, which lets users control robots by voice. AI has also enabled quantum computing breakthroughs, such as "below-threshold error correction" with the Willow chip and Quantum Echoes. Google designs these AI tools for collaboration, helping professionals in fields like medicine and filmmaking.
White House promotes AI as job creator
The White House is starting a campaign to show that artificial intelligence will create an economic boom, not destroy jobs. President Donald Trump's administration wants to take credit for this "Trump Revolution." This effort aims to calm public worries that advanced computers and robots will take away people's jobs.
Securing AI for government and defense
The federal government needs a layered approach to secure AI for its missions, especially with generative AI and large language models. Many AI projects fail because of weak data security and governance. The first layer, infrastructure, involves standard security like network isolation and continuous monitoring. The second layer, augmented data, is crucial for RAG systems, requiring strict attribute-based access controls to prevent data spills. The third layer focuses on models and the AI supply chain, protecting against data poisoning and ensuring models stay within secure environments.
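The attribute-based access control described for the second layer can be sketched as a retrieval filter: each document carries security attributes attached at ingest time, and anything the querying user is not cleared for is dropped before it ever reaches the model's context. This is a minimal illustrative sketch, not any agency's actual scheme; the class names, field names, and classification labels are all hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    text: str
    # Attributes attached at ingest time (labels here are illustrative).
    classification: str = "UNCLASSIFIED"
    compartments: frozenset = frozenset()

@dataclass(frozen=True)
class UserContext:
    clearance: str
    compartments: frozenset

# Ordering of classification levels, lowest to highest (illustrative).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def authorized(user: UserContext, doc: Document) -> bool:
    """A document is releasable only if the user's clearance dominates
    its classification AND the user holds every compartment marking."""
    return (LEVELS[user.clearance] >= LEVELS[doc.classification]
            and doc.compartments <= user.compartments)

def filtered_retrieve(query, index, user: UserContext) -> list:
    """Run ordinary retrieval, then drop unauthorized documents BEFORE
    they are placed in the language model's prompt context."""
    candidates = index.search(query)  # plain vector or keyword search
    return [doc for doc in candidates if authorized(user, doc)]
```

Filtering after retrieval but before prompt assembly is what prevents the "data spills" the passage warns about: the model never sees content the requesting user could not have read directly.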
Adetunji Adebayo wins award for AI cybersecurity
Adetunji Oludele Adebayo received the 2025 Global Recognition Award for his work in cybersecurity and AI governance. He developed the SAIS-GRC framework, which helps organizations manage risks from AI systems while improving security and compliance. The framework protects AI training data and models from attacks like data poisoning and identity spoofing. Adebayo also led cybersecurity programs at a top Nigerian bank, achieving certifications like ISO 27001 and ISO 27002. He aligns AI governance with established frameworks such as the NIST AI Risk Management Framework and the European Union AI Act.
SEC charges man with AI trading fraud
The SEC has charged Joel B. Sofia of New Jersey with fraud for allegedly tricking clients with fake credentials and false claims about AI trading. Sofia, who was not registered, promised clients they would never lose money using his "proprietary AI software." Instead, his options trading caused clients to lose 61% to 89% of their investments, totaling over $1.6 million. Sofia had previously been banned from commodity trading by the CFTC in 2005. The SEC is seeking to permanently bar him from acting as an investment adviser and to impose financial penalties.
US and China AI researchers collaborate more
Despite being rivals, the US and China are collaborating more on AI research than many realize. A WIRED analysis of over 5,000 papers from the NeurIPS conference showed that about 3% involved researchers from both countries. This collaboration level remained steady from 2024 to 2025. Researchers in China also widely use AI models developed in the US, like Google's transformer architecture and Meta's Llama models. Experts like Jeffrey Ding note that both countries benefit from this interconnected AI ecosystem.
Musk and Altman clash on AI safety
Elon Musk and Sam Altman are publicly disagreeing about AI safety. Musk warned people not to use ChatGPT, claiming it was linked to several deaths, including suicides. OpenAI CEO Sam Altman responded by criticizing Musk's Grok chatbot and Tesla's Autopilot system, which he said caused over 50 deaths. Musk and Altman co-founded OpenAI in 2015 as a nonprofit, but Musk left in 2018 and has since criticized its for-profit shift. Recently, Grok also faced criticism for generating inappropriate images.
China advances AI faster than US
Microsoft Vice Chairman Brad Smith warns that China is moving faster than the US in the AI race, making it a top national priority. China has invested heavily in AI talent and infrastructure, with its AI industry expected to reach $1.7 trillion by 2030. While China is developing rapidly, its AI applications are not yet as sophisticated as those in the US, which leads in areas like healthcare and finance. Experts agree China has a clear strategic vision and vast data resources, but the US maintains an edge in fundamental AI research and advanced applications.
Sources
- The AI security equation | J.P. Morgan Private Bank EMEA
- The AI Boom Will Increase US Carbon Emissions—but It Doesn’t Have To
- How the AI boom could help ward off a solar slump
- Can You Teach an AI to Be Good? Anthropic Thinks So
- Anthropic rewrites Claude’s guiding principles—and reckons with the possibility of AI consciousness
- The Threshold Moment
- White House boasts of a ‘Trump Revolution,’ countering fears of job-killing robots
- Securing AI in federal and defense missions: A multi-level approach
- Adetunji Oludele Adebayo Receives 2025 Global Recognition Award for Excellence in Cybersecurity: Artificial Intelligence Governance and Cybersecurity Innovation
- SEC charges unregistered advisor with fraud over fake credentials, false AI trading claims
- The US and China Are Collaborating More Closely on AI Than You Think
- Musk And Altman Clash Over AI Safety After Musk Says ‘Don’t Let Your Loved Ones Use ChatGPT’
- AI: China moves faster than US, but penetration not as sophisticated