NVIDIA CEO Jensen Huang addressed the Class of 2026 at Carnegie Mellon University on May 11, 2026, urging graduates to run toward the AI future rather than move slowly. He emphasized that no generation possesses more powerful tools and called for scientists, engineers, and policymakers to collaborate on advancing AI safely.
Huang compared the current moment to the start of the PC revolution, noting that intelligence will transform every industry while potentially closing the technology divide. He warned that while AI offers immense opportunity, it also carries risks that require wise management and the creation of societal guardrails.
Despite growing competition, the United States and China have made progress in AI cooperation. In May 2024, the nations held their first AI dialogue in Geneva, and in November 2024, leaders agreed that nuclear weapons must remain under human control to prevent AI militarization. Experts suggest continuing these discussions on safety in non-military areas to manage global risks.
Domestically, the Trump administration faces internal disagreement as US spy agencies seek more power to evaluate AI models. Simultaneously, employers navigate a confusing patchwork of federal and state hiring rules, with experts urging immediate governance implementation. Meanwhile, AI-generated content saturation is making the internet feel increasingly fake, while nonprofits in Pittsburgh receive free training to use AI tools safely.
Key Takeaways
NVIDIA CEO Tells Grads to Run Toward AI Future
NVIDIA CEO Jensen Huang spoke to graduates at Carnegie Mellon University on May 11, 2026. He told the Class of 2026 that they have never had more powerful tools or greater opportunities. Huang urged students to run toward the future instead of walking slowly. He emphasized that scientists, engineers, and policymakers must work together to advance AI safely.
Jensen Huang Urges Graduates to Shape AI Era
NVIDIA founder Jensen Huang addressed the Class of 2026 at Carnegie Mellon University on May 11, 2026. He stated that no generation has more powerful tools than the current one. Huang encouraged graduates to help shape the future of AI and to advance both AI capabilities and safety. He also called on policymakers to create guardrails that protect society while allowing innovation to move forward.
NVIDIA CEO Says AI Revolution Starts Now
NVIDIA CEO Jensen Huang told graduates at Carnegie Mellon University that their careers start at the beginning of the AI revolution. He compared this moment to the start of the PC revolution and said intelligence will change every industry. Huang noted that AI can close the technology divide and make computing accessible to everyone. He warned that while AI brings opportunity, it also creates risks that must be managed wisely.
US and China Can Expand AI Cooperation
The United States and China have made some progress in cooperating on artificial intelligence despite growing competition. In May 2024, the two nations held their first AI dialogue in Geneva to discuss risks and governance. In November 2024, leaders agreed that nuclear weapons must remain under human control to prevent AI militarization. Experts suggest both countries should continue talking about AI safety in non-military areas to manage global risks.
AI Safety Needs US and China Cooperation
While the US and China compete in AI technology, they still need to cooperate on safety issues. Some AI risks like cyber attacks and biological threats cross national borders and cannot be managed by one country alone. Experts from both nations continue to discuss risks like loss of control and AI-enabled cyber threats. Both countries share the responsibility to mitigate these risks for the global community.
Illinois Passes Bill to Limit Detention Centers
The Illinois House passed House Bill 5024 to limit where detention center facilities can be built. The bill was introduced by House Speaker Emmanuel Chris Welch after protests at the Broadview ICE facility. Community groups support the bill as a step toward restricting detention centers and eventually ending them. The bill now awaits approval in the Illinois Senate.
US Spy Agencies Seek More AI Power
US intelligence agencies want more power to evaluate AI models under the Trump administration. The plan has caused a sharp split within the administration as President Trump prepares to travel to a summit in China. According to two anonymous sources, the proposal is not yet public, but it reveals a disagreement over how much control intelligence agencies should have.
AI Hiring Rules Create Compliance Gaps
Employers using AI to hire face a confusing mix of federal and state rules. Federal civil rights laws have not changed, but many states have added their own restrictions. This patchwork of regulations makes it hard for big companies to stay compliant. Experts say employers must understand how their AI tools work and put governance in place now rather than waiting for clear federal rules.
AI Content Makes Internet Feel Fake
AI-generated content is everywhere online and is making everything sound the same. People have grown so accustomed to fake AI images that they sometimes mistake real photos for AI creations. Podcasts and forums now often contain AI-written scripts and generic arguments that confuse readers. This saturation is exhausting audiences and making it hard to trust anything they see or read.
Nonprofits Learn to Use AI Tools Safely
Nonprofit workers in Pittsburgh are getting free training on how to use AI tools effectively. The program teaches them about AI literacy, how to build their own agents, and how to spot unreliable results. Instructors use the term "bad cat" to describe an AI model that fails because it lacks specific information. The training aims to prevent bad AI outputs from harming vulnerable people served by social services.
People of All Ages Search for Products with AI
Using AI to find product links is becoming popular across all age groups. In February, 31.4 percent of US AI users searched for product links, a significant increase from earlier months. This low-risk activity builds trust for more advanced AI shopping features like automated purchases. Major companies like Amazon and Google are adding AI tools to help customers discover products.
NVIDIA Releases New Rust-to-CUDA Compiler
NVIDIA AI has released cuda-oxide, an experimental Rust-to-CUDA compiler backend that compiles SIMT GPU kernels written in Rust directly to PTX code. It serves as a new option for developers working with NVIDIA hardware.
Sources
- Jensen Huang to college grads: "Run. Don't walk" toward AI
- Jensen Huang Told the Class of 2026 How to Harness a 'Once-in-a-Generation Opportunity' in Just 3 Words
- 'Your Career Starts at the Beginning of the AI Revolution,' NVIDIA CEO Tells Graduates
- How China and the U.S. Can Expand Artificial Intelligence Cooperation
- AI Safety Is Where U.S.-China Cooperation Still Matters
- The Weekly: Illinois detention centers, Canvas breach and AI policies
- In Trump administration battle over AI, U.S. spy agencies seek more power
- AI Hiring Compliance Is a Patchwork and Leaves Big Employer Gaps
- Your AI Use Is Breaking My Brain
- Bad cats and reliable agents: Nonprofit workers get insight into AI tools through free training
- AI product link searches gain traction across ages
- NVIDIA AI Just Released cuda-oxide: An Experimental Rust-to-CUDA Compiler Backend that Compiles SIMT GPU Kernels Directly to PTX