NVIDIA announces AI collaboration push as Jensen Huang urges graduates to run toward the AI future

NVIDIA CEO Jensen Huang addressed the Class of 2026 at Carnegie Mellon University on May 11, 2026, urging graduates to run toward the AI future rather than move slowly. He emphasized that no generation possesses more powerful tools and called for scientists, engineers, and policymakers to collaborate on advancing AI safely.

Huang compared the current moment to the start of the PC revolution, noting that intelligence will transform every industry while potentially closing the technology divide. He warned that while AI offers immense opportunity, it also carries risks that require wise management and the creation of societal guardrails.

Despite growing competition, the United States and China have made progress in AI cooperation. In May 2024, the nations held their first AI dialogue in Geneva, and in November 2024, leaders agreed that nuclear weapons must remain under human control to prevent AI militarization. Experts suggest continuing these discussions on safety in non-military areas to manage global risks.

Domestically, the Trump administration faces internal disagreement as US spy agencies seek more power to evaluate AI models. Simultaneously, employers navigate a confusing patchwork of federal and state hiring rules, with experts urging immediate governance implementation. Meanwhile, AI-generated content saturation is making the internet feel increasingly fake, while nonprofits in Pittsburgh receive free training to use AI tools safely.

Key Takeaways

  • NVIDIA CEO Jensen Huang spoke to Carnegie Mellon graduates on May 11, 2026, urging them to embrace the AI revolution.
  • Huang stated that scientists, engineers, and policymakers must work together to advance AI safely.
  • The US and China held their first AI dialogue in Geneva in May 2024 to discuss risks and governance.
  • US and China leaders agreed in November 2024 that nuclear weapons must remain under human control.
  • US intelligence agencies seek expanded power to evaluate AI models, causing a split within the Trump administration.
  • Employers face compliance gaps due to a mix of unchanged federal laws and new state restrictions on AI hiring.
  • AI-generated content saturation is causing people to mistake real photos for AI creations online.
  • Nonprofits in Pittsburgh are receiving free training on AI literacy and spotting unreliable results.
  • 31.4 percent of US AI users searched for product links in February, a significant increase from earlier months.
  • NVIDIA released cuda-oxide, an experimental Rust-to-CUDA compiler backend for developers.

    NVIDIA CEO Tells Grads to Run Toward AI Future

    NVIDIA CEO Jensen Huang spoke to graduates at Carnegie Mellon University on May 11, 2026. He told the Class of 2026 that they have never had more powerful tools or greater opportunities. Huang urged students to run toward the future instead of walking slowly. He emphasized that scientists, engineers, and policymakers must work together to advance AI safely.

    Jensen Huang Urges Graduates to Shape AI Era

    NVIDIA founder Jensen Huang addressed the Class of 2026 at Carnegie Mellon University on May 11, 2026. He stated that no generation has more powerful tools than the current one. Huang encouraged graduates to help shape the future of AI and to advance both AI capabilities and safety. He also called on policymakers to create guardrails that protect society while allowing innovation to move forward.

    NVIDIA CEO Says AI Revolution Starts Now

    NVIDIA CEO Jensen Huang told graduates at Carnegie Mellon University that their careers start at the beginning of the AI revolution. He compared this moment to the start of the PC revolution and said intelligence will change every industry. Huang noted that AI can close the technology divide and make computing accessible to everyone. He warned that while AI brings opportunity, it also creates risks that must be managed wisely.

    US and China Can Expand AI Cooperation

    The United States and China have made some progress in cooperating on artificial intelligence despite growing competition. In May 2024, the two nations held their first AI dialogue in Geneva to discuss risks and governance. In November 2024, leaders agreed that nuclear weapons must remain under human control to prevent AI militarization. Experts suggest both countries should continue talking about AI safety in non-military areas to manage global risks.

    AI Safety Needs US and China Cooperation

    While the US and China compete in AI technology, they still need to cooperate on safety issues. Some AI risks like cyber attacks and biological threats cross national borders and cannot be managed by one country alone. Experts from both nations continue to discuss risks like loss of control and AI-enabled cyber threats. Both countries share the responsibility to mitigate these risks for the global community.

    Illinois Passes Bill to Limit Detention Centers

    The Illinois House passed House Bill 5024 to limit where detention center facilities can be built. The bill was introduced by House Speaker Emanuel "Chris" Welch after protests at the Broadview ICE facility. Community groups back the bill as a step toward restricting detention centers and eventually closing them. The bill now awaits approval in the Illinois Senate.

    US Spy Agencies Seek More AI Power

    US intelligence agencies want more power to evaluate AI models under the Trump administration. The plan has caused a sharp split within the administration as President Trump prepares to travel to a summit in China. According to two anonymous sources, the proposal has not yet been made public, but it reveals a disagreement over how much control intelligence agencies should have.

    AI Hiring Rules Create Compliance Gaps

    Employers using AI to hire face a confusing mix of federal and state rules. Federal civil rights laws have not changed, but many states have added their own restrictions. This patchwork of regulations makes it hard for big companies to stay compliant. Experts say employers must understand how their AI tools work and put governance in place now rather than waiting for clear federal rules.

    AI Content Makes Internet Feel Fake

    AI-generated content is everywhere online and is making everything sound the same. People have grown so accustomed to fake AI images that they sometimes mistake real photos for AI creations. Podcasts and forums now often contain AI-written scripts and generic arguments that confuse readers. This saturation is exhausting audiences and making it hard to trust anything they see or read.

    Nonprofits Learn to Use AI Tools Safely

    Nonprofit workers in Pittsburgh are getting free training on how to use AI tools effectively. The program teaches them AI literacy, how to build their own agents, and how to spot unreliable results. Instructors use the term "bad cat" to describe an AI model that fails because it lacks specific information. The training aims to prevent flawed AI output from harming the vulnerable people served by social services.

    People of All Ages Search for Products with AI

    Using AI to find product links is becoming popular across all age groups. In February, 31.4 percent of US AI users searched for product links, a significant increase from earlier months. This low-risk activity builds trust for more advanced AI shopping features like automated purchases. Major companies like Amazon and Google are adding AI tools to help customers discover products.

    NVIDIA Releases New Rust-to-CUDA Compiler

    NVIDIA AI has released cuda-oxide, an experimental compiler backend written in Rust. This tool compiles SIMT GPU kernels directly to PTX code. It serves as a new option for developers working with NVIDIA hardware.
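    The brief does not show cuda-oxide's actual API, so the sketch below only illustrates the SIMT execution model such a Rust-to-PTX backend targets: one kernel body written per-thread, launched once for every thread index. Here the "launch" is simulated with a plain CPU loop; the names `vector_add_kernel` and `launch` are illustrative assumptions, not part of cuda-oxide.

```rust
// Conceptual sketch of the SIMT model (not cuda-oxide's real API).
// A GPU kernel is one function body executed by many threads; each
// thread identifies its element via a thread index, with a bounds
// guard just like a CUDA kernel.

fn vector_add_kernel(thread_idx: usize, a: &[f32], b: &[f32], out: &mut [f32]) {
    // Each "thread" handles exactly one element.
    if thread_idx < out.len() {
        out[thread_idx] = a[thread_idx] + b[thread_idx];
    }
}

fn launch(grid_size: usize, a: &[f32], b: &[f32], out: &mut [f32]) {
    // On a GPU these iterations run in parallel lockstep (SIMT);
    // this sequential loop stands in for the hardware scheduler.
    for tid in 0..grid_size {
        vector_add_kernel(tid, a, b, out);
    }
}

fn main() {
    let a = vec![1.0_f32, 2.0, 3.0, 4.0];
    let b = vec![10.0_f32, 20.0, 30.0, 40.0];
    let mut out = vec![0.0_f32; 4];
    launch(4, &a, &b, &mut out);
    println!("{:?}", out); // [11.0, 22.0, 33.0, 44.0]
}
```

    A compiler backend like the one described would translate the per-thread kernel body to PTX, NVIDIA's virtual instruction set, so the real launch runs across GPU threads instead of a loop.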


    Sources

    NOTE:

    This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

