OpenAI partners with Gates Foundation as Nvidia, AMD join Trump council

Artificial intelligence continues to demonstrate its diverse impact, from deeply personal applications to global initiatives. An Australian software engineer, David Pearce, notably used OpenAI's ChatGPT to devise a personalized cancer treatment for his dog, Max, who had been given a grim prognosis. Working with veterinary oncologist Dr. Julia Edgar, Pearce trialed the experimental therapy, which shrank Max's tumor and sparked discussion of AI's role in medicine and its ethical implications.

OpenAI is also extending its reach into humanitarian efforts, partnering with the Gates Foundation, APDC, and DataKind to host an AI workshop in Bangkok. The initiative trained 50 disaster management leaders from 13 Asian countries to use AI tools, including custom GPTs, for faster and more effective disaster response. On the political front, President Donald Trump's new technology advisory council includes leaders from Nvidia and AMD but notably excludes top AI firms such as OpenAI, Google DeepMind, and Anthropic, signaling a focus on hardware and infrastructure.

The AI industry is also creating unconventional career paths, as exemplified by Gabriel Petersson, a high school dropout from Sweden who now earns a six-figure salary at OpenAI after demonstrating his skills through projects for companies like Midjourney. Ethical challenges persist, however: German TV star Collien Fernandes has accused her ex-husband of spreading AI-generated pornographic images of her, prompting calls for stronger deepfake legislation. Anthropic's Claude AI, despite its detailed ethical constitution, faces scrutiny for excluding military deployments from those guidelines, even as its paying subscribers have doubled this year and the company manages demand by rationing computing power.

OpenAI's Vice President for Science, Kevin Weil, emphasized AI's rapid advancement in scientific research, predicting that AI could accelerate discovery enough to achieve 2050's science by 2030. In the business sector, data analytics teams are investing heavily in generative AI for internal tasks such as data visualization and pipeline building, according to a report by Snowflake and Omdia. Meanwhile, a study of 13- to 24-year-olds in America suggests that how young people use generative AI matters more than how much time they spend with it, challenging assumptions about its impact on mental health and thinking abilities.

Key Takeaways

  • David Pearce used OpenAI's ChatGPT to create a successful cancer treatment plan for his dog, Max, highlighting AI's potential in personalized medicine.
  • OpenAI, in partnership with the Gates Foundation, APDC, and DataKind, trained 50 disaster management leaders from 13 Asian countries on using AI for crisis response.
  • Gabriel Petersson, a high school dropout, secured a six-figure job at OpenAI after demonstrating skills through self-initiated projects for companies like Midjourney.
  • A study of 13- to 24-year-olds indicates that the context of generative AI use matters more than time spent, shaping its effects on mental health and cognitive development.
  • German TV star Collien Fernandes accused her ex-husband of spreading AI-generated pornographic images, sparking a national debate on deepfake legislation.
  • Anthropic's Claude AI constitution faces criticism for excluding military applications, raising concerns about universal ethical guidelines for AI.
  • Anthropic's Claude AI has seen a significant surge in paying subscribers, doubling this year, leading the company to manage demand by rationing computing power.
  • President Donald Trump's new tech council includes leaders from Nvidia and AMD but excludes top AI firms like OpenAI, Google DeepMind, and Anthropic.
  • OpenAI's Kevin Weil states that AI is rapidly transforming scientific research, potentially accelerating discoveries to achieve 2050's science by 2030.
  • Data analytics teams are heavily investing in generative AI for internal workflows, such as data visualization and pipeline building, according to Snowflake and Omdia.

Man uses ChatGPT to create dog cancer treatment

An Australian man used ChatGPT to research and design a personalized treatment for his dog's aggressive cancer. He then worked with a scientist to administer the experimental therapy. This story highlights the potential and risks of using AI in medicine and has sparked debate about AI regulation. The man's actions, driven by love for his pet, have brought attention to complex ethical and scientific issues.

Australian man uses AI to save dog's life

David Pearce, a software engineer from Sydney, used ChatGPT to create a personalized cancer treatment plan for his dog, Max, who had only weeks to live. After analyzing Max's medical data, ChatGPT generated a treatment plan that Pearce then shared with veterinary oncologist Dr. Julia Edgar. Dr. Edgar and her team trialed the experimental therapy, which shrank Max's tumor and improved his health. Pearce's story has sparked discussions about AI's role in medicine, with some seeing it as a breakthrough and others raising ethical concerns.

OpenAI helps Asian disaster teams use AI

OpenAI, in partnership with the Gates Foundation, APDC, and DataKind, held an AI workshop in Bangkok for 50 disaster management leaders from across Asia. The goal was to help governments and nonprofits respond faster and more effectively to disasters using AI. Participants from 13 countries learned how to apply AI tools like custom GPTs for tasks such as situation reporting and needs assessment. The workshop emphasized practical AI solutions and responsible use, addressing the growing disaster risks in Asia and the increasing use of AI during crises.

High school dropout lands six-figure job at OpenAI

Gabriel Petersson, a high school dropout from Sweden, now earns a six-figure salary at OpenAI. He advises that Gen Z can get hired in Silicon Valley by proving their skills through self-initiated projects. Petersson built custom websites for companies like Midjourney and Dataland to demonstrate his abilities before applying. This strategy helped him secure roles at these companies and eventually at OpenAI. He emphasizes that elite careers are accessible with the right mindset and by creating opportunities to showcase talent.

How young people use AI matters more than time spent

A study of 13- to 24-year-olds in America suggests that how young people use generative AI is more important than how long they use it. The research challenges the idea that AI use uniformly damages mental health or thinking abilities, finding instead that AI can create opportunities for some while replacing human support for others. The findings suggest that policies should consider the context of AI use rather than focusing only on time limits or blanket bans.

German TV star alleges AI porn use by ex-husband

German TV star Collien Fernandes has accused her ex-husband, Christian Ulmen, of spreading AI-generated pornographic images of her. This has sparked a national debate in Germany about digital violence against women. Fernandes discovered hundreds of fake explicit images and suspects Ulmen created and shared them. Ulmen denies the allegations. Activists are urging the government to strengthen laws against non-consensual deepfakes, with proposed legislation to criminalize the creation and distribution of such content.

Anthropic's AI constitution faces scrutiny

Anthropic published a detailed 'constitution' for its AI model Claude, outlining principles for its behavior and acknowledging the possibility of model consciousness. While praised for its depth, the constitution is criticized for excluding military deployments, meaning Claude models used by the military may not follow the same ethical guidelines. This raises concerns about AI ethics not being applied universally, especially in sensitive areas like defense. The article argues that such ethical decisions should be made by public institutions, not corporations alone.

Anthropic's Claude AI gains record subscribers

Anthropic's AI tool Claude has seen a surge in paying subscribers, with numbers doubling this year. Credit card transaction data indicates record consumer spending and returning users in early 2026. This popularity coincides with news of Anthropic's disagreement with the Pentagon over military use of Claude. The company has also imposed usage limits in response to high demand, managing consumer demand by rationing computing power.

Trump's tech council excludes top AI leaders

President Donald Trump's new technology advisory council includes leaders from semiconductor and infrastructure companies such as Nvidia and AMD, but notably excludes chiefs of top AI firms such as OpenAI, Google DeepMind, and Anthropic. The emphasis on hardware and infrastructure builders rather than AI model developers signals a possible policy direction. The absence of influential figures such as Elon Musk and Sam Altman is notable, as policy is taking shape during a critical phase for the AI industry.

AI is transforming science faster than expected

OpenAI's Vice President for Science, Kevin Weil, spoke at Virginia Tech about the rapid advancement of AI in scientific research. He noted that AI has progressed significantly since 2023, moving from solving SAT math problems to tackling peer-review-level research. Weil believes AI will accelerate scientific discovery, allowing us to achieve 2050's science by 2030. He highlighted AI's role as a powerful tool that enhances scientists' capabilities, enabling them to solve complex problems more efficiently.

Data analysts invest heavily in AI for internal tasks

A report by Snowflake and Omdia shows that data analytics teams are heavily investing in generative AI for internal workflows, ranking second only to IT operations. AI is being used for tasks like creating data visualizations, automating data preparation, and building data pipelines. These internal investments help secure data, improve governance, and provide better business insights. Organizations are prioritizing internal use cases to build a strong foundation for AI adoption before focusing on customer-facing applications.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI in medicine, ChatGPT, dog cancer treatment, AI ethics, AI regulation, veterinary oncology, AI for disaster management, OpenAI, Gates Foundation, AI workshop Asia, AI tools, custom GPTs, responsible AI use, AI crisis response, AI careers, high school dropout, OpenAI hiring, Gen Z, Silicon Valley, self-initiated projects, generative AI, youth AI use, AI and mental health, AI policy, AI and youth, digital violence, AI-generated pornography, deepfakes, non-consensual content, AI legislation, Anthropic, AI constitution, AI consciousness, AI ethics in defense, AI military applications, corporate ethics, public institutions, AI subscribers, Claude AI, consumer spending, AI demand, AI computing power, Trump tech council, AI leaders, semiconductor industry, AI policy direction, AI industry, AI in science, scientific discovery, AI research acceleration, AI for scientists, data analytics, AI investment, generative AI workflows, data visualization, data preparation, data pipelines, data governance, AI adoption
