Google DeepMind CEO pushes AGI while Claude generates sonnets

Demis Hassabis, CEO of Google DeepMind, has been working 100-hour weeks for the past three to four years, driven by intense competition in the AI industry and his personal passion for scientific discovery. His unusual routine includes dedicated research time from 10 PM to 4 AM, reflecting the demanding pace as companies race to advance Artificial General Intelligence.

While AI models like Claude can quickly generate a sonnet about the Forth Bridge, fulfilling Alan Turing's 1950 challenge, questions persist about true human creativity. Author Richard Beard argues AI lacks the human experience and emotion needed for genuine art. Similarly, Karen Stabiner, whose books trained AI models, worries her distinctive writing style, rich in em dashes, might now be mistaken for AI-produced content, suggesting a need for human authorship disclaimers.

In a significant industry shift, Asus will cease producing new mobile phones from 2026 onwards, redirecting its focus to its rapidly expanding AI server business and other AI hardware. Meanwhile, 2nd Nature is leveraging its AgWaste Portal™ AI platform to discover new, healthy food ingredients from crop side streams, drastically cutting development time. In healthcare, McKinsey Global Institute's Shubham Singhal notes AI's potential to automate most payer jobs, necessitating re-skilling of the workforce for care-focused roles and the creation of new positions to manage AI systems.

AI stocks are set to dominate Asia Pacific investments in 2025, with Nvidia emerging as the most-used underlying asset for equity-linked products in Hong Kong SAR, accounting for 23% of issuances. Amidst this growth, transparency in AI use is becoming critical; organizations like Amazon and the International Committee of Medical Journal Editors now require disclosure. This push for transparency also ties into broader discussions about AI's potential impact on human purpose, with some commentators suggesting it poses an existential threat to meaning if humans feel unnecessary.

Key Takeaways

  • Demis Hassabis, CEO of Google DeepMind, works 100-hour weeks due to fierce AI industry competition and his passion for scientific discovery.
  • AI models like Claude can generate poetry, fulfilling Alan Turing's 1950 challenge, but authors question their capacity for true human creativity and emotion.
  • Asus will stop producing new mobile phones from 2026 to focus on its rapidly growing AI server business and other AI hardware.
  • Nvidia is the most popular underlying asset for structured products in the Asia Pacific market, especially in Hong Kong SAR, where it accounted for 23% of issuances.
  • 2nd Nature uses its AgWaste Portal™ AI platform to discover new, healthy food ingredients from agricultural waste, reducing development time from years to months.
  • AI is poised to automate most healthcare payer jobs, requiring re-skilling for care roles and creating new jobs for AI system development and management.
  • Transparency in AI use is increasingly mandated by organizations like Amazon and the International Committee of Medical Journal Editors to build trust and identify biases.
  • Commentary suggests AI's most significant danger is an existential threat to human purpose and meaning, potentially leaving people feeling unnecessary.
  • AI accelerates scientific research by rapidly processing vast data, predicting new material properties, and summarizing academic papers.
  • Authors like Karen Stabiner express concern that their unique writing styles might be mistaken for AI-generated content, prompting calls for human authorship disclaimers.

AI Writes Poetry But Lacks True Human Creativity

Richard Beard argues that while AI can generate writing, it cannot truly replicate human creativity, especially in memoir. He references Alan Turing's 1950 paper 'Computing Machinery and Intelligence' which included writing a sonnet as part of his test for machine intelligence. Modern large language models like Claude can quickly produce a sonnet about the Forth Bridge, fulfilling Turing's challenge. However, Beard suggests AI lacks the human experience, thoughts, and emotions needed for genuine art and the ability to surprise readers. This highlights the unique value of human creativity in writing about lived experience.

Author Blames Herself for AI's Em Dash Habit

Author Karen Stabiner believes she is partly responsible for the frequent use of em dashes and semicolons in AI-generated text. Her books were used to train AI models, making her an accidental contributor to their writing style. Stabiner, who taught at Columbia Journalism School, defends her "by ear" approach to punctuation against academic rules. She now worries her own writing, rich in em dashes, might be mistaken for AI-produced content. She suggests authors might need to add disclaimers like "No AI programs were used" to their books to confirm human authorship.

2nd Nature Uses AI for Healthy Food Ingredients

2nd Nature has launched new ingredients discovered using its AI platform, AgWaste Portal™. These compounds come from crop side streams like wheat, soy, rice, peanut, and corn, which are usually discarded. The AI technology quickly identifies high-performance molecules, reducing development time from years to months. The company filed a patent for these versatile ingredients, which can be used in food, wellness, personal care, and medicine. They offer natural, healthier alternatives to sugar and salt, providing calorie-free sweetness and savory depth without added sodium.

AI's Biggest Danger Is Loss of Human Purpose

This commentary argues that the most dangerous side effect of AI is not economic, but an existential threat to human meaning. It draws on Viktor Frankl's idea that people find deep meaning through responsibility for others. Traditional sources of meaning like work, family, and community are weakening, with fewer people attending religious services and more spending hours on social media or with AI companions. The author highlights statistics showing declining fertility rates and increasing social isolation. The concern is that a world where AI does everything could leave humans feeling unnecessary and without purpose, even while constantly stimulated.

Healthcare Payers Must Adapt to AI Transformation

Shubham Singhal from McKinsey Global Institute discusses how healthcare payers should respond to AI. AI can efficiently handle tasks like network creation, rule definition, pricing, and payment processing. This could automate most payer jobs, bringing big productivity gains and new services for members. However, this shift will also cause major disruption. Payers must focus on re-skilling their workforce for roles that provide care and compassion, and create new jobs to develop and manage AI systems. They also need to clearly explain how AI advances their mission to improve lives and access to healthcare, and be bold in reimagining entire systems, not just small tasks.

Google DeepMind CEO Works 100-Hour Weeks for AI

Demis Hassabis, CEO of Google DeepMind, revealed he has been working 100-hour weeks for the past three to four years. He attributes this intense schedule to the fierce competition in the AI industry and the high stakes involved in developing Artificial General Intelligence. Hassabis also stated that his personal passion for scientific discovery drives him. His unusual routine includes working from 10 PM to 4 AM, which he dedicates to research and creative thinking. This reflects the demanding pace across the AI industry as companies race to advance the technology.

AI Stocks Dominate Asia Pacific Investments in 2025

In 2025, AI stocks are leading as the most popular underlying assets for structured products in the Asia Pacific market. In Hong Kong SAR, tech and semiconductor stocks, especially Nvidia, have gained significant traction. Nvidia was the most-used underlying for equity-linked investments, accounting for 23% of all issuances with 13,015 products. Taiwan also saw AI stocks as top performers, while in Korea the 3-month treasury bond was the leading underlying asset.

Asus Stops Making Phones to Focus on AI Hardware

Asus CEO Jonney Shih confirmed that the company will stop producing new mobile phones, with no models planned for 2026 and beyond. This decision comes as the smartphone market faces intense competition, slower replacement cycles, and profitability challenges for smaller brands. Asus previously offered Zenfone and high-end ROG Phones, but these struggled with limited software support. The company will now shift its focus to its rapidly growing AI server business and other AI hardware, which has been driving its recent revenue growth.

AI Revolutionizes Scientific Research and Discovery

AI is changing scientific research by speeding up discovery and innovation across many fields. It can process and analyze huge amounts of data much faster than humans, which is crucial for areas like genomics. In materials science, AI predicts new material properties and designs compounds, cutting down on expensive physical experiments. AI tools using natural language processing also help researchers quickly find and summarize scientific papers, saving time and suggesting new research directions. While AI offers great potential, ensuring fairness and transparency in its algorithms remains an important area of discussion.

Disclose All AI Use in Your Work

This article argues that everyone should always disclose how they use AI in their work and communications. With increasing distrust due to AI-generated content, transparency is crucial. Organizations like the International Committee of Medical Journal Editors and Amazon already require AI disclosure in specific fields. Disclosing AI use helps show the quality of human-made work, asserts human value over AI chatbots, and provides leadership in effective AI tool use. It also gives context to recipients, helps identify algorithmic biases, and shows sensitivity to data privacy when content is uploaded to AI tools.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI, Large Language Models, Human Creativity, AI Limitations, AI Writing Style, AI Training Data, Human Authorship, AI Disclosure, AI Platforms, Food Technology, Sustainable Ingredients, Materials Discovery, AI Societal Impact, Human Purpose, Existential Threat, AI Ethics, AI in Healthcare, Healthcare Payers, Workforce Transformation, AI Automation, Artificial General Intelligence, AI Industry Competition, AI Stocks, Tech Investments, AI Hardware, AI Servers, Scientific Research, Data Analysis, Natural Language Processing, Transparency, Algorithmic Bias, Data Privacy, Business Strategy
