Google is aggressively expanding its AI hardware ecosystem to challenge Nvidia's dominance. The company is partnering with Marvell Technology, MediaTek, and Broadcom to develop custom chips, including a memory-focused processor and a next-generation TPU optimized for inference. In February 2026, Meta Platforms signed a deal to use Google's Tensor Processing Units for its own AI workloads. Google also renewed its agreement with Broadcom through 2031 and expanded its relationship with Intel for AI infrastructure.
Google plans to announce its new generation of custom-designed TPUs at the Google Cloud Next conference in Las Vegas this week. Chief Scientist Jeff Dean noted that as AI demand grows, it makes sense to specialize chips more for training or inference workloads. While Nvidia remains the gold standard for training advanced models, Google brings unique strengths including a decade of experience designing chips at scale and substantial revenue from its search business.
Outside of hardware, the AI sector sees diverse developments. Anthropic's head of geopolitics, Thompson Paine, recently returned to Chapel Hill for the University's AI for Public Good Conference, where he discussed the global AI landscape and its policy implications. Meanwhile, Marigold appointed Elizabeth Smalley as Chief AI Officer and Pat Jenakanandhini as Chief Product and Technology Officer, reflecting accelerating investment in AI across its commercial portfolio.
In other news, emergency call centers are proving a tougher test for AI than consumer chatbots: there, AI must recover structure from chaos and help trained professionals move faster with better context. Researchers warn that relying on large language models for cognitive tasks may reduce human brain activity in areas associated with creativity and information processing. Conversely, AI is helping state transportation departments use their data more fully, with agencies developing internal bots that turn documentation into searchable databases.
Key Takeaways
- Google is partnering with Marvell, MediaTek, and Broadcom to develop custom AI chips, including a memory-focused processor and next-gen TPU for inference.
- Meta Platforms signed a deal in February 2026 to use Google's Tensor Processing Units for its AI workloads.
- Google renewed its agreement with Broadcom through 2031 and expanded its relationship with Intel for AI infrastructure.
- Google plans to announce new inference-focused TPUs at the Google Cloud Next conference in Las Vegas this week.
- Google Chief Scientist Jeff Dean emphasized the need to specialize chips for training versus inference as AI demand grows.
- Anthropic's Thompson Paine returned to UNC for the AI for Public Good Conference to discuss global AI policy and risks.
- Marigold appointed Elizabeth Smalley as Chief AI Officer and Pat Jenakanandhini as Chief Product and Technology Officer.
- Emergency call centers pose a tougher test for AI than consumer chatbots, demanding speed, accuracy, and visible uncertainty in crisis situations.
- Research indicates that heavy reliance on AI for cognitive tasks can reduce brain activity associated with creativity and information processing.
- State transportation departments are using AI to transform legacy systems and create searchable databases from documentation.
Google expands AI chip partnerships to challenge Nvidia
Google is rapidly building its own AI hardware ecosystem to compete with Nvidia. The company is partnering with Marvell Technology, MediaTek, and Broadcom to develop custom chips for running artificial intelligence models. In February 2026, Meta Platforms signed a deal to use Google's Tensor Processing Units for its own AI workloads. Google also renewed its agreement with Broadcom through 2031 and expanded its relationship with Intel for AI infrastructure. These moves allow Google to reduce its reliance on external vendors while scaling production for its cloud services.
Google discusses new AI chips with Marvell for inference
Google is in discussions with Marvell Technology to jointly develop new artificial intelligence chips focused on inference capabilities. The proposed collaboration includes two chip designs: a memory-focused processor and a next-generation TPU optimized for running AI models efficiently. Google aims to complete the memory chip design by next year before moving to test production. This strategy helps Google strengthen its position in the AI hardware space and reduce dependence on third-party chipmakers like Nvidia.
Google scales AI chip ecosystem with Marvell amid Nvidia rivalry
Google is in talks with Marvell to develop two new chips designed to improve how artificial intelligence models are run. The partnership includes a memory processing unit and a next-generation TPU to boost model efficiency. This push is part of Google's effort to position its TPUs as an alternative to Nvidia GPUs while expanding partnerships with Intel and Broadcom. Google recently launched Gemma 4, a new AI model designed to handle multi-step logic and structured problem-solving more effectively. The company plans to complete the memory chip design by next year before entering test production.
Google introduces new chips to challenge Nvidia in AI
Google aims to build on its momentum with new chips dedicated to inference, or running AI models after they have been trained. The company plans to announce its new generation of custom-designed TPUs at the Google Cloud Next conference in Las Vegas this week. Google Chief Scientist Jeff Dean noted that it now makes sense to specialize chips more for training or inference workloads as demand grows. Nvidia remains the gold standard for AI, particularly for training more advanced models, but Google brings unique strengths including a decade of experience designing chips at scale.
Google talks with Marvell on next-gen AI chip development
Google is reportedly in discussions with Marvell Technology to develop next-generation AI chips designed to run models more efficiently. The collaboration may focus on building two advanced chips: a memory processing unit and a new TPU for handling AI workloads effectively. Google has been steadily investing in its custom chip ecosystem to reduce reliance on external hardware and offer a strong alternative to GPUs. Reports suggest the companies aim to finalize the design of the memory processing unit as early as next year before moving into test production.
Google develops inference AI chips with Marvell to beat Nvidia
Google plans to announce a new generation of its Tensor Processing Units at the Google Cloud Next conference in Las Vegas this week, with inference-focused chips expected to follow. Google Chief Scientist Jeff Dean said that as AI demand grows, it becomes sensible to specialize chips more for training or more for inference workloads, shifting the battleground toward inference. Google can draw on advantages built over years of in-house chip development and substantial revenue from its search business.
Experts spot five signs of AI manipulation in Tice photo
Richard Tice posted a picture of a Reform campaign event that experts believe shows signs of AI manipulation. One woman in the image has six fingers on one hand, and a man has three unnaturally long fingers. The signs are supposed to read "Get Starmer Out," but the text reads more like "Get Stuppence out." Many faces appear smeared and blurred, and the pixel-perfect vertical lines on the railings are also suspicious. A spokesperson for Reform UK said the photograph is real but the version Richard Tice posted was slightly edited using AI, mainly to increase brightness.
Richard Tice picture shows telltale signs of AI editing
X users questioned whether a picture posted by Richard Tice was legitimate or pure AI slop after he shared it on Sunday. The image shows a diverse group of Reform supporters gathered with placards in Birmingham. Analysis by Peryton Intelligence found the image was almost certainly generated or altered using AI. The faces of the figures have a smear to them, and one woman has unusually long fingers and six fingers on her right hand. The lettering of the "Get Starmer Out" slogan is smeared on the signs, and the circular Reform arrow logo is rendered inconsistently.
AI helps leaders build stronger teams instead of destroying them
Duke dean and professor Scott Dyreng observed that adding AI to teams can shift dynamics dramatically. Before AI, about 5% of his MBA students split off from their teams for the final project; after AI, over half went solo. The lesson for leaders is not whether to use AI but how to use it. The best teams focus as much on how they work together as on who is on the team, and they use AI to enhance collaboration. AI tools can help leaders gather insights on individual strengths and weaknesses through meeting analysis and communication trends.
Coursedog acquires ClassRanked for academic AI platform
Coursedog announced it has acquired ClassRanked, a leading-edge course evaluations solution purpose-built for colleges and universities. ClassRanked will become part of Coursedog's Assessment Cloud, closing the loop between student teaching feedback, curriculum, scheduling, and accreditation. Course evaluations and student feedback are central to improving academic quality and student outcomes, yet institutions have long struggled with low response rates and fragmented tools. Coursedog CEO Andrew Rosen said the acquisition gives institutions the ability to move from insight to action and action to outcomes faster than ever.
Marigold appoints new AI and product officers
Marigold announced the appointments of Elizabeth Smalley as Chief AI Officer and Pat Jenakanandhini as Chief Product and Technology Officer. The dual appointments reflect Marigold's accelerating investment in artificial intelligence and product innovation across its commercial portfolio of Campaign Monitor, Emma, and Vuture. Elizabeth Smalley brings over a decade of experience at the intersection of artificial intelligence, data strategy, and product leadership. Pat Jenakanandhini is a seasoned technology and product executive with extensive experience scaling B2B SaaS companies and driving engineering excellence.
Emergency call centers, not chatbots, are the real test of AI
Most people encounter AI as a tool for convenience, but the real test comes in a very different environment: the emergency call center. In the United States, telecommunicators answer roughly 240 million 911 calls each year, and more than half of U.S. 911 centers face a genuine staffing emergency. In that setting, AI cannot afford to be slow, vague, or overconfident. The goal is to recover structure from chaos and help a trained professional move faster with better context. Speed is part of the safety model in emergency response, and uncertainty must be made visible to avoid false confidence.
AI chatbots may reduce human cognitive abilities
Researchers are warning that relying on large language models for cognitive tasks comes with a cost. Research scientist Nataliya Kosmyna noticed that students using AI to write essays showed less brain activity in areas corresponding to creativity and processing information. The ChatGPT group showed notably less brain activity, reduced by up to 55%, compared to those who used their own minds. Studies suggest that if we become too reliant on AI, it could affect the language we use and even our ability to do basic cognitive tasks. Young people might be particularly vulnerable to the negative effects that using AI can have on key cognitive skills.
Quantum computers may boost AI processing power
Quantum computers might eventually be able to handle some AI applications that currently require huge amounts of conventional computing power. Researchers have developed an approach that allows quantum computers to process data in smaller batches without saving it all before beginning to process it. This method allows the quantum computer to process more data at a smaller memory cost than any conventional computer. A quantum computer made from about 300 logical qubits could process significantly more data than current systems. However, many questions about applying the new work to actual devices and real-world data still need to be addressed.
Thompson Paine shapes AI future with Carolina roots
Thompson Paine, head of geopolitics at Anthropic, recently returned to Chapel Hill for the University's AI for Public Good Conference. Paine researches and analyzes the global AI landscape, assessing future scenarios and how current trends align with them. He examines policy implications of the scenarios to identify actions policymakers can take to steer AI development toward beneficial outcomes and avoid potential risks. Paine credits his time at UNC for seeding curiosity and encouraging him to pursue uniquely interesting subjects. He traces his career steps back to a transformative summer teaching in China.
AI helps state transportation departments use data fully
State transportation agencies are turning to artificial intelligence to reorient themselves toward collecting and refining data. Benjamin McCulloch, strategic data scientist at the Texas Department of Transportation, described AI as a data maturity program that changes how agencies do their work. Using AI agents for grunt work will let agencies unlock potential they have never had before. The Connecticut Department of Transportation developed an internal DOT bot that takes documentation and puts it into a searchable database. The Utah Department of Transportation is transitioning an old legacy system toward a more unified platform to ensure data flows seamlessly.
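The Connecticut approach, turning agency documentation into something searchable, can be illustrated with a minimal sketch of an inverted index. The class, document IDs, and sample records below are hypothetical examples for illustration, not the agency's actual system:

```python
import re
from collections import defaultdict

class DocSearchIndex:
    """Minimal inverted index: maps each word to the set of documents containing it."""

    def __init__(self):
        self.index = defaultdict(set)   # word -> set of doc ids
        self.docs = {}                  # doc id -> original text

    def add(self, doc_id, text):
        """Tokenize a document and record which words appear in it."""
        self.docs[doc_id] = text
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            self.index[word].add(doc_id)

    def search(self, query):
        """Return ids of documents containing every word in the query."""
        words = re.findall(r"[a-z0-9]+", query.lower())
        if not words:
            return set()
        result = self.index[words[0]].copy()
        for word in words[1:]:
            result &= self.index[word]
        return result

# Hypothetical maintenance records, for illustration only
idx = DocSearchIndex()
idx.add("memo-01", "Bridge inspection schedule for District 3")
idx.add("memo-02", "Pavement resurfacing guidelines, District 3")
idx.add("memo-03", "Bridge deck repair procedures")

print(sorted(idx.search("bridge district")))  # -> ['memo-01']
```

A production system would likely layer ranking and an LLM answer step on top, but the core idea is the same: index the documents once, then answer lookups from the index instead of rereading the files.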
Sources
- How Google is quietly planning to take on Nvidia
- Google in Talks with Marvell to Develop New AI Chips for Inference
- Google looks to scale AI chip ecosystem with Marvell as Nvidia competition heats up
- Google challenges Nvidia with new chips to speed up AI
- Google in Talks with Marvell to Develop New AI Chips
- Google is developing inference AI chips with Marvell to challenge Nvidia
- Is Richard Tice’s picture AI-manipulated? Here are five giveaways
- Reform’s Richard Tice posts picture with telltale signs of AI manipulation, say experts
- How to use AI to strengthen teams instead of destroying them
- Coursedog Acquires ClassRanked to Connect AI-Driven Course Evaluations Across Academic Operations
- Marigold Appoints Elizabeth Smalley as Chief AI Officer and Pat Jenakanandhini as Chief Product & Technology Officer
- The real test of AI is not the chatbot. It’s the emergency call center
- AI chatbots could be making you stupider
- We might finally know how to use quantum computers to boost AI
- Thompson Paine’s Carolina roots help shape AI’s future
- AI Lets State Transportation Departments Use Data More Fully