Key Takeaways
- Stardog promotes Navin Sharma to Chief Product Officer to lead its semantic AI platform growth.
- Navin Sharma will guide global product strategy for a new category called Semantic AI Infrastructure for the Agentic Enterprise.
- Stardog announced the promotion of Navin Sharma to Chief Product Officer on May 11, 2026.
- Craig Harper, CEO of Stardog, stated that trusted data is essential as companies adopt agentic AI at scale.
- Evren Sirin, CTO and Co-Founder of Stardog, praised Sharma for combining technical depth with disciplined product execution.
- A new nonprofit institute will test AI products for children using a model similar to car crash tests.
- Common Sense Media will operate the institute, which is funded by donors including Anthropic, the OpenAI Foundation, and Pinterest.
- Researchers found that leading chatbots often fail to recognize mental health distress in young people.
- Micron Technology has sampled a new 256GB DDR5 server module built on its 1-gamma technology.
- The Micron memory module reaches speeds of up to 9,200 megatransfers per second, over 40% faster than current production modules.

Stardog Promotes Navin Sharma to Chief Product Officer
Stardog has promoted Navin Sharma to Chief Product Officer to lead its semantic AI platform growth. Sharma will guide global product strategy for a new category called Semantic AI Infrastructure for the Agentic Enterprise. This role focuses on connecting fragmented data with AI systems to ensure reliable operations in complex environments. The company is trusted by major organizations like NASA, Boehringer Ingelheim, and Raytheon. Sharma brings extensive experience in building business-to-business data and AI products.
Stardog Names Navin Sharma Chief Product Officer for AI
Stardog announced the promotion of Navin Sharma to Chief Product Officer on May 11, 2026. He will lead the next phase of innovation for the company's semantic AI platform designed for enterprise use. Sharma will focus on creating a semantic layer that connects data and enables AI systems to reason with trust and transparency. Craig Harper, CEO of Stardog, stated that trusted data is essential as companies adopt agentic AI at scale. Evren Sirin, CTO and Co-Founder, praised Sharma for combining technical depth with disciplined product execution.
New Nonprofit Lab Will Test AI Safety for Children
A new nonprofit institute will test AI products for children using a model similar to car crash tests. The initiative will be formally presented at the Danish Parliament with Margrethe Vestager co-hosting the event. Common Sense Media will operate the institute, which is funded by donors including Anthropic, the OpenAI Foundation, and Pinterest. Researchers found that leading chatbots often fail to recognize mental health distress in young people. The institute aims to create transparent safety standards while maintaining full editorial independence from its funders.
Micron Samples Fast 256GB DDR5 Memory for AI
Micron Technology has sampled a new 256GB DDR5 server module built on its 1-gamma technology. This memory module reaches speeds of up to 9,200 megatransfers per second, which is over 40% faster than current production modules. The technology is designed to meet the high demands of cloud and edge computing for AI and machine learning workloads. Sanjay Mehrotra, president and CEO of Micron, called the technology a game-changer for the industry. The module enables faster training and inference times for a wide range of applications.
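The "over 40% faster" figure can be sanity-checked with a quick calculation. The article does not name the baseline, so the 6,400 MT/s figure below is an assumption about typical current-production DDR5 server modules, not a number from Micron:

```python
# Sanity-check the "over 40% faster" claim. The baseline speed is an
# illustrative assumption; only the 9,200 MT/s figure comes from the article.
current_mts = 6_400   # assumed current production DDR5 speed (megatransfers/s)
new_mts = 9_200       # Micron's sampled 1-gamma 256GB DDR5 module

speedup = new_mts / current_mts - 1
print(f"Speedup over baseline: {speedup:.1%}")  # → Speedup over baseline: 43.8%
```

Under that assumed baseline, the gain works out to roughly 44%, consistent with the "over 40%" claim.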
Indra Nooyi Says Board Members Must Learn AI
Former PepsiCo CEO Indra Nooyi says board members who do not learn about AI should step aside. She argues that directors must re-educate themselves on AI to judge whether their companies are doing right by shareholders. Nooyi is currently teaching a course on leadership in the world of AI transformation through MasterClass Executive. She observes AI applications across various industries through her board seats at Amazon, Honeywell, and Philips. Nooyi believes business schools and companies must update their frameworks to teach critical thinking alongside AI use.
New Framework Adds Human Review to AI Coding
Mozilla.ai has introduced the VIBE framework to add human oversight to AI coding agent workflows. The system requires humans to review shared knowledge units before they enter a common memory store. This process aims to prevent automation bias, where users overly trust automated decisions and miss errors. The framework evaluates code based on vulnerability, intention versus impact, bias, and edge case handling. Developers must approve findings before changes are committed to the system memory.
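The gating step described above can be sketched in a few lines. This is a minimal illustration of the review-before-commit pattern, not Mozilla.ai's actual API; every class and field name here is assumed:

```python
# Minimal sketch of a human-in-the-loop gate: an agent's finding only enters
# the shared memory store after explicit human approval. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class KnowledgeUnit:
    summary: str            # what the agent found (e.g. a code issue)
    category: str           # e.g. vulnerability, intent-vs-impact, bias, edge-case
    approved: bool = False

@dataclass
class SharedMemory:
    units: list = field(default_factory=list)

    def commit(self, unit: KnowledgeUnit, human_approved: bool) -> bool:
        # The gate: nothing reaches shared memory without approval.
        if not human_approved:
            return False
        unit.approved = True
        self.units.append(unit)
        return True

memory = SharedMemory()
finding = KnowledgeUnit("unsanitized SQL input in request handler", "vulnerability")
memory.commit(finding, human_approved=False)   # rejected: stays out of memory
memory.commit(finding, human_approved=True)    # accepted after human review
print(len(memory.units))  # → 1
```

The deliberate friction is the point: forcing an approval step before anything is shared counters the automation bias the framework targets.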
Sandia Labs Uses AI to Inspect Ceramic Parts
Sandia National Laboratories is using AI to help inspect ceramic components for nuclear deterrence applications. Engineers are installing new optical and acoustic imaging systems to catch tiny defects earlier in the manufacturing process. The new workflow uses an AI augmentation interface to highlight defects for operators to review on their desktops. This approach saves time and money by identifying issues at the billet level before final component manufacturing. Operators will still double-check the AI results to ensure accuracy.
Atlantic Council Discusses Health Data and AI Policy
The Atlantic Council will host a panel discussion on health data governance and AI policy on Thursday, May 21. The event will bring together congressional staffers and experts to examine regulatory and strategic choices. Topics include barriers to data interoperability, the role of federal agencies, and privacy risks. The discussion aims to address how these factors impact US competitiveness and security in the healthcare sector. The panel will explore the implications of AI transforming healthcare and biotechnology.
AI Environmental Impact Creates Gendered Security Risks
The environmental footprint of AI creates security risks that disproportionately affect women and local communities. Issues include mineral extraction, water stress for data centers, and rising energy demand. In February 2026, hundreds protested against the rapid expansion of data centers in the United States. Data center growth in Thailand and the Asia-Pacific region raises concerns over water use and energy security. Mineral extraction in the Democratic Republic of the Congo also increases health risks for women workers.
OpenAI Exec Says Companies Need Help With AI
An OpenAI executive stated that enterprises seek help to keep up with rapid AI innovation. Companies are adopting new AI models but struggle with the speed of compounding capabilities. The executive, Dresser, said a new company will help organizations build and deploy AI at speed and scale. The new entity will have engineers working side by side with customers to provide immediate feedback. This approach aims to create a tight loop of innovation and learning to scale enterprise adoption globally.
Onvida Health Saves $24K Per Doctor With AI
Onvida Health used ambient AI to reduce after-hours documentation and improve physician retention. The AI tool automatically drafts clinical notes, leading to a 30% reduction in work done after hours. The health system calculated a positive impact of about $24,000 per physician since starting the program. This technology allowed doctors to see one additional patient per day without hiring new staff. Revenue cycle teams also saw a 20% to 25% reduction in charge lag thanks to faster billing.
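A rough calculation shows how a one-patient-per-day gain can plausibly add up to the reported figure. The article gives only the outputs, so the working-day count and per-visit revenue below are illustrative assumptions, not Onvida's data:

```python
# Back-of-the-envelope sketch of the per-physician impact figure.
# Only "one additional patient per day" comes from the article; the
# other inputs are assumed for illustration.
extra_patients_per_day = 1      # reported: one more patient per day
clinic_days_per_year = 220      # assumed working days per year
revenue_per_visit = 110         # assumed average reimbursement, USD

annual_impact = extra_patients_per_day * clinic_days_per_year * revenue_per_visit
print(f"Estimated annual impact per physician: ${annual_impact:,}")  # → $24,200
```

With those assumed inputs the estimate lands near the ~$24,000 per physician the health system reported, which suggests the figure is consistent with the added-visit volume alone.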
Sources
- Stardog Promotes Navin Sharma To Chief Product Officer To Lead Semantic AI Platform Growth
- Stardog Names Navin Sharma Chief Product Officer to Lead Next Phase of Semantic AI Innovation
- New nonprofit safety lab will 'crash-test' AI products for children
- Micron Redefines AI Performance With Sampling of 256GB DDR5 Server Module
- Indra Nooyi says board members who won’t learn AI should step aside: ‘What are they going to contribute?’
- VIBE✓ adds friction to AI coding agents
- AI’s eyes to help with component inspections
- Health data and AI at scale: Policy choices for US competitiveness and security
- AI’s Environmental Footprint is a Gendered Security Risk
- OpenAI Exec Says Enterprises Seek Help With AI Innovation
- How Onvida Health's Ambient AI Investment Yielded $24K Per Physician