Google Gemma Flaws, Danny Brickman Launches, Amazon, Reltio

The artificial intelligence sector is experiencing a period of intense activity, marked by significant investment alongside considerable challenges. A recent MIT report, highlighted by Lightbeam Health Solutions, indicates that 95% of generative AI projects fail to reach full production despite billions invested. These failures often stem from AI's propensity to generate false information, exhibit bias, produce inconsistent results, and present security vulnerabilities like prompt injection. Concerns about AI's reliability are particularly acute in critical fields such as healthcare. Betsy Castillo, a nurse expert from Carta Healthcare, warns that AI tools must possess a deep understanding of healthcare's unique complexities to avoid financial waste, staff frustration, and inaccurate data that could compromise patient care. The United States emphasizes a human-centered approach, advocating for clinician involvement in AI development and continuous monitoring to ensure patient safety and build trust. This need for robust evaluation extends to AI safety tests themselves, with experts from the British government's AI Safety Institute and Oxford Internet Institute, including Andrew Bean, identifying serious flaws in nearly all current methods, making their results unreliable. This issue gained prominence after Google's Gemma models produced false information. The emergence of Agentic AI is introducing new security and governance imperatives. Danny Brickman, CEO of Oasis Security, in collaboration with Sequoia Capital, has launched the Agentic Access Management Framework to address the growing security risks posed by AI agents, which may soon outnumber human employees. Similarly, CyberArk introduced its Secure AI Agents Solution to provide least-privilege access and real-time threat monitoring for AI identities, recognizing a widespread lack of strong security for these agents. The DataDriven 2026 conference, featuring Reltio CEO Manish Sood, will further explore the challenges Agentic AI presents to company data systems, as many executives, despite recognizing its transformative potential, are unprepared for its data governance and security demands. Economically, the enthusiasm for AI is tempered by caution regarding returns. HSBC CEO Georges Elhedery and General Atlantic CEO William Ford, speaking at a Hong Kong summit, warned that the vast investments in AI may not match current revenues, suggesting that consumers are not yet ready to pay for AI services and that full benefits could take five to ten years to materialize. This comes as companies, including Amazon, attribute recent job layoffs to AI, though MIT economics professor David Autor suggests AI might sometimes serve as an excuse for other economic pressures. Globally, efforts to advance AI capabilities continue, albeit with hurdles. Germany has expressed concern over the European Commission's insufficient funding for new AI training centers, or "gigafactories," crucial for the EU's global competitiveness. Meanwhile, the International RegLab Joint Project convened a workshop in Toronto to discuss the safe integration of AI into nuclear power operations, focusing on developing safety frameworks. In the commercial sphere, EXL has been recognized for the second consecutive year as a Leader in Generative AI Services, demonstrating its expertise. Additionally, Agatha Global Tech (AGT) launched GrantAI, an AI-powered tool on its Annuities Genius platform, designed to streamline annuity research and personalized recommendations for financial advisors.

Key Takeaways

  • 95% of generative AI projects fail to reach full production, often due to issues like factual inaccuracies, bias, and security weaknesses, despite billions invested.
  • Healthcare AI tools require deep understanding of the sector's complexities to avoid waste, staff frustration, and inaccurate patient data, as warned by Carta Healthcare's Betsy Castillo.
  • Human-centered AI development, involving clinicians and diverse data, is crucial for patient safety and building trust in healthcare.
  • Oasis Security, led by CEO Danny Brickman, and Sequoia Capital launched the Agentic Access Management Framework to govern AI agent access, anticipating AI agents may soon outnumber human employees.
  • CyberArk introduced its Secure AI Agents Solution to provide least-privilege access and real-time monitoring for AI identities, addressing a widespread lack of strong security for AI agents.
  • AI safety tests have serious flaws, making their results unreliable and highlighting a need for better standards, especially after issues with models like Google's Gemma producing false information.
  • HSBC CEO Georges Elhedery and General Atlantic CEO William Ford warn that massive AI investments may not match current revenues, with full benefits potentially taking five to ten years to materialize.
  • Companies like Amazon are attributing recent job layoffs to AI, though MIT economics professor David Autor suggests AI can sometimes serve as an excuse for other economic factors.
  • Germany is concerned about insufficient EU funding for new AI training centers, or "gigafactories," which are vital for the EU's global AI competitiveness.
  • Reltio CEO Manish Sood will discuss Agentic AI's data challenges at the DataDriven 2026 conference, as most executives are unprepared for its data governance and security demands.

Healthcare AI tools carry risks says expert

Betsy Castillo, a nurse expert from Carta Healthcare, warns hospitals about the risks of using AI tools from vendors. She says AI tools must understand healthcare's unique complexities to avoid problems. Without this understanding, AI can waste money, frustrate staff, and provide inaccurate data. This can lead to wrong decisions about patient care and safety. Castillo advises organizations to ask vendors specific questions to ensure their AI tools are truly helpful.

Most Generative AI projects fail report finds

Andy De from Lightbeam Health Solutions discusses a new MIT report showing 95% of generative AI projects fail. Despite billions invested, most projects do not reach full production. Key reasons for failure include AI making up facts, showing bias, giving inconsistent results, and having security weaknesses like prompt injection. Generative AI also has limited uses, especially in critical fields like healthcare. The report suggests Agentic AI and AI Agents are the next big step, offering more autonomous and scalable solutions.

Human-centered AI improves healthcare safety

The United States needs to make sure AI tools improve healthcare while keeping patients safe and respecting clinical expertise. This requires involving clinicians in developing and testing AI, using diverse data, and designing tools for real-world use. Continuous checks after launch and special contracts can also help. Without clinician input, AI tools may not work well or gain trust, leading to wasted effort. Designing AI with human needs in mind will help build confidence and ensure better patient outcomes.

Oasis Security and Sequoia launch AI governance framework

Oasis Security and Sequoia Capital introduced the Agentic Access Management Framework to help companies manage AI agents. Danny Brickman, CEO of Oasis Security, states that AI agents may soon outnumber human employees, creating new security risks. The framework provides a seven-pillar model and a free assessment to help organizations govern AI access. Caleb Tennis from Sequoia Capital emphasizes that this framework is crucial for securely adopting Agentic AI. It helps businesses maintain visibility and control as they use more AI tools.

CyberArk secures AI agents with new solution

CyberArk launched its Secure AI Agents Solution to protect AI identities. This new tool gives AI agents only the access they need, reducing risks and preventing unauthorized use. A CyberArk study shows that while many companies will use AI agents soon, very few have strong security in place. The solution helps find AI agents, secures their access with strict controls, and monitors for threats in real time. This allows businesses to use AI more safely and meet compliance rules.

AI safety tests have serious flaws say experts

Experts from the British government's AI Safety Institute and Oxford Internet Institute found many flaws in tests used to check AI safety. Andrew Bean, the lead researcher, stated that almost all these tests have weaknesses, making their results unreliable. These tests are important for evaluating new AI models from big tech companies. The study highlights a need for better standards, especially after issues like Google's Gemma models producing false information. This research shows that current methods for checking AI safety are not strong enough.

Germany worries about EU funding for AI hubs

Germany is concerned that the European Commission has not secured enough money for new AI training centers called "gigafactories." These centers are crucial for the EU to keep up with the US and China in AI development. A German diplomat warned that current EU funds are not enough for five years, which could make investors hesitant. The Commission plans to seek more money from other banks and reallocate unused funds. This funding gap could slow down the EU's efforts to build strong AI capabilities.

Regulators and operators discuss AI in nuclear power

The International RegLab Joint Project held a workshop in Toronto, Canada, to discuss using artificial intelligence in nuclear operations. About 30 participants, including regulators and operators from seven countries, explored AI's potential and challenges. They examined a case where AI monitors sensor data to detect plant faults early and manage risks. The workshop also focused on developing safety frameworks for systems that use machine learning. A public report will finalize this phase, with RegLab2 planned for May 2026 in Korea.

EXL recognized as a top Generative AI service leader

EXL, a global company focused on data and AI, has been named a Leader in the 2025 ISG Provider Lens Generative AI Services Global report. This marks the second year in a row EXL received this honor in both the Midsize Strategy and Consulting and Development and Deployment Services categories. The report recognizes EXL for its strong ability to help business leaders use Generative AI effectively. This achievement highlights EXL's continued expertise in the growing field of AI services.

Agatha Global launches AI tool for annuity advisors

Agatha Global Tech, or AGT, has released GrantAI, a new AI-powered tool for financial advisors. This tool, available on AGT's Annuities Genius platform, helps advisors quickly research and compare annuity products. GrantAI uses conversational AI to create personalized recommendations for clients, making the entire process simpler and faster. Financial advisors, distributors, and institutions across the United States can now use GrantAI to improve their annuity services.

Companies blame AI for recent job layoffs

Many companies, including Amazon, are blaming artificial intelligence for recent job cuts affecting thousands of employees. However, some experts like MIT economics professor David Autor suggest that companies might use AI as an excuse. He believes it is easier to blame AI than to admit to lower profits or a slowing economy. While AI is indeed replacing some jobs, understanding its true impact on the economy remains a major challenge. Generative AI, in particular, is quickly changing many tasks from medical diagnosis to software development.

DataDriven 2026 conference to explore Agentic AI

The DataDriven 2026 conference will bring together top data and AI experts in Orlando next February. The event will focus on Agentic AI and how it challenges company data systems. Keynote speaker Ethan Mollick and Reltio CEO Manish Sood will discuss how AI is changing work. A Harvard Business Review study shows most executives believe Agentic AI will transform work, but few are ready due to data challenges. The conference will offer practical advice on data governance, security, and building strong data foundations for AI.

CEOs warn AI investments may not match returns

HSBC CEO Georges Elhedery and General Atlantic CEO William Ford warned that huge investments in AI may not match current revenues. Speaking at a summit in Hong Kong, Elhedery noted that consumers are not ready to pay for AI, and real benefits will take five to ten years. Ford agreed, comparing AI's long-term impact to railroads or electricity, which took decades to show full effects. Both CEOs cautioned against "irrational exuberance" and potential misallocation of capital in the early stages of AI adoption.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Access control Access management Agentic AI AI adoption AI Agents AI and jobs AI bias AI conferences AI consulting AI development AI deployment AI evaluation AI funding AI governance AI identities AI implementation AI in nuclear power AI infrastructure AI investments AI models AI project failure AI regulation AI reliability AI returns AI risks AI safety AI scalability AI security AI standards AI testing AI tools Annuity products Capital allocation Clinical expertise Compliance Conversational AI Data accuracy Data diversity Data foundations Data governance Economic impact of AI EU AI strategy Financial AI Financial advisors Generative AI Generative AI services Gigafactories Healthcare AI Human-centered AI Industrial AI Job displacement Machine learning Patient care Patient safety Prompt injection Safety frameworks Threat monitoring Vendor management AI trust

Comments

Loading...