Google, Amazon, and Microsoft Updates

Google DeepMind is advancing software security with CodeMender, an AI agent that autonomously finds and fixes vulnerabilities. Over the past six months, CodeMender has submitted 72 security fixes to open-source projects, acting both reactively to patch immediate flaws and proactively by rewriting code to prevent entire classes of bugs. The agent uses Gemini Deep Think models and includes a validation process with an 'LLM judge' to check that fixes are correct before human review. Google is also bolstering its AI security strategy with a new AI Vulnerability Reward Program (AI VRP) and an updated Secure AI Framework (SAIF 2.0).

Meanwhile, Amazon founder Jeff Bezos describes the broader AI investment landscape as a 'good bubble' that will leave lasting innovations, even as investors struggle to distinguish good ideas from bad. Microsoft CTO Kevin Scott outlined the company's AI strategy at TechCrunch Disrupt 2025, emphasizing productivity gains and responsible AI development.

In identity security, Saviynt has opened a major AI-driven innovation hub in Bengaluru focused on securing human and non-human identities, and experts at Oktane 2025 discussed the critical role of identity management in securing AI-driven enterprises while warning of AI-driven social engineering risks.

On the geopolitical front, White House AI czar David Sacks defended the Trump administration's policy of allowing sales of less advanced AI chips from companies like Nvidia and AMD to China, arguing it maintains US dominance in the AI race. The recent OpenAI-AMD deal is seen as a strong indicator of a thriving AI revolution, with attention also on data centers and supporting infrastructure.

Finally, enterprises are advised to adopt a governance-first approach that balances AI's value against security risks through privacy controls and auditable logs, and Maryland is offering grants for cybersecurity and AI training programs to strengthen its workforce.

Key Takeaways

  • Google DeepMind's CodeMender AI autonomously finds and fixes software security vulnerabilities, submitting 72 fixes to open-source projects in six months.
  • CodeMender uses Gemini Deep Think models and an 'LLM judge' for automated validation of code fixes.
  • Google is enhancing its AI security with a new AI Vulnerability Reward Program (AI VRP) and Secure AI Framework 2.0.
  • Amazon founder Jeff Bezos views the current AI investment surge as a 'good bubble' that will yield lasting innovations.
  • Microsoft CTO Kevin Scott discussed the company's AI strategy, focusing on productivity and responsible AI development.
  • Saviynt opened a large AI-driven identity security hub in Bengaluru to enhance security for human and non-human identities.
  • Experts at Oktane 2025 stressed the importance of identity management for AI security and warned of AI-driven social engineering.
  • White House AI czar David Sacks supports selling 'deprecated' AI chips from Nvidia and AMD to China to maintain US AI dominance.
  • The OpenAI-AMD deal signals a strong AI market, with attention also on data center infrastructure investments.
  • Enterprises are urged to adopt a governance-first strategy to balance AI benefits with security risks.

CodeMender AI automatically finds and fixes software security flaws

Researchers have developed CodeMender AI, an autonomous agent that automatically finds and fixes security vulnerabilities in software. This AI acts as both a security engineer, instantly patching new flaws, and a proactive tool that rewrites code to prevent entire classes of bugs. CodeMender uses advanced AI models like Gemini Deep Think and a multi-agent system where a judge critiques fixes to ensure accuracy. It has already submitted 72 security fixes to open-source projects, aiming to eventually become a tool for all developers.

Google's CodeMender AI fixes software security bugs autonomously

Google DeepMind has created CodeMender, an AI agent that automatically finds and fixes critical security vulnerabilities in software. In the past six months, CodeMender has provided 72 security fixes to open-source projects. The AI can instantly patch new flaws or proactively rewrite code to eliminate entire categories of security risks. It uses Gemini Deep Think models and advanced analysis tools, with a built-in validation process to ensure fixes are correct before human review. This allows developers to focus more on creating new features.

Google DeepMind's CodeMender AI autonomously patches software vulnerabilities

Google DeepMind has unveiled CodeMender, an AI agent that automatically detects, patches, and rewrites vulnerable code to prevent future exploits. This AI builds on previous projects by combining Gemini Deep Think models with advanced program analysis. CodeMender has already submitted 72 security fixes to open-source projects, acting both reactively to patch new flaws and proactively by rewriting code to eliminate entire classes of vulnerabilities. It includes a validation process with an 'LLM judge' to ensure fixes are correct before human review.

Google's new AI agent CodeMender enhances code security

Google has developed CodeMender, a new AI-powered agent that automatically fixes critical software vulnerabilities. Over the last six months, CodeMender has provided 72 security fixes to open-source projects, some with millions of lines of code. The agent is both reactive, patching new flaws instantly, and proactive, rewriting code to eliminate entire classes of vulnerabilities. It uses Gemini Deep Think models and includes automated validation to ensure patches are correct and do not cause new issues before human review.

Google DeepMind's CodeMender AI fixes code vulnerabilities automatically

Google DeepMind has introduced CodeMender, an AI agent that automatically fixes critical software vulnerabilities. In the past six months, CodeMender has submitted 72 security fixes to open-source projects, including some with very large codebases. The AI is designed to be both reactive, patching new flaws instantly, and proactive, rewriting code to eliminate entire classes of bugs. It uses Gemini models and advanced analysis tools, with a validation system that ensures patches are correct and do not cause regressions before human review.

Google's AI strategy includes CodeMender, AI VRP, and Secure AI Framework 2.0

Google is enhancing AI security with CodeMender, an AI agent that automatically fixes code vulnerabilities. It uses Gemini models for root cause analysis and self-validated patching, with critique agents ensuring correctness before human review. Google is also launching a new AI Vulnerability Reward Program (AI VRP) to encourage security research and expanding its Secure AI Framework to SAIF 2.0, focusing on securing AI agents. These efforts aim to use AI for defense and ensure AI systems are secure by design.

AI and Identity Security: Experts discuss risks and solutions at Oktane 2025

Leaders at Oktane 2025 discussed how identity management is crucial for securing AI-driven enterprises. Experts like Dor Fledel and Aaron Parecki highlighted the risks of growing human and non-human identities, emphasizing the need for visibility and access controls for AI agents. They also discussed open standards for AI ecosystems and how companies like Box embed AI for data protection. Nitin Raina warned about AI-driven social engineering, recommending phishing-resistant MFA and zero-trust architecture.

Saviynt opens major AI-driven identity security hub in Bengaluru

Saviynt has opened its largest global innovation hub in Bengaluru, India, spanning 62,000 sq. ft. and housing over 650 employees. This hub will drive AI-led research and development in identity security, supporting India's growing digital economy and the IndiaAI Mission. Saviynt's Bengaluru teams have already developed key solutions like Saviynt Identity Security and Governance and AI security. The new center focuses on embedding agentic AI and automation into their platform to secure both human and non-human identities.

Trump's AI Czar David Sacks defends China stance on AI chips

White House AI czar David Sacks defended the Trump administration's approach to China, stating the US must win the AI race. He supported allowing companies like Nvidia and AMD to sell less-advanced AI chips to China, calling them 'deprecated.' Sacks argued this strategy maximizes the number of users on the US tech stack while keeping China at bay, contrasting it with Biden administration policies. He also praised the AMD-OpenAI deal for building AI infrastructure.

AI Czar Sacks explains US policy on selling AI chips to China

David Sacks, White House AI and Crypto Czar, discussed the US policy on selling 'deprecated' AI chips to China. He stated that the administration aims to ensure the world operates on a US-built AI stack. Sacks explained the rationale behind allowing sales of less advanced chips, arguing it helps maintain US dominance in the AI race.

University students use AI to create vitamin reminder prototypes

University of Nebraska-Lincoln students used artificial intelligence tools and entrepreneurial techniques to design products helping 10- to 12-year-olds remember to take vitamins. In the eighth annual Innovation Challenge, teams created prototypes judged by elementary school students. The winning concept, Zippy, was an interactive reminder device that plays music and offers gentle cues. The challenge emphasized hands-on learning, adapting pitches for different audiences, and the importance of customer feedback.

Jeff Bezos calls AI investment a 'good bubble' with lasting benefits

Amazon founder Jeff Bezos described the current wave of AI investment as a 'good bubble' that will leave behind valuable innovations, unlike the 2008 financial crisis. He explained that industrial bubbles, like the dotcom era's fiber-optic networks, result in lasting societal benefits. Bezos acknowledged that investors struggle to distinguish good ideas from bad during such excitement but stressed that AI is real and will transform every industry. He also discussed his space ambitions with Blue Origin.

ESF's Shayan Mirzabeigi joins SUNY AI for Public Good initiative

Shayan Mirzabeigi from the SUNY College of Environmental Science and Forestry has been named to the inaugural class of SUNY's AI for the Public Good Fellows. This program includes 20 faculty and staff members who are experts in various fields, from health sciences to sustainable resource management. The initiative aims to leverage AI for public benefit across different academic disciplines.

Microsoft CTO Kevin Scott discusses AI strategy at TechCrunch Disrupt

Microsoft CTO Kevin Scott shared the company's AI strategy at TechCrunch Disrupt 2025, detailing how AI is being integrated across its products. He emphasized AI's potential to boost productivity and create new business models, while also stressing the importance of responsible AI development, fairness, and safety. Scott offered advice to startups on navigating the AI landscape and highlighted Microsoft's commitment to leading AI innovation.

Maryland offers grants for cybersecurity and AI training programs

The Maryland Department of Labor is providing up to $1 million in grants through its new Cyber and Artificial Intelligence (AI) Pilot Clinic Grant Initiative. The funding will support projects from February 1, 2026, to January 31, 2029, to strengthen cybersecurity workforces and digital protections for community organizations. Proposals are due by December 10, 2025, and eligible applicants include colleges, universities, and non-profits. The initiative aims to give Marylanders hands-on experience in cybersecurity and AI.

OpenAI-AMD deal shows AI investment is strong, says CIO

An investment chief believes the OpenAI deal for AMD chips confirms the AI revolution is thriving. The CIO prefers investing in companies beyond semiconductors, focusing on data centers and their supporting infrastructure like power solutions and HVAC systems. These areas are seeing significant spending and represent key beneficiaries of the ongoing AI boom.

Enterprises can balance AI speed and security with governance

Enterprises face a challenge balancing AI's value from unstructured content with security risks like information breaches. A governance-first approach is key, integrating controls like centralized policies, least-privilege access, and auditable logs from the start. This ensures LLM privacy, traces outputs to sources, and layers security onto existing systems. By implementing robust governance, businesses can achieve both innovation and strong security.
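
As a minimal sketch of the governance-first controls named above (centralized policies, least-privilege access, auditable logs), the snippet below shows one way such a layer might be wired in front of AI requests. All names and the policy shape are assumptions for illustration, not a specific product's API.

```python
import json
import time

# Centralized policy: role -> set of permitted actions (illustrative values).
POLICIES = {
    "analyst": {"summarize"},
    "admin": {"summarize", "export"},
}

# Append-only record of every decision, so each access is auditable later.
AUDIT_LOG: list[str] = []

def authorize(user: str, role: str, action: str, doc_id: str) -> bool:
    """Least-privilege check that writes an audit entry for every request,
    allowed or denied, before returning the decision."""
    allowed = action in POLICIES.get(role, set())
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(),
        "user": user,
        "role": role,
        "action": action,
        "doc": doc_id,
        "allowed": allowed,
    }))
    return allowed
```

In this shape, `authorize("alice", "analyst", "export", "doc-42")` is denied under least privilege, yet the attempt still lands in the log, which is what makes the system auditable rather than merely restrictive.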

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI Security, CodeMender, AI Vulnerability Patching, Autonomous Agents, Gemini Deep Think, Open Source Security, Software Development, Identity Security, AI Governance, AI Investment, AI Strategy, Cybersecurity Training, AI Chips, Responsible AI, AI for Public Good, LLM Privacy, Data Protection, AI Ethics, AI Innovation, AI Market
