Cloudflare partners with Google Cloud's Wiz as Novee Security names new VP of Product

AI agents are rapidly integrating into organizations, but their adoption outpaces security measures. Research from Rubrik Zero Labs indicates that 86% of IT and security leaders anticipate AI agents will surpass their current security guardrails within a year. Alarmingly, only 23% have full visibility into the agents operating in their environments. This lack of oversight contributes to a significant surge in AI-related attacks, which have increased by nearly 490% year over year. These attacks often involve sensitive data and exploit uncontrolled data exposure and shadow AI embedded in SaaS applications.

Traditional identity and access management (IAM) platforms, designed for human users, struggle to manage autonomous, non-human AI agents effectively. These agents operate at machine speed, creating new challenges for performance, cost, and data sovereignty. Experts emphasize that controlling AI requires governing identities, permissions, and integrations, rather than just the models themselves. Current AI governance tools primarily discover risks but lack enforcement capabilities, particularly in SaaS environments. Cloudflare is partnering with Google Cloud's Wiz to address these issues, aiming to enhance AI application security by detecting prompt injection and personal data leaks.

The United States significantly leads Europe in workplace AI adoption, with 43% of US workers using generative AI compared to 32% in major European economies. US workers also dedicate more time to AI, averaging 5.2% of their total work hours. Organizations are finding success through a "learn by doing" approach, starting with small, low-risk experiments rather than extensive planning. Meanwhile, the digital remnants of defunct companies, including Slack archives and Jira tickets, are becoming valuable premium training data for AI labs, with companies like SimpleClosure specializing in selling this "operational exhaust."

In other news, the shoe company Allbirds is pivoting its business to focus on the artificial intelligence industry, launching a new venture called NewBird AI, though this move faces criticism as unoriginal trend-jacking. Novee Security has appointed Netta Rager Dan as its new Vice President of Product to scale its AI Agents platform, which offers autonomous penetration testing. Circuit & Chisel launched new pay-as-you-go ATXP products (ATXP Music, ATXP Pics, ATXP Chat) for AI agents, enabling independent transactions and already surpassing 1,000,000 transactions. VMware also released its Tanzu Platform 10.4 and AI Agent Foundation technology, providing a secure runtime for AI agents with features like sandboxing and zero-trust networking.

A Stanford University report highlights that China has considerably narrowed the artificial intelligence gap with the United States. While the U.S. still leads in the number of top AI models, China excels in publication citations and industrial robot installations. The performance gap between top AI models has significantly shrunk, and the flow of tech experts to the U.S. has declined. Despite higher private investment in the U.S., China's surge in AI research and development presents a growing challenge to American technological leadership.

Key Takeaways

  • AI agent adoption is outpacing security measures, with 86% of IT/security leaders expecting agents to exceed guardrails within a year, and only 23% having full visibility.
  • AI-related attacks have surged by nearly 490% year over year, driven by uncontrolled data, shadow AI in SaaS, and misuse of OAuth tokens.
  • Traditional identity and access management (IAM) platforms are inadequate for autonomous AI agents, requiring new approaches for non-human identities, scoped access, and short-lived credentials.
  • Effective AI governance needs to move beyond risk discovery to continuous control and enforcement across identities, permissions, and integrations, especially in SaaS environments.
  • Cloudflare and Google Cloud's Wiz are partnering to enhance AI application security, focusing on detecting prompt injection and personal data leaks.
  • The U.S. leads Europe in workplace AI adoption (43% vs. 32%), with US workers spending 5.2% of their work hours on AI.
  • Successful AI adoption emphasizes "learning by doing" through small, low-risk experiments rather than lengthy planning.
  • Defunct companies' digital data (e.g., Slack archives, Jira tickets) is now valuable premium training material for AI labs, with companies like SimpleClosure facilitating its sale.
  • Circuit & Chisel launched ATXP Music, ATXP Pics, and ATXP Chat, pay-as-you-go products enabling AI agents to transact independently, having already processed over 1,000,000 transactions.
  • China has significantly narrowed the AI gap with the U.S., excelling in publication citations and industrial robot installations, challenging American technological leadership.

Agentic AI is here, challenging traditional identity security

Agentic AI introduces autonomous, non-human agents that operate at machine speed, creating new challenges for traditional identity and access management (IAM) platforms. These platforms were designed for human users and struggle with the performance, cost, and data sovereignty needs of AI agents. Organizations require identity platforms that support open standards and can scale rapidly to manage these new autonomous systems. Security leaders must rethink how to issue access and secure these non-human identities, which operate continuously and can pose risks if their behavior deviates from intended actions. Ensuring traceability across complex agent workflows is also crucial for understanding decision-making processes.
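The traceability requirement above can be sketched with a simple correlation-ID pattern: every action an agent takes is logged under one trace ID for the whole workflow, so the decision chain can be reconstructed afterwards. This is a minimal illustration, not any vendor's API; the agent names and log schema are invented for the example.

```python
import time
import uuid


def run_agent_step(trace_id: str, agent_id: str, action: str, log: list) -> None:
    """Record an agent action under a shared trace ID so multi-step workflows stay auditable."""
    log.append({
        "trace_id": trace_id,
        "agent": agent_id,
        "action": action,
        "ts": time.time(),
    })


audit_log: list = []
trace = str(uuid.uuid4())  # one trace ID per workflow, shared across all participating agents

run_agent_step(trace, "planner-agent", "decompose_request", audit_log)
run_agent_step(trace, "executor-agent", "call_crm_api", audit_log)

# Reconstruct the decision chain for a single workflow afterwards:
chain = [e["action"] for e in audit_log if e["trace_id"] == trace]
print(chain)  # ['decompose_request', 'call_crm_api']
```

In practice this role is filled by distributed-tracing infrastructure, but the principle is the same: without a shared identifier linking an agent's steps, there is no way to answer "why did the agent do this?" after the fact.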

Rubrik warns of security gaps as AI agent adoption grows

New research from Rubrik Zero Labs reveals that organizations are adopting AI agents without adequate security controls, creating a significant gap between innovation and safety. Eighty-six percent of IT and security leaders expect AI agents to outpace their organization's security guardrails within a year. Only 23% have full visibility into the agents operating in their environments, leading to an inability to secure identities that are already making decisions and interacting with critical data. The report also found that most agents require more manual oversight than they save in efficiency and that organizations lack the ability to roll back agent actions without disruption.

AI governance tools must control risk, not just discover it

The AI governance market in 2026 is characterized by tools that discover AI risk but lack enforcement capabilities, creating a critical gap. AI-related attacks have surged by nearly 490 percent year over year, with AI embedded across thousands of SaaS applications often without clear ownership. Effective AI governance requires visibility into AI usage, control over access, and enforcement of policies across identities and integrations. Current tools often focus on discovery and risk assessment but fail to provide continuous control, especially in SaaS environments where identity, access, and integrations are key risk drivers.

AI security risks grow with uncontrolled data and shadow AI

AI risk is now operational and scaling rapidly, driven by uncontrolled data exposure, shadow AI embedded in SaaS applications, and the abuse of OAuth tokens. AI-related attacks have increased nearly 490 percent year over year, with sensitive data involved in the majority of incidents. Shadow AI, often hidden within trusted platforms, expands risk without security review, while OAuth integrations grant broad, persistent access that is rarely revisited. The proliferation of unmanaged non-human identities and AI supply chain risks further complicate security. Experts emphasize that controlling AI means governing identities, permissions, and integrations, not just models.
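The "broad, persistent access that is rarely revisited" problem lends itself to a periodic audit. The sketch below flags OAuth-style grants that either hold over-broad scopes or have gone unused past a staleness window; the grant inventory, scope names, and thresholds are hypothetical placeholders, not a real SaaS admin API.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of OAuth grants, as might be exported from a SaaS admin console.
grants = [
    {"app": "notes-ai", "scopes": ["files.read.all", "mail.read"], "last_used": datetime(2024, 1, 5)},
    {"app": "calendar-bot", "scopes": ["calendar.read"], "last_used": datetime(2025, 6, 1)},
]

# Scopes considered over-broad for a third-party integration (illustrative list).
BROAD_SCOPES = {"files.read.all", "mail.read", "directory.read.all"}


def flag_risky_grants(grants, now, stale_after=timedelta(days=90)):
    """Flag grants that are over-broad or have gone unused longer than the stale window."""
    findings = []
    for g in grants:
        reasons = []
        if BROAD_SCOPES & set(g["scopes"]):
            reasons.append("broad scope")
        if now - g["last_used"] > stale_after:
            reasons.append("stale")
        if reasons:
            findings.append((g["app"], reasons))
    return findings


print(flag_risky_grants(grants, now=datetime(2025, 7, 1)))
# → [('notes-ai', ['broad scope', 'stale'])]
```

The point of the sketch is the governance loop, not the specific thresholds: integrations granted broad access should be re-reviewed on a schedule rather than trusted indefinitely.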

Secure AI agents need strong authentication and access controls

Authenticating AI agents is a critical security boundary that determines the potential impact and manageability of autonomous systems. AI agents amplify credential risks by inheriting user credentials or using shared service accounts, making them vulnerable to subversion. Secure autonomy requires treating AI agents as governed non-human identities with scoped access and short-lived credentials. Current authentication methods like API keys and OAuth tokens are often misused or over-provisioned, creating significant governance gaps. Robust authentication design is essential to enforce boundaries between AI autonomy and enterprise infrastructure.
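The scoped, short-lived credential pattern above can be sketched as a signed token carrying an agent identity, an explicit scope list, and an expiry. This is a toy illustration, assuming an HMAC-signed format and invented helper names; a real deployment would use a standard token format (e.g. JWT) with keys held in a managed secrets service rather than an in-process constant.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only; production keys belong in a KMS/secrets manager


def mint_agent_token(agent_id: str, scopes: list, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, scope-limited token for a non-human identity."""
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"


def check_token(token: str, required_scope: str) -> bool:
    """Reject tampered, expired, or out-of-scope tokens."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]


token = mint_agent_token("billing-agent-01", ["invoices:read"])
print(check_token(token, "invoices:read"))   # granted scope → True
print(check_token(token, "invoices:write"))  # scope never granted → False
```

The design point is that the agent never holds a human's credentials: its token names the agent itself, carries only the permissions the workflow needs, and expires quickly, so a compromised or misbehaving agent has a small, short-lived blast radius.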

US leads Europe in AI adoption, but challenges remain

Recent reports indicate that the United States leads Europe significantly in workplace AI adoption, with 43% of US workers using generative AI compared to 32% across six major European economies. US workers also spend more time using AI at about 5.2% of their total work hours. This adoption gap is widening, with the US showing faster growth. However, challenges persist, including uneven AI capabilities, adoption rates, and public trust. Designing AI products for a global audience requires considering these varying levels of user familiarity and dependence.

Learn by doing: The effective AI adoption playbook

Organizations that succeed with AI adoption are learning by doing rather than relying on lengthy planning processes. A nine-month discovery phase can become outdated before recommendations are delivered, causing organizations to lose valuable learning time. The most effective approach involves starting with small, low-risk experiments on single workflows, measuring the impact, and iterating based on learnings. AI is an experiential capability that requires hands-on use to understand its potential and refine its application within an organization.

Allbirds shifts focus to AI industry

San Francisco-based shoe company Allbirds is pivoting its business to focus on the artificial intelligence industry. The company is launching a new venture called NewBird AI. This move marks a significant change in direction for the footwear brand.

Allbirds' AI pivot criticized as unoriginal trend-jacking

The shoe brand Allbirds is facing criticism for its recent pivot to focusing entirely on artificial intelligence. Critics compare the move to Big Tech's trend-jacking and suggest it lacks originality and a clear strategy. This decision raises questions about the actual benefits of AI for a footwear company, with many viewing it as more of a marketing tactic than genuine innovation. The situation highlights a broader concern about companies rushing into AI adoption without a deep understanding or clear vision.

Defunct companies' data now valuable AI training material

The digital remnants of defunct companies, including Slack archives, Jira tickets, and emails, are now being sold as premium training data for AI. AI labs, having exhausted public internet data, are seeking this "operational exhaust" to train AI models, especially for agentic capabilities. Companies like SimpleClosure specialize in helping wind down businesses by selling their digital footprints, providing founders with financial closure and AI labs with valuable real-world data. This data richness, including internal traceability and cross-platform linkages, commands significant value in the AI arms race.

Circuit & Chisel launches ATXP products for AI agents

Circuit & Chisel has launched three new pay-as-you-go products, ATXP Music, ATXP Pics, and ATXP Chat, built on their ATXP platform for AI agents. These products allow builders to create AI applications that can transact independently across the internet without subscriptions or complex billing. ATXP-powered products have already surpassed 1,000,000 transactions, with ATXP Music generating 30,000 songs. The company aims to simplify agent-to-agent payments and enable AI agents to handle payments, find tools, and operate autonomously.

Novee Security appoints Netta Rager Dan as VP Product

Novee Security has appointed Netta Rager Dan as its new Vice President of Product. In her role, Rager Dan will lead the company's product strategy and execution, focusing on scaling Novee's AI Agents platform. This platform offers autonomous penetration testing, also known as AI Red Teaming, to identify complex security vulnerabilities. Rager Dan brings over a decade of experience in product leadership and cybersecurity, having previously held key roles at Medigate and Claroty.

China narrows AI lead over US, Stanford report finds

A Stanford University report indicates that China has significantly closed the gap with the United States in artificial intelligence capabilities. While the U.S. still leads in the number of top AI models, China excels in publication citations and industrial robot installations. The gap in AI model performance has shrunk considerably, and the flow of tech experts to the U.S. has dramatically declined. Despite higher private investment in the U.S., China's surge in AI research and development poses a challenge to American technological leadership.

Cloudflare and Google Cloud's Wiz partner on AI security

Cloudflare is collaborating with Google Cloud's Wiz to enhance the security of AI applications. This partnership aims to detect prompt injection and personal data leaks by integrating Cloudflare's AI security solutions with Wiz's cloud security platform. The goal is to provide a comprehensive approach to securing AI deployments and build trust in generative AI technologies. The integration is expected to offer real-time threat detection and response capabilities tailored for AI workloads.

VMware launches Tanzu Platform and AI Agent innovations

VMware has released its new Tanzu Platform 10.4 and AI Agent Foundation technology, designed for the demands of the agentic era. The AI Agent Foundation offers a secure-by-default runtime for AI agents, providing a sandbox environment with autoscaling and credential management. Key innovations include an immutable supply chain using trusted Buildpacks, zero-trust networking, and sandboxing to limit agentic loops. This enables developers to build and scale AI applications securely on VMware Cloud Foundation, ensuring agents operate within authorized boundaries.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

