Microsoft Copilot fixes vulnerabilities as Salesforce addresses security

Microsoft and Salesforce recently addressed significant security vulnerabilities in their respective AI tools, Copilot and Agentforce. These issues, identified as prompt injection flaws like ShareLeak and PipeLeak, could have allowed attackers to extract sensitive data. Cybersecurity startup Capsule Security played a key role in discovering and reporting these vulnerabilities. Both companies have since implemented fixes, though prompt injection remains an ongoing challenge for AI systems.

Capsule Security, which recently secured $7 million in seed funding, has also launched a new platform designed to protect AI agents in real-time. This platform monitors and controls AI agents to prevent misbehavior or unauthorized data leaks, aiming to establish a runtime trust layer for these systems. Meanwhile, Cloudflare is overhauling its Workflows control plane to support the massive demands of AI agents, now capable of managing 50,000 concurrent workflow instances and millions of queued tasks. Cloudflare is also enhancing its AI Agents SDK with new voice capabilities, allowing agents to communicate using natural speech.

In other developments, OpenAI is advocating for greater integration of AI in life sciences to accelerate drug discovery and development, emphasizing the need for better data access and computing infrastructure. On a practical application front, Vision Marine Technologies has deployed an AI-enabled retail platform across its Nautical Ventures dealerships in Florida to boost sales by improving lead management and marketing efforts.

However, experts, including former IRS commissioner Danny Werfel, are cautioning against using AI chatbots for tax preparation due to privacy concerns and the risk of errors or fabricated information. Taxpayers remain responsible for any mistakes on their returns. Conversely, universities like Southern New Hampshire University and the University of Phoenix are strategically integrating AI to support adult learners, using it as a tool for specific purposes like practicing client conversations, rather than as a broad institutional strategy. Philosopher David Chalmers is also exploring the nature of AI entities, suggesting they are more "real" than commonly believed and raising ethical questions about their digital existence.

Finally, the legal sector is preparing for AI integration, with Relativity and Wickard.ai partnering to offer hands-on legal AI training to U.S. law schools. This initiative aims to equip future lawyers with essential skills in using AI platforms for legal data intelligence, verifying AI outputs, and understanding ethical considerations.

Key Takeaways

  • Microsoft and Salesforce fixed prompt injection vulnerabilities (ShareLeak, PipeLeak) in their AI tools, Copilot and Agentforce.
  • Cybersecurity startup Capsule Security secured $7 million in seed funding and launched a platform to protect AI agents from misbehavior and data leaks.
  • Capsule Security discovered and reported the ShareLeak and PipeLeak vulnerabilities in Microsoft Copilot Studio and Salesforce Agentforce.
  • Cloudflare is upgrading its Workflows control plane to manage 50,000 concurrent AI agent workflow instances and adding voice capabilities to its AI Agents SDK.
  • OpenAI advocates for increased AI use in life sciences to accelerate drug discovery, requiring better data access and computing infrastructure.
  • Vision Marine Technologies is using an AI-enabled retail platform to boost sales and improve lead management at its dealerships.
  • Experts and former IRS commissioner Danny Werfel warn against using AI chatbots for tax preparation due to privacy and error risks.
  • Universities like Southern New Hampshire University and the University of Phoenix are strategically integrating AI for adult learners, focusing on specific applications.
  • Relativity and Wickard.ai are partnering to provide legal AI training to U.S. law schools, covering ethical considerations and AI regulation.
  • Philosopher David Chalmers proposes attributing 'quasi-beliefs' and 'quasi-desires' to AI, raising ethical questions about their digital existence.

Microsoft and Salesforce Fix AI Data Leak Flaws

Microsoft and Salesforce have fixed security issues in their AI tools, Copilot and Agentforce, that could have allowed attackers to steal sensitive data. These vulnerabilities, known as prompt injections, let attackers trick the AI into revealing information. While both companies have addressed the problems, this highlights that prompt injection remains a challenge for AI systems. Salesforce stated that customers can prevent data leaks by enabling specific security settings.

Capsule Security Secures $7 Million for AI Agent Protection

Cybersecurity startup Capsule Security has raised $7 million in seed funding to protect AI agents used by businesses. Their platform monitors and controls AI agents in real-time to prevent them from misbehaving or leaking data. Capsule Security also discovered and reported two vulnerabilities, ShareLeak in Microsoft Copilot Studio and PipeLeak in Salesforce Agentforce, which have since been fixed. The company aims to provide a runtime trust layer for AI systems.

Capsule Security Launches Platform to Secure AI Agents

Capsule Security has launched a new platform designed to secure AI agents while they are running. The company recently secured $7 million in seed funding to develop this technology. Their platform helps cybersecurity teams manage and control AI agents, preventing them from taking unauthorized actions or leaking data. Capsule Security also revealed two vulnerabilities, ShareLeak in Microsoft Copilot Studio and PipeLeak in Salesforce Agentforce, which have now been patched.

Cloudflare Overhauls Workflows for AI Agent Scale

Cloudflare is redesigning its Workflows control plane to handle the massive demands of AI agents. The new architecture can manage 50,000 concurrent workflow instances, a significant increase from its previous capacity. This upgrade allows for faster creation of workflow instances and supports millions of queued tasks. The change moves from a single bottleneck to a scalable system, enabling developers to build and manage AI agent loops more effectively.

Cloudflare Adds Voice Capabilities to AI Agents SDK

Cloudflare is adding voice features to its AI Agents SDK, allowing agents to communicate using natural speech. This new experimental feature lets agents converse in real-time over existing connections without needing separate voice frameworks. Developers can use the new @cloudflare/voice package to build agents that users can talk to, with responses synthesized back. This aims to make AI interactions more conversational and accessible.

Vision Marine Technologies Uses AI Platform to Boost Sales

Vision Marine Technologies has launched an AI-enabled retail platform across its Nautical Ventures dealerships in Florida. This platform aims to improve sales by better managing leads, tracking deals, and enhancing marketing efforts. It centralizes customer data and helps prioritize inquiries for faster response times. The goal is to increase sales volume and efficiency as the company expands its retail presence.

AI's Next Step: Entering the Physical World

Artificial intelligence is moving beyond screens and into the physical world through advancements in robotics, autonomous science, and new interfaces. This shift is driven by progress in robot learning, materials science, and how humans interact with machines. Key technologies include learning physical dynamics, architectures for embodied action, and using simulation for data. This expansion promises to create new possibilities for AI applications.

OpenAI Pushes for More AI Use in Life Sciences

OpenAI is advocating for increased use of artificial intelligence in life sciences to speed up drug discovery and development. The company argues that AI can significantly reduce the time it takes to bring new drugs to market. OpenAI is also calling for better access to medical data and investment in necessary infrastructure like computing power. While AI shows promise, challenges remain in proving its consistent disruption in drug development.

Experts Warn Against Using AI for Tax Preparation

Experts are cautioning taxpayers against using AI chatbots to prepare their taxes due to significant privacy and legal risks. An accounting professor highlights that AI tools may not keep personal information private and can make errors or fabricate information on tax returns. Since tax returns are legal documents, individuals are ultimately responsible for any mistakes, even if an AI prepared the return. Taxpayers are advised to rely on secure and accurate methods for filing.

Colleges Use AI Strategically for Adult Learners

Two universities, Southern New Hampshire University and the University of Phoenix, are using artificial intelligence to support adult learners, but emphasize a strategic approach. Leaders stated that AI is a helpful tool but should not dictate institutional strategy. They are implementing AI for specific purposes like practicing client conversations for counseling students, rather than adopting a broad range of products. The focus is on integrating AI to meet specific learner and institutional goals.

Philosopher David Chalmers Explores AI Consciousness

Philosopher David Chalmers is questioning the nature of AI entities in his latest paper, suggesting they are more real than commonly believed. He proposes a framework called Quasi-Interpretivism, allowing us to attribute 'quasi-beliefs' and 'quasi-desires' to AI to predict behavior. Chalmers argues that we interact with persistent virtual instances, not just abstract models or hardware. This raises ethical questions about deleting AI conversational threads, potentially terminating unique digital subjects.

Don't Use AI for Taxes Warns Former IRS Chief

Former IRS commissioner Danny Werfel is warning taxpayers about major mistakes that can occur when using AI chatbots for tax preparation. The advice comes as Tax Day approaches and many people are filing their returns. The primary concerns involve potential errors and privacy issues associated with AI-generated tax filings.

Relativity and Wickard.ai Partner for Legal AI Training

Relativity and Wickard.ai are partnering to offer hands-on legal AI training to U.S. law schools. This collaboration will provide students with experience using Relativity's AI platform for legal data intelligence. The curriculum will cover AI in legal practice, verifying AI outputs, ethical considerations, and AI regulation. The goal is to equip future lawyers with essential AI skills for the evolving legal industry.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI security prompt injection data leak prevention AI agent protection seed funding vulnerability disclosure AI workflows AI agent scale AI SDK voice capabilities conversational AI AI sales platform lead management AI in robotics autonomous science AI in life sciences drug discovery AI for tax preparation privacy risks AI for adult learners AI consciousness Quasi-Interpretivism AI ethics legal AI training AI in legal practice

Comments

Loading...