Salesforce Upgrades Slackbot With Anthropic AI as Cisco, Microsoft, and Google Lead AI Security Rankings

AI integration continues to expand across sectors, from public safety to enterprise productivity and cybersecurity. In Delaware County, the East Lansdowne Police Department has deployed 41 new AI-powered cameras, funded by a state grant, to monitor public areas and assist in solving crimes. Chief James Cadden emphasizes that the cameras enhance real-time monitoring and incident review without using facial recognition or recording private spaces, improving response times and removing guesswork from police work.

Meanwhile, the European Union Aviation Safety Agency (EASA) is seeking public input on its AI Trustworthiness Framework, with the consultation closing in February. The initiative aims to establish future regulations for AI use in aviation, covering critical areas such as safety, human factors, and ethics. EASA encourages training organizations, aircraft manufacturers, and operators to participate and help shape the new rules.

In enterprise software, Salesforce has significantly upgraded its Slackbot, now powered by Anthropic's AI model; the new version launched on January 13, 2026. The enhanced Slackbot acts as a personal AI agent for Business+ and Enterprise+ customers, capable of drafting emails, finding calendar events, and pulling information from chats. It integrates with tools such as Microsoft Teams and Google Drive and accesses data from Salesforce and other connected platforms while respecting user permissions, aiming to boost client productivity.

AI security is also a growing focus: a recent survey of Chief Information Security Officers (CISOs) ranked the top 10 vendors for AI-enabled security solutions. Cisco leads the list, recognized for its AI Assistant for Security, followed by Microsoft, which leverages its extensive resources and partnership with OpenAI for products like Security Copilot. Google also secured a spot in the top three for its cloud-based security services.
Furthermore, WitnessAI recently secured $58 million in a funding round led by Sound Ventures, with investments from Samsung Ventures and Qualcomm Ventures, to expand its global reach and enhance its AI security features, including new ways to secure AI agents.

The rapid adoption of AI also highlights governance and ethical challenges. Baptist Health is implementing robust AI governance: its "AI Institute" approves automation projects only if they have clear goals and strong contracts, including exit clauses for pilot programs that do not meet financial targets. Its focus is on improving billing accuracy, automating coding, and managing claim denials. A critical and often overlooked aspect of AI safety is data management; the EU AI Act now mandates clear control over data for high-risk AI systems, yet many organizations struggle to understand and manage their data effectively. Meanwhile, Roblox's new AI-powered face-scanning system, rolled out globally for age verification, faces significant problems, including misidentified ages and privacy concerns, and has spawned a black market for age-verified accounts.

Key Takeaways

  • East Lansdowne Police deployed 41 AI-powered cameras, funded by a state grant, for public safety, emphasizing they do not use facial recognition or record private spaces.
  • Salesforce launched an upgraded Slackbot on January 13, 2026, powered by Anthropic's AI model, offering personal AI assistance across Slack, Salesforce, Microsoft Teams, and Google Drive.
  • The European Union Aviation Safety Agency (EASA) is consulting on an AI Trustworthiness Framework to regulate AI use in aviation, focusing on safety, human factors, and ethics.
  • A CISO survey identified Cisco, Microsoft (leveraging OpenAI and Security Copilot), and Google as the top three vendors for AI-enabled security solutions.
  • WitnessAI raised $58 million in funding from Sound Ventures, Samsung Ventures, and Qualcomm Ventures to expand its global presence and enhance AI security features for AI agents.
  • Baptist Health is establishing strong AI governance, requiring clear goals and exit clauses for AI projects to ensure expected financial returns, particularly in billing and claims management.
  • Roblox's AI-powered age verification system is experiencing issues with misidentification and privacy concerns, leading to the online sale of age-verified accounts.
  • Effective data management is crucial for AI safety and governance, as poor data practices can exacerbate risks, a requirement highlighted by the EU AI Act for high-risk AI systems.

East Lansdowne Police Deploy AI Cameras to Fight Crime

The East Lansdowne Police Department in Delaware County has installed 41 new AI-powered cameras, paid for by a state grant from the Pennsylvania Department of Community and Economic Development. The cameras monitor public roads and sidewalks in real time and let officers quickly review footage of people and vehicles, helping them solve crimes faster and improving response times by showing what is happening before officers arrive. Chief James Cadden says the technology removes guesswork from police work, and the department assures the public that the cameras do not use facial recognition or record inside private spaces.

Salesforce Upgrades Slackbot with Anthropic AI

Salesforce, the company that owns Slack, launched an upgraded Slackbot powered by Anthropic's AI model on January 13, 2026. Available to most Business+ and Enterprise+ customers, the Slackbot acts as a personal AI agent that uses your work context to help with tasks: it can draft emails, find calendar events, and answer questions by drawing on conversations, files, and channels within Slack. It also connects to tools such as Microsoft Teams and Google Drive and accesses information from Salesforce and other connected platforms, always respecting user permissions. Users report that the upgraded Slackbot helps them prepare for meetings and research topics, saving significant time, and Salesforce expects the feature to boost productivity for its clients.

EASA Seeks Input on Aviation AI Rules

The European Union Aviation Safety Agency, EASA, is asking for public comments on its AI Trustworthiness Framework. This consultation, which started in November, will close in February. The framework aims to regulate how AI is used in aviation, setting future rules for AI assistance and human-AI teamwork. It covers important topics like AI safety, human factors, and ethics. EASA strongly encourages training organizations, aircraft manufacturers, and operators to share their thoughts to help shape these new rules.

CISOs Name Top 10 AI Security Vendors

A new survey reveals the top 10 vendors for AI-enabled security solutions, as ranked by Chief Information Security Officers. CISOs judged companies based on product innovation, reputation, business value, and cost. Cisco leads the list, integrating networking and security with tools like AI Assistant for Security. Microsoft ranks second, using its vast resources and OpenAI partnership for products like Security Copilot. Google also made the top three, known for its cloud-based security services.

Baptist Health Governs AI for Better Returns

Baptist Health is setting up strong AI governance to make sure its technology investments pay off. Steven Kos, a Senior Director at Baptist Health, explains that this requires close teamwork between IT, clinical, and revenue cycle teams. The "AI Institute" at Baptist Health approves automation projects only if they have clear goals and strong contracts. These contracts must include an exit clause if pilot programs do not meet their expected financial returns. Baptist Health is focusing its AI efforts on improving billing accuracy, automating coding, and managing claim denials.

Roblox Age Verification Faces Major Problems

Roblox recently launched an AI-powered face-scanning system to verify user ages globally. The system aims to create a safer environment by limiting chat interactions between different age groups, but it is facing many problems. Players are selling age-verified accounts to minors online, and many users report that the AI misidentifies their ages: some adults are labeled as teens, while young children are placed in adult categories. Users also have privacy concerns about the video scanning process.

WitnessAI Secures $58 Million for AI Security Expansion

WitnessAI has raised $58 million in a new funding round led by Sound Ventures, with Samsung Ventures and Qualcomm Ventures also investing. This funding will help WitnessAI expand globally and improve its AI security features. The company also announced new ways to secure AI agents, which are available this month. WitnessAI's platform monitors AI agent activity and protects AI applications from attacks. This helps businesses confidently adopt AI by ensuring their systems are safe and secure.

Data Management Is Key to AI Safety

Many organizations are quickly adopting AI tools, but a major aspect of AI governance is often overlooked: the biggest risk comes not from the AI models themselves but from how data is accessed, viewed, classified, and managed. AI systems can make existing data problems worse when information is messy or poorly understood. The EU AI Act now requires companies to demonstrate clear control over their data for high-risk AI systems, yet many organizations struggle to know what data they have, where it lives, or how sensitive it is.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

