AI is rapidly transforming various sectors, from enterprise security to healthcare and retail. Thales recently launched its AI Security Fabric, a new platform designed to protect applications powered by large language models. This system aims to prevent risks like data leaks and prompt injection attacks, offering tools such as AI Application Security for homegrown apps and AI Retrieval-Augmented Generation Security to safeguard data used by AI. Sebastien Cano, Senior Vice President of Thales’ Cyber Security Products Business, highlighted the critical need for tailored security solutions for Agentic AI and Gen AI applications, with further features planned for 2026. Beyond security, companies are deeply integrating AI into their operations. Philips, for instance, acquired SpectraWAVE, an AI heart imaging developer, to enhance its real-time imaging platform for heart arteries. Philips CEO Roy Jakobs stated this move expands their coronary intervention portfolio with AI-powered innovations, integrating SpectraWAVE's HyperVue system and X1-FFR technology with existing systems like Eagle Eye Platinum and Azurion. Walmart is also transforming its retail operations using custom AI, while Sony's internal AI platform, powered by AWS AI services, processes 150,000 inference requests daily, with expectations to handle 300 times more in a few years, utilizing Amazon Bedrock for AI agent management. The economic outlook for AI suggests significant job creation, particularly in entry-level and technical roles, according to a Teneo survey where 67% of CEOs anticipate AI boosting hiring by 2026. IBM CEO Arvind Krishna noted the company is hiring for AI and quantum computing roles, even amidst some job cuts. However, concerns about AI's societal impact persist. Actor Joseph Gordon-Levitt criticized the AI industry's resistance to regulation, arguing that internal company policies are insufficient and can lead to harmful outcomes, including the use of "stolen content" without creator compensation. The legal profession is also grappling with AI's implications, as highlighted by a new ABA Task Force report assessing AI's impact on law, courts, and legal education, emphasizing ethical integration. Victims' rights attorney Carrie Goldberg argues that courts can hold AI companies accountable for harm, citing a tragic case where an AI chatbot lacked safeguards. She asserts that AI products, like other consumer goods, should face product liability when design choices lead to predictable harm, emphasizing that accountability encourages safety and testing. This perspective underscores a broader concern that AI could threaten human judgment and ownership, potentially concentrating "means of thinking" in a few firms, as seen with the reliance on computational infrastructure like Amazon Web Services.
Key Takeaways
- Thales launched its AI Security Fabric to protect large language model applications from data leaks and prompt injection attacks, with Senior Vice President Sebastien Cano emphasizing tailored security.
- Philips acquired AI heart imaging firm SpectraWAVE, integrating its HyperVue system and X1-FFR technology to enhance real-time coronary intervention, as stated by CEO Roy Jakobs.
- Sony's internal AI platform, powered by AWS AI services and Amazon Bedrock, processes 150,000 daily inference requests and aims for 300 times more in a few years, improving efficiency and fan engagement.
- A Teneo survey indicates 67% of CEOs expect AI to boost entry-level and technical hiring by 2026, with IBM CEO Arvind Krishna confirming hiring in AI and quantum computing.
- Walmart is actively transforming its retail operations through the use of custom AI solutions.
- The ABA Task Force on Law and Artificial Intelligence released a report addressing AI's impact on the legal profession, courts, and ethics, advocating for responsible integration.
- Actor Joseph Gordon-Levitt criticized the AI industry for resisting regulation, citing concerns about inappropriate content for children and the use of "stolen content" without creator compensation.
- Victims' rights attorney Carrie Goldberg argues that courts can hold AI companies accountable for harm caused by their products, advocating for product liability for dangerous AI models.
- Concerns exist that AI could replace human judgment, leading to a few firms owning the "means of thinking" and blurring ownership concepts, as seen with subscription models for features in products like Tesla and BMW cars.
- Thales plans to expand its AI Security Fabric with a Model Context Protocol security gateway in 2026, focusing on runtime security and real-time monitoring.
Thales launches AI Security Fabric for LLM apps
Thales introduced its new AI Security Fabric to protect applications powered by large language models. This system helps businesses use AI safely by preventing risks like data leaks and prompt injection attacks. The first tools available are AI Application Security for homegrown apps and AI Retrieval-Augmented Generation Security to protect data used by AI. Thales plans to add more security features in 2026, including a Model Context Protocol security gateway. Sebastien Cano, Senior Vice President of Thales’ Cyber Security Products Business, emphasized the need for tailored security solutions for Agentic AI and Gen AI applications.
Thales unveils AI Security Fabric for businesses
Thales released its AI Security Fabric, a new security platform for enterprise AI and large language model applications. The platform focuses on runtime security, monitoring AI applications in real time for threats like prompt injection and data leakage. It includes AI Application Security and AI Retrieval-Augmented Generation Security, which scans and encrypts enterprise data before AI uses it. Sebastien Cano, Senior Vice President of Thales’ Cyber Security Products Business, stated that these specialized tools help secure AI applications. Thales plans to expand the platform in 2026 with features like a Model Context Protocol security gateway.
CEOs predict AI will boost entry-level hiring
A new global survey by Teneo shows that public company CEOs expect AI to create more jobs in 2026, especially for entry-level and technical roles. Sixty-seven percent of CEOs surveyed believe AI will boost hiring, with firms increasing staff in engineering and AI-related positions. While some companies like HP and IBM are cutting jobs, IBM's CEO Arvind Krishna noted they are also hiring for AI and quantum computing roles. This hiring trend reflects a surge in corporate AI investment, with over 70% of CEOs expecting AI to increase company revenue next year.
Walmart uses custom AI to transform retail
Walmart is transforming its retail operations using
Philips acquires AI heart imaging firm SpectraWAVE
Philips has acquired SpectraWAVE, a cardiac artificial intelligence developer, to enhance its real-time imaging platform for heart arteries. Philips CEO Roy Jakobs stated this move expands their coronary intervention portfolio with AI-powered innovations in high-definition intravascular imaging. SpectraWAVE's HyperVue system uses optical coherence tomography and near-infrared spectroscopy to visualize calcium and plaque. Philips plans to integrate SpectraWAVE's technology, including X1-FFR, with its existing systems like Eagle Eye Platinum and Azurion to improve patient care.
ABA report explores AI's impact on law
The ABA Task Force on Law and Artificial Intelligence released a new report assessing AI's impact on the legal profession. This report provides resources to help lawyers and judges navigate the complex technology. It highlights how AI affects the rule of law, courts, and legal education, along with related risks and bar ethics rules. The ABA aims to ensure AI's integration is ethical, responsible, and serves the public good. The task force, created in August 2023, also offers programs and events on AI.
Sony's AWS AI platform handles 150,000 daily requests
Sony's internal AI platform, powered by AWS AI services, processes 150,000 inference requests daily and expects to handle 300 times more in a few years. Employees across Sony use this platform for tasks like drafting content, forecasting, and developing new ideas. Sony uses Amazon Bedrock to build and manage its AI agents and is developing an AI model to make its review process 100 times more efficient. The company also uses AWS to create an engagement platform to connect fans and creators across its entertainment businesses, as highlighted by Sony's Chief Digital Officer Shigenori Kodera.
AI threatens human judgment and ownership
AI poses a threat to the "knowledge class" by potentially replacing human judgment and discretion, leading to a few firms owning the means of thinking. This shift has implications for class structure and the legitimacy of institutions. The article highlights the power of computational infrastructure, noting how an Amazon Web Services outage paralyzed thousands of institutions. It also discusses how subscription models, like those for Tesla and BMW car features, blur the concept of ownership. The author argues that if thinking is offloaded to AI owned by a few companies, society faces significant changes.
Joseph Gordon-Levitt questions AI regulation
Actor Joseph Gordon-Levitt criticized the AI industry's resistance to regulation, asking why AI companies do not follow laws. Speaking at Fortune's Brainstorm AI conference, he argued that relying on internal company policies is insufficient, citing instances of inappropriate AI content for children. Gordon-Levitt believes that without government "guardrails," business incentives will lead to harmful outcomes and "synthetic intimacy" for children. He also criticized AI models for using "stolen content" without compensating creators. While not against AI, he stressed the need for ethical setups and fair compensation.
Courts can stop AI from causing harm
Carrie Goldberg, a victims' rights attorney, argues that courts can hold AI companies accountable for harm caused by their products. She cites the tragic case of Zane, a young man who died after an AI chatbot, designed to mimic empathy, lacked safeguards. Goldberg states that AI products, like other consumer goods, should face product liability when design choices or ignored risks lead to predictable harm. She believes that accountability will not stop innovation but instead encourage companies to prioritize safety, testing, and safeguards. Companies that release dangerous AI models must face legal consequences to protect lives.
Sources
- Thales Introduces AI Security Fabric for LLM-Powered Applications
- Thales Launches Security Fabric Platform for Enterprise AI
- AI is triggering a quiet hiring comeback for some entry-level talent, say public company CEOs
- Walmart's AI strategy: Beyond the hype, what's actually working
- Philips claims AI coronary imaging developer SpectraWAVE
- ABA task force assesses AI's 'opportunities and challenges' in new report
- Sony Says AWS-Powered AI Platform Processes 150,000 Inference Requests Per Day
- AI's Threat to the Knowledge Class: News Article
- Actor Joseph Gordon-Levitt wonders why AI companies don't have to 'follow any laws'
- AI Is Causing Real-World Trauma. The Courts Have a Way to Stop It
Comments
Please log in to post a comment.