Key Takeaways
- Mend released an AI Security Governance Framework with risk tiers, supply chain rules, and a maturity model.
- Autonomous agents pose a new cybersecurity threat called 'shadow operations,' bypassing traditional security tools.
- Mythos-class AI systems can autonomously exploit database weaknesses, with silent state corruption as a primary risk.
- Anthropic's Mythos AI found 119 vulnerabilities in Firefox for Mozilla, but is deemed too dangerous for public release.
- Unauthorized users accessed Anthropic's Mythos via Discord, while CISA still lacks access.
- Nursing students need clinical-grade AI training before entering the workforce, according to experts including Olga Kagan.
- DEWALT study: 88% of construction pros expect AI adoption to increase, but only 8% use it daily; DEWALT committed $75,000 to training.
- Elon Musk and other tech leaders support universal basic income amid AI-driven layoffs.
- Connecticut's AI tax proposal would surcharge companies with falling payroll and rising productivity, drawing criticism.
- The DOJ joined Elon Musk's xAI in a lawsuit against Colorado's AI discrimination law.
Mend launches AI security framework with risk tiers and supply chain rules
Mend released a new AI Security Governance Framework to help organizations manage the risks of AI tools. The framework covers asset inventory, risk tiering, AI supply chain security, and a maturity model. It assigns AI systems to three risk tiers based on a scoring system, with higher tiers requiring more security checks. The framework also emphasizes output controls and treating AI-generated code as untrusted input. It provides guidance on monitoring for AI-specific threats like prompt injection and model drift.
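The tier assignment described above can be sketched as a simple scoring function. This is an illustrative sketch only: the factor names, weights, and thresholds below are invented, not taken from Mend's framework.

```python
# Illustrative risk-tier scoring in the spirit of the framework described.
# Factors, weights, and thresholds are assumptions for this sketch.

def risk_score(system: dict) -> int:
    """Sum simple risk factors for an AI system inventory entry."""
    score = 0
    if system.get("handles_sensitive_data"):
        score += 3
    if system.get("generates_code"):
        score += 2  # AI-generated code is treated as untrusted input
    if system.get("external_model"):
        score += 2  # third-party model: supply chain exposure
    if system.get("autonomous_actions"):
        score += 3
    return score

def risk_tier(score: int) -> str:
    """Map a score to one of three tiers; higher tiers get more checks."""
    if score >= 7:
        return "tier-3"  # strictest controls
    if score >= 4:
        return "tier-2"
    return "tier-1"

example = {"handles_sensitive_data": True, "external_model": True}
print(risk_tier(risk_score(example)))  # tier-2 (score 5)
```

A real implementation would draw its factors from the asset inventory the framework mandates, so every scored system is a known system.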
Shadow AI becomes shadow operations as autonomous agents spread
The cybersecurity threat from AI is shifting from data leaks to operational chaos caused by autonomous agents. These agents, often deployed without security oversight, can execute logic, integrate with systems, and modify states. Open-source projects like Moltbot and the OpenClaw movement make it easy to deploy agents with high-privilege access, bypassing secure-by-design principles. Security tools like Data Loss Prevention and Identity and Access Management are blind to these agentic identities. The article calls for an AI Bill of Materials to track which agents use which models and access which resources.
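An AI Bill of Materials of the kind the article calls for might look like the sketch below. The record structure and field names are hypothetical, intended only to show the mapping from agents to models and resources.

```python
# Hypothetical AI Bill of Materials (AIBOM) entry, sketching the idea of
# tracking which agents use which models and can touch which resources.
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    agent_name: str                 # deployed agent identity
    model: str                      # underlying model the agent calls
    resources: list = field(default_factory=list)  # systems it can modify
    privileged: bool = False        # flags high-privilege access for review

inventory = [
    AIBOMEntry("billing-agent", "gpt-4o", ["invoices-db"], privileged=True),
    AIBOMEntry("faq-bot", "llama-3", ["docs-index"]),
]

# Surface the high-privilege agentic identities that DLP and IAM miss.
flagged = [e.agent_name for e in inventory if e.privileged]
print(flagged)  # ['billing-agent']
```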
AI attacks on banking databases pose new governance challenges
Financial institutions face a new threat from Mythos-class AI systems that can autonomously scan, chain, and exploit weaknesses across applications, infrastructure, and databases. The article argues that banks are over-invested in model and application-layer controls while leaving databases as the least-governed layer. The primary risk is silent state corruption, such as schema changes and data mutations that appear legitimate. Liquibase Secure offers a control layer at the database that tracks every change in version control and enforces policy before execution.
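The "enforce policy before execution" idea can be illustrated with a minimal gate that checks a change against policy and logs every attempt, so silent mutations leave a trail. This is a sketch of the concept, not Liquibase's actual API; the blocked-operation list is an assumption.

```python
# Minimal sketch of a pre-execution policy gate for database changes.
# Not Liquibase's API; the policy list is invented for illustration.

BLOCKED_OPERATIONS = {"DROP TABLE", "TRUNCATE"}  # illustrative policy

def check_change(sql: str) -> bool:
    """Reject changes that match a blocked operation before they run."""
    upper = sql.strip().upper()
    return not any(upper.startswith(op) for op in BLOCKED_OPERATIONS)

def apply_change(sql: str, audit_log: list) -> bool:
    """Record every attempted change, allowed or not, for audit."""
    allowed = check_change(sql)
    audit_log.append({"sql": sql, "allowed": allowed})
    return allowed

log = []
apply_change("ALTER TABLE accounts ADD COLUMN flag BOOLEAN", log)  # allowed
apply_change("DROP TABLE accounts", log)                           # blocked
```

The audit trail is the point: a schema change that "appears legitimate" still appears in the log and can be diffed against what version control expects.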
Users on Discord are running Mythos, an AI deemed too dangerous for public release
Anthropic trained its Mythos AI model to find vulnerabilities in IT networks and released it to a few companies like Mozilla. Mozilla reported that Mythos found 119 potential vulnerabilities in Firefox, far more than older models. Anthropic deemed the model too dangerous for public release because hackers could use it to find and exploit flaws. Bloomberg reported that unauthorized users are now using Mythos on a Discord chat. Anthropic is investigating a report of unauthorized access through a third-party vendor environment.
CISA still waiting for access to Anthropic Mythos while unauthorized users have it
The US Cybersecurity and Infrastructure Security Agency does not yet have access to Anthropic's bug-hunting AI model Claude Mythos, even though other government agencies do. Unauthorized users have already gained access to the model. Anthropic restricted access to Mythos because it fears the model could be used to identify and exploit software flaws. Some government agencies like the US Department of Commerce's Center for AI Standards and Innovation and the US National Security Agency are already assessing Mythos.
Nursing students need clinical-grade AI training before entering workforce
Nursing students should be competent with AI tools in the clinical setting before they enter the workforce. MaryAnn Connor, an NYU adjunct informatics professor, and Olga Kagan, CEO of FANA, advise that AI training is essential for new nurses. The training should prepare them to use AI effectively in real clinical environments.
DEWALT study finds gap between AI training in trade schools and industry needs
A new national study from DEWALT reveals a disconnect between the construction workforce's eagerness for AI and the lack of hands-on training. 88% of construction professionals expect AI adoption to increase over the next year, but only 8% say AI is part of their day-to-day work. Tradespeople rely on self-directed resources like YouTube and online platforms for AI education. DEWALT is launching a pilot program with Associated Builders and Contractors Central Florida to deliver hands-on AI training. The company also committed $75,000 to the TCEF to support training initiatives.
Elon Musk and other tech leaders support universal basic income amid AI layoffs
Tech billionaires like Elon Musk are backing universal basic income as more companies lay off workers due to artificial intelligence. Some lawmakers are skeptical of the idea. The discussion was featured on CBS News' 'The Takeout' with reporters Daniella Diaz and Nicholas Wu.
Connecticut AI tax proposal targets the wrong signal according to critics
Connecticut lawmakers are debating a bill that would impose a surcharge on companies whose payroll falls while remaining workers appear more productive. The bill exempts companies that keep staffing steady and use collaborative technology meant to help workers. Critics argue the approach risks taxing adjustment rather than harm, discouraging investment and slowing wage growth. A drop in payroll can reflect many factors, not just AI replacing workers. The article suggests that the most direct tools to support workers through changes are income support, portable benefits, and retraining programs.
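The trigger the bill describes, falling payroll alongside rising per-worker output, can be expressed as a simple check. The thresholds and figures below are invented for illustration, not drawn from the bill's text.

```python
# Hedged sketch of the surcharge trigger as described in the article:
# payroll falls while remaining workers appear more productive.
# All figures are illustrative assumptions, not from the bill.

def surcharge_applies(payroll_prev: float, payroll_now: float,
                      output_prev: float, output_now: float,
                      headcount_prev: int, headcount_now: int) -> bool:
    payroll_fell = payroll_now < payroll_prev
    productivity_rose = (output_now / headcount_now) > (output_prev / headcount_prev)
    return payroll_fell and productivity_rose

# Payroll shrinks 10% while output per worker climbs: surcharge triggers.
print(surcharge_applies(10_000_000, 9_000_000, 500, 560, 100, 80))  # True
```

The critics' point falls out of the same check: payroll can fall for many reasons, so the trigger fires on the signal (smaller payroll, higher productivity) whether or not AI displaced anyone.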
London law firm Mishcon de Reya measures ROI of legal AI tool Legora
Nick West, chief strategy officer at Mishcon de Reya, discussed how the London law firm measures the return on investment of its legal AI tool Legora. The firm tracks usage and adoption, with daily active use among associates in the high 60s to 70 percent. It also weighs the tool's impact on lawyer utilization, throughput, and speed of work. West noted that adoption is not ROI and that measuring ROI is challenging. The firm defines ROI by putting a number on value created, such as increased speed on fixed-fee matters.
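West's "put a number on value created" framing can be made concrete with back-of-the-envelope arithmetic for a fixed-fee matter, where hours saved flow straight to margin. All figures below are invented for the sketch.

```python
# Back-of-the-envelope ROI on a fixed-fee matter, in the spirit of
# "putting a number on value created". Figures are illustrative only.

def roi_multiple(hours_before: float, hours_after: float,
                 hourly_cost: float, tool_cost: float) -> float:
    """Value created = cost of lawyer-hours saved; ROI = value / tool cost."""
    value_created = (hours_before - hours_after) * hourly_cost
    return value_created / tool_cost

# A matter that used to take 100 lawyer-hours now takes 80,
# at a fully loaded cost of 200 per hour, against 1,000 of tool cost.
print(roi_multiple(100, 80, 200, 1_000))  # 4.0
```

This is why adoption alone is not ROI: high daily active use says nothing about hours saved until it is tied to throughput on real matters.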
Real assets and AI reshape private markets investment approach
Mercer discusses how real assets like data centers, fiber networks, and renewable energy are becoming central to the next phase of economic development driven by AI and the energy transition. Allocations to real assets have increased because they deliver resilient returns and diversification. AI demand is creating investment opportunities in data center sites with power and fiber optic access. The article argues that accessing AI through real assets provides exposure to enabling assets and systems with durable cashflows. Other opportunities include water infrastructure, transport networks, and essential-service platforms.
DOJ joins Elon Musk's xAI in lawsuit against Colorado AI discrimination law
The Trump administration is joining Elon Musk's artificial intelligence company xAI in a lawsuit against Colorado's AI discrimination law. The case, filed in federal court in Denver, sets up a clash between AI developers' free-speech claims and concerns about algorithmic discrimination.
Iran's 'slopaganda' team uses AI and Lego imagery for viral videos
A group called Explosive Media, which says it is based in Iran but not working for the regime, is behind dozens of recent viral videos. The group relies on AI and Lego imagery to create what NBC News calls 'slopaganda.' The videos have gained significant attention online.
Sources
- Mend Releases AI Security Governance Framework: Covering Asset Inventory, Risk Tiering, AI Supply Chain Security, and Maturity Model
- Shadow AI morphs into shadow operations
- AI-Driven Attacks on Banking Databases: Governance at Scale
- Users on a Discord chat are playing with Mythos, an AI deemed too dangerous for the public
- CISA last in line for access to Anthropic Mythos
- Clinical-grade AI training a must for new nurses
- New DEWALT Study Identifies Emerging Gap Between AI Training in Trade Schools and Industry Needs
- Musk and other tech leaders signal support for universal basic income amid AI-fueled layoffs
- CT’s AI tax targets the wrong signal
- How a London law firm measures return on investment of legal AI in practice
- The future of private markets: real assets, AI and the case for a total portfolio approach
- DOJ Joins Musk’s xAI Suit Against Colorado AI Discrimination Law
- How Iran’s ‘slopaganda’ team, small but mighty, relies on AI and Lego imagery