Artificial intelligence continues its rapid integration across sectors, bringing both transformative capabilities and significant new challenges. Agentic AI, which allows systems to act autonomously and make decisions, is at the forefront of this evolution and is forcing a reevaluation of security protocols. Organizations must implement robust controls to monitor agent actions and verify their intentions, because traditional security methods are insufficient for these dynamic applications operating inside the enterprise.

In practical applications, Caterpillar introduced its Cat AI Assistant at CES 2026, designed to boost job-site safety and reduce training times. Caterpillar CEO Joe Creed showcased the tool, which is powered by NVIDIA's Riva speech models, demonstrating its ability to provide real-time advice and respond to voice commands, such as setting height limits for excavators. Similarly, the Charles George VA Health Care System has for several years used AI-assisted colonoscopies, championed by Dr. Douglas Boyce, to improve the detection of adenomas and significantly lower cancer risk.

AI's rapid development also raises complex societal and legal issues. Melissa Sims was jailed on the strength of fake AI-generated texts allegedly created by her ex-boyfriend, a case that highlights how current detection tools struggle to identify deepfakes and the urgent need for laws governing AI evidence. Media organizations, meanwhile, are calling for greater transparency from AI companies about their use of journalistic content, advocating principles such as consent, fair recognition, and accurate attribution to combat misinformation and maintain public trust.

From an infrastructure and governance perspective, the Defense Logistics Agency is building its AI capabilities through continuous training and robust platforms, preparing for agentic AI to enhance operations, demand forecasting, and auditing. The immense energy demands of AI are reshaping the technology supply chain, with companies like Meta striking substantial electricity deals and becoming the largest nuclear power buyer among its peers to support expanding AI data centers. And Penn Medicine, a pioneer in the use of AI in radiology, emphasizes both top-down and bottom-up approaches to AI governance, with a focus on human factors and continuous monitoring after deployment.
Key Takeaways
- Agentic AI agents introduce serious security risks due to their autonomous decision-making and code execution capabilities, requiring new, dynamic security controls.
- Caterpillar launched the Cat AI Assistant at CES 2026, powered by NVIDIA's Riva speech models, to improve job-site safety and reduce training time.
- Caterpillar CEO Joe Creed demonstrated the Cat AI Assistant, which provides real-time advice and responds to voice commands for equipment operation.
- The Charles George VA Health Care System utilizes AI-assisted colonoscopies to enhance the detection rate of adenomas, significantly lowering colon cancer risk.
- Melissa Sims' case highlights the legal challenges posed by AI-generated deepfake evidence, prompting calls for new laws to establish standards for AI evidence.
- Penn Medicine emphasizes a combined top-down and bottom-up approach to AI governance, focusing on human factors engineering and post-deployment monitoring for successful implementation.
- The Defense Logistics Agency is actively building its AI capabilities, including preparing for agentic AI, to enhance operations, demand forecasting, and data cleanup.
- International media organizations are urging AI companies to be transparent about content sourcing and usage, in order to combat misinformation and maintain public trust.
- The rapid expansion of AI is forcing technology analysts to examine the entire supply chain, particularly due to the technology's significant energy demands.
- Meta is becoming the largest nuclear power buyer among its peers, securing large electricity deals to power its AI data centers.
Agentic AI needs new security measures
Agentic AI agents can act on their own, running code and making decisions. This new technology brings serious security risks, such as agents performing dangerous actions or being fed false information. Organizations must create strong security controls to monitor agent actions and verify their intentions. They also need to categorize agents based on their access and how they operate, so that the right safeguards apply to each. This requires a dynamic security approach that combines strict rules with real-time monitoring.
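As a concrete illustration of what such controls could look like, the sketch below gates each proposed agent action against a risk tier derived from the agent's access, and logs every attempt for real-time review. The tier names, action lists, and interfaces are illustrative assumptions, not a published standard.

```python
# Hypothetical sketch of a per-agent action gate: agents are tiered by the
# access they hold, and every proposed action is checked against that tier's
# allowlist before it runs. Tier names and actions are assumptions.
from dataclasses import dataclass, field

RISK_TIERS = {
    "read_only":  {"search", "summarize"},
    "internal":   {"search", "summarize", "read_record"},
    "privileged": {"search", "summarize", "read_record", "write_record", "run_code"},
}

@dataclass
class Agent:
    name: str
    tier: str
    audit_log: list = field(default_factory=list)

def authorize(agent: Agent, action: str, target: str) -> bool:
    """Allow the action only if the agent's tier permits it; log every attempt."""
    allowed = action in RISK_TIERS.get(agent.tier, set())
    agent.audit_log.append((action, target, "allowed" if allowed else "blocked"))
    return allowed

# Example: a read-only agent trying to execute code is blocked and logged,
# giving monitors a real-time record of out-of-policy behavior.
bot = Agent(name="report-bot", tier="read_only")
if not authorize(bot, "run_code", "finance-db"):
    print("blocked:", bot.audit_log[-1])
```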
AI agents create new internal security risks
AI agents, especially those built with no-code tools, are changing how businesses handle security. These agents act like powerful, always-on applications within a company's systems, handling sensitive data across finance, HR, and cloud platforms. Traditional security methods that focus on external threats and static code reviews are not enough for these dynamic agents. A misconfigured agent can cause data breaches or unauthorized actions, making it hard to tell if it was an internal error or an external attack. Companies need constant monitoring and discovery of these agents to manage the growing security risks.
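A minimal sketch of the discovery side of that work, assuming activity logs keyed by service identity (the log format and system labels are assumptions): group events per automated identity, infer which systems each one touches, and flag any that reach sensitive platforms.

```python
# Assumed-format sketch of agent discovery from activity logs: build an
# inventory of automated identities and the systems they touch, then flag
# the ones reaching sensitive platforms for review.
from collections import defaultdict

SENSITIVE = {"finance", "hr", "cloud-admin"}

events = [
    {"identity": "svc-invoice-agent", "system": "finance"},
    {"identity": "svc-invoice-agent", "system": "email"},
    {"identity": "svc-chat-helper",   "system": "wiki"},
]

touched = defaultdict(set)
for event in events:
    touched[event["identity"]].add(event["system"])

for identity, systems in touched.items():
    risky = systems & SENSITIVE
    if risky:
        print(f"review {identity}: touches sensitive systems {sorted(risky)}")
```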
Caterpillar launches AI assistant for safer construction
Construction giant Caterpillar introduced a new AI assistant at CES 2026 in Las Vegas to improve job-site safety and reduce training time. Caterpillar CEO Joe Creed showcased the Cat AI Assistant, powered by NVIDIA's Riva speech models, on FOX Business. The tool acts like a personal assistant, letting operators ask questions and get real-time advice on equipment use, parts, and maintenance. During a demonstration, the assistant helped an excavator operator avoid overhead power lines by setting a boom height limit through voice commands. Caterpillar and NVIDIA are expanding their partnership to bring more AI to heavy equipment and production systems.
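Caterpillar has not published the assistant's interface, but the hypothetical sketch below shows the general shape of the demonstrated behavior: parse a height limit from a spoken command, then enforce it as a ceiling on boom movement. The command format and telemetry values here are assumptions, not the product's actual API.

```python
# Illustrative only: how a voice-set height limit might be enforced in
# software. The spoken-command format and height readings are assumed.
import re

def parse_height_limit(command: str) -> float | None:
    """Pull a limit in feet out of a command like 'set height limit to 18 feet'."""
    match = re.search(r"height limit to (\d+(?:\.\d+)?)\s*feet", command)
    return float(match.group(1)) if match else None

def boom_allowed(current_height_ft: float, limit_ft: float) -> bool:
    """Block boom movement once the machine would reach the set ceiling."""
    return current_height_ft < limit_ft

limit = parse_height_limit("set height limit to 18 feet")  # e.g., below power lines
if limit is not None:
    print("raise allowed:", boom_allowed(17.5, limit))  # True
    print("raise allowed:", boom_allowed(18.0, limit))  # False
```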
Woman jailed by fake AI texts seeks new laws
Melissa Sims spent two days in a Florida jail after her ex-boyfriend allegedly created fake AI-generated texts. Based on these deepfake messages, she was arrested for violating her bond, which required her to avoid contact with him. Judge Herbert Dixon and Drexel professor Rob D'Ovidio point to the growing problem of AI-generated evidence, noting that current detection tools struggle to identify fakes. Sims' bond violation charge was eventually dropped, and she was acquitted of the original battery charge. She now advocates for new laws to set standards for AI evidence, similar to the digital forgery law Pennsylvania Governor Josh Shapiro signed.
Smart glasses at CES 2026 focus on AI
At CES 2026, the smart glasses market showed a clear shift toward AI capabilities. This year's event featured fewer but larger booths, with visitors more interested in trying out devices and exploring their practical uses. That marks a change from the previous year, when many startups showcased products with less clear applications. The industry is now emphasizing how smart glasses can use artificial intelligence to offer more useful features.
New book explores AI's impact on democracy
A new book, "Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship," by Bruce Schneier and Nathan E. Sanders, explores how artificial intelligence is changing politics. The authors believe AI can be used to improve liberal democracies, making government more open and responsive to citizens. They discuss examples like a 2024 mayoral candidate who wanted an AI to make all decisions. While acknowledging concerns about AI's rapid development, the book suggests ways politicians can use this technology for good.
VA uses AI to improve colonoscopy cancer detection
The Charles George VA Health Care System has been using AI-assisted technology for colonoscopies for several years to improve efficiency and quality. Dr. Douglas Boyce, an Asheville VA gastroenterologist, championed the technology, which helps doctors perform more precise exams. AI can increase the detection rate of adenomas, polyps that can turn into cancer, by identifying subtle changes the human eye might miss. Even a small increase in detection can significantly lower a patient's risk of developing colon cancer. Air Force Veteran Dawn Yllescas shared her story, emphasizing that early detection through routine screenings, even without symptoms, saved her life.
Penn Medicine shares AI governance lessons from radiology
Tessa Cook, a leader in radiology at Penn Medicine, will share insights on AI governance at the HIMSS26 conference in March. Penn Medicine has been a pioneer in using AI in its radiology departments. Cook emphasizes that successful AI implementation requires both top-down and bottom-up approaches to governance. She highlights the importance of human factors engineering, ensuring AI tools fit seamlessly into clinicians' daily work. Additionally, monitoring AI results after deployment is crucial for success. Healthcare organizations must understand their internal processes and strategic goals to effectively manage AI.
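One common way the post-deployment monitoring Cook describes is implemented, sketched below with assumed numbers rather than Penn Medicine's actual practice: track the model's positive-finding rate over a rolling window of recent cases and flag it for human review when it drifts from the rate accepted at validation.

```python
# A minimal sketch of rolling post-deployment monitoring. The baseline rate,
# window size, and tolerance are illustrative assumptions.
import random
from collections import deque

BASELINE_RATE = 0.12   # positive rate accepted at validation (assumption)
TOLERANCE = 0.05       # absolute drift that should trigger human review
window = deque(maxlen=500)  # rolling window of recent model predictions

def needs_review() -> bool:
    """True once the window is full and the live rate drifts past tolerance."""
    if len(window) < window.maxlen:
        return False
    rate = sum(window) / len(window)
    return abs(rate - BASELINE_RATE) > TOLERANCE

# Example: simulate live predictions arriving at a ~20% positive rate,
# well above the 12% baseline, which should raise a review flag.
random.seed(0)
for _ in range(600):
    window.append(random.random() < 0.20)
print("needs review:", needs_review())  # True
```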
AI growth makes analysts study full supply chain
The rapid expansion of artificial intelligence is changing how technology analysts work, forcing them to examine the entire supply chain. On January 9, 2026, Paul Meeks from Freedom Capital Markets discussed this shift on 'Bloomberg Tech.' He noted that companies like Meta are making large electricity deals for their AI data centers, with Meta becoming the biggest nuclear power buyer among its peers. This shows how AI's energy demands are broadening the scope of what tech analysts need to understand.
DLA builds AI future with training and platforms
The Defense Logistics Agency, led by CIO Adarryl Roberts, is building its AI capabilities through continuous employee training and robust platforms. DLA plans to use AI in operations, demand forecasting, and auditing, believing AI tools can also help clean up data more quickly. The agency is preparing for agentic AI, which will allow "digital employees" to work like humans and achieve greater efficiency. DLA already uses 185 robotic process automation (RPA) bots, with a "digital citizen program" empowering employees to build their own. All these efforts are part of a strategy to integrate technology, people, processes, and data under a single DLA Connect portal.
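DLA's pipelines are not public, but the sketch below (using pandas, with assumed column names and cleanup rules) illustrates the kind of data cleanup the agency describes automation helping with: normalizing inconsistent identifiers so that hidden duplicate records can be dropped.

```python
# Hedged sketch of automated record cleanup: normalize part numbers to one
# canonical form, then drop the duplicates the inconsistent formatting hid.
# The table schema and normalization rule are assumptions.
import pandas as pd

records = pd.DataFrame({
    "part_no": ["AB-100", "ab100", "CD-200", "AB-100"],
    "qty":     [5, 5, 12, 5],
})

# Uppercase and strip separators so "AB-100" and "ab100" compare equal.
records["part_no"] = (
    records["part_no"].str.upper().str.replace(r"[^A-Z0-9]", "", regex=True)
)
clean = records.drop_duplicates()
print(clean)  # two rows remain: AB100 (qty 5) and CD200 (qty 12)
```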
Media groups press AI companies for transparency
International media organizations are calling for AI companies to be more transparent about their sources and how they use journalistic content. A campaign called 'Facts In, Facts Out' highlights that AI tools often distort or misuse news from trusted sources. Groups like the European Broadcasting Union and WAN-IFRA emphasize that AI is not yet a reliable source for news, and this affects public trust in media. They propose five principles for AI companies, including requiring consent for content use, fair recognition, accurate attribution, and open dialogue to ensure truthful information.
Sources
- Rethinking Security for Agentic AI
- How AI agents are turning security inside-out
- Construction giant unveils AI to help prevent job-site accidents: 'It's essentially a personal assistant'
- 'No one verified the evidence:' Woman says AI-generated deepfake text sent her to jail
- CES 2026: Smart glasses market shifts focus to AI capabilities
- How Artificial Intelligence Is Rewiring Democracy
- Artificial intelligence increases efficiency and quality
- Lessons on AI governance from the radiology department
- AI Build-Out Forces Analysts to Cover Entire Supply Chain
- DLA’s foundation to use AI is built on training, platforms
- International media call for transparency from AI companies