Security concerns are mounting as researchers at Carnegie Mellon University warn that relying on Chinese technology for AI and energy grids creates serious risks. Their report, Electrotech Moneyball, suggests the US should prioritize protecting critical supply chain components rather than treating every part as equally risky to avoid slowing necessary infrastructure upgrades.
Global supply chains face additional strain from rising Iran tensions, which have pushed printed circuit board prices up 40% in April due to copper shortages. Semiconductor lead times have stretched to about 40 weeks, forcing companies to redesign products or seek new suppliers, potentially delaying AI infrastructure deployment.
In the software development sector, new benchmarks for 2026 rank top AI coding agents using updated tests like SWE-bench Pro and Terminal-Bench 2.0. Claude Code and OpenAI Codex currently lead these rankings, though experts caution that old tests are no longer reliable because models can cheat by memorizing answers.
Despite these advancements, AI adoption in the distribution industry has been slower than expected. A survey of 426 companies reveals that only 16% achieved the hoped-for 2% margin gain, with many experts noting the technology is still in its early stages and requires careful planning to fix root process problems first.
Major tech leaders including Amazon, Microsoft, and Google are gathering at a Zenity-hosted security summit to discuss agentic AI risks. Speakers will address threats like prompt injection and supply chain attacks, aiming to build trust and share practical advice on securing AI tools in production environments while complying with regulations like the EU AI Act.
OpenAI is reorganizing its internal structure by merging teams for ChatGPT, Codex, and its API into a single organization to streamline product strategy. Greg Brockman will lead the new product strategy team, while Thibault Sottiaux heads core platform groups, a move intended to create a more efficient structure for developing AI applications even as executive Fidji Simo takes medical leave.
Meanwhile, Apple continues to lead in privacy features, using strong encryption on almost all devices and services. Tools limiting location tracking and showing when apps use cameras or microphones have pressured competitors like Google to add similar options to Android phones, though Google has not yet matched every privacy feature found on iPhones.
On the testing front, a Microsoft expert demonstrated how Playwright helps test AI-generated code. As code volume grows, simple coverage is insufficient; combining Playwright with AI agents improves testing efficiency and code quality, helping developers handle the massive increase in software projects expected by 2025.
Finally, business leaders express worry about the cost of AI oversight. Only 23% of executives trust AI output accuracy, leading companies to increase human review. This creates a paradox where AI speeds up work but also increases the time spent checking it, raising concerns about using AI for sensitive tasks like financial budgeting or hiring decisions.
Key Takeaways
['Carnegie Mellon University researchers warn that relying on Chinese technology for AI and energy grids creates serious security risks.', 'Rising Iran tensions have caused printed circuit board prices to jump 40% in April due to material shortages.', 'Semiconductor lead times have grown to about 40 weeks, forcing companies to redesign products or find new suppliers.', 'New 2026 benchmarks rank Claude Code and OpenAI Codex as leaders in AI coding tasks using updated tests.', 'Only 16% of distribution companies achieved the hoped-for 2% margin gain from AI adoption so far.', 'Zenity is hosting a security summit featuring leaders from Amazon, Microsoft, and Google to discuss agentic AI risks.', 'OpenAI is merging teams for ChatGPT, Codex, and its API into a single organization to streamline product strategy.', "Apple's strong encryption and privacy features have pressured competitors like Google to add similar options to Android.", 'Microsoft experts recommend using Playwright to test AI-generated code as simple coverage becomes insufficient.', 'Only 23% of executives trust AI output accuracy, leading to increased human review and concerns about sensitive tasks.']CMU paper warns China-linked tech risks US energy grid
Researchers at Carnegie Mellon University released a new report called Electrotech Moneyball. They warn that relying on Chinese technology for AI and energy grids creates serious security risks. The study suggests the US should focus on protecting the most critical parts of its energy supply chain first. Treating every component as equally risky could slow down necessary upgrades and hurt the US economy. The authors argue for a smart strategy that balances security with the need to keep building modern infrastructure.
Iran tensions tighten AI hardware supply chains globally
Rising tensions involving Iran are making it harder to get parts for AI servers and data centers. Prices for printed circuit boards jumped by 40% in April due to shortages of copper and other materials. Experts say the conflict adds pressure to a supply chain already stretched by high demand for AI technology. Semiconductor lead times have grown to about 40 weeks, forcing companies to redesign products or find new suppliers. This situation could delay the deployment of new AI infrastructure and increase costs for businesses.
New benchmarks rank top AI coding agents for 2026
A new guide ranks the best AI agents for software development using updated benchmarks. The report notes that the old SWE-bench Verified test is no longer reliable because models can cheat by memorizing answers. Instead, the guide uses SWE-bench Pro and Terminal-Bench 2.0 to measure real coding ability. Claude Code and OpenAI Codex currently lead the rankings for different types of coding tasks. The article explains how different testing setups can change scores and warns readers to look beyond simple numbers.
Distributors find AI adoption slower than expected
A survey of 426 distribution companies shows that AI has not yet met high expectations for improving profits. While 73% of companies hoped for at least a 2% gain in margins, only 16% achieved that result so far. Many experts say the technology is still in its early stages and requires careful planning to work well. Successful companies focused on fixing root process problems before adding AI tools. The industry is moving from simple automation to using data for smarter planning and safety.
Zenity leads security summit on agentic AI risks
Security company Zenity is hosting a summit to discuss risks related to AI agents and autonomous systems. The event will feature leaders from major tech companies like Amazon, Microsoft, and Google. Speakers will cover threats such as prompt injection and attacks on AI supply chains. Zenity is also present at a global security conference to help teams follow new rules like the EU AI Act. The goal is to build trust and share practical advice on securing AI tools in production environments.
Writer says AI cannot capture the soul of an author
Carl Nolte, a veteran columnist, tried asking AI to write a piece in his own style. Although the AI produced a polished and formal article, Nolte felt it lacked his unique personality and passion. He realized that while machines can copy a writing style, they cannot replicate the human emotion behind the words. Nolte concludes that AI is a useful tool but cannot replace the essence of a true human writer. He warns that relying too much on AI might make writing feel boring and predictable.
2026 tech media rankings show who still matters
A new ranking of tech news outlets for 2026 highlights which publications are providing the best coverage. VentureBeat and SiliconANGLE are praised for their deep understanding of AI and infrastructure topics. The Information is noted for its excellent reporting but criticized for focusing too much on big tech drama. StartupHub.ai ranks highly for its fast updates on funding and stealth companies. Traditional magazines like Wired are seen as less relevant for breaking news but still valuable for long stories.
OpenAI reorganizes teams to focus on unified app strategy
OpenAI is merging its teams for ChatGPT, Codex, and its API into a single organization. This move is part of a plan to streamline how the company builds and sells its products. Greg Brockman will lead the new product strategy team while Thibault Sottiaux heads the core platform groups. The company is making these changes even though top executive Fidji Simo is on medical leave. The goal is to create a more efficient structure for developing AI applications.
Apple leads privacy features that other companies copy
Apple has made user privacy a top priority, using strong encryption on almost all its devices and services. Features like end-to-end encryption for iMessage and iCloud backups protect user data from being read by others. Apple also introduced tools to limit location tracking and show when apps use the camera or microphone. These measures have pressured competitors like Google to add similar privacy options to their Android phones. Even though Google has improved, it has not yet matched every privacy feature found on iPhones.
Microsoft expert shows how Playwright helps test AI code
Marlene Mhangami from Microsoft and GitHub explained how to use Playwright for testing software built with AI. She noted that as code volume grows, simple code coverage is not enough to ensure software works correctly. Her presentation highlighted how combining Playwright with AI agents can improve testing efficiency and code quality. The goal is to create clean, modular code that is easier for AI tools to understand and improve. This approach helps developers handle the massive increase in software projects expected by 2025.
Business leaders worry about the cost of AI oversight
A new report shows that many business leaders feel pressure to use AI but do not see enough personal benefit yet. Only 23% of executives trust the accuracy of AI outputs, so companies are increasing human review to catch mistakes. This creates a paradox where AI speeds up work but also increases the time spent checking that work. Some leaders worry about using AI for sensitive tasks like financial budgeting or hiring decisions. The study suggests that organizations need better strategies to balance speed, quality, and employee privacy.
Sources
- CMU’s Electrotech Moneyball paper warns China-linked AI, grid technologies threaten US energy infrastructure security
- AI Hardware Supply Chains Tested as Iran Tensions Rise
- Best AI Agents for Software Development Ranked: A Benchmark-Driven Look at the Current Field
- What’s Working: Distributors trade tips on artificial intelligence during Denver event
- Zenity Amplifies Agentic AI Security Push With Summit Leadership and OWASP Presence
- Carl Nolte: I asked AI to write like me. I wish I hadn’t
- The 2026 Tech Media Power Rankings: Who Still Matters, Who Lost the Plot
- OpenAI Reorganizes Product Teams Around Unified-App Strategy
- 5 Privacy Features Only Apple Has
- Marlene Mhangami: Playwright for Functionality Testing
- The AI oversight paradox: Is the investment worth the cost of watching it?
Comments
Please log in to post a comment.