US Senators JD Vance and Scott Bessent recently questioned top tech executives, including Sundar Pichai from Alphabet, Sam Altman from OpenAI, and Satya Nadella from Microsoft, about AI security and responses to cyberattacks. This discussion took place just before Anthropic's planned release of its new AI model, Mythos, also known as Claude Mythos. Anthropic ultimately delayed the model's wide public release due to concerns about potential cybersecurity vulnerabilities, opting instead to provide access to only about 40 tech leaders, including those from Microsoft and Google.
The broader conversation around AI security extends to potential acquisitions, with Cisco Systems Inc. reportedly considering a purchase of Astrix Security Ltd. for over $250 million. Astrix specializes in securing AI agents by identifying and managing them within company networks, scanning for vulnerabilities, and monitoring unusual activity. Meanwhile, IBM Distinguished Engineer Jeff Crume warns that AI systems can accumulate "technical debt" from development shortcuts, leading to issues like complex code, poor version control, and security vulnerabilities such as data poisoning.
Human oversight remains critical for AI agents, according to Illia Polosukhin, a co-author of the influential "Attention Is All You Need" paper and co-founder of NEAR AI. Polosukhin, who uses a dozen AI agents daily, stresses that these systems lack sound judgment and require strict human supervision, advocating for auditable and transparent AI. Box CEO Aaron Levie encourages companies to "waste" AI tokens for experimentation, noting that addressing AI agent deployment and token usage issues will require building more data center capacity.
AI's application in law enforcement is also being explored, with Detective Lauren Cunningham testing the Longeye tool at the Oklahoma City Police Department. While Longeye is designed for investigators and operates separately from the public, experts caution about AI's potential for harm, including spreading misinformation. In the legal field, experts warn that AI tools like ChatGPT cannot replace lawyers, as they lack the human judgment, real-world context, and accountability essential for legal practice, often producing overconfident or imprecise analyses.
Beyond specific applications, the growth of AI is now primarily limited by hardware shortages and infrastructure constraints, rather than a lack of innovative ideas. The supply chain struggles to keep pace with demand for advanced chips and data centers, causing significant project delays. Amidst these technical challenges, Anthropic, known for its chatbot Claude, also engaged with Christian religious leaders to discuss the ethical and moral implications of artificial intelligence, reflecting a broader concern for responsible AI development.
Key Takeaways
- US Senators JD Vance and Scott Bessent questioned executives from Alphabet, OpenAI, Microsoft, and Anthropic regarding AI security and cyberattack preparedness.
- Anthropic delayed the wide public release of its Mythos (Claude Mythos) AI model due to cybersecurity vulnerability concerns, limiting initial access to about 40 tech leaders.
- Cisco Systems Inc. is reportedly in talks to acquire AI security startup Astrix Security Ltd. for over $250 million, aiming to enhance AI agent security.
- IBM's Jeff Crume highlights the risk of "technical debt" in AI systems, which stems from development shortcuts and leads to problems with data quality, bias, and security vulnerabilities.
- Illia Polosukhin, co-author of "Attention Is All You Need," emphasizes the critical need for strict human oversight and transparent, auditable AI systems, as AI agents lack sound judgment.
- Box CEO Aaron Levie advocates for experimenting with AI agents, even if it means "wasting" tokens, and notes that increased data center capacity is needed for AI agent deployment.
- AI tools like Longeye are being tested in law enforcement, but experts caution about potential harms and stress the continued need for human judgment.
- Experts warn that AI, including ChatGPT, cannot replace lawyers due to its inability to provide human judgment, real-world context, and accountability in legal practice.
- The primary limitation to AI growth has shifted from innovative ideas to hardware shortages and infrastructure constraints, causing project delays.
- Anthropic, developer of the Claude chatbot, engaged with Christian religious leaders to discuss the ethical and moral implications of artificial intelligence.
US Senators question tech leaders on AI safety before Anthropic's Mythos launch
US Senators JD Vance and Scott Bessent questioned top tech executives about AI security and cyberattack responses. This happened just before Anthropic planned to release its new AI model, Mythos. Executives from Alphabet, OpenAI, Microsoft, Palo Alto Networks, and CrowdStrike were on the call. Anthropic had developed a powerful AI model but delayed its public release due to concerns about potential cybersecurity vulnerabilities. The Mythos model, also known as Claude Mythos, was intended for a select group of about 40 tech leaders.
Vance and Bessent question tech CEOs on AI security before Mythos release
US Senators JD Vance and Scott Bessent questioned top tech CEOs about AI model security and how they plan to respond to cyberattacks. This discussion occurred shortly before Anthropic was set to unveil its new Mythos AI model. Key figures like Dario Amodei from Anthropic, Sundar Pichai from Alphabet, Sam Altman from OpenAI, and Satya Nadella from Microsoft participated. Anthropic had developed a powerful AI model but decided not to release it widely due to worries it could expose security weaknesses. Only about 40 tech experts, including those from Microsoft and Google, were to get access to Anthropic's 'Claude Mythos' model.
Vance, Bessent probe tech giants on AI security pre-Mythos release
US Senators JD Vance and Scott Bessent questioned major technology companies regarding AI security measures. These discussions took place shortly before the planned release of Anthropic's AI model, Mythos. The article cites a CNBC report.
AI could help police, but experts urge caution
Detective Lauren Cunningham tested a new AI tool called Longeye at the Oklahoma City Police Department. She was initially skeptical due to AI's potential for harm, such as spreading misinformation. Longeye is designed specifically for investigators, operating separately from the public and relying on the detective's own work. The article explores the potential benefits and risks of using AI in law enforcement.
Cisco reportedly eyeing $250M+ acquisition of AI security startup Astrix Security
Cisco Systems Inc. is reportedly in talks to acquire Astrix Security Ltd., a startup focused on securing artificial intelligence agents. Astrix offers a platform that automatically identifies and manages AI agents within a company's network, detecting their tools and potential risks. The system scans for vulnerabilities, flags configuration issues, and monitors for unusual agent activity. Astrix also provides features like just-in-time access and can integrate with other cybersecurity tools for automated response. This potential acquisition follows another AI security purchase by Cisco.
AI agents need human oversight despite advances, says Transformer co-author
Illia Polosukhin, a co-author of the influential 'Attention Is All You Need' paper, uses a dozen AI agents daily but maintains strict oversight. He envisions a future where agents manage trades and supply chains, but believes society is unprepared for advanced systems such as artificial general intelligence (AGI). Polosukhin, who co-founded NEAR AI, emphasizes the need for auditable and transparent AI systems, rather than black boxes. He warns that AI agents, while helpful for tasks like summarizing information or coding, still lack sound judgment and require careful human supervision.
IBM's Jeff Crume warns of AI's hidden 'tech debt'
Jeff Crume, a Distinguished Engineer at IBM, explains that AI systems can accumulate 'technical debt' if not managed carefully during development. This debt, similar to traditional software, results from shortcuts taken for speed, leading to issues like complex code, hard-coded assumptions, and poor version control. Key areas contributing to AI technical debt include data quality, bias, model drift, and security vulnerabilities such as data poisoning. Crume stresses the importance of strategic planning and discipline to mitigate these risks and ensure AI systems are maintainable and adaptable.
Anthropic discusses AI ethics with Christian leaders
Anthropic, a prominent AI company known for its chatbot Claude, recently met with Christian religious leaders. The company, valued at $380 billion, sought input from this group, which is not typically consulted in the tech industry. The meeting focused on the ethical considerations and moral implications of artificial intelligence.
Hardware shortages, not ideas, now limit AI growth
The rapid growth of artificial intelligence is now primarily limited by hardware shortages and infrastructure constraints, rather than a lack of innovative ideas. Building and operating large-scale AI systems requires advanced chips, specialized components, and significant infrastructure like data centers, cooling systems, and energy. The supply chain is struggling to keep pace with the surging demand, causing delays of months or even years for AI projects. This shift means companies must now compete for physical resources, giving an advantage to those who can secure infrastructure, and introducing new risks related to supply chain disruptions.
Box CEO Aaron Levie embraces 'wasting' AI tokens
Box CEO Aaron Levie is not concerned about the cost of AI token usage, believing it signifies a willingness to experiment with new ideas. He suggests that companies should 'waste' tokens to explore new possibilities with AI agents. Levie notes that solving issues around AI agent deployment and token usage requires building more data center capacity. He also points out that CFOs and CIOs are grappling with how current IT policies adapt to AI agents that can run complex prompts and operate for extended periods, potentially causing conflicts in data management.
Physical AI transforms asset management, says IFS executive
Christian Pedersen, chief product officer at IFS, discusses the increasing automation in factories worldwide, citing the 'World Robotics 2025' report. Robot density has risen significantly in Western Europe, North America, and Asia. Pedersen highlights the growing gap between robot capabilities and the systems needed to support them in shared environments, including challenges in safety and regulations. The article also mentions the upcoming 2026 Robotics Summit & Expo.
ChatGPT cannot replace lawyers, experts warn
While AI tools like ChatGPT can draft legal documents, they cannot replace the judgment and accountability required in legal practice. Law involves understanding real-world context, anticipating opposing counsel's moves, and considering broader business objectives, which AI currently lacks. AI-generated legal analysis can be overconfident, imprecise, and contain hidden errors. Practicing law requires human insight and judgment, not just information processing, making AI an unsuitable substitute for legal professionals.
Sources
- JD Vance, Bessent question tech giants on AI security before Anthropic's Mythos release
- Vance and Bessent questioned tech giants on AI security before Anthropic's Mythos
- Vance, Bessent questioned tech giants on AI security before Anthropic's Mythos release
- AI could vastly streamline policing. Skeptics urge caution.
- Report: Cisco could acquire AI agent security startup Astrix Security for $250M+
- Illia Polosukhin on AI agents and why they still need human oversight
- IBM's Jeff Crume on AI Tech Debt
- Can AI be a ‘child of God’? Inside Anthropic’s meeting with Christian leaders.
- AI Growth Is Now Limited by Hardware Shortages Not Ideas
- Box CEO explains why he's not worried about wasting tokens
- Transforming asset management with physical AI
- Think ChatGPT Can Replace Your Lawyer? Think Again