Artificial intelligence is rapidly changing the landscape of modern warfare, particularly in the ongoing conflict involving the United States, Israel, and Iran. AI tools are significantly accelerating intelligence gathering and target selection, allowing for more precise and rapid military actions. The U.S. is deploying an AI-powered anti-drone system called Merops to the Middle East to counter Iran's Shahed drones, which are difficult to detect and can be mistaken for other objects. This deployment addresses concerns about the effectiveness of previous responses to these low-cost drones.
The extensive use of AI in warfare, however, raises significant ethical questions and has led to disputes between the government and AI developers. The Pentagon has clashed with AI company Anthropic, with Defense Secretary Pete Hegseth reportedly threatening to designate Anthropic a supply chain risk. This stems from Anthropic's refusal to allow certain military uses of its AI, Claude, such as domestic surveillance or autonomous weapons. Former Trump administration AI adviser Dean Ball highlights the "AI alignment problem," noting that AI systems reflect the moral philosophies embedded in them, making their alignment a political issue; he warns that governments could misuse AI, or that systems aligned with one set of values could clash with those of future administrations. Claude has reportedly been used in military operations, including in the conflict with Iran.
Beyond military applications, AI is also impacting cybersecurity and public administration. OpenAI recently launched Codex Security, an AI tool designed to find, verify, and fix software vulnerabilities. During a beta test, it scanned over 1.2 million code changes and identified 792 critical and 10,561 high-severity issues, aiming to improve system security for certain ChatGPT subscribers. Meanwhile, in Albuquerque, City Clerk Ethan Watson observes an unprecedented surge in public record requests, with over 16,000 received last year—a 300% increase since 2017. Many believe AI tools are being used to gather information and format these requests, often from out-of-state or international sources, contributing to a backlog of 1,807 requests.
AI's influence extends into daily life, from personal communication to criminal activities. Gen Z, for instance, is increasingly using AI tools like ChatGPT to help draft difficult conversations, decode mixed signals, and script challenging social interactions. While some find this helpful for clarity, experts express concern that such reliance could hinder emotional development. On a more concerning note, scammers are leveraging AI to create more convincing phishing emails and clone voices, leading to a significant increase in fraud. Phishing and spoofing scams saw an 85.6% surge in 2025, with median losses doubling to $2,060, and investment scams resulting in the highest median losses at $30,000.
Key Takeaways
- AI is accelerating warfare by improving intelligence gathering and target selection, particularly in the conflict involving the US, Israel, and Iran.
- The U.S. is deploying the AI-powered Merops anti-drone system to the Middle East to counter Iran's Shahed drones.
- A conflict exists between the Pentagon and AI company Anthropic over the ethical use of its AI, Claude, with Anthropic refusing certain military applications like domestic surveillance.
- OpenAI launched Codex Security, an AI tool that identified 792 critical and 10,561 high-severity software vulnerabilities across 1.2 million code changes during a beta test for ChatGPT subscribers.
- Albuquerque City Clerk Ethan Watson reports a 300% increase in public record requests since 2017, totaling over 16,000 last year, largely attributed to AI tools used for information gathering.
- Scammers are leveraging AI to create more convincing phishing emails and voice clones, contributing to an 85.6% surge in phishing and spoofing scams in 2025.
- Median losses from phishing and spoofing scams doubled to $2,060, with investment scams leading to the highest median losses at $30,000.
- Gen Z is increasingly using AI tools like ChatGPT to draft difficult conversations and social interactions, raising concerns about potential impacts on emotional development.
- Former AI adviser Dean Ball highlights the "AI alignment problem," where AI systems reflect embedded moral philosophies, potentially clashing with government interests.
- The dispute between the US military and Anthropic raises questions about private tech companies' roles in military decisions and the ethical boundaries of AI in national security.
AI speeds up war in Iran with faster targeting
Artificial intelligence is significantly accelerating the war in Iran by speeding up how intelligence is gathered and targets are selected. Military AI software analyzes large volumes of data far faster than human analysts could, enabling more precise and rapid military actions. Its use in this conflict marks a new phase of warfare in which speed and data analysis are decisive, and it could substantially shape the course of the war.
AI's role in Iran conflict sparks capability questions
Artificial intelligence is being used more than ever to analyze intelligence and select targets in the recent conflict involving the United States, Israel, and Iran. While AI in warfare isn't new, its extensive use now raises questions about its effectiveness and reliability. Supporters believe AI can improve accuracy and reduce civilian harm. However, critics worry about potential errors, biases, and loss of human control. The debate highlights the complex issues surrounding AI in modern warfare.
AI tools cause surge in public record requests in Albuquerque
Albuquerque is experiencing an unprecedented number of public record requests, likely due to artificial intelligence tools. The city received over 16,000 requests last year, a 300% increase since 2017. City Clerk Ethan Watson believes many users employ AI to gather information from news articles and format requests, often for police body camera footage. These requests, frequently from out-of-state and international sources, consume significant staff time. This trend is worsening the city's record request backlog, which currently stands at 1,807 requests.
OpenAI's Codex Security finds thousands of software flaws
OpenAI has launched Codex Security, an AI tool designed to find, verify, and fix software vulnerabilities. In a recent beta test, it scanned over 1.2 million code changes and identified 792 critical and 10,561 high-severity issues. The tool uses advanced AI reasoning and validation to reduce false positives and provide accurate fixes. Codex Security aims to improve system security by deeply understanding project context and identifying complex vulnerabilities that other tools might miss. It is currently available in a research preview for certain ChatGPT subscribers.
AI helps scammers create more convincing fraud
Scammers are increasingly using artificial intelligence to create more convincing phishing emails and clone voices, making fraud more effective. Phishing and spoofing scams saw an 85.6% surge in 2025, with median losses doubling to $2,060. Investment scams led to the highest median losses at $30,000. Experts warn that AI enables criminals to target more people with realistic scams. The report also notes a shift from robocalls to online contact, with nearly half of victims first encountering scammers online.
US sends AI anti-drone system to Middle East amid Iran concerns
The U.S. is deploying an AI-powered anti-drone system called Merops to the Middle East due to concerns about Iran's Shahed drones. Officials described the U.S. response to these drones as disappointing, especially since Iran's drones are based on a simpler model that Russia has been continuously improving. The drones are difficult to detect on radar and can be mistaken for other objects. The Merops system is designed to spot and neutralize them more affordably than using expensive missiles. This move highlights the challenge of countering low-cost drones with high-cost defenses.
Government AI alignment issues spark debate
Dean Ball, former AI adviser to the Trump administration, discusses the government's "AI alignment problem," particularly concerning the Pentagon's actions against Anthropic. He argues that AI systems reflect the moral philosophies embedded in them, making their alignment a political issue. Ball expresses concern that governments might misuse AI or that AI aligned with certain values could clash with future administrations. He believes this creates a complex challenge where AI operations might work against government interests in ways that are difficult to understand or track.
Pentagon's Anthropic conflict raises war AI questions
The conflict between the Pentagon and AI company Anthropic highlights critical issues regarding AI use in warfare. Defense Secretary Pete Hegseth threatened to designate Anthropic a supply chain risk, a move typically reserved for foreign companies. This stems from Anthropic's refusal to allow certain uses of its AI, Claude, by the military, including surveillance of Americans. Anthropic's AI has reportedly been used in military operations, including in the conflict with Iran. This dispute raises questions about the ethical boundaries and control of AI in national security.
Gen Z uses AI for difficult conversations
A growing number of young people, particularly Gen Z, are using AI tools like ChatGPT to help draft difficult conversations and social interactions, from writing rejection texts to decoding mixed signals and scripting challenging discussions. Experts worry this reliance on AI could hinder emotional development and leave individuals less prepared for real human connection. While some find AI helpful for clarity, others find AI-generated messages confusing or impersonal, raising concerns about authenticity and emotional growth.
US military AI feud with Anthropic sparks ethical debate
A dispute between the U.S. military and AI company Anthropic is shedding light on the ethical challenges of using AI in national security. Anthropic, known for its safety-focused approach, is clashing with the Pentagon over how its AI, Claude, can be used. The military wants unrestricted access for national defense, while Anthropic has drawn lines regarding domestic surveillance and autonomous weapons. This conflict raises questions about private tech companies' roles in military decisions and the potential for AI to be used in ways beyond initial agreements, especially within classified systems.
Sources
- How AI Is Turbocharging the War in Iran
- Questions over AI capability as tech guides Iran strikes
- Public records in the age of artificial intelligence
- OpenAI Codex Security Scanned 1.2 Million Commits and Found 10,561 High-Severity Issues
- Forget robocalls. How scammers are using AI to get your money.
- The U.S. is sending an AI-powered anti-drone system to the Mideast as response to countering Iran’s Shahed has been ‘disappointing’
- Video: Opinion | The Government’s A.I. Alignment Problem
- Opinion | Why the Pentagon Wants to Destroy Anthropic
- At a loss for words? Gen Z is outsourcing the hard conversations to AI
- What does the US military’s feud with Anthropic mean for AI used in war?