Visa is significantly enhancing its credit card dispute management with six new AI-powered tools. These innovations aim to streamline processes, reduce costs, and alleviate frustration for merchants and financial institutions. The company reported processing over 106 million disputes in 2025, marking a 35% increase since 2019. Three tools are designed for merchants, offering pre-dispute resolution and automated chargeback challenges using generative AI, while the other three assist financial institutions with analysis and case management. Some tools are already available, with wider deployment expected by late 2026.
The proliferation of AI agents, expected to reach billions by 2028, is fundamentally reshaping cybersecurity. Zero trust security strategies, emphasizing 'never trust, always verify,' are becoming essential to protect corporate data accessed by these agents. Exabeam, for instance, has expanded its Agent Behavior Analytics to secure AI agents used with platforms like ChatGPT, Copilot, and Gemini. This includes AI behavior baselining and prompt abuse detection, crucial for managing risks from autonomous AI workers and the rapid generation of code.
AI's influence extends deeply into the legal sector and education. Law firm Goodwin Procter launched 'Propel,' a firmwide AI training initiative, aiming for over 90% employee AI tool usage by late 2026 to boost client service and efficiency. Similarly, UK law firms are adapting their training and hiring for junior lawyers due to AI's impact. In academia, Columbia Business School professors developed AI tools that ask questions, encouraging critical thinking rather than just summarizing, a direct response to students using tools like ChatGPT.
Addressing the societal implications of AI, Lean In, under new CEO Bridget Griswold, is tackling the AI gender gap. The organization highlights that women's jobs are three times more likely to be automated, and women are underrepresented in AI leadership. Meanwhile, OpenAI has been identified as the primary funder of the Parents and Kids Safe AI Coalition, a group advocating for AI age verification. This undisclosed funding raises concerns about potential conflicts of interest, given OpenAI CEO Sam Altman's position. ClawGo also introduced OpenClaw Companion, a handheld device for running dedicated AI agents, signaling a deeper dive into AI hardware.
Key Takeaways
- Visa launched six new AI tools to manage over 106 million credit card disputes processed in 2025, a 35% increase since 2019.
- These Visa tools include generative AI for automated chargeback challenges and pre-dispute resolution, with some available now and others by late 2026.
- Zero trust security is critical for the AI era, with billions of AI agents expected by 2028, requiring continuous monitoring and strict identity checks.
- Exabeam expanded its Agent Behavior Analytics to secure AI agents on platforms like ChatGPT, Copilot, and Gemini, detecting misuse and insider threats.
- Law firm Goodwin Procter initiated 'Propel,' an AI training program targeting over 90% employee AI tool usage by late 2026 to enhance client service.
- Lean In, led by new CEO Bridget Griswold, is addressing the AI gender gap, noting that women's jobs are three times more likely to be automated and that women are underrepresented in AI leadership.
- Columbia Business School professors developed AI tools that ask questions to foster critical thinking, countering the use of tools like ChatGPT for simple summaries.
- OpenAI secretly funded the Parents and Kids Safe AI Coalition, which advocates for AI age verification, raising potential conflict of interest concerns for CEO Sam Altman.
- AI is transforming junior lawyer roles in UK firms, requiring adaptations in training and hiring to meet new skill demands.
- ClawGo introduced OpenClaw Companion, a handheld device designed to run OpenClaw-native AI agents, indicating a focus on dedicated AI hardware.
Visa uses AI to speed up dispute resolution
Visa has launched six new AI-powered tools to help manage credit card disputes more efficiently. These tools aim to reduce costs and frustration for merchants, banks, and other financial institutions. In 2025, Visa processed over 106 million disputes, a 35% increase since 2019. The new tools include features for pre-dispute resolution, automated chargeback challenges with AI, and better transaction detail sharing. Some tools are available now, with others planned for late 2026.
Visa rolls out AI tools for charge dispute management
Visa has introduced six new AI tools to streamline the credit card dispute process for merchants, issuers, and acquirers. The company processed over 106 million disputes in 2025, a significant increase. These tools are designed to cut costs and reduce confusion by automating tasks and providing clearer transaction information. Three tools focus on merchants, helping them resolve disputes early and manage chargebacks. The other three assist financial institutions with analysis and case management.
Visa deploys AI to handle 106 million charge disputes
Visa is using six new AI tools to manage the rising number of credit card charge disputes, which increased 35% since 2019 to over 106 million in 2025. Three tools are for merchants, offering pre-dispute resolution and automated chargeback challenges using generative AI. Three other tools are for issuers and acquirers, providing AI-driven decision support and document analysis. These tools aim to reduce costs and improve efficiency in dispute management, with some becoming widely available in 2026.
Agentic AI offers new security possibilities
Non-Human Identities (NHIs) are crucial for cloud security, acting like autonomous workers with specific permissions. Managing these NHIs holistically reduces risks, ensures compliance, and improves operational efficiency. This approach aligns security with future AI trends by providing data-driven insights for better decision-making. By securing NHIs, organizations can build a more resilient cybersecurity framework and bridge gaps between security and development teams.
Zero trust security is key for the AI era
The rise of AI agents accessing corporate data makes zero trust security strategies essential. With billions of AI agents expected by 2028, traditional security models are insufficient. Zero trust principles like 'never trust, always verify' are critical for securing AI agents through strict identity checks and continuous monitoring. This approach helps protect against threats like data breaches and manipulation by ensuring rigorous security measures are in place for AI operations.
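As a rough illustration of the 'never trust, always verify' principle applied to AI agents, the sketch below checks identity and least-privilege scope on every request. All names here (`AgentRequest`, `ALLOWED_SCOPES`, `verify_request`) are hypothetical, not drawn from any specific product.

```python
# Minimal zero-trust sketch for AI-agent requests: every call is
# authenticated and authorized; nothing is trusted by default.
from dataclasses import dataclass

# Each agent identity maps to the narrow set of scopes it may use.
ALLOWED_SCOPES = {
    "report-bot": {"crm:read"},
    "billing-agent": {"billing:read", "billing:write"},
}

@dataclass
class AgentRequest:
    agent_id: str
    scope: str
    token_valid: bool  # stands in for real cryptographic verification

def verify_request(req: AgentRequest) -> bool:
    """Verify identity and authorization on every request."""
    if not req.token_valid:                     # 1. authenticate the token
        return False
    allowed = ALLOWED_SCOPES.get(req.agent_id)  # 2. is the identity known?
    if allowed is None:
        return False
    return req.scope in allowed                 # 3. enforce least privilege

print(verify_request(AgentRequest("report-bot", "crm:read", True)))       # True
print(verify_request(AgentRequest("report-bot", "billing:write", True)))  # False
```

The point of the sketch is that there is no trusted network zone: an agent with a valid token still gets only the scopes explicitly granted to its identity.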
Goodwin Procter trains staff on AI for better client service
Law firm Goodwin Procter has launched 'Propel,' a firmwide AI training initiative to enhance client service and efficiency. Led by Chief Digital and Technology Officer Eric Tan, the program offers continuous learning through in-person sessions, role-specific training, and online modules. Propel aims to equip all employees with generative AI skills, potentially allowing for higher hourly rates by improving work quality and speed. The firm targets over 90% employee AI tool usage by the end of 2026.
Lean In tackles AI gender gap with new leadership
Lean In, led by new CEO Bridget Griswold, is focusing on closing the gender gap in artificial intelligence. The organization notes that women's jobs are three times more likely to be automated by AI, and that women are underrepresented in AI leadership. Women also report feeling more threatened by AI and receive less manager support for its use. Lean In aims to encourage women to use AI confidently and accelerate their careers, addressing biases that may hinder their progress.
Professors create AI tools that ask questions
Professors at Columbia Business School have developed AI tools designed to ask questions rather than provide answers. This approach emerged in response to students using tools like ChatGPT to summarize case studies, shifting the focus from critical thinking and argument development. The new tools aim to encourage deeper engagement and analytical skills among students.
OpenAI secretly funded AI age verification group
OpenAI has been revealed as the primary funder of the Parents and Kids Safe AI Coalition, a group advocating for AI age verification requirements. The coalition's connection to OpenAI was not initially disclosed to other advocacy organizations it contacted. This backing raises concerns, as OpenAI CEO Sam Altman heads a company that could benefit from such age assurance requirements, potentially creating a conflict of interest.
AI changes junior lawyer roles in UK firms
Artificial intelligence is transforming how UK law firms train, hire, and deploy junior lawyers. As AI technology becomes more widespread, firms are adapting their entry-level hiring and training programs to meet new skill demands. This shift reflects the growing impact of AI on legal practice and the evolving needs of the profession.
ClawGo launches companion for AI agents
ClawGo has introduced OpenClaw Companion, a handheld device for running OpenClaw-native AI agents. The company believes the real opportunity in AI hardware lies deeper than just putting a large language model into a device. This companion aims to provide a dedicated platform for AI agents, moving beyond simple interfaces to a more integrated solution for AI agent functionality.
Exabeam secures AI agents like ChatGPT
Exabeam has expanded its Agent Behavior Analytics to secure AI agents used with platforms like ChatGPT, Copilot, and Gemini. The update addresses the need for visibility into how employees use AI assistants to detect misuse and insider threats. New capabilities include AI behavior baselining, prompt and model abuse detection, and identity monitoring. These features help organizations secure autonomous AI workers and prevent security incidents.
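To give a sense of what behavior baselining means in practice, the generic sketch below flags users whose AI-assistant prompt volume deviates sharply from their own history. This is a simple z-score check for illustration only, not Exabeam's actual algorithm or API.

```python
# Hedged sketch of behavior baselining: compare today's prompt count
# against the user's historical mean and standard deviation.
import statistics

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Return True if today's count is far outside the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return abs(today - mean) / stdev > z_threshold

# A user who normally sends ~20 prompts a day suddenly sends 400.
baseline = [18, 22, 19, 21, 20, 23, 17]
print(is_anomalous(baseline, 400))  # True
print(is_anomalous(baseline, 24))   # False
```

Real products layer far richer signals (identities, models touched, prompt content) on top of this idea, but the core is the same: model normal behavior per entity, then alert on large deviations.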
AI code generation changes security approach
The rise of AI writing code has fundamentally altered application security, moving away from training developers to integrating security directly into tools and workflows. With employees now able to create functional applications rapidly, traditional security gates are becoming obsolete. Security must now be embedded within AI systems, focusing on agent identity, permissions, and continuous monitoring to manage risks associated with disposable code and autonomous agents.
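One way to picture security embedded in the workflow rather than bolted on as a gate: scan AI-generated code for risky calls before it ever runs. The sketch below is a toy deny-list check on Python source; the pattern list and function names are hypothetical examples, not a complete or recommended policy.

```python
# Illustrative in-workflow check: parse generated Python source and
# report calls on a small deny-list before execution is allowed.
import ast

RISKY_CALLS = {"eval", "exec", "system"}  # example deny-list, not exhaustive

def find_risky_calls(source: str) -> list[str]:
    """Return the names of risky calls found in generated source code."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in RISKY_CALLS:
                hits.append(name)
    return hits

generated = "import os\nos.system('rm -rf /tmp/cache')\n"
print(find_risky_calls(generated))  # ['system']
```

In a real pipeline such a check would sit inline with the agent's permissions and identity controls, so that disposable, rapidly generated code is vetted automatically instead of waiting on a human review gate.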
Sources
- Visa adds AI tools for dispute resolution
- Visa launches new AI tools to manage the charge dispute process
- Visa launches 6 AI tools to tackle 106 million-dispute backlog
- Why be optimistic about the future of Agentic AI?
- Why zero-trust security strategies are vital in the AI era
- Better Service and Higher Rates? Inside Goodwin’s Ambitious AI Training Program
- Sheryl Sandberg tapped a 25-year-old to run Lean In. Here’s her plan to close the AI gender gap
- These professors built AI tools that ask questions, instead of giving answers
- Group Pushing Age Verification Requirements for AI Turns Out to Be Sneakily Backed by OpenAI
- AI Reshapes Junior Lawyer Roles In Training And Hiring
- ClawGo Launches an OpenClaw Companion, Betting on the Harness Behind AI Agents
- Exabeam expands Agent Behavior Analytics to secure AI agents across ChatGPT, Copilot and Gemini
- When AI Writes the Code, What Changes for Security?