Microsoft Invests in AI Chips, Partners with OpenAI, Anthropic

Major technology players are making significant moves in artificial intelligence development and infrastructure. Microsoft, under CEO Satya Nadella and AI chief Mustafa Suleyman, is investing heavily in its own AI chip clusters and developing in-house AI models like MAI-1-preview, aiming for greater self-sufficiency. This strategy complements its ongoing partnership with OpenAI and includes plans to integrate models from other developers like Anthropic. Meanwhile, Chinese tech giants Alibaba and Baidu are increasingly training AI models on chips of their own design, such as Alibaba's Zhenwu and Baidu's Kunlun P800. The shift reduces their reliance on Nvidia hardware, though both companies still depend on Nvidia's advanced chips for their most critical AI tasks, and it is part of China's broader push for technological independence, especially in light of U.S. export restrictions.

Beyond infrastructure, AI's applications are expanding across sectors. In cybersecurity, leaders at Brex and FICO highlight AI's role in accelerating risk detection and management, emphasizing explainability and trust. Security teams are also advised to manage autonomous AI agents through composite identities, comprehensive monitoring, and clear accountability structures. In the financial realm, an AI simulation of a Federal Reserve meeting suggested political pressure could polarize board members, though central banks are primarily using AI for operational improvements and analysis.

In healthcare, the focus is on AI augmenting, not replacing, doctors to improve system efficiency and patient care. In marketing, AI-driven platforms like SeezBoost are helping car dealerships optimize advertising and boost sales. However, regulatory bodies like the FTC are cautioning AI companies against exaggerated advertising claims, as seen in a case involving a company that falsely advertised its AI detector's accuracy. In legal news, a court struck down a California law aimed at regulating AI-generated election content, citing First Amendment violations.

Key Takeaways

  • Microsoft is investing significantly in building its own AI chip clusters and developing in-house AI models like MAI-1-preview to enhance self-sufficiency.
  • Alibaba and Baidu are using their internally designed chips, such as Alibaba's Zhenwu and Baidu's Kunlun P800, for AI model training, reducing reliance on Nvidia.
  • Despite using their own chips, Alibaba and Baidu continue to use Nvidia's advanced chips for their most critical AI tasks.
  • Microsoft AI CEO Mustafa Suleyman noted that the MAI-1-preview model was trained on a cluster of 15,000 Nvidia H100s, with plans for larger clusters.
  • Microsoft plans to use AI models from other developers, including Anthropic, in its products.
  • An AI simulation indicated that political pressure could polarize Federal Reserve board members during meetings.
  • A California law targeting AI-generated election content was struck down by a court for violating the First Amendment.
  • Security teams are advised to manage autonomous AI agents by assigning composite identities, implementing comprehensive monitoring, and establishing accountability.
  • The FTC has warned AI companies against making unsubstantiated advertising claims, citing a case where an AI detector's accuracy was exaggerated.
  • AI is being used to improve cybersecurity operations, with leaders emphasizing explainability and trust in AI integration.

China's Alibaba and Baidu use own AI chips, reducing Nvidia reliance

Chinese tech giants Alibaba and Baidu are now training artificial intelligence models on chips of their own design, partially replacing chips previously supplied by Nvidia. The move signals an acceleration of China's push for technological independence from the U.S. Alibaba uses its Zhenwu chip for smaller models, while Baidu is testing its Kunlun P800 chip with its Ernie AI model. Both companies still use Nvidia's advanced chips for their most critical AI tasks.

Alibaba and Baidu use own AI chips for training, report says

Chinese tech leaders Alibaba and Baidu have started using their own internally designed chips to train AI models, according to a report. This shift means they are using less hardware from Nvidia. This development is seen as a significant step in China's tech industry moving towards greater self-sufficiency. The companies are reportedly testing their own chips for various AI tasks, marking a change in their reliance on foreign technology.

Alibaba, Baidu switch to self-made chips for AI training

Alibaba and Baidu have begun training artificial intelligence models on chips of their own design, reducing their reliance on Nvidia. The shift is a key part of China's effort to develop its AI technology independently. Alibaba uses its Zhenwu chip for smaller AI models, and Baidu is testing its Kunlun P800 chip with its Ernie AI model. Despite this move, both companies continue to use Nvidia's more powerful chips for their most advanced AI work.

Alibaba, Baidu use own chips for AI training, report reveals

Chinese tech giants Alibaba and Baidu are now training AI models on chips of their own design, partially replacing Nvidia hardware, according to The Information. Alibaba has used its chips for smaller models since early 2025, while Baidu is testing its Kunlun P800 chip for its Ernie AI model. The move comes as U.S. export restrictions limit sales of advanced AI chips to China. Nvidia has acknowledged the growing competition in the market.

Microsoft plans major investment in its own AI chip cluster

Microsoft is planning significant investments to build its own AI chip cluster, aiming for greater self-sufficiency in artificial intelligence. Microsoft AI CEO Mustafa Suleyman announced this strategy during an employee meeting. The company is also developing its own AI models, like the recently unveiled MAI-1-preview. While Microsoft continues its partnership with OpenAI, this move indicates a desire to control its AI infrastructure and development more directly. Suleyman mentioned that the MAI-1-preview was trained on a relatively small cluster of 15,000 Nvidia H100s.

Microsoft invests heavily in its own AI model training capacity

Microsoft is making significant investments in computing power to train its own advanced AI models. Microsoft AI chief Mustafa Suleyman stated the company needs the capacity to build world-class AI models in-house. He mentioned that their MAI-1-preview model was trained on a small cluster of 15,000 Nvidia H100s, and they aim for much larger clusters in the future. CEO Satya Nadella affirmed Microsoft's commitment to building its own AI capabilities while continuing to support partners like OpenAI. The company also plans to use AI models from other developers, such as Anthropic, in its products.

Webinar: AI pentesting and attack surface management combined

A webinar will demonstrate how AI-powered penetration testing and Attack Surface Management (ASM) can improve cybersecurity. The session will showcase Escape's AI pentesting tool, which finds business logic flaws missed by traditional scanners. It will also feature Escape's AI-powered ASM, which provides context for security findings. The webinar is scheduled for Tuesday, September 23rd, 2025, at 11:30 AM EST. Attendees will see live demos and hear customer use cases for managing risk effectively.

Court strikes down California law on AI election content

A court has struck down California's law, AB 2839, which aimed to regulate AI-generated audio and video content related to elections. The law prohibited materially deceptive AI content about candidates or elections unless it included a disclaimer. The court ruled that the law violated the First Amendment by restricting political speech. It emphasized that while states have an interest in election integrity, they cannot unduly limit free expression. The ruling highlighted the importance of counter-speech and market-based solutions over censorship.

Security teams can manage autonomous AI agents with these 3 strategies

Security teams can better manage the risks associated with autonomous AI agents by adopting new best practices. These AI systems, while not malicious, can expose security vulnerabilities due to over-provisioned access. To address this, teams should assign composite identities to AI agents, linking them to human users for better tracking. Comprehensive monitoring frameworks are also crucial to track agent activities across systems. Finally, transparency and accountability are key, with clear structures for who is responsible when an AI agent exceeds its bounds.
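
As a rough illustration of the composite-identity and monitoring ideas, here is a minimal Python sketch, assuming a simple in-house setup rather than any specific identity product; the class names, scopes, and the authorize helper are hypothetical. Each agent credential is bound to an accountable human owner, access outside explicitly granted scopes is denied, and every decision is logged against both identities.

# Hypothetical sketch of a "composite identity" for an autonomous AI agent:
# every agent credential is bound to a sponsoring human, and every action
# is logged against both identities so audits and accountability stay clear.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CompositeIdentity:
    agent_id: str              # machine identity of the AI agent
    human_owner: str           # accountable human the agent acts on behalf of
    allowed_scopes: frozenset  # least-privilege scopes, not blanket access

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, identity: CompositeIdentity, action: str,
               resource: str, allowed: bool) -> None:
        # Each entry names both the agent and its human owner.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": identity.agent_id,
            "owner": identity.human_owner,
            "action": action,
            "resource": resource,
            "allowed": allowed,
        })

def authorize(identity: CompositeIdentity, scope: str,
              resource: str, log: AuditLog) -> bool:
    # Deny anything outside the agent's explicitly granted scopes
    # (no over-provisioning), and log the decision either way.
    allowed = scope in identity.allowed_scopes
    log.record(identity, scope, resource, allowed)
    return allowed

# Example: a reporting agent sponsored by a named engineer.
log = AuditLog()
agent = CompositeIdentity("report-bot-01", "jane.doe@example.com",
                          frozenset({"read:reports"}))
authorize(agent, "read:reports", "sales-db", log)    # permitted and logged
authorize(agent, "delete:records", "sales-db", log)  # denied and logged
print(log.entries)

A real deployment would layer this onto an existing identity and access management system, but the accountability pattern of recording the agent and its sponsoring human on every log line is the point the advice turns on.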

AI simulation shows political pressure divides Federal Reserve board

An artificial intelligence simulation of a Federal Reserve meeting revealed that political pressure can polarize board members. The study used AI agents modeled on real policymakers to process economic data and news. The simulation, replicating the July 2025 FOMC meeting, showed that external scrutiny can influence internal decisions. While central banks are not using AI to set policy, many are using it to improve operations, research, and analysis. The Bank of Japan and Australia's central bank are among those experimenting with AI for economic analysis.
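
The article does not describe the study's methodology in code, but the general shape of such an agent-based simulation can be sketched as follows. This is a hypothetical illustration, not the researchers' setup: the personas, prompts, and the stubbed query_model function are all assumptions, and a real experiment would call an actual language model and parse its numeric reply.

# Illustrative sketch (not the study's actual code) of an agent-based FOMC
# simulation: persona agents receive the same briefing, with or without a
# political-pressure message, and the dispersion of their rate votes is compared.
from statistics import pstdev

PERSONAS = ["dovish governor", "hawkish governor", "data-dependent centrist"]

def query_model(prompt: str) -> float:
    # Stand-in for a real LLM call so the sketch runs end to end; the numbers
    # below are arbitrary and carry no empirical meaning.
    base = -0.25 if "dovish" in prompt else 0.25 if "hawkish" in prompt else 0.0
    if "Public political pressure" in prompt:
        # Arbitrary per-persona reaction, purely so the two runs differ.
        base += -0.50 if "dovish" in prompt else 0.25 if "hawkish" in prompt else 0.0
    return base

def simulate(briefing: str, pressure: str | None = None) -> list[float]:
    votes = []
    for persona in PERSONAS:
        prompt = f"You are a {persona} on the FOMC.\n{briefing}\n"
        if pressure:
            prompt += f"Public political pressure: {pressure}\n"
        prompt += "State your preferred change to the policy rate in percentage points."
        votes.append(query_model(prompt))
    return votes

def polarization(votes: list[float]) -> float:
    # Simple proxy: larger dispersion of proposed rate changes = more polarization.
    return pstdev(votes)

briefing = "Inflation is cooling slowly while the labor market softens."
baseline = polarization(simulate(briefing))
pressured = polarization(simulate(briefing, pressure="calls for immediate large cuts"))
print(f"vote dispersion without pressure: {baseline:.2f}, with pressure: {pressured:.2f}")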

AI should help healthcare systems, not replace doctors

Artificial intelligence has the potential to improve healthcare systems rather than replace physicians. This approach focuses on using AI to enhance efficiency and address systemic issues within healthcare. The goal is to leverage AI as a tool to support medical professionals and improve patient care. This perspective emphasizes the collaborative role of AI in the future of medicine.

SeezBoost uses AI marketing to boost car dealer sales

SeezBoost offers an AI-driven marketing platform designed to help car dealerships optimize their advertising spending and increase sales. The platform uses artificial intelligence to analyze data, identify potential buyers, and deliver targeted ads. This allows dealers to move beyond traditional marketing methods and connect with customers more likely to make a purchase. By focusing on precision and personalization, SeezBoost aims to improve return on investment for dealership advertising.

AI transforms security operations, say Brex and FICO leaders

AI is significantly changing the security landscape, with defenders using it to find risks faster and manage cloud complexity. Mark Hillick, CISO at Brex, views AI as a business enabler that should be integrated into product development. Yoni Kaplansky, VP of Cybersecurity at FICO, sees AI's potential in automating risk identification and remediation. Both leaders emphasize that security should accelerate innovation, not hinder it. They highlight the need for AI to be explainable and for security teams to build trust within organizations.

FTC warns AI companies on exaggerated advertising claims

The Federal Trade Commission (FTC) has cautioned AI companies against making unsupported claims in their advertising. The warning follows a case against software company Workado, which advertised its AI detection tool as 98% accurate; the FTC found its accuracy dropped to about 53% outside academic contexts. Workado was ordered to stop making unsubstantiated claims, retain supporting evidence, and notify customers. More broadly, the FTC advises AI companies to test their products broadly, ensure marketing matches the data, build an evidence file, acknowledge limitations, and embed compliance into their culture.
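
To make the "test broadly" point concrete, here is a toy Python sketch using entirely made-up evaluation data (not Workado's): a detector that scores well on the kind of text it was tuned for can score far lower on other domains, so an advertised accuracy figure should reflect performance across realistic conditions rather than the single best case.

# Toy illustration with fabricated example data: per-domain accuracy of a
# hypothetical AI-text detector versus its favorable in-domain number.
def accuracy(predictions, labels) -> float:
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# Hypothetical evaluation sets: (predicted_is_ai, actually_is_ai) pairs.
eval_sets = {
    "academic essays (in-domain)":
        [(True, True)] * 24 + [(False, False)] * 25 + [(False, True)] * 1,
    "blog posts":
        [(True, True)] * 15 + [(False, False)] * 16
        + [(False, True)] * 10 + [(True, False)] * 9,
    "marketing copy":
        [(True, True)] * 13 + [(False, False)] * 14
        + [(False, True)] * 12 + [(True, False)] * 11,
}

for domain, pairs in eval_sets.items():
    preds, labels = zip(*pairs)
    print(f"{domain}: {accuracy(preds, labels):.0%}")
# A defensible marketing claim reflects the weaker realistic domains,
# not only the most favorable one.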

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI chips, Nvidia, Alibaba, Baidu, Technological independence, China, AI models, AI training, Microsoft, AI chip cluster, Self-sufficiency, MAI-1-preview, OpenAI, Nvidia H100s, AI pentesting, Attack Surface Management (ASM), Cybersecurity, Escape AI, AI election content, California law, First Amendment, Political speech, Autonomous AI agents, Security teams, Risk management, AI simulation, Federal Reserve, Political pressure, Central banks, Economic analysis, AI in healthcare, Healthcare systems, Medical professionals, Patient care, AI marketing, Car dealerships, SeezBoost, Sales optimization, AI in security operations, Cloud complexity, Risk identification, Explainable AI, FTC, AI advertising claims, Workado, AI detector tool
