Microsoft Copilot Faces Regulations While OpenAI Expands Data Centers

Washington state lawmakers are actively debating new regulations for artificial intelligence, with discussions held on January 14 and 15, 2026. Proposed bills, such as House Bill 2477 and House Bill 1170, would require generative AI companies with over one million users to provide detection tools and clearly label AI-generated content. Another significant proposal, House Bill 2225, focuses on protecting minors from chatbots such as ChatGPT and Microsoft Copilot, mandating that operators inform young users that these systems are not human and offer crisis resources for self-harm ideation, with an effective date in 2027. Hawaii lawmakers are pursuing similar legislation to protect children from advanced AI chatbots, requiring companies to disclose when AI is in use. These state-level efforts face potential challenges from the Trump administration, which prefers federal AI regulation and has threatened preemption for states with "onerous AI laws." Technology industry groups, including the Washington Technology Industry Association, have raised concerns about the feasibility of reliably detecting AI content and about potential liability.

Amid the regulatory discussions, major AI players are pushing forward with significant developments. OpenAI is embarking on a substantial expansion into data centers, robotics, and consumer devices. On January 14 and 15, 2026, the company issued requests for proposals to US manufacturers for components such as silicon, motors, and cooling gear. OpenAI plans to integrate 750 megawatts of ultra-low-latency infrastructure and aims to invest trillions of dollars in data center expansions, including its "Stargate" project, which envisions $500 billion in US data centers and AI infrastructure.

Meanwhile, Google faces legal challenges: publishers Hachette Book Group and Cengage Group sought to join a class-action lawsuit on January 15, 2026, alleging that Google misused copyrighted material, including works by Scott Turow and N.K. Jemisin, to train its Gemini large language model, and seeking unspecified damages.

In response to evolving regulatory needs, IBM has introduced IBM Sovereign Core, a new software tool designed to help businesses manage digital sovereignty for artificial intelligence. Built on Red Hat's open-source technology, the tool lets companies build, deploy, and manage AI workloads within their chosen legal jurisdictions, ensuring local AI processing and securing encryption keys. Available for tech preview next month and slated for full release by mid-2026, it aims to simplify compliance and reduce audit costs. However, experts from the SCiDA team warn that current data protection guidelines do not adequately address AI training, potentially hindering fair competition: powerful "gatekeeper" companies can leverage vast data stores to train superior AI models, disadvantaging smaller competitors.

The broader implications of AI extend beyond regulation and corporate strategy. Trae Stephens of Anduril highlights AI's critical role in global military defense, emphasizing its efficiency in managing information and control for military operations and the need for US investment. The philosophical debate around AI consciousness also persists, with neuroscience professor Anil Seth cautioning against confusing AI's intelligence with true consciousness and noting the implications for moral status and human psychology.

In the enterprise software sector, a debate rages between veteran investors who see established SaaS companies retaining their data advantage and AI-first advocates who believe traditional business models are rapidly changing. Alexander Lis of SDV suggests a hybrid approach, combining existing company data with the agility of new startups. These "proto-markets" for AI are characterized by constant evolution, blurred software categories, and uncertain economic value distribution, requiring product flexibility and rapid learning.

On the infrastructure front, Saint John, New Brunswick, is being positioned as an AI "hidden gem" thanks to its extensive unused fiber-optic system, attracting plans for a new data center by VoltaGrid and Beacon AI Centres.

Key Takeaways

  • Washington and Hawaii lawmakers are proposing new AI regulations to protect children from chatbots like ChatGPT and Microsoft Copilot, require AI content labeling, and prevent discrimination.
  • House Bill 2225 in Washington state aims to protect minors by requiring chatbot operators to disclose AI is not human and provide crisis resources for self-harm ideation, effective 2027.
  • The Trump administration prefers federal AI regulation and threatens preemption for states with "onerous AI laws."
  • OpenAI plans a massive expansion into data centers, robotics, and consumer devices, issuing RFPs to US manufacturers and aiming for trillions in investment, including a $500 billion "Stargate" project.
  • Publishers Hachette Book Group and Cengage Group are seeking to join a lawsuit against Google, alleging misuse of copyrighted books to train its Gemini large language model.
  • IBM introduced IBM Sovereign Core, a new software tool to help businesses manage digital sovereignty for AI, allowing local AI processing and control over data within chosen legal areas.
  • Experts warn that current data protection rules do not address AI training, potentially hindering competition by favoring "gatekeeper" companies with vast data access.
  • Anduril's chairman, Trae Stephens, highlights AI's crucial role in military defense for efficient information management and control, urging US investment.
  • Neuroscience professor Anil Seth cautions against confusing AI's intelligence with consciousness, emphasizing its implications for moral status and human psychology.
  • Saint John, New Brunswick, is attracting new data center projects from VoltaGrid and Beacon AI Centres due to its extensive unused fiber optic system, positioning it as an AI hub.

Washington Lawmakers Propose New AI Regulations

Washington state lawmakers are considering new laws to regulate artificial intelligence, discussed in hearings on January 14, 2026. These proposals aim to control chatbots, protect children from harmful content, and require clear labels for AI-generated material. Yale Moon, a high school senior, supports House Bill 2477, which would require AI companies to provide detection tools and watermarks for AI content. Another bill, House Bill 2225, focuses on protecting minors from AI chatbots like ChatGPT and Microsoft Copilot, requiring operators to inform minors that the chatbot is not human and to restrict harmful interactions. The tech industry, represented by the Washington Technology Industry Association, opposes some of these measures, citing the difficulty of reliably detecting AI content. The Trump administration has also expressed a preference for federal AI regulation over state-level laws, threatening preemption for states with "onerous AI laws."
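The brief does not quote the bill's actual mechanism, but the disclosure-and-detection idea behind House Bill 2477 can be illustrated with a purely hypothetical sketch: attach a machine-readable provenance label to generated text, which a "detection tool" can then check. All function and field names below are invented for illustration; a deployed system would use a signed standard such as C2PA rather than an unsigned JSON wrapper.

```python
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model_name: str) -> str:
    """Wrap generated text with a machine-readable AI-provenance label.

    Illustrative only: real provenance labeling would use a signed,
    tamper-evident standard, not plain JSON.
    """
    label = {
        "ai_generated": True,
        "model": model_name,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps({"label": label, "content": text})

def is_labeled_ai_content(payload: str) -> bool:
    """Crude 'detection tool': it can only recognize cooperatively
    labeled content, which is exactly the feasibility concern industry
    groups raise about detecting unlabeled AI text."""
    try:
        return bool(json.loads(payload)["label"]["ai_generated"])
    except (ValueError, KeyError, TypeError):
        return False
```

Note the asymmetry the sketch exposes: labeling content you generate is easy, while detecting AI content that carries no label is the hard open problem the Washington Technology Industry Association points to.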

Hawaii Lawmakers Seek to Protect Children from AI Chatbots

Hawaii lawmakers are working on new laws to protect children from advanced AI chatbots. The proposed legislation would require companies that use AI in their business to clearly disclose to consumers that they are interacting with AI. This effort gained new urgency after President Trump tried to stop states from regulating AI and after a Hawaii mother shared concerns about her 12-year-old daughter's interaction with a chatbot. Senator Jarrett Keohokalole plans to introduce a bill to prevent similar situations and may challenge Trump's executive order. Experts agree that clear rules are needed quickly because generative AI is growing fast and can easily mislead young users. Heidi Armstrong and Chelsea Okamoto will help draft the legislation, despite potential challenges from powerful tech lobbyists.

Washington State Considers Strict AI Regulations

Washington state lawmakers are debating new rules for artificial intelligence, with discussions held on January 15, 2026. These regulations aim to address concerns about deepfakes, the safety of chatbots, and discrimination from AI systems. High school senior Yale Moon testified for House Bill 1170, which would require generative AI companies with over one million users to offer AI detection tools and disclose AI-generated content. House Bill 2225 focuses on protecting minors using chatbots, requiring operators to inform young users that the systems are not human and to provide crisis resources for self-harm ideation, with an effective date in 2027. Lawmakers are also considering House Bill 2157 to prevent discrimination in AI-driven decisions for hiring, housing, and loans. Technology industry groups have raised concerns about these bills, including potential liability and the difficulty of reliably detecting AI content.
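As an illustration of the kind of obligation House Bill 2225 describes (the bill text itself is not quoted in this brief), a chatbot operator would need two behaviors: telling minors up front that the system is not human, and surfacing crisis resources when self-harm ideation appears. The sketch below is hypothetical; the function names, the age threshold, and especially the keyword matching are placeholders, not anything from the bill.

```python
# Hypothetical sketch of HB 2225-style disclosure behavior.
# A real operator would use a trained classifier for ideation
# detection; keyword matching here is only a stand-in.

CRISIS_RESOURCES = "If you are thinking about self-harm, call or text 988."

def session_preamble(user_age: int) -> list[str]:
    """Notices shown before a chat session starts."""
    if user_age < 18:
        return ["Reminder: you are talking to an AI, not a human."]
    return []

def respond(user_age: int, user_message: str, model_reply: str) -> list[str]:
    """Wrap a model reply with any required crisis notice."""
    out = []
    if any(kw in user_message.lower() for kw in ("self-harm", "hurt myself")):
        out.append(CRISIS_RESOURCES)
    out.append(model_reply)
    return out
```

The point of the sketch is that the disclosure requirement is a thin wrapper around the model, which is why operators' compliance concerns center on reliable ideation detection rather than on the notices themselves.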

OpenAI Plans Major Expansion into Hardware and Data Centers

OpenAI is preparing for a major expansion into data centers, robotics, and consumer devices. On January 14, 2026, the AI startup issued a request for proposals to US manufacturers for related components. Chris Lehane, OpenAI's Chief Global Affairs Officer, believes AI will help reindustrialize the country. OpenAI plans to integrate 750 megawatts of ultra-low-latency infrastructure. The company is also improving its audio AI models for future personal devices, and in November 2025 CEO Sam Altman mentioned plans to build a 1-gigawatt "AI factory."

OpenAI Seeks US Partners for Robotics and AI Hardware Expansion

OpenAI is actively seeking US-based suppliers to support its significant expansion into consumer devices, robotics, and cloud data centers. The company issued a request for proposals on January 15, 2026, for components such as silicon, motors, and cooling gear. Chris Lehane, OpenAI's chief global affairs officer, stated that this move into robotics will help revitalize the US manufacturing industry. OpenAI plans to invest trillions of dollars in data center expansions, spending the company views as essential to boosting its revenue. It previously sought partners for its Stargate project, an effort to build $500 billion in US data centers and AI infrastructure. Following the news, shares of Symbotic Inc., a robotics and warehouse automation company, gained as much as 5.2 percent.

Anduril Chairman Discusses AI's Role in Military Defense

Trae Stephens, executive chairman and co-founder of Anduril, discussed the impact of artificial intelligence on global military defense. He explained that AI helps create more efficient ways to manage information and control in military operations. Stephens emphasized the importance of the United States investing in this technology. He shared these insights on "FOX Business In Depth."

Exploring the Idea of Conscious Artificial Intelligence

Anil Seth, a neuroscience professor and award-winning author, explores whether artificial intelligence can truly become conscious. He discusses the long history of imagining artificial beings, from the Golem to HAL 9000, and how this dream is renewed with new technology like AI. Seth highlights that some people, like former Google engineer Blake Lemoine, have claimed AI systems are already conscious. He emphasizes that how we view conscious AI is important because it affects the potential moral status and rights of AI systems. It also impacts human psychology, as believing AI can feel could make us vulnerable or distort our ethics. Seth suggests we should not confuse AI's intelligence with consciousness, nor should we overestimate machines while underestimating human nature.

Publishers Join Lawsuit Against Google Over AI Training Data

On Thursday, January 15, 2026, publishers Hachette Book Group and Cengage Group asked a California federal court for permission to intervene in a class-action lawsuit claiming Google misused copyrighted material to train its AI systems. Maria Pallante, CEO of the Association of American Publishers, said publishers are uniquely qualified to address the case's legal and factual issues. The publishers cited ten examples of their books, including works by Scott Turow and N.K. Jemisin, that they allege Google used improperly to train its Gemini large language model. They seek an unspecified amount in damages on behalf of themselves and other authors and publishers. U.S. District Judge Eumi Lee will decide whether they can join the lawsuit.

IBM Unveils New Tool for Sovereign AI Control

IBM has introduced a new software tool called IBM Sovereign Core to help businesses manage digital sovereignty for artificial intelligence. This tool, built on Red Hat's open-source technology, allows companies to build, deploy, and manage AI workloads across different cloud environments within their own chosen legal areas. IBM Sovereign Core gives enterprises direct control over their software operations, ensures local AI processing, and keeps important items like encryption keys secure. Customers can set their own "sovereign boundary" and use their own AI models within this defined area, which helps show compliance with regulations. The software is designed to be flexible, working in any environment that supports Kubernetes, and will be available for tech preview next month, with full release planned for mid-2026. Experts believe it will help companies easily prove compliance and reduce audit costs, especially as AI regulations continue to change.
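IBM has not published Sovereign Core's API in this brief, so the following is only a sketch of the general idea behind a "sovereign boundary": a policy object that refuses to place an AI workload outside a customer-chosen set of legal jurisdictions. Every name here (`SovereignBoundary`, `place_workload`, the region codes) is hypothetical and does not describe IBM's actual product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SovereignBoundary:
    """Hypothetical policy: the jurisdictions where AI workloads may run."""
    allowed_regions: frozenset

    def permits(self, region: str) -> bool:
        return region in self.allowed_regions

def place_workload(boundary: SovereignBoundary, candidates: list) -> str:
    """Pick the first candidate region inside the boundary, or fail loudly.

    Failing loudly (rather than silently falling back to any region) is
    the point of a sovereign boundary: data must not leave the chosen
    legal area even when capacity elsewhere is available.
    """
    for region in candidates:
        if boundary.permits(region):
            return region
    raise ValueError("no candidate region lies inside the sovereign boundary")

# Example: an EU-only boundary rejects a US region but accepts a German one.
eu_boundary = SovereignBoundary(frozenset({"eu-de", "eu-fr"}))
```

In a Kubernetes-based system like the one the article describes, this kind of check would typically be enforced as a scheduling or admission policy rather than application code, but the decision logic is the same.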

Experts Warn Data Rules Hinder AI Competition

Experts are concerned that new data protection guidelines do not address artificial intelligence training, which could harm fair competition in AI development. The Shaping Competition in the Digital Age (SCiDA) team warns that ignoring AI services in digital market rules is a major mistake. Without clear rules, powerful "gatekeeper" companies can use large amounts of data to train better AI models while smaller competitors cannot access similar data. Professor Podszun explained that this lack of guidance could create a cycle in which companies with an AI advantage gain even more power. The SCiDA team recommends that when data protection rules allow different interpretations, "gatekeepers" should choose the option that best supports competition goals. They also suggest that AI training rules should explicitly require consent for using combined data and should ensure data access for competitive AI development while protecting privacy.

Saint John Called AI "Hidden Gem" for New Data Center

Former New Brunswick Premier Bernard Lord supports a plan to build a new data center in Saint John. Lord calls Saint John a "hidden gem" for artificial intelligence because of its extensive, unused fiber optic system, known as "dark fiber." VoltaGrid and Beacon AI Centres announced their plans to construct this data center in the city. He explained that this existing infrastructure, developed by the former NBTel, is vital for data centers that need fast and reliable internet. The project is expected to create many jobs during its construction and ongoing operations. Lord believes this data center will help establish Saint John as a leading hub for AI development and innovation.

AI Reshapes Enterprise Software Investment Debate

A major debate is happening in enterprise software investment, with veteran investors and AI supporters holding different views on where lasting value lies. Veteran investors, like Alex Plesakov at SDV, believe that established SaaS companies will keep their advantage in the data layer. However, AI-first advocates argue that traditional business advantages are disappearing quickly as AI advances. Alexander Lis, SDV's CIO, suggests a winning approach combines existing company data with the speed of new startups. He recommends that established companies make their data ready for AI, while AI startups should integrate into current software workflows to use proprietary data. While some data shows AI is helping cloud companies, others warn that powerful AI models can now generate insights similar to years of proprietary data, challenging old business models.

AI Proto-Markets Challenge Traditional SaaS Strategies

Today's artificial intelligence markets are noisy and still developing, unlike mature software markets. These "proto-markets" have demand but lack clear boundaries, established competition, or stable technology. Winning in these early AI markets requires product flexibility and quick learning, rather than building traditional business advantages early on. AI's broad capabilities blur the lines between different software categories, meaning products are constantly evolving and never truly finished. The economic value distribution between users, AI applications, and underlying models is still uncertain. In these proto-markets, companies often compete with the existing way of doing things, not just other AI products, and adoption often starts from the ground up.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI Regulation, AI Safety, AI Ethics, AI Transparency, AI Disclosure, Generative AI, AI Models, Large Language Models, AI Training Data, AI Infrastructure, Data Centers, AI Hardware, Robotics, Chatbots, Deepfakes, Discrimination, Intellectual Property, Copyright, Digital Sovereignty, AI Competition, Enterprise AI, Military AI, Conscious AI, OpenAI, Google, IBM, Tech Industry, State Legislation, Federal Regulation, Investment, SaaS, Market Dynamics, Children's Protection, Harmful Content, AI Detection Tools, Watermarking
