Snowflake launches Project SnowWork as OpenAI partners on child safety

Colorado is moving to amend its 2024 AI law, with a task force agreeing on a new framework. Governor Jared Polis supports this proposal, which aims to balance consumer protection with innovation. The revised draft bill specifically excludes common AI tools like spellcheck and ChatGPT, instead focusing on high-stakes decisions in areas such as education and employment. This framework ensures individuals are notified when AI impacts important life decisions and provides options for correction or human review, addressing concerns about algorithmic discrimination.

In enterprise AI, Snowflake has introduced Project SnowWork, an autonomous AI platform currently in research preview. Running on Snowflake's AI Data Cloud, this platform empowers business users to leverage AI and large language models for tasks like creating forecasts or identifying churn risks, without needing specialized expertise. Meanwhile, OpenAI, the creator of ChatGPT, is collaborating with the Parents and Kids Safe AI coalition to establish industry standards for AI guardrails. This partnership focuses on protecting children from harmful content, preventing targeted ads, and implementing parental controls to address privacy concerns.

Advancements in local AI processing are also emerging, with Tether launching a framework that enables AI models to be trained directly on consumer devices like smartphones. Utilizing technologies such as BitNet and LoRA, this system reduces reliance on cloud-based training and supports hardware from companies including AMD, Intel, and Apple, fine-tuning models of up to one billion parameters in under two hours. Separately, Nvidia is preparing to re-enter China's AI chip market with new products designed to comply with U.S. export restrictions, aiming to balance market competitiveness with regulatory compliance. Beyond these developments, AI personalization is boosting wine sales through tailored recommendations; Arizona State University's CreateAI toolkit offers custom AI capabilities to faculty; CUJO AI is enhancing protection against crypto investment scams; and ServiceNow is rigorously testing over 240 AI use cases internally before customer release.

Key Takeaways

  • Colorado's AI Policy Working Group proposed a new framework for the state's AI law, focusing on transparency, responsibility, and consumer notification for high-stakes AI decisions in areas like education and employment.
  • Snowflake launched Project SnowWork, an autonomous enterprise AI platform in research preview, enabling business users to leverage AI agents on the AI Data Cloud for tasks such as forecasting and churn risk identification.
  • OpenAI partnered with the Parents and Kids Safe AI coalition to establish industry standards for child safety, focusing on preventing targeted ads, protecting from harmful content, and implementing parental controls.
  • Tether introduced a framework allowing AI models, up to one billion parameters, to be trained directly on smartphones in under two hours, supporting chips from AMD, Intel, and Apple.
  • Nvidia plans to re-enter China's AI chip market with new products designed to comply with U.S. export restrictions, aiming to balance performance with regulatory requirements.
  • AI personalization is increasing wine sales by providing tailored recommendations, leading to higher average order values and enhanced customer loyalty.
  • Arizona State University's CreateAI toolkit provides faculty and staff with custom AI capabilities and access to over 50 large language models for courses, research, and operations, with over 20,000 employees already using it.
  • CUJO AI introduced new capabilities to protect network service providers against crypto investment scams, which are projected to cost $17 billion globally in 2025.
  • ServiceNow tested over 240 AI use cases internally between 2023 and 2025, using the feedback to refine customer-facing AI products and to develop internal governance tools like AI Control Tower.

Colorado AI law gets framework for consumer protection

Colorado's Governor Jared Polis appointed a group to create a plan for implementing the state's new AI law. This group, the AI Policy Working Group, released recommendations on how to regulate AI systems that make important decisions about people's lives. The proposed changes focus on making AI systems more transparent and assigning responsibility when things go wrong. Developers would need to explain how their AI works, and users would have to tell people when AI is used in decisions affecting them. The goal is to prevent discrimination and protect consumers while still allowing for innovation.

Colorado AI law faces legislative hurdles despite task force agreement

A task force in Colorado has agreed on a framework to change the state's AI law, which regulates AI used for important decisions. Governor Jared Polis supports the proposal, which aims to protect consumers and encourage innovation. However, the framework must now become a bill and pass through the legislature, where previous attempts have failed due to disagreements over AI's scope and who is responsible for its misuse. The task force's agreement includes revisions, and it remains uncertain if these changes will satisfy all lawmakers and stakeholders.

Colorado AI law rewrite nears deal with new draft bill

Colorado is close to amending its groundbreaking AI law after a task force released a draft bill. This new proposal aims to balance protecting consumers with fostering innovation in AI technology. The draft excludes common AI tools like spellcheck and ChatGPT, focusing instead on high-stakes decisions in areas like education and employment. Governor Jared Polis believes the proposal strikes the right balance, and Senate Majority Leader Robert Rodriguez is willing to sponsor the bill, though he needs to review it further.

Colorado AI law to be replaced by new framework

A Colorado working group has reached an agreement on a new framework to replace the state's 2024 AI law. Governor Jared Polis, who signed the original law, supports the proposal. The new framework will ensure people are notified when AI is used in important decisions affecting their lives and will give them a chance to correct information or request a human review. This compromise comes after negotiations between various groups, addressing concerns about algorithmic discrimination and compliance burdens for businesses.

Snowflake introduces Project SnowWork for business AI

Snowflake has launched Project SnowWork, a new AI platform designed for business users. This platform, currently in research preview, runs on Snowflake's AI Data Cloud and allows users to leverage AI and large language models without needing expert knowledge. Project SnowWork aims to automate tasks and provide insights, making AI more accessible and actionable for professionals in finance, sales, marketing, and operations. It is designed to understand complex business situations and improve efficiency and productivity.

Snowflake's Project SnowWork brings AI agents to business users

Snowflake has launched Project SnowWork, an autonomous enterprise AI platform, in research preview. This platform brings AI agents directly to business users, helping them complete tasks like creating forecasts or identifying churn risks. Project SnowWork operates on Snowflake's AI Data Cloud, connecting data, intelligence, and action securely. It offers pre-built skills for different roles and can complete complex, multi-step workflows, aiming to increase productivity and drive business outcomes in the AI era.

OpenAI partners with Parents and Kids Safe AI coalition

OpenAI, the creator of ChatGPT, is joining the Parents and Kids Safe AI coalition to help ensure children's safety when using artificial intelligence. The company aims to establish industry standards for AI guardrails and frameworks for kids. Key goals include preventing ads targeted at children, protecting them from harmful content, and implementing parental controls while addressing privacy concerns. This collaboration seeks to ease public fears about AI's impact on young users.

Ranveer Singh's film poster faces Sikh community objection

The makers of the upcoming film 'Dhurandhar: The Revenge,' starring Ranveer Singh, have received a legal notice over a song poster. Members of the Sikh community object that the poster for the song 'Pralay' shows Singh's character, apparently wearing a turban, smoking a cigarette. The depiction is considered deeply offensive and a violation of Sikh religious principles, which strictly prohibit tobacco use. The filmmakers have been asked to remove or correct the poster.

AI personalization boosts wine sales with tailored recommendations

AI-powered personalization is significantly increasing wine sales by offering relevant recommendations to customers. Companies like Virgin Wines use AI to reduce the overwhelming choice for shoppers, presenting wines that match individual taste profiles. This approach helps customers feel more confident in their purchases, leading to higher average order values and increased loyalty. AI personalization also aids product discovery by suggesting wines slightly outside a customer's usual preferences, ultimately driving more sales and customer retention.
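For readers curious how this kind of taste-profile matching works in principle, here is a minimal sketch: wines and customers are represented as vectors over taste attributes and ranked by similarity. The attributes, scores, and wine list are invented for illustration; this is not Virgin Wines' actual system.

```python
# Sketch of taste-profile recommendation via cosine similarity.
# All attributes and values below are hypothetical.
from math import sqrt

ATTRS = ["body", "sweetness", "acidity", "tannin"]  # invented taste attributes

wines = {
    "Malbec":     [0.9, 0.2, 0.5, 0.8],
    "Riesling":   [0.3, 0.8, 0.9, 0.1],
    "Pinot Noir": [0.5, 0.3, 0.7, 0.4],
}

def cosine(a, b):
    # Similarity of two taste vectors, in [0, 1] for non-negative scores
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recommend(profile, catalogue):
    # Rank wines by similarity to the customer's taste profile
    return sorted(catalogue, key=lambda w: cosine(profile, catalogue[w]), reverse=True)

customer = [0.8, 0.25, 0.55, 0.7]  # prefers full-bodied, dry reds
print(recommend(customer, wines))  # Malbec ranks first for this profile
```

Suggesting wines "slightly outside a customer's usual preferences," as the article describes, would correspond to picking items with high but not maximal similarity rather than the top match.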

ASU's CreateAI toolkit empowers faculty with custom AI

Arizona State University (ASU) has launched CreateAI, a toolkit providing faculty and staff with custom AI capabilities and access to over 50 large language models. This platform allows users to build AI experiences for courses, research, and operations, with over 20,000 employees already using it. Examples include AI avatars for medical training and AI chatbots like Syllabot for student course questions. CreateAI ensures data privacy and compliance, enabling the university community to leverage AI effectively and securely.

Tether framework enables AI training on smartphones

Tether has introduced a new framework that allows artificial intelligence models to be trained directly on consumer devices like smartphones. Using technologies like BitNet and LoRA, the system reduces the need for cloud-based training, making it more practical and privacy-focused. Tether successfully fine-tuned models with up to one billion parameters on smartphones in under two hours. The framework supports various hardware, including chips from AMD, Intel, and Apple, and builds on Tether's commitment to local AI processing.
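LoRA, one of the techniques named above, makes on-device fine-tuning feasible by freezing the pretrained weights and training only a small low-rank update to each weight matrix. The sketch below illustrates the idea with hypothetical dimensions; it is not Tether's implementation.

```python
# Illustrative sketch of LoRA (Low-Rank Adaptation). Dimensions are invented.
import numpy as np

d, k, r = 512, 512, 8  # layer dimensions and low rank r << d, k
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # B starts at zero, so the adapted layer
                                         # initially matches the pretrained one

def adapted_forward(x):
    # Effective weight is W + B @ A, but the second d-by-k matrix is
    # never materialized; only the low-rank factors are stored and trained.
    return x @ W.T + (x @ A.T) @ B.T

x = rng.standard_normal((1, k))
assert np.allclose(adapted_forward(x), x @ W.T)  # identical before training

# Only A and B are trained: d*r + r*k parameters instead of d*k
full, lora = d * k, d * r + r * k
print(f"trainable params: {lora} vs {full} ({100 * lora / full:.1f}%)")
```

The memory saving is what matters on a phone: at rank 8 the trainable parameters here are about 3% of the full matrix, and combining this with low-precision weights (as in BitNet) shrinks the footprint further.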

CUJO AI protects against crypto investment scams

CUJO AI has launched new capabilities to protect network service providers against crypto investment scams, which are projected to cost $17 billion globally in 2025. These scams often bypass platform-level security, making network-level detection crucial. CUJO AI's system analyzes crypto scam infrastructure and behavior across networks to identify and block these threats. This builds on the company's existing security suite, offering enhanced protection for millions of households against evolving online fraud.

ServiceNow tests AI internally before customer launch

ServiceNow prioritizes internal testing for its AI tools before releasing them to customers. The company launched over 240 AI use cases internally between 2023 and December 2025, using the resulting feedback to refine external products. For example, issues with data transfer for Workflow Data Fabric were resolved internally before its customer release. ServiceNow also developed an internal AI governance tool, AI Control Tower, which informed the customer-facing product launched in May 2025. This approach helps ensure AI tools are effective and well adopted by both employees and customers.

Nvidia plans China AI chip market return

Nvidia is preparing to re-enter China's artificial intelligence chip market with new products designed to comply with U.S. export restrictions. These chips will offer performance levels below the strictest controls while still supporting AI development. China has been a key market for Nvidia, but U.S. regulations have limited sales of its most advanced processors. Nvidia is working with Chinese partners to navigate these rules, aiming to balance market competitiveness with regulatory compliance amidst growing local competition.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

