State lawmakers are rapidly developing AI regulations for education, with over 50 bills proposed in 21 states last year. These initiatives focus on advancing AI literacy, establishing usage guidelines, and preventing cyberbullying, even as the Trump administration pushes for a single national standard. Organizations such as the National Applied AI Consortium (NAAIC) and CompTIA are supporting this push by offering free AI training and certifications to an initial cohort of 100 high school teachers through their AI Futures program, which runs until May 2026. The program aims to equip educators with skills in machine learning and ethical AI, preparing students for future careers.
The startup ecosystem is also adapting to AI's influence. Y Combinator, for instance, now invites founders applying for its Spring 2026 program to submit transcripts from AI coding agent sessions, such as those with GitHub Copilot or ChatGPT, to demonstrate proficiency with AI tools. Meanwhile, former Meta executives, including Sheryl Sandberg, have invested $3.5 million in Slashwork, a new London-based AI workplace communication startup. Slashwork intends to challenge platforms such as Microsoft Teams by building AI features in from the start, offering advanced search capabilities and commandable AI agents.
AI's integration into daily life is evident: a study by The Times found that over half of UK adults use AI for financial advice. However, concerns about AI bias persist, as seen in reports from ProPublica and The Guardian on criminal sentencing tools and welfare algorithms. Geopolitically, Jefferies' Aniket Shah cautioned the CalPERS board about AI investment risks, particularly highlighting China's advancements. Shah noted China's lead in certain AI technologies, exemplified by DeepSeek, and its fewer hurdles in building data centers compared with the US, despite the US advantage in Nvidia chips. California's new Transparency in Frontier Artificial Intelligence Act requires companies to publish AI risk frameworks.
On the development front, Moonshot AI introduced Kimi K2 Thinking, an open-source reasoning model with a 1-trillion-parameter Mixture-of-Experts architecture. The model scored 44.9% on Humanity's Last Exam and offers a 256K-token context window, larger than those of GPT-4 Turbo and Claude 3.5. Meanwhile, Pillar Security uncovered two critical vulnerabilities, both rated CVSS 10.0, in the n8n workflow automation platform. These flaws affect hundreds of thousands of deployments, including enterprise AI systems, and could expose sensitive data such as OpenAI keys and AWS credentials. Users are strongly advised to upgrade to n8n version 2.4.0 or later immediately.
Key Takeaways
- State lawmakers in 21 states proposed over 50 bills last year to regulate AI in education, focusing on literacy and data transparency, despite federal efforts for a national standard.
- NAAIC and CompTIA launched a free AI Futures program to train 100 high school teachers by May 2026, offering CompTIA AI Essentials and AI Prompting Essentials certifications.
- Former Microsoft executive Craig Mundie advocates for a new college curriculum combining liberal arts and STEM to prepare students for an AI-driven future, emphasizing personalized learning via AI tutors.
- Y Combinator's Spring 2026 application now allows founders to submit AI coding agent session transcripts (e.g., GitHub Copilot, ChatGPT) to demonstrate AI tool proficiency.
- Former Meta executives, including Sheryl Sandberg, invested $3.5 million in Slashwork, a London-based AI workplace communication startup aiming to compete with Microsoft Teams with built-in AI features.
- Over half of UK adults use AI tools for financial advice, but concerns persist about AI bias in systems such as criminal sentencing tools and welfare algorithms.
- Jefferies warned CalPERS about AI investment risks, noting China's lead in some AI technologies, exemplified by DeepSeek, and its advantage over the US in data center construction, despite US access to Nvidia chips.
- California's new Transparency in Frontier Artificial Intelligence Act requires companies to publish AI risk frameworks.
- Moonshot AI released Kimi K2 Thinking, an open-source 1-trillion-parameter Mixture-of-Experts model with a 256K-token context window, larger than those of GPT-4 Turbo and Claude 3.5.
- Pillar Security discovered two critical CVSS 10.0 vulnerabilities in the n8n workflow automation platform, risking sensitive data like OpenAI keys and AWS accounts, urging immediate upgrades to version 2.4.0+.
States push AI education rules despite federal pushback
State lawmakers are actively creating laws for AI in education, with over 50 bills proposed in 21 states last year. These bills cover topics like advancing AI literacy, setting guidelines for AI use, and preventing cyberbullying. The Trump administration aims to stop state-level AI rules with a December executive order, calling for a national standard. However, advocates like Christian Pinedo from the AI Education Project support state actions. States plan to teach AI in K-12 schools and create AI programs in higher education institutions.
States advance AI education rules despite federal concerns
State lawmakers are moving quickly to create AI regulations for education, proposing over 50 bills last year. The Trump administration wants to limit state-level AI rules, but states continue to act. The Center for Democracy and Technology (CDT) reports that AI literacy is a top focus, with states like New Mexico, Nevada, and Illinois creating specific AI laws. Maddy Dwyer from CDT notes new bills require transparency from Ed Tech vendors about student data. A recent poll shows 32 percent of teachers use AI weekly, mainly for lesson planning.
NAAIC and CompTIA offer free AI training for high school teachers
NAAIC and CompTIA launched a program offering free AI training and credentials for high school teachers nationwide. The first cohort will include 100 teachers, who will earn two certifications: CompTIA AI Essentials and CompTIA AI Prompting Essentials. The self-paced, online program runs from April 7 to May 29, 2026, at no cost to selected educators. The goal is to help teachers bring real-world AI concepts into classrooms and prepare students for future jobs. Teachers can apply by March 11, 2026.
New program offers free AI training for high school teachers
The National Applied AI Consortium (NAAIC) and CompTIA launched the AI Futures program for high school teachers. The program offers free, comprehensive training in artificial intelligence basics. Teachers will learn about machine learning, data science, and ethical AI use. Those who complete the program will earn a certification demonstrating their AI teaching skills. The effort aims to prepare students for future jobs by improving AI knowledge in K-12 schools.
Meta veterans invest in new AI workplace startup Slashwork
Sheryl Sandberg and other former Meta executives are investing in a new AI workplace communication startup called Slashwork. The London-based company, founded by ex-Facebook engineers Jackson Gabbard, David Miller, and Josh Watzman, has raised $3.5 million. Slashwork aims to compete with platforms like Slack and Microsoft Teams by building AI features in from the start. The platform uses large-language-model embeddings to power search and lets users command AI agents. Investors believe AI integration will bridge many gaps in enterprise communication. Slashwork will launch first with smaller tech companies before a wider release later this year.
Y Combinator asks founders to show AI coding skills
Y Combinator, a well-known startup accelerator, added a new question to its Spring 2026 application. Founders can now submit a transcript from a coding agent session they are proud of, using tools like GitHub Copilot or ChatGPT. This experimental question marks a significant shift in how YC judges technical skill, with a focus on using AI tools effectively. YC CEO Garry Tan believes it helps identify "real builders" in the AI era. The change suggests that understanding what to build, and for whom, is becoming more important than just writing code. It likely previews a lasting shift in how technical ability is evaluated in the startup world.
AI now shapes many daily decisions
Artificial intelligence is now a common part of daily life, influencing decisions from navigation to email writing. AI systems use advanced computing and data to find patterns, offering guidance rather than direct commands. A study by The Times found over half of UK adults use AI tools for financial advice, including budgeting and investments. AI-driven "robo" advisory platforms are making financial planning tools available to more people. However, concerns exist about bias in AI systems, as seen in reports by ProPublica and The Guardian on criminal sentencing tools and welfare algorithms. Despite these issues, over 60 percent of OECD member governments use AI in public services for efficiency.
Could AI improve government in Washington, D.C.?
The author reflects on the film "Idiocracy" and current societal trends, including the rise of artificial intelligence. They express concern that over-reliance on technology might reduce human skills like reading. The article suggests that replacing humans with AI in roles like Congress could be safer and more efficient. The author humorously asks an AI about its own qualifications and potential monograms, receiving creative responses like A.I.M. or A.I.Q. The AI's responses show a self-aware and almost human-like understanding of its image. The piece concludes by urging society to unite and move past current absurdities.
CalPERS board warned about AI investment risks and China
Aniket Shah of Jefferies warned the CalPERS board about risks in AI investments, especially concerning China's advancements. He highlighted the US-China tech race as a key factor distinguishing this digital revolution. Shah noted that US industrial policy supports AI infrastructure, but public concerns about job losses and power prices could slow adoption. He urged the board to recognize that China is leading in some AI technologies, citing DeepSeek as an example. Despite US advantages such as Nvidia chips, China faces fewer obstacles in building data centers. California's new Transparency in Frontier Artificial Intelligence Act also requires companies to publish AI risk frameworks.
Ex-Microsoft leader urges new college AI curriculum
Former Microsoft executive Craig Mundie believes colleges need a new curriculum to prepare students for an AI-driven future. Mundie, who retired as Microsoft's chief research and strategy officer in 2014, suggests combining liberal arts with STEM education. He argues that students need both technical and social skills to work effectively with intelligent machines. Mundie also questions the traditional classroom model, suggesting AI can enable more personalized, Socratic learning. He envisions a future where AI tutors adapt to individual student curiosity and pace. Mundie emphasizes that societies must rethink human value as AI automates more tasks.
Moonshot AI releases Kimi K2 Thinking open source model
Moonshot AI released Kimi K2 Thinking, an open-source reasoning model that performs well on complex tasks. The model scored 44.9% on Humanity's Last Exam and can handle 200-300 tool calls while keeping its reasoning coherent. Kimi K2 Thinking uses a 1-trillion-parameter Mixture-of-Experts architecture, activating only 32 billion parameters per inference. Its open weights let developers inspect its reasoning, fine-tune it on their own data, and deploy it on their own systems. The model also features a 256K-token context window, larger than those of GPT-4 Turbo and Claude 3.5. It is designed for multi-step tasks and long-context work, though it requires significant GPU power in production.
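The sparse-activation idea behind a Mixture-of-Experts model can be sketched in a few lines of NumPy. This is only a toy illustration of the routing pattern: the expert count, dimensions, and gating below are illustrative assumptions, not Kimi K2 Thinking's actual configuration.

```python
import numpy as np

# Toy sparse Mixture-of-Experts layer: a gate scores all experts per token,
# but only the top-k experts actually run, so the "active" parameter count
# is a small fraction of the total. All sizes here are tiny and made up.
rng = np.random.default_rng(0)
num_experts, top_k, d_model = 8, 2, 4
experts = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]
gate = rng.standard_normal((d_model, num_experts))

def moe_forward(x):
    """Route one token vector through only the top-k gated experts."""
    logits = x @ gate                           # one score per expert
    chosen = np.argsort(logits)[-top_k:]        # indices of the top-k experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                    # softmax over the chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

active = top_k * d_model * d_model              # parameters used per token
total = num_experts * d_model * d_model         # parameters in the layer
y = moe_forward(rng.standard_normal(d_model))
```

Because only `top_k` of the `num_experts` weight matrices are touched per token, compute scales with the active parameters (32 here) rather than the total (128), which is how a 1-trillion-parameter model can run inference with only about 32 billion parameters active.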
Pillar Security finds critical flaws in n8n AI systems
Pillar Security discovered two critical vulnerabilities, rated CVSS 10.0, in the n8n workflow automation platform. These flaws affect hundreds of thousands of deployments, including enterprise AI systems. Attackers could easily exploit these vulnerabilities to decrypt stored credentials, hijack AI pipelines, and compromise cloud environments. This means sensitive data like OpenAI keys, AWS accounts, and proprietary AI prompts are at risk. The vulnerabilities affect all n8n users before version 2.4.0, including self-hosted and n8n Cloud users. Pillar Security strongly advises users to immediately upgrade to n8n version 2.4.0 or later, rotate their encryption key, and change all stored credentials.
Sources
- States race forward on education AI regulations
- States race forward on education AI regulations despite Trump objections
- NAAIC and CompTIA Launch Free AI Training and Credentials for High School Teachers Nationwide
- Free AI training and credentials available to high school teachers through new education program from CompTIA and NAAIC
- Sandberg, other Meta vets invest in AI workplace communications startup
- YC Applications Now Ask Founders To Show A Coding Agent Session They’re Proud Of
- How artificial intelligence is now an integral part of everyday decisions
- Ps&Qs: Let’s put AI in D.C.
- CalPERS board warned of risks in AI investments including China innovation
- Ex-Microsoft exec and AI expert says colleges need this new curriculum
- Kimi K2 Thinking: what 200+ tool calls mean for production
- Pillar Security Discovered Critical Flaw in n8n Exposing Hundreds of Thousands of Enterprise AI Systems to Complete Takeover