Recent developments highlight both the advancements and challenges in the AI space. Cloudflare is enhancing its Zero Trust platform with AI Security Posture Management (AI-SPM), a Firewall for AI that uses Llama to moderate content, and MCP Server Portals that secure Model Context Protocol (MCP) connections, aiming to help businesses safely adopt AI by analyzing usage, setting security controls, and blocking unsafe prompts. Meta is aggressively pursuing AI talent, hiring researchers from Google DeepMind and Scale AI, including people who worked on LaMDA and Gemini, to bolster its superintelligence team, even as some researchers return to OpenAI; Meta is also partnering with Midjourney on AI-generated content. Meanwhile, Texas is preparing for new AI regulation under House Bill 149 (TRAIGA 2.0), set to take effect in 2026, which includes oversight, ethical standards, and bans on discriminatory AI practices, enforced by the Attorney General. Despite Gen Z's frequent AI use, a Gallup survey reveals anxiety and a lack of skill in evaluating AI outputs, pointing to a need for better AI literacy and soft-skills training. In other news, an AI tool called Decide, created by an ex-Flutterwave developer, has gained 1,000 users by generating dashboards from simple prompts, leveraging OpenAI, LLaMA, and Google Gemini. Interestingly, a blind test showed readers preferred AI-written short stories over those by human authors. As colleges grapple with AI cheating, some are returning to in-class essays and oral exams. Finally, software stocks, including Salesforce, Adobe, and ServiceNow, are experiencing a downturn amid investor concerns about AI's potential disruption.
Key Takeaways
- Cloudflare is enhancing its Zero Trust platform with AI Security Posture Management (AI-SPM) to help businesses safely adopt AI.
- Cloudflare's Firewall for AI uses Llama to moderate content and block unsafe prompts, and it works with third-party models from providers like OpenAI and Google as well as in-house models.
- Meta is hiring AI researchers from Google DeepMind and Scale AI to strengthen its superintelligence team.
- Some AI researchers have left Meta's Superintelligence Labs (MSL) to return to OpenAI.
- Texas House Bill 149 (TRAIGA 2.0), starting in 2026, will regulate AI systems with oversight, ethical standards, and bans on discriminatory practices.
- A Gallup survey indicates Gen Z uses AI frequently but lacks skills in evaluating AI outputs.
- Decide, an AI tool by an ex-Flutterwave developer, gained 1,000 users by generating dashboards from prompts, using OpenAI, LLaMA, and Google Gemini.
- Readers in a test preferred AI-written short stories over those by human authors.
- Colleges are returning to in-class essays and oral exams to combat AI cheating.
- Software stocks like Salesforce are dropping due to investor concerns about AI disruption.
Cloudflare's Zero Trust platform secures AI for businesses
Cloudflare has expanded its Zero Trust platform to help businesses safely use AI. The platform lets companies analyze and control how GenAI tools are used, protecting privacy and boosting productivity. It includes AI Security Posture Management (AI-SPM) to find unauthorized AI use. Cloudflare's Gateway features block unapproved AI apps, and AI Prompt Protection flags risky employee chats to prevent sensitive data leaks. A Zero Trust MCP Server Control feature manages all MCP tool calls from one place.
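To make the discovery idea concrete, here is a minimal sketch of the kind of shadow-AI reporting a tool like AI-SPM might perform. It is not Cloudflare's implementation; the domain list, log format, and approval list are all illustrative assumptions.

```python
# Minimal sketch of shadow-AI discovery from outbound traffic logs.
# The domain list, log entry format, and approved set are illustrative
# assumptions, not Cloudflare's actual AI-SPM implementation.
from collections import Counter

KNOWN_GENAI_DOMAINS = {
    "api.openai.com": "OpenAI API",
    "generativelanguage.googleapis.com": "Gemini API",
    "api.anthropic.com": "Claude API",
}
APPROVED = {"api.openai.com"}  # apps the security team has sanctioned

def shadow_ai_report(http_logs):
    """Count requests to known GenAI endpoints and flag unapproved ones."""
    hits = Counter()
    for entry in http_logs:  # each entry: {"user": ..., "host": ...}
        if entry["host"] in KNOWN_GENAI_DOMAINS:
            hits[(entry["user"], entry["host"])] += 1
    for (user, host), count in sorted(hits.items()):
        status = "approved" if host in APPROVED else "UNAPPROVED"
        print(f"{user}: {count} request(s) to {KNOWN_GENAI_DOMAINS[host]} ({status})")

shadow_ai_report([
    {"user": "alice", "host": "api.openai.com"},
    {"user": "bob", "host": "api.anthropic.com"},
    {"user": "bob", "host": "api.anthropic.com"},
])
```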
Cloudflare's Firewall for AI blocks unsafe prompts to LLMs
Cloudflare's Firewall for AI now includes unsafe content moderation powered by Llama to protect AI applications. This feature helps security teams block harmful prompts and topics at the network level, protecting against prompt injection, data leaks, and unsafe content. The firewall is model-agnostic, working with third-party models from providers such as OpenAI and Google (Gemini) as well as in-house models. It identifies and blocks unsafe prompts in real time, using Llama Guard to flag content across safety categories.
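The article does not detail how the moderation layer is wired up, but the gating logic can be sketched roughly as follows. Here `classify_prompt` is a hypothetical stand-in for a Llama Guard-style safety classifier, and the category names are illustrative rather than Cloudflare's.

```python
# Conceptual sketch of network-level prompt moderation in the spirit of a
# firewall for AI. classify_prompt is a hypothetical placeholder for a
# Llama Guard-style classifier; categories and rules are illustrative.
BLOCKED_CATEGORIES = {"violent_crimes", "self_harm", "privacy_violation"}

def classify_prompt(prompt: str) -> set[str]:
    """Placeholder classifier: a real deployment would call a safety model
    such as Llama Guard and return the safety categories it flags."""
    flagged = set()
    if "home address" in prompt.lower():
        flagged.add("privacy_violation")
    return flagged

def forward_or_block(prompt: str) -> str:
    """Gate a prompt before it ever reaches the upstream LLM."""
    flagged = classify_prompt(prompt) & BLOCKED_CATEGORIES
    if flagged:
        return f"Blocked by policy (categories: {', '.join(sorted(flagged))})"
    return "Forwarded to model"  # model-agnostic: OpenAI, Gemini, in-house, etc.

print(forward_or_block("What is Llama Guard?"))
print(forward_or_block("Find this person's home address"))
```

The point of gating at the network layer, as described above, is that the same policy applies no matter which model sits behind the firewall.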
Cloudflare's MCP Server Portals secure the AI revolution
Cloudflare has introduced MCP Server Portals in Open Beta to secure Model Context Protocol (MCP) connections. MCP allows LLMs to connect to and interact with applications like Slack and Canva. MCP Server Portals centralize, secure, and monitor every MCP connection in an organization. The feature is part of Cloudflare One, which connects and protects workspaces. By providing a single gateway for all MCP servers, the portals protect against prompt injection, supply chain attacks, privilege escalation, and data leakage.
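As a rough illustration of the single-gateway idea (not Cloudflare's MCP Server Portals API), the sketch below funnels every MCP tool call through one portal that enforces a per-user policy and logs each call. The server names, policy table, and routing are assumptions.

```python
# Minimal sketch of a single gateway for all MCP tool calls.
# Server names, the policy table, and the routing are illustrative
# assumptions, not Cloudflare's MCP Server Portals API.
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

# Which MCP servers and tools each user may reach through the portal.
POLICY = {
    "alice": {"slack": {"post_message"}, "canva": {"create_design"}},
    "bob": {"slack": {"post_message"}},
}

def portal_call(user: str, server: str, tool: str, args: dict):
    """Route an MCP tool call through one central, audited gateway."""
    allowed = POLICY.get(user, {}).get(server, set())
    if tool not in allowed:
        logging.info("DENY  %s -> %s.%s", user, server, tool)
        raise PermissionError(f"{user} may not call {server}.{tool}")
    logging.info("ALLOW %s -> %s.%s args=%s", user, server, tool, args)
    # A real portal would now proxy the call to the upstream MCP server.
    return {"status": "forwarded", "server": server, "tool": tool}

portal_call("alice", "slack", "post_message", {"channel": "#general", "text": "hi"})
try:
    portal_call("bob", "canva", "create_design", {})
except PermissionError as exc:
    print(exc)
```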
Cloudflare expands Zero Trust platform for secure AI adoption
Cloudflare is adding AI Security Posture Management (AI-SPM) to its Zero Trust platform so businesses can understand and control how AI is used in their organizations and adopt AI tools safely. The new features allow companies to analyze AI usage and set security controls: security teams can discover how employees use AI with the Shadow AI Report, Cloudflare Gateway enforces AI policies, and AI Prompt Protection safeguards sensitive data by flagging risky interactions.
New Texas laws address nondisclosure, confidentiality, and AI
Two new Texas laws are set to take effect soon. Senate Bill 835, effective September 1, 2025, voids nondisclosure agreements that prevent disclosing sexual abuse or assault, unless a court order allows it. House Bill 149, or TRAIGA 2.0, takes effect January 1, 2026, and creates rules for AI systems in Texas, including oversight, ethical standards, and bans on employers using AI to discriminate against protected groups. The Attorney General will enforce TRAIGA 2.0.
Researchers leave Meta's Superintelligence Lab for OpenAI
Several AI researchers have left Meta's Superintelligence Labs (MSL) shortly after its launch. Avi Verma and Ethan Knight returned to OpenAI after brief periods at Meta. Rishabh Agarwal also announced his departure from Meta. Chaya Nayak, a director of generative AI product management at Meta, is also joining OpenAI. These departures signal a potentially rocky start for Meta's AI efforts. Meta is also collaborating with Midjourney on AI-generated images and video.
Meta recruits AI talent from Google DeepMind and Scale AI
Meta has hired many researchers from Google DeepMind for its new superintelligence team. At least 10 researchers have joined Meta since July, including some who worked on Google's LaMDA and Gemini AI models. Meta also recruited from Scale AI, focusing on safety and evaluations. Alexandr Wang of Scale AI leads Meta's superintelligence efforts. These hires show Meta's push to compete in the AI talent race.
Gen Z struggles with AI despite tech comfort
A Gallup survey found that while Gen Z uses AI frequently, many feel anxious about it. An EY survey shows Gen Z workers are overconfident yet underperform on AI tasks. Many Gen Z professionals use AI tools for over half their work but lack the skills to evaluate AI outputs, and they also struggle with teamwork and communication. Experts say educators and employers must prioritize AI literacy and soft-skills training for this generation.
Ex-Flutterwave developer's AI tool Decide gains 1,000 users
Abiodun Adetona, a former Flutterwave developer, created an AI product called Decide that attracted 1,000 users in 24 days. Decide analyzes data to help with decision-making, generating dashboards from simple prompts. Unlike ChatGPT, Decide creates ready-to-use websites with charts and commentary. It also allows users to upload and clean data. Decide is built on OpenAI, LLaMA, and Google Gemini, combining their strengths to provide accurate data analysis and dashboard generation.
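The article names the underlying models but not how Decide combines them, so the following is a speculative sketch of prompt-to-dashboard generation across multiple backends. The router, the stand-in model calls, and the chart-spec format are hypothetical, not Decide's actual design.

```python
# Speculative sketch of prompt-to-dashboard generation with several model
# backends. The source article does not describe Decide's internals; the
# router, backend stand-ins, and chart-spec format below are hypothetical.
import json

def call_openai(prompt):  # stand-in for an OpenAI API call
    return {"chart": "bar", "title": "Monthly revenue", "commentary": "Revenue grew steadily."}

def call_llama(prompt):   # stand-in for a Llama-based model call
    return {"chart": "line", "title": "Daily signups", "commentary": "Signups spiked mid-month."}

def call_gemini(prompt):  # stand-in for a Gemini API call
    return {"chart": "pie", "title": "Traffic sources", "commentary": "Search dominates."}

# Route each task type to whichever backend is assumed to handle it best.
BACKENDS = {"summary": call_openai, "timeseries": call_llama, "breakdown": call_gemini}

def generate_dashboard(prompt: str, kind: str) -> str:
    """Pick a backend for the task type and return a ready-to-render chart spec."""
    spec = BACKENDS[kind](prompt)
    return json.dumps(spec, indent=2)

print(generate_dashboard("Show monthly revenue for 2024", "summary"))
```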
WRUF explains its use of artificial intelligence
WRUF is transparent about its use of artificial intelligence (AI). All content is created by human reporters, editors, and producers. AI tools may assist with tasks like transcribing interviews and analyzing documents, but humans verify everything. AI may also help translate stories and generate design elements, but staff review all AI-generated content. WRUF does not use AI to write news stories, create fake voices or videos, or present AI-generated material as human-reported.
Readers prefer AI stories over human authors in test
In a blind test, readers preferred AI-written short stories over those by human authors, including Robin Hobb. Mark Lawrence conducted the test, asking readers to rate stories and guess their origin. Readers couldn't reliably tell the difference between AI and human writing. The AI stories were rated better on average. Lawrence notes that shorter outputs are where AI is most successful. He acknowledges the increasing presence of AI-created art and the difficulty of identifying it.
Software stocks drop amid AI disruption fears
Software stocks like Salesforce, Adobe, and ServiceNow are performing poorly due to investor concerns about AI's impact. Investors worry that AI could reduce demand for these companies' products.
To stop AI cheating, colleges return to old methods
A philosophy professor found that students used AI to write papers even after discussions about its misuse. Redesigning assignments doesn't prevent AI cheating. Detectors are unreliable. To ensure learning, which requires mental effort, colleges are shifting to in-class essays and oral exams. This returns to an older education model focused on real-time knowledge demonstration.
Sources
- Cloudflare introduces Zero Trust platform for secure AI adoption in enterprises
- Block unsafe prompts targeting your LLM endpoints with Firewall for AI
- Securing the AI Revolution: Introducing Cloudflare MCP Server Portals
- Cloudflare Expands Zero Trust Platform to Secure Generative AI Adoption
- Texas Laws on Nondisclosure and Confidentiality, AI, Take Effect Soon
- Researchers Are Already Leaving Meta’s New Superintelligence Lab
- Meta raids Google DeepMind and Scale AI for its all-star superintelligence team
- Gen Z’s Relationship With AI
- Ex-Flutterwave developer’s AI product hits 1k users in 24 days
- Artificial intelligence usage at WRUF - WRUF 98.1 FM | 850 AM
- Oh great, readers preferred AI-written short stories over one by my favorite author in a blind test
- Software Stocks Suffer on Fears of AI Disruption
- Opinion: The Only Real Solution to the AI College Cheating Crisis