Recent developments highlight growing skepticism and scrutiny surrounding artificial intelligence. One news article detailing Gen Z's feelings on AI could not be accessed outside the United States because of a geo-restriction error on the publisher's website, so that perspective could not be shared with international readers.
In Denver, students at Abraham Lincoln High School are actively teaching peers about the environmental costs of AI, noting that data centers consume significant energy and water. They have launched a petition to stop a new data center in the Elyria-Swansea neighborhood, supported by Tom Wildman, director of sustainability for Denver Public Schools. The school district is also updating its curriculum to include these environmental impacts.
Across Europe, the Council is advocating for a human-centered approach to AI in education, insisting that teachers remain the primary guides. It calls for better teacher training and AI tools designed specifically for schools, warning that over-reliance on technology could introduce bias and data protection issues.
Technical shifts are also occurring. A new tool called Coder Agents enables teams to run AI coding tasks on their own servers, separating models from infrastructure to avoid vendor lock-in and maintain data security. Meanwhile, security experts warn that consulting firms are rushing AI projects, increasing cyber risks as incidents doubled between 2021 and 2025.
Behind the scenes, AI companies embed hidden system prompts into chatbots like ChatGPT to steer behavior, with instructions ranging from 2,300 to 27,000 words and covering topics such as tone and copyright. Additionally, studies show different AI models interpret government policies in distinct ways, meaning vendor choice significantly affects policy analysis results.
Industry leaders are adapting as well. Cisco is using AI to improve safety labeling consistency by reading detailed definitions rather than relying on human memory. In Hollywood, writers are training AI models using data from YouTube and Wikipedia, though concerns persist about potential bias in these outputs.
Looking ahead, enterprise architects must prepare for post-quantum computing challenges by 2026. This shift requires significant infrastructure updates to protect AI systems from new threats, urging organizations to start planning now to ensure data safety.
Key Takeaways
["Gen Z Skeptical of AI Due to Website Error
A news article about Gen Z's feelings on artificial intelligence could not be read. The website displayed an error message stating it is unavailable outside the United States. This access restriction prevented the content from being shared with readers in other countries.
Denver Students Push for Sustainable AI Practices
Students at Denver's Abraham Lincoln High School are teaching their peers about the environmental cost of artificial intelligence. They explain that AI data centers use large amounts of energy and water to stay cool. The group created a petition to stop a new data center from being built in the Elyria-Swansea neighborhood. Tom Wildman, the director of sustainability for Denver Public Schools, supports these efforts. He wants students to understand how to use AI responsibly rather than just avoiding it.
Denver Schools Add AI's Environmental Impact to Curriculum
Denver Public Schools is updating its curriculum to include the environmental impact of artificial intelligence. Students are learning how AI data centers consume significant energy and water resources. They are also working on a petition to prevent a new data center from being built in their neighborhood. School officials want students to have the knowledge to use AI in a responsible manner.
Council Calls for Human-Centered AI in Education
The Council is pushing for a human-centered approach to artificial intelligence in education across Europe. They want teachers to remain the main guides for students while using AI tools. The group calls for better training for teachers and the creation of AI tools designed specifically for schools. They warn that too much reliance on technology could cause bias and data protection issues. The Council also wants to ensure all students have fair access to these digital resources.
Coder Agents Allow Self-Hosted AI Coding Workflows
A new tool called Coder Agents lets teams run AI coding tasks on their own servers. This system separates the AI models from the infrastructure that runs them. It helps companies avoid being locked into one specific vendor for their AI services. Users can choose different AI models while keeping control over their own data and security. The tool also works with existing systems like GitHub Actions and Slack.
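The core idea described above, decoupling the AI model from the infrastructure that runs it, can be sketched in a few lines. This is a generic illustration, not Coder's actual API: the runner class, backend names, and task fields are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CodingTask:
    prompt: str
    repo: str

class SelfHostedRunner:
    """Runs coding tasks on local infrastructure with pluggable model backends."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register_backend(self, name: str, fn: Callable[[str], str]) -> None:
        # Backends are plain callables, so swapping vendors means
        # registering a different function, not rewriting the pipeline.
        self._backends[name] = fn

    def run(self, task: CodingTask, backend: str) -> str:
        # Task data never leaves this process; the backend choice is a
        # runtime parameter, which is what avoids vendor lock-in.
        return self._backends[backend](task.prompt)

# A stand-in "model" for demonstration; a real deployment would call a
# locally hosted model server instead.
def echo_model(prompt: str) -> str:
    return f"// patch for: {prompt}"

runner = SelfHostedRunner()
runner.register_backend("local-llm", echo_model)
task = CodingTask("fix null check in auth.go", "github.com/example/app")
result = runner.run(task, "local-llm")
print(result)
```

Because the backend is just a registered callable, the same pipeline could also feed into CI hooks such as GitHub Actions without touching the model layer.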
Consultancies Urged to Slow AI Push Amid Cyber Risk
Security experts warn that consulting firms are pushing corporate AI projects too quickly. This rush increases cyber risks because many companies are not ready for the security challenges. Sam Shar from Trend-Setters Consulting says advisers are selling too many projects without checking if the business case is sound. Recent data shows that cyber incidents doubled between 2021 and 2025. Experts suggest firms should limit the number of AI projects they run for a single client at once.
Hidden Rules Behind AI Chatbots Revealed
AI companies add thousands of secret instructions to every conversation with chatbots like ChatGPT. These hidden commands, called system prompts, steer the behavior of the AI to match the company's goals. A researcher named Asgeir Thor Johnson managed to extract these secret rules from popular AI tools. The instructions range from 2,300 to 27,000 words and cover topics like tone and copyright rules. For example, one prompt tells the AI to avoid quoting more than 15 words from an article.
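Mechanically, a system prompt is simply an instruction block prepended to every request before the user's message reaches the model. The sketch below is illustrative only: the rule text paraphrases the 15-word quoting limit reported in the article, while real prompts run thousands of words.

```python
# A hidden instruction block; the user never sees it, but the model
# receives it with every single conversation turn.
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "Never quote more than 15 consecutive words from any article."
)

def build_messages(user_input: str) -> list:
    # The system entry always rides along first, steering tone,
    # copyright behavior, and more before the user's text is read.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("Summarize today's AI news.")
print(messages[0]["role"])
```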
Federal Agencies Buy Different Policy Views With AI Vendors
Different AI models interpret government policies in unique ways when analyzing documents. A study found that some models focus on one main idea while others recognize multiple goals in a single law. This means that changing the AI vendor can change the results of policy analysis. Researchers tested nine models and found that model choice significantly affects what a policy appears to be about. Government agencies should document which model and version they use for important tasks.
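The documentation practice recommended here can be as simple as attaching provenance metadata to every analysis record. A minimal sketch, assuming nothing beyond the article: the field names and the model identifier are illustrative, not a government standard.

```python
import json
from datetime import date

def record_analysis(policy_id: str, summary: str, model: str, version: str) -> str:
    # Store which model and version produced the analysis, since
    # swapping vendors can change what a policy "appears to be about".
    record = {
        "policy_id": policy_id,
        "summary": summary,
        "model": model,
        "model_version": version,
        "analyzed_on": date.today().isoformat(),
    }
    return json.dumps(record)

raw = record_analysis(
    "POLICY-001",
    "Focuses on AI safety reporting requirements",
    "example-llm",
    "2025-06",
)
entry = json.loads(raw)
print(entry["model_version"])
```

Keeping this metadata alongside the output lets an agency reproduce or contest a result later, even after the vendor ships a new model version.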
Cisco Uses AI to Improve Safety Labeling Consistency
Cisco is using artificial intelligence to create consistent rules for detecting unsafe content. Their new method uses AI to read detailed definitions instead of relying on human memory. This approach reduces disagreements between different AI models when classifying conversations. The system checks both the intent of the user and the content of the message to flag potential harm. This method makes it easier to explain decisions to customers and maintain stable safety standards.
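The pattern of definition-driven labeling can be sketched as follows: each label ships with detailed written criteria that are embedded in the classification prompt, rather than left to a model's memory. The definitions below are placeholders invented for the example, not Cisco's actual policy text.

```python
# Placeholder label definitions; a production system would carry much
# longer, precisely worded criteria for each category.
DEFINITIONS = {
    "harassment": "Content that targets an individual with threats or demeaning language.",
    "self_harm": "Content that encourages or provides instructions for self-injury.",
}

def build_label_prompt(message: str) -> str:
    # Embedding the full definitions in the prompt is what reduces
    # disagreement between different models on the same conversation.
    rules = "\n".join(f"- {name}: {text}" for name, text in sorted(DEFINITIONS.items()))
    return (
        "Classify the message against these definitions, judging both the "
        "user's intent and the content itself:\n"
        f"{rules}\n\nMessage: {message}\nAnswer with one label or 'safe'."
    )

prompt = build_label_prompt("hello there")
print(len(prompt))
```

Because the criteria travel with every request, a flagged decision can be explained to a customer by pointing at the exact definition that matched.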
Hollywood Writers Train AI Amid Industry Changes
Many people in Hollywood are now working to train artificial intelligence models. Screenwriters and other creatives are creating content for AI systems that learn from YouTube videos and Wikipedia. Some worry that this data is biased and could lead to racist or sexist AI outputs. The entertainment industry is shifting from traditional writing methods to more automated approaches. Experts are concerned about how this change will affect the future of human writers and producers.
2026 Roadmap for Post-Quantum AI Security
The year 2026 marks a critical time for enterprise architects to plan for future security needs. Companies must prepare their infrastructure to handle the challenges of post-quantum computing. This shift will require significant updates to how AI systems are protected from new types of threats. Experts are urging organizations to start planning now to ensure their data remains safe.
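A common first planning step is a cryptographic inventory audit: listing the algorithms in use and flagging which are vulnerable to a large quantum computer. The sketch below is an illustrative planning aid, not a security tool; the mappings reflect well-known facts (Shor's algorithm breaks RSA and elliptic-curve schemes, while ML-KEM and ML-DSA are NIST post-quantum standards).

```python
# Algorithms breakable by a large-scale quantum computer via Shor's algorithm.
QUANTUM_VULNERABLE = {"RSA-2048", "ECDSA-P256", "DH-2048"}
# NIST-standardized post-quantum algorithms (FIPS 203 / FIPS 204).
POST_QUANTUM = {"ML-KEM-768", "ML-DSA-65"}

def audit(inventory: list) -> dict:
    """Tag each algorithm in an inventory as migrate / ok / review."""
    report = {}
    for alg in inventory:
        if alg in QUANTUM_VULNERABLE:
            report[alg] = "migrate"
        elif alg in POST_QUANTUM:
            report[alg] = "ok"
        else:
            # Symmetric ciphers like AES-256 are weakened but not broken
            # by quantum attacks, so they land in a manual-review bucket.
            report[alg] = "review"
    return report

print(audit(["RSA-2048", "ML-KEM-768", "AES-256"]))
```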
Sources
- Gen Z increasingly skeptical of, and angry about, artificial intelligence
- Denver students push to implement more sustainable AI practices
- AI in education: Council calls for human-centred approach
- Coder Agents Enable Running AI Coding Workflows on Self-Hosted Infrastructure
- Consultancies urged to slow AI push amid cyber risk
- See the hidden rules behind AI. Then use them to rewrite this article.
- When Federal Agencies Pick AI Vendors, They Are Buying Different Policy Interpretations
- Improving Labeling Consistency with Detailed Constitutional Definitions and AI-Driven Evaluation
- I Work in Hollywood. Everyone Who Used to Make TV Is Now Secretly Training AI
- The 2026 Roadmap to Post-Quantum AI Infrastructure Security