Artificial intelligence is rapidly transforming various sectors, from education and retail to cybersecurity and public health, bringing both immense potential and significant challenges. Educational institutions are actively engaging with AI, as Gonzaga University integrates AI ethics into its core curriculum, exploring authorship with AI image generation. K-12 schools in Hawaii and East Grand Forks are cautiously adopting generative AI, addressing concerns about cheating and content reliability. While Hawaii's Department of Education provides guidelines and Mid-Pacific Institute offers an AI Certification Course, East Grand Forks plans a slow rollout, currently blocking AI on student devices, but sees its potential for efficiency and deeper thinking among staff. The rapid growth of AI, with capabilities doubling every seven months, creates a security paradox. Attackers leverage AI for vulnerability discovery and exploiting critical infrastructure, leading to substantial financial losses, such as $4.91 million for supply chain breaches and an additional $670,000 per shadow AI incident. Autonomous AI systems introduce new risks like prompt injection and data leakage, with AI-related vulnerabilities on HackerOne surging by 210% in the past year. Many employees lack AI security training, often feeding sensitive data into unsanctioned tools. To counter these threats, Ostorlab launched an AI Pentesting Engine for Mobile Applications on November 26, 2025, automating vulnerability detection and exploitation to respond to zero-days within hours. In the corporate world, companies like Target are embracing AI to transform the shopping experience, integrating tools like ChatGPT, Perplexity, and Google's Gemini. Target is investing an additional $1 billion, part of a $5 billion total capital expenditure, focusing on technology to boost sales, and uses ChatGPT Enterprise internally for 18,000 employees. Google CEO Sundar Pichai highlighted the company's "AI-first" strategy, initiated in 2016, and discussed progress with AI releases like Gemini 3 and Nano Banana Pro, also expressing excitement for long-term investments in quantum computing. Singapore is proactively addressing AI governance with its updated Model AI Governance Framework and tools like AI Verify, guiding safe and trustworthy AI adoption. Meanwhile, the insurance sector is adapting, with newer policies expanding to cover AI-related incidents like deepfake fraud and chatbot data leaks, which older policies often miss. AI's influence extends to public health, with NYU launching a course on "Data, AI, and the People's Health" on November 26, 2025, to examine algorithmic bias and its impact on human well-being. In the beauty industry, AI is reshaping standards, with Gen-Z using AI tools for beauty advice and plastic surgeons noting patients bringing AI-modified images, reflecting the impact of AI-altered images on self-perception.
Key Takeaways
- Educational institutions like Gonzaga, Hawaii K-12 schools, and East Grand Forks are integrating AI into curricula and developing policies to address ethical use, cheating, and student engagement.
- AI capabilities are doubling every seven months, leading to a significant increase in cyber threats, including supply chain attacks costing $4.91 million and shadow AI incidents adding $670,000 per breach.
- Autonomous AI systems introduce new security risks such as prompt injection and data leakage, with AI-related vulnerabilities on HackerOne surging by 210% in the past year.
- Ostorlab launched an AI Pentesting Engine for Mobile Applications on November 26, 2025, providing automated, AI-driven security assessments to quickly identify and address vulnerabilities.
- Google CEO Sundar Pichai discussed the company's "AI-first" strategy, initiated in 2016, and highlighted progress with AI releases like Gemini 3 and Nano Banana Pro.
- Target is heavily investing in AI, committing an additional $1 billion (part of $5 billion total capital expenditures) to technology advancements, integrating tools like ChatGPT, Perplexity, and Gemini to enhance customer experience and internal operations.
- Singapore has developed an updated Model AI Governance Framework and tools like AI Verify to guide organizations in safe and trustworthy AI adoption, covering aspects like accountability and data governance.
- Cyber insurance policies are adapting to new AI-driven risks, such as deepfake fraud and chatbot data leaks, with newer policies expanding coverage beyond traditional cyberattacks.
- AI is influencing public health, with NYU launching a course to explore algorithmic bias and its impact on human health, and reshaping beauty standards, with patients bringing AI-modified images to plastic surgeons.
- A significant training gap exists in AI security, with 52% of employees lacking training, leading many to feed sensitive data into unsanctioned AI tools.
Gonzaga Students Explore AI Ethics in Core Curriculum
Gonzaga University's core curriculum now includes discussions on artificial intelligence. English professor Chase Bollig uses AI for image generation assignments, sparking talks about authorship. Ann Ciasullo, Director of the University Core, believes AI fits well with the curriculum's focus on inquiry and critical thinking. Faculty like Kris Morehouse teach students how AI creates a social world, often showing a narrow view limited by its creators. The goal is to help students understand AI's power and work towards more just and representative AI.
Hawaii Schools Tackle AI Challenges in Classrooms
Hawaii K-12 schools are carefully adopting generative AI, facing concerns about cheating and content reliability. Gabriel Yanagihara from Iolani School notes teachers lack clear guidance on safe AI tools and struggle with added workload. The Hawai i DOE provides general guidelines for ethical AI use and data privacy. Brian Grantham at Mid-Pacific Institute involves students in setting AI expectations for deeper learning. Mid-Pacific now has consistent AI policies across subjects and offers an AI Certification Course. Some teachers, like one from Farrington High School, are returning to paper assignments due to AI misuse.
East Grand Forks Schools See AI Potential, Plan Slow Rollout
East Grand Forks Public Schools leaders, Superintendent Kevin Grover and Jill Meulebroeck, believe AI can save time and promote deeper thinking. They plan a slow introduction of AI to students, focusing on proper use and waiting for state guidelines. Currently, AI is blocked on student devices, and the district has no specific AI policies. While acknowledging student misuse, Meulebroeck emphasizes working with AI, not against it, using an 80/20 rule for staff to verify AI-generated content. Grover sees AI helping staff with efficiency but stresses that it will not replace teachers.
Fast Growing AI Creates New Cyber Security Dangers
AI capabilities are rapidly increasing, doubling every seven months, which creates a security paradox in cybersecurity. Attackers use AI without limits, developing specialized tools for vulnerability discovery and exploiting critical infrastructure. A significant training gap exists, with 52 percent of employees lacking AI security training, and many feed sensitive data into unsanctioned AI tools. The threat landscape is evolving, showing increased supply chain attacks, cloud identity targeting, and automated social engineering. These AI-powered threats lead to substantial financial losses, with supply chain breaches costing $4.91 million and shadow AI adding $670,000 per incident.
New Security Plan Needed for Autonomous AI
AI adoption is growing fast, with AI-related vulnerabilities on HackerOne surging by 210% in the past year. Autonomous AI systems introduce new risks like prompt injection and data leakage, allowing attackers to manipulate AI behaviors. Attackers are combining human creativity with AI automation to create exploits that scale faster than defenses can react. CISOs must lead AI security, using metrics like Return on Mitigation to show how security investments reduce risk. A new security strategy requires using AI agents for detection, keeping human experts involved, and leveraging diverse security talent.
Autonomous AI Tools Create New Security Dangers
Autonomous AI agents are increasingly used in businesses, but many are deployed without proper cybersecurity planning. A study found 90% of these AI tools lack proper licensing or internal controls, leading employees to feed sensitive data into them. Companies often skip basic security steps like vendor vetting and risk reviews during AI rollouts, increasing liability. Shadow AI is spreading through unapproved tools and trusted platforms quietly adding AI features, causing loss of control. Human oversight and training remain crucial for guiding AI agents and preventing mistakes, even though AI can improve security when used correctly.
Sundar Pichai Discusses Google AI Progress and Future
Google CEO Sundar Pichai recently spoke on the Google AI: Release Notes podcast with host Logan Kilpatrick. Pichai discussed Google's "AI-first" strategy, which began in 2016, and its impact on current AI releases like Gemini 3 and Nano Banana Pro. He also shared his excitement for long-term investments, including quantum computing, which he expects to generate significant interest in about five years. The conversation covered the extraordinary progress and future plans for AI at Google.
Ostorlab Launches AI Tool for Mobile App Security
On November 26, 2025, Ostorlab introduced its AI Pentesting Engine for Mobile Applications. This new engine provides automated, AI-driven penetration testing to find, confirm, and safely exploit mobile app vulnerabilities. It offers continuous security assessments, helping organizations respond to issues like zero-days within hours or minutes. The engine provides clear proof-of-concept evidence and screenshots, reducing developer pushback and speeding up fixes. It integrates with existing Ostorlab workflows, turning lengthy reports into prioritized tickets.
Singapore Creates Framework for Safe AI Use
Singapore has developed an AI Framework based on its National AI Strategy to guide safe and trustworthy AI adoption. The Model AI Governance Framework, updated in 2024, offers practical guidance for organizations, including for generative AI. Tools like AI Verify help organizations test and validate their AI systems, while existing laws like PDPA and the Cybersecurity Act also apply. The framework outlines nine core functions for responsible AI, covering areas like accountability, data governance, security, and content provenance. Following this framework helps organizations build trust, manage risks, and ensure AI benefits society.
AI Transforms Cyber Risks, Insurance Must Adapt
On November 26, 2025, experts discussed how AI is changing cyber risks and the need for insurance policies to adapt. Heather Weaver, Bryan Sterba, and David Anderson highlighted new threats like deepfake fraud and chatbot data leaks. Older cyber insurance policies often do not cover AI-related incidents, as they were designed for traditional attacks. Newer policies are expanding to include AI platforms and third-party AI services, offering broader coverage for security failures and business interruptions. Businesses must actively negotiate for updated policies and align their contracts and risk controls to address these evolving AI exposures.
NYU Course Explores AI Impact on Public Health
NYU School of Global Public Health launched a new course called "Data, AI, and the People s Health" on November 26, 2025. Taught by Rumi Chunara, the course examines how data and AI shape human health, focusing on fairness and accuracy. Graduate students from various NYU schools learn foundational concepts and analyze real-world case studies of algorithmic bias. Examples include facial recognition issues and a health insurance algorithm that prioritized cost over illness. The course aims to equip students with a "toolbox" of skills to critically evaluate and build AI technologies for public health.
Target Embraces AI to Transform Shopping Experience
Target's Chief Information and Product Officer, Prat Vemana, states that AI is changing how Americans shop, leading Target to adopt more AI features. Target integrates with tools like ChatGPT, Perplexity, and Gemini to assist customers and has seen seven quarters of digital sales growth. Internally, Target uses ChatGPT Enterprise for 18,000 employees and a generative AI chatbot called Store Companion to help store operations. The retailer is also deploying agentic AI for customer service and IT, along with a data science-led inventory management system. Target plans to invest an additional $1 billion in capital expenditures, totaling $5 billion, with a focus on technology advancements to boost sales.
AI Transforms Beauty Standards and Industry
NBC10 Boston and Boston University students explored the growing influence of AI in social media and the beauty industry. Gen-Z relies on AI tools for beauty advice and to achieve a "snatched" or perfect look, driven by AI-altered images. AI is becoming central to the industry, offering tools like virtual try-ons, personalized recommendations, and AR content creation. Dr. Jeffrey Spiegel, a plastic surgeon, notes patients now bring AI-modified images of themselves, setting new beauty expectations. Dr. Jill Walsh, a researcher, highlights how constant exposure to digital images through social media and AI affects young women's self-perception.
Sources
- AI in the Core
- Artificial Intelligence in Hawai‘i K-12 Education - Part 2
- East Grand Forks Public Schools district leaders say artificial intelligence has potential in education
- The AI Security Paradox: How Exponential Growth Creates Tomorrow's Biggest Cyber Threats
- AI Autonomy Demands a New Security Playbook
- Agentic AI Raises New Security Risks Few Are Addressing - The National CIO Review
- Get an in-depth look at Gemini 3 with CEO Sundar Pichai.
- Ostorlab brings automated, proof-backed mobile app security testing
- Singapore AI Framework
- AI Is Changing Cyber Risk–Is Your Insurance Keeping Up? (Video)
- Cool Course: Data, AI, and the People’s Health
- AI is reshaping how Americans shop. Here’s how Target’s top tech leader says the retailer is adapting
- Bots or botox: Confronting AI in the beauty world
Comments
Please log in to post a comment.