The artificial intelligence landscape continues to evolve rapidly, bringing both significant advancements and pressing concerns across sectors.

Google has repeatedly clarified that it does not use private Gmail emails or attachments to train its Gemini AI models, addressing claims that arose from a class action lawsuit and media reports. The company emphasized that its "Smart Features," such as Smart Compose, Smart Reply, and spam filtering, have been in place for years and operate independently of Gemini's training, scanning emails for functionality rather than AI development. These features, often enabled by default, can be turned off by users. Meanwhile, other tech giants are navigating data usage for AI. Meta announced a new policy taking effect December 16, confirming that it uses public user content such as photos, posts, and comments for AI training, but not private messages. Users on Facebook, Instagram, or Threads cannot fully opt out of Meta AI training, though WhatsApp users can deactivate it per chat. The United States currently lacks federal regulations on AI training and privacy, in contrast to some other countries.

The rise of AI also brings security challenges. Companies like Nightfall and Palo Alto Networks (using Prisma SASE and Prisma Access) are developing tools to combat "shadow AI": unauthorized AI tools that pose data leakage risks. Quest Software also enhanced its security and migration tools on November 24, 2025, adding AI-powered features to its Security Guardian platform, which integrates with Microsoft Security Copilot to identify identity gaps and secure transitions to cloud-native Entra ID.

AI is transforming industries globally. Banks, including Bendigo Bank partnering with Google and Commonwealth Bank with OpenAI, are investing heavily in AI for fraud detection, risk management, and customer service, with "autonomous AI" being the next frontier for tasks like opening new accounts. This shift, however, raises concerns about job displacement, as seen with Commonwealth Bank call center workers, and about AI introducing bias. In healthcare, Harvard Medical School researchers developed popEVE, an AI model that significantly speeds up rare disease diagnosis by predicting how likely a genetic variant is to cause severe illness. PopEVE improves on the earlier EVE model by incorporating a protein language model and human population data, and it has identified over 100 new alterations linked to undiagnosed rare genetic diseases as well as potential new drug targets. In education, Greece launched a pilot program on November 24, 2025, to train secondary school teachers in using a customized ChatGPT model for lesson planning and personalized teaching, despite concerns about student autonomy and critical thinking.

The broader societal impact of AI is also under scrutiny. The Macquarie Dictionary named "AI slop" its word of 2025, referring to poor-quality, error-filled AI-generated content that is increasingly difficult to distinguish from human-made content and can contribute to misinformation. This phenomenon, along with AI's reliance on vast amounts of internet data, is fueling legal debates about copyright and privacy, particularly in journalism, where AI-generated content can be fluent but inaccurate or biased, risking democratic accountability. On a darker note, misuse of AI has led to severe consequences: Dalton Edwards was sentenced to 30 to 45 years in prison for child sexual abuse material, some of which was created using artificial intelligence.
Finally, to manage rising inference costs, companies in the Asia Pacific region are shifting AI infrastructure to the "edge." Akamai, in collaboration with NVIDIA, launched Inference Cloud to place AI processing closer to users, reducing latency and high data routing costs, and improving performance for real-time AI applications.
Key Takeaways
- Google denies using private Gmail emails to train its Gemini AI models, stating that "Smart Features" are separate and have existed for years.
- Meta will use public user content (photos, posts, comments) for AI training starting December 16, but not private messages; users cannot fully opt out on Facebook, Instagram, or Threads.
- Harvard Medical School researchers developed popEVE, an AI model that accelerates rare disease diagnosis and has identified over 100 new genetic alterations linked to undiagnosed conditions.
- Companies like Nightfall and Palo Alto Networks are deploying security tools to combat "shadow AI" and prevent sensitive data leaks.
- Banks, including Bendigo Bank (with Google) and Commonwealth Bank (with OpenAI), are heavily investing in AI for operations, but face concerns about job displacement and bias.
- Quest Software updated its Security Guardian platform on November 24, 2025, adding AI-powered features that integrate with Microsoft Security Copilot for enhanced identity security.
- "AI slop," defined as poor quality, error-filled generative AI content, was named Macquarie Dictionary's word of 2025, raising concerns about misinformation and job losses.
- Greece launched a pilot program on November 24, 2025, to train secondary school teachers in using a customized ChatGPT model for educational purposes.
- Companies in the Asia Pacific region are moving AI infrastructure to the "edge" to reduce inference costs and latency; Akamai, with NVIDIA, launched Inference Cloud to support this shift.
- Dalton Edwards received a 30-45 year prison sentence for child sexual abuse material, some of which was created using artificial intelligence.
Google denies using Gmail emails for AI training
Google denied claims that it uses private emails from Gmail to train its Gemini AI models. These claims arose after a class action lawsuit and reports from Malwarebytes. Google stated that its "Smart Features" have existed for many years and do not train Gemini. While these features scan emails for things like spam and writing suggestions, this is different from AI training. Google also clarified that it has not changed user settings, though some Smart Features are automatically enabled.
Tech companies use public data for AI training
Tech companies are rapidly developing AI products, raising questions about how they use personal data for training. A November 8 Instagram post claimed that Gmail had changed its settings to use private emails, which Google denies. Meta announced a new policy taking effect December 16; it does not feed private messages into its AI tools, but it does use public user content like photos, posts, and comments to train its AI. Users cannot fully opt out of Meta AI training on Facebook, Instagram, or Threads, though WhatsApp users can deactivate it per chat. Experts note the US lacks federal regulations on AI training and privacy, unlike some other countries.
Google confirms Gmail data not used for Gemini AI
Google confirmed on November 24, 2025, that it does not use Gmail emails to train its Gemini AI models. Reports had suggested otherwise, but Google clarified that its "Smart Features" are not new and do not contribute to AI training. These features include Smart Compose, Smart Reply, and spam filtering, which scan email content for functionality. Users can turn off these Smart Features in Gmail, Chat, Meet, and Google Workspace settings if they wish. ZDNet and other sources noted these settings were often enabled by default on new accounts.
Google clarifies Gmail settings not for AI training
Google clarified on November 24, 2025, that it does not use Gmail emails or attachments to train its Gemini AI models. The company stated that reports claiming otherwise were misleading and that no user settings were changed without consent. Google explained that its "Smart Features," like spell check and spam detection, have existed for years and operate separately from Gemini's training. The confusion arose from recent updates to how these settings are displayed. Malwarebytes, whose reporting initially contributed to the misunderstanding, later updated its article to reflect Google's clarification.
New AI model popEVE speeds rare disease diagnosis
Harvard Medical School researchers developed a new AI model called popEVE to speed up rare disease diagnosis. PopEVE predicts how likely the genetic variants in a patient's DNA are to cause severe disease or death. It improves on an earlier model, EVE, by adding a protein language model and human population data, which allows popEVE to compare variants across different genes; the approach has identified over 100 new alterations linked to undiagnosed rare genetic diseases. The team hopes popEVE will help clinicians diagnose single-variant genetic diseases more quickly and accurately, and that it could also surface new drug targets for genetic conditions.
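PopEVE's practical payoff is that its scores are comparable across genes, so a patient's whole set of variants can be ranked on one scale. Below is a minimal sketch of that ranking step in Python; the Variant structure, scores, and threshold are illustrative assumptions, not the actual popEVE pipeline or data.

```python
# Hypothetical sketch: rank a patient's variants by a model score that is
# calibrated to be comparable across genes (the core idea behind popEVE's
# cross-gene comparison). All scores and the threshold are illustrative.
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    change: str            # protein-level substitution, e.g. "p.Arg273His"
    severity_score: float  # model-assigned likelihood of causing severe disease

def rank_candidates(variants: list[Variant], threshold: float = 0.9) -> list[Variant]:
    """Return variants above a severity threshold, most severe first.

    Because the scores sit on one scale for all genes, a single sort
    surfaces the most likely causal variant in the whole exome.
    """
    flagged = [v for v in variants if v.severity_score >= threshold]
    return sorted(flagged, key=lambda v: v.severity_score, reverse=True)

patient = [
    Variant("SCN2A", "p.Arg853Gln", 0.97),
    Variant("TTN",   "p.Ile12345Val", 0.12),  # benign-looking change in a large gene
    Variant("BRCA2", "p.Lys3326Ter", 0.55),
]
for v in rank_candidates(patient):
    print(f"{v.gene} {v.change}: score {v.severity_score:.2f}")
```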
Companies fight "shadow AI" with new security tools
As more AI tools are used in software development, companies are creating new ways to control "shadow AI." Shadow AI refers to unauthorized AI tools that can leak sensitive data. Companies like Nightfall offer Data Loss Prevention (DLP) platforms to detect and stop data from going into these unapproved AI tools. Palo Alto Networks uses its Prisma SASE and Prisma Access services to monitor and control generative AI applications. Experts like Meerah Rajavel from Palo Alto Networks warn that AI models and data are new targets for attacks. Other vendors like Netskope and Zylo also provide tools to manage and secure AI usage.
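At their core, these DLP controls inspect outbound text for sensitive patterns before it reaches an unapproved AI tool. A minimal sketch of the idea follows; the regex rules and function names are illustrative assumptions, and real detectors such as Nightfall's are far more sophisticated (ML-based, covering many more data types).

```python
# Minimal sketch of DLP-style scanning of text bound for an unapproved AI tool.
# Patterns below are illustrative assumptions, not any vendor's actual rules.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key":     re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive-data types found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_request(prompt: str) -> bool:
    """Block the request if the prompt contains anything sensitive."""
    findings = scan_outbound(prompt)
    if findings:
        print(f"Blocked: prompt contains {', '.join(findings)}")
        return False
    return True

allow_request("Summarize our Q3 roadmap")                 # allowed
allow_request("Debug this: customer SSN is 123-45-6789")  # blocked
```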
Banks embrace AI for future services and operations
Banks in Australia and worldwide are investing heavily in AI to transform their business. Bendigo Bank is partnering with Google for its AI tools, following Commonwealth Bank's deal with OpenAI. Banks already use AI for fraud detection, risk management, and customer service, saving millions. The next wave involves "autonomous AI," where AI makes decisions and takes actions independently, like NAB's trials for opening new accounts. This could lead to faster services but also threatens jobs, as seen with Commonwealth Bank call center workers. Concerns remain about AI's potential to create biases and the need for fair, secure systems.
AI changes journalism, raising trust concerns
Artificial Intelligence, especially generative AI, is changing journalism and the information environment. GenAI tools collect vast amounts of data from the internet, leading to legal debates about copyright and privacy. These machines use statistical pattern recognition to generate content, not actual understanding of facts or concepts. This means AI output is based on patterns in training data, not truth or verification. For journalism, this is a major concern because AI can produce fluent but potentially inaccurate or biased content. The rise of unlabeled AI-generated content makes it harder for audiences to tell facts from fabrications, posing risks to democratic accountability.
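The point about pattern recognition versus understanding is easy to demonstrate in miniature. The toy bigram generator below is far cruder than any real generative AI system, but it shows the same failure mode in kind: output is chosen by frequency in the training text, not by whether it is true.

```python
# Toy bigram text generator: output is driven purely by word-pair frequencies
# in the training text, with no concept of facts or verification.
import random
from collections import defaultdict

training_text = (
    "the senator denied the report the report cited the senator "
    "the senator confirmed the budget the budget cited the report"
)

# Count which words follow each word.
follows = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 10, seed: int = 0) -> str:
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

# Fluent-sounding, but "denied" vs "confirmed" is picked by frequency, not truth.
print(generate("the"))
```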
APAC companies shift AI to edge to cut costs
Companies in the Asia Pacific region are moving their AI infrastructure to the "edge" to manage rising inference costs. Many AI projects struggle to deliver value because current systems are not built for fast, large-scale AI decision-making. Akamai, with NVIDIA, launched Inference Cloud to address this by placing AI processing closer to users. Jay Jenkins, Akamai's CTO, explains that inference, not training, is the main bottleneck as AI adoption grows. Moving inference to the edge reduces latency and the high costs of routing large data volumes to distant data centers. This shift can significantly cut costs and improve performance for real-time AI applications.
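The latency case for edge inference is straightforward to estimate. The back-of-envelope sketch below compares network round-trip time to a distant central region versus a nearby edge site; the distances, routing overhead, and fiber-speed constant are illustrative assumptions, not Akamai's figures.

```python
# Back-of-envelope: network round-trip time for inference requests, central
# data center vs nearby edge site. All numbers are illustrative assumptions.
SPEED_IN_FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per ms

def round_trip_ms(distance_km: float, route_overhead: float = 1.5) -> float:
    """Estimate RTT: out and back, inflated for non-direct routing."""
    return 2 * distance_km * route_overhead / SPEED_IN_FIBER_KM_PER_MS

central = round_trip_ms(8000)  # e.g. Singapore to a US west coast region
edge    = round_trip_ms(50)    # e.g. a metro-local edge site

print(f"Central region RTT: ~{central:.0f} ms per request")
print(f"Edge site RTT:      ~{edge:.1f} ms per request")
# For a chatbot making several round trips per interaction, the edge path
# also avoids hauling large payloads across expensive long-haul links.
```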
"AI slop" is Macquarie Dictionary's word of 2025
Macquarie Dictionary named "AI slop" as its word of the year for 2025, chosen by both its committee and the public. "AI slop" refers to poor quality content created by generative AI that often contains errors and is not requested by users. Experts warn that this content is increasingly appearing in people's media diets, making it hard to distinguish from human-made content. Adam Nemeroff from Quinnipiac University noted examples like AI-generated images used in political misinformation. He also stated that AI slop can lead to job losses for human creators. The Cambridge Dictionary also chose an AI-related word, "parasocial," for 2025.
Greece trains teachers to use AI in schools
Greece launched a pilot program on November 24, 2025, to train secondary school teachers in using AI tools. This initiative introduces a customized ChatGPT model to help educators with lesson planning, research, and personalized teaching. Officials hope to prepare teachers for an era where AI supports classroom practices. While supporters believe AI can boost learning, students and teachers have concerns. They worry about losing autonomy and creativity, and that AI might increase screen time or erode critical thinking. Teacher unions also highlight the need for better school infrastructure before focusing on digital reforms.
Quest uses AI to boost Microsoft identity security
Quest Software released major updates to its security and migration tools on November 24, 2025, adding new AI-powered features. The enhanced Security Guardian platform now provides AI-generated summaries that identify critical identity gaps and suggest fixes. A new Security Guardian Agent works with Microsoft Security Copilot to give identity information directly to platforms like Sentinel. Quest also introduced Identity Modernisation Suites to help securely move from old Active Directory systems to cloud-native Entra ID. These updates aim to reduce complexity and risk in digital transformations, creating secure and AI-ready foundations for data and identity infrastructure.
Volunteer gets 30 years for AI child abuse material
Dalton Edwards, a 34-year-old former school volunteer in Morrow County, received a sentence of 30 to 45 years in prison. He pleaded guilty to charges related to child sexual abuse material, some of which was created using artificial intelligence. The investigation began after the National Center for Missing and Exploited Children reported suspicious online activity. Authorities arrested Edwards in January 2023 and found thousands of images and videos, including AI-generated content, on his devices. Edwards had volunteered at Highland Local Schools, which is cooperating with the investigation. He must also register as a sex offender.
Sources
- Google denies analyzing your emails for AI training
- Are tech companies training their AI with private data?
- Google Does Not Read Your Gmail To Train Gemini AI Models
- Google Responds After Reports Misinterpret Gmail Settings As AI Training
- New Artificial Intelligence Model Could Speed Rare Disease Diagnosis
- The rise (and fall?) of shadow AI
- Your bank is already using AI. But what’s coming next could be radically new
- AI in Journalism and Democracy: Can We Rely on It?
- APAC enterprises move AI infrastructure to edge as inference costs rise
- Macquarie Dictionary names 'AI slop' as word of the year 2025
- Greece accelerates AI training for teachers
- Quest deploys AI to secure Microsoft identities
- Morrow County school volunteer sentenced to at least 30 years in AI child sexual abuse material case