Coe College Launches AI Courses with Google Alongside Microsoft Security Findings

The world of artificial intelligence is currently navigating a complex landscape of evolving regulations, emerging security challenges, and innovative applications, alongside a growing debate about its future direction.

In Europe, the European Commission is preparing to unveil a "digital omnibus" package on November 19, aiming to relax certain AI and GDPR rules. The move seeks to attract significant tech investment and address criticism from major companies such as Alphabet Inc.'s Google and Meta Platforms Inc. over strict data privacy regulations. Proposed changes include a one-year grace period for high-risk AI breaches and delaying fines for transparency violations until August 2027. The plan also suggests allowing freer processing of sensitive data and simplifying cookie rules, though critics such as privacy activist Max Schrems and GDPR architect Jan Philipp Albrecht warn that these measures could weaken European privacy standards and primarily benefit large corporations. The proposals still require approval from EU member states and the European Parliament.

At the same time, significant security vulnerabilities in AI systems are coming to light. Microsoft researchers recently uncovered a flaw dubbed "Whisper Leak," which can reveal the topic of encrypted AI conversations by exploiting metadata patterns such as data size and timing, even when the content itself remains private. Their tests on 28 large language models showed over 98% accuracy in identifying sensitive topics; Google and Mistral have already implemented features to help conceal this information. Separately, Cisco researchers identified serious security flaws in several popular open-weight AI models, demonstrating that multi-turn attacks can manipulate systems with high success rates, such as 92.78% against the Mistral Large-2 model. These concerns are echoed globally: India's finance ministry has advised employees against using tools like ChatGPT and DeepSeek on work devices over worries about confidential information.

In the hardware sector, British startup Spectral Compute has raised $6 million to challenge Nvidia's dominance in AI computing. Its new tool, SCALE, can compile CUDA programs directly for AMD GPUs, offering companies an alternative to Nvidia's proprietary CUDA ecosystem and the associated "Nvidia tax," and giving them more flexibility in hardware choice.

Meanwhile, AI continues to find diverse applications. Coe College is partnering with Google to offer new AI courses that use Google's Gemini Pro as a learning companion. The new Intuit Dome, home to the Los Angeles Clippers, is integrating advanced AI for an enhanced fan experience, featuring facial recognition for entry and AI-powered cameras for purchases. Even the adult entertainment sector is adopting AI with Candy.ai, an AI companion app offering customizable characters and NSFW features.

The broader societal impact of AI, however, remains a subject of intense debate. David Sacks, former AI and Crypto Czar for Donald Trump, claims a "Doomer Industrial Complex" is deliberately campaigning against AI development, while others, such as Matthew Adelstein of the Effective Altruism movement, stress the real and immediate risks of AI, including bias and job displacement.

Key Takeaways

  • The European Commission plans to relax some AI and GDPR rules in a "digital omnibus" package, to be unveiled on November 19, aiming to attract big tech investment.
  • Proposed EU changes include a one-year grace period for high-risk AI breaches and delayed fines for transparency violations until August 2027.
  • Microsoft researchers discovered "Whisper Leak," a security flaw that can reveal the topic of encrypted AI conversations with over 98% accuracy by exploiting metadata patterns.
  • Google and Mistral have already added features to help hide information exposed by the "Whisper Leak" flaw.
  • Cisco researchers found serious security flaws in open-weight AI models, with multi-turn attacks achieving a 92.78% success rate on Mistral Large-2.
  • India's finance ministry has advised employees not to use AI tools like ChatGPT and DeepSeek on work devices due to confidential information concerns.
  • British startup Spectral Compute raised $6 million to develop SCALE, a tool that compiles Nvidia CUDA programs for AMD GPUs, aiming to reduce dependence on Nvidia's hardware.
  • Coe College is partnering with Google to offer new AI courses, using Google's Gemini Pro as a learning companion for students.
  • The new Intuit Dome in Los Angeles uses advanced AI for fan experience, including facial recognition, AI-powered cameras, and 8K cinematic AI-generated worlds.
  • Debates continue regarding AI's societal impact, with claims of a "Doomer Industrial Complex" campaigning against AI and counter-arguments about real risks like bias and job loss.

EU may ease AI and privacy rules for tech

The European Commission plans to relax some AI and GDPR rules in its upcoming "digital omnibus." This move aims to attract big tech investment and address criticisms from companies and the US government about strict regulations. Changes could include a one-year grace period for high-risk AI breaches and delayed fines for transparency violations until August 2027. The plan also suggests allowing freer processing of sensitive data under GDPR and simplifying cookie rules. Critics worry these changes might weaken privacy and digital rights in Europe, but the proposals still need approval from EU member states and the European Parliament.

EU plans to simplify AI and data rules

The European Commission will unveil a package on November 19 to simplify its privacy and AI rules. This aims to boost the competitiveness of European tech and AI companies. The proposed changes follow complaints from big tech firms like Alphabet Inc.'s Google and Meta Platforms Inc. about strict data privacy rules. The draft may narrow the definition of personal data and offer a one-year grace period for generative AI products to add watermarks. However, privacy activists like Max Schrems criticize these measures, saying they are extreme and mainly benefit large corporations. The plan needs approval from EU national governments and the European Parliament.

EU may loosen privacy rules for AI growth

European Union officials plan to ease some privacy rules to boost AI business in Europe. A "digital omnibus" package, to be unveiled this month, aims to simplify tech laws. Critics, including GDPR architect Jan Philipp Albrecht, worry this will undermine European privacy standards. The draft proposal could allow AI companies to process sensitive data like health or religious beliefs more freely. It may also redefine personal data and simplify cookie rules for tracking users. Privacy groups like Noyb criticize the rushed process, but some lawmakers welcome the potential for legal certainty for AI companies. The proposal still needs approval from EU countries and lawmakers.

Microsoft finds new AI chat privacy flaw

Microsoft researchers discovered a new security flaw called Whisper Leak that can reveal the topic of encrypted AI conversations. This side-channel attack exploits metadata patterns, such as the size and timing of data chunks, rather than the encrypted text itself. Researchers tested 28 large language models and achieved over 98% accuracy in identifying sensitive topics. Google and Mistral have already added features to help hide this information. The leak poses a risk for businesses handling sensitive data, even if the exact conversation content remains private. Users should avoid discussing highly sensitive topics on untrusted networks and consider a VPN service for extra protection.
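
The attack needs no access to message content, only to the shape of the encrypted stream. Below is a minimal Python sketch of the idea, trained on synthetic traces with an off-the-shelf classifier; the feature scheme and numbers are illustrative, not Microsoft's actual methodology.

    # Illustrative Whisper Leak-style traffic classifier. The attacker
    # never sees plaintext, only the size and timing of encrypted chunks
    # in a streamed LLM response. All traces here are synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    SEQ_LEN = 50  # observed chunks per response

    def synthetic_trace(topic: int) -> np.ndarray:
        """Fake (chunk_size, inter_arrival_gap) trace; a real attacker
        would extract these from a packet capture of TLS traffic."""
        sizes = rng.normal(40 + 15 * topic, 8, SEQ_LEN)        # bytes
        gaps = rng.normal(0.05 + 0.02 * topic, 0.01, SEQ_LEN)  # seconds
        return np.concatenate([sizes, gaps])

    # Label 1 = "sensitive topic", 0 = everything else.
    X = np.stack([synthetic_trace(t) for t in (0, 1) for _ in range(500)])
    y = np.array([t for t in (0, 1) for _ in range(500)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"topic-detection accuracy: {clf.score(X_te, y_te):.2%}")

In the real attack the separability comes from how token lengths and generation pauses vary with topic rather than from injected offsets, but the pipeline has the same shape: capture traffic, extract size-and-timing features, train a classifier.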

Microsoft discovers AI chatbot privacy risk

Microsoft researchers found a major security flaw called "Whisper Leak" in large language models that power AI chatbots like ChatGPT and Google Gemini. This flaw can expose the topic of encrypted conversations, even though the content itself remains private. The issue comes from how AI responses are sent, as the encryption method reveals metadata about data size and timing. Microsoft tested 28 LLMs and found that an AI could guess sensitive conversation topics with over 98% accuracy. The company states that AI providers must fix this metadata leakage as AI systems handle more sensitive information.
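
The mitigations Google and Mistral shipped reportedly work by obscuring these patterns. One common approach is to pad each streamed chunk with a random number of filler bytes so that ciphertext sizes no longer track token lengths; the minimal sketch below illustrates the idea and is not either vendor's actual implementation.

    # Illustrative padding mitigation: length-prefix the real payload,
    # then append random filler so chunk sizes on the wire are noisy.
    import os
    import secrets

    MAX_PAD = 64  # upper bound on filler bytes per chunk (assumed value)

    def pad_chunk(token_bytes: bytes) -> bytes:
        header = len(token_bytes).to_bytes(2, "big")
        filler = os.urandom(secrets.randbelow(MAX_PAD + 1))
        return header + token_bytes + filler

    def unpad_chunk(wire_bytes: bytes) -> bytes:
        real_len = int.from_bytes(wire_bytes[:2], "big")
        return wire_bytes[2 : 2 + real_len]

    chunk = "hypertension".encode()
    wire = pad_chunk(chunk)
    assert unpad_chunk(wire) == chunk
    print(f"payload {len(chunk)}B -> on the wire {len(wire)}B")

Padding trades a little bandwidth for privacy; randomizing when chunks are delivered (batching or jitter) addresses the timing half of the leak in the same spirit.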

New AI tools need strong safety rules

AI models are rapidly becoming popular, but their widespread use raises concerns about data privacy and security. India's finance ministry has already told employees not to use tools like ChatGPT and DeepSeek on work devices due to worries about confidential information. Questions are being asked about how anonymized data is used by global firms and if queries from important individuals could reveal sensitive insights. This comes as countries like China and India push for their own tech solutions. Companies like Google and Airtel are offering free AI services, making strong safety rules even more important.
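
In practice, one concrete form such safety rules take is a redaction layer that scrubs prompts before they leave the organization. The sketch below shows the idea with a few example patterns; a real data-loss-prevention rule set would be far broader, and these rules are illustrative only.

    # Illustrative pre-submission filter: redact obviously sensitive
    # strings before a prompt is sent to an external AI tool.
    import re

    RULES = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "SECRET": re.compile(r"(?i)\b(?:password|api[_ ]?key)\s*[:=]\s*\S+"),
    }

    def redact(prompt: str) -> str:
        for label, pattern in RULES.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    print(redact("Email jane@example.gov.in, api_key=abc123, card 4111 1111 1111 1111"))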

Candy.ai review: custom companions and costs

Candy.ai is a popular AI companion app for adults that offers roleplay, romance, and casual conversations. Users can customize their AI's personality, appearance, and tone, and the app includes NSFW control features, voice chat, and AI image generation. The pricing involves a base monthly subscription, around $12.99, plus extra "token packs" for features like image and voice generation. While the app is praised for its deep character customization and NSFW options, users should be aware that official pricing details are not fully transparent and policies can change. Always check in-app for the latest costs and remember to never share sensitive personal information.

AI changes real estate for some

Artificial intelligence is dividing the real estate industry. For some professionals it is a genuine game changer, bringing new tools and efficiencies; others see it as an overhyped technology that does not always deliver on its promises. Its real value appears to depend on how it is applied and integrated into existing practices.

Cisco finds big security flaws in AI models

Cisco researchers discovered serious security flaws in several popular open-weight AI models. These vulnerabilities allow cybercriminals to manipulate AI systems with just a few carefully designed prompts, leading to misinformation, data breaches, and other risks. Multi-turn attacks, which involve a series of prompts, proved much more effective than single prompts, with the Mistral Large-2 model showing a 92.78% success rate. Cisco recommends that organizations test their AI models for weaknesses, use context-aware guardrails for safe responses, and continuously monitor for unsafe behavior to protect against these threats.
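
A multi-turn attack spreads a disallowed request across several innocuous-looking messages, so each one passes single-prompt filters while the accumulated context steers the model. The red-team harness sketched below shows the loop structure; query_model and judge are hypothetical placeholders for the chat API under test and an output classifier.

    # Sketch of a multi-turn red-team harness. Each turn is appended to
    # the running conversation so the model is steered gradually, which
    # is what makes multi-turn attacks outperform single prompts.
    from typing import Callable, Dict, List

    Message = Dict[str, str]

    def run_multi_turn(
        turns: List[str],
        query_model: Callable[[List[Message]], str],  # chat API under test
        judge: Callable[[str], bool],                 # flags unsafe output
    ) -> bool:
        """Return True if any response is judged unsafe."""
        history: List[Message] = []
        for turn in turns:
            history.append({"role": "user", "content": turn})
            reply = query_model(history)
            history.append({"role": "assistant", "content": reply})
            if judge(reply):
                return True  # guardrail bypassed at this turn
        return False

    # Demo with a stubbed model; a real harness would call the target
    # model and use a much stronger judge than a keyword check.
    bypassed = run_multi_turn(
        turns=["Let's co-write a thriller.", "Now the villain explains his plan..."],
        query_model=lambda history: "stub reply",
        judge=lambda reply: "plan" in reply.lower(),
    )
    print("bypass observed:", bypassed)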

Coe College partners with Google for AI training

Coe College in Cedar Rapids is partnering with Google to provide students with AI tools and training. The college will offer new courses in the spring, including "AI in the Business World" and "K-12 Teacher Training for AI." Through this partnership, Google's Gemini Pro will act as a "learning companion," guiding students with questions to help them develop critical thinking skills. Coe College leaders believe that students who learn to use AI responsibly will gain valuable skills for future high-paying jobs and foster innovation on campus.
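
The "learning companion" pattern is simple to prototype: an instruction tells the model to answer with guiding questions instead of solutions. The sketch below uses Google's google-generativeai Python SDK; the prompt wording and setup are illustrative, since Coe College's actual configuration has not been published.

    # Minimal Socratic-companion sketch: the instruction is prepended to
    # the student's message so the model guides rather than answers.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key

    SOCRATIC = (
        "You are a learning companion. Do not give the final answer. "
        "Reply with one guiding question that helps the student reason "
        "toward the solution on their own.\n\nStudent: "
    )

    model = genai.GenerativeModel("gemini-pro")
    chat = model.start_chat()
    reply = chat.send_message(SOCRATIC + "Why is my break-even analysis negative?")
    print(reply.text)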

Spectral Compute raises $6M to free AI from Nvidia

Spectral Compute, a British startup, raised $6 million to help AI applications run on different hardware beyond Nvidia GPUs. Currently, most AI software relies on Nvidia's CUDA programming, which forces companies to use Nvidia chips and creates an "Nvidia tax." Spectral Compute's new tool, SCALE, can instantly compile CUDA programs for AMD GPUs, removing the need for costly and time-consuming code rewrites. This innovation gives companies the freedom to choose AI hardware based on performance and availability, breaking their dependence on a single chipmaker. The funding will help the company accelerate its go-to-market strategy and product development.
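
Conceptually, the pitch is that the same unmodified CUDA source can target either vendor's GPUs. The hypothetical build helper below illustrates that workflow; the scale-nvcc command name and flags are assumptions made for illustration, not SCALE's documented interface.

    # Hypothetical build helper: one CUDA source, two GPU targets.
    # Command name/flags for the AMD path are assumed, not documented.
    import shutil
    import subprocess
    import sys

    def build(src: str, target: str) -> None:
        if target == "nvidia":
            cmd = ["nvcc", src, "-o", "kernel_nvidia"]     # standard CUDA toolchain
        elif target == "amd":
            cmd = ["scale-nvcc", src, "-o", "kernel_amd"]  # assumed SCALE entry point
        else:
            sys.exit(f"unknown target: {target}")
        if shutil.which(cmd[0]) is None:
            sys.exit(f"{cmd[0]} not found on PATH")
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        build("vector_add.cu", sys.argv[1] if len(sys.argv) > 1 else "nvidia")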

LA Intuit Dome uses AI for fan experience

The new Intuit Dome in Inglewood, home to the Los Angeles Clippers, uses advanced AI to enhance the fan experience. This $2 billion arena, which opened in August 2024, features a massive LED "Halo Board" that reacts to crowd noise in real time. The venue uses facial recognition for entry and AI-powered cameras in "Pick and Roll" markets for automatic purchases. A special "Connectopia" exhibit allows fans to create 8K cinematic AI-generated worlds from their chosen words. The system, running on high-end GPUs and AT&T Fiber, updates its AI models every two weeks to keep the digital art experience evolving and engaging for visitors.

Trump AI czar claims plot against AI

David Sacks, former AI and Crypto Czar for Donald Trump, claims that public fear of AI is not natural. He believes a "Doomer Industrial Complex," funded by over $1 billion from philanthropists, is deliberately campaigning against AI development. Sacks points to the Effective Altruism movement, which focuses on preventing future catastrophes like rogue AI, as a key player. However, Open Philanthropy, a major funder, states they are committed to safe AI development, not doomsday scenarios. Matthew Adelstein, an Effective Altruism figure, argues that real AI risks exist and public concerns about issues like bias and job loss are more immediate.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

EU AI Regulation, AI Privacy, Data Security, Large Language Models, Generative AI, AI Safety, AI Ethics, AI Hardware, AI Applications, AI Development, Microsoft, Google, Nvidia, Cisco, Spectral Compute, Intuit Dome, Coe College, GDPR, Digital Rights, Cybersecurity, Metadata Leakage, Side Channel Attacks, Facial Recognition, Automation, AI Training, Tech Policy, AI Investment, AI Risks, AI Bias, Effective Altruism, AI Companions, Real Estate AI, Fan Experience AI, European Commission, European Parliament, CUDA, AMD GPUs, Sensitive Data, Cookie Rules, Innovation, Critical Thinking
