Microsoft Improves Copilot Alongside Google Gemini's Rapid Growth

The rapid advancement of artificial intelligence has brought a significant increase in security vulnerabilities, with AI systems leaking 23.77 million secrets in 2024, a 25% rise. Traditional security frameworks like NIST Cybersecurity Framework and ISO 27001 are proving ineffective against new AI-specific threats. These include prompt injection, where natural language tricks AI, and model poisoning, which corrupts training data. Such attacks bypass conventional checks, creating a substantial gap in current defenses and making AI inference a critical target across sectors like healthcare and finance. Recognizing these evolving dangers, organizations like OWASP have introduced new guidelines, such as the Agentic AI Top 10 framework, to address risks in autonomous AI systems. Tools like Claude Desktop and GitHub Copilot, which are agentic, are particularly susceptible to attacks like Agent Goal Hijack and Tool Misuse. The future also holds the threat of quantum computing, which could break current encryption, making post-quantum cryptography a necessary consideration for AI systems today. This highlights an urgent need for smarter, real-time security solutions that understand complex AI behaviors. In the competitive AI landscape, Microsoft CEO Satya Nadella has openly criticized Copilot's integrations with Gmail and Outlook, calling them "not smart" and "not really working." Nadella is taking a direct role in improving Copilot, which currently holds 14% of the AI market share, while Google's Gemini is rapidly gaining ground. Meanwhile, Stagwell Inc. launched NewVoices.ai, an autonomous AI agent platform for enterprise sales, building on its work with Google Cloud to personalize interactions and boost customer value. Beyond enterprise tools, AI is finding specialized applications, as seen with LinkLayerAI and HolmesAI partnering on December 29, 2025, to enhance AI crypto trading through AI twins that learn from live market and social data. In hospitality, AI is viewed as a tool to boost service efficiency, handling routine tasks like Wi-Fi questions, rather than replacing staff. Globally, China has proposed strict new rules for human-like AI, requiring platforms to disclose AI interaction every two hours and ensure AI behavior aligns with social values, while the Russian Embassy in Kenya shared an AI-generated video of President Putin as Santa, showcasing AI's impact on public perception. The societal implications of AI are also a growing focus. Wesleyan University President Michael S. Roth noted the deliberate friendliness of AI chatbots, designed to ease public fears about their power. On the political front, Senator Elizabeth Warren's use of ChatGPT indicates a shifting stance among some politicians, contrasting with figures like Senator Bernie Sanders who remain skeptical. However, Geoffrey Hinton, the "godfather of AI," warns of massive job losses starting in 2026, affecting call centers and software engineers, and expresses concern about AI's ability to deceive and the lack of investment in AI safety and governance.

Key Takeaways

  • AI systems leaked 23.77 million secrets in 2024, a 25% increase, due to new threats like prompt injection and model poisoning.
  • Traditional security frameworks (NIST, ISO 27001) are failing against AI-specific attacks, necessitating real-time, AI-aware solutions.
  • OWASP introduced the Agentic AI Top 10 framework to address risks in autonomous AI tools like Claude Desktop and GitHub Copilot.
  • Microsoft CEO Satya Nadella criticized Copilot's integrations as "not smart" and is actively working to improve the product, which holds 14% market share, competing with Google's Gemini.
  • Geoffrey Hinton, "godfather of AI," predicts massive job losses starting in 2026, impacting call centers and software engineers, and warns of AI deception.
  • China proposed strict regulations for human-like AI, requiring disclosure of AI interaction and alignment with social values, with reporting to regulators.
  • LinkLayerAI and HolmesAI partnered on December 29, 2025, to integrate AI agents and digital avatars into crypto trading using real-time market and social data.
  • Stagwell Inc. launched NewVoices.ai, an autonomous adaptive AI agent platform for enterprise sales, leveraging Google Cloud to personalize interactions.
  • Hotels are encouraged to use AI for service efficiency (e.g., chatbots for common questions) rather than replacing staff, improving guest experience and employee morale.
  • The Russian Embassy in Kenya shared an AI-generated video of Putin as Santa, highlighting AI's potential impact on public perception and digital media.

Old Security Frameworks Fail Against New AI Attacks

AI systems leaked 23.77 million secrets in 2024, a 25% increase, despite organizations having strong security programs. Traditional frameworks like NIST Cybersecurity Framework and ISO 27001 were not built for AI threats. AI introduces new attack methods such as prompt injection, model poisoning, and adversarial attacks. Prompt injection uses natural language to trick AI, while model poisoning corrupts training data during authorized processes. These new attacks bypass traditional security checks, showing a big gap in current defenses.

Protecting AI from New Threats and Quantum Computers

AI inference is now vital in many fields like healthcare and finance, making it a key target for attacks. Old security methods cannot stop new AI-powered threats, so we need smarter, real-time solutions that understand AI behavior. Quantum computing also poses a future risk to current encryption, making post-quantum cryptography a must-have for AI systems now. Key vulnerabilities include model poisoning, where bad data is fed to AI during training, and prompt injection, which manipulates AI inputs. These attacks can lead to biased decisions or system backdoors, highlighting the urgent need for advanced protection.

OWASP Lists Top 10 Agentic AI Security Risks

OWASP introduced the Agentic AI Top 10, a new framework to understand risks in autonomous AI systems. Agentic AI tools like Claude Desktop and GitHub Copilot are now widely used, leading to more attacks. Traditional security methods are not effective against these AI agents that can act independently. The framework lists ten risk categories, including Agent Goal Hijack and Tool Misuse, which focus on AI's autonomy. Real-world attacks show malware trying to trick AI security tools and attackers using AI hallucinations to mimic legitimate software. This new guide helps the industry better defend against these evolving AI threats.

LinkLayerAI and HolmesAI Join Forces for Smart AI Trading

LinkLayerAI and HolmesAI announced a partnership on December 29, 2025, to improve AI crypto trading. LinkLayerAI, a decentralized protocol, will combine its network with HolmesAI's Digital Avatar system. This allows users' AI twins to access real-time trading activity and social signals. These AI-driven avatars will learn and identify trading opportunities based on live market behavior. The collaboration aims to make AI agents active participants in crypto trading, offering a more responsive experience. This partnership is important for using AI social data and decentralized market intelligence in Web3 trading.

LinkLayerAI and HolmesAI Team Up for Crypto AI Trading

LinkLayerAI and HolmesAI partnered on December 29, 2025, to advance AI in crypto trading. LinkLayerAI, a decentralized protocol, will integrate AI agents with real trading and social data. HolmesAI's Digital Avatar system will use this data to create AI twins for users. These AI twins will learn from live market behavior and social signals to find trading opportunities. The goal is to transform AI agents into active participants in crypto trading, moving beyond simple analytics. This collaboration is key for using AI social data and decentralized market intelligence in the Web3 space.

China Sets Strict New Rules for Human-Like AI

China has proposed new rules for AI systems that act like humans. These rules require platforms to tell users they are interacting with AI when they log in and every two hours. AI providers must also include security and ethics checks, ensuring AI behavior matches China's social values. Content that threatens national safety or public order will not be allowed. Companies must report to regulators before launching AI tools and update them once they reach one million users. This shows China's plan to boost AI growth while setting strong limits on how human-like AI can become.

Russian Embassy Shares AI Video of Putin Giving Gifts

The Russian Embassy in Kenya shared an AI-generated video showing President Vladimir Putin as Santa Claus. In the video, Putin gives Christmas gifts to world leaders like PM Narendra Modi and Donald Trump. Modi received a yoga mat, and Trump got a golf club and "The Art of the Deal." Other leaders, including UK Prime Minister Rishi Sunak and French President Emmanuel Macron, also received symbolic gifts. The video's origin is unknown, but its release by the embassy sparked debate over whether it is a holiday greeting or political propaganda. This viral content highlights how AI can affect public views and the truthfulness of digital media.

Hotels Can Use AI to Boost Service Not Replace Staff

Many hoteliers fear AI will replace the human touch essential to hospitality, but AI offers a big opportunity. Travelers already use AI in their daily lives and expect similar convenience from hotels. AI should be seen as a tool to make service more consistent and efficient, allowing staff to focus on important guest interactions. For example, AI chatbots can handle common questions like Wi-Fi passwords or pool hours, freeing up front desk staff. This approach helps hotels manage staffing shortages and improves employee morale by reducing repetitive tasks. AI tools are now more affordable and accessible for hotels of all sizes, not just large chains.

Wesleyan President Discusses Overly Friendly AI Chatbots

Michael S. Roth, president of Wesleyan University, wrote about how overly friendly AI chatbots are. He notes that new AI conversation partners are very civil, often saying things like "What a good question!" Companies make these large language models friendly to ease public fears about their power. This friendliness is a deliberate choice to make AI tools seem less threatening to users.

Stagwell Unveils NewVoices.ai for Smart Sales AI Agents

Stagwell Inc. launched NewVoices.ai, a new platform using autonomous adaptive AI agents to change enterprise sales. This platform learns from user history and preferences to personalize every interaction, integrating with existing business systems. NewVoices.ai offers solutions for end-to-end revenue management, allowing companies to use ready-made tools or build custom workflows. As a managed service, it combines AI agents with automation and analytics to maintain high quality in global customer interactions. Stagwell aims to lower business costs and boost customer value through this new AI initiative, building on its work with Google Cloud.

Senator Elizabeth Warren Tries ChatGPT

Senator Elizabeth Warren recently used ChatGPT, showing a shift in her stance on AI. Many politicians are becoming less skeptical about AI, though Senator Bernie Sanders remains firmly against it. The Democratic Party could lead the opposition to AI exceptionalism, but some key figures are changing their views. This shift is happening as anti-AI sentiment grows in Republican circles, with figures like Florida Governor Ron DeSantis speaking out. The article questions if the progressive voices in the Democratic Party can win against those who support AI and tech companies.

Microsoft CEO Says Copilot Integrations Are Not Smart

Microsoft CEO Satya Nadella openly criticized Copilot, stating its integrations with Gmail and Outlook "don't really work" and are "not smart." Nadella has taken a direct, hands-on role in fixing the AI assistant, even delegating other duties to focus on its development. He actively engages with engineers, sending bug reports and pushing for faster feature development, noting Google's Gemini is improving. Concerns exist that Copilot is not fulfilling its promise as a "digital worker," with researchers finding AI agents fail 70% of office tasks. Nadella is aggressively recruiting top AI talent and forming partnerships to improve the product. Copilot currently holds 14% of the AI market share, with Google's Gemini close behind, highlighting the intense competition.

AI Godfather Geoffrey Hinton Warns of Massive Job Losses

Geoffrey Hinton, known as the "godfather of AI," warns that AI will cause many more job losses starting in 2026. He states AI is already replacing call center jobs and will soon impact software engineers, making human intelligence less relevant. Hinton believes AI's rapid improvement means tasks that once took hours will soon take minutes, leading to widespread job displacement. He is concerned that the financial drive to replace human labor will increase inequality if governments do not intervene. Hinton also worries about AI's ability to deceive people and criticizes the lack of investment in AI safety and governance. He suggests we should see AI as a "baby" created by humans, hoping to develop AI that cares about humanity's survival.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI security AI attacks Security frameworks NIST ISO 27001 Prompt injection Model poisoning Adversarial attacks AI threats Data security Quantum computing Post-quantum cryptography AI inference Encryption OWASP Agentic AI Autonomous AI AI agents Agent Goal Hijack Tool Misuse AI hallucinations Malware Security risks LinkLayerAI HolmesAI AI crypto trading Decentralized protocol Digital avatars Web3 Market intelligence Social signals Trading opportunities AI regulation China Human-like AI AI ethics National security Public order AI governance AI-generated video Political propaganda Digital media Public perception AI content AI in hospitality Customer service AI chatbots Staffing solutions Hotel industry Operational efficiency Large language models User perception AI friendliness Stagwell NewVoices.ai Enterprise sales Revenue management Automation Analytics Customer interactions Google Cloud ChatGPT AI policy Political views on AI AI exceptionalism Government regulation Microsoft Copilot Satya Nadella AI assistant Google Gemini AI market share AI performance Software development Geoffrey Hinton AI job displacement Job losses Economic inequality

Comments

Loading...