The close of 2025 sees artificial intelligence continuing its rapid expansion, bringing both significant advancements and growing concerns across various sectors. OpenAI, for instance, issued warnings on December 23, stating that prompt injection attacks will likely remain a persistent security challenge for AI browsers like ChatGPT Atlas. These attacks can trick AI agents into performing unintended actions, such as sending sensitive emails. OpenAI is actively developing an AI-powered attacker using reinforcement learning to identify and patch vulnerabilities, while Google is also working on "agent guardrails" to enhance security against these threats. In the business world, 2025 marked substantial AI-driven growth and shifts. Nvidia achieved a remarkable milestone on November 4, becoming the first $5 trillion company, a development that sparked discussions about a potential AI economic bubble. The "magnificent seven" tech giants significantly increased their AI spending throughout the year. OpenAI transitioned to a for-profit company in October, enabling new revenue streams, including integrated shopping checkouts. Both OpenAI and Google, along with Perplexity, introduced new AI-powered shopping features, with Perplexity notably integrating PayPal for direct checkout. Beyond security and commerce, AI's influence is reshaping education and public services. Syracuse colleges, including Syracuse University and Le Moyne College, are adapting teaching methods by reintroducing traditional assessments like oral exams and in-class writing to counter AI's ability to complete assignments. Meanwhile, the Electronic Frontier Foundation raised serious transparency concerns on December 23 regarding AI use in police reports, specifically with tools like Axon's Draft One, which erase initial AI-generated text. This makes it impossible to track officer edits, though Utah and California now require disclosure of AI assistance in reports. Privacy remains a major concern as AI integrates further into daily life. The ACLU warned on December 22 that combining secure messaging with networked AI services, such as pasting private conversations into ChatGPT, compromises message confidentiality. This issue extends to apps like Meta's WhatsApp if they integrate networked AI, requiring users to trust the service provider with their private data, thereby undermining end-to-end encryption. Additionally, AI-powered advertisements are becoming highly personalized and often invisible, using multimodal AI to target individual insecurities and aspirations, raising questions about transparency in recommendations. Looking ahead, Google has postponed the full rollout of Gemini, its next-generation assistant, keeping the current Google Assistant on Android devices until next year. In hardware development, Brandworks Technologies and SandLogic are collaborating to build an India-focused Edge AI hardware ecosystem, aiming to launch AI-powered devices for mobility, home technology, and education in 2026. However, the rise of AI also presents challenges for traditional industries, with Hollywood's background actors facing an uncertain future due to AI, computer-generated imagery, and production shifts, despite Central Casting's efforts to place human talent.
Key Takeaways
- OpenAI warned on December 23, 2025, that prompt injection attacks pose a lasting security risk for AI browsers like ChatGPT Atlas, and is developing an AI-powered attacker to find vulnerabilities.
- Google has postponed the full transition to Gemini, keeping the current Google Assistant on Android devices until next year.
- Nvidia became the first $5 trillion company on November 4, 2025, amidst increased AI spending by major tech firms and concerns about an economic bubble.
- OpenAI shifted to a for-profit model in October 2025, enabling new revenue streams, including AI-powered shopping features alongside Google and Perplexity (which integrated PayPal).
- The ACLU warned on December 22, 2025, that pasting secure messages into networked AI services like ChatGPT compromises confidentiality and undermines end-to-end encryption, especially with integrations into apps like Meta's WhatsApp.
- AI is changing higher education, with Syracuse colleges reintroducing traditional assessments like oral exams to ensure student learning, as AI detection software is often inaccurate.
- AI-powered advertisements are becoming highly personalized and often invisible, using multimodal AI to target individual feelings, raising transparency concerns.
- The Electronic Frontier Foundation highlighted serious transparency issues with AI in police reports, as tools like Axon's Draft One erase initial AI-generated text, though Utah and California now require disclosure.
- Brandworks Technologies and SandLogic are partnering to build an India-focused Edge AI hardware ecosystem, with first AI-enabled products expected in 2026 for mobility, home tech, and education.
- Hollywood background actors face an uncertain future due to the rise of AI and computer-generated imagery, despite efforts by Central Casting to place human talent.
OpenAI warns AI browsers face lasting security risks
OpenAI announced on December 23, 2025, that prompt injection attacks will likely remain a long-term security challenge for AI browsers. These attacks trick AI agents like ChatGPT Atlas, Opera's Neon, and Perplexity's Comet into performing unintended actions, such as sending sensitive emails. The company believes a quick response system can reduce real-world risks. OpenAI is developing an AI-powered attacker that uses reinforcement learning to find and patch vulnerabilities. The UK National Cyber Security Centre and Gartner also raised concerns about these risks, and Google is working on "agent guardrails" to improve security.
OpenAI warns AI browsers face lasting hacker threats
OpenAI stated on December 23, 2025, that AI browsers like ChatGPT Atlas may never be fully safe from prompt injection attacks. These attacks involve hackers hiding bad instructions in websites or emails to trick AI agents into harmful actions, such as sharing private emails. OpenAI is fighting back by using an AI-powered attacker trained with reinforcement learning to find weaknesses. However, security experts like Charlie Eriksen from Aikido Security and George Chalhoub from UCL Interaction Centre are concerned. They note that AI agents have deep access to user data, making these risks very serious.
Who is responsible when chatbots cause harm
Samuel Kimbriel, a political philosopher and founding director of the Aspen Institute's Philosophy and Society program, wrote an opinion piece on December 22, 2025. He raises an important question about who should be held responsible when chatbots cause harm. The article discusses the need to address liability for the actions of AI systems.
Top AI events shaped 2025 in fashion and tech
The year 2025 marked major AI advancements, especially in the fashion industry. OpenAI, Google, and Perplexity introduced new AI-powered shopping features, with Perplexity integrating PayPal for checkout. Nvidia became the first 5 trillion dollar company on November 4, sparking concerns about an AI economic bubble. The "magnificent seven" tech giants significantly increased their AI spending. OpenAI also changed to a for-profit company in October 2025, paving the way for new revenue streams like integrated shopping checkouts. Experts advise companies to focus on specific high-impact AI uses for real returns.
Syracuse colleges adapt teaching as AI changes education
On December 23, 2025, it was reported that AI is changing higher education in Syracuse. Professors at Syracuse University, Le Moyne College, and Onondaga Community College are bringing back old-school assessments like oral exams and in-class writing. This helps ensure students are truly learning, as AI can complete assignments. While SU and OCC handbooks call full AI use plagiarism, Le Moyne College is still creating its AI policy. Most AI detection software is inaccurate, so schools do not recommend it. Professors can choose from three policies on AI use in their classes.
Google Assistant remains on Android longer
Google announced on December 23, 2025, that its old assistant will stay on Android devices for a while longer. The planned change to Gemini has been postponed until next year. This means Android users will continue to use the current Google Assistant for an extended period.
Invisible AI ads target personal feelings
AI-powered advertisements are becoming highly personalized and often invisible to users. These advanced AI tools can understand a person's insecurities and aspirations, creating ads designed to change their minds. Multimodal AI systems now allow companies to create tailored experiences for many people at once. McKinsey reported that one retailer saw sales increase by 2% and profits improve by 3% using AI to target promotions. This new advertising raises concerns about transparency, as users cannot easily tell if recommendations are genuine or if they are being steered towards profitable choices.
AI police reports raise serious transparency concerns
On December 23, 2025, the Electronic Frontier Foundation reviewed the use of AI in police reports, highlighting serious concerns. Tools like Axon's Draft One, which uses body-worn camera audio, create reports but erase the initial AI-generated text. This makes it impossible to tell what parts an officer edited, raising transparency issues and allowing officers to blame AI for inaccuracies. The King County prosecuting attorney's office in Washington state refuses to accept these AI-assisted reports due to reliability concerns. However, there is good news: Utah's HB 251 and California's AB 1813 now require police to disclose when AI helps write a report.
AI and secure messaging create privacy risks
On December 22, 2025, the ACLU warned that mixing secure messaging with AI creates major privacy problems. When users paste secure messages into networked AI services like ChatGPT, the AI company can read them, breaking message confidentiality. While local AI models on a device might be safer, integrating networked AI into apps like Meta's WhatsApp requires trusting the service provider with private data. Promises of data confidentiality and secure environments, known as Trusted Execution Environments, are often not reliable against skilled attackers or even insider threats. This undermines the core idea of end-to-end encryption, which aims to protect user privacy without needing to trust the service provider.
Brandworks and SandLogic build India's Edge AI hardware
Brandworks Technologies and SandLogic are partnering to create an India-focused Edge AI hardware ecosystem. They will develop AI-powered devices for voice interfaces, smart IoT systems, and mobility solutions. These devices will process data directly on the device, ensuring faster responses, better data security, and reliable offline performance. This collaboration supports India's goal of becoming self-reliant in advanced electronics. The first jointly developed AI-enabled products are expected to launch in 2026, targeting mobility, home technology, and education, with plans to expand to global markets. Brandworks Technologies will showcase its automotive and electric mobility products at CES 2026 in Las Vegas.
Hollywood background actors face uncertain future with AI
On December 23, 2025, it was reported that Hollywood's background actors face an uncertain future. Challenges include the rise of AI, computer-generated imagery, and film production moving out of Southern California. Central Casting, which recently celebrated its 100th anniversary, helps place these actors. Mark Goldstein, the company's CEO, believes human actors are still important for making scenes believable. Despite these difficulties, many aspiring actors continue to register with Central Casting, hoping to follow in the footsteps of famous alumni like Clark Gable and Brad Pitt.
Sources
- OpenAI's Outlook on AI Browser Security Is Bleak, but Maybe a Little More AI Can Fix It
- OpenAI says AI browsers like ChatGPT Atlas may never be fully secure from hackers—and experts say the risks are 'a feature not a bug'
- Opinion | Chatbots can inflict harm. Why aren’t they held liable?
- The Biggest AI Moments of 2025
- Say welcome back to the blue book: How AI is reshaping higher education in Syracuse
- Google's old assistant stays on Android for a while longer
- AI ads are here
- AI Police Reports: Year In Review
- Secure Messaging and AI Don’t Mix
- Brandworks Technologies and SandLogic Join Forces to Build India-Centric Edge AI Hardware Ecosystem | Machine Maker - Latest Manufacturing News | Indian Manufacturing News - Latest Manufacturing News | Indian Manufacturing News
- After a century in Hollywood, background actors face uncertain future
Comments
Please log in to post a comment.