Artificial intelligence continues to integrate deeply across sectors, bringing both significant advances and new challenges. In business, AI copilots and agents such as Microsoft 365 Copilot and Salesforce's AI assistants are becoming standard in applications, creating complex data connections and operating at machine speed. This rapid integration calls for dynamic AI-SaaS security, as traditional methods struggle to track AI actions and prevent unauthorized data access. The rise of "Shadow AI," in which employees use unapproved generative AI tools like ChatGPT, further complicates security and compliance, pushing companies toward AI governance tools for better visibility and toward employee education.

Globally, investment in AI infrastructure is surging. Indian telecom companies are shifting focus to building AI-ready data centers and cloud infrastructure, planning investments exceeding ₹1 lakh crore over the next two to three years to boost enterprise revenue. Education is adapting as well: Universidad de Caldas in Manizales, Colombia, has opened the country's first Faculty of Artificial Intelligence and Engineering. Backed by more than US$14.2 million in joint investment, the initiative aims to train over 5,000 people in AI, with programs ranging from technical certificates to a Ph.D.

The rapid expansion of AI is not without its pitfalls, however. YouTube recently banned two major channels, Screen Culture and KH Studio, for fake AI-generated movie trailers that drew millions of views and even outranked official content; the channels violated YouTube's spam and misleading-metadata policies by failing to consistently label their videos as fan or concept trailers. In a more alarming incident, a Florida middle school went into lockdown after the AI security system ZeroEyes mistook a student's clarinet for a gun, underscoring the risk of false alarms and the unnecessary stress they cause.
Meanwhile, an Indian AI stock's astonishing 55,000 percent surge has stoked fears of a market bubble, prompting regulators to monitor the situation closely. Despite these challenges, AI is also being turned against real-world problems. Happy Returns, owned by UPS, is piloting a new AI tool to fight returns fraud, a problem estimated to cost retailers $76.5 billion. The system pairs human auditors with its Return Vision AI to identify fraudulent returns, which average around $261 each. In the gaming industry, Daniel Vavra, director of Kingdom Come Deliverance 2, defends AI's role, advocating its use to augment human abilities and make game development faster and more cost-effective. And as AI output grows harder to distinguish from reality, experts like Jeremy Carrasco (@showtoolsai on TikTok) are teaching the public to spot fake AI videos by looking for unnatural camera movements and other inconsistencies, underscoring the ongoing need for media literacy in the AI era.
Key Takeaways
- YouTube banned Screen Culture and KH Studio for creating fake AI movie trailers, violating spam and misleading-metadata policies.
- AI copilots such as Microsoft 365 Copilot and Salesforce's AI agents are integrating into business apps, necessitating new dynamic AI-SaaS security measures.
- "Shadow AI," where employees use unapproved generative AI tools like ChatGPT, poses significant security and compliance risks for businesses.
- A Florida middle school experienced a lockdown after an AI security system, ZeroEyes, mistook a student's clarinet for a gun.
- Indian telecom companies plan to invest over ₹1 lakh crore in AI-ready data centers and cloud infrastructure to boost enterprise revenue.
- Universidad de Caldas in Colombia opened the country's first Faculty of Artificial Intelligence and Engineering, backed by a US$14.2 million joint investment.
- An Indian AI stock surged 55,000 percent, raising concerns about a market bubble and prompting regulatory scrutiny.
- Happy Returns (UPS) is using an AI tool to combat returns fraud, a problem estimated to cost retailers $76.5 billion.
- Daniel Vavra, director of Kingdom Come Deliverance 2, advocates for AI's use in game development to augment human abilities and reduce costs.
- Experts like Jeremy Carrasco teach how to identify fake AI videos by looking for unnatural camera movements and inconsistencies.
YouTube bans AI channels making fake movie trailers
YouTube has shut down two major channels, Screen Culture and KH Studio, for using AI to create fake movie trailers. These channels, based in India and Georgia, gained millions of views by splicing official footage with AI images. YouTube terminated them for violating its spam and misleading-metadata policies after they stopped consistently labeling videos as "fan trailers." Screen Culture's founder, Nikhil P. Chaudhari, admitted his team exploited YouTube's algorithm with many versions of fake trailers for popular series like Harry Potter and Disney properties.
YouTube removes channels making fake AI movie trailers
YouTube has banned two popular channels, Screen Culture and KH Studio, for creating fake AI-generated movie trailers. These channels had over 2 million subscribers and produced videos for non-existent projects like "GTA: San Andreas (2025) Teaser Trailer." YouTube first demonetized them in early 2025, requiring disclaimers like "parody" or "concept trailer." However, the channels did not consistently use these labels, leading to their termination under YouTube's policies. Many of their fake trailers, especially for Disney properties like The Fantastic Four, even outranked official content in searches.
New security needed for AI tools in business apps
AI copilots and agents are now common in business apps like Microsoft 365 and Salesforce. These AI tools create complex data connections and operate at machine speed, often with high access privileges. Traditional security methods struggle to track AI actions, as they can blend with normal user activity or access sensitive data without clear audit trails. For example, Microsoft 365 Copilot might fetch documents a user should not see, leaving no trace. This highlights the need for dynamic AI-SaaS security, which uses real-time, adaptive policies to protect businesses as AI tools expand.
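To make the idea of "real-time, adaptive policies" concrete, here is a minimal sketch in Python. It is purely illustrative and not a real product's API: the agent names, sensitivity labels, and rules are hypothetical. The point is that each data request an AI agent makes is evaluated at access time against both the human principal's clearance and the declared task, instead of relying on static role permissions alone.

```python
# Illustrative sketch (hypothetical, not any vendor's real API): a dynamic
# policy check that evaluates an AI agent's data request in real time.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent: str           # e.g. "m365-copilot" (hypothetical identifier)
    user: str            # the human on whose behalf the agent acts
    resource_label: str  # sensitivity label on the requested document
    purpose: str         # declared task, e.g. "summarize-meeting"

def evaluate(req: AccessRequest, user_clearance: dict) -> str:
    """Return 'allow', 'deny', or 'audit' based on adaptive rules."""
    # Rule 1: the agent never gets more access than its human principal.
    if req.resource_label not in user_clearance.get(req.user, set()):
        return "deny"
    # Rule 2: sensitive data fetched for broad tasks is allowed but logged,
    # so there is always an audit trail for the AI's access.
    if req.resource_label == "confidential" and req.purpose == "summarize-meeting":
        return "audit"
    return "allow"

clearance = {
    "alice": {"public", "internal"},
    "bob": {"public", "internal", "confidential"},
}
# Copilot acting for alice cannot fetch a document alice herself cannot see.
print(evaluate(AccessRequest("m365-copilot", "alice", "confidential", "summarize-meeting"), clearance))  # deny
# For bob it is permitted, but the access is flagged for audit logging.
print(evaluate(AccessRequest("m365-copilot", "bob", "confidential", "summarize-meeting"), clearance))    # audit
```

The design choice worth noting is the third verdict, "audit": rather than a binary allow/deny, adaptive policies can permit an action while still producing the audit trail that, per the article, AI agents otherwise fail to leave.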
School AI mistakes clarinet for gun causing lockdown
A Florida middle school, Lawton Chiles Middle School, went into lockdown last week after an AI security system called ZeroEyes mistook a student's clarinet for a gun. Police rushed to the school expecting a shooter, but instead found a student dressed as a military character for a Christmas event. ZeroEyes cofounder Sam Alaimo stated the AI worked correctly, prioritizing safety. However, critics like Kenneth Trump argue these AI tools are unproven and false alarms cause unnecessary stress for students. The school principal, Melissa Laudani, sent a letter to parents, and the school plans to expand its use of ZeroEyes despite the incident.
Learn to spot fake AI videos online
Jeremy Carrasco, known as @showtoolsai on TikTok, teaches people how to spot AI-generated videos online. He explains that while AI models are improving, there are still clues to look for. For example, AI videos might show incorrect hospital equipment or unsafe climbing ropes. Carrasco also points out that camera movements in AI videos often seem unnatural because no real person is filming. Although TikTok has a policy requiring creators to label realistic AI content, Carrasco finds this policy unreliable, as detection often relies on descriptions rather than actual video analysis.
Indian telecom companies invest in AI data centers
India's telecom companies are shifting their primary focus from network expansion to building AI-ready data centers and cloud infrastructure, with plans to invest over ₹1 lakh crore in this area over the next two to three years. The move is aimed at substantially growing enterprise revenue, which the companies hope will eventually account for 40% of their total income.
Manage hidden AI use in your business
Businesses face a growing challenge called "Shadow AI," where employees use unapproved generative AI tools like ChatGPT for work. This creates security and compliance risks because these activities happen outside company oversight. Traditional security tools like DLP and SIEM cannot fully track AI interactions or the data shared with these services. To manage this, companies should focus on visibility by using AI governance tools to monitor prompt content and data flow. They also need to approve helpful AI uses, provide official tools, and educate employees on safe AI practices to reduce risks and maximize benefits.
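The "visibility" step above — monitoring prompt content and data flow — can be sketched as a simple prompt gateway that scans outbound text before it reaches an external generative-AI service. This is a hedged illustration, not any real DLP product's logic; the pattern names and regexes are hypothetical and far simpler than production detectors.

```python
# Illustrative sketch: a minimal "prompt gateway" that scans text bound for an
# external AI service and reports which sensitive-data patterns it contains.
# Patterns are hypothetical examples, not from any specific governance product.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

hits = scan_prompt(
    "Summarize this: contact jane.doe@example.com, key sk-abc123def456ghi789"
)
print(hits)  # flags the email address and the API-key-shaped token
```

A gateway like this could then block the prompt, redact the matches, or simply log them — the article's point being that visibility must come first, since traditional DLP and SIEM tools do not see these AI interactions at all.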
Colombia opens first AI university faculty in Caldas
Universidad de Caldas in Manizales, Colombia, has opened the country's first Faculty of Artificial Intelligence and Engineering. This project, supported by MinTIC, aims to train over 5,000 people in AI and related fields. The faculty offers a full range of programs, from technical certificates to a Ph.D. in AI, with six programs created and three already registered. Over 155 students are currently enrolled, and the joint investment with Universidad de Caldas exceeds US$14.2 million. This initiative seeks to build a strong AI talent pool and connect training, labs, and research across the nation.
Indian AI stock surges 55,000 percent, sparking bubble fears
An AI stock in India has seen an incredible 55,000 percent increase, leading to concerns about a market bubble. This massive rally is now showing signs of weakness, and financial regulators are closely examining the situation. Exchanges and chipmakers across Asia have started warning investors about the risks of chasing these rapidly rising AI-related trades. The surge and subsequent fears were reported on December 18, 2025.
Happy Returns uses AI to fight fraud
Happy Returns, a company owned by UPS, is testing a new AI tool to fight returns fraud. This tool aims to help retailers combat a $76.5 billion problem, especially during the holiday season. The system combines human auditors with its Return Vision AI tool. When returns arrive at hubs in California, Pennsylvania, and Mississippi, human auditors photograph flagged packages. The AI then compares these photos to information about the expected products, with human teams making the final decision. The average value of each confirmed fraudulent return is about $261, and the company notes fraudsters are becoming more skilled.
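The workflow described above — the AI compares photos to the expected product, with human teams making the final decision — is a classic human-in-the-loop design. Here is a minimal sketch of that decision structure; the match score, threshold, and function names are hypothetical, as the internals of Return Vision are not public.

```python
# Illustrative human-in-the-loop fraud check, loosely following the workflow
# described above. Score, threshold, and return values are hypothetical.
def review_return(ai_match_score: float, auditor_confirms: bool,
                  threshold: float = 0.8) -> str:
    """The AI flags likely mismatches; a human auditor makes the final call."""
    if ai_match_score >= threshold:
        # Photo matches the expected product closely enough: no human needed.
        return "accept"
    # Below threshold the AI only *flags* the return; the auditor decides.
    return "accept" if auditor_confirms else "reject-fraud"

print(review_return(0.95, auditor_confirms=False))  # accept
print(review_return(0.40, auditor_confirms=False))  # reject-fraud
```

The key property is that the AI never rejects a return on its own: low scores only route the package to a human, which matches the article's description of auditors photographing flagged packages and making the final decision.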
Kingdom Come Deliverance 2 director defends AI use
Daniel Vavra, director of Kingdom Come Deliverance 2, has spoken out about the debate surrounding AI in game development. He called the strong negative reaction to Larian Studios using AI a "s***storm" and urged the industry to accept the new reality. Vavra believes AI can make games faster and cheaper to produce by assisting with tasks like creating assets and animating characters, and he emphasized that AI should augment human abilities rather than replace them, freeing developers to focus on more creative work. Kingdom Come Deliverance 2 launched in February 2025.
Sources
- YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers Watched By Millions
- YouTube bans two popular channels that created fake AI movie trailers
- The Case for Dynamic AI-SaaS Security as Copilots Scale
- School security AI flagged clarinet as a gun. Exec says it wasn’t an error.
- Spotting AI in your feeds
- Telcos go big on AI-ready data centres
- How to bring shadow AI out of the dark
- Caldas Opens Colombia’s First AI Faculty With Programs Up to a Doctorate
- World-beating 55,000% surge in India AI stock fuels bubble fears
- UPS-Owned Happy Returns Tests AI to Combat Fraud
- Kingdom Come Deliverance 2 director defends Larian over AI "s***storm," says "it's time to face reality"