British actors, members of the Equity union, recently voted overwhelmingly to refuse digital scanning on film and TV sets. Over 99% of members who participated are prepared to take this stance, signaling serious concerns about artificial intelligence. The union fears studios could use actors' digital data without permission to train AI models. Equity plans to leverage this strong mandate in January negotiations with Pact, the trade body for producers, aiming to secure robust AI protections similar to those in the SAG-AFTRA strike deal in the US. General Secretary Paul W Fleming emphasized the workforce's willingness to disrupt production for better terms.
In other AI developments, OpenAI released GPT-5.2-Codex on December 18, 2025, its most advanced agentic coding model. The new model is optimized for complex software engineering and defensive cybersecurity tasks, with improvements in long-context understanding and Windows performance. Paid ChatGPT users can access it now, with API access coming soon. Meanwhile, the AI industry is flexing its political muscle: a new $100 million AI-industry super PAC, "Leading the Future," is actively targeting New York assemblymember Alex Bores, an advocate of state-level AI regulation.
The economic impact of AI is also a major theme. European Central Bank President Christine Lagarde noted that AI investment is driving a significant shift in the euro area economy, influencing the ECB's decision to keep interest rates unchanged. New York, for instance, faces pressure to invest heavily in its electric grid to support the vast power demands of AI development and data centers, as highlighted by Justin Wilcox of Upstate United. Globally, Chinese chipmakers like MetaX and Moore Threads are challenging Nvidia, securing successful IPOs as China pushes for a self-sufficient semiconductor industry amid US export restrictions.
Cybersecurity remains a critical concern across sectors. The financial services industry, in particular, must embrace AI for stronger defenses against increasingly sophisticated, AI-driven attacks like deepfakes and targeted phishing. Siroui Mushegian, CIO of Barracuda, also advises small to medium-sized businesses to adopt AI security measures, emphasizing shared responsibility and employee training. To support responsible AI development, Xiangyi Li founded BenchFlow, a platform for transparent and standardized AI model testing. BenchFlow, which secured over $1 million in seed funding, aims to help researchers evaluate AI systems consistently and promote safety.
Finally, the philosophical implications of AI continue to be debated. Dr. Tom McClelland, a University of Cambridge philosopher, argues that humanity might never definitively know whether artificial intelligence achieves consciousness. He suggests our current understanding of consciousness is too limited to create a valid test, advocating for agnosticism on the matter. McClelland also emphasizes that ethical considerations should prioritize "sentience," which encompasses feelings, rather than just basic consciousness.
Key Takeaways
- British actors, members of the Equity union, voted overwhelmingly (over 99%) to refuse digital scanning on film and TV sets, seeking stronger AI protections in contracts.
- OpenAI launched GPT-5.2-Codex on December 18, 2025, its most advanced agentic coding model, optimized for software engineering and defensive cybersecurity.
- Chinese chipmakers MetaX and Moore Threads had successful IPOs, challenging Nvidia as China aims for a self-sufficient AI processor industry amid US export restrictions.
- A new $100 million AI-industry super PAC, "Leading the Future," is targeting New York assemblymember Alex Bores due to his advocacy for state-level AI regulation.
- The European Central Bank (ECB) noted that AI investment is driving a major shift in the euro area economy, influencing its decision to keep interest rates unchanged.
- New York needs significant investment in its electric grid to support the high power demands of AI development and data centers.
- The financial services industry must integrate AI for robust cybersecurity against advanced, AI-driven attacks like deepfakes and targeted phishing.
- Small to medium-sized businesses can enhance AI security through shared responsibility, employee training, and implementing strong access controls like multi-factor authentication.
- Xiangyi Li founded BenchFlow to provide transparent and standardized AI model testing, securing over $1 million in seed funding to promote responsible AI development.
- A University of Cambridge philosopher suggests we may never know if AI becomes conscious, advocating for agnosticism and focusing ethical concerns on "sentience."
British Actors Refuse Digital Scans Amid AI Dispute
British actors, members of the Equity union, voted to refuse digital scanning on film and TV sets. This indicative ballot shows their strong desire for better AI protections in their contracts. Equity fears that actors' digital data could be used without their permission to train AI models. The union will use this vote as leverage in upcoming negotiations with Pact, a trade body for producers, in January. They aim to secure adequate AI protections that build on the SAG-AFTRA strike deal in the USA.
Actors Overwhelmingly Vote Against Digital Scans for AI
UK actors have voted overwhelmingly to refuse digital scanning on set, signaling their serious concerns about artificial intelligence. Equity, the largest acting union in the UK, announced that over 99% of members who voted are prepared to refuse scans. General Secretary Paul W Fleming stated this shows the workforce is willing to disrupt production for better terms. The vote follows 18 months of talks with Pact, where the use of data to train AI systems remains a key disagreement. Equity warns a statutory ballot for industrial action could be the next step if negotiations fail in January.
UK Actors Push Back Against AI With Scan Refusal Vote
UK actors have voted to refuse digital scanning on set, marking a significant pushback against the use of artificial intelligence in the arts. The performing arts union Equity reported that 99% of members polled support this action. This indicative ballot demonstrates strong opposition to their likeness being used by AI without consent. Equity General Secretary Paul Fleming highlighted that this shows actors are willing to disrupt productions unless new protections are secured. Actors like Olivia Williams have raised concerns about studios having too much control over scanned data.
OpenAI Launches GPT-5.2-Codex for Advanced Coding
OpenAI released GPT-5.2-Codex, its most advanced agentic coding model, on December 18, 2025. This new model is optimized for complex software engineering and defensive cybersecurity tasks. It offers improvements like better long-context understanding, reliable tool calling, and stronger performance in Windows environments. GPT-5.2-Codex also has enhanced cybersecurity capabilities, building on previous models that have already helped find vulnerabilities. The model is available now for paid ChatGPT users, with API access coming soon, and OpenAI is carefully piloting its use for defensive cybersecurity professionals.
OpenAI Unveils Powerful GPT-5.2-Codex Model
OpenAI introduced GPT-5.2-Codex on December 18, 2025, calling it their most advanced agentic coding model. This new AI is designed for complex software engineering and defensive cybersecurity. It improves on previous models with better long-term task handling, large code changes, and performance in Windows. GPT-5.2-Codex also features stronger cybersecurity abilities, though it does not yet meet a 'High' cyber capability level. Paid ChatGPT users can access it now, and API access will follow in the coming weeks.
Chinese Chipmakers Challenge Nvidia With New AI Processors
Chinese chipmakers MetaX and Moore Threads recently had successful IPOs, showing strong investor interest in local AI chip development. China aims to create its own advanced processors to rival Nvidia, especially due to US export restrictions on powerful chips. While Chinese companies like Huawei, Alibaba, and Baidu have not yet matched Nvidia's top chips, they are making progress. Huawei uses a strategy of building large chip clusters, and Baidu's Kunlunxin chip is seen as a strong contender. This push highlights China's goal for a self-sufficient semiconductor industry.
Philosopher Says AI Consciousness May Remain a Mystery
A University of Cambridge philosopher, Dr. Tom McClelland, argues that we might never know if artificial intelligence becomes conscious. He believes our current understanding of consciousness is too limited to create a valid test for AI awareness. McClelland suggests that the only reasonable position is agnosticism, meaning we simply cannot tell. He also explains that ethical concerns should focus on "sentience," which includes feelings, rather than just basic consciousness. The philosopher's study, published in Mind and Language, critiques both sides of the AI consciousness debate, finding neither has enough evidence.
SMBs Can Boost AI Security Even With Few Resources
Small to medium-sized businesses face growing cybersecurity threats from AI, including advanced phishing and deepfakes. Siroui Mushegian, CIO of Barracuda, explains that SMBs can still build strong AI security despite limited resources. Key steps include creating a culture of shared responsibility and clear security policies. Regular security training for all employees and implementing strong access controls like multi-factor authentication are also crucial. SMBs should also use AI tools to enhance their defenses and establish clear rules for employees using AI applications.
AI Industry Targets New York Politician Alex Bores
The artificial intelligence industry is actively trying to stop Alex Bores, a New York assemblymember running for Congress. Bores has been a leading voice for state-level AI regulation, which the industry sees as a major threat to its business. A new $100 million AI-industry super PAC, called Leading the Future, is specifically targeting his campaign. The fight highlights the growing political impact of AI, touching on issues like data centers, electricity use, and job displacement. Bores, who has a tech background, has discussed his views on AI and regulation.
New York Must Boost Power Grid for AI Future
Justin Wilcox of Upstate United argues that New York must invest heavily in its electric grid to secure its future in artificial intelligence. AI development and data centers require a vast amount of electricity and strong infrastructure. New York's current grid faces challenges with closing power plants and rising demand, risking the state falling behind in the global AI race. Wilcox emphasizes that politicians must support grid investments and streamline projects. If New York fails to strengthen its power infrastructure, it will miss out on significant economic growth and job opportunities from the AI industry.
Financial Services Needs AI for Strong Cybersecurity
The financial services industry must embrace artificial intelligence for strong cybersecurity, according to a report. AI offers great potential, but its value depends on understanding AI's role in security and building security into AI tools from the start. Financial institutions face increasing risks from faster, AI-driven attacks like deepfakes and targeted phishing. Traditional security centers are often overwhelmed, but AI-driven security operations centers can drastically improve detection and response times. For example, Palo Alto Networks' SOC uses AI to analyze 90 billion events daily, reducing them to a single actionable incident.
ECB Says AI Fuels Investment But Rate Path Unset
European Central Bank President Christine Lagarde announced that the ECB kept interest rates unchanged due to ongoing uncertainty. Lagarde noted that artificial intelligence investment is driving a major shift in the euro area economy. She stressed that future interest rate decisions will depend entirely on new data, with all options remaining open. The ECB expects euro area growth to be 1.4% in 2025, with inflation easing below target in 2026 and 2027. Investment in computing capacity, telecommunications, and software is growing, with AI playing a central role.
Xiangyi Li Founds BenchFlow for Clear AI Testing
Xiangyi Li, a young founder, created BenchFlow to bring transparency and standardized testing to artificial intelligence models. He realized that evaluating AI systems was slow and inconsistent, making it hard to compare different models and understand their safety or performance. BenchFlow offers a platform where researchers can submit models through an API for automated, reproducible evaluations across many benchmarks. Li's open-source approach helped BenchFlow gain momentum and secure over $1 million in seed funding from notable investors. BenchFlow aims to help researchers navigate complex machine learning systems and promote responsible AI development.
Sources
- British Actors Vote To Refuse On-Set Digital Scans Amid Growing AI Dispute
- Actors vote for strike action over AI concerns
- Actors vote to refuse to be digitally scanned in pushback against AI
- Introducing GPT-5.2-Codex
- MetaX and Moore Threads' IPOs underscore Chinese chipmakers' growing challenge to Nvidia
- We may never be able to tell if AI becomes conscious, argues philosopher
- How SMBs Can Build AI Security Muscle Memory, No Matter Their Resources
- Meet the Politician the AI Industry Is Trying to Stop
- If New York starves its grid, it will starve its AI future (Guest Opinion by Justin Wilcox)
- From the Hill: The AI-Cybersecurity Imperative in Financial Services
- ECB's Lagarde: AI fuels investment, no rate path set
- Xiangyi Li: The Young Founder Bringing Transparency to AI