Palantir CEO Alex Karp recently shared his perspectives on AI's impact on the global workforce and immigration at the World Economic Forum in Davos on January 20, 2026. Karp, speaking with BlackRock CEO Larry Fink, asserted that AI will generate sufficient jobs for citizens, particularly those with vocational training, thereby reducing the need for extensive immigration, except for highly specialized roles. He suggested that while AI might displace "humanities jobs" and affect "elite" white-collar workers, vocational positions will see increased value. Karp cited Palantir's Maven system, managed by a former police officer with junior college education, as an example of AI creating opportunities for individuals with practical skills. He also warned that Europe lags behind the US and China in AI adoption.
The year 2026 marks a significant "AI arms race" in cybersecurity, with agentic AI driving new threats. Malicious AI, deepfakes, and sophisticated malware are weaponizing automation, leading to risks like shadow AI and autonomous attacks. Ransomware 3.0 now subtly alters data to erode trust, and traditional identity management struggles against modern bypass techniques, including MFA fatigue and help desk backdoors. Experts like Aamir Lakhani from Fortinet's FortiGuard Labs warn of autonomous cybercrime agents executing attacks with minimal human oversight, drastically shrinking intrusion-to-impact times. Organizations must adopt phishing-resistant standards such as FIDO2 and passkeys to counter these evolving threats.
Discussions at Davos highlighted the importance of diversifying AI development, with Cohere CEO Aidan Gomez emphasizing it as a national security issue to prevent over-reliance on single sources. Stanford HAI senior fellow Yejin Choi expressed concern over AI power concentrating among a few wealthy tech companies, advocating for AI that is "of human, for human, by human." Meanwhile, the union Equity and producers' association Pact resumed negotiations on January 21, 2026, regarding AI use in the entertainment industry, following an "improved offer" from Pact. Performers previously showed strong support to refuse digital scanning without consent, underscoring the need for "informed consent" for digital replicas, similar to the SAG-AFTRA deal in the US.
Major tech companies are actively integrating AI into education. Anthropic is piloting its Claude chatbot with schools, while Google is embedding Gemini features, such as SAT practice and writing feedback, into Google Classroom. Microsoft is also adding its Copilot AI assistant to education products, reflecting a shift from initial resistance to widespread acceptance of AI in learning. Concurrently, AVPN, supported by Google.org and the Asian Development Bank, expanded its AI Opportunity Fund: Asia-Pacific. This fund, now totaling USD 25 million, supports 18 training providers and aims to reach an additional 50,000 individuals by 2025 across countries like India, Singapore, Vietnam, Indonesia, Malaysia, and the Philippines, building AI skilling infrastructure.
In the competitive AI landscape, Rokid launched new AI-first wearables, the Rokid Ai Glasses Style and Rokid Glasses, targeting both professionals and broader consumer adoption with its open AI ecosystem. The Ai Glasses Style, starting at $299, is lighter and offers prescription compatibility. Globally, Nvidia CEO Jensen Huang plans a visit to China amid new U.S. export controls on AI chips, seeking to strengthen relationships despite shipment difficulties for H200 AI chips. Google DeepMind CEO Demis Hassabis noted at Davos that Chinese AI companies lag about six months behind leading Western labs in frontier AI innovation, attributing this partly to US restrictions on advanced semiconductors. Comparative tests between Google's Gemini 3.2 Fast and OpenAI's ChatGPT 5.2 showed Gemini performing better on complex math problems, while ChatGPT excelled in creative writing.
Key Takeaways
- Palantir CEO Alex Karp believes AI will create enough vocational jobs for citizens, reducing the need for large-scale immigration, and criticized traditional higher education.
- The cybersecurity landscape in 2026 faces an "AI arms race" with agentic AI, deepfakes, and Ransomware 3.0, requiring adoption of phishing-resistant standards like FIDO2.
- Experts at Davos, including Cohere CEO Aidan Gomez, advocate for diversifying AI development globally for national security and to prevent power concentration among a few tech companies.
- The union Equity and producers' association Pact are negotiating AI use, with performers seeking "informed consent" for digital replicas, similar to the SAG-AFTRA deal.
- Anthropic, Google, and Microsoft are integrating their AI tools (Claude, Gemini, Copilot) into classrooms, with Google adding features like SAT practice and writing feedback to Google Classroom.
- AVPN, supported by Google.org and the Asian Development Bank, expanded its AI Opportunity Fund: Asia-Pacific to USD 25 million, aiming to skill 50,000 more individuals by 2025 across six countries.
- Rokid introduced new AI-first wearables, Rokid Ai Glasses Style ($299) and Rokid Glasses, aiming for broader market adoption with an open AI ecosystem and prescription compatibility.
- Nvidia CEO Jensen Huang plans a visit to China to strengthen relationships amidst U.S. export controls on AI chips, which have caused shipment difficulties for H200 AI chips.
- Google DeepMind CEO Demis Hassabis stated Chinese AI companies are approximately six months behind Western labs in frontier AI innovation, partly due to US semiconductor restrictions.
- Comparative tests showed Google's Gemini 3.2 Fast performing better on complex mathematical problems, while OpenAI's ChatGPT 5.2 excelled in creative writing.
Palantir CEO Alex Karp links AI to immigration changes
Palantir CEO Alex Karp believes AI will create enough jobs for citizens, especially those with vocational training. He suggests this trend makes large-scale immigration unnecessary except for specialized skills. Karp stated that white-collar jobs will be affected, while vocational jobs will thrive. He mentioned Palantir's Maven system, an AI tool for the US Army, noting that its head only completed junior college. Karp's views often include critiques of higher education.
Palantir CEO Alex Karp discusses AI and job future
Palantir CEO Alex Karp spoke at the World Economic Forum in Davos on January 20, 2026, discussing AI's impact on jobs with BlackRock CEO Larry Fink. Karp stated that AI will destroy humanities jobs but create many opportunities for those with vocational training. He criticized traditional college degrees, advocating for vocational skills. Karp highlighted the example of a former police officer with a junior college education who now manages Palantir's Maven system for the US Army. He believes companies should focus on finding people's unique aptitudes.
Palantir CEO Alex Karp links AI to immigration needs
Palantir CEO Alex Karp stated at the World Economic Forum in Davos on January 20, 2026, that AI will create enough jobs for citizens, especially those with vocational training. He believes this will reduce the need for large-scale immigration, except for highly specialized skills. Karp, who has a PhD in philosophy, suggested that "elite" white-collar workers might be affected first, while vocational workers will be more secure. His comments, made during a conversation with BlackRock Inc. CEO Larry Fink, have drawn media criticism. Karp also said that AI "will destroy humanities jobs."
Palantir CEO Alex Karp discusses AI and global power
Palantir CEO Alex Karp suggested that AI use in hospitals can "bolster civil liberties" by showing clear reasons for patient processing decisions. He warned that Europe is falling behind the US and China in AI adoption, calling it a serious structural problem. Karp believes AI will destroy humanities jobs but increase the value of vocational technician roles. He also reiterated his view that AI will create enough jobs for citizens, making large-scale immigration unnecessary except for very specialized skills.
Cybersecurity faces new AI threats in 2026
In 2026, cybersecurity faces an "AI arms race" with agentic AI leading to new threats. Malicious AI, deepfakes, and sophisticated malware are weaponizing automation, creating risks like shadow AI and autonomous attacks. Ransomware 3.0 now focuses on subtly altering data to erode trust. Traditional identity management is failing against modern bypass techniques, with MFA fatigue and help desk backdoors being exploited. Organizations must adopt phishing-resistant standards like FIDO2 and passkeys. The September 2025 Jaguar Land Rover attack showed the devastating impact of ransomware, halting production and affecting 5,000 partners.
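The "Ransomware 3.0" pattern described above succeeds precisely because subtle alterations are hard to spot by eye. A standard countermeasure is to snapshot cryptographic digests of critical records and compare them later; even a one-byte change produces a completely different digest. The sketch below is illustrative only (record names and the manifest approach are assumptions, not taken from any product mentioned here), using Python's standard `hashlib`:

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest for a blob of data."""
    return hashlib.sha256(data).hexdigest()


def build_manifest(records: dict[str, bytes]) -> dict[str, str]:
    """Snapshot a digest per record so later tampering is detectable."""
    return {name: fingerprint(blob) for name, blob in records.items()}


def find_tampered(records: dict[str, bytes], manifest: dict[str, str]) -> list[str]:
    """Return names of records whose current digest no longer matches the manifest."""
    return [
        name
        for name, blob in records.items()
        if fingerprint(blob) != manifest.get(name)
    ]


# Example: a single changed digit is enough to flip the digest.
records = {"invoice-001": b"amount=1000", "invoice-002": b"amount=2500"}
manifest = build_manifest(records)
records["invoice-002"] = b"amount=2600"  # the "subtle alteration"
print(find_tampered(records, manifest))  # ['invoice-002']
```

In practice the manifest itself must be stored out of the attacker's reach (e.g. write-once storage or signed with an HMAC key), otherwise it can be rewritten along with the data.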
AI versus AI defines new cybersecurity war
AI is now a major engine for cybercrime, enabling automated social engineering and advanced attacks. Defenders must use AI-powered security to counter these threats, as attackers industrialize cybercrime. Aamir Lakhani from Fortinet's FortiGuard Labs warns that autonomous cybercrime agents will execute attacks with minimal human oversight, drastically shrinking intrusion-to-impact times. Telcos and other communication providers must secure the entire ecosystem of 5G, AI, IT, and OT technologies. The cybercrime economy is becoming more structured, resembling legitimate businesses with customer service and reputation scoring. Organizations will need to distinguish between legitimate and malicious AI agents, focusing on discerning intent.
Experts at Davos call for diverse AI development
At Davos, executives and academics agreed that diversifying AI's location and designers offers collective benefits. Cohere CEO Aidan Gomez emphasized that AI diversification is a matter of national security, preventing countries from depending on a single source. Stanford HAI senior fellow Yejin Choi expressed concern about the concentration of AI power among a few wealthy tech companies. She stated that AI should be "of human, for human, by human." Turing CEO Jonathan Siddharth added that AI models must "touch reality" and learn from real people to be safe and useful.
Pact and Equity resume AI talks after new offer
On January 21, 2026, the union Equity and producers' association Pact agreed to resume negotiations on AI use, following an "improved offer" from Pact. Equity General Secretary Paul W Fleming stated that industrial action remains a possibility if the offer is not sufficient. A previous ballot showed strong support from performers to refuse digital scanning on set. AI has been a central point of these talks, similar to the SAG-AFTRA deal in the US, which included "informed consent" for digital replicas. The negotiations also cover pay, residuals, and stipulations from streamers.
Big tech AI tools enter classrooms
Major tech companies like Anthropic, Google, and Microsoft are bringing their AI tools into classrooms, aiming to shape how Gen Alpha learns. After initial resistance to tools like ChatGPT, schools now widely accept AI as a permanent part of education. Anthropic is piloting its Claude chatbot with schools, while Google is integrating Gemini features like SAT practice and writing feedback into Google Classroom. Microsoft is also adding its Copilot AI assistant to education products. Concerns remain about AI tool biases, data privacy, and the enforcement of laws like FERPA.
AVPN expands AI training fund in Asia-Pacific
AVPN, supported by Google.org and the Asian Development Bank, expanded its AI Opportunity Fund: Asia-Pacific to build AI skilling infrastructure. The fund, launched at USD 15 million with a further USD 10 million added for Phase Two, now supports 18 training providers. It will extend its reach to Indonesia, Malaysia, and the Philippines, in addition to India, Singapore, and Vietnam. The initiative aims to develop new AI skilling programs and strengthen the overall AI skilling ecosystem. The fund has already helped over 10,000 individuals and plans to reach an additional 50,000 by 2025.
Rokid launches new AI glasses for wider market
Rokid introduced two AI-first wearables, the Rokid Ai Glasses Style and Rokid Glasses, as part of a single strategy to segment the AI eyewear market. Rokid Glasses offer a visual display for AR uses like navigation and translation, targeting professionals. The new Ai Glasses Style, lighter at 38.5 grams and starting at $299, aims for broader adoption as an all-day AI interface. Both products share Rokid's open AI ecosystem, integrating multiple language models and global services. A key feature of the Style is prescription compatibility up to 15.00D, including progressive and photochromic lenses, delivered globally within 7-10 days.
Nvidia CEO Jensen Huang visits China amid chip limits
Nvidia CEO Jensen Huang plans a visit to China, likely to Beijing before the Lunar New Year, to strengthen key relationships. This trip comes as Nvidia faces new U.S. export controls on AI chip sales to China. Shipments of Nvidia's H200 AI chips have encountered difficulties, with U.S. regulators permitting some transfers but Chinese customs later halting them. China remains a crucial market for Nvidia, especially for its data center business. Huang's visit highlights the company's efforts to maintain its position despite trade restrictions and rising demand for AI technology.
DeepMind CEO says China lags in AI innovation
Google DeepMind CEO Demis Hassabis stated that Chinese AI companies are about six months behind leading Western labs in frontier AI innovation. Speaking at the World Economic Forum in Davos, Hassabis acknowledged DeepSeek's R1 model as impressive but noted Chinese firms have not innovated beyond the frontier. US restrictions on advanced semiconductors have constrained China's AI development, though US President Donald Trump is easing some export bans. Hassabis also mentioned DeepMind's contributions to Google's Gemini AI assistant and its work on robotics, anticipating a breakthrough in physical intelligence soon.
Gemini and ChatGPT face off in AI test
Ars Technica conducted comparative tests between Google's Gemini 3.2 Fast and OpenAI's ChatGPT 5.2, focusing on the default models available to non-subscribers. The tests used complex prompts, including writing original dad jokes, solving a mathematical word problem about Windows 11 on floppy disks, and creative writing about Abraham Lincoln inventing basketball. ChatGPT struggled with originality in dad jokes, while Gemini performed better on the math problem with clearer calculations and more detail. For creative writing, ChatGPT earned points for charm with its historical details. The tests highlighted stylistic and practical differences between the two AI models.
Sources
- Palantir CEO Says AI Will Somehow Be So Great That People Will Stop Immigrating
- Palantir CEO says AI "will destroy" humanities jobs but there will be "more than enough jobs" for people with vocational training
- Palantir CEO says AI will eliminate the need for mass immigration
- Palantir CEO suggests AI 'bolsters civil liberties,' warns Europe falling behind US and China
- Navigating the era of agentic AI and identity management in 2026
- AI vs. AI is the new security battleground
- Axios House: Executives and academics agree there's collective benefit to diversifying AI
- Pact & Equity Return To Negotiating Table On AI Following “Improved Offer” — But Union Says Industrial Action Remains Possibility
- Big tech's AI tools crowd the classroom
- AVPN's AI Opportunity Fund Expands Regional Efforts to Build AI Skilling Infrastructure for a Future-Ready Workforce Across Asia-Pacific
- Rokid Ai Glasses Style vs. Rokid Glasses: Two Products, One Strategy for AI-First Wearables
- Nvidia CEO Jensen Huang Plans China Visit as AI Chip Sales Face New Limits
- DeepMind CEO Says Chinese AI Firms Are 6 Months Behind the West
- Has Gemini surpassed ChatGPT? We put the AI models to the test.