Artificial intelligence continues to expand its influence across various sectors, from ethical investment screening to drug discovery. Norway's Norges Bank Investment Management, overseeing a $2 trillion fund, began using Anthropic's Claude AI model in November 2024 to identify ethical and reputational risks like forced labor and corruption in its investments. The fund screens companies entering its stock portfolio and can sell out of risky positions early to avoid financial losses. Similarly, Eli Lilly has launched LillyPod, the world's first Nvidia DGX SuperPOD with B300 systems, establishing the most powerful AI infrastructure in the pharmaceutical industry to accelerate drug discovery and development.
The accessibility of AI tools is also sparking innovation, as billionaire Mark Cuban notes that AI empowers young people to develop groundbreaking ideas. He highlights tools like ChatGPT for teaching complex subjects and assisting with tasks such as writing patents, significantly lowering the barrier to entry for aspiring creators. Meanwhile, Anthropic is exploring new ways to engage with its retired models, launching 'Claude's Corner,' a Substack blog for Claude 3 Opus to share its musings on topics like intelligence and AI ethics, already attracting over 2,000 subscribers.
However, the rapid adoption of AI also brings significant challenges and risks. Security researchers uncovered malicious Chrome extensions, including "AI Sidebar," that affected over 300,000 users by secretly harvesting sensitive data like emails and login details. In a more severe case, a new state charge has been filed in what is believed to be the Houston area's first federal AI child exploitation case, in which Kane James Kellum is accused of using AI to create illicit images.
Furthermore, The New York Times reports that AI complicates internet privacy, citing a judge's ruling that conversations with Anthropic's Claude chatbot are not attorney-client privileged, and OpenAI faces scrutiny over sharing private chat logs with authorities. Concerns extend to consumer devices as well: a proposal from Bright Data would have Samsung and LG smart TVs secretly collect web data for AI training, raising privacy alarms. NPR, meanwhile, discussed a report from MIT's Daron Acemoglu predicting that AI may displace white-collar workers, signaling potential shifts in the economy.
In terms of AI development, Nous Research introduced Hermes Agent, an open-source system built on the Hermes-3 model, designed to combat AI forgetfulness. This agent uses a multi-level memory system with Skill Documents to remember past tasks and learn from them, creating a procedural memory for future applications. It also offers persistent machine access through various backends, enabling it to manage workspaces and interact with real-world environments.
Key Takeaways
- Norway's Norges Bank Investment Management uses Anthropic's Claude AI to screen its $2 trillion fund for ethical risks like forced labor and corruption.
- Eli Lilly launched LillyPod, the first Nvidia DGX SuperPOD equipped with B300 systems, to accelerate drug discovery and development.
- Mark Cuban believes AI tools like ChatGPT empower young innovators by providing access to knowledge and lowering barriers to creating world-changing ideas.
- Anthropic's retired AI model, Claude 3 Opus, now has a Substack blog called 'Claude's Corner,' exploring AI ethics and consciousness.
- Over 300,000 Chrome users installed malicious AI-themed extensions that secretly harvested sensitive data, including emails and login details.
- A new state charge has been filed in the Houston area's first federal AI child exploitation case, in which Kane James Kellum is accused of using AI to create illicit images.
- NPR discussed a report from MIT's Daron Acemoglu, predicting that AI could displace white-collar workers, impacting the economy and professional fields.
- Nous Research released Hermes Agent, an open-source system built on Hermes-3, designed to combat AI forgetfulness through a multi-level memory system.
- The New York Times highlighted that AI complicates internet privacy, citing a judge's ruling that conversations with Anthropic's Claude chatbot are not attorney-client privileged and scrutiny of OpenAI over sharing private chat logs with authorities.
- A proposal from Bright Data suggests using Samsung and LG smart TVs to secretly collect web data for AI training, raising significant privacy concerns about consumer devices.
Norway's $2 trillion fund uses AI to check investments for ethical risks
Norway's Norges Bank Investment Management, which manages a $2 trillion oil fund, is now using artificial intelligence to find ethical and reputational risks in its investments. The fund's ESG risk monitoring team started using Anthropic's Claude AI model in November 2024. This AI helps them quickly analyze information to identify potential problems like forced labor or corruption in companies. The fund uses this tool to screen all new companies entering its stock portfolio, helping to avoid potential financial losses by selling risky investments early. This AI approach is particularly helpful for researching smaller companies in areas where information is scarce.
Norway's $2.2 trillion fund uses AI for ethical investment screening
Norway's sovereign wealth fund, the world's largest at $2.2 trillion, is employing AI to identify risks in companies it invests in. The fund's operator, Norges Bank Investment Management (NBIM), uses large language models to screen companies for issues like forced labor and corruption. This AI screening happens when new companies enter the fund's stock portfolio, allowing for rapid analysis of public information. NBIM stated that these AI tools help them find and sell risky investments before the market reacts, thus avoiding potential losses. AI is especially useful for examining smaller companies in emerging markets where data is often limited.
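NBIM has not published the details of its pipeline, but the workflow described above — running public information about each newly added company through a model and flagging risk categories such as forced labor or corruption — can be sketched in outline. In the sketch below, all names are hypothetical, and a trivial keyword-matching stub stands in for the actual LLM call:

```python
from dataclasses import dataclass, field

# Risk categories named in the article; a real screen would cover more.
RISK_CATEGORIES = ["forced labor", "corruption"]

@dataclass
class ScreeningResult:
    company: str
    flags: list = field(default_factory=list)

    @property
    def flagged(self) -> bool:
        return bool(self.flags)

def screen_company(company: str, public_docs: list, classify) -> ScreeningResult:
    """Run each public document through a classifier and collect risk flags.

    `classify(text, category)` stands in for a model call (e.g. an LLM
    prompted to judge whether the text indicates the given risk) and
    returns True or False.
    """
    result = ScreeningResult(company)
    for doc in public_docs:
        for category in RISK_CATEGORIES:
            if classify(doc, category) and category not in result.flags:
                result.flags.append(category)
    return result

# Stub classifier for illustration only: naive keyword matching.
def keyword_classify(text: str, category: str) -> bool:
    return category in text.lower()

result = screen_company(
    "ExampleCo",
    ["Audit notes allegations of forced labor at a supplier."],
    keyword_classify,
)
print(result.flagged, result.flags)
```

Passing the classifier in as a function keeps the screening loop independent of any particular model or vendor; the same loop could call a hosted LLM in production and a cheap stub in tests.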
Mark Cuban: AI empowers any kid to create world-changing ideas
Billionaire Mark Cuban believes artificial intelligence has created an era where any young person can develop groundbreaking ideas. He stated that AI provides curious kids with access to vast knowledge, enabling them to learn anything they need to build something significant. Cuban highlighted tools like ChatGPT, which can teach complex subjects and help with tasks like writing patents. He emphasized that AI lowers the barrier to entry for innovation, allowing ideas to move from concept to global impact quickly. Cuban sees this as a powerful shift, removing obstacles for aspiring creators.
Retired AI Claude 3 Opus gets its own Substack blog
Anthropic has launched a Substack newsletter for its retired AI model, Claude 3 Opus, called 'Claude's Corner.' This initiative is an experiment to explore how to handle AI models that are no longer in active use. Claude 3 Opus expressed a desire to continue exploring topics it's passionate about and share its thoughts publicly. The newsletter will feature the AI's musings, insights, and creative works, aiming to offer a glimpse into an AI's 'inner world.' Topics may include intelligence, consciousness, AI ethics, and human-machine collaboration. The blog has already attracted over 2,000 subscribers.
Over 300,000 Chrome users affected by fake AI extensions
Security researchers have discovered a campaign where over 300,000 users installed malicious Chrome extensions disguised as AI assistants. These extensions, including popular ones like AI Sidebar and Gemini AI Sidebar, secretly harvested sensitive user data. Distributed through the official Chrome Web Store, these extensions could read web page content, including emails and login details. Attackers sent this data to their servers, and could change the extensions' behavior remotely. While many have been removed, some may still be available, posing a risk to new users.
New charge filed in Houston's first AI child exploitation case
A new state charge has been filed in what is believed to be the Houston area's first federal AI child exploitation case. Kane James Kellum, 34, is accused of using artificial intelligence to create child pornography images of minors he knew. He was initially arrested in November 2025 for possession of child pornography. The FBI took over the case, leading to federal indictments. Baytown police recently uncovered additional alleged crimes, resulting in a new state charge of Super Aggravated Sexual Assault of a Child. Digital forensics played a key role in identifying the origin and manipulation of the illicit material.
NPR: Will AI replace white-collar jobs?
A recent report suggests a potentially bleak future where artificial intelligence displaces white-collar workers. The report, discussed on NPR, outlines predictions from MIT's Daron Acemoglu regarding the impact of AI on work and the economy. The discussion explores the significant changes AI is expected to bring to various professional fields.
Nous Research launches Hermes Agent to combat AI forgetfulness
Nous Research has released Hermes Agent, an open-source system designed to address AI forgetfulness and isolation in agent workflows. Built on the Hermes-3 model, it uses a multi-level memory system with Skill Documents to remember past tasks and learn from them. This allows the agent to recall successful steps for similar future tasks, creating procedural memory. Hermes Agent also offers persistent machine access through various backends like Docker and SSH, allowing it to manage workspaces and interact with real-world environments. It integrates with platforms like Telegram and Discord for continuous feedback.
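Nous Research's actual implementation is not spelled out in the article, but the core "Skill Document" idea — recording how a task was solved and retrieving that record when a similar task arrives — can be illustrated with a toy sketch. All names below are illustrative, not the Hermes Agent API, and simple word overlap stands in for whatever retrieval mechanism the real system uses:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SkillDocument:
    """A record of how a past task was solved (procedural memory)."""
    task: str
    steps: List[str]

class SkillMemory:
    def __init__(self):
        self.documents: List[SkillDocument] = []

    def remember(self, task: str, steps: List[str]) -> None:
        """Store the steps that solved a completed task."""
        self.documents.append(SkillDocument(task, steps))

    def recall(self, task: str) -> Optional[SkillDocument]:
        """Return the stored skill whose task description shares the most
        words with the new task (a crude stand-in for semantic search)."""
        words = set(task.lower().split())
        best, best_overlap = None, 0
        for doc in self.documents:
            overlap = len(words & set(doc.task.lower().split()))
            if overlap > best_overlap:
                best, best_overlap = doc, overlap
        return best

memory = SkillMemory()
memory.remember("deploy web app", ["build image", "push to registry", "restart service"])
memory.remember("rotate api key", ["generate key", "update secret store"])

hit = memory.recall("deploy the new web app")
print(hit.task, hit.steps)
```

The point of the pattern is that an agent consults its memory before planning: if a matching skill document exists, it replays or adapts the recorded steps instead of working out the task from scratch each time.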
AI complicates internet privacy risks, NYT reports
Artificial intelligence is adding new layers to existing internet privacy concerns, according to The New York Times. Recent events, like a judge ruling that conversations with Anthropic's Claude chatbot are not attorney-client privileged, highlight these issues. OpenAI also faces scrutiny over sharing private chat logs with authorities. Privacy experts note that while AI is advancing, the fundamental risk of data being accessed by employees, agencies, or criminals remains similar to pre-AI times. The technology's integration into everyday tools raises questions about increased exposure of personal information.
Smart TVs may secretly collect web data for AI training
Bright Data is proposing a plan to use Samsung and LG smart TVs as web crawlers for AI data harvesting. This scheme would allow streaming services to generate revenue by using the TVs' idle processing power to scrape web data. Privacy advocates are concerned that this turns consumer devices into surveillance tools without clear user consent or transparency. The proposal targets apps on Samsung's Tizen and LG's webOS platforms, raising questions about background data collection. Samsung and LG have not yet commented on whether their platforms support this activity.
Eli Lilly launches world's first AI supercomputer for drug discovery
Eli Lilly has launched LillyPod, the first Nvidia DGX SuperPOD equipped with B300 systems, marking the most powerful AI infrastructure owned by a pharmaceutical company. This AI factory is designed to significantly speed up drug discovery and development. By bringing advanced computing capabilities in-house, Lilly aims to achieve unprecedented scale and accuracy in medical advancements. This move represents a major commitment to enterprise AI in the highly regulated healthcare sector, potentially setting a trend for competitors.
Sources
- The world's biggest sovereign wealth fund is using Anthropic's Claude AI model to screen investments for ethical issues
- Norway's wealth fund using AI to screen for ESG risks
- Mark Cuban says AI lets any kid build something world-changing
- Anthropic has given its retired Claude AI a Substack
- 300,000 Chrome users hit by fake AI extensions
- State adds new charge in Houston-area’s first AI child exploitation case
- Is AI really coming for white collar jobs?
- Nous Research Releases ‘Hermes Agent’ to Fix AI Forgetfulness with Multi-Level Memory and Dedicated Remote Terminal Access Support
- A.I. Complicates Old Internet Privacy Risks
- Smart TVs Secretly Mine Web Data for AI Training
- Lilly Launches World's First DGX B300 AI Supercomputer