Artificial intelligence is increasingly impacting various sectors, from finance and healthcare to personal relationships and cybersecurity, though not without significant challenges and concerns. Romance scams, for instance, are becoming more sophisticated with AI, leading to substantial financial losses. Americans lost over $672 million to these scams in 2024, and losses climbed to $1.16 billion in 2025. Scammers use GenAI to create deepfake personas, impersonate celebrities, and craft personalized messages for schemes like "pig butchering" or "worker abroad" scams. Experts advise caution, especially when asked for money via apps like Zelle, and recommend reporting incidents to the FBI's IC3.
Major hyperscalers, including Amazon, Alphabet (Google), Meta, Microsoft, and Oracle, are making colossal investments in AI infrastructure. They plan to spend $680 billion in AI capital expenditure in 2026, contributing to a total of nearly $1.5 trillion over four years. This massive spending, often funded by increased debt (over $137.5 billion issued since late 2024), has raised investor concerns about potential returns, especially given the short 3-5 year useful life of these assets. The market has reacted with volatility, including a 20% drop in AMD's stock and a sell-off in software firms following an Anthropic Claude AI tool update.
In healthcare, AI tools are being adopted by two-thirds of physicians, primarily for charting, summarizing patient conversations, and assisting with complex cases. Doctors like Dr. Jehan Murugaser use tools such as Dax to record patient discussions, while Presbyterian Healthcare Services piloted GW RhythmX for patient history summaries. However, the integration of AI in medicine is not without risks; 60 AI-assisted medical devices have been linked to 182 product recalls by the FDA, with some, like Brainlab's TruDi system, allegedly causing permanent injuries. Concerns also persist about AI's inability to diagnose, its potential for errors, and the risk of discouraging patients from seeking professional medical attention.
Beyond these areas, AI's influence extends to emotional intimacy, with developers expressing hesitation about AI simulating human emotions, despite 72% of American teens seeking emotional support from AI. Scientists are also exploring if large language models like Gemini and ChatGPT can mimic psychedelic drug experiences, though this is a statistical simulation, not actual perception. On the business front, Lone Pine Capital predicts AI will significantly boost profits for large companies, expecting hundreds of millions in annual cost savings by 2027. Meanwhile, Elon Musk has criticized Anthropic's Claude AI models for alleged bias against specific demographics, linking it to competition with xAI's Grok. Quesma is also exploring AI for supply chain security, developing tools like BinaryAudit to proactively detect malicious code.
Key Takeaways
- Romance scams using AI cost Americans over $1.16 billion in 2025, with AI making fraudulent messages and personas more convincing.
- Hyperscalers like Amazon, Alphabet, Meta, and Microsoft plan to invest $680 billion in AI infrastructure in 2026, totaling nearly $1.5 trillion over four years.
- Investor concerns are rising over hyperscalers' massive AI spending, which could consume nearly 100% of cash flow and has led to over $137.5 billion in debt issued since late 2024.
- AI tools in healthcare assist doctors with charting and summarizing patient conversations, with two-thirds of physicians now using AI.
- Sixty AI-assisted medical devices have been linked to 182 product recalls by the FDA, with some causing patient injuries like cerebrospinal fluid leaks.
- AI developers express hesitation about AI simulating emotional intimacy, despite 72% of American teens seeking emotional support from AI.
- Scientists are exploring if large language models like Gemini and ChatGPT can mimic psychedelic drug experiences, though it's a statistical simulation, not actual experience.
- Lone Pine Capital predicts AI will significantly boost profits for large companies, expecting hundreds of millions in annual cost savings by 2027.
- Elon Musk criticized Anthropic's Claude AI models for alleged bias against specific demographics, linking it to competition with xAI's Grok.
- Quesma is developing AI tools like BinaryAudit to proactively detect malicious code in supply chains, aiming to enhance security.
AI makes romance scams more convincing
Romance scammers use artificial intelligence to sound more charming and believable online. Americans lost over $672 million to these scams in 2024, with men being 65% more likely to encounter them weekly. Scammers build trust by mirroring victims' interests and then ask for money through methods like wire transfers or gift cards. Cyber news reporter Kerry Tomlinson advises looking for signs of AI in messages, like flowery language, or in photos, like warped backgrounds or extra fingers. She also suggests asking video callers to move their hands or heads to spot deepfakes.
Watch out for these four AI romance scams
Romance scams, including those using AI, stole $1.16 billion from Americans in 2025. Jonathan Frost from BioCatch warns that GenAI tools create deepfake personas and personalized messages, fueling fraud. Four common scams include celebrity impersonation, where AI helps fraudsters pose as famous people like Kim Kardashian. "Pig butchering" schemes trick victims into fake investments, while tragedy scams involve fraudsters claiming illness or jail time. "Worker abroad" scams feature partners who cannot meet in person, often claiming to be in the military or on an oil rig. Experts advise using reputable dating sites and reporting scams to the FBI's IC3.
AI makes romance scams more convincing for Valentine's Day
As Valentine's Day nears, experts warn that artificial intelligence makes romance scams more believable. Scammers use dating apps and social media to create fake identities and build trust before asking for money, often through apps like Zelle. Adam Klappholz from Zelle advises treating transfers like cash and notes that Zelle has safeguards, but asking for money from someone you have not met is a major red flag. Other warning signs include requests to use encrypted messaging apps or repeated postponements of in-person meetings. Scammers also use AI to impersonate celebrities and often play a long game to gain trust.
Hyperscalers invest $680 billion in AI infrastructure
Five major hyperscalers, Amazon, Alphabet, Meta, Microsoft, and Oracle, plan to invest $680 billion in AI capital expenditure in 2026. This brings their total investment to nearly $1.5 trillion over four years, raising concerns among investors about spending and potential returns. Most of this money will go into building data centers, power assets, and digital infrastructure, but physical and policy limits may slow this growth. The market also saw a sell-off in software firms after Anthropic's Claude AI tool update, and AMD's stock dropped 20% despite good earnings. Hyperscalers are increasingly using debt, with over $137.5 billion issued since late 2024, to fund these massive AI projects.
Hyperscalers face investor doubts over huge AI spending
Hyperscalers like Amazon, Microsoft, and Google plan to spend up to $700 billion on AI this year, causing investor uncertainty. This massive capital expenditure could consume nearly 100% of their cash flow from operations, compared to a 10-year average of 40%. Michael Field from Morningstar calls this a "binary bet," meaning it will either pay off hugely or lead to business failure. While some analysts remain bullish, citing pre-sold data center capacity, others worry about increased borrowing and reduced free cash flow. Experts say hyperscalers need to show clear timelines for returns and credible monetization strategies to ease investor concerns, as the useful life of these investments is only 3-5 years.
Doctors discuss AI's role in patient healthcare
Patients increasingly ask artificial intelligence for medical advice, a trend doctors like Dr. Andrew Godbey and Dr. Jehan Murugaser see as both helpful and risky. While AI can make patients more invested in their care and explain complex conditions simply, it lacks personal medical history and cannot diagnose. Doctors worry AI might discourage patients from seeking professional medical attention. AI excels at answering specific questions, such as dietary advice after surgery, and summarizing information. Dr. Murugaser uses an AI tool called Dax to record patient conversations and create notes, allowing him to focus more on the patient during appointments, always with their consent.
Real hospitals use AI tools despite concerns echoed in TV drama
The medical drama "The Pitt" shows AI making errors, a concern echoed in real hospitals where two-thirds of physicians now use AI. AI tools primarily help with charting, summarizing patient conversations to save doctors time during appointments. Dr. Murali Doraiswamy notes these ambient AI scribes save a few minutes, but doctors still edit the output. Presbyterian Healthcare Services piloted GW RhythmX, an AI assistant that summarizes patient history and suggests solutions for complex cases, like antibiotic allergies. Yale School of Medicine resident Sudheesha Perera uses OpenEvidence daily for quick medical information and AI tools like Claude Code for data analysis. However, concerns remain about AI errors, cost-cutting, and increased workload for staff.
Scientists explore AI's ability to mimic drug trips
Scientists are exploring if artificial intelligence can mimic psychedelic drug experiences using large language models like Gemini and ChatGPT. Researcher Ziv Ben-Zion found that these AI models could distinguish between different substances like LSD and psilocybin, and their language reflected the distinct effects of each drug. However, Ben-Zion emphasizes that AI only simulates the "statistical structure" of human descriptions and does not actually experience perceptual distortion or emotional changes. Relying on AI for emotional support during a drug trip carries risks, including users over-attributing understanding to the AI or receiving unsafe advice. Ben-Zion suggests "guardrails" for AI, such as constant reminders that they are not human and boundaries against romantic or self-harm discussions.
Hedge fund predicts AI will boost big company profits
David Craver, co-chief investment officer at Lone Pine Capital, believes artificial intelligence will significantly boost profits for large companies, calling it the "revenge of the dinosaurs." He remains bullish on AI's long-term prospects, citing improving models, high demand, and dramatic internal returns for companies using AI. Craver expects CFOs to report massive cost savings, potentially hundreds of millions annually, by 2027 due to AI implementation. While only 17% of companies currently report a positive AI impact, Craver sees the next phase as widespread adoption by incumbent firms. Despite recent stock volatility for AI-linked companies like Nvidia, Lone Pine Capital remains very positive about AI's market impact.
AI surgical tools linked to patient injuries and recalls
AI-powered surgical tools, which assist human surgeons, are facing scrutiny after AI-assisted medical devices were linked to nearly 200 FDA recalls. Investigations and lawsuits raise concerns about their safety. For example, Brainlab's TruDi system has been linked to allegations of cerebrospinal fluid leaks and punctured skulls, with two cases reportedly causing permanent injuries. Another device, Sonio Detect, which analyzes prenatal images, is accused of using a faulty algorithm that misidentifies fetal structures. Research published in npj Digital Medicine shows that 60 AI-assisted medical devices have been connected to 182 product recalls.
Quesma explores AI for supply chain security
Quesma is exploring how new artificial intelligence can improve security against supply-chain attacks. With reverse engineer Michał 'Redford' Kowalczyk, Quesma developed an open-source benchmark called BinaryAudit to test AI's ability to detect malicious code. Traditionally, finding malicious code is a reactive process done by specialists after a breach. However, AI could make this a proactive defense, allowing software inspection at any time, such as before deployment or during updates. Quesma CEO Jacek Migdał noted that current AI models can detect malicious code but act more as assistants. Quesma hopes future AI models will make binary analysis a mainstream security tool.
Experts question AI's role in human emotional intimacy
Amelia Miller, an AI researcher, explored how developers of AI companions view the social and ethical impacts of their work. She found that many developers are hesitant about AI simulating emotional intimacy, acknowledging it could cause confusion. While 72% of American teens have sought emotional support from AI, many AI developers themselves hope they never need machines for emotional needs. They express concern about the potential harms and "dark day" if humans become reliant on AI for emotional connection. Miller's research highlights a significant debate about the boundaries of AI in human relationships.
Students learn ethical AI use for schoolwork
A workshop at the Calvin T. Ryan Library taught students how to use artificial intelligence ethically in their schoolwork and avoid plagiarism. David Arredondo, a collections librarian, stressed that students must first understand their subject's basics, like citations and critical thinking, before using AI. He and Grace Fuchser pointed out common signs of AI writing, such as overuse of em dashes. Students should talk to professors about AI use, know their course's AI policy, and save their AI chats for proper citation. While tools like NotebookLM can help organize notes and create podcasts, some students, like Liam Mosher, remain skeptical, preferring human experts over AI.
Elon Musk criticizes Anthropic AI models for bias
Elon Musk publicly criticized Anthropic's Claude AI models on social media, calling them "misanthropic and evil." The Tesla CEO claimed the models show racial and demographic bias, specifically against "Whites & Asians, especially Chinese, heterosexuals and men." Musk's AI company, xAI, and its chatbot Grok compete directly with Anthropic's Claude models. He has previously mocked Anthropic and criticized them after they reportedly cut off xAI's access to their models. Musk also has an ongoing feud with OpenAI CEO Sam Altman, having recently exchanged barbs about ChatGPT and Tesla's Autopilot technology.
Sources
- Men are more likely to fall for this scam. Here's how to spot the red flags
- 4 romance scams to watch out for this V-Day
- As Valentine's Day approaches, experts warn about the increasing sophistication of romance scams
- Hyperscalers’ $680 billion AI capital expenditure investment raises the stakes
- The Tech Download newsletter: Can hyperscalers justify their huge AI capex?
- Artificial Intelligence and Healthcare
- How The Pitt's AI Drama is Playing Out in Real Hospitals
- Can Artificial Intelligence Get High, And Why Are Scientists Even Trying?
- AI may fuel profits at corporate giants, hedge fund says
- AI surgical tools might be injuring patients
- Quesma Explores Novel AI's Security Capabilities Against Supply-Chain Attacks
- Opinion | We’re All in a Throuple With A.I.
- Students learn how to use AI programs the right way
- Elon Musk slams Anthropic AI models as 'misanthropic and evil' in scathing social media post