Meta received a patent in late December 2023 for an AI system designed to simulate a user's social media presence after their death. The technology would learn from past posts to generate new content in the user's voice, potentially interacting with others through likes, comments, and direct messages, and could even simulate audio or video calls. The patent lists Meta CTO Andrew Bosworth as a primary author, but the company says it has no current plans to use the concept; CEO Mark Zuckerberg has discussed similar AI replicas, emphasizing the need for user consent.
The rapid advancement of AI brings significant regulatory and ethical challenges. Biosecurity experts from institutions like Johns Hopkins, Oxford, Stanford, Columbia, and NYU are calling for rules to protect high-risk biological information, fearing AI could help create dangerous pathogens if data linking viral genetics becomes widely available. Simultaneously, legal scholars, including Marco Bassini, argue that Section 230 of the Communications Decency Act, which protects social media platforms, is ill-suited for Large Language Models like ChatGPT that actively generate content, as highlighted by cases like Raine v. OpenAI.
Concerns over data privacy led the European Parliament to ban lawmakers from using built-in AI tools such as Anthropic's Claude, Microsoft's Copilot, and OpenAI's ChatGPT on work devices. This move aims to prevent sensitive information from being accessed by U.S. authorities or used to train AI models. Meanwhile, the AI industry itself is seeing shifts; OpenAI's hiring of personal AI expert Peter Steinberg suggests a future where AI agents might take over tasks, potentially making many smartphone apps unnecessary.
AI is also reshaping various industries and educational environments. In fashion, beauty, and retail, AI is moving beyond test projects into daily operations, with a Vogue Business survey revealing 88% of professionals expect to use AI in their future roles. Experts advise companies to develop clear AI strategies, focusing on ethics and data, and to integrate AI slowly to support human work. However, in classrooms, AI is causing distrust, as seen at Kalamazoo College, where professors like Charlene Boyer Lewis have failed students for AI-assisted cheating, prompting calls for AI to be used as a "co-pilot" rather than a replacement for critical thinking.
Public interaction with AI is also evolving, sometimes leading to frustration, as evidenced by anger directed at food delivery robots like Deja and Jiwon in Atlanta. More broadly, Professor Michael Wooldridge of Oxford University warns that the intense commercial race to release AI tools before they are fully tested risks a "Hindenburg-style disaster": a major AI failure, such as a deadly self-driving car incident, could destroy public trust. Despite these warnings, AI continues to find new applications, with scientists now using it to track El Niño patterns and Dr. Alice Chiao, formerly of Stanford, now working on AI training.
Key Takeaways
- Meta patented an AI system in late December 2023 to simulate deceased users' social media activity, though it states no current plans to implement it.
- OpenAI hired personal AI expert Peter Steinberg, signaling a potential shift towards AI agents replacing many smartphone apps.
- Biosecurity experts from Johns Hopkins, Oxford, Stanford, Columbia, and NYU urge regulations to prevent AI misuse of infectious disease data for creating bioweapons.
- Legal scholars argue Section 230 of the Communications Decency Act is inadequate for Large Language Models like ChatGPT, which actively generate content.
- The European Parliament banned lawmakers from using AI tools like Anthropic's Claude, Microsoft's Copilot, and OpenAI's ChatGPT on work devices due to privacy and cybersecurity risks.
- A Vogue Business survey found 88% of fashion, beauty, and retail professionals expect to use AI in future roles, prompting career path re-evaluation.
- Professor Michael Wooldridge of Oxford University warns that the rapid commercialization of AI risks a "Hindenburg-style disaster" if untested tools cause major failures.
- AI is causing distrust in classrooms, with professors like Charlene Boyer Lewis at Kalamazoo College failing students for AI-assisted cheating.
- Food delivery robots in Atlanta, such as Deja and Jiwon, are reportedly facing anger from some Americans.
- Scientists are now employing AI to track El Niño patterns, and Dr. Alice Chiao, formerly of Stanford, is involved in AI training.
Meta patented AI to post for dead users
Meta patented an AI system that could keep posting from a user's social media account after they die. This AI would learn from past posts and create new content in the user's voice. The patent, listing CTO Andrew Bosworth as a primary author, also suggested the AI could interact with others through likes, comments, and direct messages. A Meta spokesperson stated the company has no plans to use this technology. Experts like Professor Edina Harbinja see a business reason for more engagement, but others like Professor Joseph Davis believe it could hinder the grieving process.
Meta patents AI for post-death social media activity
Meta received a patent for an AI system that could simulate a user's social media activity, even after they pass away. The patent, granted in late December 2023, describes how a large language model would use a person's past data to keep posting and interacting. This technology could even simulate audio or video calls. Meta CEO Mark Zuckerberg discussed similar AI replicas in a 2023 interview, emphasizing the need for user consent. While Meta states the patent is only a concept, it raises important ethical questions about digital identity.
OpenAI hire suggests AI agents will replace apps
OpenAI hired Peter Steinberg, an expert in personal AI, a move that signals a broader shift in how people interact with technology. Industry leaders believe AI agents will soon take over tasks like managing data and making decisions, which could render many of today's smartphone apps unnecessary. These agents would handle multiple functions at once, making digital life simpler and more integrated.
Experts warn AI could misuse dangerous biological data
Biosecurity experts are worried about AI systems using specific infectious disease data. Researchers from Johns Hopkins, Oxford, Stanford, Columbia, and NYU are calling for rules to protect this high-risk biological information. They fear that if data linking viral genetics to traits like transmissibility becomes widely available, it could help create dangerous pathogens. Jassi Pannu from Johns Hopkins explains that some AI models, trained on DNA, can learn the "language" of genetics. The experts stress the need for governments to set clear guidelines and prevent bad actors from using AI to develop bioweapons.
Section 230 law does not fit AI systems
Experts argue that Section 230 of the Communications Decency Act of 1996 is not suitable for modern AI. This law protects social media platforms from being held responsible for content users post. However, Large Language Models like ChatGPT actively generate content, unlike social media which only hosts it. Legal scholars, including Marco Bassini, warn that applying Section 230's immunity to AI generators is a mistake. The ongoing case of Raine v. OpenAI, where a teenager was allegedly encouraged to commit suicide by AI, highlights the need for new legal frameworks to address AI's potential harms.
Fashion industry adapts careers for AI era
AI is changing jobs in the fashion, beauty, and retail industries, moving beyond test projects into daily work. Employees are rethinking their career paths, skills, and job security as AI becomes more common. A Vogue Business survey of over 300 professionals found that 88% expect to use AI in their future roles. Experts like Grace McCarrick advise leaders to create clear AI strategies, including guidelines for ethics and data. Companies should slowly add AI into workflows and reward employees who use it to improve business outcomes, while also ensuring AI supports human work rather than replacing it.
Americans show anger toward food delivery robots
In Atlanta, food delivery robots named Deja, Jiwon, Mu, Niska, and Pelin are facing anger from some Americans. These machines deliver late-night snacks to college students and dinners to others. Workers at restaurants like Gusto feel watched by the robots. Despite their helpful role, these automated couriers are becoming targets of frustration.
European Parliament bans AI on lawmaker devices
The European Parliament has banned lawmakers from using built-in AI tools on their work devices. This decision comes due to serious cybersecurity and privacy concerns. Uploading confidential information to AI chatbots like Anthropic's Claude, Microsoft's Copilot, and OpenAI's ChatGPT could allow U.S. authorities to request user data. Additionally, these chatbots often use uploaded information to improve their models, risking the sharing of sensitive data. Europe has strict data protection rules, and this move aims to protect lawmaker communications.
AI creates distrust between students and teachers
Artificial intelligence is causing distrust between educators and students in classrooms, as seen at Kalamazoo College. Professors struggle with students relying on AI instead of developing critical thinking skills, and some worry about AI being used for cheating. Kalamazoo College senior Madi Magda noted the ease of access to AI, while Professor Charlene Boyer Lewis failed five students for AI cheating last semester. Josh Moon, an educational technology specialist, advises against relying on unreliable AI checkers that can falsely accuse students, especially those of color or non-native English speakers. Experts suggest using AI as a "co-pilot" to enhance learning, allowing teachers more time to guide students.
Expert warns AI race risks Hindenburg-like disaster
Professor Michael Wooldridge from Oxford University warns that the fast race to bring AI to market risks a "Hindenburg-style disaster." He explains that intense commercial pressure leads companies to release AI tools before they are fully tested. A major AI failure, like a deadly self-driving car update or an AI hack that grounds airlines, could destroy public trust in the technology, much like the 1937 Hindenburg airship crash ended interest in airships. Wooldridge stresses that current AI is approximate and should be seen as a tool, not a human-like entity.
Daily news covers El Niño tracking, AI training and legal battles
Today's news covers several topics, including a new way scientists are tracking El Niño as global warming changes climate patterns. Dr. Alice Chiao, formerly an emergency medicine teacher at Stanford University, is now involved in training AI. Another story details Subramanyam 'Subu' Vedam's legal battle; his murder conviction was overturned, but he faces detention by ICE due to an old deportation order. Federal health officials are also reviewing vaccine recommendations for children, looking at Denmark's approach.
Sources
- Meta Patented AI That Takes Over Your Account When You Die, Keeps Posting Forever
- Meta patents AI that takes over a dead person’s account to keep posting and chatting
- OpenAI hire and industry voices signal shift toward AI agents that could render most apps obsolete
- The narrow slice of data that worries biosecurity experts
- Section 230 is Not Fit for AI
- The Fashion Exec’s Guide to the AI Career Reset
- Americans are unleashing their anger on food-delivery robots
- European Parliament blocks AI on lawmakers' devices, citing security risks
- Crisis in the classroom: AI is causing distrust between educators and students
- Race for AI is making Hindenburg-style disaster ‘a real risk’, says leading expert
- Measuring El Niño, a dangerous route, training Dr. AI: Catch up on the day’s stories