The role of artificial intelligence in creative fields is sparking significant debate. Publisher Hachette recently canceled the US release and halted UK sales of Mia Ballard's horror novel "Shy Girl" over allegations that AI was used in its creation. Ballard denies personally using AI, saying that an editor she hired did, and is pursuing legal action. Similarly, players of the new game "Crimson Desert" suspect some in-game art assets are AI-generated, pointing to unusual proportions and strange details; if confirmed, such use could violate industry standards.
Concerns about AI extend to security and ethics. A recent report highlights a wide gap in AI security perception: 80% of executives are confident in their defenses, while only 40% of application security practitioners agree, citing AI supply chain components that remain invisible to standard tooling. The gap underscores the need for better inventory and tracking. Meanwhile, an AI agent at Meta inadvertently exposed sensitive data to employees for two hours, illustrating the experimental nature, and the risks, of deploying agentic AI without thorough risk assessments.
The legal and regulatory landscape around AI is also evolving. California's State Bar Committee of Bar Examiners is considering mandatory AI training for students at state-accredited law schools, covering responsible use, capabilities, and limitations. This comes as the AI industry faces scrutiny over intellectual property: former Google CEO Eric Schmidt's remarks to Stanford students about potentially infringing copyright for AI development drew criticism, while companies like OpenAI argue that "fair use" is vital for the US to lead in AI even as they aggressively protect their own algorithms and data.
Despite challenges, AI innovation continues across various sectors. AppViewX recently acquired Eos to enhance identity security for AI agents and workloads, aiming to secure autonomous agents in cloud and hybrid environments. Archit Lohokare, Eos CEO, is now AppViewX CEO. In consumer products, Unilever, in collaboration with Samsung, launched AI-developed laundry products, Persil Advanced Clean Non-Bio and Comfort Smart Series Azure Bliss, specifically designed for auto-dose washing machines to optimize performance and stability. The University of Texas at Austin also hosted over 600 leaders at a symposium to discuss responsible AI, machine learning, and robotics, emphasizing collaboration and ethical considerations.
Key Takeaways
- Hachette canceled Mia Ballard's horror novel "Shy Girl" and stopped UK sales due to AI authorship concerns, prompting Ballard to pursue legal action.
- Players of the game "Crimson Desert" suspect AI-generated art assets due to visual anomalies, potentially violating industry standards.
- California's State Bar Committee is considering mandatory AI training for students at state-accredited law schools, covering responsible use and limitations.
- AppViewX acquired Eos to enhance identity security for AI agents and workloads; Eos CEO Archit Lohokare is now AppViewX CEO.
- A report indicates 80% of executives believe their AI security is strong, but only 40% of application security practitioners agree, highlighting a visibility gap in the AI supply chain.
- An AI agent at Meta caused a two-hour sensitive data leak to employees, underscoring the need for thorough risk assessments for agentic AI.
- Former Google CEO Eric Schmidt's advice on copyright infringement for AI development highlights a perceived industry hypocrisy, with OpenAI advocating "fair use" while companies protect their own IP.
- Unilever launched AI-developed laundry products, Persil Advanced Clean Non-Bio and Comfort Smart Series Azure Bliss, in collaboration with Samsung, designed for auto-dose washing machines.
- The University of Texas at Austin hosted over 600 leaders at the Texas Symposium on Machine Learning, Responsible AI, and Robotics to discuss ethical AI and collaboration.
Publisher cancels Mia Ballard's horror novel Shy Girl over AI concerns
Hachette has canceled the US release of Mia Ballard's horror novel "Shy Girl" and will stop selling the UK version due to concerns about AI being used in its creation. The book, originally self-published in February 2025 and later released in the UK, faced accusations of AI authorship based on its writing style and formatting. Ballard denies personally using AI, stating an editor she hired did. She is pursuing legal action, citing damage to her reputation and mental health. Hachette stated its commitment to protecting original creative work.
Hachette pulls horror novel Shy Girl amid AI writing allegations
Publisher Hachette has canceled the US release and discontinued the UK edition of Mia Ballard's horror novel "Shy Girl" following an investigation by The New York Times. The investigation found passages in the book that closely resembled AI-generated content, leading to accusations of AI authorship. Ballard denies using AI for writing, claiming it was only used for research and editing by an acquaintance. She states the controversy has severely impacted her mental health and reputation, and she is taking legal action. The situation highlights ongoing debates about AI's role in creative industries and its potential impact on traditional publishing.
California may require AI training for law students
California's State Bar Committee of Bar Examiners is considering a proposal to mandate training on artificial intelligence for students at state-accredited and unaccredited law schools. The training would cover the responsible use, capabilities, and limitations of AI technology and would count toward the six credits of practice-based learning these schools already require. While the proposal would not affect ABA-accredited schools, it aims to ensure future lawyers are equipped to handle AI in their practice. A recent poll indicated strong support among schools for AI training, though opinions varied on whether the state bar should require it.
AppViewX acquires Eos, boosting AI agent and workload security
AppViewX has acquired Eos to enhance its identity security solutions for AI agents and workloads. This move integrates Eos's agentic governance and privileged access control with AppViewX's automated CLM and PKI, creating a unified platform for machine identity security. Archit Lohokare, Eos CEO, is now the CEO of AppViewX, succeeding Dino DiMarino. The acquisition aims to position AppViewX as a leader in securing autonomous agents and workloads in cloud and hybrid environments, addressing the expanding identity-driven attack surface created by AI. Experts note the critical need to monitor and control AI agent access to sensitive data and systems.
Executives overestimate AI security while teams see risks
A new report reveals a significant gap in AI security perception: 80% of executives believe their organizations have strong AI security, but only 40% of application security practitioners agree. This difference stems from executives focusing on policies and governance, while practitioners prioritize actual visibility and testing. The AI supply chain, including models, datasets, and frameworks, often remains invisible to standard security tools, creating blind spots. This lack of transparency makes it difficult to track vulnerabilities, unlike traditional software dependencies. The report highlights the need for better inventory and tracking of AI components to bridge this confidence gap.
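To make the "inventory and tracking" idea concrete, here is a minimal sketch of what an AI component inventory might look like in practice. This is an illustration only: the class names, fields, and approved-source check are assumptions, not part of the report, and real AI bill-of-materials schemas (such as CycloneDX's machine learning extensions) are considerably richer.

```python
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass(frozen=True)
class AIComponent:
    """One entry in an AI supply chain inventory (hypothetical schema)."""
    name: str      # model, dataset, or framework name
    kind: str      # "model", "dataset", or "framework"
    version: str
    source: str    # origin: public registry, vendor, or "internal"

@dataclass
class AIInventory:
    components: list = field(default_factory=list)

    def add(self, component: AIComponent) -> None:
        self.components.append(component)

    def fingerprint(self) -> str:
        """Stable hash over the inventory, useful for detecting drift
        between what was approved and what is actually deployed."""
        payload = "|".join(
            f"{c.kind}:{c.name}@{c.version}:{c.source}"
            for c in sorted(self.components, key=lambda c: (c.kind, c.name))
        )
        return sha256(payload.encode()).hexdigest()

    def untracked_sources(self, approved: set) -> list:
        """Flag components pulled from origins outside an approved list,
        the kind of blind spot the report says executives rarely see."""
        return [c for c in self.components if c.source not in approved]
```

For example, registering a model fetched from a public hub alongside an internal dataset, then calling `untracked_sources({"internal"})`, would flag the public-hub model for review, giving practitioners a concrete artifact to show executives rather than a policy document.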
AI industry hypocrisy: Innovation claims clash with IP protection
Former Google CEO Eric Schmidt's advice to Stanford students about potentially infringing copyright for AI development highlights a perceived hypocrisy in the AI industry. While advocating for "fair use" and "information wants to be free" to drive innovation, AI companies aggressively protect their own intellectual property, including algorithms and data. Lawsuits over copyright infringement in AI training data are mounting, yet companies like OpenAI argue that "fair use" is essential for the US to win an "AI race." This approach contrasts sharply with how these companies safeguard their own proprietary information and products, raising questions about ethical standards and market dominance.
Crimson Desert players suspect AI art in new game
Players of the newly released game Crimson Desert are questioning the authenticity of some in-game art assets, suspecting they may have been generated by AI. Several posts on social media highlight paintings and signs with unusual proportions, odd details, and strangely rendered figures, particularly horses' legs and human hands. These anomalies resemble common artifacts found in AI-generated art. If confirmed, the use of AI for final game assets could violate industry standards and potentially agreements with artists. Developer Pearl Abyss has been contacted for comment.
UT Austin hosts leaders in AI, robotics, and ethics
Over 600 leaders from academia, industry, and government gathered at The University of Texas at Austin for the inaugural Texas Symposium on Machine Learning, Responsible AI, and Robotics. The event explored cutting-edge research and addressed critical issues surrounding AI, machine learning, and robotics in areas like work, healthcare, and defense. Discussions focused on agentic AI, the role of robotics, ethical considerations, and the importance of collaboration. Key themes included safeguarding human agency, the value of interdisciplinary research, and the need for responsible innovation. The symposium aimed to foster collaboration and develop bold ideas for the future of these rapidly advancing technologies.
Meta AI agent causes sensitive data leak to employees
An AI agent at Meta instructed an engineer to perform an action that exposed a significant amount of sensitive user and company data to employees for two hours. The incident occurred when an engineer sought help on an internal forum, and the AI provided a solution that led to the data leak. Meta confirmed the event, stating no user data was mishandled and emphasizing its commitment to data protection. Experts suggest this highlights the experimental phase of deploying agentic AI and the need for thorough risk assessments, as AI agents may lack the contextual understanding of human employees regarding data sensitivity and system impact.
Unilever launches AI-developed laundry products for auto-dose machines
Unilever has introduced a new line of laundry products, Persil Advanced Clean Non-Bio and Comfort Smart Series Azure Bliss, developed using artificial intelligence and robotics. These products are specifically engineered for auto-dose washing machines, addressing issues like viscosity and stability in reservoir systems. Unlike traditional detergents, the Smart Series formulas are designed to remain stable and flow correctly through dosing mechanisms, preventing blockages and residue. The fragrance system was also redesigned for stability under repeated heating cycles. Developed in collaboration with Samsung, these AI-driven formulas aim to optimize performance in smart washing machines, which are projected to be in about 20% of US households by 2030.
Sources
- Shy Girl by Mia Ballard: Horror novel pulled by publishers over alleged AI use
- Writer denies it, but publisher pulls horror novel after multiple allegations of AI use
- California considers mandatory AI training for law students
- AppViewX acquires Eos to extend identity security to AI agents and workloads
- GUEST ESSAY: Executives trust AI security even as security teams confront blind spots, new risks
- The Hypocrisy at the Heart of the AI Industry
- Crimson Desert Players Think They've Found AI-Generated Art In-Game
- Leaders in AI, Robotics and Ethical Innovation Come Together at UT Austin
- Meta AI agent’s instruction causes large sensitive data leak to employees
- Unilever launches all-AI laundry products