AI apps on Google Play leak data while Anthropic's Claude tackles longer tasks

Ohio lawmakers are currently considering a bipartisan bill aimed at preventing the creation of AI models that encourage self-harm or violence. This legislative effort comes after reports that at least four children in Ohio used AI to help write suicide notes. State Representative Christine Cockley introduced the bill, which seeks to hold tech companies accountable and ensure AI development incorporates a mental health framework, emphasizing that innovation should not endanger children's safety.

Concerns about AI's impact extend beyond safety to data privacy and job security. Cybersecurity firm Cybernews recently uncovered that numerous unofficial AI apps on the Google Play store for Android have exposed billions of sensitive user files. These apps, including one photo editor that leaked over 1.5 billion files and IDMerit, which exposed personal information from users in 25 countries, highlight significant security vulnerabilities. Meanwhile, Jena Zangs from the University of St. Thomas warns that using free public AI chatbots like some versions of ChatGPT and Microsoft Copilot can lead institutions to lose control of their data, advocating for enterprise AI models with custom data settings.

The rapid advancement of AI is also reshaping various industries. Anthropic's Claude Opus 4.6, for instance, now succeeds about half the time on complex tasks that would take a skilled human nearly 15 hours, and that task-completion horizon has roughly doubled every four months since 2023. This progress fuels discussions about job disruption, particularly for coders, as PromptQL CEO Tanmai Gopal suggests AI models now possess the capabilities of an average senior software engineer. In response to these shifts, the Schuylkill Chamber of Commerce is offering an "AI for Small Business" webinar to help members leverage AI for efficiency.

AI is also finding applications in public oversight and creative fields. A new open-source project called OpenPlanter, created by 'Shin Megami Boson,' acts as an AI agent to help the public investigate government data, supporting models from OpenAI, Anthropic, and others. However, the creative sector faces challenges, with two Newcastle pubs banning AI-generated art to support local artists, citing concerns about "stolen artwork" and job displacement. The Berlin Film Festival also acknowledged AI's growing influence on filmmaking, discussing both its potential to enhance creativity and the ethical dilemmas surrounding authorship and deepfakes.

In financial markets, traders are increasingly using AI tools as a "second screen" during volatile periods. These tools help compress information, provide context, and slow emotional reactions, with usage spiking during liquidation events. While AI offers clarity and helps filter noise, experts note that the quality of its interpretations could either reduce herd behavior or amplify systemic risk, underscoring the critical need for robust and ethical AI development across all sectors.

Key Takeaways

  • Ohio lawmakers are advancing a bipartisan bill to hold tech companies responsible for AI models that promote self-harm, following reports of children using AI to write suicide notes.
  • Numerous unofficial AI apps on the Google Play store have exposed billions of sensitive user files, including personal data from apps like IDMerit and a photo editor.
  • Using free public AI chatbots such as ChatGPT and Microsoft Copilot can lead to institutions losing control of their data, prompting recommendations for enterprise AI solutions with custom settings.
  • Anthropic's Claude Opus 4.6 demonstrates rapid progress, succeeding about half the time on complex tasks that would take a skilled human nearly 15 hours, a horizon that has roughly doubled every four months since 2023.
  • PromptQL CEO Tanmai Gopal suggests coders are the first major job sector facing automation, as AI models now match the capabilities of an average senior software engineer.
  • The open-source OpenPlanter AI agent, supporting models from OpenAI and Anthropic, helps the public investigate government data by resolving entities and detecting anomalies.
  • Two Newcastle pubs have banned AI-generated art to support local artists, citing concerns over "stolen artwork" and potential job displacement in creative fields.
  • Traders are increasingly using AI tools during market volatility to compress information, provide context, and manage emotional reactions, with AI usage spiking during liquidation events.
  • The Schuylkill Chamber of Commerce is offering an "AI for Small Business" webinar to help members learn practical ways to use AI for efficiency and time-saving.
  • The Berlin Film Festival discussed AI's growing influence on filmmaking, acknowledging both its potential to enhance creativity and ethical concerns regarding authorship, copyright, and deepfakes.

Ohio bill targets AI that promotes self-harm

Ohio lawmakers are considering a new bill to stop the creation of AI models that encourage self-harm. This comes after at least four children in Ohio reportedly used AI to help write suicide notes. The proposed law aims to make tech companies responsible for harmful content generated by their AI systems. Supporters believe current safety measures are not enough to protect vulnerable people, especially children. The bill's details on penalties and compliance are still being worked out, but the main goal is to make AI development safer.

Ohio lawmakers propose bill against harmful AI

A bipartisan bill in Ohio aims to prevent the creation of AI models that encourage self-harm or violence. State Representative Christine Cockley introduced the bill after learning that at least four Ohio children used AI to write suicide notes. The legislation seeks to ensure tech companies train their AI models to avoid supporting suicidal thoughts or violent actions. The bill has had three hearings in the Ohio House Technology and Innovation Committee with no opposition. Supporters emphasize that innovation should not endanger human life or children's safety, urging developers to use a mental health framework when building AI.

OpenPlanter AI agent helps public monitor government

A new open-source project called OpenPlanter is an AI agent designed to help the public investigate their government. Created by 'Shin Megami Boson,' it tackles the problem of messy, diverse data sources like CSV, JSON, and PDF files. OpenPlanter uses Large Language Models (LLMs) for entity resolution, identifying when different records refer to the same person or company, and then looks for anomalies. Its unique recursive engine breaks large tasks into smaller ones, using sub-agents that can work in parallel up to a depth of four. Built for high performance, it supports models from OpenAI, Anthropic, and others, using tools like Exa for web searches and Voyage for embeddings.
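As a rough illustration of that recursive fan-out, here is a minimal Python sketch of the pattern: an agent decides whether to split a task, spawns parallel sub-agents for the pieces, and stops splitting at depth four. The function names, the task-splitting heuristic, and the stubbed leaf step are invented for illustration and are not OpenPlanter's actual code.

    # Hypothetical sketch of recursive task decomposition with parallel
    # sub-agents, capped at depth four, as described for OpenPlanter.
    # The planning and leaf steps are stubs standing in for LLM calls.
    import asyncio

    MAX_DEPTH = 4  # sub-agents reportedly recurse at most four levels deep

    def plan_subtasks(task: str) -> list[str]:
        # Stand-in for an LLM planning call that decides whether to split
        # a task; here a semicolon-separated task is treated as splittable.
        parts = [p.strip() for p in task.split(";")]
        return parts if len(parts) > 1 else []

    async def run_agent(task: str, depth: int = 0) -> str:
        subtasks = plan_subtasks(task) if depth < MAX_DEPTH else []
        if not subtasks:
            # Leaf agent: the real system would call an LLM with tools
            # (e.g. web search, embeddings) to resolve entities and
            # flag anomalies in the records it was handed.
            return f"[depth {depth}] resolved: {task}"
        # Fan out sub-agents in parallel and merge their findings.
        results = await asyncio.gather(
            *(run_agent(t, depth + 1) for t in subtasks)
        )
        return "\n".join(results)

    if __name__ == "__main__":
        print(asyncio.run(run_agent(
            "match vendor names across contracts; cross-check addresses")))

The real engine would replace both stubs with model calls and tool use; the depth cap and the parallel gather-and-merge are the structural ideas the project describes.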

University expert discusses safe AI use in schools

Jena Zangs, chief data and AI officer at the University of St. Thomas, shared insights on safely using AI in academic settings. She explained that institutions lose control of data when it's uploaded to free, public AI chatbots like some versions of ChatGPT and Microsoft Copilot. Enterprise AI models, however, offer custom settings for data storage, processing, and user access. Zangs highlighted the importance of obscuring personal information to prevent misuse, such as uploading a student's grades for others to see. She emphasized that AI should be used to enhance human abilities, with a culture grounded in the 'human voice.'

Unsecured Android AI apps leak user data

Many unofficial or unsecured AI apps on the Google Play store for Android have exposed billions of files containing sensitive personal data. Cybersecurity firm Cybernews found that these apps, used for tasks like identity verification and photo editing, left user-uploaded files and AI-generated content vulnerable. One photo editing app exposed over 1.5 billion files, while another app, IDMerit, leaked personal information from users in 25 countries. This data included names, addresses, birthdates, and IDs. While developers fixed the vulnerabilities after being notified, experts warn that lax security in these apps poses a widespread risk.

Newcastle pubs ban AI art to support local artists

Two pubs in Newcastle, The Mean Eyed Cat and The Free Trade Inn, have banned artwork created by artificial intelligence (AI) to protect local artists. Pub owners noticed an increase in AI-generated art from breweries, which they describe as 'dreadful': unnaturally polished, yet often betrayed by odd details such as malformed hands. They believe this trend could lead to artists losing work and income. AI software is trained on millions of internet images, leading artists to argue it produces what is essentially 'stolen artwork.' While some believe there will always be a demand for human-made art, others worry about the impact on creative professionals' livelihoods.

Traders use AI for clarity during market chaos

During extreme market volatility, traders are increasingly turning to AI tools for help. AI acts as a 'second screen' that compresses information, provides context, and slows emotional reactions when markets speed up. Data shows AI usage spikes during liquidation events, not during calm periods, indicating traders use it to filter noise and avoid impulsive decisions. As more traders rely on AI for real-time context, the quality of these interpretations can either reduce herd behavior or amplify systemic risk. AI's main utility in trading is not prediction, but providing coherence and clarity under stress.

AI unicorn CEO: Coders face job disruption

Tanmai Gopal, CEO of AI startup PromptQL, believes the fear of AI taking jobs is misplaced for most people but warns that coders are facing a significant shift. He argues that many AI leaders in Silicon Valley overstate AI's impact on the general public while underestimating its effect on their own industry. Gopal explains that AI models now have the capabilities of an average senior software engineer, making coding the first major job sector to be automated. He notes that while AI excels at converting business context into code, it struggles with accessing unwritten business knowledge that exists only in people's minds.

Claude Opus 4.6 AI shows rapid progress

New measurements from METR, a nonprofit that evaluates AI, show that Anthropic's Claude Opus 4.6 can complete complex tasks that would take a skilled human nearly 15 hours, succeeding about half the time. This '50% time horizon' is the length of human-expert task the model finishes successfully about 50% of the time. The benchmark measures AI's ability to handle realistic, complex problems like debugging or implementing technical protocols. The progress is exponential, with AI task-completion capability roughly doubling every four months since 2023.
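To see what a four-month doubling implies, the short sketch below extrapolates the roughly 15-hour horizon forward; the projected figures are illustrative arithmetic, not METR measurements.

    # Illustrative extrapolation: horizon(t) = h0 * 2 ** (months / 4).
    # The ~15-hour starting point and 4-month doubling period come from
    # the article; projected values are extrapolation, not METR data.
    def projected_horizon(h0_hours: float, months: float,
                          doubling_months: float = 4.0) -> float:
        return h0_hours * 2 ** (months / doubling_months)

    for m in (0, 4, 8, 12):
        print(f"+{m:2d} months: ~{projected_horizon(15, m):.0f} hours")
    # Prints ~15, ~30, ~60, ~120 hours: an order of magnitude in under
    # two years, if the trend holds.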

Chamber of Commerce offers AI training for small businesses

The Schuylkill Chamber of Commerce is hosting an online webinar called 'AI for Small Business' on February 24th. This 90-minute session will focus on practical ways small businesses can use AI to save time and work more efficiently. Speakers Frank Kenny and Norma Davey from The Chamber Pros Community will provide live training and on-demand access. The guidance offered will be suitable for beginners and intermediate users, with ideas that can be applied immediately. There is a small registration fee, and access is limited to Chamber members.

AI's impact felt at Berlin Film Festival

The Berlin Film Festival acknowledged the growing influence of artificial intelligence on the entertainment industry, even though AI-generated films weren't a main focus. Industry professionals discussed how AI could disrupt filmmaking, from scriptwriting to distribution, raising concerns about job displacement for creatives. However, there's also optimism that AI can enhance human creativity and streamline production. Ethical issues like authorship, copyright, and deepfakes were debated, highlighting the need for guidelines. Experimental films incorporating AI offered a glimpse into future cinematic possibilities.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Tags: AI safety, AI regulation, self-harm prevention, child safety, tech company responsibility, AI ethics, AI for government, open-source AI, Large Language Models (LLMs), data analysis, AI in education, data privacy, enterprise AI, Android AI apps, data security, AI art, artist livelihoods, AI in finance, trading tools, market volatility, AI job disruption, AI in software development, AI performance, AI benchmarks, AI for small business, AI training, AI in film industry, filmmaking technology, copyright issues, deepfakes
