The rapid expansion of artificial intelligence brings both ethical dilemmas and new opportunities. Scale AI, a company partly owned by Meta, has faced scrutiny over its use of "taskers" to collect and label potentially disturbing personal data from the internet for AI training. While Scale AI states it avoids child abuse material and explicit pornography, the practice raises significant ethical questions about data privacy. Meanwhile, many skilled older workers, struggling in the current job market, are finding temporary employment in data annotation, training AI models like ChatGPT and Gemini.
AI's practical applications are advancing, as seen with Tesla's FSD (Supervised) v14.3 update, which features a new AI system reacting 20% faster and utilizing "fleet learning" from millions of vehicles. This update brings new capabilities, including a "Parked Blind Spot Warning," to Cybertruck owners. However, AI's predictive capabilities still have limitations; for instance, an attempt to use AI to predict the Masters golf winner yielded vague, obvious, or incorrect suggestions, indicating it's not yet reliable for complex sports outcomes. In the business sector, Celerity acquired Ranger4 to enhance its IBM AI and automation services, aiming to provide clients with advanced software-led solutions for hybrid IT environments.
Governments and educational institutions are actively responding to AI's growing influence. Florida is implementing proactive policies to prevent the energy and water costs of massive data centers from burdening families, prioritizing citizen well-being. The Haryana government in India is mandating AI training for all state employees through the iGOT Karmayogi platform, covering Generative AI and tools like Microsoft Copilot to modernize administration. Additionally, Alabama schools are introducing an elective course on AI to educate students about the evolving technology, reflecting a broader trend in education. In a different context, Anthropic's resistance to certain military applications of its AI led to a breakdown in trust with the Pentagon.
Public perception of AI is also evolving, with human writers expressing strong opposition to AI-generated content, arguing it lacks genuine emotion and authentic voice. Publishers like Pushcart Press are wary, say they can often distinguish AI-written work, and are even considering legal action for fraud. On social platforms, Bluesky users have been quick to blame "vibe coding," shorthand for AI-assisted software development, for service disruptions, with skepticism growing after the company admitted using AI tools and announced an AI chatbot named Attie. This highlights a growing distrust among users who associate AI with potential failures, despite developers emphasizing AI as a supportive tool.
Key Takeaways
- Scale AI, partly owned by Meta, uses gig workers to collect and label data for AI training, raising ethical concerns over disturbing personal data scraping.
- Older workers, facing job market struggles, are finding employment in data annotation, training AI models like ChatGPT and Gemini.
- Tesla's FSD (Supervised) v14.3 update introduces a new AI system that reacts 20% faster, utilizing "fleet learning" and extending capabilities to Cybertrucks.
- AI demonstrated limitations in complex predictions, failing to accurately forecast the Masters golf tournament winner.
- Florida is implementing proactive AI policies to prevent data center energy and water costs from burdening citizens, prioritizing community well-being.
- The Haryana government mandates AI training for all state employees, including Generative AI and Microsoft Copilot, to modernize administration and enhance efficiency.
- Human writers express strong opposition to AI-generated content, arguing it lacks genuine emotion, with publishers considering legal action for fraud.
- Bluesky users attribute platform issues to "vibe coding," reflecting skepticism and distrust toward AI-assisted development.
- Anthropic resisted certain Pentagon applications of its AI, leading to a breakdown in trust and highlighting ethical considerations in military AI use.
- Alabama schools are introducing an elective AI course, reflecting a growing trend in integrating AI education into curricula.
Meta AI firm uses gig workers for questionable data scraping
Scale AI, a company partly owned by Meta, hired workers to train AI systems. These workers, called 'taskers,' reported being asked to collect and label disturbing personal data from the internet, including pornography and images of dog feces. Many taskers felt morally conflicted yet desperate for the work, fearing they were training AI to replace them. Scale AI stated it does not use child abuse material and avoids explicit pornography, but the practice of scraping personal data for AI training raises ethical concerns.
Skilled older workers find jobs training AI amid job market struggles
Many experienced older workers are struggling to find jobs in their fields and are turning to AI training as a last resort. Patrick Ciriello, 60, with a master's degree, lost his job and couldn't find new work, even for entry-level positions. He eventually found work training AI models, a growing field where experts can earn well, but for many, it's a temporary fallback. This work, known as data annotation, involves labeling information to train AI like ChatGPT and Gemini. The trend highlights the difficulties older workers face in the current job market.
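For readers unfamiliar with what this work actually involves, the sketch below is a minimal, hypothetical illustration of one common annotation task: ranking candidate model responses so the preferences can be converted into training pairs. The record layout, field names, and function are assumptions made for illustration, not any vendor's actual format.

```python
# Hypothetical sketch of a preference-ranking annotation task, one common
# form of data-annotation work. The schema here is an illustrative
# assumption, not a real annotation vendor's format.

from itertools import combinations

# What an annotator might see: a prompt plus candidate model responses.
task = {
    "prompt": "Explain photosynthesis to a 10-year-old.",
    "responses": [
        {"id": "a", "text": "Plants eat sunlight ..."},
        {"id": "b", "text": "Photosynthesis is the process ..."},
    ],
}

# What the annotator produces: a ranking from best to worst.
annotation = {"ranking": ["b", "a"]}

def to_preference_pairs(task, annotation):
    """Convert a human ranking into (chosen, rejected) pairs, the format
    typically used to train a reward model."""
    texts = {r["id"]: r["text"] for r in task["responses"]}
    pairs = []
    # The ranking list is ordered best-first, so each earlier item is
    # preferred over each later one.
    for chosen_id, rejected_id in combinations(annotation["ranking"], 2):
        pairs.append({
            "prompt": task["prompt"],
            "chosen": texts[chosen_id],
            "rejected": texts[rejected_id],
        })
    return pairs

print(to_preference_pairs(task, annotation))
```

Thousands of such human judgments, aggregated across annotators, are what give models like the ones named above their sense of which answers people prefer.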
AI fails to predict Masters golf winner accurately
The author tried using AI to predict the winner of the Masters golf tournament, but the results were disappointing. Initial AI responses were vague, stating 'No one knows for sure.' When pressed, the AI suggested top players like Scottie Scheffler and Xander Schauffele, obvious choices. Further attempts to elicit less obvious picks produced incorrect suggestions, including players not even qualified for the tournament. The experience suggests AI is not yet a reliable tool for complex predictions like sports outcomes.
Tesla FSD v14.3 boosts speed and uses fleet learning
Tesla has released its FSD (Supervised) v14.3 update, featuring a completely new AI system that makes the car react 20% faster. This update uses 'fleet learning,' meaning the AI learns from difficult scenarios encountered by millions of Teslas worldwide, improving its ability to handle complex situations. Cybertruck owners will now have the same FSD capabilities as other models, including a new 'Parked Blind Spot Warning.' While parking features are improved, Summon has not been updated. Future updates will focus on better pothole detection and enhanced driver monitoring.
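Tesla has not published the internals of fleet learning, but the general pattern it describes, vehicles flagging hard scenarios locally and a central service aggregating them for retraining, can be sketched roughly as below. Every class name and threshold here is an illustrative assumption, not Tesla's implementation.

```python
# Rough, hypothetical sketch of the "fleet learning" pattern: each vehicle
# flags scenarios where the planner was uncertain or the driver intervened,
# and a central service collects those clips for retraining. Names and
# thresholds are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class Clip:
    vehicle_id: str
    scenario: str          # e.g. "unprotected left turn"
    uncertainty: float     # planner confidence gap, 0..1
    intervention: bool     # did the driver take over?

@dataclass
class TrainingQueue:
    clips: list = field(default_factory=list)

    def ingest(self, clip: Clip, uncertainty_threshold: float = 0.7):
        # Keep only "interesting" clips: interventions or low-confidence
        # moments; routine driving is discarded to limit upload volume.
        if clip.intervention or clip.uncertainty >= uncertainty_threshold:
            self.clips.append(clip)

queue = TrainingQueue()
queue.ingest(Clip("veh-001", "unprotected left turn", 0.85, False))  # kept
queue.ingest(Clip("veh-002", "highway cruise", 0.05, False))         # dropped
queue.ingest(Clip("veh-003", "construction zone", 0.40, True))       # kept
print(len(queue.clips))  # 2
```

The design point is that rare edge cases, multiplied across millions of cars, become a steady stream of training data no single test fleet could collect.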
Florida leads on AI policy, protecting citizens from data center costs
Florida is taking a proactive stance on Artificial Intelligence by implementing policies to manage the impact of data centers. Governor Ron DeSantis and state Republicans are working to prevent the costs of massive data center energy and water consumption from burdening Florida families. While Big Tech influences Washington, Florida is enforcing fairness and transparency. Lawmakers are developing regulations to ensure AI innovation is sustainable and doesn't negatively affect utility ratepayers. This approach prioritizes citizens and community well-being over unchecked corporate growth.
Celerity acquires Ranger4 to boost AI and automation services
IT services provider Celerity has acquired Ranger4, a company specializing in IBM AI and automation software. This move expands Celerity's offerings in automation, cost management, and AI-driven performance insights for hybrid IT environments. Ranger4's directors, Malcolm Namey and Steve Green, will join Celerity's leadership team. The acquisition is part of Celerity's growth strategy, aiming to provide clients with enhanced value and scale through software-led automation and AI services. This integration strengthens Celerity's position in the market for hybrid IT solutions.
Haryana government mandates AI training for all state employees
The Haryana government is requiring all state employees to undergo Artificial Intelligence (AI) training through the iGOT Karmayogi platform. The initiative aims to modernize the state's administration by equipping officials with digital skills, including Generative AI and AI-driven governance. The self-paced, free courses cover various AI applications and productivity tools like Microsoft Copilot. Employees can register and log in to the iGOT Karmayogi portal to access the certified courses, which are intended to enhance efficiency and service delivery.
Human writers express concerns about AI-generated content
Human writers are voicing strong opposition to the increasing use of AI-generated content, comparing it to 'canned music in the elevator.' They argue that AI-generated text lacks the heart, soul, and careful word choice that comes from human experience. Publishers like Pushcart Press are also wary, stating they can often distinguish between human and AI-written work and are even considering legal action for fraud. The writers believe that while AI can mimic human expression, it cannot replicate genuine emotion or authentic voice.
Bluesky users blame 'vibe coding' for platform issues
Users of the social network Bluesky are quick to blame 'vibe coding,' shorthand for AI-assisted software development, for any service disruptions or bugs. When Bluesky experienced outages, many users immediately assumed AI-assisted development was at fault, expressing anger and distrust toward the technology. This sentiment intensified after the company admitted to using AI tools and announced an AI chatbot named Attie. While the Bluesky team emphasizes transparency and using AI as a tool to assist human developers, many users remain skeptical and associate AI with sloppy coding and potential failures.
Alabama schools add AI elective course
Schools in Alabama are introducing a new elective course focused on Artificial Intelligence (AI). This initiative aims to educate students about AI technology and its growing importance. The introduction of this course reflects a broader trend of integrating AI education into school curricula across the country.
Real-world AI use in conflict offers lessons
Recent conflicts have provided real-world evidence of how artificial intelligence performs in warfare, offering both encouraging and concerning lessons. Anthropic's resistance to certain Pentagon applications of its AI led to a breakdown in trust, with the Pentagon labeling the company an 'unacceptable risk.' Conversely, the Maven Smart System command-and-control program performed well in stable network conditions. However, future conflicts may challenge cloud-first AI because of potential communication disruptions, highlighting the need for edge-first AI deployment that enables real-time decisions without constant connectivity.
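The edge-first argument boils down to keeping inference local and treating the network as optional rather than required. A minimal sketch of that fallback pattern follows; every function here is an illustrative stand-in, not any real system's API.

```python
# Minimal, hypothetical sketch of "edge-first" AI deployment: decisions are
# made by a local model in real time, and the cloud is consulted only
# opportunistically when a link happens to be available. All functions are
# illustrative stand-ins, not a real system's API.

import random

def local_model_decide(observation: dict) -> str:
    """On-device model: always available, lower fidelity."""
    return "hold" if observation["threat_level"] < 0.5 else "alert_operator"

def cloud_model_refine(observation: dict) -> str:
    """Larger cloud model: better answers, but needs connectivity."""
    return "alert_operator_with_context"

def link_available() -> bool:
    # Stand-in for a real connectivity check. In contested environments
    # this is unreliable, which is the whole motivation for edge-first design.
    return random.random() > 0.5

def decide(observation: dict) -> str:
    decision = local_model_decide(observation)   # real-time, never blocks
    if link_available():
        # Opportunistically refine the decision when the network cooperates.
        decision = cloud_model_refine(observation)
    return decision

print(decide({"threat_level": 0.8}))
```

Inverting the dependency this way means a dropped link degrades answer quality instead of halting decision-making, which is the lesson the article draws from cloud-first systems under communication disruption.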
Sources
- Porn, dog poo and social media snaps: the ‘taskers’ scraping the internet for Meta-owned AI firm
- ‘There’s a lot of desperation’: skilled older workers turn to AI training to stay afloat
- Masters 2026: Our idiot prognosticator is seeking help, but is A.I. really the answer to this year's winner?
- Tesla Releases FSD 14.3: Fleet Learning, 20% Faster Reactions, and More
- President Trump should listen to Florida’s plans for AI
- Celerity acquires Ranger4 in automation & AI push
- iGOT Karmayogi: Haryana govt asks all state employees to undergo AI training via this portal; Check how to
- Opinion | Human Writers Who Rage Against A.I.
- Bluesky users are mastering the fine art of blaming everything on "vibe coding"
- Alabama schools introducing AI elective course
- Battlefield AI Lessons Learned