Several developments are unfolding across the AI landscape. Google is partnering with Goodwill to provide free AI training to 200,000 people in the U.S. and Canada through Google's AI Essentials course, which offers hands-on experience with tools like ChatGPT, Copilot, and Gemini; participants earn a Google certificate after completing the roughly 10-hour course. Meanwhile, a cybersecurity CEO warns that AI-powered fraud, which uses deepfakes and fake identities to steal millions from public benefit systems, is already a significant problem that demands modernized defenses. The Italian Data Protection Authority (Garante) is cautioning against using AI to analyze medical data without proper regulatory oversight, stressing that doctors must supervise AI systems. In manufacturing, the rise of agentic AI for automation introduces new security vulnerabilities, calling for adaptable security measures and AI oversight. To address broader AI risks, the Intentional Endowments Network has launched an initiative to help investors navigate these challenges. On a macroeconomic scale, some believe AI-driven productivity gains could help ease America's debt crisis. The EU's AI Act is also setting new standards, addressing data transparency gaps in the GDPR with measures like the Model Documentation Form (MDF) and Public Summary Template (PST). Portrait Analytics is partnering with Third Bridge to enhance AI-driven investment research by combining expert insights with AI analysis. Finally, Meta, under Mark Zuckerberg, is reconsidering its open-source AI strategy over safety concerns and a desire to protect its investments in AI technology.
Key Takeaways
- Google and Goodwill are partnering to offer free AI training to 200,000 people using Google's AI Essentials course.
- The Google AI Essentials course provides hands-on experience with generative AI tools like ChatGPT, Copilot, and Gemini.
- A cybersecurity CEO warns that AI-driven fraud, using deepfakes, is already stealing millions from public programs.
- The Italian Data Protection Authority (Garante) cautions against using AI to analyze medical data without regulatory oversight.
- Agentic AI in manufacturing introduces new security risks, requiring adaptable security measures.
- The Intentional Endowments Network has launched an initiative to help investors navigate AI risks.
- AI-driven productivity gains could potentially help address America's debt crisis.
- The EU's AI Act introduces new data transparency rules beyond GDPR with measures like the Model Documentation Form (MDF).
- Portrait Analytics and Third Bridge are partnering to provide AI-driven investment research with expert insights.
- Meta is shifting its AI strategy and may limit open-source releases due to safety concerns.
Goodwill and Google team up to offer free AI training
Goodwill and Google are working together to provide free AI training to 200,000 people in the U.S. and Canada. Goodwill will use Google's AI Essentials course to teach people important AI skills for today's jobs. The course takes less than 10 hours and gives hands-on experience with generative AI. Participants will receive a certificate from Google after completing the course. This program builds on Goodwill's existing digital skills programs with Google, which have helped over 400,000 Americans find good jobs since 2017.
Goodwill and Google partner for free AI skills course
Goodwill Industries and Google are partnering to train 200,000 people in the U.S. on AI skills. Goodwill will offer Google's AI Essentials course for free, which teaches how to use generative AI in everyday work. The 10-hour course includes videos, readings, and interactive exercises using tools like ChatGPT, Copilot, and Gemini. Participants will receive a certificate from Google upon completion. Goodwill has been offering Google's digital skills programs since 2017, helping over 400,000 Americans get well-paying jobs.
Cybersecurity CEO warns AI fraud is already here, not coming
A cybersecurity CEO who advises over 9,000 agencies says AI-powered fraud is already happening, countering Sam Altman's warning that an AI fraud crisis is coming. Criminals are using AI, including deepfakes and fake identities, to steal millions from public benefit systems, with tactics that are more advanced and automated than before. The CEO has testified before the U.S. House of Representatives about the increasing speed and scale of AI fraud. He suggests modernizing defenses with better identity verification and real-time data analysis to combat these threats.
Cybersecurity CEO says AI fraud is here now, not in the future
Haywood Talcove, a cybersecurity CEO, argues that AI fraud is already a problem, not a future threat as warned by Sam Altman. He says AI is used to steal millions from public programs using deepfakes and fake documents. These methods are more advanced and automated than previous fraud attempts. Talcove works with over 9,000 agencies and has testified about the rise of AI-driven fraud. He calls for better tools and infrastructure to defend against these attacks.
Italy's Data Authority warns about AI risks with health data
The Italian Data Protection Authority (Garante) issued a warning about the risks of using AI to analyze medical data. People are using AI to interpret medical results, like X-rays, but the Garante says to be careful about sharing health data with AI providers. These AI systems may not be safe for medical use because they haven't been checked by regulators. The Garante advises users to read AI providers' privacy policies to see if their data is stored or deleted. They also emphasize the need for doctors to oversee AI systems to protect against health risks.
Manufacturers face new security risks with AI-driven automation
Manufacturers are using agentic AI for automation, which allows AI to make decisions on its own. While this can improve productivity, it also creates new security risks. Agentic AI can change production plans, access sensitive networks, or violate safety rules. Traditional security methods don't work well because AI can act in unexpected ways. Manufacturers need to create adaptable security measures, combine AI oversight with monitoring, and simulate potential AI behavior to stay safe.
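To make the "adaptable security measures with oversight" idea concrete, here is a minimal sketch (not from the article) of one such control: a default-deny allow-list guard that checks each agent action against policy and records everything for audit. The action names are hypothetical.

```python
# Hypothetical action policy: safe, reversible actions run freely;
# consequential ones are held for human sign-off; everything else is denied.
ALLOWED_ACTIONS = {"read_schedule", "propose_plan"}
REQUIRES_APPROVAL = {"change_production_plan"}

def guard(action, audit_log):
    """Return True if the agent may act now; log every decision for oversight."""
    if action in ALLOWED_ACTIONS:
        audit_log.append((action, "allowed"))
        return True
    if action in REQUIRES_APPROVAL:
        audit_log.append((action, "held_for_approval"))
        return False
    audit_log.append((action, "blocked"))  # default-deny for unknown actions
    return False

log = []
for act in ["read_schedule", "change_production_plan", "open_plc_port"]:
    guard(act, log)
print(log)
```

The default-deny stance matters because, as the article notes, agentic AI can act in unexpected ways; enumerating what is permitted scales better than enumerating what is forbidden.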
Initiative launched to help investors navigate AI risks
The Intentional Endowments Network has launched an initiative to help investors understand and manage the risks associated with artificial intelligence.
Can AI solve America's debt crisis through productivity gains?
America's debt is growing, but AI could help by boosting productivity. AI-driven productivity could increase GDP, reduce inflation, and lower interest rates. This could lead to higher tax revenues and less government borrowing. AI is compared to other major technologies like the steam engine and the internet in its potential impact. Even small productivity gains from AI could help stabilize the debt-to-GDP ratio. While AI may replace some jobs, it's also expected to create new ones.
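The stabilization argument can be sketched with standard debt-to-GDP arithmetic: each year the deficit adds to the ratio while nominal growth dilutes the existing stock. The figures below are illustrative assumptions, not numbers from the article.

```python
# Illustrative only: how a small AI-driven bump to GDP growth changes the
# debt-to-GDP trajectory. All parameter values are hypothetical.

def debt_to_gdp_path(ratio, deficit, growth, years):
    """Evolve debt/GDP: add the yearly deficit (as a share of GDP),
    then let nominal GDP growth dilute the stock."""
    path = [ratio]
    for _ in range(years):
        ratio = (ratio + deficit) / (1 + growth)
        path.append(ratio)
    return path

# Start at 100% debt/GDP with a 5%-of-GDP deficit.
baseline = debt_to_gdp_path(ratio=1.0, deficit=0.05, growth=0.04, years=30)
with_ai = debt_to_gdp_path(ratio=1.0, deficit=0.05, growth=0.05, years=30)

print(f"after 30 years, baseline: {baseline[-1]:.2f}, with AI bump: {with_ai[-1]:.2f}")
```

With these assumed numbers, a single percentage point of extra growth is the difference between the ratio drifting toward 125% and holding flat at 100%, which is the sense in which "even small productivity gains" matter.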
EU's AI Act addresses data transparency gaps in GDPR
The EU's Artificial Intelligence Act (AI Act) introduces new transparency rules for AI training data. These rules aim to improve data protection rights and help regulators understand AI models. The AI Act includes a Model Documentation Form (MDF) for regulators and a Public Summary Template (PST) for the public. The MDF provides detailed data on training datasets, while the PST offers a general overview. These measures go beyond the GDPR, which doesn't adequately address the scale and complexity of AI training data.
Portrait Analytics partners with Third Bridge for AI investment research
Portrait Analytics and Third Bridge have partnered to provide AI-driven investment research with expert insights. The partnership combines Third Bridge's expert interview library with Portrait's AI platform. This will allow investors to use AI to find real-time expert opinions while researching investments. Third Bridge subscribers on Portrait's platform can now analyze expert conversations to find investment opportunities. The goal is to give decision-makers access to high-quality insights for better investment decisions.
Meta shifts AI strategy, may limit open-source releases
Meta CEO Mark Zuckerberg is changing his approach to open-source AI. Meta may not release its most powerful AI models as open-source in the future. Zuckerberg says this is due to safety concerns as AI becomes more powerful. Meta is also investing heavily in AI and may want to keep its best technology proprietary. Some experts believe Meta's open-source approach helped competitors, influencing this shift in strategy.
Sources
- Goodwill® and Google Offer Free AI Essentials Training in North America
- Goodwill Industries partners with Google for free AI training course
- I’m a cybersecurity CEO who advises over 9,000 agencies and Sam Altman is wrong that the AI fraud crisis is coming—it’s already here
- Italian Garante Adopts Statement on Health Data and AI
- Safeguarding Against Agentic AI Security Vulnerabilities in Manufacturing
- Intentional Endowments Network launches initiative to help investors navigate AI risk
- How artificial intelligence could solve America's debt crisis
- Addressing GDPR’s Shortcomings in AI Training Data Transparency with the AI Act
- Portrait Analytics and Third Bridge Partner to Deliver AI-Driven Investment Research with Human Insight
- Meta CEO Mark Zuckerberg is backsliding on the company's open-source approach to AI. It's a sensible pivot.