OpenAI's Greg Brockman sees AGI soon while Anthropic faces code issues

OpenAI President Greg Brockman anticipates Artificial General Intelligence (AGI) within a few years, stating the company sees a clear path to significantly improved AI models this year. Brockman defines AGI as a system capable of handling nearly any intellectual task a computer can perform, acknowledging it might be inconsistent in some areas but emphasizing its transformative potential for work.

Meanwhile, Anthropic faced issues with its Claude Code client, as efforts to remove the leaked source code from GitHub mistakenly took down legitimate forks. This incident highlights the challenges of controlling AI code dissemination. Separately, California Governor Gavin Newsom issued an executive order requiring state agencies to assess AI harm in contracts, pushing back against federal views that label AI startups, including Anthropic, as supply-chain risks. The order aims to encourage AI use while establishing safeguards against misuse.

AI adoption is expanding into public services, with law enforcement embracing the technology. Fargo's interim police chief, Travis Stefonowicz, advocates for AI use in policing. In Northeast Ohio, departments including Fairview Park, Middleburg Heights, and Olmsted Falls use Urban SDK's AI to anonymously track speeders via cell phone and GPS data, optimizing officer deployment. However, U.S. Immigration and Customs Enforcement (ICE) employs private contractors and AI to track immigrants, raising concerns about privacy and potential errors. On the security front, ISC2 has integrated AI security concepts into more than 50 cybersecurity certification exam topics and offers related continuing education.

The tech industry, particularly Silicon Valley, is undergoing significant disruption due to AI, especially generative AI in computer programming, which is reshaping workforce needs and business models. Historians Angus Burgin and Louis Hyman suggest that unlike past technological shifts, AI might impact highly educated workers more profoundly, stressing the importance of early policy decisions. To meet evolving demands, a new AI Product Manager course will begin on April 6, 2026, focusing on ethical AI use and collaboration to create user-centered AI products. Effective communication about AI's complexities is also crucial, with specialists like Meiko S. Patton advocating for storytelling to make debates more accessible.

Key Takeaways

  • OpenAI President Greg Brockman predicts Artificial General Intelligence (AGI) is achievable within the next couple of years.
  • Anthropic's attempt to remove leaked Claude Code client source code from GitHub mistakenly removed legitimate code forks.
  • California Governor Gavin Newsom issued an executive order requiring state agencies to consider AI harm in contract decisions, pushing back against federal labeling of AI startups like Anthropic as supply-chain risks.
  • ISC2 has integrated AI security concepts into over 50 cybersecurity certification exam topics and offers continuing education for professionals.
  • Fargo's interim police chief, Travis Stefonowicz, advocates for law enforcement to adopt artificial intelligence.
  • Police departments in Northeast Ohio are using Urban SDK's AI technology to anonymously track neighborhood speeders for more effective officer deployment.
  • U.S. Immigration and Customs Enforcement (ICE) uses private contractors and AI to track immigrants, raising privacy and surveillance concerns.
  • The tech industry, especially in Silicon Valley, is experiencing significant disruption from AI, particularly generative AI in computer programming.
  • A new AI Product Manager course is set to begin on April 6, 2026, focusing on blending product management skills with AI knowledge and ethical use.
  • Historians suggest that AI may impact highly educated workers more significantly than previous technological revolutions, emphasizing the importance of early policy decisions.

ISC2 adds AI security to cybersecurity certifications

ISC2, a leading nonprofit for cybersecurity professionals, now includes AI security in its certifications. The organization's new Exam Guidance for Artificial Intelligence shows how AI security concepts appear across more than 50 exam topics, covering areas such as AI ethics, data privacy, and system vulnerabilities. ISC2 spent three years refreshing its exams, with subject-matter experts validating that they reflect real-world needs. The update aims to ensure certified professionals can secure AI systems and manage risks as AI use grows, and ISC2 is also offering additional continuing education on AI security to help members advance their careers.

Fargo police chief says AI must be embraced

Travis Stefonowicz, the interim police chief for Fargo, believes law enforcement must adopt artificial intelligence. He has worked with the Fargo Police Department since 2002, and his appointment as interim chief was recently approved by the city commission. His perspective comes amid ongoing discussions about AI's role in public services.

Police use AI to track neighborhood speeders

Several police departments in Northeast Ohio, including Fairview Park, are using AI technology from Urban SDK to address speeding issues. The system analyzes data from cell phones and GPS to identify problem areas and times. This allows officers to be deployed more effectively. The technology is anonymous and does not track license plates. Middleburg Heights and Olmsted Falls are also adopting this AI tool to improve road safety.
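The hotspot analysis described above can be sketched as a simple aggregation over anonymized speed samples. This is purely illustrative: Urban SDK's actual system is proprietary, and the data shape, function name, and thresholds below are all assumptions, not details from the article.

```python
from collections import defaultdict

# Hypothetical sketch of identifying "problem areas and times" from
# anonymized speed samples. Each sample is (road_segment, hour_of_day,
# speed_mph) with no identifiers such as license plates.

def speeding_hotspots(samples, speed_limit_mph, min_samples=3):
    """Rank (segment, hour) buckets by the share of samples over the limit."""
    buckets = defaultdict(list)
    for segment, hour, speed in samples:
        buckets[(segment, hour)].append(speed)

    hotspots = []
    for (segment, hour), speeds in buckets.items():
        if len(speeds) < min_samples:
            continue  # too few observations to be meaningful
        share_over = sum(s > speed_limit_mph for s in speeds) / len(speeds)
        hotspots.append((segment, hour, share_over))

    # Highest share of speeders first: where and when to deploy officers.
    return sorted(hotspots, key=lambda h: h[2], reverse=True)

samples = [
    ("Main St", 8, 42), ("Main St", 8, 51), ("Main St", 8, 47),
    ("Oak Ave", 17, 33), ("Oak Ave", 17, 36), ("Oak Ave", 17, 31),
]
print(speeding_hotspots(samples, speed_limit_mph=35))
```

The point of the sketch is that location- and time-bucketed aggregates are enough to guide deployment without ever tracking individual vehicles.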

History offers lessons for the AI era

Historians Angus Burgin and Louis Hyman suggest that past technological revolutions, like the internet, offer valuable lessons for understanding artificial intelligence. They note that while AI is advancing rapidly, historical anxieties about job loss and societal disruption are recurring themes. Unlike previous shifts, AI may impact highly educated workers more significantly. They emphasize that early policy decisions are crucial for shaping AI's long-term impact.

OpenAI president sees AGI within years

OpenAI President Greg Brockman believes Artificial General Intelligence (AGI) is achievable within the next couple of years. He stated that the company sees a clear path to developing much-improved AI models this year. Brockman described AGI as a system capable of handling almost any intellectual task a computer can do. While acknowledging it might be 'jagged' or inconsistent in some areas, he emphasized its potential to transform work.

Storytelling helps explain complex AI

Communicators can use storytelling to help the public understand artificial intelligence, according to AI communications specialist Meiko S. Patton. She suggests turning technical issues into human dilemmas, exploring possibilities rather than making predictions, and balancing opportunities with risks. Storytelling can make complex AI debates more accessible and engaging for a wider audience. This approach helps translate AI's complexities into actionable understanding.

ICE uses AI for private immigrant tracking

U.S. Immigration and Customs Enforcement (ICE) is using private contractors and artificial intelligence to track immigrants. These contractors receive tens of thousands of names monthly and use data tools, online research, and AI to locate individuals for targeted enforcement. Companies like Bluehawk and BI Incorporated, a subsidiary of GEO Group, are involved. This system raises concerns about privacy, surveillance, and the potential for errors due to AI scaling.

Anthropic AI code takedown affects legitimate GitHub projects

Anthropic's effort to remove leaked Claude Code client source code from GitHub mistakenly removed many legitimate code forks. While Anthropic has since corrected the takedown, the company faces challenges in controlling the spread of its leaked code. Some developers are already using AI tools to recreate the code in different programming languages. The use of AI by Anthropic to write parts of the code also raises legal questions.

California governor orders AI risk review for contracts

California Governor Gavin Newsom has issued an executive order requiring state agencies to consider AI harm when making contract decisions. This order pushes back against federal actions labeling AI startups like Anthropic as supply-chain risks. State agencies must develop recommendations for contract standards related to AI, ensuring tools do not violate rights or privacy. The governor aims to encourage AI use while establishing guardrails against misuse.

AI is transforming Silicon Valley tech industry

Artificial intelligence is significantly disrupting the tech industry itself, altering how companies operate and the nature of tech jobs. Generative AI, particularly in computer programming, is leading companies to re-evaluate their workforce and business models. While the broader impact on other industries is still unfolding, Silicon Valley is experiencing a profound transformation driven by AI. This shift is changing the landscape for software development and tech employment.

AI Product Manager course begins April 6

A new AI Product Manager course is set to begin on April 6, 2026, focusing on blending product management skills with AI knowledge. This role is in high demand across industries like technology, finance, healthcare, and e-commerce. The course will cover ethical AI use and collaboration between business, design, and technical teams. The program aims to equip professionals to create valuable, user-centered AI products.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI Security, Cybersecurity Certifications, ISC2, AI Ethics, Data Privacy, System Vulnerabilities, Law Enforcement AI, Fargo Police Department, AI in Public Services, AI for Traffic Management, Urban SDK, Road Safety, AI and History, Technological Revolutions, Job Displacement, Societal Disruption, Artificial General Intelligence (AGI), OpenAI, AI Development, AI Communication, Public Understanding of AI, AI and Storytelling, Immigration and Customs Enforcement (ICE), AI for Surveillance, Private Contractors, AI Code Leaks, Anthropic, GitHub, AI and Copyright, AI Risk Management, California Executive Order, AI Contracts, AI and Privacy, AI in Tech Industry, Generative AI, AI Workforce Impact, AI Product Management, AI Ethics in Product Development, User-Centered AI
