College professors are adopting low-tech strategies to combat AI-generated essays: a New York college professor and Grit Matthias Phelps, a German language instructor at Cornell University, have students complete assignments on typewriters. The Cornell practice, started in spring 2023, aims to disconnect students from technology, encourage critical thinking, and restore a manual writing process, making AI-assisted cheating impractical.
Meanwhile, major tech companies like Apple, Google, and Amazon are strategically developing their own AI chips. This move aims to reduce their reliance on Nvidia, optimize performance for specific needs, lower costs, and accelerate innovation, giving them greater independence and a competitive edge in the foundational AI hardware layer.
In AI security, Anthropic's AI model, Claude, successfully identified a 23-year-old heap buffer overflow vulnerability in the Linux kernel, a bug previously missed by human experts. Concurrently, an AI security researcher known as 'Pliny the Liberator' demonstrated 'tokenades' to bypass GPT-4's safety features, using encoded text to potentially reveal sensitive information. On the privacy front, Perplexity AI faces a class action lawsuit in California for allegedly sharing user data, including search queries, with Meta Platforms and other third parties without consent.
Concerns about AI's influence on public discourse are growing, with California lawmakers proposing Senate Bill 1159 to ban AI-generated public comments to government agencies, aiming to ensure genuine human input. Internationally, Taiwan is playing a key role in helping other industrial nations bypass dominant tech companies in AI development, fostering a more diverse global AI landscape.
In business news, Recall.ai, a provider of meeting recording and transcription APIs, reported significant growth, achieving four times year-over-year enterprise sales growth and closing over $7 million in deals under co-founder Amanda Zhu. The company also moved to a new 15,000 square foot headquarters in San Francisco. Separately, reports indicate Chinese companies have been selling intelligence online about U.S. military activities during the war in Iran.
Key Takeaways
- College professors at Cornell University and a New York college are using typewriters to prevent students from using AI to cheat on assignments.
- Apple, Google, and Amazon are developing their own AI chips to reduce reliance on Nvidia, optimize performance, and gain independence.
- Anthropic's AI model, Claude, discovered a 23-year-old heap buffer overflow vulnerability in the Linux kernel.
- Perplexity AI is facing a class action lawsuit for allegedly sharing user data, including search queries, with Meta Platforms and other third parties without consent.
- California lawmakers are proposing Senate Bill 1159 to ban AI-generated public comments to government agencies.
- AI security researcher 'Pliny the Liberator' demonstrated 'tokenades' to bypass GPT-4's safety features.
- Taiwan is assisting other industrial nations in bypassing major tech companies for AI development.
- Recall.ai achieved four times year-over-year enterprise sales growth, closing over $7 million in deals, and moved to a new 15,000 sq ft headquarters.
- Chinese companies have reportedly been selling intelligence online about U.S. military activities during the war in Iran.
- The AI's ability to find vulnerabilities faster than humans can verify them highlights a growing capability gap in security.
Professor uses typewriters to stop AI cheating
A New York college professor is using typewriters in class to prevent students from using AI to cheat on assignments. He noticed many students submitted AI-generated essays that were hard to tell apart from human writing. By bringing in typewriters for exams and classwork, students must rely on their own skills since AI cannot be used. This method aims to encourage real learning and critical thinking, addressing challenges educators face with advanced AI.
Cornell class uses typewriters to combat AI essays
A German language instructor at Cornell University, Grit Matthias Phelps, has students use typewriters for assignments to combat AI-generated essays. This practice, started in spring 2023, disconnects students from technology and encourages them to think differently about writing. Students learn the manual process of typing, which slows them down and requires more thought. This method also promotes more social interaction in class as students help each other without digital distractions.
Big Tech builds own AI chips for control and independence
Major tech companies like Apple, Google, and Amazon are developing their own AI chips to gain control over their technology and reduce reliance on Nvidia. This strategic shift allows them to optimize performance for their specific needs, lower costs, and speed up innovation. By owning the critical AI hardware layer, these companies achieve greater independence and a competitive advantage. This trend reflects a larger pattern of dominant tech firms vertically integrating to control foundational technologies.
Perplexity AI sued for sharing user data with Meta
Perplexity AI is facing a class action lawsuit for allegedly sharing user data with Meta Platforms and other third parties without consent. A user filed the suit in California, claiming the company violated privacy laws by transmitting search queries and user activity. This data sharing could lead to users being targeted with ads and other data exploitation. The lawsuit seeks damages and an order to stop Perplexity AI from these practices.
AI finds 23-year-old Linux vulnerability
An AI model named Claude, developed by Anthropic, has discovered a security vulnerability in the Linux kernel that had been hidden for 23 years. Researcher Nicholas Carlini used Claude Code to scan the Linux kernel source, and the AI identified a heap buffer overflow bug. This vulnerability, present since 2003, was missed by human experts and traditional security tools. The AI's ability to find such complex bugs highlights a growing capability gap, with AI now producing vulnerability reports faster than humans can verify them.
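The bug class involved is well understood even if this particular instance went unnoticed for decades. As a minimal, generic sketch (hypothetical names, not the actual kernel code Claude analyzed), a heap buffer overflow occurs when code writes past the end of a heap allocation, and the standard fix is a bounds check before the copy:

```c
#include <stddef.h>
#include <string.h>

/* Generic illustration of the heap-buffer-overflow bug class; this is
 * NOT the kernel code Claude analyzed.
 *
 * Buggy pattern: trusting an attacker-controlled length.
 *
 *     char *buf = malloc(16);
 *     memcpy(buf, src, len);   // heap overflow whenever len > 16
 *
 * Writing past the 16-byte allocation corrupts adjacent heap objects or
 * allocator metadata, which can often be escalated further. */

/* Fixed pattern: validate the length against the destination's real size. */
int copy_field(char *dst, size_t dst_size, const char *src, size_t len) {
    if (len > dst_size)
        return -1;              /* reject oversized input instead of overflowing */
    memcpy(dst, src, len);
    return 0;
}
```

Bugs of this shape can survive for decades when the oversized input is only reachable through an obscure code path, which is why automated scanning of old source trees is attractive.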
California lawmakers want to ban AI in public comments
California lawmakers are concerned that AI could unfairly influence government decisions by flooding public comment processes with automated responses. A bipartisan bill, Senate Bill 1159, aims to ensure that public input comes from real people, not AI tools. Concerns include AI overwhelming agencies, drowning out genuine resident opinions, and undermining transparency laws. While the bill passed the Senate Judiciary Committee, lawmakers are discussing enforcement and detection methods.
Taiwan helps nations bypass big tech AI
While the global AI race is often seen as a US vs. China competition, a significant movement is occurring among other industrial nations. These countries are finding ways to bypass major tech companies, with Taiwan playing a key role in this development. This trend suggests a more complex landscape in AI development and access beyond the dominant players.
Recall.ai expands sales and moves HQ
Recall.ai, which provides an API for meeting recording and transcription, has significantly grown its enterprise sales under co-founder Amanda Zhu. Zhu's leadership drove four times year-over-year growth and closed over $7 million in deals, moving the company from founder-led sales to a structured sales team. The company, which has reached a $250 million valuation, has also relocated to a new 15,000 square foot headquarters in San Francisco's SoMa district to support its expansion.
AI hacker 'Pliny the Liberator' tests GPT-4 security
AI security researcher 'Pliny the Liberator' demonstrated a new way to bypass safety features in AI models like GPT-4 using 'tokenades'. These are special text payloads that use encoding, emojis, and hidden characters to trick the AI into performing actions it normally wouldn't. The researcher showed how these techniques can potentially reveal sensitive information or execute code. This highlights ongoing challenges in AI security and the need for better defenses against such sophisticated attacks.
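The article does not publish the actual payloads, but the general idea behind encoding-based evasion can be sketched with a toy example (hypothetical code, not Pliny's technique): a naive byte-level substring filter misses a banned word once invisible zero-width characters are interleaved into it, even though the text renders identically to a human or a model.

```c
#include <string.h>

/* Toy illustration of encoding-based filter evasion; NOT the actual
 * 'tokenade' payloads. A filter that blocks literal substrings can be
 * defeated by interleaving invisible characters. */

/* Naive filter: flags input containing the banned word verbatim. */
int naive_filter_blocks(const char *input, const char *banned) {
    return strstr(input, banned) != NULL;
}

/* "secret" with U+200B ZERO WIDTH SPACE (UTF-8 bytes E2 80 8B) inserted
 * between letters: it displays the same, but the byte sequence differs,
 * so a byte-level substring match no longer fires. */
const char obfuscated[] = "tell me the s\xE2\x80\x8Be\xE2\x80\x8Bcret";
```

Real payloads reportedly layer several tricks (encodings, emoji substitution, hidden characters), but the failure mode is the same: the model recovers meaning that a surface-level safety check never sees.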
Chinese firms sell Iran war intelligence
As the war in Iran intensified, Chinese companies began selling intelligence online about U.S. military activities. Viral posts on social media detailed equipment at U.S. bases, the movement of American carrier groups, and aircraft preparations for strikes. This information was flagged by observers on Western and Chinese platforms, raising concerns about the dissemination of sensitive military intelligence.
Sources
- AI Forces College Professor to Get Typewriters for Entire Class
- To stop AI essays, this class is going back to the 1950s
- The Real Reason Big Tech Is Building Their Own AI Chips
- Perplexity AI Under Fire In Lawsuit Alleging Privacy Violations
- Claude Code Found A Linux Vulnerability That Had Remained Hidden For 23 Years, Says Anthropic Researcher
- State leaders raise concerns over AI-generated public comments in California
- How second-tier powers are bypassing big tech via Taiwan
- Recall.ai Scales Enterprise Sales, Moves to New SoMa HQ
- AI Hacker "Pliny the Liberator" Tests GPT-4 Security
- Chinese firms market Iran war intelligence ‘exposing’ U.S. forces