Anthropic wins fair use as OpenAI details agent loop

The artificial intelligence sector is seeing significant activity, from market consolidation to ongoing legal debates and new product developments. Legal AI firm Harvey recently acquired Hexus, a move that underscores the growing competition and strategic positioning within the legal technology space. This acquisition strengthens Harvey's standing as a major player in the field.

Meanwhile, the debate over fair use for AI and search engines continues, with copyright holders arguing against the use of their works for training. Courts have previously upheld copying for analysis and indexing as fair use, crucial for a free internet. The Electronic Frontier Foundation (EFF) supports this stance, arguing that expanding copyright to control learning from existing works would hinder research. A key court case, Bartz v. Anthropic, specifically supported that training AI models constitutes transformative fair use.

OpenAI has shed light on the mechanics of its Codex CLI, a local software agent, detailing its "agent loop." Michael Bolin explained how this loop facilitates collaboration between users, AI models, and tools to implement software changes. The process involves user input generating a prompt, the model performing "inference" to respond or request a tool call, and the loop continuing until a final "assistant message" is delivered. OpenAI also offers Codex Cloud and a VS Code extension, all leveraging this core logic.

Google DeepMind CEO Demis Hassabis recently questioned OpenAI's decision to integrate ads into ChatGPT, highlighting a perceived contradiction between Sam Altman's claims of impending Artificial General Intelligence (AGI) and the need for advertising revenue. Hassabis noted that Google DeepMind currently has no plans for ads in its Gemini app, suggesting that OpenAI's AGI claims might be overstated or that the company is prioritizing short-term financial gains. This financial approach potentially gives Google an advantage, allowing it to invest in AI without immediate ad revenue pressures.

Google's AI detection tool, SynthID, and its Gemini AI have faced scrutiny over inconsistent results in identifying AI-manipulated content. When checking a doctored image of Homeland Security Secretary Alejandro Mayorkas, initially posted by the White House X account, SynthID through Gemini first indicated manipulation by Google's AI. However, subsequent tests yielded conflicting outcomes, sometimes labeling the image as authentic or not AI-generated by Google. The White House confirmed the image was doctored, raising serious questions about SynthID's reliability.

Despite advancements, AI tools still have limitations, particularly in capturing the human element. A tech editor found that while an AI tool could summarize major topics from the World Economic Forum in Davos, it failed to convey subtle tones, anxieties, optimism, or informal interactions crucial to human reporting. Conversely, AI is proving highly effective in the chemical industry, where companies like BASF are using it to analyze vast datasets and predict material behavior, significantly accelerating the development of new products like faster-drying paint and better-smelling soap.

The commercial ambitions of AI labs are now being measured by a new five-level scale. This scale assesses the intent to monetize, not actual financial success. Prominent entities like OpenAI, Anthropic, and Gemini are categorized at Level 5, indicating they are already generating millions in revenue. Newer labs, such as Humans&, are at Level 3, with promising product ideas but no firm commitments. Interestingly, AI engineers are also experiencing a surge in popularity within the San Francisco dating scene, with matchmakers reporting specific requests for partners in the AI field, driven by high salaries and a perception of humility.

Key Takeaways

  • Legal AI company Harvey acquired Hexus, intensifying competition in the legal technology sector.
  • The Bartz v. Anthropic court case supported that training AI models constitutes transformative fair use, a key point in the ongoing copyright debate.
  • OpenAI's Codex CLI utilizes an "agent loop" to facilitate software changes through user input, model inference, and tool calls.
  • Google DeepMind CEO Demis Hassabis questioned OpenAI's ChatGPT ad strategy, contrasting it with Google's Gemini and implying potential exaggeration of AGI claims.
  • Google's SynthID and Gemini demonstrated inconsistent reliability in detecting AI-manipulated images, as shown by conflicting results on a White House-posted doctored image.
  • AI tools effectively summarize factual information but struggle to capture human elements like emotion and subtle interactions in journalistic reporting.
  • The chemical industry, including companies like BASF, is leveraging AI to accelerate new product development by analyzing data and simulating chemical reactions.
  • A new five-level scale categorizes AI labs by their commercial intent, placing OpenAI, Anthropic, and Gemini at Level 5 for already earning millions.
  • AI engineers are experiencing increased popularity in the San Francisco dating scene, driven by high salaries and professional success.
  • Thinking Machines Lab (TML), co-founded by Mira Murati, was initially assessed at Level 4 on the commercial intent scale, though recent changes may lead to a downgrade.

Legal AI giant Harvey buys Hexus

Legal AI company Harvey recently acquired Hexus. This move highlights the increasing competition within the legal technology sector. Harvey is a major player in legal AI, and this acquisition strengthens its position in the market.

Fair Use Debate Continues for AI and Search Engines

Copyright holders are again arguing that new technologies like AI and search engines infringe on their works. Courts have previously ruled that copying for analysis and indexing is fair use, which is vital for a free internet. The same argument now applies to AI training, which learns from patterns without replacing original texts. The EFF believes that expanding copyright to control learning from existing works would harm research. The Bartz v. Anthropic court case supported that training AI models is a transformative fair use.

OpenAI explains how Codex AI agent works

Michael Bolin from OpenAI explains the "agent loop" in Codex CLI, their local software agent. This agent helps users, AI models, and tools work together to make software changes. The process starts with user input, which creates a prompt for the model. The model then performs "inference" to generate a response or request a tool call. This loop continues until the model provides a final "assistant message" to the user, signaling the task is complete. OpenAI also offers Codex Cloud and a VS Code extension, all using this core logic.
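The loop described above can be sketched in a few lines. This is a minimal, self-contained illustration of the general pattern (user input, model inference, optional tool calls, final assistant message), not OpenAI's actual Codex CLI code; the `fake_model` and `run_tool` functions are hypothetical stand-ins for real model inference and tool execution.

```python
# Minimal sketch of an agent loop: the model either requests a tool call
# (whose result is fed back into the conversation) or returns a final
# assistant message, which ends the loop.

def fake_model(conversation):
    # Stand-in for model "inference": ask for one tool call, then finish.
    if not any(m["role"] == "tool" for m in conversation):
        return {"type": "tool_call", "name": "shell", "args": "ls"}
    return {"type": "message", "content": "Done: listed the files."}

def run_tool(name, args):
    # Stand-in for real tool execution (shell commands, file edits, ...).
    return f"(output of {name} {args})"

def agent_loop(user_input, model=fake_model):
    conversation = [{"role": "user", "content": user_input}]
    while True:
        reply = model(conversation)            # inference step
        if reply["type"] == "tool_call":       # model requested a tool
            result = run_tool(reply["name"], reply["args"])
            conversation.append({"role": "tool", "content": result})
        else:                                  # final assistant message
            return reply["content"]

print(agent_loop("list my files"))             # prints the final message
```

In the real CLI the inference step is a call to a hosted model and the tools actually run commands or edit files, but the control flow is the same: loop until the model emits a final assistant message.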

Google DeepMind CEO questions OpenAI's ad strategy

Google DeepMind CEO Demis Hassabis questioned OpenAI's decision to put ads in ChatGPT. He highlighted the contrast between Sam Altman's claims of upcoming Artificial General Intelligence (AGI) and the need for advertising revenue. Hassabis stated that Google DeepMind currently has no plans for ads in its Gemini app. He suggested that OpenAI's AGI claims might be exaggerated or that the company is prioritizing short-term revenue. This financial difference gives Google an advantage, allowing it to invest in AI without immediate ad revenue pressure.

Google AI tool struggles to detect its own doctored images

Google's AI detection tool, SynthID, showed inconsistent results when checking a doctored image posted by the White House X account. The image showed Homeland Security Secretary Alejandro Mayorkas crying, unlike a similar photo posted by Kristi Noem. Initially, SynthID through Gemini indicated the White House image was manipulated with Google's AI. However, later tests with Gemini and SynthID produced different outcomes, sometimes calling the image authentic or not made with Google's AI. These conflicting results raise serious questions about SynthID's ability to reliably identify AI-manipulated content. The White House even confirmed the image was doctored, stating "The memes will continue."

AI misses human touch in Davos reporting

A tech editor used an AI tool to summarize his reporting trip from the World Economic Forum in Davos, Switzerland. The AI successfully identified major topics like the global economy, the war in Ukraine, and the energy crisis. It also noted nuances such as US-China tension and focus on sustainability. However, the AI failed to capture the human elements of the event, including subtle tones, anxiety, optimism, and informal interactions. The journalist concluded that while AI is useful for analysis, it cannot replace a human's ability to connect with people and interpret feelings, which are crucial in journalism.

AI engineers become popular in San Francisco dating

AI engineers are now highly sought after in the San Francisco dating scene, much like they are in the job market. Matchmakers like Erica Arrechea of Cinqe and Amy Andersen, known as the Silicon Valley Cupid, report that women specifically ask to meet men in AI. This trend is due to San Francisco's focus on professional success, high salaries for AI roles, and a perception of AI workers as humble. Kinjal Nandy, an AI startup CEO, and Annie Liao, an AI startup founder, have both noticed increased attention in their dating lives. Wes Myers of Keeper also noted that Bay Area women prefer "nice guys" in AI, even if they are awkward.

New scale measures AI labs' money-making goals

A new five-level scale helps evaluate whether AI labs building foundation models are trying to make money. This scale measures ambition, not actual financial success. Level 5 includes big names like OpenAI, Anthropic, and Gemini, which are already earning millions. Newer labs like Humans& are at Level 3, having promising product ideas but no specific commitments. Thinking Machines Lab (TML), co-founded by Mira Murati, was initially seen at Level 4 with a clear roadmap, but recent changes might lead to a downgrade. World Labs, led by respected researcher Fei-Fei Li, focuses more on research than commercialization. The author notes that confusion about a lab's commercial intent can cause industry drama.

AI helps chemical industry create new products faster

The chemical industry is using artificial intelligence to speed up the creation of new products. Companies like BASF are using AI to analyze large amounts of data and predict how new materials will behave. This method can greatly reduce the time and cost of traditional research and development. AI algorithms can quickly examine molecular structures, simulate chemical reactions, and find good candidates for new formulas. The goal is to bring innovative products, like faster-drying paint and better-smelling soap, to market more quickly and in a more sustainable way.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

Legal AI Legal Technology AI Acquisitions Fair Use Copyright Law AI Training Search Engines OpenAI AI Agents Software Development Google DeepMind ChatGPT AI Advertising Artificial General Intelligence (AGI) Gemini AI Detection Image Manipulation Misinformation AI in Journalism Human-AI Interaction AI Engineers San Francisco Tech AI Startups AI Lab Commercialization Foundation Models Chemical Industry AI Product Innovation Data Analysis Sustainability in AI
