Google is rolling out significant artificial intelligence enhancements, notably within Google Maps. The platform's largest navigation update in a decade introduces Immersive Navigation, offering detailed 3D views of landmarks and road specifics. Additionally, Google Maps now features Ask Maps, an AI chatbot powered by Gemini, designed to plan trips and answer complex location-based questions by leveraging user preferences and map data. These features are currently available on Android and iOS in the US and India, with broader expansion planned.
Beyond consumer applications, AI's influence is reshaping industries and posing new challenges. Jeff Bleich, General Counsel at Anthropic, predicts AI will end the billable hour in the legal profession by automating tedious tasks, pushing firms towards results-based billing. However, the development of AI itself relies heavily on often underpaid global labor, with workers in regions like Kenya performing essential data labeling and content moderation for products such as ChatGPT, highlighting the human cost behind 'artificial intelligence'.
Enterprises face hurdles in training robust AI models due to data privacy, security, and compliance restrictions, often exacerbated by employees using unapproved tools. Experts like Nigel Vaz, CEO of Publicis Sapient, emphasize that companies frequently misunderstand AI strategy, treating it as a mere tech upgrade rather than a fundamental shift in operations and decision-making. Vaz advocates for embedding ethical commitments directly into AI systems and adopting continuous testing over linear planning, while others stress the need for strong collaboration between IT, security, and risk management teams to scale AI confidently.
The rapid advancement of AI also raises significant societal and regulatory concerns. Researchers from the University of Cambridge are calling for stricter regulations on AI-powered toys, citing instances where a chatbot toy named Gabbo confused young children and gave dismissive responses, underscoring the need for 'psychological safety' standards. Furthermore, the ability of AI tools to generate explicit images without consent highlights the urgent need for stronger safeguards, particularly for vulnerable populations, with calls for states such as South Carolina to strengthen laws and ensure accountability for AI misuse. Meanwhile, Google's AI Overviews in search results are increasingly linking back to its own properties, raising concerns among publishers about reduced traffic.
Key Takeaways
- Google Maps launched a major update featuring Immersive Navigation and Ask Maps, an AI chatbot powered by Gemini, for enhanced trip planning and 3D visuals.
- Enterprises face significant challenges in training AI models due to data privacy, security, and compliance issues, necessitating a 'trust program' approach.
- The development of AI, including products like ChatGPT, is heavily dependent on often underpaid global labor for tasks like data labeling and content moderation.
- Jeff Bleich, General Counsel at Anthropic, believes AI will fundamentally change the legal profession by ending the billable hour model.
- Nigel Vaz, CEO of Publicis Sapient, argues that companies must view AI as a strategic shift in operations and decision-making, not just a technology upgrade, and embed ethical commitments.
- AI's rapid advancement creates risks, such as the generation of non-consensual explicit images, prompting calls for stronger safeguards and accountability, especially for women and children.
- Researchers from the University of Cambridge advocate for stricter regulations on AI-powered toys, citing potential psychological harm and confusion for children.
- The hiring landscape for engineers is evolving: with AI agents handling much of the coding, the emphasis is shifting from raw coding skill to product taste, architectural judgment, and the ability to orchestrate AI agents.
- Google's AI Overviews in search results are increasingly linking back to Google's own properties, raising concerns among third-party publishers about reduced traffic.
- Razer showcased new AI-powered products, including an AI desktop companion, at the Game Developers Conference (GDC) 2026.
Google Maps gets major AI upgrade and new navigation view
Google Maps is introducing its biggest navigation update in ten years, featuring a new Immersive Navigation mode with 3D views of landmarks and road details. It also launched Ask Maps, an AI chatbot powered by Gemini, which can plan trips and answer complex location questions. Ask Maps draws on users' personal preferences and Google Maps data to provide tailored suggestions. Both features are rolling out now on Android and iOS in the US and India, with the web and other platforms to follow.
Google Maps adds AI chatbot Ask Maps and Immersive Navigation
Google Maps has launched a significant update including a new AI-powered chatbot called Ask Maps and a redesigned navigation experience called Immersive Navigation. Ask Maps, using Gemini AI, answers complex questions and personalizes results based on user data. Immersive Navigation provides detailed 3D visuals for driving, showing buildings, lanes, and traffic lights to help drivers anticipate turns. The features are rolling out on Android and iOS in the US and India, with wider availability coming later.
Enterprises face AI training hurdles due to data privacy
Enterprises struggle to train better AI models because sensitive internal data is restricted by privacy, security, and compliance rules. Employees sometimes use unapproved tools, creating risks of data leaks and unclear accountability. Shane Tierney explains that progress is being made with structured training and privacy-preserving methods. He advises companies to treat AI as a trust program, with CISOs acting as 'Chief Trust Officers' to ensure governance and transparency. Providing safe, approved AI tools is also key to innovation.
IT security and risk teams must unite for AI success
To succeed with AI, businesses need strong collaboration between IT, security, and risk management teams. AI and automation introduce new complexities and risks like cybersecurity threats and data privacy issues. Jay Reid of Crowe emphasizes that independent operations create tension between innovation and control. An integrated model with shared data and unified actions provides common visibility and embeds governance into workflows. This partnership helps organizations scale AI confidently and manage risks effectively.
AI relies on underpaid global labor, not just technology
The reality behind artificial intelligence is often the hidden, underpaid labor of workers worldwide, not just advanced technology. Workers in places like Kenya spend long hours labeling data, moderating content, or acting as AI chatbots for low pay. This 'ghost work' is essential for building AI products like ChatGPT. Lawyers and data labelers highlight that AI is an extractive technology dependent on this brutal labor. They argue that 'artificial intelligence' is largely marketing, while the human effort is very real and often exploited.
South Carolina needs stronger AI safeguards for families
Artificial intelligence is advancing rapidly, creating risks that require stronger safeguards, especially for women and children. AI tools can now generate explicit images from real photos without consent, causing significant harm and humiliation. The article stresses that technology companies must prevent exploitation, and when safeguards fail, leaders must act. South Carolina has an opportunity to lead by strengthening laws and ensuring accountability for AI misuse. This is about protecting children, not hindering innovation.
Companies misunderstand AI strategy, focusing too much on tech
Many companies approach AI as a simple technology upgrade, missing its potential to fundamentally change business operations. Nigel Vaz, CEO of Publicis Sapient, argues that AI reshapes decision-making, work processes, and strategy evolution. He advises against treating AI as a standalone project and emphasizes the need for constant testing and iteration over linear planning. Vaz also stresses that ethical commitments must be built into AI systems, not just stated as guidelines, to be effective.
AI-native engineers need judgment over coding skills
The hiring process for engineers is changing as AI agents handle most coding tasks. The AI coding company Augment now prioritizes engineers with product taste, architectural judgment, and the ability to direct both humans and AI agents. The focus shifts from writing code to specifying intent, evaluating tradeoffs, and orchestrating AI. Key skills include product and outcome taste, system and architectural judgment, agent leverage, communication, ownership, and learning velocity. Raw coding ability is no longer the primary differentiator for top engineering talent.
AI toys need stricter safety rules, researchers say
Researchers from the University of Cambridge are calling for stricter regulations on AI-powered toys after studying children's interactions with Gabbo, an AI chatbot toy. They found that the toy's responses could confuse young children during social development and were sometimes dismissive or unclear. The study highlights the need to consider 'psychological safety' alongside physical safety for toys. Parents are advised to supervise their children's use of AI toys, and stricter testing and standards are recommended before these products are sold.
Google AI search results often link back to Google
Google's AI Overviews in search results are increasingly linking back to Google's own properties, causing concern among website publishers. An analysis shows that a significant portion of citations in AI Mode lead to other Google search results, not external sources. While Google states these links help users explore related questions, experts worry this trend reduces traffic to third-party sites. This self-referential loop could harm publishers who rely on search traffic for their business.
AI will end the billable hour in law, says Anthropic lawyer
Jeff Bleich, General Counsel at Anthropic, believes artificial intelligence will end the dominance of the billable hour in the legal profession. He argues that AI tools eliminate tedious work, making the traditional model where lawyers are paid for time spent inefficient and misaligned with client interests. Bleich suggests that law firms need to adopt new billing models focused on strategy and results, not just hours worked. Other legal experts largely agree, seeing AI as a catalyst for change in legal billing.
Razer unveils AI desktop companion at GDC
At the Game Developers Conference (GDC) 2026, Razer showcased new AI-powered products and services. The company highlighted its future-facing tools, including an AI desktop companion. Razer's vice president of software, Quyen Quach, demonstrated some of these upcoming innovations during press briefings.
Sources
- Google Maps gets its biggest navigation redesign in a decade, plus more AI
- Google launches ‘Ask Maps’ AI feature in major Google Maps overhaul
- Q&A: Tackling the major challenges limiting enterprise AI training
- A three-way partnership built around IT, security, and risk drives AI-era success
- Artificial intelligence is just underpaid human labor
- More safeguards needed to protect SC families from AI exploitation
- Most companies are thinking about AI strategy the wrong way
- How we hire AI-native engineers now: our criteria
- Researchers call for tougher scrutiny on AI-powered products
- Google's AI Searches Love to Refer You Back to Google
- AI will kill the billable hour in law, Anthropic top lawyer says
- Razer flexes AI desktop companion and other new products at GDC