Google is making significant strides with its Gemini AI, planning to roll out new memory features for Gemini Deep Research by 2026. These features aim to save users roughly 2.5 hours each month by remembering project details and user preferences, and by integrating information across Google apps like Docs and Sheets and even third-party platforms such as Notion and Slack. The technology uses contextual embeddings to build a knowledge graph of a user's work, though it also raises important privacy questions. From March to November 2025, Gemini Deep Research proved highly effective for writers and journalists, cutting the time to create an investigative story outline from 3.2 hours to 68 minutes. The tool, available with a Google AI Pro subscription, can analyze many kinds of sources and, as of November 2025, can search across Google Workspace apps like Gmail and Drive.
In the competitive AI landscape of 2025, Google's Gemini (the official name for what was Bard) leverages Google Search directly for real-time information, offering faster, more relevant results with source links than OpenAI's ChatGPT (GPT-4o), which relies on Microsoft Bing. Compared with Perplexity Pro, Gemini Deep Research focuses on in-depth planning, often draws on high-authority sources, and produces more detailed, report-like outputs with a higher average depth score, making it more reliable for high-stakes questions, while Perplexity Pro prioritizes speed. Gemini Deep Research also complements NotebookLM 2.0: Gemini excels at market scans and exploring new topics, while NotebookLM is better for deep dives into specific uploaded sources.
Beyond Google, AI adoption is transforming other industries. Akkodis, a global digital engineering firm, announced successful AI projects on November 10, 2025, including helping a healthcare manufacturer cut production scheduling time from five days to mere seconds. Akkodis also partnered with Microsoft Worldwide Learning to train engineers and data scientists in responsible AI for the banking sector, and Akkodis Japan's program made over 2,000 employees AI-proficient in 10 months, saving more than 15,000 hours annually by automating tasks.
Investor Todd Ahlsten, Chief Investment Officer at Hirtle Callaghan, cautions that the excitement around AI investing is outpacing its foundational business growth. He advises investors to diversify beyond major players like Nvidia and Microsoft, pointing to opportunities in specialized hardware companies such as Broadcom, AMD, and Micron, and he notes that software companies like Microsoft, Salesforce, and Workday are well positioned to boost growth by integrating AI. He also warns of potential risks, including power shortages for AI data centers and inflated expectations for companies like OpenAI. OpenAI itself is urging the U.S. government to significantly enhance the nation's AI infrastructure, advocating expanded tax credits for computer chip manufacturing and faster approvals for projects like data centers to maintain global leadership.
Globally, efforts to build AI skills are underway. The Saudi Data and Artificial Intelligence Authority (SDAIA) trained over one million Saudi citizens in AI skills through its SAMAI initiative, reaching nine percent of the working-age population, with 52 percent of participants being women. Imagi, Lovable, and OpenAI have also launched AI coding lessons for classrooms worldwide, with OpenAI providing $1 million in credits for free access during Computer Science Education Week, aiming to reach 100 million students.
The rapid integration of AI also brings challenges. Doctors are concerned that patients increasingly trust AI for medical advice without understanding its limitations, sometimes more than their own physicians. While AI already assists in emergency rooms by processing data and speeding up documentation, medical professionals stress that it cannot replace human intuition or compassion, and that patients deserve transparency about when AI guides their care and who is ultimately responsible for treatment. Privacy is another major concern: LinkedIn recently adjusted its generative AI training plans after intervention from Ireland's Data Protection Commission (DPC), agreeing to clearer user notices, opt-out options, and reduced use of personal and sensitive data. AI also strains traditional data access controls through the "mosaic effect," where small, seemingly harmless pieces of data can be combined to reveal private details, suggesting a need for adaptive security models like Relationship-Based Access Control (REBAC).
Key Takeaways
- Google's Gemini Deep Research is slated to introduce memory features by 2026, aiming to save users 2.5 hours monthly by remembering project details and integrating across Google apps and third-party platforms.
- Gemini Deep Research, available with a Google AI Pro subscription, cut outline creation time for a complex investigative story from 3.2 hours to 68 minutes during its March-November 2025 testing phase with writers.
- In 2025, Google's Gemini (formerly Bard) leverages direct Google Search for real-time information and source links, giving it an advantage over OpenAI's ChatGPT (GPT-4o), which uses Microsoft Bing.
- Akkodis, a digital engineering firm, successfully used AI to cut production scheduling time from five days to seconds for a healthcare manufacturer and trained over 2,000 Akkodis Japan employees in AI, saving 15,000 hours annually.
- Todd Ahlsten of Hirtle Callaghan advises investors to diversify beyond major AI players like Nvidia and Microsoft, highlighting opportunities in specialized hardware companies such as Broadcom, AMD, and Micron, and software companies like Microsoft, Salesforce, and Workday.
- OpenAI is urging the U.S. government to boost AI infrastructure through expanded tax credits for computer chip manufacturing and faster approvals for AI projects like data centers to maintain global leadership.
- Saudi Arabia's SDAIA trained over one million citizens in AI skills through its SAMAI initiative, representing nine percent of the working-age population, with 52 percent of participants being women.
- Doctors are concerned that patients are over-relying on AI for medical advice, emphasizing that AI is a tool and not a substitute for professional diagnosis and treatment.
- LinkedIn modified its generative AI training plans following intervention from Ireland's Data Protection Commission, agreeing to clearer user notices, opt-out options, and reduced use of personal and sensitive data.
- AI challenges traditional data access controls due to the 'mosaic effect,' where AI systems can combine small data pieces to reveal private details, necessitating adaptive security models like Relationship-Based Access Control (REBAC).
Google Gemini gets smart memory features by 2026
Google is testing new memory features for Gemini Deep Research that will change how people work with AI by 2026. These features will help Gemini remember project details and user preferences, saving users about 2.5 hours each month by reducing repeated explanations. Gemini will also connect information across Google apps like Docs and Sheets, and even third-party apps like Notion and Slack. This new technology uses contextual embeddings to build a knowledge graph of user work. While this offers great convenience, it also brings up important discussions about user privacy.
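The article says the feature "uses contextual embeddings to build a knowledge graph of user work" without describing how. Google's actual pipeline is not public, so the following is only a rough illustration of that general pattern: every function name, the toy hashing embedder, and the 0.3 similarity threshold are invented for the sketch.

```python
# Illustrative sketch only: embed each work item as a vector, then link
# similar items into a graph -- the general idea behind an
# embedding-backed "knowledge graph of user work".
import hashlib
import math
from collections import defaultdict


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a contextual embedding: a hashed, normalized
    bag-of-words vector. Real systems use learned embedding models."""
    vec = [0.0] * dim
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Dot product of two unit vectors equals their cosine similarity."""
    return sum(x * y for x, y in zip(a, b))


def build_knowledge_graph(items: dict[str, str], threshold: float = 0.3):
    """Connect any two items whose embedding similarity exceeds the threshold."""
    vecs = {name: embed(text) for name, text in items.items()}
    graph = defaultdict(set)
    names = list(items)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(vecs[a], vecs[b]) >= threshold:
                graph[a].add(b)
                graph[b].add(a)
    return graph
```

A production system would replace the hashed bag-of-words with a learned embedding model and persist the graph, which is precisely where the privacy questions the article raises come in: the graph encodes relationships across a user's documents and apps.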
Gemini Deep Research helps writers save time
From March to November 2025, Gemini Deep Research was tested for writers and journalists, showing big time savings. This AI tool, available with a Google AI Pro subscription, helps manage complex research by analyzing many sources like web articles, PDFs, and transcripts. It excels at comparing sources, building timelines, and finding quotes, especially for documents over 50 pages. For an investigative story, Gemini reduced the outline creation time from 3.2 hours to just 68 minutes. In November 2025, it gained the ability to search across Google Workspace apps like Gmail and Drive.
ChatGPT and Google Gemini face off in 2025
Google Bard is now called Gemini, and this article compares it with OpenAI's ChatGPT (GPT-4o) in 2025. ChatGPT uses the GPT-4o model, known for creative writing and reasoning, while Gemini uses its own models like Gemini Pro and Ultra, focusing on multimodal understanding and Google integration. Both AI tools can explain complex topics well. Gemini has an advantage in real-time information because it uses Google Search directly, providing faster and more relevant results with source links, unlike ChatGPT which uses Microsoft Bing.
Gemini Deep Research and Perplexity Pro accuracy compared
This article compares the data accuracy of Gemini Deep Research and Perplexity Pro. Gemini focuses on in-depth research and planning, often using high-authority sources and integrating with Google Workspace. Perplexity Pro prioritizes speed and clear, cited answers from a broader range of web sources. In tests, Perplexity was faster, delivering initial answers in seconds, while Gemini took longer but produced more detailed, report-like outputs. Gemini had a higher average depth score and required fewer corrections, making it more reliable for high-stakes or policy-heavy questions. Perplexity was better for tool comparisons and developer topics.
Gemini Deep Research and NotebookLM 2.0 compared
This article compares Gemini Deep Research and NotebookLM 2.0, finding them to be complementary tools. NotebookLM 2.0 is a notes-first tool that works only with your uploaded sources like Docs and PDFs, providing summaries and outlines. Gemini Deep Research searches the open web and can use your Google Workspace context for broader synthesis. In tests, Gemini created a brief on video trends in 12 seconds, while NotebookLM gave targeted answers faster. NotebookLM is better for deep dives into specific sources, while Gemini excels at market scans and exploring new topics.
Akkodis shows AI success in many industries
Akkodis, a global digital engineering firm, shared case studies showing how artificial intelligence solves business problems across different industries. In life sciences, AI helped a healthcare manufacturer cut production scheduling time from days to seconds. For the financial sector, Akkodis created a special AI training program for Commonwealth Bank of Australia, teaching engineers AI coding tools. Akkodis Japan also used generative AI and automation to save many work hours and make employees skilled in AI. These examples highlight how AI improves efficiency and transforms operations.
Akkodis reveals AI success across many industries
Akkodis, a global digital engineering company, announced successful AI projects across various industries on November 10, 2025. In life sciences, Akkodis helped a healthcare manufacturer reduce production scheduling time from five days to mere seconds using AI. They also partnered with Microsoft Worldwide Learning to train engineers and data scientists in responsible AI for the banking sector. Akkodis Japan's program made over 2,000 employees AI-proficient in 10 months, saving more than 15,000 hours yearly by automating tasks. Jo Debecker, President and CEO, stated Akkodis focuses on using AI to solve complex problems and empower people.
Doctors worry patients trust AI too much
Doctors are concerned that patients trust artificial intelligence for medical advice without fully understanding how it works. Dr. David Newman, a radiologist, sees patients bringing in ChatGPT diagnoses and sometimes trusting AI more than their doctors. He warns that AI is a tool that can make mistakes and is not perfect. Dr. Newman advises patients to use AI only as a starting point for information and always consult a doctor for a proper diagnosis and treatment plan. He stresses that AI cannot replace professional medical advice.
Saudi Arabia trains one million citizens in AI skills
The Saudi Data and Artificial Intelligence Authority, SDAIA, successfully trained over one million Saudi citizens in artificial intelligence skills through its SAMAI initiative. This achievement represents nine percent of the working-age population, with 52 percent of participants being women and 48 percent men. The training aimed to increase awareness of AI's positive uses in professional and academic life. This success is due to a strong partnership between the Ministry of Education and the Ministry of Human Resources and Social Development, aligning with Saudi Vision 2030. SDAIA plans to launch more advanced training programs soon.
AI challenges old ways of controlling data access
Artificial intelligence is changing how businesses protect sensitive information due to the mosaic effect. AI systems can quickly combine many small, harmless pieces of data to reveal private details, like trade secrets or personal identities. Traditional access controls, such as role-based (RBAC) and attribute-based (ABAC) systems, assume data sensitivity stays the same. However, AI shows that data sensitivity changes based on context and how data connects. Experts suggest a new approach called Relationship-Based Access Control, or REBAC, which defines access based on the links between users, resources, and actions, making security more adaptive and effective.
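The article names REBAC but does not show its mechanics. In REBAC-style systems (Google's Zanzibar authorization model is the best-known example), access is derived from relationship tuples between users and resources rather than from static roles. The class, relation names, and implication table below are illustrative, not any specific product's API:

```python
# Minimal REBAC sketch: access is derived from relationship tuples
# (user, relation, resource), not from fixed roles or attributes.
from collections import defaultdict


class ReBAC:
    def __init__(self):
        # (resource, relation) -> set of users holding that relation
        self.tuples = defaultdict(set)
        # Relation implications: an "owner" is also an editor and viewer.
        self.implies = {"owner": {"editor", "viewer"}, "editor": {"viewer"}}

    def grant(self, user: str, relation: str, resource: str) -> None:
        """Record a relationship tuple."""
        self.tuples[(resource, relation)].add(user)

    def check(self, user: str, relation: str, resource: str) -> bool:
        """Allow if the user holds the relation directly, or holds a
        stronger relation that implies it."""
        if user in self.tuples[(resource, relation)]:
            return True
        for stronger, implied in self.implies.items():
            if relation in implied and user in self.tuples[(resource, stronger)]:
                return True
        return False
```

Because access follows the relationship graph rather than a fixed label on the data, this style of model can adapt as connections between users and resources change, which is the adaptivity the mosaic-effect argument calls for.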
Investor warns AI hype is growing too fast
Todd Ahlsten, Chief Investment Officer at Hirtle Callaghan, believes that the excitement around AI investing is growing faster than its actual business foundations. He advises investors to be disciplined and diversify, looking for companies that can steadily grow earnings. Ahlsten sees opportunities beyond big names like Nvidia and Microsoft, highlighting companies like Broadcom, AMD, and Micron for their specialized hardware. He also points out that software companies like Microsoft, Salesforce, and Workday are overlooked but can boost growth by adding AI. Ahlsten warns of risks like power shortages for AI data centers and high expectations for companies like OpenAI, urging a long-term perspective.
LinkedIn changes AI training after privacy concerns
LinkedIn has changed its plans for training generative AI systems after Ireland's Data Protection Commission, DPC, intervened. The DPC, which oversees LinkedIn's data privacy in the EU, raised concerns about how user data would be used. LinkedIn agreed to several changes, including clearer notices for users about data processing and their ability to opt out. They will also use less personal data for training, prevent children's data from being used, and filter out other sensitive information. The DPC has not fully approved LinkedIn's AI data use but believes these new measures address their immediate concerns.
OpenAI urges US to boost AI infrastructure
OpenAI is asking the U.S. government to greatly improve the nation's artificial intelligence infrastructure. The company, known for its ChatGPT chatbot, believes this is vital for the U.S. to stay ahead in the global AI race. OpenAI recommends expanding tax credits for making computer chips and speeding up approvals for new AI projects like data centers. They argue these steps will encourage innovation, create jobs, and protect national security. Many countries are investing heavily in AI, making these actions important for the U.S. to keep its leading position.
Imagi, Lovable and OpenAI launch AI coding for students
Imagi, Lovable, and OpenAI have teamed up to launch new AI coding lessons for classrooms worldwide. Through imagi Edu, students can now access Lovable's AI tools, with OpenAI providing $1 million in credits for free access during Computer Science Education Week. This initiative focuses on vibe coding, a hands-on way for students to learn AI by building prototypes and solving problems creatively. The program aims to teach 100 million students globally and follows strict privacy standards, including COPPA compliance. Educators can get free access and resources through imagi Edu.
AI is already helping doctors in the ER
Artificial intelligence is already being used in emergency rooms, changing how medical decisions are made. While AI can quickly process large amounts of data to spot health risks and speed up documentation, it cannot replace a doctor's human intuition or compassion. Patients are also using AI to self-diagnose, sometimes correctly, but doctors warn that AI is a tool and not a substitute for professional medical advice. As AI becomes more integrated into healthcare, patients deserve to know when their care is guided by algorithms and who is ultimately responsible for their treatment.
Sources
- The Future of Gemini Deep Research: AI Memory Assistants in 2026
- Gemini Deep Research for Writers: 8-Month Testing Results, Time Savings & Real Limits (2025)
- Gemini Deep Research vs Perplexity Pro: Data Accuracy Tested
- Gemini Deep Research vs NotebookLM 2.0: Which Tool Wins in 2025?
- AI-Forward Business Solutions
- Akkodis unveils real-world impact of AI-led innovation across industries
- Doctors: Users trust artificial intelligence without understanding how it works
- 52% of 1 million Saudis who received AI training are women
- The Mosaic Effect: Why AI Is Breaking Enterprise Access Control
- AI hype is outpacing fundamentals, says Hirtle Callaghan's Todd Ahlsten
- LinkedIn changes gen-AI training plans after data watchdog intervenes
- OpenAI Tells U.S. to Supercharge AI Infrastructure
- Imagi, Lovable and OpenAI launch global AI coding initiative | ETIH EdTech News
- AI Isn’t Coming for Doctors. It’s Already in the Room.