Microsoft Copilot launches security features as ChatGPT faces scrutiny

Researchers at the University of Cambridge have developed a new nanoelectronic device, inspired by the human brain, that could significantly reduce AI energy consumption. This device uses a special form of hafnium oxide, acting as a stable 'memristor' that processes and stores data in the same location. This innovation has the potential to cut AI energy use by up to 70% and make AI systems more adaptable, though the team is working to make its high-temperature fabrication process compatible with standard industry methods.

AI integration is also expanding rapidly in education. UNC Greensboro's School of Nursing, for example, has 70% of its faculty actively using AI tools to enhance instructional design, content creation, and student engagement. Similarly, a business school student advocates for integrating AI tools like ChatGPT and Gemini into MBA core curricula, rather than just electives, citing American University's Kogod School of Business as a model for embedding AI across its programs.

However, the widespread adoption of AI brings ethical and security challenges. Mediahuis suspended senior journalist Peter Vandermeersch for using ChatGPT to generate quotes, highlighting the risk of AI "hallucinations" and the need for human verification. A concerning trend also sees teenagers using AI to create slanderous videos of teachers, raising serious issues of cyberbullying and privacy. Even billionaire Mark Cuban is using AI on a Mac Mini to combat the overwhelming amount of AI-generated spam emails he receives.

Security for advanced AI systems is a growing focus. Researchers warn that connecting Large Language Models (LLMs) with external data via the Model Context Protocol (MCP) creates fundamental security risks, as LLMs struggle to distinguish between content and malicious instructions. Microsoft is responding by introducing new security features for "agentic AI" systems, including a Security Dashboard for AI and Entra Internet Access Shadow AI Detection, announced at the RSAC 2026 Conference. Despite these efforts, AI still has limitations, as demonstrated by Microsoft Copilot's 2026 NFL mock draft, which made unusual and sometimes ineligible selections, indicating that human analysis remains vital in certain areas.

Key Takeaways

  • Brain-inspired nanoelectronic devices, using hafnium oxide, could reduce AI energy consumption by up to 70% by processing and storing data in the same location.
  • UNC Greensboro's School of Nursing has 70% of its faculty integrating AI tools to improve instructional design and student engagement.
  • Business schools are urged to embed AI, including tools like ChatGPT and Gemini, into core curricula, rather than offering it only as an elective.
  • Journalist Peter Vandermeersch was suspended by Mediahuis for using ChatGPT to generate false quotes, underscoring the risks of AI "hallucinations" and the need for human oversight.
  • Teenagers are using AI to create slanderous videos of teachers, highlighting new challenges in cyberbullying and the need for digital literacy.
  • Mark Cuban is employing AI on a Mac Mini to automate the management and unsubscribing from AI-generated spam emails.
  • The Model Context Protocol (MCP) for LLMs introduces fundamental security risks, as LLMs cannot reliably distinguish between data content and malicious instructions.
  • Microsoft is enhancing security for "agentic AI" systems with features like a Security Dashboard for AI and Entra Internet Access Shadow AI Detection, announced at RSAC 2026.
  • Microsoft Copilot's 2026 NFL mock draft made unusual and initially ineligible picks, suggesting AI is not yet ready to replace human expertise in complex, nuanced tasks.

Brain-inspired chips could slash AI energy use

Researchers at the University of Cambridge have created a new nanoelectronic device that mimics the human brain to reduce AI energy consumption. This new material, a form of hafnium oxide, acts as a stable and low-energy 'memristor.' Unlike current AI hardware that moves data back and forth, consuming much power, this brain-inspired approach processes and stores data in the same place. This could cut energy use by up to 70% and make AI systems more adaptable. While the fabrication process currently requires high temperatures, the team is working to make it compatible with standard industry methods.

New brain-like chip could cut AI energy use by 70%

Scientists have developed a new nanoelectronic device inspired by the human brain that could significantly lower the energy needed for artificial intelligence hardware. The device uses a special form of hafnium oxide, acting as a highly stable and efficient 'memristor.' This design mimics how the brain processes information, potentially reducing AI energy consumption by as much as 70%. Current AI systems use a lot of power by constantly moving data between processing and memory. This new technology aims to store and process data in the same location, making AI more energy-efficient and adaptable.

UNCG Nursing School leads in AI integration

UNC Greensboro's School of Nursing is actively integrating artificial intelligence (AI) into its programs to enhance education. Faculty are using AI tools to improve instructional design, speed up content creation, boost efficiency, and increase student engagement. About 70% of the nursing faculty are already using AI in some way, helping them cater to different learning styles and meet students' needs. The school is also training faculty to become AI leaders, guiding students on helpful and ethical AI platforms. This initiative aims to prepare students for a future where AI is a common tool in healthcare.

Business schools need AI in core curriculum not just electives

A business school student argues that MBA programs should integrate Artificial Intelligence (AI) into their core curriculum, rather than offering it only as electives. The student notes that AI tools like ChatGPT and Gemini are becoming essential for many jobs, and students are already using them to learn and complete assignments. While some classes embrace AI, others restrict its use, creating a gap in AI fluency. The author believes that treating AI as an optional add-on is a disservice to students and that business schools should follow examples like American University's Kogod School of Business, which has embedded AI across its curriculum.

Mark Cuban uses AI to fight email spam

Billionaire Mark Cuban is using artificial intelligence to combat the overwhelming amount of AI-generated spam emails he receives. He recently purchased a Mac Mini to help manage his inbox, focusing on automating the process of unsubscribing from unwanted email lists. Cuban is training AI systems to identify and act on these emails, aiming to clean up his inbox. He described this as a 'trial and error phase' where people are experimenting with AI to handle daily tasks. This approach reflects a growing trend of using AI to manage digital communication overload.

Senior journalist suspended for using AI to create quotes

Mediahuis has suspended senior journalist Peter Vandermeersch for admitting to using AI tools like ChatGPT to generate quotes for articles. Vandermeersch, formerly head of Irish operations for Mediahuis, stated he 'fell into the trap of hallucinations' and wrongly put words into people's mouths. An investigation by De Telegraaf revealed the misuse, leading to his suspension. Mediahuis emphasized its strict rules for AI use, requiring diligence, human oversight, and transparency. The publisher has removed some of Vandermeersch's articles, stressing that reliable journalism requires human verification.

Teens use AI to make slander videos of teachers

A concerning trend has emerged where teenagers are using artificial intelligence to create slanderous videos of their teachers. These AI-generated videos, which can make teachers appear to say or do things they never did, are being shared on social media. This misuse of AI raises serious issues about cyberbullying and privacy. Educators and administrators are working on strategies like educating students on responsible AI use and monitoring social media. The trend highlights the need for digital literacy education on the ethical use of AI and the dangers of misinformation.

AI security risks in LLM environments can't be patched

Researchers warn that using the Model Context Protocol (MCP) to connect Large Language Models (LLMs) with external data creates fundamental security risks that cannot be fixed with simple patches. Unlike traditional AI, MCP allows LLMs to take real actions, access enterprise data, and make decisions autonomously. A major issue is that LLMs cannot distinguish between content and instructions, meaning malicious instructions can be hidden in fetched data. This could allow attackers to exfiltrate data or send emails without the user's knowledge. Gianpietro Cutolo from Netskope highlighted these architectural flaws at the RSAC 2026 Conference.

Microsoft enhances security for agentic AI

Microsoft is introducing new security features to protect 'agentic AI' systems, which are AI agents that can act autonomously. At the RSAC 2026 Conference, the company announced capabilities to secure these agents and their underlying infrastructure. Features include a Security Dashboard for AI to provide unified visibility into AI risks and Entra Internet Access Shadow AI Detection to identify unmanaged AI applications. Microsoft is also strengthening identity security with features like Entra Backup and Recovery and Entra Tenant Governance. These advancements aim to ensure AI systems are secure from the ground up.

AI mock draft makes strange picks for Chiefs, Patriots

Microsoft Copilot's latest 2026 NFL mock draft featured some unusual selections, including a quarterback for the New England Patriots and a double-dip at receiver for the Kansas City Chiefs. The AI struggled with draft eligibility, initially including players already on NFL rosters. After several re-prompts, Copilot produced a draft with eligible players but still made questionable choices. The AI's performance suggests that human mock draft analysts are unlikely to be replaced by AI anytime soon, despite advancements in the technology.

Sources

NOTE:

This news brief was generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral) from aggregated news articles, with minimal to no human editing/review. It is provided for informational purposes only and may contain inaccuracies or biases. This is not financial, investment, or professional advice. If you have any questions or concerns, please verify all information with the linked original articles in the Sources section below.

AI energy consumption brain-inspired computing memristors hafnium oxide AI hardware AI integration in education nursing education AI in curriculum MBA programs AI tools ChatGPT Gemini AI for spam detection Mark Cuban AI ethics AI in journalism AI-generated quotes misinformation AI-generated videos cyberbullying digital literacy AI security risks Large Language Models (LLMs) Model Context Protocol (MCP) agentic AI Microsoft security AI risk management AI applications NFL mock draft Microsoft Copilot

Comments

Loading...