Bitget has partnered with AI platform MuleRun to launch a personal AI trading assistant, aiming to democratize access to professional market signals for everyday investors. The tool uses natural language processing, allowing users to build automated trading workflows and monitor markets continuously, even when they are offline. Bitget's CEO called the collaboration a significant step towards unified trading environments that seamlessly combine analysis, monitoring, and execution across asset classes such as crypto, stocks, commodities, and forex.
Beyond finance, AI is transforming other sectors, as seen with REMAX Advantage, which uses the AI tools Remi and Shilo to enhance agent-client interactions through call analysis and personalized coaching. However, the rapid integration of AI also brings scrutiny. Meta's Ray-Ban smartglasses, while offering helpful AI assistance voiced by Judi Dench, raised privacy concerns due to their constant recording capabilities. Furthermore, Perplexity AI faces a proposed class-action lawsuit alleging it shares user data with Meta and Google, with trackers reportedly giving these tech giants access to user-AI conversations.
The increasing adoption of AI highlights critical challenges in reliability and security. Prosecutors in Nevada County, Northern California, admitted to using AI to generate false legal citations in criminal cases, leading to one prosecutor's removal and potential sanctions, and underscoring the risks of unverified AI output. This incident, alongside an AI system fabricating a motorcycle safety course, emphasizes the urgent need for robust due diligence. Companies implementing AI into core systems in 2026 often face failures due to inadequate evaluation, leading to inconsistent outputs, unpredictable costs, and security vulnerabilities.
Boards of directors must adapt their oversight strategies to address the evolving security and safety risks posed by AI, especially agentic AI, which can act independently. Greg Clark of OpenText warns that organizations lacking governed identities and continuous monitoring face significant risks, as agentic AI creates security blind spots. Moreover, while AI promises performance boosts, it risks eroding an organization's unique skills and competitive edge if not managed carefully. This rapid advancement also fuels anxiety among tech workers, who express fears of job displacement and existential crises, prompting a rise in mental health support seeking in Silicon Valley.
Key Takeaways
- Bitget partnered with MuleRun to launch an AI trading assistant for retail investors, offering automated workflows and market monitoring across various asset classes.
- REMAX Advantage uses AI tools, Remi and Shilo, for call analysis, lead revival, and personalized coaching to improve agent performance.
- Meta's Ray-Ban smartglasses, while helpful with AI assistance, raised privacy concerns due to constant recording capabilities.
- Perplexity AI faces a class-action lawsuit alleging it shares user data and conversations with Meta and Google via trackers.
- Prosecutors in Northern California used AI to generate false legal citations, leading to one prosecutor's removal, potential sanctions, and renewed scrutiny of AI reliability in legal work.
- An AI system fabricated a motorcycle safety course, underscoring the need for critical thinking when assessing AI-generated information.
- Companies integrating AI into core systems often fail due to insufficient due diligence, leading to inconsistent outputs, unpredictable costs, and security risks.
- Boards must enhance oversight for AI security and safety, particularly for agentic AI, requiring comprehensive safeguards and human accountability.
- Greg Clark of OpenText emphasizes the need for governed identities, protected data, and continuous monitoring to mitigate risks from agentic AI's security blind spots.
- AI implementation risks eroding an organization's unique skills and competitive edge, and it is fueling anxiety among tech workers regarding job displacement.
Bitget and MuleRun Partner for AI Trading Assistant
Bitget has partnered with AI platform MuleRun to create a personal AI trading assistant. This tool uses natural language to give everyday investors access to professional-level market signals. MuleRun's AI runs 24/7, helping users build automated trading workflows and monitor markets even when offline. The partnership integrates Bitget's financial data with MuleRun's AI, offering analysis across crypto, stocks, commodities, and more. This aims to make advanced financial intelligence more accessible to all users.
Bitget Partners with MuleRun for AI Trading
Bitget is teaming up with MuleRun to offer AI-powered trading assistance to retail investors. This partnership connects Bitget's Agent Hub data with MuleRun's AI platform, allowing users to create automated trading plans using simple language. The AI assistant works continuously, monitoring markets and executing tasks even when users are not active. It provides access to extensive data, including crypto, stocks, and economic indicators, aiming to level the playing field for individual traders.
Bitget Launches AI Trading Assistant with MuleRun
Bitget has launched a new AI trading assistant in collaboration with MuleRun. This tool allows users to build automated trading strategies and monitor markets using natural language commands. MuleRun's AI platform runs continuously on cloud servers, requiring no technical setup. The integrated system provides access to data on crypto, stocks, commodities, forex, and economic indicators. Bitget's CEO stated this partnership moves towards unified trading environments where analysis, monitoring, and execution are combined.
AI Due Diligence Checklist for 2026
Companies are increasingly implementing AI into core business systems in 2026, but many face failures due to a lack of proper due diligence. Issues like inconsistent outputs, unpredictable costs, and security risks arise because AI systems are not evaluated like traditional software. Traditional methods fail to account for data quality, model reliability, and hidden dependencies. A new checklist is needed to ensure AI systems are trustworthy, secure, and scalable before deployment to avoid implementation failures and cost overruns.
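The core complaint here, that AI systems produce inconsistent outputs and are not evaluated like traditional software, can be illustrated with a minimal sketch. The snippet below is not from any checklist in the source; it is a hypothetical pre-deployment check that queries a model repeatedly with the same prompt and measures how often its answers agree, where `model_fn` is a stand-in for whatever inference call a team actually uses.

```python
def consistency_score(model_fn, prompt, runs=5):
    """Query a model several times with the same prompt and return the
    fraction of runs that produced the most common answer (1.0 = fully
    consistent, lower values flag nondeterministic output)."""
    answers = [model_fn(prompt) for _ in range(runs)]
    most_common = max(set(answers), key=answers.count)
    return answers.count(most_common) / runs

# Demo with a deterministic stub standing in for a real model call:
stub = lambda prompt: "42"
print(consistency_score(stub, "What is 6 * 7?"))  # prints 1.0
```

A real due-diligence gate might refuse to promote a model whose score falls below an agreed threshold, alongside checks for data quality, cost per call, and dependency risk.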
Boards Must Ensure AI Security and Safety
As AI technology rapidly advances, boards of directors must adapt their oversight strategies for security and safety risks. The evolving nature of AI, especially agentic AI that can act independently, presents complex challenges. Existing governance frameworks and regulations are often unclear or abstract. Companies need a risk-based approach with comprehensive safeguards and assurance in both development and operation. Human oversight remains crucial for value judgments, critical thinking, and accountability in AI deployment.
REMAX Advantage Uses AI for Call Analysis and Coaching
REMAX Advantage is using AI tools, Remi from Speculo and Shilo, to enhance their agents' client interactions. Remi assists with inbound calls by answering property questions and reviving old leads. Shilo, a conversational intelligence platform, analyzes calls to provide feedback and personalized coaching for agents. This allows agents to focus on more valuable conversations and improve their performance. The AI tools help agents tailor their communication based on client needs and personality profiles.
Meta's Smartglasses Feel Intrusive After a Month
A month of testing Meta's Ray-Ban smartglasses produced mixed results: the AI assistant, voiced by Judi Dench, proved helpful, but the overall experience felt intrusive. While the glasses offer functions like taking photos and providing directions, they raise privacy concerns for those around the wearer. The integrated AI can operate a phone via voice commands and identify objects, which is transformative for visually impaired individuals. However, the author felt like a 'creep' due to the constant recording capability and lack of clear social cues.
Northern California Courts Face AI Errors
Prosecutors in Nevada County, Northern California, have admitted to using AI to generate false legal citations in four criminal cases. This led to one prosecutor being removed from duties and the office facing potential sanctions from an appeals court. The District Attorney acknowledged the office was unprepared for the risks of generative AI, including inaccurate information and difficulty in detecting fabrications. This case highlights the growing concern over AI's reliability in legal work and the need for careful scrutiny.
AI Fuels Anxiety Among Tech Workers
Silicon Valley psychotherapists report a significant increase in tech workers seeking mental health support due to anxieties surrounding artificial intelligence. Patients express fears about job displacement, existential crises, and the potential negative consequences of the AI they are developing. The rapid pace of AI development, combined with layoffs and job instability in the tech sector, contributes to this heightened stress. Some workers also struggle with the pressure to prove their value while fearing they are making themselves obsolete.
AI Hallucinates Fake Motorcycle Safety Course
An AI system has generated a completely fabricated motorcycle safety course, raising concerns about the spread of misinformation. The fake course, presented as being offered by the College of DuPage, contained numerous errors and nonsensical details, indicating it was AI-generated 'slop.' This incident highlights the critical need for users to apply critical thinking skills when assessing information online, especially regarding important topics like safety training. The hallucinated course appears to have never existed, despite its plausible presentation.
Don't Let AI Erase Company Skills
Artificial intelligence is often promoted as a performance booster for employees. However, a significant drawback is its potential to erode the unique skills and 'DNA' of an organization. By relying too heavily on generic AI standards, companies risk losing their competitive edge. It is crucial for businesses to manage AI implementation carefully to ensure it enhances, rather than diminishes, the specialized capabilities that make them unique and successful.
Securing AI Agency and Machine Identities
Greg Clark of OpenText warns that organizations without governed identities, protected data, and continuous monitoring face significant risks with AI. While many companies have implemented AI, few achieve tangible results, often due to compliance and privacy concerns. Agentic AI, with its numerous non-human identities, creates security blind spots that static systems cannot address. Clark emphasizes the need for continuous, contextual controls, such as de-identifying data and scanning agents for vulnerabilities, to ensure safe AI deployment.
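One of the contextual controls Clark mentions, de-identifying data before it reaches an AI agent, can be sketched simply. The pattern names and regexes below are illustrative assumptions, not anything from the source; a production system would rely on a vetted PII-detection library rather than two hand-written patterns.

```python
import re

# Hypothetical patterns for illustration; real deployments need far
# broader coverage (names, addresses, account numbers, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text):
    """Replace matched identifiers with typed placeholders so the agent
    sees structure ('[EMAIL]') but never the raw value."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(deidentify("Contact jane.doe@example.com, SSN 123-45-6789."))
# prints: Contact [EMAIL], SSN [SSN].
```

The same idea extends to the monitoring Clark describes: the redaction step becomes one checkpoint in a continuous pipeline that also scans agent identities and permissions.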
Perplexity AI Accused of Data Sharing
Perplexity AI is facing a proposed class-action lawsuit accusing the company of sharing user data with Meta and Google. According to the complaint filed in federal court, trackers are downloaded onto users' devices as soon as they log in on Perplexity's homepage. This allegedly gives Meta and Google full access to conversations between users and Perplexity's AI search engine. The lawsuit highlights concerns about data privacy and how AI platforms handle user information.
Sources
- Bitget Expands Agent Hub Ecosystem Through MuleRun Partnership to Advance Agentic Trading
- Bitget Partners with MuleRun to Bring AI Agent Trading to Retail Investors
- Bitget Launches AI Trading Assistant With MuleRun
- AI Due Diligence Checklist 2026: How to Avoid AI Implementation Failures, Security Risks, and Cost Overruns
- Defensible AI: What Boards Must Get Right on Security and Safety
- How one brokerage uses AI voice, listening tools to help agents transform client calls
- I wore Meta’s smartglasses for a month – and it left me feeling like a creep
- Northern California courts grapple with AI-generated errors: 1 prosecutor removed
- How AI is driving tech workers into therapy
- AI Is Hallucinating Entire Motorcycle Safety Courses Now, And It's Very Dumb
- Don’t Let AI Destroy the Skills That Make Your Company Competitive
- Greg Clark Discusses Securing AI Agency
- Perplexity AI Machine Accused of Sharing Data With Meta, Google