US companies are rapidly integrating AI tools, with a significant increase in paid AI subscriptions. Data from corporate credit card company Ramp shows that 47% of its client companies now pay for AI subscriptions, up from 26% just a year ago. OpenAI's ChatGPT leads the market with 37% adoption among these companies, while Anthropic's Claude is growing rapidly, underscoring a highly competitive market. Meanwhile, AI startup Replit has launched "Mobile Apps on Replit," a feature that lets users create and publish mobile applications using simple language commands, potentially taking creators from idea to App Store readiness in days, though they must still comply with Apple's strict App Store rules. Replit, valued at $3 billion in September, is reportedly nearing a $9 billion funding round.
The expanding use of AI also brings increased scrutiny over data privacy and intellectual property. SpaceX's Starlink recently updated its Privacy Policy to allow it to use customer data for training its own AI models and to share that data with third-party AI companies to enhance the customer experience. While users can opt out of sharing data with "trusted collaborators" via their Starlink.com account settings, there is no clear mechanism to opt out of Starlink using data for its internal AI training. Concurrently, publishers Hachette Book Group and Cengage Group are seeking to join a lawsuit against Google, alleging the misuse of their copyrighted books to train Google's Gemini AI model. They cited 10 specific examples of their books allegedly used without permission, a move that could significantly increase potential damages if U.S. District Judge Eumi Lee allows them to join the case.
The need for robust AI regulation is a growing concern, with ACLU experts emphasizing the importance of protecting privacy and ensuring fairness. They point out that AI systems often make critical decisions in areas like loans and jobs without adequate disclosure, and that limited current regulation leaves room for discrimination and data control issues. The Isle of Man government has responded by launching its National Office for Artificial Intelligence, backed by £1 million from the Economic Strategy Fund, to educate the public and help businesses use AI responsibly and ethically. The office will also advise on AI risks and work to improve public services.
The ethical implications extend to journalism, where a reporter's interview with Vega, a voice option from Google's Gemini AI chatbot, sparked a discussion about AI's role and the authenticity of AI-generated content, especially after the reporter was fooled by AI-written comments. In the business world, tech executives at the Fortune Brainstorm Tech dinner cautioned against simply automating existing processes with AI, labeling it a "trap." Experts including Bill Briggs of Deloitte and Hari Bala of Solventum stressed the importance of rethinking outcomes, designing systems for potential failures, and pairing strong leadership with well-organized data for successful AI adoption. They urged companies to embrace urgency and not let perfection impede progress. Meanwhile, Luma AI co-founder and CEO Amit Jain discussed how the company's AI, built specifically for creative work, could transform Hollywood by reshaping how films and other creative projects are made. Indiana University's Kelley School of Business is also embracing AI in education, launching GenAI 101 with an animated AI co-teacher named Crimson; the course has enrolled nearly 107,000 learners, is now free to the public, and teaches practical AI skills like prompt engineering.
Key Takeaways
- US companies are rapidly adopting AI tools, with 47% of Ramp's client companies now paying for AI subscriptions, up from 26% a year ago.
- OpenAI's ChatGPT leads the AI subscription market with 37% adoption, while Anthropic's Claude is experiencing rapid growth.
- Replit launched "Mobile Apps on Replit," enabling users to create and publish mobile apps with language commands; Replit is valued at $3 billion and nearing a $9 billion funding round.
- SpaceX's Starlink updated its Privacy Policy to use customer data for internal AI training and to share it with third-party AI companies, though users can opt out only of the third-party sharing.
- Publishers Hachette Book Group and Cengage Group are seeking to join a lawsuit against Google, alleging misuse of their copyrighted books to train its Gemini AI model.
- ACLU experts advocate for stronger AI regulation to protect privacy and ensure fairness, citing potential discrimination in critical decisions made by AI systems.
- The Isle of Man government launched a National Office for Artificial Intelligence with £1 million funding to promote responsible AI use and advise on risks.
- Tech executives warn businesses against simply automating old processes with AI, emphasizing the need for rethinking outcomes, strong leadership, and well-organized data.
- Luma AI's technology, designed specifically for creative work, could transform how Hollywood makes films and other creative projects.
- Indiana University's Kelley School of Business offers GenAI 101, a free course with an AI co-teacher named Crimson, teaching practical AI skills to nearly 107,000 learners.
Starlink uses customer data for AI training
SpaceX's Starlink updated its Privacy Policy to allow customer data to be used for AI model training, both for its own AI and by third-party AI companies that help develop tools to improve the customer experience. Users can opt out of sharing data with "trusted collaborators" by unchecking a box in their Starlink.com account settings, but there is no clear way to opt out of Starlink using data for its own internal AI training. Starlink collects necessary information such as name, address, and payment details, and encrypts data transmitted to and from its equipment.
Replit AI tool helps users build mobile apps
AI startup Replit launched a new feature called Mobile Apps on Replit. This tool lets users create and publish mobile apps using simple language commands. Creators can go from an idea to an app ready for the App Store in just days. Replit was valued at $3 billion in September and is now nearing a $9 billion funding round. Users must still follow Apple's strict App Store rules before publishing their apps.
ACLU experts discuss AI regulation needs
ACLU experts explain the need for more AI regulation to protect privacy and ensure fairness. AI systems often make important decisions about people's lives in areas like loans and jobs without clear disclosure. Currently, regulations for AI development and use are limited, which can lead to discrimination and data control issues. The ACLU emphasizes that stronger policies are needed to hold companies accountable for AI's impact. They also released a report with NYU School of Law Technology Law and Policy Clinic to help policymakers track AI trends.
Publishers join lawsuit against Google AI training
Hachette Book Group and Cengage Group publishers want to join a lawsuit against Google. They claim Google misused their copyrighted books to train its Gemini AI model. This lawsuit is part of a larger trend where artists and authors are suing tech companies over AI training. The publishers cited 10 specific examples of their books allegedly used without permission. U.S. District Judge Eumi Lee will decide if they can join the case, which could increase potential damages.
Luma AI may transform Hollywood
Luma AI co-founder and CEO Amit Jain discussed how the company's AI, which is designed specifically for creative work, could change Hollywood by reshaping how films and other creative projects are made.
Experts discuss managing AI in business
At the Fortune Brainstorm Tech dinner, tech executives discussed how to manage AI in businesses. They warned against simply automating old processes with AI, calling it a "trap." Experts like Bill Briggs from Deloitte and Hari Bala from Solventum emphasized the need to rethink outcomes and design systems for potential failures. They agreed that strong leadership and well-organized data are crucial for successful AI adoption. The discussion highlighted that companies must embrace urgency and not let perfection hinder progress.
Isle of Man opens new AI office
The Isle of Man government launched its National Office for Artificial Intelligence to explore AI opportunities. This office aims to educate people and help businesses use AI responsibly and ethically. It received £1 million from the Economic Strategy Fund for training and engagement. Lyle Wraxall, Chief Executive of Digital Isle of Man, emphasized balancing AI's potential with safe use. The office will also advise on AI risks and work to improve public services.
AI chatbot discusses its role in journalism
A reporter interviewed Vega, a voice option from Google's Gemini AI chatbot, on WNHH FM's "Dateline New Haven." Vega insisted that AI does not intend to replace human journalists but rather serve as a helpful tool. The reporter expressed concerns about AI's growing influence in news and shared an experience where AI-written comments fooled him. This led to a discussion about whether AI-generated content, even if well-written, should be considered authentic in journalism.
US companies rapidly adopt AI tools
US companies are quickly adopting AI tools, with a significant increase in paid AI subscriptions. Data from Ramp, a corporate credit card company, shows that 47% of its client companies now pay for AI subscriptions, up from 26% a year ago. OpenAI's ChatGPT leads the market with 37% adoption, while Anthropic's Claude is rapidly growing. This rapid adoption highlights a highly competitive AI market, though current data may skew towards tech-forward companies.
Indiana University uses AI co-teacher Crimson
Indiana University's Kelley School of Business launched GenAI 101, a large generative AI course, with an animated AI co-teacher named Crimson. Professor Brian Williams developed the course, which has enrolled nearly 107,000 learners since August. Originally for freshmen, it expanded to faculty, staff, and alumni, and is now free to the public. The course, built in just 66 days, uses 31 short videos to teach practical AI skills like prompt engineering. Crimson helps students learn how to interact with and question AI models effectively.
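One of the practical skills the course covers, prompt engineering, boils down to structuring a request to a model rather than asking a bare question. The sketch below is purely illustrative and not taken from the course materials; the `build_prompt` helper and its field names are hypothetical, showing one common pattern of giving a model a role, context, task, and explicit constraints.

```python
# Hypothetical sketch of prompt engineering: assemble a structured prompt
# (role, context, task, constraints) instead of sending a bare question.

def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Return a structured prompt string from its labeled parts."""
    lines = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="a business analyst",
    context="Quarterly sales data for a retail chain.",
    task="Summarize the three biggest revenue trends.",
    constraints=["Answer in plain language", "Keep the summary under 100 words"],
)
print(prompt)
```

The point of the structure is repeatability: each labeled part can be varied or tested independently, which is the kind of hands-on skill courses like GenAI 101 aim to teach.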
Sources
- PSA: Starlink Now Uses Customers' Personal Data for AI Training
- Even Starlink Wants Your Data for AI Model Training. How to Opt Out
- AI startup Replit launches feature to vibe code mobile apps
- Your Questions Answered: Where We Are on AI Regulation, and Where We Go From Here
- Publishers Seek to Join Lawsuit Against Google Over AI Training
- Groundbreaking AI may change the Hollywood landscape
- Protect your agentic AI before you wreck your agentic AI
- New office for AI to 'explore opportunities' for Isle of Man
- AI Chatbot Denies Media Takeover Plot
- Charted: AI adoption inside U.S. companies is soaring
- Meet Crimson, Indiana Kelley’s AI Co-Teacher For The Largest GenAI Course In Higher Ed