Artificial intelligence tools, notably ChatGPT, are seeing widespread adoption across educational institutions, with a 2025 survey of over 94,000 California State University students, faculty, and staff revealing that 95% use at least one AI tool. The CSU system has even partnered with OpenAI to provide system-wide access to ChatGPT Edu. Despite this high usage, many students express distrust in AI results and voice concerns about its impact on future jobs, while faculty remain divided on AI's overall effect on teaching and critical thinking skills.
Beyond academia, AI is driving significant efficiency gains in various sectors. Rocket Close, for instance, has accelerated its mortgage document processing by 15 times using Amazon Bedrock and Amazon Textract, achieving 90% accuracy across more than 60 document types. This innovation drastically reduces the 10 hours previously required for manual processing per package. Meanwhile, Microsoft has introduced a new mid-class AI model, signaling a strategic development path as the company anticipates having the necessary computing resources for more advanced AI systems later this year.
However, the rapid expansion of AI also brings critical security and ethical considerations to the forefront. Ethereum co-founder Vitalik Buterin advocates for "local-first" AI systems, citing privacy and security risks associated with cloud-based AI, noting that about 15% of community-built AI tools contain malicious instructions. He personally uses a local Qwen3.5:35B model with human approval for AI actions. Additionally, current AI proxy systems face vulnerabilities from future quantum computers, necessitating new post-quantum encryption methods to prevent data breaches.
The human element in creativity and policy development remains paramount. Many artists and students believe AI art cannot replicate human originality or emotional expression, emphasizing the irreplaceable role of human intention. In government and education, leaders like New York State Commissioner Jeanette Moy and Syracuse University's Jeff Rubin discuss AI's potential for personalized learning and efficient, low-risk government applications, with Syracuse deploying over 30,000 AI licenses. The Long Island Association, funded by Google.org, also launched a free AI training academy for small businesses, offering $5,000 and an "AI Literacy for Business" badge upon completion. The FDA is also reevaluating its "breakthrough" device criteria, particularly for AI technologies, impacting how innovative medical devices are reviewed.
Key Takeaways
- 95% of California State University students use AI tools, with ChatGPT being the most popular, following a partnership with OpenAI.
- CSU students and faculty express concerns about AI accuracy, job impact, and potential over-reliance, despite widespread use.
- Vitalik Buterin advocates for "local-first" AI systems due to privacy and security risks, noting that about 15% of community-built AI tools contain malicious instructions.
- Rocket Close achieved a 15x speed increase in mortgage processing with 90% accuracy using Amazon Bedrock and Amazon Textract.
- Microsoft released a new mid-tier AI model, anticipating sufficient computing resources for advanced AI development later this year.
- Current AI proxy systems using RSA and ECC encryption are vulnerable to quantum computers, requiring new post-quantum encryption methods.
- Artists and students believe AI art lacks human soul and originality, emphasizing human creativity's irreplaceable role.
- Syracuse University deployed over 30,000 AI licenses to personalize learning and ensure equitable access and data security.
- The Long Island Association, funded by Google.org, launched a free AI Growth Academy for small businesses, offering $5,000 and an "AI Literacy for Business" badge.
- The FDA is reevaluating its "breakthrough" device criteria, specifically impacting AI medical technologies.
Cal State Students Use AI But Doubt Its Accuracy and Job Impact
A 2025 survey of over 80,000 California State University students, faculty, and staff revealed that nearly all students use AI tools like ChatGPT. However, most students do not trust AI results and worry about how it will affect their future jobs. Many also want a greater say in the system's AI policies. Faculty opinions on AI's impact are divided, with some seeing benefits and others worrying about student over-reliance. The CSU system is working to develop consistent AI policies, with a new initiative to adopt AI technologies and an agreement with OpenAI for system-wide ChatGPT access.
Cal State Embraces AI Despite Student and Faculty Skepticism
A survey of 94,000 individuals across the California State University system shows widespread use of AI tools, with 95% of respondents using at least one. ChatGPT is the most popular tool, especially after the system partnered with OpenAI for access to ChatGPT Edu. Despite high usage, many faculty express concerns about students becoming dependent on AI and losing critical thinking skills: while 56% of faculty see a positive effect of AI on teaching, 52% report a negative impact on students' critical thinking. Students, for their part, worry about submitting AI-generated work and about verifying its accuracy.
Vitalik Buterin Warns of AI Security Risks, Prefers Local Systems
Ethereum co-founder Vitalik Buterin is urging a move towards 'local-first' AI systems due to significant privacy and security risks with cloud-based AI. He has stopped using cloud AI himself, citing concerns about data exposure, manipulation, and potential malicious instructions within AI tools. Research shows that roughly 15% of community-built AI agent skills contain harmful code. Buterin proposes using on-device models and human confirmation for AI actions to mitigate these dangers as AI capabilities grow.
Vitalik Buterin Details His Secure Local AI Setup
Ethereum co-founder Vitalik Buterin has detailed his personal AI system, which runs entirely on local hardware to avoid privacy risks associated with cloud-based AI. He uses the open-source Qwen3.5:35B model and has created tools that require human approval before the AI can contact third parties or perform actions. Buterin cited research indicating that about 15% of community-built AI tools contain malicious instructions. He advises teams building AI-connected Ethereum wallet tools to implement similar safeguards, limiting autonomous transactions and requiring confirmation for larger amounts.
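The approval-before-action safeguard described above can be sketched as a simple gate in front of an agent's outbound actions: small, routine actions pass automatically, while third-party contact or larger transfers require explicit human confirmation. This is a hypothetical illustration of the pattern, not code from Buterin's setup; the names (`ActionGate`, `ask_human`) and the threshold are invented.

```python
# Hypothetical sketch of a human-approval gate for AI agent actions.
# Small whitelisted actions are auto-approved; contacting third parties
# or moving larger amounts requires explicit human confirmation.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # e.g. "transfer" or "contact_third_party"
    amount: float = 0.0  # value at stake, in some unit

class ActionGate:
    def __init__(self, auto_limit: float, ask_human):
        self.auto_limit = auto_limit  # max value the AI may move unattended
        self.ask_human = ask_human    # callback: Action -> bool (human decision)

    def authorize(self, action: Action) -> bool:
        # Contacting third parties always needs a human in the loop.
        if action.kind == "contact_third_party":
            return self.ask_human(action)
        # Small transfers are auto-approved; larger ones need confirmation.
        if action.amount <= self.auto_limit:
            return True
        return self.ask_human(action)
```

With `ActionGate(auto_limit=0.01, ask_human=prompt_user)`, a 0.005-unit transfer proceeds autonomously while a 5-unit transfer blocks until the human callback approves it, matching the "limit autonomous transactions, confirm larger amounts" advice in the section above.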
Rocket Close Speeds Up Mortgage Processing with AWS AI
Rocket Close has dramatically improved its mortgage document processing by using Amazon Bedrock and Amazon Textract, making the process 15 times faster. This new solution achieves 90% accuracy in identifying, classifying, and extracting data from complex documents. Manual processing was a major bottleneck, requiring 10 hours per document package and costing millions annually. The AI solution handles over 60 types of mortgage-related documents, significantly reducing manual effort and improving efficiency for clients seeking homeownership.
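The pipeline follows a classify-then-extract pattern: first determine what kind of document a page is, then pull out only the fields relevant to that type. The sketch below illustrates that control flow with a purely local keyword-and-regex stand-in; the actual system uses Amazon Textract for text detection and Amazon Bedrock models for classification and extraction, and the document types and fields here are invented examples.

```python
# Minimal local sketch of the classify-then-extract pattern.
# A keyword lookup stands in for the real OCR/ML services, purely to
# show the control flow; document types and fields are invented.

import re

# Map a detected document type to the fields worth extracting from it.
DOC_TYPES = {
    "promissory note": ["principal", "interest rate"],
    "deed of trust": ["trustee", "property address"],
}

def classify(text: str) -> str:
    # Naive classification: look for a known type name in the text.
    lowered = text.lower()
    for doc_type in DOC_TYPES:
        if doc_type in lowered:
            return doc_type
    return "unknown"

def extract(text: str, doc_type: str) -> dict:
    # Pull "field: value" pairs for the fields this type cares about.
    fields = {}
    for field in DOC_TYPES.get(doc_type, []):
        match = re.search(rf"{field}\s*:\s*(.+)", text, re.IGNORECASE)
        if match:
            fields[field] = match.group(1).strip()
    return fields

page = "PROMISSORY NOTE\nPrincipal: $250,000\nInterest rate: 6.5%"
doc_type = classify(page)
data = extract(page, doc_type)
```

Routing extraction through a per-type field list is what lets a single pipeline scale across 60+ document types: adding a type means adding an entry to the mapping, not a new processing path.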
Microsoft Releases New Mid-Tier AI Model Amid Compute Limits
Microsoft has launched a new 'mid-class' artificial intelligence model while the company faces limits on computing resources. Its AI chief indicated that Microsoft will have the resources needed to build advanced, frontier AI systems later this year. The release suggests a strategic approach to AI development, balancing current resource constraints with future ambitions for cutting-edge AI.
Quantum-Safe AI Proxies Need New Encryption Methods
Current AI proxy systems using RSA and ECC encryption are vulnerable to future quantum computers, risking data breaches through 'harvest now, decrypt later' tactics. Post-quantum Key Encapsulation Mechanisms (KEMs) offer a solution, but their larger data packets create challenges for existing network infrastructure. Implementing quantum-resistant AI requires updating API schemas, using quantum-safe tunnels, automated migration tools, and potentially hybrid encryption layers for security during the transition.
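The "hybrid encryption layer" mentioned above can be illustrated as a key-derivation step: the session key is derived from both a classical shared secret (e.g. from ECDH) and a post-quantum KEM shared secret, so the session remains protected unless both schemes are broken. The sketch below is conceptual, using a stdlib-only HKDF (per RFC 5869) as a stand-in; a real deployment would use an actual standardized KEM such as ML-KEM and a vetted cryptographic library, and the salt/info labels here are invented.

```python
# Conceptual sketch of a hybrid key-derivation step: combine a classical
# and a post-quantum shared secret so that breaking one scheme alone does
# not reveal the session key. The HKDF below follows RFC 5869 using only
# the standard library; labels and parameters are illustrative.

import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # RFC 5869 HKDF: extract a pseudorandom key, then expand it.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_ss: bytes, pq_ss: bytes) -> bytes:
    # Concatenating both secrets means an attacker must recover BOTH
    # (e.g. break ECDH *and* the post-quantum KEM) to derive this key.
    return hkdf_sha256(classical_ss + pq_ss,
                       salt=b"hybrid-kem-demo",
                       info=b"ai-proxy session key")
```

This mirrors the transition strategy in the section above: classical and post-quantum mechanisms run side by side, and the derived key inherits the security of the stronger of the two.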
AI Art Can't Replace Human Creativity and Emotion
While AI can generate images quickly, many artists and students believe it cannot replace human art due to a lack of soul, originality, and emotional expression. Concerns exist about AI art potentially devaluing human artists and leading to job losses, with some AI models capable of mimicking specific artistic styles. Although AI can be a tool for inspiration, artists emphasize that human intention and experience are crucial to creating meaningful art. The fundamental human need for self-expression ensures that art will continue to exist.
AI's Impact on Government and Education Discussed
New York State Commissioner Jeanette Moy and Syracuse University Chief Digital Officer Jeff Rubin discussed the transformative role of AI in government and higher education. Rubin highlighted AI's potential to personalize learning at scale, revolutionizing university teaching methods. Syracuse University has deployed over 30,000 AI licenses to ensure equitable access and data security. Moy emphasized a cautious, measured approach to AI adoption in government, focusing on low-risk, high-value applications like procurement search to ensure data stewardship and efficiency.
LIA Launches Free AI Academy with Google and Stony Brook
The Long Island Association (LIA) has launched the LIA-AI Growth Academy, a free artificial intelligence training program for small businesses with 20 or fewer employees. Funded by Google.org and delivered in partnership with Stony Brook University, the academy aims to help businesses use AI for efficiency and growth. Companies that complete the program will receive $5,000 and an 'AI Literacy for Business' badge. This initiative seeks to empower local businesses with AI knowledge and tools to thrive in the evolving technological landscape.
FDA Reevaluates 'Breakthrough' Device Criteria
The FDA is evolving its definition of a 'breakthrough' device, particularly concerning technologies like artificial intelligence. This shift impacts how innovative medical devices are reviewed and approved. The article notes that April 1, 2026, was the application deadline for the CMMI ACCESS Model's first cohort, suggesting ongoing changes in regulatory pathways for health technology. Further details on the FDA's updated criteria are available to STAT+ subscribers.
Sources
- Cal State students widely use AI tools, but mistrust results and fear job effects
- Despite Skepticism, Widespread AI Use at Cal State
- Vitalik Buterin warns of AI security risks, pushes for local-first systems
- Ethereum Founder Vitalik Buterin Details His 'Private' and 'Secure' AI Setup
- Rocket Close transforms mortgage document processing with Amazon Bedrock and Amazon Textract
- Microsoft launches ‘mid-class’ AI model as compute limits bite
- Post-Quantum Key Encapsulation Mechanisms in AI Proxy Orchestration
- AI art cannot displace human art
- Maxwell Fireside Chat Examines AI’s Role in Government and Higher Education
- LIA launches AI academy with Stony Brook, Google-funded
- FDA's evolving view of what makes a 'breakthrough' device