
mcp-huggingfetch
Launch Date: Aug. 16, 2025
Pricing: No Info
Tags: AI, machine learning, software tools, HuggingFace, model download

What is mcp-huggingfetch?

mcp-huggingfetch is a tool designed to speed up the download of models from HuggingFace. It supports concurrent downloads, resume capabilities, and intelligent retry mechanisms, making it 3-5 times faster than traditional methods. This tool is compatible with various clients including Claude Desktop, Claude Code, Cursor, and VS Code.

Benefits

  • Faster Downloads: mcp-huggingfetch significantly accelerates the download process, making it 3-5 times faster than traditional methods.
  • Concurrent Downloading: Supports downloading multiple files at the same time, saving time and effort.
  • Resume Capabilities: Allows you to resume interrupted downloads, ensuring you don't have to start over.
  • Intelligent Retry Mechanisms: Automatically retries failed downloads, increasing the chances of successful downloads.
  • Wide Compatibility: Works with popular clients like Claude Desktop, Claude Code, Cursor, and VS Code.
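The retry behavior described above can be sketched in Python. This is an illustrative pattern only (the tool itself is a Node.js MCP server, and the helper names here are hypothetical), showing how an "intelligent retry" with exponential backoff typically works:

```python
import time

def download_with_retry(fetch, max_retries=3, base_delay=0.01):
    """Call `fetch` until it succeeds, backing off exponentially
    between attempts (a sketch of the retry pattern, not the tool's code)."""
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except IOError:
            if attempt == max_retries:
                raise  # exhausted retries; propagate the last error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Simulate a download that fails twice before succeeding.
attempts = {"n": 0}

def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("connection reset")
    return b"model-bytes"

data = download_with_retry(flaky_fetch)
print(attempts["n"], data)  # 3 b'model-bytes'
```

In practice the tool combines this with resumable transfers, so a retried download continues from the last completed byte rather than restarting.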

Use Cases

  • Researchers and Developers: Quickly download large models for AI research and development.
  • AI Enthusiasts: Easily access and experiment with various AI models.
  • Educational Institutions: Facilitate the download of models for teaching and learning purposes.

Setup Instructions

To set up mcp-huggingfetch, you need to add specific configurations to your client's settings. Here are the steps for different clients:

Claude Desktop

Add the following to your claude_desktop_config.json:

{
  "mcpServers": {
    "huggingfetch": {
      "command": "npx",
      "args": ["-y", "mcp-huggingfetch@latest"],
      "env": {
        "HUGGINGFACE_TOKEN": "your_token_here"
      }
    }
  }
}

Claude Code

Add the following to your .claude/claude_config.json:

{
  "mcpServers": {
    "huggingfetch": {
      "command": "npx",
      "args": ["-y", "mcp-huggingfetch@latest"],
      "env": {
        "HUGGINGFACE_TOKEN": "your_token_here"
      }
    }
  }
}

Cursor / VS Code (Continue Extension)

Add the following to your config.json:

{
  "mcp": [
    {
      "name": "huggingfetch",
      "command": "npx",
      "args": ["-y", "mcp-huggingfetch@latest"],
      "env": {
        "HUGGINGFACE_TOKEN": "your_token_here"
      }
    }
  ]
}

Usage

After configuration, you can use the following features directly in conversations:

List Files

View repository files before downloading:

List all files in the 2Noise/ChatTTS repository
Show JSON files in the bert-base-uncased repository
Display files in openai/whisper-large-v3 sorted by size

Download Models

Selectively download required files:

Please download the ChatTTS model to ./models directory
Download microsoft/DialoGPT-medium model, only .bin files
Download openai/whisper-large-v3 model, exclude test files

Supported Features

List Tool Options (list_huggingface_files)

| Parameter | Type   | Description                | Example                    |
|-----------|--------|----------------------------|----------------------------|
| repo_id   | string | HuggingFace repository ID  | "2Noise/ChatTTS"           |
| revision  | string | Git branch/tag             | "main", "v1.0"             |
| path      | string | Repository sub-path        | "models/"                  |
| pattern   | string | File name filter pattern   | "*.json", "*.safetensors"  |
| sort_by   | string | Sort method                | "size", "name", "type"     |
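As a rough sketch, an MCP client calls a tool by sending its name plus a JSON arguments object; the argument names below mirror the table above (the exact request envelope varies by client and is not shown here):

```python
import json

# Hypothetical tool-call payload for list_huggingface_files;
# argument names follow the parameter table above.
call = {
    "name": "list_huggingface_files",
    "arguments": {
        "repo_id": "2Noise/ChatTTS",
        "revision": "main",
        "pattern": "*.json",
        "sort_by": "size",
    },
}
payload = json.dumps(call)
print(payload)
```

In everyday use you never build this payload yourself; the client (Claude Desktop, Cursor, etc.) constructs it from your natural-language request.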

Download Tool Options (download_huggingface_model)

| Parameter        | Type         | Description               | Example                        |
|------------------|--------------|---------------------------|--------------------------------|
| repo_id          | string       | HuggingFace repository ID | "2Noise/ChatTTS"               |
| download_dir     | string       | Download directory        | "./models"                     |
| files            | array        | Specific file list        | ["model.bin", "config.json"]   |
| allow_patterns   | string/array | Include patterns          | "*.json" or ["*.pt", "*.bin"]  |
| ignore_patterns  | string/array | Exclude patterns          | "test_*" or ["*.onnx", "test_*"] |
| revision         | string       | Git branch/tag            | "main", "v1.0"                 |
| force_redownload | boolean      | Force re-download         | true, false                    |
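The interaction between allow_patterns and ignore_patterns can be illustrated with a small Python sketch (this mimics the semantics implied by the table; it is not the tool's actual code): a file is downloaded only if it matches at least one allow pattern (when any are given) and matches no ignore pattern.

```python
from fnmatch import fnmatch

def select_files(files, allow_patterns=None, ignore_patterns=None):
    """Filter file names the way allow_patterns/ignore_patterns are
    described above: allow first, then exclude."""
    def matches(name, patterns):
        return any(fnmatch(name, p) for p in patterns)

    selected = []
    for name in files:
        if allow_patterns and not matches(name, allow_patterns):
            continue  # not in the include set
        if ignore_patterns and matches(name, ignore_patterns):
            continue  # explicitly excluded
        selected.append(name)
    return selected

files = ["model.bin", "config.json", "test_data.bin", "model.onnx"]
print(select_files(files,
                   allow_patterns=["*.bin", "*.json"],
                   ignore_patterns=["test_*"]))
# ['model.bin', 'config.json']
```

Note that an ignore pattern wins over an allow pattern: test_data.bin matches "*.bin" but is still excluded by "test_*".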

Environment Variables

| Variable                  | Required | Default                         | Description                           |
|---------------------------|----------|---------------------------------|---------------------------------------|
| HUGGINGFACE_TOKEN         | Yes      | -                               | HuggingFace access token              |
| HUGGINGFETCH_DOWNLOAD_DIR | No       | ~/Downloads/huggingface_models  | Default download directory            |
| HF_HOME                   | No       | ~/.cache/huggingface            | Cache directory                       |
| LOG_LEVEL                 | No       | info                            | Log level (debug, info, warn, error)  |
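Reading these variables with their documented defaults looks like this in Python (an illustration of the table's defaults, not the server's actual startup code):

```python
import os
from pathlib import Path

# HUGGINGFACE_TOKEN has no default; it must be provided for gated repos.
token = os.environ.get("HUGGINGFACE_TOKEN")

# The remaining variables fall back to the defaults listed above.
download_dir = os.environ.get(
    "HUGGINGFETCH_DOWNLOAD_DIR",
    str(Path.home() / "Downloads" / "huggingface_models"),
)
hf_home = os.environ.get("HF_HOME", str(Path.home() / ".cache" / "huggingface"))
log_level = os.environ.get("LOG_LEVEL", "info")

print(download_dir, hf_home, log_level)
```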

FAQ

Q: Token authentication failed, what should I do?
A: Check that HUGGINGFACE_TOKEN is correctly set, and make sure the token is valid and has sufficient permissions.

Q: Download speed is slow, what can I do?
A: The tool supports resumable and concurrent downloads. Network issues may still cause slow speeds; failed transfers are retried automatically.

Q: How do I download private models?
A: Make sure your HuggingFace account has access to the repository and use a valid token.

Q: What file formats are supported?
A: All file formats hosted on HuggingFace are supported, including .pt, .bin, .safetensors, .json, .txt, etc.

Development

Prerequisites

  • Node.js 18+
  • npm or yarn

Installation

git clone https://github.com/freefish1218/mcp-huggingfetch.git
cd mcp-huggingfetch
npm install

Development Commands

npm run dev          # Run with file watching
npm start            # Run the MCP server
npm run test:basic   # Run basic functionality tests
npm test             # Run Jest unit tests
npm run lint         # Check code style
npm run lint:fix     # Auto-fix linting issues

Release Commands

npm run release:patch   # Release patch version (1.0.0 -> 1.0.1)
npm run release:minor   # Release minor version (1.0.0 -> 1.1.0)
npm run release:major   # Release major version (1.0.0 -> 2.0.0)

The release scripts will automatically:

  • Run tests and linting
  • Update the version number
  • Create a git tag
  • Push to GitHub
  • Publish to npm
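The version-bump step follows standard semantic versioning, which can be sketched in Python (an illustration of the numbering scheme, not the project's release script):

```python
def bump(version, part):
    """Bump a MAJOR.MINOR.PATCH version string the way the
    release commands above describe."""
    major, minor, patch = map(int, version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"   # breaking change: reset minor and patch
    if part == "minor":
        return f"{major}.{minor + 1}.0"  # new feature: reset patch
    return f"{major}.{minor}.{patch + 1}"  # bug fix

print(bump("1.0.0", "patch"))  # 1.0.1
print(bump("1.0.0", "minor"))  # 1.1.0
print(bump("1.0.0", "major"))  # 2.0.0
```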

Building

npm run build       # Build single binary
npm run build:all   # Build for all platforms (Linux, macOS, Windows)

License

MIT License - see the LICENSE file for details.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

NOTE:

This content is either user submitted or generated using AI technology (including, but not limited to, Google Gemini API, Llama, Grok, and Mistral), based on automated research and analysis of public data sources from search engines like DuckDuckGo, Google Search, and SearXNG, and directly from the tool's own website and with minimal to no human editing/review. THEJO AI is not affiliated with or endorsed by the AI tools or services mentioned. This is provided for informational and reference purposes only, is not an endorsement or official advice, and may contain inaccuracies or biases. Please verify details with original sources.
