Advanced Compute Options
This feature is part of the Team & Enterprise plans.
Team & Enterprise organizations gain access to advanced compute options to accelerate their machine learning journey.
Host ZeroGPU Spaces in your organization
ZeroGPU is a dynamic GPU allocation system that optimizes AI deployment on Hugging Face Spaces. Because it automatically allocates and releases NVIDIA RTX Pro 6000 Blackwell GPUs (96GB VRAM) as needed, organizations can serve their AI applications efficiently without dedicated GPU instances.
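In practice, a Space opts into ZeroGPU by decorating its GPU-bound functions with `@spaces.GPU` from the `spaces` package: hardware is attached for the duration of each decorated call and released afterwards. A minimal Gradio sketch (the model ID is an illustrative placeholder, not taken from this page):

```python
import gradio as gr
import spaces  # helper package available inside ZeroGPU Spaces
from diffusers import DiffusionPipeline

# Load at startup; with ZeroGPU, calling .to("cuda") here is safe because
# the actual device is only attached while a decorated function runs.
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.to("cuda")

@spaces.GPU  # a GPU is allocated for this call and released when it returns
def generate(prompt: str):
    return pipe(prompt).images

gr.Interface(fn=generate, inputs=gr.Text(), outputs=gr.Gallery()).launch()
```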

Key benefits for organizations
- Free GPU Access: Use powerful NVIDIA RTX Pro 6000 Blackwell GPUs at no additional cost through dynamic allocation
- Enhanced Resource Management: Host up to 50 ZeroGPU Spaces for efficient team-wide AI deployment; per-call GPU time can also be tuned, as shown in the sketch after this list
- Simplified Deployment: Easy integration with PyTorch-based models, Gradio apps, and other Hugging Face libraries
- Enterprise-Grade Infrastructure: High-performance GPUs with up to 96GB of VRAM per workload
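For calls that need the GPU for longer than the default allocation window, the `@spaces.GPU` decorator accepts a `duration` argument in seconds. A minimal sketch, assuming a PyTorch workload; the function body is an illustrative placeholder:

```python
import spaces
import torch

@spaces.GPU(duration=120)  # request up to ~120 seconds of GPU time per call
def heavy_step(batch: torch.Tensor) -> torch.Tensor:
    # Placeholder for a long-running forward pass.
    return batch.to("cuda").softmax(dim=-1).cpu()
```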