
Accelerate

Accelerate is Hugging Face's lightweight library for distributed PyTorch training across any device and configuration. It provides a unified interface to the most common distributed frameworks (FSDP, DeepSpeed) while leaving you in full control of your training loop, typically with only a few lines changed. It supports automatic mixed precision (including FP8), multi-GPU and multi-node setups, and seamless scaling to cloud platforms. Acting as a thin wrapper, Accelerate detects your distributed environment and handles device placement automatically, so there is no new high-level framework to learn. It is well suited to researchers and engineers who want the benefits of distributed training without sacrificing control or rewriting their PyTorch code.
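To make the "minimal code changes" claim concrete, here is a sketch of a standard PyTorch training loop adapted to Accelerate. The model, dataset, and hyperparameters are illustrative placeholders; the Accelerate-specific pieces are `Accelerator()`, `prepare()`, and `accelerator.backward()`.

```python
# A minimal sketch: a plain PyTorch loop with three Accelerate-specific changes.
# The model, dataset, and hyperparameters are placeholders for illustration.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # or "bf16", "fp8", "no"

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# prepare() wraps each object for the detected setup (single GPU, multi-GPU,
# multi-node, ...) and handles device placement, so no explicit .to(device).
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

loss_fn = torch.nn.CrossEntropyLoss()
model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward(); handles loss scaling
    optimizer.step()
```

The same script runs unchanged on one GPU or many: configure the environment once with `accelerate config`, then launch with `accelerate launch train.py`.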

pytorch distributed-training huggingface fsdp deepspeed mixed-precision