Distributed training with 🤗 Accelerate
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the 🤗 Accelerate library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
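To give a sense of what "customizing the training loop" means before we start, here is a minimal sketch using a hypothetical toy model and dataset (not part of this tutorial): the `Accelerator` object handles device placement, and `prepare()` plus `accelerator.backward()` are the main changes to a plain PyTorch loop.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# Hypothetical toy model and data, only to illustrate the loop changes.
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
dataloader = DataLoader(dataset, batch_size=8)
loss_fn = torch.nn.CrossEntropyLoss()

accelerator = Accelerator()
# prepare() wraps the model, optimizer, and dataloader so they run on
# whatever devices are available (CPU, single GPU, or multiple GPUs).
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, labels in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    # Use accelerator.backward() in place of loss.backward().
    accelerator.backward(loss)
    optimizer.step()
```

A script written this way can then be started on a distributed setup with the `accelerate launch` command; the rest of the tutorial walks through these steps in detail.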
Setup
Get started by installing 🤗 Accelerate:
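A typical installation uses pip:

```bash
pip install accelerate
```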