Distributed training with 🤗 Accelerate
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the 🤗 Accelerate library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.
Setup
Get started by installing 🤗 Accelerate:
pip install accelerate
Then import and create an [~accelerate.Accelerator] object. The [~accelerate.Accelerator] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.
from accelerate import Accelerator
accelerator = Accelerator()
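The rest of the tutorial builds on this object by passing your model, optimizer, and dataloaders through accelerator.prepare and replacing loss.backward() with accelerator.backward(loss). A minimal sketch of that pattern is shown below, continuing from the accelerator created above; the tiny model, optimizer, and dataloader are placeholder stand-ins for your own training objects, not part of this excerpt.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Placeholder model, optimizer, and dataloader for illustration only.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

# prepare() wraps each object for the detected setup and moves it to the right device(s).
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    outputs = model(inputs)
    loss = torch.nn.functional.cross_entropy(outputs, targets)
    accelerator.backward(loss)  # replaces loss.backward() so gradients sync correctly
    optimizer.step()
    optimizer.zero_grad()
The same script runs unchanged on a single GPU, multiple GPUs on one machine, or several machines; the Accelerator detects the environment at launch time.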