# model.to(device) is no longer needed: accelerator.prepare() handles device placement
train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
    train_dataloader, eval_dataloader, model, optimizer
)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps
)

progress_bar = tqdm(range(num_training_steps))

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        # accelerator.backward(loss) replaces the usual loss.backward()
        accelerator.backward(loss)

        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
        progress_bar.update(1)
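If you want to save the trained model afterwards, wait for all processes to finish and unwrap the model that was prepared by Accelerate first. This is only a minimal sketch, and the "my_model" output directory is a placeholder:

accelerator.wait_for_everyone()
unwrapped_model = accelerator.unwrap_model(model)
unwrapped_model.save_pretrained("my_model", save_function=accelerator.save)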
Train
Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.
Train with a script
If you are running your training from a script, run the following command to create and save a configuration file:
accelerate config
Then launch your training with:
accelerate launch train.py
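accelerate launch also accepts flags that override the saved configuration. As a sketch, the process count and mixed-precision setting below are only example values:
accelerate launch --num_processes=2 --mixed_precision=fp16 train.py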
Train with a notebook
🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [~accelerate.notebook_launcher]:
from accelerate import notebook_launcher
notebook_launcher(training_function)
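For example, a minimal sketch of that pattern might look like the following; the body of training_function and num_processes=8 (an 8-core TPU) are assumptions for illustration:

from accelerate import Accelerator, notebook_launcher

def training_function():
    # each spawned process runs this function with its own Accelerator instance
    accelerator = Accelerator()
    accelerator.print(f"process {accelerator.process_index} of {accelerator.num_processes}")
    # build the dataloaders, model, and optimizer here, then run the training
    # loop shown above with accelerator.prepare() and accelerator.backward()

# num_processes=8 assumes an 8-core TPU; adjust for your hardware
notebook_launcher(training_function, num_processes=8)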
For more information about 🤗 Accelerate and its rich features, refer to the documentation.