
Fine-tuning for image classification using LoRA and 🤗 PEFT

Vision Transformer model from transformers


We provide a notebook (image_classification_peft_lora.ipynb) showing how to use LoRA from 🤗 PEFT to fine-tune an image classification model while training ONLY 0.7% of the model's original trainable parameters.

LoRA adds low-rank "update matrices" to certain blocks in the underlying model (in this case the attention blocks) and ONLY trains those matrices during fine-tuning. During inference, these update matrices are merged with the original model parameters. For more details, check out the original LoRA paper.
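The mechanism described above can be sketched in plain PyTorch. This is a minimal, illustrative implementation (not the actual PEFT code): `LoRALinear` freezes a base linear layer, adds trainable low-rank update matrices A and B, and `merge()` folds the update into the base weight for inference. The rank and layer sizes are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: frozen base layer + trainable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the original weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank update: delta_W = B @ A, with r much smaller than the layer dims
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r
        self.merged = False

    def forward(self, x):
        out = self.base(x)
        if not self.merged:
            out = out + (x @ self.A.T @ self.B.T) * self.scaling
        return out

    @torch.no_grad()
    def merge(self):
        # Fold the update into the base weight so inference pays no extra cost.
        self.base.weight += (self.B @ self.A) * self.scaling
        self.merged = True

layer = LoRALinear(nn.Linear(768, 768), r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable}/{total} ({100 * trainable / total:.2f}%)")
```

Only A and B receive gradients, which is why the trainable fraction stays so small; after `merge()`, the forward pass is a single ordinary linear layer again.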

PoolFormer model from timm


The notebook image_classification_timm_peft_lora.ipynb showcases fine-tuning a PoolFormer image classification model from the timm library. Again, LoRA is used to reduce the number of trainable parameters to a fraction of the total.