Whisper-large-v3 finetuning using our own dataset
#70
by rifasca
I encountered issues while fine-tuning the Whisper-large-v3 model on a 100-hour Arabic dataset using LoRA via the PEFT library. The resulting transcriptions were highly inaccurate, with excessive hallucinations and frequent character duplication.
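
For reference, here is a minimal sketch of the kind of LoRA-PEFT setup I am describing. The LoRA hyperparameters (r, lora_alpha, dropout, target modules) are illustrative placeholders, not the exact values from my run:

```python
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import LoraConfig, get_peft_model

# Base model and processor; language/task are set so the tokenizer
# produces the correct Arabic decoder prompt tokens.
model_name = "openai/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(
    model_name, language="Arabic", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained(
    model_name, torch_dtype=torch.float16
)

# LoRA targeting the attention projections (values are examples only).
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Pin language/task on the generation config so decoding does not drift
# into language detection during evaluation.
model.generation_config.language = "arabic"
model.generation_config.task = "transcribe"
```

From what I understand, mismatched or missing language/task tokens at generation time are one common source of hallucinated and repeated output with Whisper, but even with the language and task pinned as above, my transcriptions still show the problems described.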