QuantFactory/Llama2-7B-Hindi-finetuned-GGUF
This is a quantized version of subhrokomol/Llama2-7B-Hindi-finetuned, created using llama.cpp.
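Because the weights are distributed as GGUF quants, they can be run directly with llama.cpp or any of its bindings. Below is a minimal sketch using llama-cpp-python; the quant filename pattern, context size, and prompt are assumptions, so check the repository's file list for the exact file you want.

```python
# Minimal sketch: download one of the GGUF quants from this repo with
# llama-cpp-python and generate a short completion.
# The filename glob and n_ctx are assumptions, not values from the card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama2-7B-Hindi-finetuned-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick any file present in the repo
    n_ctx=2048,               # context window (assumption)
)

out = llm("भारत की राजधानी क्या है?", max_tokens=64)
print(out["choices"][0]["text"])
```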
Original Model Card
A fine-tune of Llama-2-7B-hf on a Hindi dataset after transtokenization.
This model was trained for 3 hours on a 24 GB RTX A500, using 1% of the zicsx/mC4-Hindi-Cleaned-3.0 dataset.
Training was done with Hugging Face PEFT (LoRA) in PyTorch.
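As a rough illustration of that setup, here is a hedged sketch of a PEFT-LoRA configuration for Llama-2-7B-hf on the same dataset slice; the LoRA rank, alpha, dropout, and target modules are assumptions, not the values the authors actually used.

```python
# Hedged sketch of a PEFT-LoRA fine-tuning setup (hyperparameters are assumptions).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto"
)

# Roughly 1% of the Hindi corpus, as stated in the card.
dataset = load_dataset("zicsx/mC4-Hindi-Cleaned-3.0", split="train[:1%]")

lora = LoraConfig(
    r=16,                                   # assumed rank
    lora_alpha=32,                          # assumed scaling
    lora_dropout=0.05,                      # assumed dropout
    target_modules=["q_proj", "v_proj"],    # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```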
Transtokenization process in --
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Base model: meta-llama/Llama-2-7b-hf