# StudyAbroadGPT-7B
A fine-tuned version of Mistral-7B optimized for study abroad assistance.
## Model Details
- Base Model: Mistral-7B
- Training: LoRA fine-tuning (r=16, alpha=32)
- Quantization: 4-bit
- Max Length: 2048 tokens
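
For reference, here is a minimal sketch of how the reported settings (LoRA r=16, alpha=32, 4-bit quantization) could map onto a `peft` + `bitsandbytes` setup. The base checkpoint id, quantization type, target modules, and dropout below are assumptions; the card only states the values listed above.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantized loading of the base model (NF4 is an assumption; the card only says "4-bit")
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # assumed base checkpoint id
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter matching the reported hyperparameters (r=16, alpha=32);
# target_modules and dropout are illustrative assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```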
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("millat/StudyAbroadGPT-7B")
tokenizer = AutoTokenizer.from_pretrained("millat/StudyAbroadGPT-7B")
```
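
Once loaded, the model can be queried like any causal LM. The prompt format and generation settings below are assumptions, since the card does not specify a chat template.

```python
# Example query; the exact prompt format expected by the fine-tune is an assumption
prompt = "What documents do I need for a US F-1 student visa application?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate within the model's 2048-token context window
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```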