# Llama3-8B-Instruct-Turkish-Finetuned GGUF Quantized Models
## Technical Details
- Quantization tool: llama.cpp
- Version: 5162 (2016f07b)
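For reference, here is a minimal sketch of the standard llama.cpp quantization workflow (convert the Hugging Face checkpoint to an F16 GGUF, then quantize it). The directory and file names are illustrative placeholders, assuming a local llama.cpp checkout; they are not the exact commands used for this release:

```python
import subprocess

# Hypothetical paths -- adjust to your local llama.cpp checkout and model directory.
HF_MODEL_DIR = "Llama3-8B-Instruct-Turkish-Finetuned"
F16_GGUF = "llama3-8b-instruct-turkish-finetuned.F16.gguf"
Q4_GGUF = "llama3-8b-instruct-turkish-finetuned.Q4_K_M.gguf"

# Step 1: convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", HF_MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# Step 2: quantize the F16 GGUF down to a smaller type (repeat per quant type,
# e.g. Q2_K, Q3_K_M, Q5_K_M, ...).
subprocess.run(
    ["./llama-quantize", F16_GGUF, Q4_GGUF, "Q4_K_M"],
    check=True,
)
```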
## Model Information
- Base model: matrixportal/Llama3-8B-Instruct-Turkish-Finetuned
- Quantized by: matrixportal
## Available Files
| Type | Description |
|---|---|
| Q2_K | Tiny size, lowest quality (emergency use only) |
| Q3_K_S | Very small, low quality (basic tasks) |
| Q3_K_M | Small, acceptable quality |
| Q3_K_L | Small, better than Q3_K_M (good for low RAM) |
| Q4_0 | Standard 4-bit (fast on ARM) |
| Q4_K_S | 4-bit optimized (good space savings) |
| Q4_K_M | 4-bit balanced (recommended default) |
| Q5_0 | 5-bit high quality |
| Q5_K_S | 5-bit optimized |
| Q5_K_M | 5-bit best (recommended HQ option) |
| Q6_K | 6-bit near-perfect (premium quality) |
| Q8_0 | 8-bit maximum (overkill for most) |
| F16 | Full precision (maximum accuracy) |
💡 Q4_K_M provides the best balance for most use cases.
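To run one of these files locally, a sketch along these lines should work with `huggingface_hub` and `llama-cpp-python`. The GGUF filename below is an assumption; check the repository's file list for the exact name:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the recommended Q4_K_M file. The filename is assumed -- verify it
# against the actual file list in the repository.
model_path = hf_hub_download(
    repo_id="matrixportal/Llama3-8B-Instruct-Turkish-Finetuned-GGUF",
    filename="llama3-8b-instruct-turkish-finetuned.Q4_K_M.gguf",
)

# Load the GGUF model; n_ctx and n_gpu_layers are illustrative defaults.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Llama 3 Instruct models expect a chat template; create_chat_completion
# applies the template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Merhaba! Kendini tanıtır mısın?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Q4_K_M is used here per the recommendation above; swap in a higher-precision variant such as Q5_K_M or Q6_K if you have the memory to spare.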
## Model tree for matrixportal/Llama3-8B-Instruct-Turkish-Finetuned-GGUF
- Base model: meta-llama/Meta-Llama-3-8B-Instruct