About

static quants of https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-i1-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
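For a quick start, here is a minimal Python sketch (not part of this repo) that downloads one of the quants listed below and runs it with llama-cpp-python. The exact filename is an assumption, so check the repository's file listing first; requires `pip install huggingface_hub llama-cpp-python`.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant file from this repo.
# NOTE: the filename follows the usual "<model>.<QUANT>.gguf" pattern and is an
# assumption here -- verify it against the repo's file listing.
model_path = hf_hub_download(
    repo_id="mradermacher/Qwen2.5-0.5B-Instruct-GGUF",
    filename="Qwen2.5-0.5B-Instruct.Q4_K_M.gguf",
)

# Load the GGUF with llama-cpp-python and run a short chat completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```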

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants.) A short sketch for listing the files actually present in the repo follows the table.

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q3_K_S | 0.4 | |
| GGUF | IQ3_S | 0.4 | beats Q3_K* |
| GGUF | IQ3_XS | 0.4 | |
| GGUF | Q2_K | 0.4 | |
| GGUF | IQ3_M | 0.4 | |
| GGUF | IQ4_XS | 0.5 | |
| GGUF | Q4_0 | 0.5 | fast, low quality |
| GGUF | Q4_0_4_4 | 0.5 | fast on arm, low quality |
| GGUF | Q4_0_4_8 | 0.5 | fast on arm+i8mm, low quality |
| GGUF | Q4_0_8_8 | 0.5 | fast on arm+sve, low quality |
| GGUF | IQ4_NL | 0.5 | prefer IQ4_XS |
| GGUF | Q3_K_M | 0.5 | lower quality |
| GGUF | Q3_K_L | 0.5 | |
| GGUF | Q4_1 | 0.5 | |
| GGUF | Q4_K_S | 0.5 | fast, recommended |
| GGUF | Q5_0 | 0.5 | |
| GGUF | Q4_K_M | 0.5 | fast, recommended |
| GGUF | Q5_K_S | 0.5 | |
| GGUF | Q5_1 | 0.5 | |
| GGUF | Q5_K_M | 0.5 | |
| GGUF | Q6_K | 0.6 | very good quality |
| GGUF | Q8_0 | 0.6 | fast, best quality |
| GGUF | SOURCE | 1.1 | source gguf, only provided when it was hard to come by |
| GGUF | bf16 | 1.1 | 16 bpw, overkill |
| GGUF | f16 | 1.1 | 16 bpw, overkill |
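To cross-check the Size/GB column above against what is actually in the repo, here is a small sketch using huggingface_hub (the repo_id matches this model card; the library call and field names are standard huggingface_hub API):

```python
from huggingface_hub import HfApi

api = HfApi()
# files_metadata=True populates per-file sizes in info.siblings
info = api.model_info("mradermacher/Qwen2.5-0.5B-Instruct-GGUF", files_metadata=True)
for f in info.siblings:
    if f.rfilename.endswith(".gguf"):
        size_gb = (f.size or 0) / 1e9
        print(f"{f.rfilename}\t{size_gb:.2f} GB")
```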

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph by ikawrakow comparing lower-quality quant types; lower is better]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
