# llama.cpp quantization

Using llama.cpp release b4944 for quantization.
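For reference, quants like these are typically produced with llama.cpp's `convert_hf_to_gguf.py` script followed by the `llama-quantize` tool. A minimal sketch, assuming the original model has been cloned locally (paths and filenames are illustrative, not the exact commands used for this repo):

```bash
# Convert the original HF checkpoint to an FP16 GGUF.
python convert_hf_to_gguf.py ./DeepSeek-R1-Distill-Qwen-1.5B \
  --outfile deepseek-r1-distill-qwen-1.5b-f16.gguf --outtype f16

# Quantize to the two types offered in this repo.
./llama-quantize deepseek-r1-distill-qwen-1.5b-f16.gguf \
  deepseek-r1-distill-qwen-1.5b-q4_0.gguf Q4_0
./llama-quantize deepseek-r1-distill-qwen-1.5b-f16.gguf \
  deepseek-r1-distill-qwen-1.5b-q2_k.gguf Q2_K
```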

Original model: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B

Run them directly with llama.cpp, or with any other llama.cpp-based project.

## Prompt format

```
<|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|>
```
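
To run one of these files directly, a minimal sketch using llama.cpp's `llama-cli` binary (the model path, prompt, and flags here are illustrative; `-cnv` starts an interactive chat using the template embedded in the GGUF):

```bash
# Interactive chat; the chat template stored in the GGUF is applied automatically.
./llama-cli -m deepseek-r1-distill-qwen-1.5b-q4_0.gguf -cnv

# Alternatively, pass a raw prompt in the format above:
./llama-cli -m deepseek-r1-distill-qwen-1.5b-q4_0.gguf \
  -p "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>Why is the sky blue?<|Assistant|>" \
  -n 256
```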

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| deepseek-r1-distill-qwen-1.5b-q4_0.gguf | Q4_0 | 0.99 GB | False |
| deepseek-r1-distill-qwen-1.5b-q2_k.gguf | Q2_K | 0.70 GB | False |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```bash
huggingface-cli download Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0-Q2_K-GGUF --include "deepseek-r1-distill-qwen-1.5b-q4_0.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```bash
huggingface-cli download Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0-Q2_K-GGUF --include "deepseek-r1-distill-qwen-1.5b-q4_0.gguf/*" --local-dir ./
```

You can either specify a new local-dir (e.g. DeepSeek-R1-Distill-Qwen-1.5B-GGUF) or download them all in place (./).
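
Both quants can also be fetched in one command with a glob pattern (a small convenience sketch; `*.gguf` matches every GGUF file in the repo):

```bash
huggingface-cli download Brianpuz/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0-Q2_K-GGUF \
  --include "*.gguf" --local-dir ./
```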
