4-bit GPTQ quantized version of DeepSeek-R1-Distill-Qwen-1.5B for inference with the Private LLM app.
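For reference, below is a minimal sketch of loading a 4-bit GPTQ checkpoint through the Hugging Face `transformers` GPTQ integration. This is an assumption, not the documented workflow for this repository: the model is packaged for the Private LLM app, so the weights may not be in a layout `transformers` can read, and the `auto-gptq` or `gptqmodel` backend would need to be installed.

```python
# Minimal sketch: loading a 4-bit GPTQ checkpoint with transformers.
# Assumes the repo ships standard GPTQ weights with a quantization config
# (not guaranteed for this repo, which targets the Private LLM app) and that
# an auto-gptq or gptqmodel backend is installed alongside transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "numen-tech/DeepSeek-R1-Distill-Qwen-1.5B-GPTQ-Int4"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Explain what 4-bit GPTQ quantization does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```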
Model: numen-tech/DeepSeek-R1-Distill-Qwen-1.5B-GPTQ-Int4
Base model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B