Also see:
- 24B Instruct GGUF (this model)
- 24B Instruct HF
- 24B Base HF
GGUF quants of Mistral Small 3.1 Instruct 24B in the Mistral format, compatible with llama.cpp (and almost any other llama.cpp-based app).
Use the Mistral chat template.
Only the text component has been converted to GGUF; this does not work as a vision model.
No imatrix yet, sorry!
Quantizations are available at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit precision.
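
For llama.cpp bindings such as llama-cpp-python, loading one of these GGUF files and applying the Mistral chat template looks roughly like the sketch below. The filename, context size, and GPU offload settings are assumptions; substitute whichever quant you downloaded and adjust to your hardware.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-small-3.1-24b-instruct-2503-Q4_K_M.gguf",  # assumed filename; use the quant you downloaded
    n_ctx=8192,                      # context window; raise if you have the memory
    n_gpu_layers=-1,                 # offload all layers to GPU if built with GPU support
    chat_format="mistral-instruct",  # apply the Mistral chat template
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what a GGUF quantization is in one sentence."}
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF files also work directly with the llama.cpp CLI and with most downstream apps that accept GGUF models; the Python route is shown only because it makes the chat-template handling explicit.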
Base model: mistralai/Mistral-Small-3.1-24B-Base-2503