Model Card for Mistral-Small-24B-Instruct-2501

Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: Mistral-Small-24B-Base-2501.

Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for:

Fast response conversational agents.
Low latency function calling.
Subject matter experts via fine-tuning.
Local inference for hobbyists and organizations handling sensitive data.
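To see why the model fits in a single RTX 4090 (24 GB) or a 32 GB MacBook once quantized, here is a back-of-envelope memory sketch. It counts only weight storage (KV cache and runtime overhead are extra), and assumes roughly 4.5 bits per weight for common 4-bit GGUF quantizations; these figures are approximations, not official measurements.

```python
# Rough weight-memory estimate for a 23.6B-parameter model at
# common precisions. Counts weight storage only, ignoring the
# KV cache and runtime overhead.
PARAMS = 23.6e9  # parameter count reported for this model

def weight_memory_gb(bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes."""
    return PARAMS * bits_per_weight / 8 / 1e9

# 4.5 bits/weight is a typical effective rate for 4-bit GGUF quants.
for label, bits in [("FP16", 16.0), ("8-bit", 8.0), ("4-bit", 4.5)]:
    print(f"{label:>6}: ~{weight_memory_gb(bits):.1f} GB")
```

At these rates, the 4-bit variant lands around 13 GB and the 8-bit variant around 24 GB, which is why the quantized model fits on a single 24 GB GPU or a 32 GB machine while FP16 (about 47 GB) does not.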

For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.

This release demonstrates our commitment to open source, serving as a strong base model.

Learn more about Mistral Small in our blog post.

Model developer: Mistral AI Team

This version is fine-tuned on the dataset louisbrulenaudet/code-securite-sociale.

This is my first version.

Format: GGUF
Model size: 23.6B params
Architecture: llama

Available quantizations: 4-bit, 8-bit

Model tree for martossien/mistrall-24b-css_Instruct-2501_GGUF

Dataset used to train martossien/mistrall-24b-css_Instruct-2501_GGUF: louisbrulenaudet/code-securite-sociale