Model Card for Zero-Mistral-24B
Zero-Mistral-24B is an improved text-only version of mistralai/Mistral-Small-3.1-24B-Instruct-2503, adapted primarily for Russian and English. The vision features of the original Mistral model have been removed. Training consisted of an SFT stage, mainly on the Big Russian Dataset and a proprietary dataset from Shkolkovo.online.
The model has good math skills and some reasoning abilities.
The model retains the original Mistral long-context capability of up to 128k tokens.
Model Details
Model Description
- Developed by: ZeroAgency.ru
- Funded by: ZeroAgency.ru and Shkolkovo.online
- Shared by: Alexander Kozhevnikov (developer)
- Model type: LLM
- Language(s) (NLP): Russian, English
- License: MIT
- Finetuned from model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
Model versions
- Merged 16-bit - the original 16-bit merged version for transformers.
- GGUF - different GGUF quantizations: BF16, F16, Q8_0, Q6_K, Q4_K_M, IQ4_XS, etc. (see the download sketch below).
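If you only need a single quantized GGUF file locally, one option is huggingface_hub. A minimal sketch; the repository id ZeroAgency/Zero-Mistral-24B-GGUF and the exact filename are assumptions, so check the published GGUF repository for the real names:

from huggingface_hub import hf_hub_download

# Fetch one quantized file; repo_id and filename below are assumptions,
# verify them against the actual GGUF repository before use.
gguf_path = hf_hub_download(
    repo_id="ZeroAgency/Zero-Mistral-24B-GGUF",
    filename="Zero-Mistral-24B-Q4_K_M_L.gguf",
)
print(gguf_path)  # local path of the downloaded file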
Benchmarks for the main 16-bit merged version
MERA
MERA score: 0.623
Task | Result | Metric |
---|---|---|
LCS | 0.194 | Accuracy |
RCB | 0.607 / 0.592 | Avg. F1 / Accuracy |
USE | 0.452 | Grade Norm |
RWSD | 0.55 | Accuracy |
PARus | 0.942 | Accuracy |
ruTiE | 0.868 | Accuracy |
MultiQ | 0.781 / 0.629 | F1-score/EM |
CheGeKa | 0.397 / 0.322 | F1 / EM |
ruModAr | 0.971 | EM |
MaMuRAMu | 0.832 | Accuracy |
ruMultiAr | 0.354 | EM |
ruCodeEval | 0 / 0 / 0 | pass@k ¯\_(ツ)_/¯ |
MathLogicQA | 0.613 | Accuracy |
ruWorldTree | 0.987 / 0.987 | Avg. F1 / Accuracy |
ruOpenBookQA | 0.913 / 0.913 | Avg. F1 / Accuracy |
Evaluation on open tasks:
Task | Result | Metric |
---|---|---|
BPS | 0.981 | Accuracy |
ruMMLU | 0.778 | Accuracy |
SimpleAr | 0.997 | EM |
ruHumanEval | 0.006 / 0.006 / 0.006 | pass@k ¯\_(ツ)_/¯ |
ruHHH | 0.916 | Accuracy |
ruHateSpeech | 0.834 | Accuracy |
ruDetox | 0.341 / 0.843 / 0.624 / 0.66 | Overall mean score (J) / Meaning preservation (SIM) / Naturalness (FL) / Style transfer accuracy (STA) |
ruEthics | [[0.386, 0.399, 0.41, 0.333, 0.327], [0.421, 0.427, 0.452, 0.375, 0.363], [0.653, 0.65, 0.697, 0.596, 0.573]] | MCC for 5 ethical criteria across 3 question setups |
Usage
The model can be used with the following frameworks:
Recommended system prompts
prompts = {
"generic": "Π’Ρ Π²ΠΈΡΡΡΠ°Π»ΡΠ½ΡΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° Π²ΠΎΠΏΡΠΎΡΡ Π»ΡΠ΄Π΅ΠΉ, ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ ΠΈ ΠΏΠΎΠ΄Π΄Π΅ΡΠΆΠΈΠ²Π°Π΅ΡΡ. Π’Ρ ΡΠΎΠ·Π΄Π°Π½, ΡΡΠΎΠ±Ρ Π±ΡΡΡ ΠΏΠΎΠ»Π΅Π·Π½ΡΠΌ, Π±Π΅Π·ΠΎΠ±ΠΈΠ΄Π½ΡΠΌ ΠΈ ΡΠ΅ΡΡΠ½ΡΠΌ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° ΡΠΎΠΌ ΡΠ·ΡΠΊΠ΅, Π½Π° ΠΊΠΎΡΠΎΡΠΎΠΌ Π±ΡΠ» Π·Π°Π΄Π°Π½ Π²ΠΎΠΏΡΠΎΡ ΠΈΠ»ΠΈ ΠΏΠΎΠΏΡΠΎΡΠΈΠ» ΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΠ΅Π»Ρ.",
"think": """Π’Ρ Π²ΠΈΡΡΡΠ°Π»ΡΠ½ΡΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° Π²ΠΎΠΏΡΠΎΡΡ Π»ΡΠ΄Π΅ΠΉ, ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ ΠΈ ΠΏΠΎΠ΄Π΄Π΅ΡΠΆΠΈΠ²Π°Π΅ΡΡ. Π’Ρ ΡΠΎΠ·Π΄Π°Π½, ΡΡΠΎΠ±Ρ Π±ΡΡΡ ΠΏΠΎΠ»Π΅Π·Π½ΡΠΌ, Π±Π΅Π·ΠΎΠ±ΠΈΠ΄Π½ΡΠΌ ΠΈ ΡΠ΅ΡΡΠ½ΡΠΌ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° ΡΠΎΠΌ ΡΠ·ΡΠΊΠ΅, Π½Π° ΠΊΠΎΡΠΎΡΠΎΠΌ Π±ΡΠ» Π·Π°Π΄Π°Π½ Π²ΠΎΠΏΡΠΎΡ ΠΈΠ»ΠΈ ΠΏΠΎΠΏΡΠΎΡΠΈΠ» ΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΠ΅Π»Ρ.
Answer in the following format:
<think>Reasoning: ...</think>
...""",
"task": "Π’Ρ Π²ΠΈΡΡΡΠ°Π»ΡΠ½ΡΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° Π²ΠΎΠΏΡΠΎΡΡ Π»ΡΠ΄Π΅ΠΉ, ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ ΠΈ ΠΏΠΎΠ΄Π΄Π΅ΡΠΆΠΈΠ²Π°Π΅ΡΡ. Π’Ρ ΡΠΎΠ·Π΄Π°Π½, ΡΡΠΎΠ±Ρ Π±ΡΡΡ ΠΏΠΎΠ»Π΅Π·Π½ΡΠΌ, Π±Π΅Π·ΠΎΠ±ΠΈΠ΄Π½ΡΠΌ ΠΈ ΡΠ΅ΡΡΠ½ΡΠΌ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° ΡΠΎΠΌ ΡΠ·ΡΠΊΠ΅, Π½Π° ΠΊΠΎΡΠΎΡΠΎΠΌ Π±ΡΠ» Π·Π°Π΄Π°Π½ Π²ΠΎΠΏΡΠΎΡ ΠΈΠ»ΠΈ ΠΏΠΎΠΏΡΠΎΡΠΈΠ» ΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΠ΅Π»Ρ. Π Π΅ΡΠΈ Π·Π°Π΄Π°ΡΡ ΠΏΠΎ ΠΈΠ½ΡΡΡΡΠΊΡΠΈΠΈ Π½ΠΈΠΆΠ΅. ΠΠ΅ ΠΈΠ·Π²ΠΈΠ½ΡΠΉΡΡ, Π½Π΅ ΡΡΡΠΎΠΉ Π΄ΠΈΠ°Π»ΠΎΠ³.",
"task_think": """Π’Ρ Π²ΠΈΡΡΡΠ°Π»ΡΠ½ΡΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° Π²ΠΎΠΏΡΠΎΡΡ Π»ΡΠ΄Π΅ΠΉ, ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ ΠΈ ΠΏΠΎΠ΄Π΄Π΅ΡΠΆΠΈΠ²Π°Π΅ΡΡ. Π’Ρ ΡΠΎΠ·Π΄Π°Π½, ΡΡΠΎΠ±Ρ Π±ΡΡΡ ΠΏΠΎΠ»Π΅Π·Π½ΡΠΌ, Π±Π΅Π·ΠΎΠ±ΠΈΠ΄Π½ΡΠΌ ΠΈ ΡΠ΅ΡΡΠ½ΡΠΌ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° ΡΠΎΠΌ ΡΠ·ΡΠΊΠ΅, Π½Π° ΠΊΠΎΡΠΎΡΠΎΠΌ Π±ΡΠ» Π·Π°Π΄Π°Π½ Π²ΠΎΠΏΡΠΎΡ ΠΈΠ»ΠΈ ΠΏΠΎΠΏΡΠΎΡΠΈΠ» ΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΠ΅Π»Ρ. Π Π΅ΡΠΈ Π·Π°Π΄Π°ΡΡ ΠΏΠΎ ΠΈΠ½ΡΡΡΡΠΊΡΠΈΠΈ Π½ΠΈΠΆΠ΅. ΠΠ΅ ΠΈΠ·Π²ΠΈΠ½ΡΠΉΡΡ, Π½Π΅ ΡΡΡΠΎΠΉ Π΄ΠΈΠ°Π»ΠΎΠ³.
Answer in the following format:
<think>Reasoning: ...</think>
...""",
"english_generic": """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")
""",
"english_think": """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")
Answer in the following format:
<think>Reasoning: ...</think>
""",
}
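For example, a minimal sketch of pairing one of these prompts with a user message; the resulting messages list is what you pass to any of the backends below:

# Pick one of the recommended system prompts and build a chat.
messages = [
    {"role": "system", "content": prompts["think"]},  # or "generic", "task", "task_think", ...
    {"role": "user", "content": "What is 2 + 2?"},
]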
vLLM
We recommend using this model with the vLLM library to implement production-ready inference pipelines.
Note 1: We recommend using a relatively low temperature, such as temperature=0.15.
Note 2: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt:
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")
"""
Note 3: flash_attn or flashinfer-python is preferred for better performance.
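To illustrate Notes 1 and 2 together, here is a sketch of a chat-completions request body that sets the low temperature and the system prompt; the field names follow the OpenAI-compatible API that vLLM serves:

# Request body for the OpenAI-compatible /v1/chat/completions endpoint.
data = {
    "model": "ZeroAgency/Zero-Mistral-24B",
    "messages": [
        {"role": "system", "content": system_prompt},  # system prompt from Note 2
        {"role": "user", "content": "Who is the best French painter?"},
    ],
    "temperature": 0.15,  # low temperature, as recommended in Note 1
}

This dict can be POSTed exactly like the client snippet in the Server section below.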
Installation
Make sure you install vLLM >= 0.8.4:
pip install --upgrade vllm
Also make sure you have mistral_common >= 1.5.4 installed:
pip install --upgrade mistral_common
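To double-check that the installed versions satisfy these minimums, a quick sketch:

from importlib.metadata import version

# Both minimum versions are stated above.
print("vllm:", version("vllm"))                      # expect >= 0.8.4
print("mistral_common:", version("mistral_common"))  # expect >= 1.5.4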
You can also make use of a ready-to-go Docker image or one hosted on Docker Hub.
Server
We recommend that you use ZeroAgency/Zero-Mistral-24B in a server/client setting.
- Spin up a server:
vllm serve ZeroAgency/Zero-Mistral-24B --enable-prefix-caching --dtype bfloat16 --max-model-len 32768 --tool-call-parser mistral --enable-auto-tool-choice
Note: Running Zero-Mistral-24B on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
- To query the server you can use a simple Python snippet.
import requests
import json
from datetime import datetime, timedelta
url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "ZeroAgency/Zero-Mistral-24B"
messages = [
{
"role": "system",
"content": """Π’Ρ Π²ΠΈΡΡΡΠ°Π»ΡΠ½ΡΠΉ Π°ΡΡΠΈΡΡΠ΅Π½Ρ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° Π²ΠΎΠΏΡΠΎΡΡ Π»ΡΠ΄Π΅ΠΉ, ΠΏΠΎΠΌΠΎΠ³Π°Π΅ΡΡ ΠΈΠΌ ΠΈ ΠΏΠΎΠ΄Π΄Π΅ΡΠΆΠΈΠ²Π°Π΅ΡΡ. Π’Ρ ΡΠΎΠ·Π΄Π°Π½, ΡΡΠΎΠ±Ρ Π±ΡΡΡ ΠΏΠΎΠ»Π΅Π·Π½ΡΠΌ, Π±Π΅Π·ΠΎΠ±ΠΈΠ΄Π½ΡΠΌ ΠΈ ΡΠ΅ΡΡΠ½ΡΠΌ. Π’Ρ ΠΎΡΠ²Π΅ΡΠ°Π΅ΡΡ Π½Π° ΡΠΎΠΌ ΡΠ·ΡΠΊΠ΅, Π½Π° ΠΊΠΎΡΠΎΡΠΎΠΌ Π±ΡΠ» Π·Π°Π΄Π°Π½ Π²ΠΎΠΏΡΠΎΡ ΠΈΠ»ΠΈ ΠΏΠΎΠΏΡΠΎΡΠΈΠ» ΠΏΠΎΠ»ΡΠ·ΠΎΠ²Π°ΡΠ΅Π»Ρ. Π Π΅ΡΠΈ Π·Π°Π΄Π°ΡΡ ΠΏΠΎ ΠΈΠ½ΡΡΡΡΠΊΡΠΈΠΈ Π½ΠΈΠΆΠ΅. ΠΠ΅ ΠΈΠ·Π²ΠΈΠ½ΡΠΉΡΡ, Π½Π΅ ΡΡΡΠΎΠΉ Π΄ΠΈΠ°Π»ΠΎΠ³.
Answer in the following format:
<think>Reasoning: ...</think>
..."""
},
{ # Task from https://3.shkolkovo.online/catalog/2552/93150
"role": "user",
"content": """ΠΠ΅ΡΠ²ΡΠΉ ΡΠ°Π±ΠΎΡΠΈΠΉ Π·Π° ΡΠ°Ρ Π΄Π΅Π»Π°Π΅Ρ Π½Π° 9 Π΄Π΅ΡΠ°Π»Π΅ΠΉ Π±ΠΎΠ»ΡΡΠ΅, ΡΠ΅ΠΌ Π²ΡΠΎΡΠΎΠΉ, ΠΈ Π²ΡΠΏΠΎΠ»Π½ΡΠ΅Ρ Π·Π°ΠΊΠ°Π·, ΡΠΎΡΡΠΎΡΡΠΈΠΉ ΠΈΠ· 216 Π΄Π΅ΡΠ°Π»Π΅ΠΉ, Π½Π° 4 ΡΠ°ΡΠ° Π±ΡΡΡΡΠ΅Π΅, ΡΠ΅ΠΌ Π²ΡΠΎΡΠΎΠΉ ΡΠ°Π±ΠΎΡΠΈΠΉ, Π²ΡΠΏΠΎΠ»Π½ΡΡΡΠΈΠΉ ΡΠ°ΠΊΠΎΠΉ ΠΆΠ΅ Π·Π°ΠΊΠ°Π·. Π‘ΠΊΠΎΠ»ΡΠΊΠΎ Π΄Π΅ΡΠ°Π»Π΅ΠΉ Π² ΡΠ°Ρ Π΄Π΅Π»Π°Π΅Ρ ΠΏΠ΅ΡΠ²ΡΠΉ ΡΠ°Π±ΠΎΡΠΈΠΉ?"""
},
]
data = {"model": model, "messages": messages}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
#<think> Пусть x — количество деталей, которые делает второй рабочий за час. Тогда первый рабочий делает x + 9 деталей за час. Составим таблицу: Первый рабочий Второй рабочий Количество деталей в час x + 9 x Количество часов 216 : (x + 9) 216 : x Разность количества часов 4 216 : (x + 9) − 216 : x = 4 216x − 216(x + 9) = 4x(x + 9) 216x − 216x − 1944 = 4x^2 + 36x 1944 = 4x^2 + 36x 4x^2 + 36x − 1944 = 0 D = 36^2 + 4 · 4 · 1944 = 1296 + 31104 = 32400 = 180^2 x1 = −36 + 180 : 8 = 144 : 8 = 18 x2 = −36 − 180 : 8 < 0 — не подходит по смыслу задачи. Тогда первый рабочий делает 18 + 9 = 27 деталей в час. </think>
#27
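Because vLLM exposes an OpenAI-compatible API, the same request can also be issued with the official openai Python client; a minimal sketch reusing the messages list above (the dummy key mirrors the Authorization header):

from openai import OpenAI

client = OpenAI(base_url="http://<your-server>:8000/v1", api_key="token")

completion = client.chat.completions.create(
    model="ZeroAgency/Zero-Mistral-24B",
    messages=messages,   # same message list as in the snippet above
    temperature=0.15,    # low temperature, as recommended earlier
)
print(completion.choices[0].message.content)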
Offline
from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta
# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model="ZeroAgency/Zero-Mistral-24B", tokenizer_mode="mistral", tensor_parallel_size=8)
SYSTEM_PROMPT = """Ты виртуальный ассистент. Ты отвечаешь на вопросы людей, помогаешь им и поддерживаешь. Ты создан, чтобы быть полезным, безобидным и честным. Ты отвечаешь на том языке, на котором был задан вопрос или попросил пользователь.
Answer in the following format:
<think>Reasoning: ...</think>
..."""
user_prompt = """Что больше 9.9 или 9.11?"""
messages = [
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": user_prompt
},
]
sampling_params = SamplingParams(max_tokens=512, temperature=0.0, top_p=1, top_k=-1)
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
#<think> Задача: Сравните 9.9 и 9.11 для определения того, какой из них больше Подход: Десятичное сравнение с выравниванием десятичных точек Сложность: Низкий к среднему Я должен тщательно выровнять десятичные точки и сравнить цифры по местам. 1. Выровнять десятичные точки: 9.90 9.11 2. Сравните целые числа: оба имеют 9, поэтому они равны 3. Сравните десятые места: 9.90 имеет 9, 9.11 имеет 1 9 > 1, поэтому 9.90 больше 4. Сравните сотые места: 9.90 имеет 0, 9.11 имеет 1 0 < 1, но это не имеет значения, поскольку десятое место уже определило большее число<reflection>Я правильно выровнял десятичные точки и сравнил цифры по местам. Я заметил, что десятое место (9 против 1) определило, что 9.9 больше, чем 9.11. Сотые места не были необходимы для этого сравнения.</reflection> <self_improvement>В будущих сравнениях я буду уделять первоочередное внимание самым левым цифрам, где есть разница, чтобы оптимизировать процесс сравнения.</self_improvement> </think> 9.9 больше, чем 9.11. Когда вы сравниваете десятичные числа, вы начинаете с целых чисел, затем переходите к десятым местам, сотым местам и так далее. В этом случае 9.9 имеет 9 в десятом месте, в то время как 9.11 имеет 1 в десятом месте. Поскольку 9 > 1, 9.9 больше, чем 9.11.
Transformers
If you want to use Hugging Face transformers to generate text, you can do something like this.
from transformers import pipeline
import torch
messages = [
{"role": "user", "content": "Π§ΡΠΎ Π±ΠΎΠ»ΡΡΠ΅ 9.9 ΠΈΠ»ΠΈ 9.11?"},
]
chatbot = pipeline("text-generation", model="ZeroAgency/Zero-Mistral-24B", max_new_tokens=256, torch_dtype=torch.bfloat16)
response = chatbot(messages, temperature=0.1)
print(response[0]['generated_text'][1]['content'])
# 9.9 больше, чем 9.11.
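If you need more control than the pipeline helper offers (explicit device placement, custom generation arguments), the lower-level transformers API can be used as well. A sketch, assuming enough GPU memory for the bf16 weights:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ZeroAgency/Zero-Mistral-24B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Что больше 9.9 или 9.11?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.15)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))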
llama-server
You can run llama-server, an OpenAI-compatible server, to serve the GGUF version of the model.
Example of running it in a Docker container:
docker run --gpus all -v `pwd`:/mnt -p8000:8000 ghcr.io/ggml-org/llama.cpp:server-cuda -fa --port 8000 --host 0.0.0.0 --temp 0.0 --jinja -ngl 100 --api-key DUMMY-API-KEY -m /mnt/Zero-Mistral-24B-Q4_K_M_L.gguf
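Once the container is running, it exposes the same OpenAI-compatible chat completions endpoint; a minimal sketch of querying it from Python (the API key matches the --api-key flag above):

import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    headers={"Content-Type": "application/json", "Authorization": "Bearer DUMMY-API-KEY"},
    json={
        "messages": [{"role": "user", "content": "Что больше 9.9 или 9.11?"}],
        "temperature": 0.15,
    },
)
print(response.json()["choices"][0]["message"]["content"])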
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: 8x H200
- Hours used: 29.5
- Cloud Provider: Runpod
- Compute Region: US-DE
- Carbon Emitted: ¯\_(ツ)_/¯