PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models


πŸš€ Overview

The PromptCoT-QwQ-32B model is a distilled mathematical reasoning model trained on more challenging problem sets generated by the PromptCoT pipeline. Built on QwQ-32B, it leverages an enhanced training dataset designed specifically to strengthen mathematical reasoning capabilities.

For more details, see our paper on arXiv: πŸ”— PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models (arXiv:2503.02324).


πŸ† State-of-the-Art Performance

PromptCoT-QwQ-32B outperforms strong open-source baselines across key mathematical reasoning benchmarks:

| Model | GSM8K | MATH-500 | AIME 2024 | AIME 2025 |
|---|---|---|---|---|
| S1-32B | – | 93.0% | 56.7% | 26.6% |
| LIMO-32B | – | 94.8% | 57.1% | 46.6% |
| QwQ-32B | – | – | 82.1% | 70.8% |
| **PromptCoT-QwQ-32B (ours)** | πŸ”₯ **96.4% Β± 0.2%** | πŸ”₯ **96.7% Β± 0.5%** | πŸ”₯ **83.8% Β± 2.8%** | πŸ”₯ **75.4% Β± 4.7%** |
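
The Β± values report the spread over repeated sampled evaluation runs. As a reference for how such numbers are aggregated, here is a minimal mean Β± sample standard deviation sketch over hypothetical per-run accuracies (illustrative values only, not results from the paper):

```python
import statistics

# Hypothetical per-run accuracies (%) from repeated sampling -- illustrative only.
runs = [96.2, 96.6, 96.4, 96.5, 96.3]
print(f"{statistics.mean(runs):.1f}% Β± {statistics.stdev(runs):.1f}%")  # 96.4% Β± 0.2%
```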

πŸ”₯ Quick Start: Using the Model

1️⃣ Install Dependencies

```bash
pip install transformers vllm torch accelerate
```

2️⃣ Load the Model with Hugging Face Transformers

You can solve mathematical problems with PromptCoT-QwQ-32B through Hugging Face's `generate` API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "xl-zhao/PromptCoT-QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16  # weights are released in BF16
).to("cuda")

problem_statement = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?"
)

# Qwen-style chat format, written out explicitly.
prompt = (
    f"<|im_start|>user\n{problem_statement}\nPlease reason step by step, and put your final answer within \\boxed{{}}.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    # `temperature` only takes effect when sampling is enabled, and
    # `max_new_tokens` bounds the generated solution rather than the full sequence.
    output = model.generate(**inputs, max_new_tokens=32768, do_sample=True, temperature=0.6)

generated_solution = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_solution)
```
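
Since the prompt asks for the final answer inside `\boxed{}`, it is easy to pull that answer out of the decoded text for downstream checking. A minimal brace-matching helper (our sketch, not part of the repository):

```python
def extract_boxed_answer(text: str):
    """Return the contents of the last \\boxed{...} in `text`, handling nested braces."""
    start = text.rfind("\\boxed{")
    if start == -1:
        return None
    i = start + len("\\boxed{")
    depth, chars = 1, []
    while i < len(text):
        ch = text[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return "".join(chars)
        chars.append(ch)
        i += 1
    return None  # unbalanced braces


print(extract_boxed_answer(generated_solution))  # expected: "3" for the robe problem
```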

⚑ Using vLLM for Fast Inference

For optimized inference, use vLLM:

```python
from vllm import LLM, SamplingParams

model_name = "xl-zhao/PromptCoT-QwQ-32B"
# Increase tensor_parallel_size to shard the 32B model across multiple GPUs.
llm = LLM(model=model_name, tensor_parallel_size=1)

problem_statement = (
    "A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?"
)

prompt = (
    f"<|im_start|>user\n{problem_statement}\nPlease reason step by step, and put your final answer within \\boxed{{}}.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

sampling_params = SamplingParams(temperature=0.6, max_tokens=32768)
outputs = llm.generate([prompt], sampling_params)

print(outputs[0].outputs[0].text)
```

πŸ”— Full Usage & Advanced Options

For advanced usage, including batch inference and evaluation on mathematical benchmarks, refer to the full repository on GitHub:
πŸ”Ή GitHub: PromptCoT
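
As a small taste of batch inference, vLLM accepts a list of prompts in a single `generate` call and schedules them together. Here is a minimal sketch reusing the prompt template above (the second problem is a toy example of ours; the repository's evaluation scripts remain the reference):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="xl-zhao/PromptCoT-QwQ-32B", tensor_parallel_size=1)
sampling_params = SamplingParams(temperature=0.6, max_tokens=32768)

problems = [
    "A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?",
    "What is the smallest positive integer divisible by both 6 and 8?",  # toy example
]

prompts = [
    f"<|im_start|>user\n{p}\nPlease reason step by step, and put your final answer within \\boxed{{}}.<|im_end|>\n"
    "<|im_start|>assistant\n"
    for p in problems
]

# All prompts are batched and scheduled in one call.
outputs = llm.generate(prompts, sampling_params)
for problem, out in zip(problems, outputs):
    print(problem)
    print(out.outputs[0].text)
```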


πŸ“œ Citation

If you use PromptCoT, please consider citing:

```bibtex
@article{zhao2025promptcot,
  author  = {Zhao, Xueliang and Wu, Wei and Guan, Jian and Kong, Lingpeng},
  title   = {PromptCoT: Synthesizing Olympiad-Level Problems for Mathematical Reasoning in Large Language Models},
  year    = {2025},
  journal = {arXiv preprint arXiv:2503.02324},
  url     = {http://arxiv.org/abs/2503.02324}
}
```