VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models

A Challenging Visual-centric Benchmark for Evaluating Multimodal Reasoning in MLLMs!

This repository hosts the RL-trained Qwen2.5-VL-7B-Instruct checkpoint (7B) from VisuLogic.

For more details, please refer to the project page with dataset exploration and visualization tools: https://visulogic-benchmark.github.io/VisuLogic/.
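Below is a minimal inference sketch, assuming the standard Qwen2.5-VL workflow in πŸ€— Transformers (Qwen2_5_VLForConditionalGeneration plus the qwen_vl_utils helper); the image path and prompt are placeholders, and this is not the official inference script.

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "VisuLogic/qwen2_5vl_7b_rloo_80steps_hf"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# "question.png" and the prompt are placeholders for an actual VisuLogic item.
messages = [{"role": "user", "content": [
    {"type": "image", "image": "question.png"},
    {"type": "text", "text": "Which option completes the pattern? Answer with A, B, C, or D."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```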

VisuLogic Resources

🌐 Homepage | πŸ† Leaderboard | πŸ“– Paper | πŸ€— Benchmark | πŸ€— Train Data

πŸ’» Eval Code | πŸ’» Train Code | πŸ€— Checkpoint (7B) | πŸ€— Checkpoint (38B)

πŸ””News

  • πŸ”₯[2025-04-26] VisuLogic has been merged into VLMEvalKit. You can now evaluate your model on VisuLogic with it; see the VLMEvalKit documentation for usage! πŸš€
  • πŸ”₯[2025-04-22] Released the paper, training data, and training code! πŸš€
  • πŸ”₯[2025-04-08] Released the benchmark and the evaluation code! πŸš€

βœ… To-do

  • Release the benchmark dataset and eval code
  • Release training code
  • Release the paper
  • Release the training dataset
  • Release model checkpoints

πŸ“– Introduction

VisuLogic is a newly designed benchmark for evaluating the visual reasoning capabilities of Multi-modal Large Language Models (MLLMs) independently of textual reasoning processes. It features carefully constructed visual reasoning tasks spanning multiple categories, divided into six types according to the reasoning skills they require (e.g., Quantitative Reasoning, which involves understanding and deducing changes in the quantity of elements in images). Unlike existing benchmarks, VisuLogic poses tasks that are inherently difficult to articulate in language, providing a more rigorous evaluation of the visual reasoning capabilities of MLLMs. Most models score below 30% accuracy, only slightly above the 25% random baseline and far below the 51.4% achieved by humans, revealing significant gaps in visual reasoning.

🌟 Key Features

  • πŸš€ Visuo-Logical Challenge
    The first benchmark to integrate visual perception with logical reasoning, enabling authentic multimodal evaluation.

  • πŸ› οΈ Rigorous Design
    Includes 1,000 meticulously curated questions, spanning 6 domains and 24 subcategories, for comprehensive performance evaluation.

  • πŸ“ Anti-Linguistic Shortcut
    Designed to avoid linguistic reasoning, ensuring tasks rely on genuine visual reasoning rather than shortcuts.

  • πŸ’‘ RL Exploration
    We identify reinforcement learning (RL) as a promising direction for improving the visual reasoning capabilities of MLLMs. With RL fine-tuning, our models reach state-of-the-art performance on VisuLogic!

  • βœ… Fully Open-source
    We open-source all the evaluation code, training scripts, and datasets associated with this work to promote further research and innovation.

πŸ–ΌοΈ Examples of VisuLogic


πŸ“Š Eval

Please refer to VisuLogic-Eval for eval code.
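For reference, here is a minimal scoring sketch, not the official VisuLogic-Eval pipeline: it extracts an option letter from a model response and computes accuracy over four-choice questions. The record field names ("prediction", "answer") are illustrative assumptions.

```python
import re

def extract_choice(response: str) -> str | None:
    """Return the first standalone option letter A-D found in a model response."""
    match = re.search(r"\b([A-D])\b", response.strip().upper())
    return match.group(1) if match else None

def accuracy(records: list[dict]) -> float:
    """records: [{"prediction": model output text, "answer": gold letter}, ...]"""
    correct = sum(extract_choice(r["prediction"]) == r["answer"] for r in records)
    return correct / len(records)

# Toy example: 1 of 2 predictions matches the gold answer -> 0.5
print(accuracy([
    {"prediction": "The answer is B.", "answer": "B"},
    {"prediction": "I think the pattern suggests C.", "answer": "A"},
]))
```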

πŸ“¦ Training

Please refer to VisuLogic-Train for training code.
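The checkpoint name suggests RLOO (REINFORCE Leave-One-Out) training. As a rough illustration only, not the actual VisuLogic-Train implementation, the sketch below shows the leave-one-out advantage computation such a method typically uses; shapes and rewards are illustrative.

```python
import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """RLOO-style advantages: for each of k sampled responses per prompt,
    the baseline is the mean reward of the other k - 1 samples.

    rewards: (num_prompts, k) tensor of scalar rewards (e.g., 1 if the
    model's answer matches the gold option, else 0)."""
    k = rewards.size(1)
    baseline = (rewards.sum(dim=1, keepdim=True) - rewards) / (k - 1)
    return rewards - baseline

# Toy example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
advantages = rloo_advantages(rewards)  # weights for the policy-gradient loss
print(advantages)
```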

πŸ“© Contact

πŸ“œ Citation

BibTeX:

@article{xu2025visulogic,
  title={VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models},
  author={Xu, Weiye and Wang, Jiahao and Wang, Weiyun and Chen, Zhe and Zhou, Wengang and Yang, Aijun and Lu, Lewei and Li, Houqiang and Wang, Xiaohua and Zhu, Xizhou and Wang, Wenhai and Dai, Jifeng and Zhu, Jinguo},
  journal={arXiv preprint arXiv:2504.15279},
  year={2025},
  url={https://arxiv.org/abs/2504.15279}
}

πŸŽ‰ Thank you for your interest in VisuLogic! We hope this benchmark helps drive advancements in multimodal visual reasoning! πŸš€
