---
title: README
emoji: 🌍
colorFrom: blue
colorTo: blue
sdk: static
pinned: false
---


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/N8lP93rB6lL3iqzML4SKZ.png'  width=100px>

<h1 align="center"><b>On Path to Multimodal Generalist: Levels and Benchmarks</b></h1>
<p align="center">
<a href="https://generalist.top/">[πŸ“– Project]</a>
<a href="https://generalist.top/leaderboard">[πŸ† Leaderboard]</a>
<a href="https://arxiv.org/abs/2510.10101">[πŸ“„ Paper]</a>
<a href="https://huggingface.co/General-Level">[πŸ€— Dataset-HF]</a>
<a href="https://github.com/path2generalist/General-Level">[πŸ“ Github]</a>
</p>

---
</div>



<h1 align="center" style="color:#F27E7E"><em>
Does higher performance across more tasks indicate a stronger MLLM and bring us closer to AGI?
<br>
NO! But <b style="color:red">synergy</b> does.
</em></h1>


Most current MLLMs predominantly build on the language intelligence of LLMs, merely extending it to aid multimodal understanding and thus simulating multimodal intelligence only indirectly. While LLMs (e.g., ChatGPT) have already demonstrated such synergy across NLP tasks, reflecting genuine language intelligence, the vast majority of MLLMs unfortunately fail to achieve it across modalities and tasks.

We argue that the key to advancing towards AGI lies in the synergy effect: the capability for knowledge learned in one modality or task to generalize to and enhance mastery of other modalities or tasks, fostering mutual improvement across modalities and tasks through interconnected learning.


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/-Asn68kJGjgqbGqZMrk4E.png'  width=950px>
</div>


---

This project introduces **General-Level** and **General-Bench**.

---

## 🌐🌐🌐 Keypoints

- [πŸ† Overall Leaderboard](#leaderboard)
- [πŸš€ General-Level](#level)
- [πŸ• General-Bench](#bench)

---

# πŸ†πŸ†πŸ† Overall Leaderboard<a name="leaderboard" />

<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/32goE-PYuwOwRvYg4GcfK.png'  width=900px>
</div>


---

# 🚀🚀🚀 General-Level<a name="level" />
  
**A 5-level evaluation framework with a new norm for assessing multimodal generalists (multimodal LLMs/agents).  
Its core is the use of <b style="color:red">synergy</b> as the evaluative criterion: capability levels are categorized by whether an MLLM preserves synergy between comprehension and generation, and across modality interactions.**
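
A minimal sketch of the intuition, not the official General-Level metric: treat synergy as "the generalist beats the corresponding SoTA specialist on a task," then summarize how often and how well that happens. All task names and numbers below are hypothetical.

```python
# Illustrative sketch only: a toy synergy check inspired by the General-Level idea
# that a generalist should not merely cover many tasks but surpass the SoTA
# specialist on them. The exact level definitions and scoring live in the paper;
# the task names and scores here are made up for demonstration.
from statistics import mean


def synergy_summary(generalist: dict[str, float], specialist_sota: dict[str, float]) -> dict:
    """Compare a generalist's per-task scores against per-task specialist SoTA."""
    tasks = sorted(generalist)
    beaten = [t for t in tasks if generalist[t] > specialist_sota.get(t, float("inf"))]
    return {
        "avg_score": mean(generalist[t] for t in tasks),       # plain task average
        "beats_sota_ratio": len(beaten) / len(tasks),          # how often synergy shows up
        "avg_score_on_beaten": mean(generalist[t] for t in beaten) if beaten else 0.0,
    }


if __name__ == "__main__":
    generalist = {"image_caption": 81.0, "video_qa": 55.0, "audio_cls": 72.0}
    specialist = {"image_caption": 78.5, "video_qa": 60.2, "audio_cls": 70.1}
    print(synergy_summary(generalist, specialist))
```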


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/lnvh5Qri9O23uk3BYiedX.jpeg'>
</div>



<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/BPqs-3UODQWvjFzvZYkI4.png'  width=1000px>
</div>



---

# πŸ•πŸ•πŸ• General-Bench<a name="bench" />
  
**A companion massive multimodal benchmark that encompasses a broad spectrum of skills, modalities, formats, and capabilities, comprising over 700 tasks and 325K instances.**


We provide two dataset variants according to the intended use (a download sketch for the open split follows this list):
- [**General-Bench-Openset**](https://huggingface.co/datasets/General-Level/General-Bench-Openset), with all sample inputs and labels publicly available, for **free open-world use** (e.g., academic experiments and comparisons).
- [**General-Bench-Closeset**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), with only sample inputs available, used for **leaderboard ranking**; participants submit their predictions to us for internal evaluation.
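
The snippet below is one plausible way to fetch the open split locally; it assumes only the `huggingface_hub` package and the repo id linked above, and the internal file layout (task folders, annotation formats) should be taken from the dataset card rather than from this sketch.

```python
# Minimal sketch for downloading General-Bench-Openset from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="General-Level/General-Bench-Openset",
    repo_type="dataset",            # a dataset repo, not a model repo
    allow_patterns=["*.json"],      # optional: grab annotation files first, media later
)
print(local_dir)  # local cache path holding the snapshot
```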


<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/d4TIWw3rlWuxpBCEpHYJB.jpeg'>
</div>





<div align="center">
<img src='https://cdn-uploads.huggingface.co/production/uploads/647773a1168cb428e00e9a8f/qkD43ne58w31Z7jpkTKjr.jpeg'>
</div>