scofield7419 committed
Commit c889cd2 · verified · 1 Parent(s): 9056d9c

Update README.md

Files changed (1)
  1. README.md +15 -26
README.md CHANGED
@@ -8,35 +8,22 @@
  <a href="https://generalist.top/leaderboard">[🏆 Leaderboard]</a>
  <a href="https://arxiv.org/abs/2505.04620">[📄 Paper]</a>
  <a href="https://huggingface.co/papers/2505.04620">[🤗 Paper-HF]</a>
- <a href="https://huggingface.co/General-Level">[🤗 Dataset-HF]</a>
- <a href="https://github.com/path2generalist/GeneralBench">[📁 Dataset-Github]</a>
+ <a href="https://huggingface.co/General-Level/General-Bench-Closeset">[🤗 Dataset-HF (Close-Set)]</a>
+ <a href="https://huggingface.co/General-Level/General-Bench-Openset">[🤗 Dataset-HF (Open-Set)]</a>
+ <a href="https://github.com/path2generalist/General-Level">[📁 Github]</a>
  </p>

  <h1 align="center" style="color: red">Open Set of General-Bench</h1>

  </div>

- ---
- We divide our benchmark into two settings: **`open`** and **`closed`**.
-
- This is the **`open benchmark`** of Generalist-Bench, where we release the full ground-truth annotations for all datasets.
- It allows researchers to train and evaluate their models with access to the answers.
-
- If you wish to thoroughly evaluate your model's performance, please use the
- [👉 closed benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Closeset), which comes with detailed usage instructions.
-
- Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).
-
-
- <!-- This is the **`Closed benchmark`** of Generalist-Bench, where we release only the question annotations, **without ground-truth answers**, for all datasets.
-
- You can follow the detailed [usage](#-usage) instructions to submit the results generated by your own model.
-
- Final results will be updated on the [🏆 Leaderboard](https://level.generalist.top).
-
- If you'd like to train or evaluate your model with access to the full answers, please check out the [👉 open benchmark](https://huggingface.co/datasets/General-Level/General-Bench-Openset), where all ground-truth annotations are provided. -->
+ ---
+ We divide our `General-Bench` into two settings: **`Open`** and **`Close`**.
+
+ This is the **`Open Set`**, where we release the full ground-truth annotations for all datasets, allowing researchers to train and evaluate models for open research purposes.
+
+ If you wish to rank on our [🏆 `leaderboard`](https://generalist.top/leaderboard), please use the [👉 **`Close Set`**](https://huggingface.co/datasets/General-Level/General-Bench-Closeset).

@@ -262,13 +249,15 @@ comprehension and generation categories in various modalities</p>

  # 🚩 **Citation**

- If you find our benchmark useful in your research, please kindly consider citing us:
+ If you find this project useful in your research, please kindly cite our paper:

  ```
- @article{generalist2025,
-   title={On Path to Multimodal Generalist: Levels and Benchmarks},
-   author={Hao Fei, Yuan Zhou, Juncheng Li, Xiangtai Li, Qingshan Xu, Bobo Li, Shengqiong Wu, Yaoting Wang, Junbao Zhou, Jiahao Meng, Qingyu Shi, Zhiyuan Zhou, Liangtao Shi, Minghe Gao, Daoan Zhang, Zhiqi Ge, Siliang Tang, Kaihang Pan, Yaobo Ye, Haobo Yuan, Tao Zhang, Weiming Wu, Tianjie Ju, Zixiang Meng, Shilin Xu, Liyu Jia, Wentao Hu, Meng Luo, Jiebo Luo, Tat-Seng Chua, Hanwang Zhang, Shuicheng Yan},
-   journal={arXiv},
-   year={2025}
+ @article{fei2025pathmultimodalgeneralistgenerallevel,
+   title={On Path to Multimodal Generalist: General-Level and General-Bench},
+   author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Weiming Wu and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Shuicheng Yan and Hanwang Zhang},
+   eprint={2505.04620},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2505.04620},
  }
  ```
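For reference, a minimal sketch of pulling the Open Set described above for local training or evaluation, assuming the standard `huggingface_hub` client; the repo id comes from the links in the diff, but the snapshot's internal file layout is not documented here:

```python
# Minimal sketch: download a local snapshot of the General-Bench Open Set.
# Assumes the standard `huggingface_hub` client (pip install huggingface_hub);
# the internal file layout of the snapshot is an assumption, so inspect it
# before wiring up a training or evaluation loader.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="General-Level/General-Bench-Openset",  # Open Set repo linked above
    repo_type="dataset",  # this is a dataset repo, not a model repo
)
print(f"Open Set snapshot downloaded to: {local_dir}")
```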