---
license: mit
language:
  - en
size_categories:
  - 1K<n<10K
---

Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs

Dataset Card for All-Angles Bench

Dataset Description

All-Angles Bench is a benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.

Dataset Sources

  • EgoHumans - Egocentric multi-view human activity understanding dataset
  • Ego-Exo4D - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding

Usage

from datasets import load_dataset

# Load the All-Angles Bench annotations from the Hugging Face Hub.
dataset = load_dataset("ch-chenyu/All-Angles-Bench")
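
Each benchmark entry pairs a question with three candidate options and the relative paths of its views (see Dataset Structure below). The snippet that follows is a minimal sketch of turning one entry into a multiple-choice prompt; the split name "train" is an assumption, so inspect the keys of the loaded object first.

# Continuing from the `dataset` object loaded above.
sample = dataset["train"][0]  # split name "train" is an assumption; check dataset.keys()

# Assemble a multiple-choice prompt from the question and options A/B/C.
prompt = (
    f"{sample['question']}\n"
    f"A. {sample['A']}\n"
    f"B. {sample['B']}\n"
    f"C. {sample['C']}\n"
    "Answer with the option letter only."
)
print(prompt)
print("Ground-truth answer:", sample["answer"])
print("Views:", sample["image_path"])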

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will first need to sign the license agreement on the official Ego-Exo4D site at https://ego4ddataset.com/egoexo-license/. After signing, download the dataset and use the preprocessing scripts provided in our GitHub repository to extract the corresponding images.
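
Once the Ego-Exo4D images have been extracted, a quick sanity check can confirm that every image referenced by the benchmark is present locally. The sketch below is not part of the official preprocessing scripts; it assumes the extracted images live under a local root directory (the data_root path is hypothetical) and that each image_path entry is relative to that root.

import os

from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
data_root = "path/to/All-Angles-Bench"  # hypothetical local root for the extracted images

missing = []
for sample in dataset["train"]:  # split name "train" is an assumption
    for rel_path in sample["image_path"]:
        if not os.path.exists(os.path.join(data_root, rel_path)):
            missing.append(rel_path)

print(f"{len(missing)} referenced images not found under {data_root}")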

Dataset Structure

The JSON data contains the following key-value pairs:

| Key | Type | Description |
|-----|------|-------------|
| index | Integer | Unique identifier for the data entry (e.g. 1221) |
| folder | String | Directory name where the scene is stored (e.g. "05_volleyball") |
| category | String | Task category (e.g. "counting") |
| pair_idx | String | Index of the corresponding paired question (if applicable) |
| image_path | List | List of input image paths |
| question | String | Natural language query about the scene |
| A/B/C | String | Multiple-choice options |
| answer | String | Correct option label (e.g. "B") |
| sourced_dataset | String | Source dataset name (e.g. "EgoHumans") |
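
As an example of how these fields fit together, the sketch below counts questions per task category and resolves paired questions via pair_idx. It assumes the split name "train" and that pair_idx, when set, holds the index of the paired entry as a numeric string; both are assumptions to verify against the actual data.

from collections import Counter

from datasets import load_dataset

split = load_dataset("ch-chenyu/All-Angles-Bench")["train"]  # split name assumed

# Number of questions per task category.
print(Counter(sample["category"] for sample in split))

# Resolve paired questions: map index -> entry, then follow pair_idx where present.
by_index = {sample["index"]: sample for sample in split}
pairs = {}
for sample in split:
    pair_idx = sample["pair_idx"]
    if pair_idx and pair_idx.isdigit():  # assumption: empty when no paired question exists
        paired = by_index.get(int(pair_idx))
        if paired is not None:
            pairs[sample["index"]] = paired["index"]

print(f"{len(pairs)} questions have a paired counterpart")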

Citation

@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}

Acknowledgements

Our benchmark and code repository build on the following works: EgoHumans, Ego-Exo4D, and VLMEvalKit. We thank the authors for their wonderful work and data.