---
license: mit
language:
  - en
size_categories:
  - 1K<n<10K
---

# Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs

## Dataset Card for All-Angles Bench

### Dataset Description

All-Angles Bench is a comprehensive benchmark of over 2,100 human-annotated multi-view question-answer (QA) pairs spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.

### Dataset Sources

- EgoHumans - Egocentric multi-view human activity understanding dataset
- Ego-Exo4D - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding

### Direct Usage

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")
```
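
Once loaded, entries can be inspected like any other `datasets` object. The snippet below is a minimal sketch: the split name is taken from whatever the hub exposes, and the field names (`question`, `A`/`B`/`C`, `answer`) are assumed from the Dataset Structure section below.

```python
from datasets import load_dataset

dataset = load_dataset("ch-chenyu/All-Angles-Bench")

# List the available splits and how many QA pairs each one holds.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect a single entry from the first split; the field names follow the
# Dataset Structure table below and may differ slightly in the released schema.
first_split = next(iter(dataset.values()))
sample = first_split[0]
print(sample["question"])
print("Options:", sample["A"], sample["B"], sample["C"])
print("Answer:", sample["answer"])
```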

### Prepare Full Benchmark Data on Local Machine

1. Set up Git LFS and clone the benchmark:

```bash
$ conda install git-lfs
$ git lfs install

$ git lfs clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
```

2. Download the Ego-Exo4D dataset and extract the frames for the benchmark scenes:

We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will first need to sign the license agreement from the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/. After signing the license, download the dataset (`downscaled_takes/448`) and then use the preprocessing scripts to extract the corresponding images:

```bash
$ pip install ego4d --upgrade
$ egoexo -o All-Angles-Bench/ --parts downscaled_takes/448

$ python All-Angles-Bench/scripts/process_ego4d_exo.py --input All-Angles-Bench
```

3. Transform JSON metadata into benchmark TSV format:

To convert the metadata from JSON format into a structured TSV format compatible with benchmark evaluation scripts in VLMEvalKit, run:

```bash
$ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data.json
```
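
To sanity-check the conversion, the resulting TSV can be inspected with pandas. This is a sketch only: the output file name below is an assumption, since the actual path is determined by `json2tsv_pair.py`.

```python
import pandas as pd

# Hypothetical output path; json2tsv_pair.py determines the real file name and location.
tsv_path = "All-Angles-Bench/All-Angles-Bench.tsv"

df = pd.read_csv(tsv_path, sep="\t")
print(df.shape)              # (number of QA pairs, number of columns)
print(df.columns.tolist())   # column layout produced for VLMEvalKit
print(df.head())
```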

### Dataset Structure

The JSON data contains the following key-value pairs:

| Key | Type | Description |
|-----|------|-------------|
| `index` | Integer | Unique identifier for the data entry (e.g. 1221) |
| `folder` | String | Directory name where the scene is stored (e.g. "05_volleyball") |
| `category` | String | Task category (e.g. "counting") |
| `pair_idx` | String | Index of the corresponding paired question (if applicable) |
| `image_path` | List | Array of input image paths |
| `question` | String | Natural language query about the scene |
| `A`/`B`/`C` | String | Multiple-choice options |
| `answer` | String | Correct option label (e.g. "B") |
| `sourced_dataset` | String | Source dataset name (e.g. "EgoHumans") |
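
For a quick look at how these fields can be consumed, the sketch below reads `data.json` (the file used in step 3 above) and tallies entries per task category. It assumes the top-level JSON is a list of entry objects with the keys listed in the table:

```python
import json
from collections import Counter

# Path from the preparation steps above; adjust if the benchmark was cloned elsewhere.
with open("All-Angles-Bench/data.json", "r") as f:
    entries = json.load(f)  # assumed to be a list of dicts with the keys listed above

# Count QA pairs per task category (e.g. "counting").
category_counts = Counter(entry["category"] for entry in entries)
print(category_counts)

# Collect the questions that have a corresponding paired question.
paired = [entry for entry in entries if entry.get("pair_idx")]
print(f"{len(paired)} entries have a corresponding paired question")
```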

### Citation

```bibtex
@article{yeh2025seeing,
  title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
  author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
  journal={arXiv preprint arXiv:2504.15280},
  year={2025}
}
```

### Acknowledgements

This benchmark builds on EgoHumans, Ego-Exo4D, and VLMEvalKit, which serve as foundations for our framework and code repository. We thank the authors for their wonderful work and data.