---
task_categories:
  - text2text-generation
  - text-generation
language:
  - en
pretty_name: SurveyEval
size_categories:
  - n<1K
---

# SurveyEval: An Evaluation Benchmark Dataset for LLM$\times$MapReduce-V2

SurveyEval is an evaluation benchmark dataset designed specifically for LLM$\times$MapReduce-V2 in the field of computer science. If you intend to use it for evaluation or to generate a survey, please refer to our GitHub repository for detailed instructions and guidelines, and to our paper.
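As a concrete starting point, the snippet below sketches how the dataset could be loaded with the Hugging Face `datasets` library. The repository id `R0k1e/SurveyEval` and the `test` split name are assumptions inferred from this card; the exact column names may differ, so consult the GitHub instructions for the canonical loading code.

```python
# Minimal sketch: loading SurveyEval with the Hugging Face `datasets` library.
# The repository id "R0k1e/SurveyEval" and the "test" split are assumptions
# inferred from this card; verify them on the dataset page before use.
from datasets import load_dataset

surveys = load_dataset("R0k1e/SurveyEval", split="test")

print(surveys)            # row count and column names of the test split
print(surveys[0].keys())  # fields available for a single survey record
```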

## Dataset Uniqueness

To the best of our knowledge, SurveyEval is the first dataset that pairs surveys with their complete reference papers. We have curated 384 survey papers from various online sources; collectively, these papers cite over 26,000 references, providing a rich and extensive knowledge repository for research and evaluation.

## Comparative Analysis with Other Datasets

The table below compares SurveyEval with other relevant datasets along several dimensions. SurveyEval is the only dataset that provides complete reference information while also achieving high full-content coverage, a combination that makes it a comprehensive and reliable resource for researchers and practitioners.

*Table: Dataset Comparison*

## Composition of the Test Split

The table below provides a detailed breakdown of all 20 surveys included in the test split of the SurveyEval dataset. This overview offers insights into the diversity and scope of the surveys, enabling users to better understand the dataset's composition and tailor their research accordingly.

*Table: Composition of the Test Split*

## Citation and Usage Guidelines

Please note that the SurveyEval dataset is intended exclusively for research and educational purposes. It should not be misconstrued as representing the opinions or views of the dataset's creators, owners, or contributors. When using the dataset in your work, we kindly request that you cite it appropriately using the following BibTeX entry:

```bibtex
@misc{wang2025llmtimesmapreducev2entropydrivenconvolutionaltesttime,
      title={LLM$\times$MapReduce-V2: Entropy-Driven Convolutional Test-Time Scaling for Generating Long-Form Articles from Extremely Long Resources},
      author={Haoyu Wang and Yujia Fu and Zhu Zhang and Shuo Wang and Zirui Ren and Xiaorong Wang and Zhili Li and Chaoqun He and Bo An and Zhiyuan Liu and Maosong Sun},
      year={2025},
      eprint={2504.05732},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.05732},
}
```

We hope that SurveyEval proves to be a valuable asset in your research endeavors, and we welcome your feedback and contributions to further enhance the dataset's utility and impact.