📄 Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning
- Repository: [https://github.com/going-doer/Paper2Code](https://github.com/going-doer/Paper2Code)
- Paper: [https://arxiv.org/abs/2504.17192](https://arxiv.org/abs/2504.17192)
Paper2Code Benchmark
Dataset Description
The Paper2Code Benchmark is designed to evaluate how well code-generation systems can reproduce the methods and experiments described in machine learning papers.
We collected 90 papers from ICML 2024, NeurIPS 2024, and ICLR 2024, selecting only those with publicly available GitHub repositories.
To keep repository complexity manageable, we filtered for repositories containing fewer than 70,000 tokens.
Using a model-based evaluation, we selected the top 30 papers from each conference based on repository quality.
For more details, refer to Section 4.1 "Paper2Code Benchmark" of the paper.
Uses
```python
from datasets import load_dataset

dataset = load_dataset("iaminju/paper2code", split="test")
```
For access to the benchmark files (including PDF files), please refer to the Paper2Code data directory in our GitHub repository.
Dataset Structure
```python
Dataset({
    features: ['paper', 'source', 'repo_name', 'repo_url', 'paper_json', 'paper_cleaned_json', 'conference'],
    num_rows: 90
})
```
- `paper`: Title of the paper.
- `source`: Presentation type (oral or poster).
- `repo_name`: Name of the repository provided by the original authors.
- `repo_url`: URL of the repository provided by the original authors.
- `paper_json`: Parsed JSON version of the paper, converted with s2orc-doc2json.
- `paper_cleaned_json`: Preprocessed version of the paper used by PaperCoder.
- `conference`: The conference where the paper was accepted (icml2024, iclr2024, or nips2024).
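As a minimal sketch of working with these fields, the snippet below groups rows by `conference` and `source` using plain Python. The two rows shown are illustrative stand-ins mirroring the schema, not the actual benchmark contents; with the real dataset you would iterate over the object returned by `load_dataset` the same way.

```python
from collections import Counter

# Hypothetical rows mirroring the dataset's features (values are illustrative,
# not taken from the real benchmark files).
rows = [
    {"paper": "iTransformer: Inverted Transformers Are Effective for Time Series Forecasting",
     "source": "oral", "repo_name": "iTransformer",
     "repo_url": "https://github.com/thuml/iTransformer", "conference": "iclr2024"},
    {"paper": "Knowledge Distillation Based on Transformed Teacher Matching",
     "source": "poster", "repo_name": "TTM",
     "repo_url": "https://github.com/zkxufo/TTM", "conference": "iclr2024"},
]

# Tally papers per conference and per presentation type.
by_conference = Counter(r["conference"] for r in rows)
by_source = Counter(r["source"] for r in rows)
print(by_conference)  # Counter({'iclr2024': 2})
print(by_source)      # Counter({'oral': 1, 'poster': 1})
```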
Citation
```bibtex
@article{seo2025paper2code,
  title={Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning},
  author={Seo, Minju and Baek, Jinheon and Lee, Seongyun and Hwang, Sung Ju},
  year={2025},
  url={https://arxiv.org/pdf/2504.17192}
}
```