---
annotations_creators:
- expert-generated
language:
- en
license: cc-by-nc-4.0
multilinguality: monolingual
pretty_name: 'CaseReportBench: Clinical Dense Extraction Benchmark'
size_categories:
- n<1K
tags:
- clinical-nlp
- dense-information-extraction
- medical
- case-reports
- rare-diseases
- benchmarking
- information-extraction
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# CaseReportBench: Clinical Dense Extraction Benchmark
CaseReportBench is a curated benchmark dataset designed to evaluate how well large language models (LLMs) can perform dense information extraction from clinical case reports, with a focus on rare disease diagnosis.
It supports fine-grained, system-level phenotype extraction and structured diagnostic reasoning — enabling model evaluation in real-world medical decision-making contexts.
## 🔔 Note
This dataset accompanies our upcoming publication:
Zhang et al. CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports.
To appear in the Proceedings of the Conference on Health, Inference, and Learning (CHIL 2025), PMLR.
The official PMLR citation and link will be added upon publication.
## Key Features
- Expert-annotated, system-wise phenotypic labels mimicking clinical assessments
- Based on real-world PubMed Central-indexed clinical case reports
- Format: JSON with structured head-to-toe organ system outputs
- Designed for: Biomedical NLP, information extraction, rare disease reasoning, and LLM benchmarking
- Metrics include: Token Selection Rate, Levenshtein Similarity, and Exact Match (see the illustrative sketch below)
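
The sketch below shows one way these three metrics could be computed. The card does not spell out the paper's exact definitions or normalization, so the tokenization, case folding, and the reading of "Token Selection Rate" here are assumptions, not the official evaluation code.

```python
# Illustrative metric implementations; definitions are assumptions, not the
# official CaseReportBench evaluation code.

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the two strings match after simple normalization, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())


def levenshtein_similarity(pred: str, gold: str) -> float:
    """Edit distance scaled to [0, 1], where 1.0 means identical strings."""
    m, n = len(pred), len(gold)
    if max(m, n) == 0:
        return 1.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return 1.0 - prev[n] / max(m, n)


def token_selection_rate(pred: str, gold: str) -> float:
    """Fraction of gold tokens that appear in the prediction
    (one plausible reading of 'Token Selection Rate')."""
    gold_tokens = gold.lower().split()
    if not gold_tokens:
        return 1.0
    pred_tokens = set(pred.lower().split())
    return sum(t in pred_tokens for t in gold_tokens) / len(gold_tokens)


print(exact_match("hepatomegaly", "Hepatomegaly"))                       # 1.0
print(round(levenshtein_similarity("ataxia", "atxia"), 3))               # 0.833
print(token_selection_rate("mild hepatomegaly noted", "hepatomegaly"))   # 1.0
```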
## Dataset Structure
Each record includes:
- `id`: Unique document ID
- `text`: Full raw case report
- `extracted_labels`: System-organized dense annotations (e.g., neuro, heme, derm, etc.)
- `diagnosis`: Final confirmed diagnosis (Inborn Error of Metabolism)
- `source`: PubMed ID or citation
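
For orientation, a single record might look roughly like the sketch below. All values are placeholders rather than data from the benchmark, and the exact nesting of `extracted_labels` is an assumption based on the field descriptions above.

```python
# Placeholder record shape only; every value here is illustrative.
example_record = {
    "id": "case_0001",                        # unique document ID
    "text": "Full raw case report text ...",  # complete case report
    "extracted_labels": {                     # system-organized annotations
        "neuro": ["..."],                     #   (assumed mapping of organ
        "heme": ["..."],                      #    system -> findings)
        "derm": ["..."],
    },
    "diagnosis": "...",                       # confirmed IEM diagnosis
    "source": "PMID: ...",                    # PubMed ID or citation
}
```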
## Usage
```python
from datasets import load_dataset

ds = load_dataset("cxyzhang/caseReportBench_ClinicalDenseExtraction_Benchmark")
print(ds["train"][0])
```
## Citation
```bibtex
@inproceedings{zhang2025casereportbench,
  title     = {CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports},
  author    = {Zhang, Cindy and Others},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning (CHIL)},
  series    = {Proceedings of Machine Learning Research},
  volume    = {vX}, % Update when available
  year      = {2025},
  publisher = {PMLR},
  note      = {To appear}
}
```