---
annotations_creators:
  - expert-generated
language:
  - en
license: cc-by-nc-4.0
multilinguality: monolingual
pretty_name: "CaseReportBench: Clinical Dense Extraction Benchmark"
tags:
  - clinical-nlp
  - dense-information-extraction
  - medical
  - case-reports
  - rare-diseases
  - benchmarking
  - information-extraction
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
---

# CaseReportBench: Clinical Dense Extraction Benchmark

**CaseReportBench** is a curated benchmark dataset designed to evaluate how well large language models (LLMs) can perform **dense information extraction** from **clinical case reports**, with a focus on **rare disease diagnosis**.

It supports fine-grained, system-level phenotype extraction and structured diagnostic reasoning, enabling model evaluation in real-world medical decision-making contexts.

---

## 🔔 Note

This dataset accompanies our upcoming publication:

> **Zhang et al. CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports.**  
> *To appear in the Proceedings of the Conference on Health, Inference, and Learning (CHIL 2025), PMLR.*

The official PMLR citation and link will be added upon publication.

---

## Key Features

- **Expert-annotated**, system-wise phenotypic labels mimicking clinical assessments
- Based on real-world **PubMed Central-indexed clinical case reports**
- Format: JSON with structured head-to-toe organ system outputs
- Designed for: biomedical NLP, information extraction, rare disease reasoning, and LLM benchmarking
- Metrics include: Token Selection Rate, Levenshtein Similarity, and Exact Match (see the sketch below)
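
A minimal Python sketch of these metrics, assuming straightforward string-level definitions; the paper's exact formulations, particularly for Token Selection Rate, may differ:

```python
def exact_match(pred: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())


def levenshtein_similarity(pred: str, gold: str) -> float:
    """1 - edit_distance / max_length, via the standard DP recurrence."""
    m, n = len(pred), len(gold)
    if max(m, n) == 0:
        return 1.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return 1.0 - prev[n] / max(m, n)


def token_selection_rate(pred: str, gold: str) -> float:
    """Assumed here to be the fraction of gold tokens recovered in the
    prediction (a recall-style overlap); see the paper for the exact metric."""
    gold_tokens = set(gold.lower().split())
    if not gold_tokens:
        return 1.0
    return len(gold_tokens & set(pred.lower().split())) / len(gold_tokens)
```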

---

## Dataset Structure

Each record includes the following fields (an illustrative example follows the list):

- `id`: Unique document ID
- `text`: Full raw case report
- `extracted_labels`: System-organized dense annotations (e.g., neuro, heme, derm)
- `diagnosis`: Final confirmed diagnosis (Inborn Error of Metabolism)
- `source`: PubMed ID or citation
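
For illustration, a record might look like the following; all field values here are invented placeholders, not actual dataset content:

```python
example = {
    "id": "PMC0000000",  # hypothetical document ID
    "text": "A 3-year-old boy presented with progressive hepatomegaly ...",
    "extracted_labels": {
        "neurological": "developmental regression; hypotonia",
        "hematological": "N/A",
        "dermatological": "coarse facial features",
        # ... remaining organ systems, head to toe
    },
    "diagnosis": "Mucopolysaccharidosis type I",
    "source": "PMID: 00000000",  # placeholder citation
}
```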

---

## Usage

```python
from datasets import load_dataset

# Load the benchmark from the Hugging Face Hub
ds = load_dataset("cxyzhang/caseReportBench_ClinicalDenseExtraction_Benchmark")

# Inspect the first training example
print(ds["train"][0])
```
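
Combining this with the metric sketches under Key Features, a hypothetical per-system evaluation loop could look like the following; `run_model` is a stand-in for your own LLM extraction call, and the dict-style access assumes the record structure described above:

```python
def run_model(text: str, system: str) -> str:
    """Placeholder for an actual LLM extraction call."""
    return ""


example = ds["train"][0]
for system, gold in example["extracted_labels"].items():
    pred = run_model(example["text"], system)
    print(f"{system}: levenshtein={levenshtein_similarity(pred, gold):.2f}, "
          f"exact={exact_match(pred, gold):.0f}")
```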

## Citation

```bibtex
@inproceedings{zhang2025casereportbench,
  title     = {CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports},
  author    = {Zhang, Cindy and Others},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning (CHIL)},
  series    = {Proceedings of Machine Learning Research},
  volume    = {vX},  % Update when available
  year      = {2025},
  publisher = {PMLR},
  note      = {To appear}
}
```