---
annotations_creators:
- expert-generated
language:
- en
license: cc-by-nc-4.0
multilinguality: monolingual
pretty_name: "CaseReportBench: Clinical Dense Extraction Benchmark"
tags:
- clinical-nlp
- dense-information-extraction
- medical
- case-reports
- rare-diseases
- benchmarking
- information-extraction
task_categories:
- information-extraction
- text-classification
- question-answering
task_ids:
- entity-extraction
- multi-label-classification
- open-domain-qa
---

# CaseReportBench: Clinical Dense Extraction Benchmark

**CaseReportBench** is a curated benchmark dataset designed to evaluate the ability of large language models to perform **dense information extraction** from **clinical case reports**, particularly in the context of **rare disease diagnosis**.

The dataset supports fine-grained, system-wise phenotype extraction and structured evaluation of diagnostic reasoning.

---

## Key Features

- Expert-annotated dense labels simulating comprehensive head-to-toe clinical assessments, capturing multi-system findings as encountered in real-world diagnostic reasoning
- Domain: clinical case reports indexed in PubMed Central
- Use cases: medical information extraction, LLM evaluation, rare disease diagnosis
- Data type: JSON with structured, system-wise output
- Evaluation metrics: Token Selection Rate, Levenshtein Similarity, and Exact Match (see the scoring sketch after this list)
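The paper defines these metrics precisely; the sketch below only illustrates plausible versions of them. In particular, the `token_selection_rate` definition here (gold-token recall) is an assumption, not taken from the source.

```python
# Rough illustration of the three reported metrics.
# The exact definitions used by CaseReportBench may differ.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion from `a`
                curr[j - 1] + 1,           # insertion into `a`
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

def levenshtein_similarity(pred: str, gold: str) -> float:
    """1 - normalized edit distance; 1.0 means identical strings."""
    if not pred and not gold:
        return 1.0
    return 1.0 - levenshtein(pred, gold) / max(len(pred), len(gold))

def exact_match(pred: str, gold: str) -> float:
    """1.0 iff the strings match after trivial normalization."""
    return float(pred.strip().lower() == gold.strip().lower())

def token_selection_rate(pred: str, gold: str) -> float:
    """ASSUMED definition: fraction of gold tokens present in the
    prediction (token-level recall); the paper's variant may differ."""
    gold_tokens = gold.lower().split()
    if not gold_tokens:
        return 1.0
    pred_tokens = set(pred.lower().split())
    return sum(t in pred_tokens for t in gold_tokens) / len(gold_tokens)
```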
---

## Dataset Structure

Each record includes the following fields (a schematic example follows the list):

- `id`: unique document identifier
- `text`: the raw case report
- `extracted_labels`: dense structured annotations organized by organ system (e.g., nervous system, metabolic)
- `diagnosis`: gold-standard diagnosis
- `source`: PubMed ID or citation
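For orientation only, a record might look like the sketch below. Every value here is an invented placeholder, and the exact shape of `extracted_labels` (e.g., lists versus free text per system) should be checked against the released data.

```python
# Hypothetical record layout -- placeholder values, not real dataset content.
example_record = {
    "id": "case_0001",                          # unique document identifier
    "text": "A 3-year-old presented with ...",  # raw case report text
    "extracted_labels": {                       # system-wise dense annotations
        "nervous_system": ["hypotonia", "seizures"],
        "metabolic": ["elevated plasma lactate"],
    },
    "diagnosis": "example rare disease",        # gold-standard diagnosis
    "source": "PMID:0000000",                   # provenance (placeholder ID)
}
```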
---

## Usage

```python
from datasets import load_dataset

# Load the benchmark from the Hugging Face Hub
ds = load_dataset("cxyzhang/caseReportBench_ClinicalDenseExtraction_Benchmark")

# Inspect the first record
print(ds["train"][0])
```
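As a hypothetical end-to-end example, the loop below scores a model's per-system output against the gold `extracted_labels` using the metric sketches from the Key Features section. `my_model_extract` is a stand-in for whatever model call you use, and the list-of-strings label format assumed here is unverified.

```python
# Hypothetical evaluation loop; `my_model_extract` is a stand-in, and the
# list-of-strings format assumed for `extracted_labels` is an assumption.
record = ds["train"][0]
for system, gold_findings in record["extracted_labels"].items():
    gold = ("; ".join(gold_findings)
            if isinstance(gold_findings, list) else str(gold_findings))
    pred = my_model_extract(record["text"], system)  # your model call here
    print(f"{system}: EM={exact_match(pred, gold):.0f} "
          f"LevSim={levenshtein_similarity(pred, gold):.2f}")
```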

## Citation

```bibtex
@inproceedings{zhang2025casereportbench,
  title     = {CaseReportBench: An LLM Benchmark Dataset for Dense Information Extraction in Clinical Case Reports},
  author    = {Zhang, Cindy and Others},
  booktitle = {Conference on Health, Inference, and Learning (CHIL)},
  year      = {2025}
}
```