---
license: cc-by-nc-4.0
task_categories:
- token-classification
task_ids:
- named-entity-recognition
language:
- en
size_categories:
- n<1K
---

# CaseReportBench: Clinical Dense Extraction Benchmark

**CaseReportBench** is a curated benchmark dataset designed to evaluate the ability of large language models to perform **dense information extraction** from **clinical case reports**, particularly in the context of **rare disease diagnosis**.

This dataset supports fine-grained, system-wise phenotype extraction and structured evaluation of diagnostic reasoning.

---

## Key Features

- Expert-annotated dense labels simulating comprehensive head-to-toe clinical assessments, capturing multi-system findings as encountered in real-world diagnostic reasoning
- Domain: clinical case reports indexed in PubMed Central
- Use cases: medical information extraction, LLM evaluation, rare disease diagnosis
- Data type: JSON with structured system-wise output
- Evaluation metrics: Token Selection Rate, Levenshtein Similarity, Exact Match

---
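The card names the evaluation metrics but not their implementations. As a minimal sketch under common definitions (not necessarily the benchmark's exact scoring code), Levenshtein Similarity can be taken as edit distance normalized by the longer string, and Exact Match as case- and whitespace-insensitive equality; Token Selection Rate is omitted here because its precise definition is benchmark-specific:

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (two-row variant)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def levenshtein_similarity(pred: str, gold: str) -> float:
    """Edit distance normalized to a 0-1 similarity score."""
    if not pred and not gold:
        return 1.0
    return 1.0 - levenshtein_distance(pred, gold) / max(len(pred), len(gold))


def exact_match(pred: str, gold: str) -> bool:
    """Case- and surrounding-whitespace-insensitive string equality."""
    return pred.strip().lower() == gold.strip().lower()
```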
21 |
+
|
22 |
+
## Dataset Structure
|
23 |
+
|
24 |
+
Each record includes:
|
25 |
+
|
26 |
+
- `id`: Unique document identifier
|
27 |
+
- `text`: Raw case report
|
28 |
+
- `extracted_labels`: Dense structured annotations by system (e.g., nervous system, metabolic)
|
29 |
+
- `diagnosis`: Gold standard diagnosis
|
30 |
+
- `source`: PubMed ID or citation
|
31 |
+
|
32 |
+
---
|
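To make the schema concrete, here is a hypothetical record shaped like the fields above (all values are invented placeholders, and the nested dict-of-lists layout of `extracted_labels` is an assumption about the system-wise JSON structure):

```python
# Hypothetical record following the documented schema; values are placeholders.
record = {
    "id": "PMC-example-001",
    "text": "A 3-year-old presented with ataxia and elevated lactate ...",
    "extracted_labels": {           # assumed layout: system -> list of findings
        "nervous_system": ["ataxia"],
        "metabolic": ["elevated lactate"],
    },
    "diagnosis": "example diagnosis",
    "source": "PMID:00000000",
}

# Flatten the system-wise annotations into (system, finding) pairs for scoring.
pairs = [(system, finding)
         for system, findings in record["extracted_labels"].items()
         for finding in findings]
```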

## Usage

```python
from datasets import load_dataset

# Load the benchmark from the Hugging Face Hub
ds = load_dataset("cxyzhang/caseReportBench_ClinicalDenseExtraction_Benchmark")
print(ds["train"][0])
```