Update README.md
README.md
CHANGED
@@ -36,4 +36,52 @@ configs:
     path: data/validation-*
   - split: test
     path: data/test-*
+task_categories:
+- question-answering
+language:
+- en
+size_categories:
+- 10K<n<100K
 ---

## Clean SQuAD v1

This is a refined version of the [SQuAD v1](https://huggingface.co/datasets/rajpurkar/squad) dataset. It has been preprocessed to ensure higher data quality and usability for NLP tasks such as Question Answering.

## Description

The **Clean SQuAD v1** dataset was created by applying preprocessing steps to the original SQuAD v1 dataset, including:

- **Trimming whitespace**: All leading and trailing spaces have been removed from the `question` field.
- **Minimum question length**: Questions with fewer than 12 characters were filtered out to remove overly short or uninformative entries.
- **Balanced validation and test sets**: The validation set from the original SQuAD dataset was split 50-50 into new validation and test sets.

This preprocessing ensures that the dataset is cleaner and more balanced, making it suitable for training and evaluating machine learning models on Question Answering tasks.
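
For readers who want to reproduce or adapt this cleaning, the steps above can be approximated with the `datasets` library roughly as follows. This is an illustrative sketch, not the author's exact script; the random seed and the exact length check are assumptions.

```python
from datasets import DatasetDict, load_dataset

# Illustrative sketch of the preprocessing described above (not the official pipeline).
squad = load_dataset("rajpurkar/squad")

def trim_question(example):
    # Trimming whitespace: strip leading/trailing spaces from the question.
    example["question"] = example["question"].strip()
    return example

squad = squad.map(trim_question)

# Minimum question length: drop questions shorter than 12 characters.
squad = squad.filter(lambda ex: len(ex["question"]) >= 12)

# Balanced validation and test sets: split the original validation set 50-50
# (seed chosen here only for reproducibility of the sketch).
val_test = squad["validation"].train_test_split(test_size=0.5, seed=42)

clean_squad = DatasetDict({
    "train": squad["train"],
    "validation": val_test["train"],
    "test": val_test["test"],
})
```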

## Dataset Structure

The dataset is divided into three subsets:

1. **Train**: The primary dataset for model training.
2. **Validation**: A dataset for hyperparameter tuning and model validation.
3. **Test**: A separate dataset for evaluating final model performance.

## Data Fields

Each subset contains the following fields:

- `id`: Unique identifier for each question-context pair.
- `title`: Title of the article the context is derived from.
- `context`: Paragraph from which the answer is extracted.
- `question`: Preprocessed question string.
- `answers`: Dictionary containing:
  - `text`: The text of the correct answer(s).
  - `answer_start`: Character-level start position of the answer(s) in the context.
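
To make the layout concrete, here is a hypothetical record in this structure. All values below are placeholders rather than an actual entry from the dataset; following the standard SQuAD format, `text` and `answer_start` are parallel lists.

```python
# Hypothetical record illustrating the fields above (placeholder values, not real data).
example = {
    "id": "example-0001",                                    # unique question-context pair id
    "title": "Example_Article",                              # source article title
    "context": "The quick brown fox jumps over the lazy dog.",
    "question": "What does the quick brown fox jump over?",
    "answers": {
        "text": ["the lazy dog"],    # answer string(s)
        "answer_start": [31],        # character offset(s) of each answer in `context`
    },
}
```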

## Usage

The dataset is hosted on the Hugging Face Hub and can be loaded with the following code:

```python
from datasets import load_dataset

dataset = load_dataset("decodingchris/clean_squad_v1")
```
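
A single split can also be requested directly, which is sometimes more convenient than loading the full `DatasetDict`. This is standard `datasets` usage; the field access below assumes the schema described earlier.

```python
from datasets import load_dataset

# Load only the training split and inspect one record.
train = load_dataset("decodingchris/clean_squad_v1", split="train")
print(train[0]["question"])
print(train[0]["answers"])
```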