---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    struct:
    - name: answer_start
      sequence: int32
    - name: text
      sequence: string
  splits:
  - name: train
    num_bytes: 116696879
    num_examples: 130316
  - name: validation
    num_bytes: 11660319
    num_examples: 11873
  download_size: 17698683
  dataset_size: 128357198
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
task_categories:
- question-answering
language:
- en
size_categories:
- 100K<n<1M
---

## Clean SQuAD Classic v2

This is a refined version of the [SQuAD v2](https://huggingface.co/datasets/rajpurkar/squad_v2) dataset, preprocessed to improve data quality and usability for NLP tasks such as question answering.

## Description

The **Clean SQuAD Classic v2** dataset was created by applying preprocessing steps to the original SQuAD v2 dataset, including:
- **Trimming whitespace**: All leading and trailing spaces have been removed from the `question` field.
- **Minimum question length**: Questions with fewer than 12 characters were filtered out to remove overly short or uninformative entries.
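The two steps above can be sketched as follows. This is an illustrative reconstruction, not the actual cleaning script (which is not published here); the `MIN_QUESTION_LENGTH` constant, function names, and sample records are assumptions:

```python
# Illustrative sketch of the preprocessing described above
# (the exact cleaning script is not published; names and values are made up).

MIN_QUESTION_LENGTH = 12  # questions shorter than this are dropped

def clean_question(example):
    """Trim leading and trailing whitespace from the question field."""
    example["question"] = example["question"].strip()
    return example

def keep_example(example):
    """Keep only questions of at least MIN_QUESTION_LENGTH characters."""
    return len(example["question"]) >= MIN_QUESTION_LENGTH

raw = [
    {"question": "  What is the capital of France? "},
    {"question": " Why?  "},  # too short after trimming, so it is dropped
]

cleaned = [clean_question(e) for e in raw]
kept = [e for e in cleaned if keep_example(e)]
```

With the `datasets` library, the same logic would typically be applied via `Dataset.map(clean_question)` followed by `Dataset.filter(keep_example)`.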

Unlike the [Clean SQuAD v2](https://huggingface.co/datasets/decodingchris/clean_squad_v2) dataset, this dataset does not contain a separate test split. It retains the classic two-way split of **train** and **validation**, following the traditional structure of the original SQuAD v2 dataset.

## Dataset Structure

The dataset is divided into two splits:

1. **Train**: The primary split for model training.
2. **Validation**: A split for hyperparameter tuning and model evaluation.

## Data Fields

Each split contains the following fields:
- `id`: Unique identifier for each question-context pair.
- `title`: Title of the article the context is derived from.
- `context`: Paragraph from which the answer is extracted.
- `question`: Preprocessed question string.
- `answers`: Dictionary containing:
  - `text`: The text of the correct answer(s), if available. Empty for unanswerable questions.
  - `answer_start`: Character-level start position of the answer in the context, if available.
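The field layout can be illustrated with two hypothetical records (values are made up; only the schema follows the description above). In SQuAD v2, an empty `text` list marks a question as unanswerable:

```python
# Illustrative records matching the schema described above (values are made up).
answerable = {
    "id": "example-1",
    "title": "Paris",
    "context": "Paris is the capital of France.",
    "question": "What is the capital of France?",
    "answers": {"text": ["Paris"], "answer_start": [0]},
}

unanswerable = {
    "id": "example-2",
    "title": "Paris",
    "context": "Paris is the capital of France.",
    "question": "What is the capital of Germany?",
    "answers": {"text": [], "answer_start": []},  # empty lists: no answer exists
}

def is_answerable(example):
    # An empty `text` list marks a SQuAD v2 question as unanswerable.
    return len(example["answers"]["text"]) > 0
```

Note that `answer_start` is a character offset into `context`, so `context[start:start + len(text)]` recovers the answer span.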

## Usage

The dataset is hosted on the Hugging Face Hub and can be loaded with the following code:

```python
from datasets import load_dataset

dataset = load_dataset("decodingchris/clean_squad_classic_v2")
```