---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: title
      dtype: string
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: answers
      struct:
        - name: answer_start
          sequence: int32
        - name: text
          sequence: string
  splits:
    - name: train
      num_bytes: 79301631
      num_examples: 87588
    - name: validation
      num_bytes: 5239631
      num_examples: 5285
    - name: test
      num_bytes: 5233006
      num_examples: 5285
  download_size: 19809326
  dataset_size: 89774268
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 10K<n<100K
---

# Clean SQuAD v1

This is a refined version of the SQuAD v1 dataset, preprocessed to improve data quality and usability for NLP tasks such as Question Answering.

## Description

The Clean SQuAD v1 dataset was created by applying preprocessing steps to the original SQuAD v1 dataset, including:

- **Trimming whitespace**: All leading and trailing spaces have been removed from the `question` field.
- **Minimum question length**: Questions with fewer than 12 characters were filtered out to remove overly short or uninformative entries.
- **Balanced validation and test sets**: The validation set from the original SQuAD dataset was split 50-50 into new validation and test sets.

This preprocessing ensures that the dataset is cleaner and more balanced, making it suitable for training and evaluating machine learning models on Question Answering tasks.
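
The exact processing script is not published with this card, so the sketch below is only an illustration of the steps above using the `datasets` library; in particular, the split seed is an assumption.

```python
from datasets import load_dataset, DatasetDict

# Start from the original SQuAD v1 dataset
squad = load_dataset("squad")

# Trim leading and trailing whitespace from the question field
squad = squad.map(lambda ex: {"question": ex["question"].strip()})

# Drop questions with fewer than 12 characters
squad = squad.filter(lambda ex: len(ex["question"]) >= 12)

# Split the original validation set 50-50 into new validation and test sets
# (the seed here is illustrative, not the one actually used)
halves = squad["validation"].train_test_split(test_size=0.5, seed=42)

clean_squad = DatasetDict({
    "train": squad["train"],
    "validation": halves["train"],
    "test": halves["test"],
})
```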

## Dataset Structure

The dataset is divided into three subsets:

  1. Train: The primary dataset for model training.
  2. Validation: A dataset for hyperparameter tuning and model validation.
  3. Test: A separate dataset for evaluating final model performance.

## Data Fields

Each subset contains the following fields:

- `id`: Unique identifier for each question-context pair.
- `title`: Title of the article the context is derived from.
- `context`: Paragraph from which the answer is extracted.
- `question`: Preprocessed question string.
- `answers`: Dictionary containing:
  - `text`: The text of the correct answer(s).
  - `answer_start`: Character-level start position of the answer in the context.
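
Since `answer_start` is a character offset into `context`, the answer can be recovered by slicing. A minimal sketch (SQuAD v1 has at least one annotated answer per question, so indexing the first entry is safe):

```python
from datasets import load_dataset

dataset = load_dataset("decodingchris/clean_squad_v1")
example = dataset["train"][0]

# answers holds parallel lists: one entry per annotated answer
answer = example["answers"]["text"][0]
start = example["answers"]["answer_start"][0]

# The character offset points into the context paragraph
assert example["context"][start:start + len(answer)] == answer
```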

## Usage

The dataset is hosted on the Hugging Face Hub and can be loaded with the following code:

```python
from datasets import load_dataset

dataset = load_dataset("decodingchris/clean_squad_v1")
```
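
Each split can then be accessed by name:

```python
print(dataset)  # DatasetDict with train, validation, and test splits

for split in ("train", "validation", "test"):
    print(split, dataset[split].num_rows)
# Expected sizes: 87588 / 5285 / 5285 (see the metadata above)
```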