
Dataset Card for Wiki-727K

Wiki-727K is a large dataset for text segmentation, automatically extracted and labeled from Wikipedia. It is framed as a sentence-level sequence labeling task: each sentence is labeled to indicate whether it marks a semantic or topic shift within the document.

Dataset Overview

  • Train: 582k documents
  • Validation: 72k documents
  • Test: 73k documents

Features

  • id (string): Document ID.
  • ids (sequence of string): Sentence IDs for each document.
  • sentences (sequence of string): Sentences in each document.
  • titles_mask (sequence of uint8): Mask indicating if a sentence is a title (optional).
  • levels (sequence of uint8): Hierarchical level of each sentence (optional).
  • labels (sequence of class labels): Binary label per sentence: semantic-continuity or semantic-shift.
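To illustrate how the labels field relates to the sentences field, here is a minimal sketch that groups a document's sentences into topical segments. The record below is a toy example with invented sentence text, and it assumes the convention that a semantic-shift label (1) marks the first sentence of a new segment:

```python
# Toy record mirroring the feature layout above (values are invented).
record = {
    "id": "doc-0",
    "ids": ["s0", "s1", "s2", "s3"],
    "sentences": [
        "Topic A, sentence one.",
        "Topic A, sentence two.",
        "Topic B, sentence one.",
        "Topic B, sentence two.",
    ],
    # Assumption: 1 = semantic-shift (a new segment starts here),
    # 0 = semantic-continuity (same segment as the previous sentence).
    "labels": [0, 0, 1, 0],
}

def split_into_segments(sentences, labels):
    """Group sentences into segments, opening a new segment at each shift."""
    segments = []
    for sent, label in zip(sentences, labels):
        if label == 1 or not segments:
            segments.append([])
        segments[-1].append(sent)
    return segments

segments = split_into_segments(record["sentences"], record["labels"])
print(len(segments))  # number of topical segments in the toy document
```

If the dataset instead labels the last sentence of each segment, the grouping condition would need to be adjusted accordingly.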

Usage

The dataset can be loaded with the Hugging Face datasets library. Because the repo ships a loading script, trust_remote_code=True is required:

from datasets import load_dataset

# Titled variant: section titles are kept as sentences (see titles_mask)
titled_dataset = load_dataset('saeedabc/wiki727k', num_proc=8, trust_remote_code=True)

# Untitled variant: title lines are dropped from the documents
untitled_dataset = load_dataset('saeedabc/wiki727k', drop_titles=True, num_proc=8, trust_remote_code=True)
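When working with the titled variant, the titles_mask field can be used to filter title lines out manually instead of reloading with drop_titles=True. A minimal sketch on a toy record (field names from the card; the values, and the assumption that 1 in titles_mask marks a title line, are illustrative):

```python
# Toy record with invented values; assumption: titles_mask uses
# 1 for title lines and 0 for body sentences.
record = {
    "sentences": [
        "== History ==",
        "The town was founded in 1850.",
        "It grew quickly.",
    ],
    "titles_mask": [1, 0, 0],
}

# Keep only body sentences, dropping title lines.
body = [
    sent
    for sent, is_title in zip(record["sentences"], record["titles_mask"])
    if not is_title
]
print(body)
```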
