# Dataset Card for WikiSection (en_city, en_disease)
The WikiSection dataset is a collection of segmented Wikipedia articles related to cities and diseases, structured in this repository for a sentence-level document segmentation task.
## Dataset Overview
WikiSection contains two English subsets:
- en_city: 19.5k Wikipedia articles about cities and city-related topics.
- en_disease: 3.6k articles on diseases and health-related scientific information.
Each subset provides segmented articles, where the task is to classify sentence boundaries as either "semantic-continuity" or "semantic-shift."
## Features
The dataset provides the following features:
- `id` (string): A unique identifier for each document.
- `title` (string): The title of the document.
- `ids` (list[string]): The sentence IDs within the document.
- `sentences` (list[string]): The sentences within the document.
- `titles_mask` (list[uint8]): A binary mask indicating which sentences are titles.
- `labels` (list[int]): Binary labels for each sentence, where `0` represents "semantic-continuity" and `1` represents "semantic-shift."
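To make the schema concrete, a single record might look like the following. This is a hand-made, hypothetical illustration of the field layout described above; the values are invented, not drawn from the actual dataset:

```python
# Hypothetical record illustrating the WikiSection feature schema.
# All field values below are invented for illustration only.
record = {
    "id": "en_city_0001",
    "title": "Springfield",
    "ids": ["s0", "s1", "s2", "s3"],
    "sentences": [
        "Springfield is a city in the United States.",
        "It was founded in the 19th century.",
        "The climate is humid continental.",
        "Summers are warm and winters are cold.",
    ],
    "titles_mask": [0, 0, 0, 0],
    # 1 marks a semantic shift (a new section starts at this sentence);
    # 0 marks continuity with the preceding sentence.
    "labels": [1, 0, 1, 0],
}

# The per-sentence lists are parallel: one entry per sentence.
assert (
    len(record["ids"])
    == len(record["sentences"])
    == len(record["titles_mask"])
    == len(record["labels"])
)
```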
## Usage
The dataset can be loaded with the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# en_city
titled_en_city = load_dataset('saeedabc/wikisection', 'en_city', trust_remote_code=True)
untitled_en_city = load_dataset('saeedabc/wikisection', 'en_city', drop_titles=True, trust_remote_code=True)

# en_disease
titled_en_disease = load_dataset('saeedabc/wikisection', 'en_disease', trust_remote_code=True)
untitled_en_disease = load_dataset('saeedabc/wikisection', 'en_disease', drop_titles=True, trust_remote_code=True)
```
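Once loaded, the per-sentence labels can be turned back into contiguous segments. The sketch below operates on a mock record rather than the loaded dataset, and assumes (per the feature description above) that a label of `1` marks the first sentence of a new segment:

```python
def labels_to_segments(sentences, labels):
    """Group sentences into segments; a label of 1 starts a new segment."""
    segments = []
    for sentence, label in zip(sentences, labels):
        if label == 1 or not segments:
            segments.append([sentence])
        else:
            segments[-1].append(sentence)
    return segments

# Mock example (values invented, not taken from the dataset):
sentences = [
    "A city in Europe.",
    "It has two rivers.",
    "The economy relies on tourism.",
]
labels = [1, 0, 1]
segments = labels_to_segments(sentences, labels)
# segments -> [["A city in Europe.", "It has two rivers."],
#              ["The economy relies on tourism."]]
```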
## Dataset Details
- Homepage: WikiSection on GitHub