---
annotations_creators:
  - no-annotation
language:
  - en
language_creators:
  - found
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - n<1K
source_datasets:
  - original
task_categories:
  - text-classification
task_ids:
  - topic-classification
paperswithcode_id: null
pretty_name: Text360 Sample Dataset
tags:
  - text-classification
  - arxiv
  - wikipedia
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - dir1/subdir1/s1.jsonl
          - dir2/subdir2/s2.jsonl
---

# Dataset Card for Text360 Sample Dataset

## Dataset Description

- **Repository:** [Add your repository URL here]
- **Paper:** [Add paper URL if applicable]
- **Point of Contact:** [Add contact information]

### Dataset Summary

This dataset contains text samples from two sources (arXiv and Wikipedia), organized in a hierarchical directory structure. Each sample includes a `text` field and a `subset` identifier.

## Data Files Structure

The dataset keeps its original directory structure:

```
.
├── dir1/
│   └── subdir1/
│       └── s1.jsonl  # Contains arXiv samples
└── dir2/
    └── subdir2/
        └── s2.jsonl  # Contains Wikipedia samples
```

### Data Fields

Each JSONL file contains records with the following fields:

- `text` (string): the main text content
- `subset` (string): source identifier, either `"arxiv"` or `"wikipedia"`
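Given the two-field schema above, a record can be checked with a few lines of standard-library Python. This is a minimal sketch; the `validate_record` helper is illustrative and not part of the dataset:

```python
import json

def validate_record(line: str) -> dict:
    """Parse one JSONL line and check the two-field Text360 schema."""
    record = json.loads(line)
    assert isinstance(record.get("text"), str), "text must be a string"
    assert record.get("subset") in {"arxiv", "wikipedia"}, "unknown subset"
    return record

sample = '{"text": "A sample sentence.", "subset": "wikipedia"}'
record = validate_record(sample)
print(record["subset"])  # wikipedia
```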

### Data Splits

All data belongs to a single `train` split, distributed across the JSONL files in their respective directories.

### Example Instance

```json
{
    "text": "This is a long text sample from arxiv about quantum computing...",
    "subset": "arxiv"
}
```
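Because the files keep their original locations, the whole train split can be read with the standard library alone. The sketch below assumes a local copy of the repository; for the demo it builds the expected layout in a scratch directory (`demo_text360` is a made-up path):

```python
import json
from pathlib import Path

def iter_records(root):
    """Yield every record from all JSONL files under root, in path order."""
    for path in sorted(Path(root).glob("**/*.jsonl")):
        with path.open(encoding="utf-8") as fh:
            for line in fh:
                if line.strip():
                    yield json.loads(line)

# Tiny demo: write one record into the expected layout, then read it back.
root = Path("demo_text360")
(root / "dir1" / "subdir1").mkdir(parents=True, exist_ok=True)
(root / "dir1" / "subdir1" / "s1.jsonl").write_text(
    '{"text": "An arXiv abstract...", "subset": "arxiv"}\n', encoding="utf-8"
)
records = list(iter_records(root))
print(len(records), records[0]["subset"])  # 1 arxiv
```

With the Hugging Face `datasets` library installed, `load_dataset("json", data_files={"train": [...]})` over the same paths should yield an equivalent `train` split.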

## Additional Information

### Dataset Creation

The dataset is organized in its original directory structure, with JSONL files containing text samples from arXiv and Wikipedia sources. Each file keeps its original location and format.

### Curation Rationale

The dataset was created to provide a small sample of text data from different sources for text-classification tasks.

### Source Data

#### Initial Data Collection and Normalization

The data was collected from two sources:

  1. arXiv papers
  2. Wikipedia articles

#### Who are the source language producers?

- **arXiv:** academic researchers and scientists
- **Wikipedia:** community contributors

### Annotations

#### Annotation process

No additional annotations were added to the source data.

#### Who are the annotators?

N/A

### Personal and Sensitive Information

The dataset does not contain any personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset can be used for educational and research purposes in text classification tasks.

### Discussion of Biases

The dataset may contain biases inherent to the source materials (arXiv papers and Wikipedia articles).

### Other Known Limitations

The dataset is a small sample and may not be representative of all content from the source materials.

### Dataset Curators

[Add curator information]

### Licensing Information

This dataset is released under the MIT License.

### Citation Information

[Add citation information]

### Contributions

[Add contribution information]

### Contact

[Add contact information]