---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---
# OpenAI Moderation Binary Dataset
This dataset is a binary-labeled version of the original OpenAI Moderation Evaluation Dataset, created to support safe/unsafe classification tasks in content moderation, safety research, and AI alignment.
## Dataset Details
- Original Source: OpenAI Moderation API Evaluation Dataset
- License: MIT (inherited from the original repo)
- Samples: 1,680 total
- Labels:
  - `"safe"` (no harm labels present)
  - `"unsafe"` (at least one moderation label present)
## Structure
Each row consists of:

```json
{
  "prompt": "Some user input text...",
  "prompt_label": "safe"
}
```

The `prompt_label` field is either `"safe"` or `"unsafe"`.
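For quick inspection, here is a minimal loading sketch using the Hugging Face `datasets` library. The repo id below is a placeholder, not this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id (assumption): replace with this dataset's actual Hub path.
ds = load_dataset("username/openai-moderation-binary", split="train")

print(ds[0])                      # e.g. {'prompt': '...', 'prompt_label': 'safe'}
print(ds.unique("prompt_label"))  # ['safe', 'unsafe']
```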
## Preprocessing
This version was derived by the following steps (a code sketch follows the list):

- Downloading and parsing the original JSONL dataset (`samples-1680.jsonl.gz`)
- Creating a new column called `prompt_label`, based on the presence of any of the following 8 moderation labels:
  - `S` (sexual)
  - `S3` (severe sexual)
  - `H` (hate)
  - `H2` (severe hate)
  - `V` (violence)
  - `V2` (severe violence)
  - `HR` (harassment)
  - `SH` (self-harm)
- Assigning:
  - `prompt_label = "unsafe"` if any of those labels were `1`
  - `prompt_label = "safe"` if all were `0`
- Removing the original moderation columns, leaving only `prompt` and `prompt_label`
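A minimal sketch of these steps in Python, assuming the original gzipped JSONL file is available locally; this is illustrative, not necessarily the exact script used:

```python
import gzip
import json

# The 8 moderation labels from the original release.
HARM_LABELS = ["S", "S3", "H", "H2", "V", "V2", "HR", "SH"]

rows = []
with gzip.open("samples-1680.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "unsafe" if any moderation label is set to 1, otherwise "safe".
        is_unsafe = any(record.get(label) == 1 for label in HARM_LABELS)
        # Keep only the two columns described above.
        rows.append({
            "prompt": record["prompt"],
            "prompt_label": "unsafe" if is_unsafe else "safe",
        })

print(len(rows))  # 1680
```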
## Label Distribution
| Label  | Count | %      |
|--------|-------|--------|
| safe   | 1,158 | ~68.9% |
| unsafe | 522   | ~31.1% |
## Intended Use
This dataset is designed for:
- Binary classification (safe vs. unsafe prompt detection), as in the baseline sketch below
- Content moderation and safety evaluation
- Educational and research purposes
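As one example of the binary-classification use case, here is a simple TF-IDF + logistic regression baseline with scikit-learn. The `prompts` and `labels` variables are assumed to be parallel lists built from the `prompt` and `prompt_label` columns:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# `prompts` and `labels` (assumption): parallel lists of the dataset's
# "prompt" and "prompt_label" columns.
X_train, X_test, y_train, y_test = train_test_split(
    prompts, labels, test_size=0.2, stratify=labels, random_state=42
)

# Bag-of-words baseline: word/bigram TF-IDF features + logistic regression.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```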
## Citation
If you use this dataset, please cite the original authors of the OpenAI Moderation dataset:
OpenAI (2022). *A Holistic Approach to Undesired Content Detection in the Real World.*
https://github.com/openai/moderation-api-release
## Acknowledgements
Huge credit to OpenAI for releasing the original dataset.
This binary-labeled version was created for ease of evaluation and validation.