---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---

# 🧠 OpenAI Moderation Binary Dataset

This dataset is a **binary-labeled** version of the original [OpenAI Moderation Evaluation Dataset](https://github.com/openai/moderation-api-release), created to support safe/unsafe classification tasks in content moderation, safety research, and AI alignment.

---

## 📦 Dataset Details

- **Original Source:** [OpenAI Moderation API Evaluation Dataset](https://github.com/openai/moderation-api-release)
- **License:** MIT (inherited from the original repository)
- **Samples:** 1,680 total
- **Labels:**
  - `"safe"` (no moderation labels present)
  - `"unsafe"` (at least one moderation label present)

---

## 📋 Structure

Each row consists of:

```json
{
  "prompt": "Some user input text...",
  "prompt_label": "safe"
}
```

`prompt_label` is always one of `"safe"` or `"unsafe"`.

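
For quick inspection, here is a minimal loading sketch using the 🤗 `datasets` library. The repo id below is a placeholder, not this dataset's actual Hub path, and the split name may differ:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path;
# the split name may also differ.
ds = load_dataset("your-username/openai-moderation-binary", split="train")

print(ds)  # features: prompt (string), prompt_label (string)
print(ds[0]["prompt"][:80], "->", ds[0]["prompt_label"])
```
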
---

## 🧹 Preprocessing

This version was derived by the following steps (a sketch of the pipeline in Python appears after the list):

1. Downloading and parsing the original JSONL dataset (`samples-1680.jsonl.gz`)
2. Creating a new column called `prompt_label`, based on the presence of any of the following 8 moderation labels:
   - `S` (sexual)
   - `S3` (sexual content involving minors)
   - `H` (hate)
   - `H2` (hateful/threatening)
   - `V` (violence)
   - `V2` (graphic violence)
   - `HR` (harassment)
   - `SH` (self-harm)
3. Assigning:
   - `prompt_label = "unsafe"` if **any** of those were `1`
   - `prompt_label = "safe"` if **all** were `0`
4. Removing the original moderation columns, leaving only:
   - `prompt`
   - `prompt_label`

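
A minimal sketch of that derivation, assuming each JSONL record stores the prompt text under a `prompt` key and the eight flags as 0/1 fields:

```python
import gzip
import json

# The eight binary moderation flags in the original release.
HARM_LABELS = ["S", "S3", "H", "H2", "V", "V2", "HR", "SH"]

rows = []
with gzip.open("samples-1680.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "unsafe" if any flag is set; missing or null flags count as 0.
        is_unsafe = any(int(record.get(label) or 0) == 1 for label in HARM_LABELS)
        rows.append(
            {"prompt": record["prompt"], "prompt_label": "unsafe" if is_unsafe else "safe"}
        )
```
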
---

## 📊 Label Distribution

| Label    | Count | %      |
|----------|-------|--------|
| `safe`   | 1158  | ~68.9% |
| `unsafe` | 522   | ~31.1% |

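
These counts can be re-derived from the `rows` list built in the preprocessing sketch above:

```python
from collections import Counter

counts = Counter(row["prompt_label"] for row in rows)  # rows from the sketch above
total = sum(counts.values())
for label in ("safe", "unsafe"):
    print(f"{label}: {counts[label]} ({counts[label] / total:.1%})")
```
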
---

## 💡 Intended Use

This dataset is designed for:

- Binary classification (safe vs. unsafe prompt detection); see the baseline sketch after this list
- Content moderation and safety evaluation
- Educational and research purposes

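
As one illustration of the binary-classification use case, here is a quick baseline sketch using TF-IDF features and logistic regression (scikit-learn assumed installed; `rows` comes from the preprocessing sketch above):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

texts = [row["prompt"] for row in rows]
labels = [row["prompt_label"] for row in rows]

# Stratified split preserves the ~69/31 safe/unsafe ratio in both halves.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0
)

vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vec.transform(X_test))))
```
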
---

## 📖 Citation

If you use this dataset, please cite the original authors of the OpenAI Moderation dataset:

> **Markov et al. (OpenAI, 2022).**
> *A Holistic Approach to Undesired Content Detection in the Real World.*
> [https://github.com/openai/moderation-api-release](https://github.com/openai/moderation-api-release)

---

## 🙏 Acknowledgements

Huge credit to OpenAI for releasing the original dataset.

This binary-labeled version was created for ease of evaluation and validation.