---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
---
# OpenAI Moderation Binary Dataset
This dataset is a **binary-labeled** version of the original [OpenAI Moderation Evaluation Dataset](https://github.com/openai/moderation-api-release), created to support safe/unsafe classification tasks in content moderation, safety research, and AI alignment.
---
## Dataset Details
- **Original Source:** [OpenAI Moderation API Evaluation Dataset](https://github.com/openai/moderation-api-release)
- **License:** MIT (inherited from original repo)
- **Samples:** 1,680 total
- **Labels:**
- `"safe"` (no harm labels present)
- `"unsafe"` (at least one moderation label present)
---
## Structure
Each row consists of:
```json
{
  "prompt": "Some user input text...",
  "prompt_label": "safe"  // or "unsafe"
}
```
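A minimal loading sketch using the Hugging Face `datasets` library (the repository id and split name below are placeholders; substitute the actual Hub path of this dataset):

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/openai-moderation-binary")

# Each example carries a prompt string and its binary label.
example = ds["train"][0]  # the split name may differ depending on the upload
print(example["prompt"])        # raw user input text
print(example["prompt_label"])  # "safe" or "unsafe"
```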
---
## Preprocessing
This version was derived from the original release by the following steps (a reproduction sketch follows the list):
1. Downloading and parsing the original JSONL dataset (`samples-1680.jsonl.gz`)
2. Creating a new column called `prompt_label`, based on the presence of any of the following 8 moderation labels:
- `S` (sexual)
- `S3` (sexual content involving minors)
- `H` (hate)
- `H2` (hate/threatening)
- `V` (violence)
- `V2` (violence/graphic)
- `HR` (harassment)
- `SH` (self-harm)
3. Assigning:
- `prompt_label = "unsafe"` if **any** of those were `1`
- `prompt_label = "safe"` if **all** were `0`
4. Removing the original moderation columns, leaving only:
- `prompt`
- `prompt_label`
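The derivation can be reproduced with a short script along these lines (a sketch; the file and field names follow the original release, and the category values are assumed to be 0/1 flags):

```python
import gzip
import json

# The eight moderation categories used to derive the binary label.
CATEGORIES = ["S", "S3", "H", "H2", "V", "V2", "HR", "SH"]

rows = []
with gzip.open("samples-1680.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "unsafe" if any of the eight category flags is 1, otherwise "safe".
        unsafe = any(int(record.get(cat, 0)) == 1 for cat in CATEGORIES)
        rows.append({"prompt": record["prompt"],
                     "prompt_label": "unsafe" if unsafe else "safe"})

# Keep only the two remaining columns and write the binary-labeled file.
with open("moderation_binary.jsonl", "w", encoding="utf-8") as out:
    for row in rows:
        out.write(json.dumps(row) + "\n")
```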
---
## Label Distribution
| Label | Count | % |
|---------|-------|---------|
| `safe` | 1158 | ~68.9% |
| `unsafe` | 522 | ~31.1% |
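These counts can be re-checked from the loaded dataset (reusing `ds` from the loading sketch above):

```python
from collections import Counter

# Tally the binary labels; expected roughly 1158 "safe" and 522 "unsafe".
counts = Counter(ds["train"]["prompt_label"])
total = sum(counts.values())
for label, count in counts.most_common():
    print(f"{label}: {count} ({count / total:.1%})")
```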
---
## Intended Use
This dataset is designed for:
- Binary classification (safe vs. unsafe prompt detection; see the baseline sketch below)
- Content moderation and safety evaluation
- Educational and research purposes
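As an illustration of the binary-classification use case, a simple scikit-learn baseline might look like this (a sketch, not a tuned model; it reuses `ds` from the loading sketch above and holds out a random test split):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

texts = ds["train"]["prompt"]
labels = ds["train"]["prompt_label"]

# Hold out 20% of the prompts for evaluation, stratified by label.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# TF-IDF features plus logistic regression as a lightweight baseline.
vectorizer = TfidfVectorizer(max_features=20_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(vectorizer.fit_transform(X_train), y_train)

print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```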
---
## Citation
If you use this dataset, please cite the original authors of the OpenAI Moderation dataset:
> **OpenAI (2022).**
> *A Holistic Approach to Undesired Content Detection in the Real World.*
> [https://github.com/openai/moderation-api-release](https://github.com/openai/moderation-api-release)
---
## Acknowledgements
Huge credit to OpenAI for releasing the original dataset.
This binary-labeled version was created for ease of evaluation and validation.