# OpenAI Moderation Binary Dataset

This dataset is a **binary-labeled** version of the original [OpenAI Moderation Evaluation Dataset](https://github.com/openai/moderation-api-release), created to support safe/unsafe classification tasks in content moderation, safety research, and AI alignment.

---

## Dataset Details

- **Original Source:** [OpenAI Moderation API Evaluation Dataset](https://github.com/openai/moderation-api-release)
- **License:** MIT (inherited from original repo)
- **Samples:** 1,680 total
- **Labels:**
  - `"safe"` (no harm labels present)
  - `"unsafe"` (at least one moderation label present)

---

## Structure

Each row consists of:

```json
{
  "prompt": "Some user input text...",
  "prompt_label": "safe"
}
```

`prompt_label` is always either `"safe"` or `"unsafe"`.
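
The rows can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is hosted on the Hugging Face Hub (the repo id and split name below are placeholders, not the actual values):

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
# The split name "train" is also an assumption.
ds = load_dataset("username/openai-moderation-binary", split="train")

example = ds[0]
print(example["prompt"])        # the raw input text
print(example["prompt_label"])  # "safe" or "unsafe"
```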

---

## Preprocessing

This version was derived by:

1. Downloading and parsing the original JSONL dataset (`samples-1680.jsonl.gz`)
2. Creating a new column called `prompt_label`, based on the presence of any of the following 8 moderation labels:
   - `S` (sexual)
   - `S3` (sexual content involving minors)
   - `H` (hate)
   - `H2` (hateful/threatening)
   - `V` (violence)
   - `V2` (graphic violence)
   - `HR` (harassment)
   - `SH` (self-harm)
3. Assigning:
   - `prompt_label = "unsafe"` if **any** of those labels was `1`
   - `prompt_label = "safe"` if **all** of them were `0`
4. Removing the original moderation columns (see the code sketch after this list), leaving only:
   - `prompt`
   - `prompt_label`
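
A minimal sketch of these steps, assuming `samples-1680.jsonl.gz` has been downloaded locally from the original repo and that each record carries the `prompt` text alongside 0/1 fields for the eight labels (field names follow the original release):

```python
import gzip
import json

# The eight moderation labels checked when deriving the binary label.
MODERATION_LABELS = ["S", "S3", "H", "H2", "V", "V2", "HR", "SH"]

rows = []
with gzip.open("samples-1680.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "unsafe" if any of the eight labels is 1, otherwise "safe".
        flagged = any(record.get(label) == 1 for label in MODERATION_LABELS)
        # Keep only the two columns of the binary version.
        rows.append({
            "prompt": record["prompt"],
            "prompt_label": "unsafe" if flagged else "safe",
        })

print(len(rows))  # 1,680 samples expected
```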

---

## Label Distribution

| Label    | Count | %      |
|----------|-------|--------|
| `safe`   | 1158  | ~68.9% |
| `unsafe` | 522   | ~31.1% |
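
These counts can be re-derived from the `rows` list built in the preprocessing sketch above:

```python
from collections import Counter

counts = Counter(row["prompt_label"] for row in rows)
for label, count in counts.most_common():
    print(f"{label}: {count} ({count / len(rows):.1%})")
# Expected: safe: 1158 (68.9%) / unsafe: 522 (31.1%)
```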

---

## Intended Use

This dataset is designed for:

- Binary classification (safe vs unsafe prompt detection)
- Content moderation and safety evaluation
- Educational and research purposes
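
For the classification use case, note that the classes are imbalanced (~69% `safe`), so a stratified split preserves the label ratio in both partitions. A minimal sketch using scikit-learn (an assumption; any stratified splitter works), continuing from the `rows` built above:

```python
from sklearn.model_selection import train_test_split

texts = [row["prompt"] for row in rows]
labels = [row["prompt_label"] for row in rows]

# stratify=labels keeps the ~69/31 safe/unsafe ratio in both splits.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42
)
```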

---

## Citation

If you use this dataset, please cite the original authors of the OpenAI Moderation dataset:

> **OpenAI (2022).**
> *A Holistic Approach to Undesired Content Detection in the Real World.*
> [https://github.com/openai/moderation-api-release](https://github.com/openai/moderation-api-release)

---

## Acknowledgements

Huge credit to OpenAI for releasing the original dataset.

This binary-labeled version was created for ease of training and evaluation, while preserving the intent and structure of the original data.