AllanK24 committed on
Commit 70df97f
· verified ·
1 Parent(s): bed34f5

Dataset Card added v1

Files changed (1):
  README.md: +85 -25
README.md CHANGED
@@ -1,26 +1,86 @@
  ---
- dataset_info:
-   features:
-   - name: prompt
-     dtype: string
-   - name: prompt_label
-     dtype: string
-   splits:
-   - name: test
-     num_bytes: 1127863
-     num_examples: 1680
-   download_size: 740014
-   dataset_size: 1127863
- configs:
- - config_name: default
-   data_files:
-   - split: test
-     path: data/test-*
- license: mit
- task_categories:
- - text-classification
- language:
- - en
- size_categories:
- - 1K<n<10K
- ---
+ # 🧠 OpenAI Moderation Binary Dataset
+
+ This dataset is a **binary-labeled** version of the original [OpenAI Moderation Evaluation Dataset](https://github.com/openai/moderation-api-release), created to support safe/unsafe classification tasks in content moderation, safety research, and AI alignment.
+
  ---
+
+ ## 📦 Dataset Details
+
+ - **Original Source:** [OpenAI Moderation API Evaluation Dataset](https://github.com/openai/moderation-api-release)
+ - **License:** MIT (inherited from the original repo)
+ - **Samples:** 1,680 total
+ - **Labels:**
+   - `"safe"` (no harm labels present)
+   - `"unsafe"` (at least one moderation label present)
+
+ ---
+
+ ## 📁 Structure
+
+ Each row has two string fields:
+
+ ```json
+ {
+   "prompt": "Some user input text...",
+   "prompt_label": "safe"
+ }
+ ```
+
+ `prompt_label` is always either `"safe"` or `"unsafe"`.
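For reference, the split loads directly with the 🤗 `datasets` library. A minimal sketch; the repo id below is a placeholder, since the card does not state the dataset's Hub path:

```python
# Minimal loading sketch. The repo id is a placeholder --
# substitute the actual Hub path of this dataset.
from datasets import load_dataset

ds = load_dataset("AllanK24/openai-moderation-binary", split="test")

print(ds)     # Dataset({features: ['prompt', 'prompt_label'], num_rows: 1680})
print(ds[0])  # e.g. {'prompt': '...', 'prompt_label': 'safe'}
```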
+
+ ---
+
+ ## 🧹 Preprocessing
+
+ This version was derived by the following steps (a code sketch follows the list):
+ 1. Downloading and parsing the original gzipped JSONL dataset (`samples-1680.jsonl.gz`)
+ 2. Creating a new column called `prompt_label`, based on the presence of any of the following 8 moderation labels:
+    - `S` (sexual)
+    - `S3` (sexual/minors)
+    - `H` (hate)
+    - `H2` (hate/threatening)
+    - `V` (violence)
+    - `V2` (violence/graphic)
+    - `HR` (harassment)
+    - `SH` (self-harm)
+ 3. Assigning:
+    - `prompt_label = "unsafe"` if **any** of those were `1`
+    - `prompt_label = "safe"` if **all** were `0`
+ 4. Removing the original moderation columns, leaving only:
+    - `prompt`
+    - `prompt_label`
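A sketch of that conversion, under stated assumptions: the file path within the original repo (`data/samples-1680.jsonl.gz`) and the 0/1 integer flag encoding are inferred from the card, and this is not necessarily the author's exact script:

```python
# Sketch of the derivation above; the URL path and flag encoding are
# assumptions, not the author's exact script.
import gzip
import json
import urllib.request

URL = ("https://github.com/openai/moderation-api-release/raw/main/"
       "data/samples-1680.jsonl.gz")  # assumed location within the original repo
LABELS = ["S", "S3", "H", "H2", "V", "V2", "HR", "SH"]

rows = []
with urllib.request.urlopen(URL) as resp, \
        gzip.open(resp, "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "unsafe" if any of the 8 moderation flags is set, else "safe"
        unsafe = any(record.get(label) == 1 for label in LABELS)
        rows.append({"prompt": record["prompt"],
                     "prompt_label": "unsafe" if unsafe else "safe"})
```

From `rows`, the published version can then be assembled with, e.g., `datasets.Dataset.from_list(rows)`.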
+
+ ---
+
+ ## 📊 Label Distribution
+
+ | Label    | Count | %      |
+ |----------|-------|--------|
+ | `safe`   | 1158  | ~68.9% |
+ | `unsafe` | 522   | ~31.1% |
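The table can be reproduced with a few lines (same placeholder repo id as above):

```python
# Reproduce the label distribution table.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("AllanK24/openai-moderation-binary", split="test")
for label, n in Counter(ds["prompt_label"]).most_common():
    print(f"{label}: {n} ({n / len(ds):.1%})")
# Expected: safe: 1158 (68.9%), unsafe: 522 (31.1%)
```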
+
+ ---
+
+ ## 💡 Intended Use
+
+ This dataset is designed for:
+ - Binary classification (safe vs. unsafe prompt detection; see the baseline sketch below)
+ - Content moderation and safety evaluation
+ - Educational and research purposes
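As a concrete instance of the first bullet, here is a minimal baseline sketch using scikit-learn (TF-IDF features plus logistic regression). Since the original release is an evaluation set, the train/eval split below only illustrates the plumbing, not a recommended training protocol:

```python
# Minimal safe/unsafe baseline sketch (illustrative only;
# the repo id is a placeholder).
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

ds = load_dataset("AllanK24/openai-moderation-binary", split="test")
X_train, X_eval, y_train, y_eval = train_test_split(
    ds["prompt"], ds["prompt_label"],
    test_size=0.2, random_state=0, stratify=ds["prompt_label"])

# Bag-of-words baseline: TF-IDF vectorizer feeding a logistic regression
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_eval, clf.predict(X_eval)))
```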
+
+ ---
+
+ ## 📚 Citation
+
+ If you use this dataset, please cite the original authors of the OpenAI Moderation dataset:
+
+ > **Markov et al. (OpenAI), 2022.**
+ > *A Holistic Approach to Undesired Content Detection in the Real World.*
+ > arXiv:2208.03274 · [https://github.com/openai/moderation-api-release](https://github.com/openai/moderation-api-release)
+
+ ---
+
+ ## 🙏 Acknowledgements
+
+ Huge credit to OpenAI for releasing the original dataset.
+
+ This binary-labeled version was created for ease of training and evaluation, while preserving the intent and structure of the original data.