minor update on instruction
app.py
CHANGED
@@ -957,7 +957,7 @@ txt1 = """
 > This NLP (Natural Language Processing) AI demonstration aims to prevent profanity, vulgarity, hate speech, violence, sexism, and other offensive language.
 > It is **not an act of censorship**, as the final UI (User Interface) will give the reader (but not a young reader) the option to click on a label to read the toxic message.
 > The goal is to create a safer and more respectful environment for you, your colleagues, and your family.
-> This NLP app is 1 of 3 hands-on apps
+> This NLP app is 1 of 3 hands-on apps from the ["AI Solution Architect," from ELVTR and Duc Haba](https://elvtr.com/course/ai-solution-architect?utm_source=instructor&utm_campaign=AISA&utm_content=linkedin).
 ---
 ### 🔴 Helpful Instruction:

@@ -972,6 +972,8 @@ txt2 = """
 ## 💻 Author and Developer Notes:
 ---
 - The demo uses the cutting-edge (2024) AI Natural Language Processing (NLP) model from OpenAI.
+- This NLP app is 1 of 3 hands-on apps from the ["AI Solution Architect," from ELVTR and Duc Haba](https://elvtr.com/course/ai-solution-architect?utm_source=instructor&utm_campaign=AISA&utm_content=linkedin).
+
 - It is not a Generative (GenAI) model, such as Google Gemini or GPT-4.
 - The NLP understands the message context, nuance, and innuendo, not just swear words.
 - We **challenge you** to trick it, i.e., write a toxic tweet or post that our AI thinks is safe. If you win, please send us your message.
@@ -999,6 +1001,8 @@ txt2 = """
 - Green is a "safe" message
 - Yellow is an "unsafe" message relative to your selected toxicity level

+- The **"confidence"** score is the confidence level in detecting a particular type of toxicity among the 14 tracked types. For instance, a confidence score of 90% indicates a 90% chance that the detected toxicity is of that particular type, while the remaining 13 toxicities collectively have a 10% chance of being the one detected. Conversely, a confidence score of 3% could indicate almost any of the types. Note that the Red, Green, or Yellow safety levels do not influence the confidence score.
+
 - The real-world dataset is from the Jigsaw Rate Severity of Toxic Comments competition on Kaggle. It has 30,108 records.
 - Citation:
   - Ian Kivlichan, Jeffrey Sorensen, Lucas Dixon, Lucy Vasserman, Meghan Graham, Tin Acosta, and Walter Reade. (2021). Jigsaw Rate Severity of Toxic Comments. Kaggle. https://kaggle.com/competitions/jigsaw-toxic-severity-rating
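The Green/Yellow/Red labels in the notes above imply a simple thresholding rule over the model's toxicity score. Below is a minimal Python sketch of that idea; the function name, the `hard_limit` default, and the exact cutoffs are illustrative assumptions, since the diff does not show the app's actual scoring code.

```python
def safety_label(toxicity_score: float, user_threshold: float,
                 hard_limit: float = 0.8) -> str:
    """Map a toxicity score in [0, 1] to a traffic-light label.

    Hypothetical logic: the real app's thresholds are not shown in this diff.
    """
    if toxicity_score >= hard_limit:
        return "Red"     # clearly toxic; hidden behind a click-to-read label
    if toxicity_score >= user_threshold:
        return "Yellow"  # "unsafe" relative to the reader's selected level
    return "Green"       # "safe" message


# Example: a mild message under a permissive threshold stays Green.
print(safety_label(0.15, user_threshold=0.4))  # -> Green
```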
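The new "confidence" note describes one winning type out of the 14 tracked toxicity types, with the remaining 13 sharing the leftover probability. That reads like the maximum of a probability distribution over the 14 types. Here is a minimal NumPy sketch of that interpretation, with made-up logits; the app's actual model outputs and post-processing are not shown in this diff.

```python
import numpy as np

# Hypothetical raw scores (logits) for the 14 tracked toxicity types.
logits = np.array([4.1, 0.3, -1.2, 0.0, 0.8, -0.5, 0.2,
                   -2.0, 0.1, -0.7, 0.4, -1.5, 0.6, -0.9])

# Softmax converts the logits into a probability distribution over the types.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

top_type = int(probs.argmax())        # index of the detected toxicity type
confidence = float(probs[top_type])   # its share of the total probability

# A high value (say 90%) means the detected toxicity is almost surely this
# one type, with the other 13 collectively holding the remainder. A low
# value (say 3%) means the distribution is nearly flat and any type could
# be the one detected.
print(f"type {top_type}: confidence {confidence:.0%}")
```

Nothing in this computation depends on the Green/Yellow/Red safety label, which matches the note's statement that the safety level does not influence the confidence score.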