apsys committed
Commit f955e58 · 1 Parent(s): d847be7
Files changed (1)
  1. src/about.py +0 -3
src/about.py CHANGED
@@ -19,9 +19,6 @@ Models are evaluated on their ability to properly refuse harmful requests and de
 across multiple categories and test scenarios.
 """
 
-# NOTE: The trailing spaces (two spaces) at the end of lines below are intentional.
-# They are required for Markdown rendering to create line breaks without paragraph breaks.
-# Please do not remove them during auto-formatting or cleanup.
 LLM_BENCHMARKS_TEXT = "GuardBench checks how well models handle safety challenges — from misinformation and self-harm to sexual content and corruption. "+\
 "Models are tested with regular and adversarial prompts to see if they can avoid saying harmful things. "+\
 "We track how accurate they are, how often they make mistakes, and how fast they respond."
 