rzanoli committed
Commit 3b91660 · 1 Parent(s): c03f591

Small Changes

Files changed (1): src/about.py (+3 −0)
src/about.py CHANGED
@@ -92,6 +92,9 @@ TITLE = """<h1 align="center" id="space-title">🚀 EVALITA-LLM Leaderboard 🚀
 # What does your leaderboard evaluate?
 INTRODUCTION_TEXT = """
 Evalita-LLM is a new benchmark designed to evaluate Large Language Models (LLMs) on Italian tasks. The distinguishing and innovative features of Evalita-LLM are the following: (i) **all tasks are native Italian**, avoiding translation issues and potential cultural biases; (ii) in addition to well-established **multiple-choice** tasks (6 tasks), the benchmark includes **generative** tasks (4 tasks), enabling more natural interaction with LLMs; (iii) **all tasks are evaluated against multiple prompts**, thereby mitigating model sensitivity to specific prompts and allowing a fairer, more objective evaluation.
+
+ **Multiple Choice**: 📊TE (Textual Entailment), 😃SA (Sentiment Analysis), ⚠️HS (Hate Speech Detection), 🏥AT (Admission Test), 🔤WIC (Word in Context), ❓FAQ (Frequently Asked Questions)
+ **Generative**: 🔄LS (Lexical Substitution), 📝SU (Summarization), 🏷️NER (Named Entity Recognition), 🔗REL (Relation Extraction)
 """
 
 # Which evaluations are you running? how can people reproduce what you have?
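The introduction text says every task is scored against multiple prompts to dampen prompt sensitivity. A minimal sketch of that idea — averaging per-prompt accuracy so no single prompt template dominates the score — could look like this (hypothetical helper and data names; the actual Evalita-LLM harness may aggregate differently):

```python
from statistics import mean

def evaluate_multi_prompt(model_answers: dict[str, list[bool]]) -> float:
    """Average a model's per-prompt accuracies across prompt templates.

    Hypothetical aggregation mirroring the 'multiple prompts' idea in the
    introduction text: compute accuracy for each prompt template, then
    take the unweighted mean over templates.
    """
    per_prompt_accuracy = [
        sum(correct) / len(correct) for correct in model_answers.values()
    ]
    return mean(per_prompt_accuracy)

# Two toy prompt templates, three test items each:
answers = {
    "prompt_a": [True, True, False],   # accuracy 2/3
    "prompt_b": [True, False, False],  # accuracy 1/3
}
print(round(evaluate_multi_prompt(answers), 2))  # → 0.5
```

Averaging over prompt templates (rather than reporting the best prompt) keeps the score from rewarding prompt-specific overfitting, which is the fairness point (iii) makes.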