---
license: apache-2.0
model-index:
- name: open-llama-3b-v2-chat
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 40.61
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 70.3
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.73
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.84
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.51
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 2.58
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat
      name: Open LLM Leaderboard
---
## Prerequisites

In addition to PyTorch and Transformers, install the `sentencepiece` package:

```bash
pip install sentencepiece
```
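For reference, a complete setup from scratch might look like the following. The exact PyTorch install command varies by platform and CUDA version, so treat this as a sketch rather than a pinned recipe:

```bash
# Install PyTorch, Transformers, and the SentencePiece tokenizer backend.
pip install torch transformers sentencepiece
```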
## Usage

To use the model, run the following script:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'mediocredev/open-llama-3b-v2-chat'
tokenizer_id = 'mediocredev/open-llama-3b-v2-chat'

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "I am here."},
    {"role": "user", "content": "How many days are there in a leap year?"},
]

# Build the prompt from the chat history using the model's chat template.
input_ids = tokenizer.apply_chat_template(
    chat_history, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_tokens = model.generate(
    input_ids,
    repetition_penalty=1.05,
    max_new_tokens=1000,
)

# Decode only the newly generated tokens, skipping the prompt.
output_text = tokenizer.decode(
    output_tokens[0][len(input_ids[0]):], skip_special_tokens=True
)
print(output_text)
# Assistant: There are 366 days in a leap year, which is one more day than the standard year.
```
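The script above uses greedy decoding, the `generate` default. For more varied responses you can enable sampling instead; here is a minimal sketch reusing `model`, `tokenizer`, and `input_ids` from above. The `temperature` and `top_p` values are illustrative assumptions, not settings tuned for this model:

```python
# Sampling-based generation for more varied (non-deterministic) output.
# Reuses model, tokenizer, and input_ids from the script above.
output_tokens = model.generate(
    input_ids,
    do_sample=True,        # sample from the distribution instead of greedy decoding
    temperature=0.7,       # illustrative value, not tuned for this model
    top_p=0.9,             # nucleus sampling cutoff, also illustrative
    repetition_penalty=1.05,
    max_new_tokens=1000,
)
print(tokenizer.decode(output_tokens[0][len(input_ids[0]):], skip_special_tokens=True))
```

For multi-turn use, append the generated reply to `chat_history` as an `assistant` message before adding the next `user` turn.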
## Limitations

mediocredev/open-llama-3b-v2-chat is based on OpenLLaMA 3B v2. It can struggle with factual accuracy, particularly when presented with conflicting information or nuanced topics. Its outputs should be evaluated critically rather than relied on as authoritative. While its generative capabilities are promising, it can produce factually incorrect or offensive content, so careful curation and human oversight are necessary. As an evolving model, it is still under development, and limitations in areas such as bias mitigation and interpretability are being actively addressed. Use the model responsibly and stay aware of these shortcomings to unlock its potential while mitigating its risks.
## Contact

Feedback, questions, and discussions are welcome. Feel free to reach out: [email protected]
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mediocredev/open-llama-3b-v2-chat).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 40.93 |
| AI2 Reasoning Challenge (25-Shot) | 40.61 |
| HellaSwag (10-Shot)               | 70.30 |
| MMLU (5-Shot)                     | 28.73 |
| TruthfulQA (0-shot)               | 37.84 |
| Winogrande (5-shot)               | 65.51 |
| GSM8k (5-shot)                    | 2.58  |
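The average is the unweighted mean of the six benchmark scores: (40.61 + 70.30 + 28.73 + 37.84 + 65.51 + 2.58) / 6 ≈ 40.93.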