Adding `safetensors` variant of this model
#13 opened about 1 year ago by SFconvertbot

Adding Evaluation Results
#11 opened over 1 year ago by leaderboard-pr-bot

When can we expect a Vicuna variant of the CodeLlama-2 34B model?
#10 opened over 1 year ago by perelmanych

Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check
#9 opened over 1 year ago by Shivam1410

Bigger is NOT always better...
#8 opened almost 2 years ago by MrDevolver

Adding `safetensors` variant of this model
#6 opened almost 2 years ago by mmahlwy3

Adding `safetensors` variant of this model
#5 opened almost 2 years ago by mmahlwy3

How much GPU memory is required for deployment?
#3 opened almost 2 years ago by chenfeicqq

Is there a 4-bit quantized version for FastChat?
#2 opened almost 2 years ago by ruradium

Prompt format?
#1 opened almost 2 years ago by Thireus