gorilla-llm/gorilla-falcon-7b-hf-v0-gguf
Published by Gorilla LLM (UC Berkeley).
No model card has been provided for this repository.
Downloads last month: 255
Format: GGUF
Model size: 7.22B params
Architecture: falcon
Quantization variants:

| Bits  | Variant | Size    |
|-------|---------|---------|
| 2-bit | Q2_K    | 3.86 GB |
| 3-bit | Q3_K_S  | 4.13 GB |
| 3-bit | Q3_K_M  | 4.37 GB |
| 3-bit | Q3_K_L  | 4.56 GB |
| 4-bit | Q4_K_S  | 4.75 GB |
| 4-bit | Q4_K_M  | 4.98 GB |
| 5-bit | Q5_K_S  | 5.34 GB |
| 5-bit | Q5_K_M  | 5.73 GB |
| 6-bit | Q6_K    | 7.03 GB |
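Since no model card is provided, the sketch below shows one plausible way to fetch a single quantized file from this repository with huggingface_hub. The exact filename is an assumption; the real names must be taken from the repository's file listing.

```python
# A minimal sketch, assuming the 4-bit Q4_K_M file is named as below; check
# the repository's file listing for the actual filename before running.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="gorilla-llm/gorilla-falcon-7b-hf-v0-gguf",
    filename="gorilla-falcon-7b-hf-v0-Q4_K_M.gguf",  # hypothetical filename
)
print(model_path)  # local path of the cached GGUF file
```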
Inference Providers
This model isn't deployed by any Inference Provider.
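Because no hosted provider serves this model, local inference is the natural route. Below is a minimal sketch using llama-cpp-python, which can load GGUF files for the falcon architecture; the model path, context size, and generation parameters are assumptions, not values from this page.

```python
# A minimal sketch, assuming llama-cpp-python is installed and the GGUF file
# has already been downloaded; all paths and parameters here are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="gorilla-falcon-7b-hf-v0-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm(
    "How do I upload a video to a cloud storage bucket using an API?",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```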
Collection including gorilla-llm/gorilla-falcon-7b-hf-v0-gguf:
Gorilla (collection of 10 items, updated Apr 5, 2024)