GGUF Quantised Models for Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged
This repository contains quantised GGUF-format model files for lewiswatson/Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.
Original Model
The original fine-tuned model used to generate these quantisations can be found here: lewiswatson/Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged
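If you want the unquantised weights rather than a GGUF file, the original model can be loaded with the Transformers library. The snippet below is a minimal sketch, assuming `transformers` and `torch` are installed; the prompt text is purely illustrative.

```python
# Minimal sketch: load the original (unquantised) fine-tuned model with Transformers.
# Assumes `transformers` and `torch` are installed; the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lewiswatson/Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat prompt with the model's own chat template, then generate.
messages = [{"role": "user", "content": "Wake up, samurai."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```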
Provided Files (GGUF)
File | Size |
---|---|
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.IQ4_XS.gguf | 860.39 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q2_K.gguf | 644.97 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q3_K_L.gguf | 839.39 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q3_K_M.gguf | 786.00 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q3_K_S.gguf | 725.69 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf | 940.37 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q4_K_S.gguf | 896.75 MB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q5_K_M.gguf | 1.05 GB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q5_K_S.gguf | 1.02 GB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q6_K.gguf | 1.19 GB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q8_0.gguf | 1.53 GB |
Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.f16.gguf | 2.88 GB |
This repository was automatically created using a script on 2025-04-14.
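As an illustration of how these files can be used, the sketch below downloads one of the quantised files with `huggingface_hub` and runs it with `llama-cpp-python`. The repo id is taken from this page, the Q4_K_M file is an arbitrary pick from the table above, and the prompt is illustrative; install the packages with `pip install huggingface_hub llama-cpp-python` first.

```python
# Minimal sketch: fetch one quantisation from this repo and chat with it locally.
# Assumptions: the repo id below matches this repository, the Q4_K_M file is an
# arbitrary choice from the table above, and huggingface_hub plus llama-cpp-python
# are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="lewiswatson/Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged-GGUF",
    filename="Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf",
)

# Context window of 4096 is a reasonable default; adjust to taste and available RAM.
llm = Llama(model_path=model_path, n_ctx=4096)

# Qwen2.5-Instruct GGUF exports normally embed the chat template, so the
# chat-completion helper can be used directly.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```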
Model Tree for lewiswatson/Qwen2.5-1.5B-Instruct_Johnny_Silverhand_Merged-GGUF
Base model: Qwen/Qwen2.5-1.5B
Finetuned: Qwen/Qwen2.5-1.5B-Instruct