# GGUF Quantised Models for Qwen2.5_14B_Instruct_Johnny_Silverhand_Merged
This repository contains quantised GGUF-format model files for lewiswatson/Qwen2.5_14B_Instruct_Johnny_Silverhand_Merged.
## Original Model
The original fine-tuned model used to generate these quantisations can be found here: lewiswatson/Qwen2.5_14B_Instruct_Johnny_Silverhand_Merged
## Provided Files (GGUF)
| File | Size |
|---|---|
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.IQ4_XS.gguf | 7.62 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q2_K.gguf | 5.37 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q3_K_L.gguf | 7.38 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q3_K_M.gguf | 6.84 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q3_K_S.gguf | 6.20 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf | 8.37 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q4_K_S.gguf | 7.98 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q5_K_M.gguf | 9.79 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q5_K_S.gguf | 9.56 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q6_K.gguf | 11.29 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q8_0.gguf | 14.62 GB |
| Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.f16.gguf | 27.52 GB |
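As a sketch of how one of the files above might be used, the commands below download a single quantisation and start an interactive chat. This assumes the `huggingface_hub` CLI and a built copy of llama.cpp are available; adjust paths and the chosen quant to taste.

```shell
# Download one quantisation from this repository (assumes huggingface-cli is installed)
huggingface-cli download lewiswatson/Qwen2.5_14B_Instruct_Johnny_Silverhand_Merged-GGUF \
  Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf --local-dir .

# Start an interactive conversation with llama.cpp's CLI
./llama-cli -m Qwen2.5-14B-Instruct_Johnny_Silverhand_Merged.Q4_K_M.gguf -cnv
```

Q4_K_M is shown here only because it is a common quality/size compromise; any file from the table works the same way.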
This repository was automatically created using a script on 2025-04-15.