GGUF quants of inclusionAI/Ling-lite

The importance matrix was generated with calibration_datav3.txt.

All quants, including the K quants, were generated with the imatrix calibration.

Quantized from BF16.
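
For reference, below is a minimal sketch of how imatrix-calibrated quants like these are typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The file names and the exact quant types chosen are assumptions for illustration, not the precise commands used for this repo.

```python
# Sketch of a GGUF quantization pipeline using llama.cpp tools (assumed paths/names).
import subprocess

BF16_GGUF = "Ling-lite-BF16.gguf"        # assumed name of the BF16 GGUF conversion
CALIB_FILE = "calibration_datav3.txt"    # calibration text named above
IMATRIX_FILE = "Ling-lite.imatrix"       # assumed output name for the importance matrix

# 1. Generate the importance matrix from the BF16 model and the calibration data.
subprocess.run(
    ["llama-imatrix", "-m", BF16_GGUF, "-f", CALIB_FILE, "-o", IMATRIX_FILE],
    check=True,
)

# 2. Quantize with the imatrix; the K quants also use it, per the note above.
#    The list of quant types here is illustrative only.
for quant in ["Q2_K", "Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]:
    subprocess.run(
        ["llama-quantize", "--imatrix", IMATRIX_FILE,
         BF16_GGUF, f"Ling-lite-{quant}.gguf", quant],
        check=True,
    )
```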

Model size: 16.8B params
Architecture: bailingmoe

Available quants: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
