Tags: Transformers, GGUF

QuantFactory/starcoder2-3b-instruct-v0.1-GGUF

This is a quantized version of onekq-ai/starcoder2-3b-instruct-v0.1, created using llama.cpp.
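A quick way to try one of the quantized files locally is llama.cpp's CLI. A minimal sketch follows; the quant file name below is a hypothetical example (check the repository's file list for the actual names), and it assumes `huggingface-cli` and a built `llama-cli` binary are on your PATH:

```shell
REPO="QuantFactory/starcoder2-3b-instruct-v0.1-GGUF"
# Hypothetical file name -- see the repo's file list for the real quant file names.
FILE="starcoder2-3b-instruct-v0.1.Q4_K_M.gguf"

# Download a single quant file (requires `pip install -U "huggingface_hub[cli]"`).
huggingface-cli download "$REPO" "$FILE" --local-dir .

# Run it with llama.cpp's CLI, generating up to 256 tokens for a coding prompt.
llama-cli -m "$FILE" -p "Write a Python function that checks for palindromes." -n 256
```

Lower-bit files trade accuracy for smaller downloads and lower memory use, so a mid-range quant is a common starting point.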

Original Model Card

StarCoder2-3B, fine-tuned the same way as https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1, using https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k.

Epochs: 1
Learning Rate: 0.0001
LoRA Rank: 8
Batch Size: 16
Evaluation Split: 0

Downloads last month: 68
Format: GGUF
Model size: 3.03B params
Architecture: starcoder2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
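The card does not list per-file sizes, but a rough back-of-envelope estimate for an n-bit quantization of a 3.03B-parameter model is n × 3.03e9 / 8 bytes. This sketch is only an approximation: real GGUF files add per-block scale metadata, and k-quant schemes mix bit widths.

```python
# Rough on-disk size estimates for a 3.03B-parameter model at each
# quantization width listed on the card. Actual GGUF files will be
# somewhat larger due to block-scale metadata and mixed-width k-quants.
PARAMS = 3.03e9

def approx_size_gb(bits: int, params: float = PARAMS) -> float:
    """Approximate file size in GB: params * bits / 8 bytes."""
    return params * bits / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.2f} GB")
```

By this estimate the 4-bit file is around 1.5 GB and the 8-bit file around 3 GB, which is why mid-range quants are popular for consumer hardware.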

Model tree for QuantFactory/starcoder2-3b-instruct-v0.1-GGUF: quantized from onekq-ai/starcoder2-3b-instruct-v0.1 (12 quantized versions, including this model)

Dataset used to train: bigcode/self-oss-instruct-sc2-exec-filter-50k