Orpo-Llama-3.2-1B-15k-Q4_K_M-GGUF
A Q4_K_M GGUF-quantized version of AdamLucek/Orpo-Llama-3.2-1B-15k. See the original model card for further details.
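Since the weights are in GGUF format, they can be run locally with llama.cpp. A minimal sketch, assuming a recent llama.cpp build whose `-hf` flag (Hugging Face repo download) is available; the prompt and token count are illustrative:

```shell
# Download the Q4_K_M GGUF from the Hub and run a one-off generation.
# Assumes llama.cpp is installed (e.g. `brew install llama.cpp`) and
# recent enough to support -hf; the model is fetched on first use.
llama-cli -hf AdamLucek/Orpo-Llama-3.2-1B-15k-Q4_K_M-GGUF \
  -p "Explain ORPO fine-tuning in one sentence." -n 128

# Or serve it behind an OpenAI-compatible HTTP endpoint:
llama-server -hf AdamLucek/Orpo-Llama-3.2-1B-15k-Q4_K_M-GGUF -c 2048
```

Q4_K_M is a 4-bit k-quant that trades a small quality loss for a much smaller memory footprint, which suits a 1B-parameter model on CPU or modest GPUs.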
Model tree for AdamLucek/Orpo-Llama-3.2-1B-15k-Q4_K_M-GGUF
- Base model: meta-llama/Llama-3.2-1B
- Fine-tuned: AdamLucek/Orpo-Llama-3.2-1B-15k