This is a GGML version of OpenOrca-Platypus2-13B, quantized to 4_0 (4-bit).
Original model: https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B
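GGML files of this era are typically run with llama.cpp. A minimal invocation sketch is below; the file name `openorca-platypus2-13b.ggmlv3.q4_0.bin` is an assumption about how the quantized weights are named in this repo, so substitute the actual file you download.

```shell
# Build llama.cpp (a GGML-compatible checkout), then run the quantized model.
# The model file name below is an assumed example, not confirmed by this card.
./main -m ./models/openorca-platypus2-13b.ggmlv3.q4_0.bin \
  -n 256 \
  -p "### Instruction:\nExplain what model quantization is.\n\n### Response:\n"
```

Note that newer llama.cpp releases dropped GGML in favor of the GGUF format, so running this file may require either an older llama.cpp build or converting the weights to GGUF first.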