Orpheus-3b-Kaya-FP16

This is a fine-tuned version of the pretrained model canopylabs/orpheus-3b-0.1-pretrained, trained on a custom voice dataset and converted to GGUF FP16 format for fast, efficient inference.


🔧 Model Details

  • Model Type: Text-to-Speech (TTS)
  • Architecture: Token-to-audio language model (Llama-based)
  • Parameters: ~3 billion
  • Quantisation: GGUF (FP16)
  • Sampling Rate: 24 kHz mono
  • Training Epochs: 1
  • Training Dataset: lex-au/Orpheus-3b-Kaya
  • Languages: English

🚀 Quick Usage

This model is designed for use with Orpheus-FastAPI, an OpenAI-compatible inference server for text-to-speech generation.
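
As a minimal sketch, once an OpenAI-compatible server such as Orpheus-FastAPI is running with this model, speech can be requested over the standard /v1/audio/speech route. The host, port, model identifier, and voice name below are assumptions; substitute the values from your own deployment.

```python
# Minimal sketch: request speech from an OpenAI-compatible TTS endpoint.
# The base URL, port, model name, and voice name are assumptions -- use your server's values.
import requests

response = requests.post(
    "http://localhost:5005/v1/audio/speech",   # hypothetical local Orpheus-FastAPI instance
    json={
        "model": "orpheus",                     # model identifier as configured on the server
        "input": "Hello from the Kaya voice.",
        "voice": "kaya",                        # assumed voice name for this fine-tune
        "response_format": "wav",
    },
    timeout=120,
)
response.raise_for_status()

# Save the returned audio (24 kHz mono, per the model details above).
with open("output.wav", "wb") as f:
    f.write(response.content)
```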

Compatible Inference Servers

In addition to Orpheus-FastAPI, the GGUF file can be loaded by llama.cpp-based servers and tools that accept GGUF models, such as the llama.cpp server or LM Studio.
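
For a rough idea of direct loading, the sketch below uses llama-cpp-python; the file name is an assumption, and note that the model emits audio tokens rather than plain text, so a front end like Orpheus-FastAPI is still needed to decode generations into a waveform.

```python
# Sketch only: loading the GGUF weights directly with llama-cpp-python.
# The file name is an assumption -- check the repository's file listing for the exact name.
from llama_cpp import Llama

llm = Llama(
    model_path="Orpheus-3b-Kaya-FP16.gguf",  # downloaded from this repository
    n_ctx=4096,        # context window; adjust to your hardware
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Note: the model produces audio tokens, not readable text, so raw completions
# must still be decoded to audio by a TTS front end such as Orpheus-FastAPI.
```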

📜 License

Apache License 2.0: free for research and commercial use.


🙌 Credits

  • Original model by: Canopy Labs
  • Fine-tuned, quantised, and API-wrapped by: Lex-au, using Unsloth and Hugging Face's TRL library.


📚 Citation

@misc{orpheus-tts-2025,
  author = {Canopy Labs},
  title = {Orpheus-3b-0.1-pt: Pretrained Text-to-Speech Model},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/canopylabs/orpheus-3b-0.1-pt}}
}

@misc{orpheus-kaya-2025,
  author = {Lex-au},
  title = {Orpheus-3b-Kaya-FP16: Fine-Tuned TTS Model (Quantised)},
  note = {Fine-tuned from canopylabs/orpheus-3b-0.1-pt},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/lex-au/Orpheus-3b-Kaya-FP16}}
}