Work-in-progress retouch of alpindale/magnum-72b-v1, but I won't use "Magnum" in the name. Call it FinalMix!

I found some issues and I'm trying to fix them for my own usage, while also adding more RP data through merging.

You can make your own quantized files with the provided imatrix.dat, which was generated from "wiki.train.raw".
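If you want to reproduce the quantization locally, the usual llama.cpp flow is to generate (or reuse) the importance matrix and then pass it to the quantizer. Below is a minimal sketch, assuming a local llama.cpp build with binaries named `llama-imatrix` and `llama-quantize` on your PATH; binary names and flags vary between llama.cpp versions, and all file paths are placeholders.

```python
# Minimal sketch of the llama.cpp imatrix + quantization flow.
# Assumes llama.cpp is built locally and its binaries are on PATH;
# binary names ("llama-imatrix", "llama-quantize") and flags can differ
# between llama.cpp versions. File names below are placeholders.
import subprocess

FP16_GGUF = "MG-FinalMix-72B-F16.gguf"    # full-precision GGUF conversion (placeholder name)
IMATRIX   = "imatrix.dat"                 # shipped in the repo, built from wiki.train.raw
OUT_GGUF  = "MG-FinalMix-72B-Q4_K_M.gguf"

# Step 1 (optional): regenerate the importance matrix from wiki.train.raw.
# Skip this if you simply reuse the provided imatrix.dat.
subprocess.run(
    ["llama-imatrix", "-m", FP16_GGUF, "-f", "wiki.train.raw", "-o", IMATRIX],
    check=True,
)

# Step 2: quantize with the importance matrix (Q4_K_M is just an example target).
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX, FP16_GGUF, OUT_GGUF, "Q4_K_M"],
    check=True,
)
```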

Credits to Alpin and the gang for magnum-72b-v1, and to Ikari for his datasets.

Prompt template: ChatML

<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
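For reference, here is a small helper that assembles a prompt in this ChatML layout. This is only an illustrative sketch; the function name and argument shape are mine, not part of the model card.

```python
# Illustrative helper that builds a ChatML prompt matching the template above.
# The function name and arguments are made up for this example.
def build_chatml_prompt(system_prompt: str, user_prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful roleplay assistant.",
    "Describe the tavern the party just entered.",
)
print(prompt)
```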
Model: Undi95/MG-FinalMix-72B-GGUF
Format: GGUF
Model size: 72.7B params
Architecture: qwen2
Available quants: 3-bit and 4-bit
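If you run these GGUF files through llama-cpp-python, something along these lines should work. The filename, context size, and GPU offload values are placeholders to tune for your own setup, not recommendations from the model author.

```python
# Sketch of loading a quantized GGUF from this repo with llama-cpp-python.
# The exact filename, n_ctx, and n_gpu_layers values are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="MG-FinalMix-72B-Q4_K_M.gguf",  # whichever quant you downloaded
    n_ctx=8192,
    n_gpu_layers=-1,       # offload as many layers as fit; tune for your hardware
    chat_format="chatml",  # matches the prompt template above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful roleplay assistant."},
        {"role": "user", "content": "Introduce yourself in character."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```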