
Magnum-v1-72b-Qwen2.5

A merge of the OG Qwen2-based anthracite-org/magnum-v1-72b with the new Qwen/Qwen2.5-72B-Instruct.

Model Details

Process

  1. A LoRA was extracted by diffing anthracite-org/magnum-v1-72b against its original base, Qwen/Qwen2-72B-Instruct.
  2. This LoRA was then applied to Qwen/Qwen2.5-72B-Instruct.
  3. The resulting model was merged to create this standalone version (a sketch of steps 2 and 3 follows below).
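
For anyone wanting to reproduce a similar merge, here is a minimal sketch of steps 2 and 3 using PEFT. Step 1 (the LoRA extraction itself) is assumed to have been done separately, e.g. with mergekit's LoRA-extraction tooling, and the paths, dtype, and device placement below are illustrative assumptions rather than the exact settings used for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_id = "Qwen/Qwen2.5-72B-Instruct"
lora_path = "./magnum-v1-lora"        # hypothetical: output of the LoRA extraction step
out_dir = "./Magnum-v1-72b-Qwen2.5"   # hypothetical save location

# Step 2: apply the extracted LoRA adapter to the new Qwen2.5 base.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="cpu")
model = PeftModel.from_pretrained(base, lora_path)

# Step 3: fold the adapter weights into the base to get a standalone model.
merged = model.merge_and_unload()
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
```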

Prompt Template

"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""

Results

It seems to have worked in my testing. It's just as "creative" as the OG magnum-v1, and it seems to have retained the improvements of Qwen2.5 (e.g. it can zero-shot code a snake game in Python and is aware of world events which happened after Qwen2 was released).

Model tree for gghfez/Magnum-v1-72b-Qwen2.5-exl2-4.5bpw

Base model: Qwen/Qwen2.5-72B (this repo is an exl2 4.5bpw quantization of the merge described above)