This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the linear merge method, with Qwen/Qwen2-VL-2B-Instruct as the base model.
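Conceptually, a linear merge takes a weighted average of the corresponding weight tensors across all listed checkpoints (here with a uniform weight of 0.5, normalized so the weights sum to 1). The sketch below illustrates only that core averaging step; it is not mergekit's actual implementation, which also handles tokenizer alignment, parameter masking, and per-tensor options.

```python
# Illustrative sketch of a linear (weighted-average) merge, not mergekit itself.
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching tensors across several model state dicts."""
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize: true
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].to(torch.float32) for w, sd in zip(weights, state_dicts)
        ).to(torch.bfloat16)  # dtype: bfloat16
    return merged
```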
### Models Merged
The following models were included in the merge:
- prithivMLmods/JSONify-Flux
- prithivMLmods/ChemQwen2-vL
- prithivMLmods/Qwen2-VL-OCR-2B-Instruct
- prithivMLmods/Qwen2-VL-OCR2-2B-Instruct
- prithivMLmods/LatexMind-2B-Codec
- prithivMLmods/Radiology-Infer-Mini
- prithivMLmods/QvQ-Step-Tiny
- prithivMLmods/Caption-Pro
- prithivMLmods/Blazer.1-2B-Vision
- prithivMLmods/Omni-Reasoner-2B
- Qwen/Qwen2-VL-2B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: prithivMLmods/Blazer.1-2B-Vision
  - model: prithivMLmods/Caption-Pro
  - model: prithivMLmods/ChemQwen2-vL
  - model: prithivMLmods/JSONify-Flux
  - model: prithivMLmods/LatexMind-2B-Codec
  - model: prithivMLmods/Omni-Reasoner-2B
  - model: prithivMLmods/QvQ-Step-Tiny
  - model: prithivMLmods/Qwen2-VL-OCR2-2B-Instruct
  - model: prithivMLmods/Qwen2-VL-OCR-2B-Instruct
  - model: prithivMLmods/Radiology-Infer-Mini
  - model: Qwen/Qwen2-VL-2B-Instruct
  - model: Qwen/Qwen2-VL-2B
merge_method: linear
base_model: Qwen/Qwen2-VL-2B-Instruct
parameters:
  weight: 0.5
  normalize: true
  int8_mask: true
dtype: bfloat16
```
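The merge can be reproduced by running mergekit (e.g., its `mergekit-yaml` entry point) on the configuration above. For inference, the following is a minimal, non-authoritative sketch of loading the merged checkpoint with transformers; it assumes a transformers version with Qwen2-VL support plus the `qwen_vl_utils` helper package, and the image URL is a placeholder.

```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_id = "Lunzima/NQLSG-Qwen2-VL-2B-v2-Base"

# bfloat16 matches the dtype used in the merge configuration above.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# A single text+image turn in the Qwen2-VL chat format (placeholder image URL).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/sample.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0])
```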