r3b31/Qwen2-VL-7B-Captioner-Relaxed-GGUF
Tested with llama.cpp and KoboldCpp.
The mmproj file was taken from:
https://huggingface.co/bartowski/Qwen2-VL-7B-Instruct-GGUF/
This model was converted to GGUF format from Ertugrul/Qwen2-VL-7B-Captioner-Relaxed
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
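As a sketch of how a GGUF model plus mmproj pair like this is typically loaded with llama.cpp (the file names below are placeholders, and the binary name varies by version: recent builds ship `llama-mtmd-cli`, while older ones used `llama-qwen2vl-cli`):

```shell
# Placeholder file names -- substitute the quant and mmproj files you downloaded.
# Recent llama.cpp builds use llama-mtmd-cli; older ones shipped llama-qwen2vl-cli.
./llama-mtmd-cli \
  -m Qwen2-VL-7B-Captioner-Relaxed-Q4_K_M.gguf \
  --mmproj mmproj-Qwen2-VL-7B-Instruct-f16.gguf \
  --image input.jpg \
  -p "Describe this image in detail."
```

In KoboldCpp the same pair is loaded by selecting the model GGUF and supplying the mmproj file in the loader settings.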
Hardware compatibility
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit
Inference Providers
This model isn't deployed by any Inference Provider.
Model tree for r3b31/Qwen2-VL-7B-Captioner-Relaxed-GGUF
- Base model: Qwen/Qwen2-VL-7B
- Finetuned: Qwen/Qwen2-VL-7B-Instruct
- Finetuned: Ertugrul/Qwen2-VL-7B-Captioner-Relaxed