This model was created by quantizing google/gemma-3-12b-it-qat-q4_0-unquantized with an imatrix (importance matrix) that contains a large amount of Japanese text (see https://huggingface.co/dahara1/imatrix-jpn-test).
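For reference, an imatrix quantization of this kind is normally produced with llama.cpp's llama-imatrix and llama-quantize tools. The sketch below is not the exact command line used for this repository; the calibration and output file names are hypothetical, and Q4_K_M is only an example quantization type.

# Compute an importance matrix from Japanese-heavy calibration text (hypothetical file names)
llama-imatrix -m gemma-3-12b-it-qat-q4_0-unquantized.gguf -f calibration_ja.txt -o imatrix.dat
# Quantize the model using that importance matrix
llama-quantize --imatrix imatrix.dat gemma-3-12b-it-qat-q4_0-unquantized.gguf gemma-3-12b-it-japanese-imatrix-Q4_K_M.gguf Q4_K_M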

Please use the latest llama.cpp to run this model.
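If you are building llama.cpp from source, a minimal build sketch (assuming git, CMake, and a C++ compiler are installed) looks like this; add backend flags such as -DGGML_CUDA=ON at the configure step if you want GPU support.

# Clone and build the latest llama.cpp (CPU-only build shown)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release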

You can load images by using the llama-mtmd-cli command together with the mmproj.gguf file.

llama-mtmd-cli -m gemma-3-12b-it-qat-q4_0-japanese-imatrix-Q4_K_L.gguf --mmproj mmproj.gguf --image ./test.png -p "この画像はなんですか?(What is this image?)"
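For text-only use, llama-cli from the same llama.cpp build can load the model directly. A minimal sketch follows; the model file name is assumed to match the file you downloaded, and -n simply caps the number of generated tokens.

# Text-only generation with the quantized model (file name assumed; add -ngl to offload layers to the GPU if available)
llama-cli -m gemma-3-12b-it-qat-q4_0-japanese-imatrix-Q4_K_L.gguf -p "日本の首都はどこですか? (What is the capital of Japan?)" -n 256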