Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
This is a custom 4-bit imatrix quant made to run optimally on a MacBook with 8 GB of RAM.
For use with llama.cpp: https://github.com/ggerganov/llama.cpp
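A minimal quick-start sketch, assuming you build llama.cpp from source. The GGUF filename below is a placeholder (use the actual file from this repo), and depending on your llama.cpp version the binary may be `./main` or `llama-cli`:

```bash
# Clone and build llama.cpp (Metal acceleration is on by default on macOS)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run the quant; the .gguf filename is a placeholder -- substitute the file
# from this repo. -c sets the context size, -n the number of tokens to generate.
./main -m mistral-instruct0.2-imatrix4bit.gguf \
  -p "[INST] Write a haiku about quantization. [/INST]" \
  -c 4096 -n 256 --temp 0.7
```

The `[INST] ... [/INST]` wrapper is the Mistral-Instruct chat template; at 4 bits the 7B weights come to roughly 4 GB, which is what lets the model fit alongside the OS in 8 GB of RAM.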