This model is a 4-bit quantized version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, converted to the OpenVINO format. It was produced with the nncf-quantization space using optimum-intel.
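For reference, a similar 4-bit export can be reproduced locally with optimum-intel's weight-only quantization. The exact NNCF settings used by the space are not recorded here, so the sketch below assumes the default int4 parameters:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# 4-bit weight-only quantization; group size and ratio are left at their
# defaults, which may differ from what the nncf-quantization space used.
quantization_config = OVWeightQuantizationConfig(bits=4)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
model = OVModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("TinyLlama-1.1B-Chat-v1.0-openvino-4bit")
```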

First make sure you have optimum-intel with OpenVINO support installed:

```bash
pip install "optimum[openvino]"
```

You can then load the model with `OVModelForCausalLM`:

```python
from optimum.intel import OVModelForCausalLM

model_id = "bal723/TinyLlama-1.1B-Chat-v1.0-openvino-4bit"
model = OVModelForCausalLM.from_pretrained(model_id)
```
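A minimal generation sketch follows. It assumes the tokenizer files ship with this repository (otherwise load them from the base TinyLlama checkpoint); the prompt and `max_new_tokens` value are illustrative:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "bal723/TinyLlama-1.1B-Chat-v1.0-openvino-4bit"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# TinyLlama-Chat was trained with a chat template, so format the prompt with it.
messages = [{"role": "user", "content": "What is OpenVINO?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```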