---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- mlx
license: llama2
library_name: mlx
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2
Unlike mlx-community/CodeLlama-7b-Instruct-hf-4bit-MLX, this model was converted from the base model with a newer version of MLX-LM, which uses an improved quantization scheme and writes the model in a more standard format. For context, see Issue #130 and PR #114 in the MLX-LM repo.
This model [mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2](https://huggingface.co/mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2) was converted to MLX format from [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) using mlx-lm version **0.23.2**.
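A conversion like this one can be reproduced with the `mlx_lm.convert` script that ships with mlx-lm. The invocation below is a sketch, not the exact command used for this repo; with `-q`, mlx-lm quantizes to 4 bits by default and writes the result to a local `mlx_model` directory:

```bash
# Download the base model, quantize it to 4-bit, and save the MLX weights locally
mlx_lm.convert --hf-path codellama/CodeLlama-7b-Instruct-hf -q
```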
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
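The model can also be run directly from the terminal with the `mlx_lm.generate` script installed alongside the library; the prompt below is only illustrative:

```bash
# One-off generation without writing any Python
mlx_lm.generate --model mlx-community/CodeLlama-7b-Instruct-hf-4bit-mlx-2 \
  --prompt "Write a Python function that reverses a string"
```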