---
license: apache-2.0
datasets:
  - leonvanbokhorst/tame-the-weights-personas
language:
  - en
base_model:
  - microsoft/Phi-4-mini-instruct
library_name: peft
---

# LoRA Adapter: zen_coder

This repository contains a LoRA (Low-Rank Adaptation) adapter for the base model [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct).

This adapter fine-tunes the base model to adopt the `zen_coder` persona.

The adapter weights and configuration are included in this repository.

## Training Data

This adapter was fine-tuned on the `zen_coder` subset of the [leonvanbokhorst/tame-the-weights-personas](https://huggingface.co/datasets/leonvanbokhorst/tame-the-weights-personas) dataset.
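
If you want to inspect the training data, you can load it with the `datasets` library. This is a minimal sketch; treating `zen_coder` as a named configuration of the dataset is an assumption here, so check the dataset card for the actual layout:

```python
from datasets import load_dataset

# "zen_coder" is assumed to be a named configuration/subset of the dataset;
# adjust if the dataset card shows a different layout (e.g. a persona column).
dataset = load_dataset("leonvanbokhorst/tame-the-weights-personas", "zen_coder")
print(dataset)
```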

## Usage (Example with PEFT)

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "microsoft/Phi-4-mini-instruct"
adapter_repo_id = "leonvanbokhorst/microsoft-Phi-4-mini-instruct-zen_coder-adapter"

# Load the base model and tokenizer
model = AutoModelForCausalLM.from_pretrained(base_model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Apply the LoRA adapter to the base model
model = PeftModel.from_pretrained(model, adapter_repo_id)

# Now you can use the model for inference with the persona applied
input_text = "Explain the concept of technical debt."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
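
Since `Phi-4-mini-instruct` is an instruct-tuned chat model, you will generally get better results by formatting prompts with the tokenizer's chat template. You can also merge the adapter into the base model to remove the PEFT indirection at inference time. A minimal sketch, continuing from the snippet above:

```python
# Format the prompt with the chat template (recommended for instruct models)
messages = [{"role": "user", "content": "Explain the concept of technical debt."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Optionally merge the LoRA weights into the base model for faster inference;
# the merged model no longer needs the peft library at load time.
merged_model = model.merge_and_unload()
```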