---
license: mit
tags:
  - tinyllama
  - sciq
  - multiple-choice
  - peft
  - lora
  - 4bit
  - quantization
  - instruction-tuning
datasets:
  - allenai/sciq
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# 🧠 TinyLLaMA-1.1B LoRA Fine-tuned on the SciQ Dataset

This is a TinyLLaMA-1.1B model fine-tuned with LoRA (Low-Rank Adaptation) on the SciQ multiple-choice question-answering dataset. The model uses 4-bit quantization via bitsandbytes to reduce memory usage and improve inference efficiency.

## 🧪 Use Cases

This model is suitable for:

- Educational QA bots
- MCQ-style reasoning (a prompt-formatting sketch follows this list)
- Lightweight inference on constrained hardware (e.g., GPUs with <8GB VRAM)
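
For MCQ-style use, inputs need to follow the A/B/C/D prompt layout shown elsewhere on this card. The exact template used during fine-tuning is not published, so the helper below is an assumption, not the training format:

```python
def format_mcq_prompt(question: str, choices: list[str]) -> str:
    """Render a question and four options in the A/B/C/D style used on
    this card. Assumed template; the actual training prompt may differ."""
    lines = [f"Question: {question}", "Choices:"]
    lines += [f"{letter}. {choice}" for letter, choice in zip("ABCD", choices)]
    lines.append("Answer:")
    return "\n".join(lines)
```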

## 🛠️ Training Details

- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Dataset: allenai/sciq (Science QA)
- Method: Parameter-Efficient Fine-Tuning (PEFT) with LoRA
- Quantization: 4-bit via bitsandbytes (see the loading sketch after this list)
- Framework: 🤗 Transformers + PEFT + Datasets
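
The training script itself is not published. A minimal sketch of loading the base model in 4-bit with bitsandbytes, as described above; the specific quantization settings (nf4, double quantization, fp16 compute) are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization config; nf4/double-quant/fp16 compute are assumed settings.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```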

## 🧬 Model Architecture

- Model: causal language model
- Fine-tuned layers: q_proj, v_proj (via LoRA; see the adapter config sketch below)
- Quantization: 4-bit (bitsandbytes config)
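
A hedged sketch of the adapter configuration this list implies: the target modules (q_proj, v_proj) come from the card, while the rank, alpha, and dropout values are placeholders, not the values actually used in training.

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                   # rank: placeholder, not the trained value
    lora_alpha=16,                         # placeholder
    lora_dropout=0.05,                     # placeholder
    target_modules=["q_proj", "v_proj"],   # from this card
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)  # `base` from the 4-bit loading sketch above
model.print_trainable_parameters()
```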

## 📊 Evaluation

- Accuracy: 100% on a 1,000-sample SciQ subset (see the reproduction sketch below)
- Eval loss: ~0.19
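
The card does not specify the evaluation protocol. One minimal way to score a 1,000-example SciQ subset, assuming `model` and `tokenizer` are loaded as in the How to Use section below and reusing the `format_mcq_prompt` helper sketched under Use Cases:

```python
import random
from datasets import load_dataset

rng = random.Random(0)
ds = load_dataset("allenai/sciq", split="validation").select(range(1000))

correct = 0
for ex in ds:
    choices = [ex["distractor1"], ex["distractor2"], ex["distractor3"], ex["correct_answer"]]
    rng.shuffle(choices)  # avoid always placing the gold answer in the same slot
    gold = "ABCD"[choices.index(ex["correct_answer"])]
    prompt = format_mcq_prompt(ex["question"], choices)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=3)
    # Keep only the newly generated tokens and compare the first letter.
    pred = tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
    correct += pred.strip()[:1] == gold

print(f"accuracy: {correct / len(ds):.2%}")
```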

## 💡 How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("TechyCode/tinyllama-sciq-lora")
tokenizer = AutoTokenizer.from_pretrained("TechyCode/tinyllama-sciq-lora")

prompt = (
    "Question: What is the boiling point of water?\n"
    "Choices:\nA. 50°C\nB. 75°C\nC. 90°C\nD. 100°C\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
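
If the repository hosts only the LoRA adapter rather than merged weights, loading it directly with `AutoModelForCausalLM` may fail; attaching the adapter to the base model with PEFT is the usual alternative. A sketch:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "TechyCode/tinyllama-sciq-lora")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
```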

## 🔒 License

This model is released under the MIT License.

## 🙌 Credits

- Fine-tuned by: Uditanshu Pandey
- LinkedIn: UditanshuPandey
- GitHub: UditanshuPandey
- Based on: TinyLlama/TinyLlama-1.1B-Chat-v1.0