
🧠 Qwen-0.6B – Code Generation Model

Model Repo: XformAI-india/qwen-0.6b-coder
Base Model: Qwen/Qwen3-0.6B
Task: Code generation and completion
Trained by: XformAI
Date: May 2025


🔍 What is this?

This is a fine-tuned version of Qwen-0.6B optimized for code generation, completion, and programming logic reasoning.

It's designed to be lightweight, fast, and capable of handling common developer tasks across multiple programming languages.


💻 Use Cases

  • AI-powered code assistants
  • Auto-completion for IDEs
  • Offline code generation
  • Learning & training environments
  • Natural language → code prompts
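For the last use case, a natural-language task description is typically wrapped in a short instruction template before being sent to the model. A minimal sketch (the helper name and template are our own illustration, not part of the model repo):

```python
def build_code_prompt(task: str, language: str = "Python") -> str:
    """Wrap a natural-language task in a simple code-generation prompt.

    This template is illustrative; any instruction format the model
    was trained on would work the same way.
    """
    return f"# Task: {task}\n# Language: {language}\n# Code:\n"

prompt = build_code_prompt("check if a number is prime")
```

The resulting string is what you would pass to the tokenizer in the usage example below.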

📚 Training Details

Parameter        Value
Epochs           3
Batch Size       16
Optimizer        AdamW
Precision        bfloat16
Context Window   2048 tokens
Framework        🤗 Transformers + LoRA (PEFT)
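The table lists LoRA (via PEFT) as the fine-tuning method. Conceptually, LoRA freezes the pretrained weight matrix W and learns a low-rank update B·A scaled by alpha/r. A pure-Python sketch of the forward pass, with illustrative sizes and rank (the real run used the 🤗 PEFT library, not hand-rolled code):

```python
# Minimal LoRA forward pass: y = W x + (alpha / r) * B (A x)

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha, r):
    base = matvec(W, x)               # frozen pretrained path
    update = matvec(B, matvec(A, x))  # trainable low-rank adapter path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Tiny example: d_out = d_in = 2, rank r = 1
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen weight (identity, for clarity)
A = [[1.0, 1.0]]               # r x d_in
B = [[0.5], [0.5]]             # d_out x r
y = lora_forward(W, A, B, [2.0, 3.0], alpha=2.0, r=1)
```

Only A and B are updated during training, which is why LoRA fine-tuning fits comfortably in memory even for a 0.6B-parameter base model.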

🚀 Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("XformAI-india/qwen-0.6b-coder")
tokenizer = AutoTokenizer.from_pretrained("XformAI-india/qwen-0.6b-coder")

prompt = "Write a Python function that checks if a number is prime:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
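For reference, a correct completion of the example prompt would look something like the prime check below (written by hand as an illustration, not sampled from the model):

```python
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Only odd divisors up to sqrt(n) need checking.
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True
```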
Model size: 596M params (FP16, safetensors)

Model tree for XformAI-india/qwen-0.6b-coder

Fine-tuned from Qwen/Qwen3-0.6B. Quantizations: 1 model.
