---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- html
- javascript
- css
- tailwindcss
- frontend
- web-development
- ViCoder-html-32B-preview
- ViCoder-html
- ViCoder
- ViCoder-html-preview
- vichar ai labs
- vichar ai
- strive ai labs llp
- strive ai labs
- vichar.io
pipeline_tag: text-generation
---
# ViCoder-html-32B-preview

🚀 A powerful HTML/CSS/JS sketching model powered by Qwen2.5-Coder-32B-Instruct 🚀

Developed by [Vichar AI](https://vichar.io) | [Hugging Face Profile](https://huggingface.co/VicharAI)

Licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
---
### 💡 What is ViCoder-html-32B-preview?
**ViCoder-html-32B-preview** is a preview model in the **ViCoder** series from Vichar AI, a line of models specialized in **code generation**. This model focuses specifically on sketching single-page websites, such as landing pages and dashboards, using:
- 🧱 **HTML** for semantic structure
- 🎨 **Tailwind CSS** for modern, utility-first styling
- ⚙️ **JavaScript** for interactivity and basic dynamic behavior
This model is ideal for:
- **Web Developers:** Quickly scaffolding dashboards or page layouts.
- **Frontend Engineers:** Prototyping UIs and exploring design variations.
- **Designers:** Turning textual mockups into initial code sketches.
- **Educators & Students:** Learning and experimenting with HTML, Tailwind CSS, and JavaScript in a practical context.
> ⚠️ **Note:** This is a **preview** version. It demonstrates core capabilities but is still under active development. A more refined and robust production release is planned. Stay updated via [vichar.io](https://vichar.io) or follow [VicharAI](https://huggingface.co/VicharAI) on Hugging Face!
---
### 🛠️ Model Details
| Property | Value |
| :--------------- | :------------------------------------------------------------------------------------------ |
| **Model Type** | Code Generation (Instruction-tuned Language Model) |
| **Base Model** | [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct) |
| **Developed by** | [Vichar AI](https://vichar.io) ([HF Profile](https://huggingface.co/VicharAI)) |
| **Languages** | Primarily HTML, Tailwind CSS, JavaScript. Understands English instructions. |
| **Training Data**| Proprietary curated dataset focusing on high-quality web components and pages. |
| **License** | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) |
| **Library**      | 🤗 Transformers |
| **Contact** | Visit [vichar.io](https://vichar.io) or use HF Discussions |
---
### 🧱 GGUF Quantized Versions
Quantized versions of **ViCoder-html-32B-preview** in GGUF format are available for efficient local inference using [llama.cpp](https://github.com/ggerganov/llama.cpp), [LM Studio](https://lmstudio.ai/), or [Ollama](https://ollama.com/).
You can find them here:
- 🔗 [GGUF Quantizations on Hugging Face](https://huggingface.co/VicharAI/ViCoder-html-32B-preview-GGUF)
These quantized variants (Q3_K_M, Q4_K_M, Q6_K, Q8_0) are useful for running the model on lower-memory hardware or for embedding in desktop/web applications.
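For a quick local sanity check, a minimal `llama-cpp-python` sketch might look like the one below. The GGUF filename is illustrative, so substitute the quantization file you actually downloaded:

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="ViCoder-html-32B-preview.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=16384,      # generous context window for long HTML outputs
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "A pricing page with three tiers, styled with Tailwind CSS"},
    ],
    max_tokens=4096,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```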
---
### ⚡ Example Usage
Use the `transformers` library for text generation. Ensure you have `transformers`, `torch`, and `accelerate` installed.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch

model_id = "VicharAI/ViCoder-html-32B-preview"

# Load tokenizer and model.
# bfloat16 gives faster inference on GPUs that support it.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16 if bfloat16 is not supported
    device_map="auto",           # automatically distribute across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "A modern, sleek landing page for a company focusing on open-source LLM solutions"},
]

# Build the chat-formatted prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    input_ids=input_ids,
    streamer=streamer,
    max_new_tokens=16000,
    use_cache=True,
    do_sample=True,  # required for temperature/min_p to take effect
    temperature=0.7,
    min_p=0.1,
    repetition_penalty=1.1,
)
```
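To capture the generated page instead of streaming it, a minimal non-streaming variant (reusing `model`, `tokenizer`, and `input_ids` from the snippet above) could be:

```python
# Non-streaming variant: decode only the newly generated tokens and save them
# to a local file for preview in a browser.
output_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=16000,
    do_sample=True,
    temperature=0.7,
    min_p=0.1,
    repetition_penalty=1.1,
)
completion = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:],  # strip the prompt tokens
    skip_special_tokens=True,
)
with open("index.html", "w", encoding="utf-8") as f:
    f.write(completion)
```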
---
### ✨ Output Sample
```html
Our Love Story - Surprise Website