Spaces: Running on Zero

update readme

README.md CHANGED

---
title: 'ZeroGPU-LLM-Inference'
emoji: 🧠
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 5.25.2
app_file: app.py
pinned: false
license: apache-2.0
short_description: Streaming LLM chat with web search and debug
---

This Gradio app provides **token-streaming, chat-style inference** on a wide variety of Transformer models, leveraging ZeroGPU for free GPU acceleration on HF Spaces.
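
On ZeroGPU Spaces, a GPU is attached only while a decorated function runs. As a minimal sketch of that pattern (the function name and model here are placeholders, not the app's actual entry point):

```python
# Minimal ZeroGPU usage pattern; `generate_reply` and the model choice are
# illustrative placeholders, not the real app.py code.
import spaces
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen3-0.6B")

@spaces.GPU  # a ZeroGPU device is attached only while this call runs
def generate_reply(prompt: str) -> str:
    return pipe(prompt, max_new_tokens=64)[0]["generated_text"]
```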

Key features:

- **Real-time DuckDuckGo web search** (background thread, configurable timeout) with results injected into the system prompt; see the sketch after this list.
- **Prompt preview panel** for debugging and prompt-engineering insight: see exactly what is sent to the model.
- **Thought vs. Answer streaming**: any `<think>…</think>` blocks emitted by the model are shown in a separate "💭 Thought" section.
- **Cancel button** to immediately stop generation.
- **Dynamic system prompt**: automatically inserts today's date when you toggle web search.
- **Extensive model selection**: over 30 LLMs, from Phi-4 mini to Qwen3-14B, SmolLM2, Taiwan-ELM, Mistral, Meta-Llama, MiMo, Gemma, DeepSeek-R1, and more.
- **Memory-safe design**: loads one model at a time and clears the GPU cache after each generation.
- **Customizable generation parameters**: max tokens, temperature, top-k, top-p, repetition penalty.
- **Web-search settings**: max results, max chars per result, search timeout.
- **Pinned requirements** for reproducible deployment.
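
The search feature follows a fetch-with-deadline pattern. A minimal sketch, assuming the `duckduckgo_search` package; the helper name and defaults are illustrative, not the app's actual code:

```python
# Hypothetical helper mirroring the "background thread, configurable
# timeout" behavior described above.
import threading
from duckduckgo_search import DDGS

def search_snippets(query: str, max_results=4, max_chars=200, timeout_s=5.0) -> str:
    results: list[str] = []

    def worker():
        try:
            for hit in DDGS().text(query, max_results=max_results):
                results.append(hit.get("body", "")[:max_chars])
        except Exception:
            pass  # treat any search failure as "no snippets"

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)  # wait at most timeout_s; generation proceeds regardless
    return "\n".join(results)  # joined snippets go into the system prompt
```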

## 🔄 Supported Models

Use the dropdown to select any of these:

| Name                             | Repo ID                                  |
| -------------------------------- | ---------------------------------------- |
| Taiwan-ELM-1_1B-Instruct         | liswei/Taiwan-ELM-1_1B-Instruct          |
| Taiwan-ELM-270M-Instruct         | liswei/Taiwan-ELM-270M-Instruct          |
| Qwen3-0.6B                       | Qwen/Qwen3-0.6B                          |
| Qwen3-1.7B                       | Qwen/Qwen3-1.7B                          |
| Qwen3-4B                         | Qwen/Qwen3-4B                            |
| Qwen3-8B                         | Qwen/Qwen3-8B                            |
| Qwen3-14B                        | Qwen/Qwen3-14B                           |
| Gemma-3-4B-IT                    | unsloth/gemma-3-4b-it                    |
| SmolLM2-135M-Instruct-TaiwanChat | Luigi/SmolLM2-135M-Instruct-TaiwanChat   |
| SmolLM2-135M-Instruct            | HuggingFaceTB/SmolLM2-135M-Instruct      |
| SmolLM2-360M-Instruct-TaiwanChat | Luigi/SmolLM2-360M-Instruct-TaiwanChat   |
| Llama-3.2-Taiwan-3B-Instruct     | lianghsun/Llama-3.2-Taiwan-3B-Instruct   |
| MiniCPM3-4B                      | openbmb/MiniCPM3-4B                      |
| Qwen2.5-3B-Instruct              | Qwen/Qwen2.5-3B-Instruct                 |
| Qwen2.5-7B-Instruct              | Qwen/Qwen2.5-7B-Instruct                 |
| Phi-4-mini-Reasoning             | microsoft/Phi-4-mini-reasoning           |
| Phi-4-mini-Instruct              | microsoft/Phi-4-mini-instruct            |
| Meta-Llama-3.1-8B-Instruct       | MaziyarPanahi/Meta-Llama-3.1-8B-Instruct |
| DeepSeek-R1-Distill-Llama-8B     | unsloth/DeepSeek-R1-Distill-Llama-8B     |
| Mistral-7B-Instruct-v0.3         | MaziyarPanahi/Mistral-7B-Instruct-v0.3   |
| Qwen2.5-Coder-7B-Instruct        | Qwen/Qwen2.5-Coder-7B-Instruct           |
| Qwen2.5-Omni-3B                  | Qwen/Qwen2.5-Omni-3B                     |
| MiMo-7B-RL                       | XiaomiMiMo/MiMo-7B-RL                    |

*(…and more can easily be added in `MODELS` in `app.py`.)*
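
The README doesn't spell out the `MODELS` structure, but a plausible minimal shape is a mapping from dropdown label to Hub repo ID. The entries below are copied from the table, and `My-New-Model` is a hypothetical addition:

```python
# Hypothetical sketch of the MODELS registry in app.py; the real structure
# may hold extra per-model settings.
MODELS = {
    "Qwen3-4B": "Qwen/Qwen3-4B",
    "MiMo-7B-RL": "XiaomiMiMo/MiMo-7B-RL",
    # ...
    # Registering another model is one more label -> repo-ID entry:
    "My-New-Model": "my-org/my-new-model",  # hypothetical
}
```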

## ⚙️ Generation & Search Parameters

- **Max Tokens**: 64–16384
- **Temperature**: 0.1–2.0
- **Top-K**: 1–100
- **Top-P**: 0.1–1.0
- **Repetition Penalty**: 1.0–2.0

- **Enable Web Search**: on/off
- **Max Results**: integer
- **Max Chars/Result**: integer
- **Search Timeout (s)**: 0.0–30.0
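
As a rough illustration (not the app's actual code) of how the generation sliders map onto `transformers` sampling arguments:

```python
# Each kwarg corresponds to one slider above; the values are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")

inputs = tok("Hello!", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=1024,     # Max Tokens (64-16384)
    do_sample=True,
    temperature=0.7,         # Temperature (0.1-2.0)
    top_k=40,                # Top-K (1-100)
    top_p=0.9,               # Top-P (0.1-1.0)
    repetition_penalty=1.2,  # Repetition Penalty (1.0-2.0)
)
print(tok.decode(out[0], skip_special_tokens=True))
```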

## 🚀 How It Works

1. **User message** enters the chat history.
2. If search is enabled, a background DuckDuckGo thread fetches snippets.
3. After up to *Search Timeout* seconds, any fetched snippets are merged into the system prompt.
4. The selected model pipeline is loaded (bf16 → f16 → f32 fallback) on ZeroGPU; see the condensed sketch below.
5. The prompt is formatted; any `<think>…</think>` blocks are streamed separately as "💭 Thought".
6. Tokens stream to the Chatbot UI. Press **Cancel** to stop mid-generation.
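
A condensed, hypothetical sketch of steps 4–6 (model choice and prompt are placeholders; the real `app.py` differs in detail):

```python
# Steps 4-6 in miniature: dtype-fallback loading, threaded generation,
# and routing <think>...</think> text to a separate "thought" buffer.
import threading
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

repo_id = "Qwen/Qwen3-0.6B"  # stand-in for the dropdown selection

# Step 4: dtype fallback; the first precision that loads wins.
model = None
for dtype in (torch.bfloat16, torch.float16, torch.float32):
    try:
        model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=dtype)
        break
    except Exception:
        continue
tok = AutoTokenizer.from_pretrained(repo_id)

# Steps 5-6: generate in a thread and consume decoded chunks as they arrive.
streamer = TextIteratorStreamer(tok, skip_prompt=True)
inputs = tok("Why is the sky blue?", return_tensors="pt")
threading.Thread(
    target=model.generate,
    kwargs=dict(**inputs, max_new_tokens=256, streamer=streamer),
    daemon=True,
).start()

thought, answer, in_think = [], [], False
for piece in streamer:
    # Crude <think> routing; real code must also handle tags that are
    # split across chunks.
    if "<think>" in piece:
        in_think = True
        piece = piece.split("<think>", 1)[1]
    if "</think>" in piece:
        done, piece = piece.split("</think>", 1)
        thought.append(done)
        in_think = False
    (thought if in_think else answer).append(piece)

print("💭 Thought:", "".join(thought))
print("Answer:", "".join(answer))
```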