Luigi committed on
Commit 076c1f2 · 1 Parent(s): 6a4537b

update readme

Files changed (1)
  1. README.md +69 -31
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- title: ' ZeroGPU-LLM-Inference'
  emoji: 🧠
  colorFrom: pink
  colorTo: purple
@@ -8,35 +8,73 @@ sdk_version: 5.25.2
  app_file: app.py
  pinned: false
  license: apache-2.0
- short_description: Chat inference for GGUF models with llama.cpp & Gradio
  ---

- This Gradio app enables **chat-based inference** on various GGUF models using `llama.cpp` and `llama-cpp-python`. The application features:
-
- - **Real-Time Web Search Integration:** Uses DuckDuckGo to retrieve up-to-date context; debug output is displayed in real time.
- - **Streaming Token-by-Token Responses:** Users see the generated answer as it comes in.
- - **Response Cancellation:** A cancel button allows stopping response generation in progress.
- - **Customizable Prompts & Generation Parameters:** Adjust the system prompt (with dynamic date insertion), temperature, token limits, and more.
- - **Memory-Safe Design:** Loads one model at a time with proper memory management, ideal for deployment on Hugging Face Spaces.
- - **Rate Limit Handling:** Implements exponential backoff to cope with DuckDuckGo API rate limits.
-
- ### 🔄 Supported Models:
- - `Qwen/Qwen2.5-7B-Instruct-GGUF` → `qwen2.5-7b-instruct-q2_k.gguf`
- - `unsloth/gemma-3-4b-it-GGUF` → `gemma-3-4b-it-Q4_K_M.gguf`
- - `unsloth/Phi-4-mini-instruct-GGUF` → `Phi-4-mini-instruct-Q4_K_M.gguf`
- - `MaziyarPanahi/Meta-Llama-3.1-8B-Instruct-GGUF` → `Meta-Llama-3.1-8B-Instruct.Q2_K.gguf`
- - `unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF` → `DeepSeek-R1-Distill-Llama-8B-Q2_K.gguf`
- - `MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF` → `Mistral-7B-Instruct-v0.3.IQ3_XS.gguf`
- - `Qwen/Qwen2.5-Coder-7B-Instruct-GGUF` → `qwen2.5-coder-7b-instruct-q2_k.gguf`
-
- ### ⚙️ Features:
- - **Model Selection:** Select from multiple GGUF models.
- - **Customizable Prompts & Parameters:** Set a system prompt (e.g., automatically including today’s date), adjust temperature, token limits, and more.
- - **Chat-style Interface:** Interactive Gradio UI with streaming token-by-token responses.
- - **Real-Time Web Search & Debug Output:** Leverages DuckDuckGo to fetch recent context, with a dedicated debug panel showing web search progress and results.
- - **Response Cancellation:** Cancel in-progress answer generation using a cancel button.
- - **Memory-Safe & Rate-Limit Resilient:** Loads one model at a time with proper cleanup and incorporates exponential backoff to handle API rate limits.
-
- Ideal for deploying multiple GGUF chat models on Hugging Face Spaces with a robust, user-friendly interface!
-
- For further details, check the [Spaces configuration guide](https://huggingface.co/docs/hub/spaces-config-reference).

  ---
+ title: 'ZeroGPU-LLM-Inference'
  emoji: 🧠
  colorFrom: pink
  colorTo: purple

  app_file: app.py
  pinned: false
  license: apache-2.0
+ short_description: Streaming LLM chat with web search and debug
  ---

+ This Gradio app provides **token-streaming, chat-style inference** on a wide variety of Transformer models, leveraging ZeroGPU for free GPU acceleration on HF Spaces.
+
+ Key features:
+ - **Real-time DuckDuckGo web search** (background thread, configurable timeout) with results injected into the system prompt.
+ - **Prompt preview panel** for debugging and prompt-engineering insight: see exactly what is sent to the model.
+ - **Thought vs. Answer streaming**: any `<think>…</think>` blocks emitted by the model are shown as a separate “💭 Thought” (see the parsing sketch after this list).
+ - **Cancel button** to immediately stop generation.
+ - **Dynamic system prompt**: automatically inserts today’s date when you toggle web search.
+ - **Extensive model selection**: over 30 LLMs, from Phi-4 mini to Qwen3-14B, SmolLM2, Taiwan-ELM, Mistral, Meta-Llama, MiMo, Gemma, DeepSeek-R1, and more.
+ - **Memory-safe design**: loads one model at a time and clears the cache after each generation.
+ - **Customizable generation parameters**: max tokens, temperature, top-k, top-p, repetition penalty.
+ - **Web-search settings**: max results, max chars per result, search timeout.
+ - **Pinned requirements** for reproducible deployment.
+
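The “Thought vs. Answer” split mentioned above can be illustrated with a minimal, self-contained sketch (an illustration, not the Space's actual `app.py` code): it routes text inside `<think>…</think>` to a separate thought stream and everything else to the answer stream.

```python
from typing import Iterable, Iterator, Tuple

def split_thought_and_answer(chunks: Iterable[str]) -> Iterator[Tuple[str, str]]:
    """Yield ("thought" | "answer", text) pairs from a stream of text chunks.

    Text inside <think>...</think> is labelled "thought"; everything else is
    labelled "answer". For simplicity this sketch only emits text once a full
    tag has been seen (or the stream ends), so output is slightly delayed.
    """
    buffer = ""
    in_think = False
    for chunk in chunks:
        buffer += chunk
        while True:
            tag = "</think>" if in_think else "<think>"
            idx = buffer.find(tag)
            if idx == -1:
                break
            if buffer[:idx]:
                yield ("thought" if in_think else "answer", buffer[:idx])
            buffer = buffer[idx + len(tag):]
            in_think = not in_think
    if buffer:
        yield ("thought" if in_think else "answer", buffer)

# Example with a fake token stream:
stream = ["Sure. ", "<think>recall the ", "capital of France</think>", "It is Paris."]
for kind, text in split_thought_and_answer(stream):
    print(kind, "->", repr(text))
```

In the app, the “thought” segments would be rendered as “💭 Thought” messages and the “answer” segments as the normal chat reply.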
+ ## 🔄 Supported Models
+
+ Use the dropdown to select any of these:
+
+ | Name                             | Repo ID                                  |
+ | -------------------------------- | ---------------------------------------- |
+ | Taiwan-ELM-1_1B-Instruct         | liswei/Taiwan-ELM-1_1B-Instruct          |
+ | Taiwan-ELM-270M-Instruct         | liswei/Taiwan-ELM-270M-Instruct          |
+ | Qwen3-0.6B                       | Qwen/Qwen3-0.6B                          |
+ | Qwen3-1.7B                       | Qwen/Qwen3-1.7B                          |
+ | Qwen3-4B                         | Qwen/Qwen3-4B                            |
+ | Qwen3-8B                         | Qwen/Qwen3-8B                            |
+ | Qwen3-14B                        | Qwen/Qwen3-14B                           |
+ | Gemma-3-4B-IT                    | unsloth/gemma-3-4b-it                    |
+ | SmolLM2-135M-Instruct-TaiwanChat | Luigi/SmolLM2-135M-Instruct-TaiwanChat   |
+ | SmolLM2-135M-Instruct            | HuggingFaceTB/SmolLM2-135M-Instruct      |
+ | SmolLM2-360M-Instruct-TaiwanChat | Luigi/SmolLM2-360M-Instruct-TaiwanChat   |
+ | Llama-3.2-Taiwan-3B-Instruct     | lianghsun/Llama-3.2-Taiwan-3B-Instruct   |
+ | MiniCPM3-4B                      | openbmb/MiniCPM3-4B                      |
+ | Qwen2.5-3B-Instruct              | Qwen/Qwen2.5-3B-Instruct                 |
+ | Qwen2.5-7B-Instruct              | Qwen/Qwen2.5-7B-Instruct                 |
+ | Phi-4-mini-Reasoning             | microsoft/Phi-4-mini-reasoning           |
+ | Phi-4-mini-Instruct              | microsoft/Phi-4-mini-instruct            |
+ | Meta-Llama-3.1-8B-Instruct       | MaziyarPanahi/Meta-Llama-3.1-8B-Instruct |
+ | DeepSeek-R1-Distill-Llama-8B     | unsloth/DeepSeek-R1-Distill-Llama-8B     |
+ | Mistral-7B-Instruct-v0.3         | MaziyarPanahi/Mistral-7B-Instruct-v0.3   |
+ | Qwen2.5-Coder-7B-Instruct        | Qwen/Qwen2.5-Coder-7B-Instruct           |
+ | Qwen2.5-Omni-3B                  | Qwen/Qwen2.5-Omni-3B                     |
+ | MiMo-7B-RL                       | XiaomiMiMo/MiMo-7B-RL                    |
+
+ *(…and more can easily be added in `MODELS` in `app.py`.)*
+
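The note above refers to adding entries to `MODELS` in `app.py`. The exact shape of that registry is not shown in this README, so the following is a hypothetical sketch (the field names `repo_id` and `description`, and the extra model chosen, are assumptions for illustration):

```python
# Hypothetical registry shaped like the MODELS mapping mentioned above;
# the real structure in app.py may differ (e.g. it may store extra metadata).
MODELS = {
    "Qwen3-4B": {
        "repo_id": "Qwen/Qwen3-4B",
        "description": "Qwen3 4B instruction-tuned model",
    },
    # Adding another model is just another entry: a display name plus a Hub repo ID.
    "Qwen2.5-1.5B-Instruct": {
        "repo_id": "Qwen/Qwen2.5-1.5B-Instruct",
        "description": "Small Qwen2.5 chat model",
    },
}

# The dropdown in the UI can then be populated from the registry keys.
model_choices = list(MODELS.keys())
print(model_choices)
```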
+ ## ⚙️ Generation & Search Parameters
+
+ Generation:
+
+ - **Max Tokens**: 64–16384
+ - **Temperature**: 0.1–2.0
+ - **Top-K**: 1–100
+ - **Top-P**: 0.1–1.0
+ - **Repetition Penalty**: 1.0–2.0
+
+ Web search:
+
+ - **Enable Web Search**: on/off
+ - **Max Results**: integer
+ - **Max Chars/Result**: integer
+ - **Search Timeout (s)**: 0.0–30.0
+
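The generation sliders above correspond to standard `transformers` sampling arguments. A minimal sketch of how such values could be passed to `generate` (the model ID, prompt, and concrete values are placeholders, not taken from `app.py`):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # placeholder: any model from the table above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Explain ZeroGPU in one sentence.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_new_tokens=256,      # "Max Tokens"
    temperature=0.7,         # "Temperature"
    top_k=40,                # "Top-K"
    top_p=0.9,               # "Top-P"
    repetition_penalty=1.2,  # "Repetition Penalty"
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```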
+ ## 🚀 How It Works
+
+ 1. The **user message** enters the chat history.
+ 2. If web search is enabled, a background DuckDuckGo thread fetches snippets.
+ 3. After up to *Search Timeout* seconds, the snippets are merged into the system prompt.
+ 4. The selected model pipeline is loaded on ZeroGPU (bf16 → f16 → f32 fallback; see the sketch below).
+ 5. The prompt is formatted; any `<think>…</think>` blocks are streamed as a separate “💭 Thought.”
+ 6. Tokens stream into the Chatbot UI. Press **Cancel** to stop mid-generation.
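Steps 4 and 6 can be illustrated with standard `transformers` primitives. This is a hedged reconstruction rather than the Space's actual code: the dtype-fallback loop mirrors the bf16 → f16 → f32 order described above, and cancellation is modelled here with a `threading.Event` plus a custom `StoppingCriteria` (an assumption about the mechanism; the model ID and prompt are placeholders).

```python
import threading

import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
    TextIteratorStreamer,
)

model_id = "Qwen/Qwen3-0.6B"  # placeholder; the app takes this from the dropdown
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Step 4: try bf16 first, then fall back to f16 and finally f32.
model = None
for dtype in (torch.bfloat16, torch.float16, torch.float32):
    try:
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype)
        break
    except (ValueError, RuntimeError):
        continue
assert model is not None, "all dtype attempts failed"

# Step 6: stream tokens; a shared Event lets a Cancel button stop generation.
cancel_event = threading.Event()

class CancelCriteria(StoppingCriteria):
    """Stop generation as soon as the cancel event is set."""
    def __call__(self, input_ids, scores, **kwargs):
        return cancel_event.is_set()

streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
inputs = tokenizer("Tell me a short joke.", return_tensors="pt")

thread = threading.Thread(
    target=model.generate,
    kwargs=dict(
        **inputs,
        max_new_tokens=128,
        streamer=streamer,
        stopping_criteria=StoppingCriteriaList([CancelCriteria()]),
    ),
)
thread.start()
for piece in streamer:  # the Gradio callback would yield these chunks to the Chatbot
    print(piece, end="", flush=True)
thread.join()
```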