KraTUZen committed
Commit 1ac12bc · 1 parent: 85ed994
LogicLinkVersion5.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
README.md CHANGED
@@ -1,14 +1,205 @@
- ---
- title: LogicLink Project Space
- emoji: 📊
- colorFrom: indigo
- colorTo: indigo
- sdk: gradio
- sdk_version: 5.29.0
- app_file: app.py
- pinned: false
- license: mit
- short_description: AI Model
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # LogicLink: Version 5
+
+ LogicLink is a conversational AI chatbot developed by **Kratu Gautam**, an **AIML Engineer**. Powered by the **TinyLlama-1.1B-Chat-v1.0** model, LogicLink provides an interactive, user-friendly interface for engaging conversations, answering queries, and assisting with tasks such as planning and writing. Version 5 introduces a sleek GUI, streaming responses, and conversation management.
+
+ ## Features
+
+ - **Conversational AI**: Built on TinyLlama-1.1B-Chat-v1.0, LogicLink delivers natural, engaging responses to a wide range of user queries.
+ - **Streaming Responses**: Uses `TextIteratorStreamer` for real-time response generation and a smooth user experience.
+ - **Customizable GUI**: A modern interface with a red/blue/black theme, powered by Gradio and ModelScope Studio components (`pro.Chatbot`, `antdx.Sender`).
+ - **Conversation Management**:
+   - **New Chat**: Start a fresh conversation with a dedicated button.
+   - **Clear History**: Reset the current conversation's history.
+   - **Delete Conversations**: Remove individual conversations from the conversation list.
+ - **Single Time Stamp**: Responses include a single processing time stamp (e.g., `*(4.50s)*`), fixed to avoid duplication.
+ - **CUDA Support**: Optimizes performance on GPU-enabled systems, with fallback to CPU.
+ - **Error Handling**: Gracefully handles issues such as memory shortages or invalid inputs, displaying user-friendly error messages.
+
+ ## Installation
+
+ ### Prerequisites
+
+ - Python 3.8+
+ - CUDA-enabled GPU (optional, for faster processing)
+ - Dependencies:
+
+ ```bash
+ pip install gradio torch transformers modelscope-studio
+ ```
+
+ ### Setup
+
+ 1. **Clone the Repository**:
+
+    ```bash
+    git clone https://github.com/Kratugautam99/LogicLink-Project.git
+    cd LogicLink-Project
+    ```
+
+ 2. **Install Dependencies**:
+
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 3. **Directory Structure**: Ensure the following files are present:
+
+    - `app.py`: Main application script.
+    - `config.py`: Configuration for GUI components (ensure `DEFAULT_LOCALE`, `DEFAULT_THEME`, `get_text`, `user_config`, `bot_config`, `welcome_config` are defined).
+    - `ui_components/logo.py`: Logo component for the GUI.
+    - `ui_components/settings_header.py`: Settings header component.
+
+ 4. **Run the Application**:
+
+    ```bash
+    python app.py
+    ```
+
+ This launches a web interface via Gradio, with a public URL (e.g., `https://...gradio.live`) if `share=True`.
+
+ ## Usage
+
+ 1. **Launch the Chatbot**:
+
+    - Run `app.py` in a Jupyter notebook, in Colab, or from a terminal.
+    - Access the web interface through the provided URL.
+
+ 2. **Interact with LogicLink**:
+
+    - **Input Queries**: Type questions or tasks in the input field (e.g., "Tell me about Pakistan" or "Who are you?").
+    - **Manage Conversations**:
+      - Click **New Chat** to start a new conversation.
+      - Click **Clear History** to reset the current conversation.
+      - Click the **Delete** menu item in the conversation list to remove a conversation.
+
+ 3. **Example Interaction**:
+
+    - **Input**: "Who are you?"
+    - **Output**:
+
+      ```
+      I'm LogicLink, Version 5, created by Kratu Gautam, an AIML Engineer. I'm here to help with your questions, so what's up?
+      *(4.50s)*
+      ```
+
+    - **Input**: "Explain quantum physics briefly"
+    - **Output**: A concise explanation of quantum physics, followed by `*(X.XXs)*`.
+
+ 4. **Performance**:
+
+    - **Response Time**: ~3–5 seconds per query (faster with CUDA).
+    - **RAM Usage**: ~2–3 GB on CPU, lower on GPU.
+
+ ## Technical Details
+
+ ### Model Architecture
+
+ - **Base Model**: TinyLlama-1.1B-Chat-v1.0, a lightweight transformer-based language model with 1.1 billion parameters, optimized for chat applications.
+ - **Framework**: PyTorch with the Hugging Face Transformers library.
+ - **Tokenizer**: `AutoTokenizer` configured with left-padding and EOS-token handling to format chat sequences correctly.
+ - **Response Generation**:
+   - Uses `AutoModelForCausalLM` for next-token prediction.
+   - Streams tokens in real time with `TextIteratorStreamer`, improving the user experience.
+   - Applies a custom `StopOnTokens` stopping criterion that halts generation at specific tokens (e.g., token ID 2, the EOS token), preventing unnecessary output.
+ - **Generation Parameters**:
+   - `max_new_tokens=1024`: Caps response length at 1024 tokens.
+   - `temperature=0.7`: Balances creativity and coherence.
+   - `top_k=50`: Samples from the 50 most probable tokens.
+   - `top_p=0.95`: Applies nucleus sampling over the top 95% of the probability mass.
+   - `num_beams=1`: Disables beam search; since `do_sample=True`, tokens are sampled one at a time rather than decoded greedily.
+
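+ A minimal sketch of how these pieces fit together (assuming the tokenizer and model are already loaded, as in `app.py` below):
+
+ ```python
+ from threading import Thread
+ from transformers import TextIteratorStreamer
+
+ # Stream tokens as they are generated instead of waiting for the full reply.
+ streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+ inputs = tokenizer(["\n<|user|>:Hello\n<|assistant|>:"], return_tensors="pt").to(model.device)
+ Thread(target=model.generate, kwargs=dict(
+     **inputs, streamer=streamer, max_new_tokens=1024, do_sample=True,
+     temperature=0.7, top_k=50, top_p=0.95, num_beams=1,
+ )).start()
+ for text in streamer:          # yields decoded text chunks in real time
+     print(text, end="", flush=True)
+ ```
+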
+ ### Implementation Specifics
+
+ - **Prompt Engineering**:
+   - The model is instructed via a system prompt:
+
+     ```
+     You are LogicLink, Version 5, created by Kratu Gautam, an AIML Engineer. Respond to the following user input: {user_input}
+     ```
+
+   - Conversation history is formatted with `<|user|>` and `<|assistant|>` tags, separated by `</s>`, to maintain context.
+ - **Threading**: Response generation runs in a separate thread (via Python's `threading.Thread`) so the Gradio interface never blocks.
+ - **Time Stamp Handling**: A regex (`re.sub(r'\*\(\d+\.\d+s\)\*', '', response)`) strips stale time stamps so each response ends with exactly one `*(X.XXs)*`.
+ - **Error Handling**:
+   - Catches exceptions (e.g., memory errors, model incompatibilities) and appends user-friendly messages to the conversation history.
+   - Example: `Generation failed: insufficient memory. Possible causes: ...`
+
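+ For illustration, a runnable sketch of the history formatting and time-stamp cleanup described above (function names are illustrative, not the exact ones used in `app.py`):
+
+ ```python
+ import re
+
+ def format_history(history):
+     # history: list of {"role": ..., "content": ...} dicts, joined in TinyLlama chat format
+     return "</s>".join(
+         ("\n<|user|>:" if m["role"] == "user" else "\n<|assistant|>:") + m["content"]
+         for m in history
+     )
+
+ def stamp_once(response: str, elapsed: float) -> str:
+     cleaned = re.sub(r'\*\(\d+\.\d+s\)\*', '', response).strip()  # drop stale stamps
+     return f"{cleaned}\n\n*({elapsed:.2f}s)*"                     # append one fresh stamp
+
+ print(stamp_once("Hi there! *(9.99s)*", 4.5))  # -> Hi there!\n\n*(4.50s)*
+ ```
+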
+ ### GUI
+
+ - **Framework**: Gradio integrated with ModelScope Studio components for a professional-grade interface.
+ - **Components**:
+   - `pro.Chatbot`: Renders conversation history with distinct user (blue bubbles) and assistant (dark gray with red borders) messages.
+   - `antdx.Sender`: Provides an input field with a clear button for user queries.
+   - `antdx.Conversations`: A sidebar for managing multiple conversations, with a context menu for deletion.
+   - `antd.Button`: Implements the "New Chat" button and other interactive elements.
+ - **Styling**: Custom CSS defines the red/blue/black theme:
+   - User messages: blue background for visibility.
+   - Assistant messages: dark gray with red borders for contrast.
+   - Buttons: blue with hover effects for interactivity.
+ - **Layout**: `antd.Row` and `antd.Col` provide a responsive design with a fixed 260px sidebar and a flexible chat area.
+
+ ### Performance Optimization
+
+ - **CUDA Support**: Automatically detects a CUDA-enabled GPU via `torch.device('cuda' if torch.cuda.is_available() else 'cpu')`, cutting response times to ~3 seconds on GPU versus ~5 seconds on CPU.
+ - **Memory Efficiency**: TinyLlama's 1.1B parameters require ~2–3 GB of RAM on CPU, making it suitable for consumer hardware.
+ - **Threaded Generation**: Offloads model inference to a separate thread, keeping the GUI responsive during processing.
+
+ ### Key Fixes
+
+ - **Single Time Stamp**: Resolved duplicate time stamps by cleaning responses with a regex before appending `*(X.XXs)*`.
+ - **Delete Functionality**: Fixed `AntdXConversations` event handling by replacing `select` with `menu_click`, making conversation deletion reliable.
+ - **Model Identity**: Embedded the model's identity in the prompt so it consistently introduces itself as LogicLink V5 by Kratu Gautam.
+
+ ## Troubleshooting
+
+ - **Double Time Stamps**:
+   - If responses show multiple `*(X.XXs)*`, verify the regex in `logiclink_chat`.
+   - Test with inputs like "Tell me about Pakistan" and inspect the output.
+ - **Slow Responses**:
+   - Use a CUDA-enabled GPU for faster processing.
+   - Reduce `max_new_tokens` to 512 if needed.
+   - Check RAM usage with `!free -h` in Colab.
+ - **GUI Issues**:
+   - Ensure `config.py` and `ui_components/` are correctly configured.
+   - Update dependencies: `pip install --force-reinstall gradio modelscope-studio`.
+ - **Delete Button Not Working**:
+   - Verify the `menu_click` event handler and its payload handling.
+   - Capture any error messages or tracebacks.
+ - **Model Errors**:
+   - Check for sufficient RAM (~2–3 GB) and compatible PyTorch/Transformers versions.
+   - Run a test generation:
+
+     ```python
+     from transformers import AutoModelForCausalLM, AutoTokenizer
+
+     tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+     model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+     inputs = tokenizer(["Hello"], return_tensors="pt")
+     outputs = model.generate(**inputs, max_new_tokens=10)
+     print(tokenizer.decode(outputs[0]))
+     ```
+
+ ## Future Improvements
+
+ - Add a welcome message displaying LogicLink's identity via `welcome_config()`.
+ - Enhance prompt engineering for more context-aware responses.
+ - Implement persistent storage for conversation history using a database or the file system (see the sketch after this list).
+ - Add support for multimodal inputs (e.g., images) to expand functionality.
+ - Optimize tokenization and generation for lower latency on CPU.
+
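+ As a starting point, a minimal sketch of file-based persistence (a hypothetical helper, not part of the current app):
+
+ ```python
+ import json
+ import pathlib
+
+ HISTORY_DIR = pathlib.Path("chat_history")  # hypothetical storage location
+
+ def save_conversation(conversation_id: str, history: list) -> None:
+     HISTORY_DIR.mkdir(exist_ok=True)
+     path = HISTORY_DIR / f"{conversation_id}.json"
+     path.write_text(json.dumps(history, ensure_ascii=False, indent=2))
+
+ def load_conversation(conversation_id: str) -> list:
+     path = HISTORY_DIR / f"{conversation_id}.json"
+     return json.loads(path.read_text()) if path.exists() else []
+ ```
+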
+ ## Credits
+
+ - **Developer**: Kratu Gautam, AIML Engineer
+ - **Dependencies**:
+   - TinyLlama-1.1B-Chat-v1.0 (Hugging Face)
+   - Gradio
+   - PyTorch
+   - Transformers
+   - ModelScope Studio
+ - **Inspiration**: Built to provide an accessible, interactive AI chatbot for students and enthusiasts.
+
+ ## License
+
+ MIT License. See `LICENSE` for details.
+
  ---

+ **LogicLink V5** is a project by Kratu Gautam, showcasing the power of AI in creating intuitive conversational tools. Contributions and feedback are welcome!
app.py ADDED
@@ -0,0 +1,391 @@
+ import uuid
+ import time
+ import re
+ import gradio as gr
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from transformers import StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
+ from threading import Thread
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.antdx as antdx
+ import modelscope_studio.components.base as ms
+ import modelscope_studio.components.pro as pro
+ from config import DEFAULT_LOCALE, DEFAULT_THEME, get_text, user_config, bot_config, welcome_config
+ from ui_components.logo import Logo
+ from ui_components.settings_header import SettingsHeader
+
+ # Load the tokenizer and model from the Hugging Face model hub.
+ tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+ model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+
+ # Use CUDA when available, otherwise fall back to CPU.
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+ model = model.to(device)
+
+ # Custom stopping criterion for the model's text generation.
+ class StopOnTokens(StoppingCriteria):
+     def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
+         stop_ids = [2]  # IDs of tokens at which generation should stop (2 = EOS)
+         for stop_id in stop_ids:
+             if input_ids[0][-1] == stop_id:
+                 return True
+         return False
+
+ # Generate a model reply for `user_input`, streaming tokens as they arrive.
+ def generate_response(user_input, history):
+     stop = StopOnTokens()
+     # Flatten the history into the TinyLlama chat format: turns tagged
+     # <|user|>/<|assistant|> and separated by </s>. (The seeded system
+     # message is folded in under the assistant tag.)
+     messages = "</s>".join(
+         "\n<|user|>:" + item["content"] if item["role"] == "user"
+         else "\n<|assistant|>:" + item["content"]
+         for item in history
+     )
+     messages += f"\n<|user|>:{user_input}\n<|assistant|>:"
+     model_inputs = tokenizer([messages], return_tensors="pt").to(device)
+     streamer = TextIteratorStreamer(tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True)
+     generate_kwargs = dict(
+         **model_inputs,
+         streamer=streamer,
+         max_new_tokens=1024,
+         do_sample=True,
+         top_p=0.95,
+         top_k=50,
+         temperature=0.7,
+         num_beams=1,
+         stopping_criteria=StoppingCriteriaList([stop])
+     )
+     t = Thread(target=model.generate, kwargs=generate_kwargs)
+     t.start()  # run generation in a background thread
+     partial_message = ""
+     for new_token in streamer:
+         partial_message += new_token
+         if '</s>' in partial_message:
+             break
+     return partial_message
+
+ # System prompt used to seed each conversation's context.
+ SYSTEM_PROMPT = (
+     "I am LogicLink, Version 5—a state-of-the-art AI chatbot created by "
+     "Kratu Gautam (A-27) and Geetank Sahare (A-28) from SY CSE(AIML) GHRCEM. "
+     "I am here to assist you with any queries. How can I help you today?"
+ )
+
+ class Gradio_Events:
+     _generating = False
+
+     @staticmethod
+     def new_chat(state_value):
+         # Note: the old conversation is deliberately NOT cleaned up here;
+         # it stays in the state so it can be reopened later.
+
+         # Create a fresh conversation.
+         new_id = str(uuid.uuid4())
+         state_value["conversation_id"] = new_id
+
+         # Add the new conversation to the list with a default name.
+         state_value["conversations"].append({
+             "label": "New Chat",
+             "key": new_id
+         })
+
+         # Seed it with the system prompt.
+         state_value["conversation_contexts"][new_id] = {
+             "history": [{
+                 "role": "system",
+                 "content": SYSTEM_PROMPT,
+                 "key": str(uuid.uuid4()),
+                 "avatar": None
+             }]
+         }
+
+         # Return UI updates.
+         return (
+             gr.update(items=state_value["conversations"]),
+             gr.update(value=state_value["conversation_contexts"][new_id]["history"]),
+             gr.update(value=state_value),
+             gr.update(value="")  # empty the input box
+         )
+
+     @staticmethod
+     def add_message(input_value, state_value):
+         input_update = gr.update(value="")
+
+         # If the input is empty, just re-render the current history.
+         if not input_value.strip():
+             conversation = state_value["conversation_contexts"].get(state_value["conversation_id"], {"history": []})
+             chatbot_update = gr.update(value=conversation["history"])
+             state_update = gr.update(value=state_value)
+             return input_update, chatbot_update, state_update
+
+         # If there is no active conversation, initialize a new one.
+         if not state_value["conversation_id"]:
+             random_id = str(uuid.uuid4())
+             state_value["conversation_id"] = random_id
+             state_value["conversation_contexts"][random_id] = {"history": [{
+                 "role": "system",
+                 "content": SYSTEM_PROMPT,
+                 "key": str(uuid.uuid4()),
+                 "avatar": None
+             }]}
+
+             # Name the chat after the first user message.
+             chat_name = input_value[:20] + ("..." if len(input_value) > 20 else "")
+             state_value["conversations"].append({
+                 "label": chat_name,
+                 "key": random_id
+             })
+         else:
+             # Get the current conversation history.
+             current_id = state_value["conversation_id"]
+             history = state_value["conversation_contexts"][current_id]["history"]
+
+             # If this is the first user message (after the system message), update the label.
+             user_messages = [msg for msg in history if msg["role"] == "user"]
+             if len(user_messages) == 0:
+                 chat_name = input_value[:20] + ("..." if len(input_value) > 20 else "")
+                 for i, conv in enumerate(state_value["conversations"]):
+                     if conv["key"] == current_id:
+                         state_value["conversations"][i]["label"] = chat_name
+                         break
+
+         # Append the user message to the history.
+         history = state_value["conversation_contexts"][state_value["conversation_id"]]["history"]
+         history.append({
+             "role": "user",
+             "content": input_value,
+             "key": str(uuid.uuid4()),
+             "avatar": None
+         })
+
+         chatbot_update = gr.update(value=history)
+         return input_update, chatbot_update, gr.update(value=state_value)
+
+     @staticmethod
+     def submit(state_value):
+         # Guard against overlapping generations.
+         if Gradio_Events._generating:
+             history = state_value["conversation_contexts"].get(state_value["conversation_id"], {"history": []})["history"]
+             return (
+                 gr.update(value=history),
+                 gr.update(value=state_value),
+                 gr.update(value="Generation in progress, please wait...")
+             )
+
+         Gradio_Events._generating = True
+
+         # Make sure we have a valid conversation ID.
+         if not state_value["conversation_id"]:
+             Gradio_Events._generating = False
+             return (
+                 gr.update(value=[]),
+                 gr.update(value=state_value),
+                 gr.update(value="No active conversation")
+             )
+
+         history = state_value["conversation_contexts"][state_value["conversation_id"]]["history"]
+
+         # The last message should be the latest user input.
+         user_input = history[-1]["content"] if (history and history[-1]["role"] == "user") else ""
+         if not user_input:
+             Gradio_Events._generating = False
+             return (
+                 gr.update(value=history),
+                 gr.update(value=state_value),
+                 gr.update(value="No user input provided")
+             )
+
+         # Generate the response from the model.
+         history, response = Gradio_Events.logiclink_chat(user_input, history)
+         state_value["conversation_contexts"][state_value["conversation_id"]]["history"] = history
+         Gradio_Events._generating = False
+         return (
+             gr.update(value=history),
+             gr.update(value=state_value),
+             gr.update(value=response)
+         )
+
+     @staticmethod
+     def logiclink_chat(user_input, history):
+         if not user_input:
+             return history, "No input provided"
+         try:
+             start = time.time()
+             response = generate_response(user_input, history)
+             elapsed = time.time() - start
+             # Strip any stale time stamps, then append exactly one fresh one.
+             cleaned_response = re.sub(r'\*\(\d+\.\d+s\)\*', '', response).strip()
+             response_with_time = f"{cleaned_response}\n\n*({elapsed:.2f}s)*"
+             history.append({
+                 "role": "assistant",
+                 "content": response_with_time,
+                 "key": str(uuid.uuid4()),
+                 "avatar": None
+             })
+             return history, response_with_time
+         except Exception as e:
+             error_msg = (
+                 f"Generation failed: {str(e)}. "
+                 "Possible causes: insufficient memory, model incompatibility, or input issues."
+             )
+             history.append({
+                 "role": "assistant",
+                 "content": error_msg,
+                 "key": str(uuid.uuid4()),
+                 "avatar": None
+             })
+             return history, error_msg
+
+     @staticmethod
+     def clear_history(state_value):
+         if state_value["conversation_id"]:
+             # Keep the system prompt; clear everything after it.
+             current_history = state_value["conversation_contexts"][state_value["conversation_id"]]["history"]
+             if len(current_history) > 0 and current_history[0]["role"] == "system":
+                 system_message = current_history[0]
+                 state_value["conversation_contexts"][state_value["conversation_id"]]["history"] = [system_message]
+             else:
+                 state_value["conversation_contexts"][state_value["conversation_id"]]["history"] = []
+
+             # Return the cleared history.
+             return (
+                 gr.update(value=state_value["conversation_contexts"][state_value["conversation_id"]]["history"]),
+                 gr.update(value=state_value),
+                 gr.update(value="")
+             )
+         return (
+             gr.update(value=[]),
+             gr.update(value=state_value),
+             gr.update(value="")
+         )
+
268
+ def delete_conversation(state_value, conversation_key):
269
+ # Keep a copy of the conversations before removal
270
+ new_conversations = [conv for conv in state_value["conversations"] if conv["key"] != conversation_key]
271
+
272
+ # Remove the conversation from the list
273
+ state_value["conversations"] = new_conversations
274
+
275
+ # Delete the conversation context
276
+ if conversation_key in state_value["conversation_contexts"]:
277
+ del state_value["conversation_contexts"][conversation_key]
278
+
279
+ # If we're deleting the active conversation
280
+ if state_value["conversation_id"] == conversation_key:
281
+ state_value["conversation_id"] = ""
282
+ return gr.update(items=new_conversations), gr.update(value=[]), gr.update(value=state_value)
283
+
284
+ # If deleting another conversation, keep the current one displayed
285
+ return (
286
+ gr.update(items=new_conversations),
287
+ gr.update(value=state_value["conversation_contexts"].get(
288
+ state_value["conversation_id"], {"history": []}
289
+ )["history"]),
290
+ gr.update(value=state_value)
291
+ )
292
+
293
+ # (The remainder of your Gradio UI code remains largely unchanged.)
294
+
+ css = """
+ :root {
+   --color-red: #ff4444;
+   --color-blue: #1e88e5;
+   --color-black: #000000;
+   --color-dark-gray: #121212;
+ }
+ .gradio-container { background: var(--color-black) !important; color: white !important; }
+ .gr-textbox textarea, .ms-gr-ant-input-textarea { background: var(--color-dark-gray) !important; border: 2px solid var(--color-blue) !important; color: white !important; }
+ .gr-chatbot { background: var(--color-dark-gray) !important; border: 2px solid var(--color-red) !important; }
+ .gr-textbox.output-textbox { background: var(--color-dark-gray) !important; border: 2px solid var(--color-red) !important; color: white !important; margin-bottom: 10px; }
+ .gr-chatbot .user { background: var(--color-blue) !important; border-color: var(--color-blue) !important; }
+ .gr-chatbot .bot { background: var(--color-dark-gray) !important; border: 1px solid var(--color-red) !important; }
+ .gr-button { background: var(--color-blue) !important; border-color: var(--color-blue) !important; }
+ .gr-chatbot .tool { background: var(--color-dark-gray) !important; border: 1px solid var(--color-red) !important; }
+ """
+
+ with gr.Blocks(css=css, fill_width=True, title="LogicLinkV5") as demo:
+     state = gr.State({
+         "conversation_contexts": {},
+         "conversations": [],
+         "conversation_id": "",
+     })
+     with ms.Application(), antdx.XProvider(theme=DEFAULT_THEME, locale=DEFAULT_LOCALE), ms.AutoLoading():
+         with antd.Row(gutter=[20, 20], wrap=False, elem_id="chatbot"):
+             # Left column: logo, "New Chat" button, and the conversation list.
+             with antd.Col(md=dict(flex="0 0 260px", span=24, order=0), span=0, order=1):
+                 with ms.Div(elem_classes="chatbot-conversations"):
+                     with antd.Flex(vertical=True, gap="small", elem_style=dict(height="100%")):
+                         Logo()
+                         with antd.Button(color="primary", variant="filled", block=True, elem_classes="new-chat-btn") as new_chat_btn:
+                             ms.Text(get_text("New Chat", "新建对话"))
+                             with ms.Slot("icon"):
+                                 antd.Icon("PlusOutlined")
+                         with antdx.Conversations(elem_classes="chatbot-conversations-list") as conversations:
+                             with ms.Slot('menu.items'):
+                                 with antd.Menu.Item(label="Delete", key="delete", danger=True) as conversation_delete_menu_item:
+                                     with ms.Slot("icon"):
+                                         antd.Icon("DeleteOutlined")
+             # Right column: chat area, latest-output textbox, and the message sender.
+             with antd.Col(flex=1, elem_style=dict(height="100%")):
+                 with antd.Flex(vertical=True, gap="small", elem_classes="chatbot-chat"):
+                     chatbot = pro.Chatbot(elem_classes="chatbot-chat-messages", height=600,
+                                           welcome_config=welcome_config(), user_config=user_config(),
+                                           bot_config=bot_config())
+                     output_textbox = gr.Textbox(label="LatestOutputTextbox", lines=1,
+                                                 elem_classes="output-textbox", interactive=True)
+                     with antdx.Suggestion(items=[]):
+                         with ms.Slot("children"):
+                             with antdx.Sender(placeholder="Type your message...", elem_classes="chat-input") as input:
+                                 with ms.Slot("prefix"):
+                                     with antd.Flex(gap=4):
+                                         with antd.Button(type="text", elem_classes="clear-btn") as clear_btn:
+                                             with ms.Slot("icon"):
+                                                 antd.Icon("ClearOutlined")
+
+     # Event handlers
+     input.submit(fn=Gradio_Events.add_message, inputs=[input, state],
+                  outputs=[input, chatbot, state]).then(
+         fn=Gradio_Events.submit, inputs=[state],
+         outputs=[chatbot, state, output_textbox]
+     )
+     new_chat_btn.click(fn=Gradio_Events.new_chat,
+                        inputs=[state],
+                        outputs=[conversations, chatbot, state, input],
+                        queue=False)
+     clear_btn.click(fn=Gradio_Events.clear_history, inputs=[state],
+                     outputs=[chatbot, state, output_textbox])
+
+     def on_conversation_menu_click(state_value, e: gr.EventData):
+         # Expected payload shape: [{"key": <conversation id>, ...}, {"key": <menu action>, ...}]
+         try:
+             conv_key = e._data["payload"][0]["key"]
+             action_key = e._data["payload"][1]["key"]
+         except (AttributeError, KeyError, IndexError, TypeError):
+             return gr.skip()  # no usable payload
+         if action_key == "delete":
+             return Gradio_Events.delete_conversation(state_value, conv_key)
+         # Any other menu action: re-render the current state unchanged.
+         return (
+             gr.update(items=state_value["conversations"]),
+             gr.update(value=state_value["conversation_contexts"]
+                       .get(state_value["conversation_id"], {"history": []})["history"]),
+             gr.update(value=state_value)
+         )
+
+     conversations.menu_click(fn=on_conversation_menu_click,
+                              inputs=[state],
+                              outputs=[conversations, chatbot, state],
+                              queue=False)
+
+ demo.queue().launch(share=True, debug=True)
config.py ADDED
@@ -0,0 +1,228 @@
+ import os
+ from modelscope_studio.components.pro.chatbot import ChatbotActionConfig, ChatbotBotConfig, ChatbotUserConfig, ChatbotWelcomeConfig
+
+ # Environment
+ is_cn = os.getenv('MODELSCOPE_ENVIRONMENT') == 'studio'
+ api_key = os.getenv('API_KEY')
+
+
+ def get_text(text: str, cn_text: str):
+     # Always return the English string; cn_text is kept for parity with the
+     # bilingual labels passed in throughout the UI.
+     return text
+
+
+ # Save history in the browser
+ save_history = True
+
+
+ # Chatbot config
+ def user_config(disabled_actions=None):
+     return ChatbotUserConfig(
+         class_names=dict(content="user-message-content"),
+         actions=[
+             "copy", "edit",
+             ChatbotActionConfig(
+                 action="delete",
+                 popconfirm=dict(title=get_text("Delete the message", "删除消息"),
+                                 description=get_text(
+                                     "Are you sure you want to delete this message?",
+                                     "确认删除该消息?"),
+                                 okButtonProps=dict(danger=True)))
+         ],
+         disabled_actions=disabled_actions)
+
+
+ def bot_config(disabled_actions=None):
+     return ChatbotBotConfig(actions=[
+         "copy", "edit",
+         ChatbotActionConfig(
+             action="retry",
+             popconfirm=dict(
+                 title=get_text("Regenerate the message", "重新生成消息"),
+                 description=get_text(
+                     "Regenerating the message will also delete all subsequent messages.",
+                     "重新生成消息会删除所有后续消息。"),
+                 okButtonProps=dict(danger=True))),
+         ChatbotActionConfig(action="delete",
+                             popconfirm=dict(
+                                 title=get_text("Delete the message", "删除消息"),
+                                 description=get_text(
+                                     "Are you sure you want to delete this message?",
+                                     "确认删除该消息?"),
+                                 okButtonProps=dict(danger=True)))
+     ],
+                             avatar="./assets/lll.jpg",
+                             disabled_actions=disabled_actions)
+
+
+ def welcome_config():
+     return ChatbotWelcomeConfig(
+         variant="borderless",
+         icon="./assets/lll.jpg",
+         title=get_text("Hello, I'm LogicLink5", "你好,我是 LogicLink5"),
+         description=get_text("Select a model and enter text to get started.",
+                              "选择模型并输入文本,开始对话吧。"),
+         prompts=dict(
+             title=get_text("How can I help you today?", "有什么我能帮助你的吗?"),
+             styles={
+                 "list": {
+                     "width": '100%',
+                 },
+                 "item": {
+                     "flex": 1,
+                 },
+             },
+             items=[{
+                 "label": get_text("📅 Make a plan", "📅 制定计划"),
+                 "children": [{
+                     "description": get_text("Help me with a plan to start a business",
+                                             "帮助我制定一个创业计划")
+                 }, {
+                     "description": get_text("Help me with a plan to achieve my goals",
+                                             "帮助我制定一个实现目标的计划")
+                 }, {
+                     "description": get_text("Help me with a plan for a successful interview",
+                                             "帮助我制定一个成功的面试计划")
+                 }]
+             }, {
+                 "label": get_text("🖋 Help me write", "🖋 帮我写"),
+                 "children": [{
+                     "description": get_text("Help me write a story with a twist ending",
+                                             "帮助我写一个带有意外结局的故事")
+                 }, {
+                     "description": get_text("Help me write a blog post on mental health",
+                                             "帮助我写一篇关于心理健康的博客文章")
+                 }, {
+                     "description": get_text("Help me write a letter to my future self",
+                                             "帮助我写一封给未来自己的信")
+                 }]
+             }]),
+     )
+
+
+ DEFAULT_SUGGESTIONS = [{
111
+ "label":
112
+ get_text('Make a plan', '制定计划'),
113
+ "value":
114
+ get_text('Make a plan', '制定计划'),
115
+ "children": [{
116
+ "label":
117
+ get_text("Start a business", "开始创业"),
118
+ "value":
119
+ get_text("Help me with a plan to start a business", "帮助我制定一个创业计划")
120
+ }, {
121
+ "label":
122
+ get_text("Achieve my goals", "实现我的目标"),
123
+ "value":
124
+ get_text("Help me with a plan to achieve my goals", "帮助我制定一个实现目标的计划")
125
+ }, {
126
+ "label":
127
+ get_text("Successful interview", "成功的面试"),
128
+ "value":
129
+ get_text("Help me with a plan for a successful interview",
130
+ "帮助我制定一个成功的面试计划")
131
+ }]
132
+ }, {
133
+ "label":
134
+ get_text('Help me write', '帮我写'),
135
+ "value":
136
+ get_text("Help me write", '帮我写'),
137
+ "children": [{
138
+ "label":
139
+ get_text("Story with a twist ending", "带有意外结局的故事"),
140
+ "value":
141
+ get_text("Help me write a story with a twist ending",
142
+ "帮助我写一个带有意外结局的故事")
143
+ }, {
144
+ "label":
145
+ get_text("Blog post on mental health", "关于心理健康的博客文章"),
146
+ "value":
147
+ get_text("Help me write a blog post on mental health",
148
+ "帮助我写一篇关于心理健康的博客文章")
149
+ }, {
150
+ "label":
151
+ get_text("Letter to my future self", "给未来自己的信"),
152
+ "value":
153
+ get_text("Help me write a letter to my future self", "帮助我写一封给未来自己的信")
154
+ }]
155
+ }]
156
+
+ DEFAULT_SYS_PROMPT = "You are a helpful and harmless assistant."
+
+ MIN_THINKING_BUDGET = 1
+ MAX_THINKING_BUDGET = 38
+ DEFAULT_THINKING_BUDGET = 38
+
+ DEFAULT_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
+
+ MODEL_OPTIONS = [
+     {
+         "label": get_text("Qwen3-235B-A22B", "通义千问3-235B-A22B"),
+         "modelId": "Qwen/Qwen3-235B-A22B",
+         "value": "qwen3-235b-a22b"
+     },
+     {
+         "label": get_text("Qwen3-32B", "通义千问3-32B"),
+         "modelId": "Qwen/Qwen3-32B",
+         "value": "qwen3-32b"
+     },
+     {
+         "label": get_text("Qwen3-30B-A3B", "通义千问3-30B-A3B"),
+         "modelId": "Qwen/Qwen3-30B-A3B",
+         "value": "qwen3-30b-a3b"
+     },
+     {
+         "label": get_text("Qwen3-14B", "通义千问3-14B"),
+         "modelId": "Qwen/Qwen3-14B",
+         "value": "qwen3-14b"
+     },
+     {
+         "label": get_text("Qwen3-8B", "通义千问3-8B"),
+         "modelId": "Qwen/Qwen3-8B",
+         "value": "qwen3-8b"
+     },
+     {
+         "label": get_text("Qwen3-4B", "通义千问3-4B"),
+         "modelId": "Qwen/Qwen3-4B",
+         "value": "qwen3-4b"
+     },
+     {
+         "label": get_text("Qwen3-1.7B", "通义千问3-1.7B"),
+         "modelId": "Qwen/Qwen3-1.7B",
+         "value": "qwen3-1.7b"
+     },
+     {
+         "label": get_text("Qwen3-0.6B", "通义千问3-0.6B"),
+         "modelId": "Qwen/Qwen3-0.6B",
+         "value": "qwen3-0.6b"
+     },
+ ]
+
+ # Link each model to ModelScope in the studio environment, otherwise to Hugging Face.
+ for model in MODEL_OPTIONS:
+     model["link"] = (f"https://modelscope.cn/models/{model['modelId']}"
+                      if is_cn else f"https://huggingface.co/{model['modelId']}")
+
+ MODEL_OPTIONS_MAP = {model["value"]: model for model in MODEL_OPTIONS}
+
+ DEFAULT_LOCALE = 'en_US'
+
+ DEFAULT_THEME = {
+     "token": {
+         "colorPrimary": "#6A57FF",
+     }
+ }
+
+ DEFAULT_SETTINGS = {
+     "model": DEFAULT_MODEL,
+     "sys_prompt": DEFAULT_SYS_PROMPT,
+     "thinking_budget": DEFAULT_THINKING_BUDGET
+ }
gitattributes ADDED
@@ -0,0 +1,36 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ assets/qwen.png filter=lfs diff=lfs merge=lfs -text
requirements.txt ADDED
@@ -0,0 +1,627 @@
+ absl-py==1.4.0
+ accelerate==1.6.0
+ aiofiles==24.1.0
+ aiohappyeyeballs==2.6.1
+ aiohttp==3.11.15
+ aiosignal==1.3.2
+ alabaster==1.0.0
+ albucore==0.0.24
+ albumentations==2.0.6
+ ale-py==0.11.0
+ altair==5.5.0
+ annotated-types==0.7.0
+ anyio==4.9.0
+ argon2-cffi==23.1.0
+ argon2-cffi-bindings==21.2.0
+ array_record==0.7.2
+ arviz==0.21.0
+ astropy==7.0.1
+ astropy-iers-data==0.2025.4.28.0.37.27
+ astunparse==1.6.3
+ atpublic==5.1
+ attrs==25.3.0
+ audioread==3.0.1
+ autograd==1.7.0
+ babel==2.17.0
+ backcall==0.2.0
+ backports.tarfile==1.2.0
+ beautifulsoup4==4.13.4
+ betterproto==2.0.0b6
+ bigframes==2.1.0
+ bigquery-magics==0.9.0
+ bleach==6.2.0
+ blinker==1.9.0
+ blis==1.3.0
+ blosc2==3.3.2
+ bokeh==3.7.2
+ Bottleneck==1.4.2
+ bqplot==0.12.44
+ branca==0.8.1
+ build==1.2.2.post1
+ CacheControl==0.14.3
+ cachetools==5.5.2
+ catalogue==2.0.10
+ certifi==2025.4.26
+ cffi==1.17.1
+ chardet==5.2.0
+ charset-normalizer==3.4.1
+ chex==0.1.89
+ clarabel==0.10.0
+ click==8.1.8
+ cloudpathlib==0.21.0
+ cloudpickle==3.1.1
+ cmake==3.31.6
+ cmdstanpy==1.2.5
+ colorcet==3.1.0
+ colorlover==0.3.0
+ colour==0.1.5
+ community==1.0.0b1
+ confection==0.1.5
+ cons==0.4.6
+ contourpy==1.3.2
+ cramjam==2.10.0
+ cryptography==43.0.3
+ cuda-python==12.6.2.post1
+ cudf-cu12 @ https://pypi.nvidia.com/cudf-cu12/cudf_cu12-25.2.1-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl
+ cudf-polars-cu12==25.2.2
+ cufflinks==0.17.3
+ cuml-cu12==25.2.1
+ cupy-cuda12x==13.3.0
+ cuvs-cu12==25.2.1
+ cvxopt==1.3.2
+ cvxpy==1.6.5
+ cycler==0.12.1
+ cyipopt==1.5.0
+ cymem==2.0.11
+ Cython==3.0.12
+ dashscope==1.23.2
+ dask==2024.12.1
+ dask-cuda==25.2.0
+ dask-cudf-cu12==25.2.2
+ dask-expr==1.1.21
+ dataproc-spark-connect==0.7.2
+ datascience==0.17.6
+ datasets==3.5.1
+ db-dtypes==1.4.2
+ dbus-python==1.2.18
+ debugpy==1.8.0
+ decorator==4.4.2
+ defusedxml==0.7.1
+ Deprecated==1.2.18
+ diffusers==0.33.1
+ dill==0.3.8
+ distributed==2024.12.1
+ distributed-ucxx-cu12==0.42.0
+ distro==1.9.0
+ dlib==19.24.6
+ dm-tree==0.1.9
+ docker-pycreds==0.4.0
+ docstring_parser==0.16
+ docutils==0.21.2
+ dopamine_rl==4.1.2
+ duckdb==1.2.2
+ earthengine-api==1.5.13
+ easydict==1.13
+ editdistance==0.8.1
+ eerepr==0.1.1
+ einops==0.8.1
+ en_core_web_sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.8.0/en_core_web_sm-3.8.0-py3-none-any.whl#sha256=1932429db727d4bff3deed6b34cfc05df17794f4a52eeb26cf8928f7c1a0fb85
+ entrypoints==0.4
+ et_xmlfile==2.0.0
+ etils==1.12.2
+ etuples==0.3.9
+ Farama-Notifications==0.0.4
+ fastai==2.7.19
+ fastapi==0.115.12
+ fastcore==1.7.29
+ fastdownload==0.0.7
+ fastjsonschema==2.21.1
+ fastprogress==1.0.3
+ fastrlock==0.8.3
+ ffmpy==0.5.0
+ filelock==3.18.0
+ firebase-admin==6.8.0
+ Flask==3.1.0
+ flatbuffers==25.2.10
+ flax==0.10.6
+ folium==0.19.5
+ fonttools==4.57.0
+ frozendict==2.4.6
+ frozenlist==1.6.0
+ fsspec==2025.3.0
+ future==1.0.0
+ gast==0.6.0
+ gcsfs==2025.3.2
+ GDAL==3.6.4
+ gdown==5.2.0
+ geemap==0.35.3
+ geocoder==1.38.1
+ geographiclib==2.0
+ geopandas==1.0.1
+ geopy==2.4.1
+ gin-config==0.5.0
+ gitdb==4.0.12
+ GitPython==3.1.44
+ glob2==0.7
+ google==2.0.3
+ google-ai-generativelanguage==0.6.15
+ google-api-core==2.24.2
+ google-api-python-client==2.169.0
+ google-auth==2.38.0
+ google-auth-httplib2==0.2.0
+ google-auth-oauthlib==1.2.2
+ google-cloud-aiplatform==1.91.0
+ google-cloud-bigquery==3.31.0
+ google-cloud-bigquery-connection==1.18.2
+ google-cloud-bigquery-storage==2.31.0
+ google-cloud-bigtable==2.30.1
+ google-cloud-core==2.4.3
+ google-cloud-dataproc==5.18.1
+ google-cloud-datastore==2.21.0
+ google-cloud-firestore==2.20.2
+ google-cloud-functions==1.20.3
+ google-cloud-iam==2.19.0
+ google-cloud-language==2.17.1
+ google-cloud-pubsub==2.25.0
+ google-cloud-resource-manager==1.14.2
+ google-cloud-spanner==3.54.0
+ google-cloud-storage==2.19.0
+ google-cloud-translate==3.20.2
+ google-colab @ file:///colabtools/dist/google_colab-1.0.0.tar.gz
+ google-crc32c==1.7.1
+ google-genai==1.13.0
+ google-generativeai==0.8.5
+ google-pasta==0.2.0
+ google-resumable-media==2.7.2
+ googleapis-common-protos==1.70.0
+ googledrivedownloader==1.1.0
+ gradio==5.29.0
+ gradio_client==1.10.0
+ graphviz==0.20.3
+ greenlet==3.2.1
+ groovy==0.1.2
+ grpc-google-iam-v1==0.14.2
+ grpc-interceptor==0.15.4
+ grpcio==1.71.0
+ grpcio-status==1.71.0
+ grpclib==0.4.7
+ gspread==6.2.0
+ gspread-dataframe==4.0.0
+ gym==0.25.2
+ gym-notices==0.0.8
+ gymnasium==1.1.1
+ h11==0.16.0
+ h2==4.2.0
+ h5netcdf==1.6.1
+ h5py==3.13.0
+ hdbscan==0.8.40
+ highspy==1.10.0
+ holidays==0.71
+ holoviews==1.20.2
+ hpack==4.1.0
+ html5lib==1.1
+ httpcore==1.0.9
+ httpimport==1.4.1
+ httplib2==0.22.0
+ httpx==0.28.1
+ huggingface-hub==0.30.2
+ humanize==4.12.3
+ hyperframe==6.1.0
+ hyperopt==0.2.7
+ ibis-framework==9.5.0
+ idna==3.10
+ imageio==2.37.0
+ imageio-ffmpeg==0.6.0
+ imagesize==1.4.1
+ imbalanced-learn==0.13.0
+ immutabledict==4.2.1
+ importlib_metadata==8.7.0
+ importlib_resources==6.5.2
+ imutils==0.5.4
+ inflect==7.5.0
+ iniconfig==2.1.0
+ intel-cmplr-lib-ur==2025.1.1
+ intel-openmp==2025.1.1
+ ipyevents==2.0.2
+ ipyfilechooser==0.6.0
+ ipykernel==6.17.1
+ ipyleaflet==0.19.2
+ ipyparallel==8.8.0
+ ipython==7.34.0
+ ipython-genutils==0.2.0
+ ipython-sql==0.5.0
+ ipytree==0.2.2
+ ipywidgets==7.7.1
+ itsdangerous==2.2.0
+ jaraco.classes==3.4.0
+ jaraco.context==6.0.1
+ jaraco.functools==4.1.0
+ jax==0.5.2
+ jax-cuda12-pjrt==0.5.1
+ jax-cuda12-plugin==0.5.1
+ jaxlib==0.5.1
+ jeepney==0.9.0
+ jieba==0.42.1
+ Jinja2==3.1.6
+ jiter==0.9.0
+ joblib==1.4.2
+ jsonpatch==1.33
+ jsonpickle==4.0.5
+ jsonpointer==3.0.0
+ jsonschema==4.23.0
+ jsonschema-specifications==2025.4.1
+ jupyter-client==6.1.12
+ jupyter-console==6.1.0
+ jupyter-leaflet==0.19.2
+ jupyter-server==1.16.0
+ jupyter_core==5.7.2
+ jupyter_kernel_gateway @ git+https://github.com/googlecolab/kernel_gateway@b134e9945df25c2dcb98ade9129399be10788671
+ jupyterlab_pygments==0.3.0
+ jupyterlab_widgets==3.0.14
+ kaggle==1.7.4.2
+ kagglehub==0.3.12
+ keras==3.8.0
+ keras-hub==0.18.1
+ keras-nlp==0.18.1
+ keyring==25.6.0
+ keyrings.google-artifactregistry-auth==1.1.2
+ kiwisolver==1.4.8
+ langchain==0.3.24
+ langchain-core==0.3.56
+ langchain-text-splitters==0.3.8
+ langcodes==3.5.0
+ langsmith==0.3.39
+ language_data==1.3.0
+ launchpadlib==1.10.16
+ lazr.restfulclient==0.14.4
+ lazr.uri==1.0.6
+ lazy_loader==0.4
+ libclang==18.1.1
+ libcudf-cu12 @ https://pypi.nvidia.com/libcudf-cu12/libcudf_cu12-25.2.1-py3-none-manylinux_2_28_x86_64.whl
+ libcugraph-cu12==25.2.0
+ libcuml-cu12==25.2.1
+ libcuvs-cu12==25.2.1
+ libkvikio-cu12==25.2.1
+ libraft-cu12==25.2.0
+ librosa==0.11.0
+ libucx-cu12==1.18.1
+ libucxx-cu12==0.42.0
+ lightgbm @ file:///tmp/lightgbm/LightGBM/dist/lightgbm-4.5.0-py3-none-linux_x86_64.whl
+ linkify-it-py==2.0.3
+ llvmlite==0.43.0
+ locket==1.0.0
+ logical-unification==0.4.6
+ lxml==5.4.0
+ Mako==1.1.3
+ marisa-trie==1.2.1
+ Markdown==3.8
+ markdown-it-py==3.0.0
+ MarkupSafe==3.0.2
+ matplotlib==3.10.0
+ matplotlib-inline==0.1.7
+ matplotlib-venn==1.1.2
+ mdit-py-plugins==0.4.2
+ mdurl==0.1.2
+ miniKanren==1.0.3
+ missingno==0.5.2
+ mistune==3.1.3
+ mizani==0.13.3
+ mkl==2025.0.1
+ ml-dtypes==0.4.1
+ mlxtend==0.23.4
+ modelscope_studio==1.2.4
+ more-itertools==10.7.0
+ moviepy==1.0.3
+ mpmath==1.3.0
+ msgpack==1.1.0
+ multidict==6.4.3
+ multipledispatch==1.0.0
+ multiprocess==0.70.16
+ multitasking==0.0.11
+ murmurhash==1.0.12
+ music21==9.3.0
+ namex==0.0.9
+ narwhals==1.37.1
+ natsort==8.4.0
+ nbclassic==1.3.0
+ nbclient==0.10.2
+ nbconvert==7.16.6
+ nbformat==5.10.4
+ ndindex==1.9.2
+ nest-asyncio==1.6.0
+ networkx==3.4.2
+ nibabel==5.3.2
+ nltk==3.9.1
+ notebook==6.5.7
+ notebook_shim==0.2.4
+ numba==0.60.0
+ numba-cuda==0.2.0
+ numexpr==2.10.2
+ numpy==2.0.2
+ nvidia-cublas-cu12==12.4.5.8
+ nvidia-cuda-cupti-cu12==12.4.127
+ nvidia-cuda-nvcc-cu12==12.5.82
+ nvidia-cuda-nvrtc-cu12==12.4.127
+ nvidia-cuda-runtime-cu12==12.4.127
+ nvidia-cudnn-cu12==9.1.0.70
+ nvidia-cufft-cu12==11.2.1.3
+ nvidia-curand-cu12==10.3.5.147
+ nvidia-cusolver-cu12==11.6.1.9
+ nvidia-cusparse-cu12==12.3.1.170
+ nvidia-cusparselt-cu12==0.6.2
+ nvidia-ml-py==12.570.86
+ nvidia-nccl-cu12==2.21.5
+ nvidia-nvcomp-cu12==4.2.0.11
+ nvidia-nvjitlink-cu12==12.4.127
+ nvidia-nvtx-cu12==12.4.127
+ nvtx==0.2.11
+ nx-cugraph-cu12 @ https://pypi.nvidia.com/nx-cugraph-cu12/nx_cugraph_cu12-25.2.0-py3-none-any.whl
+ oauth2client==4.1.3
+ oauthlib==3.2.2
+ openai==1.76.2
+ opencv-contrib-python==4.11.0.86
+ opencv-python==4.11.0.86
+ opencv-python-headless==4.11.0.86
+ openpyxl==3.1.5
+ opentelemetry-api==1.16.0
+ opentelemetry-sdk==1.16.0
+ opentelemetry-semantic-conventions==0.37b0
+ opt_einsum==3.4.0
+ optax==0.2.4
+ optree==0.15.0
+ orbax-checkpoint==0.11.13
+ orjson==3.10.18
+ osqp==1.0.3
+ packaging==24.2
+ pandas==2.2.2
+ pandas-datareader==0.10.0
+ pandas-gbq==0.28.0
+ pandas-stubs==2.2.2.240909
+ pandocfilters==1.5.1
+ panel==1.6.3
+ param==2.2.0
+ parso==0.8.4
+ parsy==2.1
+ partd==1.4.2
+ pathlib==1.0.1
+ patsy==1.0.1
+ peewee==3.18.1
+ peft==0.15.2
+ pexpect==4.9.0
+ pickleshare==0.7.5
+ pillow==11.2.1
+ platformdirs==4.3.7
+ plotly==5.24.1
+ plotnine==0.14.5
+ pluggy==1.5.0
+ ply==3.11
+ polars==1.21.0
+ pooch==1.8.2
+ portpicker==1.5.2
+ preshed==3.0.9
+ prettytable==3.16.0
+ proglog==0.1.11
+ progressbar2==4.5.0
+ prometheus_client==0.21.1
+ promise==2.3
+ prompt_toolkit==3.0.51
+ propcache==0.3.1
+ prophet==1.1.6
+ proto-plus==1.26.1
+ protobuf==5.29.4
+ psutil==5.9.5
+ psycopg2==2.9.10
+ ptyprocess==0.7.0
+ py-cpuinfo==9.0.0
+ py4j==0.10.9.7
+ pyarrow==18.1.0
+ pyasn1==0.6.1
+ pyasn1_modules==0.4.2
+ pycairo==1.28.0
+ pycocotools==2.0.8
+ pycparser==2.22
+ pydantic==2.11.4
+ pydantic_core==2.33.2
+ pydata-google-auth==1.9.1
+ pydot==3.0.4
+ pydotplus==2.0.2
+ PyDrive==1.3.1
+ PyDrive2==1.21.3
+ pydub==0.25.1
+ pyerfa==2.0.1.5
+ pygame==2.6.1
+ pygit2==1.18.0
+ Pygments==2.19.1
+ PyGObject==3.42.0
+ PyJWT==2.10.1
+ pylibcudf-cu12 @ https://pypi.nvidia.com/pylibcudf-cu12/pylibcudf_cu12-25.2.1-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl
+ pylibcugraph-cu12==25.2.0
+ pylibraft-cu12==25.2.0
+ pymc==5.22.0
+ pymystem3==0.2.0
+ pynndescent==0.5.13
+ pynvjitlink-cu12==0.5.2
+ pynvml==12.0.0
+ pyogrio==0.10.0
+ pyomo==6.9.2
+ PyOpenGL==3.1.9
+ pyOpenSSL==24.2.1
+ pyparsing==3.2.3
+ pyperclip==1.9.0
+ pyproj==3.7.1
+ pyproject_hooks==1.2.0
+ pyshp==2.3.1
+ PySocks==1.7.1
+ pyspark==3.5.1
+ pytensor==2.30.3
+ pytest==8.3.5
+ python-apt==0.0.0
+ python-box==7.3.2
+ python-dateutil==2.9.0.post0
+ python-louvain==0.16
+ python-multipart==0.0.20
+ python-slugify==8.0.4
+ python-snappy==0.7.3
+ python-utils==3.9.1
+ pytz==2025.2
+ pyviz_comms==3.0.4
+ PyYAML==6.0.2
+ pyzmq==24.0.1
+ raft-dask-cu12==25.2.0
+ rapids-dask-dependency==25.2.0
+ ratelim==0.1.6
+ referencing==0.36.2
+ regex==2024.11.6
+ requests==2.32.3
+ requests-oauthlib==2.0.0
+ requests-toolbelt==1.0.0
+ requirements-parser==0.9.0
+ rich==13.9.4
+ rmm-cu12==25.2.0
+ roman-numerals-py==3.1.0
+ rpds-py==0.24.0
+ rpy2==3.5.17
+ rsa==4.9.1
+ ruff==0.11.8
+ safehttpx==0.1.6
+ safetensors==0.5.3
+ scikit-image==0.25.2
+ scikit-learn==1.6.1
+ scipy==1.15.2
+ scooby==0.10.1
+ scs==3.2.7.post2
+ seaborn==0.13.2
+ SecretStorage==3.3.3
+ semantic-version==2.10.0
+ Send2Trash==1.8.3
+ sentence-transformers==3.4.1
+ sentencepiece==0.2.0
+ sentry-sdk==2.27.0
+ setproctitle==1.3.6
+ shap==0.47.2
+ shapely==2.1.0
+ shellingham==1.5.4
+ simple-parsing==0.1.7
+ simplejson==3.20.1
+ simsimd==6.2.1
+ six==1.17.0
+ sklearn-compat==0.1.3
+ sklearn-pandas==2.2.0
+ slicer==0.0.8
+ smart-open==7.1.0
+ smmap==5.0.2
+ sniffio==1.3.1
+ snowballstemmer==2.2.0
+ sortedcontainers==2.4.0
+ soundfile==0.13.1
+ soupsieve==2.7
+ soxr==0.5.0.post1
+ spacy==3.8.5
+ spacy-legacy==3.0.12
+ spacy-loggers==1.0.5
+ spanner-graph-notebook==1.1.6
+ Sphinx==8.2.3
+ sphinxcontrib-applehelp==2.0.0
+ sphinxcontrib-devhelp==2.0.0
+ sphinxcontrib-htmlhelp==2.1.0
+ sphinxcontrib-jsmath==1.0.1
+ sphinxcontrib-qthelp==2.0.0
+ sphinxcontrib-serializinghtml==2.0.0
+ SQLAlchemy==2.0.40
+ sqlglot==25.20.2
+ sqlparse==0.5.3
+ srsly==2.5.1
+ stanio==0.5.1
+ starlette==0.46.2
+ statsmodels==0.14.4
+ stringzilla==3.12.5
+ sympy==1.13.1
+ tables==3.10.2
+ tabulate==0.9.0
+ tbb==2022.1.0
+ tblib==3.1.0
+ tcmlib==1.3.0
+ tenacity==9.1.2
+ tensorboard==2.18.0
+ tensorboard-data-server==0.7.2
+ tensorflow==2.18.0
+ tensorflow-datasets==4.9.8
+ tensorflow-hub==0.16.1
+ tensorflow-io-gcs-filesystem==0.37.1
+ tensorflow-metadata==1.17.1
+ tensorflow-probability==0.25.0
+ tensorflow-text==2.18.1
+ tensorflow_decision_forests==1.11.0
+ tensorstore==0.1.74
+ termcolor==3.1.0
+ terminado==0.18.1
+ text-unidecode==1.3
+ textblob==0.19.0
+ tf-slim==1.1.0
+ tf_keras==2.18.0
+ thinc==8.3.6
+ threadpoolctl==3.6.0
+ tifffile==2025.3.30
+ timm==1.0.15
+ tinycss2==1.4.0
+ tokenizers==0.21.1
+ toml==0.10.2
+ tomlkit==0.13.2
+ toolz==0.12.1
+ torch @ https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp311-cp311-linux_x86_64.whl
+ torchaudio @ https://download.pytorch.org/whl/cu124/torchaudio-2.6.0%2Bcu124-cp311-cp311-linux_x86_64.whl
+ torchsummary==1.5.1
+ torchvision @ https://download.pytorch.org/whl/cu124/torchvision-0.21.0%2Bcu124-cp311-cp311-linux_x86_64.whl
+ tornado==6.4.2
+ tqdm==4.67.1
+ traitlets==5.7.1
+ traittypes==0.2.1
+ transformers==4.51.3
+ treelite==4.4.1
+ treescope==0.1.9
+ triton==3.2.0
+ tweepy==4.15.0
+ typeguard==4.4.2
+ typer==0.15.3
+ types-pytz==2025.2.0.20250326
+ types-setuptools==80.0.0.20250429
+ typing-inspection==0.4.0
+ typing_extensions==4.13.2
+ tzdata==2025.2
+ tzlocal==5.3.1
+ uc-micro-py==1.0.3
+ ucx-py-cu12==0.42.0
+ ucxx-cu12==0.42.0
+ umap-learn==0.5.7
+ umf==0.10.0
+ uritemplate==4.1.1
+ urllib3==2.4.0
+ uvicorn==0.34.2
+ vega-datasets==0.9.0
+ wadllib==1.3.6
+ wandb==0.19.10
+ wasabi==1.1.3
+ wcwidth==0.2.13
+ weasel==0.4.1
+ webcolors==24.11.1
+ webencodings==0.5.1
+ websocket-client==1.8.0
+ websockets==15.0.1
+ Werkzeug==3.1.3
+ widgetsnbextension==3.6.10
+ wordcloud==1.9.4
+ wrapt==1.17.2
+ wurlitzer==3.1.1
+ xarray==2025.3.1
+ xarray-einstats==0.8.0
+ xgboost==2.1.4
+ xlrd==2.0.1
+ xxhash==3.5.0
+ xyzservices==2025.4.0
+ yarl==1.20.0
+ ydf==0.11.0
+ yellowbrick==1.5
+ yfinance==0.2.57
+ zict==3.0.0
+ zipp==3.21.0
+ zstandard==0.23.0
ui_components/logo.py ADDED
@@ -0,0 +1,15 @@
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.base as ms
+
+
+ def Logo():
+     with antd.Typography.Title(level=1,
+                                elem_style=dict(fontSize=24,
+                                                padding=8,
+                                                margin=0)):
+         with antd.Flex(align="center", gap="small", justify="center"):
+             antd.Image('./assets/DownArrow.jpg',
+                        preview=False,
+                        alt="logo",
+                        width=96,
+                        height=96)
ui_components/settings_header.py ADDED
@@ -0,0 +1,54 @@
+ import gradio as gr
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.antdx as antdx
+ import modelscope_studio.components.base as ms
+
+ from config import DEFAULT_SETTINGS, MODEL_OPTIONS, MAX_THINKING_BUDGET, MIN_THINKING_BUDGET, get_text
+
+
+ def SettingsHeader():
+     state = gr.State({"open": True})
+     with antdx.Sender.Header(title=get_text("Settings", "设置"),
+                              open=True) as settings_header:
+         with antd.Form(value=DEFAULT_SETTINGS) as settings_form:
+             with antd.Form.Item(form_name="model",
+                                 label=get_text("Chat Model", "对话模型")):
+                 with antd.Select(options=MODEL_OPTIONS):
+                     with ms.Slot("labelRender",
+                                  params_mapping="""(option) => ({
+                                      label: option.label,
+                                      link: { href: window.MODEL_OPTIONS_MAP[option.value].link },
+                                  })"""):
+                         antd.Typography.Text(as_item="label")
+                         antd.Typography.Link(get_text("Model Link", "模型链接"),
+                                              href_target="_blank",
+                                              as_item="link")
+
+             with antd.Form.Item(form_name="thinking_budget",
+                                 label=get_text("Thinking Budget", "思考预算"),
+                                 elem_classes="setting-form-thinking-budget"):
+                 antd.Slider(elem_style=dict(flex=1, marginRight=14),
+                             min=MIN_THINKING_BUDGET,
+                             max=MAX_THINKING_BUDGET,
+                             tooltip=dict(formatter="(v) => `${v}k`"))
+                 antd.InputNumber(max=MAX_THINKING_BUDGET,
+                                  min=MIN_THINKING_BUDGET,
+                                  elem_style=dict(width=100),
+                                  addon_after="k")
+             # with antd.Form.Item(form_name="sys_prompt",
+             #                     label=get_text("System Prompt", "系统提示")):
+             #     antd.Input.Textarea(auto_size=dict(minRows=3, maxRows=6))
+
+     def close_header(state_value):
+         state_value["open"] = False
+         return gr.update(value=state_value)
+
+     state.change(fn=lambda state_value: gr.update(open=state_value["open"]),
+                  inputs=[state],
+                  outputs=[settings_header])
+
+     settings_header.open_change(fn=close_header,
+                                 inputs=[state],
+                                 outputs=[state])
+
+     return state, settings_form
ui_components/thinking_button.py ADDED
@@ -0,0 +1,28 @@
+ import modelscope_studio.components.antd as antd
+ import modelscope_studio.components.base as ms
+ import gradio as gr
+ from config import get_text
+
+
+ def ThinkingButton():
+     state = gr.State({"enable_thinking": True})
+     with antd.Button(get_text("Thinking", "深度思考"),
+                      shape="round",
+                      color="primary",
+                      variant="solid") as thinking_btn:
+         with ms.Slot("icon"):
+             antd.Icon("SunOutlined")
+
+     def toggle_thinking(state_value):
+         state_value["enable_thinking"] = not state_value["enable_thinking"]
+         return gr.update(value=state_value)
+
+     def apply_state_change(state_value):
+         # Solid button while thinking is enabled, default style otherwise.
+         return gr.update(
+             variant="solid" if state_value["enable_thinking"] else "")
+
+     state.change(fn=apply_state_change, inputs=[state], outputs=[thinking_btn])
+
+     thinking_btn.click(fn=toggle_thinking, inputs=[state], outputs=[state])
+
+     return state