Wendong-Fan committed on
Commit
9be7074
·
1 Parent(s): ac6ba97

update wendong

Browse files
README.md CHANGED
@@ -224,7 +224,7 @@ OWL requires various API keys to interact with different services. The `owl/.env
224
  2. **Configure Your API Keys**:
225
  Open the `.env` file in your preferred text editor and insert your API keys in the corresponding fields.
226
 
227
- > **Note**: For the minimal example (`run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
228
 
229
  ### Option 2: Setting Environment Variables Directly
230
 
@@ -275,7 +275,7 @@ cd .. && source .venv/bin/activate && cd owl
275
  playwright install-deps
276
 
277
  #run example demo script
278
- xvfb-python run.py
279
 
280
  # Option 2: Build and run using the provided scripts
281
  cd .container
@@ -299,17 +299,17 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude
299
  npx @wonderwhy-er/desktop-commander setup
300
 
301
  # Run the MCP example
302
- python owl/run_mcp.py
303
  ```
304
 
305
- This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `owl/run_mcp.py` for the full implementation.
306
 
307
  ## Basic Usage
308
 
309
  After installation and setting up your environment variables, you can start using OWL right away:
310
 
311
  ```bash
312
- python owl/run.py
313
  ```
314
 
315
  ## Running with Different Models
@@ -330,28 +330,28 @@ OWL supports various LLM backends, though capabilities may vary depending on the
330
 
331
  ```bash
332
  # Run with Qwen model
333
- python owl/examples/run_qwen_zh.py
334
 
335
  # Run with Deepseek model
336
- python owl/examples/run_deepseek_zh.py
337
 
338
  # Run with other OpenAI-compatible models
339
- python owl/examples/run_openai_compatiable_model.py
340
 
341
  # Run with Azure OpenAI
342
- python owl/run_azure_openai.py
343
 
344
  # Run with Ollama
345
- python owl/examples/run_ollama.py
346
  ```
347
 
348
  For a simpler version that only requires an LLM API key, you can try our minimal example:
349
 
350
  ```bash
351
- python owl/examples/run_mini.py
352
  ```
353
 
354
- You can run OWL agent with your own task by modifying the `run.py` script:
355
 
356
  ```python
357
  # Define your own task
@@ -393,7 +393,7 @@ Here are some tasks you can try with OWL:
393
 
394
  OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources:
395
 
396
- Try our comprehensive MCP example in `owl/run_mcp.py` to see these capabilities in action!
397
 
398
  ## Available Toolkits
399
 
@@ -464,10 +464,10 @@ OWL includes an intuitive web-based user interface that makes it easier to inter
464
 
465
  ```bash
466
  # Start the Chinese version
467
- python owl/webapp_zh.py
468
 
469
  # Start the English version
470
- python owl/webapp.py
471
  ```
472
 
473
  ## Features
@@ -545,7 +545,7 @@ Join us ([*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.o
545
 
546
  Join us for further discussions!
547
  <!-- ![](./assets/community.png) -->
548
- ![](./assets/community_8.jpg)
549
 
550
  # ❓ FAQ
551
 
 
224
  2. **Configure Your API Keys**:
225
  Open the `.env` file in your preferred text editor and insert your API keys in the corresponding fields.
226
 
227
+ > **Note**: For the minimal example (`examples/run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
228
 
229
  ### Option 2: Setting Environment Variables Directly
230
 
 
275
  playwright install-deps
276
 
277
  #run example demo script
278
+ xvfb-python examples/run.py
279
 
280
  # Option 2: Build and run using the provided scripts
281
  cd .container
 
299
  npx @wonderwhy-er/desktop-commander setup
300
 
301
  # Run the MCP example
302
+ python examples/run_mcp.py
303
  ```
304
 
305
+ This example showcases how OWL agents can seamlessly interact with file systems, web automation, and information retrieval through the MCP protocol. Check out `examples/run_mcp.py` for the full implementation.
306
 
307
  ## Basic Usage
308
 
309
  After installation and setting up your environment variables, you can start using OWL right away:
310
 
311
  ```bash
312
+ python examples/run.py
313
  ```
314
 
315
  ## Running with Different Models
 
330
 
331
  ```bash
332
  # Run with Qwen model
333
+ python examples/run_qwen_zh.py
334
 
335
  # Run with Deepseek model
336
+ python examples/run_deepseek_zh.py
337
 
338
  # Run with other OpenAI-compatible models
339
+ python examples/run_openai_compatiable_model.py
340
 
341
  # Run with Azure OpenAI
342
+ python examples/run_azure_openai.py
343
 
344
  # Run with Ollama
345
+ python examples/run_ollama.py
346
  ```
347
 
348
  For a simpler version that only requires an LLM API key, you can try our minimal example:
349
 
350
  ```bash
351
+ python examples/run_mini.py
352
  ```
353
 
354
+ You can run OWL agent with your own task by modifying the `examples/run.py` script:
355
 
356
  ```python
357
  # Define your own task
 
393
 
394
  OWL's MCP integration provides a standardized way for AI models to interact with various tools and data sources:
395
 
396
+ Try our comprehensive MCP example in `examples/run_mcp.py` to see these capabilities in action!
397
 
398
  ## Available Toolkits
399
 
 
464
 
465
  ```bash
466
  # Start the Chinese version
467
+ python examples/webapp_zh.py
468
 
469
  # Start the English version
470
+ python examples/webapp.py
471
  ```
472
 
473
  ## Features
 
545
 
546
  Join us for further discussions!
547
  <!-- ![](./assets/community.png) -->
548
+ ![](./assets/community.jpg)
549
 
550
  # ❓ FAQ
551
 
README_zh.md CHANGED
@@ -219,7 +219,7 @@ OWL 需要各种 API 密钥来与不同的服务进行交互。`owl/.env_templat
219
  2. **配置你的 API 密钥**:
220
  在你喜欢的文本编辑器中打开 `.env` 文件,并在相应字段中插入你的 API 密钥。
221
 
222
- > **注意**:对于最小示例(`run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。
223
 
224
  ### 选项 2:直接设置环境变量
225
 
@@ -269,7 +269,7 @@ cd .. && source .venv/bin/activate && cd owl
269
  playwright install-deps
270
 
271
  #运行例子演示脚本
272
- xvfb-python run.py
273
 
274
  # 选项2:使用提供的脚本构建和运行
275
  cd .container
@@ -293,23 +293,23 @@ npx -y @smithery/cli install @wonderwhy-er/desktop-commander --client claude
293
  npx @wonderwhy-er/desktop-commander setup
294
 
295
  # 运行 MCP 示例
296
- python owl/run_mcp.py
297
  ```
298
 
299
- 这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `owl/run_mcp.py` 了解完整实现。
300
 
301
  ## 基本用法
302
 
303
  运行以下示例:
304
 
305
  ```bash
306
- python owl/run.py
307
  ```
308
 
309
  我们还提供了一个最小化示例,只需配置LLM的API密钥即可运行:
310
 
311
  ```bash
312
- python owl/run_mini.py
313
  ```
314
 
315
  ## 使用不同的模型
@@ -330,22 +330,22 @@ OWL 支持多种 LLM 后端,但功能可能因模型的工具调用和多模
330
 
331
  ```bash
332
  # 使用 Qwen 模型运行
333
- python owl/examples/run_qwen_zh.py
334
 
335
  # 使用 Deepseek 模型运行
336
- python owl/examples/run_deepseek_zh.py
337
 
338
  # 使用其他 OpenAI 兼容模型运行
339
- python owl/examples/run_openai_compatiable_model.py
340
 
341
  # 使用 Azure OpenAI模型运行
342
- python owl/run_azure_openai.py
343
 
344
  # 使用 Ollama 运行
345
- python owl/examples/run_ollama.py
346
  ```
347
 
348
- 你可以通过修改 `run.py` 脚本来运行自己的任务:
349
 
350
  ```python
351
  # Define your own task
@@ -383,7 +383,7 @@ OWL 将自动调用与文档相关的工具来处理文件并提取答案。
383
 
384
  OWL 的 MCP 集成为 AI 模型与各种工具和数据源的交互提供了标准化的方式。
385
 
386
- 查看我们的综合示例 `owl/run_mcp.py` 来体验这些功能!
387
 
388
  ## 可用工具包
389
 
@@ -479,7 +479,7 @@ git checkout gaia58.18
479
 
480
  2. 运行评估脚本:
481
  ```bash
482
- python run_gaia_roleplaying.py
483
  ```
484
 
485
  # ⏱️ 未来计划
@@ -531,7 +531,7 @@ python run_gaia_roleplaying.py
531
 
532
  加入我们,参与更多讨论!
533
  <!-- ![](./assets/community.png) -->
534
- ![](./assets/community_8.jpg)
535
  <!-- ![](./assets/meetup.jpg) -->
536
 
537
  # ❓ 常见问题
 
219
  2. **配置你的 API 密钥**:
220
  在你喜欢的文本编辑器中打开 `.env` 文件,并在相应字段中插入你的 API 密钥。
221
 
222
+ > **注意**:对于最小示例(`examples/run_mini.py`),你只需要配置 LLM API 密钥(例如,`OPENAI_API_KEY`)。
223
 
224
  ### 选项 2:直接设置环境变量
225
 
 
269
  playwright install-deps
270
 
271
  #运行例子演示脚本
272
+ xvfb-python examples/run.py
273
 
274
  # 选项2:使用提供的脚本构建和运行
275
  cd .container
 
293
  npx @wonderwhy-er/desktop-commander setup
294
 
295
  # 运行 MCP 示例
296
+ python examples/run_mcp.py
297
  ```
298
 
299
+ 这个示例展示了 OWL 智能体如何通过 MCP 协议无缝地与文件系统、网页自动化和信息检索进行交互。查看 `examples/run_mcp.py` 了解完整实现。
300
 
301
  ## 基本用法
302
 
303
  运行以下示例:
304
 
305
  ```bash
306
+ python examples/run.py
307
  ```
308
 
309
  我们还提供了一个最小化示例,只需配置LLM的API密钥即可运行:
310
 
311
  ```bash
312
+ python examples/run_mini.py
313
  ```
314
 
315
  ## 使用不同的模型
 
330
 
331
  ```bash
332
  # 使用 Qwen 模型运行
333
+ python examples/run_qwen_zh.py
334
 
335
  # 使用 Deepseek 模型运行
336
+ python examples/run_deepseek_zh.py
337
 
338
  # 使用其他 OpenAI 兼容模型运行
339
+ python examples/run_openai_compatiable_model.py
340
 
341
  # 使用 Azure OpenAI模型运行
342
+ python examples/run_azure_openai.py
343
 
344
  # 使用 Ollama 运行
345
+ python examples/run_ollama.py
346
  ```
347
 
348
+ 你可以通过修改 `examples/run.py` 脚本来运行自己的任务:
349
 
350
  ```python
351
  # Define your own task
 
383
 
384
  OWL 的 MCP 集成为 AI 模型与各种工具和数据源的交互提供了标准化的方式。
385
 
386
+ 查看我们的综合示例 `examples/run_mcp.py` 来体验这些功能!
387
 
388
  ## 可用工具包
389
 
 
479
 
480
  2. 运行评估脚本:
481
  ```bash
482
+ python examples/run_gaia_roleplaying.py
483
  ```
484
 
485
  # ⏱️ 未来计划
 
531
 
532
  加入我们,参与更多讨论!
533
  <!-- ![](./assets/community.png) -->
534
+ ![](./assets/community.jpg)
535
  <!-- ![](./assets/meetup.jpg) -->
536
 
537
  # ❓ 常见问题
{owl/examples → examples}/run.py RENAMED
File without changes
{owl → examples}/run_azure_openai.py RENAMED
File without changes
{owl/examples → examples}/run_deepseek_zh.py RENAMED
File without changes
{owl/examples → examples}/run_gaia_roleplaying.py RENAMED
File without changes
{owl → examples}/run_mcp.py RENAMED
File without changes
{owl/examples → examples}/run_mini.py RENAMED
File without changes
{owl/examples → examples}/run_ollama.py RENAMED
File without changes
{owl/examples → examples}/run_openai_compatiable_model.py RENAMED
File without changes
{owl/examples → examples}/run_qwen_mini_zh.py RENAMED
File without changes
{owl/examples → examples}/run_qwen_zh.py RENAMED
File without changes
{owl/examples → examples}/run_terminal.py RENAMED
File without changes
{owl/examples → examples}/run_terminal_zh.py RENAMED
@@ -25,7 +25,6 @@ from camel.logger import set_log_level
25
 
26
  from owl.utils import run_society
27
  from camel.societies import RolePlaying
28
- import os
29
 
30
  load_dotenv()
31
  set_log_level(level="DEBUG")
 
25
 
26
  from owl.utils import run_society
27
  from camel.societies import RolePlaying
 
28
 
29
  load_dotenv()
30
  set_log_level(level="DEBUG")
owl/.env_template CHANGED
@@ -4,7 +4,7 @@
4
  #===========================================
5
 
6
  # OPENAI API (https://platform.openai.com/api-keys)
7
- # OPENAI_API_KEY= ""
8
  # OPENAI_API_BASE_URL=""
9
 
10
  # Azure OpenAI API
@@ -15,22 +15,22 @@
15
 
16
 
17
  # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
18
- # QWEN_API_KEY=""
19
 
20
  # DeepSeek API (https://platform.deepseek.com/api_keys)
21
- # DEEPSEEK_API_KEY=""
22
 
23
  #===========================================
24
  # Tools & Services API
25
  #===========================================
26
 
27
- # Google Search API (https://developers.google.com/custom-search/v1/overview)
28
- # GOOGLE_API_KEY=""
29
- # SEARCH_ENGINE_ID=""
30
 
31
  # Chunkr API (https://chunkr.ai/)
32
- # CHUNKR_API_KEY=""
33
 
34
  # Firecrawl API (https://www.firecrawl.dev/)
35
- #FIRECRAWL_API_KEY=""
36
  #FIRECRAWL_API_URL="https://api.firecrawl.dev"
 
4
  #===========================================
5
 
6
  # OPENAI API (https://platform.openai.com/api-keys)
7
+ OPENAI_API_KEY='Your_Key'
8
  # OPENAI_API_BASE_URL=""
9
 
10
  # Azure OpenAI API
 
15
 
16
 
17
  # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
18
+ QWEN_API_KEY='Your_Key'
19
 
20
  # DeepSeek API (https://platform.deepseek.com/api_keys)
21
+ DEEPSEEK_API_KEY='Your_Key'
22
 
23
  #===========================================
24
  # Tools & Services API
25
  #===========================================
26
 
27
+ # Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3)
28
+ GOOGLE_API_KEY='Your_Key'
29
+ SEARCH_ENGINE_ID='Your_ID'
30
 
31
  # Chunkr API (https://chunkr.ai/)
32
+ CHUNKR_API_KEY='Your_Key'
33
 
34
  # Firecrawl API (https://www.firecrawl.dev/)
35
+ FIRECRAWL_API_KEY='Your_Key'
36
  #FIRECRAWL_API_URL="https://api.firecrawl.dev"
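The `KEY='value'` lines that this commit uncomments in `owl/.env_template` follow a simple dotenv-style format. As a minimal sketch (not the project's official loader, though the `load_env_vars` helper added in `owl/webapp.py` below follows the same pattern), such a file can be parsed with a few lines of standard-library Python:

```python
def parse_env_template(text: str) -> dict:
    """Parse simple KEY='value' lines as used in owl/.env_template,
    skipping comments and blank lines (a sketch, not a full dotenv parser)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        # Ignore blanks, comments, and lines without an assignment
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        # Strip surrounding single or double quotes from the value
        env[key.strip()] = value.strip().strip("'\"")
    return env


sample = """
# OPENAI API (https://platform.openai.com/api-keys)
OPENAI_API_KEY='Your_Key'
# OPENAI_API_BASE_URL=""
SEARCH_ENGINE_ID='Your_ID'
"""
print(parse_env_template(sample))
# → {'OPENAI_API_KEY': 'Your_Key', 'SEARCH_ENGINE_ID': 'Your_ID'}
```

Note that commented-out keys (like `OPENAI_API_BASE_URL` above) are skipped, which is why the template ships with placeholder values rather than comments for the required keys.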
owl/utils/enhanced_role_playing.py CHANGED
@@ -461,6 +461,10 @@ def run_society(
461
  assistant_response.info["usage"]["completion_tokens"]
462
  + user_response.info["usage"]["completion_tokens"]
463
  )
 
464
 
465
  # convert tool call to dict
466
  tool_call_records: List[dict] = []
 
461
  assistant_response.info["usage"]["completion_tokens"]
462
  + user_response.info["usage"]["completion_tokens"]
463
  )
464
+ overall_prompt_token_count += (
465
+ assistant_response.info["usage"]["prompt_tokens"]
466
+ + user_response.info["usage"]["prompt_tokens"]
467
+ )
468
 
469
  # convert tool call to dict
470
  tool_call_records: List[dict] = []
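The hunk above adds prompt-token accumulation alongside the existing completion-token accumulation in `run_society`. The bookkeeping pattern can be sketched in isolation as follows (the dict-shaped `info["usage"]` responses here are stand-ins for the real camel response objects, not the actual API):

```python
def accumulate_usage(totals: dict, assistant_info: dict, user_info: dict) -> dict:
    """Add one conversation round's token usage (from both the assistant
    and user agents) to the running totals, mirroring the counters kept
    in run_society."""
    for field in ("completion_tokens", "prompt_tokens"):
        totals[field] = (
            totals.get(field, 0)
            + assistant_info["usage"][field]
            + user_info["usage"][field]
        )
    return totals


totals = {}
accumulate_usage(
    totals,
    {"usage": {"completion_tokens": 120, "prompt_tokens": 900}},
    {"usage": {"completion_tokens": 40, "prompt_tokens": 650}},
)
print(totals)  # → {'completion_tokens': 160, 'prompt_tokens': 1550}
```

Tracking prompt tokens separately matters because the web UI added in this commit reports completion, prompt, and total token counts individually.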
owl/webapp.py ADDED
@@ -0,0 +1,1316 @@
1
+ # ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
2
+ # Licensed under the Apache License, Version 2.0 (the "License");
3
+ # you may not use this file except in compliance with the License.
4
+ # You may obtain a copy of the License at
5
+ #
6
+ # http://www.apache.org/licenses/LICENSE-2.0
7
+ #
8
+ # Unless required by applicable law or agreed to in writing, software
9
+ # distributed under the License is distributed on an "AS IS" BASIS,
10
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11
+ # See the License for the specific language governing permissions and
12
+ # limitations under the License.
13
+ # ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
14
+ # Import from the correct module path
15
+ from owl.utils import run_society
16
+ import os
17
+ import gradio as gr
18
+ import time
19
+ import json
20
+ import logging
21
+ import datetime
22
+ from typing import Tuple
23
+ import importlib
24
+ from dotenv import load_dotenv, set_key, find_dotenv, unset_key
25
+ import threading
26
+ import queue
27
+ import re # For regular expression operations
28
+
29
+ os.environ["PYTHONIOENCODING"] = "utf-8"
30
+
31
+
32
+ # Configure logging system
33
+ def setup_logging():
34
+ """Configure logging system to output logs to file, memory queue, and console"""
35
+ # Create logs directory (if it doesn't exist)
36
+ logs_dir = os.path.join(os.path.dirname(__file__), "logs")
37
+ os.makedirs(logs_dir, exist_ok=True)
38
+
39
+ # Generate log filename (using current date)
40
+ current_date = datetime.datetime.now().strftime("%Y-%m-%d")
41
+ log_file = os.path.join(logs_dir, f"gradio_log_{current_date}.txt")
42
+
43
+ # Configure root logger (captures all logs)
44
+ root_logger = logging.getLogger()
45
+
46
+ # Clear existing handlers to avoid duplicate logs
47
+ for handler in root_logger.handlers[:]:
48
+ root_logger.removeHandler(handler)
49
+
50
+ root_logger.setLevel(logging.INFO)
51
+
52
+ # Create file handler
53
+ file_handler = logging.FileHandler(log_file, encoding="utf-8", mode="a")
54
+ file_handler.setLevel(logging.INFO)
55
+
56
+ # Create console handler
57
+ console_handler = logging.StreamHandler()
58
+ console_handler.setLevel(logging.INFO)
59
+
60
+ # Create formatter
61
+ formatter = logging.Formatter(
62
+ "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
63
+ )
64
+ file_handler.setFormatter(formatter)
65
+ console_handler.setFormatter(formatter)
66
+
67
+ # Add handlers to root logger
68
+ root_logger.addHandler(file_handler)
69
+ root_logger.addHandler(console_handler)
70
+
71
+ logging.info("Logging system initialized, log file: %s", log_file)
72
+ return log_file
73
+
74
+
75
+ # Global variables
76
+ LOG_FILE = None
77
+ LOG_QUEUE: queue.Queue = queue.Queue() # Log queue
78
+ STOP_LOG_THREAD = threading.Event()
79
+ CURRENT_PROCESS = None # Used to track the currently running process
80
+ STOP_REQUESTED = threading.Event() # Used to mark if stop was requested
81
+
82
+
83
+ # Log reading and updating functions
84
+ def log_reader_thread(log_file):
85
+ """Background thread that continuously reads the log file and adds new lines to the queue"""
86
+ try:
87
+ with open(log_file, "r", encoding="utf-8") as f:
88
+ # Move to the end of file
89
+ f.seek(0, 2)
90
+
91
+ while not STOP_LOG_THREAD.is_set():
92
+ line = f.readline()
93
+ if line:
94
+ LOG_QUEUE.put(line) # Add to conversation record queue
95
+ else:
96
+ # No new lines, wait for a short time
97
+ time.sleep(0.1)
98
+ except Exception as e:
99
+ logging.error(f"Log reader thread error: {str(e)}")
100
+
101
+
102
+ def get_latest_logs(max_lines=100, queue_source=None):
103
+ """Get the latest log lines from the queue, or read directly from the file if the queue is empty
104
+
105
+ Args:
106
+ max_lines: Maximum number of lines to return
107
+ queue_source: Specify which queue to use, default is LOG_QUEUE
108
+
109
+ Returns:
110
+ str: Log content
111
+ """
112
+ logs = []
113
+ log_queue = queue_source if queue_source else LOG_QUEUE
114
+
115
+ # Create a temporary queue to store logs so we can process them without removing them from the original queue
116
+ temp_queue = queue.Queue()
117
+ temp_logs = []
118
+
119
+ try:
120
+ # Try to get all available log lines from the queue
121
+ while not log_queue.empty() and len(temp_logs) < max_lines:
122
+ log = log_queue.get_nowait()
123
+ temp_logs.append(log)
124
+ temp_queue.put(log) # Put the log back into the temporary queue
125
+ except queue.Empty:
126
+ pass
127
+
128
+ # Process conversation records
129
+ logs = temp_logs
130
+
131
+ # If there are no new logs or not enough logs, try to read the last few lines directly from the file
132
+ if len(logs) < max_lines and LOG_FILE and os.path.exists(LOG_FILE):
133
+ try:
134
+ with open(LOG_FILE, "r", encoding="utf-8") as f:
135
+ all_lines = f.readlines()
136
+ # If there are already some logs in the queue, only read the remaining needed lines
137
+ remaining_lines = max_lines - len(logs)
138
+ file_logs = (
139
+ all_lines[-remaining_lines:]
140
+ if len(all_lines) > remaining_lines
141
+ else all_lines
142
+ )
143
+
144
+ # Add file logs before queue logs
145
+ logs = file_logs + logs
146
+ except Exception as e:
147
+ error_msg = f"Error reading log file: {str(e)}"
148
+ logging.error(error_msg)
149
+ if not logs: # Only add error message if there are no logs
150
+ logs = [error_msg]
151
+
152
+ # If there are still no logs, return a prompt message
153
+ if not logs:
154
+ return "Initialization in progress..."
155
+
156
+ # Filter logs, only keep logs with 'camel.agents.chat_agent - INFO'
157
+ filtered_logs = []
158
+ for log in logs:
159
+ if "camel.agents.chat_agent - INFO" in log:
160
+ filtered_logs.append(log)
161
+
162
+ # If there are no logs after filtering, return a prompt message
163
+ if not filtered_logs:
164
+ return "No conversation records yet."
165
+
166
+ # Process log content, extract the latest user and assistant messages
167
+ simplified_logs = []
168
+
169
+ # Use a set to track messages that have already been processed, to avoid duplicates
170
+ processed_messages = set()
171
+
172
+ def process_message(role, content):
173
+ # Create a unique identifier to track this message
174
+ msg_id = f"{role}:{content}"
175
+ if msg_id in processed_messages:
176
+ return None
177
+
178
+ processed_messages.add(msg_id)
179
+ content = content.replace("\\n", "\n")
180
+ lines = [line.strip() for line in content.split("\n")]
181
+ content = "\n".join(lines)
182
+
183
+ return f"[{role.title()} Agent]: {content}"
184
+
185
+ for log in filtered_logs:
186
+ formatted_messages = []
187
+ # Try to extract the message array
188
+ messages_match = re.search(
189
+ r"Model (.*?), index (\d+), processed these messages: (\[.*\])", log
190
+ )
191
+
192
+ if messages_match:
193
+ try:
194
+ messages = json.loads(messages_match.group(3))
195
+ for msg in messages:
196
+ if msg.get("role") in ["user", "assistant"]:
197
+ formatted_msg = process_message(
198
+ msg.get("role"), msg.get("content", "")
199
+ )
200
+ if formatted_msg:
201
+ formatted_messages.append(formatted_msg)
202
+ except json.JSONDecodeError:
203
+ pass
204
+
205
+ # If JSON parsing fails or no message array is found, try to extract conversation content directly
206
+ if not formatted_messages:
207
+ user_pattern = re.compile(r"\{'role': 'user', 'content': '(.*?)'\}")
208
+ assistant_pattern = re.compile(
209
+ r"\{'role': 'assistant', 'content': '(.*?)'\}"
210
+ )
211
+
212
+ for content in user_pattern.findall(log):
213
+ formatted_msg = process_message("user", content)
214
+ if formatted_msg:
215
+ formatted_messages.append(formatted_msg)
216
+
217
+ for content in assistant_pattern.findall(log):
218
+ formatted_msg = process_message("assistant", content)
219
+ if formatted_msg:
220
+ formatted_messages.append(formatted_msg)
221
+
222
+ if formatted_messages:
223
+ simplified_logs.append("\n\n".join(formatted_messages))
224
+
225
+ # Format log output, ensure appropriate separation between each conversation record
226
+ formatted_logs = []
227
+ for i, log in enumerate(simplified_logs):
228
+ # Remove excess whitespace characters from beginning and end
229
+ log = log.strip()
230
+
231
+ formatted_logs.append(log)
232
+
233
+ # Ensure each conversation record ends with a newline
234
+ if not log.endswith("\n"):
235
+ formatted_logs.append("\n")
236
+
237
+ return "".join(formatted_logs)
238
+
239
+
240
+ # Dictionary containing module descriptions
241
+ MODULE_DESCRIPTIONS = {
242
+ "run": "Default mode: Using OpenAI model's default agent collaboration mode, suitable for most tasks.",
243
+ "run_mini": "Using OpenAI model with minimal configuration to process tasks",
244
+ "run_deepseek_zh": "Using deepseek model to process Chinese tasks",
245
+ "run_openai_compatiable_model": "Using openai compatible model to process tasks",
246
+ "run_ollama": "Using local ollama model to process tasks",
247
+ "run_qwen_mini_zh": "Using qwen model with minimal configuration to process tasks",
248
+ "run_qwen_zh": "Using qwen model to process tasks",
249
+ }
250
+
251
+
252
+ # Default environment variable template
253
+ DEFAULT_ENV_TEMPLATE = """#===========================================
254
+ # MODEL & API
255
+ # (See https://docs.camel-ai.org/key_modules/models.html#)
256
+ #===========================================
257
+
258
+ # OPENAI API (https://platform.openai.com/api-keys)
259
+ OPENAI_API_KEY='Your_Key'
260
+ # OPENAI_API_BASE_URL=""
261
+
262
+ # Azure OpenAI API
263
+ # AZURE_OPENAI_BASE_URL=""
264
+ # AZURE_API_VERSION=""
265
+ # AZURE_OPENAI_API_KEY=""
266
+ # AZURE_DEPLOYMENT_NAME=""
267
+
268
+
269
+ # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
270
+ QWEN_API_KEY='Your_Key'
271
+
272
+ # DeepSeek API (https://platform.deepseek.com/api_keys)
273
+ DEEPSEEK_API_KEY='Your_Key'
274
+
275
+ #===========================================
276
+ # Tools & Services API
277
+ #===========================================
278
+
279
+ # Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3)
280
+ GOOGLE_API_KEY='Your_Key'
281
+ SEARCH_ENGINE_ID='Your_ID'
282
+
283
+ # Chunkr API (https://chunkr.ai/)
284
+ CHUNKR_API_KEY='Your_Key'
285
+
286
+ # Firecrawl API (https://www.firecrawl.dev/)
287
+ FIRECRAWL_API_KEY='Your_Key'
288
+ #FIRECRAWL_API_URL="https://api.firecrawl.dev"
289
+ """
290
+
291
+
292
+ def validate_input(question: str) -> bool:
293
+ """Validate if user input is valid
294
+
295
+ Args:
296
+ question: User question
297
+
298
+ Returns:
299
+ bool: Whether the input is valid
300
+ """
301
+ # Check if input is empty or contains only spaces
302
+ if not question or question.strip() == "":
303
+ return False
304
+ return True
305
+
306
+
307
+ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
308
+ """Run the OWL system and return results
309
+
310
+ Args:
311
+ question: User question
312
+ example_module: Example module name to import (e.g., "run_terminal_zh" or "run_deep")
313
+
314
+ Returns:
315
+ Tuple[...]: Answer, token count, status
316
+ """
317
+ global CURRENT_PROCESS
318
+
319
+ # Validate input
320
+ if not validate_input(question):
321
+ logging.warning("User submitted invalid input")
322
+ return (
323
+ "Please enter a valid question",
324
+ "0",
325
+ "❌ Error: Invalid input question",
326
+ )
327
+
328
+ try:
329
+ # Ensure environment variables are loaded
330
+ load_dotenv(find_dotenv(), override=True)
331
+ logging.info(
332
+ f"Processing question: '{question}', using module: {example_module}"
333
+ )
334
+
335
+ # Check if the module is in MODULE_DESCRIPTIONS
336
+ if example_module not in MODULE_DESCRIPTIONS:
337
+ logging.error(f"User selected an unsupported module: {example_module}")
338
+ return (
339
+ f"Selected module '{example_module}' is not supported",
340
+ "0",
341
+ "❌ Error: Unsupported module",
342
+ )
343
+
344
+ # Dynamically import target module
345
+ module_path = f"examples.{example_module}"
346
+ try:
347
+ logging.info(f"Importing module: {module_path}")
348
+ module = importlib.import_module(module_path)
349
+ except ImportError as ie:
350
+ logging.error(f"Unable to import module {module_path}: {str(ie)}")
351
+ return (
352
+ f"Unable to import module: {module_path}",
353
+ "0",
354
+ f"❌ Error: Module {example_module} does not exist or cannot be loaded - {str(ie)}",
355
+ )
356
+ except Exception as e:
357
+ logging.error(
358
+ f"Error occurred while importing module {module_path}: {str(e)}"
359
+ )
360
+ return (
361
+ f"Error occurred while importing module: {module_path}",
362
+ "0",
363
+ f"❌ Error: {str(e)}",
364
+ )
365
+
366
+ # Check if it contains the construct_society function
367
+ if not hasattr(module, "construct_society"):
368
+ logging.error(
369
+ f"construct_society function not found in module {module_path}"
370
+ )
371
+ return (
372
+ f"construct_society function not found in module {module_path}",
373
+ "0",
374
+ "❌ Error: Module interface incompatible",
375
+ )
376
+
377
+ # Build society simulation
378
+ try:
379
+ logging.info("Building society simulation...")
380
+ society = module.construct_society(question)
381
+
382
+ except Exception as e:
383
+ logging.error(f"Error occurred while building society simulation: {str(e)}")
384
+ return (
385
+ f"Error occurred while building society simulation: {str(e)}",
386
+ "0",
387
+ f"❌ Error: Build failed - {str(e)}",
388
+ )
389
+
390
+ # Run society simulation
391
+ try:
392
+ logging.info("Running society simulation...")
393
+ answer, chat_history, token_info = run_society(society)
394
+ logging.info("Society simulation completed")
395
+ except Exception as e:
396
+ logging.error(f"Error occurred while running society simulation: {str(e)}")
397
+ return (
398
+ f"Error occurred while running society simulation: {str(e)}",
399
+ "0",
400
+ f"❌ Error: Run failed - {str(e)}",
401
+ )
402
+
403
+ # Safely get token count
404
+ if not isinstance(token_info, dict):
405
+ token_info = {}
406
+
407
+ completion_tokens = token_info.get("completion_token_count", 0)
408
+ prompt_tokens = token_info.get("prompt_token_count", 0)
409
+ total_tokens = completion_tokens + prompt_tokens
410
+
411
+ logging.info(
412
+ f"Processing completed, token usage: completion={completion_tokens}, prompt={prompt_tokens}, total={total_tokens}"
413
+ )
414
+
415
+ return (
416
+ answer,
417
+ f"Completion tokens: {completion_tokens:,} | Prompt tokens: {prompt_tokens:,} | Total: {total_tokens:,}",
418
+ "✅ Successfully completed",
419
+ )
420
+
421
+ except Exception as e:
422
+ logging.error(
423
+ f"Uncaught error occurred while processing the question: {str(e)}"
424
+ )
425
+ return (f"Error occurred: {str(e)}", "0", f"❌ Error: {str(e)}")
426
+
427
+
428
+def update_module_description(module_name: str) -> str:
+    """Return the description of the selected module"""
+    return MODULE_DESCRIPTIONS.get(module_name, "No description available")
+
+
+# Store environment variables configured from the frontend
+WEB_FRONTEND_ENV_VARS: dict[str, str] = {}
+
+
+def init_env_file():
+    """Initialize the .env file if it doesn't exist"""
+    dotenv_path = find_dotenv()
+    if not dotenv_path:
+        with open(".env", "w") as f:
+            f.write(DEFAULT_ENV_TEMPLATE)
+        dotenv_path = find_dotenv()
+    return dotenv_path
+
+
+def load_env_vars():
+    """Load environment variables and return them as a dictionary
+
+    Returns:
+        dict: Environment variable dictionary; each value is a (value, source) tuple
+    """
+    dotenv_path = init_env_file()
+    load_dotenv(dotenv_path, override=True)
+
+    # Read environment variables from the .env file
+    env_file_vars = {}
+    with open(dotenv_path, "r") as f:
+        for line in f:
+            line = line.strip()
+            if line and not line.startswith("#"):
+                if "=" in line:
+                    key, value = line.split("=", 1)
+                    env_file_vars[key.strip()] = value.strip().strip("\"'")
+
+    # Get from system environment variables
+    system_env_vars = {
+        k: v
+        for k, v in os.environ.items()
+        if k not in env_file_vars and k not in WEB_FRONTEND_ENV_VARS
+    }
+
+    # Merge environment variables and mark sources
+    env_vars = {}
+
+    # Add system environment variables (lowest priority)
+    for key, value in system_env_vars.items():
+        env_vars[key] = (value, "System")
+
+    # Add .env file environment variables (medium priority)
+    for key, value in env_file_vars.items():
+        env_vars[key] = (value, ".env file")
+
+    # Add frontend-configured environment variables (highest priority)
+    for key, value in WEB_FRONTEND_ENV_VARS.items():
+        env_vars[key] = (value, "Frontend configuration")
+        # Ensure operating system environment variables are also updated
+        os.environ[key] = value
+
+    return env_vars
+
+
+def save_env_vars(env_vars):
+    """Save environment variables to the .env file
+
+    Args:
+        env_vars: Dictionary whose keys are variable names; values can be strings or (value, source) tuples
+    """
+    try:
+        dotenv_path = init_env_file()
+
+        # Save each environment variable
+        for key, value_data in env_vars.items():
+            if key and key.strip():  # Ensure key is not empty
+                # Handle the case where the value might be a tuple
+                if isinstance(value_data, tuple):
+                    value = value_data[0]
+                else:
+                    value = value_data
+
+                set_key(dotenv_path, key.strip(), value.strip())
+
+        # Reload environment variables to ensure they take effect
+        load_dotenv(dotenv_path, override=True)
+
+        return True, "Environment variables have been successfully saved!"
+    except Exception as e:
+        return False, f"Error saving environment variables: {str(e)}"
+
+
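The three-level precedence implemented in `load_env_vars` above (system environment < `.env` file < frontend configuration) can be sketched in isolation. This is an illustrative sketch: `merge_env_sources` is a hypothetical name, not a function in this patch.

```python
def merge_env_sources(system_vars, dotenv_vars, frontend_vars):
    """Merge environment variables with rising priority:
    system < .env file < frontend configuration.
    Each value is tagged with its source, mirroring load_env_vars()."""
    merged = {}
    for key, value in system_vars.items():
        merged[key] = (value, "System")
    for key, value in dotenv_vars.items():
        merged[key] = (value, ".env file")  # overrides system values
    for key, value in frontend_vars.items():
        merged[key] = (value, "Frontend configuration")  # highest priority
    return merged


merged = merge_env_sources(
    {"PATH": "/usr/bin", "OPENAI_API_KEY": "sys-key"},
    {"OPENAI_API_KEY": "file-key"},
    {"OPENAI_API_KEY": "ui-key"},
)
```

Because later loops simply overwrite earlier entries, the last source written for a key wins, which is exactly the precedence the docstrings in this patch describe.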
+def add_env_var(key, value, from_frontend=True):
+    """Add or update a single environment variable
+
+    Args:
+        key: Environment variable name
+        value: Environment variable value
+        from_frontend: Whether it comes from the frontend configuration, default True
+    """
+    try:
+        if not key or not key.strip():
+            return False, "Variable name cannot be empty"
+
+        key = key.strip()
+        value = value.strip()
+
+        # If from the frontend, add to the frontend environment variable dictionary
+        if from_frontend:
+            WEB_FRONTEND_ENV_VARS[key] = value
+            # Directly update system environment variables
+            os.environ[key] = value
+
+        # Also update the .env file
+        dotenv_path = init_env_file()
+        set_key(dotenv_path, key, value)
+        load_dotenv(dotenv_path, override=True)
+
+        return True, f"Environment variable {key} has been successfully added/updated!"
+    except Exception as e:
+        return False, f"Error adding environment variable: {str(e)}"
+
+
+def delete_env_var(key):
+    """Delete an environment variable"""
+    try:
+        if not key or not key.strip():
+            return False, "Variable name cannot be empty"
+
+        key = key.strip()
+
+        # Delete from the .env file
+        dotenv_path = init_env_file()
+        unset_key(dotenv_path, key)
+
+        # Delete from the frontend environment variable dictionary
+        if key in WEB_FRONTEND_ENV_VARS:
+            del WEB_FRONTEND_ENV_VARS[key]
+
+        # Also delete from the current process environment
+        if key in os.environ:
+            del os.environ[key]
+
+        return True, f"Environment variable {key} has been successfully deleted!"
+    except Exception as e:
+        return False, f"Error deleting environment variable: {str(e)}"
+
+
+def is_api_related(key: str) -> bool:
+    """Determine whether an environment variable is API-related
+
+    Args:
+        key: Environment variable name
+
+    Returns:
+        bool: Whether it is API-related
+    """
+    # API-related keywords
+    api_keywords = [
+        "api",
+        "key",
+        "token",
+        "secret",
+        "password",
+        "openai",
+        "qwen",
+        "deepseek",
+        "google",
+        "search",
+        "hf",
+        "hugging",
+        "chunkr",
+        "firecrawl",
+    ]
+
+    # Check whether it contains API-related keywords (case-insensitive)
+    return any(keyword in key.lower() for keyword in api_keywords)
+
+
+def get_api_guide(key: str) -> str:
+    """Return the corresponding API guide based on the environment variable name
+
+    Args:
+        key: Environment variable name
+
+    Returns:
+        str: API guide link or description
+    """
+    key_lower = key.lower()
+    if "openai" in key_lower:
+        return "https://platform.openai.com/api-keys"
+    elif "qwen" in key_lower or "dashscope" in key_lower:
+        return "https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key"
+    elif "deepseek" in key_lower:
+        return "https://platform.deepseek.com/api_keys"
+    elif "google" in key_lower:
+        return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3"
+    elif "search_engine_id" in key_lower:
+        return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3"
+    elif "chunkr" in key_lower:
+        return "https://chunkr.ai/"
+    elif "firecrawl" in key_lower:
+        return "https://www.firecrawl.dev/"
+    else:
+        return ""
+
+
+def update_env_table():
+    """Update the environment variable table display, showing only API-related variables"""
+    env_vars = load_env_vars()
+    # Keep only API-related environment variables
+    api_env_vars = {k: v for k, v in env_vars.items() if is_api_related(k)}
+    # Convert to list format to meet Gradio Dataframe requirements
+    # Format: [Variable name, Variable value, Guide link]
+    result = []
+    for k, v in api_env_vars.items():
+        guide = get_api_guide(k)
+        # If there is a guide link, create a clickable link
+        guide_link = (
+            f"<a href='{guide}' target='_blank' class='guide-link'>🔗 Get</a>"
+            if guide
+            else ""
+        )
+        result.append([k, v[0], guide_link])
+    return result
+
+
+def save_env_table_changes(data):
+    """Save changes to the environment variable table
+
+    Args:
+        data: Dataframe data, possibly a pandas DataFrame object
+
+    Returns:
+        str: Operation status information, containing an HTML-formatted status message
+    """
+    try:
+        logging.info(
+            f"Starting to process environment variable table data, type: {type(data)}"
+        )
+
+        # Get all current environment variables
+        current_env_vars = load_env_vars()
+        processed_keys = set()  # Record processed keys to detect deleted variables
+
+        # Handle pandas DataFrame objects
+        import pandas as pd
+
+        if isinstance(data, pd.DataFrame):
+            # Get column name information
+            columns = data.columns.tolist()
+            logging.info(f"DataFrame column names: {columns}")
+
+            # Iterate over each row of the DataFrame
+            for index, row in data.iterrows():
+                # Access the data by column position
+                if len(columns) >= 3:
+                    # Get variable name and value (column 0 is the name, column 1 is the value)
+                    key = row[0] if isinstance(row, pd.Series) else row.iloc[0]
+                    value = row[1] if isinstance(row, pd.Series) else row.iloc[1]
+
+                    # Check whether it is an empty row or a deleted variable
+                    if (
+                        key and str(key).strip()
+                    ):  # If the key name is not empty, add or update
+                        logging.info(
+                            f"Processing environment variable: {key} = {value}"
+                        )
+                        add_env_var(key, str(value))
+                        processed_keys.add(key)
+        # Handle other formats
+        elif isinstance(data, dict):
+            logging.info(f"Dictionary format data keys: {list(data.keys())}")
+            # If it is a dictionary, try different keys
+            if "data" in data:
+                rows = data["data"]
+            elif "values" in data:
+                rows = data["values"]
+            elif "value" in data:
+                rows = data["value"]
+            else:
+                # Try using the dictionary itself as row data
+                rows = []
+                for key, value in data.items():
+                    if key not in ["headers", "types", "columns"]:
+                        rows.append([key, value])
+
+            if isinstance(rows, list):
+                for row in rows:
+                    if isinstance(row, list) and len(row) >= 2:
+                        key, value = row[0], row[1]
+                        if key and str(key).strip():
+                            add_env_var(key, str(value))
+                            processed_keys.add(key)
+        elif isinstance(data, list):
+            # List format
+            for row in data:
+                if isinstance(row, list) and len(row) >= 2:
+                    key, value = row[0], row[1]
+                    if key and str(key).strip():
+                        add_env_var(key, str(value))
+                        processed_keys.add(key)
+        else:
+            logging.error(f"Unknown data format: {type(data)}")
+            return f"❌ Save failed: Unknown data format {type(data)}"
+
+        # Handle deleted variables - check for variables in the current environment that no longer appear in the table
+        api_related_keys = {k for k in current_env_vars.keys() if is_api_related(k)}
+        keys_to_delete = api_related_keys - processed_keys
+
+        # Delete variables that are no longer in the table
+        for key in keys_to_delete:
+            logging.info(f"Deleting environment variable: {key}")
+            delete_env_var(key)
+
+        return "✅ Environment variables have been successfully saved"
+    except Exception as e:
+        import traceback
+
+        error_details = traceback.format_exc()
+        logging.error(f"Error saving environment variables: {str(e)}\n{error_details}")
+        return f"❌ Save failed: {str(e)}"
+
+
+def get_env_var_value(key):
+    """Get the actual value of an environment variable
+
+    Priority: Frontend configuration > .env file > System environment variables
+    """
+    # Check frontend-configured environment variables
+    if key in WEB_FRONTEND_ENV_VARS:
+        return WEB_FRONTEND_ENV_VARS[key]
+
+    # Check system environment variables (including those loaded from .env)
+    return os.environ.get(key, "")
+
+
+def create_ui():
+    """Create the enhanced Gradio interface"""
+
+    # Define the conversation record update function
+    def update_logs2():
+        """Get the latest conversation records and return them to the frontend for display"""
+        return get_latest_logs(100, LOG_QUEUE)
+
+    def clear_log_file():
+        """Clear the log file content"""
+        try:
+            if LOG_FILE and os.path.exists(LOG_FILE):
+                # Clear the log file content instead of deleting the file
+                open(LOG_FILE, "w").close()
+                logging.info("Log file has been cleared")
+                # Clear the log queue
+                while not LOG_QUEUE.empty():
+                    try:
+                        LOG_QUEUE.get_nowait()
+                    except queue.Empty:
+                        break
+                return ""
+            else:
+                return ""
+        except Exception as e:
+            logging.error(f"Error clearing log file: {str(e)}")
+            return ""
+
+    # Create a real-time log update function
+    def process_with_live_logs(question, module_name):
+        """Process questions and update logs in real time"""
+        global CURRENT_PROCESS
+
+        # Clear the log file
+        clear_log_file()
+
+        # Create a background thread to process the question
+        result_queue = queue.Queue()
+
+        def process_in_background():
+            try:
+                result = run_owl(question, module_name)
+                result_queue.put(result)
+            except Exception as e:
+                result_queue.put(
+                    (f"Error occurred: {str(e)}", "0", f"❌ Error: {str(e)}")
+                )
+
+        # Start the background processing thread
+        bg_thread = threading.Thread(target=process_in_background)
+        CURRENT_PROCESS = bg_thread  # Record the current process
+        bg_thread.start()
+
+        # While waiting for processing to complete, update logs once per second
+        while bg_thread.is_alive():
+            # Update the conversation record display
+            logs2 = get_latest_logs(100, LOG_QUEUE)
+
+            # Always update the status
+            yield (
+                "0",
+                "<span class='status-indicator status-running'></span> Processing...",
+                logs2,
+            )
+
+            time.sleep(1)
+
+        # Processing complete, get the results
+        if not result_queue.empty():
+            result = result_queue.get()
+            answer, token_count, status = result
+
+            # Final update of the conversation record
+            logs2 = get_latest_logs(100, LOG_QUEUE)
+
+            # Set different indicators based on the status
+            if "Error" in status:
+                status_with_indicator = (
+                    f"<span class='status-indicator status-error'></span> {status}"
+                )
+            else:
+                status_with_indicator = (
+                    f"<span class='status-indicator status-success'></span> {status}"
+                )
+
+            yield token_count, status_with_indicator, logs2
+        else:
+            logs2 = get_latest_logs(100, LOG_QUEUE)
+            yield (
+                "0",
+                "<span class='status-indicator status-error'></span> Terminated",
+                logs2,
+            )
+
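The worker-thread-plus-generator pattern used by `process_with_live_logs` above (run the task in the background, yield a status while it is alive, then yield the result from a queue) can be reduced to a small standalone sketch; `run_with_progress` is a hypothetical name used only for illustration.

```python
import queue
import threading
import time


def run_with_progress(task, poll_interval=0.01):
    """Run `task` in a background thread, yielding a 'running' status
    while it executes and the final result when it finishes --
    the same shape process_with_live_logs streams into Gradio."""
    result_queue = queue.Queue()

    def worker():
        try:
            result_queue.put(("ok", task()))
        except Exception as e:  # surface errors instead of losing them
            result_queue.put(("error", str(e)))

    thread = threading.Thread(target=worker)
    thread.start()
    while thread.is_alive():
        yield ("running", None)
        time.sleep(poll_interval)
    thread.join()
    yield result_queue.get()


states = list(run_with_progress(lambda: 2 + 2))
```

Yielding from the polling loop is what lets a Gradio event handler update outputs repeatedly before the final answer arrives.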
+    with gr.Blocks(theme=gr.themes.Soft(primary_hue="blue")) as app:
+        gr.Markdown(
+            """
+            # 🦉 OWL Multi-Agent Collaboration System
+
+            An advanced multi-agent collaboration system built on the CAMEL framework, designed to solve complex problems through agent collaboration.
+            Models and tools can be customized by modifying local scripts.
+            This web app is currently in beta development. It is provided for demonstration and testing purposes only and is not yet recommended for production use.
+            """
+        )
+
+        # Add custom CSS
+        gr.HTML("""
+        <style>
+        /* Chat container style */
+        .chat-container .chatbot {
+            height: 500px;
+            overflow-y: auto;
+            border-radius: 10px;
+            box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
+        }
+
+        /* Improved tab style */
+        .tabs .tab-nav {
+            background-color: #f5f5f5;
+            border-radius: 8px 8px 0 0;
+            padding: 5px;
+        }
+
+        .tabs .tab-nav button {
+            border-radius: 5px;
+            margin: 0 3px;
+            padding: 8px 15px;
+            font-weight: 500;
+        }
+
+        .tabs .tab-nav button.selected {
+            background-color: #2c7be5;
+            color: white;
+        }
+
+        /* Status indicator style */
+        .status-indicator {
+            display: inline-block;
+            width: 10px;
+            height: 10px;
+            border-radius: 50%;
+            margin-right: 5px;
+        }
+
+        .status-running {
+            background-color: #ffc107;
+            animation: pulse 1.5s infinite;
+        }
+
+        .status-success {
+            background-color: #28a745;
+        }
+
+        .status-error {
+            background-color: #dc3545;
+        }
+
+        /* Log display area style */
+        .log-display textarea {
+            height: 400px !important;
+            max-height: 400px !important;
+            overflow-y: auto !important;
+            font-family: monospace;
+            font-size: 0.9em;
+            white-space: pre-wrap;
+            line-height: 1.4;
+        }
+
+        /* Environment variable management style */
+        .env-manager-container {
+            border-radius: 10px;
+            padding: 15px;
+            background-color: #f9f9f9;
+            margin-bottom: 20px;
+        }
+
+        .env-controls, .api-help-container {
+            border-radius: 8px;
+            padding: 15px;
+            background-color: white;
+            box-shadow: 0 2px 6px rgba(0, 0, 0, 0.05);
+            height: 100%;
+        }
+
+        .env-add-group, .env-delete-group {
+            margin-top: 20px;
+            padding: 15px;
+            border-radius: 8px;
+            background-color: #f5f8ff;
+            border: 1px solid #e0e8ff;
+        }
+
+        .env-delete-group {
+            background-color: #fff5f5;
+            border: 1px solid #ffe0e0;
+        }
+
+        .env-buttons {
+            justify-content: flex-start;
+            gap: 10px;
+            margin-top: 10px;
+        }
+
+        .env-button {
+            min-width: 100px;
+        }
+
+        .delete-button {
+            background-color: #dc3545;
+            color: white;
+        }
+
+        .env-table {
+            margin-bottom: 15px;
+        }
+
+        /* Improved environment variable table style */
+        .env-table table {
+            border-collapse: separate;
+            border-spacing: 0;
+            width: 100%;
+            border-radius: 8px;
+            overflow: hidden;
+            box-shadow: 0 2px 8px rgba(0,0,0,0.05);
+        }
+
+        .env-table th {
+            background-color: #f0f7ff;
+            padding: 12px 15px;
+            text-align: left;
+            font-weight: 600;
+            color: #2c7be5;
+            border-bottom: 2px solid #e0e8ff;
+        }
+
+        .env-table td {
+            padding: 10px 15px;
+            border-bottom: 1px solid #f0f0f0;
+        }
+
+        .env-table tr:hover td {
+            background-color: #f9fbff;
+        }
+
+        .env-table tr:last-child td {
+            border-bottom: none;
+        }
+
+        /* Status icon style */
+        .status-icon-cell {
+            text-align: center;
+            font-size: 1.2em;
+        }
+
+        /* Link style */
+        .guide-link {
+            color: #2c7be5;
+            text-decoration: none;
+            cursor: pointer;
+            font-weight: 500;
+        }
+
+        .guide-link:hover {
+            text-decoration: underline;
+        }
+
+        .env-status {
+            margin-top: 15px;
+            font-weight: 500;
+            padding: 10px;
+            border-radius: 6px;
+            transition: all 0.3s ease;
+        }
+
+        .env-status-success {
+            background-color: #d4edda;
+            color: #155724;
+            border: 1px solid #c3e6cb;
+        }
+
+        .env-status-error {
+            background-color: #f8d7da;
+            color: #721c24;
+            border: 1px solid #f5c6cb;
+        }
+
+        .api-help-accordion {
+            margin-bottom: 8px;
+            border-radius: 6px;
+            overflow: hidden;
+        }
+
+        @keyframes pulse {
+            0% { opacity: 1; }
+            50% { opacity: 0.5; }
+            100% { opacity: 1; }
+        }
+        </style>
+        """)
+
+        with gr.Row():
+            with gr.Column(scale=1):
+                question_input = gr.Textbox(
+                    lines=5,
+                    placeholder="Please enter your question...",
+                    label="Question",
+                    elem_id="question_input",
+                    show_copy_button=True,
+                    value="Open Baidu search, summarize the github stars, fork counts, etc. of camel-ai's camel framework, and write the numbers into a python file using the plot package, save it locally, and run the generated python file.",
+                )
+
+                # Enhanced module selection dropdown
+                # Only includes modules defined in MODULE_DESCRIPTIONS
+                module_dropdown = gr.Dropdown(
+                    choices=list(MODULE_DESCRIPTIONS.keys()),
+                    value="run_qwen_zh",
+                    label="Select Function Module",
+                    interactive=True,
+                )
+
+                # Module description text box
+                module_description = gr.Textbox(
+                    value=MODULE_DESCRIPTIONS["run_qwen_zh"],
+                    label="Module Description",
+                    interactive=False,
+                    elem_classes="module-info",
+                )
+
+                with gr.Row():
+                    run_button = gr.Button(
+                        "Run", variant="primary", elem_classes="primary"
+                    )
+
+                status_output = gr.HTML(
+                    value="<span class='status-indicator status-success'></span> Ready",
+                    label="Status",
+                )
+                token_count_output = gr.Textbox(
+                    label="Token Count", interactive=False, elem_classes="token-count"
+                )
+
+        with gr.Tabs():  # Set conversation record as the default selected tab
+            with gr.TabItem("Conversation Record"):
+                # Add conversation record display area
+                log_display2 = gr.Textbox(
+                    label="Conversation Record",
+                    lines=25,
+                    max_lines=100,
+                    interactive=False,
+                    autoscroll=True,
+                    show_copy_button=True,
+                    elem_classes="log-display",
+                    container=True,
+                    value="",
+                )
+
+                with gr.Row():
+                    refresh_logs_button2 = gr.Button("Refresh Record")
+                    auto_refresh_checkbox2 = gr.Checkbox(
+                        label="Auto Refresh", value=True, interactive=True
+                    )
+                    clear_logs_button2 = gr.Button(
+                        "Clear Record", variant="secondary"
+                    )
+
+            with gr.TabItem("Environment Variable Management", id="env-settings"):
+                with gr.Box(elem_classes="env-manager-container"):
+                    gr.Markdown("""
+                    ## Environment Variable Management
+
+                    Set model API keys and other service credentials here. This information is saved to a local `.env` file, so your API keys are stored securely and never uploaded to the network. Correctly setting API keys is crucial for the functionality of the OWL system. Environment variables can be configured flexibly according to tool requirements.
+                    """)
+
+                    # Main content divided into a two-column layout
+                    with gr.Row():
+                        # Left column: environment variable management controls
+                        with gr.Column(scale=3):
+                            with gr.Box(elem_classes="env-controls"):
+                                # Environment variable table - interactive for direct editing
+                                gr.Markdown("""
+                                <div style="background-color: #e7f3fe; border-left: 6px solid #2196F3; padding: 10px; margin: 15px 0; border-radius: 4px;">
+                                <strong>Tip:</strong> Please make sure to run cp .env_template .env to create a local .env file, and flexibly configure the required environment variables according to the module you run
+                                </div>
+                                """)
+
+                                # Enhanced environment variable table, supporting adding and deleting rows
+                                env_table = gr.Dataframe(
+                                    headers=[
+                                        "Variable Name",
+                                        "Value",
+                                        "Retrieval Guide",
+                                    ],
+                                    datatype=[
+                                        "str",
+                                        "str",
+                                        "html",
+                                    ],  # Set the last column as HTML type to support links
+                                    row_count=10,  # Increase row count to allow adding new variables
+                                    col_count=(3, "fixed"),
+                                    value=update_env_table,
+                                    label="API Keys and Environment Variables",
+                                    interactive=True,  # Interactive, allowing direct editing
+                                    elem_classes="env-table",
+                                )
+
+                                # Operation instructions
+                                gr.Markdown(
+                                    """
+                                    <div style="background-color: #fff3cd; border-left: 6px solid #ffc107; padding: 10px; margin: 15px 0; border-radius: 4px;">
+                                    <strong>Operation Guide</strong>:
+                                    <ul style="margin-top: 8px; margin-bottom: 8px;">
+                                    <li><strong>Edit Variable</strong>: Click directly on the "Value" cell in the table to edit</li>
+                                    <li><strong>Add Variable</strong>: Enter a new variable name and value in a blank row</li>
+                                    <li><strong>Delete Variable</strong>: Clear the variable name to delete that row</li>
+                                    <li><strong>Get API Key</strong>: Click the link in the "Retrieval Guide" column to get the corresponding API key</li>
+                                    </ul>
+                                    </div>
+                                    """,
+                                    elem_classes="env-instructions",
+                                )
+
+                                # Environment variable operation buttons
+                                with gr.Row(elem_classes="env-buttons"):
+                                    save_env_button = gr.Button(
+                                        "💾 Save Changes",
+                                        variant="primary",
+                                        elem_classes="env-button",
+                                    )
+                                    refresh_button = gr.Button(
+                                        "🔄 Refresh List", elem_classes="env-button"
+                                    )
+
+                                # Status display
+                                env_status = gr.HTML(
+                                    label="Operation Status",
+                                    value="",
+                                    elem_classes="env-status",
+                                )
+
+                    # Wire up event handlers
+                    save_env_button.click(
+                        fn=save_env_table_changes,
+                        inputs=[env_table],
+                        outputs=[env_status],
+                    ).then(fn=update_env_table, outputs=[env_table])
+
+                    refresh_button.click(fn=update_env_table, outputs=[env_table])
+
+        # Example questions
+        examples = [
+            "Open Baidu search, summarize the github stars, fork counts, etc. of camel-ai's camel framework, and write the numbers into a python file using the plot package, save it locally, and run the generated python file.",
+            "Browse Amazon and find a product that is attractive to programmers. Please provide the product name and price",
+            "Write a hello world python file and save it locally",
+        ]
+
+        gr.Examples(examples=examples, inputs=question_input)
+
+        gr.HTML("""
+        <div class="footer" id="about">
+            <h3>About OWL Multi-Agent Collaboration System</h3>
+            <p>OWL is an advanced multi-agent collaboration system developed based on the CAMEL framework, designed to solve complex problems through agent collaboration.</p>
+            <p>© 2025 CAMEL-AI.org. Based on Apache License 2.0 open source license</p>
+            <p><a href="https://github.com/camel-ai/owl" target="_blank">GitHub</a></p>
+        </div>
+        """)
+
+        # Set up event handling
+        run_button.click(
+            fn=process_with_live_logs,
+            inputs=[question_input, module_dropdown],
+            outputs=[token_count_output, status_output, log_display2],
+        )
+
+        # Module selection updates description
+        module_dropdown.change(
+            fn=update_module_description,
+            inputs=module_dropdown,
+            outputs=module_description,
+        )
+
+        # Conversation record related event handling
+        refresh_logs_button2.click(
+            fn=lambda: get_latest_logs(100, LOG_QUEUE), outputs=[log_display2]
+        )
+
+        clear_logs_button2.click(fn=clear_log_file, outputs=[log_display2])
+
+        # Auto refresh control
+        def toggle_auto_refresh(enabled):
+            if enabled:
+                return gr.update(every=3)
+            else:
+                return gr.update(every=0)
+
+        auto_refresh_checkbox2.change(
+            fn=toggle_auto_refresh,
+            inputs=[auto_refresh_checkbox2],
+            outputs=[log_display2],
+        )
+
+        # Logs are no longer auto-refreshed by default
+
+    return app
+
+
+# Main function
+def main():
+    try:
+        # Initialize the logging system
+        global LOG_FILE
+        LOG_FILE = setup_logging()
+        logging.info("OWL Web application started")
+
+        # Start the log reading thread
+        log_thread = threading.Thread(
+            target=log_reader_thread, args=(LOG_FILE,), daemon=True
+        )
+        log_thread.start()
+        logging.info("Log reading thread started")
+
+        # Initialize the .env file (if it doesn't exist)
+        init_env_file()
+        app = create_ui()
+
+        # Register a cleanup function for when the application closes
+        def cleanup():
+            global STOP_LOG_THREAD, STOP_REQUESTED
+            STOP_LOG_THREAD.set()
+            STOP_REQUESTED.set()
+            logging.info("Application closed, stopping log thread")
+
+        app.queue()
+        app.launch(share=False, server_name="127.0.0.1", server_port=7860)
+    except Exception as e:
+        logging.error(f"Error occurred while starting the application: {str(e)}")
+        print(f"Error occurred while starting the application: {str(e)}")
+        import traceback
+
+        traceback.print_exc()
+
+    finally:
+        # Ensure the log thread stops
+        STOP_LOG_THREAD.set()
+        STOP_REQUESTED.set()
+        logging.info("Application closed")
+
+
+if __name__ == "__main__":
+    main()
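The `.env` parsing loop inside `load_env_vars` above (skip blank lines and comments, split on the first `=`, strip surrounding quotes) can be distilled into a standalone helper; `parse_env_line` is an illustrative name, not part of this patch.

```python
def parse_env_line(line):
    """Parse one line of a .env file the way load_env_vars does:
    skip blank lines and comments, split on the first '=',
    and strip surrounding quotes from the value."""
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        return None
    key, value = line.split("=", 1)
    return key.strip(), value.strip().strip("\"'")


pairs = [parse_env_line(ln) for ln in [
    'OPENAI_API_KEY="sk-123"',
    "# a comment",
    "",
    "BASE_URL = https://api.example.com",
]]
```

Splitting on the first `=` only matters for values that themselves contain `=`, such as base URLs with query strings.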
owl/webapp_zh.py CHANGED
@@ -151,7 +151,7 @@ def get_latest_logs(max_lines=100, queue_source=None):
 
     # 如果仍然没有日志,返回提示信息
     if not logs:
-        return "暂无对话记录。"
 
     # 过滤日志,只保留 camel.agents.chat_agent - INFO 的日志
     filtered_logs = []
@@ -242,87 +242,49 @@ MODULE_DESCRIPTIONS = {
     "run": "默认模式:使用OpenAI模型的默认的智能体协作模式,适合大多数任务。",
     "run_mini": "使用使用OpenAI模型最小化配置处理任务",
     "run_deepseek_zh": "使用deepseek模型处理中文任务",
-    "run_terminal_zh": "终端模式:可执行命令行操作,支持网络搜索、文件处理等功能。适合需要系统交互的任务,使用OpenAI模型",
-    "run_gaia_roleplaying": "GAIA基准测试实现,用于评估Agent能力",
     "run_openai_compatiable_model": "使用openai兼容模型处理任务",
     "run_ollama": "使用本地ollama模型处理任务",
     "run_qwen_mini_zh": "使用qwen模型最小化配置处理任务",
     "run_qwen_zh": "使用qwen模型处理任务",
 }
 
-# API帮助信息
-API_HELP_INFO = {
-    "OPENAI_API_KEY": {
-        "name": "OpenAI API",
-        "desc": "OpenAI API密钥,用于访问GPT系列模型",
-        "url": "https://platform.openai.com/api-keys",
-    },
-    "QWEN_API_KEY": {
-        "name": "通义千问 API",
-        "desc": "阿里云通义千问API密钥",
-        "url": "https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key",
-    },
-    "DEEPSEEK_API_KEY": {
-        "name": "DeepSeek API",
-        "desc": "DeepSeek API密钥",
-        "url": "https://platform.deepseek.com/api_keys",
-    },
-    "GOOGLE_API_KEY": {
-        "name": "Google Search API",
-        "desc": "Google自定义搜索API密钥",
-        "url": "https://developers.google.com/custom-search/v1/overview",
-    },
-    "SEARCH_ENGINE_ID": {
-        "name": "Google Search Engine ID",
-        "desc": "Google自定义搜索引擎ID",
-        "url": "https://developers.google.com/custom-search/v1/overview",
-    },
-    "HF_TOKEN": {
-        "name": "Hugging Face API",
-        "desc": "Hugging Face API令牌",
-        "url": "https://huggingface.co/join",
-    },
-    "CHUNKR_API_KEY": {
-        "name": "Chunkr API",
-        "desc": "Chunkr API密钥",
-        "url": "https://chunkr.ai/",
-    },
-    "FIRECRAWL_API_KEY": {
-        "name": "Firecrawl API",
-        "desc": "Firecrawl API密钥",
-        "url": "https://www.firecrawl.dev/",
-    },
-}
 
 # 默认环境变量模板
-DEFAULT_ENV_TEMPLATE = """# MODEL & API (See https://docs.camel-ai.org/key_modules/models.html#)
 
-# OPENAI API
-# OPENAI_API_KEY= ""
 # OPENAI_API_BASE_URL=""
 
 # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
-# QWEN_API_KEY=""
 
 # DeepSeek API (https://platform.deepseek.com/api_keys)
-# DEEPSEEK_API_KEY=""
 
 #===========================================
 # Tools & Services API
 #===========================================
 
-# Google Search API (https://developers.google.com/custom-search/v1/overview)
-GOOGLE_API_KEY=""
-SEARCH_ENGINE_ID=""
-
-# Hugging Face API (https://huggingface.co/join)
-HF_TOKEN=""
 
 # Chunkr API (https://chunkr.ai/)
-CHUNKR_API_KEY=""
 
 # Firecrawl API (https://www.firecrawl.dev/)
-FIRECRAWL_API_KEY=""
 #FIRECRAWL_API_URL="https://api.firecrawl.dev"
 """
 
@@ -357,7 +319,7 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
     # 验证输入
     if not validate_input(question):
         logging.warning("用户提交了无效的输入")
-        return ("请输入有效的问题", "0", "❌ 错误: 输入无效")
 
     try:
         # 确保环境变量已加载
@@ -374,7 +336,7 @@ def run_owl(question: str, example_module: str) -> Tuple[str, str, str]:
     )
 
     # 动态导入目标模块
-    module_path = f"owl.examples.{example_module}"
    try:
        logging.info(f"正在导入模块: {module_path}")
        module = importlib.import_module(module_path)
@@ -452,8 +414,6 @@ def update_module_description(module_name: str) -> str:
     return MODULE_DESCRIPTIONS.get(module_name, "无可用描述")
 
 
-# 环境变量管理功能
-
 # 存储前端配置的环境变量
 WEB_FRONTEND_ENV_VARS: dict[str, str] = {}
 
@@ -646,7 +606,9 @@ def get_api_guide(key: str) -> str:
     elif "deepseek" in key_lower:
         return "https://platform.deepseek.com/api_keys"
     elif "google" in key_lower:
-        return "https://developers.google.com/custom-search/v1/overview"
     elif "chunkr" in key_lower:
         return "https://chunkr.ai/"
     elif "firecrawl" in key_lower:
@@ -701,11 +663,11 @@ def save_env_table_changes(data):
 
             # 遍历DataFrame的每一行
             for index, row in data.iterrows():
-                # 使用列名或索引访问数据
                 if len(columns) >= 3:
-                    # 如果有列名,使用列名访问
-                    key = row.iloc[1] if hasattr(row, "iloc") else row[1]
-                    value = row.iloc[2] if hasattr(row, "iloc") else row[2]
 
                     # 检查是否为空行或已删除的变量
                     if key and str(key).strip():  # 如果键名不为空,则添加或更新
@@ -812,6 +774,9 @@ def create_ui():
         """处理问题并实时更新日志"""
         global CURRENT_PROCESS
 
         # 创建一个后台线程来处理问题
         result_queue = queue.Queue()
 
@@ -874,6 +839,8 @@ def create_ui():
             # 🦉 OWL 多智能体协作系统
 
             基于CAMEL框架开发的先进多智能体协作系统,旨在通过智能体协作解决复杂问题。
             """
         )
 
@@ -1082,6 +1049,7 @@ def create_ui():
                     label="问题",
                     elem_id="question_input",
                     show_copy_button=True,
                 )
 
                 # 增强版模块选择下拉菜单
@@ -1141,7 +1109,7 @@ def create_ui():
                     gr.Markdown("""
                     ## 环境变量管理
 
-                    在此处设置模型API密钥和其他服务凭证。这些信息将保存在本地的`.env`文件中,确保您的API密钥安全存储且不会上传到网络。
                     """)
 
                 # 主要内容分为两列布局
@@ -1150,12 +1118,9 @@ def create_ui():
1150
  with gr.Column(scale=3):
1151
  with gr.Box(elem_classes="env-controls"):
1152
  # 环境变量表格 - 设置为可交互以直接编辑
1153
- gr.Markdown("### 环境变量管理")
1154
  gr.Markdown("""
1155
- 管理您的API密钥和其他环境变量。正确设置API密钥对于OWL系统的功能至关重要。
1156
-
1157
  <div style="background-color: #e7f3fe; border-left: 6px solid #2196F3; padding: 10px; margin: 15px 0; border-radius: 4px;">
1158
- <strong>提示:</strong> 请确保正确设置API密钥以确保系统功能正常
1159
  </div>
1160
  """)
1161
 
@@ -1186,7 +1151,6 @@ def create_ui():
1186
  <li><strong>删除变量</strong>: 清空变量名即可删除该行</li>
1187
  <li><strong>获取API密钥</strong>: 点击"获取指南"列中的链接获取相应API密钥</li>
1188
  </ul>
1189
- <strong>注意</strong>: 所有API密钥都安全地存储在本地,不会上传到网络
1190
  </div>
1191
  """,
1192
  elem_classes="env-instructions",
@@ -1221,7 +1185,7 @@ def create_ui():
1221
 
1222
  # 示例问题
1223
  examples = [
1224
- "打开百度搜索,总结一下camel-ai的camel框架的github star、fork数目等,并把数字用plot包写成python文件保存到本地,用本地终端执行python文件显示图出来给我",
1225
  "浏览亚马逊并找出一款对程序员有吸引力的产品。请提供产品名称和价格",
1226
  "写一个hello world的python文件,保存到本地",
1227
  ]
 
151
 
152
  # 如果仍然没有日志,返回提示信息
153
  if not logs:
154
+ return "初始化运行中..."
155
 
156
  # 过滤日志,只保留 camel.agents.chat_agent - INFO 的日志
157
  filtered_logs = []
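The filtering step this hunk sets up (keeping only `camel.agents.chat_agent - INFO` records) can be sketched standalone; `filter_chat_agent_logs` and the sample lines are illustrative names, not part of the diff:

```python
# Sketch: keep only INFO records emitted by camel.agents.chat_agent,
# matching the substring filter used in the hunk above.
def filter_chat_agent_logs(logs: list) -> list:
    marker = "camel.agents.chat_agent - INFO"
    return [line for line in logs if marker in line]

sample = [
    "2024-01-01 10:00:00 - camel.agents.chat_agent - INFO - step done",
    "2024-01-01 10:00:01 - urllib3 - DEBUG - connection opened",
]
print(filter_chat_agent_logs(sample))
```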
 
242
  "run": "默认模式:使用OpenAI模型的默认的智能体协作模式,适合大多数任务。",
243
  "run_mini": "使用OpenAI模型最小化配置处理任务",
244
  "run_deepseek_zh": "使用deepseek模型处理中文任务",
245
  "run_openai_compatiable_model": "使用openai兼容模型处理任务",
246
  "run_ollama": "使用本地ollama模型处理任务",
247
  "run_qwen_mini_zh": "使用qwen模型最小化配置处理任务",
248
  "run_qwen_zh": "使用qwen模型处理任务",
249
  }
250
 
251
 
252
  # 默认环境变量模板
253
+ DEFAULT_ENV_TEMPLATE = """#===========================================
254
+ # MODEL & API
255
+ # (See https://docs.camel-ai.org/key_modules/models.html#)
256
+ #===========================================
257
 
258
+ # OPENAI API (https://platform.openai.com/api-keys)
259
+ OPENAI_API_KEY='Your_Key'
260
  # OPENAI_API_BASE_URL=""
261
 
262
+ # Azure OpenAI API
263
+ # AZURE_OPENAI_BASE_URL=""
264
+ # AZURE_API_VERSION=""
265
+ # AZURE_OPENAI_API_KEY=""
266
+ # AZURE_DEPLOYMENT_NAME=""
267
+
268
+
269
  # Qwen API (https://help.aliyun.com/zh/model-studio/developer-reference/get-api-key)
270
+ QWEN_API_KEY='Your_Key'
271
 
272
  # DeepSeek API (https://platform.deepseek.com/api_keys)
273
+ DEEPSEEK_API_KEY='Your_Key'
274
 
275
  #===========================================
276
  # Tools & Services API
277
  #===========================================
278
 
279
+ # Google Search API (https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3)
280
+ GOOGLE_API_KEY='Your_Key'
281
+ SEARCH_ENGINE_ID='Your_ID'
282
 
283
  # Chunkr API (https://chunkr.ai/)
284
+ CHUNKR_API_KEY='Your_Key'
285
 
286
  # Firecrawl API (https://www.firecrawl.dev/)
287
+ FIRECRAWL_API_KEY='Your_Key'
288
  #FIRECRAWL_API_URL="https://api.firecrawl.dev"
289
  """
290
 
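The `DEFAULT_ENV_TEMPLATE` added above follows standard `.env` syntax, so loading it reduces to parsing `KEY='value'` lines. A minimal self-contained sketch (python-dotenv does this in practice; `parse_env` is a hypothetical helper):

```python
# Sketch: parse a .env-style template into a dict.
# Blank lines and "#" comments are skipped, quotes are stripped.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env

template = "OPENAI_API_KEY='Your_Key'\n# a comment\nSEARCH_ENGINE_ID='Your_ID'\n"
print(parse_env(template))
```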
 
319
  # 验证输入
320
  if not validate_input(question):
321
  logging.warning("用户提交了无效的输入")
322
+ return ("请输入有效的问题", "0", "❌ 错误: 输入问题无效")
323
 
324
  try:
325
  # 确保环境变量已加载
 
336
  )
337
 
338
  # 动态导入目标模块
339
+ module_path = f"examples.{example_module}"
340
  try:
341
  logging.info(f"正在导入模块: {module_path}")
342
  module = importlib.import_module(module_path)
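The dynamic-import pattern this hunk switches to (`importlib.import_module(f"examples.{example_module}")`) can be exercised in isolation; `load_module` is an illustrative wrapper and `"json"` stands in for a real example module:

```python
import importlib

# Sketch: dynamically import a module by dotted path and report failures
# instead of crashing, as the web app's try/except around import does.
def load_module(module_path: str):
    try:
        return importlib.import_module(module_path)
    except ImportError as exc:
        print(f"failed to import {module_path}: {exc}")
        return None

mod = load_module("json")
print(mod is not None)
```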
 
414
  return MODULE_DESCRIPTIONS.get(module_name, "无可用描述")
415
 
416
 
417
  # 存储前端配置的环境变量
418
  WEB_FRONTEND_ENV_VARS: dict[str, str] = {}
419
 
 
606
  elif "deepseek" in key_lower:
607
  return "https://platform.deepseek.com/api_keys"
608
  elif "google" in key_lower:
609
+ return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3"
610
+ elif "search_engine_id" in key_lower:
611
+ return "https://coda.io/@jon-dallas/google-image-search-pack-example/search-engine-id-and-google-api-key-3"
612
  elif "chunkr" in key_lower:
613
  return "https://chunkr.ai/"
614
  elif "firecrawl" in key_lower:
 
663
 
664
  # 遍历DataFrame的每一行
665
  for index, row in data.iterrows():
666
+ # 按位置访问数据
667
  if len(columns) >= 3:
668
+ # 获取变量名和值 (第0列是变量名,第1列是值)
669
+ key = row.iloc[0] if hasattr(row, "iloc") else row[0]
670
+ value = row.iloc[1] if hasattr(row, "iloc") else row[1]
671
 
672
  # 检查是否为空行或已删除的变量
673
  if key and str(key).strip(): # 如果键名不为空,则添加或更新
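The row-iteration logic of this hunk, sketched with a small DataFrame (assumes pandas is installed; the column layout of name in column 0 and value in column 1 follows the diff, and `collect_env_vars` is an illustrative name):

```python
import pandas as pd

# Sketch: collect non-empty key/value pairs from an editable env-var
# table, skipping blank/deleted rows as the hunk does.
def collect_env_vars(data: pd.DataFrame) -> dict:
    env = {}
    for _, row in data.iterrows():
        key, value = row.iloc[0], row.iloc[1]
        if key and str(key).strip():  # empty name means a deleted row
            env[str(key).strip()] = str(value).strip()
    return env

df = pd.DataFrame([["OPENAI_API_KEY", "sk-test", ""], ["", "", ""]])
print(collect_env_vars(df))
```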
 
774
  """处理问题并实时更新日志"""
775
  global CURRENT_PROCESS
776
 
777
+ # 清空日志文件
778
+ clear_log_file()
779
+
780
  # 创建一个后台线程来处理问题
781
  result_queue = queue.Queue()
782
 
 
839
  # 🦉 OWL 多智能体协作系统
840
 
841
  基于CAMEL框架开发的先进多智能体协作系统,旨在通过智能体协作解决复杂问题。
842
+ 可以通过修改本地脚本自定义模型和工具。
843
+ 本网页应用目前处于测试阶段,仅供演示和测试使用,尚未推荐用于生产环境。
844
  """
845
  )
846
 
 
1049
  label="问题",
1050
  elem_id="question_input",
1051
  show_copy_button=True,
1052
+ value="打开百度搜索,总结一下camel-ai的camel框架的github star、fork数目等,并把数字用plot包写成python文件保存到本地,并运行生成的python文件。",
1053
  )
1054
 
1055
  # 增强版模块选择下拉菜单
 
1109
  gr.Markdown("""
1110
  ## 环境变量管理
1111
 
1112
+ 在此处设置模型API密钥和其他服务凭证。这些信息将保存在本地的`.env`文件中,确保您的API密钥安全存储且不会上传到网络。正确设置API密钥对于OWL系统的功能至关重要,可以按照工具需求灵活配置环境变量。
1113
  """)
1114
 
1115
  # 主要内容分为两列布局
 
1118
  with gr.Column(scale=3):
1119
  with gr.Box(elem_classes="env-controls"):
1120
  # 环境变量表格 - 设置为可交互以直接编辑
1121
  gr.Markdown("""
1122
  <div style="background-color: #e7f3fe; border-left: 6px solid #2196F3; padding: 10px; margin: 15px 0; border-radius: 4px;">
1123
+ <strong>提示:</strong> 请确保运行 <code>cp .env_template .env</code> 创建本地 .env 文件,根据运行模块灵活配置所需环境变量
1124
  </div>
1125
  """)
1126
 
 
1151
  <li><strong>删除变量</strong>: 清空变量名即可删除该行</li>
1152
  <li><strong>获取API密钥</strong>: 点击"获取指南"列中的链接获取相应API密钥</li>
1153
  </ul>
1154
  </div>
1155
  """,
1156
  elem_classes="env-instructions",
 
1185
 
1186
  # 示例问题
1187
  examples = [
1188
+ "打开百度搜索,总结一下camel-ai的camel框架的github star、fork数目等,并把数字用plot包写成python文件保存到本地,并运行生成的python文件。",
1189
  "浏览亚马逊并找出一款对程序员有吸引力的产品。请提供产品名称和价格",
1190
  "写一个hello world的python文件,保存到本地",
1191
  ]