zjrwtx committed on
Commit 5c06e37 · 2 Parent(s): c1728e5 5ac6182

Merge branch 'main' into docker_en
.dockerignore → .container/.dockerignore RENAMED
File without changes
DOCKER_README.md → .container/DOCKER_README.md RENAMED
File without changes
Dockerfile → .container/Dockerfile RENAMED
File without changes
build_docker.bat → .container/build_docker.bat RENAMED
File without changes
build_docker.sh → .container/build_docker.sh RENAMED
File without changes
check_docker.bat → .container/check_docker.bat RENAMED
File without changes
check_docker.sh → .container/check_docker.sh RENAMED
File without changes
docker-compose.yml → .container/docker-compose.yml RENAMED
File without changes
run_in_docker.bat → .container/run_in_docker.bat RENAMED
File without changes
run_in_docker.sh → .container/run_in_docker.sh RENAMED
File without changes
README.md CHANGED
@@ -64,7 +64,7 @@ Our vision is to revolutionize how AI agents collaborate to solve real-world tasks
 - [📋 Table of Contents](#-table-of-contents)
 - [🔥 News](#-news)
 - [🎬 Demo Video](#-demo-video)
-- [✨️ Core Features](#-code-features)
+- [✨️ Core Features](#-core-features)
 - [🛠️ Installation](#️-installation)
 - [**Clone the Github repository**](#clone-the-github-repository)
 - [**Set up Environment**](#set-up-environment)
@@ -139,7 +139,10 @@ playwright install
 In the `owl/.env_template` file, you will find all the necessary API keys along with the websites where you can register for each service. To use these API services, follow these steps:

 1. *Copy and Rename*: Duplicate the `.env_template` file and rename the copy to `.env`.
-2. *Fill in Your Keys*: Open the `.env` file and insert your API keys in the corresponding fields.
+   ```bash
+   cp owl/.env_template .env
+   ```
+2. *Fill in Your Keys*: Open the `.env` file and insert your API keys in the corresponding fields. (For the minimal example (`run_mini.py`), you only need to configure the LLM API key, e.g. `OPENAI_API_KEY`.)
 3. *For using other models*: please refer to our CAMEL models docs: https://docs.camel-ai.org/key_modules/models.html#supported-model-platforms-in-camel
@@ -171,11 +174,18 @@ For more detailed Docker usage instructions, including cross-platform support, o

-Run the following minimal example:
+Run the following demo case:

 ```bash
 python owl/run.py
 ```

+For a simpler version that only requires an LLM API key, you can try our minimal example:
+
+```bash
+python owl/run_mini.py
+```
+
 You can run the OWL agent with your own task by modifying the `run.py` script:

 ```python
@@ -188,6 +198,21 @@ answer, chat_history, token_count = run_society(society)
 logger.success(f"Answer: {answer}")
 ```

+For uploading files, simply provide the file path along with your question:
+
+```python
+# Task with a local file (e.g., file path: `tmp/example.docx`)
+question = "What is in the given DOCX file? Here is the file path: tmp/example.docx"
+
+society = construct_society(question)
+answer, chat_history, token_count = run_society(society)
+
+logger.success(f"Answer: {answer}")
+```
+
+OWL will then automatically invoke document-related tools to process the file and extract the answer.
+
 Example tasks you can try:
 - "Find the latest stock price for Apple Inc."
 - "Analyze the sentiment of recent tweets about climate change"
README_zh.md CHANGED
@@ -165,13 +165,19 @@ docker-compose exec owl bash -c "xvfb-python run.py"

 # 🚀 Quick Start

-Run the following minimal example:
+Run the following example:

 ```bash
 python owl/run.py
 ```

+We also provide a minimal example that only requires an LLM API key to run:
+
+```bash
+python owl/run_mini.py
+```
+
-You can run the OWL agent on a custom task by modifying `run.py`:
+You can run your own task by modifying the `run.py` script:

 ```python
 # Define your own task
@@ -183,11 +189,29 @@ answer, chat_history, token_count = run_society(society)
 logger.success(f"Answer: {answer}")
 ```

+To upload a file, simply provide the file path along with your question:
+
+```python
+# Process a local file (e.g., file path: `tmp/example.docx`)
+question = "What is in the given DOCX file? Here is the file path: tmp/example.docx"
+
+society = construct_society(question)
+answer, chat_history, token_count = run_society(society)
+
+logger.success(f"Answer: {answer}")
+```
+
+OWL will then automatically invoke document-related tools to process the file and extract the answer.
+
 Example tasks you can try:
 - "Find the latest stock price for Apple Inc."
 - "Analyze the sentiment of recent tweets about climate change"
 - "Help me debug this Python code: [paste your code here]"
 - "Summarize the main points of this research paper: [paper URL]"
 # 🧪 Experiments

 We provide a script to reproduce the experimental results on GAIA.
owl/run.py CHANGED
@@ -2,17 +2,23 @@ from dotenv import load_dotenv
 load_dotenv()

 from camel.models import ModelFactory
-from camel.toolkits import *
+from camel.toolkits import (
+    WebToolkit,
+    DocumentProcessingToolkit,
+    VideoAnalysisToolkit,
+    AudioAnalysisToolkit,
+    CodeExecutionToolkit,
+    ImageAnalysisToolkit,
+    SearchToolkit,
+    ExcelToolkit
+)
 from camel.types import ModelPlatformType, ModelType
-from camel.configs import ChatGPTConfig
-
-from typing import List, Dict
-
-from retry import retry
+# from camel.configs import ChatGPTConfig

 from loguru import logger

 from utils import OwlRolePlaying, run_society
-import os
@@ -25,13 +31,13 @@ def construct_society(question: str) -> OwlRolePlaying:
     user_model = ModelFactory.create(
         model_platform=ModelPlatformType.OPENAI,
         model_type=ModelType.GPT_4O,
-        model_config_dict=ChatGPTConfig(temperature=0, top_p=1).as_dict(),  # [Optional] the config for model
+        # model_config_dict=ChatGPTConfig(temperature=0, top_p=1).as_dict(),  # [Optional] the config for model
     )

     assistant_model = ModelFactory.create(
         model_platform=ModelPlatformType.OPENAI,
         model_type=ModelType.GPT_4O,
-        model_config_dict=ChatGPTConfig(temperature=0, top_p=1).as_dict(),  # [Optional] the config for model
+        # model_config_dict=ChatGPTConfig(temperature=0, top_p=1).as_dict(),  # [Optional] the config for model
     )

     tools_list = [
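The explicit toolkit imports above feed the `tools_list` construction later in the file, where each toolkit's `get_tools()` list is spliced into one flat list with the `*` operator. A minimal stdlib-only mock of that pattern (`MockToolkit` is invented here purely for illustration; CAMEL's real toolkits have richer interfaces):

```python
class MockToolkit:
    """Stand-in for a CAMEL-style toolkit: exposes get_tools() -> list."""

    def __init__(self, names):
        self._names = names

    def get_tools(self):
        return [f"tool:{n}" for n in self._names]


# The * operator splices each toolkit's tools into one flat list,
# mirroring the tools_list construction in owl/run.py.
tools_list = [
    *MockToolkit(["browse", "click"]).get_tools(),
    *MockToolkit(["search"]).get_tools(),
]
print(tools_list)  # ['tool:browse', 'tool:click', 'tool:search']
```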
owl/run_deepseek_example.py CHANGED
@@ -3,13 +3,10 @@ from camel.toolkits import *
 from camel.types import ModelPlatformType, ModelType
 from camel.configs import DeepSeekConfig

-from typing import List, Dict
 from dotenv import load_dotenv
-from retry import retry
 from loguru import logger

 from utils import OwlRolePlaying, run_society
-import os


 load_dotenv()
owl/run_gaia_roleplaying.py CHANGED
@@ -5,11 +5,9 @@ from camel.configs import ChatGPTConfig
 from utils import GAIABenchmark

 from dotenv import load_dotenv
-from retry import retry
 from loguru import logger

 import os
-import shutil

 load_dotenv()
owl/run_mini.py ADDED
@@ -0,0 +1,77 @@
+from dotenv import load_dotenv
+load_dotenv()
+
+from camel.models import ModelFactory
+from camel.toolkits import (
+    WebToolkit,
+    SearchToolkit,
+    FunctionTool
+)
+from camel.types import ModelPlatformType, ModelType
+
+from loguru import logger
+
+from utils import OwlRolePlaying, run_society
+
+
+def construct_society(question: str) -> OwlRolePlaying:
+    r"""Construct the society based on the question."""
+
+    user_model = ModelFactory.create(
+        model_platform=ModelPlatformType.OPENAI,
+        model_type=ModelType.GPT_4O,
+    )
+
+    assistant_model = ModelFactory.create(
+        model_platform=ModelPlatformType.OPENAI,
+        model_type=ModelType.GPT_4O,
+    )
+
+    tools_list = [
+        *WebToolkit(
+            headless=False,
+            web_agent_model=assistant_model,
+            planning_agent_model=assistant_model,
+        ).get_tools(),
+        FunctionTool(SearchToolkit(model=assistant_model).search_duckduckgo),
+    ]
+
+    user_role_name = 'user'
+    user_agent_kwargs = dict(model=user_model)
+    assistant_role_name = 'assistant'
+    assistant_agent_kwargs = dict(model=assistant_model, tools=tools_list)
+
+    task_kwargs = {
+        'task_prompt': question,
+        'with_task_specify': False,
+    }
+
+    society = OwlRolePlaying(
+        **task_kwargs,
+        user_role_name=user_role_name,
+        user_agent_kwargs=user_agent_kwargs,
+        assistant_role_name=assistant_role_name,
+        assistant_agent_kwargs=assistant_agent_kwargs,
+    )
+
+    return society
+
+
+# Example case
+question = "What was the volume in m^3 of the fish bag that was calculated in the University of Leicester paper `Can Hiccup Supply Enough Fish to Maintain a Dragon’s Diet?`"
+
+society = construct_society(question)
+answer, chat_history, token_count = run_society(society)
+
+logger.success(f"Answer: {answer}")
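The minimal example wraps a bound method (`SearchToolkit(...).search_duckduckgo`) in a `FunctionTool` so the agent can invoke it by name. A rough stdlib-only sketch of that wrapping idea (`MockFunctionTool` and the stub search function are invented for illustration; they are not CAMEL's API):

```python
class MockFunctionTool:
    """Toy version of wrapping a callable as a named, agent-invokable tool."""

    def __init__(self, func):
        self.func = func
        self.name = func.__name__  # the name an agent would call it by

    def __call__(self, *args, **kwargs):
        return self.func(*args, **kwargs)


def search_duckduckgo(query: str) -> str:
    # Stub standing in for a real web-search call.
    return f"results for {query!r}"


tool = MockFunctionTool(search_duckduckgo)
print(tool.name)          # search_duckduckgo
print(tool("owl agent"))  # results for 'owl agent'
```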
owl/{run_qwq_demo.py → run_openai_compatiable_model.py} RENAMED
@@ -26,7 +26,7 @@ def construct_society(question: str) -> OwlRolePlaying:

     user_model = ModelFactory.create(
         model_platform=ModelPlatformType.OPENAI_COMPATIBLE_MODEL,
-        model_type="qwq-32b",
+        model_type="qwen-max",
         api_key=os.getenv("QWEN_API_KEY"),
         url="https://dashscope.aliyuncs.com/compatible-mode/v1",
         model_config_dict={"temperature": 0.4, "max_tokens": 4096},
@@ -34,7 +34,7 @@ def construct_society(question: str) -> OwlRolePlaying:

     assistant_model = ModelFactory.create(
         model_platform=ModelPlatformType.OPENAI_COMPATIBLE_MODEL,
-        model_type="qwq-32b",
+        model_type="qwen-max",
         api_key=os.getenv("QWEN_API_KEY"),
         url="https://dashscope.aliyuncs.com/compatible-mode/v1",
         model_config_dict={"temperature": 0.4, "max_tokens": 4096},
@@ -79,7 +79,7 @@ def construct_society(question: str) -> OwlRolePlaying:


 # Example case
-question = "What was the volume in m^3 of the fish bag that was calculated in the University of Leicester paper `Can Hiccup Supply Enough Fish to Maintain a Dragon’s Diet?` "
+question = "what is the weather in beijing today?"

 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
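Both model factories in this renamed example read `QWEN_API_KEY` from the environment; if the key is missing, the failure only surfaces later inside the API call. A small fail-fast guard one could add near the top of such a script (`require_env` is a hypothetical helper, not part of OWL or CAMEL):

```python
import os


def require_env(name: str) -> str:
    # Hypothetical guard: raise early with a clear message if a key is unset,
    # instead of letting the model call fail later with an opaque auth error.
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file.")
    return value


os.environ["QWEN_API_KEY"] = "sk-demo"  # placeholder value for illustration
print(require_env("QWEN_API_KEY"))  # sk-demo
```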