Wendong-Fan committed
Commit b1d0895 · 1 Parent(s): 38255bb

update readme

Files changed (2):
  1. README.md +17 -3
  2. README_zh.md +17 -3
README.md CHANGED
@@ -154,6 +154,21 @@ Run the following demo case:
 python owl/run.py
 ```
 
+## Running with Different Models
+
+OWL supports various LLM backends. You can use the following scripts to run with different models:
+
+```bash
+# Run with Qwen model
+python owl/run_qwen.py
+
+# Run with Deepseek model
+python owl/run_deepseek.py
+
+# Run with other OpenAI-compatible models
+python owl/run_openai_compatiable_model.py
+```
+
 For a simpler version that only requires an LLM API key, you can try our minimal example:
 
 ```bash
@@ -169,7 +184,7 @@ question = "Task description here."
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
 
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 For uploading files, simply provide the file path along with your question:
@@ -180,8 +195,7 @@ question = "What is in the given DOCX file? Here is the file path: tmp/example.d
 
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
-
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 OWL will then automatically invoke document-related tools to process the file and extract the answer.
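The pattern this commit settles on (construct a society, run it, `print` the answer instead of `logger.success`) can be sketched end to end. In the sketch below, `construct_society` and `run_society` are hypothetical stand-ins for OWL's real functions, kept self-contained for illustration; their return shapes follow the tuple unpacking shown in the diff:

```python
# Sketch of the run.py usage pattern from this diff. construct_society and
# run_society are hypothetical stand-ins for OWL's real implementations.

def construct_society(question):
    # Stand-in: OWL builds a multi-agent "society" around the task question.
    return {"task": question}

def run_society(society):
    # Stand-in: the real function runs the agents and returns the answer,
    # the full chat history, and the token count (matching the unpacking
    # "answer, chat_history, token_count = run_society(society)").
    answer = f"Processed: {society['task']}"
    chat_history = [{"role": "user", "content": society["task"]}]
    token_count = len(society["task"].split())
    return answer, chat_history, token_count

question = "Task description here."
society = construct_society(question)
answer, chat_history, token_count = run_society(society)

# The commit swaps logger.success(...) for plain print, so the minimal
# example no longer depends on loguru.
print(f"Answer: {answer}")
```

The swap to `print` keeps the quick-start snippet dependency-free; nothing else about the control flow changes.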
README_zh.md CHANGED
@@ -154,6 +154,21 @@ python owl/run.py
 python owl/run_mini.py
 ```
 
+## Running with Different Models
+
+OWL supports various LLM backends. You can use the following scripts to run with different models:
+
+```bash
+# Run with Qwen model
+python owl/run_qwen.py
+
+# Run with Deepseek model
+python owl/run_deepseek.py
+
+# Run with other OpenAI-compatible models
+python owl/run_openai_compatiable_model.py
+```
+
 You can run your own tasks by modifying the `run.py` script:
 
 ```python
@@ -163,7 +178,7 @@ question = "Task description here."
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
 
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 When uploading a file, simply provide the file path along with your question:
@@ -175,12 +190,11 @@ question = "What is in the given DOCX file? The file path is: tmp/e
 society = construct_society(question)
 answer, chat_history, token_count = run_society(society)
 
-logger.success(f"Answer: {answer}")
+print(f"Answer: {answer}")
 ```
 
 OWL will then automatically invoke document-related tools to process the file and extract the answer.
 
-
 OWL will then automatically invoke document-related tools to process the file and extract the answer.
 
 You can try the following example tasks: