<h1 align="center">
🦉 OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation
</h1>
<div align="center">
[![Documentation][docs-image]][docs-url]
[![Discord][discord-image]][discord-url]
[![X][x-image]][x-url]
[![Reddit][reddit-image]][reddit-url]
[![Wechat][wechat-image]][wechat-url]
[![Wechat][owl-image]][owl-url]
[![Hugging Face][huggingface-image]][huggingface-url]
[![Star][star-image]][star-url]
[![Package License][package-license-image]][package-license-url]
</div>
<hr>
<div align="center">
<h4 align="center">
[Chinese README](https://github.com/camel-ai/owl/tree/main/README_zh.md) |
[Community](https://github.com/camel-ai/owl#community) |
[Installation](#️-installation) |
[Examples](https://github.com/camel-ai/owl/tree/main/owl) |
[Paper](https://arxiv.org/abs/2303.17760) |
[Citation](https://github.com/camel-ai/owl#citation) |
[Contributing](https://github.com/camel-ai/owl/graphs/contributors) |
[CAMEL-AI](https://www.camel-ai.org/)
</h4>
<div align="center" style="background-color: #f0f7ff; padding: 10px; border-radius: 5px; margin: 15px 0;">
<h3 style="color: #1e88e5; margin: 0;">
🏆 OWL achieves a <span style="color: #d81b60; font-weight: bold; font-size: 1.2em;">58.18</span> average score on the GAIA benchmark and ranks <span style="color: #d81b60; font-weight: bold; font-size: 1.2em;">🏅️ #1</span> among open-source frameworks! 🎉
</h3>
</div>
<div align="center">
🦉 OWL is a cutting-edge framework for multi-agent collaboration that pushes the boundaries of task automation, built on top of the [CAMEL-AI Framework](https://github.com/camel-ai/camel).
<!-- OWL achieves **58.18** average score on [GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard) benchmark and ranks 🏅️ #1 among open-source frameworks. -->
Our vision is to revolutionize how AI agents collaborate to solve real-world tasks. By leveraging dynamic agent interactions, OWL enables more natural, efficient, and robust task automation across diverse domains.
</div>

<br>
</div>
<!-- # Key Features -->
# 📋 Table of Contents
- [📋 Table of Contents](#-table-of-contents)
- [🔥 News](#-news)
- [🎬 Demo Video](#-demo-video)
- [✨️ Core Features](#-core-features)
- [🛠️ Installation](#️-installation)
  - [Option 1: Using uv (Recommended)](#option-1-using-uv-recommended)
  - [Option 2: Using venv and pip](#option-2-using-venv-and-pip)
  - [Option 3: Using conda](#option-3-using-conda)
  - [**Setup Environment Variables**](#setup-environment-variables)
  - [**Running with Docker**](#running-with-docker)
- [🚀 Quick Start](#-quick-start)
- [🧰 Toolkits and Capabilities](#-toolkits-and-capabilities)
- [🌐 Web Interface](#-web-interface)
- [🧪 Experiments](#-experiments)
- [⏱️ Future Plans](#️-future-plans)
- [📄 License](#-license)
- [🖊️ Cite](#️-cite)
- [🤝 Contributing](#-contributing)
- [👥 Community](#-community)
- [❓ FAQ](#-faq)
- [📚 Exploring CAMEL Dependency](#-exploring-camel-dependency)
- [⭐ Star History](#-star-history)
# 🔥 News
<div align="center" style="background-color: #fffacd; padding: 15px; border-radius: 10px; border: 2px solid #ffd700; margin: 20px 0;">
<h3 style="color: #d81b60; margin: 0; font-size: 1.3em;">
🎉🎉🎉 <b>COMMUNITY CALL FOR USE CASES!</b> 🎉🎉🎉
</h3>
<p style="font-size: 1.1em; margin: 10px 0;">
We're inviting the community to contribute innovative use cases for OWL! <br>
The <b>top ten submissions</b> will receive special community gifts and recognition.
</p>
<p>
<a href="https://github.com/camel-ai/owl/tree/main/community_usecase/COMMUNITY_CALL_FOR_USE_CASES.md" style="background-color: #d81b60; color: white; padding: 8px 15px; text-decoration: none; border-radius: 5px; font-weight: bold;">Learn More & Submit</a>
</p>
<p style="margin: 5px 0;">
Submission deadline: <b>March 31, 2025</b>
</p>
</div>
- **[2025.03.12]**: Added Bocha search in SearchToolkit, integrated Volcano Engine model platform, and enhanced Azure and OpenAI Compatible models with structured output and tool calling.
- **[2025.03.11]**: We added MCPToolkit, FileWriteToolkit, and TerminalToolkit to enhance OWL agents with MCP tool calling, file writing capabilities, and terminal command execution.
- **[2025.03.09]**: We added a web-based user interface that makes it easier to interact with the system.
- **[2025.03.07]**: We open-sourced the codebase of the 🦉 OWL project.
- **[2025.03.03]**: OWL achieved the #1 position among open-source frameworks on the GAIA benchmark with a score of 58.18.
# 🎬 Demo Video
https://github.com/user-attachments/assets/2a2a825d-39ea-45c5-9ba1-f9d58efbc372
# ✨️ Core Features
- **Real-time Information Retrieval**: Leverage Wikipedia, Google Search, and other online sources for up-to-date information.
- **Multimodal Processing**: Support for handling internet or local videos, images, and audio data.
- **Browser Automation**: Utilize the Playwright framework for simulating browser interactions, including scrolling, clicking, input handling, downloading, navigation, and more.
- **Document Parsing**: Extract content from Word, Excel, PDF, and PowerPoint files, converting them into text or Markdown format.
- **Code Execution**: Write and execute Python code with a built-in interpreter (a direct-call sketch follows this list).
- **Built-in Toolkits**: Access to a comprehensive set of built-in toolkits including ArxivToolkit, AudioAnalysisToolkit, CodeExecutionToolkit, DalleToolkit, DataCommonsToolkit, ExcelToolkit, GitHubToolkit, GoogleMapsToolkit, GoogleScholarToolkit, ImageAnalysisToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, OpenAPIToolkit, RedditToolkit, SearchToolkit, SemanticScholarToolkit, SymPyToolkit, VideoAnalysisToolkit, WeatherToolkit, WebToolkit, and many more for specialized tasks.
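For a feel of how these toolkits behave, here is a minimal, hedged sketch that calls one of them directly, outside of any agent. It assumes the `CodeExecutionToolkit` API as documented by CAMEL; the snippet being executed is arbitrary:

```python
# Minimal sketch: invoking a built-in toolkit directly (no agent involved).
# Assumes CAMEL is installed and exposes CodeExecutionToolkit as documented.
from camel.toolkits import CodeExecutionToolkit

toolkit = CodeExecutionToolkit(sandbox="subprocess")
result = toolkit.execute_code("print(21 * 2)")  # returns the execution output as text
print(result)
```

In normal use you won't call toolkits by hand; OWL agents select and invoke them automatically based on the task.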
# 🛠️ Installation
OWL supports multiple installation methods to fit your workflow preferences. Choose the option that works best for you.
## Option 1: Using uv (Recommended)
```bash
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Install uv if you don't have it already
pip install uv
# Create a virtual environment and install dependencies
# We support using Python 3.10, 3.11, 3.12
uv venv .venv --python=3.10
# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate
# Install CAMEL with all dependencies
uv pip install -e .
# Exit the virtual environment when done
deactivate
```
## Option 2: Using venv and pip
```bash
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Create a virtual environment
# For Python 3.10 (also works with 3.11, 3.12)
python3.10 -m venv .venv
# Activate the virtual environment
# For macOS/Linux
source .venv/bin/activate
# For Windows
.venv\Scripts\activate
# Install from requirements.txt
pip install -r requirements.txt
```
## Option 3: Using conda
```bash
# Clone github repo
git clone https://github.com/camel-ai/owl.git
# Change directory into project directory
cd owl
# Create a conda environment
conda create -n owl python=3.10
# Activate the conda environment
conda activate owl
# Option 1: Install as a package (recommended)
pip install -e .
# Option 2: Install from requirements.txt
pip install -r requirements.txt
# Exit the conda environment when done
conda deactivate
```
## **Setup Environment Variables**
OWL requires various API keys to interact with different services. The `owl/.env_template` file contains placeholders for all necessary API keys along with links to the services where you can register for them.
### Option 1: Using a `.env` File (Recommended)
1. **Copy and Rename the Template**:
```bash
cd owl
cp .env_template .env
```
2. **Configure Your API Keys**:
Open the `.env` file in your preferred text editor and insert your API keys in the corresponding fields.
> **Note**: For the minimal example (`run_mini.py`), you only need to configure the LLM API key (e.g., `OPENAI_API_KEY`).
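If you want to double-check that your keys are picked up, here is a small hedged snippet; it assumes the `python-dotenv` package, which is how `.env` files are commonly loaded in Python projects:

```python
# Quick sanity check that the .env file is found and parsed.
# Run this from the directory containing your .env (e.g., owl/).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
print("OPENAI_API_KEY configured:", bool(os.getenv("OPENAI_API_KEY")))
```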
### Option 2: Setting Environment Variables Directly
Alternatively, you can set environment variables directly in your terminal:
- **macOS/Linux (Bash/Zsh)**:
```bash
export OPENAI_API_KEY="your-openai-api-key-here"
```
- **Windows (Command Prompt)**:
```batch
set OPENAI_API_KEY="your-openai-api-key-here"
```
- **Windows (PowerShell)**:
```powershell
$env:OPENAI_API_KEY = "your-openai-api-key-here"
```
> **Note**: Environment variables set directly in the terminal will only persist for the current session.
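If you work in a notebook or want to set a key programmatically, a hedged in-process alternative is to export it via `os.environ` before any model is constructed (the key value below is a placeholder):

```python
# Sets the key for the current Python process only; do this before creating models.
import os

os.environ["OPENAI_API_KEY"] = "your-openai-api-key-here"  # placeholder value
```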
## **Running with Docker**
```bash
# Clone the repository
git clone https://github.com/camel-ai/owl.git
cd owl
# Configure environment variables
cp owl/.env_template owl/.env
# Edit the .env file and fill in your API keys
# Option 1: Using docker-compose directly
cd .container
docker-compose up -d
# Run OWL inside the container
docker-compose exec owl bash -c "xvfb-python run.py"
# Option 2: Build and run using the provided scripts
cd .container
chmod +x build_docker.sh
./build_docker.sh
# Run OWL inside the container
./run_in_docker.sh "your question"
```
For more detailed Docker usage instructions, including cross-platform support, optimized configurations, and troubleshooting, please refer to [DOCKER_README.md](.container/DOCKER_README_en.md).
# 🚀 Quick Start
After installation and setting up your environment variables, you can start using OWL right away:
```bash
python owl/run.py
```
## Running with Different Models
### Model Requirements
- **Tool Calling**: OWL requires models with robust tool calling capabilities to interact with various toolkits. Models must be able to understand tool descriptions, generate appropriate tool calls, and process tool outputs.
- **Multimodal Understanding**: For tasks involving web interaction, image analysis, or video processing, models with multimodal capabilities are required to interpret visual content and context.
#### Supported Models
For information on configuring AI models, please refer to our [CAMEL models documentation](https://docs.camel-ai.org/key_modules/models.html#supported-model-platforms-in-camel).
> **Note**: For optimal performance, we strongly recommend using OpenAI models (GPT-4 or later versions). Our experiments show that other models may result in significantly lower performance on complex tasks and benchmarks, especially those requiring advanced multi-modal understanding and tool use.
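As a sketch of what explicit model configuration can look like, the following assumes CAMEL's `ModelFactory` interface from the documentation linked above; the temperature value is an illustrative choice, not a project default:

```python
# Hedged sketch: constructing a model via CAMEL's ModelFactory, per the CAMEL docs.
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O,
    model_config_dict={"temperature": 0},  # illustrative setting
)
```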
OWL supports various LLM backends, though capabilities may vary depending on the model's tool calling and multimodal abilities. You can use the following scripts to run with different models:
```bash
# Run with Qwen model
python owl/run_qwen_zh.py
# Run with Deepseek model
python owl/run_deepseek_zh.py
# Run with other OpenAI-compatible models
python owl/run_openai_compatiable_model.py
# Run with Ollama
python owl/run_ollama.py
```
For a simpler version that only requires an LLM API key, you can try our minimal example:
```bash
python owl/run_mini.py
```
You can run the OWL agent with your own task by modifying the `run.py` script:
```python
# Define your own task
question = "Task description here."
society = construct_society(question)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")
```
To run a task on a local file, simply include the file path in your question:
```python
# Task with a local file (e.g., file path: `tmp/example.docx`)
question = "What is in the given DOCX file? Here is the file path: tmp/example.docx"
society = construct_society(question)
answer, chat_history, token_count = run_society(society)
print(f"\033[94mAnswer: {answer}\033[0m")
```
OWL will then automatically invoke document-related tools to process the file and extract the answer.
### Example Tasks
Here are some tasks you can try with OWL:
- "Find the latest stock price for Apple Inc."
- "Analyze the sentiment of recent tweets about climate change"
- "Help me debug this Python code: [your code here]"
- "Summarize the main points from this research paper: [paper URL]"
- "Create a data visualization for this dataset: [dataset path]"
# 🧰 Toolkits and Capabilities
> **Important**: Effective use of toolkits requires models with strong tool calling capabilities. For multimodal toolkits (Web, Image, Video), models must also have multimodal understanding abilities.
OWL supports various toolkits that can be customized by modifying the `tools` list in your script:
```python
# Configure toolkits
tools = [
*WebToolkit(headless=False).get_tools(), # Browser automation
*VideoAnalysisToolkit(model=models["video"]).get_tools(),
*AudioAnalysisToolkit().get_tools(), # Requires OpenAI Key
*CodeExecutionToolkit(sandbox="subprocess").get_tools(),
*ImageAnalysisToolkit(model=models["image"]).get_tools(),
SearchToolkit().search_duckduckgo,
SearchToolkit().search_google, # Comment out if unavailable
SearchToolkit().search_wiki,
*ExcelToolkit().get_tools(),
*DocumentProcessingToolkit(model=models["document"]).get_tools(),
*FileWriteToolkit(output_dir="./").get_tools(),
]
```
## Available Toolkits
Key toolkits include:
### Multimodal Toolkits (Require multimodal model capabilities)
- **WebToolkit**: Browser automation for web interaction and navigation
- **VideoAnalysisToolkit**: Video processing and content analysis
- **ImageAnalysisToolkit**: Image analysis and interpretation
### Text-Based Toolkits
- **AudioAnalysisToolkit**: Audio processing (requires OpenAI API)
- **CodeExecutionToolkit**: Python code execution and evaluation
- **SearchToolkit**: Web searches (Google, DuckDuckGo, Wikipedia)
- **DocumentProcessingToolkit**: Document parsing (PDF, DOCX, etc.)
Additional specialized toolkits: ArxivToolkit, GitHubToolkit, GoogleMapsToolkit, MathToolkit, NetworkXToolkit, NotionToolkit, RedditToolkit, WeatherToolkit, and more. For a complete list, see the [CAMEL toolkits documentation](https://docs.camel-ai.org/key_modules/tools.html#built-in-toolkits).
## Customizing Your Configuration
To customize available tools:
```python
# 1. Import toolkits
from camel.toolkits import WebToolkit, SearchToolkit, CodeExecutionToolkit
# 2. Configure tools list
tools = [
*WebToolkit(headless=True).get_tools(),
SearchToolkit().search_wiki,
*CodeExecutionToolkit(sandbox="subprocess").get_tools(),
]
# 3. Pass to assistant agent
assistant_agent_kwargs = {"model": models["assistant"], "tools": tools}
```
Selecting only necessary toolkits optimizes performance and reduces resource usage.
# 🌐 Web Interface
OWL includes an intuitive web-based user interface that makes it easier to interact with the system.
## Starting the Web UI
```bash
# Start the Chinese version
python run_app_zh.py
# Start the English version
python run_app.py
```
## Features
- **Easy Model Selection**: Choose between different models (OpenAI, Qwen, DeepSeek, etc.)
- **Environment Variable Management**: Configure your API keys and other settings directly from the UI
- **Interactive Chat Interface**: Communicate with OWL agents through a user-friendly interface
- **Task History**: View the history and results of your interactions
The web interface is built using Gradio and runs locally on your machine. No data is sent to external servers beyond what's required for the model API calls you configure.
# 🧪 Experiments
To reproduce OWL's GAIA benchmark score of 58.18:
1. Switch to the `gaia58.18` branch:
```bash
git checkout gaia58.18
```
2. Run the evaluation script:
```bash
python run_gaia_roleplaying.py
```
This will execute the same configuration that achieved our top-ranking performance on the GAIA benchmark.
# ⏱️ Future Plans
We're continuously working to improve OWL. Here's what's on our roadmap:
- [ ] Write a technical blog post detailing our exploration and insights in multi-agent collaboration in real-world tasks
- [ ] Enhance the toolkit ecosystem with more specialized tools for domain-specific tasks
- [ ] Develop more sophisticated agent interaction patterns and communication protocols
- [ ] Improve performance on complex multi-step reasoning tasks
# 📄 License
The source code is licensed under Apache 2.0.
# 🖊️ Cite
If you find this repo useful, please cite:
```
@misc{owl2025,
title = {OWL: Optimized Workforce Learning for General Multi-Agent Assistance in Real-World Task Automation},
author = {{CAMEL-AI.org}},
howpublished = {\url{https://github.com/camel-ai/owl}},
note = {Accessed: 2025-03-07},
year = {2025}
}
```
# 🤝 Contributing
We welcome contributions from the community! Here's how you can help:
1. Read our [Contribution Guidelines](https://github.com/camel-ai/camel/blob/master/CONTRIBUTING.md)
2. Check [open issues](https://github.com/camel-ai/camel/issues) or create new ones
3. Submit pull requests with your improvements
**Current Issues Open for Contribution:**
- [#1770](https://github.com/camel-ai/camel/issues/1770)
- [#1712](https://github.com/camel-ai/camel/issues/1712)
- [#1537](https://github.com/camel-ai/camel/issues/1537)
- [#1827](https://github.com/camel-ai/camel/issues/1827)
To take on an issue, simply leave a comment stating your interest.
# 👥 Community
Join us on [*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.org/camel/wechat.png) to help push the boundaries of finding the scaling laws of agents and to take part in further discussions!

<!--  -->
# ❓ FAQ
**Q: Why don't I see Chrome running locally after starting the example script?**
A: If OWL determines that a task can be completed using non-browser tools (such as search or code execution), the browser will not be launched. The browser window will only appear when OWL determines that browser-based interaction is necessary.
**Q: Which Python version should I use?**
A: OWL supports Python 3.10, 3.11, and 3.12.
**Q: How can I contribute to the project?**
A: See our [Contributing](#-contributing) section for details on how to get involved. We welcome contributions of all kinds, from code improvements to documentation updates.
# 📚 Exploring CAMEL Dependency
OWL is built on top of the [CAMEL](https://github.com/camel-ai/camel) Framework. Here's how you can explore the CAMEL source code and understand how it works with OWL:
## Accessing CAMEL Source Code
```bash
# Clone the CAMEL repository
git clone https://github.com/camel-ai/camel.git
cd camel
```
# ⭐ Star History
[](https://star-history.com/#camel-ai/owl&Date)
[docs-image]: https://img.shields.io/badge/Documentation-EB3ECC
[docs-url]: https://camel-ai.github.io/camel/index.html
[star-image]: https://img.shields.io/github/stars/camel-ai/owl?label=stars&logo=github&color=brightgreen
[star-url]: https://github.com/camel-ai/owl/stargazers
[package-license-image]: https://img.shields.io/badge/License-Apache_2.0-blue.svg
[package-license-url]: https://github.com/camel-ai/owl/blob/main/licenses/LICENSE
[colab-url]: https://colab.research.google.com/drive/1AzP33O8rnMW__7ocWJhVBXjKziJXPtim?usp=sharing
[colab-image]: https://colab.research.google.com/assets/colab-badge.svg
[huggingface-url]: https://huggingface.co/camel-ai
[huggingface-image]: https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-CAMEL--AI-ffc107?color=ffc107&logoColor=white
[discord-url]: https://discord.camel-ai.org/
[discord-image]: https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb
[wechat-url]: https://ghli.org/camel/wechat.png
[wechat-image]: https://img.shields.io/badge/WeChat-CamelAIOrg-brightgreen?logo=wechat&logoColor=white
[x-url]: https://x.com/CamelAIOrg
[x-image]: https://img.shields.io/twitter/follow/CamelAIOrg?style=social
[twitter-image]: https://img.shields.io/twitter/follow/CamelAIOrg?style=social&color=brightgreen&logo=twitter
[reddit-url]: https://www.reddit.com/r/CamelAI/
[reddit-image]: https://img.shields.io/reddit/subreddit-subscribers/CamelAI?style=plastic&logo=reddit&label=r%2FCAMEL&labelColor=white
[ambassador-url]: https://www.camel-ai.org/community
[owl-url]: ./assets/qr_code.jpg
[owl-image]: https://img.shields.io/badge/WeChat-OWLProject-brightgreen?logo=wechat&logoColor=white