*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-and-multi-agent-application.md`*

# Agent and Multi-Agent Applications

An **agent** is a software entity that communicates via messages, maintains its own state, and performs actions in response to received messages or changes in its state. These actions may modify the agent's state and produce external effects, such as updating message logs, sending new messages, executing code, or making API calls.

Many software systems can be modeled as a collection of independent agents that interact with one another. Examples include:

- Sensors on a factory floor
- Distributed services powering web applications
- Business workflows involving multiple stakeholders
- AI agents, such as those powered by language models (e.g., GPT-4), which can write code, interface with external systems, and communicate with other agents

These systems, composed of multiple interacting agents, are referred to as **multi-agent applications**.

> **Note:**
> AI agents typically use language models as part of their software stack to interpret messages, perform reasoning, and execute actions.
## Characteristics of Multi-Agent Applications

In multi-agent applications, agents may:

- Run within the same process or on the same machine
- Operate across different machines or organizational boundaries
- Be implemented in diverse programming languages and make use of different AI models or instructions
- Work together towards a shared goal, coordinating their actions through messaging

Each agent is a self-contained unit that can be developed, tested, and deployed independently. This modular design allows agents to be reused across different scenarios and composed into more complex systems.

Agents are inherently **composable**: simple agents can be combined to form complex, adaptable applications, where each agent contributes a specific function or service to the overall system.
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/index.md`*

# Core Concepts

The following sections describe the main concepts of the Core API and the system architecture.

```{toctree}
:maxdepth: 1

agent-and-multi-agent-application
architecture
api-layers
application-stack
agent-identity-and-lifecycle
topic-and-subscription
```
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/application-stack.md`*

# Application Stack

AutoGen core is designed to be an unopinionated framework that can be used to build a wide variety of multi-agent applications. It is not tied to any specific agent abstraction or multi-agent pattern.

The following diagram shows the application stack.

*(figure: application stack)*

At the bottom of the stack are the base messaging and routing facilities that enable agents to communicate with each other. These are managed by the agent runtime, and for most applications, developers only need to interact with the high-level APIs provided by the runtime (see [Agent and Agent Runtime](../framework/agent-and-agent-runtime.ipynb)).

At the top of the stack, developers define the types of the messages that agents exchange. This set of message types forms a behavior contract that agents must adhere to, and the implementation of the contract determines how agents handle messages. The behavior contract is also sometimes referred to as the message protocol. It is the developer's responsibility to implement the behavior contract. Multi-agent patterns emerge from these behavior contracts (see [Multi-Agent Design Patterns](../design-patterns/index.md)).
## An Example Application

Consider a concrete example of a multi-agent application for code generation. The application consists of three agents: Coder Agent, Executor Agent, and Reviewer Agent. The following diagram shows the data flow between the agents and the message types exchanged between them.

*(figure: code generation example data flow)*

In this example, the behavior contract consists of the following:

- `CodingTaskMsg` message from the application to the Coder Agent
- `CodeGenMsg` from the Coder Agent to the Executor Agent
- `ExecutionResultMsg` from the Executor Agent to the Reviewer Agent
- `ReviewMsg` from the Reviewer Agent to the Coder Agent
- `CodingResultMsg` from the Reviewer Agent to the application

The behavior contract is implemented by the agents' handling of these messages. For example, the Reviewer Agent listens for `ExecutionResultMsg` and evaluates the code execution result to decide whether to approve or reject. If approved, it sends a `CodingResultMsg` to the application; otherwise, it sends a `ReviewMsg` to the Coder Agent for another round of code generation.

This behavior contract is a case of a multi-agent pattern called _reflection_, where a generation result is reviewed by another round of generation to improve the overall quality.
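The message types in this contract could be sketched as plain Python dataclasses. This is illustrative only: the field names (`task`, `code`, `output`, `exit_code`, `feedback`) are assumptions, not definitions from the docs.

```python
from dataclasses import dataclass


@dataclass
class CodingTaskMsg:
    task: str  # the coding task, sent by the application


@dataclass
class CodeGenMsg:
    code: str  # code produced by the Coder Agent


@dataclass
class ExecutionResultMsg:
    output: str     # stdout/stderr from running the code
    exit_code: int  # 0 means the run succeeded


@dataclass
class ReviewMsg:
    feedback: str  # reviewer feedback sent back to the Coder Agent


@dataclass
class CodingResultMsg:
    code: str    # final, approved code
    output: str  # its execution output
```

Defining the contract as concrete types like this is what lets each agent be developed and tested independently: an agent only needs to know the message types it handles, not the other agents' implementations.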
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/topic-and-subscription.md`*

# Topic and Subscription

There are two ways for the runtime to deliver messages: direct messaging and broadcast. Direct messaging is one-to-one: the sender must provide the recipient's agent ID. Broadcast, on the other hand, is one-to-many, and the sender does not provide the recipients' agent IDs.

Many scenarios are suitable for broadcast. For example, in event-driven workflows, agents do not always know who will handle their messages, and a workflow can be composed of agents with no inter-dependencies. This section focuses on the core concepts of broadcast: topic and subscription.
## Topic

A topic defines the scope of a broadcast message. In essence, the agent runtime implements a publish-subscribe model through its broadcast API: when publishing a message, the topic must be specified. A topic is an indirection over agent IDs.

A topic consists of two components: topic type and topic source.

```{note}
Topic = (Topic Type, Topic Source)
```

Similar to an [agent ID](./agent-identity-and-lifecycle.md#agent-id), which also has two components, the topic type is usually defined by application code to mark the type of messages the topic is for. For example, a GitHub agent may use `"GitHub_Issues"` as the topic type when publishing messages about new issues.

The topic source is the unique identifier for a topic within a topic type. It is typically defined by application data. For example, the GitHub agent may use `"github.com/{repo_name}/issues/{issue_number}"` as the topic source to uniquely identify the topic. The topic source allows the publisher to limit the scope of messages and create silos.

Topic IDs can be converted to and from strings. The format of this string is:

```{note}
Topic_Type/Topic_Source
```

Types are considered valid if they are in UTF-8 and only contain alphanumeric characters (a-z, 0-9) or underscores (_). A valid identifier cannot start with a number or contain any spaces.

Sources are considered valid if they are in UTF-8 and only contain characters between (inclusive) ASCII 32 (space) and 126 (~).
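The validity rules and string format above can be captured in a few helper functions. This is a sketch of the rules as stated, not the runtime's actual validation code (note that a source may itself contain `/`, so parsing splits only on the first separator):

```python
import re


def is_valid_topic_type(topic_type: str) -> bool:
    # Alphanumerics or underscores only; must not start with a digit
    # and must contain no spaces (e.g. "GitHub_Issues" is valid).
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", topic_type) is not None


def is_valid_topic_source(source: str) -> bool:
    # Printable ASCII between 32 (space) and 126 (~), inclusive.
    return all(32 <= ord(c) <= 126 for c in source)


def topic_to_str(topic_type: str, source: str) -> str:
    """Serialize a topic ID as Topic_Type/Topic_Source."""
    if not (is_valid_topic_type(topic_type) and is_valid_topic_source(source)):
        raise ValueError("invalid topic type or source")
    return f"{topic_type}/{source}"


def topic_from_str(s: str) -> tuple[str, str]:
    # Split only on the first "/": the source may contain "/" itself.
    topic_type, _, source = s.partition("/")
    return topic_type, source
```

For example, the GitHub topic above round-trips as `"GitHub_Issues/github.com/microsoft/autogen/issues/1"`.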
## Subscription

A subscription maps topics to agent IDs.

*(figure: relationship between topic and subscription)*

The diagram above shows the relationship between topic and subscription. An agent runtime keeps track of the subscriptions and uses them to deliver messages to agents.

If a topic has no subscription, messages published to this topic will not be delivered to any agent. If a topic has many subscriptions, messages will be delivered following all the subscriptions, but each recipient agent receives a given message only once. Applications can add or remove subscriptions using the agent runtime's API.
## Type-based Subscription

A type-based subscription maps a topic type to an agent type (see [agent ID](./agent-identity-and-lifecycle.md#agent-id)). It declares an unbounded mapping from topics to agent IDs without knowing the exact topic sources and agent keys. The mechanism is simple: any topic matching the type-based subscription's topic type will be mapped to an agent ID with the subscription's agent type, and the agent key assigned to the value of the topic source. In the Python API, use {py:class}`~autogen_core.components.TypeSubscription`.

```{note}
Type-Based Subscription = Topic Type --> Agent Type
```

Generally speaking, type-based subscription is the preferred way to declare subscriptions. It is portable and data-independent: developers do not need to write application code that depends on specific agent IDs.

### Scenarios of Type-Based Subscription

Type-based subscriptions can be applied to many scenarios in which the exact topic or agent IDs are data-dependent. The scenarios can be broken down by two considerations: (1) whether it is single-tenant or multi-tenant, and (2) whether there is a single topic or multiple topics per tenant. A tenant typically refers to a set of agents that handle a specific user session or a specific request.

#### Single-Tenant, Single Topic

In this scenario, there is only one tenant and one topic for the entire application. It is the simplest scenario and can be used in many cases, such as a command line tool or a single-user application.

To apply type-based subscription to this scenario, create one type-based subscription for each agent type, and use the same topic type for all the type-based subscriptions. When you publish, always use the same topic, i.e., the same topic type and topic source.
For example, assuming there are three agent types, `"triage_agent"`, `"coder_agent"` and `"reviewer_agent"`, and the topic type is `"default"`, create the following type-based subscriptions:

```python
# Type-based subscriptions for the single-tenant, single topic scenario
TypeSubscription(topic_type="default", agent_type="triage_agent")
TypeSubscription(topic_type="default", agent_type="coder_agent")
TypeSubscription(topic_type="default", agent_type="reviewer_agent")
```

With the above type-based subscriptions, use the same topic source `"default"` for all messages, so the topic is always `("default", "default")`. A message published to this topic will be delivered to all the agents of all the above types. Specifically, the message will be sent to the following agent IDs:

```python
# The agent IDs created based on the topic source
AgentID("triage_agent", "default")
AgentID("coder_agent", "default")
AgentID("reviewer_agent", "default")
```

The following figure shows how type-based subscription works in this example.

*(figure: type-based subscription, single-tenant, single topic)*

If the agent with the ID does not exist, the runtime will create it.

#### Single-Tenant, Multiple Topics

In this scenario, there is only one tenant, but you want to control which agent handles which topic. This is useful when you want to create silos and have different agents specialized in handling different topics.

To apply type-based subscription to this scenario, create one type-based subscription for each agent type, but with different topic types. You can map the same topic type to multiple agent types if you want these agent types to share the same topic. For the topic source, still use the same value for all messages when you publish.
Continuing the example above with the same agent types, create the following type-based subscriptions:

```python
# Type-based subscriptions for the single-tenant, multiple topics scenario
TypeSubscription(topic_type="triage", agent_type="triage_agent")
TypeSubscription(topic_type="coding", agent_type="coder_agent")
TypeSubscription(topic_type="coding", agent_type="reviewer_agent")
```

With the above type-based subscriptions, any message published to the topic `("triage", "default")` will be delivered to the agent with type `"triage_agent"`, and any message published to the topic `("coding", "default")` will be delivered to the agents with types `"coder_agent"` and `"reviewer_agent"`.

The following figure shows how type-based subscription works in this example.

*(figure: type-based subscription, single-tenant, multiple topics)*

#### Multi-Tenant Scenarios

In single-tenant scenarios, the topic source is always the same (e.g., `"default"`) -- it is hard-coded in the application code. When moving to multi-tenant scenarios, the topic source becomes data-dependent.

```{note}
A good indication that you are in a multi-tenant scenario is that you need multiple instances of the same agent type. For example, you may want to have different agent instances handle different user sessions to keep private data isolated, or you may want to distribute a heavy workload across multiple instances of the same agent type and have them work on it concurrently.
```

Continuing the example above, if you want dedicated instances of agents to handle a specific GitHub issue, you need to set the topic source to a unique identifier for the issue. For example, let's say there is one type-based subscription for the agent type `"triage_agent"`:

```python
TypeSubscription(topic_type="github_issues", agent_type="triage_agent")
```

When a message is published to the topic `("github_issues", "github.com/microsoft/autogen/issues/1")`, the runtime will deliver the message to the agent with ID `("triage_agent", "github.com/microsoft/autogen/issues/1")`.
When a message is published to the topic `("github_issues", "github.com/microsoft/autogen/issues/9")`, the runtime will deliver the message to the agent with ID `("triage_agent", "github.com/microsoft/autogen/issues/9")`.

The following figure shows how type-based subscription works in this example.

*(figure: type-based subscription, multi-tenant)*

Note that the agent ID is data-dependent, and the runtime will create a new instance of the agent if it does not exist.

To support multiple topics per tenant, you can use different topic types, just like in the single-tenant, multiple topics scenario.
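The matching mechanism described in these scenarios -- a published topic `(type, source)` plus a set of type-based subscriptions yields the recipient agent IDs, with the topic source becoming the agent key -- can be sketched as a small pure function (illustrative only, not the runtime's actual implementation):

```python
def recipients(topic: tuple[str, str],
               subscriptions: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """For each type-based subscription (topic_type -> agent_type) whose
    topic type matches the published topic's type, produce an agent ID
    whose key is the topic source."""
    topic_type, topic_source = topic
    return [
        (agent_type, topic_source)
        for sub_topic_type, agent_type in subscriptions
        if sub_topic_type == topic_type
    ]
```

With the single subscription `("github_issues", "triage_agent")`, publishing to `("github_issues", "github.com/microsoft/autogen/issues/1")` yields exactly one recipient, `("triage_agent", "github.com/microsoft/autogen/issues/1")`, matching the behavior described above.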
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/agent-identity-and-lifecycle.md`*

# Agent Identity and Lifecycle

The agent runtime manages agents' identities and lifecycles. An application does not create agents directly; rather, it registers an agent type with a factory function for agent instances. In this section, we explain how agents are identified and created by the runtime.
## Agent ID

An agent ID uniquely identifies an agent instance within an agent runtime -- including a distributed runtime. It is the "address" of the agent instance for receiving messages. It has two components: agent type and agent key.

```{note}
Agent ID = (Agent Type, Agent Key)
```

The agent type is not an agent class. It associates an agent with a specific factory function, which produces instances of agents of the same agent type. For example, different factory functions can produce the same agent class but with different constructor parameters. The agent key is an instance identifier for the given agent type.

Agent IDs can be converted to and from strings. The format of this string is:

```{note}
Agent_Type/Agent_Key
```

Types and keys are considered valid if they only contain alphanumeric characters (a-z, 0-9) or underscores (_). A valid identifier cannot start with a number or contain any spaces.

In a multi-agent application, agent types are typically defined directly by the application, i.e., they are defined in the application code. On the other hand, agent keys are typically generated from the messages delivered to the agents, i.e., they are defined by the application data.

For example, suppose a runtime has registered the agent type `"code_reviewer"` with a factory function producing agent instances that perform code reviews. Each code review request has a unique ID `review_request_id` to mark a dedicated session. In this case, each request can be handled by a new instance with the agent ID `("code_reviewer", review_request_id)`.
## Agent Lifecycle

When a runtime delivers a message to an agent instance given its ID, it either fetches the instance, or creates it if it does not exist.

*(figure: agent lifecycle)*

The runtime is also responsible for "paging in" or "out" agent instances to conserve resources and balance load across multiple machines. This is not yet implemented.
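The register-a-factory and fetch-or-create behavior described above can be modeled with a small registry. This is a toy sketch of the concept, not the runtime's real API; the class and method names are invented for illustration:

```python
class AgentRuntimeModel:
    """Toy model: agent types map to factory functions; instances are
    created lazily on first message delivery and reused afterwards."""

    def __init__(self):
        self._factories = {}   # agent type -> factory function
        self._instances = {}   # (agent type, agent key) -> instance

    def register(self, agent_type, factory):
        # The application registers a type, never a concrete instance.
        self._factories[agent_type] = factory

    def get_or_create(self, agent_id):
        agent_type, agent_key = agent_id
        if agent_id not in self._instances:
            # First delivery to this ID: run the factory once.
            self._instances[agent_id] = self._factories[agent_type]()
        return self._instances[agent_id]
```

Delivering two messages to the same ID reuses the instance, while a new agent key (e.g. a new `review_request_id`) triggers a fresh instantiation.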
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/architecture.md`*

# Agent Runtime Environments

At the foundation level, the framework provides a _runtime environment_, which facilitates communication between agents, manages their identities and lifecycles, and enforces security and privacy boundaries.

It supports two types of runtime environment: _standalone_ and _distributed_. Both types provide a common set of APIs for building multi-agent applications, so you can switch between them without changing your agent implementation. Each type can also have multiple implementations.
## Standalone Agent Runtime

The standalone runtime is suitable for single-process applications where all agents are implemented in the same programming language and run in the same process. In the Python API, an example of a standalone runtime is the {py:class}`~autogen_core.application.SingleThreadedAgentRuntime`.

The following diagram shows the standalone runtime in the framework.

*(figure: standalone runtime)*

Here, agents communicate via messages through the runtime, and the runtime manages the _lifecycle_ of agents.

Developers can build agents quickly using the provided components, including _routed agents_, AI model _clients_, tools for AI models, code execution sandboxes, model context stores, and more. They can also implement their own agents from scratch, or use other libraries.
## Distributed Agent Runtime

The distributed runtime is suitable for multi-process applications where agents may be implemented in different programming languages and run on different machines.

*(figure: distributed runtime)*

A distributed runtime, as shown in the diagram above, consists of a _host servicer_ and multiple _workers_. The host servicer facilitates communication between agents across workers and maintains the states of connections. The workers run agents and communicate with the host servicer via _gateways_. They advertise to the host servicer the agents they run and manage the agents' lifecycles.

Agents work the same way as in the standalone runtime, so developers can switch between the two runtime types with no change to their agent implementation.
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/api-layers.md`*

# API Layers

The API consists of the following layers:

- {py:mod}`autogen_core.base`
- {py:mod}`autogen_core.application`
- {py:mod}`autogen_core.components`

The following diagram shows the relationship between the layers.

*(figure: relationship between API layers)*

The {py:mod}`autogen_core.base` layer defines the core interfaces and base classes for agents, messages, and runtime. This layer is the foundation of the framework and is used by the other layers.

The {py:mod}`autogen_core.application` layer provides concrete implementations of the runtime and utilities like logging for building multi-agent applications.

The {py:mod}`autogen_core.components` layer provides reusable components for building AI agents, including type-routed agents, AI model clients, tools for AI models, code execution sandboxes, and memory stores.

The layers are loosely coupled and can be used independently. For example, you can swap out the runtime in the {py:mod}`autogen_core.application` layer with your own runtime implementation. You can also skip the components in the {py:mod}`autogen_core.components` layer and build your own components.
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/index.md`*

# Cookbook

This section contains a collection of recipes that demonstrate how to use the Core API features.
## List of recipes

```{toctree}
:maxdepth: 1

azure-openai-with-aad-auth
termination-with-intervention
tool-use-with-intervention
extracting-results-with-an-agent
openai-assistant-agent
langgraph-agent
llamaindex-agent
local-llms-ollama-litellm
instrumenting
topic-subscription-scenarios
structured-output-agent
```
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/instrumenting.md`*

# Instrumenting your code locally

AutoGen supports instrumenting your code using [OpenTelemetry](https://opentelemetry.io). This allows you to collect traces and logs from your code and send them to a backend of your choice.

While debugging, you can use a local backend such as [Aspire](https://aspiredashboard.com/) or [Jaeger](https://www.jaegertracing.io/). In this guide we use Aspire as an example.
## Setting up Aspire

Follow the instructions [here](https://learn.microsoft.com/en-us/dotnet/aspire/fundamentals/dashboard/overview?tabs=bash#standalone-mode) to set up Aspire in standalone mode. This requires Docker to be installed on your machine.
## Instrumenting your code

Once you have a dashboard set up, it's a matter of sending traces and logs to it. You can follow the steps in the [Telemetry Guide](../framework/telemetry.md) to set up the OpenTelemetry SDK and exporter.

After instrumenting your code, with the Aspire dashboard running, you should see traces and logs appear in the dashboard as your code runs.
## Observing LLM calls using OpenAI

If you are using the OpenAI package, you can observe the LLM calls by setting up OpenTelemetry instrumentation for that library. We use [opentelemetry-instrumentation-openai](https://pypi.org/project/opentelemetry-instrumentation-openai/) in this example.

Install the package:

```bash
pip install opentelemetry-instrumentation-openai
```

Enable the instrumentation:

```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument()
```

Now running your code will send traces, including the LLM calls, to your telemetry backend (Aspire in our case).

*(figure: LLM call traces in the Aspire dashboard)*
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/cookbook/azure-openai-with-aad-auth.md`*

# Azure OpenAI with AAD Auth

This guide shows you how to use the Azure OpenAI client with Azure Active Directory (AAD) authentication.

The identity used must be assigned the [**Cognitive Services OpenAI User**](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user) role.
## Install Azure Identity client

The Azure Identity client is used to authenticate with Azure Active Directory.

```sh
pip install azure-identity
```
## Using the Model Client

```python
from autogen_ext.models import AzureOpenAIChatCompletionClient
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Create the token provider
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAIChatCompletionClient(
    azure_deployment="{your-azure-deployment}",
    model="{model-name, such as gpt-4o}",
    api_version="2024-02-01",
    azure_endpoint="https://{your-custom-endpoint}.openai.azure.com/",
    azure_ad_token_provider=token_provider,
)
```

```{note}
See [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions) for how to use the Azure client directly or for more info.
```
*Source: `autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/design-patterns/index.md`*

# Multi-Agent Design Patterns

Agents can work together in a variety of ways to solve problems. Research works like [AutoGen](https://aka.ms/autogen-paper), [MetaGPT](https://arxiv.org/abs/2308.00352) and [ChatDev](https://arxiv.org/abs/2307.07924) have shown multi-agent systems outperforming single-agent systems at complex tasks like software development.

A multi-agent design pattern is a structure that emerges from message protocols: it describes how agents interact with each other to solve problems. For example, the [tool-equipped agent](../framework/tools.ipynb#tool-equipped-agent) in the previous section employs a design pattern called ReAct, which involves an agent interacting with tools.

You can implement any multi-agent design pattern using AutoGen agents. In the next two sections, we will discuss two common design patterns: group chat for task decomposition, and reflection for robustness.

```{toctree}
:maxdepth: 1

group-chat
handoffs
mixture-of-agents
multi-agent-debate
reflection
```
*Source: `autogen/python/packages/agbench/CONTRIBUTING.md`*

# Contributing to AutoGenBench

As part of the broader AutoGen project, AutoGenBench welcomes community contributions. Contributions are subject to AutoGen's [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute), as well as a few additional AutoGenBench-specific requirements outlined here. You may also wish to develop your own private benchmark scenarios; the guidance in this document will help with such efforts as well.

Below you will find the general requirements, followed by a detailed technical description.
## General Contribution Requirements

We ask that all contributions to AutoGenBench adhere to the following:

- Follow AutoGen's broader [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute)
- All AutoGenBench benchmarks should live in a subfolder of `/benchmarks`, alongside `HumanEval`, `GAIA`, etc.
- Benchmark scenarios should include a detailed README.md, in the root of their folder, describing the benchmark and providing citations where warranted.
- Benchmark data (tasks, ground truth, etc.) should be downloaded from their original sources rather than hosted in the AutoGen repository (unless the benchmark is original, and the repository *is* the original source)
  - You can use the `Scripts/init_tasks.py` file to automate this download.
- Basic scoring should be compatible with the `agbench tabulate` command (e.g., by outputting logs compatible with the default tabulation mechanism, or by providing a `Scripts/custom_tabulate.py` file)

These requirements are further detailed below, but if you simply copy the `HumanEval` folder, you will already be off to a great start.
## Implementing and Running Benchmark Tasks

At the core of any benchmark is a set of tasks. To implement tasks that are runnable by AutoGenBench, you must adhere to AutoGenBench's templating and scenario expansion algorithms, as outlined below.

### Task Definitions

All tasks are stored in JSONL files (in subdirectories under `./Tasks`). Each line of a tasks file is a JSON object with the following schema:

```
{
    "id": string,
    "template": dirname,
    "substitutions": {
        "filename1": {
            "find_string1_1": replace_string1_1,
            "find_string1_2": replace_string1_2,
            ...
            "find_string1_M": replace_string1_M
        },
        "filename2": {
            "find_string2_1": replace_string2_1,
            "find_string2_2": replace_string2_2,
            ...
            "find_string2_N": replace_string2_N
        }
    }
}
```

For example:

```
{
    "id": "two_agent_stocks_gpt4",
    "template": "default_two_agents",
    "substitutions": {
        "scenario.py": {
            "__MODEL__": "gpt-4"
        },
        "prompt.txt": {
            "__PROMPT__": "Plot and save to disk a chart of NVDA and TESLA stock price YTD."
        }
    }
}
```

In this example, the string `__MODEL__` will be replaced in the file `scenario.py`, while the string `__PROMPT__` will be replaced in the `prompt.txt` file.

The `template` field can also take on a list value, but this usage is considered advanced and is not described here. See the `agbench/run_cmd.py` code, or the `GAIA` benchmark task files, for additional information about this option.
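The substitution step amounts to a find-and-replace pass over each named template file. A minimal sketch of that logic (illustrative only; the real implementation lives in `agbench/run_cmd.py` and may differ):

```python
def apply_substitutions(files: dict[str, str],
                        substitutions: dict[str, dict[str, str]]) -> dict[str, str]:
    """files maps filename -> file contents; substitutions maps
    filename -> {find_string: replace_string}, as in the task schema.
    Returns a new mapping with all replacements applied."""
    out = dict(files)
    for filename, replacements in substitutions.items():
        text = out.get(filename, "")
        for find, replace in replacements.items():
            text = text.replace(find, replace)
        out[filename] = text
    return out
```

Applied to the example task above, this would turn `__MODEL__` in `scenario.py` into `gpt-4` and `__PROMPT__` in `prompt.txt` into the stock-chart prompt, leaving other files untouched.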
## Task Instance Expansion Algorithm

Once the tasks have been defined, as per above, they must be "instantiated" before they can be run. This instantiation happens automatically when the user issues the `agbench run` command, and involves creating a local folder to share with Docker. Each instance and repetition gets its own folder along the path `./results/[scenario]/[task_id]/[instance_id]`. For the sake of brevity we will refer to this folder as `DEST_FOLDER`.

The algorithm for populating `DEST_FOLDER` is as follows:

1. Pre-populate `DEST_FOLDER` with all the basic starter files for running a scenario (found in `agbench/template`).
2. Recursively copy the template folder specified in the JSONL line to `DEST_FOLDER` (if the JSON `template` attribute points to a folder). If the JSON's `template` attribute instead points to a file, copy the file, but rename it to `scenario.py`.
3. Apply any string replacements, as outlined in the prior section.
4. Write a `run.sh` file to `DEST_FOLDER` that will be executed by Docker when it is loaded. The `run.sh` is described below.
## Scenario Execution Algorithm

Once the task has been instantiated, it is run (via `run.sh`). This script executes the following steps:

1. If a file named `global_init.sh` is present, run it.
2. If a file named `scenario_init.sh` is present, run it.
3. Install the `requirements.txt` file (if running in Docker).
4. Run the task via `python scenario.py`.
5. If `scenario.py` exited cleanly (exit code 0), print "SCENARIO.PY COMPLETE !#!#".
6. Clean up (delete cache, etc.).
7. If a file named `scenario_finalize.sh` is present, run it.
8. If a file named `global_finalize.sh` is present, run it.
9. Echo "RUN COMPLETE !#!#", signaling that all steps completed.

Notably, this means that scenarios can add custom init and teardown logic by including `scenario_init.sh` and `scenario_finalize.sh` files.

At the time of this writing, the `run.sh` file is as follows:

```sh
export AUTOGEN_TESTBED_SETTING="Docker"
umask 000

# Run the global init script if it exists
if [ -f global_init.sh ] ; then
    . ./global_init.sh
fi

# Run the scenario init script if it exists
if [ -f scenario_init.sh ] ; then
    . ./scenario_init.sh
fi

# Run the scenario
pip install -r requirements.txt
python scenario.py
EXIT_CODE=$?
if [ $EXIT_CODE -ne 0 ]; then
    echo SCENARIO.PY EXITED WITH CODE: $EXIT_CODE !#!#
else
    echo SCENARIO.PY COMPLETE !#!#
fi

# Clean up
if [ -d .cache ] ; then
    rm -Rf .cache
fi

# Run the scenario finalize script if it exists
if [ -f scenario_finalize.sh ] ; then
    . ./scenario_finalize.sh
fi

# Run the global finalize script if it exists
if [ -f global_finalize.sh ] ; then
    . ./global_finalize.sh
fi

echo RUN.SH COMPLETE !#!#
```

Be warned that this listing is provided here for illustration purposes and may vary over time. The source of truth is the `run.sh` files found in the `./results/[taskset]/[task_id]/[instance_id]` folders.
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Integrating with `agbench tabulate` The above details are sufficient for defining and running tasks, but if you wish to support the `agbench tabulate` command, a few additional steps are required. ### Tabulations If you wish to leverage the default tabulation logic, it is as simple as arranging your `scenario.py` file to output the string "ALL TESTS PASSED !#!#" to the console in the event that a task was solved correctly. If you wish to implement your own tabulation logic, simply create the file `Scripts/custom_tabulate.py` and include a `main(args)` method. Here, the `args` parameter will be provided by AutoGenBench, and is a drop-in replacement for `sys.argv`. In particular, `args[0]` will be the invocation command (similar to the executable or script name in `sys.argv`), and the remaining values (`args[1:]`) are the command line parameters. Should you provide a custom tabulation script, please implement `--help` and `-h` options for documenting your interface. The `scenarios/GAIA/Scripts/custom_tabulate.py` is a great example of custom tabulation. It also shows how you can reuse some components of the default tabulator to speed up development. |
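A minimal `Scripts/custom_tabulate.py` following the contract above might look like the sketch below. This is an illustration only, not the actual GAIA tabulator: the helper `tabulate_results` and its output format are invented for the example, and the success marker is the default "ALL TESTS PASSED !#!#" string mentioned above:

```python
from pathlib import Path

SUCCESS_MARKER = "ALL TESTS PASSED !#!#"

def tabulate_results(results_folder: str) -> dict[str, list[bool]]:
    """One entry per task_id; one boolean per instance (True = solved)."""
    table = {}
    for task_dir in sorted(Path(results_folder).iterdir()):
        outcomes = []
        for instance_dir in sorted(task_dir.iterdir()):
            log = instance_dir / "console_log.txt"
            # Treat a run as solved if its console log contains the marker
            outcomes.append(log.is_file() and SUCCESS_MARKER in log.read_text())
        table[task_dir.name] = outcomes
    return table

def main(args: list[str]) -> None:
    # args is a drop-in replacement for sys.argv: args[0] is the invocation command
    if len(args) != 2 or args[1] in ("-h", "--help"):
        print(f"usage: {args[0]} RESULTS_FOLDER")
        return
    for task_id, outcomes in tabulate_results(args[1]).items():
        print(task_id, sum(outcomes), "/", len(outcomes))
```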
GitHub | autogen | autogen/python/packages/agbench/CONTRIBUTING.md | autogen | Scripts/init_tasks.py Finally, you should provide a `Scripts/init_tasks.py` file in your benchmark folder and include a `main()` method therein. This `init_tasks.py` script is a great place to download benchmarks from their original sources and convert them to the JSONL format required by AutoGenBench: - See `HumanEval/Scripts/init_tasks.py` for an example of how to expand a benchmark from an original GitHub repository. - See `GAIA/Scripts/init_tasks.py` for an example of how to expand a benchmark from the Hugging Face Hub. |
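A bare-bones `init_tasks.py` could write one JSONL record per task, as sketched below. The task data, the `Templates/MagenticOne` path, and the exact record fields (`id`, `template`, `substitutions`) are assumptions for illustration; match them to the task format described earlier in this guide and to the examples cited above:

```python
import json
from pathlib import Path

def main() -> None:
    # In a real script, download/parse the source benchmark here
    tasks = [
        {"id": "example_task_1", "prompt": "Write a function that adds two numbers."},
    ]
    out = Path("Tasks")
    out.mkdir(exist_ok=True)
    with open(out / "example_MagenticOne.jsonl", "w") as fh:
        for task in tasks:
            record = {
                "id": task["id"],
                "template": "Templates/MagenticOne",
                # Replace __PROMPT__ inside scenario.py during instantiation
                "substitutions": {"scenario.py": {"__PROMPT__": task["prompt"]}},
            }
            fh.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    main()
```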
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | # AutoGenBench AutoGenBench (agbench) is a tool for repeatedly running a set of pre-defined AutoGen tasks in a setting with tightly-controlled initial conditions. With each run, AutoGenBench will start from a blank slate. The agents being evaluated will need to work out what code needs to be written, and what libraries or dependencies to install, to solve tasks. The results of each run are logged, and can be ingested by analysis or metrics scripts (such as `agbench tabulate`). By default, all runs are conducted in freshly-initialized Docker containers, providing the recommended level of consistency and safety. AutoGenBench works with all AutoGen 0.1.* and 0.2.* versions. |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Technical Specifications If you are already an AutoGenBench pro, and want the full technical specifications, please review the [contributor's guide](CONTRIBUTING.md). |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Docker Requirement AutoGenBench also requires Docker (Desktop or Engine). **It will not run in GitHub codespaces**, unless you opt for native execution (which is strongly discouraged). To install Docker Desktop see [https://www.docker.com/products/docker-desktop/](https://www.docker.com/products/docker-desktop/). If you are working in WSL, you can follow the instructions below to set up your environment: 1. Install Docker Desktop. After installation, a restart is needed. Then open Docker Desktop and, under Settings > Resources > WSL Integration, enable integration with additional distros (e.g., Ubuntu). 2. Clone autogen and export `AUTOGEN_REPO_BASE`. This environment variable enables the Docker containers to use the correct version of the agents. ```bash git clone [email protected]:microsoft/autogen.git export AUTOGEN_REPO_BASE=<path_to_autogen> ``` |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Installation and Setup [Deprecated currently] **To get the most out of AutoGenBench, the `agbench` package should be installed**. At present, the easiest way to do this is to install it via `pip`. If you would prefer working from source code (e.g., for development, or to utilize an alternate branch), simply clone the [AutoGen](https://github.com/microsoft/autogen) repository, then install `agbench` via: ``` pip install -e autogen/python/packages/agbench ``` After installation, you must configure your API keys. As with other AutoGen applications, AutoGenBench will look for the OpenAI keys in the OAI_CONFIG_LIST file in the current working directory, or the OAI_CONFIG_LIST environment variable. This behavior can be overridden using a command-line parameter described later. If you will be running multiple benchmarks, it is often most convenient to leverage the environment variable option. You can load your keys into the environment variable by executing: ``` export OAI_CONFIG_LIST=$(cat ./OAI_CONFIG_LIST) ``` If an OAI_CONFIG_LIST is *not* provided (by means of file or environment variable), AutoGenBench will use the OPENAI_API_KEY environment variable instead. For some benchmark scenarios, additional keys may be required (e.g., keys for the Bing Search API). These can be added to an `ENV.json` file in the current working folder. An example `ENV.json` file is provided below: ``` { "BING_API_KEY": "xxxyyyzzz" } ``` |
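The key lookup order described above (command-line override, then the OAI_CONFIG_LIST file in the working directory or environment variable, then OPENAI_API_KEY as a fallback) can be sketched as follows. This is an illustrative approximation of the behavior, not AutoGenBench's actual code; the function name is invented:

```python
import os

def resolve_openai_config(cli_config=None):
    """Return the config list contents, or the bare API key as a last resort."""
    for name in (cli_config, "OAI_CONFIG_LIST"):
        if not name:
            continue
        if os.path.isfile(name):       # a config file in the working directory
            with open(name) as fh:
                return fh.read()
        if name in os.environ:         # an environment variable of the same name
            return os.environ[name]
    return os.environ.get("OPENAI_API_KEY")  # final fallback
```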
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | A Typical Session Once AutoGenBench and necessary keys are installed, a typical session will look as follows: Navigate to HumanEval ```bash cd autogen/python/packages/agbench/benchmarks/HumanEval ``` **Note:** The following instructions are specific to the HumanEval benchmark. For other benchmarks, please refer to the README in the respective benchmark folder, e.g.,: [AssistantBench](benchmarks/AssistantBench/README.md). If you're using MagenticOne with Azure, create a file called ENV.json with the following (required) contents: ```json { "CHAT_COMPLETION_KWARGS_JSON": "{}", "CHAT_COMPLETION_PROVIDER": "azure" } ``` You can also use the openai client by replacing the last two entries in the ENV file with: - `CHAT_COMPLETION_PROVIDER='openai'` - `CHAT_COMPLETION_KWARGS_JSON` with the following JSON structure: ```json { "api_key": "REPLACE_WITH_YOUR_API", "model": "REPLACE_WITH_YOUR_MODEL" } ``` Now initialize the tasks. ```bash python Scripts/init_tasks.py ``` Note: This will attempt to download HumanEval. Once the script completes, you should now see a folder in your current directory called `Tasks` that contains one JSONL file per template in `Templates`. 
Now to run a specific subset of HumanEval use: ```bash agbench run Tasks/human_eval_MagenticOne.jsonl ``` You should see the command line print the raw logs that show the agents in action. To see a summary of the results (e.g., task completion rates), in a new terminal run the following: ```bash agbench tabulate Results/human_eval_MagenticOne ``` Where: - `agbench run Tasks/human_eval_MagenticOne.jsonl` runs the tasks defined in `Tasks/human_eval_MagenticOne.jsonl` - `agbench tabulate Results/human_eval_MagenticOne` tabulates the results of the run Each of these commands has extensive in-line help via: - `agbench --help` - `agbench run --help` - `agbench tabulate --help` - `agbench remove_missing --help` **NOTE:** If you are running `agbench` from within the repository, you need to navigate to the appropriate scenario folder (e.g., `scenarios/HumanEval`) and run the `Scripts/init_tasks.py` file. More details of each command are provided in the sections that follow. |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Running AutoGenBench To run a benchmark (which executes the tasks, but does not compute metrics), simply execute: ``` cd [BENCHMARK] agbench run Tasks/*.jsonl ``` For example, ``` cd HumanEval agbench run Tasks/human_eval_MagenticOne.jsonl ``` The default is to run each task once. To run each scenario 10 times, use: ``` agbench run --repeat 10 Tasks/human_eval_MagenticOne.jsonl ``` The `agbench` command-line tool allows a number of command-line arguments to control various parameters of execution. Type ``agbench -h`` to explore these options: ``` 'agbench run' will run the specified autogen scenarios for a given number of repetitions and record all logs and trace information. When running in a Docker environment (default), each run will begin from a common, tightly controlled, environment. The resultant logs can then be further processed by other scripts to produce metrics. positional arguments: scenario The JSONL scenario file to run. If a directory is specified, then all JSONL scenarios in the directory are run. (default: ./scenarios) options: -h, --help show this help message and exit -c CONFIG, --config CONFIG The environment variable name or path to the OAI_CONFIG_LIST (default: OAI_CONFIG_LIST). -r REPEAT, --repeat REPEAT The number of repetitions to run for each scenario (default: 1). -s SUBSAMPLE, --subsample SUBSAMPLE Run on a subsample of the tasks in the JSONL file(s). If a decimal value is specified, then run on the given proportion of tasks in each file. For example "0.7" would run on 70% of tasks, and "1.0" would run on 100% of tasks. If an integer value is specified, then randomly select *that* number of tasks from each specified JSONL file. For example "7" would run 7 tasks, while "1" would run only 1 task from each specified JSONL file. 
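The `--subsample` semantics described in the help text (a value with a decimal point is a proportion; an integer value is an absolute count) can be illustrated with a small sketch. This is an assumed interpretation for illustration, not agbench's actual code:

```python
import random

def subsample_tasks(tasks: list, subsample: str, seed: int = 0) -> list:
    """Select tasks per the --subsample convention described above."""
    rng = random.Random(seed)
    if "." in subsample:
        # Decimal: a proportion of the tasks, e.g. "0.7" -> 70%
        n = int(len(tasks) * float(subsample))
    else:
        # Integer: an absolute number of tasks, e.g. "7" -> 7 tasks
        n = min(int(subsample), len(tasks))
    return rng.sample(tasks, n)
```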
(default: 1.0; which is 100%) -m MODEL, --model MODEL Filters the config_list to include only models matching the provided model name (default: None, which is all models). --requirements REQUIREMENTS The requirements file to pip install before running the scenario. -d DOCKER_IMAGE, --docker-image DOCKER_IMAGE The Docker image to use when running scenarios. Can not be used together with --native. (default: 'agbench:default', which will be created if not present) --native Run the scenarios natively rather than in docker. NOTE: This is not advisable, and should be done with great caution. ``` |
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Results By default, AutoGenBench stores results in a folder hierarchy with the following template: ``./results/[scenario]/[task_id]/[instance_id]`` For example, consider the following folders: ``./results/default_two_agents/two_agent_stocks/0`` ``./results/default_two_agents/two_agent_stocks/1`` ... ``./results/default_two_agents/two_agent_stocks/9`` These folders hold the results for the ``two_agent_stocks`` task of the ``default_two_agents`` tasks file. The ``0`` folder contains the results of the first instance / run. The ``1`` folder contains the results of the second run, and so on. You can think of the _task_id_ as mapping to a prompt, or a unique set of parameters, while the _instance_id_ defines a specific attempt or run. Within each folder, you will find the following files: - *timestamp.txt*: records the date and time of the run, along with the version of the autogen-agentchat library installed - *console_log.txt*: all console output produced by Docker when running AutoGen. Read this like you would a regular console. - *[agent]_messages.json*: for each Agent, a log of their messages dictionaries - *./coding*: A directory containing all code written by AutoGen, and all artifacts produced by that code. |
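Because the hierarchy is just `scenario/task_id/instance_id`, post-hoc scripts can walk it directly. A small sketch (the function name is invented for the example) that counts how many runs exist per task:

```python
from pathlib import Path

def count_runs(results_root: str) -> dict[str, int]:
    """Map 'scenario/task_id' to the number of instance folders beneath it."""
    counts = {}
    for scenario in Path(results_root).iterdir():
        if not scenario.is_dir():
            continue
        for task in scenario.iterdir():
            if not task.is_dir():
                continue
            instances = [p for p in task.iterdir() if p.is_dir()]
            counts[f"{scenario.name}/{task.name}"] = len(instances)
    return counts
```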
GitHub | autogen | autogen/python/packages/agbench/README.md | autogen | Contributing or Defining New Tasks or Benchmarks If you would like to develop -- or even contribute -- your own tasks or benchmarks, please review the [contributor's guide](CONTRIBUTING.md) for complete technical details. |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/README.md | autogen | # Benchmarking Agents This directory provides the ability to benchmark agents (e.g., built using AutoGen) using AgBench. Use the instructions below to prepare your environment for benchmarking. Once done, proceed to the relevant benchmark directory (e.g., `benchmarks/GAIA`) for further scenario-specific instructions. |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/README.md | autogen | Setup on WSL 1. Install Docker Desktop. After installation, a restart is needed. Then open Docker Desktop and, under Settings > Resources > WSL Integration, enable integration with additional distros (e.g., Ubuntu). 2. Clone autogen and export `AUTOGEN_REPO_BASE`. This environment variable enables the Docker containers to use the correct version of the agents. ```bash git clone [email protected]:microsoft/autogen.git export AUTOGEN_REPO_BASE=<path_to_autogen> ``` 3. Install `agbench`. AgBench is currently a tool in the AutoGen repo. ```bash cd autogen/python/packages/agbench pip install -e . ``` |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/HumanEval/README.md | autogen | # HumanEval Benchmark This scenario implements a modified version of the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark. Compared to the original benchmark, there are **two key differences** here: - A chat model rather than a completion model is used. - The agents get pass/fail feedback about their implementations, and can keep trying until they succeed or run out of tokens or turns. |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/HumanEval/README.md | autogen | Running the tasks Navigate to HumanEval ```bash cd benchmarks/HumanEval ``` If you're using MagenticOne, create a file called ENV.json with the following (required) contents: ```json { "CHAT_COMPLETION_KWARGS_JSON": "{\"api_version\": \"2024-02-15-preview\", \"azure_endpoint\": \"YOUR_ENDPOINT/\", \"model_capabilities\": {\"function_calling\": true, \"json_output\": true, \"vision\": true}, \"azure_ad_token_provider\": \"DEFAULT\", \"model\": \"gpt-4o-2024-05-13\"}", "CHAT_COMPLETION_PROVIDER": "azure" } ``` You can also use the openai client by replacing the last two entries in the ENV file with: - `CHAT_COMPLETION_PROVIDER='openai'` - `CHAT_COMPLETION_KWARGS_JSON` with the following JSON structure: ```json { "api_key": "REPLACE_WITH_YOUR_API", "model": "gpt-4o-2024-05-13" } ``` Now initialize the tasks. ```bash python Scripts/init_tasks.py ``` Note: This will attempt to download HumanEval. Then run `Scripts/init_tasks.py` again. Once the script completes, you should now see a folder in your current directory called `Tasks` that contains one JSONL file per template in `Templates`. Now to run a specific subset of HumanEval use: ```bash agbench run Tasks/human_eval_MagenticOne.jsonl ``` You should see the command line print the raw logs that show the agents in action. To see a summary of the results (e.g., task completion rates), in a new terminal run the following: ```bash agbench tabulate Results/human_eval_MagenticOne ``` |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/HumanEval/README.md | autogen | References **Evaluating Large Language Models Trained on Code**`<br/>` Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba`<br/>` [https://arxiv.org/abs/2107.03374](https://arxiv.org/abs/2107.03374) |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/WebArena/README.md | autogen | # WebArena Benchmark This scenario implements the [WebArena](https://github.com/web-arena-x/webarena/tree/main) benchmark. The evaluation code has been modified from WebArena in [evaluation_harness](Templates/Common/evaluation_harness). We retain the license from WebArena and include it here: [LICENSE](Templates/Common/evaluation_harness/LICENSE). |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/WebArena/README.md | autogen | References Zhou, Shuyan, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng et al. "Webarena: A realistic web environment for building autonomous agents." arXiv preprint arXiv:2307.13854 (2023). |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/GAIA/README.md | autogen | # GAIA Benchmark This scenario implements the [GAIA](https://arxiv.org/abs/2311.12983) agent benchmark. Before you begin, make sure you have followed the instructions in `../README.md` to prepare your environment. ### Setup Environment Variables for AgBench Navigate to GAIA ```bash cd benchmarks/GAIA ``` If you're using MagenticOne, create a file called ENV.json with the following (required) contents: ```json { "BING_API_KEY": "REPLACE_WITH_YOUR_BING_API_KEY", "HOMEPAGE": "https://www.bing.com/", "WEB_SURFER_DEBUG_DIR": "/autogen/debug", "CHAT_COMPLETION_KWARGS_JSON": "{\"api_version\": \"2024-02-15-preview\", \"azure_endpoint\": \"YOUR_ENDPOINT/\", \"model_capabilities\": {\"function_calling\": true, \"json_output\": true, \"vision\": true}, \"azure_ad_token_provider\": \"DEFAULT\", \"model\": \"gpt-4o-2024-05-13\"}", "CHAT_COMPLETION_PROVIDER": "azure" } ``` You can also use the openai client by replacing the last two entries in the ENV file with: - `CHAT_COMPLETION_PROVIDER='openai'` - `CHAT_COMPLETION_KWARGS_JSON` with the following JSON structure: ```json { "api_key": "REPLACE_WITH_YOUR_API", "model": "gpt-4o-2024-05-13" } ``` You might need to add additional packages to the requirements.txt file inside the Templates/MagenticOne folder. Now initialize the tasks. ```bash python Scripts/init_tasks.py ``` Note: This will attempt to download GAIA from Hugging Face, but this requires authentication. The resulting folder structure should look like this: ``` . ./Downloads ./Downloads/GAIA ./Downloads/GAIA/2023 ./Downloads/GAIA/2023/test ./Downloads/GAIA/2023/validation ./Scripts ./Templates ./Templates/TeamOne ``` Then run `Scripts/init_tasks.py` again. Once the script completes, you should now see a folder in your current directory called `Tasks` that contains one JSONL file per template in `Templates`. 
### Running GAIA Now to run a specific subset of GAIA use: ```bash agbench run Tasks/gaia_validation_level_1__MagenticOne.jsonl ``` You should see the command line print the raw logs that show the agents in action. To see a summary of the results (e.g., task completion rates), in a new terminal run the following: ```bash agbench tabulate Results/gaia_validation_level_1__MagenticOne/ ``` |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/GAIA/README.md | autogen | References **GAIA: a benchmark for General AI Assistants** <br/> Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom <br/> [https://arxiv.org/abs/2311.12983](https://arxiv.org/abs/2311.12983) |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/AssistantBench/README.md | autogen | # AssistantBench Benchmark This scenario implements the [AssistantBench](https://assistantbench.github.io/) agent benchmark. Before you begin, make sure you have followed the instructions in `../README.md` to prepare your environment. We modify the evaluation code from AssistantBench in [Scripts](Scripts) and retain the license, including it here: [LICENSE](Scripts/evaluate_utils/LICENSE). Please find the original AssistantBench evaluation code here: [https://huggingface.co/spaces/AssistantBench/leaderboard/tree/main/evaluation](https://huggingface.co/spaces/AssistantBench/leaderboard/tree/main/evaluation). ### Setup Environment Variables for AgBench Navigate to AssistantBench ```bash cd benchmarks/AssistantBench ``` If you're using MagenticOne, create a file called ENV.json with the following (required) contents: ```json { "BING_API_KEY": "REPLACE_WITH_YOUR_BING_API_KEY", "HOMEPAGE": "https://www.bing.com/", "WEB_SURFER_DEBUG_DIR": "/autogen/debug", "CHAT_COMPLETION_KWARGS_JSON": "{\"api_version\": \"2024-02-15-preview\", \"azure_endpoint\": \"YOUR_ENDPOINT/\", \"model_capabilities\": {\"function_calling\": true, \"json_output\": true, \"vision\": true}, \"azure_ad_token_provider\": \"DEFAULT\", \"model\": \"gpt-4o-2024-05-13\"}", "CHAT_COMPLETION_PROVIDER": "azure" } ``` You can also use the openai client by replacing the last two entries in the ENV file with: - `CHAT_COMPLETION_PROVIDER='openai'` - `CHAT_COMPLETION_KWARGS_JSON` with the following JSON structure: ```json { "api_key": "REPLACE_WITH_YOUR_API", "model": "gpt-4o-2024-05-13" } ``` Now initialize the tasks. ```bash python Scripts/init_tasks.py ``` Note: This will attempt to download AssistantBench from Hugging Face, but this requires authentication. After running the script, you should see the following new folders and files: ``` . 
./Downloads ./Downloads/AssistantBench ./Downloads/AssistantBench/assistant_bench_v1.0_dev.jsonl ./Tasks ./Tasks/assistant_bench_v1.0_dev.jsonl ``` Then run `Scripts/init_tasks.py` again. Once the script completes, you should now see a folder in your current directory called `Tasks` that contains one JSONL file per template in `Templates`. ### Running AssistantBench Now to run a specific subset of AssistantBench use: ```bash agbench run Tasks/assistant_bench_v1.0_dev__MagenticOne.jsonl ``` You should see the command line print the raw logs that show the agents in action. To see a summary of the results (e.g., task completion rates), in a new terminal run the following: ```bash agbench tabulate Results/assistant_bench_v1.0_dev__MagenticOne ``` |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/AssistantBench/README.md | autogen | References Yoran, Ori, Samuel Joseph Amouyal, Chaitanya Malaviya, Ben Bogin, Ofir Press, and Jonathan Berant. "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?." arXiv preprint arXiv:2407.15711 (2024). https://arxiv.org/abs/2407.15711 |
GitHub | autogen | autogen/python/packages/agbench/benchmarks/AssistantBench/Scripts/evaluate_utils/readme.md | autogen | These files were obtained from the creators of the AssistantBench benchmark and modified slightly. You can find the latest version at [https://huggingface.co/spaces/AssistantBench/leaderboard/tree/main/evaluation](https://huggingface.co/spaces/AssistantBench/leaderboard/tree/main/evaluation) |
GitHub | autogen | autogen/python/templates/new-package/{{cookiecutter.package_name}}/README.md | autogen | # {{cookiecutter.package_name}} |
GitHub | autogen | autogen/dotnet/README.md | autogen | # AutoGen for .NET There are two sets of packages here: AutoGen.\*: the older packages derived from AutoGen 0.2 for .NET - these will gradually be deprecated and ported into the new packages Microsoft.AutoGen.*: the new packages for .NET that use the event-driven model - These APIs are not yet stable and are subject to change. To get started with the new packages, please see the [samples](./samples/) and in particular the [Hello](./samples/Hello) sample. You can install both new and old packages from the following feeds: [](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml) [](https://badge.fury.io/nu/AutoGen.Core) > [!NOTE] > Nightly build is available at: > > - [](https://dev.azure.com/AGPublish/AGPublic/_artifacts/feed/AutoGen-Nightly) : <https://pkgs.dev.azure.com/AGPublish/AGPublic/_packaging/AutoGen-Nightly/nuget/v3/index.json> First, follow the [installation guide](./website/articles/Installation.md) to install the AutoGen packages. Then you can start with the following code snippet to create a conversable agent and chat with it. ```csharp using AutoGen; using AutoGen.OpenAI; var openAIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? 
throw new Exception("Please set OPENAI_API_KEY environment variable."); var gpt35Config = new OpenAIConfig(openAIKey, "gpt-3.5-turbo"); var assistantAgent = new AssistantAgent( name: "assistant", systemMessage: "You are an assistant that help user to do some tasks.", llmConfig: new ConversableAgentConfig { Temperature = 0, ConfigList = [gpt35Config], }) .RegisterPrintMessage(); // register a hook to print message nicely to console // set human input mode to ALWAYS so that user always provide input var userProxyAgent = new UserProxyAgent( name: "user", humanInputMode: ConversableAgent.HumanInputMode.ALWAYS) .RegisterPrintMessage(); // start the conversation await userProxyAgent.InitiateChatAsync( receiver: assistantAgent, message: "Hey assistant, please do me a favor.", maxRound: 10); ``` |
GitHub | autogen | autogen/dotnet/README.md | autogen | Samples You can find more examples under the [sample project](https://github.com/microsoft/autogen/tree/dotnet/samples/AutoGen.BasicSamples). |
GitHub | autogen | autogen/dotnet/README.md | autogen | Functionality - ConversableAgent - [x] function call - [x] code execution (dotnet only, powered by [`dotnet-interactive`](https://github.com/dotnet/interactive)) - Agent communication - [x] Two-agent chat - [x] Group chat - [ ] Enhanced LLM Inferences - Exclusive for dotnet - [x] Source generator for type-safe function definition generation |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | # Packaging AutoGen.NET This document describes the steps to pack the `AutoGen.NET` project. |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Prerequisites - .NET SDK |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Create Package 1. **Restore and Build the Project** ```bash dotnet restore dotnet build --configuration Release --no-restore ``` 2. **Create the NuGet Package** ```bash dotnet pack --configuration Release --no-build ``` This will generate both the `.nupkg` file and the `.snupkg` file in the `./artifacts/package/release` directory. For more details, refer to the [official .NET documentation](https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-pack). |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Add a new project to the package list By default, when you add a new project to `AutoGen.sln`, it will not be included in the package list. To include the new project in the package, you need to add the following line to the new project's `.csproj` file, e.g.: ```xml <Import Project="$(RepoRoot)/nuget/nuget-package.props" /> ``` The `nuget-package.props` file sets `IsPackable` to `true` for the project and also sets the necessary metadata for the package. For more details, refer to the [NuGet folder](./nuget/README.md). |
GitHub | autogen | autogen/dotnet/PACKAGING.md | autogen | Package versioning The version of the package is defined by `VersionPrefix` and `VersionPrefixForAutoGen0_2` in [MetaInfo.props](./eng/MetaInfo.props). If the name of your project starts with `AutoGen.`, the version will be set to `VersionPrefixForAutoGen0_2`, otherwise it will be set to `VersionPrefix`. |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | ## AutoGen.LMStudio This package provides support for consuming an OpenAI-compatible API from an LM Studio local server. |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | Installation To use `AutoGen.LMStudio`, add the following package to your `.csproj` file: ```xml <ItemGroup> <PackageReference Include="AutoGen.LMStudio" Version="AUTOGEN_VERSION" /> </ItemGroup> ``` |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | Usage ```csharp using AutoGen.LMStudio; var localServerEndpoint = "localhost"; var port = 5000; var lmStudioConfig = new LMStudioConfig(localServerEndpoint, port); var agent = new LMStudioAgent( name: "agent", systemMessage: "You are an agent that help user to do some tasks.", lmStudioConfig: lmStudioConfig) .RegisterPrintMessage(); // register a hook to print message nicely to console await agent.SendAsync("Can you write a piece of C# code to calculate 100th of fibonacci?"); ``` |
GitHub | autogen | autogen/dotnet/src/AutoGen.LMStudio/README.md | autogen | Update history ### Update on 0.0.7 (2024-02-11) - Add `LMStudioAgent` to support consuming openai-like API from LMStudio local server. |
GitHub | autogen | autogen/dotnet/src/AutoGen.SourceGenerator/README.md | autogen | ### AutoGen.SourceGenerator This package carries a source generator that adds support for type-safe function definition generation. Simply mark a method with the `Function` attribute, and the source generator will generate a function definition and a function call wrapper for you. ### Get started First, add the following to your project file and set the `GenerateDocumentationFile` property to `true`: ```xml <PropertyGroup> <!-- This enables structured XML documentation support --> <GenerateDocumentationFile>true</GenerateDocumentationFile> </PropertyGroup> ``` ```xml <ItemGroup> <PackageReference Include="AutoGen.SourceGenerator" /> </ItemGroup> ``` > Nightly Build feed: https://devdiv.pkgs.visualstudio.com/DevDiv/_packaging/AutoGen/nuget/v3/index.json Then, for the methods for which you want to generate a function definition and function call wrapper, mark them with the `Function` attribute: > Note: For best performance, try using primitive types for the parameters and return type. ```csharp // file: MyFunctions.cs using AutoGen; // a partial class is required // and the class must be public public partial class MyFunctions { /// <summary> /// Add two numbers. /// </summary> /// <param name="a">The first number.</param> /// <param name="b">The second number.</param> [Function] public Task<string> AddAsync(int a, int b) { return Task.FromResult($"{a} + {b} = {a + b}"); } } ``` The source generator will generate the following code based on the method signature and documentation. It helps you save the effort of writing the function definition and keeps it up to date with the actual method signature. 
```csharp // file: MyFunctions.generated.cs public partial class MyFunctions { private class AddAsyncSchema { public int a {get; set;} public int b {get; set;} } public Task<string> AddAsyncWrapper(string arguments) { var schema = JsonSerializer.Deserialize<AddAsyncSchema>( arguments, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase, }); return AddAsync(schema.a, schema.b); } public FunctionDefinition AddAsyncFunction { get => new FunctionDefinition { Name = @"AddAsync", Description = """ Add two numbers. """, Parameters = BinaryData.FromObjectAsJson(new { Type = "object", Properties = new { a = new { Type = @"number", Description = @"The first number.", }, b = new { Type = @"number", Description = @"The second number.", }, }, Required = new [] { "a", "b", }, }, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase, }) }; } } ``` For more examples, please check out the following project - [AutoGen.BasicSamples](../samples/AutoGen.BasicSamples/) - [AutoGen.SourceGenerator.Tests](../../test/AutoGen.SourceGenerator.Tests/) |
GitHub | autogen | autogen/dotnet/nuget/README.md | autogen | # NuGet Directory This directory contains resources and metadata for packaging the AutoGen.NET SDK as a NuGet package. |
GitHub | autogen | autogen/dotnet/nuget/README.md | autogen | Files - **icon.png**: The icon used for the NuGet package. - **NUGET.md**: The readme file displayed on the NuGet package page. - **NUGET-PACKAGE.PROPS**: The MSBuild properties file that defines the packaging settings for the NuGet package. |
GitHub | autogen | autogen/dotnet/nuget/README.md | autogen | Purpose The files in this directory are used to configure and build the NuGet package for the AutoGen.NET SDK, ensuring that it includes necessary metadata, documentation, and resources. |
GitHub | autogen | autogen/dotnet/nuget/NUGET.md | autogen | ### About AutoGen for .NET `AutoGen for .NET` is the official .NET SDK for [AutoGen](https://github.com/microsoft/autogen). It enables you to create LLM agents and construct multi-agent workflows with ease. It also provides integration with popular platforms like OpenAI, Semantic Kernel, and LM Studio. ### Getting started - Find documentation and examples on our [document site](https://microsoft.github.io/autogen-for-net/) - Report a bug or request a feature by creating a new issue in our [GitHub repo](https://github.com/microsoft/autogen) - Consume the nightly build package from one of the [nightly build feeds](https://microsoft.github.io/autogen-for-net/articles/Installation.html#nighly-build) |
GitHub | autogen | autogen/dotnet/website/index.md | autogen | [!INCLUDE [](./articles/getting-start.md)] |
GitHub | autogen | autogen/dotnet/website/README.md | autogen | ## How to build and run the website ### Prerequisites - .NET 7.0 SDK or later ### Build First, go to the `autogen/dotnet` folder and run the following commands to build the website: ```bash dotnet tool restore dotnet tool run docfx website/docfx.json --serve ``` Once the commands finish, open your browser and navigate to `http://localhost:8080` to view the website. |
GitHub | autogen | autogen/dotnet/website/release_note/update.md | autogen | ##### Update on 0.0.15 (2024-06-13) Milestone: [AutoGen.Net 0.0.15](https://github.com/microsoft/autogen/milestone/3) ###### Highlights - [Issue 2851](https://github.com/microsoft/autogen/issues/2851) `AutoGen.Gemini` package for Gemini support. Examples can be found [here](https://github.com/microsoft/autogen/tree/main/dotnet/samples/AutoGen.Gemini.Sample) ##### Update on 0.0.14 (2024-05-28) ###### New features - [Issue 2319](https://github.com/microsoft/autogen/issues/2319) Add `AutoGen.Ollama` package for Ollama support. Special thanks to @iddelacruz for the effort. - [Issue 2608](https://github.com/microsoft/autogen/issues/2608) Add `AutoGen.Anthropic` package for Anthropic support. Special thanks to @DavidLuong98 for the effort. - [Issue 2647](https://github.com/microsoft/autogen/issues/2647) Add `ToolCallAggregateMessage` for function call middleware. ###### API Breaking Changes - [Issue 2648](https://github.com/microsoft/autogen/issues/2648) Deprecate `Message` type. - [Issue 2649](https://github.com/microsoft/autogen/issues/2649) Deprecate `Workflow` type. ###### Bug Fixes - [Issue 2735](https://github.com/microsoft/autogen/issues/2735) Fix tool call issue in AutoGen.Mistral package. - [Issue 2722](https://github.com/microsoft/autogen/issues/2722) Fix parallel function call in function call middleware. - [Issue 2633](https://github.com/microsoft/autogen/issues/2633) Set up `name` field in `OpenAIChatMessageConnector`. - [Issue 2660](https://github.com/microsoft/autogen/issues/2660) Fix dotnet-interactive restore issue when the system language is Chinese. - [Issue 2687](https://github.com/microsoft/autogen/issues/2687) Add `global::` prefix to generated code to avoid conflicts with user-defined types. ##### Update on 0.0.13 (2024-05-09) ###### New features - [Issue 2593](https://github.com/microsoft/autogen/issues/2593) Consume SK plugins in Agent. 
- [Issue 1893](https://github.com/microsoft/autogen/issues/1893) Support inline-data in ImageMessage - [Issue 2481](https://github.com/microsoft/autogen/issues/2481) Introduce `ChatCompletionAgent` to `AutoGen.SemanticKernel` ###### API Breaking Changes - [Issue 2470](https://github.com/microsoft/autogen/issues/2470) Update the return type of `IStreamingAgent.GenerateStreamingReplyAsync` from `Task<IAsyncEnumerable<IStreamingMessage>>` to `IAsyncEnumerable<IStreamingMessage>` - [Issue 2470](https://github.com/microsoft/autogen/issues/2470) Update the return type of `IStreamingMiddleware.InvokeAsync` from `Task<IAsyncEnumerable<IStreamingMessage>>` to `IAsyncEnumerable<IStreamingMessage>` - Mark `RegisterReply`, `RegisterPreProcess` and `RegisterPostProcess` as obsolete. You can replace them with `RegisterMiddleware` ###### Bug Fixes - Fix [Issue 2609](https://github.com/microsoft/autogen/issues/2609) Constructor of `ConversableAgentConfig` does not accept `LMStudioConfig` as `ConfigList` ##### Update on 0.0.12 (2024-04-22) - Add AutoGen.Mistral package to support Mistral.AI models ##### Update on 0.0.11 (2024-04-10) - Add link to Discord channel in nuget's readme.md - Document improvements - In `AutoGen.OpenAI`, update `Azure.AI.OpenAI` to 1.0.0-beta.15 and add support for JSON mode and deterministic output in `OpenAIChatAgent` [Issue #2346](https://github.com/microsoft/autogen/issues/2346) - In `AutoGen.SemanticKernel`, update `SemanticKernel` package to 1.7.1 - [API Breaking Change] Rename `PrintMessageMiddlewareExtension.RegisterPrintFormatMessageHook` to `PrintMessageMiddlewareExtension.RegisterPrintMessage`. ##### Update on 0.0.10 (2024-03-12) - Rename `Workflow` to `Graph` - Rename `AddInitializeMessage` to `SendIntroduction` - Rename `SequentialGroupChat` to `RoundRobinGroupChat` ##### Update on 0.0.9 (2024-03-02) - Refactor over @AutoGen.Message, introducing `TextMessage`, `ImageMessage`, `MultiModalMessage` and so on. 
PR [#1676](https://github.com/microsoft/autogen/pull/1676) - Add `AutoGen.SemanticKernel` to support seamless integration with Semantic Kernel - Move the agent contract abstraction to the `AutoGen.Core` package. The `AutoGen.Core` package provides the abstractions for message types, agents and group chat, and doesn't take dependencies on `Azure.AI.OpenAI` or Semantic Kernel. This is useful when you want to leverage AutoGen's abstractions only and avoid introducing any other dependencies. - Move `GPTAgent`, `OpenAIChatAgent` and all OpenAI dependencies to `AutoGen.OpenAI` ##### Update on 0.0.8 (2024-02-28) - Fix [#1804](https://github.com/microsoft/autogen/pull/1804) - Streaming support for IAgent [#1656](https://github.com/microsoft/autogen/pull/1656) - Streaming support for middleware via `MiddlewareStreamingAgent` [#1656](https://github.com/microsoft/autogen/pull/1656) - Graph chat support with conditional transition workflow [#1761](https://github.com/microsoft/autogen/pull/1761) - AutoGen.SourceGenerator: Generate `FunctionContract` from `FunctionAttribute` [#1736](https://github.com/microsoft/autogen/pull/1736) ##### Update on 0.0.7 (2024-02-11) - Add `AutoGen.LMStudio` to support consuming an OpenAI-like API from the LM Studio local server ##### Update on 0.0.6 (2024-01-23) - Add `MiddlewareAgent` - Use `MiddlewareAgent` to implement existing agent hooks (RegisterPreProcess, RegisterPostProcess, RegisterReply) - Remove `AutoReplyAgent`, `PreProcessAgent`, `PostProcessAgent` because they are replaced by `MiddlewareAgent` ##### Update on 0.0.5 - Simplify the `IAgent` interface by removing the `ChatLLM` property - Add `GenerateReplyOptions` to `IAgent.GenerateReplyAsync`, which allows users to specify or override options when generating a reply ##### Update on 0.0.4 - Remove the dependency on Semantic Kernel - Add type `IChatLLM` as a connector to LLMs ##### Update on 0.0.3 - In AutoGen.SourceGenerator, rename FunctionAttribution to FunctionAttribute - In AutoGen, refactor over 
ConversationAgent, UserProxyAgent, and AssistantAgent ##### Update on 0.0.2 - Update `Azure.AI.OpenAI` to 1.0.0-beta.12 - Update Semantic Kernel to 1.0.1 |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | # AutoGen.Net 0.0.16 Release Notes We are excited to announce the release of **AutoGen.Net 0.0.16**. This release includes several new features, bug fixes, improvements, and important updates. Below are the detailed release notes: **[Milestone: AutoGen.Net 0.0.16](https://github.com/microsoft/autogen/milestone/4)** |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | π¦ New Features 1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045)) - Replaced `IStreamingMessage` and `IStreamingMessage<T>` with `IMessage` and `IMessage<T>`. 2. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issues/3014)) - Added a new tutorial to the website for integrating ollama with LiteLLM for function calls. 3. **Add ReAct sample** ([#2978](https://github.com/microsoft/autogen/issues/2978)) - Added a new sample demonstrating the ReAct pattern. 4. **Support tools for Anthropic Models** ([#2771](https://github.com/microsoft/autogen/issues/2771)) - Introduced tool support for Anthropic models via `AnthropicClient`, `AnthropicClientAgent`, and `AnthropicMessageConnector`. 5. **Propose Orchestrator for managing group chat/agentic workflow** ([#2695](https://github.com/microsoft/autogen/issues/2695)) - Introduced a customizable orchestrator interface for managing group chats and agent workflows. 6. **Run Agent as Web API** ([#2519](https://github.com/microsoft/autogen/issues/2519)) - Introduced the ability to start an OpenAI-chat-compatible web API from an arbitrary agent. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | π Bug Fixes 1. **SourceGenerator doesn't work when function's arguments are empty** ([#2976](https://github.com/microsoft/autogen/issues/2976)) - Fixed an issue where the SourceGenerator failed when function arguments were empty. 2. **Add content field in ToolCallMessage** ([#2975](https://github.com/microsoft/autogen/issues/2975)) - Added a content property in `ToolCallMessage` to handle text content returned by the OpenAI model during tool calls. 3. **AutoGen.SourceGenerator doesn't encode `"` in structural comments** ([#2872](https://github.com/microsoft/autogen/issues/2872)) - Fixed an issue where structural comments containing `"` were not properly encoded, leading to compilation errors. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | π Improvements 1. **Sample update - Add getting-start samples for BasicSample project** ([#2859](https://github.com/microsoft/autogen/issues/2859)) - Re-organized the `AutoGen.BasicSample` project to include only essential getting-started examples, simplifying complex examples. 2. **Graph constructor should consider null transitions** ([#2708](https://github.com/microsoft/autogen/issues/2708)) - Updated the Graph constructor to handle cases where transitions' values are null. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | β οΈ API Breaking Changes 1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045)) - **Migration guide:** Deprecating `IStreamingMessage` will introduce breaking changes, particularly for `IStreamingAgent` and `IStreamingMiddleware`. Replace all `IStreamingMessage` and `IStreamingMessage<T>` with `IMessage` and `IMessage<T>`. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.16.md | autogen | π Document Update 1. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issues/3014)) - Added a tutorial to the website for using ollama with LiteLLM. Thank you to all the contributors for making this release possible. We encourage everyone to upgrade to AutoGen.Net 0.0.16 to take advantage of these new features and improvements. If you encounter any issues or have any feedback, please let us know. Happy coding! π |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.1.md | autogen | # Release Notes for AutoGen.Net v0.2.1 π |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.1.md | autogen | New Features π - **Support for OpenAI o1-preview**: Added support for the OpenAI o1-preview model ([#3522](https://github.com/microsoft/autogen/issues/3522)) |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.1.md | autogen | Example π - **OpenAI o1-preview**: [Connect_To_OpenAI_o1_preview](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Connect_To_OpenAI_o1_preview.cs) |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | # AutoGen.Net 0.0.17 Release Notes |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | π What's New 1. **.NET Core Target Framework Support** ([#3203](https://github.com/microsoft/autogen/issues/3203)) - π Added support for .NET Core to ensure compatibility and enhanced performance of AutoGen packages across different platforms. 2. **Kernel Support in Interactive Service Constructor** ([#3181](https://github.com/microsoft/autogen/issues/3181)) - π§ Enhanced the Interactive Service to accept a kernel in its constructor, facilitating usage in notebook environments. 3. **Constructor Options for OpenAIChatAgent** ([#3126](https://github.com/microsoft/autogen/issues/3126)) - βοΈ Added new constructor options for `OpenAIChatAgent` to allow full control over chat completion flags/options. 4. **Step-by-Step Execution for Group Chat** ([#3075](https://github.com/microsoft/autogen/issues/3075)) - π οΈ Introduced an `IAsyncEnumerable` extension API to run group chat step-by-step, enabling developers to observe internal processes or implement early stopping mechanisms. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | π Improvements 1. **Cancellation Token Addition in Graph APIs** ([#3111](https://github.com/microsoft/autogen/issues/3111)) - π Added cancellation tokens to async APIs in the `AutoGen.Core.Graph` class to follow best practices and enhance the control flow. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | β οΈ API Breaking Changes 1. **FunctionDefinition Generation Stopped in Source Generator** ([#3133](https://github.com/microsoft/autogen/issues/3133)) - π Stopped generating `FunctionDefinition` from `Azure.AI.OpenAI` in the source generator to eliminate unnecessary package dependencies. Migration guide: - β‘οΈ Use the `ToOpenAIFunctionDefinition()` extension from `AutoGen.OpenAI` for generating `FunctionDefinition` from `AutoGen.Core.FunctionContract`. - β‘οΈ Use `FunctionContract` for metadata such as function name or parameters. 2. **Namespace Renaming for AutoGen.WebAPI** ([#3152](https://github.com/microsoft/autogen/issues/3152)) - βοΈ Renamed the namespace of `AutoGen.WebAPI` from `AutoGen.Service` to `AutoGen.WebAPI` to maintain consistency with the project name. 3. **Semantic Kernel Version Update** ([#3118](https://github.com/microsoft/autogen/issues/3118)) - π Upgraded the Semantic Kernel version to 1.15.1 for enhanced functionality and performance improvements. This may introduce breaking changes for those using a lower version of Semantic Kernel. |
GitHub | autogen | autogen/dotnet/website/release_note/0.0.17.md | autogen | π Documentation 1. **Consume AutoGen.Net Agent in AG Studio** ([#3142](https://github.com/microsoft/autogen/issues/3142)) - Added detailed documentation on using AutoGen.Net Agent as a model in AG Studio, including examples of starting an OpenAI chat backend and integrating third-party OpenAI models. 2. **Middleware Overview Documentation Errors Fixed** ([#3129](https://github.com/microsoft/autogen/issues/3129)) - Corrected logic and compile errors in the example code provided in the Middleware Overview documentation to ensure it runs without issues. --- We hope you enjoy the new features and improvements in AutoGen.Net 0.0.17! If you encounter any issues or have feedback, please open a new issue on our [GitHub repository](https://github.com/microsoft/autogen/issues). |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.2.md | autogen | # Release Notes for AutoGen.Net v0.2.2 π |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.2.md | autogen | Improvements π - **Update OpenAI and Semantic Kernel to the latest version**: Updated OpenAI and Semantic Kernel to the latest version ([#3792](https://github.com/microsoft/autogen/pull/3792)) |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | # π Release Notes: AutoGen.Net 0.1.0 π |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | π¦ New Packages 1. **Add AutoGen.AzureAIInference Package** - **Issue**: [.Net][Feature Request] [#3323](https://github.com/microsoft/autogen/issues/3323) - **Description**: The new `AutoGen.AzureAIInference` package includes the `ChatCompletionClientAgent`. |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | β¨ New Features 1. **Enable Step-by-Step Execution for Two Agent Chat API** - **Issue**: [.Net][Feature Request] [#3339](https://github.com/microsoft/autogen/issues/3339) - **Description**: The `AgentExtension.SendAsync` now returns an `IAsyncEnumerable`, allowing conversations to be driven step by step, similar to how `GroupChatExtension.SendAsync` works. 2. **Support Python Code Execution in AutoGen.DotnetInteractive** - **Issue**: [.Net][Feature Request] [#3316](https://github.com/microsoft/autogen/issues/3316) - **Description**: `dotnet-interactive` now supports Jupyter kernel connection, allowing Python code execution in `AutoGen.DotnetInteractive`. 3. **Support Prompt Cache in Claude** - **Issue**: [.Net][Feature Request] [#3359](https://github.com/microsoft/autogen/issues/3359) - **Description**: Claude now supports prompt caching, which dramatically lowers the bill if the cache is hit. Added the corresponding option in the Claude client. |
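The step-by-step two-agent chat in feature 1 can be consumed with `await foreach`. The sketch below assumes two configured `IAgent` instances, `alice` and `bob`; the exact parameter names may differ from the real `AgentExtension.SendAsync` signature:

```csharp
// Drive the conversation one turn at a time instead of awaiting the whole chat.
await foreach (var message in alice.SendAsync(receiver: bob, "Hello", maxRound: 10))
{
    // Each yielded message is one turn, so you can observe progress here.
    Console.WriteLine(message.GetContent());

    // Early stopping: break out as soon as a termination condition is met.
    if (message.GetContent()?.Contains("TERMINATE") == true)
    {
        break;
    }
}
```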
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | π Bug Fixes 1. **GroupChatExtension.SendAsync Doesn't Terminate Chat When `IOrchestrator` Returns Null as Next Agent** - **Issue**: [.Net][Bug] [#3306](https://github.com/microsoft/autogen/issues/3306) - **Description**: Fixed an issue where `GroupChatExtension.SendAsync` would continue until `max_round` was reached even when `IOrchestrator` returned null as the next speaker. 2. **InitializedMessages Are Added Repeatedly in GroupChatExtension.SendAsync Method** - **Issue**: [.Net][Bug] [#3268](https://github.com/microsoft/autogen/issues/3268) - **Description**: Fixed an issue where initialized messages from group chat were being added repeatedly in every iteration of the `GroupChatExtension.SendAsync` API. 3. **Remove `Azure.AI.OpenAI` Dependency from `AutoGen.DotnetInteractive`** - **Issue**: [.Net][Feature Request] [#3273](https://github.com/microsoft/autogen/issues/3273) - **Description**: Removed the `Azure.AI.OpenAI` dependency from `AutoGen.DotnetInteractive`, simplifying the package and reducing dependencies. |
GitHub | autogen | autogen/dotnet/website/release_note/0.1.0.md | autogen | π Documentation Updates 1. **Add Function Comparison Page Between Python AutoGen and AutoGen.Net** - **Issue**: [.Net][Document] [#3184](https://github.com/microsoft/autogen/issues/3184) - **Description**: Added comparative documentation for features between AutoGen and AutoGen.Net across various functionalities and platform supports. |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | # Release Notes for AutoGen.Net v0.2.0 π |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | New Features π - **OpenAI Structural Format Output**: Added support for structural output format in the OpenAI integration. You can check out the example [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Structural_Output.cs) ([#3482](https://github.com/microsoft/autogen/issues/3482)). - **Structural Output Configuration**: Introduced a property for overriding the structural output schema when generating replies with `GenerateReplyOption` ([#3436](https://github.com/microsoft/autogen/issues/3436)). |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | Bug Fixes π - **Fixed Error Code 500**: Resolved an issue where an error occurred when the message history contained multiple different tool calls with the `name` field ([#3437](https://github.com/microsoft/autogen/issues/3437)). |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | Improvements π§ - **Leverage OpenAI v2.0 in the AutoGen.OpenAI package**: The `AutoGen.OpenAI` package now uses OpenAI v2.0, providing improved functionality and performance. The original implementation remains available as `AutoGen.OpenAI.V1` for users who prefer to keep using the `Azure.AI.OpenAI` v1 package in their projects ([#3193](https://github.com/microsoft/autogen/issues/3193)). - **Deprecation of GPTAgent**: `GPTAgent` has been deprecated in favor of `OpenAIChatAgent` and `OpenAIMessageConnector` ([#3404](https://github.com/microsoft/autogen/issues/3404)). |
GitHub | autogen | autogen/dotnet/website/release_note/0.2.0.md | autogen | Documentation π - **Tool Call Instructions**: Added detailed documentation on using tool calls with `ollama` and `OpenAIChatAgent` ([#3248](https://github.com/microsoft/autogen/issues/3248)). ### Migration Guides π #### For the Deprecation of `GPTAgent` ([#3404](https://github.com/microsoft/autogen/issues/3404)): **Before:** ```csharp var agent = new GPTAgent(...); ``` **After:** ```csharp var agent = new OpenAIChatAgent(...) .RegisterMessageConnector(); ``` #### For Using Azure.AI.OpenAI v2.0 ([#3193](https://github.com/microsoft/autogen/issues/3193)): **Previous way of creating `OpenAIChatAgent`:** ```csharp var openAIClient = new OpenAIClient(apiKey); var openAIClientAgent = new OpenAIChatAgent( openAIClient: openAIClient, model: "gpt-4o-mini", // Other parameters... ); ``` **New way of creating `OpenAIChatAgent`:** ```csharp var openAIClient = new OpenAIClient(apiKey); var openAIClientAgent = new OpenAIChatAgent( chatClient: openAIClient.GetChatClient("gpt-4o-mini"), // Other parameters... ); ``` |