autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md

This tutorial shows how to use an AutoGen.Net agent as a model in AG Studio.
## Step 1: Create an empty dotnet web app and install the AutoGen and AutoGen.WebAPI packages

```bash
dotnet new web
dotnet add package AutoGen
dotnet add package AutoGen.WebAPI
```
## Step 2: Replace the content of Program.cs with the following code

```csharp
using AutoGen.Core;
using AutoGen.Service;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var helloWorldAgent = new HelloWorldAgent();
app.UseAgentAsOpenAIChatCompletionEndpoint(helloWorldAgent);

app.Run();

class HelloWorldAgent : IAgent
{
    public string Name => "HelloWorld";

    public Task<IMessage> GenerateReplyAsync(IEnumerable<IMessage> messages, GenerateReplyOptions? options = null, CancellationToken cancellationToken = default)
    {
        return Task.FromResult<IMessage>(new TextMessage(Role.Assistant, "Hello World!", from: this.Name));
    }
}
```
## Step 3: Start the web app

Run the following command to start the web API:

```bash
dotnet run
```

The web API will listen at `http://localhost:5264/v1/chat/completions`.

![terminal](../images/articles/UseAutoGenAsModelinAGStudio/Terminal.png)
## Step 4: In another terminal, start autogen-studio

```bash
autogenstudio ui
```
## Step 5: Navigate to the AutoGen Studio UI and add the hello world agent as an OpenAI model

### Step 5.1: Go to the model tab

![The Model Tab](../images/articles/UseAutoGenAsModelinAGStudio/TheModelTab.png)

### Step 5.2: Select the "OpenAI model" card

![Open AI model Card](../images/articles/UseAutoGenAsModelinAGStudio/Step5.2OpenAIModel.png)

### Step 5.3: Fill in the model name and URL

The model name needs to be the same as the agent name.

![Fill the model name and url](../images/articles/UseAutoGenAsModelinAGStudio/Step5.3ModelNameAndURL.png)
## Step 6: Create a hello world agent that uses the hello world model

![Create a hello world agent that uses the hello world model](../images/articles/UseAutoGenAsModelinAGStudio/Step6.png)

![Agent Configuration](../images/articles/UseAutoGenAsModelinAGStudio/Step6b.png)
## Final Step: Use the hello world agent in a workflow

![Use the hello world agent in workflow](../images/articles/UseAutoGenAsModelinAGStudio/FinalStepsA.png)

![Use the hello world agent in workflow](../images/articles/UseAutoGenAsModelinAGStudio/FinalStepsB.png)

![Use the hello world agent in workflow](../images/articles/UseAutoGenAsModelinAGStudio/FinalStepsC.png)
autogen/dotnet/website/tutorial/Chat-with-an-agent.md

This tutorial shows how to generate a response using an @AutoGen.Core.IAgent, taking @AutoGen.OpenAI.OpenAIChatAgent as an example.

> [!NOTE]
> AutoGen.Net provides the following agents to connect to different LLM platforms. Generating responses using these agents is similar to the example shown below.
> - @AutoGen.OpenAI.OpenAIChatAgent
> - @AutoGen.SemanticKernel.SemanticKernelAgent
> - @AutoGen.LMStudio.LMStudioAgent
> - @AutoGen.Mistral.MistralClientAgent
> - @AutoGen.Anthropic.AnthropicClientAgent
> - @AutoGen.Ollama.OllamaAgent
> - @AutoGen.Gemini.GeminiChatAgent

> [!NOTE]
> The complete code example can be found in [Chat_With_Agent.cs](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs)
## Step 1: Install AutoGen

First, install the AutoGen package using the following command:

```bash
dotnet add package AutoGen
```
## Step 2: Add Using Statements

[!code-csharp[Using Statements](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Using)]
## Step 3: Create an @AutoGen.OpenAI.OpenAIChatAgent

> [!NOTE]
> The @AutoGen.OpenAI.Extension.OpenAIAgentExtension.RegisterMessageConnector* method registers an @AutoGen.OpenAI.OpenAIChatRequestMessageConnector middleware which converts OpenAI message types to AutoGen message types. This step is necessary when you want to use AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, etc.
> For more information, see [Built-in-messages](../articles/Built-in-messages.md)

[!code-csharp[Create an OpenAIChatAgent](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Create_Agent)]
## Step 4: Generate Response

To generate a response, you can use one of the overloaded @AutoGen.Core.AgentExtension.SendAsync* methods. The following code shows how to generate a response from a text message:

[!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Chat_With_Agent)]

To generate a response with chat history, pass the chat history to the @AutoGen.Core.AgentExtension.SendAsync* method:

[!code-csharp[Generate Response with Chat History](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Chat_With_History)]

To generate a streaming response, use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*:

[!code-csharp[Generate Streaming Response](../../samples/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Streaming_Chat)]
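For reference, below is a minimal sketch of how these three calls fit together. It assumes an `agent` that implements @AutoGen.Core.IStreamingAgent (such as an @AutoGen.OpenAI.OpenAIChatAgent registered with a message connector); the question strings are illustrative.

```csharp
// a sketch, not the sample code: assumes `agent` is an IStreamingAgent
using AutoGen.Core;

// single-turn: SendAsync wraps the text in a message and returns the reply
var reply = await agent.SendAsync("What is the capital of France?");

// multi-turn: pass the accumulated chat history along with the new message
var history = new List<IMessage> { reply };
var followUp = await agent.SendAsync("What is its population?", history);

// streaming: consume updates as they arrive instead of awaiting the full reply
await foreach (var update in agent.GenerateStreamingReplyAsync(history))
{
    Console.WriteLine(update);
}
```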
## Further Reading

- [Chat with google gemini](../articles/AutoGen.Gemini/Chat-with-google-gemini.md)
- [Chat with vertex gemini](../articles/AutoGen.Gemini/Chat-with-vertex-gemini.md)
- [Chat with Ollama](../articles/AutoGen.Ollama/Chat-with-llama.md)
- [Chat with Semantic Kernel Agent](../articles/AutoGen.SemanticKernel/SemanticKernelAgent-simple-chat.md)
autogen/dotnet/website/tutorial/Image-chat-with-agent.md

This tutorial shows how to perform image chat with an agent, using the @AutoGen.OpenAI.OpenAIChatAgent as an example.

> [!NOTE]
> To chat with images, the model behind the agent needs to support image input. Here is a partial list of models that support image input:
> - gpt-4o
> - gemini-1.5
> - llava
> - claude-3
> - ...
>
> In this example, we are using the gpt-4o model as the backend model for the agent.

> [!NOTE]
> The complete code example can be found in [Image_Chat_With_Agent.cs](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs)
## Step 1: Install AutoGen

First, install the AutoGen package using the following command:

```bash
dotnet add package AutoGen
```
## Step 2: Add Using Statements

[!code-csharp[Using Statements](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Using)]
## Step 3: Create an @AutoGen.OpenAI.OpenAIChatAgent

[!code-csharp[Create an OpenAIChatAgent](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Create_Agent)]
## Step 4: Prepare the Image Message

In AutoGen, you can create an image message using either @AutoGen.Core.ImageMessage or @AutoGen.Core.MultiModalMessage. The @AutoGen.Core.ImageMessage takes a single image as input, whereas the @AutoGen.Core.MultiModalMessage allows you to pass multiple modalities like text or image.

Here is how to create an image message using @AutoGen.Core.ImageMessage:
[!code-csharp[Create Image Message](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Prepare_Image_Input)]

Here is how to create a multimodal message using @AutoGen.Core.MultiModalMessage:
[!code-csharp[Create MultiModal Message](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Prepare_Multimodal_Input)]
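As a rough sketch of the two message types (the image URL and wording here are illustrative, not taken from the sample):

```csharp
using AutoGen.Core;

// a single image as the message content
var imageMessage = new ImageMessage(Role.User, "https://example.com/cat.jpg");

// text and image combined in one multimodal message
var textMessage = new TextMessage(Role.User, "What is in this image?");
var multiModalMessage = new MultiModalMessage(Role.User, new IMessage[] { textMessage, imageMessage });
```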
## Step 5: Generate Response

To generate a response, you can use one of the overloaded @AutoGen.Core.AgentExtension.SendAsync* methods. The following code shows how to generate a response from an image message:

[!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Chat_With_Agent)]
## Further Reading

- [Image chat with gemini](../articles/AutoGen.Gemini/Image-chat-with-gemini.md)
- [Image chat with llava](../articles/AutoGen.Ollama/Chat-with-llava.md)
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
This tutorial shows how to use tools in an agent.
## What is a tool?

Tools are pre-defined functions in the user's project that an agent can invoke. An agent can use tools to perform actions like searching the web, performing calculations, etc. Tools can greatly extend the capabilities of an agent.

> [!NOTE]
> To use tools with an agent, the backend LLM model used by the agent needs to support tool calling. Here are some of the LLM models that support tool calling as of 06/21/2024:
> - GPT-3.5-turbo with version >= 0613
> - GPT-4 series
> - Gemini series
> - OPEN_MISTRAL_7B
> - ...
>
> This tutorial uses the latest `GPT-3.5-turbo` as an example.

> [!NOTE]
> The complete code example can be found in [Use_Tools_With_Agent.cs](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs)
## Key Concepts

- @AutoGen.Core.FunctionContract: The contract of a function that an agent can invoke. It contains the function name, description, parameter schema, and return type.
- @AutoGen.Core.ToolCallMessage: A message type that represents a tool call request in AutoGen.Net.
- @AutoGen.Core.ToolCallResultMessage: A message type that represents a tool call result in AutoGen.Net.
- @AutoGen.Core.ToolCallAggregateMessage: An aggregate message type that represents a tool call request and its result in a single message in AutoGen.Net.
- @AutoGen.Core.FunctionCallMiddleware: A middleware that passes the @AutoGen.Core.FunctionContract to the agent when generating a response, and processes the tool call response when receiving a @AutoGen.Core.ToolCallMessage.

> [!Tip]
> You can use AutoGen.SourceGenerator to automatically generate type-safe @AutoGen.Core.FunctionContract instead of manually defining them. For more information, please check out [Create type-safe function](../articles/Create-type-safe-function-call.md).
## Install AutoGen and AutoGen.SourceGenerator

First, install the AutoGen and AutoGen.SourceGenerator packages using the following commands:

```bash
dotnet add package AutoGen
dotnet add package AutoGen.SourceGenerator
```

Also, you might need to enable structural xml document support by setting the `GenerateDocumentationFile` property to true in your project file. This allows the source generator to leverage the documentation of the function when generating the function definition.

```xml
<PropertyGroup>
    <!-- This enables structural xml document support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```
## Add Using Statements

[!code-csharp[Using Statements](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Using)]
## Create agent

Create an @AutoGen.OpenAI.OpenAIChatAgent with `GPT-3.5-turbo` as the backend LLM model.

[!code-csharp[Create an agent with tools](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Create_Agent)]
## Define a `Tool` class and create tools

Create a `public partial` class to host the tools you want to use in AutoGen agents. Each method has to be a `public` instance method and its return type must be `Task<string>`. After the methods are defined, mark them with the @AutoGen.Core.FunctionAttribute attribute.

In the following example, we define a `GetWeather` tool that returns the weather information of a city.

[!code-csharp[Define Tool class](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Tools)]
[!code-csharp[Create tools](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Create_tools)]
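For orientation, a minimal sketch of such a class is shown below; the weather string is a stand-in for a real implementation.

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

public partial class Tools
{
    /// <summary>
    /// Get the weather of a city.
    /// </summary>
    /// <param name="city">name of the city</param>
    [Function]
    public Task<string> GetWeather(string city)
    {
        // stand-in implementation; a real tool would query a weather API
        return Task.FromResult($"The weather in {city} is 72 degrees and sunny.");
    }
}
```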
## Tool call without auto-invoke

In this case, when receiving a @AutoGen.Core.ToolCallMessage, the agent will not automatically invoke the tool. Instead, the agent will return the original message back to the user. The user can then decide whether to invoke the tool or not.

![single-turn tool call without auto-invoke](../images/articles/CreateAgentWithTools/single-turn-tool-call-without-auto-invoke.png)

To implement this, you can create the @AutoGen.Core.FunctionCallMiddleware without passing the `functionMap` parameter to the constructor, so that the middleware will not automatically invoke the tool once it receives a @AutoGen.Core.ToolCallMessage from its inner agent.

[!code-csharp[Single-turn tool call without auto-invoke](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Create_no_invoke_middleware)]

After creating the function call middleware, you can register it with the agent using the `RegisterMiddleware` method, which will return a new agent that can use the methods defined in the `Tool` class.

[!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Single_Turn_No_Invoke)]
## Tool call with auto-invoke

In this case, the agent will automatically invoke the tool when receiving a @AutoGen.Core.ToolCallMessage and return a @AutoGen.Core.ToolCallAggregateMessage which contains both the tool call request and the tool call result.

![single-turn tool call with auto-invoke](../images/articles/CreateAgentWithTools/single-turn-tool-call-with-auto-invoke.png)

To implement this, you can create the @AutoGen.Core.FunctionCallMiddleware with the `functionMap` parameter, so that the middleware will automatically invoke the tool once it receives a @AutoGen.Core.ToolCallMessage from its inner agent.

[!code-csharp[Single-turn tool call with auto-invoke](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Create_auto_invoke_middleware)]

After creating the function call middleware, you can register it with the agent using the `RegisterMiddleware` method, which will return a new agent that can use the methods defined in the `Tool` class.

[!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Single_Turn_Auto_Invoke)]
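The difference between the two modes boils down to the constructor arguments. Below is a sketch assuming the `Tools` class above and the `GetWeatherFunctionContract`/`GetWeatherWrapper` members that AutoGen.SourceGenerator emits for it:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AutoGen.Core;

var tools = new Tools();

// no functionMap: the middleware only advertises the contract, so tool call
// requests are returned to the caller without being invoked
var noInvokeMiddleware = new FunctionCallMiddleware(
    functions: new[] { tools.GetWeatherFunctionContract });

// with functionMap: the middleware invokes the matching wrapper automatically
var autoInvokeMiddleware = new FunctionCallMiddleware(
    functions: new[] { tools.GetWeatherFunctionContract },
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        { tools.GetWeatherFunctionContract.Name, tools.GetWeatherWrapper },
    });
```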
## Send the tool call result back to the LLM to generate a further response

In some cases, you may want to send the tool call result back to the LLM to generate a further response. To do this, you can send the tool call response from the agent back to the LLM by calling the `SendAsync` method of the agent.

[!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Multi_Turn_Tool_Call)]
## Parallel tool call

Some LLM models support parallel tool calls, which return multiple tool calls in one single message. Note that @AutoGen.Core.FunctionCallMiddleware already handles parallel tool calls for you. When it receives a @AutoGen.Core.ToolCallMessage that contains multiple tool calls, it will automatically invoke all the tools in sequential order and return a @AutoGen.Core.ToolCallAggregateMessage which contains all the tool call requests and results.

[!code-csharp[Generate Response](../../samples/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=parallel_tool_call)]
## Further Reading

- [Function call with openai](../articles/OpenAIChatAgent-use-function-call.md)
- [Function call with gemini](../articles/AutoGen.Gemini/Function-call-with-gemini.md)
- [Function call with local model](../articles/Function-call-with-ollama-and-litellm.md)
- [Use kernel plugin in other agents](../articles/AutoGen.SemanticKernel/Use-kernel-plugin-in-other-agents.md)
- [function call in mistral](../articles/MistralChatAgent-use-function-call.md)
autogen/dotnet/website/articles/OpenAIChatAgent-simple-chat.md

The following example shows how to create an @AutoGen.OpenAI.OpenAIChatAgent and chat with it.

Firstly, import the required namespaces:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)]

Then, create an @AutoGen.OpenAI.OpenAIChatAgent and chat with it:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=create_openai_chat_agent)]

@AutoGen.OpenAI.OpenAIChatAgent also supports streaming chat via @AutoGen.Core.IAgent.GenerateStreamingReplyAsync*.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=create_openai_chat_agent_streaming)]
autogen/dotnet/website/articles/Create-a-user-proxy-agent.md

## UserProxyAgent

[`UserProxyAgent`](../api/AutoGen.UserProxyAgent.yml) is a special type of agent that can be used to proxy user input to another agent or group of agents. It supports the following human input modes:
- `ALWAYS`: Always ask the user for input.
- `NEVER`: Never ask the user for input. In this mode, the agent will use the default response (if any) to respond to the message, or use the underlying LLM model to generate a response if one is provided.
- `AUTO`: Only ask the user for input when the conversation is terminated by the other agent(s). Otherwise, use the default response (if any) to respond to the message, or use the underlying LLM model to generate a response if one is provided.

> [!TIP]
> You can also set up `humanInputMode` when creating `AssistantAgent` to enable/disable human input. `UserProxyAgent` is equivalent to `AssistantAgent` with `humanInputMode` set to `ALWAYS`. Similarly, `AssistantAgent` is equivalent to `UserProxyAgent` with `humanInputMode` set to `NEVER`.

### Create a `UserProxyAgent` with `HumanInputMode` set to `ALWAYS`

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/UserProxyAgentCodeSnippet.cs?name=code_snippet_1)]

When running the code, the user proxy agent will ask the user for input and use the input as its response.

![code output](../images/articles/CreateUserProxyAgent/image-1.png)
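A minimal sketch of creating such an agent (the agent name here is illustrative):

```csharp
using AutoGen;
using AutoGen.Core;

// always prompt the human and relay their input as the reply
var userProxyAgent = new UserProxyAgent(
    name: "user",
    humanInputMode: HumanInputMode.ALWAYS)
    .RegisterPrintMessage();
```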
autogen/dotnet/website/articles/AutoGen-OpenAI-Overview.md

## AutoGen.OpenAI Overview

AutoGen.OpenAI provides the following agents over openai models:
- @AutoGen.OpenAI.OpenAIChatAgent: A slim wrapper agent over `OpenAIClient`. This agent only supports the `IMessage<ChatRequestMessage>` message type. To support more message types like @AutoGen.Core.TextMessage, register the agent with @AutoGen.OpenAI.OpenAIChatRequestMessageConnector.
- @AutoGen.OpenAI.GPTAgent: An agent built on top of @AutoGen.OpenAI.OpenAIChatAgent that supports more message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, as well as function calls. Essentially, it is equivalent to @AutoGen.OpenAI.OpenAIChatAgent with @AutoGen.Core.FunctionCallMiddleware and @AutoGen.OpenAI.OpenAIChatRequestMessageConnector registered.

### Get started with AutoGen.OpenAI

To get started with AutoGen.OpenAI, firstly follow the [installation guide](Installation.md) to make sure you add the AutoGen feed correctly. Then add the `AutoGen.OpenAI` package to your project file.

```xml
<ItemGroup>
    <PackageReference Include="AutoGen.OpenAI" Version="AUTOGEN_VERSION" />
</ItemGroup>
```
autogen/dotnet/website/articles/Group-chat.md

@AutoGen.Core.GroupChat invokes agents in a dynamic way. On one hand, it relies on its admin agent to intelligently determine the next speaker based on conversation context; on the other hand, it also allows you to control the conversation flow by using a @AutoGen.Core.Graph. This makes it a more dynamic yet controllable way to determine the next speaker agent. You can use @AutoGen.Core.GroupChat to create a dynamic group chat with multiple agents working together to resolve a given task.

> [!NOTE]
> In @AutoGen.Core.GroupChat, when only the group admin is used to determine the next speaker agent, it's recommended to use a more powerful llm model, such as `gpt-4`, to ensure the best experience.
## Use @AutoGen.Core.GroupChat to implement a code interpreter chat flow

The following example shows how to create a dynamic group chat with @AutoGen.Core.GroupChat. In this example, we will create a dynamic group chat with 4 agents: `admin`, `coder`, `reviewer` and `runner`. Each agent has its own role in the group chat:

### Code interpreter group chat

- `admin`: creates tasks for the group to work on and terminates the conversation when the task is completed. In this example, the task to resolve is to calculate the 39th Fibonacci number.
- `coder`: a dotnet coder who can write code to resolve tasks.
- `reviewer`: a dotnet code reviewer who can review code written by `coder`. In this example, `reviewer` will examine whether the code written by `coder` satisfies the conditions below:
  - has only one csharp code block.
  - uses top-level statements.
  - is a dotnet code snippet.
  - prints the result of the code snippet to the console.
- `runner`: a dotnet code runner who can run code written by `coder` and print the result.

```mermaid
flowchart LR
    subgraph Group Chat
    B[Admin]
    C[Coder]
    D[Reviewer]
    E[Runner]
    end
```

> [!NOTE]
> The complete code of this example can be found in `Example07_Dynamic_GroupChat_Calculate_Fibonacci`

### Create group chat

The code below shows how to create the dynamic group chat. In this case we don't pass a workflow to the group chat, so the conversation will be driven by the admin agent.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_group_chat)]

> [!TIP]
> You can set up initial context for the group chat using @AutoGen.Core.GroupChatExtension.SendIntroduction*. The initial context can help the group admin orchestrate the conversation flow.

Output:

![GroupChat](../images/articles/DynamicGroupChat/dynamicChat.gif)

### Breakdown of how the agents are created and their roles in the group chat

- Create admin agent

  The code below shows how to create the `admin` agent. The `admin` agent creates a task for the group to work on and terminates the conversation when the task is completed.

  [!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_admin)]

- Create coder agent

  [!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_coder)]

- Create reviewer agent

  The code below shows how to create the `reviewer` agent. The `reviewer` agent is a dotnet code reviewer who can review code written by `coder`. In this example, a `function` is used to examine whether the code written by `coder` satisfies the conditions.

  [!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=reviewer_function)]

  > [!TIP]
  > You can use @AutoGen.Core.FunctionAttribute to generate a type-safe function definition and function call wrapper for the function. For more information, please check out [Create type safe function call](./Create-type-safe-function-call.md).

  [!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_reviewer)]

- Create runner agent

  > [!TIP]
  > `AutoGen` provides built-in support for running code snippets. For more information, please check out [Execute code snippet](./Run-dotnet-code.md).

  [!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_runner)]
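As a condensed sketch, the group chat wiring looks roughly like this (constructor parameter names may differ slightly across versions; `admin`, `coder`, `reviewer` and `runner` are the agents described above):

```csharp
using AutoGen.Core;

// the admin agent picks the next speaker; no Graph workflow is passed here
var groupChat = new GroupChat(
    members: new IAgent[] { admin, coder, reviewer, runner },
    admin: admin);
```

The linked sample shows how to seed the task message and drive the conversation to completion.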
autogen/dotnet/website/articles/function-comparison-page-between-python-AutoGen-and-autogen.net.md

### Function comparison between Python AutoGen and AutoGen\.Net

#### Agentic pattern

| Feature | AutoGen | AutoGen\.Net |
| :---------------- | :------ | :---- |
| Code interpreter | run python code in local/docker/notebook executor | run csharp code in dotnet interactive executor |
| Single agent chat pattern | ✔️ | ✔️ |
| Two agent chat pattern | ✔️ | ✔️ |
| group chat (include FSM) | ✔️ | ✔️ (using workflow for FSM groupchat) |
| Nest chat | ✔️ | ✔️ (using middleware pattern) |
| Sequential chat | ✔️ | ❌ (need to manually create task in code) |
| Tool | ✔️ | ✔️ |

#### LLM platform support

> [!NOTE]
> Other than the platforms listed below, AutoGen\.Net also supports all the platforms that semantic kernel supports, via AutoGen.SemanticKernel as a bridge.

| Feature | AutoGen | AutoGen\.Net |
| :---------------- | :------ | :---- |
| OpenAI (include third-party) | ✔️ | ✔️ |
| Mistral | ✔️ | ✔️ |
| Ollama | ✔️ | ✔️ |
| Claude | ✔️ | ✔️ |
| Gemini (Include Vertex) | ✔️ | ✔️ |

#### Popular Contrib Agent support

| Feature | AutoGen | AutoGen\.Net |
| :---------------- | :------ | :---- |
| Rag Agent | ✔️ | ❌ |
| Web surfer | ✔️ | ❌ |
autogen/dotnet/website/articles/OpenAIChatAgent-use-function-call.md

The following example shows how to create a `GetWeatherAsync` function and pass it to @AutoGen.OpenAI.OpenAIChatAgent.

Firstly, you need to install the following packages:

```xml
<ItemGroup>
    <PackageReference Include="AutoGen.OpenAI" Version="AUTOGEN_VERSION" />
    <PackageReference Include="AutoGen.SourceGenerator" Version="AUTOGEN_VERSION" />
</ItemGroup>
```

> [!Note]
> The `AutoGen.SourceGenerator` package carries a source generator that adds support for type-safe function definition generation. For more information, please check out [Create type-safe function](./Create-type-safe-function-call.md).

> [!NOTE]
> If you are using VSCode as your editor, you may need to restart the editor to see the generated code.

Next, import the required namespaces:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)]

Then, define a public partial class `Function` with a `GetWeather` method:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=weather_function)]

Then, create an @AutoGen.OpenAI.OpenAIChatAgent and register it with @AutoGen.OpenAI.OpenAIChatRequestMessageConnector so it can support @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage. These message types are necessary to use @AutoGen.Core.FunctionCallMiddleware, which provides support for processing and invoking function calls.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=openai_chat_agent_get_weather_function_call)]

Then, create an @AutoGen.Core.FunctionCallMiddleware with the `GetWeather` function and register it with the agent above. When creating the middleware, we also pass a `functionMap` to @AutoGen.Core.FunctionCallMiddleware, which means the function will be automatically invoked when the agent replies with a `GetWeather` function call.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=create_function_call_middleware)]

Finally, you can chat with the @AutoGen.OpenAI.OpenAIChatAgent and invoke the `GetWeather` function.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=chat_agent_send_function_call)]
autogen/dotnet/website/articles/Use-function-call.md

## Use function call in an AutoGen agent

Typically, there are three ways to pass a function definition to an agent to enable function calls:
- Pass function definitions when creating an agent. This only works if the agent supports passing function definitions via its constructor.
- Pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent.
- Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls.

> [!NOTE]
> To use function call, the underlying LLM model must support function call as well for the best experience. If the model does not support function call, it's likely that the function call will be ignored and the model will reply with a normal response even if a function call is passed to it.
## Pass function definitions when creating an agent

In some agents like @AutoGen.AssistantAgent or @AutoGen.OpenAI.GPTAgent, you can pass function definitions when creating the agent.

Suppose the `TypeSafeFunctionCall` is defined in the following code snippet:
[!code-csharp[TypeSafeFunctionCall](../../samples/AutoGen.BasicSamples/CodeSnippet/TypeSafeFunctionCallCodeSnippet.cs?name=weather_report)]

You can then pass the `WeatherReport` to the agent when creating it:
[!code-csharp[assistant agent](../../samples/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=code_snippet_4)]
## Passing function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent

You can also pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent. This is useful when you want to override the function definitions passed to the agent when creating it.

[!code-csharp[assistant agent](../../samples/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=overrider_function_contract)]
## Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls

You can also register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls. This is useful when you want to process and invoke function calls in a more flexible way.

[!code-csharp[assistant agent](../../samples/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=register_function_call_middleware)]
## Invoke function call inside an agent

To invoke a function instead of returning the function call object, you can pass its function call wrapper to the agent via `functionMap`.

You can then pass the `WeatherReportWrapper` to the agent via `functionMap`:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=code_snippet_6)]

When a function call object is returned, the agent will invoke the function and use the return value as its response rather than returning the function call object.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=code_snippet_6_1)]
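A sketch of what such a `functionMap` looks like, assuming `function` is an instance of the class defining `WeatherReport` and `WeatherReportWrapper` is the wrapper generated by AutoGen.SourceGenerator:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// maps a function name to the wrapper that deserializes the arguments and invokes it
var functionMap = new Dictionary<string, Func<string, Task<string>>>
{
    { nameof(function.WeatherReport), function.WeatherReportWrapper },
};
```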
## Invoke function call by another agent

You can also use another agent to invoke the function call from one agent. This is a useful pattern in two-agent chat, where one agent is used as a function proxy to invoke the function call from another agent. Once the function call is invoked, the result can be returned to the original agent for further processing.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/FunctionCallCodeSnippet.cs?name=two_agent_weather_chat)]
autogen/dotnet/website/articles/Create-your-own-agent.md
## Coming soon
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md

The following example shows how to connect to a third-party OpenAI API using @AutoGen.OpenAI.OpenAIChatAgent.

[![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs)
## Overview

A lot of LLM applications/platforms support spinning up a chat server that is compatible with the OpenAI API, such as LM Studio, Ollama, Mistral, etc. This means that you can connect to these servers using the @AutoGen.OpenAI.OpenAIChatAgent.

> [!NOTE]
> Some platforms might not support all the features of the OpenAI API. For example, Ollama does not support `function call` when using its openai API according to its [document](https://github.com/ollama/ollama/blob/main/docs/openai.md#v1chatcompletions) (as of 2024/05/07).
> That means some of the features of the OpenAI API might not work as expected when using these platforms with the @AutoGen.OpenAI.OpenAIChatAgent.
> Please refer to the platform's documentation for more information.
## Prerequisites

- Install the following packages:

```bash
dotnet add package AutoGen.OpenAI --version AUTOGEN_VERSION
```

- Spin up a chat server that is compatible with the OpenAI API. The following example uses Ollama as the chat server, and llama3 as the llm model.

```bash
ollama serve
```
## Steps

- Import the required namespaces:

  [!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs?name=using_statement)]

- Create a `CustomHttpClientHandler` class. The `CustomHttpClientHandler` class is used to customize the `HttpClientHandler`. In this example, we override the `SendAsync` method to redirect the request to the local Ollama server, which is running on `http://localhost:11434`.

  [!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs?name=CustomHttpClientHandler)]

- Create an @AutoGen.OpenAI.OpenAIChatAgent instance and connect to the OpenAI API served by Ollama. You can customize the transport behavior of `OpenAIClient` by passing a customized `HttpClientTransport` instance. In the customized `HttpClientTransport` instance, we pass the `CustomHttpClientHandler` we just created, which redirects all openai chat requests to the local Ollama server.

  [!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs?name=create_agent)]

- Chat with the `OpenAIChatAgent`. Finally, you can start chatting with the agent. In this example, we send a coding question to the agent and get the response.

  [!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs?name=send_message)]
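For orientation, the redirect handler is conceptually just the following sketch, assuming Ollama's default port (names here are illustrative, not copied from the sample):

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public sealed class CustomHttpClientHandler : HttpClientHandler
{
    private readonly Uri _modelServiceUrl = new("http://localhost:11434");

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // keep the original path and query, swap in the local host and port
        request.RequestUri = new Uri(_modelServiceUrl, request.RequestUri!.PathAndQuery);
        return base.SendAsync(request, cancellationToken);
    }
}
```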
## Sample Output

The following is the sample output of the code snippet above:

![output](../images/articles/ConnectTo3PartyOpenAI/output.gif)
autogen/dotnet/website/articles/Middleware-overview.md

`Middleware` is a key feature in AutoGen.Net that enables you to customize the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync*. It's similar to the middleware concept in ASP.Net and is widely used in AutoGen.Net for various scenarios, such as function call support, converting messages of different types, printing messages, gathering user input, etc.

Here are a few examples of how middleware is used in AutoGen.Net:
- @AutoGen.AssistantAgent is essentially an agent with @AutoGen.Core.FunctionCallMiddleware, @AutoGen.HumanInputMiddleware and a default reply middleware.
- @AutoGen.OpenAI.GPTAgent is essentially an @AutoGen.OpenAI.OpenAIChatAgent with @AutoGen.Core.FunctionCallMiddleware and @AutoGen.OpenAI.OpenAIChatRequestMessageConnector.
## Use middleware in an agent

To use middleware in an existing agent, you can either create a @AutoGen.Core.MiddlewareAgent on top of the original agent or register middleware functions to the original agent.

### Create @AutoGen.Core.MiddlewareAgent on top of the original agent

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=create_middleware_agent_with_original_agent)]

### Register middleware functions to the original agent

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=register_middleware_agent)]
## Short-circuit the next agent

The example below shows how to short-circuit the inner agent.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=short_circuit_middleware_agent)]

> [!Note]
> When multiple middleware functions are registered, the order of middleware functions is first registered, last invoked.
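A sketch of a short-circuiting middleware function registered via `RegisterMiddleware` (the keyword check is illustrative; `agent` is assumed to be an existing @AutoGen.Core.IAgent):

```csharp
using System.Linq;
using AutoGen.Core;

var shortCircuitAgent = agent.RegisterMiddleware(async (messages, options, innerAgent, ct) =>
{
    // short-circuit: reply directly instead of calling the inner agent
    if (messages.Last().GetContent()?.Contains("stop") == true)
    {
        return new TextMessage(Role.Assistant, "Conversation stopped by middleware.");
    }

    // otherwise fall through to the inner agent
    return await innerAgent.GenerateReplyAsync(messages, options, ct);
});
```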
## Streaming middleware

You can also modify the behavior of @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* by registering streaming middleware to it. One example is @AutoGen.OpenAI.OpenAIChatRequestMessageConnector, which converts `StreamingChatCompletionsUpdate` to one of `AutoGen.Core.TextMessageUpdate` or `AutoGen.Core.ToolCallMessageUpdate`.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=register_streaming_middleware)]
autogen/dotnet/website/articles/Function-call-middleware.md
# Coming soon
autogen/dotnet/website/articles/Create-an-agent.md

## AssistantAgent

[`AssistantAgent`](../api/AutoGen.AssistantAgent.yml) is a built-in agent in `AutoGen` that acts as an AI assistant. It uses an LLM to generate responses to user input. It also supports function calls if the underlying LLM model supports them (e.g. `gpt-3.5-turbo-0613`).
## Create an `AssistantAgent` using an OpenAI model

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/CreateAnAgent.cs?name=code_snippet_1)]
## Create an `AssistantAgent` using an Azure OpenAI model

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/CreateAnAgent.cs?name=code_snippet_2)]
autogen/dotnet/website/articles/Group-chat-overview.md

@AutoGen.Core.IGroupChat is a fundamental feature in AutoGen. It provides a way to organize multiple agents under the same context and have them work together to resolve a given task.

In AutoGen, there are two types of group chat:
- @AutoGen.Core.RoundRobinGroupChat: This group chat runs agents in a round-robin sequence. The chat history plus the most recent reply from the previous agent will be passed to the next agent.
- @AutoGen.Core.GroupChat: This group chat provides a more dynamic yet controllable way to determine the next speaker agent. You can either use an llm agent as group admin, or use a @AutoGen.Core.Graph, which is introduced by [this PR](https://github.com/microsoft/autogen/pull/1761), or both to determine the next speaker agent.

> [!NOTE]
> In @AutoGen.Core.GroupChat, when only the group admin is used to determine the next speaker agent, it's recommended to use a more powerful llm model, such as `gpt-4`, to ensure the best experience.
autogen/dotnet/website/articles/Print-message-middleware.md

@AutoGen.Core.PrintMessageMiddleware is a built-in @AutoGen.Core.IMiddleware that pretty-prints @AutoGen.Core.IMessage to the console.

> [!NOTE]
> @AutoGen.Core.PrintMessageMiddleware supports the following @AutoGen.Core.IMessage types:
> - @AutoGen.Core.TextMessage
> - @AutoGen.Core.MultiModalMessage
> - @AutoGen.Core.ToolCallMessage
> - @AutoGen.Core.ToolCallResultMessage
> - @AutoGen.Core.Message
> - (streaming) @AutoGen.Core.TextMessageUpdate
> - (streaming) @AutoGen.Core.ToolCallMessageUpdate
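Registering it is a one-liner; a minimal sketch assuming an existing `agent`:

```csharp
using AutoGen.Core;

// every reply produced by the wrapped agent is pretty-printed to the console
var printingAgent = agent.RegisterPrintMessage();

var reply = await printingAgent.SendAsync("Hello!");
```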
## Use @AutoGen.Core.PrintMessageMiddleware in an agent

You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to register the @AutoGen.Core.PrintMessageMiddleware to an agent.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/PrintMessageMiddlewareCodeSnippet.cs?name=PrintMessageMiddleware)]

@AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* will format the message and print it to the console.

![image](../images/articles/PrintMessageMiddleware/printMessage.png)
## Streaming message support

@AutoGen.Core.PrintMessageMiddleware also supports streaming message types like @AutoGen.Core.TextMessageUpdate and @AutoGen.Core.ToolCallMessageUpdate. If you register @AutoGen.Core.PrintMessageMiddleware to a @AutoGen.Core.IStreamingAgent, it will format the streaming message and print it to the console if the message is of a supported type.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/PrintMessageMiddlewareCodeSnippet.cs?name=print_message_streaming)]

![image](../images/articles/PrintMessageMiddleware/streamingoutput.gif)
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md

This example shows how to use function calls with local LLM models, where [Ollama](https://ollama.com/) serves as the local model provider and [LiteLLM](https://docs.litellm.ai/docs/) provides a proxy server with an openai-api compatible interface.

[![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs)

To run this example, the following prerequisites are required:
- Install [Ollama](https://ollama.com/) and [LiteLLM](https://docs.litellm.ai/docs/) on your local machine.
- A local model that supports function call. In this example, `dolphincoder:latest` is used.
## Install Ollama and pull the `dolphincoder:latest` model

First, install Ollama by following the instructions on the [Ollama website](https://ollama.com/).

After installing Ollama, pull the `dolphincoder:latest` model by running the following command:

```bash
ollama pull dolphincoder:latest
```
## Install LiteLLM and start the proxy server

You can install LiteLLM by following the instructions on the [LiteLLM website](https://docs.litellm.ai/docs/).

```bash
pip install 'litellm[proxy]'
```

Then, start the proxy server by running the following command:

```bash
litellm --model ollama_chat/dolphincoder --port 4000
```

This will start an openai-api compatible proxy server at `http://localhost:4000`. You can verify if the server is running by observing the following output in the terminal:

```bash
#------------------------------------------------------------#
#                                                             #
#       'The worst thing about this product is...'            #
#        https://github.com/BerriAI/litellm/issues/new        #
#                                                             #
#------------------------------------------------------------#

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
```
## Install AutoGen and AutoGen.SourceGenerator

In your project, install the AutoGen and AutoGen.SourceGenerator packages using the following commands:

```bash
dotnet add package AutoGen
dotnet add package AutoGen.SourceGenerator
```

The `AutoGen.SourceGenerator` package is used to automatically generate type-safe `FunctionContract` instead of manually defining them. For more information, please check out [Create type-safe function](Create-type-safe-function-call.md).

And in your project file, enable structural xml document support by setting the `GenerateDocumentationFile` property to `true`:

```xml
<PropertyGroup>
    <!-- This enables structural xml document support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```
## Define a `WeatherReport` function and create @AutoGen.Core.FunctionCallMiddleware

Create a `public partial` class to host the methods you want to use in AutoGen agents. The method has to be a `public` instance method and its return type must be `Task<string>`. After the methods are defined, mark them with the `AutoGen.Core.FunctionAttribute` attribute.

[!code-csharp[Define WeatherReport function](../../samples/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Function)]

Then create a @AutoGen.Core.FunctionCallMiddleware and add the `WeatherReport` function to the middleware. The middleware will pass the `FunctionContract` to the agent when generating a response, and process the tool call response when receiving a `ToolCallMessage`.

[!code-csharp[Create the middleware](../../samples/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Create_tools)]
## Create @AutoGen.OpenAI.OpenAIChatAgent with the `GetWeatherReport` tool and chat with it

Because the LiteLLM proxy server is openai-api compatible, we can use @AutoGen.OpenAI.OpenAIChatAgent to connect to it as a third-party openai-api provider. The agent is also registered with a @AutoGen.Core.FunctionCallMiddleware which contains the `WeatherReport` tool. Therefore, the agent can call the `WeatherReport` tool when generating a response.

[!code-csharp[Create an agent with tools](../../samples/AutoGen.OpenAI.Sample/Tool_Call_With_Ollama_And_LiteLLM.cs?name=Create_Agent)]

The reply from the agent will be similar to the following:

```bash
AggregateMessage from assistant
--------------------
ToolCallMessage:
ToolCallMessage from assistant
--------------------
- GetWeatherAsync: {"city": "new york"}
--------------------
ToolCallResultMessage:
ToolCallResultMessage from assistant
--------------------
- GetWeatherAsync: The weather in new york is 72 degrees and sunny.
--------------------
```
autogen/dotnet/website/articles/Roundrobin-chat.md

@AutoGen.Core.RoundRobinGroupChat is a group chat that invokes agents in a round-robin order. It's useful when you want to call multiple agents in a fixed sequence, for example, asking a search agent to retrieve related information followed by a summarization agent to summarize the information. Besides, it is also used by @AutoGen.Core.AgentExtension.SendAsync(AutoGen.Core.IAgent,AutoGen.Core.IAgent,System.String,System.Collections.Generic.IEnumerable{AutoGen.Core.IMessage},System.Int32,System.Threading.CancellationToken) in two-agent chat.

### Use @AutoGen.Core.RoundRobinGroupChat to implement a search-summarize chat flow

```mermaid
flowchart LR
    A[User] -->|Ask a question| B[Search Agent]
    B -->|Retrieve information| C[Summarization Agent]
    C -->|Summarize result| A[User]
```

> [!NOTE]
> Complete code can be found in [Example11_Sequential_GroupChat_Example](https://github.com/microsoft/autogen/blob/dotnet/dotnet/samples/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs).

Step 1: Add the required using statements.
[!code-csharp[](../../samples/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=using_statement)]

Step 2: Create a `bingSearch` agent using @AutoGen.SemanticKernel.SemanticKernelAgent.
[!code-csharp[](../../samples/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=CreateBingSearchAgent)]

Step 3: Create a `summarization` agent using @AutoGen.SemanticKernel.SemanticKernelAgent.
[!code-csharp[](../../samples/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=CreateSummarizerAgent)]

Step 4: Create a @AutoGen.Core.RoundRobinGroupChat and add the `bingSearch` and `summarization` agents to it.
[!code-csharp[](../../samples/AutoGen.BasicSamples/Example11_Sequential_GroupChat_Example.cs?name=Sequential_GroupChat_Example)]

Output:

![Searcher-Summarizer](../images/articles/SequentialGroupChat/SearcherSummarizer.gif)
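As a condensed sketch of step 4 (assuming `bingSearchAgent` and `summarizerAgent` from the previous steps; the exact send API may differ across versions):

```csharp
using AutoGen.Core;

// agents speak in the fixed order they are passed in
var groupChat = new RoundRobinGroupChat(
    agents: new IAgent[] { bingSearchAgent, summarizerAgent });

// one round each: search first, then summarize
var history = await groupChat.CallAsync(
    new[] { new TextMessage(Role.User, "What is the tallest building in Seattle?") },
    maxRound: 2);
```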
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
The following example shows how to enable JSON mode in @AutoGen.OpenAI.OpenAIChatAgent. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.OpenAI.Sample/Use_Json_Mode.cs)
## What is JSON mode?

JSON mode is a new feature in the OpenAI API which allows you to instruct the model to always respond with a valid JSON object. This is useful when you want to constrain the model output to JSON format only.

> [!NOTE]
> Currently, JSON mode is only supported by `gpt-4-turbo-preview` and `gpt-3.5-turbo-0125`. For more information (and limitations) about JSON mode, please visit the [OpenAI API documentation](https://platform.openai.com/docs/guides/text-generation/json-mode).
## How to enable JSON mode in OpenAIChatAgent

To enable JSON mode for @AutoGen.OpenAI.OpenAIChatAgent, set `responseFormat` to `ChatCompletionsResponseFormat.JsonObject` when creating the agent. Note that when enabling JSON mode, you also need to instruct the agent to output JSON format in its system message.

[!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Use_Json_Mode.cs?name=create_agent)]

After enabling JSON mode, the `openAIClientAgent` will always respond in JSON format when it receives a message.

[!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Use_Json_Mode.cs?name=chat_with_agent)]

When running the example, the output from `openAIClientAgent` will be a valid JSON object which can be parsed as the `Person` class defined below. Note that in the output, the `address` field is missing because the address information is not provided in the user input.

[!code-csharp[](../../samples/AutoGen.OpenAI.Sample/Use_Json_Mode.cs?name=person_class)]

The output will be:

```bash
Name: John
Age: 25
Done
```
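A sketch of the relevant constructor arguments (assuming an `OpenAIClient` named `openAIClient`; parameter names follow the pattern above but may differ slightly across versions):

```csharp
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using Azure.AI.OpenAI;

var openAIClientAgent = new OpenAIChatAgent(
    openAIClient: openAIClient,
    name: "assistant",
    modelName: "gpt-3.5-turbo-0125",
    systemMessage: "You are a helpful assistant. Always reply in JSON format.",
    responseFormat: ChatCompletionsResponseFormat.JsonObject) // enable JSON mode
    .RegisterMessageConnector();
```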
autogen/dotnet/website/articles/Create-your-own-middleware.md
## Coming soon
autogen/dotnet/website/articles/Create-type-safe-function-call.md

## Create type-safe function call using AutoGen.SourceGenerator

`AutoGen` provides a source generator to ease the burden of manually crafting a function definition and function call wrapper from a function. To use this feature, simply add the `AutoGen.SourceGenerator` package to your project and decorate your function with @AutoGen.Core.FunctionAttribute.

```bash
dotnet add package AutoGen.SourceGenerator
```

> [!NOTE]
> It's recommended to enable structural xml document support by setting the `GenerateDocumentationFile` property to true in your project file. This allows the source generator to leverage the documentation of the function when generating the function definition.

```xml
<PropertyGroup>
    <!-- This enables structural xml document support -->
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```

Then, create a `public partial` class to host the methods you want to use in AutoGen agents.

> [!NOTE]
> A `public partial` class is required for the source generator to generate code.
> The method has to be a `public` instance method and its return type must be `Task<string>`.
> Mark the method with the @AutoGen.Core.FunctionAttribute attribute.

Firstly, import the required namespaces:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/TypeSafeFunctionCallCodeSnippet.cs?name=weather_report_using_statement)]

Then, create a `WeatherReport` function and mark it with @AutoGen.Core.FunctionAttribute:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/TypeSafeFunctionCallCodeSnippet.cs?name=weather_report)]

The source generator will generate the @AutoGen.Core.FunctionContract and function call wrapper for `WeatherReport` in another partial class, based on its signature and structural comments. The @AutoGen.Core.FunctionContract is introduced by [#1736](https://github.com/microsoft/autogen/pull/1736) and contains all the necessary metadata such as function name, parameters, and return type. It is LLM independent and can be used to generate an openai function definition or a semantic kernel function. The function call wrapper is a helper class that provides a type-safe way to call the function.

> [!NOTE]
> If you are using VSCode as your editor, you may need to restart the editor to see the generated code.

The following code shows how to generate the openai function definition from the @AutoGen.Core.FunctionContract and call the function using the function call wrapper.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/TypeSafeFunctionCallCodeSnippet.cs?name=weather_report_consume)]
autogen/dotnet/website/articles/getting-start.md

### Get started with AutoGen for dotnet

[![dotnet-ci](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)

Firstly, add the `AutoGen` package to your project.

```bash
dotnet add package AutoGen
```

> [!NOTE]
> For more information about installing packages, please check out the [installation guide](Installation.md).

Then you can start with the following code snippet to create a conversable agent and chat with it.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/GetStartCodeSnippet.cs?name=snippet_GetStartCodeSnippet)]
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/GetStartCodeSnippet.cs?name=code_snippet_1)]

### Tutorial

Get started with AutoGen.Net by following the [tutorial](../tutorial/Chat-with-an-agent.md) series.

### Examples

You can find more examples under the [sample project](https://github.com/microsoft/autogen/tree/dotnet/dotnet/samples/AutoGen.BasicSamples).

### Report a bug or request a feature

You can report a bug or request a feature by creating a new issue in the [github issues](https://github.com/microsoft/autogen/issues) and applying the label "dotnet".
autogen/dotnet/website/articles/MistralChatAgent-use-function-call.md

## Use tool in MistralChatAgent

The following example shows how to enable tool support in @AutoGen.Mistral.MistralClientAgent by creating a `GetWeatherAsync` function and passing it to the agent.

Firstly, you need to install the following packages:

```bash
dotnet add package AutoGen.Mistral
dotnet add package AutoGen.SourceGenerator
```

> [!Note]
> Tool support is only available in some mistral models. Please refer to the [link](https://docs.mistral.ai/capabilities/function_calling/#available-models) for tool call support in mistral models.

> [!Note]
> The `AutoGen.SourceGenerator` package carries a source generator that adds support for type-safe function definition generation. For more information, please check out [Create type-safe function](./Create-type-safe-function-call.md).

> [!NOTE]
> If you are using VSCode as your editor, you may need to restart the editor to see the generated code.

Import the required namespaces:
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=using_statement)]

Then define a `public partial` `MistralAgentFunction` class and a `GetWeather` method. The `GetWeather` method is a simple function that returns the weather of a given location, marked with the @AutoGen.Core.FunctionAttribute. Marking the class as `public partial`, together with the @AutoGen.Core.FunctionAttribute attribute, allows the source generator to generate the @AutoGen.Core.FunctionContract for the `GetWeather` method.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=weather_function)]

Then create an @AutoGen.Mistral.MistralClientAgent and register it with @AutoGen.Mistral.Extension.MistralAgentExtension.RegisterMessageConnector* so it can support @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage. These message types are necessary to use @AutoGen.Core.FunctionCallMiddleware, which provides support for processing and invoking function calls.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=create_mistral_function_call_agent)]

Then create an @AutoGen.Core.FunctionCallMiddleware with the `GetWeather` function. When creating the middleware, we also pass a `functionMap` object, which means the function will be automatically invoked when the agent replies with a `GetWeather` function call.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=create_get_weather_function_call_middleware)]

After the function call middleware is created, register it with the agent so the `GetWeather` function will be passed to the agent during chat completion.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=register_function_call_middleware)]

Finally, you can chat with the @AutoGen.Mistral.MistralClientAgent about the weather! The agent will automatically invoke the `GetWeather` function to "get" the weather information and return the result.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=send_message_with_function_call)]
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
`AutoGen` provides a built-in feature to run code snippets from agent responses. Currently the following languages are supported:

- dotnet

More languages will be supported in the future.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
## What is a code snippet?

A code snippet in an agent response is a code block with a language identifier. For example:

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_3)]
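To make this concrete, here is a hypothetical illustration (not taken from the linked snippet) of what a dotnet code snippet inside an agent's reply looks like: a fenced code block tagged with a language identifier such as `csharp`:

````markdown
```csharp
// the language identifier "csharp" marks this block as a dotnet snippet
Console.WriteLine("Hello from an agent-generated snippet");
```
````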
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
## Why is running a code snippet useful?

The ability to run code snippets greatly extends what an agent can do, because it enables the agent to resolve tasks by writing and running code, which is much more powerful than just returning a text response. For example, in a data analysis scenario, an agent can resolve a task like "What is the average of the sales amount of the last 7 days?" by first writing a code snippet that queries the sales amount of the last 7 days and calculates the average, and then running that snippet to get the result.

> [!WARNING]
> Running arbitrary code snippets from agent responses could bring risks to your system. Use this feature with caution.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
## Use dotnet interactive kernel to execute code snippet

The built-in feature of running dotnet code snippets is provided by [dotnet-interactive](https://github.com/dotnet/interactive). To run dotnet code snippets, you need to install the following package to your project, which provides the integration with dotnet-interactive:

```xml
<PackageReference Include="AutoGen.DotnetInteractive" />
```

Then you can use @AutoGen.DotnetInteractive.DotnetInteractiveKernelBuilder* to create an in-process dotnet-interactive composite kernel with C# and F# kernels.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_1)]

After that, use the @AutoGen.DotnetInteractive.Extension.RunSubmitCodeCommandAsync* method to run a code snippet. The method will return the result of the code snippet.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_2)]
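For orientation, a minimal sketch of the two referenced snippets combined might look as follows; treat the exact method shapes as assumptions to verify against the sample:

```csharp
using AutoGen.DotnetInteractive;
using AutoGen.DotnetInteractive.Extension;

// build an in-process dotnet-interactive composite kernel with C# and F# sub-kernels
var kernel = DotnetInteractiveKernelBuilder
    .CreateDefaultInProcessKernelBuilder()
    .Build();

// submit a C# snippet to the "csharp" sub-kernel; the execution result is returned
var result = await kernel.RunSubmitCodeCommandAsync("Console.WriteLine(1 + 1);", "csharp");
Console.WriteLine(result);
```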
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
## Run python code snippet

To run python code, first you need to have python installed on your machine, then you need to set up ipykernel and jupyter in your environment.

```bash
pip install ipykernel
pip install jupyter
```

After `ipykernel` and `jupyter` are installed, you can confirm the ipykernel is installed correctly by running the following command:

```bash
jupyter kernelspec list
```

The output should contain all available kernels, including `python3`.

```bash
Available kernels:
  python3    /usr/local/share/jupyter/kernels/python3
  ...
```

Then you can add the python kernel to the dotnet-interactive composite kernel by calling the `AddPythonKernel` method.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_4)]
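A rough sketch of wiring this up, continuing the builder from the previous section; the `"python3"` kernel spec name and the `"python"` target kernel name are assumptions based on the jupyter output above:

```csharp
using AutoGen.DotnetInteractive;
using AutoGen.DotnetInteractive.Extension;

// add the python3 jupyter kernel to the composite kernel
var kernel = DotnetInteractiveKernelBuilder
    .CreateDefaultInProcessKernelBuilder()
    .AddPythonKernel("python3")
    .Build();

// submit a python snippet to the python sub-kernel
var result = await kernel.RunSubmitCodeCommandAsync("print(1 + 1)", "python");
Console.WriteLine(result);
```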
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
## Further reading

You can refer to the following examples for running code snippets in an agentic workflow:

- Dynamic_GroupChat_Coding_Task: [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.BasicSample/Example04_Dynamic_GroupChat_Coding_Task.cs)
- Dynamic_GroupChat_Calculate_Fibonacci: [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.BasicSample/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs)
GitHub
autogen
autogen/dotnet/website/articles/Two-agent-chat.md
autogen
In `AutoGen`, you can start a conversation between two agents using @AutoGen.Core.AgentExtension.InitiateChatAsync* or one of the @AutoGen.Core.AgentExtension.SendAsync* APIs. When the conversation starts, the sender agent first sends a message to the receiver agent, then the receiver agent generates a reply and sends it back to the sender agent. This process repeats until either agent sends a termination message or the maximum number of turns is reached.

> [!NOTE]
> A termination message is an @AutoGen.Core.IMessage whose content contains the keyword @AutoGen.Core.GroupChatExtension.TERMINATE. To determine if a message is a termination message, you can use @AutoGen.Core.GroupChatExtension.IsGroupChatTerminateMessage*.
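As a minimal sketch (agent construction omitted; `teacher` and `student` are any two `IAgent` instances, and parameter names are from memory):

```csharp
using AutoGen.Core;

// the student kicks off the conversation; it ends after a termination
// message or after maxRound turns, whichever comes first
var chatHistory = await student.InitiateChatAsync(
    receiver: teacher,
    message: "Hey teacher, please create a math question for me.",
    maxRound: 10);
```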
GitHub
autogen
autogen/dotnet/website/articles/Two-agent-chat.md
autogen
## A basic example

The following example shows how to start a conversation between the teacher agent and the student agent, where the student agent starts the conversation by asking the teacher to create math questions.

> [!TIP]
> You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to pretty print the message replied by the agent.

> [!NOTE]
> The conversation is terminated when the teacher agent sends a message containing the keyword @AutoGen.Core.GroupChatExtension.TERMINATE.

> [!NOTE]
> The teacher agent uses @AutoGen.Core.MiddlewareExtension.RegisterPostProcess* to register a post-process function which returns a hard-coded termination message when a certain condition is met. Compared with putting the @AutoGen.Core.GroupChatExtension.TERMINATE keyword in the prompt, this approach is more robust, especially when a weaker LLM model is used.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example02_TwoAgent_MathChat.cs?name=code_snippet_1)]
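The termination trick in the last note can be sketched roughly like this; the delegate signature and the `"correct"` condition are assumptions for illustration, so check the linked sample for the real implementation:

```csharp
using AutoGen.Core;

// wrap the inner teacher agent: once it judges the answer correct (hypothetical
// condition), rewrite the reply into a hard-coded termination message
var teacher = innerTeacherAgent.RegisterPostProcess(async (conversation, reply, ct) =>
{
    if (reply.GetContent()?.Contains("correct") is true)
    {
        return new TextMessage(Role.Assistant, GroupChatExtension.TERMINATE, from: reply.From);
    }

    return reply;
});
```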
GitHub
autogen
autogen/dotnet/website/articles/Use-graph-in-group-chat.md
autogen
Sometimes, you may want more control over how the next agent is selected in a @AutoGen.Core.GroupChat based on the task you want to resolve. For example, in the previous [code writing example](./Group-chat.md), the original code interpreter workflow can be improved by the following diagram, because it's not necessary for `admin` to directly talk to `reviewer`, nor is it necessary for `coder` to talk to `runner`.

```mermaid
flowchart TD
    A[Admin] -->|Ask coder to write code| B[Coder]
    B -->|Ask Reviewer to review code| C[Reviewer]
    C -->|Ask Runner to run code| D[Runner]
    D -->|Send result if succeed| A[Admin]
    D -->|Ask coder to fix if failed| B[Coder]
    C -->|Ask coder to fix if not approved| B[Coder]
```

By having @AutoGen.Core.GroupChat follow a specific graph flow, we can bring prior knowledge to the group chat and make the conversation more efficient and robust. This is where @AutoGen.Core.Graph comes in.

### Create a graph

The following code shows how to create a graph that represents the diagram above. The graph doesn't need to be a finite state machine where each state can only have one legitimate next state. Instead, it can be a directed graph where each state can have multiple legitimate next states. And if there are multiple legitimate next states, the `admin` agent of @AutoGen.Core.GroupChat will decide which one to take based on the conversation context.

> [!TIP]
> @AutoGen.Core.Graph supports conditional transitions. To create a conditional transition, you can pass a lambda function to `canTransitionAsync` when creating a @AutoGen.Core.Transition. The lambda function should return a boolean value indicating whether the transition can be taken.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_workflow)]

Once the graph is created, you can pass it to the group chat. The group chat will then use the graph along with the admin agent to orchestrate the conversation flow.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example07_Dynamic_GroupChat_Calculate_Fibonacci.cs?name=create_group_chat_with_workflow)]
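For a self-contained flavor of the referenced snippet, a sketch of the transitions in the diagram might look like this; the approval condition is hypothetical, and the exact `Transition.Create` lambda shape should be verified against the sample:

```csharp
using AutoGen.Core;

// transitions mirroring the diagram above; admin/coder/reviewer/runner are created elsewhere
var admin2Coder = Transition.Create(admin, coder);
var coder2Reviewer = Transition.Create(coder, reviewer);
var reviewer2Runner = Transition.Create(reviewer, runner, canTransitionAsync: async (from, to, messages) =>
{
    // hypothetical condition: only hand off to the runner when the reviewer approved the code
    return messages.Last().GetContent()?.Contains("approved") is true;
});
var runner2Admin = Transition.Create(runner, admin);

var workflow = new Graph([admin2Coder, coder2Reviewer, reviewer2Runner, runner2Admin]);
```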
GitHub
autogen
autogen/dotnet/website/articles/Consume-LLM-server-from-LM-Studio.md
autogen
## Consume LLM server from LM Studio

You can use @AutoGen.LMStudio.LMStudioAgent from the `AutoGen.LMStudio` package to consume the openai-like API from an LM Studio local server.

### What's LM Studio

[LM Studio](https://lmstudio.ai/) is an app that allows you to deploy and inference hundreds of thousands of open-source language models on your local machine. It provides an in-app chat UI plus an openai-like API to interact with the language model programmatically.

### Installation

- Install LM Studio if you haven't done so. You can find the installation guide [here](https://lmstudio.ai/).
- Add `AutoGen.LMStudio` to your project.

```xml
<ItemGroup>
    <PackageReference Include="AutoGen.LMStudio" Version="AUTOGEN_LMSTUDIO_VERSION" />
</ItemGroup>
```

### Usage

The following code shows how to use `LMStudioAgent` to write a piece of C# code to calculate the 100th Fibonacci number. Before running the code, make sure you have the local server from LM Studio running on `localhost:1234`.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example08_LMStudio.cs?name=lmstudio_using_statements)]
[!code-csharp[](../../samples/AutoGen.BasicSamples/Example08_LMStudio.cs?name=lmstudio_example_1)]
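A compact sketch of the usage; the host and port match the note above, while the constructor argument names are assumptions:

```csharp
using AutoGen.Core;
using AutoGen.LMStudio;

// a minimal sketch, assuming LM Studio's local server is listening on localhost:1234
var config = new LMStudioConfig("localhost", 1234);
var agent = new LMStudioAgent(name: "assistant", config: config)
    .RegisterPrintMessage();

var reply = await agent.SendAsync("Can you write a piece of C# code to calculate the 100th Fibonacci number?");
```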
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
`Agent` is one of the most fundamental concepts in AutoGen.Net. In AutoGen.Net, you construct a single agent to process a specific task, extend an agent using [Middlewares](./Middleware-overview.md), and construct a multi-agent workflow using [GroupChat](./Group-chat-overview.md).

> [!NOTE]
> Every agent in AutoGen.Net implements @AutoGen.Core.IAgent; an agent that supports streaming replies also implements @AutoGen.Core.IStreamingAgent.
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
## Create an agent

- Create an @AutoGen.AssistantAgent: [Create an assistant agent](./Create-an-agent.md)
- Create an @AutoGen.OpenAI.OpenAIChatAgent: [Create an OpenAI chat agent](./OpenAIChatAgent-simple-chat.md)
- Create a @AutoGen.SemanticKernel.SemanticKernelAgent: [Create a semantic kernel agent](./AutoGen.SemanticKernel/SemanticKernelAgent-simple-chat.md)
- Create a @AutoGen.LMStudio.LMStudioAgent: [Connect to LM Studio](./Consume-LLM-server-from-LM-Studio.md)
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
## Chat with an agent

To chat with an agent, typically you can invoke @AutoGen.Core.IAgent.GenerateReplyAsync*. On top of that, you can also use one of the extension methods like @AutoGen.Core.AgentExtension.SendAsync* as shortcuts.

> [!NOTE]
> AutoGen provides a list of built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, @AutoGen.Core.ToolCallMessage, @AutoGen.Core.ToolCallResultMessage, etc. You can use these message types to chat with an agent. For further details, see [built-in messages](./Built-in-messages.md).

- Send a @AutoGen.Core.TextMessage to an agent via @AutoGen.Core.IAgent.GenerateReplyAsync*:

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/AgentCodeSnippet.cs?name=ChatWithAnAgent_GenerateReplyAsync)]

- Send a message to an agent via @AutoGen.Core.AgentExtension.SendAsync*:

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/AgentCodeSnippet.cs?name=ChatWithAnAgent_SendAsync)]
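As a minimal sketch of both calls (assuming `agent` is any `IAgent`):

```csharp
using AutoGen.Core;

// invoke the agent directly with a list of built-in messages
var reply = await agent.GenerateReplyAsync(
    messages: [new TextMessage(Role.User, "What is 1 + 1?")]);

// or use the SendAsync shortcut, which wraps the text into a message for you
var reply2 = await agent.SendAsync("What is 1 + 1?");
```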
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
## Streaming chat

If an agent implements @AutoGen.Core.IStreamingAgent, you can use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* to chat with the agent in a streaming way. You would need to process the streaming updates on your side, though.

- Send a @AutoGen.Core.TextMessage to an agent via @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*, and print the streaming updates to the console:

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/AgentCodeSnippet.cs?name=ChatWithAnAgent_GenerateStreamingReplyAsync)]
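A minimal sketch of consuming the stream (assuming `agent` is an `IStreamingAgent`; the update type check mirrors the built-in `TextMessageUpdate`):

```csharp
using AutoGen.Core;

IMessage[] messages = [new TextMessage(Role.User, "Tell me a short story.")];

// print each text update as it arrives
await foreach (var update in agent.GenerateStreamingReplyAsync(messages))
{
    if (update is TextMessageUpdate textUpdate)
    {
        Console.Write(textUpdate.Content);
    }
}
```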
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
## Register middleware to an agent

@AutoGen.Core.IMiddleware and @AutoGen.Core.IStreamingMiddleware are used to extend the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync* and @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*. You can register middleware to an agent to customize its behavior for things like function call support, converting messages of different types, printing messages, gathering user input, etc.

- Middleware overview: [Middleware overview](./Middleware-overview.md)
- Write message to console: [Print message middleware](./Print-message-middleware.md)
- Convert message type: [SemanticKernelChatMessageContentConnector](./AutoGen.SemanticKernel/SemanticKernelAgent-support-more-messages.md) and [OpenAIChatRequestMessageConnector](./OpenAIChatAgent-support-more-messages.md)
- Create your own middleware: [Create your own middleware](./Create-your-own-middleware.md)
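As a minimal sketch of registering an inline middleware; the delegate-based `RegisterMiddleware` overload shown here is from memory, see the middleware overview for the full story:

```csharp
using AutoGen.Core;

// log every request before delegating to the inner agent, and return its reply unchanged
var loggingAgent = agent.RegisterMiddleware(async (messages, options, innerAgent, ct) =>
{
    Console.WriteLine($"[{innerAgent.Name}] received {messages.Count()} message(s)");
    return await innerAgent.GenerateReplyAsync(messages, options, ct);
});
```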
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
## Group chat

You can construct a multi-agent workflow using @AutoGen.Core.IGroupChat. In AutoGen.Net, there are two types of group chat:

- @AutoGen.Core.SequentialGroupChat: Orchestrates the agents in the group chat in a fixed, sequential order.
- @AutoGen.Core.GroupChat: Provides a more dynamic yet controllable way to orchestrate the agents in the group chat.

For further details, see [Group chat overview](./Group-chat-overview.md).
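As a minimal sketch of constructing the dynamic variant (constructor parameter names are from memory; the agents are created elsewhere):

```csharp
using AutoGen.Core;

// an admin-led dynamic group chat; the admin decides who speaks next
var groupChat = new GroupChat(
    admin: admin,
    members: [coder, reviewer, runner]);
```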
GitHub
autogen
autogen/dotnet/website/articles/AutoGen-Mistral-Overview.md
autogen
## AutoGen.Mistral overview

AutoGen.Mistral provides the following agent(s) to connect to the [Mistral.AI](https://mistral.ai/) platform.

- @AutoGen.Mistral.MistralClientAgent: A slim wrapper agent over @AutoGen.Mistral.MistralClient.

### Get started with AutoGen.Mistral

To get started with AutoGen.Mistral, follow the [installation guide](Installation.md) to make sure you add the AutoGen feed correctly. Then add the `AutoGen.Mistral` package to your project file.

```bash
dotnet add package AutoGen.Mistral
```

> [!NOTE]
> You need to provide an API key to use Mistral models, which may incur additional cost. You can get an API key from [Mistral.AI](https://mistral.ai/).

### Example

Import the required namespace:

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=using_statement)]

Create a @AutoGen.Mistral.MistralClientAgent and start chatting!

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=create_mistral_agent)]

Use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* to stream the chat completion.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/MistralAICodeSnippet.cs?name=streaming_chat)]
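As a minimal sketch of creating the agent; the model id shown is an assumption, so check Mistral's model list for valid ids:

```csharp
using AutoGen.Core;
using AutoGen.Mistral;

var apiKey = Environment.GetEnvironmentVariable("MISTRAL_API_KEY")
    ?? throw new InvalidOperationException("MISTRAL_API_KEY is not set");

var client = new MistralClient(apiKey);
var agent = new MistralClientAgent(
    mistralClient: client,
    name: "assistant",
    model: "open-mistral-7b"); // assumed model id

var reply = await agent.SendAsync("Hello, please introduce yourself.");
```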
GitHub
autogen
autogen/dotnet/website/articles/Built-in-messages.md
autogen
## An overview of built-in @AutoGen.Core.IMessage types

Starting from 0.0.9, AutoGen introduces the @AutoGen.Core.IMessage and @AutoGen.Core.IMessage`1 types to provide a unified message interface for different agents. @AutoGen.Core.IMessage is a non-generic interface that represents a message. @AutoGen.Core.IMessage`1 is a generic interface that represents a message with a specific `T`, where `T` can be any type.

Besides, AutoGen also provides a set of built-in message types that implement the @AutoGen.Core.IMessage and @AutoGen.Core.IMessage`1 interfaces. These built-in message types are designed to cover as many message scenarios as possible. The built-in message types include:

> [!NOTE]
> The minimal requirement for an agent to be used as admin in @AutoGen.Core.GroupChat is to support @AutoGen.Core.TextMessage.

> [!NOTE]
> @AutoGen.Core.Message will be deprecated in 0.0.14. Please replace it with a more specific message type like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, etc.

- @AutoGen.Core.TextMessage: A message that contains a piece of text.
- @AutoGen.Core.ImageMessage: A message that contains an image.
- @AutoGen.Core.MultiModalMessage: A message that contains multiple modalities like text, image, etc.
- @AutoGen.Core.ToolCallMessage: A message that represents a function call request.
- @AutoGen.Core.ToolCallResultMessage: A message that represents a function call result.
- @AutoGen.Core.ToolCallAggregateMessage: A message that contains both @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage. This type of message is used by @AutoGen.Core.FunctionCallMiddleware to aggregate both @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage into a single message.
- @AutoGen.Core.MessageEnvelope`1: A message that represents an envelope that contains a message of any type.
- @AutoGen.Core.Message: The original message type before 0.0.9. This message type is reserved for backward compatibility. It is recommended to replace it with a more specific message type like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, etc.

### Streaming message support

AutoGen also introduces @AutoGen.Core.IStreamingMessage and @AutoGen.Core.IStreamingMessage`1, which are used in the streaming call API. The following built-in message types implement the @AutoGen.Core.IStreamingMessage and @AutoGen.Core.IStreamingMessage`1 interfaces:

> [!NOTE]
> Every @AutoGen.Core.IMessage is also an @AutoGen.Core.IStreamingMessage. That means you can return an @AutoGen.Core.IMessage from a streaming call method. It's also recommended to return the final updated result instead of the last update as the last message in the streaming call method to indicate the end of the stream, which saves the caller's effort of assembling the final result from multiple updates.

- @AutoGen.Core.TextMessageUpdate: A message that contains a piece of text update.
- @AutoGen.Core.ToolCallMessageUpdate: A message that contains a function call request update.

#### Usage

The code snippet below shows how to print a streaming update to the console and update the final result on the caller side.

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/BuildInMessageCodeSnippet.cs?name=StreamingCallCodeSnippet)]

If the agent returns a final result instead of the last update as the last message in the streaming call method, the caller can directly use the final result without assembling it from multiple updates.
[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/BuildInMessageCodeSnippet.cs?name=StreamingCallWithFinalMessage)]
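For a quick feel of the built-in types, a minimal construction sketch (the image URL is a placeholder):

```csharp
using AutoGen.Core;

var text = new TextMessage(Role.User, "What is in this image?");
var image = new ImageMessage(Role.User, new Uri("https://example.com/cat.png")); // placeholder URL
var multiModal = new MultiModalMessage(Role.User, [text, image]);
```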
GitHub
autogen
autogen/dotnet/website/articles/Installation.md
autogen
### Current version: [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core)

AutoGen.Net provides the following packages. You can choose to install one or more of them based on your needs:

- `AutoGen`: The one-in-all package. This package has dependencies on `AutoGen.Core`, `AutoGen.OpenAI`, `AutoGen.LMStudio`, `AutoGen.SemanticKernel` and `AutoGen.SourceGenerator`.
- `AutoGen.Core`: The core package. This package provides the abstraction for message types, agents and group chat.
- `AutoGen.OpenAI`: This package provides the integration agents over openai models.
- `AutoGen.Mistral`: This package provides the integration agents for Mistral.AI models.
- `AutoGen.Ollama`: This package provides the integration agents for [Ollama](https://ollama.com/).
- `AutoGen.Anthropic`: This package provides the integration agents for [Anthropic](https://www.anthropic.com/api).
- `AutoGen.LMStudio`: This package provides the integration agents from LM Studio.
- `AutoGen.SemanticKernel`: This package provides the integration agents over semantic kernel.
- `AutoGen.Gemini`: This package provides the integration agents from [Google Gemini](https://gemini.google.com/).
- `AutoGen.AzureAIInference`: This package provides the integration agents for [Azure AI Inference](https://www.nuget.org/packages/Azure.AI.Inference).
- `AutoGen.SourceGenerator`: This package carries a source generator that adds support for type-safe function definition generation.
- `AutoGen.DotnetInteractive`: This package carries dotnet-interactive support to execute code snippets. The currently supported languages are C#, F#, PowerShell and Python.

> [!Note]
> Help me choose
> - If you just want to install one package and enjoy the core features of AutoGen, choose `AutoGen`.
> - If you want to leverage AutoGen's abstraction only and want to avoid introducing any other dependencies, like `Azure.AI.OpenAI` or `Semantic Kernel`, choose `AutoGen.Core`. You will need to implement your own agent, but you can still use AutoGen core features like group chat, built-in message types, workflow and middleware.
> - If you want to use AutoGen with openai, choose `AutoGen.OpenAI`; similarly, choose `AutoGen.LMStudio` or `AutoGen.SemanticKernel` if you want to use agents from LM Studio or semantic kernel.
> - If you just want the type-safe source generation for function call and don't want any other features, which even include AutoGen's abstraction, choose `AutoGen.SourceGenerator`.

Then, install the package using the following command:

```bash
dotnet add package AUTOGEN_PACKAGES
```

### Consume nightly build

To consume the nightly build, you can add one of the following feeds to your `NuGet.config` or global nuget config:

> - [![Static Badge](https://img.shields.io/badge/azure_devops-grey?style=flat)](https://dev.azure.com/AGPublish/AGPublic/_artifacts/feed/AutoGen-Nightly) : <https://pkgs.dev.azure.com/AGPublish/AGPublic/_packaging/AutoGen-Nightly/nuget/v3/index.json>

To add a local `NuGet.config`, create a file named `NuGet.config` in the root of your project and add the following content:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="AutoGen" value="$(FEED_URL)" /> <!-- replace $(FEED_URL) with the feed url -->
    <!-- other feeds -->
  </packageSources>
  <disabledPackageSources />
</configuration>
```

To add the feed to your global nuget config, run the following command in your terminal:

```bash
dotnet nuget add source FEED_URL --name AutoGen

# dotnet-tools contains the Microsoft.DotNet.Interactive.VisualStudio package, which is used by AutoGen.DotnetInteractive
dotnet nuget add source https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json --name dotnet-tools
```

Once you have added the feed, you can install the nightly-build package using the following command:

```bash
dotnet add package AUTOGEN_PACKAGES VERSION
```
GitHub
autogen
autogen/dotnet/website/articles/Function-call-overview.md
autogen
## Overview of function call

In some LLM models, you can provide a list of function definitions to the model. A function definition is essentially an OpenAPI schema object that describes the function, its parameters and return value. These function definitions tell the model what "functions" are available to resolve the user's request. This feature greatly extends the capability of LLM models by enabling them to "execute" arbitrary functions, as long as the function can be described as a function definition.

Below is an example of a function definition for getting the weather report for a city:

> [!NOTE]
> To use function call, the underlying LLM model must support function call as well for the best experience.
> The model used in the example below is `gpt-3.5-turbo-0613`.

```json
{
    "name": "GetWeather",
    "description": "Get the weather report for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "The city name"
            }
        },
        "required": ["city"]
    }
}
```

When the model receives a message, it will intelligently decide whether to use function call or not based on the message received. If the model decides to use function call, it will generate a function call which can be used to invoke the actual function. The function call is a json object which contains the function name and its arguments. Below is an example of a function call object for getting the weather report for Seattle:

```json
{
    "name": "GetWeather",
    "arguments": {
        "city": "Seattle"
    }
}
```

When the function call is returned to the caller, it can be used to invoke the actual function to get the weather report for Seattle.

### Create a type-safe function contract and function call wrapper using AutoGen.SourceGenerator

AutoGen provides a source generator to ease the burden of manually crafting the function contract and function call wrapper from a function. To use this feature, simply add the `AutoGen.SourceGenerator` package to your project and decorate your function with the `Function` attribute. For more information, please check out [Create type-safe function](Create-type-safe-function-call.md).

### Use function call in an agent

AutoGen provides first-class support for function call in its agent story. Usually there are three ways to enable function call in an agent:

- Pass function definitions when creating an agent. This only works if the agent supports passing function definitions via its constructor.
- Pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent.
- Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls.

For more information, please check out [Use function call in an agent](Use-function-call.md).
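To connect the JSON definition above with the source generator described above, here is a minimal sketch of a `[Function]`-annotated method whose generated contract would resemble that definition; the class and method names are illustrative:

```csharp
using AutoGen.Core;

public partial class WeatherFunctions
{
    /// <summary>
    /// Get the weather report for a city
    /// </summary>
    /// <param name="city">The city name</param>
    [Function]
    public Task<string> GetWeather(string city)
        => Task.FromResult($"The weather report for {city} is sunny."); // stand-in implementation
}
```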
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-support-more-messages.md
autogen
By default, @AutoGen.OpenAI.OpenAIChatAgent only supports the @AutoGen.Core.IMessage<T> type, where `T` is the original request or response message from `Azure.AI.OpenAI`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage and so on, you can register the agent with @AutoGen.OpenAI.OpenAIChatRequestMessageConnector. The @AutoGen.OpenAI.OpenAIChatRequestMessageConnector will convert messages from AutoGen built-in message types to `Azure.AI.OpenAI.ChatRequestMessage` and vice versa.

Import the required namespaces:

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)]

[!code-csharp[](../../samples/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=register_openai_chat_message_connector)]
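A minimal sketch of the registration; the constructor argument names are from memory, and the key part is the `RegisterMessageConnector` call at the end:

```csharp
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using Azure.AI.OpenAI;

var openAIClient = new OpenAIClient("<your-api-key>"); // placeholder key
var agent = new OpenAIChatAgent(
    openAIClient: openAIClient,
    name: "assistant",
    modelName: "gpt-3.5-turbo")
    .RegisterMessageConnector(); // attaches OpenAIChatRequestMessageConnector

// built-in message types can now be sent directly
var reply = await agent.SendAsync("Hello!");
```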
GitHub
autogen
autogen/dotnet/website/articles/MistralChatAgent-count-token-usage.md
autogen
The following example shows how to create a `MistralAITokenCounterMiddleware` @AutoGen.Core.IMiddleware and count the token usage when chatting with @AutoGen.Mistral.MistralClientAgent.

### Overview

To collect the token usage for the entire chat session, one easy solution is to simply collect all the responses from the agent and sum up the token usage of each response. To collect all the agent responses, we can create a middleware which saves all responses to a list and register it with the agent. To get the token usage information for each response, because in this example we are using @AutoGen.Mistral.MistralClientAgent, we can simply get the token usage from the response object.

> [!NOTE]
> You can find the complete example in [Example14_MistralClientAgent_TokenCount](https://github.com/microsoft/autogen/tree/main/dotnet/samples/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs).

- Step 1: Add the using statements.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs?name=using_statements)]

- Step 2: Create a `MistralAITokenCounterMiddleware` class which implements @AutoGen.Core.IMiddleware. This middleware will collect all the responses from the agent and sum up the token usage of each response.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs?name=token_counter_middleware)]

- Step 3: Create a `MistralClientAgent`.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs?name=create_mistral_client_agent)]

- Step 4: Register the `MistralAITokenCounterMiddleware` with the `MistralClientAgent`. Note that the order of middlewares matters. The token counter middleware needs to be registered before `mistralMessageConnector` because it collects responses only when the responding message type is `IMessage<ChatCompletionResponse>`, while the `mistralMessageConnector` will convert `IMessage<ChatCompletionResponse>` to one of @AutoGen.Core.TextMessage, @AutoGen.Core.ToolCallMessage or @AutoGen.Core.ToolCallResultMessage.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs?name=register_middleware)]

- Step 5: Chat with the `MistralClientAgent` and get the token usage information from the response object.

[!code-csharp[](../../samples/AutoGen.BasicSamples/Example14_MistralClientAgent_TokenCount.cs?name=chat_with_agent)]

### Output

When running the example, the completion token count will be printed to the console.

```bash
Completion token count: 1408 # might be different based on the response
```
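For reference, a rough sketch of what the token counter middleware in Step 2 might look like; the property names on `ChatCompletionResponse` (`Usage`, `CompletionTokens`) are assumptions to verify against the sample:

```csharp
using AutoGen.Core;
using AutoGen.Mistral;

public class MistralAITokenCounterMiddleware : IMiddleware
{
    private readonly List<ChatCompletionResponse> responses = new();

    public string? Name => nameof(MistralAITokenCounterMiddleware);

    public async Task<IMessage> InvokeAsync(
        MiddlewareContext context,
        IAgent agent,
        CancellationToken cancellationToken = default)
    {
        var reply = await agent.GenerateReplyAsync(context.Messages, context.Options, cancellationToken);

        // only the raw IMessage<ChatCompletionResponse> replies carry usage data,
        // which is why this middleware must run before the message connector
        if (reply is IMessage<ChatCompletionResponse> message && message.Content is not null)
        {
            responses.Add(message.Content);
        }

        return reply;
    }

    public int GetCompletionTokenCount()
        => responses.Sum(r => r.Usage?.CompletionTokens ?? 0);
}
```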