GitHub | autogen | autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelAgent-support-more-messages.md | autogen | @AutoGen.SemanticKernel.SemanticKernelAgent only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage and @AutoGen.Core.MultiModalMessage, you can register the agent with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. The @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector converts messages from AutoGen built-in message types to `ChatMessageContent` and vice versa. > [!NOTE] > At the current stage, @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector only supports conversion for the following built-in @AutoGen.Core.IMessage types: > - @AutoGen.Core.TextMessage > - @AutoGen.Core.ImageMessage > - @AutoGen.Core.MultiModalMessage > > Function call message types like @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage are not supported yet. [!code-csharp[](../../../samples/AutoGen.BasicSamples/CodeSnippet/SemanticKernelCodeSnippet.cs?name=register_semantic_kernel_chat_message_content_connector)] |
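As a quick illustration of the registration pattern described above — a sketch only; the model id, key handling, and the exact extension method names are assumptions, so treat the linked snippet as authoritative:

```csharp
using AutoGen.Core;
using AutoGen.SemanticKernel;
using AutoGen.SemanticKernel.Extension;
using Microsoft.SemanticKernel;

// Build a kernel; model id and API key are placeholders.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "YOUR_API_KEY")
    .Build();

// Wrap the kernel in a SemanticKernelAgent and register the connector so
// AutoGen built-in message types (TextMessage, ImageMessage,
// MultiModalMessage) are converted to ChatMessageContent and back.
var agent = new SemanticKernelAgent(kernel, name: "assistant")
    .RegisterMessageConnector();

// The agent can now accept a plain text message.
var reply = await agent.SendAsync("Hello");
```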
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.SemanticKernel/AutoGen-SemanticKernel-Overview.md | autogen | ## AutoGen.SemanticKernel Overview AutoGen.SemanticKernel is a package that provides seamless integration with Semantic Kernel. It provides the following agents: - @AutoGen.SemanticKernel.SemanticKernelAgent: A slim wrapper agent over `Kernel` that only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types, register the agent with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. - @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent: A slim wrapper agent over `Microsoft.SemanticKernel.Agents.ChatCompletionAgent`. AutoGen.SemanticKernel also provides the following middleware: - @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector: A connector that converts messages from AutoGen built-in message types to `ChatMessageContent` and vice versa. At the current stage, it only supports conversion between @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage and @AutoGen.Core.MultiModalMessage. Function call message types like @AutoGen.Core.ToolCallMessage and @AutoGen.Core.ToolCallResultMessage are not supported yet. - @AutoGen.SemanticKernel.KernelPluginMiddleware: A middleware that allows you to use Semantic Kernel plugins in other AutoGen agents like @AutoGen.OpenAI.OpenAIChatAgent. ### Get started with AutoGen.SemanticKernel To get started with AutoGen.SemanticKernel, first follow the [installation guide](../Installation.md) to make sure you add the AutoGen feed correctly. Then add the `AutoGen.SemanticKernel` package to your project file. ```xml <ItemGroup> <PackageReference Include="AutoGen.SemanticKernel" Version="AUTOGEN_VERSION" /> </ItemGroup> ``` |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.SemanticKernel/Use-kernel-plugin-in-other-agents.md | autogen | In Semantic Kernel, a kernel plugin is a collection of kernel functions that can be invoked during LLM calls. Semantic Kernel provides a list of built-in plugins, like [core plugins](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Plugins/Plugins.Core), the [web search plugin](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Plugins/Plugins.Web) and many more. You can also create your own plugins and use them in Semantic Kernel. Kernel plugins greatly extend the capabilities of Semantic Kernel and can be used to perform various tasks like web search, image search, text summarization, etc. `AutoGen.SemanticKernel` provides a middleware called @AutoGen.SemanticKernel.KernelPluginMiddleware that allows you to use Semantic Kernel plugins in other AutoGen agents like @AutoGen.OpenAI.OpenAIChatAgent. The following example shows how to define a simple plugin with a single `GetWeather` function and use it in @AutoGen.OpenAI.OpenAIChatAgent. > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs) ### Step 1: add using statement [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Using)] ### Step 2: create plugin In this step, we create a simple plugin with a single `GetWeather` function that takes a location as input and returns the weather information for that location. [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Create_plugin)] ### Step 3: create OpenAIChatAgent and use the plugin In this step, we first create a @AutoGen.SemanticKernel.KernelPluginMiddleware and register the previous plugin with it.
The `KernelPluginMiddleware` will load the plugin and make the functions available for use in other agents. We then create an @AutoGen.OpenAI.OpenAIChatAgent and register it with the `KernelPluginMiddleware`. [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Use_plugin)] ### Step 4: chat with OpenAIChatAgent In this final step, we start the chat with the @AutoGen.OpenAI.OpenAIChatAgent by asking about the weather in Seattle. The `OpenAIChatAgent` will use the `GetWeather` function from the plugin to get the weather information for Seattle. [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Use_Kernel_Functions_With_Other_Agent.cs?name=Send_message)] |
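For orientation, the wiring in steps 2 and 3 looks roughly like the sketch below. The `WeatherPlugin` type is a hypothetical placeholder, and the agent constructor arguments are elided; refer to the linked sample for the exact code:

```csharp
using System.ComponentModel;
using AutoGen.Core;
using AutoGen.SemanticKernel;
using Microsoft.SemanticKernel;

// A minimal plugin with a single GetWeather kernel function.
public class WeatherPlugin
{
    [KernelFunction]
    [Description("Get the weather for a location.")]
    public string GetWeather([Description("city name")] string location)
        => $"The weather in {location} is sunny.";
}

// Import the plugin into a kernel, wrap it in KernelPluginMiddleware,
// and register the middleware on an agent (constructor args elided).
var kernel = Kernel.CreateBuilder().Build();
var plugin = kernel.ImportPluginFromType<WeatherPlugin>();
var pluginMiddleware = new KernelPluginMiddleware(kernel, plugin);
// var agent = new OpenAIChatAgent(/* chat client args */)
//     .RegisterMessageConnector()
//     .RegisterMiddleware(pluginMiddleware);
```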
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelAgent-simple-chat.md | autogen | You can chat with @AutoGen.SemanticKernel.SemanticKernelAgent using both streaming and non-streaming methods, with the native `ChatMessageContent` type via `IMessage<ChatMessageContent>`. The following example shows how to create an @AutoGen.SemanticKernel.SemanticKernelAgent and chat with it using the non-streaming method: [!code-csharp[](../../../samples/AutoGen.BasicSamples/CodeSnippet/SemanticKernelCodeSnippet.cs?name=create_semantic_kernel_agent)] @AutoGen.SemanticKernel.SemanticKernelAgent also supports streaming chat via @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*. [!code-csharp[](../../../samples/AutoGen.BasicSamples/CodeSnippet/SemanticKernelCodeSnippet.cs?name=create_semantic_kernel_agent_streaming)] |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelChatAgent-simple-chat.md | autogen | `AutoGen.SemanticKernel` provides built-in support for `ChatCompletionAgent` via @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent. By default the @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, you can register the agent with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. The @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector will convert the message from AutoGen built-in message types to `ChatMessageContent` and vice versa. The following step-by-step example shows how to create an @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent and chat with it: > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs). ### Step 1: add using statement [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Using)] ### Step 2: create kernel [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Create_Kernel)] ### Step 3: create ChatCompletionAgent [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Create_ChatCompletionAgent)] ### Step 4: create @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent In this step, we create an @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent and register it with @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector. 
The @AutoGen.SemanticKernel.SemanticKernelChatMessageContentConnector will convert the message from AutoGen built-in message types to `ChatMessageContent` and vice versa. [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Create_SemanticKernelChatCompletionAgent)] ### Step 5: chat with @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent [!code-csharp[](../../../samples/AutoGen.SemanticKernel.Sample/Create_Semantic_Kernel_Chat_Agent.cs?name=Send_Message)] |
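Putting steps 2 through 5 together, the shape of the code is roughly as follows — a sketch under assumptions: the model id, agent name, and the exact `SemanticKernelChatCompletionAgent` constructor may differ from the linked sample:

```csharp
using AutoGen.Core;
using AutoGen.SemanticKernel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;

// Step 2: create a kernel (model id and key are placeholders).
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "YOUR_API_KEY")
    .Build();

// Step 3: create the Semantic Kernel ChatCompletionAgent.
var chatAgent = new ChatCompletionAgent
{
    Kernel = kernel,
    Name = "assistant",
    Instructions = "You are a helpful AI assistant.",
};

// Step 4: wrap it and register the message content connector so AutoGen
// built-in message types are converted to ChatMessageContent and back.
var agent = new SemanticKernelChatCompletionAgent(chatAgent)
    .RegisterMiddleware(new SemanticKernelChatMessageContentConnector());

// Step 5: chat using an AutoGen built-in message type.
var reply = await agent.SendAsync("Hey, tell me a joke");
```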
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Gemini/Function-call-with-gemini.md | autogen | This example shows how to use @AutoGen.Gemini.GeminiChatAgent to make function calls. This example is modified from the [gemini-api function call example](https://ai.google.dev/gemini-api/docs/function-calling). To run this example, you need to have a project on Google Cloud with access to the Vertex AI API. For more information, please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs). > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.Gemini.Sample/Function_Call_With_Gemini.cs) ### Step 1: Install AutoGen.Gemini and AutoGen.SourceGenerator First, install the AutoGen.Gemini and AutoGen.SourceGenerator packages using the following commands: ```bash dotnet add package AutoGen.Gemini dotnet add package AutoGen.SourceGenerator ``` The AutoGen.SourceGenerator package is required to generate the @AutoGen.Core.FunctionContract. For more information, please refer to [Create-type-safe-function-call](../Create-type-safe-function-call.md). ### Step 2: Add using statement [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Using)] ### Step 3: Create `MovieFunction` [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=MovieFunction)] ### Step 4: Create a Gemini agent [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Create_Gemini_Agent)] ### Step 5: Single turn function call [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Single_turn)] ### Step 6: Multi-turn function call [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Function_call_with_gemini.cs?name=Multi_turn)] |
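To give a feel for what a source-generated function looks like, here is a hypothetical sketch: a `[Function]`-annotated method in a partial class, from which AutoGen.SourceGenerator emits the corresponding @AutoGen.Core.FunctionContract and wrapper. The method and parameter names are illustrative, not the sample's actual code:

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

public partial class MovieFunction
{
    /// <summary>
    /// Find theaters showing a given movie near a location.
    /// </summary>
    /// <param name="location">city and state, e.g. San Francisco, CA</param>
    /// <param name="movie">movie title</param>
    [Function]
    public async Task<string> FindTheaters(string location, string movie)
    {
        // A real implementation would query a theater-listing API here.
        return $"Theaters near {location} showing {movie}: ...";
    }
}
```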
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Gemini/Overview.md | autogen | # AutoGen.Gemini Overview AutoGen.Gemini is a package that provides seamless integration with Google Gemini. It provides the following agent: - @AutoGen.Gemini.GeminiChatAgent: The agent that connects to Google Gemini or Vertex AI Gemini. It supports chat, multi-modal chat, and function calls. AutoGen.Gemini also provides the following middleware: - @AutoGen.Gemini.GeminiMessageConnector: The middleware that converts Gemini messages to AutoGen built-in message types. |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Gemini/Overview.md | autogen | ## Examples You can find more examples under the [gemini sample project](https://github.com/microsoft/autogen/tree/main/dotnet/samples/AutoGen.Gemini.Sample) |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Gemini/Image-chat-with-gemini.md | autogen | This example shows how to use @AutoGen.Gemini.GeminiChatAgent for image chat with Gemini model. To run this example, you need to have a project on Google Cloud with access to Vertex AI API. For more information please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs). > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs) ### Step 1: Install AutoGen.Gemini First, install the AutoGen.Gemini package using the following command: ```bash dotnet add package AutoGen.Gemini ``` ### Step 2: Add using statement [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs?name=Using)] ### Step 3: Create a Gemini agent [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs?name=Create_Gemini_Agent)] ### Step 4: Send image to Gemini [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Image_Chat_With_Vertex_Gemini.cs?name=Send_Image_Request)] |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Gemini/Chat-with-vertex-gemini.md | autogen | This example shows how to use @AutoGen.Gemini.GeminiChatAgent to connect to the Vertex AI Gemini API and chat with the Gemini model. To run this example, you need to have a project on Google Cloud with access to the Vertex AI API. For more information, please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs). > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs) > [!NOTE] > What's the difference between Google AI Gemini and Vertex AI Gemini? > > Gemini is a series of large language models developed by Google. You can use it either from the Google AI API or the Vertex AI API. If you are relatively new to Gemini and want to explore its features and build a prototype for your chatbot app, the Google AI APIs (with Google AI Studio) are a fast way to get started. As your app and idea mature and you'd like to leverage more MLOps tools that streamline the usage, deployment, and monitoring of models, you can move to Google Cloud Vertex AI, which provides the Gemini APIs along with many other features that help you productionize your app. ([reference](https://stackoverflow.com/questions/78007243/utilizing-gemini-through-vertex-ai-or-through-google-generative-ai)) ### Step 1: Install AutoGen.Gemini First, install the AutoGen.Gemini package using the following command: ```bash dotnet add package AutoGen.Gemini ``` ### Step 2: Add using statement [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs?name=Using)] ### Step 3: Create a Gemini agent [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs?name=Create_Gemini_Agent)] ### Step 4: Chat with Gemini [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Chat_With_Vertex_Gemini.cs?name=Chat_With_Vertex_Gemini)] |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Gemini/Chat-with-google-gemini.md | autogen | This example shows how to use @AutoGen.Gemini.GeminiChatAgent to connect to Google AI Gemini and chat with the Gemini model. To run this example, you need a Google AI Gemini API key. For how to get a Google Gemini API key, please refer to [Google Gemini](https://gemini.google.com/). > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs) > [!NOTE] > What's the difference between Google AI Gemini and Vertex AI Gemini? > > Gemini is a series of large language models developed by Google. You can use it either from the Google AI API or the Vertex AI API. If you are relatively new to Gemini and want to explore its features and build a prototype for your chatbot app, the Google AI APIs (with Google AI Studio) are a fast way to get started. As your app and idea mature and you'd like to leverage more MLOps tools that streamline the usage, deployment, and monitoring of models, you can move to Google Cloud Vertex AI, which provides the Gemini APIs along with many other features that help you productionize your app. ([reference](https://stackoverflow.com/questions/78007243/utilizing-gemini-through-vertex-ai-or-through-google-generative-ai)) ### Step 1: Install AutoGen.Gemini First, install the AutoGen.Gemini package using the following command: ```bash dotnet add package AutoGen.Gemini ``` ### Step 2: Add using statement [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs?name=Using)] ### Step 3: Create a Gemini agent [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs?name=Create_Gemini_Agent)] ### Step 4: Chat with Gemini [!code-csharp[](../../../samples/AutoGen.Gemini.Sample/Chat_With_Google_Gemini.cs?name=Chat_With_Google_Gemini)] |
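The agent creation in step 3 looks roughly like the sketch below; the parameter names and model id are assumptions based on the linked sample, not a verbatim copy:

```csharp
using System;
using AutoGen.Core;
using AutoGen.Gemini;

// Read the API key from the environment rather than hard-coding it.
var apiKey = Environment.GetEnvironmentVariable("GOOGLE_GEMINI_API_KEY")
    ?? throw new InvalidOperationException("GOOGLE_GEMINI_API_KEY is not set");

// Create the Gemini agent and register the message connector so replies
// arrive as AutoGen built-in message types.
var agent = new GeminiChatAgent(
        name: "gemini",
        model: "gemini-1.5-flash-001",
        apiKey: apiKey)
    .RegisterMessageConnector();

var reply = await agent.SendAsync("Can you tell me a joke?");
```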
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Ollama/Chat-with-llava.md | autogen | This sample shows how to use @AutoGen.Ollama.OllamaAgent to chat with the LLaVA model. To run this example, you need to have an Ollama server running and the `llava:latest` model installed. For how to set up an Ollama server, please refer to [Ollama](https://ollama.com/). > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs) ### Step 1: Install AutoGen.Ollama First, install the AutoGen.Ollama package using the following command: ```bash dotnet add package AutoGen.Ollama ``` For how to install from a nightly build, please refer to [Installation](../Installation.md). ### Step 2: Add using statement [!code-csharp[](../../../samples/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs?name=Using)] ### Step 3: Create @AutoGen.Ollama.OllamaAgent [!code-csharp[](../../../samples/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs?name=Create_Ollama_Agent)] ### Step 4: Start MultiModal Chat LLaVA is a multimodal model that supports both text and image inputs. In this step, we create an image message along with a question about the image. [!code-csharp[](../../../samples/AutoGen.Ollama.Sample/Chat_With_LLaVA.cs?name=Send_Message)] |
GitHub | autogen | autogen/dotnet/website/articles/AutoGen.Ollama/Chat-with-llama.md | autogen | This example shows how to use @AutoGen.Ollama.OllamaAgent to connect to an Ollama server and chat with the LLaMA model. To run this example, you need to have an Ollama server running and the `llama3:latest` model installed. For how to set up an Ollama server, please refer to [Ollama](https://ollama.com/). > [!NOTE] > You can find the complete sample code [here](https://github.com/microsoft/autogen/blob/main/dotnet/samples/AutoGen.Ollama.Sample/Chat_With_LLaMA.cs) ### Step 1: Install AutoGen.Ollama First, install the AutoGen.Ollama package using the following command: ```bash dotnet add package AutoGen.Ollama ``` For how to install from a nightly build, please refer to [Installation](../Installation.md). ### Step 2: Add using statement [!code-csharp[](../../../samples/AutoGen.Ollama.Sample/Chat_With_LLaMA.cs?name=Using)] ### Step 3: Create and chat with @AutoGen.Ollama.OllamaAgent In this step, we create an @AutoGen.Ollama.OllamaAgent and connect it to the Ollama server. [!code-csharp[](../../../samples/AutoGen.Ollama.Sample/Chat_With_LLaMA.cs?name=Create_Ollama_Agent)] |
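For orientation, creating and chatting with the agent looks roughly like this sketch (the constructor shape and the default port are assumptions; the linked sample is authoritative):

```csharp
using System;
using System.Net.Http;
using AutoGen.Core;
using AutoGen.Ollama;

// Assumes a local Ollama server on its default port with llama3 pulled.
using var httpClient = new HttpClient
{
    BaseAddress = new Uri("http://localhost:11434"),
};

var agent = new OllamaAgent(
    httpClient: httpClient,
    name: "ollama",
    modelName: "llama3:latest");

var reply = await agent.SendAsync("Why is the sky blue?");
```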
GitHub | autogen | autogen/dotnet/samples/Hello/README.md | autogen | # Multiproject App Host for HelloAgent This is a [.NET Aspire](https://learn.microsoft.com/en-us/dotnet/aspire/get-started/aspire-overview) App Host that starts up the HelloAgent project and the agents backend. Once the project starts up you will be able to view the telemetry and logs in the [Aspire Dashboard](https://learn.microsoft.com/en-us/dotnet/aspire/get-started/aspire-dashboard) using the link provided in the console. ```shell cd Hello.AppHost dotnet run ``` For more info see the HelloAgent [README](../HelloAgent/README.md). |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgentState/README.md | autogen | # AutoGen 0.4 .NET Hello World Sample This [sample](Program.cs) demonstrates how to create a simple .NET console application that listens for an event and then orchestrates a series of actions in response. |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgentState/README.md | autogen | ## Prerequisites To run this sample, you'll need [.NET 8.0](https://dotnet.microsoft.com/en-us/) or later. The [GitHub CLI](https://cli.github.com/) is also recommended. |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgentState/README.md | autogen | ## Instructions to run the sample ```bash # Clone the repository gh repo clone microsoft/autogen cd dotnet/samples/Hello dotnet run ``` |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgentState/README.md | autogen | ## Key Concepts This sample illustrates how to create your own agent that inherits from a base agent and listens for an event. It also shows how to use the SDK's App Runtime locally to start the agent and send messages. Flow Diagram: ```mermaid %%{init: {'theme':'forest'}}%% graph LR; A[Main] --> |"PublishEventAsync(NewMessage('World'))"| B{"Handle(NewMessageReceived item)"} B --> |"PublishEventAsync(Output('***Hello, World***'))"| C[ConsoleAgent] C --> D{"WriteConsole()"} B --> |"PublishEventAsync(ConversationClosed('Goodbye'))"| E{"Handle(ConversationClosed item)"} B --> |"PublishEventAsync(Output('***Goodbye***'))"| C E --> F{"Shutdown()"} ``` ### Writing Event Handlers The heart of an AutoGen application is its event handlers. Agents select a ```TopicSubscription``` to listen for events on a specific topic. When an event is received, the agent's event handler is called with the event data. Within that event handler you may optionally *emit* new events, which are then sent to the event bus for other agents to process. The EventTypes are declared as gRPC ProtoBuf messages that define the schema of the event. The default protos are available via the ```Microsoft.AutoGen.Abstractions;``` namespace and are defined in [autogen/protos](/autogen/protos). The EventTypes are registered in the agent's constructor using the ```IHandle``` interface.
```csharp [TopicSubscription("HelloAgents")] public class HelloAgent( IAgentContext context, [FromKeyedServices("EventTypes")] EventTypes typeRegistry) : ConsoleAgent( context, typeRegistry), ISayHello, IHandle<NewMessageReceived>, IHandle<ConversationClosed> { public async Task Handle(NewMessageReceived item) { var response = await SayHello(item.Message).ConfigureAwait(false); var evt = new Output { Message = response }.ToCloudEvent(this.AgentId.Key); await PublishEventAsync(evt).ConfigureAwait(false); var goodbye = new ConversationClosed { UserId = this.AgentId.Key, UserMessage = "Goodbye" }.ToCloudEvent(this.AgentId.Key); await PublishEventAsync(goodbye).ConfigureAwait(false); } } ``` ### Inheritance and Composition This sample also illustrates inheritance in AutoGen. The `HelloAgent` class inherits from `ConsoleAgent`, which is a base class that provides a `WriteConsole` method. ### Starting the Application Runtime AutoGen provides a flexible runtime ```Microsoft.AutoGen.Agents.App``` that can be started in a variety of ways. The `Program.cs` file demonstrates how to start the runtime locally and send a message to the agent all in one go using the ```App.PublishMessageAsync``` method. ```csharp // send a message to the agent var app = await App.PublishMessageAsync("HelloAgents", new NewMessageReceived { Message = "World" }, local: true); await App.RuntimeApp!.WaitForShutdownAsync(); await app.WaitForShutdownAsync(); ``` ### Sending Messages The set of possible Messages is defined in gRPC ProtoBuf specs. These are then turned into C# classes by the gRPC tools.
You can define your own Message types by creating a new .proto file in your project and including the gRPC tools in your ```.csproj``` file: ```proto syntax = "proto3"; package devteam; option csharp_namespace = "DevTeam.Shared"; message NewAsk { string org = 1; string repo = 2; string ask = 3; int64 issue_number = 4; } message ReadmeRequested { string org = 1; string repo = 2; int64 issue_number = 3; string ask = 4; } ``` ```xml <ItemGroup> <PackageReference Include="Google.Protobuf" /> <PackageReference Include="Grpc.Tools" PrivateAssets="All" /> <Protobuf Include="..\Protos\messages.proto" Link="Protos\messages.proto" /> </ItemGroup> ``` You can send messages using the [```Microsoft.AutoGen.Agents.AgentWorker``` class](autogen/dotnet/src/Microsoft.AutoGen/Agents/AgentWorker.cs). Messages are wrapped in [the CloudEvents specification](https://cloudevents.io) and sent to the event bus. ### Managing State There is a simple API for persisting agent state. ```csharp await Store(new AgentState { AgentId = this.AgentId, TextData = entry }).ConfigureAwait(false); ``` which can be read back using Read: ```csharp State = await Read<AgentState>(this.AgentId).ConfigureAwait(false); ``` |
GitHub | autogen | autogen/dotnet/samples/Hello/Backend/README.md | autogen | # Backend Example This example demonstrates how to create a simple backend service for the agent runtime using ASP.NET Core. To run it, simply run the following command in the terminal: ```bash dotnet run ``` Or you can run it from Visual Studio Code by pressing `F5`. |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgent/README.md | autogen | # AutoGen 0.4 .NET Hello World Sample This [sample](Program.cs) demonstrates how to create a simple .NET console application that listens for an event and then orchestrates a series of actions in response. |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgent/README.md | autogen | ## Prerequisites To run this sample, you'll need [.NET 8.0](https://dotnet.microsoft.com/en-us/) or later. The [GitHub CLI](https://cli.github.com/) is also recommended. |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgent/README.md | autogen | ## Instructions to run the sample ```bash # Clone the repository gh repo clone microsoft/autogen cd dotnet/samples/Hello dotnet run ``` |
GitHub | autogen | autogen/dotnet/samples/Hello/HelloAgent/README.md | autogen | ## Key Concepts This sample illustrates how to create your own agent that inherits from a base agent and listens for an event. It also shows how to use the SDK's App Runtime locally to start the agent and send messages. Flow Diagram: ```mermaid %%{init: {'theme':'forest'}}%% graph LR; A[Main] --> |"PublishEventAsync(NewMessage('World'))"| B{"Handle(NewMessageReceived item)"} B --> |"PublishEventAsync(Output('***Hello, World***'))"| C[ConsoleAgent] C --> D{"WriteConsole()"} B --> |"PublishEventAsync(ConversationClosed('Goodbye'))"| E{"Handle(ConversationClosed item)"} B --> |"PublishEventAsync(Output('***Goodbye***'))"| C E --> F{"Shutdown()"} ``` ### Writing Event Handlers The heart of an AutoGen application is its event handlers. Agents select a ```TopicSubscription``` to listen for events on a specific topic. When an event is received, the agent's event handler is called with the event data. Within that event handler you may optionally *emit* new events, which are then sent to the event bus for other agents to process. The EventTypes are declared as gRPC ProtoBuf messages that define the schema of the event. The default protos are available via the ```Microsoft.AutoGen.Abstractions;``` namespace and are defined in [autogen/protos](/autogen/protos). The EventTypes are registered in the agent's constructor using the ```IHandle``` interface.
```csharp [TopicSubscription("HelloAgents")] public class HelloAgent( IAgentContext context, [FromKeyedServices("EventTypes")] EventTypes typeRegistry) : ConsoleAgent( context, typeRegistry), ISayHello, IHandle<NewMessageReceived>, IHandle<ConversationClosed> { public async Task Handle(NewMessageReceived item) { var response = await SayHello(item.Message).ConfigureAwait(false); var evt = new Output { Message = response }.ToCloudEvent(this.AgentId.Key); await PublishEventAsync(evt).ConfigureAwait(false); var goodbye = new ConversationClosed { UserId = this.AgentId.Key, UserMessage = "Goodbye" }.ToCloudEvent(this.AgentId.Key); await PublishEventAsync(goodbye).ConfigureAwait(false); } } ``` ### Inheritance and Composition This sample also illustrates inheritance in AutoGen. The `HelloAgent` class inherits from `ConsoleAgent`, which is a base class that provides a `WriteConsole` method. ### Starting the Application Runtime AutoGen provides a flexible runtime ```Microsoft.AutoGen.Agents.App``` that can be started in a variety of ways. The `Program.cs` file demonstrates how to start the runtime locally and send a message to the agent all in one go using the ```App.PublishMessageAsync``` method. ```csharp // send a message to the agent var app = await App.PublishMessageAsync("HelloAgents", new NewMessageReceived { Message = "World" }, local: true); await App.RuntimeApp!.WaitForShutdownAsync(); await app.WaitForShutdownAsync(); ``` ### Sending Messages The set of possible Messages is defined in gRPC ProtoBuf specs. These are then turned into C# classes by the gRPC tools.
You can define your own Message types by creating a new .proto file in your project and including the gRPC tools in your ```.csproj``` file: ```proto syntax = "proto3"; package devteam; option csharp_namespace = "DevTeam.Shared"; message NewAsk { string org = 1; string repo = 2; string ask = 3; int64 issue_number = 4; } message ReadmeRequested { string org = 1; string repo = 2; int64 issue_number = 3; string ask = 4; } ``` ```xml <ItemGroup> <PackageReference Include="Google.Protobuf" /> <PackageReference Include="Grpc.Tools" PrivateAssets="All" /> <Protobuf Include="..\Protos\messages.proto" Link="Protos\messages.proto" /> </ItemGroup> ``` You can send messages using the [```Microsoft.AutoGen.Agents.AgentWorker``` class](autogen/dotnet/src/Microsoft.AutoGen/Agents/AgentWorker.cs). Messages are wrapped in [the CloudEvents specification](https://cloudevents.io) and sent to the event bus. |
GitHub | autogen | autogen/dotnet/samples/dev-team/README.md | autogen | # GitHub Dev Team with AI Agents Build a Dev Team using event-driven agents. This project is an experiment and is not intended to be used in production. |
GitHub | autogen | autogen/dotnet/samples/dev-team/README.md | autogen | ## Background From a natural language specification, this project sets out to integrate a team of AI agents into your team’s dev process, either for discrete tasks on an existing repo (unit tests, pipeline expansions, PRs for specific intents), developing a new feature, or even building an application from scratch. Starting from an existing repo and a broad statement of intent, you work with multiple AI agents, each of which has a different emphasis: from architecture, to task breakdown, to plans for individual tasks, to code output, code review, efficiency, documentation, build, writing tests, setting up pipelines, deployment, integration tests, and then validation. The system presents a view that facilitates chain-of-thought coordination across multiple trees of reasoning with the dev team agents. |
GitHub | autogen | autogen/dotnet/samples/dev-team/README.md | autogen | ## Get it running Check [the getting started guide](./docs/github-flow-getting-started.md). |
GitHub | autogen | autogen/dotnet/samples/dev-team/README.md | autogen | ## Demo https://github.com/microsoft/azure-openai-dev-skills-orchestrator/assets/10728102/cafb1546-69ab-4c27-aaf5-1968313d637f |
GitHub | autogen | autogen/dotnet/samples/dev-team/README.md | autogen | ## Solution overview |
GitHub | autogen | autogen/dotnet/samples/dev-team/README.md | autogen | ## How it works * The user begins by creating an issue and stating what they want to accomplish, in natural language, as simple or as detailed as needed. * The product manager agent responds with a Readme, which can be iterated upon. * The user approves the readme or gives feedback via issue comments. * Once the readme is approved, the user closes the issue and the Readme is committed to a PR. * The developer lead agent responds with a decomposed plan for development, which can also be iterated upon. * The user approves the plan or gives feedback via issue comments. * Once the plan is approved, the user closes the issue and the plan is used to break down the task to different developer agents. * Developer agents respond with code, which can be iterated upon. * The user approves the code or gives feedback via issue comments. * Once the code is approved, the user closes the issue and the code is committed to a PR. ```mermaid graph TD; NEA([NewAsk event]) -->|Hubber| NEA1[Creation of PM issue, DevLead issue, and new branch]; RR([ReadmeRequested event]) -->|ProductManager| PM1[Generation of new README]; NEA1 --> RR; PM1 --> RG([ReadmeGenerated event]); RG -->|Hubber| RC[Post the readme as a new comment on the issue]; RC --> RCC([ReadmeChainClosed event]); RCC -->|ProductManager| RCR([ReadmeCreated event]); RCR --> |AzureGenie| RES[Store Readme in blob storage]; RES --> RES2([ReadmeStored event]); RES2 --> |Hubber| REC[Readme committed to branch and new PR created]; DPR([DevPlanRequested event]) -->|DeveloperLead| DPG[Generation of new development plan]; NEA1 --> DPR; DPG --> DPGE([DevPlanGenerated event]); DPGE -->|Hubber| DPGEC[Posting the plan as a new comment on the issue]; DPGEC --> DPCC([DevPlanChainClosed event]); DPCC -->|DeveloperLead| DPCE([DevPlanCreated event]); DPCE --> |Hubber| DPC[Creates a Dev issue for each subtask]; DPC([CodeGenerationRequested event]) -->|Developer| CG[Generation of new code]; CG --> CGE([CodeGenerated event]); CGE -->|Hubber| CGC[Posting the code as a new comment on the issue]; CGC --> CCCE([CodeChainClosed event]); CCCE -->|Developer| CCE([CodeCreated event]); CCE --> |AzureGenie| CS[Store code in blob storage and schedule a run in the sandbox]; CS --> SRC([SandboxRunCreated event]); SRC --> |Sandbox| SRM[Check every minute if the run finished]; SRM --> SRF([SandboxRunFinished event]); SRF --> |Hubber| SRCC[Code files committed to branch]; ``` |
GitHub | autogen | autogen/dotnet/samples/dev-team/seed-memory/README.md | autogen | # TODO |
GitHub | autogen | autogen/dotnet/samples/dev-team/docs/github-flow-getting-started.md | autogen | ## Prerequisites - Access to gpt-3.5-turbo or preferably gpt-4 - [Get access here](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview#how-do-i-get-access-to-azure-openai) - [Setup a GitHub app](#how-do-i-setup-the-github-app) - [Install the GitHub app](https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app) - [Provision the azure resources](#how-do-i-deploy-the-azure-bits) - [Create labels for the dev team skills](#which-labels-should-i-create) ### How do I setup the Github app? - [Register a GitHub app](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app), with the options listed below: - Give your App a name and add a description - Homepage URL: Can be anything (Example: repository URL) - Add a dummy value for the webhook url, we'll come back to this setting - Enter a webhook secret, which you'll need later on when filling in the `WebhookSecret` property in the `appsettings.json` file - Set up the following permissions - Repository - Contents - read and write - Issues - read and write - Metadata - read only - Pull requests - read and write - Subscribe to the following events: - Issues - Issue comment - Allow this app to be installed by any user or organization - After the app is created, generate a private key; we'll use it later for authentication to GitHub from the app ### Which labels should I create? In order for us to know which skill and persona we need to talk with, we are using Labels in GitHub Issues. The default set of skills and personas is as follows: - PM.Readme - Do.It - DevLead.Plan - Developer.Implement Add them to your repository (they are not there by default). Once you start adding your own skills, just remember to add the corresponding label to your repository. |
GitHub | autogen | autogen/dotnet/samples/dev-team/docs/github-flow-getting-started.md | autogen | How do I run this locally? Codespaces are preconfigured for this repo. For Codespaces there is a free tier for individual accounts. See: https://github.com/pricing Start by creating a codespace: https://docs.github.com/en/codespaces/developing-in-a-codespace/creating-a-codespace-for-a-repository  In this sample's folder there are two files called appsettings.azure.template.json and appsettings.local.template.json. If you run this demo locally, use the local template and if you want to run it within Azure use the Azure template. Rename the selected file to appsettings.json and fill out the config values within the file. ### GitHubOptions For the GitHubOptions section, you'll need to fill in the following values: - **AppKey (PrivateKey)**: This is a key generated while creating a GitHub App. If you haven't saved it during creation, you'll need to generate a new one. Go to the settings of your GitHub app, scroll down to "Private keys" and click on "Generate a new private key". It will download a .pem file that contains your App Key. Then copy and paste all the **-----BEGIN RSA PRIVATE KEY----- your key -----END RSA PRIVATE KEY-----** content here, in one line. - **AppId**: This can be found on the same page where you created your app. Go to the settings of your GitHub app and you can see the App ID at the top of the page. - **InstallationId**: Go to your GitHub app installation and take note of the number (long type) at the end of the URL (which should be in the following format: https://github.com/settings/installations/installation-id). - **WebhookSecret**: This is a value that you set when you create your app. In the app settings, go to the "Webhooks" section. Here you can find the "Secret" field where you can set your Webhook Secret. 
### AzureOptions The following fields are required and need to be filled in: - **SubscriptionId**: The id of the subscription you want to work on. - **Location** - **ContainerInstancesResourceGroup**: The name of the resource group where container instances will be deployed. - **FilesAccountName**: Azure Storage Account name. - **FilesShareName**: The name of the File Share. - **FilesAccountKey**: The File Account key. - **SandboxImage** In the Explorer tab in VS Code, find the Solution explorer, right-click on the `gh-flow` project and click Debug -> Start new instance  We'll need to expose the running application to the GH App webhooks, for example using [DevTunnels](https://learn.microsoft.com/en-us/azure/developer/dev-tunnels/overview), but other tools like ngrok also work. The following commands will create a persistent tunnel, so we only need to do this once: ```bash TUNNEL_NAME=_name_your_tunnel_here_ devtunnel user login devtunnel create -a $TUNNEL_NAME devtunnel port create -p 5244 $TUNNEL_NAME ``` and once we have the tunnel created we can just start forwarding with the following command: ```bash devtunnel host $TUNNEL_NAME ``` Copy the local address (it will look something like https://your_tunnel_name.euw.devtunnels.ms) and append `/api/github/webhooks` at the end. Using this value, update the Github App's webhook URL and you are ready to go! Before you go and have the best of times, there is one last thing left to do: [load the WAF into the vector DB](#load-the-waf-into-qdrant) Also, since this project is relying on Orleans for the Agents implementation, there is a [dashboard](https://github.com/OrleansContrib/OrleansDashboard) available at https://your_tunnel_name.euw.devtunnels.ms/dashboard, with useful metrics and stats related to the running Agents. |
GitHub | autogen | autogen/dotnet/samples/dev-team/docs/github-flow-getting-started.md | autogen | How do I deploy the azure bits? This sample is set up to use [azd](https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/overview) to work with the Azure bits. `azd` is installed in the codespace. Let's start by logging in to Azure using ```bash azd auth login ``` After we've logged in, we need to create a new environment and provision the azure bits. ```bash ENVIRONMENT=_name_of_your_env_ azd env new $ENVIRONMENT azd provision -e $ENVIRONMENT ``` After the provisioning is done, you can inspect the outputs with the following command ```bash azd env get-values -e $ENVIRONMENT ``` As the last step, we also need to [load the WAF into the vector DB](#load-the-waf-into-qdrant) ### Load the WAF into Qdrant If you are running the app locally, we have [Qdrant](https://qdrant.tech/) set up in the Codespace and if you are running in Azure, Qdrant is deployed to ACA. The loader is a project in the `samples` folder, called `seed-memory`. We need to fill in the `appsettings.json` (after renaming `appsettings.template.json` to `appsettings.json`) file in the `config` folder with the OpenAI details and the Qdrant endpoint, then just run the loader with `dotnet run` and you are ready to go. ### WIP Local setup ``` dotnet user-secrets set "OpenAI:Key" "your_key" dotnet user-secrets set "OpenAI:Endpoint" "https://your_endpoint.openai.azure.com/" dotnet user-secrets set "Github:AppId" "gh_app_id" dotnet user-secrets set "Github:InstallationId" "gh_inst_id" dotnet user-secrets set "Github:WebhookSecret" "webhook_secret" dotnet user-secrets set "Github:AppKey" "gh_app_key" ``` |
GitHub | autogen | autogen/docs/design/05 - Services.md | autogen | # AutoGen Services |
GitHub | autogen | autogen/docs/design/05 - Services.md | autogen | Overview Each AutoGen agent system has one or more Agent Workers and a set of services for managing/supporting the agents. The services and workers can all be hosted in the same process or in a distributed system. When in the same process, communication and event delivery are in-memory. When distributed, workers communicate with the service over gRPC. In all cases, events are packaged as CloudEvents. There are multiple options for the backend services: - In-Memory: the Agent Workers and Services are all hosted in the same process and communicate over in-memory channels. Available for Python and .NET. - Python only: Agent workers communicate with a Python-hosted service that implements an in-memory message bus and agent registry. - Microsoft Orleans: a distributed actor system that can host the services and workers, enables distributed state with persistent storage, can leverage multiple event bus types, and cross-language agent communication. - *Roadmap: support for other distributed systems such as Dapr or Akka.* The Services in the system include: - Worker: Hosts the Agents and is a client to the Gateway - Gateway: -- RPC gateway for the other services' APIs -- Provides an RPC bridge between the workers and the Event Bus -- Message Session state (track message queues/delivery) - Registry: keeps track of the {agents:agent types}:{Subscription/Topics} in the system and which events they can handle -- *Roadmap: add lookup api in gateway* - AgentState: persistent state for agents - Routing: delivers events to agents based on their subscriptions+topics -- *Roadmap: add subscription management APIs* - *Roadmap: Management APIs for the Agent System* - *Roadmap: Scheduling: manages placement of agents* - *Roadmap: Discovery: allows discovery of agents and services* |
GitHub | autogen | autogen/docs/design/04 - Agent and Topic ID Specs.md | autogen | # Agent and Topic ID Specs This document describes the structure, constraints, and behavior of Agent IDs and Topic IDs. |
GitHub | autogen | autogen/docs/design/04 - Agent and Topic ID Specs.md | autogen | Agent ID ### Required Attributes #### type - Type: `string` - Description: The agent type is not an agent class. It associates an agent with a specific factory function, which produces instances of agents of the same agent `type`. For example, different factory functions can produce the same agent class but with different constructor parameters. - Constraints: UTF8 and only contain alphanumeric letters (a-z) and (0-9), or underscores (\_). A valid identifier cannot start with a number, or contain any spaces. - Examples: - `code_reviewer` - `WebSurfer` - `UserProxy` #### key - Type: `string` - Description: The agent key is an instance identifier for the given agent `type`. - Constraints: UTF8 and only contain characters between (inclusive) ascii 32 (space) and 126 (~). - Examples: - `default` - A memory address - A UUID string |
GitHub | autogen | autogen/docs/design/04 - Agent and Topic ID Specs.md | autogen | Topic ID ### Required Attributes #### type - Type: `string` - Description: Topic type is usually defined by application code to mark the type of messages the topic is for. - Constraints: UTF8 and only contain alphanumeric letters (a-z) and (0-9), or underscores (\_). A valid identifier cannot start with a number, or contain any spaces. - Examples: - `GitHub_Issues` #### source - Type: `string` - Description: Topic source is the unique identifier for a topic within a topic type. It is typically defined by application data. - Constraints: UTF8 and only contain characters between (inclusive) ascii 32 (space) and 126 (~). - Examples: - `github.com/{repo_name}/issues/{issue_number}` |
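The identifier constraints above can be checked mechanically. A minimal sketch in Python, assuming hypothetical helper names (these are illustrative and not part of AutoGen's API):

```python
import re

# The `type` constraint: alphanumeric letters or underscores, and a valid
# identifier cannot start with a number or contain spaces.
TYPE_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_type(value: str) -> bool:
    return bool(TYPE_RE.match(value))

def is_valid_key(value: str) -> bool:
    # The `key`/`source` constraint: every character must lie in the
    # printable ASCII range, 32 (space) through 126 (~) inclusive.
    return all(32 <= ord(ch) <= 126 for ch in value)

print(is_valid_type("code_reviewer"))  # True
print(is_valid_type("2fast"))          # False: starts with a digit
print(is_valid_key("github.com/example/repo/issues/42"))  # True
```

Note that the looser `key`/`source` rule admits URIs and UUID strings, while the stricter `type` rule matches the identifier examples such as `code_reviewer` and `WebSurfer`.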
GitHub | autogen | autogen/docs/design/03 - Agent Worker Protocol.md | autogen | # Agent Worker Protocol |
GitHub | autogen | autogen/docs/design/03 - Agent Worker Protocol.md | autogen | System architecture The system consists of multiple processes, each being either a _service_ process or a _worker_ process. Worker processes host application code (agents) and connect to a service process. Workers advertise the agents which they support to the service, so the service can decide which worker to place agents on. Service processes coordinate placement of agents on worker processes and facilitate communication between agents. Agent instances are identified by the tuple of `(namespace: str, name: str)`. Both _namespace_ and _name_ are application-defined. The _namespace_ has no semantics implied by the system: it is free-form, and any semantics are implemented by application code. The _name_ is used to route requests to a worker which supports agents with that name. Workers advertise the set of agent names which they are capable of hosting to the service. Workers activate agents in response to messages received from the service. The service uses the _name_ to determine where to place currently-inactive agents, maintaining a mapping from agent name to a set of workers which support that agent. The service maintains a _directory_ mapping active agent ids to worker processes which host the identified agent. ### Agent lifecycle Agents are never explicitly created or destroyed. When a request is received for an agent which is not currently active, it is the responsibility of the service to select a worker which is capable of hosting that agent, and to route the request to that worker. |
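The advertise/place/route bookkeeping described above can be sketched as follows. The data structures and the naive "pick any capable worker" placement policy are illustrative assumptions, not the actual service implementation:

```python
from collections import defaultdict

# Service-side state: which workers advertised support for each agent name,
# and the directory of active agent ids -> hosting worker.
supported_by: dict[str, set[str]] = defaultdict(set)
directory: dict[tuple[str, str], str] = {}

def register_agent_type(worker: str, name: str) -> None:
    # A worker advertises an agent name it is capable of hosting.
    supported_by[name].add(worker)

def place(namespace: str, name: str) -> str:
    """Route to the existing host, or pick a capable worker and record it."""
    agent_id = (namespace, name)
    if agent_id not in directory:
        directory[agent_id] = next(iter(supported_by[name]))  # naive policy
    return directory[agent_id]

register_agent_type("worker-1", "assistant")
print(place("default", "assistant"))  # worker-1
print(place("default", "assistant"))  # worker-1 -- same host on repeat requests
```

The key property this illustrates is that placement is lazy: the directory entry is created only when the first request for an inactive agent arrives.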
GitHub | autogen | autogen/docs/design/03 - Agent Worker Protocol.md | autogen | Worker protocol flow The worker protocol has three phases, following the lifetime of the worker: initialization, operation, and termination. ### Initialization When the worker process starts, it initiates a connection to a service process, establishing a bi-directional communication channel across which messages are passed. Next, the worker issues zero or more `RegisterAgentType(name: str)` messages, which tell the service the names of the agents which it is able to host. * TODO: What other metadata should the worker give to the service? * TODO: Should we give the worker a unique id which can be used to identify it for its lifetime? Should we allow this to be specified by the worker process itself? ### Operation Once the connection is established, and the service knows which agents the worker is capable of hosting, the worker may begin receiving requests for agents which it must host. Placement of agents happens in response to an `Event(...)` or `RpcRequest(...)` message. The worker maintains a _catalog_ of locally active agents: a mapping from agent id to agent instance. If a message arrives for an agent which does not have a corresponding entry in the catalog, the worker activates a new instance of that agent and inserts it into the catalog. The worker dispatches the message to the agent: * For an `Event`, the agent processes the message and no response is generated. * For an `RpcRequest` message, the agent processes the message and generates a response of type `RpcResponse`. The worker routes the response to the original sender. The worker maintains a mapping of outstanding requests, identified by `RpcRequest.id`, to a promise for a future `RpcResponse`. When an `RpcResponse` is received, the worker finds the corresponding request id and fulfils the promise using that response. If no response is received in a specified time frame (e.g., 30s), the worker breaks the promise with a timeout error. ### Termination When the worker is ready to shut down, it closes the connection to the service and terminates. The service de-registers the worker and all agent instances which were hosted on it. |
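The outstanding-request map from the Operation phase can be sketched with asyncio futures standing in for promises. The class and method names below are illustrative, not the actual worker API:

```python
import asyncio

class WorkerRpcTable:
    """Sketch of the worker's outstanding-request bookkeeping."""

    def __init__(self) -> None:
        # RpcRequest.id -> promise for the future RpcResponse
        self._pending: dict[str, asyncio.Future] = {}

    def send_rpc(self, request_id: str) -> asyncio.Future:
        # Record the promise before the RpcRequest leaves on the channel.
        fut = asyncio.get_running_loop().create_future()
        self._pending[request_id] = fut
        return fut

    def on_rpc_response(self, request_id: str, payload: object) -> None:
        # Fulfil the matching promise; late or unknown responses are dropped.
        fut = self._pending.pop(request_id, None)
        if fut is not None and not fut.done():
            fut.set_result(payload)

async def demo() -> object:
    table = WorkerRpcTable()
    fut = table.send_rpc("req-1")
    table.on_rpc_response("req-1", {"ok": True})
    # The time-out window mentioned above maps naturally onto wait_for,
    # which raises TimeoutError if the response never arrives.
    return await asyncio.wait_for(fut, timeout=30)

print(asyncio.run(demo()))  # {'ok': True}
```

Using a future per request id keeps correlation trivial: whichever coroutine awaits the future is woken exactly when the matching `RpcResponse` arrives, and `wait_for` gives the "break the promise with a timeout error" behavior for free.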
GitHub | autogen | autogen/docs/design/02 - Topics.md | autogen | # Topics This document describes the semantics and components of publishing messages and subscribing to topics. |
GitHub | autogen | autogen/docs/design/02 - Topics.md | autogen | Overview Topics are used as the primitive to manage which agents receive a given published message. Agents subscribe to topics. There is an application-defined mapping from topic to agent instance. These concepts intentionally map to the [CloudEvents](https://cloudevents.io/) specification. This allows for easy integration with existing systems and tools. ### Non-goals This document does not specify RPC/direct messaging |
GitHub | autogen | autogen/docs/design/02 - Topics.md | autogen | Identifiers A topic is identified by two components (called a `TopicId`): - [`type`](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#type) - represents the type of event that occurs, this is static and defined in code - SHOULD use reverse domain name notation to avoid naming conflicts. For example: `com.example.my-topic`. - [`source`](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md#source-1) - represents where the event originated from, this is dynamic and based on the message itself - SHOULD be a URI Agent instances are identified by two components (called an `AgentId`): - `type` - represents the type of agent, this is static and defined in code - MUST be a valid identifier as defined [here](https://docs.python.org/3/reference/lexical_analysis.html#identifiers) except that only the ASCII range is allowed - `key` - represents the instance of the agent type for the key - SHOULD be a URI For example: `GraphicDesigner:1234` |
GitHub | autogen | autogen/docs/design/02 - Topics.md | autogen | Subscriptions Subscriptions define which agents receive messages published to a topic. Subscriptions are dynamic and can be added or removed at any time. A subscription defines two things: - Matcher func of type `TopicId -> bool`, telling us "does this subscription match this topic" - Mapper func of type `TopicId -> AgentId`, telling us "given this subscription matches this topic, which agent does it map to" These functions MUST be free of side effects such that the evaluation can be cached. ### Agent instance creation If a message is received on a topic that maps to an agent that does not yet exist, the runtime will instantiate an agent to fulfill the request. |
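A sketch of the matcher/mapper contract, using hypothetical `TopicId`, `AgentId`, and `Subscription` types rather than the real AutoGen classes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class TopicId:
    type: str
    source: str

@dataclass(frozen=True)
class AgentId:
    type: str
    key: str

@dataclass(frozen=True)
class Subscription:
    matcher: Callable[[TopicId], bool]    # "does this subscription match this topic?"
    mapper: Callable[[TopicId], AgentId]  # "which agent does it map to?"

# Route every GitHub_Issues topic to one agent instance per source, so each
# issue gets its own agent keyed by the topic source.
sub = Subscription(
    matcher=lambda t: t.type == "GitHub_Issues",
    mapper=lambda t: AgentId(type="issue_triager", key=t.source),
)

topic = TopicId(type="GitHub_Issues", source="github.com/example/repo/issues/7")
if sub.matcher(topic):
    print(sub.mapper(topic).key)  # github.com/example/repo/issues/7
```

Because both functions are pure (no side effects), the runtime can cache the result of evaluating a subscription against a topic, which is exactly why the spec requires side-effect freedom.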
GitHub | autogen | autogen/docs/design/02 - Topics.md | autogen | Message types Agents are able to handle certain types of messages. This is an internal detail of an agent's implementation. All agents in a channel will receive all messages, but will ignore messages that they cannot handle. > [!NOTE] > This might be revisited based on scaling and performance considerations. |
GitHub | autogen | autogen/docs/design/01 - Programming Model.md | autogen | # Programming Model Understanding your workflow and mapping it to agents is the key to building an agent system in AutoGen. The programming model is basically publish-subscribe. Agents subscribe to events they care about and can also publish events that other agents may care about. Agents may also have additional assets such as Memory, prompts, data sources, and skills (external APIs). |
GitHub | autogen | autogen/docs/design/01 - Programming Model.md | autogen | Events Delivered as CloudEvents Each event in the system is defined using the [CloudEvents Specification](https://cloudevents.io/). This allows for a common event format that can be used across different systems and languages. In CloudEvents, each event has "Context Attributes" that must include: 1. *id* - A unique id (e.g. a UUID). 2. *source* - A URI or URN indicating the event's origin. 3. *type* - The namespace of the event - prefixed with a reverse-DNS name. - The prefixed domain dictates the organization which defines the semantics of this event type (e.g. `com.github.pull_request.opened` or `com.example.object.deleted.v2`), and optionally fields describing the data schema/content-type or extensions. |
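A minimal CloudEvents-style envelope carrying the three required context attributes might look like this; all concrete values are illustrative:

```python
import uuid

# A CloudEvents 1.0-style envelope. `id`, `source`, and `type` are the
# required context attributes listed above; the rest are optional fields.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),                      # unique per event
    "source": "github.com/example/repo/pull/42",  # URI/URN of the origin
    "type": "com.github.pull_request.opened",     # reverse-DNS namespaced
    "datacontenttype": "application/json",
    "data": {"action": "opened", "number": 42},
}

missing = [attr for attr in ("id", "source", "type") if attr not in event]
print(missing)  # [] -- all required context attributes present
```

The reverse-DNS `type` is what makes routing and handler binding possible across organizations: `com.github.*` events and `com.example.*` events can coexist without naming collisions.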
GitHub | autogen | autogen/docs/design/01 - Programming Model.md | autogen | Event Handlers Each agent has a set of event handlers that are bound to a specific match against a CloudEvents *type*. Event Handlers could match against an exact type or match for a pattern of events of a particular level in the type hierarchy (e.g., `com.Microsoft.AutoGen.Agents.System.*` for all Events in the `System` namespace). Each event handler is a function that can change state, call models, access memory, call external tools, emit other events, and flow data to/from other systems. Each event handler can be a simple function or a more complex function that uses a state machine or other control logic. |
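One way such pattern-based binding could work, sketched with a simple wildcard match; the dispatch table and handler names are assumptions, not AutoGen's actual mechanism:

```python
from fnmatch import fnmatch

# Handlers bound either to an exact CloudEvents type or to a
# trailing-wildcard pattern over the type hierarchy.
handlers = {
    "com.github.pull_request.opened": lambda event: "pr-opened",
    "com.Microsoft.AutoGen.Agents.System.*": lambda event: "system-event",
}

def dispatch(event_type: str, event: dict):
    # Return the result of the first handler whose pattern matches the type.
    for pattern, handler in handlers.items():
        if fnmatch(event_type, pattern):
            return handler(event)
    return None  # no handler bound to this type

print(dispatch("com.Microsoft.AutoGen.Agents.System.Start", {}))  # system-event
print(dispatch("com.example.unknown", {}))                        # None
```

In a real system the match would likely be anchored to dot-separated hierarchy levels rather than a raw glob, but the shape is the same: exact binding for specific events, pattern binding for a whole namespace.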
GitHub | autogen | autogen/docs/design/01 - Programming Model.md | autogen | Orchestrating Agents It is possible to build a functional and scalable agent system that only reacts to external events. In many cases, however, you will want to orchestrate the agents to achieve a specific goal or follow a pre-determined workflow. In this case, you will need to build an orchestrator agent that manages the flow of events between agents. |
GitHub | autogen | autogen/docs/design/01 - Programming Model.md | autogen | Built-in Event Types The AutoGen system comes with a set of built-in event types that are used to manage the system. These include: - *System Events* - Events that are used to manage the system itself. These include events for starting and stopping the Agents, sending messages to all agents, and other system-level events. - *Insert other types here* |
GitHub | autogen | autogen/docs/design/01 - Programming Model.md | autogen | Agent Contracts You may want to leverage more prescriptive agent behavior contracts, and AutoGen also includes base agents that implement different approaches to agent behavior, including layering request/response patterns on top of the event-driven model. For an example of this see the ChatAgents in the Python examples. In this case your agent will have a known set of events which it must implement and specific behaviors expected of those events. |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | <!-- Thank you for your contribution! Please review https://microsoft.github.io/autogen/docs/Contribute before opening a pull request. --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | Why are these changes needed? <!-- Please give a short summary of the change and the problem this solves. --> |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | Related issue number <!-- For example: "Closes #1234" --> |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | Checks - [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally. - [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR. - [ ] I've made sure all auto checks have passed. |