Analyze this document content
# Application Stack AutoGen core is designed to be an unopinionated framework that can be used to build a wide variety of multi-agent applications. It is not tied to any specific agent abstraction or multi-agent pattern. The following diagram shows the application stack. ![Application Stack](application-stack.svg) At the bottom of the stack are the base messaging and routing facilities that enable agents to communicate with each other. These are managed by the agent runtime, and for most applications, developers only need to interact with the high-level APIs provided by the runtime (see [Agent and Agent Runtime](../framework/agent-and-agent-runtime.ipynb)). At the top of the stack, developers need to define the types of the messages that agents exchange. This set of message types forms a behavior contract that agents must adhere to, and the implementation of the contracts determines how agents handle messages. The behavior contract is also sometimes referred to as the message protocol
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/application-stack.md", "file_type": ".md", "source_type": "document" }
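The "behavior contract" idea above can be made concrete with a small, self-contained sketch: plain dataclasses as the message types agents exchange, and a handler whose signature adheres to the contract. The names `TaskRequest`, `TaskResult`, and `handle` are illustrative, not part of the AutoGen API.

```python
from dataclasses import dataclass

# Each message type in the contract is a plain, serializable record.
@dataclass
class TaskRequest:
    task: str

@dataclass
class TaskResult:
    result: str

# An agent adhering to the contract accepts TaskRequest and returns TaskResult;
# how it produces the result is its own implementation detail.
def handle(message: TaskRequest) -> TaskResult:
    # Trivial behavior: acknowledge the task.
    return TaskResult(result=f"done: {message.task}")

print(handle(TaskRequest(task="summarize")).result)  # → done: summarize
```

Because the contract lives in the message types, any agent that consumes `TaskRequest` and emits `TaskResult` can be swapped in without changing its peers.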
Analyze this document content
# Agent Runtime Environments At the foundation level, the framework provides a _runtime environment_, which facilitates communication between agents, manages their identities and lifecycles, and enforces security and privacy boundaries. It supports two types of runtime environment: _standalone_ and _distributed_. Both types provide a common set of APIs for building multi-agent applications, so you can switch between them without changing your agent implementation. Each type can also have multiple implementations. ## Standalone Agent Runtime A standalone runtime is suitable for single-process applications where all agents are implemented in the same programming language and run in the same process. In the Python API, an example of a standalone runtime is the {py:class}`~autogen_core.SingleThreadedAgentRuntime`. The following diagram shows the standalone runtime in the framework. ![Standalone Runtime](architecture-standalone.svg) Here, agents communicate via messages through the runtime
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/architecture.md", "file_type": ".md", "source_type": "document" }
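To make the standalone-runtime concept concrete, here is a toy single-process runtime sketch using only the standard library: agents register handlers, and messages are delivered through one shared queue. This is illustrative only and is not how `SingleThreadedAgentRuntime` is actually implemented.

```python
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable, Dict

@dataclass
class Message:
    recipient: str
    body: str

class MiniRuntime:
    """A toy single-process runtime: registered agents share one queue."""
    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[Message], Awaitable[None]]] = {}
        self._queue: asyncio.Queue = asyncio.Queue()

    def register(self, name: str, handler: Callable[[Message], Awaitable[None]]) -> None:
        self._handlers[name] = handler

    async def send(self, message: Message) -> None:
        await self._queue.put(message)

    async def run_until_idle(self) -> None:
        # Deliver queued messages to their recipients until the queue drains.
        while not self._queue.empty():
            msg = await self._queue.get()
            await self._handlers[msg.recipient](msg)

received: list = []

async def main() -> None:
    runtime = MiniRuntime()

    async def echo_agent(msg: Message) -> None:
        received.append(msg.body)

    runtime.register("echo", echo_agent)
    await runtime.send(Message(recipient="echo", body="hello"))
    await runtime.run_until_idle()

asyncio.run(main())
print(received)  # → ['hello']
```

A distributed runtime keeps the same send/register surface but routes messages across process boundaries, which is why agent code can stay unchanged when switching.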
Analyze this document content
# Topic and Subscription There are two ways for the runtime to deliver messages: direct messaging or broadcast. Direct messaging is one-to-one: the sender must provide the recipient's agent ID. Broadcast, on the other hand, is one-to-many, and the sender does not provide recipients' agent IDs. Many scenarios are suitable for broadcast. For example, in event-driven workflows, agents do not always know who will handle their messages, and a workflow can be composed of agents with no inter-dependencies. This section focuses on the core concepts in broadcast: topic and subscription. (topic_and_subscription_topic)= ## Topic A topic defines the scope of a broadcast message. In essence, the agent runtime implements a publish-subscribe model through its broadcast API: when publishing a message, the topic must be specified. It is an indirection over agent IDs. A topic consists of two components: topic type and topic source. ```{note} Topic = (Topic Type, Topic Source) ``` Similar to [agent ID](./age
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/core-concepts/topic-and-subscription.md", "file_type": ".md", "source_type": "document" }
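The `Topic = (Topic Type, Topic Source)` model above can be sketched with plain dataclasses. The `TypeSubscription` here is a simplified stand-in for the real subscription classes in `autogen_core`: it matches every topic whose type equals its `topic_type` and maps the broadcast to an agent type.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Topic:
    type: str    # category of the event, e.g. "github_issues"
    source: str  # which instance emitted it, e.g. "repo-1/42"

@dataclass(frozen=True)
class TypeSubscription:
    """Subscribes an agent type to every topic with a matching topic type."""
    topic_type: str
    agent_type: str

    def matches(self, topic: Topic) -> bool:
        return topic.type == self.topic_type

def recipients(topic: Topic, subs: List[TypeSubscription]) -> List[str]:
    # Broadcast semantics: every agent type whose subscription matches.
    return [s.agent_type for s in subs if s.matches(topic)]

subs = [TypeSubscription("github_issues", "triage_agent"),
        TypeSubscription("ci_results", "build_agent")]
topic = Topic(type="github_issues", source="repo-1/42")
print(recipients(topic, subs))  # → ['triage_agent']
```

The publisher only names the topic, never the recipients; the subscription table is the indirection over agent IDs that the text describes.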
Analyze this document content
# Intro Agents can work together in a variety of ways to solve problems. Research works like [AutoGen](https://aka.ms/autogen-paper), [MetaGPT](https://arxiv.org/abs/2308.00352) and [ChatDev](https://arxiv.org/abs/2307.07924) have shown multi-agent systems outperforming single-agent systems at complex tasks like software development. A multi-agent design pattern is a structure that emerges from message protocols: it describes how agents interact with each other to solve problems. For example, the [tool-equipped agent](../framework/tools.ipynb#tool-equipped-agent) in the previous section employs a design pattern called ReAct, which involves an agent interacting with tools. You can implement any multi-agent design pattern using AutoGen agents. In the next two sections, we will discuss two common design patterns: group chat for task decomposition, and reflection for robustness.
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/design-patterns/intro.md", "file_type": ".md", "source_type": "document" }
Analyze this document content
# Logging AutoGen uses Python's built-in [`logging`](https://docs.python.org/3/library/logging.html) module. There are two kinds of logging: - **Trace logging**: This is used for debugging and consists of human-readable messages indicating what is going on. It is intended for a developer to understand what is happening in the code. The content and format of these logs should not be depended on by other systems. - Name: {py:attr}`~autogen_core.TRACE_LOGGER_NAME`. - **Structured logging**: This logger emits structured events that can be consumed by other systems. The content and format of these logs can be depended on by other systems. - Name: {py:attr}`~autogen_core.EVENT_LOGGER_NAME`. - See the module {py:mod}`autogen_core.logging` to see the available events. - {py:attr}`~autogen_core.ROOT_LOGGER_NAME` can be used to enable or disable all logs. ## Enabling logging output To enable trace logging, you can use the following code: ```python import logging from autogen_core import T
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/logging.md", "file_type": ".md", "source_type": "document" }
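The excerpt's code block is cut off; a minimal stdlib-only sketch of the same idea follows. The logger name string below is an assumption for illustration; in AutoGen the real name is exposed as the constant `autogen_core.TRACE_LOGGER_NAME` and should be imported rather than hard-coded.

```python
import logging

# Assumed value for illustration; import autogen_core.TRACE_LOGGER_NAME in real code.
TRACE_LOGGER_NAME = "autogen_core.trace"

# Keep the root quiet, but turn on verbose trace output for the framework logger.
logging.basicConfig(level=logging.WARNING)
trace_logger = logging.getLogger(TRACE_LOGGER_NAME)
trace_logger.setLevel(logging.DEBUG)

print(trace_logger.isEnabledFor(logging.DEBUG))  # → True
```

Because `logging` loggers form a dot-separated hierarchy, setting a level on the framework's root logger name enables or disables every child logger beneath it, which is exactly what `ROOT_LOGGER_NAME` is for.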
Analyze this document content
# OpenTelemetry AutoGen has native support for [OpenTelemetry](https://opentelemetry.io/). This allows you to collect telemetry data from your application and send it to a telemetry backend of your choosing. These are the components that are currently instrumented: - Runtime (Single Threaded Agent Runtime, Worker Agent Runtime) ## Instrumenting your application To instrument your application, you will need an SDK and an exporter. You may already have these if your application is already instrumented with OpenTelemetry. ## Clean instrumentation If you do not have OpenTelemetry set up in your application, you can follow these steps to instrument it. ```bash pip install opentelemetry-sdk ``` Depending on your OpenTelemetry collector, you can use gRPC or HTTP to export your telemetry. ```bash # Pick one of the following pip install opentelemetry-exporter-otlp-proto-http pip install opentelemetry-exporter-otlp-proto-grpc ``` Next, we need to get a tracer provider
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/core-user-guide/framework/telemetry.md", "file_type": ".md", "source_type": "document" }
Analyze this document content
# Creating your own extension With the new package structure in 0.4, it is easier than ever to create and publish your own extension to the AutoGen ecosystem. This page details some best practices so that your extension package integrates well with the AutoGen ecosystem. ## Best practices ### Naming There are no naming requirements, but prefixing the package name with `autogen-` makes it easier to find. ### Common interfaces Whenever possible, extensions should implement the provided interfaces from the `autogen_core` package. This allows for a more consistent experience for users. #### Dependency on AutoGen To ensure that the extension works with the version of AutoGen it was designed for, it is recommended to specify the AutoGen version in the dependencies section of `pyproject.toml` with adequate constraints. ```toml [project] # ... dependencies = [ "autogen-core>=0.4,<0.5" ] ``` ### Usage of typing AutoGen embraces the use of type hints to provide a b
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/create-your-own.md", "file_type": ".md", "source_type": "document" }
Analyze this document content
# Discover community projects ::::{grid} 1 2 2 2 :margin: 4 4 0 0 :gutter: 1 :::{grid-item-card} {fas}`globe;pst-color-primary` <br> Ecosystem :link: https://github.com/topics/autogen :class-item: api-card :columns: 12 Find samples, services and other things that work with AutoGen ::: :::{grid-item-card} {fas}`puzzle-piece;pst-color-primary` <br> Community Extensions :link: https://github.com/topics/autogen-extension :class-item: api-card Find AutoGen extensions for third-party tools, components and services ::: :::{grid-item-card} {fas}`vial;pst-color-primary` <br> Community Samples :link: https://github.com/topics/autogen-sample :class-item: api-card Find community samples and examples of how to use AutoGen ::: :::: ## List of community projects | Name | Package | Description | |---|---|---| | [autogen-watsonx-client](https://github.com/tsinggggg/autogen-watsonx-client) | [PyPI](https://pypi.org/project/autogen-watsonx-client/) | Model client for [IBM watsonx.ai](https:/
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/discover.md", "file_type": ".md", "source_type": "document" }
Analyze this document content
--- myst: html_meta: "description lang=en": | User Guide for AutoGen Extensions, a framework for building multi-agent applications with AI agents. --- # Extensions ```{toctree} :maxdepth: 3 :hidden: installation discover create-your-own ``` ```{toctree} :maxdepth: 3 :hidden: :caption: Guides azure-container-code-executor ``` AutoGen is designed to be extensible. The `autogen-ext` package contains many different component implementations maintained by the AutoGen project. However, we strongly encourage others to build their own components and publish them as part of the ecosystem. ::::{grid} 2 2 2 2 :gutter: 3 :::{grid-item-card} {fas}`magnifying-glass;pst-color-primary` Discover :link: ./discover.html Discover community extensions and samples ::: :::{grid-item-card} {fas}`code;pst-color-primary` Create your own :link: ./create-your-own.html Create your own extension ::: ::::
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/index.md", "file_type": ".md", "source_type": "document" }
Analyze this document content
--- myst: html_meta: "description lang=en": | User Guide for AutoGen Extensions, a framework for building multi-agent applications with AI agents. --- # Installation First-party maintained extensions are available in the `autogen-ext` package. ```sh pip install "autogen-ext==0.4.0.dev13" ``` Extras: - `langchain` needed for {py:class}`~autogen_ext.tools.langchain.LangChainToolAdapter` - `azure` needed for {py:class}`~autogen_ext.code_executors.azure.ACADynamicSessionsCodeExecutor` - `docker` needed for {py:class}`~autogen_ext.code_executors.docker.DockerCommandLineCodeExecutor` - `openai` needed for {py:class}`~autogen_ext.models.openai.OpenAIChatCompletionClient`
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/docs/src/user-guide/extensions-user-guide/installation.md", "file_type": ".md", "source_type": "document" }
Analyze this document content
# Samples This directory contains sample apps that use the AutoGen Core API. See [core user guide](../docs/src/user-guide/core-user-guide/) for notebook examples. See [Running the examples](#running-the-examples) for instructions on how to run the examples. - [`chess_game.py`](chess_game.py): an example with two chess player agents that execute their own tools to demonstrate tool use and reflection on tool use. - [`slow_human_in_loop.py`](slow_human_in_loop.py): an example showing human-in-the-loop, which waits for human input before making the tool call. ## Running the examples ### Prerequisites First, you need a shell with AutoGen core and required dependencies installed. ### Using Azure OpenAI API For the Azure OpenAI API, you need to set the following environment variables: ```bash export OPENAI_API_TYPE=azure export AZURE_OPENAI_API_ENDPOINT=your_azure_openai_endpoint export AZURE_OPENAI_API_VERSION=your_azure_openai_api_version ``` By default, we use Azure Active Directory (AAD)
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/README.md", "file_type": ".md", "source_type": "document" }
Analyze this code content
"""This is an example of simulating a chess game with two agents that play against each other, using tools to reason about the game state and make moves, and using a group chat manager to orchestrate the conversation.""" import argparse import asyncio import logging from typing import Annotated, Literal from autogen_core import ( AgentId, AgentInstantiationContext, AgentRuntime, DefaultSubscription, DefaultTopicId, SingleThreadedAgentRuntime, ) from autogen_core.model_context import BufferedChatCompletionContext from autogen_core.models import SystemMessage from autogen_core.tools import FunctionTool from chess import BLACK, SQUARE_NAMES, WHITE, Board, Move from chess import piece_name as get_piece_name from common.agents._chat_completion_agent import ChatCompletionAgent from common.patterns._group_chat_manager import GroupChatManager from common.types import TextMessage from common.utils import get_chat_completion_client_from_envs def validate_turn(board: B
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/chess_game.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
""" This example demonstrates an approach one can use to implement an async human-in-the-loop system. The system consists of two agents: 1. An assistant agent that uses a tool call to schedule a meeting (this is a mock) 2. A user proxy that is used as a proxy for a slow human user. When this user receives a message from the assistant, it sends out a termination request with the query for the real human. The query to the human is sent out (as an input to the terminal here, but it could be an email or anything else) and the state of the runtime is saved in a persistent layer. When the user responds, the runtime is rehydrated with the state and the user input is sent back to the runtime. This is a simple example that can be extended to more complex scenarios as well. Whenever implementing a human-in-the-loop system, it is important to consider that such systems can be slow: humans take time to respond, and depending on your medium of communication, the time taken can vary significantly
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/slow_human_in_loop.py", "file_type": ".py", "source_type": "code" }
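The save-and-rehydrate cycle described in the docstring can be sketched with plain JSON persistence; this is a stand-in for whatever persistent layer (database, blob store, email thread) a real deployment would use, and all names here are illustrative.

```python
import json
import tempfile
from pathlib import Path

def save_state(state: dict, path: Path) -> None:
    # Persist pending state so the process can exit while the human thinks.
    path.write_text(json.dumps(state))

def load_state(path: Path) -> dict:
    # Rehydrate on the next run (possibly hours or days later).
    return json.loads(path.read_text())

state_file = Path(tempfile.mkdtemp()) / "state.json"
save_state({"pending_query": "Can you approve the 3pm meeting?"}, state_file)

# ...process exits; a new process later restores exactly where it left off...
restored = load_state(state_file)
print(restored["pending_query"])
```

The key design point from the docstring is that nothing waits in memory: the question to the human and the runtime state both outlive the process, so a slow response costs no resources.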
Analyze this code content
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations from dataclasses import dataclass, field from enum import Enum from typing import List, Union from autogen_core import FunctionCall, Image from autogen_core.models import FunctionExecutionResultMessage @dataclass(kw_only=True) class BaseMessage: # Name of the agent that sent this message source: str @dataclass class TextMessage(BaseMessage): content: str @dataclass class MultiModalMessage(BaseMessage): content: List[Union[str, Image]] @dataclass class FunctionCallMessage(BaseMessage): content: List[FunctionCall] Message = Union[TextMessage, MultiModalMessage, FunctionCallMessage, FunctionExecutionResultMessage] class ResponseFormat(Enum): text = "text" json_object = "json_object" @dataclass class RespondNow: """A message to request a response from the addressed agent. The sender expects a response upon sending and waits for it synchronously.""" response_format: ResponseFormat = field(default=Resp
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/types.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import os from typing import Any, List, Optional, Union from autogen_core.models import ( AssistantMessage, ChatCompletionClient, FunctionExecutionResult, FunctionExecutionResultMessage, LLMMessage, UserMessage, ) from autogen_ext.models.openai import AzureOpenAIChatCompletionClient, OpenAIChatCompletionClient from azure.identity import DefaultAzureCredential, get_bearer_token_provider from typing_extensions import Literal from .types import ( FunctionCallMessage, Message, MultiModalMessage, TextMessage, ) def convert_content_message_to_assistant_message( message: Union[TextMessage, MultiModalMessage, FunctionCallMessage], handle_unrepresentable: Literal["error", "ignore", "try_slice"] = "error", ) -> Optional[AssistantMessage]: match message: case TextMessage() | FunctionCallMessage(): return AssistantMessage(content=message.content, source=message.source) case MultiModalMessage(): if handl
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/utils.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._chat_completion_agent import ChatCompletionAgent __all__ = [ "ChatCompletionAgent", ]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/agents/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio import json from typing import Any, Coroutine, Dict, List, Mapping, Sequence, Tuple from autogen_core import ( AgentId, CancellationToken, DefaultTopicId, FunctionCall, MessageContext, RoutedAgent, message_handler, ) from autogen_core.model_context import ChatCompletionContext from autogen_core.models import ( AssistantMessage, ChatCompletionClient, FunctionExecutionResult, FunctionExecutionResultMessage, SystemMessage, UserMessage, ) from autogen_core.tools import Tool from ..types import ( FunctionCallMessage, Message, MultiModalMessage, PublishNow, Reset, RespondNow, ResponseFormat, TextMessage, ToolApprovalRequest, ToolApprovalResponse, ) class ChatCompletionAgent(RoutedAgent): """An agent implementation that uses the ChatCompletion API to generate responses and execute tools. Args: description (str): The description of the agent. system_messa
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/agents/_chat_completion_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._group_chat_manager import GroupChatManager __all__ = ["GroupChatManager"]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/patterns/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import logging from typing import Any, Callable, List, Mapping from autogen_core import AgentId, AgentProxy, MessageContext, RoutedAgent, message_handler from autogen_core.model_context import ChatCompletionContext from autogen_core.models import ChatCompletionClient, UserMessage from ..types import ( MultiModalMessage, PublishNow, Reset, TextMessage, ) from ._group_chat_utils import select_speaker logger = logging.getLogger("autogen_core.events") class GroupChatManager(RoutedAgent): """An agent that manages a group chat through event-driven orchestration. Args: name (str): The name of the agent. description (str): The description of the agent. runtime (AgentRuntime): The runtime to register the agent. participants (List[AgentId]): The list of participants in the group chat. model_context (ChatCompletionContext): The context manager for storing and retrieving ChatCompletion messages. model_client
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/patterns/_group_chat_manager.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
"""Credit to the original authors: https://github.com/microsoft/autogen/blob/main/autogen/agentchat/groupchat.py""" import re from typing import Dict, List from autogen_core import AgentProxy from autogen_core.model_context import ChatCompletionContext from autogen_core.models import ChatCompletionClient, SystemMessage, UserMessage async def select_speaker(context: ChatCompletionContext, client: ChatCompletionClient, agents: List[AgentProxy]) -> int: """Selects the next speaker in a group chat using a ChatCompletion client.""" # TODO: Handle multi-modal messages. # Construct formatted current message history. history_messages: List[str] = [] for msg in await context.get_messages(): assert isinstance(msg, UserMessage) and isinstance(msg.content, str) history_messages.append(f"{msg.source}: {msg.content}") history = "\n".join(history_messages) # Construct agent roles. roles = "\n".join( [f"{(await agent.metadata)['type']}: {(aw
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/common/patterns/_group_chat_utils.py", "file_type": ".py", "source_type": "code" }
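`select_speaker` builds a prompt from the formatted chat history and the agents' roles, then asks the model to pick the next speaker. The prompt-construction half can be sketched without a model client; the wording below is illustrative, not the sample's exact prompt.

```python
from typing import List, Tuple

def build_selection_prompt(history: List[Tuple[str, str]],
                           roles: List[Tuple[str, str]]) -> str:
    """Format history and roles the way a speaker-selection prompt typically does."""
    # "source: content" lines, matching the loop in select_speaker.
    history_text = "\n".join(f"{source}: {content}" for source, content in history)
    # "agent type: description" lines, matching the roles construction.
    roles_text = "\n".join(f"{name}: {description}" for name, description in roles)
    return (
        "You are in a role play game. The following roles are available:\n"
        f"{roles_text}\n\nConversation so far:\n{history_text}\n\n"
        "Read the conversation and select the next role to play."
    )

prompt = build_selection_prompt(
    history=[("writer", "Here is a draft."), ("editor", "Needs a stronger intro.")],
    roles=[("writer", "Drafts content"), ("editor", "Reviews drafts")],
)
print(prompt.splitlines()[1])  # the first listed role line
```

The model's reply is then matched back against the role names to produce the returned speaker index; the sample imports `re` for exactly that kind of extraction.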
Analyze this document content
# Distributed Group Chat This example runs a gRPC server using [GrpcWorkerAgentRuntimeHost](../../src/autogen_core/application/_worker_runtime_host.py) and instantiates three distributed runtimes using [GrpcWorkerAgentRuntime](../../src/autogen_core/application/_worker_runtime.py). These runtimes connect to the gRPC server as hosts and facilitate a round-robin distributed group chat. This example leverages the [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service) to implement writer and editor LLM agents. Agents are instructed to provide concise answers, as the primary goal of this example is to showcase the distributed runtime rather than the quality of agent responses. ## Setup ### Setup Python Environment 1. Create a virtual environment as instructed in [README](../../../../../../../../README.md). 2. Run `uv pip install chainlit` in the same virtual environment ### General Configuration In the `config.yaml` file, you can configure the `c
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/README.md", "file_type": ".md", "source_type": "document" }
Analyze this code content
import asyncio import random from typing import Awaitable, Callable, List from uuid import uuid4 from _types import GroupChatMessage, MessageChunk, RequestToSpeak, UIAgentConfig from autogen_core import DefaultTopicId, MessageContext, RoutedAgent, message_handler from autogen_core.models import ( AssistantMessage, ChatCompletionClient, LLMMessage, SystemMessage, UserMessage, ) from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime from rich.console import Console from rich.markdown import Markdown class BaseGroupChatAgent(RoutedAgent): """A group chat participant using an LLM.""" def __init__( self, description: str, group_chat_topic_type: str, model_client: ChatCompletionClient, system_message: str, ui_config: UIAgentConfig, ) -> None: super().__init__(description=description) self._group_chat_topic_type = group_chat_topic_type self._model_client = model_client sel
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/_agents.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from dataclasses import dataclass from typing import Dict from autogen_core.models import ( LLMMessage, ) from autogen_ext.models.openai import AzureOpenAIClientConfiguration from pydantic import BaseModel class GroupChatMessage(BaseModel): """Implements a sample message sent by an LLM agent""" body: LLMMessage class RequestToSpeak(BaseModel): """Message type for agents to speak""" pass @dataclass class MessageChunk: message_id: str text: str author: str finished: bool def __str__(self) -> str: return f"{self.author}({self.message_id}): {self.text}" # Define Host configuration model class HostConfig(BaseModel): hostname: str port: int @property def address(self) -> str: return f"{self.hostname}:{self.port}" # Define GroupChatManager configuration model class GroupChatManagerConfig(BaseModel): topic_type: str max_rounds: int # Define WriterAgent configuration model class ChatAgentConfig(Base
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/_types.py", "file_type": ".py", "source_type": "code" }
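The sample's config models are pydantic `BaseModel`s; a stdlib-dataclass analogue of `HostConfig` shows the same computed `address` property without the pydantic dependency (validation is the part you give up).

```python
from dataclasses import dataclass

@dataclass
class HostConfig:
    """Stdlib mirror of the sample's pydantic HostConfig."""
    hostname: str
    port: int

    @property
    def address(self) -> str:
        # The gRPC host listens at "hostname:port"; agents dial this string.
        return f"{self.hostname}:{self.port}"

print(HostConfig(hostname="localhost", port=50051).address)  # → localhost:50051
```

Deriving `address` instead of storing it keeps the YAML config minimal and guarantees the two fields and the dial string can never disagree.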
Analyze this code content
import logging import os from typing import Any, Iterable, Type import yaml from _types import AppConfig from autogen_core import MessageSerializer, try_get_known_serializers_for_type from autogen_ext.models.openai import AzureOpenAIClientConfiguration from azure.identity import DefaultAzureCredential, get_bearer_token_provider def load_config(file_path: str = os.path.join(os.path.dirname(__file__), "config.yaml")) -> AppConfig: model_client = {} with open(file_path, "r") as file: config_data = yaml.safe_load(file) model_client = config_data["client_config"] del config_data["client_config"] app_config = AppConfig(**config_data) # This was required as it couldn't automatically instantiate AzureOpenAIClientConfiguration aad_params = {} if len(model_client.get("api_key", "")) == 0: aad_params["azure_ad_token_provider"] = get_bearer_token_provider( DefaultAzureCredential(), "https://cognitiveservices.azure.com/.def
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/_utils.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio import logging import warnings from _agents import BaseGroupChatAgent from _types import AppConfig, GroupChatMessage, MessageChunk, RequestToSpeak from _utils import get_serializers, load_config, set_all_log_levels from autogen_core import ( TypeSubscription, ) from autogen_ext.models.openai import AzureOpenAIChatCompletionClient from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime from rich.console import Console from rich.markdown import Markdown async def main(config: AppConfig): set_all_log_levels(logging.ERROR) editor_agent_runtime = GrpcWorkerAgentRuntime(host_address=config.host.address) editor_agent_runtime.add_message_serializer(get_serializers([RequestToSpeak, GroupChatMessage, MessageChunk])) # type: ignore[arg-type] await asyncio.sleep(4) Console().print(Markdown("Starting **`Editor Agent`**")) editor_agent_runtime.start() editor_agent_type = await BaseGroupChatAgent.register( editor_agent_runtime, c
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/run_editor_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio import logging import warnings from _agents import GroupChatManager, publish_message_to_ui, publish_message_to_ui_and_backend from _types import AppConfig, GroupChatMessage, MessageChunk, RequestToSpeak from _utils import get_serializers, load_config, set_all_log_levels from autogen_core import ( TypeSubscription, ) from autogen_ext.models.openai import AzureOpenAIChatCompletionClient from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime from rich.console import Console from rich.markdown import Markdown set_all_log_levels(logging.ERROR) async def main(config: AppConfig): set_all_log_levels(logging.ERROR) group_chat_manager_runtime = GrpcWorkerAgentRuntime(host_address=config.host.address) group_chat_manager_runtime.add_message_serializer(get_serializers([RequestToSpeak, GroupChatMessage, MessageChunk])) # type: ignore[arg-type] await asyncio.sleep(1) Console().print(Markdown("Starting **`Group Chat Manager`**")) group_chat_manage
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/run_group_chat_manager.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio from _types import HostConfig from _utils import load_config from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntimeHost from rich.console import Console from rich.markdown import Markdown async def main(host_config: HostConfig): host = GrpcWorkerAgentRuntimeHost(address=host_config.address) host.start() console = Console() console.print( Markdown(f"**`Distributed Host`** is now running and listening for connection at **`{host_config.address}`**") ) await host.stop_when_signal() if __name__ == "__main__": asyncio.run(main(load_config().host))
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/run_host.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio import logging import warnings import chainlit as cl # type: ignore [reportUnknownMemberType] # This dependency is installed through instructions from _agents import MessageChunk, UIAgent from _types import AppConfig, GroupChatMessage, RequestToSpeak from _utils import get_serializers, load_config, set_all_log_levels from autogen_core import ( TypeSubscription, ) from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime from chainlit import Message # type: ignore [reportAttributeAccessIssue] from rich.console import Console from rich.markdown import Markdown set_all_log_levels(logging.ERROR) message_chunks: dict[str, Message] = {} # type: ignore [reportUnknownVariableType] async def send_cl_stream(msg: MessageChunk) -> None: if msg.message_id not in message_chunks: message_chunks[msg.message_id] = Message(content="", author=msg.author) if not msg.finished: await message_chunks[msg.message_id].stream_token(msg.text) # type: ignore
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/run_ui.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio import logging import warnings from _agents import BaseGroupChatAgent from _types import AppConfig, GroupChatMessage, MessageChunk, RequestToSpeak from _utils import get_serializers, load_config, set_all_log_levels from autogen_core import ( TypeSubscription, ) from autogen_ext.models.openai import AzureOpenAIChatCompletionClient from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime from rich.console import Console from rich.markdown import Markdown async def main(config: AppConfig) -> None: set_all_log_levels(logging.ERROR) writer_agent_runtime = GrpcWorkerAgentRuntime(host_address=config.host.address) writer_agent_runtime.add_message_serializer(get_serializers([RequestToSpeak, GroupChatMessage, MessageChunk])) # type: ignore[arg-type] await asyncio.sleep(3) Console().print(Markdown("Starting **`Writer Agent`**")) writer_agent_runtime.start() writer_agent_type = await BaseGroupChatAgent.register( writer_agent_runtime,
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/distributed-group-chat/run_writer_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
""" The :mod:`autogen_core.worker.protos` module provides Google Protobuf classes for agent-worker communication """ import os import sys sys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/protos/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: agent_events.proto
# Protobuf Python Version: 4.25.1
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()


DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x12\x61gent_events.proto\x12\x06\x61gents\"2\n\x0bTextMessage\x12\x13\n\x0btextMessage\x18\x01 \x01(\t\x12\x0e\n\x06source\x18\x02 \x01(\t\"\x18\n\x05Input\x12\x0f\n\x07message\x18\x01 \x01(\t\"\x1f\n\x0eInputProcessed\x12\r\n\x05route\x18\x01 \x01(\t\"\x19\n\x06Output\x12\x0f\n\x07message\x18\x01 \x01(\t\"\x1e\n\rOutputWritten\x12\r\n\x05route\x18\x01 \x01(\t\"\x1a\n\x07IOError\x12\x0f\n\x07message\x18\x01 \x01(\t\"%\n\x12NewMessag
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/protos/agent_events_pb2.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/protos/agent_events_pb2_grpc.py", "file_type": ".py", "source_type": "code" }
Analyze this document content
# Multi Agent Orchestration, Distributed Agent Runtime Example

This repository is an example of how to run a distributed agent runtime. The system is composed of three main components:

1. The agent host runtime, which is responsible for managing the eventing engine and the pub/sub message system.
2. The worker runtime, which is responsible for the lifecycle of the distributed agents, including the "semantic router".
3. The user proxy, which is responsible for managing the user interface and the user interactions with the agents.

## Example Scenario

In this example, we have a simple scenario with a set of distributed agents (an "HR" agent and a "Finance" agent) which an enterprise may use to manage their HR and Finance operations. Each of these agents is independent and can be running on different machines. While many multi-agent systems are built to have the agents collaborate to solve a difficult task - the goal of this example is to show how an enterprise may manage a lar
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/semantic_router/README.md", "file_type": ".md", "source_type": "document" }
Analyze this code content
import asyncio
import logging

from _semantic_router_components import FinalResult, TerminationMessage, UserProxyMessage, WorkerAgentMessage
from autogen_core import TRACE_LOGGER_NAME, DefaultTopicId, MessageContext, RoutedAgent, message_handler

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(f"{TRACE_LOGGER_NAME}.workers")


class WorkerAgent(RoutedAgent):
    def __init__(self, name: str) -> None:
        super().__init__("A Worker Agent")
        self._name = name

    @message_handler
    async def my_message_handler(self, message: UserProxyMessage, ctx: MessageContext) -> None:
        assert ctx.topic_id is not None
        logger.debug(f"Received message from {message.source}: {message.content}")
        if "END" in message.content:
            await self.publish_message(
                TerminationMessage(reason="user terminated conversation", content=message.content, source=self.type),
                topic_id=DefaultTopicId(type="user_proxy", source=ctx.t
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/semantic_router/_agents.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import logging

from _semantic_router_components import AgentRegistryBase, IntentClassifierBase, TerminationMessage, UserProxyMessage
from autogen_core import (
    TRACE_LOGGER_NAME,
    DefaultTopicId,
    MessageContext,
    RoutedAgent,
    default_subscription,
    message_handler,
)

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(f"{TRACE_LOGGER_NAME}.semantic_router")
logger.setLevel(logging.DEBUG)


@default_subscription
class SemanticRouterAgent(RoutedAgent):
    def __init__(self, name: str, agent_registry: AgentRegistryBase, intent_classifier: IntentClassifierBase) -> None:
        super().__init__("Semantic Router Agent")
        self._name = name
        self._registry = agent_registry
        self._classifier = intent_classifier

    # The User has sent a message that needs to be routed
    @message_handler
    async def route_to_agent(self, message: UserProxyMessage, ctx: MessageContext) -> None:
        assert ctx.topic_id is not None
        logg
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/semantic_router/_semantic_router_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from abc import ABC, abstractmethod
from dataclasses import dataclass


class IntentClassifierBase(ABC):
    @abstractmethod
    async def classify_intent(self, message: str) -> str:
        pass


class AgentRegistryBase(ABC):
    @abstractmethod
    async def get_agent(self, intent: str) -> str:
        pass


@dataclass(kw_only=True)
class BaseMessage:
    """A basic message that stores the source of the message."""

    source: str


@dataclass
class TextMessage(BaseMessage):
    content: str

    def __len__(self):
        return len(self.content)


@dataclass
class UserProxyMessage(TextMessage):
    """A message that is sent from the user to the system, and needs to be routed to the appropriate agent."""

    pass


@dataclass
class TerminationMessage(TextMessage):
    """A message that is sent from the system to the user, indicating that the conversation has ended."""

    reason: str


@dataclass
class WorkerAgentMessage(TextMessage):
    """A message that is sent from a worker
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/semantic_router/_semantic_router_components.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio
import logging
import platform

from autogen_core import TRACE_LOGGER_NAME
from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntimeHost


async def run_host():
    host = GrpcWorkerAgentRuntimeHost(address="localhost:50051")
    host.start()  # Start a host service in the background.
    if platform.system() == "Windows":
        try:
            while True:
                await asyncio.sleep(1)
        except KeyboardInterrupt:
            await host.stop()
    else:
        await host.stop_when_signal()


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger(f"{TRACE_LOGGER_NAME}.host")
    asyncio.run(run_host())
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/semantic_router/run_host.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
""" This example showcases using a Semantic Router to dynamically route user messages to the most appropraite agent for a conversation. The Semantic Router Agent is responsible for receiving messages from the user, identifying the intent of the message, and then routing the message to the agent, by referencing an "Agent Registry". Using the pub-sub model, messages are broadcast to the most appropriate agent. In this example, the Agent Registry is a simple dictionary which maps string-matched intents to agent names. In a more complex example, the intent classifier may be more robust, and the agent registry could use a technology such as Azure AI Search to host definitions for many agents. For this example, there are 2 agents available, an "hr" agent and a "finance" agent. Any requests that can not be classified as "hr" or "finance" will result in the conversation ending with a Termination message. """ import asyncio import platform from _agents import UserProxyAgent, WorkerAgent f
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/semantic_router/run_semantic_router.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from dataclasses import dataclass

from autogen_core import DefaultTopicId, MessageContext, RoutedAgent, default_subscription, message_handler


@dataclass
class CascadingMessage:
    round: int


@dataclass
class ReceiveMessageEvent:
    round: int
    sender: str
    recipient: str


@default_subscription
class CascadingAgent(RoutedAgent):
    def __init__(self, max_rounds: int) -> None:
        super().__init__("A cascading agent.")
        self.max_rounds = max_rounds

    @message_handler
    async def on_new_message(self, message: CascadingMessage, ctx: MessageContext) -> None:
        await self.publish_message(
            ReceiveMessageEvent(round=message.round, sender=str(ctx.sender), recipient=str(self.id)),
            topic_id=DefaultTopicId(),
        )
        if message.round == self.max_rounds:
            return
        await self.publish_message(CascadingMessage(round=message.round + 1), topic_id=DefaultTopicId())


@default_subscription
class ObserverAgent(RoutedAge
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/worker/agents.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from agents import CascadingMessage, ObserverAgent
from autogen_core import DefaultTopicId, try_get_known_serializers_for_type
from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime


async def main() -> None:
    runtime = GrpcWorkerAgentRuntime(host_address="localhost:50051")
    runtime.add_message_serializer(try_get_known_serializers_for_type(CascadingMessage))
    runtime.start()
    await ObserverAgent.register(runtime, "observer_agent", lambda: ObserverAgent())
    await runtime.publish_message(CascadingMessage(round=1), topic_id=DefaultTopicId())
    await runtime.stop_when_signal()


if __name__ == "__main__":
    # import logging
    # logging.basicConfig(level=logging.DEBUG)
    # logger = logging.getLogger("autogen_core")
    import asyncio

    asyncio.run(main())
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/worker/run_cascading_publisher.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import uuid

from agents import CascadingAgent, ReceiveMessageEvent
from autogen_core import try_get_known_serializers_for_type
from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime


async def main() -> None:
    runtime = GrpcWorkerAgentRuntime(host_address="localhost:50051")
    runtime.add_message_serializer(try_get_known_serializers_for_type(ReceiveMessageEvent))
    runtime.start()
    agent_type = f"cascading_agent_{uuid.uuid4()}".replace("-", "_")
    await CascadingAgent.register(runtime, agent_type, lambda: CascadingAgent(max_rounds=3))
    await runtime.stop_when_signal()


if __name__ == "__main__":
    import logging

    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger("autogen_core")
    import asyncio

    asyncio.run(main())
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/worker/run_cascading_worker.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio

from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntimeHost


async def main() -> None:
    service = GrpcWorkerAgentRuntimeHost(address="localhost:50051")
    service.start()
    await service.stop_when_signal()


if __name__ == "__main__":
    import logging

    logging.basicConfig(level=logging.WARNING)
    logging.getLogger("autogen_core").setLevel(logging.DEBUG)
    asyncio.run(main())
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/worker/run_host.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio
import logging
from dataclasses import dataclass
from typing import Any, NoReturn

from autogen_core import (
    DefaultSubscription,
    DefaultTopicId,
    MessageContext,
    RoutedAgent,
    message_handler,
    try_get_known_serializers_for_type,
)
from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime


@dataclass
class AskToGreet:
    content: str


@dataclass
class Greeting:
    content: str


@dataclass
class ReturnedGreeting:
    content: str


@dataclass
class Feedback:
    content: str


@dataclass
class ReturnedFeedback:
    content: str


class ReceiveAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("Receive Agent")

    @message_handler
    async def on_greet(self, message: Greeting, ctx: MessageContext) -> None:
        await self.publish_message(ReturnedGreeting(f"Returned greeting: {message.content}"), topic_id=DefaultTopicId())

    @message_handler
    async def on_feedback(self, message: Feedback, ctx: MessageContex
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/worker/run_worker_pub_sub.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio
import logging
from dataclasses import dataclass

from autogen_core import (
    AgentId,
    DefaultSubscription,
    DefaultTopicId,
    MessageContext,
    RoutedAgent,
    message_handler,
)
from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime


@dataclass
class AskToGreet:
    content: str


@dataclass
class Greeting:
    content: str


@dataclass
class Feedback:
    content: str


class ReceiveAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("Receive Agent")

    @message_handler
    async def on_greet(self, message: Greeting, ctx: MessageContext) -> Greeting:
        return Greeting(content=f"Received: {message.content}")

    @message_handler
    async def on_feedback(self, message: Feedback, ctx: MessageContext) -> None:
        print(f"Feedback received: {message.content}")


class GreeterAgent(RoutedAgent):
    def __init__(self, receive_agent_type: str) -> None:
        super().__init__("Greeter Agent")
        self._receiv
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/worker/run_worker_rpc.py", "file_type": ".py", "source_type": "code" }
Analyze this document content
# Python and dotnet agents interoperability sample

This sample demonstrates how to create a Python agent that interacts with a .NET agent. To run the sample, check out the autogen repository. Then do the following:

1. Navigate to autogen/dotnet/samples/Hello/Hello.AppHost
2. Run `dotnet run` to start the .NET Aspire app host, which runs three projects:
    - Backend (the .NET Agent Runtime)
    - HelloAgent (the .NET Agent)
    - this Python agent - hello_python_agent.py
3. The AppHost will start the Aspire dashboard on [https://localhost:15887](https://localhost:15887).

The Python agent will interact with the .NET agent by sending a message to the .NET runtime, which will relay the message to the .NET agent.
Document summary and key points
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/xlang/hello_python_agent/README.md", "file_type": ".md", "source_type": "document" }
Analyze this code content
import asyncio
import logging
import os
import sys

# from protos.agents_events_pb2 import NewMessageReceived
from autogen_core import (
    PROTOBUF_DATA_CONTENT_TYPE,
    AgentId,
    DefaultSubscription,
    DefaultTopicId,
    TypeSubscription,
    try_get_known_serializers_for_type,
)
from autogen_ext.runtimes.grpc import GrpcWorkerAgentRuntime

# Add the local package directory to sys.path
thisdir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(thisdir, "..", ".."))

from dotenv import load_dotenv  # type: ignore # noqa: E402
from protos.agent_events_pb2 import NewMessageReceived, Output  # type: ignore # noqa: E402
from user_input import UserProxy  # type: ignore # noqa: E402

agnext_logger = logging.getLogger("autogen_core")


async def main() -> None:
    load_dotenv()
    agentHost = os.getenv("AGENT_HOST") or "localhost:53072"
    # grpc python bug - can only use the hostname, not prefix - if hostname has a prefix we have to remove it:
    if agentHo
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/xlang/hello_python_agent/hello_python_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import asyncio
import logging
from typing import Union

from autogen_core import DefaultTopicId, MessageContext, RoutedAgent, message_handler
from protos.agent_events_pb2 import ConversationClosed, Input, NewMessageReceived, Output  # type: ignore

input_types = Union[ConversationClosed, Input, Output]


class UserProxy(RoutedAgent):
    """An agent that allows the user to play the role of an agent in the conversation via input."""

    DEFAULT_DESCRIPTION = "A human user."

    def __init__(
        self,
        description: str = DEFAULT_DESCRIPTION,
    ) -> None:
        super().__init__(description)

    @message_handler
    async def handle_user_chat_input(self, message: input_types, ctx: MessageContext) -> None:
        logger = logging.getLogger("autogen_core")

        if isinstance(message, Input):
            response = await self.ainput("User input ('exit' to quit): ")
            response = response.strip()
            logger.info(response)

            await self.publish
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/samples/xlang/hello_python_agent/user_input.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import importlib.metadata

__version__ = importlib.metadata.version("autogen_core")

from ._agent import Agent
from ._agent_id import AgentId
from ._agent_instantiation import AgentInstantiationContext
from ._agent_metadata import AgentMetadata
from ._agent_proxy import AgentProxy
from ._agent_runtime import AgentRuntime
from ._agent_type import AgentType
from ._base_agent import BaseAgent
from ._cancellation_token import CancellationToken
from ._closure_agent import ClosureAgent, ClosureContext
from ._component_config import (
    Component,
    ComponentConfigImpl,
    ComponentLoader,
    ComponentModel,
    ComponentType,
)
from ._constants import (
    EVENT_LOGGER_NAME as EVENT_LOGGER_NAME_ALIAS,
)
from ._constants import (
    ROOT_LOGGER_NAME as ROOT_LOGGER_NAME_ALIAS,
)
from ._constants import (
    TRACE_LOGGER_NAME as TRACE_LOGGER_NAME_ALIAS,
)
from ._default_subscription import DefaultSubscription, default_subscription, type_subscription
from ._default_topic import DefaultT
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import Any, Mapping, Protocol, runtime_checkable

from ._agent_id import AgentId
from ._agent_metadata import AgentMetadata
from ._message_context import MessageContext


@runtime_checkable
class Agent(Protocol):
    @property
    def metadata(self) -> AgentMetadata:
        """Metadata of the agent."""
        ...

    @property
    def id(self) -> AgentId:
        """ID of the agent."""
        ...

    async def on_message(self, message: Any, ctx: MessageContext) -> Any:
        """Message handler for the agent. This should only be called by the runtime, not by other agents.

        Args:
            message (Any): Received message. Type is one of the types in `subscriptions`.
            ctx (MessageContext): Context of the message.

        Returns:
            Any: Response to the message. Can be None.

        Raises:
            asyncio.CancelledError: If the message was cancelled.
            CantHandleException: If the agent cannot handle the message.
        """
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import re

from typing_extensions import Self

from ._agent_type import AgentType


def is_valid_agent_type(value: str) -> bool:
    return bool(re.match(r"^[\w\-\.]+\Z", value))


class AgentId:
    """
    Agent ID uniquely identifies an agent instance within an agent runtime - including distributed runtime.

    It is the 'address' of the agent instance for receiving messages.

    See here for more information: :ref:`agentid_and_lifecycle`
    """

    def __init__(self, type: str | AgentType, key: str) -> None:
        if isinstance(type, AgentType):
            type = type.type

        if not is_valid_agent_type(type):
            raise ValueError(rf"Invalid agent type: {type}. Allowed values MUST match the regex: `^[\w\-\.]+\Z`")

        self._type = type
        self._key = key

    def __hash__(self) -> int:
        return hash((self._type, self._key))

    def __str__(self) -> str:
        return f"{self._type}/{self._key}"

    def __repr__(self) -> str:
        return f'AgentI
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent_id.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any, ClassVar, Generator

from ._agent_id import AgentId
from ._agent_runtime import AgentRuntime


class AgentInstantiationContext:
    def __init__(self) -> None:
        raise RuntimeError(
            "AgentInstantiationContext cannot be instantiated. It is a static class that provides context management for agent instantiation."
        )

    _AGENT_INSTANTIATION_CONTEXT_VAR: ClassVar[ContextVar[tuple[AgentRuntime, AgentId]]] = ContextVar(
        "_AGENT_INSTANTIATION_CONTEXT_VAR"
    )

    @classmethod
    @contextmanager
    def populate_context(cls, ctx: tuple[AgentRuntime, AgentId]) -> Generator[None, Any, None]:
        """:meta private:"""
        token = AgentInstantiationContext._AGENT_INSTANTIATION_CONTEXT_VAR.set(ctx)
        try:
            yield
        finally:
            AgentInstantiationContext._AGENT_INSTANTIATION_CONTEXT_VAR.reset(token)

    @classmethod
    def curr
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent_instantiation.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import TypedDict


class AgentMetadata(TypedDict):
    type: str
    key: str
    description: str
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent_metadata.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations

from typing import TYPE_CHECKING, Any, Awaitable, Mapping

from ._agent_id import AgentId
from ._agent_metadata import AgentMetadata
from ._cancellation_token import CancellationToken

if TYPE_CHECKING:
    from ._agent_runtime import AgentRuntime


class AgentProxy:
    """A helper class that allows you to use an :class:`~autogen_core.AgentId` in place of its associated :class:`~autogen_core.Agent`"""

    def __init__(self, agent: AgentId, runtime: AgentRuntime):
        self._agent = agent
        self._runtime = runtime

    @property
    def id(self) -> AgentId:
        """Target agent for this proxy"""
        return self._agent

    @property
    def metadata(self) -> Awaitable[AgentMetadata]:
        """Metadata of the agent."""
        return self._runtime.agent_metadata(self._agent)

    async def send_message(
        self,
        message: Any,
        *,
        sender: AgentId,
        cancellation_token: CancellationToken | None = None
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent_proxy.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations

from collections.abc import Sequence
from typing import Any, Awaitable, Callable, Mapping, Protocol, Type, TypeVar, overload, runtime_checkable

from ._agent import Agent
from ._agent_id import AgentId
from ._agent_metadata import AgentMetadata
from ._agent_type import AgentType
from ._cancellation_token import CancellationToken
from ._serialization import MessageSerializer
from ._subscription import Subscription
from ._topic import TopicId

# Undeliverable - error

T = TypeVar("T", bound=Agent)


@runtime_checkable
class AgentRuntime(Protocol):
    async def send_message(
        self,
        message: Any,
        recipient: AgentId,
        *,
        sender: AgentId | None = None,
        cancellation_token: CancellationToken | None = None,
        message_id: str | None = None,
    ) -> Any:
        """Send a message to an agent and get a response.

        Args:
            message (Any): The message to send.
            recipient (AgentId): Th
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent_runtime.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from dataclasses import dataclass


@dataclass(eq=True, frozen=True)
class AgentType:
    type: str
    """String representation of this agent type."""
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_agent_type.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations

import inspect
import warnings
from abc import ABC, abstractmethod
from collections.abc import Sequence
from typing import Any, Awaitable, Callable, ClassVar, List, Mapping, Tuple, Type, TypeVar, final

from typing_extensions import Self

from ._agent import Agent
from ._agent_id import AgentId
from ._agent_instantiation import AgentInstantiationContext
from ._agent_metadata import AgentMetadata
from ._agent_runtime import AgentRuntime
from ._agent_type import AgentType
from ._cancellation_token import CancellationToken
from ._message_context import MessageContext
from ._serialization import MessageSerializer, try_get_known_serializers_for_type
from ._subscription import Subscription, UnboundSubscription
from ._subscription_context import SubscriptionInstantiationContext
from ._topic import TopicId
from ._type_prefix_subscription import TypePrefixSubscription

T = TypeVar("T", bound=Agent)

BaseAgentType = TypeVar("BaseAgentType", bound="BaseAgent")
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_base_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import threading
from asyncio import Future
from typing import Any, Callable, List


class CancellationToken:
    """A token used to cancel pending async calls"""

    def __init__(self) -> None:
        self._cancelled: bool = False
        self._lock: threading.Lock = threading.Lock()
        self._callbacks: List[Callable[[], None]] = []

    def cancel(self) -> None:
        """Cancel pending async calls linked to this cancellation token."""
        with self._lock:
            if not self._cancelled:
                self._cancelled = True
                for callback in self._callbacks:
                    callback()

    def is_cancelled(self) -> bool:
        """Check if the CancellationToken has been used"""
        with self._lock:
            return self._cancelled

    def add_callback(self, callback: Callable[[], None]) -> None:
        """Attach a callback that will be called when cancel is invoked"""
        with self._lock:
            if self._cancelled:
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_cancellation_token.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations

import inspect
import warnings
from typing import Any, Awaitable, Callable, List, Literal, Mapping, Protocol, Sequence, TypeVar, get_type_hints

from ._agent_id import AgentId
from ._agent_instantiation import AgentInstantiationContext
from ._agent_metadata import AgentMetadata
from ._agent_runtime import AgentRuntime
from ._agent_type import AgentType
from ._base_agent import BaseAgent
from ._cancellation_token import CancellationToken
from ._message_context import MessageContext
from ._serialization import try_get_known_serializers_for_type
from ._subscription import Subscription
from ._subscription_context import SubscriptionInstantiationContext
from ._topic import TopicId
from ._type_helpers import get_types
from .exceptions import CantHandleException

T = TypeVar("T")
ClosureAgentType = TypeVar("ClosureAgentType", bound="ClosureAgent")


def get_handled_types_from_closure(
    closure: Callable[[ClosureAgent, T, MessageContext], Awaitable[Any]],
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_closure_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations

import importlib
import warnings
from typing import Any, ClassVar, Dict, Generic, Literal, Protocol, Type, cast, overload, runtime_checkable

from pydantic import BaseModel
from typing_extensions import Self, TypeVar

ComponentType = Literal["model", "agent", "tool", "termination", "token_provider"] | str
ConfigT = TypeVar("ConfigT", bound=BaseModel)

T = TypeVar("T", bound=BaseModel, covariant=True)


class ComponentModel(BaseModel):
    """Model class for a component. Contains all information required to instantiate a component."""

    provider: str
    """Describes how the component can be instantiated."""

    component_type: ComponentType | None = None
    """Logical type of the component. If missing, the component assumes the default type of the provider."""

    version: int | None = None
    """Version of the component specification. If missing, the component assumes whatever is the current version of the library used to load it. This is obv
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_component_config.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
ROOT_LOGGER_NAME = "autogen_core"
"""str: Logger name used for structured event logging"""

EVENT_LOGGER_NAME = "autogen_core.events"
"""str: Logger name used for structured event logging"""

TRACE_LOGGER_NAME = "autogen_core.trace"
"""str: Logger name used for developer intended trace logging. The content and format of this log should not be depended upon."""
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_constants.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import Callable, Type, TypeVar, overload

from ._agent_type import AgentType
from ._base_agent import BaseAgent, subscription_factory
from ._subscription_context import SubscriptionInstantiationContext
from ._type_subscription import TypeSubscription
from .exceptions import CantHandleException


class DefaultSubscription(TypeSubscription):
    """The default subscription is designed to be a sensible default for applications that only need global scope for agents.

    This topic by default uses the "default" topic type and attempts to detect the agent type to use based on the instantiation context.

    Args:
        topic_type (str, optional): The topic type to subscribe to. Defaults to "default".
        agent_type (str, optional): The agent type to use for the subscription. Defaults to None, in which case it will attempt to detect the agent type based on the instantiation context.
    """

    def __init__(self, topic_type: str = "default", agent_type: str | AgentType |
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_default_subscription.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._message_handler_context import MessageHandlerContext
from ._topic import TopicId


class DefaultTopicId(TopicId):
    """DefaultTopicId provides a sensible default for the topic_id and source fields of a TopicId.

    If created in the context of a message handler, the source will be set to the agent_id of the message handler, otherwise it will be set to "default".

    Args:
        type (str, optional): Topic type to publish message to. Defaults to "default".
        source (str | None, optional): Topic source to publish message to. If None, the source will be set to the agent_id of the message handler if in the context of a message handler, otherwise it will be set to "default". Defaults to None.
    """

    def __init__(self, type: str = "default", source: str | None = None) -> None:
        if source is None:
            try:
                source = MessageHandlerContext.agent_id().key
            # If we aren't in the context of a message handler, we use the default sour
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_default_topic.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
# File based from: https://github.com/microsoft/autogen/blob/47f905267245e143562abfb41fcba503a9e1d56d/autogen/function_utils.py
# Credit to original authors

import inspect
import typing
from logging import getLogger
from typing import (
    Annotated,
    Any,
    Callable,
    Dict,
    List,
    Optional,
    Set,
    Tuple,
    Type,
    TypeVar,
    Union,
    cast,
    get_args,
    get_origin,
)

from pydantic import BaseModel, Field, TypeAdapter, create_model  # type: ignore
from pydantic_core import PydanticUndefined
from typing_extensions import Literal

logger = getLogger(__name__)

T = TypeVar("T")


def get_typed_signature(call: Callable[..., Any]) -> inspect.Signature:
    """Get the signature of a function with type annotations.

    Args:
        call: The function to get the signature for

    Returns:
        The signature of the function with type annotations
    """
    signature = inspect.signature(call)
    globalns = getattr(call, "__globals__", {})
    type_hint
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_function_utils.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations import base64 import re from io import BytesIO from pathlib import Path from typing import Any, cast import aiohttp from openai.types.chat import ChatCompletionContentPartImageParam from PIL import Image as PILImage from pydantic import GetCoreSchemaHandler, ValidationInfo from pydantic_core import core_schema from typing_extensions import Literal class Image: def __init__(self, image: PILImage.Image): self.image: PILImage.Image = image.convert("RGB") @classmethod def from_pil(cls, pil_image: PILImage.Image) -> Image: return cls(pil_image) @classmethod def from_uri(cls, uri: str) -> Image: if not re.match(r"data:image/(?:png|jpeg);base64,", uri): raise ValueError("Invalid URI format. It should be a base64 encoded image URI.") # A URI. Remove the prefix and decode the base64 string. base64_data = re.sub(r"data:image/(?:png|jpeg);base64,", "", uri) return cls.from_bas
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_image.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import Any, Protocol, final from ._agent_id import AgentId __all__ = [ "DropMessage", "InterventionHandler", "DefaultInterventionHandler", ] @final class DropMessage: ... class InterventionHandler(Protocol): """An intervention handler is a class that can be used to modify, log or drop messages that are being processed by the :class:`autogen_core.base.AgentRuntime`. Note: Returning None from any of the intervention handler methods will result in a warning being issued and treated as "no change". If you intend to drop a message, you should return :class:`DropMessage` explicitly. """ async def on_send(self, message: Any, *, sender: AgentId | None, recipient: AgentId) -> Any | type[DropMessage]: ... async def on_publish(self, message: Any, *, sender: AgentId | None) -> Any | type[DropMessage]: ... async def on_response( self, message: Any, *, sender: AgentId, recipient: AgentId | None ) -> Any | type[DropMessage]: ... cl
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_intervention.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from dataclasses import dataclass from ._agent_id import AgentId from ._cancellation_token import CancellationToken from ._topic import TopicId @dataclass class MessageContext: sender: AgentId | None topic_id: TopicId | None is_rpc: bool cancellation_token: CancellationToken message_id: str
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_message_context.py", "file_type": ".py", "source_type": "code" }
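The `MessageContext` record above is a plain dataclass handed to message handlers. A minimal standalone sketch (with `AgentId` and `TopicId` stubbed out as hypothetical stand-ins, and `CancellationToken` omitted, so it runs without `autogen_core` installed) shows how a handler might branch on it:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for autogen_core's AgentId / TopicId,
# present only so this sketch is self-contained.
@dataclass(frozen=True)
class AgentId:
    type: str
    key: str

@dataclass(frozen=True)
class TopicId:
    type: str
    source: str

@dataclass
class MessageContext:
    sender: Optional[AgentId]   # None for externally injected messages
    topic_id: Optional[TopicId] # set for broadcast deliveries
    is_rpc: bool                # True for direct send/response
    message_id: str

ctx = MessageContext(
    sender=AgentId("assistant", "default"),
    topic_id=None,
    is_rpc=True,
    message_id="msg-1",
)
# A handler can distinguish direct RPC calls from broadcast deliveries.
kind = "rpc" if ctx.is_rpc else "broadcast"
```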
Analyze this code content
from contextlib import contextmanager from contextvars import ContextVar from typing import Any, ClassVar, Generator from ._agent_id import AgentId class MessageHandlerContext: def __init__(self) -> None: raise RuntimeError( "MessageHandlerContext cannot be instantiated. It is a static class that provides context management for message handling." ) _MESSAGE_HANDLER_CONTEXT: ClassVar[ContextVar[AgentId]] = ContextVar("_MESSAGE_HANDLER_CONTEXT") @classmethod @contextmanager def populate_context(cls, ctx: AgentId) -> Generator[None, Any, None]: """:meta private:""" token = MessageHandlerContext._MESSAGE_HANDLER_CONTEXT.set(ctx) try: yield finally: MessageHandlerContext._MESSAGE_HANDLER_CONTEXT.reset(token) @classmethod def agent_id(cls) -> AgentId: try: return cls._MESSAGE_HANDLER_CONTEXT.get() except LookupError as e: raise RuntimeE
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_message_handler_context.py", "file_type": ".py", "source_type": "code" }
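The `MessageHandlerContext` record above uses a common pattern: a non-instantiable "static" class wrapping a `ContextVar` so the current agent id is ambiently available inside a handler. A minimal re-sketch of that pattern (class and method names kept, agent id simplified to a string):

```python
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any, Generator

class HandlerContext:
    """Static class: holds the 'current agent id' for the running handler."""

    _CTX: ContextVar[str] = ContextVar("_CTX")

    def __init__(self) -> None:
        raise RuntimeError("HandlerContext cannot be instantiated.")

    @classmethod
    @contextmanager
    def populate_context(cls, agent_id: str) -> Generator[None, Any, None]:
        token = cls._CTX.set(agent_id)
        try:
            yield
        finally:
            cls._CTX.reset(token)  # restore the previous value on exit

    @classmethod
    def agent_id(cls) -> str:
        # Raises LookupError when called outside populate_context.
        return cls._CTX.get()

with HandlerContext.populate_context("agent-1/default"):
    inside = HandlerContext.agent_id()

try:
    HandlerContext.agent_id()
    outside_raised = False
except LookupError:
    outside_raised = True
```

Because `ContextVar` is task-local, concurrent asyncio handlers each see their own value, which is what makes this safe in a single-threaded runtime.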
Analyze this code content
# Copy of Asyncio queue: https://github.com/python/cpython/blob/main/Lib/asyncio/queues.py # So that shutdown can be used in <3.13 # Modified to work outside of the asyncio package import asyncio import collections import threading from typing import Generic, TypeVar _global_lock = threading.Lock() class _LoopBoundMixin: _loop = None def _get_loop(self) -> asyncio.AbstractEventLoop: loop = asyncio.get_running_loop() if self._loop is None: with _global_lock: if self._loop is None: self._loop = loop if loop is not self._loop: raise RuntimeError(f"{self!r} is bound to a different event loop") return loop class QueueShutDown(Exception): """Raised when putting on to or getting from a shut-down Queue.""" pass T = TypeVar("T") class Queue(_LoopBoundMixin, Generic[T]): def __init__(self, maxsize: int = 0): self._maxsize = maxsize self._getters = collectio
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_queue.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import logging from functools import wraps from typing import ( Any, Callable, Coroutine, DefaultDict, List, Literal, Protocol, Sequence, Tuple, Type, TypeVar, cast, get_type_hints, overload, runtime_checkable, ) from ._base_agent import BaseAgent from ._message_context import MessageContext from ._serialization import MessageSerializer, try_get_known_serializers_for_type from ._type_helpers import AnyType, get_types from .exceptions import CantHandleException logger = logging.getLogger("autogen_core") AgentT = TypeVar("AgentT") ReceivesT = TypeVar("ReceivesT") ProducesT = TypeVar("ProducesT", covariant=True) # TODO: Generic typevar bound binding U to agent type # Can't do because python doesnt support it # Pyright and mypy disagree on the variance of ReceivesT. Mypy thinks it should be contravariant here. # Revisit this later to see if we can remove the ignore. @runtime_checkable class MessageHandler(Protocol[AgentT, Re
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_routed_agent.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from collections import defaultdict from typing import Awaitable, Callable, DefaultDict, List, Set from ._agent import Agent from ._agent_id import AgentId from ._agent_type import AgentType from ._subscription import Subscription from ._topic import TopicId async def get_impl( *, id_or_type: AgentId | AgentType | str, key: str, lazy: bool, instance_getter: Callable[[AgentId], Awaitable[Agent]], ) -> AgentId: if isinstance(id_or_type, AgentId): if not lazy: await instance_getter(id_or_type) return id_or_type type_str = id_or_type if isinstance(id_or_type, str) else id_or_type.type id = AgentId(type_str, key) if not lazy: await instance_getter(id) return id class SubscriptionManager: def __init__(self) -> None: self._subscriptions: List[Subscription] = [] self._seen_topics: Set[TopicId] = set() self._subscribed_recipients: DefaultDict[TopicId, List[AgentId]] = defaultdict(list
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_runtime_impl_helpers.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import json from dataclasses import asdict, dataclass, fields from typing import Any, ClassVar, Dict, List, Protocol, Sequence, TypeVar, cast, get_args, get_origin, runtime_checkable from google.protobuf import any_pb2 from google.protobuf.message import Message from pydantic import BaseModel from ._type_helpers import is_union T = TypeVar("T") class MessageSerializer(Protocol[T]): @property def data_content_type(self) -> str: ... @property def type_name(self) -> str: ... def deserialize(self, payload: bytes) -> T: ... def serialize(self, message: T) -> bytes: ... @runtime_checkable class IsDataclass(Protocol): # as already noted in comments, checking for this attribute is currently # the most reliable way to ascertain that something is a dataclass __dataclass_fields__: ClassVar[Dict[str, Any]] def is_dataclass(cls: type[Any]) -> bool: return hasattr(cls, "__dataclass_fields__") def has_nested_dataclass(cls: type[IsDataclass]) -> bo
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_serialization.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations import asyncio import inspect import logging import sys import uuid import warnings from asyncio import CancelledError, Future, Queue, Task from collections.abc import Sequence from dataclasses import dataclass from typing import Any, Awaitable, Callable, Dict, List, Mapping, ParamSpec, Set, Type, TypeVar, cast from opentelemetry.trace import TracerProvider from .logging import ( AgentConstructionExceptionEvent, DeliveryStage, MessageDroppedEvent, MessageEvent, MessageHandlerExceptionEvent, MessageKind, ) if sys.version_info >= (3, 13): from asyncio import Queue, QueueShutDown else: from ._queue import Queue, QueueShutDown # type: ignore from typing_extensions import deprecated from ._agent import Agent from ._agent_id import AgentId from ._agent_instantiation import AgentInstantiationContext from ._agent_metadata import AgentMetadata from ._agent_runtime import AgentRuntime from ._agent_type import AgentType fro
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_single_threaded_agent_runtime.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations from typing import Awaitable, Callable, Protocol, runtime_checkable from ._agent_id import AgentId from ._topic import TopicId @runtime_checkable class Subscription(Protocol): """Subscriptions define the topics that an agent is interested in.""" @property def id(self) -> str: """Get the ID of the subscription. Implementations should return a unique ID for the subscription. Usually this is a UUID. Returns: str: ID of the subscription. """ ... def __eq__(self, other: object) -> bool: """Check if two subscriptions are equal. Args: other (object): Other subscription to compare against. Returns: bool: True if the subscriptions are equal, False otherwise. """ if not isinstance(other, Subscription): return False return self.id == other.id def is_match(self, topic_id: TopicId) -> bool:
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_subscription.py", "file_type": ".py", "source_type": "code" }
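The `Subscription` protocol above defines equality purely by the `id` string. A small sketch (with a hypothetical `SubscriptionHandle` class) illustrates that contract: two handles with the same id compare equal, and a fresh UUID id makes a handle distinct:

```python
import uuid
from typing import Optional

class SubscriptionHandle:
    """Illustrative subscription: identity is the `id` string, per the protocol."""

    def __init__(self, id: Optional[str] = None) -> None:
        # The protocol suggests a UUID when no explicit id is given.
        self.id = id if id is not None else str(uuid.uuid4())

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, SubscriptionHandle):
            return False
        return self.id == other.id

a = SubscriptionHandle(id="sub-1")
b = SubscriptionHandle(id="sub-1")
c = SubscriptionHandle()  # random UUID id
same = a == b
different = a == c
```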
Analyze this code content
from contextlib import contextmanager from contextvars import ContextVar from typing import Any, ClassVar, Generator from ._agent_type import AgentType class SubscriptionInstantiationContext: def __init__(self) -> None: raise RuntimeError( "SubscriptionInstantiationContext cannot be instantiated. It is a static class that provides context management for subscription instantiation." ) _SUBSCRIPTION_CONTEXT_VAR: ClassVar[ContextVar[AgentType]] = ContextVar("_SUBSCRIPTION_CONTEXT_VAR") @classmethod @contextmanager def populate_context(cls, ctx: AgentType) -> Generator[None, Any, None]: """:meta private:""" token = SubscriptionInstantiationContext._SUBSCRIPTION_CONTEXT_VAR.set(ctx) try: yield finally: SubscriptionInstantiationContext._SUBSCRIPTION_CONTEXT_VAR.reset(token) @classmethod def agent_type(cls) -> AgentType: try: return cls._SUBSCRIPTION_CONTEX
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_subscription_context.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import re from dataclasses import dataclass from typing_extensions import Self def is_valid_topic_type(value: str) -> bool: return bool(re.match(r"^[\w\-\.\:\=]+\Z", value)) @dataclass(eq=True, frozen=True) class TopicId: """ TopicId defines the scope of a broadcast message. In essence, agent runtime implements a publish-subscribe model through its broadcast API: when publishing a message, the topic must be specified. See here for more information: :ref:`topic_and_subscription_topic` """ type: str """Type of the event that this topic_id contains. Adheres to the cloud event spec. Must match the pattern: ^[\\w\\-\\.\\:\\=]+\\Z Learn more here: https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#type """ source: str """Identifies the context in which an event happened. Adheres to the cloud event spec. Learn more here: https://github.com/cloudevents/spec/blob/main/cloudevents/spec.md#source-1 """ def __
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_topic.py", "file_type": ".py", "source_type": "code" }
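The `is_valid_topic_type` helper in the record above is self-contained, so its regex can be exercised directly: topic types may contain word characters plus `-`, `.`, `:`, `=`, and nothing else, anchored to the end of the string:

```python
import re

def is_valid_topic_type(value: str) -> bool:
    # \Z anchors at the end of the string, so a single disallowed
    # character (e.g. a space) rejects the whole value.
    return bool(re.match(r"^[\w\-\.\:\=]+\Z", value))

valid = is_valid_topic_type("my.topic:v1=beta")
has_space = is_valid_topic_type("has space")
empty = is_valid_topic_type("")
```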
Analyze this code content
from collections.abc import Sequence from types import NoneType, UnionType from typing import Any, Optional, Type, Union, get_args, get_origin def is_union(t: object) -> bool: origin = get_origin(t) return origin is Union or origin is UnionType def is_optional(t: object) -> bool: origin = get_origin(t) return origin is Optional # Special type to avoid the 3.10 vs 3.11+ difference of typing._SpecialForm vs typing.Any class AnyType: pass def get_types(t: object) -> Sequence[Type[Any]] | None: if is_union(t): return get_args(t) elif is_optional(t): return tuple(list(get_args(t)) + [NoneType]) elif t is Any: return (AnyType,) elif isinstance(t, type): return (t,) elif isinstance(t, NoneType): return (NoneType,) else: return None
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_type_helpers.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import uuid from ._agent_id import AgentId from ._agent_type import AgentType from ._subscription import Subscription from ._topic import TopicId from .exceptions import CantHandleException class TypePrefixSubscription(Subscription): """This subscription matches on topics based on a prefix of the type and maps to agents using the source of the topic as the agent key. This subscription causes each source to have its own agent instance. Example: .. code-block:: python from autogen_core import TypePrefixSubscription subscription = TypePrefixSubscription(topic_type_prefix="t1", agent_type="a1") In this case: - A topic_id with type `t1` and source `s1` will be handled by an agent of type `a1` with key `s1` - A topic_id with type `t1` and source `s2` will be handled by an agent of type `a1` with key `s2`. - A topic_id with type `t1SUFFIX` and source `s2` will be handled by an agent of type `a1` with key `s2
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_type_prefix_subscription.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import uuid from ._agent_id import AgentId from ._agent_type import AgentType from ._subscription import Subscription from ._topic import TopicId from .exceptions import CantHandleException class TypeSubscription(Subscription): """This subscription matches on topics based on the type and maps to agents using the source of the topic as the agent key. This subscription causes each source to have its own agent instance. Example: .. code-block:: python from autogen_core import TypeSubscription subscription = TypeSubscription(topic_type="t1", agent_type="a1") In this case: - A topic_id with type `t1` and source `s1` will be handled by an agent of type `a1` with key `s1` - A topic_id with type `t1` and source `s2` will be handled by an agent of type `a1` with key `s2`. Args: topic_type (str): Topic type to match against agent_type (str): Agent type to handle this subscription """ def _
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_type_subscription.py", "file_type": ".py", "source_type": "code" }
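The `TypeSubscription` docstring above describes the mapping precisely: exact match on the topic type, with the topic source becoming the agent key. A minimal standalone sketch of those semantics (`TopicId` stubbed, `map_to_agent` returning a plain `(agent_type, key)` tuple rather than a real `AgentId`):

```python
import uuid
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class TopicId:  # stand-in for autogen_core's TopicId
    type: str
    source: str

class TypeSubscription:
    def __init__(self, topic_type: str, agent_type: str) -> None:
        self._topic_type = topic_type
        self._agent_type = agent_type
        self._id = str(uuid.uuid4())

    def is_match(self, topic_id: TopicId) -> bool:
        # Exact type match only; "t1SUFFIX" would NOT match "t1" here
        # (that behavior belongs to TypePrefixSubscription).
        return topic_id.type == self._topic_type

    def map_to_agent(self, topic_id: TopicId) -> Tuple[str, str]:
        if not self.is_match(topic_id):
            raise ValueError("subscription does not match this topic")
        # Each distinct source maps to its own agent instance (key).
        return (self._agent_type, topic_id.source)

sub = TypeSubscription(topic_type="t1", agent_type="a1")
m1 = sub.map_to_agent(TopicId("t1", "s1"))
m2 = sub.map_to_agent(TopicId("t1", "s2"))
other_type = sub.is_match(TopicId("t2", "s1"))
```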
Analyze this code content
from __future__ import annotations from dataclasses import dataclass @dataclass class FunctionCall: id: str # JSON args arguments: str # Function to call name: str
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_types.py", "file_type": ".py", "source_type": "code" }
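The `FunctionCall` record above stores its arguments as a JSON string, so a consumer decodes them before dispatching. A small sketch (the call id and function name are illustrative values, not from the source):

```python
import json
from dataclasses import dataclass

@dataclass
class FunctionCall:
    id: str
    arguments: str  # JSON-encoded arguments, as noted in the source
    name: str

call = FunctionCall(
    id="call_1",
    arguments='{"city": "Paris", "days": 3}',
    name="get_forecast",
)
# Decode the JSON payload into keyword arguments for the target function.
args = json.loads(call.arguments)
```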
Analyze this code content
__all__ = ["CantHandleException", "UndeliverableException", "MessageDroppedException", "NotAccessibleError"] class CantHandleException(Exception): """Raised when a handler can't handle the message.""" class UndeliverableException(Exception): """Raised when a message can't be delivered.""" class MessageDroppedException(Exception): """Raised when a message is dropped.""" class NotAccessibleError(Exception): """Tried to access a value that is not accessible. For example, a remote value cannot be accessed locally."""
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/exceptions.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import json from enum import Enum from typing import Any, cast from ._agent_id import AgentId from ._topic import TopicId class LLMCallEvent: def __init__(self, *, prompt_tokens: int, completion_tokens: int, **kwargs: Any) -> None: """To be used by model clients to log the call to the LLM. Args: prompt_tokens (int): Number of tokens used in the prompt. completion_tokens (int): Number of tokens used in the completion. Example: .. code-block:: python from autogen_core import EVENT_LOGGER_NAME from autogen_core.logging import LLMCallEvent logger = logging.getLogger(EVENT_LOGGER_NAME) logger.info(LLMCallEvent(prompt_tokens=10, completion_tokens=20)) """ self.kwargs = kwargs self.kwargs["prompt_tokens"] = prompt_tokens self.kwargs["completion_tokens"] = completion_tokens self.kwargs["type"] = "LLMCall" @property
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/logging.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._propagation import ( EnvelopeMetadata, TelemetryMetadataContainer, get_telemetry_envelope_metadata, get_telemetry_grpc_metadata, ) from ._tracing import TraceHelper from ._tracing_config import MessageRuntimeTracingConfig __all__ = [ "EnvelopeMetadata", "get_telemetry_envelope_metadata", "get_telemetry_grpc_metadata", "TelemetryMetadataContainer", "TraceHelper", "MessageRuntimeTracingConfig", ]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_telemetry/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
NAMESPACE = "autogen"
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_telemetry/_constants.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from dataclasses import dataclass from typing import Dict, Mapping, Optional from opentelemetry.context import Context from opentelemetry.propagate import extract from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator @dataclass(kw_only=True) class EnvelopeMetadata: """Metadata for an envelope.""" traceparent: Optional[str] = None tracestate: Optional[str] = None def _get_carrier_for_envelope_metadata(envelope_metadata: EnvelopeMetadata) -> Dict[str, str]: carrier: Dict[str, str] = {} if envelope_metadata.traceparent is not None: carrier["traceparent"] = envelope_metadata.traceparent if envelope_metadata.tracestate is not None: carrier["tracestate"] = envelope_metadata.tracestate return carrier def get_telemetry_envelope_metadata() -> EnvelopeMetadata: """ Retrieves the telemetry envelope metadata. Returns: EnvelopeMetadata: The envelope metadata containing the traceparent and trace
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_telemetry/_propagation.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import contextlib from typing import Dict, Generic, Iterator, Optional, Sequence from opentelemetry.trace import Link, NoOpTracerProvider, Span, SpanKind, TracerProvider from opentelemetry.util import types from ._propagation import TelemetryMetadataContainer, get_telemetry_context from ._tracing_config import Destination, ExtraAttributes, Operation, TracingConfig class TraceHelper(Generic[Operation, Destination, ExtraAttributes]): """ TraceHelper is a utility class to assist with tracing operations using OpenTelemetry. This class provides a context manager `trace_block` to create and manage spans for tracing operations, following semantic conventions and supporting nested spans through metadata contexts. """ def __init__( self, tracer_provider: TracerProvider | None, instrumentation_builder_config: TracingConfig[Operation, Destination, ExtraAttributes], ) -> None: self.tracer = (tracer_provider if tracer_provider else
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_telemetry/_tracing.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
import logging from abc import ABC, abstractmethod from typing import Dict, Generic, List, Literal, TypedDict, TypeVar, Union from opentelemetry.trace import SpanKind from opentelemetry.util import types from typing_extensions import NotRequired from .._agent_id import AgentId from .._topic import TopicId from ._constants import NAMESPACE logger = logging.getLogger("autogen_core") event_logger = logging.getLogger("autogen_core.events") Operation = TypeVar("Operation", bound=str) Destination = TypeVar("Destination") ExtraAttributes = TypeVar("ExtraAttributes") class TracingConfig(ABC, Generic[Operation, Destination, ExtraAttributes]): """ A protocol that defines the configuration for instrumentation. This protocol specifies the required properties and methods that any instrumentation configuration class must implement. It includes a property to get the name of the module being instrumented and a method to build attributes for the instrumentation configurat
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/_telemetry/_tracing_config.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/base/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing_extensions import deprecated from .._intervention import DefaultInterventionHandler as DefaultInterventionHandlerAlias from .._intervention import DropMessage as DropMessageAlias from .._intervention import InterventionHandler as InterventionHandlerAlias __all__ = [ "DropMessage", "InterventionHandler", "DefaultInterventionHandler", ] # Final so can't inherit and deprecate DropMessage = DropMessageAlias @deprecated("Moved to autogen_core.InterventionHandler. Will remove this in 0.4.0.", stacklevel=2) class InterventionHandler(InterventionHandlerAlias): ... @deprecated("Moved to autogen_core.DefaultInterventionHandler. Will remove this in 0.4.0.", stacklevel=2) class DefaultInterventionHandler(DefaultInterventionHandlerAlias): ...
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/base/intervention.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._base import CodeBlock, CodeExecutor, CodeResult from ._func_with_reqs import ( Alias, FunctionWithRequirements, FunctionWithRequirementsStr, Import, ImportFromModule, with_requirements, ) __all__ = [ "CodeBlock", "CodeExecutor", "CodeResult", "Alias", "ImportFromModule", "Import", "FunctionWithRequirements", "FunctionWithRequirementsStr", "with_requirements", ]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/code_executor/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
# File based from: https://github.com/microsoft/autogen/blob/main/autogen/coding/base.py # Credit to original authors from __future__ import annotations from dataclasses import dataclass from typing import List, Protocol, runtime_checkable from .._cancellation_token import CancellationToken @dataclass class CodeBlock: """A code block extracted from an agent message.""" code: str language: str @dataclass class CodeResult: """Result of a code execution.""" exit_code: int output: str @runtime_checkable class CodeExecutor(Protocol): """Executes code blocks and returns the result.""" async def execute_code_blocks( self, code_blocks: List[CodeBlock], cancellation_token: CancellationToken ) -> CodeResult: """Execute code blocks and return the result. This method should be implemented by the code executor. Args: code_blocks (List[CodeBlock]): The code blocks to execute. Returns:
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/code_executor/_base.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
# File based from: https://github.com/microsoft/autogen/blob/main/autogen/coding/func_with_reqs.py # Credit to original authors from __future__ import annotations import functools import inspect from dataclasses import dataclass, field from importlib.abc import SourceLoader from importlib.util import module_from_spec, spec_from_loader from textwrap import dedent, indent from typing import Any, Callable, Generic, List, Sequence, Set, Tuple, TypeVar, Union from typing_extensions import ParamSpec T = TypeVar("T") P = ParamSpec("P") def _to_code(func: Union[FunctionWithRequirements[T, P], Callable[P, T], FunctionWithRequirementsStr]) -> str: if isinstance(func, FunctionWithRequirementsStr): return func.func if isinstance(func, FunctionWithRequirements): code = inspect.getsource(func.func) else: code = inspect.getsource(func) # Strip the decorator if code.startswith("@"): code = code[code.index("\n") + 1 :] return code @datacl
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/code_executor/_func_with_reqs.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._buffered_chat_completion_context import BufferedChatCompletionContext from ._chat_completion_context import ChatCompletionContext, ChatCompletionContextState from ._head_and_tail_chat_completion_context import HeadAndTailChatCompletionContext from ._unbounded_chat_completion_context import ( UnboundedChatCompletionContext, ) __all__ = [ "ChatCompletionContext", "ChatCompletionContextState", "UnboundedChatCompletionContext", "BufferedChatCompletionContext", "HeadAndTailChatCompletionContext", ]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/model_context/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import List from ..models import FunctionExecutionResultMessage, LLMMessage from ._chat_completion_context import ChatCompletionContext class BufferedChatCompletionContext(ChatCompletionContext): """A buffered chat completion context that keeps a view of the last n messages, where n is the buffer size. The buffer size is set at initialization. Args: buffer_size (int): The size of the buffer. initial_messages (List[LLMMessage] | None): The initial messages. """ def __init__(self, buffer_size: int, initial_messages: List[LLMMessage] | None = None) -> None: super().__init__(initial_messages) if buffer_size <= 0: raise ValueError("buffer_size must be greater than 0.") self._buffer_size = buffer_size async def get_messages(self) -> List[LLMMessage]: """Get at most `buffer_size` recent messages.""" messages = self._messages[-self._buffer_size :] # Handle the first message is
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/model_context/_buffered_chat_completion_context.py", "file_type": ".py", "source_type": "code" }
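The buffering strategy in `BufferedChatCompletionContext` above is a negative slice over the full message list. A minimal synchronous sketch of just that strategy, with plain strings instead of `LLMMessage` (the real class is async and also drops a leading function-execution-result message, which the truncated record hints at but this sketch omits):

```python
from typing import List

class BufferedContext:
    """Keeps every message; get_messages returns only the last buffer_size."""

    def __init__(self, buffer_size: int) -> None:
        if buffer_size <= 0:
            raise ValueError("buffer_size must be greater than 0.")
        self._buffer_size = buffer_size
        self._messages: List[str] = []

    def add_message(self, message: str) -> None:
        self._messages.append(message)

    def get_messages(self) -> List[str]:
        # Negative slice: at most buffer_size most recent messages.
        return self._messages[-self._buffer_size:]

ctx = BufferedContext(buffer_size=2)
for m in ["m1", "m2", "m3", "m4"]:
    ctx.add_message(m)
recent = ctx.get_messages()
```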
Analyze this code content
from abc import ABC, abstractmethod from typing import Any, List, Mapping from pydantic import BaseModel, Field from ..models import LLMMessage class ChatCompletionContext(ABC): """An abstract base class for defining the interface of a chat completion context. A chat completion context lets agents store and retrieve LLM messages. It can be implemented with different recall strategies. Args: initial_messages (List[LLMMessage] | None): The initial messages. """ def __init__(self, initial_messages: List[LLMMessage] | None = None) -> None: self._messages: List[LLMMessage] = initial_messages or [] async def add_message(self, message: LLMMessage) -> None: """Add a message to the context.""" self._messages.append(message) @abstractmethod async def get_messages(self) -> List[LLMMessage]: ... async def clear(self) -> None: """Clear the context.""" self._messages = [] async def save_state(self)
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/model_context/_chat_completion_context.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import List from .._types import FunctionCall from ..models import AssistantMessage, FunctionExecutionResultMessage, LLMMessage, UserMessage from ._chat_completion_context import ChatCompletionContext class HeadAndTailChatCompletionContext(ChatCompletionContext): """A chat completion context that keeps a view of the first n and last m messages, where n is the head size and m is the tail size. The head and tail sizes are set at initialization. Args: head_size (int): The size of the head. tail_size (int): The size of the tail. initial_messages (List[LLMMessage] | None): The initial messages. """ def __init__(self, head_size: int, tail_size: int, initial_messages: List[LLMMessage] | None = None) -> None: super().__init__(initial_messages) if head_size <= 0: raise ValueError("head_size must be greater than 0.") if tail_size <= 0: raise ValueError("tail_size must be greater than
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/model_context/_head_and_tail_chat_completion_context.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from typing import List from ..models import LLMMessage from ._chat_completion_context import ChatCompletionContext class UnboundedChatCompletionContext(ChatCompletionContext): """An unbounded chat completion context that keeps a view of all the messages.""" async def get_messages(self) -> List[LLMMessage]: """Get all the messages.""" return self._messages
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/model_context/_unbounded_chat_completion_context.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from ._model_client import ChatCompletionClient, ModelCapabilities, ModelFamily, ModelInfo # type: ignore from ._types import ( AssistantMessage, ChatCompletionTokenLogprob, CreateResult, FinishReasons, FunctionExecutionResult, FunctionExecutionResultMessage, LLMMessage, RequestUsage, SystemMessage, TopLogprob, UserMessage, ) __all__ = [ "ModelCapabilities", "ChatCompletionClient", "SystemMessage", "UserMessage", "AssistantMessage", "FunctionExecutionResult", "FunctionExecutionResultMessage", "LLMMessage", "RequestUsage", "FinishReasons", "CreateResult", "TopLogprob", "ChatCompletionTokenLogprob", "ModelFamily", "ModelInfo", ]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/models/__init__.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from __future__ import annotations

import warnings
from abc import ABC, abstractmethod
from typing import Literal, Mapping, Optional, Sequence, TypeAlias

from typing_extensions import Any, AsyncGenerator, Required, TypedDict, Union, deprecated

from .. import CancellationToken
from .._component_config import ComponentLoader
from ..tools import Tool, ToolSchema
from ._types import CreateResult, LLMMessage, RequestUsage


class ModelFamily:
    """A model family is a group of models that share similar characteristics from a capabilities perspective.
    This is different to discrete supported features such as vision, function calling, and JSON output.

    This namespace class holds constants for the model families that AutoGen understands. Other families
    definitely exist and can be represented by a string, however, AutoGen will treat them as unknown."""

    GPT_4O = "gpt-4o"
    O1 = "o1"
    GPT_4 = "gpt-4"
    GPT_35 = "gpt-35"
    UNKNOWN = "unknown"

    ANY: TypeAlias = Literal["gpt-4o", "o1", "gpt-4", "gpt-35", "unknown"]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/models/_model_client.py", "file_type": ".py", "source_type": "code" }
Analyze this code content
from dataclasses import dataclass
from typing import List, Literal, Optional, Union

from pydantic import BaseModel, Field
from typing_extensions import Annotated

from .. import FunctionCall, Image


class SystemMessage(BaseModel):
    content: str
    type: Literal["SystemMessage"] = "SystemMessage"


class UserMessage(BaseModel):
    content: Union[str, List[Union[str, Image]]]

    # Name of the agent that sent this message
    source: str

    type: Literal["UserMessage"] = "UserMessage"


class AssistantMessage(BaseModel):
    content: Union[str, List[FunctionCall]]

    # Name of the agent that sent this message
    source: str

    type: Literal["AssistantMessage"] = "AssistantMessage"


class FunctionExecutionResult(BaseModel):
    content: str
    call_id: str


class FunctionExecutionResultMessage(BaseModel):
    content: List[FunctionExecutionResult]

    type: Literal["FunctionExecutionResultMessage"] = "FunctionExecutionResultMessage"


LLMMessage = Annotated[
    Union[SystemMessage, UserMessage, AssistantMessage, FunctionExecutionResultMessage],
    Field(discriminator="type"),
]
Code analysis and explanation
{ "file_path": "multi_repo_data/autogen/python/packages/autogen-core/src/autogen_core/models/_types.py", "file_type": ".py", "source_type": "code" }