diff --git a/sdk/agentserver/azure-ai-agentserver-agentframework/pyproject.toml b/sdk/agentserver/azure-ai-agentserver-agentframework/pyproject.toml
index 814d1d6d1a1e..9ce8eb00b246 100644
--- a/sdk/agentserver/azure-ai-agentserver-agentframework/pyproject.toml
+++ b/sdk/agentserver/azure-ai-agentserver-agentframework/pyproject.toml
@@ -24,6 +24,7 @@ dependencies = [
     "agent-framework-azure-ai==1.0.0b251007",
     "agent-framework-core==1.0.0b251007",
     "opentelemetry-exporter-otlp-proto-grpc>=1.36.0",
+    "opentelemetry-semantic-conventions-ai==0.4.13"
 ]
 
 [build-system]
diff --git a/sdk/agentserver/azure-ai-agentserver-core/CHANGELOG.md b/sdk/agentserver/azure-ai-agentserver-core/CHANGELOG.md
index cfcf2445e256..d72adf35f015 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/CHANGELOG.md
+++ b/sdk/agentserver/azure-ai-agentserver-core/CHANGELOG.md
@@ -1,7 +1,16 @@
 # Release History
 
-## 1.0.0b1 (2025-11-07)
+## 2.0.0b1 (Unreleased)
 
 ### Features Added
 
-First version
+- Renamed package from `azure-ai-agentserver-hosting` to `azure-ai-agentserver-core`.
+- `AgentHost` host framework with health probe, graceful shutdown, and port binding.
+- `TracingHelper` for OpenTelemetry tracing with Azure Monitor and OTLP exporters.
+- Auto-enable tracing when Application Insights or OTLP endpoint is configured.
+- W3C Trace Context propagation and `leaf_customer_span_id` baggage re-parenting.
+- `ErrorResponse` builder for standard error envelope responses.
+- `AgentLogger` for library-scoped logging.
+- `StructuredLogFilter` and `LogScope` for per-request structured logging.
+- `register_routes()` for pluggable protocol composition.
+- Hypercorn-based ASGI server with HTTP/1.1 support.
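The changelog above lists port binding, a graceful-shutdown timeout, and auto-enabled tracing; per the `_config.py` module added later in this diff, each of these settings resolves through the same hierarchy: explicit argument, then environment variable, then built-in default. A stdlib-only sketch of that hierarchy for the port setting (an illustrative reimplementation, not the shipped helper):

```python
import os
from typing import Optional

DEFAULT_PORT = 8088  # default documented in the README and _constants.py

def resolve_port(port: Optional[int] = None) -> int:
    """Resolve the listen port: explicit argument, then PORT env var, then default."""
    if port is not None:
        value = port
    else:
        raw = os.environ.get("PORT")
        if raw is None:
            return DEFAULT_PORT
        try:
            value = int(raw)
        except ValueError as exc:
            # Fail fast so misconfiguration surfaces at startup, not mid-request.
            raise ValueError(f"Invalid value for PORT: {raw!r} (expected an integer)") from exc
    if not 1 <= value <= 65535:
        raise ValueError(f"Invalid port: {value} (expected 1-65535)")
    return value
```

Raising on an unparsable `PORT` value surfaces misconfiguration at container startup rather than at first request, matching the fail-fast note in `_config.py`.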
diff --git a/sdk/agentserver/azure-ai-agentserver-core/LICENSE b/sdk/agentserver/azure-ai-agentserver-core/LICENSE
index 63447fd8bbbf..4c3581d3b052 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/LICENSE
+++ b/sdk/agentserver/azure-ai-agentserver-core/LICENSE
@@ -12,10 +12,10 @@ furnished to do so, subject to the following conditions:
 The above copyright notice and this permission notice shall be included in all
 copies or substantial portions of the Software.
 
-THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
\ No newline at end of file
+SOFTWARE.
diff --git a/sdk/agentserver/azure-ai-agentserver-core/MANIFEST.in b/sdk/agentserver/azure-ai-agentserver-core/MANIFEST.in
index eefbfbed7925..15a42f74dc4b 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/MANIFEST.in
+++ b/sdk/agentserver/azure-ai-agentserver-core/MANIFEST.in
@@ -2,7 +2,6 @@ include *.md
 include LICENSE
 recursive-include tests *.py
 recursive-include samples *.py *.md
-recursive-include doc *.rst *.md
 include azure/__init__.py
 include azure/ai/__init__.py
 include azure/ai/agentserver/__init__.py
diff --git a/sdk/agentserver/azure-ai-agentserver-core/README.md b/sdk/agentserver/azure-ai-agentserver-core/README.md
index ff60cf460196..662a1464ec34 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/README.md
+++ b/sdk/agentserver/azure-ai-agentserver-core/README.md
@@ -1,105 +1,143 @@
-# Azure AI Agent Server Adapter for Python
+# Azure AI AgentHost Core for Python
+The `azure-ai-agentserver-core` package provides the foundation host framework for building Azure AI Hosted Agent containers. It handles the protocol-agnostic infrastructure — health probes, graceful shutdown, OpenTelemetry tracing, and ASGI serving — so that protocol packages can focus on their endpoint logic.
 
 ## Getting started
+### Install the package
+
 ```bash
 pip install azure-ai-agentserver-core
 ```
 
+To enable OpenTelemetry tracing with Azure Monitor and OTLP exporters:
+
+```bash
+pip install azure-ai-agentserver-core[tracing]
+```
+
+### Prerequisites
+
+- Python 3.10 or later
+
 ## Key concepts
 
-This is the core package for Azure AI Agent server. It hosts your agent as a container on the cloud.
+### AgentHost
+
+`AgentHost` is the host process for Azure AI Hosted Agent containers. It provides:
+
+- **Health probe** — `GET /healthy` returns `200 OK` when the server is ready.
+- **Graceful shutdown** — On `SIGTERM` the server drains in-flight requests (default 30 s timeout) before exiting.
+- **OpenTelemetry tracing** — Automatic span creation with Azure Monitor and OTLP export when configured.
+- **Hypercorn ASGI server** — Serves on `0.0.0.0:${PORT:-8088}` with HTTP/1.1.
 
-You can talk to your agent using azure-ai-project sdk.
+Protocol packages (e.g. `azure-ai-agentserver-invocations`) plug into `AgentHost` by calling `register_routes()` to add their endpoints.
 
+### Environment variables
+
+| Variable | Description | Default |
+|---|---|---|
+| `PORT` | Listen port | `8088` |
+| `FOUNDRY_AGENT_NAME` | Agent name (used in tracing) | `""` |
+| `FOUNDRY_AGENT_VERSION` | Agent version (used in tracing) | `""` |
+| `FOUNDRY_PROJECT_ENDPOINT` | Azure AI Foundry project endpoint | `""` |
+| `APPLICATIONINSIGHTS_CONNECTION_STRING` | Azure Monitor connection string | — |
+| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP collector endpoint | — |
+| `AGENT_GRACEFUL_SHUTDOWN_TIMEOUT` | Shutdown drain timeout (seconds) | `30` |
+| `AGENT_LOG_LEVEL` | Log level (`DEBUG`, `INFO`, etc.) | `INFO` |
 
 ## Examples
 
-If your agent is not built using a supported framework such as LangGraph and Agent-framework, you can still make it compatible with Microsoft AI Foundry by manually implementing the predefined interface.
+`AgentHost` is typically used with a protocol package. The simplest setup with the invocations protocol:
 
 ```python
-import datetime
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.invocations import InvocationHandler
+from starlette.responses import JSONResponse
 
-from azure.ai.agentserver.core import FoundryCBAgent
-from azure.ai.agentserver.core.models import (
-    CreateResponse,
-    Response as OpenAIResponse,
-)
-from azure.ai.agentserver.core.models.projects import (
-    ItemContentOutputText,
-    ResponsesAssistantMessageItemResource,
-    ResponseTextDeltaEvent,
-    ResponseTextDoneEvent,
-)
+server = AgentHost()
+invocations = InvocationHandler(server)
+
+@invocations.invoke_handler
+async def handle(request):
+    body = await request.json()
+    return JSONResponse({"greeting": f"Hello, {body['name']}!"})
+
+server.run()
+```
+
+### Using AgentHost standalone
+For custom protocol implementations, use `AgentHost` directly and register your own routes:
 
-def stream_events(text: str):
-    assembled = ""
-    for i, token in enumerate(text.split(" ")):
-        piece = token if i == len(text.split(" ")) - 1 else token + " "
-        assembled += piece
-        yield ResponseTextDeltaEvent(delta=piece)
-    # Done with text
-    yield ResponseTextDoneEvent(text=assembled)
-
-
-async def agent_run(request_body: CreateResponse):
-    agent = request_body.agent
-    print(f"agent:{agent}")
-
-    if request_body.stream:
-        return stream_events("I am mock agent with no intelligence in stream mode.")
-
-    # Build assistant output content
-    output_content = [
-        ItemContentOutputText(
-            text="I am mock agent with no intelligence.",
-            annotations=[],
-        )
-    ]
-
-    response = OpenAIResponse(
-        metadata={},
-        temperature=0.0,
-        top_p=0.0,
-        user="me",
-        id="id",
-        created_at=datetime.datetime.now(),
-        output=[
-            ResponsesAssistantMessageItemResource(
-                status="completed",
-                content=output_content,
-            )
-        ],
-    )
-    return response
-
-
-my_agent = FoundryCBAgent()
-my_agent.agent_run = agent_run
-
-if __name__ == "__main__":
-    my_agent.run()
+```python
+from azure.ai.agentserver.core import AgentHost
+from starlette.requests import Request
+from starlette.responses import JSONResponse
+from starlette.routing import Route
+
+async def my_endpoint(request: Request):
+    return JSONResponse({"status": "ok"})
+
+server = AgentHost()
+server.register_routes([Route("/my-endpoint", my_endpoint, methods=["POST"])])
+server.run()
+```
+
+### Shutdown handler
+
+Register a cleanup function that runs during graceful shutdown:
+
+```python
+server = AgentHost()
+
+@server.shutdown_handler
+async def on_shutdown():
+    # Close database connections, flush buffers, etc.
+    pass
+```
+
+### Configuring tracing
+
+Tracing is enabled automatically when an Application Insights connection string is available:
+
+```python
+server = AgentHost(
+    application_insights_connection_string="InstrumentationKey=...",
+)
+```
+
+Or via environment variable:
+
+```bash
+export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=..."
+python my_agent.py
 ```
 
 ## Troubleshooting
 
-First run your agent with azure-ai-agentserver-core locally.
+### Logging
 
-If it works on local by failed on cloud. Check your logs in the application insight connected to your Azure AI Foundry Project.
+Set the log level to `DEBUG` for detailed diagnostics:
 
+```python
+server = AgentHost(log_level="DEBUG")
+```
 
-### Reporting issues
+Or via environment variable:
+
+```bash
+export AGENT_LOG_LEVEL=DEBUG
+```
 
-To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-agents" in the title or content.
+### Reporting issues
+To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues).
 
 ## Next steps
 
-Please visit [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/agentserver/azure-ai-agentserver-core/samples) folder. There are several cases for you to build your agent with azure-ai-agentserver
-
+- Install [`azure-ai-agentserver-invocations`](https://pypi.org/project/azure-ai-agentserver-invocations/) to add the invocation protocol endpoints.
+- See the [container image spec](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/agentserver) for the full hosted agent contract.
 
 ## Contributing
 
@@ -117,3 +155,5 @@
 This project has adopted the [Microsoft Open Source Code of Conduct][code_of_conduct]. For more
 information, see the Code of Conduct FAQ or contact
 opencode@microsoft.com with any additional questions or comments.
+
+[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/__init__.py
index d55ccad1f573..8db66d3d0f0f 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/__init__.py
+++ b/sdk/agentserver/azure-ai-agentserver-core/azure/__init__.py
@@ -1 +1 @@
-__path__ = __import__("pkgutil").extend_path(__path__, __name__)  # type: ignore
+__path__ = __import__("pkgutil").extend_path(__path__, __name__)
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/__init__.py
index d55ccad1f573..8db66d3d0f0f 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/__init__.py
+++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/__init__.py
@@ -1 +1 @@
-__path__ = __import__("pkgutil").extend_path(__path__, __name__)  # type: ignore
+__path__ = __import__("pkgutil").extend_path(__path__, __name__)
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/__init__.py
index d55ccad1f573..8db66d3d0f0f 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/__init__.py
+++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/__init__.py
@@ -1 +1 @@
-__path__ = __import__("pkgutil").extend_path(__path__, __name__)  # type: ignore
+__path__ = __import__("pkgutil").extend_path(__path__, __name__)
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/__init__.py
index 895074d32ae3..ff6585108544 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/__init__.py
+++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/__init__.py
@@ -1,14 +1,35 @@
 # ---------------------------------------------------------
 # Copyright (c) Microsoft Corporation. All rights reserved.
 # ---------------------------------------------------------
+"""Azure AI AgentHost core framework.
+
+Provides the :class:`AgentHost` host and shared utilities for
+building Azure AI Hosted Agent containers.
+
+Public API::
+
+    from azure.ai.agentserver.core import (
+        AgentLogger,
+        AgentHost,
+        Constants,
+        ErrorResponse,
+        TracingHelper,
+    )
+"""
 __path__ = __import__("pkgutil").extend_path(__path__, __name__)
 
+from ._base import AgentHost
+from ._constants import Constants
+from ._errors import ErrorResponse
+from ._logger import AgentLogger
+from ._tracing import TracingHelper
 from ._version import VERSION
-from .logger import configure as config_logging
-from .server.base import FoundryCBAgent
-from .server.common.agent_run_context import AgentRunContext
-
-config_logging()
-__all__ = ["FoundryCBAgent", "AgentRunContext"]
+__all__ = [
+    "AgentLogger",
+    "AgentHost",
+    "Constants",
+    "ErrorResponse",
+    "TracingHelper",
+]
 __version__ = VERSION
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_base.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_base.py
new file mode 100644
index 000000000000..dd4a771c3e6b
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_base.py
@@ -0,0 +1,320 @@
+# ---------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# ---------------------------------------------------------
+import asyncio  # pylint: disable=do-not-import-asyncio
+import contextlib
+import logging
+from collections.abc import AsyncGenerator, Awaitable, Callable  # pylint: disable=import-error
+from typing import Any, Optional
+
+from starlette.applications import Starlette
+from starlette.middleware import Middleware
+from starlette.middleware.base import BaseHTTPMiddleware
+from starlette.requests import Request
+from starlette.responses import Response
+from starlette.routing import Route
+
+from . import _config
+from ._logger import AgentLogger
+from ._tracing import TracingHelper
+
+logger = AgentLogger.get()
+
+# Pre-built health-check response to avoid per-request allocation.
+_HEALTHY_BODY = b'{"status":"healthy"}'
+
+# Server identity header value (name only — no version to avoid information disclosure).
+_PLATFORM_SERVER_VALUE = "azure-ai-agentserver-core"
+
+# Sentinel attribute name set on the console handler to prevent adding duplicates
+# across multiple AgentHost instantiations.
+_CONSOLE_HANDLER_ATTR = "_agentserver_console"
+
+
+class _PlatformHeaderMiddleware(BaseHTTPMiddleware):
+    """Middleware that adds x-platform-server identity header to all responses."""
+
+    async def dispatch(self, request: Request, call_next):  # type: ignore[no-untyped-def, override]
+        response = await call_next(request)
+        response.headers["x-platform-server"] = _PLATFORM_SERVER_VALUE
+        return response
+
+
+class AgentHost:
+    """Agent server host framework with built-in protocol endpoints.
+ + Provides the protocol-agnostic infrastructure required by all Azure AI + Hosted Agent containers: + + - Health probe (``GET /healthy``) + - Graceful shutdown handling (SIGTERM, configurable timeout) + - OpenTelemetry tracing with Azure Monitor and OTLP exporters + - Hypercorn-based ASGI server with HTTP/1.1 + + Protocol packages (e.g. ``azure-ai-agentserver-invocations``) plug into + this host by calling :meth:`register_routes` to add their endpoints. + + Usage:: + + from azure.ai.agentserver.core import AgentHost + from azure.ai.agentserver.invocations import InvocationHandler + + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request): + return JSONResponse({"ok": True}) + + server.run() + + :param application_insights_connection_string: Application Insights + connection string for exporting traces and logs to Azure Monitor. + When *None* (default) the ``APPLICATIONINSIGHTS_CONNECTION_STRING`` + env var is consulted. Tracing is automatically enabled when a + connection string is available. Requires ``opentelemetry-sdk`` and + ``azure-monitor-opentelemetry-exporter`` (included in the + ``[tracing]`` extras group). + :type application_insights_connection_string: Optional[str] + :param graceful_shutdown_timeout: Seconds to wait for in-flight requests to + complete after receiving SIGTERM / shutdown signal. When *None* (default) + the ``AGENT_GRACEFUL_SHUTDOWN_TIMEOUT`` env var is consulted; if that is + also unset the default is 30 seconds. Set to ``0`` to disable the + drain period. + :type graceful_shutdown_timeout: Optional[int] + :param log_level: Library log level (e.g. ``"DEBUG"``, ``"INFO"``). When + *None* (default) the ``AGENT_LOG_LEVEL`` env var is consulted; if that + is also unset the default is ``"INFO"``. 
+ :type log_level: Optional[str] + """ + + def __init__( + self, + *, + application_insights_connection_string: Optional[str] = None, + graceful_shutdown_timeout: Optional[int] = None, + log_level: Optional[str] = None, + ) -> None: + # Shutdown handler slot (server-level lifecycle) ------------------- + self._shutdown_fn: Optional[Callable[[], Awaitable[None]]] = None + + # Logging ---------------------------------------------------------- + resolved_level = _config.resolve_log_level(log_level) + logger.setLevel(resolved_level) + if not any(getattr(h, _CONSOLE_HANDLER_ATTR, False) for h in logger.handlers): + _console = logging.StreamHandler() + _console.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")) + setattr(_console, _CONSOLE_HANDLER_ATTR, True) + logger.addHandler(_console) + + # Tracing — enabled when App Insights or OTLP endpoint is configured + _conn_str = _config.resolve_appinsights_connection_string(application_insights_connection_string) + _otlp_endpoint = _config.resolve_otlp_endpoint() + _tracing_on = bool(_conn_str or _otlp_endpoint) + self._tracing: Optional[TracingHelper] = ( + TracingHelper(connection_string=_conn_str) if _tracing_on else None + ) + + # Timeouts --------------------------------------------------------- + self._graceful_shutdown_timeout = _config.resolve_graceful_shutdown_timeout( + graceful_shutdown_timeout + ) + + # Protocol routes (registered by protocol packages via register_routes) + self._protocol_routes: list[Route] = [] + + # App is built lazily on first access + self._app: Optional[Starlette] = None + + # ------------------------------------------------------------------ + # ASGI app accessor (lazy build) + # ------------------------------------------------------------------ + + @property + def app(self) -> Starlette: + """Return the Starlette ASGI application, building it on first access. + + :return: The configured Starlette application. 
+ :rtype: Starlette + """ + if self._app is None: + self._build_app() + return self._app # type: ignore[return-value] # _build_app sets _app + + # ------------------------------------------------------------------ + # Tracing accessor (for protocol packages) + # ------------------------------------------------------------------ + + @property + def tracing(self) -> Optional[TracingHelper]: + """Return the tracing helper, or *None* when tracing is disabled. + + :return: The tracing helper instance. + :rtype: Optional[TracingHelper] + """ + return self._tracing + + # ------------------------------------------------------------------ + # Shutdown handler (server-level lifecycle) + # ------------------------------------------------------------------ + + def shutdown_handler(self, fn: Callable[[], Awaitable[None]]) -> Callable[[], Awaitable[None]]: + """Register a function as the shutdown handler. + + :param fn: Async function called during graceful shutdown. + :type fn: Callable[[], Awaitable[None]] + :return: The original function (unmodified). + :rtype: Callable[[], Awaitable[None]] + """ + self._shutdown_fn = fn + return fn + + async def _dispatch_shutdown(self) -> None: + """Dispatch to the registered shutdown handler, or no-op.""" + if self._shutdown_fn is not None: + await self._shutdown_fn() + + # ------------------------------------------------------------------ + # Protocol route registration + # ------------------------------------------------------------------ + + def register_routes(self, routes: list[Route]) -> None: + """Register additional routes from a protocol package. + + Invalidates the cached Starlette app so it will be rebuilt with the + new routes on next access. Called by protocol packages (e.g. + ``InvocationHandler``) during setup. + + :param routes: List of Starlette Route objects to add. 
+ :type routes: list[Route] + """ + if not routes: + return + if self._app is not None: + logger.warning( + "register_routes() called after the ASGI app was already built. " + "The new routes will be included on the next app rebuild, but " + "will NOT affect an already-running server." + ) + self._protocol_routes.extend(routes) + self._app = None # invalidate — rebuilt lazily via .app property + + # ------------------------------------------------------------------ + # Run helpers + # ------------------------------------------------------------------ + + def _build_hypercorn_config(self, host: str, port: int) -> object: + """Create a Hypercorn config with resolved host, port and timeouts. + + :param host: Network interface to bind. + :type host: str + :param port: Port to bind. + :type port: int + :return: Configured Hypercorn config. + :rtype: hypercorn.config.Config + """ + from hypercorn.config import Config as HypercornConfig + + config = HypercornConfig() + config.bind = [f"{host}:{port}"] + config.graceful_timeout = float(self._graceful_shutdown_timeout) + return config + + def run(self, host: str = "0.0.0.0", port: Optional[int] = None) -> None: + """Start the server synchronously. + + Uses Hypercorn as the ASGI server, which supports HTTP/1.1 and HTTP/2. + + :param host: Network interface to bind. Defaults to ``"0.0.0.0"`` + (all interfaces). + :type host: str + :param port: Port to bind. Defaults to ``PORT`` env var or 8088. + :type port: Optional[int] + """ + from hypercorn.asyncio import serve as _hypercorn_serve + + resolved_port = _config.resolve_port(port) + logger.info("AgentHost starting on %s:%s", host, resolved_port) + config = self._build_hypercorn_config(host, resolved_port) + asyncio.run(_hypercorn_serve(self.app, config)) # type: ignore[arg-type] # Starlette is ASGI-compatible + + async def run_async(self, host: str = "0.0.0.0", port: Optional[int] = None) -> None: + """Start the server asynchronously (awaitable). 
+ + Uses Hypercorn as the ASGI server, which supports HTTP/1.1 and HTTP/2. + + :param host: Network interface to bind. Defaults to ``"0.0.0.0"`` + (all interfaces). + :type host: str + :param port: Port to bind. Defaults to ``PORT`` env var or 8088. + :type port: Optional[int] + """ + from hypercorn.asyncio import serve as _hypercorn_serve + + resolved_port = _config.resolve_port(port) + logger.info("AgentHost starting on %s:%s (async)", host, resolved_port) + config = self._build_hypercorn_config(host, resolved_port) + await _hypercorn_serve(self.app, config) # type: ignore[arg-type] # Starlette is ASGI-compatible + + # ------------------------------------------------------------------ + # Private: app construction + # ------------------------------------------------------------------ + + def _build_app(self) -> None: + """Construct the Starlette ASGI application with all routes.""" + + @contextlib.asynccontextmanager + async def _lifespan(_app: Starlette) -> AsyncGenerator[None, None]: # noqa: RUF029 + logger.info("AgentHost started") + yield + + # --- SHUTDOWN: runs once when the server is stopping --- + logger.info( + "AgentHost shutting down (graceful timeout=%ss)", + self._graceful_shutdown_timeout, + ) + if self._graceful_shutdown_timeout == 0: + logger.info("Graceful shutdown drain period disabled (timeout=0)") + else: + try: + await asyncio.wait_for( + self._dispatch_shutdown(), + timeout=self._graceful_shutdown_timeout, + ) + except asyncio.TimeoutError: + logger.warning( + "on_shutdown did not complete within %ss timeout", + self._graceful_shutdown_timeout, + ) + except Exception: # pylint: disable=broad-exception-caught + logger.exception("Error in on_shutdown") + + # All routes: protocol routes + health + routes: list[Any] = list(self._protocol_routes) + routes.append( + Route("/healthy", self._healthy_endpoint, methods=["GET"], name="healthy"), + ) + + self._app = Starlette( + routes=routes, + lifespan=_lifespan, + 
middleware=[Middleware(_PlatformHeaderMiddleware)], + ) + + # ------------------------------------------------------------------ + # Health endpoint + # ------------------------------------------------------------------ + + async def _healthy_endpoint(self, request: Request) -> Response: # pylint: disable=unused-argument + """GET /healthy — single health check endpoint. + + Return ``200 OK`` when the process is alive and ready to serve traffic. + A single endpoint is sufficient — the hosting platform can map it to + both liveness and readiness probes. + + :param request: The incoming Starlette request. + :type request: Request + :return: 200 OK response. + :rtype: Response + """ + return Response(_HEALTHY_BODY, media_type="application/json") diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_config.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_config.py new file mode 100644 index 000000000000..0b5ea1152a8e --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_config.py @@ -0,0 +1,199 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Configuration resolution helpers for AgentHost hosting. + +Each ``resolve_*`` function follows the same hierarchy: +1. Explicit argument (if not *None*) +2. Environment variable +3. Built-in default + +A value of ``0`` conventionally disables the corresponding feature. + +Invalid environment variable values raise ``ValueError`` immediately so +misconfiguration is surfaced at startup rather than silently masked. +""" +import os +from typing import Optional + +from ._constants import Constants + + +def _parse_int_env(var_name: str) -> Optional[int]: + """Parse an integer environment variable, raising on invalid values. + + :param var_name: Name of the environment variable. 
+ :type var_name: str + :return: The parsed integer or None if the variable is not set. + :rtype: Optional[int] + :raises ValueError: If the variable is set but cannot be parsed as an integer. + """ + raw = os.environ.get(var_name) + if raw is None: + return None + try: + return int(raw) + except ValueError as exc: + raise ValueError( + f"Invalid value for {var_name}: {raw!r} (expected an integer)" + ) from exc + + +def _require_int(name: str, value: object) -> int: + """Validate that *value* is an integer. + + :param name: Human-readable parameter/env-var name for the error message. + :type name: str + :param value: The value to validate. + :type value: object + :return: The value cast to int. + :rtype: int + :raises ValueError: If *value* is not an integer. + """ + if isinstance(value, bool) or not isinstance(value, int): + raise ValueError( + f"Invalid value for {name}: {value!r} (expected an integer)" + ) + return value + + +def _validate_port(value: int, source: str) -> int: + """Validate that a port number is within the valid range. + + :param value: The port number to validate. + :type value: int + :param source: Human-readable source name for the error message. + :type source: str + :return: The validated port number. + :rtype: int + :raises ValueError: If the port is outside 1-65535. + """ + if not 1 <= value <= 65535: + raise ValueError( + f"Invalid value for {source}: {value} (expected 1-65535)" + ) + return value + + +def resolve_port(port: Optional[int]) -> int: + """Resolve the server port from argument, env var, or default. + + Resolution order: explicit *port* → ``PORT`` env var → ``8088``. + + :param port: Explicitly requested port or None. + :type port: Optional[int] + :return: The resolved port number. + :rtype: int + :raises ValueError: If the port value is not a valid integer or is outside 1-65535. 
+ """ + if port is not None: + return _validate_port(_require_int("port", port), "port") + env_port = _parse_int_env(Constants.PORT) + if env_port is not None: + return _validate_port(env_port, Constants.PORT) + return Constants.DEFAULT_PORT + + +def resolve_graceful_shutdown_timeout(timeout: Optional[int]) -> int: + """Resolve the graceful shutdown timeout from argument, env var, or default. + + :param timeout: Explicitly requested timeout or None. + :type timeout: Optional[int] + :return: The resolved timeout in seconds. + :rtype: int + :raises ValueError: If the env var is not a valid integer. + """ + if timeout is not None: + return max(0, _require_int("graceful_shutdown_timeout", timeout)) + env_timeout = _parse_int_env(Constants.AGENT_GRACEFUL_SHUTDOWN_TIMEOUT) + if env_timeout is not None: + return max(0, env_timeout) + return Constants.DEFAULT_GRACEFUL_SHUTDOWN_TIMEOUT + + +_VALID_LOG_LEVELS = ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL") + + +def resolve_appinsights_connection_string( + connection_string: Optional[str], +) -> Optional[str]: + """Resolve the Application Insights connection string. + + Resolution order: + + 1. Explicit *connection_string* argument (if not *None*). + 2. ``APPLICATIONINSIGHTS_CONNECTION_STRING`` env var (standard Azure + Monitor convention). + 3. *None* — no connection string available. + + :param connection_string: Explicitly provided connection string or None. + :type connection_string: Optional[str] + :return: The resolved connection string, or None. + :rtype: Optional[str] + """ + if connection_string is not None: + return connection_string + return os.environ.get( + Constants.APPLICATIONINSIGHTS_CONNECTION_STRING + ) + + +def resolve_log_level(level: Optional[str]) -> str: + """Resolve the library log level from argument, env var, or default (``INFO``). + + :param level: Explicitly requested level (e.g. ``"DEBUG"``) or None. + :type level: Optional[str] + :return: Validated, upper-cased log level string. 
+ :rtype: str + :raises ValueError: If the value is not one of DEBUG/INFO/WARNING/ERROR/CRITICAL. + """ + if level is not None: + normalized = level.upper() + else: + normalized = os.environ.get(Constants.AGENT_LOG_LEVEL, "INFO").upper() + if normalized not in _VALID_LOG_LEVELS: + raise ValueError( + f"Invalid log level: {normalized!r} " + f"(expected one of {', '.join(_VALID_LOG_LEVELS)})" + ) + return normalized + + +def resolve_agent_name() -> str: + """Resolve the agent name from the ``FOUNDRY_AGENT_NAME`` environment variable. + + :return: The agent name, or an empty string if not set. + :rtype: str + """ + return os.environ.get(Constants.FOUNDRY_AGENT_NAME, "") + + +def resolve_agent_version() -> str: + """Resolve the agent version from the ``FOUNDRY_AGENT_VERSION`` environment variable. + + :return: The agent version, or an empty string if not set. + :rtype: str + """ + return os.environ.get(Constants.FOUNDRY_AGENT_VERSION, "") + + +def resolve_project_id() -> str: + """Resolve the Foundry project ARM resource ID from the ``FOUNDRY_PROJECT_ARM_ID`` environment variable. + + The UX queries spans using this ID, so it must be present in trace + attributes for portal integration. + + :return: The project ARM resource ID, or an empty string if not set. + :rtype: str + """ + return os.environ.get(Constants.FOUNDRY_PROJECT_ARM_ID, "") + + +def resolve_otlp_endpoint() -> Optional[str]: + """Resolve the OTLP exporter endpoint from the ``OTEL_EXPORTER_OTLP_ENDPOINT`` environment variable. + + :return: The OTLP endpoint URL, or None if not set or empty. 
+ :rtype: Optional[str] + """ + value = os.environ.get(Constants.OTEL_EXPORTER_OTLP_ENDPOINT, "") + return value if value else None diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_constants.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_constants.py new file mode 100644 index 000000000000..588cda97919b --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_constants.py @@ -0,0 +1,31 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- + + +class Constants: + """Well-known environment variables and defaults for AgentHost hosting.""" + + # Foundry identity + FOUNDRY_AGENT_NAME = "FOUNDRY_AGENT_NAME" + FOUNDRY_AGENT_VERSION = "FOUNDRY_AGENT_VERSION" + FOUNDRY_PROJECT_ENDPOINT = "FOUNDRY_PROJECT_ENDPOINT" + FOUNDRY_PROJECT_ARM_ID = "FOUNDRY_PROJECT_ARM_ID" + + # Network + PORT = "PORT" + DEFAULT_PORT = 8088 + + # Logging + AGENT_LOG_LEVEL = "AGENT_LOG_LEVEL" + + # Tracing + APPLICATIONINSIGHTS_CONNECTION_STRING = "APPLICATIONINSIGHTS_CONNECTION_STRING" + OTEL_EXPORTER_OTLP_ENDPOINT = "OTEL_EXPORTER_OTLP_ENDPOINT" + + # Graceful shutdown + AGENT_GRACEFUL_SHUTDOWN_TIMEOUT = "AGENT_GRACEFUL_SHUTDOWN_TIMEOUT" + DEFAULT_GRACEFUL_SHUTDOWN_TIMEOUT = 30 + + # Session identity + FOUNDRY_AGENT_SESSION_ID = "FOUNDRY_AGENT_SESSION_ID" diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_errors.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_errors.py new file mode 100644 index 000000000000..6d1c9d69835d --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_errors.py @@ -0,0 +1,71 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Standardized error response builder for AgentHost. + +Every error returned by the framework uses the shape:: + + { + "error": { + "code": "...", // required – machine-readable error code + "message": "...", // required – human-readable description + "type": "...", // optional – error type classification + "details": [ ... ] // optional – child errors + } + } +""" +from typing import Any, Optional + +from starlette.responses import JSONResponse + + +class ErrorResponse: + """Standardized error response builder for AgentHost. + + Provides a static factory method for building JSON error responses + with the standard error envelope. + + Usage:: + + from azure.ai.agentserver.core import ErrorResponse + + return ErrorResponse.create("not_found", "Resource missing", status_code=404) + """ + + @staticmethod + def create( + code: str, + message: str, + *, + status_code: int, + error_type: Optional[str] = None, + details: Optional[list[dict[str, Any]]] = None, + headers: Optional[dict[str, str]] = None, + ) -> JSONResponse: + """Build a ``JSONResponse`` with the standard error envelope. + + :param code: Machine-readable error code (e.g. ``"internal_error"``). + :type code: str + :param message: Human-readable error message. + :type message: str + :keyword status_code: HTTP status code for the response. + :paramtype status_code: int + :keyword error_type: Optional error type classification string. When + provided, included as ``"type"`` in the error body. + :paramtype error_type: Optional[str] + :keyword details: Child error objects, each with at least ``code`` and + ``message`` keys. + :paramtype details: Optional[list[dict[str, Any]]] + :keyword headers: Extra HTTP headers to include on the response. + :paramtype headers: Optional[dict[str, str]] + :return: A ready-to-send JSON error response. 
+ :rtype: JSONResponse + """ + body: dict[str, Any] = {"code": code, "message": message} + if error_type is not None: + body["type"] = error_type + if details is not None: + body["details"] = details + return JSONResponse( + {"error": body}, status_code=status_code, headers=headers + ) diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_logger.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_logger.py new file mode 100644 index 000000000000..c628fd6aba32 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_logger.py @@ -0,0 +1,32 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Logging facade for AgentHost. + +Usage:: + + from azure.ai.agentserver.core import AgentLogger + + logger = AgentLogger.get() + logger.info("Processing request") +""" +import logging + + +class AgentLogger: + """Logging facade for AgentHost. + + Provides library-scoped logger access under the + ``azure.ai.agentserver`` namespace. + + All methods are static — no instantiation required. + """ + + @staticmethod + def get() -> logging.Logger: + """Return the library-scoped logger. + + :return: Logger instance for ``azure.ai.agentserver``. + :rtype: logging.Logger + """ + return logging.getLogger("azure.ai.agentserver") diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_tracing.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_tracing.py new file mode 100644 index 000000000000..34bd3a513ecc --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_tracing.py @@ -0,0 +1,866 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""OpenTelemetry tracing for AgentHost. + +Tracing is automatically enabled when an Application Insights connection +string (``APPLICATIONINSIGHTS_CONNECTION_STRING``) or an OTLP exporter +endpoint (``OTEL_EXPORTER_OTLP_ENDPOINT``) is available. + +Requires ``opentelemetry-api`` to be installed:: + + pip install azure-ai-agentserver-core[tracing] + +If the package is not installed, tracing silently becomes a no-op. + +When an Application Insights connection string is available (via constructor +or ``APPLICATIONINSIGHTS_CONNECTION_STRING`` env var), traces **and** logs are +automatically exported to Azure Monitor. This requires the additional +``opentelemetry-sdk`` and ``azure-monitor-opentelemetry-exporter`` packages +(both included in the ``[tracing]`` extras group). + +When the platform sets ``OTEL_EXPORTER_OTLP_ENDPOINT``, an OTLP exporter is +also registered for traces and logs. +""" +import logging +from collections.abc import AsyncIterable, AsyncIterator, Mapping # pylint: disable=import-error +from contextlib import contextmanager +from typing import TYPE_CHECKING, Any, Iterator, Optional, Union + +from . import _config +from ._logger import AgentLogger + +#: Starlette's ``Content`` type — the element type for streaming bodies. +_Content = Union[str, bytes, memoryview] + +#: W3C Trace Context header names used for distributed trace propagation. +_W3C_HEADERS = ("traceparent", "tracestate") + +#: Baggage key whose value overrides the parent span ID. 
+_LEAF_CUSTOMER_SPAN_ID = "leaf_customer_span_id" + +# ------------------------------------------------------------------ +# GenAI semantic convention attribute keys +# ------------------------------------------------------------------ +_ATTR_SERVICE_NAME = "service.name" +_ATTR_GEN_AI_SYSTEM = "gen_ai.system" +_ATTR_GEN_AI_PROVIDER_NAME = "gen_ai.provider.name" +_ATTR_GEN_AI_AGENT_ID = "gen_ai.agent.id" +_ATTR_GEN_AI_AGENT_NAME = "gen_ai.agent.name" +_ATTR_GEN_AI_AGENT_VERSION = "gen_ai.agent.version" +_ATTR_GEN_AI_RESPONSE_ID = "gen_ai.response.id" +_ATTR_GEN_AI_OPERATION_NAME = "gen_ai.operation.name" +_ATTR_GEN_AI_CONVERSATION_ID = "gen_ai.conversation.id" + +# Foundry project identity +_ATTR_FOUNDRY_PROJECT_ID = "microsoft.foundry.project.id" + +# Constant values +_SERVICE_NAME_VALUE = "azure.ai.agentserver" +_GEN_AI_SYSTEM_VALUE = "azure.ai.agentserver" +_GEN_AI_PROVIDER_NAME_VALUE = "AzureAI Hosted Agents" + +logger = AgentLogger.get() + +_HAS_OTEL = False +_HAS_BAGGAGE = False +try: + from opentelemetry import trace + from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator + + _HAS_OTEL = True + try: + from opentelemetry import baggage as _otel_baggage, context as _otel_context + + _HAS_BAGGAGE = True + except ImportError: + pass +except ImportError: + if TYPE_CHECKING: + from opentelemetry import trace + from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator + + +class TracingHelper: + """Lightweight wrapper around OpenTelemetry. + + Only instantiate when tracing is enabled. If ``opentelemetry-api`` is + not installed, a warning is logged and all methods become no-ops. + + When *connection_string* is provided, a :class:`TracerProvider` with an + Azure Monitor exporter is configured globally and log records from the + ``azure.ai.agentserver`` logger are forwarded to Application Insights. + This requires ``opentelemetry-sdk`` and + ``azure-monitor-opentelemetry-exporter``. 
+ """ + + def __init__( + self, + connection_string: Optional[str] = None, + ) -> None: + self._enabled = _HAS_OTEL + self._tracer: Any = None + self._propagator: Any = None + + # Resolve agent identity from environment variables. + self._agent_name = _config.resolve_agent_name() + self._agent_version = _config.resolve_agent_version() + self._project_id = _config.resolve_project_id() + + # gen_ai.agent.id format: + # "{name}:{version}" when both present, "{name}" when only name, "" otherwise + if self._agent_name and self._agent_version: + self._agent_id = f"{self._agent_name}:{self._agent_version}" + elif self._agent_name: + self._agent_id = self._agent_name + else: + self._agent_id = "" + + if not self._enabled: + logger.warning( + "Tracing was enabled but opentelemetry-api is not installed. " + "Install it with: pip install azure-ai-agentserver-core[tracing]" + ) + return + + # Create OTel resource once for all exporters + resource = _create_resource() + + # Ensure a single TracerProvider exists for all exporters. + # Create it once up front so that both Azure Monitor and OTLP + # exporters add processors to the same provider, regardless of + # which combination is configured or the order they are set up. 
+ trace_provider = _ensure_trace_provider(resource) + + if connection_string: + self._setup_azure_monitor(connection_string, resource, trace_provider) + + # OTLP exporter + otlp_endpoint = _config.resolve_otlp_endpoint() + if otlp_endpoint: + self._setup_otlp_export(otlp_endpoint, resource, trace_provider) + + self._tracer = trace.get_tracer("azure.ai.agentserver") + self._propagator = TraceContextTextMapPropagator() + + # ------------------------------------------------------------------ + # Azure Monitor auto-configuration + # ------------------------------------------------------------------ + + def _extract_context( + self, + carrier: Optional[dict[str, str]], + baggage_header: Optional[str] = None, + ) -> Any: + """Extract parent trace context from a W3C carrier dict. + + When a ``baggage`` header is provided and contains a + ``leaf_customer_span_id`` key, the parent span ID is overridden + so that the server's root span is parented under the leaf customer + span rather than the span referenced in the ``traceparent`` header. + + :param carrier: W3C trace-context headers or None. + :type carrier: Optional[dict[str, str]] + :param baggage_header: Raw ``baggage`` header value or None. + :type baggage_header: Optional[str] + :return: The extracted OTel context, or None. + :rtype: Any + """ + if not carrier or self._propagator is None: + return None + + ctx = self._propagator.extract(carrier=carrier) + + if not baggage_header: + return ctx + + leaf_span_id = _parse_baggage_key(baggage_header, _LEAF_CUSTOMER_SPAN_ID) + if not leaf_span_id: + return ctx + + return _override_parent_span_id(ctx, leaf_span_id) + + @staticmethod + def _setup_azure_monitor(connection_string: str, resource: Any, trace_provider: Any) -> None: + """Configure global TracerProvider and LoggerProvider for App Insights. 
+ + Sets up ``AzureMonitorTraceExporter`` so spans are exported to + Application Insights, and ``AzureMonitorLogExporter`` so log records + from the ``azure.ai.agentserver`` namespace are forwarded. + + If the required packages are not installed, a warning is logged and + export is silently skipped — span creation still works via the + default no-op or user-configured provider. + + :param connection_string: Application Insights connection string. + :type connection_string: str + :param resource: Pre-created OTel resource, or None. + :type resource: Any + :param trace_provider: The shared TracerProvider, or None. + :type trace_provider: Any + """ + if resource is None: + return + _setup_trace_export(trace_provider, connection_string) + _setup_log_export(resource, connection_string) + + @staticmethod + def _setup_otlp_export(endpoint: str, resource: Any, trace_provider: Any) -> None: + """Configure OTLP exporter for traces and logs. + + Per container-image-spec, when ``OTEL_EXPORTER_OTLP_ENDPOINT`` + is set, the container must register an OTLP exporter. + + :param endpoint: The OTLP collector endpoint URL. + :type endpoint: str + :param resource: Pre-created OTel resource, or None. + :type resource: Any + :param trace_provider: The shared TracerProvider, or None. + :type trace_provider: Any + """ + if resource is None: + return + _setup_otlp_trace_export(trace_provider, endpoint) + _setup_otlp_log_export(resource, endpoint) + + # ------------------------------------------------------------------ + # Span naming and attribute helpers (shared by all protocols) + # ------------------------------------------------------------------ + + def span_name(self, span_operation: str) -> str: + """Build a span name using the operation and agent label. + + Per invocation-protocol-spec: + ``"invoke_agent {Name}:{Version}"`` or ``"invoke_agent {Name}"`` + or ``"invoke_agent"``. + + :param span_operation: The span operation (e.g. ``"invoke_agent"``). 
+ This becomes the first token of the OTel span name.
+ :type span_operation: str
+ :return: ``"{span_operation} {Name}:{Version}"`` when agent name and
+ version are resolved, ``"{span_operation} {Name}"`` with only a name,
+ or just ``"{span_operation}"``.
+ :rtype: str
+ """
+ if self._agent_id:
+ return f"{span_operation} {self._agent_id}"
+ return span_operation
+
+ def build_span_attrs(
+ self,
+ invocation_id: str,
+ session_id: str,
+ operation_name: Optional[str] = None,
+ ) -> dict[str, str]:
+ """Build GenAI semantic convention span attributes.
+
+ These attributes are common across all protocol heads.
+ Per invocation-protocol-spec.
+
+ :param invocation_id: The invocation/request ID for this request.
+ :type invocation_id: str
+ :param session_id: The session ID (empty string if absent).
+ :type session_id: str
+ :param operation_name: Optional ``gen_ai.operation.name`` value
+ (e.g. ``"invoke_agent"``). Omitted from the dict when *None*.
+ :type operation_name: Optional[str]
+ :return: Span attribute dict.
+ :rtype: dict[str, str]
+ """
+ attrs: dict[str, str] = {
+ # Identity & GenAI convention tags
+ _ATTR_SERVICE_NAME: _SERVICE_NAME_VALUE,
+ _ATTR_GEN_AI_SYSTEM: _GEN_AI_SYSTEM_VALUE,
+ _ATTR_GEN_AI_PROVIDER_NAME: _GEN_AI_PROVIDER_NAME_VALUE,
+ _ATTR_GEN_AI_RESPONSE_ID: invocation_id,
+ _ATTR_GEN_AI_AGENT_ID: self._agent_id,
+ }
+ if self._agent_name:
+ attrs[_ATTR_GEN_AI_AGENT_NAME] = self._agent_name
+ if self._agent_version:
+ attrs[_ATTR_GEN_AI_AGENT_VERSION] = self._agent_version
+ if operation_name:
+ attrs[_ATTR_GEN_AI_OPERATION_NAME] = operation_name
+ if session_id:
+ attrs[_ATTR_GEN_AI_CONVERSATION_ID] = session_id
+ if self._project_id:
+ attrs[_ATTR_FOUNDRY_PROJECT_ID] = self._project_id
+ return attrs
+
+ @contextmanager
+ def span(
+ self,
+ name: str,
+ attributes: Optional[dict[str, str]] = None,
+ carrier: Optional[dict[str, str]] = None,
+ baggage_header: Optional[str] = None,
+ ) -> Iterator[Any]:
+ """Create a traced span if tracing is enabled, otherwise no-op.
+
+ Yields the OpenTelemetry span object when tracing is active, or
+ ``None`` when tracing is disabled.
Callers may use the yielded span + together with :meth:`record_error` to attach error information. + + :param name: Span name, e.g. ``"invoke_agent my_agent:1.0"``. + :type name: str + :param attributes: Key-value span attributes. + :type attributes: Optional[dict[str, str]] + :param carrier: Incoming HTTP headers for W3C trace-context propagation. + :type carrier: Optional[dict[str, str]] + :param baggage_header: Raw ``baggage`` header value for + ``leaf_customer_span_id`` extraction. + :type baggage_header: Optional[str] + :return: Context manager that yields the OTel span or *None*. + :rtype: Iterator[Any] + """ + if not self._enabled or self._tracer is None: + yield None + return + + ctx = self._extract_context(carrier, baggage_header) + + with self._tracer.start_as_current_span( + name=name, + attributes=attributes or {}, + kind=trace.SpanKind.SERVER, + context=ctx, + ) as otel_span: + yield otel_span + + def start_span( + self, + name: str, + attributes: Optional[dict[str, str]] = None, + carrier: Optional[dict[str, str]] = None, + baggage_header: Optional[str] = None, + ) -> Any: + """Start a span without a context manager. + + Use this for streaming responses where the span must outlive the + initial ``invoke()`` call. The caller **must** call :meth:`end_span` + when the work is finished. + + :param name: Span name, e.g. ``"invoke_agent my_agent:1.0"``. + :type name: str + :param attributes: Key-value span attributes. + :type attributes: Optional[dict[str, str]] + :param carrier: Incoming HTTP headers for W3C trace-context propagation. + :type carrier: Optional[dict[str, str]] + :param baggage_header: Raw ``baggage`` header value for + ``leaf_customer_span_id`` extraction. + :type baggage_header: Optional[str] + :return: The OTel span, or *None* when tracing is disabled. 
+ :rtype: Any + """ + if not self._enabled or self._tracer is None: + return None + + ctx = self._extract_context(carrier, baggage_header) + + return self._tracer.start_span( + name=name, + attributes=attributes or {}, + kind=trace.SpanKind.SERVER, + context=ctx, + ) + + # ------------------------------------------------------------------ + # Request-level convenience wrappers + # ------------------------------------------------------------------ + + def _prepare_request_span_args( + self, + headers: Mapping[str, str], + invocation_id: str, + span_operation: str, + operation_name: Optional[str] = None, + session_id: str = "", + ) -> tuple[str, dict[str, str], dict[str, str], Optional[str]]: + """Extract headers and build span arguments for a request. + + Shared pipeline used by :meth:`start_request_span` and + :meth:`request_span` to avoid duplicating header extraction, + attribute building, and span naming. + + :param headers: HTTP request headers (any ``Mapping[str, str]``). + :type headers: Mapping[str, str] + :param invocation_id: The invocation/request ID. + :type invocation_id: str + :param span_operation: Span operation (e.g. ``"invoke_agent"``). + :type span_operation: str + :param operation_name: Optional ``gen_ai.operation.name`` value. + :type operation_name: Optional[str] + :param session_id: Session ID from the ``agent_session_id`` query + parameter. Defaults to ``""`` (no session). + :type session_id: str + :return: ``(name, attributes, carrier, baggage)`` ready for + :meth:`span` or :meth:`start_span`. 
+ :rtype: tuple[str, dict[str, str], dict[str, str], Optional[str]] + """ + carrier = _extract_w3c_carrier(headers) + baggage = headers.get("baggage") + span_attrs = self.build_span_attrs( + invocation_id, session_id, operation_name=operation_name + ) + return self.span_name(span_operation), span_attrs, carrier, baggage + + def start_request_span( + self, + headers: Mapping[str, str], + invocation_id: str, + span_operation: str, + operation_name: Optional[str] = None, + session_id: str = "", + ) -> Any: + """Start a request-scoped span, extracting context from HTTP headers. + + Convenience method that combines header extraction, attribute + building, span naming, and span creation into a single call. + Use for streaming responses where the span must outlive the + initial handler call. The caller **must** call :meth:`end_span` + when work is finished. + + :param headers: HTTP request headers (any ``Mapping[str, str]``). + :type headers: Mapping[str, str] + :param invocation_id: The invocation/request ID. + :type invocation_id: str + :param span_operation: Span operation (e.g. ``"invoke_agent"``). + Becomes the first token of the OTel span name via + :meth:`span_name`. + :type span_operation: str + :param operation_name: Optional ``gen_ai.operation.name`` attribute + value (e.g. ``"invoke_agent"``). Omitted when *None*. + :type operation_name: Optional[str] + :param session_id: Session ID from the ``agent_session_id`` query + parameter. Defaults to ``""`` (no session). + :type session_id: str + :return: The OTel span, or *None* when tracing is disabled. 
+ :rtype: Any + """ + name, attrs, carrier, baggage = self._prepare_request_span_args( + headers, invocation_id, span_operation, operation_name, + session_id=session_id, + ) + return self.start_span(name, attributes=attrs, carrier=carrier, baggage_header=baggage) + + @contextmanager + def request_span( + self, + headers: Mapping[str, str], + invocation_id: str, + span_operation: str, + operation_name: Optional[str] = None, + session_id: str = "", + ) -> Iterator[Any]: + """Create a request-scoped span as a context manager. + + Convenience method that combines header extraction, attribute + building, span naming, and span creation into a single call. + Use for non-streaming request handlers where the span should + cover the entire handler execution. + + :param headers: HTTP request headers (any ``Mapping[str, str]``). + :type headers: Mapping[str, str] + :param invocation_id: The invocation/request ID. + :type invocation_id: str + :param span_operation: Span operation (e.g. ``"get_invocation"``). + Becomes the first token of the OTel span name via + :meth:`span_name`. + :type span_operation: str + :param operation_name: Optional ``gen_ai.operation.name`` attribute + value. Omitted when *None*. + :type operation_name: Optional[str] + :param session_id: Session ID from the ``agent_session_id`` query + parameter. Defaults to ``""`` (no session). + :type session_id: str + :return: Context manager that yields the OTel span or *None*. 
+ :rtype: Iterator[Any] + """ + name, attrs, carrier, baggage = self._prepare_request_span_args( + headers, invocation_id, span_operation, operation_name, + session_id=session_id, + ) + with self.span(name, attributes=attrs, carrier=carrier, baggage_header=baggage) as otel_span: + yield otel_span + + # ------------------------------------------------------------------ + # Span lifecycle helpers + # ------------------------------------------------------------------ + + def end_span(self, span: Any, exc: Optional[BaseException] = None) -> None: + """End a span started with :meth:`start_span`. + + Optionally records an error before ending. No-op when *span* is + ``None`` (tracing disabled). + + :param span: The OTel span, or *None*. + :type span: Any + :param exc: Optional exception to record before ending. + :type exc: Optional[BaseException] + """ + if span is None: + return + if exc is not None: + self.record_error(span, exc) + span.end() + + @staticmethod + def record_error(span: Any, exc: BaseException) -> None: + """Record an exception and ERROR status on a span. + + No-op when *span* is ``None`` (tracing disabled) or when + ``opentelemetry-api`` is not installed. + + :param span: The OTel span returned by :meth:`span`, or *None*. + :type span: Any + :param exc: The exception to record. + :type exc: BaseException + """ + if span is not None and _HAS_OTEL: + span.set_status(trace.StatusCode.ERROR, str(exc)) + span.record_exception(exc) + + @staticmethod + def set_baggage(keys: dict[str, str]) -> Any: + """Set W3C Baggage entries on the current context. + + Baggage keys propagate to downstream services via + the ``baggage`` header. No-op when the OTel baggage API is not + available. + + :param keys: Mapping of baggage key → value to set. + :type keys: dict[str, str] + :return: A context token that must be passed to :meth:`detach_baggage` + when the scope ends, or *None* when baggage is unavailable. 
+ :rtype: Any + """ + if not _HAS_BAGGAGE: + return None + ctx = _otel_context.get_current() + for key, value in keys.items(): + ctx = _otel_baggage.set_baggage(key, value, context=ctx) + return _otel_context.attach(ctx) + + @staticmethod + def detach_baggage(token: Any) -> None: + """Detach a baggage context previously attached by :meth:`set_baggage`. + + :param token: The token returned by :meth:`set_baggage`. + :type token: Any + """ + if token is not None and _HAS_BAGGAGE: + _otel_context.detach(token) + + async def trace_stream( + self, iterator: AsyncIterable[_Content], span: Any + ) -> AsyncIterator[_Content]: + """Wrap a streaming body iterator so the tracing span covers the full + duration of data transmission. + + Yields chunks from *iterator* unchanged. When the iterator is + exhausted or raises an exception the span is ended (with error status + if applicable). Safe to call when tracing is disabled (*span* is + ``None``). + + :param iterator: The original async body iterator from + :class:`~starlette.responses.StreamingResponse`. + :type iterator: AsyncIterable[Union[str, bytes, memoryview]] + :param span: The OTel span (or *None* when tracing is disabled). + :type span: Any + :return: An async iterator that yields chunks unchanged. + :rtype: AsyncIterator[Union[str, bytes, memoryview]] + """ + error: Optional[BaseException] = None + try: + async for chunk in iterator: + yield chunk + except BaseException as exc: + error = exc + raise + finally: + self.end_span(span, exc=error) + + +def _create_resource() -> Any: + """Create the OTel resource for exporters. + + :return: A :class:`~opentelemetry.sdk.resources.Resource`, or *None* + if the required packages are not installed. + :rtype: Any + """ + try: + from opentelemetry.sdk.resources import Resource + except ImportError: + logger.warning( + "Required OTel SDK packages are not installed. 
Install them with: " + "pip install azure-ai-agentserver-core[tracing]" + ) + return None + return Resource.create({_ATTR_SERVICE_NAME: _SERVICE_NAME_VALUE}) + + +def _ensure_trace_provider(resource: Any) -> Any: + """Return or create the global :class:`TracerProvider`. + + If a user-configured ``TracerProvider`` already exists (one that + supports ``add_span_processor``), it is reused. Otherwise a new + ``SdkTracerProvider`` is created with the given *resource* and set + as the global provider. + + Creating the provider once and passing it to both + :func:`_setup_trace_export` and :func:`_setup_otlp_trace_export` + removes the order-dependent initialization that existed previously. + + :param resource: The OTel resource describing this service, or *None*. + :type resource: Any + :return: A ``TracerProvider``, or *None* if the SDK is not installed. + :rtype: Any + """ + # Called only when _HAS_OTEL is True, so the module-level ``trace`` + # import is guaranteed to be bound. + if resource is None: + return None + try: + from opentelemetry.sdk.trace import TracerProvider as SdkTracerProvider + except ImportError: + return None + + current_provider = trace.get_tracer_provider() + if hasattr(current_provider, "add_span_processor"): + return current_provider + + provider = SdkTracerProvider(resource=resource) + trace.set_tracer_provider(provider) + return provider + + +def _setup_trace_export(provider: Any, connection_string: str) -> None: + """Add an Azure Monitor span processor to the given *provider*. + + :param provider: The TracerProvider to attach the exporter to, or *None*. + :type provider: Any + :param connection_string: Application Insights connection string. 
+ :type connection_string: str + """ + if provider is None: + return + try: + from opentelemetry.sdk.trace.export import BatchSpanProcessor + + from azure.monitor.opentelemetry.exporter import ( # type: ignore[import-untyped] + AzureMonitorTraceExporter, + ) + except ImportError: + logger.warning( + "Trace export to Application Insights requires " + "opentelemetry-sdk and azure-monitor-opentelemetry-exporter. " + "Traces will not be forwarded." + ) + return + + exporter = AzureMonitorTraceExporter(connection_string=connection_string) + provider.add_span_processor(BatchSpanProcessor(exporter)) + logger.info("Application Insights trace exporter configured.") + + +def _setup_log_export(resource: Any, connection_string: str) -> None: + """Configure a global :class:`LoggerProvider` that exports to App Insights. + + :param resource: The OTel resource describing this service. + :type resource: Any + :param connection_string: Application Insights connection string. + :type connection_string: str + """ + try: + from opentelemetry._logs import set_logger_provider + from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler + from opentelemetry.sdk._logs.export import BatchLogRecordProcessor + + from azure.monitor.opentelemetry.exporter import ( # type: ignore[import-untyped] + AzureMonitorLogExporter, + ) + except ImportError: + logger.warning( + "Log export to Application Insights requires " + "opentelemetry-sdk. Logs will not be forwarded." 
+ ) + return + + log_provider = LoggerProvider(resource=resource) + set_logger_provider(log_provider) + log_exporter = AzureMonitorLogExporter(connection_string=connection_string) + log_provider.add_log_record_processor(BatchLogRecordProcessor(log_exporter)) + handler = LoggingHandler(logger_provider=log_provider) + logging.getLogger("azure.ai.agentserver").addHandler(handler) + logger.info("Application Insights log exporter configured.") + + +def _setup_otlp_trace_export(provider: Any, endpoint: str) -> None: + """Add an OTLP span processor to the given *provider*. + + :param provider: The TracerProvider to attach the exporter to, or *None*. + :type provider: Any + :param endpoint: The OTLP collector endpoint URL. + :type endpoint: str + """ + if provider is None: + return + try: + from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter + from opentelemetry.sdk.trace.export import BatchSpanProcessor + except ImportError: + logger.warning( + "OTLP trace export requires opentelemetry-sdk and " + "opentelemetry-exporter-otlp-proto-grpc. " + "Traces will not be forwarded via OTLP." + ) + return + + exporter = OTLPSpanExporter(endpoint=endpoint) + provider.add_span_processor(BatchSpanProcessor(exporter)) + logger.info("OTLP trace exporter configured (endpoint=%s).", endpoint) + + +def _setup_otlp_log_export(resource: Any, endpoint: str) -> None: + """Configure OTLP log exporter. + + :param resource: The OTel resource describing this service. + :type resource: Any + :param endpoint: The OTLP collector endpoint URL. + :type endpoint: str + """ + try: + from opentelemetry._logs import get_logger_provider + from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter + from opentelemetry.sdk._logs import LoggerProvider + from opentelemetry.sdk._logs.export import BatchLogRecordProcessor + except ImportError: + logger.warning( + "OTLP log export requires opentelemetry-sdk and " + "opentelemetry-exporter-otlp-proto-grpc. 
" + "Logs will not be forwarded via OTLP." + ) + return + + current_provider = get_logger_provider() + if hasattr(current_provider, "add_log_record_processor"): + log_provider = current_provider + else: + from opentelemetry._logs import set_logger_provider + + log_provider = LoggerProvider(resource=resource) + set_logger_provider(log_provider) + + log_exporter = OTLPLogExporter(endpoint=endpoint) + log_provider.add_log_record_processor(BatchLogRecordProcessor(log_exporter)) # type: ignore[union-attr] + logger.info("OTLP log exporter configured (endpoint=%s).", endpoint) + + +def _extract_w3c_carrier(headers: Mapping[str, str]) -> dict[str, str]: + """Extract W3C trace-context headers from a mapping. + + Filters the input to only ``traceparent`` and ``tracestate`` — the two + headers defined by the `W3C Trace Context`_ standard. This avoids + passing unrelated headers (e.g. ``authorization``, ``cookie``) into the + OpenTelemetry propagator. + + .. _W3C Trace Context: https://www.w3.org/TR/trace-context/ + + :param headers: A mapping of header name to value (e.g. + ``request.headers``). + :type headers: Mapping[str, str] + :return: A dict containing only the W3C propagation headers present + in *headers*. + :rtype: dict[str, str] + """ + result: dict[str, str] = {k: v for k in _W3C_HEADERS if (v := headers.get(k)) is not None} + return result + + +def _parse_baggage_key(baggage: str, key: str) -> Optional[str]: + """Parse a single key from a W3C Baggage header value. + + The `W3C Baggage`_ format is a comma-separated list of + ``key=value`` pairs with optional properties after a ``;``. + + Example:: + + leaf_customer_span_id=abc123,other=val + + .. _W3C Baggage: https://www.w3.org/TR/baggage/ + + :param baggage: The raw header value. + :type baggage: str + :param key: The baggage key to look up. + :type key: str + :return: The value for *key*, or *None* if not found. 
+ :rtype: Optional[str] + """ + for member in baggage.split(","): + member = member.strip() + if not member: + continue + # Split on first '=' only; value may contain '=' + kv_part = member.split(";", 1)[0] # strip optional properties + eq_idx = kv_part.find("=") + if eq_idx < 0: + continue + k = kv_part[:eq_idx].strip() + v = kv_part[eq_idx + 1:].strip() + if k == key: + return v + return None + + +def _override_parent_span_id(ctx: Any, hex_span_id: str) -> Any: + """Create a new context with the same trace ID but a different parent span ID. + + Constructs a :class:`~opentelemetry.trace.SpanContext` with the trace ID + taken from the existing context and the span ID replaced by + *hex_span_id*. The resulting context can be used as the ``context`` + argument to ``start_span`` / ``start_as_current_span``. + + Returns the original *ctx* unchanged if *hex_span_id* is invalid or + ``opentelemetry-api`` is not installed. + + Per invocation-protocol-spec. + + :param ctx: An OTel context produced by ``TraceContextTextMapPropagator.extract()``. + :type ctx: Any + :param hex_span_id: 16-character lower-case hex string representing the + desired parent span ID. + :type hex_span_id: str + :return: A context with the overridden parent span ID, or the original. + :rtype: Any + """ + if not _HAS_OTEL: + return ctx + + # A valid OTel span ID is exactly 16 hex characters (8 bytes). + if len(hex_span_id) != 16: + logger.warning("Invalid leaf_customer_span_id length in baggage: %r (expected 16 hex chars)", hex_span_id) + return ctx + + try: + new_span_id = int(hex_span_id, 16) + except (ValueError, TypeError): + logger.warning("Invalid leaf_customer_span_id in baggage: %r", hex_span_id) + return ctx + + if new_span_id == 0: + return ctx + + # Grab the trace ID from the current parent span in ctx. 
+ current_span = trace.get_current_span(ctx) + current_ctx = current_span.get_span_context() + if current_ctx is None or not current_ctx.is_valid: + return ctx + + custom_span_ctx = trace.SpanContext( + trace_id=current_ctx.trace_id, + span_id=new_span_id, + is_remote=True, + trace_flags=current_ctx.trace_flags, + trace_state=current_ctx.trace_state, + ) + custom_parent = trace.NonRecordingSpan(custom_span_ctx) + return trace.set_span_in_context(custom_parent, ctx) diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_version.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_version.py index be71c81bd282..71775f48670c 100644 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_version.py +++ b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/_version.py @@ -1,9 +1,5 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- +# --------------------------------------------------------- # Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- +# --------------------------------------------------------- -VERSION = "1.0.0b1" +VERSION = "2.0.0b1" diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/constants.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/constants.py deleted file mode 100644 index a13f23aa261e..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/constants.py +++ /dev/null @@ -1,14 +0,0 @@ -# --------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. 
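The baggage parsing and span-ID validation added in the hunk above can be exercised in isolation. Below is a minimal standalone sketch of the same W3C Baggage parsing and 16-hex-char span-ID checks; the function names here (`parse_baggage_key`, `is_valid_span_id`) are illustrative, not part of the package's public API, and the sketch omits the OpenTelemetry context plumbing:

```python
from typing import Optional


def parse_baggage_key(baggage: str, key: str) -> Optional[str]:
    """Return the value for *key* from a W3C Baggage header value, or None."""
    for member in baggage.split(","):
        member = member.strip()
        if not member:
            continue
        # Strip optional properties after ';', then split on the first '='.
        kv_part = member.split(";", 1)[0]
        eq_idx = kv_part.find("=")
        if eq_idx < 0:
            continue
        if kv_part[:eq_idx].strip() == key:
            return kv_part[eq_idx + 1:].strip()
    return None


def is_valid_span_id(hex_span_id: str) -> bool:
    """A usable OTel parent span ID is exactly 16 hex chars and non-zero."""
    if len(hex_span_id) != 16:
        return False
    try:
        return int(hex_span_id, 16) != 0
    except ValueError:
        return False


baggage = "leaf_customer_span_id=00f067aa0ba902b7;prop=1,other=val"
span_id = parse_baggage_key(baggage, "leaf_customer_span_id")
print(span_id)                       # 00f067aa0ba902b7
print(is_valid_span_id(span_id))     # True
print(is_valid_span_id("0" * 16))    # False: the all-zero span ID is invalid
```

Note that properties after `;` are stripped before the key/value split, so `leaf_customer_span_id=abc;prop=1` still yields `abc`.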
-# --------------------------------------------------------- -class Constants: - # well-known environment variables - APPLICATION_INSIGHTS_CONNECTION_STRING = "_AGENT_RUNTIME_APP_INSIGHTS_CONNECTION_STRING" - AZURE_AI_PROJECT_ENDPOINT = "AZURE_AI_PROJECT_ENDPOINT" - AGENT_ID = "AGENT_ID" - AGENT_NAME = "AGENT_NAME" - AGENT_PROJECT_RESOURCE_ID = "AGENT_PROJECT_NAME" - OTEL_EXPORTER_ENDPOINT = "OTEL_EXPORTER_ENDPOINT" - AGENT_LOG_LEVEL = "AGENT_LOG_LEVEL" - AGENT_DEBUG_ERRORS = "AGENT_DEBUG_ERRORS" - ENABLE_APPLICATION_INSIGHTS_LOGGER = "ENABLE_APPLICATION_INSIGHTS_LOGGER" diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/logger.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/logger.py deleted file mode 100644 index f062398c0d3b..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/logger.py +++ /dev/null @@ -1,159 +0,0 @@ -# pylint: disable=broad-exception-caught,dangerous-default-value -# --------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. 
-# --------------------------------------------------------- -import contextvars -import logging -import os -from logging import config - -from ._version import VERSION -from .constants import Constants - -default_log_config = { - "version": 1, - "disable_existing_loggers": False, - "loggers": { - "azure.ai.agentserver": { - "handlers": ["console"], - "level": "INFO", - "propagate": False, - }, - }, - "handlers": { - "console": {"formatter": "std_out", "class": "logging.StreamHandler", "level": "INFO"}, - }, - "formatters": {"std_out": {"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"}}, -} - -request_context = contextvars.ContextVar("request_context", default=None) - - -def get_dimensions(): - env_values = {name: value for name, value in vars(Constants).items() if not name.startswith("_")} - res = {"azure.ai.agentserver.version": VERSION} - for name, env_name in env_values.items(): - if isinstance(env_name, str) and not env_name.startswith("_"): - runtime_value = os.environ.get(env_name) - if runtime_value: - res[f"azure.ai.agentserver.{name.lower()}"] = runtime_value - return res - - -def get_project_endpoint(): - project_resource_id = os.environ.get(Constants.AGENT_PROJECT_RESOURCE_ID) - if project_resource_id: - last_part = project_resource_id.split("/")[-1] - - parts = last_part.split("@") - if len(parts) < 2: - print(f"invalid project resource id: {project_resource_id}") - return None - account = parts[0] - project = parts[1] - return f"https://{account}.services.ai.azure.com/api/projects/{project}" - print("environment variable AGENT_PROJECT_RESOURCE_ID not set.") - return None - - -def get_application_insights_connstr(): - try: - conn_str = os.environ.get(Constants.APPLICATION_INSIGHTS_CONNECTION_STRING) - if not conn_str: - print("environment variable APPLICATION_INSIGHTS_CONNECTION_STRING not set.") - project_endpoint = get_project_endpoint() - if project_endpoint: - # try to get the project connected application insights - from 
azure.ai.projects import AIProjectClient - from azure.identity import DefaultAzureCredential - - project_client = AIProjectClient(credential=DefaultAzureCredential(), endpoint=project_endpoint) - conn_str = project_client.telemetry.get_application_insights_connection_string() - if not conn_str: - print(f"no connected application insights found for project:{project_endpoint}") - else: - os.environ[Constants.APPLICATION_INSIGHTS_CONNECTION_STRING] = conn_str - return conn_str - except Exception as e: - print(f"failed to get application insights with error: {e}") - return None - - -class CustomDimensionsFilter(logging.Filter): - def filter(self, record): - # Add custom dimensions to every log record - dimensions = get_dimensions() - for key, value in dimensions.items(): - setattr(record, key, value) - cur_request_context = request_context.get() - if cur_request_context: - for key, value in cur_request_context.items(): - setattr(record, key, value) - return True - - -def configure(log_config: dict = default_log_config): - """ - Configure logging based on the provided configuration dictionary. - The dictionary should contain the logging configuration in a format compatible with `logging.config.dictConfig`. - - :param log_config: A dictionary containing logging configuration. 
- :type log_config: dict - """ - try: - config.dictConfig(log_config) - - application_insights_connection_string = get_application_insights_connstr() - enable_application_insights_logger = ( - os.environ.get(Constants.ENABLE_APPLICATION_INSIGHTS_LOGGER, "true").lower() == "true" - ) - if application_insights_connection_string and enable_application_insights_logger: - from opentelemetry._logs import set_logger_provider - from opentelemetry.sdk._logs import ( - LoggerProvider, - LoggingHandler, - ) - from opentelemetry.sdk._logs.export import BatchLogRecordProcessor - from opentelemetry.sdk.resources import Resource - - from azure.monitor.opentelemetry.exporter import AzureMonitorLogExporter - - logger_provider = LoggerProvider(resource=Resource.create({"service.name": "azure.ai.agentserver"})) - set_logger_provider(logger_provider) - - exporter = AzureMonitorLogExporter(connection_string=application_insights_connection_string) - - logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter)) - handler = LoggingHandler(logger_provider=logger_provider) - handler.name = "appinsights_handler" - - # Add custom filter to inject dimensions - custom_filter = CustomDimensionsFilter() - handler.addFilter(custom_filter) - - # Only add to azure.ai.agentserver namespace to avoid infrastructure logs - app_logger = logging.getLogger("azure.ai.agentserver") - app_logger.setLevel(get_log_level()) - app_logger.addHandler(handler) - - except Exception as e: - print(f"Failed to configure logging: {e}") - - -def get_log_level(): - log_level = os.getenv(Constants.AGENT_LOG_LEVEL, "INFO").upper() - valid_levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] - if log_level not in valid_levels: - print(f"Invalid log level '{log_level}' specified. Defaulting to 'INFO'.") - log_level = "INFO" - return log_level - - -def get_logger() -> logging.Logger: - """ - If the logger is not already configured, it will be initialized with default settings. 
- - :return: Configured logger instance. - :rtype: logging.Logger - """ - return logging.getLogger("azure.ai.agentserver") diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/__init__.py deleted file mode 100644 index d5622ebe7732..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# --------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# --------------------------------------------------------- -from ._create_response import CreateResponse # type: ignore -from .projects import Response, ResponseStreamEvent - -__all__ = ["CreateResponse", "Response", "ResponseStreamEvent"] # type: ignore[var-annotated] diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/_create_response.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/_create_response.py deleted file mode 100644 index a38f55408c7f..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/_create_response.py +++ /dev/null @@ -1,12 +0,0 @@ -# --------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# --------------------------------------------------------- -# pylint: disable=no-name-in-module -from typing import Optional - -from .openai import response_create_params # type: ignore -from . 
import projects as _azure_ai_projects_models - -class CreateResponse(response_create_params.ResponseCreateParamsBase, total=False): # type: ignore - agent: Optional[_azure_ai_projects_models.AgentReference] - stream: Optional[bool] diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/openai/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/openai/__init__.py deleted file mode 100644 index ecf2179f53b7..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/openai/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# --------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# --------------------------------------------------------- -""" -Re-exports of OpenAI SDK response types. - -This module re-exports types from the OpenAI SDK for convenience. -These types are fully documented in the OpenAI SDK documentation. - -.. note:: - This module re-exports OpenAI SDK types. For detailed documentation, - please refer to the `OpenAI Python SDK documentation `_. -""" -from openai.types.responses import * # pylint: disable=unused-wildcard-import - -__all__ = [name for name in globals() if not name.startswith("_")] # type: ignore[var-annotated] diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/__init__.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/__init__.py deleted file mode 100644 index f65ea1133818..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/__init__.py +++ /dev/null @@ -1,820 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. 
-# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -# pylint: disable=wrong-import-position - -from typing import TYPE_CHECKING - -if TYPE_CHECKING: - from ._patch import * # pylint: disable=unused-wildcard-import - - -from ._models import ( # type: ignore - A2ATool, - AISearchIndexResource, - AgentClusterInsightResult, - AgentClusterInsightsRequest, - AgentContainerObject, - AgentContainerOperationError, - AgentContainerOperationObject, - AgentDefinition, - AgentId, - AgentObject, - AgentObjectVersions, - AgentReference, - AgentTaxonomyInput, - AgentVersionObject, - AgenticIdentityCredentials, - Annotation, - AnnotationFileCitation, - AnnotationFilePath, - AnnotationUrlCitation, - ApiError, - ApiErrorResponse, - ApiInnerError, - ApiKeyCredentials, - ApproximateLocation, - AzureAIAgentTarget, - AzureAISearchAgentTool, - AzureAISearchIndex, - AzureAISearchToolResource, - AzureFunctionAgentTool, - AzureFunctionBinding, - AzureFunctionDefinition, - AzureFunctionDefinitionFunction, - AzureFunctionStorageQueue, - AzureOpenAIModelConfiguration, - BaseCredentials, - BingCustomSearchAgentTool, - BingCustomSearchConfiguration, - BingCustomSearchToolParameters, - BingGroundingAgentTool, - BingGroundingSearchConfiguration, - BingGroundingSearchToolParameters, - BlobReference, - BlobReferenceSasCredential, - BrowserAutomationAgentTool, - BrowserAutomationToolConnectionParameters, - BrowserAutomationToolParameters, - CaptureStructuredOutputsTool, - ChartCoordinate, - ChatSummaryMemoryItem, - ClusterInsightResult, - ClusterTokenUsage, - CodeBasedEvaluatorDefinition, - CodeInterpreterOutput, - CodeInterpreterOutputImage, - CodeInterpreterOutputLogs, - CodeInterpreterTool, - CodeInterpreterToolAuto, - CodeInterpreterToolCallItemParam, - CodeInterpreterToolCallItemResource, - ComparisonFilter, - CompoundFilter, - 
ComputerAction, - ComputerActionClick, - ComputerActionDoubleClick, - ComputerActionDrag, - ComputerActionKeyPress, - ComputerActionMove, - ComputerActionScreenshot, - ComputerActionScroll, - ComputerActionTypeKeys, - ComputerActionWait, - ComputerToolCallItemParam, - ComputerToolCallItemResource, - ComputerToolCallOutputItemOutput, - ComputerToolCallOutputItemOutputComputerScreenshot, - ComputerToolCallOutputItemParam, - ComputerToolCallOutputItemResource, - ComputerToolCallSafetyCheck, - ComputerUsePreviewTool, - Connection, - ContainerAppAgentDefinition, - ContinuousEvaluationRuleAction, - Coordinate, - CosmosDBIndex, - CreatedBy, - CronTrigger, - CustomCredential, - DailyRecurrenceSchedule, - DatasetCredential, - DatasetVersion, - DeleteAgentResponse, - DeleteAgentVersionResponse, - DeleteMemoryStoreResponse, - Deployment, - EmbeddingConfiguration, - EntraIDCredentials, - EvalCompareReport, - EvalResult, - EvalRunResultCompareItem, - EvalRunResultComparison, - EvalRunResultSummary, - EvaluationComparisonRequest, - EvaluationResultSample, - EvaluationRule, - EvaluationRuleAction, - EvaluationRuleFilter, - EvaluationRunClusterInsightResult, - EvaluationRunClusterInsightsRequest, - EvaluationScheduleTask, - EvaluationTaxonomy, - EvaluationTaxonomyInput, - EvaluatorDefinition, - EvaluatorMetric, - EvaluatorVersion, - FabricDataAgentToolParameters, - FieldMapping, - FileDatasetVersion, - FileSearchTool, - FileSearchToolCallItemParam, - FileSearchToolCallItemParamResult, - FileSearchToolCallItemResource, - FolderDatasetVersion, - FunctionTool, - FunctionToolCallItemParam, - FunctionToolCallItemResource, - FunctionToolCallOutputItemParam, - FunctionToolCallOutputItemResource, - HostedAgentDefinition, - HourlyRecurrenceSchedule, - HumanEvaluationRuleAction, - ImageBasedHostedAgentDefinition, - ImageGenTool, - ImageGenToolCallItemParam, - ImageGenToolCallItemResource, - ImageGenToolInputImageMask, - Index, - Insight, - InsightCluster, - InsightModelConfiguration, - 
InsightRequest, - InsightResult, - InsightSample, - InsightScheduleTask, - InsightSummary, - InsightsMetadata, - InvokeAzureAgentWorkflowActionOutputItemResource, - ItemContent, - ItemContentInputAudio, - ItemContentInputFile, - ItemContentInputImage, - ItemContentInputText, - ItemContentOutputAudio, - ItemContentOutputText, - ItemContentRefusal, - ItemParam, - ItemReferenceItemParam, - ItemResource, - LocalShellExecAction, - LocalShellTool, - LocalShellToolCallItemParam, - LocalShellToolCallItemResource, - LocalShellToolCallOutputItemParam, - LocalShellToolCallOutputItemResource, - Location, - LogProb, - MCPApprovalRequestItemParam, - MCPApprovalRequestItemResource, - MCPApprovalResponseItemParam, - MCPApprovalResponseItemResource, - MCPCallItemParam, - MCPCallItemResource, - MCPListToolsItemParam, - MCPListToolsItemResource, - MCPListToolsTool, - MCPTool, - MCPToolAllowedTools1, - MCPToolRequireApproval1, - MCPToolRequireApprovalAlways, - MCPToolRequireApprovalNever, - ManagedAzureAISearchIndex, - MemoryItem, - MemoryOperation, - MemorySearchItem, - MemorySearchOptions, - MemorySearchTool, - MemorySearchToolCallItemParam, - MemorySearchToolCallItemResource, - MemoryStoreDefaultDefinition, - MemoryStoreDefaultOptions, - MemoryStoreDefinition, - MemoryStoreDeleteScopeResponse, - MemoryStoreObject, - MemoryStoreOperationUsage, - MemoryStoreOperationUsageInputTokensDetails, - MemoryStoreOperationUsageOutputTokensDetails, - MemoryStoreSearchResponse, - MemoryStoreUpdateResponse, - MemoryStoreUpdateResult, - MicrosoftFabricAgentTool, - ModelDeployment, - ModelDeploymentSku, - MonthlyRecurrenceSchedule, - NoAuthenticationCredentials, - OAuthConsentRequestItemResource, - OneTimeTrigger, - OpenApiAgentTool, - OpenApiAnonymousAuthDetails, - OpenApiAuthDetails, - OpenApiFunctionDefinition, - OpenApiFunctionDefinitionFunction, - OpenApiManagedAuthDetails, - OpenApiManagedSecurityScheme, - OpenApiProjectConnectionAuthDetails, - OpenApiProjectConnectionSecurityScheme, - 
PagedScheduleRun, - PendingUploadRequest, - PendingUploadResponse, - Prompt, - PromptAgentDefinition, - PromptAgentDefinitionText, - PromptBasedEvaluatorDefinition, - ProtocolVersionRecord, - RaiConfig, - RankingOptions, - Reasoning, - ReasoningItemParam, - ReasoningItemResource, - ReasoningItemSummaryPart, - ReasoningItemSummaryTextPart, - RecurrenceSchedule, - RecurrenceTrigger, - RedTeam, - Response, - ResponseCodeInterpreterCallCodeDeltaEvent, - ResponseCodeInterpreterCallCodeDoneEvent, - ResponseCodeInterpreterCallCompletedEvent, - ResponseCodeInterpreterCallInProgressEvent, - ResponseCodeInterpreterCallInterpretingEvent, - ResponseCompletedEvent, - ResponseContentPartAddedEvent, - ResponseContentPartDoneEvent, - ResponseConversation1, - ResponseCreatedEvent, - ResponseError, - ResponseErrorEvent, - ResponseFailedEvent, - ResponseFileSearchCallCompletedEvent, - ResponseFileSearchCallInProgressEvent, - ResponseFileSearchCallSearchingEvent, - ResponseFormatJsonSchemaSchema, - ResponseFunctionCallArgumentsDeltaEvent, - ResponseFunctionCallArgumentsDoneEvent, - ResponseImageGenCallCompletedEvent, - ResponseImageGenCallGeneratingEvent, - ResponseImageGenCallInProgressEvent, - ResponseImageGenCallPartialImageEvent, - ResponseInProgressEvent, - ResponseIncompleteDetails1, - ResponseIncompleteEvent, - ResponseMCPCallArgumentsDeltaEvent, - ResponseMCPCallArgumentsDoneEvent, - ResponseMCPCallCompletedEvent, - ResponseMCPCallFailedEvent, - ResponseMCPCallInProgressEvent, - ResponseMCPListToolsCompletedEvent, - ResponseMCPListToolsFailedEvent, - ResponseMCPListToolsInProgressEvent, - ResponseOutputItemAddedEvent, - ResponseOutputItemDoneEvent, - ResponsePromptVariables, - ResponseQueuedEvent, - ResponseReasoningDeltaEvent, - ResponseReasoningDoneEvent, - ResponseReasoningSummaryDeltaEvent, - ResponseReasoningSummaryDoneEvent, - ResponseReasoningSummaryPartAddedEvent, - ResponseReasoningSummaryPartDoneEvent, - ResponseReasoningSummaryTextDeltaEvent, - 
ResponseReasoningSummaryTextDoneEvent, - ResponseRefusalDeltaEvent, - ResponseRefusalDoneEvent, - ResponseStreamEvent, - ResponseText, - ResponseTextDeltaEvent, - ResponseTextDoneEvent, - ResponseTextFormatConfiguration, - ResponseTextFormatConfigurationJsonObject, - ResponseTextFormatConfigurationJsonSchema, - ResponseTextFormatConfigurationText, - ResponseUsage, - ResponseWebSearchCallCompletedEvent, - ResponseWebSearchCallInProgressEvent, - ResponseWebSearchCallSearchingEvent, - ResponsesAssistantMessageItemParam, - ResponsesAssistantMessageItemResource, - ResponsesDeveloperMessageItemParam, - ResponsesDeveloperMessageItemResource, - ResponsesMessageItemParam, - ResponsesMessageItemResource, - ResponsesSystemMessageItemParam, - ResponsesSystemMessageItemResource, - ResponsesUserMessageItemParam, - ResponsesUserMessageItemResource, - SASCredentials, - Schedule, - ScheduleRun, - ScheduleTask, - SharepointAgentTool, - SharepointGroundingToolParameters, - StructuredInputDefinition, - StructuredOutputDefinition, - StructuredOutputsItemResource, - Target, - TargetConfig, - TaxonomyCategory, - TaxonomySubCategory, - Tool, - ToolArgumentBinding, - ToolChoiceObject, - ToolChoiceObjectCodeInterpreter, - ToolChoiceObjectComputer, - ToolChoiceObjectFileSearch, - ToolChoiceObjectFunction, - ToolChoiceObjectImageGen, - ToolChoiceObjectMCP, - ToolChoiceObjectWebSearch, - ToolDescription, - ToolProjectConnection, - ToolProjectConnectionList, - TopLogProb, - Trigger, - UserProfileMemoryItem, - VectorStoreFileAttributes, - WebSearchAction, - WebSearchActionFind, - WebSearchActionOpenPage, - WebSearchActionSearch, - WebSearchPreviewTool, - WebSearchToolCallItemParam, - WebSearchToolCallItemResource, - WeeklyRecurrenceSchedule, - WorkflowActionOutputItemResource, - WorkflowDefinition, -) - -from ._enums import ( # type: ignore - AgentContainerOperationStatus, - AgentContainerStatus, - AgentKind, - AgentProtocol, - AnnotationType, - AttackStrategy, - AzureAISearchQueryType, - 
CodeInterpreterOutputType, - ComputerActionType, - ComputerToolCallOutputItemOutputType, - ConnectionType, - CredentialType, - DatasetType, - DayOfWeek, - DeploymentType, - EvaluationRuleActionType, - EvaluationRuleEventType, - EvaluationTaxonomyInputType, - EvaluatorCategory, - EvaluatorDefinitionType, - EvaluatorMetricDirection, - EvaluatorMetricType, - EvaluatorType, - IndexType, - InsightType, - ItemContentType, - ItemType, - LocationType, - MemoryItemKind, - MemoryOperationKind, - MemoryStoreKind, - MemoryStoreUpdateStatus, - OpenApiAuthType, - OperationState, - PendingUploadType, - ReasoningEffort, - ReasoningItemSummaryPartType, - RecurrenceType, - ResponseErrorCode, - ResponseStreamEventType, - ResponseTextFormatConfigurationType, - ResponsesMessageRole, - RiskCategory, - SampleType, - ScheduleProvisioningStatus, - ScheduleTaskType, - ServiceTier, - ToolChoiceObjectType, - ToolChoiceOptions, - ToolType, - TreatmentEffectType, - TriggerType, - WebSearchActionType, -) -from ._patch import __all__ as _patch_all -from ._patch import * -from ._patch import patch_sdk as _patch_sdk - -__all__ = [ - "A2ATool", - "AISearchIndexResource", - "AgentClusterInsightResult", - "AgentClusterInsightsRequest", - "AgentContainerObject", - "AgentContainerOperationError", - "AgentContainerOperationObject", - "AgentDefinition", - "AgentId", - "AgentObject", - "AgentObjectVersions", - "AgentReference", - "AgentTaxonomyInput", - "AgentVersionObject", - "AgenticIdentityCredentials", - "Annotation", - "AnnotationFileCitation", - "AnnotationFilePath", - "AnnotationUrlCitation", - "ApiError", - "ApiErrorResponse", - "ApiInnerError", - "ApiKeyCredentials", - "ApproximateLocation", - "AzureAIAgentTarget", - "AzureAISearchAgentTool", - "AzureAISearchIndex", - "AzureAISearchToolResource", - "AzureFunctionAgentTool", - "AzureFunctionBinding", - "AzureFunctionDefinition", - "AzureFunctionDefinitionFunction", - "AzureFunctionStorageQueue", - "AzureOpenAIModelConfiguration", - 
"BaseCredentials", - "BingCustomSearchAgentTool", - "BingCustomSearchConfiguration", - "BingCustomSearchToolParameters", - "BingGroundingAgentTool", - "BingGroundingSearchConfiguration", - "BingGroundingSearchToolParameters", - "BlobReference", - "BlobReferenceSasCredential", - "BrowserAutomationAgentTool", - "BrowserAutomationToolConnectionParameters", - "BrowserAutomationToolParameters", - "CaptureStructuredOutputsTool", - "ChartCoordinate", - "ChatSummaryMemoryItem", - "ClusterInsightResult", - "ClusterTokenUsage", - "CodeBasedEvaluatorDefinition", - "CodeInterpreterOutput", - "CodeInterpreterOutputImage", - "CodeInterpreterOutputLogs", - "CodeInterpreterTool", - "CodeInterpreterToolAuto", - "CodeInterpreterToolCallItemParam", - "CodeInterpreterToolCallItemResource", - "ComparisonFilter", - "CompoundFilter", - "ComputerAction", - "ComputerActionClick", - "ComputerActionDoubleClick", - "ComputerActionDrag", - "ComputerActionKeyPress", - "ComputerActionMove", - "ComputerActionScreenshot", - "ComputerActionScroll", - "ComputerActionTypeKeys", - "ComputerActionWait", - "ComputerToolCallItemParam", - "ComputerToolCallItemResource", - "ComputerToolCallOutputItemOutput", - "ComputerToolCallOutputItemOutputComputerScreenshot", - "ComputerToolCallOutputItemParam", - "ComputerToolCallOutputItemResource", - "ComputerToolCallSafetyCheck", - "ComputerUsePreviewTool", - "Connection", - "ContainerAppAgentDefinition", - "ContinuousEvaluationRuleAction", - "Coordinate", - "CosmosDBIndex", - "CreatedBy", - "CronTrigger", - "CustomCredential", - "DailyRecurrenceSchedule", - "DatasetCredential", - "DatasetVersion", - "DeleteAgentResponse", - "DeleteAgentVersionResponse", - "DeleteMemoryStoreResponse", - "Deployment", - "EmbeddingConfiguration", - "EntraIDCredentials", - "EvalCompareReport", - "EvalResult", - "EvalRunResultCompareItem", - "EvalRunResultComparison", - "EvalRunResultSummary", - "EvaluationComparisonRequest", - "EvaluationResultSample", - "EvaluationRule", - 
"EvaluationRuleAction", - "EvaluationRuleFilter", - "EvaluationRunClusterInsightResult", - "EvaluationRunClusterInsightsRequest", - "EvaluationScheduleTask", - "EvaluationTaxonomy", - "EvaluationTaxonomyInput", - "EvaluatorDefinition", - "EvaluatorMetric", - "EvaluatorVersion", - "FabricDataAgentToolParameters", - "FieldMapping", - "FileDatasetVersion", - "FileSearchTool", - "FileSearchToolCallItemParam", - "FileSearchToolCallItemParamResult", - "FileSearchToolCallItemResource", - "FolderDatasetVersion", - "FunctionTool", - "FunctionToolCallItemParam", - "FunctionToolCallItemResource", - "FunctionToolCallOutputItemParam", - "FunctionToolCallOutputItemResource", - "HostedAgentDefinition", - "HourlyRecurrenceSchedule", - "HumanEvaluationRuleAction", - "ImageBasedHostedAgentDefinition", - "ImageGenTool", - "ImageGenToolCallItemParam", - "ImageGenToolCallItemResource", - "ImageGenToolInputImageMask", - "Index", - "Insight", - "InsightCluster", - "InsightModelConfiguration", - "InsightRequest", - "InsightResult", - "InsightSample", - "InsightScheduleTask", - "InsightSummary", - "InsightsMetadata", - "InvokeAzureAgentWorkflowActionOutputItemResource", - "ItemContent", - "ItemContentInputAudio", - "ItemContentInputFile", - "ItemContentInputImage", - "ItemContentInputText", - "ItemContentOutputAudio", - "ItemContentOutputText", - "ItemContentRefusal", - "ItemParam", - "ItemReferenceItemParam", - "ItemResource", - "LocalShellExecAction", - "LocalShellTool", - "LocalShellToolCallItemParam", - "LocalShellToolCallItemResource", - "LocalShellToolCallOutputItemParam", - "LocalShellToolCallOutputItemResource", - "Location", - "LogProb", - "MCPApprovalRequestItemParam", - "MCPApprovalRequestItemResource", - "MCPApprovalResponseItemParam", - "MCPApprovalResponseItemResource", - "MCPCallItemParam", - "MCPCallItemResource", - "MCPListToolsItemParam", - "MCPListToolsItemResource", - "MCPListToolsTool", - "MCPTool", - "MCPToolAllowedTools1", - "MCPToolRequireApproval1", - 
"MCPToolRequireApprovalAlways", - "MCPToolRequireApprovalNever", - "ManagedAzureAISearchIndex", - "MemoryItem", - "MemoryOperation", - "MemorySearchItem", - "MemorySearchOptions", - "MemorySearchTool", - "MemorySearchToolCallItemParam", - "MemorySearchToolCallItemResource", - "MemoryStoreDefaultDefinition", - "MemoryStoreDefaultOptions", - "MemoryStoreDefinition", - "MemoryStoreDeleteScopeResponse", - "MemoryStoreObject", - "MemoryStoreOperationUsage", - "MemoryStoreOperationUsageInputTokensDetails", - "MemoryStoreOperationUsageOutputTokensDetails", - "MemoryStoreSearchResponse", - "MemoryStoreUpdateResponse", - "MemoryStoreUpdateResult", - "MicrosoftFabricAgentTool", - "ModelDeployment", - "ModelDeploymentSku", - "MonthlyRecurrenceSchedule", - "NoAuthenticationCredentials", - "OAuthConsentRequestItemResource", - "OneTimeTrigger", - "OpenApiAgentTool", - "OpenApiAnonymousAuthDetails", - "OpenApiAuthDetails", - "OpenApiFunctionDefinition", - "OpenApiFunctionDefinitionFunction", - "OpenApiManagedAuthDetails", - "OpenApiManagedSecurityScheme", - "OpenApiProjectConnectionAuthDetails", - "OpenApiProjectConnectionSecurityScheme", - "PagedScheduleRun", - "PendingUploadRequest", - "PendingUploadResponse", - "Prompt", - "PromptAgentDefinition", - "PromptAgentDefinitionText", - "PromptBasedEvaluatorDefinition", - "ProtocolVersionRecord", - "RaiConfig", - "RankingOptions", - "Reasoning", - "ReasoningItemParam", - "ReasoningItemResource", - "ReasoningItemSummaryPart", - "ReasoningItemSummaryTextPart", - "RecurrenceSchedule", - "RecurrenceTrigger", - "RedTeam", - "Response", - "ResponseCodeInterpreterCallCodeDeltaEvent", - "ResponseCodeInterpreterCallCodeDoneEvent", - "ResponseCodeInterpreterCallCompletedEvent", - "ResponseCodeInterpreterCallInProgressEvent", - "ResponseCodeInterpreterCallInterpretingEvent", - "ResponseCompletedEvent", - "ResponseContentPartAddedEvent", - "ResponseContentPartDoneEvent", - "ResponseConversation1", - "ResponseCreatedEvent", - "ResponseError", - 
"ResponseErrorEvent", - "ResponseFailedEvent", - "ResponseFileSearchCallCompletedEvent", - "ResponseFileSearchCallInProgressEvent", - "ResponseFileSearchCallSearchingEvent", - "ResponseFormatJsonSchemaSchema", - "ResponseFunctionCallArgumentsDeltaEvent", - "ResponseFunctionCallArgumentsDoneEvent", - "ResponseImageGenCallCompletedEvent", - "ResponseImageGenCallGeneratingEvent", - "ResponseImageGenCallInProgressEvent", - "ResponseImageGenCallPartialImageEvent", - "ResponseInProgressEvent", - "ResponseIncompleteDetails1", - "ResponseIncompleteEvent", - "ResponseMCPCallArgumentsDeltaEvent", - "ResponseMCPCallArgumentsDoneEvent", - "ResponseMCPCallCompletedEvent", - "ResponseMCPCallFailedEvent", - "ResponseMCPCallInProgressEvent", - "ResponseMCPListToolsCompletedEvent", - "ResponseMCPListToolsFailedEvent", - "ResponseMCPListToolsInProgressEvent", - "ResponseOutputItemAddedEvent", - "ResponseOutputItemDoneEvent", - "ResponsePromptVariables", - "ResponseQueuedEvent", - "ResponseReasoningDeltaEvent", - "ResponseReasoningDoneEvent", - "ResponseReasoningSummaryDeltaEvent", - "ResponseReasoningSummaryDoneEvent", - "ResponseReasoningSummaryPartAddedEvent", - "ResponseReasoningSummaryPartDoneEvent", - "ResponseReasoningSummaryTextDeltaEvent", - "ResponseReasoningSummaryTextDoneEvent", - "ResponseRefusalDeltaEvent", - "ResponseRefusalDoneEvent", - "ResponseStreamEvent", - "ResponseText", - "ResponseTextDeltaEvent", - "ResponseTextDoneEvent", - "ResponseTextFormatConfiguration", - "ResponseTextFormatConfigurationJsonObject", - "ResponseTextFormatConfigurationJsonSchema", - "ResponseTextFormatConfigurationText", - "ResponseUsage", - "ResponseWebSearchCallCompletedEvent", - "ResponseWebSearchCallInProgressEvent", - "ResponseWebSearchCallSearchingEvent", - "ResponsesAssistantMessageItemParam", - "ResponsesAssistantMessageItemResource", - "ResponsesDeveloperMessageItemParam", - "ResponsesDeveloperMessageItemResource", - "ResponsesMessageItemParam", - "ResponsesMessageItemResource", - 
"ResponsesSystemMessageItemParam", - "ResponsesSystemMessageItemResource", - "ResponsesUserMessageItemParam", - "ResponsesUserMessageItemResource", - "SASCredentials", - "Schedule", - "ScheduleRun", - "ScheduleTask", - "SharepointAgentTool", - "SharepointGroundingToolParameters", - "StructuredInputDefinition", - "StructuredOutputDefinition", - "StructuredOutputsItemResource", - "Target", - "TargetConfig", - "TaxonomyCategory", - "TaxonomySubCategory", - "Tool", - "ToolArgumentBinding", - "ToolChoiceObject", - "ToolChoiceObjectCodeInterpreter", - "ToolChoiceObjectComputer", - "ToolChoiceObjectFileSearch", - "ToolChoiceObjectFunction", - "ToolChoiceObjectImageGen", - "ToolChoiceObjectMCP", - "ToolChoiceObjectWebSearch", - "ToolDescription", - "ToolProjectConnection", - "ToolProjectConnectionList", - "TopLogProb", - "Trigger", - "UserProfileMemoryItem", - "VectorStoreFileAttributes", - "WebSearchAction", - "WebSearchActionFind", - "WebSearchActionOpenPage", - "WebSearchActionSearch", - "WebSearchPreviewTool", - "WebSearchToolCallItemParam", - "WebSearchToolCallItemResource", - "WeeklyRecurrenceSchedule", - "WorkflowActionOutputItemResource", - "WorkflowDefinition", - "AgentContainerOperationStatus", - "AgentContainerStatus", - "AgentKind", - "AgentProtocol", - "AnnotationType", - "AttackStrategy", - "AzureAISearchQueryType", - "CodeInterpreterOutputType", - "ComputerActionType", - "ComputerToolCallOutputItemOutputType", - "ConnectionType", - "CredentialType", - "DatasetType", - "DayOfWeek", - "DeploymentType", - "EvaluationRuleActionType", - "EvaluationRuleEventType", - "EvaluationTaxonomyInputType", - "EvaluatorCategory", - "EvaluatorDefinitionType", - "EvaluatorMetricDirection", - "EvaluatorMetricType", - "EvaluatorType", - "IndexType", - "InsightType", - "ItemContentType", - "ItemType", - "LocationType", - "MemoryItemKind", - "MemoryOperationKind", - "MemoryStoreKind", - "MemoryStoreUpdateStatus", - "OpenApiAuthType", - "OperationState", - "PendingUploadType", - 
"ReasoningEffort", - "ReasoningItemSummaryPartType", - "RecurrenceType", - "ResponseErrorCode", - "ResponseStreamEventType", - "ResponseTextFormatConfigurationType", - "ResponsesMessageRole", - "RiskCategory", - "SampleType", - "ScheduleProvisioningStatus", - "ScheduleTaskType", - "ServiceTier", - "ToolChoiceObjectType", - "ToolChoiceOptions", - "ToolType", - "TreatmentEffectType", - "TriggerType", - "WebSearchActionType", -] -__all__.extend([p for p in _patch_all if p not in __all__]) # pyright: ignore -_patch_sdk() diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_enums.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_enums.py deleted file mode 100644 index ea4ebc59efd7..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_enums.py +++ /dev/null @@ -1,767 +0,0 @@ -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
-# -------------------------------------------------------------------------- - -from enum import Enum -from azure.core import CaseInsensitiveEnumMeta - - -class AgentContainerOperationStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Status of the container operation for a specific version of an agent.""" - - NOT_STARTED = "NotStarted" - """The container operation is not started.""" - IN_PROGRESS = "InProgress" - """The container operation is in progress.""" - SUCCEEDED = "Succeeded" - """The container operation has succeeded.""" - FAILED = "Failed" - """The container operation has failed.""" - - -class AgentContainerStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Status of the container of a specific version of an agent.""" - - STARTING = "Starting" - """The container is starting.""" - RUNNING = "Running" - """The container is running.""" - STOPPING = "Stopping" - """The container is stopping.""" - STOPPED = "Stopped" - """The container is stopped.""" - FAILED = "Failed" - """The container has failed.""" - DELETING = "Deleting" - """The container is deleting.""" - DELETED = "Deleted" - """The container is deleted.""" - UPDATING = "Updating" - """The container is updating.""" - - -class AgentKind(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of AgentKind.""" - - PROMPT = "prompt" - HOSTED = "hosted" - CONTAINER_APP = "container_app" - WORKFLOW = "workflow" - - -class AgentProtocol(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of AgentProtocol.""" - - ACTIVITY_PROTOCOL = "activity_protocol" - RESPONSES = "responses" - - -class AnnotationType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of AnnotationType.""" - - FILE_CITATION = "file_citation" - URL_CITATION = "url_citation" - FILE_PATH = "file_path" - CONTAINER_FILE_CITATION = "container_file_citation" - - -class AttackStrategy(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Strategies for attacks.""" - - EASY = "easy" - """Represents a default set of easy 
complexity attacks. Easy complexity attacks require less - effort, such as translation of a prompt into some encoding, and does not require any Large - Language Model to convert or orchestrate.""" - MODERATE = "moderate" - """Represents a default set of moderate complexity attacks. Moderate complexity attacks require - having access to resources such as another generative AI model.""" - DIFFICULT = "difficult" - """Represents a default set of difficult complexity attacks. Difficult complexity attacks include - attacks that require access to significant resources and effort to execute an attack such as - knowledge of search-based algorithms in addition to a generative AI model.""" - ASCII_ART = "ascii_art" - """Generates visual art using ASCII characters, often used for creative or obfuscation purposes.""" - ASCII_SMUGGLER = "ascii_smuggler" - """Conceals data within ASCII characters, making it harder to detect.""" - ATBASH = "atbash" - """Implements the Atbash cipher, a simple substitution cipher where each letter is mapped to its - reverse.""" - BASE64 = "base64" - """Encodes binary data into a text format using Base64, commonly used for data transmission.""" - BINARY = "binary" - """Converts text into binary code, representing data in a series of 0s and 1s.""" - CAESAR = "caesar" - """Applies the Caesar cipher, a substitution cipher that shifts characters by a fixed number of - positions.""" - CHARACTER_SPACE = "character_space" - """Alters text by adding spaces between characters, often used for obfuscation.""" - JAILBREAK = "jailbreak" - """Injects specially crafted prompts to bypass AI safeguards, known as User Injected Prompt - Attacks (UPIA).""" - ANSII_ATTACK = "ansii_attack" - """Utilizes ANSI escape sequences to manipulate text appearance and behavior.""" - CHARACTER_SWAP = "character_swap" - """Swaps characters within text to create variations or obfuscate the original content.""" - SUFFIX_APPEND = "suffix_append" - """Appends an adversarial suffix to 
the prompt.""" - STRING_JOIN = "string_join" - """Joins multiple strings together, often used for concatenation or obfuscation.""" - UNICODE_CONFUSABLE = "unicode_confusable" - """Uses Unicode characters that look similar to standard characters, creating visual confusion.""" - UNICODE_SUBSTITUTION = "unicode_substitution" - """Substitutes standard characters with Unicode equivalents, often for obfuscation.""" - DIACRITIC = "diacritic" - """Adds diacritical marks to characters, changing their appearance and sometimes their meaning.""" - FLIP = "flip" - """Flips characters from front to back, creating a mirrored effect.""" - LEETSPEAK = "leetspeak" - """Transforms text into Leetspeak, a form of encoding that replaces letters with similar-looking - numbers or symbols.""" - ROT13 = "rot13" - """Applies the ROT13 cipher, a simple substitution cipher that shifts characters by 13 positions.""" - MORSE = "morse" - """Encodes text into Morse code, using dots and dashes to represent characters.""" - URL = "url" - """Encodes text into URL format.""" - BASELINE = "baseline" - """Represents the baseline direct adversarial probing, which is used by attack strategies as the - attack objective.""" - - -class AzureAISearchQueryType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Available query types for Azure AI Search tool.""" - - SIMPLE = "simple" - """Query type ``simple``""" - SEMANTIC = "semantic" - """Query type ``semantic``""" - VECTOR = "vector" - """Query type ``vector``""" - VECTOR_SIMPLE_HYBRID = "vector_simple_hybrid" - """Query type ``vector_simple_hybrid``""" - VECTOR_SEMANTIC_HYBRID = "vector_semantic_hybrid" - """Query type ``vector_semantic_hybrid``""" - - -class CodeInterpreterOutputType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of CodeInterpreterOutputType.""" - - LOGS = "logs" - IMAGE = "image" - - -class ComputerActionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of ComputerActionType.""" - - SCREENSHOT = "screenshot" - CLICK 
= "click" - DOUBLE_CLICK = "double_click" - SCROLL = "scroll" - TYPE = "type" - WAIT = "wait" - KEYPRESS = "keypress" - DRAG = "drag" - MOVE = "move" - - -class ComputerToolCallOutputItemOutputType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """A computer screenshot image used with the computer use tool.""" - - SCREENSHOT = "computer_screenshot" - - -class ConnectionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The Type (or category) of the connection.""" - - AZURE_OPEN_AI = "AzureOpenAI" - """Azure OpenAI Service""" - AZURE_BLOB_STORAGE = "AzureBlob" - """Azure Blob Storage, with specified container""" - AZURE_STORAGE_ACCOUNT = "AzureStorageAccount" - """Azure Blob Storage, with container not specified (used by Agents)""" - AZURE_AI_SEARCH = "CognitiveSearch" - """Azure AI Search""" - COSMOS_DB = "CosmosDB" - """CosmosDB""" - API_KEY = "ApiKey" - """Generic connection that uses API Key authentication""" - APPLICATION_CONFIGURATION = "AppConfig" - """Application Configuration""" - APPLICATION_INSIGHTS = "AppInsights" - """Application Insights""" - CUSTOM = "CustomKeys" - """Custom Keys""" - REMOTE_TOOL = "RemoteTool" - """Remote tool""" - - -class CredentialType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The credential type used by the connection.""" - - API_KEY = "ApiKey" - """API Key credential""" - ENTRA_ID = "AAD" - """Entra ID credential (formerly known as AAD)""" - SAS = "SAS" - """Shared Access Signature (SAS) credential""" - CUSTOM = "CustomKeys" - """Custom credential""" - NONE = "None" - """No credential""" - AGENTIC_IDENTITY = "AgenticIdentityToken" - """Agentic identity credential""" - - -class DatasetType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Enum to determine the type of data.""" - - URI_FILE = "uri_file" - """URI file.""" - URI_FOLDER = "uri_folder" - """URI folder.""" - - -class DayOfWeek(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Days of the week for recurrence schedule.""" - - SUNDAY = "Sunday" - 
"""Sunday.""" - MONDAY = "Monday" - """Monday.""" - TUESDAY = "Tuesday" - """Tuesday.""" - WEDNESDAY = "Wednesday" - """Wednesday.""" - THURSDAY = "Thursday" - """Thursday.""" - FRIDAY = "Friday" - """Friday.""" - SATURDAY = "Saturday" - """Saturday.""" - - -class DeploymentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of DeploymentType.""" - - MODEL_DEPLOYMENT = "ModelDeployment" - """Model deployment""" - - -class EvaluationRuleActionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of the evaluation action.""" - - CONTINUOUS_EVALUATION = "continuousEvaluation" - """Continuous evaluation.""" - HUMAN_EVALUATION = "humanEvaluation" - """Human evaluation.""" - - -class EvaluationRuleEventType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of the evaluation rule event.""" - - RESPONSE_COMPLETED = "response.completed" - """Response completed.""" - MANUAL = "manual" - """Manual trigger.""" - - -class EvaluationTaxonomyInputType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of the evaluation taxonomy input.""" - - AGENT = "agent" - """Agent""" - POLICY = "policy" - """Policy.""" - - -class EvaluatorCategory(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The category of the evaluator.""" - - QUALITY = "quality" - """Quality""" - SAFETY = "safety" - """Risk & Safety""" - AGENTS = "agents" - """Agents""" - - -class EvaluatorDefinitionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The type of evaluator definition.""" - - PROMPT = "prompt" - """Prompt-based definition""" - CODE = "code" - """Code-based definition""" - PROMPT_AND_CODE = "prompt_and_code" - """Prompt & Code Based definition""" - SERVICE = "service" - """Service-based evaluator""" - OPENAI_GRADERS = "openai_graders" - """OpenAI graders""" - - -class EvaluatorMetricDirection(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The direction of the metric indicating whether a higher value is better, a lower value is - better, or neutral. 
- """ - - INCREASE = "increase" - """It indicates a higher value is better for this metric""" - DECREASE = "decrease" - """It indicates a lower value is better for this metric""" - NEUTRAL = "neutral" - """It indicates no preference for this metric direction""" - - -class EvaluatorMetricType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The type of the evaluator.""" - - ORDINAL = "ordinal" - """Ordinal metric representing categories that can be ordered or ranked.""" - CONTINUOUS = "continuous" - """Continuous metric representing values in a continuous range.""" - BOOLEAN = "boolean" - """Boolean metric representing true/false values""" - - -class EvaluatorType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The type of the evaluator.""" - - BUILT_IN = "builtin" - """Built-in evaluator (Microsoft provided)""" - CUSTOM = "custom" - """Custom evaluator""" - - -class IndexType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of IndexType.""" - - AZURE_SEARCH = "AzureSearch" - """Azure search""" - COSMOS_DB = "CosmosDBNoSqlVectorStore" - """CosmosDB""" - MANAGED_AZURE_SEARCH = "ManagedAzureSearch" - """Managed Azure Search""" - - -class InsightType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The request of the insights.""" - - EVALUATION_RUN_CLUSTER_INSIGHT = "EvaluationRunClusterInsight" - """Insights on an Evaluation run result.""" - AGENT_CLUSTER_INSIGHT = "AgentClusterInsight" - """Cluster Insight on an Agent.""" - EVALUATION_COMPARISON = "EvaluationComparison" - """Evaluation Comparison.""" - - -class ItemContentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Multi-modal input and output contents.""" - - INPUT_TEXT = "input_text" - INPUT_AUDIO = "input_audio" - INPUT_IMAGE = "input_image" - INPUT_FILE = "input_file" - OUTPUT_TEXT = "output_text" - OUTPUT_AUDIO = "output_audio" - REFUSAL = "refusal" - - -class ItemType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of ItemType.""" - - MESSAGE = "message" - 
FILE_SEARCH_CALL = "file_search_call" - FUNCTION_CALL = "function_call" - FUNCTION_CALL_OUTPUT = "function_call_output" - COMPUTER_CALL = "computer_call" - COMPUTER_CALL_OUTPUT = "computer_call_output" - WEB_SEARCH_CALL = "web_search_call" - REASONING = "reasoning" - ITEM_REFERENCE = "item_reference" - IMAGE_GENERATION_CALL = "image_generation_call" - CODE_INTERPRETER_CALL = "code_interpreter_call" - LOCAL_SHELL_CALL = "local_shell_call" - LOCAL_SHELL_CALL_OUTPUT = "local_shell_call_output" - MCP_LIST_TOOLS = "mcp_list_tools" - MCP_APPROVAL_REQUEST = "mcp_approval_request" - MCP_APPROVAL_RESPONSE = "mcp_approval_response" - MCP_CALL = "mcp_call" - STRUCTURED_OUTPUTS = "structured_outputs" - WORKFLOW_ACTION = "workflow_action" - MEMORY_SEARCH_CALL = "memory_search_call" - OAUTH_CONSENT_REQUEST = "oauth_consent_request" - - -class LocationType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of LocationType.""" - - APPROXIMATE = "approximate" - - -class MemoryItemKind(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Memory item kind.""" - - USER_PROFILE = "user_profile" - """User profile information extracted from conversations.""" - CHAT_SUMMARY = "chat_summary" - """Summary of chat conversations.""" - - -class MemoryOperationKind(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Memory operation kind.""" - - CREATE = "create" - """Create a new memory item.""" - UPDATE = "update" - """Update an existing memory item.""" - DELETE = "delete" - """Delete an existing memory item.""" - - -class MemoryStoreKind(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The type of memory store implementation to use.""" - - DEFAULT = "default" - """The default memory store implementation.""" - - -class MemoryStoreUpdateStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Status of a memory store update operation.""" - - QUEUED = "queued" - IN_PROGRESS = "in_progress" - COMPLETED = "completed" - FAILED = "failed" - SUPERSEDED = "superseded" - - -class 
OpenApiAuthType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Authentication type for OpenApi endpoint. Allowed types are: - * Anonymous (no authentication required) - * Project Connection (requires project_connection_id to endpoint, as setup in AI Foundry) - * Managed_Identity (requires audience for identity based auth). - """ - - ANONYMOUS = "anonymous" - PROJECT_CONNECTION = "project_connection" - MANAGED_IDENTITY = "managed_identity" - - -class OperationState(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Enum describing allowed operation states.""" - - NOT_STARTED = "NotStarted" - """The operation has not started.""" - RUNNING = "Running" - """The operation is in progress.""" - SUCCEEDED = "Succeeded" - """The operation has completed successfully.""" - FAILED = "Failed" - """The operation has failed.""" - CANCELED = "Canceled" - """The operation has been canceled by the user.""" - - -class PendingUploadType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The type of pending upload.""" - - NONE = "None" - """No pending upload.""" - BLOB_REFERENCE = "BlobReference" - """Blob Reference is the only supported type.""" - - -class ReasoningEffort(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """**o-series models only** - Constrains effort on reasoning for - `reasoning models `_. - Currently supported values are ``low``, ``medium``, and ``high``. Reducing - reasoning effort can result in faster responses and fewer tokens used - on reasoning in a response. 
- """ - - LOW = "low" - MEDIUM = "medium" - HIGH = "high" - - -class ReasoningItemSummaryPartType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of ReasoningItemSummaryPartType.""" - - SUMMARY_TEXT = "summary_text" - - -class RecurrenceType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Recurrence type.""" - - HOURLY = "Hourly" - """Hourly recurrence pattern.""" - DAILY = "Daily" - """Daily recurrence pattern.""" - WEEKLY = "Weekly" - """Weekly recurrence pattern.""" - MONTHLY = "Monthly" - """Monthly recurrence pattern.""" - - -class ResponseErrorCode(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The error code for the response.""" - - SERVER_ERROR = "server_error" - RATE_LIMIT_EXCEEDED = "rate_limit_exceeded" - INVALID_PROMPT = "invalid_prompt" - VECTOR_STORE_TIMEOUT = "vector_store_timeout" - INVALID_IMAGE = "invalid_image" - INVALID_IMAGE_FORMAT = "invalid_image_format" - INVALID_BASE64_IMAGE = "invalid_base64_image" - INVALID_IMAGE_URL = "invalid_image_url" - IMAGE_TOO_LARGE = "image_too_large" - IMAGE_TOO_SMALL = "image_too_small" - IMAGE_PARSE_ERROR = "image_parse_error" - IMAGE_CONTENT_POLICY_VIOLATION = "image_content_policy_violation" - INVALID_IMAGE_MODE = "invalid_image_mode" - IMAGE_FILE_TOO_LARGE = "image_file_too_large" - UNSUPPORTED_IMAGE_MEDIA_TYPE = "unsupported_image_media_type" - EMPTY_IMAGE_FILE = "empty_image_file" - FAILED_TO_DOWNLOAD_IMAGE = "failed_to_download_image" - IMAGE_FILE_NOT_FOUND = "image_file_not_found" - - -class ResponsesMessageRole(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The collection of valid roles for responses message items.""" - - SYSTEM = "system" - DEVELOPER = "developer" - USER = "user" - ASSISTANT = "assistant" - - -class ResponseStreamEventType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of ResponseStreamEventType.""" - - RESPONSE_AUDIO_DELTA = "response.audio.delta" - RESPONSE_AUDIO_DONE = "response.audio.done" - RESPONSE_AUDIO_TRANSCRIPT_DELTA = 
"response.audio_transcript.delta" - RESPONSE_AUDIO_TRANSCRIPT_DONE = "response.audio_transcript.done" - RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA = "response.code_interpreter_call_code.delta" - RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE = "response.code_interpreter_call_code.done" - RESPONSE_CODE_INTERPRETER_CALL_COMPLETED = "response.code_interpreter_call.completed" - RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS = "response.code_interpreter_call.in_progress" - RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING = "response.code_interpreter_call.interpreting" - RESPONSE_COMPLETED = "response.completed" - RESPONSE_CONTENT_PART_ADDED = "response.content_part.added" - RESPONSE_CONTENT_PART_DONE = "response.content_part.done" - RESPONSE_CREATED = "response.created" - ERROR = "error" - RESPONSE_FILE_SEARCH_CALL_COMPLETED = "response.file_search_call.completed" - RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS = "response.file_search_call.in_progress" - RESPONSE_FILE_SEARCH_CALL_SEARCHING = "response.file_search_call.searching" - RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA = "response.function_call_arguments.delta" - RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE = "response.function_call_arguments.done" - RESPONSE_IN_PROGRESS = "response.in_progress" - RESPONSE_FAILED = "response.failed" - RESPONSE_INCOMPLETE = "response.incomplete" - RESPONSE_OUTPUT_ITEM_ADDED = "response.output_item.added" - RESPONSE_OUTPUT_ITEM_DONE = "response.output_item.done" - RESPONSE_REFUSAL_DELTA = "response.refusal.delta" - RESPONSE_REFUSAL_DONE = "response.refusal.done" - RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED = "response.output_text.annotation.added" - RESPONSE_OUTPUT_TEXT_DELTA = "response.output_text.delta" - RESPONSE_OUTPUT_TEXT_DONE = "response.output_text.done" - RESPONSE_REASONING_SUMMARY_PART_ADDED = "response.reasoning_summary_part.added" - RESPONSE_REASONING_SUMMARY_PART_DONE = "response.reasoning_summary_part.done" - RESPONSE_REASONING_SUMMARY_TEXT_DELTA = "response.reasoning_summary_text.delta" - 
RESPONSE_REASONING_SUMMARY_TEXT_DONE = "response.reasoning_summary_text.done" - RESPONSE_WEB_SEARCH_CALL_COMPLETED = "response.web_search_call.completed" - RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS = "response.web_search_call.in_progress" - RESPONSE_WEB_SEARCH_CALL_SEARCHING = "response.web_search_call.searching" - RESPONSE_IMAGE_GENERATION_CALL_COMPLETED = "response.image_generation_call.completed" - RESPONSE_IMAGE_GENERATION_CALL_GENERATING = "response.image_generation_call.generating" - RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS = "response.image_generation_call.in_progress" - RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE = "response.image_generation_call.partial_image" - RESPONSE_MCP_CALL_ARGUMENTS_DELTA = "response.mcp_call.arguments_delta" - RESPONSE_MCP_CALL_ARGUMENTS_DONE = "response.mcp_call.arguments_done" - RESPONSE_MCP_CALL_COMPLETED = "response.mcp_call.completed" - RESPONSE_MCP_CALL_FAILED = "response.mcp_call.failed" - RESPONSE_MCP_CALL_IN_PROGRESS = "response.mcp_call.in_progress" - RESPONSE_MCP_LIST_TOOLS_COMPLETED = "response.mcp_list_tools.completed" - RESPONSE_MCP_LIST_TOOLS_FAILED = "response.mcp_list_tools.failed" - RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS = "response.mcp_list_tools.in_progress" - RESPONSE_QUEUED = "response.queued" - RESPONSE_REASONING_DELTA = "response.reasoning.delta" - RESPONSE_REASONING_DONE = "response.reasoning.done" - RESPONSE_REASONING_SUMMARY_DELTA = "response.reasoning_summary.delta" - RESPONSE_REASONING_SUMMARY_DONE = "response.reasoning_summary.done" - - -class ResponseTextFormatConfigurationType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """An object specifying the format that the model must output. - Configuring ``{ "type": "json_schema" }`` enables Structured Outputs, - which ensures the model will match your supplied JSON schema. Learn more in the - `Structured Outputs guide `_. - The default format is ``{ "type": "text" }`` with no additional options. 
- **Not recommended for gpt-4o and newer models:** - Setting to ``{ "type": "json_object" }`` enables the older JSON mode, which - ensures the message the model generates is valid JSON. Using ``json_schema`` - is preferred for models that support it. - """ - - TEXT = "text" - JSON_SCHEMA = "json_schema" - JSON_OBJECT = "json_object" - - -class RiskCategory(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Risk category for the attack objective.""" - - HATE_UNFAIRNESS = "HateUnfairness" - """Represents content related to hate or unfairness.""" - VIOLENCE = "Violence" - """Represents content related to violence.""" - SEXUAL = "Sexual" - """Represents content of a sexual nature.""" - SELF_HARM = "SelfHarm" - """Represents content related to self-harm.""" - - -class SampleType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """The type of sample used in the analysis.""" - - EVALUATION_RESULT_SAMPLE = "EvaluationResultSample" - """A sample from the evaluation result.""" - - -class ScheduleProvisioningStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Schedule provisioning status.""" - - CREATING = "Creating" - """Represents the creation status of the schedule.""" - UPDATING = "Updating" - """Represents the updating status of the schedule.""" - DELETING = "Deleting" - """Represents the deleting status of the schedule.""" - SUCCEEDED = "Succeeded" - """Represents the succeeded status of the schedule.""" - FAILED = "Failed" - """Represents the failed status of the schedule.""" - - -class ScheduleTaskType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of the task.""" - - EVALUATION = "Evaluation" - """Evaluation task.""" - INSIGHT = "Insight" - """Insight task.""" - - -class ServiceTier(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Specifies the processing type used for serving the request. - * If set to 'auto', then the request will be processed with the service tier configured in the - Project settings. 
Unless otherwise configured, the Project will use 'default'. - * If set to 'default', then the request will be processed with the standard pricing and - performance for the selected model. - * If set to '[flex](/docs/guides/flex-processing)' or 'priority', then the request will be - processed with the corresponding service tier. [Contact - sales](https://openai.com/contact-sales) to learn more about Priority processing. - * When not set, the default behavior is 'auto'. - When the ``service_tier`` parameter is set, the response body will include the - ``service_tier`` value based on the processing mode actually used to serve the request. This - response value may be different from the value set in the parameter. - """ - - AUTO = "auto" - DEFAULT = "default" - FLEX = "flex" - SCALE = "scale" - PRIORITY = "priority" - - -class ToolChoiceObjectType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Indicates that the model should use a built-in tool to generate a response. - `Learn more about built-in tools `_. - """ - - FILE_SEARCH = "file_search" - FUNCTION = "function" - COMPUTER = "computer_use_preview" - WEB_SEARCH = "web_search_preview" - IMAGE_GENERATION = "image_generation" - CODE_INTERPRETER = "code_interpreter" - MCP = "mcp" - - -class ToolChoiceOptions(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Controls which (if any) tool is called by the model. - ``none`` means the model will not call any tool and instead generates a message. - ``auto`` means the model can pick between generating a message or calling one or - more tools. - ``required`` means the model must call one or more tools. 
- """ - - NONE = "none" - AUTO = "auto" - REQUIRED = "required" - - -class ToolType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """A tool that can be used to generate a response.""" - - FILE_SEARCH = "file_search" - FUNCTION = "function" - COMPUTER_USE_PREVIEW = "computer_use_preview" - WEB_SEARCH_PREVIEW = "web_search_preview" - MCP = "mcp" - CODE_INTERPRETER = "code_interpreter" - IMAGE_GENERATION = "image_generation" - LOCAL_SHELL = "local_shell" - BING_GROUNDING = "bing_grounding" - BROWSER_AUTOMATION_PREVIEW = "browser_automation_preview" - FABRIC_DATAAGENT_PREVIEW = "fabric_dataagent_preview" - SHAREPOINT_GROUNDING_PREVIEW = "sharepoint_grounding_preview" - AZURE_AI_SEARCH = "azure_ai_search" - OPENAPI = "openapi" - BING_CUSTOM_SEARCH_PREVIEW = "bing_custom_search_preview" - CAPTURE_STRUCTURED_OUTPUTS = "capture_structured_outputs" - A2_A_PREVIEW = "a2a_preview" - AZURE_FUNCTION = "azure_function" - MEMORY_SEARCH = "memory_search" - - -class TreatmentEffectType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Treatment Effect Type.""" - - TOO_FEW_SAMPLES = "TooFewSamples" - """Not enough samples to determine treatment effect.""" - INCONCLUSIVE = "Inconclusive" - """No significant difference between treatment and baseline.""" - CHANGED = "Changed" - """Indicates the metric changed with statistical significance, but the direction is neutral.""" - IMPROVED = "Improved" - """Indicates the treatment significantly improved the metric compared to baseline.""" - DEGRADED = "Degraded" - """Indicates the treatment significantly degraded the metric compared to baseline.""" - - -class TriggerType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of the trigger.""" - - CRON = "Cron" - """Cron based trigger.""" - RECURRENCE = "Recurrence" - """Recurrence based trigger.""" - ONE_TIME = "OneTime" - """One-time trigger.""" - - -class WebSearchActionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of WebSearchActionType.""" - - SEARCH = "search" - 
OPEN_PAGE = "open_page" - FIND = "find" diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_models.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_models.py deleted file mode 100644 index a810ddc805c3..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_models.py +++ /dev/null @@ -1,15049 +0,0 @@ -# pylint: disable=line-too-long,useless-suppression,too-many-lines -# coding=utf-8 -# -------------------------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# Licensed under the MIT License. See License.txt in the project root for license information. -# Code generated by Microsoft (R) Python Code Generator. -# Changes may cause incorrect behavior and will be lost if the code is regenerated. -# -------------------------------------------------------------------------- -# pylint: disable=useless-super-delegation - -import datetime -from typing import Any, Literal, Mapping, Optional, TYPE_CHECKING, Union, overload - -from ._utils.model_base import Model as _Model, rest_discriminator, rest_field -from ._enums import ( - AgentKind, - AnnotationType, - CodeInterpreterOutputType, - ComputerActionType, - ComputerToolCallOutputItemOutputType, - CredentialType, - DatasetType, - DeploymentType, - EvaluationRuleActionType, - EvaluationTaxonomyInputType, - EvaluatorDefinitionType, - IndexType, - InsightType, - ItemContentType, - ItemType, - LocationType, - MemoryItemKind, - MemoryStoreKind, - OpenApiAuthType, - PendingUploadType, - ReasoningItemSummaryPartType, - RecurrenceType, - ResponseStreamEventType, - ResponseTextFormatConfigurationType, - ResponsesMessageRole, - SampleType, - ScheduleTaskType, - ToolChoiceObjectType, - ToolType, - TriggerType, - WebSearchActionType, -) - -if TYPE_CHECKING: - from .. 
import _types, models as _models # type: ignore - - -class Tool(_Model): - """Tool. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - A2ATool, AzureAISearchAgentTool, AzureFunctionAgentTool, BingCustomSearchAgentTool, - BingGroundingAgentTool, BrowserAutomationAgentTool, CaptureStructuredOutputsTool, - CodeInterpreterTool, ComputerUsePreviewTool, MicrosoftFabricAgentTool, FileSearchTool, - FunctionTool, ImageGenTool, LocalShellTool, MCPTool, MemorySearchTool, OpenApiAgentTool, - SharepointAgentTool, WebSearchPreviewTool - - :ivar type: Required. Known values are: "file_search", "function", "computer_use_preview", - "web_search_preview", "mcp", "code_interpreter", "image_generation", "local_shell", - "bing_grounding", "browser_automation_preview", "fabric_dataagent_preview", - "sharepoint_grounding_preview", "azure_ai_search", "openapi", "bing_custom_search_preview", - "capture_structured_outputs", "a2a_preview", "azure_function", and "memory_search". - :vartype type: str or ~azure.ai.projects.models.ToolType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"file_search\", \"function\", \"computer_use_preview\", - \"web_search_preview\", \"mcp\", \"code_interpreter\", \"image_generation\", \"local_shell\", - \"bing_grounding\", \"browser_automation_preview\", \"fabric_dataagent_preview\", - \"sharepoint_grounding_preview\", \"azure_ai_search\", \"openapi\", - \"bing_custom_search_preview\", \"capture_structured_outputs\", \"a2a_preview\", - \"azure_function\", and \"memory_search\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class A2ATool(Tool, discriminator="a2a_preview"): - """An agent implementing the A2A protocol. - - :ivar type: The type of the tool. Always ``a2a``. Required. - :vartype type: str or ~azure.ai.projects.models.A2_A_PREVIEW - :ivar base_url: Base URL of the agent. - :vartype base_url: str - :ivar agent_card_path: The path to the agent card relative to the ``base_url``. - If not provided, defaults to ``/.well-known/agent-card.json``. - :vartype agent_card_path: str - :ivar project_connection_id: The connection ID in the project for the A2A server. - The connection stores authentication and other connection details needed to connect to the A2A - server. - :vartype project_connection_id: str - """ - - type: Literal[ToolType.A2_A_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the tool. Always ``a2a``. Required.""" - base_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Base URL of the agent.""" - agent_card_path: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The path to the agent card relative to the ``base_url``. - If not provided, defaults to ``/.well-known/agent-card.json``.""" - project_connection_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The connection ID in the project for the A2A server. - The connection stores authentication and other connection details needed to connect to the A2A - server.""" - - @overload - def __init__( - self, - *, - base_url: Optional[str] = None, - agent_card_path: Optional[str] = None, - project_connection_id: Optional[str] = None, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.A2_A_PREVIEW # type: ignore - - -class InsightResult(_Model): - """The result of the insights. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - AgentClusterInsightResult, EvalCompareReport, EvaluationRunClusterInsightResult - - :ivar type: The type of insights result. Required. Known values are: - "EvaluationRunClusterInsight", "AgentClusterInsight", and "EvaluationComparison". - :vartype type: str or ~azure.ai.projects.models.InsightType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """The type of insights result. Required. Known values are: \"EvaluationRunClusterInsight\", - \"AgentClusterInsight\", and \"EvaluationComparison\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgentClusterInsightResult(InsightResult, discriminator="AgentClusterInsight"): - """Insights from the agent cluster analysis. - - :ivar type: The type of insights result. Required. Cluster Insight on an Agent. - :vartype type: str or ~azure.ai.projects.models.AGENT_CLUSTER_INSIGHT - :ivar cluster_insight: Required. 
- :vartype cluster_insight: ~azure.ai.projects.models.ClusterInsightResult - """ - - type: Literal[InsightType.AGENT_CLUSTER_INSIGHT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of insights result. Required. Cluster Insight on an Agent.""" - cluster_insight: "_models.ClusterInsightResult" = rest_field( - name="clusterInsight", visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" - - @overload - def __init__( - self, - *, - cluster_insight: "_models.ClusterInsightResult", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = InsightType.AGENT_CLUSTER_INSIGHT # type: ignore - - -class InsightRequest(_Model): - """The request of the insights report. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - AgentClusterInsightsRequest, EvaluationComparisonRequest, EvaluationRunClusterInsightsRequest - - :ivar type: The type of request. Required. Known values are: "EvaluationRunClusterInsight", - "AgentClusterInsight", and "EvaluationComparison". - :vartype type: str or ~azure.ai.projects.models.InsightType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """The type of request. Required. Known values are: \"EvaluationRunClusterInsight\", - \"AgentClusterInsight\", and \"EvaluationComparison\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgentClusterInsightsRequest(InsightRequest, discriminator="AgentClusterInsight"): - """Insights on set of Agent Evaluation Results. - - :ivar type: The type of request. Required. Cluster Insight on an Agent. - :vartype type: str or ~azure.ai.projects.models.AGENT_CLUSTER_INSIGHT - :ivar agent_name: Identifier for the agent. Required. - :vartype agent_name: str - :ivar model_configuration: Configuration of the model used in the insight generation. - :vartype model_configuration: ~azure.ai.projects.models.InsightModelConfiguration - """ - - type: Literal[InsightType.AGENT_CLUSTER_INSIGHT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of request. Required. Cluster Insight on an Agent.""" - agent_name: str = rest_field(name="agentName", visibility=["read", "create", "update", "delete", "query"]) - """Identifier for the agent. Required.""" - model_configuration: Optional["_models.InsightModelConfiguration"] = rest_field( - name="modelConfiguration", visibility=["read", "create", "update", "delete", "query"] - ) - """Configuration of the model used in the insight generation.""" - - @overload - def __init__( - self, - *, - agent_name: str, - model_configuration: Optional["_models.InsightModelConfiguration"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = InsightType.AGENT_CLUSTER_INSIGHT # type: ignore - - -class AgentContainerObject(_Model): - """The details of the container of a specific version of an agent. - - :ivar object: The object type, which is always 'agent.container'. Required. 
Default value is - "agent.container". - :vartype object: str - :ivar status: The status of the container of a specific version of an agent. Required. Known - values are: "Starting", "Running", "Stopping", "Stopped", "Failed", "Deleting", "Deleted", and - "Updating". - :vartype status: str or ~azure.ai.projects.models.AgentContainerStatus - :ivar max_replicas: The maximum number of replicas for the container. Default is 1. - :vartype max_replicas: int - :ivar min_replicas: The minimum number of replicas for the container. Default is 1. - :vartype min_replicas: int - :ivar error_message: The error message if the container failed to operate, if any. - :vartype error_message: str - :ivar created_at: The creation time of the container. Required. - :vartype created_at: ~datetime.datetime - :ivar updated_at: The last update time of the container. Required. - :vartype updated_at: ~datetime.datetime - """ - - object: Literal["agent.container"] = rest_field(visibility=["read"]) - """The object type, which is always 'agent.container'. Required. Default value is - \"agent.container\".""" - status: Union[str, "_models.AgentContainerStatus"] = rest_field(visibility=["read"]) - """The status of the container of a specific version of an agent. Required. Known values are: - \"Starting\", \"Running\", \"Stopping\", \"Stopped\", \"Failed\", \"Deleting\", \"Deleted\", - and \"Updating\".""" - max_replicas: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The maximum number of replicas for the container. Default is 1.""" - min_replicas: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The minimum number of replicas for the container. 
Default is 1.""" - error_message: Optional[str] = rest_field(visibility=["read"]) - """The error message if the container failed to operate, if any.""" - created_at: datetime.datetime = rest_field(visibility=["read"], format="rfc3339") - """The creation time of the container. Required.""" - updated_at: datetime.datetime = rest_field(visibility=["read"], format="rfc3339") - """The last update time of the container. Required.""" - - @overload - def __init__( - self, - *, - max_replicas: Optional[int] = None, - min_replicas: Optional[int] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["agent.container"] = "agent.container" - - -class AgentContainerOperationError(_Model): - """The error details of the container operation, if any. - - :ivar code: The error code of the container operation, if any. Required. - :vartype code: str - :ivar type: The error type of the container operation, if any. Required. - :vartype type: str - :ivar message: The error message of the container operation, if any. Required. - :vartype message: str - """ - - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error code of the container operation, if any. Required.""" - type: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error type of the container operation, if any. Required.""" - message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error message of the container operation, if any. Required.""" - - @overload - def __init__( - self, - *, - code: str, - type: str, - message: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgentContainerOperationObject(_Model): - """The container operation for a specific version of an agent. - - :ivar id: The ID of the container operation. This id is unique identifier across the system. - Required. - :vartype id: str - :ivar agent_id: The ID of the agent. Required. - :vartype agent_id: str - :ivar agent_version_id: The ID of the agent version. Required. - :vartype agent_version_id: str - :ivar status: The status of the container operation. Required. Known values are: "NotStarted", - "InProgress", "Succeeded", and "Failed". - :vartype status: str or ~azure.ai.projects.models.AgentContainerOperationStatus - :ivar error: The error of the container operation, if any. - :vartype error: ~azure.ai.projects.models.AgentContainerOperationError - :ivar container: The container of the specific version of an agent. - :vartype container: ~azure.ai.projects.models.AgentContainerObject - """ - - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the container operation. This id is unique identifier across the system. Required.""" - agent_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the agent. Required.""" - agent_version_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the agent version. Required.""" - status: Union[str, "_models.AgentContainerOperationStatus"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the container operation. Required. 
Known values are: \"NotStarted\", - \"InProgress\", \"Succeeded\", and \"Failed\".""" - error: Optional["_models.AgentContainerOperationError"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The error of the container operation, if any.""" - container: Optional["_models.AgentContainerObject"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The container of the specific version of an agent.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - agent_id: str, - agent_version_id: str, - status: Union[str, "_models.AgentContainerOperationStatus"], - error: Optional["_models.AgentContainerOperationError"] = None, - container: Optional["_models.AgentContainerObject"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgentDefinition(_Model): - """AgentDefinition. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ContainerAppAgentDefinition, HostedAgentDefinition, PromptAgentDefinition, WorkflowDefinition - - :ivar kind: Required. Known values are: "prompt", "hosted", "container_app", and "workflow". - :vartype kind: str or ~azure.ai.projects.models.AgentKind - :ivar rai_config: Configuration for Responsible AI (RAI) content filtering and safety features. - :vartype rai_config: ~azure.ai.projects.models.RaiConfig - """ - - __mapping__: dict[str, _Model] = {} - kind: str = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) - """Required. 
Known values are: \"prompt\", \"hosted\", \"container_app\", and \"workflow\".""" - rai_config: Optional["_models.RaiConfig"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Configuration for Responsible AI (RAI) content filtering and safety features.""" - - @overload - def __init__( - self, - *, - kind: str, - rai_config: Optional["_models.RaiConfig"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BaseCredentials(_Model): - """A base class for connection credentials. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - EntraIDCredentials, AgenticIdentityCredentials, ApiKeyCredentials, CustomCredential, - NoAuthenticationCredentials, SASCredentials - - :ivar type: The type of credential used by the connection. Required. Known values are: - "ApiKey", "AAD", "SAS", "CustomKeys", "None", and "AgenticIdentityToken". - :vartype type: str or ~azure.ai.projects.models.CredentialType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read"]) - """The type of credential used by the connection. Required. Known values are: \"ApiKey\", \"AAD\", - \"SAS\", \"CustomKeys\", \"None\", and \"AgenticIdentityToken\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgenticIdentityCredentials(BaseCredentials, discriminator="AgenticIdentityToken"): - """Agentic identity credential definition. 
- - :ivar type: The credential type. Required. Agentic identity credential - :vartype type: str or ~azure.ai.projects.models.AGENTIC_IDENTITY - """ - - type: Literal[CredentialType.AGENTIC_IDENTITY] = rest_discriminator(name="type", visibility=["read"]) # type: ignore - """The credential type. Required. Agentic identity credential""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CredentialType.AGENTIC_IDENTITY # type: ignore - - -class AgentId(_Model): - """AgentId. - - :ivar type: Required. Default value is "agent_id". - :vartype type: str - :ivar name: The name of the agent. Required. - :vartype name: str - :ivar version: The version identifier of the agent. Required. - :vartype version: str - """ - - type: Literal["agent_id"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required. Default value is \"agent_id\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the agent. Required.""" - version: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The version identifier of the agent. Required.""" - - @overload - def __init__( - self, - *, - name: str, - version: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type: Literal["agent_id"] = "agent_id" - - -class AgentObject(_Model): - """AgentObject. - - :ivar object: The object type, which is always 'agent'. Required. Default value is "agent". 
- :vartype object: str - :ivar id: The unique identifier of the agent. Required. - :vartype id: str - :ivar name: The name of the agent. Required. - :vartype name: str - :ivar versions: The latest version of the agent. Required. - :vartype versions: ~azure.ai.projects.models.AgentObjectVersions - """ - - object: Literal["agent"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type, which is always 'agent'. Required. Default value is \"agent\".""" - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the agent. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the agent. Required.""" - versions: "_models.AgentObjectVersions" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The latest version of the agent. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - name: str, - versions: "_models.AgentObjectVersions", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["agent"] = "agent" - - -class AgentObjectVersions(_Model): - """AgentObjectVersions. - - :ivar latest: Required. - :vartype latest: ~azure.ai.projects.models.AgentVersionObject - """ - - latest: "_models.AgentVersionObject" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - latest: "_models.AgentVersionObject", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgentReference(_Model): - """AgentReference. - - :ivar type: Required. Default value is "agent_reference". - :vartype type: str - :ivar name: The name of the agent. Required. - :vartype name: str - :ivar version: The version identifier of the agent. - :vartype version: str - """ - - type: Literal["agent_reference"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required. Default value is \"agent_reference\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the agent. Required.""" - version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The version identifier of the agent.""" - - @overload - def __init__( - self, - *, - name: str, - version: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type: Literal["agent_reference"] = "agent_reference" - - -class EvaluationTaxonomyInput(_Model): - """Input configuration for the evaluation taxonomy. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - AgentTaxonomyInput - - :ivar type: Input type of the evaluation taxonomy. Required. Known values are: "agent" and - "policy". - :vartype type: str or ~azure.ai.projects.models.EvaluationTaxonomyInputType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Input type of the evaluation taxonomy. Required. 
Known values are: \"agent\" and \"policy\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AgentTaxonomyInput(EvaluationTaxonomyInput, discriminator="agent"): - """Input configuration for the evaluation taxonomy when the input type is agent. - - :ivar type: Input type of the evaluation taxonomy. Required. Agent - :vartype type: str or ~azure.ai.projects.models.AGENT - :ivar target: Target configuration for the agent. Required. - :vartype target: ~azure.ai.projects.models.AzureAIAgentTarget - :ivar risk_categories: List of risk categories to evaluate against. Required. - :vartype risk_categories: list[str or ~azure.ai.projects.models.RiskCategory] - """ - - type: Literal[EvaluationTaxonomyInputType.AGENT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Input type of the evaluation taxonomy. Required. Agent""" - target: "_models.AzureAIAgentTarget" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Target configuration for the agent. Required.""" - risk_categories: list[Union[str, "_models.RiskCategory"]] = rest_field( - name="riskCategories", visibility=["read", "create", "update", "delete", "query"] - ) - """List of risk categories to evaluate against. Required.""" - - @overload - def __init__( - self, - *, - target: "_models.AzureAIAgentTarget", - risk_categories: list[Union[str, "_models.RiskCategory"]], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = EvaluationTaxonomyInputType.AGENT # type: ignore - - -class AgentVersionObject(_Model): - """AgentVersionObject. - - :ivar metadata: Set of 16 key-value pairs that can be attached to an object. This can be - useful for storing additional information about the object in a structured - format, and querying for objects via API or the dashboard. - Keys are strings with a maximum length of 64 characters. Values are strings - with a maximum length of 512 characters. Required. - :vartype metadata: dict[str, str] - :ivar object: The object type, which is always 'agent.version'. Required. Default value is - "agent.version". - :vartype object: str - :ivar id: The unique identifier of the agent version. Required. - :vartype id: str - :ivar name: The name of the agent. Name can be used to retrieve/update/delete the agent. - Required. - :vartype name: str - :ivar version: The version identifier of the agent. Agents are immutable and every update - creates a new version while keeping the name same. Required. - :vartype version: str - :ivar description: A human-readable description of the agent. - :vartype description: str - :ivar created_at: The Unix timestamp (seconds) when the agent was created. Required. - :vartype created_at: ~datetime.datetime - :ivar definition: Required. - :vartype definition: ~azure.ai.projects.models.AgentDefinition - """ - - metadata: dict[str, str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Set of 16 key-value pairs that can be attached to an object. This can be - useful for storing additional information about the object in a structured - format, and querying for objects via API or the dashboard. - Keys are strings with a maximum length of 64 characters. Values are strings - with a maximum length of 512 characters. 
Required.""" - object: Literal["agent.version"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type, which is always 'agent.version'. Required. Default value is \"agent.version\".""" - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the agent version. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the agent. Name can be used to retrieve/update/delete the agent. Required.""" - version: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The version identifier of the agent. Agents are immutable and every update creates a new - version while keeping the name same. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A human-readable description of the agent.""" - created_at: datetime.datetime = rest_field( - visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" - ) - """The Unix timestamp (seconds) when the agent was created. Required.""" - definition: "_models.AgentDefinition" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - metadata: dict[str, str], - id: str, # pylint: disable=redefined-builtin - name: str, - version: str, - created_at: datetime.datetime, - definition: "_models.AgentDefinition", - description: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["agent.version"] = "agent.version" - - -class AISearchIndexResource(_Model): - """A AI Search Index resource. 
- - :ivar project_connection_id: An index connection ID in an IndexResource attached to this agent. - Required. - :vartype project_connection_id: str - :ivar index_name: The name of an index in an IndexResource attached to this agent. - :vartype index_name: str - :ivar query_type: Type of query in an AIIndexResource attached to this agent. Known values are: - "simple", "semantic", "vector", "vector_simple_hybrid", and "vector_semantic_hybrid". - :vartype query_type: str or ~azure.ai.projects.models.AzureAISearchQueryType - :ivar top_k: Number of documents to retrieve from search and present to the model. - :vartype top_k: int - :ivar filter: filter string for search resource. Learn more from here: - `https://learn.microsoft.com/azure/search/search-filters - `_. - :vartype filter: str - :ivar index_asset_id: Index asset id for search resource. - :vartype index_asset_id: str - """ - - project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An index connection ID in an IndexResource attached to this agent. Required.""" - index_name: Optional[str] = rest_field(name="indexName", visibility=["read", "create", "update", "delete", "query"]) - """The name of an index in an IndexResource attached to this agent.""" - query_type: Optional[Union[str, "_models.AzureAISearchQueryType"]] = rest_field( - name="queryType", visibility=["read", "create", "update", "delete", "query"] - ) - """Type of query in an AIIndexResource attached to this agent. Known values are: \"simple\", - \"semantic\", \"vector\", \"vector_simple_hybrid\", and \"vector_semantic_hybrid\".""" - top_k: Optional[int] = rest_field(name="topK", visibility=["read", "create", "update", "delete", "query"]) - """Number of documents to retrieve from search and present to the model.""" - filter: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """filter string for search resource. 
Learn more from here: - `https://learn.microsoft.com/azure/search/search-filters - <https://learn.microsoft.com/azure/search/search-filters>`_.""" - index_asset_id: Optional[str] = rest_field( - name="indexAssetId", visibility=["read", "create", "update", "delete", "query"] - ) - """Index asset id for search resource.""" - - @overload - def __init__( - self, - *, - project_connection_id: str, - index_name: Optional[str] = None, - query_type: Optional[Union[str, "_models.AzureAISearchQueryType"]] = None, - top_k: Optional[int] = None, - filter: Optional[str] = None, # pylint: disable=redefined-builtin - index_asset_id: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class Annotation(_Model): - """Annotation. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - AnnotationFileCitation, AnnotationFilePath, AnnotationUrlCitation - - :ivar type: Required. Known values are: "file_citation", "url_citation", "file_path", and - "container_file_citation". - :vartype type: str or ~azure.ai.projects.models.AnnotationType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"file_citation\", \"url_citation\", \"file_path\", and - \"container_file_citation\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AnnotationFileCitation(Annotation, discriminator="file_citation"): - """A citation to a file.
- - :ivar type: The type of the file citation. Always ``file_citation``. Required. - :vartype type: str or ~azure.ai.projects.models.FILE_CITATION - :ivar file_id: The ID of the file. Required. - :vartype file_id: str - :ivar index: The index of the file in the list of files. Required. - :vartype index: int - :ivar filename: The filename of the file cited. Required. - :vartype filename: str - """ - - type: Literal[AnnotationType.FILE_CITATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the file citation. Always ``file_citation``. Required.""" - file_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the file. Required.""" - index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the file in the list of files. Required.""" - filename: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The filename of the file cited. Required.""" - - @overload - def __init__( - self, - *, - file_id: str, - index: int, - filename: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = AnnotationType.FILE_CITATION # type: ignore - - -class AnnotationFilePath(Annotation, discriminator="file_path"): - """A path to a file. - - :ivar type: The type of the file path. Always ``file_path``. Required. - :vartype type: str or ~azure.ai.projects.models.FILE_PATH - :ivar file_id: The ID of the file. Required. - :vartype file_id: str - :ivar index: The index of the file in the list of files. Required. 
- :vartype index: int - """ - - type: Literal[AnnotationType.FILE_PATH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the file path. Always ``file_path``. Required.""" - file_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the file. Required.""" - index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the file in the list of files. Required.""" - - @overload - def __init__( - self, - *, - file_id: str, - index: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = AnnotationType.FILE_PATH # type: ignore - - -class AnnotationUrlCitation(Annotation, discriminator="url_citation"): - """A citation for a web resource used to generate a model response. - - :ivar type: The type of the URL citation. Always ``url_citation``. Required. - :vartype type: str or ~azure.ai.projects.models.URL_CITATION - :ivar url: The URL of the web resource. Required. - :vartype url: str - :ivar start_index: The index of the first character of the URL citation in the message. - Required. - :vartype start_index: int - :ivar end_index: The index of the last character of the URL citation in the message. Required. - :vartype end_index: int - :ivar title: The title of the web resource. Required. - :vartype title: str - """ - - type: Literal[AnnotationType.URL_CITATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the URL citation. Always ``url_citation``. Required.""" - url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The URL of the web resource. 
Required.""" - start_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the first character of the URL citation in the message. Required.""" - end_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the last character of the URL citation in the message. Required.""" - title: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The title of the web resource. Required.""" - - @overload - def __init__( - self, - *, - url: str, - start_index: int, - end_index: int, - title: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = AnnotationType.URL_CITATION # type: ignore - - -class ApiError(_Model): - """ApiError. - - :ivar code: The error code. Required. - :vartype code: str - :ivar message: A human-readable description of the error. Required. - :vartype message: str - :ivar target: The target of the error, if applicable. - :vartype target: str - :ivar details: Additional details about the error. Required. - :vartype details: list[~azure.ai.projects.models.ApiError] - :ivar innererror: The inner error, if any. - :vartype innererror: ~azure.ai.projects.models.ApiInnerError - """ - - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error code. Required.""" - message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A human-readable description of the error. 
Required.""" - target: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The target of the error, if applicable.""" - details: list["_models.ApiError"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Additional details about the error. Required.""" - innererror: Optional["_models.ApiInnerError"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The inner error, if any.""" - - @overload - def __init__( - self, - *, - code: str, - message: str, - details: list["_models.ApiError"], - target: Optional[str] = None, - innererror: Optional["_models.ApiInnerError"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ApiErrorResponse(_Model): - """Error response for API failures. - - :ivar error: Required. - :vartype error: ~azure.ai.projects.models.ApiError - """ - - error: "_models.ApiError" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - error: "_models.ApiError", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ApiInnerError(_Model): - """ApiInnerError. - - :ivar code: The error code. Required. - :vartype code: str - :ivar innererror: The inner error, if any. - :vartype innererror: ~azure.ai.projects.models.ApiInnerError - """ - - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error code. 
Required.""" - innererror: Optional["_models.ApiInnerError"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The inner error, if any.""" - - @overload - def __init__( - self, - *, - code: str, - innererror: Optional["_models.ApiInnerError"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ApiKeyCredentials(BaseCredentials, discriminator="ApiKey"): - """API Key Credential definition. - - :ivar type: The credential type. Required. API Key credential - :vartype type: str or ~azure.ai.projects.models.API_KEY - :ivar api_key: API Key. - :vartype api_key: str - """ - - type: Literal[CredentialType.API_KEY] = rest_discriminator(name="type", visibility=["read"]) # type: ignore - """The credential type. Required. API Key credential""" - api_key: Optional[str] = rest_field(name="key", visibility=["read"]) - """API Key.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CredentialType.API_KEY # type: ignore - - -class Location(_Model): - """Location. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ApproximateLocation - - :ivar type: Required. "approximate" - :vartype type: str or ~azure.ai.projects.models.LocationType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. 
\"approximate\"""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ApproximateLocation(Location, discriminator="approximate"): - """ApproximateLocation. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.APPROXIMATE - :ivar country: - :vartype country: str - :ivar region: - :vartype region: str - :ivar city: - :vartype city: str - :ivar timezone: - :vartype timezone: str - """ - - type: Literal[LocationType.APPROXIMATE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - country: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - region: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - city: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - timezone: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - - @overload - def __init__( - self, - *, - country: Optional[str] = None, - region: Optional[str] = None, - city: Optional[str] = None, - timezone: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = LocationType.APPROXIMATE # type: ignore - - -class Target(_Model): - """Base class for targets with discriminator support. - - You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: - AzureAIAgentTarget, AzureAIAssistantTarget, AzureAIModelTarget - - :ivar type: The type of target. Required. Default value is None. - :vartype type: str - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """The type of target. Required. Default value is None.""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AzureAIAgentTarget(Target, discriminator="azure_ai_agent"): - """Represents a target specifying an Azure AI agent. - - :ivar type: The type of target, always ``azure_ai_agent``. Required. Default value is - "azure_ai_agent". - :vartype type: str - :ivar name: The unique identifier of the Azure AI agent. Required. - :vartype name: str - :ivar version: The version of the Azure AI agent. - :vartype version: str - :ivar tool_descriptions: The descriptions of the tools available to the agent. - :vartype tool_descriptions: list[~azure.ai.projects.models.ToolDescription] - """ - - type: Literal["azure_ai_agent"] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of target, always ``azure_ai_agent``. Required. Default value is \"azure_ai_agent\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the Azure AI agent.
Required.""" - version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The version of the Azure AI agent.""" - tool_descriptions: Optional[list["_models.ToolDescription"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The descriptions of the tools available to the agent.""" - - @overload - def __init__( - self, - *, - name: str, - version: Optional[str] = None, - tool_descriptions: Optional[list["_models.ToolDescription"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = "azure_ai_agent" # type: ignore - - -class AzureAISearchAgentTool(Tool, discriminator="azure_ai_search"): - """The input definition information for an Azure AI search tool as used to configure an agent. - - :ivar type: The object type, which is always 'azure_ai_search'. Required. - :vartype type: str or ~azure.ai.projects.models.AZURE_AI_SEARCH - :ivar azure_ai_search: The azure ai search index resource. Required. - :vartype azure_ai_search: ~azure.ai.projects.models.AzureAISearchToolResource - """ - - type: Literal[ToolType.AZURE_AI_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'azure_ai_search'. Required.""" - azure_ai_search: "_models.AzureAISearchToolResource" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The azure ai search index resource. Required.""" - - @overload - def __init__( - self, - *, - azure_ai_search: "_models.AzureAISearchToolResource", - ) -> None: ...
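Discriminated subclasses in this diff, such as `AzureAISearchAgentTool(Tool, discriminator="azure_ai_search")` or the `Annotation*` family, pass a `discriminator` class keyword so the base class can map a JSON `type` value to the right subclass via its `__mapping__` registry. A simplified, standalone sketch of that registry pattern follows; the class names and the `from_json` helper are illustrative only, not the SDK's actual machinery:

```python
from typing import Any, ClassVar


class Annotation:
    """Base class: maps discriminator values to registered subclasses."""

    __mapping__: ClassVar[dict[str, type]] = {}

    def __init_subclass__(cls, *, discriminator: str, **kwargs: Any) -> None:
        # Each subclass declares its discriminator as a class keyword
        # (class Foo(Annotation, discriminator="...")) and is registered here.
        super().__init_subclass__(**kwargs)
        Annotation.__mapping__[discriminator] = cls

    @classmethod
    def from_json(cls, data: dict[str, Any]) -> "Annotation":
        # Look up the subclass registered for data["type"] and hydrate it.
        subclass = cls.__mapping__[data["type"]]
        instance = subclass.__new__(subclass)
        instance.__dict__.update(data)
        return instance


class FileCitation(Annotation, discriminator="file_citation"):
    pass


class UrlCitation(Annotation, discriminator="url_citation"):
    pass


# Deserializing through the base class yields the concrete subclass:
ann = Annotation.from_json({"type": "url_citation", "url": "https://example.com"})
```

The generated models do the same dispatch through `rest_discriminator`, which additionally handles serialization visibility and wire-name mapping.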
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.AZURE_AI_SEARCH # type: ignore - - -class Index(_Model): - """Index resource Definition. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - AzureAISearchIndex, CosmosDBIndex, ManagedAzureAISearchIndex - - :ivar type: Type of index. Required. Known values are: "AzureSearch", - "CosmosDBNoSqlVectorStore", and "ManagedAzureSearch". - :vartype type: str or ~azure.ai.projects.models.IndexType - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Type of index. Required. Known values are: \"AzureSearch\", \"CosmosDBNoSqlVectorStore\", and - \"ManagedAzureSearch\".""" - id: Optional[str] = rest_field(visibility=["read"]) - """Asset ID, a unique identifier for the asset.""" - name: str = rest_field(visibility=["read"]) - """The name of the resource. Required.""" - version: str = rest_field(visibility=["read"]) - """The version of the resource. Required.""" - description: Optional[str] = rest_field(visibility=["create", "update"]) - """The asset description text.""" - tags: Optional[dict[str, str]] = rest_field(visibility=["create", "update"]) - """Tag dictionary. 
Tags can be added, removed, and updated.""" - - @overload - def __init__( - self, - *, - type: str, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AzureAISearchIndex(Index, discriminator="AzureSearch"): - """Azure AI Search Index Definition. - - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - :ivar type: Type of index. Required. Azure search - :vartype type: str or ~azure.ai.projects.models.AZURE_SEARCH - :ivar connection_name: Name of connection to Azure AI Search. Required. - :vartype connection_name: str - :ivar index_name: Name of index in Azure AI Search resource to attach. Required. - :vartype index_name: str - :ivar field_mapping: Field mapping configuration. - :vartype field_mapping: ~azure.ai.projects.models.FieldMapping - """ - - type: Literal[IndexType.AZURE_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Type of index. Required. Azure search""" - connection_name: str = rest_field(name="connectionName", visibility=["create"]) - """Name of connection to Azure AI Search. Required.""" - index_name: str = rest_field(name="indexName", visibility=["create"]) - """Name of index in Azure AI Search resource to attach. 
Required.""" - field_mapping: Optional["_models.FieldMapping"] = rest_field(name="fieldMapping", visibility=["create"]) - """Field mapping configuration.""" - - @overload - def __init__( - self, - *, - connection_name: str, - index_name: str, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - field_mapping: Optional["_models.FieldMapping"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = IndexType.AZURE_SEARCH # type: ignore - - -class AzureAISearchToolResource(_Model): - """A set of index resources used by the ``azure_ai_search`` tool. - - :ivar index_list: The indices attached to this agent. There can be a maximum of 1 index - resource attached to the agent. - :vartype index_list: list[~azure.ai.projects.models.AISearchIndexResource] - """ - - index_list: Optional[list["_models.AISearchIndexResource"]] = rest_field( - name="indexList", visibility=["read", "create", "update", "delete", "query"] - ) - """The indices attached to this agent. There can be a maximum of 1 index - resource attached to the agent.""" - - @overload - def __init__( - self, - *, - index_list: Optional[list["_models.AISearchIndexResource"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AzureFunctionAgentTool(Tool, discriminator="azure_function"): - """The input definition information for an Azure Function Tool, as used to configure an Agent. - - :ivar type: The object type, which is always 'azure_function'. Required.
- :vartype type: str or ~azure.ai.projects.models.AZURE_FUNCTION - :ivar azure_function: The Azure Function Tool definition. Required. - :vartype azure_function: ~azure.ai.projects.models.AzureFunctionDefinition - """ - - type: Literal[ToolType.AZURE_FUNCTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'azure_function'. Required.""" - azure_function: "_models.AzureFunctionDefinition" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The Azure Function Tool definition. Required.""" - - @overload - def __init__( - self, - *, - azure_function: "_models.AzureFunctionDefinition", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.AZURE_FUNCTION # type: ignore - - -class AzureFunctionBinding(_Model): - """The structure for keeping storage queue name and URI. - - :ivar type: The type of binding, which is always 'storage_queue'. Required. Default value is - "storage_queue". - :vartype type: str - :ivar storage_queue: Storage queue. Required. - :vartype storage_queue: ~azure.ai.projects.models.AzureFunctionStorageQueue - """ - - type: Literal["storage_queue"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The type of binding, which is always 'storage_queue'. Required. Default value is - \"storage_queue\".""" - storage_queue: "_models.AzureFunctionStorageQueue" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Storage queue. Required.""" - - @overload - def __init__( - self, - *, - storage_queue: "_models.AzureFunctionStorageQueue", - ) -> None: ...
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type: Literal["storage_queue"] = "storage_queue" - - -class AzureFunctionDefinition(_Model): - """The definition of Azure function. - - :ivar function: The definition of azure function and its parameters. Required. - :vartype function: ~azure.ai.projects.models.AzureFunctionDefinitionFunction - :ivar input_binding: Input storage queue. The queue storage trigger runs a function as messages - are added to it. Required. - :vartype input_binding: ~azure.ai.projects.models.AzureFunctionBinding - :ivar output_binding: Output storage queue. The function writes output to this queue when the - input items are processed. Required. - :vartype output_binding: ~azure.ai.projects.models.AzureFunctionBinding - """ - - function: "_models.AzureFunctionDefinitionFunction" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The definition of azure function and its parameters. Required.""" - input_binding: "_models.AzureFunctionBinding" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Input storage queue. The queue storage trigger runs a function as messages are added to it. - Required.""" - output_binding: "_models.AzureFunctionBinding" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Output storage queue. The function writes output to this queue when the input items are - processed. Required.""" - - @overload - def __init__( - self, - *, - function: "_models.AzureFunctionDefinitionFunction", - input_binding: "_models.AzureFunctionBinding", - output_binding: "_models.AzureFunctionBinding", - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AzureFunctionDefinitionFunction(_Model): - """AzureFunctionDefinitionFunction. - - :ivar name: The name of the function to be called. Required. - :vartype name: str - :ivar description: A description of what the function does, used by the model to choose when - and how to call the function. - :vartype description: str - :ivar parameters: The parameters the functions accepts, described as a JSON Schema object. - Required. - :vartype parameters: any - """ - - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the function to be called. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A description of what the function does, used by the model to choose when and how to call the - function.""" - parameters: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The parameters the functions accepts, described as a JSON Schema object. Required.""" - - @overload - def __init__( - self, - *, - name: str, - parameters: Any, - description: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AzureFunctionStorageQueue(_Model): - """The structure for keeping storage queue name and URI. - - :ivar queue_service_endpoint: URI to the Azure Storage Queue service allowing you to manipulate - a queue. Required. 
- :vartype queue_service_endpoint: str - :ivar queue_name: The name of an Azure function storage queue. Required. - :vartype queue_name: str - """ - - queue_service_endpoint: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """URI to the Azure Storage Queue service allowing you to manipulate a queue. Required.""" - queue_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of an Azure function storage queue. Required.""" - - @overload - def __init__( - self, - *, - queue_service_endpoint: str, - queue_name: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class TargetConfig(_Model): - """Abstract class for target configuration. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - AzureOpenAIModelConfiguration - - :ivar type: Type of the model configuration. Required. Default value is None. - :vartype type: str - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Type of the model configuration. Required. Default value is None.""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class AzureOpenAIModelConfiguration(TargetConfig, discriminator="AzureOpenAIModel"): - """Azure OpenAI model configuration. The API version would be selected by the service for querying - the model. - - :ivar type: Required. 
Default value is "AzureOpenAIModel". - :vartype type: str - :ivar model_deployment_name: Deployment name for AOAI model. Example: gpt-4o if in AIServices - or connection based ``connection_name/deployment_name`` (e.g. ``my-aoai-connection/gpt-4o``). - Required. - :vartype model_deployment_name: str - """ - - type: Literal["AzureOpenAIModel"] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Default value is \"AzureOpenAIModel\".""" - model_deployment_name: str = rest_field( - name="modelDeploymentName", visibility=["read", "create", "update", "delete", "query"] - ) - """Deployment name for AOAI model. Example: gpt-4o if in AIServices or connection based - ``connection_name/deployment_name`` (e.g. ``my-aoai-connection/gpt-4o``). Required.""" - - @overload - def __init__( - self, - *, - model_deployment_name: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = "AzureOpenAIModel" # type: ignore - - -class BingCustomSearchAgentTool(Tool, discriminator="bing_custom_search_preview"): - """The input definition information for a Bing custom search tool as used to configure an agent. - - :ivar type: The object type, which is always 'bing_custom_search'. Required. - :vartype type: str or ~azure.ai.projects.models.BING_CUSTOM_SEARCH_PREVIEW - :ivar bing_custom_search_preview: The bing custom search tool parameters. Required. - :vartype bing_custom_search_preview: ~azure.ai.projects.models.BingCustomSearchToolParameters - """ - - type: Literal[ToolType.BING_CUSTOM_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'bing_custom_search'. 
Required.""" - bing_custom_search_preview: "_models.BingCustomSearchToolParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The bing custom search tool parameters. Required.""" - - @overload - def __init__( - self, - *, - bing_custom_search_preview: "_models.BingCustomSearchToolParameters", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.BING_CUSTOM_SEARCH_PREVIEW # type: ignore - - -class BingCustomSearchConfiguration(_Model): - """A bing custom search configuration. - - :ivar project_connection_id: Project connection id for grounding with bing search. Required. - :vartype project_connection_id: str - :ivar instance_name: Name of the custom configuration instance given to config. Required. - :vartype instance_name: str - :ivar market: The market where the results come from. - :vartype market: str - :ivar set_lang: The language to use for user interface strings when calling Bing API. - :vartype set_lang: str - :ivar count: The number of search results to return in the bing api response. - :vartype count: int - :ivar freshness: Filter search results by a specific time range. Accepted values: - `https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters - <https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters>`_. - :vartype freshness: str - """ - - project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Project connection id for grounding with bing search. Required.""" - instance_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Name of the custom configuration instance given to config.
Required.""" - market: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The market where the results come from.""" - set_lang: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The language to use for user interface strings when calling Bing API.""" - count: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of search results to return in the bing api response.""" - freshness: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Filter search results by a specific time range. Accepted values: - `https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters - <https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters>`_.""" - - @overload - def __init__( - self, - *, - project_connection_id: str, - instance_name: str, - market: Optional[str] = None, - set_lang: Optional[str] = None, - count: Optional[int] = None, - freshness: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BingCustomSearchToolParameters(_Model): - """The bing custom search tool parameters. - - :ivar search_configurations: The project connections attached to this tool. There can be a - maximum of 1 connection - resource attached to the tool. Required. - :vartype search_configurations: list[~azure.ai.projects.models.BingCustomSearchConfiguration] - """ - - search_configurations: list["_models.BingCustomSearchConfiguration"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The project connections attached to this tool. There can be a maximum of 1 connection - resource attached to the tool.
Required.""" - - @overload - def __init__( - self, - *, - search_configurations: list["_models.BingCustomSearchConfiguration"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BingGroundingAgentTool(Tool, discriminator="bing_grounding"): - """The input definition information for a bing grounding search tool as used to configure an - agent. - - :ivar type: The object type, which is always 'bing_grounding'. Required. - :vartype type: str or ~azure.ai.projects.models.BING_GROUNDING - :ivar bing_grounding: The bing grounding search tool parameters. Required. - :vartype bing_grounding: ~azure.ai.projects.models.BingGroundingSearchToolParameters - """ - - type: Literal[ToolType.BING_GROUNDING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'bing_grounding'. Required.""" - bing_grounding: "_models.BingGroundingSearchToolParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The bing grounding search tool parameters. Required.""" - - @overload - def __init__( - self, - *, - bing_grounding: "_models.BingGroundingSearchToolParameters", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.BING_GROUNDING # type: ignore - - -class BingGroundingSearchConfiguration(_Model): - """Search configuration for Bing Grounding. - - :ivar project_connection_id: Project connection id for grounding with bing search. Required. 
- :vartype project_connection_id: str - :ivar market: The market where the results come from. - :vartype market: str - :ivar set_lang: The language to use for user interface strings when calling Bing API. - :vartype set_lang: str - :ivar count: The number of search results to return in the bing api response. - :vartype count: int - :ivar freshness: Filter search results by a specific time range. Accepted values: - `https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters - <https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters>`_. - :vartype freshness: str - """ - - project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Project connection id for grounding with bing search. Required.""" - market: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The market where the results come from.""" - set_lang: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The language to use for user interface strings when calling Bing API.""" - count: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of search results to return in the bing api response.""" - freshness: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Filter search results by a specific time range. Accepted values: - `https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters - <https://learn.microsoft.com/bing/search-apis/bing-web-search/reference/query-parameters>`_.""" - - @overload - def __init__( - self, - *, - project_connection_id: str, - market: Optional[str] = None, - set_lang: Optional[str] = None, - count: Optional[int] = None, - freshness: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BingGroundingSearchToolParameters(_Model): - """The bing grounding search tool parameters. - - :ivar project_connections: The project connections attached to this tool. There can be a - maximum of 1 connection - resource attached to the tool. Required. - :vartype project_connections: ~azure.ai.projects.models.ToolProjectConnectionList - :ivar search_configurations: The search configurations attached to this tool. There can be a - maximum of 1 - search configuration resource attached to the tool. Required. - :vartype search_configurations: - list[~azure.ai.projects.models.BingGroundingSearchConfiguration] - """ - - project_connections: "_models.ToolProjectConnectionList" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The project connections attached to this tool. There can be a maximum of 1 connection - resource attached to the tool. Required.""" - search_configurations: list["_models.BingGroundingSearchConfiguration"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The search configurations attached to this tool. There can be a maximum of 1 - search configuration resource attached to the tool. Required.""" - - @overload - def __init__( - self, - *, - project_connections: "_models.ToolProjectConnectionList", - search_configurations: list["_models.BingGroundingSearchConfiguration"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BlobReference(_Model): - """Blob reference details. - - :ivar blob_uri: Blob URI path for client to upload data. Example: - `https://blob.windows.core.net/Container/Path <https://blob.windows.core.net/Container/Path>`_.
- Required. - :vartype blob_uri: str - :ivar storage_account_arm_id: ARM ID of the storage account to use. Required. - :vartype storage_account_arm_id: str - :ivar credential: Credential info to access the storage account. Required. - :vartype credential: ~azure.ai.projects.models.BlobReferenceSasCredential - """ - - blob_uri: str = rest_field(name="blobUri", visibility=["read", "create", "update", "delete", "query"]) - """Blob URI path for client to upload data. Example: `https://blob.windows.core.net/Container/Path - <https://blob.windows.core.net/Container/Path>`_. Required.""" - storage_account_arm_id: str = rest_field( - name="storageAccountArmId", visibility=["read", "create", "update", "delete", "query"] - ) - """ARM ID of the storage account to use. Required.""" - credential: "_models.BlobReferenceSasCredential" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Credential info to access the storage account. Required.""" - - @overload - def __init__( - self, - *, - blob_uri: str, - storage_account_arm_id: str, - credential: "_models.BlobReferenceSasCredential", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BlobReferenceSasCredential(_Model): - """SAS Credential definition. - - :ivar sas_uri: SAS uri. Required. - :vartype sas_uri: str - :ivar type: Type of credential. Required. Default value is "SAS". - :vartype type: str - """ - - sas_uri: str = rest_field(name="sasUri", visibility=["read"]) - """SAS uri. Required.""" - type: Literal["SAS"] = rest_field(visibility=["read"]) - """Type of credential. Required.
Default value is \"SAS\".""" - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type: Literal["SAS"] = "SAS" - - -class BrowserAutomationAgentTool(Tool, discriminator="browser_automation_preview"): - """The input definition information for a Browser Automation Tool, as used to configure an Agent. - - :ivar type: The object type, which is always 'browser_automation'. Required. - :vartype type: str or ~azure.ai.projects.models.BROWSER_AUTOMATION_PREVIEW - :ivar browser_automation_preview: The Browser Automation Tool parameters. Required. - :vartype browser_automation_preview: ~azure.ai.projects.models.BrowserAutomationToolParameters - """ - - type: Literal[ToolType.BROWSER_AUTOMATION_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'browser_automation'. Required.""" - browser_automation_preview: "_models.BrowserAutomationToolParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The Browser Automation Tool parameters. Required.""" - - @overload - def __init__( - self, - *, - browser_automation_preview: "_models.BrowserAutomationToolParameters", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.BROWSER_AUTOMATION_PREVIEW # type: ignore - - -class BrowserAutomationToolConnectionParameters(_Model): # pylint: disable=name-too-long - """Definition of input parameters for the connection used by the Browser Automation Tool. - - :ivar id: The ID of the project connection to your Azure Playwright resource. Required. 
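Models such as `BlobReference` above declare `rest_field(name="blobUri")`, `rest_field(name="storageAccountArmId")`, and so on: the Python attribute stays snake_case while the JSON wire key is camelCase. A minimal sketch of that rename step, assuming a hypothetical `WIRE_NAMES` table and `to_wire`/`from_wire` helpers that are not part of the SDK:

```python
# Illustrative stand-in for the snake_case -> wire-name rename that
# rest_field(name=...) performs during serialization. WIRE_NAMES and the
# helpers below are hypothetical, not azure.ai.projects API.
WIRE_NAMES = {
    "blob_uri": "blobUri",
    "storage_account_arm_id": "storageAccountArmId",
    "sas_uri": "sasUri",
    "model_deployment_name": "modelDeploymentName",
}

def to_wire(model: dict) -> dict:
    """Rename snake_case keys to their camelCase wire names."""
    return {WIRE_NAMES.get(key, key): value for key, value in model.items()}

def from_wire(payload: dict) -> dict:
    """Inverse mapping: wire names back to Python attribute names."""
    attr_names = {wire: attr for attr, wire in WIRE_NAMES.items()}
    return {attr_names.get(key, key): value for key, value in payload.items()}
```

Fields declared without an explicit `name=` (for example `credential`) pass through unchanged in both directions.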
- :vartype id: str - """ - - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the project connection to your Azure Playwright resource. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class BrowserAutomationToolParameters(_Model): - """Definition of input parameters for the Browser Automation Tool. - - :ivar project_connection: The project connection parameters associated with the Browser - Automation Tool. Required. - :vartype project_connection: - ~azure.ai.projects.models.BrowserAutomationToolConnectionParameters - """ - - project_connection: "_models.BrowserAutomationToolConnectionParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The project connection parameters associated with the Browser Automation Tool. Required.""" - - @overload - def __init__( - self, - *, - project_connection: "_models.BrowserAutomationToolConnectionParameters", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CaptureStructuredOutputsTool(Tool, discriminator="capture_structured_outputs"): - """A tool for capturing structured outputs. - - :ivar type: The type of the tool. Always ``capture_structured_outputs``. Required. - :vartype type: str or ~azure.ai.projects.models.CAPTURE_STRUCTURED_OUTPUTS - :ivar outputs: The structured outputs to capture from the model. Required. 
- :vartype outputs: ~azure.ai.projects.models.StructuredOutputDefinition - """ - - type: Literal[ToolType.CAPTURE_STRUCTURED_OUTPUTS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the tool. Always ``capture_structured_outputs``. Required.""" - outputs: "_models.StructuredOutputDefinition" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The structured outputs to capture from the model. Required.""" - - @overload - def __init__( - self, - *, - outputs: "_models.StructuredOutputDefinition", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.CAPTURE_STRUCTURED_OUTPUTS # type: ignore - - -class ChartCoordinate(_Model): - """Coordinates for the analysis chart. - - :ivar x: X-axis coordinate. Required. - :vartype x: int - :ivar y: Y-axis coordinate. Required. - :vartype y: int - :ivar size: Size of the chart element. Required. - :vartype size: int - """ - - x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """X-axis coordinate. Required.""" - y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Y-axis coordinate. Required.""" - size: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Size of the chart element. Required.""" - - @overload - def __init__( - self, - *, - x: int, - y: int, - size: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryItem(_Model): - """A single memory item stored in the memory store, containing content and metadata. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ChatSummaryMemoryItem, UserProfileMemoryItem - - :ivar memory_id: The unique ID of the memory item. Required. - :vartype memory_id: str - :ivar updated_at: The last update time of the memory item. Required. - :vartype updated_at: ~datetime.datetime - :ivar scope: The namespace that logically groups and isolates memories, such as a user ID. - Required. - :vartype scope: str - :ivar content: The content of the memory. Required. - :vartype content: str - :ivar kind: The kind of the memory item. Required. Known values are: "user_profile" and - "chat_summary". - :vartype kind: str or ~azure.ai.projects.models.MemoryItemKind - """ - - __mapping__: dict[str, _Model] = {} - memory_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the memory item. Required.""" - updated_at: datetime.datetime = rest_field( - visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" - ) - """The last update time of the memory item. Required.""" - scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The namespace that logically groups and isolates memories, such as a user ID. Required.""" - content: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The content of the memory. Required.""" - kind: str = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) - """The kind of the memory item. Required. 
Known values are: \"user_profile\" and \"chat_summary\".""" - - @overload - def __init__( - self, - *, - memory_id: str, - updated_at: datetime.datetime, - scope: str, - content: str, - kind: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ChatSummaryMemoryItem(MemoryItem, discriminator="chat_summary"): - """A memory item containing a summary extracted from conversations. - - :ivar memory_id: The unique ID of the memory item. Required. - :vartype memory_id: str - :ivar updated_at: The last update time of the memory item. Required. - :vartype updated_at: ~datetime.datetime - :ivar scope: The namespace that logically groups and isolates memories, such as a user ID. - Required. - :vartype scope: str - :ivar content: The content of the memory. Required. - :vartype content: str - :ivar kind: The kind of the memory item. Required. Summary of chat conversations. - :vartype kind: str or ~azure.ai.projects.models.CHAT_SUMMARY - """ - - kind: Literal[MemoryItemKind.CHAT_SUMMARY] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The kind of the memory item. Required. Summary of chat conversations.""" - - @overload - def __init__( - self, - *, - memory_id: str, - updated_at: datetime.datetime, - scope: str, - content: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = MemoryItemKind.CHAT_SUMMARY # type: ignore - - -class ClusterInsightResult(_Model): - """Insights from the cluster analysis. 
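`MemoryItem.updated_at` above is declared with `format="unix-timestamp"`, meaning the datetime travels on the wire as integer epoch seconds rather than an ISO 8601 string. A rough sketch of that round trip (the helper names are illustrative, not SDK API):

```python
import datetime

def to_unix_timestamp(dt: datetime.datetime) -> int:
    # Serialize an aware datetime to whole epoch seconds.
    return int(dt.timestamp())

def from_unix_timestamp(ts: int) -> datetime.datetime:
    # Parse epoch seconds back into an aware UTC datetime.
    return datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)

updated_at = datetime.datetime(2025, 11, 7, 12, 0, tzinfo=datetime.timezone.utc)
wire_value = to_unix_timestamp(updated_at)  # 1762516800
assert from_unix_timestamp(wire_value) == updated_at
```

Using an aware (UTC) datetime matters here: `timestamp()` on a naive datetime is interpreted in local time, which would silently shift the stored value.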
- - :ivar summary: Summary of the insights report. Required. - :vartype summary: ~azure.ai.projects.models.InsightSummary - :ivar clusters: List of clusters identified in the insights. Required. - :vartype clusters: list[~azure.ai.projects.models.InsightCluster] - :ivar coordinates: Optional mapping of IDs to 2D coordinates used by the UX for - visualization. - The map keys are string identifiers (for example, a cluster id or a sample id) - and the values are the coordinates and visual size for rendering on a 2D chart. - This property is omitted unless the client requests coordinates (for example, - by passing ``includeCoordinates=true`` as a query parameter). - Example: - { - "cluster-1": { "x": 12, "y": 34, "size": 8 }, - "sample-123": { "x": 18, "y": 22, "size": 4 } - } - Coordinates are intended only for client-side visualization and do not - modify the canonical insights results. - :vartype coordinates: dict[str, ~azure.ai.projects.models.ChartCoordinate] - """ - - summary: "_models.InsightSummary" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Summary of the insights report. Required.""" - clusters: list["_models.InsightCluster"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """List of clusters identified in the insights. Required.""" - coordinates: Optional[dict[str, "_models.ChartCoordinate"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """ Optional mapping of IDs to 2D coordinates used by the UX for visualization. - The map keys are string identifiers (for example, a cluster id or a sample id) - and the values are the coordinates and visual size for rendering on a 2D chart. - This property is omitted unless the client requests coordinates (for example, - by passing ``includeCoordinates=true`` as a query parameter). 
- Example: - { - \"cluster-1\": { \"x\": 12, \"y\": 34, \"size\": 8 }, - \"sample-123\": { \"x\": 18, \"y\": 22, \"size\": 4 } - } - Coordinates are intended only for client-side visualization and do not - modify the canonical insights results.""" - - @overload - def __init__( - self, - *, - summary: "_models.InsightSummary", - clusters: list["_models.InsightCluster"], - coordinates: Optional[dict[str, "_models.ChartCoordinate"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ClusterTokenUsage(_Model): - """Token usage for cluster analysis. - - :ivar input_token_usage: input token usage. Required. - :vartype input_token_usage: int - :ivar output_token_usage: output token usage. Required. - :vartype output_token_usage: int - :ivar total_token_usage: total token usage. Required. - :vartype total_token_usage: int - """ - - input_token_usage: int = rest_field( - name="inputTokenUsage", visibility=["read", "create", "update", "delete", "query"] - ) - """input token usage. Required.""" - output_token_usage: int = rest_field( - name="outputTokenUsage", visibility=["read", "create", "update", "delete", "query"] - ) - """output token usage. Required.""" - total_token_usage: int = rest_field( - name="totalTokenUsage", visibility=["read", "create", "update", "delete", "query"] - ) - """total token usage. Required.""" - - @overload - def __init__( - self, - *, - input_token_usage: int, - output_token_usage: int, - total_token_usage: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluatorDefinition(_Model): - """Base evaluator configuration with discriminator. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - CodeBasedEvaluatorDefinition, PromptBasedEvaluatorDefinition - - :ivar type: The type of evaluator definition. Required. Known values are: "prompt", "code", - "prompt_and_code", "service", and "openai_graders". - :vartype type: str or ~azure.ai.projects.models.EvaluatorDefinitionType - :ivar init_parameters: The JSON schema (Draft 2020-12) for the evaluator's input parameters. - This includes parameters like type, properties, required. - :vartype init_parameters: any - :ivar data_schema: The JSON schema (Draft 2020-12) for the evaluator's input data. This - includes parameters like type, properties, required. - :vartype data_schema: any - :ivar metrics: List of output metrics produced by this evaluator. - :vartype metrics: dict[str, ~azure.ai.projects.models.EvaluatorMetric] - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """The type of evaluator definition. Required. Known values are: \"prompt\", \"code\", - \"prompt_and_code\", \"service\", and \"openai_graders\".""" - init_parameters: Optional[Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The JSON schema (Draft 2020-12) for the evaluator's input parameters. This includes parameters - like type, properties, required.""" - data_schema: Optional[Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The JSON schema (Draft 2020-12) for the evaluator's input data. 
This includes parameters like - type, properties, required.""" - metrics: Optional[dict[str, "_models.EvaluatorMetric"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """List of output metrics produced by this evaluator.""" - - @overload - def __init__( - self, - *, - type: str, - init_parameters: Optional[Any] = None, - data_schema: Optional[Any] = None, - metrics: Optional[dict[str, "_models.EvaluatorMetric"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CodeBasedEvaluatorDefinition(EvaluatorDefinition, discriminator="code"): - """Code-based evaluator definition using python code. - - :ivar init_parameters: The JSON schema (Draft 2020-12) for the evaluator's input parameters. - This includes parameters like type, properties, required. - :vartype init_parameters: any - :ivar data_schema: The JSON schema (Draft 2020-12) for the evaluator's input data. This - includes parameters like type, properties, required. - :vartype data_schema: any - :ivar metrics: List of output metrics produced by this evaluator. - :vartype metrics: dict[str, ~azure.ai.projects.models.EvaluatorMetric] - :ivar type: Required. Code-based definition - :vartype type: str or ~azure.ai.projects.models.CODE - :ivar code_text: Inline code text for the evaluator. Required. - :vartype code_text: str - """ - - type: Literal[EvaluatorDefinitionType.CODE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Code-based definition""" - code_text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Inline code text for the evaluator. 
Required.""" - - @overload - def __init__( - self, - *, - code_text: str, - init_parameters: Optional[Any] = None, - data_schema: Optional[Any] = None, - metrics: Optional[dict[str, "_models.EvaluatorMetric"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = EvaluatorDefinitionType.CODE # type: ignore - - -class CodeInterpreterOutput(_Model): - """CodeInterpreterOutput. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - CodeInterpreterOutputImage, CodeInterpreterOutputLogs - - :ivar type: Required. Known values are: "logs" and "image". - :vartype type: str or ~azure.ai.projects.models.CodeInterpreterOutputType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"logs\" and \"image\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CodeInterpreterOutputImage(CodeInterpreterOutput, discriminator="image"): - """The image output from the code interpreter. - - :ivar type: The type of the output. Always 'image'. Required. - :vartype type: str or ~azure.ai.projects.models.IMAGE - :ivar url: The URL of the image output from the code interpreter. Required. 
- :vartype url: str - """ - - type: Literal[CodeInterpreterOutputType.IMAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the output. Always 'image'. Required.""" - url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The URL of the image output from the code interpreter. Required.""" - - @overload - def __init__( - self, - *, - url: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CodeInterpreterOutputType.IMAGE # type: ignore - - -class CodeInterpreterOutputLogs(CodeInterpreterOutput, discriminator="logs"): - """The logs output from the code interpreter. - - :ivar type: The type of the output. Always 'logs'. Required. - :vartype type: str or ~azure.ai.projects.models.LOGS - :ivar logs: The logs output from the code interpreter. Required. - :vartype logs: str - """ - - type: Literal[CodeInterpreterOutputType.LOGS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the output. Always 'logs'. Required.""" - logs: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The logs output from the code interpreter. Required.""" - - @overload - def __init__( - self, - *, - logs: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CodeInterpreterOutputType.LOGS # type: ignore - - -class CodeInterpreterTool(Tool, discriminator="code_interpreter"): - """A tool that runs Python code to help generate a response to a prompt. - - :ivar type: The type of the code interpreter tool. Always ``code_interpreter``. Required. - :vartype type: str or ~azure.ai.projects.models.CODE_INTERPRETER - :ivar container: The code interpreter container. Can be a container ID or an object that - specifies uploaded file IDs to make available to your code. Required. Is either a str type or a - CodeInterpreterToolAuto type. - :vartype container: str or ~azure.ai.projects.models.CodeInterpreterToolAuto - """ - - type: Literal[ToolType.CODE_INTERPRETER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the code interpreter tool. Always ``code_interpreter``. Required.""" - container: Union[str, "_models.CodeInterpreterToolAuto"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The code interpreter container. Can be a container ID or an object that - specifies uploaded file IDs to make available to your code. Required. Is either a str type or a - CodeInterpreterToolAuto type.""" - - @overload - def __init__( - self, - *, - container: Union[str, "_models.CodeInterpreterToolAuto"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.CODE_INTERPRETER # type: ignore - - -class CodeInterpreterToolAuto(_Model): - """Configuration for a code interpreter container. Optionally specify the IDs - of the files to run the code on. 
- - :ivar type: Always ``auto``. Required. Default value is "auto". - :vartype type: str - :ivar file_ids: An optional list of uploaded files to make available to your code. - :vartype file_ids: list[str] - """ - - type: Literal["auto"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Always ``auto``. Required. Default value is \"auto\".""" - file_ids: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An optional list of uploaded files to make available to your code.""" - - @overload - def __init__( - self, - *, - file_ids: Optional[list[str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type: Literal["auto"] = "auto" - - -class ItemParam(_Model): - """Content item used to generate a response. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - CodeInterpreterToolCallItemParam, ComputerToolCallItemParam, ComputerToolCallOutputItemParam, - FileSearchToolCallItemParam, FunctionToolCallItemParam, FunctionToolCallOutputItemParam, - ImageGenToolCallItemParam, ItemReferenceItemParam, LocalShellToolCallItemParam, - LocalShellToolCallOutputItemParam, MCPApprovalRequestItemParam, MCPApprovalResponseItemParam, - MCPCallItemParam, MCPListToolsItemParam, MemorySearchToolCallItemParam, - ResponsesMessageItemParam, ReasoningItemParam, WebSearchToolCallItemParam - - :ivar type: Required. 
Known values are: "message", "file_search_call", "function_call", - "function_call_output", "computer_call", "computer_call_output", "web_search_call", - "reasoning", "item_reference", "image_generation_call", "code_interpreter_call", - "local_shell_call", "local_shell_call_output", "mcp_list_tools", "mcp_approval_request", - "mcp_approval_response", "mcp_call", "structured_outputs", "workflow_action", - "memory_search_call", and "oauth_consent_request". - :vartype type: str or ~azure.ai.projects.models.ItemType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"message\", \"file_search_call\", \"function_call\", - \"function_call_output\", \"computer_call\", \"computer_call_output\", \"web_search_call\", - \"reasoning\", \"item_reference\", \"image_generation_call\", \"code_interpreter_call\", - \"local_shell_call\", \"local_shell_call_output\", \"mcp_list_tools\", - \"mcp_approval_request\", \"mcp_approval_response\", \"mcp_call\", \"structured_outputs\", - \"workflow_action\", \"memory_search_call\", and \"oauth_consent_request\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CodeInterpreterToolCallItemParam(ItemParam, discriminator="code_interpreter_call"): - """A tool call to run code. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.CODE_INTERPRETER_CALL - :ivar container_id: The ID of the container used to run the code. Required. - :vartype container_id: str - :ivar code: The code to run, or null if not available. Required. 
- :vartype code: str - :ivar outputs: The outputs generated by the code interpreter, such as logs or images. - Can be null if no outputs are available. Required. - :vartype outputs: list[~azure.ai.projects.models.CodeInterpreterOutput] - """ - - type: Literal[ItemType.CODE_INTERPRETER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the container used to run the code. Required.""" - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The code to run, or null if not available. Required.""" - outputs: list["_models.CodeInterpreterOutput"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The outputs generated by the code interpreter, such as logs or images. - Can be null if no outputs are available. Required.""" - - @overload - def __init__( - self, - *, - container_id: str, - code: str, - outputs: list["_models.CodeInterpreterOutput"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.CODE_INTERPRETER_CALL # type: ignore - - -class ItemResource(_Model): - """Content item used to generate a response. - - You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: - CodeInterpreterToolCallItemResource, ComputerToolCallItemResource, - ComputerToolCallOutputItemResource, FileSearchToolCallItemResource, - FunctionToolCallItemResource, FunctionToolCallOutputItemResource, ImageGenToolCallItemResource, - LocalShellToolCallItemResource, LocalShellToolCallOutputItemResource, - MCPApprovalRequestItemResource, MCPApprovalResponseItemResource, MCPCallItemResource, - MCPListToolsItemResource, MemorySearchToolCallItemResource, ResponsesMessageItemResource, - OAuthConsentRequestItemResource, ReasoningItemResource, StructuredOutputsItemResource, - WebSearchToolCallItemResource, WorkflowActionOutputItemResource - - :ivar type: Required. Known values are: "message", "file_search_call", "function_call", - "function_call_output", "computer_call", "computer_call_output", "web_search_call", - "reasoning", "item_reference", "image_generation_call", "code_interpreter_call", - "local_shell_call", "local_shell_call_output", "mcp_list_tools", "mcp_approval_request", - "mcp_approval_response", "mcp_call", "structured_outputs", "workflow_action", - "memory_search_call", and "oauth_consent_request". - :vartype type: str or ~azure.ai.projects.models.ItemType - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. 
Known values are: \"message\", \"file_search_call\", \"function_call\", - \"function_call_output\", \"computer_call\", \"computer_call_output\", \"web_search_call\", - \"reasoning\", \"item_reference\", \"image_generation_call\", \"code_interpreter_call\", - \"local_shell_call\", \"local_shell_call_output\", \"mcp_list_tools\", - \"mcp_approval_request\", \"mcp_approval_response\", \"mcp_call\", \"structured_outputs\", - \"workflow_action\", \"memory_search_call\", and \"oauth_consent_request\".""" - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - created_by: Optional["_models.CreatedBy"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The information about the creator of the item.""" - - @overload - def __init__( - self, - *, - type: str, - id: str, # pylint: disable=redefined-builtin - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CodeInterpreterToolCallItemResource(ItemResource, discriminator="code_interpreter_call"): - """A tool call to run code. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.CODE_INTERPRETER_CALL - :ivar status: Required. Is one of the following types: Literal["in_progress"], - Literal["completed"], Literal["incomplete"], Literal["interpreting"], Literal["failed"] - :vartype status: str or str or str or str or str - :ivar container_id: The ID of the container used to run the code. Required. - :vartype container_id: str - :ivar code: The code to run, or null if not available. 
Required. - :vartype code: str - :ivar outputs: The outputs generated by the code interpreter, such as logs or images. - Can be null if no outputs are available. Required. - :vartype outputs: list[~azure.ai.projects.models.CodeInterpreterOutput] - """ - - type: Literal[ItemType.CODE_INTERPRETER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required. Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"], - Literal[\"incomplete\"], Literal[\"interpreting\"], Literal[\"failed\"]""" - container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the container used to run the code. Required.""" - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The code to run, or null if not available. Required.""" - outputs: list["_models.CodeInterpreterOutput"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The outputs generated by the code interpreter, such as logs or images. - Can be null if no outputs are available. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"], - container_id: str, - code: str, - outputs: list["_models.CodeInterpreterOutput"], - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.CODE_INTERPRETER_CALL # type: ignore - - -class ComparisonFilter(_Model): - """A filter used to compare a specified attribute key to a given value using a defined comparison - operation. - - :ivar type: Specifies the comparison operator: ``eq``, ``ne``, ``gt``, ``gte``, ``lt``, - ``lte``. - * `eq`: equals - * `ne`: not equal - * `gt`: greater than - * `gte`: greater than or equal - * `lt`: less than - * `lte`: less than or equal. Required. Is one of the following types: Literal["eq"], - Literal["ne"], Literal["gt"], Literal["gte"], Literal["lt"], Literal["lte"] - :vartype type: str or str or str or str or str or str - :ivar key: The key to compare against the value. Required. - :vartype key: str - :ivar value: The value to compare against the attribute key; supports string, number, or - boolean types. Required. Is one of the following types: str, float, bool - :vartype value: str or float or bool - """ - - type: Literal["eq", "ne", "gt", "gte", "lt", "lte"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Specifies the comparison operator: ``eq``, ``ne``, ``gt``, ``gte``, ``lt``, ``lte``. - * `eq`: equals - * `ne`: not equal - * `gt`: greater than - * `gte`: greater than or equal - * `lt`: less than - * `lte`: less than or equal. Required. Is one of the following types: Literal[\"eq\"], - Literal[\"ne\"], Literal[\"gt\"], Literal[\"gte\"], Literal[\"lt\"], Literal[\"lte\"]""" - key: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The key to compare against the value. Required.""" - value: Union[str, float, bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The value to compare against the attribute key; supports string, number, or boolean types. - Required. 
Is one of the following types: str, float, bool""" - - @overload - def __init__( - self, - *, - type: Literal["eq", "ne", "gt", "gte", "lt", "lte"], - key: str, - value: Union[str, float, bool], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CompoundFilter(_Model): - """Combine multiple filters using ``and`` or ``or``. - - :ivar type: Type of operation: ``and`` or ``or``. Required. Is either a Literal["and"] type or - a Literal["or"] type. - :vartype type: str or str - :ivar filters: Array of filters to combine. Items can be ``ComparisonFilter`` or - ``CompoundFilter``. Required. - :vartype filters: list[~azure.ai.projects.models.ComparisonFilter or - ~azure.ai.projects.models.CompoundFilter] - """ - - type: Literal["and", "or"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Type of operation: ``and`` or ``or``. Required. Is either a Literal[\"and\"] type or a - Literal[\"or\"] type.""" - filters: list[Union["_models.ComparisonFilter", "_models.CompoundFilter"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Array of filters to combine. Items can be ``ComparisonFilter`` or ``CompoundFilter``. Required.""" - - @overload - def __init__( - self, - *, - type: Literal["and", "or"], - filters: list[Union["_models.ComparisonFilter", "_models.CompoundFilter"]], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ComputerAction(_Model): - """ComputerAction. 
- - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ComputerActionClick, ComputerActionDoubleClick, ComputerActionDrag, ComputerActionKeyPress, - ComputerActionMove, ComputerActionScreenshot, ComputerActionScroll, ComputerActionTypeKeys, - ComputerActionWait - - :ivar type: Required. Known values are: "screenshot", "click", "double_click", "scroll", - "type", "wait", "keypress", "drag", and "move". - :vartype type: str or ~azure.ai.projects.models.ComputerActionType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"screenshot\", \"click\", \"double_click\", \"scroll\", \"type\", - \"wait\", \"keypress\", \"drag\", and \"move\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ComputerActionClick(ComputerAction, discriminator="click"): - """A click action. - - :ivar type: Specifies the event type. For a click action, this property is - always set to ``click``. Required. - :vartype type: str or ~azure.ai.projects.models.CLICK - :ivar button: Indicates which mouse button was pressed during the click. One of ``left``, - ``right``, ``wheel``, ``back``, or ``forward``. Required. Is one of the following types: - Literal["left"], Literal["right"], Literal["wheel"], Literal["back"], Literal["forward"] - :vartype button: str or str or str or str or str - :ivar x: The x-coordinate where the click occurred. Required. - :vartype x: int - :ivar y: The y-coordinate where the click occurred. Required. 
- :vartype y: int - """ - - type: Literal[ComputerActionType.CLICK] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a click action, this property is - always set to ``click``. Required.""" - button: Literal["left", "right", "wheel", "back", "forward"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Indicates which mouse button was pressed during the click. One of ``left``, ``right``, - ``wheel``, ``back``, or ``forward``. Required. Is one of the following types: - Literal[\"left\"], Literal[\"right\"], Literal[\"wheel\"], Literal[\"back\"], - Literal[\"forward\"]""" - x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The x-coordinate where the click occurred. Required.""" - y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The y-coordinate where the click occurred. Required.""" - - @overload - def __init__( - self, - *, - button: Literal["left", "right", "wheel", "back", "forward"], - x: int, - y: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.CLICK # type: ignore - - -class ComputerActionDoubleClick(ComputerAction, discriminator="double_click"): - """A double click action. - - :ivar type: Specifies the event type. For a double click action, this property is - always set to ``double_click``. Required. - :vartype type: str or ~azure.ai.projects.models.DOUBLE_CLICK - :ivar x: The x-coordinate where the double click occurred. Required. - :vartype x: int - :ivar y: The y-coordinate where the double click occurred. Required. 
- :vartype y: int - """ - - type: Literal[ComputerActionType.DOUBLE_CLICK] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a double click action, this property is - always set to ``double_click``. Required.""" - x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The x-coordinate where the double click occurred. Required.""" - y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The y-coordinate where the double click occurred. Required.""" - - @overload - def __init__( - self, - *, - x: int, - y: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.DOUBLE_CLICK # type: ignore - - -class ComputerActionDrag(ComputerAction, discriminator="drag"): - """A drag action. - - :ivar type: Specifies the event type. For a drag action, this property is - always set to ``drag``. Required. - :vartype type: str or ~azure.ai.projects.models.DRAG - :ivar path: An array of coordinates representing the path of the drag action. Coordinates will - appear as an array - of objects, eg - .. code-block:: - [ - { x: 100, y: 200 }, - { x: 200, y: 300 } - ]. Required. - :vartype path: list[~azure.ai.projects.models.Coordinate] - """ - - type: Literal[ComputerActionType.DRAG] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a drag action, this property is - always set to ``drag``. Required.""" - path: list["_models.Coordinate"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An array of coordinates representing the path of the drag action. 
Coordinates will appear as an - array - of objects, eg - .. code-block:: - [ - { x: 100, y: 200 }, - { x: 200, y: 300 } - ]. Required.""" - - @overload - def __init__( - self, - *, - path: list["_models.Coordinate"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.DRAG # type: ignore - - -class ComputerActionKeyPress(ComputerAction, discriminator="keypress"): - """A collection of keypresses the model would like to perform. - - :ivar type: Specifies the event type. For a keypress action, this property is - always set to ``keypress``. Required. - :vartype type: str or ~azure.ai.projects.models.KEYPRESS - :ivar keys_property: The combination of keys the model is requesting to be pressed. This is an - array of strings, each representing a key. Required. - :vartype keys_property: list[str] - """ - - type: Literal[ComputerActionType.KEYPRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a keypress action, this property is - always set to ``keypress``. Required.""" - keys_property: list[str] = rest_field(name="keys", visibility=["read", "create", "update", "delete", "query"]) - """The combination of keys the model is requesting to be pressed. This is an - array of strings, each representing a key. Required.""" - - @overload - def __init__( - self, - *, - keys_property: list[str], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.KEYPRESS # type: ignore - - -class ComputerActionMove(ComputerAction, discriminator="move"): - """A mouse move action. - - :ivar type: Specifies the event type. For a move action, this property is - always set to ``move``. Required. - :vartype type: str or ~azure.ai.projects.models.MOVE - :ivar x: The x-coordinate to move to. Required. - :vartype x: int - :ivar y: The y-coordinate to move to. Required. - :vartype y: int - """ - - type: Literal[ComputerActionType.MOVE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a move action, this property is - always set to ``move``. Required.""" - x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The x-coordinate to move to. Required.""" - y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The y-coordinate to move to. Required.""" - - @overload - def __init__( - self, - *, - x: int, - y: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.MOVE # type: ignore - - -class ComputerActionScreenshot(ComputerAction, discriminator="screenshot"): - """A screenshot action. - - :ivar type: Specifies the event type. For a screenshot action, this property is - always set to ``screenshot``. Required. 
- :vartype type: str or ~azure.ai.projects.models.SCREENSHOT - """ - - type: Literal[ComputerActionType.SCREENSHOT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a screenshot action, this property is - always set to ``screenshot``. Required.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.SCREENSHOT # type: ignore - - -class ComputerActionScroll(ComputerAction, discriminator="scroll"): - """A scroll action. - - :ivar type: Specifies the event type. For a scroll action, this property is - always set to ``scroll``. Required. - :vartype type: str or ~azure.ai.projects.models.SCROLL - :ivar x: The x-coordinate where the scroll occurred. Required. - :vartype x: int - :ivar y: The y-coordinate where the scroll occurred. Required. - :vartype y: int - :ivar scroll_x: The horizontal scroll distance. Required. - :vartype scroll_x: int - :ivar scroll_y: The vertical scroll distance. Required. - :vartype scroll_y: int - """ - - type: Literal[ComputerActionType.SCROLL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a scroll action, this property is - always set to ``scroll``. Required.""" - x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The x-coordinate where the scroll occurred. Required.""" - y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The y-coordinate where the scroll occurred. 
Required.""" - scroll_x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The horizontal scroll distance. Required.""" - scroll_y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The vertical scroll distance. Required.""" - - @overload - def __init__( - self, - *, - x: int, - y: int, - scroll_x: int, - scroll_y: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.SCROLL # type: ignore - - -class ComputerActionTypeKeys(ComputerAction, discriminator="type"): - """An action to type in text. - - :ivar type: Specifies the event type. For a type action, this property is - always set to ``type``. Required. - :vartype type: str or ~azure.ai.projects.models.TYPE - :ivar text: The text to type. Required. - :vartype text: str - """ - - type: Literal[ComputerActionType.TYPE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a type action, this property is - always set to ``type``. Required.""" - text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The text to type. Required.""" - - @overload - def __init__( - self, - *, - text: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.TYPE # type: ignore - - -class ComputerActionWait(ComputerAction, discriminator="wait"): - """A wait action. - - :ivar type: Specifies the event type. 
For a wait action, this property is - always set to ``wait``. Required. - :vartype type: str or ~azure.ai.projects.models.WAIT - """ - - type: Literal[ComputerActionType.WAIT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Specifies the event type. For a wait action, this property is - always set to ``wait``. Required.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerActionType.WAIT # type: ignore - - -class ComputerToolCallItemParam(ItemParam, discriminator="computer_call"): - """A tool call to a computer use tool. See the - `computer use guide `_ for more information. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.COMPUTER_CALL - :ivar call_id: An identifier used when responding to the tool call with output. Required. - :vartype call_id: str - :ivar action: Required. - :vartype action: ~azure.ai.projects.models.ComputerAction - :ivar pending_safety_checks: The pending safety checks for the computer call. Required. - :vartype pending_safety_checks: list[~azure.ai.projects.models.ComputerToolCallSafetyCheck] - """ - - type: Literal[ItemType.COMPUTER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An identifier used when responding to the tool call with output. 
Required.""" - action: "_models.ComputerAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - pending_safety_checks: list["_models.ComputerToolCallSafetyCheck"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The pending safety checks for the computer call. Required.""" - - @overload - def __init__( - self, - *, - call_id: str, - action: "_models.ComputerAction", - pending_safety_checks: list["_models.ComputerToolCallSafetyCheck"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.COMPUTER_CALL # type: ignore - - -class ComputerToolCallItemResource(ItemResource, discriminator="computer_call"): - """A tool call to a computer use tool. See the - `computer use guide `_ for more information. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.COMPUTER_CALL - :ivar status: The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal["in_progress"], Literal["completed"], Literal["incomplete"] - :vartype status: str or str or str - :ivar call_id: An identifier used when responding to the tool call with output. Required. - :vartype call_id: str - :ivar action: Required. - :vartype action: ~azure.ai.projects.models.ComputerAction - :ivar pending_safety_checks: The pending safety checks for the computer call. Required. 
- :vartype pending_safety_checks: list[~azure.ai.projects.models.ComputerToolCallSafetyCheck] - """ - - type: Literal[ItemType.COMPUTER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An identifier used when responding to the tool call with output. Required.""" - action: "_models.ComputerAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - pending_safety_checks: list["_models.ComputerToolCallSafetyCheck"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The pending safety checks for the computer call. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete"], - call_id: str, - action: "_models.ComputerAction", - pending_safety_checks: list["_models.ComputerToolCallSafetyCheck"], - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.COMPUTER_CALL # type: ignore - - -class ComputerToolCallOutputItemOutput(_Model): - """ComputerToolCallOutputItemOutput. - - You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: - ComputerToolCallOutputItemOutputComputerScreenshot - - :ivar type: Required. "computer_screenshot" - :vartype type: str or ~azure.ai.projects.models.ComputerToolCallOutputItemOutputType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. \"computer_screenshot\"""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ComputerToolCallOutputItemOutputComputerScreenshot( - ComputerToolCallOutputItemOutput, discriminator="computer_screenshot" -): # pylint: disable=name-too-long - """ComputerToolCallOutputItemOutputComputerScreenshot. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.SCREENSHOT - :ivar image_url: - :vartype image_url: str - :ivar file_id: - :vartype file_id: str - """ - - type: Literal[ComputerToolCallOutputItemOutputType.SCREENSHOT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - - @overload - def __init__( - self, - *, - image_url: Optional[str] = None, - file_id: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ComputerToolCallOutputItemOutputType.SCREENSHOT # type: ignore - - -class ComputerToolCallOutputItemParam(ItemParam, discriminator="computer_call_output"): - """The output of a computer tool call. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.COMPUTER_CALL_OUTPUT - :ivar call_id: The ID of the computer tool call that produced the output. Required. - :vartype call_id: str - :ivar acknowledged_safety_checks: The safety checks reported by the API that have been - acknowledged by the - developer. - :vartype acknowledged_safety_checks: - list[~azure.ai.projects.models.ComputerToolCallSafetyCheck] - :ivar output: Required. - :vartype output: ~azure.ai.projects.models.ComputerToolCallOutputItemOutput - """ - - type: Literal[ItemType.COMPUTER_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the computer tool call that produced the output. Required.""" - acknowledged_safety_checks: Optional[list["_models.ComputerToolCallSafetyCheck"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The safety checks reported by the API that have been acknowledged by the - developer.""" - output: "_models.ComputerToolCallOutputItemOutput" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" - - @overload - def __init__( - self, - *, - call_id: str, - output: "_models.ComputerToolCallOutputItemOutput", - acknowledged_safety_checks: Optional[list["_models.ComputerToolCallSafetyCheck"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.COMPUTER_CALL_OUTPUT # type: ignore - - -class ComputerToolCallOutputItemResource(ItemResource, discriminator="computer_call_output"): - """The output of a computer tool call. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.COMPUTER_CALL_OUTPUT - :ivar status: The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal["in_progress"], Literal["completed"], Literal["incomplete"] - :vartype status: str or str or str - :ivar call_id: The ID of the computer tool call that produced the output. Required. - :vartype call_id: str - :ivar acknowledged_safety_checks: The safety checks reported by the API that have been - acknowledged by the - developer. - :vartype acknowledged_safety_checks: - list[~azure.ai.projects.models.ComputerToolCallSafetyCheck] - :ivar output: Required. - :vartype output: ~azure.ai.projects.models.ComputerToolCallOutputItemOutput - """ - - type: Literal[ItemType.COMPUTER_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. 
Is one of the following - types: Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the computer tool call that produced the output. Required.""" - acknowledged_safety_checks: Optional[list["_models.ComputerToolCallSafetyCheck"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The safety checks reported by the API that have been acknowledged by the - developer.""" - output: "_models.ComputerToolCallOutputItemOutput" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete"], - call_id: str, - output: "_models.ComputerToolCallOutputItemOutput", - created_by: Optional["_models.CreatedBy"] = None, - acknowledged_safety_checks: Optional[list["_models.ComputerToolCallSafetyCheck"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.COMPUTER_CALL_OUTPUT # type: ignore - - -class ComputerToolCallSafetyCheck(_Model): - """A pending safety check for the computer call. - - :ivar id: The ID of the pending safety check. Required. - :vartype id: str - :ivar code: The type of the pending safety check. Required. - :vartype code: str - :ivar message: Details about the pending safety check. Required. - :vartype message: str - """ - - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the pending safety check. 
Required.""" - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The type of the pending safety check. Required.""" - message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Details about the pending safety check. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - code: str, - message: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ComputerUsePreviewTool(Tool, discriminator="computer_use_preview"): - """A tool that controls a virtual computer. Learn more about the `computer tool - `_. - - :ivar type: The type of the computer use tool. Always ``computer_use_preview``. Required. - :vartype type: str or ~azure.ai.projects.models.COMPUTER_USE_PREVIEW - :ivar environment: The type of computer environment to control. Required. Is one of the - following types: Literal["windows"], Literal["mac"], Literal["linux"], Literal["ubuntu"], - Literal["browser"] - :vartype environment: str or str or str or str or str - :ivar display_width: The width of the computer display. Required. - :vartype display_width: int - :ivar display_height: The height of the computer display. Required. - :vartype display_height: int - """ - - type: Literal[ToolType.COMPUTER_USE_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the computer use tool. Always ``computer_use_preview``. Required.""" - environment: Literal["windows", "mac", "linux", "ubuntu", "browser"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The type of computer environment to control. Required. 
Is one of the following types: - Literal[\"windows\"], Literal[\"mac\"], Literal[\"linux\"], Literal[\"ubuntu\"], - Literal[\"browser\"]""" - display_width: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The width of the computer display. Required.""" - display_height: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The height of the computer display. Required.""" - - @overload - def __init__( - self, - *, - environment: Literal["windows", "mac", "linux", "ubuntu", "browser"], - display_width: int, - display_height: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.COMPUTER_USE_PREVIEW # type: ignore - - -class Connection(_Model): - """Response from the list and get connections operations. - - :ivar name: The friendly name of the connection, provided by the user. Required. - :vartype name: str - :ivar id: A unique identifier for the connection, generated by the service. Required. - :vartype id: str - :ivar type: Category of the connection. Required. Known values are: "AzureOpenAI", "AzureBlob", - "AzureStorageAccount", "CognitiveSearch", "CosmosDB", "ApiKey", "AppConfig", "AppInsights", - "CustomKeys", and "RemoteTool". - :vartype type: str or ~azure.ai.projects.models.ConnectionType - :ivar target: The connection URL to be used for this service. Required. - :vartype target: str - :ivar is_default: Whether the connection is tagged as the default connection of its type. - Required. - :vartype is_default: bool - :ivar credentials: The credentials used by the connection. Required. - :vartype credentials: ~azure.ai.projects.models.BaseCredentials - :ivar metadata: Metadata of the connection. Required. 
- :vartype metadata: dict[str, str] - """ - - name: str = rest_field(visibility=["read"]) - """The friendly name of the connection, provided by the user. Required.""" - id: str = rest_field(visibility=["read"]) - """A unique identifier for the connection, generated by the service. Required.""" - type: Union[str, "_models.ConnectionType"] = rest_field(visibility=["read"]) - """Category of the connection. Required. Known values are: \"AzureOpenAI\", \"AzureBlob\", - \"AzureStorageAccount\", \"CognitiveSearch\", \"CosmosDB\", \"ApiKey\", \"AppConfig\", - \"AppInsights\", \"CustomKeys\", and \"RemoteTool\".""" - target: str = rest_field(visibility=["read"]) - """The connection URL to be used for this service. Required.""" - is_default: bool = rest_field(name="isDefault", visibility=["read"]) - """Whether the connection is tagged as the default connection of its type. Required.""" - credentials: "_models.BaseCredentials" = rest_field(visibility=["read"]) - """The credentials used by the connection. Required.""" - metadata: dict[str, str] = rest_field(visibility=["read"]) - """Metadata of the connection. Required.""" - - -class ContainerAppAgentDefinition(AgentDefinition, discriminator="container_app"): - """The container app agent definition. - - :ivar rai_config: Configuration for Responsible AI (RAI) content filtering and safety features. - :vartype rai_config: ~azure.ai.projects.models.RaiConfig - :ivar kind: Required. - :vartype kind: str or ~azure.ai.projects.models.CONTAINER_APP - :ivar container_protocol_versions: The protocols that the agent supports for ingress - communication of the containers. Required. - :vartype container_protocol_versions: list[~azure.ai.projects.models.ProtocolVersionRecord] - :ivar container_app_resource_id: The resource ID of the Azure Container App that hosts this - agent. Not mutable across versions. Required. 
- :vartype container_app_resource_id: str - :ivar ingress_subdomain_suffix: The suffix to apply to the app subdomain when sending ingress - to the agent. This can be a label (e.g., '---current'), a specific revision (e.g., - '--0000001'), or empty to use the default endpoint for the container app. Required. - :vartype ingress_subdomain_suffix: str - """ - - kind: Literal[AgentKind.CONTAINER_APP] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - container_protocol_versions: list["_models.ProtocolVersionRecord"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The protocols that the agent supports for ingress communication of the containers. Required.""" - container_app_resource_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The resource ID of the Azure Container App that hosts this agent. Not mutable across versions. - Required.""" - ingress_subdomain_suffix: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The suffix to apply to the app subdomain when sending ingress to the agent. This can be a label - (e.g., '---current'), a specific revision (e.g., '--0000001'), or empty to use the default - endpoint for the container app. Required.""" - - @overload - def __init__( - self, - *, - container_protocol_versions: list["_models.ProtocolVersionRecord"], - container_app_resource_id: str, - ingress_subdomain_suffix: str, - rai_config: Optional["_models.RaiConfig"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = AgentKind.CONTAINER_APP # type: ignore - - -class EvaluationRuleAction(_Model): - """Evaluation action model. 
- - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ContinuousEvaluationRuleAction, HumanEvaluationRuleAction - - :ivar type: Type of the evaluation action. Required. Known values are: "continuousEvaluation" - and "humanEvaluation". - :vartype type: str or ~azure.ai.projects.models.EvaluationRuleActionType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Type of the evaluation action. Required. Known values are: \"continuousEvaluation\" and - \"humanEvaluation\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ContinuousEvaluationRuleAction(EvaluationRuleAction, discriminator="continuousEvaluation"): - """Evaluation rule action for continuous evaluation. - - :ivar type: Required. Continuous evaluation. - :vartype type: str or ~azure.ai.projects.models.CONTINUOUS_EVALUATION - :ivar eval_id: Eval Id to add continuous evaluation runs to. Required. - :vartype eval_id: str - :ivar max_hourly_runs: Maximum number of evaluation runs allowed per hour. - :vartype max_hourly_runs: int - """ - - type: Literal[EvaluationRuleActionType.CONTINUOUS_EVALUATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Continuous evaluation.""" - eval_id: str = rest_field(name="evalId", visibility=["read", "create", "update", "delete", "query"]) - """Eval Id to add continuous evaluation runs to. 
Required.""" - max_hourly_runs: Optional[int] = rest_field( - name="maxHourlyRuns", visibility=["read", "create", "update", "delete", "query"] - ) - """Maximum number of evaluation runs allowed per hour.""" - - @overload - def __init__( - self, - *, - eval_id: str, - max_hourly_runs: Optional[int] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = EvaluationRuleActionType.CONTINUOUS_EVALUATION # type: ignore - - -class Coordinate(_Model): - """An x/y coordinate pair, e.g. ``{ x: 100, y: 200 }``. - - :ivar x: The x-coordinate. Required. - :vartype x: int - :ivar y: The y-coordinate. Required. - :vartype y: int - """ - - x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The x-coordinate. Required.""" - y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The y-coordinate. Required.""" - - @overload - def __init__( - self, - *, - x: int, - y: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CosmosDBIndex(Index, discriminator="CosmosDBNoSqlVectorStore"): - """CosmosDB Vector Store Index Definition. - - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. 
- :vartype tags: dict[str, str] - :ivar type: Type of index. Required. CosmosDB - :vartype type: str or ~azure.ai.projects.models.COSMOS_DB - :ivar connection_name: Name of connection to CosmosDB. Required. - :vartype connection_name: str - :ivar database_name: Name of the CosmosDB Database. Required. - :vartype database_name: str - :ivar container_name: Name of CosmosDB Container. Required. - :vartype container_name: str - :ivar embedding_configuration: Embedding model configuration. Required. - :vartype embedding_configuration: ~azure.ai.projects.models.EmbeddingConfiguration - :ivar field_mapping: Field mapping configuration. Required. - :vartype field_mapping: ~azure.ai.projects.models.FieldMapping - """ - - type: Literal[IndexType.COSMOS_DB] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Type of index. Required. CosmosDB""" - connection_name: str = rest_field(name="connectionName", visibility=["create"]) - """Name of connection to CosmosDB. Required.""" - database_name: str = rest_field(name="databaseName", visibility=["create"]) - """Name of the CosmosDB Database. Required.""" - container_name: str = rest_field(name="containerName", visibility=["create"]) - """Name of CosmosDB Container. Required.""" - embedding_configuration: "_models.EmbeddingConfiguration" = rest_field( - name="embeddingConfiguration", visibility=["create"] - ) - """Embedding model configuration. Required.""" - field_mapping: "_models.FieldMapping" = rest_field(name="fieldMapping", visibility=["create"]) - """Field mapping configuration. Required.""" - - @overload - def __init__( - self, - *, - connection_name: str, - database_name: str, - container_name: str, - embedding_configuration: "_models.EmbeddingConfiguration", - field_mapping: "_models.FieldMapping", - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = IndexType.COSMOS_DB # type: ignore - - -class CreatedBy(_Model): - """CreatedBy. - - :ivar agent: The agent that created the item. - :vartype agent: ~azure.ai.projects.models.AgentId - :ivar response_id: The response on which the item is created. - :vartype response_id: str - """ - - agent: Optional["_models.AgentId"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The agent that created the item.""" - response_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The response on which the item is created.""" - - @overload - def __init__( - self, - *, - agent: Optional["_models.AgentId"] = None, - response_id: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class Trigger(_Model): - """Base model for Trigger of the schedule. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - CronTrigger, OneTimeTrigger, RecurrenceTrigger - - :ivar type: Type of the trigger. Required. Known values are: "Cron", "Recurrence", and - "OneTime". - :vartype type: str or ~azure.ai.projects.models.TriggerType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Type of the trigger. Required. Known values are: \"Cron\", \"Recurrence\", and \"OneTime\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class CronTrigger(Trigger, discriminator="Cron"): - """Cron based trigger. - - :ivar type: Required. Cron based trigger. - :vartype type: str or ~azure.ai.projects.models.CRON - :ivar expression: Cron expression that defines the schedule frequency. Required. - :vartype expression: str - :ivar time_zone: Time zone for the cron schedule. - :vartype time_zone: str - :ivar start_time: Start time for the cron schedule in ISO 8601 format. - :vartype start_time: str - :ivar end_time: End time for the cron schedule in ISO 8601 format. - :vartype end_time: str - """ - - type: Literal[TriggerType.CRON] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Cron based trigger.""" - expression: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Cron expression that defines the schedule frequency. Required.""" - time_zone: Optional[str] = rest_field(name="timeZone", visibility=["read", "create", "update", "delete", "query"]) - """Time zone for the cron schedule.""" - start_time: Optional[str] = rest_field(name="startTime", visibility=["read", "create", "update", "delete", "query"]) - """Start time for the cron schedule in ISO 8601 format.""" - end_time: Optional[str] = rest_field(name="endTime", visibility=["read", "create", "update", "delete", "query"]) - """End time for the cron schedule in ISO 8601 format.""" - - @overload - def __init__( - self, - *, - expression: str, - time_zone: Optional[str] = None, - start_time: Optional[str] = None, - end_time: Optional[str] = None, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = TriggerType.CRON # type: ignore - - -class CustomCredential(BaseCredentials, discriminator="CustomKeys"): - """Custom credential definition. - - :ivar type: The credential type. Required. Custom credential - :vartype type: str or ~azure.ai.projects.models.CUSTOM - """ - - type: Literal[CredentialType.CUSTOM] = rest_discriminator(name="type", visibility=["read"]) # type: ignore - """The credential type. Required. Custom credential""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CredentialType.CUSTOM # type: ignore - - -class RecurrenceSchedule(_Model): - """Recurrence schedule model. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - DailyRecurrenceSchedule, HourlyRecurrenceSchedule, MonthlyRecurrenceSchedule, - WeeklyRecurrenceSchedule - - :ivar type: Recurrence type for the recurrence schedule. Required. Known values are: "Hourly", - "Daily", "Weekly", and "Monthly". - :vartype type: str or ~azure.ai.projects.models.RecurrenceType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Recurrence type for the recurrence schedule. Required. Known values are: \"Hourly\", \"Daily\", - \"Weekly\", and \"Monthly\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class DailyRecurrenceSchedule(RecurrenceSchedule, discriminator="Daily"): - """Daily recurrence schedule. - - :ivar type: Daily recurrence type. Required. Daily recurrence pattern. - :vartype type: str or ~azure.ai.projects.models.DAILY - :ivar hours: Hours for the recurrence schedule. Required. - :vartype hours: list[int] - """ - - type: Literal[RecurrenceType.DAILY] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Daily recurrence type. Required. Daily recurrence pattern.""" - hours: list[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Hours for the recurrence schedule. Required.""" - - @overload - def __init__( - self, - *, - hours: list[int], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = RecurrenceType.DAILY # type: ignore - - -class DatasetCredential(_Model): - """Represents a reference to a blob for consumption. - - :ivar blob_reference: Credential info to access the storage account. Required. - :vartype blob_reference: ~azure.ai.projects.models.BlobReference - """ - - blob_reference: "_models.BlobReference" = rest_field( - name="blobReference", visibility=["read", "create", "update", "delete", "query"] - ) - """Credential info to access the storage account. Required.""" - - @overload - def __init__( - self, - *, - blob_reference: "_models.BlobReference", - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class DatasetVersion(_Model): - """DatasetVersion Definition. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - FileDatasetVersion, FolderDatasetVersion - - :ivar data_uri: URI of the data. Example: `https://go.microsoft.com/fwlink/?linkid=2202330 - `_. Required. - :vartype data_uri: str - :ivar type: Dataset type. Required. Known values are: "uri_file" and "uri_folder". - :vartype type: str or ~azure.ai.projects.models.DatasetType - :ivar is_reference: Indicates if the dataset holds a reference to the storage, or the dataset - manages storage itself. If true, the underlying data will not be deleted when the dataset - version is deleted. - :vartype is_reference: bool - :ivar connection_name: The Azure Storage Account connection name. Required if - startPendingUploadVersion was not called before creating the Dataset. - :vartype connection_name: str - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - """ - - __mapping__: dict[str, _Model] = {} - data_uri: str = rest_field(name="dataUri", visibility=["read", "create"]) - """URI of the data. Example: `https://go.microsoft.com/fwlink/?linkid=2202330 - `_. Required.""" - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Dataset type. Required. 
Known values are: \"uri_file\" and \"uri_folder\".""" - is_reference: Optional[bool] = rest_field(name="isReference", visibility=["read"]) - """Indicates if the dataset holds a reference to the storage, or the dataset manages storage - itself. If true, the underlying data will not be deleted when the dataset version is deleted.""" - connection_name: Optional[str] = rest_field(name="connectionName", visibility=["read", "create"]) - """The Azure Storage Account connection name. Required if startPendingUploadVersion was not called - before creating the Dataset.""" - id: Optional[str] = rest_field(visibility=["read"]) - """Asset ID, a unique identifier for the asset.""" - name: str = rest_field(visibility=["read"]) - """The name of the resource. Required.""" - version: str = rest_field(visibility=["read"]) - """The version of the resource. Required.""" - description: Optional[str] = rest_field(visibility=["create", "update"]) - """The asset description text.""" - tags: Optional[dict[str, str]] = rest_field(visibility=["create", "update"]) - """Tag dictionary. Tags can be added, removed, and updated.""" - - @overload - def __init__( - self, - *, - data_uri: str, - type: str, - connection_name: Optional[str] = None, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class DeleteAgentResponse(_Model): - """A deleted agent Object. - - :ivar object: The object type. Always 'agent.deleted'. Required. Default value is - "agent.deleted". - :vartype object: str - :ivar name: The name of the agent. Required. - :vartype name: str - :ivar deleted: Whether the agent was successfully deleted. Required. 
- :vartype deleted: bool - """ - - object: Literal["agent.deleted"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type. Always 'agent.deleted'. Required. Default value is \"agent.deleted\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the agent. Required.""" - deleted: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the agent was successfully deleted. Required.""" - - @overload - def __init__( - self, - *, - name: str, - deleted: bool, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["agent.deleted"] = "agent.deleted" - - -class DeleteAgentVersionResponse(_Model): - """A deleted agent version Object. - - :ivar object: The object type. Always 'agent.version.deleted'. Required. Default value is - "agent.version.deleted". - :vartype object: str - :ivar name: The name of the agent. Required. - :vartype name: str - :ivar version: The version identifier of the agent. Required. - :vartype version: str - :ivar deleted: Whether the agent version was successfully deleted. Required. - :vartype deleted: bool - """ - - object: Literal["agent.version.deleted"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type. Always 'agent.version.deleted'. Required. Default value is \"agent.version.deleted\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the agent. Required.""" - version: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The version identifier of the agent.
Required.""" - deleted: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the agent was successfully deleted. Required.""" - - @overload - def __init__( - self, - *, - name: str, - version: str, - deleted: bool, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["agent.version.deleted"] = "agent.version.deleted" - - -class DeleteMemoryStoreResponse(_Model): - """DeleteMemoryStoreResponse. - - :ivar object: The object type. Always 'memory_store.deleted'. Required. Default value is - "memory_store.deleted". - :vartype object: str - :ivar name: The name of the memory store. Required. - :vartype name: str - :ivar deleted: Whether the memory store was successfully deleted. Required. - :vartype deleted: bool - """ - - object: Literal["memory_store.deleted"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type. Always 'memory_store.deleted'. Required. Default value is - \"memory_store.deleted\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the memory store. Required.""" - deleted: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the memory store was successfully deleted. Required.""" - - @overload - def __init__( - self, - *, - name: str, - deleted: bool, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["memory_store.deleted"] = "memory_store.deleted" - - -class Deployment(_Model): - """Model Deployment Definition. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ModelDeployment - - :ivar type: The type of the deployment. Required. "ModelDeployment" - :vartype type: str or ~azure.ai.projects.models.DeploymentType - :ivar name: Name of the deployment. Required. - :vartype name: str - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """The type of the deployment. Required. \"ModelDeployment\"""" - name: str = rest_field(visibility=["read"]) - """Name of the deployment. Required.""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EmbeddingConfiguration(_Model): - """Embedding configuration class. - - :ivar model_deployment_name: Deployment name of embedding model. It can point to a model - deployment either in the parent AIServices or a connection. Required. - :vartype model_deployment_name: str - :ivar embedding_field: Embedding field. Required. - :vartype embedding_field: str - """ - - model_deployment_name: str = rest_field(name="modelDeploymentName", visibility=["create"]) - """Deployment name of embedding model. It can point to a model deployment either in the parent - AIServices or a connection. Required.""" - embedding_field: str = rest_field(name="embeddingField", visibility=["create"]) - """Embedding field. 
Required.""" - - @overload - def __init__( - self, - *, - model_deployment_name: str, - embedding_field: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EntraIDCredentials(BaseCredentials, discriminator="AAD"): - """Entra ID credential definition. - - :ivar type: The credential type. Required. Entra ID credential (formerly known as AAD) - :vartype type: str or ~azure.ai.projects.models.ENTRA_ID - """ - - type: Literal[CredentialType.ENTRA_ID] = rest_discriminator(name="type", visibility=["read"]) # type: ignore - """The credential type. Required. Entra ID credential (formerly known as AAD)""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CredentialType.ENTRA_ID # type: ignore - - -class EvalCompareReport(InsightResult, discriminator="EvaluationComparison"): - """Insights from the evaluation comparison. - - :ivar type: The type of insights result. Required. Evaluation Comparison. - :vartype type: str or ~azure.ai.projects.models.EVALUATION_COMPARISON - :ivar comparisons: Comparison results for each treatment run against the baseline. Required. - :vartype comparisons: list[~azure.ai.projects.models.EvalRunResultComparison] - :ivar method: The statistical method used for comparison. Required. - :vartype method: str - """ - - type: Literal[InsightType.EVALUATION_COMPARISON] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of insights result. Required. 
Evaluation Comparison.""" - comparisons: list["_models.EvalRunResultComparison"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Comparison results for each treatment run against the baseline. Required.""" - method: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The statistical method used for comparison. Required.""" - - @overload - def __init__( - self, - *, - comparisons: list["_models.EvalRunResultComparison"], - method: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = InsightType.EVALUATION_COMPARISON # type: ignore - - -class EvalResult(_Model): - """Result of the evaluation. - - :ivar name: name of the check. Required. - :vartype name: str - :ivar type: type of the check. Required. - :vartype type: str - :ivar score: score. Required. - :vartype score: float - :ivar passed: indicates if the check passed or failed. Required. - :vartype passed: bool - """ - - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """name of the check. Required.""" - type: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """type of the check. Required.""" - score: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """score. Required.""" - passed: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """indicates if the check passed or failed. Required.""" - - @overload - def __init__( - self, - *, - name: str, - type: str, - score: float, - passed: bool, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvalRunResultCompareItem(_Model): - """Metric comparison for a treatment against the baseline. - - :ivar treatment_run_id: The treatment run ID. Required. - :vartype treatment_run_id: str - :ivar treatment_run_summary: Summary statistics of the treatment run. Required. - :vartype treatment_run_summary: ~azure.ai.projects.models.EvalRunResultSummary - :ivar delta_estimate: Estimated difference between treatment and baseline. Required. - :vartype delta_estimate: float - :ivar p_value: P-value for the treatment effect. Required. - :vartype p_value: float - :ivar treatment_effect: Type of treatment effect. Required. Known values are: "TooFewSamples", - "Inconclusive", "Changed", "Improved", and "Degraded". - :vartype treatment_effect: str or ~azure.ai.projects.models.TreatmentEffectType - """ - - treatment_run_id: str = rest_field( - name="treatmentRunId", visibility=["read", "create", "update", "delete", "query"] - ) - """The treatment run ID. Required.""" - treatment_run_summary: "_models.EvalRunResultSummary" = rest_field( - name="treatmentRunSummary", visibility=["read", "create", "update", "delete", "query"] - ) - """Summary statistics of the treatment run. Required.""" - delta_estimate: float = rest_field(name="deltaEstimate", visibility=["read", "create", "update", "delete", "query"]) - """Estimated difference between treatment and baseline. Required.""" - p_value: float = rest_field(name="pValue", visibility=["read", "create", "update", "delete", "query"]) - """P-value for the treatment effect. Required.""" - treatment_effect: Union[str, "_models.TreatmentEffectType"] = rest_field( - name="treatmentEffect", visibility=["read", "create", "update", "delete", "query"] - ) - """Type of treatment effect. Required. 
Known values are: \"TooFewSamples\", \"Inconclusive\", - \"Changed\", \"Improved\", and \"Degraded\".""" - - @overload - def __init__( - self, - *, - treatment_run_id: str, - treatment_run_summary: "_models.EvalRunResultSummary", - delta_estimate: float, - p_value: float, - treatment_effect: Union[str, "_models.TreatmentEffectType"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvalRunResultComparison(_Model): - """Comparison results for treatment runs against the baseline. - - :ivar testing_criteria: Name of the testing criteria. Required. - :vartype testing_criteria: str - :ivar metric: Metric being evaluated. Required. - :vartype metric: str - :ivar evaluator: Name of the evaluator for this testing criteria. Required. - :vartype evaluator: str - :ivar baseline_run_summary: Summary statistics of the baseline run. Required. - :vartype baseline_run_summary: ~azure.ai.projects.models.EvalRunResultSummary - :ivar compare_items: List of comparison results for each treatment run. Required. - :vartype compare_items: list[~azure.ai.projects.models.EvalRunResultCompareItem] - """ - - testing_criteria: str = rest_field( - name="testingCriteria", visibility=["read", "create", "update", "delete", "query"] - ) - """Name of the testing criteria. Required.""" - metric: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Metric being evaluated. Required.""" - evaluator: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Name of the evaluator for this testing criteria. 
Required.""" - baseline_run_summary: "_models.EvalRunResultSummary" = rest_field( - name="baselineRunSummary", visibility=["read", "create", "update", "delete", "query"] - ) - """Summary statistics of the baseline run. Required.""" - compare_items: list["_models.EvalRunResultCompareItem"] = rest_field( - name="compareItems", visibility=["read", "create", "update", "delete", "query"] - ) - """List of comparison results for each treatment run. Required.""" - - @overload - def __init__( - self, - *, - testing_criteria: str, - metric: str, - evaluator: str, - baseline_run_summary: "_models.EvalRunResultSummary", - compare_items: list["_models.EvalRunResultCompareItem"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvalRunResultSummary(_Model): - """Summary statistics of a metric in an evaluation run. - - :ivar run_id: The evaluation run ID. Required. - :vartype run_id: str - :ivar sample_count: Number of samples in the evaluation run. Required. - :vartype sample_count: int - :ivar average: Average value of the metric in the evaluation run. Required. - :vartype average: float - :ivar standard_deviation: Standard deviation of the metric in the evaluation run. Required. - :vartype standard_deviation: float - """ - - run_id: str = rest_field(name="runId", visibility=["read", "create", "update", "delete", "query"]) - """The evaluation run ID. Required.""" - sample_count: int = rest_field(name="sampleCount", visibility=["read", "create", "update", "delete", "query"]) - """Number of samples in the evaluation run. Required.""" - average: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Average value of the metric in the evaluation run. 
Required.""" - standard_deviation: float = rest_field( - name="standardDeviation", visibility=["read", "create", "update", "delete", "query"] - ) - """Standard deviation of the metric in the evaluation run. Required.""" - - @overload - def __init__( - self, - *, - run_id: str, - sample_count: int, - average: float, - standard_deviation: float, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluationComparisonRequest(InsightRequest, discriminator="EvaluationComparison"): - """Evaluation Comparison Request. - - :ivar type: The type of request. Required. Evaluation Comparison. - :vartype type: str or ~azure.ai.projects.models.EVALUATION_COMPARISON - :ivar eval_id: Identifier for the evaluation. Required. - :vartype eval_id: str - :ivar baseline_run_id: The baseline run ID for comparison. Required. - :vartype baseline_run_id: str - :ivar treatment_run_ids: List of treatment run IDs for comparison. Required. - :vartype treatment_run_ids: list[str] - """ - - type: Literal[InsightType.EVALUATION_COMPARISON] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of request. Required. Evaluation Comparison.""" - eval_id: str = rest_field(name="evalId", visibility=["read", "create", "update", "delete", "query"]) - """Identifier for the evaluation. Required.""" - baseline_run_id: str = rest_field(name="baselineRunId", visibility=["read", "create", "update", "delete", "query"]) - """The baseline run ID for comparison. Required.""" - treatment_run_ids: list[str] = rest_field( - name="treatmentRunIds", visibility=["read", "create", "update", "delete", "query"] - ) - """List of treatment run IDs for comparison. 
Required.""" - - @overload - def __init__( - self, - *, - eval_id: str, - baseline_run_id: str, - treatment_run_ids: list[str], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = InsightType.EVALUATION_COMPARISON # type: ignore - - -class InsightSample(_Model): - """A sample from the analysis. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - EvaluationResultSample - - :ivar id: The unique identifier for the analysis sample. Required. - :vartype id: str - :ivar type: Sample type. Required. "EvaluationResultSample" - :vartype type: str or ~azure.ai.projects.models.SampleType - :ivar features: Features to help with additional filtering of data in UX. Required. - :vartype features: dict[str, any] - :ivar correlation_info: Info about the correlation for the analysis sample. Required. - :vartype correlation_info: dict[str, any] - """ - - __mapping__: dict[str, _Model] = {} - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier for the analysis sample. Required.""" - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Sample type. Required. \"EvaluationResultSample\"""" - features: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Features to help with additional filtering of data in UX. Required.""" - correlation_info: dict[str, Any] = rest_field( - name="correlationInfo", visibility=["read", "create", "update", "delete", "query"] - ) - """Info about the correlation for the analysis sample. 
Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - type: str, - features: dict[str, Any], - correlation_info: dict[str, Any], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluationResultSample(InsightSample, discriminator="EvaluationResultSample"): - """A sample from the evaluation result. - - :ivar id: The unique identifier for the analysis sample. Required. - :vartype id: str - :ivar features: Features to help with additional filtering of data in UX. Required. - :vartype features: dict[str, any] - :ivar correlation_info: Info about the correlation for the analysis sample. Required. - :vartype correlation_info: dict[str, any] - :ivar type: Evaluation Result Sample Type. Required. A sample from the evaluation result. - :vartype type: str or ~azure.ai.projects.models.EVALUATION_RESULT_SAMPLE - :ivar evaluation_result: Evaluation result for the analysis sample. Required. - :vartype evaluation_result: ~azure.ai.projects.models.EvalResult - """ - - type: Literal[SampleType.EVALUATION_RESULT_SAMPLE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Evaluation Result Sample Type. Required. A sample from the evaluation result.""" - evaluation_result: "_models.EvalResult" = rest_field( - name="evaluationResult", visibility=["read", "create", "update", "delete", "query"] - ) - """Evaluation result for the analysis sample. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - features: dict[str, Any], - correlation_info: dict[str, Any], - evaluation_result: "_models.EvalResult", - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = SampleType.EVALUATION_RESULT_SAMPLE # type: ignore - - -class EvaluationRule(_Model): - """Evaluation rule model. - - :ivar id: Unique identifier for the evaluation rule. Required. - :vartype id: str - :ivar display_name: Display Name for the evaluation rule. - :vartype display_name: str - :ivar description: Description for the evaluation rule. - :vartype description: str - :ivar action: Definition of the evaluation rule action. Required. - :vartype action: ~azure.ai.projects.models.EvaluationRuleAction - :ivar filter: Filter condition of the evaluation rule. - :vartype filter: ~azure.ai.projects.models.EvaluationRuleFilter - :ivar event_type: Event type that the evaluation rule applies to. Required. Known values are: - "response.completed" and "manual". - :vartype event_type: str or ~azure.ai.projects.models.EvaluationRuleEventType - :ivar enabled: Indicates whether the evaluation rule is enabled. Default is true. Required. - :vartype enabled: bool - :ivar system_data: System metadata for the evaluation rule. Required. - :vartype system_data: dict[str, str] - """ - - id: str = rest_field(visibility=["read"]) - """Unique identifier for the evaluation rule. Required.""" - display_name: Optional[str] = rest_field( - name="displayName", visibility=["read", "create", "update", "delete", "query"] - ) - """Display Name for the evaluation rule.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Description for the evaluation rule.""" - action: "_models.EvaluationRuleAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Definition of the evaluation rule action. 
Required.""" - filter: Optional["_models.EvaluationRuleFilter"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Filter condition of the evaluation rule.""" - event_type: Union[str, "_models.EvaluationRuleEventType"] = rest_field( - name="eventType", visibility=["read", "create", "update", "delete", "query"] - ) - """Event type that the evaluation rule applies to. Required. Known values are: - \"response.completed\" and \"manual\".""" - enabled: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Indicates whether the evaluation rule is enabled. Default is true. Required.""" - system_data: dict[str, str] = rest_field(name="systemData", visibility=["read"]) - """System metadata for the evaluation rule. Required.""" - - @overload - def __init__( - self, - *, - action: "_models.EvaluationRuleAction", - event_type: Union[str, "_models.EvaluationRuleEventType"], - enabled: bool, - display_name: Optional[str] = None, - description: Optional[str] = None, - filter: Optional["_models.EvaluationRuleFilter"] = None, # pylint: disable=redefined-builtin - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluationRuleFilter(_Model): - """Evaluation filter model. - - :ivar agent_name: Filter by agent name. Required. - :vartype agent_name: str - """ - - agent_name: str = rest_field(name="agentName", visibility=["read", "create", "update", "delete", "query"]) - """Filter by agent name. Required.""" - - @overload - def __init__( - self, - *, - agent_name: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluationRunClusterInsightResult(InsightResult, discriminator="EvaluationRunClusterInsight"): - """Insights from the evaluation run cluster analysis. - - :ivar type: The type of insights result. Required. Insights on an Evaluation run result. - :vartype type: str or ~azure.ai.projects.models.EVALUATION_RUN_CLUSTER_INSIGHT - :ivar cluster_insight: Required. - :vartype cluster_insight: ~azure.ai.projects.models.ClusterInsightResult - """ - - type: Literal[InsightType.EVALUATION_RUN_CLUSTER_INSIGHT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of insights result. Required. Insights on an Evaluation run result.""" - cluster_insight: "_models.ClusterInsightResult" = rest_field( - name="clusterInsight", visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" - - @overload - def __init__( - self, - *, - cluster_insight: "_models.ClusterInsightResult", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = InsightType.EVALUATION_RUN_CLUSTER_INSIGHT # type: ignore - - -class EvaluationRunClusterInsightsRequest(InsightRequest, discriminator="EvaluationRunClusterInsight"): - """Insights on set of Evaluation Results. - - :ivar type: The type of insights request. Required. Insights on an Evaluation run result. - :vartype type: str or ~azure.ai.projects.models.EVALUATION_RUN_CLUSTER_INSIGHT - :ivar eval_id: Evaluation Id for the insights. Required. - :vartype eval_id: str - :ivar run_ids: List of evaluation run IDs for the insights. Required. 
- :vartype run_ids: list[str] - :ivar model_configuration: Configuration of the model used in the insight generation. - :vartype model_configuration: ~azure.ai.projects.models.InsightModelConfiguration - """ - - type: Literal[InsightType.EVALUATION_RUN_CLUSTER_INSIGHT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of insights request. Required. Insights on an Evaluation run result.""" - eval_id: str = rest_field(name="evalId", visibility=["read", "create", "update", "delete", "query"]) - """Evaluation Id for the insights. Required.""" - run_ids: list[str] = rest_field(name="runIds", visibility=["read", "create", "update", "delete", "query"]) - """List of evaluation run IDs for the insights. Required.""" - model_configuration: Optional["_models.InsightModelConfiguration"] = rest_field( - name="modelConfiguration", visibility=["read", "create", "update", "delete", "query"] - ) - """Configuration of the model used in the insight generation.""" - - @overload - def __init__( - self, - *, - eval_id: str, - run_ids: list[str], - model_configuration: Optional["_models.InsightModelConfiguration"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = InsightType.EVALUATION_RUN_CLUSTER_INSIGHT # type: ignore - - -class ScheduleTask(_Model): - """Schedule task model. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - EvaluationScheduleTask, InsightScheduleTask - - :ivar type: Type of the task. Required. Known values are: "Evaluation" and "Insight". - :vartype type: str or ~azure.ai.projects.models.ScheduleTaskType - :ivar configuration: Configuration for the task. 
- :vartype configuration: dict[str, str] - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Type of the task. Required. Known values are: \"Evaluation\" and \"Insight\".""" - configuration: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Configuration for the task.""" - - @overload - def __init__( - self, - *, - type: str, - configuration: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluationScheduleTask(ScheduleTask, discriminator="Evaluation"): - """Evaluation task for the schedule. - - :ivar configuration: Configuration for the task. - :vartype configuration: dict[str, str] - :ivar type: Required. Evaluation task. - :vartype type: str or ~azure.ai.projects.models.EVALUATION - :ivar eval_id: Identifier of the evaluation group. Required. - :vartype eval_id: str - :ivar eval_run: The evaluation run payload. Required. - :vartype eval_run: any - """ - - type: Literal[ScheduleTaskType.EVALUATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Evaluation task.""" - eval_id: str = rest_field(name="evalId", visibility=["read", "create", "update", "delete", "query"]) - """Identifier of the evaluation group. Required.""" - eval_run: Any = rest_field(name="evalRun", visibility=["read", "create", "update", "delete", "query"]) - """The evaluation run payload. Required.""" - - @overload - def __init__( - self, - *, - eval_id: str, - eval_run: Any, - configuration: Optional[dict[str, str]] = None, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ScheduleTaskType.EVALUATION # type: ignore - - -class EvaluationTaxonomy(_Model): - """Evaluation Taxonomy Definition. - - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - :ivar taxonomy_input: Input configuration for the evaluation taxonomy. Required. - :vartype taxonomy_input: ~azure.ai.projects.models.EvaluationTaxonomyInput - :ivar taxonomy_categories: List of taxonomy categories. - :vartype taxonomy_categories: list[~azure.ai.projects.models.TaxonomyCategory] - :ivar properties: Additional properties for the evaluation taxonomy. - :vartype properties: dict[str, str] - """ - - id: Optional[str] = rest_field(visibility=["read"]) - """Asset ID, a unique identifier for the asset.""" - name: str = rest_field(visibility=["read"]) - """The name of the resource. Required.""" - version: str = rest_field(visibility=["read"]) - """The version of the resource. Required.""" - description: Optional[str] = rest_field(visibility=["create", "update"]) - """The asset description text.""" - tags: Optional[dict[str, str]] = rest_field(visibility=["create", "update"]) - """Tag dictionary. Tags can be added, removed, and updated.""" - taxonomy_input: "_models.EvaluationTaxonomyInput" = rest_field( - name="taxonomyInput", visibility=["read", "create", "update", "delete", "query"] - ) - """Input configuration for the evaluation taxonomy. 
Required.""" - taxonomy_categories: Optional[list["_models.TaxonomyCategory"]] = rest_field( - name="taxonomyCategories", visibility=["read", "create", "update", "delete", "query"] - ) - """List of taxonomy categories.""" - properties: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Additional properties for the evaluation taxonomy.""" - - @overload - def __init__( - self, - *, - taxonomy_input: "_models.EvaluationTaxonomyInput", - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - taxonomy_categories: Optional[list["_models.TaxonomyCategory"]] = None, - properties: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluatorMetric(_Model): - """Evaluator Metric. - - :ivar type: Type of the metric. Known values are: "ordinal", "continuous", and "boolean". - :vartype type: str or ~azure.ai.projects.models.EvaluatorMetricType - :ivar desirable_direction: It indicates whether a higher value is better or a lower value is - better for this metric. Known values are: "increase", "decrease", and "neutral". - :vartype desirable_direction: str or ~azure.ai.projects.models.EvaluatorMetricDirection - :ivar min_value: Minimum value for the metric. - :vartype min_value: float - :ivar max_value: Maximum value for the metric. If not specified, it is assumed to be unbounded. - :vartype max_value: float - :ivar is_primary: Indicates if this metric is primary when there are multiple metrics. - :vartype is_primary: bool - """ - - type: Optional[Union[str, "_models.EvaluatorMetricType"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Type of the metric. 
Known values are: \"ordinal\", \"continuous\", and \"boolean\".""" - desirable_direction: Optional[Union[str, "_models.EvaluatorMetricDirection"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """It indicates whether a higher value is better or a lower value is better for this metric. Known - values are: \"increase\", \"decrease\", and \"neutral\".""" - min_value: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Minimum value for the metric.""" - max_value: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Maximum value for the metric. If not specified, it is assumed to be unbounded.""" - is_primary: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Indicates if this metric is primary when there are multiple metrics.""" - - @overload - def __init__( - self, - *, - type: Optional[Union[str, "_models.EvaluatorMetricType"]] = None, - desirable_direction: Optional[Union[str, "_models.EvaluatorMetricDirection"]] = None, - min_value: Optional[float] = None, - max_value: Optional[float] = None, - is_primary: Optional[bool] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EvaluatorVersion(_Model): - """Evaluator Definition. - - :ivar display_name: Display Name for evaluator. It helps to find the evaluator easily in AI - Foundry. It does not need to be unique. - :vartype display_name: str - :ivar metadata: Metadata about the evaluator. - :vartype metadata: dict[str, str] - :ivar evaluator_type: The type of the evaluator. Required. Known values are: "builtin" and - "custom". 
- :vartype evaluator_type: str or ~azure.ai.projects.models.EvaluatorType - :ivar categories: The categories of the evaluator. Required. - :vartype categories: list[str or ~azure.ai.projects.models.EvaluatorCategory] - :ivar definition: Definition of the evaluator. Required. - :vartype definition: ~azure.ai.projects.models.EvaluatorDefinition - :ivar created_by: Creator of the evaluator. Required. - :vartype created_by: str - :ivar created_at: Creation date/time of the evaluator. Required. - :vartype created_at: int - :ivar modified_at: Last modified date/time of the evaluator. Required. - :vartype modified_at: int - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - """ - - display_name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Display Name for evaluator. It helps to find the evaluator easily in AI Foundry. It does not - need to be unique.""" - metadata: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Metadata about the evaluator.""" - evaluator_type: Union[str, "_models.EvaluatorType"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The type of the evaluator. Required. Known values are: \"builtin\" and \"custom\".""" - categories: list[Union[str, "_models.EvaluatorCategory"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The categories of the evaluator. Required.""" - definition: "_models.EvaluatorDefinition" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Definition of the evaluator. 
Required.""" - created_by: str = rest_field(visibility=["read"]) - """Creator of the evaluator. Required.""" - created_at: int = rest_field(visibility=["read"]) - """Creation date/time of the evaluator. Required.""" - modified_at: int = rest_field(visibility=["read"]) - """Last modified date/time of the evaluator. Required.""" - id: Optional[str] = rest_field(visibility=["read"]) - """Asset ID, a unique identifier for the asset.""" - name: str = rest_field(visibility=["read"]) - """The name of the resource. Required.""" - version: str = rest_field(visibility=["read"]) - """The version of the resource. Required.""" - description: Optional[str] = rest_field(visibility=["create", "update"]) - """The asset description text.""" - tags: Optional[dict[str, str]] = rest_field(visibility=["create", "update"]) - """Tag dictionary. Tags can be added, removed, and updated.""" - - @overload - def __init__( - self, - *, - evaluator_type: Union[str, "_models.EvaluatorType"], - categories: list[Union[str, "_models.EvaluatorCategory"]], - definition: "_models.EvaluatorDefinition", - display_name: Optional[str] = None, - metadata: Optional[dict[str, str]] = None, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class FabricDataAgentToolParameters(_Model): - """The fabric data agent tool parameters. - - :ivar project_connections: The project connections attached to this tool. There can be a - maximum of 1 connection - resource attached to the tool. 
- :vartype project_connections: list[~azure.ai.projects.models.ToolProjectConnection] - """ - - project_connections: Optional[list["_models.ToolProjectConnection"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The project connections attached to this tool. There can be a maximum of 1 connection - resource attached to the tool.""" - - @overload - def __init__( - self, - *, - project_connections: Optional[list["_models.ToolProjectConnection"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class FieldMapping(_Model): - """Field mapping configuration class. - - :ivar content_fields: List of fields with text content. Required. - :vartype content_fields: list[str] - :ivar filepath_field: Path of file to be used as a source of text content. - :vartype filepath_field: str - :ivar title_field: Field containing the title of the document. - :vartype title_field: str - :ivar url_field: Field containing the url of the document. - :vartype url_field: str - :ivar vector_fields: List of fields with vector content. - :vartype vector_fields: list[str] - :ivar metadata_fields: List of fields with metadata content. - :vartype metadata_fields: list[str] - """ - - content_fields: list[str] = rest_field(name="contentFields", visibility=["create"]) - """List of fields with text content. 
Required.""" - filepath_field: Optional[str] = rest_field(name="filepathField", visibility=["create"]) - """Path of file to be used as a source of text content.""" - title_field: Optional[str] = rest_field(name="titleField", visibility=["create"]) - """Field containing the title of the document.""" - url_field: Optional[str] = rest_field(name="urlField", visibility=["create"]) - """Field containing the url of the document.""" - vector_fields: Optional[list[str]] = rest_field(name="vectorFields", visibility=["create"]) - """List of fields with vector content.""" - metadata_fields: Optional[list[str]] = rest_field(name="metadataFields", visibility=["create"]) - """List of fields with metadata content.""" - - @overload - def __init__( - self, - *, - content_fields: list[str], - filepath_field: Optional[str] = None, - title_field: Optional[str] = None, - url_field: Optional[str] = None, - vector_fields: Optional[list[str]] = None, - metadata_fields: Optional[list[str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class FileDatasetVersion(DatasetVersion, discriminator="uri_file"): - """FileDatasetVersion Definition. - - :ivar data_uri: URI of the data. Example: `https://go.microsoft.com/fwlink/?linkid=2202330 - `_. Required. - :vartype data_uri: str - :ivar is_reference: Indicates if the dataset holds a reference to the storage, or the dataset - manages storage itself. If true, the underlying data will not be deleted when the dataset - version is deleted. - :vartype is_reference: bool - :ivar connection_name: The Azure Storage Account connection name. Required if - startPendingUploadVersion was not called before creating the Dataset. - :vartype connection_name: str - :ivar id: Asset ID, a unique identifier for the asset. 
- :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - :ivar type: Dataset type. Required. URI file. - :vartype type: str or ~azure.ai.projects.models.URI_FILE - """ - - type: Literal[DatasetType.URI_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Dataset type. Required. URI file.""" - - @overload - def __init__( - self, - *, - data_uri: str, - connection_name: Optional[str] = None, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = DatasetType.URI_FILE # type: ignore - - -class FileSearchTool(Tool, discriminator="file_search"): - """A tool that searches for relevant content from uploaded files. Learn more about the `file - search tool `_. - - :ivar type: The type of the file search tool. Always ``file_search``. Required. - :vartype type: str or ~azure.ai.projects.models.FILE_SEARCH - :ivar vector_store_ids: The IDs of the vector stores to search. Required. - :vartype vector_store_ids: list[str] - :ivar max_num_results: The maximum number of results to return. This number should be between 1 - and 50 inclusive. - :vartype max_num_results: int - :ivar ranking_options: Ranking options for search. - :vartype ranking_options: ~azure.ai.projects.models.RankingOptions - :ivar filters: A filter to apply. Is either a ComparisonFilter type or a CompoundFilter type. 
- :vartype filters: ~azure.ai.projects.models.ComparisonFilter or - ~azure.ai.projects.models.CompoundFilter - """ - - type: Literal[ToolType.FILE_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the file search tool. Always ``file_search``. Required.""" - vector_store_ids: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The IDs of the vector stores to search. Required.""" - max_num_results: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The maximum number of results to return. This number should be between 1 and 50 inclusive.""" - ranking_options: Optional["_models.RankingOptions"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Ranking options for search.""" - filters: Optional[Union["_models.ComparisonFilter", "_models.CompoundFilter"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A filter to apply. Is either a ComparisonFilter type or a CompoundFilter type.""" - - @overload - def __init__( - self, - *, - vector_store_ids: list[str], - max_num_results: Optional[int] = None, - ranking_options: Optional["_models.RankingOptions"] = None, - filters: Optional[Union["_models.ComparisonFilter", "_models.CompoundFilter"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.FILE_SEARCH # type: ignore - - -class FileSearchToolCallItemParam(ItemParam, discriminator="file_search_call"): - """The results of a file search tool call. See the - `file search guide `_ for more information. - - :ivar type: Required. 
- :vartype type: str or ~azure.ai.projects.models.FILE_SEARCH_CALL - :ivar queries: The queries used to search for files. Required. - :vartype queries: list[str] - :ivar results: The results of the file search tool call. - :vartype results: list[~azure.ai.projects.models.FileSearchToolCallItemParamResult] - """ - - type: Literal[ItemType.FILE_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - queries: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The queries used to search for files. Required.""" - results: Optional[list["_models.FileSearchToolCallItemParamResult"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The results of the file search tool call.""" - - @overload - def __init__( - self, - *, - queries: list[str], - results: Optional[list["_models.FileSearchToolCallItemParamResult"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.FILE_SEARCH_CALL # type: ignore - - -class FileSearchToolCallItemParamResult(_Model): - """FileSearchToolCallItemParamResult. - - :ivar file_id: The unique ID of the file. - :vartype file_id: str - :ivar text: The text that was retrieved from the file. - :vartype text: str - :ivar filename: The name of the file. - :vartype filename: str - :ivar attributes: - :vartype attributes: ~azure.ai.projects.models.VectorStoreFileAttributes - :ivar score: The relevance score of the file - a value between 0 and 1. 
- :vartype score: float - """ - - file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the file.""" - text: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The text that was retrieved from the file.""" - filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the file.""" - attributes: Optional["_models.VectorStoreFileAttributes"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - score: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The relevance score of the file - a value between 0 and 1.""" - - @overload - def __init__( - self, - *, - file_id: Optional[str] = None, - text: Optional[str] = None, - filename: Optional[str] = None, - attributes: Optional["_models.VectorStoreFileAttributes"] = None, - score: Optional[float] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - - class FileSearchToolCallItemResource(ItemResource, discriminator="file_search_call"): - """The results of a file search tool call. See the - `file search guide `_ for more information. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.FILE_SEARCH_CALL - :ivar status: The status of the file search tool call. One of ``in_progress``, - ``searching``, ``completed``, ``incomplete``, or ``failed``. Required.
Is one of the following types: - Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["incomplete"], - Literal["failed"] - :vartype status: str or str or str or str or str - :ivar queries: The queries used to search for files. Required. - :vartype queries: list[str] - :ivar results: The results of the file search tool call. - :vartype results: list[~azure.ai.projects.models.FileSearchToolCallItemParamResult] - """ - - type: Literal[ItemType.FILE_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "searching", "completed", "incomplete", "failed"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the file search tool call. One of ``in_progress``, - ``searching``, ``completed``, ``incomplete``, or ``failed``. Required. Is one of the following types: - Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"], - Literal[\"incomplete\"], Literal[\"failed\"]""" - queries: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The queries used to search for files. Required.""" - results: Optional[list["_models.FileSearchToolCallItemParamResult"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The results of the file search tool call.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "searching", "completed", "incomplete", "failed"], - queries: list[str], - created_by: Optional["_models.CreatedBy"] = None, - results: Optional[list["_models.FileSearchToolCallItemParamResult"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.FILE_SEARCH_CALL # type: ignore - - - class FolderDatasetVersion(DatasetVersion, discriminator="uri_folder"): - """FolderDatasetVersion Definition. - - :ivar data_uri: URI of the data. Example: `https://go.microsoft.com/fwlink/?linkid=2202330 - `_. Required. - :vartype data_uri: str - :ivar is_reference: Indicates if the dataset holds a reference to the storage, or the dataset - manages storage itself. If true, the underlying data will not be deleted when the dataset - version is deleted. - :vartype is_reference: bool - :ivar connection_name: The Azure Storage Account connection name. Required if - startPendingUploadVersion was not called before creating the Dataset. - :vartype connection_name: str - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - :ivar type: Dataset type. Required. URI folder. - :vartype type: str or ~azure.ai.projects.models.URI_FOLDER - """ - - type: Literal[DatasetType.URI_FOLDER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Dataset type. Required. URI folder.""" - - @overload - def __init__( - self, - *, - data_uri: str, - connection_name: Optional[str] = None, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = DatasetType.URI_FOLDER # type: ignore - - -class FunctionTool(Tool, discriminator="function"): - """Defines a function in your own code the model can choose to call. Learn more about `function - calling `_. - - :ivar type: The type of the function tool. Always ``function``. Required. - :vartype type: str or ~azure.ai.projects.models.FUNCTION - :ivar name: The name of the function to call. Required. - :vartype name: str - :ivar description: A description of the function. Used by the model to determine whether or not - to call the function. - :vartype description: str - :ivar parameters: A JSON schema object describing the parameters of the function. Required. - :vartype parameters: any - :ivar strict: Whether to enforce strict parameter validation. Default ``true``. Required. - :vartype strict: bool - """ - - type: Literal[ToolType.FUNCTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the function tool. Always ``function``. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the function to call. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A description of the function. Used by the model to determine whether or not to call the - function.""" - parameters: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON schema object describing the parameters of the function. Required.""" - strict: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether to enforce strict parameter validation. Default ``true``. 
Required.""" - - @overload - def __init__( - self, - *, - name: str, - parameters: Any, - strict: bool, - description: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.FUNCTION # type: ignore - - -class FunctionToolCallItemParam(ItemParam, discriminator="function_call"): - """A tool call to run a function. See the - `function calling guide `_ for more information. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.FUNCTION_CALL - :ivar call_id: The unique ID of the function tool call generated by the model. Required. - :vartype call_id: str - :ivar name: The name of the function to run. Required. - :vartype name: str - :ivar arguments: A JSON string of the arguments to pass to the function. Required. - :vartype arguments: str - """ - - type: Literal[ItemType.FUNCTION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the function tool call generated by the model. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the function to run. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the arguments to pass to the function. Required.""" - - @overload - def __init__( - self, - *, - call_id: str, - name: str, - arguments: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.FUNCTION_CALL # type: ignore - - -class FunctionToolCallItemResource(ItemResource, discriminator="function_call"): - """A tool call to run a function. See the - `function calling guide `_ for more information. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.FUNCTION_CALL - :ivar status: The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal["in_progress"], Literal["completed"], Literal["incomplete"] - :vartype status: str or str or str - :ivar call_id: The unique ID of the function tool call generated by the model. Required. - :vartype call_id: str - :ivar name: The name of the function to run. Required. - :vartype name: str - :ivar arguments: A JSON string of the arguments to pass to the function. Required. - :vartype arguments: str - """ - - type: Literal[ItemType.FUNCTION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the function tool call generated by the model. 
Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the function to run. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the arguments to pass to the function. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete"], - call_id: str, - name: str, - arguments: str, - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.FUNCTION_CALL # type: ignore - - -class FunctionToolCallOutputItemParam(ItemParam, discriminator="function_call_output"): - """The output of a function tool call. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.FUNCTION_CALL_OUTPUT - :ivar call_id: The unique ID of the function tool call generated by the model. Required. - :vartype call_id: str - :ivar output: A JSON string of the output of the function tool call. Required. - :vartype output: str - """ - - type: Literal[ItemType.FUNCTION_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the function tool call generated by the model. Required.""" - output: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the output of the function tool call. Required.""" - - @overload - def __init__( - self, - *, - call_id: str, - output: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.FUNCTION_CALL_OUTPUT # type: ignore - - -class FunctionToolCallOutputItemResource(ItemResource, discriminator="function_call_output"): - """The output of a function tool call. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.FUNCTION_CALL_OUTPUT - :ivar status: The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal["in_progress"], Literal["completed"], Literal["incomplete"] - :vartype status: str or str or str - :ivar call_id: The unique ID of the function tool call generated by the model. Required. - :vartype call_id: str - :ivar output: A JSON string of the output of the function tool call. Required. - :vartype output: str - """ - - type: Literal[ItemType.FUNCTION_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the item. One of ``in_progress``, ``completed``, or - ``incomplete``. Populated when items are returned via API. Required. Is one of the following - types: Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the function tool call generated by the model. 
Required.""" - output: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the output of the function tool call. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete"], - call_id: str, - output: str, - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.FUNCTION_CALL_OUTPUT # type: ignore - - -class HostedAgentDefinition(AgentDefinition, discriminator="hosted"): - """The hosted agent definition. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ImageBasedHostedAgentDefinition - - :ivar rai_config: Configuration for Responsible AI (RAI) content filtering and safety features. - :vartype rai_config: ~azure.ai.projects.models.RaiConfig - :ivar kind: Required. - :vartype kind: str or ~azure.ai.projects.models.HOSTED - :ivar tools: An array of tools the hosted agent's model may call while generating a response. - You - can specify which tool to use by setting the ``tool_choice`` parameter. - :vartype tools: list[~azure.ai.projects.models.Tool] - :ivar container_protocol_versions: The protocols that the agent supports for ingress - communication of the containers. Required. - :vartype container_protocol_versions: list[~azure.ai.projects.models.ProtocolVersionRecord] - :ivar cpu: The CPU configuration for the hosted agent. Required. - :vartype cpu: str - :ivar memory: The memory configuration for the hosted agent. Required. - :vartype memory: str - :ivar environment_variables: Environment variables to set in the hosted agent container. 
-    :vartype environment_variables: dict[str, str]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    kind: Literal[AgentKind.HOSTED] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """An array of tools the hosted agent's model may call while generating a response. You
-    can specify which tool to use by setting the ``tool_choice`` parameter."""
-    container_protocol_versions: list["_models.ProtocolVersionRecord"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The protocols that the agent supports for ingress communication of the containers. Required."""
-    cpu: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The CPU configuration for the hosted agent. Required."""
-    memory: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The memory configuration for the hosted agent. Required."""
-    environment_variables: Optional[dict[str, str]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Environment variables to set in the hosted agent container."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        container_protocol_versions: list["_models.ProtocolVersionRecord"],
-        cpu: str,
-        memory: str,
-        rai_config: Optional["_models.RaiConfig"] = None,
-        tools: Optional[list["_models.Tool"]] = None,
-        environment_variables: Optional[dict[str, str]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.kind = AgentKind.HOSTED  # type: ignore
-
-
-class HourlyRecurrenceSchedule(RecurrenceSchedule, discriminator="Hourly"):
-    """Hourly recurrence schedule.
-
-    :ivar type: Required. Hourly recurrence pattern.
-    :vartype type: str or ~azure.ai.projects.models.HOURLY
-    """
-
-    type: Literal[RecurrenceType.HOURLY] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required. Hourly recurrence pattern."""
-
-    @overload
-    def __init__(
-        self,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = RecurrenceType.HOURLY  # type: ignore
-
-
-class HumanEvaluationRuleAction(EvaluationRuleAction, discriminator="humanEvaluation"):
-    """Evaluation rule action for human evaluation.
-
-    :ivar type: Required. Human evaluation.
-    :vartype type: str or ~azure.ai.projects.models.HUMAN_EVALUATION
-    :ivar template_id: Human evaluation template Id. Required.
-    :vartype template_id: str
-    """
-
-    type: Literal[EvaluationRuleActionType.HUMAN_EVALUATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required. Human evaluation."""
-    template_id: str = rest_field(name="templateId", visibility=["read", "create", "update", "delete", "query"])
-    """Human evaluation template Id. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        template_id: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = EvaluationRuleActionType.HUMAN_EVALUATION  # type: ignore
-
-
-class ImageBasedHostedAgentDefinition(HostedAgentDefinition, discriminator="hosted"):
-    """The image-based deployment definition for a hosted agent.
-
-    :ivar rai_config: Configuration for Responsible AI (RAI) content filtering and safety features.
-    :vartype rai_config: ~azure.ai.projects.models.RaiConfig
-    :ivar tools: An array of tools the hosted agent's model may call while generating a response.
-        You
-        can specify which tool to use by setting the ``tool_choice`` parameter.
-    :vartype tools: list[~azure.ai.projects.models.Tool]
-    :ivar container_protocol_versions: The protocols that the agent supports for ingress
-        communication of the containers. Required.
-    :vartype container_protocol_versions: list[~azure.ai.projects.models.ProtocolVersionRecord]
-    :ivar cpu: The CPU configuration for the hosted agent. Required.
-    :vartype cpu: str
-    :ivar memory: The memory configuration for the hosted agent. Required.
-    :vartype memory: str
-    :ivar environment_variables: Environment variables to set in the hosted agent container.
-    :vartype environment_variables: dict[str, str]
-    :ivar kind: Required.
-    :vartype kind: str or ~azure.ai.projects.models.HOSTED
-    :ivar image: The image for the hosted agent. Required.
-    :vartype image: str
-    """
-
-    image: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The image for the hosted agent. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        container_protocol_versions: list["_models.ProtocolVersionRecord"],
-        cpu: str,
-        memory: str,
-        image: str,
-        rai_config: Optional["_models.RaiConfig"] = None,
-        tools: Optional[list["_models.Tool"]] = None,
-        environment_variables: Optional[dict[str, str]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ImageGenTool(Tool, discriminator="image_generation"):
-    """A tool that generates images using a model like ``gpt-image-1``.
-
-    :ivar type: The type of the image generation tool. Always ``image_generation``. Required.
-    :vartype type: str or ~azure.ai.projects.models.IMAGE_GENERATION
-    :ivar model: The image generation model to use. Default: ``gpt-image-1``. Default value is
-        "gpt-image-1".
-    :vartype model: str
-    :ivar quality: The quality of the generated image. One of ``low``, ``medium``, ``high``,
-        or ``auto``. Default: ``auto``. Is one of the following types: Literal["low"],
-        Literal["medium"], Literal["high"], Literal["auto"]
-    :vartype quality: str or str or str or str
-    :ivar size: The size of the generated image. One of ``1024x1024``, ``1024x1536``,
-        ``1536x1024``, or ``auto``. Default: ``auto``. Is one of the following types:
-        Literal["1024x1024"], Literal["1024x1536"], Literal["1536x1024"], Literal["auto"]
-    :vartype size: str or str or str or str
-    :ivar output_format: The output format of the generated image. One of ``png``, ``webp``, or
-        ``jpeg``. Default: ``png``. Is one of the following types: Literal["png"], Literal["webp"],
-        Literal["jpeg"]
-    :vartype output_format: str or str or str
-    :ivar output_compression: Compression level for the output image. Default: 100.
-    :vartype output_compression: int
-    :ivar moderation: Moderation level for the generated image. Default: ``auto``. Is either a
-        Literal["auto"] type or a Literal["low"] type.
-    :vartype moderation: str or str
-    :ivar background: Background type for the generated image. One of ``transparent``,
-        ``opaque``, or ``auto``. Default: ``auto``.
-        Is one of the following types:
-        Literal["transparent"], Literal["opaque"], Literal["auto"]
-    :vartype background: str or str or str
-    :ivar input_image_mask: Optional mask for inpainting. Contains ``image_url``
-        (string, optional) and ``file_id`` (string, optional).
-    :vartype input_image_mask: ~azure.ai.projects.models.ImageGenToolInputImageMask
-    :ivar partial_images: Number of partial images to generate in streaming mode, from 0 (default
-        value) to 3.
-    :vartype partial_images: int
-    """
-
-    type: Literal[ToolType.IMAGE_GENERATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the image generation tool. Always ``image_generation``. Required."""
-    model: Optional[Literal["gpt-image-1"]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The image generation model to use. Default: ``gpt-image-1``. Default value is \"gpt-image-1\"."""
-    quality: Optional[Literal["low", "medium", "high", "auto"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The quality of the generated image. One of ``low``, ``medium``, ``high``,
-    or ``auto``. Default: ``auto``. Is one of the following types: Literal[\"low\"],
-    Literal[\"medium\"], Literal[\"high\"], Literal[\"auto\"]"""
-    size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The size of the generated image. One of ``1024x1024``, ``1024x1536``,
-    ``1536x1024``, or ``auto``. Default: ``auto``. Is one of the following types:
-    Literal[\"1024x1024\"], Literal[\"1024x1536\"], Literal[\"1536x1024\"], Literal[\"auto\"]"""
-    output_format: Optional[Literal["png", "webp", "jpeg"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The output format of the generated image. One of ``png``, ``webp``, or
-    ``jpeg``. Default: ``png``.
-    Is one of the following types: Literal[\"png\"], Literal[\"webp\"],
-    Literal[\"jpeg\"]"""
-    output_compression: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Compression level for the output image. Default: 100."""
-    moderation: Optional[Literal["auto", "low"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Moderation level for the generated image. Default: ``auto``. Is either a Literal[\"auto\"] type
-    or a Literal[\"low\"] type."""
-    background: Optional[Literal["transparent", "opaque", "auto"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Background type for the generated image. One of ``transparent``,
-    ``opaque``, or ``auto``. Default: ``auto``. Is one of the following types:
-    Literal[\"transparent\"], Literal[\"opaque\"], Literal[\"auto\"]"""
-    input_image_mask: Optional["_models.ImageGenToolInputImageMask"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Optional mask for inpainting. Contains ``image_url``
-    (string, optional) and ``file_id`` (string, optional)."""
-    partial_images: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Number of partial images to generate in streaming mode, from 0 (default value) to 3."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        model: Optional[Literal["gpt-image-1"]] = None,
-        quality: Optional[Literal["low", "medium", "high", "auto"]] = None,
-        size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]] = None,
-        output_format: Optional[Literal["png", "webp", "jpeg"]] = None,
-        output_compression: Optional[int] = None,
-        moderation: Optional[Literal["auto", "low"]] = None,
-        background: Optional[Literal["transparent", "opaque", "auto"]] = None,
-        input_image_mask: Optional["_models.ImageGenToolInputImageMask"] = None,
-        partial_images: Optional[int] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolType.IMAGE_GENERATION  # type: ignore
-
-
-class ImageGenToolCallItemParam(ItemParam, discriminator="image_generation_call"):
-    """An image generation request made by the model.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.IMAGE_GENERATION_CALL
-    :ivar result: The generated image encoded in base64. Required.
-    :vartype result: str
-    """
-
-    type: Literal[ItemType.IMAGE_GENERATION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    result: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The generated image encoded in base64. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        result: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.IMAGE_GENERATION_CALL  # type: ignore
-
-
-class ImageGenToolCallItemResource(ItemResource, discriminator="image_generation_call"):
-    """An image generation request made by the model.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.IMAGE_GENERATION_CALL
-    :ivar status: Required.
-        Is one of the following types: Literal["in_progress"],
-        Literal["completed"], Literal["generating"], Literal["failed"]
-    :vartype status: str or str or str or str
-    :ivar result: The generated image encoded in base64. Required.
-    :vartype result: str
-    """
-
-    type: Literal[ItemType.IMAGE_GENERATION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    status: Literal["in_progress", "completed", "generating", "failed"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Required. Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"],
-    Literal[\"generating\"], Literal[\"failed\"]"""
-    result: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The generated image encoded in base64. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "completed", "generating", "failed"],
-        result: str,
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.IMAGE_GENERATION_CALL  # type: ignore
-
-
-class ImageGenToolInputImageMask(_Model):
-    """ImageGenToolInputImageMask.
-
-    :ivar image_url: Base64-encoded mask image.
-    :vartype image_url: str
-    :ivar file_id: File ID for the mask image.
-    :vartype file_id: str
-    """
-
-    image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Base64-encoded mask image."""
-    file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """File ID for the mask image."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        image_url: Optional[str] = None,
-        file_id: Optional[str] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class Insight(_Model):
-    """The response body for cluster insights.
-
-    :ivar id: The unique identifier for the insights report. Required.
-    :vartype id: str
-    :ivar metadata: Metadata about the insights report. Required.
-    :vartype metadata: ~azure.ai.projects.models.InsightsMetadata
-    :ivar state: The current state of the insights. Required. Known values are: "NotStarted",
-        "Running", "Succeeded", "Failed", and "Canceled".
-    :vartype state: str or ~azure.ai.projects.models.OperationState
-    :ivar display_name: User friendly display name for the insight. Required.
-    :vartype display_name: str
-    :ivar request: Request for the insights analysis. Required.
-    :vartype request: ~azure.ai.projects.models.InsightRequest
-    :ivar result: The result of the insights report.
-    :vartype result: ~azure.ai.projects.models.InsightResult
-    """
-
-    id: str = rest_field(visibility=["read"])
-    """The unique identifier for the insights report. Required."""
-    metadata: "_models.InsightsMetadata" = rest_field(visibility=["read"])
-    """Metadata about the insights report. Required."""
-    state: Union[str, "_models.OperationState"] = rest_field(visibility=["read"])
-    """The current state of the insights. Required.
-    Known values are: \"NotStarted\", \"Running\",
-    \"Succeeded\", \"Failed\", and \"Canceled\"."""
-    display_name: str = rest_field(name="displayName", visibility=["read", "create", "update", "delete", "query"])
-    """User friendly display name for the insight. Required."""
-    request: "_models.InsightRequest" = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Request for the insights analysis. Required."""
-    result: Optional["_models.InsightResult"] = rest_field(visibility=["read"])
-    """The result of the insights report."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        display_name: str,
-        request: "_models.InsightRequest",
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class InsightCluster(_Model):
-    """A cluster of analysis samples.
-
-    :ivar id: The id of the analysis cluster. Required.
-    :vartype id: str
-    :ivar label: Label for the cluster. Required.
-    :vartype label: str
-    :ivar suggestion: Suggestion for the cluster. Required.
-    :vartype suggestion: str
-    :ivar description: Description of the analysis cluster. Required.
-    :vartype description: str
-    :ivar weight: The weight of the analysis cluster. This indicate number of samples in the
-        cluster. Required.
-    :vartype weight: int
-    :ivar sub_clusters: List of subclusters within this cluster. Empty if no subclusters exist.
-    :vartype sub_clusters: list[~azure.ai.projects.models.InsightCluster]
-    :ivar samples: List of samples that belong to this cluster. Empty if samples are part of
-        subclusters.
-    :vartype samples: list[~azure.ai.projects.models.InsightSample]
-    """
-
-    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The id of the analysis cluster.
Required.""" - label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Label for the cluster. Required.""" - suggestion: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Suggestion for the cluster. Required.""" - description: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Description of the analysis cluster. Required.""" - weight: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The weight of the analysis cluster. This indicate number of samples in the cluster. Required.""" - sub_clusters: Optional[list["_models.InsightCluster"]] = rest_field( - name="subClusters", visibility=["read", "create", "update", "delete", "query"] - ) - """List of subclusters within this cluster. Empty if no subclusters exist.""" - samples: Optional[list["_models.InsightSample"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """List of samples that belong to this cluster. Empty if samples are part of subclusters.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - label: str, - suggestion: str, - description: str, - weight: int, - sub_clusters: Optional[list["_models.InsightCluster"]] = None, - samples: Optional[list["_models.InsightSample"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class InsightModelConfiguration(_Model): - """Configuration of the model used in the insight generation. - - :ivar model_deployment_name: The model deployment to be evaluated. Accepts either the - deployment name alone or with the connection name as '{connectionName}/'. - Required. 
-    :vartype model_deployment_name: str
-    """
-
-    model_deployment_name: str = rest_field(
-        name="modelDeploymentName", visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The model deployment to be evaluated. Accepts either the deployment name alone or with the
-    connection name as '{connectionName}/'. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        model_deployment_name: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class InsightScheduleTask(ScheduleTask, discriminator="Insight"):
-    """Insight task for the schedule.
-
-    :ivar configuration: Configuration for the task.
-    :vartype configuration: dict[str, str]
-    :ivar type: Required. Insight task.
-    :vartype type: str or ~azure.ai.projects.models.INSIGHT
-    :ivar insight: The insight payload. Required.
-    :vartype insight: ~azure.ai.projects.models.Insight
-    """
-
-    type: Literal[ScheduleTaskType.INSIGHT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required. Insight task."""
-    insight: "_models.Insight" = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The insight payload. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        insight: "_models.Insight",
-        configuration: Optional[dict[str, str]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ScheduleTaskType.INSIGHT  # type: ignore
-
-
-class InsightsMetadata(_Model):
-    """Metadata about the insights.
-
-    :ivar created_at: The timestamp when the insights were created. Required.
-    :vartype created_at: ~datetime.datetime
-    :ivar completed_at: The timestamp when the insights were completed.
-    :vartype completed_at: ~datetime.datetime
-    """
-
-    created_at: datetime.datetime = rest_field(
-        name="createdAt", visibility=["read", "create", "update", "delete", "query"], format="rfc3339"
-    )
-    """The timestamp when the insights were created. Required."""
-    completed_at: Optional[datetime.datetime] = rest_field(
-        name="completedAt", visibility=["read", "create", "update", "delete", "query"], format="rfc3339"
-    )
-    """The timestamp when the insights were completed."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        created_at: datetime.datetime,
-        completed_at: Optional[datetime.datetime] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class InsightSummary(_Model):
-    """Summary of the error cluster analysis.
-
-    :ivar sample_count: Total number of samples analyzed. Required.
-    :vartype sample_count: int
-    :ivar unique_subcluster_count: Total number of unique subcluster labels. Required.
-    :vartype unique_subcluster_count: int
-    :ivar unique_cluster_count: Total number of unique clusters. Required.
-    :vartype unique_cluster_count: int
-    :ivar method: Method used for clustering. Required.
-    :vartype method: str
-    :ivar usage: Token usage while performing clustering analysis. Required.
-    :vartype usage: ~azure.ai.projects.models.ClusterTokenUsage
-    """
-
-    sample_count: int = rest_field(name="sampleCount", visibility=["read", "create", "update", "delete", "query"])
-    """Total number of samples analyzed.
Required.""" - unique_subcluster_count: int = rest_field( - name="uniqueSubclusterCount", visibility=["read", "create", "update", "delete", "query"] - ) - """Total number of unique subcluster labels. Required.""" - unique_cluster_count: int = rest_field( - name="uniqueClusterCount", visibility=["read", "create", "update", "delete", "query"] - ) - """Total number of unique clusters. Required.""" - method: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Method used for clustering. Required.""" - usage: "_models.ClusterTokenUsage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Token usage while performing clustering analysis. Required.""" - - @overload - def __init__( - self, - *, - sample_count: int, - unique_subcluster_count: int, - unique_cluster_count: int, - method: str, - usage: "_models.ClusterTokenUsage", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class WorkflowActionOutputItemResource(ItemResource, discriminator="workflow_action"): - """WorkflowActionOutputItemResource. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - InvokeAzureAgentWorkflowActionOutputItemResource - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.WORKFLOW_ACTION - :ivar kind: The kind of CSDL action (e.g., 'SetVariable', 'InvokeAzureAgent'). Required. - Default value is None. - :vartype kind: str - :ivar action_id: Unique identifier for the action. Required. 
-    :vartype action_id: str
-    :ivar parent_action_id: ID of the parent action if this is a nested action.
-    :vartype parent_action_id: str
-    :ivar previous_action_id: ID of the previous action if this action follows another.
-    :vartype previous_action_id: str
-    :ivar status: Status of the action (e.g., 'in_progress', 'completed', 'failed', 'cancelled').
-        Required. Is one of the following types: Literal["completed"], Literal["failed"],
-        Literal["in_progress"], Literal["cancelled"]
-    :vartype status: str or str or str or str
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    type: Literal[ItemType.WORKFLOW_ACTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    kind: str = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"])
-    """The kind of CSDL action (e.g., 'SetVariable', 'InvokeAzureAgent'). Required. Default value is
-    None."""
-    action_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Unique identifier for the action. Required."""
-    parent_action_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """ID of the parent action if this is a nested action."""
-    previous_action_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """ID of the previous action if this action follows another."""
-    status: Literal["completed", "failed", "in_progress", "cancelled"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Status of the action (e.g., 'in_progress', 'completed', 'failed', 'cancelled'). Required.
-    Is
-    one of the following types: Literal[\"completed\"], Literal[\"failed\"],
-    Literal[\"in_progress\"], Literal[\"cancelled\"]"""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        kind: str,
-        action_id: str,
-        status: Literal["completed", "failed", "in_progress", "cancelled"],
-        created_by: Optional["_models.CreatedBy"] = None,
-        parent_action_id: Optional[str] = None,
-        previous_action_id: Optional[str] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.WORKFLOW_ACTION  # type: ignore
-
-
-class InvokeAzureAgentWorkflowActionOutputItemResource(
-    WorkflowActionOutputItemResource, discriminator="InvokeAzureAgent"
-):  # pylint: disable=name-too-long
-    """Details about an agent invocation as part of a workflow action.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.WORKFLOW_ACTION
-    :ivar action_id: Unique identifier for the action. Required.
-    :vartype action_id: str
-    :ivar parent_action_id: ID of the parent action if this is a nested action.
-    :vartype parent_action_id: str
-    :ivar previous_action_id: ID of the previous action if this action follows another.
-    :vartype previous_action_id: str
-    :ivar status: Status of the action (e.g., 'in_progress', 'completed', 'failed', 'cancelled').
-        Required. Is one of the following types: Literal["completed"], Literal["failed"],
-        Literal["in_progress"], Literal["cancelled"]
-    :vartype status: str or str or str or str
-    :ivar kind: Required. Default value is "InvokeAzureAgent".
-    :vartype kind: str
-    :ivar agent: Agent id.
Required. - :vartype agent: ~azure.ai.projects.models.AgentId - :ivar conversation_id: ID of the conversation for the agent invocation. - :vartype conversation_id: str - :ivar response_id: The response id for the agent invocation. Required. - :vartype response_id: str - """ - - __mapping__: dict[str, _Model] = {} - kind: Literal["InvokeAzureAgent"] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Default value is \"InvokeAzureAgent\".""" - agent: "_models.AgentId" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Agent id. Required.""" - conversation_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """ID of the conversation for the agent invocation.""" - response_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The response id for the agent invocation. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - action_id: str, - status: Literal["completed", "failed", "in_progress", "cancelled"], - agent: "_models.AgentId", - response_id: str, - created_by: Optional["_models.CreatedBy"] = None, - parent_action_id: Optional[str] = None, - previous_action_id: Optional[str] = None, - conversation_id: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = "InvokeAzureAgent" # type: ignore - - -class ItemContent(_Model): - """ItemContent. - - You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: - ItemContentInputAudio, ItemContentInputFile, ItemContentInputImage, ItemContentInputText, - ItemContentOutputAudio, ItemContentOutputText, ItemContentRefusal - - :ivar type: Required. Known values are: "input_text", "input_audio", "input_image", - "input_file", "output_text", "output_audio", and "refusal". - :vartype type: str or ~azure.ai.projects.models.ItemContentType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"input_text\", \"input_audio\", \"input_image\", \"input_file\", - \"output_text\", \"output_audio\", and \"refusal\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ItemContentInputAudio(ItemContent, discriminator="input_audio"): - """An audio input to the model. - - :ivar type: The type of the input item. Always ``input_audio``. Required. - :vartype type: str or ~azure.ai.projects.models.INPUT_AUDIO - :ivar data: Base64-encoded audio data. Required. - :vartype data: str - :ivar format: The format of the audio data. Currently supported formats are ``mp3`` and - ``wav``. Required. Is either a Literal["mp3"] type or a Literal["wav"] type. - :vartype format: str or str - """ - - type: Literal[ItemContentType.INPUT_AUDIO] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the input item. Always ``input_audio``. Required.""" - data: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Base64-encoded audio data. 
Required.""" - format: Literal["mp3", "wav"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The format of the audio data. Currently supported formats are ``mp3`` and - ``wav``. Required. Is either a Literal[\"mp3\"] type or a Literal[\"wav\"] type.""" - - @overload - def __init__( - self, - *, - data: str, - format: Literal["mp3", "wav"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.INPUT_AUDIO # type: ignore - - -class ItemContentInputFile(ItemContent, discriminator="input_file"): - """A file input to the model. - - :ivar type: The type of the input item. Always ``input_file``. Required. - :vartype type: str or ~azure.ai.projects.models.INPUT_FILE - :ivar file_id: The ID of the file to be sent to the model. - :vartype file_id: str - :ivar filename: The name of the file to be sent to the model. - :vartype filename: str - :ivar file_data: The content of the file to be sent to the model. - :vartype file_data: str - """ - - type: Literal[ItemContentType.INPUT_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the input item. Always ``input_file``. 
Required.""" - file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the file to be sent to the model.""" - filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the file to be sent to the model.""" - file_data: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The content of the file to be sent to the model.""" - - @overload - def __init__( - self, - *, - file_id: Optional[str] = None, - filename: Optional[str] = None, - file_data: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.INPUT_FILE # type: ignore - - -class ItemContentInputImage(ItemContent, discriminator="input_image"): - """An image input to the model. Learn about `image inputs `_. - - :ivar type: The type of the input item. Always ``input_image``. Required. - :vartype type: str or ~azure.ai.projects.models.INPUT_IMAGE - :ivar image_url: The URL of the image to be sent to the model. A fully qualified URL or base64 - encoded image in a data URL. - :vartype image_url: str - :ivar file_id: The ID of the file to be sent to the model. - :vartype file_id: str - :ivar detail: The detail level of the image to be sent to the model. One of ``high``, ``low``, - or ``auto``. Defaults to ``auto``. Is one of the following types: Literal["low"], - Literal["high"], Literal["auto"] - :vartype detail: str or str or str - """ - - type: Literal[ItemContentType.INPUT_IMAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the input item. Always ``input_image``. 
Required.""" - image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in - a data URL.""" - file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the file to be sent to the model.""" - detail: Optional[Literal["low", "high", "auto"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The detail level of the image to be sent to the model. One of ``high``, ``low``, or ``auto``. - Defaults to ``auto``. Is one of the following types: Literal[\"low\"], Literal[\"high\"], - Literal[\"auto\"]""" - - @overload - def __init__( - self, - *, - image_url: Optional[str] = None, - file_id: Optional[str] = None, - detail: Optional[Literal["low", "high", "auto"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.INPUT_IMAGE # type: ignore - - -class ItemContentInputText(ItemContent, discriminator="input_text"): - """A text input to the model. - - :ivar type: The type of the input item. Always ``input_text``. Required. - :vartype type: str or ~azure.ai.projects.models.INPUT_TEXT - :ivar text: The text input to the model. Required. - :vartype text: str - """ - - type: Literal[ItemContentType.INPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the input item. Always ``input_text``. Required.""" - text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The text input to the model. Required.""" - - @overload - def __init__( - self, - *, - text: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.INPUT_TEXT # type: ignore - - -class ItemContentOutputAudio(ItemContent, discriminator="output_audio"): - """An audio output from the model. - - :ivar type: The type of the output audio. Always ``output_audio``. Required. - :vartype type: str or ~azure.ai.projects.models.OUTPUT_AUDIO - :ivar data: Base64-encoded audio data from the model. Required. - :vartype data: str - :ivar transcript: The transcript of the audio data from the model. Required. - :vartype transcript: str - """ - - type: Literal[ItemContentType.OUTPUT_AUDIO] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the output audio. Always ``output_audio``. Required.""" - data: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Base64-encoded audio data from the model. Required.""" - transcript: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The transcript of the audio data from the model. Required.""" - - @overload - def __init__( - self, - *, - data: str, - transcript: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.OUTPUT_AUDIO # type: ignore - - -class ItemContentOutputText(ItemContent, discriminator="output_text"): - """A text output from the model. - - :ivar type: The type of the output text. Always ``output_text``. Required. 
- :vartype type: str or ~azure.ai.projects.models.OUTPUT_TEXT - :ivar text: The text output from the model. Required. - :vartype text: str - :ivar annotations: The annotations of the text output. Required. - :vartype annotations: list[~azure.ai.projects.models.Annotation] - :ivar logprobs: - :vartype logprobs: list[~azure.ai.projects.models.LogProb] - """ - - type: Literal[ItemContentType.OUTPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the output text. Always ``output_text``. Required.""" - text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The text output from the model. Required.""" - annotations: list["_models.Annotation"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The annotations of the text output. Required.""" - logprobs: Optional[list["_models.LogProb"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - - @overload - def __init__( - self, - *, - text: str, - annotations: list["_models.Annotation"], - logprobs: Optional[list["_models.LogProb"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.OUTPUT_TEXT # type: ignore - - -class ItemContentRefusal(ItemContent, discriminator="refusal"): - """A refusal from the model. - - :ivar type: The type of the refusal. Always ``refusal``. Required. - :vartype type: str or ~azure.ai.projects.models.REFUSAL - :ivar refusal: The refusal explanation from the model. Required. 
- :vartype refusal: str - """ - - type: Literal[ItemContentType.REFUSAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the refusal. Always ``refusal``. Required.""" - refusal: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The refusal explanation from the model. Required.""" - - @overload - def __init__( - self, - *, - refusal: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemContentType.REFUSAL # type: ignore - - -class ItemReferenceItemParam(ItemParam, discriminator="item_reference"): - """An internal identifier for an item to reference. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.ITEM_REFERENCE - :ivar id: The service-originated ID of the previously generated response item being referenced. - Required. - :vartype id: str - """ - - type: Literal[ItemType.ITEM_REFERENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The service-originated ID of the previously generated response item being referenced. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.ITEM_REFERENCE # type: ignore - - -class LocalShellExecAction(_Model): - """Execute a shell command on the server. - - :ivar type: The type of the local shell action. Always ``exec``. Required. Default value is - "exec". - :vartype type: str - :ivar command: The command to run. Required. - :vartype command: list[str] - :ivar timeout_ms: Optional timeout in milliseconds for the command. - :vartype timeout_ms: int - :ivar working_directory: Optional working directory to run the command in. - :vartype working_directory: str - :ivar env: Environment variables to set for the command. Required. - :vartype env: dict[str, str] - :ivar user: Optional user to run the command as. - :vartype user: str - """ - - type: Literal["exec"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The type of the local shell action. Always ``exec``. Required. Default value is \"exec\".""" - command: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The command to run. Required.""" - timeout_ms: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Optional timeout in milliseconds for the command.""" - working_directory: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Optional working directory to run the command in.""" - env: dict[str, str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Environment variables to set for the command. 
Required.""" - user: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Optional user to run the command as.""" - - @overload - def __init__( - self, - *, - command: list[str], - env: dict[str, str], - timeout_ms: Optional[int] = None, - working_directory: Optional[str] = None, - user: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type: Literal["exec"] = "exec" - - -class LocalShellTool(Tool, discriminator="local_shell"): - """A tool that allows the model to execute shell commands in a local environment. - - :ivar type: The type of the local shell tool. Always ``local_shell``. Required. - :vartype type: str or ~azure.ai.projects.models.LOCAL_SHELL - """ - - type: Literal[ToolType.LOCAL_SHELL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the local shell tool. Always ``local_shell``. Required.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.LOCAL_SHELL # type: ignore - - -class LocalShellToolCallItemParam(ItemParam, discriminator="local_shell_call"): - """A tool call to run a command on the local shell. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.LOCAL_SHELL_CALL - :ivar call_id: The unique ID of the local shell tool call generated by the model. Required. - :vartype call_id: str - :ivar action: Required. 
- :vartype action: ~azure.ai.projects.models.LocalShellExecAction - """ - - type: Literal[ItemType.LOCAL_SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the local shell tool call generated by the model. Required.""" - action: "_models.LocalShellExecAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - call_id: str, - action: "_models.LocalShellExecAction", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.LOCAL_SHELL_CALL # type: ignore - - -class LocalShellToolCallItemResource(ItemResource, discriminator="local_shell_call"): - """A tool call to run a command on the local shell. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.LOCAL_SHELL_CALL - :ivar status: Required. Is one of the following types: Literal["in_progress"], - Literal["completed"], Literal["incomplete"] - :vartype status: str or str or str - :ivar call_id: The unique ID of the local shell tool call generated by the model. Required. - :vartype call_id: str - :ivar action: Required. 
- :vartype action: ~azure.ai.projects.models.LocalShellExecAction - """ - - type: Literal[ItemType.LOCAL_SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required. Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"], - Literal[\"incomplete\"]""" - call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the local shell tool call generated by the model. Required.""" - action: "_models.LocalShellExecAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete"], - call_id: str, - action: "_models.LocalShellExecAction", - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.LOCAL_SHELL_CALL # type: ignore - - -class LocalShellToolCallOutputItemParam(ItemParam, discriminator="local_shell_call_output"): - """The output of a local shell tool call. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.LOCAL_SHELL_CALL_OUTPUT - :ivar output: A JSON string of the output of the local shell tool call. Required. 
- :vartype output: str - """ - - type: Literal[ItemType.LOCAL_SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - output: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the output of the local shell tool call. Required.""" - - @overload - def __init__( - self, - *, - output: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.LOCAL_SHELL_CALL_OUTPUT # type: ignore - - -class LocalShellToolCallOutputItemResource(ItemResource, discriminator="local_shell_call_output"): - """The output of a local shell tool call. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.LOCAL_SHELL_CALL_OUTPUT - :ivar status: Required. Is one of the following types: Literal["in_progress"], - Literal["completed"], Literal["incomplete"] - :vartype status: str or str or str - :ivar output: A JSON string of the output of the local shell tool call. Required. - :vartype output: str - """ - - type: Literal[ItemType.LOCAL_SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - status: Literal["in_progress", "completed", "incomplete"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required. 
Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"], - Literal[\"incomplete\"]""" - output: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the output of the local shell tool call. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - status: Literal["in_progress", "completed", "incomplete"], - output: str, - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.LOCAL_SHELL_CALL_OUTPUT # type: ignore - - -class LogProb(_Model): - """The log probability of a token. - - :ivar token: Required. - :vartype token: str - :ivar logprob: Required. - :vartype logprob: float - :ivar bytes: Required. - :vartype bytes: list[int] - :ivar top_logprobs: Required. - :vartype top_logprobs: list[~azure.ai.projects.models.TopLogProb] - """ - - token: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - logprob: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - bytes: list[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - top_logprobs: list["_models.TopLogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - token: str, - logprob: float, - bytes: list[int], - top_logprobs: list["_models.TopLogProb"], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ManagedAzureAISearchIndex(Index, discriminator="ManagedAzureSearch"): - """Managed Azure AI Search Index Definition. - - :ivar id: Asset ID, a unique identifier for the asset. - :vartype id: str - :ivar name: The name of the resource. Required. - :vartype name: str - :ivar version: The version of the resource. Required. - :vartype version: str - :ivar description: The asset description text. - :vartype description: str - :ivar tags: Tag dictionary. Tags can be added, removed, and updated. - :vartype tags: dict[str, str] - :ivar type: Type of index. Required. Managed Azure Search - :vartype type: str or ~azure.ai.projects.models.MANAGED_AZURE_SEARCH - :ivar vector_store_id: Vector store id of managed index. Required. - :vartype vector_store_id: str - """ - - type: Literal[IndexType.MANAGED_AZURE_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Type of index. Required. Managed Azure Search""" - vector_store_id: str = rest_field(name="vectorStoreId", visibility=["create"]) - """Vector store id of managed index. Required.""" - - @overload - def __init__( - self, - *, - vector_store_id: str, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = IndexType.MANAGED_AZURE_SEARCH # type: ignore - - -class MCPApprovalRequestItemParam(ItemParam, discriminator="mcp_approval_request"): - """A request for human approval of a tool invocation. - - :ivar type: Required. 
- :vartype type: str or ~azure.ai.projects.models.MCP_APPROVAL_REQUEST - :ivar server_label: The label of the MCP server making the request. Required. - :vartype server_label: str - :ivar name: The name of the tool to run. Required. - :vartype name: str - :ivar arguments: A JSON string of arguments for the tool. Required. - :vartype arguments: str - """ - - type: Literal[ItemType.MCP_APPROVAL_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The label of the MCP server making the request. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the tool to run. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of arguments for the tool. Required.""" - - @overload - def __init__( - self, - *, - server_label: str, - name: str, - arguments: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_APPROVAL_REQUEST # type: ignore - - -class MCPApprovalRequestItemResource(ItemResource, discriminator="mcp_approval_request"): - """A request for human approval of a tool invocation. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.MCP_APPROVAL_REQUEST - :ivar server_label: The label of the MCP server making the request. Required. - :vartype server_label: str - :ivar name: The name of the tool to run. Required. 
- :vartype name: str - :ivar arguments: A JSON string of arguments for the tool. Required. - :vartype arguments: str - """ - - type: Literal[ItemType.MCP_APPROVAL_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The label of the MCP server making the request. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the tool to run. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of arguments for the tool. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - server_label: str, - name: str, - arguments: str, - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_APPROVAL_REQUEST # type: ignore - - -class MCPApprovalResponseItemParam(ItemParam, discriminator="mcp_approval_response"): - """A response to an MCP approval request. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.MCP_APPROVAL_RESPONSE - :ivar approval_request_id: The ID of the approval request being answered. Required. - :vartype approval_request_id: str - :ivar approve: Whether the request was approved. Required. - :vartype approve: bool - :ivar reason: Optional reason for the decision. 
- :vartype reason: str - """ - - type: Literal[ItemType.MCP_APPROVAL_RESPONSE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - approval_request_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the approval request being answered. Required.""" - approve: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the request was approved. Required.""" - reason: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Optional reason for the decision.""" - - @overload - def __init__( - self, - *, - approval_request_id: str, - approve: bool, - reason: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_APPROVAL_RESPONSE # type: ignore - - -class MCPApprovalResponseItemResource(ItemResource, discriminator="mcp_approval_response"): - """A response to an MCP approval request. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.MCP_APPROVAL_RESPONSE - :ivar approval_request_id: The ID of the approval request being answered. Required. - :vartype approval_request_id: str - :ivar approve: Whether the request was approved. Required. - :vartype approve: bool - :ivar reason: Optional reason for the decision. 
- :vartype reason: str - """ - - type: Literal[ItemType.MCP_APPROVAL_RESPONSE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - approval_request_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the approval request being answered. Required.""" - approve: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the request was approved. Required.""" - reason: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Optional reason for the decision.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - approval_request_id: str, - approve: bool, - created_by: Optional["_models.CreatedBy"] = None, - reason: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_APPROVAL_RESPONSE # type: ignore - - -class MCPCallItemParam(ItemParam, discriminator="mcp_call"): - """An invocation of a tool on an MCP server. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.MCP_CALL - :ivar server_label: The label of the MCP server running the tool. Required. - :vartype server_label: str - :ivar name: The name of the tool that was run. Required. - :vartype name: str - :ivar arguments: A JSON string of the arguments passed to the tool. Required. - :vartype arguments: str - :ivar output: The output from the tool call. - :vartype output: str - :ivar error: The error from the tool call, if any. 
- :vartype error: str - """ - - type: Literal[ItemType.MCP_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The label of the MCP server running the tool. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the tool that was run. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the arguments passed to the tool. Required.""" - output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The output from the tool call.""" - error: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error from the tool call, if any.""" - - @overload - def __init__( - self, - *, - server_label: str, - name: str, - arguments: str, - output: Optional[str] = None, - error: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_CALL # type: ignore - - -class MCPCallItemResource(ItemResource, discriminator="mcp_call"): - """An invocation of a tool on an MCP server. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.MCP_CALL - :ivar server_label: The label of the MCP server running the tool. Required. - :vartype server_label: str - :ivar name: The name of the tool that was run. Required. 
- :vartype name: str - :ivar arguments: A JSON string of the arguments passed to the tool. Required. - :vartype arguments: str - :ivar output: The output from the tool call. - :vartype output: str - :ivar error: The error from the tool call, if any. - :vartype error: str - """ - - type: Literal[ItemType.MCP_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The label of the MCP server running the tool. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the tool that was run. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A JSON string of the arguments passed to the tool. Required.""" - output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The output from the tool call.""" - error: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error from the tool call, if any.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - server_label: str, - name: str, - arguments: str, - created_by: Optional["_models.CreatedBy"] = None, - output: Optional[str] = None, - error: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_CALL # type: ignore - - -class MCPListToolsItemParam(ItemParam, discriminator="mcp_list_tools"): - """A list of tools available on an MCP server. - - :ivar type: Required. 
- :vartype type: str or ~azure.ai.projects.models.MCP_LIST_TOOLS - :ivar server_label: The label of the MCP server. Required. - :vartype server_label: str - :ivar tools: The tools available on the server. Required. - :vartype tools: list[~azure.ai.projects.models.MCPListToolsTool] - :ivar error: Error message if the server could not list tools. - :vartype error: str - """ - - type: Literal[ItemType.MCP_LIST_TOOLS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The label of the MCP server. Required.""" - tools: list["_models.MCPListToolsTool"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The tools available on the server. Required.""" - error: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Error message if the server could not list tools.""" - - @overload - def __init__( - self, - *, - server_label: str, - tools: list["_models.MCPListToolsTool"], - error: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_LIST_TOOLS # type: ignore - - -class MCPListToolsItemResource(ItemResource, discriminator="mcp_list_tools"): - """A list of tools available on an MCP server. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.MCP_LIST_TOOLS - :ivar server_label: The label of the MCP server. Required. 
- :vartype server_label: str - :ivar tools: The tools available on the server. Required. - :vartype tools: list[~azure.ai.projects.models.MCPListToolsTool] - :ivar error: Error message if the server could not list tools. - :vartype error: str - """ - - type: Literal[ItemType.MCP_LIST_TOOLS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The label of the MCP server. Required.""" - tools: list["_models.MCPListToolsTool"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The tools available on the server. Required.""" - error: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Error message if the server could not list tools.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - server_label: str, - tools: list["_models.MCPListToolsTool"], - created_by: Optional["_models.CreatedBy"] = None, - error: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.MCP_LIST_TOOLS # type: ignore - - -class MCPListToolsTool(_Model): - """A tool available on an MCP server. - - :ivar name: The name of the tool. Required. - :vartype name: str - :ivar description: The description of the tool. - :vartype description: str - :ivar input_schema: The JSON schema describing the tool's input. Required. - :vartype input_schema: any - :ivar annotations: Additional annotations about the tool. - :vartype annotations: any - """ - - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the tool. 
Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The description of the tool.""" - input_schema: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The JSON schema describing the tool's input. Required.""" - annotations: Optional[Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Additional annotations about the tool.""" - - @overload - def __init__( - self, - *, - name: str, - input_schema: Any, - description: Optional[str] = None, - annotations: Optional[Any] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MCPTool(Tool, discriminator="mcp"): - """Give the model access to additional tools via remote Model Context Protocol - (MCP) servers. `Learn more about MCP `_. - - :ivar type: The type of the MCP tool. Always ``mcp``. Required. - :vartype type: str or ~azure.ai.projects.models.MCP - :ivar server_label: A label for this MCP server, used to identify it in tool calls. Required. - :vartype server_label: str - :ivar server_url: The URL for the MCP server. Required. - :vartype server_url: str - :ivar headers: Optional HTTP headers to send to the MCP server. Use for authentication - or other purposes. - :vartype headers: dict[str, str] - :ivar allowed_tools: List of allowed tool names or a filter object. Is either a [str] type or a - MCPToolAllowedTools1 type. - :vartype allowed_tools: list[str] or ~azure.ai.projects.models.MCPToolAllowedTools1 - :ivar require_approval: Specify which of the MCP server's tools require approval. 
Is one of the
- following types: MCPToolRequireApproval1, Literal["always"], Literal["never"]
- :vartype require_approval: ~azure.ai.projects.models.MCPToolRequireApproval1 or str
- :ivar project_connection_id: The connection ID in the project for the MCP server. The
- connection stores authentication and other connection details needed to connect to the MCP
- server.
- :vartype project_connection_id: str
- """
-
- type: Literal[ToolType.MCP] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
- """The type of the MCP tool. Always ``mcp``. Required."""
- server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """A label for this MCP server, used to identify it in tool calls. Required."""
- server_url: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """The URL for the MCP server. Required."""
- headers: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """Optional HTTP headers to send to the MCP server. Use for authentication
- or other purposes."""
- allowed_tools: Optional[Union[list[str], "_models.MCPToolAllowedTools1"]] = rest_field(
- visibility=["read", "create", "update", "delete", "query"]
- )
- """List of allowed tool names or a filter object. Is either a [str] type or a MCPToolAllowedTools1
- type."""
- require_approval: Optional[Union["_models.MCPToolRequireApproval1", Literal["always"], Literal["never"]]] = (
- rest_field(visibility=["read", "create", "update", "delete", "query"])
- )
- """Specify which of the MCP server's tools require approval. Is one of the following types:
- MCPToolRequireApproval1, Literal[\"always\"], Literal[\"never\"]"""
- project_connection_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """The connection ID in the project for the MCP server.
The connection stores authentication and - other connection details needed to connect to the MCP server.""" - - @overload - def __init__( - self, - *, - server_label: str, - server_url: str, - headers: Optional[dict[str, str]] = None, - allowed_tools: Optional[Union[list[str], "_models.MCPToolAllowedTools1"]] = None, - require_approval: Optional[ - Union["_models.MCPToolRequireApproval1", Literal["always"], Literal["never"]] - ] = None, - project_connection_id: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.MCP # type: ignore - - -class MCPToolAllowedTools1(_Model): - """MCPToolAllowedTools1. - - :ivar tool_names: List of allowed tool names. - :vartype tool_names: list[str] - """ - - tool_names: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """List of allowed tool names.""" - - @overload - def __init__( - self, - *, - tool_names: Optional[list[str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MCPToolRequireApproval1(_Model): - """MCPToolRequireApproval1. - - :ivar always: A list of tools that always require approval. - :vartype always: ~azure.ai.projects.models.MCPToolRequireApprovalAlways - :ivar never: A list of tools that never require approval. 
- :vartype never: ~azure.ai.projects.models.MCPToolRequireApprovalNever - """ - - always: Optional["_models.MCPToolRequireApprovalAlways"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A list of tools that always require approval.""" - never: Optional["_models.MCPToolRequireApprovalNever"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A list of tools that never require approval.""" - - @overload - def __init__( - self, - *, - always: Optional["_models.MCPToolRequireApprovalAlways"] = None, - never: Optional["_models.MCPToolRequireApprovalNever"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MCPToolRequireApprovalAlways(_Model): - """MCPToolRequireApprovalAlways. - - :ivar tool_names: List of tools that require approval. - :vartype tool_names: list[str] - """ - - tool_names: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """List of tools that require approval.""" - - @overload - def __init__( - self, - *, - tool_names: Optional[list[str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MCPToolRequireApprovalNever(_Model): - """MCPToolRequireApprovalNever. - - :ivar tool_names: List of tools that do not require approval. 
- :vartype tool_names: list[str] - """ - - tool_names: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """List of tools that do not require approval.""" - - @overload - def __init__( - self, - *, - tool_names: Optional[list[str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryOperation(_Model): - """Represents a single memory operation (create, update, or delete) performed on a memory item. - - :ivar kind: The type of memory operation being performed. Required. Known values are: "create", - "update", and "delete". - :vartype kind: str or ~azure.ai.projects.models.MemoryOperationKind - :ivar memory_item: The memory item to create, update, or delete. Required. - :vartype memory_item: ~azure.ai.projects.models.MemoryItem - """ - - kind: Union[str, "_models.MemoryOperationKind"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The type of memory operation being performed. Required. Known values are: \"create\", - \"update\", and \"delete\".""" - memory_item: "_models.MemoryItem" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The memory item to create, update, or delete. Required.""" - - @overload - def __init__( - self, - *, - kind: Union[str, "_models.MemoryOperationKind"], - memory_item: "_models.MemoryItem", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemorySearchItem(_Model): - """A retrieved memory item from memory search. 
- - :ivar memory_item: Retrieved memory item. Required. - :vartype memory_item: ~azure.ai.projects.models.MemoryItem - """ - - memory_item: "_models.MemoryItem" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Retrieved memory item. Required.""" - - @overload - def __init__( - self, - *, - memory_item: "_models.MemoryItem", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemorySearchOptions(_Model): - """Memory search options. - - :ivar max_memories: Maximum number of memory items to return. - :vartype max_memories: int - """ - - max_memories: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Maximum number of memory items to return.""" - - @overload - def __init__( - self, - *, - max_memories: Optional[int] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemorySearchTool(Tool, discriminator="memory_search"): - """A tool for integrating memories into the agent. - - :ivar type: The type of the tool. Always ``memory_search``. Required. - :vartype type: str or ~azure.ai.projects.models.MEMORY_SEARCH - :ivar memory_store_name: The name of the memory store to use. Required. - :vartype memory_store_name: str - :ivar scope: The namespace used to group and isolate memories, such as a user ID. - Limits which memories can be retrieved or updated. - Use special variable ``{{$userId}}`` to scope memories to the current signed-in user. Required. 
- :vartype scope: str - :ivar search_options: Options for searching the memory store. - :vartype search_options: ~azure.ai.projects.models.MemorySearchOptions - :ivar update_delay: The amount of time to wait after inactivity before updating memories with - messages from the call (e.g., '0s', '5m'). Defaults to '60s'. - :vartype update_delay: ~datetime.timedelta - """ - - type: Literal[ToolType.MEMORY_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the tool. Always ``memory_search``. Required.""" - memory_store_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the memory store to use. Required.""" - scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The namespace used to group and isolate memories, such as a user ID. - Limits which memories can be retrieved or updated. - Use special variable ``{{$userId}}`` to scope memories to the current signed-in user. Required.""" - search_options: Optional["_models.MemorySearchOptions"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Options for searching the memory store.""" - update_delay: Optional[datetime.timedelta] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The amount of time to wait after inactivity before updating memories with messages from the - call (e.g., '0s', '5m'). Defaults to '60s'.""" - - @overload - def __init__( - self, - *, - memory_store_name: str, - scope: str, - search_options: Optional["_models.MemorySearchOptions"] = None, - update_delay: Optional[datetime.timedelta] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any]
- """
-
- def __init__(self, *args: Any, **kwargs: Any) -> None:
- super().__init__(*args, **kwargs)
- self.type = ToolType.MEMORY_SEARCH # type: ignore
-
-
- class MemorySearchToolCallItemParam(ItemParam, discriminator="memory_search_call"):
- """MemorySearchToolCallItemParam.
-
- :ivar type: Required.
- :vartype type: str or ~azure.ai.projects.models.MEMORY_SEARCH_CALL
- :ivar results: The results returned from the memory search.
- :vartype results: list[~azure.ai.projects.models.MemorySearchItem]
- """
-
- type: Literal[ItemType.MEMORY_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
- """Required."""
- results: Optional[list["_models.MemorySearchItem"]] = rest_field(
- visibility=["read", "create", "update", "delete", "query"]
- )
- """The results returned from the memory search."""
-
- @overload
- def __init__(
- self,
- *,
- results: Optional[list["_models.MemorySearchItem"]] = None,
- ) -> None: ...
-
- @overload
- def __init__(self, mapping: Mapping[str, Any]) -> None:
- """
- :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any]
- """
-
- def __init__(self, *args: Any, **kwargs: Any) -> None:
- super().__init__(*args, **kwargs)
- self.type = ItemType.MEMORY_SEARCH_CALL # type: ignore
-
-
- class MemorySearchToolCallItemResource(ItemResource, discriminator="memory_search_call"):
- """MemorySearchToolCallItemResource.
-
- :ivar id: Required.
- :vartype id: str
- :ivar created_by: The information about the creator of the item.
- :vartype created_by: ~azure.ai.projects.models.CreatedBy
- :ivar type: Required.
- :vartype type: str or ~azure.ai.projects.models.MEMORY_SEARCH_CALL
- :ivar status: The status of the memory search tool call. One of ``in_progress``,
- ``searching``, ``completed``, ``incomplete`` or ``failed``. Required. Is one of the following
- types: Literal["in_progress"], Literal["searching"], Literal["completed"],
- Literal["incomplete"], Literal["failed"]
- :vartype status: str
- :ivar results: The results returned from the memory search.
- :vartype results: list[~azure.ai.projects.models.MemorySearchItem]
- """
-
- type: Literal[ItemType.MEMORY_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
- """Required."""
- status: Literal["in_progress", "searching", "completed", "incomplete", "failed"] = rest_field(
- visibility=["read", "create", "update", "delete", "query"]
- )
- """The status of the memory search tool call. One of ``in_progress``,
- ``searching``, ``completed``, ``incomplete`` or ``failed``. Required. Is one of the following
- types: Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"],
- Literal[\"incomplete\"], Literal[\"failed\"]"""
- results: Optional[list["_models.MemorySearchItem"]] = rest_field(
- visibility=["read", "create", "update", "delete", "query"]
- )
- """The results returned from the memory search."""
-
- @overload
- def __init__(
- self,
- *,
- id: str, # pylint: disable=redefined-builtin
- status: Literal["in_progress", "searching", "completed", "incomplete", "failed"],
- created_by: Optional["_models.CreatedBy"] = None,
- results: Optional[list["_models.MemorySearchItem"]] = None,
- ) -> None: ...
-
- @overload
- def __init__(self, mapping: Mapping[str, Any]) -> None:
- """
- :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any]
- """
-
- def __init__(self, *args: Any, **kwargs: Any) -> None:
- super().__init__(*args, **kwargs)
- self.type = ItemType.MEMORY_SEARCH_CALL # type: ignore
-
-
- class MemoryStoreDefinition(_Model):
- """Base definition for memory store configurations.
-
- You probably want to use the sub-classes and not this class directly.
Known sub-classes are: - MemoryStoreDefaultDefinition - - :ivar kind: The kind of the memory store. Required. "default" - :vartype kind: str or ~azure.ai.projects.models.MemoryStoreKind - """ - - __mapping__: dict[str, _Model] = {} - kind: str = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) - """The kind of the memory store. Required. \"default\"""" - - @overload - def __init__( - self, - *, - kind: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreDefaultDefinition(MemoryStoreDefinition, discriminator="default"): - """Default memory store implementation. - - :ivar kind: The kind of the memory store. Required. The default memory store implementation. - :vartype kind: str or ~azure.ai.projects.models.DEFAULT - :ivar chat_model: The name or identifier of the chat completion model deployment used for - memory processing. Required. - :vartype chat_model: str - :ivar embedding_model: The name or identifier of the embedding model deployment used for memory - processing. Required. - :vartype embedding_model: str - :ivar options: Default memory store options. - :vartype options: ~azure.ai.projects.models.MemoryStoreDefaultOptions - """ - - kind: Literal[MemoryStoreKind.DEFAULT] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The kind of the memory store. Required. The default memory store implementation.""" - chat_model: str = rest_field(visibility=["read", "create"]) - """The name or identifier of the chat completion model deployment used for memory processing. 
- Required.""" - embedding_model: str = rest_field(visibility=["read", "create"]) - """The name or identifier of the embedding model deployment used for memory processing. Required.""" - options: Optional["_models.MemoryStoreDefaultOptions"] = rest_field(visibility=["read", "create"]) - """Default memory store options.""" - - @overload - def __init__( - self, - *, - chat_model: str, - embedding_model: str, - options: Optional["_models.MemoryStoreDefaultOptions"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = MemoryStoreKind.DEFAULT # type: ignore - - -class MemoryStoreDefaultOptions(_Model): - """Default memory store configurations. - - :ivar user_profile_enabled: Whether to enable user profile extraction and storage. Default is - true. Required. - :vartype user_profile_enabled: bool - :ivar user_profile_details: Specific categories or types of user profile information to extract - and store. - :vartype user_profile_details: str - :ivar chat_summary_enabled: Whether to enable chat summary extraction and storage. Default is - true. Required. - :vartype chat_summary_enabled: bool - """ - - user_profile_enabled: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether to enable user profile extraction and storage. Default is true. Required.""" - user_profile_details: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Specific categories or types of user profile information to extract and store.""" - chat_summary_enabled: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether to enable chat summary extraction and storage. Default is true. 
Required.""" - - @overload - def __init__( - self, - *, - user_profile_enabled: bool, - chat_summary_enabled: bool, - user_profile_details: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreDeleteScopeResponse(_Model): - """Response for deleting memories from a scope. - - :ivar object: The object type. Always 'memory_store.scope.deleted'. Required. Default value is - "memory_store.scope.deleted". - :vartype object: str - :ivar name: The name of the memory store. Required. - :vartype name: str - :ivar scope: The scope from which memories were deleted. Required. - :vartype scope: str - :ivar deleted: Whether the deletion operation was successful. Required. - :vartype deleted: bool - """ - - object: Literal["memory_store.scope.deleted"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The object type. Always 'memory_store.scope.deleted'. Required. Default value is - \"memory_store.scope.deleted\".""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the memory store. Required.""" - scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The scope from which memories were deleted. Required.""" - deleted: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the deletion operation was successful. Required.""" - - @overload - def __init__( - self, - *, - name: str, - scope: str, - deleted: bool, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["memory_store.scope.deleted"] = "memory_store.scope.deleted" - - -class MemoryStoreObject(_Model): - """A memory store that can store and retrieve user memories. - - :ivar object: The object type, which is always 'memory_store'. Required. Default value is - "memory_store". - :vartype object: str - :ivar id: The unique identifier of the memory store. Required. - :vartype id: str - :ivar created_at: The Unix timestamp (seconds) when the memory store was created. Required. - :vartype created_at: ~datetime.datetime - :ivar updated_at: The Unix timestamp (seconds) when the memory store was last updated. - Required. - :vartype updated_at: ~datetime.datetime - :ivar name: The name of the memory store. Required. - :vartype name: str - :ivar description: A human-readable description of the memory store. - :vartype description: str - :ivar metadata: Arbitrary key-value metadata to associate with the memory store. - :vartype metadata: dict[str, str] - :ivar definition: The definition of the memory store. Required. - :vartype definition: ~azure.ai.projects.models.MemoryStoreDefinition - """ - - object: Literal["memory_store"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type, which is always 'memory_store'. Required. Default value is \"memory_store\".""" - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the memory store. Required.""" - created_at: datetime.datetime = rest_field( - visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" - ) - """The Unix timestamp (seconds) when the memory store was created. 
Required.""" - updated_at: datetime.datetime = rest_field( - visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" - ) - """The Unix timestamp (seconds) when the memory store was last updated. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the memory store. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A human-readable description of the memory store.""" - metadata: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Arbitrary key-value metadata to associate with the memory store.""" - definition: "_models.MemoryStoreDefinition" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The definition of the memory store. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - created_at: datetime.datetime, - updated_at: datetime.datetime, - name: str, - definition: "_models.MemoryStoreDefinition", - description: Optional[str] = None, - metadata: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["memory_store"] = "memory_store" - - -class MemoryStoreOperationUsage(_Model): - """Usage statistics of a memory store operation. - - :ivar embedding_tokens: The number of embedding tokens. Required. - :vartype embedding_tokens: int - :ivar input_tokens: The number of input tokens. Required. - :vartype input_tokens: int - :ivar input_tokens_details: A detailed breakdown of the input tokens. Required. 
- :vartype input_tokens_details: - ~azure.ai.projects.models.MemoryStoreOperationUsageInputTokensDetails - :ivar output_tokens: The number of output tokens. Required. - :vartype output_tokens: int - :ivar output_tokens_details: A detailed breakdown of the output tokens. Required. - :vartype output_tokens_details: - ~azure.ai.projects.models.MemoryStoreOperationUsageOutputTokensDetails - :ivar total_tokens: The total number of tokens used. Required. - :vartype total_tokens: int - """ - - embedding_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of embedding tokens. Required.""" - input_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of input tokens. Required.""" - input_tokens_details: "_models.MemoryStoreOperationUsageInputTokensDetails" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A detailed breakdown of the input tokens. Required.""" - output_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of output tokens. Required.""" - output_tokens_details: "_models.MemoryStoreOperationUsageOutputTokensDetails" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A detailed breakdown of the output tokens. Required.""" - total_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The total number of tokens used. Required.""" - - @overload - def __init__( - self, - *, - embedding_tokens: int, - input_tokens: int, - input_tokens_details: "_models.MemoryStoreOperationUsageInputTokensDetails", - output_tokens: int, - output_tokens_details: "_models.MemoryStoreOperationUsageOutputTokensDetails", - total_tokens: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreOperationUsageInputTokensDetails(_Model): # pylint: disable=name-too-long - """MemoryStoreOperationUsageInputTokensDetails. - - :ivar cached_tokens: The number of tokens that were retrieved from the cache. - `More on prompt caching `_. Required. - :vartype cached_tokens: int - """ - - cached_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of tokens that were retrieved from the cache. - `More on prompt caching `_. Required.""" - - @overload - def __init__( - self, - *, - cached_tokens: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreOperationUsageOutputTokensDetails(_Model): # pylint: disable=name-too-long - """MemoryStoreOperationUsageOutputTokensDetails. - - :ivar reasoning_tokens: The number of reasoning tokens. Required. - :vartype reasoning_tokens: int - """ - - reasoning_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of reasoning tokens. Required.""" - - @overload - def __init__( - self, - *, - reasoning_tokens: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreSearchResponse(_Model): - """Memory search response. - - :ivar search_id: The unique ID of this search request. Use this value as previous_search_id in - subsequent requests to perform incremental searches. Required. 
- :vartype search_id: str - :ivar memories: Related memory items found during the search operation. Required. - :vartype memories: list[~azure.ai.projects.models.MemorySearchItem] - :ivar usage: Usage statistics associated with the memory search operation. Required. - :vartype usage: ~azure.ai.projects.models.MemoryStoreOperationUsage - """ - - search_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of this search request. Use this value as previous_search_id in subsequent - requests to perform incremental searches. Required.""" - memories: list["_models.MemorySearchItem"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Related memory items found during the search operation. Required.""" - usage: "_models.MemoryStoreOperationUsage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Usage statistics associated with the memory search operation. Required.""" - - @overload - def __init__( - self, - *, - search_id: str, - memories: list["_models.MemorySearchItem"], - usage: "_models.MemoryStoreOperationUsage", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreUpdateResponse(_Model): - """Provides the status of a memory store update operation. - - :ivar update_id: The unique ID of this update request. Use this value as previous_update_id in - subsequent requests to perform incremental updates. Required. - :vartype update_id: str - :ivar status: The status of the memory update operation. One of "queued", "in_progress", - "completed", "failed", or "superseded". Required. Known values are: "queued", "in_progress", - "completed", "failed", and "superseded". 
- :vartype status: str or ~azure.ai.projects.models.MemoryStoreUpdateStatus - :ivar superseded_by: The update_id the operation was superseded by when status is "superseded". - :vartype superseded_by: str - :ivar result: The result of memory store update operation when status is "completed". - :vartype result: ~azure.ai.projects.models.MemoryStoreUpdateResult - :ivar error: Error object that describes the error when status is "failed". - :vartype error: ~azure.ai.projects.models.ApiError - """ - - update_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of this update request. Use this value as previous_update_id in subsequent - requests to perform incremental updates. Required.""" - status: Union[str, "_models.MemoryStoreUpdateStatus"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the memory update operation. One of \"queued\", \"in_progress\", \"completed\", - \"failed\", or \"superseded\". Required. Known values are: \"queued\", \"in_progress\", - \"completed\", \"failed\", and \"superseded\".""" - superseded_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The update_id the operation was superseded by when status is \"superseded\".""" - result: Optional["_models.MemoryStoreUpdateResult"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The result of memory store update operation when status is \"completed\".""" - error: Optional["_models.ApiError"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Error object that describes the error when status is \"failed\".""" - - @overload - def __init__( - self, - *, - update_id: str, - status: Union[str, "_models.MemoryStoreUpdateStatus"], - superseded_by: Optional[str] = None, - result: Optional["_models.MemoryStoreUpdateResult"] = None, - error: Optional["_models.ApiError"] = None, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MemoryStoreUpdateResult(_Model): - """Memory update result. - - :ivar memory_operations: A list of individual memory operations that were performed during the - update. Required. - :vartype memory_operations: list[~azure.ai.projects.models.MemoryOperation] - :ivar usage: Usage statistics associated with the memory update operation. Required. - :vartype usage: ~azure.ai.projects.models.MemoryStoreOperationUsage - """ - - memory_operations: list["_models.MemoryOperation"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A list of individual memory operations that were performed during the update. Required.""" - usage: "_models.MemoryStoreOperationUsage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Usage statistics associated with the memory update operation. Required.""" - - @overload - def __init__( - self, - *, - memory_operations: list["_models.MemoryOperation"], - usage: "_models.MemoryStoreOperationUsage", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MicrosoftFabricAgentTool(Tool, discriminator="fabric_dataagent_preview"): - """The input definition information for a Microsoft Fabric tool as used to configure an agent. - - :ivar type: The object type, which is always 'fabric_dataagent'. Required. - :vartype type: str or ~azure.ai.projects.models.FABRIC_DATAAGENT_PREVIEW - :ivar fabric_dataagent_preview: The fabric data agent tool parameters. Required. 
- :vartype fabric_dataagent_preview: ~azure.ai.projects.models.FabricDataAgentToolParameters - """ - - type: Literal[ToolType.FABRIC_DATAAGENT_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'fabric_dataagent'. Required.""" - fabric_dataagent_preview: "_models.FabricDataAgentToolParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The fabric data agent tool parameters. Required.""" - - @overload - def __init__( - self, - *, - fabric_dataagent_preview: "_models.FabricDataAgentToolParameters", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.FABRIC_DATAAGENT_PREVIEW # type: ignore - - -class ModelDeployment(Deployment, discriminator="ModelDeployment"): - """Model Deployment Definition. - - :ivar name: Name of the deployment. Required. - :vartype name: str - :ivar type: The type of the deployment. Required. Model deployment - :vartype type: str or ~azure.ai.projects.models.MODEL_DEPLOYMENT - :ivar model_name: Publisher-specific name of the deployed model. Required. - :vartype model_name: str - :ivar model_version: Publisher-specific version of the deployed model. Required. - :vartype model_version: str - :ivar model_publisher: Name of the deployed model's publisher. Required. - :vartype model_publisher: str - :ivar capabilities: Capabilities of deployed model. Required. - :vartype capabilities: dict[str, str] - :ivar sku: Sku of the model deployment. Required. - :vartype sku: ~azure.ai.projects.models.ModelDeploymentSku - :ivar connection_name: Name of the connection the deployment comes from. 
- :vartype connection_name: str - """ - - type: Literal[DeploymentType.MODEL_DEPLOYMENT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the deployment. Required. Model deployment""" - model_name: str = rest_field(name="modelName", visibility=["read"]) - """Publisher-specific name of the deployed model. Required.""" - model_version: str = rest_field(name="modelVersion", visibility=["read"]) - """Publisher-specific version of the deployed model. Required.""" - model_publisher: str = rest_field(name="modelPublisher", visibility=["read"]) - """Name of the deployed model's publisher. Required.""" - capabilities: dict[str, str] = rest_field(visibility=["read"]) - """Capabilities of deployed model. Required.""" - sku: "_models.ModelDeploymentSku" = rest_field(visibility=["read"]) - """Sku of the model deployment. Required.""" - connection_name: Optional[str] = rest_field(name="connectionName", visibility=["read"]) - """Name of the connection the deployment comes from.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = DeploymentType.MODEL_DEPLOYMENT # type: ignore - - -class ModelDeploymentSku(_Model): - """Sku information. - - :ivar capacity: Sku capacity. Required. - :vartype capacity: int - :ivar family: Sku family. Required. - :vartype family: str - :ivar name: Sku name. Required. - :vartype name: str - :ivar size: Sku size. Required. - :vartype size: str - :ivar tier: Sku tier. Required. - :vartype tier: str - """ - - capacity: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Sku capacity. 
Required.""" - family: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Sku family. Required.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Sku name. Required.""" - size: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Sku size. Required.""" - tier: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Sku tier. Required.""" - - @overload - def __init__( - self, - *, - capacity: int, - family: str, - name: str, - size: str, - tier: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class MonthlyRecurrenceSchedule(RecurrenceSchedule, discriminator="Monthly"): - """Monthly recurrence schedule. - - :ivar type: Monthly recurrence type. Required. Monthly recurrence pattern. - :vartype type: str or ~azure.ai.projects.models.MONTHLY - :ivar days_of_month: Days of the month for the recurrence schedule. Required. - :vartype days_of_month: list[int] - """ - - type: Literal[RecurrenceType.MONTHLY] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Monthly recurrence type. Required. Monthly recurrence pattern.""" - days_of_month: list[int] = rest_field( - name="daysOfMonth", visibility=["read", "create", "update", "delete", "query"] - ) - """Days of the month for the recurrence schedule. Required.""" - - @overload - def __init__( - self, - *, - days_of_month: list[int], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = RecurrenceType.MONTHLY # type: ignore - - -class NoAuthenticationCredentials(BaseCredentials, discriminator="None"): - """Credentials that do not require authentication. - - :ivar type: The credential type. Required. No credential - :vartype type: str or ~azure.ai.projects.models.NONE - """ - - type: Literal[CredentialType.NONE] = rest_discriminator(name="type", visibility=["read"]) # type: ignore - """The credential type. Required. No credential""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CredentialType.NONE # type: ignore - - -class OAuthConsentRequestItemResource(ItemResource, discriminator="oauth_consent_request"): - """Request from the service for the user to perform OAuth consent. - - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar id: Required. - :vartype id: str - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.OAUTH_CONSENT_REQUEST - :ivar consent_link: The link the user can use to perform OAuth consent. Required. - :vartype consent_link: str - :ivar server_label: The server label for the OAuth consent request. Required. - :vartype server_label: str - """ - - type: Literal[ItemType.OAUTH_CONSENT_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - consent_link: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The link the user can use to perform OAuth consent. 
Required.""" - server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The server label for the OAuth consent request. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - consent_link: str, - server_label: str, - created_by: Optional["_models.CreatedBy"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.OAUTH_CONSENT_REQUEST # type: ignore - - -class OneTimeTrigger(Trigger, discriminator="OneTime"): - """One-time trigger. - - :ivar type: Required. One-time trigger. - :vartype type: str or ~azure.ai.projects.models.ONE_TIME - :ivar trigger_at: Date and time for the one-time trigger in ISO 8601 format. Required. - :vartype trigger_at: str - :ivar time_zone: Time zone for the one-time trigger. - :vartype time_zone: str - """ - - type: Literal[TriggerType.ONE_TIME] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. One-time trigger.""" - trigger_at: str = rest_field(name="triggerAt", visibility=["read", "create", "update", "delete", "query"]) - """Date and time for the one-time trigger in ISO 8601 format. Required.""" - time_zone: Optional[str] = rest_field(name="timeZone", visibility=["read", "create", "update", "delete", "query"]) - """Time zone for the one-time trigger.""" - - @overload - def __init__( - self, - *, - trigger_at: str, - time_zone: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = TriggerType.ONE_TIME # type: ignore - - -class OpenApiAgentTool(Tool, discriminator="openapi"): - """The input definition information for an OpenAPI tool as used to configure an agent. - - :ivar type: The object type, which is always 'openapi'. Required. - :vartype type: str or ~azure.ai.projects.models.OPENAPI - :ivar openapi: The openapi function definition. Required. - :vartype openapi: ~azure.ai.projects.models.OpenApiFunctionDefinition - """ - - type: Literal[ToolType.OPENAPI] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'openapi'. Required.""" - openapi: "_models.OpenApiFunctionDefinition" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The openapi function definition. Required.""" - - @overload - def __init__( - self, - *, - openapi: "_models.OpenApiFunctionDefinition", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.OPENAPI # type: ignore - - -class OpenApiAuthDetails(_Model): - """authentication details for OpenApiFunctionDefinition. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - OpenApiAnonymousAuthDetails, OpenApiManagedAuthDetails, OpenApiProjectConnectionAuthDetails - - :ivar type: The type of authentication, must be anonymous/project_connection/managed_identity. - Required. Known values are: "anonymous", "project_connection", and "managed_identity". 
- :vartype type: str or ~azure.ai.projects.models.OpenApiAuthType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """The type of authentication, must be anonymous/project_connection/managed_identity. Required. - Known values are: \"anonymous\", \"project_connection\", and \"managed_identity\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class OpenApiAnonymousAuthDetails(OpenApiAuthDetails, discriminator="anonymous"): - """Security details for OpenApi anonymous authentication. - - :ivar type: The object type, which is always 'anonymous'. Required. - :vartype type: str or ~azure.ai.projects.models.ANONYMOUS - """ - - type: Literal[OpenApiAuthType.ANONYMOUS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'anonymous'. Required.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = OpenApiAuthType.ANONYMOUS # type: ignore - - -class OpenApiFunctionDefinition(_Model): - """The input definition information for an openapi function. - - :ivar name: The name of the function to be called. Required. - :vartype name: str - :ivar description: A description of what the function does, used by the model to choose when - and how to call the function. 
- :vartype description: str - :ivar spec: The openapi function shape, described as a JSON Schema object. Required. - :vartype spec: any - :ivar auth: Open API authentication details. Required. - :vartype auth: ~azure.ai.projects.models.OpenApiAuthDetails - :ivar default_params: List of OpenAPI spec parameters that will use user-provided defaults. - :vartype default_params: list[str] - :ivar functions: List of function definitions used by OpenApi tool. - :vartype functions: list[~azure.ai.projects.models.OpenApiFunctionDefinitionFunction] - """ - - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the function to be called. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A description of what the function does, used by the model to choose when and how to call the - function.""" - spec: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The openapi function shape, described as a JSON Schema object. Required.""" - auth: "_models.OpenApiAuthDetails" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Open API authentication details. Required.""" - default_params: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """List of OpenAPI spec parameters that will use user-provided defaults.""" - functions: Optional[list["_models.OpenApiFunctionDefinitionFunction"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """List of function definitions used by OpenApi tool.""" - - @overload - def __init__( - self, - *, - name: str, - spec: Any, - auth: "_models.OpenApiAuthDetails", - description: Optional[str] = None, - default_params: Optional[list[str]] = None, - functions: Optional[list["_models.OpenApiFunctionDefinitionFunction"]] = None, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class OpenApiFunctionDefinitionFunction(_Model): - """OpenApiFunctionDefinitionFunction. - - :ivar name: The name of the function to be called. Required. - :vartype name: str - :ivar description: A description of what the function does, used by the model to choose when - and how to call the function. - :vartype description: str - :ivar parameters: The parameters the functions accepts, described as a JSON Schema object. - Required. - :vartype parameters: any - """ - - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the function to be called. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A description of what the function does, used by the model to choose when and how to call the - function.""" - parameters: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The parameters the functions accepts, described as a JSON Schema object. Required.""" - - @overload - def __init__( - self, - *, - name: str, - parameters: Any, - description: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class OpenApiManagedAuthDetails(OpenApiAuthDetails, discriminator="managed_identity"): - """Security details for OpenApi managed_identity authentication. - - :ivar type: The object type, which is always 'managed_identity'. Required. 
- :vartype type: str or ~azure.ai.projects.models.MANAGED_IDENTITY - :ivar security_scheme: Connection auth security details. Required. - :vartype security_scheme: ~azure.ai.projects.models.OpenApiManagedSecurityScheme - """ - - type: Literal[OpenApiAuthType.MANAGED_IDENTITY] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'managed_identity'. Required.""" - security_scheme: "_models.OpenApiManagedSecurityScheme" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Connection auth security details. Required.""" - - @overload - def __init__( - self, - *, - security_scheme: "_models.OpenApiManagedSecurityScheme", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = OpenApiAuthType.MANAGED_IDENTITY # type: ignore - - -class OpenApiManagedSecurityScheme(_Model): - """Security scheme for OpenApi managed_identity authentication. - - :ivar audience: Authentication scope for managed_identity auth type. Required. - :vartype audience: str - """ - - audience: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Authentication scope for managed_identity auth type. Required.""" - - @overload - def __init__( - self, - *, - audience: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class OpenApiProjectConnectionAuthDetails(OpenApiAuthDetails, discriminator="project_connection"): - """Security details for OpenApi project connection authentication. 
-
- :ivar type: The object type, which is always 'project_connection'. Required.
- :vartype type: str or ~azure.ai.projects.models.PROJECT_CONNECTION
- :ivar security_scheme: Project connection auth security details. Required.
- :vartype security_scheme: ~azure.ai.projects.models.OpenApiProjectConnectionSecurityScheme
- """
-
- type: Literal[OpenApiAuthType.PROJECT_CONNECTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore
- """The object type, which is always 'project_connection'. Required."""
- security_scheme: "_models.OpenApiProjectConnectionSecurityScheme" = rest_field(
- visibility=["read", "create", "update", "delete", "query"]
- )
- """Project connection auth security details. Required."""
-
- @overload
- def __init__(
- self,
- *,
- security_scheme: "_models.OpenApiProjectConnectionSecurityScheme",
- ) -> None: ...
-
- @overload
- def __init__(self, mapping: Mapping[str, Any]) -> None:
- """
- :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any]
- """
-
- def __init__(self, *args: Any, **kwargs: Any) -> None:
- super().__init__(*args, **kwargs)
- self.type = OpenApiAuthType.PROJECT_CONNECTION # type: ignore
-
-
-class OpenApiProjectConnectionSecurityScheme(_Model):
- """Security scheme for OpenApi project connection authentication.
-
- :ivar project_connection_id: Project connection id for Project Connection auth type. Required.
- :vartype project_connection_id: str
- """
-
- project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """Project connection id for Project Connection auth type. Required."""
-
- @overload
- def __init__(
- self,
- *,
- project_connection_id: str,
- ) -> None: ...
-
- @overload
- def __init__(self, mapping: Mapping[str, Any]) -> None:
- """
- :param mapping: raw JSON to initialize the model.
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class PagedScheduleRun(_Model): - """Paged collection of ScheduleRun items. - - :ivar value: The ScheduleRun items on this page. Required. - :vartype value: list[~azure.ai.projects.models.ScheduleRun] - :ivar next_link: The link to the next page of items. - :vartype next_link: str - """ - - value: list["_models.ScheduleRun"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ScheduleRun items on this page. Required.""" - next_link: Optional[str] = rest_field(name="nextLink", visibility=["read", "create", "update", "delete", "query"]) - """The link to the next page of items.""" - - @overload - def __init__( - self, - *, - value: list["_models.ScheduleRun"], - next_link: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class PendingUploadRequest(_Model): - """Represents a request for a pending upload. - - :ivar pending_upload_id: If PendingUploadId is not provided, a random GUID will be used. - :vartype pending_upload_id: str - :ivar connection_name: Azure Storage Account connection name to use for generating temporary - SAS token. - :vartype connection_name: str - :ivar pending_upload_type: BlobReference is the only supported type. Required. Blob Reference - is the only supported type. 
- :vartype pending_upload_type: str or ~azure.ai.projects.models.BLOB_REFERENCE - """ - - pending_upload_id: Optional[str] = rest_field( - name="pendingUploadId", visibility=["read", "create", "update", "delete", "query"] - ) - """If PendingUploadId is not provided, a random GUID will be used.""" - connection_name: Optional[str] = rest_field( - name="connectionName", visibility=["read", "create", "update", "delete", "query"] - ) - """Azure Storage Account connection name to use for generating temporary SAS token.""" - pending_upload_type: Literal[PendingUploadType.BLOB_REFERENCE] = rest_field( - name="pendingUploadType", visibility=["read", "create", "update", "delete", "query"] - ) - """BlobReference is the only supported type. Required. Blob Reference is the only supported type.""" - - @overload - def __init__( - self, - *, - pending_upload_type: Literal[PendingUploadType.BLOB_REFERENCE], - pending_upload_id: Optional[str] = None, - connection_name: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class PendingUploadResponse(_Model): - """Represents the response for a pending upload request. - - :ivar blob_reference: Container-level read, write, list SAS. Required. - :vartype blob_reference: ~azure.ai.projects.models.BlobReference - :ivar pending_upload_id: ID for this upload request. Required. - :vartype pending_upload_id: str - :ivar version: Version of asset to be created if user did not specify version when initially - creating upload. - :vartype version: str - :ivar pending_upload_type: BlobReference is the only supported type. Required. Blob Reference - is the only supported type. 
- :vartype pending_upload_type: str or ~azure.ai.projects.models.BLOB_REFERENCE - """ - - blob_reference: "_models.BlobReference" = rest_field( - name="blobReference", visibility=["read", "create", "update", "delete", "query"] - ) - """Container-level read, write, list SAS. Required.""" - pending_upload_id: str = rest_field( - name="pendingUploadId", visibility=["read", "create", "update", "delete", "query"] - ) - """ID for this upload request. Required.""" - version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Version of asset to be created if user did not specify version when initially creating upload.""" - pending_upload_type: Literal[PendingUploadType.BLOB_REFERENCE] = rest_field( - name="pendingUploadType", visibility=["read", "create", "update", "delete", "query"] - ) - """BlobReference is the only supported type. Required. Blob Reference is the only supported type.""" - - @overload - def __init__( - self, - *, - blob_reference: "_models.BlobReference", - pending_upload_id: str, - pending_upload_type: Literal[PendingUploadType.BLOB_REFERENCE], - version: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class Prompt(_Model): - """Reference to a prompt template and its variables. - `Learn more `_. - - :ivar id: The unique identifier of the prompt template to use. Required. - :vartype id: str - :ivar version: Optional version of the prompt template. - :vartype version: str - :ivar variables: - :vartype variables: ~azure.ai.projects.models.ResponsePromptVariables - """ - - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the prompt template to use. 
Required.""" - version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Optional version of the prompt template.""" - variables: Optional["_models.ResponsePromptVariables"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - version: Optional[str] = None, - variables: Optional["_models.ResponsePromptVariables"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class PromptAgentDefinition(AgentDefinition, discriminator="prompt"): - """The prompt agent definition. - - :ivar rai_config: Configuration for Responsible AI (RAI) content filtering and safety features. - :vartype rai_config: ~azure.ai.projects.models.RaiConfig - :ivar kind: Required. - :vartype kind: str or ~azure.ai.projects.models.PROMPT - :ivar model: The model deployment to use for this agent. Required. - :vartype model: str - :ivar instructions: A system (or developer) message inserted into the model's context. - :vartype instructions: str - :ivar temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 - will make the output more random, while lower values like 0.2 will make it more focused and - deterministic. - We generally recommend altering this or ``top_p`` but not both. - :vartype temperature: float - :ivar top_p: An alternative to sampling with temperature, called nucleus sampling, - where the model considers the results of the tokens with top_p probability - mass. So 0.1 means only the tokens comprising the top 10% probability mass - are considered. - We generally recommend altering this or ``temperature`` but not both. 
- :vartype top_p: float - :ivar reasoning: - :vartype reasoning: ~azure.ai.projects.models.Reasoning - :ivar tools: An array of tools the model may call while generating a response. You - can specify which tool to use by setting the ``tool_choice`` parameter. - :vartype tools: list[~azure.ai.projects.models.Tool] - :ivar text: Configuration options for a text response from the model. Can be plain text or - structured JSON data. - :vartype text: ~azure.ai.projects.models.PromptAgentDefinitionText - :ivar structured_inputs: Set of structured inputs that can participate in prompt template - substitution or tool argument bindings. - :vartype structured_inputs: dict[str, ~azure.ai.projects.models.StructuredInputDefinition] - """ - - kind: Literal[AgentKind.PROMPT] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - model: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The model deployment to use for this agent. Required.""" - instructions: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A system (or developer) message inserted into the model's context.""" - temperature: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output - more random, while lower values like 0.2 will make it more focused and deterministic. - We generally recommend altering this or ``top_p`` but not both.""" - top_p: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An alternative to sampling with temperature, called nucleus sampling, - where the model considers the results of the tokens with top_p probability - mass. So 0.1 means only the tokens comprising the top 10% probability mass - are considered. 
- We generally recommend altering this or ``temperature`` but not both.""" - reasoning: Optional["_models.Reasoning"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An array of tools the model may call while generating a response. You - can specify which tool to use by setting the ``tool_choice`` parameter.""" - text: Optional["_models.PromptAgentDefinitionText"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Configuration options for a text response from the model. Can be plain text or structured JSON - data.""" - structured_inputs: Optional[dict[str, "_models.StructuredInputDefinition"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Set of structured inputs that can participate in prompt template substitution or tool argument - bindings.""" - - @overload - def __init__( - self, - *, - model: str, - rai_config: Optional["_models.RaiConfig"] = None, - instructions: Optional[str] = None, - temperature: Optional[float] = None, - top_p: Optional[float] = None, - reasoning: Optional["_models.Reasoning"] = None, - tools: Optional[list["_models.Tool"]] = None, - text: Optional["_models.PromptAgentDefinitionText"] = None, - structured_inputs: Optional[dict[str, "_models.StructuredInputDefinition"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = AgentKind.PROMPT # type: ignore - - -class PromptAgentDefinitionText(_Model): - """PromptAgentDefinitionText. 
- - :ivar format: - :vartype format: ~azure.ai.projects.models.ResponseTextFormatConfiguration - """ - - format: Optional["_models.ResponseTextFormatConfiguration"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - - @overload - def __init__( - self, - *, - format: Optional["_models.ResponseTextFormatConfiguration"] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class PromptBasedEvaluatorDefinition(EvaluatorDefinition, discriminator="prompt"): - """Prompt-based evaluator. - - :ivar init_parameters: The JSON schema (Draft 2020-12) for the evaluator's input parameters. - This includes parameters like type, properties, required. - :vartype init_parameters: any - :ivar data_schema: The JSON schema (Draft 2020-12) for the evaluator's input data. This - includes parameters like type, properties, required. - :vartype data_schema: any - :ivar metrics: List of output metrics produced by this evaluator. - :vartype metrics: dict[str, ~azure.ai.projects.models.EvaluatorMetric] - :ivar type: Required. Prompt-based definition - :vartype type: str or ~azure.ai.projects.models.PROMPT - :ivar prompt_text: The prompt text used for evaluation. Required. - :vartype prompt_text: str - """ - - type: Literal[EvaluatorDefinitionType.PROMPT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. Prompt-based definition""" - prompt_text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The prompt text used for evaluation. 
Required.""" - - @overload - def __init__( - self, - *, - prompt_text: str, - init_parameters: Optional[Any] = None, - data_schema: Optional[Any] = None, - metrics: Optional[dict[str, "_models.EvaluatorMetric"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = EvaluatorDefinitionType.PROMPT # type: ignore - - -class ProtocolVersionRecord(_Model): - """A record mapping for a single protocol and its version. - - :ivar protocol: The protocol type. Required. Known values are: "activity_protocol" and - "responses". - :vartype protocol: str or ~azure.ai.projects.models.AgentProtocol - :ivar version: The version string for the protocol, e.g. 'v0.1.1'. Required. - :vartype version: str - """ - - protocol: Union[str, "_models.AgentProtocol"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The protocol type. Required. Known values are: \"activity_protocol\" and \"responses\".""" - version: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The version string for the protocol, e.g. 'v0.1.1'. Required.""" - - @overload - def __init__( - self, - *, - protocol: Union[str, "_models.AgentProtocol"], - version: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class RaiConfig(_Model): - """Configuration for Responsible AI (RAI) content filtering and safety features. - - :ivar rai_policy_name: The name of the RAI policy to apply. Required. 
- :vartype rai_policy_name: str - """ - - rai_policy_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the RAI policy to apply. Required.""" - - @overload - def __init__( - self, - *, - rai_policy_name: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class RankingOptions(_Model): - """RankingOptions. - - :ivar ranker: The ranker to use for the file search. Is either a Literal["auto"] type or a - Literal["default-2024-11-15"] type. - :vartype ranker: str or str - :ivar score_threshold: The score threshold for the file search, a number between 0 and 1. - Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer - results. - :vartype score_threshold: float - """ - - ranker: Optional[Literal["auto", "default-2024-11-15"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The ranker to use for the file search. Is either a Literal[\"auto\"] type or a - Literal[\"default-2024-11-15\"] type.""" - score_threshold: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The score threshold for the file search, a number between 0 and 1. Numbers closer to 1 will - attempt to return only the most relevant results, but may return fewer results.""" - - @overload - def __init__( - self, - *, - ranker: Optional[Literal["auto", "default-2024-11-15"]] = None, - score_threshold: Optional[float] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class Reasoning(_Model): - """**o-series models only** - Configuration options for - `reasoning models `_. - - :ivar effort: Known values are: "low", "medium", and "high". - :vartype effort: str or ~azure.ai.projects.models.ReasoningEffort - :ivar summary: A summary of the reasoning performed by the model. This can be - useful for debugging and understanding the model's reasoning process. - One of ``auto``, ``concise``, or ``detailed``. Is one of the following types: Literal["auto"], - Literal["concise"], Literal["detailed"] - :vartype summary: str or str or str - :ivar generate_summary: **Deprecated:** use ``summary`` instead. - A summary of the reasoning performed by the model. This can be - useful for debugging and understanding the model's reasoning process. - One of ``auto``, ``concise``, or ``detailed``. Is one of the following types: Literal["auto"], - Literal["concise"], Literal["detailed"] - :vartype generate_summary: str or str or str - """ - - effort: Optional[Union[str, "_models.ReasoningEffort"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Known values are: \"low\", \"medium\", and \"high\".""" - summary: Optional[Literal["auto", "concise", "detailed"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A summary of the reasoning performed by the model. This can be - useful for debugging and understanding the model's reasoning process. - One of ``auto``, ``concise``, or ``detailed``. Is one of the following types: - Literal[\"auto\"], Literal[\"concise\"], Literal[\"detailed\"]""" - generate_summary: Optional[Literal["auto", "concise", "detailed"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """**Deprecated:** use ``summary`` instead. - A summary of the reasoning performed by the model. 
This can be - useful for debugging and understanding the model's reasoning process. - One of ``auto``, ``concise``, or ``detailed``. Is one of the following types: - Literal[\"auto\"], Literal[\"concise\"], Literal[\"detailed\"]""" - - @overload - def __init__( - self, - *, - effort: Optional[Union[str, "_models.ReasoningEffort"]] = None, - summary: Optional[Literal["auto", "concise", "detailed"]] = None, - generate_summary: Optional[Literal["auto", "concise", "detailed"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ReasoningItemParam(ItemParam, discriminator="reasoning"): - """A description of the chain of thought used by a reasoning model while generating - a response. Be sure to include these items in your ``input`` to the Responses API - for subsequent turns of a conversation if you are manually - `managing context `_. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.REASONING - :ivar encrypted_content: The encrypted content of the reasoning item - populated when a - response is - generated with ``reasoning.encrypted_content`` in the ``include`` parameter. - :vartype encrypted_content: str - :ivar summary: Reasoning text contents. Required. 
- :vartype summary: list[~azure.ai.projects.models.ReasoningItemSummaryPart] - """ - - type: Literal[ItemType.REASONING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - encrypted_content: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The encrypted content of the reasoning item - populated when a response is - generated with ``reasoning.encrypted_content`` in the ``include`` parameter.""" - summary: list["_models.ReasoningItemSummaryPart"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Reasoning text contents. Required.""" - - @overload - def __init__( - self, - *, - summary: list["_models.ReasoningItemSummaryPart"], - encrypted_content: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.REASONING # type: ignore - - -class ReasoningItemResource(ItemResource, discriminator="reasoning"): - """A description of the chain of thought used by a reasoning model while generating - a response. Be sure to include these items in your ``input`` to the Responses API - for subsequent turns of a conversation if you are manually - `managing context `_. - - :ivar id: Required. - :vartype id: str - :ivar created_by: The information about the creator of the item. - :vartype created_by: ~azure.ai.projects.models.CreatedBy - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.REASONING - :ivar encrypted_content: The encrypted content of the reasoning item - populated when a - response is - generated with ``reasoning.encrypted_content`` in the ``include`` parameter. 
- :vartype encrypted_content: str - :ivar summary: Reasoning text contents. Required. - :vartype summary: list[~azure.ai.projects.models.ReasoningItemSummaryPart] - """ - - type: Literal[ItemType.REASONING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - encrypted_content: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The encrypted content of the reasoning item - populated when a response is - generated with ``reasoning.encrypted_content`` in the ``include`` parameter.""" - summary: list["_models.ReasoningItemSummaryPart"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Reasoning text contents. Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - summary: list["_models.ReasoningItemSummaryPart"], - created_by: Optional["_models.CreatedBy"] = None, - encrypted_content: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ItemType.REASONING # type: ignore - - -class ReasoningItemSummaryPart(_Model): - """ReasoningItemSummaryPart. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ReasoningItemSummaryTextPart - - :ivar type: Required. "summary_text" - :vartype type: str or ~azure.ai.projects.models.ReasoningItemSummaryPartType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. \"summary_text\"""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ReasoningItemSummaryTextPart(ReasoningItemSummaryPart, discriminator="summary_text"): - """ReasoningItemSummaryTextPart. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.SUMMARY_TEXT - :ivar text: Required. - :vartype text: str - """ - - type: Literal[ReasoningItemSummaryPartType.SUMMARY_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - text: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ReasoningItemSummaryPartType.SUMMARY_TEXT # type: ignore - - -class RecurrenceTrigger(Trigger, discriminator="Recurrence"): - """Recurrence based trigger. - - :ivar type: Type of the trigger. Required. Recurrence based trigger. - :vartype type: str or ~azure.ai.projects.models.RECURRENCE - :ivar start_time: Start time for the recurrence schedule in ISO 8601 format. - :vartype start_time: str - :ivar end_time: End time for the recurrence schedule in ISO 8601 format. - :vartype end_time: str - :ivar time_zone: Time zone for the recurrence schedule. - :vartype time_zone: str - :ivar interval: Interval for the recurrence schedule. Required. - :vartype interval: int - :ivar schedule: Recurrence schedule for the recurrence trigger. Required. 
- :vartype schedule: ~azure.ai.projects.models.RecurrenceSchedule - """ - - type: Literal[TriggerType.RECURRENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Type of the trigger. Required. Recurrence based trigger.""" - start_time: Optional[str] = rest_field(name="startTime", visibility=["read", "create", "update", "delete", "query"]) - """Start time for the recurrence schedule in ISO 8601 format.""" - end_time: Optional[str] = rest_field(name="endTime", visibility=["read", "create", "update", "delete", "query"]) - """End time for the recurrence schedule in ISO 8601 format.""" - time_zone: Optional[str] = rest_field(name="timeZone", visibility=["read", "create", "update", "delete", "query"]) - """Time zone for the recurrence schedule.""" - interval: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Interval for the recurrence schedule. Required.""" - schedule: "_models.RecurrenceSchedule" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Recurrence schedule for the recurrence trigger. Required.""" - - @overload - def __init__( - self, - *, - interval: int, - schedule: "_models.RecurrenceSchedule", - start_time: Optional[str] = None, - end_time: Optional[str] = None, - time_zone: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = TriggerType.RECURRENCE # type: ignore - - -class RedTeam(_Model): - """Red team details. - - :ivar name: Identifier of the red team run. Required. - :vartype name: str - :ivar display_name: Name of the red-team run. - :vartype display_name: str - :ivar num_turns: Number of simulation rounds. 
- :vartype num_turns: int
- :ivar attack_strategies: List of attack strategies or nested lists of attack strategies.
- :vartype attack_strategies: list[str or ~azure.ai.projects.models.AttackStrategy]
- :ivar simulation_only: Simulation-only or Simulation + Evaluation. Defaults to false; if true,
- the scan outputs conversations rather than evaluation results.
- :vartype simulation_only: bool
- :ivar risk_categories: List of risk categories to generate attack objectives for.
- :vartype risk_categories: list[str or ~azure.ai.projects.models.RiskCategory]
- :ivar application_scenario: Application scenario for the red team operation, to generate
- scenario specific attacks.
- :vartype application_scenario: str
- :ivar tags: Red team's tags. Unlike properties, tags are fully mutable.
- :vartype tags: dict[str, str]
- :ivar properties: Red team's properties. Unlike tags, properties are add-only. Once added, a
- property cannot be removed.
- :vartype properties: dict[str, str]
- :ivar status: Status of the red-team. It is set by service and is read-only.
- :vartype status: str
- :ivar target: Target configuration for the red-team run. Required.
- :vartype target: ~azure.ai.projects.models.TargetConfig
- """
-
- name: str = rest_field(name="id", visibility=["read"])
- """Identifier of the red team run.
Required."""
- display_name: Optional[str] = rest_field(
- name="displayName", visibility=["read", "create", "update", "delete", "query"]
- )
- """Name of the red-team run."""
- num_turns: Optional[int] = rest_field(name="numTurns", visibility=["read", "create", "update", "delete", "query"])
- """Number of simulation rounds."""
- attack_strategies: Optional[list[Union[str, "_models.AttackStrategy"]]] = rest_field(
- name="attackStrategies", visibility=["read", "create", "update", "delete", "query"]
- )
- """List of attack strategies or nested lists of attack strategies."""
- simulation_only: Optional[bool] = rest_field(
- name="simulationOnly", visibility=["read", "create", "update", "delete", "query"]
- )
- """Simulation-only or Simulation + Evaluation. Defaults to false; if true, the scan outputs
- conversations rather than evaluation results."""
- risk_categories: Optional[list[Union[str, "_models.RiskCategory"]]] = rest_field(
- name="riskCategories", visibility=["read", "create", "update", "delete", "query"]
- )
- """List of risk categories to generate attack objectives for."""
- application_scenario: Optional[str] = rest_field(
- name="applicationScenario", visibility=["read", "create", "update", "delete", "query"]
- )
- """Application scenario for the red team operation, to generate scenario specific attacks."""
- tags: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """Red team's tags. Unlike properties, tags are fully mutable."""
- properties: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
- """Red team's properties. Unlike tags, properties are add-only. Once added, a property cannot be
- removed."""
- status: Optional[str] = rest_field(visibility=["read"])
- """Status of the red-team.
It is set by service and is read-only.""" - target: "_models.TargetConfig" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Target configuration for the red-team run. Required.""" - - @overload - def __init__( - self, - *, - target: "_models.TargetConfig", - display_name: Optional[str] = None, - num_turns: Optional[int] = None, - attack_strategies: Optional[list[Union[str, "_models.AttackStrategy"]]] = None, - simulation_only: Optional[bool] = None, - risk_categories: Optional[list[Union[str, "_models.RiskCategory"]]] = None, - application_scenario: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - properties: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class Response(_Model): - """Response. - - :ivar metadata: Set of 16 key-value pairs that can be attached to an object. This can be -useful for storing additional information about the object in a structured -format, and querying for objects via API or the dashboard. -Keys are strings with a maximum length of 64 characters. Values are strings -with a maximum length of 512 characters. Required. - :vartype metadata: dict[str, str] - :ivar temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 - will make the output more random, while lower values like 0.2 will make it more focused and - deterministic. -We generally recommend altering this or ``top_p`` but not both. Required. - :vartype temperature: float - :ivar top_p: An alternative to sampling with temperature, called nucleus sampling, -where the model considers the results of the tokens with top_p probability -mass. So 0.1 means only the tokens comprising the top 10% probability mass -are considered. 
-We generally recommend altering this or ``temperature`` but not both. Required. - :vartype top_p: float - :ivar user: A unique identifier representing your end-user, which can help OpenAI to monitor - and detect abuse. `Learn more `_. Required. - :vartype user: str - :ivar service_tier: Note: service_tier is not applicable to Azure OpenAI. Known values are: - "auto", "default", "flex", "scale", and "priority". - :vartype service_tier: str or ~azure.ai.projects.models.ServiceTier - :ivar top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to - return at each token position, each with an associated log probability. - :vartype top_logprobs: int - :ivar previous_response_id: The unique ID of the previous response to the model. Use this to -create multi-turn conversations. Learn more about -`conversation state `_. - :vartype previous_response_id: str - :ivar model: The model deployment to use for the creation of this response. - :vartype model: str - :ivar reasoning: - :vartype reasoning: ~azure.ai.projects.models.Reasoning - :ivar background: Whether to run the model response in the background. -`Learn more `_. - :vartype background: bool - :ivar max_output_tokens: An upper bound for the number of tokens that can be generated for a - response, including visible output tokens and `reasoning tokens `_. - :vartype max_output_tokens: int - :ivar max_tool_calls: The maximum number of total calls to built-in tools that can be processed - in a response. This maximum number applies across all built-in tool calls, not per individual - tool. Any further attempts to call a tool by the model will be ignored. - :vartype max_tool_calls: int - :ivar text: Configuration options for a text response from the model. Can be plain -text or structured JSON data. Learn more: - * [Text inputs and outputs](/docs/guides/text) - * [Structured Outputs](/docs/guides/structured-outputs). 
- :vartype text: ~azure.ai.projects.models.ResponseText - :ivar tools: An array of tools the model may call while generating a response. You -can specify which tool to use by setting the ``tool_choice`` parameter. -The two categories of tools you can provide the model are: - * **Built-in tools**: Tools that are provided by OpenAI that extend the -model's capabilities, like [web search](/docs/guides/tools-web-search) -or [file search](/docs/guides/tools-file-search). Learn more about -[built-in tools](/docs/guides/tools). - * **Function calls (custom tools)**: Functions that are defined by you, -enabling the model to call your own code. Learn more about -[function calling](/docs/guides/function-calling). - :vartype tools: list[~azure.ai.projects.models.Tool] - :ivar tool_choice: How the model should select which tool (or tools) to use when generating -a response. See the ``tools`` parameter to see how to specify which tools -the model can call. Is either a Union[str, "_models.ToolChoiceOptions"] type or a - ToolChoiceObject type. - :vartype tool_choice: str or ~azure.ai.projects.models.ToolChoiceOptions or - ~azure.ai.projects.models.ToolChoiceObject - :ivar prompt: - :vartype prompt: ~azure.ai.projects.models.Prompt - :ivar truncation: The truncation strategy to use for the model response. - * `auto`: If the context of this response and previous ones exceeds -the model's context window size, the model will truncate the -response to fit the context window by dropping input items in the -middle of the conversation. - * `disabled` (default): If a model response will exceed the context window -size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a - Literal["disabled"] type. - :vartype truncation: str or str - :ivar id: Unique identifier for this Response. Required. - :vartype id: str - :ivar object: The object type of this resource - always set to ``response``. Required. Default - value is "response". 
- :vartype object: str - :ivar status: The status of the response generation. One of ``completed``, ``failed``, -``in_progress``, ``cancelled``, ``queued``, or ``incomplete``. Is one of the following types: - Literal["completed"], Literal["failed"], Literal["in_progress"], Literal["cancelled"], - Literal["queued"], Literal["incomplete"] - :vartype status: str or str or str or str or str or str - :ivar created_at: Unix timestamp (in seconds) of when this Response was created. Required. - :vartype created_at: ~datetime.datetime - :ivar error: Required. - :vartype error: ~azure.ai.projects.models.ResponseError - :ivar incomplete_details: Details about why the response is incomplete. Required. - :vartype incomplete_details: ~azure.ai.projects.models.ResponseIncompleteDetails1 - :ivar output: An array of content items generated by the model. - * The length and order of items in the `output` array is dependent -on the model's response. - * Rather than accessing the first item in the `output` array and -assuming it's an `assistant` message with the content generated by -the model, you might consider using the `output_text` property where -supported in SDKs. Required. - :vartype output: list[~azure.ai.projects.models.ItemResource] - :ivar instructions: A system (or developer) message inserted into the model's context. -When using along with ``previous_response_id``, the instructions from a previous -response will not be carried over to the next response. This makes it simple -to swap out system (or developer) messages in new responses. Required. Is either a str type or - a [ItemParam] type. - :vartype instructions: str or list[~azure.ai.projects.models.ItemParam] - :ivar output_text: SDK-only convenience property that contains the aggregated text output -from all ``output_text`` items in the ``output`` array, if any are present. -Supported in the Python and JavaScript SDKs. 
- :vartype output_text: str - :ivar usage: - :vartype usage: ~azure.ai.projects.models.ResponseUsage - :ivar parallel_tool_calls: Whether to allow the model to run tool calls in parallel. Required. - :vartype parallel_tool_calls: bool - :ivar conversation: Required. - :vartype conversation: ~azure.ai.projects.models.ResponseConversation1 - :ivar agent: The agent used for this response. - :vartype agent: ~azure.ai.projects.models.AgentId - :ivar structured_inputs: The structured inputs to the response that can participate in prompt - template substitution or tool argument bindings. - :vartype structured_inputs: dict[str, any] - """ - - metadata: dict[str, str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Set of 16 key-value pairs that can be attached to an object. This can be - useful for storing additional information about the object in a structured - format, and querying for objects via API or the dashboard. - Keys are strings with a maximum length of 64 characters. Values are strings - with a maximum length of 512 characters. Required.""" - temperature: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output - more random, while lower values like 0.2 will make it more focused and deterministic. - We generally recommend altering this or ``top_p`` but not both. Required.""" - top_p: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An alternative to sampling with temperature, called nucleus sampling, - where the model considers the results of the tokens with top_p probability - mass. So 0.1 means only the tokens comprising the top 10% probability mass - are considered. - We generally recommend altering this or ``temperature`` but not both. 
Required.""" - user: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A unique identifier representing your end-user, which can help OpenAI to monitor and detect - abuse. `Learn more `_. Required.""" - service_tier: Optional[Union[str, "_models.ServiceTier"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Note: service_tier is not applicable to Azure OpenAI. Known values are: \"auto\", \"default\", - \"flex\", \"scale\", and \"priority\".""" - top_logprobs: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An integer between 0 and 20 specifying the number of most likely tokens to return at each token - position, each with an associated log probability.""" - previous_response_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique ID of the previous response to the model. Use this to - create multi-turn conversations. Learn more about - `conversation state `_.""" - model: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The model deployment to use for the creation of this response.""" - reasoning: Optional["_models.Reasoning"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - background: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether to run the model response in the background. - `Learn more `_.""" - max_output_tokens: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An upper bound for the number of tokens that can be generated for a response, including visible - output tokens and `reasoning tokens `_.""" - max_tool_calls: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The maximum number of total calls to built-in tools that can be processed in a response. 
This - maximum number applies across all built-in tool calls, not per individual tool. Any further - attempts to call a tool by the model will be ignored.""" - text: Optional["_models.ResponseText"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Configuration options for a text response from the model. Can be plain - text or structured JSON data. Learn more: - * [Text inputs and outputs](/docs/guides/text) - * [Structured Outputs](/docs/guides/structured-outputs).""" - tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An array of tools the model may call while generating a response. You - can specify which tool to use by setting the ``tool_choice`` parameter. - The two categories of tools you can provide the model are: - * **Built-in tools**: Tools that are provided by OpenAI that extend the - model's capabilities, like [web search](/docs/guides/tools-web-search) - or [file search](/docs/guides/tools-file-search). Learn more about - [built-in tools](/docs/guides/tools). - * **Function calls (custom tools)**: Functions that are defined by you, - enabling the model to call your own code. Learn more about - [function calling](/docs/guides/function-calling).""" - tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceObject"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """How the model should select which tool (or tools) to use when generating - a response. See the ``tools`` parameter to see how to specify which tools - the model can call. 
Is either a Union[str, \"_models.ToolChoiceOptions\"] type or a - ToolChoiceObject type.""" - prompt: Optional["_models.Prompt"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - truncation: Optional[Literal["auto", "disabled"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The truncation strategy to use for the model response. - * `auto`: If the context of this response and previous ones exceeds - the model's context window size, the model will truncate the - response to fit the context window by dropping input items in the - middle of the conversation. - * `disabled` (default): If a model response will exceed the context window - size for a model, the request will fail with a 400 error. Is either a Literal[\"auto\"] type or - a Literal[\"disabled\"] type.""" - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Unique identifier for this Response. Required.""" - object: Literal["response"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The object type of this resource - always set to ``response``. Required. Default value is - \"response\".""" - status: Optional[Literal["completed", "failed", "in_progress", "cancelled", "queued", "incomplete"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The status of the response generation. One of ``completed``, ``failed``, - ``in_progress``, ``cancelled``, ``queued``, or ``incomplete``. Is one of the following types: - Literal[\"completed\"], Literal[\"failed\"], Literal[\"in_progress\"], Literal[\"cancelled\"], - Literal[\"queued\"], Literal[\"incomplete\"]""" - created_at: datetime.datetime = rest_field( - visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" - ) - """Unix timestamp (in seconds) of when this Response was created. 
Required.""" - error: "_models.ResponseError" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - incomplete_details: "_models.ResponseIncompleteDetails1" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Details about why the response is incomplete. Required.""" - output: list["_models.ItemResource"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """An array of content items generated by the model. - * The length and order of items in the `output` array is dependent - on the model's response. - * Rather than accessing the first item in the `output` array and - assuming it's an `assistant` message with the content generated by - the model, you might consider using the `output_text` property where - supported in SDKs. Required.""" - instructions: Union[str, list["_models.ItemParam"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A system (or developer) message inserted into the model's context. - When using along with ``previous_response_id``, the instructions from a previous - response will not be carried over to the next response. This makes it simple - to swap out system (or developer) messages in new responses. Required. Is either a str type or - a [ItemParam] type.""" - output_text: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """SDK-only convenience property that contains the aggregated text output - from all ``output_text`` items in the ``output`` array, if any are present. - Supported in the Python and JavaScript SDKs.""" - usage: Optional["_models.ResponseUsage"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - parallel_tool_calls: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether to allow the model to run tool calls in parallel. 
Required.""" - conversation: "_models.ResponseConversation1" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" - agent: Optional["_models.AgentId"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The agent used for this response.""" - structured_inputs: Optional[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The structured inputs to the response that can participate in prompt template substitution or - tool argument bindings.""" - - @overload - def __init__( # pylint: disable=too-many-locals - self, - *, - metadata: dict[str, str], - temperature: float, - top_p: float, - user: str, - id: str, # pylint: disable=redefined-builtin - created_at: datetime.datetime, - error: "_models.ResponseError", - incomplete_details: "_models.ResponseIncompleteDetails1", - output: list["_models.ItemResource"], - instructions: Union[str, list["_models.ItemParam"]], - parallel_tool_calls: bool, - conversation: "_models.ResponseConversation1", - service_tier: Optional[Union[str, "_models.ServiceTier"]] = None, - top_logprobs: Optional[int] = None, - previous_response_id: Optional[str] = None, - model: Optional[str] = None, - reasoning: Optional["_models.Reasoning"] = None, - background: Optional[bool] = None, - max_output_tokens: Optional[int] = None, - max_tool_calls: Optional[int] = None, - text: Optional["_models.ResponseText"] = None, - tools: Optional[list["_models.Tool"]] = None, - tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceObject"]] = None, - prompt: Optional["_models.Prompt"] = None, - truncation: Optional[Literal["auto", "disabled"]] = None, - status: Optional[Literal["completed", "failed", "in_progress", "cancelled", "queued", "incomplete"]] = None, - output_text: Optional[str] = None, - usage: Optional["_models.ResponseUsage"] = None, - agent: Optional["_models.AgentId"] = None, - structured_inputs: 
Optional[dict[str, Any]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.object: Literal["response"] = "response" - - -class ResponseStreamEvent(_Model): - """ResponseStreamEvent. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - ResponseErrorEvent, ResponseCodeInterpreterCallCompletedEvent, - ResponseCodeInterpreterCallInProgressEvent, ResponseCodeInterpreterCallInterpretingEvent, - ResponseCodeInterpreterCallCodeDeltaEvent, ResponseCodeInterpreterCallCodeDoneEvent, - ResponseCompletedEvent, ResponseContentPartAddedEvent, ResponseContentPartDoneEvent, - ResponseCreatedEvent, ResponseFailedEvent, ResponseFileSearchCallCompletedEvent, - ResponseFileSearchCallInProgressEvent, ResponseFileSearchCallSearchingEvent, - ResponseFunctionCallArgumentsDeltaEvent, ResponseFunctionCallArgumentsDoneEvent, - ResponseImageGenCallCompletedEvent, ResponseImageGenCallGeneratingEvent, - ResponseImageGenCallInProgressEvent, ResponseImageGenCallPartialImageEvent, - ResponseInProgressEvent, ResponseIncompleteEvent, ResponseMCPCallArgumentsDeltaEvent, - ResponseMCPCallArgumentsDoneEvent, ResponseMCPCallCompletedEvent, ResponseMCPCallFailedEvent, - ResponseMCPCallInProgressEvent, ResponseMCPListToolsCompletedEvent, - ResponseMCPListToolsFailedEvent, ResponseMCPListToolsInProgressEvent, - ResponseOutputItemAddedEvent, ResponseOutputItemDoneEvent, ResponseTextDeltaEvent, - ResponseTextDoneEvent, ResponseQueuedEvent, ResponseReasoningDeltaEvent, - ResponseReasoningDoneEvent, ResponseReasoningSummaryDeltaEvent, - ResponseReasoningSummaryDoneEvent, ResponseReasoningSummaryPartAddedEvent, - ResponseReasoningSummaryPartDoneEvent, ResponseReasoningSummaryTextDeltaEvent, - 
ResponseReasoningSummaryTextDoneEvent, ResponseRefusalDeltaEvent, ResponseRefusalDoneEvent, - ResponseWebSearchCallCompletedEvent, ResponseWebSearchCallInProgressEvent, - ResponseWebSearchCallSearchingEvent - - :ivar type: Required. Known values are: "response.audio.delta", "response.audio.done", - "response.audio_transcript.delta", "response.audio_transcript.done", - "response.code_interpreter_call_code.delta", "response.code_interpreter_call_code.done", - "response.code_interpreter_call.completed", "response.code_interpreter_call.in_progress", - "response.code_interpreter_call.interpreting", "response.completed", - "response.content_part.added", "response.content_part.done", "response.created", "error", - "response.file_search_call.completed", "response.file_search_call.in_progress", - "response.file_search_call.searching", "response.function_call_arguments.delta", - "response.function_call_arguments.done", "response.in_progress", "response.failed", - "response.incomplete", "response.output_item.added", "response.output_item.done", - "response.refusal.delta", "response.refusal.done", "response.output_text.annotation.added", - "response.output_text.delta", "response.output_text.done", - "response.reasoning_summary_part.added", "response.reasoning_summary_part.done", - "response.reasoning_summary_text.delta", "response.reasoning_summary_text.done", - "response.web_search_call.completed", "response.web_search_call.in_progress", - "response.web_search_call.searching", "response.image_generation_call.completed", - "response.image_generation_call.generating", "response.image_generation_call.in_progress", - "response.image_generation_call.partial_image", "response.mcp_call.arguments_delta", - "response.mcp_call.arguments_done", "response.mcp_call.completed", "response.mcp_call.failed", - "response.mcp_call.in_progress", "response.mcp_list_tools.completed", - "response.mcp_list_tools.failed", "response.mcp_list_tools.in_progress", "response.queued", - 
"response.reasoning.delta", "response.reasoning.done", "response.reasoning_summary.delta", and - "response.reasoning_summary.done". - :vartype type: str or ~azure.ai.projects.models.ResponseStreamEventType - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"response.audio.delta\", \"response.audio.done\", - \"response.audio_transcript.delta\", \"response.audio_transcript.done\", - \"response.code_interpreter_call_code.delta\", \"response.code_interpreter_call_code.done\", - \"response.code_interpreter_call.completed\", \"response.code_interpreter_call.in_progress\", - \"response.code_interpreter_call.interpreting\", \"response.completed\", - \"response.content_part.added\", \"response.content_part.done\", \"response.created\", - \"error\", \"response.file_search_call.completed\", \"response.file_search_call.in_progress\", - \"response.file_search_call.searching\", \"response.function_call_arguments.delta\", - \"response.function_call_arguments.done\", \"response.in_progress\", \"response.failed\", - \"response.incomplete\", \"response.output_item.added\", \"response.output_item.done\", - \"response.refusal.delta\", \"response.refusal.done\", - \"response.output_text.annotation.added\", \"response.output_text.delta\", - \"response.output_text.done\", \"response.reasoning_summary_part.added\", - \"response.reasoning_summary_part.done\", \"response.reasoning_summary_text.delta\", - \"response.reasoning_summary_text.done\", \"response.web_search_call.completed\", - \"response.web_search_call.in_progress\", \"response.web_search_call.searching\", - \"response.image_generation_call.completed\", \"response.image_generation_call.generating\", - \"response.image_generation_call.in_progress\", - 
\"response.image_generation_call.partial_image\", \"response.mcp_call.arguments_delta\", - \"response.mcp_call.arguments_done\", \"response.mcp_call.completed\", - \"response.mcp_call.failed\", \"response.mcp_call.in_progress\", - \"response.mcp_list_tools.completed\", \"response.mcp_list_tools.failed\", - \"response.mcp_list_tools.in_progress\", \"response.queued\", \"response.reasoning.delta\", - \"response.reasoning.done\", \"response.reasoning_summary.delta\", and - \"response.reasoning_summary.done\".""" - sequence_number: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The sequence number for this event. Required.""" - - @overload - def __init__( - self, - *, - type: str, - sequence_number: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ResponseCodeInterpreterCallCodeDeltaEvent( - ResponseStreamEvent, discriminator="response.code_interpreter_call_code.delta" -): # pylint: disable=name-too-long - """Emitted when a partial code snippet is streamed by the code interpreter. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.code_interpreter_call_code.delta``. - Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA - :ivar output_index: The index of the output item in the response for which the code is being - streamed. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the code interpreter tool call item. Required. - :vartype item_id: str - :ivar delta: The partial code snippet being streamed by the code interpreter. Required. 
- :vartype delta: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.code_interpreter_call_code.delta``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response for which the code is being streamed. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the code interpreter tool call item. Required.""" - delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The partial code snippet being streamed by the code interpreter. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - delta: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA # type: ignore - - -class ResponseCodeInterpreterCallCodeDoneEvent( - ResponseStreamEvent, discriminator="response.code_interpreter_call_code.done" -): - """Emitted when the code snippet is finalized by the code interpreter. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.code_interpreter_call_code.done``. - Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE - :ivar output_index: The index of the output item in the response for which the code is - finalized. Required. 
- :vartype output_index: int - :ivar item_id: The unique identifier of the code interpreter tool call item. Required. - :vartype item_id: str - :ivar code: The final code snippet output by the code interpreter. Required. - :vartype code: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.code_interpreter_call_code.done``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response for which the code is finalized. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the code interpreter tool call item. Required.""" - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The final code snippet output by the code interpreter. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - code: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE # type: ignore - - -class ResponseCodeInterpreterCallCompletedEvent( - ResponseStreamEvent, discriminator="response.code_interpreter_call.completed" -): # pylint: disable=name-too-long - """Emitted when the code interpreter call is completed. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.code_interpreter_call.completed``. - Required. 
- :vartype type: str or ~azure.ai.projects.models.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED - :ivar output_index: The index of the output item in the response for which the code interpreter - call is completed. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the code interpreter tool call item. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.code_interpreter_call.completed``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response for which the code interpreter call is completed. - Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the code interpreter tool call item. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED # type: ignore - - -class ResponseCodeInterpreterCallInProgressEvent( - ResponseStreamEvent, discriminator="response.code_interpreter_call.in_progress" -): # pylint: disable=name-too-long - """Emitted when a code interpreter call is in progress. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.code_interpreter_call.in_progress``. - Required. 
- :vartype type: str or ~azure.ai.projects.models.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS - :ivar output_index: The index of the output item in the response for which the code interpreter - call is in progress. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the code interpreter tool call item. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.code_interpreter_call.in_progress``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response for which the code interpreter call is in - progress. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the code interpreter tool call item. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS # type: ignore - - -class ResponseCodeInterpreterCallInterpretingEvent( - ResponseStreamEvent, discriminator="response.code_interpreter_call.interpreting" -): # pylint: disable=name-too-long - """Emitted when the code interpreter is actively interpreting the code snippet. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.code_interpreter_call.interpreting``. - Required. 
- :vartype type: str or ~azure.ai.projects.models.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING - :ivar output_index: The index of the output item in the response for which the code interpreter - is interpreting code. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the code interpreter tool call item. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.code_interpreter_call.interpreting``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response for which the code interpreter is interpreting - code. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the code interpreter tool call item. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING # type: ignore - - -class ResponseCompletedEvent(ResponseStreamEvent, discriminator="response.completed"): - """Emitted when the model response is complete. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.completed``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_COMPLETED - :ivar response: Properties of the completed response. Required. 
- :vartype response: ~azure.ai.projects.models.Response - """ - - type: Literal[ResponseStreamEventType.RESPONSE_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.completed``. Required.""" - response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Properties of the completed response. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - response: "_models.Response", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_COMPLETED # type: ignore - - -class ResponseContentPartAddedEvent(ResponseStreamEvent, discriminator="response.content_part.added"): - """Emitted when a new content part is added. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.content_part.added``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_CONTENT_PART_ADDED - :ivar item_id: The ID of the output item that the content part was added to. Required. - :vartype item_id: str - :ivar output_index: The index of the output item that the content part was added to. Required. - :vartype output_index: int - :ivar content_index: The index of the content part that was added. Required. - :vartype content_index: int - :ivar part: The content part that was added. Required. 
- :vartype part: ~azure.ai.projects.models.ItemContent - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CONTENT_PART_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.content_part.added``. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the output item that the content part was added to. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the content part was added to. Required.""" - content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the content part that was added. Required.""" - part: "_models.ItemContent" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The content part that was added. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - content_index: int, - part: "_models.ItemContent", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CONTENT_PART_ADDED # type: ignore - - -class ResponseContentPartDoneEvent(ResponseStreamEvent, discriminator="response.content_part.done"): - """Emitted when a content part is done. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.content_part.done``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_CONTENT_PART_DONE - :ivar item_id: The ID of the output item that the content part was added to. 
Required. - :vartype item_id: str - :ivar output_index: The index of the output item that the content part was added to. Required. - :vartype output_index: int - :ivar content_index: The index of the content part that is done. Required. - :vartype content_index: int - :ivar part: The content part that is done. Required. - :vartype part: ~azure.ai.projects.models.ItemContent - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CONTENT_PART_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.content_part.done``. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the output item that the content part was added to. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the content part was added to. Required.""" - content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the content part that is done. Required.""" - part: "_models.ItemContent" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The content part that is done. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - content_index: int, - part: "_models.ItemContent", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CONTENT_PART_DONE # type: ignore - - -class ResponseConversation1(_Model): - """ResponseConversation1. - - :ivar id: Required. 
- :vartype id: str - """ - - id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Required.""" - - @overload - def __init__( - self, - *, - id: str, # pylint: disable=redefined-builtin - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ResponseCreatedEvent(ResponseStreamEvent, discriminator="response.created"): - """An event that is emitted when a response is created. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.created``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_CREATED - :ivar response: The response that was created. Required. - :vartype response: ~azure.ai.projects.models.Response - """ - - type: Literal[ResponseStreamEventType.RESPONSE_CREATED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.created``. Required.""" - response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The response that was created. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - response: "_models.Response", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_CREATED # type: ignore - - -class ResponseError(_Model): - """An error object returned when the model fails to generate a Response. - - :ivar code: Required. 
Known values are: "server_error", "rate_limit_exceeded", - "invalid_prompt", "vector_store_timeout", "invalid_image", "invalid_image_format", - "invalid_base64_image", "invalid_image_url", "image_too_large", "image_too_small", - "image_parse_error", "image_content_policy_violation", "invalid_image_mode", - "image_file_too_large", "unsupported_image_media_type", "empty_image_file", - "failed_to_download_image", and "image_file_not_found". - :vartype code: str or ~azure.ai.projects.models.ResponseErrorCode - :ivar message: A human-readable description of the error. Required. - :vartype message: str - """ - - code: Union[str, "_models.ResponseErrorCode"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required. Known values are: \"server_error\", \"rate_limit_exceeded\", \"invalid_prompt\", - \"vector_store_timeout\", \"invalid_image\", \"invalid_image_format\", - \"invalid_base64_image\", \"invalid_image_url\", \"image_too_large\", \"image_too_small\", - \"image_parse_error\", \"image_content_policy_violation\", \"invalid_image_mode\", - \"image_file_too_large\", \"unsupported_image_media_type\", \"empty_image_file\", - \"failed_to_download_image\", and \"image_file_not_found\".""" - message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A human-readable description of the error. Required.""" - - @overload - def __init__( - self, - *, - code: Union[str, "_models.ResponseErrorCode"], - message: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ResponseErrorEvent(ResponseStreamEvent, discriminator="error"): - """Emitted when an error occurs. - - :ivar sequence_number: The sequence number for this event. Required. 
- :vartype sequence_number: int - :ivar type: The type of the event. Always ``error``. Required. - :vartype type: str or ~azure.ai.projects.models.ERROR - :ivar code: The error code. Required. - :vartype code: str - :ivar message: The error message. Required. - :vartype message: str - :ivar param: The error parameter. Required. - :vartype param: str - """ - - type: Literal[ResponseStreamEventType.ERROR] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``error``. Required.""" - code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error code. Required.""" - message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error message. Required.""" - param: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The error parameter. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - code: str, - message: str, - param: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.ERROR # type: ignore - - -class ResponseFailedEvent(ResponseStreamEvent, discriminator="response.failed"): - """An event that is emitted when a response fails. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.failed``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_FAILED - :ivar response: The response that failed. Required. 
- :vartype response: ~azure.ai.projects.models.Response - """ - - type: Literal[ResponseStreamEventType.RESPONSE_FAILED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.failed``. Required.""" - response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The response that failed. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - response: "_models.Response", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_FAILED # type: ignore - - -class ResponseFileSearchCallCompletedEvent(ResponseStreamEvent, discriminator="response.file_search_call.completed"): - """Emitted when a file search call is completed (results found). - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.file_search_call.completed``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_FILE_SEARCH_CALL_COMPLETED - :ivar output_index: The index of the output item that the file search call is initiated. - Required. - :vartype output_index: int - :ivar item_id: The ID of the output item that the file search call is initiated. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.file_search_call.completed``. 
Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the file search call is initiated. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the output item that the file search call is initiated. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_COMPLETED # type: ignore - - -class ResponseFileSearchCallInProgressEvent(ResponseStreamEvent, discriminator="response.file_search_call.in_progress"): - """Emitted when a file search call is initiated. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.file_search_call.in_progress``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS - :ivar output_index: The index of the output item that the file search call is initiated. - Required. - :vartype output_index: int - :ivar item_id: The ID of the output item that the file search call is initiated. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.file_search_call.in_progress``. 
Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the file search call is initiated. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the output item that the file search call is initiated. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS # type: ignore - - -class ResponseFileSearchCallSearchingEvent(ResponseStreamEvent, discriminator="response.file_search_call.searching"): - """Emitted when a file search is currently searching. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.file_search_call.searching``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_FILE_SEARCH_CALL_SEARCHING - :ivar output_index: The index of the output item that the file search call is searching. - Required. - :vartype output_index: int - :ivar item_id: The ID of the output item that the file search call is initiated. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_SEARCHING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.file_search_call.searching``. 
Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the file search call is searching. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the output item that the file search call is initiated. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_SEARCHING # type: ignore - - -class ResponseFormatJsonSchemaSchema(_Model): - """The schema for the response format, described as a JSON Schema object. - Learn how to build JSON schemas `here `_. - - """ - - -class ResponseFunctionCallArgumentsDeltaEvent( - ResponseStreamEvent, discriminator="response.function_call_arguments.delta" -): - """Emitted when there is a partial function-call arguments delta. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.function_call_arguments.delta``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA - :ivar item_id: The ID of the output item that the function-call arguments delta is added to. - Required. - :vartype item_id: str - :ivar output_index: The index of the output item that the function-call arguments delta is - added to. Required. - :vartype output_index: int - :ivar delta: The function-call arguments delta that is added. Required. 
- :vartype delta: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.function_call_arguments.delta``. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the output item that the function-call arguments delta is added to. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the function-call arguments delta is added to. Required.""" - delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The function-call arguments delta that is added. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - delta: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA # type: ignore - - -class ResponseFunctionCallArgumentsDoneEvent( - ResponseStreamEvent, discriminator="response.function_call_arguments.done" -): - """Emitted when function-call arguments are finalized. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE - :ivar item_id: The ID of the item. Required. - :vartype item_id: str - :ivar output_index: The index of the output item. Required. - :vartype output_index: int - :ivar arguments: The function-call arguments. Required. 
- :vartype arguments: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the item. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item. Required.""" - arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The function-call arguments. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - arguments: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE # type: ignore - - -class ResponseImageGenCallCompletedEvent(ResponseStreamEvent, discriminator="response.image_generation_call.completed"): - """Emitted when an image generation tool call has completed and the final image is available. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.image_generation_call.completed'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the image generation item being processed. Required. 
- :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.image_generation_call.completed'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the image generation item being processed. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED # type: ignore - - -class ResponseImageGenCallGeneratingEvent( - ResponseStreamEvent, discriminator="response.image_generation_call.generating" -): - """Emitted when an image generation tool call is actively generating an image (intermediate - state). - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.image_generation_call.generating'. - Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_IMAGE_GENERATION_CALL_GENERATING - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the image generation item being processed. Required. 
- :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_GENERATING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.image_generation_call.generating'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the image generation item being processed. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_GENERATING # type: ignore - - -class ResponseImageGenCallInProgressEvent( - ResponseStreamEvent, discriminator="response.image_generation_call.in_progress" -): - """Emitted when an image generation tool call is in progress. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.image_generation_call.in_progress'. - Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the image generation item being processed. Required. 
- :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.image_generation_call.in_progress'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the image generation item being processed. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS # type: ignore - - -class ResponseImageGenCallPartialImageEvent( - ResponseStreamEvent, discriminator="response.image_generation_call.partial_image" -): - """Emitted when a partial image is available during image generation streaming. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.image_generation_call.partial_image'. - Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the image generation item being processed. Required. 
- :vartype item_id: str - :ivar partial_image_index: 0-based index for the partial image (backend is 1-based, but this is - 0-based for the user). Required. - :vartype partial_image_index: int - :ivar partial_image_b64: Base64-encoded partial image data, suitable for rendering as an image. - Required. - :vartype partial_image_b64: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.image_generation_call.partial_image'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the image generation item being processed. Required.""" - partial_image_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """0-based index for the partial image (backend is 1-based, but this is 0-based for the user). - Required.""" - partial_image_b64: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Base64-encoded partial image data, suitable for rendering as an image. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - partial_image_index: int, - partial_image_b64: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE # type: ignore - - -class ResponseIncompleteDetails1(_Model): - """ResponseIncompleteDetails1. 
- - :ivar reason: The reason why the response is incomplete. Is either a - Literal["max_output_tokens"] type or a Literal["content_filter"] type. - :vartype reason: str or str - """ - - reason: Optional[Literal["max_output_tokens", "content_filter"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The reason why the response is incomplete. Is either a Literal[\"max_output_tokens\"] type or a - Literal[\"content_filter\"] type.""" - - @overload - def __init__( - self, - *, - reason: Optional[Literal["max_output_tokens", "content_filter"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ResponseIncompleteEvent(ResponseStreamEvent, discriminator="response.incomplete"): - """An event that is emitted when a response finishes as incomplete. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.incomplete``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_INCOMPLETE - :ivar response: The response that was incomplete. Required. - :vartype response: ~azure.ai.projects.models.Response - """ - - type: Literal[ResponseStreamEventType.RESPONSE_INCOMPLETE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.incomplete``. Required.""" - response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The response that was incomplete. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - response: "_models.Response", - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_INCOMPLETE # type: ignore - - -class ResponseInProgressEvent(ResponseStreamEvent, discriminator="response.in_progress"): - """Emitted when the response is in progress. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.in_progress``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_IN_PROGRESS - :ivar response: The response that is in progress. Required. - :vartype response: ~azure.ai.projects.models.Response - """ - - type: Literal[ResponseStreamEventType.RESPONSE_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.in_progress``. Required.""" - response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The response that is in progress. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - response: "_models.Response", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_IN_PROGRESS # type: ignore - - -class ResponseMCPCallArgumentsDeltaEvent(ResponseStreamEvent, discriminator="response.mcp_call.arguments_delta"): - """Emitted when there is a delta (partial update) to the arguments of an MCP tool call. - - :ivar sequence_number: The sequence number for this event. 
Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_call.arguments_delta'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_CALL_ARGUMENTS_DELTA - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the MCP tool call item being processed. Required. - :vartype item_id: str - :ivar delta: The partial update to the arguments for the MCP tool call. Required. - :vartype delta: any - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_call.arguments_delta'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the MCP tool call item being processed. Required.""" - delta: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The partial update to the arguments for the MCP tool call. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - delta: Any, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DELTA # type: ignore - - -class ResponseMCPCallArgumentsDoneEvent(ResponseStreamEvent, discriminator="response.mcp_call.arguments_done"): - """Emitted when the arguments for an MCP tool call are finalized. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_call.arguments_done'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_CALL_ARGUMENTS_DONE - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the MCP tool call item being processed. Required. - :vartype item_id: str - :ivar arguments: The finalized arguments for the MCP tool call. Required. - :vartype arguments: any - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_call.arguments_done'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the MCP tool call item being processed. Required.""" - arguments: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The finalized arguments for the MCP tool call. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - arguments: Any, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DONE # type: ignore - - -class ResponseMCPCallCompletedEvent(ResponseStreamEvent, discriminator="response.mcp_call.completed"): - """Emitted when an MCP tool call has completed successfully. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_call.completed'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_CALL_COMPLETED - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_call.completed'. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_COMPLETED # type: ignore - - -class ResponseMCPCallFailedEvent(ResponseStreamEvent, discriminator="response.mcp_call.failed"): - """Emitted when an MCP tool call has failed. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_call.failed'. Required. 
- :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_CALL_FAILED - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_FAILED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_call.failed'. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_FAILED # type: ignore - - -class ResponseMCPCallInProgressEvent(ResponseStreamEvent, discriminator="response.mcp_call.in_progress"): - """Emitted when an MCP tool call is in progress. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_call.in_progress'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_CALL_IN_PROGRESS - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar item_id: The unique identifier of the MCP tool call item being processed. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_call.in_progress'. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. 
Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the MCP tool call item being processed. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_IN_PROGRESS # type: ignore - - -class ResponseMCPListToolsCompletedEvent(ResponseStreamEvent, discriminator="response.mcp_list_tools.completed"): - """Emitted when the list of available MCP tools has been successfully retrieved. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_list_tools.completed'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_LIST_TOOLS_COMPLETED - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_list_tools.completed'. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_COMPLETED # type: ignore - - -class ResponseMCPListToolsFailedEvent(ResponseStreamEvent, discriminator="response.mcp_list_tools.failed"): - """Emitted when the attempt to list available MCP tools has failed. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_list_tools.failed'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_LIST_TOOLS_FAILED - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_FAILED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_list_tools.failed'. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_FAILED # type: ignore - - -class ResponseMCPListToolsInProgressEvent(ResponseStreamEvent, discriminator="response.mcp_list_tools.in_progress"): - """Emitted when the system is in the process of retrieving the list of available MCP tools. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.mcp_list_tools.in_progress'. Required. 
- :vartype type: str or ~azure.ai.projects.models.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS - """ - - type: Literal[ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.mcp_list_tools.in_progress'. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS # type: ignore - - -class ResponseOutputItemAddedEvent(ResponseStreamEvent, discriminator="response.output_item.added"): - """Emitted when a new output item is added. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.output_item.added``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_OUTPUT_ITEM_ADDED - :ivar output_index: The index of the output item that was added. Required. - :vartype output_index: int - :ivar item: The output item that was added. Required. - :vartype item: ~azure.ai.projects.models.ItemResource - """ - - type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.output_item.added``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that was added. 
Required.""" - item: "_models.ItemResource" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The output item that was added. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item: "_models.ItemResource", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_ADDED # type: ignore - - -class ResponseOutputItemDoneEvent(ResponseStreamEvent, discriminator="response.output_item.done"): - """Emitted when an output item is marked done. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.output_item.done``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_OUTPUT_ITEM_DONE - :ivar output_index: The index of the output item that was marked done. Required. - :vartype output_index: int - :ivar item: The output item that was marked done. Required. - :vartype item: ~azure.ai.projects.models.ItemResource - """ - - type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.output_item.done``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that was marked done. Required.""" - item: "_models.ItemResource" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The output item that was marked done. 
Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item: "_models.ItemResource", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_DONE # type: ignore - - -class ResponsePromptVariables(_Model): - """Optional map of values to substitute in for variables in your - prompt. The substitution values can either be strings, or other - Response input types like images or files. - - """ - - -class ResponseQueuedEvent(ResponseStreamEvent, discriminator="response.queued"): - """Emitted when a response is queued and waiting to be processed. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.queued'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_QUEUED - :ivar response: The full response object that is queued. Required. - :vartype response: ~azure.ai.projects.models.Response - """ - - type: Literal[ResponseStreamEventType.RESPONSE_QUEUED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.queued'. Required.""" - response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The full response object that is queued. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - response: "_models.Response", - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_QUEUED # type: ignore - - -class ResponseReasoningDeltaEvent(ResponseStreamEvent, discriminator="response.reasoning.delta"): - """Emitted when there is a delta (partial update) to the reasoning content. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.reasoning.delta'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_DELTA - :ivar item_id: The unique identifier of the item for which reasoning is being updated. - Required. - :vartype item_id: str - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar content_index: The index of the reasoning content part within the output item. Required. - :vartype content_index: int - :ivar delta: The partial update to the reasoning content. Required. - :vartype delta: any - """ - - type: Literal[ResponseStreamEventType.RESPONSE_REASONING_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.reasoning.delta'. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the item for which reasoning is being updated. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the reasoning content part within the output item. 
Required.""" - delta: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The partial update to the reasoning content. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - content_index: int, - delta: Any, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_REASONING_DELTA # type: ignore - - -class ResponseReasoningDoneEvent(ResponseStreamEvent, discriminator="response.reasoning.done"): - """Emitted when the reasoning content is finalized for an item. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.reasoning.done'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_DONE - :ivar item_id: The unique identifier of the item for which reasoning is finalized. Required. - :vartype item_id: str - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar content_index: The index of the reasoning content part within the output item. Required. - :vartype content_index: int - :ivar text: The finalized reasoning text. Required. - :vartype text: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_REASONING_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.reasoning.done'. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the item for which reasoning is finalized. 
Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the reasoning content part within the output item. Required.""" - text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The finalized reasoning text. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - content_index: int, - text: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_REASONING_DONE # type: ignore - - -class ResponseReasoningSummaryDeltaEvent(ResponseStreamEvent, discriminator="response.reasoning_summary.delta"): - """Emitted when there is a delta (partial update) to the reasoning summary content. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.reasoning_summary.delta'. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_SUMMARY_DELTA - :ivar item_id: The unique identifier of the item for which the reasoning summary is being - updated. Required. - :vartype item_id: str - :ivar output_index: The index of the output item in the response's output array. Required. - :vartype output_index: int - :ivar summary_index: The index of the summary part within the output item. Required. - :vartype summary_index: int - :ivar delta: The partial update to the reasoning summary content. Required. 
- :vartype delta: any - """ - - type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always 'response.reasoning_summary.delta'. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The unique identifier of the item for which the reasoning summary is being updated. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item in the response's output array. Required.""" - summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the summary part within the output item. Required.""" - delta: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The partial update to the reasoning summary content. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - item_id: str, - output_index: int, - summary_index: int, - delta: Any, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_DELTA # type: ignore - - -class ResponseReasoningSummaryDoneEvent(ResponseStreamEvent, discriminator="response.reasoning_summary.done"): - """Emitted when the reasoning summary content is finalized for an item. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always 'response.reasoning_summary.done'. Required. 
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_SUMMARY_DONE
-    :ivar item_id: The unique identifier of the item for which the reasoning summary is finalized.
-     Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item in the response's output array. Required.
-    :vartype output_index: int
-    :ivar summary_index: The index of the summary part within the output item. Required.
-    :vartype summary_index: int
-    :ivar text: The finalized reasoning summary text. Required.
-    :vartype text: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always 'response.reasoning_summary.done'. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The unique identifier of the item for which the reasoning summary is finalized. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item in the response's output array. Required."""
-    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the summary part within the output item. Required."""
-    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The finalized reasoning summary text. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        summary_index: int,
-        text: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_DONE  # type: ignore
-
-
-class ResponseReasoningSummaryPartAddedEvent(
-    ResponseStreamEvent, discriminator="response.reasoning_summary_part.added"
-):
-    """Emitted when a new reasoning summary part is added.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.reasoning_summary_part.added``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_SUMMARY_PART_ADDED
-    :ivar item_id: The ID of the item this summary part is associated with. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item this summary part is associated with.
-     Required.
-    :vartype output_index: int
-    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
-    :vartype summary_index: int
-    :ivar part: The summary part that was added. Required.
-    :vartype part: ~azure.ai.projects.models.ReasoningItemSummaryPart
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.reasoning_summary_part.added``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the item this summary part is associated with. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item this summary part is associated with. Required."""
-    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the summary part within the reasoning summary. Required."""
-    part: "_models.ReasoningItemSummaryPart" = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The summary part that was added. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        summary_index: int,
-        part: "_models.ReasoningItemSummaryPart",
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_ADDED  # type: ignore
-
-
-class ResponseReasoningSummaryPartDoneEvent(ResponseStreamEvent, discriminator="response.reasoning_summary_part.done"):
-    """Emitted when a reasoning summary part is completed.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.reasoning_summary_part.done``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_SUMMARY_PART_DONE
-    :ivar item_id: The ID of the item this summary part is associated with. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item this summary part is associated with.
-     Required.
-    :vartype output_index: int
-    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
-    :vartype summary_index: int
-    :ivar part: The completed summary part. Required.
-    :vartype part: ~azure.ai.projects.models.ReasoningItemSummaryPart
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.reasoning_summary_part.done``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the item this summary part is associated with. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item this summary part is associated with. Required."""
-    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the summary part within the reasoning summary. Required."""
-    part: "_models.ReasoningItemSummaryPart" = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The completed summary part. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        summary_index: int,
-        part: "_models.ReasoningItemSummaryPart",
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_DONE  # type: ignore
-
-
-class ResponseReasoningSummaryTextDeltaEvent(
-    ResponseStreamEvent, discriminator="response.reasoning_summary_text.delta"
-):
-    """Emitted when a delta is added to a reasoning summary text.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.reasoning_summary_text.delta``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_SUMMARY_TEXT_DELTA
-    :ivar item_id: The ID of the item this summary text delta is associated with. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item this summary text delta is associated with.
-     Required.
-    :vartype output_index: int
-    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
-    :vartype summary_index: int
-    :ivar delta: The text delta that was added to the summary. Required.
-    :vartype delta: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.reasoning_summary_text.delta``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the item this summary text delta is associated with. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item this summary text delta is associated with. Required."""
-    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the summary part within the reasoning summary. Required."""
-    delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The text delta that was added to the summary. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        summary_index: int,
-        delta: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DELTA  # type: ignore
-
-
-class ResponseReasoningSummaryTextDoneEvent(ResponseStreamEvent, discriminator="response.reasoning_summary_text.done"):
-    """Emitted when a reasoning summary text is completed.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.reasoning_summary_text.done``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REASONING_SUMMARY_TEXT_DONE
-    :ivar item_id: The ID of the item this summary text is associated with. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item this summary text is associated with.
-     Required.
-    :vartype output_index: int
-    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
-    :vartype summary_index: int
-    :ivar text: The full text of the completed reasoning summary. Required.
-    :vartype text: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.reasoning_summary_text.done``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the item this summary text is associated with. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item this summary text is associated with. Required."""
-    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the summary part within the reasoning summary. Required."""
-    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The full text of the completed reasoning summary. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        summary_index: int,
-        text: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DONE  # type: ignore
-
-
-class ResponseRefusalDeltaEvent(ResponseStreamEvent, discriminator="response.refusal.delta"):
-    """Emitted when there is a partial refusal text.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.refusal.delta``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REFUSAL_DELTA
-    :ivar item_id: The ID of the output item that the refusal text is added to. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item that the refusal text is added to. Required.
-    :vartype output_index: int
-    :ivar content_index: The index of the content part that the refusal text is added to. Required.
-    :vartype content_index: int
-    :ivar delta: The refusal text that is added. Required.
-    :vartype delta: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REFUSAL_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.refusal.delta``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the output item that the refusal text is added to. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item that the refusal text is added to. Required."""
-    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the content part that the refusal text is added to. Required."""
-    delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The refusal text that is added. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        content_index: int,
-        delta: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REFUSAL_DELTA  # type: ignore
-
-
-class ResponseRefusalDoneEvent(ResponseStreamEvent, discriminator="response.refusal.done"):
-    """Emitted when refusal text is finalized.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.refusal.done``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_REFUSAL_DONE
-    :ivar item_id: The ID of the output item that the refusal text is finalized. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item that the refusal text is finalized. Required.
-    :vartype output_index: int
-    :ivar content_index: The index of the content part that the refusal text is finalized.
-     Required.
-    :vartype content_index: int
-    :ivar refusal: The refusal text that is finalized. Required.
-    :vartype refusal: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_REFUSAL_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.refusal.done``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the output item that the refusal text is finalized. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item that the refusal text is finalized. Required."""
-    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the content part that the refusal text is finalized. Required."""
-    refusal: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The refusal text that is finalized. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        content_index: int,
-        refusal: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_REFUSAL_DONE  # type: ignore
-
-
-class ResponsesMessageItemParam(ItemParam, discriminator="message"):
-    """A response message item, representing a role and content, as provided as client request
-    parameters.
-
-    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
-    ResponsesAssistantMessageItemParam, ResponsesDeveloperMessageItemParam,
-    ResponsesSystemMessageItemParam, ResponsesUserMessageItemParam
-
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar role: The role associated with the message. Required. Known values are: "system",
-     "developer", "user", and "assistant".
-    :vartype role: str or ~azure.ai.projects.models.ResponsesMessageRole
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    type: Literal[ItemType.MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the responses item, which is always 'message'. Required."""
-    role: str = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])
-    """The role associated with the message. Required. Known values are: \"system\", \"developer\",
-    \"user\", and \"assistant\"."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        role: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.MESSAGE  # type: ignore
-
-
-class ResponsesAssistantMessageItemParam(ResponsesMessageItemParam, discriminator="assistant"):
-    """A message parameter item with the ``assistant`` role.
-
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar role: The role of the message, which is always ``assistant``. Required.
-    :vartype role: str or ~azure.ai.projects.models.ASSISTANT
-    :ivar content: The content associated with the message. Required. Is either a str type or a
-     [ItemContent] type.
-    :vartype content: str or list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.ASSISTANT] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``assistant``. Required."""
-    content: Union["str", list["_models.ItemContent"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The content associated with the message. Required. Is either a str type or a [ItemContent]
-    type."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        content: Union[str, list["_models.ItemContent"]],
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.ASSISTANT  # type: ignore
-
-
-class ResponsesMessageItemResource(ItemResource, discriminator="message"):
-    """A response message resource item, representing a role and content, as provided on service
-    responses.
-
-    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
-    ResponsesAssistantMessageItemResource, ResponsesDeveloperMessageItemResource,
-    ResponsesSystemMessageItemResource, ResponsesUserMessageItemResource
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or
-     ``incomplete``. Populated when items are returned via API. Required. Is one of the following
-     types: Literal["in_progress"], Literal["completed"], Literal["incomplete"]
-    :vartype status: str or str or str
-    :ivar role: The role associated with the message. Required. Known values are: "system",
-     "developer", "user", and "assistant".
-    :vartype role: str or ~azure.ai.projects.models.ResponsesMessageRole
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    type: Literal[ItemType.MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the responses item, which is always 'message'. Required."""
-    status: Literal["in_progress", "completed", "incomplete"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The status of the item. One of ``in_progress``, ``completed``, or
-    ``incomplete``. Populated when items are returned via API. Required. Is one of the following
-    types: Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]"""
-    role: str = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])
-    """The role associated with the message. Required. Known values are: \"system\", \"developer\",
-    \"user\", and \"assistant\"."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "completed", "incomplete"],
-        role: str,
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.MESSAGE  # type: ignore
-
-
-class ResponsesAssistantMessageItemResource(ResponsesMessageItemResource, discriminator="assistant"):
-    """A message resource item with the ``assistant`` role.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or
-     ``incomplete``. Populated when items are returned via API. Required. Is one of the following
-     types: Literal["in_progress"], Literal["completed"], Literal["incomplete"]
-    :vartype status: str or str or str
-    :ivar role: The role of the message, which is always ``assistant``. Required.
-    :vartype role: str or ~azure.ai.projects.models.ASSISTANT
-    :ivar content: The content associated with the message. Required.
-    :vartype content: list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.ASSISTANT] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``assistant``. Required."""
-    content: list["_models.ItemContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The content associated with the message. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "completed", "incomplete"],
-        content: list["_models.ItemContent"],
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.ASSISTANT  # type: ignore
-
-
-class ResponsesDeveloperMessageItemParam(ResponsesMessageItemParam, discriminator="developer"):
-    """A message parameter item with the ``developer`` role.
-
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar role: The role of the message, which is always ``developer``. Required.
-    :vartype role: str or ~azure.ai.projects.models.DEVELOPER
-    :ivar content: The content associated with the message. Required. Is either a str type or a
-     [ItemContent] type.
-    :vartype content: str or list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.DEVELOPER] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``developer``. Required."""
-    content: Union["str", list["_models.ItemContent"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The content associated with the message. Required. Is either a str type or a [ItemContent]
-    type."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        content: Union[str, list["_models.ItemContent"]],
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.DEVELOPER  # type: ignore
-
-
-class ResponsesDeveloperMessageItemResource(ResponsesMessageItemResource, discriminator="developer"):
-    """A message resource item with the ``developer`` role.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or
-     ``incomplete``. Populated when items are returned via API. Required. Is one of the following
-     types: Literal["in_progress"], Literal["completed"], Literal["incomplete"]
-    :vartype status: str or str or str
-    :ivar role: The role of the message, which is always ``developer``. Required.
-    :vartype role: str or ~azure.ai.projects.models.DEVELOPER
-    :ivar content: The content associated with the message. Required.
-    :vartype content: list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.DEVELOPER] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``developer``. Required."""
-    content: list["_models.ItemContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The content associated with the message. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "completed", "incomplete"],
-        content: list["_models.ItemContent"],
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.DEVELOPER  # type: ignore
-
-
-class ResponsesSystemMessageItemParam(ResponsesMessageItemParam, discriminator="system"):
-    """A message parameter item with the ``system`` role.
-
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar role: The role of the message, which is always ``system``. Required.
-    :vartype role: str or ~azure.ai.projects.models.SYSTEM
-    :ivar content: The content associated with the message. Required. Is either a str type or a
-     [ItemContent] type.
-    :vartype content: str or list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.SYSTEM] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``system``. Required."""
-    content: Union["str", list["_models.ItemContent"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The content associated with the message. Required. Is either a str type or a [ItemContent]
-    type."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        content: Union[str, list["_models.ItemContent"]],
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.SYSTEM  # type: ignore
-
-
-class ResponsesSystemMessageItemResource(ResponsesMessageItemResource, discriminator="system"):
-    """A message resource item with the ``system`` role.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or
-     ``incomplete``. Populated when items are returned via API. Required. Is one of the following
-     types: Literal["in_progress"], Literal["completed"], Literal["incomplete"]
-    :vartype status: str or str or str
-    :ivar role: The role of the message, which is always ``system``. Required.
-    :vartype role: str or ~azure.ai.projects.models.SYSTEM
-    :ivar content: The content associated with the message. Required.
-    :vartype content: list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.SYSTEM] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``system``. Required."""
-    content: list["_models.ItemContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The content associated with the message. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "completed", "incomplete"],
-        content: list["_models.ItemContent"],
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.SYSTEM  # type: ignore
-
-
-class ResponsesUserMessageItemParam(ResponsesMessageItemParam, discriminator="user"):
-    """A message parameter item with the ``user`` role.
-
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar role: The role of the message, which is always ``user``. Required.
-    :vartype role: str or ~azure.ai.projects.models.USER
-    :ivar content: The content associated with the message. Required. Is either a str type or a
-     [ItemContent] type.
-    :vartype content: str or list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.USER] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``user``. Required."""
-    content: Union["str", list["_models.ItemContent"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The content associated with the message. Required. Is either a str type or a [ItemContent]
-    type."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        content: Union[str, list["_models.ItemContent"]],
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.USER  # type: ignore
-
-
-class ResponsesUserMessageItemResource(ResponsesMessageItemResource, discriminator="user"):
-    """A message resource item with the ``user`` role.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: The type of the responses item, which is always 'message'. Required.
-    :vartype type: str or ~azure.ai.projects.models.MESSAGE
-    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or
-     ``incomplete``. Populated when items are returned via API. Required. Is one of the following
-     types: Literal["in_progress"], Literal["completed"], Literal["incomplete"]
-    :vartype status: str or str or str
-    :ivar role: The role of the message, which is always ``user``. Required.
-    :vartype role: str or ~azure.ai.projects.models.USER
-    :ivar content: The content associated with the message. Required.
-    :vartype content: list[~azure.ai.projects.models.ItemContent]
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    role: Literal[ResponsesMessageRole.USER] = rest_discriminator(name="role", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The role of the message, which is always ``user``. Required."""
-    content: list["_models.ItemContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The content associated with the message. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "completed", "incomplete"],
-        content: list["_models.ItemContent"],
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.role = ResponsesMessageRole.USER  # type: ignore
-
-
-class ResponseText(_Model):
-    """ResponseText.
-
-    :ivar format:
-    :vartype format: ~azure.ai.projects.models.ResponseTextFormatConfiguration
-    """
-
-    format: Optional["_models.ResponseTextFormatConfiguration"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-
-    @overload
-    def __init__(
-        self,
-        *,
-        format: Optional["_models.ResponseTextFormatConfiguration"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ResponseTextDeltaEvent(ResponseStreamEvent, discriminator="response.output_text.delta"):
-    """Emitted when there is an additional text delta.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.output_text.delta``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_OUTPUT_TEXT_DELTA
-    :ivar item_id: The ID of the output item that the text delta was added to. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item that the text delta was added to. Required.
-    :vartype output_index: int
-    :ivar content_index: The index of the content part that the text delta was added to. Required.
-    :vartype content_index: int
-    :ivar delta: The text delta that was added. Required.
-    :vartype delta: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.output_text.delta``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the output item that the text delta was added to. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item that the text delta was added to. Required."""
-    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the content part that the text delta was added to. Required."""
-    delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The text delta that was added. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        content_index: int,
-        delta: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DELTA  # type: ignore
-
-
-class ResponseTextDoneEvent(ResponseStreamEvent, discriminator="response.output_text.done"):
-    """Emitted when text content is finalized.
-
-    :ivar sequence_number: The sequence number for this event. Required.
-    :vartype sequence_number: int
-    :ivar type: The type of the event. Always ``response.output_text.done``. Required.
-    :vartype type: str or ~azure.ai.projects.models.RESPONSE_OUTPUT_TEXT_DONE
-    :ivar item_id: The ID of the output item that the text content is finalized. Required.
-    :vartype item_id: str
-    :ivar output_index: The index of the output item that the text content is finalized. Required.
-    :vartype output_index: int
-    :ivar content_index: The index of the content part that the text content is finalized.
-     Required.
-    :vartype content_index: int
-    :ivar text: The text content that is finalized. Required.
-    :vartype text: str
-    """
-
-    type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the event. Always ``response.output_text.done``. Required."""
-    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The ID of the output item that the text content is finalized. Required."""
-    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the output item that the text content is finalized. Required."""
-    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The index of the content part that the text content is finalized. Required."""
-    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The text content that is finalized. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        sequence_number: int,
-        item_id: str,
-        output_index: int,
-        content_index: int,
-        text: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DONE  # type: ignore
-
-
-class ResponseTextFormatConfiguration(_Model):
-    """ResponseTextFormatConfiguration.
-
-    You probably want to use the sub-classes and not this class directly.
Known sub-classes are: - ResponseTextFormatConfigurationJsonObject, ResponseTextFormatConfigurationJsonSchema, - ResponseTextFormatConfigurationText - - :ivar type: Required. Known values are: "text", "json_schema", and "json_object". - :vartype type: str or ~azure.ai.projects.models.ResponseTextFormatConfigurationType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"text\", \"json_schema\", and \"json_object\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ResponseTextFormatConfigurationJsonObject( - ResponseTextFormatConfiguration, discriminator="json_object" -): # pylint: disable=name-too-long - """ResponseTextFormatConfigurationJsonObject. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.JSON_OBJECT - """ - - type: Literal[ResponseTextFormatConfigurationType.JSON_OBJECT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseTextFormatConfigurationType.JSON_OBJECT # type: ignore - - -class ResponseTextFormatConfigurationJsonSchema( - ResponseTextFormatConfiguration, discriminator="json_schema" -): # pylint: disable=name-too-long - """JSON Schema response format. 
Used to generate structured JSON responses. - Learn more about `Structured Outputs `_. - - :ivar type: The type of response format being defined. Always ``json_schema``. Required. - :vartype type: str or ~azure.ai.projects.models.JSON_SCHEMA - :ivar description: A description of what the response format is for, used by the model to - determine how to respond in the format. - :vartype description: str - :ivar name: The name of the response format. Must be a-z, A-Z, 0-9, or contain - underscores and dashes, with a maximum length of 64. Required. - :vartype name: str - :ivar schema: Required. - :vartype schema: ~azure.ai.projects.models.ResponseFormatJsonSchemaSchema - :ivar strict: Whether to enable strict schema adherence when generating the output. - If set to true, the model will always follow the exact schema defined - in the ``schema`` field. Only a subset of JSON Schema is supported when - ``strict`` is ``true``. To learn more, read the `Structured Outputs - guide `_. - :vartype strict: bool - """ - - type: Literal[ResponseTextFormatConfigurationType.JSON_SCHEMA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of response format being defined. Always ``json_schema``. Required.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A description of what the response format is for, used by the model to - determine how to respond in the format.""" - name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The name of the response format. Must be a-z, A-Z, 0-9, or contain - underscores and dashes, with a maximum length of 64. 
Required.""" - schema: "_models.ResponseFormatJsonSchemaSchema" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" - strict: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether to enable strict schema adherence when generating the output. - If set to true, the model will always follow the exact schema defined - in the ``schema`` field. Only a subset of JSON Schema is supported when - ``strict`` is ``true``. To learn more, read the `Structured Outputs - guide `_.""" - - @overload - def __init__( - self, - *, - name: str, - schema: "_models.ResponseFormatJsonSchemaSchema", - description: Optional[str] = None, - strict: Optional[bool] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseTextFormatConfigurationType.JSON_SCHEMA # type: ignore - - -class ResponseTextFormatConfigurationText(ResponseTextFormatConfiguration, discriminator="text"): - """ResponseTextFormatConfigurationText. - - :ivar type: Required. - :vartype type: str or ~azure.ai.projects.models.TEXT - """ - - type: Literal[ResponseTextFormatConfigurationType.TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseTextFormatConfigurationType.TEXT # type: ignore - - -class ResponseUsage(_Model): - """Represents token usage details including input tokens, output tokens, - a breakdown of output tokens, and the total tokens used. - - :ivar input_tokens: The number of input tokens. Required. - :vartype input_tokens: int - :ivar input_tokens_details: A detailed breakdown of the input tokens. Required. - :vartype input_tokens_details: - ~azure.ai.projects.models.MemoryStoreOperationUsageInputTokensDetails - :ivar output_tokens: The number of output tokens. Required. - :vartype output_tokens: int - :ivar output_tokens_details: A detailed breakdown of the output tokens. Required. - :vartype output_tokens_details: - ~azure.ai.projects.models.MemoryStoreOperationUsageOutputTokensDetails - :ivar total_tokens: The total number of tokens used. Required. - :vartype total_tokens: int - """ - - input_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of input tokens. Required.""" - input_tokens_details: "_models.MemoryStoreOperationUsageInputTokensDetails" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A detailed breakdown of the input tokens. Required.""" - output_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The number of output tokens. Required.""" - output_tokens_details: "_models.MemoryStoreOperationUsageOutputTokensDetails" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """A detailed breakdown of the output tokens. Required.""" - total_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The total number of tokens used. 
Required.""" - - @overload - def __init__( - self, - *, - input_tokens: int, - input_tokens_details: "_models.MemoryStoreOperationUsageInputTokensDetails", - output_tokens: int, - output_tokens_details: "_models.MemoryStoreOperationUsageOutputTokensDetails", - total_tokens: int, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ResponseWebSearchCallCompletedEvent(ResponseStreamEvent, discriminator="response.web_search_call.completed"): - """Note: web_search is not yet available via Azure OpenAI. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.web_search_call.completed``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_WEB_SEARCH_CALL_COMPLETED - :ivar output_index: The index of the output item that the web search call is associated with. - Required. - :vartype output_index: int - :ivar item_id: Unique ID for the output item associated with the web search call. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.web_search_call.completed``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the web search call is associated with. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Unique ID for the output item associated with the web search call. 
Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_COMPLETED # type: ignore - - -class ResponseWebSearchCallInProgressEvent(ResponseStreamEvent, discriminator="response.web_search_call.in_progress"): - """Note: web_search is not yet available via Azure OpenAI. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.web_search_call.in_progress``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS - :ivar output_index: The index of the output item that the web search call is associated with. - Required. - :vartype output_index: int - :ivar item_id: Unique ID for the output item associated with the web search call. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.web_search_call.in_progress``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the web search call is associated with. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Unique ID for the output item associated with the web search call. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS # type: ignore - - -class ResponseWebSearchCallSearchingEvent(ResponseStreamEvent, discriminator="response.web_search_call.searching"): - """Note: web_search is not yet available via Azure OpenAI. - - :ivar sequence_number: The sequence number for this event. Required. - :vartype sequence_number: int - :ivar type: The type of the event. Always ``response.web_search_call.searching``. Required. - :vartype type: str or ~azure.ai.projects.models.RESPONSE_WEB_SEARCH_CALL_SEARCHING - :ivar output_index: The index of the output item that the web search call is associated with. - Required. - :vartype output_index: int - :ivar item_id: Unique ID for the output item associated with the web search call. Required. - :vartype item_id: str - """ - - type: Literal[ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_SEARCHING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The type of the event. Always ``response.web_search_call.searching``. Required.""" - output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The index of the output item that the web search call is associated with. Required.""" - item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Unique ID for the output item associated with the web search call. Required.""" - - @overload - def __init__( - self, - *, - sequence_number: int, - output_index: int, - item_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_SEARCHING # type: ignore - - -class SASCredentials(BaseCredentials, discriminator="SAS"): - """Shared Access Signature (SAS) credential definition. - - :ivar type: The credential type. Required. Shared Access Signature (SAS) credential - :vartype type: str or ~azure.ai.projects.models.SAS - :ivar sas_token: SAS token. - :vartype sas_token: str - """ - - type: Literal[CredentialType.SAS] = rest_discriminator(name="type", visibility=["read"]) # type: ignore - """The credential type. Required. Shared Access Signature (SAS) credential""" - sas_token: Optional[str] = rest_field(name="SAS", visibility=["read"]) - """SAS token.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = CredentialType.SAS # type: ignore - - -class Schedule(_Model): - """Schedule model. - - :ivar id: Identifier of the schedule. Required. - :vartype id: str - :ivar display_name: Name of the schedule. - :vartype display_name: str - :ivar description: Description of the schedule. - :vartype description: str - :ivar enabled: Enabled status of the schedule. Required. - :vartype enabled: bool - :ivar provisioning_status: Provisioning status of the schedule. Known values are: "Creating", - "Updating", "Deleting", "Succeeded", and "Failed". - :vartype provisioning_status: str or ~azure.ai.projects.models.ScheduleProvisioningStatus - :ivar trigger: Trigger for the schedule. Required. - :vartype trigger: ~azure.ai.projects.models.Trigger - :ivar task: Task for the schedule. Required. 
- :vartype task: ~azure.ai.projects.models.ScheduleTask - :ivar tags: Schedule's tags. Unlike properties, tags are fully mutable. - :vartype tags: dict[str, str] - :ivar properties: Schedule's properties. Unlike tags, properties are add-only. Once added, a - property cannot be removed. - :vartype properties: dict[str, str] - :ivar system_data: System metadata for the resource. Required. - :vartype system_data: dict[str, str] - """ - - id: str = rest_field(visibility=["read"]) - """Identifier of the schedule. Required.""" - display_name: Optional[str] = rest_field( - name="displayName", visibility=["read", "create", "update", "delete", "query"] - ) - """Name of the schedule.""" - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Description of the schedule.""" - enabled: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Enabled status of the schedule. Required.""" - provisioning_status: Optional[Union[str, "_models.ScheduleProvisioningStatus"]] = rest_field( - name="provisioningStatus", visibility=["read"] - ) - """Provisioning status of the schedule. Known values are: \"Creating\", \"Updating\", - \"Deleting\", \"Succeeded\", and \"Failed\".""" - trigger: "_models.Trigger" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Trigger for the schedule. Required.""" - task: "_models.ScheduleTask" = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Task for the schedule. Required.""" - tags: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Schedule's tags. Unlike properties, tags are fully mutable.""" - properties: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Schedule's properties. Unlike tags, properties are add-only. 
Once added, a property cannot be - removed.""" - system_data: dict[str, str] = rest_field(name="systemData", visibility=["read"]) - """System metadata for the resource. Required.""" - - @overload - def __init__( - self, - *, - enabled: bool, - trigger: "_models.Trigger", - task: "_models.ScheduleTask", - display_name: Optional[str] = None, - description: Optional[str] = None, - tags: Optional[dict[str, str]] = None, - properties: Optional[dict[str, str]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class ScheduleRun(_Model): - """Schedule run model. - - :ivar id: Identifier of the schedule run. Required. - :vartype id: str - :ivar schedule_id: Identifier of the schedule. Required. - :vartype schedule_id: str - :ivar success: Trigger success status of the schedule run. Required. - :vartype success: bool - :ivar trigger_time: Trigger time of the schedule run. - :vartype trigger_time: str - :ivar error: Error information for the schedule run. - :vartype error: str - :ivar properties: Properties of the schedule run. Required. - :vartype properties: dict[str, str] - """ - - id: str = rest_field(visibility=["read"]) - """Identifier of the schedule run. Required.""" - schedule_id: str = rest_field(name="scheduleId", visibility=["read", "create", "update", "delete", "query"]) - """Identifier of the schedule. Required.""" - success: bool = rest_field(visibility=["read"]) - """Trigger success status of the schedule run. 
Required.""" - trigger_time: Optional[str] = rest_field( - name="triggerTime", visibility=["read", "create", "update", "delete", "query"] - ) - """Trigger time of the schedule run.""" - error: Optional[str] = rest_field(visibility=["read"]) - """Error information for the schedule run.""" - properties: dict[str, str] = rest_field(visibility=["read"]) - """Properties of the schedule run. Required.""" - - @overload - def __init__( - self, - *, - schedule_id: str, - trigger_time: Optional[str] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class SharepointAgentTool(Tool, discriminator="sharepoint_grounding_preview"): - """The input definition information for a sharepoint tool as used to configure an agent. - - :ivar type: The object type, which is always 'sharepoint_grounding'. Required. - :vartype type: str or ~azure.ai.projects.models.SHAREPOINT_GROUNDING_PREVIEW - :ivar sharepoint_grounding_preview: The sharepoint grounding tool parameters. Required. - :vartype sharepoint_grounding_preview: - ~azure.ai.projects.models.SharepointGroundingToolParameters - """ - - type: Literal[ToolType.SHAREPOINT_GROUNDING_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """The object type, which is always 'sharepoint_grounding'. Required.""" - sharepoint_grounding_preview: "_models.SharepointGroundingToolParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The sharepoint grounding tool parameters. Required.""" - - @overload - def __init__( - self, - *, - sharepoint_grounding_preview: "_models.SharepointGroundingToolParameters", - ) -> None: ... 
- - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.type = ToolType.SHAREPOINT_GROUNDING_PREVIEW # type: ignore - - -class SharepointGroundingToolParameters(_Model): - """The sharepoint grounding tool parameters. - - :ivar project_connections: The project connections attached to this tool. There can be a - maximum of 1 connection - resource attached to the tool. - :vartype project_connections: list[~azure.ai.projects.models.ToolProjectConnection] - """ - - project_connections: Optional[list["_models.ToolProjectConnection"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The project connections attached to this tool. There can be a maximum of 1 connection - resource attached to the tool.""" - - @overload - def __init__( - self, - *, - project_connections: Optional[list["_models.ToolProjectConnection"]] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class StructuredInputDefinition(_Model): - """An structured input that can participate in prompt template substitutions and tool argument - binding. - - :ivar description: A human-readable description of the input. - :vartype description: str - :ivar default_value: The default value for the input if no run-time value is provided. - :vartype default_value: any - :ivar tool_argument_bindings: When provided, the input value is bound to the specified tool - arguments. - :vartype tool_argument_bindings: list[~azure.ai.projects.models.ToolArgumentBinding] - :ivar schema: The JSON schema for the structured input (optional). 
- :vartype schema: any - :ivar required: Whether the input property is required when the agent is invoked. - :vartype required: bool - """ - - description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """A human-readable description of the input.""" - default_value: Optional[Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The default value for the input if no run-time value is provided.""" - tool_argument_bindings: Optional[list["_models.ToolArgumentBinding"]] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """When provided, the input value is bound to the specified tool arguments.""" - schema: Optional[Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The JSON schema for the structured input (optional).""" - required: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """Whether the input property is required when the agent is invoked.""" - - @overload - def __init__( - self, - *, - description: Optional[str] = None, - default_value: Optional[Any] = None, - tool_argument_bindings: Optional[list["_models.ToolArgumentBinding"]] = None, - schema: Optional[Any] = None, - required: Optional[bool] = None, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class StructuredOutputDefinition(_Model): - """A structured output that can be produced by the agent. - - :ivar name: The name of the structured output. Required. - :vartype name: str - :ivar description: A description of the output to emit. Used by the model to determine when to - emit the output. Required. - :vartype description: str - :ivar schema: The JSON schema for the structured output. Required. 
-    :vartype schema: dict[str, any]
-    :ivar strict: Whether to enforce strict validation. Default ``true``. Required.
-    :vartype strict: bool
-    """
-
-    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The name of the structured output. Required."""
-    description: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """A description of the output to emit. Used by the model to determine when to emit the output.
-    Required."""
-    schema: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The JSON schema for the structured output. Required."""
-    strict: bool = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Whether to enforce strict validation. Default ``true``. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        name: str,
-        description: str,
-        schema: dict[str, Any],
-        strict: bool,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class StructuredOutputsItemResource(ItemResource, discriminator="structured_outputs"):
-    """StructuredOutputsItemResource.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.STRUCTURED_OUTPUTS
-    :ivar output: The structured output captured during the response. Required.
-    :vartype output: any
-    """
-
-    type: Literal[ItemType.STRUCTURED_OUTPUTS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    output: Any = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The structured output captured during the response. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        output: Any,
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.STRUCTURED_OUTPUTS  # type: ignore
-
-
-class TaxonomyCategory(_Model):
-    """Taxonomy category definition.
-
-    :ivar id: Unique identifier of the taxonomy category. Required.
-    :vartype id: str
-    :ivar name: Name of the taxonomy category. Required.
-    :vartype name: str
-    :ivar description: Description of the taxonomy category.
-    :vartype description: str
-    :ivar risk_category: Risk category associated with this taxonomy category. Required. Known
-     values are: "HateUnfairness", "Violence", "Sexual", and "SelfHarm".
-    :vartype risk_category: str or ~azure.ai.projects.models.RiskCategory
-    :ivar sub_categories: List of taxonomy sub categories. Required.
-    :vartype sub_categories: list[~azure.ai.projects.models.TaxonomySubCategory]
-    :ivar properties: Additional properties for the taxonomy category.
-    :vartype properties: dict[str, str]
-    """
-
-    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Unique identifier of the taxonomy category. Required."""
-    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Name of the taxonomy category. Required."""
-    description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Description of the taxonomy category."""
-    risk_category: Union[str, "_models.RiskCategory"] = rest_field(
-        name="riskCategory", visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Risk category associated with this taxonomy category. Required. Known values are:
-    \"HateUnfairness\", \"Violence\", \"Sexual\", and \"SelfHarm\"."""
-    sub_categories: list["_models.TaxonomySubCategory"] = rest_field(
-        name="subCategories", visibility=["read", "create", "update", "delete", "query"]
-    )
-    """List of taxonomy sub categories. Required."""
-    properties: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Additional properties for the taxonomy category."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        name: str,
-        risk_category: Union[str, "_models.RiskCategory"],
-        sub_categories: list["_models.TaxonomySubCategory"],
-        description: Optional[str] = None,
-        properties: Optional[dict[str, str]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class TaxonomySubCategory(_Model):
-    """Taxonomy sub-category definition.
-
-    :ivar id: Unique identifier of the taxonomy sub-category. Required.
-    :vartype id: str
-    :ivar name: Name of the taxonomy sub-category. Required.
-    :vartype name: str
-    :ivar description: Description of the taxonomy sub-category.
-    :vartype description: str
-    :ivar enabled: List of taxonomy items under this sub-category. Required.
-    :vartype enabled: bool
-    :ivar properties: Additional properties for the taxonomy sub-category.
-    :vartype properties: dict[str, str]
-    """
-
-    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Unique identifier of the taxonomy sub-category. Required."""
-    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Name of the taxonomy sub-category. Required."""
-    description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Description of the taxonomy sub-category."""
-    enabled: bool = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """List of taxonomy items under this sub-category. Required."""
-    properties: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Additional properties for the taxonomy sub-category."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        name: str,
-        enabled: bool,
-        description: Optional[str] = None,
-        properties: Optional[dict[str, str]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ToolArgumentBinding(_Model):
-    """ToolArgumentBinding.
-
-    :ivar tool_name: The name of the tool to participate in the argument binding. If not provided,
-     then all tools with matching arguments will participate in binding.
-    :vartype tool_name: str
-    :ivar argument_name: The name of the argument within the tool. Required.
-    :vartype argument_name: str
-    """
-
-    tool_name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The name of the tool to participate in the argument binding. If not provided, then all tools
-    with matching arguments will participate in binding."""
-    argument_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The name of the argument within the tool. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        argument_name: str,
-        tool_name: Optional[str] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ToolChoiceObject(_Model):
-    """ToolChoiceObject.
-
-    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
-    ToolChoiceObjectCodeInterpreter, ToolChoiceObjectComputer, ToolChoiceObjectFileSearch,
-    ToolChoiceObjectFunction, ToolChoiceObjectImageGen, ToolChoiceObjectMCP,
-    ToolChoiceObjectWebSearch
-
-    :ivar type: Required. Known values are: "file_search", "function", "computer_use_preview",
-     "web_search_preview", "image_generation", "code_interpreter", and "mcp".
-    :vartype type: str or ~azure.ai.projects.models.ToolChoiceObjectType
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])
-    """Required. Known values are: \"file_search\", \"function\", \"computer_use_preview\",
-    \"web_search_preview\", \"image_generation\", \"code_interpreter\", and \"mcp\"."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        type: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ToolChoiceObjectCodeInterpreter(ToolChoiceObject, discriminator="code_interpreter"):
-    """ToolChoiceObjectCodeInterpreter.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.CODE_INTERPRETER
-    """
-
-    type: Literal[ToolChoiceObjectType.CODE_INTERPRETER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-
-    @overload
-    def __init__(
-        self,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.CODE_INTERPRETER  # type: ignore
-
-
-class ToolChoiceObjectComputer(ToolChoiceObject, discriminator="computer_use_preview"):
-    """ToolChoiceObjectComputer.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.COMPUTER
-    """
-
-    type: Literal[ToolChoiceObjectType.COMPUTER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-
-    @overload
-    def __init__(
-        self,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.COMPUTER  # type: ignore
-
-
-class ToolChoiceObjectFileSearch(ToolChoiceObject, discriminator="file_search"):
-    """ToolChoiceObjectFileSearch.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.FILE_SEARCH
-    """
-
-    type: Literal[ToolChoiceObjectType.FILE_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-
-    @overload
-    def __init__(
-        self,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.FILE_SEARCH  # type: ignore
-
-
-class ToolChoiceObjectFunction(ToolChoiceObject, discriminator="function"):
-    """Use this option to force the model to call a specific function.
-
-    :ivar type: For function calling, the type is always ``function``. Required.
-    :vartype type: str or ~azure.ai.projects.models.FUNCTION
-    :ivar name: The name of the function to call. Required.
-    :vartype name: str
-    """
-
-    type: Literal[ToolChoiceObjectType.FUNCTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """For function calling, the type is always ``function``. Required."""
-    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The name of the function to call. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        name: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.FUNCTION  # type: ignore
-
-
-class ToolChoiceObjectImageGen(ToolChoiceObject, discriminator="image_generation"):
-    """ToolChoiceObjectImageGen.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.IMAGE_GENERATION
-    """
-
-    type: Literal[ToolChoiceObjectType.IMAGE_GENERATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-
-    @overload
-    def __init__(
-        self,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.IMAGE_GENERATION  # type: ignore
-
-
-class ToolChoiceObjectMCP(ToolChoiceObject, discriminator="mcp"):
-    """Use this option to force the model to call a specific tool on a remote MCP server.
-
-    :ivar type: For MCP tools, the type is always ``mcp``. Required.
-    :vartype type: str or ~azure.ai.projects.models.MCP
-    :ivar server_label: The label of the MCP server to use. Required.
-    :vartype server_label: str
-    :ivar name: The name of the tool to call on the server.
-    :vartype name: str
-    """
-
-    type: Literal[ToolChoiceObjectType.MCP] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """For MCP tools, the type is always ``mcp``. Required."""
-    server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The label of the MCP server to use. Required."""
-    name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The name of the tool to call on the server."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        server_label: str,
-        name: Optional[str] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.MCP  # type: ignore
-
-
-class ToolChoiceObjectWebSearch(ToolChoiceObject, discriminator="web_search_preview"):
-    """Note: web_search is not yet available via Azure OpenAI.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.WEB_SEARCH
-    """
-
-    type: Literal[ToolChoiceObjectType.WEB_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-
-    @overload
-    def __init__(
-        self,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolChoiceObjectType.WEB_SEARCH  # type: ignore
-
-
-class ToolDescription(_Model):
-    """Description of a tool that can be used by an agent.
-
-    :ivar name: The name of the tool.
-    :vartype name: str
-    :ivar description: A brief description of the tool's purpose.
-    :vartype description: str
-    """
-
-    name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The name of the tool."""
-    description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """A brief description of the tool's purpose."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        name: Optional[str] = None,
-        description: Optional[str] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ToolProjectConnection(_Model):
-    """A project connection resource.
-
-    :ivar project_connection_id: A project connection in a ToolProjectConnectionList attached to
-     this tool. Required.
-    :vartype project_connection_id: str
-    """
-
-    project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """A project connection in a ToolProjectConnectionList attached to this tool. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        project_connection_id: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class ToolProjectConnectionList(_Model):
-    """A set of project connection resources currently used by either the ``bing_grounding``,
-    ``fabric_dataagent``, or ``sharepoint_grounding`` tools.
-
-    :ivar project_connections: The project connections attached to this tool. There can be a
-     maximum of 1 connection
-     resource attached to the tool.
-    :vartype project_connections: list[~azure.ai.projects.models.ToolProjectConnection]
-    """
-
-    project_connections: Optional[list["_models.ToolProjectConnection"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The project connections attached to this tool. There can be a maximum of 1 connection
-    resource attached to the tool."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        project_connections: Optional[list["_models.ToolProjectConnection"]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class TopLogProb(_Model):
-    """The top log probability of a token.
-
-    :ivar token: Required.
-    :vartype token: str
-    :ivar logprob: Required.
-    :vartype logprob: float
-    :ivar bytes: Required.
-    :vartype bytes: list[int]
-    """
-
-    token: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Required."""
-    logprob: float = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Required."""
-    bytes: list[int] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        token: str,
-        logprob: float,
-        bytes: list[int],
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class UserProfileMemoryItem(MemoryItem, discriminator="user_profile"):
-    """A memory item specifically containing user profile information extracted from conversations,
-    such as preferences, interests, and personal details.
-
-    :ivar memory_id: The unique ID of the memory item. Required.
-    :vartype memory_id: str
-    :ivar updated_at: The last update time of the memory item. Required.
-    :vartype updated_at: ~datetime.datetime
-    :ivar scope: The namespace that logically groups and isolates memories, such as a user ID.
-     Required.
-    :vartype scope: str
-    :ivar content: The content of the memory. Required.
-    :vartype content: str
-    :ivar kind: The kind of the memory item. Required. User profile information extracted from
-     conversations.
-    :vartype kind: str or ~azure.ai.projects.models.USER_PROFILE
-    """
-
-    kind: Literal[MemoryItemKind.USER_PROFILE] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The kind of the memory item. Required. User profile information extracted from conversations."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        memory_id: str,
-        updated_at: datetime.datetime,
-        scope: str,
-        content: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.kind = MemoryItemKind.USER_PROFILE  # type: ignore
-
-
-class VectorStoreFileAttributes(_Model):
-    """Set of 16 key-value pairs that can be attached to an object. This can be
-    useful for storing additional information about the object in a structured
-    format, and querying for objects via API or the dashboard. Keys are strings
-    with a maximum length of 64 characters. Values are strings with a maximum
-    length of 512 characters, booleans, or numbers.
-
-    """
-
-
-class WebSearchAction(_Model):
-    """WebSearchAction.
-
-    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
-    WebSearchActionFind, WebSearchActionOpenPage, WebSearchActionSearch
-
-    :ivar type: Required. Known values are: "search", "open_page", and "find".
-    :vartype type: str or ~azure.ai.projects.models.WebSearchActionType
-    """
-
-    __mapping__: dict[str, _Model] = {}
-    type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])
-    """Required. Known values are: \"search\", \"open_page\", and \"find\"."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        type: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-
-
-class WebSearchActionFind(WebSearchAction, discriminator="find"):
-    """Action type "find": Searches for a pattern within a loaded page.
-
-    :ivar type: The action type. Required.
-    :vartype type: str or ~azure.ai.projects.models.FIND
-    :ivar url: The URL of the page searched for the pattern. Required.
-    :vartype url: str
-    :ivar pattern: The pattern or text to search for within the page. Required.
-    :vartype pattern: str
-    """
-
-    type: Literal[WebSearchActionType.FIND] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The action type. Required."""
-    url: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The URL of the page searched for the pattern. Required."""
-    pattern: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The pattern or text to search for within the page. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        url: str,
-        pattern: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = WebSearchActionType.FIND  # type: ignore
-
-
-class WebSearchActionOpenPage(WebSearchAction, discriminator="open_page"):
-    """Action type "open_page" - Opens a specific URL from search results.
-
-    :ivar type: The action type. Required.
-    :vartype type: str or ~azure.ai.projects.models.OPEN_PAGE
-    :ivar url: The URL opened by the model. Required.
-    :vartype url: str
-    """
-
-    type: Literal[WebSearchActionType.OPEN_PAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The action type. Required."""
-    url: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The URL opened by the model. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        url: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = WebSearchActionType.OPEN_PAGE  # type: ignore
-
-
-class WebSearchActionSearch(WebSearchAction, discriminator="search"):
-    """Action type "search" - Performs a web search query.
-
-    :ivar type: The action type. Required.
-    :vartype type: str or ~azure.ai.projects.models.SEARCH
-    :ivar query: The search query. Required.
-    :vartype query: str
-    """
-
-    type: Literal[WebSearchActionType.SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The action type. Required."""
-    query: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The search query. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        query: str,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = WebSearchActionType.SEARCH  # type: ignore
-
-
-class WebSearchPreviewTool(Tool, discriminator="web_search_preview"):
-    """Note: web_search is not yet available via Azure OpenAI.
-
-    :ivar type: The type of the web search tool. One of ``web_search_preview`` or
-     ``web_search_preview_2025_03_11``. Required.
-    :vartype type: str or ~azure.ai.projects.models.WEB_SEARCH_PREVIEW
-    :ivar user_location: The user's location.
-    :vartype user_location: ~azure.ai.projects.models.Location
-    :ivar search_context_size: High level guidance for the amount of context window space to use
-     for the search. One of ``low``, ``medium``, or ``high``. ``medium`` is the default. Is one of
-     the following types: Literal["low"], Literal["medium"], Literal["high"]
-    :vartype search_context_size: str or str or str
-    """
-
-    type: Literal[ToolType.WEB_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """The type of the web search tool. One of ``web_search_preview`` or
-    ``web_search_preview_2025_03_11``. Required."""
-    user_location: Optional["_models.Location"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The user's location."""
-    search_context_size: Optional[Literal["low", "medium", "high"]] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """High level guidance for the amount of context window space to use for the search. One of
-    ``low``, ``medium``, or ``high``. ``medium`` is the default. Is one of the following types:
-    Literal[\"low\"], Literal[\"medium\"], Literal[\"high\"]"""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        user_location: Optional["_models.Location"] = None,
-        search_context_size: Optional[Literal["low", "medium", "high"]] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ToolType.WEB_SEARCH_PREVIEW  # type: ignore
-
-
-class WebSearchToolCallItemParam(ItemParam, discriminator="web_search_call"):
-    """The results of a web search tool call. See the
-    `web search guide `_ for more information.
-
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.WEB_SEARCH_CALL
-    :ivar action: An object describing the specific action taken in this web search call.
-     Includes details on how the model used the web (search, open_page, find). Required.
-    :vartype action: ~azure.ai.projects.models.WebSearchAction
-    """
-
-    type: Literal[ItemType.WEB_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    action: "_models.WebSearchAction" = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """An object describing the specific action taken in this web search call.
-    Includes details on how the model used the web (search, open_page, find). Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        action: "_models.WebSearchAction",
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.WEB_SEARCH_CALL  # type: ignore
-
-
-class WebSearchToolCallItemResource(ItemResource, discriminator="web_search_call"):
-    """The results of a web search tool call. See the
-    `web search guide `_ for more information.
-
-    :ivar id: Required.
-    :vartype id: str
-    :ivar created_by: The information about the creator of the item.
-    :vartype created_by: ~azure.ai.projects.models.CreatedBy
-    :ivar type: Required.
-    :vartype type: str or ~azure.ai.projects.models.WEB_SEARCH_CALL
-    :ivar status: The status of the web search tool call. Required. Is one of the following types:
-     Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["failed"]
-    :vartype status: str or str or str or str
-    :ivar action: An object describing the specific action taken in this web search call.
-     Includes details on how the model used the web (search, open_page, find). Required.
-    :vartype action: ~azure.ai.projects.models.WebSearchAction
-    """
-
-    type: Literal[ItemType.WEB_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    status: Literal["in_progress", "searching", "completed", "failed"] = rest_field(
-        visibility=["read", "create", "update", "delete", "query"]
-    )
-    """The status of the web search tool call. Required. Is one of the following types:
-    Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"], Literal[\"failed\"]"""
-    action: "_models.WebSearchAction" = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """An object describing the specific action taken in this web search call.
-    Includes details on how the model used the web (search, open_page, find). Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        id: str,  # pylint: disable=redefined-builtin
-        status: Literal["in_progress", "searching", "completed", "failed"],
-        action: "_models.WebSearchAction",
-        created_by: Optional["_models.CreatedBy"] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = ItemType.WEB_SEARCH_CALL  # type: ignore
-
-
-class WeeklyRecurrenceSchedule(RecurrenceSchedule, discriminator="Weekly"):
-    """Weekly recurrence schedule.
-
-    :ivar type: Weekly recurrence type. Required. Weekly recurrence pattern.
-    :vartype type: str or ~azure.ai.projects.models.WEEKLY
-    :ivar days_of_week: Days of the week for the recurrence schedule. Required.
-    :vartype days_of_week: list[str or ~azure.ai.projects.models.DayOfWeek]
-    """
-
-    type: Literal[RecurrenceType.WEEKLY] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Weekly recurrence type. Required. Weekly recurrence pattern."""
-    days_of_week: list[Union[str, "_models.DayOfWeek"]] = rest_field(
-        name="daysOfWeek", visibility=["read", "create", "update", "delete", "query"]
-    )
-    """Days of the week for the recurrence schedule. Required."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        days_of_week: list[Union[str, "_models.DayOfWeek"]],
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.type = RecurrenceType.WEEKLY  # type: ignore
-
-
-class WorkflowDefinition(AgentDefinition, discriminator="workflow"):
-    """The workflow specification in CSDL format.
-
-    :ivar rai_config: Configuration for Responsible AI (RAI) content filtering and safety features.
-    :vartype rai_config: ~azure.ai.projects.models.RaiConfig
-    :ivar kind: Required.
-    :vartype kind: str or ~azure.ai.projects.models.WORKFLOW
-    :ivar trigger: (Deprecated) The CSDL trigger definition. Use ``workflow`` property instead to
-     send CSDL yaml definition inline.
-    :vartype trigger: dict[str, any]
-    :ivar workflow: The CSDL YAML definition of the workflow.
-    :vartype workflow: str
-    """
-
-    kind: Literal[AgentKind.WORKFLOW] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
-    """Required."""
-    trigger: Optional[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """(Deprecated) The CSDL trigger definition. Use ``workflow`` property instead to send CSDL yaml
-    definition inline."""
-    workflow: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
-    """The CSDL YAML definition of the workflow."""
-
-    @overload
-    def __init__(
-        self,
-        *,
-        rai_config: Optional["_models.RaiConfig"] = None,
-        trigger: Optional[dict[str, Any]] = None,
-        workflow: Optional[str] = None,
-    ) -> None: ...
-
-    @overload
-    def __init__(self, mapping: Mapping[str, Any]) -> None:
-        """
-        :param mapping: raw JSON to initialize the model.
-        :type mapping: Mapping[str, Any]
-        """
-
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        super().__init__(*args, **kwargs)
-        self.kind = AgentKind.WORKFLOW  # type: ignore
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_patch.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_patch.py
deleted file mode 100644
index 6cd95db87150..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_patch.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# ------------------------------------
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-# ------------------------------------
-"""Customize generated code here.
-
-Follow our quickstart for examples: https://aka.ms/azsdk/python/dpcodegen/python/customize
-"""
-from typing import List, Dict
-from ._patch_evaluations import EvaluatorIds
-from ._models import CustomCredential as CustomCredentialGenerated
-
-
-class CustomCredential(CustomCredentialGenerated):
-    """Custom credential definition.
-
-    :ivar type: The credential type. Always equals CredentialType.CUSTOM. Required.
-    :vartype type: str or ~azure.ai.projects.models.CredentialType
-    :ivar credential_keys: The secret custom credential keys. Required.
-    :vartype credential_keys: dict[str, str]
-    """
-
-    credential_keys: Dict[str, str] = {}
-    """The secret custom credential keys. Required."""
-
-
-__all__: List[str] = [
-    "EvaluatorIds",
-    "CustomCredential",
-]  # Add all objects you want publicly available to users at this package level
-
-
-def patch_sdk():
-    """Do not remove from this file.
-
-    `patch_sdk` is a last resort escape hatch that allows you to do customizations
-    you can't accomplish using the techniques described in
-    https://aka.ms/azsdk/python/dpcodegen/python/customize
-    """
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_patch_evaluations.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_patch_evaluations.py
deleted file mode 100644
index d362c28d0d8a..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_patch_evaluations.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# pylint: disable=line-too-long,useless-suppression
-# ------------------------------------
-# Copyright (c) Microsoft Corporation.
-# Licensed under the MIT License.
-# ------------------------------------
-"""Customize generated code here.
-
-Follow our quickstart for examples: https://aka.ms/azsdk/python/dpcodegen/python/customize
-"""
-from enum import Enum
-
-from azure.core import CaseInsensitiveEnumMeta
-
-
-class EvaluatorIds(str, Enum, metaclass=CaseInsensitiveEnumMeta):
-    RELEVANCE = "azureai://built-in/evaluators/relevance"
-    HATE_UNFAIRNESS = "azureai://built-in/evaluators/hate_unfairness"
-    VIOLENCE = "azureai://built-in/evaluators/violence"
-    GROUNDEDNESS = "azureai://built-in/evaluators/groundedness"
-    GROUNDEDNESS_PRO = "azureai://built-in/evaluators/groundedness_pro"
-    BLEU_SCORE = "azureai://built-in/evaluators/bleu_score"
-    CODE_VULNERABILITY = "azureai://built-in/evaluators/code_vulnerability"
-    COHERENCE = "azureai://built-in/evaluators/coherence"
-    CONTENT_SAFETY = "azureai://built-in/evaluators/content_safety"
-    F1_SCORE = "azureai://built-in/evaluators/f1_score"
-    FLUENCY = "azureai://built-in/evaluators/fluency"
-    GLEU_SCORE = "azureai://built-in/evaluators/gleu_score"
-    INDIRECT_ATTACK = "azureai://built-in/evaluators/indirect_attack"
-    INTENT_RESOLUTION = "azureai://built-in/evaluators/intent_resolution"
-    METEOR_SCORE = "azureai://built-in/evaluators/meteor_score"
-    PROTECTED_MATERIAL = "azureai://built-in/evaluators/protected_material"
-    RETRIEVAL = "azureai://built-in/evaluators/retrieval"
-    ROUGE_SCORE = "azureai://built-in/evaluators/rouge_score"
-    SELF_HARM = "azureai://built-in/evaluators/self_harm"
-    SEXUAL = "azureai://built-in/evaluators/sexual"
-    SIMILARITY = "azureai://built-in/evaluators/similarity"
-    QA = "azureai://built-in/evaluators/qa"
-    DOCUMENT_RETRIEVAL = "azureai://built-in/evaluators/document_retrieval"
-    TASK_ADHERENCE = "azureai://built-in/evaluators/task_adherence"
-    TOOL_CALL_ACCURACY = "azureai://built-in/evaluators/tool_call_accuracy"
-    UNGROUNDED_ATTRIBUTES = "azureai://built-in/evaluators/ungrounded_attributes"
-    RESPONSE_COMPLETENESS = "azureai://built-in/evaluators/response_completeness"
-    # AOAI Graders
-    LABEL_GRADER = 
"azureai://built-in/evaluators/azure-openai/label_grader" - STRING_CHECK_GRADER = "azureai://built-in/evaluators/azure-openai/string_check_grader" - TEXT_SIMILARITY_GRADER = "azureai://built-in/evaluators/azure-openai/text_similarity_grader" - GENERAL_GRADER = "azureai://built-in/evaluators/azure-openai/custom_grader" - SCORE_MODEL_GRADER = "azureai://built-in/evaluators/azure-openai/score_model_grader" diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/base.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/base.py deleted file mode 100644 index 8915aadb172b..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/base.py +++ /dev/null @@ -1,315 +0,0 @@ -# --------------------------------------------------------- -# Copyright (c) Microsoft Corporation. All rights reserved. -# --------------------------------------------------------- -# pylint: disable=broad-exception-caught,unused-argument,logging-fstring-interpolation,too-many-statements,too-many-return-statements -import inspect -import json -import os -import traceback -from abc import abstractmethod -from typing import Any, AsyncGenerator, Generator, Union - -import uvicorn -from opentelemetry import context as otel_context, trace -from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator -from starlette.applications import Starlette -from starlette.middleware.base import BaseHTTPMiddleware -from starlette.middleware.cors import CORSMiddleware -from starlette.requests import Request -from starlette.responses import JSONResponse, Response, StreamingResponse -from starlette.routing import Route -from starlette.types import ASGIApp - -from ..constants import Constants -from ..logger import get_logger, request_context -from ..models import ( - Response as OpenAIResponse, - ResponseStreamEvent, -) -from .common.agent_run_context import AgentRunContext - -logger = get_logger() 
-DEBUG_ERRORS = os.environ.get(Constants.AGENT_DEBUG_ERRORS, "false").lower() == "true"
-
-
-class AgentRunContextMiddleware(BaseHTTPMiddleware):
-    def __init__(self, app: ASGIApp):
-        super().__init__(app)
-
-    async def dispatch(self, request: Request, call_next):
-        if request.url.path in ("/runs", "/responses"):
-            try:
-                self.set_request_id_to_context_var(request)
-                payload = await request.json()
-            except Exception as e:
-                logger.error(f"Invalid JSON payload: {e}")
-                return JSONResponse({"error": f"Invalid JSON payload: {e}"}, status_code=400)
-            try:
-                request.state.agent_run_context = AgentRunContext(payload)
-                self.set_run_context_to_context_var(request.state.agent_run_context)
-            except Exception as e:
-                logger.error(f"Context build failed: {e}.", exc_info=True)
-                return JSONResponse({"error": f"Context build failed: {e}"}, status_code=500)
-        return await call_next(request)
-
-    def set_request_id_to_context_var(self, request):
-        request_id = request.headers.get("X-Request-Id", None)
-        if request_id:
-            ctx = request_context.get() or {}
-            ctx["azure.ai.agentserver.x-request-id"] = request_id
-            request_context.set(ctx)
-
-    def set_run_context_to_context_var(self, run_context):
-        agent_id, agent_name = "", ""
-        agent_obj = run_context.get_agent_id_object()
-        if agent_obj:
-            agent_name = getattr(agent_obj, "name", "")
-            agent_version = getattr(agent_obj, "version", "")
-            agent_id = f"{agent_name}:{agent_version}"
-
-        res = {
-            "azure.ai.agentserver.response_id": run_context.response_id or "",
-            "azure.ai.agentserver.conversation_id": run_context.conversation_id or "",
-            "azure.ai.agentserver.streaming": str(run_context.stream or False),
-            "gen_ai.agent.id": agent_id,
-            "gen_ai.agent.name": agent_name,
-            "gen_ai.provider.name": "AzureAI Hosted Agents",
-            "gen_ai.response.id": run_context.response_id or "",
-        }
-        ctx = request_context.get() or {}
-        ctx.update(res)
-        request_context.set(ctx)
-
-
-class FoundryCBAgent:
-    def __init__(self):
-        async def runs_endpoint(request):
-            # Set up tracing context and span
-            context = request.state.agent_run_context
-            ctx = request_context.get()
-            with self.tracer.start_as_current_span(
-                name=f"HostedAgents-{context.response_id}",
-                attributes=ctx,
-                kind=trace.SpanKind.SERVER,
-            ):
-                try:
-                    logger.info("Start processing CreateResponse request:")
-
-                    context_carrier = {}
-                    TraceContextTextMapPropagator().inject(context_carrier)
-
-                    resp = await self.agent_run(context)
-
-                    if inspect.isgenerator(resp):
-                        # Prefetch first event to allow 500 status if generation fails immediately
-                        try:
-                            first_event = next(resp)
-                        except Exception as e:  # noqa: BLE001
-                            err_msg = str(e) if DEBUG_ERRORS else "Internal error"
-                            logger.error("Generator initialization failed: %s\n%s", e, traceback.format_exc())
-                            return JSONResponse({"error": err_msg}, status_code=500)
-
-                        def gen():
-                            ctx = TraceContextTextMapPropagator().extract(carrier=context_carrier)
-                            token = otel_context.attach(ctx)
-                            error_sent = False
-                            try:
-                                # yield prefetched first event
-                                yield _event_to_sse_chunk(first_event)
-                                for event in resp:
-                                    yield _event_to_sse_chunk(event)
-                            except Exception as e:  # noqa: BLE001
-                                err_msg = str(e) if DEBUG_ERRORS else "Internal error"
-                                logger.error("Error in non-async generator: %s\n%s", e, traceback.format_exc())
-                                payload = {"error": err_msg}
-                                yield f"event: error\ndata: {json.dumps(payload)}\n\n"
-                                yield "data: [DONE]\n\n"
-                                error_sent = True
-                            finally:
-                                logger.info("End of processing CreateResponse request:")
-                                otel_context.detach(token)
-                                if not error_sent:
-                                    yield "data: [DONE]\n\n"
-
-                        return StreamingResponse(gen(), media_type="text/event-stream")
-                    if inspect.isasyncgen(resp):
-                        # Prefetch first async event to allow early 500
-                        try:
-                            first_event = await resp.__anext__()
-                        except StopAsyncIteration:
-                            # No items produced; treat as empty successful stream
-                            def empty_gen():
-                                yield "data: [DONE]\n\n"
-
-                            return StreamingResponse(empty_gen(), media_type="text/event-stream")
-                        except Exception as e:  # noqa: BLE001
-                            err_msg = str(e) if DEBUG_ERRORS else "Internal error"
-                            logger.error("Async generator initialization failed: %s\n%s", e, traceback.format_exc())
-                            return JSONResponse({"error": err_msg}, status_code=500)
-
-                        async def gen_async():
-                            ctx = TraceContextTextMapPropagator().extract(carrier=context_carrier)
-                            token = otel_context.attach(ctx)
-                            error_sent = False
-                            try:
-                                # yield prefetched first event
-                                yield _event_to_sse_chunk(first_event)
-                                async for event in resp:
-                                    yield _event_to_sse_chunk(event)
-                            except Exception as e:  # noqa: BLE001
-                                err_msg = str(e) if DEBUG_ERRORS else "Internal error"
-                                logger.error("Error in async generator: %s\n%s", e, traceback.format_exc())
-                                payload = {"error": err_msg}
-                                yield f"event: error\ndata: {json.dumps(payload)}\n\n"
-                                yield "data: [DONE]\n\n"
-                                error_sent = True
-                            finally:
-                                logger.info("End of processing CreateResponse request.")
-                                otel_context.detach(token)
-                                if not error_sent:
-                                    yield "data: [DONE]\n\n"
-
-                        return StreamingResponse(gen_async(), media_type="text/event-stream")
-                    logger.info("End of processing CreateResponse request.")
-                    return JSONResponse(resp.as_dict())
-                except Exception as e:
-                    # TODO: extract status code from exception
-                    logger.error(f"Error processing CreateResponse request: {traceback.format_exc()}")
-                    return JSONResponse({"error": str(e)}, status_code=500)
-
-        async def liveness_endpoint(request):
-            result = await self.agent_liveness(request)
-            return _to_response(result)
-
-        async def readiness_endpoint(request):
-            result = await self.agent_readiness(request)
-            return _to_response(result)
-
-        routes = [
-            Route("/runs", runs_endpoint, methods=["POST"], name="agent_run"),
-            Route("/responses", runs_endpoint, methods=["POST"], name="agent_response"),
-            Route("/liveness", liveness_endpoint, methods=["GET"], name="agent_liveness"),
-            Route("/readiness", readiness_endpoint, methods=["GET"], name="agent_readiness"),
-        ]
-
-        self.app = Starlette(routes=routes)
-        self.app.add_middleware(
-            CORSMiddleware,
-            allow_origins=["*"],
-            allow_credentials=True,
-            allow_methods=["*"],
-            allow_headers=["*"],
-        )
-        self.app.add_middleware(AgentRunContextMiddleware)
-
-        @self.app.on_event("startup")
-        async def attach_appinsights_logger():
-            import logging
-
-            for handler in logger.handlers:
-                if handler.name == "appinsights_handler":
-                    for logger_name in ["uvicorn", "uvicorn.error", "uvicorn.access"]:
-                        uv_logger = logging.getLogger(logger_name)
-                        uv_logger.addHandler(handler)
-                        uv_logger.setLevel(logger.level)
-                        uv_logger.propagate = False
-
-        self.tracer = None
-
-    @abstractmethod
-    async def agent_run(
-        self, context: AgentRunContext
-    ) -> Union[OpenAIResponse, Generator[ResponseStreamEvent, Any, Any], AsyncGenerator[ResponseStreamEvent, Any]]:
-        raise NotImplementedError
-
-    async def agent_liveness(self, request) -> Union[Response, dict]:
-        return Response(status_code=200)
-
-    async def agent_readiness(self, request) -> Union[Response, dict]:
-        return {"status": "ready"}
-
-    async def run_async(
-        self,
-        port: int = int(os.environ.get("DEFAULT_AD_PORT", 8088)),
-    ) -> None:
-        """
-        Awaitable server starter for use **inside** an existing event loop.
-
-        :param port: Port to listen on.
-        :type port: int
-        """
-        self.init_tracing()
-        config = uvicorn.Config(self.app, host="0.0.0.0", port=port, loop="asyncio")
-        server = uvicorn.Server(config)
-        logger.info(f"Starting FoundryCBAgent server async on port {port}")
-        await server.serve()
-
-    def run(self, port: int = int(os.environ.get("DEFAULT_AD_PORT", 8088))) -> None:
-        """
-        Start a Starlette server on localhost: exposing:
-            POST /runs
-            POST /responses
-            GET /liveness
-            GET /readiness
-
-        :param port: Port to listen on.
-        :type port: int
-        """
-        self.init_tracing()
-        logger.info(f"Starting FoundryCBAgent server on port {port}")
-        uvicorn.run(self.app, host="0.0.0.0", port=port)
-
-    def init_tracing(self):
-        exporter = os.environ.get(Constants.OTEL_EXPORTER_ENDPOINT)
-        app_insights_conn_str = os.environ.get(Constants.APPLICATION_INSIGHTS_CONNECTION_STRING)
-        if exporter or app_insights_conn_str:
-            from opentelemetry.sdk.resources import Resource
-            from opentelemetry.sdk.trace import TracerProvider
-
-            resource = Resource.create(self.get_trace_attributes())
-            provider = TracerProvider(resource=resource)
-            if exporter:
-                self.setup_otlp_exporter(exporter, provider)
-            if app_insights_conn_str:
-                self.setup_application_insights_exporter(app_insights_conn_str, provider)
-            trace.set_tracer_provider(provider)
-            self.init_tracing_internal(exporter_endpoint=exporter, app_insights_conn_str=app_insights_conn_str)
-        self.tracer = trace.get_tracer(__name__)
-
-    def get_trace_attributes(self):
-        return {
-            "service.name": "azure.ai.agentserver",
-        }
-
-    def init_tracing_internal(self, exporter_endpoint=None, app_insights_conn_str=None):
-        pass
-
-    def setup_application_insights_exporter(self, connection_string, provider):
-        from opentelemetry.sdk.trace.export import BatchSpanProcessor
-
-        from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
-
-        exporter_instance = AzureMonitorTraceExporter.from_connection_string(connection_string)
-        processor = BatchSpanProcessor(exporter_instance)
-        provider.add_span_processor(processor)
-        logger.info("Tracing setup with Application Insights exporter.")
-
-    def setup_otlp_exporter(self, endpoint, provider):
-        from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
-        from opentelemetry.sdk.trace.export import BatchSpanProcessor
-
-        exporter_instance = OTLPSpanExporter(endpoint=endpoint)
-        processor = BatchSpanProcessor(exporter_instance)
-        provider.add_span_processor(processor)
-        logger.info(f"Tracing setup with OTLP exporter: {endpoint}")
-
-
-def _event_to_sse_chunk(event: ResponseStreamEvent) -> str:
-    event_data = json.dumps(event.as_dict())
-    if event.type:
-        return f"event: {event.type}\ndata: {event_data}\n\n"
-    return f"data: {event_data}\n\n"
-
-
-def _to_response(result: Union[Response, dict]) -> Response:
-    return result if isinstance(result, Response) else JSONResponse(result)
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/agent_run_context.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/agent_run_context.py
deleted file mode 100644
index 6fae56f0027d..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/agent_run_context.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# ---------------------------------------------------------
-# Copyright (c) Microsoft Corporation. All rights reserved.
-# ---------------------------------------------------------
-from ...logger import get_logger
-from ...models import CreateResponse
-from ...models.projects import AgentId, AgentReference, ResponseConversation1
-from .id_generator.foundry_id_generator import FoundryIdGenerator
-from .id_generator.id_generator import IdGenerator
-
-logger = get_logger()
-
-
-class AgentRunContext:
-    def __init__(self, payload: dict):
-        self._raw_payload = payload
-        self._request = _deserialize_create_response(payload)
-        self._id_generator = FoundryIdGenerator.from_request(payload)
-        self._response_id = self._id_generator.response_id
-        self._conversation_id = self._id_generator.conversation_id
-        self._stream = self.request.get("stream", False)
-
-    @property
-    def raw_payload(self) -> dict:
-        return self._raw_payload
-
-    @property
-    def request(self) -> CreateResponse:
-        return self._request
-
-    @property
-    def id_generator(self) -> IdGenerator:
-        return self._id_generator
-
-    @property
-    def response_id(self) -> str:
-        return self._response_id
-
-    @property
-    def conversation_id(self) -> str:
-        return self._conversation_id
-
-    @property
-    def stream(self) -> bool:
-        return self._stream
-
-    def get_agent_id_object(self) -> AgentId:
-        agent = self.request.get("agent")
-        if not agent:
-            return None  # type: ignore
-        return AgentId(
-            {
-                "type": agent.type,
-                "name": agent.name,
-                "version": agent.version,
-            }
-        )
-
-    def get_conversation_object(self) -> ResponseConversation1:
-        if not self._conversation_id:
-            return None  # type: ignore
-        return ResponseConversation1(id=self._conversation_id)
-
-
-def _deserialize_create_response(payload: dict) -> CreateResponse:
-    _deserialized = CreateResponse(**payload)
-
-    raw_agent_reference = payload.get("agent")
-    if raw_agent_reference:
-        _deserialized["agent"] = _deserialize_agent_reference(raw_agent_reference)
-    return _deserialized
-
-
-def _deserialize_agent_reference(payload: dict) -> AgentReference:
-    if not payload:
-        return None  # type: ignore
-    return AgentReference(**payload)
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/foundry_id_generator.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/foundry_id_generator.py
deleted file mode 100644
index 910a7c481daa..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/foundry_id_generator.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# pylint: disable=docstring-missing-return,docstring-missing-param,docstring-missing-rtype
-# ---------------------------------------------------------
-# Copyright (c) Microsoft Corporation. All rights reserved.
-# ---------------------------------------------------------
-from __future__ import annotations
-
-import base64
-import os
-import re
-from typing import Optional
-
-from .id_generator import IdGenerator
-
-_WATERMARK_RE = re.compile(r"^[A-Za-z0-9]*$")
-
-
-class FoundryIdGenerator(IdGenerator):
-    """
-    Python port of the C# FoundryIdGenerator.
-
-    Notable behaviors preserved:
-    - Secure, alphanumeric entropy via base64 filtering, retrying until exact length.
-    - Watermark must be strictly alphanumeric; inserted mid-entropy.
-    - Only one delimiter (default "_") after the prefix; no delimiter between entropy and partition key.
-    - Partition key is the last N characters of the second ID segment (post-delimiter).
-    """
-
-    def __init__(self, response_id: Optional[str], conversation_id: Optional[str]):
-        self.response_id = response_id or self._new_id("resp")
-        self.conversation_id = conversation_id or self._new_id("conv")
-        self._partition_id = self._extract_partition_id(self.conversation_id)
-
-    @classmethod
-    def from_request(cls, payload: dict) -> "FoundryIdGenerator":
-        response_id = payload.get("metadata", {}).get("response_id", None)
-        conv_id_raw = payload.get("conversation", None)
-        if isinstance(conv_id_raw, str):
-            conv_id = conv_id_raw
-        elif isinstance(conv_id_raw, dict):
-            conv_id = conv_id_raw.get("id", None)
-        else:
-            conv_id = None
-        return cls(response_id, conv_id)
-
-    def generate(self, category: Optional[str] = None) -> str:
-        prefix = "id" if not category else category
-        return self._new_id(prefix, partition_key=self._partition_id)
-
-    # --- Static helpers (mirror C# private static methods) --------------------
-
-    @staticmethod
-    def _new_id(
-        prefix: str,
-        string_length: int = 32,
-        partition_key_length: int = 18,
-        infix: Optional[str] = "",
-        watermark: str = "",
-        delimiter: str = "_",
-        partition_key: Optional[str] = None,
-        partition_key_hint: str = "",
-    ) -> str:
-        """
-        Generates a new ID.
-
-        Format matches the C# logic:
-            f"{prefix}{delimiter}{infix}{partitionKey}{entropy}"
-        (i.e., exactly one delimiter after prefix; no delimiter between entropy and partition key)
-        """
-        entropy = FoundryIdGenerator._secure_entropy(string_length)
-
-        if partition_key is not None:
-            pkey = partition_key
-        elif partition_key_hint:
-            pkey = FoundryIdGenerator._extract_partition_id(
-                partition_key_hint,
-                string_length=string_length,
-                partition_key_length=partition_key_length,
-                delimiter=delimiter,
-            )
-        else:
-            pkey = FoundryIdGenerator._secure_entropy(partition_key_length)
-
-        if watermark:
-            if not _WATERMARK_RE.fullmatch(watermark):
-                raise ValueError(f"Only alphanumeric characters may be in watermark: {watermark}")
-            half = string_length // 2
-            entropy = f"{entropy[:half]}{watermark}{entropy[half:]}"
-
-        infix = infix or ""
-        prefix_part = f"{prefix}{delimiter}" if prefix else ""
-        return f"{prefix_part}{entropy}{infix}{pkey}"
-
-    @staticmethod
-    def _secure_entropy(string_length: int) -> str:
-        """
-        Generates a secure random alphanumeric string of exactly `string_length`.
-        Re-tries whole generation until the filtered base64 string is exactly the desired length,
-        matching the C# behavior.
-        """
-        if string_length < 1:
-            raise ValueError("Must be greater than or equal to 1")
-
-        while True:
-            # Use cryptographically secure bytes; base64 then filter to alnum.
-            buf = os.urandom(string_length)
-            encoded = base64.b64encode(buf).decode("ascii")
-            alnum = "".join(ch for ch in encoded if ch.isalnum())
-            if len(alnum) >= string_length:
-                return alnum[:string_length]
-            # else: retry, same as the C# loop which discards and regenerates
-
-    @staticmethod
-    def _extract_partition_id(
-        id_str: str,
-        string_length: int = 32,
-        partition_key_length: int = 18,
-        delimiter: str = "_",
-    ) -> str:
-        """
-        Extracts partition key from an existing ID.
-
-        Expected shape (per C# logic): "<prefix>_<entropy><partitionKey>"
-        We take the last `partition_key_length` characters from the *second* segment.
-        """
-        if not id_str:
-            raise ValueError("Id cannot be null or empty")
-
-        parts = [p for p in id_str.split(delimiter) if p]  # remove empty entries like C# Split(..., RemoveEmptyEntries)
-        if len(parts) < 2:
-            raise ValueError(f"Id '{id_str}' does not contain a valid partition key.")
-
-        segment = parts[1]
-        if len(segment) < string_length + partition_key_length:
-            raise ValueError(f"Id '{id_str}' does not contain a valid id.")
-
-        return segment[-partition_key_length:]
diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/id_generator.py b/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/id_generator.py
deleted file mode 100644
index 48f0d9add17d..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/id_generator.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# ---------------------------------------------------------
-# Copyright (c) Microsoft Corporation. All rights reserved.
-# ---------------------------------------------------------
-from abc import ABC, abstractmethod
-from typing import Optional
-
-
-class IdGenerator(ABC):
-    @abstractmethod
-    def generate(self, category: Optional[str] = None) -> str: ...
-
-    def generate_function_call_id(self) -> str:
-        return self.generate("func")
-
-    def generate_function_output_id(self) -> str:
-        return self.generate("funcout")
-
-    def generate_message_id(self) -> str:
-        return self.generate("msg")
diff --git a/sdk/agentserver/azure-ai-agentserver-core/cspell.json b/sdk/agentserver/azure-ai-agentserver-core/cspell.json
index 126cadc0625c..a2c6989a053e 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/cspell.json
+++ b/sdk/agentserver/azure-ai-agentserver-core/cspell.json
@@ -1,27 +1,25 @@
 {
   "ignoreWords": [
-    "Agentic",
-    "UPIA",
-    "ANSII",
-    "inpainting",
-    "CSDL",
-    "azureai",
-    "GLEU",
-    "fstring",
-    "alnum",
-    "GENAI",
-    "Prereqs",
-    "mslearn",
-    "PYTHONIOENCODING",
-    "GETFL",
-    "DETFL",
-    "SETFL",
-    "Planifica"
+    "agentserver",
+    "appinsights",
+    "ASGI",
+    "autouse",
+    "caplog",
+    "genai",
+    "hypercorn",
+    "openapi",
+    "paramtype",
+    "pytestmark",
+    "rtype",
+    "starlette",
+    "traceparent",
+    "tracestate",
+    "tracecontext"
   ],
   "ignorePaths": [
-      "*.csv",
-      "*.json",
-      "*.rst",
-      "samples/**"
+    "*.csv",
+    "*.json",
+    "*.rst",
+    "samples/**"
   ]
-  }
\ No newline at end of file
+}
diff --git a/sdk/agentserver/azure-ai-agentserver-core/dev_requirements.txt b/sdk/agentserver/azure-ai-agentserver-core/dev_requirements.txt
index 129e3e21fef1..5a716de9f2de 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/dev_requirements.txt
+++ b/sdk/agentserver/azure-ai-agentserver-core/dev_requirements.txt
@@ -1,2 +1,7 @@
 -e ../../../eng/tools/azure-sdk-tools
-python-dotenv
\ No newline at end of file
+pytest
+httpx
+pytest-asyncio
+opentelemetry-api>=1.20.0
+opentelemetry-sdk>=1.20.0
+azure-monitor-opentelemetry-exporter>=1.0.0b21
diff --git a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.rst b/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.rst
deleted file mode 100644
index da01b083b0b3..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-azure.ai.agentserver.core package
-=================================
-
-.. automodule:: azure.ai.agentserver.core
-    :inherited-members:
-    :members:
-    :undoc-members:
-
-Subpackages
------------
-
-.. toctree::
-    :maxdepth: 4
-
-    azure.ai.agentserver.core.server
-
-Submodules
-----------
-
-azure.ai.agentserver.core.constants module
-------------------------------------------
-
-.. automodule:: azure.ai.agentserver.core.constants
-    :inherited-members:
-    :members:
-    :undoc-members:
-
-azure.ai.agentserver.core.logger module
----------------------------------------
-
-.. automodule:: azure.ai.agentserver.core.logger
-    :inherited-members:
-    :members:
-    :undoc-members:
diff --git a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.id_generator.rst b/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.id_generator.rst
deleted file mode 100644
index cf935aa1d1ed..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.id_generator.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-azure.ai.agentserver.core.server.common.id\_generator package
-=============================================================
-
-.. automodule:: azure.ai.agentserver.core.server.common.id_generator
-    :inherited-members:
-    :members:
-    :undoc-members:
-
-Submodules
-----------
-
-azure.ai.agentserver.core.server.common.id\_generator.foundry\_id\_generator module
------------------------------------------------------------------------------------
-
-.. automodule:: azure.ai.agentserver.core.server.common.id_generator.foundry_id_generator
-    :inherited-members:
-    :members:
-    :undoc-members:
-
-azure.ai.agentserver.core.server.common.id\_generator.id\_generator module
---------------------------------------------------------------------------
-
-.. automodule:: azure.ai.agentserver.core.server.common.id_generator.id_generator
-    :inherited-members:
-    :members:
-    :undoc-members:
diff --git a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.rst b/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.rst
deleted file mode 100644
index 26c4aaf4d15a..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.common.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-azure.ai.agentserver.core.server.common package
-===============================================
-
-.. automodule:: azure.ai.agentserver.core.server.common
-    :inherited-members:
-    :members:
-    :undoc-members:
-
-Subpackages
------------
-
-.. toctree::
-    :maxdepth: 4
-
-    azure.ai.agentserver.core.server.common.id_generator
-
-Submodules
-----------
-
-azure.ai.agentserver.core.server.common.agent\_run\_context module
-------------------------------------------------------------------
-
-.. automodule:: azure.ai.agentserver.core.server.common.agent_run_context
-    :inherited-members:
-    :members:
-    :undoc-members:
diff --git a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.rst b/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.rst
deleted file mode 100644
index b82fa765b839..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/doc/azure.ai.agentserver.core.server.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-azure.ai.agentserver.core.server package
-========================================
-
-.. automodule:: azure.ai.agentserver.core.server
-    :inherited-members:
-    :members:
-    :undoc-members:
-
-Subpackages
------------
-
-.. toctree::
-    :maxdepth: 4
-
-    azure.ai.agentserver.core.server.common
-
-Submodules
-----------
-
-azure.ai.agentserver.core.server.base module
---------------------------------------------
-
-.. automodule:: azure.ai.agentserver.core.server.base
-    :inherited-members:
-    :members:
-    :undoc-members:
diff --git a/sdk/agentserver/azure-ai-agentserver-core/pyproject.toml b/sdk/agentserver/azure-ai-agentserver-core/pyproject.toml
index f574360722bb..038907f79001 100644
--- a/sdk/agentserver/azure-ai-agentserver-core/pyproject.toml
+++ b/sdk/agentserver/azure-ai-agentserver-core/pyproject.toml
@@ -1,13 +1,14 @@
 [project]
 name = "azure-ai-agentserver-core"
 dynamic = ["version", "readme"]
-description = "Agents server adapter for Azure AI"
+description = "Foundation utilities and host framework for Azure AI Hosted Agents"
 requires-python = ">=3.10"
 authors = [
     { name = "Microsoft Corporation", email = "azpysdkhelp@microsoft.com" },
 ]
 license = "MIT"
 classifiers = [
+    "Development Status :: 4 - Beta",
     "Programming Language :: Python",
     "Programming Language :: Python :: 3 :: Only",
     "Programming Language :: Python :: 3",
@@ -16,25 +17,28 @@ classifiers = [
     "Programming Language :: Python :: 3.12",
     "Programming Language :: Python :: 3.13",
 ]
-keywords = ["azure", "azure sdk"]
+keywords = ["azure", "azure sdk", "agent", "agentserver", "core"]
 dependencies = [
-    "azure-monitor-opentelemetry>=1.5.0",
-    "azure-ai-projects",
-    "azure-ai-agents>=1.2.0b5",
-    "azure-core>=1.35.0",
-    "azure-identity",
-    "openai>=1.80.0",
-    "opentelemetry-api>=1.35",
-    "opentelemetry-exporter-otlp-proto-http",
     "starlette>=0.45.0",
-    "uvicorn>=0.31.0",
+    "hypercorn>=0.17.0",
+]
+
+[project.optional-dependencies]
+tracing = [
+    "opentelemetry-api>=1.20.0",
+    "opentelemetry-sdk>=1.20.0",
+    "opentelemetry-exporter-otlp-proto-grpc>=1.20.0",
+    "azure-monitor-opentelemetry-exporter>=1.0.0b21",
 ]
 
 [build-system]
 requires = ["setuptools>=69", "wheel"]
 build-backend = "setuptools.build_meta"
 
+[project.urls]
+repository = "https://github.com/Azure/azure-sdk-for-python"
+
 [tool.setuptools.packages.find]
 exclude = [
     "tests*",
@@ -42,6 +46,7 @@ exclude = [
     "doc*",
     "azure",
     "azure.ai",
+    "azure.ai.agentserver",
] [tool.setuptools.dynamic] @@ -49,23 +54,23 @@ version = { attr = "azure.ai.agentserver.core._version.VERSION" } readme = { file = ["README.md"], content-type = "text/markdown" } [tool.setuptools.package-data] -pytyped = ["py.typed"] +"azure.ai.agentserver.core" = ["py.typed"] [tool.ruff] line-length = 120 -target-version = "py311" -lint.select = ["E", "F", "B", "I"] # E=pycodestyle errors, F=Pyflakes, B=bugbear, I=import sort +target-version = "py310" +lint.select = ["E", "F", "B", "I"] lint.ignore = [] fix = false -exclude = [ - "**/azure/ai/agentserver/core/models/", -] [tool.ruff.lint.isort] known-first-party = ["azure.ai.agentserver.core"] combine-as-imports = true [tool.azure-sdk-build] -breaking = false # incompatible python version -pyright = false -verifytypes = false \ No newline at end of file +breaking = false +mypy = true +pyright = true +verifytypes = true +pylint = true +type_check_samples = false diff --git a/sdk/agentserver/azure-ai-agentserver-core/pyrightconfig.json b/sdk/agentserver/azure-ai-agentserver-core/pyrightconfig.json index b7490ae2b8c7..f36c5a7fe0d3 100644 --- a/sdk/agentserver/azure-ai-agentserver-core/pyrightconfig.json +++ b/sdk/agentserver/azure-ai-agentserver-core/pyrightconfig.json @@ -1,13 +1,11 @@ { - "reportOptionalMemberAccess": "warning", - "reportArgumentType": "warning", - "reportAttributeAccessIssue": "warning", - "reportMissingImports": "warning", - "reportGeneralTypeIssues": "warning", - "reportReturnType": "warning", - - "exclude": [ - "**/azure/ai/agentserver/core/models/**", - "**/samples/**" - ] -} \ No newline at end of file + "reportOptionalMemberAccess": "warning", + "reportArgumentType": "warning", + "reportAttributeAccessIssue": "warning", + "reportMissingImports": "warning", + "reportGeneralTypeIssues": "warning", + "reportReturnType": "warning", + "exclude": [ + "**/samples/**" + ] +} diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/.env.sample 
b/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/.env.sample
deleted file mode 100644
index a19b1c6d02f7..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/.env.sample
+++ /dev/null
@@ -1,24 +0,0 @@
-# Core agent configuration
-API_HOST=github
-WEEKEND_PLANNER_MODE=container
-
-# GitHub Models (when API_HOST=github)
-GITHUB_TOKEN=your-github-token
-GITHUB_OPENAI_BASE_URL=https://models.inference.ai.azure.com
-GITHUB_MODEL=gpt-4o
-
-# Azure OpenAI (when API_HOST=azure)
-AZURE_OPENAI_ENDPOINT=https://.openai.azure.com/
-AZURE_OPENAI_VERSION=2025-01-01-preview
-AZURE_OPENAI_CHAT_DEPLOYMENT=
-
-# Telemetry & tracing
-OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4318/v1/traces
-OTEL_EXPORTER_OTLP_PROTOCOL=grpc
-OTEL_EXPORTER_OTLP_GRPC_ENDPOINT=http://127.0.0.1:4317
-APPLICATION_INSIGHTS_CONNECTION_STRING=
-
-# Optional GenAI capture overrides
-OTEL_GENAI_AGENT_NAME=Bilingual Weekend Planner Agent
-OTEL_GENAI_AGENT_DESCRIPTION=Assistant that plans weekend activities using weather and events data in multiple languages
-OTEL_GENAI_AGENT_ID=bilingual-weekend-planner
diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/README.md b/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/README.md
deleted file mode 100644
index 83296f5dd348..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/README.md
+++ /dev/null
@@ -1,42 +0,0 @@
-Bilingual Weekend Planner (Custom Container + Telemetry)
-
-- Container-hosted multi-agent weekend planner with full GenAI telemetry capture and a standalone tracing demo that exercises `opentelemetry-instrumentation-openai-agents-v2`.
-
-Prereqs
-- Optional: Activate repo venv `source .venv/bin/activate`
-- Install deps `pip install -U -r samples/python/custom/bilingual_weekend_planner/requirements.txt`
-
-Env Vars
-Choose the API host via `API_HOST`:
-
-- `github`: GitHub Models hosted on Azure AI Inference
-  - `GITHUB_TOKEN`
-  - Optional: `GITHUB_OPENAI_BASE_URL` (default `https://models.inference.ai.azure.com`)
-  - Optional: `GITHUB_MODEL` (default `gpt-4o`)
-- `azure`: Azure OpenAI
-  - `AZURE_OPENAI_ENDPOINT` (e.g. `https://.openai.azure.com/`)
-  - `AZURE_OPENAI_VERSION` (e.g. `2025-01-01-preview`)
-  - `AZURE_OPENAI_CHAT_DEPLOYMENT` (deployment name)
-
-Modes
-- Container (default): runs the bilingual triage agent via `FoundryCBAgent`.
-- `API_HOST=github GITHUB_TOKEN=... ./run.sh`
-- `API_HOST=azure AZURE_OPENAI_ENDPOINT=... AZURE_OPENAI_VERSION=2025-01-01-preview AZURE_OPENAI_CHAT_DEPLOYMENT=... ./run.sh`
-  - Test (non-stream):
-    `curl -s http://localhost:8088/responses -H 'Content-Type: application/json' -d '{"input":"What should I do this weekend in Seattle?"}'`
-  - Test (stream):
-    `curl -s http://localhost:8088/responses -H 'Content-Type: application/json' -d '{"input":"Plan my weekend in Barcelona","stream":true}'`
-- Telemetry demo: set `WEEKEND_PLANNER_MODE=demo` to run the content-capture simulation (no model calls).
-  `WEEKEND_PLANNER_MODE=demo python main.py`
-
-Telemetry
-- Console exporter is enabled by default; set `OTEL_EXPORTER_OTLP_ENDPOINT` (HTTP) or `OTEL_EXPORTER_OTLP_GRPC_ENDPOINT` to export spans elsewhere.
-- Set `APPLICATION_INSIGHTS_CONNECTION_STRING` to export spans to Azure Monitor.
-- GenAI capture flags are pre-configured (content, system instructions, tool metadata).
-- `opentelemetry-instrumentation-openai-agents-v2` enables span-and-event message capture for requests, responses, and tool payloads.
-- The tracing demo uses the `agents.tracing` helpers to emit spans without invoking external APIs.
-
-Notes
-- Uses `FoundryCBAgent` to host the bilingual weekend planner triage agent on `http://localhost:8088`.
-- Tools: `get_weather`, `get_activities`, `get_current_date`.
-- Rich logger output highlights tool invocations; bilingual agents route traveler requests to the right language specialist.
diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/main.py b/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/main.py
deleted file mode 100644
index 099d8dc45181..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/main.py
+++ /dev/null
@@ -1,579 +0,0 @@
-# mypy: ignore-errors
-"""Bilingual weekend planner sample with full GenAI telemetry capture."""
-
-from __future__ import annotations
-
-import json
-import logging
-import os
-import random
-from dataclasses import dataclass
-from datetime import datetime, timezone
-from typing import Callable
-from urllib.parse import urlparse
-
-import azure.identity
-import openai
-from agents import (
-    Agent,
-    OpenAIChatCompletionsModel,
-    Runner,
-    function_tool,
-    set_default_openai_client,
-    set_tracing_disabled,
-)
-from agents.tracing import (
-    agent_span as tracing_agent_span,
-    function_span as tracing_function_span,
-    generation_span as tracing_generation_span,
-    trace as tracing_trace,
-)
-from azure.ai.agentserver.core import AgentRunContext, FoundryCBAgent
-from azure.ai.agentserver.core.models import (
-    CreateResponse,
-    Response as OpenAIResponse,
-)
-from azure.ai.agentserver.core.models.projects import (
-    ItemContentOutputText,
-    ResponseCompletedEvent,
-    ResponseCreatedEvent,
-    ResponseOutputItemAddedEvent,
-    ResponsesAssistantMessageItemResource,
-    ResponseTextDeltaEvent,
-    ResponseTextDoneEvent,
-)
-from dotenv import load_dotenv
-from opentelemetry import trace
-from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
-from opentelemetry.instrumentation.openai_agents import
OpenAIAgentsInstrumentor -from opentelemetry.sdk.resources import Resource -from opentelemetry.sdk.trace import TracerProvider -from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter -from rich.logging import RichHandler - -try: - from azure.monitor.opentelemetry.exporter import ( # mypy: ignore - AzureMonitorTraceExporter, - ) -except Exception: # pragma: no cover - AzureMonitorTraceExporter = None # mypy: ignore - -# Load env early so adapter init sees them -load_dotenv(override=True) - - -logging.basicConfig( - level=logging.WARNING, - format="%(message)s", - datefmt="[%X]", - handlers=[RichHandler()], -) -logger = logging.getLogger("bilingual_weekend_planner") -RUN_MODE = os.getenv("WEEKEND_PLANNER_MODE", "container").lower() - - -@dataclass -class _ApiConfig: - """Helper describing how to create the OpenAI client.""" - - build_client: Callable[[], openai.AsyncOpenAI] - model_name: str - base_url: str - provider: str - - -def _set_capture_env(provider: str, base_url: str) -> None: - """Enable all GenAI capture toggles prior to instrumentation.""" - - capture_defaults = { - "OTEL_INSTRUMENTATION_OPENAI_AGENTS_CAPTURE_CONTENT": "true", - "OTEL_INSTRUMENTATION_OPENAI_AGENTS_CAPTURE_METRICS": "true", - "OTEL_GENAI_CAPTURE_MESSAGES": "true", - "OTEL_GENAI_CAPTURE_SYSTEM_INSTRUCTIONS": "true", - "OTEL_GENAI_CAPTURE_TOOL_DEFINITIONS": "true", - "OTEL_GENAI_EMIT_OPERATION_DETAILS": "true", - "OTEL_GENAI_AGENT_NAME": os.getenv( - "OTEL_GENAI_AGENT_NAME", - "Bilingual Weekend Planner Agent", - ), - "OTEL_GENAI_AGENT_DESCRIPTION": os.getenv( - "OTEL_GENAI_AGENT_DESCRIPTION", - "Assistant that plans weekend activities using weather and events data in multiple languages", - ), - "OTEL_GENAI_AGENT_ID": os.getenv( - "OTEL_GENAI_AGENT_ID", "bilingual-weekend-planner" - ), - } - for env_key, value in capture_defaults.items(): - os.environ.setdefault(env_key, value) - - parsed = urlparse(base_url) - if parsed.hostname: - 
os.environ.setdefault("OTEL_GENAI_SERVER_ADDRESS", parsed.hostname) - if parsed.port: - os.environ.setdefault("OTEL_GENAI_SERVER_PORT", str(parsed.port)) - - -def _resolve_api_config() -> _ApiConfig: - """Return the client configuration for the requested host.""" - - host = os.getenv("API_HOST", "github").lower() - - if host == "github": - base_url = os.getenv( - "GITHUB_OPENAI_BASE_URL", - "https://models.inference.ai.azure.com", - ).rstrip("/") - model_name = os.getenv("GITHUB_MODEL", "gpt-4o") - api_key = os.environ.get("GITHUB_TOKEN") - if not api_key: - if RUN_MODE != "demo": - raise RuntimeError("GITHUB_TOKEN is required when API_HOST=github") - api_key = "demo-key" - - def _build_client() -> openai.AsyncOpenAI: - return openai.AsyncOpenAI(base_url=base_url, api_key=api_key) - - return _ApiConfig( - build_client=_build_client, - model_name=model_name, - base_url=base_url, - provider="azure.ai.inference", - ) - - if host == "azure": - # Explicitly check for required environment variables - if "AZURE_OPENAI_ENDPOINT" not in os.environ: - raise ValueError("AZURE_OPENAI_ENDPOINT is required when API_HOST=azure") - if "AZURE_OPENAI_VERSION" not in os.environ: - raise ValueError("AZURE_OPENAI_VERSION is required when API_HOST=azure") - if "AZURE_OPENAI_CHAT_DEPLOYMENT" not in os.environ: - raise ValueError( - "AZURE_OPENAI_CHAT_DEPLOYMENT is required when API_HOST=azure" - ) - endpoint = os.environ["AZURE_OPENAI_ENDPOINT"].rstrip("/") - api_version = os.environ["AZURE_OPENAI_VERSION"] - deployment = os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"] - - credential = azure.identity.DefaultAzureCredential() - token_provider = azure.identity.get_bearer_token_provider( - credential, - "https://cognitiveservices.azure.com/.default", - ) - - def _build_client() -> openai.AsyncAzureOpenAI: - return openai.AsyncAzureOpenAI( - api_version=api_version, - azure_endpoint=endpoint, - azure_ad_token_provider=token_provider, - ) - - return _ApiConfig( - build_client=_build_client, - 
model_name=deployment, - base_url=endpoint, - provider="azure.ai.openai", - ) - - raise ValueError( - f"Unsupported API_HOST '{host}'. Supported values are 'github' or 'azure'." - ) - - -def _configure_otel() -> None: - """Configure the tracer provider and exporters.""" - - grpc_endpoint = os.getenv("OTEL_EXPORTER_OTLP_GRPC_ENDPOINT") - if not grpc_endpoint: - default_otlp_endpoint = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT") - protocol = os.getenv("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc").lower() - if default_otlp_endpoint and protocol == "grpc": - grpc_endpoint = default_otlp_endpoint - - conn = os.getenv("APPLICATION_INSIGHTS_CONNECTION_STRING") - resource = Resource.create( - { - "service.name": "weekend-planner-service", - "service.namespace": "leisure-orchestration", - "service.version": os.getenv("SERVICE_VERSION", "1.0.0"), - } - ) - - tracer_provider = TracerProvider(resource=resource) - - if grpc_endpoint: - tracer_provider.add_span_processor( - BatchSpanProcessor(OTLPSpanExporter(endpoint=grpc_endpoint)) - ) - print(f"[otel] OTLP gRPC exporter configured ({grpc_endpoint})") - elif conn: - if AzureMonitorTraceExporter is None: - print( - "Warning: Azure Monitor exporter not installed. 
" - "Install with: pip install azure-monitor-opentelemetry-exporter", - ) - tracer_provider.add_span_processor( - BatchSpanProcessor(ConsoleSpanExporter()) - ) - else: - tracer_provider.add_span_processor( - BatchSpanProcessor( - AzureMonitorTraceExporter.from_connection_string(conn) - ) - ) - print("[otel] Azure Monitor trace exporter configured") - else: - tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter())) - print("[otel] Console span exporter configured") - print( - "[otel] Set APPLICATION_INSIGHTS_CONNECTION_STRING to export to Application Insights " - "instead of the console", - ) - - trace.set_tracer_provider(tracer_provider) - - -api_config = _resolve_api_config() -_set_capture_env(api_config.provider, api_config.base_url) -_configure_otel() -OpenAIAgentsInstrumentor().instrument( - tracer_provider=trace.get_tracer_provider(), - capture_message_content="span_and_event", - agent_name="Weekend Planner", - base_url=api_config.base_url, - system=api_config.provider, -) - -client = api_config.build_client() -set_default_openai_client(client) -set_tracing_disabled(False) - - -def _chat_model() -> OpenAIChatCompletionsModel: - """Return the chat completions model used for weekend planning.""" - - return OpenAIChatCompletionsModel(model=api_config.model_name, openai_client=client) - - -SUNNY_WEATHER_PROBABILITY = 0.05 - - -@function_tool -def get_weather(city: str) -> dict[str, object]: - """Fetch mock weather information for the requested city.""" - - logger.info("Getting weather for %s", city) - if random.random() < SUNNY_WEATHER_PROBABILITY: - return {"city": city, "temperature": 72, "description": "Sunny"} - return {"city": city, "temperature": 60, "description": "Rainy"} - - -@function_tool -def get_activities(city: str, date: str) -> list[dict[str, object]]: - """Return mock activities for the supplied city and date.""" - - logger.info("Getting activities for %s on %s", city, date) - return [ - {"name": "Hiking", "location": city}, - 
{"name": "Beach", "location": city}, - {"name": "Museum", "location": city}, - ] - - -@function_tool -def get_current_date() -> str: - """Return the current date as YYYY-MM-DD.""" - - logger.info("Getting current date") - return datetime.now().strftime("%Y-%m-%d") - - -ENGLISH_WEEKEND_PLANNER = Agent( - name="Weekend Planner (English)", - instructions=( - "You help English-speaking travelers plan their weekends. " - "Use the available tools to gather the weekend date, current weather, and local activities. " - "Only recommend activities that align with the weather and include the date in your final response." - ), - tools=[get_weather, get_activities, get_current_date], - model=_chat_model(), -) - -# cSpell:disable -SPANISH_WEEKEND_PLANNER = Agent( - name="Planificador de fin de semana (Español)", - instructions=( - "Ayudas a viajeros hispanohablantes a planificar su fin de semana. " - "Usa las herramientas disponibles para obtener la fecha, el clima y actividades locales. " - "Recomienda actividades acordes al clima e incluye la fecha del fin de semana en tu respuesta." - ), - tools=[get_weather, get_activities, get_current_date], - model=_chat_model(), -) - -TRIAGE_AGENT = Agent( - name="Weekend Planner Triage", - instructions=( - "Revisa el idioma del viajero. " - "Si el mensaje está en español, realiza un handoff a 'Planificador de fin de semana (Español)'. " - "De lo contrario, usa 'Weekend Planner (English)'." 
- ), - handoffs=[SPANISH_WEEKEND_PLANNER, ENGLISH_WEEKEND_PLANNER], - model=_chat_model(), -) -# cSpell:enable - - -def _root_span_name(provider: str) -> str: - return f"weekend_planning_session[{provider}]" - - -def _apply_weekend_semconv( - span: trace.Span, - *, - user_text: str, - final_text: str, - conversation_id: str | None, - response_id: str, - final_agent_name: str | None, - success: bool, -) -> None: - parsed = urlparse(api_config.base_url) - if parsed.hostname: - span.set_attribute("server.address", parsed.hostname) - if parsed.port: - span.set_attribute("server.port", parsed.port) - - span.set_attribute("gen_ai.operation.name", "invoke_agent") - span.set_attribute("gen_ai.provider.name", api_config.provider) - span.set_attribute("gen_ai.request.model", api_config.model_name) - span.set_attribute("gen_ai.output.type", "text") - span.set_attribute("gen_ai.response.model", api_config.model_name) - span.set_attribute("gen_ai.response.id", response_id) - span.set_attribute( - "gen_ai.response.finish_reasons", - ["stop"] if success else ["error"], - ) - - if conversation_id: - span.set_attribute("gen_ai.conversation.id", conversation_id) - if TRIAGE_AGENT.instructions: - span.set_attribute("gen_ai.system_instructions", TRIAGE_AGENT.instructions) - if final_agent_name: - span.set_attribute("gen_ai.agent.name", final_agent_name) - else: - span.set_attribute("gen_ai.agent.name", TRIAGE_AGENT.name) - if user_text: - span.set_attribute( - "gen_ai.input.messages", - json.dumps([{"role": "user", "content": user_text}]), - ) - if final_text: - span.set_attribute( - "gen_ai.output.messages", - json.dumps([{"role": "assistant", "content": final_text}]), - ) - - -def _extract_user_text(request: CreateResponse) -> str: - """Extract the first user text input from the request body.""" - - input = request.get("input") - if not input: - return "" - - first = input[0] - content = first.get("content", None) if isinstance(first, dict) else first - if isinstance(content, str): 
- return content - - if isinstance(content, list): - for item in content: - text = item.get("text", None) - if text: - return text - return "" - - -def _stream_final_text(final_text: str, context: AgentRunContext): - """Yield streaming events for the provided final text.""" - - async def _async_stream(): - assembled = "" - yield ResponseCreatedEvent(response=OpenAIResponse(output=[])) - item_id = context.id_generator.generate_message_id() - yield ResponseOutputItemAddedEvent( - output_index=0, - item=ResponsesAssistantMessageItemResource( - id=item_id, - status="in_progress", - content=[ItemContentOutputText(text="", annotations=[])], - ), - ) - - words = final_text.split(" ") - for idx, token in enumerate(words): - piece = token if idx == len(words) - 1 else token + " " - assembled += piece - yield ResponseTextDeltaEvent(output_index=0, content_index=0, delta=piece) - - yield ResponseTextDoneEvent(output_index=0, content_index=0, text=assembled) - yield ResponseCompletedEvent( - response=OpenAIResponse( - metadata={}, - temperature=0.0, - top_p=0.0, - user="user", - id=context.response_id, - created_at=datetime.now(timezone.utc), - output=[ - ResponsesAssistantMessageItemResource( - id=item_id, - status="completed", - content=[ItemContentOutputText(text=assembled, annotations=[])], - ) - ], - ) - ) - - return _async_stream() - - -def dump(title: str, payload: object) -> None: - """Pretty print helper for the tracing demo.""" - - print(f"\n=== {title} ===") - print(json.dumps(payload, indent=2)) - - -def run_content_capture_demo() -> None: - """Simulate an agent workflow using the tracing helpers without calling an API.""" - - itinerary_prompt = [ - {"role": "system", "content": "Help travelers plan memorable weekends."}, - {"role": "user", "content": "I'm visiting Seattle this weekend."}, - ] - tool_args = {"city": "Seattle", "date": "2025-05-17"} - tool_result = { - "forecast": "Light rain, highs 60°F", - "packing_tips": ["rain jacket", "waterproof shoes"], - } - 
- with tracing_trace("weekend-planner-simulation"): - with tracing_agent_span(name="weekend_planner_demo") as agent: - dump( - "Agent span started", - {"span_id": agent.span_id, "trace_id": agent.trace_id}, - ) - - with tracing_generation_span( - input=itinerary_prompt, - output=[ - { - "role": "assistant", - "content": ( - "Day 1 explore Pike Place Market, Day 2 visit the Museum of Pop Culture, " - "Day 3 take the Bainbridge ferry if weather allows." - ), - } - ], - model=api_config.model_name, - usage={ - "input_tokens": 128, - "output_tokens": 96, - "total_tokens": 224, - }, - ): - pass - - with tracing_function_span( - name="get_weather", - input=json.dumps(tool_args), - output=tool_result, - ): - pass - - print("\nWorkflow complete – spans exported to the configured OTLP endpoint.") - - -class WeekendPlannerContainer(FoundryCBAgent): - """Container entry point that surfaces the weekend planner agent via FoundryCBAgent.""" - - async def agent_run(self, context: AgentRunContext): - request = context.request - user_text = _extract_user_text(request) - - tracer = trace.get_tracer(__name__) - with tracer.start_as_current_span(_root_span_name(api_config.provider)) as span: - span.set_attribute("user.request", user_text) - span.set_attribute("api.host", os.getenv("API_HOST", "github")) - span.set_attribute("model.name", api_config.model_name) - span.set_attribute("agent.name", TRIAGE_AGENT.name) - span.set_attribute("triage.languages", "en,es") - - try: - result = await Runner.run(TRIAGE_AGENT, input=user_text) - final_text = str(result.final_output or "") - span.set_attribute( - "agent.response", final_text[:500] if final_text else "" - ) - final_agent = getattr(result, "last_agent", None) - if final_agent and getattr(final_agent, "name", None): - span.set_attribute("agent.final", final_agent.name) - span.set_attribute("request.success", True) - _apply_weekend_semconv( - span, - user_text=user_text, - final_text=final_text, - conversation_id=context.conversation_id, 
- response_id=context.response_id, - final_agent_name=getattr(final_agent, "name", None), - success=True, - ) - logger.info("Weekend planning completed successfully") - except Exception as exc: # pragma: no cover - defensive logging path - span.record_exception(exc) - span.set_attribute("request.success", False) - span.set_attribute("error.type", exc.__class__.__name__) - logger.error("Error during weekend planning: %s", exc) - final_text = f"Error running agent: {exc}" - _apply_weekend_semconv( - span, - user_text=user_text, - final_text=final_text, - conversation_id=context.conversation_id, - response_id=context.response_id, - final_agent_name=None, - success=False, - ) - - if request.get("stream", False): - return _stream_final_text(final_text, context) - - response = OpenAIResponse( - metadata={}, - temperature=0.0, - top_p=0.0, - user="user", - id=context.response_id, - created_at=datetime.now(timezone.utc), - output=[ - ResponsesAssistantMessageItemResource( - id=context.id_generator.generate_message_id(), - status="completed", - content=[ItemContentOutputText(text=final_text, annotations=[])], - ) - ], - ) - return response - - -if __name__ == "__main__": - logger.setLevel(logging.INFO) - try: - if RUN_MODE == "demo": - run_content_capture_demo() - else: - WeekendPlannerContainer().run() - finally: - trace.get_tracer_provider().shutdown() diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/requirements.txt b/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/requirements.txt deleted file mode 100644 index faf4fd5fbe2c..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/requirements.txt +++ /dev/null @@ -1,13 +0,0 @@ -openai-agents>=0.3.3 -python-dotenv -openai>=1.42.0 -azure-identity>=1.17.0 -opentelemetry-api>=1.26.0 -opentelemetry-sdk>=1.26.0 -opentelemetry-exporter-otlp-proto-http>=1.26.0 -opentelemetry-exporter-otlp-proto-grpc>=1.26.0 
-opentelemetry-instrumentation-openai-agents-v2>=0.1.0
-rich>=13.9.0
-azure-ai-agentserver-core
-# Optional tracing exporters
-azure-monitor-opentelemetry-exporter>=1.0.0b16
diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/run.sh b/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/run.sh
deleted file mode 100644
index e3d097e14166..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/samples/bilingual_weekend_planner/run.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-# Simple local runner for the bilingual weekend planner container sample.
-# Examples:
-#   API_HOST=github GITHUB_TOKEN=... ./run.sh
-
-SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
-ROOT_DIR="$(cd "$SCRIPT_DIR/../../../.." && pwd)"
-
-export PYTHONPATH="$ROOT_DIR:${PYTHONPATH:-}"
-
-if [[ -d "$ROOT_DIR/.venv" ]]; then
-  # shellcheck disable=SC1090
-  source "$ROOT_DIR/.venv/bin/activate"
-fi
-
-PYTHON_BIN="${ROOT_DIR}/.venv/bin/python"
-if [[ ! -x "$PYTHON_BIN" ]]; then
-  PYTHON_BIN="python3"
-fi
-
-"$PYTHON_BIN" -u "$SCRIPT_DIR/main.py"
diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/mcp_simple/mcp_simple.py b/sdk/agentserver/azure-ai-agentserver-core/samples/mcp_simple/mcp_simple.py
deleted file mode 100644
index af9812826941..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/samples/mcp_simple/mcp_simple.py
+++ /dev/null
@@ -1,246 +0,0 @@
-# mypy: ignore-errors
-"""Custom MCP simple sample.
-
-This sample combines the patterns from:
-  - langgraph `mcp_simple` (uses MultiServerMCPClient to discover tools)
-  - `custom_mock_agent_test` (implements a custom FoundryCBAgent with streaming events)
-
-Goal: When invoked in stream mode, emit MCP list tools related stream events so a
-consumer (UI / CLI) can visualize tool enumeration plus a final assistant
-message. In non-stream mode, return a single aggregated response summarizing
-the tools.
- -Run: - python mcp_simple.py - -Then call (example): - curl -X POST http://localhost:8088/responses -H 'Content-Type: application/json' -d '{ - "agent": {"name": "custom_mcp", "type": "agent_reference"}, - "stream": true, - "input": "List the tools available" - }' -""" - -import datetime -import json -from typing import AsyncGenerator, List - -from langchain_mcp_adapters.client import MultiServerMCPClient - -from azure.ai.agentserver.core import AgentRunContext, FoundryCBAgent -from azure.ai.agentserver.core.models import Response as OpenAIResponse -from azure.ai.agentserver.core.models.projects import ( - ItemContentOutputText, - MCPListToolsItemResource, - MCPListToolsTool, - ResponseCompletedEvent, - ResponseCreatedEvent, - ResponseMCPListToolsCompletedEvent, - ResponseMCPListToolsInProgressEvent, - ResponseOutputItemAddedEvent, - ResponsesAssistantMessageItemResource, - ResponseTextDeltaEvent, - ResponseTextDoneEvent, -) - - -class MCPToolsAgent(FoundryCBAgent): - def __init__(self): # noqa: D401 - super().__init__() - # Lazy init; created on first request to avoid startup latency if unused - self._mcp_client = None - - async def _get_client(self) -> MultiServerMCPClient: - if self._mcp_client is None: - # Mirror langgraph sample server config - self._mcp_client = MultiServerMCPClient( - { - "mslearn": { - "url": "https://learn.microsoft.com/api/mcp", - "transport": "streamable_http", - } - } - ) - return self._mcp_client - - async def _list_tools(self) -> List[MCPListToolsTool]: - client = await self._get_client() - try: - raw_tools = await client.get_tools() - tools: List[MCPListToolsTool] = [] - for t in raw_tools: - # Support either dict-like or attribute-based tool objects - if isinstance(t, dict): - name = t.get("name", "unknown_tool") - description = t.get("description") - schema = ( - t.get("input_schema") - or t.get("schema") - or t.get("parameters") - or {} - ) - else: # Fallback to attribute access - name = getattr(t, "name", "unknown_tool") - 
description = getattr(t, "description", None) - schema = ( - getattr(t, "input_schema", None) - or getattr(t, "schema", None) - or getattr(t, "parameters", None) - or {} - ) - tools.append( - MCPListToolsTool( - name=name, - description=description, - input_schema=schema, - ) - ) - if not tools: - raise ValueError("No tools discovered from MCP server") - return tools - except Exception: # noqa: BLE001 - # Provide deterministic fallback so sample always works offline - return [ - MCPListToolsTool( - name="fallback_echo", - description="Echo back provided text.", - input_schema={ - "type": "object", - "properties": {"text": {"type": "string"}}, - "required": ["text"], - }, - ) - ] - - async def agent_run(self, context: AgentRunContext): # noqa: D401 - """Implements the FoundryCBAgent contract. - - Streaming path emits MCP list tools events + assistant summary. - Non-stream path returns aggregated assistant message. - """ - - tools = await self._list_tools() - - if context.stream: - - async def stream() -> AsyncGenerator: # noqa: D401 - # Initial empty response context (pattern from mock sample) - yield ResponseCreatedEvent(response=OpenAIResponse(output=[])) - - # Indicate listing in progress - yield ResponseMCPListToolsInProgressEvent() - - mcp_item = MCPListToolsItemResource( - id=context.id_generator.generate("mcp_list"), - server_label="mslearn", - tools=tools, - ) - yield ResponseOutputItemAddedEvent(output_index=0, item=mcp_item) - yield ResponseMCPListToolsCompletedEvent() - - # Assistant streaming summary - assistant_item = ResponsesAssistantMessageItemResource( - id=context.id_generator.generate_message_id(), - status="in_progress", - content=[ItemContentOutputText(text="", annotations=[])], - ) - yield ResponseOutputItemAddedEvent(output_index=1, item=assistant_item) - - summary_text = "Discovered MCP tools: " + ", ".join( - t.name for t in tools - ) - assembled = "" - parts = summary_text.split(" ") - for i, token in enumerate(parts): - piece = token if i 
== len(parts) - 1 else token + " " # keep spaces - assembled += piece - yield ResponseTextDeltaEvent( - output_index=1, content_index=0, delta=piece - ) - yield ResponseTextDoneEvent( - output_index=1, content_index=0, text=assembled - ) - - final_response = OpenAIResponse( - metadata={}, - temperature=0.0, - top_p=0.0, - user="user", - id=context.response_id, - created_at=datetime.datetime.now(), - output=[ - mcp_item, - ResponsesAssistantMessageItemResource( - id=assistant_item.id, - status="completed", - content=[ - ItemContentOutputText(text=assembled, annotations=[]) - ], - ), - ], - ) - yield ResponseCompletedEvent(response=final_response) - - return stream() - - # Non-stream path: single assistant message - # Build a JSON-serializable summary. Avoid dumping complex model/schema objects that - # can include non-serializable metaclass references (seen in error stacktrace). - safe_tools = [] - for t in tools: - schema = t.input_schema - # Simplify schema to plain dict/str; if not directly serializable, fallback to string. - if isinstance(schema, (str, int, float, bool)) or schema is None: - safe_schema = schema - elif isinstance(schema, dict): - # Shallow copy ensuring nested values are primitive or stringified - safe_schema = {} - for k, v in schema.items(): - if isinstance(v, (str, int, float, bool, type(None), list, dict)): - safe_schema[k] = v - else: - safe_schema[k] = str(v) - else: - safe_schema = str(schema) - safe_tools.append( - { - "name": t.name, - "description": t.description, - # Provide only top-level schema keys if dict. 
- "input_schema_keys": list(safe_schema.keys()) - if isinstance(safe_schema, dict) - else safe_schema, - } - ) - summary = { - "server_label": "mslearn", - "tool_count": len(tools), - "tools": safe_tools, - } - content = [ - ItemContentOutputText( - text="MCP tool listing completed.\n" + json.dumps(summary, indent=2), - annotations=[], - ) - ] - return OpenAIResponse( - metadata={}, - temperature=0.0, - top_p=0.0, - user="user", - id="id", - created_at=datetime.datetime.now(), - output=[ - ResponsesAssistantMessageItemResource( - id=context.id_generator.generate_message_id(), - status="completed", - content=content, - ) - ], - ) - - -my_agent = MCPToolsAgent() - -if __name__ == "__main__": - my_agent.run() diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/mcp_simple/requirements.txt b/sdk/agentserver/azure-ai-agentserver-core/samples/mcp_simple/requirements.txt deleted file mode 100644 index 525ee6af3f7d..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/samples/mcp_simple/requirements.txt +++ /dev/null @@ -1,2 +0,0 @@ -langchain-mcp-adapters==0.1.11 -azure-ai-agentserver-core diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/selfhosted_invocation/requirements.txt b/sdk/agentserver/azure-ai-agentserver-core/samples/selfhosted_invocation/requirements.txt new file mode 100644 index 000000000000..1840264735c0 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/samples/selfhosted_invocation/requirements.txt @@ -0,0 +1 @@ +azure-ai-agentserver-core[tracing] diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/selfhosted_invocation/selfhosted_invocation.py b/sdk/agentserver/azure-ai-agentserver-core/samples/selfhosted_invocation/selfhosted_invocation.py new file mode 100644 index 000000000000..cc87aed06cfd --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/samples/selfhosted_invocation/selfhosted_invocation.py @@ -0,0 +1,104 @@ +"""Self-hosted invocation agent with tracing using only the hosting package 
(Tier 3).
+
+Demonstrates implementing the invocations protocol directly with
+``AgentHost``, ``register_routes``, and ``TracingHelper`` — without
+the invocations protocol package. You handle invocation ID tracking,
+session resolution, tracing spans, and response headers yourself.
+
+This pattern is useful when:
+
+- You need a custom protocol not provided by the SDK
+- You want full control over endpoint routing, tracing, and request handling
+- You're learning how the protocol packages work internally
+
+Usage::
+
+    pip install azure-ai-agentserver-core[tracing]
+
+    # Enable tracing via App Insights connection string
+    export APPLICATIONINSIGHTS_CONNECTION_STRING="InstrumentationKey=..."
+
+    python selfhosted_invocation.py
+
+    # Invoke the agent
+    curl -X POST http://localhost:8088/invocations -H "Content-Type: application/json" -d '{"name": "Alice"}'
+    # -> {"greeting": "Hello, Alice!"}
+
+    # Health check (provided by AgentHost)
+    curl http://localhost:8088/healthy
+    # -> {"status": "healthy"}
+"""
+import contextlib
+import os
+import uuid
+from typing import Optional
+
+from starlette.requests import Request
+from starlette.responses import JSONResponse, Response
+from starlette.routing import Route
+
+from azure.ai.agentserver.core import AgentLogger, AgentHost, TracingHelper
+
+logger = AgentLogger.get()
+
+server = AgentHost()
+
+# Access the tracing helper from the server (None if tracing is disabled)
+tracing: Optional[TracingHelper] = server.tracing
+
+
+async def invoke(request: Request) -> Response:
+    """POST /invocations — handle an invocation request with tracing.
+
+    Demonstrates using TracingHelper to create spans, set attributes,
+    record errors, and propagate W3C trace context.
+ """ + invocation_id = request.headers.get("x-agent-invocation-id") or str(uuid.uuid4()) + session_id = ( + request.query_params.get("agent_session_id") + or os.environ.get("FOUNDRY_AGENT_SESSION_ID") + or str(uuid.uuid4()) + ) + + # Create a traced span that covers the entire request. + # When tracing is disabled, request_span yields None and is a no-op. + if tracing is not None: + span_cm = tracing.request_span( + headers=request.headers, + invocation_id=invocation_id, + span_operation="invoke_agent", + operation_name="invoke_agent", + session_id=session_id, + ) + else: + span_cm = contextlib.nullcontext(None) + + with span_cm as otel_span: + logger.info("Processing invocation %s in session %s", invocation_id, session_id) + + try: + data = await request.json() + name = data.get("name", "World") + result = {"greeting": f"Hello, {name}!"} + except Exception as exc: + # Record the error on the span if tracing is active + if tracing is not None and otel_span is not None: + tracing.record_error(otel_span, exc) + logger.error("Invocation %s failed: %s", invocation_id, exc) + raise + + return JSONResponse( + result, + headers={ + "x-agent-invocation-id": invocation_id, + "x-agent-session-id": session_id, + }, + ) + + +server.register_routes([ + Route("/invocations", invoke, methods=["POST"]), +]) + +if __name__ == "__main__": + server.run() diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/custom_mock_agent_test.py b/sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/custom_mock_agent_test.py deleted file mode 100644 index 3d4187a188f2..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/custom_mock_agent_test.py +++ /dev/null @@ -1,104 +0,0 @@ -# mypy: ignore-errors -import datetime - -from azure.ai.agentserver.core import AgentRunContext, FoundryCBAgent -from azure.ai.agentserver.core.models import Response as OpenAIResponse -from azure.ai.agentserver.core.models.projects import ( - 
ItemContentOutputText, - ResponseCompletedEvent, - ResponseCreatedEvent, - ResponseOutputItemAddedEvent, - ResponsesAssistantMessageItemResource, - ResponseTextDeltaEvent, - ResponseTextDoneEvent, -) - - -def stream_events(text: str, context: AgentRunContext): - item_id = context.id_generator.generate_message_id() - - assembled = "" - yield ResponseCreatedEvent(response=OpenAIResponse(output=[])) - yield ResponseOutputItemAddedEvent( - output_index=0, - item=ResponsesAssistantMessageItemResource( - id=item_id, - status="in_progress", - content=[ - ItemContentOutputText( - text="", - annotations=[], - ) - ], - ), - ) - for i, token in enumerate(text.split(" ")): - piece = token if i == len(text.split(" ")) - 1 else token + " " - assembled += piece - yield ResponseTextDeltaEvent(output_index=0, content_index=0, delta=piece) - # Done with text - yield ResponseTextDoneEvent(output_index=0, content_index=0, text=assembled) - yield ResponseCompletedEvent( - response=OpenAIResponse( - metadata={}, - temperature=0.0, - top_p=0.0, - user="me", - id=context.response_id, - created_at=datetime.datetime.now(), - output=[ - ResponsesAssistantMessageItemResource( - id=item_id, - status="completed", - content=[ - ItemContentOutputText( - text=assembled, - annotations=[], - ) - ], - ) - ], - ) - ) - - -async def agent_run(context: AgentRunContext): - agent = context.request.get("agent") - print(f"agent:{agent}") - - if context.stream: - return stream_events( - "I am mock agent with no intelligence in stream mode.", context - ) - - # Build assistant output content - output_content = [ - ItemContentOutputText( - text="I am mock agent with no intelligence.", - annotations=[], - ) - ] - - response = OpenAIResponse( - metadata={}, - temperature=0.0, - top_p=0.0, - user="me", - id=context.response_id, - created_at=datetime.datetime.now(), - output=[ - ResponsesAssistantMessageItemResource( - id=context.id_generator.generate_message_id(), - status="completed", - content=output_content, - 
) - ], - ) - return response - - -my_agent = FoundryCBAgent() -my_agent.agent_run = agent_run - -if __name__ == "__main__": - my_agent.run() diff --git a/sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/requirements.txt b/sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/requirements.txt deleted file mode 100644 index 3f2b4e9ee6b4..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/samples/simple_mock_agent/requirements.txt +++ /dev/null @@ -1 +0,0 @@ -azure-ai-agentserver-core diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/conftest.py b/sdk/agentserver/azure-ai-agentserver-core/tests/conftest.py index e84bdfff3bd7..e1e8e071bf7e 100644 --- a/sdk/agentserver/azure-ai-agentserver-core/tests/conftest.py +++ b/sdk/agentserver/azure-ai-agentserver-core/tests/conftest.py @@ -1,456 +1,26 @@ -""" -Pytest configuration for samples gated tests. - -This file automatically loads environment variables from .env file -and provides shared test fixtures. -""" - -import json -import logging -import os -import socket -import subprocess -import sys -import time -from pathlib import Path -from typing import Any, Dict, Optional - +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Shared fixtures for azure-ai-agentserver-core tests.""" import pytest -import requests -from dotenv import load_dotenv - -# Load .env file from project root or current directory -# conftest.py is at: src/adapter/python/tests/gated_test/conftest.py -# Need to go up 6 levels to reach project root -project_root = Path(__file__).parent.parent -env_paths = [ - project_root / ".env", # Project root - Path.cwd() / ".env", # Current working directory - Path(__file__).parent / ".env", # Test directory -] - -for env_path in env_paths: - if env_path.exists(): - load_dotenv(env_path, override=True) - break - -# Setup logging -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", - handlers=[logging.StreamHandler(sys.stdout)], -) -logger = logging.getLogger(__name__) - - -class AgentTestClient: - """Generic test client for all agent types.""" - - def __init__( - self, - sample_name: str, - script_name: str, - endpoint: str = "/responses", # Default endpoint - base_url: Optional[str] = None, - env_vars: Optional[Dict[str, str]] = None, - timeout: int = 120, - port: Optional[int] = None, - ): - self.sample_name = sample_name - self.script_name = script_name - self.endpoint = endpoint - self.timeout = timeout - - # Setup paths - self.project_root = project_root # Use already defined project_root - self.sample_dir = self.project_root / "samples" / sample_name - self.original_dir = os.getcwd() - - # Determine port assignment priority: explicit param > env override > random - if env_vars and env_vars.get("DEFAULT_AD_PORT"): - self.port = int(env_vars["DEFAULT_AD_PORT"]) - elif port is not None: - self.port = port - else: - self.port = self._find_free_port() - - # Configure base URL for client requests - self.base_url = (base_url or f"http://127.0.0.1:{self.port}").rstrip("/") - - # Setup environment - # Get Agent Framework configuration (new format) - 
azure_ai_project_endpoint = os.getenv("AZURE_AI_PROJECT_ENDPOINT", "") - azure_ai_model_deployment = os.getenv("AZURE_AI_MODEL_DEPLOYMENT_NAME", "") - agent_project_name = os.getenv("AGENT_PROJECT_NAME", "") - - # Get legacy Azure OpenAI configuration (for backward compatibility) - main_api_key = os.getenv("AZURE_OPENAI_API_KEY", "") - main_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT", "") - main_api_version = os.getenv("OPENAI_API_VERSION", "2025-03-01-preview") - embedding_api_version = os.getenv("AZURE_OPENAI_EMBEDDINGS_API_VERSION", "2024-02-01") - - self.env_vars = { - "PYTHONIOENCODING": "utf-8", - "LANG": "C.UTF-8", - "LC_ALL": "C.UTF-8", - "PYTHONUNBUFFERED": "1", - # Agent Framework environment variables (new) - "AZURE_AI_PROJECT_ENDPOINT": azure_ai_project_endpoint, - "AZURE_AI_MODEL_DEPLOYMENT_NAME": azure_ai_model_deployment, - "AGENT_PROJECT_NAME": agent_project_name, - # Legacy Azure OpenAI environment variables (for backward compatibility) - "AZURE_OPENAI_API_KEY": main_api_key, - "AZURE_OPENAI_ENDPOINT": main_endpoint, - "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME": os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT_NAME", ""), - "OPENAI_API_VERSION": main_api_version, - } - - # Auto-configure embeddings to use main config if not explicitly set - # This allows using the same Azure OpenAI resource for both chat and embeddings - self.env_vars["AZURE_OPENAI_EMBEDDINGS_API_KEY"] = os.getenv( - "AZURE_OPENAI_EMBEDDINGS_API_KEY", - main_api_key, # Fallback to main API key - ) - self.env_vars["AZURE_OPENAI_EMBEDDINGS_ENDPOINT"] = os.getenv( - "AZURE_OPENAI_EMBEDDINGS_ENDPOINT", - main_endpoint, # Fallback to main endpoint - ) - self.env_vars["AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME"] = os.getenv( - "AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME", "" - ) - self.env_vars["AZURE_OPENAI_EMBEDDINGS_API_VERSION"] = os.getenv( - "AZURE_OPENAI_EMBEDDINGS_API_VERSION", - embedding_api_version, # Fallback to main API version - ) - self.env_vars["AZURE_OPENAI_EMBEDDINGS_MODEL_NAME"] = 
os.getenv( - "AZURE_OPENAI_EMBEDDINGS_MODEL_NAME", - os.getenv("AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME", ""), # Fallback to deployment name - ) - - if env_vars: - self.env_vars.update(env_vars) - - # Ensure server picks the dynamically assigned port and clients know how to reach it - self.env_vars.setdefault("DEFAULT_AD_PORT", str(self.port)) - self.env_vars.setdefault("AGENT_BASE_URL", self.base_url) - - self.process = None - self.session = requests.Session() - - @staticmethod - def _find_free_port() -> int: - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: - sock.bind(("127.0.0.1", 0)) - return sock.getsockname()[1] - - def setup(self): - """Setup test environment.""" - os.chdir(self.sample_dir) - - logger.info( - "Configured %s to listen on %s", - self.sample_name, - f"{self.base_url}{self.endpoint}", - ) - - # Validate critical environment variables - # For Agent Framework samples, check new env vars first - required_vars = [] - if "agent_framework" in self.sample_name: - # Agent Framework samples use new format - required_vars = [ - "AZURE_AI_PROJECT_ENDPOINT", - "AZURE_AI_MODEL_DEPLOYMENT_NAME", - ] - else: - # Legacy samples use old format - required_vars = [ - "AZURE_OPENAI_API_KEY", - "AZURE_OPENAI_ENDPOINT", - "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME", - ] - - missing_vars = [] - for var in required_vars: - value = self.env_vars.get(var) or os.getenv(var) - if not value: - missing_vars.append(var) - else: - logger.debug(f"Environment variable {var} is set") - - if missing_vars: - logger.error(f"Missing required environment variables: {', '.join(missing_vars)}") - logger.error(f"Sample name: {self.sample_name}") - if "agent_framework" in self.sample_name: - logger.error("For Agent Framework samples, please set:") - logger.error(" - AZURE_AI_PROJECT_ENDPOINT") - logger.error(" - AZURE_AI_MODEL_DEPLOYMENT_NAME") - pytest.skip(f"Missing required environment variables: {', '.join(missing_vars)}") - - # Set environment variables - for key, value in 
self.env_vars.items(): - if value: # Only set if value is not empty - os.environ[key] = value - - # Start server - self.start_server() - - # Wait for server to be ready - if not self.wait_for_ready(): - self.cleanup() - logger.error(f"{self.sample_name} server failed to start") - pytest.skip(f"{self.sample_name} server failed to start") - - def start_server(self): - """Start the agent server.""" - logger.info( - "Starting %s server in %s on port %s", - self.sample_name, - self.sample_dir, - self.port, - ) - - env = os.environ.copy() - env.update(self.env_vars) - env["DEFAULT_AD_PORT"] = str(self.port) - env.setdefault("AGENT_BASE_URL", self.base_url) +import httpx - # Use unbuffered output to capture logs in real-time - self.process = subprocess.Popen( - [sys.executable, "-u", self.script_name], # -u for unbuffered output - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT, # Merge stderr into stdout - env=env, - text=True, - encoding="utf-8", - errors="replace", - bufsize=1, # Line buffered - ) - logger.info(f"Server process started with PID {self.process.pid}") +from azure.ai.agentserver.core import AgentHost - def wait_for_ready(self, max_attempts: int = 30, delay: float = 1.0) -> bool: - """Wait for server to be ready.""" - logger.info( - "Waiting for server to be ready at %s (max %s attempts)", - f"{self.base_url}{self.endpoint}", - max_attempts, - ) - for i in range(max_attempts): - # Check process status first - if self.process.poll() is not None: - # Process has terminated - read all output - stdout, stderr = self.process.communicate() - logger.error(f"Server terminated with code {self.process.returncode}") - logger.error("=== SERVER OUTPUT ===") - if stdout: - logger.error(stdout) - if stderr: - logger.error("=== STDERR ===") - logger.error(stderr) - return False +@pytest.fixture() +def agent() -> AgentHost: + """Create a bare AgentHost with no protocol routes. 
- # Read and log any available output - self._log_server_output() - - # Check health endpoint - try: - health_response = self.session.get(f"{self.base_url}/readiness", timeout=2) - if health_response.status_code == 200: - logger.info(f"Server ready after {i + 1} attempts") - return True - else: - logger.debug(f"Health check attempt {i + 1}: status {health_response.status_code}") - except Exception as e: - logger.debug(f"Health check attempt {i + 1} failed: {e}") - # After several failed attempts, show server output for debugging - if i > 5 and i % 5 == 0: - logger.warning(f"Server still not ready after {i + 1} attempts, checking output...") - self._log_server_output(force=True) - - time.sleep(delay) - - # Timeout reached - dump all server output - logger.error(f"Server failed to start within {max_attempts} attempts") - self._dump_server_output() - return False - - def cleanup(self): - """Cleanup resources.""" - if self.process: - try: - self.process.terminate() - self.process.wait(timeout=5) - except Exception: - self.process.kill() - - os.chdir(self.original_dir) - - def request( - self, - input_data: Any, - stream: bool = False, - timeout: Optional[int] = None, - debug: bool = False, - ) -> requests.Response: - """Send request to the server.""" - url = f"{self.base_url}{self.endpoint}" - timeout = timeout or self.timeout - - payload = {"input": input_data, "stream": stream} - - headers = { - "Content-Type": "application/json; charset=utf-8", - "Accept": "application/json; charset=utf-8", - } - - if debug: - logger.info(f">>> POST {url}") - logger.info(f">>> Headers: {headers}") - logger.info(f">>> Payload: {json.dumps(payload, indent=2)}") - - try: - response = self.session.post(url, json=payload, headers=headers, timeout=timeout, stream=stream) - - if debug: - logger.info(f"<<< Status: {response.status_code}") - logger.info(f"<<< Headers: {dict(response.headers)}") - - # For non-streaming responses, log the body - if not stream: - try: - content = 
response.json() - logger.info(f"<<< Body: {json.dumps(content, indent=2)}") - except (ValueError, requests.exceptions.JSONDecodeError): - logger.info(f"<<< Body: {response.text}") - - return response - - except Exception as e: - logger.error(f"Request failed: {e}") - self._log_server_output() - raise - - def _log_server_output(self, force=False): - """Log server output for debugging.""" - if self.process and self.process.poll() is None and hasattr(self.process, "stdout"): - try: - import select - - if hasattr(select, "select"): - # Use non-blocking read - ready, _, _ = select.select([self.process.stdout], [], [], 0.1) - if ready: - # Read available lines without blocking - import fcntl - import os as os_module - - # Set non-blocking mode - fd = self.process.stdout.fileno() - fl = fcntl.fcntl(fd, fcntl.F_GETFL) - fcntl.fcntl(fd, fcntl.F_SETFL, fl | os_module.O_NONBLOCK) - - try: - while True: - line = self.process.stdout.readline() - if not line: - break - line = line.strip() - if line: - if force or any( - keyword in line.lower() - for keyword in [ - "error", - "exception", - "traceback", - "failed", - ] - ): - logger.error(f"Server output: {line}") - else: - logger.info(f"Server output: {line}") - except BlockingIOError: - pass # No more data available - except Exception as e: - if force: - logger.debug(f"Could not read server output: {e}") - - def _dump_server_output(self): - """Dump all remaining server output.""" - if self.process: - try: - # Try to read any remaining output - if self.process.poll() is None: - # Process still running, terminate and get output - self.process.terminate() - try: - stdout, stderr = self.process.communicate(timeout=5) - except subprocess.TimeoutExpired: - self.process.kill() - stdout, stderr = self.process.communicate() - else: - stdout, stderr = self.process.communicate() - - if stdout: - logger.error(f"=== FULL SERVER OUTPUT ===\n{stdout}") - if stderr: - logger.error(f"=== FULL SERVER STDERR ===\n{stderr}") - except Exception as 
e: - logger.error(f"Failed to dump server output: {e}") - - -@pytest.fixture -def basic_client(): - """Fixture for basic agent tests.""" - client = AgentTestClient( - sample_name="agent_framework/basic_simple", - script_name="minimal_example.py", - endpoint="/responses", - timeout=60, - ) - client.setup() - yield client - client.cleanup() - - -@pytest.fixture -def workflow_client(): - """Fixture for workflow agent tests (reflection pattern with Worker + Reviewer).""" - client = AgentTestClient( - sample_name="agent_framework/workflow_agent_simple", - script_name="workflow_agent_simple.py", - endpoint="/responses", # Changed from /runs to /responses - timeout=600, # Increased timeout for workflow agent (reflection loop may need multiple iterations) - ) - client.setup() - yield client - client.cleanup() - - -@pytest.fixture -def mcp_client(): - """Fixture for MCP simple agent tests (uses Microsoft Learn MCP, no auth required).""" - client = AgentTestClient( - sample_name="agent_framework/mcp_simple", - script_name="mcp_simple.py", - endpoint="/responses", # Changed from /runs to /responses - timeout=120, - ) - client.setup() - yield client - client.cleanup() + Tracing is disabled to avoid requiring opentelemetry in the test env. 
+ """ + return AgentHost() -@pytest.fixture -def mcp_apikey_client(): - """Fixture for MCP API Key agent tests (uses GitHub MCP, requires GITHUB_TOKEN).""" - client = AgentTestClient( - sample_name="agent_framework/mcp_apikey", - script_name="mcp_apikey.py", - endpoint="/responses", # Changed from /runs to /responses - timeout=120, - env_vars={"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN", "")}, +@pytest.fixture() +def client(agent: AgentHost) -> httpx.AsyncClient: + """Create an httpx.AsyncClient bound to the AgentHost's ASGI app.""" + return httpx.AsyncClient( + transport=httpx.ASGITransport(app=agent.app), + base_url="http://testserver", ) - client.setup() - yield client - client.cleanup() diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/env-template b/sdk/agentserver/azure-ai-agentserver-core/tests/env-template deleted file mode 100644 index 33c60226b90b..000000000000 --- a/sdk/agentserver/azure-ai-agentserver-core/tests/env-template +++ /dev/null @@ -1,31 +0,0 @@ -# ===== Agent Framework Configuration (NEW - Required for agent_framework samples) ===== -# Required for all Agent Framework samples (basic_simple, mcp_simple, mcp_apikey, workflow_agent_simple) -AZURE_AI_PROJECT_ENDPOINT=https://.region.project.azure.ai/ -AZURE_AI_MODEL_DEPLOYMENT_NAME=gpt-4o - -# Optional: Azure AI Project resource ID for telemetry -# Format: /subscriptions//resourceGroups//providers/Microsoft.MachineLearningServices/workspaces/ -AGENT_PROJECT_NAME= - -# GitHub Token for MCP samples (mcp_simple, mcp_apikey) -# Get from: https://github.com/settings/tokens -GITHUB_TOKEN=your-github-token-here - -# ===== Legacy Azure OpenAI Configuration (For backward compatibility) ===== -AZURE_OPENAI_API_KEY=your-api-key-here -AZURE_OPENAI_ENDPOINT=https://your-endpoint.openai.azure.com/ -AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=gpt-4o -OPENAI_API_VERSION=2025-03-01-preview - -# Azure OpenAI Embeddings Configuration (for RAG tests) -# If not set, will use the same values as Chat API 
-AZURE_OPENAI_EMBEDDINGS_API_KEY=your-embeddings-api-key-here
-AZURE_OPENAI_EMBEDDINGS_ENDPOINT=https://your-endpoint.openai.azure.com/
-AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME=text-embedding-ada-002
-AZURE_OPENAI_EMBEDDINGS_API_VERSION=2025-03-01-preview
-
-# Note:
-# - Copy this file to .env and fill in your actual values
-# - Never commit .env file to git (it's in .gitignore)
-# - In CI/CD, these values are loaded from GitHub Secrets
-
diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_custom.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_custom.py
deleted file mode 100644
index f8f2075e22e5..000000000000
--- a/sdk/agentserver/azure-ai-agentserver-core/tests/test_custom.py
+++ /dev/null
@@ -1,298 +0,0 @@
-#!/usr/bin/env python3
-"""
-Custom agents samples gated test.
-
-This module tests all Custom agent samples with parametrized test cases.
-Each sample gets its own test class with multiple test scenarios.
-"""
-
-import os
-import socket
-import subprocess
-import sys
-import time
-from pathlib import Path
-from typing import Any
-
-import pytest
-import requests
-
-# Add the project root to the path
-project_root = Path(__file__).parent.parent
-sys.path.insert(0, str(project_root))
-
-
-class BaseCustomAgentTest:
-    """Base class for Custom agent sample tests with common utilities."""
-
-    def __init__(self, sample_name: str, script_name: str):
-        """
-        Initialize test configuration.
-
-        Args:
-            sample_name: Name of the sample directory (e.g., 'simple_mock_agent')
-            script_name: Name of the Python script to run (e.g., 'custom_mock_agent_test.py')
-        """
-        self.sample_name = sample_name
-        self.script_name = script_name
-        self.sample_dir = project_root / "samples" / sample_name
-        self.port = self._find_free_port()
-        self.base_url = f"http://127.0.0.1:{self.port}"
-        self.responses_endpoint = f"{self.base_url}/responses"
-        self.process = None
-        self.original_dir = os.getcwd()
-
-    def setup(self):
-        """Set up environment (dependencies are pre-installed in CI/CD)."""
-        os.chdir(self.sample_dir)
-
-    def start_server(self):
-        """Start the agent server in background."""
-        # Prepare environment with UTF-8 encoding to handle emoji in agent output
-        env = os.environ.copy()
-        env["PYTHONIOENCODING"] = "utf-8"
-        env["DEFAULT_AD_PORT"] = str(self.port)
-        env.setdefault("AGENT_BASE_URL", self.base_url)
-
-        # Use subprocess.DEVNULL to avoid buffering issues
-        self.process = subprocess.Popen(
-            [sys.executable, self.script_name],
-            stdout=subprocess.DEVNULL,
-            stderr=subprocess.DEVNULL,
-            env=env,
-        )
-
-    def wait_for_ready(self, max_attempts: int = 30, delay: float = 1.0) -> bool:
-        """Wait for the server to be ready."""
-        for _i in range(max_attempts):
-            # Check if process is still running
-            if self.process and self.process.poll() is not None:
-                # Process has terminated
-                print(f"Server process terminated unexpectedly with exit code {self.process.returncode}")
-                return False
-
-            try:
-                response = requests.get(f"{self.base_url}/readiness", timeout=1)
-                if response.status_code == 200:
-                    return True
-            except requests.exceptions.RequestException:
-                pass
-
-            try:
-                response = requests.get(self.base_url, timeout=1)
-                if response.status_code in [200, 404]:
-                    return True
-            except requests.exceptions.RequestException:
-                pass
-
-            time.sleep(delay)
-
-        # Server didn't start - print diagnostics
-        if self.process:
-            self.process.terminate()
-            stdout, stderr = self.process.communicate(timeout=5)
-            print(f"Server failed to start. Logs:\n{stdout}\nErrors:\n{stderr}")
-
-        return False
-
-    def send_request(self, input_data: Any, stream: bool = False, timeout: int = 30) -> requests.Response:
-        """
-        Send a request to the agent.
-
-        Args:
-            input_data: Input to send (string or structured message)
-            stream: Whether to use streaming
-            timeout: Request timeout in seconds
-
-        Returns:
-            Response object
-        """
-        payload = {
-            "agent": {"name": "mock_agent", "type": "agent_reference"},
-            "input": input_data,
-            "stream": stream,
-        }
-
-        # Note: Only set stream parameter for requests.post if streaming is requested
-        # Otherwise, let requests handle response body reading with timeout
-        if stream:
-            return requests.post(self.responses_endpoint, json=payload, timeout=timeout, stream=True)
-        else:
-            return requests.post(self.responses_endpoint, json=payload, timeout=timeout)
-
-    def cleanup(self):
-        """Clean up resources and restore directory."""
-        if self.process:
-            try:
-                self.process.terminate()
-                self.process.wait(timeout=5)
-            except Exception:
-                self.process.kill()
-
-        os.chdir(self.original_dir)
-
-    @staticmethod
-    def _find_free_port() -> int:
-        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
-            sock.bind(("127.0.0.1", 0))
-            return sock.getsockname()[1]
-
-
-class TestSimpleMockAgent:
-    """Test suite for Simple Mock Agent - uses shared server."""
-
-    @pytest.fixture(scope="class")
-    def mock_server(self):
-        """Shared server instance for all mock agent tests."""
-        tester = BaseCustomAgentTest("simple_mock_agent", "custom_mock_agent_test.py")
-        tester.setup()
-        tester.start_server()
-
-        if not tester.wait_for_ready():
-            tester.cleanup()
-            pytest.fail("Simple Mock Agent server failed to start")
-
-        yield tester
-        tester.cleanup()
-
-    @pytest.mark.parametrize(
-        "input_text,expected_keywords,description",
-        [
-            ("Hello, mock agent!", ["mock"], "simple_greeting"),
-            ("Test message", ["mock"], "test_message"),
-            ("What can you do?", ["mock"], "capability_query"),
-        ],
-    )
-    def test_mock_agent_queries(self, mock_server, input_text: str, expected_keywords: list, description: str):
-        """Test mock agent with various queries."""
-        response = mock_server.send_request(input_text, stream=False)
-
-        assert response.status_code == 200, f"Expected 200, got {response.status_code}"
-
-        response_text = response.text.lower()
-        found_keyword = any(kw.lower() in response_text for kw in expected_keywords)
-        assert found_keyword, f"Expected one of {expected_keywords} in response"
-
-    def test_streaming_response(self, mock_server):
-        """Test mock agent with streaming response."""
-        response = mock_server.send_request("Hello, streaming test!", stream=True)
-
-        assert response.status_code == 200, f"Expected 200, got {response.status_code}"
-
-        # Verify we can read streaming data
-        lines_read = 0
-        for line in response.iter_lines():
-            if line:
-                lines_read += 1
-                if lines_read >= 3:
-                    break
-
-        assert lines_read > 0, "Expected to read at least one line from streaming response"
-
-
-@pytest.mark.skip
-class TestMcpSimple:
-    """Test suite for Custom MCP Simple - uses Microsoft Learn MCP."""
-
-    @pytest.fixture(scope="class")
-    def mcp_server(self):
-        """Shared server instance for all MCP Simple tests."""
-        tester = BaseCustomAgentTest("mcp_simple", "mcp_simple.py")
-        tester.setup()
-        tester.start_server()
-
-        if not tester.wait_for_ready():
-            tester.cleanup()
-            pytest.fail("MCP Simple server failed to start")
-
-        yield tester
-        tester.cleanup()
-
-    @pytest.mark.parametrize(
-        "input_text,expected_keywords,description",
-        [
-            (
-                "What Azure services can I use for image generation?",
-                ["image", "generation", "azure"],
-                "image_generation",
-            ),
-            (
-                "Show me documentation about Azure App Service",
-                ["app", "service", "azure"],
-                "app_service_docs",
-            ),
-        ],
-    )
-    def test_mcp_operations(self, mcp_server, input_text: str, expected_keywords: list, description: str):
-        """Test MCP Simple with Microsoft Learn queries."""
-        response = mcp_server.send_request(input_text, stream=False, timeout=60)
-
-        assert response.status_code == 200, f"Expected 200, got {response.status_code}"
-
-        response_text = response.text.lower()
-        found_keyword = any(kw.lower() in response_text for kw in expected_keywords)
-        assert found_keyword, f"Expected one of {expected_keywords} in response"
-
-
-@pytest.mark.skip
-class TestBilingualWeekendPlanner:
-    """Test suite for the bilingual weekend planner custom sample."""
-
-    @pytest.fixture(scope="class")
-    def weekend_planner_server(self):
-        """Shared server fixture for bilingual weekend planner tests."""
-        pytest.importorskip("azure.identity")
-        pytest.importorskip("agents")
-        pytest.importorskip("openai")
-
-        tester = BaseCustomAgentTest("bilingual_weekend_planner", "main.py")
-        tester.setup()
-
-        env_overrides = {
-            "API_HOST": "github",
-            "GITHUB_TOKEN": os.environ.get("GITHUB_TOKEN", "unit-test-token"),
-            "GITHUB_OPENAI_BASE_URL": os.environ.get("GITHUB_OPENAI_BASE_URL", "http://127.0.0.1:65535"),
-            "WEEKEND_PLANNER_MODE": "container",
-        }
-        original_env = {key: os.environ.get(key) for key in env_overrides}
-        os.environ.update(env_overrides)
-
-        try:
-            tester.start_server()
-
-            if not tester.wait_for_ready(max_attempts=60, delay=1.0):
-                tester.cleanup()
-                pytest.fail("Bilingual weekend planner server failed to start")
-
-            yield tester
-        finally:
-            tester.cleanup()
-            for key, value in original_env.items():
-                if value is None:
-                    os.environ.pop(key, None)
-                else:
-                    os.environ[key] = value
-
-    def test_offline_planner_response(self, weekend_planner_server):
-        """Verify the planner responds with a graceful error when the model is unreachable."""
-        response = weekend_planner_server.send_request("Plan my weekend in Seattle", stream=False, timeout=60)
-
-        assert response.status_code == 200, f"Expected 200, got {response.status_code}"
-
-        response_text = response.text.lower()
-        assert "error running agent" in response_text
-
-    def test_streaming_offline_response(self, weekend_planner_server):
-        """Verify streaming responses deliver data even when the model call fails."""
-        response = weekend_planner_server.send_request("Planifica mi fin de semana en Madrid", stream=True, timeout=60)
-
-        assert response.status_code == 200, f"Expected 200, got {response.status_code}"
-
-        lines_read = 0
-        for line in response.iter_lines():
-            if line:
-                lines_read += 1
-                if lines_read >= 3:
-                    break
-
-        assert lines_read > 0, "Expected to read at least one line from streaming response"
diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_edge_cases.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_edge_cases.py
new file mode 100644
index 000000000000..009cf7f9477b
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-core/tests/test_edge_cases.py
@@ -0,0 +1,139 @@
+# ---------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# ---------------------------------------------------------
+"""Hosting-specific edge-case tests."""
+import logging
+import os
+from unittest import mock
+
+import pytest
+import httpx
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.core._config import resolve_log_level
+from azure.ai.agentserver.core._constants import Constants
+
+
+# ------------------------------------------------------------------ #
+# POST /healthy → 405
+# ------------------------------------------------------------------ #
+
+
+@pytest.fixture()
+def client() -> httpx.AsyncClient:
+    agent = AgentHost()
+    return httpx.AsyncClient(
+        transport=httpx.ASGITransport(app=agent.app),
+        base_url="http://testserver",
+    )
+
+
+@pytest.mark.asyncio
+async def test_post_healthy_returns_405(client: httpx.AsyncClient) -> None:
+    """POST /healthy is method-not-allowed."""
+    resp = await client.post("/healthy")
+    assert resp.status_code == 405
+
+
+# ------------------------------------------------------------------ #
+# Log level via constructor
+# ------------------------------------------------------------------ #
+
+
+class TestLogLevelConstructor:
+    """Log-level configuration via the AgentHost constructor."""
+
+    def test_log_level_via_constructor(self) -> None:
+        AgentHost(log_level="DEBUG")  # side-effect: configures logger
+        lib_logger = logging.getLogger("azure.ai.agentserver")
+        assert lib_logger.level == logging.DEBUG
+
+    def test_log_level_warning_via_constructor(self) -> None:
+        AgentHost(log_level="WARNING")  # side-effect: configures logger
+        lib_logger = logging.getLogger("azure.ai.agentserver")
+        assert lib_logger.level == logging.WARNING
+
+    def test_log_level_case_insensitive(self) -> None:
+        AgentHost(log_level="error")  # side-effect: configures logger
+        lib_logger = logging.getLogger("azure.ai.agentserver")
+        assert lib_logger.level == logging.ERROR
+
+
+# ------------------------------------------------------------------ #
+# Log level via env var
+# ------------------------------------------------------------------ #
+
+
+class TestLogLevelEnvVar:
+    """Log-level configuration via the AGENT_LOG_LEVEL environment variable."""
+
+    def test_log_level_via_env_var(self) -> None:
+        with mock.patch.dict(os.environ, {Constants.AGENT_LOG_LEVEL: "CRITICAL"}):
+            AgentHost()  # side-effect: configures logger
+            lib_logger = logging.getLogger("azure.ai.agentserver")
+            assert lib_logger.level == logging.CRITICAL
+
+    def test_constructor_overrides_env_var(self) -> None:
+        with mock.patch.dict(os.environ, {Constants.AGENT_LOG_LEVEL: "CRITICAL"}):
+            AgentHost(log_level="DEBUG")  # side-effect: configures logger
+            lib_logger = logging.getLogger("azure.ai.agentserver")
+            assert lib_logger.level == logging.DEBUG
+
+
+# ------------------------------------------------------------------ #
+# Invalid log level raises
+# ------------------------------------------------------------------ #
+
+
+class TestInvalidLogLevel:
+    """Invalid log levels are rejected with ValueError."""
+
+    def test_invalid_log_level_raises(self) -> None:
+        with pytest.raises(ValueError, match="Invalid log level"):
+            AgentHost(log_level="TRACE")
+
+    def test_invalid_log_level_via_env_raises(self) -> None:
+        with mock.patch.dict(os.environ, {Constants.AGENT_LOG_LEVEL: "VERBOSE"}):
+            with pytest.raises(ValueError, match="Invalid log level"):
+                AgentHost()
+
+
+# ------------------------------------------------------------------ #
+# resolve_log_level unit tests
+# ------------------------------------------------------------------ #
+
+
+class TestResolveLogLevel:
+    """Unit tests for resolve_log_level()."""
+
+    def test_explicit_debug(self) -> None:
+        assert resolve_log_level("DEBUG") == "DEBUG"
+
+    def test_explicit_info(self) -> None:
+        assert resolve_log_level("INFO") == "INFO"
+
+    def test_explicit_warning(self) -> None:
+        assert resolve_log_level("WARNING") == "WARNING"
+
+    def test_explicit_error(self) -> None:
+        assert resolve_log_level("ERROR") == "ERROR"
+
+    def test_explicit_critical(self) -> None:
+        assert resolve_log_level("CRITICAL") == "CRITICAL"
+
+    def test_case_insensitive(self) -> None:
+        assert resolve_log_level("debug") == "DEBUG"
+
+    def test_invalid_raises(self) -> None:
+        with pytest.raises(ValueError, match="Invalid log level"):
+            resolve_log_level("TRACE")
+
+    def test_env_var_fallback(self) -> None:
+        with mock.patch.dict(os.environ, {Constants.AGENT_LOG_LEVEL: "ERROR"}):
+            assert resolve_log_level(None) == "ERROR"
+
+    def test_default_info(self) -> None:
+        env = os.environ.copy()
+        env.pop(Constants.AGENT_LOG_LEVEL, None)
+        with mock.patch.dict(os.environ, env, clear=True):
+            assert resolve_log_level(None) == "INFO"
diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_graceful_shutdown.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_graceful_shutdown.py
new file mode 100644
index 000000000000..810999c97afc
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-core/tests/test_graceful_shutdown.py
@@ -0,0 +1,333 @@
+# ---------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# ---------------------------------------------------------
+"""Tests for graceful-shutdown configuration, lifecycle, and handler dispatch."""
+import asyncio
+import logging
+import os
+from unittest import mock
+
+import pytest
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.core._config import resolve_graceful_shutdown_timeout
+from azure.ai.agentserver.core._constants import Constants
+
+
+# ------------------------------------------------------------------ #
+# Timeout resolution: explicit > env > default (30s)
+# ------------------------------------------------------------------ #
+
+
+class TestResolveGracefulShutdownTimeout:
+    """Tests for resolve_graceful_shutdown_timeout()."""
+
+    def test_explicit_wins(self) -> None:
+        assert resolve_graceful_shutdown_timeout(10) == 10
+
+    def test_env_var(self) -> None:
+        with mock.patch.dict(os.environ, {"AGENT_GRACEFUL_SHUTDOWN_TIMEOUT": "45"}):
+            assert resolve_graceful_shutdown_timeout(None) == 45
+
+    def test_default(self) -> None:
+        env = os.environ.copy()
+        env.pop("AGENT_GRACEFUL_SHUTDOWN_TIMEOUT", None)
+        with mock.patch.dict(os.environ, env, clear=True):
+            assert resolve_graceful_shutdown_timeout(None) == Constants.DEFAULT_GRACEFUL_SHUTDOWN_TIMEOUT
+
+    def test_invalid_env_var_raises(self) -> None:
+        with mock.patch.dict(os.environ, {"AGENT_GRACEFUL_SHUTDOWN_TIMEOUT": "abc"}):
+            with pytest.raises(ValueError, match="Invalid value for AGENT_GRACEFUL_SHUTDOWN_TIMEOUT"):
+                resolve_graceful_shutdown_timeout(None)
+
+    def test_non_int_explicit_raises(self) -> None:
+        with pytest.raises(ValueError, match="expected an integer"):
+            resolve_graceful_shutdown_timeout("ten")  # type: ignore[arg-type]
+
+    def test_negative_explicit_clamps_to_zero(self) -> None:
+        assert resolve_graceful_shutdown_timeout(-5) == 0
+
+    def test_zero_explicit(self) -> None:
+        assert
resolve_graceful_shutdown_timeout(0) == 0 + + +# ------------------------------------------------------------------ # +# Constants existence +# ------------------------------------------------------------------ # + + +class TestConstants: + """Verify the graceful-shutdown constants exist on Constants.""" + + def test_timeout_env_var_name(self) -> None: + assert Constants.AGENT_GRACEFUL_SHUTDOWN_TIMEOUT == "AGENT_GRACEFUL_SHUTDOWN_TIMEOUT" + + def test_default_timeout_value(self) -> None: + assert Constants.DEFAULT_GRACEFUL_SHUTDOWN_TIMEOUT == 30 + + +# ------------------------------------------------------------------ # +# Hypercorn config receives graceful_timeout +# ------------------------------------------------------------------ # + + +class TestHypercornConfig: + """Verify _build_hypercorn_config passes the resolved timeout to Hypercorn.""" + + def test_sync_run_passes_timeout(self) -> None: + agent = AgentHost(graceful_shutdown_timeout=15) + config = agent._build_hypercorn_config("127.0.0.1", 8000) + assert config.graceful_timeout == 15.0 + + def test_async_run_passes_timeout(self) -> None: + agent = AgentHost(graceful_shutdown_timeout=25) + config = agent._build_hypercorn_config("0.0.0.0", 9000) + assert config.graceful_timeout == 25.0 + + def test_default_timeout_in_config(self) -> None: + env = os.environ.copy() + env.pop("AGENT_GRACEFUL_SHUTDOWN_TIMEOUT", None) + with mock.patch.dict(os.environ, env, clear=True): + agent = AgentHost() + config = agent._build_hypercorn_config("0.0.0.0", 8088) + assert config.graceful_timeout == float(Constants.DEFAULT_GRACEFUL_SHUTDOWN_TIMEOUT) + + +# ------------------------------------------------------------------ # +# Lifespan shutdown logging +# ------------------------------------------------------------------ # + + +@pytest.mark.asyncio +async def test_lifespan_shutdown_logs(caplog: pytest.LogCaptureFixture) -> None: + """The lifespan shutdown phase logs the graceful timeout.""" + agent = 
AgentHost(graceful_shutdown_timeout=7) + + # Drive the lifespan manually via the ASGI interface. + scope = {"type": "lifespan"} + startup_complete = asyncio.Event() + shutdown_complete = asyncio.Event() + + async def receive(): + if not startup_complete.is_set(): + startup_complete.set() + return {"type": "lifespan.startup"} + await asyncio.sleep(0) + return {"type": "lifespan.shutdown"} + + async def send(message): + if message["type"] == "lifespan.shutdown.complete": + shutdown_complete.set() + + with caplog.at_level(logging.INFO, logger="azure.ai.agentserver"): + await agent.app(scope, receive, send) + + assert any("shutting down" in r.message.lower() for r in caplog.records) + assert any("7" in r.message for r in caplog.records) + + +# ------------------------------------------------------------------ # +# Shutdown handler decorator +# ------------------------------------------------------------------ # + + +@pytest.mark.asyncio +async def test_shutdown_handler_called() -> None: + """The function registered via @shutdown_handler is called during shutdown.""" + agent = AgentHost(graceful_shutdown_timeout=5) + called = False + + @agent.shutdown_handler + async def on_shutdown(): + nonlocal called + called = True + + # Drive lifespan + scope = {"type": "lifespan"} + startup_done = asyncio.Event() + shutdown_done = asyncio.Event() + + async def receive(): + if not startup_done.is_set(): + startup_done.set() + return {"type": "lifespan.startup"} + await asyncio.sleep(0) + return {"type": "lifespan.shutdown"} + + async def send(message): + if message["type"] == "lifespan.shutdown.complete": + shutdown_done.set() + + await agent.app(scope, receive, send) + assert called is True + + +@pytest.mark.asyncio +async def test_default_shutdown_is_noop() -> None: + """When no shutdown handler is registered, shutdown succeeds silently.""" + agent = AgentHost(graceful_shutdown_timeout=5) + + scope = {"type": "lifespan"} + startup_done = asyncio.Event() + shutdown_done = 
asyncio.Event() + + async def receive(): + if not startup_done.is_set(): + startup_done.set() + return {"type": "lifespan.startup"} + await asyncio.sleep(0) + return {"type": "lifespan.shutdown"} + + async def send(message): + if message["type"] == "lifespan.shutdown.complete": + shutdown_done.set() + + # Should not raise + await agent.app(scope, receive, send) + assert shutdown_done.is_set() + + +# ------------------------------------------------------------------ # +# Failing shutdown is logged, not raised +# ------------------------------------------------------------------ # + + +@pytest.mark.asyncio +async def test_failing_shutdown_is_logged(caplog: pytest.LogCaptureFixture) -> None: + """A shutdown handler that raises is logged but does not crash the server.""" + agent = AgentHost(graceful_shutdown_timeout=5) + + @agent.shutdown_handler + async def on_shutdown(): + raise RuntimeError("shutdown kaboom") + + scope = {"type": "lifespan"} + startup_done = asyncio.Event() + + async def receive(): + if not startup_done.is_set(): + startup_done.set() + return {"type": "lifespan.startup"} + await asyncio.sleep(0) + return {"type": "lifespan.shutdown"} + + sent_messages: list[dict] = [] + + async def send(message): + sent_messages.append(message) + + with caplog.at_level(logging.ERROR, logger="azure.ai.agentserver"): + await agent.app(scope, receive, send) + + # The error should be logged + assert any("on_shutdown" in r.message.lower() or "error" in r.message.lower() for r in caplog.records) + # Server should still complete shutdown + assert any(m["type"] == "lifespan.shutdown.complete" for m in sent_messages) + + +# ------------------------------------------------------------------ # +# Slow shutdown is cancelled with warning +# ------------------------------------------------------------------ # + + +@pytest.mark.asyncio +async def test_slow_shutdown_cancelled_with_warning(caplog: pytest.LogCaptureFixture) -> None: + """A shutdown handler exceeding the timeout is 
cancelled and a warning is logged."""
+    agent = AgentHost(graceful_shutdown_timeout=1)
+
+    @agent.shutdown_handler
+    async def on_shutdown():
+        await asyncio.sleep(60)  # way longer than the 1s timeout
+
+    scope = {"type": "lifespan"}
+    startup_done = asyncio.Event()
+
+    async def receive():
+        if not startup_done.is_set():
+            startup_done.set()
+            return {"type": "lifespan.startup"}
+        await asyncio.sleep(0)
+        return {"type": "lifespan.shutdown"}
+
+    sent_messages: list[dict] = []
+
+    async def send(message):
+        sent_messages.append(message)
+
+    with caplog.at_level(logging.WARNING, logger="azure.ai.agentserver"):
+        await agent.app(scope, receive, send)
+
+    assert any("did not complete" in r.message.lower() or "timeout" in r.message.lower() for r in caplog.records)
+    assert any(m["type"] == "lifespan.shutdown.complete" for m in sent_messages)
+
+
+# ------------------------------------------------------------------ #
+# Fast shutdown completes normally
+# ------------------------------------------------------------------ #
+
+
+@pytest.mark.asyncio
+async def test_fast_shutdown_completes_normally() -> None:
+    """A shutdown handler that finishes within the timeout completes normally."""
+    agent = AgentHost(graceful_shutdown_timeout=10)
+    completed = False
+
+    @agent.shutdown_handler
+    async def on_shutdown():
+        nonlocal completed
+        await asyncio.sleep(0.01)
+        completed = True
+
+    scope = {"type": "lifespan"}
+    startup_done = asyncio.Event()
+
+    async def receive():
+        if not startup_done.is_set():
+            startup_done.set()
+            return {"type": "lifespan.startup"}
+        await asyncio.sleep(0)
+        return {"type": "lifespan.shutdown"}
+
+    sent_messages: list[dict] = []
+
+    async def send(message):
+        sent_messages.append(message)
+
+    await agent.app(scope, receive, send)
+    assert completed is True
+    assert any(m["type"] == "lifespan.shutdown.complete" for m in sent_messages)
+
+
+# ------------------------------------------------------------------ #
+# Zero timeout skips the shutdown handler
+# ------------------------------------------------------------------ # + + +@pytest.mark.asyncio +async def test_zero_timeout_skips_shutdown_handler() -> None: + """When graceful_shutdown_timeout=0, the shutdown handler is skipped.""" + agent = AgentHost(graceful_shutdown_timeout=0) + completed = False + + @agent.shutdown_handler + async def on_shutdown(): + nonlocal completed + completed = True + + scope = {"type": "lifespan"} + startup_done = asyncio.Event() + + async def receive(): + if not startup_done.is_set(): + startup_done.set() + return {"type": "lifespan.startup"} + await asyncio.sleep(0) + return {"type": "lifespan.shutdown"} + + sent_messages: list[dict] = [] + + async def send(message): + sent_messages.append(message) + + await agent.app(scope, receive, send) + assert completed is False # handler was NOT called diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_health.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_health.py new file mode 100644 index 000000000000..39636ffb010c --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/tests/test_health.py @@ -0,0 +1,53 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Tests for the GET /healthy health-check endpoint.""" +import pytest +import httpx + +from azure.ai.agentserver.core import AgentHost + + +@pytest.fixture() +def client() -> httpx.AsyncClient: + agent = AgentHost() + return httpx.AsyncClient( + transport=httpx.ASGITransport(app=agent.app), + base_url="http://testserver", + ) + + +@pytest.mark.asyncio +async def test_healthy_returns_200(client: httpx.AsyncClient) -> None: + """GET /healthy returns 200 with the expected JSON body.""" + resp = await client.get("/healthy") + assert resp.status_code == 200 + assert resp.json() == {"status": "healthy"} + + +@pytest.mark.asyncio +async def test_healthy_content_type(client: httpx.AsyncClient) -> None: + """GET /healthy returns application/json content type.""" + resp = await client.get("/healthy") + assert "application/json" in resp.headers["content-type"] + + +@pytest.mark.asyncio +async def test_healthy_post_returns_405(client: httpx.AsyncClient) -> None: + """POST /healthy is not allowed — only GET is registered.""" + resp = await client.post("/healthy") + assert resp.status_code == 405 + + +@pytest.mark.asyncio +async def test_old_liveness_endpoint_returns_404(client: httpx.AsyncClient) -> None: + """The old /liveness endpoint no longer exists.""" + resp = await client.get("/liveness") + assert resp.status_code == 404 + + +@pytest.mark.asyncio +async def test_old_readiness_endpoint_returns_404(client: httpx.AsyncClient) -> None: + """The old /readiness endpoint no longer exists.""" + resp = await client.get("/readiness") + assert resp.status_code == 404 diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_logger.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_logger.py new file mode 100644 index 000000000000..a95e4980d530 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/tests/test_logger.py @@ -0,0 +1,19 @@ +# --------------------------------------------------------- +# 
Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Tests for the library-scoped logger.""" +import logging + + +def test_library_logger_exists() -> None: + """The library logger uses the expected dotted name.""" + lib_logger = logging.getLogger("azure.ai.agentserver") + assert lib_logger.name == "azure.ai.agentserver" + + +def test_log_level_preserved_across_imports() -> None: + """Importing internal modules does not reset the log level set by user code.""" + lib_logger = logging.getLogger("azure.ai.agentserver") + lib_logger.setLevel(logging.ERROR) + from azure.ai.agentserver.core import _base # noqa: F401 + assert lib_logger.level == logging.ERROR diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_server_routes.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_server_routes.py new file mode 100644 index 000000000000..f2a22ba45fa8 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/tests/test_server_routes.py @@ -0,0 +1,85 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# ---------------------------------------------------------
+"""Tests for port resolution and unknown-route handling."""
+import os
+from unittest import mock
+
+import pytest
+import httpx
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.core._config import resolve_port
+from azure.ai.agentserver.core._constants import Constants
+
+
+# ------------------------------------------------------------------ #
+# Port resolution
+# ------------------------------------------------------------------ #
+
+
+class TestResolvePort:
+    """Tests for resolve_port() — explicit > env > default."""
+
+    def test_explicit_port_wins(self) -> None:
+        assert resolve_port(9090) == 9090
+
+    def test_env_var_port(self) -> None:
+        with mock.patch.dict(os.environ, {"PORT": "7777"}):
+            assert resolve_port(None) == 7777
+
+    def test_default_port(self) -> None:
+        # Patching with an empty dict and clear=True guarantees PORT
+        # is unset, so no extra copy/pop of os.environ is needed for
+        # the default to be used.
+        with mock.patch.dict(os.environ, {}, clear=True):
+            # PORT is absent, so the default port is returned.
+            assert resolve_port(None) == Constants.DEFAULT_PORT
+
+    def test_invalid_env_var_raises(self) -> None:
+        with mock.patch.dict(os.environ, {"PORT": "not-a-number"}):
+            with pytest.raises(ValueError, match="Invalid value for PORT"):
+                resolve_port(None)
+
+    def test_non_int_explicit_raises(self) -> None:
+        with pytest.raises(ValueError, match="expected an integer"):
+            resolve_port("8080")  # type: ignore[arg-type]
+
+    def test_port_out_of_range_explicit(self) -> None:
+        with pytest.raises(ValueError, match="expected 1-65535"):
+            resolve_port(0)
+
+    def test_port_above_range_explicit(self) -> None:
+        with pytest.raises(ValueError, match="expected 1-65535"):
+            resolve_port(70000)
+
+    def test_env_var_port_out_of_range(self) -> None:
+        with mock.patch.dict(os.environ, {"PORT": "0"}):
+            with pytest.raises(ValueError, match="expected 1-65535"):
+                resolve_port(None)
+
+    def test_env_var_port_above_range(self) -> None:
+ with mock.patch.dict(os.environ, {"PORT": "99999"}): + with pytest.raises(ValueError, match="expected 1-65535"): + resolve_port(None) + + +# ------------------------------------------------------------------ # +# Unknown route +# ------------------------------------------------------------------ # + + +@pytest.fixture() +def client() -> httpx.AsyncClient: + agent = AgentHost() + return httpx.AsyncClient( + transport=httpx.ASGITransport(app=agent.app), + base_url="http://testserver", + ) + + +@pytest.mark.asyncio +async def test_unknown_route_returns_404(client: httpx.AsyncClient) -> None: + """A request to an unregistered path returns 404.""" + resp = await client.get("/no-such-endpoint") + assert resp.status_code == 404 diff --git a/sdk/agentserver/azure-ai-agentserver-core/tests/test_tracing.py b/sdk/agentserver/azure-ai-agentserver-core/tests/test_tracing.py new file mode 100644 index 000000000000..bcacefe59bda --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-core/tests/test_tracing.py @@ -0,0 +1,224 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Tests for tracing configuration — not invocation spans (those live in the invocations package).""" +import contextlib +import os +from unittest import mock + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.core._config import ( + resolve_agent_name, + resolve_agent_version, + resolve_appinsights_connection_string, +) +from azure.ai.agentserver.core._constants import Constants +from azure.ai.agentserver.core._tracing import _parse_baggage_key + + +# ------------------------------------------------------------------ # +# Tracing enabled / disabled +# ------------------------------------------------------------------ # + + +class TestTracingToggle: + """Tracing is enabled when App Insights or OTLP endpoint is configured.""" + + def test_tracing_disabled_when_no_endpoints(self) -> None: + env = os.environ.copy() + env.pop(Constants.APPLICATIONINSIGHTS_CONNECTION_STRING, None) + env.pop(Constants.OTEL_EXPORTER_OTLP_ENDPOINT, None) + with mock.patch.dict(os.environ, env, clear=True): + agent = AgentHost() + assert agent.tracing is None + + def test_tracing_enabled_via_appinsights_env_var(self) -> None: + with mock.patch.dict(os.environ, {Constants.APPLICATIONINSIGHTS_CONNECTION_STRING: "InstrumentationKey=test"}): + with mock.patch( + "azure.ai.agentserver.core._tracing.TracingHelper.__init__", + return_value=None, + ): + agent = AgentHost() + assert agent.tracing is not None + + def test_tracing_enabled_via_otlp_env_var(self) -> None: + with mock.patch.dict(os.environ, {Constants.OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4318"}): + with mock.patch( + "azure.ai.agentserver.core._tracing.TracingHelper.__init__", + return_value=None, + ): + agent = AgentHost() + assert agent.tracing is not None + + def test_tracing_enabled_via_constructor_connection_string(self) -> None: + with mock.patch( + "azure.ai.agentserver.core._tracing.TracingHelper.__init__", + return_value=None, + ): 
+ agent = AgentHost(application_insights_connection_string="InstrumentationKey=ctor") + assert agent.tracing is not None + + +# ------------------------------------------------------------------ # +# Application Insights connection string resolution +# ------------------------------------------------------------------ # + + +class TestAppInsightsConnectionString: + """Tests for resolve_appinsights_connection_string().""" + + def test_explicit_wins(self) -> None: + assert resolve_appinsights_connection_string("InstrumentationKey=abc") == "InstrumentationKey=abc" + + def test_env_var(self) -> None: + with mock.patch.dict( + os.environ, + {Constants.APPLICATIONINSIGHTS_CONNECTION_STRING: "InstrumentationKey=env"}, + ): + assert resolve_appinsights_connection_string(None) == "InstrumentationKey=env" + + def test_none_when_unset(self) -> None: + env = os.environ.copy() + env.pop(Constants.APPLICATIONINSIGHTS_CONNECTION_STRING, None) + with mock.patch.dict(os.environ, env, clear=True): + assert resolve_appinsights_connection_string(None) is None + + def test_explicit_overrides_env_var(self) -> None: + with mock.patch.dict( + os.environ, + {Constants.APPLICATIONINSIGHTS_CONNECTION_STRING: "InstrumentationKey=env"}, + ): + result = resolve_appinsights_connection_string("InstrumentationKey=explicit") + assert result == "InstrumentationKey=explicit" + + +# ------------------------------------------------------------------ # +# _setup_azure_monitor (mocked) +# ------------------------------------------------------------------ # + + +class TestSetupAzureMonitor: + """Verify _setup_azure_monitor calls the right helpers.""" + + @staticmethod + def _tracing_mocks() -> contextlib.ExitStack: + """Enter the common set of mocks needed to instantiate TracingHelper.""" + stack = contextlib.ExitStack() + stack.enter_context(mock.patch("azure.ai.agentserver.core._tracing._HAS_OTEL", True)) + stack.enter_context(mock.patch("azure.ai.agentserver.core._tracing.trace", create=True)) + 
stack.enter_context( + mock.patch("azure.ai.agentserver.core._tracing.TraceContextTextMapPropagator", create=True) + ) + stack.enter_context( + mock.patch("azure.ai.agentserver.core._tracing._ensure_trace_provider", return_value=mock.MagicMock()) + ) + return stack + + def test_setup_azure_monitor_called_when_conn_str_provided(self) -> None: + with self._tracing_mocks(): + with mock.patch( + "azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor" + ) as mock_setup: + with mock.patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_otlp_export"): + from azure.ai.agentserver.core._tracing import TracingHelper + TracingHelper(connection_string="InstrumentationKey=test") + # _setup_azure_monitor receives (connection_string, resource, trace_provider) + mock_setup.assert_called_once() + args = mock_setup.call_args[0] + assert args[0] == "InstrumentationKey=test" + + def test_setup_azure_monitor_not_called_when_no_conn_str(self) -> None: + with self._tracing_mocks(): + with mock.patch( + "azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor" + ) as mock_setup: + with mock.patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_otlp_export"): + from azure.ai.agentserver.core._tracing import TracingHelper + TracingHelper(connection_string=None) + mock_setup.assert_not_called() + + +# ------------------------------------------------------------------ # +# Constructor passes / skips connection string +# ------------------------------------------------------------------ # + + +class TestConstructorConnectionString: + """Verify AgentHost forwards the connection string to TracingHelper.""" + + def test_constructor_passes_connection_string(self) -> None: + with mock.patch( + "azure.ai.agentserver.core._tracing.TracingHelper.__init__", + return_value=None, + ) as mock_init: + AgentHost( + application_insights_connection_string="InstrumentationKey=ctor", + ) + 
mock_init.assert_called_once_with(connection_string="InstrumentationKey=ctor") + + +# ------------------------------------------------------------------ # +# Agent name / version resolution with new env vars +# ------------------------------------------------------------------ # + + +class TestAgentIdentityResolution: + """Tests for resolve_agent_name() and resolve_agent_version().""" + + def test_agent_name_from_env(self) -> None: + with mock.patch.dict(os.environ, {Constants.FOUNDRY_AGENT_NAME: "my-agent"}): + assert resolve_agent_name() == "my-agent" + + def test_agent_name_default_empty(self) -> None: + env = os.environ.copy() + env.pop(Constants.FOUNDRY_AGENT_NAME, None) + with mock.patch.dict(os.environ, env, clear=True): + assert resolve_agent_name() == "" + + def test_agent_version_from_env(self) -> None: + with mock.patch.dict(os.environ, {Constants.FOUNDRY_AGENT_VERSION: "2.0"}): + assert resolve_agent_version() == "2.0" + + def test_agent_version_default_empty(self) -> None: + env = os.environ.copy() + env.pop(Constants.FOUNDRY_AGENT_VERSION, None) + with mock.patch.dict(os.environ, env, clear=True): + assert resolve_agent_version() == "" + + +# ------------------------------------------------------------------ # +# Baggage parsing (unit tests for _parse_baggage_key) +# ------------------------------------------------------------------ # + + +class TestParseBaggageKey: + """Unit tests for _parse_baggage_key().""" + + def test_single_key(self) -> None: + assert _parse_baggage_key("leaf_customer_span_id=abc123", "leaf_customer_span_id") == "abc123" + + def test_multiple_keys(self) -> None: + baggage = "key1=val1,leaf_customer_span_id=def456,key2=val2" + assert _parse_baggage_key(baggage, "leaf_customer_span_id") == "def456" + + def test_key_not_found(self) -> None: + assert _parse_baggage_key("key1=val1,key2=val2", "leaf_customer_span_id") is None + + def test_empty_baggage(self) -> None: + assert _parse_baggage_key("", "leaf_customer_span_id") is None + + 
def test_key_with_properties(self) -> None: + baggage = "leaf_customer_span_id=abc123;prop1=x" + assert _parse_baggage_key(baggage, "leaf_customer_span_id") == "abc123" + + def test_whitespace_handling(self) -> None: + baggage = " leaf_customer_span_id = abc123 , other = val " + assert _parse_baggage_key(baggage, "leaf_customer_span_id") == "abc123" + + def test_value_with_equals(self) -> None: + baggage = "leaf_customer_span_id=abc=123" + assert _parse_baggage_key(baggage, "leaf_customer_span_id") == "abc=123" + + def test_no_equals_in_member(self) -> None: + baggage = "malformed_entry,leaf_customer_span_id=good" + assert _parse_baggage_key(baggage, "leaf_customer_span_id") == "good" diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/CHANGELOG.md b/sdk/agentserver/azure-ai-agentserver-invocations/CHANGELOG.md new file mode 100644 index 000000000000..1cb00d1154d0 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/CHANGELOG.md @@ -0,0 +1,15 @@ +# Release History + +## 1.0.0b1 (Unreleased) + +### Features Added + +- Initial release of `azure-ai-agentserver-invocations`. +- `InvocationHandler` for wiring invocation protocol endpoints to an `AgentHost`. +- Decorator-based handler registration (`@invocations.invoke_handler`). +- Optional `GET /invocations/{id}` and `POST /invocations/{id}/cancel` endpoints. +- `GET /invocations/docs/openapi.json` for OpenAPI spec serving. +- Invocation ID tracking and session correlation via `agent_session_id` query parameter. +- Distributed tracing with GenAI semantic convention span attributes. +- W3C Baggage propagation for cross-service correlation. +- Streaming response support with span lifecycle management. 
diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/LICENSE b/sdk/agentserver/azure-ai-agentserver-invocations/LICENSE new file mode 100644 index 000000000000..4c3581d3b052 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/LICENSE @@ -0,0 +1,21 @@ +Copyright (c) Microsoft Corporation. + +MIT License + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/MANIFEST.in b/sdk/agentserver/azure-ai-agentserver-invocations/MANIFEST.in new file mode 100644 index 000000000000..cd83a6c13bfa --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/MANIFEST.in @@ -0,0 +1,8 @@ +include *.md +include LICENSE +recursive-include tests *.py +recursive-include samples *.py *.md +include azure/__init__.py +include azure/ai/__init__.py +include azure/ai/agentserver/__init__.py +include azure/ai/agentserver/invocations/py.typed diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/README.md b/sdk/agentserver/azure-ai-agentserver-invocations/README.md new file mode 100644 index 000000000000..0ab1bf64f5d6 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/README.md @@ -0,0 +1,221 @@ +# Azure AI AgentHost Invocations for Python + +The `azure-ai-agentserver-invocations` package provides the invocation protocol endpoints for Azure AI Hosted Agent containers. It plugs into the [`azure-ai-agentserver-core`](https://pypi.org/project/azure-ai-agentserver-core/) host framework and adds the full invocation lifecycle: `POST /invocations`, `GET /invocations/{id}`, `POST /invocations/{id}/cancel`, and `GET /invocations/docs/openapi.json`. + +## Getting started + +### Install the package + +```bash +pip install azure-ai-agentserver-invocations +``` + +This automatically installs `azure-ai-agentserver-core` as a dependency. + +### Prerequisites + +- Python 3.10 or later + +## Key concepts + +### InvocationHandler + +`InvocationHandler` is the composable protocol handler that mounts invocation endpoints onto an `AgentHost`. It provides decorator methods for registering handler functions: + +- `@invocations.invoke_handler` — **Required.** Handles `POST /invocations`. +- `@invocations.get_invocation_handler` — Optional. Handles `GET /invocations/{id}`. +- `@invocations.cancel_invocation_handler` — Optional. Handles `POST /invocations/{id}/cancel`. 
+ +### Protocol endpoints + +| Method | Route | Required | Description | +|---|---|---|---| +| `POST` | `/invocations` | Yes | Execute the agent | +| `GET` | `/invocations/{invocation_id}` | No | Retrieve invocation status or result | +| `POST` | `/invocations/{invocation_id}/cancel` | No | Cancel a running invocation | +| `GET` | `/invocations/docs/openapi.json` | No | Serve the agent's OpenAPI 3.x spec | + +### Request and response headers + +The SDK automatically manages these headers on every invocation: + +| Header | Direction | Description | +|---|---|---| +| `x-agent-invocation-id` | Request & Response | Echoed if provided, otherwise a UUID is generated | +| `x-agent-session-id` | Response (POST only) | Resolved from `agent_session_id` query param, `FOUNDRY_AGENT_SESSION_ID` env var, or generated UUID | + +### Session ID resolution + +Session IDs group related invocations into a conversation. The SDK resolves the session ID in order: + +1. `agent_session_id` query parameter on `POST /invocations` +2. `FOUNDRY_AGENT_SESSION_ID` environment variable +3. Auto-generated UUID + +The resolved session ID is available in handler functions via `request.state.session_id`. + +### Handler access to SDK state + +Inside handler functions, the SDK sets these attributes on `request.state`: + +- `request.state.invocation_id` — The invocation ID (echoed or generated). +- `request.state.session_id` — The resolved session ID (POST /invocations only). 
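The resolution order above can be mirrored in a few lines of standard-library Python. This is an illustrative sketch of the documented precedence, not the SDK's internal implementation:

```python
import os
import uuid


def resolve_session_id(query_params: dict[str, str]) -> str:
    # Documented precedence: query parameter, then environment
    # variable, then a freshly generated UUID.
    return (
        query_params.get("agent_session_id")
        or os.environ.get("FOUNDRY_AGENT_SESSION_ID")
        or str(uuid.uuid4())
    )


print(resolve_session_id({"agent_session_id": "session-abc"}))  # session-abc
```

When neither the query parameter nor the environment variable is set, every call yields a new UUID, so callers who want multi-turn behavior must pass the session ID explicitly.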
+
+### Distributed tracing
+
+When tracing is enabled on the `AgentHost`, invocation spans are automatically created with GenAI semantic conventions:
+
+- **Span name**: `invoke_agent {FOUNDRY_AGENT_NAME}:{FOUNDRY_AGENT_VERSION}`
+- **Span attributes**: `gen_ai.system`, `gen_ai.operation.name`, `gen_ai.response.id`, `gen_ai.conversation.id`, `gen_ai.agent.id`, `gen_ai.agent.name`, `gen_ai.agent.version`
+- **Error tags**: `azure.ai.agentserver.invocations.error.code`, `.error.message`
+- **Baggage keys**: `azure.ai.agentserver.invocation_id`, `.session_id`
+
+## Examples
+
+### Simple synchronous agent
+
+```python
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.invocations import InvocationHandler
+from starlette.requests import Request
+from starlette.responses import JSONResponse, Response
+
+server = AgentHost()
+invocations = InvocationHandler(server)
+
+@invocations.invoke_handler
+async def handle(request: Request) -> Response:
+    data = await request.json()
+    return JSONResponse({"greeting": f"Hello, {data['name']}!"})
+
+server.run()
+```
+
+### Long-running operations with polling
+
+```python
+import asyncio
+import json
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.invocations import InvocationHandler
+from starlette.requests import Request
+from starlette.responses import JSONResponse, Response
+
+_tasks: dict[str, asyncio.Task] = {}
+_results: dict[str, bytes] = {}
+
+server = AgentHost()
+invocations = InvocationHandler(server)
+
+async def do_work(invocation_id: str, data: dict) -> None:
+    # Simulate long-running work, then store the result for polling.
+    await asyncio.sleep(10)
+    _results[invocation_id] = json.dumps({
+        "invocation_id": invocation_id,
+        "status": "completed",
+        "output": f"Processed: {data}",
+    }).encode()
+
+@invocations.invoke_handler
+async def handle(request: Request) -> Response:
+    data = await request.json()
+    invocation_id = request.state.invocation_id
+    task = asyncio.create_task(do_work(invocation_id, data))
+    _tasks[invocation_id] = task
+    return JSONResponse({"invocation_id": invocation_id, "status": "running"})
+
+@invocations.get_invocation_handler
+async def get_invocation(request: Request) -> Response:
+    invocation_id = request.state.invocation_id
+    if invocation_id in _results:
+        return Response(content=_results[invocation_id], media_type="application/json")
+    return JSONResponse({"invocation_id": invocation_id, "status": "running"})
+
+@invocations.cancel_invocation_handler
+async def cancel_invocation(request: Request) -> Response:
+    invocation_id = request.state.invocation_id
+    if invocation_id in _tasks:
+        _tasks[invocation_id].cancel()
+        del _tasks[invocation_id]
+        return JSONResponse({"invocation_id": invocation_id, "status": "cancelled"})
+    return JSONResponse({"error": "not found"}, status_code=404)
+```
+
+### Streaming (Server-Sent Events)
+
+```python
+import json
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.invocations import InvocationHandler
+from starlette.requests import Request
+from starlette.responses import Response, StreamingResponse
+
+server = AgentHost()
+invocations = InvocationHandler(server)
+
+@invocations.invoke_handler
+async def handle(request: Request) -> Response:
+    async def generate():
+        # SSE frames are "data: <payload>" lines terminated by a blank line.
+        for word in ["Hello", " ", "world", "!"]:
+            yield f'data: {json.dumps({"delta": word})}\n\n'.encode()
+
+    return StreamingResponse(generate(), media_type="text/event-stream")
+```
+
+### Multi-turn conversation
+
+Use the `agent_session_id` query parameter to group invocations into a conversation:
+
+```bash
+# First turn
+curl -X POST "http://localhost:8088/invocations?agent_session_id=session-abc" \
+  -H "Content-Type: application/json" \
+  -d '{"message": "My name is Alice"}'
+
+# Second turn (same session)
+curl -X POST "http://localhost:8088/invocations?agent_session_id=session-abc" \
+  -H "Content-Type: application/json" \
+  -d '{"message": "What is my name?"}'
+```
+
+The session ID is available in the handler via `request.state.session_id`.
+ +### Serving an OpenAPI spec + +Pass an OpenAPI spec dict to enable the discovery endpoint at `GET /invocations/docs/openapi.json`: + +```python +server = AgentHost() +invocations = InvocationHandler(server, openapi_spec={ + "openapi": "3.0.3", + "info": {"title": "My Agent", "version": "1.0.0"}, + "paths": { ... }, +}) +``` + +## Troubleshooting + +### Reporting issues + +To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). + +## Next steps + +Visit the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/agentserver/azure-ai-agentserver-invocations/samples) folder for complete working examples: + +| Sample | Description | +|---|---| +| [simple_invoke_agent](samples/simple_invoke_agent/) | Minimal synchronous request-response | +| [async_invoke_agent](samples/async_invoke_agent/) | Long-running operations with polling and cancellation | + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require +you to agree to a Contributor License Agreement (CLA) declaring that you have +the right to, and actually do, grant us the rights to use your contribution. +For details, visit https://cla.microsoft.com. + +When you submit a pull request, a CLA-bot will automatically determine whether +you need to provide a CLA and decorate the PR appropriately (e.g., label, +comment). Simply follow the instructions provided by the bot. You will only +need to do this once across all repos using our CLA. + +This project has adopted the +[Microsoft Open Source Code of Conduct][code_of_conduct]. For more information, +see the Code of Conduct FAQ or contact opencode@microsoft.com with any +additional questions or comments. 
+ +[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/ diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/__init__.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/__init__.py similarity index 100% rename from sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/__init__.py rename to sdk/agentserver/azure-ai-agentserver-invocations/azure/__init__.py diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/__init__.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/__init__.py similarity index 100% rename from sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/__init__.py rename to sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/__init__.py diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/__init__.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/__init__.py new file mode 100644 index 000000000000..8db66d3d0f0f --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/__init__.py @@ -0,0 +1 @@ +__path__ = __import__("pkgutil").extend_path(__path__, __name__) diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/__init__.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/__init__.py new file mode 100644 index 000000000000..e8cdb4179622 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/__init__.py @@ -0,0 +1,30 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Invocations protocol for Azure AI Hosted Agents. + +This package provides the invocation protocol endpoints and handler +wiring for :class:`~azure.ai.agentserver.core.AgentHost`. 
+ +Quick start:: + + from azure.ai.agentserver.core import AgentHost + from azure.ai.agentserver.invocations import InvocationHandler + from starlette.responses import JSONResponse + + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request): + return JSONResponse({"ok": True}) + + server.run() +""" +__path__ = __import__("pkgutil").extend_path(__path__, __name__) + +from ._invocation import InvocationHandler +from ._version import VERSION + +__all__ = ["InvocationHandler"] +__version__ = VERSION diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_constants.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_constants.py new file mode 100644 index 000000000000..2dd8e9f91ce7 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_constants.py @@ -0,0 +1,25 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- + + +class InvocationConstants: + """Invocation protocol constants. + + Protocol-specific headers, env vars, and defaults for the invocation + endpoints. 
+ """ + + # Request / response headers + INVOCATION_ID_HEADER = "x-agent-invocation-id" + SESSION_ID_HEADER = "x-agent-session-id" + + # Span attribute keys + ATTR_SPAN_INVOCATION_ID = "azure.ai.agentserver.invocations.invocation_id" + ATTR_SPAN_SESSION_ID = "azure.ai.agentserver.invocations.session_id" + ATTR_SPAN_ERROR_CODE = "azure.ai.agentserver.invocations.error.code" + ATTR_SPAN_ERROR_MESSAGE = "azure.ai.agentserver.invocations.error.message" + + # Baggage keys + ATTR_BAGGAGE_INVOCATION_ID = "azure.ai.agentserver.invocation_id" + ATTR_BAGGAGE_SESSION_ID = "azure.ai.agentserver.session_id" diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_invocation.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_invocation.py new file mode 100644 index 000000000000..d543c2a6e368 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_invocation.py @@ -0,0 +1,416 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Invocation protocol handler for AgentHost. + +Provides the invocation protocol endpoints and handler decorators. +Registers routes with the ``AgentHost`` on construction. 
+""" +import contextlib +import os +import uuid +from collections.abc import Awaitable, Callable # pylint: disable=import-error +from typing import TYPE_CHECKING, Any, Optional + +from starlette.requests import Request +from starlette.responses import JSONResponse, Response, StreamingResponse +from starlette.routing import Route + +from azure.ai.agentserver.core import ( # pylint: disable=no-name-in-module + AgentLogger, + Constants, + ErrorResponse, +) + +if TYPE_CHECKING: + from azure.ai.agentserver.core import AgentHost, TracingHelper + +from ._constants import InvocationConstants + +logger = AgentLogger.get() + + +class InvocationHandler: + """Invocation protocol handler that plugs into an ``AgentHost``. + + Creates the invocation protocol endpoints and registers them with + the server. Use the decorator methods to wire handler functions + to the endpoints. + + This design supports multi-protocol composition — multiple protocol + handlers (e.g. ``InvocationHandler``, ``ResponseHandler``) can be + mounted onto the same ``AgentHost``. + + Usage:: + + from azure.ai.agentserver.core import AgentHost + from azure.ai.agentserver.invocations import InvocationHandler + + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request): + return JSONResponse({"ok": True}) + + server.run() + + :param server: The ``AgentHost`` to register invocation protocol + routes with. + :type server: AgentHost + :param openapi_spec: Optional OpenAPI spec dict. When provided, the spec + is served at ``GET /invocations/docs/openapi.json``. 
+ :type openapi_spec: Optional[dict[str, Any]] + """ + + def __init__( + self, + server: "AgentHost", + *, + openapi_spec: Optional[dict[str, Any]] = None, + ) -> None: + self._tracing: Optional["TracingHelper"] = server.tracing + self._invoke_fn: Optional[Callable] = None + self._get_invocation_fn: Optional[Callable] = None + self._cancel_invocation_fn: Optional[Callable] = None + self._openapi_spec = openapi_spec + + # Build and cache routes once + self._routes: list[Route] = [ + Route( + "/invocations/docs/openapi.json", + self._get_openapi_spec_endpoint, + methods=["GET"], + name="get_openapi_spec", + ), + Route( + "/invocations", + self._create_invocation_endpoint, + methods=["POST"], + name="create_invocation", + ), + Route( + "/invocations/{invocation_id}", + self._get_invocation_endpoint, + methods=["GET"], + name="get_invocation", + ), + Route( + "/invocations/{invocation_id}/cancel", + self._cancel_invocation_endpoint, + methods=["POST"], + name="cancel_invocation", + ), + ] + + # Register routes with the server + server.register_routes(self._routes) + + # ------------------------------------------------------------------ + # Routes + # ------------------------------------------------------------------ + + @property + def routes(self) -> list[Route]: + """Starlette routes for the invocation protocol. + + :return: A list of Route objects for the invocation endpoints. + :rtype: list[Route] + """ + return self._routes + + # ------------------------------------------------------------------ + # Handler decorators + # ------------------------------------------------------------------ + + def invoke_handler( + self, fn: Callable[[Request], Awaitable[Response]] + ) -> Callable[[Request], Awaitable[Response]]: + """Register a function as the invoke handler. + + Usage:: + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + ... + + :param fn: Async function accepting a Starlette Request and returning a Response. 
+ :type fn: Callable[[Request], Awaitable[Response]] + :return: The original function (unmodified). + :rtype: Callable[[Request], Awaitable[Response]] + """ + self._invoke_fn = fn + return fn + + def get_invocation_handler( + self, fn: Callable[[Request], Awaitable[Response]] + ) -> Callable[[Request], Awaitable[Response]]: + """Register a function as the get-invocation handler. + + :param fn: Async function accepting a Starlette Request and returning a Response. + :type fn: Callable[[Request], Awaitable[Response]] + :return: The original function (unmodified). + :rtype: Callable[[Request], Awaitable[Response]] + """ + self._get_invocation_fn = fn + return fn + + def cancel_invocation_handler( + self, fn: Callable[[Request], Awaitable[Response]] + ) -> Callable[[Request], Awaitable[Response]]: + """Register a function as the cancel-invocation handler. + + :param fn: Async function accepting a Starlette Request and returning a Response. + :type fn: Callable[[Request], Awaitable[Response]] + :return: The original function (unmodified). + :rtype: Callable[[Request], Awaitable[Response]] + """ + self._cancel_invocation_fn = fn + return fn + + # ------------------------------------------------------------------ + # Dispatch methods (internal) + # ------------------------------------------------------------------ + + async def _dispatch_invoke(self, request: Request) -> Response: + if self._invoke_fn is not None: + return await self._invoke_fn(request) + raise NotImplementedError( + "No invoke handler registered. Use the @invocations.invoke_handler decorator." 
+ ) + + async def _dispatch_get_invocation(self, request: Request) -> Response: + if self._get_invocation_fn is not None: + return await self._get_invocation_fn(request) + return ErrorResponse.create("not_found", "get_invocation not implemented", status_code=404) + + async def _dispatch_cancel_invocation(self, request: Request) -> Response: + if self._cancel_invocation_fn is not None: + return await self._cancel_invocation_fn(request) + return ErrorResponse.create("not_found", "cancel_invocation not implemented", status_code=404) + + def get_openapi_spec(self) -> Optional[dict[str, Any]]: + """Return the stored OpenAPI spec, or None.""" + return self._openapi_spec + + # ------------------------------------------------------------------ + # Span attribute helper + # ------------------------------------------------------------------ + + @staticmethod + def _safe_set_attrs(span: Any, attrs: dict[str, str]) -> None: + if span is None: + return + try: + for key, value in attrs.items(): + span.set_attribute(key, value) + except Exception: # pylint: disable=broad-exception-caught + logger.debug("Failed to set span attributes: %s", list(attrs.keys()), exc_info=True) + + # ------------------------------------------------------------------ + # Streaming response helpers + # ------------------------------------------------------------------ + + def _wrap_streaming_response( + self, + response: StreamingResponse, + otel_span: Any, + baggage_token: Any, + ) -> StreamingResponse: + """Wrap a streaming response's body iterator with tracing and baggage cleanup. + + Two layers of wrapping are applied in order: + + 1. **Inner (tracing):** ``trace_stream`` wraps the body iterator so + the OTel span covers the full streaming duration and records any + errors that occur while yielding chunks. + 2. **Outer (baggage cleanup):** A second async generator detaches the + W3C Baggage context *after* all chunks have been sent (or an + error occurs). 
This ordering ensures the span is ended before + the baggage context is detached. + + :param response: The ``StreamingResponse`` returned by the user handler. + :param otel_span: The OTel span (or *None* when tracing is disabled). + :param baggage_token: Token from ``set_baggage`` (or *None*). + :return: The same response object, with its body_iterator replaced. + """ + # When tracing is disabled there is nothing to wrap — skip the + # extra async-generator layer to avoid unnecessary overhead on + # every streaming chunk. + if self._tracing is None: + return response + + # Inner wrap: trace_stream ends the span when iteration completes. + response.body_iterator = self._tracing.trace_stream(response.body_iterator, otel_span) + + # Outer wrap: detach baggage after all chunks are sent. + original_iterator = response.body_iterator + tracing = self._tracing # capture for the closure + + async def _cleanup_iter(): # type: ignore[return-value] + try: + async for chunk in original_iterator: + yield chunk + finally: + tracing.detach_baggage(baggage_token) + + response.body_iterator = _cleanup_iter() + return response + + # ------------------------------------------------------------------ + # Endpoint handlers + # ------------------------------------------------------------------ + + async def _get_openapi_spec_endpoint(self, request: Request) -> Response: # pylint: disable=unused-argument + spec = self.get_openapi_spec() + if spec is None: + return ErrorResponse.create("not_found", "No OpenAPI spec registered", status_code=404) + return JSONResponse(spec) + + async def _create_invocation_endpoint(self, request: Request) -> Response: + invocation_id = ( + request.headers.get(InvocationConstants.INVOCATION_ID_HEADER) + or str(uuid.uuid4()) + ) + request.state.invocation_id = invocation_id + + # Session ID: query param overrides env var / generated UUID + session_id = ( + request.query_params.get("agent_session_id") + or os.environ.get(Constants.FOUNDRY_AGENT_SESSION_ID) + or 
str(uuid.uuid4()) + ) + request.state.session_id = session_id + + baggage_token = None + response: Optional[Response] = None + streaming_wrapped = False + + try: + otel_span = None + if self._tracing is not None: + otel_span = self._tracing.start_request_span( + request.headers, + invocation_id, + span_operation="invoke_agent", + operation_name="invoke_agent", + session_id=session_id, + ) + self._safe_set_attrs(otel_span, { + InvocationConstants.ATTR_SPAN_INVOCATION_ID: invocation_id, + InvocationConstants.ATTR_SPAN_SESSION_ID: session_id, + }) + baggage_token = self._tracing.set_baggage({ + InvocationConstants.ATTR_BAGGAGE_INVOCATION_ID: invocation_id, + InvocationConstants.ATTR_BAGGAGE_SESSION_ID: session_id, + }) + + try: + response = await self._dispatch_invoke(request) + except NotImplementedError as exc: + self._safe_set_attrs(otel_span, { + InvocationConstants.ATTR_SPAN_ERROR_CODE: "not_implemented", + InvocationConstants.ATTR_SPAN_ERROR_MESSAGE: str(exc), + }) + if self._tracing is not None: + self._tracing.end_span(otel_span, exc=exc) + logger.error("Invocation %s failed: %s", invocation_id, exc) + return ErrorResponse.create( + "not_implemented", + str(exc), + status_code=501, + headers={ + InvocationConstants.INVOCATION_ID_HEADER: invocation_id, + InvocationConstants.SESSION_ID_HEADER: session_id, + }, + ) + except Exception as exc: # pylint: disable=broad-exception-caught + self._safe_set_attrs(otel_span, { + InvocationConstants.ATTR_SPAN_ERROR_CODE: "internal_error", + InvocationConstants.ATTR_SPAN_ERROR_MESSAGE: str(exc), + }) + if self._tracing is not None: + self._tracing.end_span(otel_span, exc=exc) + logger.error("Error processing invocation %s: %s", invocation_id, exc, exc_info=True) + return ErrorResponse.create( + "internal_error", + "Internal server error", + status_code=500, + headers={ + InvocationConstants.INVOCATION_ID_HEADER: invocation_id, + InvocationConstants.SESSION_ID_HEADER: session_id, + }, + ) + + 
response.headers[InvocationConstants.INVOCATION_ID_HEADER] = invocation_id + response.headers[InvocationConstants.SESSION_ID_HEADER] = session_id + + if isinstance(response, StreamingResponse): + wrapped = self._wrap_streaming_response(response, otel_span, baggage_token) + streaming_wrapped = True + return wrapped + + # Non-streaming: end the span immediately. + if self._tracing is not None: + self._tracing.end_span(otel_span) + + return response + finally: + # For non-streaming responses (or error paths that returned + # before reaching _wrap_streaming_response), detach baggage + # immediately. Streaming responses handle this in + # _wrap_streaming_response's cleanup iterator instead. + if not streaming_wrapped: + if self._tracing is not None: + self._tracing.detach_baggage(baggage_token) + + async def _traced_invocation_endpoint( + self, + request: Request, + span_operation: str, + dispatch: Callable[[Request], Awaitable[Response]], + ) -> Response: + invocation_id = request.path_params["invocation_id"] + request.state.invocation_id = invocation_id + + span_cm: Any = contextlib.nullcontext(None) + if self._tracing is not None: + span_cm = self._tracing.request_span( + request.headers, invocation_id, span_operation, + session_id=request.query_params.get("agent_session_id", ""), + ) + with span_cm as _otel_span: + self._safe_set_attrs(_otel_span, { + InvocationConstants.ATTR_SPAN_INVOCATION_ID: invocation_id, + InvocationConstants.ATTR_SPAN_SESSION_ID: request.query_params.get("agent_session_id", ""), + }) + try: + response = await dispatch(request) + response.headers[InvocationConstants.INVOCATION_ID_HEADER] = invocation_id + return response + except Exception as exc: # pylint: disable=broad-exception-caught + self._safe_set_attrs(_otel_span, { + InvocationConstants.ATTR_SPAN_ERROR_CODE: "internal_error", + InvocationConstants.ATTR_SPAN_ERROR_MESSAGE: str(exc), + }) + # The exception is caught here (not re-raised), so OTel's + # start_as_current_span won't see it. 
Record it explicitly. + if self._tracing is not None: + self._tracing.record_error(_otel_span, exc) + logger.error("Error in %s %s: %s", span_operation, invocation_id, exc, exc_info=True) + return ErrorResponse.create( + "internal_error", + "Internal server error", + status_code=500, + headers={InvocationConstants.INVOCATION_ID_HEADER: invocation_id}, + ) + + async def _get_invocation_endpoint(self, request: Request) -> Response: + return await self._traced_invocation_endpoint( + request, "get_invocation", self._dispatch_get_invocation + ) + + async def _cancel_invocation_endpoint(self, request: Request) -> Response: + return await self._traced_invocation_endpoint( + request, "cancel_invocation", self._dispatch_cancel_invocation + ) diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/__init__.py b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_version.py similarity index 73% rename from sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/__init__.py rename to sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_version.py index fdf8caba9ef5..67d209a8cafd 100644 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/server/common/id_generator/__init__.py +++ b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/_version.py @@ -2,4 +2,4 @@ # Copyright (c) Microsoft Corporation. All rights reserved. 
# --------------------------------------------------------- -__path__ = __import__("pkgutil").extend_path(__path__, __name__) +VERSION = "1.0.0b1" diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/py.typed b/sdk/agentserver/azure-ai-agentserver-invocations/azure/ai/agentserver/invocations/py.typed new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/cspell.json b/sdk/agentserver/azure-ai-agentserver-invocations/cspell.json new file mode 100644 index 000000000000..5858cd8e195b --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/cspell.json @@ -0,0 +1,26 @@ +{ + "ignoreWords": [ + "agentserver", + "appinsights", + "ASGI", + "autouse", + "caplog", + "genai", + "hypercorn", + "invocations", + "openapi", + "paramtype", + "pytestmark", + "rtype", + "starlette", + "traceparent", + "tracestate", + "tracecontext" + ], + "ignorePaths": [ + "*.csv", + "*.json", + "*.rst", + "samples/**" + ] +} diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/dev_requirements.txt b/sdk/agentserver/azure-ai-agentserver-invocations/dev_requirements.txt new file mode 100644 index 000000000000..d9dabecb9e67 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/dev_requirements.txt @@ -0,0 +1,6 @@ +-e ../../../eng/tools/azure-sdk-tools +pytest +httpx +pytest-asyncio +opentelemetry-api>=1.20.0 +opentelemetry-sdk>=1.20.0 diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/pyproject.toml b/sdk/agentserver/azure-ai-agentserver-invocations/pyproject.toml new file mode 100644 index 000000000000..b4e85165aa0a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/pyproject.toml @@ -0,0 +1,67 @@ +[project] +name = "azure-ai-agentserver-invocations" +dynamic = ["version", "readme"] +description = "Invocations protocol for Azure AI Hosted Agents" +requires-python = ">=3.10" +authors = [ + { name = "Microsoft Corporation", email = 
"azpysdkhelp@microsoft.com" }, +] +license = "MIT" +classifiers = [ + "Development Status :: 4 - Beta", + "Programming Language :: Python", + "Programming Language :: Python :: 3 :: Only", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", + "Programming Language :: Python :: 3.13", +] +keywords = ["azure", "azure sdk", "agent", "agentserver", "invocations"] + +dependencies = [ + "azure-ai-agentserver-core>=2.0.0b1", +] + +[build-system] +requires = ["setuptools>=69", "wheel"] +build-backend = "setuptools.build_meta" + +[project.urls] +repository = "https://github.com/Azure/azure-sdk-for-python" + +[tool.setuptools.packages.find] +exclude = [ + "tests*", + "samples*", + "doc*", + "azure", + "azure.ai", + "azure.ai.agentserver", +] + +[tool.setuptools.dynamic] +version = { attr = "azure.ai.agentserver.invocations._version.VERSION" } +readme = { file = ["README.md"], content-type = "text/markdown" } + +[tool.setuptools.package-data] +"azure.ai.agentserver.invocations" = ["py.typed"] + +[tool.ruff] +line-length = 120 +target-version = "py310" +lint.select = ["E", "F", "B", "I"] +lint.ignore = [] +fix = false + +[tool.ruff.lint.isort] +known-first-party = ["azure.ai.agentserver.invocations"] +combine-as-imports = true + +[tool.azure-sdk-build] +breaking = false +mypy = true +pyright = true +verifytypes = true +pylint = true +type_check_samples = false diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/pyrightconfig.json b/sdk/agentserver/azure-ai-agentserver-invocations/pyrightconfig.json new file mode 100644 index 000000000000..f36c5a7fe0d3 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/pyrightconfig.json @@ -0,0 +1,11 @@ +{ + "reportOptionalMemberAccess": "warning", + "reportArgumentType": "warning", + "reportAttributeAccessIssue": "warning", + "reportMissingImports": "warning", + "reportGeneralTypeIssues": 
"warning", + "reportReturnType": "warning", + "exclude": [ + "**/samples/**" + ] +} diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/samples/async_invoke_agent/async_invoke_agent.py b/sdk/agentserver/azure-ai-agentserver-invocations/samples/async_invoke_agent/async_invoke_agent.py new file mode 100644 index 000000000000..40d0a4be8d7f --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/samples/async_invoke_agent/async_invoke_agent.py @@ -0,0 +1,170 @@ +"""Async invoke agent example. + +Demonstrates get_invocation and cancel_invocation for long-running work. +Invocations run in background tasks; callers poll or cancel by ID. + +.. warning:: + + **In-memory demo only.** This sample stores all invocation state + (``self._tasks``, ``self._results``) in process memory. Both in-flight + ``asyncio.Task`` objects and completed results are lost on process restart + — which *will* happen during platform rolling updates, health-check + failures, and scaling events. + + For production long-running invocations: + + * Persist results to durable storage (Redis, Cosmos DB, etc.) inside + ``_do_work`` **before** the method returns. + * On startup, rehydrate any incomplete work or mark it as failed. + * Consider an external task queue (Celery, Azure Queue, etc.) instead + of ``asyncio.create_task`` for work that must survive restarts. 
+ +Usage:: + + # Start the agent + python async_invoke_agent.py + + # Start a long-running invocation + curl -X POST http://localhost:8088/invocations -H "Content-Type: application/json" -d '{"query": "analyze dataset"}' + # -> x-agent-invocation-id: abc-123 + # -> {"invocation_id": "abc-123", "status": "running"} + + # Poll for result + curl http://localhost:8088/invocations/abc-123 + # -> {"invocation_id": "abc-123", "status": "running"} (still working) + # -> {"invocation_id": "abc-123", "status": "completed"} (done) + + # Or cancel + curl -X POST http://localhost:8088/invocations/abc-123/cancel + # -> {"invocation_id": "abc-123", "status": "cancelled"} +""" +import asyncio +import json + +from starlette.requests import Request +from starlette.responses import JSONResponse, Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +# In-memory state for demo purposes (see module docstring for production caveats) +_tasks: dict[str, asyncio.Task] = {} +_results: dict[str, bytes] = {} + +server = AgentHost() +invocations = InvocationHandler(server) + + +async def _do_work(invocation_id: str, data: dict) -> bytes: + """Simulate long-running work. + + :param invocation_id: The invocation ID for this task. + :type invocation_id: str + :param data: The parsed request data. + :type data: dict + :return: JSON result bytes. + :rtype: bytes + """ + await asyncio.sleep(10) + result = json.dumps({ + "invocation_id": invocation_id, + "status": "completed", + "output": f"Processed: {data}", + }).encode() + _results[invocation_id] = result + return result + + +@invocations.invoke_handler +async def handle_invoke(request: Request) -> Response: + """Start a long-running invocation in a background task. + + :param request: The raw Starlette request. + :type request: starlette.requests.Request + :return: JSON status indicating the task is running. 
+ :rtype: starlette.responses.JSONResponse + """ + data = await request.json() + invocation_id = request.state.invocation_id + + task = asyncio.create_task(_do_work(invocation_id, data)) + _tasks[invocation_id] = task + + return JSONResponse({ + "invocation_id": invocation_id, + "status": "running", + }) + + +@invocations.get_invocation_handler +async def handle_get_invocation(request: Request) -> Response: + """Retrieve a previous invocation result. + + :param request: The raw Starlette request. + :type request: starlette.requests.Request + :return: JSON status or result. + :rtype: starlette.responses.JSONResponse + """ + invocation_id = request.state.invocation_id + + if invocation_id in _results: + return Response(content=_results[invocation_id], media_type="application/json") + + if invocation_id in _tasks: + task = _tasks[invocation_id] + if not task.done(): + return JSONResponse({ + "invocation_id": invocation_id, + "status": "running", + }) + result = task.result() + _results[invocation_id] = result + del _tasks[invocation_id] + return Response(content=result, media_type="application/json") + + return JSONResponse({"error": "not found"}, status_code=404) + + +@invocations.cancel_invocation_handler +async def handle_cancel_invocation(request: Request) -> Response: + """Cancel a running invocation. + + :param request: The raw Starlette request. + :type request: starlette.requests.Request + :return: JSON cancellation status. 
+ :rtype: starlette.responses.JSONResponse + """ + invocation_id = request.state.invocation_id + + # Already completed — cannot cancel + if invocation_id in _results: + return JSONResponse({ + "invocation_id": invocation_id, + "status": "completed", + "error": "invocation already completed", + }) + + if invocation_id in _tasks: + task = _tasks[invocation_id] + if task.done(): + # Task finished before we could cancel it; treat as completed + _results[invocation_id] = task.result() + del _tasks[invocation_id] + return JSONResponse({ + "invocation_id": invocation_id, + "status": "completed", + "error": "invocation already completed", + }) + task.cancel() + del _tasks[invocation_id] + return JSONResponse({ + "invocation_id": invocation_id, + "status": "cancelled", + }) + + return JSONResponse({"error": "not found"}, status_code=404) + + +if __name__ == "__main__": + server.run() diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/samples/async_invoke_agent/requirements.txt b/sdk/agentserver/azure-ai-agentserver-invocations/samples/async_invoke_agent/requirements.txt new file mode 100644 index 000000000000..bc5cf4644e14 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/samples/async_invoke_agent/requirements.txt @@ -0,0 +1 @@ +azure-ai-agentserver-invocations diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/samples/simple_invoke_agent/requirements.txt b/sdk/agentserver/azure-ai-agentserver-invocations/samples/simple_invoke_agent/requirements.txt new file mode 100644 index 000000000000..bc5cf4644e14 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/samples/simple_invoke_agent/requirements.txt @@ -0,0 +1 @@ +azure-ai-agentserver-invocations diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/samples/simple_invoke_agent/simple_invoke_agent.py b/sdk/agentserver/azure-ai-agentserver-invocations/samples/simple_invoke_agent/simple_invoke_agent.py new file mode 100644 index 000000000000..212585120132 --- /dev/null +++ 
b/sdk/agentserver/azure-ai-agentserver-invocations/samples/simple_invoke_agent/simple_invoke_agent.py @@ -0,0 +1,40 @@ +"""Simple invoke agent example. + +Accepts JSON requests, echoes back with a greeting. + +Usage:: + + # Start the agent + python simple_invoke_agent.py + + # Send a greeting request + curl -X POST http://localhost:8088/invocations -H "Content-Type: application/json" -d '{"name": "Alice"}' + # -> {"greeting": "Hello, Alice!"} +""" +from starlette.requests import Request +from starlette.responses import JSONResponse, Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +server = AgentHost() +invocations = InvocationHandler(server) + + +@invocations.invoke_handler +async def handle_invoke(request: Request) -> Response: + """Process the invocation by echoing a greeting. + + :param request: The raw Starlette request. + :type request: starlette.requests.Request + :return: JSON greeting response. + :rtype: starlette.responses.JSONResponse + """ + data = await request.json() + greeting = f"Hello, {data['name']}!" + return JSONResponse({"greeting": greeting}) + + +if __name__ == "__main__": + server.run() diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/conftest.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/conftest.py new file mode 100644 index 000000000000..603bbcb45cda --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/conftest.py @@ -0,0 +1,208 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Shared fixtures and factory functions for invocations tests.""" +import json +from typing import Any + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import JSONResponse, Response, StreamingResponse + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + +# --------------------------------------------------------------------------- +# Sample OpenAPI spec used by several tests +# --------------------------------------------------------------------------- + +SAMPLE_OPENAPI_SPEC: dict[str, Any] = { + "openapi": "3.0.0", + "info": {"title": "Echo Agent", "version": "1.0.0"}, + "paths": { + "/invocations": { + "post": { + "requestBody": { + "required": True, + "content": { + "application/json": { + "schema": { + "type": "object", + "required": ["message"], + "properties": { + "message": {"type": "string"}, + }, + } + } + }, + }, + "responses": { + "200": { + "description": "OK", + "content": { + "application/json": { + "schema": { + "type": "object", + "properties": { + "reply": {"type": "string"}, + }, + } + } + }, + } + }, + } + } + }, +} + + +# --------------------------------------------------------------------------- +# Factory functions +# --------------------------------------------------------------------------- + + +def _make_echo_agent(**kwargs: Any) -> AgentHost: + """Create an AgentHost whose invoke handler echoes the request body.""" + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + body = await request.body() + return Response(content=body, media_type="application/octet-stream") + + return server + + +def _make_streaming_agent(**kwargs: Any) -> AgentHost: + """Create an AgentHost whose invoke handler returns 3 JSON chunks.""" + server = 
AgentHost(**kwargs) + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> StreamingResponse: + async def generate(): + for i in range(3): + yield json.dumps({"chunk": i}) + "\n" + + return StreamingResponse(generate(), media_type="application/x-ndjson") + + return server + + +def _make_async_storage_agent(**kwargs: Any) -> AgentHost: + """Create an AgentHost with get/cancel handlers and in-memory store.""" + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + store: dict[str, Any] = {} + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + inv_id = request.state.invocation_id + body = await request.body() + store[inv_id] = body + return Response(content=body, media_type="application/octet-stream") + + @invocations.get_invocation_handler + async def get_handler(request: Request) -> Response: + inv_id = request.path_params["invocation_id"] + if inv_id not in store: + return JSONResponse( + {"error": {"code": "not_found", "message": "Not found"}}, + status_code=404, + ) + return Response(content=store[inv_id], media_type="application/octet-stream") + + @invocations.cancel_invocation_handler + async def cancel_handler(request: Request) -> Response: + inv_id = request.path_params["invocation_id"] + if inv_id not in store: + return JSONResponse( + {"error": {"code": "not_found", "message": "Not found"}}, + status_code=404, + ) + del store[inv_id] + return JSONResponse({"status": "cancelled"}) + + return server + + +def _make_validated_agent() -> AgentHost: + """Create an AgentHost with OpenAPI spec.""" + server = AgentHost() + invocations = InvocationHandler( + server, + openapi_spec=SAMPLE_OPENAPI_SPEC, + ) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + data = await request.json() + return JSONResponse({"reply": f"echo: {data['message']}"}) + + return server + + +def _make_failing_agent(**kwargs: Any) -> AgentHost: + 
"""Create an AgentHost whose handler raises ValueError.""" + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + raise ValueError("something went wrong") + + return server + + +# --------------------------------------------------------------------------- +# Fixtures +# --------------------------------------------------------------------------- + + +@pytest.fixture() +def echo_client(): + server = _make_echo_agent() + transport = ASGITransport(app=server.app) + return AsyncClient(transport=transport, base_url="http://testserver") + + +@pytest.fixture() +def streaming_client(): + server = _make_streaming_agent() + transport = ASGITransport(app=server.app) + return AsyncClient(transport=transport, base_url="http://testserver") + + +@pytest.fixture() +def async_storage_server(): + return _make_async_storage_agent() + + +@pytest.fixture() +def async_storage_client(async_storage_server): + transport = ASGITransport(app=async_storage_server.app) + return AsyncClient(transport=transport, base_url="http://testserver") + + +@pytest.fixture() +def validated_client(): + server = _make_validated_agent() + transport = ASGITransport(app=server.app) + return AsyncClient(transport=transport, base_url="http://testserver") + + +@pytest.fixture() +def no_spec_client(): + server = _make_echo_agent() + transport = ASGITransport(app=server.app) + return AsyncClient(transport=transport, base_url="http://testserver") + + +@pytest.fixture() +def failing_client(): + server = _make_failing_agent() + transport = ASGITransport(app=server.app) + return AsyncClient(transport=transport, base_url="http://testserver") diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_decorator_pattern.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_decorator_pattern.py new file mode 100644 index 000000000000..e8ff084d5358 --- /dev/null +++ 
b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_decorator_pattern.py @@ -0,0 +1,252 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Tests for decorator-based handler registration on AgentHost + InvocationHandler.""" +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import JSONResponse, Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +# --------------------------------------------------------------------------- +# invoke_handler stores function +# --------------------------------------------------------------------------- + +def test_invoke_handler_stores_function(): + """@invocations.invoke_handler stores the function on the protocol object.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + assert invocations._invoke_fn is handle + + +# --------------------------------------------------------------------------- +# invoke_handler returns original function +# --------------------------------------------------------------------------- + +def test_invoke_handler_returns_original_function(): + """@invocations.invoke_handler returns the original function.""" + server = AgentHost() + invocations = InvocationHandler(server) + + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + result = invocations.invoke_handler(handle) + assert result is handle + + +# --------------------------------------------------------------------------- +# get_invocation_handler stores function +# --------------------------------------------------------------------------- + +def 
test_get_invocation_handler_stores_function(): + """@invocations.get_invocation_handler stores the function.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.get_invocation_handler + async def get_handler(request: Request) -> Response: + return Response(content=b"ok") + + assert invocations._get_invocation_fn is get_handler + + +# --------------------------------------------------------------------------- +# cancel_invocation_handler stores function +# --------------------------------------------------------------------------- + +def test_cancel_invocation_handler_stores_function(): + """@invocations.cancel_invocation_handler stores the function.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.cancel_invocation_handler + async def cancel_handler(request: Request) -> Response: + return Response(content=b"ok") + + assert invocations._cancel_invocation_fn is cancel_handler + + +# --------------------------------------------------------------------------- +# shutdown_handler stores function +# --------------------------------------------------------------------------- + +def test_shutdown_handler_stores_function(): + """@server.shutdown_handler stores the function on the server.""" + server = AgentHost() + + @server.shutdown_handler + async def on_shutdown(): + pass + + assert server._shutdown_fn is on_shutdown + + +# --------------------------------------------------------------------------- +# Full request flow +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_full_request_flow(): + """Full lifecycle: invoke → get → cancel → get (404).""" + server = AgentHost() + invocations = InvocationHandler(server) + store: dict[str, bytes] = {} + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + body = await request.body() + store[request.state.invocation_id] = body + return Response(content=body, 
media_type="application/octet-stream") + + @invocations.get_invocation_handler + async def get_handler(request: Request) -> Response: + inv_id = request.path_params["invocation_id"] + if inv_id in store: + return Response(content=store[inv_id]) + return JSONResponse({"error": {"code": "not_found", "message": "Not found"}}, status_code=404) + + @invocations.cancel_invocation_handler + async def cancel_handler(request: Request) -> Response: + inv_id = request.path_params["invocation_id"] + if inv_id in store: + del store[inv_id] + return JSONResponse({"status": "cancelled"}) + return JSONResponse({"error": {"code": "not_found", "message": "Not found"}}, status_code=404) + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + # Invoke + resp = await client.post("/invocations", content=b"lifecycle-test") + assert resp.status_code == 200 + inv_id = resp.headers["x-agent-invocation-id"] + + # Get + resp = await client.get(f"/invocations/{inv_id}") + assert resp.status_code == 200 + assert resp.content == b"lifecycle-test" + + # Cancel + resp = await client.post(f"/invocations/{inv_id}/cancel") + assert resp.status_code == 200 + + # Get after cancel + resp = await client.get(f"/invocations/{inv_id}") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# Missing optional handlers return 404 +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_missing_invoke_handler_returns_501(): + """POST /invocations without registered handler returns 501.""" + server = AgentHost() + invocations = InvocationHandler(server) + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 501 + + +@pytest.mark.asyncio +async def 
test_missing_get_handler_returns_404(): + """GET /invocations/{id} without registered handler returns 404.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.get("/invocations/some-id") + assert resp.status_code == 404 + + +@pytest.mark.asyncio +async def test_missing_cancel_handler_returns_404(): + """POST /invocations/{id}/cancel without registered handler returns 404.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations/some-id/cancel") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# Optional handler defaults and overrides +# --------------------------------------------------------------------------- + +def test_optional_handlers_default_none(): + """Get and cancel handlers default to None.""" + server = AgentHost() + invocations = InvocationHandler(server) + assert invocations._get_invocation_fn is None + assert invocations._cancel_invocation_fn is None + + +def test_optional_handler_override(): + """Setting an optional handler replaces None.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.get_invocation_handler + async def get_handler(request: Request) -> Response: + return Response(content=b"ok") + + assert invocations._get_invocation_fn is not None + + +# --------------------------------------------------------------------------- +# Shutdown handler called during lifespan 
+# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_shutdown_handler_called_during_lifespan(): + """Registering a shutdown handler does not interfere with request handling.""" + called = [] + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + @server.shutdown_handler + async def on_shutdown(): + called.append(True) + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 200 + # Note: httpx's ASGITransport does not drive the ASGI lifespan protocol, + # so on_shutdown is not invoked here. Registration itself is verified by + # test_shutdown_handler_stores_function above. + + +# --------------------------------------------------------------------------- +# Config passthrough +# --------------------------------------------------------------------------- + +def test_graceful_shutdown_timeout_passthrough(): + """graceful_shutdown_timeout is passed through to the base class.""" + server = AgentHost(graceful_shutdown_timeout=15) + assert server._graceful_shutdown_timeout == 15 diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_edge_cases.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_edge_cases.py new file mode 100644 index 000000000000..4054f3c4c71c --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_edge_cases.py @@ -0,0 +1,310 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Edge-case tests for AgentHost + InvocationHandler.""" +import asyncio +import uuid + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import Response, StreamingResponse + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler +from conftest import SAMPLE_OPENAPI_SPEC + + +# --------------------------------------------------------------------------- +# Factory helpers for edge cases +# --------------------------------------------------------------------------- + + +def _make_custom_header_agent() -> AgentHost: + """Agent whose handler sets its own x-agent-invocation-id (should be overwritten).""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + resp = Response(content=b"ok") + resp.headers["x-agent-invocation-id"] = "custom-id-from-handler" + return resp + + return server + + +def _make_empty_streaming_agent() -> AgentHost: + """Agent that returns an empty streaming response.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> StreamingResponse: + async def generate(): + return + yield # unreachable; makes this function an async generator + + return StreamingResponse(generate(), media_type="text/plain") + + return server + + +def _make_large_payload_agent() -> AgentHost: + """Agent that echoes large payloads.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + body = await request.body() + return Response(content=body, media_type="application/octet-stream") + + return server + + +# --------------------------------------------------------------------------- +# Method not allowed tests +# 
--------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_invocations_returns_405(): + """GET /invocations returns 405 Method Not Allowed.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.get("/invocations") + assert resp.status_code == 405 + + +@pytest.mark.asyncio +async def test_put_invocations_returns_405(): + """PUT /invocations returns 405 Method Not Allowed.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.put("/invocations", content=b"test") + assert resp.status_code == 405 + + +@pytest.mark.asyncio +async def test_delete_invocation_returns_405(): + """DELETE /invocations/{id} returns 405 Method Not Allowed.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.delete("/invocations/some-id") + assert resp.status_code == 405 + + +@pytest.mark.asyncio +async def test_post_openapi_json_returns_405(): + """POST /invocations/docs/openapi.json returns 405.""" + server = AgentHost() + invocations = InvocationHandler(server, openapi_spec=SAMPLE_OPENAPI_SPEC) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return 
Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations/docs/openapi.json", content=b"{}") + assert resp.status_code == 405 + + +# --------------------------------------------------------------------------- +# Response header tests +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_custom_invocation_id_overwritten(): + """Handler-set x-agent-invocation-id is overwritten by the server.""" + server = _make_custom_header_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + # Server overwrites handler's value with a generated UUID + inv_id = resp.headers["x-agent-invocation-id"] + assert inv_id != "custom-id-from-handler" + uuid.UUID(inv_id) # Should be a valid UUID + + +@pytest.mark.asyncio +async def test_invocation_id_auto_injected(echo_client): + """Invocation ID is auto-injected when not provided.""" + resp = await echo_client.post("/invocations", content=b"test") + assert "x-agent-invocation-id" in resp.headers + + +@pytest.mark.asyncio +async def test_invocation_id_accepted_from_request(echo_client): + """Server accepts invocation ID from request header.""" + custom_id = str(uuid.uuid4()) + resp = await echo_client.post( + "/invocations", + content=b"test", + headers={"x-agent-invocation-id": custom_id}, + ) + assert resp.headers["x-agent-invocation-id"] == custom_id + + +@pytest.mark.asyncio +async def test_invocation_id_generated_when_empty(echo_client): + """When empty invocation ID is sent, server generates one.""" + resp = await echo_client.post( + "/invocations", + content=b"test", + headers={"x-agent-invocation-id": ""}, + ) + inv_id = resp.headers["x-agent-invocation-id"] + uuid.UUID(inv_id) # Should be 
a valid UUID + + +# --------------------------------------------------------------------------- +# Payload edge cases +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_large_payload(): + """Large payload (1MB) is handled correctly.""" + server = _make_large_payload_agent() + transport = ASGITransport(app=server.app) + payload = b"x" * (1024 * 1024) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=payload) + assert resp.status_code == 200 + assert len(resp.content) == 1024 * 1024 + + +@pytest.mark.asyncio +async def test_unicode_payload(echo_client): + """Unicode payload is preserved.""" + text = "Hello, 世界! 🌍" + resp = await echo_client.post("/invocations", content=text.encode("utf-8")) + assert resp.status_code == 200 + assert resp.content.decode("utf-8") == text + + +@pytest.mark.asyncio +async def test_binary_payload(echo_client): + """Binary payload with non-UTF-8 bytes is handled.""" + binary = bytes(range(256)) + resp = await echo_client.post("/invocations", content=binary) + assert resp.status_code == 200 + assert resp.content == binary + + +# --------------------------------------------------------------------------- +# Streaming edge cases +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_empty_streaming(): + """Empty streaming response doesn't crash.""" + server = _make_empty_streaming_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 200 + assert resp.content == b"" + + +@pytest.mark.asyncio +async def test_streaming_has_invocation_id(): + """Streaming response has x-agent-invocation-id header.""" + server = AgentHost() + invocations = 
InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> StreamingResponse: + async def generate(): + yield b"chunk1" + + return StreamingResponse(generate(), media_type="text/plain") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert "x-agent-invocation-id" in resp.headers + + +# --------------------------------------------------------------------------- +# Invocation lifecycle +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_multiple_gets(async_storage_client): + """Multiple GETs for the same invocation return the same result.""" + resp = await async_storage_client.post("/invocations", content=b"multi-get") + inv_id = resp.headers["x-agent-invocation-id"] + + for _ in range(3): + get_resp = await async_storage_client.get(f"/invocations/{inv_id}") + assert get_resp.status_code == 200 + assert get_resp.content == b"multi-get" + + +@pytest.mark.asyncio +async def test_double_cancel(async_storage_client): + """Cancelling twice: second cancel returns 404.""" + resp = await async_storage_client.post("/invocations", content=b"cancel-twice") + inv_id = resp.headers["x-agent-invocation-id"] + + cancel1 = await async_storage_client.post(f"/invocations/{inv_id}/cancel") + assert cancel1.status_code == 200 + + cancel2 = await async_storage_client.post(f"/invocations/{inv_id}/cancel") + assert cancel2.status_code == 404 + + +@pytest.mark.asyncio +async def test_invoke_cancel_get(async_storage_client): + """Invoke → cancel → get returns 404.""" + resp = await async_storage_client.post("/invocations", content=b"icg") + inv_id = resp.headers["x-agent-invocation-id"] + + await async_storage_client.post(f"/invocations/{inv_id}/cancel") + get_resp = await async_storage_client.get(f"/invocations/{inv_id}") + assert 
get_resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# Concurrency +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_concurrent_invocations_get_unique_ids(): + """10 concurrent POSTs each get unique invocation IDs.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + tasks = [client.post("/invocations", content=b"test") for _ in range(10)] + responses = await asyncio.gather(*tasks) + + ids = {r.headers["x-agent-invocation-id"] for r in responses} + assert len(ids) == 10 diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_get_cancel.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_get_cancel.py new file mode 100644 index 000000000000..07e92f1cc0ac --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_get_cancel.py @@ -0,0 +1,129 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Tests for GET /invocations/{id} and POST /invocations/{id}/cancel.""" +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import JSONResponse, Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +# --------------------------------------------------------------------------- +# GET after invoke +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_after_invoke_returns_stored_result(async_storage_client): + """GET /invocations/{id} after invoke returns the stored result.""" + resp = await async_storage_client.post("/invocations", content=b"stored-data") + assert resp.status_code == 200 + inv_id = resp.headers["x-agent-invocation-id"] + + get_resp = await async_storage_client.get(f"/invocations/{inv_id}") + assert get_resp.status_code == 200 + assert get_resp.content == b"stored-data" + + +# --------------------------------------------------------------------------- +# GET unknown ID +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_unknown_id_returns_404(async_storage_client): + """GET /invocations/{unknown} returns 404.""" + resp = await async_storage_client.get("/invocations/unknown-id-12345") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# Cancel after invoke +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_cancel_after_invoke_returns_cancelled(async_storage_client): + """POST /invocations/{id}/cancel after invoke returns cancelled status.""" + resp = await async_storage_client.post("/invocations", content=b"cancel-me") + inv_id = resp.headers["x-agent-invocation-id"] + + 
cancel_resp = await async_storage_client.post(f"/invocations/{inv_id}/cancel") + assert cancel_resp.status_code == 200 + assert cancel_resp.json()["status"] == "cancelled" + + +# --------------------------------------------------------------------------- +# Cancel unknown ID +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_cancel_unknown_id_returns_404(async_storage_client): + """POST /invocations/{unknown}/cancel returns 404.""" + resp = await async_storage_client.post("/invocations/unknown-id-12345/cancel") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# GET after cancel +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_after_cancel_returns_404(async_storage_client): + """GET after cancel returns 404 (data has been removed).""" + resp = await async_storage_client.post("/invocations", content=b"temp") + inv_id = resp.headers["x-agent-invocation-id"] + await async_storage_client.post(f"/invocations/{inv_id}/cancel") + + get_resp = await async_storage_client.get(f"/invocations/{inv_id}") + assert get_resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# GET error returns 500 (inline AgentHost) +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_invocation_error_returns_500(): + """GET handler raising an exception returns 500.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + @invocations.get_invocation_handler + async def get_handler(request: Request) -> Response: + raise RuntimeError("get failed") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, 
base_url="http://testserver") as client: + resp = await client.get("/invocations/some-id") + assert resp.status_code == 500 + assert resp.json()["error"]["code"] == "internal_error" + + +# --------------------------------------------------------------------------- +# Cancel error returns 500 (inline AgentHost) +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_cancel_invocation_error_returns_500(): + """Cancel handler raising an exception returns 500.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + @invocations.cancel_invocation_handler + async def cancel_handler(request: Request) -> Response: + raise RuntimeError("cancel failed") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations/some-id/cancel") + assert resp.status_code == 500 + assert resp.json()["error"]["code"] == "internal_error" diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_graceful_shutdown.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_graceful_shutdown.py new file mode 100644 index 000000000000..faf6a224b8ef --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_graceful_shutdown.py @@ -0,0 +1,229 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Tests for graceful shutdown with AgentHost.""" +import asyncio +import logging + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +# --------------------------------------------------------------------------- +# Helpers +# --------------------------------------------------------------------------- + +def _make_server_with_shutdown(**kwargs) -> tuple[AgentHost, list]: + """Create AgentHost with a tracked shutdown handler.""" + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + calls: list[str] = [] + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + @server.shutdown_handler + async def on_shutdown(): + calls.append("shutdown") + + return server, calls + + +# --------------------------------------------------------------------------- +# Shutdown handler registration +# --------------------------------------------------------------------------- + +def test_shutdown_handler_registered(): + """Shutdown handler is stored on the server.""" + server, _ = _make_server_with_shutdown() + assert server._shutdown_fn is not None + + +def test_shutdown_handler_not_registered(): + """Without @shutdown_handler, _shutdown_fn is None.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + assert server._shutdown_fn is None + + +# --------------------------------------------------------------------------- +# ASGI lifespan helper +# --------------------------------------------------------------------------- + +async def _drive_lifespan(app): + """Drive a full ASGI lifespan startup+shutdown 
cycle.""" + scope = {"type": "lifespan"} + startup_done = asyncio.Event() + shutdown_done = asyncio.Event() + + async def receive(): + if not startup_done.is_set(): + startup_done.set() + return {"type": "lifespan.startup"} + await asyncio.sleep(0) + return {"type": "lifespan.shutdown"} + + async def send(message): + if message["type"] == "lifespan.shutdown.complete": + shutdown_done.set() + + await app(scope, receive, send) + return shutdown_done.is_set() + + +# --------------------------------------------------------------------------- +# Shutdown handler called during lifespan +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_shutdown_handler_called_on_lifespan_exit(): + """Shutdown handler runs when the ASGI lifespan exits.""" + server, calls = _make_server_with_shutdown() + + # Drive the lifespan via raw ASGI protocol + completed = await _drive_lifespan(server.app) + assert completed + assert "shutdown" in calls + + +# --------------------------------------------------------------------------- +# Shutdown handler timeout +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_shutdown_handler_timeout(caplog): + """Shutdown handler that exceeds timeout is warned about.""" + server = AgentHost(graceful_shutdown_timeout=1) + invocations = InvocationHandler(server) + calls: list[str] = [] + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + @server.shutdown_handler + async def on_shutdown(): + await asyncio.sleep(10) + calls.append("completed") + + with caplog.at_level(logging.WARNING, logger="azure.ai.agentserver"): + await _drive_lifespan(server.app) + + # Shutdown should have been interrupted + assert "completed" not in calls + # Logger should have warned about timeout + assert any("did not complete" in r.message.lower() or "timeout" in r.message.lower() for r in 
caplog.records) + + +# --------------------------------------------------------------------------- +# Shutdown handler exception +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_shutdown_handler_exception(caplog): + """Shutdown handler that raises is caught and logged.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + @server.shutdown_handler + async def on_shutdown(): + raise RuntimeError("shutdown exploded") + + with caplog.at_level(logging.ERROR, logger="azure.ai.agentserver"): + await _drive_lifespan(server.app) + + # Should have logged the exception + assert any("on_shutdown" in r.message.lower() or "error" in r.message.lower() for r in caplog.records) + + +# --------------------------------------------------------------------------- +# Graceful shutdown timeout config +# --------------------------------------------------------------------------- + +def test_default_graceful_shutdown_timeout(): + """Default graceful shutdown timeout is 30 seconds.""" + server = AgentHost() + assert server._graceful_shutdown_timeout == 30 + + +def test_custom_graceful_shutdown_timeout(): + """Custom graceful_shutdown_timeout is stored.""" + server = AgentHost(graceful_shutdown_timeout=60) + assert server._graceful_shutdown_timeout == 60 + + +def test_zero_graceful_shutdown_timeout(): + """Zero timeout disables the drain period.""" + server = AgentHost(graceful_shutdown_timeout=0) + assert server._graceful_shutdown_timeout == 0 + + +# --------------------------------------------------------------------------- +# Health endpoint accessible during normal operation +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_health_endpoint_during_operation(): + """GET /healthy returns 200 during normal operation.""" + 
server, _ = _make_server_with_shutdown() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.get("/healthy") + assert resp.status_code == 200 + assert resp.json() == {"status": "healthy"} + + +# --------------------------------------------------------------------------- +# No shutdown handler is no-op +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_no_shutdown_handler_is_noop(): + """Without a shutdown handler, lifespan exit succeeds silently.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 200 + + # Lifespan exit with no handler registered should complete without error + completed = await _drive_lifespan(server.app) + assert completed + + +# --------------------------------------------------------------------------- +# Multiple requests before shutdown +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_multiple_requests_before_shutdown(): + """Multiple requests can be served, then shutdown handler runs.""" + server, calls = _make_server_with_shutdown() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + for i in range(5): + resp = await client.post("/invocations", content=f"request-{i}".encode()) + assert resp.status_code == 200 + + # Drive the lifespan to trigger shutdown + completed = await _drive_lifespan(server.app) + assert completed + assert "shutdown" in calls diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_invoke.py 
b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_invoke.py new file mode 100644 index 000000000000..5de15efd63cc --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_invoke.py @@ -0,0 +1,132 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Tests for the POST /invocations invoke dispatch.""" +import json +import uuid + +import pytest + + +# --------------------------------------------------------------------------- +# Echo body +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_invoke_echo_body(echo_client): + """POST /invocations echoes the request body.""" + resp = await echo_client.post("/invocations", content=b"hello world") + assert resp.status_code == 200 + assert resp.content == b"hello world" + + +# --------------------------------------------------------------------------- +# Headers +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_invoke_returns_invocation_id_header(echo_client): + """Response includes x-agent-invocation-id header.""" + resp = await echo_client.post("/invocations", content=b"test") + assert "x-agent-invocation-id" in resp.headers + # Should be a valid UUID + uuid.UUID(resp.headers["x-agent-invocation-id"]) + + +@pytest.mark.asyncio +async def test_invoke_returns_session_id_header(echo_client): + """Response includes x-agent-session-id header on POST /invocations.""" + resp = await echo_client.post("/invocations", content=b"test") + assert "x-agent-session-id" in resp.headers + # Should be a valid UUID (auto-generated) + uuid.UUID(resp.headers["x-agent-session-id"]) + + +@pytest.mark.asyncio +async def test_invoke_unique_invocation_ids(echo_client): + """Each invoke gets a unique invocation ID.""" + ids = set() + for _ in 
range(5): + resp = await echo_client.post("/invocations", content=b"test") + ids.add(resp.headers["x-agent-invocation-id"]) + assert len(ids) == 5 + + +@pytest.mark.asyncio +async def test_invoke_accepts_custom_invocation_id(echo_client): + """If the request sends x-agent-invocation-id, the server echoes it.""" + custom_id = str(uuid.uuid4()) + resp = await echo_client.post( + "/invocations", + content=b"test", + headers={"x-agent-invocation-id": custom_id}, + ) + assert resp.headers["x-agent-invocation-id"] == custom_id + + +# --------------------------------------------------------------------------- +# Streaming +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_streaming_returns_chunks(streaming_client): + """Streaming handler returns 3 JSON chunks.""" + resp = await streaming_client.post("/invocations", content=b"") + assert resp.status_code == 200 + lines = resp.text.strip().split("\n") + assert len(lines) == 3 + for i, line in enumerate(lines): + assert json.loads(line) == {"chunk": i} + + +@pytest.mark.asyncio +async def test_streaming_has_invocation_id_header(streaming_client): + """Streaming response includes invocation ID header.""" + resp = await streaming_client.post("/invocations", content=b"") + assert "x-agent-invocation-id" in resp.headers + uuid.UUID(resp.headers["x-agent-invocation-id"]) + + +# --------------------------------------------------------------------------- +# Empty body +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_invoke_empty_body(echo_client): + """Empty body doesn't crash the server.""" + resp = await echo_client.post("/invocations", content=b"") + assert resp.status_code == 200 + assert resp.content == b"" + + +# --------------------------------------------------------------------------- +# Error handling +# --------------------------------------------------------------------------- + 
+@pytest.mark.asyncio +async def test_invoke_error_returns_500(failing_client): + """Handler exception returns 500 with generic message.""" + resp = await failing_client.post("/invocations", content=b"test") + assert resp.status_code == 500 + body = resp.json() + assert body["error"]["code"] == "internal_error" + assert body["error"]["message"] == "Internal server error" + + +@pytest.mark.asyncio +async def test_invoke_error_has_invocation_id(failing_client): + """Error response still includes invocation ID header.""" + resp = await failing_client.post("/invocations", content=b"test") + assert "x-agent-invocation-id" in resp.headers + + +# --------------------------------------------------------------------------- +# Error detail redaction +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_error_hides_details_by_default(failing_client): + """Exception message is hidden in error responses.""" + resp = await failing_client.post("/invocations", content=b"") + body = resp.json() + assert "something went wrong" not in body["error"]["message"] diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_multimodal_protocol.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_multimodal_protocol.py new file mode 100644 index 000000000000..c8c3fc12d245 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_multimodal_protocol.py @@ -0,0 +1,279 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Tests for multi-modality payloads with AgentHost + InvocationHandler.""" +import json + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import JSONResponse, Response, StreamingResponse + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +# --------------------------------------------------------------------------- +# Helper: content-type echo agent +# --------------------------------------------------------------------------- + +def _make_content_type_echo_agent() -> AgentHost: + """Agent that echoes body and returns the content-type it received.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + body = await request.body() + ct = request.headers.get("content-type", "unknown") + return Response( + content=body, + media_type=ct, + headers={"x-received-content-type": ct}, + ) + + return server + + +def _make_status_code_agent() -> AgentHost: + """Agent that returns a custom HTTP status code from query param.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + status = int(request.query_params.get("status", "200")) + body = await request.body() + return Response(content=body, status_code=status) + + return server + + +def _make_sse_agent() -> AgentHost: + """Agent that returns SSE-formatted streaming response.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> StreamingResponse: + async def generate(): + for i in range(3): + yield f"data: {json.dumps({'event': i})}\n\n" + + return StreamingResponse(generate(), media_type="text/event-stream") + + return server + + +# 
--------------------------------------------------------------------------- +# Various content types +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_png_content_type(): + """PNG content type is accepted and echoed.""" + server = _make_content_type_echo_agent() + transport = ASGITransport(app=server.app) + fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 100 + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=fake_png, + headers={"content-type": "image/png"}, + ) + assert resp.status_code == 200 + assert resp.headers["x-received-content-type"] == "image/png" + assert resp.content == fake_png + + +@pytest.mark.asyncio +async def test_jpeg_content_type(): + """JPEG content type is accepted.""" + server = _make_content_type_echo_agent() + transport = ASGITransport(app=server.app) + fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 100 + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=fake_jpeg, + headers={"content-type": "image/jpeg"}, + ) + assert resp.status_code == 200 + assert resp.headers["x-received-content-type"] == "image/jpeg" + + +@pytest.mark.asyncio +async def test_wav_content_type(): + """WAV audio content type is accepted.""" + server = _make_content_type_echo_agent() + transport = ASGITransport(app=server.app) + fake_wav = b"RIFF" + b"\x00" * 100 + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=fake_wav, + headers={"content-type": "audio/wav"}, + ) + assert resp.status_code == 200 + assert resp.headers["x-received-content-type"] == "audio/wav" + + +@pytest.mark.asyncio +async def test_pdf_content_type(): + """PDF content type is accepted.""" + server = _make_content_type_echo_agent() + transport = 
ASGITransport(app=server.app) + fake_pdf = b"%PDF-1.4" + b"\x00" * 100 + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=fake_pdf, + headers={"content-type": "application/pdf"}, + ) + assert resp.status_code == 200 + assert resp.headers["x-received-content-type"] == "application/pdf" + + +@pytest.mark.asyncio +async def test_octet_stream_content_type(): + """application/octet-stream is accepted.""" + server = _make_content_type_echo_agent() + transport = ASGITransport(app=server.app) + binary = bytes(range(256)) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=binary, + headers={"content-type": "application/octet-stream"}, + ) + assert resp.status_code == 200 + assert resp.content == binary + + +@pytest.mark.asyncio +async def test_text_plain_content_type(): + """text/plain content type is accepted.""" + server = _make_content_type_echo_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=b"Hello, world!", + headers={"content-type": "text/plain"}, + ) + assert resp.status_code == 200 + assert resp.content == b"Hello, world!" 
+ + +# --------------------------------------------------------------------------- +# Custom HTTP status codes +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_custom_status_200(): + """Handler returning 200.""" + server = _make_status_code_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations?status=200", content=b"ok") + assert resp.status_code == 200 + + +@pytest.mark.asyncio +async def test_custom_status_201(): + """Handler returning 201.""" + server = _make_status_code_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations?status=201", content=b"created") + assert resp.status_code == 201 + + +@pytest.mark.asyncio +async def test_custom_status_202(): + """Handler returning 202.""" + server = _make_status_code_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations?status=202", content=b"accepted") + assert resp.status_code == 202 + + +# --------------------------------------------------------------------------- +# Query strings +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_query_string_passed_to_handler(): + """Query string params are accessible in the handler.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + name = request.query_params.get("name", "unknown") + return JSONResponse({"name": name}) + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await 
client.post("/invocations?name=Alice", content=b"") + assert resp.status_code == 200 + assert resp.json()["name"] == "Alice" + + +# --------------------------------------------------------------------------- +# SSE streaming +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_sse_streaming(): + """SSE-formatted streaming response works.""" + server = _make_sse_agent() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"") + assert resp.status_code == 200 + assert "text/event-stream" in resp.headers.get("content-type", "") + lines = [line for line in resp.text.split("\n") if line.startswith("data:")] + assert len(lines) == 3 + + +# --------------------------------------------------------------------------- +# Large binary payloads +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_large_binary_payload(): + """Large binary payload (512KB) is handled correctly.""" + server = _make_content_type_echo_agent() + transport = ASGITransport(app=server.app) + payload = bytes(range(256)) * 2048 # 512KB + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations", + content=payload, + headers={"content-type": "application/octet-stream"}, + ) + assert resp.status_code == 200 + assert len(resp.content) == len(payload) + + +# --------------------------------------------------------------------------- +# Health endpoint (updated from /liveness to /healthy) +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_health_endpoint_returns_200(): + """GET /healthy returns 200 with healthy status.""" + server = AgentHost() + invocations = InvocationHandler(server) + + 
@invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.get("/healthy") + assert resp.status_code == 200 + assert resp.json() == {"status": "healthy"} diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_request_limits.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_request_limits.py new file mode 100644 index 000000000000..1625cdb84e07 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_request_limits.py @@ -0,0 +1,45 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Tests for request processing (timeout feature removed per spec alignment).""" +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + + +# --------------------------------------------------------------------------- +# AgentHost no longer accepts request_timeout +# --------------------------------------------------------------------------- + +def test_no_request_timeout_parameter(): + """AgentHost no longer accepts request_timeout.""" + with pytest.raises(TypeError): + AgentHost(request_timeout=10) + + +# --------------------------------------------------------------------------- +# Slow invoke completes without timeout +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_slow_invoke_completes(): + """Without timeout, handler runs to completion.""" + import asyncio + + server = AgentHost() + invocations = InvocationHandler(server) 
+ + @invocations.invoke_handler + async def handle(request: Request) -> Response: + await asyncio.sleep(0.1) + return Response(content=b"done") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 200 + assert resp.content == b"done" diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_server_routes.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_server_routes.py new file mode 100644 index 000000000000..405735f10164 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_server_routes.py @@ -0,0 +1,103 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Tests for basic server route registration with AgentHost + InvocationHandler.""" +import uuid + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler +from conftest import SAMPLE_OPENAPI_SPEC + + +# --------------------------------------------------------------------------- +# POST /invocations returns 200 +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_post_invocations_returns_200(echo_client): + """POST /invocations returns 200 OK.""" + resp = await echo_client.post("/invocations", content=b"test") + assert resp.status_code == 200 + + +# --------------------------------------------------------------------------- +# POST /invocations returns invocation-id header (UUID) +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def 
test_post_invocations_returns_uuid_invocation_id(echo_client): + """POST /invocations returns a valid UUID in x-agent-invocation-id.""" + resp = await echo_client.post("/invocations", content=b"test") + inv_id = resp.headers["x-agent-invocation-id"] + parsed = uuid.UUID(inv_id) + assert str(parsed) == inv_id + + +# --------------------------------------------------------------------------- +# GET openapi spec returns 404 when not set +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_openapi_spec_returns_404_when_not_set(no_spec_client): + """GET /invocations/docs/openapi.json returns 404 when no spec registered.""" + resp = await no_spec_client.get("/invocations/docs/openapi.json") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# GET openapi spec returns spec when registered +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_openapi_spec_returns_spec_when_registered(): + """GET /invocations/docs/openapi.json returns the spec when registered.""" + server = AgentHost() + invocations = InvocationHandler(server, openapi_spec=SAMPLE_OPENAPI_SPEC) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.get("/invocations/docs/openapi.json") + assert resp.status_code == 200 + assert resp.json() == SAMPLE_OPENAPI_SPEC + + +# --------------------------------------------------------------------------- +# GET /invocations/{id} returns 404 default +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_invocation_returns_404_default(echo_client): + """GET /invocations/{id} returns 404 when 
no get handler registered.""" + resp = await echo_client.get("/invocations/some-id") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# POST /invocations/{id}/cancel returns 404 default +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_cancel_invocation_returns_404_default(echo_client): + """POST /invocations/{id}/cancel returns 404 when no cancel handler.""" + resp = await echo_client.post("/invocations/some-id/cancel") + assert resp.status_code == 404 + + +# --------------------------------------------------------------------------- +# Unknown route returns 404 +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_unknown_route_returns_404(echo_client): + """Unknown route returns 404.""" + resp = await echo_client.get("/nonexistent") + assert resp.status_code == 404 diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_session_id.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_session_id.py new file mode 100644 index 000000000000..23609ef1ecc9 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_session_id.py @@ -0,0 +1,112 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- +"""Tests for session ID resolution and x-agent-session-id header.""" +import os +import uuid +from unittest.mock import patch + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler +from azure.ai.agentserver.invocations._constants import InvocationConstants + + +# --------------------------------------------------------------------------- +# Constants +# --------------------------------------------------------------------------- + +def test_session_id_header_constant(): + """SESSION_ID_HEADER constant is correct.""" + assert InvocationConstants.SESSION_ID_HEADER == "x-agent-session-id" + + +# --------------------------------------------------------------------------- +# POST /invocations response has x-agent-session-id header +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_post_invocations_has_session_id_header(echo_client): + """POST /invocations response includes x-agent-session-id header.""" + resp = await echo_client.post("/invocations", content=b"test") + assert "x-agent-session-id" in resp.headers + # Auto-generated should be a valid UUID + uuid.UUID(resp.headers["x-agent-session-id"]) + + +# --------------------------------------------------------------------------- +# POST /invocations with query param uses that value +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_post_invocations_with_query_param(): + """POST /invocations with agent_session_id query param uses that value.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + 
+ transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post( + "/invocations?agent_session_id=my-custom-session", + content=b"test", + ) + assert resp.headers["x-agent-session-id"] == "my-custom-session" + + +# --------------------------------------------------------------------------- +# POST /invocations with env var +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_post_invocations_uses_env_var(): + """POST /invocations uses FOUNDRY_AGENT_SESSION_ID env var when no query param.""" + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + with patch.dict(os.environ, {"FOUNDRY_AGENT_SESSION_ID": "env-session"}): + resp = await client.post("/invocations", content=b"test") + assert resp.headers["x-agent-session-id"] == "env-session" + + +# --------------------------------------------------------------------------- +# GET /invocations/{id} does NOT have x-agent-session-id header +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_invocation_no_session_id_header(async_storage_client): + """GET /invocations/{id} does NOT include x-agent-session-id.""" + resp = await async_storage_client.post("/invocations", content=b"data") + inv_id = resp.headers["x-agent-invocation-id"] + + get_resp = await async_storage_client.get(f"/invocations/{inv_id}") + assert get_resp.status_code == 200 + assert "x-agent-session-id" not in get_resp.headers + + +# --------------------------------------------------------------------------- +# POST /invocations/{id}/cancel does NOT have x-agent-session-id 
header +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_cancel_invocation_no_session_id_header(async_storage_client): + """POST /invocations/{id}/cancel does NOT include x-agent-session-id.""" + resp = await async_storage_client.post("/invocations", content=b"data") + inv_id = resp.headers["x-agent-invocation-id"] + + cancel_resp = await async_storage_client.post(f"/invocations/{inv_id}/cancel") + assert cancel_resp.status_code == 200 + assert "x-agent-session-id" not in cancel_resp.headers diff --git a/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_tracing.py b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_tracing.py new file mode 100644 index 000000000000..a5d8a78ff9a8 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-invocations/tests/test_tracing.py @@ -0,0 +1,519 @@ +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# --------------------------------------------------------- +"""Tests for OpenTelemetry tracing in the invocations protocol.""" +import os +import uuid +from unittest.mock import patch + +import pytest +from httpx import ASGITransport, AsyncClient +from starlette.requests import Request +from starlette.responses import JSONResponse, Response, StreamingResponse + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler + +# --------------------------------------------------------------------------- +# Module-level OTel setup with in-memory exporter +# --------------------------------------------------------------------------- +# We use the real OTel SDK to capture spans in memory. 
+ +try: + from opentelemetry import trace + from opentelemetry.sdk.trace import TracerProvider as SdkTracerProvider + from opentelemetry.sdk.trace.export import SimpleSpanProcessor + from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter + + _HAS_OTEL = True +except ImportError: + _HAS_OTEL = False + +# Module-level provider so all tests share the same exporter +if _HAS_OTEL: + _MODULE_EXPORTER = InMemorySpanExporter() + _MODULE_PROVIDER = SdkTracerProvider() + _MODULE_PROVIDER.add_span_processor(SimpleSpanProcessor(_MODULE_EXPORTER)) + trace.set_tracer_provider(_MODULE_PROVIDER) +else: + _MODULE_EXPORTER = None + _MODULE_PROVIDER = None + +pytestmark = pytest.mark.skipif(not _HAS_OTEL, reason="opentelemetry not installed") + + +@pytest.fixture(autouse=True) +def _clear_spans(): + """Clear exported spans before each test.""" + if _MODULE_EXPORTER: + _MODULE_EXPORTER.clear() + + +def _get_spans(): + """Return all captured spans.""" + if _MODULE_EXPORTER: + return _MODULE_EXPORTER.get_finished_spans() + return [] + + +# --------------------------------------------------------------------------- +# Helper: create tracing-enabled server +# --------------------------------------------------------------------------- + +def _make_tracing_server(**kwargs): + """Create an AgentHost with tracing enabled.""" + with patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor"): + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + body = await request.body() + return Response(content=body, media_type="application/octet-stream") + + return server + + +def _make_tracing_server_with_get_cancel(**kwargs): + """Create a tracing-enabled server with get/cancel handlers.""" + with patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor"): + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + + store:
dict[str, bytes] = {} + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + body = await request.body() + store[request.state.invocation_id] = body + return Response(content=body, media_type="application/octet-stream") + + @invocations.get_invocation_handler + async def get_handler(request: Request) -> Response: + inv_id = request.path_params["invocation_id"] + if inv_id in store: + return Response(content=store[inv_id]) + return JSONResponse({"error": {"code": "not_found", "message": "Not found"}}, status_code=404) + + @invocations.cancel_invocation_handler + async def cancel_handler(request: Request) -> Response: + inv_id = request.path_params["invocation_id"] + if inv_id in store: + del store[inv_id] + return JSONResponse({"status": "cancelled"}) + return JSONResponse({"error": {"code": "not_found", "message": "Not found"}}, status_code=404) + + return server + + +def _make_failing_tracing_server(**kwargs): + """Create a tracing-enabled server whose handler raises.""" + with patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor"): + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + raise ValueError("tracing error test") + + return server + + +def _make_streaming_tracing_server(**kwargs): + """Create a tracing-enabled server with streaming response.""" + with patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor"): + server = AgentHost(**kwargs) + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> StreamingResponse: + async def generate(): + yield b"chunk1\n" + yield b"chunk2\n" + + return StreamingResponse(generate(), media_type="text/plain") + + return server + + +# --------------------------------------------------------------------------- +# Tracing disabled by default +# 
--------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_tracing_disabled_by_default(): + """No spans are created when tracing is not enabled.""" + if _MODULE_EXPORTER: + _MODULE_EXPORTER.clear() + + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + # No spans should be created (server has no tracing helper) + # The module-level provider may capture unrelated spans, + # but none should be from our server + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) == 0 + + +# --------------------------------------------------------------------------- +# Tracing enabled creates invoke span with correct name +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_tracing_enabled_creates_invoke_span(): + """Tracing enabled creates a span named 'invoke_agent'.""" + server = _make_tracing_server() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + assert invoke_spans[0].name.startswith("invoke_agent") + + +# --------------------------------------------------------------------------- +# Invoke error records exception +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_invoke_error_records_exception(): + """When handler raises, the span records the exception.""" + server 
= _make_failing_tracing_server() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 500 + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + span = invoke_spans[0] + # Should have error status + assert span.status.status_code.name == "ERROR" + + +# --------------------------------------------------------------------------- +# GET/cancel create spans +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_get_invocation_creates_span(): + """GET /invocations/{id} creates a span.""" + server = _make_tracing_server_with_get_cancel() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"data") + inv_id = resp.headers["x-agent-invocation-id"] + await client.get(f"/invocations/{inv_id}") + + spans = _get_spans() + get_spans = [s for s in spans if "get_invocation" in s.name] + assert len(get_spans) >= 1 + + +@pytest.mark.asyncio +async def test_cancel_invocation_creates_span(): + """POST /invocations/{id}/cancel creates a span.""" + server = _make_tracing_server_with_get_cancel() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"data") + inv_id = resp.headers["x-agent-invocation-id"] + await client.post(f"/invocations/{inv_id}/cancel") + + spans = _get_spans() + cancel_spans = [s for s in spans if "cancel_invocation" in s.name] + assert len(cancel_spans) >= 1 + + +# --------------------------------------------------------------------------- +# Tracing via env var +# 
--------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_tracing_via_appinsights_env_var(): + """Tracing is enabled when APPLICATIONINSIGHTS_CONNECTION_STRING is set.""" + with patch.dict(os.environ, {"APPLICATIONINSIGHTS_CONNECTION_STRING": "InstrumentationKey=test"}): + with patch("azure.ai.agentserver.core._tracing.TracingHelper._setup_azure_monitor"): + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + + +# --------------------------------------------------------------------------- +# No tracing when no endpoints configured +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_no_tracing_when_no_endpoints(): + """Tracing is disabled when no connection string or OTLP endpoint is set.""" + env = os.environ.copy() + env.pop("APPLICATIONINSIGHTS_CONNECTION_STRING", None) + env.pop("OTEL_EXPORTER_OTLP_ENDPOINT", None) + with patch.dict(os.environ, env, clear=True): + server = AgentHost() + invocations = InvocationHandler(server) + + @invocations.invoke_handler + async def handle(request: Request) -> Response: + return Response(content=b"ok") + + if _MODULE_EXPORTER: + _MODULE_EXPORTER.clear() + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) == 0 + + +# 
--------------------------------------------------------------------------- +# Traceparent propagation +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_traceparent_propagation(): + """Server propagates traceparent header into span context.""" + server = _make_tracing_server() + transport = ASGITransport(app=server.app) + + # Create a traceparent + trace_id_hex = uuid.uuid4().hex + span_id_hex = uuid.uuid4().hex[:16] + traceparent = f"00-{trace_id_hex}-{span_id_hex}-01" + + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post( + "/invocations", + content=b"test", + headers={"traceparent": traceparent}, + ) + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + span = invoke_spans[0] + # The span should have the same trace ID as the traceparent + actual_trace_id = format(span.context.trace_id, "032x") + assert actual_trace_id == trace_id_hex + + +# --------------------------------------------------------------------------- +# Streaming spans +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_streaming_creates_span(): + """Streaming response creates and completes a span.""" + server = _make_streaming_tracing_server() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + assert resp.status_code == 200 + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + + +# --------------------------------------------------------------------------- +# GenAI attributes on invoke span +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def 
test_genai_attributes_on_invoke_span(): + """Invoke span has GenAI semantic convention attributes.""" + server = _make_tracing_server() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + attrs = dict(invoke_spans[0].attributes) + + assert attrs.get("gen_ai.provider.name") == "AzureAI Hosted Agents" + assert attrs.get("gen_ai.system") == "azure.ai.agentserver" + assert attrs.get("service.name") == "azure.ai.agentserver" + + +# --------------------------------------------------------------------------- +# Session ID in gen_ai.conversation.id +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_session_id_in_conversation_id(): + """Session ID is set as gen_ai.conversation.id on invoke span.""" + server = _make_tracing_server() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post( + "/invocations?agent_session_id=test-session", + content=b"test", + ) + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + attrs = dict(invoke_spans[0].attributes) + assert attrs.get("gen_ai.conversation.id") == "test-session" + + +# --------------------------------------------------------------------------- +# GenAI attributes on get_invocation span +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_genai_attributes_on_get_span(): + """GET invocation span has GenAI attributes.""" + server = _make_tracing_server_with_get_cancel() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, 
base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"data") + inv_id = resp.headers["x-agent-invocation-id"] + await client.get(f"/invocations/{inv_id}") + + spans = _get_spans() + get_spans = [s for s in spans if "get_invocation" in s.name] + assert len(get_spans) >= 1 + attrs = dict(get_spans[0].attributes) + assert attrs.get("gen_ai.system") == "azure.ai.agentserver" + assert attrs.get("gen_ai.provider.name") == "AzureAI Hosted Agents" + + +# --------------------------------------------------------------------------- +# Namespaced invocation_id attribute +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_namespaced_invocation_id_attribute(): + """Invoke span has azure.ai.agentserver.invocations.invocation_id.""" + server = _make_tracing_server() + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + resp = await client.post("/invocations", content=b"test") + inv_id = resp.headers["x-agent-invocation-id"] + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + attrs = dict(invoke_spans[0].attributes) + assert attrs.get("azure.ai.agentserver.invocations.invocation_id") == inv_id + + +# --------------------------------------------------------------------------- +# Baggage tests +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_baggage_leaf_customer_span_id(): + """Baggage leaf_customer_span_id overrides parent span ID.""" + server = _make_tracing_server() + transport = ASGITransport(app=server.app) + + trace_id_hex = uuid.uuid4().hex + original_span_id = uuid.uuid4().hex[:16] + leaf_span_id = uuid.uuid4().hex[:16] + traceparent = f"00-{trace_id_hex}-{original_span_id}-01" + baggage = f"leaf_customer_span_id={leaf_span_id}" + + async with 
AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post( + "/invocations", + content=b"test", + headers={ + "traceparent": traceparent, + "baggage": baggage, + }, + ) + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + span = invoke_spans[0] + # The parent span ID should be overridden to leaf_span_id + if span.parent is not None: + actual_parent_span_id = format(span.parent.span_id, "016x") + assert actual_parent_span_id == leaf_span_id + + +# --------------------------------------------------------------------------- +# Agent name/version in span names +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_agent_name_in_span_name(): + """Agent name from env var appears in span name.""" + with patch.dict(os.environ, { + "FOUNDRY_AGENT_NAME": "my-agent", + "FOUNDRY_AGENT_VERSION": "2.0", + }): + server = _make_tracing_server() + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + spans = _get_spans() + invoke_spans = [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + assert "my-agent" in invoke_spans[0].name + assert "2.0" in invoke_spans[0].name + + +@pytest.mark.asyncio +async def test_agent_name_only_in_span_name(): + """Agent name without version in span name.""" + env_override = {"FOUNDRY_AGENT_NAME": "solo-agent"} + env_copy = os.environ.copy() + env_copy.pop("FOUNDRY_AGENT_VERSION", None) + env_copy.update(env_override) + with patch.dict(os.environ, env_copy, clear=True): + server = _make_tracing_server() + + transport = ASGITransport(app=server.app) + async with AsyncClient(transport=transport, base_url="http://testserver") as client: + await client.post("/invocations", content=b"test") + + spans = _get_spans() + invoke_spans 
= [s for s in spans if "invoke_agent" in s.name] + assert len(invoke_spans) >= 1 + assert "solo-agent" in invoke_spans[0].name + + +# --------------------------------------------------------------------------- +# Project endpoint attribute +# --------------------------------------------------------------------------- + +@pytest.mark.asyncio +async def test_project_endpoint_env_var(): + """FOUNDRY_PROJECT_ENDPOINT constant matches the expected env var name.""" + from azure.ai.agentserver.core import Constants + assert Constants.FOUNDRY_PROJECT_ENDPOINT == "FOUNDRY_PROJECT_ENDPOINT" diff --git a/sdk/agentserver/azure-ai-agentserver-langgraph/pyproject.toml b/sdk/agentserver/azure-ai-agentserver-langgraph/pyproject.toml index 5552ff8233d2..cb7f62909ecd 100644 --- a/sdk/agentserver/azure-ai-agentserver-langgraph/pyproject.toml +++ b/sdk/agentserver/azure-ai-agentserver-langgraph/pyproject.toml @@ -22,7 +22,7 @@ dependencies = [ "azure-ai-agentserver-core", "langchain>0.3.5", "langchain-openai>0.3.10", - "langchain-azure-ai[opentelemetry]>=0.1.4", + "langchain-azure-ai[opentelemetry]>=0.1.4,<=1.0.62", "langgraph>0.5.0", "opentelemetry-exporter-otlp-proto-http", ] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/LICENSE b/sdk/agentserver/azure-ai-agentserver-responses/LICENSE new file mode 100644 index 000000000000..63447fd8bbbf --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/LICENSE @@ -0,0 +1,21 @@ +Copyright (c) Microsoft Corporation. 
+ +MIT License + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE.
\ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/MANIFEST.in b/sdk/agentserver/azure-ai-agentserver-responses/MANIFEST.in new file mode 100644 index 000000000000..59f874c668d6 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/MANIFEST.in @@ -0,0 +1,9 @@ +include *.md +include LICENSE +recursive-include tests *.py +recursive-include samples *.py *.md +recursive-include doc *.rst *.md +include azure/__init__.py +include azure/ai/__init__.py +include azure/ai/agentserver/__init__.py +include azure/ai/agentserver/responses/py.typed \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/Makefile b/sdk/agentserver/azure-ai-agentserver-responses/Makefile new file mode 100644 index 000000000000..041766cc0e9a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/Makefile @@ -0,0 +1,154 @@ +# Python TypeSpec Code Generation Tooling +# Targets: generate-models, clean, install-typespec-deps + +OUTPUT_DIR ?= azure/ai/agentserver/responses/models/_generated +TYPESPEC_DIR ?= type_spec +OPENAPI_SPEC ?= type_spec/TempTypeSpecFiles/Foundry/openapi3/v1/microsoft-foundry-openapi3.yaml +VALIDATORS_OUTPUT ?= $(OUTPUT_DIR)/_validators.py +ROOT_SCHEMAS ?= CreateResponse +LOCAL_TYPESPEC_PACKAGES := @typespec/compiler @typespec/http @typespec/openapi @typespec/openapi3 @typespec/versioning @typespec/events @typespec/sse @azure-tools/typespec-python @azure-tools/typespec-azure-core @azure-tools/typespec-client-generator-core @azure-tools/openai-typespec +TEMP_OUTPUT_DIR := $(OUTPUT_DIR)/.tmp_codegen +MODEL_PACKAGE_DIR := $(TEMP_OUTPUT_DIR)/azure/ai/agentserver/responses/sdk/models +MODEL_SHIMS_DIR := scripts/generated_shims + +.PHONY: generate-models generate-validators generate-contracts clean install-typespec-deps + +ifeq ($(OS),Windows_NT) +SHELL := cmd +.SHELLFLAGS := /c +endif + +# -------------------------------------------------------------------------- +# generate-validators: 
Generate JSON payload validators from OpenAPI +# -------------------------------------------------------------------------- +ifeq ($(OS),Windows_NT) +generate-validators: + @where python >NUL 2>NUL || (echo Error: python is required and was not found on PATH. 1>&2 && exit /b 1) + @if not exist "$(OPENAPI_SPEC)" (echo Error: OpenAPI spec not found at $(OPENAPI_SPEC). 1>&2 && exit /b 1) + @echo Generating payload validators from $(OPENAPI_SPEC)... + python scripts/generate_validators.py --input "$(OPENAPI_SPEC)" --output "$(VALIDATORS_OUTPUT)" --root-schemas "$(ROOT_SCHEMAS)" + @echo Generated validators at $(VALIDATORS_OUTPUT) +else +generate-validators: + @command -v python >/dev/null 2>&1 || { \ + echo "Error: python is required and was not found on PATH." >&2; \ + exit 1; \ + } + @test -f "$(OPENAPI_SPEC)" || { \ + echo "Error: OpenAPI spec not found at $(OPENAPI_SPEC)." >&2; \ + exit 1; \ + } + @echo "Generating payload validators from $(OPENAPI_SPEC)..." + python scripts/generate_validators.py --input "$(OPENAPI_SPEC)" --output "$(VALIDATORS_OUTPUT)" --root-schemas "$(ROOT_SCHEMAS)" + @echo "Generated validators at $(VALIDATORS_OUTPUT)" +endif + +# -------------------------------------------------------------------------- +# generate-contracts: Generate models + validators artifacts +# -------------------------------------------------------------------------- +generate-contracts: generate-models generate-validators + +TYPESPEC_OUTPUT_DIR := {cwd}/../$(TEMP_OUTPUT_DIR) + +# -------------------------------------------------------------------------- +# generate-models: Compile TypeSpec definitions into Python model classes +# -------------------------------------------------------------------------- +ifeq ($(OS),Windows_NT) +generate-models: + @where tsp-client >NUL 2>NUL || (echo Error: tsp-client is not installed. 1>&2 && echo Run 'make install-typespec-deps' to install it. 1>&2 && exit /b 1) + @where npm >NUL 2>NUL || (echo Error: npm is required. 
Install Node.js ^(v18+^) from https://nodejs.org/ 1>&2 && exit /b 1) + @echo Syncing upstream TypeSpec sources... + cd /d $(TYPESPEC_DIR) && tsp-client sync + @echo Installing local TypeSpec compiler dependencies... + npm install --prefix $(TYPESPEC_DIR) --no-save $(LOCAL_TYPESPEC_PACKAGES) + @echo Generating Python models... + @if exist "$(OUTPUT_DIR)" rmdir /s /q "$(OUTPUT_DIR)" + cd /d $(TYPESPEC_DIR) && npx tsp compile . --emit @azure-tools/typespec-python --option "@azure-tools/typespec-python.emitter-output-dir=$(TYPESPEC_OUTPUT_DIR)" + @if not exist "$(MODEL_PACKAGE_DIR)" (echo Error: generated model package was not found. 1>&2 && exit /b 1) + @if not exist "$(OUTPUT_DIR)\sdk" mkdir "$(OUTPUT_DIR)\sdk" + @xcopy /E /I /Y "$(MODEL_PACKAGE_DIR)" "$(OUTPUT_DIR)\sdk\models" >NUL + @if exist "$(OUTPUT_DIR)\sdk\models\aio" rmdir /s /q "$(OUTPUT_DIR)\sdk\models\aio" + @if exist "$(OUTPUT_DIR)\sdk\models\operations" rmdir /s /q "$(OUTPUT_DIR)\sdk\models\operations" + @if exist "$(OUTPUT_DIR)\sdk\models\_client.py" del /q "$(OUTPUT_DIR)\sdk\models\_client.py" + @if exist "$(OUTPUT_DIR)\sdk\models\_configuration.py" del /q "$(OUTPUT_DIR)\sdk\models\_configuration.py" + @if exist "$(OUTPUT_DIR)\sdk\models\_version.py" del /q "$(OUTPUT_DIR)\sdk\models\_version.py" + @copy /Y "$(MODEL_SHIMS_DIR)\sdk_models__init__.py" "$(OUTPUT_DIR)\sdk\models\__init__.py" >NUL + @copy /Y "$(MODEL_SHIMS_DIR)\__init__.py" "$(OUTPUT_DIR)\__init__.py" >NUL + @copy /Y "$(MODEL_SHIMS_DIR)\_enums.py" "$(OUTPUT_DIR)\_enums.py" >NUL + @copy /Y "$(MODEL_SHIMS_DIR)\_models.py" "$(OUTPUT_DIR)\_models.py" >NUL + @copy /Y "$(MODEL_SHIMS_DIR)\_patch.py" "$(OUTPUT_DIR)\_patch.py" >NUL + @copy /Y "$(MODEL_SHIMS_DIR)\models_patch.py" "$(OUTPUT_DIR)\sdk\models\models\_patch.py" >NUL + @if exist "$(TEMP_OUTPUT_DIR)" rmdir /s /q "$(TEMP_OUTPUT_DIR)" +else +generate-models: + @command -v tsp-client >/dev/null 2>&1 || { \ + echo "Error: tsp-client is not installed." 
>&2; \ + echo "Run 'make install-typespec-deps' to install it." >&2; \ + exit 1; \ + } + @command -v npm >/dev/null 2>&1 || { \ + echo "Error: npm is required. Install Node.js (v18+) from https://nodejs.org/" >&2; \ + exit 1; \ + } + @echo "Syncing upstream TypeSpec sources..." + cd $(TYPESPEC_DIR) && tsp-client sync + @echo "Installing local TypeSpec compiler dependencies..." + npm install --prefix $(TYPESPEC_DIR) --no-save $(LOCAL_TYPESPEC_PACKAGES) + @echo "Generating Python models..." + rm -rf $(OUTPUT_DIR) + cd $(TYPESPEC_DIR) && npx tsp compile . --emit @azure-tools/typespec-python --option "@azure-tools/typespec-python.emitter-output-dir=$(TYPESPEC_OUTPUT_DIR)" + @test -d $(MODEL_PACKAGE_DIR) || { \ + echo "Error: generated model package was not found." >&2; \ + exit 1; \ + } + mkdir -p $(OUTPUT_DIR)/sdk + cp -R $(MODEL_PACKAGE_DIR) $(OUTPUT_DIR)/sdk/models + rm -rf $(OUTPUT_DIR)/sdk/models/aio + rm -rf $(OUTPUT_DIR)/sdk/models/operations + rm -f $(OUTPUT_DIR)/sdk/models/_client.py + rm -f $(OUTPUT_DIR)/sdk/models/_configuration.py + rm -f $(OUTPUT_DIR)/sdk/models/_version.py + cp $(MODEL_SHIMS_DIR)/sdk_models__init__.py $(OUTPUT_DIR)/sdk/models/__init__.py + cp $(MODEL_SHIMS_DIR)/__init__.py $(OUTPUT_DIR)/__init__.py + cp $(MODEL_SHIMS_DIR)/_enums.py $(OUTPUT_DIR)/_enums.py + cp $(MODEL_SHIMS_DIR)/_models.py $(OUTPUT_DIR)/_models.py + cp $(MODEL_SHIMS_DIR)/_patch.py $(OUTPUT_DIR)/_patch.py + cp $(MODEL_SHIMS_DIR)/models_patch.py $(OUTPUT_DIR)/sdk/models/models/_patch.py + rm -rf $(TEMP_OUTPUT_DIR) +endif + +# -------------------------------------------------------------------------- +# clean: Remove all previously generated Python model files +# -------------------------------------------------------------------------- +ifeq ($(OS),Windows_NT) +clean: + @if exist "$(OUTPUT_DIR)" rmdir /s /q "$(OUTPUT_DIR)" +else +clean: + rm -rf $(OUTPUT_DIR) +endif + +# -------------------------------------------------------------------------- +# install-typespec-deps: 
Install tsp-client CLI and sync TypeSpec sources +# -------------------------------------------------------------------------- +ifeq ($(OS),Windows_NT) +install-typespec-deps: + @where node >NUL 2>NUL || (echo Error: Node.js ^(v18+^) is required. Install from https://nodejs.org/ 1>&2 && exit /b 1) + @where npm >NUL 2>NUL || (echo Error: npm is required. Install Node.js ^(v18+^) from https://nodejs.org/ 1>&2 && exit /b 1) + npm install -g @azure-tools/typespec-client-generator-cli + npm install --prefix $(TYPESPEC_DIR) --no-save $(LOCAL_TYPESPEC_PACKAGES) + cd /d $(TYPESPEC_DIR) && tsp-client sync +else +install-typespec-deps: + @command -v node >/dev/null 2>&1 || { \ + echo "Error: Node.js (v18+) is required. Install from https://nodejs.org/" >&2; \ + exit 1; \ + } + @command -v npm >/dev/null 2>&1 || { \ + echo "Error: npm is required. Install Node.js (v18+) from https://nodejs.org/" >&2; \ + exit 1; \ + } + npm install -g @azure-tools/typespec-client-generator-cli + npm install --prefix $(TYPESPEC_DIR) --no-save $(LOCAL_TYPESPEC_PACKAGES) + cd $(TYPESPEC_DIR) && tsp-client sync +endif diff --git a/sdk/agentserver/azure-ai-agentserver-responses/README.md b/sdk/agentserver/azure-ai-agentserver-responses/README.md new file mode 100644 index 000000000000..722e56920b51 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/README.md @@ -0,0 +1,371 @@ +# Azure AI Agent Server Responses API for Python + +A Python SDK for building [Starlette](https://www.starlette.io/) servers that implement the [Azure AI Responses API](https://learn.microsoft.com/azure/ai-services). Install the package, implement one handler, and the SDK handles routing, streaming (SSE), background execution, cancellation, state management, and response lifecycle. 
+ +## Getting started + +### Prerequisites + +- Python 3.10+ + +### Install the package + +```bash +pip install azure-ai-agentserver-responses +``` + +### Implement a handler and register routes + +```python +from starlette.applications import Starlette +from starlette.responses import JSONResponse +from azure.ai.agentserver.responses import ResponseEventStream +from azure.ai.agentserver.responses.hosting import map_responses_server + +class EchoHandler: + """Simple handler yielding a deterministic response lifecycle.""" + + async def create_async(self, request, context, cancellation_signal): + stream = ResponseEventStream( + response_id=context.response_id, + model=getattr(request, "model", None), + ) + + yield stream.emit_created() + yield stream.emit_in_progress() + + message = stream.add_output_item_message() + yield message.emit_added() + + text = message.add_text_content() + yield text.emit_added() + + yield text.emit_delta("Hello, ") + yield text.emit_delta("world!") + + yield text.emit_done("Hello, world!") + yield message.emit_content_done(text) + yield message.emit_done() + + yield stream.emit_completed() + +app = Starlette() +app.add_route("/ready", lambda r: JSONResponse({"status": "ready"}), methods=["GET"]) +map_responses_server(app, EchoHandler()) +``` + +Run with [uvicorn](https://www.uvicorn.org/): + +```bash +uvicorn app:app --host 0.0.0.0 --port 5100 +``` + +This gives you five endpoints: + +| Method | Route | Description | +|--------|------------------------------------------|-----------------------------------------| +| POST | `/responses` | Create a new response | +| GET | `/responses/{response_id}` | Get response state (JSON or SSE replay) | +| POST | `/responses/{response_id}/cancel` | Cancel an in-flight response | +| DELETE | `/responses/{response_id}` | Delete a stored response | +| GET | `/responses/{response_id}/input_items` | List input items (paginated) | + +## Key concepts + +### ResponseEventStream + +`ResponseEventStream` manages 
`sequence_number`, `output_index`, `content_index`, `item_id`, and the full `Response` lifecycle automatically — each `yield` maps 1:1 to an SSE event with zero bookkeeping. The handler interacts only through `context.response_id` and the builder API. + +It provides a scoped, hierarchical builder that mirrors the SSE event nesting. Each scope manages its own bookkeeping — you never touch `sequence_number`, `output_index`, `content_index`, or `item_id`. + +``` +ResponseEventStream → response.created / in_progress / completed / failed / incomplete + ├─ OutputItemMessageBuilder → output_item.added / done + │ ├─ TextContentBuilder → content_part.added / text.delta / text.done / content_part.done + │ │ └─ emit_annotation_added → output_text.annotation.added + │ └─ RefusalContentBuilder → content_part.added / refusal.delta / refusal.done / content_part.done + ├─ OutputItemFunctionCallBuilder → output_item.added / function_call_arguments.delta / done / output_item.done + ├─ OutputItemReasoningItemBuilder → output_item.added / done + │ └─ ReasoningSummaryPartBuilder → summary_part.added / text.delta / text.done / summary_part.done + ├─ OutputItemFileSearchCallBuilder → output_item.added / in_progress / searching / completed / done + ├─ OutputItemWebSearchCallBuilder → output_item.added / in_progress / searching / completed / done + ├─ OutputItemCodeInterpreterCallBuilder → output_item.added / in_progress / code.delta / code.done / completed / done + ├─ OutputItemImageGenCallBuilder → output_item.added / in_progress / partial_image / completed / done + ├─ OutputItemMcpCallBuilder → output_item.added / in_progress / args.delta / args.done / completed|failed / done + ├─ OutputItemMcpListToolsBuilder → output_item.added / in_progress / completed|failed / done + └─ OutputItemCustomToolCallBuilder → output_item.added / input.delta / input.done / done +``` + +**Naming convention:** `add_output_item_*()` methods create child scopes (return builders). 
`emit_*()` methods produce SSE events (return event dicts).
+
+### Handler contract
+
+Your handler must implement `create_async` with this signature:
+
+```python
+from typing import AsyncIterable, Protocol
+import asyncio
+
+class ResponseHandler(Protocol):
+    async def create_async(
+        self,
+        request: CreateResponse,
+        context: ResponseContext,
+        cancellation_signal: asyncio.Event,
+    ) -> AsyncIterable[dict]: ...
+```
+
+The `ResponseContext` provides:
+
+| Property / Method | Description |
+|---|---|
+| `response_id` | Unique ID for this response |
+| `is_shutdown_requested` | Whether the server is draining |
+| `raw_body` | Raw request body (if needed) |
+| `get_input_items_async()` | Load input items for this request |
+| `get_history_async()` | Load conversation history items |
+
+### Execution modes
+
+The SDK automatically handles all combinations of `stream` and `background` flags:
+
+- **Default** — Run to completion, return final JSON response
+- **Streaming** — Pipe events as SSE in real-time, cancel on client disconnect
+- **Background** — Return immediately, handler runs in the background
+- **Streaming + Background** — SSE while connected, handler continues after disconnect
+
+### Features
+
+- **SSE keep-alive** — Automatic keep-alive comments to prevent proxy/load-balancer timeouts
+- **Event stream replay** — SSE replay for previously streamed responses via `?stream=true`
+- **Pluggable state provider** — `ResponseProviderProtocol` abstracts state persistence; default `InMemoryResponseProvider` included, override for multi-instance deployments
+- **Cancellation** — Cancel endpoint triggers cooperative cancellation via `asyncio.Event`
+- **Graceful shutdown** — Handlers distinguish shutdown from cancel via `context.is_shutdown_requested`. 
Shutdown-terminated responses are marked `failed` for client retry +- **Content negotiation** — GET endpoint returns JSON snapshot by default, or SSE replay when `?stream=true` query parameter is specified +- **Distributed tracing** — Built-in observability hooks for OpenTelemetry integration +- **Error handling** — Global exception handling maps errors to appropriate HTTP responses + +### Configuration + +```python +from azure.ai.agentserver.responses import ResponsesServerOptions + +options = ResponsesServerOptions( + default_model="gpt-4o", + sse_keep_alive_interval_seconds=15, + shutdown_grace_period_seconds=10, + additional_server_identity="my-server/1.0", +) + +map_responses_server(app, handler, options=options) +``` + +Options can also be loaded from environment variables: + +```python +options = ResponsesServerOptions.from_env() +``` + +#### Route prefix + +```python +# Mount at a custom prefix +map_responses_server(app, handler, prefix="/openai/v1") +# Routes become: /openai/v1/responses, /openai/v1/responses/{response_id}, etc. +``` + +#### Custom Response Provider + +For multi-instance deployments, implement `ResponseProviderProtocol`: + +```python +from azure.ai.agentserver.responses import ResponseProviderProtocol + +class MyDurableProvider: + """Implements ResponseProviderProtocol with database-backed storage.""" + + async def create_response_async(self, response, input_items, history_item_ids): + ... + + async def get_response_async(self, response_id): + ... + + async def update_response_async(self, response): + ... + + async def delete_response_async(self, response_id): + ... + + async def get_input_items_async(self, response_id, limit=20, ascending=False, after=None, before=None): + ... + + async def get_items_async(self, item_ids): + ... + + async def get_history_item_ids_async(self, previous_response_id, conversation_id, limit): + ... 
+``` + +## Examples + +### Function call response + +```python +stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) +yield stream.emit_created() +yield stream.emit_in_progress() + +fn_call = stream.add_output_item_function_call("get_weather", "call_abc123") +yield fn_call.emit_added() +yield fn_call.emit_arguments_delta('{"location":') +yield fn_call.emit_arguments_delta('"San Francisco"}') +yield fn_call.emit_arguments_done('{"location":"San Francisco"}') +yield fn_call.emit_done() + +yield stream.emit_completed() +``` + +### Reasoning + text message (multiple output items) + +Output indices auto-increment across `add_output_item_*()` calls: + +```python +stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) +yield stream.emit_created() +yield stream.emit_in_progress() + +# output_index=0: reasoning item +reasoning = stream.add_output_item_reasoning_item() +yield reasoning.emit_added() +summary = reasoning.add_summary_part() +yield summary.emit_added() +yield summary.emit_text_delta("Let me think about this...") +yield summary.emit_text_done("Let me think about this...") +yield summary.emit_done() +yield reasoning.emit_summary_part_done(summary) +yield reasoning.emit_done() + +# output_index=1: message item (auto-incremented) +message = stream.add_output_item_message() +yield message.emit_added() +text = message.add_text_content() +yield text.emit_added() +yield text.emit_delta("Here is my answer.") +yield text.emit_done("Here is my answer.") +yield message.emit_content_done(text) +yield message.emit_done() + +yield stream.emit_completed() +``` + +### Conversation history (multi-turn) + +```python +class ConversationHandler: + async def create_async(self, request, context, cancellation_signal): + stream = ResponseEventStream( + response_id=context.response_id, + model=getattr(request, "model", None), + ) + yield stream.emit_created() + yield stream.emit_in_progress() + 
+ # Retrieve history and input from context + history = await context.get_history_async() + input_items = await context.get_input_items_async() + current_input = extract_text(request) + reply = build_reply(current_input, history, input_items) + + message = stream.add_output_item_message() + yield message.emit_added() + text = message.add_text_content() + yield text.emit_added() + yield text.emit_delta(reply) + yield text.emit_done(reply) + yield message.emit_content_done(text) + yield message.emit_done() + + yield stream.emit_completed() +``` + +### More samples + +The `samples/` directory contains runnable Starlette servers demonstrating the SDK: + +| Sample | Description | +|--------|-------------| +| GettingStarted | Minimal echo handler — text message in default, streaming, and background modes | +| FunctionCalling | Two-turn conversation — server emits a function call, client submits output, server returns result | +| MultiOutput | Multiple output items — reasoning followed by a text message | +| ConversationHistory | Multi-turn with `previous_response_id` — demonstrates `get_history_async()` and conversation chaining | + +Each sample includes: +- `app.py` — the sample Starlette server +- `test.py` — a `requests`-based client that exercises the scenario + +## Troubleshooting + +### General + +Run your server locally first to verify handler behaviour before deploying. + +If the server works locally but fails in the cloud, check your logs in the Application Insights instance connected to your Azure AI Foundry Project. + +### Logging + +Enable SDK-level logging by configuring Python's `logging` module: + +```python +import logging + +logging.basicConfig(level=logging.DEBUG) +``` + +### Reporting issues + +To report an issue with the client library, or request additional features, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-agentserver-responses" in the title or content. 
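When verifying handler behaviour locally, it can help to decode the raw SSE stream back into the event dicts the builders emitted. The helper below is a small sketch, not part of the SDK: it assumes each event arrives as a single-line `data:` JSON payload and that keep-alive comments use the standard SSE `:` comment prefix.

```python
import json

def parse_sse_events(raw: str) -> list[dict]:
    """Decode the ``data:`` payloads of an SSE stream into event dicts.

    Comment lines (starting with ``:``, e.g. keep-alives) and blank
    separator lines are skipped.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith(":"):
            continue
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

# Example stream with a keep-alive comment and two lifecycle events.
sample = (
    ": keep-alive\n\n"
    'data: {"type": "response.created", "sequence_number": 0}\n\n'
    'data: {"type": "response.completed", "sequence_number": 1}\n\n'
)
print([event["type"] for event in parse_sse_events(sample)])
# → ['response.created', 'response.completed']
```

Comparing the decoded `type` sequence against the lifecycle diagram under Key concepts is a quick way to spot a handler that never reaches `response.completed`.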
+ +## Next steps + +Please visit the [Samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/agentserver/azure-ai-agentserver-responses/samples) folder for runnable examples covering common scenarios. + +### Project structure + +``` +azure/ai/agentserver/responses/ + ├─ hosting/ Starlette routing, background execution, validation, observability + ├─ models/ Domain models (runtime state, errors, generated contracts) + │ └─ _generated/ TypeSpec-generated model classes + ├─ store/ Persistence abstraction and in-memory provider + ├─ streaming/ Event stream builders, SSE encoding, lifecycle state machine + │ └─ _builders/ Scoped builder classes (message, function call, reasoning, etc.) + ├─ _handlers.py Handler protocol and runtime context + ├─ _options.py Server configuration (ResponsesServerOptions) + └─ _id_generator.py Deterministic ID generation +samples/ Runnable Starlette sample servers +tests/ Test suite (contract, unit, integration) +type_spec/ TypeSpec definitions and pipeline +``` + +### Development + +```bash +make install # pip install -e .[dev] +make test # pytest +make lint # ruff check + mypy +make format # ruff format +make all # install → test → lint +``` + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [https://cla.microsoft.com](https://cla.microsoft.com/). + +When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct][code_of_conduct]. 
For more information, see the [Code of Conduct FAQ][code_of_conduct_faq] or contact [opencode@microsoft.com][email_opencode] with any additional questions or comments. + + +[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/ +[code_of_conduct_faq]: https://opensource.microsoft.com/codeofconduct/faq/ +[email_opencode]: mailto:opencode@microsoft.com diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/__init__.py new file mode 100644 index 000000000000..d55ccad1f573 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/__init__.py @@ -0,0 +1 @@ +__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/__init__.py new file mode 100644 index 000000000000..d55ccad1f573 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/__init__.py @@ -0,0 +1 @@ +__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/__init__.py new file mode 100644 index 000000000000..d55ccad1f573 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/__init__.py @@ -0,0 +1 @@ +__path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/__init__.py new file mode 100644 index 000000000000..55ccf190898a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/__init__.py @@ -0,0 +1,59 @@ +# Copyright (c) Microsoft Corporation. 
+# Licensed under the MIT license. +"""Public API surface for the Azure AI Agent Server Responses package.""" + +from .streaming._builders import ( + OutputItemCodeInterpreterCallBuilder, + OutputItemBuilder, + OutputItemCustomToolCallBuilder, + OutputItemFileSearchCallBuilder, + OutputItemFunctionCallBuilder, + OutputItemFunctionCallOutputBuilder, + OutputItemImageGenCallBuilder, + OutputItemMcpCallBuilder, + OutputItemMcpListToolsBuilder, + OutputItemMessageBuilder, + OutputItemReasoningItemBuilder, + OutputItemWebSearchCallBuilder, + ReasoningSummaryPartBuilder, + RefusalContentBuilder, + TextContentBuilder, +) +from .streaming._event_stream import ResponseEventStream +from ._response_context import ResponseContext +from ._options import ResponsesServerOptions +from .store._base import ResponseProviderProtocol, ResponseStreamProviderProtocol +from .store._foundry_errors import FoundryApiError, FoundryBadRequestError, FoundryResourceNotFoundError, FoundryStorageError +from .store._foundry_provider import FoundryStorageProvider +from .store._foundry_settings import FoundryStorageSettings +from .store._memory import InMemoryResponseProvider + +__all__ = [ + "ResponseContext", + "ResponsesServerOptions", + "ResponseProviderProtocol", + "ResponseStreamProviderProtocol", + "InMemoryResponseProvider", + "FoundryStorageProvider", + "FoundryStorageSettings", + "FoundryStorageError", + "FoundryResourceNotFoundError", + "FoundryBadRequestError", + "FoundryApiError", + "TextContentBuilder", + "OutputItemMessageBuilder", + "OutputItemBuilder", + "OutputItemFunctionCallBuilder", + "OutputItemFunctionCallOutputBuilder", + "RefusalContentBuilder", + "OutputItemReasoningItemBuilder", + "ReasoningSummaryPartBuilder", + "OutputItemFileSearchCallBuilder", + "OutputItemWebSearchCallBuilder", + "OutputItemCodeInterpreterCallBuilder", + "OutputItemImageGenCallBuilder", + "OutputItemMcpCallBuilder", + "OutputItemMcpListToolsBuilder", + "OutputItemCustomToolCallBuilder", + 
"ResponseEventStream", +] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_id_generator.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_id_generator.py new file mode 100644 index 000000000000..e37c815f7ed6 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_id_generator.py @@ -0,0 +1,501 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""ID generation utilities aligned with the .NET IdGenerator implementation.""" + +from __future__ import annotations + +import base64 +import secrets +from typing import Callable, Sequence + +from .models import _generated as generated_models + + +class IdGenerator: # pylint: disable=too-many-public-methods + """Generates IDs with embedded partition keys matching the .NET format.""" + + _PARTITION_KEY_HEX_LENGTH = 16 + _PARTITION_KEY_SUFFIX = "00" + _PARTITION_KEY_TOTAL_LENGTH = _PARTITION_KEY_HEX_LENGTH + 2 + _ENTROPY_LENGTH = 32 + _NEW_FORMAT_BODY_LENGTH = _PARTITION_KEY_TOTAL_LENGTH + _ENTROPY_LENGTH + _LEGACY_BODY_LENGTH = 48 + _LEGACY_PARTITION_KEY_LENGTH = 16 + + @staticmethod + def new_id(prefix: str, partition_key_hint: str | None = "") -> str: + """Generate a new ID in the format ``{prefix}_{partitionKey}{entropy}``. + + :param prefix: The prefix segment for the ID (e.g. ``"caresp"``, ``"msg"``). + :type prefix: str + :param partition_key_hint: An existing ID from which to extract a partition key + for co-location. Defaults to an empty string (generates a new partition key). + :type partition_key_hint: str | None + :returns: A new unique ID string. + :rtype: str + :raises TypeError: If *prefix* is None. + :raises ValueError: If *prefix* is empty. 
+ """ + if prefix is None: + raise TypeError("prefix must not be None") + if len(prefix) == 0: + raise ValueError("Prefix must not be empty.") + + extracted, partition_key = IdGenerator._try_extract_partition_key_raw(partition_key_hint) + if extracted: + if len(partition_key) == IdGenerator._LEGACY_PARTITION_KEY_LENGTH: + partition_key = partition_key + IdGenerator._PARTITION_KEY_SUFFIX + else: + partition_key = IdGenerator._generate_partition_key() + + entropy = IdGenerator._generate_entropy() + return f"{prefix}_{partition_key}{entropy}" + + @staticmethod + def new_response_id(partition_key_hint: str | None = "") -> str: + """Generate a new response ID with the ``caresp`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique response ID string. + :rtype: str + """ + return IdGenerator.new_id("caresp", partition_key_hint) + + @staticmethod + def new_message_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new message item ID with the ``msg`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique message item ID string. + :rtype: str + """ + return IdGenerator.new_id("msg", partition_key_hint) + + @staticmethod + def new_function_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new function call item ID with the ``fc`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique function call item ID string. + :rtype: str + """ + return IdGenerator.new_id("fc", partition_key_hint) + + @staticmethod + def new_reasoning_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new reasoning item ID with the ``rs`` prefix. 
+ + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique reasoning item ID string. + :rtype: str + """ + return IdGenerator.new_id("rs", partition_key_hint) + + @staticmethod + def new_file_search_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new file search call item ID with the ``fs`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique file search call item ID string. + :rtype: str + """ + return IdGenerator.new_id("fs", partition_key_hint) + + @staticmethod + def new_web_search_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new web search call item ID with the ``ws`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique web search call item ID string. + :rtype: str + """ + return IdGenerator.new_id("ws", partition_key_hint) + + @staticmethod + def new_code_interpreter_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new code interpreter call item ID with the ``ci`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique code interpreter call item ID string. + :rtype: str + """ + return IdGenerator.new_id("ci", partition_key_hint) + + @staticmethod + def new_image_gen_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new image generation call item ID with the ``ig`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique image generation call item ID string. 
+ :rtype: str + """ + return IdGenerator.new_id("ig", partition_key_hint) + + @staticmethod + def new_mcp_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new MCP call item ID with the ``mcp`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique MCP call item ID string. + :rtype: str + """ + return IdGenerator.new_id("mcp", partition_key_hint) + + @staticmethod + def new_mcp_list_tools_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new MCP list tools item ID with the ``mcpl`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique MCP list tools item ID string. + :rtype: str + """ + return IdGenerator.new_id("mcpl", partition_key_hint) + + @staticmethod + def new_custom_tool_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new custom tool call item ID with the ``ctc`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique custom tool call item ID string. + :rtype: str + """ + return IdGenerator.new_id("ctc", partition_key_hint) + + @staticmethod + def new_custom_tool_call_output_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new custom tool call output item ID with the ``ctco`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique custom tool call output item ID string. + :rtype: str + """ + return IdGenerator.new_id("ctco", partition_key_hint) + + @staticmethod + def new_function_call_output_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new function call output item ID with the ``fco`` prefix. 
+ + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique function call output item ID string. + :rtype: str + """ + return IdGenerator.new_id("fco", partition_key_hint) + + @staticmethod + def new_computer_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new computer call item ID with the ``cu`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique computer call item ID string. + :rtype: str + """ + return IdGenerator.new_id("cu", partition_key_hint) + + @staticmethod + def new_computer_call_output_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new computer call output item ID with the ``cuo`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique computer call output item ID string. + :rtype: str + """ + return IdGenerator.new_id("cuo", partition_key_hint) + + @staticmethod + def new_local_shell_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new local shell call item ID with the ``lsh`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique local shell call item ID string. + :rtype: str + """ + return IdGenerator.new_id("lsh", partition_key_hint) + + @staticmethod + def new_local_shell_call_output_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new local shell call output item ID with the ``lsho`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique local shell call output item ID string. 
+ :rtype: str + """ + return IdGenerator.new_id("lsho", partition_key_hint) + + @staticmethod + def new_function_shell_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new function shell call item ID with the ``lsh`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique function shell call item ID string. + :rtype: str + """ + return IdGenerator.new_id("lsh", partition_key_hint) + + @staticmethod + def new_function_shell_call_output_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new function shell call output item ID with the ``lsho`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique function shell call output item ID string. + :rtype: str + """ + return IdGenerator.new_id("lsho", partition_key_hint) + + @staticmethod + def new_apply_patch_call_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new apply patch call item ID with the ``ap`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique apply patch call item ID string. + :rtype: str + """ + return IdGenerator.new_id("ap", partition_key_hint) + + @staticmethod + def new_apply_patch_call_output_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new apply patch call output item ID with the ``apo`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique apply patch call output item ID string. 
+ :rtype: str + """ + return IdGenerator.new_id("apo", partition_key_hint) + + @staticmethod + def new_mcp_approval_request_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new MCP approval request item ID with the ``mcpr`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique MCP approval request item ID string. + :rtype: str + """ + return IdGenerator.new_id("mcpr", partition_key_hint) + + @staticmethod + def new_mcp_approval_response_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new MCP approval response item ID with the ``mcpa`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique MCP approval response item ID string. + :rtype: str + """ + return IdGenerator.new_id("mcpa", partition_key_hint) + + @staticmethod + def new_compaction_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new compaction item ID with the ``cmp`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique compaction item ID string. + :rtype: str + """ + return IdGenerator.new_id("cmp", partition_key_hint) + + @staticmethod + def new_workflow_action_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new workflow action item ID with the ``wfa`` prefix. + + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique workflow action item ID string. + :rtype: str + """ + return IdGenerator.new_id("wfa", partition_key_hint) + + @staticmethod + def new_output_message_item_id(partition_key_hint: str | None = "") -> str: + """Generate a new output message item ID with the ``om`` prefix. 
+ + :param partition_key_hint: An existing ID to extract the partition key from for co-location. + :type partition_key_hint: str | None + :returns: A new unique output message item ID string. + :rtype: str + """ + return IdGenerator.new_id("om", partition_key_hint) + + @staticmethod + def new_item_id(item: generated_models.Item, partition_key_hint: str | None = "") -> str | None: + """Generate a type-specific ID for a generated Item subtype. + + Dispatches to the appropriate ``new_*_item_id`` factory method based on the + runtime type of *item*. Returns None for ``ItemReferenceParam`` or unrecognized types. + + :param item: The generated Item instance to create an ID for. + :type item: generated_models.Item + :param partition_key_hint: An existing ID from which to extract the partition key + for co-location. Defaults to an empty string. + :type partition_key_hint: str | None + :returns: A new unique ID string, or None if the item type is a reference or unrecognized. + :rtype: str | None + """ + dispatch_map: tuple[tuple[type[object], Callable[..., str]], ...] 
= ( + (generated_models.ItemMessage, IdGenerator.new_message_item_id), + (generated_models.ItemOutputMessage, IdGenerator.new_output_message_item_id), + (generated_models.ItemFunctionToolCall, IdGenerator.new_function_call_item_id), + (generated_models.FunctionCallOutputItemParam, IdGenerator.new_function_call_output_item_id), + (generated_models.ItemCustomToolCall, IdGenerator.new_custom_tool_call_item_id), + (generated_models.ItemCustomToolCallOutput, IdGenerator.new_custom_tool_call_output_item_id), + (generated_models.ItemComputerToolCall, IdGenerator.new_computer_call_item_id), + (generated_models.ComputerCallOutputItemParam, IdGenerator.new_computer_call_output_item_id), + (generated_models.ItemFileSearchToolCall, IdGenerator.new_file_search_call_item_id), + (generated_models.ItemWebSearchToolCall, IdGenerator.new_web_search_call_item_id), + (generated_models.ItemImageGenToolCall, IdGenerator.new_image_gen_call_item_id), + (generated_models.ItemCodeInterpreterToolCall, IdGenerator.new_code_interpreter_call_item_id), + (generated_models.ItemLocalShellToolCall, IdGenerator.new_local_shell_call_item_id), + (generated_models.ItemLocalShellToolCallOutput, IdGenerator.new_local_shell_call_output_item_id), + (generated_models.FunctionShellCallItemParam, IdGenerator.new_function_shell_call_item_id), + (generated_models.FunctionShellCallOutputItemParam, IdGenerator.new_function_shell_call_output_item_id), + (generated_models.ApplyPatchToolCallItemParam, IdGenerator.new_apply_patch_call_item_id), + (generated_models.ApplyPatchToolCallOutputItemParam, IdGenerator.new_apply_patch_call_output_item_id), + (generated_models.ItemMcpListTools, IdGenerator.new_mcp_list_tools_item_id), + (generated_models.ItemMcpToolCall, IdGenerator.new_mcp_call_item_id), + (generated_models.ItemMcpApprovalRequest, IdGenerator.new_mcp_approval_request_item_id), + (generated_models.MCPApprovalResponse, IdGenerator.new_mcp_approval_response_item_id), + (generated_models.ItemReasoningItem, 
IdGenerator.new_reasoning_item_id), + (generated_models.CompactionSummaryItemParam, IdGenerator.new_compaction_item_id), + ) + + for model_type, generator in dispatch_map: + if isinstance(item, model_type): + return generator(partition_key_hint) + + if isinstance(item, generated_models.ItemReferenceParam): + return None + return None + + @staticmethod + def extract_partition_key(id_value: str) -> str: + """Extract the partition key segment from an existing ID. + + :param id_value: The full ID string to extract the partition key from. + :type id_value: str + :returns: The partition key hex string. + :rtype: str + :raises ValueError: If the ID is null, empty, missing a delimiter, or has + an unexpected body length. + """ + extracted, partition_key = IdGenerator._try_extract_partition_key_raw(id_value) + if extracted: + return partition_key + + if id_value is None or id_value == "": + raise ValueError("ID must not be null or empty.") + if "_" not in id_value: + raise ValueError(f"ID '{id_value}' has no '_' delimiter.") + raise ValueError(f"ID '{id_value}' has unexpected body length.") + + @staticmethod + def is_valid(id_value: str | None, allowed_prefixes: Sequence[str] | None = None) -> tuple[bool, str | None]: + """Validate whether an ID string conforms to the expected format. + + :param id_value: The ID string to validate. + :type id_value: str | None + :param allowed_prefixes: An optional sequence of allowed prefix strings. + When provided, the ID's prefix must be in this set. + :type allowed_prefixes: Sequence[str] | None + :returns: A tuple of (is_valid, error_message). When valid, error_message is None. + :rtype: tuple[bool, str | None] + """ + if id_value is None or id_value == "": + return False, "ID must not be null or empty." + + delimiter_index = id_value.find("_") + if delimiter_index < 0: + return False, f"ID '{id_value}' has no '_' delimiter." + + prefix = id_value[:delimiter_index] + if len(prefix) == 0: + return False, "ID has an empty prefix." 
+
+        body = id_value[delimiter_index + 1 :]
+        if len(body) != IdGenerator._NEW_FORMAT_BODY_LENGTH and len(body) != IdGenerator._LEGACY_BODY_LENGTH:
+            return (
+                False,
+                f"ID '{id_value}' has unexpected body length {len(body)}"
+                f" (expected {IdGenerator._NEW_FORMAT_BODY_LENGTH} or"
+                f" {IdGenerator._LEGACY_BODY_LENGTH}).",
+            )
+
+        if allowed_prefixes is not None and prefix not in allowed_prefixes:
+            return False, f"ID prefix '{prefix}' is not in the allowed set [{', '.join(allowed_prefixes)}]."
+
+        return True, None
+
+    @staticmethod
+    def _generate_partition_key() -> str:
+        """Generate a random partition key hex string with the standard suffix.
+
+        :returns: An 18-character hex partition key string.
+        :rtype: str
+        """
+        return f"{secrets.token_bytes(8).hex()}{IdGenerator._PARTITION_KEY_SUFFIX}"
+
+    @staticmethod
+    def _generate_entropy() -> str:
+        """Generate a random alphanumeric entropy string.
+
+        :returns: A 32-character alphanumeric entropy string.
+        :rtype: str
+        """
+        chars: list[str] = []
+        while len(chars) < IdGenerator._ENTROPY_LENGTH:
+            base64_text = base64.b64encode(secrets.token_bytes(48)).decode("ascii")
+            for char in base64_text:
+                if char.isalnum():
+                    chars.append(char)
+                    if len(chars) >= IdGenerator._ENTROPY_LENGTH:
+                        break
+        return "".join(chars)
+
+    @staticmethod
+    def _try_extract_partition_key_raw(id_value: str | None) -> tuple[bool, str]:
+        """Attempt to extract the raw partition key from an ID string.
+
+        Supports both the new format (18-char partition key at the start of the body)
+        and the legacy format (16-char partition key at the end of the body).
+
+        :param id_value: The full ID string to parse.
+        :type id_value: str | None
+        :returns: A tuple of (success, partition_key). On failure, partition_key is
+            an empty string.
+ :rtype: tuple[bool, str] + """ + if id_value is None or id_value == "": + return False, "" + + delimiter_index = id_value.find("_") + if delimiter_index < 0: + return False, "" + + body = id_value[delimiter_index + 1 :] + if len(body) == IdGenerator._NEW_FORMAT_BODY_LENGTH: + return True, body[: IdGenerator._PARTITION_KEY_TOTAL_LENGTH] + + if len(body) == IdGenerator._LEGACY_BODY_LENGTH: + return True, body[-IdGenerator._LEGACY_PARTITION_KEY_LENGTH :] + + return False, "" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_options.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_options.py new file mode 100644 index 000000000000..d05e7cd0ee6a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_options.py @@ -0,0 +1,126 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Typed options for configuring the Responses server runtime.""" + +from __future__ import annotations + +import os +from typing import Any, Mapping + + +class ResponsesServerOptions: + """Configuration values for hosting and runtime behavior. + + This shape mirrors the .NET `ResponsesServerOptions` surface: + - SSE keep-alive is disabled by default. + - `default_model` is optional. + - `default_fetch_history_count` defaults to 100. + - `additional_server_identity` is optional. 
+ """ + + def __init__( + self, + *, + additional_server_identity: str | None = None, + default_model: str | None = None, + default_fetch_history_count: int = 100, + sse_keep_alive_interval_seconds: int | None = None, + shutdown_grace_period_seconds: int = 10, + create_span_hook: Any | None = None, + ) -> None: + self.additional_server_identity = additional_server_identity + self.default_model = default_model + self.default_fetch_history_count = default_fetch_history_count + self.sse_keep_alive_interval_seconds = sse_keep_alive_interval_seconds + self.shutdown_grace_period_seconds = shutdown_grace_period_seconds + self.create_span_hook = create_span_hook + self._validate() + + @classmethod + def from_env(cls, environ: Mapping[str, str] | None = None) -> "ResponsesServerOptions": + """Create options from environment variables. + + Mirrors .NET environment-based configuration: + - ``AZURE_AI_RESPONSES_SERVER_SSE_KEEPALIVE_INTERVAL`` (seconds) + - ``AZURE_AI_RESPONSES_SERVER_DEFAULT_FETCH_HISTORY_ITEM_COUNT`` (integer) + + :param environ: Optional mapping to read environment variables from. + Defaults to ``os.environ`` when None. + :type environ: Mapping[str, str] | None + :returns: A new options instance populated from the environment. + :rtype: ResponsesServerOptions + :raises ValueError: If an environment variable value is not a valid positive integer. 
+ """ + + source: Mapping[str, str] = os.environ if environ is None else environ + + def _first_non_empty(*keys: str) -> str | None: + for key in keys: + raw = source.get(key) + if raw is None: + continue + normalized = raw.strip() + if normalized: + return normalized + return None + + def _parse_positive_int(*keys: str) -> int | None: + raw = _first_non_empty(*keys) + if raw is None: + return None + try: + value = int(raw) + except ValueError as exc: + raise ValueError(f"{keys[0]} must be a positive integer") from exc + if value <= 0: + raise ValueError(f"{keys[0]} must be > 0") + return value + + default_fetch_history_count = _parse_positive_int( + "AZURE_AI_RESPONSES_SERVER_DEFAULT_FETCH_HISTORY_ITEM_COUNT", + ) + sse_keep_alive_interval_seconds = _parse_positive_int( + "AZURE_AI_RESPONSES_SERVER_SSE_KEEPALIVE_INTERVAL", + ) + + kwargs: dict[str, Any] = {} + if default_fetch_history_count is not None: + kwargs["default_fetch_history_count"] = default_fetch_history_count + if sse_keep_alive_interval_seconds is not None: + kwargs["sse_keep_alive_interval_seconds"] = sse_keep_alive_interval_seconds + + return cls(**kwargs) + + def _validate(self) -> None: + """Validate and normalize option values. + + Strips whitespace from string fields and enforces positive-integer + constraints on numeric fields. + + :raises ValueError: If any numeric option is not positive. 
+ """ + if self.additional_server_identity is not None: + normalized = self.additional_server_identity.strip() + self.additional_server_identity = normalized or None + + if self.default_model is not None: + normalized_model = self.default_model.strip() + self.default_model = normalized_model or None + + if self.sse_keep_alive_interval_seconds is not None and self.sse_keep_alive_interval_seconds <= 0: + raise ValueError("sse_keep_alive_interval_seconds must be > 0 when set") + + if self.default_fetch_history_count <= 0: + raise ValueError("default_fetch_history_count must be > 0") + + if self.shutdown_grace_period_seconds <= 0: + raise ValueError("shutdown_grace_period_seconds must be > 0") + + @property + def sse_keep_alive_enabled(self) -> bool: + """Return whether periodic SSE keep-alive comments are enabled. + + :returns: True if ``sse_keep_alive_interval_seconds`` is set, False otherwise. + :rtype: bool + """ + return self.sse_keep_alive_interval_seconds is not None diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_response_context.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_response_context.py new file mode 100644 index 000000000000..772e826b672a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_response_context.py @@ -0,0 +1,93 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""ResponseContext for user-defined response execution."""
+
+from __future__ import annotations
+
+from datetime import datetime, timezone
+from typing import TYPE_CHECKING, Any, Sequence
+
+from azure.ai.agentserver.responses.models._generated.sdk.models._types import InputParam
+
+from .models._generated import CreateResponse, OutputItem
+from .models import ResponseModeFlags
+
+if TYPE_CHECKING:
+    from .store._base import ResponseProviderProtocol
+
+
+class ResponseContext:
+    """Runtime context exposed to response handlers and used by hosting orchestration.
+
+    Provides:
+
+    - the response identifier
+    - a shutdown signal flag
+    - raw body access
+    - async input/history resolution
+    """
+
+    def __init__(
+        self,
+        *,
+        response_id: str,
+        mode_flags: ResponseModeFlags,
+        raw_body: Any | None = None,
+        request: CreateResponse | None = None,
+        created_at: datetime | None = None,
+        provider: "ResponseProviderProtocol | None" = None,
+        input_items: list[InputParam] | None = None,
+        previous_response_id: str | None = None,
+        conversation_id: str | None = None,
+        history_limit: int = 100,
+        client_headers: dict[str, str] | None = None,
+        query_parameters: dict[str, str] | None = None,
+    ) -> None:
+        self.response_id = response_id
+        self.mode_flags = mode_flags
+        self.raw_body = raw_body
+        self.request = request
+        self.created_at = created_at if created_at is not None else datetime.now(timezone.utc)
+        self.is_shutdown_requested: bool = False
+        self.client_headers: dict[str, str] = client_headers or {}
+        self.query_parameters: dict[str, str] = query_parameters or {}
+        self._provider: "ResponseProviderProtocol | None" = provider
+        self._input_items: list[InputParam] = list(input_items) if input_items is not None else []
+        self._previous_response_id: str | None = previous_response_id
+        self.conversation_id: str | None = conversation_id
+        self._history_limit: int = history_limit
+        self._input_items_cache: Sequence[InputParam] | None = None
+        self._history_cache: Sequence[OutputItem] | None
= None + + async def get_input_items_async(self) -> Sequence[InputParam]: + """Return and cache request input items. + + :returns: A tuple of input items from the request. + :rtype: Sequence[InputParam] + """ + if self._input_items_cache is not None: + return self._input_items_cache + self._input_items_cache = tuple(self._input_items) + return self._input_items_cache + + async def get_history_async(self) -> Sequence[OutputItem]: + """Resolve and cache conversation history items via the provider. + + :returns: A tuple of conversation history items. + :rtype: Sequence[OutputItem] + """ + if self._history_cache is not None: + return self._history_cache + + if self._provider is None: + self._history_cache = () + return self._history_cache + + item_ids = await self._provider.get_history_item_ids_async( + self._previous_response_id, self.conversation_id, self._history_limit + ) + if not item_ids: + self._history_cache = () + return self._history_cache + + items = await self._provider.get_items_async(item_ids) + self._history_cache = tuple(item for item in items if item is not None) + return self._history_cache diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_version.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_version.py new file mode 100644 index 000000000000..cf584760eb91 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/_version.py @@ -0,0 +1,7 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# --------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# --------------------------------------------------------- + +VERSION = "1.0.0b1" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/__init__.py new file mode 100644 index 000000000000..f13e50e5fa64 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/__init__.py @@ -0,0 +1,41 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""HTTP hosting, routing, and request orchestration for the Responses server.""" + +from ._routing import ResponseHandler +from ._observability import ( + CreateSpan, + CreateSpanHook, + InMemoryCreateSpanHook, + RecordedSpan, + build_create_span_tags, + build_platform_server_header, + start_create_span, +) +from ._validation import ( + build_api_error_response, + build_invalid_mode_error_response, + build_not_found_error_response, + parse_and_validate_create_response, + parse_create_response, + to_api_error_response, + validate_create_response, +) + +__all__ = [ + "ResponseHandler", + "CreateSpan", + "CreateSpanHook", + "InMemoryCreateSpanHook", + "RecordedSpan", + "build_api_error_response", + "build_create_span_tags", + "build_invalid_mode_error_response", + "build_not_found_error_response", + "build_platform_server_header", + "parse_and_validate_create_response", + "parse_create_response", + "start_create_span", + "to_api_error_response", + "validate_create_response", +] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_background.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_background.py new file mode 100644 index 000000000000..e0d7a6301878 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_background.py @@ -0,0 +1,179 @@ +# Copyright (c) Microsoft Corporation. 
+# Licensed under the MIT license.
+"""Background execution helpers for non-stream responses."""
+
+from __future__ import annotations
+
+import asyncio  # pylint: disable=do-not-import-asyncio
+from typing import Any
+
+from .._response_context import ResponseContext
+from ..models import _generated as generated_models
+from ..models.runtime import ResponseExecution, build_failed_response
+from ..streaming._helpers import (
+    _build_events,
+    _coerce_handler_event,
+    _apply_stream_event_defaults,
+    _extract_response_snapshot_from_events,
+)
+
+
+async def _run_background_non_stream(
+    *,
+    create_async: Any,
+    parsed: Any,
+    context: ResponseContext,
+    cancellation_signal: asyncio.Event,
+    record: ResponseExecution,
+    response_id: str,
+    agent_reference: dict[str, Any],
+    model: str | None,
+    provider: Any = None,
+    store: bool = True,
+) -> None:
+    """Execute a non-stream handler in the background and update the execution record.
+
+    Collects handler events, builds the response payload, and transitions the
+    record status to ``completed``, ``failed``, or ``cancelled``.
+
+    :keyword create_async: The handler's async generator callable.
+    :paramtype create_async: Any
+    :keyword parsed: Parsed ``CreateResponse`` model instance.
+    :paramtype parsed: Any
+    :keyword context: Runtime response context for this request.
+    :paramtype context: ResponseContext
+    :keyword cancellation_signal: Event signalling that cancellation was requested.
+    :paramtype cancellation_signal: asyncio.Event
+    :keyword record: The mutable execution record to update.
+    :paramtype record: ResponseExecution
+    :keyword response_id: The response ID for this execution.
+    :paramtype response_id: str
+    :keyword agent_reference: Normalized agent reference dictionary.
+    :paramtype agent_reference: dict[str, Any]
+    :keyword model: Model name, or ``None``.
+    :paramtype model: str | None
+    :keyword provider: Optional persistence provider; when set and ``store`` is ``True``,
+        ``update_response_async`` is called after terminal state is reached.
+    :paramtype provider: Any
+    :keyword store: Whether the response should be persisted via the provider.
+    :paramtype store: bool
+    :return: None
+    :rtype: None
+    """
+    record.transition_to("in_progress")
+    handler_events: list[dict[str, Any]] = []
+
+    try:
+        try:
+            first_event_processed = False
+            async for handler_event in create_async(parsed, context, cancellation_signal):
+                if cancellation_signal.is_set():
+                    record.transition_to("cancelled")
+                    return
+
+                coerced = _coerce_handler_event(handler_event)
+                normalized = _apply_stream_event_defaults(
+                    coerced,
+                    response_id=response_id,
+                    agent_reference=agent_reference,
+                    model=model,
+                    sequence_number=None,
+                )
+                handler_events.append(normalized)
+                if not first_event_processed:
+                    first_event_processed = True
+                    # Set initial response snapshot for POST response body without
+                    # changing record.status (transition_to manages status lifecycle)
+                    _initial_snapshot = _extract_response_snapshot_from_events(
+                        handler_events,
+                        response_id=response_id,
+                        agent_reference=agent_reference,
+                        model=model,
+                    )
+                    record.set_response_snapshot(generated_models.Response(_initial_snapshot))
+                    record.response_created_signal.set()
+        except Exception:  # pylint: disable=broad-exception-caught
+            if record.status != "cancelled":
+                record.set_response_snapshot(
+                    build_failed_response(
+                        response_id,
+                        agent_reference,
+                        model,
+                        created_at=context.created_at,
+                    )
+                )
+                record.transition_to("failed")
+            if not first_event_processed:
+                # Mark failure before any events so run_background can return HTTP 500
+                record.response_failed_before_events = True
+            record.response_created_signal.set()  # unblock run_background on failure
+            return
+
+        if cancellation_signal.is_set():
+            record.transition_to("cancelled")
+            record.response_created_signal.set()  # unblock run_background on cancellation
+            return
+
+        events = handler_events if handler_events else _build_events(
+            response_id,
+            include_progress=True,
+            agent_reference=agent_reference,
+            model=model,
+        )
+        response_payload = _extract_response_snapshot_from_events(
+            events,
+            response_id=response_id,
+            agent_reference=agent_reference,
+            model=model,
+            remove_sequence_number=True,
+        )
+
+        resolved_status = response_payload.get("status")
+        if record.status != "cancelled":
+            record.set_response_snapshot(generated_models.Response(response_payload))
+            record.transition_to(resolved_status if isinstance(resolved_status, str) else "completed")
+    finally:
+        # Always unblock run_background (idempotent if already set)
+        record.response_created_signal.set()
+        # Persist terminal state update via provider (bg non-stream: update after runner completes)
+        if (
+            store
+            and provider is not None
+            and record.status not in {"cancelled"}
+            and record.response is not None
+        ):
+            try:
+                await provider.update_response_async(record.response)
+            except Exception:  # pylint: disable=broad-exception-caught
+                pass  # best effort
+
+
+def _refresh_background_status(record: ResponseExecution) -> None:
+    """Refresh the status of a background execution record.
+
+    Checks the execution task state and cancellation signal to update the
+    record status. Called by GET/DELETE/cancel endpoints to reflect the
+    current runner state without triggering execution.
+
+    :param record: The execution record to refresh.
+    :type record: ResponseExecution
+    :return: None
+    :rtype: None
+    """
+    if not record.mode_flags.background or record.is_terminal:
+        return
+
+    if record.cancel_signal.is_set() and not record.is_terminal:
+        record.status = "cancelled"
+        return
+
+    # execution_task is started immediately in run_background
+    if record.execution_task is not None and record.execution_task.done():
+        if not record.is_terminal:
+            if record.execution_task.cancelled():
+                record.status = "cancelled"
+            else:
+                exc = record.execution_task.exception()
+                if exc is not None:
+                    record.status = "failed"
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_endpoint_handler.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_endpoint_handler.py
new file mode 100644
index 000000000000..11491f695512
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_endpoint_handler.py
@@ -0,0 +1,772 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""HTTP endpoint handler for the Responses server.
+
+This module owns all Starlette I/O: ``Request`` parsing, route-level
+validation, header propagation, and ``Response`` construction. Business
+logic lives in :class:`_ResponseOrchestrator`.
+""" + +from __future__ import annotations + +import asyncio # pylint: disable=do-not-import-asyncio +from copy import deepcopy +from typing import TYPE_CHECKING, Any + +from starlette.requests import Request +from starlette.responses import JSONResponse, Response, StreamingResponse + +from azure.ai.agentserver.core import AgentLogger +from azure.ai.agentserver.responses.models._generated.sdk.models.models._models import CreateResponse + +from .._response_context import ResponseContext +from .._options import ResponsesServerOptions +from ..models import ResponseModeFlags +from ..streaming._helpers import _encode_sse +from ..streaming._sse import encode_sse_payload +from ..streaming._state_machine import LifecycleStateMachineError, normalize_lifecycle_events +from ._background import _refresh_background_status +from ._execution_context import _ExecutionContext +from ..models.runtime import build_cancelled_response +from ._http_errors import ( + _deleted_response, + _error_response, + _invalid_mode, + _invalid_request, + _not_found, + _service_unavailable, +) +from ._observability import build_create_span_tags, start_create_span +from ._orchestrator import _HandlerError, _ResponseOrchestrator +from ._request_parsing import ( + _apply_item_cursors, + _extract_item_id, + _prevalidate_identity_payload, + _resolve_conversation_id, + _resolve_identity_fields, +) +from ._runtime_state import _RuntimeState +from ._validation import parse_and_validate_create_response +from ..store._base import ResponseProviderProtocol, ResponseStreamProviderProtocol +from ..streaming._helpers import EVENT_TYPE + +if TYPE_CHECKING: + from azure.ai.agentserver.core import TracingHelper + +logger = AgentLogger.get() + + +class _ResponseEndpointHandler: # pylint: disable=too-many-instance-attributes + """HTTP-layer handler for all Responses API endpoints. + + Owns all Starlette ``Request``/``Response`` concerns. Delegates + event-pipeline logic to :class:`_ResponseOrchestrator`. 
+ + Mutable shutdown state (``_is_draining``, ``_shutdown_requested``) lives + here so every route method shares consistent drain/cancel semantics without + needing a ``nonlocal`` closure variable. + """ + + def __init__( + self, + *, + orchestrator: _ResponseOrchestrator, + runtime_state: _RuntimeState, + runtime_options: ResponsesServerOptions, + response_headers: dict[str, str], + sse_headers: dict[str, str], + tracing: "TracingHelper | None" = None, + provider: ResponseProviderProtocol, + stream_provider: ResponseStreamProviderProtocol | None = None, + ) -> None: + """Initialise the endpoint handler. + + :param orchestrator: Event-pipeline orchestrator. + :type orchestrator: _ResponseOrchestrator + :param runtime_state: In-memory execution record store. + :type runtime_state: _RuntimeState + :param runtime_options: Server runtime options. + :type runtime_options: ResponsesServerOptions + :param response_headers: Headers to include on all responses. + :type response_headers: dict[str, str] + :param sse_headers: SSE-specific headers (e.g. connection, cache-control). + :type sse_headers: dict[str, str] + :param tracing: Optional tracing helper from hosting's AgentHost. + :type tracing: TracingHelper | None + :param provider: Persistence provider for response envelopes and input items. + :type provider: ResponseProviderProtocol + :param stream_provider: Optional provider for SSE stream event persistence and replay. 
+ :type stream_provider: ResponseStreamProviderProtocol | None + """ + self._orchestrator = orchestrator + self._runtime_state = runtime_state + self._runtime_options = runtime_options + self._response_headers = response_headers + self._sse_headers = sse_headers + self._tracing = tracing + self._provider = provider + self._stream_provider = stream_provider + self._shutdown_requested: asyncio.Event = asyncio.Event() + self._is_draining: bool = False + + # Validate the lifecycle event state machine on startup so + # misconfigured state machines surface immediately. + try: + normalize_lifecycle_events( + response_id="resp_validation", + events=[ + {"type": EVENT_TYPE.RESPONSE_CREATED.value, "payload": {"status": "queued"}}, + {"type": EVENT_TYPE.RESPONSE_COMPLETED.value, "payload": {"status": "completed"}}, + ], + ) + except LifecycleStateMachineError as exc: + raise RuntimeError(f"Invalid lifecycle event state machine configuration: {exc}") from exc + + # ------------------------------------------------------------------ + # Span attribute helper + # ------------------------------------------------------------------ + + @staticmethod + def _safe_set_attrs(span: Any, attrs: dict[str, str]) -> None: + """Safely set attributes on an OTel span. + + :param span: The OTel span, or *None*. + :type span: Any + :param attrs: Key-value attributes to set. 
+ :type attrs: dict[str, str] + """ + if span is None: + return + try: + for key, value in attrs.items(): + span.set_attribute(key, value) + except Exception: # pylint: disable=broad-exception-caught + logger.debug("Failed to set span attributes: %s", list(attrs.keys()), exc_info=True) + + # ------------------------------------------------------------------ + # Streaming response helpers + # ------------------------------------------------------------------ + + def _wrap_streaming_response( + self, + response: StreamingResponse, + otel_span: Any, + baggage_token: Any, + ) -> StreamingResponse: + """Wrap a streaming response's body iterator with tracing and baggage cleanup. + + Two layers of wrapping are applied in order: + + 1. **Inner (tracing):** ``trace_stream`` wraps the body iterator so + the OTel span covers the full streaming duration and records any + errors that occur while yielding chunks. + 2. **Outer (baggage cleanup):** A second async generator detaches the + W3C Baggage context *after* all chunks have been sent (or an + error occurs). + + :param response: The ``StreamingResponse`` to wrap. + :type response: StreamingResponse + :param otel_span: The OTel span (or *None* when tracing is disabled). + :type otel_span: Any + :param baggage_token: Token from ``set_baggage`` (or *None*). + :type baggage_token: Any + :return: The same response object, with its body_iterator replaced. + :rtype: StreamingResponse + """ + if self._tracing is None: + return response + + # Inner wrap: trace_stream ends the span when iteration completes. + response.body_iterator = self._tracing.trace_stream(response.body_iterator, otel_span) + + # Outer wrap: detach baggage after all chunks are sent. 
+ original_iterator = response.body_iterator + tracing = self._tracing # capture for the closure + + async def _cleanup_iter(): # type: ignore[return-value] + try: + async for chunk in original_iterator: + yield chunk + finally: + tracing.detach_baggage(baggage_token) + + response.body_iterator = _cleanup_iter() + return response + + # ------------------------------------------------------------------ + # ResponseContext factory + # ------------------------------------------------------------------ + + def _build_execution_context( + self, + *, + payload: Any, + parsed: CreateResponse, + response_id: str, + agent_reference: Any, + span: Any, + request: Request, + ) -> _ExecutionContext: + """Build an :class:`_ExecutionContext` from the parsed request. + + Extracts all protocol fields from *parsed* exactly once and + creates the cancellation signal. The companion + :class:`ResponseContext` is derived automatically so that both + objects share a single source of truth for mode flags, input + items, and conversation-threading fields. + + :param payload: Raw JSON payload dict. + :type payload: Any + :param parsed: Validated :class:`CreateResponse` model. + :type parsed: CreateResponse + :param response_id: Assigned response identifier. + :type response_id: str + :param agent_reference: Normalised agent reference dictionary. + :type agent_reference: Any + :param span: Active observability span for this request. + :type span: Any + :param request: Starlette HTTP request (for headers / query params). + :type request: Request + :return: A fully-populated :class:`_ExecutionContext` with its + ``context`` field already set. 
+ :rtype: _ExecutionContext + """ + stream = bool(getattr(parsed, "stream", False)) + store = True if getattr(parsed, "store", None) is None else bool(parsed.store) + background = bool(getattr(parsed, "background", False)) + model = getattr(parsed, "model", None) + input_items = [deepcopy(item) for item in (parsed.input or [])] + previous_response_id: str | None = ( + parsed.previous_response_id + if isinstance(parsed.previous_response_id, str) and parsed.previous_response_id + else None + ) + conversation_id = _resolve_conversation_id(parsed) + + cancellation_signal = asyncio.Event() + if self._shutdown_requested.is_set(): + cancellation_signal.set() + + ctx = _ExecutionContext( + response_id=response_id, + agent_reference=agent_reference, + model=model, + store=store, + background=background, + stream=stream, + input_items=input_items, + previous_response_id=previous_response_id, + conversation_id=conversation_id, + cancellation_signal=cancellation_signal, + span=span, + parsed=parsed, + ) + + # Derive the public ResponseContext from the execution context. + ctx.context = self._create_response_context(ctx, raw_body=payload, request=request) + return ctx + + def _create_response_context( + self, + ctx: _ExecutionContext, + *, + raw_body: Any, + request: Request, + ) -> ResponseContext: + """Derive a :class:`ResponseContext` from an :class:`_ExecutionContext`. + + All protocol fields (mode flags, input items, conversation + threading) are read from *ctx* so that values are extracted from + the parsed request exactly once. + + :param ctx: The execution context that owns the protocol fields. + :param raw_body: The raw JSON payload dict. + :param request: The Starlette HTTP request. + :return: A fully-populated :class:`ResponseContext`. 
+ """ + mode_flags = ResponseModeFlags( + stream=ctx.stream, store=ctx.store, background=ctx.background + ) + client_headers = { + k: v for k, v in request.headers.items() + if k.lower().startswith("x-client-") + } + + context = ResponseContext( + response_id=ctx.response_id, + mode_flags=mode_flags, + raw_body=raw_body, + request=ctx.parsed, + provider=self._provider, + input_items=ctx.input_items, + previous_response_id=ctx.previous_response_id, + conversation_id=ctx.conversation_id, + history_limit=self._runtime_options.default_fetch_history_count, + client_headers=client_headers, + query_parameters=dict(request.query_params), + ) + context.is_shutdown_requested = self._shutdown_requested.is_set() + return context + + # ------------------------------------------------------------------ + # Route handlers + # ------------------------------------------------------------------ + + async def handle_create(self, request: Request) -> Response: # pylint: disable=too-many-return-statements + """Route handler for ``POST /responses``. + + Parses and validates the create request, builds an + :class:`_ExecutionContext`, then dispatches to the appropriate + orchestrator method (stream / sync / background). + + :param request: Incoming Starlette request. + :type request: Request + :return: HTTP response for the create operation. + :rtype: Response + """ + if self._is_draining: + return _service_unavailable("Server is shutting down.", {}) + + # Start tracing span using hosting's TracingHelper + otel_span = None + baggage_token = None + streaming_wrapped = False + + # Also maintain CreateSpanHook for backward compat (tests etc.) 
+ span = start_create_span( + "create_response", + build_create_span_tags( + response_id=None, + model=None, + agent_reference=None, + service_name="azure-ai-agentserver-responses", + ), + hook=self._runtime_options.create_span_hook, + ) + captured_error: Exception | None = None + + try: + payload = await request.json() + _prevalidate_identity_payload(payload) + parsed = parse_and_validate_create_response(payload, options=self._runtime_options) + except Exception as exc: # pylint: disable=broad-exception-caught + logger.error("Failed to parse/validate create request", exc_info=exc) + captured_error = exc + span.end(captured_error) + if self._tracing is not None: + self._tracing.end_span(otel_span, exc=exc) + return _error_response(exc, {}) + + try: + response_id, agent_reference = _resolve_identity_fields(parsed) + except Exception as exc: # pylint: disable=broad-exception-caught + logger.error("Failed to resolve identity fields", exc_info=exc) + captured_error = exc + span.end(captured_error) + if self._tracing is not None: + self._tracing.end_span(otel_span, exc=exc) + return _error_response(exc, {}) + + ctx = self._build_execution_context( + payload=payload, + parsed=parsed, + response_id=response_id, + agent_reference=agent_reference, + span=span, + request=request, + ) + + # Start OTel request span now that we have the response_id + if self._tracing is not None: + otel_span = self._tracing.start_request_span( + request.headers, + response_id, + span_operation="create_response", + operation_name="create_response", + ) + self._safe_set_attrs(otel_span, { + "gen_ai.response.id": response_id, + "gen_ai.request.model": ctx.model or "", + }) + baggage_token = self._tracing.set_baggage({ + "response_id": response_id, + }) + + span.set_tags( + build_create_span_tags( + response_id=response_id, + model=ctx.model, + agent_reference=agent_reference, + service_name="azure-ai-agentserver-responses", + ) + ) + + try: + if ctx.stream: + sse_response = StreamingResponse( + 
                    self._orchestrator.run_stream(ctx),
+                    media_type="text/event-stream",
+                    headers=self._sse_headers,
+                )
+                wrapped = self._wrap_streaming_response(sse_response, otel_span, baggage_token)
+                streaming_wrapped = True
+                return wrapped
+
+            if not ctx.background:
+                try:
+                    snapshot = await self._orchestrator.run_sync(ctx)
+                    # End OTel span for non-streaming success
+                    if self._tracing is not None:
+                        self._tracing.end_span(otel_span)
+                    return JSONResponse(snapshot, status_code=200, headers=self._response_headers)
+                except _HandlerError as exc:
+                    logger.error("Handler error in sync create (response_id=%s)", ctx.response_id, exc_info=exc.original)
+                    if self._tracing is not None:
+                        self._tracing.end_span(otel_span, exc=exc.original)
+                    return _error_response(exc.original, self._response_headers)
+
+            snapshot = await self._orchestrator.run_background(ctx)
+            if self._tracing is not None:
+                self._tracing.end_span(otel_span)
+            return JSONResponse(snapshot, status_code=200, headers=self._response_headers)
+        except _HandlerError as exc:
+            logger.error("Handler error in create (response_id=%s)", ctx.response_id, exc_info=exc.original)
+            if self._tracing is not None:
+                self._tracing.end_span(otel_span, exc=exc.original)
+            return _error_response(exc.original, self._response_headers)
+        except Exception as exc:  # pylint: disable=broad-exception-caught
+            logger.error("Unexpected error in create (response_id=%s)", ctx.response_id, exc_info=exc)
+            if self._tracing is not None:
+                self._tracing.end_span(otel_span, exc=exc)
+            raise
+        finally:
+            # For non-streaming responses (or error paths that returned
+            # before reaching _wrap_streaming_response), detach baggage
+            # immediately. Streaming responses handle this in
+            # _wrap_streaming_response's cleanup iterator instead.
+            if not streaming_wrapped:
+                if self._tracing is not None:
+                    self._tracing.detach_baggage(baggage_token)
+
+    async def handle_get(self, request: Request) -> Response:  # pylint: disable=too-many-return-statements
+        """Route handler for ``GET /responses/{response_id}``.
+ + Returns the response snapshot or replays SSE events if + ``stream=true`` is in the query parameters. + + :param request: Incoming Starlette request. + :type request: Request + :return: JSON snapshot or SSE replay streaming response. + :rtype: Response + """ + response_id = request.path_params["response_id"] + record = await self._runtime_state.get(response_id) + if record is None: + if await self._runtime_state.is_deleted(response_id): + return _deleted_response(response_id, {}) + + stream_replay = request.query_params.get("stream", "false").lower() == "true" + if not stream_replay: + # Provider fallback: serve completed responses that are no longer in runtime state + # (e.g., after a process restart). + try: + response_obj = await self._provider.get_response_async(response_id) + snapshot = response_obj.as_dict() + return JSONResponse(snapshot, status_code=200) + except Exception: # pylint: disable=broad-exception-caught + logger.warning("Provider fallback failed for GET response_id=%s", response_id, exc_info=True) + else: + # Stream provider fallback: replay persisted SSE events when runtime state is gone. 
+ replay_response = await self._try_replay_persisted_stream(request, response_id) + if replay_response is not None: + return replay_response + + return _not_found(response_id, {}) + + _refresh_background_status(record) + + stream_replay = request.query_params.get("stream", "false").lower() == "true" + if stream_replay: + if not record.replay_enabled: + return _invalid_mode( + "stream replay is not available for this response; to enable SSE replay, " + + "create the response with background=true", + {}, + param="stream", + ) + + parsed_cursor = self._parse_starting_after(request) + if isinstance(parsed_cursor, Response): + return parsed_cursor + + return self._build_live_stream_response(record, parsed_cursor) + + if not record.visible_via_get: + return _not_found(response_id, {}) + + return JSONResponse(_RuntimeState.to_snapshot(record), status_code=200, headers=self._response_headers) + + @staticmethod + def _parse_starting_after(request: Request) -> int | Response: + """Parse the ``starting_after`` query parameter. + + Returns the integer cursor value (defaulting to ``-1``) or an + error :class:`Response` when the value is not a valid integer. 
+ """ + cursor_raw = request.query_params.get("starting_after") + if cursor_raw is None: + return -1 + try: + return int(cursor_raw) + except ValueError: + return _invalid_request( + "starting_after must be an integer", + {}, + param="starting_after", + ) + + def _build_live_stream_response(self, record: Any, starting_after: int) -> StreamingResponse: + """Build a live SSE subscription response for an in-flight record.""" + _cursor = starting_after + + async def _stream_from_subject(): + async for event in record.subject.subscribe(cursor=_cursor): # type: ignore[union-attr] + yield encode_sse_payload(event["type"], event["payload"]) + + return StreamingResponse( + _stream_from_subject(), media_type="text/event-stream", headers=self._sse_headers + ) + + async def _try_replay_persisted_stream( + self, request: Request, response_id: str + ) -> Response | None: + """Try to replay persisted SSE events from the stream provider. + + Returns a ``StreamingResponse`` if replay events are available, + an error ``Response`` for invalid query parameters, or ``None`` + when no replay data exists. + """ + if self._stream_provider is None: + return None + try: + replay_events = await self._stream_provider.get_stream_events_async(response_id) + if replay_events is None: + return None + parsed_cursor = self._parse_starting_after(request) + if isinstance(parsed_cursor, Response): + return parsed_cursor + filtered = [ + e for e in replay_events + if e["payload"]["sequence_number"] > parsed_cursor + ] + return StreamingResponse( + _encode_sse(filtered), + media_type="text/event-stream", + headers=self._sse_headers, + ) + except Exception: # pylint: disable=broad-exception-caught + logger.warning("Failed to replay persisted stream for response_id=%s", response_id, exc_info=True) + return None + + async def handle_delete(self, request: Request) -> Response: + """Route handler for ``DELETE /responses/{response_id}``. + + :param request: Incoming Starlette request. 
+ :type request: Request + :return: Deletion confirmation or error response. + :rtype: Response + """ + response_id = request.path_params["response_id"] + record = await self._runtime_state.get(response_id) + if record is None: + return _not_found(response_id, {}) + + _refresh_background_status(record) + + if record.mode_flags.background and record.status in {"queued", "in_progress"}: + return _invalid_request( + "Cannot delete an in-flight response.", + {}, + param="response_id", + ) + + deleted = await self._runtime_state.delete(response_id) + if not deleted: + return _not_found(response_id, {}) + + if record.mode_flags.store: + try: + await self._provider.delete_response_async(response_id) + except Exception: # pylint: disable=broad-exception-caught + logger.warning("Best-effort provider delete failed for response_id=%s", response_id, exc_info=True) + + return JSONResponse( + {"id": response_id, "object": "response.deleted", "deleted": True}, + status_code=200, + ) + + async def handle_cancel(self, request: Request) -> Response: # pylint: disable=too-many-return-statements + """Route handler for ``POST /responses/{response_id}/cancel``. + + :param request: Incoming Starlette request. + :type request: Request + :return: Cancelled snapshot or error response. 
+ :rtype: Response + """ + response_id = request.path_params["response_id"] + record = await self._runtime_state.get(response_id) + if record is None: + return _not_found(response_id, {}) + + _refresh_background_status(record) + + if not record.mode_flags.background: + return _invalid_request( + "Cannot cancel a synchronous response.", + {}, + param="response_id", + ) + + if record.status == "cancelled": + # Idempotent: ensure the response snapshot reflects cancelled state + record.set_response_snapshot( + build_cancelled_response(record.response_id, record.agent_reference, record.model) + ) + return JSONResponse(_RuntimeState.to_snapshot(record), status_code=200, headers=self._response_headers) + + if record.status == "completed": + return _invalid_request( + "Cannot cancel a completed response.", + {}, + param="response_id", + ) + + if record.status == "failed": + return _invalid_request( + "Cannot cancel a failed response.", + {}, + param="response_id", + ) + + if record.status == "incomplete": + return _invalid_request( + "Cannot cancel an incomplete response.", + {}, + param="response_id", + ) + + record.set_response_snapshot( + build_cancelled_response(record.response_id, record.agent_reference, record.model) + ) + record.cancel_signal.set() + record.transition_to("cancelled") + return JSONResponse(_RuntimeState.to_snapshot(record), status_code=200, headers=self._response_headers) + + async def handle_input_items(self, request: Request) -> Response: + """Route handler for ``GET /responses/{response_id}/input_items``. + + Returns a paginated list of input items for the given response. + + :param request: Incoming Starlette request. + :type request: Request + :return: Paginated input items list. 
+        :rtype: Response
+        """
+        response_id = request.path_params["response_id"]
+
+        limit_raw = request.query_params.get("limit", "20")
+        try:
+            limit = int(limit_raw)
+        except ValueError:
+            return _invalid_request(
+                "limit must be an integer between 1 and 100", {}, param="limit"
+            )
+
+        if limit < 1 or limit > 100:
+            return _invalid_request(
+                "limit must be between 1 and 100", {}, param="limit"
+            )
+
+        order = request.query_params.get("order", "desc").lower()
+        if order not in {"asc", "desc"}:
+            return _invalid_request("order must be 'asc' or 'desc'", {}, param="order")
+
+        after = request.query_params.get("after")
+        before = request.query_params.get("before")
+
+        try:
+            # Fetch up to the maximum page size in ascending order; the
+            # requested order, cursors, and limit are applied below.
+            items = await self._provider.get_input_items_async(
+                response_id, limit=100, ascending=True
+            )
+        except ValueError:
+            return _deleted_response(response_id, {})
+        except KeyError:
+            # Fall back to runtime_state for in-flight responses not yet persisted to provider
+            try:
+                items = await self._runtime_state.get_input_items(response_id)
+            except ValueError:
+                return _deleted_response(response_id, {})
+            except KeyError:
+                return _not_found(response_id, {})
+
+        ordered_items = items if order == "asc" else list(reversed(items))
+        scoped_items = _apply_item_cursors(ordered_items, after=after, before=before)
+
+        page = scoped_items[:limit]
+        has_more = len(scoped_items) > limit
+
+        first_id = _extract_item_id(page[0]) if page else None
+        last_id = _extract_item_id(page[-1]) if page else None
+
+        return JSONResponse(
+            {
+                "object": "list",
+                "data": page,
+                "first_id": first_id,
+                "last_id": last_id,
+                "has_more": has_more,
+            },
+            status_code=200,
+        )
+
+    async def handle_shutdown(self) -> None:
+        """Graceful shutdown handler.
+
+        Signals all active responses to cancel and waits for in-flight
+        background executions to complete within the configured grace period.
+ + :return: None + :rtype: None + """ + self._is_draining = True + self._shutdown_requested.set() + + records = await self._runtime_state.list_records() + for record in records: + if record.response_context is not None: + record.response_context.is_shutdown_requested = True + + record.cancel_signal.set() + + if record.mode_flags.background and record.status in {"queued", "in_progress"}: + record.set_response_snapshot( + build_cancelled_response(record.response_id, record.agent_reference, record.model) + ) + record.transition_to("cancelled") + + deadline = asyncio.get_running_loop().time() + float( + self._runtime_options.shutdown_grace_period_seconds + ) + while True: + pending = [ + record + for record in records + if record.mode_flags.background + and record.execution_task is not None + and record.status in {"queued", "in_progress"} + ] + if not pending: + break + if asyncio.get_running_loop().time() >= deadline: + break + await asyncio.sleep(0.05) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_event_subject.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_event_subject.py new file mode 100644 index 000000000000..1640947c0838 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_event_subject.py @@ -0,0 +1,93 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Seekable replay subject for in-process SSE event broadcasting.""" + +from __future__ import annotations + +import asyncio # pylint: disable=do-not-import-asyncio +from typing import Any, AsyncIterator + + +class _ResponseEventSubject: + """In-process hot observable with replay buffer for SSE event broadcasting. + + Equivalent to .NET's ``SeekableReplaySubject`` / ``ConcurrentReplayAsyncSubject``. + Multiple concurrent subscribers can join at any time and receive: + + - All buffered events emitted since creation (or from a cursor). 
+ - Subsequent live events as they are published in real time. + - A completion signal when the stream ends. + + This enables live SSE replay behaviour for + ``GET /responses/{id}?stream=true`` while a background+stream response is + still in flight, matching the .NET ``SseReplayResult`` / ``IResponsesStreamProvider`` + contract. + """ + + _DONE: object = object() # sentinel that signals stream completion + + def __init__(self) -> None: + """Initialise the subject with an empty event buffer and no subscribers.""" + self._events: list[dict[str, Any]] = [] + self._subscribers: list[asyncio.Queue[Any]] = [] + self._done: bool = False + self._lock: asyncio.Lock = asyncio.Lock() + + async def publish(self, event: dict[str, Any]) -> None: + """Push a new event to all current subscribers and append it to the replay buffer. + + :param event: The normalised event dict (``{"type": str, "payload": dict}``). + :type event: dict[str, Any] + """ + async with self._lock: + self._events.append(event) + for q in self._subscribers: + q.put_nowait(event) + + async def complete(self) -> None: + """Signal stream completion to all current and future subscribers. + + After calling this, new :meth:`subscribe` calls will still deliver the full + buffered event history and then exit immediately. + """ + async with self._lock: + self._done = True + for q in self._subscribers: + q.put_nowait(self._DONE) + + async def subscribe(self, cursor: int = -1) -> AsyncIterator[dict[str, Any]]: + """Subscribe to events, yielding buffered history then live events. + + :param cursor: Sequence-number cursor. Only events whose + ``payload["sequence_number"]`` is strictly greater than *cursor* are + yielded. Pass ``-1`` (default) to receive all events. + :type cursor: int + :returns: An async iterator of normalised event dicts. 
+        :rtype: AsyncIterator[dict[str, Any]]
+        """
+        q: asyncio.Queue[Any] = asyncio.Queue()
+        async with self._lock:
+            # Replay all buffered events that are after the cursor
+            for event in self._events:
+                if event["payload"]["sequence_number"] > cursor:
+                    q.put_nowait(event)
+            if self._done:
+                # Stream already completed — put sentinel so iterator exits after replay
+                q.put_nowait(self._DONE)
+            else:
+                # Register for live events
+                self._subscribers.append(q)
+
+        try:
+            while True:
+                item = await q.get()
+                if item is self._DONE:
+                    return
+                yield item
+        finally:
+            # Clean up subscription on client disconnect or normal completion
+            async with self._lock:
+                try:
+                    self._subscribers.remove(q)
+                except ValueError:
+                    pass  # never registered (stream had already completed when this subscriber joined)
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_execution_context.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_execution_context.py
new file mode 100644
index 000000000000..a1ee6f9d5a32
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_execution_context.py
@@ -0,0 +1,69 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Per-request execution context for the Responses server."""
+
+from __future__ import annotations
+
+import asyncio  # pylint: disable=do-not-import-asyncio
+from typing import Any
+
+from azure.ai.agentserver.responses.models._generated.sdk.models._types import InputParam
+
+from .._response_context import ResponseContext
+
+
+class _ExecutionContext:  # pylint: disable=too-many-instance-attributes
+    """Holds all per-request *input* state for a single create-response call.
+
+    Passed between the routing layer and the orchestrator to avoid
+    large keyword-argument bundles on every internal function call.
+    All fields are set once at construction and treated as immutable by
+    the orchestrator.
Mutable in-flight state (accumulated events, + background record, captured error) lives in :class:`_PipelineState` + inside the orchestrator methods. + """ + + def __init__( + self, + *, + response_id: str, + agent_reference: Any, + model: str | None, + store: bool, + background: bool, + stream: bool, + input_items: list[InputParam], + previous_response_id: str | None, + conversation_id: str | None, + cancellation_signal: asyncio.Event, + span: Any, + parsed: Any, + context: ResponseContext | None = None, + ) -> None: + self.response_id = response_id + """The assigned response identifier.""" + self.agent_reference = agent_reference + """Normalized agent reference dictionary.""" + self.model = model + """Model name, or ``None`` if not specified.""" + self.store = store + """Whether to persist the execution record after the response completes.""" + self.background = background + """Whether this is a background (non-blocking) request.""" + self.stream = stream + """Whether to produce SSE streaming output.""" + self.input_items = input_items + """Extracted input items from the request body.""" + self.previous_response_id = previous_response_id + """Previous response ID for conversation chaining, or ``None``.""" + self.conversation_id = conversation_id + """Conversation ID for grouping related responses, or ``None``.""" + self.cancellation_signal = cancellation_signal + """Event signalling that the client has requested cancellation.""" + self.context: ResponseContext | None = context + """Runtime response context for this request. 
Set after construction
+    via :meth:`_ResponseEndpointHandler._create_response_context`."""
+        self.span = span
+        """Active observability span for this request."""
+        self.parsed = parsed
+        """Parsed ``CreateResponse`` model instance from the request body."""
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_http_errors.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_http_errors.py
new file mode 100644
index 000000000000..e7f4c5df2526
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_http_errors.py
@@ -0,0 +1,188 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""HTTP error response factory functions for the Responses server."""
+
+from __future__ import annotations
+
+from typing import Any
+
+from starlette.responses import JSONResponse
+
+from ._validation import (
+    build_api_error_response,
+    build_invalid_mode_error_response,
+    to_api_error_response,
+)
+
+
+def _json_payload(value: Any) -> Any:
+    """Convert a value to a JSON-serializable form.
+
+    If the value has an ``as_dict`` method (e.g. generated models), it is called;
+    otherwise the value is returned as-is.
+
+    :param value: The value to convert.
+    :type value: Any
+    :return: A JSON-serializable representation of the value.
+    :rtype: Any
+    """
+    if hasattr(value, "as_dict"):
+        return value.as_dict()  # type: ignore[no-any-return]
+    return value
+
+
+def _api_error(
+    *,
+    message: str,
+    code: str,
+    param: str | None = None,
+    error_type: str = "invalid_request_error",
+    status_code: int,
+    headers: dict[str, str],
+) -> JSONResponse:
+    """Build a standard API error ``JSONResponse`` from individual fields.
+
+    :keyword message: Human-readable error message.
+    :paramtype message: str
+    :keyword code: Machine-readable error code.
+    :paramtype code: str
+    :keyword param: The request parameter that caused the error, or ``None``.
+    :paramtype param: str | None
+    :keyword error_type: Error type category (default ``"invalid_request_error"``).
+    :paramtype error_type: str
+    :keyword status_code: HTTP status code for the response.
+    :paramtype status_code: int
+    :keyword headers: Response headers to include.
+    :paramtype headers: dict[str, str]
+    :return: A ``JSONResponse`` with the error envelope.
+    :rtype: JSONResponse
+    """
+    payload = _json_payload(
+        build_api_error_response(
+            message=message,
+            code=code,
+            param=param,
+            error_type=error_type,
+        )
+    )
+    return JSONResponse(payload, status_code=status_code, headers=headers)
+
+
+def _error_response(error: Exception, headers: dict[str, str]) -> JSONResponse:
+    """Map an exception to an appropriate HTTP error ``JSONResponse``.
+
+    The HTTP status code is inferred from the error type field in the envelope.
+
+    :param error: The exception to convert.
+    :type error: Exception
+    :param headers: Response headers to include.
+    :type headers: dict[str, str]
+    :return: A ``JSONResponse`` with the mapped error and status code.
+    :rtype: JSONResponse
+    """
+    envelope = to_api_error_response(error)
+    payload = _json_payload(envelope)
+    error_type = payload.get("error", {}).get("type") if isinstance(payload, dict) else None
+
+    status_code = 500
+    if error_type == "invalid_request_error":
+        status_code = 400
+    elif error_type == "not_found_error":
+        status_code = 404
+
+    return JSONResponse(payload, status_code=status_code, headers=headers)
+
+
+def _not_found(response_id: str, headers: dict[str, str]) -> JSONResponse:
+    """Build a 404 Not Found error response for a missing response ID.
+
+    :param response_id: The response ID that was not found.
+    :type response_id: str
+    :param headers: Response headers to include.
+    :type headers: dict[str, str]
+    :return: A 404 ``JSONResponse``.
+ :rtype: JSONResponse + """ + return _api_error( + message=f"Response with id '{response_id}' not found.", + code="invalid_request", + param="response_id", + error_type="invalid_request_error", + status_code=404, + headers=headers, + ) + + +def _invalid_request(message: str, headers: dict[str, str], *, param: str | None = None) -> JSONResponse: + """Build a 400 Bad Request error response for an invalid request. + + :param message: Human-readable error message. + :type message: str + :param headers: Response headers to include. + :type headers: dict[str, str] + :keyword param: The request parameter that caused the error, or ``None``. + :paramtype param: str | None + :return: A 400 ``JSONResponse``. + :rtype: JSONResponse + """ + return _api_error( + message=message, + code="invalid_request", + param=param, + error_type="invalid_request_error", + status_code=400, + headers=headers, + ) + + +def _invalid_mode(message: str, headers: dict[str, str], *, param: str | None = None) -> JSONResponse: + """Build a 400 Bad Request error response for an invalid mode combination. + + :param message: Human-readable error message. + :type message: str + :param headers: Response headers to include. + :type headers: dict[str, str] + :keyword param: The request parameter that caused the error, or ``None``. + :paramtype param: str | None + :return: A 400 ``JSONResponse`` with an ``invalid_mode`` error code. + :rtype: JSONResponse + """ + payload = _json_payload(build_invalid_mode_error_response(message, param=param)) + return JSONResponse(payload, status_code=400, headers=headers) + + +def _service_unavailable(message: str, headers: dict[str, str]) -> JSONResponse: + """Build a 503 Service Unavailable error response. + + :param message: Human-readable error message. + :type message: str + :param headers: Response headers to include. + :type headers: dict[str, str] + :return: A 503 ``JSONResponse``.
+ :rtype: JSONResponse + """ + return _api_error( + message=message, + code="service_unavailable", + param=None, + error_type="server_error", + status_code=503, + headers=headers, + ) + + +def _deleted_response(response_id: str, headers: dict[str, str]) -> JSONResponse: + """Build a 400 error response indicating the response has been deleted. + + :param response_id: The response ID that was deleted. + :type response_id: str + :param headers: Response headers to include. + :type headers: dict[str, str] + :return: A 400 ``JSONResponse``. + :rtype: JSONResponse + """ + return _invalid_request( + f"Response with id '{response_id}' has been deleted.", + headers, + param="response_id", + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_observability.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_observability.py new file mode 100644 index 000000000000..cc6990afcead --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_observability.py @@ -0,0 +1,233 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Observability and identity header helpers.""" + +from __future__ import annotations + +from datetime import datetime, timezone +from typing import Any, Protocol, runtime_checkable + + +def build_platform_server_header(sdk_name: str, version: str, runtime: str, extra: str | None = None) -> str: + """Build the platform server identity header value. + + :param sdk_name: SDK package name. + :type sdk_name: str + :param version: SDK package version. + :type version: str + :param runtime: Runtime marker, such as python/3.10. + :type runtime: str + :param extra: Optional additional identity suffix. + :type extra: str | None + :returns: Formatted identity header value. 
+ :rtype: str + """ + base_value = f"{sdk_name}/{version} ({runtime})" + return f"{base_value} {extra}".strip() if extra else base_value + + +@runtime_checkable +class CreateSpanHook(Protocol): + """Hook contract for one-root-span-per-create observability.""" + + def on_span_start(self, name: str, tags: dict[str, Any]) -> None: + """Called when a create span starts. + + :param name: Span name. + :type name: str + :param tags: Initial span tags. + :type tags: dict[str, Any] + :return: None + :rtype: None + """ + + def on_span_end(self, name: str, tags: dict[str, Any], error: Exception | None) -> None: + """Called when a create span ends. + + :param name: Span name. + :type name: str + :param tags: Final span tags. + :type tags: dict[str, Any] + :param error: The exception if the span ended with an error, or ``None``. + :type error: Exception | None + :return: None + :rtype: None + """ + + +class CreateSpan: + """Mutable create-span helper used by hosting orchestration.""" + + def __init__( + self, + *, + name: str, + tags: dict[str, Any], + _hook: CreateSpanHook | None = None, + _ended: bool = False, + ) -> None: + self.name = name + self.tags = tags + self._hook = _hook + self._ended = _ended + + def set_tag(self, key: str, value: Any) -> None: + """Set or overwrite one span tag. + + :param key: Tag key. + :type key: str + :param value: Tag value. + :type value: Any + :return: None + :rtype: None + """ + self.tags[key] = value + + def set_tags(self, values: dict[str, Any]) -> None: + """Merge a set of tags into this span. + + :param values: Dictionary of tags to merge. + :type values: dict[str, Any] + :return: None + :rtype: None + """ + self.tags.update(values) + + def end(self, error: Exception | None = None) -> None: + """Complete the span exactly once. + + Subsequent calls are no-ops. + + :param error: The exception if the span ended with an error, or ``None``. 
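The identity-header format produced by `build_platform_server_header` can be reproduced standalone for illustration. This is a sketch that copies the formatting rule, not the SDK function itself:

```python
def platform_server_header(sdk_name: str, version: str, runtime: str, extra: "str | None" = None) -> str:
    """Sketch of the '<sdk>/<version> (<runtime>)[ <extra>]' identity header format."""
    base_value = f"{sdk_name}/{version} ({runtime})"
    # The optional suffix is appended after a space; strip() guards against a blank extra.
    return f"{base_value} {extra}".strip() if extra else base_value
```

For example, `platform_server_header("azure-ai-agentserver-core", "2.0.0b1", "python/3.11")` yields `"azure-ai-agentserver-core/2.0.0b1 (python/3.11)"`.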
+ :type error: Exception | None + :return: None + :rtype: None + """ + if self._ended: + return + + self._ended = True + if self._hook is None: + return + self._hook.on_span_end(self.name, dict(self.tags), error) + + +def start_create_span(name: str, tags: dict[str, Any], hook: CreateSpanHook | None = None) -> CreateSpan: + """Start a create span and notify hook subscribers. + + :param name: Span name. + :type name: str + :param tags: Initial span tags. + :type tags: dict[str, Any] + :param hook: Optional hook to receive span lifecycle events. + :type hook: CreateSpanHook | None + :return: The started ``CreateSpan`` instance. + :rtype: CreateSpan + """ + span = CreateSpan(name=name, tags=dict(tags), _hook=hook) + if hook is not None: + hook.on_span_start(name, dict(span.tags)) + return span + + +def build_create_span_tags( + *, + response_id: str | None, + model: str | None, + agent_reference: dict[str, Any] | None, + service_name: str, +) -> dict[str, Any]: + """Build a baseline GenAI tag set for create spans. + + :keyword response_id: The response ID, or ``None`` if not yet assigned. + :paramtype response_id: str | None + :keyword model: Model name, or ``None``. + :paramtype model: str | None + :keyword agent_reference: Agent reference dictionary, or ``None``. + :paramtype agent_reference: dict[str, Any] | None + :keyword service_name: Logical service name for the span. + :paramtype service_name: str + :return: Dictionary of OpenTelemetry-style GenAI span tags.
+ :rtype: dict[str, Any] + """ + agent_name = None + agent_id = None + if agent_reference is not None: + agent_name = agent_reference.get("name") + agent_version = agent_reference.get("version") + if agent_name and agent_version: + agent_id = f"{agent_name}:{agent_version}" + + return { + "service.name": service_name, + "gen_ai.system": "responses", + "gen_ai.operation.name": "create_response", + "gen_ai.response.id": response_id, + "gen_ai.request.model": model, + "gen_ai.agent.name": agent_name, + "gen_ai.agent.id": agent_id, + } + + +class RecordedSpan: + """Recorded span event for tests and diagnostics.""" + + def __init__( + self, + *, + name: str, + tags: dict[str, Any], + started_at: datetime, + ended_at: datetime | None = None, + error: Exception | None = None, + ) -> None: + self.name = name + self.tags = tags + self.started_at = started_at + self.ended_at = ended_at + self.error = error + + +class InMemoryCreateSpanHook: + """Simple in-memory hook for asserting span lifecycle in tests.""" + + def __init__(self, spans: list[RecordedSpan] | None = None) -> None: + self.spans: list[RecordedSpan] = spans if spans is not None else [] + + def on_span_start(self, name: str, tags: dict[str, Any]) -> None: + """Record a span start event. + + :param name: Span name. + :type name: str + :param tags: Span tags at start time. + :type tags: dict[str, Any] + :return: None + :rtype: None + """ + self.spans.append( + RecordedSpan( + name=name, + tags=dict(tags), + started_at=datetime.now(timezone.utc), + ) + ) + + def on_span_end(self, name: str, tags: dict[str, Any], error: Exception | None) -> None: + """Record a span end event. + + :param name: Span name. + :type name: str + :param tags: Final span tags. + :type tags: dict[str, Any] + :param error: The exception if the span ended with an error, or ``None``. 
+ :type error: Exception | None + :return: None + :rtype: None + """ + if not self.spans: + self.on_span_start(name, tags) + + span = self.spans[-1] + span.tags = dict(tags) + span.error = error + span.ended_at = datetime.now(timezone.utc) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_orchestrator.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_orchestrator.py new file mode 100644 index 000000000000..4dab6043067f --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_orchestrator.py @@ -0,0 +1,915 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Event-pipeline orchestration for the Responses server. + +This module is intentionally free of Starlette imports: it operates purely on +``_ExecutionContext`` and produces plain Python data (dicts, async iterators of +strings). The HTTP layer (Starlette ``Request`` / ``Response``) lives in the +routing module which wraps these results. 
+""" + +from __future__ import annotations + +import asyncio # pylint: disable=do-not-import-asyncio +from copy import deepcopy +from typing import Any, AsyncIterator + +import anyio + +from ..models import _generated as generated_models +from ..models.runtime import ( + ResponseExecution, + ResponseModeFlags, + build_cancelled_response as _build_cancelled_response, + build_failed_response as _build_failed_response, +) +from ..streaming._sse import encode_keep_alive_comment, encode_sse_payload, new_stream_counter +from ..streaming._helpers import ( + EVENT_TYPE, + _apply_stream_event_defaults, + _build_events, + _coerce_handler_event, + _extract_response_snapshot_from_events, +) +from .._options import ResponsesServerOptions +from ..store._base import ResponseProviderProtocol, ResponseStreamProviderProtocol +from ._background import _run_background_non_stream +from ._event_subject import _ResponseEventSubject +from ._execution_context import _ExecutionContext +from ._runtime_state import _RuntimeState + + +def _check_first_event_contract(normalized: dict[str, Any], response_id: str) -> str | None: + """Return an error message if the first handler event violates S-007/S-008/S-009, else None. + + - S-007: The first event MUST be ``response.created``. + - S-008: The ``id`` in ``response.created`` MUST equal the library-assigned ``response_id``. + - S-009: The ``status`` in ``response.created`` MUST be non-terminal. + + :param normalized: Normalised first event dict. + :type normalized: dict[str, Any] + :param response_id: Library-assigned response identifier. + :type response_id: str + :return: Violation message string, or ``None`` if no violation. 
+ :rtype: str | None + """ + event_type = normalized.get("type") + payload = normalized.get("payload") or {} + if event_type != "response.created": + return f"S-007: first event must be response.created, got '{event_type}'" + emitted_id = payload.get("id") + if emitted_id and emitted_id != response_id: + return f"S-008: response.created id '{emitted_id}' != assigned id '{response_id}'" + emitted_status = payload.get("status") + if emitted_status in {"completed", "failed", "cancelled", "incomplete"}: + return f"S-009: response.created status must be non-terminal, got '{emitted_status}'" + return None + + +class _HandlerError(Exception): + """Raised by :meth:`_ResponseOrchestrator.run_sync` when the handler raises. + + Callers should catch this to convert it into an appropriate HTTP error + response without leaking orchestrator internals. + """ + + def __init__(self, original: Exception) -> None: + self.original = original + super().__init__(str(original)) + + +class _PipelineState: + """Mutable in-flight state for a single create-response invocation. + + Intentionally separate from :class:`_ExecutionContext` (which is a pure + immutable per-request input value object). Created locally inside + :meth:`_ResponseOrchestrator._live_stream` and + :meth:`_ResponseOrchestrator.run_sync`, then threaded through every + internal helper so that the helpers are side-effect-free with respect + to ``_ExecutionContext``. + """ + + __slots__ = ("handler_events", "bg_record", "captured_error") + + def __init__(self) -> None: + self.handler_events: list[dict[str, Any]] = [] + self.bg_record: ResponseExecution | None = None + self.captured_error: Exception | None = None + + +class _ResponseOrchestrator: # pylint: disable=too-many-instance-attributes + """Event-pipeline orchestrator for the Responses API. 
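The S-007/S-008/S-009 checks can be exercised in isolation with a standalone copy of the rule. `check_first_event` below is a sketch mirroring `_check_first_event_contract`, not the module's own function:

```python
from typing import Any, Optional

_TERMINAL_STATUSES = {"completed", "failed", "cancelled", "incomplete"}


def check_first_event(normalized: dict, response_id: str) -> Optional[str]:
    """Return a violation message for the first handler event, or None if valid."""
    event_type = normalized.get("type")
    payload = normalized.get("payload") or {}
    if event_type != "response.created":
        return f"S-007: first event must be response.created, got '{event_type}'"
    emitted_id = payload.get("id")
    if emitted_id and emitted_id != response_id:
        return f"S-008: response.created id '{emitted_id}' != assigned id '{response_id}'"
    emitted_status = payload.get("status")
    if emitted_status in _TERMINAL_STATUSES:
        return f"S-009: response.created status must be non-terminal, got '{emitted_status}'"
    return None
```

Note that S-008 only fires when the handler emits a non-empty `id`; an omitted `id` is tolerated and backfilled by normalisation.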
+ + Handles the business logic for streaming, synchronous, and background + create-response requests: driving the handler iterator, normalising events, + managing the background execution record, and finalising persistent state. + + This class has no dependency on Starlette types. + """ + + _TERMINAL_SSE_TYPES: frozenset[str] = frozenset({ + EVENT_TYPE.RESPONSE_COMPLETED.value, + EVENT_TYPE.RESPONSE_FAILED.value, + EVENT_TYPE.RESPONSE_INCOMPLETE.value, + }) + + def __init__( + self, + *, + create_async: Any, + runtime_state: _RuntimeState, + runtime_options: ResponsesServerOptions, + provider: ResponseProviderProtocol, + stream_provider: ResponseStreamProviderProtocol | None = None, + ) -> None: + """Initialise the orchestrator. + + :param create_async: The bound ``create_async`` method from the registered handler. + :type create_async: Any + :param runtime_state: In-memory execution record store. + :type runtime_state: _RuntimeState + :param runtime_options: Server runtime options (keep-alive, etc.). + :type runtime_options: ResponsesServerOptions + :param provider: Persistence provider for response envelopes and input items. + :type provider: ResponseProviderProtocol + :param stream_provider: Optional provider for SSE stream event persistence and replay. + :type stream_provider: ResponseStreamProviderProtocol | None + """ + self._create_async = create_async + self._runtime_state = runtime_state + self._runtime_options = runtime_options + self._provider = provider + self._stream_provider = stream_provider + + # ------------------------------------------------------------------ + # Internal helpers (stream path) + # ------------------------------------------------------------------ + + async def _normalize_and_append( + self, ctx: _ExecutionContext, state: _PipelineState, handler_event: Any + ) -> dict[str, Any]: + """Coerce, normalise, and append a handler event to the pipeline state. 
+ + Also propagates the event into the background record and its subject when active. + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + :param handler_event: Raw event emitted by the handler. + :type handler_event: Any + :return: The normalised event dictionary. + :rtype: dict[str, Any] + """ + coerced = _coerce_handler_event(handler_event) + normalized = _apply_stream_event_defaults( + coerced, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + sequence_number=len(state.handler_events), + ) + state.handler_events.append(normalized) + if state.bg_record is not None: + state.bg_record.apply_event(normalized, state.handler_events) + if state.bg_record.subject is not None: + await state.bg_record.subject.publish(normalized) + return normalized + + @staticmethod + def _has_terminal_event(handler_events: list[dict[str, Any]]) -> bool: + """Return ``True`` if any terminal event has been emitted. + + :param handler_events: List of normalised handler events. + :type handler_events: list[dict[str, Any]] + :return: Whether a terminal event is present. + :rtype: bool + """ + return any(e["type"] in _ResponseOrchestrator._TERMINAL_SSE_TYPES for e in handler_events) + + async def _cancel_terminal_sse(self, ctx: _ExecutionContext, state: _PipelineState) -> str: + """Build and record a ``response.failed`` cancel-terminal SSE string. + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + :return: Encoded SSE string for the cancel-terminal event. 
+ :rtype: str + """ + cancel_event: dict[str, Any] = { + "type": EVENT_TYPE.RESPONSE_FAILED.value, + "payload": _build_cancelled_response(ctx.response_id, ctx.agent_reference, ctx.model).as_dict(), + } + normalized = _apply_stream_event_defaults( + cancel_event, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + sequence_number=len(state.handler_events), + ) + state.handler_events.append(normalized) + if state.bg_record is not None: + state.bg_record.apply_event(normalized, state.handler_events) + if state.bg_record.subject is not None: + await state.bg_record.subject.publish(normalized) + return encode_sse_payload(normalized["type"], normalized["payload"]) + + async def _cancel_terminal_sse_dict(self, ctx: _ExecutionContext, state: _PipelineState) -> dict[str, Any]: + """Build, normalise, append, and return a cancel-terminal event dict. + + Like :meth:`_cancel_terminal_sse` but returns the raw normalised event + dictionary instead of an SSE-encoded string, so that it can be consumed + by the shared :meth:`_process_handler_events` pipeline. + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + :return: Normalised cancel-terminal event dictionary. + :rtype: dict[str, Any] + """ + cancel_event: dict[str, Any] = { + "type": EVENT_TYPE.RESPONSE_FAILED.value, + "payload": _build_cancelled_response(ctx.response_id, ctx.agent_reference, ctx.model).as_dict(), + } + return await self._normalize_and_append(ctx, state, cancel_event) + + async def _make_failed_event(self, ctx: _ExecutionContext, state: _PipelineState) -> dict[str, Any]: + """Build, normalise, append, and return a ``response.failed`` event dict. + + Used for B-13 (handler exception after ``response.created``) and + S-021 (handler completed without emitting a terminal event). + + :param ctx: Current execution context (immutable inputs). 
+ :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + :return: Normalised ``response.failed`` event dictionary. + :rtype: dict[str, Any] + """ + failed_event: dict[str, Any] = { + "type": EVENT_TYPE.RESPONSE_FAILED.value, + "payload": { + "id": ctx.response_id, + "object": "response", + "status": "failed", + "output": [], + "error": {"code": "server_error", "message": "An internal server error occurred."}, + }, + } + return await self._normalize_and_append(ctx, state, failed_event) + + + async def _register_bg_execution( + self, ctx: _ExecutionContext, state: _PipelineState, first_normalized: dict[str, Any] + ) -> None: + """Create, seed, and register the background+stream execution record. + + Called from :meth:`_process_handler_events` after the first event is + received. The record is seeded with ``first_normalized`` so that + subscribers joining mid-stream receive the full history (matching + .NET's ``SeekableReplaySubject`` behaviour). + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + :param first_normalized: The first normalised handler event.
+ :type first_normalized: dict[str, Any] + """ + initial_payload = _extract_response_snapshot_from_events( + state.handler_events, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + initial_status = initial_payload.get("status") + if not isinstance(initial_status, str): + initial_status = "in_progress" + execution = ResponseExecution( + response_id=ctx.response_id, + mode_flags=ResponseModeFlags(stream=True, store=True, background=True), + status=initial_status, + input_items=deepcopy(ctx.input_items), + previous_response_id=ctx.previous_response_id, + cancel_signal=ctx.cancellation_signal, + ) + execution.set_response_snapshot(generated_models.Response(initial_payload)) + execution.subject = _ResponseEventSubject() + state.bg_record = execution + await state.bg_record.subject.publish(first_normalized) + await self._runtime_state.add(execution) + if ctx.store: + _initial_response_obj = generated_models.Response(initial_payload) + _history_ids = ( + await self._provider.get_history_item_ids_async(ctx.previous_response_id, None, 10000) + if ctx.previous_response_id + else None + ) + await self._provider.create_response_async( + _initial_response_obj, ctx.input_items or None, _history_ids + ) + + async def _process_handler_events( + self, + ctx: _ExecutionContext, + state: _PipelineState, + handler_iterator: Any, + ) -> AsyncIterator[dict[str, Any]]: + """Shared event pipeline: coerce → normalise → apply_event → subject publish. + + This async generator is the single authoritative event pipeline consumed by + both :meth:`_live_stream` (streaming) and :meth:`run_sync` (synchronous). + It handles: + + - Empty handler (``StopAsyncIteration`` before the first event): synthesises + a full lifecycle event sequence and yields it. + - Pre-creation handler exception (B8): yields a standalone ``error`` event + and sets ``state.captured_error``. 
+ - First-event normalisation and bg+store record registration + (:meth:`_register_bg_execution`). + - Remaining events via :meth:`_normalize_and_append`. + - Post-creation handler exception (B-13): yields a ``response.failed`` event + and sets ``state.captured_error``. + - Missing terminal after successful handler completion (S-021): yields a + ``response.failed`` event without setting ``state.captured_error`` so that + synchronous callers can return HTTP 200 with a ``"failed"`` body. + - Cancellation winddown (S-019): yields a cancel-terminal event when the + cancellation signal is set and no terminal event was emitted. + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + :param handler_iterator: Async generator returned by the handler's + ``create_async`` factory. + :type handler_iterator: Any + :return: Async iterator of normalised event dictionaries. + :rtype: AsyncIterator[dict[str, Any]] + """ + # --- First event --- + try: + first_raw = await handler_iterator.__anext__() + except StopAsyncIteration: + # Handler yielded nothing: synthesise fallback lifecycle events. + fallback_events = _build_events( + ctx.response_id, + include_progress=True, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + for event in fallback_events: + state.handler_events.append(event) + yield event + return + except Exception as exc: # pylint: disable=broad-exception-caught + # B8: Pre-creation error → emit a standalone `error` event only. + # No response.created precedes it; this is the contract-mandated shape. + state.captured_error = exc + yield { + "type": EVENT_TYPE.ERROR.value, + "payload": {"message": "An internal server error occurred.", "param": None, "code": None}, + } + return + + # Normalise the first event manually (before _normalize_and_append so we + # can set up the bg record with the correct sequence number). 
+ first_normalized = _apply_stream_event_defaults( + _coerce_handler_event(first_raw), + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + sequence_number=len(state.handler_events), + ) + + # S-007/S-008/S-009: first-event contract validation. + # Violations are treated the same as B8 pre-creation errors: + # - streaming: yield a standalone 'error' event and return (no record created) + # - sync: state.captured_error is set → run_sync raises _HandlerError → HTTP 500 + violation = _check_first_event_contract(first_normalized, ctx.response_id) + if violation: + state.captured_error = RuntimeError(violation) + yield { + "type": EVENT_TYPE.ERROR.value, + "payload": {"message": "An internal server error occurred.", "param": None, "code": None}, + } + return + + state.handler_events.append(first_normalized) + + # bg+store: create and register the execution record after the first event. + if ctx.background and ctx.store: + await self._register_bg_execution(ctx, state, first_normalized) + + yield first_normalized + + # --- Remaining events --- + try: + async for raw in handler_iterator: + normalized = await self._normalize_and_append(ctx, state, raw) + yield normalized + except Exception as exc: # pylint: disable=broad-exception-caught + state.captured_error = exc + # B-13: emit response.failed when handler raises after response.created. + if not self._has_terminal_event(state.handler_events): + yield await self._make_failed_event(ctx, state) + return + + # S-019: cancellation winddown checked BEFORE S-021 so that a handler + # stopped early by the cancellation signal receives a proper cancel + # terminal event (response.failed with status == "cancelled") rather + # than a generic S-021 failure terminal. 
+ if ctx.cancellation_signal.is_set() and not self._has_terminal_event(state.handler_events): + yield await self._cancel_terminal_sse_dict(ctx, state) + return + + # S-021: handler completed normally but never emitted a terminal event. + # NOTE: state.captured_error intentionally left None so that synchronous + # callers return HTTP 200 with a "failed" body rather than HTTP 500. + if not self._has_terminal_event(state.handler_events): + yield await self._make_failed_event(ctx, state) + + async def _finalize_bg_stream(self, ctx: _ExecutionContext, state: _PipelineState) -> None: + """Persist state and complete the subject for a background+stream response. + + Called from the ``finally`` block of :meth:`_live_stream` when + ``ctx.background and ctx.store`` is True. The execution record may + already exist (``state.bg_record``, created at ``response.created`` time) + or may be absent (empty handler — fallback events were synthesised by + :meth:`_process_handler_events`). In the latter case the record is + created here from the accumulated ``state.handler_events``. + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + """ + # If the handler yielded nothing, _process_handler_events synthesised + # fallback events but never called _register_bg_execution, so + # state.bg_record is None. Create the record here from the fallback events. 
+ if state.bg_record is None: + events = state.handler_events if state.handler_events else _build_events( + ctx.response_id, + include_progress=True, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + response_payload = _extract_response_snapshot_from_events( + events, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + resolved_status = response_payload.get("status") + status = resolved_status if isinstance(resolved_status, str) else "completed" + + replay_subject = _ResponseEventSubject() + for _evt in events: + await replay_subject.publish(_evt) + await replay_subject.complete() + + execution = ResponseExecution( + response_id=ctx.response_id, + mode_flags=ResponseModeFlags(stream=True, store=True, background=True), + status=status, + subject=replay_subject, + input_items=deepcopy(ctx.input_items), + previous_response_id=ctx.previous_response_id, + cancel_signal=ctx.cancellation_signal, + ) + execution.set_response_snapshot(generated_models.Response(response_payload)) + await self._runtime_state.add(execution) + if ctx.store: + try: + _history_ids = ( + await self._provider.get_history_item_ids_async(ctx.previous_response_id, None, 10000) + if ctx.previous_response_id + else None + ) + await self._provider.create_response_async( + generated_models.Response(response_payload), ctx.input_items or None, _history_ids + ) + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + ctx.span.end(state.captured_error) + return + + record = state.bg_record + + if record.status != "cancelled": + events = state.handler_events if state.handler_events else _build_events( + ctx.response_id, + include_progress=True, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + if state.captured_error is not None: + record.set_response_snapshot( + _build_failed_response( + ctx.response_id, + ctx.agent_reference, + ctx.model, + created_at=ctx.context.created_at, + ) + ) + 
record.transition_to("failed") + else: + response_payload = _extract_response_snapshot_from_events( + events, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + resolved_status = response_payload.get("status") + status = resolved_status if isinstance(resolved_status, str) else "in_progress" + record.set_response_snapshot(generated_models.Response(response_payload)) + record.transition_to(status) # type: ignore[arg-type] + + # Persist terminal state update via provider (bg+stream: initial create already done) + if record.mode_flags.store and record.response is not None: + try: + await self._provider.update_response_async(record.response) + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + # Persist SSE events for replay after process restart + if self._stream_provider is not None and state.handler_events: + try: + await self._stream_provider.save_stream_events_async( + ctx.response_id, state.handler_events + ) + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + + ctx.span.end(state.captured_error) + # Complete the subject — signals all live SSE replay subscribers that the + # stream has ended, matching .NET's publisher.OnCompletedAsync() call. + if record.subject is not None: + try: + await record.subject.complete() + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + + async def _finalize_non_bg_stream(self, ctx: _ExecutionContext, state: _PipelineState) -> None: + """Persist the execution record for a non-background streaming response. + + Called from the ``finally`` block of :meth:`_live_stream` when + ``ctx.background`` is False. For ``store=True`` responses, this creates + a new execution record and provider entry at terminal state (the stream + is already complete). 
A pre-filled replay subject is attached so that + ``GET ?stream=true`` replay has a subject to subscribe to, though replay + will be rejected (``replay_enabled=False``) because this is non-background. + + :param ctx: Current execution context (immutable inputs). + :type ctx: _ExecutionContext + :param state: Mutable pipeline state for this invocation. + :type state: _PipelineState + """ + events = state.handler_events if state.handler_events else _build_events( + ctx.response_id, + include_progress=True, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + response_payload = _extract_response_snapshot_from_events( + events, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + resolved_status = response_payload.get("status") + status = resolved_status if isinstance(resolved_status, str) else "completed" + + if ctx.store: + # Pre-fill a completed subject so GET ?stream=true replay always has + # a subject to subscribe to (non-bg: stream already finished here). 
+ replay_subject = _ResponseEventSubject() + for _evt in events: + await replay_subject.publish(_evt) + await replay_subject.complete() + stream_record = ResponseExecution( + response_id=ctx.response_id, + mode_flags=ResponseModeFlags(stream=True, store=True, background=False), + status=status, + subject=replay_subject, + input_items=deepcopy(ctx.input_items), + previous_response_id=ctx.previous_response_id, + ) + stream_record.set_response_snapshot(generated_models.Response(response_payload)) + await self._runtime_state.add(stream_record) + # Persist via provider (non-bg stream: single create at terminal state) + try: + _history_ids = ( + await self._provider.get_history_item_ids_async(ctx.previous_response_id, None, 10000) + if ctx.previous_response_id + else None + ) + await self._provider.create_response_async( + generated_models.Response(response_payload), ctx.input_items or None, _history_ids + ) + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + + ctx.span.end(state.captured_error) + + # ------------------------------------------------------------------ + # Public execution methods + # ------------------------------------------------------------------ + + def run_stream(self, ctx: _ExecutionContext) -> AsyncIterator[str]: + """Return an async iterator of SSE-encoded strings for a streaming request. + + The iterator handles: + + - Pre-creation errors (B8 contract: standalone ``error`` SSE event). + - Empty handler (fallback synthesised events). + - Mid-stream handler errors (``response.failed`` SSE event, B-13). + - Cancellation terminal events. + - Optional SSE keep-alive comments. + + :param ctx: Current execution context. + :type ctx: _ExecutionContext + :return: Async iterator of SSE strings. + :rtype: AsyncIterator[str] + """ + return self._live_stream(ctx) + + async def _live_stream(self, ctx: _ExecutionContext) -> AsyncIterator[str]: + """Drive the SSE streaming pipeline using the shared event pipeline. 
+ + Delegates all event processing (first-event handling, normalisation, + bg record registration, B-13 / S-021 / S-019 terminal events) to + :meth:`_process_handler_events`. This method only encodes each event + dict to SSE and handles keep-alive comment injection. + + :param ctx: Current execution context. + :type ctx: _ExecutionContext + :returns: Async iterator of SSE-encoded strings. + :rtype: AsyncIterator[str] + """ + new_stream_counter() + state = _PipelineState() + handler_iterator = self._create_async(ctx.parsed, ctx.context, ctx.cancellation_signal) + + # Helper: route to the right finalize method based on the request semantics + # (bg+store → bg_stream path; everything else → non_bg_stream path). + # NOTE: state.bg_record may be None for bg+stream when the handler yields no + # events (fallback path in _process_handler_events); _finalize_bg_stream + # handles that case by creating the record itself. + async def _finalize() -> None: + if ctx.background and ctx.store: + await self._finalize_bg_stream(ctx, state) + else: + await self._finalize_non_bg_stream(ctx, state) + + # --- Fast path: no keep-alive --- + if not self._runtime_options.sse_keep_alive_enabled: + if not (ctx.background and ctx.store): + # Simple fast path for non-background streaming. + try: + async for event in self._process_handler_events(ctx, state, handler_iterator): + yield encode_sse_payload(event["type"], event["payload"]) + finally: + await _finalize() + return + + # Background+stream without keep-alive: run the handler as an independent + # asyncio.Task so that finalization (including subject.complete()) is + # guaranteed to run even when the original SSE connection is dropped before + # all events are delivered. Without this, _live_stream can be abandoned + # mid-iteration by Starlette (the async-generator finalizer may not fire + # promptly), leaving GET-replay subscribers blocked on await q.get() forever. 
+ _SENTINEL_BG = object() + bg_queue: asyncio.Queue[object] = asyncio.Queue() + + async def _bg_producer() -> None: + try: + async for event in self._process_handler_events(ctx, state, handler_iterator): + await bg_queue.put(encode_sse_payload(event["type"], event["payload"])) + except Exception as exc: # pylint: disable=broad-exception-caught + state.captured_error = exc + finally: + # Always finalize (includes subject.complete()) — this runs even if + # the original POST SSE connection was dropped and _live_stream is + # never properly closed by Starlette. + await _finalize() + await bg_queue.put(_SENTINEL_BG) + + bg_task = asyncio.create_task(_bg_producer()) + try: + while True: + item = await bg_queue.get() + if item is _SENTINEL_BG: + break + yield item # type: ignore[misc] + except Exception: # pylint: disable=broad-exception-caught + pass # SSE connection dropped; bg_task continues independently + finally: + # Wait for the handler task so _finalize() has run before we exit. + # Do NOT cancel it — background+stream must reach a terminal state + # regardless of client connectivity. + if not bg_task.done(): + try: + await bg_task + except Exception: # pylint: disable=broad-exception-caught + pass + return + + # --- Keep-alive path: merge handler events with periodic keep-alive comments --- + # via a shared asyncio.Queue so comments are sent even while the handler is idle. 
+ _SENTINEL = object() + merge_queue: asyncio.Queue[str | object] = asyncio.Queue() + + async def _handler_producer() -> None: + try: + async for event in self._process_handler_events(ctx, state, handler_iterator): + await merge_queue.put(encode_sse_payload(event["type"], event["payload"])) + finally: + await merge_queue.put(_SENTINEL) + + async def _keep_alive_producer(interval: int) -> None: + try: + while True: + await asyncio.sleep(interval) + await merge_queue.put(encode_keep_alive_comment()) + except asyncio.CancelledError: + return + + handler_task = asyncio.create_task(_handler_producer()) + keep_alive_task = asyncio.create_task( + _keep_alive_producer(self._runtime_options.sse_keep_alive_interval_seconds) # type: ignore[arg-type] + ) + + try: + while True: + item = await merge_queue.get() + if item is _SENTINEL: + break + yield item # type: ignore[misc] + except Exception as exc: # pylint: disable=broad-exception-caught + state.captured_error = exc + finally: + keep_alive_task.cancel() + try: + await keep_alive_task + except asyncio.CancelledError: + pass + # Ensure the handler task has finished before finalising + if not handler_task.done(): + handler_task.cancel() + try: + await handler_task + except asyncio.CancelledError: + pass + await _finalize() + + async def run_sync(self, ctx: _ExecutionContext) -> dict[str, Any]: + """Execute a synchronous (non-stream, non-background) create-response request. + + Delegates event processing to :meth:`_process_handler_events`, which + handles all error paths. This method collects the accumulated events, + builds the response snapshot, optionally persists the record, closes + the span, and returns the snapshot dict. + + Raises :class:`_HandlerError` if the handler raises (B8 or B-13) so + the caller can map it to an HTTP 500 response. S-021 (handler + completed without emitting a terminal event) does *not* raise; instead + the snapshot status is ``"failed"`` and HTTP 200 is returned. 
+ + :param ctx: Current execution context. + :type ctx: _ExecutionContext + :return: Response snapshot dictionary. + :rtype: dict[str, Any] + :raises _HandlerError: If the handler raises during iteration. + """ + state = _PipelineState() + handler_iterator = self._create_async(ctx.parsed, ctx.context, ctx.cancellation_signal) + # _process_handler_events handles all error paths (B8, B-13, S-021, S-019). + # run_sync only needs to exhaust the generator for state.handler_events side-effects. + async for _ in self._process_handler_events(ctx, state, handler_iterator): + pass + + if state.captured_error is not None: + ctx.span.end(state.captured_error) + raise _HandlerError(state.captured_error) from state.captured_error + + events = state.handler_events if state.handler_events else _build_events( + ctx.response_id, + include_progress=True, + agent_reference=ctx.agent_reference, + model=ctx.model, + ) + response_payload = _extract_response_snapshot_from_events( + events, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + remove_sequence_number=True, + ) + resolved_status = response_payload.get("status") + status = resolved_status if isinstance(resolved_status, str) else "completed" + + record = ResponseExecution( + response_id=ctx.response_id, + mode_flags=ResponseModeFlags(stream=False, store=ctx.store, background=False), + status=status, + input_items=deepcopy(ctx.input_items), + previous_response_id=ctx.previous_response_id, + response_context=ctx.context, + ) + record.set_response_snapshot(generated_models.Response(response_payload)) + + if ctx.store: + await self._runtime_state.add(record) + # Persist via provider (non-bg sync: single create at terminal state) + try: + _response_obj = generated_models.Response(response_payload) + _history_ids = ( + await self._provider.get_history_item_ids_async(ctx.previous_response_id, None, 10000) + if ctx.previous_response_id + else None + ) + await 
self._provider.create_response_async(_response_obj, ctx.input_items or None, _history_ids) + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + + ctx.span.end(None) + return _RuntimeState.to_snapshot(record) + + async def run_background(self, ctx: _ExecutionContext) -> dict[str, Any]: + """Handle a background (non-stream) create-response request. + + Immediately launches the handler as an asyncio task, waits up to 10 s + for the first ``response.created`` event, then returns the current + snapshot. This ensures that the POST response body reflects the + handler's real ``response.created`` payload (S-003). + + :param ctx: Current execution context. + :type ctx: _ExecutionContext + :return: Response snapshot dictionary (status: in_progress or queued on timeout). + :rtype: dict[str, Any] + :raises _HandlerError: If the handler fails before emitting any event. + """ + record = ResponseExecution( + response_id=ctx.response_id, + mode_flags=ResponseModeFlags(stream=False, store=ctx.store, background=True), + status="queued", + input_items=deepcopy(ctx.input_items), + previous_response_id=ctx.previous_response_id, + response_context=ctx.context, + cancel_signal=ctx.cancellation_signal, + ) + + # Always register so GET can observe in-flight state + await self._runtime_state.add(record) + + # Best-effort persist initial queued state via provider + if ctx.store: + try: + _initial_snapshot = _RuntimeState.to_snapshot(record) + _response_obj = generated_models.Response(_initial_snapshot) + _history_ids = ( + await self._provider.get_history_item_ids_async(ctx.previous_response_id, None, 10000) + if ctx.previous_response_id + else None + ) + await self._provider.create_response_async(_response_obj, ctx.input_items or None, _history_ids) + except Exception: # pylint: disable=broad-exception-caught + pass # best effort + + # Launch handler immediately (S-003: handler runs asynchronously) + # Use anyio.CancelScope(shield=True) + suppress 
CancelledError so the + # background task is NOT cancelled when the HTTP request scope exits + # (anyio structured concurrency). The shielded scope ensures the handler + # runs to completion; catching CancelledError prevents the Task from being + # marked as cancelled, so _refresh_background_status reads the real status. + async def _shielded_runner() -> None: + try: + with anyio.CancelScope(shield=True): + await _run_background_non_stream( + create_async=self._create_async, + parsed=ctx.parsed, + context=ctx.context, + cancellation_signal=ctx.cancellation_signal, + record=record, + response_id=ctx.response_id, + agent_reference=ctx.agent_reference, + model=ctx.model, + provider=self._provider, + store=ctx.store, + ) + except asyncio.CancelledError: + pass # event-loop teardown in TestClient; background work already done + + record.execution_task = asyncio.create_task(_shielded_runner()) + + # Wait for first event signal (or 10 s timeout) before returning POST response + try: + await asyncio.wait_for(record.response_created_signal.wait(), timeout=10.0) + except asyncio.TimeoutError: + pass # Return current snapshot anyway; handler is still running + + if record.response_failed_before_events: + raise _HandlerError(RuntimeError("Handler failed before emitting response.created")) + + ctx.span.end(None) + return _RuntimeState.to_snapshot(record) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_request_parsing.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_request_parsing.py new file mode 100644 index 000000000000..9dd307efe85b --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_request_parsing.py @@ -0,0 +1,254 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Request pre-validation, identity resolution, and input extraction helpers.""" + +from __future__ import annotations + +from copy import deepcopy +from typing import Any + +from .._id_generator import IdGenerator +from ..models.errors import RequestValidationError + +_DEFAULT_AGENT_REFERENCE_NAME = "server-default-agent" + + +def _extract_input_items(raw_payload: Any) -> list[dict[str, Any]]: + """Extract and deep-copy the ``input`` items array from a raw request payload. + + :param raw_payload: Raw decoded JSON request body. + :type raw_payload: Any + :return: List of deep-copied input item dictionaries, or empty list if absent. + :rtype: list[dict[str, Any]] + """ + if not isinstance(raw_payload, dict): + return [] + + value = raw_payload.get("input") + if not isinstance(value, list): + return [] + + extracted: list[dict[str, Any]] = [] + for item in value: + if isinstance(item, dict): + extracted.append(deepcopy(item)) + return extracted + + +def _extract_previous_response_id(raw_payload: Any) -> str | None: + """Extract the ``previous_response_id`` string from a raw request payload. + + :param raw_payload: Raw decoded JSON request body. + :type raw_payload: Any + :return: The previous response ID string, or ``None`` if absent or invalid. + :rtype: str | None + """ + if not isinstance(raw_payload, dict): + return None + value = raw_payload.get("previous_response_id") + return value if isinstance(value, str) and value else None + + +def _extract_item_id(item: dict[str, Any]) -> str | None: + """Extract the ``id`` field from an input item dictionary. + + :param item: An input item dictionary. + :type item: dict[str, Any] + :return: The item ID as a string, or ``None`` if not present. 
+    :rtype: str | None
+    """
+    value = item.get("id")
+    return str(value) if value is not None else None
+
+
+def _apply_item_cursors(items: list[dict[str, Any]], *, after: str | None, before: str | None) -> list[dict[str, Any]]:
+    """Apply cursor-based pagination to a list of input items.
+
+    :param items: Ordered list of input item dictionaries.
+    :type items: list[dict[str, Any]]
+    :keyword after: Item ID to start after (exclusive lower bound), or ``None``.
+    :paramtype after: str | None
+    :keyword before: Item ID to stop before (exclusive upper bound), or ``None``.
+    :paramtype before: str | None
+    :return: The subset of items within the cursor window.
+    :rtype: list[dict[str, Any]]
+    """
+    scoped = items
+
+    if after is not None:
+        after_index = next((index for index, item in enumerate(scoped) if _extract_item_id(item) == after), None)
+        if after_index is not None:
+            scoped = scoped[after_index + 1 :]
+
+    if before is not None:
+        before_index = next((index for index, item in enumerate(scoped) if _extract_item_id(item) == before), None)
+        if before_index is not None:
+            scoped = scoped[:before_index]
+
+    return scoped
+
+
+def _validate_response_id(response_id: str) -> None:
+    """Validate that a response ID matches the expected canonical format.
+
+    :param response_id: The response ID string to validate.
+    :type response_id: str
+    :return: None
+    :rtype: None
+    :raises RequestValidationError: If the ID format is invalid.
+    """
+    is_valid_id, _ = IdGenerator.is_valid(response_id)
+    if not is_valid_id:
+        raise RequestValidationError(
+            "response_id must be in format caresp_<18-char partition key><32-char alphanumeric entropy>",
+            code="invalid_request",
+            param="response_id",
+        )
+
+
+def _normalize_agent_reference(value: Any) -> dict[str, Any]:
+    """Normalize an agent reference value into a validated dictionary.
+
+    If *value* is ``None``, a default agent reference is returned.
+
+    :param value: Raw agent reference from the request (dict, model, or ``None``).
+ :type value: Any + :return: Normalized agent reference dictionary with ``type`` and ``name`` keys. + :rtype: dict[str, Any] + :raises RequestValidationError: If the value is not a valid agent reference. + """ + if value is None: + return { + "type": "agent_reference", + "name": _DEFAULT_AGENT_REFERENCE_NAME, + } + + if hasattr(value, "as_dict"): + candidate = value.as_dict() + elif isinstance(value, dict): + candidate = dict(value) + else: + raise RequestValidationError( + "agent_reference must be an object", + code="invalid_request", + param="agent_reference", + ) + + candidate.setdefault("type", "agent_reference") + name = candidate.get("name") + reference_type = candidate.get("type") + + if reference_type != "agent_reference": + raise RequestValidationError( + "agent_reference.type must be 'agent_reference'", + code="invalid_request", + param="agent_reference.type", + ) + + if not isinstance(name, str) or not name.strip(): + raise RequestValidationError( + "agent_reference.name must be a non-empty string", + code="invalid_request", + param="agent_reference.name", + ) + + candidate["name"] = name.strip() + return candidate + + +def _prevalidate_identity_payload(payload: Any) -> None: + """Pre-validate identity-related fields in the raw request payload. + + Checks ``response_id`` format and ``agent_reference`` structure before full + model parsing, so that identity errors surface early. + + :param payload: Raw decoded JSON request body. + :type payload: Any + :return: None + :rtype: None + :raises RequestValidationError: If identity fields are malformed. 
+ """ + if not isinstance(payload, dict): + return + + raw_response_id = payload.get("response_id") + if raw_response_id is not None: + if not isinstance(raw_response_id, str) or not raw_response_id.strip(): + raise RequestValidationError( + "response_id must be a non-empty string", + code="invalid_request", + param="response_id", + ) + _validate_response_id(raw_response_id.strip()) + + raw_agent_reference = payload.get("agent_reference") + if raw_agent_reference is None: + return + + if not isinstance(raw_agent_reference, dict): + raise RequestValidationError( + "agent_reference must be an object", + code="invalid_request", + param="agent_reference", + ) + + if raw_agent_reference.get("type") != "agent_reference": + raise RequestValidationError( + "agent_reference.type must be 'agent_reference'", + code="invalid_request", + param="agent_reference.type", + ) + + raw_name = raw_agent_reference.get("name") + if not isinstance(raw_name, str) or not raw_name.strip(): + raise RequestValidationError( + "agent_reference.name must be a non-empty string", + code="invalid_request", + param="agent_reference.name", + ) + + +def _resolve_identity_fields(parsed: Any) -> tuple[str, dict[str, Any]]: + """Resolve the response ID and agent reference from a parsed create request. + + Generates a new response ID if one is not explicitly provided. + + :param parsed: Parsed ``CreateResponse`` model instance. + :type parsed: Any + :return: A tuple of ``(response_id, agent_reference)``. + :rtype: tuple[str, dict[str, Any]] + :raises RequestValidationError: If the resolved response ID is invalid. 
+ """ + parsed_mapping = parsed.as_dict() if hasattr(parsed, "as_dict") else {} + explicit_response_id = parsed_mapping.get("response_id") or getattr(parsed, "response_id", None) + if isinstance(explicit_response_id, str) and explicit_response_id.strip(): + response_id = explicit_response_id.strip() + else: + response_id = IdGenerator.new_response_id() + + _validate_response_id(response_id) + agent_reference = _normalize_agent_reference( + parsed_mapping.get("agent_reference") \ + if isinstance(parsed_mapping, dict) \ + else getattr(parsed, "agent_reference", None) + ) + return response_id, agent_reference + + +def _resolve_conversation_id(parsed: Any) -> str | None: + """Extract the conversation ID from a parsed ``CreateResponse`` request. + + Handles both a plain string value and a ``ConversationParam_2`` object + (which carries the ID in its ``.id`` attribute). + + :param parsed: The parsed ``CreateResponse`` model instance. + :type parsed: Any + :returns: The conversation ID string, or ``None`` if not present. + :rtype: str | None + """ + raw = getattr(parsed, "conversation", None) + if isinstance(raw, str): + return raw or None + if raw is not None and hasattr(raw, "id"): + return str(raw.id) or None + return None diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_routing.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_routing.py new file mode 100644 index 000000000000..39acd7543d16 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_routing.py @@ -0,0 +1,225 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Response handler for AgentHost. + +Provides the Responses API endpoints and registers them with the +``AgentHost`` on construction, following the same pattern as +``InvocationHandler``. 
+""" + +from __future__ import annotations + +from typing import TYPE_CHECKING, Callable, Optional + +from starlette.routing import Route + +from azure.ai.agentserver.core import AgentLogger + +from ._endpoint_handler import _ResponseEndpointHandler +from ._orchestrator import _ResponseOrchestrator +from ._runtime_state import _RuntimeState +from .._options import ResponsesServerOptions +from ..store._base import ResponseProviderProtocol, ResponseStreamProviderProtocol +from ..store._memory import InMemoryResponseProvider + +if TYPE_CHECKING: + from azure.ai.agentserver.core import AgentHost, TracingHelper + +logger = AgentLogger.get() + + +class ResponseHandler: + """Response protocol handler that plugs into an ``AgentHost``. + + Creates the Responses API endpoints and registers them with + the server. Use the :meth:`create_handler` decorator to wire + a handler function to the create endpoint. + + This design supports multi-protocol composition -- multiple protocol + handlers (e.g. ``InvocationHandler``, ``ResponseHandler``) can be + mounted onto the same ``AgentHost``. + + Usage:: + + from azure.ai.agentserver.core import AgentHost + from azure.ai.agentserver.responses.hosting import ResponseHandler + + server = AgentHost() + responses = ResponseHandler(server) + + @responses.create_handler + def my_handler(request, context, cancellation_signal): + async def _events(): + yield event + return _events() + + server.run() + + :param server: The ``AgentHost`` to register response protocol + routes with. + :type server: AgentHost + :param prefix: Optional URL prefix for all response routes + (e.g. ``"/v1"``). + :type prefix: str + :param options: Optional runtime options for the responses server. + :type options: ResponsesServerOptions | None + :param provider: Optional persistence provider for response + envelopes and input items. 
+ :type provider: ResponseProviderProtocol | None + """ + + def __init__( + self, + server: "AgentHost", + *, + prefix: str = "", + options: ResponsesServerOptions | None = None, + provider: ResponseProviderProtocol | None = None, + ) -> None: + # Extract tracing from server + self._tracing: Optional["TracingHelper"] = server.tracing + + # Handler slot — populated via @responses.create_handler decorator + self._create_fn: Optional[Callable] = None + + # Normalize prefix + normalized_prefix = prefix.strip() + if normalized_prefix and not normalized_prefix.startswith("/"): + normalized_prefix = f"/{normalized_prefix}" + normalized_prefix = normalized_prefix.rstrip("/") + + # Build internal components + runtime_options = options or ResponsesServerOptions() + # SSE-specific headers (x-platform-server is handled by hosting middleware) + sse_headers: dict[str, str] = { + "connection": "keep-alive", + "cache-control": "no-cache", + "x-accel-buffering": "no", + } + + resolved_provider: ResponseProviderProtocol = provider if provider is not None \ + else InMemoryResponseProvider() + stream_provider = resolved_provider if isinstance(resolved_provider, ResponseStreamProviderProtocol) \ + else None + runtime_state = _RuntimeState() + orchestrator = _ResponseOrchestrator( + create_async=self._dispatch_create, + runtime_state=runtime_state, + runtime_options=runtime_options, + provider=resolved_provider, + stream_provider=stream_provider, + ) + endpoint = _ResponseEndpointHandler( + orchestrator=orchestrator, + runtime_state=runtime_state, + runtime_options=runtime_options, + response_headers={}, + sse_headers=sse_headers, + tracing=self._tracing, + provider=resolved_provider, + stream_provider=stream_provider, + ) + + # Build and cache routes + self._routes: list[Route] = [ + Route( + f"{normalized_prefix}/responses", + endpoint.handle_create, + methods=["POST"], + name="create_response", + ), + Route( + f"{normalized_prefix}/responses/{{response_id}}", + endpoint.handle_get, + 
methods=["GET"], + name="get_response", + ), + Route( + f"{normalized_prefix}/responses/{{response_id}}", + endpoint.handle_delete, + methods=["DELETE"], + name="delete_response", + ), + Route( + f"{normalized_prefix}/responses/{{response_id}}/cancel", + endpoint.handle_cancel, + methods=["POST"], + name="cancel_response", + ), + Route( + f"{normalized_prefix}/responses/{{response_id}}/input_items", + endpoint.handle_input_items, + methods=["GET"], + name="get_input_items", + ), + ] + + # Register routes with the server + server.register_routes(self._routes) + + # Register shutdown handler with the server + server.shutdown_handler(endpoint.handle_shutdown) + + # ------------------------------------------------------------------ + # Handler decorator + # ------------------------------------------------------------------ + + def create_handler(self, fn: Callable) -> Callable: + """Register a function as the create-response handler. + + The handler function must accept exactly three positional parameters: + ``(request, context, cancellation_signal)`` and return an + ``AsyncIterable`` of response stream events. + + Usage:: + + @responses.create_handler + def my_handler(request, context, cancellation_signal): + async def _events(): + yield event + return _events() + + :param fn: A callable accepting (request, context, cancellation_signal). + :type fn: Callable + :return: The original function (unmodified). + :rtype: Callable + """ + self._create_fn = fn + return fn + + # ------------------------------------------------------------------ + # Dispatch (internal) + # ------------------------------------------------------------------ + + def _dispatch_create(self, request, context, cancellation_signal): # type: ignore[no-untyped-def] + """Dispatch to the registered create handler. + + Called by the orchestrator when processing a create request. + + :param request: The parsed create-response request. 
+ :type request: Any + :param context: The response context for the request. + :type context: Any + :param cancellation_signal: The cancellation signal for the request. + :type cancellation_signal: Any + :returns: The result from the registered create handler callable. + :rtype: Any + """ + if self._create_fn is None: + raise NotImplementedError( + "No create handler registered. Use the @responses.create_handler decorator." + ) + return self._create_fn(request, context, cancellation_signal) + + # ------------------------------------------------------------------ + # Routes + # ------------------------------------------------------------------ + + @property + def routes(self) -> list[Route]: + """Starlette routes for the response protocol. + + :return: A list of Route objects for the response endpoints. + :rtype: list[Route] + """ + return self._routes diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_runtime_state.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_runtime_state.py new file mode 100644 index 000000000000..b3141cce26bb --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_runtime_state.py @@ -0,0 +1,143 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
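Reviewer note: the prefix handling in `ResponseHandler.__init__` above is easy to get subtly wrong (leading slash, trailing slash, empty prefix). A standalone sketch of the same normalization, with a hypothetical function name:

```python
def normalize_prefix(prefix: str) -> str:
    # Mirrors the prefix normalization in ResponseHandler.__init__:
    # trim whitespace, ensure one leading slash, drop trailing slashes,
    # and let an empty prefix stay empty so routes mount at the root.
    normalized = prefix.strip()
    if normalized and not normalized.startswith("/"):
        normalized = f"/{normalized}"
    return normalized.rstrip("/")
```

With this, `f"{normalize_prefix('v1')}/responses"` yields `/v1/responses`, and both `""` and `"/"` collapse to the empty string so the routes register at `/responses`.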
+"""Runtime state management for the Responses server.""" + +from __future__ import annotations + +import asyncio # pylint: disable=do-not-import-asyncio +from copy import deepcopy +from typing import Any + +from ..models.runtime import ResponseExecution + + +class _RuntimeState: + """In-memory store for response execution records.""" + + def __init__(self) -> None: + """Initialize the runtime state with empty record and deletion sets.""" + self._records: dict[str, ResponseExecution] = {} + self._deleted_response_ids: set[str] = set() + self._lock = asyncio.Lock() + + async def add(self, record: ResponseExecution) -> None: + """Add or replace an execution record in the store. + + :param record: The execution record to store. + :type record: ResponseExecution + :return: None + :rtype: None + """ + async with self._lock: + self._records[record.response_id] = record + self._deleted_response_ids.discard(record.response_id) + + async def get(self, response_id: str) -> ResponseExecution | None: + """Look up an execution record by response ID. + + :param response_id: The response ID to look up. + :type response_id: str + :return: The matching execution record, or ``None`` if not found. + :rtype: ResponseExecution | None + """ + async with self._lock: + return self._records.get(response_id) + + async def is_deleted(self, response_id: str) -> bool: + """Check whether a response ID has been deleted. + + :param response_id: The response ID to check. + :type response_id: str + :return: ``True`` if the response was previously deleted. + :rtype: bool + """ + async with self._lock: + return response_id in self._deleted_response_ids + + async def delete(self, response_id: str) -> bool: + """Delete an execution record by response ID. + + :param response_id: The response ID to delete. + :type response_id: str + :return: ``True`` if the record was found and deleted, ``False`` otherwise. 
+ :rtype: bool + """ + async with self._lock: + record = self._records.pop(response_id, None) + if record is None: + return False + self._deleted_response_ids.add(response_id) + return True + + async def get_input_items(self, response_id: str) -> list[dict[str, Any]]: + """Retrieve the full input item chain for a response, including ancestors. + + Walks the ``previous_response_id`` chain to build the complete ordered + list of input items. + + :param response_id: The response ID whose input items to retrieve. + :type response_id: str + :return: Ordered list of deep-copied input item dictionaries. + :rtype: list[dict[str, Any]] + :raises ValueError: If the response has been deleted. + :raises KeyError: If the response is not found or not visible. + """ + async with self._lock: + record = self._records.get(response_id) + if record is None: + if response_id in self._deleted_response_ids: + raise ValueError(f"response '{response_id}' has been deleted") + raise KeyError(f"response '{response_id}' not found") + + if not record.visible_via_get: + raise KeyError(f"response '{response_id}' not found") + + history: list[dict[str, Any]] = [] + cursor = record.previous_response_id + visited: set[str] = set() + + while isinstance(cursor, str) and cursor and cursor not in visited: + visited.add(cursor) + previous = self._records.get(cursor) + if previous is None: + break + history = [*deepcopy(previous.input_items), *history] + cursor = previous.previous_response_id + + return [*history, *deepcopy(record.input_items)] + + async def list_records(self) -> list[ResponseExecution]: + """Return a snapshot list of all execution records in the store. + + :return: List of all current execution records. + :rtype: list[ResponseExecution] + """ + async with self._lock: + return list(self._records.values()) + + @staticmethod + def to_snapshot(execution: ResponseExecution) -> dict[str, Any]: + """Build a normalized response snapshot dictionary from an execution. 
+ + Uses ``execution.response.as_dict()`` directly when a response snapshot is + available, avoiding an unnecessary ``Response(dict).as_dict()`` round-trip. + Falls back to a minimal status-only dict when no response has been set yet. + + :param execution: The execution whose response snapshot to build. + :type execution: ResponseExecution + :return: A normalized response payload dictionary. + :rtype: dict[str, Any] + """ + if execution.response is not None: + result: dict[str, Any] = execution.response.as_dict() + result.setdefault("id", execution.response_id) + result.setdefault("response_id", execution.response_id) + result.setdefault("object", "response") + result["status"] = execution.status + return result + return { + "id": execution.response_id, + "response_id": execution.response_id, + "object": "response", + "status": execution.status, + } diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_validation.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_validation.py new file mode 100644 index 000000000000..d30e0df45178 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/hosting/_validation.py @@ -0,0 +1,251 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Validation utilities for request and response models.""" + +from __future__ import annotations + +from typing import Any, Mapping + +from azure.ai.agentserver.responses._options import ResponsesServerOptions +from azure.ai.agentserver.responses.models._generated import ApiErrorResponse, CreateResponse, Error +from azure.ai.agentserver.responses.models._generated._validators import validate_CreateResponse +from azure.ai.agentserver.responses.models.errors import RequestValidationError + + +def parse_create_response(payload: Mapping[str, Any]) -> CreateResponse: + """Parse incoming JSON payload into the generated ``CreateResponse`` model. 
+ + :param payload: Raw request payload mapping. + :type payload: Mapping[str, Any] + :returns: Parsed generated create response model. + :rtype: CreateResponse + :raises RequestValidationError: If payload is not an object or cannot be parsed. + """ + if not isinstance(payload, Mapping): + raise RequestValidationError("request body must be a JSON object", code="invalid_request") + + validation_errors = validate_CreateResponse(payload) + if validation_errors: + details = [ + { + "code": "invalid_value", + "message": e.get("message", ""), + "param": e.get("path", "").lstrip("."), + } + for e in validation_errors + ] + raise RequestValidationError( + "request body failed schema validation", + code="invalid_request", + details=details, + ) + + try: + return CreateResponse(payload) + except Exception as exc: # pragma: no cover - generated model raises implementation-specific errors. + raise RequestValidationError( + "request body failed schema validation", + code="invalid_request", + debug_info={"exception_type": type(exc).__name__, "detail": str(exc)}, + ) from exc + + +def normalize_create_response( + request: CreateResponse, + options: ResponsesServerOptions | None, +) -> CreateResponse: + """Apply server-side defaults to a parsed create request model. + + :param request: The parsed create response model to normalize. + :type request: CreateResponse + :param options: Server runtime options containing defaults, or ``None``. + :type options: ResponsesServerOptions | None + :return: The same model instance with defaults applied. 
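The defaulting rules in `normalize_create_response` (a missing or blank model falls back to the server default, and the result is always a stripped string, never `None`) can be sketched as a plain function; `default_model` below stands in for `options.default_model`:

```python
from __future__ import annotations


def normalize_model(model: str | None, default_model: str | None) -> str:
    """Resolve the effective model name: blank or missing falls back to
    the server default, and the result is always a stripped string."""
    if model is None or (isinstance(model, str) and not model.strip()):
        model = default_model
    if isinstance(model, str):
        return model.strip()
    return ""


print(normalize_model("  gpt-x  ", "fallback"))  # gpt-x
print(normalize_model("   ", "fallback"))        # fallback
print(normalize_model(None, None))               # '' (empty string)
```

Collapsing the no-default case to `""` rather than `None` keeps downstream code free of null checks; semantic validation then decides whether an empty model is acceptable.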
+    :rtype: CreateResponse
+    """
+    if (request.model is None or (isinstance(request.model, str) and not request.model.strip())) and options:
+        request.model = options.default_model
+
+    if isinstance(request.model, str):
+        request.model = request.model.strip() or ""
+    elif request.model is None:
+        request.model = ""
+
+    return request
+
+
+def validate_create_response(request: CreateResponse) -> None:
+    """Validate create request semantics not enforced by generated model typing.
+
+    :param request: The parsed create response model to validate.
+    :type request: CreateResponse
+    :raises RequestValidationError: If semantic preconditions are violated.
+    """
+    store_enabled = True if request.store is None else bool(request.store)
+
+    if request.background and not store_enabled:
+        raise RequestValidationError(
+            "background=true requires store=true",
+            code="unsupported_parameter",
+            param="background",
+        )
+
+    if request.stream_options is not None and request.stream is not True:
+        raise RequestValidationError(
+            "stream_options requires stream=true",
+            code="invalid_mode",
+            param="stream",
+        )
+
+    # B22: model is optional — resolved to default in normalize_create_response()
+
+    # Metadata constraints: ≤16 keys, key ≤64 chars, value ≤512 chars
+    metadata = getattr(request, "metadata", None)
+    if metadata is not None and hasattr(metadata, "items"):
+        if len(metadata) > 16:
+            raise RequestValidationError(
+                "metadata must have at most 16 key-value pairs",
+                code="invalid_request",
+                param="metadata",
+            )
+        for key, value in metadata.items():
+            if isinstance(key, str) and len(key) > 64:
+                raise RequestValidationError(
+                    f"metadata key '{key[:64]}...' exceeds maximum length of 64 characters",
+                    code="invalid_request",
+                    param="metadata",
+                )
+            if isinstance(value, str) and len(value) > 512:
+                raise RequestValidationError(
+                    f"metadata value for key '{key}' exceeds maximum length of 512 characters",
+                    code="invalid_request",
+                    param="metadata",
+                )
+
+
+def parse_and_validate_create_response(
+    payload: Mapping[str, Any],
+    *,
+    options: ResponsesServerOptions | None = None,
+) -> CreateResponse:
+    """Parse, normalize, and validate a create request using generated models.
+
+    :param payload: Raw request payload mapping.
+    :type payload: Mapping[str, Any]
+    :keyword options: Server runtime options for defaults, or ``None``.
+    :paramtype options: ResponsesServerOptions | None
+    :return: A fully validated ``CreateResponse`` model.
+    :rtype: CreateResponse
+    :raises RequestValidationError: If parsing or validation fails.
+    """
+    request = parse_create_response(payload)
+    request = normalize_create_response(request, options)
+    validate_create_response(request)
+    return request
+
+
+def build_api_error_response(
+    message: str,
+    *,
+    code: str,
+    param: str | None = None,
+    error_type: str = "invalid_request_error",
+    debug_info: dict[str, Any] | None = None,
+) -> ApiErrorResponse:
+    """Build a generated ``ApiErrorResponse`` envelope for client-visible failures.
+
+    :param message: Human-readable error message.
+    :type message: str
+    :keyword code: Machine-readable error code.
+    :paramtype code: str
+    :keyword param: The request parameter that caused the error, or ``None``.
+    :paramtype param: str | None
+    :keyword error_type: Error type category (default ``"invalid_request_error"``).
+    :paramtype error_type: str
+    :keyword debug_info: Optional debug information dictionary.
+    :paramtype debug_info: dict[str, Any] | None
+    :return: A generated ``ApiErrorResponse`` envelope.
+    :rtype: ApiErrorResponse
+    """
+    return ApiErrorResponse(
+        error=Error(
+            code=code,
+            message=message,
+            param=param,
+            type=error_type,
+            debug_info=debug_info,
+        )
+    )
+
+
+def build_not_found_error_response(
+    resource_id: str,
+    *,
+    param: str = "response_id",
+    resource_name: str = "response",
+) -> ApiErrorResponse:
+    """Build a canonical generated not-found error envelope.
+
+    :param resource_id: The ID of the resource that was not found.
+    :type resource_id: str
+    :keyword param: The parameter name to include in the error (default ``"response_id"``).
+    :paramtype param: str
+    :keyword resource_name: Display name for the resource type (default ``"response"``).
+    :paramtype resource_name: str
+    :return: A generated ``ApiErrorResponse`` envelope with not-found error.
+    :rtype: ApiErrorResponse
+    """
+    return build_api_error_response(
+        message=f"{resource_name} '{resource_id}' was not found",
+        code="not_found",
+        param=param,
+        error_type="not_found_error",
+    )
+
+
+def build_invalid_mode_error_response(
+    message: str,
+    *,
+    param: str | None = None,
+) -> ApiErrorResponse:
+    """Build a canonical generated invalid-mode error envelope.
+
+    :param message: Human-readable error message.
+    :type message: str
+    :keyword param: The request parameter that caused the error, or ``None``.
+    :paramtype param: str | None
+    :return: A generated ``ApiErrorResponse`` envelope with invalid-mode error.
+    :rtype: ApiErrorResponse
+    """
+    return build_api_error_response(
+        message=message,
+        code="invalid_mode",
+        param=param,
+        error_type="invalid_request_error",
+    )
+
+
+def to_api_error_response(error: Exception) -> ApiErrorResponse:
+    """Map a Python exception to a generated API error envelope.
+
+    :param error: The exception to convert.
+    :type error: Exception
+    :return: A generated ``ApiErrorResponse`` envelope.
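The exception-to-envelope ladder used here can be sketched with plain dicts. This simplified version (the `to_error_envelope` helper name is hypothetical) drops the SDK-specific `RequestValidationError` branch and keeps the `ValueError` and catch-all cases, mirroring how the internal-error path surfaces only the exception type, never its message:

```python
def to_error_envelope(error: Exception) -> dict:
    """Map an exception to a client-visible error envelope."""
    if isinstance(error, ValueError):
        # Client-caused errors keep their message (with a safe fallback).
        return {
            "error": {
                "type": "invalid_request_error",
                "code": "invalid_request",
                "message": str(error) or "invalid request",
            }
        }
    # Anything else is masked: expose the exception type for debugging,
    # but never the message, which may contain internal details.
    return {
        "error": {
            "type": "server_error",
            "code": "internal_error",
            "message": "internal server error",
            "debug_info": {"exception_type": type(error).__name__},
        }
    }


print(to_error_envelope(ValueError("bad input"))["error"]["code"])       # invalid_request
print(to_error_envelope(RuntimeError("secret detail"))["error"]["message"])  # internal server error
```

Keeping the catch-all message constant is the design point: unexpected failures never leak stack or state information into the response body.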
+ :rtype: ApiErrorResponse + """ + if isinstance(error, RequestValidationError): + return error.to_api_error_response() + + if isinstance(error, ValueError): + return build_api_error_response( + message=str(error) or "invalid request", + code="invalid_request", + error_type="invalid_request_error", + ) + + return build_api_error_response( + message="internal server error", + code="internal_error", + error_type="server_error", + debug_info={"exception_type": type(error).__name__}, + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/__init__.py new file mode 100644 index 000000000000..26cb2ecf9c26 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/__init__.py @@ -0,0 +1,41 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Canonical non-generated model types for the response server.""" + +# from .errors import RequestValidationError +from .runtime import ( + ResponseExecution, + ResponseModeFlags, + ResponseStatus, + StreamEventRecord, + StreamReplayState, + TerminalResponseStatus, +) +from ._helpers import ( + get_content_expanded, + get_conversation_expanded, + get_conversation_id, + get_input_expanded, + get_input_text, + get_instruction_items, + get_output_item_id, + get_tool_choice_expanded, +) + +__all__ = [ + # "RequestValidationError", + "ResponseExecution", + "ResponseModeFlags", + "ResponseStatus", + "StreamEventRecord", + "StreamReplayState", + "TerminalResponseStatus", + "get_content_expanded", + "get_conversation_expanded", + "get_conversation_id", + "get_input_expanded", + "get_input_text", + "get_instruction_items", + "get_output_item_id", + "get_tool_choice_expanded", +] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/__init__.py 
b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/__init__.py new file mode 100644 index 000000000000..b783bfa73795 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/__init__.py @@ -0,0 +1,11 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# -------------------------------------------------------------------------- + +"""Compatibility re-exports for generated models preserved under sdk/models.""" + +from .sdk.models.models import * # type: ignore # noqa: F401,F403 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_enums.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_enums.py new file mode 100644 index 000000000000..481d6d628755 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_enums.py @@ -0,0 +1,11 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. 
+# -------------------------------------------------------------------------- + +"""Compatibility shim for generated enum symbols.""" + +from .sdk.models.models._enums import * # type: ignore # noqa: F401,F403 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_models.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_models.py new file mode 100644 index 000000000000..01e649adb824 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_models.py @@ -0,0 +1,11 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# -------------------------------------------------------------------------- + +"""Compatibility shim for generated model symbols.""" + +from .sdk.models.models._models import * # type: ignore # noqa: F401,F403 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_patch.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_patch.py new file mode 100644 index 000000000000..66ee2dea3a63 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_patch.py @@ -0,0 +1,11 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. 
+# -------------------------------------------------------------------------- + +"""Compatibility shim for generated patch helpers.""" + +from .sdk.models.models._patch import * # type: ignore # noqa: F401,F403 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_validators.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_validators.py new file mode 100644 index 000000000000..b2dfc33c9c4a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/_validators.py @@ -0,0 +1,666 @@ +# pylint: disable=line-too-long,useless-suppression,too-many-lines +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + + +from __future__ import annotations + +from typing import Any + +try: + from . 
import _enums as _generated_enums +except Exception: + _generated_enums = None + +def _append_error(errors: list[dict[str, str]], path: str, message: str) -> None: + errors.append({'path': path, 'message': message}) + +def _type_label(value: Any) -> str: + if value is None: + return 'null' + if isinstance(value, bool): + return 'boolean' + if isinstance(value, int): + return 'integer' + if isinstance(value, float): + return 'number' + if isinstance(value, str): + return 'string' + if isinstance(value, dict): + return 'object' + if isinstance(value, list): + return 'array' + return type(value).__name__ + +def _is_type(value: Any, expected: str) -> bool: + if expected == 'string': + return isinstance(value, str) + if expected == 'integer': + return isinstance(value, int) and not isinstance(value, bool) + if expected == 'number': + return (isinstance(value, int) and not isinstance(value, bool)) or isinstance(value, float) + if expected == 'boolean': + return isinstance(value, bool) + if expected == 'object': + return isinstance(value, dict) + if expected == 'array': + return isinstance(value, list) + return True + +def _append_type_mismatch(errors: list[dict[str, str]], path: str, expected: str, value: Any) -> None: + _append_error(errors, path, f"Expected {expected}, got {_type_label(value)}") + +def _enum_values(enum_name: str) -> tuple[tuple[str, ...] 
| None, str | None]: + if _generated_enums is None: + return None, f'enum type _enums.{enum_name} is unavailable' + enum_cls = getattr(_generated_enums, enum_name, None) + if enum_cls is None: + return None, f'enum type _enums.{enum_name} is not defined' + try: + return tuple(str(member.value) for member in enum_cls), None + except Exception: + return None, f'enum type _enums.{enum_name} failed to load values' + +def _validate_CreateResponse(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'agent_reference' in value: + _validate_CreateResponse_agent_reference(value['agent_reference'], f"{path}.agent_reference", errors) + if 'background' in value: + _validate_CreateResponse_background(value['background'], f"{path}.background", errors) + if 'context_management' in value: + _validate_CreateResponse_context_management(value['context_management'], f"{path}.context_management", errors) + if 'conversation' in value: + _validate_CreateResponse_conversation(value['conversation'], f"{path}.conversation", errors) + if 'include' in value: + _validate_CreateResponse_include(value['include'], f"{path}.include", errors) + if 'input' in value: + _validate_CreateResponse_input(value['input'], f"{path}.input", errors) + if 'instructions' in value: + _validate_CreateResponse_instructions(value['instructions'], f"{path}.instructions", errors) + if 'max_output_tokens' in value: + _validate_CreateResponse_max_output_tokens(value['max_output_tokens'], f"{path}.max_output_tokens", errors) + if 'max_tool_calls' in value: + _validate_CreateResponse_max_output_tokens(value['max_tool_calls'], f"{path}.max_tool_calls", errors) + if 'metadata' in value: + _validate_CreateResponse_metadata(value['metadata'], f"{path}.metadata", errors) + if 'model' in value: + _validate_CreateResponse_model(value['model'], f"{path}.model", errors) + if 'parallel_tool_calls' in value: + 
_validate_CreateResponse_parallel_tool_calls(value['parallel_tool_calls'], f"{path}.parallel_tool_calls", errors) + if 'previous_response_id' in value: + _validate_CreateResponse_instructions(value['previous_response_id'], f"{path}.previous_response_id", errors) + if 'prompt' in value: + _validate_CreateResponse_prompt(value['prompt'], f"{path}.prompt", errors) + if 'prompt_cache_key' in value: + _validate_CreateResponse_prompt_cache_key(value['prompt_cache_key'], f"{path}.prompt_cache_key", errors) + if 'prompt_cache_retention' in value: + _validate_CreateResponse_prompt_cache_retention(value['prompt_cache_retention'], f"{path}.prompt_cache_retention", errors) + if 'reasoning' in value: + _validate_CreateResponse_reasoning(value['reasoning'], f"{path}.reasoning", errors) + if 'safety_identifier' in value: + _validate_CreateResponse_safety_identifier(value['safety_identifier'], f"{path}.safety_identifier", errors) + if 'service_tier' in value: + _validate_CreateResponse_service_tier(value['service_tier'], f"{path}.service_tier", errors) + if 'store' in value: + _validate_CreateResponse_parallel_tool_calls(value['store'], f"{path}.store", errors) + if 'stream' in value: + _validate_CreateResponse_background(value['stream'], f"{path}.stream", errors) + if 'stream_options' in value: + _validate_CreateResponse_stream_options(value['stream_options'], f"{path}.stream_options", errors) + if 'structured_inputs' in value: + _validate_CreateResponse_structured_inputs(value['structured_inputs'], f"{path}.structured_inputs", errors) + if 'temperature' in value: + _validate_CreateResponse_temperature(value['temperature'], f"{path}.temperature", errors) + if 'text' in value: + _validate_CreateResponse_text(value['text'], f"{path}.text", errors) + if 'tool_choice' in value: + _validate_CreateResponse_tool_choice(value['tool_choice'], f"{path}.tool_choice", errors) + if 'tools' in value: + _validate_CreateResponse_tools(value['tools'], f"{path}.tools", errors) + if 'top_logprobs' 
in value: + _validate_CreateResponse_max_output_tokens(value['top_logprobs'], f"{path}.top_logprobs", errors) + if 'top_p' in value: + _validate_CreateResponse_temperature(value['top_p'], f"{path}.top_p", errors) + if 'truncation' in value: + _validate_CreateResponse_truncation(value['truncation'], f"{path}.truncation", errors) + if 'user' in value: + _validate_CreateResponse_user(value['user'], f"{path}.user", errors) + +def _validate_CreateResponse_agent_reference(value: Any, path: str, errors: list[dict[str, str]]) -> None: + return + +def _validate_CreateResponse_background(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'boolean'): + _append_type_mismatch(errors, path, 'boolean', value) + return + +def _validate_CreateResponse_context_management(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'array'): + _append_type_mismatch(errors, path, 'array', value) + return + for _idx, _item in enumerate(value): + _validate_CreateResponse_context_management_item(_item, f"{path}[{_idx}]", errors) + +def _validate_CreateResponse_conversation(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + +def _validate_CreateResponse_include(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'array'): + _append_type_mismatch(errors, path, 'array', value) + return + for _idx, _item in enumerate(value): + _validate_CreateResponse_include_item(_item, f"{path}[{_idx}]", errors) + +def _validate_CreateResponse_input(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_InputParam(value, path, errors) + +def _validate_CreateResponse_instructions(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + 
return + +def _validate_CreateResponse_max_output_tokens(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'integer'): + _append_type_mismatch(errors, path, 'integer', value) + return + +def _validate_CreateResponse_metadata(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + +def _validate_CreateResponse_model(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_parallel_tool_calls(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'boolean'): + _append_type_mismatch(errors, path, 'boolean', value) + return + +def _validate_CreateResponse_prompt(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_Prompt(value, path, errors) + +def _validate_CreateResponse_prompt_cache_key(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_prompt_cache_retention(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + _allowed_values = ('in-memory', '24h') + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. 
Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_reasoning(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + +def _validate_CreateResponse_safety_identifier(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_service_tier(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_ServiceTier(value, path, errors) + +def _validate_CreateResponse_stream_options(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + +def _validate_CreateResponse_structured_inputs(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + for _key, _item in value.items(): + if _key not in (): + _validate_CreateResponse_structured_inputs_additional_property(_item, f"{path}.{_key}", errors) + +def _validate_CreateResponse_temperature(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'number'): + _append_type_mismatch(errors, path, 'number', value) + return + +def _validate_CreateResponse_text(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_ResponseTextParam(value, path, errors) + +def _validate_CreateResponse_tool_choice(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 'string'): + _branch_errors_0: list[dict[str, str]] = [] + 
_validate_OpenAI_ToolChoiceOptions(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 'object'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_ToolChoiceParam(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected one of: OpenAI.ToolChoiceOptions, OpenAI.ToolChoiceParam; got {_type_label(value)}") + return + +def _validate_CreateResponse_tools(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_ToolsArray(value, path, errors) + +def _validate_CreateResponse_truncation(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + _allowed_values = ('auto', 'disabled') + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_user(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_context_management_item(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_ContextManagementParam(value, path, errors) + +def _validate_CreateResponse_include_item(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_IncludeEnum(value, path, errors) + +def _validate_OpenAI_InputParam(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 'string'): + _branch_errors_0: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_string(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 
'array'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_array(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected one of: string, array; got {_type_label(value)}") + return + +def _validate_OpenAI_Prompt(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'id' not in value: + _append_error(errors, f"{path}.id", "Required property 'id' is missing") + if 'id' in value: + _validate_OpenAI_Prompt_id(value['id'], f"{path}.id", errors) + if 'variables' in value: + _validate_OpenAI_Prompt_variables(value['variables'], f"{path}.variables", errors) + if 'version' in value: + _validate_CreateResponse_instructions(value['version'], f"{path}.version", errors) + +def _validate_OpenAI_ServiceTier(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + _allowed_values, _enum_error = _enum_values('ServiceTier') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. 
Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_CreateResponse_structured_inputs_additional_property(value: Any, path: str, errors: list[dict[str, str]]) -> None: + return + +def _validate_OpenAI_ResponseTextParam(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'format' in value: + _validate_OpenAI_ResponseTextParam_format(value['format'], f"{path}.format", errors) + if 'verbosity' in value: + _validate_OpenAI_ResponseTextParam_verbosity(value['verbosity'], f"{path}.verbosity", errors) + +def _validate_OpenAI_ToolChoiceOptions(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _allowed_values, _enum_error = _enum_values('ToolChoiceOptions') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. 
Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_ToolChoiceParam(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'type' not in value: + _append_error(errors, f"{path}.type", "Required property 'type' is missing") + if 'type' in value: + _validate_OpenAI_ToolChoiceParam_type(value['type'], f"{path}.type", errors) + _disc_value = value.get('type') + if not isinstance(_disc_value, str): + _append_error(errors, f"{path}.type", "Required discriminator 'type' is missing or invalid") + return + +def _validate_OpenAI_ToolsArray(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'array'): + _append_type_mismatch(errors, path, 'array', value) + return + for _idx, _item in enumerate(value): + _validate_OpenAI_ToolsArray_item(_item, f"{path}[{_idx}]", errors) + +def _validate_OpenAI_ContextManagementParam(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'type' not in value: + _append_error(errors, f"{path}.type", "Required property 'type' is missing") + if 'compact_threshold' in value: + _validate_CreateResponse_max_output_tokens(value['compact_threshold'], f"{path}.compact_threshold", errors) + if 'type' in value: + _validate_OpenAI_ContextManagementParam_type(value['type'], f"{path}.type", errors) + +def _validate_OpenAI_IncludeEnum(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 'string'): + _branch_errors_0: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_string(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 
'string'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_IncludeEnum_2(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected IncludeEnum to be a string value, got {_type_label(value)}") + return + +def _validate_OpenAI_InputParam_string(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_InputParam_array(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'array'): + _append_type_mismatch(errors, path, 'array', value) + return + for _idx, _item in enumerate(value): + _validate_OpenAI_InputParam_array_item(_item, f"{path}[{_idx}]", errors) + +def _validate_OpenAI_Prompt_id(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_Prompt_variables(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + +def _validate_OpenAI_ResponseTextParam_format(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_TextResponseFormatConfiguration(value, path, errors) + +def _validate_OpenAI_ResponseTextParam_verbosity(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_Verbosity(value, path, errors) + +def _validate_OpenAI_ToolChoiceParam_type(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_ToolChoiceParamType(value, path, errors) + +def _validate_OpenAI_ToolsArray_item(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_Tool(value, path, errors) + +def _validate_OpenAI_ContextManagementParam_type(value: Any, path: str, errors: 
list[dict[str, str]]) -> None: + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_IncludeEnum_2(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _allowed_values, _enum_error = _enum_values('IncludeEnum') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_InputParam_array_item(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_InputItem(value, path, errors) + +def _validate_OpenAI_TextResponseFormatConfiguration(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'type' not in value: + _append_error(errors, f"{path}.type", "Required property 'type' is missing") + if 'type' in value: + _validate_OpenAI_TextResponseFormatConfiguration_type(value['type'], f"{path}.type", errors) + _disc_value = value.get('type') + if not isinstance(_disc_value, str): + _append_error(errors, f"{path}.type", "Required discriminator 'type' is missing or invalid") + return + +def _validate_OpenAI_Verbosity(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if value is None: + return + _allowed_values, _enum_error = _enum_values('Verbosity') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. 
Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_ToolChoiceParamType(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 'string'): + _branch_errors_0: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_string(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 'string'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_ToolChoiceParamType_2(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected ToolChoiceParamType to be a string value, got {_type_label(value)}") + return + +def _validate_OpenAI_Tool(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'type' not in value: + _append_error(errors, f"{path}.type", "Required property 'type' is missing") + if 'type' in value: + _validate_OpenAI_Tool_type(value['type'], f"{path}.type", errors) + _disc_value = value.get('type') + if not isinstance(_disc_value, str): + _append_error(errors, f"{path}.type", "Required discriminator 'type' is missing or invalid") + return + +def _validate_OpenAI_InputItem(value: Any, path: str, errors: list[dict[str, str]]) -> None: + if not _is_type(value, 'object'): + _append_type_mismatch(errors, path, 'object', value) + return + if 'type' not in value: + _append_error(errors, f"{path}.type", "Required property 'type' is missing") + if 'type' in value: + _validate_OpenAI_InputItem_type(value['type'], f"{path}.type", errors) + _disc_value = value.get('type') + if not isinstance(_disc_value, str): + _append_error(errors, f"{path}.type", "Required discriminator 'type' is missing or 
invalid") + return + +def _validate_OpenAI_TextResponseFormatConfiguration_type(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_TextResponseFormatConfigurationType(value, path, errors) + +def _validate_OpenAI_ToolChoiceParamType_2(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _allowed_values, _enum_error = _enum_values('ToolChoiceParamType') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_Tool_type(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_ToolType(value, path, errors) + +def _validate_OpenAI_InputItem_type(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _validate_OpenAI_InputItemType(value, path, errors) + +def _validate_OpenAI_TextResponseFormatConfigurationType(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 'string'): + _branch_errors_0: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_string(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 'string'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_TextResponseFormatConfigurationType_2(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected TextResponseFormatConfigurationType to be a string value, got {_type_label(value)}") + return + +def _validate_OpenAI_ToolType(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 
'string'): + _branch_errors_0: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_string(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 'string'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_ToolType_2(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected ToolType to be a string value, got {_type_label(value)}") + return + +def _validate_OpenAI_InputItemType(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _matched_union = False + if not _matched_union and _is_type(value, 'string'): + _branch_errors_0: list[dict[str, str]] = [] + _validate_OpenAI_InputParam_string(value, path, _branch_errors_0) + if not _branch_errors_0: + _matched_union = True + if not _matched_union and _is_type(value, 'string'): + _branch_errors_1: list[dict[str, str]] = [] + _validate_OpenAI_InputItemType_2(value, path, _branch_errors_1) + if not _branch_errors_1: + _matched_union = True + if not _matched_union: + _append_error(errors, path, f"Expected InputItemType to be a string value, got {_type_label(value)}") + return + +def _validate_OpenAI_TextResponseFormatConfigurationType_2(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _allowed_values, _enum_error = _enum_values('TextResponseFormatConfigurationType') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. 
Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_ToolType_2(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _allowed_values, _enum_error = _enum_values('ToolType') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +def _validate_OpenAI_InputItemType_2(value: Any, path: str, errors: list[dict[str, str]]) -> None: + _allowed_values, _enum_error = _enum_values('InputItemType') + if _enum_error is not None: + _append_error(errors, path, _enum_error) + return + if _allowed_values is None: + return + if value not in _allowed_values: + _append_error(errors, path, f"Invalid value '{value}'. 
Allowed: {', '.join(str(v) for v in _allowed_values)}") + if not _is_type(value, 'string'): + _append_type_mismatch(errors, path, 'string', value) + return + +ROOT_SCHEMAS = ['CreateResponse'] + +class CreateResponseValidator: + """Generated validator for the root schema.""" + + @staticmethod + def validate(payload: Any) -> list[dict[str, str]]: + errors: list[dict[str, str]] = [] + _validate_CreateResponse(payload, '$', errors) + return errors + +def validate_CreateResponse(payload: Any) -> list[dict[str, str]]: + return CreateResponseValidator.validate(payload) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/__init__.py new file mode 100644 index 000000000000..9abd30ab9c84 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/__init__.py @@ -0,0 +1,11 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. 
+# -------------------------------------------------------------------------- + +"""Model-only generated package surface.""" + +from .models import * # type: ignore # noqa: F401,F403 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_patch.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_patch.py new file mode 100644 index 000000000000..87676c65a8f0 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_patch.py @@ -0,0 +1,21 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# -------------------------------------------------------------------------- +"""Customize generated code here. + +Follow our quickstart for examples: https://aka.ms/azsdk/python/dpcodegen/python/customize +""" + + +__all__: list[str] = [] # Add all objects you want publicly available to users at this package level + + +def patch_sdk(): + """Do not remove from this file. 
+ + `patch_sdk` is a last resort escape hatch that allows you to do customizations + you can't accomplish using the techniques described in + https://aka.ms/azsdk/python/dpcodegen/python/customize + """ diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_types.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_types.py new file mode 100644 index 000000000000..c99439ce635a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_types.py @@ -0,0 +1,71 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. +# -------------------------------------------------------------------------- + +from typing import Any, TYPE_CHECKING, Union + +if TYPE_CHECKING: + from . 
import models as _models +Filters = Union["_models.ComparisonFilter", "_models.CompoundFilter"] +ToolCallOutputContent = Union[dict[str, Any], str, list[Any]] +InputParam = Union[str, list["_models.Item"]] +ConversationParam = Union[str, "_models.ConversationParam_2"] +CreateResponseStreamingResponse = Union[ + "_models.ResponseAudioDeltaEvent", + "_models.ResponseAudioTranscriptDeltaEvent", + "_models.ResponseCodeInterpreterCallCodeDeltaEvent", + "_models.ResponseCodeInterpreterCallInProgressEvent", + "_models.ResponseCodeInterpreterCallInterpretingEvent", + "_models.ResponseContentPartAddedEvent", + "_models.ResponseCreatedEvent", + "_models.ResponseErrorEvent", + "_models.ResponseFileSearchCallInProgressEvent", + "_models.ResponseFileSearchCallSearchingEvent", + "_models.ResponseFunctionCallArgumentsDeltaEvent", + "_models.ResponseInProgressEvent", + "_models.ResponseFailedEvent", + "_models.ResponseIncompleteEvent", + "_models.ResponseOutputItemAddedEvent", + "_models.ResponseReasoningSummaryPartAddedEvent", + "_models.ResponseReasoningSummaryTextDeltaEvent", + "_models.ResponseReasoningTextDeltaEvent", + "_models.ResponseRefusalDeltaEvent", + "_models.ResponseTextDeltaEvent", + "_models.ResponseWebSearchCallInProgressEvent", + "_models.ResponseWebSearchCallSearchingEvent", + "_models.ResponseImageGenCallGeneratingEvent", + "_models.ResponseImageGenCallInProgressEvent", + "_models.ResponseImageGenCallPartialImageEvent", + "_models.ResponseMCPCallArgumentsDeltaEvent", + "_models.ResponseMCPCallFailedEvent", + "_models.ResponseMCPCallInProgressEvent", + "_models.ResponseMCPListToolsFailedEvent", + "_models.ResponseMCPListToolsInProgressEvent", + "_models.ResponseOutputTextAnnotationAddedEvent", + "_models.ResponseQueuedEvent", + "_models.ResponseCustomToolCallInputDeltaEvent", + "_models.ResponseAudioDoneEvent", + "_models.ResponseAudioTranscriptDoneEvent", + "_models.ResponseCodeInterpreterCallCodeDoneEvent", + 
"_models.ResponseCodeInterpreterCallCompletedEvent", + "_models.ResponseCompletedEvent", + "_models.ResponseContentPartDoneEvent", + "_models.ResponseFileSearchCallCompletedEvent", + "_models.ResponseFunctionCallArgumentsDoneEvent", + "_models.ResponseOutputItemDoneEvent", + "_models.ResponseReasoningSummaryPartDoneEvent", + "_models.ResponseReasoningSummaryTextDoneEvent", + "_models.ResponseReasoningTextDoneEvent", + "_models.ResponseRefusalDoneEvent", + "_models.ResponseTextDoneEvent", + "_models.ResponseWebSearchCallCompletedEvent", + "_models.ResponseImageGenCallCompletedEvent", + "_models.ResponseMCPCallArgumentsDoneEvent", + "_models.ResponseMCPCallCompletedEvent", + "_models.ResponseMCPListToolsCompletedEvent", + "_models.ResponseCustomToolCallInputDoneEvent", +] diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/__init__.py similarity index 100% rename from sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/__init__.py rename to sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/__init__.py diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/model_base.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/model_base.py similarity index 86% rename from sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/model_base.py rename to sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/model_base.py index 03b8c4ce34a0..a75a22adbb97 100644 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/model_base.py +++ 
b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/model_base.py @@ -6,7 +6,7 @@ # Code generated by Microsoft (R) Python Code Generator. # Changes may cause incorrect behavior and will be lost if the code is regenerated. # -------------------------------------------------------------------------- -# pylint: disable=protected-access, broad-except, import-error, no-value-for-parameter +# pylint: disable=protected-access, broad-except import copy import calendar @@ -37,6 +37,7 @@ TZ_UTC = timezone.utc _T = typing.TypeVar("_T") +_NONE_TYPE = type(None) def _timedelta_as_isostr(td: timedelta) -> str: @@ -171,6 +172,21 @@ def default(self, o): # pylint: disable=too-many-return-statements r"(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s\d{4}\s\d{2}:\d{2}:\d{2}\sGMT" ) +_ARRAY_ENCODE_MAPPING = { + "pipeDelimited": "|", + "spaceDelimited": " ", + "commaDelimited": ",", + "newlineDelimited": "\n", +} + + +def _deserialize_array_encoded(delimit: str, attr): + if isinstance(attr, str): + if attr == "": + return [] + return attr.split(delimit) + return attr + def _deserialize_datetime(attr: typing.Union[str, datetime]) -> datetime: """Deserialize ISO-8601 formatted string into Datetime object. 
@@ -202,7 +218,7 @@ def _deserialize_datetime(attr: typing.Union[str, datetime]) -> datetime: test_utc = date_obj.utctimetuple() if test_utc.tm_year > 9999 or test_utc.tm_year < 1: raise OverflowError("Hit max or min date") - return date_obj + return date_obj # type: ignore[no-any-return] def _deserialize_datetime_rfc7231(attr: typing.Union[str, datetime]) -> datetime: @@ -256,7 +272,7 @@ def _deserialize_time(attr: typing.Union[str, time]) -> time: """ if isinstance(attr, time): return attr - return isodate.parse_time(attr) + return isodate.parse_time(attr) # type: ignore[no-any-return] def _deserialize_bytes(attr): @@ -315,6 +331,8 @@ def _deserialize_int_as_str(attr): def get_deserializer(annotation: typing.Any, rf: typing.Optional["_RestField"] = None): if annotation is int and rf and rf._format == "str": return _deserialize_int_as_str + if annotation is str and rf and rf._format in _ARRAY_ENCODE_MAPPING: + return functools.partial(_deserialize_array_encoded, _ARRAY_ENCODE_MAPPING[rf._format]) if rf and rf._format: return _DESERIALIZE_MAPPING_WITHFORMAT.get(rf._format) return _DESERIALIZE_MAPPING.get(annotation) # pyright: ignore @@ -353,9 +371,39 @@ def __contains__(self, key: typing.Any) -> bool: return key in self._data def __getitem__(self, key: str) -> typing.Any: + # If this key has been deserialized (for mutable types), we need to handle serialization + if hasattr(self, "_attr_to_rest_field"): + cache_attr = f"_deserialized_{key}" + if hasattr(self, cache_attr): + rf = _get_rest_field(getattr(self, "_attr_to_rest_field"), key) + if rf: + value = self._data.get(key) + if isinstance(value, (dict, list, set)): + # For mutable types, serialize and return + # But also update _data with serialized form and clear flag + # so mutations via this returned value affect _data + serialized = _serialize(value, rf._format) + # If serialized form is same type (no transformation needed), + # return _data directly so mutations work + if isinstance(serialized, type(value)) 
and serialized == value: + return self._data.get(key) + # Otherwise return serialized copy and clear flag + try: + object.__delattr__(self, cache_attr) + except AttributeError: + pass + # Store serialized form back + self._data[key] = serialized + return serialized return self._data.__getitem__(key) def __setitem__(self, key: str, value: typing.Any) -> None: + # Clear any cached deserialized value when setting through dictionary access + cache_attr = f"_deserialized_{key}" + try: + object.__delattr__(self, cache_attr) + except AttributeError: + pass self._data.__setitem__(key, value) def __delitem__(self, key: str) -> None: @@ -467,6 +515,8 @@ def setdefault(self, key: str, default: typing.Any = _UNSET) -> typing.Any: return self._data.setdefault(key, default) def __eq__(self, other: typing.Any) -> bool: + if isinstance(other, _MyMutableMapping): + return self._data == other._data try: other_model = self.__class__(other) except Exception: @@ -483,6 +533,8 @@ def _is_model(obj: typing.Any) -> bool: def _serialize(o, format: typing.Optional[str] = None): # pylint: disable=too-many-return-statements if isinstance(o, list): + if format in _ARRAY_ENCODE_MAPPING and all(isinstance(x, str) for x in o): + return _ARRAY_ENCODE_MAPPING[format].join(o) return [_serialize(x, format) for x in o] if isinstance(o, dict): return {k: _serialize(v, format) for k, v in o.items()} @@ -578,6 +630,9 @@ def __init__(self, *args: typing.Any, **kwargs: typing.Any) -> None: if len(items) > 0: existed_attr_keys.append(xml_name) dict_to_pass[rf._rest_name] = _deserialize(rf._type, items) + elif not rf._is_optional: + existed_attr_keys.append(xml_name) + dict_to_pass[rf._rest_name] = [] continue # text element is primitive type @@ -638,6 +693,10 @@ def __new__(cls, *args: typing.Any, **kwargs: typing.Any) -> Self: if not rf._rest_name_input: rf._rest_name_input = attr cls._attr_to_rest_field: dict[str, _RestField] = dict(attr_to_rest_field.items()) + cls._backcompat_attr_to_rest_field: 
dict[str, _RestField] = { + Model._get_backcompat_attribute_name(cls._attr_to_rest_field, attr): rf + for attr, rf in cls._attr_to_rest_field.items() + } cls._calculated.add(f"{cls.__module__}.{cls.__qualname__}") return super().__new__(cls) @@ -647,6 +706,16 @@ def __init_subclass__(cls, discriminator: typing.Optional[str] = None) -> None: if hasattr(base, "__mapping__"): base.__mapping__[discriminator or cls.__name__] = cls # type: ignore + @classmethod + def _get_backcompat_attribute_name(cls, attr_to_rest_field: dict[str, "_RestField"], attr_name: str) -> str: + rest_field_obj = attr_to_rest_field.get(attr_name) # pylint: disable=protected-access + if rest_field_obj is None: + return attr_name + original_tsp_name = getattr(rest_field_obj, "_original_tsp_name", None) # pylint: disable=protected-access + if original_tsp_name: + return original_tsp_name + return attr_name + @classmethod def _get_discriminator(cls, exist_discriminators) -> typing.Optional["_RestField"]: for v in cls.__dict__.values(): @@ -758,6 +827,14 @@ def _deserialize_multiple_sequence( return type(obj)(_deserialize(deserializer, entry, module) for entry, deserializer in zip(obj, entry_deserializers)) +def _is_array_encoded_deserializer(deserializer: functools.partial) -> bool: + return ( + isinstance(deserializer, functools.partial) + and isinstance(deserializer.args[0], functools.partial) + and deserializer.args[0].func == _deserialize_array_encoded # pylint: disable=comparison-with-callable + ) + + def _deserialize_sequence( deserializer: typing.Optional[typing.Callable], module: typing.Optional[str], @@ -767,6 +844,19 @@ def _deserialize_sequence( return obj if isinstance(obj, ET.Element): obj = list(obj) + + # encoded string may be deserialized to sequence + if isinstance(obj, str) and isinstance(deserializer, functools.partial): + # for list[str] + if _is_array_encoded_deserializer(deserializer): + return deserializer(obj) + + # for list[Union[...]] + if isinstance(deserializer.args[0], 
list): + for sub_deserializer in deserializer.args[0]: + if _is_array_encoded_deserializer(sub_deserializer): + return sub_deserializer(obj) + return type(obj)(_deserialize(deserializer, entry, module) for entry in obj) @@ -817,16 +907,18 @@ def _get_deserialize_callable_from_annotation( # pylint: disable=too-many-retur # is it optional? try: - if any(a for a in annotation.__args__ if a == type(None)): # pyright: ignore + if any(a is _NONE_TYPE for a in annotation.__args__): # pyright: ignore + if rf: + rf._is_optional = True if len(annotation.__args__) <= 2: # pyright: ignore if_obj_deserializer = _get_deserialize_callable_from_annotation( - next(a for a in annotation.__args__ if a != type(None)), module, rf # pyright: ignore + next(a for a in annotation.__args__ if a is not _NONE_TYPE), module, rf # pyright: ignore ) return functools.partial(_deserialize_with_optional, if_obj_deserializer) # the type is Optional[Union[...]], we need to remove the None type from the Union annotation_copy = copy.copy(annotation) - annotation_copy.__args__ = [a for a in annotation_copy.__args__ if a != type(None)] # pyright: ignore + annotation_copy.__args__ = [a for a in annotation_copy.__args__ if a is not _NONE_TYPE] # pyright: ignore return _get_deserialize_callable_from_annotation(annotation_copy, module, rf) except AttributeError: pass @@ -910,16 +1002,20 @@ def _deserialize_with_callable( return float(value.text) if value.text else None if deserializer is bool: return value.text == "true" if value.text else None + if deserializer and deserializer in _DESERIALIZE_MAPPING.values(): + return deserializer(value.text) if value.text else None + if deserializer and deserializer in _DESERIALIZE_MAPPING_WITHFORMAT.values(): + return deserializer(value.text) if value.text else None if deserializer is None: return value if deserializer in [int, float, bool]: return deserializer(value) if isinstance(deserializer, CaseInsensitiveEnumMeta): try: - return deserializer(value) + return 
deserializer(value.text if isinstance(value, ET.Element) else value) except ValueError: # for unknown value, return raw value - return value + return value.text if isinstance(value, ET.Element) else value if isinstance(deserializer, type) and issubclass(deserializer, Model): return deserializer._deserialize(value, []) return typing.cast(typing.Callable[[typing.Any], typing.Any], deserializer)(value) @@ -952,7 +1048,7 @@ def _failsafe_deserialize( ) -> typing.Any: try: return _deserialize(deserializer, response.json(), module, rf, format) - except DeserializationError: + except Exception: # pylint: disable=broad-except _LOGGER.warning( "Ran into a deserialization error. Ignoring since this is failsafe deserialization", exc_info=True ) @@ -965,13 +1061,14 @@ def _failsafe_deserialize_xml( ) -> typing.Any: try: return _deserialize_xml(deserializer, response.text()) - except DeserializationError: + except Exception: # pylint: disable=broad-except _LOGGER.warning( "Ran into a deserialization error. 
Ignoring since this is failsafe deserialization", exc_info=True ) return None +# pylint: disable=too-many-instance-attributes class _RestField: def __init__( self, @@ -984,6 +1081,7 @@ def __init__( format: typing.Optional[str] = None, is_multipart_file_input: bool = False, xml: typing.Optional[dict[str, typing.Any]] = None, + original_tsp_name: typing.Optional[str] = None, ): self._type = type self._rest_name_input = name @@ -991,14 +1089,20 @@ def __init__( self._is_discriminator = is_discriminator self._visibility = visibility self._is_model = False + self._is_optional = False self._default = default self._format = format self._is_multipart_file_input = is_multipart_file_input self._xml = xml if xml is not None else {} + self._original_tsp_name = original_tsp_name @property def _class_type(self) -> typing.Any: - return getattr(self._type, "args", [None])[0] + result = getattr(self._type, "args", [None])[0] + # type may be wrapped by nested functools.partial so we need to check for that + if isinstance(result, functools.partial): + return getattr(result, "args", [None])[0] + return result @property def _rest_name(self) -> str: @@ -1009,14 +1113,37 @@ def _rest_name(self) -> str: def __get__(self, obj: Model, type=None): # pylint: disable=redefined-builtin # by this point, type and rest_name will have a value bc we default # them in __new__ of the Model class - item = obj.get(self._rest_name) + # Use _data.get() directly to avoid triggering __getitem__ which clears the cache + item = obj._data.get(self._rest_name) if item is None: return item if self._is_model: return item - return _deserialize(self._type, _serialize(item, self._format), rf=self) + + # For mutable types, we want mutations to directly affect _data + # Check if we've already deserialized this value + cache_attr = f"_deserialized_{self._rest_name}" + if hasattr(obj, cache_attr): + # Return the value from _data directly (it's been deserialized in place) + return obj._data.get(self._rest_name) + + 
deserialized = _deserialize(self._type, _serialize(item, self._format), rf=self) + + # For mutable types, store the deserialized value back in _data + # so mutations directly affect _data + if isinstance(deserialized, (dict, list, set)): + obj._data[self._rest_name] = deserialized + object.__setattr__(obj, cache_attr, True) # Mark as deserialized + return deserialized + + return deserialized def __set__(self, obj: Model, value) -> None: + # Clear the cached deserialized object when setting a new value + cache_attr = f"_deserialized_{self._rest_name}" + if hasattr(obj, cache_attr): + object.__delattr__(obj, cache_attr) + if value is None: # we want to wipe out entries if users set attr to None try: @@ -1046,6 +1173,7 @@ def rest_field( format: typing.Optional[str] = None, is_multipart_file_input: bool = False, xml: typing.Optional[dict[str, typing.Any]] = None, + original_tsp_name: typing.Optional[str] = None, ) -> typing.Any: return _RestField( name=name, @@ -1055,6 +1183,7 @@ def rest_field( format=format, is_multipart_file_input=is_multipart_file_input, xml=xml, + original_tsp_name=original_tsp_name, ) @@ -1184,7 +1313,7 @@ def _get_wrapped_element( _get_element(v, exclude_readonly, meta, wrapped_element) else: wrapped_element.text = _get_primitive_type_value(v) - return wrapped_element + return wrapped_element # type: ignore[no-any-return] def _get_primitive_type_value(v) -> str: @@ -1197,7 +1326,9 @@ def _get_primitive_type_value(v) -> str: return str(v) -def _create_xml_element(tag, prefix=None, ns=None): +def _create_xml_element( + tag: typing.Any, prefix: typing.Optional[str] = None, ns: typing.Optional[str] = None +) -> ET.Element: if prefix and ns: ET.register_namespace(prefix, ns) if ns: diff --git a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/serialization.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/serialization.py similarity index 
99% rename from sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/serialization.py rename to sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/serialization.py index 45a3e44e45cb..81ec1de5922b 100644 --- a/sdk/agentserver/azure-ai-agentserver-core/azure/ai/agentserver/core/models/projects/_utils/serialization.py +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/_utils/serialization.py @@ -821,13 +821,20 @@ def serialize_basic(cls, data, data_type, **kwargs): :param str data_type: Type of object in the iterable. :rtype: str, int, float, bool :return: serialized object + :raises TypeError: raised if data_type is not one of str, int, float, bool. """ custom_serializer = cls._get_custom_serializers(data_type, **kwargs) if custom_serializer: return custom_serializer(data) if data_type == "str": return cls.serialize_unicode(data) - return eval(data_type)(data) # nosec # pylint: disable=eval-used + if data_type == "int": + return int(data) + if data_type == "float": + return float(data) + if data_type == "bool": + return bool(data) + raise TypeError("Unknown basic data type: {}".format(data_type)) @classmethod def serialize_unicode(cls, data): @@ -1757,7 +1764,7 @@ def deserialize_basic(self, attr, data_type): # pylint: disable=too-many-return :param str data_type: deserialization data type. :return: Deserialized basic type. :rtype: str, int, float or bool - :raises TypeError: if string format is not valid. + :raises TypeError: raised if string format is not valid or data_type is not one of str, int, float, bool. """ # If we're here, data is supposed to be a basic type.
# If it's still an XML node, take the text @@ -1783,7 +1790,11 @@ def deserialize_basic(self, attr, data_type): # pylint: disable=too-many-return if data_type == "str": return self.deserialize_unicode(attr) - return eval(data_type)(attr) # nosec # pylint: disable=eval-used + if data_type == "int": + return int(attr) + if data_type == "float": + return float(attr) + raise TypeError("Unknown basic data type: {}".format(data_type)) @staticmethod def deserialize_unicode(data): diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/__init__.py new file mode 100644 index 000000000000..77be5d3427a5 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/__init__.py @@ -0,0 +1,906 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- +# pylint: disable=wrong-import-position + +from typing import TYPE_CHECKING + +if TYPE_CHECKING: + from ._patch import * # pylint: disable=unused-wildcard-import + + +from ._models import ( # type: ignore + A2APreviewTool, + A2AToolCall, + A2AToolCallOutput, + AISearchIndexResource, + AgentId, + AgentReference, + Annotation, + ApiErrorResponse, + ApplyPatchCreateFileOperation, + ApplyPatchCreateFileOperationParam, + ApplyPatchDeleteFileOperation, + ApplyPatchDeleteFileOperationParam, + ApplyPatchFileOperation, + ApplyPatchOperationParam, + ApplyPatchToolCallItemParam, + ApplyPatchToolCallOutputItemParam, + ApplyPatchToolParam, + ApplyPatchUpdateFileOperation, + ApplyPatchUpdateFileOperationParam, + ApproximateLocation, + AutoCodeInterpreterToolParam, + AzureAISearchTool, + AzureAISearchToolCall, + AzureAISearchToolCallOutput, + AzureAISearchToolResource, + AzureFunctionBinding, + AzureFunctionDefinition, + AzureFunctionDefinitionFunction, + AzureFunctionStorageQueue, + AzureFunctionTool, + AzureFunctionToolCall, + AzureFunctionToolCallOutput, + BingCustomSearchConfiguration, + BingCustomSearchPreviewTool, + BingCustomSearchToolCall, + BingCustomSearchToolCallOutput, + BingCustomSearchToolParameters, + BingGroundingSearchConfiguration, + BingGroundingSearchToolParameters, + BingGroundingTool, + BingGroundingToolCall, + BingGroundingToolCallOutput, + BrowserAutomationPreviewTool, + BrowserAutomationToolCall, + BrowserAutomationToolCallOutput, + BrowserAutomationToolConnectionParameters, + BrowserAutomationToolParameters, + CaptureStructuredOutputsTool, + ChatSummaryMemoryItem, + ClickParam, + CodeInterpreterOutputImage, + CodeInterpreterOutputLogs, + CodeInterpreterTool, + CompactResource, + CompactionSummaryItemParam, + ComparisonFilter, + CompoundFilter, + ComputerAction, + ComputerCallOutputItemParam, + ComputerCallSafetyCheckParam, + ComputerScreenshotContent, + 
ComputerScreenshotImage, + ComputerTool, + ComputerUsePreviewTool, + ContainerAutoParam, + ContainerFileCitationBody, + ContainerNetworkPolicyAllowlistParam, + ContainerNetworkPolicyDisabledParam, + ContainerNetworkPolicyDomainSecretParam, + ContainerNetworkPolicyParam, + ContainerReferenceResource, + ContainerSkill, + ContextManagementParam, + ConversationParam_2, + ConversationReference, + CoordParam, + CreateResponse, + CreatedBy, + CustomGrammarFormatParam, + CustomTextFormatParam, + CustomToolParam, + CustomToolParamFormat, + DeleteResponseResult, + DoubleClickAction, + DragParam, + EmptyModelParam, + Error, + FabricDataAgentToolCall, + FabricDataAgentToolCallOutput, + FabricDataAgentToolParameters, + FileCitationBody, + FilePath, + FileSearchTool, + FileSearchToolCallResults, + FunctionAndCustomToolCallOutput, + FunctionAndCustomToolCallOutputInputFileContent, + FunctionAndCustomToolCallOutputInputImageContent, + FunctionAndCustomToolCallOutputInputTextContent, + FunctionCallOutputItemParam, + FunctionShellAction, + FunctionShellActionParam, + FunctionShellCallEnvironment, + FunctionShellCallItemParam, + FunctionShellCallItemParamEnvironment, + FunctionShellCallItemParamEnvironmentContainerReferenceParam, + FunctionShellCallItemParamEnvironmentLocalEnvironmentParam, + FunctionShellCallOutputContent, + FunctionShellCallOutputContentParam, + FunctionShellCallOutputExitOutcome, + FunctionShellCallOutputExitOutcomeParam, + FunctionShellCallOutputItemParam, + FunctionShellCallOutputOutcome, + FunctionShellCallOutputOutcomeParam, + FunctionShellCallOutputTimeoutOutcome, + FunctionShellCallOutputTimeoutOutcomeParam, + FunctionShellToolParam, + FunctionShellToolParamEnvironment, + FunctionShellToolParamEnvironmentContainerReferenceParam, + FunctionShellToolParamEnvironmentLocalEnvironmentParam, + FunctionTool, + FunctionToolParam, + HybridSearchOptions, + ImageGenTool, + ImageGenToolInputImageMask, + InlineSkillParam, + InlineSkillSourceParam, + InputFileContent, + 
InputFileContentParam, + InputImageContent, + InputImageContentParamAutoParam, + InputTextContent, + InputTextContentParam, + Item, + ItemCodeInterpreterToolCall, + ItemComputerToolCall, + ItemCustomToolCall, + ItemCustomToolCallOutput, + ItemField, + ItemFieldApplyPatchToolCall, + ItemFieldApplyPatchToolCallOutput, + ItemFieldCodeInterpreterToolCall, + ItemFieldCompactionBody, + ItemFieldComputerToolCall, + ItemFieldComputerToolCallOutput, + ItemFieldCustomToolCall, + ItemFieldCustomToolCallOutput, + ItemFieldFileSearchToolCall, + ItemFieldFunctionShellCall, + ItemFieldFunctionShellCallOutput, + ItemFieldFunctionToolCall, + ItemFieldFunctionToolCallOutput, + ItemFieldImageGenToolCall, + ItemFieldLocalShellToolCall, + ItemFieldLocalShellToolCallOutput, + ItemFieldMcpApprovalRequest, + ItemFieldMcpApprovalResponseResource, + ItemFieldMcpListTools, + ItemFieldMcpToolCall, + ItemFieldMessage, + ItemFieldReasoningItem, + ItemFieldToolSearchCall, + ItemFieldToolSearchOutput, + ItemFieldWebSearchToolCall, + ItemFileSearchToolCall, + ItemFunctionToolCall, + ItemImageGenToolCall, + ItemLocalShellToolCall, + ItemLocalShellToolCallOutput, + ItemMcpApprovalRequest, + ItemMcpListTools, + ItemMcpToolCall, + ItemMessage, + ItemOutputMessage, + ItemReasoningItem, + ItemReferenceParam, + ItemWebSearchToolCall, + KeyPressAction, + LocalEnvironmentResource, + LocalShellExecAction, + LocalShellToolParam, + LocalSkillParam, + LogProb, + MCPApprovalResponse, + MCPListToolsTool, + MCPListToolsToolAnnotations, + MCPListToolsToolInputSchema, + MCPTool, + MCPToolFilter, + MCPToolRequireApproval, + MemoryItem, + MemorySearchItem, + MemorySearchOptions, + MemorySearchPreviewTool, + MemorySearchTool, + MemorySearchToolCallItemParam, + MemorySearchToolCallItemResource, + MessageContent, + MessageContentInputFileContent, + MessageContentInputImageContent, + MessageContentInputTextContent, + MessageContentOutputTextContent, + MessageContentReasoningTextContent, + MessageContentRefusalContent, + 
Metadata, + MicrosoftFabricPreviewTool, + MoveParam, + NamespaceToolParam, + OAuthConsentRequestOutputItem, + OpenApiAnonymousAuthDetails, + OpenApiAuthDetails, + OpenApiFunctionDefinition, + OpenApiFunctionDefinitionFunction, + OpenApiManagedAuthDetails, + OpenApiManagedSecurityScheme, + OpenApiProjectConnectionAuthDetails, + OpenApiProjectConnectionSecurityScheme, + OpenApiTool, + OpenApiToolCall, + OpenApiToolCallOutput, + OutputContent, + OutputContentOutputTextContent, + OutputContentReasoningTextContent, + OutputContentRefusalContent, + OutputItem, + OutputItemApplyPatchToolCall, + OutputItemApplyPatchToolCallOutput, + OutputItemCodeInterpreterToolCall, + OutputItemCompactionBody, + OutputItemComputerToolCall, + OutputItemComputerToolCallOutput, + OutputItemCustomToolCall, + OutputItemCustomToolCallOutput, + OutputItemFileSearchToolCall, + OutputItemFunctionShellCall, + OutputItemFunctionShellCallOutput, + OutputItemFunctionToolCall, + OutputItemFunctionToolCallOutput, + OutputItemImageGenToolCall, + OutputItemLocalShellToolCall, + OutputItemLocalShellToolCallOutput, + OutputItemMcpApprovalRequest, + OutputItemMcpApprovalResponseResource, + OutputItemMcpListTools, + OutputItemMcpToolCall, + OutputItemMessage, + OutputItemOutputMessage, + OutputItemReasoningItem, + OutputItemToolSearchCall, + OutputItemToolSearchOutput, + OutputItemWebSearchToolCall, + OutputMessageContent, + OutputMessageContentOutputTextContent, + OutputMessageContentRefusalContent, + Prompt, + RankingOptions, + RealtimeMCPError, + RealtimeMCPHTTPError, + RealtimeMCPProtocolError, + RealtimeMCPToolExecutionError, + Reasoning, + ReasoningTextContent, + Response, + ResponseAudioDeltaEvent, + ResponseAudioDoneEvent, + ResponseAudioTranscriptDeltaEvent, + ResponseAudioTranscriptDoneEvent, + ResponseCodeInterpreterCallCodeDeltaEvent, + ResponseCodeInterpreterCallCodeDoneEvent, + ResponseCodeInterpreterCallCompletedEvent, + ResponseCodeInterpreterCallInProgressEvent, + 
ResponseCodeInterpreterCallInterpretingEvent, + ResponseCompletedEvent, + ResponseContentPartAddedEvent, + ResponseContentPartDoneEvent, + ResponseCreatedEvent, + ResponseCustomToolCallInputDeltaEvent, + ResponseCustomToolCallInputDoneEvent, + ResponseError, + ResponseErrorEvent, + ResponseFailedEvent, + ResponseFileSearchCallCompletedEvent, + ResponseFileSearchCallInProgressEvent, + ResponseFileSearchCallSearchingEvent, + ResponseFormatJsonSchemaSchema, + ResponseFunctionCallArgumentsDeltaEvent, + ResponseFunctionCallArgumentsDoneEvent, + ResponseImageGenCallCompletedEvent, + ResponseImageGenCallGeneratingEvent, + ResponseImageGenCallInProgressEvent, + ResponseImageGenCallPartialImageEvent, + ResponseInProgressEvent, + ResponseIncompleteDetails, + ResponseIncompleteEvent, + ResponseLogProb, + ResponseLogProbTopLogprobs, + ResponseMCPCallArgumentsDeltaEvent, + ResponseMCPCallArgumentsDoneEvent, + ResponseMCPCallCompletedEvent, + ResponseMCPCallFailedEvent, + ResponseMCPCallInProgressEvent, + ResponseMCPListToolsCompletedEvent, + ResponseMCPListToolsFailedEvent, + ResponseMCPListToolsInProgressEvent, + ResponseOutputItemAddedEvent, + ResponseOutputItemDoneEvent, + ResponseOutputTextAnnotationAddedEvent, + ResponsePromptVariables, + ResponseQueuedEvent, + ResponseReasoningSummaryPartAddedEvent, + ResponseReasoningSummaryPartAddedEventPart, + ResponseReasoningSummaryPartDoneEvent, + ResponseReasoningSummaryPartDoneEventPart, + ResponseReasoningSummaryTextDeltaEvent, + ResponseReasoningSummaryTextDoneEvent, + ResponseReasoningTextDeltaEvent, + ResponseReasoningTextDoneEvent, + ResponseRefusalDeltaEvent, + ResponseRefusalDoneEvent, + ResponseStreamEvent, + ResponseStreamOptions, + ResponseTextDeltaEvent, + ResponseTextDoneEvent, + ResponseTextParam, + ResponseUsage, + ResponseUsageInputTokensDetails, + ResponseUsageOutputTokensDetails, + ResponseWebSearchCallCompletedEvent, + ResponseWebSearchCallInProgressEvent, + ResponseWebSearchCallSearchingEvent, + ScreenshotParam, 
+ ScrollParam, + SharepointGroundingToolCall, + SharepointGroundingToolCallOutput, + SharepointGroundingToolParameters, + SharepointPreviewTool, + SkillReferenceParam, + SpecificApplyPatchParam, + SpecificFunctionShellParam, + StructuredOutputDefinition, + StructuredOutputsOutputItem, + SummaryTextContent, + TextContent, + TextResponseFormatConfiguration, + TextResponseFormatConfigurationResponseFormatJsonObject, + TextResponseFormatConfigurationResponseFormatText, + TextResponseFormatJsonSchema, + Tool, + ToolChoiceAllowed, + ToolChoiceCodeInterpreter, + ToolChoiceComputer, + ToolChoiceComputerUse, + ToolChoiceComputerUsePreview, + ToolChoiceCustom, + ToolChoiceFileSearch, + ToolChoiceFunction, + ToolChoiceImageGeneration, + ToolChoiceMCP, + ToolChoiceParam, + ToolChoiceWebSearchPreview, + ToolChoiceWebSearchPreview20250311, + ToolProjectConnection, + ToolSearchCallItemParam, + ToolSearchOutputItemParam, + ToolSearchToolParam, + TopLogProb, + TypeParam, + UrlCitationBody, + UserProfileMemoryItem, + VectorStoreFileAttributes, + WaitParam, + WebSearchActionFind, + WebSearchActionOpenPage, + WebSearchActionSearch, + WebSearchActionSearchSources, + WebSearchApproximateLocation, + WebSearchConfiguration, + WebSearchPreviewTool, + WebSearchTool, + WebSearchToolFilters, + WorkflowActionOutputItem, +) + +from ._enums import ( # type: ignore + AnnotationType, + ApplyPatchCallOutputStatus, + ApplyPatchCallOutputStatusParam, + ApplyPatchCallStatus, + ApplyPatchCallStatusParam, + ApplyPatchFileOperationType, + ApplyPatchOperationParamType, + AzureAISearchQueryType, + ClickButtonType, + ComputerActionType, + ComputerEnvironment, + ContainerMemoryLimit, + ContainerNetworkPolicyParamType, + ContainerSkillType, + CustomToolParamFormatType, + DetailEnum, + FunctionAndCustomToolCallOutputType, + FunctionCallItemStatus, + FunctionCallOutputStatusEnum, + FunctionCallStatus, + FunctionShellCallEnvironmentType, + FunctionShellCallItemParamEnvironmentType, + FunctionShellCallItemStatus, 
+ FunctionShellCallOutputOutcomeParamType, + FunctionShellCallOutputOutcomeType, + FunctionShellToolParamEnvironmentType, + GrammarSyntax1, + ImageDetail, + ImageGenActionEnum, + IncludeEnum, + InputFidelity, + ItemFieldType, + ItemType, + LocalShellCallOutputStatusEnum, + LocalShellCallStatus, + MCPToolCallStatus, + MemoryItemKind, + MessageContentType, + MessagePhase, + MessageRole, + MessageStatus, + ModelIdsCompaction, + OpenApiAuthType, + OutputContentType, + OutputItemType, + OutputMessageContentType, + PageOrder, + RankerVersionType, + RealtimeMcpErrorType, + ResponseErrorCode, + ResponseStreamEventType, + SearchContentType, + SearchContextSize, + TextResponseFormatConfigurationType, + ToolCallStatus, + ToolChoiceOptions, + ToolChoiceParamType, + ToolSearchExecutionType, + ToolType, +) +from ._patch import __all__ as _patch_all +from ._patch import * +from ._patch import patch_sdk as _patch_sdk + +__all__ = [ + "A2APreviewTool", + "A2AToolCall", + "A2AToolCallOutput", + "AISearchIndexResource", + "AgentId", + "AgentReference", + "Annotation", + "ApiErrorResponse", + "ApplyPatchCreateFileOperation", + "ApplyPatchCreateFileOperationParam", + "ApplyPatchDeleteFileOperation", + "ApplyPatchDeleteFileOperationParam", + "ApplyPatchFileOperation", + "ApplyPatchOperationParam", + "ApplyPatchToolCallItemParam", + "ApplyPatchToolCallOutputItemParam", + "ApplyPatchToolParam", + "ApplyPatchUpdateFileOperation", + "ApplyPatchUpdateFileOperationParam", + "ApproximateLocation", + "AutoCodeInterpreterToolParam", + "AzureAISearchTool", + "AzureAISearchToolCall", + "AzureAISearchToolCallOutput", + "AzureAISearchToolResource", + "AzureFunctionBinding", + "AzureFunctionDefinition", + "AzureFunctionDefinitionFunction", + "AzureFunctionStorageQueue", + "AzureFunctionTool", + "AzureFunctionToolCall", + "AzureFunctionToolCallOutput", + "BingCustomSearchConfiguration", + "BingCustomSearchPreviewTool", + "BingCustomSearchToolCall", + "BingCustomSearchToolCallOutput", + 
"BingCustomSearchToolParameters", + "BingGroundingSearchConfiguration", + "BingGroundingSearchToolParameters", + "BingGroundingTool", + "BingGroundingToolCall", + "BingGroundingToolCallOutput", + "BrowserAutomationPreviewTool", + "BrowserAutomationToolCall", + "BrowserAutomationToolCallOutput", + "BrowserAutomationToolConnectionParameters", + "BrowserAutomationToolParameters", + "CaptureStructuredOutputsTool", + "ChatSummaryMemoryItem", + "ClickParam", + "CodeInterpreterOutputImage", + "CodeInterpreterOutputLogs", + "CodeInterpreterTool", + "CompactResource", + "CompactionSummaryItemParam", + "ComparisonFilter", + "CompoundFilter", + "ComputerAction", + "ComputerCallOutputItemParam", + "ComputerCallSafetyCheckParam", + "ComputerScreenshotContent", + "ComputerScreenshotImage", + "ComputerTool", + "ComputerUsePreviewTool", + "ContainerAutoParam", + "ContainerFileCitationBody", + "ContainerNetworkPolicyAllowlistParam", + "ContainerNetworkPolicyDisabledParam", + "ContainerNetworkPolicyDomainSecretParam", + "ContainerNetworkPolicyParam", + "ContainerReferenceResource", + "ContainerSkill", + "ContextManagementParam", + "ConversationParam_2", + "ConversationReference", + "CoordParam", + "CreateResponse", + "CreatedBy", + "CustomGrammarFormatParam", + "CustomTextFormatParam", + "CustomToolParam", + "CustomToolParamFormat", + "DeleteResponseResult", + "DoubleClickAction", + "DragParam", + "EmptyModelParam", + "Error", + "FabricDataAgentToolCall", + "FabricDataAgentToolCallOutput", + "FabricDataAgentToolParameters", + "FileCitationBody", + "FilePath", + "FileSearchTool", + "FileSearchToolCallResults", + "FunctionAndCustomToolCallOutput", + "FunctionAndCustomToolCallOutputInputFileContent", + "FunctionAndCustomToolCallOutputInputImageContent", + "FunctionAndCustomToolCallOutputInputTextContent", + "FunctionCallOutputItemParam", + "FunctionShellAction", + "FunctionShellActionParam", + "FunctionShellCallEnvironment", + "FunctionShellCallItemParam", + 
"FunctionShellCallItemParamEnvironment", + "FunctionShellCallItemParamEnvironmentContainerReferenceParam", + "FunctionShellCallItemParamEnvironmentLocalEnvironmentParam", + "FunctionShellCallOutputContent", + "FunctionShellCallOutputContentParam", + "FunctionShellCallOutputExitOutcome", + "FunctionShellCallOutputExitOutcomeParam", + "FunctionShellCallOutputItemParam", + "FunctionShellCallOutputOutcome", + "FunctionShellCallOutputOutcomeParam", + "FunctionShellCallOutputTimeoutOutcome", + "FunctionShellCallOutputTimeoutOutcomeParam", + "FunctionShellToolParam", + "FunctionShellToolParamEnvironment", + "FunctionShellToolParamEnvironmentContainerReferenceParam", + "FunctionShellToolParamEnvironmentLocalEnvironmentParam", + "FunctionTool", + "FunctionToolParam", + "HybridSearchOptions", + "ImageGenTool", + "ImageGenToolInputImageMask", + "InlineSkillParam", + "InlineSkillSourceParam", + "InputFileContent", + "InputFileContentParam", + "InputImageContent", + "InputImageContentParamAutoParam", + "InputTextContent", + "InputTextContentParam", + "Item", + "ItemCodeInterpreterToolCall", + "ItemComputerToolCall", + "ItemCustomToolCall", + "ItemCustomToolCallOutput", + "ItemField", + "ItemFieldApplyPatchToolCall", + "ItemFieldApplyPatchToolCallOutput", + "ItemFieldCodeInterpreterToolCall", + "ItemFieldCompactionBody", + "ItemFieldComputerToolCall", + "ItemFieldComputerToolCallOutput", + "ItemFieldCustomToolCall", + "ItemFieldCustomToolCallOutput", + "ItemFieldFileSearchToolCall", + "ItemFieldFunctionShellCall", + "ItemFieldFunctionShellCallOutput", + "ItemFieldFunctionToolCall", + "ItemFieldFunctionToolCallOutput", + "ItemFieldImageGenToolCall", + "ItemFieldLocalShellToolCall", + "ItemFieldLocalShellToolCallOutput", + "ItemFieldMcpApprovalRequest", + "ItemFieldMcpApprovalResponseResource", + "ItemFieldMcpListTools", + "ItemFieldMcpToolCall", + "ItemFieldMessage", + "ItemFieldReasoningItem", + "ItemFieldToolSearchCall", + "ItemFieldToolSearchOutput", + 
"ItemFieldWebSearchToolCall", + "ItemFileSearchToolCall", + "ItemFunctionToolCall", + "ItemImageGenToolCall", + "ItemLocalShellToolCall", + "ItemLocalShellToolCallOutput", + "ItemMcpApprovalRequest", + "ItemMcpListTools", + "ItemMcpToolCall", + "ItemMessage", + "ItemOutputMessage", + "ItemReasoningItem", + "ItemReferenceParam", + "ItemWebSearchToolCall", + "KeyPressAction", + "LocalEnvironmentResource", + "LocalShellExecAction", + "LocalShellToolParam", + "LocalSkillParam", + "LogProb", + "MCPApprovalResponse", + "MCPListToolsTool", + "MCPListToolsToolAnnotations", + "MCPListToolsToolInputSchema", + "MCPTool", + "MCPToolFilter", + "MCPToolRequireApproval", + "MemoryItem", + "MemorySearchItem", + "MemorySearchOptions", + "MemorySearchPreviewTool", + "MemorySearchTool", + "MemorySearchToolCallItemParam", + "MemorySearchToolCallItemResource", + "MessageContent", + "MessageContentInputFileContent", + "MessageContentInputImageContent", + "MessageContentInputTextContent", + "MessageContentOutputTextContent", + "MessageContentReasoningTextContent", + "MessageContentRefusalContent", + "Metadata", + "MicrosoftFabricPreviewTool", + "MoveParam", + "NamespaceToolParam", + "OAuthConsentRequestOutputItem", + "OpenApiAnonymousAuthDetails", + "OpenApiAuthDetails", + "OpenApiFunctionDefinition", + "OpenApiFunctionDefinitionFunction", + "OpenApiManagedAuthDetails", + "OpenApiManagedSecurityScheme", + "OpenApiProjectConnectionAuthDetails", + "OpenApiProjectConnectionSecurityScheme", + "OpenApiTool", + "OpenApiToolCall", + "OpenApiToolCallOutput", + "OutputContent", + "OutputContentOutputTextContent", + "OutputContentReasoningTextContent", + "OutputContentRefusalContent", + "OutputItem", + "OutputItemApplyPatchToolCall", + "OutputItemApplyPatchToolCallOutput", + "OutputItemCodeInterpreterToolCall", + "OutputItemCompactionBody", + "OutputItemComputerToolCall", + "OutputItemComputerToolCallOutput", + "OutputItemCustomToolCall", + "OutputItemCustomToolCallOutput", + 
"OutputItemFileSearchToolCall", + "OutputItemFunctionShellCall", + "OutputItemFunctionShellCallOutput", + "OutputItemFunctionToolCall", + "OutputItemFunctionToolCallOutput", + "OutputItemImageGenToolCall", + "OutputItemLocalShellToolCall", + "OutputItemLocalShellToolCallOutput", + "OutputItemMcpApprovalRequest", + "OutputItemMcpApprovalResponseResource", + "OutputItemMcpListTools", + "OutputItemMcpToolCall", + "OutputItemMessage", + "OutputItemOutputMessage", + "OutputItemReasoningItem", + "OutputItemToolSearchCall", + "OutputItemToolSearchOutput", + "OutputItemWebSearchToolCall", + "OutputMessageContent", + "OutputMessageContentOutputTextContent", + "OutputMessageContentRefusalContent", + "Prompt", + "RankingOptions", + "RealtimeMCPError", + "RealtimeMCPHTTPError", + "RealtimeMCPProtocolError", + "RealtimeMCPToolExecutionError", + "Reasoning", + "ReasoningTextContent", + "Response", + "ResponseAudioDeltaEvent", + "ResponseAudioDoneEvent", + "ResponseAudioTranscriptDeltaEvent", + "ResponseAudioTranscriptDoneEvent", + "ResponseCodeInterpreterCallCodeDeltaEvent", + "ResponseCodeInterpreterCallCodeDoneEvent", + "ResponseCodeInterpreterCallCompletedEvent", + "ResponseCodeInterpreterCallInProgressEvent", + "ResponseCodeInterpreterCallInterpretingEvent", + "ResponseCompletedEvent", + "ResponseContentPartAddedEvent", + "ResponseContentPartDoneEvent", + "ResponseCreatedEvent", + "ResponseCustomToolCallInputDeltaEvent", + "ResponseCustomToolCallInputDoneEvent", + "ResponseError", + "ResponseErrorEvent", + "ResponseFailedEvent", + "ResponseFileSearchCallCompletedEvent", + "ResponseFileSearchCallInProgressEvent", + "ResponseFileSearchCallSearchingEvent", + "ResponseFormatJsonSchemaSchema", + "ResponseFunctionCallArgumentsDeltaEvent", + "ResponseFunctionCallArgumentsDoneEvent", + "ResponseImageGenCallCompletedEvent", + "ResponseImageGenCallGeneratingEvent", + "ResponseImageGenCallInProgressEvent", + "ResponseImageGenCallPartialImageEvent", + "ResponseInProgressEvent", + 
"ResponseIncompleteDetails", + "ResponseIncompleteEvent", + "ResponseLogProb", + "ResponseLogProbTopLogprobs", + "ResponseMCPCallArgumentsDeltaEvent", + "ResponseMCPCallArgumentsDoneEvent", + "ResponseMCPCallCompletedEvent", + "ResponseMCPCallFailedEvent", + "ResponseMCPCallInProgressEvent", + "ResponseMCPListToolsCompletedEvent", + "ResponseMCPListToolsFailedEvent", + "ResponseMCPListToolsInProgressEvent", + "ResponseOutputItemAddedEvent", + "ResponseOutputItemDoneEvent", + "ResponseOutputTextAnnotationAddedEvent", + "ResponsePromptVariables", + "ResponseQueuedEvent", + "ResponseReasoningSummaryPartAddedEvent", + "ResponseReasoningSummaryPartAddedEventPart", + "ResponseReasoningSummaryPartDoneEvent", + "ResponseReasoningSummaryPartDoneEventPart", + "ResponseReasoningSummaryTextDeltaEvent", + "ResponseReasoningSummaryTextDoneEvent", + "ResponseReasoningTextDeltaEvent", + "ResponseReasoningTextDoneEvent", + "ResponseRefusalDeltaEvent", + "ResponseRefusalDoneEvent", + "ResponseStreamEvent", + "ResponseStreamOptions", + "ResponseTextDeltaEvent", + "ResponseTextDoneEvent", + "ResponseTextParam", + "ResponseUsage", + "ResponseUsageInputTokensDetails", + "ResponseUsageOutputTokensDetails", + "ResponseWebSearchCallCompletedEvent", + "ResponseWebSearchCallInProgressEvent", + "ResponseWebSearchCallSearchingEvent", + "ScreenshotParam", + "ScrollParam", + "SharepointGroundingToolCall", + "SharepointGroundingToolCallOutput", + "SharepointGroundingToolParameters", + "SharepointPreviewTool", + "SkillReferenceParam", + "SpecificApplyPatchParam", + "SpecificFunctionShellParam", + "StructuredOutputDefinition", + "StructuredOutputsOutputItem", + "SummaryTextContent", + "TextContent", + "TextResponseFormatConfiguration", + "TextResponseFormatConfigurationResponseFormatJsonObject", + "TextResponseFormatConfigurationResponseFormatText", + "TextResponseFormatJsonSchema", + "Tool", + "ToolChoiceAllowed", + "ToolChoiceCodeInterpreter", + "ToolChoiceComputer", + "ToolChoiceComputerUse", + 
"ToolChoiceComputerUsePreview", + "ToolChoiceCustom", + "ToolChoiceFileSearch", + "ToolChoiceFunction", + "ToolChoiceImageGeneration", + "ToolChoiceMCP", + "ToolChoiceParam", + "ToolChoiceWebSearchPreview", + "ToolChoiceWebSearchPreview20250311", + "ToolProjectConnection", + "ToolSearchCallItemParam", + "ToolSearchOutputItemParam", + "ToolSearchToolParam", + "TopLogProb", + "TypeParam", + "UrlCitationBody", + "UserProfileMemoryItem", + "VectorStoreFileAttributes", + "WaitParam", + "WebSearchActionFind", + "WebSearchActionOpenPage", + "WebSearchActionSearch", + "WebSearchActionSearchSources", + "WebSearchApproximateLocation", + "WebSearchConfiguration", + "WebSearchPreviewTool", + "WebSearchTool", + "WebSearchToolFilters", + "WorkflowActionOutputItem", + "AnnotationType", + "ApplyPatchCallOutputStatus", + "ApplyPatchCallOutputStatusParam", + "ApplyPatchCallStatus", + "ApplyPatchCallStatusParam", + "ApplyPatchFileOperationType", + "ApplyPatchOperationParamType", + "AzureAISearchQueryType", + "ClickButtonType", + "ComputerActionType", + "ComputerEnvironment", + "ContainerMemoryLimit", + "ContainerNetworkPolicyParamType", + "ContainerSkillType", + "CustomToolParamFormatType", + "DetailEnum", + "FunctionAndCustomToolCallOutputType", + "FunctionCallItemStatus", + "FunctionCallOutputStatusEnum", + "FunctionCallStatus", + "FunctionShellCallEnvironmentType", + "FunctionShellCallItemParamEnvironmentType", + "FunctionShellCallItemStatus", + "FunctionShellCallOutputOutcomeParamType", + "FunctionShellCallOutputOutcomeType", + "FunctionShellToolParamEnvironmentType", + "GrammarSyntax1", + "ImageDetail", + "ImageGenActionEnum", + "IncludeEnum", + "InputFidelity", + "ItemFieldType", + "ItemType", + "LocalShellCallOutputStatusEnum", + "LocalShellCallStatus", + "MCPToolCallStatus", + "MemoryItemKind", + "MessageContentType", + "MessagePhase", + "MessageRole", + "MessageStatus", + "ModelIdsCompaction", + "OpenApiAuthType", + "OutputContentType", + "OutputItemType", + 
"OutputMessageContentType", + "PageOrder", + "RankerVersionType", + "RealtimeMcpErrorType", + "ResponseErrorCode", + "ResponseStreamEventType", + "SearchContentType", + "SearchContextSize", + "TextResponseFormatConfigurationType", + "ToolCallStatus", + "ToolChoiceOptions", + "ToolChoiceParamType", + "ToolSearchExecutionType", + "ToolType", +] +__all__.extend([p for p in _patch_all if p not in __all__]) # pyright: ignore +_patch_sdk() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_enums.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_enums.py new file mode 100644 index 000000000000..59c25a129d99 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_enums.py @@ -0,0 +1,1320 @@ +# pylint: disable=too-many-lines +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# -------------------------------------------------------------------------- + +from enum import Enum +from azure.core import CaseInsensitiveEnumMeta + + +class AnnotationType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of AnnotationType.""" + + FILE_CITATION = "file_citation" + """FILE_CITATION.""" + URL_CITATION = "url_citation" + """URL_CITATION.""" + CONTAINER_FILE_CITATION = "container_file_citation" + """CONTAINER_FILE_CITATION.""" + FILE_PATH = "file_path" + """FILE_PATH.""" + + +class ApplyPatchCallOutputStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ApplyPatchCallOutputStatus.""" + + COMPLETED = "completed" + """COMPLETED.""" + FAILED = "failed" + """FAILED.""" + + +class ApplyPatchCallOutputStatusParam(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Apply patch call output status.""" + + COMPLETED = "completed" + """COMPLETED.""" + FAILED = "failed" + """FAILED.""" + + +class ApplyPatchCallStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ApplyPatchCallStatus.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + + +class ApplyPatchCallStatusParam(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Apply patch call status.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + + +class ApplyPatchFileOperationType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ApplyPatchFileOperationType.""" + + CREATE_FILE = "create_file" + """CREATE_FILE.""" + DELETE_FILE = "delete_file" + """DELETE_FILE.""" + UPDATE_FILE = "update_file" + """UPDATE_FILE.""" + + +class ApplyPatchOperationParamType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ApplyPatchOperationParamType.""" + + CREATE_FILE = "create_file" + """CREATE_FILE.""" + DELETE_FILE = "delete_file" + """DELETE_FILE.""" + UPDATE_FILE = "update_file" + """UPDATE_FILE.""" + + +class AzureAISearchQueryType(str, Enum, 
metaclass=CaseInsensitiveEnumMeta): + """Available query types for Azure AI Search tool.""" + + SIMPLE = "simple" + """Query type ``simple``.""" + SEMANTIC = "semantic" + """Query type ``semantic``.""" + VECTOR = "vector" + """Query type ``vector``.""" + VECTOR_SIMPLE_HYBRID = "vector_simple_hybrid" + """Query type ``vector_simple_hybrid``.""" + VECTOR_SEMANTIC_HYBRID = "vector_semantic_hybrid" + """Query type ``vector_semantic_hybrid``.""" + + +class ClickButtonType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ClickButtonType.""" + + LEFT = "left" + """LEFT.""" + RIGHT = "right" + """RIGHT.""" + WHEEL = "wheel" + """WHEEL.""" + BACK = "back" + """BACK.""" + FORWARD = "forward" + """FORWARD.""" + + +class ComputerActionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ComputerActionType.""" + + CLICK = "click" + """CLICK.""" + DOUBLE_CLICK = "double_click" + """DOUBLE_CLICK.""" + DRAG = "drag" + """DRAG.""" + KEYPRESS = "keypress" + """KEYPRESS.""" + MOVE = "move" + """MOVE.""" + SCREENSHOT = "screenshot" + """SCREENSHOT.""" + SCROLL = "scroll" + """SCROLL.""" + TYPE = "type" + """TYPE.""" + WAIT = "wait" + """WAIT.""" + + +class ComputerEnvironment(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ComputerEnvironment.""" + + WINDOWS = "windows" + """WINDOWS.""" + MAC = "mac" + """MAC.""" + LINUX = "linux" + """LINUX.""" + UBUNTU = "ubuntu" + """UBUNTU.""" + BROWSER = "browser" + """BROWSER.""" + + +class ContainerMemoryLimit(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ContainerMemoryLimit.""" + + ENUM_1_G = "1g" + """1_G.""" + ENUM_4_G = "4g" + """4_G.""" + ENUM_16_G = "16g" + """16_G.""" + ENUM_64_G = "64g" + """64_G.""" + + +class ContainerNetworkPolicyParamType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ContainerNetworkPolicyParamType.""" + + DISABLED = "disabled" + """DISABLED.""" + ALLOWLIST = "allowlist" + """ALLOWLIST.""" + + +class ContainerSkillType(str, Enum, 
metaclass=CaseInsensitiveEnumMeta): + """Type of ContainerSkillType.""" + + SKILL_REFERENCE = "skill_reference" + """SKILL_REFERENCE.""" + INLINE = "inline" + """INLINE.""" + + +class CustomToolParamFormatType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of CustomToolParamFormatType.""" + + TEXT = "text" + """TEXT.""" + GRAMMAR = "grammar" + """GRAMMAR.""" + + +class DetailEnum(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of DetailEnum.""" + + LOW = "low" + """LOW.""" + HIGH = "high" + """HIGH.""" + AUTO = "auto" + """AUTO.""" + ORIGINAL = "original" + """ORIGINAL.""" + + +class FunctionAndCustomToolCallOutputType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionAndCustomToolCallOutputType.""" + + INPUT_TEXT = "input_text" + """INPUT_TEXT.""" + INPUT_IMAGE = "input_image" + """INPUT_IMAGE.""" + INPUT_FILE = "input_file" + """INPUT_FILE.""" + + +class FunctionCallItemStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionCallItemStatus.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class FunctionCallOutputStatusEnum(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionCallOutputStatusEnum.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class FunctionCallStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionCallStatus.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class FunctionShellCallEnvironmentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionShellCallEnvironmentType.""" + + LOCAL = "local" + """LOCAL.""" + CONTAINER_REFERENCE = "container_reference" + """CONTAINER_REFERENCE.""" + + +class 
FunctionShellCallItemParamEnvironmentType( # pylint: disable=name-too-long + str, Enum, metaclass=CaseInsensitiveEnumMeta +): + """Type of FunctionShellCallItemParamEnvironmentType.""" + + LOCAL = "local" + """LOCAL.""" + CONTAINER_REFERENCE = "container_reference" + """CONTAINER_REFERENCE.""" + + +class FunctionShellCallItemStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Shell call status.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class FunctionShellCallOutputOutcomeParamType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionShellCallOutputOutcomeParamType.""" + + TIMEOUT = "timeout" + """TIMEOUT.""" + EXIT = "exit" + """EXIT.""" + + +class FunctionShellCallOutputOutcomeType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionShellCallOutputOutcomeType.""" + + TIMEOUT = "timeout" + """TIMEOUT.""" + EXIT = "exit" + """EXIT.""" + + +class FunctionShellToolParamEnvironmentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of FunctionShellToolParamEnvironmentType.""" + + CONTAINER_AUTO = "container_auto" + """CONTAINER_AUTO.""" + LOCAL = "local" + """LOCAL.""" + CONTAINER_REFERENCE = "container_reference" + """CONTAINER_REFERENCE.""" + + +class GrammarSyntax1(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of GrammarSyntax1.""" + + LARK = "lark" + """LARK.""" + REGEX = "regex" + """REGEX.""" + + +class ImageDetail(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ImageDetail.""" + + LOW = "low" + """LOW.""" + HIGH = "high" + """HIGH.""" + AUTO = "auto" + """AUTO.""" + ORIGINAL = "original" + """ORIGINAL.""" + + +class ImageGenActionEnum(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ImageGenActionEnum.""" + + GENERATE = "generate" + """GENERATE.""" + EDIT = "edit" + """EDIT.""" + AUTO = "auto" + """AUTO.""" + + +class IncludeEnum(str, Enum, 
metaclass=CaseInsensitiveEnumMeta): + """Specify additional output data to include in the model response. Currently supported values + are: + + * `web_search_call.action.sources`: Include the sources of the web search tool call. + * `code_interpreter_call.outputs`: Includes the outputs of python code execution in code + interpreter tool call items. + * `computer_call_output.output.image_url`: Include image urls from the computer call output. + * `file_search_call.results`: Include the search results of the file search tool call. + * `message.input_image.image_url`: Include image urls from the input message. + * `message.output_text.logprobs`: Include logprobs with assistant messages. + * `reasoning.encrypted_content`: Includes an encrypted version of reasoning tokens in reasoning + item outputs. This enables reasoning items to be used in multi-turn conversations when using + the Responses API statelessly (like when the `store` parameter is set to `false`, or when an + organization is enrolled in the zero data retention program). 
+ """ + + FILE_SEARCH_CALL_RESULTS = "file_search_call.results" + """FILE_SEARCH_CALL_RESULTS.""" + WEB_SEARCH_CALL_RESULTS = "web_search_call.results" + """WEB_SEARCH_CALL_RESULTS.""" + WEB_SEARCH_CALL_ACTION_SOURCES = "web_search_call.action.sources" + """WEB_SEARCH_CALL_ACTION_SOURCES.""" + MESSAGE_INPUT_IMAGE_IMAGE_URL = "message.input_image.image_url" + """MESSAGE_INPUT_IMAGE_IMAGE_URL.""" + COMPUTER_CALL_OUTPUT_OUTPUT_IMAGE_URL = "computer_call_output.output.image_url" + """COMPUTER_CALL_OUTPUT_OUTPUT_IMAGE_URL.""" + CODE_INTERPRETER_CALL_OUTPUTS = "code_interpreter_call.outputs" + """CODE_INTERPRETER_CALL_OUTPUTS.""" + REASONING_ENCRYPTED_CONTENT = "reasoning.encrypted_content" + """REASONING_ENCRYPTED_CONTENT.""" + MESSAGE_OUTPUT_TEXT_LOGPROBS = "message.output_text.logprobs" + """MESSAGE_OUTPUT_TEXT_LOGPROBS.""" + MEMORY_SEARCH_CALL_RESULTS = "memory_search_call.results" + """MEMORY_SEARCH_CALL_RESULTS.""" + + +class InputFidelity(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Control how much effort the model will exert to match the style and features, especially facial + features, of input images. This parameter is only supported for ``gpt-image-1`` and + ``gpt-image-1.5`` and later models, unsupported for ``gpt-image-1-mini``. Supports ``high`` and + ``low``. Defaults to ``low``. 
+ """ + + HIGH = "high" + """HIGH.""" + LOW = "low" + """LOW.""" + + +class ItemFieldType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ItemFieldType.""" + + MESSAGE = "message" + """MESSAGE.""" + FUNCTION_CALL = "function_call" + """FUNCTION_CALL.""" + TOOL_SEARCH_CALL = "tool_search_call" + """TOOL_SEARCH_CALL.""" + TOOL_SEARCH_OUTPUT = "tool_search_output" + """TOOL_SEARCH_OUTPUT.""" + FUNCTION_CALL_OUTPUT = "function_call_output" + """FUNCTION_CALL_OUTPUT.""" + FILE_SEARCH_CALL = "file_search_call" + """FILE_SEARCH_CALL.""" + WEB_SEARCH_CALL = "web_search_call" + """WEB_SEARCH_CALL.""" + IMAGE_GENERATION_CALL = "image_generation_call" + """IMAGE_GENERATION_CALL.""" + COMPUTER_CALL = "computer_call" + """COMPUTER_CALL.""" + COMPUTER_CALL_OUTPUT = "computer_call_output" + """COMPUTER_CALL_OUTPUT.""" + REASONING = "reasoning" + """REASONING.""" + COMPACTION = "compaction" + """COMPACTION.""" + CODE_INTERPRETER_CALL = "code_interpreter_call" + """CODE_INTERPRETER_CALL.""" + LOCAL_SHELL_CALL = "local_shell_call" + """LOCAL_SHELL_CALL.""" + LOCAL_SHELL_CALL_OUTPUT = "local_shell_call_output" + """LOCAL_SHELL_CALL_OUTPUT.""" + SHELL_CALL = "shell_call" + """SHELL_CALL.""" + SHELL_CALL_OUTPUT = "shell_call_output" + """SHELL_CALL_OUTPUT.""" + APPLY_PATCH_CALL = "apply_patch_call" + """APPLY_PATCH_CALL.""" + APPLY_PATCH_CALL_OUTPUT = "apply_patch_call_output" + """APPLY_PATCH_CALL_OUTPUT.""" + MCP_LIST_TOOLS = "mcp_list_tools" + """MCP_LIST_TOOLS.""" + MCP_APPROVAL_REQUEST = "mcp_approval_request" + """MCP_APPROVAL_REQUEST.""" + MCP_APPROVAL_RESPONSE = "mcp_approval_response" + """MCP_APPROVAL_RESPONSE.""" + MCP_CALL = "mcp_call" + """MCP_CALL.""" + CUSTOM_TOOL_CALL = "custom_tool_call" + """CUSTOM_TOOL_CALL.""" + CUSTOM_TOOL_CALL_OUTPUT = "custom_tool_call_output" + """CUSTOM_TOOL_CALL_OUTPUT.""" + + +class ItemType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ItemType.""" + + MESSAGE = "message" + """MESSAGE.""" + OUTPUT_MESSAGE = 
"output_message" + """OUTPUT_MESSAGE.""" + FILE_SEARCH_CALL = "file_search_call" + """FILE_SEARCH_CALL.""" + COMPUTER_CALL = "computer_call" + """COMPUTER_CALL.""" + COMPUTER_CALL_OUTPUT = "computer_call_output" + """COMPUTER_CALL_OUTPUT.""" + WEB_SEARCH_CALL = "web_search_call" + """WEB_SEARCH_CALL.""" + FUNCTION_CALL = "function_call" + """FUNCTION_CALL.""" + FUNCTION_CALL_OUTPUT = "function_call_output" + """FUNCTION_CALL_OUTPUT.""" + TOOL_SEARCH_CALL = "tool_search_call" + """TOOL_SEARCH_CALL.""" + TOOL_SEARCH_OUTPUT = "tool_search_output" + """TOOL_SEARCH_OUTPUT.""" + REASONING = "reasoning" + """REASONING.""" + COMPACTION = "compaction" + """COMPACTION.""" + IMAGE_GENERATION_CALL = "image_generation_call" + """IMAGE_GENERATION_CALL.""" + CODE_INTERPRETER_CALL = "code_interpreter_call" + """CODE_INTERPRETER_CALL.""" + LOCAL_SHELL_CALL = "local_shell_call" + """LOCAL_SHELL_CALL.""" + LOCAL_SHELL_CALL_OUTPUT = "local_shell_call_output" + """LOCAL_SHELL_CALL_OUTPUT.""" + SHELL_CALL = "shell_call" + """SHELL_CALL.""" + SHELL_CALL_OUTPUT = "shell_call_output" + """SHELL_CALL_OUTPUT.""" + APPLY_PATCH_CALL = "apply_patch_call" + """APPLY_PATCH_CALL.""" + APPLY_PATCH_CALL_OUTPUT = "apply_patch_call_output" + """APPLY_PATCH_CALL_OUTPUT.""" + MCP_LIST_TOOLS = "mcp_list_tools" + """MCP_LIST_TOOLS.""" + MCP_APPROVAL_REQUEST = "mcp_approval_request" + """MCP_APPROVAL_REQUEST.""" + MCP_APPROVAL_RESPONSE = "mcp_approval_response" + """MCP_APPROVAL_RESPONSE.""" + MCP_CALL = "mcp_call" + """MCP_CALL.""" + CUSTOM_TOOL_CALL_OUTPUT = "custom_tool_call_output" + """CUSTOM_TOOL_CALL_OUTPUT.""" + CUSTOM_TOOL_CALL = "custom_tool_call" + """CUSTOM_TOOL_CALL.""" + ITEM_REFERENCE = "item_reference" + """ITEM_REFERENCE.""" + STRUCTURED_OUTPUTS = "structured_outputs" + """STRUCTURED_OUTPUTS.""" + OAUTH_CONSENT_REQUEST = "oauth_consent_request" + """OAUTH_CONSENT_REQUEST.""" + MEMORY_SEARCH_CALL = "memory_search_call" + """MEMORY_SEARCH_CALL.""" + WORKFLOW_ACTION = "workflow_action" + 
"""WORKFLOW_ACTION.""" + A2_A_PREVIEW_CALL = "a2a_preview_call" + """A2_A_PREVIEW_CALL.""" + A2_A_PREVIEW_CALL_OUTPUT = "a2a_preview_call_output" + """A2_A_PREVIEW_CALL_OUTPUT.""" + BING_GROUNDING_CALL = "bing_grounding_call" + """BING_GROUNDING_CALL.""" + BING_GROUNDING_CALL_OUTPUT = "bing_grounding_call_output" + """BING_GROUNDING_CALL_OUTPUT.""" + SHAREPOINT_GROUNDING_PREVIEW_CALL = "sharepoint_grounding_preview_call" + """SHAREPOINT_GROUNDING_PREVIEW_CALL.""" + SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT = "sharepoint_grounding_preview_call_output" + """SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT.""" + AZURE_AI_SEARCH_CALL = "azure_ai_search_call" + """AZURE_AI_SEARCH_CALL.""" + AZURE_AI_SEARCH_CALL_OUTPUT = "azure_ai_search_call_output" + """AZURE_AI_SEARCH_CALL_OUTPUT.""" + BING_CUSTOM_SEARCH_PREVIEW_CALL = "bing_custom_search_preview_call" + """BING_CUSTOM_SEARCH_PREVIEW_CALL.""" + BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT = "bing_custom_search_preview_call_output" + """BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT.""" + OPENAPI_CALL = "openapi_call" + """OPENAPI_CALL.""" + OPENAPI_CALL_OUTPUT = "openapi_call_output" + """OPENAPI_CALL_OUTPUT.""" + BROWSER_AUTOMATION_PREVIEW_CALL = "browser_automation_preview_call" + """BROWSER_AUTOMATION_PREVIEW_CALL.""" + BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT = "browser_automation_preview_call_output" + """BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT.""" + FABRIC_DATAAGENT_PREVIEW_CALL = "fabric_dataagent_preview_call" + """FABRIC_DATAAGENT_PREVIEW_CALL.""" + FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT = "fabric_dataagent_preview_call_output" + """FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT.""" + AZURE_FUNCTION_CALL = "azure_function_call" + """AZURE_FUNCTION_CALL.""" + AZURE_FUNCTION_CALL_OUTPUT = "azure_function_call_output" + """AZURE_FUNCTION_CALL_OUTPUT.""" + + +class LocalShellCallOutputStatusEnum(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of LocalShellCallOutputStatusEnum.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + 
COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class LocalShellCallStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of LocalShellCallStatus.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class MCPToolCallStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of MCPToolCallStatus.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + CALLING = "calling" + """CALLING.""" + FAILED = "failed" + """FAILED.""" + + +class MemoryItemKind(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Memory item kind.""" + + USER_PROFILE = "user_profile" + """User profile information extracted from conversations.""" + CHAT_SUMMARY = "chat_summary" + """Summary of chat conversations.""" + + +class MessageContentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of MessageContentType.""" + + INPUT_TEXT = "input_text" + """INPUT_TEXT.""" + OUTPUT_TEXT = "output_text" + """OUTPUT_TEXT.""" + TEXT = "text" + """TEXT.""" + SUMMARY_TEXT = "summary_text" + """SUMMARY_TEXT.""" + REASONING_TEXT = "reasoning_text" + """REASONING_TEXT.""" + REFUSAL = "refusal" + """REFUSAL.""" + INPUT_IMAGE = "input_image" + """INPUT_IMAGE.""" + COMPUTER_SCREENSHOT = "computer_screenshot" + """COMPUTER_SCREENSHOT.""" + INPUT_FILE = "input_file" + """INPUT_FILE.""" + + +class MessagePhase(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Labels an ``assistant`` message as intermediate commentary (``commentary``) or the final answer + (``final_answer``). For models like ``gpt-5.3-codex`` and beyond, when sending follow-up + requests, preserve and resend phase on all assistant messages — dropping it can degrade + performance. Not used for user messages. 
+ """ + + COMMENTARY = "commentary" + """COMMENTARY.""" + FINAL_ANSWER = "final_answer" + """FINAL_ANSWER.""" + + +class MessageRole(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of MessageRole.""" + + UNKNOWN = "unknown" + """UNKNOWN.""" + USER = "user" + """USER.""" + ASSISTANT = "assistant" + """ASSISTANT.""" + SYSTEM = "system" + """SYSTEM.""" + CRITIC = "critic" + """CRITIC.""" + DISCRIMINATOR = "discriminator" + """DISCRIMINATOR.""" + DEVELOPER = "developer" + """DEVELOPER.""" + TOOL = "tool" + """TOOL.""" + + +class MessageStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of MessageStatus.""" + + IN_PROGRESS = "in_progress" + """IN_PROGRESS.""" + COMPLETED = "completed" + """COMPLETED.""" + INCOMPLETE = "incomplete" + """INCOMPLETE.""" + + +class ModelIdsCompaction(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Model ID used to generate the response, like ``gpt-5`` or ``o3``. OpenAI offers a wide range of + models with different capabilities, performance characteristics, and price points. Refer to the + `model guide `_ to browse and compare available models. 
+ """ + + GPT5_4 = "gpt-5.4" + """GPT5_4.""" + GPT5_3_CHAT_LATEST = "gpt-5.3-chat-latest" + """GPT5_3_CHAT_LATEST.""" + GPT5_2 = "gpt-5.2" + """GPT5_2.""" + GPT5_2_2025_12_11 = "gpt-5.2-2025-12-11" + """GPT5_2_2025_12_11.""" + GPT5_2_CHAT_LATEST = "gpt-5.2-chat-latest" + """GPT5_2_CHAT_LATEST.""" + GPT5_2_PRO = "gpt-5.2-pro" + """GPT5_2_PRO.""" + GPT5_2_PRO2025_12_11 = "gpt-5.2-pro-2025-12-11" + """GPT5_2_PRO2025_12_11.""" + GPT5_1 = "gpt-5.1" + """GPT5_1.""" + GPT5_1_2025_11_13 = "gpt-5.1-2025-11-13" + """GPT5_1_2025_11_13.""" + GPT5_1_CODEX = "gpt-5.1-codex" + """GPT5_1_CODEX.""" + GPT5_1_MINI = "gpt-5.1-mini" + """GPT5_1_MINI.""" + GPT5_1_CHAT_LATEST = "gpt-5.1-chat-latest" + """GPT5_1_CHAT_LATEST.""" + GPT5 = "gpt-5" + """GPT5.""" + GPT5_MINI = "gpt-5-mini" + """GPT5_MINI.""" + GPT5_NANO = "gpt-5-nano" + """GPT5_NANO.""" + GPT5_2025_08_07 = "gpt-5-2025-08-07" + """GPT5_2025_08_07.""" + GPT5_MINI2025_08_07 = "gpt-5-mini-2025-08-07" + """GPT5_MINI2025_08_07.""" + GPT5_NANO2025_08_07 = "gpt-5-nano-2025-08-07" + """GPT5_NANO2025_08_07.""" + GPT5_CHAT_LATEST = "gpt-5-chat-latest" + """GPT5_CHAT_LATEST.""" + GPT4_1 = "gpt-4.1" + """GPT4_1.""" + GPT4_1_MINI = "gpt-4.1-mini" + """GPT4_1_MINI.""" + GPT4_1_NANO = "gpt-4.1-nano" + """GPT4_1_NANO.""" + GPT4_1_2025_04_14 = "gpt-4.1-2025-04-14" + """GPT4_1_2025_04_14.""" + GPT4_1_MINI2025_04_14 = "gpt-4.1-mini-2025-04-14" + """GPT4_1_MINI2025_04_14.""" + GPT4_1_NANO2025_04_14 = "gpt-4.1-nano-2025-04-14" + """GPT4_1_NANO2025_04_14.""" + O4_MINI = "o4-mini" + """O4_MINI.""" + O4_MINI2025_04_16 = "o4-mini-2025-04-16" + """O4_MINI2025_04_16.""" + O3 = "o3" + """O3.""" + O3_2025_04_16 = "o3-2025-04-16" + """O3_2025_04_16.""" + O3_MINI = "o3-mini" + """O3_MINI.""" + O3_MINI2025_01_31 = "o3-mini-2025-01-31" + """O3_MINI2025_01_31.""" + O1 = "o1" + """O1.""" + O1_2024_12_17 = "o1-2024-12-17" + """O1_2024_12_17.""" + O1_PREVIEW = "o1-preview" + """O1_PREVIEW.""" + O1_PREVIEW2024_09_12 = "o1-preview-2024-09-12" + 
"""O1_PREVIEW2024_09_12.""" + O1_MINI = "o1-mini" + """O1_MINI.""" + O1_MINI2024_09_12 = "o1-mini-2024-09-12" + """O1_MINI2024_09_12.""" + GPT4_O = "gpt-4o" + """GPT4_O.""" + GPT4_O2024_11_20 = "gpt-4o-2024-11-20" + """GPT4_O2024_11_20.""" + GPT4_O2024_08_06 = "gpt-4o-2024-08-06" + """GPT4_O2024_08_06.""" + GPT4_O2024_05_13 = "gpt-4o-2024-05-13" + """GPT4_O2024_05_13.""" + GPT4_O_AUDIO_PREVIEW = "gpt-4o-audio-preview" + """GPT4_O_AUDIO_PREVIEW.""" + GPT4_O_AUDIO_PREVIEW2024_10_01 = "gpt-4o-audio-preview-2024-10-01" + """GPT4_O_AUDIO_PREVIEW2024_10_01.""" + GPT4_O_AUDIO_PREVIEW2024_12_17 = "gpt-4o-audio-preview-2024-12-17" + """GPT4_O_AUDIO_PREVIEW2024_12_17.""" + GPT4_O_AUDIO_PREVIEW2025_06_03 = "gpt-4o-audio-preview-2025-06-03" + """GPT4_O_AUDIO_PREVIEW2025_06_03.""" + GPT4_O_MINI_AUDIO_PREVIEW = "gpt-4o-mini-audio-preview" + """GPT4_O_MINI_AUDIO_PREVIEW.""" + GPT4_O_MINI_AUDIO_PREVIEW2024_12_17 = "gpt-4o-mini-audio-preview-2024-12-17" + """GPT4_O_MINI_AUDIO_PREVIEW2024_12_17.""" + GPT4_O_SEARCH_PREVIEW = "gpt-4o-search-preview" + """GPT4_O_SEARCH_PREVIEW.""" + GPT4_O_MINI_SEARCH_PREVIEW = "gpt-4o-mini-search-preview" + """GPT4_O_MINI_SEARCH_PREVIEW.""" + GPT4_O_SEARCH_PREVIEW2025_03_11 = "gpt-4o-search-preview-2025-03-11" + """GPT4_O_SEARCH_PREVIEW2025_03_11.""" + GPT4_O_MINI_SEARCH_PREVIEW2025_03_11 = "gpt-4o-mini-search-preview-2025-03-11" + """GPT4_O_MINI_SEARCH_PREVIEW2025_03_11.""" + CHATGPT4_O_LATEST = "chatgpt-4o-latest" + """CHATGPT4_O_LATEST.""" + CODEX_MINI_LATEST = "codex-mini-latest" + """CODEX_MINI_LATEST.""" + GPT4_O_MINI = "gpt-4o-mini" + """GPT4_O_MINI.""" + GPT4_O_MINI2024_07_18 = "gpt-4o-mini-2024-07-18" + """GPT4_O_MINI2024_07_18.""" + GPT4_TURBO = "gpt-4-turbo" + """GPT4_TURBO.""" + GPT4_TURBO2024_04_09 = "gpt-4-turbo-2024-04-09" + """GPT4_TURBO2024_04_09.""" + GPT4_0125_PREVIEW = "gpt-4-0125-preview" + """GPT4_0125_PREVIEW.""" + GPT4_TURBO_PREVIEW = "gpt-4-turbo-preview" + """GPT4_TURBO_PREVIEW.""" + GPT4_1106_PREVIEW = "gpt-4-1106-preview" + 
"""GPT4_1106_PREVIEW.""" + GPT4_VISION_PREVIEW = "gpt-4-vision-preview" + """GPT4_VISION_PREVIEW.""" + GPT4 = "gpt-4" + """GPT4.""" + GPT4_0314 = "gpt-4-0314" + """GPT4_0314.""" + GPT4_0613 = "gpt-4-0613" + """GPT4_0613.""" + GPT4_32_K = "gpt-4-32k" + """GPT4_32_K.""" + GPT4_32_K0314 = "gpt-4-32k-0314" + """GPT4_32_K0314.""" + GPT4_32_K0613 = "gpt-4-32k-0613" + """GPT4_32_K0613.""" + GPT3_5_TURBO = "gpt-3.5-turbo" + """GPT3_5_TURBO.""" + GPT3_5_TURBO16_K = "gpt-3.5-turbo-16k" + """GPT3_5_TURBO16_K.""" + GPT3_5_TURBO0301 = "gpt-3.5-turbo-0301" + """GPT3_5_TURBO0301.""" + GPT3_5_TURBO0613 = "gpt-3.5-turbo-0613" + """GPT3_5_TURBO0613.""" + GPT3_5_TURBO1106 = "gpt-3.5-turbo-1106" + """GPT3_5_TURBO1106.""" + GPT3_5_TURBO0125 = "gpt-3.5-turbo-0125" + """GPT3_5_TURBO0125.""" + GPT3_5_TURBO16_K0613 = "gpt-3.5-turbo-16k-0613" + """GPT3_5_TURBO16_K0613.""" + O1_PRO = "o1-pro" + """O1_PRO.""" + O1_PRO2025_03_19 = "o1-pro-2025-03-19" + """O1_PRO2025_03_19.""" + O3_PRO = "o3-pro" + """O3_PRO.""" + O3_PRO2025_06_10 = "o3-pro-2025-06-10" + """O3_PRO2025_06_10.""" + O3_DEEP_RESEARCH = "o3-deep-research" + """O3_DEEP_RESEARCH.""" + O3_DEEP_RESEARCH2025_06_26 = "o3-deep-research-2025-06-26" + """O3_DEEP_RESEARCH2025_06_26.""" + O4_MINI_DEEP_RESEARCH = "o4-mini-deep-research" + """O4_MINI_DEEP_RESEARCH.""" + O4_MINI_DEEP_RESEARCH2025_06_26 = "o4-mini-deep-research-2025-06-26" + """O4_MINI_DEEP_RESEARCH2025_06_26.""" + COMPUTER_USE_PREVIEW = "computer-use-preview" + """COMPUTER_USE_PREVIEW.""" + COMPUTER_USE_PREVIEW2025_03_11 = "computer-use-preview-2025-03-11" + """COMPUTER_USE_PREVIEW2025_03_11.""" + GPT5_CODEX = "gpt-5-codex" + """GPT5_CODEX.""" + GPT5_PRO = "gpt-5-pro" + """GPT5_PRO.""" + GPT5_PRO2025_10_06 = "gpt-5-pro-2025-10-06" + """GPT5_PRO2025_10_06.""" + GPT5_1_CODEX_MAX = "gpt-5.1-codex-max" + """GPT5_1_CODEX_MAX.""" + + +class OpenApiAuthType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Authentication type for OpenApi endpoint. 
Allowed types are: + + * Anonymous (no authentication required) + * Project Connection (requires project_connection_id to endpoint, as setup in AI Foundry) + * Managed_Identity (requires audience for identity based auth). + """ + + ANONYMOUS = "anonymous" + """ANONYMOUS.""" + PROJECT_CONNECTION = "project_connection" + """PROJECT_CONNECTION.""" + MANAGED_IDENTITY = "managed_identity" + """MANAGED_IDENTITY.""" + + +class OutputContentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of OutputContentType.""" + + OUTPUT_TEXT = "output_text" + """OUTPUT_TEXT.""" + REFUSAL = "refusal" + """REFUSAL.""" + REASONING_TEXT = "reasoning_text" + """REASONING_TEXT.""" + + +class OutputItemType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of OutputItemType.""" + + OUTPUT_MESSAGE = "output_message" + """OUTPUT_MESSAGE.""" + FILE_SEARCH_CALL = "file_search_call" + """FILE_SEARCH_CALL.""" + FUNCTION_CALL = "function_call" + """FUNCTION_CALL.""" + WEB_SEARCH_CALL = "web_search_call" + """WEB_SEARCH_CALL.""" + COMPUTER_CALL = "computer_call" + """COMPUTER_CALL.""" + REASONING = "reasoning" + """REASONING.""" + TOOL_SEARCH_CALL = "tool_search_call" + """TOOL_SEARCH_CALL.""" + TOOL_SEARCH_OUTPUT = "tool_search_output" + """TOOL_SEARCH_OUTPUT.""" + COMPACTION = "compaction" + """COMPACTION.""" + IMAGE_GENERATION_CALL = "image_generation_call" + """IMAGE_GENERATION_CALL.""" + CODE_INTERPRETER_CALL = "code_interpreter_call" + """CODE_INTERPRETER_CALL.""" + LOCAL_SHELL_CALL = "local_shell_call" + """LOCAL_SHELL_CALL.""" + SHELL_CALL = "shell_call" + """SHELL_CALL.""" + SHELL_CALL_OUTPUT = "shell_call_output" + """SHELL_CALL_OUTPUT.""" + APPLY_PATCH_CALL = "apply_patch_call" + """APPLY_PATCH_CALL.""" + APPLY_PATCH_CALL_OUTPUT = "apply_patch_call_output" + """APPLY_PATCH_CALL_OUTPUT.""" + MCP_CALL = "mcp_call" + """MCP_CALL.""" + MCP_LIST_TOOLS = "mcp_list_tools" + """MCP_LIST_TOOLS.""" + MCP_APPROVAL_REQUEST = "mcp_approval_request" + """MCP_APPROVAL_REQUEST.""" + 
CUSTOM_TOOL_CALL = "custom_tool_call" + """CUSTOM_TOOL_CALL.""" + MESSAGE = "message" + """MESSAGE.""" + COMPUTER_CALL_OUTPUT = "computer_call_output" + """COMPUTER_CALL_OUTPUT.""" + FUNCTION_CALL_OUTPUT = "function_call_output" + """FUNCTION_CALL_OUTPUT.""" + LOCAL_SHELL_CALL_OUTPUT = "local_shell_call_output" + """LOCAL_SHELL_CALL_OUTPUT.""" + MCP_APPROVAL_RESPONSE = "mcp_approval_response" + """MCP_APPROVAL_RESPONSE.""" + CUSTOM_TOOL_CALL_OUTPUT = "custom_tool_call_output" + """CUSTOM_TOOL_CALL_OUTPUT.""" + STRUCTURED_OUTPUTS = "structured_outputs" + """STRUCTURED_OUTPUTS.""" + OAUTH_CONSENT_REQUEST = "oauth_consent_request" + """OAUTH_CONSENT_REQUEST.""" + MEMORY_SEARCH_CALL = "memory_search_call" + """MEMORY_SEARCH_CALL.""" + WORKFLOW_ACTION = "workflow_action" + """WORKFLOW_ACTION.""" + A2_A_PREVIEW_CALL = "a2a_preview_call" + """A2_A_PREVIEW_CALL.""" + A2_A_PREVIEW_CALL_OUTPUT = "a2a_preview_call_output" + """A2_A_PREVIEW_CALL_OUTPUT.""" + BING_GROUNDING_CALL = "bing_grounding_call" + """BING_GROUNDING_CALL.""" + BING_GROUNDING_CALL_OUTPUT = "bing_grounding_call_output" + """BING_GROUNDING_CALL_OUTPUT.""" + SHAREPOINT_GROUNDING_PREVIEW_CALL = "sharepoint_grounding_preview_call" + """SHAREPOINT_GROUNDING_PREVIEW_CALL.""" + SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT = "sharepoint_grounding_preview_call_output" + """SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT.""" + AZURE_AI_SEARCH_CALL = "azure_ai_search_call" + """AZURE_AI_SEARCH_CALL.""" + AZURE_AI_SEARCH_CALL_OUTPUT = "azure_ai_search_call_output" + """AZURE_AI_SEARCH_CALL_OUTPUT.""" + BING_CUSTOM_SEARCH_PREVIEW_CALL = "bing_custom_search_preview_call" + """BING_CUSTOM_SEARCH_PREVIEW_CALL.""" + BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT = "bing_custom_search_preview_call_output" + """BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT.""" + OPENAPI_CALL = "openapi_call" + """OPENAPI_CALL.""" + OPENAPI_CALL_OUTPUT = "openapi_call_output" + """OPENAPI_CALL_OUTPUT.""" + BROWSER_AUTOMATION_PREVIEW_CALL = 
"browser_automation_preview_call" + """BROWSER_AUTOMATION_PREVIEW_CALL.""" + BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT = "browser_automation_preview_call_output" + """BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT.""" + FABRIC_DATAAGENT_PREVIEW_CALL = "fabric_dataagent_preview_call" + """FABRIC_DATAAGENT_PREVIEW_CALL.""" + FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT = "fabric_dataagent_preview_call_output" + """FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT.""" + AZURE_FUNCTION_CALL = "azure_function_call" + """AZURE_FUNCTION_CALL.""" + AZURE_FUNCTION_CALL_OUTPUT = "azure_function_call_output" + """AZURE_FUNCTION_CALL_OUTPUT.""" + + +class OutputMessageContentType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of OutputMessageContentType.""" + + OUTPUT_TEXT = "output_text" + """OUTPUT_TEXT.""" + REFUSAL = "refusal" + """REFUSAL.""" + + +class PageOrder(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of PageOrder.""" + + ASC = "asc" + """ASC.""" + DESC = "desc" + """DESC.""" + + +class RankerVersionType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of RankerVersionType.""" + + AUTO = "auto" + """AUTO.""" + DEFAULT2024_11_15 = "default-2024-11-15" + """DEFAULT2024_11_15.""" + + +class RealtimeMcpErrorType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of RealtimeMcpErrorType.""" + + PROTOCOL_ERROR = "protocol_error" + """PROTOCOL_ERROR.""" + TOOL_EXECUTION_ERROR = "tool_execution_error" + """TOOL_EXECUTION_ERROR.""" + HTTP_ERROR = "http_error" + """HTTP_ERROR.""" + + +class ResponseErrorCode(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """The error code for the response.""" + + SERVER_ERROR = "server_error" + """SERVER_ERROR.""" + RATE_LIMIT_EXCEEDED = "rate_limit_exceeded" + """RATE_LIMIT_EXCEEDED.""" + INVALID_PROMPT = "invalid_prompt" + """INVALID_PROMPT.""" + VECTOR_STORE_TIMEOUT = "vector_store_timeout" + """VECTOR_STORE_TIMEOUT.""" + INVALID_IMAGE = "invalid_image" + """INVALID_IMAGE.""" + INVALID_IMAGE_FORMAT = "invalid_image_format" + 
"""INVALID_IMAGE_FORMAT.""" + INVALID_BASE64_IMAGE = "invalid_base64_image" + """INVALID_BASE64_IMAGE.""" + INVALID_IMAGE_URL = "invalid_image_url" + """INVALID_IMAGE_URL.""" + IMAGE_TOO_LARGE = "image_too_large" + """IMAGE_TOO_LARGE.""" + IMAGE_TOO_SMALL = "image_too_small" + """IMAGE_TOO_SMALL.""" + IMAGE_PARSE_ERROR = "image_parse_error" + """IMAGE_PARSE_ERROR.""" + IMAGE_CONTENT_POLICY_VIOLATION = "image_content_policy_violation" + """IMAGE_CONTENT_POLICY_VIOLATION.""" + INVALID_IMAGE_MODE = "invalid_image_mode" + """INVALID_IMAGE_MODE.""" + IMAGE_FILE_TOO_LARGE = "image_file_too_large" + """IMAGE_FILE_TOO_LARGE.""" + UNSUPPORTED_IMAGE_MEDIA_TYPE = "unsupported_image_media_type" + """UNSUPPORTED_IMAGE_MEDIA_TYPE.""" + EMPTY_IMAGE_FILE = "empty_image_file" + """EMPTY_IMAGE_FILE.""" + FAILED_TO_DOWNLOAD_IMAGE = "failed_to_download_image" + """FAILED_TO_DOWNLOAD_IMAGE.""" + IMAGE_FILE_NOT_FOUND = "image_file_not_found" + """IMAGE_FILE_NOT_FOUND.""" + + +class ResponseStreamEventType(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Type of ResponseStreamEventType.""" + + RESPONSE_AUDIO_DELTA = "response.audio.delta" + """RESPONSE_AUDIO_DELTA.""" + RESPONSE_AUDIO_DONE = "response.audio.done" + """RESPONSE_AUDIO_DONE.""" + RESPONSE_AUDIO_TRANSCRIPT_DELTA = "response.audio.transcript.delta" + """RESPONSE_AUDIO_TRANSCRIPT_DELTA.""" + RESPONSE_AUDIO_TRANSCRIPT_DONE = "response.audio.transcript.done" + """RESPONSE_AUDIO_TRANSCRIPT_DONE.""" + RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA = "response.code_interpreter_call_code.delta" + """RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA.""" + RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE = "response.code_interpreter_call_code.done" + """RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE.""" + RESPONSE_CODE_INTERPRETER_CALL_COMPLETED = "response.code_interpreter_call.completed" + """RESPONSE_CODE_INTERPRETER_CALL_COMPLETED.""" + RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS = "response.code_interpreter_call.in_progress" + 
"""RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS.""" + RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING = "response.code_interpreter_call.interpreting" + """RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING.""" + RESPONSE_COMPLETED = "response.completed" + """RESPONSE_COMPLETED.""" + RESPONSE_CONTENT_PART_ADDED = "response.content_part.added" + """RESPONSE_CONTENT_PART_ADDED.""" + RESPONSE_CONTENT_PART_DONE = "response.content_part.done" + """RESPONSE_CONTENT_PART_DONE.""" + RESPONSE_CREATED = "response.created" + """RESPONSE_CREATED.""" + ERROR = "error" + """ERROR.""" + RESPONSE_FILE_SEARCH_CALL_COMPLETED = "response.file_search_call.completed" + """RESPONSE_FILE_SEARCH_CALL_COMPLETED.""" + RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS = "response.file_search_call.in_progress" + """RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS.""" + RESPONSE_FILE_SEARCH_CALL_SEARCHING = "response.file_search_call.searching" + """RESPONSE_FILE_SEARCH_CALL_SEARCHING.""" + RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA = "response.function_call_arguments.delta" + """RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA.""" + RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE = "response.function_call_arguments.done" + """RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE.""" + RESPONSE_IN_PROGRESS = "response.in_progress" + """RESPONSE_IN_PROGRESS.""" + RESPONSE_FAILED = "response.failed" + """RESPONSE_FAILED.""" + RESPONSE_INCOMPLETE = "response.incomplete" + """RESPONSE_INCOMPLETE.""" + RESPONSE_OUTPUT_ITEM_ADDED = "response.output_item.added" + """RESPONSE_OUTPUT_ITEM_ADDED.""" + RESPONSE_OUTPUT_ITEM_DONE = "response.output_item.done" + """RESPONSE_OUTPUT_ITEM_DONE.""" + RESPONSE_REASONING_SUMMARY_PART_ADDED = "response.reasoning_summary_part.added" + """RESPONSE_REASONING_SUMMARY_PART_ADDED.""" + RESPONSE_REASONING_SUMMARY_PART_DONE = "response.reasoning_summary_part.done" + """RESPONSE_REASONING_SUMMARY_PART_DONE.""" + RESPONSE_REASONING_SUMMARY_TEXT_DELTA = "response.reasoning_summary_text.delta" + """RESPONSE_REASONING_SUMMARY_TEXT_DELTA.""" + 
+    RESPONSE_REASONING_SUMMARY_TEXT_DONE = "response.reasoning_summary_text.done"
+    """RESPONSE_REASONING_SUMMARY_TEXT_DONE."""
+    RESPONSE_REASONING_TEXT_DELTA = "response.reasoning_text.delta"
+    """RESPONSE_REASONING_TEXT_DELTA."""
+    RESPONSE_REASONING_TEXT_DONE = "response.reasoning_text.done"
+    """RESPONSE_REASONING_TEXT_DONE."""
+    RESPONSE_REFUSAL_DELTA = "response.refusal.delta"
+    """RESPONSE_REFUSAL_DELTA."""
+    RESPONSE_REFUSAL_DONE = "response.refusal.done"
+    """RESPONSE_REFUSAL_DONE."""
+    RESPONSE_OUTPUT_TEXT_DELTA = "response.output_text.delta"
+    """RESPONSE_OUTPUT_TEXT_DELTA."""
+    RESPONSE_OUTPUT_TEXT_DONE = "response.output_text.done"
+    """RESPONSE_OUTPUT_TEXT_DONE."""
+    RESPONSE_WEB_SEARCH_CALL_COMPLETED = "response.web_search_call.completed"
+    """RESPONSE_WEB_SEARCH_CALL_COMPLETED."""
+    RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS = "response.web_search_call.in_progress"
+    """RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS."""
+    RESPONSE_WEB_SEARCH_CALL_SEARCHING = "response.web_search_call.searching"
+    """RESPONSE_WEB_SEARCH_CALL_SEARCHING."""
+    RESPONSE_IMAGE_GENERATION_CALL_COMPLETED = "response.image_generation_call.completed"
+    """RESPONSE_IMAGE_GENERATION_CALL_COMPLETED."""
+    RESPONSE_IMAGE_GENERATION_CALL_GENERATING = "response.image_generation_call.generating"
+    """RESPONSE_IMAGE_GENERATION_CALL_GENERATING."""
+    RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS = "response.image_generation_call.in_progress"
+    """RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS."""
+    RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE = "response.image_generation_call.partial_image"
+    """RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE."""
+    RESPONSE_MCP_CALL_ARGUMENTS_DELTA = "response.mcp_call_arguments.delta"
+    """RESPONSE_MCP_CALL_ARGUMENTS_DELTA."""
+    RESPONSE_MCP_CALL_ARGUMENTS_DONE = "response.mcp_call_arguments.done"
+    """RESPONSE_MCP_CALL_ARGUMENTS_DONE."""
+    RESPONSE_MCP_CALL_COMPLETED = "response.mcp_call.completed"
+    """RESPONSE_MCP_CALL_COMPLETED."""
+    RESPONSE_MCP_CALL_FAILED = "response.mcp_call.failed"
+    """RESPONSE_MCP_CALL_FAILED."""
+    RESPONSE_MCP_CALL_IN_PROGRESS = "response.mcp_call.in_progress"
+    """RESPONSE_MCP_CALL_IN_PROGRESS."""
+    RESPONSE_MCP_LIST_TOOLS_COMPLETED = "response.mcp_list_tools.completed"
+    """RESPONSE_MCP_LIST_TOOLS_COMPLETED."""
+    RESPONSE_MCP_LIST_TOOLS_FAILED = "response.mcp_list_tools.failed"
+    """RESPONSE_MCP_LIST_TOOLS_FAILED."""
+    RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS = "response.mcp_list_tools.in_progress"
+    """RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS."""
+    RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED = "response.output_text.annotation.added"
+    """RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED."""
+    RESPONSE_QUEUED = "response.queued"
+    """RESPONSE_QUEUED."""
+    RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA = "response.custom_tool_call_input.delta"
+    """RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA."""
+    RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE = "response.custom_tool_call_input.done"
+    """RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE."""
+
+
+class SearchContentType(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Type of SearchContentType."""
+
+    TEXT = "text"
+    """TEXT."""
+    IMAGE = "image"
+    """IMAGE."""
+
+
+class SearchContextSize(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Type of SearchContextSize."""
+
+    LOW = "low"
+    """LOW."""
+    MEDIUM = "medium"
+    """MEDIUM."""
+    HIGH = "high"
+    """HIGH."""
+
+
+class TextResponseFormatConfigurationType(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Type of TextResponseFormatConfigurationType."""
+
+    TEXT = "text"
+    """TEXT."""
+    JSON_SCHEMA = "json_schema"
+    """JSON_SCHEMA."""
+    JSON_OBJECT = "json_object"
+    """JSON_OBJECT."""
+
+
+class ToolCallStatus(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """The status of a tool call."""
+
+    IN_PROGRESS = "in_progress"
+    """IN_PROGRESS."""
+    COMPLETED = "completed"
+    """COMPLETED."""
+    INCOMPLETE = "incomplete"
+    """INCOMPLETE."""
+    FAILED = "failed"
+    """FAILED."""
+
+
+class ToolChoiceOptions(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Tool choice mode."""
+
+    NONE = "none"
+    """NONE."""
+    AUTO = "auto"
+    """AUTO."""
+    REQUIRED = "required"
+    """REQUIRED."""
+
+
+class ToolChoiceParamType(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Type of ToolChoiceParamType."""
+
+    ALLOWED_TOOLS = "allowed_tools"
+    """ALLOWED_TOOLS."""
+    FUNCTION = "function"
+    """FUNCTION."""
+    MCP = "mcp"
+    """MCP."""
+    CUSTOM = "custom"
+    """CUSTOM."""
+    APPLY_PATCH = "apply_patch"
+    """APPLY_PATCH."""
+    SHELL = "shell"
+    """SHELL."""
+    FILE_SEARCH = "file_search"
+    """FILE_SEARCH."""
+    WEB_SEARCH_PREVIEW = "web_search_preview"
+    """WEB_SEARCH_PREVIEW."""
+    COMPUTER_USE_PREVIEW = "computer_use_preview"
+    """COMPUTER_USE_PREVIEW."""
+    WEB_SEARCH_PREVIEW2025_03_11 = "web_search_preview_2025_03_11"
+    """WEB_SEARCH_PREVIEW2025_03_11."""
+    IMAGE_GENERATION = "image_generation"
+    """IMAGE_GENERATION."""
+    CODE_INTERPRETER = "code_interpreter"
+    """CODE_INTERPRETER."""
+    COMPUTER = "computer"
+    """COMPUTER."""
+    COMPUTER_USE = "computer_use"
+    """COMPUTER_USE."""
+
+
+class ToolSearchExecutionType(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Type of ToolSearchExecutionType."""
+
+    SERVER = "server"
+    """SERVER."""
+    CLIENT = "client"
+    """CLIENT."""
+
+
+class ToolType(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Type of ToolType."""
+
+    FUNCTION = "function"
+    """FUNCTION."""
+    FILE_SEARCH = "file_search"
+    """FILE_SEARCH."""
+    COMPUTER = "computer"
+    """COMPUTER."""
+    COMPUTER_USE_PREVIEW = "computer_use_preview"
+    """COMPUTER_USE_PREVIEW."""
+    WEB_SEARCH = "web_search"
+    """WEB_SEARCH."""
+    MCP = "mcp"
+    """MCP."""
+    CODE_INTERPRETER = "code_interpreter"
+    """CODE_INTERPRETER."""
+    IMAGE_GENERATION = "image_generation"
+    """IMAGE_GENERATION."""
+    LOCAL_SHELL = "local_shell"
+    """LOCAL_SHELL."""
+    SHELL = "shell"
+    """SHELL."""
+    CUSTOM = "custom"
+    """CUSTOM."""
+    NAMESPACE = "namespace"
+    """NAMESPACE."""
+    TOOL_SEARCH = "tool_search"
+    """TOOL_SEARCH."""
+    WEB_SEARCH_PREVIEW = "web_search_preview"
"""WEB_SEARCH_PREVIEW.""" + APPLY_PATCH = "apply_patch" + """APPLY_PATCH.""" + A2_A_PREVIEW = "a2a_preview" + """A2_A_PREVIEW.""" + BING_CUSTOM_SEARCH_PREVIEW = "bing_custom_search_preview" + """BING_CUSTOM_SEARCH_PREVIEW.""" + BROWSER_AUTOMATION_PREVIEW = "browser_automation_preview" + """BROWSER_AUTOMATION_PREVIEW.""" + FABRIC_DATAAGENT_PREVIEW = "fabric_dataagent_preview" + """FABRIC_DATAAGENT_PREVIEW.""" + SHAREPOINT_GROUNDING_PREVIEW = "sharepoint_grounding_preview" + """SHAREPOINT_GROUNDING_PREVIEW.""" + MEMORY_SEARCH_PREVIEW = "memory_search_preview" + """MEMORY_SEARCH_PREVIEW.""" + AZURE_AI_SEARCH = "azure_ai_search" + """AZURE_AI_SEARCH.""" + AZURE_FUNCTION = "azure_function" + """AZURE_FUNCTION.""" + BING_GROUNDING = "bing_grounding" + """BING_GROUNDING.""" + CAPTURE_STRUCTURED_OUTPUTS = "capture_structured_outputs" + """CAPTURE_STRUCTURED_OUTPUTS.""" + OPENAPI = "openapi" + """OPENAPI.""" + MEMORY_SEARCH = "memory_search" + """MEMORY_SEARCH.""" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_models.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_models.py new file mode 100644 index 000000000000..da5b679f971f --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_models.py @@ -0,0 +1,17869 @@ +# pylint: disable=line-too-long,useless-suppression,too-many-lines +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# Code generated by Microsoft (R) Python Code Generator. +# Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+# --------------------------------------------------------------------------
+# pylint: disable=useless-super-delegation
+
+import datetime
+from typing import Any, Literal, Mapping, Optional, TYPE_CHECKING, Union, overload
+
+from .._utils.model_base import Model as _Model, rest_discriminator, rest_field
+from ._enums import (
+    AnnotationType,
+    ApplyPatchFileOperationType,
+    ApplyPatchOperationParamType,
+    ComputerActionType,
+    ContainerNetworkPolicyParamType,
+    ContainerSkillType,
+    CustomToolParamFormatType,
+    FunctionAndCustomToolCallOutputType,
+    FunctionShellCallEnvironmentType,
+    FunctionShellCallItemParamEnvironmentType,
+    FunctionShellCallOutputOutcomeParamType,
+    FunctionShellCallOutputOutcomeType,
+    FunctionShellToolParamEnvironmentType,
+    ItemFieldType,
+    ItemType,
+    MemoryItemKind,
+    MessageContentType,
+    OpenApiAuthType,
+    OutputContentType,
+    OutputItemType,
+    OutputMessageContentType,
+    RealtimeMcpErrorType,
+    ResponseStreamEventType,
+    TextResponseFormatConfigurationType,
+    ToolChoiceParamType,
+    ToolType,
+)
+
+if TYPE_CHECKING:
+    from .. import _types, models as _models
+
+
+class Tool(_Model):
+    """A tool that can be used to generate a response.
+
+    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
+    A2APreviewTool, ApplyPatchToolParam, AzureAISearchTool, AzureFunctionTool,
+    BingCustomSearchPreviewTool, BingGroundingTool, BrowserAutomationPreviewTool,
+    CaptureStructuredOutputsTool, CodeInterpreterTool, ComputerTool, ComputerUsePreviewTool,
+    CustomToolParam, MicrosoftFabricPreviewTool, FileSearchTool, FunctionTool, ImageGenTool,
+    LocalShellToolParam, MCPTool, MemorySearchTool, MemorySearchPreviewTool, NamespaceToolParam,
+    OpenApiTool, SharepointPreviewTool, FunctionShellToolParam, ToolSearchToolParam, WebSearchTool,
+    WebSearchPreviewTool
+
+    :ivar type: Required. Known values are: "function", "file_search", "computer",
+     "computer_use_preview", "web_search", "mcp", "code_interpreter", "image_generation",
+     "local_shell", "shell", "custom", "namespace", "tool_search", "web_search_preview",
+     "apply_patch", "a2a_preview", "bing_custom_search_preview", "browser_automation_preview",
+     "fabric_dataagent_preview", "sharepoint_grounding_preview", "memory_search_preview",
+     "azure_ai_search", "azure_function", "bing_grounding", "capture_structured_outputs", "openapi",
+     and "memory_search".
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolType
+    """
+
+    __mapping__: dict[str, _Model] = {}
+    type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])
+    """Required. Known values are: \"function\", \"file_search\", \"computer\",
+    \"computer_use_preview\", \"web_search\", \"mcp\", \"code_interpreter\", \"image_generation\",
+    \"local_shell\", \"shell\", \"custom\", \"namespace\", \"tool_search\", \"web_search_preview\",
+    \"apply_patch\", \"a2a_preview\", \"bing_custom_search_preview\",
+    \"browser_automation_preview\", \"fabric_dataagent_preview\", \"sharepoint_grounding_preview\",
+    \"memory_search_preview\", \"azure_ai_search\", \"azure_function\", \"bing_grounding\",
+    \"capture_structured_outputs\", \"openapi\", and \"memory_search\"."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        type: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+
+class A2APreviewTool(Tool, discriminator="a2a_preview"):
+    """An agent implementing the A2A protocol.
+
+    :ivar type: The type of the tool. Always ``"a2a_preview``. Required. A2_A_PREVIEW.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.A2_A_PREVIEW
+    :ivar base_url: Base URL of the agent.
+    :vartype base_url: str
+    :ivar agent_card_path: The path to the agent card relative to the ``base_url``. If not
+     provided, defaults to ``/.well-known/agent-card.json``.
+    :vartype agent_card_path: str
+    :ivar project_connection_id: The connection ID in the project for the A2A server. The
+     connection stores authentication and other connection details needed to connect to the A2A
+     server.
+    :vartype project_connection_id: str
+    """
+
+    type: Literal[ToolType.A2_A_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the tool. Always ``\"a2a_preview``. Required. A2_A_PREVIEW."""
+    base_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Base URL of the agent."""
+    agent_card_path: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The path to the agent card relative to the ``base_url``. If not provided, defaults to
+    ``/.well-known/agent-card.json``."""
+    project_connection_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The connection ID in the project for the A2A server. The connection stores authentication and
+    other connection details needed to connect to the A2A server."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        base_url: Optional[str] = None,
+        agent_card_path: Optional[str] = None,
+        project_connection_id: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ToolType.A2_A_PREVIEW  # type: ignore
+
+
+class OutputItem(_Model):
+    """OutputItem.
+
+    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
+    A2AToolCall, A2AToolCallOutput, OutputItemApplyPatchToolCall,
+    OutputItemApplyPatchToolCallOutput, AzureAISearchToolCall, AzureAISearchToolCallOutput,
+    AzureFunctionToolCall, AzureFunctionToolCallOutput, BingCustomSearchToolCall,
+    BingCustomSearchToolCallOutput, BingGroundingToolCall, BingGroundingToolCallOutput,
+    BrowserAutomationToolCall, BrowserAutomationToolCallOutput, OutputItemCodeInterpreterToolCall,
+    OutputItemCompactionBody, OutputItemComputerToolCall, OutputItemComputerToolCallOutput,
+    OutputItemCustomToolCall, OutputItemCustomToolCallOutput, FabricDataAgentToolCall,
+    FabricDataAgentToolCallOutput, OutputItemFileSearchToolCall, OutputItemFunctionToolCall,
+    OutputItemFunctionToolCallOutput, OutputItemImageGenToolCall, OutputItemLocalShellToolCall,
+    OutputItemLocalShellToolCallOutput, OutputItemMcpApprovalRequest,
+    OutputItemMcpApprovalResponseResource, OutputItemMcpToolCall, OutputItemMcpListTools,
+    MemorySearchToolCallItemResource, OutputItemMessage, OAuthConsentRequestOutputItem,
+    OpenApiToolCall, OpenApiToolCallOutput, OutputItemOutputMessage, OutputItemReasoningItem,
+    SharepointGroundingToolCall, SharepointGroundingToolCallOutput, OutputItemFunctionShellCall,
+    OutputItemFunctionShellCallOutput, StructuredOutputsOutputItem, OutputItemToolSearchCall,
+    OutputItemToolSearchOutput, OutputItemWebSearchToolCall, WorkflowActionOutputItem
+
+    :ivar type: Required. Known values are: "output_message", "file_search_call", "function_call",
+     "web_search_call", "computer_call", "reasoning", "tool_search_call", "tool_search_output",
+     "compaction", "image_generation_call", "code_interpreter_call", "local_shell_call",
+     "shell_call", "shell_call_output", "apply_patch_call", "apply_patch_call_output", "mcp_call",
+     "mcp_list_tools", "mcp_approval_request", "custom_tool_call", "message",
+     "computer_call_output", "function_call_output", "local_shell_call_output",
+     "mcp_approval_response", "custom_tool_call_output", "structured_outputs",
+     "oauth_consent_request", "memory_search_call", "workflow_action", "a2a_preview_call",
+     "a2a_preview_call_output", "bing_grounding_call", "bing_grounding_call_output",
+     "sharepoint_grounding_preview_call", "sharepoint_grounding_preview_call_output",
+     "azure_ai_search_call", "azure_ai_search_call_output", "bing_custom_search_preview_call",
+     "bing_custom_search_preview_call_output", "openapi_call", "openapi_call_output",
+     "browser_automation_preview_call", "browser_automation_preview_call_output",
+     "fabric_dataagent_preview_call", "fabric_dataagent_preview_call_output", "azure_function_call",
+     and "azure_function_call_output".
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OutputItemType
+    :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or
+     a str type.
+    :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str
+    :ivar agent_reference: The agent that created the item.
+    :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference
+    :ivar response_id: The response on which the item is created.
+    :vartype response_id: str
+    """
+
+    __mapping__: dict[str, _Model] = {}
+    type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])
+    """Required. Known values are: \"output_message\", \"file_search_call\", \"function_call\",
+    \"web_search_call\", \"computer_call\", \"reasoning\", \"tool_search_call\",
+    \"tool_search_output\", \"compaction\", \"image_generation_call\", \"code_interpreter_call\",
+    \"local_shell_call\", \"shell_call\", \"shell_call_output\", \"apply_patch_call\",
+    \"apply_patch_call_output\", \"mcp_call\", \"mcp_list_tools\", \"mcp_approval_request\",
+    \"custom_tool_call\", \"message\", \"computer_call_output\", \"function_call_output\",
+    \"local_shell_call_output\", \"mcp_approval_response\", \"custom_tool_call_output\",
+    \"structured_outputs\", \"oauth_consent_request\", \"memory_search_call\", \"workflow_action\",
+    \"a2a_preview_call\", \"a2a_preview_call_output\", \"bing_grounding_call\",
+    \"bing_grounding_call_output\", \"sharepoint_grounding_preview_call\",
+    \"sharepoint_grounding_preview_call_output\", \"azure_ai_search_call\",
+    \"azure_ai_search_call_output\", \"bing_custom_search_preview_call\",
+    \"bing_custom_search_preview_call_output\", \"openapi_call\", \"openapi_call_output\",
+    \"browser_automation_preview_call\", \"browser_automation_preview_call_output\",
+    \"fabric_dataagent_preview_call\", \"fabric_dataagent_preview_call_output\",
+    \"azure_function_call\", and \"azure_function_call_output\"."""
+    created_by: Optional[Union["_models.CreatedBy", str]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The information about the creator of the item. Is either a CreatedBy type or a str type."""
+    agent_reference: Optional["_models.AgentReference"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The agent that created the item."""
+    response_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The response on which the item is created."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        type: str,
+        created_by: Optional[Union["_models.CreatedBy", str]] = None,
+        agent_reference: Optional["_models.AgentReference"] = None,
+        response_id: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+
+class A2AToolCall(OutputItem, discriminator="a2a_preview_call"):
+    """An A2A (Agent-to-Agent) tool call.
+
+    :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or
+     a str type.
+    :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str
+    :ivar agent_reference: The agent that created the item.
+    :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference
+    :ivar response_id: The response on which the item is created.
+    :vartype response_id: str
+    :ivar type: Required. A2_A_PREVIEW_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.A2_A_PREVIEW_CALL
+    :ivar call_id: The unique ID of the tool call generated by the model. Required.
+    :vartype call_id: str
+    :ivar name: The name of the A2A agent card being called. Required.
+    :vartype name: str
+    :ivar arguments: A JSON string of the arguments to pass to the tool. Required.
+    :vartype arguments: str
+    :ivar status: The status of the tool call. Required. Known values are: "in_progress",
+     "completed", "incomplete", and "failed".
+    :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus
+    """
+
+    type: Literal[OutputItemType.A2_A_PREVIEW_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """Required. A2_A_PREVIEW_CALL."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the tool call generated by the model. Required."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the A2A agent card being called. Required."""
+    arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """A JSON string of the arguments to pass to the tool. Required."""
+    status: Union[str, "_models.ToolCallStatus"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\",
+    \"incomplete\", and \"failed\"."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        call_id: str,
+        name: str,
+        arguments: str,
+        status: Union[str, "_models.ToolCallStatus"],
+        created_by: Optional[Union["_models.CreatedBy", str]] = None,
+        agent_reference: Optional["_models.AgentReference"] = None,
+        response_id: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = OutputItemType.A2_A_PREVIEW_CALL  # type: ignore
+
+
+class A2AToolCallOutput(OutputItem, discriminator="a2a_preview_call_output"):
+    """The output of an A2A (Agent-to-Agent) tool call.
+
+    :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or
+     a str type.
+    :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str
+    :ivar agent_reference: The agent that created the item.
+    :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference
+    :ivar response_id: The response on which the item is created.
+    :vartype response_id: str
+    :ivar type: Required. A2_A_PREVIEW_CALL_OUTPUT.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.A2_A_PREVIEW_CALL_OUTPUT
+    :ivar call_id: The unique ID of the tool call generated by the model. Required.
+    :vartype call_id: str
+    :ivar name: The name of the A2A agent card that was called. Required.
+    :vartype name: str
+    :ivar output: The output from the A2A tool call. Is one of the following types: {str: Any},
+     str, [Any]
+    :vartype output: dict[str, any] or str or list[any]
+    :ivar status: The status of the tool call. Required. Known values are: "in_progress",
+     "completed", "incomplete", and "failed".
+    :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus
+    """
+
+    type: Literal[OutputItemType.A2_A_PREVIEW_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """Required. A2_A_PREVIEW_CALL_OUTPUT."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the tool call generated by the model. Required."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the A2A agent card that was called. Required."""
+    output: Optional["_types.ToolCallOutputContent"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The output from the A2A tool call. Is one of the following types: {str: Any}, str, [Any]"""
+    status: Union[str, "_models.ToolCallStatus"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\",
+    \"incomplete\", and \"failed\"."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        call_id: str,
+        name: str,
+        status: Union[str, "_models.ToolCallStatus"],
+        created_by: Optional[Union["_models.CreatedBy", str]] = None,
+        agent_reference: Optional["_models.AgentReference"] = None,
+        response_id: Optional[str] = None,
+        output: Optional["_types.ToolCallOutputContent"] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = OutputItemType.A2_A_PREVIEW_CALL_OUTPUT  # type: ignore
+
+
+class AgentId(_Model):
+    """AgentId.
+
+    :ivar type: Required. Default value is "agent_id".
+    :vartype type: str
+    :ivar name: The name of the agent. Required.
+    :vartype name: str
+    :ivar version: The version identifier of the agent. Required.
+    :vartype version: str
+    """
+
+    type: Literal["agent_id"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required. Default value is \"agent_id\"."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the agent. Required."""
+    version: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The version identifier of the agent. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        name: str,
+        version: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type: Literal["agent_id"] = "agent_id"
+
+
+class AgentReference(_Model):
+    """AgentReference.
+
+    :ivar type: Required. Default value is "agent_reference".
+    :vartype type: str
+    :ivar name: The name of the agent. Required.
+    :vartype name: str
+    :ivar version: The version identifier of the agent.
+    :vartype version: str
+    """
+
+    type: Literal["agent_reference"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required. Default value is \"agent_reference\"."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the agent. Required."""
+    version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The version identifier of the agent."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        name: str,
+        version: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type: Literal["agent_reference"] = "agent_reference"
+
+
+class AISearchIndexResource(_Model):
+    """A AI Search Index resource.
+
+    :ivar project_connection_id: An index connection ID in an IndexResource attached to this agent.
+    :vartype project_connection_id: str
+    :ivar index_name: The name of an index in an IndexResource attached to this agent.
+    :vartype index_name: str
+    :ivar query_type: Type of query in an AIIndexResource attached to this agent. Known values are:
+     "simple", "semantic", "vector", "vector_simple_hybrid", and "vector_semantic_hybrid".
+    :vartype query_type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.AzureAISearchQueryType
+    :ivar top_k: Number of documents to retrieve from search and present to the model.
+    :vartype top_k: int
+    :ivar filter: filter string for search resource. `Learn more here
+     `_.
+    :vartype filter: str
+    :ivar index_asset_id: Index asset id for search resource.
+    :vartype index_asset_id: str
+    """
+
+    project_connection_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """An index connection ID in an IndexResource attached to this agent."""
+    index_name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of an index in an IndexResource attached to this agent."""
+    query_type: Optional[Union[str, "_models.AzureAISearchQueryType"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """Type of query in an AIIndexResource attached to this agent. Known values are: \"simple\",
+    \"semantic\", \"vector\", \"vector_simple_hybrid\", and \"vector_semantic_hybrid\"."""
+    top_k: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Number of documents to retrieve from search and present to the model."""
+    filter: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """filter string for search resource. `Learn more here
+    `_."""
+    index_asset_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Index asset id for search resource."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        project_connection_id: Optional[str] = None,
+        index_name: Optional[str] = None,
+        query_type: Optional[Union[str, "_models.AzureAISearchQueryType"]] = None,
+        top_k: Optional[int] = None,
+        filter: Optional[str] = None,  # pylint: disable=redefined-builtin
+        index_asset_id: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+
+class Annotation(_Model):
+    """An annotation that applies to a span of output text.
+
+    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
+    ContainerFileCitationBody, FileCitationBody, FilePath, UrlCitationBody
+
+    :ivar type: Required. Known values are: "file_citation", "url_citation",
+     "container_file_citation", and "file_path".
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.AnnotationType
+    """
+
+    __mapping__: dict[str, _Model] = {}
+    type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])
+    """Required. Known values are: \"file_citation\", \"url_citation\", \"container_file_citation\",
+    and \"file_path\"."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        type: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+
+class ApiErrorResponse(_Model):
+    """Error response for API failures.
+
+    :ivar error: Required.
+    :vartype error: ~azure.ai.agentserver.responses.sdk.models.models.Error
+    """
+
+    error: "_models.Error" = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        error: "_models.Error",
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+
+class ApplyPatchFileOperation(_Model):
+    """Apply patch operation.
+
+    You probably want to use the sub-classes and not this class directly. Known sub-classes are:
+    ApplyPatchCreateFileOperation, ApplyPatchDeleteFileOperation, ApplyPatchUpdateFileOperation
+
+    :ivar type: Required. Known values are: "create_file", "delete_file", and "update_file".
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchFileOperationType
+    """
+
+    __mapping__: dict[str, _Model] = {}
+    type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])
+    """Required. Known values are: \"create_file\", \"delete_file\", and \"update_file\"."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        type: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+
+
+class ApplyPatchCreateFileOperation(ApplyPatchFileOperation, discriminator="create_file"):
+    """Apply patch create file operation.
+
+    :ivar type: Create a new file with the provided diff. Required. CREATE_FILE.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CREATE_FILE
+    :ivar path: Path of the file to create. Required.
+    :vartype path: str
+    :ivar diff: Diff to apply. Required.
+    :vartype diff: str
+    """
+
+    type: Literal[ApplyPatchFileOperationType.CREATE_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """Create a new file with the provided diff. Required. CREATE_FILE."""
+    path: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Path of the file to create. Required."""
+    diff: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Diff to apply. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        path: str,
+        diff: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ApplyPatchFileOperationType.CREATE_FILE # type: ignore + + +class ApplyPatchOperationParam(_Model): + """Apply patch operation. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ApplyPatchCreateFileOperationParam, ApplyPatchDeleteFileOperationParam, + ApplyPatchUpdateFileOperationParam + + :ivar type: Required. Known values are: "create_file", "delete_file", and "update_file". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchOperationParamType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"create_file\", \"delete_file\", and \"update_file\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ApplyPatchCreateFileOperationParam(ApplyPatchOperationParam, discriminator="create_file"): + """Apply patch create file operation. + + :ivar type: The operation type. Always ``create_file``. Required. CREATE_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CREATE_FILE + :ivar path: Path of the file to create relative to the workspace root. Required. + :vartype path: str + :ivar diff: Unified diff content to apply when creating the file. Required. + :vartype diff: str + """ + + type: Literal[ApplyPatchOperationParamType.CREATE_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The operation type. 
Always ``create_file``. Required. CREATE_FILE.""" + path: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Path of the file to create relative to the workspace root. Required.""" + diff: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unified diff content to apply when creating the file. Required.""" + + @overload + def __init__( + self, + *, + path: str, + diff: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ApplyPatchOperationParamType.CREATE_FILE # type: ignore + + +class ApplyPatchDeleteFileOperation(ApplyPatchFileOperation, discriminator="delete_file"): + """Apply patch delete file operation. + + :ivar type: Delete the specified file. Required. DELETE_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.DELETE_FILE + :ivar path: Path of the file to delete. Required. + :vartype path: str + """ + + type: Literal[ApplyPatchFileOperationType.DELETE_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Delete the specified file. Required. DELETE_FILE.""" + path: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Path of the file to delete. Required.""" + + @overload + def __init__( + self, + *, + path: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ApplyPatchFileOperationType.DELETE_FILE # type: ignore + + +class ApplyPatchDeleteFileOperationParam(ApplyPatchOperationParam, discriminator="delete_file"): + """Apply patch delete file operation. + + :ivar type: The operation type. Always ``delete_file``. Required. DELETE_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.DELETE_FILE + :ivar path: Path of the file to delete relative to the workspace root. Required. + :vartype path: str + """ + + type: Literal[ApplyPatchOperationParamType.DELETE_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The operation type. Always ``delete_file``. Required. DELETE_FILE.""" + path: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Path of the file to delete relative to the workspace root. Required.""" + + @overload + def __init__( + self, + *, + path: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ApplyPatchOperationParamType.DELETE_FILE # type: ignore + + +class Item(_Model): + """Content item used to generate a response. + + You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: + ApplyPatchToolCallItemParam, ApplyPatchToolCallOutputItemParam, ItemCodeInterpreterToolCall, + CompactionSummaryItemParam, ItemComputerToolCall, ComputerCallOutputItemParam, + ItemCustomToolCall, ItemCustomToolCallOutput, ItemFileSearchToolCall, ItemFunctionToolCall, + FunctionCallOutputItemParam, ItemImageGenToolCall, ItemReferenceParam, ItemLocalShellToolCall, + ItemLocalShellToolCallOutput, ItemMcpApprovalRequest, MCPApprovalResponse, ItemMcpToolCall, + ItemMcpListTools, MemorySearchToolCallItemParam, ItemMessage, ItemOutputMessage, + ItemReasoningItem, FunctionShellCallItemParam, FunctionShellCallOutputItemParam, + ToolSearchCallItemParam, ToolSearchOutputItemParam, ItemWebSearchToolCall + + :ivar type: Required. Known values are: "message", "output_message", "file_search_call", + "computer_call", "computer_call_output", "web_search_call", "function_call", + "function_call_output", "tool_search_call", "tool_search_output", "reasoning", "compaction", + "image_generation_call", "code_interpreter_call", "local_shell_call", + "local_shell_call_output", "shell_call", "shell_call_output", "apply_patch_call", + "apply_patch_call_output", "mcp_list_tools", "mcp_approval_request", "mcp_approval_response", + "mcp_call", "custom_tool_call_output", "custom_tool_call", "item_reference", + "structured_outputs", "oauth_consent_request", "memory_search_call", "workflow_action", + "a2a_preview_call", "a2a_preview_call_output", "bing_grounding_call", + "bing_grounding_call_output", "sharepoint_grounding_preview_call", + "sharepoint_grounding_preview_call_output", "azure_ai_search_call", + "azure_ai_search_call_output", "bing_custom_search_preview_call", + "bing_custom_search_preview_call_output", "openapi_call", "openapi_call_output", + "browser_automation_preview_call", "browser_automation_preview_call_output", + "fabric_dataagent_preview_call", "fabric_dataagent_preview_call_output", "azure_function_call", + and "azure_function_call_output". 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ItemType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"message\", \"output_message\", \"file_search_call\", + \"computer_call\", \"computer_call_output\", \"web_search_call\", \"function_call\", + \"function_call_output\", \"tool_search_call\", \"tool_search_output\", \"reasoning\", + \"compaction\", \"image_generation_call\", \"code_interpreter_call\", \"local_shell_call\", + \"local_shell_call_output\", \"shell_call\", \"shell_call_output\", \"apply_patch_call\", + \"apply_patch_call_output\", \"mcp_list_tools\", \"mcp_approval_request\", + \"mcp_approval_response\", \"mcp_call\", \"custom_tool_call_output\", \"custom_tool_call\", + \"item_reference\", \"structured_outputs\", \"oauth_consent_request\", \"memory_search_call\", + \"workflow_action\", \"a2a_preview_call\", \"a2a_preview_call_output\", + \"bing_grounding_call\", \"bing_grounding_call_output\", \"sharepoint_grounding_preview_call\", + \"sharepoint_grounding_preview_call_output\", \"azure_ai_search_call\", + \"azure_ai_search_call_output\", \"bing_custom_search_preview_call\", + \"bing_custom_search_preview_call_output\", \"openapi_call\", \"openapi_call_output\", + \"browser_automation_preview_call\", \"browser_automation_preview_call_output\", + \"fabric_dataagent_preview_call\", \"fabric_dataagent_preview_call_output\", + \"azure_function_call\", and \"azure_function_call_output\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ApplyPatchToolCallItemParam(Item, discriminator="apply_patch_call"): + """Apply patch tool call. + + :ivar type: The type of the item. Always ``apply_patch_call``. Required. APPLY_PATCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH_CALL + :ivar id: + :vartype id: str + :ivar call_id: The unique ID of the apply patch tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the apply patch tool call. One of ``in_progress`` or ``completed``. + Required. Known values are: "in_progress" and "completed". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchCallStatusParam + :ivar operation: The specific create, delete, or update instruction for the apply_patch tool + call. Required. + :vartype operation: ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchOperationParam + """ + + type: Literal[ItemType.APPLY_PATCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``apply_patch_call``. Required. APPLY_PATCH_CALL.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call generated by the model. Required.""" + status: Union[str, "_models.ApplyPatchCallStatusParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the apply patch tool call. One of ``in_progress`` or ``completed``. Required. 
+ Known values are: \"in_progress\" and \"completed\".""" + operation: "_models.ApplyPatchOperationParam" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The specific create, delete, or update instruction for the apply_patch tool call. Required.""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ApplyPatchCallStatusParam"], + operation: "_models.ApplyPatchOperationParam", + id: Optional[str] = None, # pylint: disable=redefined-builtin + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.APPLY_PATCH_CALL # type: ignore + + +class ApplyPatchToolCallOutputItemParam(Item, discriminator="apply_patch_call_output"): + """Apply patch tool call output. + + :ivar type: The type of the item. Always ``apply_patch_call_output``. Required. + APPLY_PATCH_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH_CALL_OUTPUT + :ivar id: + :vartype id: str + :ivar call_id: The unique ID of the apply patch tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the apply patch tool call output. One of ``completed`` or + ``failed``. Required. Known values are: "completed" and "failed". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchCallOutputStatusParam + :ivar output: + :vartype output: str + """ + + type: Literal[ItemType.APPLY_PATCH_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``apply_patch_call_output``. Required. 
APPLY_PATCH_CALL_OUTPUT.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call generated by the model. Required.""" + status: Union[str, "_models.ApplyPatchCallOutputStatusParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the apply patch tool call output. One of ``completed`` or ``failed``. Required. + Known values are: \"completed\" and \"failed\".""" + output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ApplyPatchCallOutputStatusParam"], + id: Optional[str] = None, # pylint: disable=redefined-builtin + output: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.APPLY_PATCH_CALL_OUTPUT # type: ignore + + +class ApplyPatchToolParam(Tool, discriminator="apply_patch"): + """Apply patch tool. + + :ivar type: The type of the tool. Always ``apply_patch``. Required. APPLY_PATCH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH + """ + + type: Literal[ToolType.APPLY_PATCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the tool. Always ``apply_patch``. Required. APPLY_PATCH.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.APPLY_PATCH # type: ignore + + +class ApplyPatchUpdateFileOperation(ApplyPatchFileOperation, discriminator="update_file"): + """Apply patch update file operation. + + :ivar type: Update an existing file with the provided diff. Required. UPDATE_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.UPDATE_FILE + :ivar path: Path of the file to update. Required. + :vartype path: str + :ivar diff: Diff to apply. Required. + :vartype diff: str + """ + + type: Literal[ApplyPatchFileOperationType.UPDATE_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Update an existing file with the provided diff. Required. UPDATE_FILE.""" + path: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Path of the file to update. Required.""" + diff: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Diff to apply. Required.""" + + @overload + def __init__( + self, + *, + path: str, + diff: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ApplyPatchFileOperationType.UPDATE_FILE # type: ignore + + +class ApplyPatchUpdateFileOperationParam(ApplyPatchOperationParam, discriminator="update_file"): + """Apply patch update file operation. + + :ivar type: The operation type. Always ``update_file``. Required. UPDATE_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.UPDATE_FILE + :ivar path: Path of the file to update relative to the workspace root. Required. 
+ :vartype path: str + :ivar diff: Unified diff content to apply to the existing file. Required. + :vartype diff: str + """ + + type: Literal[ApplyPatchOperationParamType.UPDATE_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The operation type. Always ``update_file``. Required. UPDATE_FILE.""" + path: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Path of the file to update relative to the workspace root. Required.""" + diff: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unified diff content to apply to the existing file. Required.""" + + @overload + def __init__( + self, + *, + path: str, + diff: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ApplyPatchOperationParamType.UPDATE_FILE # type: ignore + + +class ApproximateLocation(_Model): + """ApproximateLocation. + + :ivar type: The type of location approximation. Always ``approximate``. Required. Default value + is "approximate". + :vartype type: str + :ivar country: + :vartype country: str + :ivar region: + :vartype region: str + :ivar city: + :vartype city: str + :ivar timezone: + :vartype timezone: str + """ + + type: Literal["approximate"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of location approximation. Always ``approximate``. Required. 
Default value is + \"approximate\".""" + country: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + region: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + city: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + timezone: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + country: Optional[str] = None, + region: Optional[str] = None, + city: Optional[str] = None, + timezone: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["approximate"] = "approximate" + + +class AutoCodeInterpreterToolParam(_Model): + """Automatic Code Interpreter Tool Parameters. + + :ivar type: Always ``auto``. Required. Default value is "auto". + :vartype type: str + :ivar file_ids: An optional list of uploaded files to make available to your code. + :vartype file_ids: list[str] + :ivar memory_limit: Known values are: "1g", "4g", "16g", and "64g". + :vartype memory_limit: str or + ~azure.ai.agentserver.responses.sdk.models.models.ContainerMemoryLimit + :ivar network_policy: + :vartype network_policy: + ~azure.ai.agentserver.responses.sdk.models.models.ContainerNetworkPolicyParam + """ + + type: Literal["auto"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Always ``auto``. Required. 
Default value is \"auto\".""" + file_ids: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An optional list of uploaded files to make available to your code.""" + memory_limit: Optional[Union[str, "_models.ContainerMemoryLimit"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"1g\", \"4g\", \"16g\", and \"64g\".""" + network_policy: Optional["_models.ContainerNetworkPolicyParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + file_ids: Optional[list[str]] = None, + memory_limit: Optional[Union[str, "_models.ContainerMemoryLimit"]] = None, + network_policy: Optional["_models.ContainerNetworkPolicyParam"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["auto"] = "auto" + + +class AzureAISearchTool(Tool, discriminator="azure_ai_search"): + """The input definition information for an Azure AI search tool as used to configure an agent. + + :ivar type: The object type, which is always 'azure_ai_search'. Required. AZURE_AI_SEARCH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.AZURE_AI_SEARCH + :ivar azure_ai_search: The azure ai search index resource. Required. + :vartype azure_ai_search: + ~azure.ai.agentserver.responses.sdk.models.models.AzureAISearchToolResource + """ + + type: Literal[ToolType.AZURE_AI_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'azure_ai_search'. Required. 
AZURE_AI_SEARCH.""" + azure_ai_search: "_models.AzureAISearchToolResource" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The azure ai search index resource. Required.""" + + @overload + def __init__( + self, + *, + azure_ai_search: "_models.AzureAISearchToolResource", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.AZURE_AI_SEARCH # type: ignore + + +class AzureAISearchToolCall(OutputItem, discriminator="azure_ai_search_call"): + """An Azure AI Search tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. AZURE_AI_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.AZURE_AI_SEARCH_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.AZURE_AI_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. 
AZURE_AI_SEARCH_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.AZURE_AI_SEARCH_CALL # type: ignore + + +class AzureAISearchToolCallOutput(OutputItem, discriminator="azure_ai_search_call_output"): + """The output of an Azure AI Search tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. AZURE_AI_SEARCH_CALL_OUTPUT. 
+ :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.AZURE_AI_SEARCH_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the Azure AI Search tool call. Is one of the following types: + {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.AZURE_AI_SEARCH_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. AZURE_AI_SEARCH_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the Azure AI Search tool call. Is one of the following types: {str: Any}, str, + [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.AZURE_AI_SEARCH_CALL_OUTPUT # type: ignore + + +class AzureAISearchToolResource(_Model): + """A set of index resources used by the ``azure_ai_search`` tool. + + :ivar indexes: The indices attached to this agent. There can be a maximum of 1 index resource + attached to the agent. Required. + :vartype indexes: list[~azure.ai.agentserver.responses.sdk.models.models.AISearchIndexResource] + """ + + indexes: list["_models.AISearchIndexResource"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The indices attached to this agent. There can be a maximum of 1 index resource attached to the + agent. Required.""" + + @overload + def __init__( + self, + *, + indexes: list["_models.AISearchIndexResource"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class AzureFunctionBinding(_Model): + """The structure for keeping storage queue name and URI. + + :ivar type: The type of binding, which is always 'storage_queue'. Required. Default value is + "storage_queue". + :vartype type: str + :ivar storage_queue: Storage queue. Required. + :vartype storage_queue: + ~azure.ai.agentserver.responses.sdk.models.models.AzureFunctionStorageQueue + """ + + type: Literal["storage_queue"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of binding, which is always 'storage_queue'. Required. Default value is + \"storage_queue\".""" + storage_queue: "_models.AzureFunctionStorageQueue" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Storage queue. 
Required.""" + + @overload + def __init__( + self, + *, + storage_queue: "_models.AzureFunctionStorageQueue", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["storage_queue"] = "storage_queue" + + +class AzureFunctionDefinition(_Model): + """The definition of Azure function. + + :ivar function: The definition of azure function and its parameters. Required. + :vartype function: + ~azure.ai.agentserver.responses.sdk.models.models.AzureFunctionDefinitionFunction + :ivar input_binding: Input storage queue. The queue storage trigger runs a function as messages + are added to it. Required. + :vartype input_binding: ~azure.ai.agentserver.responses.sdk.models.models.AzureFunctionBinding + :ivar output_binding: Output storage queue. The function writes output to this queue when the + input items are processed. Required. + :vartype output_binding: ~azure.ai.agentserver.responses.sdk.models.models.AzureFunctionBinding + """ + + function: "_models.AzureFunctionDefinitionFunction" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The definition of azure function and its parameters. Required.""" + input_binding: "_models.AzureFunctionBinding" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Input storage queue. The queue storage trigger runs a function as messages are added to it. + Required.""" + output_binding: "_models.AzureFunctionBinding" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Output storage queue. The function writes output to this queue when the input items are + processed. 
+ Required.""" + + @overload + def __init__( + self, + *, + function: "_models.AzureFunctionDefinitionFunction", + input_binding: "_models.AzureFunctionBinding", + output_binding: "_models.AzureFunctionBinding", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class AzureFunctionDefinitionFunction(_Model): + """AzureFunctionDefinitionFunction. + + :ivar name: The name of the function to be called. Required. + :vartype name: str + :ivar description: A description of what the function does, used by the model to choose when + and how to call the function. + :vartype description: str + :ivar parameters: The parameters the function accepts, described as a JSON Schema object. + Required. + :vartype parameters: dict[str, any] + """ + + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to be called. Required.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A description of what the function does, used by the model to choose when and how to call the + function.""" + parameters: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The parameters the function accepts, described as a JSON Schema object. Required.""" + + @overload + def __init__( + self, + *, + name: str, + parameters: dict[str, Any], + description: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class AzureFunctionStorageQueue(_Model): + """The structure for keeping storage queue name and URI. + + :ivar queue_service_endpoint: URI to the Azure Storage Queue service allowing you to manipulate + a queue. Required. + :vartype queue_service_endpoint: str + :ivar queue_name: The name of an Azure function storage queue. Required. + :vartype queue_name: str + """ + + queue_service_endpoint: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """URI to the Azure Storage Queue service allowing you to manipulate a queue. Required.""" + queue_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of an Azure function storage queue. Required.""" + + @overload + def __init__( + self, + *, + queue_service_endpoint: str, + queue_name: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class AzureFunctionTool(Tool, discriminator="azure_function"): + """The input definition information for an Azure Function Tool, as used to configure an Agent. + + :ivar type: The object type, which is always 'azure_function'. Required. AZURE_FUNCTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.AZURE_FUNCTION + :ivar azure_function: The Azure Function Tool definition. Required. + :vartype azure_function: + ~azure.ai.agentserver.responses.sdk.models.models.AzureFunctionDefinition + """ + + type: Literal[ToolType.AZURE_FUNCTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'azure_function'. Required. 
AZURE_FUNCTION.""" + azure_function: "_models.AzureFunctionDefinition" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The Azure Function Tool definition. Required.""" + + @overload + def __init__( + self, + *, + azure_function: "_models.AzureFunctionDefinition", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.AZURE_FUNCTION # type: ignore + + +class AzureFunctionToolCall(OutputItem, discriminator="azure_function_call"): + """An Azure Function tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. AZURE_FUNCTION_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.AZURE_FUNCTION_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar name: The name of the Azure Function being called. Required. + :vartype name: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". 
+ :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.AZURE_FUNCTION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. AZURE_FUNCTION_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the Azure Function being called. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.AZURE_FUNCTION_CALL # type: ignore + + +class AzureFunctionToolCallOutput(OutputItem, discriminator="azure_function_call_output"): + """The output of an Azure Function tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. AZURE_FUNCTION_CALL_OUTPUT. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.AZURE_FUNCTION_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar name: The name of the Azure Function that was called. Required. + :vartype name: str + :ivar output: The output from the Azure Function tool call. Is one of the following types: + {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.AZURE_FUNCTION_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. AZURE_FUNCTION_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the Azure Function that was called. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the Azure Function tool call. 
Is one of the following types: {str: Any}, str, + [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.AZURE_FUNCTION_CALL_OUTPUT # type: ignore + + +class BingCustomSearchConfiguration(_Model): + """A bing custom search configuration. + + :ivar project_connection_id: Project connection id for grounding with bing search. Required. + :vartype project_connection_id: str + :ivar instance_name: Name of the custom configuration instance given to config. Required. + :vartype instance_name: str + :ivar market: The market where the results come from. + :vartype market: str + :ivar set_lang: The language to use for user interface strings when calling Bing API. + :vartype set_lang: str + :ivar count: The number of search results to return in the bing api response. + :vartype count: int + :ivar freshness: Filter search results by a specific time range. See `accepted values here + `_. + :vartype freshness: str + """ + + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Project connection id for grounding with bing search. 
Required.""" + instance_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Name of the custom configuration instance given to config. Required.""" + market: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The market where the results come from.""" + set_lang: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The language to use for user interface strings when calling Bing API.""" + count: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The number of search results to return in the bing api response.""" + freshness: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Filter search results by a specific time range. See `accepted values here + `_.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + instance_name: str, + market: Optional[str] = None, + set_lang: Optional[str] = None, + count: Optional[int] = None, + freshness: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class BingCustomSearchPreviewTool(Tool, discriminator="bing_custom_search_preview"): + """The input definition information for a Bing custom search tool as used to configure an agent. + + :ivar type: The object type, which is always 'bing_custom_search_preview'. Required. + BING_CUSTOM_SEARCH_PREVIEW. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BING_CUSTOM_SEARCH_PREVIEW + :ivar bing_custom_search_preview: The bing custom search tool parameters. Required. 
+ :vartype bing_custom_search_preview: + ~azure.ai.agentserver.responses.sdk.models.models.BingCustomSearchToolParameters + """ + + type: Literal[ToolType.BING_CUSTOM_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'bing_custom_search_preview'. Required. + BING_CUSTOM_SEARCH_PREVIEW.""" + bing_custom_search_preview: "_models.BingCustomSearchToolParameters" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The bing custom search tool parameters. Required.""" + + @overload + def __init__( + self, + *, + bing_custom_search_preview: "_models.BingCustomSearchToolParameters", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.BING_CUSTOM_SEARCH_PREVIEW # type: ignore + + +class BingCustomSearchToolCall(OutputItem, discriminator="bing_custom_search_preview_call"): + """A Bing custom search tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. BING_CUSTOM_SEARCH_PREVIEW_CALL. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BING_CUSTOM_SEARCH_PREVIEW_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar arguments: A JSON string of the arguments to pass to the tool. 
Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.BING_CUSTOM_SEARCH_PREVIEW_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. BING_CUSTOM_SEARCH_PREVIEW_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.BING_CUSTOM_SEARCH_PREVIEW_CALL # type: ignore + + +class BingCustomSearchToolCallOutput(OutputItem, discriminator="bing_custom_search_preview_call_output"): + """The output of a Bing custom search tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the Bing custom search tool call. Is one of the following types: + {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the Bing custom search tool call. Is one of the following types: {str: Any}, + str, [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. 
Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.BING_CUSTOM_SEARCH_PREVIEW_CALL_OUTPUT # type: ignore + + +class BingCustomSearchToolParameters(_Model): + """The bing custom search tool parameters. + + :ivar search_configurations: The project connections attached to this tool. There can be a + maximum of 1 connection resource attached to the tool. Required. + :vartype search_configurations: + list[~azure.ai.agentserver.responses.sdk.models.models.BingCustomSearchConfiguration] + """ + + search_configurations: list["_models.BingCustomSearchConfiguration"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The project connections attached to this tool. There can be a maximum of 1 connection resource + attached to the tool. Required.""" + + @overload + def __init__( + self, + *, + search_configurations: list["_models.BingCustomSearchConfiguration"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class BingGroundingSearchConfiguration(_Model): + """Search configuration for Bing Grounding. 
+ + :ivar project_connection_id: Project connection id for grounding with bing search. Required. + :vartype project_connection_id: str + :ivar market: The market where the results come from. + :vartype market: str + :ivar set_lang: The language to use for user interface strings when calling Bing API. + :vartype set_lang: str + :ivar count: The number of search results to return in the bing api response. + :vartype count: int + :ivar freshness: Filter search results by a specific time range. See `accepted values here + `_. + :vartype freshness: str + """ + + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Project connection id for grounding with bing search. Required.""" + market: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The market where the results come from.""" + set_lang: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The language to use for user interface strings when calling Bing API.""" + count: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The number of search results to return in the bing api response.""" + freshness: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Filter search results by a specific time range. See `accepted values here + `_.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + market: Optional[str] = None, + set_lang: Optional[str] = None, + count: Optional[int] = None, + freshness: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class BingGroundingSearchToolParameters(_Model): + """The bing grounding search tool parameters. + + :ivar search_configurations: The search configurations attached to this tool. There can be a + maximum of 1 search configuration resource attached to the tool. Required. + :vartype search_configurations: + list[~azure.ai.agentserver.responses.sdk.models.models.BingGroundingSearchConfiguration] + """ + + search_configurations: list["_models.BingGroundingSearchConfiguration"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The search configurations attached to this tool. There can be a maximum of 1 search + configuration resource attached to the tool. Required.""" + + @overload + def __init__( + self, + *, + search_configurations: list["_models.BingGroundingSearchConfiguration"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class BingGroundingTool(Tool, discriminator="bing_grounding"): + """The input definition information for a bing grounding search tool as used to configure an + agent. + + :ivar type: The object type, which is always 'bing_grounding'. Required. BING_GROUNDING. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.BING_GROUNDING + :ivar bing_grounding: The bing grounding search tool parameters. Required. 
+ :vartype bing_grounding: + ~azure.ai.agentserver.responses.sdk.models.models.BingGroundingSearchToolParameters + """ + + type: Literal[ToolType.BING_GROUNDING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'bing_grounding'. Required. BING_GROUNDING.""" + bing_grounding: "_models.BingGroundingSearchToolParameters" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The bing grounding search tool parameters. Required.""" + + @overload + def __init__( + self, + *, + bing_grounding: "_models.BingGroundingSearchToolParameters", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.BING_GROUNDING # type: ignore + + +class BingGroundingToolCall(OutputItem, discriminator="bing_grounding_call"): + """A Bing grounding tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. BING_GROUNDING_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.BING_GROUNDING_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. 
Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.BING_GROUNDING_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. BING_GROUNDING_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.BING_GROUNDING_CALL # type: ignore + + +class BingGroundingToolCallOutput(OutputItem, discriminator="bing_grounding_call_output"): + """The output of a Bing grounding tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. 
+ :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. BING_GROUNDING_CALL_OUTPUT. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BING_GROUNDING_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the Bing grounding tool call. Is one of the following types: + {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.BING_GROUNDING_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. BING_GROUNDING_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the Bing grounding tool call. Is one of the following types: {str: Any}, str, + [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. 
Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.BING_GROUNDING_CALL_OUTPUT # type: ignore + + +class BrowserAutomationPreviewTool(Tool, discriminator="browser_automation_preview"): + """The input definition information for a Browser Automation Tool, as used to configure an Agent. + + :ivar type: The object type, which is always 'browser_automation_preview'. Required. + BROWSER_AUTOMATION_PREVIEW. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BROWSER_AUTOMATION_PREVIEW + :ivar browser_automation_preview: The Browser Automation Tool parameters. Required. + :vartype browser_automation_preview: + ~azure.ai.agentserver.responses.sdk.models.models.BrowserAutomationToolParameters + """ + + type: Literal[ToolType.BROWSER_AUTOMATION_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'browser_automation_preview'. Required. + BROWSER_AUTOMATION_PREVIEW.""" + browser_automation_preview: "_models.BrowserAutomationToolParameters" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The Browser Automation Tool parameters. 
Required.""" + + @overload + def __init__( + self, + *, + browser_automation_preview: "_models.BrowserAutomationToolParameters", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.BROWSER_AUTOMATION_PREVIEW # type: ignore + + +class BrowserAutomationToolCall(OutputItem, discriminator="browser_automation_preview_call"): + """A browser automation tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. BROWSER_AUTOMATION_PREVIEW_CALL. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BROWSER_AUTOMATION_PREVIEW_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.BROWSER_AUTOMATION_PREVIEW_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. 
BROWSER_AUTOMATION_PREVIEW_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.BROWSER_AUTOMATION_PREVIEW_CALL # type: ignore + + +class BrowserAutomationToolCallOutput(OutputItem, discriminator="browser_automation_preview_call_output"): + """The output of a browser automation tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT. 
+ :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the browser automation tool call. Is one of the following types: + {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the browser automation tool call. Is one of the following types: {str: Any}, + str, [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.BROWSER_AUTOMATION_PREVIEW_CALL_OUTPUT # type: ignore + + +class BrowserAutomationToolConnectionParameters(_Model): # pylint: disable=name-too-long + """Definition of input parameters for the connection used by the Browser Automation Tool. + + :ivar project_connection_id: The ID of the project connection to your Azure Playwright + resource. Required. + :vartype project_connection_id: str + """ + + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the project connection to your Azure Playwright resource. Required.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class BrowserAutomationToolParameters(_Model): + """Definition of input parameters for the Browser Automation Tool. + + :ivar connection: The project connection parameters associated with the Browser Automation + Tool. Required. + :vartype connection: + ~azure.ai.agentserver.responses.sdk.models.models.BrowserAutomationToolConnectionParameters + """ + + connection: "_models.BrowserAutomationToolConnectionParameters" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The project connection parameters associated with the Browser Automation Tool. Required.""" + + @overload + def __init__( + self, + *, + connection: "_models.BrowserAutomationToolConnectionParameters", + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CaptureStructuredOutputsTool(Tool, discriminator="capture_structured_outputs"): + """A tool for capturing structured outputs. + + :ivar type: The type of the tool. Always ``capture_structured_outputs``. Required. + CAPTURE_STRUCTURED_OUTPUTS. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.CAPTURE_STRUCTURED_OUTPUTS + :ivar outputs: The structured outputs to capture from the model. Required. + :vartype outputs: ~azure.ai.agentserver.responses.sdk.models.models.StructuredOutputDefinition + """ + + type: Literal[ToolType.CAPTURE_STRUCTURED_OUTPUTS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the tool. Always ``capture_structured_outputs``. Required. + CAPTURE_STRUCTURED_OUTPUTS.""" + outputs: "_models.StructuredOutputDefinition" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The structured outputs to capture from the model. Required.""" + + @overload + def __init__( + self, + *, + outputs: "_models.StructuredOutputDefinition", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.CAPTURE_STRUCTURED_OUTPUTS # type: ignore + + +class MemoryItem(_Model): + """A single memory item stored in the memory store, containing content and metadata. + + You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: + ChatSummaryMemoryItem, UserProfileMemoryItem + + :ivar memory_id: The unique ID of the memory item. Required. + :vartype memory_id: str + :ivar updated_at: The last update time of the memory item. Required. + :vartype updated_at: ~datetime.datetime + :ivar scope: The namespace that logically groups and isolates memories, such as a user ID. + Required. + :vartype scope: str + :ivar content: The content of the memory. Required. + :vartype content: str + :ivar kind: The kind of the memory item. Required. Known values are: "user_profile" and + "chat_summary". + :vartype kind: str or ~azure.ai.agentserver.responses.sdk.models.models.MemoryItemKind + """ + + __mapping__: dict[str, _Model] = {} + memory_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the memory item. Required.""" + updated_at: datetime.datetime = rest_field( + visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" + ) + """The last update time of the memory item. Required.""" + scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace that logically groups and isolates memories, such as a user ID. Required.""" + content: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content of the memory. Required.""" + kind: str = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) + """The kind of the memory item. Required. Known values are: \"user_profile\" and \"chat_summary\".""" + + @overload + def __init__( + self, + *, + memory_id: str, + updated_at: datetime.datetime, + scope: str, + content: str, + kind: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ChatSummaryMemoryItem(MemoryItem, discriminator="chat_summary"): + """A memory item containing a summary extracted from conversations. + + :ivar memory_id: The unique ID of the memory item. Required. + :vartype memory_id: str + :ivar updated_at: The last update time of the memory item. Required. + :vartype updated_at: ~datetime.datetime + :ivar scope: The namespace that logically groups and isolates memories, such as a user ID. + Required. + :vartype scope: str + :ivar content: The content of the memory. Required. + :vartype content: str + :ivar kind: The kind of the memory item. Required. Summary of chat conversations. + :vartype kind: str or ~azure.ai.agentserver.responses.sdk.models.models.CHAT_SUMMARY + """ + + kind: Literal[MemoryItemKind.CHAT_SUMMARY] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The kind of the memory item. Required. Summary of chat conversations.""" + + @overload + def __init__( + self, + *, + memory_id: str, + updated_at: datetime.datetime, + scope: str, + content: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.kind = MemoryItemKind.CHAT_SUMMARY # type: ignore + + +class ComputerAction(_Model): + """ComputerAction. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ClickParam, DoubleClickAction, DragParam, KeyPressAction, MoveParam, ScreenshotParam, + ScrollParam, TypeParam, WaitParam + + :ivar type: Required. Known values are: "click", "double_click", "drag", "keypress", "move", + "screenshot", "scroll", "type", and "wait". 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ComputerActionType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"click\", \"double_click\", \"drag\", \"keypress\", \"move\", + \"screenshot\", \"scroll\", \"type\", and \"wait\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ClickParam(ComputerAction, discriminator="click"): + """Click. + + :ivar type: Specifies the event type. For a click action, this property is always ``click``. + Required. CLICK. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CLICK + :ivar button: Indicates which mouse button was pressed during the click. One of ``left``, + ``right``, ``wheel``, ``back``, or ``forward``. Required. Known values are: "left", "right", + "wheel", "back", and "forward". + :vartype button: str or ~azure.ai.agentserver.responses.sdk.models.models.ClickButtonType + :ivar x: The x-coordinate where the click occurred. Required. + :vartype x: int + :ivar y: The y-coordinate where the click occurred. Required. + :vartype y: int + """ + + type: Literal[ComputerActionType.CLICK] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a click action, this property is always ``click``. Required. + CLICK.""" + button: Union[str, "_models.ClickButtonType"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Indicates which mouse button was pressed during the click. 
One of ``left``, ``right``, + ``wheel``, ``back``, or ``forward``. Required. Known values are: \"left\", \"right\", + \"wheel\", \"back\", and \"forward\".""" + x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The x-coordinate where the click occurred. Required.""" + y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The y-coordinate where the click occurred. Required.""" + + @overload + def __init__( + self, + *, + button: Union[str, "_models.ClickButtonType"], + x: int, + y: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.CLICK # type: ignore + + +class CodeInterpreterOutputImage(_Model): + """Code interpreter output image. + + :ivar type: The type of the output. Always ``image``. Required. Default value is "image". + :vartype type: str + :ivar url: The URL of the image output from the code interpreter. Required. + :vartype url: str + """ + + type: Literal["image"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the output. Always ``image``. Required. Default value is \"image\".""" + url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the image output from the code interpreter. Required.""" + + @overload + def __init__( + self, + *, + url: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["image"] = "image" + + +class CodeInterpreterOutputLogs(_Model): + """Code interpreter output logs. 
+ + :ivar type: The type of the output. Always ``logs``. Required. Default value is "logs". + :vartype type: str + :ivar logs: The logs output from the code interpreter. Required. + :vartype logs: str + """ + + type: Literal["logs"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the output. Always ``logs``. Required. Default value is \"logs\".""" + logs: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The logs output from the code interpreter. Required.""" + + @overload + def __init__( + self, + *, + logs: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["logs"] = "logs" + + +class CodeInterpreterTool(Tool, discriminator="code_interpreter"): + """Code interpreter. + + :ivar type: The type of the code interpreter tool. Always ``code_interpreter``. Required. + CODE_INTERPRETER. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CODE_INTERPRETER + :ivar container: The code interpreter container. Can be a container ID or an object that + specifies uploaded file IDs to make available to your code, along with an optional + ``memory_limit`` setting. If not provided, the service assumes auto. Is either a str type or a + AutoCodeInterpreterToolParam type. + :vartype container: str or + ~azure.ai.agentserver.responses.sdk.models.models.AutoCodeInterpreterToolParam + """ + + type: Literal[ToolType.CODE_INTERPRETER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the code interpreter tool. Always ``code_interpreter``. Required. 
CODE_INTERPRETER.""" + container: Optional[Union[str, "_models.AutoCodeInterpreterToolParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The code interpreter container. Can be a container ID or an object that specifies uploaded file + IDs to make available to your code, along with an optional ``memory_limit`` setting. If not + provided, the service assumes auto. Is either a str type or a AutoCodeInterpreterToolParam + type.""" + + @overload + def __init__( + self, + *, + container: Optional[Union[str, "_models.AutoCodeInterpreterToolParam"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.CODE_INTERPRETER # type: ignore + + +class CompactionSummaryItemParam(Item, discriminator="compaction"): + """Compaction item. + + :ivar id: + :vartype id: str + :ivar type: The type of the item. Always ``compaction``. Required. COMPACTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPACTION + :ivar encrypted_content: The encrypted content of the compaction summary. Required. + :vartype encrypted_content: str + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + type: Literal[ItemType.COMPACTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``compaction``. Required. COMPACTION.""" + encrypted_content: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The encrypted content of the compaction summary. Required.""" + + @overload + def __init__( + self, + *, + encrypted_content: str, + id: Optional[str] = None, # pylint: disable=redefined-builtin + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.COMPACTION # type: ignore + + +class CompactResource(_Model): + """The compacted response object. + + :ivar id: The unique identifier for the compacted response. Required. + :vartype id: str + :ivar object: The object type. Always ``response.compaction``. Required. Default value is + "response.compaction". + :vartype object: str + :ivar output: The compacted list of output items. Required. + :vartype output: list[~azure.ai.agentserver.responses.sdk.models.models.ItemField] + :ivar created_at: Unix timestamp (in seconds) when the compacted conversation was created. + Required. + :vartype created_at: ~datetime.datetime + :ivar usage: Token accounting for the compaction pass, including cached, reasoning, and total + tokens. Required. + :vartype usage: ~azure.ai.agentserver.responses.sdk.models.models.ResponseUsage + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier for the compacted response. Required.""" + object: Literal["response.compaction"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The object type. Always ``response.compaction``. Required. Default value is + \"response.compaction\".""" + output: list["_models.ItemField"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The compacted list of output items. Required.""" + created_at: datetime.datetime = rest_field( + visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" + ) + """Unix timestamp (in seconds) when the compacted conversation was created. 
Required.""" + usage: "_models.ResponseUsage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Token accounting for the compaction pass, including cached, reasoning, and total tokens. + Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + output: list["_models.ItemField"], + created_at: datetime.datetime, + usage: "_models.ResponseUsage", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.object: Literal["response.compaction"] = "response.compaction" + + +class ComparisonFilter(_Model): + """Comparison Filter. + + :ivar type: Specifies the comparison operator: ``eq``, ``ne``, ``gt``, ``gte``, ``lt``, + ``lte``, ``in``, ``nin``. + + * `eq`: equals + * `ne`: not equal + * `gt`: greater than + * `gte`: greater than or equal + * `lt`: less than + * `lte`: less than or equal + * `in`: in + * `nin`: not in. Required. Is one of the following types: Literal["eq"], Literal["ne"], + Literal["gt"], Literal["gte"], Literal["lt"], Literal["lte"] + :vartype type: str or str or str or str or str or str + :ivar key: The key to compare against the value. Required. + :vartype key: str + :ivar value: The value to compare against the attribute key; supports string, number, or + boolean types. Required. Is one of the following types: str, int, bool, [Union[str, int]] + :vartype value: str or int or bool or list[str or int] + """ + + type: Literal["eq", "ne", "gt", "gte", "lt", "lte"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Specifies the comparison operator: ``eq``, ``ne``, ``gt``, ``gte``, ``lt``, ``lte``, ``in``, + ``nin``. 
+ + * `eq`: equals + * `ne`: not equal + * `gt`: greater than + * `gte`: greater than or equal + * `lt`: less than + * `lte`: less than or equal + * `in`: in + * `nin`: not in. Required. Is one of the following types: Literal[\"eq\"], + Literal[\"ne\"], Literal[\"gt\"], Literal[\"gte\"], Literal[\"lt\"], Literal[\"lte\"]""" + key: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The key to compare against the value. Required.""" + value: Union[str, int, bool, list[Union[str, int]]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The value to compare against the attribute key; supports string, number, or boolean types. + Required. Is one of the following types: str, int, bool, [Union[str, int]]""" + + @overload + def __init__( + self, + *, + type: Literal["eq", "ne", "gt", "gte", "lt", "lte"], + key: str, + value: Union[str, int, bool, list[Union[str, int]]], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CompoundFilter(_Model): + """Compound Filter. + + :ivar type: Type of operation: ``and`` or ``or``. Required. Is either a Literal["and"] type or + a Literal["or"] type. + :vartype type: str or str + :ivar filters: Array of filters to combine. Items can be ``ComparisonFilter`` or + ``CompoundFilter``. Required. + :vartype filters: list[~azure.ai.agentserver.responses.sdk.models.models.ComparisonFilter or + any] + """ + + type: Literal["and", "or"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Type of operation: ``and`` or ``or``. Required. 
Is either a Literal[\"and\"] type or a + Literal[\"or\"] type.""" + filters: list[Union["_models.ComparisonFilter", Any]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Array of filters to combine. Items can be ``ComparisonFilter`` or ``CompoundFilter``. Required.""" + + @overload + def __init__( + self, + *, + type: Literal["and", "or"], + filters: list[Union["_models.ComparisonFilter", Any]], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ComputerCallOutputItemParam(Item, discriminator="computer_call_output"): + """Computer tool call output. + + :ivar id: + :vartype id: str + :ivar call_id: The ID of the computer tool call that produced the output. Required. + :vartype call_id: str + :ivar type: The type of the computer tool call output. Always ``computer_call_output``. + Required. COMPUTER_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_CALL_OUTPUT + :ivar output: Required. + :vartype output: ~azure.ai.agentserver.responses.sdk.models.models.ComputerScreenshotImage + :ivar acknowledged_safety_checks: + :vartype acknowledged_safety_checks: + list[~azure.ai.agentserver.responses.sdk.models.models.ComputerCallSafetyCheckParam] + :ivar status: Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallItemStatus + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the computer tool call that produced the output. 
Required.""" + type: Literal[ItemType.COMPUTER_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer tool call output. Always ``computer_call_output``. Required. + COMPUTER_CALL_OUTPUT.""" + output: "_models.ComputerScreenshotImage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + acknowledged_safety_checks: Optional[list["_models.ComputerCallSafetyCheckParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + + @overload + def __init__( + self, + *, + call_id: str, + output: "_models.ComputerScreenshotImage", + id: Optional[str] = None, # pylint: disable=redefined-builtin + acknowledged_safety_checks: Optional[list["_models.ComputerCallSafetyCheckParam"]] = None, + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.COMPUTER_CALL_OUTPUT # type: ignore + + +class ComputerCallSafetyCheckParam(_Model): + """A pending safety check for the computer call. + + :ivar id: The ID of the pending safety check. Required. + :vartype id: str + :ivar code: + :vartype code: str + :ivar message: + :vartype message: str + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the pending safety check. 
Required.""" + code: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + message: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + code: Optional[str] = None, + message: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MessageContent(_Model): + """A content part that makes up an input or output item. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ComputerScreenshotContent, MessageContentInputFileContent, MessageContentInputImageContent, + MessageContentInputTextContent, MessageContentOutputTextContent, + MessageContentReasoningTextContent, MessageContentRefusalContent, SummaryTextContent, + TextContent + + :ivar type: Required. Known values are: "input_text", "output_text", "text", "summary_text", + "reasoning_text", "refusal", "input_image", "computer_screenshot", and "input_file". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageContentType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"input_text\", \"output_text\", \"text\", \"summary_text\", + \"reasoning_text\", \"refusal\", \"input_image\", \"computer_screenshot\", and \"input_file\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ComputerScreenshotContent(MessageContent, discriminator="computer_screenshot"): + """Computer screenshot. + + :ivar type: Specifies the event type. For a computer screenshot, this property is always set to + ``computer_screenshot``. Required. COMPUTER_SCREENSHOT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_SCREENSHOT + :ivar image_url: Required. + :vartype image_url: str + :ivar file_id: Required. + :vartype file_id: str + :ivar detail: The detail level of the screenshot image to be sent to the model. One of + ``high``, ``low``, ``auto``, or ``original``. Defaults to ``auto``. Required. Known values are: + "low", "high", "auto", and "original". + :vartype detail: str or ~azure.ai.agentserver.responses.sdk.models.models.ImageDetail + """ + + type: Literal[MessageContentType.COMPUTER_SCREENSHOT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a computer screenshot, this property is always set to + ``computer_screenshot``. Required. COMPUTER_SCREENSHOT.""" + image_url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + file_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + detail: Union[str, "_models.ImageDetail"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The detail level of the screenshot image to be sent to the model. One of ``high``, ``low``, + ``auto``, or ``original``. Defaults to ``auto``. Required. Known values are: \"low\", \"high\", + \"auto\", and \"original\".""" + + @overload + def __init__( + self, + *, + image_url: str, + file_id: str, + detail: Union[str, "_models.ImageDetail"], + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.COMPUTER_SCREENSHOT # type: ignore + + +class ComputerScreenshotImage(_Model): + """A computer screenshot image used with the computer use tool. + + :ivar type: Specifies the event type. For a computer screenshot, this property is always set to + ``computer_screenshot``. Required. Default value is "computer_screenshot". + :vartype type: str + :ivar image_url: The URL of the screenshot image. + :vartype image_url: str + :ivar file_id: The identifier of an uploaded file that contains the screenshot. + :vartype file_id: str + """ + + type: Literal["computer_screenshot"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Specifies the event type. For a computer screenshot, this property is always set to + ``computer_screenshot``. Required. Default value is \"computer_screenshot\".""" + image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the screenshot image.""" + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The identifier of an uploaded file that contains the screenshot.""" + + @overload + def __init__( + self, + *, + image_url: Optional[str] = None, + file_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["computer_screenshot"] = "computer_screenshot" + + +class ComputerTool(Tool, discriminator="computer"): + """Computer. 
+ + :ivar type: The type of the computer tool. Always ``computer``. Required. COMPUTER. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER + """ + + type: Literal[ToolType.COMPUTER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer tool. Always ``computer``. Required. COMPUTER.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.COMPUTER # type: ignore + + +class ComputerUsePreviewTool(Tool, discriminator="computer_use_preview"): + """Computer use preview. + + :ivar type: The type of the computer use tool. Always ``computer_use_preview``. Required. + COMPUTER_USE_PREVIEW. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_USE_PREVIEW + :ivar environment: The type of computer environment to control. Required. Known values are: + "windows", "mac", "linux", "ubuntu", and "browser". + :vartype environment: str or + ~azure.ai.agentserver.responses.sdk.models.models.ComputerEnvironment + :ivar display_width: The width of the computer display. Required. + :vartype display_width: int + :ivar display_height: The height of the computer display. Required. + :vartype display_height: int + """ + + type: Literal[ToolType.COMPUTER_USE_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer use tool. Always ``computer_use_preview``. Required. + COMPUTER_USE_PREVIEW.""" + environment: Union[str, "_models.ComputerEnvironment"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The type of computer environment to control. 
Required. Known values are: \"windows\", \"mac\", + \"linux\", \"ubuntu\", and \"browser\".""" + display_width: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The width of the computer display. Required.""" + display_height: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The height of the computer display. Required.""" + + @overload + def __init__( + self, + *, + environment: Union[str, "_models.ComputerEnvironment"], + display_width: int, + display_height: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.COMPUTER_USE_PREVIEW # type: ignore + + +class FunctionShellToolParamEnvironment(_Model): + """FunctionShellToolParamEnvironment. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ContainerAutoParam, FunctionShellToolParamEnvironmentContainerReferenceParam, + FunctionShellToolParamEnvironmentLocalEnvironmentParam + + :ivar type: Required. Known values are: "container_auto", "local", and "container_reference". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellToolParamEnvironmentType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"container_auto\", \"local\", and \"container_reference\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ContainerAutoParam(FunctionShellToolParamEnvironment, discriminator="container_auto"): + """ContainerAutoParam. + + :ivar type: Automatically creates a container for this request. Required. CONTAINER_AUTO. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CONTAINER_AUTO + :ivar file_ids: An optional list of uploaded files to make available to your code. + :vartype file_ids: list[str] + :ivar memory_limit: Known values are: "1g", "4g", "16g", and "64g". + :vartype memory_limit: str or + ~azure.ai.agentserver.responses.sdk.models.models.ContainerMemoryLimit + :ivar skills: An optional list of skills referenced by id or inline data. + :vartype skills: list[~azure.ai.agentserver.responses.sdk.models.models.ContainerSkill] + :ivar network_policy: + :vartype network_policy: + ~azure.ai.agentserver.responses.sdk.models.models.ContainerNetworkPolicyParam + """ + + type: Literal[FunctionShellToolParamEnvironmentType.CONTAINER_AUTO] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Automatically creates a container for this request. Required. 
CONTAINER_AUTO.""" + file_ids: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An optional list of uploaded files to make available to your code.""" + memory_limit: Optional[Union[str, "_models.ContainerMemoryLimit"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"1g\", \"4g\", \"16g\", and \"64g\".""" + skills: Optional[list["_models.ContainerSkill"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """An optional list of skills referenced by id or inline data.""" + network_policy: Optional["_models.ContainerNetworkPolicyParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + file_ids: Optional[list[str]] = None, + memory_limit: Optional[Union[str, "_models.ContainerMemoryLimit"]] = None, + skills: Optional[list["_models.ContainerSkill"]] = None, + network_policy: Optional["_models.ContainerNetworkPolicyParam"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellToolParamEnvironmentType.CONTAINER_AUTO # type: ignore + + +class ContainerFileCitationBody(Annotation, discriminator="container_file_citation"): + """Container file citation. + + :ivar type: The type of the container file citation. Always ``container_file_citation``. + Required. CONTAINER_FILE_CITATION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CONTAINER_FILE_CITATION + :ivar container_id: The ID of the container file. Required. + :vartype container_id: str + :ivar file_id: The ID of the file. Required. 
+ :vartype file_id: str + :ivar start_index: The index of the first character of the container file citation in the + message. Required. + :vartype start_index: int + :ivar end_index: The index of the last character of the container file citation in the message. + Required. + :vartype end_index: int + :ivar filename: The filename of the container file cited. Required. + :vartype filename: str + """ + + type: Literal[AnnotationType.CONTAINER_FILE_CITATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the container file citation. Always ``container_file_citation``. Required. + CONTAINER_FILE_CITATION.""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the container file. Required.""" + file_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the file. Required.""" + start_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the first character of the container file citation in the message. Required.""" + end_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the last character of the container file citation in the message. Required.""" + filename: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The filename of the container file cited. Required.""" + + @overload + def __init__( + self, + *, + container_id: str, + file_id: str, + start_index: int, + end_index: int, + filename: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = AnnotationType.CONTAINER_FILE_CITATION # type: ignore + + +class ContainerNetworkPolicyParam(_Model): + """Network access policy for the container. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ContainerNetworkPolicyAllowlistParam, ContainerNetworkPolicyDisabledParam + + :ivar type: Required. Known values are: "disabled" and "allowlist". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.ContainerNetworkPolicyParamType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"disabled\" and \"allowlist\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ContainerNetworkPolicyAllowlistParam(ContainerNetworkPolicyParam, discriminator="allowlist"): + """ContainerNetworkPolicyAllowlistParam. + + :ivar type: Allow outbound network access only to specified domains. Always ``allowlist``. + Required. ALLOWLIST. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ALLOWLIST + :ivar allowed_domains: A list of allowed domains when type is ``allowlist``. Required. + :vartype allowed_domains: list[str] + :ivar domain_secrets: Optional domain-scoped secrets for allowlisted domains. 
+ :vartype domain_secrets: + list[~azure.ai.agentserver.responses.sdk.models.models.ContainerNetworkPolicyDomainSecretParam] + """ + + type: Literal[ContainerNetworkPolicyParamType.ALLOWLIST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Allow outbound network access only to specified domains. Always ``allowlist``. Required. + ALLOWLIST.""" + allowed_domains: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A list of allowed domains when type is ``allowlist``. Required.""" + domain_secrets: Optional[list["_models.ContainerNetworkPolicyDomainSecretParam"]] = rest_field( + visibility=["create"] + ) + """Optional domain-scoped secrets for allowlisted domains.""" + + @overload + def __init__( + self, + *, + allowed_domains: list[str], + domain_secrets: Optional[list["_models.ContainerNetworkPolicyDomainSecretParam"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ContainerNetworkPolicyParamType.ALLOWLIST # type: ignore + + +class ContainerNetworkPolicyDisabledParam(ContainerNetworkPolicyParam, discriminator="disabled"): + """ContainerNetworkPolicyDisabledParam. + + :ivar type: Disable outbound network access. Always ``disabled``. Required. DISABLED. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.DISABLED + """ + + type: Literal[ContainerNetworkPolicyParamType.DISABLED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Disable outbound network access. Always ``disabled``. Required. DISABLED.""" + + @overload + def __init__( + self, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ContainerNetworkPolicyParamType.DISABLED # type: ignore + + +class ContainerNetworkPolicyDomainSecretParam(_Model): + """ContainerNetworkPolicyDomainSecretParam. + + :ivar domain: The domain associated with the secret. Required. + :vartype domain: str + :ivar name: The name of the secret to inject for the domain. Required. + :vartype name: str + :ivar value: The secret value to inject for the domain. Required. + :vartype value: str + """ + + domain: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The domain associated with the secret. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the secret to inject for the domain. Required.""" + value: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The secret value to inject for the domain. Required.""" + + @overload + def __init__( + self, + *, + domain: str, + name: str, + value: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallEnvironment(_Model): + """FunctionShellCallEnvironment. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ContainerReferenceResource, LocalEnvironmentResource + + :ivar type: Required. Known values are: "local" and "container_reference". 
+ :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallEnvironmentType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"local\" and \"container_reference\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ContainerReferenceResource(FunctionShellCallEnvironment, discriminator="container_reference"): + """Container Reference. + + :ivar type: The environment type. Always ``container_reference``. Required. + CONTAINER_REFERENCE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CONTAINER_REFERENCE + :ivar container_id: Required. + :vartype container_id: str + """ + + type: Literal[FunctionShellCallEnvironmentType.CONTAINER_REFERENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The environment type. Always ``container_reference``. Required. CONTAINER_REFERENCE.""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + container_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallEnvironmentType.CONTAINER_REFERENCE # type: ignore + + +class ContainerSkill(_Model): + """ContainerSkill. 
+ + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + InlineSkillParam, SkillReferenceParam + + :ivar type: Required. Known values are: "skill_reference" and "inline". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ContainerSkillType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"skill_reference\" and \"inline\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ContextManagementParam(_Model): + """ContextManagementParam. + + :ivar type: The context management entry type. Currently only 'compaction' is supported. + Required. + :vartype type: str + :ivar compact_threshold: + :vartype compact_threshold: int + """ + + type: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The context management entry type. Currently only 'compaction' is supported. Required.""" + compact_threshold: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + type: str, + compact_threshold: Optional[int] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ConversationParam_2(_Model): + """Conversation object. + + :ivar id: The unique ID of the conversation. Required. 
+ :vartype id: str + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the conversation. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ConversationReference(_Model): + """Conversation. + + :ivar id: The unique ID of the conversation that this response was associated with. Required. + :vartype id: str + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the conversation that this response was associated with. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CoordParam(_Model): + """Coordinate. + + :ivar x: The x-coordinate. Required. + :vartype x: int + :ivar y: The y-coordinate. Required. + :vartype y: int + """ + + x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The x-coordinate. Required.""" + y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The y-coordinate. Required.""" + + @overload + def __init__( + self, + *, + x: int, + y: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CreatedBy(_Model): + """CreatedBy. + + :ivar agent: The agent that created the item. + :vartype agent: ~azure.ai.agentserver.responses.sdk.models.models.AgentId + :ivar response_id: The response on which the item is created. + :vartype response_id: str + """ + + agent: Optional["_models.AgentId"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The agent that created the item.""" + response_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The response on which the item is created.""" + + @overload + def __init__( + self, + *, + agent: Optional["_models.AgentId"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CreateResponse(_Model): + """CreateResponse. + + :ivar metadata: + :vartype metadata: ~azure.ai.agentserver.responses.sdk.models.models.Metadata + :ivar top_logprobs: + :vartype top_logprobs: int + :ivar temperature: + :vartype temperature: int + :ivar top_p: + :vartype top_p: int + :ivar user: This field is being replaced by ``safety_identifier`` and ``prompt_cache_key``. Use + ``prompt_cache_key`` instead to maintain caching optimizations. A stable identifier for your + end-users. Used to boost cache hit rates by better bucketing similar requests and to help + OpenAI detect and prevent abuse. `Learn more + `_. + :vartype user: str + :ivar safety_identifier: A stable identifier used to help detect users of your application that + may be violating OpenAI's usage policies. 
The IDs should be a string that uniquely identifies + each user, with a maximum length of 64 characters. We recommend hashing their username or email + address, in order to avoid sending us any identifying information. `Learn more + `_. + :vartype safety_identifier: str + :ivar prompt_cache_key: Used by OpenAI to cache responses for similar requests to optimize your + cache hit rates. Replaces the ``user`` field. `Learn more `_. + :vartype prompt_cache_key: str + :ivar service_tier: Is one of the following types: Literal["auto"], Literal["default"], + Literal["flex"], Literal["scale"], Literal["priority"] + :vartype service_tier: str or str or str or str or str + :ivar prompt_cache_retention: Is either a Literal["in-memory"] type or a Literal["24h"] type. + :vartype prompt_cache_retention: str or str + :ivar previous_response_id: + :vartype previous_response_id: str + :ivar model: The model deployment to use for the creation of this response. + :vartype model: str + :ivar reasoning: + :vartype reasoning: ~azure.ai.agentserver.responses.sdk.models.models.Reasoning + :ivar background: + :vartype background: bool + :ivar max_output_tokens: + :vartype max_output_tokens: int + :ivar max_tool_calls: + :vartype max_tool_calls: int + :ivar text: + :vartype text: ~azure.ai.agentserver.responses.sdk.models.models.ResponseTextParam + :ivar tools: + :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.Tool] + :ivar tool_choice: Is either a Union[str, "_models.ToolChoiceOptions"] type or a + ToolChoiceParam type. + :vartype tool_choice: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolChoiceOptions or + ~azure.ai.agentserver.responses.sdk.models.models.ToolChoiceParam + :ivar prompt: + :vartype prompt: ~azure.ai.agentserver.responses.sdk.models.models.Prompt + :ivar truncation: Is either a Literal["auto"] type or a Literal["disabled"] type. + :vartype truncation: str or str + :ivar input: Is either a str type or a [Item] type. 
+ :vartype input: str or list[~azure.ai.agentserver.responses.sdk.models.models.Item] + :ivar include: + :vartype include: list[str or ~azure.ai.agentserver.responses.sdk.models.models.IncludeEnum] + :ivar parallel_tool_calls: + :vartype parallel_tool_calls: bool + :ivar store: + :vartype store: bool + :ivar instructions: + :vartype instructions: str + :ivar stream: + :vartype stream: bool + :ivar stream_options: + :vartype stream_options: + ~azure.ai.agentserver.responses.sdk.models.models.ResponseStreamOptions + :ivar conversation: Is either a str type or a ConversationParam_2 type. + :vartype conversation: str or + ~azure.ai.agentserver.responses.sdk.models.models.ConversationParam_2 + :ivar context_management: Context management configuration for this request. + :vartype context_management: + list[~azure.ai.agentserver.responses.sdk.models.models.ContextManagementParam] + :ivar agent: (Deprecated) Use agent_reference instead. The agent to use for generating the + response. + :vartype agent: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar agent_reference: The agent to use for generating the response. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar structured_inputs: The structured inputs to the response that can participate in prompt + template substitution or tool argument bindings. + :vartype structured_inputs: dict[str, any] + :ivar agent_session_id: Optional session identifier for sandbox affinity. Currently only + relevant for hosted agents. When provided, the request is routed to the same sandbox. When + omitted, auto-derived from conversation_id/prev_response_id or a new UUID is generated. 
+ :vartype agent_session_id: str + """ + + metadata: Optional["_models.Metadata"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + top_logprobs: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + temperature: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + top_p: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + user: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """This field is being replaced by ``safety_identifier`` and ``prompt_cache_key``. Use + ``prompt_cache_key`` instead to maintain caching optimizations. A stable identifier for your + end-users. Used to boost cache hit rates by better bucketing similar requests and to help + OpenAI detect and prevent abuse. `Learn more + `_.""" + safety_identifier: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A stable identifier used to help detect users of your application that may be violating + OpenAI's usage policies. The IDs should be a string that uniquely identifies each user, with a + maximum length of 64 characters. We recommend hashing their username or email address, in order + to avoid sending us any identifying information. `Learn more + `_.""" + prompt_cache_key: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. + Replaces the ``user`` field. 
`Learn more `_.""" + service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"auto\"], Literal[\"default\"], Literal[\"flex\"], + Literal[\"scale\"], Literal[\"priority\"]""" + prompt_cache_retention: Optional[Literal["in-memory", "24h"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a Literal[\"in-memory\"] type or a Literal[\"24h\"] type.""" + previous_response_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + model: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The model deployment to use for the creation of this response.""" + reasoning: Optional["_models.Reasoning"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + background: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + max_output_tokens: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + max_tool_calls: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + text: Optional["_models.ResponseTextParam"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a Union[str, \"_models.ToolChoiceOptions\"] type or a ToolChoiceParam type.""" + prompt: Optional["_models.Prompt"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + truncation: Optional[Literal["auto", "disabled"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is 
either a Literal[\"auto\"] type or a Literal[\"disabled\"] type.""" + input: Optional["_types.InputParam"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Is either a str type or a [Item] type.""" + include: Optional[list[Union[str, "_models.IncludeEnum"]]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + parallel_tool_calls: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + store: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + instructions: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + stream: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + stream_options: Optional["_models.ResponseStreamOptions"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + conversation: Optional["_types.ConversationParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a str type or a ConversationParam_2 type.""" + context_management: Optional[list["_models.ContextManagementParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Context management configuration for this request.""" + agent: Optional["_models.AgentReference"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """(Deprecated) Use agent_reference instead. 
The agent to use for generating the response.""" + agent_reference: Optional["_models.AgentReference"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The agent to use for generating the response.""" + structured_inputs: Optional[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The structured inputs to the response that can participate in prompt template substitution or + tool argument bindings.""" + agent_session_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional session identifier for sandbox affinity. Currently only relevant for hosted agents. + When provided, the request is routed to the same sandbox. When omitted, auto-derived from + conversation_id/prev_response_id or a new UUID is generated.""" + + @overload + def __init__( # pylint: disable=too-many-locals + self, + *, + metadata: Optional["_models.Metadata"] = None, + top_logprobs: Optional[int] = None, + temperature: Optional[int] = None, + top_p: Optional[int] = None, + user: Optional[str] = None, + safety_identifier: Optional[str] = None, + prompt_cache_key: Optional[str] = None, + service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] = None, + prompt_cache_retention: Optional[Literal["in-memory", "24h"]] = None, + previous_response_id: Optional[str] = None, + model: Optional[str] = None, + reasoning: Optional["_models.Reasoning"] = None, + background: Optional[bool] = None, + max_output_tokens: Optional[int] = None, + max_tool_calls: Optional[int] = None, + text: Optional["_models.ResponseTextParam"] = None, + tools: Optional[list["_models.Tool"]] = None, + tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceParam"]] = None, + prompt: Optional["_models.Prompt"] = None, + truncation: Optional[Literal["auto", "disabled"]] = None, + input: Optional["_types.InputParam"] = None, + include: Optional[list[Union[str, 
"_models.IncludeEnum"]]] = None, + parallel_tool_calls: Optional[bool] = None, + store: Optional[bool] = None, + instructions: Optional[str] = None, + stream: Optional[bool] = None, + stream_options: Optional["_models.ResponseStreamOptions"] = None, + conversation: Optional["_types.ConversationParam"] = None, + context_management: Optional[list["_models.ContextManagementParam"]] = None, + agent: Optional["_models.AgentReference"] = None, + agent_reference: Optional["_models.AgentReference"] = None, + structured_inputs: Optional[dict[str, Any]] = None, + agent_session_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CustomToolParamFormat(_Model): + """The input format for the custom tool. Default is unconstrained text. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + CustomGrammarFormatParam, CustomTextFormatParam + + :ivar type: Required. Known values are: "text" and "grammar". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.CustomToolParamFormatType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"text\" and \"grammar\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class CustomGrammarFormatParam(CustomToolParamFormat, discriminator="grammar"): + """Grammar format. 
+ + :ivar type: Grammar format. Always ``grammar``. Required. GRAMMAR. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.GRAMMAR + :ivar syntax: The syntax of the grammar definition. One of ``lark`` or ``regex``. Required. + Known values are: "lark" and "regex". + :vartype syntax: str or ~azure.ai.agentserver.responses.sdk.models.models.GrammarSyntax1 + :ivar definition: The grammar definition. Required. + :vartype definition: str + """ + + type: Literal[CustomToolParamFormatType.GRAMMAR] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Grammar format. Always ``grammar``. Required. GRAMMAR.""" + syntax: Union[str, "_models.GrammarSyntax1"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The syntax of the grammar definition. One of ``lark`` or ``regex``. Required. Known values are: + \"lark\" and \"regex\".""" + definition: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The grammar definition. Required.""" + + @overload + def __init__( + self, + *, + syntax: Union[str, "_models.GrammarSyntax1"], + definition: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = CustomToolParamFormatType.GRAMMAR # type: ignore + + +class CustomTextFormatParam(CustomToolParamFormat, discriminator="text"): + """Text format. + + :ivar type: Unconstrained text format. Always ``text``. Required. TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TEXT + """ + + type: Literal[CustomToolParamFormatType.TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Unconstrained text format. 
Always ``text``. Required. TEXT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = CustomToolParamFormatType.TEXT # type: ignore + + +class CustomToolParam(Tool, discriminator="custom"): + """Custom tool. + + :ivar type: The type of the custom tool. Always ``custom``. Required. CUSTOM. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM + :ivar name: The name of the custom tool, used to identify it in tool calls. Required. + :vartype name: str + :ivar description: Optional description of the custom tool, used to provide more context. + :vartype description: str + :ivar format: The input format for the custom tool. Default is unconstrained text. + :vartype format: ~azure.ai.agentserver.responses.sdk.models.models.CustomToolParamFormat + :ivar defer_loading: Whether this tool should be deferred and discovered via tool search. + :vartype defer_loading: bool + """ + + type: Literal[ToolType.CUSTOM] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the custom tool. Always ``custom``. Required. CUSTOM.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the custom tool, used to identify it in tool calls. Required.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional description of the custom tool, used to provide more context.""" + format: Optional["_models.CustomToolParamFormat"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The input format for the custom tool. 
Default is unconstrained text.""" + defer_loading: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether this tool should be deferred and discovered via tool search.""" + + @overload + def __init__( + self, + *, + name: str, + description: Optional[str] = None, + format: Optional["_models.CustomToolParamFormat"] = None, + defer_loading: Optional[bool] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.CUSTOM # type: ignore + + +class DeleteResponseResult(_Model): + """The result of a delete response operation. + + :ivar id: The operation ID. Required. + :vartype id: str + :ivar deleted: Always return true. Required. Default value is True. + :vartype deleted: bool + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The operation ID. Required.""" + deleted: Literal[True] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Always return true. Required. Default value is True.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.deleted: Literal[True] = True + + +class DoubleClickAction(ComputerAction, discriminator="double_click"): + """DoubleClick. + + :ivar type: Specifies the event type. For a double click action, this property is always set to + ``double_click``. Required. DOUBLE_CLICK. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.DOUBLE_CLICK + :ivar x: The x-coordinate where the double click occurred. Required. + :vartype x: int + :ivar y: The y-coordinate where the double click occurred. Required. + :vartype y: int + """ + + type: Literal[ComputerActionType.DOUBLE_CLICK] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a double click action, this property is always set to + ``double_click``. Required. DOUBLE_CLICK.""" + x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The x-coordinate where the double click occurred. Required.""" + y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The y-coordinate where the double click occurred. Required.""" + + @overload + def __init__( + self, + *, + x: int, + y: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.DOUBLE_CLICK # type: ignore + + +class DragParam(ComputerAction, discriminator="drag"): + """Drag. + + :ivar type: Specifies the event type. For a drag action, this property is always set to + ``drag``. Required. DRAG. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.DRAG + :ivar path: An array of coordinates representing the path of the drag action. Coordinates will + appear as an array of objects, e.g. + + .. code-block:: + + [ + { x: 100, y: 200 }, + { x: 200, y: 300 } + ]. Required. 
+ :vartype path: list[~azure.ai.agentserver.responses.sdk.models.models.CoordParam] + """ + + type: Literal[ComputerActionType.DRAG] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a drag action, this property is always set to ``drag``. Required. + DRAG.""" + path: list["_models.CoordParam"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An array of coordinates representing the path of the drag action. Coordinates will appear as an + array of objects, e.g. + + .. code-block:: + + [ + { x: 100, y: 200 }, + { x: 200, y: 300 } + ]. Required.""" + + @overload + def __init__( + self, + *, + path: list["_models.CoordParam"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.DRAG # type: ignore + + +class EmptyModelParam(_Model): + """EmptyModelParam.""" + + +class Error(_Model): + """Error. + + :ivar code: Required. + :vartype code: str + :ivar message: Required. 
+ :vartype message: str + :ivar param: + :vartype param: str + :ivar type: + :vartype type: str + :ivar details: + :vartype details: list[~azure.ai.agentserver.responses.sdk.models.models.Error] + :ivar additional_info: + :vartype additional_info: dict[str, any] + :ivar debug_info: + :vartype debug_info: dict[str, any] + """ + + code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + param: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + type: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + details: Optional[list["_models.Error"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + additional_info: Optional[dict[str, Any]] = rest_field( + name="additionalInfo", visibility=["read", "create", "update", "delete", "query"] + ) + debug_info: Optional[dict[str, Any]] = rest_field( + name="debugInfo", visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + code: str, + message: str, + param: Optional[str] = None, + type: Optional[str] = None, + details: Optional[list["_models.Error"]] = None, + additional_info: Optional[dict[str, Any]] = None, + debug_info: Optional[dict[str, Any]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FabricDataAgentToolCall(OutputItem, discriminator="fabric_dataagent_preview_call"): + """A Fabric data agent tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. FABRIC_DATAAGENT_PREVIEW_CALL. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FABRIC_DATAAGENT_PREVIEW_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.FABRIC_DATAAGENT_PREVIEW_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. FABRIC_DATAAGENT_PREVIEW_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. 
Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.FABRIC_DATAAGENT_PREVIEW_CALL # type: ignore + + +class FabricDataAgentToolCallOutput(OutputItem, discriminator="fabric_dataagent_preview_call_output"): + """The output of a Fabric data agent tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the Fabric data agent tool call. Is one of the following types: + {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". 
+ :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the Fabric data agent tool call. Is one of the following types: {str: Any}, + str, [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.FABRIC_DATAAGENT_PREVIEW_CALL_OUTPUT # type: ignore + + +class FabricDataAgentToolParameters(_Model): + """The fabric data agent tool parameters. + + :ivar project_connections: The project connections attached to this tool. There can be a + maximum of 1 connection resource attached to the tool. 
+ :vartype project_connections: + list[~azure.ai.agentserver.responses.sdk.models.models.ToolProjectConnection] + """ + + project_connections: Optional[list["_models.ToolProjectConnection"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The project connections attached to this tool. There can be a maximum of 1 connection resource + attached to the tool.""" + + @overload + def __init__( + self, + *, + project_connections: Optional[list["_models.ToolProjectConnection"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FileCitationBody(Annotation, discriminator="file_citation"): + """File citation. + + :ivar type: The type of the file citation. Always ``file_citation``. Required. FILE_CITATION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_CITATION + :ivar file_id: The ID of the file. Required. + :vartype file_id: str + :ivar index: The index of the file in the list of files. Required. + :vartype index: int + :ivar filename: The filename of the file cited. Required. + :vartype filename: str + """ + + type: Literal[AnnotationType.FILE_CITATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the file citation. Always ``file_citation``. Required. FILE_CITATION.""" + file_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the file. Required.""" + index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the file in the list of files. Required.""" + filename: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The filename of the file cited. 
Required.""" + + @overload + def __init__( + self, + *, + file_id: str, + index: int, + filename: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = AnnotationType.FILE_CITATION # type: ignore + + +class FilePath(Annotation, discriminator="file_path"): + """File path. + + :ivar type: The type of the file path. Always ``file_path``. Required. FILE_PATH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_PATH + :ivar file_id: The ID of the file. Required. + :vartype file_id: str + :ivar index: The index of the file in the list of files. Required. + :vartype index: int + """ + + type: Literal[AnnotationType.FILE_PATH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the file path. Always ``file_path``. Required. FILE_PATH.""" + file_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the file. Required.""" + index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the file in the list of files. Required.""" + + @overload + def __init__( + self, + *, + file_id: str, + index: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = AnnotationType.FILE_PATH # type: ignore + + +class FileSearchTool(Tool, discriminator="file_search"): + """File search. + + :ivar type: The type of the file search tool. Always ``file_search``. Required. FILE_SEARCH. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_SEARCH + :ivar vector_store_ids: The IDs of the vector stores to search. Required. + :vartype vector_store_ids: list[str] + :ivar max_num_results: The maximum number of results to return. This number should be between 1 + and 50 inclusive. + :vartype max_num_results: int + :ivar ranking_options: Ranking options for search. + :vartype ranking_options: ~azure.ai.agentserver.responses.sdk.models.models.RankingOptions + :ivar filters: Is either a ComparisonFilter type or a CompoundFilter type. + :vartype filters: ~azure.ai.agentserver.responses.sdk.models.models.ComparisonFilter or + ~azure.ai.agentserver.responses.sdk.models.models.CompoundFilter + """ + + type: Literal[ToolType.FILE_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the file search tool. Always ``file_search``. Required. FILE_SEARCH.""" + vector_store_ids: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The IDs of the vector stores to search. Required.""" + max_num_results: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The maximum number of results to return. This number should be between 1 and 50 inclusive.""" + ranking_options: Optional["_models.RankingOptions"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Ranking options for search.""" + filters: Optional["_types.Filters"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Is either a ComparisonFilter type or a CompoundFilter type.""" + + @overload + def __init__( + self, + *, + vector_store_ids: list[str], + max_num_results: Optional[int] = None, + ranking_options: Optional["_models.RankingOptions"] = None, + filters: Optional["_types.Filters"] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.FILE_SEARCH # type: ignore + + +class FileSearchToolCallResults(_Model): + """FileSearchToolCallResults. + + :ivar file_id: + :vartype file_id: str + :ivar text: + :vartype text: str + :ivar filename: + :vartype filename: str + :ivar attributes: + :vartype attributes: + ~azure.ai.agentserver.responses.sdk.models.models.VectorStoreFileAttributes + :ivar score: + :vartype score: float + """ + + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + text: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + attributes: Optional["_models.VectorStoreFileAttributes"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + score: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + file_id: Optional[str] = None, + text: Optional[str] = None, + filename: Optional[str] = None, + attributes: Optional["_models.VectorStoreFileAttributes"] = None, + score: Optional[float] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionAndCustomToolCallOutput(_Model): + """FunctionAndCustomToolCallOutput. + + You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: + FunctionAndCustomToolCallOutputInputFileContent, + FunctionAndCustomToolCallOutputInputImageContent, + FunctionAndCustomToolCallOutputInputTextContent + + :ivar type: Required. Known values are: "input_text", "input_image", and "input_file". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionAndCustomToolCallOutputType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"input_text\", \"input_image\", and \"input_file\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionAndCustomToolCallOutputInputFileContent( + FunctionAndCustomToolCallOutput, discriminator="input_file" +): # pylint: disable=name-too-long + """Input file. + + :ivar type: The type of the input item. Always ``input_file``. Required. INPUT_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INPUT_FILE + :ivar file_id: + :vartype file_id: str + :ivar filename: The name of the file to be sent to the model. + :vartype filename: str + :ivar file_data: The content of the file to be sent to the model. + :vartype file_data: str + :ivar file_url: The URL of the file to be sent to the model. + :vartype file_url: str + """ + + type: Literal[FunctionAndCustomToolCallOutputType.INPUT_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the input item. Always ``input_file``. Required. 
INPUT_FILE.""" + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the file to be sent to the model.""" + file_data: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content of the file to be sent to the model.""" + file_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the file to be sent to the model.""" + + @overload + def __init__( + self, + *, + file_id: Optional[str] = None, + filename: Optional[str] = None, + file_data: Optional[str] = None, + file_url: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionAndCustomToolCallOutputType.INPUT_FILE # type: ignore + + +class FunctionAndCustomToolCallOutputInputImageContent( + FunctionAndCustomToolCallOutput, discriminator="input_image" +): # pylint: disable=name-too-long + """Input image. + + :ivar type: The type of the input item. Always ``input_image``. Required. INPUT_IMAGE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INPUT_IMAGE + :ivar image_url: + :vartype image_url: str + :ivar file_id: + :vartype file_id: str + :ivar detail: The detail level of the image to be sent to the model. One of ``high``, ``low``, + ``auto``, or ``original``. Defaults to ``auto``. Required. Known values are: "low", "high", + "auto", and "original". 
+ :vartype detail: str or ~azure.ai.agentserver.responses.sdk.models.models.ImageDetail + """ + + type: Literal[FunctionAndCustomToolCallOutputType.INPUT_IMAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the input item. Always ``input_image``. Required. INPUT_IMAGE.""" + image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + detail: Union[str, "_models.ImageDetail"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The detail level of the image to be sent to the model. One of ``high``, ``low``, ``auto``, or + ``original``. Defaults to ``auto``. Required. Known values are: \"low\", \"high\", \"auto\", + and \"original\".""" + + @overload + def __init__( + self, + *, + detail: Union[str, "_models.ImageDetail"], + image_url: Optional[str] = None, + file_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionAndCustomToolCallOutputType.INPUT_IMAGE # type: ignore + + +class FunctionAndCustomToolCallOutputInputTextContent( + FunctionAndCustomToolCallOutput, discriminator="input_text" +): # pylint: disable=name-too-long + """Input text. + + :ivar type: The type of the input item. Always ``input_text``. Required. INPUT_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INPUT_TEXT + :ivar text: The text input to the model. Required. 
+ :vartype text: str + """ + + type: Literal[FunctionAndCustomToolCallOutputType.INPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the input item. Always ``input_text``. Required. INPUT_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text input to the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionAndCustomToolCallOutputType.INPUT_TEXT # type: ignore + + +class FunctionCallOutputItemParam(Item, discriminator="function_call_output"): + """Function tool call output. + + :ivar id: + :vartype id: str + :ivar call_id: The unique ID of the function tool call generated by the model. Required. + :vartype call_id: str + :ivar type: The type of the function tool call output. Always ``function_call_output``. + Required. FUNCTION_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION_CALL_OUTPUT + :ivar output: Text, image, or file output of the function tool call. Required. Is either a str + type or a [Union["_models.InputTextContentParam", "_models.InputImageContentParamAutoParam", + "_models.InputFileContentParam"]] type. + :vartype output: str or + list[~azure.ai.agentserver.responses.sdk.models.models.InputTextContentParam or + ~azure.ai.agentserver.responses.sdk.models.models.InputImageContentParamAutoParam or + ~azure.ai.agentserver.responses.sdk.models.models.InputFileContentParam] + :ivar status: Known values are: "in_progress", "completed", and "incomplete". 
+ :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallItemStatus + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the function tool call generated by the model. Required.""" + type: Literal[ItemType.FUNCTION_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the function tool call output. Always ``function_call_output``. Required. + FUNCTION_CALL_OUTPUT.""" + output: Union[ + str, + list[ + Union[ + "_models.InputTextContentParam", + "_models.InputImageContentParamAutoParam", + "_models.InputFileContentParam", + ] + ], + ] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Text, image, or file output of the function tool call. Required. Is either a str type or a + [Union[\"_models.InputTextContentParam\", \"_models.InputImageContentParamAutoParam\", + \"_models.InputFileContentParam\"]] type.""" + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + + @overload + def __init__( + self, + *, + call_id: str, + output: Union[ + str, + list[ + Union[ + "_models.InputTextContentParam", + "_models.InputImageContentParamAutoParam", + "_models.InputFileContentParam", + ] + ], + ], + id: Optional[str] = None, # pylint: disable=redefined-builtin + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.FUNCTION_CALL_OUTPUT # type: ignore + + +class FunctionShellAction(_Model): + """Shell exec action. + + :ivar commands: Required. + :vartype commands: list[str] + :ivar timeout_ms: Required. + :vartype timeout_ms: int + :ivar max_output_length: Required. + :vartype max_output_length: int + """ + + commands: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + timeout_ms: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + max_output_length: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + commands: list[str], + timeout_ms: int, + max_output_length: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellActionParam(_Model): + """Shell action. + + :ivar commands: Ordered shell commands for the execution environment to run. Required. + :vartype commands: list[str] + :ivar timeout_ms: + :vartype timeout_ms: int + :ivar max_output_length: + :vartype max_output_length: int + """ + + commands: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Ordered shell commands for the execution environment to run. 
Required.""" + timeout_ms: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + max_output_length: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + commands: list[str], + timeout_ms: Optional[int] = None, + max_output_length: Optional[int] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallItemParam(Item, discriminator="shell_call"): + """Shell tool call. + + :ivar id: + :vartype id: str + :ivar call_id: The unique ID of the shell tool call generated by the model. Required. + :vartype call_id: str + :ivar type: The type of the item. Always ``shell_call``. Required. SHELL_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL_CALL + :ivar action: The shell commands and limits that describe how to run the tool call. Required. + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellActionParam + :ivar status: Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallItemStatus + :ivar environment: + :vartype environment: + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallItemParamEnvironment + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call generated by the model. 
Required.""" + type: Literal[ItemType.SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``shell_call``. Required. SHELL_CALL.""" + action: "_models.FunctionShellActionParam" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The shell commands and limits that describe how to run the tool call. Required.""" + status: Optional[Union[str, "_models.FunctionShellCallItemStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + environment: Optional["_models.FunctionShellCallItemParamEnvironment"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + call_id: str, + action: "_models.FunctionShellActionParam", + id: Optional[str] = None, # pylint: disable=redefined-builtin + status: Optional[Union[str, "_models.FunctionShellCallItemStatus"]] = None, + environment: Optional["_models.FunctionShellCallItemParamEnvironment"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.SHELL_CALL # type: ignore + + +class FunctionShellCallItemParamEnvironment(_Model): + """The environment to execute the shell commands in. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + FunctionShellCallItemParamEnvironmentContainerReferenceParam, + FunctionShellCallItemParamEnvironmentLocalEnvironmentParam + + :ivar type: Required. Known values are: "local" and "container_reference". 
+ :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallItemParamEnvironmentType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"local\" and \"container_reference\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallItemParamEnvironmentContainerReferenceParam( + FunctionShellCallItemParamEnvironment, discriminator="container_reference" +): # pylint: disable=name-too-long + """FunctionShellCallItemParamEnvironmentContainerReferenceParam. + + :ivar type: References a container created with the /v1/containers endpoint. Required. + CONTAINER_REFERENCE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CONTAINER_REFERENCE + :ivar container_id: The ID of the referenced container. Required. + :vartype container_id: str + """ + + type: Literal[FunctionShellCallItemParamEnvironmentType.CONTAINER_REFERENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """References a container created with the /v1/containers endpoint. Required. CONTAINER_REFERENCE.""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the referenced container. Required.""" + + @overload + def __init__( + self, + *, + container_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallItemParamEnvironmentType.CONTAINER_REFERENCE # type: ignore + + +class FunctionShellCallItemParamEnvironmentLocalEnvironmentParam( + FunctionShellCallItemParamEnvironment, discriminator="local" +): # pylint: disable=name-too-long + """FunctionShellCallItemParamEnvironmentLocalEnvironmentParam. + + :ivar type: Use a local computer environment. Required. LOCAL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL + :ivar skills: An optional list of skills. + :vartype skills: list[~azure.ai.agentserver.responses.sdk.models.models.LocalSkillParam] + """ + + type: Literal[FunctionShellCallItemParamEnvironmentType.LOCAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Use a local computer environment. Required. LOCAL.""" + skills: Optional[list["_models.LocalSkillParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """An optional list of skills.""" + + @overload + def __init__( + self, + *, + skills: Optional[list["_models.LocalSkillParam"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallItemParamEnvironmentType.LOCAL # type: ignore + + +class FunctionShellCallOutputContent(_Model): + """Shell call output content. + + :ivar stdout: The standard output that was captured. Required. + :vartype stdout: str + :ivar stderr: The standard error output that was captured. Required. + :vartype stderr: str + :ivar outcome: Shell call outcome. Required. 
+ :vartype outcome: + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputOutcome + :ivar created_by: The identifier of the actor that created the item. + :vartype created_by: str + """ + + stdout: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The standard output that was captured. Required.""" + stderr: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The standard error output that was captured. Required.""" + outcome: "_models.FunctionShellCallOutputOutcome" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Shell call outcome. Required.""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The identifier of the actor that created the item.""" + + @overload + def __init__( + self, + *, + stdout: str, + stderr: str, + outcome: "_models.FunctionShellCallOutputOutcome", + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallOutputContentParam(_Model): + """Shell output content. + + :ivar stdout: Captured stdout output for the shell call. Required. + :vartype stdout: str + :ivar stderr: Captured stderr output for the shell call. Required. + :vartype stderr: str + :ivar outcome: The exit or timeout outcome associated with this shell call. Required. + :vartype outcome: + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputOutcomeParam + """ + + stdout: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Captured stdout output for the shell call. 
Required.""" + stderr: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Captured stderr output for the shell call. Required.""" + outcome: "_models.FunctionShellCallOutputOutcomeParam" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The exit or timeout outcome associated with this shell call. Required.""" + + @overload + def __init__( + self, + *, + stdout: str, + stderr: str, + outcome: "_models.FunctionShellCallOutputOutcomeParam", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallOutputOutcome(_Model): + """Shell call outcome. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + FunctionShellCallOutputExitOutcome, FunctionShellCallOutputTimeoutOutcome + + :ivar type: Required. Known values are: "timeout" and "exit". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputOutcomeType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"timeout\" and \"exit\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallOutputExitOutcome(FunctionShellCallOutputOutcome, discriminator="exit"): + """Shell call exit outcome. + + :ivar type: The outcome type. Always ``exit``. Required. EXIT. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.EXIT + :ivar exit_code: Exit code from the shell process. Required. + :vartype exit_code: int + """ + + type: Literal[FunctionShellCallOutputOutcomeType.EXIT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The outcome type. Always ``exit``. Required. EXIT.""" + exit_code: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Exit code from the shell process. Required.""" + + @overload + def __init__( + self, + *, + exit_code: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallOutputOutcomeType.EXIT # type: ignore + + +class FunctionShellCallOutputOutcomeParam(_Model): + """Shell call outcome. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + FunctionShellCallOutputExitOutcomeParam, FunctionShellCallOutputTimeoutOutcomeParam + + :ivar type: Required. Known values are: "timeout" and "exit". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputOutcomeParamType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"timeout\" and \"exit\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class FunctionShellCallOutputExitOutcomeParam(FunctionShellCallOutputOutcomeParam, discriminator="exit"): + """Shell call exit outcome. + + :ivar type: The outcome type. Always ``exit``. Required. EXIT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.EXIT + :ivar exit_code: The exit code returned by the shell process. Required. + :vartype exit_code: int + """ + + type: Literal[FunctionShellCallOutputOutcomeParamType.EXIT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The outcome type. Always ``exit``. Required. EXIT.""" + exit_code: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The exit code returned by the shell process. Required.""" + + @overload + def __init__( + self, + *, + exit_code: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallOutputOutcomeParamType.EXIT # type: ignore + + +class FunctionShellCallOutputItemParam(Item, discriminator="shell_call_output"): + """Shell tool call output. + + :ivar id: + :vartype id: str + :ivar call_id: The unique ID of the shell tool call generated by the model. Required. + :vartype call_id: str + :ivar type: The type of the item. Always ``shell_call_output``. Required. SHELL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL_CALL_OUTPUT + :ivar output: Captured chunks of stdout and stderr output, along with their associated + outcomes. Required. 
+ :vartype output: + list[~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputContentParam] + :ivar status: Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallItemStatus + :ivar max_output_length: + :vartype max_output_length: int + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call generated by the model. Required.""" + type: Literal[ItemType.SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``shell_call_output``. Required. SHELL_CALL_OUTPUT.""" + output: list["_models.FunctionShellCallOutputContentParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Captured chunks of stdout and stderr output, along with their associated outcomes. Required.""" + status: Optional[Union[str, "_models.FunctionShellCallItemStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + max_output_length: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + call_id: str, + output: list["_models.FunctionShellCallOutputContentParam"], + id: Optional[str] = None, # pylint: disable=redefined-builtin + status: Optional[Union[str, "_models.FunctionShellCallItemStatus"]] = None, + max_output_length: Optional[int] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.SHELL_CALL_OUTPUT # type: ignore + + +class FunctionShellCallOutputTimeoutOutcome(FunctionShellCallOutputOutcome, discriminator="timeout"): + """Shell call timeout outcome. + + :ivar type: The outcome type. Always ``timeout``. Required. TIMEOUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TIMEOUT + """ + + type: Literal[FunctionShellCallOutputOutcomeType.TIMEOUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The outcome type. Always ``timeout``. Required. TIMEOUT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallOutputOutcomeType.TIMEOUT # type: ignore + + +class FunctionShellCallOutputTimeoutOutcomeParam( + FunctionShellCallOutputOutcomeParam, discriminator="timeout" +): # pylint: disable=name-too-long + """Shell call timeout outcome. + + :ivar type: The outcome type. Always ``timeout``. Required. TIMEOUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TIMEOUT + """ + + type: Literal[FunctionShellCallOutputOutcomeParamType.TIMEOUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The outcome type. Always ``timeout``. Required. TIMEOUT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallOutputOutcomeParamType.TIMEOUT # type: ignore + + +class FunctionShellToolParam(Tool, discriminator="shell"): + """Shell tool. + + :ivar type: The type of the shell tool. Always ``shell``. Required. SHELL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL + :ivar environment: + :vartype environment: + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellToolParamEnvironment + """ + + type: Literal[ToolType.SHELL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the shell tool. Always ``shell``. Required. SHELL.""" + environment: Optional["_models.FunctionShellToolParamEnvironment"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + environment: Optional["_models.FunctionShellToolParamEnvironment"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.SHELL # type: ignore + + +class FunctionShellToolParamEnvironmentContainerReferenceParam( + FunctionShellToolParamEnvironment, discriminator="container_reference" +): # pylint: disable=name-too-long + """FunctionShellToolParamEnvironmentContainerReferenceParam. + + :ivar type: References a container created with the /v1/containers endpoint. Required. + CONTAINER_REFERENCE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CONTAINER_REFERENCE + :ivar container_id: The ID of the referenced container. Required. 
+ :vartype container_id: str + """ + + type: Literal[FunctionShellToolParamEnvironmentType.CONTAINER_REFERENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """References a container created with the /v1/containers endpoint. Required. CONTAINER_REFERENCE.""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the referenced container. Required.""" + + @overload + def __init__( + self, + *, + container_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellToolParamEnvironmentType.CONTAINER_REFERENCE # type: ignore + + +class FunctionShellToolParamEnvironmentLocalEnvironmentParam( + FunctionShellToolParamEnvironment, discriminator="local" +): # pylint: disable=name-too-long + """FunctionShellToolParamEnvironmentLocalEnvironmentParam. + + :ivar type: Use a local computer environment. Required. LOCAL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL + :ivar skills: An optional list of skills. + :vartype skills: list[~azure.ai.agentserver.responses.sdk.models.models.LocalSkillParam] + """ + + type: Literal[FunctionShellToolParamEnvironmentType.LOCAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Use a local computer environment. Required. LOCAL.""" + skills: Optional[list["_models.LocalSkillParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """An optional list of skills.""" + + @overload + def __init__( + self, + *, + skills: Optional[list["_models.LocalSkillParam"]] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellToolParamEnvironmentType.LOCAL # type: ignore + + +class FunctionTool(Tool, discriminator="function"): + """Function. + + :ivar type: The type of the function tool. Always ``function``. Required. FUNCTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION + :ivar name: The name of the function to call. Required. + :vartype name: str + :ivar description: + :vartype description: str + :ivar parameters: Required. + :vartype parameters: dict[str, any] + :ivar strict: Required. + :vartype strict: bool + :ivar defer_loading: Whether this function is deferred and loaded via tool search. + :vartype defer_loading: bool + """ + + type: Literal[ToolType.FUNCTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the function tool. Always ``function``. Required. FUNCTION.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to call. 
Required.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + parameters: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + strict: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + defer_loading: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether this function is deferred and loaded via tool search.""" + + @overload + def __init__( + self, + *, + name: str, + parameters: dict[str, Any], + strict: bool, + description: Optional[str] = None, + defer_loading: Optional[bool] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.FUNCTION # type: ignore + + +class FunctionToolParam(_Model): + """FunctionToolParam. + + :ivar name: Required. + :vartype name: str + :ivar description: + :vartype description: str + :ivar parameters: + :vartype parameters: ~azure.ai.agentserver.responses.sdk.models.models.EmptyModelParam + :ivar strict: + :vartype strict: bool + :ivar type: Required. Default value is "function". + :vartype type: str + :ivar defer_loading: Whether this function should be deferred and discovered via tool search. 
+ :vartype defer_loading: bool
+ """
+
+ name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Required."""
+ description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ parameters: Optional["_models.EmptyModelParam"] = rest_field(
+ visibility=["read", "create", "update", "delete", "query"]
+ )
+ strict: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ type: Literal["function"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Required. Default value is \"function\"."""
+ defer_loading: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Whether this function should be deferred and discovered via tool search."""
+
+ @overload
+ def __init__(
+ self,
+ *,
+ name: str,
+ description: Optional[str] = None,
+ parameters: Optional["_models.EmptyModelParam"] = None,
+ strict: Optional[bool] = None,
+ defer_loading: Optional[bool] = None,
+ ) -> None: ...
+
+ @overload
+ def __init__(self, mapping: Mapping[str, Any]) -> None:
+ """
+ :param mapping: raw JSON to initialize the model.
+ :type mapping: Mapping[str, Any]
+ """
+
+ def __init__(self, *args: Any, **kwargs: Any) -> None:
+ super().__init__(*args, **kwargs)
+ self.type: Literal["function"] = "function"
+
+
+class HybridSearchOptions(_Model):
+ """HybridSearchOptions.
+
+ :ivar embedding_weight: The weight of the embedding in the reciprocal rank fusion. Required.
+ :vartype embedding_weight: int
+ :ivar text_weight: The weight of the text in the reciprocal rank fusion. Required.
+ :vartype text_weight: int
+ """
+
+ embedding_weight: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The weight of the embedding in the reciprocal rank fusion. 
Required.""" + text_weight: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The weight of the text in the reciprocal ranking fusion. Required.""" + + @overload + def __init__( + self, + *, + embedding_weight: int, + text_weight: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ImageGenTool(Tool, discriminator="image_generation"): + """Image generation tool. + + :ivar type: The type of the image generation tool. Always ``image_generation``. Required. + IMAGE_GENERATION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.IMAGE_GENERATION + :ivar model: Is one of the following types: Literal["gpt-image-1"], + Literal["gpt-image-1-mini"], Literal["gpt-image-1.5"], str + :vartype model: str or str or str or str + :ivar quality: The quality of the generated image. One of ``low``, ``medium``, ``high``, or + ``auto``. Default: ``auto``. Is one of the following types: Literal["low"], Literal["medium"], + Literal["high"], Literal["auto"] + :vartype quality: str or str or str or str + :ivar size: The size of the generated image. One of ``1024x1024``, ``1024x1536``, + ``1536x1024``, or ``auto``. Default: ``auto``. Is one of the following types: + Literal["1024x1024"], Literal["1024x1536"], Literal["1536x1024"], Literal["auto"] + :vartype size: str or str or str or str + :ivar output_format: The output format of the generated image. One of ``png``, ``webp``, or + ``jpeg``. Default: ``png``. Is one of the following types: Literal["png"], Literal["webp"], + Literal["jpeg"] + :vartype output_format: str or str or str + :ivar output_compression: Compression level for the output image. Default: 100. 
+ :vartype output_compression: int + :ivar moderation: Moderation level for the generated image. Default: ``auto``. Is either a + Literal["auto"] type or a Literal["low"] type. + :vartype moderation: str or str + :ivar background: Background type for the generated image. One of ``transparent``, ``opaque``, + or ``auto``. Default: ``auto``. Is one of the following types: Literal["transparent"], + Literal["opaque"], Literal["auto"] + :vartype background: str or str or str + :ivar input_fidelity: Known values are: "high" and "low". + :vartype input_fidelity: str or ~azure.ai.agentserver.responses.sdk.models.models.InputFidelity + :ivar input_image_mask: Optional mask for inpainting. Contains ``image_url`` (string, optional) + and ``file_id`` (string, optional). + :vartype input_image_mask: + ~azure.ai.agentserver.responses.sdk.models.models.ImageGenToolInputImageMask + :ivar partial_images: Number of partial images to generate in streaming mode, from 0 (default + value) to 3. + :vartype partial_images: int + :ivar action: Whether to generate a new image or edit an existing image. Default: ``auto``. + Known values are: "generate", "edit", and "auto". + :vartype action: str or ~azure.ai.agentserver.responses.sdk.models.models.ImageGenActionEnum + """ + + type: Literal[ToolType.IMAGE_GENERATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the image generation tool. Always ``image_generation``. Required. 
IMAGE_GENERATION.""" + model: Optional[Union[Literal["gpt-image-1"], Literal["gpt-image-1-mini"], Literal["gpt-image-1.5"], str]] = ( + rest_field(visibility=["read", "create", "update", "delete", "query"]) + ) + """Is one of the following types: Literal[\"gpt-image-1\"], Literal[\"gpt-image-1-mini\"], + Literal[\"gpt-image-1.5\"], str""" + quality: Optional[Literal["low", "medium", "high", "auto"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The quality of the generated image. One of ``low``, ``medium``, ``high``, or ``auto``. Default: + ``auto``. Is one of the following types: Literal[\"low\"], Literal[\"medium\"], + Literal[\"high\"], Literal[\"auto\"]""" + size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The size of the generated image. One of ``1024x1024``, ``1024x1536``, ``1536x1024``, or + ``auto``. Default: ``auto``. Is one of the following types: Literal[\"1024x1024\"], + Literal[\"1024x1536\"], Literal[\"1536x1024\"], Literal[\"auto\"]""" + output_format: Optional[Literal["png", "webp", "jpeg"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output format of the generated image. One of ``png``, ``webp``, or ``jpeg``. Default: + ``png``. Is one of the following types: Literal[\"png\"], Literal[\"webp\"], Literal[\"jpeg\"]""" + output_compression: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Compression level for the output image. Default: 100.""" + moderation: Optional[Literal["auto", "low"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Moderation level for the generated image. Default: ``auto``. 
Is either a Literal[\"auto\"] type + or a Literal[\"low\"] type.""" + background: Optional[Literal["transparent", "opaque", "auto"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Background type for the generated image. One of ``transparent``, ``opaque``, or ``auto``. + Default: ``auto``. Is one of the following types: Literal[\"transparent\"], + Literal[\"opaque\"], Literal[\"auto\"]""" + input_fidelity: Optional[Union[str, "_models.InputFidelity"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"high\" and \"low\".""" + input_image_mask: Optional["_models.ImageGenToolInputImageMask"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Optional mask for inpainting. Contains ``image_url`` (string, optional) and ``file_id`` + (string, optional).""" + partial_images: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Number of partial images to generate in streaming mode, from 0 (default value) to 3.""" + action: Optional[Union[str, "_models.ImageGenActionEnum"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Whether to generate a new image or edit an existing image. Default: ``auto``. 
Known values are: + \"generate\", \"edit\", and \"auto\".""" + + @overload + def __init__( + self, + *, + model: Optional[ + Union[Literal["gpt-image-1"], Literal["gpt-image-1-mini"], Literal["gpt-image-1.5"], str] + ] = None, + quality: Optional[Literal["low", "medium", "high", "auto"]] = None, + size: Optional[Literal["1024x1024", "1024x1536", "1536x1024", "auto"]] = None, + output_format: Optional[Literal["png", "webp", "jpeg"]] = None, + output_compression: Optional[int] = None, + moderation: Optional[Literal["auto", "low"]] = None, + background: Optional[Literal["transparent", "opaque", "auto"]] = None, + input_fidelity: Optional[Union[str, "_models.InputFidelity"]] = None, + input_image_mask: Optional["_models.ImageGenToolInputImageMask"] = None, + partial_images: Optional[int] = None, + action: Optional[Union[str, "_models.ImageGenActionEnum"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.IMAGE_GENERATION # type: ignore + + +class ImageGenToolInputImageMask(_Model): + """ImageGenToolInputImageMask. + + :ivar image_url: + :vartype image_url: str + :ivar file_id: + :vartype file_id: str + """ + + image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + image_url: Optional[str] = None, + file_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class InlineSkillParam(ContainerSkill, discriminator="inline"): + """InlineSkillParam. + + :ivar type: Defines an inline skill for this request. Required. INLINE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INLINE + :ivar name: The name of the skill. Required. + :vartype name: str + :ivar description: The description of the skill. Required. + :vartype description: str + :ivar source: Inline skill payload. Required. + :vartype source: ~azure.ai.agentserver.responses.sdk.models.models.InlineSkillSourceParam + """ + + type: Literal[ContainerSkillType.INLINE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Defines an inline skill for this request. Required. INLINE.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the skill. Required.""" + description: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The description of the skill. Required.""" + source: "_models.InlineSkillSourceParam" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Inline skill payload. Required.""" + + @overload + def __init__( + self, + *, + name: str, + description: str, + source: "_models.InlineSkillSourceParam", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ContainerSkillType.INLINE # type: ignore + + +class InlineSkillSourceParam(_Model): + """Inline skill payload. + + :ivar type: The type of the inline skill source. Must be ``base64``. Required. Default value is + "base64". 
+ :vartype type: str + :ivar media_type: The media type of the inline skill payload. Must be ``application/zip``. + Required. Default value is "application/zip". + :vartype media_type: str + :ivar data: Base64-encoded skill zip bundle. Required. + :vartype data: str + """ + + type: Literal["base64"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the inline skill source. Must be ``base64``. Required. Default value is \"base64\".""" + media_type: Literal["application/zip"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The media type of the inline skill payload. Must be ``application/zip``. Required. Default + value is \"application/zip\".""" + data: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Base64-encoded skill zip bundle. Required.""" + + @overload + def __init__( + self, + *, + data: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["base64"] = "base64" + self.media_type: Literal["application/zip"] = "application/zip" + + +class InputFileContent(_Model): + """Input file. + + :ivar type: The type of the input item. Always ``input_file``. Required. Default value is + "input_file". + :vartype type: str + :ivar file_id: + :vartype file_id: str + :ivar filename: The name of the file to be sent to the model. + :vartype filename: str + :ivar file_data: The content of the file to be sent to the model. + :vartype file_data: str + :ivar file_url: The URL of the file to be sent to the model. + :vartype file_url: str + """ + + type: Literal["input_file"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the input item. Always ``input_file``. Required. 
Default value is \"input_file\".""" + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the file to be sent to the model.""" + file_data: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content of the file to be sent to the model.""" + file_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the file to be sent to the model.""" + + @overload + def __init__( + self, + *, + file_id: Optional[str] = None, + filename: Optional[str] = None, + file_data: Optional[str] = None, + file_url: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["input_file"] = "input_file" + + +class InputFileContentParam(_Model): + """Input file. + + :ivar type: The type of the input item. Always ``input_file``. Required. Default value is + "input_file". + :vartype type: str + :ivar file_id: + :vartype file_id: str + :ivar filename: + :vartype filename: str + :ivar file_data: + :vartype file_data: str + :ivar file_url: + :vartype file_url: str + """ + + type: Literal["input_file"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the input item. Always ``input_file``. Required. 
Default value is \"input_file\".""" + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_data: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + file_id: Optional[str] = None, + filename: Optional[str] = None, + file_data: Optional[str] = None, + file_url: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["input_file"] = "input_file" + + +class InputImageContent(_Model): + """Input image. + + :ivar type: The type of the input item. Always ``input_image``. Required. Default value is + "input_image". + :vartype type: str + :ivar image_url: + :vartype image_url: str + :ivar file_id: + :vartype file_id: str + :ivar detail: The detail level of the image to be sent to the model. One of ``high``, ``low``, + ``auto``, or ``original``. Defaults to ``auto``. Required. Known values are: "low", "high", + "auto", and "original". + :vartype detail: str or ~azure.ai.agentserver.responses.sdk.models.models.ImageDetail + """ + + type: Literal["input_image"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the input item. Always ``input_image``. Required. 
Default value is \"input_image\".""" + image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + detail: Union[str, "_models.ImageDetail"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The detail level of the image to be sent to the model. One of ``high``, ``low``, ``auto``, or + ``original``. Defaults to ``auto``. Required. Known values are: \"low\", \"high\", \"auto\", + and \"original\".""" + + @overload + def __init__( + self, + *, + detail: Union[str, "_models.ImageDetail"], + image_url: Optional[str] = None, + file_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["input_image"] = "input_image" + + +class InputImageContentParamAutoParam(_Model): + """Input image. + + :ivar type: The type of the input item. Always ``input_image``. Required. Default value is + "input_image". + :vartype type: str + :ivar image_url: + :vartype image_url: str + :ivar file_id: + :vartype file_id: str + :ivar detail: Known values are: "low", "high", "auto", and "original". + :vartype detail: str or ~azure.ai.agentserver.responses.sdk.models.models.DetailEnum + """ + + type: Literal["input_image"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the input item. Always ``input_image``. Required. 
Default value is \"input_image\".""" + image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + detail: Optional[Union[str, "_models.DetailEnum"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"low\", \"high\", \"auto\", and \"original\".""" + + @overload + def __init__( + self, + *, + image_url: Optional[str] = None, + file_id: Optional[str] = None, + detail: Optional[Union[str, "_models.DetailEnum"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["input_image"] = "input_image" + + +class InputTextContent(_Model): + """Input text. + + :ivar type: The type of the input item. Always ``input_text``. Required. Default value is + "input_text". + :vartype type: str + :ivar text: The text input to the model. Required. + :vartype text: str + """ + + type: Literal["input_text"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the input item. Always ``input_text``. Required. Default value is \"input_text\".""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text input to the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["input_text"] = "input_text" + + +class InputTextContentParam(_Model): + """Input text. 
+ + :ivar type: The type of the input item. Always ``input_text``. Required. Default value is + "input_text". + :vartype type: str + :ivar text: The text input to the model. Required. + :vartype text: str + """ + + type: Literal["input_text"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the input item. Always ``input_text``. Required. Default value is \"input_text\".""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text input to the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["input_text"] = "input_text" + + +class ItemCodeInterpreterToolCall(Item, discriminator="code_interpreter_call"): + """Code interpreter tool call. + + :ivar type: The type of the code interpreter tool call. Always ``code_interpreter_call``. + Required. CODE_INTERPRETER_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CODE_INTERPRETER_CALL + :ivar id: The unique ID of the code interpreter tool call. Required. + :vartype id: str + :ivar status: The status of the code interpreter tool call. Valid values are ``in_progress``, + ``completed``, ``incomplete``, ``interpreting``, and ``failed``. Required. Is one of the + following types: Literal["in_progress"], Literal["completed"], Literal["incomplete"], + Literal["interpreting"], Literal["failed"] + :vartype status: str or str or str or str or str + :ivar container_id: The ID of the container used to run the code. Required. + :vartype container_id: str + :ivar code: Required. + :vartype code: str + :ivar outputs: Required. 
+ :vartype outputs: + list[~azure.ai.agentserver.responses.sdk.models.models.CodeInterpreterOutputLogs or + ~azure.ai.agentserver.responses.sdk.models.models.CodeInterpreterOutputImage] + """ + + type: Literal[ItemType.CODE_INTERPRETER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the code interpreter tool call. Always ``code_interpreter_call``. Required. + CODE_INTERPRETER_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the code interpreter tool call. Required.""" + status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the code interpreter tool call. Valid values are ``in_progress``, ``completed``, + ``incomplete``, ``interpreting``, and ``failed``. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"], + Literal[\"interpreting\"], Literal[\"failed\"]""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the container used to run the code. Required.""" + code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + outputs: list[Union["_models.CodeInterpreterOutputLogs", "_models.CodeInterpreterOutputImage"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"], + container_id: str, + code: str, + outputs: list[Union["_models.CodeInterpreterOutputLogs", "_models.CodeInterpreterOutputImage"]], + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.CODE_INTERPRETER_CALL # type: ignore + + +class ItemComputerToolCall(Item, discriminator="computer_call"): + """Computer tool call. + + :ivar type: The type of the computer call. Always ``computer_call``. Required. COMPUTER_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_CALL + :ivar id: The unique ID of the computer call. Required. + :vartype id: str + :ivar call_id: An identifier used when responding to the tool call with output. Required. + :vartype call_id: str + :ivar action: + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.ComputerAction + :ivar actions: + :vartype actions: list[~azure.ai.agentserver.responses.sdk.models.models.ComputerAction] + :ivar pending_safety_checks: The pending safety checks for the computer call. Required. + :vartype pending_safety_checks: + list[~azure.ai.agentserver.responses.sdk.models.models.ComputerCallSafetyCheckParam] + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Required. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[ItemType.COMPUTER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer call. Always ``computer_call``. Required. COMPUTER_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the computer call. 
Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An identifier used when responding to the tool call with output. Required.""" + action: Optional["_models.ComputerAction"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + actions: Optional[list["_models.ComputerAction"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + pending_safety_checks: list["_models.ComputerCallSafetyCheckParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The pending safety checks for the computer call. Required.""" + status: Literal["in_progress", "completed", "incomplete"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + pending_safety_checks: list["_models.ComputerCallSafetyCheckParam"], + status: Literal["in_progress", "completed", "incomplete"], + action: Optional["_models.ComputerAction"] = None, + actions: Optional[list["_models.ComputerAction"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.COMPUTER_CALL # type: ignore + + +class ItemCustomToolCall(Item, discriminator="custom_tool_call"): + """Custom tool call. + + :ivar type: The type of the custom tool call. Always ``custom_tool_call``. Required. + CUSTOM_TOOL_CALL. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM_TOOL_CALL + :ivar id: The unique ID of the custom tool call in the OpenAI platform. + :vartype id: str + :ivar call_id: An identifier used to map this custom tool call to a tool call output. Required. + :vartype call_id: str + :ivar namespace: The namespace of the custom tool being called. + :vartype namespace: str + :ivar name: The name of the custom tool being called. Required. + :vartype name: str + :ivar input: The input for the custom tool call generated by the model. Required. + :vartype input: str + """ + + type: Literal[ItemType.CUSTOM_TOOL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the custom tool call. Always ``custom_tool_call``. Required. CUSTOM_TOOL_CALL.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the custom tool call in the OpenAI platform.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An identifier used to map this custom tool call to a tool call output. Required.""" + namespace: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace of the custom tool being called.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the custom tool being called. Required.""" + input: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The input for the custom tool call generated by the model. Required.""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + input: str, + id: Optional[str] = None, # pylint: disable=redefined-builtin + namespace: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.CUSTOM_TOOL_CALL # type: ignore + + +class ItemCustomToolCallOutput(Item, discriminator="custom_tool_call_output"): + """Custom tool call output. + + :ivar type: The type of the custom tool call output. Always ``custom_tool_call_output``. + Required. CUSTOM_TOOL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM_TOOL_CALL_OUTPUT + :ivar id: The unique ID of the custom tool call output in the OpenAI platform. + :vartype id: str + :ivar call_id: The call ID, used to map this custom tool call output to a custom tool call. + Required. + :vartype call_id: str + :ivar output: The output from the custom tool call generated by your code. Can be a string or + a list of output content. Required. Is either a str type or a + [FunctionAndCustomToolCallOutput] type. + :vartype output: str or + list[~azure.ai.agentserver.responses.sdk.models.models.FunctionAndCustomToolCallOutput] + """ + + type: Literal[ItemType.CUSTOM_TOOL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the custom tool call output. Always ``custom_tool_call_output``. Required. + CUSTOM_TOOL_CALL_OUTPUT.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the custom tool call output in the OpenAI platform.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The call ID, used to map this custom tool call output to a custom tool call. Required.""" + output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the custom tool call generated by your code. Can be a string or a list of + output content. Required. 
Is either a str type or a [FunctionAndCustomToolCallOutput] type.""" + + @overload + def __init__( + self, + *, + call_id: str, + output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]], + id: Optional[str] = None, # pylint: disable=redefined-builtin + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.CUSTOM_TOOL_CALL_OUTPUT # type: ignore + + +class ItemField(_Model): + """An item representing a message, tool call, tool output, reasoning, or other response element. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ItemFieldApplyPatchToolCall, ItemFieldApplyPatchToolCallOutput, + ItemFieldCodeInterpreterToolCall, ItemFieldCompactionBody, ItemFieldComputerToolCall, + ItemFieldComputerToolCallOutput, ItemFieldCustomToolCall, ItemFieldCustomToolCallOutput, + ItemFieldFileSearchToolCall, ItemFieldFunctionToolCall, ItemFieldFunctionToolCallOutput, + ItemFieldImageGenToolCall, ItemFieldLocalShellToolCall, ItemFieldLocalShellToolCallOutput, + ItemFieldMcpApprovalRequest, ItemFieldMcpApprovalResponseResource, ItemFieldMcpToolCall, + ItemFieldMcpListTools, ItemFieldMessage, ItemFieldReasoningItem, ItemFieldFunctionShellCall, + ItemFieldFunctionShellCallOutput, ItemFieldToolSearchCall, ItemFieldToolSearchOutput, + ItemFieldWebSearchToolCall + + :ivar type: Required. 
Known values are: "message", "function_call", "tool_search_call", + "tool_search_output", "function_call_output", "file_search_call", "web_search_call", + "image_generation_call", "computer_call", "computer_call_output", "reasoning", "compaction", + "code_interpreter_call", "local_shell_call", "local_shell_call_output", "shell_call", + "shell_call_output", "apply_patch_call", "apply_patch_call_output", "mcp_list_tools", + "mcp_approval_request", "mcp_approval_response", "mcp_call", "custom_tool_call", and + "custom_tool_call_output". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ItemFieldType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"message\", \"function_call\", \"tool_search_call\", + \"tool_search_output\", \"function_call_output\", \"file_search_call\", \"web_search_call\", + \"image_generation_call\", \"computer_call\", \"computer_call_output\", \"reasoning\", + \"compaction\", \"code_interpreter_call\", \"local_shell_call\", \"local_shell_call_output\", + \"shell_call\", \"shell_call_output\", \"apply_patch_call\", \"apply_patch_call_output\", + \"mcp_list_tools\", \"mcp_approval_request\", \"mcp_approval_response\", \"mcp_call\", + \"custom_tool_call\", and \"custom_tool_call_output\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ItemFieldApplyPatchToolCall(ItemField, discriminator="apply_patch_call"): + """Apply patch tool call. + + :ivar type: The type of the item. Always ``apply_patch_call``. Required. APPLY_PATCH_CALL. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH_CALL + :ivar id: The unique ID of the apply patch tool call. Populated when this item is returned via + API. Required. + :vartype id: str + :ivar call_id: The unique ID of the apply patch tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the apply patch tool call. One of ``in_progress`` or ``completed``. + Required. Known values are: "in_progress" and "completed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchCallStatus + :ivar operation: Apply patch operation. Required. + :vartype operation: ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchFileOperation + :ivar created_by: The ID of the entity that created this tool call. + :vartype created_by: str + """ + + type: Literal[ItemFieldType.APPLY_PATCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``apply_patch_call``. Required. APPLY_PATCH_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call. Populated when this item is returned via API. + Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call generated by the model. Required.""" + status: Union[str, "_models.ApplyPatchCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the apply patch tool call. One of ``in_progress`` or ``completed``. Required. + Known values are: \"in_progress\" and \"completed\".""" + operation: "_models.ApplyPatchFileOperation" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Apply patch operation. 
Required.""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the entity that created this tool call.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + status: Union[str, "_models.ApplyPatchCallStatus"], + operation: "_models.ApplyPatchFileOperation", + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.APPLY_PATCH_CALL # type: ignore + + +class ItemFieldApplyPatchToolCallOutput(ItemField, discriminator="apply_patch_call_output"): + """Apply patch tool call output. + + :ivar type: The type of the item. Always ``apply_patch_call_output``. Required. + APPLY_PATCH_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH_CALL_OUTPUT + :ivar id: The unique ID of the apply patch tool call output. Populated when this item is + returned via API. Required. + :vartype id: str + :ivar call_id: The unique ID of the apply patch tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the apply patch tool call output. One of ``completed`` or + ``failed``. Required. Known values are: "completed" and "failed". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchCallOutputStatus + :ivar output: + :vartype output: str + :ivar created_by: The ID of the entity that created this tool call output. + :vartype created_by: str + """ + + type: Literal[ItemFieldType.APPLY_PATCH_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``apply_patch_call_output``. 
Required. APPLY_PATCH_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call output. Populated when this item is returned via + API. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call generated by the model. Required.""" + status: Union[str, "_models.ApplyPatchCallOutputStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the apply patch tool call output. One of ``completed`` or ``failed``. Required. + Known values are: \"completed\" and \"failed\".""" + output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the entity that created this tool call output.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + status: Union[str, "_models.ApplyPatchCallOutputStatus"], + output: Optional[str] = None, + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.APPLY_PATCH_CALL_OUTPUT # type: ignore + + +class ItemFieldCodeInterpreterToolCall(ItemField, discriminator="code_interpreter_call"): + """Code interpreter tool call. + + :ivar type: The type of the code interpreter tool call. Always ``code_interpreter_call``. + Required. CODE_INTERPRETER_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CODE_INTERPRETER_CALL + :ivar id: The unique ID of the code interpreter tool call. Required. 
+ :vartype id: str + :ivar status: The status of the code interpreter tool call. Valid values are ``in_progress``, + ``completed``, ``incomplete``, ``interpreting``, and ``failed``. Required. Is one of the + following types: Literal["in_progress"], Literal["completed"], Literal["incomplete"], + Literal["interpreting"], Literal["failed"] + :vartype status: str or str or str or str or str + :ivar container_id: The ID of the container used to run the code. Required. + :vartype container_id: str + :ivar code: Required. + :vartype code: str + :ivar outputs: Required. + :vartype outputs: + list[~azure.ai.agentserver.responses.sdk.models.models.CodeInterpreterOutputLogs or + ~azure.ai.agentserver.responses.sdk.models.models.CodeInterpreterOutputImage] + """ + + type: Literal[ItemFieldType.CODE_INTERPRETER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the code interpreter tool call. Always ``code_interpreter_call``. Required. + CODE_INTERPRETER_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the code interpreter tool call. Required.""" + status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the code interpreter tool call. Valid values are ``in_progress``, ``completed``, + ``incomplete``, ``interpreting``, and ``failed``. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"], + Literal[\"interpreting\"], Literal[\"failed\"]""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the container used to run the code. 
Required.""" + code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + outputs: list[Union["_models.CodeInterpreterOutputLogs", "_models.CodeInterpreterOutputImage"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"], + container_id: str, + code: str, + outputs: list[Union["_models.CodeInterpreterOutputLogs", "_models.CodeInterpreterOutputImage"]], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.CODE_INTERPRETER_CALL # type: ignore + + +class ItemFieldCompactionBody(ItemField, discriminator="compaction"): + """Compaction item. + + :ivar type: The type of the item. Always ``compaction``. Required. COMPACTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPACTION + :ivar id: The unique ID of the compaction item. Required. + :vartype id: str + :ivar encrypted_content: The encrypted content that was produced by compaction. Required. + :vartype encrypted_content: str + :ivar created_by: The identifier of the actor that created the item. + :vartype created_by: str + """ + + type: Literal[ItemFieldType.COMPACTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``compaction``. Required. COMPACTION.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the compaction item. 
Required.""" + encrypted_content: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The encrypted content that was produced by compaction. Required.""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The identifier of the actor that created the item.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + encrypted_content: str, + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.COMPACTION # type: ignore + + +class ItemFieldComputerToolCall(ItemField, discriminator="computer_call"): + """Computer tool call. + + :ivar type: The type of the computer call. Always ``computer_call``. Required. COMPUTER_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_CALL + :ivar id: The unique ID of the computer call. Required. + :vartype id: str + :ivar call_id: An identifier used when responding to the tool call with output. Required. + :vartype call_id: str + :ivar action: + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.ComputerAction + :ivar actions: + :vartype actions: list[~azure.ai.agentserver.responses.sdk.models.models.ComputerAction] + :ivar pending_safety_checks: The pending safety checks for the computer call. Required. + :vartype pending_safety_checks: + list[~azure.ai.agentserver.responses.sdk.models.models.ComputerCallSafetyCheckParam] + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Required. 
Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[ItemFieldType.COMPUTER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer call. Always ``computer_call``. Required. COMPUTER_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the computer call. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An identifier used when responding to the tool call with output. Required.""" + action: Optional["_models.ComputerAction"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + actions: Optional[list["_models.ComputerAction"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + pending_safety_checks: list["_models.ComputerCallSafetyCheckParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The pending safety checks for the computer call. Required.""" + status: Literal["in_progress", "completed", "incomplete"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + pending_safety_checks: list["_models.ComputerCallSafetyCheckParam"], + status: Literal["in_progress", "completed", "incomplete"], + action: Optional["_models.ComputerAction"] = None, + actions: Optional[list["_models.ComputerAction"]] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.COMPUTER_CALL # type: ignore + + +class ItemFieldComputerToolCallOutput(ItemField, discriminator="computer_call_output"): + """Computer tool call output. + + :ivar type: The type of the computer tool call output. Always ``computer_call_output``. + Required. COMPUTER_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_CALL_OUTPUT + :ivar id: The ID of the computer tool call output. Required. + :vartype id: str + :ivar call_id: The ID of the computer tool call that produced the output. Required. + :vartype call_id: str + :ivar acknowledged_safety_checks: The safety checks reported by the API that have been + acknowledged by the developer. + :vartype acknowledged_safety_checks: + list[~azure.ai.agentserver.responses.sdk.models.models.ComputerCallSafetyCheckParam] + :ivar output: Required. + :vartype output: ~azure.ai.agentserver.responses.sdk.models.models.ComputerScreenshotImage + :ivar status: The status of the message input. One of ``in_progress``, ``completed``, or + ``incomplete``. Populated when input items are returned via API. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[ItemFieldType.COMPUTER_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer tool call output. Always ``computer_call_output``. Required. + COMPUTER_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read"]) + """The ID of the computer tool call output. 
Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the computer tool call that produced the output. Required.""" + acknowledged_safety_checks: Optional[list["_models.ComputerCallSafetyCheckParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The safety checks reported by the API that have been acknowledged by the developer.""" + output: "_models.ComputerScreenshotImage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the message input. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when input items are returned via API. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + call_id: str, + output: "_models.ComputerScreenshotImage", + acknowledged_safety_checks: Optional[list["_models.ComputerCallSafetyCheckParam"]] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.COMPUTER_CALL_OUTPUT # type: ignore + + +class ItemFieldCustomToolCall(ItemField, discriminator="custom_tool_call"): + """Custom tool call. + + :ivar type: The type of the custom tool call. Always ``custom_tool_call``. Required. + CUSTOM_TOOL_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM_TOOL_CALL + :ivar id: The unique ID of the custom tool call in the OpenAI platform. 
+ :vartype id: str + :ivar call_id: An identifier used to map this custom tool call to a tool call output. Required. + :vartype call_id: str + :ivar namespace: The namespace of the custom tool being called. + :vartype namespace: str + :ivar name: The name of the custom tool being called. Required. + :vartype name: str + :ivar input: The input for the custom tool call generated by the model. Required. + :vartype input: str + """ + + type: Literal[ItemFieldType.CUSTOM_TOOL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the custom tool call. Always ``custom_tool_call``. Required. CUSTOM_TOOL_CALL.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the custom tool call in the OpenAI platform.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An identifier used to map this custom tool call to a tool call output. Required.""" + namespace: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace of the custom tool being called.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the custom tool being called. Required.""" + input: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The input for the custom tool call generated by the model. Required.""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + input: str, + id: Optional[str] = None, # pylint: disable=redefined-builtin + namespace: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+    :type mapping: Mapping[str, Any]
+    """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemFieldType.CUSTOM_TOOL_CALL  # type: ignore
+
+
+class ItemFieldCustomToolCallOutput(ItemField, discriminator="custom_tool_call_output"):
+    """Custom tool call output.
+
+    :ivar type: The type of the custom tool call output. Always ``custom_tool_call_output``.
+     Required. CUSTOM_TOOL_CALL_OUTPUT.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM_TOOL_CALL_OUTPUT
+    :ivar id: The unique ID of the custom tool call output in the OpenAI platform.
+    :vartype id: str
+    :ivar call_id: The call ID, used to map this custom tool call output to a custom tool call.
+     Required.
+    :vartype call_id: str
+    :ivar output: The output from the custom tool call generated by your code. Can be a string or
+     a list of output content. Required. Is either a str type or a
+     [FunctionAndCustomToolCallOutput] type.
+    :vartype output: str or
+     list[~azure.ai.agentserver.responses.sdk.models.models.FunctionAndCustomToolCallOutput]
+    """
+
+    type: Literal[ItemFieldType.CUSTOM_TOOL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the custom tool call output. Always ``custom_tool_call_output``. Required.
+    CUSTOM_TOOL_CALL_OUTPUT."""
+    id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the custom tool call output in the OpenAI platform."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The call ID, used to map this custom tool call output to a custom tool call. Required."""
+    output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The output from the custom tool call generated by your code.
Can be a string or a list of
+    output content. Required. Is either a str type or a [FunctionAndCustomToolCallOutput] type."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        call_id: str,
+        output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]],
+        id: Optional[str] = None,  # pylint: disable=redefined-builtin
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemFieldType.CUSTOM_TOOL_CALL_OUTPUT  # type: ignore
+
+
+class ItemFieldFileSearchToolCall(ItemField, discriminator="file_search_call"):
+    """File search tool call.
+
+    :ivar id: The unique ID of the file search tool call. Required.
+    :vartype id: str
+    :ivar type: The type of the file search tool call. Always ``file_search_call``. Required.
+     FILE_SEARCH_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_SEARCH_CALL
+    :ivar status: The status of the file search tool call. One of ``in_progress``, ``searching``,
+     ``completed``, ``incomplete``, or ``failed``. Required. Is one of the following types:
+     Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["incomplete"],
+     Literal["failed"]
+    :vartype status: str or str or str or str or str
+    :ivar queries: The queries used to search for files. Required.
+    :vartype queries: list[str]
+    :ivar results:
+    :vartype results:
+     list[~azure.ai.agentserver.responses.sdk.models.models.FileSearchToolCallResults]
+    """
+
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the file search tool call. Required."""
+    type: Literal[ItemFieldType.FILE_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the file search tool call.
Always ``file_search_call``. Required. FILE_SEARCH_CALL."""
+    status: Literal["in_progress", "searching", "completed", "incomplete", "failed"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the file search tool call. One of ``in_progress``, ``searching``,
+    ``completed``, ``incomplete``, or ``failed``. Required. Is one of the following types:
+    Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"],
+    Literal[\"incomplete\"], Literal[\"failed\"]"""
+    queries: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The queries used to search for files. Required."""
+    results: Optional[list["_models.FileSearchToolCallResults"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        status: Literal["in_progress", "searching", "completed", "incomplete", "failed"],
+        queries: list[str],
+        results: Optional[list["_models.FileSearchToolCallResults"]] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemFieldType.FILE_SEARCH_CALL  # type: ignore
+
+
+class ItemFieldFunctionShellCall(ItemField, discriminator="shell_call"):
+    """Shell tool call.
+
+    :ivar type: The type of the item. Always ``shell_call``. Required. SHELL_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL_CALL
+    :ivar id: The unique ID of the shell tool call. Populated when this item is returned via API.
+     Required.
+    :vartype id: str
+    :ivar call_id: The unique ID of the shell tool call generated by the model. Required.
+    :vartype call_id: str
+    :ivar action: The shell commands and limits that describe how to run the tool call.
Required. + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellAction + :ivar status: The status of the shell call. One of ``in_progress``, ``completed``, or + ``incomplete``. Required. Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.LocalShellCallStatus + :ivar environment: Required. + :vartype environment: + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallEnvironment + :ivar created_by: The ID of the entity that created this tool call. + :vartype created_by: str + """ + + type: Literal[ItemFieldType.SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``shell_call``. Required. SHELL_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call. Populated when this item is returned via API. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call generated by the model. Required.""" + action: "_models.FunctionShellAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The shell commands and limits that describe how to run the tool call. Required.""" + status: Union[str, "_models.LocalShellCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the shell call. One of ``in_progress``, ``completed``, or ``incomplete``. + Required. 
Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + environment: "_models.FunctionShellCallEnvironment" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the entity that created this tool call.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + action: "_models.FunctionShellAction", + status: Union[str, "_models.LocalShellCallStatus"], + environment: "_models.FunctionShellCallEnvironment", + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.SHELL_CALL # type: ignore + + +class ItemFieldFunctionShellCallOutput(ItemField, discriminator="shell_call_output"): + """Shell call output. + + :ivar type: The type of the shell call output. Always ``shell_call_output``. Required. + SHELL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL_CALL_OUTPUT + :ivar id: The unique ID of the shell call output. Populated when this item is returned via API. + Required. + :vartype id: str + :ivar call_id: The unique ID of the shell tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the shell call output. One of ``in_progress``, ``completed``, or + ``incomplete``. Required. Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.LocalShellCallOutputStatusEnum + :ivar output: An array of shell call output contents. Required. 
+ :vartype output: + list[~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputContent] + :ivar max_output_length: Required. + :vartype max_output_length: int + :ivar created_by: The identifier of the actor that created the item. + :vartype created_by: str + """ + + type: Literal[ItemFieldType.SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the shell call output. Always ``shell_call_output``. Required. SHELL_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell call output. Populated when this item is returned via API. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call generated by the model. Required.""" + status: Union[str, "_models.LocalShellCallOutputStatusEnum"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the shell call output. One of ``in_progress``, ``completed``, or ``incomplete``. + Required. Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + output: list["_models.FunctionShellCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """An array of shell call output contents. 
Required.""" + max_output_length: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The identifier of the actor that created the item.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + status: Union[str, "_models.LocalShellCallOutputStatusEnum"], + output: list["_models.FunctionShellCallOutputContent"], + max_output_length: int, + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.SHELL_CALL_OUTPUT # type: ignore + + +class ItemFieldFunctionToolCall(ItemField, discriminator="function_call"): + """Function tool call. + + :ivar id: The unique ID of the function tool call. Required. + :vartype id: str + :ivar type: The type of the function tool call. Always ``function_call``. Required. + FUNCTION_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION_CALL + :ivar call_id: The unique ID of the function tool call generated by the model. Required. + :vartype call_id: str + :ivar namespace: The namespace of the function to run. + :vartype namespace: str + :ivar name: The name of the function to run. Required. + :vartype name: str + :ivar arguments: A JSON string of the arguments to pass to the function. Required. + :vartype arguments: str + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. 
Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + id: str = rest_field(visibility=["read"]) + """The unique ID of the function tool call. Required.""" + type: Literal[ItemFieldType.FUNCTION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the function tool call. Always ``function_call``. Required. FUNCTION_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the function tool call generated by the model. Required.""" + namespace: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace of the function to run.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to run. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the function. Required.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. Is one of the following types: Literal[\"in_progress\"], + Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + arguments: str, + namespace: Optional[str] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any]
+ """
+
+ def __init__(self, *args: Any, **kwargs: Any) -> None:
+ super().__init__(*args, **kwargs)
+ self.type = ItemFieldType.FUNCTION_CALL # type: ignore
+
+
+class ItemFieldFunctionToolCallOutput(ItemField, discriminator="function_call_output"):
+ """Function tool call output.
+
+ :ivar id: The unique ID of the function tool call output. Populated when this item is returned
+ via API. Required.
+ :vartype id: str
+ :ivar type: The type of the function tool call output. Always ``function_call_output``.
+ Required. FUNCTION_CALL_OUTPUT.
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION_CALL_OUTPUT
+ :ivar call_id: The unique ID of the function tool call generated by the model. Required.
+ :vartype call_id: str
+ :ivar output: The output from the function call generated by your code. Can be a string or a
+ list of output content. Required. Is either a str type or a [FunctionAndCustomToolCallOutput]
+ type.
+ :vartype output: str or
+ list[~azure.ai.agentserver.responses.sdk.models.models.FunctionAndCustomToolCallOutput]
+ :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``.
+ Populated when items are returned via API. Is one of the following types:
+ Literal["in_progress"], Literal["completed"], Literal["incomplete"]
+ :vartype status: str or str or str
+ """
+
+ id: str = rest_field(visibility=["read"])
+ """The unique ID of the function tool call output. Populated when this item is returned via API.
+ Required."""
+ type: Literal[ItemFieldType.FUNCTION_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore
+ """The type of the function tool call output. Always ``function_call_output``. Required.
+ FUNCTION_CALL_OUTPUT."""
+ call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The unique ID of the function tool call generated by the model.
Required."""
+ output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]] = rest_field(
+ visibility=["read", "create", "update", "delete", "query"]
+ )
+ """The output from the function call generated by your code. Can be a string or a list of output
+ content. Required. Is either a str type or a [FunctionAndCustomToolCallOutput] type."""
+ status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field(
+ visibility=["read", "create", "update", "delete", "query"]
+ )
+ """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated
+ when items are returned via API. Is one of the following types: Literal[\"in_progress\"],
+ Literal[\"completed\"], Literal[\"incomplete\"]"""
+
+ @overload
+ def __init__(
+ self,
+ *,
+ call_id: str,
+ output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]],
+ status: Optional[Literal["in_progress", "completed", "incomplete"]] = None,
+ ) -> None: ...
+
+ @overload
+ def __init__(self, mapping: Mapping[str, Any]) -> None:
+ """
+ :param mapping: raw JSON to initialize the model.
+ :type mapping: Mapping[str, Any]
+ """
+
+ def __init__(self, *args: Any, **kwargs: Any) -> None:
+ super().__init__(*args, **kwargs)
+ self.type = ItemFieldType.FUNCTION_CALL_OUTPUT # type: ignore
+
+
+class ItemFieldImageGenToolCall(ItemField, discriminator="image_generation_call"):
+ """Image generation call.
+
+ :ivar type: The type of the image generation call. Always ``image_generation_call``. Required.
+ IMAGE_GENERATION_CALL.
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.IMAGE_GENERATION_CALL
+ :ivar id: The unique ID of the image generation call. Required.
+ :vartype id: str
+ :ivar status: The status of the image generation call. Required. Is one of the following types:
+ Literal["in_progress"], Literal["completed"], Literal["generating"], Literal["failed"]
+ :vartype status: str or str or str or str
+ :ivar result: Required.
+ :vartype result: str + """ + + type: Literal[ItemFieldType.IMAGE_GENERATION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the image generation call. Always ``image_generation_call``. Required. + IMAGE_GENERATION_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the image generation call. Required.""" + status: Literal["in_progress", "completed", "generating", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the image generation call. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"generating\"], Literal[\"failed\"]""" + result: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "completed", "generating", "failed"], + result: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.IMAGE_GENERATION_CALL # type: ignore + + +class ItemFieldLocalShellToolCall(ItemField, discriminator="local_shell_call"): + """Local shell call. + + :ivar type: The type of the local shell call. Always ``local_shell_call``. Required. + LOCAL_SHELL_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL_CALL + :ivar id: The unique ID of the local shell call. Required. + :vartype id: str + :ivar call_id: The unique ID of the local shell tool call generated by the model. Required. + :vartype call_id: str + :ivar action: Required. 
+ :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.LocalShellExecAction + :ivar status: The status of the local shell call. Required. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[ItemFieldType.LOCAL_SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the local shell call. Always ``local_shell_call``. Required. LOCAL_SHELL_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the local shell call. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the local shell tool call generated by the model. Required.""" + action: "_models.LocalShellExecAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + status: Literal["in_progress", "completed", "incomplete"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the local shell call. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + action: "_models.LocalShellExecAction", + status: Literal["in_progress", "completed", "incomplete"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.LOCAL_SHELL_CALL # type: ignore + + +class ItemFieldLocalShellToolCallOutput(ItemField, discriminator="local_shell_call_output"): + """Local shell call output. 
+ + :ivar type: The type of the local shell tool call output. Always ``local_shell_call_output``. + Required. LOCAL_SHELL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL_CALL_OUTPUT + :ivar id: The unique ID of the local shell tool call generated by the model. Required. + :vartype id: str + :ivar output: A JSON string of the output of the local shell tool call. Required. + :vartype output: str + :ivar status: Is one of the following types: Literal["in_progress"], Literal["completed"], + Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[ItemFieldType.LOCAL_SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the local shell tool call output. Always ``local_shell_call_output``. Required. + LOCAL_SHELL_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the local shell tool call generated by the model. Required.""" + output: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the output of the local shell tool call. Required.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"], + Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + output: str, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.LOCAL_SHELL_CALL_OUTPUT # type: ignore + + +class ItemFieldMcpApprovalRequest(ItemField, discriminator="mcp_approval_request"): + """MCP approval request. + + :ivar type: The type of the item. Always ``mcp_approval_request``. Required. + MCP_APPROVAL_REQUEST. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_APPROVAL_REQUEST + :ivar id: The unique ID of the approval request. Required. + :vartype id: str + :ivar server_label: The label of the MCP server making the request. Required. + :vartype server_label: str + :ivar name: The name of the tool to run. Required. + :vartype name: str + :ivar arguments: A JSON string of arguments for the tool. Required. + :vartype arguments: str + """ + + type: Literal[ItemFieldType.MCP_APPROVAL_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_approval_request``. Required. MCP_APPROVAL_REQUEST.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the approval request. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server making the request. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the tool to run. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of arguments for the tool. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + server_label: str, + name: str, + arguments: str, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.MCP_APPROVAL_REQUEST # type: ignore + + +class ItemFieldMcpApprovalResponseResource(ItemField, discriminator="mcp_approval_response"): + """MCP approval response. + + :ivar type: The type of the item. Always ``mcp_approval_response``. Required. + MCP_APPROVAL_RESPONSE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_APPROVAL_RESPONSE + :ivar id: The unique ID of the approval response. Required. + :vartype id: str + :ivar approval_request_id: The ID of the approval request being answered. Required. + :vartype approval_request_id: str + :ivar approve: Whether the request was approved. Required. + :vartype approve: bool + :ivar reason: + :vartype reason: str + """ + + type: Literal[ItemFieldType.MCP_APPROVAL_RESPONSE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_approval_response``. Required. MCP_APPROVAL_RESPONSE.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the approval response. Required.""" + approval_request_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the approval request being answered. Required.""" + approve: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether the request was approved. Required.""" + reason: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + approval_request_id: str, + approve: bool, + reason: Optional[str] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.MCP_APPROVAL_RESPONSE # type: ignore + + +class ItemFieldMcpListTools(ItemField, discriminator="mcp_list_tools"): + """MCP list tools. + + :ivar type: The type of the item. Always ``mcp_list_tools``. Required. MCP_LIST_TOOLS. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_LIST_TOOLS + :ivar id: The unique ID of the list. Required. + :vartype id: str + :ivar server_label: The label of the MCP server. Required. + :vartype server_label: str + :ivar tools: The tools available on the server. Required. + :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.MCPListToolsTool] + :ivar error: + :vartype error: ~azure.ai.agentserver.responses.sdk.models.models.RealtimeMCPError + """ + + type: Literal[ItemFieldType.MCP_LIST_TOOLS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_list_tools``. Required. MCP_LIST_TOOLS.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the list. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server. Required.""" + tools: list["_models.MCPListToolsTool"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The tools available on the server. 
Required.""" + error: Optional["_models.RealtimeMCPError"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + server_label: str, + tools: list["_models.MCPListToolsTool"], + error: Optional["_models.RealtimeMCPError"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.MCP_LIST_TOOLS # type: ignore + + +class ItemFieldMcpToolCall(ItemField, discriminator="mcp_call"): + """MCP tool call. + + :ivar type: The type of the item. Always ``mcp_call``. Required. MCP_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_CALL + :ivar id: The unique ID of the tool call. Required. + :vartype id: str + :ivar server_label: The label of the MCP server running the tool. Required. + :vartype server_label: str + :ivar name: The name of the tool that was run. Required. + :vartype name: str + :ivar arguments: A JSON string of the arguments passed to the tool. Required. + :vartype arguments: str + :ivar output: + :vartype output: str + :ivar error: + :vartype error: dict[str, any] + :ivar status: The status of the tool call. One of ``in_progress``, ``completed``, + ``incomplete``, ``calling``, or ``failed``. Known values are: "in_progress", "completed", + "incomplete", "calling", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.MCPToolCallStatus + :ivar approval_request_id: + :vartype approval_request_id: str + """ + + type: Literal[ItemFieldType.MCP_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_call``. Required. 
MCP_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server running the tool. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the tool that was run. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments passed to the tool. Required.""" + output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + error: Optional[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + status: Optional[Union[str, "_models.MCPToolCallStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. One of ``in_progress``, ``completed``, ``incomplete``, + ``calling``, or ``failed``. Known values are: \"in_progress\", \"completed\", \"incomplete\", + \"calling\", and \"failed\".""" + approval_request_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + server_label: str, + name: str, + arguments: str, + output: Optional[str] = None, + error: Optional[dict[str, Any]] = None, + status: Optional[Union[str, "_models.MCPToolCallStatus"]] = None, + approval_request_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any]
+ """
+
+ def __init__(self, *args: Any, **kwargs: Any) -> None:
+ super().__init__(*args, **kwargs)
+ self.type = ItemFieldType.MCP_CALL # type: ignore
+
+
+class ItemFieldMessage(ItemField, discriminator="message"):
+ """Message.
+
+ :ivar type: The type of the message. Always set to ``message``. Required. MESSAGE.
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MESSAGE
+ :ivar id: The unique ID of the message. Required.
+ :vartype id: str
+ :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``.
+ Populated when items are returned via API. Required. Known values are: "in_progress",
+ "completed", and "incomplete".
+ :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageStatus
+ :ivar role: The role of the message. One of ``unknown``, ``user``, ``assistant``, ``system``,
+ ``critic``, ``discriminator``, ``developer``, or ``tool``. Required. Known values are:
+ "unknown", "user", "assistant", "system", "critic", "discriminator", "developer", and "tool".
+ :vartype role: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageRole
+ :ivar content: The content of the message. Required.
+ :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.MessageContent]
+ """
+
+ type: Literal[ItemFieldType.MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore
+ """The type of the message. Always set to ``message``. Required. MESSAGE."""
+ id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The unique ID of the message. Required."""
+ status: Union[str, "_models.MessageStatus"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated when
+ items are returned via API. Required.
Known values are: \"in_progress\", \"completed\", and + \"incomplete\".""" + role: Union[str, "_models.MessageRole"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The role of the message. One of ``unknown``, ``user``, ``assistant``, ``system``, ``critic``, + ``discriminator``, ``developer``, or ``tool``. Required. Known values are: \"unknown\", + \"user\", \"assistant\", \"system\", \"critic\", \"discriminator\", \"developer\", and + \"tool\".""" + content: list["_models.MessageContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content of the message. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Union[str, "_models.MessageStatus"], + role: Union[str, "_models.MessageRole"], + content: list["_models.MessageContent"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.MESSAGE # type: ignore + + +class ItemFieldReasoningItem(ItemField, discriminator="reasoning"): + """Reasoning. + + :ivar type: The type of the object. Always ``reasoning``. Required. REASONING. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REASONING + :ivar id: The unique identifier of the reasoning content. Required. + :vartype id: str + :ivar encrypted_content: + :vartype encrypted_content: str + :ivar summary: Reasoning summary content. Required. + :vartype summary: list[~azure.ai.agentserver.responses.sdk.models.models.SummaryTextContent] + :ivar content: Reasoning text content. + :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.ReasoningTextContent] + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. 
+ Populated when items are returned via API. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[ItemFieldType.REASONING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the object. Always ``reasoning``. Required. REASONING.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the reasoning content. Required.""" + encrypted_content: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + summary: list["_models.SummaryTextContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Reasoning summary content. Required.""" + content: Optional[list["_models.ReasoningTextContent"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Reasoning text content.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. Is one of the following types: Literal[\"in_progress\"], + Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + summary: list["_models.SummaryTextContent"], + encrypted_content: Optional[str] = None, + content: Optional[list["_models.ReasoningTextContent"]] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemFieldType.REASONING # type: ignore + + +class ItemFieldToolSearchCall(ItemField, discriminator="tool_search_call"): + """ItemFieldToolSearchCall. + + :ivar type: The type of the item. Always ``tool_search_call``. Required. TOOL_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH_CALL + :ivar id: The unique ID of the tool search call item. Required. + :vartype id: str + :ivar call_id: Required. + :vartype call_id: str + :ivar execution: Whether tool search was executed by the server or by the client. Required. + Known values are: "server" and "client". + :vartype execution: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType + :ivar arguments: Arguments used for the tool search call. Required. + :vartype arguments: any + :ivar status: The status of the tool search call item that was recorded. Required. Known values + are: "in_progress", "completed", and "incomplete". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallStatus + :ivar created_by: The identifier of the actor that created the item. + :vartype created_by: str + """ + + type: Literal[ItemFieldType.TOOL_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``tool_search_call``. Required. TOOL_SEARCH_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool search call item. 
Required."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+    execution: Union[str, "_models.ToolSearchExecutionType"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """Whether tool search was executed by the server or by the client. Required. Known values are:
+    \"server\" and \"client\"."""
+    arguments: Any = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Arguments used for the tool search call. Required."""
+    status: Union[str, "_models.FunctionCallStatus"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the tool search call item that was recorded. Required. Known values are:
+    \"in_progress\", \"completed\", and \"incomplete\"."""
+    created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The identifier of the actor that created the item."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        call_id: str,
+        execution: Union[str, "_models.ToolSearchExecutionType"],
+        arguments: Any,
+        status: Union[str, "_models.FunctionCallStatus"],
+        created_by: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemFieldType.TOOL_SEARCH_CALL  # type: ignore
+
+
+class ItemFieldToolSearchOutput(ItemField, discriminator="tool_search_output"):
+    """ItemFieldToolSearchOutput.
+
+    :ivar type: The type of the item. Always ``tool_search_output``. Required. TOOL_SEARCH_OUTPUT.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH_OUTPUT
+    :ivar id: The unique ID of the tool search output item. Required.
+    :vartype id: str
+    :ivar call_id: Required.
+    :vartype call_id: str
+    :ivar execution: Whether tool search was executed by the server or by the client. Required.
+     Known values are: "server" and "client".
+    :vartype execution: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType
+    :ivar tools: The loaded tool definitions returned by tool search. Required.
+    :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.Tool]
+    :ivar status: The status of the tool search output item that was recorded. Required. Known
+     values are: "in_progress", "completed", and "incomplete".
+    :vartype status: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallOutputStatusEnum
+    :ivar created_by: The identifier of the actor that created the item.
+    :vartype created_by: str
+    """
+
+    type: Literal[ItemFieldType.TOOL_SEARCH_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the item. Always ``tool_search_output``. Required. TOOL_SEARCH_OUTPUT."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the tool search output item. Required."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+    execution: Union[str, "_models.ToolSearchExecutionType"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """Whether tool search was executed by the server or by the client. Required. Known values are:
+    \"server\" and \"client\"."""
+    tools: list["_models.Tool"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The loaded tool definitions returned by tool search. Required."""
+    status: Union[str, "_models.FunctionCallOutputStatusEnum"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the tool search output item that was recorded. Required. Known values are:
+    \"in_progress\", \"completed\", and \"incomplete\"."""
+    created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The identifier of the actor that created the item."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        call_id: str,
+        execution: Union[str, "_models.ToolSearchExecutionType"],
+        tools: list["_models.Tool"],
+        status: Union[str, "_models.FunctionCallOutputStatusEnum"],
+        created_by: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemFieldType.TOOL_SEARCH_OUTPUT  # type: ignore
+
+
+class ItemFieldWebSearchToolCall(ItemField, discriminator="web_search_call"):
+    """Web search tool call.
+
+    :ivar id: The unique ID of the web search tool call. Required.
+    :vartype id: str
+    :ivar type: The type of the web search tool call. Always ``web_search_call``. Required.
+     WEB_SEARCH_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH_CALL
+    :ivar status: The status of the web search tool call. Required. Is one of the following types:
+     Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["failed"]
+    :vartype status: str or str or str or str
+    :ivar action: An object describing the specific action taken in this web search call. Includes
+     details on how the model used the web (search, open_page, find_in_page). Required. Is one of
+     the following types: WebSearchActionSearch, WebSearchActionOpenPage, WebSearchActionFind
+    :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionSearch or
+     ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionOpenPage or
+     ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionFind
+    """
+
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the web search tool call. Required."""
+    type: Literal[ItemFieldType.WEB_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the web search tool call. Always ``web_search_call``. Required. WEB_SEARCH_CALL."""
+    status: Literal["in_progress", "searching", "completed", "failed"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the web search tool call. Required. Is one of the following types:
+    Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"], Literal[\"failed\"]"""
+    action: Union["_models.WebSearchActionSearch", "_models.WebSearchActionOpenPage", "_models.WebSearchActionFind"] = (
+        rest_field(visibility=["read", "create", "update", "delete", "query"])
+    )
+    """An object describing the specific action taken in this web search call. Includes details on how
+    the model used the web (search, open_page, find_in_page). Required. Is one of the following
+    types: WebSearchActionSearch, WebSearchActionOpenPage, WebSearchActionFind"""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        status: Literal["in_progress", "searching", "completed", "failed"],
+        action: Union[
+            "_models.WebSearchActionSearch", "_models.WebSearchActionOpenPage", "_models.WebSearchActionFind"
+        ],
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemFieldType.WEB_SEARCH_CALL  # type: ignore
+
+
+class ItemFileSearchToolCall(Item, discriminator="file_search_call"):
+    """File search tool call.
+
+    :ivar id: The unique ID of the file search tool call. Required.
+    :vartype id: str
+    :ivar type: The type of the file search tool call. Always ``file_search_call``. Required.
+     FILE_SEARCH_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_SEARCH_CALL
+    :ivar status: The status of the file search tool call. One of ``in_progress``, ``searching``,
+     ``incomplete`` or ``failed``. Required. Is one of the following types: Literal["in_progress"],
+     Literal["searching"], Literal["completed"], Literal["incomplete"], Literal["failed"]
+    :vartype status: str or str or str or str or str
+    :ivar queries: The queries used to search for files. Required.
+    :vartype queries: list[str]
+    :ivar results:
+    :vartype results:
+     list[~azure.ai.agentserver.responses.sdk.models.models.FileSearchToolCallResults]
+    """
+
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the file search tool call. Required."""
+    type: Literal[ItemType.FILE_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the file search tool call. Always ``file_search_call``. Required. FILE_SEARCH_CALL."""
+    status: Literal["in_progress", "searching", "completed", "incomplete", "failed"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the file search tool call. One of ``in_progress``, ``searching``, ``incomplete``
+    or ``failed``. Required. Is one of the following types: Literal[\"in_progress\"],
+    Literal[\"searching\"], Literal[\"completed\"], Literal[\"incomplete\"], Literal[\"failed\"]"""
+    queries: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The queries used to search for files. Required."""
+    results: Optional[list["_models.FileSearchToolCallResults"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        status: Literal["in_progress", "searching", "completed", "incomplete", "failed"],
+        queries: list[str],
+        results: Optional[list["_models.FileSearchToolCallResults"]] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.FILE_SEARCH_CALL  # type: ignore
+
+
+class ItemFunctionToolCall(Item, discriminator="function_call"):
+    """Function tool call.
+
+    :ivar id: The unique ID of the function tool call. Required.
+    :vartype id: str
+    :ivar type: The type of the function tool call. Always ``function_call``. Required.
+     FUNCTION_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION_CALL
+    :ivar call_id: The unique ID of the function tool call generated by the model. Required.
+    :vartype call_id: str
+    :ivar namespace: The namespace of the function to run.
+    :vartype namespace: str
+    :ivar name: The name of the function to run. Required.
+    :vartype name: str
+    :ivar arguments: A JSON string of the arguments to pass to the function. Required.
+    :vartype arguments: str
+    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``.
+     Populated when items are returned via API. Is one of the following types:
+     Literal["in_progress"], Literal["completed"], Literal["incomplete"]
+    :vartype status: str or str or str
+    """
+
+    id: str = rest_field(visibility=["read"])
+    """The unique ID of the function tool call. Required."""
+    type: Literal[ItemType.FUNCTION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the function tool call. Always ``function_call``. Required. FUNCTION_CALL."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the function tool call generated by the model. Required."""
+    namespace: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The namespace of the function to run."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the function to run. Required."""
+    arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """A JSON string of the arguments to pass to the function. Required."""
+    status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated
+    when items are returned via API. Is one of the following types: Literal[\"in_progress\"],
+    Literal[\"completed\"], Literal[\"incomplete\"]"""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        call_id: str,
+        name: str,
+        arguments: str,
+        namespace: Optional[str] = None,
+        status: Optional[Literal["in_progress", "completed", "incomplete"]] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.FUNCTION_CALL  # type: ignore
+
+
+class ItemImageGenToolCall(Item, discriminator="image_generation_call"):
+    """Image generation call.
+
+    :ivar type: The type of the image generation call. Always ``image_generation_call``. Required.
+     IMAGE_GENERATION_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.IMAGE_GENERATION_CALL
+    :ivar id: The unique ID of the image generation call. Required.
+    :vartype id: str
+    :ivar status: The status of the image generation call. Required. Is one of the following types:
+     Literal["in_progress"], Literal["completed"], Literal["generating"], Literal["failed"]
+    :vartype status: str or str or str or str
+    :ivar result: Required.
+    :vartype result: str
+    """
+
+    type: Literal[ItemType.IMAGE_GENERATION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the image generation call. Always ``image_generation_call``. Required.
+    IMAGE_GENERATION_CALL."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the image generation call. Required."""
+    status: Literal["in_progress", "completed", "generating", "failed"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the image generation call. Required. Is one of the following types:
+    Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"generating\"], Literal[\"failed\"]"""
+    result: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        status: Literal["in_progress", "completed", "generating", "failed"],
+        result: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.IMAGE_GENERATION_CALL  # type: ignore
+
+
+class ItemLocalShellToolCall(Item, discriminator="local_shell_call"):
+    """Local shell call.
+
+    :ivar type: The type of the local shell call. Always ``local_shell_call``. Required.
+     LOCAL_SHELL_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL_CALL
+    :ivar id: The unique ID of the local shell call. Required.
+    :vartype id: str
+    :ivar call_id: The unique ID of the local shell tool call generated by the model. Required.
+    :vartype call_id: str
+    :ivar action: Required.
+    :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.LocalShellExecAction
+    :ivar status: The status of the local shell call. Required. Is one of the following types:
+     Literal["in_progress"], Literal["completed"], Literal["incomplete"]
+    :vartype status: str or str or str
+    """
+
+    type: Literal[ItemType.LOCAL_SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the local shell call. Always ``local_shell_call``. Required. LOCAL_SHELL_CALL."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the local shell call. Required."""
+    call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the local shell tool call generated by the model. Required."""
+    action: "_models.LocalShellExecAction" = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+    status: Literal["in_progress", "completed", "incomplete"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the local shell call. Required. Is one of the following types:
+    Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]"""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        call_id: str,
+        action: "_models.LocalShellExecAction",
+        status: Literal["in_progress", "completed", "incomplete"],
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.LOCAL_SHELL_CALL  # type: ignore
+
+
+class ItemLocalShellToolCallOutput(Item, discriminator="local_shell_call_output"):
+    """Local shell call output.
+
+    :ivar type: The type of the local shell tool call output. Always ``local_shell_call_output``.
+     Required. LOCAL_SHELL_CALL_OUTPUT.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL_CALL_OUTPUT
+    :ivar id: The unique ID of the local shell tool call generated by the model. Required.
+    :vartype id: str
+    :ivar output: A JSON string of the output of the local shell tool call. Required.
+    :vartype output: str
+    :ivar status: Is one of the following types: Literal["in_progress"], Literal["completed"],
+     Literal["incomplete"]
+    :vartype status: str or str or str
+    """
+
+    type: Literal[ItemType.LOCAL_SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the local shell tool call output. Always ``local_shell_call_output``. Required.
+    LOCAL_SHELL_CALL_OUTPUT."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the local shell tool call generated by the model. Required."""
+    output: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """A JSON string of the output of the local shell tool call. Required."""
+    status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"],
+    Literal[\"incomplete\"]"""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        output: str,
+        status: Optional[Literal["in_progress", "completed", "incomplete"]] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.LOCAL_SHELL_CALL_OUTPUT  # type: ignore
+
+
+class ItemMcpApprovalRequest(Item, discriminator="mcp_approval_request"):
+    """MCP approval request.
+
+    :ivar type: The type of the item. Always ``mcp_approval_request``. Required.
+     MCP_APPROVAL_REQUEST.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_APPROVAL_REQUEST
+    :ivar id: The unique ID of the approval request. Required.
+    :vartype id: str
+    :ivar server_label: The label of the MCP server making the request. Required.
+    :vartype server_label: str
+    :ivar name: The name of the tool to run. Required.
+    :vartype name: str
+    :ivar arguments: A JSON string of arguments for the tool. Required.
+    :vartype arguments: str
+    """
+
+    type: Literal[ItemType.MCP_APPROVAL_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the item. Always ``mcp_approval_request``. Required. MCP_APPROVAL_REQUEST."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the approval request. Required."""
+    server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The label of the MCP server making the request. Required."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the tool to run. Required."""
+    arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """A JSON string of arguments for the tool. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        server_label: str,
+        name: str,
+        arguments: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.MCP_APPROVAL_REQUEST  # type: ignore
+
+
+class ItemMcpListTools(Item, discriminator="mcp_list_tools"):
+    """MCP list tools.
+
+    :ivar type: The type of the item. Always ``mcp_list_tools``. Required. MCP_LIST_TOOLS.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_LIST_TOOLS
+    :ivar id: The unique ID of the list. Required.
+    :vartype id: str
+    :ivar server_label: The label of the MCP server. Required.
+    :vartype server_label: str
+    :ivar tools: The tools available on the server. Required.
+    :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.MCPListToolsTool]
+    :ivar error:
+    :vartype error: ~azure.ai.agentserver.responses.sdk.models.models.RealtimeMCPError
+    """
+
+    type: Literal[ItemType.MCP_LIST_TOOLS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the item. Always ``mcp_list_tools``. Required. MCP_LIST_TOOLS."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the list. Required."""
+    server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The label of the MCP server. Required."""
+    tools: list["_models.MCPListToolsTool"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The tools available on the server. Required."""
+    error: Optional["_models.RealtimeMCPError"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        server_label: str,
+        tools: list["_models.MCPListToolsTool"],
+        error: Optional["_models.RealtimeMCPError"] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.MCP_LIST_TOOLS  # type: ignore
+
+
+class ItemMcpToolCall(Item, discriminator="mcp_call"):
+    """MCP tool call.
+
+    :ivar type: The type of the item. Always ``mcp_call``. Required. MCP_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_CALL
+    :ivar id: The unique ID of the tool call. Required.
+    :vartype id: str
+    :ivar server_label: The label of the MCP server running the tool. Required.
+    :vartype server_label: str
+    :ivar name: The name of the tool that was run. Required.
+    :vartype name: str
+    :ivar arguments: A JSON string of the arguments passed to the tool. Required.
+    :vartype arguments: str
+    :ivar output:
+    :vartype output: str
+    :ivar error:
+    :vartype error: dict[str, any]
+    :ivar status: The status of the tool call. One of ``in_progress``, ``completed``,
+     ``incomplete``, ``calling``, or ``failed``. Known values are: "in_progress", "completed",
+     "incomplete", "calling", and "failed".
+    :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.MCPToolCallStatus
+    :ivar approval_request_id:
+    :vartype approval_request_id: str
+    """
+
+    type: Literal[ItemType.MCP_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the item. Always ``mcp_call``. Required. MCP_CALL."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the tool call. Required."""
+    server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The label of the MCP server running the tool. Required."""
+    name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The name of the tool that was run. Required."""
+    arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """A JSON string of the arguments passed to the tool. Required."""
+    output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    error: Optional[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    status: Optional[Union[str, "_models.MCPToolCallStatus"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the tool call. One of ``in_progress``, ``completed``, ``incomplete``,
+    ``calling``, or ``failed``. Known values are: \"in_progress\", \"completed\", \"incomplete\",
+    \"calling\", and \"failed\"."""
+    approval_request_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        server_label: str,
+        name: str,
+        arguments: str,
+        output: Optional[str] = None,
+        error: Optional[dict[str, Any]] = None,
+        status: Optional[Union[str, "_models.MCPToolCallStatus"]] = None,
+        approval_request_id: Optional[str] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.MCP_CALL  # type: ignore
+
+
+class ItemMessage(Item, discriminator="message"):
+    """Message.
+
+    :ivar type: The type of the message. Always set to ``message``. Required. MESSAGE.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MESSAGE
+    :ivar id: The unique ID of the message. Required.
+    :vartype id: str
+    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``.
+     Populated when items are returned via API. Required. Known values are: "in_progress",
+     "completed", and "incomplete".
+    :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageStatus
+    :ivar role: The role of the message. One of ``unknown``, ``user``, ``assistant``, ``system``,
+     ``critic``, ``discriminator``, ``developer``, or ``tool``. Required. Known values are:
+     "unknown", "user", "assistant", "system", "critic", "discriminator", "developer", and "tool".
+    :vartype role: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageRole
+    :ivar content: The content of the message. Required.
+    :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.MessageContent]
+    """
+
+    type: Literal[ItemType.MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the message. Always set to ``message``. Required. MESSAGE."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the message. Required."""
+    status: Union[str, "_models.MessageStatus"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated when
+    items are returned via API. Required. Known values are: \"in_progress\", \"completed\", and
+    \"incomplete\"."""
+    role: Union[str, "_models.MessageRole"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The role of the message. One of ``unknown``, ``user``, ``assistant``, ``system``, ``critic``,
+    ``discriminator``, ``developer``, or ``tool``. Required. Known values are: \"unknown\",
+    \"user\", \"assistant\", \"system\", \"critic\", \"discriminator\", \"developer\", and
+    \"tool\"."""
+    content: list["_models.MessageContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The content of the message. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        status: Union[str, "_models.MessageStatus"],
+        role: Union[str, "_models.MessageRole"],
+        content: list["_models.MessageContent"],
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.MESSAGE  # type: ignore
+
+
+class ItemOutputMessage(Item, discriminator="output_message"):
+    """Output message.
+
+    :ivar id: The unique ID of the output message. Required.
+    :vartype id: str
+    :ivar type: The type of the output message. Always ``output_message``. Required.
+     OUTPUT_MESSAGE.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OUTPUT_MESSAGE
+    :ivar role: The role of the output message. Always ``assistant``. Required. Default value is
+     "assistant".
+    :vartype role: str
+    :ivar content: The content of the output message. Required.
+    :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.OutputMessageContent]
+    :ivar phase: Known values are: "commentary" and "final_answer".
+    :vartype phase: str or ~azure.ai.agentserver.responses.sdk.models.models.MessagePhase
+    :ivar status: The status of the message output. One of ``in_progress``, ``completed``, or
+     ``incomplete``. Populated when output items are returned via API. Required. Is one of the
+     following types: Literal["in_progress"], Literal["completed"], Literal["incomplete"]
+    :vartype status: str or str or str
+    """
+
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the output message. Required."""
+    type: Literal[ItemType.OUTPUT_MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the output message. Always ``output_message``. Required. OUTPUT_MESSAGE."""
+    role: Literal["assistant"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The role of the output message. Always ``assistant``. Required. Default value is \"assistant\"."""
+    content: list["_models.OutputMessageContent"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The content of the output message. Required."""
+    phase: Optional[Union[str, "_models.MessagePhase"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """Known values are: \"commentary\" and \"final_answer\"."""
+    status: Literal["in_progress", "completed", "incomplete"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the message output. One of ``in_progress``, ``completed``, or ``incomplete``.
+    Populated when output items are returned via API. Required. Is one of the following types:
+    Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]"""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        content: list["_models.OutputMessageContent"],
+        status: Literal["in_progress", "completed", "incomplete"],
+        phase: Optional[Union[str, "_models.MessagePhase"]] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.OUTPUT_MESSAGE  # type: ignore
+        self.role: Literal["assistant"] = "assistant"
+
+
+class ItemReasoningItem(Item, discriminator="reasoning"):
+    """Reasoning.
+
+    :ivar type: The type of the object. Always ``reasoning``. Required. REASONING.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REASONING
+    :ivar id: The unique identifier of the reasoning content. Required.
+    :vartype id: str
+    :ivar encrypted_content:
+    :vartype encrypted_content: str
+    :ivar summary: Reasoning summary content. Required.
+    :vartype summary: list[~azure.ai.agentserver.responses.sdk.models.models.SummaryTextContent]
+    :ivar content: Reasoning text content.
+    :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.ReasoningTextContent]
+    :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``.
+     Populated when items are returned via API. Is one of the following types:
+     Literal["in_progress"], Literal["completed"], Literal["incomplete"]
+    :vartype status: str or str or str
+    """
+
+    type: Literal[ItemType.REASONING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the object. Always ``reasoning``. Required. REASONING."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique identifier of the reasoning content. Required."""
+    encrypted_content: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    summary: list["_models.SummaryTextContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Reasoning summary content. Required."""
+    content: Optional[list["_models.ReasoningTextContent"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """Reasoning text content."""
+    status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated
+    when items are returned via API. Is one of the following types: Literal[\"in_progress\"],
+    Literal[\"completed\"], Literal[\"incomplete\"]"""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+        summary: list["_models.SummaryTextContent"],
+        encrypted_content: Optional[str] = None,
+        content: Optional[list["_models.ReasoningTextContent"]] = None,
+        status: Optional[Literal["in_progress", "completed", "incomplete"]] = None,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.REASONING  # type: ignore
+
+
+class ItemReferenceParam(Item, discriminator="item_reference"):
+    """Item reference.
+
+    :ivar type: The type of item to reference. Always ``item_reference``. Required. ITEM_REFERENCE.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ITEM_REFERENCE
+    :ivar id: The ID of the item to reference. Required.
+    :vartype id: str
+    """
+
+    type: Literal[ItemType.ITEM_REFERENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of item to reference. Always ``item_reference``. Required. ITEM_REFERENCE."""
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item to reference. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        id: str,  # pylint: disable=redefined-builtin
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ItemType.ITEM_REFERENCE  # type: ignore
+
+
+class ItemWebSearchToolCall(Item, discriminator="web_search_call"):
+    """Web search tool call.
+
+    :ivar id: The unique ID of the web search tool call. Required.
+    :vartype id: str
+    :ivar type: The type of the web search tool call. Always ``web_search_call``. Required.
+     WEB_SEARCH_CALL.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH_CALL
+    :ivar status: The status of the web search tool call. Required. Is one of the following types:
+     Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["failed"]
+    :vartype status: str or str or str or str
+    :ivar action: An object describing the specific action taken in this web search call. Includes
+     details on how the model used the web (search, open_page, find_in_page). Required. Is one of
+     the following types: WebSearchActionSearch, WebSearchActionOpenPage, WebSearchActionFind
+    :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionSearch or
+     ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionOpenPage or
+     ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionFind
+    """
+
+    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique ID of the web search tool call. Required."""
+    type: Literal[ItemType.WEB_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the web search tool call. Always ``web_search_call``. Required. WEB_SEARCH_CALL."""
+    status: Literal["in_progress", "searching", "completed", "failed"] = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The status of the web search tool call. Required. Is one of the following types:
+    Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"], Literal[\"failed\"]"""
+    action: Union["_models.WebSearchActionSearch", "_models.WebSearchActionOpenPage", "_models.WebSearchActionFind"] = (
+        rest_field(visibility=["read", "create", "update", "delete", "query"])
+    )
+    """An object describing the specific action taken in this web search call. Includes details on how
+    the model used the web (search, open_page, find_in_page). Required.
Is one of the following + types: WebSearchActionSearch, WebSearchActionOpenPage, WebSearchActionFind""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "searching", "completed", "failed"], + action: Union[ + "_models.WebSearchActionSearch", "_models.WebSearchActionOpenPage", "_models.WebSearchActionFind" + ], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.WEB_SEARCH_CALL # type: ignore + + +class KeyPressAction(ComputerAction, discriminator="keypress"): + """KeyPress. + + :ivar type: Specifies the event type. For a keypress action, this property is always set to + ``keypress``. Required. KEYPRESS. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.KEYPRESS + :ivar keys_property: The combination of keys the model is requesting to be pressed. This is an + array of strings, each representing a key. Required. + :vartype keys_property: list[str] + """ + + type: Literal[ComputerActionType.KEYPRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a keypress action, this property is always set to ``keypress``. + Required. KEYPRESS.""" + keys_property: list[str] = rest_field( + name="keys", visibility=["read", "create", "update", "delete", "query"], original_tsp_name="keys" + ) + """The combination of keys the model is requesting to be pressed. This is an array of strings, + each representing a key. Required.""" + + @overload + def __init__( + self, + *, + keys_property: list[str], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.KEYPRESS # type: ignore + + +class LocalEnvironmentResource(FunctionShellCallEnvironment, discriminator="local"): + """Local Environment. + + :ivar type: The environment type. Always ``local``. Required. LOCAL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL + """ + + type: Literal[FunctionShellCallEnvironmentType.LOCAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The environment type. Always ``local``. Required. LOCAL.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = FunctionShellCallEnvironmentType.LOCAL # type: ignore + + +class LocalShellExecAction(_Model): + """Local shell exec action. + + :ivar type: The type of the local shell action. Always ``exec``. Required. Default value is + "exec". + :vartype type: str + :ivar command: The command to run. Required. + :vartype command: list[str] + :ivar timeout_ms: + :vartype timeout_ms: int + :ivar working_directory: + :vartype working_directory: str + :ivar env: Environment variables to set for the command. Required. + :vartype env: dict[str, str] + :ivar user: + :vartype user: str + """ + + type: Literal["exec"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the local shell action. Always ``exec``. Required. Default value is \"exec\".""" + command: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The command to run. 
Required.""" + timeout_ms: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + working_directory: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + env: dict[str, str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Environment variables to set for the command. Required.""" + user: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + command: list[str], + env: dict[str, str], + timeout_ms: Optional[int] = None, + working_directory: Optional[str] = None, + user: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["exec"] = "exec" + + +class LocalShellToolParam(Tool, discriminator="local_shell"): + """Local shell tool. + + :ivar type: The type of the local shell tool. Always ``local_shell``. Required. LOCAL_SHELL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL + """ + + type: Literal[ToolType.LOCAL_SHELL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the local shell tool. Always ``local_shell``. Required. LOCAL_SHELL.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.LOCAL_SHELL # type: ignore + + +class LocalSkillParam(_Model): + """LocalSkillParam. + + :ivar name: The name of the skill. Required. 
+ :vartype name: str
+ :ivar description: The description of the skill. Required.
+ :vartype description: str
+ :ivar path: The path to the directory containing the skill. Required.
+ :vartype path: str
+ """
+
+ name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The name of the skill. Required."""
+ description: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The description of the skill. Required."""
+ path: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The path to the directory containing the skill. Required."""
+
+ @overload
+ def __init__(
+ self,
+ *,
+ name: str,
+ description: str,
+ path: str,
+ ) -> None: ...
+
+ @overload
+ def __init__(self, mapping: Mapping[str, Any]) -> None:
+ """
+ :param mapping: raw JSON to initialize the model.
+ :type mapping: Mapping[str, Any]
+ """
+
+ def __init__(self, *args: Any, **kwargs: Any) -> None:
+ super().__init__(*args, **kwargs)
+
+
+class LogProb(_Model):
+ """Log probability.
+
+ :ivar token: Required.
+ :vartype token: str
+ :ivar logprob: Required.
+ :vartype logprob: float
+ :ivar bytes: Required.
+ :vartype bytes: list[int]
+ :ivar top_logprobs: Required.
+ :vartype top_logprobs: list[~azure.ai.agentserver.responses.sdk.models.models.TopLogProb]
+ """
+
+ token: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Required."""
+ logprob: float = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Required."""
+ bytes: list[int] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Required."""
+ top_logprobs: list["_models.TopLogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """Required."""
+
+ @overload
+ def __init__(
+ self,
+ *,
+ token: str,
+ logprob: float,
+ bytes: list[int],
+ top_logprobs: list["_models.TopLogProb"],
+ ) -> None: ...
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MCPApprovalResponse(Item, discriminator="mcp_approval_response"): + """MCP approval response. + + :ivar type: The type of the item. Always ``mcp_approval_response``. Required. + MCP_APPROVAL_RESPONSE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_APPROVAL_RESPONSE + :ivar id: + :vartype id: str + :ivar approval_request_id: The ID of the approval request being answered. Required. + :vartype approval_request_id: str + :ivar approve: Whether the request was approved. Required. + :vartype approve: bool + :ivar reason: + :vartype reason: str + """ + + type: Literal[ItemType.MCP_APPROVAL_RESPONSE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_approval_response``. Required. MCP_APPROVAL_RESPONSE.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + approval_request_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the approval request being answered. Required.""" + approve: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether the request was approved. Required.""" + reason: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + approval_request_id: str, + approve: bool, + id: Optional[str] = None, # pylint: disable=redefined-builtin + reason: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.MCP_APPROVAL_RESPONSE # type: ignore + + +class MCPListToolsTool(_Model): + """MCP list tools tool. + + :ivar name: The name of the tool. Required. + :vartype name: str + :ivar description: + :vartype description: str + :ivar input_schema: The JSON schema describing the tool's input. Required. + :vartype input_schema: + ~azure.ai.agentserver.responses.sdk.models.models.MCPListToolsToolInputSchema + :ivar annotations: + :vartype annotations: + ~azure.ai.agentserver.responses.sdk.models.models.MCPListToolsToolAnnotations + """ + + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the tool. Required.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + input_schema: "_models.MCPListToolsToolInputSchema" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The JSON schema describing the tool's input. Required.""" + annotations: Optional["_models.MCPListToolsToolAnnotations"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + name: str, + input_schema: "_models.MCPListToolsToolInputSchema", + description: Optional[str] = None, + annotations: Optional["_models.MCPListToolsToolAnnotations"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MCPListToolsToolAnnotations(_Model): + """MCPListToolsToolAnnotations.""" + + +class MCPListToolsToolInputSchema(_Model): + """MCPListToolsToolInputSchema.""" + + +class MCPTool(Tool, discriminator="mcp"): + """MCP tool. 
+ + :ivar type: The type of the MCP tool. Always ``mcp``. Required. MCP. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP + :ivar server_label: A label for this MCP server, used to identify it in tool calls. Required. + :vartype server_label: str + :ivar server_url: The URL for the MCP server. One of ``server_url`` or ``connector_id`` must be + provided. + :vartype server_url: str + :ivar connector_id: Identifier for service connectors, like those available in ChatGPT. One of + ``server_url`` or ``connector_id`` must be provided. Learn more about service connectors `here + `_. Currently supported ``connector_id`` values are: + + * Dropbox: `connector_dropbox` + * Gmail: `connector_gmail` + * Google Calendar: `connector_googlecalendar` + * Google Drive: `connector_googledrive` + * Microsoft Teams: `connector_microsoftteams` + * Outlook Calendar: `connector_outlookcalendar` + * Outlook Email: `connector_outlookemail` + * SharePoint: `connector_sharepoint`. Is one of the following types: + Literal["connector_dropbox"], Literal["connector_gmail"], Literal["connector_googlecalendar"], + Literal["connector_googledrive"], Literal["connector_microsoftteams"], + Literal["connector_outlookcalendar"], Literal["connector_outlookemail"], + Literal["connector_sharepoint"] + :vartype connector_id: str or str or str or str or str or str or str or str + :ivar authorization: An OAuth access token that can be used with a remote MCP server, either + with a custom MCP server URL or a service connector. Your application must handle the OAuth + authorization flow and provide the token here. + :vartype authorization: str + :ivar server_description: Optional description of the MCP server, used to provide more context. + :vartype server_description: str + :ivar headers: + :vartype headers: dict[str, str] + :ivar allowed_tools: Is either a [str] type or a MCPToolFilter type. 
+ :vartype allowed_tools: list[str] or + ~azure.ai.agentserver.responses.sdk.models.models.MCPToolFilter + :ivar require_approval: Is one of the following types: MCPToolRequireApproval, + Literal["always"], Literal["never"] + :vartype require_approval: + ~azure.ai.agentserver.responses.sdk.models.models.MCPToolRequireApproval or str or str + :ivar defer_loading: Whether this MCP tool is deferred and discovered via tool search. + :vartype defer_loading: bool + :ivar project_connection_id: The connection ID in the project for the MCP server. The + connection stores authentication and other connection details needed to connect to the MCP + server. + :vartype project_connection_id: str + """ + + type: Literal[ToolType.MCP] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the MCP tool. Always ``mcp``. Required. MCP.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A label for this MCP server, used to identify it in tool calls. Required.""" + server_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL for the MCP server. One of ``server_url`` or ``connector_id`` must be provided.""" + connector_id: Optional[ + Literal[ + "connector_dropbox", + "connector_gmail", + "connector_googlecalendar", + "connector_googledrive", + "connector_microsoftteams", + "connector_outlookcalendar", + "connector_outlookemail", + "connector_sharepoint", + ] + ] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Identifier for service connectors, like those available in ChatGPT. One of ``server_url`` or + ``connector_id`` must be provided. Learn more about service connectors `here + `_. 
Currently supported ``connector_id`` values are: + + * Dropbox: `connector_dropbox` + * Gmail: `connector_gmail` + * Google Calendar: `connector_googlecalendar` + * Google Drive: `connector_googledrive` + * Microsoft Teams: `connector_microsoftteams` + * Outlook Calendar: `connector_outlookcalendar` + * Outlook Email: `connector_outlookemail` + * SharePoint: `connector_sharepoint`. Is one of the following types: + Literal[\"connector_dropbox\"], Literal[\"connector_gmail\"], + Literal[\"connector_googlecalendar\"], Literal[\"connector_googledrive\"], + Literal[\"connector_microsoftteams\"], Literal[\"connector_outlookcalendar\"], + Literal[\"connector_outlookemail\"], Literal[\"connector_sharepoint\"]""" + authorization: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An OAuth access token that can be used with a remote MCP server, either with a custom MCP + server URL or a service connector. Your application must handle the OAuth authorization flow + and provide the token here.""" + server_description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional description of the MCP server, used to provide more context.""" + headers: Optional[dict[str, str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + allowed_tools: Optional[Union[list[str], "_models.MCPToolFilter"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a [str] type or a MCPToolFilter type.""" + require_approval: Optional[Union["_models.MCPToolRequireApproval", Literal["always"], Literal["never"]]] = ( + rest_field(visibility=["read", "create", "update", "delete", "query"]) + ) + """Is one of the following types: MCPToolRequireApproval, Literal[\"always\"], Literal[\"never\"]""" + defer_loading: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether this MCP tool is deferred and discovered via tool 
search.""" + project_connection_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The connection ID in the project for the MCP server. The connection stores authentication and + other connection details needed to connect to the MCP server.""" + + @overload + def __init__( + self, + *, + server_label: str, + server_url: Optional[str] = None, + connector_id: Optional[ + Literal[ + "connector_dropbox", + "connector_gmail", + "connector_googlecalendar", + "connector_googledrive", + "connector_microsoftteams", + "connector_outlookcalendar", + "connector_outlookemail", + "connector_sharepoint", + ] + ] = None, + authorization: Optional[str] = None, + server_description: Optional[str] = None, + headers: Optional[dict[str, str]] = None, + allowed_tools: Optional[Union[list[str], "_models.MCPToolFilter"]] = None, + require_approval: Optional[Union["_models.MCPToolRequireApproval", Literal["always"], Literal["never"]]] = None, + defer_loading: Optional[bool] = None, + project_connection_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.MCP # type: ignore + + +class MCPToolFilter(_Model): + """MCP tool filter. + + :ivar tool_names: MCP allowed tools. + :vartype tool_names: list[str] + :ivar read_only: Indicates whether or not a tool modifies data or is read-only. If an MCP + server is `annotated with `readOnlyHint` + `_, + it will match this filter. 
+ :vartype read_only: bool + """ + + tool_names: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """MCP allowed tools.""" + read_only: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Indicates whether or not a tool modifies data or is read-only. If an MCP server is `annotated + with `readOnlyHint` + `_, + it will match this filter.""" + + @overload + def __init__( + self, + *, + tool_names: Optional[list[str]] = None, + read_only: Optional[bool] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MCPToolRequireApproval(_Model): + """MCPToolRequireApproval. + + :ivar always: + :vartype always: ~azure.ai.agentserver.responses.sdk.models.models.MCPToolFilter + :ivar never: + :vartype never: ~azure.ai.agentserver.responses.sdk.models.models.MCPToolFilter + """ + + always: Optional["_models.MCPToolFilter"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + never: Optional["_models.MCPToolFilter"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + always: Optional["_models.MCPToolFilter"] = None, + never: Optional["_models.MCPToolFilter"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MemorySearchItem(_Model): + """A retrieved memory item from memory search. + + :ivar memory_item: Retrieved memory item. Required. 
+ :vartype memory_item: ~azure.ai.agentserver.responses.sdk.models.models.MemoryItem + """ + + memory_item: "_models.MemoryItem" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Retrieved memory item. Required.""" + + @overload + def __init__( + self, + *, + memory_item: "_models.MemoryItem", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MemorySearchOptions(_Model): + """Memory search options. + + :ivar max_memories: Maximum number of memory items to return. + :vartype max_memories: int + """ + + max_memories: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Maximum number of memory items to return.""" + + @overload + def __init__( + self, + *, + max_memories: Optional[int] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class MemorySearchPreviewTool(Tool, discriminator="memory_search_preview"): + """A tool for integrating memories into the agent. + + :ivar type: The type of the tool. Always ``memory_search_preview``. Required. + MEMORY_SEARCH_PREVIEW. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MEMORY_SEARCH_PREVIEW + :ivar memory_store_name: The name of the memory store to use. Required. + :vartype memory_store_name: str + :ivar scope: The namespace used to group and isolate memories, such as a user ID. Limits which + memories can be retrieved or updated. Use special variable ``{{$userId}}`` to scope memories to + the current signed-in user. Required. 
+ :vartype scope: str + :ivar search_options: Options for searching the memory store. + :vartype search_options: ~azure.ai.agentserver.responses.sdk.models.models.MemorySearchOptions + :ivar update_delay: Time to wait before updating memories after inactivity (seconds). Default + 300. + :vartype update_delay: int + """ + + type: Literal[ToolType.MEMORY_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the tool. Always ``memory_search_preview``. Required. MEMORY_SEARCH_PREVIEW.""" + memory_store_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the memory store to use. Required.""" + scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace used to group and isolate memories, such as a user ID. Limits which memories can + be retrieved or updated. Use special variable ``{{$userId}}`` to scope memories to the current + signed-in user. Required.""" + search_options: Optional["_models.MemorySearchOptions"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Options for searching the memory store.""" + update_delay: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Time to wait before updating memories after inactivity (seconds). Default 300.""" + + @overload + def __init__( + self, + *, + memory_store_name: str, + scope: str, + search_options: Optional["_models.MemorySearchOptions"] = None, + update_delay: Optional[int] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any]
+ """
+
+ def __init__(self, *args: Any, **kwargs: Any) -> None:
+ super().__init__(*args, **kwargs)
+ self.type = ToolType.MEMORY_SEARCH_PREVIEW # type: ignore
+
+
+class MemorySearchTool(Tool, discriminator="memory_search"):
+ """A tool for integrating memories into the agent.
+
+ :ivar type: The type of the tool. Always ``memory_search``. Required. MEMORY_SEARCH.
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MEMORY_SEARCH
+ :ivar memory_store_name: The name of the memory store to use. Required.
+ :vartype memory_store_name: str
+ :ivar scope: The namespace used to group and isolate memories, such as a user ID. Limits which
+ memories can be retrieved or updated. Use special variable ``{{$userId}}`` to scope memories to
+ the current signed-in user. Required.
+ :vartype scope: str
+ :ivar search_options: Options for searching the memory store.
+ :vartype search_options: ~azure.ai.agentserver.responses.sdk.models.models.MemorySearchOptions
+ :ivar update_delay: Time to wait before updating memories after inactivity (seconds). Default
+ 300.
+ :vartype update_delay: int
+ """
+
+ type: Literal[ToolType.MEMORY_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore
+ """The type of the tool. Always ``memory_search``. Required. MEMORY_SEARCH."""
+ memory_store_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The name of the memory store to use. Required."""
+ scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+ """The namespace used to group and isolate memories, such as a user ID. Limits which memories can
+ be retrieved or updated. Use special variable ``{{$userId}}`` to scope memories to the current
+ signed-in user.
Required.""" + search_options: Optional["_models.MemorySearchOptions"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Options for searching the memory store.""" + update_delay: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Time to wait before updating memories after inactivity (seconds). Default 300.""" + + @overload + def __init__( + self, + *, + memory_store_name: str, + scope: str, + search_options: Optional["_models.MemorySearchOptions"] = None, + update_delay: Optional[int] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.MEMORY_SEARCH # type: ignore + + +class MemorySearchToolCallItemParam(Item, discriminator="memory_search_call"): + """MemorySearchToolCallItemParam. + + :ivar type: Required. MEMORY_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MEMORY_SEARCH_CALL + :ivar results: The results returned from the memory search. + :vartype results: list[~azure.ai.agentserver.responses.sdk.models.models.MemorySearchItem] + """ + + type: Literal[ItemType.MEMORY_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. MEMORY_SEARCH_CALL.""" + results: Optional[list["_models.MemorySearchItem"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The results returned from the memory search.""" + + @overload + def __init__( + self, + *, + results: Optional[list["_models.MemorySearchItem"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.MEMORY_SEARCH_CALL # type: ignore + + +class MemorySearchToolCallItemResource(OutputItem, discriminator="memory_search_call"): + """MemorySearchToolCallItemResource. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. MEMORY_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MEMORY_SEARCH_CALL + :ivar status: The status of the memory search tool call. One of ``in_progress``, ``searching``, + ``completed``, ``incomplete`` or ``failed``,. Required. Is one of the following types: + Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["incomplete"], + Literal["failed"] + :vartype status: str or str or str or str or str + :ivar results: The results returned from the memory search. + :vartype results: list[~azure.ai.agentserver.responses.sdk.models.models.MemorySearchItem] + """ + + type: Literal[OutputItemType.MEMORY_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. MEMORY_SEARCH_CALL.""" + status: Literal["in_progress", "searching", "completed", "incomplete", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the memory search tool call. One of ``in_progress``, ``searching``, + ``completed``, ``incomplete`` or ``failed``,. Required. 
Is one of the following types: + Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"], + Literal[\"incomplete\"], Literal[\"failed\"]""" + results: Optional[list["_models.MemorySearchItem"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The results returned from the memory search.""" + + @overload + def __init__( + self, + *, + status: Literal["in_progress", "searching", "completed", "incomplete", "failed"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + results: Optional[list["_models.MemorySearchItem"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.MEMORY_SEARCH_CALL # type: ignore + + +class MessageContentInputFileContent(MessageContent, discriminator="input_file"): + """Input file. + + :ivar type: The type of the input item. Always ``input_file``. Required. INPUT_FILE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INPUT_FILE + :ivar file_id: + :vartype file_id: str + :ivar filename: The name of the file to be sent to the model. + :vartype filename: str + :ivar file_data: The content of the file to be sent to the model. + :vartype file_data: str + :ivar file_url: The URL of the file to be sent to the model. + :vartype file_url: str + """ + + type: Literal[MessageContentType.INPUT_FILE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the input item. Always ``input_file``. Required. 
INPUT_FILE.""" + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + filename: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the file to be sent to the model.""" + file_data: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content of the file to be sent to the model.""" + file_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the file to be sent to the model.""" + + @overload + def __init__( + self, + *, + file_id: Optional[str] = None, + filename: Optional[str] = None, + file_data: Optional[str] = None, + file_url: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.INPUT_FILE # type: ignore + + +class MessageContentInputImageContent(MessageContent, discriminator="input_image"): + """Input image. + + :ivar type: The type of the input item. Always ``input_image``. Required. INPUT_IMAGE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INPUT_IMAGE + :ivar image_url: + :vartype image_url: str + :ivar file_id: + :vartype file_id: str + :ivar detail: The detail level of the image to be sent to the model. One of ``high``, ``low``, + ``auto``, or ``original``. Defaults to ``auto``. Required. Known values are: "low", "high", + "auto", and "original". + :vartype detail: str or ~azure.ai.agentserver.responses.sdk.models.models.ImageDetail + """ + + type: Literal[MessageContentType.INPUT_IMAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the input item. Always ``input_image``. 
Required. INPUT_IMAGE.""" + image_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + file_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + detail: Union[str, "_models.ImageDetail"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The detail level of the image to be sent to the model. One of ``high``, ``low``, ``auto``, or + ``original``. Defaults to ``auto``. Required. Known values are: \"low\", \"high\", \"auto\", + and \"original\".""" + + @overload + def __init__( + self, + *, + detail: Union[str, "_models.ImageDetail"], + image_url: Optional[str] = None, + file_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.INPUT_IMAGE # type: ignore + + +class MessageContentInputTextContent(MessageContent, discriminator="input_text"): + """Input text. + + :ivar type: The type of the input item. Always ``input_text``. Required. INPUT_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.INPUT_TEXT + :ivar text: The text input to the model. Required. + :vartype text: str + """ + + type: Literal[MessageContentType.INPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the input item. Always ``input_text``. Required. INPUT_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text input to the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.INPUT_TEXT # type: ignore + + +class MessageContentOutputTextContent(MessageContent, discriminator="output_text"): + """Output text. + + :ivar type: The type of the output text. Always ``output_text``. Required. OUTPUT_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OUTPUT_TEXT + :ivar text: The text output from the model. Required. + :vartype text: str + :ivar annotations: The annotations of the text output. Required. + :vartype annotations: list[~azure.ai.agentserver.responses.sdk.models.models.Annotation] + :ivar logprobs: Required. + :vartype logprobs: list[~azure.ai.agentserver.responses.sdk.models.models.LogProb] + """ + + type: Literal[MessageContentType.OUTPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the output text. Always ``output_text``. Required. OUTPUT_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text output from the model. Required.""" + annotations: list["_models.Annotation"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The annotations of the text output. Required.""" + logprobs: list["_models.LogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + text: str, + annotations: list["_models.Annotation"], + logprobs: list["_models.LogProb"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.OUTPUT_TEXT # type: ignore + + +class MessageContentReasoningTextContent(MessageContent, discriminator="reasoning_text"): + """Reasoning text. + + :ivar type: The type of the reasoning text. Always ``reasoning_text``. Required. + REASONING_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REASONING_TEXT + :ivar text: The reasoning text from the model. Required. + :vartype text: str + """ + + type: Literal[MessageContentType.REASONING_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the reasoning text. Always ``reasoning_text``. Required. REASONING_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The reasoning text from the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.REASONING_TEXT # type: ignore + + +class MessageContentRefusalContent(MessageContent, discriminator="refusal"): + """Refusal. + + :ivar type: The type of the refusal. Always ``refusal``. Required. REFUSAL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REFUSAL + :ivar refusal: The refusal explanation from the model. Required. + :vartype refusal: str + """ + + type: Literal[MessageContentType.REFUSAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the refusal. Always ``refusal``. Required. 
REFUSAL.""" + refusal: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The refusal explanation from the model. Required.""" + + @overload + def __init__( + self, + *, + refusal: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.REFUSAL # type: ignore + + +class Metadata(_Model): + """Set of 16 key-value pairs that can be attached to an object. This can be useful for storing + additional information about the object in a structured format, and querying for objects via + API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are + strings with a maximum length of 512 characters. + + """ + + +class MicrosoftFabricPreviewTool(Tool, discriminator="fabric_dataagent_preview"): + """The input definition information for a Microsoft Fabric tool as used to configure an agent. + + :ivar type: The object type, which is always 'fabric_dataagent_preview'. Required. + FABRIC_DATAAGENT_PREVIEW. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.FABRIC_DATAAGENT_PREVIEW + :ivar fabric_dataagent_preview: The fabric data agent tool parameters. Required. + :vartype fabric_dataagent_preview: + ~azure.ai.agentserver.responses.sdk.models.models.FabricDataAgentToolParameters + """ + + type: Literal[ToolType.FABRIC_DATAAGENT_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'fabric_dataagent_preview'. Required. + FABRIC_DATAAGENT_PREVIEW.""" + fabric_dataagent_preview: "_models.FabricDataAgentToolParameters" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The fabric data agent tool parameters. 
Required.""" + + @overload + def __init__( + self, + *, + fabric_dataagent_preview: "_models.FabricDataAgentToolParameters", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.FABRIC_DATAAGENT_PREVIEW # type: ignore + + +class MoveParam(ComputerAction, discriminator="move"): + """Move. + + :ivar type: Specifies the event type. For a move action, this property is always set to + ``move``. Required. MOVE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MOVE + :ivar x: The x-coordinate to move to. Required. + :vartype x: int + :ivar y: The y-coordinate to move to. Required. + :vartype y: int + """ + + type: Literal[ComputerActionType.MOVE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a move action, this property is always set to ``move``. Required. + MOVE.""" + x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The x-coordinate to move to. Required.""" + y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The y-coordinate to move to. Required.""" + + @overload + def __init__( + self, + *, + x: int, + y: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.MOVE # type: ignore + + +class NamespaceToolParam(Tool, discriminator="namespace"): + """Namespace. + + :ivar type: The type of the tool. Always ``namespace``. Required. NAMESPACE. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.NAMESPACE + :ivar name: The namespace name used in tool calls (for example, ``crm``). Required. + :vartype name: str + :ivar description: A description of the namespace shown to the model. Required. + :vartype description: str + :ivar tools: The function/custom tools available inside this namespace. Required. + :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.FunctionToolParam or + ~azure.ai.agentserver.responses.sdk.models.models.CustomToolParam] + """ + + type: Literal[ToolType.NAMESPACE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the tool. Always ``namespace``. Required. NAMESPACE.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace name used in tool calls (for example, ``crm``). Required.""" + description: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A description of the namespace shown to the model. Required.""" + tools: list[Union["_models.FunctionToolParam", "_models.CustomToolParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The function/custom tools available inside this namespace. Required.""" + + @overload + def __init__( + self, + *, + name: str, + description: str, + tools: list[Union["_models.FunctionToolParam", "_models.CustomToolParam"]], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.NAMESPACE # type: ignore + + +class OAuthConsentRequestOutputItem(OutputItem, discriminator="oauth_consent_request"): + """Request from the service for the user to perform OAuth consent. 
+ + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar id: Required. + :vartype id: str + :ivar type: Required. OAUTH_CONSENT_REQUEST. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OAUTH_CONSENT_REQUEST + :ivar consent_link: The link the user can use to perform OAuth consent. Required. + :vartype consent_link: str + :ivar server_label: The server label for the OAuth consent request. Required. + :vartype server_label: str + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + type: Literal[OutputItemType.OAUTH_CONSENT_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. OAUTH_CONSENT_REQUEST.""" + consent_link: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The link the user can use to perform OAuth consent. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The server label for the OAuth consent request. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + consent_link: str, + server_label: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.OAUTH_CONSENT_REQUEST # type: ignore + + +class OpenApiAuthDetails(_Model): + """authentication details for OpenApiFunctionDefinition. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + OpenApiAnonymousAuthDetails, OpenApiManagedAuthDetails, OpenApiProjectConnectionAuthDetails + + :ivar type: The type of authentication, must be anonymous/project_connection/managed_identity. + Required. Known values are: "anonymous", "project_connection", and "managed_identity". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OpenApiAuthType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """The type of authentication, must be anonymous/project_connection/managed_identity. Required. + Known values are: \"anonymous\", \"project_connection\", and \"managed_identity\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OpenApiAnonymousAuthDetails(OpenApiAuthDetails, discriminator="anonymous"): + """Security details for OpenApi anonymous authentication. + + :ivar type: The object type, which is always 'anonymous'. Required. ANONYMOUS. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ANONYMOUS + """ + + type: Literal[OpenApiAuthType.ANONYMOUS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'anonymous'. Required. 
ANONYMOUS.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OpenApiAuthType.ANONYMOUS # type: ignore + + +class OpenApiFunctionDefinition(_Model): + """The input definition information for an openapi function. + + :ivar name: The name of the function to be called. Required. + :vartype name: str + :ivar description: A description of what the function does, used by the model to choose when + and how to call the function. + :vartype description: str + :ivar spec: The openapi function shape, described as a JSON Schema object. Required. + :vartype spec: dict[str, any] + :ivar auth: Open API authentication details. Required. + :vartype auth: ~azure.ai.agentserver.responses.sdk.models.models.OpenApiAuthDetails + :ivar default_params: List of OpenAPI spec parameters that will use user-provided defaults. + :vartype default_params: list[str] + :ivar functions: List of function definitions used by OpenApi tool. + :vartype functions: + list[~azure.ai.agentserver.responses.sdk.models.models.OpenApiFunctionDefinitionFunction] + """ + + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to be called. Required.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A description of what the function does, used by the model to choose when and how to call the + function.""" + spec: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The openapi function shape, described as a JSON Schema object. 
Required.""" + auth: "_models.OpenApiAuthDetails" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Open API authentication details. Required.""" + default_params: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """List of OpenAPI spec parameters that will use user-provided defaults.""" + functions: Optional[list["_models.OpenApiFunctionDefinitionFunction"]] = rest_field(visibility=["read"]) + """List of function definitions used by OpenApi tool.""" + + @overload + def __init__( + self, + *, + name: str, + spec: dict[str, Any], + auth: "_models.OpenApiAuthDetails", + description: Optional[str] = None, + default_params: Optional[list[str]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OpenApiFunctionDefinitionFunction(_Model): + """OpenApiFunctionDefinitionFunction. + + :ivar name: The name of the function to be called. Required. + :vartype name: str + :ivar description: A description of what the function does, used by the model to choose when + and how to call the function. + :vartype description: str + :ivar parameters: The parameters the functions accepts, described as a JSON Schema object. + Required. + :vartype parameters: dict[str, any] + """ + + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to be called. 
Required.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A description of what the function does, used by the model to choose when and how to call the + function.""" + parameters: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The parameters the functions accepts, described as a JSON Schema object. Required.""" + + @overload + def __init__( + self, + *, + name: str, + parameters: dict[str, Any], + description: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OpenApiManagedAuthDetails(OpenApiAuthDetails, discriminator="managed_identity"): + """Security details for OpenApi managed_identity authentication. + + :ivar type: The object type, which is always 'managed_identity'. Required. MANAGED_IDENTITY. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MANAGED_IDENTITY + :ivar security_scheme: Connection auth security details. Required. + :vartype security_scheme: + ~azure.ai.agentserver.responses.sdk.models.models.OpenApiManagedSecurityScheme + """ + + type: Literal[OpenApiAuthType.MANAGED_IDENTITY] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'managed_identity'. Required. MANAGED_IDENTITY.""" + security_scheme: "_models.OpenApiManagedSecurityScheme" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Connection auth security details. Required.""" + + @overload + def __init__( + self, + *, + security_scheme: "_models.OpenApiManagedSecurityScheme", + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OpenApiAuthType.MANAGED_IDENTITY # type: ignore + + +class OpenApiManagedSecurityScheme(_Model): + """Security scheme for OpenApi managed_identity authentication. + + :ivar audience: Authentication scope for managed_identity auth type. Required. + :vartype audience: str + """ + + audience: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Authentication scope for managed_identity auth type. Required.""" + + @overload + def __init__( + self, + *, + audience: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OpenApiProjectConnectionAuthDetails(OpenApiAuthDetails, discriminator="project_connection"): + """Security details for OpenApi project connection authentication. + + :ivar type: The object type, which is always 'project_connection'. Required. + PROJECT_CONNECTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.PROJECT_CONNECTION + :ivar security_scheme: Project connection auth security details. Required. + :vartype security_scheme: + ~azure.ai.agentserver.responses.sdk.models.models.OpenApiProjectConnectionSecurityScheme + """ + + type: Literal[OpenApiAuthType.PROJECT_CONNECTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'project_connection'. Required. 
PROJECT_CONNECTION.""" + security_scheme: "_models.OpenApiProjectConnectionSecurityScheme" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Project connection auth security details. Required.""" + + @overload + def __init__( + self, + *, + security_scheme: "_models.OpenApiProjectConnectionSecurityScheme", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OpenApiAuthType.PROJECT_CONNECTION # type: ignore + + +class OpenApiProjectConnectionSecurityScheme(_Model): + """Security scheme for OpenApi managed_identity authentication. + + :ivar project_connection_id: Project connection id for Project Connection auth type. Required. + :vartype project_connection_id: str + """ + + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Project connection id for Project Connection auth type. Required.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OpenApiTool(Tool, discriminator="openapi"): + """The input definition information for an OpenAPI tool as used to configure an agent. + + :ivar type: The object type, which is always 'openapi'. Required. OPENAPI. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OPENAPI + :ivar openapi: The openapi function definition. Required. 
+ :vartype openapi: ~azure.ai.agentserver.responses.sdk.models.models.OpenApiFunctionDefinition + """ + + type: Literal[ToolType.OPENAPI] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'openapi'. Required. OPENAPI.""" + openapi: "_models.OpenApiFunctionDefinition" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The openapi function definition. Required.""" + + @overload + def __init__( + self, + *, + openapi: "_models.OpenApiFunctionDefinition", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.OPENAPI # type: ignore + + +class OpenApiToolCall(OutputItem, discriminator="openapi_call"): + """An OpenAPI tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. OPENAPI_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OPENAPI_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar name: The name of the OpenAPI operation being called. Required. + :vartype name: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. 
Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.OPENAPI_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. OPENAPI_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the OpenAPI operation being called. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.OPENAPI_CALL # type: ignore + + +class OpenApiToolCallOutput(OutputItem, discriminator="openapi_call_output"): + """The output of an OpenAPI tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. OPENAPI_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OPENAPI_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar name: The name of the OpenAPI operation that was called. Required. + :vartype name: str + :ivar output: The output from the OpenAPI tool call. Is one of the following types: {str: Any}, + str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.OPENAPI_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. OPENAPI_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the OpenAPI operation that was called. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the OpenAPI tool call. Is one of the following types: {str: Any}, str, [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. 
Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.OPENAPI_CALL_OUTPUT # type: ignore + + +class OutputContent(_Model): + """OutputContent. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + OutputContentOutputTextContent, OutputContentReasoningTextContent, OutputContentRefusalContent + + :ivar type: Required. Known values are: "output_text", "refusal", and "reasoning_text". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OutputContentType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"output_text\", \"refusal\", and \"reasoning_text\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OutputContentOutputTextContent(OutputContent, discriminator="output_text"): + """Output text. + + :ivar type: The type of the output text. Always ``output_text``. Required. OUTPUT_TEXT. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OUTPUT_TEXT + :ivar text: The text output from the model. Required. + :vartype text: str + :ivar annotations: The annotations of the text output. Required. + :vartype annotations: list[~azure.ai.agentserver.responses.sdk.models.models.Annotation] + :ivar logprobs: Required. + :vartype logprobs: list[~azure.ai.agentserver.responses.sdk.models.models.LogProb] + """ + + type: Literal[OutputContentType.OUTPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the output text. Always ``output_text``. Required. OUTPUT_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text output from the model. Required.""" + annotations: list["_models.Annotation"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The annotations of the text output. Required.""" + logprobs: list["_models.LogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + text: str, + annotations: list["_models.Annotation"], + logprobs: list["_models.LogProb"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputContentType.OUTPUT_TEXT # type: ignore + + +class OutputContentReasoningTextContent(OutputContent, discriminator="reasoning_text"): + """Reasoning text. + + :ivar type: The type of the reasoning text. Always ``reasoning_text``. Required. + REASONING_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REASONING_TEXT + :ivar text: The reasoning text from the model. Required. 
+ :vartype text: str + """ + + type: Literal[OutputContentType.REASONING_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the reasoning text. Always ``reasoning_text``. Required. REASONING_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The reasoning text from the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputContentType.REASONING_TEXT # type: ignore + + +class OutputContentRefusalContent(OutputContent, discriminator="refusal"): + """Refusal. + + :ivar type: The type of the refusal. Always ``refusal``. Required. REFUSAL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REFUSAL + :ivar refusal: The refusal explanation from the model. Required. + :vartype refusal: str + """ + + type: Literal[OutputContentType.REFUSAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the refusal. Always ``refusal``. Required. REFUSAL.""" + refusal: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The refusal explanation from the model. Required.""" + + @overload + def __init__( + self, + *, + refusal: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputContentType.REFUSAL # type: ignore + + +class OutputItemApplyPatchToolCall(OutputItem, discriminator="apply_patch_call"): + """Apply patch tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``apply_patch_call``. Required. APPLY_PATCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH_CALL + :ivar id: The unique ID of the apply patch tool call. Populated when this item is returned via + API. Required. + :vartype id: str + :ivar call_id: The unique ID of the apply patch tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the apply patch tool call. One of ``in_progress`` or ``completed``. + Required. Known values are: "in_progress" and "completed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchCallStatus + :ivar operation: Apply patch operation. Required. + :vartype operation: ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchFileOperation + """ + + type: Literal[OutputItemType.APPLY_PATCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``apply_patch_call``. Required. APPLY_PATCH_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call. 
Populated when this item is returned via API. + Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call generated by the model. Required.""" + status: Union[str, "_models.ApplyPatchCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the apply patch tool call. One of ``in_progress`` or ``completed``. Required. + Known values are: \"in_progress\" and \"completed\".""" + operation: "_models.ApplyPatchFileOperation" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Apply patch operation. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + status: Union[str, "_models.ApplyPatchCallStatus"], + operation: "_models.ApplyPatchFileOperation", + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.APPLY_PATCH_CALL # type: ignore + + +class OutputItemApplyPatchToolCallOutput(OutputItem, discriminator="apply_patch_call_output"): + """Apply patch tool call output. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. 
+ :vartype response_id: str + :ivar type: The type of the item. Always ``apply_patch_call_output``. Required. + APPLY_PATCH_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH_CALL_OUTPUT + :ivar id: The unique ID of the apply patch tool call output. Populated when this item is + returned via API. Required. + :vartype id: str + :ivar call_id: The unique ID of the apply patch tool call generated by the model. Required. + :vartype call_id: str + :ivar status: The status of the apply patch tool call output. One of ``completed`` or + ``failed``. Required. Known values are: "completed" and "failed". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.ApplyPatchCallOutputStatus + :ivar output: + :vartype output: str + """ + + type: Literal[OutputItemType.APPLY_PATCH_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``apply_patch_call_output``. Required. APPLY_PATCH_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call output. Populated when this item is returned via + API. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the apply patch tool call generated by the model. Required.""" + status: Union[str, "_models.ApplyPatchCallOutputStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the apply patch tool call output. One of ``completed`` or ``failed``. Required. 
+ Known values are: \"completed\" and \"failed\".""" + output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + status: Union[str, "_models.ApplyPatchCallOutputStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.APPLY_PATCH_CALL_OUTPUT # type: ignore + + +class OutputItemCodeInterpreterToolCall(OutputItem, discriminator="code_interpreter_call"): + """Code interpreter tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the code interpreter tool call. Always ``code_interpreter_call``. + Required. CODE_INTERPRETER_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CODE_INTERPRETER_CALL + :ivar id: The unique ID of the code interpreter tool call. Required. + :vartype id: str + :ivar status: The status of the code interpreter tool call. Valid values are ``in_progress``, + ``completed``, ``incomplete``, ``interpreting``, and ``failed``. Required. 
Is one of the + following types: Literal["in_progress"], Literal["completed"], Literal["incomplete"], + Literal["interpreting"], Literal["failed"] + :vartype status: str or str or str or str or str + :ivar container_id: The ID of the container used to run the code. Required. + :vartype container_id: str + :ivar code: Required. + :vartype code: str + :ivar outputs: Required. + :vartype outputs: + list[~azure.ai.agentserver.responses.sdk.models.models.CodeInterpreterOutputLogs or + ~azure.ai.agentserver.responses.sdk.models.models.CodeInterpreterOutputImage] + """ + + type: Literal[OutputItemType.CODE_INTERPRETER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the code interpreter tool call. Always ``code_interpreter_call``. Required. + CODE_INTERPRETER_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the code interpreter tool call. Required.""" + status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the code interpreter tool call. Valid values are ``in_progress``, ``completed``, + ``incomplete``, ``interpreting``, and ``failed``. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"], + Literal[\"interpreting\"], Literal[\"failed\"]""" + container_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the container used to run the code. 
Required.""" + code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + outputs: list[Union["_models.CodeInterpreterOutputLogs", "_models.CodeInterpreterOutputImage"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"], + container_id: str, + code: str, + outputs: list[Union["_models.CodeInterpreterOutputLogs", "_models.CodeInterpreterOutputImage"]], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.CODE_INTERPRETER_CALL # type: ignore + + +class OutputItemCompactionBody(OutputItem, discriminator="compaction"): + """Compaction item. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``compaction``. Required. COMPACTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPACTION + :ivar id: The unique ID of the compaction item. Required. 
+ :vartype id: str + :ivar encrypted_content: The encrypted content that was produced by compaction. Required. + :vartype encrypted_content: str + """ + + type: Literal[OutputItemType.COMPACTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``compaction``. Required. COMPACTION.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the compaction item. Required.""" + encrypted_content: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The encrypted content that was produced by compaction. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + encrypted_content: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.COMPACTION # type: ignore + + +class OutputItemComputerToolCall(OutputItem, discriminator="computer_call"): + """Computer tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the computer call. Always ``computer_call``. Required. COMPUTER_CALL. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_CALL + :ivar id: The unique ID of the computer call. Required. + :vartype id: str + :ivar call_id: An identifier used when responding to the tool call with output. Required. + :vartype call_id: str + :ivar action: + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.ComputerAction + :ivar actions: + :vartype actions: list[~azure.ai.agentserver.responses.sdk.models.models.ComputerAction] + :ivar pending_safety_checks: The pending safety checks for the computer call. Required. + :vartype pending_safety_checks: + list[~azure.ai.agentserver.responses.sdk.models.models.ComputerCallSafetyCheckParam] + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Required. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[OutputItemType.COMPUTER_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer call. Always ``computer_call``. Required. COMPUTER_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the computer call. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An identifier used when responding to the tool call with output. 
Required.""" + action: Optional["_models.ComputerAction"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + actions: Optional[list["_models.ComputerAction"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + pending_safety_checks: list["_models.ComputerCallSafetyCheckParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The pending safety checks for the computer call. Required.""" + status: Literal["in_progress", "completed", "incomplete"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + pending_safety_checks: list["_models.ComputerCallSafetyCheckParam"], + status: Literal["in_progress", "completed", "incomplete"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + action: Optional["_models.ComputerAction"] = None, + actions: Optional[list["_models.ComputerAction"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.COMPUTER_CALL # type: ignore + + +class OutputItemComputerToolCallOutput(OutputItem, discriminator="computer_call_output"): + """Computer tool call output. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the computer tool call output. Always ``computer_call_output``. + Required. COMPUTER_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_CALL_OUTPUT + :ivar id: The ID of the computer tool call output. Required. + :vartype id: str + :ivar call_id: The ID of the computer tool call that produced the output. Required. + :vartype call_id: str + :ivar acknowledged_safety_checks: The safety checks reported by the API that have been + acknowledged by the developer. + :vartype acknowledged_safety_checks: + list[~azure.ai.agentserver.responses.sdk.models.models.ComputerCallSafetyCheckParam] + :ivar output: Required. + :vartype output: ~azure.ai.agentserver.responses.sdk.models.models.ComputerScreenshotImage + :ivar status: The status of the message input. One of ``in_progress``, ``completed``, or + ``incomplete``. Populated when input items are returned via API. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[OutputItemType.COMPUTER_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the computer tool call output. Always ``computer_call_output``. Required. + COMPUTER_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read"]) + """The ID of the computer tool call output. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the computer tool call that produced the output. 
Required.""" + acknowledged_safety_checks: Optional[list["_models.ComputerCallSafetyCheckParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The safety checks reported by the API that have been acknowledged by the developer.""" + output: "_models.ComputerScreenshotImage" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the message input. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when input items are returned via API. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + call_id: str, + output: "_models.ComputerScreenshotImage", + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + acknowledged_safety_checks: Optional[list["_models.ComputerCallSafetyCheckParam"]] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.COMPUTER_CALL_OUTPUT # type: ignore + + +class OutputItemCustomToolCall(OutputItem, discriminator="custom_tool_call"): + """Custom tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. 
+ :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the custom tool call. Always ``custom_tool_call``. Required. + CUSTOM_TOOL_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM_TOOL_CALL + :ivar id: The unique ID of the custom tool call in the OpenAI platform. + :vartype id: str + :ivar call_id: An identifier used to map this custom tool call to a tool call output. Required. + :vartype call_id: str + :ivar namespace: The namespace of the custom tool being called. + :vartype namespace: str + :ivar name: The name of the custom tool being called. Required. + :vartype name: str + :ivar input: The input for the custom tool call generated by the model. Required. + :vartype input: str + """ + + type: Literal[OutputItemType.CUSTOM_TOOL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the custom tool call. Always ``custom_tool_call``. Required. CUSTOM_TOOL_CALL.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the custom tool call in the OpenAI platform.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An identifier used to map this custom tool call to a tool call output. Required.""" + namespace: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace of the custom tool being called.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the custom tool being called. Required.""" + input: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The input for the custom tool call generated by the model. 
Required.""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + input: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + id: Optional[str] = None, # pylint: disable=redefined-builtin + namespace: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.CUSTOM_TOOL_CALL # type: ignore + + +class OutputItemCustomToolCallOutput(OutputItem, discriminator="custom_tool_call_output"): + """Custom tool call output. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the custom tool call output. Always ``custom_tool_call_output``. + Required. CUSTOM_TOOL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM_TOOL_CALL_OUTPUT + :ivar id: The unique ID of the custom tool call output in the OpenAI platform. + :vartype id: str + :ivar call_id: The call ID, used to map this custom tool call output to a custom tool call. + Required. + :vartype call_id: str + :ivar output: The output from the custom tool call generated by your code. Can be a string or + a list of output content. Required. Is either a str type or a + [FunctionAndCustomToolCallOutput] type.
+ :vartype output: str or + list[~azure.ai.agentserver.responses.sdk.models.models.FunctionAndCustomToolCallOutput] + """ + + type: Literal[OutputItemType.CUSTOM_TOOL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the custom tool call output. Always ``custom_tool_call_output``. Required. + CUSTOM_TOOL_CALL_OUTPUT.""" + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the custom tool call output in the OpenAI platform.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The call ID, used to map this custom tool call output to a custom tool call. Required.""" + output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the custom tool call generated by your code. Can be a string or a list of + output content. Required. Is either a str type or a [FunctionAndCustomToolCallOutput] type.""" + + @overload + def __init__( + self, + *, + call_id: str, + output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + id: Optional[str] = None, # pylint: disable=redefined-builtin + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.CUSTOM_TOOL_CALL_OUTPUT # type: ignore + + +class OutputItemFileSearchToolCall(OutputItem, discriminator="file_search_call"): + """File search tool call. + + :ivar created_by: The information about the creator of the item.
Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar id: The unique ID of the file search tool call. Required. + :vartype id: str + :ivar type: The type of the file search tool call. Always ``file_search_call``. Required. + FILE_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_SEARCH_CALL + :ivar status: The status of the file search tool call. One of ``in_progress``, ``searching``, + ``completed``, ``incomplete``, or ``failed``. Required. Is one of the following types: Literal["in_progress"], + Literal["searching"], Literal["completed"], Literal["incomplete"], Literal["failed"] + :vartype status: str or str or str or str or str + :ivar queries: The queries used to search for files. Required. + :vartype queries: list[str] + :ivar results: + :vartype results: + list[~azure.ai.agentserver.responses.sdk.models.models.FileSearchToolCallResults] + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the file search tool call. Required.""" + type: Literal[OutputItemType.FILE_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the file search tool call. Always ``file_search_call``. Required. FILE_SEARCH_CALL.""" + status: Literal["in_progress", "searching", "completed", "incomplete", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the file search tool call. One of ``in_progress``, ``searching``, ``completed``, + ``incomplete``, or ``failed``. Required.
Is one of the following types: Literal[\"in_progress\"], + Literal[\"searching\"], Literal[\"completed\"], Literal[\"incomplete\"], Literal[\"failed\"]""" + queries: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The queries used to search for files. Required.""" + results: Optional[list["_models.FileSearchToolCallResults"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "searching", "completed", "incomplete", "failed"], + queries: list[str], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + results: Optional[list["_models.FileSearchToolCallResults"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.FILE_SEARCH_CALL # type: ignore + + +class OutputItemFunctionShellCall(OutputItem, discriminator="shell_call"): + """Shell tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``shell_call``. Required. SHELL_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL_CALL + :ivar id: The unique ID of the shell tool call. 
Populated when this item is returned via API. + Required. + :vartype id: str + :ivar call_id: The unique ID of the shell tool call generated by the model. Required. + :vartype call_id: str + :ivar action: The shell commands and limits that describe how to run the tool call. Required. + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellAction + :ivar status: The status of the shell call. One of ``in_progress``, ``completed``, or + ``incomplete``. Required. Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.LocalShellCallStatus + :ivar environment: Required. + :vartype environment: + ~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallEnvironment + """ + + type: Literal[OutputItemType.SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``shell_call``. Required. SHELL_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call. Populated when this item is returned via API. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call generated by the model. Required.""" + action: "_models.FunctionShellAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The shell commands and limits that describe how to run the tool call. Required.""" + status: Union[str, "_models.LocalShellCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the shell call. One of ``in_progress``, ``completed``, or ``incomplete``. + Required. 
Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + environment: "_models.FunctionShellCallEnvironment" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + action: "_models.FunctionShellAction", + status: Union[str, "_models.LocalShellCallStatus"], + environment: "_models.FunctionShellCallEnvironment", + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.SHELL_CALL # type: ignore + + +class OutputItemFunctionShellCallOutput(OutputItem, discriminator="shell_call_output"): + """Shell call output. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the shell call output. Always ``shell_call_output``. Required. + SHELL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL_CALL_OUTPUT + :ivar id: The unique ID of the shell call output. Populated when this item is returned via API. + Required. + :vartype id: str + :ivar call_id: The unique ID of the shell tool call generated by the model. Required. 
+ :vartype call_id: str + :ivar status: The status of the shell call output. One of ``in_progress``, ``completed``, or + ``incomplete``. Required. Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.LocalShellCallOutputStatusEnum + :ivar output: An array of shell call output contents. Required. + :vartype output: + list[~azure.ai.agentserver.responses.sdk.models.models.FunctionShellCallOutputContent] + :ivar max_output_length: Required. + :vartype max_output_length: int + """ + + type: Literal[OutputItemType.SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the shell call output. Always ``shell_call_output``. Required. SHELL_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell call output. Populated when this item is returned via API. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the shell tool call generated by the model. Required.""" + status: Union[str, "_models.LocalShellCallOutputStatusEnum"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the shell call output. One of ``in_progress``, ``completed``, or ``incomplete``. + Required. Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + output: list["_models.FunctionShellCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """An array of shell call output contents. 
Required.""" + max_output_length: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + status: Union[str, "_models.LocalShellCallOutputStatusEnum"], + output: list["_models.FunctionShellCallOutputContent"], + max_output_length: int, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.SHELL_CALL_OUTPUT # type: ignore + + +class OutputItemFunctionToolCall(OutputItem, discriminator="function_call"): + """Function tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar id: The unique ID of the function tool call. Required. + :vartype id: str + :ivar type: The type of the function tool call. Always ``function_call``. Required. + FUNCTION_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION_CALL + :ivar call_id: The unique ID of the function tool call generated by the model. Required. + :vartype call_id: str + :ivar namespace: The namespace of the function to run. + :vartype namespace: str + :ivar name: The name of the function to run. Required. 
+ :vartype name: str + :ivar arguments: A JSON string of the arguments to pass to the function. Required. + :vartype arguments: str + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + id: str = rest_field(visibility=["read"]) + """The unique ID of the function tool call. Required.""" + type: Literal[OutputItemType.FUNCTION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the function tool call. Always ``function_call``. Required. FUNCTION_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the function tool call generated by the model. Required.""" + namespace: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The namespace of the function to run.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to run. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the function. Required.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. 
Is one of the following types: Literal[\"in_progress\"], + Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + call_id: str, + name: str, + arguments: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + namespace: Optional[str] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.FUNCTION_CALL # type: ignore + + +class OutputItemFunctionToolCallOutput(OutputItem, discriminator="function_call_output"): + """Function tool call output. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar id: The unique ID of the function tool call output. Populated when this item is returned + via API. Required. + :vartype id: str + :ivar type: The type of the function tool call output. Always ``function_call_output``. + Required. FUNCTION_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION_CALL_OUTPUT + :ivar call_id: The unique ID of the function tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the function call generated by your code. Can be a string or a + list of output content. Required.
Is either a str type or a [FunctionAndCustomToolCallOutput] + type. + :vartype output: str or + list[~azure.ai.agentserver.responses.sdk.models.models.FunctionAndCustomToolCallOutput] + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + id: str = rest_field(visibility=["read"]) + """The unique ID of the function tool call output. Populated when this item is returned via API. + Required.""" + type: Literal[OutputItemType.FUNCTION_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the function tool call output. Always ``function_call_output``. Required. + FUNCTION_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the function tool call generated by the model. Required.""" + output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the function call generated by your code. Can be a string or a list of output + content. Required. Is either a str type or a [FunctionAndCustomToolCallOutput] type.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API.
Is one of the following types: Literal[\"in_progress\"], + Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + call_id: str, + output: Union[str, list["_models.FunctionAndCustomToolCallOutput"]], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.FUNCTION_CALL_OUTPUT # type: ignore + + +class OutputItemImageGenToolCall(OutputItem, discriminator="image_generation_call"): + """Image generation call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the image generation call. Always ``image_generation_call``. Required. + IMAGE_GENERATION_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.IMAGE_GENERATION_CALL + :ivar id: The unique ID of the image generation call. Required. + :vartype id: str + :ivar status: The status of the image generation call. Required. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["generating"], Literal["failed"] + :vartype status: str or str or str or str + :ivar result: Required. 
+ :vartype result: str + """ + + type: Literal[OutputItemType.IMAGE_GENERATION_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the image generation call. Always ``image_generation_call``. Required. + IMAGE_GENERATION_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the image generation call. Required.""" + status: Literal["in_progress", "completed", "generating", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the image generation call. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"generating\"], Literal[\"failed\"]""" + result: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "completed", "generating", "failed"], + result: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.IMAGE_GENERATION_CALL # type: ignore + + +class OutputItemLocalShellToolCall(OutputItem, discriminator="local_shell_call"): + """Local shell call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. 
+ :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the local shell call. Always ``local_shell_call``. Required. + LOCAL_SHELL_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL_CALL + :ivar id: The unique ID of the local shell call. Required. + :vartype id: str + :ivar call_id: The unique ID of the local shell tool call generated by the model. Required. + :vartype call_id: str + :ivar action: Required. + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.LocalShellExecAction + :ivar status: The status of the local shell call. Required. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[OutputItemType.LOCAL_SHELL_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the local shell call. Always ``local_shell_call``. Required. LOCAL_SHELL_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the local shell call. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the local shell tool call generated by the model. Required.""" + action: "_models.LocalShellExecAction" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + status: Literal["in_progress", "completed", "incomplete"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the local shell call. Required. 
Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + action: "_models.LocalShellExecAction", + status: Literal["in_progress", "completed", "incomplete"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.LOCAL_SHELL_CALL # type: ignore + + +class OutputItemLocalShellToolCallOutput(OutputItem, discriminator="local_shell_call_output"): + """Local shell call output. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the local shell tool call output. Always ``local_shell_call_output``. + Required. LOCAL_SHELL_CALL_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.LOCAL_SHELL_CALL_OUTPUT + :ivar id: The unique ID of the local shell tool call generated by the model. Required. + :vartype id: str + :ivar output: A JSON string of the output of the local shell tool call. Required. 
+ :vartype output: str + :ivar status: Is one of the following types: Literal["in_progress"], Literal["completed"], + Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[OutputItemType.LOCAL_SHELL_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the local shell tool call output. Always ``local_shell_call_output``. Required. + LOCAL_SHELL_CALL_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the local shell tool call generated by the model. Required.""" + output: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the output of the local shell tool call. Required.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"in_progress\"], Literal[\"completed\"], + Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + output: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.LOCAL_SHELL_CALL_OUTPUT # type: ignore + + +class OutputItemMcpApprovalRequest(OutputItem, discriminator="mcp_approval_request"): + """MCP approval request. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``mcp_approval_request``. Required. + MCP_APPROVAL_REQUEST. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_APPROVAL_REQUEST + :ivar id: The unique ID of the approval request. Required. + :vartype id: str + :ivar server_label: The label of the MCP server making the request. Required. + :vartype server_label: str + :ivar name: The name of the tool to run. Required. + :vartype name: str + :ivar arguments: A JSON string of arguments for the tool. Required. + :vartype arguments: str + """ + + type: Literal[OutputItemType.MCP_APPROVAL_REQUEST] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_approval_request``. Required. MCP_APPROVAL_REQUEST.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the approval request. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server making the request. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the tool to run. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of arguments for the tool. 
Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + server_label: str, + name: str, + arguments: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.MCP_APPROVAL_REQUEST # type: ignore + + +class OutputItemMcpApprovalResponseResource(OutputItem, discriminator="mcp_approval_response"): + """MCP approval response. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``mcp_approval_response``. Required. + MCP_APPROVAL_RESPONSE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_APPROVAL_RESPONSE + :ivar id: The unique ID of the approval response. Required. + :vartype id: str + :ivar approval_request_id: The ID of the approval request being answered. Required. + :vartype approval_request_id: str + :ivar approve: Whether the request was approved. Required. + :vartype approve: bool + :ivar reason: + :vartype reason: str + """ + + type: Literal[OutputItemType.MCP_APPROVAL_RESPONSE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. 
Always ``mcp_approval_response``. Required. MCP_APPROVAL_RESPONSE.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the approval response. Required.""" + approval_request_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the approval request being answered. Required.""" + approve: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether the request was approved. Required.""" + reason: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + approval_request_id: str, + approve: bool, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + reason: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.MCP_APPROVAL_RESPONSE # type: ignore + + +class OutputItemMcpListTools(OutputItem, discriminator="mcp_list_tools"): + """MCP list tools. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``mcp_list_tools``. Required. MCP_LIST_TOOLS. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_LIST_TOOLS + :ivar id: The unique ID of the list. Required. + :vartype id: str + :ivar server_label: The label of the MCP server. Required. + :vartype server_label: str + :ivar tools: The tools available on the server. Required. + :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.MCPListToolsTool] + :ivar error: + :vartype error: ~azure.ai.agentserver.responses.sdk.models.models.RealtimeMCPError + """ + + type: Literal[OutputItemType.MCP_LIST_TOOLS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_list_tools``. Required. MCP_LIST_TOOLS.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the list. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server. Required.""" + tools: list["_models.MCPListToolsTool"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The tools available on the server. Required.""" + error: Optional["_models.RealtimeMCPError"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + server_label: str, + tools: list["_models.MCPListToolsTool"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + error: Optional["_models.RealtimeMCPError"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.MCP_LIST_TOOLS # type: ignore + + +class OutputItemMcpToolCall(OutputItem, discriminator="mcp_call"): + """MCP tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``mcp_call``. Required. MCP_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP_CALL + :ivar id: The unique ID of the tool call. Required. + :vartype id: str + :ivar server_label: The label of the MCP server running the tool. Required. + :vartype server_label: str + :ivar name: The name of the tool that was run. Required. + :vartype name: str + :ivar arguments: A JSON string of the arguments passed to the tool. Required. + :vartype arguments: str + :ivar output: + :vartype output: str + :ivar error: + :vartype error: dict[str, any] + :ivar status: The status of the tool call. One of ``in_progress``, ``completed``, + ``incomplete``, ``calling``, or ``failed``. Known values are: "in_progress", "completed", + "incomplete", "calling", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.MCPToolCallStatus + :ivar approval_request_id: + :vartype approval_request_id: str + """ + + type: Literal[OutputItemType.MCP_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``mcp_call``. Required. 
MCP_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call. Required.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server running the tool. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the tool that was run. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments passed to the tool. Required.""" + output: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + error: Optional[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + status: Optional[Union[str, "_models.MCPToolCallStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. One of ``in_progress``, ``completed``, ``incomplete``, + ``calling``, or ``failed``. Known values are: \"in_progress\", \"completed\", \"incomplete\", + \"calling\", and \"failed\".""" + approval_request_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + server_label: str, + name: str, + arguments: str, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional[str] = None, + error: Optional[dict[str, Any]] = None, + status: Optional[Union[str, "_models.MCPToolCallStatus"]] = None, + approval_request_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.MCP_CALL # type: ignore + + +class OutputItemMessage(OutputItem, discriminator="message"): + """Message. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the message. Always set to ``message``. Required. MESSAGE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MESSAGE + :ivar id: The unique ID of the message. Required. + :vartype id: str + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Required. Known values are: "in_progress", + "completed", and "incomplete". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageStatus + :ivar role: The role of the message. One of ``unknown``, ``user``, ``assistant``, ``system``, + ``critic``, ``discriminator``, ``developer``, or ``tool``. Required. Known values are: + "unknown", "user", "assistant", "system", "critic", "discriminator", "developer", and "tool". + :vartype role: str or ~azure.ai.agentserver.responses.sdk.models.models.MessageRole + :ivar content: The content of the message. Required. + :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.MessageContent] + """ + + type: Literal[OutputItemType.MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the message. Always set to ``message``. Required. 
MESSAGE.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the message. Required.""" + status: Union[str, "_models.MessageStatus"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated when + items are returned via API. Required. Known values are: \"in_progress\", \"completed\", and + \"incomplete\".""" + role: Union[str, "_models.MessageRole"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The role of the message. One of ``unknown``, ``user``, ``assistant``, ``system``, ``critic``, + ``discriminator``, ``developer``, or ``tool``. Required. Known values are: \"unknown\", + \"user\", \"assistant\", \"system\", \"critic\", \"discriminator\", \"developer\", and + \"tool\".""" + content: list["_models.MessageContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content of the message. Required.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Union[str, "_models.MessageStatus"], + role: Union[str, "_models.MessageRole"], + content: list["_models.MessageContent"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.MESSAGE # type: ignore + + +class OutputItemOutputMessage(OutputItem, discriminator="output_message"): + """Output message. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar id: The unique ID of the output message. Required. + :vartype id: str + :ivar type: The type of the output message. Always ``message``. Required. OUTPUT_MESSAGE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OUTPUT_MESSAGE + :ivar role: The role of the output message. Always ``assistant``. Required. Default value is + "assistant". + :vartype role: str + :ivar content: The content of the output message. Required. + :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.OutputMessageContent] + :ivar phase: Known values are: "commentary" and "final_answer". + :vartype phase: str or ~azure.ai.agentserver.responses.sdk.models.models.MessagePhase + :ivar status: The status of the message input. One of ``in_progress``, ``completed``, or + ``incomplete``. Populated when input items are returned via API. Required. Is one of the + following types: Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the output message. Required.""" + type: Literal[OutputItemType.OUTPUT_MESSAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the output message. Always ``message``. Required. OUTPUT_MESSAGE.""" + role: Literal["assistant"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The role of the output message. Always ``assistant``. Required. 
Default value is \"assistant\".""" + content: list["_models.OutputMessageContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The content of the output message. Required.""" + phase: Optional[Union[str, "_models.MessagePhase"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"commentary\" and \"final_answer\".""" + status: Literal["in_progress", "completed", "incomplete"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the message input. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when input items are returned via API. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + content: list["_models.OutputMessageContent"], + status: Literal["in_progress", "completed", "incomplete"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + phase: Optional[Union[str, "_models.MessagePhase"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.OUTPUT_MESSAGE # type: ignore + self.role: Literal["assistant"] = "assistant" + + +class OutputItemReasoningItem(OutputItem, discriminator="reasoning"): + """Reasoning. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. 
+ :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the object. Always ``reasoning``. Required. REASONING. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REASONING + :ivar id: The unique identifier of the reasoning content. Required. + :vartype id: str + :ivar encrypted_content: + :vartype encrypted_content: str + :ivar summary: Reasoning summary content. Required. + :vartype summary: list[~azure.ai.agentserver.responses.sdk.models.models.SummaryTextContent] + :ivar content: Reasoning text content. + :vartype content: list[~azure.ai.agentserver.responses.sdk.models.models.ReasoningTextContent] + :ivar status: The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. + Populated when items are returned via API. Is one of the following types: + Literal["in_progress"], Literal["completed"], Literal["incomplete"] + :vartype status: str or str or str + """ + + type: Literal[OutputItemType.REASONING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the object. Always ``reasoning``. Required. REASONING.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the reasoning content. Required.""" + encrypted_content: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + summary: list["_models.SummaryTextContent"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Reasoning summary content. 
Required.""" + content: Optional[list["_models.ReasoningTextContent"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Reasoning text content.""" + status: Optional[Literal["in_progress", "completed", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the item. One of ``in_progress``, ``completed``, or ``incomplete``. Populated + when items are returned via API. Is one of the following types: Literal[\"in_progress\"], + Literal[\"completed\"], Literal[\"incomplete\"]""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + summary: list["_models.SummaryTextContent"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + encrypted_content: Optional[str] = None, + content: Optional[list["_models.ReasoningTextContent"]] = None, + status: Optional[Literal["in_progress", "completed", "incomplete"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.REASONING # type: ignore + + +class OutputItemToolSearchCall(OutputItem, discriminator="tool_search_call"): + """OutputItemToolSearchCall. + + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``tool_search_call``. Required. TOOL_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH_CALL + :ivar id: The unique ID of the tool search call item. 
Required. + :vartype id: str + :ivar call_id: Required. + :vartype call_id: str + :ivar execution: Whether tool search was executed by the server or by the client. Required. + Known values are: "server" and "client". + :vartype execution: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType + :ivar arguments: Arguments used for the tool search call. Required. + :vartype arguments: any + :ivar status: The status of the tool search call item that was recorded. Required. Known values + are: "in_progress", "completed", and "incomplete". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallStatus + :ivar created_by: The identifier of the actor that created the item. + :vartype created_by: str + """ + + type: Literal[OutputItemType.TOOL_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``tool_search_call``. Required. TOOL_SEARCH_CALL.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool search call item. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + execution: Union[str, "_models.ToolSearchExecutionType"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Whether tool search was executed by the server or by the client. Required. Known values are: + \"server\" and \"client\".""" + arguments: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Arguments used for the tool search call. Required.""" + status: Union[str, "_models.FunctionCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool search call item that was recorded. Required. 
Known values are: + \"in_progress\", \"completed\", and \"incomplete\".""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The identifier of the actor that created the item.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + execution: Union[str, "_models.ToolSearchExecutionType"], + arguments: Any, + status: Union[str, "_models.FunctionCallStatus"], + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.TOOL_SEARCH_CALL # type: ignore + + +class OutputItemToolSearchOutput(OutputItem, discriminator="tool_search_output"): + """OutputItemToolSearchOutput. + + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: The type of the item. Always ``tool_search_output``. Required. TOOL_SEARCH_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH_OUTPUT + :ivar id: The unique ID of the tool search output item. Required. + :vartype id: str + :ivar call_id: Required. + :vartype call_id: str + :ivar execution: Whether tool search was executed by the server or by the client. Required. + Known values are: "server" and "client". + :vartype execution: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType + :ivar tools: The loaded tool definitions returned by tool search. Required. 
+ :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.Tool] + :ivar status: The status of the tool search output item that was recorded. Required. Known + values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallOutputStatusEnum + :ivar created_by: The identifier of the actor that created the item. + :vartype created_by: str + """ + + type: Literal[OutputItemType.TOOL_SEARCH_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the item. Always ``tool_search_output``. Required. TOOL_SEARCH_OUTPUT.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool search output item. Required.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + execution: Union[str, "_models.ToolSearchExecutionType"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Whether tool search was executed by the server or by the client. Required. Known values are: + \"server\" and \"client\".""" + tools: list["_models.Tool"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The loaded tool definitions returned by tool search. Required.""" + status: Union[str, "_models.FunctionCallOutputStatusEnum"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool search output item that was recorded. Required. 
Known values are: + \"in_progress\", \"completed\", and \"incomplete\".""" + created_by: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The identifier of the actor that created the item.""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + call_id: str, + execution: Union[str, "_models.ToolSearchExecutionType"], + tools: list["_models.Tool"], + status: Union[str, "_models.FunctionCallOutputStatusEnum"], + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + created_by: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.TOOL_SEARCH_OUTPUT # type: ignore + + +class OutputItemWebSearchToolCall(OutputItem, discriminator="web_search_call"): + """Web search tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar id: The unique ID of the web search tool call. Required. + :vartype id: str + :ivar type: The type of the web search tool call. Always ``web_search_call``. Required. + WEB_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH_CALL + :ivar status: The status of the web search tool call. Required. 
Is one of the following types: + Literal["in_progress"], Literal["searching"], Literal["completed"], Literal["failed"] + :vartype status: str or str or str or str + :ivar action: An object describing the specific action taken in this web search call. Includes + details on how the model used the web (search, open_page, find_in_page). Required. Is one of + the following types: WebSearchActionSearch, WebSearchActionOpenPage, WebSearchActionFind + :vartype action: ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionSearch or + ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionOpenPage or + ~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionFind + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the web search tool call. Required.""" + type: Literal[OutputItemType.WEB_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the web search tool call. Always ``web_search_call``. Required. WEB_SEARCH_CALL.""" + status: Literal["in_progress", "searching", "completed", "failed"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the web search tool call. Required. Is one of the following types: + Literal[\"in_progress\"], Literal[\"searching\"], Literal[\"completed\"], Literal[\"failed\"]""" + action: Union["_models.WebSearchActionSearch", "_models.WebSearchActionOpenPage", "_models.WebSearchActionFind"] = ( + rest_field(visibility=["read", "create", "update", "delete", "query"]) + ) + """An object describing the specific action taken in this web search call. Includes details on how + the model used the web (search, open_page, find_in_page). Required. 
Is one of the following + types: WebSearchActionSearch, WebSearchActionOpenPage, WebSearchActionFind""" + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + status: Literal["in_progress", "searching", "completed", "failed"], + action: Union[ + "_models.WebSearchActionSearch", "_models.WebSearchActionOpenPage", "_models.WebSearchActionFind" + ], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.WEB_SEARCH_CALL # type: ignore + + +class OutputMessageContent(_Model): + """OutputMessageContent. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + OutputMessageContentOutputTextContent, OutputMessageContentRefusalContent + + :ivar type: Required. Known values are: "output_text" and "refusal". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.OutputMessageContentType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"output_text\" and \"refusal\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class OutputMessageContentOutputTextContent(OutputMessageContent, discriminator="output_text"): + """Output text. 
+ + :ivar type: The type of the output text. Always ``output_text``. Required. OUTPUT_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.OUTPUT_TEXT + :ivar text: The text output from the model. Required. + :vartype text: str + :ivar annotations: The annotations of the text output. Required. + :vartype annotations: list[~azure.ai.agentserver.responses.sdk.models.models.Annotation] + :ivar logprobs: Required. + :vartype logprobs: list[~azure.ai.agentserver.responses.sdk.models.models.LogProb] + """ + + type: Literal[OutputMessageContentType.OUTPUT_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the output text. Always ``output_text``. Required. OUTPUT_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text output from the model. Required.""" + annotations: list["_models.Annotation"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The annotations of the text output. Required.""" + logprobs: list["_models.LogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + text: str, + annotations: list["_models.Annotation"], + logprobs: list["_models.LogProb"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputMessageContentType.OUTPUT_TEXT # type: ignore + + +class OutputMessageContentRefusalContent(OutputMessageContent, discriminator="refusal"): + """Refusal. + + :ivar type: The type of the refusal. Always ``refusal``. Required. REFUSAL. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.REFUSAL + :ivar refusal: The refusal explanation from the model. Required. + :vartype refusal: str + """ + + type: Literal[OutputMessageContentType.REFUSAL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the refusal. Always ``refusal``. Required. REFUSAL.""" + refusal: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The refusal explanation from the model. Required.""" + + @overload + def __init__( + self, + *, + refusal: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputMessageContentType.REFUSAL # type: ignore + + +class Prompt(_Model): + """Reference to a prompt template and its variables. `Learn more + `_. + + :ivar id: The unique identifier of the prompt template to use. Required. + :vartype id: str + :ivar version: + :vartype version: str + :ivar variables: + :vartype variables: ~azure.ai.agentserver.responses.sdk.models.models.ResponsePromptVariables + """ + + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the prompt template to use. Required.""" + version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + variables: Optional["_models.ResponsePromptVariables"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + id: str, # pylint: disable=redefined-builtin + version: Optional[str] = None, + variables: Optional["_models.ResponsePromptVariables"] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class RankingOptions(_Model): + """RankingOptions. + + :ivar ranker: The ranker to use for the file search. Known values are: "auto" and + "default-2024-11-15". + :vartype ranker: str or ~azure.ai.agentserver.responses.sdk.models.models.RankerVersionType + :ivar score_threshold: The score threshold for the file search, a number between 0 and 1. + Numbers closer to 1 will attempt to return only the most relevant results, but may return fewer + results. + :vartype score_threshold: float + :ivar hybrid_search: Weights that control how reciprocal rank fusion balances semantic + embedding matches versus sparse keyword matches when hybrid search is enabled. + :vartype hybrid_search: ~azure.ai.agentserver.responses.sdk.models.models.HybridSearchOptions + """ + + ranker: Optional[Union[str, "_models.RankerVersionType"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The ranker to use for the file search. Known values are: \"auto\" and \"default-2024-11-15\".""" + score_threshold: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The score threshold for the file search, a number between 0 and 1.
Numbers closer to 1 will + attempt to return only the most relevant results, but may return fewer results.""" + hybrid_search: Optional["_models.HybridSearchOptions"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Weights that control how reciprocal rank fusion balances semantic embedding matches versus + sparse keyword matches when hybrid search is enabled.""" + + @overload + def __init__( + self, + *, + ranker: Optional[Union[str, "_models.RankerVersionType"]] = None, + score_threshold: Optional[float] = None, + hybrid_search: Optional["_models.HybridSearchOptions"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class RealtimeMCPError(_Model): + """RealtimeMCPError. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + RealtimeMCPHTTPError, RealtimeMCPProtocolError, RealtimeMCPToolExecutionError + + :ivar type: Required. Known values are: "protocol_error", "tool_execution_error", and + "http_error". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RealtimeMcpErrorType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"protocol_error\", \"tool_execution_error\", and \"http_error\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model.
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class RealtimeMCPHTTPError(RealtimeMCPError, discriminator="http_error"): + """Realtime MCP HTTP error. + + :ivar type: Required. HTTP_ERROR. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.HTTP_ERROR + :ivar code: Required. + :vartype code: int + :ivar message: Required. + :vartype message: str + """ + + type: Literal[RealtimeMcpErrorType.HTTP_ERROR] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. HTTP_ERROR.""" + code: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + code: int, + message: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = RealtimeMcpErrorType.HTTP_ERROR # type: ignore + + +class RealtimeMCPProtocolError(RealtimeMCPError, discriminator="protocol_error"): + """Realtime MCP protocol error. + + :ivar type: Required. PROTOCOL_ERROR. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.PROTOCOL_ERROR + :ivar code: Required. + :vartype code: int + :ivar message: Required. + :vartype message: str + """ + + type: Literal[RealtimeMcpErrorType.PROTOCOL_ERROR] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. 
PROTOCOL_ERROR.""" + code: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + code: int, + message: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = RealtimeMcpErrorType.PROTOCOL_ERROR # type: ignore + + +class RealtimeMCPToolExecutionError(RealtimeMCPError, discriminator="tool_execution_error"): + """Realtime MCP tool execution error. + + :ivar type: Required. TOOL_EXECUTION_ERROR. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_EXECUTION_ERROR + :ivar message: Required. + :vartype message: str + """ + + type: Literal[RealtimeMcpErrorType.TOOL_EXECUTION_ERROR] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. TOOL_EXECUTION_ERROR.""" + message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + message: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = RealtimeMcpErrorType.TOOL_EXECUTION_ERROR # type: ignore + + +class Reasoning(_Model): + """Reasoning. 
+ + :ivar effort: Is one of the following types: Literal["none"], Literal["minimal"], + Literal["low"], Literal["medium"], Literal["high"], Literal["xhigh"] + :vartype effort: str + :ivar summary: Is one of the following types: Literal["auto"], Literal["concise"], + Literal["detailed"] + :vartype summary: str + :ivar generate_summary: Is one of the following types: Literal["auto"], Literal["concise"], + Literal["detailed"] + :vartype generate_summary: str + """ + + effort: Optional[Literal["none", "minimal", "low", "medium", "high", "xhigh"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"none\"], Literal[\"minimal\"], Literal[\"low\"], + Literal[\"medium\"], Literal[\"high\"], Literal[\"xhigh\"]""" + summary: Optional[Literal["auto", "concise", "detailed"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"auto\"], Literal[\"concise\"], Literal[\"detailed\"]""" + generate_summary: Optional[Literal["auto", "concise", "detailed"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"auto\"], Literal[\"concise\"], Literal[\"detailed\"]""" + + @overload + def __init__( + self, + *, + effort: Optional[Literal["none", "minimal", "low", "medium", "high", "xhigh"]] = None, + summary: Optional[Literal["auto", "concise", "detailed"]] = None, + generate_summary: Optional[Literal["auto", "concise", "detailed"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ReasoningTextContent(_Model): + """Reasoning text.
+ + :ivar type: The type of the reasoning text. Always ``reasoning_text``. Required. Default value + is "reasoning_text". + :vartype type: str + :ivar text: The reasoning text from the model. Required. + :vartype text: str + """ + + type: Literal["reasoning_text"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of the reasoning text. Always ``reasoning_text``. Required. Default value is + \"reasoning_text\".""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The reasoning text from the model. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["reasoning_text"] = "reasoning_text" + + +class Response(_Model): + """The response object. + + :ivar metadata: + :vartype metadata: ~azure.ai.agentserver.responses.sdk.models.models.Metadata + :ivar top_logprobs: + :vartype top_logprobs: int + :ivar temperature: + :vartype temperature: int + :ivar top_p: + :vartype top_p: int + :ivar user: This field is being replaced by ``safety_identifier`` and ``prompt_cache_key``. Use + ``prompt_cache_key`` instead to maintain caching optimizations. A stable identifier for your + end-users. Used to boost cache hit rates by better bucketing similar requests and to help + OpenAI detect and prevent abuse. `Learn more + `_. + :vartype user: str + :ivar safety_identifier: A stable identifier used to help detect users of your application that + may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies + each user, with a maximum length of 64 characters. 
We recommend hashing their username or email + address, in order to avoid sending us any identifying information. `Learn more + `_. + :vartype safety_identifier: str + :ivar prompt_cache_key: Used by OpenAI to cache responses for similar requests to optimize your + cache hit rates. Replaces the ``user`` field. `Learn more `_. + :vartype prompt_cache_key: str + :ivar service_tier: Is one of the following types: Literal["auto"], Literal["default"], + Literal["flex"], Literal["scale"], Literal["priority"] + :vartype service_tier: str + :ivar prompt_cache_retention: Is either a Literal["in-memory"] type or a Literal["24h"] type. + :vartype prompt_cache_retention: str + :ivar previous_response_id: + :vartype previous_response_id: str + :ivar model: The model deployment to use for the creation of this response. + :vartype model: str + :ivar reasoning: + :vartype reasoning: ~azure.ai.agentserver.responses.sdk.models.models.Reasoning + :ivar background: + :vartype background: bool + :ivar max_output_tokens: + :vartype max_output_tokens: int + :ivar max_tool_calls: + :vartype max_tool_calls: int + :ivar text: + :vartype text: ~azure.ai.agentserver.responses.sdk.models.models.ResponseTextParam + :ivar tools: + :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.Tool] + :ivar tool_choice: Is either a Union[str, "_models.ToolChoiceOptions"] type or a + ToolChoiceParam type. + :vartype tool_choice: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolChoiceOptions or + ~azure.ai.agentserver.responses.sdk.models.models.ToolChoiceParam + :ivar prompt: + :vartype prompt: ~azure.ai.agentserver.responses.sdk.models.models.Prompt + :ivar truncation: Is either a Literal["auto"] type or a Literal["disabled"] type. + :vartype truncation: str + :ivar id: Unique identifier for this Response. Required. + :vartype id: str + :ivar object: The object type of this resource - always set to ``response``. Required.
Default + value is "response". + :vartype object: str + :ivar status: The status of the response generation. One of ``completed``, ``failed``, + ``in_progress``, ``cancelled``, ``queued``, or ``incomplete``. Is one of the following types: + Literal["completed"], Literal["failed"], Literal["in_progress"], Literal["cancelled"], + Literal["queued"], Literal["incomplete"] + :vartype status: str + :ivar created_at: Unix timestamp (in seconds) of when this Response was created. Required. + :vartype created_at: ~datetime.datetime + :ivar completed_at: + :vartype completed_at: ~datetime.datetime + :ivar error: Required. + :vartype error: ~azure.ai.agentserver.responses.sdk.models.models.ResponseError + :ivar incomplete_details: Required. + :vartype incomplete_details: + ~azure.ai.agentserver.responses.sdk.models.models.ResponseIncompleteDetails + :ivar output: An array of content items generated by the model. + + * The length and order of items in the `output` array is dependent + on the model's response. + * Rather than accessing the first item in the `output` array and + assuming it's an `assistant` message with the content generated by + the model, you might consider using the `output_text` property where + supported in SDKs. Required. + :vartype output: list[~azure.ai.agentserver.responses.sdk.models.models.OutputItem] + :ivar instructions: Required. Is either a str type or a list[Item] type. + :vartype instructions: str or list[~azure.ai.agentserver.responses.sdk.models.models.Item] + :ivar output_text: + :vartype output_text: str + :ivar usage: + :vartype usage: ~azure.ai.agentserver.responses.sdk.models.models.ResponseUsage + :ivar parallel_tool_calls: Whether to allow the model to run tool calls in parallel. Required. + :vartype parallel_tool_calls: bool + :ivar conversation: + :vartype conversation: ~azure.ai.agentserver.responses.sdk.models.models.ConversationReference + :ivar agent: (Deprecated) Use agent_reference instead.
The agent used for this response. + :vartype agent: ~azure.ai.agentserver.responses.sdk.models.models.AgentId + :ivar agent_reference: The agent used for this response. Required. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar agent_session_id: The session identifier for this response. Currently only relevant for + hosted agents. Always returned for hosted agents — either the caller-provided value, the + auto-derived value, or an auto-generated UUID. Use for session-scoped operations and to + maintain sandbox affinity in follow-up calls. + :vartype agent_session_id: str + """ + + metadata: Optional["_models.Metadata"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + top_logprobs: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + temperature: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + top_p: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + user: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """This field is being replaced by ``safety_identifier`` and ``prompt_cache_key``. Use + ``prompt_cache_key`` instead to maintain caching optimizations. A stable identifier for your + end-users. Used to boost cache hit rates by better bucketing similar requests and to help + OpenAI detect and prevent abuse. `Learn more + `_.""" + safety_identifier: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A stable identifier used to help detect users of your application that may be violating + OpenAI's usage policies. The IDs should be a string that uniquely identifies each user, with a + maximum length of 64 characters. We recommend hashing their username or email address, in order + to avoid sending us any identifying information. 
`Learn more + `_.""" + prompt_cache_key: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. + Replaces the ``user`` field. `Learn more `_.""" + service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"auto\"], Literal[\"default\"], Literal[\"flex\"], + Literal[\"scale\"], Literal[\"priority\"]""" + prompt_cache_retention: Optional[Literal["in-memory", "24h"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a Literal[\"in-memory\"] type or a Literal[\"24h\"] type.""" + previous_response_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + model: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The model deployment to use for the creation of this response.""" + reasoning: Optional["_models.Reasoning"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + background: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + max_output_tokens: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + max_tool_calls: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + text: Optional["_models.ResponseTextParam"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceParam"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a Union[str, \"_models.ToolChoiceOptions\"] type or a ToolChoiceParam 
type.""" + prompt: Optional["_models.Prompt"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + truncation: Optional[Literal["auto", "disabled"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is either a Literal[\"auto\"] type or a Literal[\"disabled\"] type.""" + id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique identifier for this Response. Required.""" + object: Literal["response"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The object type of this resource - always set to ``response``. Required. Default value is + \"response\".""" + status: Optional[Literal["completed", "failed", "in_progress", "cancelled", "queued", "incomplete"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the response generation. One of ``completed``, ``failed``, ``in_progress``, + ``cancelled``, ``queued``, or ``incomplete``. Is one of the following types: + Literal[\"completed\"], Literal[\"failed\"], Literal[\"in_progress\"], Literal[\"cancelled\"], + Literal[\"queued\"], Literal[\"incomplete\"]""" + created_at: datetime.datetime = rest_field( + visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" + ) + """Unix timestamp (in seconds) of when this Response was created. Required.""" + completed_at: Optional[datetime.datetime] = rest_field( + visibility=["read", "create", "update", "delete", "query"], format="unix-timestamp" + ) + error: "_models.ResponseError" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + incomplete_details: "_models.ResponseIncompleteDetails" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + output: list["_models.OutputItem"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """An array of content items generated by the model. 
+ + * The length and order of items in the `output` array is dependent + on the model's response. + * Rather than accessing the first item in the `output` array and + assuming it's an `assistant` message with the content generated by + the model, you might consider using the `output_text` property where + supported in SDKs. Required.""" + instructions: Union[str, list["_models.Item"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required. Is either a str type or a list[Item] type.""" + output_text: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + usage: Optional["_models.ResponseUsage"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + parallel_tool_calls: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether to allow the model to run tool calls in parallel. Required.""" + conversation: Optional["_models.ConversationReference"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + agent: Optional["_models.AgentId"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """(Deprecated) Use agent_reference instead. The agent used for this response.""" + agent_reference: "_models.AgentReference" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The agent used for this response. Required.""" + agent_session_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The session identifier for this response. Currently only relevant for hosted agents. Always + returned for hosted agents — either the caller-provided value, the auto-derived value, or an + auto-generated UUID.
Use for session-scoped operations and to maintain sandbox affinity in + follow-up calls.""" + + @overload + def __init__( # pylint: disable=too-many-locals + self, + *, + id: str, # pylint: disable=redefined-builtin + created_at: datetime.datetime, + error: "_models.ResponseError", + incomplete_details: "_models.ResponseIncompleteDetails", + output: list["_models.OutputItem"], + instructions: Union[str, list["_models.Item"]], + parallel_tool_calls: bool, + agent_reference: "_models.AgentReference", + metadata: Optional["_models.Metadata"] = None, + top_logprobs: Optional[int] = None, + temperature: Optional[int] = None, + top_p: Optional[int] = None, + user: Optional[str] = None, + safety_identifier: Optional[str] = None, + prompt_cache_key: Optional[str] = None, + service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] = None, + prompt_cache_retention: Optional[Literal["in-memory", "24h"]] = None, + previous_response_id: Optional[str] = None, + model: Optional[str] = None, + reasoning: Optional["_models.Reasoning"] = None, + background: Optional[bool] = None, + max_output_tokens: Optional[int] = None, + max_tool_calls: Optional[int] = None, + text: Optional["_models.ResponseTextParam"] = None, + tools: Optional[list["_models.Tool"]] = None, + tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceParam"]] = None, + prompt: Optional["_models.Prompt"] = None, + truncation: Optional[Literal["auto", "disabled"]] = None, + status: Optional[Literal["completed", "failed", "in_progress", "cancelled", "queued", "incomplete"]] = None, + completed_at: Optional[datetime.datetime] = None, + output_text: Optional[str] = None, + usage: Optional["_models.ResponseUsage"] = None, + conversation: Optional["_models.ConversationReference"] = None, + agent: Optional["_models.AgentId"] = None, + agent_session_id: Optional[str] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.object: Literal["response"] = "response" + + +class ResponseStreamEvent(_Model): + """ResponseStreamEvent. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ResponseErrorEvent, ResponseAudioDeltaEvent, ResponseAudioDoneEvent, + ResponseAudioTranscriptDeltaEvent, ResponseAudioTranscriptDoneEvent, + ResponseCodeInterpreterCallCompletedEvent, ResponseCodeInterpreterCallInProgressEvent, + ResponseCodeInterpreterCallInterpretingEvent, ResponseCodeInterpreterCallCodeDeltaEvent, + ResponseCodeInterpreterCallCodeDoneEvent, ResponseCompletedEvent, + ResponseContentPartAddedEvent, ResponseContentPartDoneEvent, ResponseCreatedEvent, + ResponseCustomToolCallInputDeltaEvent, ResponseCustomToolCallInputDoneEvent, + ResponseFailedEvent, ResponseFileSearchCallCompletedEvent, + ResponseFileSearchCallInProgressEvent, ResponseFileSearchCallSearchingEvent, + ResponseFunctionCallArgumentsDeltaEvent, ResponseFunctionCallArgumentsDoneEvent, + ResponseImageGenCallCompletedEvent, ResponseImageGenCallGeneratingEvent, + ResponseImageGenCallInProgressEvent, ResponseImageGenCallPartialImageEvent, + ResponseInProgressEvent, ResponseIncompleteEvent, ResponseMCPCallCompletedEvent, + ResponseMCPCallFailedEvent, ResponseMCPCallInProgressEvent, ResponseMCPCallArgumentsDeltaEvent, + ResponseMCPCallArgumentsDoneEvent, ResponseMCPListToolsCompletedEvent, + ResponseMCPListToolsFailedEvent, ResponseMCPListToolsInProgressEvent, + ResponseOutputItemAddedEvent, ResponseOutputItemDoneEvent, + ResponseOutputTextAnnotationAddedEvent, ResponseTextDeltaEvent, ResponseTextDoneEvent, + ResponseQueuedEvent, ResponseReasoningSummaryPartAddedEvent, + ResponseReasoningSummaryPartDoneEvent, 
ResponseReasoningSummaryTextDeltaEvent, + ResponseReasoningSummaryTextDoneEvent, ResponseReasoningTextDeltaEvent, + ResponseReasoningTextDoneEvent, ResponseRefusalDeltaEvent, ResponseRefusalDoneEvent, + ResponseWebSearchCallCompletedEvent, ResponseWebSearchCallInProgressEvent, + ResponseWebSearchCallSearchingEvent + + :ivar type: Required. Known values are: "response.audio.delta", "response.audio.done", + "response.audio.transcript.delta", "response.audio.transcript.done", + "response.code_interpreter_call_code.delta", "response.code_interpreter_call_code.done", + "response.code_interpreter_call.completed", "response.code_interpreter_call.in_progress", + "response.code_interpreter_call.interpreting", "response.completed", + "response.content_part.added", "response.content_part.done", "response.created", "error", + "response.file_search_call.completed", "response.file_search_call.in_progress", + "response.file_search_call.searching", "response.function_call_arguments.delta", + "response.function_call_arguments.done", "response.in_progress", "response.failed", + "response.incomplete", "response.output_item.added", "response.output_item.done", + "response.reasoning_summary_part.added", "response.reasoning_summary_part.done", + "response.reasoning_summary_text.delta", "response.reasoning_summary_text.done", + "response.reasoning_text.delta", "response.reasoning_text.done", "response.refusal.delta", + "response.refusal.done", "response.output_text.delta", "response.output_text.done", + "response.web_search_call.completed", "response.web_search_call.in_progress", + "response.web_search_call.searching", "response.image_generation_call.completed", + "response.image_generation_call.generating", "response.image_generation_call.in_progress", + "response.image_generation_call.partial_image", "response.mcp_call_arguments.delta", + "response.mcp_call_arguments.done", "response.mcp_call.completed", "response.mcp_call.failed", + "response.mcp_call.in_progress", 
"response.mcp_list_tools.completed", + "response.mcp_list_tools.failed", "response.mcp_list_tools.in_progress", + "response.output_text.annotation.added", "response.queued", + "response.custom_tool_call_input.delta", and "response.custom_tool_call_input.done". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ResponseStreamEventType + :ivar sequence_number: Required. + :vartype sequence_number: int + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"response.audio.delta\", \"response.audio.done\", + \"response.audio.transcript.delta\", \"response.audio.transcript.done\", + \"response.code_interpreter_call_code.delta\", \"response.code_interpreter_call_code.done\", + \"response.code_interpreter_call.completed\", \"response.code_interpreter_call.in_progress\", + \"response.code_interpreter_call.interpreting\", \"response.completed\", + \"response.content_part.added\", \"response.content_part.done\", \"response.created\", + \"error\", \"response.file_search_call.completed\", \"response.file_search_call.in_progress\", + \"response.file_search_call.searching\", \"response.function_call_arguments.delta\", + \"response.function_call_arguments.done\", \"response.in_progress\", \"response.failed\", + \"response.incomplete\", \"response.output_item.added\", \"response.output_item.done\", + \"response.reasoning_summary_part.added\", \"response.reasoning_summary_part.done\", + \"response.reasoning_summary_text.delta\", \"response.reasoning_summary_text.done\", + \"response.reasoning_text.delta\", \"response.reasoning_text.done\", + \"response.refusal.delta\", \"response.refusal.done\", \"response.output_text.delta\", + \"response.output_text.done\", \"response.web_search_call.completed\", + \"response.web_search_call.in_progress\", \"response.web_search_call.searching\", + \"response.image_generation_call.completed\", 
\"response.image_generation_call.generating\", + \"response.image_generation_call.in_progress\", + \"response.image_generation_call.partial_image\", \"response.mcp_call_arguments.delta\", + \"response.mcp_call_arguments.done\", \"response.mcp_call.completed\", + \"response.mcp_call.failed\", \"response.mcp_call.in_progress\", + \"response.mcp_list_tools.completed\", \"response.mcp_list_tools.failed\", + \"response.mcp_list_tools.in_progress\", \"response.output_text.annotation.added\", + \"response.queued\", \"response.custom_tool_call_input.delta\", and + \"response.custom_tool_call_input.done\".""" + sequence_number: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + type: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseAudioDeltaEvent(ResponseStreamEvent, discriminator="response.audio.delta"): + """Emitted when there is a partial audio response. + + :ivar type: The type of the event. Always ``response.audio.delta``. Required. + RESPONSE_AUDIO_DELTA. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_AUDIO_DELTA + :ivar sequence_number: A sequence number for this chunk of the stream response. Required. + :vartype sequence_number: int + :ivar delta: A chunk of Base64 encoded response audio bytes. Required. + :vartype delta: bytes + """ + + type: Literal[ResponseStreamEventType.RESPONSE_AUDIO_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.audio.delta``. Required. 
RESPONSE_AUDIO_DELTA.""" + delta: bytes = rest_field(visibility=["read", "create", "update", "delete", "query"], format="base64") + """A chunk of Base64 encoded response audio bytes. Required.""" + + @overload + def __init__( + self, + *, + sequence_number: int, + delta: bytes, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_AUDIO_DELTA # type: ignore + + +class ResponseAudioDoneEvent(ResponseStreamEvent, discriminator="response.audio.done"): + """Emitted when the audio response is complete. + + :ivar type: The type of the event. Always ``response.audio.done``. Required. + RESPONSE_AUDIO_DONE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_AUDIO_DONE + :ivar sequence_number: The sequence number of the delta. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_AUDIO_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.audio.done``. Required. RESPONSE_AUDIO_DONE.""" + + @overload + def __init__( + self, + *, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_AUDIO_DONE # type: ignore + + +class ResponseAudioTranscriptDeltaEvent(ResponseStreamEvent, discriminator="response.audio.transcript.delta"): + """Emitted when there is a partial transcript of audio. + + :ivar type: The type of the event. 
Always ``response.audio.transcript.delta``. Required. + RESPONSE_AUDIO_TRANSCRIPT_DELTA. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_AUDIO_TRANSCRIPT_DELTA + :ivar delta: The partial transcript of the audio response. Required. + :vartype delta: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_AUDIO_TRANSCRIPT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.audio.transcript.delta``. Required. + RESPONSE_AUDIO_TRANSCRIPT_DELTA.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The partial transcript of the audio response. Required.""" + + @overload + def __init__( + self, + *, + delta: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_AUDIO_TRANSCRIPT_DELTA # type: ignore + + +class ResponseAudioTranscriptDoneEvent(ResponseStreamEvent, discriminator="response.audio.transcript.done"): + """Emitted when the full audio transcript is completed. + + :ivar type: The type of the event. Always ``response.audio.transcript.done``. Required. + RESPONSE_AUDIO_TRANSCRIPT_DONE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_AUDIO_TRANSCRIPT_DONE + :ivar sequence_number: The sequence number of this event. Required. 
+ :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_AUDIO_TRANSCRIPT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.audio.transcript.done``. Required. + RESPONSE_AUDIO_TRANSCRIPT_DONE.""" + + @overload + def __init__( + self, + *, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_AUDIO_TRANSCRIPT_DONE # type: ignore + + +class ResponseCodeInterpreterCallCodeDeltaEvent( + ResponseStreamEvent, discriminator="response.code_interpreter_call_code.delta" +): # pylint: disable=name-too-long + """Emitted when a partial code snippet is streamed by the code interpreter. + + :ivar type: The type of the event. Always ``response.code_interpreter_call_code.delta``. + Required. RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA + :ivar output_index: The index of the output item in the response for which the code is being + streamed. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the code interpreter tool call item. Required. + :vartype item_id: str + :ivar delta: The partial code snippet being streamed by the code interpreter. Required. + :vartype delta: str + :ivar sequence_number: The sequence number of this event, used to order streaming events. + Required. 
+ :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.code_interpreter_call_code.delta``. Required. + RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response for which the code is being streamed. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the code interpreter tool call item. Required.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The partial code snippet being streamed by the code interpreter. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + delta: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA # type: ignore + + +class ResponseCodeInterpreterCallCodeDoneEvent( + ResponseStreamEvent, discriminator="response.code_interpreter_call_code.done" +): + """Emitted when the code snippet is finalized by the code interpreter. + + :ivar type: The type of the event. Always ``response.code_interpreter_call_code.done``. + Required. RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE + :ivar output_index: The index of the output item in the response for which the code is + finalized. Required. 
+ :vartype output_index: int + :ivar item_id: The unique identifier of the code interpreter tool call item. Required. + :vartype item_id: str + :ivar code: The final code snippet output by the code interpreter. Required. + :vartype code: str + :ivar sequence_number: The sequence number of this event, used to order streaming events. + Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.code_interpreter_call_code.done``. Required. + RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response for which the code is finalized. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the code interpreter tool call item. Required.""" + code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The final code snippet output by the code interpreter. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + code: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE # type: ignore + + +class ResponseCodeInterpreterCallCompletedEvent( + ResponseStreamEvent, discriminator="response.code_interpreter_call.completed" +): # pylint: disable=name-too-long + """Emitted when the code interpreter call is completed. + + :ivar type: The type of the event. 
Always ``response.code_interpreter_call.completed``. + Required. RESPONSE_CODE_INTERPRETER_CALL_COMPLETED. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED + :ivar output_index: The index of the output item in the response for which the code interpreter + call is completed. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the code interpreter tool call item. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of this event, used to order streaming events. + Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.code_interpreter_call.completed``. Required. + RESPONSE_CODE_INTERPRETER_CALL_COMPLETED.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response for which the code interpreter call is completed. + Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the code interpreter tool call item. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED # type: ignore + + +class ResponseCodeInterpreterCallInProgressEvent( + ResponseStreamEvent, discriminator="response.code_interpreter_call.in_progress" +): # pylint: disable=name-too-long + """Emitted when a code interpreter call is in progress. + + :ivar type: The type of the event. Always ``response.code_interpreter_call.in_progress``. + Required. RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS + :ivar output_index: The index of the output item in the response for which the code interpreter + call is in progress. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the code interpreter tool call item. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of this event, used to order streaming events. + Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.code_interpreter_call.in_progress``. Required. + RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response for which the code interpreter call is in + progress. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the code interpreter tool call item. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS # type: ignore + + +class ResponseCodeInterpreterCallInterpretingEvent( + ResponseStreamEvent, discriminator="response.code_interpreter_call.interpreting" +): # pylint: disable=name-too-long + """Emitted when the code interpreter is actively interpreting the code snippet. + + :ivar type: The type of the event. Always ``response.code_interpreter_call.interpreting``. + Required. RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING + :ivar output_index: The index of the output item in the response for which the code interpreter + is interpreting code. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the code interpreter tool call item. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of this event, used to order streaming events. + Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.code_interpreter_call.interpreting``. Required. + RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response for which the code interpreter is interpreting + code. 
Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the code interpreter tool call item. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING # type: ignore + + +class ResponseCompletedEvent(ResponseStreamEvent, discriminator="response.completed"): + """Emitted when the model response is complete. + + :ivar type: The type of the event. Always ``response.completed``. Required. RESPONSE_COMPLETED. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_COMPLETED + :ivar response: Properties of the completed response. Required. + :vartype response: ~azure.ai.agentserver.responses.sdk.models.models.Response + :ivar sequence_number: The sequence number for this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.completed``. Required. RESPONSE_COMPLETED.""" + response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Properties of the completed response. Required.""" + + @overload + def __init__( + self, + *, + response: "_models.Response", + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_COMPLETED # type: ignore + + +class ResponseContentPartAddedEvent(ResponseStreamEvent, discriminator="response.content_part.added"): + """Emitted when a new content part is added. + + :ivar type: The type of the event. Always ``response.content_part.added``. Required. + RESPONSE_CONTENT_PART_ADDED. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CONTENT_PART_ADDED + :ivar item_id: The ID of the output item that the content part was added to. Required. + :vartype item_id: str + :ivar output_index: The index of the output item that the content part was added to. Required. + :vartype output_index: int + :ivar content_index: The index of the content part that was added. Required. + :vartype content_index: int + :ivar part: The content part that was added. Required. + :vartype part: ~azure.ai.agentserver.responses.sdk.models.models.OutputContent + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CONTENT_PART_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.content_part.added``. Required. + RESPONSE_CONTENT_PART_ADDED.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the content part was added to. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the content part was added to. Required.""" + content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the content part that was added. 
Required.""" + part: "_models.OutputContent" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content part that was added. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + content_index: int, + part: "_models.OutputContent", + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CONTENT_PART_ADDED # type: ignore + + +class ResponseContentPartDoneEvent(ResponseStreamEvent, discriminator="response.content_part.done"): + """Emitted when a content part is done. + + :ivar type: The type of the event. Always ``response.content_part.done``. Required. + RESPONSE_CONTENT_PART_DONE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CONTENT_PART_DONE + :ivar item_id: The ID of the output item that the content part was added to. Required. + :vartype item_id: str + :ivar output_index: The index of the output item that the content part was added to. Required. + :vartype output_index: int + :ivar content_index: The index of the content part that is done. Required. + :vartype content_index: int + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar part: The content part that is done. Required. + :vartype part: ~azure.ai.agentserver.responses.sdk.models.models.OutputContent + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CONTENT_PART_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.content_part.done``. Required. 
+ RESPONSE_CONTENT_PART_DONE.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the content part was added to. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the content part was added to. Required.""" + content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the content part that is done. Required.""" + part: "_models.OutputContent" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The content part that is done. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + content_index: int, + sequence_number: int, + part: "_models.OutputContent", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CONTENT_PART_DONE # type: ignore + + +class ResponseCreatedEvent(ResponseStreamEvent, discriminator="response.created"): + """An event that is emitted when a response is created. + + :ivar type: The type of the event. Always ``response.created``. Required. RESPONSE_CREATED. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CREATED + :ivar response: The response that was created. Required. + :vartype response: ~azure.ai.agentserver.responses.sdk.models.models.Response + :ivar sequence_number: The sequence number for this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CREATED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. 
Always ``response.created``. Required. RESPONSE_CREATED.""" + response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The response that was created. Required.""" + + @overload + def __init__( + self, + *, + response: "_models.Response", + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CREATED # type: ignore + + +class ResponseCustomToolCallInputDeltaEvent(ResponseStreamEvent, discriminator="response.custom_tool_call_input.delta"): + """ResponseCustomToolCallInputDelta. + + :ivar type: The event type identifier. Required. RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar output_index: The index of the output this delta applies to. Required. + :vartype output_index: int + :ivar item_id: Unique identifier for the API item associated with this event. Required. + :vartype item_id: str + :ivar delta: The incremental input data (delta) for the custom tool call. Required. + :vartype delta: str + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The event type identifier. Required. RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output this delta applies to. 
Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique identifier for the API item associated with this event. Required.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The incremental input data (delta) for the custom tool call. Required.""" + + @overload + def __init__( + self, + *, + sequence_number: int, + output_index: int, + item_id: str, + delta: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA # type: ignore + + +class ResponseCustomToolCallInputDoneEvent(ResponseStreamEvent, discriminator="response.custom_tool_call_input.done"): + """ResponseCustomToolCallInputDone. + + :ivar type: The event type identifier. Required. RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar output_index: The index of the output this event applies to. Required. + :vartype output_index: int + :ivar item_id: Unique identifier for the API item associated with this event. Required. + :vartype item_id: str + :ivar input: The complete input data for the custom tool call. Required. + :vartype input: str + """ + + type: Literal[ResponseStreamEventType.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The event type identifier. Required. 
RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output this event applies to. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique identifier for the API item associated with this event. Required.""" + input: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The complete input data for the custom tool call. Required.""" + + @overload + def __init__( + self, + *, + sequence_number: int, + output_index: int, + item_id: str, + input: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE # type: ignore + + +class ResponseError(_Model): + """An error object returned when the model fails to generate a Response. + + :ivar code: Required. Known values are: "server_error", "rate_limit_exceeded", + "invalid_prompt", "vector_store_timeout", "invalid_image", "invalid_image_format", + "invalid_base64_image", "invalid_image_url", "image_too_large", "image_too_small", + "image_parse_error", "image_content_policy_violation", "invalid_image_mode", + "image_file_too_large", "unsupported_image_media_type", "empty_image_file", + "failed_to_download_image", and "image_file_not_found". + :vartype code: str or ~azure.ai.agentserver.responses.sdk.models.models.ResponseErrorCode + :ivar message: A human-readable description of the error. Required. + :vartype message: str + """ + + code: Union[str, "_models.ResponseErrorCode"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required. 
Known values are: \"server_error\", \"rate_limit_exceeded\", \"invalid_prompt\", + \"vector_store_timeout\", \"invalid_image\", \"invalid_image_format\", + \"invalid_base64_image\", \"invalid_image_url\", \"image_too_large\", \"image_too_small\", + \"image_parse_error\", \"image_content_policy_violation\", \"invalid_image_mode\", + \"image_file_too_large\", \"unsupported_image_media_type\", \"empty_image_file\", + \"failed_to_download_image\", and \"image_file_not_found\".""" + message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A human-readable description of the error. Required.""" + + @overload + def __init__( + self, + *, + code: Union[str, "_models.ResponseErrorCode"], + message: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseErrorEvent(ResponseStreamEvent, discriminator="error"): + """Emitted when an error occurs. + + :ivar type: The type of the event. Always ``error``. Required. ERROR. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ERROR + :ivar code: Required. + :vartype code: str + :ivar message: The error message. Required. + :vartype message: str + :ivar param: Required. + :vartype param: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.ERROR] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``error``. Required. ERROR.""" + code: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + message: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The error message. 
Required.""" + param: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + code: str, + message: str, + param: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.ERROR # type: ignore + + +class ResponseFailedEvent(ResponseStreamEvent, discriminator="response.failed"): + """An event that is emitted when a response fails. + + :ivar type: The type of the event. Always ``response.failed``. Required. RESPONSE_FAILED. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_FAILED + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar response: The response that failed. Required. + :vartype response: ~azure.ai.agentserver.responses.sdk.models.models.Response + """ + + type: Literal[ResponseStreamEventType.RESPONSE_FAILED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.failed``. Required. RESPONSE_FAILED.""" + response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The response that failed. Required.""" + + @overload + def __init__( + self, + *, + sequence_number: int, + response: "_models.Response", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_FAILED # type: ignore + + +class ResponseFileSearchCallCompletedEvent(ResponseStreamEvent, discriminator="response.file_search_call.completed"): + """Emitted when a file search call is completed (results found). + + :ivar type: The type of the event. Always ``response.file_search_call.completed``. Required. + RESPONSE_FILE_SEARCH_CALL_COMPLETED. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_FILE_SEARCH_CALL_COMPLETED + :ivar output_index: The index of the output item where the file search call is initiated. + Required. + :vartype output_index: int + :ivar item_id: The ID of the output item where the file search call is initiated. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.file_search_call.completed``. Required. + RESPONSE_FILE_SEARCH_CALL_COMPLETED.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item where the file search call is initiated. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item where the file search call is initiated. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_COMPLETED # type: ignore + + +class ResponseFileSearchCallInProgressEvent(ResponseStreamEvent, discriminator="response.file_search_call.in_progress"): + """Emitted when a file search call is initiated. + + :ivar type: The type of the event. Always ``response.file_search_call.in_progress``. Required. + RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS + :ivar output_index: The index of the output item where the file search call is initiated. + Required. + :vartype output_index: int + :ivar item_id: The ID of the output item where the file search call is initiated. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.file_search_call.in_progress``. Required. + RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item where the file search call is initiated. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item where the file search call is initiated. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS # type: ignore + + +class ResponseFileSearchCallSearchingEvent(ResponseStreamEvent, discriminator="response.file_search_call.searching"): + """Emitted when a file search is currently searching. + + :ivar type: The type of the event. Always ``response.file_search_call.searching``. Required. + RESPONSE_FILE_SEARCH_CALL_SEARCHING. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_FILE_SEARCH_CALL_SEARCHING + :ivar output_index: The index of the output item where the file search call is searching. + Required. + :vartype output_index: int + :ivar item_id: The ID of the output item where the file search call is initiated. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_SEARCHING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.file_search_call.searching``. Required. + RESPONSE_FILE_SEARCH_CALL_SEARCHING.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item where the file search call is searching. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item where the file search call is initiated. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_FILE_SEARCH_CALL_SEARCHING # type: ignore + + +class ResponseFormatJsonSchemaSchema(_Model): + """JSON schema.""" + + +class ResponseFunctionCallArgumentsDeltaEvent( + ResponseStreamEvent, discriminator="response.function_call_arguments.delta" +): + """Emitted when there is a partial function-call arguments delta. + + :ivar type: The type of the event. Always ``response.function_call_arguments.delta``. Required. + RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA + :ivar item_id: The ID of the output item that the function-call arguments delta is added to. + Required. + :vartype item_id: str + :ivar output_index: The index of the output item that the function-call arguments delta is + added to. Required. + :vartype output_index: int + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar delta: The function-call arguments delta that is added. Required. + :vartype delta: str + """ + + type: Literal[ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.function_call_arguments.delta``. Required. + RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the function-call arguments delta is added to. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the function-call arguments delta is added to. 
Required.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The function-call arguments delta that is added. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + sequence_number: int, + delta: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA # type: ignore + + +class ResponseFunctionCallArgumentsDoneEvent( + ResponseStreamEvent, discriminator="response.function_call_arguments.done" +): + """Emitted when function-call arguments are finalized. + + :ivar type: Required. RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE + :ivar item_id: The ID of the item. Required. + :vartype item_id: str + :ivar name: The name of the function that was called. Required. + :vartype name: str + :ivar output_index: The index of the output item. Required. + :vartype output_index: int + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar arguments: The function-call arguments. Required. + :vartype arguments: str + """ + + type: Literal[ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the item. Required.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function that was called. 
Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The function-call arguments. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + name: str, + output_index: int, + sequence_number: int, + arguments: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE # type: ignore + + +class ResponseImageGenCallCompletedEvent(ResponseStreamEvent, discriminator="response.image_generation_call.completed"): + """ResponseImageGenCallCompletedEvent. + + :ivar type: The type of the event. Always 'response.image_generation_call.completed'. Required. + RESPONSE_IMAGE_GENERATION_CALL_COMPLETED. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED + :ivar output_index: The index of the output item in the response's output array. Required. + :vartype output_index: int + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + :ivar item_id: The unique identifier of the image generation item being processed. Required. + :vartype item_id: str + """ + + type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always 'response.image_generation_call.completed'. Required. 
+ RESPONSE_IMAGE_GENERATION_CALL_COMPLETED.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response's output array. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the image generation item being processed. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + sequence_number: int, + item_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED # type: ignore + + +class ResponseImageGenCallGeneratingEvent( + ResponseStreamEvent, discriminator="response.image_generation_call.generating" +): + """ResponseImageGenCallGeneratingEvent. + + :ivar type: The type of the event. Always 'response.image_generation_call.generating'. + Required. RESPONSE_IMAGE_GENERATION_CALL_GENERATING. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_IMAGE_GENERATION_CALL_GENERATING + :ivar output_index: The index of the output item in the response's output array. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the image generation item being processed. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of the image generation item being processed. + Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_GENERATING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always 'response.image_generation_call.generating'. Required. 
+ RESPONSE_IMAGE_GENERATION_CALL_GENERATING.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response's output array. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the image generation item being processed. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_GENERATING # type: ignore + + +class ResponseImageGenCallInProgressEvent( + ResponseStreamEvent, discriminator="response.image_generation_call.in_progress" +): + """ResponseImageGenCallInProgressEvent. + + :ivar type: The type of the event. Always 'response.image_generation_call.in_progress'. + Required. RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS + :ivar output_index: The index of the output item in the response's output array. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the image generation item being processed. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of the image generation item being processed. + Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always 'response.image_generation_call.in_progress'. Required. 
+ RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response's output array. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the image generation item being processed. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS # type: ignore + + +class ResponseImageGenCallPartialImageEvent( + ResponseStreamEvent, discriminator="response.image_generation_call.partial_image" +): + """ResponseImageGenCallPartialImageEvent. + + :ivar type: The type of the event. Always 'response.image_generation_call.partial_image'. + Required. RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE + :ivar output_index: The index of the output item in the response's output array. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the image generation item being processed. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of the image generation item being processed. + Required. + :vartype sequence_number: int + :ivar partial_image_index: 0-based index for the partial image (backend is 1-based, but this is + 0-based for the user). Required. + :vartype partial_image_index: int + :ivar partial_image_b64: Base64-encoded partial image data, suitable for rendering as an image. + Required. 
+ :vartype partial_image_b64: str + """ + + type: Literal[ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always 'response.image_generation_call.partial_image'. Required. + RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response's output array. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the image generation item being processed. Required.""" + partial_image_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """0-based index for the partial image (backend is 1-based, but this is 0-based for the user). + Required.""" + partial_image_b64: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Base64-encoded partial image data, suitable for rendering as an image. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + partial_image_index: int, + partial_image_b64: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE # type: ignore + + +class ResponseIncompleteDetails(_Model): + """ResponseIncompleteDetails. + + :ivar reason: Is either a Literal["max_output_tokens"] type or a Literal["content_filter"] + type. 
+ :vartype reason: str + """ + + reason: Optional[Literal["max_output_tokens", "content_filter"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The reason the response is incomplete: either \"max_output_tokens\" or \"content_filter\".""" + + @overload + def __init__( + self, + *, + reason: Optional[Literal["max_output_tokens", "content_filter"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseIncompleteEvent(ResponseStreamEvent, discriminator="response.incomplete"): + """An event that is emitted when a response finishes as incomplete. + + :ivar type: The type of the event. Always ``response.incomplete``. Required. + RESPONSE_INCOMPLETE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_INCOMPLETE + :ivar response: The response that was incomplete. Required. + :vartype response: ~azure.ai.agentserver.responses.sdk.models.models.Response + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_INCOMPLETE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.incomplete``. Required. RESPONSE_INCOMPLETE.""" + response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The response that was incomplete. Required.""" + + @overload + def __init__( + self, + *, + response: "_models.Response", + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_INCOMPLETE # type: ignore + + +class ResponseInProgressEvent(ResponseStreamEvent, discriminator="response.in_progress"): + """Emitted when the response is in progress. + + :ivar type: The type of the event. Always ``response.in_progress``. Required. + RESPONSE_IN_PROGRESS. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_IN_PROGRESS + :ivar response: The response that is in progress. Required. + :vartype response: ~azure.ai.agentserver.responses.sdk.models.models.Response + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.in_progress``. Required. RESPONSE_IN_PROGRESS.""" + response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The response that is in progress. Required.""" + + @overload + def __init__( + self, + *, + response: "_models.Response", + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_IN_PROGRESS # type: ignore + + +class ResponseLogProb(_Model): + """A logprob is the logarithmic probability that the model assigns to producing a particular token + at a given position in the sequence. Less-negative (higher) logprob values indicate greater + model confidence in that token choice. + + :ivar token: A possible text token. 
Required. + :vartype token: str + :ivar logprob: The log probability of this token. Required. + :vartype logprob: float + :ivar top_logprobs: The log probability of the top 20 most likely tokens. + :vartype top_logprobs: + list[~azure.ai.agentserver.responses.sdk.models.models.ResponseLogProbTopLogprobs] + """ + + token: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A possible text token. Required.""" + logprob: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The log probability of this token. Required.""" + top_logprobs: Optional[list["_models.ResponseLogProbTopLogprobs"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The log probability of the top 20 most likely tokens.""" + + @overload + def __init__( + self, + *, + token: str, + logprob: float, + top_logprobs: Optional[list["_models.ResponseLogProbTopLogprobs"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseLogProbTopLogprobs(_Model): + """ResponseLogProbTopLogprobs. + + :ivar token: + :vartype token: str + :ivar logprob: + :vartype logprob: float + """ + + token: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + logprob: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + token: Optional[str] = None, + logprob: Optional[float] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseMCPCallArgumentsDeltaEvent(ResponseStreamEvent, discriminator="response.mcp_call_arguments.delta"): + """ResponseMCPCallArgumentsDeltaEvent. + + :ivar type: The type of the event. Always 'response.mcp_call_arguments.delta'. Required. + RESPONSE_MCP_CALL_ARGUMENTS_DELTA. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_CALL_ARGUMENTS_DELTA + :ivar output_index: The index of the output item in the response's output array. Required. + :vartype output_index: int + :ivar item_id: The unique identifier of the MCP tool call item being processed. Required. + :vartype item_id: str + :ivar delta: A JSON string containing the partial update to the arguments for the MCP tool + call. Required. + :vartype delta: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always 'response.mcp_call_arguments.delta'. Required. + RESPONSE_MCP_CALL_ARGUMENTS_DELTA.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item in the response's output array. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique identifier of the MCP tool call item being processed. Required.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string containing the partial update to the arguments for the MCP tool call. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + delta: str, + sequence_number: int, + ) -> None: ... 
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DELTA  # type: ignore
+
+
+class ResponseMCPCallArgumentsDoneEvent(ResponseStreamEvent, discriminator="response.mcp_call_arguments.done"):
+    """ResponseMCPCallArgumentsDoneEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_call_arguments.done'. Required.
+     RESPONSE_MCP_CALL_ARGUMENTS_DONE.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_CALL_ARGUMENTS_DONE
+    :ivar output_index: The index of the output item in the response's output array. Required.
+    :vartype output_index: int
+    :ivar item_id: The unique identifier of the MCP tool call item being processed. Required.
+    :vartype item_id: str
+    :ivar arguments: A JSON string containing the finalized arguments for the MCP tool call.
+     Required.
+    :vartype arguments: str
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_call_arguments.done'. Required.
+     RESPONSE_MCP_CALL_ARGUMENTS_DONE."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item in the response's output array. Required."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique identifier of the MCP tool call item being processed. Required."""
+    arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """A JSON string containing the finalized arguments for the MCP tool call. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        output_index: int,
+        item_id: str,
+        arguments: str,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_ARGUMENTS_DONE  # type: ignore
+
+
+class ResponseMCPCallCompletedEvent(ResponseStreamEvent, discriminator="response.mcp_call.completed"):
+    """ResponseMCPCallCompletedEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_call.completed'. Required.
+     RESPONSE_MCP_CALL_COMPLETED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_CALL_COMPLETED
+    :ivar item_id: The ID of the MCP tool call item that completed. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item that completed. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_call.completed'. Required.
+     RESPONSE_MCP_CALL_COMPLETED."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the MCP tool call item that completed. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that completed. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_COMPLETED  # type: ignore
+
+
+class ResponseMCPCallFailedEvent(ResponseStreamEvent, discriminator="response.mcp_call.failed"):
+    """ResponseMCPCallFailedEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_call.failed'. Required.
+     RESPONSE_MCP_CALL_FAILED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_CALL_FAILED
+    :ivar item_id: The ID of the MCP tool call item that failed. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item that failed. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_FAILED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_call.failed'. Required. RESPONSE_MCP_CALL_FAILED."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the MCP tool call item that failed. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that failed. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_FAILED  # type: ignore
+
+
+class ResponseMCPCallInProgressEvent(ResponseStreamEvent, discriminator="response.mcp_call.in_progress"):
+    """ResponseMCPCallInProgressEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_call.in_progress'. Required.
+     RESPONSE_MCP_CALL_IN_PROGRESS.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_CALL_IN_PROGRESS
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    :ivar output_index: The index of the output item in the response's output array. Required.
+    :vartype output_index: int
+    :ivar item_id: The unique identifier of the MCP tool call item being processed. Required.
+    :vartype item_id: str
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_call.in_progress'. Required.
+     RESPONSE_MCP_CALL_IN_PROGRESS."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item in the response's output array. Required."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique identifier of the MCP tool call item being processed. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        sequence_number: int,
+        output_index: int,
+        item_id: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_CALL_IN_PROGRESS  # type: ignore
+
+
+class ResponseMCPListToolsCompletedEvent(ResponseStreamEvent, discriminator="response.mcp_list_tools.completed"):
+    """ResponseMCPListToolsCompletedEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_list_tools.completed'. Required.
+     RESPONSE_MCP_LIST_TOOLS_COMPLETED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_LIST_TOOLS_COMPLETED
+    :ivar item_id: The ID of the MCP tool call item that produced this output. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item that was processed. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_list_tools.completed'. Required.
+     RESPONSE_MCP_LIST_TOOLS_COMPLETED."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the MCP tool call item that produced this output. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that was processed. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_COMPLETED  # type: ignore
+
+
+class ResponseMCPListToolsFailedEvent(ResponseStreamEvent, discriminator="response.mcp_list_tools.failed"):
+    """ResponseMCPListToolsFailedEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_list_tools.failed'. Required.
+     RESPONSE_MCP_LIST_TOOLS_FAILED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_LIST_TOOLS_FAILED
+    :ivar item_id: The ID of the MCP tool call item that failed. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item that failed. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_FAILED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_list_tools.failed'. Required.
+     RESPONSE_MCP_LIST_TOOLS_FAILED."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the MCP tool call item that failed. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that failed. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_FAILED  # type: ignore
+
+
+class ResponseMCPListToolsInProgressEvent(ResponseStreamEvent, discriminator="response.mcp_list_tools.in_progress"):
+    """ResponseMCPListToolsInProgressEvent.
+
+    :ivar type: The type of the event. Always 'response.mcp_list_tools.in_progress'. Required.
+     RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS
+    :ivar item_id: The ID of the MCP tool call item that is being processed. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item that is being processed. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.mcp_list_tools.in_progress'. Required.
+     RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the MCP tool call item that is being processed. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that is being processed. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS  # type: ignore
+
+
+class ResponseOutputItemAddedEvent(ResponseStreamEvent, discriminator="response.output_item.added"):
+    """Emitted when a new output item is added.
+
+    :ivar type: The type of the event. Always ``response.output_item.added``. Required.
+     RESPONSE_OUTPUT_ITEM_ADDED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_OUTPUT_ITEM_ADDED
+    :ivar output_index: The index of the output item that was added. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    :ivar item: The output item that was added. Required.
+    :vartype item: ~azure.ai.agentserver.responses.sdk.models.models.OutputItem
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.output_item.added``. Required.
+     RESPONSE_OUTPUT_ITEM_ADDED."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that was added. Required."""
+    item: "_models.OutputItem" = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The output item that was added. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        output_index: int,
+        sequence_number: int,
+        item: "_models.OutputItem",
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_ADDED  # type: ignore
+
+
+class ResponseOutputItemDoneEvent(ResponseStreamEvent, discriminator="response.output_item.done"):
+    """Emitted when an output item is marked done.
+
+    :ivar type: The type of the event. Always ``response.output_item.done``. Required.
+     RESPONSE_OUTPUT_ITEM_DONE.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_OUTPUT_ITEM_DONE
+    :ivar output_index: The index of the output item that was marked done. Required.
+    :vartype output_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    :ivar item: The output item that was marked done. Required.
+    :vartype item: ~azure.ai.agentserver.responses.sdk.models.models.OutputItem
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.output_item.done``. Required.
+     RESPONSE_OUTPUT_ITEM_DONE."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item that was marked done. Required."""
+    item: "_models.OutputItem" = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The output item that was marked done. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        output_index: int,
+        sequence_number: int,
+        item: "_models.OutputItem",
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_OUTPUT_ITEM_DONE  # type: ignore
+
+
+class ResponseOutputTextAnnotationAddedEvent(
+    ResponseStreamEvent, discriminator="response.output_text.annotation.added"
+):
+    """ResponseOutputTextAnnotationAddedEvent.
+
+    :ivar type: The type of the event. Always 'response.output_text.annotation.added'. Required.
+     RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED
+    :ivar item_id: The unique identifier of the item to which the annotation is being added.
+     Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item in the response's output array. Required.
+    :vartype output_index: int
+    :ivar content_index: The index of the content part within the output item. Required.
+    :vartype content_index: int
+    :ivar annotation_index: The index of the annotation within the content part. Required.
+    :vartype annotation_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    :ivar annotation: The annotation object being added. (See annotation schema for details.).
+     Required.
+    :vartype annotation: ~azure.ai.agentserver.responses.sdk.models.models.Annotation
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.output_text.annotation.added'. Required.
+     RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The unique identifier of the item to which the annotation is being added. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item in the response's output array. Required."""
+    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the content part within the output item. Required."""
+    annotation_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the annotation within the content part. Required."""
+    annotation: "_models.Annotation" = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The annotation object being added. (See annotation schema for details.). Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        content_index: int,
+        annotation_index: int,
+        sequence_number: int,
+        annotation: "_models.Annotation",
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED  # type: ignore
+
+
+class ResponsePromptVariables(_Model):
+    """Prompt Variables."""
+
+
+class ResponseQueuedEvent(ResponseStreamEvent, discriminator="response.queued"):
+    """ResponseQueuedEvent.
+
+    :ivar type: The type of the event. Always 'response.queued'. Required. RESPONSE_QUEUED.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_QUEUED
+    :ivar response: The full response object that is queued. Required.
+    :vartype response: ~azure.ai.agentserver.responses.sdk.models.models.Response
+    :ivar sequence_number: The sequence number for this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_QUEUED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always 'response.queued'. Required. RESPONSE_QUEUED."""
+    response: "_models.Response" = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The full response object that is queued. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        response: "_models.Response",
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_QUEUED  # type: ignore
+
+
+class ResponseReasoningSummaryPartAddedEvent(
+    ResponseStreamEvent, discriminator="response.reasoning_summary_part.added"
+):
+    """Emitted when a new reasoning summary part is added.
+
+    :ivar type: The type of the event. Always ``response.reasoning_summary_part.added``. Required.
+     RESPONSE_REASONING_SUMMARY_PART_ADDED.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REASONING_SUMMARY_PART_ADDED
+    :ivar item_id: The ID of the item this summary part is associated with. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item this summary part is associated with.
+     Required.
+    :vartype output_index: int
+    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
+    :vartype summary_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    :ivar part: The summary part that was added. Required.
+    :vartype part:
+     ~azure.ai.agentserver.responses.sdk.models.models.ResponseReasoningSummaryPartAddedEventPart
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_ADDED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.reasoning_summary_part.added``. Required.
+     RESPONSE_REASONING_SUMMARY_PART_ADDED."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item this summary part is associated with. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item this summary part is associated with. Required."""
+    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the summary part within the reasoning summary. Required."""
+    part: "_models.ResponseReasoningSummaryPartAddedEventPart" = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The summary part that was added. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        summary_index: int,
+        sequence_number: int,
+        part: "_models.ResponseReasoningSummaryPartAddedEventPart",
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_ADDED  # type: ignore
+
+
+class ResponseReasoningSummaryPartAddedEventPart(_Model):  # pylint: disable=name-too-long
+    """ResponseReasoningSummaryPartAddedEventPart.
+
+    :ivar type: Required. Default value is "summary_text".
+    :vartype type: str
+    :ivar text: Required.
+    :vartype text: str
+    """
+
+    type: Literal["summary_text"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required. Default value is \"summary_text\"."""
+    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        text: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type: Literal["summary_text"] = "summary_text"
+
+
+class ResponseReasoningSummaryPartDoneEvent(ResponseStreamEvent, discriminator="response.reasoning_summary_part.done"):
+    """Emitted when a reasoning summary part is completed.
+
+    :ivar type: The type of the event. Always ``response.reasoning_summary_part.done``. Required.
+     RESPONSE_REASONING_SUMMARY_PART_DONE.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REASONING_SUMMARY_PART_DONE
+    :ivar item_id: The ID of the item this summary part is associated with. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item this summary part is associated with.
+     Required.
+    :vartype output_index: int
+    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
+    :vartype summary_index: int
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    :ivar part: The completed summary part. Required.
+    :vartype part:
+     ~azure.ai.agentserver.responses.sdk.models.models.ResponseReasoningSummaryPartDoneEventPart
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.reasoning_summary_part.done``. Required.
+     RESPONSE_REASONING_SUMMARY_PART_DONE."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item this summary part is associated with. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item this summary part is associated with. Required."""
+    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the summary part within the reasoning summary. Required."""
+    part: "_models.ResponseReasoningSummaryPartDoneEventPart" = rest_field(
+        visibility=["read", "create", "update", "delete", "query"]
+    )
+    """The completed summary part. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        summary_index: int,
+        sequence_number: int,
+        part: "_models.ResponseReasoningSummaryPartDoneEventPart",
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_PART_DONE  # type: ignore
+
+
+class ResponseReasoningSummaryPartDoneEventPart(_Model):  # pylint: disable=name-too-long
+    """ResponseReasoningSummaryPartDoneEventPart.
+
+    :ivar type: Required. Default value is "summary_text".
+    :vartype type: str
+    :ivar text: Required.
+    :vartype text: str
+    """
+
+    type: Literal["summary_text"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required. Default value is \"summary_text\"."""
+    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        text: str,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type: Literal["summary_text"] = "summary_text"
+
+
+class ResponseReasoningSummaryTextDeltaEvent(
+    ResponseStreamEvent, discriminator="response.reasoning_summary_text.delta"
+):
+    """Emitted when a delta is added to a reasoning summary text.
+
+    :ivar type: The type of the event. Always ``response.reasoning_summary_text.delta``. Required.
+     RESPONSE_REASONING_SUMMARY_TEXT_DELTA.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REASONING_SUMMARY_TEXT_DELTA
+    :ivar item_id: The ID of the item this summary text delta is associated with. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item this summary text delta is associated with.
+     Required.
+    :vartype output_index: int
+    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
+    :vartype summary_index: int
+    :ivar delta: The text delta that was added to the summary. Required.
+    :vartype delta: str
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.reasoning_summary_text.delta``. Required.
+     RESPONSE_REASONING_SUMMARY_TEXT_DELTA."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item this summary text delta is associated with. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item this summary text delta is associated with. Required."""
+    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the summary part within the reasoning summary. Required."""
+    delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The text delta that was added to the summary. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        summary_index: int,
+        delta: str,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DELTA  # type: ignore
+
+
+class ResponseReasoningSummaryTextDoneEvent(ResponseStreamEvent, discriminator="response.reasoning_summary_text.done"):
+    """Emitted when a reasoning summary text is completed.
+
+    :ivar type: The type of the event. Always ``response.reasoning_summary_text.done``. Required.
+     RESPONSE_REASONING_SUMMARY_TEXT_DONE.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REASONING_SUMMARY_TEXT_DONE
+    :ivar item_id: The ID of the item this summary text is associated with. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item this summary text is associated with.
+     Required.
+    :vartype output_index: int
+    :ivar summary_index: The index of the summary part within the reasoning summary. Required.
+    :vartype summary_index: int
+    :ivar text: The full text of the completed reasoning summary. Required.
+    :vartype text: str
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.reasoning_summary_text.done``. Required.
+     RESPONSE_REASONING_SUMMARY_TEXT_DONE."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item this summary text is associated with. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item this summary text is associated with. Required."""
+    summary_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the summary part within the reasoning summary. Required."""
+    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The full text of the completed reasoning summary. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        summary_index: int,
+        text: str,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_REASONING_SUMMARY_TEXT_DONE  # type: ignore
+
+
+class ResponseReasoningTextDeltaEvent(ResponseStreamEvent, discriminator="response.reasoning_text.delta"):
+    """Emitted when a delta is added to a reasoning text.
+
+    :ivar type: The type of the event. Always ``response.reasoning_text.delta``. Required.
+     RESPONSE_REASONING_TEXT_DELTA.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REASONING_TEXT_DELTA
+    :ivar item_id: The ID of the item this reasoning text delta is associated with. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item this reasoning text delta is associated with.
+     Required.
+    :vartype output_index: int
+    :ivar content_index: The index of the reasoning content part this delta is associated with.
+     Required.
+    :vartype content_index: int
+    :ivar delta: The text delta that was added to the reasoning content. Required.
+    :vartype delta: str
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_TEXT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.reasoning_text.delta``. Required.
+     RESPONSE_REASONING_TEXT_DELTA."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item this reasoning text delta is associated with. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item this reasoning text delta is associated with. Required."""
+    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the reasoning content part this delta is associated with. Required."""
+    delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The text delta that was added to the reasoning content. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        content_index: int,
+        delta: str,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_REASONING_TEXT_DELTA  # type: ignore
+
+
+class ResponseReasoningTextDoneEvent(ResponseStreamEvent, discriminator="response.reasoning_text.done"):
+    """Emitted when a reasoning text is completed.
+
+    :ivar type: The type of the event. Always ``response.reasoning_text.done``. Required.
+     RESPONSE_REASONING_TEXT_DONE.
+    :vartype type: str or
+     ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REASONING_TEXT_DONE
+    :ivar item_id: The ID of the item this reasoning text is associated with. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item this reasoning text is associated with.
+     Required.
+    :vartype output_index: int
+    :ivar content_index: The index of the reasoning content part. Required.
+    :vartype content_index: int
+    :ivar text: The full text of the completed reasoning content. Required.
+    :vartype text: str
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REASONING_TEXT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.reasoning_text.done``. Required.
+     RESPONSE_REASONING_TEXT_DONE."""
+    item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The ID of the item this reasoning text is associated with. Required."""
+    output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the output item this reasoning text is associated with. Required."""
+    content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The index of the reasoning content part. Required."""
+    text: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
+    """The full text of the completed reasoning content. Required."""
+
+    @overload
+    def __init__(
+        self,
+        *,
+        item_id: str,
+        output_index: int,
+        content_index: int,
+        text: str,
+        sequence_number: int,
+    ) -> None: ...
+
+    @overload
+    def __init__(self, mapping: Mapping[str, Any]) -> None:
+        """
+        :param mapping: raw JSON to initialize the model.
+        :type mapping: Mapping[str, Any]
+        """
+
+    def __init__(self, *args: Any, **kwargs: Any) -> None:
+        super().__init__(*args, **kwargs)
+        self.type = ResponseStreamEventType.RESPONSE_REASONING_TEXT_DONE  # type: ignore
+
+
+class ResponseRefusalDeltaEvent(ResponseStreamEvent, discriminator="response.refusal.delta"):
+    """Emitted when there is a partial refusal text.
+
+    :ivar type: The type of the event. Always ``response.refusal.delta``. Required.
+     RESPONSE_REFUSAL_DELTA.
+    :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REFUSAL_DELTA
+    :ivar item_id: The ID of the output item that the refusal text is added to. Required.
+    :vartype item_id: str
+    :ivar output_index: The index of the output item that the refusal text is added to. Required.
+    :vartype output_index: int
+    :ivar content_index: The index of the content part that the refusal text is added to. Required.
+    :vartype content_index: int
+    :ivar delta: The refusal text that is added. Required.
+    :vartype delta: str
+    :ivar sequence_number: The sequence number of this event. Required.
+    :vartype sequence_number: int
+    """
+
+    type: Literal[ResponseStreamEventType.RESPONSE_REFUSAL_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"])  # type: ignore
+    """The type of the event. Always ``response.refusal.delta``. Required.
RESPONSE_REFUSAL_DELTA.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the refusal text is added to. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the refusal text is added to. Required.""" + content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the content part that the refusal text is added to. Required.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The refusal text that is added. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + content_index: int, + delta: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_REFUSAL_DELTA # type: ignore + + +class ResponseRefusalDoneEvent(ResponseStreamEvent, discriminator="response.refusal.done"): + """Emitted when refusal text is finalized. + + :ivar type: The type of the event. Always ``response.refusal.done``. Required. + RESPONSE_REFUSAL_DONE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_REFUSAL_DONE + :ivar item_id: The ID of the output item that the refusal text is finalized. Required. + :vartype item_id: str + :ivar output_index: The index of the output item that the refusal text is finalized. Required. + :vartype output_index: int + :ivar content_index: The index of the content part that the refusal text is finalized. + Required. + :vartype content_index: int + :ivar refusal: The refusal text that is finalized. Required. 
+ :vartype refusal: str + :ivar sequence_number: The sequence number of this event. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_REFUSAL_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.refusal.done``. Required. RESPONSE_REFUSAL_DONE.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the refusal text is finalized. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the refusal text is finalized. Required.""" + content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the content part that the refusal text is finalized. Required.""" + refusal: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The refusal text that is finalized. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + content_index: int, + refusal: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_REFUSAL_DONE # type: ignore + + +class ResponseStreamOptions(_Model): + """Options for streaming responses. Only set this when you set ``stream: true``. + + :ivar include_obfuscation: When true, stream obfuscation will be enabled. Stream obfuscation + adds random characters to an ``obfuscation`` field on streaming delta events to normalize + payload sizes as a mitigation to certain side-channel attacks. 
These obfuscation fields are + included by default, but add a small amount of overhead to the data stream. You can set + ``include_obfuscation`` to false to optimize for bandwidth if you trust the network links + between your application and the OpenAI API. + :vartype include_obfuscation: bool + """ + + include_obfuscation: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """When true, stream obfuscation will be enabled. Stream obfuscation adds random characters to an + ``obfuscation`` field on streaming delta events to normalize payload sizes as a mitigation to + certain side-channel attacks. These obfuscation fields are included by default, but add a small + amount of overhead to the data stream. You can set ``include_obfuscation`` to false to optimize + for bandwidth if you trust the network links between your application and the OpenAI API.""" + + @overload + def __init__( + self, + *, + include_obfuscation: Optional[bool] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseTextDeltaEvent(ResponseStreamEvent, discriminator="response.output_text.delta"): + """Emitted when there is an additional text delta. + + :ivar type: The type of the event. Always ``response.output_text.delta``. Required. + RESPONSE_OUTPUT_TEXT_DELTA. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_OUTPUT_TEXT_DELTA + :ivar item_id: The ID of the output item that the text delta was added to. Required. + :vartype item_id: str + :ivar output_index: The index of the output item that the text delta was added to. Required. + :vartype output_index: int + :ivar content_index: The index of the content part that the text delta was added to. Required. 
+ :vartype content_index: int + :ivar delta: The text delta that was added. Required. + :vartype delta: str + :ivar sequence_number: The sequence number for this event. Required. + :vartype sequence_number: int + :ivar logprobs: The log probabilities of the tokens in the delta. Required. + :vartype logprobs: list[~azure.ai.agentserver.responses.sdk.models.models.ResponseLogProb] + """ + + type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DELTA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.output_text.delta``. Required. + RESPONSE_OUTPUT_TEXT_DELTA.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the text delta was added to. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the text delta was added to. Required.""" + content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the content part that the text delta was added to. Required.""" + delta: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text delta that was added. Required.""" + logprobs: list["_models.ResponseLogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The log probabilities of the tokens in the delta. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + content_index: int, + delta: str, + sequence_number: int, + logprobs: list["_models.ResponseLogProb"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DELTA # type: ignore + + +class ResponseTextDoneEvent(ResponseStreamEvent, discriminator="response.output_text.done"): + """Emitted when text content is finalized. + + :ivar type: The type of the event. Always ``response.output_text.done``. Required. + RESPONSE_OUTPUT_TEXT_DONE. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_OUTPUT_TEXT_DONE + :ivar item_id: The ID of the output item that the text content is finalized. Required. + :vartype item_id: str + :ivar output_index: The index of the output item that the text content is finalized. Required. + :vartype output_index: int + :ivar content_index: The index of the content part that the text content is finalized. + Required. + :vartype content_index: int + :ivar text: The text content that is finalized. Required. + :vartype text: str + :ivar sequence_number: The sequence number for this event. Required. + :vartype sequence_number: int + :ivar logprobs: The log probabilities of the tokens in the delta. Required. + :vartype logprobs: list[~azure.ai.agentserver.responses.sdk.models.models.ResponseLogProb] + """ + + type: Literal[ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DONE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.output_text.done``. Required. + RESPONSE_OUTPUT_TEXT_DONE.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the output item that the text content is finalized. Required.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the text content is finalized. 
Required.""" + content_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the content part that the text content is finalized. Required.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text content that is finalized. Required.""" + logprobs: list["_models.ResponseLogProb"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The log probabilities of the tokens in the delta. Required.""" + + @overload + def __init__( + self, + *, + item_id: str, + output_index: int, + content_index: int, + text: str, + sequence_number: int, + logprobs: list["_models.ResponseLogProb"], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_OUTPUT_TEXT_DONE # type: ignore + + +class ResponseTextParam(_Model): + """Configuration options for a text response from the model. Can be plain + text or structured JSON data. Learn more: + + * [Text inputs and outputs](/docs/guides/text) + * [Structured Outputs](/docs/guides/structured-outputs). 
+ + :ivar format: + :vartype format: + ~azure.ai.agentserver.responses.sdk.models.models.TextResponseFormatConfiguration + :ivar verbosity: Is one of the following types: Literal["low"], Literal["medium"], + Literal["high"] + :vartype verbosity: str or str or str + """ + + format: Optional["_models.TextResponseFormatConfiguration"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + verbosity: Optional[Literal["low", "medium", "high"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Is one of the following types: Literal[\"low\"], Literal[\"medium\"], Literal[\"high\"]""" + + @overload + def __init__( + self, + *, + format: Optional["_models.TextResponseFormatConfiguration"] = None, + verbosity: Optional[Literal["low", "medium", "high"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseUsage(_Model): + """Represents token usage details including input tokens, output tokens, a breakdown of output + tokens, and the total tokens used. + + :ivar input_tokens: The number of input tokens. Required. + :vartype input_tokens: int + :ivar input_tokens_details: A detailed breakdown of the input tokens. Required. + :vartype input_tokens_details: + ~azure.ai.agentserver.responses.sdk.models.models.ResponseUsageInputTokensDetails + :ivar output_tokens: The number of output tokens. Required. + :vartype output_tokens: int + :ivar output_tokens_details: A detailed breakdown of the output tokens. Required. + :vartype output_tokens_details: + ~azure.ai.agentserver.responses.sdk.models.models.ResponseUsageOutputTokensDetails + :ivar total_tokens: The total number of tokens used. Required. 
+ :vartype total_tokens: int + """ + + input_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The number of input tokens. Required.""" + input_tokens_details: "_models.ResponseUsageInputTokensDetails" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """A detailed breakdown of the input tokens. Required.""" + output_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The number of output tokens. Required.""" + output_tokens_details: "_models.ResponseUsageOutputTokensDetails" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """A detailed breakdown of the output tokens. Required.""" + total_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The total number of tokens used. Required.""" + + @overload + def __init__( + self, + *, + input_tokens: int, + input_tokens_details: "_models.ResponseUsageInputTokensDetails", + output_tokens: int, + output_tokens_details: "_models.ResponseUsageOutputTokensDetails", + total_tokens: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseUsageInputTokensDetails(_Model): + """ResponseUsageInputTokensDetails. + + :ivar cached_tokens: Required. + :vartype cached_tokens: int + """ + + cached_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + cached_tokens: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseUsageOutputTokensDetails(_Model): + """ResponseUsageOutputTokensDetails. + + :ivar reasoning_tokens: Required. + :vartype reasoning_tokens: int + """ + + reasoning_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + reasoning_tokens: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ResponseWebSearchCallCompletedEvent(ResponseStreamEvent, discriminator="response.web_search_call.completed"): + """Emitted when a web search call is completed. + + :ivar type: The type of the event. Always ``response.web_search_call.completed``. Required. + RESPONSE_WEB_SEARCH_CALL_COMPLETED. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_WEB_SEARCH_CALL_COMPLETED + :ivar output_index: The index of the output item that the web search call is associated with. + Required. + :vartype output_index: int + :ivar item_id: Unique ID for the output item associated with the web search call. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of the web search call being processed. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_COMPLETED] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.web_search_call.completed``. Required. 
+ RESPONSE_WEB_SEARCH_CALL_COMPLETED.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the web search call is associated with. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique ID for the output item associated with the web search call. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_COMPLETED # type: ignore + + +class ResponseWebSearchCallInProgressEvent(ResponseStreamEvent, discriminator="response.web_search_call.in_progress"): + """Emitted when a web search call is initiated. + + :ivar type: The type of the event. Always ``response.web_search_call.in_progress``. Required. + RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS + :ivar output_index: The index of the output item that the web search call is associated with. + Required. + :vartype output_index: int + :ivar item_id: Unique ID for the output item associated with the web search call. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of the web search call being processed. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.web_search_call.in_progress``. Required. 
+ RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the web search call is associated with. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique ID for the output item associated with the web search call. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS # type: ignore + + +class ResponseWebSearchCallSearchingEvent(ResponseStreamEvent, discriminator="response.web_search_call.searching"): + """Emitted when a web search call is executing. + + :ivar type: The type of the event. Always ``response.web_search_call.searching``. Required. + RESPONSE_WEB_SEARCH_CALL_SEARCHING. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.RESPONSE_WEB_SEARCH_CALL_SEARCHING + :ivar output_index: The index of the output item that the web search call is associated with. + Required. + :vartype output_index: int + :ivar item_id: Unique ID for the output item associated with the web search call. Required. + :vartype item_id: str + :ivar sequence_number: The sequence number of the web search call being processed. Required. + :vartype sequence_number: int + """ + + type: Literal[ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_SEARCHING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the event. Always ``response.web_search_call.searching``. Required. 
+ RESPONSE_WEB_SEARCH_CALL_SEARCHING.""" + output_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the output item that the web search call is associated with. Required.""" + item_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique ID for the output item associated with the web search call. Required.""" + + @overload + def __init__( + self, + *, + output_index: int, + item_id: str, + sequence_number: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ResponseStreamEventType.RESPONSE_WEB_SEARCH_CALL_SEARCHING # type: ignore + + +class ScreenshotParam(ComputerAction, discriminator="screenshot"): + """Screenshot. + + :ivar type: Specifies the event type. For a screenshot action, this property is always set to + ``screenshot``. Required. SCREENSHOT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SCREENSHOT + """ + + type: Literal[ComputerActionType.SCREENSHOT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a screenshot action, this property is always set to + ``screenshot``. Required. SCREENSHOT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.SCREENSHOT # type: ignore + + +class ScrollParam(ComputerAction, discriminator="scroll"): + """Scroll. + + :ivar type: Specifies the event type. 
For a scroll action, this property is always set to + ``scroll``. Required. SCROLL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SCROLL + :ivar x: The x-coordinate where the scroll occurred. Required. + :vartype x: int + :ivar y: The y-coordinate where the scroll occurred. Required. + :vartype y: int + :ivar scroll_x: The horizontal scroll distance. Required. + :vartype scroll_x: int + :ivar scroll_y: The vertical scroll distance. Required. + :vartype scroll_y: int + """ + + type: Literal[ComputerActionType.SCROLL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a scroll action, this property is always set to ``scroll``. + Required. SCROLL.""" + x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The x-coordinate where the scroll occurred. Required.""" + y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The y-coordinate where the scroll occurred. Required.""" + scroll_x: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The horizontal scroll distance. Required.""" + scroll_y: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The vertical scroll distance. Required.""" + + @overload + def __init__( + self, + *, + x: int, + y: int, + scroll_x: int, + scroll_y: int, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.SCROLL # type: ignore + + +class SharepointGroundingToolCall(OutputItem, discriminator="sharepoint_grounding_preview_call"): + """A SharePoint grounding tool call. + + :ivar created_by: The information about the creator of the item. 
Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. SHAREPOINT_GROUNDING_PREVIEW_CALL. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.SHAREPOINT_GROUNDING_PREVIEW_CALL + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar arguments: A JSON string of the arguments to pass to the tool. Required. + :vartype arguments: str + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". + :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.SHAREPOINT_GROUNDING_PREVIEW_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. SHAREPOINT_GROUNDING_PREVIEW_CALL.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + arguments: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A JSON string of the arguments to pass to the tool. Required.""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. 
Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + arguments: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.SHAREPOINT_GROUNDING_PREVIEW_CALL # type: ignore + + +class SharepointGroundingToolCallOutput(OutputItem, discriminator="sharepoint_grounding_preview_call_output"): + """The output of a SharePoint grounding tool call. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT + :ivar call_id: The unique ID of the tool call generated by the model. Required. + :vartype call_id: str + :ivar output: The output from the SharePoint grounding tool call. Is one of the following + types: {str: Any}, str, [Any] + :vartype output: dict[str, any] or str or list[any] + :ivar status: The status of the tool call. Required. Known values are: "in_progress", + "completed", "incomplete", and "failed". 
+ :vartype status: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolCallStatus + """ + + type: Literal[OutputItemType.SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT.""" + call_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The unique ID of the tool call generated by the model. Required.""" + output: Optional["_types.ToolCallOutputContent"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The output from the SharePoint grounding tool call. Is one of the following types: {str: Any}, + str, [Any]""" + status: Union[str, "_models.ToolCallStatus"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The status of the tool call. Required. Known values are: \"in_progress\", \"completed\", + \"incomplete\", and \"failed\".""" + + @overload + def __init__( + self, + *, + call_id: str, + status: Union[str, "_models.ToolCallStatus"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + output: Optional["_types.ToolCallOutputContent"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.SHAREPOINT_GROUNDING_PREVIEW_CALL_OUTPUT # type: ignore + + +class SharepointGroundingToolParameters(_Model): + """The sharepoint grounding tool parameters. + + :ivar project_connections: The project connections attached to this tool. There can be a + maximum of 1 connection resource attached to the tool. 
+ :vartype project_connections: + list[~azure.ai.agentserver.responses.sdk.models.models.ToolProjectConnection] + """ + + project_connections: Optional[list["_models.ToolProjectConnection"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The project connections attached to this tool. There can be a maximum of 1 connection resource + attached to the tool.""" + + @overload + def __init__( + self, + *, + project_connections: Optional[list["_models.ToolProjectConnection"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class SharepointPreviewTool(Tool, discriminator="sharepoint_grounding_preview"): + """The input definition information for a sharepoint tool as used to configure an agent. + + :ivar type: The object type, which is always 'sharepoint_grounding_preview'. Required. + SHAREPOINT_GROUNDING_PREVIEW. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.SHAREPOINT_GROUNDING_PREVIEW + :ivar sharepoint_grounding_preview: The sharepoint grounding tool parameters. Required. + :vartype sharepoint_grounding_preview: + ~azure.ai.agentserver.responses.sdk.models.models.SharepointGroundingToolParameters + """ + + type: Literal[ToolType.SHAREPOINT_GROUNDING_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'sharepoint_grounding_preview'. Required. + SHAREPOINT_GROUNDING_PREVIEW.""" + sharepoint_grounding_preview: "_models.SharepointGroundingToolParameters" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The sharepoint grounding tool parameters. 
Required.""" + + @overload + def __init__( + self, + *, + sharepoint_grounding_preview: "_models.SharepointGroundingToolParameters", + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.SHAREPOINT_GROUNDING_PREVIEW # type: ignore + + +class SkillReferenceParam(ContainerSkill, discriminator="skill_reference"): + """SkillReferenceParam. + + :ivar type: References a skill created with the /v1/skills endpoint. Required. SKILL_REFERENCE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SKILL_REFERENCE + :ivar skill_id: The ID of the referenced skill. Required. + :vartype skill_id: str + :ivar version: Optional skill version. Use a positive integer or 'latest'. Omit for default. + :vartype version: str + """ + + type: Literal[ContainerSkillType.SKILL_REFERENCE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """References a skill created with the /v1/skills endpoint. Required. SKILL_REFERENCE.""" + skill_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the referenced skill. Required.""" + version: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional skill version. Use a positive integer or 'latest'. Omit for default.""" + + @overload + def __init__( + self, + *, + skill_id: str, + version: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ContainerSkillType.SKILL_REFERENCE # type: ignore + + +class ToolChoiceParam(_Model): + """How the model should select which tool (or tools) to use when generating a response. See the + ``tools`` parameter to see how to specify which tools the model can call. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + ToolChoiceAllowed, SpecificApplyPatchParam, ToolChoiceCodeInterpreter, ToolChoiceComputer, + ToolChoiceComputerUse, ToolChoiceComputerUsePreview, ToolChoiceCustom, ToolChoiceFileSearch, + ToolChoiceFunction, ToolChoiceImageGeneration, ToolChoiceMCP, SpecificFunctionShellParam, + ToolChoiceWebSearchPreview, ToolChoiceWebSearchPreview20250311 + + :ivar type: Required. Known values are: "allowed_tools", "function", "mcp", "custom", + "apply_patch", "shell", "file_search", "web_search_preview", "computer_use_preview", + "web_search_preview_2025_03_11", "image_generation", "code_interpreter", "computer", and + "computer_use". + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ToolChoiceParamType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"allowed_tools\", \"function\", \"mcp\", \"custom\", + \"apply_patch\", \"shell\", \"file_search\", \"web_search_preview\", \"computer_use_preview\", + \"web_search_preview_2025_03_11\", \"image_generation\", \"code_interpreter\", \"computer\", + and \"computer_use\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class SpecificApplyPatchParam(ToolChoiceParam, discriminator="apply_patch"): + """Specific apply patch tool choice. + + :ivar type: The tool to call. Always ``apply_patch``. Required. APPLY_PATCH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.APPLY_PATCH + """ + + type: Literal[ToolChoiceParamType.APPLY_PATCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The tool to call. Always ``apply_patch``. Required. APPLY_PATCH.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.APPLY_PATCH # type: ignore + + +class SpecificFunctionShellParam(ToolChoiceParam, discriminator="shell"): + """Specific shell tool choice. + + :ivar type: The tool to call. Always ``shell``. Required. SHELL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SHELL + """ + + type: Literal[ToolChoiceParamType.SHELL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The tool to call. Always ``shell``. Required. SHELL.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.SHELL # type: ignore + + +class StructuredOutputDefinition(_Model): + """A structured output that can be produced by the agent. + + :ivar name: The name of the structured output. Required. + :vartype name: str + :ivar description: A description of the output to emit. Used by the model to determine when to + emit the output. Required. + :vartype description: str + :ivar schema: The JSON schema for the structured output. Required. + :vartype schema: dict[str, any] + :ivar strict: Whether to enforce strict validation. Default ``true``. Required. + :vartype strict: bool + """ + + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the structured output. Required.""" + description: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A description of the output to emit. Used by the model to determine when to emit the output. + Required.""" + schema: dict[str, Any] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The JSON schema for the structured output. Required.""" + strict: bool = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Whether to enforce strict validation. Default ``true``. Required.""" + + @overload + def __init__( + self, + *, + name: str, + description: str, + schema: dict[str, Any], + strict: bool, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class StructuredOutputsOutputItem(OutputItem, discriminator="structured_outputs"): + """StructuredOutputsOutputItem. 
+ + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. + :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. STRUCTURED_OUTPUTS. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.STRUCTURED_OUTPUTS + :ivar output: The structured output captured during the response. Required. + :vartype output: any + """ + + type: Literal[OutputItemType.STRUCTURED_OUTPUTS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. STRUCTURED_OUTPUTS.""" + output: Any = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The structured output captured during the response. Required.""" + + @overload + def __init__( + self, + *, + output: Any, + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.STRUCTURED_OUTPUTS # type: ignore + + +class SummaryTextContent(MessageContent, discriminator="summary_text"): + """Summary text. + + :ivar type: The type of the object. Always ``summary_text``. Required. SUMMARY_TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.SUMMARY_TEXT + :ivar text: A summary of the reasoning output from the model so far. Required. 
+ :vartype text: str + """ + + type: Literal[MessageContentType.SUMMARY_TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the object. Always ``summary_text``. Required. SUMMARY_TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A summary of the reasoning output from the model so far. Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.SUMMARY_TEXT # type: ignore + + +class TextContent(MessageContent, discriminator="text"): + """Text Content. + + :ivar type: Required. TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TEXT + :ivar text: Required. + :vartype text: str + """ + + type: Literal[MessageContentType.TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. TEXT.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = MessageContentType.TEXT # type: ignore + + +class TextResponseFormatConfiguration(_Model): + """An object specifying the format that the model must output. Configuring ``{ "type": + "json_schema" }`` enables Structured Outputs, which ensures the model will match your supplied + JSON schema. 
Learn more in the `Structured Outputs guide `_. + The default format is ``{ "type": "text" }`` with no additional options. **Not recommended for + gpt-4o and newer models:** Setting to ``{ "type": "json_object" }`` enables the older JSON + mode, which ensures the message the model generates is valid JSON. Using ``json_schema`` is + preferred for models that support it. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + TextResponseFormatConfigurationResponseFormatJsonObject, TextResponseFormatJsonSchema, + TextResponseFormatConfigurationResponseFormatText + + :ivar type: Required. Known values are: "text", "json_schema", and "json_object". + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.TextResponseFormatConfigurationType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"text\", \"json_schema\", and \"json_object\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class TextResponseFormatConfigurationResponseFormatJsonObject( + TextResponseFormatConfiguration, discriminator="json_object" +): # pylint: disable=name-too-long + """JSON object. + + :ivar type: The type of response format being defined. Always ``json_object``. Required. + JSON_OBJECT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.JSON_OBJECT + """ + + type: Literal[TextResponseFormatConfigurationType.JSON_OBJECT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of response format being defined. 
Always ``json_object``. Required. JSON_OBJECT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = TextResponseFormatConfigurationType.JSON_OBJECT # type: ignore + + +class TextResponseFormatConfigurationResponseFormatText( + TextResponseFormatConfiguration, discriminator="text" +): # pylint: disable=name-too-long + """Text. + + :ivar type: The type of response format being defined. Always ``text``. Required. TEXT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TEXT + """ + + type: Literal[TextResponseFormatConfigurationType.TEXT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of response format being defined. Always ``text``. Required. TEXT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = TextResponseFormatConfigurationType.TEXT # type: ignore + + +class TextResponseFormatJsonSchema(TextResponseFormatConfiguration, discriminator="json_schema"): + """JSON schema. + + :ivar type: The type of response format being defined. Always ``json_schema``. Required. + JSON_SCHEMA. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.JSON_SCHEMA + :ivar description: A description of what the response format is for, used by the model to + determine how to respond in the format. + :vartype description: str + :ivar name: The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and + dashes, with a maximum length of 64. Required. + :vartype name: str + :ivar schema: Required. + :vartype schema: + ~azure.ai.agentserver.responses.sdk.models.models.ResponseFormatJsonSchemaSchema + :ivar strict: + :vartype strict: bool + """ + + type: Literal[TextResponseFormatConfigurationType.JSON_SCHEMA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of response format being defined. Always ``json_schema``. Required. JSON_SCHEMA.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A description of what the response format is for, used by the model to determine how to respond + in the format.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with + a maximum length of 64. Required.""" + schema: "_models.ResponseFormatJsonSchemaSchema" = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Required.""" + strict: Optional[bool] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + name: str, + schema: "_models.ResponseFormatJsonSchemaSchema", + description: Optional[str] = None, + strict: Optional[bool] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = TextResponseFormatConfigurationType.JSON_SCHEMA # type: ignore + + +class ToolChoiceAllowed(ToolChoiceParam, discriminator="allowed_tools"): + """Allowed tools. + + :ivar type: Allowed tool configuration type. Always ``allowed_tools``. Required. ALLOWED_TOOLS. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.ALLOWED_TOOLS + :ivar mode: Constrains the tools available to the model to a pre-defined set. ``auto`` allows + the model to pick from among the allowed tools and generate a message. ``required`` requires + the model to call one or more of the allowed tools. Required. Is either a Literal["auto"] type + or a Literal["required"] type. + :vartype mode: str or str + :ivar tools: A list of tool definitions that the model should be allowed to call. For the + Responses API, the list of tool definitions might look like: + + .. code-block:: json + + [ + { "type": "function", "name": "get_weather" }, + { "type": "mcp", "server_label": "deepwiki" }, + { "type": "image_generation" } + ]. Required. + :vartype tools: list[dict[str, any]] + """ + + type: Literal[ToolChoiceParamType.ALLOWED_TOOLS] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Allowed tool configuration type. Always ``allowed_tools``. Required. ALLOWED_TOOLS.""" + mode: Literal["auto", "required"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Constrains the tools available to the model to a pre-defined set. ``auto`` allows the model to + pick from among the allowed tools and generate a message. ``required`` requires the model to + call one or more of the allowed tools. Required. Is either a Literal[\"auto\"] type or a + Literal[\"required\"] type.""" + tools: list[dict[str, Any]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A list of tool definitions that the model should be allowed to call. For the Responses API, the + list of tool definitions might look like: + + .. code-block:: json + + [ + { \"type\": \"function\", \"name\": \"get_weather\" }, + { \"type\": \"mcp\", \"server_label\": \"deepwiki\" }, + { \"type\": \"image_generation\" } + ]. 
Required.""" + + @overload + def __init__( + self, + *, + mode: Literal["auto", "required"], + tools: list[dict[str, Any]], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.ALLOWED_TOOLS # type: ignore + + +class ToolChoiceCodeInterpreter(ToolChoiceParam, discriminator="code_interpreter"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. CODE_INTERPRETER. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CODE_INTERPRETER + """ + + type: Literal[ToolChoiceParamType.CODE_INTERPRETER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. CODE_INTERPRETER.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.CODE_INTERPRETER # type: ignore + + +class ToolChoiceComputer(ToolChoiceParam, discriminator="computer"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. COMPUTER. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER + """ + + type: Literal[ToolChoiceParamType.COMPUTER] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. COMPUTER.""" + + @overload + def __init__( + self, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.COMPUTER # type: ignore + + +class ToolChoiceComputerUse(ToolChoiceParam, discriminator="computer_use"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. COMPUTER_USE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_USE + """ + + type: Literal[ToolChoiceParamType.COMPUTER_USE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. COMPUTER_USE.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.COMPUTER_USE # type: ignore + + +class ToolChoiceComputerUsePreview(ToolChoiceParam, discriminator="computer_use_preview"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. COMPUTER_USE_PREVIEW. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.COMPUTER_USE_PREVIEW + """ + + type: Literal[ToolChoiceParamType.COMPUTER_USE_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. COMPUTER_USE_PREVIEW.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.COMPUTER_USE_PREVIEW # type: ignore + + +class ToolChoiceCustom(ToolChoiceParam, discriminator="custom"): + """Custom tool. + + :ivar type: For custom tool calling, the type is always ``custom``. Required. CUSTOM. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.CUSTOM + :ivar name: The name of the custom tool to call. Required. + :vartype name: str + """ + + type: Literal[ToolChoiceParamType.CUSTOM] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """For custom tool calling, the type is always ``custom``. Required. CUSTOM.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the custom tool to call. Required.""" + + @overload + def __init__( + self, + *, + name: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.CUSTOM # type: ignore + + +class ToolChoiceFileSearch(ToolChoiceParam, discriminator="file_search"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. FILE_SEARCH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FILE_SEARCH + """ + + type: Literal[ToolChoiceParamType.FILE_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. FILE_SEARCH.""" + + @overload + def __init__( + self, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.FILE_SEARCH # type: ignore + + +class ToolChoiceFunction(ToolChoiceParam, discriminator="function"): + """Function tool. + + :ivar type: For function calling, the type is always ``function``. Required. FUNCTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.FUNCTION + :ivar name: The name of the function to call. Required. + :vartype name: str + """ + + type: Literal[ToolChoiceParamType.FUNCTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """For function calling, the type is always ``function``. Required. FUNCTION.""" + name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name of the function to call. Required.""" + + @overload + def __init__( + self, + *, + name: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.FUNCTION # type: ignore + + +class ToolChoiceImageGeneration(ToolChoiceParam, discriminator="image_generation"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. IMAGE_GENERATION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.IMAGE_GENERATION + """ + + type: Literal[ToolChoiceParamType.IMAGE_GENERATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. 
IMAGE_GENERATION.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.IMAGE_GENERATION # type: ignore + + +class ToolChoiceMCP(ToolChoiceParam, discriminator="mcp"): + """MCP tool. + + :ivar type: For MCP tools, the type is always ``mcp``. Required. MCP. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.MCP + :ivar server_label: The label of the MCP server to use. Required. + :vartype server_label: str + :ivar name: + :vartype name: str + """ + + type: Literal[ToolChoiceParamType.MCP] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """For MCP tools, the type is always ``mcp``. Required. MCP.""" + server_label: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The label of the MCP server to use. Required.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + server_label: str, + name: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.MCP # type: ignore + + +class ToolChoiceWebSearchPreview(ToolChoiceParam, discriminator="web_search_preview"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. WEB_SEARCH_PREVIEW. 
+ :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH_PREVIEW + """ + + type: Literal[ToolChoiceParamType.WEB_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. WEB_SEARCH_PREVIEW.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.WEB_SEARCH_PREVIEW # type: ignore + + +class ToolChoiceWebSearchPreview20250311(ToolChoiceParam, discriminator="web_search_preview_2025_03_11"): + """Indicates that the model should use a built-in tool to generate a response. `Learn more about + built-in tools `_. + + :ivar type: Required. WEB_SEARCH_PREVIEW2025_03_11. + :vartype type: str or + ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH_PREVIEW2025_03_11 + """ + + type: Literal[ToolChoiceParamType.WEB_SEARCH_PREVIEW2025_03_11] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. WEB_SEARCH_PREVIEW2025_03_11.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolChoiceParamType.WEB_SEARCH_PREVIEW2025_03_11 # type: ignore + + +class ToolProjectConnection(_Model): + """A project connection resource. + + :ivar project_connection_id: A project connection in a ToolProjectConnectionList attached to + this tool. Required. 
+ :vartype project_connection_id: str + """ + + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """A project connection in a ToolProjectConnectionList attached to this tool. Required.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class ToolSearchCallItemParam(Item, discriminator="tool_search_call"): + """ToolSearchCallItemParam. + + :ivar id: + :vartype id: str + :ivar call_id: + :vartype call_id: str + :ivar type: The item type. Always ``tool_search_call``. Required. TOOL_SEARCH_CALL. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH_CALL + :ivar execution: Whether tool search was executed by the server or by the client. Known values + are: "server" and "client". + :vartype execution: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType + :ivar arguments: The arguments supplied to the tool search call. Required. + :vartype arguments: ~azure.ai.agentserver.responses.sdk.models.models.EmptyModelParam + :ivar status: Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallItemStatus + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + type: Literal[ItemType.TOOL_SEARCH_CALL] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The item type. Always ``tool_search_call``. Required. 
TOOL_SEARCH_CALL.""" + execution: Optional[Union[str, "_models.ToolSearchExecutionType"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Whether tool search was executed by the server or by the client. Known values are: \"server\" + and \"client\".""" + arguments: "_models.EmptyModelParam" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The arguments supplied to the tool search call. Required.""" + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + + @overload + def __init__( + self, + *, + arguments: "_models.EmptyModelParam", + id: Optional[str] = None, # pylint: disable=redefined-builtin + call_id: Optional[str] = None, + execution: Optional[Union[str, "_models.ToolSearchExecutionType"]] = None, + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.TOOL_SEARCH_CALL # type: ignore + + +class ToolSearchOutputItemParam(Item, discriminator="tool_search_output"): + """ToolSearchOutputItemParam. + + :ivar id: + :vartype id: str + :ivar call_id: + :vartype call_id: str + :ivar type: The item type. Always ``tool_search_output``. Required. TOOL_SEARCH_OUTPUT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH_OUTPUT + :ivar execution: Whether tool search was executed by the server or by the client. Known values + are: "server" and "client". 
+ :vartype execution: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType + :ivar tools: The loaded tool definitions returned by the tool search output. Required. + :vartype tools: list[~azure.ai.agentserver.responses.sdk.models.models.Tool] + :ivar status: Known values are: "in_progress", "completed", and "incomplete". + :vartype status: str or + ~azure.ai.agentserver.responses.sdk.models.models.FunctionCallItemStatus + """ + + id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + call_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + type: Literal[ItemType.TOOL_SEARCH_OUTPUT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The item type. Always ``tool_search_output``. Required. TOOL_SEARCH_OUTPUT.""" + execution: Optional[Union[str, "_models.ToolSearchExecutionType"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Whether tool search was executed by the server or by the client. Known values are: \"server\" + and \"client\".""" + tools: list["_models.Tool"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The loaded tool definitions returned by the tool search output. Required.""" + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Known values are: \"in_progress\", \"completed\", and \"incomplete\".""" + + @overload + def __init__( + self, + *, + tools: list["_models.Tool"], + id: Optional[str] = None, # pylint: disable=redefined-builtin + call_id: Optional[str] = None, + execution: Optional[Union[str, "_models.ToolSearchExecutionType"]] = None, + status: Optional[Union[str, "_models.FunctionCallItemStatus"]] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ItemType.TOOL_SEARCH_OUTPUT # type: ignore + + +class ToolSearchToolParam(Tool, discriminator="tool_search"): + """Tool search tool. + + :ivar type: The type of the tool. Always ``tool_search``. Required. TOOL_SEARCH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TOOL_SEARCH + :ivar execution: Whether tool search is executed by the server or by the client. Known values + are: "server" and "client". + :vartype execution: str or + ~azure.ai.agentserver.responses.sdk.models.models.ToolSearchExecutionType + :ivar description: + :vartype description: str + :ivar parameters: + :vartype parameters: ~azure.ai.agentserver.responses.sdk.models.models.EmptyModelParam + """ + + type: Literal[ToolType.TOOL_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the tool. Always ``tool_search``. Required. TOOL_SEARCH.""" + execution: Optional[Union[str, "_models.ToolSearchExecutionType"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Whether tool search is executed by the server or by the client. Known values are: \"server\" + and \"client\".""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + parameters: Optional["_models.EmptyModelParam"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + execution: Optional[Union[str, "_models.ToolSearchExecutionType"]] = None, + description: Optional[str] = None, + parameters: Optional["_models.EmptyModelParam"] = None, + ) -> None: ... 
+ + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.TOOL_SEARCH # type: ignore + + +class TopLogProb(_Model): + """Top log probability. + + :ivar token: Required. + :vartype token: str + :ivar logprob: Required. + :vartype logprob: int + :ivar bytes: Required. + :vartype bytes: list[int] + """ + + token: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + logprob: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + bytes: list[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + token: str, + logprob: int, + bytes: list[int], + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class TypeParam(ComputerAction, discriminator="type"): + """Type. + + :ivar type: Specifies the event type. For a type action, this property is always set to + ``type``. Required. TYPE. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.TYPE + :ivar text: The text to type. Required. + :vartype text: str + """ + + type: Literal[ComputerActionType.TYPE] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a type action, this property is always set to ``type``. Required. + TYPE.""" + text: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The text to type. 
Required.""" + + @overload + def __init__( + self, + *, + text: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.TYPE # type: ignore + + +class UrlCitationBody(Annotation, discriminator="url_citation"): + """URL citation. + + :ivar type: The type of the URL citation. Always ``url_citation``. Required. URL_CITATION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.URL_CITATION + :ivar url: The URL of the web resource. Required. + :vartype url: str + :ivar start_index: The index of the first character of the URL citation in the message. + Required. + :vartype start_index: int + :ivar end_index: The index of the last character of the URL citation in the message. Required. + :vartype end_index: int + :ivar title: The title of the web resource. Required. + :vartype title: str + """ + + type: Literal[AnnotationType.URL_CITATION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the URL citation. Always ``url_citation``. Required. URL_CITATION.""" + url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the web resource. Required.""" + start_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the first character of the URL citation in the message. Required.""" + end_index: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The index of the last character of the URL citation in the message. Required.""" + title: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The title of the web resource. 
Required.""" + + @overload + def __init__( + self, + *, + url: str, + start_index: int, + end_index: int, + title: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = AnnotationType.URL_CITATION # type: ignore + + +class UserProfileMemoryItem(MemoryItem, discriminator="user_profile"): + """A memory item specifically containing user profile information extracted from conversations, + such as preferences, interests, and personal details. + + :ivar memory_id: The unique ID of the memory item. Required. + :vartype memory_id: str + :ivar updated_at: The last update time of the memory item. Required. + :vartype updated_at: ~datetime.datetime + :ivar scope: The namespace that logically groups and isolates memories, such as a user ID. + Required. + :vartype scope: str + :ivar content: The content of the memory. Required. + :vartype content: str + :ivar kind: The kind of the memory item. Required. User profile information extracted from + conversations. + :vartype kind: str or ~azure.ai.agentserver.responses.sdk.models.models.USER_PROFILE + """ + + kind: Literal[MemoryItemKind.USER_PROFILE] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The kind of the memory item. Required. User profile information extracted from conversations.""" + + @overload + def __init__( + self, + *, + memory_id: str, + updated_at: datetime.datetime, + scope: str, + content: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.kind = MemoryItemKind.USER_PROFILE # type: ignore + + +class VectorStoreFileAttributes(_Model): + """Set of 16 key-value pairs that can be attached to an object. This can be useful for storing + additional information about the object in a structured format, and querying for objects via + API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are + strings with a maximum length of 512 characters, booleans, or numbers. + + """ + + +class WaitParam(ComputerAction, discriminator="wait"): + """Wait. + + :ivar type: Specifies the event type. For a wait action, this property is always set to + ``wait``. Required. WAIT. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WAIT + """ + + type: Literal[ComputerActionType.WAIT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Specifies the event type. For a wait action, this property is always set to ``wait``. Required. + WAIT.""" + + @overload + def __init__( + self, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ComputerActionType.WAIT # type: ignore + + +class WebSearchActionFind(_Model): + """Find action. + + :ivar type: The action type. Required. Default value is "find_in_page". + :vartype type: str + :ivar url: The URL of the page searched for the pattern. Required. + :vartype url: str + :ivar pattern: The pattern or text to search for within the page. Required. + :vartype pattern: str + """ + + type: Literal["find_in_page"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The action type. 
Required. Default value is \"find_in_page\".""" + url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL of the page searched for the pattern. Required.""" + pattern: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The pattern or text to search for within the page. Required.""" + + @overload + def __init__( + self, + *, + url: str, + pattern: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["find_in_page"] = "find_in_page" + + +class WebSearchActionOpenPage(_Model): + """Open page action. + + :ivar type: The action type. Required. Default value is "open_page". + :vartype type: str + :ivar url: The URL opened by the model. + :vartype url: str + """ + + type: Literal["open_page"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The action type. Required. Default value is \"open_page\".""" + url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The URL opened by the model.""" + + @overload + def __init__( + self, + *, + url: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["open_page"] = "open_page" + + +class WebSearchActionSearch(_Model): + """Search action. + + :ivar type: The action type. Required. Default value is "search". + :vartype type: str + :ivar query: [DEPRECATED] The search query. Required. + :vartype query: str + :ivar queries: Search queries. 
+ :vartype queries: list[str] + :ivar sources: Web search sources. + :vartype sources: + list[~azure.ai.agentserver.responses.sdk.models.models.WebSearchActionSearchSources] + """ + + type: Literal["search"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The action type. Required. Default value is \"search\".""" + query: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """[DEPRECATED] The search query. Required.""" + queries: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Search queries.""" + sources: Optional[list["_models.WebSearchActionSearchSources"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Web search sources.""" + + @overload + def __init__( + self, + *, + query: str, + queries: Optional[list[str]] = None, + sources: Optional[list["_models.WebSearchActionSearchSources"]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["search"] = "search" + + +class WebSearchActionSearchSources(_Model): + """WebSearchActionSearchSources. + + :ivar type: Required. Default value is "url". + :vartype type: str + :ivar url: Required. + :vartype url: str + """ + + type: Literal["url"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required. Default value is \"url\".""" + url: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Required.""" + + @overload + def __init__( + self, + *, + url: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["approximate"] = "approximate" + + +class WebSearchApproximateLocation(_Model): + """Web search approximate location. + + :ivar type: The type of location approximation. Always ``approximate``. Required. Default value + is "approximate". + :vartype type: str + :ivar country: + :vartype country: str + :ivar region: + :vartype region: str + :ivar city: + :vartype city: str + :ivar timezone: + :vartype timezone: str + """ + + type: Literal["approximate"] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The type of location approximation. Always ``approximate``. Required. Default value is + \"approximate\".""" + country: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + region: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + city: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + timezone: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + country: Optional[str] = None, + region: Optional[str] = None, + city: Optional[str] = None, + timezone: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type: Literal["approximate"] = "approximate" + + +class WebSearchConfiguration(_Model): + """A web search configuration for Bing Custom Search. + + :ivar project_connection_id: Project connection ID for grounding with Bing Custom Search. + Required.
+ :vartype project_connection_id: str + :ivar instance_name: The name given to the custom configuration instance. Required. + :vartype instance_name: str + """ + + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Project connection ID for grounding with Bing Custom Search. Required.""" + instance_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The name given to the custom configuration instance. Required.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + instance_name: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class WebSearchPreviewTool(Tool, discriminator="web_search_preview"): + """Web search preview. + + :ivar type: The type of the web search tool. One of ``web_search_preview`` or + ``web_search_preview_2025_03_11``. Required. WEB_SEARCH_PREVIEW. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH_PREVIEW + :ivar user_location: + :vartype user_location: ~azure.ai.agentserver.responses.sdk.models.models.ApproximateLocation + :ivar search_context_size: High level guidance for the amount of context window space to use + for the search. One of ``low``, ``medium``, or ``high``. ``medium`` is the default. Known + values are: "low", "medium", and "high".
+ :vartype search_context_size: str or + ~azure.ai.agentserver.responses.sdk.models.models.SearchContextSize + :ivar search_content_types: + :vartype search_content_types: list[str or + ~azure.ai.agentserver.responses.sdk.models.models.SearchContentType] + """ + + type: Literal[ToolType.WEB_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the web search tool. One of ``web_search_preview`` or + ``web_search_preview_2025_03_11``. Required. WEB_SEARCH_PREVIEW.""" + user_location: Optional["_models.ApproximateLocation"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + search_context_size: Optional[Union[str, "_models.SearchContextSize"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """High level guidance for the amount of context window space to use for the search. One of + ``low``, ``medium``, or ``high``. ``medium`` is the default. Known values are: \"low\", + \"medium\", and \"high\".""" + search_content_types: Optional[list[Union[str, "_models.SearchContentType"]]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + + @overload + def __init__( + self, + *, + user_location: Optional["_models.ApproximateLocation"] = None, + search_context_size: Optional[Union[str, "_models.SearchContextSize"]] = None, + search_content_types: Optional[list[Union[str, "_models.SearchContentType"]]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.WEB_SEARCH_PREVIEW # type: ignore + + +class WebSearchTool(Tool, discriminator="web_search"): + """Web search. + + :ivar type: The type of the web search tool. 
One of ``web_search`` or + ``web_search_2025_08_26``. Required. WEB_SEARCH. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WEB_SEARCH + :ivar filters: + :vartype filters: ~azure.ai.agentserver.responses.sdk.models.models.WebSearchToolFilters + :ivar user_location: + :vartype user_location: + ~azure.ai.agentserver.responses.sdk.models.models.WebSearchApproximateLocation + :ivar search_context_size: High level guidance for the amount of context window space to use + for the search. One of ``low``, ``medium``, or ``high``. ``medium`` is the default. Is one of + the following types: Literal["low"], Literal["medium"], Literal["high"] + :vartype search_context_size: str or str or str + :ivar custom_search_configuration: The project connections attached to this tool. There can be + a maximum of 1 connection resource attached to the tool. + :vartype custom_search_configuration: + ~azure.ai.agentserver.responses.sdk.models.models.WebSearchConfiguration + """ + + type: Literal[ToolType.WEB_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the web search tool. One of ``web_search`` or ``web_search_2025_08_26``. Required. + WEB_SEARCH.""" + filters: Optional["_models.WebSearchToolFilters"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + user_location: Optional["_models.WebSearchApproximateLocation"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + search_context_size: Optional[Literal["low", "medium", "high"]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """High level guidance for the amount of context window space to use for the search. One of + ``low``, ``medium``, or ``high``. ``medium`` is the default. 
Is one of the following types: + Literal[\"low\"], Literal[\"medium\"], Literal[\"high\"]""" + custom_search_configuration: Optional["_models.WebSearchConfiguration"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """The project connections attached to this tool. There can be a maximum of 1 connection resource + attached to the tool.""" + + @overload + def __init__( + self, + *, + filters: Optional["_models.WebSearchToolFilters"] = None, + user_location: Optional["_models.WebSearchApproximateLocation"] = None, + search_context_size: Optional[Literal["low", "medium", "high"]] = None, + custom_search_configuration: Optional["_models.WebSearchConfiguration"] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.WEB_SEARCH # type: ignore + + +class WebSearchToolFilters(_Model): + """WebSearchToolFilters. + + :ivar allowed_domains: + :vartype allowed_domains: list[str] + """ + + allowed_domains: Optional[list[str]] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + + @overload + def __init__( + self, + *, + allowed_domains: Optional[list[str]] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class WorkflowActionOutputItem(OutputItem, discriminator="workflow_action"): + """WorkflowActionOutputItem. + + :ivar created_by: The information about the creator of the item. Is either a CreatedBy type or + a str type. 
+ :vartype created_by: ~azure.ai.agentserver.responses.sdk.models.models.CreatedBy or str + :ivar agent_reference: The agent that created the item. + :vartype agent_reference: ~azure.ai.agentserver.responses.sdk.models.models.AgentReference + :ivar response_id: The response on which the item is created. + :vartype response_id: str + :ivar type: Required. WORKFLOW_ACTION. + :vartype type: str or ~azure.ai.agentserver.responses.sdk.models.models.WORKFLOW_ACTION + :ivar kind: The kind of CSDL action (e.g., 'SetVariable', 'InvokeAzureAgent'). Required. + :vartype kind: str + :ivar action_id: Unique identifier for the action. Required. + :vartype action_id: str + :ivar parent_action_id: ID of the parent action if this is a nested action. + :vartype parent_action_id: str + :ivar previous_action_id: ID of the previous action if this action follows another. + :vartype previous_action_id: str + :ivar status: Status of the action (e.g., 'in_progress', 'completed', 'failed', 'cancelled'). + Required. Is one of the following types: Literal["completed"], Literal["failed"], + Literal["in_progress"], Literal["cancelled"] + :vartype status: str or str or str or str + """ + + type: Literal[OutputItemType.WORKFLOW_ACTION] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """Required. WORKFLOW_ACTION.""" + kind: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The kind of CSDL action (e.g., 'SetVariable', 'InvokeAzureAgent'). Required.""" + action_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Unique identifier for the action. 
Required.""" + parent_action_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """ID of the parent action if this is a nested action.""" + previous_action_id: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """ID of the previous action if this action follows another.""" + status: Literal["completed", "failed", "in_progress", "cancelled"] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """Status of the action (e.g., 'in_progress', 'completed', 'failed', 'cancelled'). Required. Is + one of the following types: Literal[\"completed\"], Literal[\"failed\"], + Literal[\"in_progress\"], Literal[\"cancelled\"]""" + + @overload + def __init__( + self, + *, + kind: str, + action_id: str, + status: Literal["completed", "failed", "in_progress", "cancelled"], + created_by: Optional[Union["_models.CreatedBy", str]] = None, + agent_reference: Optional["_models.AgentReference"] = None, + response_id: Optional[str] = None, + parent_action_id: Optional[str] = None, + previous_action_id: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. 
+ :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = OutputItemType.WORKFLOW_ACTION # type: ignore diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_patch.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_patch.py new file mode 100644 index 000000000000..8e28092abd44 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/models/_patch.py @@ -0,0 +1,46 @@ +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. +# Licensed under the MIT License. See License.txt in the project root for license information. +# -------------------------------------------------------------------------- +"""Hand-written customizations injected into the generated models package. + +This file is copied over the generated ``_patch.py`` inside +``sdk/models/models/`` by ``make generate-models``. Anything listed in +``__all__`` is automatically re-exported by the generated ``__init__.py``. + +Follow our quickstart for examples: https://aka.ms/azsdk/python/dpcodegen/python/customize +""" + +from enum import Enum + +from azure.core import CaseInsensitiveEnumMeta + + +class ResponseIncompleteReason(str, Enum, metaclass=CaseInsensitiveEnumMeta): + """Reason a response finished as incomplete. + + The upstream TypeSpec defines this as an inline literal union + (``"max_output_tokens" | "content_filter"``), so the code generator + emits ``Literal[...]`` instead of a named enum. This hand-written + enum provides a friendlier symbolic constant for SDK consumers. 
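+ + A minimal usage sketch (the ``incomplete_details.reason`` attribute is + assumed here for illustration; match it to the actual ``Response`` model + in use):: + + if response.incomplete_details.reason == ResponseIncompleteReason.MAX_OUTPUT_TOKENS: + ... # e.g. retry with a larger max_output_tokens + + Because this is a ``str`` enum, its members also compare equal to the raw + string values (``"max_output_tokens"`` and ``"content_filter"``).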
+ """ + + MAX_OUTPUT_TOKENS = "max_output_tokens" + """The response was cut short because the maximum output token limit was reached.""" + CONTENT_FILTER = "content_filter" + """The response was cut short because of a content filter.""" + + +__all__: list[str] = [ + "ResponseIncompleteReason", +] + + +def patch_sdk(): + """Do not remove from this file. + + `patch_sdk` is a last resort escape hatch that allows you to do customizations + you can't accomplish using the techniques described in + https://aka.ms/azsdk/python/dpcodegen/python/customize + """ diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/py.typed b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/py.typed new file mode 100644 index 000000000000..e5aff4f83af8 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_generated/sdk/models/py.typed @@ -0,0 +1 @@ +# Marker file for PEP 561. \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_helpers.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_helpers.py new file mode 100644 index 000000000000..74f238829e8e --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/_helpers.py @@ -0,0 +1,275 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Helper functions for CreateResponse and Response model expansion. + +Mirrors .NET Azure.AI.AgentHost.Responses.Contracts/src/Custom/ extensions, +adapted for the Python code generator's native union types. 
+""" + +from __future__ import annotations + +from typing import Any, Mapping, Optional, Union + +from azure.ai.agentserver.responses.models._generated.sdk.models._types import InputParam + +from ._generated import ( + ConversationParam_2, + CreateResponse, + Item, + ItemMessage, + MessageContent, + MessageContentInputTextContent, + MessageRole, + OutputItem, + Response, + ToolChoiceAllowed, + ToolChoiceOptions, + ToolChoiceParam, +) + + +# --------------------------------------------------------------------------- +# Internal utilities for dict-safe field access +# --------------------------------------------------------------------------- + + +def _get_field(obj: Any, field: str, default: Any = None) -> Any: + """Get *field* from a model instance or a plain dict.""" + if isinstance(obj, dict): + return obj.get(field, default) + return getattr(obj, field, default) + + +def _is_type(obj: Any, model_cls: type, type_value: str) -> bool: + """Check whether *obj* is *model_cls* or a dict with matching ``type``.""" + if isinstance(obj, model_cls): + return True + if isinstance(obj, dict): + return obj.get("type") == type_value + return False + + +# --------------------------------------------------------------------------- +# CreateResponse helpers +# --------------------------------------------------------------------------- + + +def get_conversation_id(request: CreateResponse) -> Optional[str]: + """Extract conversation ID from ``CreateResponse.conversation``. + + If conversation is a plain string, returns it directly. + If it is a :class:`ConversationParam_2` object, returns its ``id`` field. + + :param request: The create-response request. + :type request: CreateResponse + :returns: The conversation ID, or ``None`` if no conversation is set. 
+    :rtype: str | None
+    """
+    conv = request.conversation
+    if conv is None:
+        return None
+    if isinstance(conv, str):
+        return conv or None
+    # Model instance or plain dict
+    cid = _get_field(conv, "id")
+    return str(cid) if cid else None
+
+
+def get_input_expanded(request: CreateResponse) -> list[dict]:
+    """Normalize ``CreateResponse.input`` into a list of item dicts.
+
+    - If input is ``None``, returns ``[]``.
+    - If input is a string, wraps it as a single :class:`ItemMessage` with
+      ``role=user`` and :class:`MessageContentInputTextContent`, serialized
+      via ``as_dict()``.
+    - If input is already a list, returns a shallow copy.
+
+    :param request: The create-response request.
+    :type request: CreateResponse
+    :returns: A list of input item dicts.
+    :rtype: list[dict]
+    """
+    inp = request.input
+    if inp is None:
+        return []
+    if isinstance(inp, str):
+        return [
+            ItemMessage(
+                id="",
+                status="completed",
+                role=MessageRole.USER,
+                content=[MessageContentInputTextContent(text=inp)],
+            ).as_dict()
+        ]
+    return list(inp)
+
+
+def get_input_text(request: CreateResponse) -> str:
+    """Extract all text content from ``CreateResponse.input`` as a single string.
+
+    Expands input via :func:`get_input_expanded`, filters for
+    :class:`ItemMessage` items, and joins all
+    :class:`MessageContentInputTextContent` text values with newlines.
+
+    :param request: The create-response request.
+    :type request: CreateResponse
+    :returns: The combined text content, or ``""`` if no text found.
+ :rtype: str + """ + items = get_input_expanded(request) + texts: list[str] = [] + for item in items: + if _is_type(item, ItemMessage, "message"): + for part in _get_field(item, "content") or []: + if _is_type(part, MessageContentInputTextContent, "input_text"): + text = _get_field(part, "text") + if text is not None: + texts.append(text) + return "\n".join(texts) + + +def get_tool_choice_expanded(request: CreateResponse) -> Optional[ToolChoiceParam]: + """Expand ``CreateResponse.tool_choice`` into a typed :class:`ToolChoiceParam`. + + String shorthands (``"auto"``, ``"required"``) are expanded to + :class:`ToolChoiceAllowed` with the corresponding mode. + ``"none"`` returns ``None``. + + :param request: The create-response request. + :type request: CreateResponse + :returns: The typed tool choice, or ``None`` if unset or ``"none"``. + :rtype: ToolChoiceParam | None + :raises ValueError: If the tool_choice value is an unrecognized string. + """ + tc = request.tool_choice + if tc is None: + return None + if isinstance(tc, ToolChoiceParam): + return tc + if isinstance(tc, str): + normalized = tc if not isinstance(tc, ToolChoiceOptions) else tc.value + if normalized in ("auto", "required"): + return ToolChoiceAllowed(mode=normalized, tools=[]) + if normalized == "none": + return None + raise ValueError( + f"Unrecognized tool_choice string value: '{normalized}'. " + "Expected 'auto', 'required', or 'none'." + ) + # dict fallback — wrap in ToolChoiceParam if it has a "type" key + if isinstance(tc, dict) and "type" in tc: + return ToolChoiceParam(tc) + return None + + +def get_conversation_expanded(request: CreateResponse) -> Optional[ConversationParam_2]: + """Expand ``CreateResponse.conversation`` into a typed :class:`ConversationParam_2`. + + A plain string is treated as the conversation ID. + + :param request: The create-response request. + :type request: CreateResponse + :returns: The typed conversation parameter, or ``None``. 
+    :rtype: ConversationParam_2 | None
+    """
+    conv = request.conversation
+    if conv is None:
+        return None
+    if isinstance(conv, ConversationParam_2):
+        return conv
+    if isinstance(conv, str):
+        return ConversationParam_2(id=conv) if conv else None
+    # dict fallback
+    if isinstance(conv, dict):
+        cid = conv.get("id")
+        return ConversationParam_2(id=cid) if cid else None
+    return None
+
+
+# ---------------------------------------------------------------------------
+# Response helpers
+# ---------------------------------------------------------------------------
+
+
+def get_instruction_items(response: Response) -> list[dict]:
+    """Expand ``Response.instructions`` into a list of item dicts.
+
+    - If instructions is ``None``, returns ``[]``.
+    - If instructions is a string, wraps it as a single :class:`ItemMessage`
+      with ``role=developer`` and :class:`MessageContentInputTextContent`,
+      serialized via ``as_dict()``.
+    - If instructions is already a list, returns a shallow copy.
+
+    :param response: The response object.
+    :type response: Response
+    :returns: A list of instruction item dicts.
+    :rtype: list[dict]
+    """
+    instr = response.instructions
+    if instr is None:
+        return []
+    if isinstance(instr, str):
+        return [
+            ItemMessage(
+                id="",
+                status="completed",
+                role=MessageRole.DEVELOPER,
+                content=[MessageContentInputTextContent(text=instr)],
+            ).as_dict()
+        ]
+    return list(instr)
+
+
+# ---------------------------------------------------------------------------
+# OutputItem helpers
+# ---------------------------------------------------------------------------
+
+
+def get_output_item_id(item: OutputItem) -> str:
+    """Extract the ``id`` field from any :class:`OutputItem` subtype.
+
+    The base :class:`OutputItem` class does not define ``id``, but all
+    concrete subtypes do. Falls back to dict-style access for unknown
+    subtypes.
+
+    :param item: The output item to extract the ID from.
+    :type item: OutputItem
+    :returns: The item's ID.
+ :rtype: str + :raises ValueError: If the item has no valid ``id``. + """ + item_id = _get_field(item, "id") + if item_id is not None: + return str(item_id) + + # Fallback: Model subclass supports Mapping protocol + try: + raw_id = item["id"] # type: ignore[index] + if raw_id is not None: + return str(raw_id) + except (KeyError, TypeError): + pass + + raise ValueError( + f"OutputItem of type '{type(item).__name__}' does not have a valid id. " + "Ensure the id property is set before accessing it." + ) + + +# --------------------------------------------------------------------------- +# ItemMessage helpers +# --------------------------------------------------------------------------- + + +def get_content_expanded(message: ItemMessage) -> list[MessageContent]: + """Return the typed content list from an :class:`ItemMessage`. + + In Python the generated ``ItemMessage.content`` is already + ``list[MessageContent]``, so this is a convenience passthrough that + returns an empty list when content is ``None``. + + :param message: The item message. + :type message: ItemMessage + :returns: The message content parts. + :rtype: list[MessageContent] + """ + content = _get_field(message, "content") + return list(content) if content else [] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/errors.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/errors.py new file mode 100644 index 000000000000..07f2d624795a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/errors.py @@ -0,0 +1,65 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Error model types for request validation failures.""" + +from __future__ import annotations + +from typing import Any + +from azure.ai.agentserver.responses.models._generated import ApiErrorResponse, Error + + +class RequestValidationError(ValueError): + """Represents a client-visible request validation failure.""" + + def __init__( + self, + message: str, + *, + code: str = "invalid_request", + param: str | None = None, + error_type: str = "invalid_request_error", + debug_info: dict[str, Any] | None = None, + details: list[dict[str, str]] | None = None, + ) -> None: + super().__init__(message) + self.message = message + self.code = code + self.param = param + self.error_type = error_type + self.debug_info = debug_info + self.details = details + + def to_error(self) -> Error: + """Convert this validation error to the generated ``Error`` model. + + :returns: An ``Error`` instance populated from this validation error's fields. + :rtype: Error + """ + detail_errors: list[Error] | None = None + if self.details: + detail_errors = [ + Error( + code=d.get("code", "invalid_value"), + message=d.get("message", ""), + param=d.get("param"), + type="invalid_request_error", + ) + for d in self.details + ] + return Error( + code=self.code, + message=self.message, + param=self.param, + type=self.error_type, + debug_info=self.debug_info, + details=detail_errors, + ) + + def to_api_error_response(self) -> ApiErrorResponse: + """Convert this validation error to the generated API error envelope. + + :returns: An ``ApiErrorResponse`` wrapping the generated ``Error``. 
+ :rtype: ApiErrorResponse + """ + return ApiErrorResponse(error=self.to_error()) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/runtime.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/runtime.py new file mode 100644 index 000000000000..fc7bc079be83 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/models/runtime.py @@ -0,0 +1,365 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Runtime domain models for response sessions and stream events.""" + +from __future__ import annotations + +import asyncio # pylint: disable=do-not-import-asyncio +from copy import deepcopy +from datetime import datetime, timezone +from typing import Any, Literal, Mapping + +from ._generated import Response, ResponseStreamEvent, ResponseStreamEventType + +EVENT_TYPE = ResponseStreamEventType + +ResponseStatus = Literal["queued", "in_progress", "completed", "failed", "cancelled", "incomplete"] +TerminalResponseStatus = Literal["completed", "failed", "cancelled", "incomplete"] + + +class ResponseModeFlags: + """Execution mode flags captured from the create request.""" + + def __init__(self, *, stream: bool, store: bool, background: bool) -> None: + self.stream = stream + self.store = store + self.background = background + + +class StreamEventRecord: + """A persisted record for one emitted stream event.""" + + def __init__( + self, + *, + sequence_number: int, + event_type: str, + payload: Mapping[str, Any], + emitted_at: datetime | None = None, + ) -> None: + self.sequence_number = sequence_number + self.event_type = event_type + self.payload = payload + self.emitted_at = emitted_at if emitted_at is not None else datetime.now(timezone.utc) + + @property + def terminal(self) -> bool: + """Return True when this event is one of the terminal response events. 
+ + Terminal events are ``response.completed``, ``response.failed``, + and ``response.incomplete``. + + :returns: True if the event type is terminal, False otherwise. + :rtype: bool + """ + return self.event_type in { + EVENT_TYPE.RESPONSE_COMPLETED.value, + EVENT_TYPE.RESPONSE_FAILED.value, + EVENT_TYPE.RESPONSE_INCOMPLETE.value, + } + + @classmethod + def from_generated(cls, event: ResponseStreamEvent, payload: Mapping[str, Any]) -> "StreamEventRecord": + """Create a stream event record from a generated response stream event model. + + :param event: The generated response stream event to convert. + :type event: ResponseStreamEvent + :param payload: The serialized payload mapping for the event. + :type payload: Mapping[str, Any] + :returns: A new ``StreamEventRecord`` populated from the generated event. + :rtype: StreamEventRecord + """ + return cls(sequence_number=event.sequence_number, event_type=event.type, payload=payload) + + +class ResponseExecution: # pylint: disable=too-many-instance-attributes + """Lightweight pipeline state for one response execution. + + This type intentionally does not own persisted stream history. Stream replay + concerns are modeled separately in :class:`StreamReplayState`. 
+ """ + + def __init__( + self, + *, + response_id: str, + mode_flags: ResponseModeFlags, + created_at: datetime | None = None, + updated_at: datetime | None = None, + completed_at: datetime | None = None, + status: ResponseStatus = "queued", + response: Response | None = None, + execution_task: asyncio.Task[Any] | None = None, + cancel_requested: bool = False, + client_disconnected: bool = False, + response_created_seen: bool = False, + subject: Any | None = None, + cancel_signal: asyncio.Event | None = None, + input_items: list[dict[str, Any]] | None = None, + previous_response_id: str | None = None, + response_context: Any | None = None, + ) -> None: + self.response_id = response_id + self.mode_flags = mode_flags + self.created_at = created_at if created_at is not None else datetime.now(timezone.utc) + self.updated_at = updated_at if updated_at is not None else datetime.now(timezone.utc) + self.completed_at = completed_at + self.status = status + self.response = response + self.execution_task = execution_task + self.cancel_requested = cancel_requested + self.client_disconnected = client_disconnected + self.response_created_seen = response_created_seen + self.subject = subject + self.cancel_signal = cancel_signal if cancel_signal is not None else asyncio.Event() + self.input_items: list[dict[str, Any]] = input_items if input_items is not None else [] + self.previous_response_id = previous_response_id + self.response_context = response_context + self.response_created_signal: asyncio.Event = asyncio.Event() + self.response_failed_before_events: bool = False + + def transition_to(self, next_status: ResponseStatus) -> None: + """Transition this execution to a valid lifecycle status. + + Updates ``status``, ``updated_at``, and ``completed_at`` (for terminal states). + Re-entering the current status is a no-op that only refreshes ``updated_at``. + + :param next_status: The target lifecycle status. 
+ :type next_status: ResponseStatus + :raises ValueError: If the requested transition is not allowed. + """ + allowed: dict[ResponseStatus, set[ResponseStatus]] = { + "queued": {"in_progress", "failed"}, + "in_progress": {"completed", "failed", "cancelled", "incomplete"}, + "completed": set(), + "failed": set(), + "cancelled": set(), + "incomplete": set(), + } + + if next_status == self.status: + self.updated_at = datetime.now(timezone.utc) + return + + if next_status not in allowed[self.status]: + raise ValueError(f"invalid status transition: {self.status} -> {next_status}") + + self.status = next_status + now = datetime.now(timezone.utc) + self.updated_at = now + if self.is_terminal: + self.completed_at = now + + @property + def is_terminal(self) -> bool: + """Return whether the execution has reached a terminal state. + + :returns: True if the status is one of completed, failed, cancelled, or incomplete. + :rtype: bool + """ + return self.status in {"completed", "failed", "cancelled", "incomplete"} + + def set_response_snapshot(self, response: Response) -> None: + """Replace the current response snapshot from handler-emitted events. + + :param response: The latest response snapshot to store. + :type response: Response + """ + self.response = response + self.updated_at = datetime.now(timezone.utc) + + @property + def replay_enabled(self) -> bool: + """SSE replay is only available for background+stream+store responses. + + :returns: True if this execution supports SSE replay. + :rtype: bool + """ + return self.mode_flags.stream and self.mode_flags.store and self.mode_flags.background + + @property + def visible_via_get(self) -> bool: + """Non-streaming stored responses are retrievable via GET after completion. + + :returns: True if this execution can be retrieved via GET. 
+ :rtype: bool + """ + return self.mode_flags.store + + def apply_event(self, normalized: dict[str, Any], all_events: list[dict[str, Any]]) -> None: + """Apply a normalised stream event — updates self.response and self.status. + + Does nothing if the execution is already ``"cancelled"``. + + :param normalized: The normalised event dictionary (``{"type": ..., "payload": {...}}``). + :type normalized: dict[str, Any] + :param all_events: The full ordered list of handler events seen so far + (used to extract the latest response snapshot). + :type all_events: list[dict[str, Any]] + """ + # Lazy imports to avoid circular dependency (models.runtime ← streaming._helpers ← models.__init__) + from ..streaming._internals import _RESPONSE_SNAPSHOT_EVENT_TYPES # pylint: disable=import-outside-toplevel + from ..streaming._helpers import _extract_response_snapshot_from_events # pylint: disable=import-outside-toplevel + + if self.status == "cancelled": + return + event_type = normalized.get("type") + payload = normalized.get("payload", {}) + if event_type in _RESPONSE_SNAPSHOT_EVENT_TYPES: + agent_reference = ( + self.response.get("agent_reference") if self.response is not None else {} # type: ignore[union-attr] + ) or {} + model = self.response.get("model") if self.response is not None else None # type: ignore[union-attr] + snapshot = _extract_response_snapshot_from_events( + all_events, + response_id=self.response_id, + agent_reference=agent_reference, + model=model, + ) + self.set_response_snapshot(Response(snapshot)) + resolved = snapshot.get("status") + if isinstance(resolved, str): + self.status = resolved + elif event_type == EVENT_TYPE.RESPONSE_OUTPUT_ITEM_ADDED.value: + item = payload.get("item") + if isinstance(item, dict) and self.response is not None: + output = self.response.setdefault("output", []) + if isinstance(output, list): + output.append(deepcopy(item)) + elif event_type == EVENT_TYPE.RESPONSE_OUTPUT_ITEM_DONE.value: + item = payload.get("item") + output_index 
= payload.get("output_index") + if isinstance(item, dict) and isinstance(output_index, int) and self.response is not None: + output = self.response.get("output", []) + if isinstance(output, list) and 0 <= output_index < len(output): + output[output_index] = deepcopy(item) + + @property + def agent_reference(self) -> dict[str, Any]: + """Extract agent_reference from the stored response snapshot. + + :returns: The agent reference dict, or empty dict if no response snapshot is set. + :rtype: dict[str, Any] + """ + if self.response is not None: + return self.response.get("agent_reference") or {} # type: ignore[return-value] + return {} + + @property + def model(self) -> str | None: + """Extract model name from the stored response snapshot. + + :returns: The model name, or ``None`` if no response snapshot is set. + :rtype: str | None + """ + if self.response is not None: + return self.response.get("model") # type: ignore[return-value] + return None + + +class StreamReplayState: + """Persisted stream replay state for one response identifier.""" + + def __init__( + self, + *, + response_id: str, + events: list[StreamEventRecord] | None = None, + ) -> None: + self.response_id = response_id + self.events = events if events is not None else [] + + def append(self, event: StreamEventRecord) -> None: + """Append a stream event and enforce replay sequence integrity. + + :param event: The stream event record to append. + :type event: StreamEventRecord + :raises ValueError: If the sequence number is not strictly increasing or + a terminal event has already been recorded. 
+ """ + if self.events and event.sequence_number <= self.events[-1].sequence_number: + raise ValueError("stream event sequence numbers must be strictly increasing") + + if self.events and self.events[-1].terminal: + raise ValueError("cannot append events after a terminal event") + + self.events.append(event) + + @property + def terminal_event_seen(self) -> bool: + """Return whether replay state has already recorded a terminal event. + + :returns: True if the last recorded event is terminal, False otherwise. + :rtype: bool + """ + return bool(self.events and self.events[-1].terminal) + + +def build_cancelled_response( + response_id: str, + agent_reference: dict[str, Any], + model: str | None, + created_at: datetime | None = None, +) -> Response: + """Build a Response object representing a cancelled terminal state. + + :param response_id: The response identifier. + :type response_id: str + :param agent_reference: The agent reference metadata dict. + :type agent_reference: dict[str, Any] + :param model: Optional model identifier. + :type model: str | None + :param created_at: Optional creation timestamp; defaults to now if omitted. + :type created_at: datetime | None + :returns: A Response object with status ``"cancelled"`` and empty output. + :rtype: Response + """ + payload: dict[str, Any] = { + "id": response_id, + "response_id": response_id, + "agent_reference": deepcopy(agent_reference), + "object": "response", + "status": "cancelled", + "model": model, + "output": [], + } + if created_at is not None: + payload["created_at"] = created_at.isoformat() + return Response(payload) + + +def build_failed_response( + response_id: str, + agent_reference: dict[str, Any], + model: str | None, + created_at: datetime | None = None, + error_message: str = "An internal server error occurred.", +) -> Response: + """Build a Response object representing a failed terminal state. + + :param response_id: The response identifier. 
+ :type response_id: str + :param agent_reference: The agent reference metadata dict. + :type agent_reference: dict[str, Any] + :param model: Optional model identifier. + :type model: str | None + :param created_at: Optional creation timestamp; defaults to now if omitted. + :type created_at: datetime | None + :param error_message: Human-readable error message. + :type error_message: str + :returns: A Response object with status ``"failed"`` and empty output. + :rtype: Response + """ + payload: dict[str, Any] = { + "id": response_id, + "response_id": response_id, + "agent_reference": deepcopy(agent_reference), + "object": "response", + "status": "failed", + "model": model, + "output": [], + "error": {"code": "server_error", "message": error_message}, + } + if created_at is not None: + payload["created_at"] = created_at.isoformat() + return Response(payload) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/py.typed b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/py.typed new file mode 100644 index 000000000000..e69de29bb2d1 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/__init__.py new file mode 100644 index 000000000000..9a0454564dbb --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/__init__.py @@ -0,0 +1,2 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
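The hunks above complete `runtime.py`. The status lifecycle that `ResponseExecution.transition_to` enforces there can be exercised in isolation: the sketch below copies the transition table from the diff, while the `transition` helper is a hypothetical stand-in for illustration, not SDK API.

```python
# Standalone sketch of the status state machine from ResponseExecution.transition_to.
# ALLOWED mirrors the `allowed` dict in the diff; `transition` is illustrative only.
ALLOWED: dict[str, set[str]] = {
    "queued": {"in_progress", "failed"},
    "in_progress": {"completed", "failed", "cancelled", "incomplete"},
    "completed": set(),
    "failed": set(),
    "cancelled": set(),
    "incomplete": set(),
}
TERMINAL = {"completed", "failed", "cancelled", "incomplete"}


def transition(current: str, nxt: str) -> str:
    """Return the new status, or raise ValueError on an illegal transition."""
    if nxt == current:
        return current  # re-entering the same status is a no-op
    if nxt not in ALLOWED[current]:
        raise ValueError(f"invalid status transition: {current} -> {nxt}")
    return nxt


status = transition("queued", "in_progress")
status = transition(status, "completed")
# Terminal states are absorbing: their allowed-sets are empty.
assert status in TERMINAL
```

All four terminal states have an empty allowed-set, which is why `transition_to` can stamp `completed_at` exactly once when `is_terminal` first becomes true.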
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_base.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_base.py new file mode 100644 index 000000000000..731175413f76 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_base.py @@ -0,0 +1,152 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Persistence abstraction for response execution and replay state.""" + +from __future__ import annotations + +from typing import Any, Iterable, Protocol, runtime_checkable + +from ..models._generated import Response + + +@runtime_checkable +class ResponseProviderProtocol(Protocol): + """Protocol aligned with the .NET ``IResponsesProvider`` contract. + + Implementations provide response envelope storage plus input/history item lookup. + """ + + async def create_response_async( + self, + response: Response, + input_items: Iterable[Any] | None, + history_item_ids: Iterable[str] | None, + ) -> None: + """Persist a new response envelope and optional input/history references. + + :param response: The response envelope to persist. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :param input_items: Optional input items to associate with the response. + :type input_items: Iterable[Any] | None + :param history_item_ids: Optional history item IDs to link to the response. + :type history_item_ids: Iterable[str] | None + :rtype: None + """ + + async def get_response_async(self, response_id: str) -> Response: + """Load one response envelope by ID. + + :param response_id: The unique identifier of the response to retrieve. + :type response_id: str + :returns: The response envelope matching the given ID. + :rtype: ~azure.ai.agentserver.responses.models._generated.Response + :raises KeyError: If the response does not exist. 
+ """ + + async def update_response_async(self, response: Response) -> None: + """Persist an updated response envelope. + + :param response: The response envelope with updated fields to persist. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :rtype: None + """ + + async def delete_response_async(self, response_id: str) -> None: + """Delete a response envelope by ID. + + :param response_id: The unique identifier of the response to delete. + :type response_id: str + :rtype: None + :raises KeyError: If the response does not exist. + """ + + async def get_input_items_async( + self, + response_id: str, + limit: int = 20, + ascending: bool = False, + after: str | None = None, + before: str | None = None, + ) -> list[Any]: + """Get response input/history items for one response ID using cursor pagination. + + :param response_id: The unique identifier of the response whose items to fetch. + :type response_id: str + :param limit: Maximum number of items to return. Defaults to 20. + :type limit: int + :param ascending: Whether to return items in ascending order. Defaults to False. + :type ascending: bool + :param after: Cursor ID; only return items after this ID. + :type after: str | None + :param before: Cursor ID; only return items before this ID. + :type before: str | None + :returns: A list of input/history items matching the pagination criteria. + :rtype: list[Any] + """ + + async def get_items_async(self, item_ids: Iterable[str]) -> list[Any | None]: + """Get items by ID (missing IDs produce ``None`` entries). + + :param item_ids: The item identifiers to look up. + :type item_ids: Iterable[str] + :returns: A list of items in the same order as *item_ids*; missing items are ``None``. + :rtype: list[Any | None] + """ + + async def get_history_item_ids_async( + self, + previous_response_id: str | None, + conversation_id: str | None, + limit: int, + ) -> list[str]: + """Get history item IDs for a conversation chain scope. 
+ + :param previous_response_id: Optional response ID to chain history from. + :type previous_response_id: str | None + :param conversation_id: Optional conversation ID to scope history lookup. + :type conversation_id: str | None + :param limit: Maximum number of history item IDs to return. + :type limit: int + :returns: A list of history item IDs within the given scope. + :rtype: list[str] + """ + + +@runtime_checkable +class ResponseStreamProviderProtocol(Protocol): + """Protocol for providers that can persist and replay SSE stream events. + + Implement this protocol alongside :class:`ResponseProviderProtocol` to enable + SSE replay for responses that are no longer resident in the in-process runtime + state (for example, after a process restart). + """ + + async def save_stream_events_async( + self, + response_id: str, + events: list[dict[str, Any]], + ) -> None: + """Persist the complete ordered list of SSE events for a response. + + Called once when the background+stream response reaches terminal state. + The *events* list uses the same normalised format that the SSE encoding + layer expects: ``[{"type": str, "payload": dict}, ...]``. + + :param response_id: The unique identifier of the response. + :type response_id: str + :param events: Ordered list of normalised SSE event dicts to persist. + :type events: list[dict[str, Any]] + :rtype: None + """ + + async def get_stream_events_async( + self, + response_id: str, + ) -> list[dict[str, Any]] | None: + """Retrieve the persisted SSE events for a response. + + :param response_id: The unique identifier of the response whose events to retrieve. + :type response_id: str + :returns: The ordered list of normalised SSE event dicts, or ``None`` if not found. 
+ :rtype: list[dict[str, Any]] | None + """ diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_errors.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_errors.py new file mode 100644 index 000000000000..8bd492f104c5 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_errors.py @@ -0,0 +1,79 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Exception hierarchy for Foundry storage API errors.""" + +from __future__ import annotations + +import json + +import httpx + + +class FoundryStorageError(Exception): + """Base class for errors returned by the Foundry storage API.""" + + def __init__(self, message: str) -> None: + super().__init__(message) + self.message = message + + +class FoundryResourceNotFoundError(FoundryStorageError): + """Raised when the requested resource does not exist (HTTP 404).""" + + +class FoundryBadRequestError(FoundryStorageError): + """Raised for invalid-request or conflict errors (HTTP 400, 409).""" + + +class FoundryApiError(FoundryStorageError): + """Raised for all other non-success HTTP responses.""" + + def __init__(self, message: str, status_code: int) -> None: + super().__init__(message) + self.status_code = status_code + + +def raise_for_storage_error(response: httpx.Response) -> None: + """Raise an appropriate :class:`FoundryStorageError` subclass if *response* is not successful. + + :param response: The HTTP response to inspect. + :type response: httpx.Response + :raises FoundryResourceNotFoundError: For HTTP 404. + :raises FoundryBadRequestError: For HTTP 400 or 409. + :raises FoundryApiError: For all other non-2xx responses. 
+ """ + if response.is_success: + return + + status = response.status_code + message = _extract_error_message(response, status) + + if status == 404: + raise FoundryResourceNotFoundError(message) + if status in (400, 409): + raise FoundryBadRequestError(message) + raise FoundryApiError(message, status) + + +def _extract_error_message(response: httpx.Response, status: int) -> str: + """Extract an error message from *response*, falling back to a generic string. + + :param response: The HTTP response whose body is inspected. + :type response: httpx.Response + :param status: The HTTP status code of the response. + :type status: int + :returns: A human-readable error message string. + :rtype: str + """ + try: + body = response.text + if body: + data = json.loads(body) + error = data.get("error") + if isinstance(error, dict): + msg = error.get("message") + if msg: + return str(msg) + except Exception: # pylint: disable=broad-except + pass + return f"Foundry storage request failed with HTTP {status}." diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_provider.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_provider.py new file mode 100644 index 000000000000..8ad3aee075aa --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_provider.py @@ -0,0 +1,245 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""HTTP-backed Foundry storage provider for Azure AI Responses.""" + +from __future__ import annotations + +from typing import Any, Iterable +from urllib.parse import quote as _url_quote + +import httpx +from azure.core.credentials_async import AsyncTokenCredential + +from ..models._generated import Response # type: ignore[attr-defined] +from ._foundry_errors import raise_for_storage_error +from ._foundry_serializer import ( + deserialize_history_ids, + deserialize_items_array, + deserialize_paged_items, + deserialize_response, + serialize_batch_request, + serialize_create_request, + serialize_response, +) +from ._foundry_settings import FoundryStorageSettings + +_FOUNDRY_TOKEN_SCOPE = "https://ai.azure.com/.default" +_JSON_CONTENT_TYPE = "application/json" + + +def _encode(value: str) -> str: + return _url_quote(value, safe="") + + +class FoundryStorageProvider: + """An HTTP-backed response storage provider that persists data via the Foundry storage API. + + This class satisfies the + :class:`~azure.ai.agentserver.responses.store._base.ResponseProviderProtocol` structural + protocol. Obtain an instance through the constructor and supply it when building a + ``ResponsesServer``. + + :param credential: An async credential used to obtain bearer tokens for the Foundry API. + :type credential: AsyncTokenCredential + :param settings: Storage settings. If omitted, + :meth:`~FoundryStorageSettings.from_env` is called automatically. + :type settings: FoundryStorageSettings | None + :param http_client: An existing :class:`httpx.AsyncClient` to use. If omitted, a new + client is created and owned by this instance (closed in :meth:`aclose`). 
+ :type http_client: httpx.AsyncClient | None + + Example:: + + async with FoundryStorageProvider(credential=DefaultAzureCredential()) as provider: + app = ResponsesServer(handler=my_handler, provider=provider) + """ + + def __init__( + self, + credential: AsyncTokenCredential, + settings: FoundryStorageSettings | None = None, + http_client: httpx.AsyncClient | None = None, + ) -> None: + self._credential = credential + self._settings = settings or FoundryStorageSettings.from_env() + self._owns_client = http_client is None + self._http_client = http_client if http_client is not None else httpx.AsyncClient() + + # ------------------------------------------------------------------ + # Async context-manager support + # ------------------------------------------------------------------ + + async def aclose(self) -> None: + """Close the underlying HTTP client if it is owned by this instance.""" + if self._owns_client: + await self._http_client.aclose() + + async def __aenter__(self) -> "FoundryStorageProvider": + return self + + async def __aexit__(self, *args: Any) -> None: + await self.aclose() + + # ------------------------------------------------------------------ + # Internal helpers + # ------------------------------------------------------------------ + + async def _auth_headers(self) -> dict[str, str]: + token = await self._credential.get_token(_FOUNDRY_TOKEN_SCOPE) + return { + "Authorization": f"Bearer {token.token}", + "Content-Type": _JSON_CONTENT_TYPE, + } + + # ------------------------------------------------------------------ + # ResponseProviderProtocol implementation + # ------------------------------------------------------------------ + + async def create_response_async( + self, + response: Response, + input_items: Iterable[Any] | None, + history_item_ids: Iterable[str] | None, + ) -> None: + """Persist a new response with its associated input items and history. + + :param response: The initial response snapshot. 
+ :type response: Response + :param input_items: Ordered input items for this response turn. + :type input_items: Iterable[Any] | None + :param history_item_ids: Item IDs from the prior conversation turn, if any. + :type history_item_ids: Iterable[str] | None + :raises FoundryApiError: On non-success HTTP response. + """ + body = serialize_create_request(response, input_items, history_item_ids) + url = self._settings.build_url("responses") + http_resp = await self._http_client.post(url, content=body, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + + async def get_response_async(self, response_id: str) -> Response: + """Retrieve a stored response by its ID. + + :param response_id: The response identifier. + :type response_id: str + :returns: The deserialized :class:`Response` model. + :rtype: Response + :raises FoundryResourceNotFoundError: If the response does not exist. + :raises FoundryApiError: On other non-success HTTP response. + """ + url = self._settings.build_url(f"responses/{_encode(response_id)}") + http_resp = await self._http_client.get(url, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + return deserialize_response(http_resp.text) + + async def update_response_async(self, response: Response) -> None: + """Persist an updated response snapshot. + + :param response: The updated response model. Must contain a valid ``id`` field. + :type response: Response + :raises FoundryResourceNotFoundError: If the response does not exist. + :raises FoundryApiError: On other non-success HTTP response. 
+ """ + response_id = str(response["id"]) # type: ignore[index] + body = serialize_response(response) + url = self._settings.build_url(f"responses/{_encode(response_id)}") + http_resp = await self._http_client.post(url, content=body, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + + async def delete_response_async(self, response_id: str) -> None: + """Delete a stored response and its associated data. + + :param response_id: The response identifier. + :type response_id: str + :raises FoundryResourceNotFoundError: If the response does not exist. + :raises FoundryApiError: On other non-success HTTP response. + """ + url = self._settings.build_url(f"responses/{_encode(response_id)}") + http_resp = await self._http_client.delete(url, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + + async def get_input_items_async( + self, + response_id: str, + limit: int = 20, + ascending: bool = False, + after: str | None = None, + before: str | None = None, + ) -> list[Any]: + """Retrieve a page of input items for the given response. + + :param response_id: The response whose input items are being listed. + :type response_id: str + :param limit: Maximum number of items to return. Defaults to 20. + :type limit: int + :param ascending: ``True`` for oldest-first ordering; ``False`` (default) for newest-first. + :type ascending: bool + :param after: Start the page after this item ID (cursor-based pagination). + :type after: str | None + :param before: End the page before this item ID (cursor-based pagination). + :type before: str | None + :returns: A list of deserialized :class:`OutputItem` instances. + :rtype: list[Any] + :raises FoundryResourceNotFoundError: If the response does not exist. + :raises FoundryApiError: On other non-success HTTP response. 
+ """ + extra: dict[str, str] = { + "limit": str(limit), + "order": "asc" if ascending else "desc", + } + if after is not None: + extra["after"] = after + if before is not None: + extra["before"] = before + + url = self._settings.build_url(f"responses/{_encode(response_id)}/input_items", **extra) + http_resp = await self._http_client.get(url, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + return deserialize_paged_items(http_resp.text) + + async def get_items_async(self, item_ids: Iterable[str]) -> list[Any | None]: + """Retrieve multiple items by their IDs in a single batch request. + + Positions in the returned list correspond to positions in *item_ids*. + Entries are ``None`` where no item was found for the given ID. + + :param item_ids: The item identifiers to retrieve. + :type item_ids: Iterable[str] + :returns: A list of :class:`OutputItem` instances (or ``None`` for missing items). + :rtype: list[Any | None] + :raises FoundryApiError: On non-success HTTP response. + """ + ids = list(item_ids) + body = serialize_batch_request(ids) + url = self._settings.build_url("items/batch/retrieve") + http_resp = await self._http_client.post(url, content=body, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + return deserialize_items_array(http_resp.text) + + async def get_history_item_ids_async( + self, + previous_response_id: str | None, + conversation_id: str | None, + limit: int, + ) -> list[str]: + """Retrieve the ordered list of item IDs that form the conversation history. + + :param previous_response_id: The response whose prior turn should be the history anchor. + :type previous_response_id: str | None + :param conversation_id: An explicit conversation scope identifier, if available. + :type conversation_id: str | None + :param limit: Maximum number of item IDs to return. + :type limit: int + :returns: Ordered list of item ID strings. + :rtype: list[str] + :raises FoundryApiError: On non-success HTTP response. 
+ """ + extra: dict[str, str] = {"limit": str(limit)} + if previous_response_id is not None: + extra["previous_response_id"] = previous_response_id + if conversation_id is not None: + extra["conversation_id"] = conversation_id + + url = self._settings.build_url("history/item_ids", **extra) + http_resp = await self._http_client.get(url, headers=await self._auth_headers()) + raise_for_storage_error(http_resp) + return deserialize_history_ids(http_resp.text) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_serializer.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_serializer.py new file mode 100644 index 000000000000..a64d394ef9de --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_serializer.py @@ -0,0 +1,114 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""JSON serialization helpers for Foundry storage envelope payloads.""" + +from __future__ import annotations + +import json +from typing import Any, Iterable + +from ..models._generated import OutputItem, Response # type: ignore[attr-defined] + + +def serialize_create_request( + response: Response, + input_items: Iterable[Any] | None, + history_item_ids: Iterable[str] | None, +) -> bytes: + """Serialize a create-response request envelope to JSON bytes. + + :param response: The initial response snapshot. + :type response: Response + :param input_items: Ordered input items to store alongside the response. + :type input_items: Iterable[Any] | None + :param history_item_ids: Item IDs drawn from a prior conversation turn. + :type history_item_ids: Iterable[str] | None + :returns: UTF-8 encoded JSON body. 
+ :rtype: bytes + """ + payload: dict[str, Any] = { + "response": response.as_dict(), + "input_items": [item.as_dict() for item in (input_items or [])], + "history_item_ids": list(history_item_ids or []), + } + return json.dumps(payload).encode("utf-8") + + +def serialize_response(response: Response) -> bytes: + """Serialize a single :class:`Response` snapshot to JSON bytes. + + :param response: The response model to encode. + :type response: Response + :returns: UTF-8 encoded JSON body. + :rtype: bytes + """ + return json.dumps(response.as_dict()).encode("utf-8") + + +def serialize_batch_request(item_ids: list[str]) -> bytes: + """Serialize a batch-retrieve request to JSON bytes. + + :param item_ids: Ordered list of item IDs to retrieve. + :type item_ids: list[str] + :returns: UTF-8 encoded JSON body. + :rtype: bytes + """ + return json.dumps({"item_ids": item_ids}).encode("utf-8") + + +def deserialize_response(body: str) -> Response: + """Deserialize a JSON response body into a :class:`Response` model. + + :param body: The raw JSON response text from the storage API. + :type body: str + :returns: A populated :class:`Response` model. + :rtype: Response + """ + return Response(json.loads(body)) # type: ignore[call-arg] + + +def deserialize_paged_items(body: str) -> list[Any]: + """Deserialize a paged-response JSON body, extracting the ``data`` array. + + The discriminator field ``type`` on each item determines the concrete + :class:`OutputItem` subclass returned. + + :param body: The raw JSON response text from the storage API. + :type body: str + :returns: A list of deserialized :class:`OutputItem` instances. + :rtype: list[Any] + """ + data = json.loads(body) + return [OutputItem._deserialize(item, []) for item in data.get("data", [])] # type: ignore[attr-defined] + + +def deserialize_items_array(body: str) -> list[Any | None]: + """Deserialize a JSON array of items, preserving ``null`` gaps. 
+ + Null entries in the array indicate that no item was found for the + corresponding ID in a batch-retrieve response. + + :param body: The raw JSON response text from the storage API. + :type body: str + :returns: A list of deserialized :class:`OutputItem` instances or ``None`` for missing items. + :rtype: list[Any | None] + """ + items: list[Any] = json.loads(body) + result: list[Any | None] = [] + for item in items: + if item is None: + result.append(None) + else: + result.append(OutputItem._deserialize(item, [])) # type: ignore[attr-defined] + return result + + +def deserialize_history_ids(body: str) -> list[str]: + """Deserialize a JSON array of history item ID strings. + + :param body: The raw JSON response text from the storage API. + :type body: str + :returns: List of item ID strings. + :rtype: list[str] + """ + return list(json.loads(body)) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_settings.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_settings.py new file mode 100644 index 000000000000..8ebef2a48894 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_foundry_settings.py @@ -0,0 +1,63 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Configuration helpers for the Foundry storage backend.""" + +from __future__ import annotations + +import os +from urllib.parse import quote as _url_quote + +_PROJECT_ENDPOINT_ENV_VAR = "FOUNDRY_PROJECT_ENDPOINT" +_API_VERSION = "v1" + + +def _encode(value: str) -> str: + return _url_quote(value, safe="") + + +class FoundryStorageSettings: + """Immutable runtime configuration for :class:`FoundryStorageProvider`. + + Construct via :meth:`from_env` in hosted environments, or directly by + supplying *storage_base_url* for local testing. 
+ """ + + def __init__(self, *, storage_base_url: str) -> None: + self.storage_base_url = storage_base_url + + @classmethod + def from_env(cls) -> "FoundryStorageSettings": + """Create settings by reading the ``FOUNDRY_PROJECT_ENDPOINT`` environment variable. + + :raises EnvironmentError: If the variable is missing or empty. + :raises ValueError: If the variable does not contain a valid absolute URL. + :returns: A new :class:`FoundryStorageSettings` configured from the environment. + :rtype: FoundryStorageSettings + """ + value = os.environ.get(_PROJECT_ENDPOINT_ENV_VAR) + if not value: + raise EnvironmentError( + f"The '{_PROJECT_ENDPOINT_ENV_VAR}' environment variable is required. " + "In hosted environments, the Azure AI Foundry platform must set this variable." + ) + if not (value.startswith("http://") or value.startswith("https://")): + raise ValueError( + f"The '{_PROJECT_ENDPOINT_ENV_VAR}' environment variable must be a valid absolute URL, " + f"got: {value!r}" + ) + base = value.rstrip("/") + "/storage/" + return cls(storage_base_url=base) + + def build_url(self, path: str, **extra_params: str) -> str: + """Build a full storage API URL for *path* with ``api-version`` appended. + + :param path: The resource path segment, e.g. ``responses/abc123``. + :type path: str + :keyword str extra_params: Additional query parameters; values are URL-encoded automatically. + :returns: The complete URL string. 
+ :rtype: str + """ + url = f"{self.storage_base_url}{path}?api-version={_encode(_API_VERSION)}" + for key, value in extra_params.items(): + url += f"&{key}={_encode(value)}" + return url diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_memory.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_memory.py new file mode 100644 index 000000000000..3e486443fa2d --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/store/_memory.py @@ -0,0 +1,621 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""In-memory response store implementation.""" + +from __future__ import annotations + +import asyncio +from copy import deepcopy +from datetime import datetime, timedelta, timezone +from typing import Any, Dict, Iterable + +from ..models._generated import Response +from ..models._helpers import get_conversation_id +from ..models.runtime import ResponseExecution, ResponseModeFlags, ResponseStatus, StreamEventRecord, StreamReplayState +from ._base import ResponseProviderProtocol, ResponseStreamProviderProtocol + + +class _StoreEntry: + """Container for one response execution and its replay state.""" + + def __init__( + self, + *, + execution: ResponseExecution, + replay: StreamReplayState, + expires_at: datetime | None = None, + response: Response | None = None, + input_item_ids: list[str] | None = None, + output_item_ids: list[str] | None = None, + history_item_ids: list[str] | None = None, + deleted: bool = False, + ) -> None: + self.execution = execution + self.replay = replay + self.expires_at = expires_at + self.response = response + self.input_item_ids = input_item_ids + self.output_item_ids = output_item_ids + self.history_item_ids = history_item_ids + self.deleted = deleted + + +class InMemoryResponseProvider(ResponseProviderProtocol, ResponseStreamProviderProtocol): + """In-memory provider implementing both 
``ResponseProviderProtocol`` and ``ResponseStreamProviderProtocol``.""" + + def __init__(self) -> None: + """Initialize in-memory state and an async mutation lock.""" + self._entries: Dict[str, _StoreEntry] = {} + self._lock = asyncio.Lock() + self._item_store: Dict[str, Any] = {} + self._conversation_responses: Dict[str, list[str]] = {} + self._stream_events: Dict[str, list[dict[str, Any]]] = {} + + async def create_response_async( + self, + response: Response, + input_items: Iterable[Any] | None, + history_item_ids: Iterable[str] | None, + ) -> None: + """Persist a new response envelope and optional input/history references. + + Stores a deep copy of the response, indexes input items by their IDs, + and tracks conversation membership for history resolution. + + :param response: The response envelope to persist. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :param input_items: Optional input items to associate with the response. + :type input_items: Iterable[Any] | None + :param history_item_ids: Optional history item IDs to link to the response. + :type history_item_ids: Iterable[str] | None + :rtype: None + :raises ValueError: If a non-deleted response with the same ID already exists. 
+ """ + response_id = str(getattr(response, "id")) + async with self._lock: + self._purge_expired_unlocked() + + entry = self._entries.get(response_id) + if entry is not None and not entry.deleted: + raise ValueError(f"response '{response_id}' already exists") + + input_ids: list[str] = [] + if input_items is not None: + for item in input_items: + item_id = self._extract_item_id(item) + if item_id is None: + continue + self._item_store[item_id] = deepcopy(item) + input_ids.append(item_id) + + history_ids = list(history_item_ids) if history_item_ids is not None else [] + output_ids = self._store_output_items_unlocked(response) + self._entries[response_id] = _StoreEntry( + execution=ResponseExecution( + response_id=response_id, + mode_flags=self._resolve_mode_flags_from_response(response), + ), + replay=StreamReplayState(response_id=response_id), + response=deepcopy(response), + input_item_ids=input_ids, + output_item_ids=output_ids, + history_item_ids=history_ids, + deleted=False, + ) + + conversation_id = get_conversation_id(response) + if conversation_id is not None: + self._conversation_responses.setdefault(conversation_id, []).append(response_id) + + async def get_response_async(self, response_id: str) -> Response: + """Retrieve one response envelope by identifier. + + :param response_id: The unique identifier of the response to retrieve. + :type response_id: str + :returns: A deep copy of the stored response envelope. + :rtype: ~azure.ai.agentserver.responses.models._generated.Response + :raises KeyError: If the response does not exist or has been deleted. + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None or entry.deleted or entry.response is None: + raise KeyError(f"response '{response_id}' not found") + return deepcopy(entry.response) + + async def update_response_async(self, response: Response) -> None: + """Update a stored response envelope. 
+ + Replaces the stored response with a deep copy and updates + the execution snapshot. + + :param response: The response envelope with updated fields. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :rtype: None + :raises KeyError: If the response does not exist or has been deleted. + """ + response_id = str(getattr(response, "id")) + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None or entry.deleted: + raise KeyError(f"response '{response_id}' not found") + + entry.response = deepcopy(response) + entry.execution.set_response_snapshot(deepcopy(response)) + entry.output_item_ids = self._store_output_items_unlocked(response) + + async def delete_response_async(self, response_id: str) -> None: + """Delete a stored response envelope by identifier. + + Marks the entry as deleted and clears the response payload. + + :param response_id: The unique identifier of the response to delete. + :type response_id: str + :rtype: None + :raises KeyError: If the response does not exist or has already been deleted. + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None or entry.deleted: + raise KeyError(f"response '{response_id}' not found") + entry.deleted = True + entry.response = None + + async def get_input_items_async( + self, + response_id: str, + limit: int = 20, + ascending: bool = False, + after: str | None = None, + before: str | None = None, + ) -> list[Any]: + """Retrieve input/history items for a response with basic cursor paging. + + Returns deep copies of stored items, combining history and input item IDs + with optional cursor-based pagination. + + :param response_id: The unique identifier of the response whose items to fetch. + :type response_id: str + :param limit: Maximum number of items to return (clamped to 1–100). Defaults to 20. 
+ :type limit: int + :param ascending: Whether to return items in ascending order. Defaults to False. + :type ascending: bool + :param after: Cursor ID; only return items after this ID. + :type after: str | None + :param before: Cursor ID; only return items before this ID. + :type before: str | None + :returns: A list of input/history items matching the pagination criteria. + :rtype: list[Any] + :raises KeyError: If the response does not exist. + :raises ValueError: If the response has been deleted. + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + raise KeyError(f"response '{response_id}' not found") + if entry.deleted: + raise ValueError(f"response '{response_id}' has been deleted") + + item_ids = [ + *(entry.history_item_ids or []), + *(entry.input_item_ids or []), + ] + ordered_ids = item_ids if ascending else list(reversed(item_ids)) + + if after is not None: + try: + ordered_ids = ordered_ids[ordered_ids.index(after) + 1 :] + except ValueError: + pass + if before is not None: + try: + ordered_ids = ordered_ids[: ordered_ids.index(before)] + except ValueError: + pass + + safe_limit = max(1, min(100, int(limit))) + return [deepcopy(self._item_store[item_id]) for item_id in ordered_ids[:safe_limit] if item_id in self._item_store] + + async def get_items_async(self, item_ids: Iterable[str]) -> list[Any | None]: + """Retrieve items by ID, preserving request order. + + Returns deep copies of stored items. Missing IDs produce ``None`` entries. + + :param item_ids: The item identifiers to look up. + :type item_ids: Iterable[str] + :returns: A list of items in the same order as *item_ids*; missing items are ``None``. 
+ :rtype: list[Any | None] + """ + async with self._lock: + return [deepcopy(self._item_store[item_id]) if item_id in self._item_store else None for item_id in item_ids] + + async def get_history_item_ids_async( + self, + previous_response_id: str | None, + conversation_id: str | None, + limit: int, + ) -> list[str]: + """Resolve history item IDs from previous response and/or conversation scope. + + Collects history item IDs from the previous response chain and/or + all responses within the given conversation, up to *limit*. + + :param previous_response_id: Optional response ID to chain history from. + :type previous_response_id: str | None + :param conversation_id: Optional conversation ID to scope history lookup. + :type conversation_id: str | None + :param limit: Maximum number of history item IDs to return. + :type limit: int + :returns: A list of history item IDs within the given scope. + :rtype: list[str] + """ + async with self._lock: + self._purge_expired_unlocked() + resolved: list[str] = [] + + if previous_response_id is not None: + entry = self._entries.get(previous_response_id) + if entry is not None and not entry.deleted: + # Mirror .NET IResponsesProvider.GetHistoryItemIdsAsync: + # return historyItemIds + inputItemIds + outputItemIds of the previous response + resolved.extend(entry.history_item_ids or []) + resolved.extend(entry.input_item_ids or []) + resolved.extend(entry.output_item_ids or []) + + if conversation_id is not None: + for response_id in self._conversation_responses.get(conversation_id, []): + entry = self._entries.get(response_id) + if entry is None or entry.deleted: + continue + resolved.extend(entry.history_item_ids or []) + resolved.extend(entry.input_item_ids or []) + resolved.extend(entry.output_item_ids or []) + + if limit <= 0: + return [] + return resolved[:limit] + + async def create_execution(self, execution: ResponseExecution, *, ttl_seconds: int | None = None) -> None: + """Create a new execution and replay container for 
``execution.response_id``. + + :param execution: The execution state to store. + :type execution: ~azure.ai.agentserver.responses.models.runtime.ResponseExecution + :keyword int or None ttl_seconds: Optional time-to-live in seconds for automatic expiration. + :rtype: None + :raises ValueError: If an entry with the same response ID already exists. + """ + async with self._lock: + self._purge_expired_unlocked() + + if execution.response_id in self._entries: + raise ValueError(f"response '{execution.response_id}' already exists") + + self._entries[execution.response_id] = _StoreEntry( + execution=deepcopy(execution), + replay=StreamReplayState(response_id=execution.response_id), + expires_at=self._compute_expiry(ttl_seconds), + ) + + async def get_execution(self, response_id: str) -> ResponseExecution | None: + """Get a defensive copy of execution state for ``response_id`` if present. + + :param response_id: The unique identifier of the response execution to retrieve. + :type response_id: str + :returns: A deep copy of the execution state, or ``None`` if not found. + :rtype: ~azure.ai.agentserver.responses.models.runtime.ResponseExecution | None + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + return None + return deepcopy(entry.execution) + + async def set_response_snapshot( + self, + response_id: str, + response: Response, + *, + ttl_seconds: int | None = None, + ) -> bool: + """Set the latest response snapshot for an existing response execution. + + :param response_id: The unique identifier of the response to update. + :type response_id: str + :param response: The response snapshot to associate with the execution. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :keyword int or None ttl_seconds: Optional time-to-live in seconds to refresh expiration. + :returns: ``True`` if the entry was found and updated, ``False`` otherwise. 
+ :rtype: bool + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + return False + + entry.execution.set_response_snapshot(response) + self._apply_ttl_unlocked(entry, ttl_seconds) + return True + + async def transition_execution_status( + self, + response_id: str, + next_status: ResponseStatus, + *, + ttl_seconds: int | None = None, + ) -> bool: + """Transition execution state while preserving lifecycle invariants. + + :param response_id: The unique identifier of the response execution to transition. + :type response_id: str + :param next_status: The target status to transition to. + :type next_status: ~azure.ai.agentserver.responses.models.runtime.ResponseStatus + :keyword int or None ttl_seconds: Optional time-to-live in seconds to refresh expiration. + :returns: ``True`` if the entry was found and transitioned, ``False`` otherwise. + :rtype: bool + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + return False + + entry.execution.transition_to(next_status) + self._apply_ttl_unlocked(entry, ttl_seconds) + return True + + async def set_cancel_requested(self, response_id: str, *, ttl_seconds: int | None = None) -> bool: + """Mark cancellation requested and enforce lifecycle-safe cancel transitions. + + :param response_id: The unique identifier of the response to cancel. + :type response_id: str + :keyword int or None ttl_seconds: Optional time-to-live in seconds to refresh expiration. + :returns: ``True`` if the entry was found and cancel was applied, ``False`` otherwise. + :rtype: bool + :raises ValueError: If the execution is already terminal in a non-cancelled state. 
+ """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + return False + + self._apply_cancel_transition_unlocked(entry) + self._apply_ttl_unlocked(entry, ttl_seconds) + return True + + @staticmethod + def _apply_cancel_transition_unlocked(entry: _StoreEntry) -> None: + """Apply deterministic and lifecycle-safe cancellation status updates. + + Transitions the entry through ``queued -> in_progress -> cancelled`` when + applicable, and sets the ``cancel_requested`` flag. + + :param entry: The store entry whose execution state will be updated. + :type entry: _StoreEntry + :rtype: None + :raises ValueError: If the execution is in a terminal non-cancelled state. + """ + status = entry.execution.status + + if status == "cancelled": + entry.execution.cancel_requested = True + entry.execution.updated_at = datetime.now(timezone.utc) + return + + if status in {"completed", "failed", "incomplete"}: + raise ValueError(f"cannot cancel terminal execution in status '{status}'") + + if status == "queued": + entry.execution.transition_to("in_progress") + + entry.execution.transition_to("cancelled") + entry.execution.cancel_requested = True + + async def append_stream_event( + self, + response_id: str, + event: StreamEventRecord, + *, + ttl_seconds: int | None = None, + ) -> bool: + """Append one stream event to replay state for an existing execution. + + :param response_id: The unique identifier of the response to append the event to. + :type response_id: str + :param event: The stream event record to append. + :type event: ~azure.ai.agentserver.responses.models.runtime.StreamEventRecord + :keyword int or None ttl_seconds: Optional time-to-live in seconds to refresh expiration. + :returns: ``True`` if the entry was found and the event was appended, ``False`` otherwise. 
+ :rtype: bool + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + return False + + entry.replay.append(deepcopy(event)) + self._apply_ttl_unlocked(entry, ttl_seconds) + return True + + async def get_stream_events(self, response_id: str) -> list[StreamEventRecord] | None: + """Get defensive copies of all replay events for ``response_id``. + + :param response_id: The unique identifier of the response whose events to retrieve. + :type response_id: str + :returns: A list of deep-copied stream event records, or ``None`` if not found. + :rtype: list[~azure.ai.agentserver.responses.models.runtime.StreamEventRecord] | None + """ + async with self._lock: + self._purge_expired_unlocked() + entry = self._entries.get(response_id) + if entry is None: + return None + return deepcopy(entry.replay.events) + + async def delete(self, response_id: str) -> bool: + """Delete all state for a response ID if present. + + Removes the entry entirely from the store (unlike ``delete_response_async`` + which soft-deletes). + + :param response_id: The unique identifier of the response to remove. + :type response_id: str + :returns: ``True`` if an entry was found and removed, ``False`` otherwise. + :rtype: bool + """ + async with self._lock: + self._purge_expired_unlocked() + self._stream_events.pop(response_id, None) + return self._entries.pop(response_id, None) is not None + + async def save_stream_events_async( + self, + response_id: str, + events: list[dict[str, Any]], + ) -> None: + """Persist the complete ordered list of SSE events for ``response_id``. + + :param response_id: The unique identifier of the response. + :type response_id: str + :param events: Ordered list of normalised SSE event dicts. 
+ :type events: list[dict[str, Any]] + :rtype: None + """ + async with self._lock: + self._stream_events[response_id] = deepcopy(events) + + async def get_stream_events_async( + self, + response_id: str, + ) -> list[dict[str, Any]] | None: + """Retrieve the persisted SSE events for ``response_id``. + + :param response_id: The unique identifier of the response whose events to retrieve. + :type response_id: str + :returns: A deep-copied list of normalised SSE event dicts, or ``None`` if not found. + :rtype: list[dict[str, Any]] | None + """ + async with self._lock: + events = self._stream_events.get(response_id) + if events is None: + return None + return deepcopy(events) + + async def purge_expired(self, *, now: datetime | None = None) -> int: + """Remove expired entries and return count. + + :keyword ~datetime.datetime or None now: Optional override for the current time (useful for testing). + :returns: The number of expired entries that were removed. + :rtype: int + """ + async with self._lock: + return self._purge_expired_unlocked(now=now) + + @staticmethod + def _compute_expiry(ttl_seconds: int | None) -> datetime | None: + """Compute an absolute expiration timestamp from a TTL. + + :param ttl_seconds: Time-to-live in seconds, or ``None`` for no expiration. + :type ttl_seconds: int | None + :returns: A UTC datetime for the expiry, or ``None`` if *ttl_seconds* is ``None``. + :rtype: ~datetime.datetime | None + :raises ValueError: If *ttl_seconds* is <= 0. + """ + if ttl_seconds is None: + return None + if ttl_seconds <= 0: + raise ValueError("ttl_seconds must be > 0 when set") + return datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds) + + def _apply_ttl_unlocked(self, entry: _StoreEntry, ttl_seconds: int | None) -> None: + """Update entry expiration timestamp when a TTL value is supplied. + + :param entry: The store entry whose expiration to update. + :type entry: _StoreEntry + :param ttl_seconds: Time-to-live in seconds, or ``None`` to leave unchanged. 
+ :type ttl_seconds: int | None
+ :rtype: None
+ """
+ if ttl_seconds is not None:
+ entry.expires_at = self._compute_expiry(ttl_seconds)
+
+ def _purge_expired_unlocked(self, *, now: datetime | None = None) -> int:
+ """Remove expired entries without acquiring the lock.
+
+ Must be called while holding ``self._lock``.
+
+ :keyword ~datetime.datetime or None now: Optional override for the current time (useful for testing).
+ :returns: The number of expired entries that were removed.
+ :rtype: int
+ """
+ current_time = now or datetime.now(timezone.utc)
+ expired_ids = [
+ response_id
+ for response_id, entry in self._entries.items()
+ if entry.expires_at is not None and entry.expires_at <= current_time
+ ]
+
+ for response_id in expired_ids:
+ del self._entries[response_id]
+
+ return len(expired_ids)
+
+ def _store_output_items_unlocked(self, response: Response) -> list[str]:
+ """Extract output items from a response, store them in the item store, and return their IDs.
+
+ Must be called while holding ``self._lock``.
+
+ :param response: The response envelope whose output items should be stored.
+ :type response: ~azure.ai.agentserver.responses.models._generated.Response
+ :returns: Ordered list of output item IDs.
+ :rtype: list[str]
+ """
+ output = getattr(response, "output", None)
+ if not output:
+ return []
+ output_ids: list[str] = []
+ for item in output:
+ item_id = self._extract_item_id(item)
+ if item_id is not None:
+ self._item_store[item_id] = deepcopy(item)
+ output_ids.append(item_id)
+ return output_ids
+
+ @staticmethod
+ def _extract_item_id(item: Any) -> str | None:
+ """Extract item identifier from object-like or mapping-like values.
+
+ Supports both dict-like (``item["id"]``) and attribute-like (``item.id``)
+ access patterns.
+
+ :param item: The item to extract an ID from.
+ :type item: Any
+ :returns: The string ID if found, or ``None``.
+ :rtype: str | None + """ + if item is None: + return None + if isinstance(item, dict): + value = item.get("id") + return str(value) if value is not None else None + value = getattr(item, "id", None) + return str(value) if value is not None else None + + @staticmethod + def _resolve_mode_flags_from_response(response: Response) -> ResponseModeFlags: + """Build mode flags from a response snapshot where available. + + :param response: The response envelope to extract mode flags from. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :returns: Mode flags derived from the response's ``stream``, ``store``, and ``background`` attributes. + :rtype: ~azure.ai.agentserver.responses.models.runtime.ResponseModeFlags + """ + return ResponseModeFlags( + stream=bool(getattr(response, "stream", False)), + store=bool(getattr(response, "store", True)), + background=bool(getattr(response, "background", False)), + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/__init__.py new file mode 100644 index 000000000000..b4e6d02d12bf --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/__init__.py @@ -0,0 +1,23 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Event streaming, SSE encoding, and output item builders.""" + +from ._helpers import ( + EVENT_TYPE, +) +from ._sse import encode_sse_event, encode_sse_payload, encode_keep_alive_comment +from ._state_machine import ( + LifecycleStateMachineError, + normalize_lifecycle_events, + validate_response_event_stream, +) + +__all__ = [ + "EVENT_TYPE", + "LifecycleStateMachineError", + "encode_sse_event", + "encode_sse_payload", + "encode_keep_alive_comment", + "normalize_lifecycle_events", + "validate_response_event_stream", +] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/__init__.py new file mode 100644 index 000000000000..8717a9be4160 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/__init__.py @@ -0,0 +1,53 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Streaming output-item builders aligned with .NET builder semantics.""" + +from ._base import ( + BaseOutputItemBuilder, + BuilderLifecycleState, + OutputItemBuilder, + _require_non_empty, +) +from ._function import ( + OutputItemFunctionCallBuilder, + OutputItemFunctionCallOutputBuilder, +) +from ._message import ( + OutputItemMessageBuilder, + RefusalContentBuilder, + TextContentBuilder, +) +from ._reasoning import ( + OutputItemReasoningItemBuilder, + ReasoningSummaryPartBuilder, +) +from ._tools import ( + OutputItemCodeInterpreterCallBuilder, + OutputItemCustomToolCallBuilder, + OutputItemFileSearchCallBuilder, + OutputItemImageGenCallBuilder, + OutputItemMcpCallBuilder, + OutputItemMcpListToolsBuilder, + OutputItemWebSearchCallBuilder, +) + +__all__ = [ + "BaseOutputItemBuilder", + "BuilderLifecycleState", + "OutputItemBuilder", + "OutputItemCodeInterpreterCallBuilder", + "OutputItemCustomToolCallBuilder", + "OutputItemFileSearchCallBuilder", + "OutputItemFunctionCallBuilder", + "OutputItemFunctionCallOutputBuilder", + "OutputItemImageGenCallBuilder", + "OutputItemMcpCallBuilder", + "OutputItemMcpListToolsBuilder", + "OutputItemMessageBuilder", + "OutputItemReasoningItemBuilder", + "OutputItemWebSearchCallBuilder", + "ReasoningSummaryPartBuilder", + "RefusalContentBuilder", + "TextContentBuilder", + "_require_non_empty", +] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_base.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_base.py new file mode 100644 index 000000000000..c45fdd92a25d --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_base.py @@ -0,0 +1,192 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Base builder infrastructure: lifecycle state, base class, and generic builder.""" + +from __future__ import annotations + +from copy import deepcopy +from enum import Enum +from typing import TYPE_CHECKING, Any + +from ...models import _generated as generated_models + +EVENT_TYPE = generated_models.ResponseStreamEventType + +if TYPE_CHECKING: + from .._event_stream import ResponseEventStream + + +def _require_non_empty(value: str, field_name: str) -> str: + """Validate that a string value is non-empty. + + :param value: The string value to check. + :type value: str + :param field_name: The field name to include in the error message. + :type field_name: str + :returns: The validated non-empty string. + :rtype: str + :raises ValueError: If *value* is not a non-empty string. + """ + if not isinstance(value, str) or not value.strip(): + raise ValueError(f"{field_name} must be a non-empty string") + return value + + +class BuilderLifecycleState(Enum): + NOT_STARTED = "not_started" + ADDED = "added" + DONE = "done" + + +class BaseOutputItemBuilder: + """Base output-item builder with lifecycle guards for added/done events.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, item_id: str) -> None: + """Initialize the base output-item builder. + + :param stream: The parent event stream to emit events into. + :type stream: ResponseEventStream + :param output_index: The zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + """ + self._stream = stream + self._output_index = output_index + self._item_id = item_id + self._lifecycle_state = BuilderLifecycleState.NOT_STARTED + + @property + def item_id(self) -> str: + """Return the output item identifier. + + :returns: The item ID. + :rtype: str + """ + return self._item_id + + @property + def output_index(self) -> int: + """Return the zero-based output index. + + :returns: The output index. 
+ :rtype: int + """ + return self._output_index + + def _ensure_transition(self, expected: BuilderLifecycleState, new_state: BuilderLifecycleState) -> None: + """Guard a lifecycle state transition. + + :param expected: The expected current lifecycle state. + :type expected: BuilderLifecycleState + :param new_state: The target state to transition to. + :type new_state: BuilderLifecycleState + :rtype: None + :raises ValueError: If the current state does not match *expected*. + """ + if self._lifecycle_state is not expected: + raise ValueError( + "cannot transition to " + f"'{new_state.value}' from '{self._lifecycle_state.value}' " + f"(expected '{expected.value}')" + ) + self._lifecycle_state = new_state + + def _emit_added(self, item: dict[str, Any]) -> dict[str, Any]: + """Emit an ``output_item.added`` event with lifecycle guard. + + :param item: The output item dict to include in the event payload. + :type item: dict[str, Any] + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``NOT_STARTED`` state. + """ + self._ensure_transition(BuilderLifecycleState.NOT_STARTED, BuilderLifecycleState.ADDED) + stamped_item = self._stream.with_output_item_defaults(item) + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_OUTPUT_ITEM_ADDED.value, + "payload": { + "output_index": self._output_index, + "item": stamped_item, + }, + } + ) + + def _emit_done(self, item: dict[str, Any]) -> dict[str, Any]: + """Emit an ``output_item.done`` event with lifecycle guard. + + :param item: The completed output item dict to include in the event payload. + :type item: dict[str, Any] + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``ADDED`` state. 
+ """ + self._ensure_transition(BuilderLifecycleState.ADDED, BuilderLifecycleState.DONE) + stamped_item = self._stream.with_output_item_defaults(item) + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_OUTPUT_ITEM_DONE.value, + "payload": { + "output_index": self._output_index, + "item": stamped_item, + }, + } + ) + + def _emit_item_state_event(self, event_type: str, *, extra_payload: dict[str, Any] | None = None) -> dict[str, Any]: + """Emit an item-level state event (e.g., in-progress, searching, completed). + + :param event_type: The event type string. + :type event_type: str + :keyword extra_payload: Optional additional payload fields to merge. + :paramtype extra_payload: dict[str, Any] | None + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + payload: dict[str, Any] = { + "item_id": self._item_id, + "output_index": self._output_index, + } + if extra_payload: + payload.update(deepcopy(extra_payload)) + return self._stream.emit_event({"type": event_type, "payload": payload}) + + +class OutputItemBuilder(BaseOutputItemBuilder): + """Generic output-item builder for item types without dedicated scoped builders.""" + + def _coerce_item(self, item: Any) -> dict[str, Any]: + """Coerce an item to a plain dict. + + :param item: A dict or a generated model with ``as_dict()``. + :type item: Any + :returns: A deep-copied dict representation of the item. + :rtype: dict[str, Any] + :raises TypeError: If *item* is not a dict or model with ``as_dict()``. + """ + if isinstance(item, dict): + return deepcopy(item) + if hasattr(item, "as_dict"): + return deepcopy(item.as_dict()) + raise TypeError("item must be a dict or a generated model with as_dict()") + + def emit_added(self, item: Any) -> dict[str, Any]: + """Emit an ``output_item.added`` event for a generic item. + + :param item: The output item (dict or model with ``as_dict()``). + :type item: Any + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + return self._emit_added(self._coerce_item(item)) + + def emit_done(self, item: Any) -> dict[str, Any]: + """Emit an ``output_item.done`` event for a generic item. + + :param item: The completed output item (dict or model with ``as_dict()``). + :type item: Any + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done(self._coerce_item(item)) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_function.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_function.py new file mode 100644 index 000000000000..549ff9efd19d --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_function.py @@ -0,0 +1,209 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Function call builders: function-call and function-call-output output items.""" + +from __future__ import annotations + +from copy import deepcopy +from typing import TYPE_CHECKING, Any + +from ._base import BaseOutputItemBuilder, EVENT_TYPE, _require_non_empty + +if TYPE_CHECKING: + from .._event_stream import ResponseEventStream + + +class OutputItemFunctionCallBuilder(BaseOutputItemBuilder): + """Scoped builder for a function-call output item in stream mode.""" + + def __init__( + self, + stream: "ResponseEventStream", + output_index: int, + item_id: str, + name: str, + call_id: str, + ) -> None: + """Initialize the function-call output item builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + :param name: The function name being called. + :type name: str + :param call_id: Unique identifier for this function call. 
+ :type call_id: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._name = _require_non_empty(name, "name") + self._call_id = _require_non_empty(call_id, "call_id") + self._final_arguments: str | None = None + + @property + def name(self) -> str: + """Return the function name. + + :returns: The function name. + :rtype: str + """ + return self._name + + @property + def call_id(self) -> str: + """Return the function call identifier. + + :returns: The call ID. + :rtype: str + """ + return self._call_id + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for this function call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "function_call", + "id": self._item_id, + "call_id": self._call_id, + "name": self._name, + "arguments": "", + "status": "in_progress", + } + ) + + def emit_arguments_delta(self, delta: str) -> dict[str, Any]: + """Emit a function-call arguments delta event. + + :param delta: The incremental arguments text fragment. + :type delta: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "delta": delta, + }, + } + ) + + def emit_arguments_done(self, arguments: str) -> dict[str, Any]: + """Emit a function-call arguments done event. + + :param arguments: The final, complete arguments string. + :type arguments: str + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + self._final_arguments = arguments + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "name": self._name, + "arguments": arguments, + }, + } + ) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this function call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "function_call", + "id": self._item_id, + "call_id": self._call_id, + "name": self._name, + "arguments": self._final_arguments or "", + "status": "completed", + } + ) + + +class OutputItemFunctionCallOutputBuilder(BaseOutputItemBuilder): + """Scoped builder for a function-call-output item in stream mode.""" + + def __init__( + self, + stream: "ResponseEventStream", + output_index: int, + item_id: str, + call_id: str, + ) -> None: + """Initialize the function-call-output item builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + :param call_id: The call ID of the function call this output belongs to. + :type call_id: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._call_id = _require_non_empty(call_id, "call_id") + self._final_output: str | list[Any] | None = None + + @property + def call_id(self) -> str: + """Return the function call identifier. + + :returns: The call ID. + :rtype: str + """ + return self._call_id + + def emit_added(self, output: str | list[Any] | None = None) -> dict[str, Any]: + """Emit an ``output_item.added`` event for this function-call output. + + :param output: Optional initial output value. + :type output: str | list[Any] | None + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "function_call_output", + "id": self._item_id, + "call_id": self._call_id, + "output": deepcopy(output) if output is not None else "", + "status": "in_progress", + } + ) + + def emit_done(self, output: str | list[Any] | None = None) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this function-call output. + + :param output: Optional final output value. Uses previously set output if ``None``. + :type output: str | list[Any] | None + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + if output is not None: + self._final_output = deepcopy(output) + + return self._emit_done( + { + "type": "function_call_output", + "id": self._item_id, + "call_id": self._call_id, + "output": deepcopy(self._final_output) if self._final_output is not None else "", + "status": "completed", + } + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_message.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_message.py new file mode 100644 index 000000000000..51ed8cfc32a1 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_message.py @@ -0,0 +1,379 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Message-related builders: text content, refusal content, and message output item.""" + +from __future__ import annotations + +from copy import deepcopy +from typing import TYPE_CHECKING, Any + +from ._base import BaseOutputItemBuilder, BuilderLifecycleState, EVENT_TYPE + +if TYPE_CHECKING: + from .._event_stream import ResponseEventStream + + +class TextContentBuilder: + """Scoped builder for a text content part within an output message item.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, content_index: int, item_id: str) -> None: + """Initialize the text content builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of the parent output item. + :type output_index: int + :param content_index: Zero-based index of this content part. + :type content_index: int + :param item_id: Identifier of the parent output item. + :type item_id: str + """ + self._stream = stream + self._output_index = output_index + self._content_index = content_index + self._item_id = item_id + self._final_text: str | None = None + self._delta_fragments: list[str] = [] + self._annotation_index = 0 + self._lifecycle_state = BuilderLifecycleState.NOT_STARTED + + @property + def final_text(self) -> str | None: + """Return the final merged text, or ``None`` if not yet done. + + :returns: The final text string. + :rtype: str | None + """ + return self._final_text + + @property + def content_index(self) -> int: + """Return the zero-based content part index. + + :returns: The content index. + :rtype: int + """ + return self._content_index + + def emit_added(self) -> dict[str, Any]: + """Emit a ``content_part.added`` event for this text content. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``NOT_STARTED`` state. 
+ """ + if self._lifecycle_state is not BuilderLifecycleState.NOT_STARTED: + raise ValueError(f"cannot call emit_added in '{self._lifecycle_state.value}' state") + self._lifecycle_state = BuilderLifecycleState.ADDED + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_CONTENT_PART_ADDED.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "part": {"type": "output_text", "text": "", "annotations": [], "logprobs": []}, + }, + } + ) + + def emit_delta(self, text: str) -> dict[str, Any]: + if self._lifecycle_state is not BuilderLifecycleState.ADDED: + raise ValueError(f"cannot call emit_delta in '{self._lifecycle_state.value}' state") + self._delta_fragments.append(text) + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_OUTPUT_TEXT_DELTA.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "delta": text, + "logprobs": [], + }, + } + ) + + def emit_done(self, final_text: str | None = None) -> dict[str, Any]: + """Emit a text done event with the merged final text. + + :param final_text: Optional override for the final text; uses merged deltas if ``None``. + :type final_text: str | None + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``ADDED`` state. 
+ """ + if self._lifecycle_state is not BuilderLifecycleState.ADDED: + raise ValueError(f"cannot call emit_done in '{self._lifecycle_state.value}' state") + self._lifecycle_state = BuilderLifecycleState.DONE + merged_text = "".join(self._delta_fragments) + if not merged_text and final_text is not None: + merged_text = final_text + self._final_text = merged_text + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_OUTPUT_TEXT_DONE.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "text": merged_text, + "logprobs": [], + }, + } + ) + + def emit_annotation_added(self, annotation: dict[str, Any]) -> dict[str, Any]: + """Emit a text annotation added event. + + :param annotation: The annotation dict to attach. + :type annotation: dict[str, Any] + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + annotation_index = self._annotation_index + self._annotation_index += 1 + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "annotation_index": annotation_index, + "annotation": deepcopy(annotation), + }, + } + ) + + +class RefusalContentBuilder: + """Scoped builder for a refusal content part within an output message item.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, content_index: int, item_id: str) -> None: + """Initialize the refusal content builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of the parent output item. + :type output_index: int + :param content_index: Zero-based index of this content part. + :type content_index: int + :param item_id: Identifier of the parent output item. 
+ :type item_id: str + """ + self._stream = stream + self._output_index = output_index + self._content_index = content_index + self._item_id = item_id + self._final_refusal: str | None = None + self._lifecycle_state = BuilderLifecycleState.NOT_STARTED + + @property + def final_refusal(self) -> str | None: + """Return the final refusal text, or ``None`` if not yet done. + + :returns: The final refusal string. + :rtype: str | None + """ + return self._final_refusal + + @property + def content_index(self) -> int: + """Return the zero-based content part index. + + :returns: The content index. + :rtype: int + """ + return self._content_index + + def emit_added(self) -> dict[str, Any]: + """Emit a ``content_part.added`` event for this refusal content. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``NOT_STARTED`` state. + """ + if self._lifecycle_state is not BuilderLifecycleState.NOT_STARTED: + raise ValueError(f"cannot call emit_added in '{self._lifecycle_state.value}' state") + self._lifecycle_state = BuilderLifecycleState.ADDED + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_CONTENT_PART_ADDED.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "part": {"type": "refusal", "refusal": ""}, + }, + } + ) + + def emit_delta(self, text: str) -> dict[str, Any]: + """Emit a refusal delta event. + + :param text: The incremental refusal text fragment. + :type text: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_REFUSAL_DELTA.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "delta": text, + }, + } + ) + + def emit_done(self, final_refusal: str) -> dict[str, Any]: + """Emit a refusal done event. 
+ + :param final_refusal: The final, complete refusal text. + :type final_refusal: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``ADDED`` state. + """ + if self._lifecycle_state is not BuilderLifecycleState.ADDED: + raise ValueError(f"cannot call emit_done in '{self._lifecycle_state.value}' state") + self._lifecycle_state = BuilderLifecycleState.DONE + self._final_refusal = final_refusal + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_REFUSAL_DONE.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": self._content_index, + "refusal": final_refusal, + }, + } + ) + + +class OutputItemMessageBuilder(BaseOutputItemBuilder): + """Scoped builder for a message output item in stream mode.""" + + def __init__( + self, + stream: "ResponseEventStream", + output_index: int, + item_id: str, + ) -> None: + """Initialize the message output item builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._content_index = 0 + self._completed_contents: list[dict[str, Any]] = [] + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for this message item. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "output_message", + "id": self._item_id, + "role": "assistant", + "content": [], + "status": "in_progress", + } + ) + + def add_text_content(self) -> TextContentBuilder: + """Create and return a text content part builder. + + :returns: A new text content builder scoped to this message. 
+ :rtype: TextContentBuilder + """ + content_index = self._content_index + self._content_index += 1 + return TextContentBuilder( + stream=self._stream, + output_index=self._output_index, + content_index=content_index, + item_id=self._item_id, + ) + + def add_refusal_content(self) -> RefusalContentBuilder: + """Create and return a refusal content part builder. + + :returns: A new refusal content builder scoped to this message. + :rtype: RefusalContentBuilder + """ + content_index = self._content_index + self._content_index += 1 + return RefusalContentBuilder( + stream=self._stream, + output_index=self._output_index, + content_index=content_index, + item_id=self._item_id, + ) + + def emit_content_done(self, content_builder: TextContentBuilder | RefusalContentBuilder) -> dict[str, Any]: + """Emit a ``content_part.done`` event for a completed content part. + + :param content_builder: The content builder whose final state to emit. + :type content_builder: TextContentBuilder | RefusalContentBuilder + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + if isinstance(content_builder, TextContentBuilder): + part = { + "type": "output_text", + "text": content_builder.final_text or "", + "annotations": [], + "logprobs": [], + } + content_index = content_builder.content_index + else: + part = { + "type": "refusal", + "refusal": content_builder.final_refusal or "", + } + content_index = content_builder.content_index + + self._completed_contents.append(deepcopy(part)) + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_CONTENT_PART_DONE.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "content_index": content_index, + "part": deepcopy(part), + }, + } + ) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this message item. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If no content parts have been completed. 
+ """ + if len(self._completed_contents) == 0: + raise ValueError("message output item requires at least one content part before emit_done") + return self._emit_done( + { + "type": "output_message", + "id": self._item_id, + "role": "assistant", + "content": deepcopy(self._completed_contents), + "status": "completed", + } + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_reasoning.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_reasoning.py new file mode 100644 index 000000000000..372311ba3473 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_reasoning.py @@ -0,0 +1,199 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Reasoning-related builders: summary parts and reasoning output items.""" + +from __future__ import annotations + +from copy import deepcopy +from typing import TYPE_CHECKING, Any + +from ._base import BaseOutputItemBuilder, BuilderLifecycleState, EVENT_TYPE + +if TYPE_CHECKING: + from .._event_stream import ResponseEventStream + + +class ReasoningSummaryPartBuilder: + """Scoped builder for a single reasoning summary part.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, summary_index: int, item_id: str) -> None: + """Initialize the reasoning summary part builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of the parent output item. + :type output_index: int + :param summary_index: Zero-based index of this summary part. + :type summary_index: int + :param item_id: Identifier of the parent output item. 
+ :type item_id: str + """ + self._stream = stream + self._output_index = output_index + self._summary_index = summary_index + self._item_id = item_id + self._final_text: str | None = None + self._lifecycle_state = BuilderLifecycleState.NOT_STARTED + + @property + def final_text(self) -> str | None: + """Return the final summary text, or ``None`` if not yet done. + + :returns: The final text string. + :rtype: str | None + """ + return self._final_text + + @property + def summary_index(self) -> int: + """Return the zero-based summary part index. + + :returns: The summary index. + :rtype: int + """ + return self._summary_index + + def emit_added(self) -> dict[str, Any]: + """Emit a ``reasoning_summary_part.added`` event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``NOT_STARTED`` state. + """ + if self._lifecycle_state is not BuilderLifecycleState.NOT_STARTED: + raise ValueError(f"cannot call emit_added in '{self._lifecycle_state.value}' state") + self._lifecycle_state = BuilderLifecycleState.ADDED + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_REASONING_SUMMARY_PART_ADDED.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "summary_index": self._summary_index, + "part": {"type": "summary_text", "text": ""}, + }, + } + ) + + def emit_text_delta(self, text: str) -> dict[str, Any]: + """Emit a reasoning summary text delta event. + + :param text: The incremental summary text fragment. + :type text: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_REASONING_SUMMARY_TEXT_DELTA.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "summary_index": self._summary_index, + "delta": text, + }, + } + ) + + def emit_text_done(self, final_text: str) -> dict[str, Any]: + """Emit a reasoning summary text done event. 
+ + :param final_text: The final, complete summary text. + :type final_text: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + self._final_text = final_text + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_REASONING_SUMMARY_TEXT_DONE.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "summary_index": self._summary_index, + "text": final_text, + }, + } + ) + + def emit_done(self) -> dict[str, Any]: + """Emit a ``reasoning_summary_part.done`` event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + :raises ValueError: If the builder is not in ``ADDED`` state. + """ + if self._lifecycle_state is not BuilderLifecycleState.ADDED: + raise ValueError(f"cannot call emit_done in '{self._lifecycle_state.value}' state") + self._lifecycle_state = BuilderLifecycleState.DONE + return self._stream.emit_event( + { + "type": EVENT_TYPE.RESPONSE_REASONING_SUMMARY_PART_DONE.value, + "payload": { + "item_id": self._item_id, + "output_index": self._output_index, + "summary_index": self._summary_index, + "part": {"type": "summary_text", "text": self._final_text or ""}, + }, + } + ) + + +class OutputItemReasoningItemBuilder(BaseOutputItemBuilder): + """Scoped builder for reasoning output items with summary part support.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, item_id: str) -> None: + """Initialize the reasoning output item builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._summary_index = 0 + self._completed_summaries: list[dict[str, Any]] = [] + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for this reasoning item. 
+ + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added({"type": "reasoning", "id": self._item_id, "summary": [], "status": "in_progress"}) + + def add_summary_part(self) -> ReasoningSummaryPartBuilder: + """Create and return a reasoning summary part builder. + + :returns: A new summary part builder scoped to this reasoning item. + :rtype: ReasoningSummaryPartBuilder + """ + summary_index = self._summary_index + self._summary_index += 1 + return ReasoningSummaryPartBuilder(self._stream, self._output_index, summary_index, self._item_id) + + def emit_summary_part_done(self, summary_part: ReasoningSummaryPartBuilder) -> None: + """Record a completed summary part for inclusion in the done event. + + :param summary_part: The completed summary part builder. + :type summary_part: ReasoningSummaryPartBuilder + :rtype: None + """ + self._completed_summaries.append({"type": "summary_text", "text": summary_part.final_text or ""}) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this reasoning item. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "reasoning", + "id": self._item_id, + "summary": deepcopy(self._completed_summaries), + "status": "completed", + } + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_tools.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_tools.py new file mode 100644 index 000000000000..6d1c76a733d5 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_builders/_tools.py @@ -0,0 +1,617 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Tool call builders: file search, web search, code interpreter, image gen, MCP, and custom tools.""" + +from __future__ import annotations + +from typing import TYPE_CHECKING, Any + +from ._base import BaseOutputItemBuilder, EVENT_TYPE, _require_non_empty + +if TYPE_CHECKING: + from .._event_stream import ResponseEventStream + + +class OutputItemFileSearchCallBuilder(BaseOutputItemBuilder): + """Scoped builder for file search tool call events.""" + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for a file search call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "file_search_call", + "id": self._item_id, + "status": "in_progress", + "queries": [], + } + ) + + def emit_in_progress(self) -> dict[str, Any]: + """Emit a file-search in-progress state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS.value) + + def emit_searching(self) -> dict[str, Any]: + """Emit a file-search searching state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_FILE_SEARCH_CALL_SEARCHING.value) + + def emit_completed(self) -> dict[str, Any]: + """Emit a file-search completed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_FILE_SEARCH_CALL_COMPLETED.value) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this file search call. + + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + return self._emit_done({"type": "file_search_call", "id": self._item_id, "status": "completed", "queries": []}) + + +class OutputItemWebSearchCallBuilder(BaseOutputItemBuilder): + """Scoped builder for web search tool call events.""" + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for a web search call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added({"type": "web_search_call", "id": self._item_id, "status": "in_progress", "action": {}}) + + def emit_in_progress(self) -> dict[str, Any]: + """Emit a web-search in-progress state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS.value) + + def emit_searching(self) -> dict[str, Any]: + """Emit a web-search searching state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_WEB_SEARCH_CALL_SEARCHING.value) + + def emit_completed(self) -> dict[str, Any]: + """Emit a web-search completed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_WEB_SEARCH_CALL_COMPLETED.value) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this web search call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done({"type": "web_search_call", "id": self._item_id, "status": "completed", "action": {}}) + + +class OutputItemCodeInterpreterCallBuilder(BaseOutputItemBuilder): + """Scoped builder for code interpreter tool call events.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, item_id: str) -> None: + """Initialize the code-interpreter call builder. + + :param stream: The parent event stream. 
+ :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._final_code: str | None = None + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for a code interpreter call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "code_interpreter_call", + "id": self._item_id, + "status": "in_progress", + "container_id": "", + "code": "", + "outputs": [], + } + ) + + def emit_in_progress(self) -> dict[str, Any]: + """Emit a code-interpreter in-progress state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS.value) + + def emit_interpreting(self) -> dict[str, Any]: + """Emit a code-interpreter interpreting state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING.value) + + def emit_code_delta(self, delta: str) -> dict[str, Any]: + """Emit a code-interpreter code delta event. + + :param delta: The incremental code fragment. + :type delta: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA.value, + extra_payload={"delta": delta}, + ) + + def emit_code_done(self, code: str) -> dict[str, Any]: + """Emit a code-interpreter code done event. + + :param code: The final, complete code string. + :type code: str + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + self._final_code = code + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE.value, + extra_payload={"code": code}, + ) + + def emit_completed(self) -> dict[str, Any]: + """Emit a code-interpreter completed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED.value) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this code interpreter call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "code_interpreter_call", + "id": self._item_id, + "status": "completed", + "container_id": "", + "code": self._final_code or "", + "outputs": [], + } + ) + + +class OutputItemImageGenCallBuilder(BaseOutputItemBuilder): + """Scoped builder for image generation tool call events.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, item_id: str) -> None: + """Initialize the image-generation call builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._partial_image_index = 0 + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for an image generation call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "image_generation_call", + "id": self._item_id, + "status": "in_progress", + "result": "", + } + ) + + def emit_in_progress(self) -> dict[str, Any]: + """Emit an image-generation in-progress state event. + + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS.value) + + def emit_generating(self) -> dict[str, Any]: + """Emit an image-generation generating state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_GENERATING.value) + + def emit_partial_image(self, partial_image_b64: str) -> dict[str, Any]: + """Emit a partial image event with base64-encoded image data. + + :param partial_image_b64: Base64-encoded partial image data. + :type partial_image_b64: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + partial_index = self._partial_image_index + self._partial_image_index += 1 + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE.value, + extra_payload={"partial_image_index": partial_index, "partial_image_b64": partial_image_b64}, + ) + + def emit_completed(self) -> dict[str, Any]: + """Emit an image-generation completed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED.value) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this image generation call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "image_generation_call", + "id": self._item_id, + "status": "completed", + "result": "", + } + ) + + +class OutputItemMcpCallBuilder(BaseOutputItemBuilder): + """Scoped builder for MCP tool call events.""" + + def __init__( + self, + stream: "ResponseEventStream", + output_index: int, + item_id: str, + server_label: str, + name: str, + ) -> None: + """Initialize the MCP call builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. 
+ :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + :param server_label: Label identifying the MCP server. + :type server_label: str + :param name: Name of the MCP tool being called. + :type name: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._server_label = _require_non_empty(server_label, "server_label") + self._name = _require_non_empty(name, "name") + self._final_arguments: str | None = None + + @property + def server_label(self) -> str: + """Return the MCP server label. + + :returns: The server label. + :rtype: str + """ + return self._server_label + + @property + def name(self) -> str: + """Return the MCP tool name. + + :returns: The tool name. + :rtype: str + """ + return self._name + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for an MCP call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "mcp_call", + "id": self._item_id, + "server_label": self._server_label, + "name": self._name, + "arguments": "", + "status": "in_progress", + } + ) + + def emit_in_progress(self) -> dict[str, Any]: + """Emit an MCP call in-progress state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_MCP_CALL_IN_PROGRESS.value) + + def emit_arguments_delta(self, delta: str) -> dict[str, Any]: + """Emit an MCP call arguments delta event. + + :param delta: The incremental arguments text fragment. + :type delta: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_MCP_CALL_ARGUMENTS_DELTA.value, + extra_payload={"delta": delta}, + ) + + def emit_arguments_done(self, arguments: str) -> dict[str, Any]: + """Emit an MCP call arguments done event. + + :param arguments: The final, complete arguments string. 
+ :type arguments: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + self._final_arguments = arguments + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_MCP_CALL_ARGUMENTS_DONE.value, + extra_payload={"arguments": arguments}, + ) + + def emit_completed(self) -> dict[str, Any]: + """Emit an MCP call completed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_MCP_CALL_COMPLETED.value) + + def emit_failed(self) -> dict[str, Any]: + """Emit an MCP call failed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_MCP_CALL_FAILED.value) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this MCP call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "mcp_call", + "id": self._item_id, + "server_label": self._server_label, + "name": self._name, + "arguments": self._final_arguments or "", + "status": "completed", + } + ) + + +class OutputItemMcpListToolsBuilder(BaseOutputItemBuilder): + """Scoped builder for MCP list-tools lifecycle events.""" + + def __init__(self, stream: "ResponseEventStream", output_index: int, item_id: str, server_label: str) -> None: + """Initialize the MCP list-tools builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. + :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + :param server_label: Label identifying the MCP server. + :type server_label: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._server_label = _require_non_empty(server_label, "server_label") + + @property + def server_label(self) -> str: + """Return the MCP server label. 
+ + :returns: The server label. + :rtype: str + """ + return self._server_label + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for MCP list-tools. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "mcp_list_tools", + "id": self._item_id, + "server_label": self._server_label, + "tools": [], + } + ) + + def emit_in_progress(self) -> dict[str, Any]: + """Emit an MCP list-tools in-progress state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS.value) + + def emit_completed(self) -> dict[str, Any]: + """Emit an MCP list-tools completed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_MCP_LIST_TOOLS_COMPLETED.value) + + def emit_failed(self) -> dict[str, Any]: + """Emit an MCP list-tools failed state event. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event(EVENT_TYPE.RESPONSE_MCP_LIST_TOOLS_FAILED.value) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for MCP list-tools. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "mcp_list_tools", + "id": self._item_id, + "server_label": self._server_label, + "tools": [], + } + ) + + +class OutputItemCustomToolCallBuilder(BaseOutputItemBuilder): + """Scoped builder for custom tool call events.""" + + def __init__( + self, + stream: "ResponseEventStream", + output_index: int, + item_id: str, + call_id: str, + name: str, + ) -> None: + """Initialize the custom tool call builder. + + :param stream: The parent event stream. + :type stream: ResponseEventStream + :param output_index: Zero-based index of this output item. 
+ :type output_index: int + :param item_id: Unique identifier for this output item. + :type item_id: str + :param call_id: Unique identifier for this tool call. + :type call_id: str + :param name: Name of the custom tool being called. + :type name: str + """ + super().__init__(stream=stream, output_index=output_index, item_id=item_id) + self._call_id = _require_non_empty(call_id, "call_id") + self._name = _require_non_empty(name, "name") + self._final_input: str | None = None + + @property + def call_id(self) -> str: + """Return the tool call identifier. + + :returns: The call ID. + :rtype: str + """ + return self._call_id + + @property + def name(self) -> str: + """Return the custom tool name. + + :returns: The tool name. + :rtype: str + """ + return self._name + + def emit_added(self) -> dict[str, Any]: + """Emit an ``output_item.added`` event for a custom tool call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_added( + { + "type": "custom_tool_call", + "id": self._item_id, + "call_id": self._call_id, + "name": self._name, + "input": "", + } + ) + + def emit_input_delta(self, delta: str) -> dict[str, Any]: + """Emit a custom tool call input delta event. + + :param delta: The incremental input text fragment. + :type delta: str + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA.value, + extra_payload={"delta": delta}, + ) + + def emit_input_done(self, input_text: str) -> dict[str, Any]: + """Emit a custom tool call input done event. + + :param input_text: The final, complete input text. + :type input_text: str + :returns: The emitted event dict. 
+ :rtype: dict[str, Any] + """ + self._final_input = input_text + return self._emit_item_state_event( + EVENT_TYPE.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE.value, + extra_payload={"input": input_text}, + ) + + def emit_done(self) -> dict[str, Any]: + """Emit an ``output_item.done`` event for this custom tool call. + + :returns: The emitted event dict. + :rtype: dict[str, Any] + """ + return self._emit_done( + { + "type": "custom_tool_call", + "id": self._item_id, + "call_id": self._call_id, + "name": self._name, + "input": self._final_input or "", + } + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_event_stream.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_event_stream.py new file mode 100644 index 000000000000..d68f608dae21 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_event_stream.py @@ -0,0 +1,508 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Response event stream builders for lifecycle and output item events.""" + +from __future__ import annotations + +from copy import deepcopy +from datetime import datetime, timezone +from typing import Any + +from . 
import _internals +from ._builders import ( + OutputItemCodeInterpreterCallBuilder, + OutputItemBuilder, + OutputItemCustomToolCallBuilder, + OutputItemFileSearchCallBuilder, + OutputItemFunctionCallBuilder, + OutputItemFunctionCallOutputBuilder, + OutputItemImageGenCallBuilder, + OutputItemMcpCallBuilder, + OutputItemMcpListToolsBuilder, + OutputItemMessageBuilder, + OutputItemReasoningItemBuilder, + OutputItemWebSearchCallBuilder, +) +from .._id_generator import IdGenerator +from ._state_machine import validate_response_event_stream +from ..models import _generated as generated_models + +EVENT_TYPE = generated_models.ResponseStreamEventType + + +class ResponseEventStream: # pylint: disable=too-many-public-methods + """.NET-aligned response event stream with deterministic sequence numbers.""" + + def __init__( + self, + *, + response_id: str | None = None, + agent_reference: dict[str, Any] | None = None, + model: str | None = None, + request: generated_models.CreateResponse | dict[str, Any] | None = None, + response: generated_models.Response | dict[str, Any] | None = None, + ) -> None: + """Initialize a new response event stream. + + :param response_id: Unique identifier for the response. Inferred from *response* if omitted. + :type response_id: str | None + :param agent_reference: Optional agent reference metadata dict. + :type agent_reference: dict[str, Any] | None + :param model: Optional model identifier to stamp on the response. + :type model: str | None + :param request: Optional create-response request to seed the response envelope from. + :type request: ~azure.ai.agentserver.responses.models._generated.CreateResponse | dict[str, Any] | None + :param response: Optional pre-existing response envelope to build upon. + :type response: ~azure.ai.agentserver.responses.models._generated.Response | dict[str, Any] | None + :raises ValueError: If both *request* and *response* are provided, or if *response_id* cannot be resolved. 
+ """ + if request is not None and response is not None: + raise ValueError("request and response cannot both be provided") + + request_mapping = _internals.coerce_model_mapping(request) + response_mapping = _internals.coerce_model_mapping(response) + + resolved_response_id = response_id + if resolved_response_id is None and response_mapping is not None: + candidate_id = response_mapping.get("id") + if isinstance(candidate_id, str) and candidate_id: + resolved_response_id = candidate_id + + if not isinstance(resolved_response_id, str) or not resolved_response_id: + raise ValueError("response_id is required") + + self._response_id = resolved_response_id + + if response_mapping is not None: + payload = deepcopy(response_mapping) + payload["id"] = self._response_id + payload.setdefault("object", "response") + payload.setdefault("output", []) + self._response = generated_models.Response(payload) + else: + self._response = generated_models.Response( + { + "id": self._response_id, + "object": "response", + "output": [], + "created_at": datetime.now(timezone.utc), + } + ) + if request_mapping is not None: + for field_name in ("metadata", "background", "conversation", "previous_response_id"): + value = request_mapping.get(field_name) + if value is not None: + setattr(self._response, field_name, deepcopy(value)) + request_model = request_mapping.get("model") + if isinstance(request_model, str) and request_model: + self._response.model = request_model + request_agent_reference = request_mapping.get("agent_reference") + if isinstance(request_agent_reference, dict): + self._response.agent_reference = deepcopy(request_agent_reference) # type: ignore[assignment] + + if model is not None: + self._response.model = model + + if agent_reference is not None: + self._response.agent_reference = deepcopy(agent_reference) # type: ignore[assignment] + + self._agent_reference = _internals.extract_agent_reference(self._response) + self._model = _internals.extract_model(self._response) + 
self._events: list[dict[str, Any]] = []
+ self._output_index = 0
+
+ @property
+ def response(self) -> generated_models.Response:
+ """Return the current response envelope.
+
+ :returns: The mutable response envelope being built by this stream.
+ :rtype: ~azure.ai.agentserver.responses.models._generated.Response
+ """
+ return self._response
+
+ def emit_queued(self) -> dict[str, Any]:
+ """Emit a ``response.queued`` lifecycle event.
+
+ :returns: The emitted event dict with type and payload.
+ :rtype: dict[str, Any]
+ """
+ self._response.status = "queued"
+ return self.emit_event(
+ {
+ "type": EVENT_TYPE.RESPONSE_QUEUED.value,
+ "payload": self._response_payload(),
+ }
+ )
+
+ def emit_created(self, *, status: str = "in_progress") -> dict[str, Any]:
+ """Emit a ``response.created`` lifecycle event.
+
+ :keyword status: Initial status to set on the response. Defaults to ``"in_progress"``.
+ :paramtype status: str
+ :returns: The emitted event dict with type and payload.
+ :rtype: dict[str, Any]
+ """
+ self._response.status = status # type: ignore[assignment]
+ return self.emit_event(
+ {
+ "type": EVENT_TYPE.RESPONSE_CREATED.value,
+ "payload": self._response_payload(),
+ }
+ )
+
+ def emit_in_progress(self) -> dict[str, Any]:
+ """Emit a ``response.in_progress`` lifecycle event.
+
+ :returns: The emitted event dict with type and payload.
+ :rtype: dict[str, Any]
+ """
+ self._response.status = "in_progress"
+ return self.emit_event(
+ {
+ "type": EVENT_TYPE.RESPONSE_IN_PROGRESS.value,
+ "payload": self._response_payload(),
+ }
+ )
+
+ def emit_completed(self, *, usage: generated_models.ResponseUsage | dict[str, Any] | None = None) -> dict[str, Any]:
+ """Emit a ``response.completed`` terminal lifecycle event.
+
+ :keyword usage: Optional usage statistics to attach to the response.
+ :paramtype usage: ~azure.ai.agentserver.responses.models._generated.ResponseUsage | dict[str, Any] | None
+ :returns: The emitted event dict with type and payload.
+        :rtype: dict[str, Any]
+        """
+        self._response.status = "completed"
+        self._response.error = None  # type: ignore[assignment]
+        self._response.incomplete_details = None  # type: ignore[assignment]
+        self._set_terminal_fields(usage=usage)
+        return self.emit_event(
+            {
+                "type": EVENT_TYPE.RESPONSE_COMPLETED.value,
+                "payload": self._response_payload(),
+            }
+        )
+
+    def emit_failed(
+        self,
+        *,
+        code: str | generated_models.ResponseErrorCode = "server_error",
+        message: str = "An internal server error occurred.",
+        usage: generated_models.ResponseUsage | dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
+        """Emit a ``response.failed`` terminal lifecycle event.
+
+        :keyword code: Error code describing the failure.
+        :paramtype code: str | ~azure.ai.agentserver.responses.models._generated.ResponseErrorCode
+        :keyword message: Human-readable error message.
+        :paramtype message: str
+        :keyword usage: Optional usage statistics to attach to the response.
+        :paramtype usage: ~azure.ai.agentserver.responses.models._generated.ResponseUsage | dict[str, Any] | None
+        :returns: The emitted event dict with type and payload.
+        :rtype: dict[str, Any]
+        """
+        self._response.status = "failed"
+        self._response.incomplete_details = None  # type: ignore[assignment]
+        self._response.error = generated_models.ResponseError(
+            {
+                "code": _internals.enum_value(code),
+                "message": message,
+            }
+        )
+        self._set_terminal_fields(usage=usage)
+        return self.emit_event(
+            {
+                "type": EVENT_TYPE.RESPONSE_FAILED.value,
+                "payload": self._response_payload(),
+            }
+        )
+
+    def emit_incomplete(
+        self,
+        *,
+        reason: str | generated_models.ResponseIncompleteReason | None = None,
+        usage: generated_models.ResponseUsage | dict[str, Any] | None = None,
+    ) -> dict[str, Any]:
+        """Emit a ``response.incomplete`` terminal lifecycle event.
+
+        :keyword reason: Optional reason for incompleteness.
+        :paramtype reason: str | ~azure.ai.agentserver.responses.models._generated.ResponseIncompleteReason
+            | None
+        :keyword usage: Optional usage statistics to attach to the response.
+        :paramtype usage: ~azure.ai.agentserver.responses.models._generated.ResponseUsage | dict[str, Any]
+            | None
+        :returns: The emitted event dict with type and payload.
+        :rtype: dict[str, Any]
+        """
+        self._response.status = "incomplete"
+        self._response.error = None  # type: ignore[assignment]
+        if reason is None:
+            self._response.incomplete_details = None  # type: ignore[assignment]
+        else:
+            self._response.incomplete_details = generated_models.ResponseIncompleteDetails(
+                {
+                    "reason": _internals.enum_value(reason),
+                }
+            )
+        self._set_terminal_fields(usage=usage)
+        return self.emit_event(
+            {
+                "type": EVENT_TYPE.RESPONSE_INCOMPLETE.value,
+                "payload": self._response_payload(),
+            }
+        )
+
+    def add_output_item(self, item_id: str) -> OutputItemBuilder:
+        """Add a generic output item and return its builder.
+
+        :param item_id: Unique identifier for the output item.
+        :type item_id: str
+        :returns: A builder for emitting added/done events for the output item.
+        :rtype: OutputItemBuilder
+        :raises TypeError: If *item_id* is None.
+        :raises ValueError: If *item_id* is empty or has an invalid format.
+        """
+        if item_id is None:
+            raise TypeError("item_id must not be None")
+        if not isinstance(item_id, str) or not item_id.strip():
+            raise ValueError("item_id must be a non-empty string")
+
+        is_valid_id, error = IdGenerator.is_valid(item_id)
+        if not is_valid_id:
+            raise ValueError(f"invalid item_id '{item_id}': {error}")
+
+        output_index = self._output_index
+        self._output_index += 1
+        return OutputItemBuilder(self, output_index=output_index, item_id=item_id)
+
+    def add_output_item_message(self) -> OutputItemMessageBuilder:
+        """Add a message output item and return its scoped builder.
+
+        :returns: A builder for emitting message content, text deltas, and lifecycle events.
+ :rtype: OutputItemMessageBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_message_item_id(self._response_id) + return OutputItemMessageBuilder(self, output_index=output_index, item_id=item_id) + + def add_output_item_function_call(self, name: str, call_id: str) -> OutputItemFunctionCallBuilder: + """Add a function-call output item and return its scoped builder. + + :param name: The function name being called. + :type name: str + :param call_id: Unique identifier for this function call. + :type call_id: str + :returns: A builder for emitting function-call argument deltas and lifecycle events. + :rtype: OutputItemFunctionCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_function_call_item_id(self._response_id) + return OutputItemFunctionCallBuilder( + self, + output_index=output_index, + item_id=item_id, + name=name, + call_id=call_id, + ) + + def add_output_item_function_call_output(self, call_id: str) -> OutputItemFunctionCallOutputBuilder: + """Add a function-call-output item and return its scoped builder. + + :param call_id: The call ID of the function call this output belongs to. + :type call_id: str + :returns: A builder for emitting function-call output lifecycle events. + :rtype: OutputItemFunctionCallOutputBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_function_call_output_item_id(self._response_id) + return OutputItemFunctionCallOutputBuilder( + self, + output_index=output_index, + item_id=item_id, + call_id=call_id, + ) + + def add_output_item_reasoning_item(self) -> OutputItemReasoningItemBuilder: + """Add a reasoning output item and return its scoped builder. + + :returns: A builder for emitting reasoning summary parts and lifecycle events. 
+ :rtype: OutputItemReasoningItemBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_reasoning_item_id(self._response_id) + return OutputItemReasoningItemBuilder(self, output_index=output_index, item_id=item_id) + + def add_output_item_file_search_call(self) -> OutputItemFileSearchCallBuilder: + """Add a file-search tool call output item and return its scoped builder. + + :returns: A builder for emitting file-search call lifecycle events. + :rtype: OutputItemFileSearchCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_file_search_call_item_id(self._response_id) + return OutputItemFileSearchCallBuilder(self, output_index=output_index, item_id=item_id) + + def add_output_item_web_search_call(self) -> OutputItemWebSearchCallBuilder: + """Add a web-search tool call output item and return its scoped builder. + + :returns: A builder for emitting web-search call lifecycle events. + :rtype: OutputItemWebSearchCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_web_search_call_item_id(self._response_id) + return OutputItemWebSearchCallBuilder(self, output_index=output_index, item_id=item_id) + + def add_output_item_code_interpreter_call(self) -> OutputItemCodeInterpreterCallBuilder: + """Add a code-interpreter tool call output item and return its scoped builder. + + :returns: A builder for emitting code-interpreter call lifecycle events. + :rtype: OutputItemCodeInterpreterCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_code_interpreter_call_item_id(self._response_id) + return OutputItemCodeInterpreterCallBuilder(self, output_index=output_index, item_id=item_id) + + def add_output_item_image_gen_call(self) -> OutputItemImageGenCallBuilder: + """Add an image-generation tool call output item and return its scoped builder. 
+ + :returns: A builder for emitting image-generation call lifecycle events. + :rtype: OutputItemImageGenCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_image_gen_call_item_id(self._response_id) + return OutputItemImageGenCallBuilder(self, output_index=output_index, item_id=item_id) + + def add_output_item_mcp_call(self, server_label: str, name: str) -> OutputItemMcpCallBuilder: + """Add an MCP tool call output item and return its scoped builder. + + :param server_label: Label identifying the MCP server. + :type server_label: str + :param name: Name of the MCP tool being called. + :type name: str + :returns: A builder for emitting MCP call argument deltas and lifecycle events. + :rtype: OutputItemMcpCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_mcp_call_item_id(self._response_id) + return OutputItemMcpCallBuilder( + self, + output_index=output_index, + item_id=item_id, + server_label=server_label, + name=name, + ) + + def add_output_item_mcp_list_tools(self, server_label: str) -> OutputItemMcpListToolsBuilder: + """Add an MCP list-tools output item and return its scoped builder. + + :param server_label: Label identifying the MCP server. + :type server_label: str + :returns: A builder for emitting MCP list-tools lifecycle events. + :rtype: OutputItemMcpListToolsBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_mcp_list_tools_item_id(self._response_id) + return OutputItemMcpListToolsBuilder( + self, + output_index=output_index, + item_id=item_id, + server_label=server_label, + ) + + def add_output_item_custom_tool_call(self, call_id: str, name: str) -> OutputItemCustomToolCallBuilder: + """Add a custom tool call output item and return its scoped builder. + + :param call_id: Unique identifier for this tool call. + :type call_id: str + :param name: Name of the custom tool being called. 
+ :type name: str + :returns: A builder for emitting custom tool call input deltas and lifecycle events. + :rtype: OutputItemCustomToolCallBuilder + """ + output_index = self._output_index + self._output_index += 1 + item_id = IdGenerator.new_custom_tool_call_item_id(self._response_id) + return OutputItemCustomToolCallBuilder( + self, + output_index=output_index, + item_id=item_id, + call_id=call_id, + name=name, + ) + + def events(self) -> list[dict[str, Any]]: + """Return a deep copy of all events emitted so far (before finalization). + + :returns: A list of deep-copied event dicts. + :rtype: list[dict[str, Any]] + """ + return [deepcopy(event) for event in self._events] + + def emit_event(self, event: dict[str, Any]) -> dict[str, Any]: + """Emit a single event, applying defaults and validating the stream. + + :param event: The raw event dict to emit. + :type event: dict[str, Any] + :returns: A deep copy of the normalized and validated event. + :rtype: dict[str, Any] + """ + candidate = deepcopy(event) + _internals.apply_common_defaults( + [candidate], + response_id=self._response_id, + agent_reference=self._agent_reference, + model=self._model + ) + _internals.track_completed_output_item(self._response, candidate) + payload = candidate.get("payload") + if isinstance(payload, dict): + payload["sequence_number"] = len(self._events) + + candidate = _internals.coerce_event_with_generated_class(candidate) + + self._events.append(candidate) + validate_response_event_stream(self._events) + return deepcopy(candidate) + + def _response_payload(self) -> dict[str, Any]: + """Serialize the current response envelope to a plain dict. + + :returns: A materialized dict representation of the response. + :rtype: dict[str, Any] + """ + return _internals.materialize_generated_payload(self._response.as_dict()) + + def with_output_item_defaults(self, item: dict[str, Any]) -> dict[str, Any]: + """Stamp an output item dict with response-level defaults. 
+
+        :param item: The item dict to stamp.
+        :type item: dict[str, Any]
+        :returns: A deep copy of the item with ``response_id`` and ``agent_reference`` defaults applied.
+        :rtype: dict[str, Any]
+        """
+        stamped = deepcopy(item)
+        stamped.setdefault("response_id", self._response_id)
+        if self._agent_reference is not None:
+            stamped.setdefault("agent_reference", deepcopy(self._agent_reference))
+        return stamped
+
+    def _set_terminal_fields(self, *, usage: generated_models.ResponseUsage | dict[str, Any] | None) -> None:
+        """Set terminal fields on the response envelope (completed_at, usage, output_text).
+
+        :keyword usage: Optional usage statistics to attach.
+        :paramtype usage: ~azure.ai.agentserver.responses.models._generated.ResponseUsage | dict[str, Any] | None
+        :rtype: None
+        """
+        # B6: completed_at is non-null only for completed status
+        if self._response.status == "completed":
+            self._response.completed_at = datetime.now(timezone.utc)
+        else:
+            self._response.completed_at = None  # type: ignore[assignment]
+        self._response.usage = _internals.coerce_usage(usage)
+        self._response.output_text = _internals.compute_output_text(self._response)
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_helpers.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_helpers.py
new file mode 100644
index 000000000000..d70bc3052a20
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_helpers.py
@@ -0,0 +1,199 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Event coercion, defaults application, and snapshot extraction helpers."""
+
+from __future__ import annotations
+
+from copy import deepcopy
+from typing import Any, AsyncIterator
+
+from . 
import _internals
+from ._event_stream import ResponseEventStream
+from ._internals import _RESPONSE_SNAPSHOT_EVENT_TYPES
+from ._sse import encode_sse_payload
+from ..models import _generated as generated_models
+
+EVENT_TYPE = generated_models.ResponseStreamEventType
+
+
+def _build_events(
+    response_id: str,
+    *,
+    include_progress: bool,
+    agent_reference: dict[str, Any],
+    model: str | None,
+) -> list[dict[str, Any]]:
+    """Build a minimal lifecycle event sequence for a response.
+
+    :param response_id: Unique identifier for the response.
+    :type response_id: str
+    :keyword include_progress: Whether to include an ``in_progress`` event.
+    :paramtype include_progress: bool
+    :keyword agent_reference: Agent reference metadata dict.
+    :paramtype agent_reference: dict[str, Any]
+    :keyword model: Optional model identifier.
+    :paramtype model: str | None
+    :returns: A list of event dicts containing created and completed (and optionally in_progress) events.
+    :rtype: list[dict[str, Any]]
+    """
+    stream = ResponseEventStream(
+        response_id=response_id,
+        agent_reference=agent_reference,
+        model=model,
+    )
+    events = [stream.emit_created(status="queued")]
+    if include_progress:
+        events.append(stream.emit_in_progress())
+    events.append(stream.emit_completed())
+    return events
+
+
+async def _encode_sse(events: list[dict[str, Any]]) -> AsyncIterator[str]:
+    """Encode a list of event dicts as SSE-formatted strings.
+
+    :param events: The event dicts to encode.
+    :type events: list[dict[str, Any]]
+    :returns: An async iterator yielding SSE-formatted strings.
+    :rtype: AsyncIterator[str]
+    """
+    for event in events:
+        yield encode_sse_payload(event["type"], event["payload"])
+
+
+def _coerce_handler_event(handler_event: Any) -> dict[str, Any]:
+    """Coerce a handler event to a normalized ``{"type": ..., "payload": ...}`` dict.
+
+    :param handler_event: The event to normalize (dict or model with ``as_dict()``).
+ :type handler_event: Any + :returns: A normalized event dict with ``type`` and ``payload`` keys. + :rtype: dict[str, Any] + :raises TypeError: If the event is not a dict or a model with ``as_dict()``. + :raises ValueError: If the event does not include a non-empty ``type``. + """ + if isinstance(handler_event, dict): + event_data = deepcopy(handler_event) + elif hasattr(handler_event, "as_dict"): + event_data = handler_event.as_dict() + else: + raise TypeError("handler events must be dictionaries or generated event models") + + event_type = event_data.get("type") + if not isinstance(event_type, str) or not event_type: + raise ValueError("handler event must include a non-empty 'type'") + + payload = event_data.get("payload") + if isinstance(payload, dict): + normalized_payload = deepcopy(payload) + else: + normalized_payload = {key: deepcopy(value) for key, value in event_data.items() if key != "type"} + + return {"type": event_type, "payload": normalized_payload} + + +def _apply_stream_event_defaults( + event: dict[str, Any], + *, + response_id: str, + agent_reference: dict[str, Any], + model: str | None, + sequence_number: int | None, +) -> dict[str, Any]: + """Apply response-level defaults to an event payload. + + For lifecycle events whose payload is a ``Response`` snapshot + (``response.created``, ``response.queued``, ``response.in_progress``, + ``response.completed``, ``response.failed``, ``response.incomplete``), + stamps ``id``, ``response_id``, ``object``, ``agent_reference``, and + ``model`` using ``setdefault`` so handler-supplied values are not overwritten. + For all other event types the payload is left untouched — those events have + different schemas per the contract and do not carry these fields. + + ``sequence_number`` is always applied (or removed) regardless of event type, + because it lives on the ``ResponseStreamEvent`` base class. + + :param event: The event dict to enrich. 
+    :type event: dict[str, Any]
+    :keyword response_id: Response ID to stamp in lifecycle-event payloads.
+    :paramtype response_id: str
+    :keyword agent_reference: Agent reference metadata dict.
+    :paramtype agent_reference: dict[str, Any]
+    :keyword model: Optional model identifier.
+    :paramtype model: str | None
+    :keyword sequence_number: Optional sequence number to set; removed if ``None``.
+    :paramtype sequence_number: int | None
+    :returns: A deep copy of the event with defaults applied.
+    :rtype: dict[str, Any]
+    """
+    normalized = deepcopy(event)
+    # Delegate lifecycle-event stamping to the canonical implementation in _internals.
+    _internals.apply_common_defaults(
+        [normalized],
+        response_id=response_id,
+        agent_reference=agent_reference if agent_reference else {},
+        model=model,
+    )
+    payload = normalized.get("payload")
+    if not isinstance(payload, dict):
+        payload = {}
+        normalized["payload"] = payload
+    if sequence_number is not None:
+        payload["sequence_number"] = sequence_number
+    else:
+        payload.pop("sequence_number", None)
+    return normalized
+
+
+def _extract_response_snapshot_from_events(
+    events: list[dict[str, Any]],
+    *,
+    response_id: str,
+    agent_reference: dict[str, Any],
+    model: str | None,
+    remove_sequence_number: bool = False,
+) -> dict[str, Any]:
+    """Extract the latest response snapshot payload from a list of events.
+
+    Scans events in reverse for the most recent response-level lifecycle event
+    and returns its payload enriched with defaults. Falls back to building a
+    synthetic completed lifecycle if no snapshot event is found.
+
+    :param events: The event stream to search.
+    :type events: list[dict[str, Any]]
+    :keyword response_id: Response ID for default stamping.
+    :paramtype response_id: str
+    :keyword agent_reference: Agent reference metadata dict.
+    :paramtype agent_reference: dict[str, Any]
+    :keyword model: Optional model identifier.
+    :paramtype model: str | None
+    :keyword remove_sequence_number: Whether to strip ``sequence_number`` from the result.
+    :paramtype remove_sequence_number: bool
+    :returns: A dict representing the response snapshot payload.
+    :rtype: dict[str, Any]
+    """
+    for event in reversed(events):
+        event_type = event.get("type")
+        payload = event.get("payload")
+        if event_type in _RESPONSE_SNAPSHOT_EVENT_TYPES and isinstance(payload, dict):
+            snapshot = deepcopy(payload)
+            snapshot.setdefault("id", response_id)
+            snapshot.setdefault("response_id", response_id)
+            snapshot.setdefault("agent_reference", deepcopy(agent_reference))
+            snapshot.setdefault("object", "response")
+            snapshot.setdefault("output", [])
+            if model is not None:
+                snapshot.setdefault("model", model)
+            if remove_sequence_number:
+                snapshot.pop("sequence_number", None)
+            return snapshot
+
+    fallback_events = _build_events(
+        response_id,
+        include_progress=True,
+        agent_reference=agent_reference,
+        model=model,
+    )
+    fallback_payload = deepcopy(fallback_events[-1]["payload"])
+    fallback_payload.setdefault("output", [])
+    if remove_sequence_number:
+        fallback_payload.pop("sequence_number", None)
+    return fallback_payload
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_internals.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_internals.py
new file mode 100644
index 000000000000..4f8226cb60ae
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_internals.py
@@ -0,0 +1,360 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Internal helper functions extracted from ResponseEventStream.
+
+These are pure or near-pure functions that operate on event dicts
+and generated model objects. They carry no mutable state of their own.
+""" + +from __future__ import annotations + +from copy import deepcopy +from types import GeneratorType +from typing import Any + +from ..models import _generated as generated_models + +EVENT_TYPE = generated_models.ResponseStreamEventType + + +# Event types whose payload is a full Response snapshot. +# Only these events should carry id/response_id/object/agent_reference/model. +_RESPONSE_SNAPSHOT_EVENT_TYPES: frozenset[str] = frozenset({ + EVENT_TYPE.RESPONSE_QUEUED.value, + EVENT_TYPE.RESPONSE_CREATED.value, + EVENT_TYPE.RESPONSE_IN_PROGRESS.value, + EVENT_TYPE.RESPONSE_COMPLETED.value, + EVENT_TYPE.RESPONSE_FAILED.value, + EVENT_TYPE.RESPONSE_INCOMPLETE.value, +}) + +_EVENT_MODEL_CLASS_NAMES: dict[str, str] = { + EVENT_TYPE.RESPONSE_QUEUED.value: "ResponseQueuedEvent", + EVENT_TYPE.RESPONSE_CREATED.value: "ResponseCreatedEvent", + EVENT_TYPE.RESPONSE_IN_PROGRESS.value: "ResponseInProgressEvent", + EVENT_TYPE.RESPONSE_COMPLETED.value: "ResponseCompletedEvent", + EVENT_TYPE.RESPONSE_FAILED.value: "ResponseFailedEvent", + EVENT_TYPE.RESPONSE_INCOMPLETE.value: "ResponseIncompleteEvent", + EVENT_TYPE.RESPONSE_CONTENT_PART_ADDED.value: "ResponseContentPartAddedEvent", + EVENT_TYPE.RESPONSE_CONTENT_PART_DONE.value: "ResponseContentPartDoneEvent", + EVENT_TYPE.RESPONSE_OUTPUT_TEXT_DELTA.value: "ResponseTextDeltaEvent", + EVENT_TYPE.RESPONSE_OUTPUT_TEXT_DONE.value: "ResponseTextDoneEvent", + EVENT_TYPE.RESPONSE_OUTPUT_TEXT_ANNOTATION_ADDED.value: "ResponseOutputTextAnnotationAddedEvent", + EVENT_TYPE.RESPONSE_FUNCTION_CALL_ARGUMENTS_DELTA.value: "ResponseFunctionCallArgumentsDeltaEvent", + EVENT_TYPE.RESPONSE_FUNCTION_CALL_ARGUMENTS_DONE.value: "ResponseFunctionCallArgumentsDoneEvent", + EVENT_TYPE.RESPONSE_REFUSAL_DELTA.value: "ResponseRefusalDeltaEvent", + EVENT_TYPE.RESPONSE_REFUSAL_DONE.value: "ResponseRefusalDoneEvent", + EVENT_TYPE.RESPONSE_REASONING_SUMMARY_PART_ADDED.value: "ResponseReasoningSummaryPartAddedEvent", + 
EVENT_TYPE.RESPONSE_REASONING_SUMMARY_PART_DONE.value: "ResponseReasoningSummaryPartDoneEvent", + EVENT_TYPE.RESPONSE_REASONING_SUMMARY_TEXT_DELTA.value: "ResponseReasoningSummaryTextDeltaEvent", + EVENT_TYPE.RESPONSE_REASONING_SUMMARY_TEXT_DONE.value: "ResponseReasoningSummaryTextDoneEvent", + EVENT_TYPE.RESPONSE_FILE_SEARCH_CALL_IN_PROGRESS.value: "ResponseFileSearchCallInProgressEvent", + EVENT_TYPE.RESPONSE_FILE_SEARCH_CALL_SEARCHING.value: "ResponseFileSearchCallSearchingEvent", + EVENT_TYPE.RESPONSE_FILE_SEARCH_CALL_COMPLETED.value: "ResponseFileSearchCallCompletedEvent", + EVENT_TYPE.RESPONSE_WEB_SEARCH_CALL_IN_PROGRESS.value: "ResponseWebSearchCallInProgressEvent", + EVENT_TYPE.RESPONSE_WEB_SEARCH_CALL_SEARCHING.value: "ResponseWebSearchCallSearchingEvent", + EVENT_TYPE.RESPONSE_WEB_SEARCH_CALL_COMPLETED.value: "ResponseWebSearchCallCompletedEvent", + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_IN_PROGRESS.value: "ResponseCodeInterpreterCallInProgressEvent", + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_INTERPRETING.value: "ResponseCodeInterpreterCallInterpretingEvent", + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_COMPLETED.value: "ResponseCodeInterpreterCallCompletedEvent", + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_CODE_DELTA.value: "ResponseCodeInterpreterCallCodeDeltaEvent", + EVENT_TYPE.RESPONSE_CODE_INTERPRETER_CALL_CODE_DONE.value: "ResponseCodeInterpreterCallCodeDoneEvent", + EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_IN_PROGRESS.value: "ResponseImageGenCallInProgressEvent", + EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_GENERATING.value: "ResponseImageGenCallGeneratingEvent", + EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_PARTIAL_IMAGE.value: "ResponseImageGenCallPartialImageEvent", + EVENT_TYPE.RESPONSE_IMAGE_GENERATION_CALL_COMPLETED.value: "ResponseImageGenCallCompletedEvent", + EVENT_TYPE.RESPONSE_MCP_CALL_IN_PROGRESS.value: "ResponseMCPCallInProgressEvent", + EVENT_TYPE.RESPONSE_MCP_CALL_COMPLETED.value: "ResponseMCPCallCompletedEvent", + 
EVENT_TYPE.RESPONSE_MCP_CALL_FAILED.value: "ResponseMCPCallFailedEvent", + EVENT_TYPE.RESPONSE_MCP_CALL_ARGUMENTS_DELTA.value: "ResponseMCPCallArgumentsDeltaEvent", + EVENT_TYPE.RESPONSE_MCP_CALL_ARGUMENTS_DONE.value: "ResponseMCPCallArgumentsDoneEvent", + EVENT_TYPE.RESPONSE_MCP_LIST_TOOLS_IN_PROGRESS.value: "ResponseMCPListToolsInProgressEvent", + EVENT_TYPE.RESPONSE_MCP_LIST_TOOLS_COMPLETED.value: "ResponseMCPListToolsCompletedEvent", + EVENT_TYPE.RESPONSE_MCP_LIST_TOOLS_FAILED.value: "ResponseMCPListToolsFailedEvent", + EVENT_TYPE.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DELTA.value: "ResponseCustomToolCallInputDeltaEvent", + EVENT_TYPE.RESPONSE_CUSTOM_TOOL_CALL_INPUT_DONE.value: "ResponseCustomToolCallInputDoneEvent", +} + + +# --------------------------------------------------------------------------- +# Pure / near-pure helpers +# --------------------------------------------------------------------------- + + +def enum_value(value: Any) -> Any: + """Return the ``.value`` of an enum member, or the value itself. + + :param value: An enum member or a plain value. + :type value: Any + :returns: The ``.value`` attribute if present, otherwise *value* unchanged. + :rtype: Any + """ + return getattr(value, "value", value) + + +def coerce_model_mapping(value: Any) -> dict[str, Any] | None: + """Normalise a generated model, dict, or ``None`` to a plain dict copy. + + :param value: A generated model, a dict, or ``None``. + :type value: Any + :returns: A deep-copied plain dict, or ``None`` if *value* is ``None`` or not coercible. + :rtype: dict[str, Any] | None + """ + if value is None: + return None + if isinstance(value, dict): + return deepcopy(value) + if hasattr(value, "as_dict"): + payload = value.as_dict() + if isinstance(payload, dict): + return deepcopy(payload) + return None + + +def materialize_generated_payload(value: Any) -> Any: + """Recursively resolve generators/tuples to plain lists/dicts. 
+ + :param value: A nested structure that may contain generators or tuples. + :type value: Any + :returns: A fully materialized structure using only dicts and lists. + :rtype: Any + """ + if isinstance(value, dict): + return {key: materialize_generated_payload(item) for key, item in value.items()} + if isinstance(value, list): + return [materialize_generated_payload(item) for item in value] + if isinstance(value, tuple): + return [materialize_generated_payload(item) for item in value] + if isinstance(value, GeneratorType): + return [materialize_generated_payload(item) for item in value] + return value + + +def coerce_event_with_generated_class(event: dict[str, Any]) -> dict[str, Any]: + """Round-trip an event dict through the corresponding generated model class. + + If the event type has a known generated model class, the event is + serialized into that class and back to ensure canonical shape. + + :param event: The event dict to coerce. + :type event: dict[str, Any] + :returns: The coerced event dict, or the original if no matching class is found. 
+    :rtype: dict[str, Any]
+    """
+    event_type = event.get("type")
+    if not isinstance(event_type, str) or not event_type:
+        return event
+
+    class_name = _EVENT_MODEL_CLASS_NAMES.get(event_type)
+    if class_name is None:
+        return event
+
+    event_class = getattr(generated_models, class_name, None)
+    if event_class is None:
+        return event
+
+    payload = event.get("payload")
+    flattened: dict[str, Any] = {"type": event_type}
+    if isinstance(payload, dict):
+        flattened.update(deepcopy(payload))
+
+    try:
+        model_event = event_class(flattened)
+        model_data = materialize_generated_payload(model_event.as_dict())
+        model_type = model_data.pop("type", event_type)
+        return {"type": model_type, "payload": model_data}
+    except Exception:  # pylint: disable=broad-exception-caught
+        return event
+
+
+def apply_common_defaults(
+    events: list[dict[str, Any]],
+    *,
+    response_id: str,
+    agent_reference: dict[str, Any] | None,
+    model: str | None,
+) -> None:
+    """Stamp lifecycle event payloads with response-level defaults.
+
+    Only events whose payload is a ``Response`` snapshot
+    (``response.queued``, ``response.created``, ``response.in_progress``,
+    ``response.completed``, ``response.failed``, ``response.incomplete``)
+    receive ``id``, ``response_id``, ``object``, ``agent_reference``, and
+    ``model`` defaults. Other event types carry different schemas per the
+    contract and are left untouched.
+
+    :param events: The list of event dicts to mutate.
+    :type events: list[dict[str, Any]]
+    :keyword response_id: Response ID to set as default.
+    :paramtype response_id: str
+    :keyword agent_reference: Optional agent reference metadata dict.
+    :paramtype agent_reference: dict[str, Any] | None
+    :keyword model: Optional model identifier.
+    :paramtype model: str | None
+    :rtype: None
+    """
+    for event in events:
+        event_type = event.get("type")
+        if event_type not in _RESPONSE_SNAPSHOT_EVENT_TYPES:
+            continue
+        payload = event.get("payload")
+        if not isinstance(payload, dict):
+            payload = {}
+            event["payload"] = payload
+        payload.setdefault("id", response_id)
+        payload.setdefault("response_id", response_id)
+        payload.setdefault("object", "response")
+        if agent_reference is not None:
+            payload.setdefault("agent_reference", deepcopy(agent_reference))
+        if model is not None:
+            payload.setdefault("model", model)
+
+
+def assign_sequence_numbers(events: list[dict[str, Any]]) -> None:
+    """Assign deterministic ``sequence_number`` to every payload.
+
+    Mutates each event's payload in-place, setting ``sequence_number``
+    to the event's zero-based index.
+
+    :param events: The list of event dicts to mutate.
+    :type events: list[dict[str, Any]]
+    :rtype: None
+    """
+    for index, event in enumerate(events):
+        payload = event.get("payload")
+        if isinstance(payload, dict):
+            payload["sequence_number"] = index
+
+
+def track_completed_output_item(
+    response: generated_models.Response,
+    event: dict[str, Any],
+) -> None:
+    """When an output-item-done event arrives, persist the item on the response.
+
+    Checks if the event is of type ``response.output_item.done`` and, if so,
+    stores the item at the appropriate index in ``response.output``.
+
+    :param response: The response envelope to which the completed item is attached.
+    :type response: ~azure.ai.agentserver.responses.models._generated.Response
+    :param event: The event dict to inspect.
+ :type event: dict[str, Any] + :rtype: None + """ + if event.get("type") != EVENT_TYPE.RESPONSE_OUTPUT_ITEM_DONE.value: + return + + payload = event.get("payload") + if not isinstance(payload, dict): + return + + output_index = payload.get("output_index") + item = payload.get("item") + if not isinstance(output_index, int) or output_index < 0 or not isinstance(item, dict): + return + + output_items: list[Any] = response.output if isinstance(response.output, list) else [] + if not isinstance(response.output, list): + response.output = output_items + + try: + typed_item: Any = generated_models.OutputItem(deepcopy(item)) + except Exception: # pylint: disable=broad-exception-caught + typed_item = deepcopy(item) + + while len(output_items) <= output_index: + output_items.append(None) + + output_items[output_index] = typed_item + + +def coerce_usage( + usage: generated_models.ResponseUsage | dict[str, Any] | None, +) -> generated_models.ResponseUsage | None: + """Normalise a usage value to a generated ``ResponseUsage`` instance. + + :param usage: A usage dict, a ``ResponseUsage`` model, or ``None``. + :type usage: ~azure.ai.agentserver.responses.models._generated.ResponseUsage | dict[str, Any] | None + :returns: A ``ResponseUsage`` instance, or ``None`` if *usage* is ``None``. + :rtype: ~azure.ai.agentserver.responses.models._generated.ResponseUsage | None + :raises TypeError: If *usage* is not a dict or a generated ``ResponseUsage`` model. + """ + if usage is None: + return None + if isinstance(usage, dict): + return generated_models.ResponseUsage(deepcopy(usage)) + if hasattr(usage, "as_dict"): + return generated_models.ResponseUsage(deepcopy(usage.as_dict())) + raise TypeError("usage must be a dict or a generated ResponseUsage model") + + +def compute_output_text(response: generated_models.Response) -> str | None: + """Concatenate all ``output_text`` content parts from message output items. + + :param response: The response envelope whose output items to scan. 
+ :type response: ~azure.ai.agentserver.responses.models._generated.Response + :returns: Concatenated text from all ``output_text`` content parts, or ``None`` if none found. + :rtype: str | None + """ + output = response.output + if not isinstance(output, list): + return None + + fragments: list[str] = [] + for item in output: + item_payload = coerce_model_mapping(item) + if not isinstance(item_payload, dict): + continue + if item_payload.get("type") not in ("output_message", "message"): + continue + + content = item_payload.get("content") + if not isinstance(content, list): + continue + + for part in content: + if not isinstance(part, dict): + continue + if part.get("type") != "output_text": + continue + text = part.get("text") + if isinstance(text, str) and text: + fragments.append(text) + + if not fragments: + return None + return "".join(fragments) + + +def extract_agent_reference(response: generated_models.Response) -> dict[str, Any] | None: + """Pull the ``agent_reference`` dict from a response, if present. + + :param response: The response envelope to inspect. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :returns: The agent reference dict, or ``None`` if not present. + :rtype: dict[str, Any] | None + """ + payload = coerce_model_mapping(response) + if not isinstance(payload, dict): + return None + agent_reference = payload.get("agent_reference") + if isinstance(agent_reference, dict): + return agent_reference + return None + + +def extract_model(response: generated_models.Response) -> str | None: + """Pull the ``model`` string from a response, if present. + + :param response: The response envelope to inspect. + :type response: ~azure.ai.agentserver.responses.models._generated.Response + :returns: The model string, or ``None`` if not present. 
+ :rtype: str | None + """ + payload = coerce_model_mapping(response) + if not isinstance(payload, dict): + return None + model = payload.get("model") + if isinstance(model, str) and model: + return model + return None diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_sse.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_sse.py new file mode 100644 index 000000000000..8aceeb3f5ea0 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_sse.py @@ -0,0 +1,164 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Server-sent events helpers for Responses streaming.""" + +from __future__ import annotations + +import itertools +import json +from contextvars import ContextVar +from typing import Any, Mapping + +from ..models._generated import ResponseStreamEvent +from ._internals import _RESPONSE_SNAPSHOT_EVENT_TYPES + + +_stream_counter_var: ContextVar[itertools.count] = ContextVar("_stream_counter_var") + + +def new_stream_counter() -> None: + """Initialize a fresh per-stream SSE sequence number counter for the current context. + + Call this once at the start of each streaming response so that concurrent + streams are numbered independently, each starting from 0. + + :rtype: None + """ + _stream_counter_var.set(itertools.count()) + + +def _next_sequence_number() -> int: + """Return the next SSE sequence number for the current stream context. + + Initializes a new per-stream counter if none has been set for the current + context (e.g. direct calls from tests or outside a streaming request). + + :returns: A monotonically increasing integer, starting from 0 for each stream. 
+ :rtype: int + """ + counter = _stream_counter_var.get(None) + if counter is None: + counter = itertools.count() + _stream_counter_var.set(counter) + return next(counter) + + +def _coerce_payload(event: Any) -> tuple[str, dict[str, Any]]: + """Extract and normalize event type and payload from an event object. + + Supports dict-like, model-with-``as_dict()``, and plain-object event sources. + + :param event: The SSE event object to coerce. + :type event: Any + :returns: A tuple of ``(event_type, payload_dict)``. + :rtype: tuple[str, dict[str, Any]] + :raises ValueError: If the event does not include a non-empty ``type``. + """ + event_type = getattr(event, "type", None) + + if isinstance(event, Mapping): + payload = dict(event) + if event_type is None: + event_type = payload.get("type") + elif hasattr(event, "as_dict"): + payload = event.as_dict() # type: ignore[assignment] + if event_type is None: + event_type = payload.get("type") + else: + payload = {key: value for key, value in vars(event).items() if not key.startswith("_")} + + if not event_type: + raise ValueError("SSE event must include a non-empty 'type'") + + payload.pop("type", None) + return str(event_type), payload + + +def _ensure_sequence_number(event: Any, payload: dict[str, Any]) -> None: + """Ensure the payload has a valid ``sequence_number``, assigning one if missing. + + :param event: The original event object (used for attribute fallback). + :type event: Any + :param payload: The payload dict to mutate. + :type payload: dict[str, Any] + :rtype: None + """ + explicit = payload.get("sequence_number") + event_value = getattr(event, "sequence_number", None) + candidate = explicit if explicit is not None else event_value + + if not isinstance(candidate, int) or candidate < 0: + candidate = _next_sequence_number() + + payload["sequence_number"] = candidate + + +def _build_sse_frame(event_type: str, payload: dict[str, Any]) -> str: + """Build a single SSE frame string from event type and payload. 
+ + :param event_type: The SSE event type name. + :type event_type: str + :param payload: The payload dict to serialize as JSON. + :type payload: dict[str, Any] + :returns: A complete SSE frame string with trailing newlines. + :rtype: str + """ + lines = [f"event: {event_type}"] + + # Emit multiline text as data lines for readability, then emit canonical + # JSON payload for deterministic parsers. + text_value = payload.get("text") + if isinstance(text_value, str) and "\n" in text_value: + lines.extend(f"data: {line}" for line in text_value.splitlines()) + + lines.append(f"data: {json.dumps(payload)}") + lines.append("") + lines.append("") + return "\n".join(lines) + + +def encode_sse_event(event: ResponseStreamEvent) -> str: + """Encode a response stream event into SSE wire format. + + :param event: Generated response stream event model. + :type event: ~azure.ai.agentserver.responses.models._generated.ResponseStreamEvent + :returns: Encoded SSE payload string. + :rtype: str + """ + event_type, payload = _coerce_payload(event) + _ensure_sequence_number(event, payload) + return _build_sse_frame(event_type, {"type": event_type, **payload}) + + +def encode_sse_payload(event_type: str, payload: Mapping[str, Any]) -> str: + """Encode an event type + payload pair into SSE wire format. + + :param event_type: The SSE event type name. + :type event_type: str + :param payload: The payload mapping to encode. + :type payload: Mapping[str, Any] + :returns: The encoded SSE frame string. 
+ :rtype: str + """ + event = {"type": event_type, **dict(payload)} + normalized_type, normalized_payload = _coerce_payload(event) + _ensure_sequence_number(event, normalized_payload) + if normalized_type in _RESPONSE_SNAPSHOT_EVENT_TYPES: + seq = normalized_payload.pop("sequence_number", None) + data: dict[str, Any] = {"type": normalized_type, "response": normalized_payload} + if seq is not None: + data["sequence_number"] = seq + else: + data = {"type": normalized_type, **normalized_payload} + return _build_sse_frame(normalized_type, data) + + +def encode_keep_alive_comment(comment: str = "keep-alive") -> str: + """Encode an SSE comment frame used for keep-alive traffic. + + :param comment: The comment text to include. Defaults to ``"keep-alive"``. + :type comment: str + :returns: An SSE comment frame string. + :rtype: str + """ + return f": {comment}\n\n" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_state_machine.py b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_state_machine.py new file mode 100644 index 000000000000..4768edf31e35 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/azure/ai/agentserver/responses/streaming/_state_machine.py @@ -0,0 +1,176 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Lifecycle event state machine for Responses streaming.""" + +from __future__ import annotations + +from copy import deepcopy +from typing import Any, Mapping, MutableMapping, Sequence, cast + +from ..models import _generated as generated_models + +EVENT_TYPE = generated_models.ResponseStreamEventType +OUTPUT_ITEM_DELTA_EVENT_TYPE = "response.output_item.delta" + +_TERMINAL_EVENT_TYPES = { + EVENT_TYPE.RESPONSE_COMPLETED.value, + EVENT_TYPE.RESPONSE_FAILED.value, + EVENT_TYPE.RESPONSE_INCOMPLETE.value, +} +_OUTPUT_ITEM_EVENT_TYPES = { + EVENT_TYPE.RESPONSE_OUTPUT_ITEM_ADDED.value, + OUTPUT_ITEM_DELTA_EVENT_TYPE, + EVENT_TYPE.RESPONSE_OUTPUT_ITEM_DONE.value, +} +_EVENT_STAGES = { + EVENT_TYPE.RESPONSE_CREATED.value: 0, + EVENT_TYPE.RESPONSE_IN_PROGRESS.value: 1, + EVENT_TYPE.RESPONSE_COMPLETED.value: 2, + EVENT_TYPE.RESPONSE_FAILED.value: 2, + EVENT_TYPE.RESPONSE_INCOMPLETE.value: 2, +} + + +class LifecycleStateMachineError(ValueError): + """Raised when lifecycle events violate ordering constraints.""" + + +def validate_response_event_stream(events: Sequence[Mapping[str, Any]]) -> None: + """Validate lifecycle and output-item event ordering for a response stream. + + Checks that the first event is ``response.created``, lifecycle events + are in monotonically non-decreasing order, at most one terminal event + exists, and output-item events obey added/delta/done constraints. + + :param events: The sequence of event mappings to validate. + :type events: Sequence[Mapping[str, Any]] + :rtype: None + :raises LifecycleStateMachineError: If any ordering or structural constraint is violated. 
+ """ + if not events: + raise LifecycleStateMachineError("event stream cannot be empty") + + first_type = events[0].get("type") + if first_type != EVENT_TYPE.RESPONSE_CREATED.value: + raise LifecycleStateMachineError("first lifecycle event must be response.created") + + terminal_count = 0 + last_stage = -1 + terminal_seen = False + added_indexes: set[int] = set() + done_indexes: set[int] = set() + + for raw_event in events: + event_type = raw_event.get("type") + if not isinstance(event_type, str) or not event_type: + raise LifecycleStateMachineError("each lifecycle event must include a non-empty type") + + stage = _EVENT_STAGES.get(event_type) + if stage is not None: + if stage < last_stage: + raise LifecycleStateMachineError("lifecycle events are out of order") + if event_type in _TERMINAL_EVENT_TYPES: + terminal_count += 1 + if terminal_count > 1: + raise LifecycleStateMachineError("multiple terminal lifecycle events are not allowed") + terminal_seen = True + last_stage = stage + continue + + if event_type not in _OUTPUT_ITEM_EVENT_TYPES: + continue + + if last_stage < 0: + raise LifecycleStateMachineError("output item events cannot appear before response.created") + if terminal_seen: + raise LifecycleStateMachineError("output item events cannot appear after terminal lifecycle event") + + payload = raw_event.get("payload") + payload_mapping = payload if isinstance(payload, Mapping) else {} + output_index_raw = payload_mapping.get("output_index", 0) + output_index = output_index_raw if isinstance(output_index_raw, int) and output_index_raw >= 0 else 0 + + if event_type == EVENT_TYPE.RESPONSE_OUTPUT_ITEM_ADDED.value: + if output_index in done_indexes: + raise LifecycleStateMachineError("cannot add output item after it has been marked done") + added_indexes.add(output_index) + continue + + if output_index not in added_indexes: + raise LifecycleStateMachineError("output item delta/done requires a preceding output_item.added") + + if event_type == 
EVENT_TYPE.RESPONSE_OUTPUT_ITEM_DONE.value:
+            done_indexes.add(output_index)
+            continue
+
+        if event_type == OUTPUT_ITEM_DELTA_EVENT_TYPE and output_index in done_indexes:
+            raise LifecycleStateMachineError("output item delta cannot appear after output_item.done")
+
+
+def normalize_lifecycle_events(
+    *, response_id: str, events: Sequence[Mapping[str, Any]], default_model: str | None = None
+) -> list[dict[str, Any]]:
+    """Normalize lifecycle events with ordering and terminal-state guarantees.
+
+    Applies ``id`` and ``model`` defaults to each payload, validates ordering,
+    and appends a synthetic ``response.failed`` terminal event when none is present.
+
+    :keyword response_id: Response ID to stamp in each event payload.
+    :paramtype response_id: str
+    :keyword events: The sequence of raw lifecycle event mappings.
+    :paramtype events: Sequence[Mapping[str, Any]]
+    :keyword default_model: Optional default model identifier to set.
+    :paramtype default_model: str | None
+    :returns: A list of normalized event dicts with guaranteed terminal event.
+    :rtype: list[dict[str, Any]]
+    :raises LifecycleStateMachineError: If a lifecycle event has no type or ordering is invalid.
+ """ + normalized: list[dict[str, Any]] = [] + + for raw_event in events: + event_type = raw_event.get("type") + if not isinstance(event_type, str) or not event_type: + raise LifecycleStateMachineError("each lifecycle event must include a non-empty type") + + payload_raw = raw_event.get("payload") + payload_raw_dict = deepcopy(payload_raw) if isinstance(payload_raw, Mapping) else {} + payload = cast(MutableMapping[str, Any], payload_raw_dict) + + payload.setdefault("id", response_id) + payload.setdefault("object", "response") + if default_model is not None: + payload.setdefault("model", default_model) + + normalized.append({"type": event_type, "payload": payload}) + + if not normalized: + normalized = [ + { + "type": EVENT_TYPE.RESPONSE_CREATED.value, + "payload": { + "id": response_id, + "object": "response", + "status": "queued", + "model": default_model, + }, + } + ] + + validate_response_event_stream(normalized) + + terminal_count = sum(1 for event in normalized if event["type"] in _TERMINAL_EVENT_TYPES) + + if terminal_count == 0: + normalized.append( + { + "type": EVENT_TYPE.RESPONSE_FAILED.value, + "payload": { + "id": response_id, + "object": "response", + "status": "failed", + "model": default_model, + }, + } + ) + + return normalized diff --git a/sdk/agentserver/azure-ai-agentserver-responses/cspell.json b/sdk/agentserver/azure-ai-agentserver-responses/cspell.json new file mode 100644 index 000000000000..9b3d0673a5fb --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/cspell.json @@ -0,0 +1,19 @@ +{ + "ignoreWords": [ + "mcpl", + "mcpr", + "mcpa", + "ctco", + "lsho", + "funcs", + "addl", + "badid" + ], + "ignorePaths": [ + "*.csv", + "*.json", + "*.rst", + "samples/**", + "Makefile" + ] + } \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/dev_requirements.txt b/sdk/agentserver/azure-ai-agentserver-responses/dev_requirements.txt new file mode 100644 index 000000000000..b2a3fc35434a --- /dev/null +++ 
b/sdk/agentserver/azure-ai-agentserver-responses/dev_requirements.txt @@ -0,0 +1,2 @@ +-e ../../../eng/tools/azure-sdk-tools +httpx \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/mypy.ini b/sdk/agentserver/azure-ai-agentserver-responses/mypy.ini new file mode 100644 index 000000000000..abd4cc95b282 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/mypy.ini @@ -0,0 +1,8 @@ +[mypy] +explicit_package_bases = True + +[mypy-samples.*] +ignore_errors = true + +[mypy-azure.ai.agentserver.responses.models._generated.*] +ignore_errors = true \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/pyproject.toml b/sdk/agentserver/azure-ai-agentserver-responses/pyproject.toml new file mode 100644 index 000000000000..003068e02968 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/pyproject.toml @@ -0,0 +1,79 @@ +[project] +name = "azure-ai-agentserver-responses" +dynamic = ["version", "readme"] +description = "Python SDK for building servers implementing the Azure AI Responses protocol" + +requires-python = ">=3.10" +license = "MIT" +authors = [ + { name = "Microsoft Corporation" }, +] +classifiers = [ + "Development Status :: 3 - Alpha", + "Programming Language :: Python :: 3", + "Programming Language :: Python :: 3.10", + "Programming Language :: Python :: 3.11", + "Programming Language :: Python :: 3.12", + "Programming Language :: Python :: 3.13", +] +dependencies = [ + "azure-ai-agentserver-core", + "azure-core>=1.30.0", + "azure-identity>=1.15.0", + "httpx>=0.27", + "isodate>=0.6.1", + "starlette>=1.0.0rc1,<2.0.0", + "uvicorn>=0.31.0", +] +keywords = ["azure", "azure sdk"] + +[project.urls] +repository = "https://github.com/Azure/azure-sdk-for-python" + +[build-system] +requires = ["setuptools>=69", "wheel"] +build-backend = "setuptools.build_meta" + +[project.optional-dependencies] +dev = [ + "pytest>=7.0", + "pytest-asyncio>=0.21", + "ruff>=0.4", + "mypy>=1.0", +] 
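Aside from the packaging metadata, the SSE wire format produced by `_build_sse_frame` earlier in this diff is easy to sanity-check in isolation. The sketch below uses a hypothetical standalone `build_frame` (standard library only, not part of the package API) that mirrors the same `event:` line / canonical-JSON `data:` line / blank-line-terminator layout; the package's extra handling of multiline `text` fields is deliberately omitted:

```python
import json


def build_frame(event_type: str, payload: dict) -> str:
    # Mirror the frame layout used by _build_sse_frame: one `event:` line,
    # one `data:` line carrying the canonical JSON payload, and a trailing
    # blank line that terminates the SSE frame.
    lines = [f"event: {event_type}", f"data: {json.dumps(payload)}", "", ""]
    return "\n".join(lines)


frame = build_frame("response.created", {"type": "response.created", "sequence_number": 0})
print(frame)
```

A conforming SSE parser reads the `event:` field, accumulates `data:` lines, and dispatches the event when it hits the blank line.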
+ +[tool.setuptools.dynamic] +version = { attr = "azure.ai.agentserver.responses._version.VERSION" } +readme = { file = ["README.md"], content-type = "text/markdown" } + +[tool.setuptools.packages.find] +exclude = [ + "tests*", + "type_spec*", + "samples*", + "doc*", + "azure", + "azure.ai", + "scripts*" +] + +[tool.setuptools.package-data] +pytyped = ["py.typed"] + +[tool.ruff] +target-version = "py310" +line-length = 120 + +[tool.ruff.lint] +select = ["E", "F", "W", "I"] + +[tool.mypy] +python_version = "3.10" +warn_return_any = true +warn_unused_configs = true +disallow_untyped_defs = true + +[tool.pytest.ini_options] +asyncio_mode = "auto" +testpaths = ["tests"] +pythonpath = ["."] diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/ConversationHistory/app.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/ConversationHistory/app.py new file mode 100644 index 000000000000..4a7b0f4f7e33 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/ConversationHistory/app.py @@ -0,0 +1,73 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""ConversationHistory sample for azure-ai-agentserver-responses. 
+
+Run:
+    python samples/ConversationHistory/app.py
+"""
+
+from __future__ import annotations
+
+from collections.abc import AsyncIterable, Sequence
+from typing import Any
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.responses._response_context import ResponseContext
+from azure.ai.agentserver.responses.models._generated.sdk.models.models._models import CreateResponse, OutputItem
+from azure.ai.agentserver.responses.models._helpers import get_input_text
+from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream
+from azure.ai.agentserver.responses.hosting import ResponseHandler
+from azure.ai.agentserver.responses._options import ResponsesServerOptions
+
+
+def _build_reply(current_input: str, history: Sequence[OutputItem]) -> str:
+    if len(history) == 0:
+        return f"[Turn 1] No history. You said: \"{current_input}\""
+
+    history_messages = [item for item in history if getattr(item, "type", None) == "output_message"]
+    turn_number = len(history_messages) + 1
+    last_message = history_messages[-1] if history_messages else None
+    last_text = last_message["content"][0]["text"] if last_message and last_message.get("content") else "(no text)"
+
+    return (
+        f"[Turn {turn_number}] History has {len(history)} item(s). "
+        f"Last assistant message: \"{last_text}\". 
" + f"You said: \"{current_input}\"" + ) + +server = AgentHost() +responses = ResponseHandler(server, options=ResponsesServerOptions(default_fetch_history_count=20)) + +@responses.create_handler +async def create_async(request: CreateResponse, context: ResponseContext, cancellation_signal: Any) -> AsyncIterable[dict[str, Any]]: + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + + yield stream.emit_created() + yield stream.emit_in_progress() + + history = await context.get_history_async() + current_input = get_input_text(request) + reply = _build_reply(current_input, history) + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta(reply) + yield text_content.emit_done(reply) + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + + +def main() -> None: + server.run(host="127.0.0.1", port=5103) + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/ConversationHistory/test.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/ConversationHistory/test.py new file mode 100644 index 000000000000..ab501d0dcb51 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/ConversationHistory/test.py @@ -0,0 +1,107 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Requests-based test client for the ConversationHistory sample. 
+ +Usage: + python samples/ConversationHistory/test.py +""" + +from __future__ import annotations + +import json +from typing import Any + +import requests + +BASE_URL = "http://127.0.0.1:5103" + + +def _print_header(title: str) -> None: + print(f"\n--- {title} ---") + + +def _pretty_print(payload: Any) -> None: + print(json.dumps(payload, ensure_ascii=False, indent=2)) + + +def _assert_ok(response: requests.Response) -> None: + try: + response.raise_for_status() + except requests.HTTPError as exc: + raise RuntimeError(f"HTTP request failed: {response.status_code} {response.text}") from exc + + +def _create(payload: dict[str, Any]) -> dict[str, Any]: + response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(response) + body = response.json() + _pretty_print(body) + return body + + +def _turn_1() -> str: + _print_header("Turn 1: Initial message (no history)") + body = _create({"model": "test", "input": "Hello, I am Alice."}) + response_id = body.get("id") + if not isinstance(response_id, str) or not response_id: + raise RuntimeError("Turn 1 did not return a valid response id") + print(f"Response 1 ID: {response_id}") + return response_id + + +def _turn_2(previous_response_id: str) -> str: + _print_header("Turn 2: Chain via previous_response_id") + body = _create( + { + "model": "test", + "input": "What is 2 + 2?", + "previous_response_id": previous_response_id, + } + ) + response_id = body.get("id") + if not isinstance(response_id, str) or not response_id: + raise RuntimeError("Turn 2 did not return a valid response id") + print(f"Response 2 ID: {response_id}") + return response_id + + +def _turn_3(previous_response_id: str) -> str: + _print_header("Turn 3: Chain again") + body = _create( + { + "model": "test", + "input": "Thanks for the help!", + "previous_response_id": previous_response_id, + } + ) + response_id = body.get("id") + if not isinstance(response_id, str) or not response_id: + raise RuntimeError("Turn 3 did not return a 
valid response id") + print(f"Response 3 ID: {response_id}") + return response_id + + +def _turn_4_stream(previous_response_id: str) -> None: + _print_header("Turn 4: Streaming with chained history") + payload = { + "model": "test", + "stream": True, + "input": "One more thing.", + "previous_response_id": previous_response_id, + } + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + for line in response.iter_lines(decode_unicode=True): + if line: + print(line) + + +def main() -> None: + response_1_id = _turn_1() + response_2_id = _turn_2(response_1_id) + response_3_id = _turn_3(response_2_id) + _turn_4_stream(response_3_id) + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/FunctionCalling/app.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/FunctionCalling/app.py new file mode 100644 index 000000000000..94dc6d74ddd1 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/FunctionCalling/app.py @@ -0,0 +1,82 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""FunctionCalling sample for azure-ai-agentserver-responses. 
+
+Run:
+    python samples/FunctionCalling/app.py
+"""
+
+from __future__ import annotations
+
+import asyncio
+import json
+from collections.abc import AsyncIterable
+from typing import Any
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.responses import ResponseContext
+from azure.ai.agentserver.responses.models import get_input_expanded
+from azure.ai.agentserver.responses.models._generated.sdk.models.models._models import CreateResponse, ItemType
+from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream
+from azure.ai.agentserver.responses.hosting import ResponseHandler
+
+
+def _extract_function_call_output(request_payload: CreateResponse) -> str | None:
+    items = get_input_expanded(request_payload)
+
+    for item in items:
+        if isinstance(item, str):
+            continue
+        if item.get("type") == ItemType.FUNCTION_CALL_OUTPUT:
+            # Function call output items carry `output` at the top level,
+            # matching the payload shape posted by samples/FunctionCalling/test.py.
+            return item.get("output")
+
+    return None
+
+
+server = AgentHost(log_level="debug")
+responses = ResponseHandler(server)
+
+
+@responses.create_handler
+def weather_handler(request: CreateResponse, context: ResponseContext, cancellation_signal: asyncio.Event) -> AsyncIterable[dict[str, Any]]:
+    """Two-turn function-calling sample handler."""
+    async def _events() -> AsyncIterable[dict[str, Any]]:
+        tool_output = _extract_function_call_output(request)
+
+        stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None))
+
+        yield stream.emit_created()
+        yield stream.emit_in_progress()
+
+        if tool_output is not None:
+            message_item = stream.add_output_item_message()
+            yield message_item.emit_added()
+
+            text_content = message_item.add_text_content()
+            yield text_content.emit_added()
+
+            reply = f"The weather is: {tool_output}"
+            yield text_content.emit_delta(reply)
+            yield text_content.emit_done(reply)
+            yield message_item.emit_content_done(text_content)
+            yield message_item.emit_done()
+        else:
+            function_call = 
stream.add_output_item_function_call("get_weather", "call_weather_1") + yield function_call.emit_added() + + arguments = json.dumps({"location": "Seattle", "unit": "fahrenheit"}) + yield function_call.emit_arguments_delta(arguments) + yield function_call.emit_arguments_done(arguments) + yield function_call.emit_done() + + yield stream.emit_completed() + + return _events() + + +def main() -> None: + server.run(host="127.0.0.1", port=5101) + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/FunctionCalling/test.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/FunctionCalling/test.py new file mode 100644 index 000000000000..51ca08cceaa2 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/FunctionCalling/test.py @@ -0,0 +1,105 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Requests-based test client for the FunctionCalling sample. + +Usage: + python samples/FunctionCalling/test.py +""" + +from __future__ import annotations + +import json +from typing import Any + +import requests + +BASE_URL = "http://127.0.0.1:5101" +CONVERSATION_ID = "conv_a1b2c3d4e5f6789800WeatherConvSampleDemoRequest0001" + + +def _print_header(title: str) -> None: + print(f"\n--- {title} ---") + + +def _pretty_print(payload: Any) -> None: + print(json.dumps(payload, ensure_ascii=False, indent=2)) + + +def _assert_ok(response: requests.Response) -> None: + try: + response.raise_for_status() + except requests.HTTPError as exc: + raise RuntimeError(f"HTTP request failed: {response.status_code} {response.text}") from exc + + +def _turn_1_request_function_call() -> str: + _print_header("Turn 1: Request function call") + payload = { + "model": "test", + "conversation": CONVERSATION_ID, + "input": "What is the weather in Seattle?", + } + response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(response) + body = 
response.json() + _pretty_print(body) + + output = body.get("output") + if not isinstance(output, list) or not output: + raise RuntimeError("Turn 1 response does not include output items") + + call_id = output[0].get("call_id") if isinstance(output[0], dict) else None + if not isinstance(call_id, str) or not call_id: + raise RuntimeError("Turn 1 response did not include a function call_id") + + print(f"Extracted call_id: {call_id}") + return call_id + + +def _turn_2_submit_function_output(call_id: str) -> None: + _print_header("Turn 2: Submit function output (JSON)") + payload = { + "model": "test", + "conversation": CONVERSATION_ID, + "input": [ + { + "type": "function_call_output", + "call_id": call_id, + "output": '{"temperature": 72, "condition": "sunny"}', + } + ], + } + response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(response) + _pretty_print(response.json()) + + +def _turn_2_submit_function_output_streaming(call_id: str) -> None: + _print_header("Turn 2: Submit function output (streaming)") + payload = { + "model": "test", + "stream": True, + "conversation": CONVERSATION_ID, + "input": [ + { + "type": "function_call_output", + "call_id": call_id, + "output": '{"temperature": 72, "condition": "sunny"}', + } + ], + } + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + for line in response.iter_lines(decode_unicode=True): + if line: + print(line) + + +def main() -> None: + call_id = _turn_1_request_function_call() + _turn_2_submit_function_output(call_id) + _turn_2_submit_function_output_streaming(call_id) + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/README.md b/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/README.md new file mode 100644 index 000000000000..3500a37bb7e2 --- /dev/null +++ 
b/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/README.md @@ -0,0 +1,89 @@
+# GettingStarted (Python)
+
+This sample mirrors the .NET GettingStarted sample and shows how to host the Responses API with Starlette.
+
+## Streaming event style used in this sample
+
+The sample emits streaming events with typed builders instead of a generic output-item delta payload.
+
+Event flow used in `samples/GetStarted/app.py`:
+
+1. Create message item: `stream.add_output_item_message()`
+2. Emit message added: `message_item.emit_added()`
+3. Create text part: `message_item.add_text_content()`
+4. Emit text part events: `emit_added()` -> `emit_delta()` -> `emit_done()`
+5. Seal content part on message: `message_item.emit_content_done(text_content)`
+6. Emit message done: `message_item.emit_done()`
+
+Minimal excerpt:
+
+```python
+message_item = stream.add_output_item_message()
+yield message_item.emit_added()
+
+text_content = message_item.add_text_content()
+yield text_content.emit_added()
+yield text_content.emit_delta("Hello from the Python GettingStarted sample!")
+yield text_content.emit_done("Hello from the Python GettingStarted sample!")
+yield message_item.emit_content_done(text_content)
+
+yield message_item.emit_done()
+```
+
+## Start server
+
+From the repository root:
+
+```bash
+python samples/GetStarted/app.py
+```
+
+## Run sample test script (requests)
+
+Install dependency (if needed):
+
+```bash
+pip install requests
+```
+
+Run script:
+
+```bash
+python samples/GetStarted/test.py
+```
+
+## Health check
+
+```bash
+curl http://127.0.0.1:5100/ready
+```
+
+## Create response (JSON mode)
+
+```bash
+curl -X POST http://127.0.0.1:5100/responses \
+  -H "Content-Type: application/json" \
+  -d '{"model":"gpt-4o-mini","input":"hello"}'
+```
+
+## Create response (stream mode)
+
+```bash
+curl -N -X POST http://127.0.0.1:5100/responses \
+  -H "Content-Type: application/json" \
+  -d '{"model":"gpt-4o-mini","input":"hello","stream":true}'
+```
+
+## Background mode
+
+```bash
+curl -X POST
http://127.0.0.1:5100/responses \ + -H "Content-Type: application/json" \ + -d '{"model":"gpt-4o-mini","input":"hello","background":true}' +``` + +Then query by id (replace `{response_id}` with the `id` from the create response): + +```bash +curl http://127.0.0.1:5100/responses/{response_id} +``` diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/app.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/app.py new file mode 100644 index 000000000000..c5e2405b3bb2 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/app.py @@ -0,0 +1,55 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""GettingStarted sample for azure-ai-agentserver-responses. + +Run: + python samples/GetStarted/app.py +""" + +from __future__ import annotations + +import asyncio +from collections.abc import AsyncIterable +from typing import Any + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses import ResponseContext +from azure.ai.agentserver.responses.models._generated import CreateResponse +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream +from azure.ai.agentserver.responses.hosting import ResponseHandler + + +server = AgentHost() +responses = ResponseHandler(server) + + +@responses.create_handler +def my_handler(request: CreateResponse, context: ResponseContext, cancellation_signal: asyncio.Event) -> AsyncIterable[dict[str, Any]]: + async def _events() -> AsyncIterable[dict[str, Any]]: + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + + yield stream.emit_created() + yield stream.emit_in_progress() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("Hello from the Python GettingStarted sample!") + yield text_content.emit_done("Hello from the Python GettingStarted sample!") + yield 
message_item.emit_content_done(text_content) + + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + +def main() -> None: + server.run(host="127.0.0.1", port=5100) + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/test.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/test.py new file mode 100644 index 000000000000..4e2ea47809ae --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/GetStarted/test.py @@ -0,0 +1,171 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Smoke-test script for the GetStarted sample. + +Usage: + python samples/GetStarted/test.py +""" + +from __future__ import annotations + +import json +import time +from typing import Any + +import requests + +BASE_URL = "http://127.0.0.1:5100" + + +def _print_header(title: str) -> None: + print(f"\n--- {title} ---") + + +def _pretty_print(payload: Any) -> None: + print(json.dumps(payload, ensure_ascii=False, indent=2)) + + +def _assert_ok(response: requests.Response) -> None: + try: + response.raise_for_status() + except requests.HTTPError as exc: + raise RuntimeError(f"HTTP request failed: {response.status_code} {response.text}") from exc + + +def _default_mode() -> None: + _print_header("Default mode (JSON)") + payload = {"model": "gpt-4o-mini", "input": "hello"} + response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(response) + _pretty_print(response.json()) + + +def _stream_mode() -> None: + _print_header("Streaming mode (SSE)") + payload = {"model": "gpt-4o-mini", "input": "hello", "stream": True} + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + for line in response.iter_lines(decode_unicode=True): + if line: + print(line) + + +def _background_mode() -> None: + _print_header("Background mode (POST then GET)") + 
payload = {"model": "gpt-4o-mini", "input": "hello", "background": True} + create_response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(create_response) + created_payload = create_response.json() + _pretty_print(created_payload) + + response_id = created_payload.get("id") + if not isinstance(response_id, str) or not response_id: + raise RuntimeError("Background response does not include a valid id") + + deadline = time.monotonic() + 5 + while True: + get_response = requests.get(f"{BASE_URL}/responses/{response_id}", timeout=10) + _assert_ok(get_response) + current_payload = get_response.json() + status = current_payload.get("status") + if status in {"completed", "failed", "incomplete", "cancelled"}: + _pretty_print(current_payload) + return + + if time.monotonic() >= deadline: + _pretty_print(current_payload) + raise RuntimeError("Timed out waiting for background response to complete") + + time.sleep(0.2) + + +def _background_stream_mode() -> None: + _print_header("Background + Streaming mode (SSE then GET)") + payload = {"model": "gpt-4o-mini", "input": "hello", "background": True, "stream": True} + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + raw_lines: list[str] = [] + response_id: str | None = None + + for line in response.iter_lines(decode_unicode=True): + if line is None: + continue + if line: + raw_lines.append(line) + print(line) + + if response_id is None and line.startswith("data:"): + data_str = line.split(":", 1)[1].strip() + try: + data_payload = json.loads(data_str) + except json.JSONDecodeError: + continue + + # Response ID is nested under data_payload["response"]["id"] + # (e.g. response.created event), not at the top level. 
+ candidate = data_payload.get("response", {}).get("id") + if isinstance(candidate, str) and candidate: + response_id = candidate + + if response_id is None: + raise RuntimeError( + "Could not extract response id from background+stream SSE output. " + f"Collected lines: {raw_lines}" + ) + + get_response = requests.get(f"{BASE_URL}/responses/{response_id}", timeout=10) + _assert_ok(get_response) + _pretty_print(get_response.json()) + + +def _get_replay_mode() -> None: + _print_header("GET replay mode (background+stream response)") + payload = {"model": "gpt-4o-mini", "input": "hello", "background": True, "stream": True} + + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as create_response: + _assert_ok(create_response) + response_id: str | None = None + + for line in create_response.iter_lines(decode_unicode=True): + if not line: + continue + if line.startswith("data:"): + data_str = line.split(":", 1)[1].strip() + try: + data_payload = json.loads(data_str) + except json.JSONDecodeError: + continue + + candidate = data_payload.get("response", {}).get("id") + if isinstance(candidate, str) and candidate: + response_id = candidate + break + + if response_id is None: + raise RuntimeError("Replay test could not find response id from create stream") + + replay_response = requests.get(f"{BASE_URL}/responses/{response_id}?stream=true", stream=True, timeout=30) + _assert_ok(replay_response) + try: + saw_event = False + for line in replay_response.iter_lines(decode_unicode=True): + if line: + saw_event = True + print(line) + + if not saw_event: + raise RuntimeError("Replay stream returned no SSE lines") + finally: + replay_response.close() + + +def main() -> None: + _default_mode() + _stream_mode() + _background_mode() + _background_stream_mode() + _get_replay_mode() + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiOutput/app.py 
b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiOutput/app.py new file mode 100644 index 000000000000..62064d84a83c --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiOutput/app.py @@ -0,0 +1,66 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""MultiOutput sample for azure-ai-agentserver-responses. + +Run: + python samples/MultiOutput/app.py +""" + +from __future__ import annotations + +import asyncio +from collections.abc import AsyncIterable +from typing import Any + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses import ResponseContext +from azure.ai.agentserver.responses.models._generated import CreateResponse +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream +from azure.ai.agentserver.responses.hosting import ResponseHandler + + +server = AgentHost() +responses = ResponseHandler(server) + + +@responses.create_handler
def multi_output_handler(request: CreateResponse, context: ResponseContext, cancellation_signal: asyncio.Event) -> AsyncIterable[dict[str, Any]]: + """Produces reasoning plus final message output in one response.""" + async def _events() -> AsyncIterable[dict[str, Any]]: + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + + yield stream.emit_created() + yield stream.emit_in_progress() + + reasoning_item = stream.add_output_item_reasoning_item() + yield reasoning_item.emit_added() + + summary_part = reasoning_item.add_summary_part() + yield summary_part.emit_added() + yield summary_part.emit_text_delta("Let me think about this...") + yield summary_part.emit_text_done("Let me think about this...") + yield summary_part.emit_done() + yield reasoning_item.emit_summary_part_done(summary_part) + yield reasoning_item.emit_done() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = 
message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("Here is my answer.") + yield text_content.emit_done("Here is my answer.") + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + +def main() -> None: + server.run(host="127.0.0.1", port=5102) + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiOutput/test.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiOutput/test.py new file mode 100644 index 000000000000..603966535cf6 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiOutput/test.py @@ -0,0 +1,63 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Requests-based test client for the MultiOutput sample. + +Usage: + python samples/MultiOutput/test.py +""" + +from __future__ import annotations + +import json +from typing import Any + +import requests + +BASE_URL = "http://127.0.0.1:5102" + + +def _print_header(title: str) -> None: + print(f"\n--- {title} ---") + + +def _pretty_print(payload: Any) -> None: + print(json.dumps(payload, ensure_ascii=False, indent=2)) + + +def _assert_ok(response: requests.Response) -> None: + try: + response.raise_for_status() + except requests.HTTPError as exc: + raise RuntimeError(f"HTTP request failed: {response.status_code} {response.text}") from exc + + +def _default_mode() -> None: + _print_header("Default mode (JSON) - reasoning + message") + payload = {"model": "test"} + response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(response) + body = response.json() + _pretty_print(body) + + output = body.get("output") + if not isinstance(output, list) or len(output) < 2: + raise RuntimeError("Expected at least two output items (reasoning + message)") + + +def _stream_mode() -> None: + 
_print_header("Streaming mode (SSE)") + payload = {"model": "test", "stream": True} + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + for line in response.iter_lines(decode_unicode=True): + if line: + print(line) + + +def main() -> None: + _default_mode() + _stream_mode() + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiProtocol/app.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiProtocol/app.py new file mode 100644 index 000000000000..90b1ca40f536 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiProtocol/app.py @@ -0,0 +1,162 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Multi-protocol sample: Invocations + Responses on a single AgentHost. + +Demonstrates how both protocol handlers can coexist on the same server, +sharing health probes, tracing, graceful shutdown, and the Hypercorn host. 
+ +Endpoints: + POST /invocations - Invoke the agent (invocation protocol) + GET /invocations/{id} - Get invocation status + POST /invocations/{id}/cancel - Cancel an invocation + POST /responses - Create a response (responses protocol) + GET /responses/{id} - Get a response + DELETE /responses/{id} - Delete a response + POST /responses/{id}/cancel - Cancel a response + GET /responses/{id}/input_items - List input items + GET /healthy - Health probe (provided by hosting) + +Usage:: + + # Start the server + python app.py + + # --- Invocation protocol --- + curl -X POST http://localhost:8088/invocations \\ + -H "Content-Type: application/json" \\ + -d '{"message": "Hello from invocations!"}' + + # --- Responses protocol (non-streaming) --- + curl -X POST http://localhost:8088/responses \\ + -H "Content-Type: application/json" \\ + -d '{"model": "echo", "input": "Hello from responses!", "stream": false, "store": true}' + + # --- Responses protocol (streaming) --- + curl -X POST http://localhost:8088/responses \\ + -H "Content-Type: application/json" \\ + -d '{"model": "echo", "input": "Hello from responses!", "stream": true, "store": true}' + + # --- Health check (provided automatically by AgentHost) --- + curl http://localhost:8088/healthy +""" + +from __future__ import annotations + +from collections.abc import AsyncIterable +from typing import Any + +from starlette.requests import Request +from starlette.responses import JSONResponse, Response + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.invocations import InvocationHandler +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses.models import get_input_text +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + +# ===================================================================== +# 1. 
Create the server — single host for both protocols +# ===================================================================== + +server = AgentHost() + + +# ===================================================================== +# 2. Invocation protocol — simple echo agent +# ===================================================================== + +invocations = InvocationHandler(server) + + +@invocations.invoke_handler +async def handle_invoke(request: Request) -> Response: + """Process an invocation request by echoing the input. + + :param request: The incoming Starlette request. + :type request: starlette.requests.Request + :return: JSON response echoing the input. + :rtype: starlette.responses.JSONResponse + """ + data = await request.json() + invocation_id = request.state.invocation_id + message = data.get("message", "") + + return JSONResponse({ + "invocation_id": invocation_id, + "status": "completed", + "output": f"[Invocation] Echo: {message}", + }) + + +# ===================================================================== +# 3. Responses protocol — streaming echo agent +# ===================================================================== + +responses = ResponseHandler(server) + + +@responses.create_handler +def echo_response_handler( + request: Any, + context: Any, + cancellation_signal: Any, +) -> AsyncIterable[dict[str, Any]]: + """Handle a response request by echoing the input as a streamed message. + + Emits the full Responses API event lifecycle: + created -> in_progress -> output items -> completed + + :param request: The parsed create-response request. + :param context: Runtime context with response_id and mode flags. + :param cancellation_signal: Event that signals cancellation. + :return: Async iterable of response events. 
+ """ + async def _events() -> AsyncIterable[dict[str, Any]]: + # Build the event stream helper + stream = ResponseEventStream( + response_id=context.response_id, + model=getattr(request, "model", None), + ) + + # Lifecycle: created -> in_progress + yield stream.emit_created() + yield stream.emit_in_progress() + + # Extract input text + echo_text = get_input_text(request) or "hello!" + + # Emit an output message with text content + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + + # Stream the echo text word-by-word for demonstration + words = f"[Response] Echo: {echo_text}".split() + for i, word in enumerate(words): + # Check for cancellation between chunks + if cancellation_signal.is_set(): + yield stream.emit_incomplete(reason="cancelled") + return + + delta = word if i == 0 else f" {word}" + yield text_content.emit_delta(delta) + + yield text_content.emit_done() + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + # Lifecycle: completed + yield stream.emit_completed() + + return _events() + + +# ===================================================================== +# 4. Start the server +# ===================================================================== + +if __name__ == "__main__": + server.run() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiProtocol/test.py b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiProtocol/test.py new file mode 100644 index 000000000000..7ffe99559fad --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/MultiProtocol/test.py @@ -0,0 +1,182 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Smoke-test script for the MultiProtocol sample. + +Exercises both the Invocations and Responses protocol endpoints +on the same AgentHost instance. 
+ +Usage: + python samples/MultiProtocol/test.py +""" + +from __future__ import annotations + +import json +import time +from typing import Any + +import requests + +BASE_URL = "http://127.0.0.1:8088" + + +def _print_header(title: str) -> None: + print(f"\n--- {title} ---") + + +def _pretty_print(payload: Any) -> None: + print(json.dumps(payload, ensure_ascii=False, indent=2)) + + +def _assert_ok(response: requests.Response) -> None: + try: + response.raise_for_status() + except requests.HTTPError as exc: + raise RuntimeError(f"HTTP request failed: {response.status_code} {response.text}") from exc + + +# ===================================================================== +# Health check +# ===================================================================== + + +def _health_check() -> None: + _print_header("Health check (GET /healthy)") + response = requests.get(f"{BASE_URL}/healthy", timeout=10) + _assert_ok(response) + print(f"Status: {response.status_code}") + print(response.text) + + +# ===================================================================== +# Invocation protocol +# ===================================================================== + + +def _invocation_echo() -> None: + _print_header("Invocation protocol — echo") + payload = {"message": "Hello from invocations!"} + response = requests.post(f"{BASE_URL}/invocations", json=payload, timeout=10) + _assert_ok(response) + body = response.json() + _pretty_print(body) + + if body.get("status") != "completed": + raise RuntimeError(f"Expected status 'completed', got '{body.get('status')}'") + + output = body.get("output", "") + if "Hello from invocations!" 
not in output: + raise RuntimeError(f"Echo output missing input text: {output}") + + print(f"Invocation ID: {body.get('invocation_id')}") + + +# ===================================================================== +# Responses protocol +# ===================================================================== + + +def _responses_default_mode() -> None: + _print_header("Responses protocol — default mode (JSON)") + payload = {"model": "echo", "input": "Hello from responses!", "stream": False, "store": True} + response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(response) + body = response.json() + _pretty_print(body) + + response_id = body.get("id") + if not isinstance(response_id, str) or not response_id: + raise RuntimeError("Response does not include a valid id") + + status = body.get("status") + if status not in {"completed", "in_progress", "queued"}: + raise RuntimeError(f"Unexpected status: {status}") + + +def _responses_stream_mode() -> None: + _print_header("Responses protocol — streaming mode (SSE)") + payload = {"model": "echo", "input": "Hello from responses!", "stream": True, "store": True} + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + saw_event = False + for line in response.iter_lines(decode_unicode=True): + if line: + saw_event = True + print(line) + + if not saw_event: + raise RuntimeError("Streaming response returned no SSE lines") + + +def _responses_background_mode() -> None: + _print_header("Responses protocol — background mode (POST then GET)") + payload = {"model": "echo", "input": "Hello from responses!", "background": True} + create_response = requests.post(f"{BASE_URL}/responses", json=payload, timeout=10) + _assert_ok(create_response) + created_payload = create_response.json() + _pretty_print(created_payload) + + response_id = created_payload.get("id") + if not isinstance(response_id, str) or not response_id: + raise 
RuntimeError("Background response does not include a valid id") + + deadline = time.monotonic() + 5 + while True: + get_response = requests.get(f"{BASE_URL}/responses/{response_id}", timeout=10) + _assert_ok(get_response) + current_payload = get_response.json() + status = current_payload.get("status") + if status in {"completed", "failed", "incomplete", "cancelled"}: + _pretty_print(current_payload) + return + + if time.monotonic() >= deadline: + _pretty_print(current_payload) + raise RuntimeError("Timed out waiting for background response to complete") + + time.sleep(0.2) + + +def _responses_background_stream_mode() -> None: + _print_header("Responses protocol — background + streaming (SSE then GET)") + payload = {"model": "echo", "input": "Hello from responses!", "background": True, "stream": True} + with requests.post(f"{BASE_URL}/responses", json=payload, stream=True, timeout=30) as response: + _assert_ok(response) + response_id: str | None = None + + for line in response.iter_lines(decode_unicode=True): + if line is None: + continue + if line: + print(line) + + if response_id is None and line.startswith("data:"): + data_str = line.split(":", 1)[1].strip() + try: + data_payload = json.loads(data_str) + except json.JSONDecodeError: + continue + + candidate = data_payload.get("response", {}).get("id") + if isinstance(candidate, str) and candidate: + response_id = candidate + + if response_id is None: + raise RuntimeError("Could not extract response id from background+stream SSE output") + + get_response = requests.get(f"{BASE_URL}/responses/{response_id}", timeout=10) + _assert_ok(get_response) + _pretty_print(get_response.json()) + + +def main() -> None: + _health_check() + _invocation_echo() + _responses_default_mode() + _responses_stream_mode() + _responses_background_mode() + _responses_background_stream_mode() + + +if __name__ == "__main__": + main() diff --git a/sdk/agentserver/azure-ai-agentserver-responses/samples/README.md 
b/sdk/agentserver/azure-ai-agentserver-responses/samples/README.md new file mode 100644 index 000000000000..6a79c7204a4a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/samples/README.md @@ -0,0 +1,70 @@ +# Samples (Python) + +Runnable Starlette servers demonstrating the `azure-ai-agentserver-responses` package. + +Each sample includes: +- `app.py` to start the sample server +- `test.py` as a requests-based client that exercises the scenario + +## Prerequisites + +- Python 3.10+ +- Dependencies from this package installed +- `requests` installed for test clients + +Install requests if needed: + +```bash +pip install requests +``` + +## Samples + +### GettingStarted + +A minimal handler that streams a fixed greeting, showing default JSON mode, streaming SSE mode, and background mode. + +```bash +python samples/GetStarted/app.py +python samples/GetStarted/test.py +``` + +### FunctionCalling + +A two-turn function-calling flow: +- Turn 1 returns a `get_weather` function call output item. +- Turn 2 sends a `function_call_output` input item and receives a text reply. + +```bash +python samples/FunctionCalling/app.py +python samples/FunctionCalling/test.py +``` + +### MultiOutput + +A response containing multiple output items in one run: +- reasoning item with summary text +- final text message item + +```bash +python samples/MultiOutput/app.py +python samples/MultiOutput/test.py +``` + +### ConversationHistory + +A multi-turn conversational flow using `previous_response_id` and `context.get_history_async()`. + +```bash +python samples/ConversationHistory/app.py +python samples/ConversationHistory/test.py +``` + +## Notes + +- Each sample binds to a unique local port: + - GettingStarted: `5100` + - FunctionCalling: `5101` + - MultiOutput: `5102` + - ConversationHistory: `5103` +- Start one sample app before running its corresponding `test.py`. 
\ No newline at end of file diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/__init__.py new file mode 100644 index 000000000000..94be86955924 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/__init__.py @@ -0,0 +1,3 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Code generation scripts for the responses server.""" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generate_validators.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generate_validators.py new file mode 100644 index 000000000000..88aa1e210294 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generate_validators.py @@ -0,0 +1,120 @@ +#!/usr/bin/env python3 +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Generate Python payload validators from an OpenAPI document.""" + +from __future__ import annotations + +import argparse +import json +from pathlib import Path +from typing import Any + +from validator_emitter import build_validator_module +from validator_schema_walker import SchemaWalker, discover_post_request_roots + + +def _load_spec(input_path: Path) -> dict[str, Any]: + """Load a JSON or YAML OpenAPI document from disk.""" + text = input_path.read_text(encoding="utf-8") + try: + loaded = json.loads(text) + if isinstance(loaded, dict): + return loaded + except json.JSONDecodeError: + pass + + try: + import yaml # type: ignore[import-not-found] + except ModuleNotFoundError as exc: + raise ValueError( + f"unable to parse OpenAPI file '{input_path}'. Expected JSON, or install PyYAML for YAML input." 
+ ) from exc + + loaded_yaml = yaml.safe_load(text) + if not isinstance(loaded_yaml, dict): + raise ValueError(f"OpenAPI file '{input_path}' must contain a top-level object") + return loaded_yaml + + +def _build_output(spec: dict[str, Any], roots: list[str]) -> str: + """Create deterministic validator module source text.""" + schemas = spec.get("components", {}).get("schemas", {}) + if not isinstance(schemas, dict): + schemas = {} + else: + schemas = dict(schemas) + + def _find_create_response_inline_schema() -> dict[str, Any] | None: + paths = spec.get("paths", {}) + for path, methods in paths.items(): + if not isinstance(methods, dict): + continue + if "responses" not in str(path).lower(): + continue + post = methods.get("post") + if not isinstance(post, dict): + continue + request_body = post.get("requestBody", {}) + content = request_body.get("content", {}).get("application/json", {}) + schema = content.get("schema", {}) + if isinstance(schema, dict) and "anyOf" in schema: + branches = schema.get("anyOf", []) + if isinstance(branches, list) and branches and isinstance(branches[0], dict): + return branches[0] + if isinstance(schema, dict) and "oneOf" in schema: + branches = schema.get("oneOf", []) + if isinstance(branches, list) and branches and isinstance(branches[0], dict): + return branches[0] + if isinstance(schema, dict): + return schema + return None + + for root in roots: + if root in schemas: + continue + if root == "CreateResponse": + inline_schema = _find_create_response_inline_schema() + if isinstance(inline_schema, dict): + schemas[root] = inline_schema + + # If explicit roots are provided, respect them and skip route-wide discovery. 
+ discovered_roots = [] if roots else discover_post_request_roots(spec) + merged_roots: list[str] = [] + seen: set[str] = set() + for root in [*roots, *discovered_roots]: + if root and root not in seen: + seen.add(root) + merged_roots.append(root) + + walker = SchemaWalker(schemas) + for root in merged_roots: + walker.walk(root) + + reachable = walker.reachable if walker.reachable else schemas + effective_roots = merged_roots if merged_roots else sorted(reachable) + return build_validator_module(reachable, effective_roots) + + +def main() -> int: + """Run the validator generator CLI.""" + parser = argparse.ArgumentParser(description="Generate Python payload validators from OpenAPI") + parser.add_argument("--input", required=True, help="Path to OpenAPI JSON file") + parser.add_argument("--output", required=True, help="Output Python module path") + parser.add_argument("--root-schemas", default="", help="Comma-separated root schema names") + args = parser.parse_args() + + input_path = Path(args.input) + output_path = Path(args.output) + roots = [part.strip() for part in args.root_schemas.split(",") if part.strip()] + + spec = _load_spec(input_path) + output = _build_output(spec, roots) + + output_path.parent.mkdir(parents=True, exist_ok=True) + output_path.write_text(output, encoding="utf-8") + return 0 + + +if __name__ == "__main__": + raise SystemExit(main()) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/__init__.py new file mode 100644 index 000000000000..b783bfa73795 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/__init__.py @@ -0,0 +1,11 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +# coding=utf-8 +# -------------------------------------------------------------------------- +# Copyright (c) Microsoft Corporation. All rights reserved. 
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# --------------------------------------------------------------------------
+
+"""Compatibility re-exports for generated models preserved under sdk/models."""
+
+from .sdk.models.models import *  # type: ignore  # noqa: F401,F403
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_enums.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_enums.py
new file mode 100644
index 000000000000..481d6d628755
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_enums.py
@@ -0,0 +1,11 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# --------------------------------------------------------------------------
+
+"""Compatibility shim for generated enum symbols."""
+
+from .sdk.models.models._enums import *  # type: ignore  # noqa: F401,F403
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_models.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_models.py
new file mode 100644
index 000000000000..01e649adb824
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_models.py
@@ -0,0 +1,11 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# --------------------------------------------------------------------------
+
+"""Compatibility shim for generated model symbols."""
+
+from .sdk.models.models._models import *  # type: ignore  # noqa: F401,F403
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_patch.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_patch.py
new file mode 100644
index 000000000000..66ee2dea3a63
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/_patch.py
@@ -0,0 +1,11 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# --------------------------------------------------------------------------
+
+"""Compatibility shim for generated patch helpers."""
+
+from .sdk.models.models._patch import *  # type: ignore  # noqa: F401,F403
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/models_patch.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/models_patch.py
new file mode 100644
index 000000000000..8e28092abd44
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/models_patch.py
@@ -0,0 +1,46 @@
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# --------------------------------------------------------------------------
+"""Hand-written customizations injected into the generated models package.
+
+This file is copied over the generated ``_patch.py`` inside
+``sdk/models/models/`` by ``make generate-models``. Anything listed in
+``__all__`` is automatically re-exported by the generated ``__init__.py``.
+
+Follow our quickstart for examples: https://aka.ms/azsdk/python/dpcodegen/python/customize
+"""
+
+from enum import Enum
+
+from azure.core import CaseInsensitiveEnumMeta
+
+
+class ResponseIncompleteReason(str, Enum, metaclass=CaseInsensitiveEnumMeta):
+    """Reason a response finished as incomplete.
+
+    The upstream TypeSpec defines this as an inline literal union
+    (``"max_output_tokens" | "content_filter"``), so the code generator
+    emits ``Literal[...]`` instead of a named enum. This hand-written
+    enum provides a friendlier symbolic constant for SDK consumers.
+    """
+
+    MAX_OUTPUT_TOKENS = "max_output_tokens"
+    """The response was cut short because the maximum output token limit was reached."""
+    CONTENT_FILTER = "content_filter"
+    """The response was cut short because of a content filter."""
+
+
+__all__: list[str] = [
+    "ResponseIncompleteReason",
+]
+
+
+def patch_sdk():
+    """Do not remove from this file.
+
+    `patch_sdk` is a last resort escape hatch that allows you to do customizations
+    you can't accomplish using the techniques described in
+    https://aka.ms/azsdk/python/dpcodegen/python/customize
+    """
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/sdk_models__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/sdk_models__init__.py
new file mode 100644
index 000000000000..9abd30ab9c84
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/generated_shims/sdk_models__init__.py
@@ -0,0 +1,11 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+# coding=utf-8
+# --------------------------------------------------------------------------
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for license information.
+# -------------------------------------------------------------------------- + +"""Model-only generated package surface.""" + +from .models import * # type: ignore # noqa: F401,F403 diff --git a/sdk/agentserver/azure-ai-agentserver-responses/scripts/validator_emitter.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/validator_emitter.py new file mode 100644 index 000000000000..79076e440034 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/validator_emitter.py @@ -0,0 +1,423 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Emitter that builds deterministic Python validator modules from schemas.""" + +from __future__ import annotations + +from typing import Any + + +def _sanitize_identifier(name: str) -> str: + normalized = "".join(ch if ch.isalnum() else "_" for ch in name) + while "__" in normalized: + normalized = normalized.replace("__", "_") + normalized = normalized.strip("_") + return normalized or "schema" + + +def _resolve_ref(ref: str) -> str: + return ref.rsplit("/", 1)[-1] + + +def _ordered(value: Any) -> Any: + if isinstance(value, dict): + return {k: _ordered(value[k]) for k in sorted(value)} + if isinstance(value, list): + return [_ordered(v) for v in value] + return value + + +def _header() -> str: + return ( + "# pylint: disable=line-too-long,useless-suppression,too-many-lines\n" + "# coding=utf-8\n" + "# --------------------------------------------------------------------------\n" + "# Copyright (c) Microsoft Corporation. All rights reserved.\n" + "# Licensed under the MIT License. 
See License.txt in the project root for license information.\n" + "# Code generated by Microsoft (R) Python Code Generator.\n" + "# Changes may cause incorrect behavior and will be lost if the code is regenerated.\n" + "# --------------------------------------------------------------------------\n" + ) + + +def _schema_kind(schema: dict[str, Any]) -> str | None: + schema_type = schema.get("type") + if isinstance(schema_type, str): + return schema_type + if "properties" in schema or "additionalProperties" in schema or "discriminator" in schema: + return "object" + if "oneOf" in schema or "anyOf" in schema: + return "union" + return None + + +def build_validator_module(schemas: dict[str, dict[str, Any]], roots: list[str]) -> str: + """Build generated validator module source code without runtime schema blobs.""" + ordered_schemas = _ordered(schemas) + target_roots = sorted(dict.fromkeys(roots)) if roots else sorted(ordered_schemas) + + lines: list[str] = [_header(), "", "from __future__ import annotations", "", "from typing import Any", ""] + lines.extend( + [ + "try:", + " from . 
import _enums as _generated_enums", + "except Exception:", + " _generated_enums = None", + "", + "def _append_error(errors: list[dict[str, str]], path: str, message: str) -> None:", + " errors.append({'path': path, 'message': message})", + "", + "def _type_label(value: Any) -> str:", + " if value is None:", + " return 'null'", + " if isinstance(value, bool):", + " return 'boolean'", + " if isinstance(value, int):", + " return 'integer'", + " if isinstance(value, float):", + " return 'number'", + " if isinstance(value, str):", + " return 'string'", + " if isinstance(value, dict):", + " return 'object'", + " if isinstance(value, list):", + " return 'array'", + " return type(value).__name__", + "", + "def _is_type(value: Any, expected: str) -> bool:", + " if expected == 'string':", + " return isinstance(value, str)", + " if expected == 'integer':", + " return isinstance(value, int) and not isinstance(value, bool)", + " if expected == 'number':", + " return (isinstance(value, int) and not isinstance(value, bool)) or isinstance(value, float)", + " if expected == 'boolean':", + " return isinstance(value, bool)", + " if expected == 'object':", + " return isinstance(value, dict)", + " if expected == 'array':", + " return isinstance(value, list)", + " return True", + "", + "def _append_type_mismatch(errors: list[dict[str, str]], path: str, expected: str, value: Any) -> None:", + " _append_error(errors, path, f\"Expected {expected}, got {_type_label(value)}\")", + "", + "def _enum_values(enum_name: str) -> tuple[tuple[str, ...] 
| None, str | None]:", + " if _generated_enums is None:", + " return None, f'enum type _enums.{enum_name} is unavailable'", + " enum_cls = getattr(_generated_enums, enum_name, None)", + " if enum_cls is None:", + " return None, f'enum type _enums.{enum_name} is not defined'", + " try:", + " return tuple(str(member.value) for member in enum_cls), None", + " except Exception:", + " return None, f'enum type _enums.{enum_name} failed to load values'", + "", + ] + ) + + function_schemas: dict[str, dict[str, Any]] = {} + function_hints: dict[str, str | None] = {} + function_order: list[str] = [] + anonymous_by_key: dict[str, str] = {} + + def make_unique_function_name(hint: str | None) -> str: + base = _sanitize_identifier(hint or "branch") + candidate = f"_validate_{base}" + if candidate not in function_schemas: + return candidate + + suffix = 2 + while True: + candidate = f"_validate_{base}_{suffix}" + if candidate not in function_schemas: + return candidate + suffix += 1 + + def ensure_schema_function(schema_name: str) -> str: + fn_name = f"_validate_{_sanitize_identifier(schema_name)}" + if fn_name not in function_schemas: + schema = ordered_schemas.get(schema_name) + if isinstance(schema, dict): + function_schemas[fn_name] = schema + function_hints[fn_name] = schema_name + function_order.append(fn_name) + return fn_name + + def ensure_anonymous_function(schema: dict[str, Any], hint: str | None = None) -> str: + key = repr(_ordered(schema)) + if key in anonymous_by_key: + existing = anonymous_by_key[key] + if function_hints.get(existing) is None and hint is not None: + function_hints[existing] = hint + return existing + fn_name = make_unique_function_name(hint) + anonymous_by_key[key] = fn_name + function_schemas[fn_name] = schema + function_hints[fn_name] = hint + function_order.append(fn_name) + return fn_name + + for root in target_roots: + ensure_schema_function(root) + + def emit_line(block: list[str], indent: int, text: str) -> None: + block.append((" " * 
indent) + text) + + def emit_union( + schema: dict[str, Any], + block: list[str], + indent: int, + value_expr: str, + path_expr: str, + errors_expr: str, + schema_name_hint: str | None, + ) -> None: + branches = schema.get("oneOf", schema.get("anyOf", [])) + branch_funcs: list[tuple[str, str]] = [] + expected_labels: list[str] = [] + has_inline_enum_branch = False + + for branch in branches: + if not isinstance(branch, dict): + continue + + if "$ref" in branch: + ref_name = _resolve_ref(str(branch["$ref"])) + ref_schema = ordered_schemas.get(ref_name) + if isinstance(ref_schema, dict): + branch_funcs.append((ensure_schema_function(ref_name), _schema_kind(ref_schema) or "value")) + expected_labels.append(ref_name) + continue + + if schema_name_hint and "enum" in branch: + # Keep enum branches tied to the logical schema name so enum-class resolution stays stable. + branch_hint = schema_name_hint + has_inline_enum_branch = True + else: + branch_type = branch.get("type") if isinstance(branch.get("type"), str) else (_schema_kind(branch) or "branch") + branch_hint = f"{schema_name_hint}_{branch_type}" if schema_name_hint else str(branch_type) + fn_name = ensure_anonymous_function(branch, hint=branch_hint) + branch_funcs.append((fn_name, _schema_kind(branch) or "value")) + label = branch.get("type") if isinstance(branch.get("type"), str) else (_schema_kind(branch) or "value") + expected_labels.append(str(label)) + + if not branch_funcs: + return + + emit_line(block, indent, "_matched_union = False") + for idx, (fn_name, kind) in enumerate(branch_funcs): + condition = "True" if kind in ("value", "union", None) else f"_is_type({value_expr}, {kind!r})" + emit_line(block, indent, f"if not _matched_union and {condition}:") + emit_line(block, indent + 1, f"_branch_errors_{idx}: list[dict[str, str]] = []") + emit_line(block, indent + 1, f"{fn_name}({value_expr}, {path_expr}, _branch_errors_{idx})") + emit_line(block, indent + 1, f"if not _branch_errors_{idx}:") + 
emit_line(block, indent + 2, "_matched_union = True") + + unique_expected_labels = list(dict.fromkeys(expected_labels)) + emit_line(block, indent, "if not _matched_union:") + if len(unique_expected_labels) == 1: + only_label = unique_expected_labels[0] + if schema_name_hint and only_label == "string" and has_inline_enum_branch: + schema_label = schema_name_hint.rsplit(".", 1)[-1] + emit_line( + block, + indent + 1, + f"_append_error({errors_expr}, {path_expr}, f\"Expected {schema_label} to be a string value, got {{_type_label({value_expr})}}\")", + ) + else: + emit_line(block, indent + 1, f"_append_error({errors_expr}, {path_expr}, 'Expected {only_label}')") + else: + expected = ", ".join(unique_expected_labels) if unique_expected_labels else "valid branch" + emit_line( + block, + indent + 1, + f"_append_error({errors_expr}, {path_expr}, f\"Expected one of: {expected}; got {{_type_label({value_expr})}}\")", + ) + emit_line(block, indent + 1, "return") + + def emit_schema_body( + schema: dict[str, Any], + block: list[str], + indent: int, + value_expr: str, + path_expr: str, + errors_expr: str, + schema_name_hint: str | None = None, + ) -> None: + if schema.get("nullable"): + emit_line(block, indent, f"if {value_expr} is None:") + emit_line(block, indent + 1, "return") + + if "$ref" in schema: + ref_name = _resolve_ref(str(schema["$ref"])) + ref_schema = ordered_schemas.get(ref_name) + if isinstance(ref_schema, dict): + emit_line(block, indent, f"{ensure_schema_function(ref_name)}({value_expr}, {path_expr}, {errors_expr})") + return + + if "enum" in schema: + allowed = tuple(schema.get("enum", [])) + enum_class_name = None + if schema_name_hint: + hint_schema = ordered_schemas.get(schema_name_hint) + hint_is_enum_like = False + if isinstance(hint_schema, dict): + if "enum" in hint_schema: + hint_is_enum_like = True + else: + for combo in ("oneOf", "anyOf"): + branches = hint_schema.get(combo, []) + if isinstance(branches, list) and any( + isinstance(b, dict) and 
"enum" in b for b in branches + ): + hint_is_enum_like = True + break + if hint_is_enum_like: + candidate = schema_name_hint.rsplit(".", 1)[-1] + if candidate and candidate[0].isalpha(): + enum_class_name = candidate + + if enum_class_name: + emit_line( + block, + indent, + f"_allowed_values, _enum_error = _enum_values({enum_class_name!r})", + ) + emit_line(block, indent, "if _enum_error is not None:") + emit_line(block, indent + 1, f"_append_error({errors_expr}, {path_expr}, _enum_error)") + emit_line(block, indent + 1, "return") + emit_line(block, indent, "if _allowed_values is None:") + emit_line(block, indent + 1, "return") + else: + emit_line(block, indent, f"_allowed_values = {allowed!r}") + emit_line(block, indent, f"if {value_expr} not in _allowed_values:") + emit_line( + block, + indent + 1, + f"_append_error({errors_expr}, {path_expr}, f\"Invalid value '{{{value_expr}}}'. Allowed: {{', '.join(str(v) for v in _allowed_values)}}\")", + ) + + if "oneOf" in schema or "anyOf" in schema: + emit_union(schema, block, indent, value_expr, path_expr, errors_expr, schema_name_hint) + return + + schema_type = schema.get("type") + effective_type = schema_type if isinstance(schema_type, str) else _schema_kind(schema) + + if isinstance(effective_type, str) and effective_type not in ("value", "union"): + emit_line(block, indent, f"if not _is_type({value_expr}, {effective_type!r}):") + emit_line(block, indent + 1, f"_append_type_mismatch({errors_expr}, {path_expr}, {effective_type!r}, {value_expr})") + emit_line(block, indent + 1, "return") + + if effective_type == "array": + items = schema.get("items") + if isinstance(items, dict): + item_hint = f"{schema_name_hint}_item" if schema_name_hint else "item" + item_fn = ensure_anonymous_function(items, hint=item_hint) + emit_line(block, indent, f"for _idx, _item in enumerate({value_expr}):") + emit_line(block, indent + 1, f"{item_fn}(_item, f\"{{{path_expr}}}[{{_idx}}]\", {errors_expr})") + return + + if effective_type == 
"object": + properties = schema.get("properties", {}) + required = schema.get("required", []) + if isinstance(properties, dict): + for field in required: + emit_line(block, indent, f"if {field!r} not in {value_expr}:") + emit_line( + block, + indent + 1, + f"_append_error({errors_expr}, f\"{{{path_expr}}}.{field}\", \"Required property '{field}' is missing\")", + ) + + for field, field_schema in sorted(properties.items()): + if not isinstance(field_schema, dict): + continue + field_hint = f"{schema_name_hint}_{field}" if schema_name_hint else field + field_fn = ensure_anonymous_function(field_schema, hint=field_hint) + emit_line(block, indent, f"if {field!r} in {value_expr}:") + emit_line( + block, + indent + 1, + f"{field_fn}({value_expr}[{field!r}], f\"{{{path_expr}}}.{field}\", {errors_expr})", + ) + + addl = schema.get("additionalProperties") + if isinstance(addl, dict): + addl_hint = f"{schema_name_hint}_additional_property" if schema_name_hint else "additional_property" + addl_fn = ensure_anonymous_function(addl, hint=addl_hint) + known = tuple(sorted(properties.keys())) if isinstance(properties, dict) else tuple() + emit_line(block, indent, f"for _key, _item in {value_expr}.items():") + emit_line(block, indent + 1, f"if _key not in {known!r}:") + emit_line(block, indent + 2, f"{addl_fn}(_item, f\"{{{path_expr}}}.{{_key}}\", {errors_expr})") + + disc = schema.get("discriminator") + if isinstance(disc, dict): + prop = disc.get("propertyName", "type") + mapping = disc.get("mapping", {}) + emit_line(block, indent, f"_disc_value = {value_expr}.get({prop!r})") + emit_line(block, indent, f"if not isinstance(_disc_value, str):") + emit_line( + block, + indent + 1, + f"_append_error({errors_expr}, f\"{{{path_expr}}}.{prop}\", \"Required discriminator '{prop}' is missing or invalid\")", + ) + emit_line(block, indent + 1, "return") + + for disc_value, ref in sorted(mapping.items()): + if not isinstance(ref, str): + continue + ref_name = _resolve_ref(ref) + ref_schema = 
ordered_schemas.get(ref_name) + if not isinstance(ref_schema, dict): + continue + ref_fn = ensure_schema_function(ref_name) + emit_line(block, indent, f"if _disc_value == {disc_value!r}:") + emit_line(block, indent + 1, f"{ref_fn}({value_expr}, {path_expr}, {errors_expr})") + + rendered_blocks: dict[str, list[str]] = {} + idx = 0 + while idx < len(function_order): + fn_name = function_order[idx] + idx += 1 + schema = function_schemas[fn_name] + block: list[str] = [f"def {fn_name}(value: Any, path: str, errors: list[dict[str, str]]) -> None:"] + schema_name_hint = function_hints.get(fn_name) + emit_schema_body(schema, block, 1, "value", "path", "errors", schema_name_hint=schema_name_hint) + if len(block) == 1: + emit_line(block, 1, "return") + rendered_blocks[fn_name] = block + + for fn_name in function_order: + lines.extend(rendered_blocks[fn_name]) + lines.append("") + + lines.append("ROOT_SCHEMAS = " + repr(target_roots)) + lines.append("") + + for root in target_roots: + class_name = f"{_sanitize_identifier(root)}Validator" + fn_name = f"_validate_{_sanitize_identifier(root)}" + lines.append(f"class {class_name}:") + lines.append(" \"\"\"Generated validator for the root schema.\"\"\"") + lines.append("") + lines.append(" @staticmethod") + lines.append(" def validate(payload: Any) -> list[dict[str, str]]:") + lines.append(" errors: list[dict[str, str]] = []") + lines.append(f" {fn_name}(payload, '$', errors)") + lines.append(" return errors") + lines.append("") + + wrapper_name = f"validate_{_sanitize_identifier(root)}" + lines.append(f"def {wrapper_name}(payload: Any) -> list[dict[str, str]]:") + lines.append(f" return {class_name}.validate(payload)") + lines.append("") + + if not target_roots: + lines.append("def validate_payload(payload: Any) -> list[dict[str, str]]:") + lines.append(" _ = payload") + lines.append(" return []") + lines.append("") + + return "\n".join(lines).rstrip() + "\n" diff --git 
a/sdk/agentserver/azure-ai-agentserver-responses/scripts/validator_schema_walker.py b/sdk/agentserver/azure-ai-agentserver-responses/scripts/validator_schema_walker.py new file mode 100644 index 000000000000..a3a90bffd027 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/scripts/validator_schema_walker.py @@ -0,0 +1,105 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Schema walking helpers for validator generation.""" + +from __future__ import annotations + +from typing import Any + + +def resolve_ref(ref: str) -> str: + """Extract schema name from OpenAPI $ref values.""" + return ref.rsplit("/", 1)[-1] + + +def _iter_subschemas(schema: dict[str, Any]) -> list[dict[str, Any]]: + """Yield nested schema objects that may contain references.""" + nested: list[dict[str, Any]] = [] + + for key in ("oneOf", "anyOf", "allOf"): + branches = schema.get(key, []) + if isinstance(branches, list): + nested.extend([branch for branch in branches if isinstance(branch, dict)]) + + properties = schema.get("properties", {}) + if isinstance(properties, dict): + nested.extend([value for value in properties.values() if isinstance(value, dict)]) + + items = schema.get("items") + if isinstance(items, dict): + nested.append(items) + + additional = schema.get("additionalProperties") + if isinstance(additional, dict): + nested.append(additional) + + return nested + + +class SchemaWalker: + """Collect schemas reachable from one or more roots.""" + + def __init__(self, schemas: dict[str, dict[str, Any]]) -> None: + self.schemas = schemas + self.reachable: dict[str, dict[str, Any]] = {} + self._visited: set[str] = set() + + def walk(self, name: str) -> None: + """Walk a schema by name and recursively collect reachable references.""" + if name in self._visited: + return + self._visited.add(name) + + schema = self.schemas.get(name) + if schema is None: + return + + self.reachable[name] = schema + self._walk_schema(schema) + + def 
_walk_schema(self, schema: dict[str, Any]) -> None: + """Walk nested schema branches.""" + ref = schema.get("$ref") + if isinstance(ref, str): + self.walk(resolve_ref(ref)) + return + + for nested in _iter_subschemas(schema): + self._walk_schema(nested) + + +def discover_post_request_roots(spec: dict[str, Any]) -> list[str]: + """Discover root schema names referenced by POST request bodies.""" + roots: list[str] = [] + paths = spec.get("paths", {}) + + for _path, methods in sorted(paths.items()): + if not isinstance(methods, dict): + continue + post = methods.get("post") + if not isinstance(post, dict): + continue + request_body = post.get("requestBody", {}) + content = request_body.get("content", {}).get("application/json", {}) + schema = content.get("schema", {}) + + if isinstance(schema, dict) and isinstance(schema.get("$ref"), str): + roots.append(resolve_ref(schema["$ref"])) + continue + + if isinstance(schema, dict): + for key in ("oneOf", "anyOf"): + branches = schema.get(key, []) + if not isinstance(branches, list): + continue + for branch in branches: + if isinstance(branch, dict) and isinstance(branch.get("$ref"), str): + roots.append(resolve_ref(branch["$ref"])) + + deduped: list[str] = [] + seen: set[str] = set() + for root in roots: + if root not in seen: + seen.add(root) + deduped.append(root) + return deduped diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/__init__.py new file mode 100644 index 000000000000..9a0454564dbb --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/__init__.py @@ -0,0 +1,2 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/_helpers/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/_helpers/__init__.py
new file mode 100644
index 000000000000..0efcce424aec
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/_helpers/__init__.py
@@ -0,0 +1,7 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Shared testing helpers for deterministic synchronization and diagnostics."""
+
+from .synchronization import EventGate, format_async_failure, poll_until
+
+__all__ = ["poll_until", "EventGate", "format_async_failure"]
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/_helpers/synchronization.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/_helpers/synchronization.py
new file mode 100644
index 000000000000..22215fdd283b
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/_helpers/synchronization.py
@@ -0,0 +1,76 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Deterministic synchronization helpers used by contract and integration tests."""
+
+from __future__ import annotations
+
+import threading
+import time
+from typing import Any, Callable, Mapping
+
+
+ContextProvider = Callable[[], Mapping[str, Any] | None]
+
+
+def format_async_failure(
+    *,
+    label: str,
+    timeout_s: float,
+    elapsed_s: float,
+    context: Mapping[str, Any] | None,
+) -> str:
+    """Build a stable, diagnostics-rich timeout failure message."""
+    context_payload = dict(context or {})
+    return (
+        f"{label} timed out after {elapsed_s:.3f}s (budget={timeout_s:.3f}s); "
+        f"diagnostics={context_payload}"
+    )
+
+
+def poll_until(
+    condition: Callable[[], bool],
+    *,
+    timeout_s: float,
+    interval_s: float = 0.05,
+    context_provider: ContextProvider | None = None,
+    label: str = "poll_until condition",
+) -> tuple[bool, str | None]:
+    """Poll a condition until true or timeout; always returns diagnostic details on timeout."""
+    start = time.monotonic()
+    deadline = start + timeout_s
+    last_context: Mapping[str, Any] | None = None
+
+    while time.monotonic() < deadline:
+        if condition():
+            return True, None
+        if context_provider is not None:
+            maybe_context = context_provider()
+            if maybe_context is not None:
+                last_context = maybe_context
+        time.sleep(interval_s)
+
+    elapsed = time.monotonic() - start
+    return False, format_async_failure(
+        label=label,
+        timeout_s=timeout_s,
+        elapsed_s=elapsed,
+        context=last_context,
+    )
+
+
+class EventGate:
+    """Thread-safe event gate for deterministic synchronization in tests."""
+
+    __slots__ = ("_event", "_payload")
+
+    def __init__(self) -> None:
+        self._event = threading.Event()
+        self._payload: Any = None
+
+    def signal(self, payload: Any = None) -> None:
+        self._payload = payload
+        self._event.set()
+
+    def wait(self, *, timeout_s: float) -> tuple[bool, Any]:
+        ok = self._event.wait(timeout_s)
+        return ok, self._payload
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/conftest.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/conftest.py
new file mode 100644
index 000000000000..9d834c339b88
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/conftest.py
@@ -0,0 +1,11 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Root conftest — ensures the project root is on sys.path so that
+``from tests._helpers import …`` works regardless of how pytest is invoked."""
+
+import sys
+from pathlib import Path
+
+_PROJECT_ROOT = str(Path(__file__).resolve().parent.parent)
+if _PROJECT_ROOT not in sys.path:
+    sys.path.insert(0, _PROJECT_ROOT)
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/__init__.py
new file mode 100644
index 000000000000..9a0454564dbb
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/__init__.py
@@ -0,0 +1,2 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cancel_endpoint.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cancel_endpoint.py
new file mode 100644
index 000000000000..5e8f0ac8268f
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cancel_endpoint.py
@@ -0,0 +1,421 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Contract tests for POST /responses/{response_id}/cancel behavior.""" + +from __future__ import annotations + +import asyncio +import threading +from typing import Any + +import pytest +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses._id_generator import IdGenerator +from tests._helpers import EventGate, poll_until + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire the hosting surface in contract tests.""" + async def _events(): + if False: # pragma: no cover - required to keep async-generator shape. + yield None + + return _events() + + +def _delayed_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that keeps background execution cancellable for a short period.""" + async def _events(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.25) + if cancellation_signal.is_set(): + return + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +def _cancellable_bg_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits response.created then blocks until cancelled. + + Phase 3: response_created_signal is set on the first event, so run_background + returns quickly with in_progress status while the task waits for cancellation. 
+ """ + async def _events(): + yield { + "type": "response.created", + "payload": { + "status": "in_progress", + "output": [], + }, + } + # Block until cancellation signal is set + while not cancellation_signal.is_set(): + await asyncio.sleep(0.01) + + return _events() + + +def _raising_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that raises to transition a background response into failed.""" + async def _events(): + raise RuntimeError("simulated handler failure") + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +def _unknown_cancellation_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that raises an unknown cancellation exception source.""" + async def _events(): + raise asyncio.CancelledError("unknown cancellation source") + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +def _incomplete_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits an explicit incomplete terminal response event.""" + async def _events(): + yield { + "type": "response.created", + "payload": { + "status": "queued", + "output": [], + }, + } + yield { + "type": "response.in_progress", + "payload": { + "status": "in_progress", + "output": [], + }, + } + yield { + "type": "response.incomplete", + "payload": { + "status": "incomplete", + "output": [], + }, + } + + return _events() + + +def _make_blocking_sync_response_handler( + started_gate: EventGate, release_gate: threading.Event +): + """Factory for a handler that holds a sync request in-flight for deterministic concurrent cancel checks.""" + def handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + started_gate.signal(True) + while not release_gate.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + if False: # pragma: no cover - keep async generator shape. 
+ yield None + + return _events() + + return handler + + +def _build_client(handler: Any | None = None) -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler or _noop_response_handler) + return TestClient(server.app) + + +def _create_background_response(client: TestClient, *, response_id: str | None = None) -> str: + payload: dict[str, Any] = { + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + } + if response_id is not None: + payload["response_id"] = response_id + + create_response = client.post("/responses", json=payload) + assert create_response.status_code == 200 + created_id = create_response.json().get("id") + assert isinstance(created_id, str) + return created_id + + +def _wait_for_status(client: TestClient, response_id: str, expected_status: str, *, timeout_s: float = 5.0) -> None: + latest_status: str | None = None + + def _has_expected_status() -> bool: + nonlocal latest_status + get_response = client.get(f"/responses/{response_id}") + if get_response.status_code != 200: + return False + latest_status = get_response.json().get("status") + return latest_status == expected_status + + ok, failure = poll_until( + _has_expected_status, + timeout_s=timeout_s, + interval_s=0.05, + context_provider=lambda: {"response_id": response_id, "last_status": latest_status}, + label=f"wait for status={expected_status}", + ) + assert ok, failure + + +def _assert_error( + response: Any, + *, + expected_status: int, + expected_type: str, + expected_message: str | None = None, +) -> None: + assert response.status_code == expected_status + payload = response.json() + assert isinstance(payload.get("error"), dict) + assert payload["error"].get("type") == expected_type + if expected_message is not None: + assert payload["error"].get("message") == expected_message + + +def test_cancel__cancels_background_response_and_clears_output() -> None: + client = 
_build_client(_cancellable_bg_response_handler)
+
+    response_id = _create_background_response(client)
+
+    cancel_response = client.post(f"/responses/{response_id}/cancel")
+    assert cancel_response.status_code == 200
+    payload = cancel_response.json()
+    assert payload.get("status") == "cancelled"
+    assert payload.get("output") == []
+
+    get_response = client.get(f"/responses/{response_id}")
+    assert get_response.status_code == 200
+    snapshot = get_response.json()
+    assert snapshot.get("status") == "cancelled"
+    assert snapshot.get("output") == []
+
+
+def test_cancel__is_idempotent_for_already_cancelled_response() -> None:
+    client = _build_client(_cancellable_bg_response_handler)
+
+    response_id = _create_background_response(client)
+
+    first_cancel = client.post(f"/responses/{response_id}/cancel")
+    assert first_cancel.status_code == 200
+    assert first_cancel.json().get("status") == "cancelled"
+    assert first_cancel.json().get("output") == []
+
+    second_cancel = client.post(f"/responses/{response_id}/cancel")
+    assert second_cancel.status_code == 200
+    assert second_cancel.json().get("status") == "cancelled"
+    assert second_cancel.json().get("output") == []
+
+
+def test_cancel__returns_400_for_completed_background_response() -> None:
+    client = _build_client()
+    response_id = _create_background_response(client)
+    _wait_for_status(client, response_id, "completed")
+
+    cancel_response = client.post(f"/responses/{response_id}/cancel")
+    _assert_error(
+        cancel_response,
+        expected_status=400,
+        expected_type="invalid_request_error",
+        expected_message="Cannot cancel a completed response.",
+    )
+
+
+def test_cancel__create_returns_500_when_handler_fails_before_first_event() -> None:
+    """Phase 3: a handler that raises before emitting any event causes POST /responses to return 500.
+    No response record is stored for the failed create, so there is nothing left to cancel.
+    """
+    client = _build_client(_raising_response_handler)
+    create_response = client.post(
+        "/responses",
+        json={"model": "gpt-4o-mini", "input": "hello", "stream": False, "store": True, "background": True},
+    )
+    # Phase 3: handler failed before emitting response.created → HTTP 500
+    assert create_response.status_code == 500
+
+
+@pytest.mark.skip(
+    reason="Known gap (S-024): unknown cancellation exceptions should map to handler-error path instead of escaping as CancelledError",
+)
+def test_cancel__unknown_cancellation_exception_is_treated_as_failed() -> None:
+    client = _build_client(_unknown_cancellation_response_handler)
+    response_id = _create_background_response(client)
+    _wait_for_status(client, response_id, "failed")
+
+    get_response = client.get(f"/responses/{response_id}")
+    assert get_response.status_code == 200
+    assert get_response.json().get("status") == "failed"
+
+
+def test_cancel__stream_disconnect_sets_handler_cancellation_signal() -> None:
+    pytest.skip(
+        "Requires a real ASGI disconnect harness; Starlette TestClient does not deterministically surface"
+        " client-disconnect cancellation signals to the handler."
+    )
+
+
+def test_cancel__background_stream_disconnect_does_not_cancel_handler() -> None:
+    pytest.skip(
+        "Requires a real ASGI disconnect harness to verify that background execution is immune to"
+        " stream client disconnect per S-026.
+ ) + + +def test_cancel__returns_400_for_incomplete_background_response() -> None: + client = _build_client(_incomplete_response_handler) + response_id = _create_background_response(client) + _wait_for_status(client, response_id, "incomplete") + + cancel_response = client.post(f"/responses/{response_id}/cancel") + _assert_error( + cancel_response, + expected_status=400, + expected_type="invalid_request_error", + ) + + +def test_cancel__returns_400_for_synchronous_response() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + cancel_response = client.post(f"/responses/{response_id}/cancel") + _assert_error( + cancel_response, + expected_status=400, + expected_type="invalid_request_error", + expected_message="Cannot cancel a synchronous response.", + ) + + +def test_cancel__returns_404_for_in_flight_synchronous_response() -> None: + started_gate = EventGate() + release_gate = threading.Event() + client = _build_client(_make_blocking_sync_response_handler(started_gate, release_gate)) + response_id = IdGenerator.new_response_id() + + create_result: dict[str, Any] = {} + + def _issue_sync_create() -> None: + try: + create_result["response"] = client.post( + "/responses", + json={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + except Exception as exc: # pragma: no cover - surfaced by assertions below. 
+ create_result["error"] = exc + + create_thread = threading.Thread(target=_issue_sync_create, daemon=True) + create_thread.start() + + started, _ = started_gate.wait(timeout_s=2.0) + assert started, "Expected sync create request to enter handler before cancel call" + + cancel_response = client.post(f"/responses/{response_id}/cancel") + _assert_error( + cancel_response, + expected_status=404, + expected_type="invalid_request_error", + ) + + release_gate.set() + create_thread.join(timeout=2.0) + assert not create_thread.is_alive(), "Expected in-flight sync request to finish after release" + + thread_error = create_result.get("error") + assert thread_error is None, str(thread_error) + create_response = create_result.get("response") + assert create_response is not None + assert create_response.status_code == 200 + + +def test_cancel__returns_404_for_unknown_response_id() -> None: + client = _build_client() + + cancel_response = client.post("/responses/resp_does_not_exist/cancel") + _assert_error( + cancel_response, + expected_status=404, + expected_type="invalid_request_error", + ) + + +# ══════════════════════════════════════════════════════════ +# B-11: Cancel from queued / early in_progress state +# ══════════════════════════════════════════════════════════ + + +def test_cancel__from_queued_or_early_in_progress_succeeds() -> None: + """B-11 — Cancel issued immediately after creation (queued/early in_progress) returns HTTP 200, + status=cancelled, and output=[].""" + client = _build_client(_cancellable_bg_response_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + # Cancel immediately — response is likely queued or very early in_progress. 
+ cancel_response = client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 200 + + _wait_for_status(client, response_id, "cancelled") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + assert payload["status"] == "cancelled" + assert payload.get("output") == [], ( + f"output must be cleared for a cancelled response, got: {payload.get('output')}" + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_create_endpoint.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_create_endpoint.py new file mode 100644 index 000000000000..b029ca0e9e2c --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_create_endpoint.py @@ -0,0 +1,841 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Contract tests for POST /responses endpoint behavior.""" + +from __future__ import annotations + +from typing import Any + +from starlette.testclient import TestClient + +from tests._helpers import poll_until + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire the hosting surface in contract tests.""" + async def _events(): + if False: # pragma: no cover - required to keep async-generator shape. 
+ yield None + + return _events() + + +def _build_client() -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + return TestClient(server.app) + + +def test_create__returns_json_response_for_non_streaming_success() -> None: + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + payload = response.json() + assert isinstance(payload.get("id"), str) + assert payload["id"].startswith("caresp_") + assert payload.get("response_id") == payload.get("id") + assert isinstance(payload.get("agent_reference"), dict) + assert payload["agent_reference"].get("type") == "agent_reference" + assert isinstance(payload["agent_reference"].get("name"), str) + assert payload.get("object") == "response" + assert payload.get("status") in {"completed", "in_progress", "queued"} + assert "sequence_number" not in payload + + +def test_create__preserves_client_supplied_identity_fields() -> None: + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "response_id": "caresp_1234567890abcdef00ABCDEFGHIJKLMNOPQRSTUVWXYZ012345", + "agent_reference": { + "type": "agent_reference", + "name": "custom-agent", + "version": "v1", + }, + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + payload = response.json() + assert payload.get("id") == "caresp_1234567890abcdef00ABCDEFGHIJKLMNOPQRSTUVWXYZ012345" + assert payload.get("response_id") == "caresp_1234567890abcdef00ABCDEFGHIJKLMNOPQRSTUVWXYZ012345" + assert payload.get("agent_reference") == { + "type": "agent_reference", + "name": "custom-agent", + "version": "v1", + } + + +def test_create__rejects_invalid_response_id_format() -> None: + client = _build_client() + + response = 
client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "response_id": "bad-id", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + assert payload["error"].get("type") == "invalid_request_error" + assert payload["error"].get("param") == "response_id" + + +def test_create__rejects_invalid_agent_reference_shape() -> None: + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "agent_reference": {"type": "not_agent_reference", "name": "bad"}, + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + assert payload["error"].get("type") == "invalid_request_error" + assert payload["error"].get("param") == "agent_reference.type" + + +def test_create__returns_structured_400_for_invalid_payload() -> None: + client = _build_client() + + response = client.post( + "/responses", + json={ + "background": True, + "store": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + error = payload.get("error") + assert isinstance(error, dict) + assert error.get("type") == "invalid_request_error" + + +def test_create__store_false_response_is_not_visible_via_get() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": False, + "background": False, + }, + ) + + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 404 + + +def test_create__background_mode_returns_immediate_then_reaches_terminal_state() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": 
False, + "store": True, + "background": True, + }, + ) + + assert create_response.status_code == 200 + created_payload = create_response.json() + # Phase 3: handler runs immediately, so POST may return completed when the + # noop handler finishes quickly in the TestClient synchronous context. + assert created_payload.get("status") in {"queued", "in_progress", "completed"} + response_id = created_payload["id"] + + latest_snapshot: dict[str, Any] = {} + + def _is_terminal() -> bool: + nonlocal latest_snapshot + snapshot_response = client.get(f"/responses/{response_id}") + if snapshot_response.status_code != 200: + return False + latest_snapshot = snapshot_response.json() + return latest_snapshot.get("status") in {"completed", "failed", "incomplete", "cancelled"} + + ok, failure = poll_until( + _is_terminal, + timeout_s=5.0, + interval_s=0.05, + context_provider=lambda: {"last_status": latest_snapshot.get("status")}, + label="background create terminal transition", + ) + assert ok, failure + + +def test_create__non_stream_returns_completed_response_with_output_items() -> None: + from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + def _output_producing_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("hello") + yield text_content.emit_done() + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_output_producing_handler) + client = TestClient(server.app) + + 
response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + payload = response.json() + assert payload.get("status") == "completed" + assert "sequence_number" not in payload + assert isinstance(payload.get("output"), list) + assert len(payload["output"]) == 1 + assert payload["output"][0].get("type") == "output_message" + assert payload["output"][0].get("content", [])[0].get("type") == "output_text" + assert payload["output"][0].get("content", [])[0].get("text") == "hello" + + +def test_create__background_non_stream_get_eventually_returns_output_items() -> None: + from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + def _output_producing_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("hello") + yield text_content.emit_done() + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_output_producing_handler) + client = TestClient(server.app) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + latest_snapshot: dict[str, Any] = {} + + def _is_completed_with_output() -> bool: + nonlocal latest_snapshot + 
snapshot_response = client.get(f"/responses/{response_id}") + if snapshot_response.status_code != 200: + return False + latest_snapshot = snapshot_response.json() + output = latest_snapshot.get("output") + return latest_snapshot.get("status") == "completed" and isinstance(output, list) and len(output) == 1 + + ok, failure = poll_until( + _is_completed_with_output, + timeout_s=5.0, + interval_s=0.05, + context_provider=lambda: { + "last_status": latest_snapshot.get("status"), + "last_output_count": len(latest_snapshot.get("output", [])) + if isinstance(latest_snapshot.get("output"), list) + else None, + }, + label="background non-stream output availability", + ) + assert ok, failure + + assert latest_snapshot["output"][0].get("type") == "output_message" + assert latest_snapshot["output"][0].get("content", [])[0].get("type") == "output_text" + assert latest_snapshot["output"][0].get("content", [])[0].get("text") == "hello" + assert "sequence_number" not in latest_snapshot + + +def test_create__model_is_optional_and_resolved_to_empty_or_default() -> None: + """B22 — model can be omitted. 
Resolution: request.model → default_model → empty string.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + payload = response.json() + # B22: model should be present (possibly empty string or server default) + assert "model" in payload + assert isinstance(payload["model"], str) + + +def test_create__metadata_rejects_more_than_16_keys() -> None: + """Metadata constraints: max 16 key-value pairs.""" + client = _build_client() + + metadata = {f"key_{i}": f"value_{i}" for i in range(17)} + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "metadata": metadata, + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + assert payload["error"]["type"] == "invalid_request_error" + + +def test_create__metadata_rejects_key_longer_than_64_chars() -> None: + """Metadata constraints: key max 64 characters.""" + client = _build_client() + + metadata = {"a" * 65: "value"} + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "metadata": metadata, + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + assert payload["error"]["type"] == "invalid_request_error" + + +def test_create__metadata_rejects_value_longer_than_512_chars() -> None: + """Metadata constraints: value max 512 characters.""" + client = _build_client() + + metadata = {"key": "v" * 513} + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "metadata": metadata, + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + assert payload["error"]["type"] == "invalid_request_error" + + +def 
test_create__validation_error_includes_details_array() -> None: + """B29 — Invalid request returns 400 with details[] array.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": "not-a-bool", + "store": True, + "background": False, + }, + ) + + assert response.status_code == 400 + payload = response.json() + error = payload.get("error") + assert error is not None + assert error.get("type") == "invalid_request_error" + # B29: should have details[] array + details = error.get("details") + assert isinstance(details, list), f"Expected details[] array per B29, got: {type(details)}" + assert len(details) >= 1 + for detail in details: + assert detail.get("type") == "invalid_request_error" + assert detail.get("code") == "invalid_value" + assert "param" in detail + assert "message" in detail + + +# ══════════════════════════════════════════════════════════ +# B-1, B-2, B-3: Request body edge cases +# ══════════════════════════════════════════════════════════ + + +def test_create__returns_400_for_empty_body() -> None: + """B-1 — Empty request body → HTTP 400, error.type: invalid_request_error.""" + client = _build_client() + + response = client.post( + "/responses", + content=b"", + headers={"Content-Type": "application/json"}, + ) + + assert response.status_code == 400 + payload = response.json() + assert isinstance(payload.get("error"), dict) + assert payload["error"].get("type") == "invalid_request_error" + + +def test_create__returns_400_for_invalid_json_body() -> None: + """B-2 — Malformed JSON body → HTTP 400, error.type: invalid_request_error.""" + client = _build_client() + + response = client.post( + "/responses", + content=b"{invalid json", + headers={"Content-Type": "application/json"}, + ) + + assert response.status_code == 400 + payload = response.json() + assert isinstance(payload.get("error"), dict) + assert payload["error"].get("type") == "invalid_request_error" + + +def 
test_create__ignores_unknown_fields_in_request_body() -> None: + """B-3 — Unknown fields are ignored for forward compatibility → HTTP 200.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + "foo": "bar", + "unknown_nested": {"key": "value"}, + }, + ) + + assert response.status_code == 200 + payload = response.json() + assert payload.get("object") == "response" + + +# ══════════════════════════════════════════════════════════ +# Task 4.1 — _process_handler_events sync contract tests +# ══════════════════════════════════════════════════════════ + + +def test_sync_handler_exception_returns_500() -> None: + """T5 — Handler raises an exception; stream=False → HTTP 500. + + B8 / B-13 for sync mode: any handler exception surfaces as HTTP 500. + """ + def _raising_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + raise RuntimeError("Simulated handler failure") + if False: # pragma: no cover + yield None + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_raising_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + response = client.post( + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": False, "store": True, "background": False}, + ) + + assert response.status_code == 500 + + +def test_sync_no_terminal_event_still_completes() -> None: + """T6 — Handler yields response.created + response.in_progress but no terminal; stream=False → HTTP 200, status=failed. + + S-021: When the handler completes without emitting a terminal event, the library + synthesises a ``response.failed`` terminal. Sync callers receive HTTP 200 with + a "failed" response body (not HTTP 500). 
+ """ + from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + def _no_terminal_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream( + response_id=context.response_id, model=getattr(request, "model", None) + ) + yield stream.emit_created() + yield stream.emit_in_progress() + # Intentionally omit terminal event (response.completed / response.failed) + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_no_terminal_handler) + client = TestClient(_server.app) + + response = client.post( + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": False, "store": True, "background": False}, + ) + + assert response.status_code == 200, ( + f"S-021: sync no-terminal handler must return HTTP 200, got {response.status_code}" + ) + payload = response.json() + assert payload.get("status") == "failed", ( + f"S-021: synthesised terminal must set status to 'failed', got {payload.get('status')!r}" + ) + + +# ══════════════════════════════════════════════════════════ +# Phase 5 — Task 5.1: S-007 / S-008 / S-009 first-event contract tests +# ══════════════════════════════════════════════════════════ + + +def test_s007_wrong_first_event_sync() -> None: + """T1 — Handler yields response.in_progress as first event; stream=False → HTTP 500. + + S-007: The first event MUST be response.created. Violations are treated as + pre-creation errors (B8) and map to HTTP 500 in sync mode. + Uses a raw dict to bypass ResponseEventStream internal ordering validation so + the orchestrator's _check_first_event_contract is the authority under test. 
+ """ + def _wrong_first_event_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + # Raw dict bypasses ResponseEventStream validation so _check_first_event_contract runs + yield { + "type": "response.in_progress", + "payload": { + "status": "in_progress", + "object": "response", + }, + } + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_wrong_first_event_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + response = client.post( + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": False, "store": True, "background": False}, + ) + + assert response.status_code == 500, ( + f"S-007 violation in sync mode must return HTTP 500, got {response.status_code}" + ) + + +def test_s007_wrong_first_event_stream() -> None: + """T2 — Handler yields response.in_progress as first event; stream=True → SSE contains only 'error'. + + S-007: Violation → single standalone error event; no response.created in stream. + Uses a raw dict to bypass ResponseEventStream internal ordering validation. 
+ """ + def _wrong_first_event_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + yield { + "type": "response.in_progress", + "payload": { + "status": "in_progress", + "object": "response", + }, + } + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_wrong_first_event_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + import json as _json + + events: list[dict[str, Any]] = [] + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + current_type: str | None = None + current_data: str | None = None + for line in response.iter_lines(): + if not line: + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + current_type = None + current_data = None + continue + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + + event_types = [e["type"] for e in events] + assert event_types == ["error"], ( + f"S-007 violation in stream mode must produce exactly ['error'], got: {event_types}" + ) + assert "response.created" not in event_types + + +def test_s008_mismatched_id_stream() -> None: + """T3 — Handler yields response.created with wrong id; stream=True → SSE contains only 'error'. + + S-008: The id in response.created MUST equal the library-assigned response_id. 
+ """ + def _mismatched_id_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + # Emit response.created with a deliberately wrong id + yield { + "type": "response.created", + "payload": { + "id": "caresp_WRONG00000000000000000000000000000000000000000000", + "response_id": "caresp_WRONG00000000000000000000000000000000000000000000", + "status": "queued", + "object": "response", + }, + } + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_mismatched_id_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + import json as _json + + events: list[dict[str, Any]] = [] + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + current_type: str | None = None + current_data: str | None = None + for line in response.iter_lines(): + if not line: + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + current_type = None + current_data = None + continue + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + + event_types = [e["type"] for e in events] + assert event_types == ["error"], ( + f"S-008 violation must produce exactly ['error'], got: {event_types}" + ) + + +def test_s009_terminal_status_on_created_stream() -> None: + """T4 — Handler yields response.created with terminal status; stream=True → SSE contains only 'error'. + + S-009: The status in response.created MUST be non-terminal (queued or in_progress). 
+ """ + def _terminal_on_created_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + yield { + "type": "response.created", + "payload": { + "status": "completed", + "object": "response", + }, + } + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_terminal_on_created_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + import json as _json + + events: list[dict[str, Any]] = [] + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + current_type: str | None = None + current_data: str | None = None + for line in response.iter_lines(): + if not line: + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + current_type = None + current_data = None + continue + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + + event_types = [e["type"] for e in events] + assert event_types == ["error"], ( + f"S-009 violation must produce exactly ['error'], got: {event_types}" + ) + + +def test_s007_valid_handler_not_affected() -> None: + """T5 — Compliant handler emits response.created with correct id; stream=True → normal SSE flow. + + Regression: the S-007/S-008/S-009 validation must not block valid handlers. 
+ """ + from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + def _compliant_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream( + response_id=context.response_id, model=getattr(request, "model", None) + ) + yield stream.emit_created() + yield stream.emit_completed() + + return _events() + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_compliant_handler) + client = TestClient(_server.app) + + import json as _json + + events: list[dict[str, Any]] = [] + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + current_type: str | None = None + current_data: str | None = None + for line in response.iter_lines(): + if not line: + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + current_type = None + current_data = None + continue + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + if current_type is not None: + events.append({"type": current_type, "data": _json.loads(current_data) if current_data else {}}) + + event_types = [e["type"] for e in events] + assert "response.created" in event_types, ( + f"Compliant handler must not be blocked; expected response.created in: {event_types}" + ) + assert "error" not in event_types, ( + f"Compliant handler must not produce error event; got: {event_types}" + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_create_mode_matrix.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_create_mode_matrix.py new file mode 100644 index 000000000000..834f516dbc20 --- /dev/null +++ 
b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_create_mode_matrix.py @@ -0,0 +1,252 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Contract matrix tests for POST /responses store/background/stream combinations. + +These cases mirror C1-C8 in docs/api-behaviour-contract.md. +""" + +from __future__ import annotations + +import json +from typing import Any + +import pytest +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire contract matrix tests.""" + async def _events(): + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +class _CreateModeCase: + def __init__( + self, + id: str, + store: bool, + background: bool, + stream: bool, + expected_http: int, + expected_content_prefix: str, + expected_get_status: int | None = None, + ) -> None: + self.id = id + self.store = store + self.background = background + self.stream = stream + self.expected_http = expected_http + self.expected_content_prefix = expected_content_prefix + self.expected_get_status = expected_get_status + + +def _build_client() -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + return TestClient(server.app) + + +def _collect_sse_events(response: Any) -> list[dict[str, Any]]: + events: list[dict[str, Any]] = [] + current_type: str | None = None + current_data: str | None = None + + for line in response.iter_lines(): + if not line: + if current_type is not None: + payload = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + current_type = None + current_data = None + continue + + if line.startswith("event:"): + current_type = line.split(":", 
1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None: + payload = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + + return events + + +def _extract_response_id_from_sse_text(raw_text: str) -> str | None: + current_type: str | None = None + current_data: str | None = None + + for line in raw_text.splitlines(): + if not line: + if current_type is not None and current_data: + payload = json.loads(current_data) + candidate = payload.get("response", {}).get("id") + if isinstance(candidate, str) and candidate: + return candidate + current_type = None + current_data = None + continue + + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None and current_data: + payload = json.loads(current_data) + candidate = payload.get("response", {}).get("id") + if isinstance(candidate, str) and candidate: + return candidate + + return None + + +_CASES: tuple[_CreateModeCase, ...] 
= ( + _CreateModeCase( + id="C1", + store=True, + background=False, + stream=False, + expected_http=200, + expected_content_prefix="application/json", + expected_get_status=200, + ), + _CreateModeCase( + id="C2", + store=True, + background=False, + stream=True, + expected_http=200, + expected_content_prefix="text/event-stream", + expected_get_status=200, + ), + _CreateModeCase( + id="C3", + store=True, + background=True, + stream=False, + expected_http=200, + expected_content_prefix="application/json", + expected_get_status=200, + ), + _CreateModeCase( + id="C4", + store=True, + background=True, + stream=True, + expected_http=200, + expected_content_prefix="text/event-stream", + expected_get_status=200, + ), + _CreateModeCase( + id="C5", + store=False, + background=False, + stream=False, + expected_http=200, + expected_content_prefix="application/json", + expected_get_status=404, + ), + _CreateModeCase( + id="C6", + store=False, + background=False, + stream=True, + expected_http=200, + expected_content_prefix="text/event-stream", + expected_get_status=404, + ), + _CreateModeCase( + id="C7", + store=False, + background=True, + stream=False, + expected_http=400, + expected_content_prefix="application/json", + expected_get_status=None, + ), + _CreateModeCase( + id="C8", + store=False, + background=True, + stream=True, + expected_http=400, + expected_content_prefix="application/json", + expected_get_status=None, + ), +) + + +@pytest.mark.parametrize( + "case", + [ + *_CASES, + ], + ids=[case.id for case in _CASES], +) +def test_create_mode_matrix__http_and_content_type(case: _CreateModeCase) -> None: + client = _build_client() + payload = { + "model": "gpt-4o-mini", + "input": "hello", + "stream": case.stream, + "store": case.store, + "background": case.background, + } + + response = client.post("/responses", json=payload) + + assert response.status_code == case.expected_http + assert response.headers.get("content-type", "").startswith(case.expected_content_prefix) + # 
Contract: C7/C8 (store=false, background=true) → error.code="unsupported_parameter", error.param="background" + if case.id in {"C7", "C8"}: + error = response.json().get("error", {}) + assert error.get("code") == "unsupported_parameter" + assert error.get("param") == "background" + + if case.expected_http == 400: + body = response.json() + assert isinstance(body.get("error"), dict) + assert body["error"].get("type") == "invalid_request_error" + + +@pytest.mark.parametrize( + "case", + [ + case + for case in _CASES + if case.expected_http == 200 and case.expected_get_status is not None + ], + ids=[case.id for case in _CASES if case.expected_http == 200 and case.expected_get_status is not None], +) +def test_create_mode_matrix__get_visibility(case: _CreateModeCase) -> None: + client = _build_client() + payload = { + "model": "gpt-4o-mini", + "input": "hello", + "stream": case.stream, + "store": case.store, + "background": case.background, + } + + create_response = client.post("/responses", json=payload) + assert create_response.status_code == 200 + content_type = create_response.headers.get("content-type", "") + + if content_type.startswith("text/event-stream"): + response_id = _extract_response_id_from_sse_text(create_response.text) + else: + body = create_response.json() + response_id = body.get("id") + + assert isinstance(response_id, str) and response_id + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == case.expected_get_status diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cross_api_e2e.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cross_api_e2e.py new file mode 100644 index 000000000000..8257cd53cf4a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cross_api_e2e.py @@ -0,0 +1,932 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Cross-API E2E behavioural tests exercising multi-endpoint flows on a single response. + +Each test calls 2+ endpoints and asserts cross-endpoint consistency per the contract. +Validates: E1–E44 from the cross-API matrix. + +Python port of CrossApiE2eTests.cs from the .NET SDK. +""" + +from __future__ import annotations + +import asyncio +import json +import threading +from typing import Any + +import pytest +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream +from azure.ai.agentserver.responses._id_generator import IdGenerator +from tests._helpers import EventGate, poll_until + + +# ════════════════════════════════════════════════════════════ +# Shared helpers +# ════════════════════════════════════════════════════════════ + + +def _collect_sse_events(response: Any) -> list[dict[str, Any]]: + """Parse SSE lines from a streaming response into a list of event dicts.""" + events: list[dict[str, Any]] = [] + current_type: str | None = None + current_data: str | None = None + + for line in response.iter_lines(): + if not line: + if current_type is not None: + payload = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + current_type = None + current_data = None + continue + + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None: + payload = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + + return events + + +def _wait_for_terminal( + client: TestClient, + response_id: str, + *, + timeout_s: float = 5.0, +) -> dict[str, Any]: + """Poll GET until the response reaches a terminal status.""" + latest: dict[str, Any] = {} + 
terminal_statuses = {"completed", "failed", "incomplete", "cancelled"} + + def _is_terminal() -> bool: + nonlocal latest + r = client.get(f"/responses/{response_id}") + if r.status_code != 200: + return False + latest = r.json() + return latest.get("status") in terminal_statuses + + ok, failure = poll_until( + _is_terminal, + timeout_s=timeout_s, + interval_s=0.05, + context_provider=lambda: {"status": latest.get("status")}, + label=f"wait_for_terminal({response_id})", + ) + assert ok, failure + return latest + + +# ════════════════════════════════════════════════════════════ +# Handler factories +# ════════════════════════════════════════════════════════════ + + +def _noop_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler — emits no events (framework auto-completes).""" + async def _events(): + if False: # pragma: no cover + yield None + + return _events() + + +def _simple_text_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits created + completed with no output items.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_completed() + + return _events() + + +def _output_producing_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that produces a single message output item with text 'hello'.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + message = stream.add_output_item_message() + yield message.emit_added() + text = message.add_text_content() + yield text.emit_added() + yield text.emit_delta("hello") + yield text.emit_done() + yield message.emit_content_done(text) + yield message.emit_done() + yield stream.emit_completed() + + return _events() + + +def _throwing_handler(request: Any, context: Any, 
cancellation_signal: Any): + """Handler that raises after emitting created.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + raise RuntimeError("Simulated handler failure") + + return _events() + + +def _incomplete_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits an incomplete terminal event.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_incomplete(reason="max_output_tokens") + + return _events() + + +def _delayed_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that sleeps briefly, checking for cancellation.""" + async def _events(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.25) + if cancellation_signal.is_set(): + return + if False: # pragma: no cover + yield None + + return _events() + + +def _cancellable_bg_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits response.created then blocks until cancelled. + + Suitable for Phase 3 cancel tests: response_created_signal is set on the + first event, so run_background returns immediately with in_progress status + while the task continues running until cancellation. 
+ """ + async def _events(): + stream = ResponseEventStream( + response_id=context.response_id, + model=getattr(request, "model", None), + ) + yield stream.emit_created() # unblocks run_background + # Block until cancelled + while not cancellation_signal.is_set(): + await asyncio.sleep(0.01) + + return _events() + + +def _make_blocking_sync_handler( + started_gate: EventGate, release_gate: threading.Event +): + """Factory for a handler that blocks on a gate, for testing concurrent GET/Cancel on in-flight sync requests.""" + def handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + started_gate.signal(True) + while not release_gate.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + if False: # pragma: no cover + yield None + + return _events() + + return handler + + +def _make_two_item_gated_handler( + item1_emitted: EventGate, + item1_gate: threading.Event, + item2_emitted: EventGate, + item2_gate: threading.Event, +): + """Factory for a handler that emits two message output items with gates between them.""" + def handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + + # First message + msg1 = stream.add_output_item_message() + yield msg1.emit_added() + text1 = msg1.add_text_content() + yield text1.emit_added() + yield text1.emit_delta("Hello") + yield text1.emit_done() + yield msg1.emit_content_done(text1) + yield msg1.emit_done() + + item1_emitted.signal() + while not item1_gate.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + + # Second message + msg2 = stream.add_output_item_message() + yield msg2.emit_added() + text2 = msg2.add_text_content() + yield text2.emit_added() + yield text2.emit_delta("World") + yield text2.emit_done() + yield 
msg2.emit_content_done(text2) + yield msg2.emit_done() + + item2_emitted.signal() + while not item2_gate.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + + yield stream.emit_completed() + + return _events() + + return handler + + +def _build_client(handler: Any | None = None) -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler or _noop_handler) + return TestClient(server.app) + + +def _create_sync_response(client: TestClient, **extra: Any) -> str: + """POST /responses with stream=False, store=True, background=False. Returns response_id.""" + payload = {"model": "gpt-4o-mini", "input": "hello", "stream": False, "store": True, "background": False} + payload.update(extra) + r = client.post("/responses", json=payload) + assert r.status_code == 200 + return r.json()["id"] + + +def _create_streaming_response(client: TestClient, **extra: Any) -> str: + """POST /responses with stream=True. Consumes the SSE stream and returns response_id.""" + payload = {"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False} + payload.update(extra) + with client.stream("POST", "/responses", json=payload) as resp: + assert resp.status_code == 200 + events = _collect_sse_events(resp) + assert events, "Expected at least one SSE event" + return events[0]["data"]["response"]["id"] + + +def _create_bg_response(client: TestClient, **extra: Any) -> str: + """POST /responses with background=True, stream=False. Returns response_id.""" + payload = {"model": "gpt-4o-mini", "input": "hello", "stream": False, "store": True, "background": True} + payload.update(extra) + r = client.post("/responses", json=payload) + assert r.status_code == 200 + return r.json()["id"] + + +def _create_bg_streaming_response(client: TestClient, **extra: Any) -> str: + """POST /responses with background=True, stream=True. 
Consumes SSE and returns response_id.""" + payload = {"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": True} + payload.update(extra) + with client.stream("POST", "/responses", json=payload) as resp: + assert resp.status_code == 200 + events = _collect_sse_events(resp) + assert events, "Expected at least one SSE event" + return events[0]["data"]["response"]["id"] + + +# ════════════════════════════════════════════════════════════ +# C5/C6 — Ephemeral (store=false): E30–E35 +# ════════════════════════════════════════════════════════════ + + +class TestEphemeralStoreFalse: + """store=false responses are not retrievable or cancellable (B14).""" + + @pytest.mark.parametrize( + "stream, operation", + [ + (False, "GET"), # E30: C5 → GET JSON → 404 + (False, "GET_SSE"), # E31: C5 → GET SSE replay → 404 + (True, "GET"), # E33: C6 → GET JSON → 404 + (True, "GET_SSE"), # E34: C6 → GET SSE replay → 404 + ], + ids=["E30-sync-GET", "E31-sync-GET_SSE", "E33-stream-GET", "E34-stream-GET_SSE"], + ) + def test_ephemeral_store_false_cross_api_returns_404(self, stream: bool, operation: str) -> None: + """B14 — store=false responses are not retrievable.""" + handler = _simple_text_handler if stream else _noop_handler + client = _build_client(handler) + + create_payload: dict[str, Any] = { + "model": "gpt-4o-mini", + "input": "hello", + "store": False, + "stream": stream, + "background": False, + } + + if stream: + with client.stream("POST", "/responses", json=create_payload) as resp: + assert resp.status_code == 200 + events = _collect_sse_events(resp) + response_id = events[0]["data"]["response"]["id"] + else: + r = client.post("/responses", json=create_payload) + assert r.status_code == 200 + response_id = r.json()["id"] + + if operation == "GET": + result = client.get(f"/responses/{response_id}") + else: + result = client.get(f"/responses/{response_id}?stream=true") + + assert result.status_code == 404 + + @pytest.mark.parametrize( + "stream", + 
[False, True], + ids=["E32-sync-cancel", "E35-stream-cancel"], + ) + def test_ephemeral_store_false_cancel_rejected(self, stream: bool) -> None: + """B1, B14 — store=false response not bg, cancel rejected. + + The Python SDK returns 404 because store=false responses are not + persisted, so the cancel endpoint cannot find them. + """ + handler = _simple_text_handler if stream else _noop_handler + client = _build_client(handler) + + create_payload: dict[str, Any] = { + "model": "gpt-4o-mini", + "input": "hello", + "store": False, + "stream": stream, + "background": False, + } + + if stream: + with client.stream("POST", "/responses", json=create_payload) as resp: + assert resp.status_code == 200 + events = _collect_sse_events(resp) + response_id = events[0]["data"]["response"]["id"] + else: + r = client.post("/responses", json=create_payload) + assert r.status_code == 200 + response_id = r.json()["id"] + + result = client.post(f"/responses/{response_id}/cancel") + # Contract: store=false responses are never persisted → cancel always 404 + assert result.status_code == 404 + + +# ════════════════════════════════════════════════════════════ +# C1 — Synchronous, stored (store=T, bg=F, stream=F): E1–E6 +# ════════════════════════════════════════════════════════════ + + +class TestC1SyncStored: + """Synchronous non-streaming stored response cross-API tests.""" + + def test_e1_create_then_get_after_completion_returns_200_completed(self) -> None: + """B5 — JSON GET returns current snapshot; B16 — after completion, accessible.""" + client = _build_client() + response_id = _create_sync_response(client) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + payload = get_resp.json() + assert payload["status"] == "completed" + + def test_e2_create_get_during_in_flight_returns_404(self) -> None: + """B16 — non-bg in-flight → 404.""" + started_gate = EventGate() + release_gate = threading.Event() + handler = 
_make_blocking_sync_handler(started_gate, release_gate) + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + create_result: dict[str, Any] = {} + + def _do_create() -> None: + try: + create_result["response"] = client.post( + "/responses", + json={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + except Exception as exc: # pragma: no cover + create_result["error"] = exc + + t = threading.Thread(target=_do_create, daemon=True) + t.start() + + started, _ = started_gate.wait(timeout_s=5.0) + assert started, "Handler should have started" + + # GET during in-flight → 404 + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 404 + + # Release handler + release_gate.set() + t.join(timeout=5.0) + assert not t.is_alive() + + # Now GET succeeds + get_after = client.get(f"/responses/{response_id}") + assert get_after.status_code == 200 + assert get_after.json()["status"] == "completed" + + def test_e3_create_then_get_sse_replay_returns_400(self) -> None: + """B2 — SSE replay requires background.""" + client = _build_client() + response_id = _create_sync_response(client) + + get_resp = client.get(f"/responses/{response_id}?stream=true") + assert get_resp.status_code == 400 + + def test_e4_create_then_cancel_after_completion_returns_400(self) -> None: + """B1 — cancel requires background; B12 — cancel rejection.""" + client = _build_client() + response_id = _create_sync_response(client) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 400 + payload = cancel_resp.json() + assert payload["error"]["type"] == "invalid_request_error" + assert "synchronous" in payload["error"]["message"].lower() + + def test_e5_create_cancel_during_in_flight_returns_404(self) -> None: + """B1 — cancel requires background; non-bg in-flight → 404 (not yet stored, S7).""" + started_gate = EventGate() + release_gate =
threading.Event() + handler = _make_blocking_sync_handler(started_gate, release_gate) + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + def _do_create() -> None: + client.post( + "/responses", + json={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + t = threading.Thread(target=_do_create, daemon=True) + t.start() + + started, _ = started_gate.wait(timeout_s=5.0) + assert started + + # Cancel during in-flight non-bg → 404 (not yet stored, S7) + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 404, "S7: non-background in-flight cancel must return 404 (not yet stored)" + + release_gate.set() + t.join(timeout=5.0) + + def test_e6_disconnect_then_get_returns_not_found(self) -> None: + """B17 — connection termination cancels non-bg; not persisted → GET 404. + + Note: Starlette TestClient does not deterministically simulate client disconnect. + We skip this test as the Python SDK disconnect tests need a real ASGI harness. + """ + pytest.skip( + "Starlette TestClient does not deterministically surface client-disconnect " + "cancellation signals. Requires real ASGI harness." 
+ ) + + +# ════════════════════════════════════════════════════════════ +# C2 — Synchronous streaming, stored (store=T, bg=F, stream=T): E7–E12 +# ════════════════════════════════════════════════════════════ + + +class TestC2StreamStored: + """Synchronous streaming stored response cross-API tests.""" + + def test_e7_stream_create_then_get_after_stream_ends_returns_200_completed(self) -> None: + """B5 — JSON GET returns current snapshot.""" + client = _build_client(_simple_text_handler) + response_id = _create_streaming_response(client) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "completed" + + # E8 moved to test_cross_api_e2e_async.py (requires async ASGI client) + + def test_e9_stream_create_then_get_sse_replay_returns_400(self) -> None: + """B2 — SSE replay requires background.""" + client = _build_client(_simple_text_handler) + response_id = _create_streaming_response(client) + + get_resp = client.get(f"/responses/{response_id}?stream=true") + assert get_resp.status_code == 400 + + def test_e10_stream_create_then_cancel_after_stream_ends_returns_400(self) -> None: + """B1, B12 — cancel non-bg rejected.""" + client = _build_client(_simple_text_handler) + response_id = _create_streaming_response(client) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 400 + assert "synchronous" in cancel_resp.json()["error"]["message"].lower() + + # E11 moved to test_cross_api_e2e_async.py (requires async ASGI client) + + def test_e12_stream_disconnect_then_get_returns_not_found(self) -> None: + """B17 — connection termination cancels non-bg. + + Skipped: same limitation as E6. + """ + pytest.skip( + "Starlette TestClient does not deterministically surface client-disconnect " + "cancellation signals." 
+ ) + + +# ════════════════════════════════════════════════════════════ +# C3 — Background poll, stored (store=T, bg=T, stream=F): E13–E19, E36–E39 +# ════════════════════════════════════════════════════════════ + + +class TestC3BgPollStored: + """Background non-streaming stored response cross-API tests.""" + + def test_e13_bg_create_then_get_immediate_returns_queued_or_in_progress(self) -> None: + """B5, B10 — background non-streaming returns immediately. + + Note: The Starlette TestClient may process requests synchronously, + so the background execution might complete before the GET. We accept + completed as well in that case. + """ + client = _build_client(_delayed_handler) + + r = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert r.status_code == 200 + create_payload = r.json() + response_id = create_payload["id"] + # Contract (C3): background POST must return immediately; with Phase 3 the handler + # runs right away so the response may be completed in the TestClient sync context. 
+ assert create_payload["status"] in {"queued", "in_progress", "completed"} + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] in {"queued", "in_progress", "completed"} + + # Wait for terminal + _wait_for_terminal(client, response_id) + + def test_e14_bg_create_then_get_after_completion_returns_completed(self) -> None: + """B5, B10.""" + client = _build_client() + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "completed" + + def test_e15_bg_create_then_get_sse_replay_returns_400(self) -> None: + """B2 — SSE replay requires stream=true at creation.""" + client = _build_client() + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}?stream=true") + assert get_resp.status_code == 400 + + def test_e16_bg_create_cancel_then_get_returns_cancelled(self) -> None: + """B7 — cancelled status; B11 — output cleared.""" + client = _build_client(_cancellable_bg_handler) + response_id = _create_bg_response(client) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 200 + + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + snapshot = get_resp.json() + assert snapshot["status"] == "cancelled" + assert snapshot["output"] == [] + + def test_e17_bg_create_wait_complete_then_cancel_returns_400(self) -> None: + """B12 — cannot cancel a completed response.""" + client = _build_client() + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 400 + assert "Cannot cancel a completed response" in 
cancel_resp.json()["error"]["message"] + + def test_e18_bg_create_cancel_cancel_returns_200_idempotent(self) -> None: + """B3 — cancel is idempotent.""" + client = _build_client(_cancellable_bg_handler) + response_id = _create_bg_response(client) + + cancel1 = client.post(f"/responses/{response_id}/cancel") + assert cancel1.status_code == 200 + + cancel2 = client.post(f"/responses/{response_id}/cancel") + assert cancel2.status_code == 200 + + _wait_for_terminal(client, response_id) + + def test_e19_bg_create_disconnect_then_get_returns_completed(self) -> None: + """B18 — background responses unaffected by connection termination.""" + client = _build_client() + response_id = _create_bg_response(client) + # bg POST already returned — bg mode is immune to disconnect + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "completed" + + def test_e36_bg_handler_throws_then_get_returns_failed(self) -> None: + """B5, B6 — failed status invariants.""" + client = _build_client(_throwing_handler) + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + snapshot = get_resp.json() + assert snapshot["status"] == "failed" + # B6: failed → error must be non-null + error = snapshot.get("error") + assert error is not None, "B6: error must be non-null for status=failed" + assert "code" in error + assert "message" in error + + def test_e37_bg_handler_incomplete_then_get_returns_incomplete(self) -> None: + """B5, B6 — incomplete status invariants.""" + client = _build_client(_incomplete_handler) + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + snapshot = get_resp.json() + assert snapshot["status"] == "incomplete" + # B6: 
incomplete → error null + assert snapshot.get("error") is None + + def test_e38_bg_handler_throws_then_cancel_returns_400(self) -> None: + """B12 — cancel rejection on failed.""" + client = _build_client(_throwing_handler) + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 400 + assert "Cannot cancel a failed response" in cancel_resp.json()["error"]["message"] + + def test_e39_bg_handler_incomplete_then_cancel_returns_400(self) -> None: + """B12 — cancel rejection on incomplete (terminal status).""" + client = _build_client(_incomplete_handler) + response_id = _create_bg_response(client) + _wait_for_terminal(client, response_id) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 400 + + def test_e44_bg_progressive_polling_output_grows(self) -> None: + """B5, B10 — background poll shows progressive output accumulation. + + Verifies that after completion, the response contains full output. + Note: Fine-grained mid-stream gating across async/sync boundary + is unreliable with Starlette TestClient, so we verify final state. 
+ """ + client = _build_client(_output_producing_handler) + response_id = _create_bg_response(client) + terminal = _wait_for_terminal(client, response_id) + + assert terminal["status"] == "completed" + assert isinstance(terminal.get("output"), list) + assert len(terminal["output"]) >= 1 + assert terminal["output"][0]["type"] == "output_message" + assert terminal["output"][0]["content"][0]["text"] == "hello" + + +# ════════════════════════════════════════════════════════════ +# C4 — Background streaming, stored (store=T, bg=T, stream=T): E20–E29, E40–E42 +# ════════════════════════════════════════════════════════════ + + +class TestC4BgStreamStored: + """Background streaming stored response cross-API tests.""" + + def test_e21_bg_stream_create_get_after_stream_ends_returns_completed(self) -> None: + """B5.""" + client = _build_client(_simple_text_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "completed" + + def test_e22_bg_stream_completed_sse_replay_returns_all_events(self) -> None: + """B4 — SSE replay; B9 — sequence numbers; B26 — terminal event.""" + client = _build_client(_simple_text_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + with client.stream("GET", f"/responses/{response_id}?stream=true") as replay_resp: + assert replay_resp.status_code == 200 + events = _collect_sse_events(replay_resp) + + assert len(events) >= 2, "Replay should have at least 2 events" + + # B26: terminal event is response.completed + assert events[-1]["type"] == "response.completed" + + # B9: sequence numbers monotonically increasing + seq_nums = [e["data"]["sequence_number"] for e in events] + for i in range(1, len(seq_nums)): + assert seq_nums[i] > seq_nums[i - 1] + + def 
test_e23_bg_stream_sse_replay_with_starting_after_skips_events(self) -> None: + """B4 — starting_after cursor.""" + client = _build_client(_simple_text_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + # Full replay + with client.stream("GET", f"/responses/{response_id}?stream=true") as full_resp: + full_events = _collect_sse_events(full_resp) + assert len(full_events) >= 2, "Need at least 2 events for cursor test" + + first_seq = full_events[0]["data"]["sequence_number"] + + # Replay with starting_after = first seq → skips first event + with client.stream( + "GET", f"/responses/{response_id}?stream=true&starting_after={first_seq}" + ) as cursor_resp: + assert cursor_resp.status_code == 200 + cursor_events = _collect_sse_events(cursor_resp) + + assert len(cursor_events) == len(full_events) - 1 + + def test_e24_bg_stream_cancel_immediate_returns_cancelled(self) -> None: + """B7, B11 — cancel → cancelled with 0 output. + + Uses non-streaming bg path because the synchronous TestClient cannot + issue concurrent requests during an active SSE stream. The actual + bg+stream mid-stream cancel is tested in test_cross_api_e2e_async.py + (E25) using the async ASGI client. 
+ """ + client = _build_client(_cancellable_bg_handler) + response_id = _create_bg_response(client) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 200 + + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + snapshot = get_resp.json() + assert snapshot["status"] == "cancelled" + assert snapshot["output"] == [] + + def test_e27_bg_stream_completed_then_cancel_returns_400(self) -> None: + """B12 — cannot cancel completed.""" + client = _build_client(_simple_text_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 400 + assert "Cannot cancel a completed response" in cancel_resp.json()["error"]["message"] + + def test_e28_bg_stream_cancel_cancel_returns_200_idempotent(self) -> None: + """B3 — cancel is idempotent. + + Uses non-streaming bg path because the synchronous TestClient cannot + issue concurrent requests during an active SSE stream. 
+ """ + client = _build_client(_cancellable_bg_handler) + response_id = _create_bg_response(client) + + cancel1 = client.post(f"/responses/{response_id}/cancel") + assert cancel1.status_code == 200 + + cancel2 = client.post(f"/responses/{response_id}/cancel") + assert cancel2.status_code == 200 + + _wait_for_terminal(client, response_id) + + def test_e29_bg_stream_disconnect_then_get_returns_completed(self) -> None: + """B18 — background responses unaffected by connection termination.""" + client = _build_client(_simple_text_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "completed" + + def test_e40_bg_stream_handler_throws_get_and_sse_replay_returns_failed(self) -> None: + """B5, B6 — failed status invariants; B26 — terminal event.""" + client = _build_client(_throwing_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + # GET JSON → failed + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + snapshot = get_resp.json() + assert snapshot["status"] == "failed" + # B6: failed → error must be non-null + error = snapshot.get("error") + assert error is not None, "B6: error must be non-null for status=failed" + assert "code" in error + assert "message" in error + + def test_e41_bg_stream_handler_incomplete_get_and_sse_replay_returns_incomplete(self) -> None: + """B5, B6 — incomplete status invariants; B26 — terminal event.""" + client = _build_client(_incomplete_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + # GET JSON → incomplete + get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + snapshot = get_resp.json() + assert snapshot["status"] == "incomplete" + assert snapshot.get("error") is None + 
+ # SSE replay → terminal = response.incomplete + with client.stream("GET", f"/responses/{response_id}?stream=true") as replay_resp: + assert replay_resp.status_code == 200 + events = _collect_sse_events(replay_resp) + assert events[-1]["type"] == "response.incomplete" + + def test_e42_bg_stream_sse_replay_starting_after_max_returns_empty(self) -> None: + """B4 — starting_after >= max → empty stream.""" + client = _build_client(_simple_text_handler) + response_id = _create_bg_streaming_response(client) + _wait_for_terminal(client, response_id) + + # Get max sequence number from full replay + with client.stream("GET", f"/responses/{response_id}?stream=true") as full_resp: + full_events = _collect_sse_events(full_resp) + max_seq = full_events[-1]["data"]["sequence_number"] + + # Replay with starting_after = max → empty + with client.stream( + "GET", f"/responses/{response_id}?stream=true&starting_after={max_seq}" + ) as empty_resp: + assert empty_resp.status_code == 200 + empty_events = _collect_sse_events(empty_resp) + assert empty_events == [] + + def test_e26_bg_stream_cancel_then_sse_replay_has_terminal_event(self) -> None: + """B11 — cancelled terminal state after cancel. + + Uses non-streaming bg path because the synchronous TestClient cannot + issue concurrent requests during an active SSE stream; the B26 + terminal-SSE check for cancel runs in test_cross_api_e2e_async.py. + """ + client = _build_client(_cancellable_bg_handler) + response_id = _create_bg_response(client) + + cancel_resp = client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 200 + _wait_for_terminal(client, response_id) + + # After cancel, the response is cancelled.
+ get_resp = client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "cancelled" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cross_api_e2e_async.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cross_api_e2e_async.py new file mode 100644 index 000000000000..4d2af42688d7 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_cross_api_e2e_async.py @@ -0,0 +1,544 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Cross-API E2E tests requiring concurrent HTTP operations during active handlers. + +These tests use a lightweight async ASGI client that invokes the Starlette app +directly via ``await app(scope, receive, send)``, combined with +``asyncio.create_task`` for concurrency. This enables: + +* Issuing GET / Cancel requests while a streaming POST handler is still running. +* Using ``asyncio.Event`` for deterministic handler gating (same event loop). +* Pre-generating response IDs via ``IdGenerator`` to avoid parsing the SSE stream. + +Tests validate: E8, E11, E20, E25, E43 from the cross-API matrix. + +**Parallel-safety:** every test creates its own Starlette app, ASGI client, and +handler instances — fully isolated with no shared state, no port binding, and no +global singletons. Safe for ``pytest-xdist`` and any concurrent test runner. 
+""" + +from __future__ import annotations + +import asyncio +import json as _json +from typing import Any + +import pytest +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses._id_generator import IdGenerator +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + +# ════════════════════════════════════════════════════════════ +# Lightweight async ASGI test client +# ════════════════════════════════════════════════════════════ + + +class _AsgiResponse: + """Result of a non-streaming ASGI request.""" + + def __init__(self, status_code: int, body: bytes, headers: list[tuple[bytes, bytes]]) -> None: + self.status_code = status_code + self.body = body + self.headers = headers + + def json(self) -> Any: + return _json.loads(self.body) + + +class _AsyncAsgiClient: + """Minimal async ASGI client that supports concurrent in-flight requests. + + Unlike ``httpx.ASGITransport`` (which buffers the entire response body before + returning) or Starlette ``TestClient`` (synchronous), this client calls the + ASGI app directly. Combined with ``asyncio.create_task``, the test can issue + additional requests while a previous one is still being processed. + + **Thread-safety:** instances are NOT thread-safe. Each test should create + its own client via ``_build_client()``. + """ + + def __init__(self, app: Any) -> None: + self._app = app + + # ── helpers ────────────────────────────────────────────── + + @staticmethod + def _build_scope(method: str, path: str, body: bytes) -> dict[str, Any]: + headers: list[tuple[bytes, bytes]] = [] + query_string = b"" + + if "?" 
in path: + path, qs = path.split("?", 1) + query_string = qs.encode() + + if body: + headers = [ + (b"content-type", b"application/json"), + (b"content-length", str(len(body)).encode()), + ] + + return { + "type": "http", + "asgi": {"version": "3.0"}, + "http_version": "1.1", + "method": method, + "headers": headers, + "scheme": "http", + "path": path, + "raw_path": path.encode(), + "query_string": query_string, + "server": ("localhost", 80), + "client": ("127.0.0.1", 123), + "root_path": "", + } + + # ── public API ────────────────────────────────────────── + + async def request( + self, + method: str, + path: str, + *, + json_body: dict[str, Any] | None = None, + ) -> _AsgiResponse: + """Send a request and collect the full response.""" + body = _json.dumps(json_body).encode() if json_body else b"" + scope = self._build_scope(method, path, body) + + status_code: int | None = None + response_headers: list[tuple[bytes, bytes]] = [] + body_parts: list[bytes] = [] + request_sent = False + response_done = asyncio.Event() + + async def receive() -> dict[str, Any]: + nonlocal request_sent + if not request_sent: + request_sent = True + return {"type": "http.request", "body": body, "more_body": False} + await response_done.wait() + return {"type": "http.disconnect"} + + async def send(message: dict[str, Any]) -> None: + nonlocal status_code, response_headers + if message["type"] == "http.response.start": + status_code = message["status"] + response_headers = message.get("headers", []) + elif message["type"] == "http.response.body": + chunk = message.get("body", b"") + if chunk: + body_parts.append(chunk) + if not message.get("more_body", False): + response_done.set() + + await self._app(scope, receive, send) + + assert status_code is not None + return _AsgiResponse( + status_code=status_code, + body=b"".join(body_parts), + headers=response_headers, + ) + + async def get(self, path: str) -> _AsgiResponse: + return await self.request("GET", path) + + async def post( + self, 
path: str, *, json_body: dict[str, Any] | None = None + ) -> _AsgiResponse: + return await self.request("POST", path, json_body=json_body) + + +# ════════════════════════════════════════════════════════════ +# Helpers +# ════════════════════════════════════════════════════════════ + + +def _build_client(handler: Any) -> _AsyncAsgiClient: + """Create a fully isolated async ASGI client.""" + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler) + return _AsyncAsgiClient(server.app) + + +async def _ensure_task_done( + task: asyncio.Task[Any], + handler: Any, + timeout: float = 5.0, +) -> None: + """Release handler gates and await the task with a timeout.""" + # Release all asyncio.Event gates on the handler so it can exit. + for attr in vars(handler): + obj = getattr(handler, attr, None) + if isinstance(obj, asyncio.Event): + obj.set() + if not task.done(): + try: + await asyncio.wait_for(task, timeout=timeout) + except (asyncio.TimeoutError, Exception): + task.cancel() + try: + await task + except (asyncio.CancelledError, Exception): + pass + + +def _parse_sse_events(text: str) -> list[dict[str, Any]]: + """Parse SSE events from raw text.""" + events: list[dict[str, Any]] = [] + current_type: str | None = None + current_data: str | None = None + + for line in text.splitlines(): + if not line: + if current_type is not None: + payload = _json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + current_type = None + current_data = None + continue + + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None: + payload = _json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + + return events + + +# ════════════════════════════════════════════════════════════ +# Handler factories (asyncio.Event gating 
— same event loop as test) +# ════════════════════════════════════════════════════════════ + + +def _make_gated_stream_handler(): + """Factory for a handler that emits created + in_progress, then blocks until ``release`` is set.""" + started = asyncio.Event() + release = asyncio.Event() + + def handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream( + response_id=context.response_id, + model=getattr(request, "model", None), + ) + yield stream.emit_created() + yield stream.emit_in_progress() + started.set() + while not release.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + yield stream.emit_completed() + + return _events() + + handler.started = started # type: ignore[attr-defined] + handler.release = release # type: ignore[attr-defined] + return handler + + +def _make_gated_stream_handler_with_output(): + """Factory for a handler that emits created + in_progress + a partial message, then blocks.""" + started = asyncio.Event() + release = asyncio.Event() + + def handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream( + response_id=context.response_id, + model=getattr(request, "model", None), + ) + yield stream.emit_created() + yield stream.emit_in_progress() + + message = stream.add_output_item_message() + yield message.emit_added() + text = message.add_text_content() + yield text.emit_added() + yield text.emit_delta("Hello") + + started.set() + while not release.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + + yield text.emit_done() + yield message.emit_content_done(text) + yield message.emit_done() + yield stream.emit_completed() + + return _events() + + handler.started = started # type: ignore[attr-defined] + handler.release = release # type: ignore[attr-defined] + return handler + + +# ════════════════════════════════════════════════════════════ +# C2 — Sync streaming, 
stored: E8, E11 +# ════════════════════════════════════════════════════════════ + + +class TestC2StreamStoredAsync: + """Sync streaming tests requiring concurrent access during an active stream.""" + + async def test_e8_stream_get_during_stream_returns_404(self) -> None: + """B16 — non-bg in-flight → 404.""" + handler = _make_gated_stream_handler() + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + post_task = asyncio.create_task( + client.post( + "/responses", + json_body={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) + ) + try: + await asyncio.wait_for(handler.started.wait(), timeout=5.0) + + # GET during non-bg in-flight → 404 + get_resp = await client.get(f"/responses/{response_id}") + assert get_resp.status_code == 404 + + # Release handler so it can complete + handler.release.set() + post_resp = await asyncio.wait_for(post_task, timeout=5.0) + assert post_resp.status_code == 200 + finally: + await _ensure_task_done(post_task, handler) + + # After stream ends, response should be stored + get_after = await client.get(f"/responses/{response_id}") + assert get_after.status_code == 200 + assert get_after.json()["status"] == "completed" + + async def test_e11_stream_cancel_during_stream_returns_404(self) -> None: + """B1 — cancel requires background; a non-bg in-flight response is not yet stored → 404 (S7).""" + handler = _make_gated_stream_handler() + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + post_task = asyncio.create_task( + client.post( + "/responses", + json_body={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) + ) + try: + await asyncio.wait_for(handler.started.wait(), timeout=5.0) + + # Cancel non-bg in-flight → 404 (not yet stored, S7) + cancel_resp = await client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code
== 404, "S7: non-background in-flight cancel must return 404 (not yet stored)" + + handler.release.set() + await asyncio.wait_for(post_task, timeout=5.0) + finally: + await _ensure_task_done(post_task, handler) + + +# ════════════════════════════════════════════════════════════ +# C4 — Background streaming, stored: E20, E25, E43 +# +# The Python SDK now stores the execution record at response.created +# time for background+stream responses (S-035), enabling mid-stream +# GET, Cancel, and progressive-poll. +# ════════════════════════════════════════════════════════════ + + +class TestC4BgStreamStoredAsync: + """Background streaming tests requiring concurrent access during active stream.""" + + async def test_e20_bg_stream_get_during_stream_returns_in_progress(self) -> None: + """B5 — background responses accessible during in-progress.""" + handler = _make_gated_stream_handler() + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + post_task = asyncio.create_task( + client.post( + "/responses", + json_body={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) + ) + try: + await asyncio.wait_for(handler.started.wait(), timeout=5.0) + + # GET during bg in-flight → 200 with in_progress + get_resp = await client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "in_progress" + + handler.release.set() + post_resp = await asyncio.wait_for(post_task, timeout=5.0) + assert post_resp.status_code == 200 + finally: + await _ensure_task_done(post_task, handler) + + # After stream ends, response should be completed + get_after = await client.get(f"/responses/{response_id}") + assert get_after.status_code == 200 + assert get_after.json()["status"] == "completed" + + async def test_e25_bg_stream_cancel_mid_stream_returns_cancelled(self) -> None: + """B7, B11 — cancel mid-stream → cancelled with 0 output.""" 
+ handler = _make_gated_stream_handler() + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + post_task = asyncio.create_task( + client.post( + "/responses", + json_body={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) + ) + try: + await asyncio.wait_for(handler.started.wait(), timeout=5.0) + + # Cancel bg in-flight → 200 + cancel_resp = await client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 200 + snapshot = cancel_resp.json() + assert snapshot["status"] == "cancelled" + assert snapshot["output"] == [] + + await asyncio.wait_for(post_task, timeout=5.0) + finally: + await _ensure_task_done(post_task, handler) + + # GET after cancel → cancelled + get_resp = await client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + assert get_resp.json()["status"] == "cancelled" + assert get_resp.json()["output"] == [] + + async def test_e43_bg_stream_get_during_stream_returns_partial_output(self) -> None: + """B5, B23 — GET mid-stream returns partial output items.""" + handler = _make_gated_stream_handler_with_output() + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + post_task = asyncio.create_task( + client.post( + "/responses", + json_body={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) + ) + try: + await asyncio.wait_for(handler.started.wait(), timeout=5.0) + + # GET during bg in-flight → 200 with in_progress and partial output + get_resp = await client.get(f"/responses/{response_id}") + assert get_resp.status_code == 200 + body = get_resp.json() + assert body["status"] == "in_progress" + # The response should have at least one output item from the + # output_item.added event emitted before the gate. 
+ assert len(body.get("output", [])) >= 1 + + handler.release.set() + post_resp = await asyncio.wait_for(post_task, timeout=5.0) + assert post_resp.status_code == 200 + finally: + await _ensure_task_done(post_task, handler) + + # After completion, full output should be present + get_after = await client.get(f"/responses/{response_id}") + assert get_after.status_code == 200 + assert get_after.json()["status"] == "completed" + + async def test_bg_stream_cancel_terminal_sse_is_response_failed_with_cancelled(self) -> None: + """B11, B26 — cancel mid-stream → terminal SSE event is response.failed with status cancelled.""" + handler = _make_gated_stream_handler() + client = _build_client(handler) + response_id = IdGenerator.new_response_id() + + post_task = asyncio.create_task( + client.post( + "/responses", + json_body={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) + ) + try: + await asyncio.wait_for(handler.started.wait(), timeout=5.0) + + # Cancel bg in-flight → 200 + cancel_resp = await client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 200 + + post_resp = await asyncio.wait_for(post_task, timeout=5.0) + assert post_resp.status_code == 200 + + # Parse SSE events from the response body + events = _parse_sse_events(post_resp.body.decode()) + + # Find terminal events + terminal_types = {"response.completed", "response.failed", "response.incomplete"} + terminal_events = [e for e in events if e["type"] in terminal_types] + assert len(terminal_events) == 1, ( + f"Expected exactly one terminal event, got: {[e['type'] for e in terminal_events]}" + ) + + terminal = terminal_events[0] + # B26: cancelled responses emit response.failed + assert terminal["type"] == "response.failed", ( + f"Expected response.failed for cancel per B26, got: {terminal['type']}" + ) + # B11: status inside is "cancelled" + assert terminal["data"]["response"].get("status") 
== "cancelled" + # B11: output cleared + assert terminal["data"]["response"].get("output") == [] + finally: + await _ensure_task_done(post_task, handler) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_delete_endpoint.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_delete_endpoint.py new file mode 100644 index 000000000000..d416d59a97ae --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_delete_endpoint.py @@ -0,0 +1,422 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Contract tests for DELETE /responses/{response_id} endpoint behavior.""" + +from __future__ import annotations + +import asyncio +import threading +from typing import Any + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses._id_generator import IdGenerator +from tests._helpers import EventGate, poll_until + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire the hosting surface in contract tests.""" + async def _events(): + if False: # pragma: no cover - required to keep async-generator shape. + yield None + + return _events() + + +def _delayed_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that keeps background execution in-flight for deterministic delete checks.""" + async def _events(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.5) + if cancellation_signal.is_set(): + return + if False: # pragma: no cover - required to keep async-generator shape. 
+ yield None + + return _events() + + +def _build_client(handler: Any | None = None) -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler or _noop_response_handler) + return TestClient(server.app) + + +def _throwing_bg_handler(request: Any, context: Any, cancellation_signal: Any): + """Background handler that raises immediately — produces status=failed.""" + async def _events(): + raise RuntimeError("Simulated handler failure") + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +def _throwing_after_created_bg_handler(request: Any, context: Any, cancellation_signal: Any): + """Background handler that emits response.created then raises — produces status=failed. + + Phase 3: by yielding response.created first, the POST returns HTTP 200 instead of 500. + """ + async def _events(): + yield {"type": "response.created", "payload": {"status": "in_progress", "output": []}} + raise RuntimeError("Simulated handler failure") + + return _events() + + +def _cancellable_bg_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits response.created then blocks until cancelled (Phase 3).""" + async def _events(): + yield {"type": "response.created", "payload": {"status": "in_progress", "output": []}} + while not cancellation_signal.is_set(): + await asyncio.sleep(0.01) + + return _events() + + +def _incomplete_bg_handler(request: Any, context: Any, cancellation_signal: Any): + """Background handler that emits an incomplete terminal event.""" + async def _events(): + yield {"type": "response.created", "payload": {"status": "in_progress", "output": []}} + yield {"type": "response.incomplete", "payload": {"status": "incomplete", "output": []}} + + return _events() + + +def test_delete__deletes_stored_completed_response() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": 
"hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + payload = delete_response.json() + assert payload.get("id") == response_id + assert payload.get("object") == "response.deleted" + assert payload.get("deleted") is True + + +def test_delete__returns_400_for_background_in_flight_response() -> None: + client = _build_client(_cancellable_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 400 + payload = delete_response.json() + assert payload["error"].get("type") == "invalid_request_error" + assert payload["error"].get("message") == "Cannot delete an in-flight response." 
+ + +def test_delete__returns_404_for_unknown_response_id() -> None: + client = _build_client() + + delete_response = client.delete("/responses/resp_does_not_exist") + assert delete_response.status_code == 404 + payload = delete_response.json() + assert payload["error"].get("type") == "invalid_request_error" + + +def test_delete__returns_404_for_store_false_response() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": False, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 404 + payload = delete_response.json() + assert payload["error"].get("type") == "invalid_request_error" + + +def test_delete__get_returns_400_after_deletion() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 400 + payload = get_response.json() + assert payload["error"].get("type") == "invalid_request_error" + assert "deleted" in (payload["error"].get("message") or "").lower() + + +def test_delete__cancel_returns_404_after_deletion() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + delete_response = 
client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + + cancel_response = client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 404 + payload = cancel_response.json() + assert payload["error"].get("type") == "invalid_request_error" + + +def _make_blocking_sync_response_handler(started_gate: EventGate, release_gate: threading.Event): + """Factory for a handler that holds a sync request in-flight for concurrent operation tests.""" + + def _handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + started_gate.signal(True) + while not release_gate.is_set(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.01) + if False: # pragma: no cover + yield None + + return _events() + + return _handler + + +def test_delete__returns_404_for_non_bg_in_flight_response() -> None: + """FR-024 — Non-background in-flight responses are not findable → DELETE 404.""" + started_gate = EventGate() + release_gate = threading.Event() + handler = _make_blocking_sync_response_handler(started_gate, release_gate) + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler) + client = TestClient(server.app) + response_id = IdGenerator.new_response_id() + + create_result: dict[str, Any] = {} + + def _do_create() -> None: + try: + create_result["response"] = client.post( + "/responses", + json={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + except Exception as exc: # pragma: no cover + create_result["error"] = exc + + t = threading.Thread(target=_do_create, daemon=True) + t.start() + + started, _ = started_gate.wait(timeout_s=2.0) + assert started, "Expected sync create to enter handler before DELETE" + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 404 + + release_gate.set() + 
t.join(timeout=2.0) + assert not t.is_alive() + + +# ══════════════════════════════════════════════════════════ +# B-6: DELETE on terminal statuses (failed / incomplete / cancelled) +# ══════════════════════════════════════════════════════════ + + +def test_delete__deletes_stored_failed_response() -> None: + """B-6 — DELETE on a failed (terminal) stored response returns 200 with deleted=True.""" + client = _build_client(_throwing_after_created_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + ok, failure = poll_until( + lambda: client.get(f"/responses/{response_id}").json().get("status") == "failed", + timeout_s=5.0, + interval_s=0.05, + context_provider=lambda: client.get(f"/responses/{response_id}").json().get("status"), + label=f"status=failed for {response_id}", + ) + assert ok, failure + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + payload = delete_response.json() + assert payload.get("id") == response_id + assert payload.get("object") == "response.deleted" + assert payload.get("deleted") is True + + +def test_delete__deletes_stored_incomplete_response() -> None: + """B-6 — DELETE on an incomplete (terminal) stored response returns 200 with deleted=True.""" + client = _build_client(_incomplete_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + ok, failure = poll_until( + lambda: client.get(f"/responses/{response_id}").json().get("status") == "incomplete", + timeout_s=5.0, + interval_s=0.05, + context_provider=lambda: 
client.get(f"/responses/{response_id}").json().get("status"), + label=f"status=incomplete for {response_id}", + ) + assert ok, failure + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + payload = delete_response.json() + assert payload.get("id") == response_id + assert payload.get("object") == "response.deleted" + assert payload.get("deleted") is True + + +def test_delete__deletes_stored_cancelled_response() -> None: + """B-6 — DELETE on a cancelled (terminal) stored response returns 200 with deleted=True.""" + client = _build_client(_cancellable_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + cancel_response = client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 200 + + ok, failure = poll_until( + lambda: client.get(f"/responses/{response_id}").json().get("status") == "cancelled", + timeout_s=5.0, + interval_s=0.05, + context_provider=lambda: client.get(f"/responses/{response_id}").json().get("status"), + label=f"status=cancelled for {response_id}", + ) + assert ok, failure + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + payload = delete_response.json() + assert payload.get("id") == response_id + assert payload.get("object") == "response.deleted" + assert payload.get("deleted") is True + + +# ══════════════════════════════════════════════════════════ +# N-5: Second DELETE on already-deleted response → 404 +# ══════════════════════════════════════════════════════════ + + +def test_delete__second_delete_returns_404() -> None: + """FR-024 — Deletion is permanent; a second DELETE on an already-deleted ID returns 404.""" + client = _build_client() + + create_response = client.post( + 
"/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + # First DELETE – should succeed + first_delete = client.delete(f"/responses/{response_id}") + assert first_delete.status_code == 200 + + # Second DELETE – response is gone, must return 404 + second_delete = client.delete(f"/responses/{response_id}") + assert second_delete.status_code == 404, ( + "Second DELETE on an already-deleted response must return 404 (response no longer exists)" + ) + payload = second_delete.json() + assert payload["error"].get("type") == "invalid_request_error" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_get_endpoint.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_get_endpoint.py new file mode 100644 index 000000000000..7f76f2c86dc5 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_get_endpoint.py @@ -0,0 +1,459 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Contract tests for GET /responses/{response_id} endpoint behavior.""" + +from __future__ import annotations + +import json +from typing import Any + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire the hosting surface in contract tests.""" + async def _events(): + if False: # pragma: no cover - required to keep async-generator shape. 
+ yield None + + return _events() + + +def _build_client() -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + return TestClient(server.app) + + +def _collect_replay_events(response: Any) -> list[dict[str, Any]]: + events: list[dict[str, Any]] = [] + current_type: str | None = None + current_data: str | None = None + + for line in response.iter_lines(): + if not line: + if current_type is not None: + payload = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + current_type = None + current_data = None + continue + + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None: + payload = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": payload}) + + return events + + +def _create_streaming_and_get_response_id(client: TestClient) -> str: + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as create_response: + assert create_response.status_code == 200 + assert create_response.headers.get("content-type", "").startswith("text/event-stream") + events = _collect_replay_events(create_response) + + assert events, "Expected streaming create to emit at least one event" + response_id = events[0]["data"]["response"].get("id") + assert isinstance(response_id, str) + return response_id + + +def _create_background_streaming_and_get_response_id(client: TestClient) -> str: + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) as create_response: + assert create_response.status_code == 200 + assert create_response.headers.get("content-type", 
"").startswith("text/event-stream") + events = _collect_replay_events(create_response) + + assert events, "Expected background streaming create to emit at least one event" + response_id = events[0]["data"]["response"].get("id") + assert isinstance(response_id, str) + return response_id + + +def test_get__returns_latest_snapshot_for_existing_response() -> None: + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + assert payload.get("id") == response_id + assert payload.get("response_id") == response_id + assert payload.get("object") == "response" + assert isinstance(payload.get("agent_reference"), dict) + assert payload["agent_reference"].get("type") == "agent_reference" + assert payload.get("status") in {"queued", "in_progress", "completed", "failed", "incomplete", "cancelled"} + assert payload.get("model") == "gpt-4o-mini" + assert "sequence_number" not in payload + + +def test_get__returns_404_for_unknown_response_id() -> None: + client = _build_client() + + get_response = client.get("/responses/resp_does_not_exist") + assert get_response.status_code == 404 + payload = get_response.json() + assert isinstance(payload.get("error"), dict) + + +def test_get__returns_snapshot_for_stored_non_background_stream_response_after_completion() -> None: + client = _build_client() + + response_id = _create_streaming_and_get_response_id(client) + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + assert payload.get("id") == response_id + assert payload.get("status") in {"completed", "failed", "incomplete", "cancelled"} + + +def 
test_get_replay__rejects_request_when_replay_preconditions_are_not_met() -> None: + client = _build_client() + + response_id = _create_streaming_and_get_response_id(client) + + replay_response = client.get(f"/responses/{response_id}?stream=true") + assert replay_response.status_code == 400 + payload = replay_response.json() + assert isinstance(payload.get("error"), dict) + assert payload["error"].get("type") == "invalid_request_error" + assert payload["error"].get("param") == "stream" + + +def test_get_replay__rejects_invalid_starting_after_cursor_type() -> None: + client = _build_client() + + response_id = _create_background_streaming_and_get_response_id(client) + + replay_response = client.get(f"/responses/{response_id}?stream=true&starting_after=not-an-int") + assert replay_response.status_code == 400 + payload = replay_response.json() + assert payload["error"].get("type") == "invalid_request_error" + assert payload["error"].get("param") == "starting_after" + + +def test_get_replay__starting_after_returns_events_after_cursor() -> None: + client = _build_client() + + response_id = _create_background_streaming_and_get_response_id(client) + + with client.stream("GET", f"/responses/{response_id}?stream=true&starting_after=0") as replay_response: + assert replay_response.status_code == 200 + assert replay_response.headers.get("content-type", "").startswith("text/event-stream") + replay_events = _collect_replay_events(replay_response) + + assert replay_events, "Expected replay stream to include events after cursor" + sequence_numbers = [event["data"].get("sequence_number") for event in replay_events] + assert all(isinstance(sequence_number, int) for sequence_number in sequence_numbers) + assert min(sequence_numbers) > 0 + terminal_events = { + "response.completed", + "response.failed", + "response.incomplete", + } + assert any(event["type"] in terminal_events for event in replay_events) + + +def test_get_replay__rejects_bg_non_stream_response() -> None: + """B2 — SSE 
replay requires stream=true at creation. background=true, stream=false → 400.""" + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + replay_response = client.get(f"/responses/{response_id}?stream=true") + assert replay_response.status_code == 400 + payload = replay_response.json() + assert payload["error"]["type"] == "invalid_request_error" + + +# ══════════════════════════════════════════════════════════ +# B-5: SSE replay rejection message text +# ══════════════════════════════════════════════════════════ + + +def test_get_replay__rejection_message_hints_at_background_true() -> None: + """B-5 — SSE replay rejection error message contains 'background=true' hint so clients know how to fix their request.""" + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + replay_response = client.get(f"/responses/{response_id}?stream=true") + assert replay_response.status_code == 400 + payload = replay_response.json() + error_message = payload["error"].get("message", "") + assert "background=true" in error_message, ( + f"Error message should hint at 'background=true' to guide the client, but got: {error_message!r}" + ) + + +# ════════════════════════════════════════════════════════ +# N-6: GET ?stream=true SSE response headers +# ════════════════════════════════════════════════════════ + + +def test_get_replay__sse_response_headers_are_correct() -> None: + """SSE headers contract — GET ?stream=true replay must return required SSE response headers.""" + client = _build_client() + + response_id = 
_create_background_streaming_and_get_response_id(client) + + with client.stream("GET", f"/responses/{response_id}?stream=true") as replay_response: + assert replay_response.status_code == 200 + headers = replay_response.headers + + content_type = headers.get("content-type", "") + assert "text/event-stream" in content_type, ( + f"SSE replay Content-Type must be text/event-stream, got: {content_type!r}" + ) + assert headers.get("cache-control") == "no-cache", ( + f"SSE replay Cache-Control must be no-cache, got: {headers.get('cache-control')!r}" + ) + assert headers.get("connection", "").lower() == "keep-alive", ( + f"SSE replay Connection must be keep-alive, got: {headers.get('connection')!r}" + ) + assert headers.get("x-accel-buffering") == "no", ( + f"SSE replay X-Accel-Buffering must be no, got: {headers.get('x-accel-buffering')!r}" + ) + + +# ══════════════════════════════════════════════════════════ +# Task 4.2 — _finalize_bg_stream / _finalize_non_bg_stream +# ══════════════════════════════════════════════════════════ + + +def test_c2_sync_stream_stored_get_returns_200() -> None: + """T1 — store=True, bg=False, stream=True: POST then GET returns HTTP 200. + + _finalize_non_bg_stream must register a ResponseExecution so that the + subsequent GET can find the stored non-background stream response. 
+ """ + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as create_response: + assert create_response.status_code == 200 + events = _collect_replay_events(create_response) + + assert events, "Expected at least one SSE event" + response_id = events[0]["data"]["response"].get("id") + assert isinstance(response_id, str) + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200, ( + f"_finalize_non_bg_stream must persist the record so GET returns 200, got {get_response.status_code}" + ) + payload = get_response.json() + assert payload.get("status") in {"completed", "failed", "incomplete", "cancelled"}, ( + f"Non-bg stored stream must be terminal after POST completes, got status={payload.get('status')!r}" + ) + + +def test_c4_bg_stream_get_sse_replay() -> None: + """T2 — store=True, bg=True, stream=True: POST complete, then GET ?stream=true returns SSE replay. + + _finalize_bg_stream must complete the subject so that the subsequent + replay GET can iterate the historical events to completion. 
+ """ + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": True}, + ) as create_response: + assert create_response.status_code == 200 + create_events = _collect_replay_events(create_response) + + assert create_events, "Expected at least one SSE event from POST" + response_id = create_events[0]["data"]["response"].get("id") + assert isinstance(response_id, str) + + with client.stream("GET", f"/responses/{response_id}?stream=true") as replay_response: + assert replay_response.status_code == 200, ( + f"bg+stream GET ?stream=true must return 200, got {replay_response.status_code}" + ) + assert replay_response.headers.get("content-type", "").startswith("text/event-stream") + replay_events = _collect_replay_events(replay_response) + + assert replay_events, "Expected at least one event in SSE replay" + replay_types = [e["type"] for e in replay_events] + terminal_types = {"response.completed", "response.failed", "response.incomplete"} + assert any(t in terminal_types for t in replay_types), ( + f"SSE replay must include a terminal event, got: {replay_types}" + ) + # Replay must start from the beginning (response.created should be present) + assert "response.created" in replay_types, ( + f"SSE replay must include response.created, got: {replay_types}" + ) + + +def test_c6_non_stored_stream_no_get() -> None: + """T3 — store=False, bg=False, stream=True: GET returns HTTP 404. + + _finalize_non_bg_stream must NOT register the execution record when + store=False, so a subsequent GET returns 404 (B-16 / C6 contract). 
+ """ + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": False, "background": False}, + ) as create_response: + assert create_response.status_code == 200 + create_events = _collect_replay_events(create_response) + + assert create_events, "Expected at least one SSE event from POST" + response_id = create_events[0]["data"]["response"].get("id") + assert isinstance(response_id, str) + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 404, ( + f"store=False stream response must not be retrievable via GET (C6), got {get_response.status_code}" + ) + + +def test_bg_stream_cancelled_subject_completed() -> None: + """T4 — bg+stream response cancelled mid-stream: subject.complete() is called, no hang. + + _finalize_bg_stream must call subject.complete() even when the record's + status is 'cancelled', so that live replay subscribers can exit cleanly. + """ + from azure.ai.agentserver.responses._id_generator import IdGenerator + from tests._helpers import poll_until + + gate_started: list[bool] = [] + gate_proceed: list[bool] = [] + + def _blocking_bg_stream_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + yield {"type": "response.created", "payload": {"status": "in_progress", "output": []}} + gate_started.append(True) + # Block until cancelled + while not cancellation_signal.is_set(): + import asyncio as _asyncio + await _asyncio.sleep(0.01) + + return _events() + + import asyncio + import threading + + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_blocking_bg_stream_handler) + app = _server.app + + response_id = IdGenerator.new_response_id() + stream_events_received: list[str] = [] + stream_done = threading.Event() + + def _stream_thread() -> None: + from starlette.testclient import TestClient as _TC + _client = _TC(app) + with _client.stream( + "POST", + 
"/responses", + json={ + "response_id": response_id, + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) as resp: + for line in resp.iter_lines(): + stream_events_received.append(line) + stream_done.set() + + t = threading.Thread(target=_stream_thread, daemon=True) + t.start() + + # Wait for handler to start + ok, _ = poll_until( + lambda: bool(gate_started), + timeout_s=5.0, + interval_s=0.02, + label="wait for bg stream handler to start", + ) + assert ok, "Handler did not start within timeout" + + # Cancel the response + from starlette.testclient import TestClient as _TC2 + _cancel_client = _TC2(app) + cancel_resp = _cancel_client.post(f"/responses/{response_id}/cancel") + assert cancel_resp.status_code == 200 + + # The SSE stream should terminate (subject.complete() unblocks the iterator) + assert stream_done.wait(timeout=5.0), ( + "_finalize_bg_stream must call subject.complete() so SSE stream terminates after cancel" + ) + t.join(timeout=1.0) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_input_items_endpoint.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_input_items_endpoint.py new file mode 100644 index 000000000000..c62147af282a --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_input_items_endpoint.py @@ -0,0 +1,417 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Contract tests for GET /responses/{response_id}/input_items behavior.""" + +from __future__ import annotations + +from typing import Any + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire the hosting surface in contract tests.""" + async def _events(): + if False: # pragma: no cover - required to keep async-generator shape. + yield None + + return _events() + + +def _build_client() -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + return TestClient(server.app) + + +def _message_input(item_id: str, text: str) -> dict[str, Any]: + return { + "id": item_id, + "type": "message", + "role": "user", + "content": [{"type": "input_text", "text": text}], + } + + +def _create_response( + client: TestClient, + *, + input_items: list[dict[str, Any]] | None, + store: bool = True, + background: bool = False, + previous_response_id: str | None = None, +) -> str: + payload: dict[str, Any] = { + "model": "gpt-4o-mini", + "stream": False, + "store": store, + "background": background, + "input": input_items if input_items is not None else [], + } + if previous_response_id is not None: + payload["previous_response_id"] = previous_response_id + + create_response = client.post("/responses", json=payload) + assert create_response.status_code == 200 + response_id = create_response.json().get("id") + assert isinstance(response_id, str) + return response_id + + +def _assert_error_envelope(response: Any, expected_status: int) -> dict[str, Any]: + assert response.status_code == expected_status + try: + payload = response.json() + except Exception as exc: # pragma: no cover - defensive diagnostics for routing regressions. 
+ raise AssertionError( + f"Expected JSON error envelope with status {expected_status}, got non-JSON body: {response.text!r}" + ) from exc + assert isinstance(payload.get("error"), dict) + assert "message" in payload["error"] + assert "type" in payload["error"] + assert "param" in payload["error"] + assert "code" in payload["error"] + return payload + + +def test_input_items_returns_200_with_items_and_paged_fields() -> None: + client = _build_client() + + response_id = _create_response( + client, + input_items=[ + _message_input("msg_001", "one"), + _message_input("msg_002", "two"), + _message_input("msg_003", "three"), + ], + ) + + response = client.get(f"/responses/{response_id}/input_items") + assert response.status_code == 200 + payload = response.json() + + assert payload.get("object") == "list" + assert isinstance(payload.get("data"), list) + assert len(payload["data"]) == 3 + assert payload["data"][0].get("id") == "msg_003" + assert payload["data"][2].get("id") == "msg_001" + assert payload.get("first_id") == "msg_003" + assert payload.get("last_id") == "msg_001" + assert payload.get("has_more") is False + + +def test_input_items_returns_200_with_empty_data() -> None: + client = _build_client() + + response_id = _create_response(client, input_items=[]) + + response = client.get(f"/responses/{response_id}/input_items") + assert response.status_code == 200 + payload = response.json() + + assert payload.get("object") == "list" + assert payload.get("data") == [] + assert payload.get("has_more") is False + + +def test_input_items_returns_400_for_invalid_limit() -> None: + client = _build_client() + + response_id = _create_response(client, input_items=[_message_input("msg_001", "one")]) + + low_limit = client.get(f"/responses/{response_id}/input_items?limit=0") + low_payload = _assert_error_envelope(low_limit, 400) + assert low_payload["error"].get("type") == "invalid_request_error" + + high_limit = client.get(f"/responses/{response_id}/input_items?limit=101") + 
high_payload = _assert_error_envelope(high_limit, 400) + assert high_payload["error"].get("type") == "invalid_request_error" + + +def test_input_items_returns_400_for_invalid_order() -> None: + client = _build_client() + + response_id = _create_response(client, input_items=[_message_input("msg_001", "one")]) + + response = client.get(f"/responses/{response_id}/input_items?order=invalid") + payload = _assert_error_envelope(response, 400) + assert payload["error"].get("type") == "invalid_request_error" + + +def test_input_items_returns_400_for_deleted_response() -> None: + client = _build_client() + + response_id = _create_response(client, input_items=[_message_input("msg_001", "one")]) + + delete_response = client.delete(f"/responses/{response_id}") + assert delete_response.status_code == 200 + + response = client.get(f"/responses/{response_id}/input_items") + payload = _assert_error_envelope(response, 400) + assert payload["error"].get("type") == "invalid_request_error" + assert "deleted" in (payload["error"].get("message") or "").lower() + + +def test_input_items_returns_404_for_missing_or_non_stored_response() -> None: + client = _build_client() + + missing_response = client.get("/responses/resp_does_not_exist/input_items") + missing_payload = _assert_error_envelope(missing_response, 404) + assert missing_payload["error"].get("type") == "invalid_request_error" + + non_stored_id = _create_response( + client, + input_items=[_message_input("msg_001", "one")], + store=False, + ) + non_stored_response = client.get(f"/responses/{non_stored_id}/input_items") + non_stored_payload = _assert_error_envelope(non_stored_response, 404) + assert non_stored_payload["error"].get("type") == "invalid_request_error" + + +def test_input_items_default_limit_is_20_and_has_more_when_truncated() -> None: + client = _build_client() + + input_items = [_message_input(f"msg_{index:03d}", f"item-{index:03d}") for index in range(1, 26)] + response_id = _create_response(client, 
input_items=input_items) + + response = client.get(f"/responses/{response_id}/input_items") + assert response.status_code == 200 + payload = response.json() + + assert payload.get("object") == "list" + assert isinstance(payload.get("data"), list) + assert len(payload["data"]) == 20 + assert payload.get("has_more") is True + assert payload.get("first_id") == "msg_025" + assert payload.get("last_id") == "msg_006" + + +def test_input_items_supports_order_and_cursor_pagination() -> None: + client = _build_client() + + response_id = _create_response( + client, + input_items=[ + _message_input("msg_001", "one"), + _message_input("msg_002", "two"), + _message_input("msg_003", "three"), + _message_input("msg_004", "four"), + ], + ) + + asc_response = client.get(f"/responses/{response_id}/input_items?order=asc&limit=2") + assert asc_response.status_code == 200 + asc_payload = asc_response.json() + assert [item.get("id") for item in asc_payload.get("data", [])] == ["msg_001", "msg_002"] + assert asc_payload.get("first_id") == "msg_001" + assert asc_payload.get("last_id") == "msg_002" + assert asc_payload.get("has_more") is True + + after_response = client.get(f"/responses/{response_id}/input_items?order=asc&after=msg_002") + assert after_response.status_code == 200 + after_payload = after_response.json() + assert [item.get("id") for item in after_payload.get("data", [])] == ["msg_003", "msg_004"] + + before_response = client.get(f"/responses/{response_id}/input_items?order=asc&before=msg_004") + assert before_response.status_code == 200 + before_payload = before_response.json() + assert [item.get("id") for item in before_payload.get("data", [])] == ["msg_001", "msg_002", "msg_003"] + + +def test_input_items_returns_history_plus_current_input_in_desc_order() -> None: + client = _build_client() + + first_response_id = _create_response( + client, + input_items=[ + _message_input("msg_hist_001", "history-1"), + _message_input("msg_hist_002", "history-2"), + ], + ) + + 
second_response_id = _create_response( + client, + input_items=[_message_input("msg_curr_001", "current-1")], + previous_response_id=first_response_id, + ) + + response = client.get(f"/responses/{second_response_id}/input_items?order=desc") + assert response.status_code == 200 + payload = response.json() + + assert [item.get("id") for item in payload.get("data", [])] == [ + "msg_curr_001", + "msg_hist_002", + "msg_hist_001", + ] + assert payload.get("first_id") == "msg_curr_001" + assert payload.get("last_id") == "msg_hist_001" + assert payload.get("has_more") is False + + +# --------------------------------------------------------------------------- +# Task 6.1 — input_items sourced from parsed model +# --------------------------------------------------------------------------- + +def test_input_items_string_input_treated_as_empty() -> None: + """T1: string input (not a list) should produce an empty input_items list.""" + client = _build_client() + + # Send a create request where 'input' is a plain string, not a list. 
+ create_response = client.post( + "/responses", + json={"model": "gpt-4o-mini", "stream": False, "store": True, "input": "hello"}, + ) + assert create_response.status_code == 200 + response_id = create_response.json().get("id") + assert isinstance(response_id, str) + + response = client.get(f"/responses/{response_id}/input_items") + assert response.status_code == 200 + payload = response.json() + assert payload.get("data") == [] + assert payload.get("has_more") is False + + +def test_input_items_list_input_preserved() -> None: + """T2: list input items are preserved and retrievable via GET /input_items.""" + client = _build_client() + + item = {"id": "msg_x01", "type": "message", "role": "user", "content": [{"type": "input_text", "text": "hi"}]} + response_id = _create_response(client, input_items=[item]) + + response = client.get(f"/responses/{response_id}/input_items?order=asc") + assert response.status_code == 200 + payload = response.json() + assert len(payload.get("data", [])) == 1 + assert payload["data"][0].get("id") == "msg_x01" + assert payload["data"][0].get("type") == "message" + + +def test_previous_response_id_propagated() -> None: + """T3: previous_response_id is propagated so input_items chain walk works.""" + client = _build_client() + + parent_id = _create_response( + client, + input_items=[_message_input("msg_parent_001", "parent-item")], + ) + child_id = _create_response( + client, + input_items=[_message_input("msg_child_001", "child-item")], + previous_response_id=parent_id, + ) + + response = client.get(f"/responses/{child_id}/input_items?order=asc") + assert response.status_code == 200 + payload = response.json() + ids = [item.get("id") for item in payload.get("data", [])] + # Both parent and child items appear, parent first in ascending order. 
+ assert "msg_parent_001" in ids + assert "msg_child_001" in ids + assert ids.index("msg_parent_001") < ids.index("msg_child_001") + + +def test_empty_previous_response_id_handled() -> None: + """T4: an empty string for previous_response_id should not raise; treated as absent.""" + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "stream": False, + "store": True, + "input": [], + "previous_response_id": "", + }, + ) + # The server should accept the request (empty string treated as absent). + assert create_response.status_code == 200 + response_id = create_response.json().get("id") + assert isinstance(response_id, str) + + response = client.get(f"/responses/{response_id}/input_items") + assert response.status_code == 200 + + +# --------------------------------------------------------------------------- +# Task 6.2 — provider/runtime_state branch alignment + pagination edge cases +# --------------------------------------------------------------------------- + +def test_input_items_in_flight_fallback_to_runtime() -> None: + """T3: in-progress background response serves input_items from runtime_state.""" + import asyncio + + # Handler that sleeps indefinitely so the response stays in_progress + def _slow_handler(request: Any, context: Any, cancellation_signal: Any): # type: ignore[no-redef] + async def _events(): + await asyncio.sleep(60) # Keep response in-flight + if False: # pragma: no cover + yield None + + return _events() + + _server = AgentHost() + _rhandler = ResponseHandler(_server) + _rhandler.create_handler(_slow_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + item = _message_input("inflight_msg_001", "in-flight-content") + payload: Any = { + "model": "gpt-4o-mini", + "stream": False, + "store": True, + "background": True, + "input": [item], + } + create_response = client.post("/responses", json=payload) + assert create_response.status_code == 200 + response_id = 
create_response.json().get("id") + assert isinstance(response_id, str) + + # GET /input_items while the response is still in-flight (in runtime_state, not yet in provider) + items_response = client.get(f"/responses/{response_id}/input_items") + assert items_response.status_code == 200 + items_payload = items_response.json() + assert items_payload.get("object") == "list" + item_ids = [i.get("id") for i in items_payload.get("data", [])] + assert "inflight_msg_001" in item_ids + + +def test_input_items_limit_boundary_1() -> None: + """T4: limit=1 returns exactly one item.""" + client = _build_client() + + response_id = _create_response( + client, + input_items=[ + _message_input("msg_a", "a"), + _message_input("msg_b", "b"), + ], + ) + + response = client.get(f"/responses/{response_id}/input_items?limit=1") + assert response.status_code == 200 + payload = response.json() + assert len(payload.get("data", [])) == 1 + assert payload.get("has_more") is True + + +def test_input_items_limit_boundary_100() -> None: + """T5: limit=100 returns at most 100 items.""" + client = _build_client() + + input_items = [_message_input(f"msg_{i:03d}", f"item-{i}") for i in range(1, 51)] + response_id = _create_response(client, input_items=input_items) + + response = client.get(f"/responses/{response_id}/input_items?order=asc&limit=100") + assert response.status_code == 200 + payload = response.json() + assert len(payload.get("data", [])) == 50 + assert payload.get("has_more") is False diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_keep_alive.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_keep_alive.py new file mode 100644 index 000000000000..861ca7bdb090 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_keep_alive.py @@ -0,0 +1,214 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Contract tests for SSE keep-alive comment frames during streaming.""" + +from __future__ import annotations + +import asyncio +import json +from typing import Any + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses._options import ResponsesServerOptions + + +def _make_slow_handler(delay_seconds: float = 0.5, event_count: int = 2): + """Factory for a handler that yields events with a configurable delay between them.""" + + def _handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + for i in range(event_count): + if i > 0: + await asyncio.sleep(delay_seconds) + yield { + "type": "response.created" if i == 0 else "response.completed", + "payload": { + "status": "in_progress" if i == 0 else "completed", + }, + } + + return _events() + + return _handler + + +def _noop_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler producing an empty stream.""" + async def _events(): + if False: # pragma: no cover + yield None + + return _events() + + +def _build_client( + handler: Any | None = None, + *, + keep_alive_seconds: int | None = None, +) -> TestClient: + server = AgentHost() + options = ResponsesServerOptions(sse_keep_alive_interval_seconds=keep_alive_seconds) + responses = ResponseHandler(server, options=options) + responses.create_handler(handler or _noop_handler) + return TestClient(server.app) + + +def _parse_raw_lines(response: Any) -> list[str]: + """Collect all raw lines (including SSE comments) from a streaming response.""" + return list(response.iter_lines()) + + +def _collect_events_and_comments(response: Any) -> tuple[list[dict[str, Any]], list[str]]: + """Parse SSE stream into (events, comments). + + Events are objects with ``type`` and ``data`` keys. + Comments are raw lines starting with ``:``. 
+ """ + events: list[dict[str, Any]] = [] + comments: list[str] = [] + current_type: str | None = None + current_data: str | None = None + + for line in response.iter_lines(): + if not line: + if current_type is not None: + parsed: dict[str, Any] = {} + if current_data: + parsed = json.loads(current_data) + events.append({"type": current_type, "data": parsed}) + current_type = None + current_data = None + continue + + if line.startswith(":"): + comments.append(line) + elif line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None: + parsed = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": parsed}) + + return events, comments + + +def _stream_post(client: TestClient, **extra_json: Any) -> Any: + """Issue a streaming POST /responses and return the streaming context manager.""" + payload = { + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + **extra_json, + } + return client.stream("POST", "/responses", json=payload) + + +# -- Tests: keep-alive disabled (default) ----------------------------------- + + +def test_keep_alive__disabled_by_default_no_comment_frames() -> None: + """When keep-alive is not configured, no SSE comment frames should appear.""" + handler = _make_slow_handler(delay_seconds=0.3, event_count=2) + client = _build_client(handler) + + with _stream_post(client) as response: + assert response.status_code == 200 + events, comments = _collect_events_and_comments(response) + + assert len(events) >= 1 + assert len(comments) == 0, f"Expected no keep-alive comments, got: {comments}" + + +# -- Tests: keep-alive enabled ----------------------------------------------- + + +def test_keep_alive__enabled_interleaves_comment_frames_during_slow_handler() -> None: + """When keep-alive is enabled with a short interval, SSE comment frames + 
+ should appear between handler events when the handler is slow."""
+ # Handler delays 1.5s between events; keep-alive fires every 1s
+ handler = _make_slow_handler(delay_seconds=1.5, event_count=2)
+ client = _build_client(handler, keep_alive_seconds=1)
+
+ with _stream_post(client) as response:
+ assert response.status_code == 200
+ events, comments = _collect_events_and_comments(response)
+
+ # At least one keep-alive comment should have been sent during the 1.5s gap
+ assert len(comments) >= 1, (
+ f"Expected at least one keep-alive comment, got {len(comments)}. "
+ f"Events: {[e['type'] for e in events]}"
+ )
+ # All comments should be the standard keep-alive format
+ for comment in comments:
+ assert comment == ": keep-alive"
+
+
+def test_keep_alive__comment_format_is_sse_compliant() -> None:
+ """Keep-alive frames must be valid SSE comments (colon-prefixed)."""
+ handler = _make_slow_handler(delay_seconds=1.5, event_count=2)
+ client = _build_client(handler, keep_alive_seconds=1)
+
+ with _stream_post(client) as response:
+ assert response.status_code == 200
+ raw_lines = _parse_raw_lines(response)
+
+ keep_alive_lines = [line for line in raw_lines if line.startswith(": keep-alive")]
+ assert len(keep_alive_lines) >= 1
+ for line in keep_alive_lines:
+ # SSE comments start with colon; must not contain "event:" or "data:"
+ assert line.startswith(":")
+ assert "event:" not in line
+ assert "data:" not in line
+
+
+def test_keep_alive__does_not_disrupt_event_stream_integrity() -> None:
+ """Even with keep-alive enabled, all handler events should be present
+ with correct types, ordering, and monotonic sequence numbers."""
+ handler = _make_slow_handler(delay_seconds=1.5, event_count=2)
+ client = _build_client(handler, keep_alive_seconds=1)
+
+ with _stream_post(client) as response:
+ assert response.status_code == 200
+ events, comments = _collect_events_and_comments(response)
+
+ event_types = [e["type"] for e in events]
+ assert "response.created" in event_types
+ # Sequence numbers should still be strictly monotonically increasing
+ seq_nums = [e["data"].get("sequence_number") for e in events if "sequence_number" in e["data"]]
+ assert all(a < b for a, b in zip(seq_nums, seq_nums[1:]))
+
+
+def test_keep_alive__no_comments_after_stream_ends() -> None:
+ """A fast handler finishes before the keep-alive interval fires, so no keep-alive comments should appear."""
+ handler = _make_slow_handler(delay_seconds=0.0, event_count=2)
+ client = _build_client(handler, keep_alive_seconds=1)
+
+ with _stream_post(client) as response:
+ assert response.status_code == 200
+ events, comments = _collect_events_and_comments(response)
+
+ # Handler is fast (0s delay), so no keep-alive should be needed
+ # (the stream finishes before the 1s interval fires)
+ assert len(events) >= 1
+ # No comments expected since the handler is faster than the keep-alive interval
+ assert len(comments) == 0
+
+
+def test_keep_alive__fallback_stream_does_not_include_keep_alive() -> None:
+ """When the handler yields no events (empty generator → fallback stream),
+ keep-alive should not appear since the fallback stream is immediate."""
+ client = _build_client(_noop_handler, keep_alive_seconds=1)
+
+ with _stream_post(client) as response:
+ assert response.status_code == 200
+ events, comments = _collect_events_and_comments(response)
+
+ assert len(events) >= 1 # fallback auto-generates lifecycle events
+ assert len(comments) == 0
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_response_invariants.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_response_invariants.py
new file mode 100644
index 000000000000..40a0b4d09261
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_response_invariants.py
@@ -0,0 +1,729 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Contract tests for response field invariants across statuses (B6, B19, B33).""" + +from __future__ import annotations + +import asyncio +from typing import Any + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream +from tests._helpers import poll_until + + +def _noop_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler — auto-completes.""" + async def _events(): + if False: # pragma: no cover + yield None + + return _events() + + +def _throwing_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that raises after emitting created.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + raise RuntimeError("Simulated handler failure") + + return _events() + + +def _incomplete_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits an incomplete terminal event.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_incomplete(reason="max_output_tokens") + + return _events() + + +def _delayed_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that sleeps briefly, checking for cancellation.""" + async def _events(): + if cancellation_signal.is_set(): + return + await asyncio.sleep(0.25) + if cancellation_signal.is_set(): + return + if False: # pragma: no cover + yield None + + return _events() + + +def _cancellable_bg_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits response.created then blocks until cancelled (Phase 3).""" + async def _events(): + yield {"type": "response.created", "payload": 
{"status": "in_progress", "output": []}} + while not cancellation_signal.is_set(): + await asyncio.sleep(0.01) + + return _events() + + +def _build_client(handler: Any | None = None) -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler or _noop_handler) + return TestClient(server.app) + + +def _wait_for_status( + client: TestClient, + response_id: str, + expected_status: str, + *, + timeout_s: float = 5.0, +) -> None: + latest_status: str | None = None + + def _check() -> bool: + nonlocal latest_status + r = client.get(f"/responses/{response_id}") + if r.status_code != 200: + return False + latest_status = r.json().get("status") + return latest_status == expected_status + + ok, failure = poll_until( + _check, + timeout_s=timeout_s, + interval_s=0.05, + context_provider=lambda: {"status": latest_status}, + label=f"wait for {expected_status}", + ) + assert ok, failure + + +# ══════════════════════════════════════════════════════════ +# B6: completed_at invariant +# ══════════════════════════════════════════════════════════ + + +def test_completed_at__nonnull_only_for_completed_status() -> None: + """B6 — completed_at is non-null only when status is completed.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + assert payload.get("completed_at") is not None, "completed_at should be non-null for completed status" + assert isinstance(payload["completed_at"], (int, float)), "completed_at should be a Unix timestamp" + + +def test_completed_at__null_for_failed_status() -> None: + """B6 — completed_at is null when status is failed.""" + client = _build_client(_throwing_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + 
"input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + _wait_for_status(client, response_id, "failed") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + assert payload["status"] == "failed" + assert payload.get("completed_at") is None, "completed_at should be null for failed status" + + +def test_completed_at__null_for_cancelled_status() -> None: + """B6 — completed_at is null when status is cancelled.""" + client = _build_client(_cancellable_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + cancel_response = client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 200 + + _wait_for_status(client, response_id, "cancelled") + + get_response = client.get(f"/responses/{response_id}") + payload = get_response.json() + assert payload["status"] == "cancelled" + assert payload.get("completed_at") is None, "completed_at should be null for cancelled status" + + +def test_completed_at__null_for_incomplete_status() -> None: + """B6 — completed_at is null when status is incomplete.""" + client = _build_client(_incomplete_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + _wait_for_status(client, response_id, "incomplete") + + get_response = client.get(f"/responses/{response_id}") + payload = get_response.json() + assert payload["status"] == "incomplete" + assert payload.get("completed_at") is None, 
"completed_at should be null for incomplete status" + + +# ══════════════════════════════════════════════════════════ +# B19: x-platform-server header +# ══════════════════════════════════════════════════════════ + + +def test_x_platform_server_header__present_on_post_response() -> None: + """B19 — All responses include x-platform-server header.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + header = response.headers.get("x-platform-server") + assert header is not None, "x-platform-server header must be present per B19" + assert isinstance(header, str) and len(header) > 0 + + +def test_x_platform_server_header__present_on_get_response() -> None: + """B19 — x-platform-server header on GET responses.""" + client = _build_client() + + create = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + response_id = create.json()["id"] + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + header = get_response.headers.get("x-platform-server") + assert header is not None, "x-platform-server header must be present on GET per B19" + + +# ══════════════════════════════════════════════════════════ +# B33: Token usage +# ══════════════════════════════════════════════════════════ + + +def test_token_usage__structure_valid_when_present() -> None: + """B33 — Terminal events include optional usage field. 
When present, check structure.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + # B33: usage is optional. If present, verify structure. + usage = payload.get("usage") + if usage is not None: + assert isinstance(usage.get("input_tokens"), int), "input_tokens should be int" + assert isinstance(usage.get("output_tokens"), int), "output_tokens should be int" + assert isinstance(usage.get("total_tokens"), int), "total_tokens should be int" + assert usage["total_tokens"] == usage["input_tokens"] + usage["output_tokens"] + + +# ══════════════════════════════════════════════════════════ +# B-7: created_at present on every response +# ══════════════════════════════════════════════════════════ + + +def test_created_at__present_on_sync_response() -> None: + """B-7 — created_at field must be present (and numeric) on every response object.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + created_at = payload.get("created_at") + assert created_at is not None, "created_at must be present on every response" + assert isinstance(created_at, (int, float)), f"created_at must be numeric, got: {type(created_at)}" + + +def test_created_at__present_on_background_response() -> None: + """B-7 — created_at is also present when fetching a background response via GET.""" + client = _build_client() + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = 
create_response.json()["id"] + + _wait_for_status(client, response_id, "completed") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + created_at = payload.get("created_at") + assert created_at is not None, "created_at must be present on background response" + assert isinstance(created_at, (int, float)), f"created_at must be numeric, got: {type(created_at)}" + + +# ══════════════════════════════════════════════════════════ +# B-8: ResponseError shape (only code + message, no type/param) +# ══════════════════════════════════════════════════════════ + + +def test_response_error__shape_has_only_code_and_message() -> None: + """B-8 — The error field on a failed response has code and message but NOT type or param.""" + client = _build_client(_throwing_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + _wait_for_status(client, response_id, "failed") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + error = payload.get("error") + assert error is not None, "error field must be present on a failed response" + assert isinstance(error, dict) + # ResponseError shape: MUST have code and message + assert "code" in error, f"error must have 'code' field, got: {list(error.keys())}" + assert "message" in error, f"error must have 'message' field, got: {list(error.keys())}" + # ResponseError shape: must NOT have type or param (those are for request errors) + assert "type" not in error, f"error must NOT have 'type' field (that is for request errors), got: {list(error.keys())}" + assert "param" not in error, f"error must NOT have 'param' field (that is for request errors), got: {list(error.keys())}" + + 
+# ══════════════════════════════════════════════════════════ +# B-12: GET /responses/{id} returns 200 for all terminal statuses +# ══════════════════════════════════════════════════════════ + + +def test_get__returns_200_for_failed_response() -> None: + """B-12 — GET returns HTTP 200 for a response in failed status.""" + client = _build_client(_throwing_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + _wait_for_status(client, response_id, "failed") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + assert get_response.json()["status"] == "failed" + + +def test_get__returns_200_for_incomplete_response() -> None: + """B-12 — GET returns HTTP 200 for a response in incomplete status.""" + client = _build_client(_incomplete_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + _wait_for_status(client, response_id, "incomplete") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + assert get_response.json()["status"] == "incomplete" + + +def test_get__returns_200_for_cancelled_response() -> None: + """B-12 — GET returns HTTP 200 for a response in cancelled status.""" + client = _build_client(_cancellable_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + cancel_response = 
client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 200 + _wait_for_status(client, response_id, "cancelled") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + assert get_response.json()["status"] == "cancelled" + + +# ════════════════════════════════════════════════════════ +# N-8, B6: error=null for non-failed terminal statuses +# ════════════════════════════════════════════════════════ + + +def test_error_field__null_for_completed_status() -> None: + """B6 — error must be null for status=completed.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + assert payload.get("error") is None, "B6: error must be null for status=completed" + + +def test_error_field__null_for_cancelled_status() -> None: + """B6 — error must be null for status=cancelled.""" + client = _build_client(_cancellable_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + cancel_response = client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 200 + _wait_for_status(client, response_id, "cancelled") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + assert payload["status"] == "cancelled" + assert payload.get("error") is None, "B6: error must be null for status=cancelled" + + +# ════════════════════════════════════════════════════════ +# N-1, N-2, B20/B21: response_id and agent_reference on output items +# 
════════════════════════════════════════════════════════ + + +def _output_item_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits a single output message item.""" + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("hi") + yield text_content.emit_done() + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + +def test_output_item__no_response_id_on_item() -> None: + """Output items do not carry response_id — that field belongs on the Response only.""" + client = _build_client(_output_item_handler) + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + assert len(payload.get("output", [])) == 1 + item = payload["output"][0] + assert "response_id" not in item, ( + f"response_id must not appear on output items (belongs on Response only), got: {item!r}" + ) + + +def test_output_item__agent_reference_on_response_not_item() -> None: + """agent_reference from the request is present on the Response but not on individual output items.""" + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_output_item_handler) + client = TestClient(server.app) + + agent_ref = {"type": "agent_reference", "name": "my-agent", "version": "v2"} + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "agent_reference": agent_ref, + "stream": 
False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + # agent_reference is propagated to the Response + assert payload.get("agent_reference", {}).get("name") == "my-agent" + assert payload.get("agent_reference", {}).get("version") == "v2" + # agent_reference does NOT appear on individual output items + assert len(payload.get("output", [])) == 1 + item = payload["output"][0] + assert "agent_reference" not in item, ( + f"agent_reference must not appear on output items (belongs on Response only), got: {item!r}" + ) + + +# ════════════════════════════════════════════════════════ +# N-3, B19: x-platform-server on SSE streaming responses +# ════════════════════════════════════════════════════════ + + +def test_x_platform_server_header__present_on_sse_streaming_post_response() -> None: + """B19 — x-platform-server header must be present on SSE streaming POST /responses.""" + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + header = response.headers.get("x-platform-server") + + assert header is not None, "B19: x-platform-server header must be present on SSE streaming POST per B19" + assert isinstance(header, str) and len(header) > 0 + + +def test_x_platform_server_header__present_on_sse_replay_get_response() -> None: + """B19 — x-platform-server header must be present on GET ?stream=true replay.""" + import json as _json + + client = _build_client() + + # Create a background+stream response so SSE replay is available + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) as post_response: + assert post_response.status_code == 200 + first_data: str | None = 
None + for line in post_response.iter_lines(): + if line.startswith("data:"): + first_data = line.split(":", 1)[1].strip() + break + assert first_data is not None + response_id = _json.loads(first_data)["response"]["id"] + + with client.stream("GET", f"/responses/{response_id}?stream=true") as replay_response: + assert replay_response.status_code == 200 + header = replay_response.headers.get("x-platform-server") + + assert header is not None, "B19: x-platform-server header must be present on SSE replay GET per B19" + assert isinstance(header, str) and len(header) > 0 + + +# ══════════════════════════════════════════════════════════ +# B-14: x-platform-server header on 4xx error responses +# ══════════════════════════════════════════════════════════ + + +def test_x_platform_server__present_on_400_create_error() -> None: + """B-14 — x-platform-server header must be present on 4xx error responses (not just 2xx).""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": "not-a-bool", # invalid value → 400 + }, + ) + assert response.status_code == 400 + header = response.headers.get("x-platform-server") + assert header is not None, "x-platform-server header must be present on 400 error responses per B14" + + +# ══════════════════════════════════════════════════════════ +# B-15: output[] preserved for completed, cleared for cancelled +# ══════════════════════════════════════════════════════════ + + +def test_output__preserved_for_completed_response() -> None: + """B-15 — output[] is preserved (may be non-empty) for completed responses.""" + client = _build_client() + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + # output must be present as a list (may be empty for 
noop handler, but must not be absent) + assert "output" in payload, "output field must be present on completed response" + assert isinstance(payload["output"], list), f"output must be a list, got: {type(payload['output'])}" + + +def test_output__cleared_for_cancelled_response() -> None: + """B-15 — output[] is cleared (empty list) when a response is cancelled.""" + client = _build_client(_cancellable_bg_handler) + + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + cancel_response = client.post(f"/responses/{response_id}/cancel") + assert cancel_response.status_code == 200 + _wait_for_status(client, response_id, "cancelled") + + get_response = client.get(f"/responses/{response_id}") + assert get_response.status_code == 200 + payload = get_response.json() + assert payload.get("output") == [], ( + f"output must be cleared (empty []) for cancelled responses, got: {payload.get('output')}" + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_streaming_behavior.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_streaming_behavior.py new file mode 100644 index 000000000000..8fddb09fc437 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/contract/test_streaming_behavior.py @@ -0,0 +1,537 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Contract tests for SSE streaming behavior.""" + +from __future__ import annotations + +import asyncio +import json +from typing import Any + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire the hosting surface in contract tests.""" + async def _events(): + if False: # pragma: no cover - required to keep async-generator shape. + yield None + + return _events() + + +def _build_client() -> TestClient: + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + return TestClient(server.app) + + +def _throwing_before_yield_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that raises before yielding any event. + + Used to test pre-creation error handling in SSE streaming mode. + """ + async def _events(): + raise RuntimeError("Simulated pre-creation failure") + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +def _throwing_after_created_handler(request: Any, context: Any, cancellation_signal: Any): + """Handler that emits response.created then raises. + + Used to test post-creation error handling in SSE streaming mode. 
+ """ + async def _events(): + stream = ResponseEventStream( + response_id=context.response_id, model=getattr(request, "model", None) + ) + yield stream.emit_created() + raise RuntimeError("Simulated post-creation failure") + + return _events() + + +def _collect_stream_events(response: Any) -> list[dict[str, Any]]: + events: list[dict[str, Any]] = [] + current_type: str | None = None + current_data: str | None = None + + for line in response.iter_lines(): + if not line: + if current_type is not None: + parsed_data: dict[str, Any] = {} + if current_data: + parsed_data = json.loads(current_data) + events.append({"type": current_type, "data": parsed_data}) + current_type = None + current_data = None + continue + + if line.startswith("event:"): + current_type = line.split(":", 1)[1].strip() + elif line.startswith("data:"): + current_data = line.split(":", 1)[1].strip() + + if current_type is not None: + parsed_data = json.loads(current_data) if current_data else {} + events.append({"type": current_type, "data": parsed_data}) + + return events + + +def test_streaming__first_event_is_response_created() -> None: + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + assert response.headers.get("content-type", "").startswith("text/event-stream") + events = _collect_stream_events(response) + + assert events, "Expected at least one SSE event" + assert events[0]["type"] == "response.created" + # Contract (B8): response.created event status must be queued or in_progress + assert events[0]["data"]["response"].get("status") in {"queued", "in_progress"}, ( + f"response.created status must be queued or in_progress per B8, got: {events[0]['data']['response'].get('status')}" + ) + + +def test_streaming__sequence_number_is_monotonic_and_contiguous() -> None: + client = _build_client() + + 
with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + assert events, "Expected at least one SSE event" + sequence_numbers = [event["data"].get("sequence_number") for event in events] + assert all(isinstance(sequence_number, int) for sequence_number in sequence_numbers) + assert sequence_numbers == sorted(sequence_numbers) + assert sequence_numbers == list(range(len(sequence_numbers))) + + +def test_streaming__has_exactly_one_terminal_event() -> None: + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [event["type"] for event in events] + terminal_types = {"response.completed", "response.failed", "response.incomplete"} + terminal_count = sum(1 for event_type in event_types if event_type in terminal_types) + assert terminal_count == 1 + + +def test_streaming__identity_fields_are_consistent_across_events() -> None: + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + assert events, "Expected at least one SSE event" + # The first event is response.created — a lifecycle event whose data wraps the + # Response snapshot under the "response" key per the ResponseCreatedEvent contract. 
+ first_response = events[0]["data"]["response"] + response_id = first_response.get("response_id") + assert response_id == first_response.get("id") + assert isinstance(first_response.get("agent_reference"), dict) + + _LIFECYCLE_TYPES = { + "response.queued", "response.created", "response.in_progress", + "response.completed", "response.failed", "response.incomplete", + } + lifecycle_events = [e for e in events if e["type"] in _LIFECYCLE_TYPES] + for event in lifecycle_events: + response_payload = event["data"]["response"] + assert response_payload.get("response_id") == response_id + assert response_payload.get("id") == response_id + assert response_payload.get("agent_reference") == first_response.get("agent_reference") + + +def test_streaming__forwards_emitted_event_before_late_handler_failure() -> None: + def _fail_after_first_event_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + yield { + "type": "response.created", + "payload": { + "status": "in_progress", + }, + } + await asyncio.sleep(0) + raise RuntimeError("late stream failure") + + return _events() + + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_fail_after_first_event_handler) + client = TestClient(server.app) + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + assert response.headers.get("content-type", "").startswith("text/event-stream") + first_event_line = "" + for line in response.iter_lines(): + if line.startswith("event:"): + first_event_line = line + break + + assert first_event_line == "event: response.created" + + +def test_streaming__sse_response_headers_per_contract() -> None: + """SSE Response Headers: Content-Type with charset, Connection, Cache-Control, X-Accel-Buffering.""" + client = _build_client() + + with client.stream( + "POST", + 
"/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + content_type = response.headers.get("content-type", "") + assert content_type == "text/event-stream; charset=utf-8", ( + f"Expected Content-Type with charset per SSE headers contract, got: {content_type}" + ) + assert response.headers.get("connection") == "keep-alive", "Missing Connection: keep-alive" + assert response.headers.get("cache-control") == "no-cache", "Missing Cache-Control: no-cache" + assert response.headers.get("x-accel-buffering") == "no", "Missing X-Accel-Buffering: no" + list(response.iter_lines()) + + +def test_streaming__wire_format_has_no_sse_id_field() -> None: + """B27 — SSE wire format must not contain id: lines. Sequence number is in JSON payload.""" + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + raw_lines = list(response.iter_lines()) + + id_lines = [line for line in raw_lines if line.startswith("id:")] + assert id_lines == [], f"SSE stream must not contain id: lines per B27, found: {id_lines}" + + +def test_streaming__background_stream_may_include_response_queued_event() -> None: + """B8 — response.queued is optional in background mode SSE streams.""" + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": True, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + assert events, "Expected at least one SSE event" + event_types = [e["type"] for e in events] + # response.created must be first + assert event_types[0] == "response.created" + # If response.queued is present, it must 
be right after response.created + if "response.queued" in event_types: + queued_idx = event_types.index("response.queued") + assert queued_idx == 1, "response.queued should be the second event if present" + + + # ══════════════════════════════════════════════════════════ + # B-4, B-10, B-13: Handler failure and in_progress event + # ══════════════════════════════════════════════════════════ + + + def test_streaming__pre_creation_handler_failure_produces_terminal_event() -> None: + """B-4 — Handler raising before any yield in streaming mode → SSE stream terminates with a proper terminal event.""" + handler = _throwing_before_yield_handler + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler) + client = TestClient(server.app, raise_server_exceptions=False) + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [e["type"] for e in events] + # B8: pre-creation error → standalone `error` SSE event only. + # A response.created event must not precede it. 
+ assert "error" in event_types, ( + f"SSE stream must emit standalone 'error' event for pre-creation failure, got: {event_types}" + ) + assert "response.created" not in event_types, ( + f"Pre-creation error must NOT emit response.created before 'error' event, got: {event_types}" + ) + + +def test_streaming__response_in_progress_event_is_in_stream() -> None: + """B-10 — response.in_progress must appear in the SSE stream between response.created and the terminal event.""" + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [e["type"] for e in events] + assert "response.in_progress" in event_types, ( + f"Expected response.in_progress in SSE stream, got: {event_types}" + ) + created_idx = event_types.index("response.created") + in_progress_idx = event_types.index("response.in_progress") + terminal_set = {"response.completed", "response.failed", "response.incomplete"} + terminal_idx = next( + (i for i, t in enumerate(event_types) if t in terminal_set), None + ) + assert terminal_idx is not None, f"No terminal event found in: {event_types}" + assert created_idx < in_progress_idx < terminal_idx, ( + f"response.in_progress must appear after response.created and before terminal event. 
" + f"Order was: {event_types}" + ) + + +def test_streaming__post_creation_error_yields_response_failed_not_error_event() -> None: + """B-13 — Handler raising after response.created → terminal is response.failed, NOT a standalone error event.""" + handler = _throwing_after_created_handler + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(handler) + client = TestClient(server.app, raise_server_exceptions=False) + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [e["type"] for e in events] + assert "response.failed" in event_types, ( + f"Expected response.failed terminal event after post-creation error, got: {event_types}" + ) + # After response.created has been emitted, no standalone 'error' event should appear. + # The failure must be surfaced as response.failed, not a raw error event. + assert "error" not in event_types, ( + f"Standalone 'error' event must not appear after response.created. Events: {event_types}" + ) + + +# ══════════════════════════════════════════════════════════ +# Task 4.1 — _process_handler_events pipeline contract tests +# ══════════════════════════════════════════════════════════ + + +def test_stream_pre_creation_error_emits_error_event() -> None: + """T1 — Handler raises before yielding; stream=True → SSE stream contains only a standalone error event. + + B8: The standalone ``error`` event must be the only event; ``response.created`` must NOT appear. 
+ """ + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_throwing_before_yield_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [e["type"] for e in events] + assert event_types == ["error"], ( + f"Pre-creation error must produce exactly one 'error' event, got: {event_types}" + ) + assert "response.created" not in event_types + + +def test_stream_post_creation_error_emits_response_failed() -> None: + """T2 — Handler raises after response.created; stream=True → SSE ends with response.failed. + + B-13: After response.created, handler failures surface as ``response.failed``, not raw ``error``. + """ + _server = AgentHost() + _rh = ResponseHandler(_server) + _rh.create_handler(_throwing_after_created_handler) + client = TestClient(_server.app, raise_server_exceptions=False) + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [e["type"] for e in events] + assert "response.failed" in event_types, ( + f"Expected response.failed terminal after post-creation error, got: {event_types}" + ) + assert "error" not in event_types, ( + f"No standalone error event expected after response.created, got: {event_types}" + ) + # Exactly one terminal event + terminal_types = {"response.completed", "response.failed", "response.incomplete"} + assert sum(1 for t in event_types if t in terminal_types) == 1 + + +def test_stream_empty_handler_emits_full_lifecycle() -> None: + """T3 — Handler yields zero events; _process_handler_events synthesises full lifecycle. 
+ + The SSE stream must contain response.created → response.in_progress → response.completed. + """ + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + event_types = [e["type"] for e in events] + assert "response.created" in event_types, f"Missing response.created: {event_types}" + assert "response.in_progress" in event_types, f"Missing response.in_progress: {event_types}" + terminal_types = {"response.completed", "response.failed", "response.incomplete"} + assert any(t in terminal_types for t in event_types), ( + f"Missing terminal event in: {event_types}" + ) + # created must come before in_progress which must come before terminal + created_idx = event_types.index("response.created") + in_progress_idx = event_types.index("response.in_progress") + terminal_idx = next(i for i, t in enumerate(event_types) if t in terminal_types) + assert created_idx < in_progress_idx < terminal_idx, ( + f"Lifecycle order violated: {event_types}" + ) + + +def test_stream_sequence_numbers_monotonic() -> None: + """T4 — SSE events from a streaming response have strictly monotonically increasing sequence numbers starting at 0.""" + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={"model": "gpt-4o-mini", "input": "hello", "stream": True, "store": True, "background": False}, + ) as response: + assert response.status_code == 200 + events = _collect_stream_events(response) + + assert events, "Expected at least one SSE event" + sequence_numbers = [e["data"].get("sequence_number") for e in events] + assert all(isinstance(sn, int) for sn in sequence_numbers), ( + f"All events must carry an integer sequence_number, got: {sequence_numbers}" + ) + assert sequence_numbers[0] == 0, f"First sequence_number must be 0, got 
{sequence_numbers[0]}" + assert sequence_numbers == sorted(sequence_numbers), ( + f"Sequence numbers must be monotonically non-decreasing: {sequence_numbers}" + ) + assert len(set(sequence_numbers)) == len(sequence_numbers), ( + f"Sequence numbers must be unique (strictly increasing): {sequence_numbers}" + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/data/minimal_openapi.json b/sdk/agentserver/azure-ai-agentserver-responses/tests/data/minimal_openapi.json new file mode 100644 index 000000000000..f5b41faf41d2 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/data/minimal_openapi.json @@ -0,0 +1,28 @@ +{ + "paths": { + "/responses": { + "post": { + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/CreateResponse" + } + } + } + } + } + } + }, + "components": { + "schemas": { + "CreateResponse": { + "type": "object", + "required": ["model"], + "properties": { + "model": {"type": "string"} + } + } + } + } +} diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/__init__.py new file mode 100644 index 000000000000..9a0454564dbb --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/__init__.py @@ -0,0 +1,2 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/test_starlette_hosting.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/test_starlette_hosting.py new file mode 100644 index 000000000000..9bfa4c113fea --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/test_starlette_hosting.py @@ -0,0 +1,352 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Integration tests for AgentHost host registration and wiring.""" + +from __future__ import annotations + +import asyncio +import threading +from typing import Any + +import pytest + +from starlette.testclient import TestClient + +from azure.ai.agentserver.core import AgentHost +from azure.ai.agentserver.responses.hosting import ResponseHandler +from azure.ai.agentserver.responses.hosting._observability import InMemoryCreateSpanHook +from azure.ai.agentserver.responses._options import ResponsesServerOptions +from tests._helpers import EventGate + + +def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any): + """Minimal handler used to wire host integration tests.""" + async def _events(): + if False: # pragma: no cover - keep async generator shape. + yield None + + return _events() + + +def _build_client(*, prefix: str = "", options: ResponsesServerOptions | None = None) -> TestClient: + server = AgentHost() + responses = ResponseHandler(server, prefix=prefix, options=options) + responses.create_handler(_noop_response_handler) + return TestClient(server.app) + + +def test_hosting__registers_create_get_cancel_routes_under_prefix() -> None: + client = _build_client(prefix="/v1") + + create_response = client.post( + "/v1/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + + get_response = client.get(f"/v1/responses/{response_id}") + assert get_response.status_code in {200, 404} + + cancel_response = client.post(f"/v1/responses/{response_id}/cancel") + assert cancel_response.status_code in {200, 400, 404} + + +def test_hosting__options_are_applied_to_runtime_behavior() -> None: + options = ResponsesServerOptions( + additional_server_identity="integration-suite", + default_model="gpt-4o-mini", + sse_keep_alive_interval_seconds=5, + ) + client = 
_build_client(options=options) + + response = client.post( + "/responses", + json={ + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + assert "x-platform-server" in response.headers + + +def test_hosting__client_disconnect_behavior_remains_contract_compliant() -> None: + client = _build_client() + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + first_line = next(response.iter_lines(), "") + assert first_line.startswith("event:") or first_line.startswith("data:") + + # Post-disconnect visibility and state should remain contract-compliant. + # This call should not raise and should return a defined protocol outcome. + follow_up = client.get("/responses/resp_disconnect_probe") + assert follow_up.status_code in {200, 400, 404} + + +def test_hosting__create_emits_single_root_span_with_key_tags_and_identity_header() -> None: + hook = InMemoryCreateSpanHook() + options = ResponsesServerOptions( + additional_server_identity="integration-suite", + create_span_hook=hook, + ) + client = _build_client(options=options) + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + assert "x-platform-server" in response.headers + + assert len(hook.spans) == 1 + span = hook.spans[0] + assert span.name == "create_response" + assert span.error is None + assert span.ended_at is not None + assert span.tags["service.name"] == "azure-ai-agentserver-responses" + assert span.tags["gen_ai.operation.name"] == "create_response" + assert span.tags["gen_ai.system"] == "responses" + assert span.tags["gen_ai.request.model"] == "gpt-4o-mini" + assert isinstance(span.tags["gen_ai.response.id"], str) + + +def 
test_hosting__stream_mode_surfaces_handler_output_item_and_content_events() -> None: + from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + def _streaming_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("hello") + yield text_content.emit_done() + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_streaming_handler) + client = TestClient(server.app) + + with client.stream( + "POST", + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": True, + "store": True, + "background": False, + }, + ) as response: + assert response.status_code == 200 + lines = [line for line in response.iter_lines() if line] + + event_lines = [line for line in lines if line.startswith("event:")] + assert "event: response.output_item.added" in event_lines + assert "event: response.content_part.added" in event_lines + assert "event: response.output_text.delta" in event_lines + assert "event: response.output_text.done" in event_lines + assert "event: response.content_part.done" in event_lines + assert "event: response.output_item.done" in event_lines + + +def test_hosting__non_stream_mode_returns_completed_response_with_output_items() -> None: + from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream + + def _non_stream_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + stream = 
ResponseEventStream(response_id=context.response_id, model=getattr(request, "model", None)) + yield stream.emit_created() + yield stream.emit_in_progress() + + message_item = stream.add_output_item_message() + yield message_item.emit_added() + + text_content = message_item.add_text_content() + yield text_content.emit_added() + yield text_content.emit_delta("hello") + yield text_content.emit_done() + yield message_item.emit_content_done(text_content) + yield message_item.emit_done() + + yield stream.emit_completed() + + return _events() + + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_non_stream_handler) + client = TestClient(server.app) + + response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + + assert response.status_code == 200 + payload = response.json() + assert payload["status"] == "completed" + assert payload["id"].startswith("caresp_") + assert isinstance(payload.get("output"), list) + assert len(payload["output"]) == 1 + assert payload["output"][0]["type"] == "output_message" + assert payload["output"][0]["content"][0]["type"] == "output_text" + assert payload["output"][0]["content"][0]["text"] == "hello" + + +def test_hosting__health_endpoint_is_available() -> None: + """Verify AgentHost provides health endpoint automatically.""" + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + client = TestClient(server.app) + + response = client.get("/healthy") + assert response.status_code == 200 + assert response.json()["status"] == "healthy" + + +def test_hosting__multi_protocol_composition() -> None: + """Verify ResponseHandler can coexist with other protocol handlers on the same server.""" + server = AgentHost() + responses = ResponseHandler(server) + responses.create_handler(_noop_response_handler) + client = TestClient(server.app) + + # Responses 
endpoint works + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": False, + }, + ) + assert create_response.status_code == 200 + + # Health endpoint works + health_response = client.get("/healthy") + assert health_response.status_code == 200 + + +@pytest.mark.skip(reason="Shutdown handler registration under investigation after _hosting.py refactor") +def test_hosting__shutdown_signals_inflight_background_execution() -> None: + started_gate = EventGate() + cancelled_gate = EventGate() + shutdown_gate = EventGate() + + def _shutdown_aware_handler(request: Any, context: Any, cancellation_signal: Any): + async def _events(): + yield { + "type": "response.created", + "payload": { + "status": "in_progress", + "output": [], + }, + } + started_gate.signal(True) + + while True: + if context.is_shutdown_requested: + shutdown_gate.signal(True) + if cancellation_signal.is_set(): + cancelled_gate.signal(True) + return + await asyncio.sleep(0.01) + + return _events() + + server = AgentHost() + responses = ResponseHandler( + server, + options=ResponsesServerOptions(shutdown_grace_period_seconds=2), + ) + responses.create_handler(_shutdown_aware_handler) + + create_result: dict[str, Any] = {} + get_result: dict[str, Any] = {} + + with TestClient(server.app) as client: + create_response = client.post( + "/responses", + json={ + "model": "gpt-4o-mini", + "input": "hello", + "stream": False, + "store": True, + "background": True, + }, + ) + assert create_response.status_code == 200 + response_id = create_response.json()["id"] + create_result["response_id"] = response_id + + def _issue_get() -> None: + try: + get_result["response"] = client.get(f"/responses/{response_id}") + except Exception as exc: # pragma: no cover - surfaced via assertion below. 
+            get_result["error"] = exc
+
+    get_thread = threading.Thread(target=_issue_get, daemon=True)
+    get_thread.start()
+
+    started, _ = started_gate.wait(timeout_s=2.0)
+    assert started, "Expected background handler execution to start before shutdown"
+    assert client.portal is not None
+    client.portal.call(server.app.router.shutdown)
+
+    cancelled, _ = cancelled_gate.wait(timeout_s=2.0)
+    shutdown_seen, _ = shutdown_gate.wait(timeout_s=2.0)
+    assert cancelled, "Expected shutdown to trigger cancellation_signal for in-flight execution"
+    assert shutdown_seen, "Expected shutdown to set context.is_shutdown_requested"
+
+    get_thread.join(timeout=2.0)
+    assert not get_thread.is_alive(), "Expected in-flight GET request to finish after shutdown"
+    assert get_result.get("error") is None, str(get_result.get("error"))
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/test_store_lifecycle.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/test_store_lifecycle.py
new file mode 100644
index 000000000000..e68467ef038a
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/integration/test_store_lifecycle.py
@@ -0,0 +1,145 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Integration tests for store and lifecycle behavior."""
+
+from __future__ import annotations
+
+import asyncio
+from typing import Any
+
+from starlette.testclient import TestClient
+
+from tests._helpers import poll_until
+
+from azure.ai.agentserver.core import AgentHost
+from azure.ai.agentserver.responses.hosting import ResponseHandler
+
+
+def _noop_response_handler(request: Any, context: Any, cancellation_signal: Any):
+    """Minimal handler used to wire lifecycle integration tests."""
+    async def _events():
+        if False:  # pragma: no cover - keep async generator shape.
+            yield None
+
+    return _events()
+
+
+def _cancellable_bg_handler(request: Any, context: Any, cancellation_signal: Any):
+    """Handler that emits response.created then blocks until cancelled (Phase 3)."""
+    async def _events():
+        yield {"type": "response.created", "payload": {"status": "in_progress", "output": []}}
+        while not cancellation_signal.is_set():
+            await asyncio.sleep(0.01)
+
+    return _events()
+
+
+def _build_client() -> TestClient:
+    server = AgentHost()
+    responses = ResponseHandler(server)
+    responses.create_handler(_noop_response_handler)
+    return TestClient(server.app)
+
+
+def test_store_lifecycle__create_read_and_cleanup_behavior() -> None:
+    client = _build_client()
+
+    create_response = client.post(
+        "/responses",
+        json={
+            "model": "gpt-4o-mini",
+            "input": "hello",
+            "stream": False,
+            "store": True,
+            "background": False,
+        },
+    )
+    assert create_response.status_code == 200
+    response_id = create_response.json()["id"]
+
+    read_response = client.get(f"/responses/{response_id}")
+    assert read_response.status_code == 200
+
+    # Lifecycle cleanup contract: cancelling an already-completed response is
+    # deterministic: either accepted (200) or rejected as terminal (400).
+    cancel_response = client.post(f"/responses/{response_id}/cancel")
+    assert cancel_response.status_code in {200, 400}
+
+
+def test_store_lifecycle__background_completion_is_observed_deterministically() -> None:
+    client = _build_client()
+
+    create_response = client.post(
+        "/responses",
+        json={
+            "model": "gpt-4o-mini",
+            "input": "hello",
+            "stream": False,
+            "store": True,
+            "background": True,
+        },
+    )
+    assert create_response.status_code == 200
+    response_id = create_response.json()["id"]
+
+    terminal_states = {"completed", "failed", "incomplete", "cancelled"}
+    latest_status: str | None = None
+
+    def _is_terminal() -> bool:
+        nonlocal latest_status
+        get_response = client.get(f"/responses/{response_id}")
+        if get_response.status_code != 200:
+            return False
+        latest_status = get_response.json().get("status")
+        return latest_status in terminal_states
+
+    ok, failure = poll_until(
+        _is_terminal,
+        timeout_s=5.0,
+        interval_s=0.05,
+        context_provider=lambda: {"last_status": latest_status},
+        label="background completion polling",
+    )
+    assert ok, failure
+
+
+def test_store_lifecycle__background_cancel_transition_is_deterministic() -> None:
+    server = AgentHost()
+    _responses = ResponseHandler(server)
+    _responses.create_handler(_cancellable_bg_handler)
+    client = TestClient(server.app)
+
+    create_response = client.post(
+        "/responses",
+        json={
+            "model": "gpt-4o-mini",
+            "input": "hello",
+            "stream": False,
+            "store": True,
+            "background": True,
+        },
+    )
+    assert create_response.status_code == 200
+    response_id = create_response.json()["id"]
+
+    cancel_response = client.post(f"/responses/{response_id}/cancel")
+    assert cancel_response.status_code == 200
+    assert cancel_response.json().get("status") == "cancelled"
+
+    latest_status: str | None = None
+
+    def _is_cancelled() -> bool:
+        nonlocal latest_status
+        get_response = client.get(f"/responses/{response_id}")
+        if get_response.status_code != 200:
+            return False
+        latest_status = get_response.json().get("status")
+        return latest_status == "cancelled"
+
+    ok, failure = poll_until(
+        _is_cancelled,
+        timeout_s=5.0,
+        interval_s=0.05,
+        context_provider=lambda: {"last_status": latest_status},
+        label="background cancel transition polling",
+    )
+    assert ok, failure
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/__init__.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/__init__.py
new file mode 100644
index 000000000000..9a0454564dbb
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/__init__.py
@@ -0,0 +1,2 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_builders.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_builders.py
new file mode 100644
index 000000000000..1a5fb808b83b
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_builders.py
@@ -0,0 +1,296 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Phase E Part D stream builder parity tests."""
+
+from __future__ import annotations
+
+import pytest
+
+from azure.ai.agentserver.responses._id_generator import IdGenerator
+from azure.ai.agentserver.responses import (
+    OutputItemFunctionCallBuilder,
+    OutputItemFunctionCallOutputBuilder,
+    OutputItemMessageBuilder,
+    ResponseEventStream,
+    TextContentBuilder,
+)
+
+
+def test_text_content_builder__emits_added_delta_done_events() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_1")
+    stream.emit_created()
+    message = stream.add_output_item_message()
+    message.emit_added()
+    text = message.add_text_content()
+
+    added = text.emit_added()
+    delta = text.emit_delta("hello")
+    done = text.emit_done()
+
+    assert isinstance(text, TextContentBuilder)
+    assert added["type"] == "response.content_part.added"
+    assert delta["type"] == "response.output_text.delta"
+    assert done["type"] == "response.output_text.done"
+    assert done["payload"]["text"] == "hello"
+
+
+def test_text_content_builder__emit_done_merges_all_delta_fragments() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_1b")
+    stream.emit_created()
+    message = stream.add_output_item_message()
+    message.emit_added()
+    text = message.add_text_content()
+
+    text.emit_added()
+    text.emit_delta("hello")
+    text.emit_delta(" ")
+    text.emit_delta("world")
+    done = text.emit_done()
+
+    assert done["type"] == "response.output_text.done"
+    assert done["payload"]["text"] == "hello world"
+    assert text.final_text == "hello world"
+
+
+def test_output_item_message_builder__emits_added_content_done_and_done() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_2")
+    stream.emit_created()
+    message = stream.add_output_item_message()
+    text = message.add_text_content()
+
+    added = message.emit_added()
+    text.emit_added()
+    text.emit_delta("alpha")
+    text.emit_done()
+    content_done = message.emit_content_done(text)
+    done = message.emit_done()
+
+    assert isinstance(message, OutputItemMessageBuilder)
+    assert added["type"] == "response.output_item.added"
+    assert content_done["type"] == "response.content_part.done"
+    assert done["type"] == "response.output_item.done"
+    assert done["payload"]["item"]["type"] == "output_message"
+    assert done["payload"]["item"]["content"][0]["text"] == "alpha"
+
+
+def test_output_item_function_call_builder__emits_arguments_and_done_events() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_3")
+    stream.emit_created()
+    function_call = stream.add_output_item_function_call("get_weather", "call_1")
+
+    added = function_call.emit_added()
+    delta = function_call.emit_arguments_delta('{"loc')
+    args_done = function_call.emit_arguments_done('{"location": "Seattle"}')
+    done = function_call.emit_done()
+
+    assert isinstance(function_call, OutputItemFunctionCallBuilder)
+    assert added["type"] == "response.output_item.added"
+    assert delta["type"] == "response.function_call_arguments.delta"
+    assert args_done["type"] == "response.function_call_arguments.done"
+    assert done["type"] == "response.output_item.done"
+    assert done["payload"]["item"]["name"] == "get_weather"
+    assert done["payload"]["item"]["call_id"] == "call_1"
+    assert done["payload"]["item"]["arguments"] == '{"location": "Seattle"}'
+
+
+def test_output_item_function_call_output_builder__emits_added_and_done_events() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_3b")
+    stream.emit_created()
+    function_output = stream.add_output_item_function_call_output("call_1")
+
+    added = function_output.emit_added("partial")
+    done = function_output.emit_done("result")
+
+    assert isinstance(function_output, OutputItemFunctionCallOutputBuilder)
+    assert added["type"] == "response.output_item.added"
+    assert added["payload"]["item"]["type"] == "function_call_output"
+    assert added["payload"]["item"]["call_id"] == "call_1"
+    assert done["type"] == "response.output_item.done"
+    assert done["payload"]["item"]["output"] == "result"
+
+
+def test_output_item_events__item_has_no_response_id_or_agent_reference() -> None:
+    stream = ResponseEventStream(
+        response_id="resp_builder_3c",
+        agent_reference={"type": "agent_reference", "name": "agent-a"},
+    )
+    stream.emit_created()
+    function_call = stream.add_output_item_function_call("get_weather", "call_2")
+
+    added = function_call.emit_added()
+    done = function_call.emit_done()
+
+    assert "response_id" not in added["payload"]["item"]
+    assert "agent_reference" not in added["payload"]["item"]
+    assert "response_id" not in done["payload"]["item"]
+    assert "agent_reference" not in done["payload"]["item"]
+
+
+def test_stream_builders__share_global_sequence_number() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_4")
+    stream.emit_created()
+    stream.emit_in_progress()
+    message = stream.add_output_item_message()
+    event = message.emit_added()
+
+    assert event["payload"]["sequence_number"] == 2
+
+
+def test_message_builder__output_index_increments_across_factories() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_5")
+    stream.emit_created()
+    message = stream.add_output_item_message()
+    function_call = stream.add_output_item_function_call("fn", "call_1")
+    function_output = stream.add_output_item_function_call_output("call_2")
+
+    assert message.output_index == 0
+    assert function_call.output_index == 1
+    assert function_output.output_index == 2
+
+
+def test_message_builder__emit_done_requires_completed_content() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_6")
+    stream.emit_created()
+    message = stream.add_output_item_message()
+    message.emit_added()
+
+    with pytest.raises(ValueError):
+        message.emit_done()
+
+
+def test_builder_events__include_required_payload_fields_per_event_type() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_7")
+    stream.emit_created()
+
+    code_interpreter = stream.add_output_item_code_interpreter_call()
+    code_delta = code_interpreter.emit_code_delta("print('hi')")
+    code_done = code_interpreter.emit_code_done("print('hi')")
+
+    image_gen = stream.add_output_item_image_gen_call()
+    partial_image = image_gen.emit_partial_image("ZmFrZS1pbWFnZQ==")
+
+    custom_tool = stream.add_output_item_custom_tool_call("call_7", "custom")
+    input_done = custom_tool.emit_input_done('{"ok": true}')
+
+    function_call = stream.add_output_item_function_call("tool_fn", "call_fn_7")
+    args_done = function_call.emit_arguments_done('{"city": "Seattle"}')
+
+    mcp_call = stream.add_output_item_mcp_call("srv", "tool")
+    mcp_args_done = mcp_call.emit_arguments_done('{"arg": 1}')
+
+    message = stream.add_output_item_message()
+    message.emit_added()
+    refusal = message.add_refusal_content()
+    refusal.emit_added()
+    refusal.emit_done("cannot comply")
+    refusal_part_done = message.emit_content_done(refusal)
+
+    reasoning = stream.add_output_item_reasoning_item()
+    reasoning.emit_added()
+    summary = reasoning.add_summary_part()
+    summary_added = summary.emit_added()
+    summary.emit_text_done("short reason")
+    summary_done = summary.emit_done()
+    reasoning.emit_summary_part_done(summary)
+    reasoning_item_done = reasoning.emit_done()
+
+    assert code_delta["type"] == "response.code_interpreter_call_code.delta"
+    assert code_delta["payload"]["item_id"] == code_interpreter.item_id
+    assert code_delta["payload"]["delta"] == "print('hi')"
+
+    assert code_done["type"] == "response.code_interpreter_call_code.done"
+    assert code_done["payload"]["item_id"] == code_interpreter.item_id
+    assert code_done["payload"]["code"] == "print('hi')"
+
+    assert partial_image["type"] == "response.image_generation_call.partial_image"
+    assert partial_image["payload"]["item_id"] == image_gen.item_id
+    assert partial_image["payload"]["partial_image_index"] == 0
+    assert partial_image["payload"]["partial_image_b64"] == "ZmFrZS1pbWFnZQ=="
+
+    assert input_done["type"] == "response.custom_tool_call_input.done"
+    assert input_done["payload"]["item_id"] == custom_tool.item_id
+    assert input_done["payload"]["input"] == '{"ok": true}'
+
+    assert args_done["type"] == "response.function_call_arguments.done"
+    assert args_done["payload"]["item_id"] == function_call.item_id
+    assert args_done["payload"]["name"] == "tool_fn"
+    assert args_done["payload"]["arguments"] == '{"city": "Seattle"}'
+
+    assert mcp_args_done["type"] == "response.mcp_call_arguments.done"
+    assert mcp_args_done["payload"]["item_id"] == mcp_call.item_id
+    assert mcp_args_done["payload"]["arguments"] == '{"arg": 1}'
+
+    assert refusal_part_done["type"] == "response.content_part.done"
+    assert refusal_part_done["payload"]["part"]["type"] == "refusal"
+    assert refusal_part_done["payload"]["part"]["refusal"] == "cannot comply"
+
+    assert summary_added["type"] == "response.reasoning_summary_part.added"
+    assert summary_added["payload"]["part"]["type"] == "summary_text"
+    assert summary_added["payload"]["part"]["text"] == ""
+
+    assert summary_done["type"] == "response.reasoning_summary_part.done"
+    assert summary_done["payload"]["part"]["type"] == "summary_text"
+    assert summary_done["payload"]["part"]["text"] == "short reason"
+
+    assert reasoning_item_done["type"] == "response.output_item.done"
+    assert reasoning_item_done["payload"]["item"]["summary"][0]["type"] == "summary_text"
+    assert reasoning_item_done["payload"]["item"]["summary"][0]["text"] == "short reason"
+
+
+def test_stream_item_id_generation__uses_dotnet_shape_and_response_partition_key() -> None:
+    response_id = IdGenerator.new_response_id()
+    stream = ResponseEventStream(response_id=response_id)
+
+    generated_item_ids = [
+        stream.add_output_item_message().item_id,
+        stream.add_output_item_function_call("fn", "call_a").item_id,
+        stream.add_output_item_function_call_output("call_b").item_id,
+        stream.add_output_item_reasoning_item().item_id,
+        stream.add_output_item_file_search_call().item_id,
+        stream.add_output_item_web_search_call().item_id,
+        stream.add_output_item_code_interpreter_call().item_id,
+        stream.add_output_item_image_gen_call().item_id,
+        stream.add_output_item_mcp_call("srv", "tool").item_id,
+        stream.add_output_item_mcp_list_tools("srv").item_id,
+        stream.add_output_item_custom_tool_call("call_c", "custom").item_id,
+    ]
+
+    response_partition_key = IdGenerator.extract_partition_key(response_id)
+    for item_id in generated_item_ids:
+        assert IdGenerator.extract_partition_key(item_id) == response_partition_key
+        body = item_id.split("_", maxsplit=1)[1]
+        assert len(body) == 50
+
+
+def test_response_event_stream__exposes_mutable_response_snapshot_for_lifecycle_events() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_snapshot", model="gpt-4o-mini")
+    stream.response.temperature = 1
+    stream.response.metadata = {"source": "unit-test"}
+
+    created = stream.emit_created()
+
+    assert created["type"] == "response.created"
+    assert created["payload"]["id"] == "resp_builder_snapshot"
+    assert created["payload"]["model"] == "gpt-4o-mini"
+    assert created["payload"]["temperature"] == 1
+    assert created["payload"]["metadata"] == {"source": "unit-test"}
+
+
+def test_response_event_stream__tracks_completed_output_items_into_response_output() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_output")
+    stream.emit_created()
+
+    message = stream.add_output_item_message()
+    message.emit_added()
+    text = message.add_text_content()
+    text.emit_added()
+    text.emit_delta("hello")
+    text.emit_done()
+    message.emit_content_done(text)
+    done = message.emit_done()
+
+    assert done["type"] == "response.output_item.done"
+    output_item = stream.response.output[0].as_dict()
+    assert output_item["id"] == message.item_id
+    assert output_item["type"] == "output_message"
+    assert output_item["content"][0]["text"] == "hello"
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_foundry_storage_provider.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_foundry_storage_provider.py
new file mode 100644
index 000000000000..beed4242ff54
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_foundry_storage_provider.py
@@ -0,0 +1,581 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for FoundryStorageProvider — validates HTTP request construction and
+response deserialization by mocking httpx.AsyncClient responses."""
+
+from __future__ import annotations
+
+import json
+from typing import Any
+from unittest.mock import AsyncMock, MagicMock, patch
+
+import httpx
+import pytest
+
+from azure.ai.agentserver.responses.store._foundry_errors import (
+    FoundryApiError,
+    FoundryBadRequestError,
+    FoundryResourceNotFoundError,
+)
+from azure.ai.agentserver.responses.store._foundry_provider import FoundryStorageProvider
+from azure.ai.agentserver.responses.store._foundry_settings import FoundryStorageSettings
+
+# ---------------------------------------------------------------------------
+# Helpers
+# ---------------------------------------------------------------------------
+
+_BASE_URL = "https://foundry.example.com/storage/"
+_SETTINGS = FoundryStorageSettings(storage_base_url=_BASE_URL)
+
+_RESPONSE_DICT: dict[str, Any] = {
+    "id": "resp_abc123",
+    "object": "response",
+    "status": "completed",
+    "output": [],
+    "model": "gpt-4o",
+    "created_at": 1710000000,
+}
+
+_INPUT_ITEM_DICT: dict[str, Any] = {
+    "id": "item_001",
+    "type": "message",
+    "role": "user",
+    "content": [{"type": "input_text", "text": "hello"}],
+}
+
+_OUTPUT_ITEM_DICT: dict[str, Any] = {
+    "id": "item_out_001",
+    "type": "message",
+    "role": "assistant",
+    "status": "completed",
+    "content": [{"type": "output_text", "text": "hi", "annotations": []}],
+}
+
+
+def _make_credential(token: str = "tok_test") -> Any:
+    """Return a mock async credential that always yields *token*."""
+    token_obj = MagicMock()
+    token_obj.token = token
+    cred = AsyncMock()
+    cred.get_token = AsyncMock(return_value=token_obj)
+    return cred
+
+
+def _make_response(status_code: int, body: Any) -> httpx.Response:
+    """Build a real :class:`httpx.Response` with the given *status_code* and JSON *body*."""
+    content = json.dumps(body).encode("utf-8")
+    return httpx.Response(status_code=status_code, content=content)
+
+
+def _make_client(response: httpx.Response) -> MagicMock:
+    """Return an httpx.AsyncClient mock whose HTTP methods always return *response*."""
+    client = AsyncMock(spec=httpx.AsyncClient)
+    client.post = AsyncMock(return_value=response)
+    client.get = AsyncMock(return_value=response)
+    client.delete = AsyncMock(return_value=response)
+    return client
+
+
+# ---------------------------------------------------------------------------
+# Fixtures
+# ---------------------------------------------------------------------------
+
+@pytest.fixture()
+def credential() -> Any:
+    return _make_credential()
+
+
+@pytest.fixture()
+def settings() -> FoundryStorageSettings:
+    return _SETTINGS
+
+
+# ===========================================================================
+# create_response_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_create_response_async__posts_to_responses_endpoint(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    response = Response(_RESPONSE_DICT)
+    await provider.create_response_async(response, None, None)
+
+    call_args = client.post.call_args
+    called_url: str = call_args[0][0]
+    assert called_url.startswith(f"{_BASE_URL}responses")
+    assert "api-version=v1" in called_url
+
+
+@pytest.mark.asyncio
+async def test_create_response_async__sends_correct_envelope(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    response = Response(_RESPONSE_DICT)
+    await provider.create_response_async(response, [MagicMock(as_dict=lambda: _INPUT_ITEM_DICT)], ["prev_item_1"])
+
+    body_bytes: bytes = client.post.call_args.kwargs["content"]
+    payload = json.loads(body_bytes.decode("utf-8"))
+    assert payload["response"]["id"] == "resp_abc123"
+    assert len(payload["input_items"]) == 1
+    assert payload["history_item_ids"] == ["prev_item_1"]
+
+
+@pytest.mark.asyncio
+async def test_create_response_async__sends_bearer_token(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    await provider.create_response_async(Response(_RESPONSE_DICT), None, None)
+
+    headers: dict[str, str] = client.post.call_args.kwargs["headers"]
+    assert headers["Authorization"] == "Bearer tok_test"
+    assert headers["Content-Type"] == "application/json"
+
+
+@pytest.mark.asyncio
+async def test_create_response_async__raises_foundry_api_error_on_500(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(500, {"error": {"message": "server fault"}}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    with pytest.raises(FoundryApiError) as exc_info:
+        await provider.create_response_async(Response(_RESPONSE_DICT), None, None)
+
+    assert exc_info.value.status_code == 500
+    assert "server fault" in exc_info.value.message
+
+
+# ===========================================================================
+# get_response_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_get_response_async__gets_correct_url(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, _RESPONSE_DICT))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_response_async("resp_abc123")
+
+    called_url: str = client.get.call_args[0][0]
+    assert "responses/resp_abc123" in called_url
+    assert "api-version=v1" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_response_async__returns_deserialized_response(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, _RESPONSE_DICT))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    result = await provider.get_response_async("resp_abc123")
+
+    assert result["id"] == "resp_abc123"
+    assert result["status"] == "completed"
+
+
+@pytest.mark.asyncio
+async def test_get_response_async__raises_not_found_on_404(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(404, {"error": {"message": "not found"}}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    with pytest.raises(FoundryResourceNotFoundError) as exc_info:
+        await provider.get_response_async("missing_id")
+
+    assert "not found" in exc_info.value.message
+
+
+@pytest.mark.asyncio
+async def test_get_response_async__url_encodes_special_characters(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, _RESPONSE_DICT))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_response_async("id with spaces/slash")
+
+    called_url: str = client.get.call_args[0][0]
+    assert " " not in called_url
+    assert "id%20with%20spaces%2Fslash" in called_url
+
+
+# ===========================================================================
+# update_response_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_update_response_async__posts_to_response_id_url(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    response = Response(_RESPONSE_DICT)
+    await provider.update_response_async(response)
+
+    called_url: str = client.post.call_args[0][0]
+    assert "responses/resp_abc123" in called_url
+
+
+@pytest.mark.asyncio
+async def test_update_response_async__sends_serialized_response_body(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    response = Response(_RESPONSE_DICT)
+    await provider.update_response_async(response)
+
+    body_bytes: bytes = client.post.call_args.kwargs["content"]
+    payload = json.loads(body_bytes.decode("utf-8"))
+    assert payload["id"] == "resp_abc123"
+
+
+@pytest.mark.asyncio
+async def test_update_response_async__raises_bad_request_on_409(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(409, {"error": {"message": "conflict"}}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    with pytest.raises(FoundryBadRequestError) as exc_info:
+        await provider.update_response_async(Response(_RESPONSE_DICT))
+
+    assert "conflict" in exc_info.value.message
+
+
+# ===========================================================================
+# delete_response_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_delete_response_async__sends_delete_to_response_url(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.delete_response_async("resp_abc123")
+
+    called_url: str = client.delete.call_args[0][0]
+    assert "responses/resp_abc123" in called_url
+    assert "api-version=v1" in called_url
+
+
+@pytest.mark.asyncio
+async def test_delete_response_async__raises_not_found_on_404(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(404, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    with pytest.raises(FoundryResourceNotFoundError):
+        await provider.delete_response_async("ghost_id")
+
+
+# ===========================================================================
+# get_input_items_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_get_input_items_async__default_params_in_url(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {"data": [_OUTPUT_ITEM_DICT], "object": "list"}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_input_items_async("resp_abc123")
+
+    called_url: str = client.get.call_args[0][0]
+    assert "responses/resp_abc123/input_items" in called_url
+    assert "limit=20" in called_url
+    assert "order=desc" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_input_items_async__ascending_sets_order_asc(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {"data": []}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_input_items_async("resp_abc123", ascending=True, limit=5)
+
+    called_url: str = client.get.call_args[0][0]
+    assert "order=asc" in called_url
+    assert "limit=5" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_input_items_async__cursor_params_appended(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {"data": []}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_input_items_async("resp_abc123", after="item_cursor_1", before="item_cursor_2")
+
+    called_url: str = client.get.call_args[0][0]
+    assert "after=item_cursor_1" in called_url
+    assert "before=item_cursor_2" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_input_items_async__returns_deserialized_items(credential: Any, settings: FoundryStorageSettings) -> None:
+    paged_body = {"data": [_OUTPUT_ITEM_DICT], "object": "list"}
+    client = _make_client(_make_response(200, paged_body))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    items = await provider.get_input_items_async("resp_abc123")
+
+    assert len(items) == 1
+    assert items[0]["id"] == "item_out_001"
+    assert items[0]["type"] == "message"
+
+
+@pytest.mark.asyncio
+async def test_get_input_items_async__empty_data_returns_empty_list(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {"data": [], "object": "list"}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    items = await provider.get_input_items_async("resp_abc123")
+
+    assert items == []
+
+
+@pytest.mark.asyncio
+async def test_get_input_items_async__cursor_params_omitted_when_none(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, {"data": []}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_input_items_async("resp_abc123", after=None, before=None)
+
+    called_url: str = client.get.call_args[0][0]
+    assert "after=" not in called_url
+    assert "before=" not in called_url
+
+
+# ===========================================================================
+# get_items_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_get_items_async__posts_to_batch_retrieve_endpoint(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, [_OUTPUT_ITEM_DICT, None]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_items_async(["item_out_001", "missing_id"])
+
+    called_url: str = client.post.call_args[0][0]
+    assert "items/batch/retrieve" in called_url
+    assert "api-version=v1" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_items_async__sends_item_ids_in_body(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, [_OUTPUT_ITEM_DICT]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_items_async(["item_out_001"])
+
+    body_bytes: bytes = client.post.call_args.kwargs["content"]
+    payload = json.loads(body_bytes.decode("utf-8"))
+    assert payload["item_ids"] == ["item_out_001"]
+
+
+@pytest.mark.asyncio
+async def test_get_items_async__returns_none_for_missing_items(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, [_OUTPUT_ITEM_DICT, None]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    items = await provider.get_items_async(["item_out_001", "missing_id"])
+
+    assert len(items) == 2
+    assert items[0]["id"] == "item_out_001"
+    assert items[1] is None
+
+
+@pytest.mark.asyncio
+async def test_get_items_async__preserves_input_order(credential: Any, settings: FoundryStorageSettings) -> None:
+    item_a = {**_OUTPUT_ITEM_DICT, "id": "item_a"}
+    item_b = {**_OUTPUT_ITEM_DICT, "id": "item_b"}
+    client = _make_client(_make_response(200, [item_b, item_a]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    items = await provider.get_items_async(["id_b", "id_a"])
+
+    assert items[0]["id"] == "item_b"
+    assert items[1]["id"] == "item_a"
+
+
+# ===========================================================================
+# get_history_item_ids_async
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_get_history_item_ids_async__gets_to_history_endpoint(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, ["item_h1", "item_h2"]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_history_item_ids_async(None, None, limit=10)
+
+    called_url: str = client.get.call_args[0][0]
+    assert "history/item_ids" in called_url
+    assert "api-version=v1" in called_url
+    assert "limit=10" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_history_item_ids_async__returns_list_of_strings(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, ["item_h1", "item_h2"]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    ids = await provider.get_history_item_ids_async(None, None, limit=10)
+
+    assert ids == ["item_h1", "item_h2"]
+
+
+@pytest.mark.asyncio
+async def test_get_history_item_ids_async__appends_previous_response_id(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, ["item_h1"]))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_history_item_ids_async("prev_resp_99", None, limit=5)
+
+    called_url: str = client.get.call_args[0][0]
+    assert "previous_response_id=prev_resp_99" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_history_item_ids_async__appends_conversation_id(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, []))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_history_item_ids_async(None, "conv_42", limit=3)
+
+    called_url: str = client.get.call_args[0][0]
+    assert "conversation_id=conv_42" in called_url
+
+
+@pytest.mark.asyncio
+async def test_get_history_item_ids_async__omits_optional_params_when_none(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(200, []))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    await provider.get_history_item_ids_async(None, None, limit=10)
+
+    called_url: str = client.get.call_args[0][0]
+    assert "previous_response_id" not in called_url
+    assert "conversation_id" not in called_url
+
+
+# ===========================================================================
+# Error mapping
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_error_mapping__400_raises_bad_request(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(400, {"error": {"message": "invalid input"}}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    with pytest.raises(FoundryBadRequestError) as exc_info:
+        await provider.get_response_async("any_id")
+
+    assert "invalid input" in exc_info.value.message
+
+
+@pytest.mark.asyncio
+async def test_error_mapping__generic_status_raises_foundry_api_error(credential: Any, settings: FoundryStorageSettings) -> None:
+    client = _make_client(_make_response(503, {}))
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+    with pytest.raises(FoundryApiError) as exc_info:
+        await provider.get_response_async("any_id")
+
+    assert exc_info.value.status_code == 503
+
+
+@pytest.mark.asyncio
+async def test_error_mapping__error_message_falls_back_for_non_json_body(credential: Any, settings: FoundryStorageSettings) -> None:
+    raw = httpx.Response(status_code=502, content=b"Bad Gateway")
+    client = AsyncMock(spec=httpx.AsyncClient)
+    client.get = AsyncMock(return_value=raw)
+    provider = FoundryStorageProvider(credential, settings=settings, http_client=client)
+
+    with pytest.raises(FoundryApiError) as exc_info:
+        await provider.get_response_async("any_id")
+
+    assert "502" in exc_info.value.message
+
+
+# ===========================================================================
+# HTTP client lifecycle
+# ===========================================================================
+
+@pytest.mark.asyncio
+async def test_aclose__does_not_close_externally_provided_client() -> None:
+    mock_client = AsyncMock(spec=httpx.AsyncClient)
+    provider = FoundryStorageProvider(_make_credential(), settings=_SETTINGS, http_client=mock_client)
+    # When the client is supplied explicitly, the provider does NOT own it
+    # and should NOT close it.
+    await provider.aclose()
+    mock_client.aclose.assert_not_called()
+
+
+@pytest.mark.asyncio
+async def test_aclose__closes_internally_created_client() -> None:
+    """Provider should close the client it created internally."""
+    internal = AsyncMock()
+    with patch("azure.ai.agentserver.responses.store._foundry_provider.httpx.AsyncClient") as MockClient:
+        MockClient.return_value = internal
+
+        provider = FoundryStorageProvider(_make_credential(), settings=_SETTINGS)
+        await provider.aclose()
+
+    internal.aclose.assert_awaited_once()
+
+
+@pytest.mark.asyncio
+async def test_async_context_manager__closes_internal_client_on_exit() -> None:
+    internal = AsyncMock()
+    with patch("azure.ai.agentserver.responses.store._foundry_provider.httpx.AsyncClient") as MockClient:
+        MockClient.return_value = internal
+
+        async with FoundryStorageProvider(_make_credential(), settings=_SETTINGS):
+            pass
+
+    internal.aclose.assert_awaited_once()
+
+
+# ===========================================================================
+# FoundryStorageSettings
+# ===========================================================================
+
+def test_settings__from_env__reads_foundry_project_endpoint(monkeypatch: pytest.MonkeyPatch) -> None:
+    monkeypatch.setenv("FOUNDRY_PROJECT_ENDPOINT", "https://myproject.foundry.azure.com")
+    settings = FoundryStorageSettings.from_env()
+    assert settings.storage_base_url == "https://myproject.foundry.azure.com/storage/"
+
+
+def test_settings__from_env__strips_trailing_slash(monkeypatch: pytest.MonkeyPatch) -> None:
+    monkeypatch.setenv("FOUNDRY_PROJECT_ENDPOINT", "https://myproject.foundry.azure.com/")
+    settings = FoundryStorageSettings.from_env()
+    assert settings.storage_base_url == "https://myproject.foundry.azure.com/storage/"
+
+
+def test_settings__from_env__raises_if_env_var_missing(monkeypatch: pytest.MonkeyPatch) -> None:
+    monkeypatch.delenv("FOUNDRY_PROJECT_ENDPOINT", raising=False)
+    with pytest.raises(EnvironmentError, match="FOUNDRY_PROJECT_ENDPOINT"):
+        FoundryStorageSettings.from_env()
+
+
+def test_settings__from_env__raises_if_not_absolute_url(monkeypatch: pytest.MonkeyPatch) -> None:
+    monkeypatch.setenv("FOUNDRY_PROJECT_ENDPOINT", "just-a-hostname")
+    with pytest.raises(ValueError, match="FOUNDRY_PROJECT_ENDPOINT"):
+        FoundryStorageSettings.from_env()
+
+
+def test_settings__build_url__includes_api_version() -> None:
+    url = _SETTINGS.build_url("responses/abc")
+    assert url == f"{_BASE_URL}responses/abc?api-version=v1"
+
+
+def test_settings__build_url__appends_extra_params_encoded() -> None:
+    url = _SETTINGS.build_url("responses", limit="10", order="asc")
+    assert "limit=10" in url
+    assert "order=asc" in url
+
+
+def test_settings__build_url__url_encodes_extra_param_values() -> None:
+    url = _SETTINGS.build_url("history/item_ids", conversation_id="conv id/1")
+    assert "conversation_id=conv%20id%2F1" in url
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_generated_payload_validation.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_generated_payload_validation.py
new file mode 100644
index 000000000000..d06062736ed4
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_generated_payload_validation.py
@@ -0,0 +1,96 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for generated payload validator integration in parse flow."""
+
+from __future__ import annotations
+
+import pytest
+
+from azure.ai.agentserver.responses.hosting._validation import parse_create_response
+from azure.ai.agentserver.responses.models.errors import RequestValidationError
+from azure.ai.agentserver.responses.models._generated._validators import validate_CreateResponse
+
+
+# ---------------------------------------------------------------------------
+# parse_create_response integration tests (real validator + real model)
+# ---------------------------------------------------------------------------
+
+
+def test_parse_create_response_rejects_invalid_payload() -> None:
+    """A payload with a wrong-typed field is caught by the generated validator."""
+    with pytest.raises(RequestValidationError) as exc_info:
+        parse_create_response({"model": 123})
+
+    error = exc_info.value
+    assert error.code == "invalid_request"
+    assert error.details is not None
+    assert any(d["param"] == "$.model" for d in error.details)
+
+
+def test_parse_create_response_allows_valid_payload() -> None:
+    parsed = parse_create_response({"model": "gpt-4o"})
+    assert parsed.model == "gpt-4o"
+
+
+def test_parse_create_response_rejects_non_object_body() -> None:
+    with pytest.raises(RequestValidationError) as exc_info:
+        parse_create_response("not-a-dict")  # type: ignore[arg-type]
+
+    assert exc_info.value.code == "invalid_request"
+
+
+# ---------------------------------------------------------------------------
+# Generated validator tests (validate_CreateResponse directly)
+# ---------------------------------------------------------------------------
+
+
+def test_generated_create_response_validator_accepts_string_input() -> None:
+    errors = validate_CreateResponse({"input": "hello world"})
+    assert errors == []
+
+
+def test_generated_create_response_validator_accepts_array_input_items() -> None:
+    errors = validate_CreateResponse({"input": [{"type": "message"}]})
+    assert errors == []
+
+
+def test_generated_create_response_validator_rejects_non_string_non_array_input() -> None:
+    errors = validate_CreateResponse({"input": 123})
+    assert any(e["path"] == "$.input" and "Expected one of: string, array" in e["message"] for e in errors)
+
+
+def test_generated_create_response_validator_rejects_non_object_input_item() -> None:
+    errors = validate_CreateResponse({"input": [123]})
+    assert any(e["path"] == "$.input" and "Expected one of: string, array" in e["message"] for e in errors)
+
+
+def test_generated_create_response_validator_rejects_input_item_missing_type() -> None:
+    errors = validate_CreateResponse({"input": [{}]})
+    assert any(e["path"] == "$.input" and "Expected one of: string, array" in e["message"] for e in errors)
+
+
+def test_generated_create_response_validator_rejects_input_item_type_with_wrong_primitive() -> None:
+    errors = validate_CreateResponse({"input": [{"type": 1}]})
+    assert any(e["path"] == "$.input" and "Expected one of: string, array" in e["message"] for e in errors)
+
+
+@pytest.mark.parametrize(
+    "item_type",
+    [
+        "message",
+        "item_reference",
+        "function_call_output",
+        "computer_call_output",
+        "apply_patch_call_output",
+    ],
+)
+def test_generated_create_response_validator_accepts_multiple_input_item_types(item_type: str) -> None:
+    errors = validate_CreateResponse({"input": [{"type": item_type}]})
+    assert errors == []
+
+
+def test_generated_create_response_validator_accepts_mixed_input_item_types() -> None:
+    errors = validate_CreateResponse(
+        {"input": [{"type": "message"}, {"type": "item_reference"}, {"type": "function_call_output"}]}
+    )
+    assert errors == []
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_id_generator.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_id_generator.py
new file mode 100644
index 000000000000..a32972af60d2
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_id_generator.py
@@ -0,0 +1,110 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for .NET-parity ID generation behavior."""
+
+from __future__ import annotations
+
+import re
+
+import pytest
+
+from azure.ai.agentserver.responses._id_generator import IdGenerator
+from azure.ai.agentserver.responses.models import _generated as generated_models
+
+
+def test_id_generator__new_id_uses_new_format_shape() -> None:
+    created_id = IdGenerator.new_id("msg")
+
+    assert created_id.startswith("msg_")
+    body = created_id[len("msg_") :]
+    assert len(body) == 50
+
+    partition_key = body[:18]
+    entropy = body[18:]
+
+    assert len(partition_key) == 18
+    assert partition_key.endswith("00")
+    assert re.fullmatch(r"[0-9a-f]{16}00", partition_key) is not None
+    assert len(entropy) == 32
+    assert re.fullmatch(r"[A-Za-z0-9]{32}", entropy) is not None
+
+
+def test_id_generator__new_id_reuses_new_format_partition_key_from_hint() -> None:
+    hint = "caresp_1234567890abcdef00ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
+
+    created_id = IdGenerator.new_id("fc", hint)
+
+    assert created_id.startswith("fc_1234567890abcdef00")
+
+
+def test_id_generator__new_id_upgrades_legacy_partition_key_from_hint() -> None:
+    legacy_partition_key = "1234567890abcdef"
+    legacy_entropy = "A" * 32
+    legacy_hint = f"msg_{legacy_entropy}{legacy_partition_key}"
+
+    created_id = IdGenerator.new_id("rs", legacy_hint)
+
+    assert created_id.startswith("rs_1234567890abcdef00")
+
+
+def test_id_generator__extract_partition_key_supports_new_and_legacy_formats() -> None:
+    new_format_id = "caresp_1234567890abcdef00ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
+    legacy_format_id = "msg_AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA1234567890abcdef"
+
+    assert IdGenerator.extract_partition_key(new_format_id) == "1234567890abcdef00"
+    assert IdGenerator.extract_partition_key(legacy_format_id) == "1234567890abcdef"
+
+
+def test_id_generator__extract_partition_key_raises_for_bad_input() -> None:
+    with pytest.raises(ValueError, match="ID must not be null or empty"):
+        IdGenerator.extract_partition_key("")
+
+    with pytest.raises(ValueError, match="has no '_' delimiter"):
+        IdGenerator.extract_partition_key("badid")
+
+    with pytest.raises(ValueError, match="unexpected body length"):
+        IdGenerator.extract_partition_key("msg_short")
+
+
+def test_id_generator__is_valid_reports_dotnet_compatible_errors() -> None:
+    assert IdGenerator.is_valid("") == (False, "ID must not be null or empty.")
+    assert IdGenerator.is_valid("badid") == (False, "ID 'badid' has no '_' delimiter.")
+    assert IdGenerator.is_valid("_short") == (
+        False,
+        "ID has an empty prefix.",
+    )
+    assert IdGenerator.is_valid("msg_short") == (
+        False,
+        "ID 'msg_short' has unexpected body length 5 (expected 50 or 48).",
+    )
+
+
+def test_id_generator__is_valid_checks_allowed_prefixes() -> None:
+    valid_id = "msg_1234567890abcdef00ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
+
+    ok, error = IdGenerator.is_valid(valid_id, allowed_prefixes=["msg", "fc"])
+    assert ok is True
+    assert error is None
+
+    ok, error = IdGenerator.is_valid(valid_id, allowed_prefixes=["fc"])
+    assert ok is False
+    assert error == "ID prefix 'msg' is not in the allowed set [fc]."
+
+
+def test_id_generator__convenience_method_uses_caresp_prefix() -> None:
+    created_id = IdGenerator.new_response_id()
+
+    assert created_id.startswith("caresp_")
+    assert len(created_id.split("_", maxsplit=1)[1]) == 50
+
+
+def test_id_generator__new_item_id_dispatches_by_generated_model_type() -> None:
+    item_message = object.__new__(generated_models.ItemMessage)
+    item_reference = object.__new__(generated_models.ItemReferenceParam)
+
+    generated_id = IdGenerator.new_item_id(item_message)
+
+    assert generated_id is not None
+    assert generated_id.startswith("msg_")
+    assert IdGenerator.new_item_id(item_reference) is None
+    assert IdGenerator.new_item_id(object()) is None
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_in_memory_provider_crud.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_in_memory_provider_crud.py
new file mode 100644
index 000000000000..56a132fa6439
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_in_memory_provider_crud.py
@@ -0,0 +1,467 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""CRUD tests for InMemoryResponseProvider.
+
+Covers create, read, update, delete of response envelopes,
+output item storage, history resolution via previous_response_id
+and conversation_id, and defensive-copy isolation.
+"""
+
+from __future__ import annotations
+
+import asyncio
+from typing import Any
+
+import pytest
+
+from azure.ai.agentserver.responses.models import _generated as generated_models
+from azure.ai.agentserver.responses.store._memory import InMemoryResponseProvider
+
+
+# ---------------------------------------------------------------------------
+# Helpers
+# ---------------------------------------------------------------------------
+
+
+def _response(
+    response_id: str,
+    *,
+    status: str = "completed",
+    output: list[dict[str, Any]] | None = None,
+    conversation_id: str | None = None,
+) -> generated_models.Response:
+    payload: dict[str, Any] = {
+        "id": response_id,
+        "object": "response",
+        "output": output or [],
+        "store": True,
+        "status": status,
+    }
+    if conversation_id is not None:
+        payload["conversation"] = {"id": conversation_id}
+    return generated_models.Response(payload)
+
+
+def _input_item(item_id: str, text: str) -> dict[str, Any]:
+    return {
+        "id": item_id,
+        "type": "message",
+        "role": "user",
+        "content": [{"type": "input_text", "text": text}],
+    }
+
+
+def _output_message(item_id: str, text: str) -> dict[str, Any]:
+    return {
+        "id": item_id,
+        "type": "output_message",
+        "role": "assistant",
+        "status": "completed",
+        "content": [{"type": "output_text", "text": text}],
+    }
+
+
+# ===========================================================================
+# Create
+# ===========================================================================
+
+
+def test_create__stores_response_envelope() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_1"), None, None))
+
+    result = asyncio.run(provider.get_response_async("resp_1"))
+    assert str(getattr(result, "id")) == "resp_1"
+
+
+def test_create__duplicate_raises_value_error() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_dup"), None, None))
+
+    with pytest.raises(ValueError, match="already exists"):
+        asyncio.run(provider.create_response_async(_response("resp_dup"), None, None))
+
+
+def test_create__stores_input_items_in_item_store() -> None:
+    provider = InMemoryResponseProvider()
+    items = [_input_item("in_1", "hello"), _input_item("in_2", "world")]
+    asyncio.run(provider.create_response_async(_response("resp_in"), items, None))
+
+    fetched = asyncio.run(provider.get_items_async(["in_1", "in_2"]))
+    assert len(fetched) == 2
+    assert fetched[0]["id"] == "in_1"
+    assert fetched[1]["id"] == "in_2"
+
+
+def test_create__stores_output_items_in_item_store() -> None:
+    provider = InMemoryResponseProvider()
+    resp = _response(
+        "resp_out",
+        output=[_output_message("out_1", "hi"), _output_message("out_2", "there")],
+    )
+    asyncio.run(provider.create_response_async(resp, None, None))
+
+    fetched = asyncio.run(provider.get_items_async(["out_1", "out_2"]))
+    assert len(fetched) == 2
+    assert fetched[0]["id"] == "out_1"
+    assert fetched[1]["id"] == "out_2"
+
+
+def test_create__returns_defensive_copy() -> None:
+    """Mutating the returned response must not affect the stored copy."""
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_copy"), None, None))
+
+    r1 = asyncio.run(provider.get_response_async("resp_copy"))
+    r1["status"] = "failed"
+
+    r2 = asyncio.run(provider.get_response_async("resp_copy"))
+    assert str(getattr(r2, "status")) == "completed"
+
+
+# ===========================================================================
+# Read (get)
+# ===========================================================================
+
+
+def test_get__raises_key_error_for_missing() -> None:
+    provider = InMemoryResponseProvider()
+    with pytest.raises(KeyError, match="not found"):
+        asyncio.run(provider.get_response_async("nonexistent"))
+
+
+def test_get__raises_key_error_for_deleted() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_del"), None, None))
+    asyncio.run(provider.delete_response_async("resp_del"))
+
+    with pytest.raises(KeyError, match="not found"):
+        asyncio.run(provider.get_response_async("resp_del"))
+
+
+def test_get_items__missing_ids_return_none() -> None:
+    provider = InMemoryResponseProvider()
+    result = asyncio.run(provider.get_items_async(["no_such_item"]))
+    assert result == [None]
+
+
+# ===========================================================================
+# Update
+# ===========================================================================
+
+
+def test_update__replaces_envelope() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_upd", status="in_progress"), None, None))
+
+    updated = _response("resp_upd", status="completed")
+    asyncio.run(provider.update_response_async(updated))
+
+    result = asyncio.run(provider.get_response_async("resp_upd"))
+    assert str(getattr(result, "status")) == "completed"
+
+
+def test_update__stores_new_output_items() -> None:
+    """Updating a response with new output items must index them in the item store."""
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_upd2", status="in_progress"), None, None))
+
+    updated = _response(
+        "resp_upd2",
+        status="completed",
+        output=[_output_message("out_upd_1", "answer")],
+    )
+    asyncio.run(provider.update_response_async(updated))
+
+    fetched = asyncio.run(provider.get_items_async(["out_upd_1"]))
+    assert fetched[0] is not None
+    assert fetched[0]["id"] == "out_upd_1"
+
+
+def test_update__raises_key_error_for_missing() -> None:
+    provider = InMemoryResponseProvider()
+    with pytest.raises(KeyError, match="not found"):
+        asyncio.run(provider.update_response_async(_response("ghost")))
+
+
+def test_update__raises_key_error_for_deleted() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_d"), None, None))
+    asyncio.run(provider.delete_response_async("resp_d"))
+
+    with pytest.raises(KeyError, match="not found"):
+        asyncio.run(provider.update_response_async(_response("resp_d")))
+
+
+# ===========================================================================
+# Delete
+# ===========================================================================
+
+
+def test_delete__marks_entry_as_deleted() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_del2"), None, None))
+    asyncio.run(provider.delete_response_async("resp_del2"))
+
+    with pytest.raises(KeyError):
+        asyncio.run(provider.get_response_async("resp_del2"))
+
+
+def test_delete__raises_key_error_for_missing() -> None:
+    provider = InMemoryResponseProvider()
+    with pytest.raises(KeyError, match="not found"):
+        asyncio.run(provider.delete_response_async("nonexistent"))
+
+
+def test_delete__double_delete_raises() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(provider.create_response_async(_response("resp_dd"), None, None))
+    asyncio.run(provider.delete_response_async("resp_dd"))
+
+    with pytest.raises(KeyError, match="not found"):
+        asyncio.run(provider.delete_response_async("resp_dd"))
+
+
+# ===========================================================================
+# History resolution — previous_response_id path
+# ===========================================================================
+
+
+def test_history__previous_response_returns_input_and_output_ids() -> None:
+    """get_history_item_ids_async via previous_response_id must include
+    history + input + output item IDs from the previous response."""
+    provider = InMemoryResponseProvider()
+    resp = _response(
+        "resp_prev",
+        output=[_output_message("out_h1", "reply")],
+    )
+    asyncio.run(
+        provider.create_response_async(
+            resp,
+            [_input_item("in_h1", "question")],
+            history_item_ids=None,
+        )
+    )
+
+    ids = asyncio.run(provider.get_history_item_ids_async("resp_prev", None, 100))
+    assert "in_h1" in ids
+    assert "out_h1" in ids
+
+
+def test_history__previous_response_chains_history_ids() -> None:
+    """History chain: resp_1 (with input) → resp_2 (previous_response_id=resp_1)
+    should yield resp_1's history + input + output when queried from resp_2."""
+    provider = InMemoryResponseProvider()
+    resp1 = _response(
+        "resp_chain1",
+        output=[_output_message("out_c1", "first reply")],
+    )
+    asyncio.run(
+        provider.create_response_async(
+            resp1,
+            [_input_item("in_c1", "first question")],
+            history_item_ids=None,
+        )
+    )
+
+    # Build resp_2 with history referencing resp_1's items
+    history_from_1 = asyncio.run(provider.get_history_item_ids_async("resp_chain1", None, 100))
+    resp2 = _response(
+        "resp_chain2",
+        output=[_output_message("out_c2", "second reply")],
+    )
+    asyncio.run(
+        provider.create_response_async(
+            resp2,
+            [_input_item("in_c2", "second question")],
+            history_item_ids=history_from_1,
+        )
+    )
+
+    # Now query history from resp_2's perspective
+    ids = asyncio.run(provider.get_history_item_ids_async("resp_chain2", None, 100))
+    # Should include: history (in_c1, out_c1) + input (in_c2) + output (out_c2)
+    assert "in_c1" in ids
+    assert "out_c1" in ids
+    assert "in_c2" in ids
+    assert "out_c2" in ids
+
+
+def test_history__items_resolvable_after_chain() -> None:
+    """Full round-trip: create resp_1, then resp_2 referencing resp_1, and
+    verify all history items are resolvable via get_items_async."""
+    provider = InMemoryResponseProvider()
+    resp1 = _response(
+        "resp_rt1",
+        output=[_output_message("out_rt1", "answer one")],
+    )
+    asyncio.run(
+        provider.create_response_async(
+            resp1,
+            [_input_item("in_rt1", "question one")],
+            history_item_ids=None,
+        )
+    )
+
+    history_ids = asyncio.run(provider.get_history_item_ids_async("resp_rt1", None, 100))
+    resp2 = _response("resp_rt2", output=[_output_message("out_rt2", "answer two")])
+    asyncio.run(
+        provider.create_response_async(
+            resp2,
+            [_input_item("in_rt2", "question two")],
+            history_item_ids=history_ids,
+        )
+    )
+
+    all_ids = asyncio.run(provider.get_history_item_ids_async("resp_rt2", None, 100))
+    items = asyncio.run(provider.get_items_async(all_ids))
+    assert all(item is not None for item in items), f"Some history items not found: {all_ids}"
+    resolved_ids = [item["id"] for item in items]
+    assert "in_rt1" in resolved_ids
+    assert "out_rt1" in resolved_ids
+    assert "in_rt2" in resolved_ids
+    assert "out_rt2" in resolved_ids
+
+
+def test_history__deleted_response_excluded() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_hdel", output=[_output_message("out_hdel", "msg")]),
+            [_input_item("in_hdel", "q")],
+            None,
+        )
+    )
+    asyncio.run(provider.delete_response_async("resp_hdel"))
+
+    ids = asyncio.run(provider.get_history_item_ids_async("resp_hdel", None, 100))
+    assert ids == []
+
+
+def test_history__respects_limit() -> None:
+    provider = InMemoryResponseProvider()
+    many_inputs = [_input_item(f"in_lim_{i}", f"msg {i}") for i in range(10)]
+    asyncio.run(provider.create_response_async(_response("resp_lim"), many_inputs, None))
+
+    ids = asyncio.run(provider.get_history_item_ids_async("resp_lim", None, 3))
+    assert len(ids) == 3
+
+
+def test_history__zero_limit_returns_empty() -> None:
+    provider = InMemoryResponseProvider()
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_z"),
+            [_input_item("in_z", "q")],
+            None,
+        )
+    )
+
+    ids = asyncio.run(provider.get_history_item_ids_async("resp_z", None, 0))
+    assert ids == []
+
+
+# ===========================================================================
+# History resolution — conversation_id path
+# ===========================================================================
+
+
+def test_history__conversation_id_collects_across_responses() -> None:
+    """All input + output item IDs from responses in a conversation should be returned."""
+    provider = InMemoryResponseProvider()
+
+    resp1 = _response(
+        "resp_cv1",
+        conversation_id="conv_1",
+        output=[_output_message("out_cv1", "reply 1")],
+    )
+    asyncio.run(
+        provider.create_response_async(
+            resp1,
+            [_input_item("in_cv1", "q1")],
+            None,
+        )
+    )
+
+    resp2 = _response(
+        "resp_cv2",
+        conversation_id="conv_1",
+        output=[_output_message("out_cv2", "reply 2")],
+    )
+    asyncio.run(
+        provider.create_response_async(
+            resp2,
+            [_input_item("in_cv2", "q2")],
+            None,
+        )
+    )
+
+    ids = asyncio.run(provider.get_history_item_ids_async(None, "conv_1", 100))
+    assert "in_cv1" in ids
+    assert "out_cv1" in ids
+    assert "in_cv2" in ids
+    assert "out_cv2" in ids
+
+
+def test_history__conversation_excludes_deleted_responses() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_cvd1", conversation_id="conv_d"),
+            [_input_item("in_cvd1", "q1")],
+            None,
+        )
+    )
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_cvd2", conversation_id="conv_d"),
+            [_input_item("in_cvd2", "q2")],
+            None,
+        )
+    )
+    asyncio.run(provider.delete_response_async("resp_cvd1"))
+
+    ids = asyncio.run(provider.get_history_item_ids_async(None, "conv_d", 100))
+    assert "in_cvd1" not in ids
+    assert "in_cvd2" in ids
+
+
+def test_history__no_previous_no_conversation_returns_empty() -> None:
+    provider = InMemoryResponseProvider()
+    ids = asyncio.run(provider.get_history_item_ids_async(None, None, 100))
+    assert ids == []
+
+
+# ===========================================================================
+# Output items updated on update_response_async
+# ===========================================================================
+
+
+def test_update__output_items_reflected_in_history() -> None:
+    """After updating a response with new output, history resolution should
+    include the updated output item IDs."""
+    provider = InMemoryResponseProvider()
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_uo", status="in_progress"),
+            [_input_item("in_uo", "question")],
+            None,
+        )
+    )
+
+    # Initially no output
+    ids_before = asyncio.run(provider.get_history_item_ids_async("resp_uo", None, 100))
+    assert "out_uo" not in ids_before
+
+    # Update adds output
+    updated = _response(
+        "resp_uo",
+        status="completed",
+        output=[_output_message("out_uo", "answer")],
+    )
+    asyncio.run(provider.update_response_async(updated))
+
+    ids_after = asyncio.run(provider.get_history_item_ids_async("resp_uo", None, 100))
+    assert "in_uo" in ids_after
+    assert "out_uo" in ids_after
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_input_items_provider_behavior.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_input_items_provider_behavior.py
new file mode 100644
index 000000000000..ee9255d1b0ed
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_input_items_provider_behavior.py
@@ -0,0 +1,178 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for input-items provider paging and error semantics."""
+
+from __future__ import annotations
+
+import asyncio
+
+import pytest
+
+from azure.ai.agentserver.responses.models import _generated as generated_models
+from azure.ai.agentserver.responses.store._memory import InMemoryResponseProvider
+
+
+def _response(response_id: str, *, store: bool = True) -> generated_models.Response:
+    return generated_models.Response(
+        {
+            "id": response_id,
+            "object": "response",
+            "output": [],
+            "store": store,
+            "status": "completed",
+        }
+    )
+
+
+def _item(item_id: str, text: str) -> dict[str, object]:
+    return {
+        "id": item_id,
+        "type": "message",
+        "role": "user",
+        "content": [{"type": "input_text", "text": text}],
+    }
+
+
+def _ids(items: list[object]) -> list[str]:
+    result: list[str] = []
+    for item in items:
+        if isinstance(item, dict):
+            item_id = item.get("id")
+            if isinstance(item_id, str):
+                result.append(item_id)
+    return result
+
+
+def test_provider_input_items__supports_after_before_combination() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_combo"),
+            [
+                _item("msg_001", "one"),
+                _item("msg_002", "two"),
+                _item("msg_003", "three"),
+                _item("msg_004", "four"),
+                _item("msg_005", "five"),
+            ],
+            history_item_ids=None,
+        )
+    )
+
+    items = asyncio.run(
+        provider.get_input_items_async(
+            "resp_combo",
+            ascending=True,
+            after="msg_002",
+            before="msg_005",
+        )
+    )
+
+    assert _ids(items) == ["msg_003", "msg_004"]
+
+
+def test_provider_input_items__returns_empty_page_after_last_cursor() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_empty"),
+            [
+                _item("msg_001", "one"),
+                _item("msg_002", "two"),
+            ],
+            history_item_ids=None,
+        )
+    )
+
+    items = asyncio.run(provider.get_input_items_async("resp_empty", ascending=True, after="msg_002"))
+
+    assert items == []
+
+
+def test_provider_input_items__returns_history_only_items_when_current_input_is_empty() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_base"),
+            [
+                _item("msg_hist_001", "history-1"),
+                _item("msg_hist_002", "history-2"),
+            ],
+            history_item_ids=None,
+        )
+    )
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_history_only"),
+            [],
+            history_item_ids=["msg_hist_001", "msg_hist_002"],
+        )
+    )
+
+    items = asyncio.run(provider.get_input_items_async("resp_history_only", ascending=True))
+
+    assert _ids(items) == ["msg_hist_001", "msg_hist_002"]
+
+
+def test_provider_input_items__returns_current_only_items_when_no_history() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_current_only"),
+            [
+                _item("msg_curr_001", "current-1"),
+                _item("msg_curr_002", "current-2"),
+            ],
+            history_item_ids=None,
+        )
+    )
+
+    items = asyncio.run(provider.get_input_items_async("resp_current_only", ascending=True))
+
+    assert _ids(items) == ["msg_curr_001", "msg_curr_002"]
+
+
+def test_provider_input_items__respects_limit_boundaries_1_and_100() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_limits"),
+            [_item(f"msg_{index:03d}", f"item-{index:03d}") for index in range(1, 151)],
+            history_item_ids=None,
+        )
+    )
+
+    one_item = asyncio.run(provider.get_input_items_async("resp_limits", ascending=True, limit=1))
+    hundred_items = asyncio.run(provider.get_input_items_async("resp_limits", ascending=True, limit=100))
+
+    assert len(one_item) == 1
+    assert _ids(one_item) == ["msg_001"]
+    assert len(hundred_items) == 100
+    assert _ids(hundred_items)[0] == "msg_001"
+    assert _ids(hundred_items)[-1] == "msg_100"
+
+
+def test_provider_input_items__raises_for_deleted_and_missing_response() -> None:
+    provider = InMemoryResponseProvider()
+
+    asyncio.run(
+        provider.create_response_async(
+            _response("resp_deleted"),
+            [_item("msg_001", "one")],
+            history_item_ids=None,
+        )
+    )
+
+    asyncio.run(provider.delete_response_async("resp_deleted"))
+
+    with pytest.raises(ValueError):
+        asyncio.run(provider.get_input_items_async("resp_deleted"))
+
+    with pytest.raises(KeyError):
+        asyncio.run(provider.get_input_items_async("resp_missing"))
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_lifecycle_state_machine.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_lifecycle_state_machine.py
new file mode 100644
index 000000000000..6cfc0d8fcd16
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_lifecycle_state_machine.py
@@ -0,0 +1,86 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for lifecycle event state machine normalization."""
+
+from __future__ import annotations
+
+import pytest
+
+from azure.ai.agentserver.responses.streaming._state_machine import (
+    LifecycleStateMachineError,
+    normalize_lifecycle_events,
+)
+
+
+def test_lifecycle_state_machine__requires_response_created_as_first_event() -> None:
+    with pytest.raises(LifecycleStateMachineError):
+        normalize_lifecycle_events(
+            response_id="resp_123",
+            events=[
+                {
+                    "type": "response.in_progress",
+                    "payload": {"status": "in_progress"},
+                }
+            ],
+        )
+
+
+def test_lifecycle_state_machine__rejects_multiple_terminal_events() -> None:
+    with pytest.raises(LifecycleStateMachineError):
+        normalize_lifecycle_events(
+            response_id="resp_123",
+            events=[
+                {"type": "response.created", "payload": {"status": "queued"}},
+                {"type": "response.completed", "payload": {"status": "completed"}},
+                {"type": "response.failed", "payload": {"status": "failed"}},
+            ],
+        )
+
+
+def test_lifecycle_state_machine__auto_appends_failed_when_terminal_missing() -> None:
+    normalized = normalize_lifecycle_events(
+        response_id="resp_123",
+        events=[
+            {"type": "response.created", "payload": {"status": "queued"}},
+            {"type": "response.in_progress", "payload": {"status": "in_progress"}},
+        ],
+    )
+
+    assert normalized[-1]["type"] == "response.failed"
+    assert normalized[-1]["payload"]["status"] == "failed"
+
+
+def test_lifecycle_state_machine__rejects_out_of_order_transitions() -> None:
+    with pytest.raises(LifecycleStateMachineError):
+        normalize_lifecycle_events(
+            response_id="resp_123",
+            events=[
+                {"type": "response.created", "payload": {"status": "queued"}},
+                {"type": "response.completed", "payload": {"status": "completed"}},
+                {"type": "response.in_progress", "payload": {"status": "in_progress"}},
+            ],
+        )
+
+
+def test_lifecycle_state_machine__returns_deep_copied_payload_snapshots() -> None:
+    original_events = [
+        {
+            "type": "response.created",
+            "payload": {
+                "status": "queued",
+                "metadata": {"nested": "before"},
+            },
+        },
+        {
+            "type": "response.completed",
+            "payload": {
+                "status": "completed",
+                "metadata": {"nested": "before"},
+            },
+        },
+    ]
+
+    normalized = normalize_lifecycle_events(response_id="resp_123", events=original_events)
+
+    original_events[0]["payload"]["metadata"]["nested"] = "after"
+    assert normalized[0]["payload"]["metadata"]["nested"] == "before"
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_observability.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_observability.py
new file mode 100644
index 000000000000..671dc701a2af
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_observability.py
@@ -0,0 +1,57 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for observability helpers."""
+
+from __future__ import annotations
+
+from azure.ai.agentserver.responses.hosting._observability import (
+    InMemoryCreateSpanHook,
+    build_create_span_tags,
+    build_platform_server_header,
+    start_create_span,
+)
+
+
+def test_observability__build_platform_server_header_includes_extra_identity() -> None:
+    value = build_platform_server_header(
+        sdk_name="azure-ai-agentserver-responses",
+        version="0.1.0",
+        runtime="python/3.11",
+        extra="integration-suite",
+    )
+
+    assert value == "azure-ai-agentserver-responses/0.1.0 (python/3.11) integration-suite"
+
+
+def test_observability__start_create_span_records_single_lifecycle_event() -> None:
+    hook = InMemoryCreateSpanHook()
+    span = start_create_span(
+        "create_response",
+        {"service.name": "svc", "gen_ai.operation.name": "create_response"},
+        hook=hook,
+    )
+
+    span.set_tag("gen_ai.response.id", "resp_123")
+    span.end()
+    span.end()  # idempotent
+
+    assert len(hook.spans) == 1
+    assert hook.spans[0].name == "create_response"
+    assert hook.spans[0].tags["gen_ai.response.id"] == "resp_123"
+    assert hook.spans[0].ended_at is not None
+
+
+def test_observability__build_create_span_tags_uses_agent_name_and_model() -> None:
+    tags = build_create_span_tags(
+        response_id="resp_abc",
+        model="gpt-4o-mini",
+        agent_reference={"name": "agent-one", "version": "v1"},
+        service_name="azure-ai-agentserver-responses",
+    )
+
+    assert tags["service.name"] == "azure-ai-agentserver-responses"
+    assert tags["gen_ai.operation.name"] == "create_response"
+    assert tags["gen_ai.response.id"] == "resp_abc"
+    assert tags["gen_ai.request.model"] == "gpt-4o-mini"
+    assert tags["gen_ai.agent.name"] == "agent-one"
+    assert tags["gen_ai.agent.id"] == "agent-one:v1"
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_options.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_options.py
new file mode 100644
index 000000000000..fb7991aec62e
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_options.py
@@ -0,0 +1,68 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for server options behavior."""
+
+from __future__ import annotations
+
+import pytest
+
+from azure.ai.agentserver.responses._options import ResponsesServerOptions
+
+
+def test_options__defaults_match_public_contract() -> None:
+    options = ResponsesServerOptions()
+
+    assert options.default_fetch_history_count == 100
+    assert options.default_model is None
+    assert options.additional_server_identity is None
+    assert options.sse_keep_alive_enabled is False
+
+
+def test_options__environment_values_override_defaults() -> None:
+    options = ResponsesServerOptions.from_env(
+        {
+            "AZURE_AI_RESPONSES_SERVER_DEFAULT_FETCH_HISTORY_ITEM_COUNT": "42",
+            "AZURE_AI_RESPONSES_SERVER_SSE_KEEPALIVE_INTERVAL": "12",
+        }
+    )
+
+    assert options.default_fetch_history_count == 42
+    assert options.sse_keep_alive_interval_seconds == 12
+
+
+def test_options__invalid_boundary_values_fail_fast() -> None:
+    with pytest.raises(ValueError):
+        ResponsesServerOptions(default_fetch_history_count=0)
+
+    with pytest.raises(ValueError):
+        ResponsesServerOptions(sse_keep_alive_interval_seconds=0)
+
+    with pytest.raises(ValueError):
+        ResponsesServerOptions.from_env(
+            {"AZURE_AI_RESPONSES_SERVER_DEFAULT_FETCH_HISTORY_ITEM_COUNT": "-1"}
+        )
+
+
+def test_options__dotnet_environment_variable_names_are_supported() -> None:
+    options = ResponsesServerOptions.from_env(
+        {
+            "AZURE_AI_RESPONSES_SERVER_DEFAULT_FETCH_HISTORY_ITEM_COUNT": "55",
+            "AZURE_AI_RESPONSES_SERVER_SSE_KEEPALIVE_INTERVAL": "15",
+        }
+    )
+
+    assert options.default_fetch_history_count == 55
+    assert options.sse_keep_alive_interval_seconds == 15
+
+
+def test_options__legacy_environment_variable_names_are_ignored() -> None:
+    options = ResponsesServerOptions.from_env(
+        {
+            "RESPONSES_FETCH_HISTORY_COUNT": "42",
+            "RESPONSES_SSE_KEEP_ALIVE_INTERVAL_SECONDS": "9",
+        }
+    )
+
+    assert options.default_model is None
+    assert options.default_fetch_history_count == 100
+    assert options.sse_keep_alive_interval_seconds is None
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_response_event_stream_builder.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_response_event_stream_builder.py
new file mode 100644
index 000000000000..63d219b48bf5
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_response_event_stream_builder.py
@@ -0,0 +1,247 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for .NET-aligned response event stream APIs."""
+
+from __future__ import annotations
+
+import pytest
+
+from azure.ai.agentserver.responses._id_generator import IdGenerator
+from azure.ai.agentserver.responses.streaming._event_stream import ResponseEventStream
+from azure.ai.agentserver.responses.streaming._state_machine import LifecycleStateMachineError
+from azure.ai.agentserver.responses.models import _generated as generated_models
+
+
+def test_event_stream_builder__builds_lifecycle_events() -> None:
+    stream = ResponseEventStream(
+        response_id="resp_builder_12345",
+        agent_reference={"type": "agent_reference", "name": "unit-agent"},
+        model="gpt-4o-mini",
+    )
+
+    events = [
+        stream.emit_created(status="queued"),
+        stream.emit_in_progress(),
+        stream.emit_completed(),
+    ]
+
+    assert [event["type"] for event in events] == [
+        "response.created",
+        "response.in_progress",
+        "response.completed",
+    ]
+    assert [event["payload"]["sequence_number"] for event in events] == [0, 1, 2]
+    assert all(event["payload"]["response_id"] == "resp_builder_12345" for event in events)
+    assert all(event["payload"]["agent_reference"]["name"] == "unit-agent" for event in events)
+
+
+def test_event_stream_builder__builds_output_item_events() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_output_12345")
+    message = stream.add_output_item_message()
+    text = message.add_text_content()
+
+    events = [
+        stream.emit_created(status="queued"),
+        stream.emit_in_progress(),
+        message.emit_added(),
+        text.emit_added(),
+        text.emit_delta("hello"),
+        text.emit_done(),
+        message.emit_content_done(text),
+        message.emit_done(),
+        stream.emit_completed(),
+    ]
+
+    event_types = [event["type"] for event in events]
+    assert "response.output_item.added" in event_types
+    assert "response.output_text.delta" in event_types
+    assert "response.output_item.done" in event_types
+
+
+def test_event_stream_builder__output_item_added_returns_event_immediately() -> None:
+    stream = ResponseEventStream(
+        response_id="resp_builder_incremental_12345",
+        agent_reference={"type": "agent_reference", "name": "unit-agent"},
+        model="gpt-4o-mini",
+    )
+    stream.emit_created(status="queued")
+    stream.emit_in_progress()
+    message = stream.add_output_item_message()
+
+    emitted = message.emit_added()
+
+    assert emitted["type"] == "response.output_item.added"
+    assert emitted["payload"]["output_index"] == 0
+    assert emitted["payload"]["item"]["id"] == message.item_id
+    assert emitted["payload"]["item"]["type"] == "output_message"
+    # response_id and agent_reference belong on the Response, not on the item
+    assert "response_id" not in emitted["payload"]["item"]
+    assert "agent_reference" not in emitted["payload"]["item"]
+    assert emitted["payload"]["sequence_number"] == 2
+
+
+def test_event_stream_builder__rejects_illegal_output_item_sequence() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_bad_12345")
+    stream.emit_created(status="queued")
+    stream.emit_in_progress()
+    message = stream.add_output_item_message()
+
+    with pytest.raises(ValueError):
+        message.emit_done()
+
+
+def test_event_stream_builder__rejects_invalid_global_stream_order() -> None:
+    with pytest.raises(LifecycleStateMachineError):
+        stream = ResponseEventStream(response_id="resp_builder_bad_order_12345")
+        stream.emit_created(status="queued")
+        stream.emit_in_progress()
+        message = stream.add_output_item_message()
+        text = message.add_text_content()
+        message.emit_added()
+        stream.emit_completed()
+        text.emit_added()
+        text.emit_done()
+        message.emit_content_done(text)
+        message.emit_done()
+
+
+def test_event_stream_builder__emit_completed_accepts_usage_and_sets_terminal_fields() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_completed_params")
+    stream.emit_created(status="in_progress")
+
+    message = stream.add_output_item_message()
+    message.emit_added()
+    text = message.add_text_content()
+    text.emit_added()
+    text.emit_delta("hello")
+    text.emit_done()
+    message.emit_content_done(text)
+    message.emit_done()
+
+    usage = {
+        "input_tokens": 1,
+        "input_tokens_details": {"cached_tokens": 0},
+        "output_tokens": 2,
+        "output_tokens_details": {"reasoning_tokens": 0},
+        "total_tokens": 3,
+    }
+
+    completed = stream.emit_completed(usage=usage)
+
+    assert completed["type"] == "response.completed"
+    assert completed["payload"]["status"] == "completed"
+    assert completed["payload"]["usage"]["total_tokens"] == 3
+    assert completed["payload"]["output_text"] == "hello"
+    assert isinstance(completed["payload"]["completed_at"], int)
+
+
+def test_event_stream_builder__emit_failed_accepts_error_and_usage() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_failed_params")
+    stream.emit_created(status="in_progress")
+
+    usage = {
+        "input_tokens": 4,
+        "input_tokens_details": {"cached_tokens": 0},
+        "output_tokens": 5,
+        "output_tokens_details": {"reasoning_tokens": 0},
+        "total_tokens": 9,
+    }
+
+    failed = stream.emit_failed(code="server_error", message="boom", usage=usage)
+
+    assert failed["type"] == "response.failed"
+    assert failed["payload"]["status"] == "failed"
+    assert failed["payload"]["error"]["code"] == "server_error"
+    assert failed["payload"]["error"]["message"] == "boom"
+    assert failed["payload"]["usage"]["total_tokens"] == 9
+    assert failed["payload"].get("completed_at") is None
+
+
+def test_event_stream_builder__emit_incomplete_accepts_reason_and_usage() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_incomplete_params")
+    stream.emit_created(status="in_progress")
+
+    usage = {
+        "input_tokens": 2,
+        "input_tokens_details": {"cached_tokens": 0},
+        "output_tokens": 3,
+        "output_tokens_details": {"reasoning_tokens": 0},
+        "total_tokens": 5,
+    }
+
+    incomplete = stream.emit_incomplete(reason="max_output_tokens", usage=usage)
+
+    assert incomplete["type"] == "response.incomplete"
+    assert incomplete["payload"]["status"] == "incomplete"
+    assert incomplete["payload"]["incomplete_details"]["reason"] == "max_output_tokens"
+    assert incomplete["payload"]["usage"]["total_tokens"] == 5
+    assert incomplete["payload"].get("completed_at") is None
+
+
+def test_event_stream_builder__add_output_item_generic_emits_added_and_done() -> None:
+    stream = ResponseEventStream(response_id="resp_builder_generic_item")
+    stream.emit_created(status="in_progress")
+
+    item_id = IdGenerator.new_computer_call_output_item_id("resp_builder_generic_item")
+    builder = stream.add_output_item(item_id)
+    added_item = {
+        "id": item_id,
+        "type": "computer_call_output",
+        "call_id": "call_1",
+        "output": {"type": "computer_screenshot", "image_url": "https://example.com/1.png"},
+        "status": "in_progress",
+    }
+    done_item = {
+        "id": item_id,
+        "type": "computer_call_output",
+        "call_id": "call_1",
+        "output": {"type": "computer_screenshot", "image_url": "https://example.com/2.png"},
+        "status": "completed",
+    }
+
+    added = builder.emit_added(added_item)
+    done = builder.emit_done(done_item)
+
+    assert added["type"] == "response.output_item.added"
+    assert added["payload"]["output_index"] == 0
+    assert done["type"] == "response.output_item.done"
+    assert done["payload"]["item"]["status"] == "completed"
+
+
+def test_event_stream_builder__constructor_accepts_seed_response() -> None:
+    seed_response = generated_models.Response(
+        {
+            "id": "resp_builder_seed_response",
+            "object": "response",
+            "output": [],
+            "model": "gpt-4o-mini",
+            "metadata": {"source": "seed"},
+        }
+    )
+
+    stream = ResponseEventStream(response=seed_response)
+    created = stream.emit_created()
+
+    assert created["payload"]["id"] == "resp_builder_seed_response"
+    assert created["payload"]["model"] == "gpt-4o-mini"
+    assert created["payload"]["metadata"] == {"source": "seed"}
+
+
+def test_event_stream_builder__constructor_accepts_request_seed_fields() -> None:
+    request = generated_models.CreateResponse(
+        {
+            "model": "gpt-4o-mini",
+            "background": True,
+            "metadata": {"tag": "seeded"},
+            "previous_response_id": "resp_prev_seed",
+        }
+    )
+
+    stream = ResponseEventStream(response_id="resp_builder_seed_request", request=request)
+    created = stream.emit_created()
+
+    assert created["payload"]["id"] == "resp_builder_seed_request"
+    assert created["payload"]["model"] == "gpt-4o-mini"
+    assert created["payload"]["background"] is True
+    assert created["payload"]["previous_response_id"] == "resp_prev_seed"
+    assert created["payload"]["metadata"] == {"tag": "seeded"}
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_response_execution.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_response_execution.py
new file mode 100644
index 000000000000..fbc93294f682
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_response_execution.py
@@ -0,0 +1,257 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for ResponseExecution fields, properties, apply_event, and build_cancelled_response."""
+
+from __future__ import annotations
+
+import asyncio
+
+import pytest
+
+from azure.ai.agentserver.responses.models.runtime import (
+    ResponseExecution,
+    ResponseModeFlags,
+    build_cancelled_response,
+)
+
+
+# ---------------------------------------------------------------------------
+# Helpers
+# ---------------------------------------------------------------------------
+
+def _make_execution(**kwargs) -> ResponseExecution:
+    defaults = dict(
+        response_id="caresp_test000000000000000000000000",
+        mode_flags=ResponseModeFlags(stream=False, store=True, background=False),
+    )
+    defaults.update(kwargs)
+    return ResponseExecution(**defaults)
+
+
+# ---------------------------------------------------------------------------
+# T1 – transition_to valid
+# ---------------------------------------------------------------------------
+
+def test_transition_to_valid() -> None:
+    execution = _make_execution(status="queued")
+    execution.transition_to("in_progress")
+    assert execution.status == "in_progress"
+    assert execution.completed_at is None
+
+
+# ---------------------------------------------------------------------------
+# T2 – transition_to terminal sets completed_at
+# ---------------------------------------------------------------------------
+
+def test_transition_to_terminal_sets_completed_at() -> None:
+    execution = _make_execution(status="in_progress")
+    execution.transition_to("completed")
+    assert execution.status == "completed"
+    assert execution.completed_at is not None
+
+
+# ---------------------------------------------------------------------------
+# T3 – transition_to invalid raises ValueError
+# ---------------------------------------------------------------------------
+
+def test_transition_invalid_raises() -> None:
+    execution = _make_execution(status="completed")
+    with pytest.raises(ValueError, match="invalid status transition: completed -> in_progress"):
+        execution.transition_to("in_progress")
+
+
+# ---------------------------------------------------------------------------
+# T4 – transition_to same status is a no-op that refreshes updated_at
+# ---------------------------------------------------------------------------
+
+def test_transition_same_status_noop() -> None:
+    execution = _make_execution(status="in_progress")
+    before = execution.updated_at
+    execution.transition_to("in_progress")
+    assert execution.status == "in_progress"
+    assert execution.updated_at >= before
+
+
+# ---------------------------------------------------------------------------
+# T5 – replay_enabled is True only for bg+stream+store
+# ---------------------------------------------------------------------------
+
+def test_replay_enabled_bg_stream_store() -> None:
+    execution = _make_execution(
+        mode_flags=ResponseModeFlags(stream=True, store=True, background=True)
+    )
+    assert execution.replay_enabled is True
+
+
+# ---------------------------------------------------------------------------
+# T6 – replay_enabled is False for non-background
+# ---------------------------------------------------------------------------

+def test_replay_enabled_false_for_non_bg() -> None:
+    execution = _make_execution(
+        mode_flags=ResponseModeFlags(stream=True, store=True, background=False)
+    )
+    assert execution.replay_enabled is False
+
+
+# ---------------------------------------------------------------------------
+# T7 – visible_via_get is True when store=True
+# ---------------------------------------------------------------------------
+
+def test_visible_via_get_store_true() -> None:
+    execution = _make_execution(
+        mode_flags=ResponseModeFlags(stream=False, store=True, background=False)
+    )
+    assert execution.visible_via_get is True
+
+
+# ---------------------------------------------------------------------------
+# T8 – visible_via_get is False when store=False
+# ---------------------------------------------------------------------------
+
+def test_visible_via_get_store_false() -> None:
+    execution = _make_execution(
+        mode_flags=ResponseModeFlags(stream=False, store=False, background=False)
+    )
+    assert execution.visible_via_get is False
+
+
+# ---------------------------------------------------------------------------
+# T9 – apply_event with response.completed snapshot updates status and response
+# ---------------------------------------------------------------------------
+
+def test_apply_event_response_snapshot_updates_status() -> None:
+    execution = _make_execution(status="in_progress")
+
+    events = [
+        {
+            "type": "response.created",
+            "payload": {
+                "id": execution.response_id,
+                "response_id": execution.response_id,
+                "agent_reference": {"name": "test-agent"},
+                "object": "response",
+                "status": "queued",
+                "output": [],
+            },
+        },
+        {
+            "type": "response.completed",
+            "payload": {
+                "id": execution.response_id,
+                "response_id": execution.response_id,
+                "agent_reference": {"name": "test-agent"},
+                "object": "response",
+                "status": "completed",
+                "output": [],
+            },
+        },
+    ]
+
+    execution.apply_event(events[-1], events)
+
+    assert execution.status == "completed"
+    assert execution.response is not None
+
+
+# ---------------------------------------------------------------------------
+# T10 – apply_event is a no-op when already cancelled
+# ---------------------------------------------------------------------------
+
+def test_apply_event_cancelled_is_noop() -> None:
+    execution = _make_execution(status="cancelled")
+
+    events = [
+        {
+            "type": "response.completed",
+            "payload": {
+                "id": execution.response_id,
+                "response_id": execution.response_id,
+                "agent_reference": {},
+                "object": "response",
+                "status": "completed",
+                "output": [],
+            },
+        }
+    ]
+    execution.apply_event(events[0], events)
+
+    assert execution.status == "cancelled"
+    assert execution.response is None
+
+
+# ---------------------------------------------------------------------------
+# T11 – apply_event output_item.added appends item
+# ---------------------------------------------------------------------------
+
+def test_apply_event_output_item_added() -> None:
+    from azure.ai.agentserver.responses.models._generated import Response
+
+    execution = _make_execution(status="in_progress")
+    execution.response = Response(
+        {
+            "id": execution.response_id,
+            "response_id": execution.response_id,
+            "agent_reference": {},
+            "object": "response",
+            "status": "in_progress",
+            "output": [],
+        }
+    )
+
+    item = {"id": "item_1", "type": "text"}
+    event = {"type": "response.output_item.added", "payload": {"item": item}}
+    execution.apply_event(event, [event])
+
+    output = execution.response.get("output", [])
+    assert isinstance(output, list)
+    assert len(output) == 1
+    assert output[0]["id"] == "item_1"
+
+
+# ---------------------------------------------------------------------------
+# T12 – build_cancelled_response
+# ---------------------------------------------------------------------------
+
+def test_build_cancelled_response() -> None:
+    response = build_cancelled_response(
+        "caresp_xxx0000000000000000000000000000",
+        {"name": "agent-a"},
+        "gpt-4o",
+    )
+    assert response is not None
+    assert response.get("status") == "cancelled"
+    assert response.get("output") == []
+    assert response.get("id") == "caresp_xxx0000000000000000000000000000"
+
+
+# ---------------------------------------------------------------------------
+# Extra – new fields exist with expected defaults
+# ---------------------------------------------------------------------------
+
+def test_new_fields_have_correct_defaults() -> None:
+    execution = _make_execution()
+    assert execution.subject is None
+    assert isinstance(execution.cancel_signal, asyncio.Event)
+    assert execution.input_items == []
+    assert execution.previous_response_id is None
+    assert execution.response_context is None
+
+
+def test_input_items_and_previous_response_id_set() -> None:
+    items = [{"id": "i1", "type": "message"}]
+    execution = _make_execution(
+        input_items=items,
+        previous_response_id="caresp_parent00000000000000000000000",
+    )
+    assert execution.input_items == items
+    assert execution.previous_response_id == "caresp_parent00000000000000000000000"
+
+
+def test_input_items_are_independent_copy() -> None:
+    original = [{"id": "i1"}]
+    execution = _make_execution(input_items=original)
+    original.append({"id": "i2"})
+    # The execution's list is the same reference passed in — plan does not require deep copy at construction
+    # just verify the field is correctly set and is a list
+    assert isinstance(execution.input_items, list)
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_responses_provider_parity.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_responses_provider_parity.py
new file mode 100644
index 000000000000..8723bd008274
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_responses_provider_parity.py
@@ -0,0 +1,32 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Parity checks for provider naming alignment with .NET contracts."""
+
+from __future__ import annotations
+
+from azure.ai.agentserver.responses.store._base import ResponseProviderProtocol
+from azure.ai.agentserver.responses.store._memory import InMemoryResponseProvider
+
+
+def test_provider_parity__in_memory_class_name_is_canonical() -> None:
+    provider = InMemoryResponseProvider()
+
+    assert isinstance(provider, InMemoryResponseProvider)
+
+
+def test_provider_parity__interface_name_is_responseproviderprotocol() -> None:
+    provider = InMemoryResponseProvider()
+
+    assert isinstance(provider, ResponseProviderProtocol)
+
+
+def test_provider_parity__dotnet_surface_methods_exist() -> None:
+    provider = InMemoryResponseProvider()
+
+    assert hasattr(provider, "create_response_async")
+    assert hasattr(provider, "get_response_async")
+    assert hasattr(provider, "update_response_async")
+    assert hasattr(provider, "delete_response_async")
+    assert hasattr(provider, "get_input_items_async")
+    assert hasattr(provider, "get_items_async")
+    assert hasattr(provider, "get_history_item_ids_async")
diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_runtime_state.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_runtime_state.py
new file mode 100644
index 000000000000..bd189eb3be21
--- /dev/null
+++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_runtime_state.py
@@ -0,0 +1,238 @@
+# Copyright (c) Microsoft Corporation.
+# Licensed under the MIT license.
+"""Unit tests for _RuntimeState with ResponseExecution values."""
+
+from __future__ import annotations
+
+import pytest
+
+from azure.ai.agentserver.responses.hosting._runtime_state import _RuntimeState
+from azure.ai.agentserver.responses.models._generated import Response
+from azure.ai.agentserver.responses.models.runtime import ResponseExecution, ResponseModeFlags
+
+
+# ---------------------------------------------------------------------------
+# Helpers
+# ---------------------------------------------------------------------------
+
+def _make_execution(
+    response_id: str,
+    *,
+    store: bool = True,
+    background: bool = False,
+    stream: bool = False,
+    status: str = "queued",
+    input_items: list[dict] | None = None,
+    previous_response_id: str | None = None,
+) -> ResponseExecution:
+    return ResponseExecution(
+        response_id=response_id,
+        mode_flags=ResponseModeFlags(stream=stream, store=store, background=background),
+        status=status,  # type: ignore[arg-type]
+        input_items=input_items,
+        previous_response_id=previous_response_id,
+    )
+
+
+# ---------------------------------------------------------------------------
+# T1 – add + get returns the same object
+# ---------------------------------------------------------------------------
+
+async def test_add_and_get() -> None:
+    state = _RuntimeState()
+    execution = _make_execution("caresp_aaa0000000000000000000000000000")
+    await state.add(execution)
+    retrieved = await state.get("caresp_aaa0000000000000000000000000000")
+    assert retrieved is execution
+
+
+# ---------------------------------------------------------------------------
+# T2 – get unknown returns None
+# ---------------------------------------------------------------------------
+
+async def test_get_nonexistent_returns_none() -> None:
+    state = _RuntimeState()
+    assert await state.get("unknown_id") is None
+
+
+# ---------------------------------------------------------------------------
+# T3 – delete marks deleted; get returns None; is_deleted returns True
+# ---------------------------------------------------------------------------
+
+async def test_delete_marks_deleted() -> None:
+    state = _RuntimeState()
+    execution = _make_execution("caresp_bbb0000000000000000000000000000")
+    await state.add(execution)
+
+    result = await state.delete("caresp_bbb0000000000000000000000000000")
+
+    assert result is True
+    assert await state.get("caresp_bbb0000000000000000000000000000") is None
+    assert await state.is_deleted("caresp_bbb0000000000000000000000000000") is True
+
+
+# ---------------------------------------------------------------------------
+# T4 – delete non-existent returns False
+# ---------------------------------------------------------------------------
+
+async def test_delete_nonexistent_returns_false() -> None:
+    state = _RuntimeState()
+    assert await state.delete("nonexistent_id") is False
+
+
+# ---------------------------------------------------------------------------
+# T5 – get_input_items single execution (no chain)
+# ---------------------------------------------------------------------------
+
+async def test_get_input_items_single() -> None:
+    state = _RuntimeState()
+    items = [{"id": "item_1", "type": "message"}]
+    execution = _make_execution(
+        "caresp_ccc0000000000000000000000000000",
+        input_items=items,
+        previous_response_id=None,
+    )
+    await state.add(execution)
+
+    result = await state.get_input_items("caresp_ccc0000000000000000000000000000")
+    assert result == items
+
+
+# ---------------------------------------------------------------------------
+# T6 – get_input_items chain walk (parent items come first)
+# ---------------------------------------------------------------------------
+
+async def test_get_input_items_chain_walk() -> None:
+    state = _RuntimeState()
+    parent_id = "caresp_parent000000000000000000000000"
+    child_id = "caresp_child0000000000000000000000000"
+
+    parent = _make_execution(parent_id, input_items=[{"id": "a"}])
+    child = _make_execution(child_id, input_items=[{"id": "b"}], previous_response_id=parent_id)
+
+    await state.add(parent)
+    await state.add(child)
+
+    result = await state.get_input_items(child_id)
+    ids = [item["id"] for item in result]
+    assert ids == ["a", "b"]
+
+
+# ---------------------------------------------------------------------------
+# T7 – get_input_items on deleted response raises ValueError
+# ---------------------------------------------------------------------------
+
+async def test_get_input_items_deleted_raises_value_error() -> None:
+    state = _RuntimeState()
+    execution = _make_execution("caresp_ddd0000000000000000000000000000")
+    await state.add(execution)
+    await state.delete("caresp_ddd0000000000000000000000000000")
+
+    with pytest.raises(ValueError, match="deleted"):
+        await state.get_input_items("caresp_ddd0000000000000000000000000000")
+
+
+# ---------------------------------------------------------------------------
+# T8 – to_snapshot with response set returns dict with required fields
+# ---------------------------------------------------------------------------
+
+def test_to_snapshot_with_response() -> None:
+    rid = "caresp_eee0000000000000000000000000000"
+    execution = _make_execution(rid, status="completed")
+    execution.response = Response(
+        {
+            "id": rid,
+            "response_id": rid,
+            "agent_reference": {"name": "test-agent"},
+            "object": "response",
+            "status": "completed",
+            "output": [],
+        }
+    )
+
+    snapshot = _RuntimeState.to_snapshot(execution)
+
+    assert isinstance(snapshot, dict)
+    assert snapshot["status"] == "completed"
+    assert snapshot["id"] == rid
+    assert snapshot["response_id"] == rid
+
+
+# ---------------------------------------------------------------------------
+# T9 – to_snapshot with no response returns minimal dict for queued state
+# ---------------------------------------------------------------------------
+
+def test_to_snapshot_queued_no_response() -> None:
+    rid = "caresp_fff0000000000000000000000000000"
+    execution
= _make_execution(rid, status="queued") + # execution.response is None + + snapshot = _RuntimeState.to_snapshot(execution) + + assert snapshot["id"] == rid + assert snapshot["response_id"] == rid + assert snapshot["object"] == "response" + assert snapshot["status"] == "queued" + + +# --------------------------------------------------------------------------- +# Extra: to_snapshot status field overrides response payload status +# --------------------------------------------------------------------------- + +def test_to_snapshot_status_matches_execution_status() -> None: + """to_snapshot should stamp status authoritatively from execution.status.""" + rid = "caresp_ggg0000000000000000000000000000" + execution = _make_execution(rid, status="in_progress") + # Give a response that says completed but execution.status says in_progress + execution.response = Response({"id": rid, "status": "completed", "output": []}) + + snapshot = _RuntimeState.to_snapshot(execution) + + assert snapshot["status"] == "in_progress" + + +# --------------------------------------------------------------------------- +# Extra: to_snapshot injects id/response_id defaults when missing from response +# --------------------------------------------------------------------------- + +def test_to_snapshot_injects_defaults_when_response_missing_ids() -> None: + rid = "caresp_hhh0000000000000000000000000000" + execution = _make_execution(rid, status="completed") + # Response without id/response_id + execution.response = Response({"status": "completed", "output": []}) + + snapshot = _RuntimeState.to_snapshot(execution) + + assert snapshot["id"] == rid + assert snapshot["response_id"] == rid + assert snapshot["object"] == "response" + + +# --------------------------------------------------------------------------- +# Extra: list_records returns all stored executions +# --------------------------------------------------------------------------- + +async def test_list_records_returns_all() -> None: + state =
_RuntimeState() + e1 = _make_execution("caresp_iii0000000000000000000000000000") + e2 = _make_execution("caresp_jjj0000000000000000000000000000") + await state.add(e1) + await state.add(e2) + + records = await state.list_records() + assert len(records) == 2 + ids = {r.response_id for r in records} + assert ids == {"caresp_iii0000000000000000000000000000", "caresp_jjj0000000000000000000000000000"} + + +# --------------------------------------------------------------------------- +# T1 (Task 7.1) – _ExecutionRecord is no longer exported from _runtime_state +# --------------------------------------------------------------------------- + +def test_import_does_not_expose_execution_record() -> None: + """_ExecutionRecord was deleted in Task 7.1; the module must not export it.""" + import importlib + mod = importlib.import_module("azure.ai.agentserver.responses.hosting._runtime_state") + assert not hasattr(mod, "_ExecutionRecord"), ( + "_ExecutionRecord should have been removed from _runtime_state in Phase 7 / Task 7.1" + ) diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_sse_writer.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_sse_writer.py new file mode 100644 index 000000000000..ed5021e3ee76 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_sse_writer.py @@ -0,0 +1,59 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""Unit tests for SSE encoding helpers.""" + +from __future__ import annotations + +from azure.ai.agentserver.responses.streaming import _sse + + +class _FakeEvent: + def __init__(self, type: str, sequence_number: int, text: str) -> None: + self.type = type + self.sequence_number = sequence_number + self.text = text + + +def test_sse_writer__encodes_event_and_data_lines_with_separator() -> None: + event = _FakeEvent(type="response.created", sequence_number=0, text="hello") + + encoded = _sse.encode_sse_event(event) # type: ignore[arg-type] + assert encoded.startswith("event: response.created\n") + assert "data:" in encoded + assert encoded.endswith("\n\n") + + +def test_sse_writer__encodes_multiline_data_as_multiple_data_lines() -> None: + event = _FakeEvent(type="response.output_text.delta", sequence_number=1, text="line1\nline2") + + encoded = _sse.encode_sse_event(event) # type: ignore[arg-type] + assert "data: line1" in encoded + assert "data: line2" in encoded + + +def test_sse_writer__keep_alive_comment_frame_format() -> None: + keep_alive_frame = _sse.encode_keep_alive_comment() # type: ignore[attr-defined] + assert keep_alive_frame == ": keep-alive\n\n" + + +def test_sse_writer__injects_monotonic_sequence_numbers() -> None: + import json as _json + + _sse.new_stream_counter() + + first_event = _FakeEvent(type="response.created", sequence_number=-1, text="a") + second_event = _FakeEvent(type="response.in_progress", sequence_number=-1, text="b") + + encoded_first = _sse.encode_sse_event(first_event) # type: ignore[arg-type] + encoded_second = _sse.encode_sse_event(second_event) # type: ignore[arg-type] + + def _extract_sequence_number(encoded: str) -> int: + data_line = next(line for line in encoded.splitlines() if line.startswith("data:")) + payload = _json.loads(data_line[len("data:"):].strip()) + return int(payload["sequence_number"]) + + seq_first = _extract_sequence_number(encoded_first) + seq_second = _extract_sequence_number(encoded_second) + + assert 
seq_first == 0, f"first sequence_number must be 0 for a new stream, got {seq_first}" + assert seq_second == 1, f"second sequence_number must be 1 for a new stream, got {seq_second}" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validation.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validation.py new file mode 100644 index 000000000000..679b0fce6e8d --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validation.py @@ -0,0 +1,49 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Unit tests for validation helpers.""" + +from __future__ import annotations + +import pytest + +from azure.ai.agentserver.responses.hosting._validation import parse_create_response, to_api_error_response, validate_create_response +from azure.ai.agentserver.responses.models.errors import RequestValidationError + + +class _FakeCreateRequest: + def __init__( + self, + store: bool | None = True, + background: bool = False, + stream: bool | None = False, + stream_options: object | None = None, + model: str | None = "gpt-4o-mini", + ) -> None: + self.store = store + self.background = background + self.stream = stream + self.stream_options = stream_options + self.model = model + + +def test_validation__non_object_payload_returns_invalid_request() -> None: + with pytest.raises(RequestValidationError) as exc_info: + parse_create_response(["not", "an", "object"]) # type: ignore[arg-type] + + assert exc_info.value.code == "invalid_request" + + +def test_validation__cross_field_stream_options_requires_stream_flag() -> None: + request = _FakeCreateRequest(stream=False, stream_options={"foo": "bar"}) + + with pytest.raises(RequestValidationError) as exc_info: + validate_create_response(request) # type: ignore[arg-type] + + assert exc_info.value.param == "stream" + + +def test_validation__unexpected_exception_maps_to_bad_request_category() -> None: + error = ValueError("bad payload") + 
envelope = to_api_error_response(error) + + assert envelope.error.type == "invalid_request_error" diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_emitter.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_emitter.py new file mode 100644 index 000000000000..ade64e77ef91 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_emitter.py @@ -0,0 +1,251 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Tests for validator emitter behavior.""" + +from __future__ import annotations + +import re +from types import ModuleType + +from scripts.validator_emitter import build_validator_module + + +def _load_module(code: str) -> ModuleType: + module = ModuleType("generated_validators") + exec(code, module.__dict__) + return module + + +def test_emitter_generates_required_property_check() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "required": ["model"], + "properties": {"model": {"type": "string"}}, + } + } + module = _load_module(build_validator_module(schemas, ["CreateResponse"])) + errors = module.validate_CreateResponse({}) + assert any(e["path"] == "$.model" and "missing" in e["message"].lower() for e in errors) + + +def test_emitter_generates_class_without_schema_definition() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "required": ["model"], + "properties": {"model": {"type": "string"}}, + } + } + code = build_validator_module(schemas, ["CreateResponse"]) + assert "class CreateResponseValidator" in code + assert "\nSCHEMAS =" not in code + + +def test_emitter_uses_generated_enum_values_when_available() -> None: + schemas = { + "OpenAI.ToolType": { + "anyOf": [ + {"type": "string"}, + {"type": "string", "enum": ["function", "file_search"]}, + ] + } + } + code = build_validator_module(schemas, ["OpenAI.ToolType"]) + assert "_enum_values('ToolType')" in code + + +def 
test_emitter_deduplicates_string_union_error_message() -> None: + schemas = { + "OpenAI.InputItemType": { + "anyOf": [ + {"type": "string"}, + {"type": "string", "enum": ["message", "item_reference"]}, + ] + } + } + + module = _load_module(build_validator_module(schemas, ["OpenAI.InputItemType"])) + errors = module.validate_OpenAI_InputItemType(123) + assert errors + assert errors[0]["path"] == "$" + assert "InputItemType" in errors[0]["message"] + assert "got integer" in errors[0]["message"].lower() + assert "string, string" not in errors[0]["message"] + + +def test_emitter_generates_nullable_handling() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": {"instructions": {"type": "string", "nullable": True}}, + } + } + module = _load_module(build_validator_module(schemas, ["CreateResponse"])) + assert module.validate_CreateResponse({"instructions": None}) == [] + + +def test_emitter_generates_primitive_type_checks_and_enum_literal() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "model": {"type": "string", "enum": ["gpt-4o", "gpt-4.1"]}, + "temperature": {"type": "number"}, + "stream": {"type": "boolean"}, + }, + } + } + module = _load_module(build_validator_module(schemas, ["CreateResponse"])) + errors = module.validate_CreateResponse({"model": "bad", "temperature": "hot", "stream": "yes"}) + assert any(e["path"] == "$.model" and "allowed" in e["message"].lower() for e in errors) + assert any(e["path"] == "$.temperature" and "number" in e["message"].lower() for e in errors) + assert any(e["path"] == "$.stream" and "boolean" in e["message"].lower() for e in errors) + + +def test_emitter_generates_nested_delegate_calls() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": {"metadata": {"$ref": "#/components/schemas/Metadata"}}, + }, + "Metadata": { + "type": "object", + "required": ["id"], + "properties": {"id": {"type": "string"}}, + }, + } + module = 
_load_module(build_validator_module(schemas, ["CreateResponse"])) + errors = module.validate_CreateResponse({"metadata": {}}) + assert any(e["path"] == "$.metadata.id" for e in errors) + + +def test_emitter_generates_union_kind_check_for_oneof_anyof() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "tool_choice": { + "anyOf": [ + {"type": "string"}, + {"$ref": "#/components/schemas/ToolChoiceParam"}, + ] + } + }, + }, + "ToolChoiceParam": { + "type": "object", + "required": ["type"], + "properties": {"type": {"type": "string"}}, + }, + } + module = _load_module(build_validator_module(schemas, ["CreateResponse"])) + errors = module.validate_CreateResponse({"tool_choice": 123}) + assert any(e["path"] == "$.tool_choice" and "expected one of" in e["message"].lower() for e in errors) + + +def test_emitter_validates_create_response_input_property() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "input": { + "anyOf": [ + {"type": "string"}, + { + "type": "array", + "items": {"$ref": "#/components/schemas/InputItem"}, + }, + ] + } + }, + }, + "InputItem": { + "type": "object", + "required": ["type"], + "properties": {"type": {"type": "string"}}, + }, + } + + module = _load_module(build_validator_module(schemas, ["CreateResponse"])) + + # Invalid input kind should fail the CreateResponse.input union check. + invalid_errors = module.validate_CreateResponse({"input": 123}) + assert any(e["path"] == "$.input" and "expected one of" in e["message"].lower() for e in invalid_errors) + + # Supported input kinds should pass. 
+ assert module.validate_CreateResponse({"input": "hello"}) == [] + assert module.validate_CreateResponse({"input": [{"type": "message"}]}) == [] + + +def test_emitter_generates_discriminator_dispatch() -> None: + schemas = { + "Tool": { + "type": "object", + "discriminator": { + "propertyName": "type", + "mapping": { + "function": "#/components/schemas/FunctionTool", + }, + }, + "properties": {"type": {"type": "string"}}, + }, + "FunctionTool": { + "type": "object", + "required": ["name"], + "properties": { + "type": {"type": "string"}, + "name": {"type": "string"}, + }, + }, + } + module = _load_module(build_validator_module(schemas, ["Tool"])) + errors = module.validate_Tool({"type": "function"}) + assert any(e["path"] == "$.name" and "missing" in e["message"].lower() for e in errors) + + +def test_emitter_generates_array_and_map_checks() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "tools": { + "type": "array", + "items": {"$ref": "#/components/schemas/Tool"}, + }, + "metadata": { + "type": "object", + "additionalProperties": {"type": "string"}, + }, + }, + }, + "Tool": { + "type": "object", + "required": ["name"], + "properties": {"name": {"type": "string"}}, + }, + } + module = _load_module(build_validator_module(schemas, ["CreateResponse"])) + errors = module.validate_CreateResponse({"tools": [{}], "metadata": {"a": 1}}) + assert any(e["path"] == "$.tools[0].name" for e in errors) + assert any(e["path"] == "$.metadata.a" for e in errors) + + +def test_emitter_uses_descriptive_helper_function_names() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "model": {"type": "string"}, + "metadata": { + "type": "object", + "additionalProperties": {"type": "string"}, + }, + }, + } + } + + code = build_validator_module(schemas, ["CreateResponse"]) + assert "_validate_CreateResponse_model" in code + assert "_validate_CreateResponse_metadata" in code + assert re.search(r"_validate_branch_\d+", 
code) is None diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_generator_contract.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_generator_contract.py new file mode 100644 index 000000000000..954a29189af5 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_generator_contract.py @@ -0,0 +1,111 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Tests for validator generator contract behavior.""" + +from __future__ import annotations + +import subprocess +import sys +from pathlib import Path + + +def _script_path() -> Path: + return Path(__file__).resolve().parents[2] / "scripts" / "generate_validators.py" + + +def _minimal_spec() -> str: + return """{ + "paths": { + "/responses": { + "post": { + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/CreateResponse" + } + } + } + } + } + } + }, + "components": { + "schemas": { + "CreateResponse": { + "type": "object", + "required": ["model"], + "properties": { + "model": {"type": "string"} + } + } + } + } +} +""" + + +def test_generator_requires_cli_args() -> None: + proc = subprocess.run( + [sys.executable, str(_script_path())], + capture_output=True, + text=True, + check=False, + ) + assert proc.returncode != 0 + assert "--input" in proc.stderr + assert "--output" in proc.stderr + + +def test_generated_file_has_autogen_header(tmp_path: Path) -> None: + spec_path = tmp_path / "spec.json" + out_path = tmp_path / "_validators.py" + spec_path.write_text(_minimal_spec(), encoding="utf-8") + + proc = subprocess.run( + [ + sys.executable, + str(_script_path()), + "--input", + str(spec_path), + "--output", + str(out_path), + "--root-schemas", + "CreateResponse", + ], + capture_output=True, + text=True, + check=False, + ) + + assert proc.returncode == 0, proc.stderr + content = out_path.read_text(encoding="utf-8") + assert 
content.startswith("# pylint: disable=line-too-long,useless-suppression,too-many-lines") + assert "# Code generated by Microsoft (R) Python Code Generator." in content + + +def test_generation_is_deterministic_for_same_input(tmp_path: Path) -> None: + spec_path = tmp_path / "spec.json" + out_path = tmp_path / "_validators.py" + spec_path.write_text(_minimal_spec(), encoding="utf-8") + + cmd = [ + sys.executable, + str(_script_path()), + "--input", + str(spec_path), + "--output", + str(out_path), + "--root-schemas", + "CreateResponse", + ] + + first = subprocess.run(cmd, capture_output=True, text=True, check=False) + assert first.returncode == 0, first.stderr + first_output = out_path.read_text(encoding="utf-8") + + second = subprocess.run(cmd, capture_output=True, text=True, check=False) + assert second.returncode == 0, second.stderr + second_output = out_path.read_text(encoding="utf-8") + + assert first_output == second_output diff --git a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_generator_e2e.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_generator_e2e.py new file mode 100644 index 000000000000..adaffd242655 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_generator_e2e.py @@ -0,0 +1,194 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. 
+"""End-to-end tests for validator generator CLI output.""" + +from __future__ import annotations + +import importlib.util +import subprocess +import sys +from pathlib import Path + + +def _script_path() -> Path: + return Path(__file__).resolve().parents[2] / "scripts" / "generate_validators.py" + + +def _spec() -> str: + return """{ + "paths": { + "/responses": { + "post": { + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/CreateResponse" + } + } + } + } + } + } + }, + "components": { + "schemas": { + "CreateResponse": { + "type": "object", + "required": ["model"], + "properties": { + "model": {"type": "string"}, + "metadata": {"$ref": "#/components/schemas/Metadata"} + } + }, + "Metadata": { + "type": "object", + "additionalProperties": {"type": "string"} + } + } + } +} +""" + + +def test_generator_emits_valid_python_module(tmp_path: Path) -> None: + spec_path = tmp_path / "spec.json" + out_path = tmp_path / "_validators.py" + spec_path.write_text(_spec(), encoding="utf-8") + + proc = subprocess.run( + [ + sys.executable, + str(_script_path()), + "--input", + str(spec_path), + "--output", + str(out_path), + "--root-schemas", + "CreateResponse", + ], + capture_output=True, + text=True, + check=False, + ) + assert proc.returncode == 0, proc.stderr + + source = out_path.read_text(encoding="utf-8") + compile(source, str(out_path), "exec") + + +def test_generated_module_exposes_expected_validate_functions(tmp_path: Path) -> None: + spec_path = tmp_path / "spec.json" + out_path = tmp_path / "_validators.py" + spec_path.write_text(_spec(), encoding="utf-8") + + proc = subprocess.run( + [ + sys.executable, + str(_script_path()), + "--input", + str(spec_path), + "--output", + str(out_path), + "--root-schemas", + "CreateResponse", + ], + capture_output=True, + text=True, + check=False, + ) + assert proc.returncode == 0, proc.stderr + + module_name = "generated_validator_module" + spec = 
importlib.util.spec_from_file_location(module_name, out_path) + assert spec is not None and spec.loader is not None + module = importlib.util.module_from_spec(spec) + spec.loader.exec_module(module) + + assert hasattr(module, "validate_CreateResponse") + + +def test_regeneration_overwrites_previous_output_cleanly(tmp_path: Path) -> None: + spec_path = tmp_path / "spec.json" + out_path = tmp_path / "_validators.py" + spec_path.write_text(_spec(), encoding="utf-8") + + out_path.write_text("stale-content", encoding="utf-8") + + proc = subprocess.run( + [ + sys.executable, + str(_script_path()), + "--input", + str(spec_path), + "--output", + str(out_path), + "--root-schemas", + "CreateResponse", + ], + capture_output=True, + text=True, + check=False, + ) + assert proc.returncode == 0, proc.stderr + + content = out_path.read_text(encoding="utf-8") + assert "stale-content" not in content + assert content.startswith("# pylint: disable=line-too-long,useless-suppression,too-many-lines") + + +def test_generator_handles_inline_create_response_schema(tmp_path: Path) -> None: + spec_path = tmp_path / "spec-inline.json" + out_path = tmp_path / "_validators.py" + spec_path.write_text( + """{ + "paths": { + "/responses": { + "post": { + "requestBody": { + "content": { + "application/json": { + "schema": { + "anyOf": [ + { + "type": "object", + "required": ["model"], + "properties": { + "model": {"type": "string"} + } + } + ] + } + } + } + } + } + } + }, + "components": { + "schemas": {} + } +} +""", + encoding="utf-8", + ) + + proc = subprocess.run( + [ + sys.executable, + str(_script_path()), + "--input", + str(spec_path), + "--output", + str(out_path), + "--root-schemas", + "CreateResponse", + ], + capture_output=True, + text=True, + check=False, + ) + assert proc.returncode == 0, proc.stderr + content = out_path.read_text(encoding="utf-8") + assert "def _validate_CreateResponse(" in content + assert "class CreateResponseValidator" in content diff --git 
a/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_schema_walker.py b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_schema_walker.py new file mode 100644 index 000000000000..35c756095913 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/tests/unit/test_validator_schema_walker.py @@ -0,0 +1,86 @@ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT license. +"""Tests for OpenAPI schema walker behavior used by validator generation.""" + +from __future__ import annotations + +from scripts.validator_schema_walker import SchemaWalker, discover_post_request_roots, resolve_ref + + +def test_resolve_ref_extracts_schema_name() -> None: + assert resolve_ref("#/components/schemas/CreateResponse") == "CreateResponse" + + +def test_schema_walker_collects_reachable_from_root_schema() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "metadata": {"$ref": "#/components/schemas/Metadata"}, + }, + }, + "Metadata": { + "type": "object", + "properties": {"id": {"type": "string"}}, + }, + } + + walker = SchemaWalker(schemas) + walker.walk("CreateResponse") + + assert "CreateResponse" in walker.reachable + assert "Metadata" in walker.reachable + + +def test_schema_walker_discovers_inline_post_request_schema() -> None: + spec = { + "paths": { + "/responses": { + "post": { + "requestBody": { + "content": { + "application/json": { + "schema": { + "$ref": "#/components/schemas/CreateResponse", + } + } + } + } + } + } + } + } + + assert discover_post_request_roots(spec) == ["CreateResponse"] + + +def test_schema_walker_handles_oneof_anyof_ref_branches() -> None: + schemas = { + "CreateResponse": { + "type": "object", + "properties": { + "input": { + "oneOf": [ + {"$ref": "#/components/schemas/InputText"}, + {"$ref": "#/components/schemas/InputImage"}, + ] + }, + "tool_choice": { + "anyOf": [ + {"type": "string"}, + {"$ref": "#/components/schemas/ToolChoiceParam"}, + ] + }, + }, 
+ }, + "InputText": {"type": "string"}, + "InputImage": {"type": "object", "properties": {"url": {"type": "string"}}}, + "ToolChoiceParam": {"type": "object", "properties": {"type": {"type": "string"}}}, + } + + walker = SchemaWalker(schemas) + walker.walk("CreateResponse") + + assert "InputText" in walker.reachable + assert "InputImage" in walker.reachable + assert "ToolChoiceParam" in walker.reachable diff --git a/sdk/agentserver/azure-ai-agentserver-responses/type_spec/client.tsp b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/client.tsp new file mode 100644 index 000000000000..024b7b9489ad --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/client.tsp @@ -0,0 +1,161 @@ +import "./main.tsp"; + +using Azure.ClientGenerator.Core; +using Azure.Core.Experimental; + +// Map all OpenAI base types into our SDK namespace +@clientNamespace("Azure.AI.AgentServer.Responses.Sdk.Models") +namespace OpenAI { + // The responses view does not export ItemResourceType (only the full src does). + // Bridge the gap with an alias so the Azure augmentations can reference it. 
+ alias ItemResourceType = OutputItemType; +} + +// Map Azure.AI.Projects augmentation types into our SDK namespace +#suppress "@azure-tools/typespec-azure-core/experimental-feature" "" +@clientNamespace("Azure.AI.AgentServer.Responses.Sdk.Models") +namespace Azure.AI.Projects { + // Propagate "sequence_number" to base of stream events + @@copyProperties(OpenAI.ResponseStreamEvent, + { + sequence_number: integer, + } + ); + + // Remove created_by from specific models to avoid BinaryData/string mismatch + // with the base OutputItem.CreatedBy (BinaryData) type + @@withoutOmittedProperties(OpenAI.OutputItemFunctionShellCallOutput, "created_by"); + @@withoutOmittedProperties(OpenAI.OutputItemFunctionShellCall, "created_by"); + @@withoutOmittedProperties(OpenAI.OutputItemCompactionBody, "created_by"); + @@withoutOmittedProperties(OpenAI.OutputItemApplyPatchToolCallOutput, "created_by"); + @@withoutOmittedProperties(OpenAI.OutputItemApplyPatchToolCall, "created_by"); + + // Remove "object" from DeleteResponseResult to work around codegen bug + // (TypeSpec emitter generates `= "response"` string default for ResponseObjectType enum) + @@withoutOmittedProperties(Azure.AI.Projects.DeleteResponseResult, "object"); + + // ============================================================================ + // Public constructors: mark models as input+output so the C# emitter generates + // public compact constructors. Consumers need these to construct events in their + // IResponseHandler implementations. 
+ // ============================================================================ + + // --- ResponseStreamEvent subtypes (53 concrete types) --- + @@usage(OpenAI.ResponseAudioDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseAudioDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseAudioTranscriptDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseAudioTranscriptDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCodeInterpreterCallCodeDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCodeInterpreterCallCodeDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCodeInterpreterCallCompletedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCodeInterpreterCallInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCodeInterpreterCallInterpretingEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCompletedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseContentPartAddedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseContentPartDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCreatedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCustomToolCallInputDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseCustomToolCallInputDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseErrorEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseFailedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseFileSearchCallCompletedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseFileSearchCallInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseFileSearchCallSearchingEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseFunctionCallArgumentsDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseFunctionCallArgumentsDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseImageGenCallCompletedEvent, Usage.input | Usage.output); + 
@@usage(OpenAI.ResponseImageGenCallGeneratingEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseImageGenCallInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseImageGenCallPartialImageEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseIncompleteEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPCallArgumentsDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPCallArgumentsDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPCallCompletedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPCallFailedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPCallInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPListToolsCompletedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPListToolsFailedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseMCPListToolsInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseOutputItemAddedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseOutputItemDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseOutputTextAnnotationAddedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseQueuedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseReasoningSummaryPartAddedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseReasoningSummaryPartDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseReasoningSummaryTextDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseReasoningSummaryTextDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseReasoningTextDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseReasoningTextDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseRefusalDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseRefusalDoneEvent, Usage.input | Usage.output); + 
@@usage(OpenAI.ResponseTextDeltaEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseTextDoneEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseWebSearchCallCompletedEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseWebSearchCallInProgressEvent, Usage.input | Usage.output); + @@usage(OpenAI.ResponseWebSearchCallSearchingEvent, Usage.input | Usage.output); + + // --- Response, ResponseError, and CreateResponse --- + @@usage(OpenAI.Response, Usage.input | Usage.output); + @@usage(OpenAI.ResponseError, Usage.input | Usage.output); + @@usage(OpenAI.CreateResponse, Usage.input | Usage.output); + + // --- OpenAI OutputItem subtypes (24 concrete types) --- + @@usage(OpenAI.OutputItemApplyPatchToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemApplyPatchToolCallOutput, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemCodeInterpreterToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemCompactionBody, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemComputerToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemComputerToolCallOutput, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemCustomToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemCustomToolCallOutput, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemFileSearchToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemFunctionShellCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemFunctionShellCallOutput, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemFunctionToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemImageGenToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemLocalShellToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemLocalShellToolCallOutput, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemMcpApprovalRequest, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemMcpApprovalResponseResource, Usage.input | 
Usage.output); + @@usage(OpenAI.OutputItemMcpListTools, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemMcpToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemMessage, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemOutputMessage, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemReasoningItem, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemWebSearchToolCall, Usage.input | Usage.output); + @@usage(OpenAI.OutputItemFunctionToolCallOutput, Usage.input | Usage.output); + + // --- OutputContent subtypes (3 concrete types) --- + @@usage(OpenAI.OutputContentOutputTextContent, Usage.input | Usage.output); + @@usage(OpenAI.OutputContentReasoningTextContent, Usage.input | Usage.output); + @@usage(OpenAI.OutputContentRefusalContent, Usage.input | Usage.output); + + // --- OutputMessageContent subtypes (2 concrete types) --- + @@usage(OpenAI.OutputMessageContentOutputTextContent, Usage.input | Usage.output); + @@usage(OpenAI.OutputMessageContentRefusalContent, Usage.input | Usage.output); + + // --- Azure.AI.Projects OutputItem subtypes (22 concrete types) --- + @@usage(Azure.AI.Projects.A2AToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.A2AToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.AzureAISearchToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.AzureAISearchToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.AzureFunctionToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.AzureFunctionToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.BingCustomSearchToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.BingCustomSearchToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.BingGroundingToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.BingGroundingToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.BrowserAutomationToolCall, Usage.input | 
Usage.output); + @@usage(Azure.AI.Projects.BrowserAutomationToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.FabricDataAgentToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.FabricDataAgentToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.MemorySearchToolCallItemResource, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.OAuthConsentRequestOutputItem, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.OpenApiToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.OpenApiToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.SharepointGroundingToolCall, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.SharepointGroundingToolCallOutput, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.StructuredOutputsOutputItem, Usage.input | Usage.output); + @@usage(Azure.AI.Projects.WorkflowActionOutputItem, Usage.input | Usage.output); +} diff --git a/sdk/agentserver/azure-ai-agentserver-responses/type_spec/main.tsp b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/main.tsp new file mode 100644 index 000000000000..f6d3a43eb65d --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/main.tsp @@ -0,0 +1,23 @@ +// Azure AI Responses Server SDK — TypeSpec view +// +// This view selectively imports the OpenAI responses routes and models from the +// upstream Azure REST API spec and applies local customizations (client.tsp) to +// generate Python model classes in our SDK namespace. 
+// +// Pattern based on: +// https://github.com/Azure/azure-rest-api-specs/.../sdk-service-agents-contracts + +// OpenAI base models + operations used by the local responses contract view +import "@azure-tools/openai-typespec/views/client-emitters"; + +// OpenAI responses routes + models (routes reference the model types, making them visible to the emitter) +import "./TempTypeSpecFiles/openai-responses/routes.tsp"; + +// Common service definition (namespace Azure.AI.Projects, Versions enum) +import "./TempTypeSpecFiles/common/service.tsp"; + +// Common models (FoundryFeaturesOptInKeys, operation utilities, etc.) +import "./TempTypeSpecFiles/common/models.tsp"; + +// Local customizations (namespace mapping, sequence_number) +import "./client.tsp"; diff --git a/sdk/agentserver/azure-ai-agentserver-responses/type_spec/tsp-location.yaml b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/tsp-location.yaml new file mode 100644 index 000000000000..a5f940991b6c --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/tsp-location.yaml @@ -0,0 +1,9 @@ +directory: specification/ai-foundry/data-plane/Foundry +commit: c1c762593f2c91877f6f76384bc8315404b64e8f +repo: Azure/azure-rest-api-specs +additionalDirectories: + - specification/ai-foundry/data-plane/Foundry/src/openai-responses + - specification/ai-foundry/data-plane/Foundry/src/openai-conversations + - specification/ai-foundry/data-plane/Foundry/src/tools + - specification/ai-foundry/data-plane/Foundry/src/common + - specification/ai-foundry/data-plane/Foundry/src/memory-stores diff --git a/sdk/agentserver/azure-ai-agentserver-responses/type_spec/tspconfig.yaml b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/tspconfig.yaml new file mode 100644 index 000000000000..8875bfb7dfa8 --- /dev/null +++ b/sdk/agentserver/azure-ai-agentserver-responses/type_spec/tspconfig.yaml @@ -0,0 +1,23 @@ +emit: + - "@typespec/openapi3" + - "@azure-tools/typespec-python" +options: + "@typespec/openapi3": 
+ emitter-output-dir: "{output-dir}" + "@azure-tools/typespec-python": + emitter-output-dir: "{output-dir}" + package-name: "azure-ai-agentserver-responses" + package-mode: "dataplane" + flavor: "azure" + unreferenced-types-handling: keepAll + generate-test: false + generate-sample: false +imports: + - "@typespec/http" + - "@typespec/openapi" + - "@azure-tools/typespec-azure-core" + - "@azure-tools/typespec-azure-core/experimental" + - "@azure-tools/typespec-client-generator-core" + - "@typespec/versioning" + - "@typespec/events" + - "@typespec/sse" diff --git a/sdk/agentserver/ci.yml b/sdk/agentserver/ci.yml index bb2d6f479b00..83b80cf4ed19 100644 --- a/sdk/agentserver/ci.yml +++ b/sdk/agentserver/ci.yml @@ -40,9 +40,9 @@ extends: Selection: sparse GenerateVMJobs: true Artifacts: + - name: azure-ai-agentserver-invocations + safeName: azureaiagentserverinvocations - name: azure-ai-agentserver-core safeName: azureaiagentservercore - - name: azure-ai-agentserver-agentframework - safeName: azureaiagentserveragentframework - - name: azure-ai-agentserver-langgraph - safeName: azureaiagentserverlanggraph + - name: azure-ai-agentserver-responses + safeName: azureaiagentserverresponses
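The long `@@usage(...)` lists in client.tsp above all use the augment-decorator pattern from `@azure-tools/typespec-client-generator-core`. A minimal sketch of that pattern, using a hypothetical `ExampleEvent` model (not part of this diff):

```typespec
import "@azure-tools/typespec-client-generator-core";

using Azure.ClientGenerator.Core;

model ExampleEvent {
  type: string;
}

// Mark the model as both an input and an output type so the emitter
// generates it even when no operation in the selected view references
// it directly (this complements `unreferenced-types-handling: keepAll`
// in tspconfig.yaml).
@@usage(ExampleEvent, Usage.input | Usage.output);
```

Keeping these augments in a local client.tsp rather than in the upstream spec lets the SDK customize code generation without forking the source-of-truth TypeSpec referenced by tsp-location.yaml.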