diff --git a/sdk/ai/azure-ai-projects/.github/skills/README.md b/sdk/ai/azure-ai-projects/.github/skills/README.md
new file mode 100644
index 000000000000..7ea120484d10
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/.github/skills/README.md
@@ -0,0 +1,33 @@
+# Copilot skills for azure-ai-projects development
+
+## Prerequisites
+
+* Clone the `azure-sdk-for-python` repo to your local machine, if you don't already have it:
+  ```
+  git clone https://github.com/Azure/azure-sdk-for-python.git
+  ```
+* Change to the directory `sdk\ai\azure-ai-projects`.
+* Switch to the current feature branch: `git switch feature/azure-ai-projects/2.2.0`.
+* Make sure you don't have any files edited or added in this branch (clean `git status` state).
+
+## Emit from TypeSpec and create a PR
+
+### Using GitHub Copilot in VSCode
+
+* Open VSCode in the current folder.
+* Open the Copilot chat window ("Toggle Chat").
+* Make sure you are in "Agent" mode.
+* Start typing `/azure-ai-projects` and press Tab to autocomplete it to `/azure-ai-projects-emit-from-typespec`, then press Enter.
+* Answer some questions and approve execution to go through the workflow.
+
+### Using Copilot CLI or Agency Copilot CLI
+
+* Install [GitHub Copilot CLI](https://docs.github.com/copilot/how-tos/copilot-cli/set-up-copilot-cli/install-copilot-cli) or [Agency Copilot CLI](https://aka.ms/agency) (VPN required) if you don't already have it.
+* Run Copilot CLI by typing `copilot`.
+* Start typing `/azure-ai-projects` and press Tab to autocomplete it to `/azure-ai-projects-emit-from-typespec`, then press Enter.
+* Answer some questions and approve execution to go through the workflow.
+
+
+
+
+
diff --git a/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects-emit-from-typespec/SKILL.md b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects-emit-from-typespec/SKILL.md
new file mode 100644
index 000000000000..286a43c36e4d
--- /dev/null
+++ b/sdk/ai/azure-ai-projects/.github/skills/azure-ai-projects-emit-from-typespec/SKILL.md
@@ -0,0 +1,194 @@
+---
+name: azure-ai-projects-emit-from-typespec
+license: MIT
+metadata:
+  version: "1.0.0"
+  distribution: local
+description: "Emit the azure-ai-projects Python SDK from TypeSpec, apply post-emitter fixes, update the changelog, and create a Pull Request. WHEN: \"emit SDK from TypeSpec\", \"generate azure-ai-projects SDK\", \"update azure-ai-projects from TypeSpec\", \"emit from TypeSpec\", \"regenerate azure-ai-projects\". DO NOT USE FOR: other Azure SDK packages, manual code edits without TypeSpec. INVOKES: azsdk-common-generate-sdk-locally skill, post-emitter-fixes.cmd script, git commands, gh CLI for PR creation."
+compatibility:
+  requires: "azure-sdk-mcp server, local azure-sdk-for-python clone, git, gh CLI"
+---
+
+# Emit azure-ai-projects Python SDK from TypeSpec
+
+This skill guides Copilot through emitting the azure-ai-projects Python SDK from TypeSpec,
+applying post-emitter fixes, updating the changelog, installing the package from sources, and creating a Pull Request.
+
+**Working directory:** `sdk/ai/azure-ai-projects`
+
+**Skills:** This workflow relies on skills defined under `.github/skills/` at the root of the repository. Use those skills for SDK generation, building, changelog updates, and other SDK lifecycle operations instead of running commands directly. In particular:
+
+- **`azsdk-common-generate-sdk-locally`** – For generating the SDK from TypeSpec, building, running checks/tests, and updating the changelog, metadata, and version.
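The default branch-name convention described in Step 1a below can be sketched as follows. This is a minimal illustration only; `default_branch_name` is a hypothetical helper, not part of the skill:

```python
from datetime import datetime

def default_branch_name(github_userid: str, now: datetime) -> str:
    # Default format from Step 1a: <github-userid>/emit-from-typespec-DD-MM-HHMM
    return f"{github_userid}/emit-from-typespec-{now.strftime('%d-%m-%H%M')}"

# Example from the skill text: GitHub ID "dargilco", May 1st, 2026 at 8:13am
print(default_branch_name("dargilco", datetime(2026, 5, 1, 8, 13)))
# dargilco/emit-from-typespec-01-05-0813
```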
+
+---
+
+## Step 1: Gather information from the user
+
+Ask the user the following questions **one at a time**, waiting for each answer before proceeding.
+
+### 1a. Topic branch name
+
+Ask the user to choose **one** of the following three options for the target topic branch:
+
+1. **Create a new topic branch (with default branch name)** – Create a new topic branch for the emitted changes. If selected, the default branch name `<github-userid>/emit-from-typespec-<DD-MM-HHMM>` will be used, where `<github-userid>` is the user's GitHub ID and `DD-MM-HHMM` is the current day, month, hour, and minute. For example, if the GitHub ID is "dargilco" and the current date and time is May 1st, 2026 at 8:13am, the default branch name would be `dargilco/emit-from-typespec-01-05-0813`. This should be the default option, and the default branch name should be displayed. If the user presses Enter without typing anything, this option is selected.
+
+2. **Create a new topic branch (branch name given by user)** – Ask the user for the branch name. Mention that a common format is `<github-userid>/<branch-name>`. If the user enters the branch name `feature/azure-ai-projects/2.2.0`, stop and report that they cannot emit directly to the current feature branch.
+
+3. **Emit to current branch** – Emit directly to the current branch without creating a new topic branch. This is not common, but may be necessary if the user is re-running this workflow after a previous failure, where the topic branch was already created. If the current branch is named `feature/azure-ai-projects/2.2.0`, stop and report that they cannot emit directly to the current feature branch.
+
+### 1b. TypeSpec source
+
+Ask the user to choose **one** of the following three options for the TypeSpec source:
+
+1.
**Latest commit on `feature/foundry-release`** – Automatically find the latest commit to the `feature/foundry-release` branch in [Azure/azure-rest-api-specs](https://github.com/Azure/azure-rest-api-specs) that touched files under `specification/ai-foundry/data-plane/Foundry`, and use that commit hash. This should be the default option. If the user presses Enter without typing anything, this option is selected.
+
+2. **Local TypeSpec folder** – Emit from a local clone of the [azure-rest-api-specs](https://github.com/Azure/azure-rest-api-specs) repository. If selected, ask for the **full folder path** to the TypeSpec project. This is the folder ending with `\specification\ai-foundry\data-plane\Foundry`. If it does not end with that string, stop and report the error to the user. Do not continue.
+
+3. **TypeSpec commit hash** – Emit from a specific commit in the [azure-rest-api-specs](https://github.com/Azure/azure-rest-api-specs) repository. If selected, ask for the **full commit SHA** (40 characters).
+
+---
+
+## Step 2: Record the current branch
+
+Before creating the topic branch, record the name of the **current Git branch**. This is the branch that the topic branch will be created from, and the branch the PR will target.
+
+```
+git branch --show-current
+```
+
+Save this as `BASE_BRANCH`.
+
+---
+
+## Step 3: Create the topic branch
+
+Create the topic branch off the current branch and switch to it:
+
+```
+git fetch
+git switch -c <new-branch-name>
+```
+
+Replace `<new-branch-name>` with the name provided by the user in Step 1a.
+
+---
+
+## Step 4: Emit SDK from TypeSpec
+
+Use the **`azsdk-common-generate-sdk-locally`** skill to generate the SDK code. The skill knows how to invoke `azsdk_package_generate_code` and related MCP tools.
+
+Provide the skill with the TypeSpec source selected by the user (the latest-commit option from Step 1b resolves to a commit hash). This is either:
+
+- **Local folder:** Pass the local spec repo path for local generation.
+- **Commit hash:** Update `commit:` in `tsp-location.yaml` to the full SHA first, then invoke the skill for generation.
+
+Note:
+- You are only allowed to use the `tsp-client update` command. Do not use any of the other `tsp-client` commands.
+- If you are generating from a local TypeSpec folder, do not edit the file `tsp-location.yaml`. Leave it as is. It should not be used by the emitter.
+- If you are generating from a local TypeSpec folder, make sure that the local folder path you provide to `tsp-client update --local-spec-repo` ends with `specification\ai-foundry\data-plane\Foundry`.
+- **If the generation fails**, stop and report the error to the user. Do not continue.
+
+---
+
+## Step 5: Run post-emitter fixes
+
+After a successful emit, run the post-emitter fix script located in the `sdk/ai/azure-ai-projects` folder:
+
+```
+post-emitter-fixes.cmd
+```
+
+This script applies azure-ai-projects-specific corrections to the emitted code (restores `pyproject.toml`, fixes enum names, patches Sphinx doc-string issues, and runs `black` formatting).
+
+**If the script fails**, stop and report the error to the user. Do not continue. Do not attempt to analyze the script failures and fix them with Copilot. The script should be fixed by the engineering team if it is not working.
+
+---
+
+## Step 6: Fix patched code related to preview feature headers
+
+The emitted code may have introduced another beta sub-client (a new property on class `BetaOperations`). It may have also added another enum value to the existing internal class `_FoundryFeaturesOptInKeys`. This means that the client library needs to set a new HTTP request header when making REST API calls to the service, to opt in to the new service features which are still in preview.
If that's the case, do the following:
+
+* Update the dictionary `_BETA_OPERATION_FEATURE_HEADERS` defined in `azure\ai\projects\models\_patch.py` to include a new key-value pair that maps the new beta sub-client name to the proper value from `_FoundryFeaturesOptInKeys`. If no new beta sub-client was introduced, but a new enum value was added to `_FoundryFeaturesOptInKeys`, update one of the existing key-value pairs in `_BETA_OPERATION_FEATURE_HEADERS` to a comma-separated join of multiple values from `_FoundryFeaturesOptInKeys`.
+
+* Make a similar change to the dictionary `EXPECTED_FOUNDRY_FEATURES` defined in the test file `tests\foundry_features_header\foundry_features_header_test_base.py`: add a new key-value pair if a new beta sub-client was introduced, or update an existing key-value pair to include the new enum value if no new beta sub-client was introduced.
+
+* Finally, look at the two files `azure\ai\projects\operations\_patch.py` and `azure\ai\projects\aio\operations\_patch.py`. They define the public `BetaOperations` classes for the sync and async clients. To support a new sub-client, add a new property to this class with a proper doc string, update the import statement at the top of the file to import the new sub-client class, and update the `__all__` statement at the bottom of the file to include the new sub-client class name. Follow the examples you see there for `BetaDatasetsOperations` or `BetaSkillsOperations`.
+
+If a new enum value was added to `_AgentDefinitionOptInKeys`, print a note on screen that mentions which value was added, and tell the user that a review is needed to make sure this new value is properly used. Otherwise, continue on.
+
+---
+
+## Step 7: Update samples and tests
+
+If there were any breaking changes in existing APIs, like class or method renames:
+* update the patched code accordingly in the client library to reflect those changes.
Changes should be made to Python source files whose names start with `_patch`, under the `azure\ai\projects` folder.
+* update the samples accordingly to reflect those changes. Changes should be made under the `sdk/ai/azure-ai-projects/samples` folder.
+* update the tests accordingly to reflect those changes. Changes should be made under the `sdk/ai/azure-ai-projects/tests` folder.
+
+---
+
+## Step 8: Install package from sources
+
+In the folder `sdk\ai\azure-ai-projects`, run `pip install -e .` to install the package from sources. If there are any errors, stop and report the error to the user. Do not continue.
+
+---
+
+## Step 9: Update CHANGELOG.md
+
+Use the **`azsdk-common-generate-sdk-locally`** skill's changelog capability (`azsdk_package_update_changelog_content`) to update `CHANGELOG.md` in the `sdk/ai/azure-ai-projects` folder with a summary of changes from the TypeSpec emit. Some guidelines to follow:
+* Start by examining the public SDK API surface of the latest released version of the azure-ai-projects package. The source code for this version can be found in the `main` branch of the `azure-sdk-for-python` repository, in the folder `sdk\ai\azure-ai-projects`.
+* Then compare it to the public SDK API surface of the current version in this topic branch.
+* Look at the existing changelog entry for the latest version (if one exists) and edit or add to it to capture all the changes you see. If a changelog entry does not exist for the current version at the top of `CHANGELOG.md`, create a new one.
+* If a new method was added, there is no need to list all the new classes that define the inputs and outputs of the method. It's enough to mention that the new method was added.
+* Show the user the proposed changelog entry and ask for confirmation or edits before saving.
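The comparison described in Step 9 amounts to diffing two sets of exported names. As a minimal sketch (the function is hypothetical, and the example names are merely illustrative of the renames in this release):

```python
def diff_api_surface(old: set[str], new: set[str]) -> dict[str, list[str]]:
    """Return names added to and removed from the public API surface."""
    return {"added": sorted(new - old), "removed": sorted(old - new)}

# Illustrative model names, echoing the kind of renames in this release
old_api = {"PendingUploadResponse", "SkillObject", "WorkIQPreviewTool"}
new_api = {"PendingUploadResult", "SkillDetails", "WorkIQPreviewTool", "FabricIQPreviewTool"}

changes = diff_api_surface(old_api, new_api)
print(changes["added"])    # ['FabricIQPreviewTool', 'PendingUploadResult', 'SkillDetails']
print(changes["removed"])  # ['PendingUploadResponse', 'SkillObject']
```

A name appearing in both lists usually signals a rename, which the changelog should record as such rather than as separate additions and removals.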
+
+---
+
+## Step 10: Commit and push
+
+Stage all changes (excluding file names that start with `.env`), commit, and push the topic branch:
+
+```
+git add -A -- ':!.env*'
+git commit -m "Emit SDK from TypeSpec and apply post-emitter fixes
+
+Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>"
+git push -u origin <new-branch-name>
+```
+
+---
+
+## Step 11: Create a Pull Request
+
+Create a draft PR from the **topic branch** to the **base branch** (recorded in Step 2):
+
+```
+gh pr create --draft --base <BASE_BRANCH> --head <new-branch-name> --assignee @me --title "<title>" --body "<body>"
+```
+
+- **Title:** Use a descriptive title such as `[azure-ai-projects] Emit SDK from TypeSpec ()`.
+- **Body:** Include which TypeSpec source was used and a summary of the changelog entry.
+
+You must show the user the resulting PR URL on screen when done, before you continue to the next step.
+
+Open a new tab in the default browser and navigate to the PR URL.
+
+---
+
+## Step 12: Optionally run tests locally
+
+Prompt the user with this message: "Tests will run as part of the Pull Request. However, you can optionally run tests locally in a Python virtual environment, right now. It will take a few minutes. Do you want to run tests locally? (yes/no)"
+
+If the user answers "yes", run all tests from recordings. Follow these guidelines:
+* Run tests in a local Python virtual environment. Create this virtual environment if it does not already exist:
+  ```
+  python -m venv .venv
+  ```
+  and activate it:
+  ```
+  .venv\Scripts\activate
+  ```
+* Show test progress on screen as tests are run.
+
diff --git a/sdk/ai/azure-ai-projects/CHANGELOG.md b/sdk/ai/azure-ai-projects/CHANGELOG.md
index d6a0253d9f0a..cd1529baa26a 100644
--- a/sdk/ai/azure-ai-projects/CHANGELOG.md
+++ b/sdk/ai/azure-ai-projects/CHANGELOG.md
@@ -1,5 +1,39 @@
 # Release History
 
+## 2.2.0 (Unreleased)
+
+### Features Added
+
+* New Agent tools `WorkIQPreviewTool`, `FabricIQPreviewTool`, and `ToolboxSearchPreviewTool`.
+* New optional string properties `description` and `name` added to Agent tools that did not already have them.
+
+### Breaking Changes
+
+Breaking changes in beta methods:
+* Required keyword `isolation_key` removed from the `.beta.agents.create_session()` and `.beta.agents.delete_session()` methods.
+* Argument `body` in the methods `.beta.evaluation_taxonomies.create()` and `.beta.evaluation_taxonomies.update()` renamed to `taxonomy`.
+* Argument `body` in the method `.beta.skills.create_from_package()` renamed to `content`.
+
+Breaking changes in classes used by beta features:
+* Required property `isolation_key_source` removed from class `EntraAuthorizationScheme`.
+* Optional property `code_configuration` removed from class `HostedAgentDefinition`.
+* Renamed class `AgentEndpoint` to `AgentEndpointConfig`.
+* Renamed class `DeleteSkillResponse` to `DeleteSkillResult`.
+* Renamed class `PendingUploadResponse` to `PendingUploadResult`.
+* Renamed class `SessionDirectoryListResponse` to `SessionDirectoryListResult`.
+* Renamed class `SessionFileWriteResponse` to `SessionFileWriteResult`.
+* Renamed class `SkillObject` to `SkillDetails`.
+* Renamed class `Target` to `EvaluationTarget`.
+* Renamed class `TargetConfig` to `RedTeamTargetConfig`.
+
+### Bugs Fixed
+
+* Fixed telemetry instrumentor to correctly call `is_recording()` as a method on spans, ensuring non-recording spans are properly skipped (e.g., when sampling is configured) ([GitHub issue 46544](https://github.com/Azure/azure-sdk-for-python/issues/46544)).
+
+### Sample updates
+
+* Added Toolbox tool-search samples `sample_toolboxes_with_search_preview.py` and `sample_toolboxes_with_search_preview_async.py`, demonstrating creating a Toolbox version with `ToolboxSearchPreviewTool` and invoking `MCPTool`.
+ ## 2.1.0 (2026-04-20) ### Features Added diff --git a/sdk/ai/azure-ai-projects/README.md b/sdk/ai/azure-ai-projects/README.md index b2f754b91fd2..d5d8454c3861 100644 --- a/sdk/ai/azure-ai-projects/README.md +++ b/sdk/ai/azure-ai-projects/README.md @@ -13,6 +13,7 @@ resources in your Microsoft Foundry Project. Use it to: * Browser Automation (Preview) * Code Interpreter * Computer Use (Preview) + * FabricIQ (Preview) * File Search * Function Tool * Image Generation @@ -21,8 +22,10 @@ resources in your Microsoft Foundry Project. Use it to: * Microsoft SharePoint (Preview) * Model Context Protocol (MCP) * OpenAPI + * Toolbox Search (Preview) * Web Search * Web Search (Preview) + * WorkIQ (Preview) * **Get an OpenAI client** using `.get_openai_client()` method to run Responses, Conversations, Evaluations and Fine-Tuning operations with your Agent. * **Manage memory stores (preview)** for Agent conversations, using `.beta.memory_stores` operations. * **Explore additional evaluation tools (some in preview)** to assess the performance of your generative AI application, using `.evaluation_rules`, diff --git a/sdk/ai/azure-ai-projects/apiview-properties.json b/sdk/ai/azure-ai-projects/apiview-properties.json index f6caeb745b80..0dc2d9c6c6ad 100644 --- a/sdk/ai/azure-ai-projects/apiview-properties.json +++ b/sdk/ai/azure-ai-projects/apiview-properties.json @@ -12,8 +12,8 @@ "azure.ai.projects.models.AgentClusterInsightResult": "Azure.AI.Projects.AgentClusterInsightResult", "azure.ai.projects.models.AgentDefinition": "Azure.AI.Projects.AgentDefinition", "azure.ai.projects.models.AgentDetails": "Azure.AI.Projects.AgentObject", - "azure.ai.projects.models.AgentEndpoint": "Azure.AI.Projects.AgentEndpoint", "azure.ai.projects.models.AgentEndpointAuthorizationScheme": "Azure.AI.Projects.AgentEndpointAuthorizationScheme", + "azure.ai.projects.models.AgentEndpointConfig": "Azure.AI.Projects.AgentEndpointConfig", "azure.ai.projects.models.BaseCredentials": 
"Azure.AI.Projects.BaseCredentials", "azure.ai.projects.models.AgenticIdentityPreviewCredentials": "Azure.AI.Projects.AgenticIdentityPreviewCredentials", "azure.ai.projects.models.AgentIdentity": "Azure.AI.Projects.AgentIdentity", @@ -29,7 +29,7 @@ "azure.ai.projects.models.ApplyPatchToolParam": "OpenAI.ApplyPatchToolParam", "azure.ai.projects.models.ApproximateLocation": "OpenAI.ApproximateLocation", "azure.ai.projects.models.AutoCodeInterpreterToolParam": "OpenAI.AutoCodeInterpreterToolParam", - "azure.ai.projects.models.Target": "Azure.AI.Projects.Target", + "azure.ai.projects.models.EvaluationTarget": "Azure.AI.Projects.Target", "azure.ai.projects.models.AzureAIAgentTarget": "Azure.AI.Projects.AzureAIAgentTarget", "azure.ai.projects.models.AzureAIModelTarget": "Azure.AI.Projects.AzureAIModelTarget", "azure.ai.projects.models.Index": "Azure.AI.Projects.Index", @@ -41,7 +41,7 @@ "azure.ai.projects.models.AzureFunctionDefinitionFunction": "Azure.AI.Projects.AzureFunctionDefinition.function.anonymous", "azure.ai.projects.models.AzureFunctionStorageQueue": "Azure.AI.Projects.AzureFunctionStorageQueue", "azure.ai.projects.models.AzureFunctionTool": "Azure.AI.Projects.AzureFunctionTool", - "azure.ai.projects.models.TargetConfig": "Azure.AI.Projects.TargetConfig", + "azure.ai.projects.models.RedTeamTargetConfig": "Azure.AI.Projects.RedTeamTargetConfig", "azure.ai.projects.models.AzureOpenAIModelConfiguration": "Azure.AI.Projects.AzureOpenAIModelConfiguration", "azure.ai.projects.models.BingCustomSearchConfiguration": "Azure.AI.Projects.BingCustomSearchConfiguration", "azure.ai.projects.models.BingCustomSearchPreviewTool": "Azure.AI.Projects.BingCustomSearchPreviewTool", @@ -64,7 +64,6 @@ "azure.ai.projects.models.ClusterTokenUsage": "Azure.AI.Projects.ClusterTokenUsage", "azure.ai.projects.models.EvaluatorDefinition": "Azure.AI.Projects.EvaluatorDefinition", "azure.ai.projects.models.CodeBasedEvaluatorDefinition": "Azure.AI.Projects.CodeBasedEvaluatorDefinition", - 
"azure.ai.projects.models.CodeConfiguration": "Azure.AI.Projects.CodeConfiguration", "azure.ai.projects.models.CodeInterpreterTool": "OpenAI.CodeInterpreterTool", "azure.ai.projects.models.ComparisonFilter": "OpenAI.ComparisonFilter", "azure.ai.projects.models.CompoundFilter": "OpenAI.CompoundFilter", @@ -95,13 +94,11 @@ "azure.ai.projects.models.DeleteAgentResponse": "Azure.AI.Projects.DeleteAgentResponse", "azure.ai.projects.models.DeleteAgentVersionResponse": "Azure.AI.Projects.DeleteAgentVersionResponse", "azure.ai.projects.models.DeleteMemoryStoreResult": "Azure.AI.Projects.DeleteMemoryStoreResponse", - "azure.ai.projects.models.DeleteSkillResponse": "Azure.AI.Projects.DeleteSkillResponse", + "azure.ai.projects.models.DeleteSkillResult": "Azure.AI.Projects.DeleteSkillResponse", "azure.ai.projects.models.Deployment": "Azure.AI.Projects.Deployment", "azure.ai.projects.models.EmbeddingConfiguration": "Azure.AI.Projects.EmbeddingConfiguration", "azure.ai.projects.models.EntraAuthorizationScheme": "Azure.AI.Projects.EntraAuthorizationScheme", "azure.ai.projects.models.EntraIDCredentials": "Azure.AI.Projects.EntraIDCredentials", - "azure.ai.projects.models.IsolationKeySource": "Azure.AI.Projects.IsolationKeySource", - "azure.ai.projects.models.EntraIsolationKeySource": "Azure.AI.Projects.EntraIsolationKeySource", "azure.ai.projects.models.EvalResult": "Azure.AI.Projects.EvalResult", "azure.ai.projects.models.EvalRunResultCompareItem": "Azure.AI.Projects.EvalRunResultCompareItem", "azure.ai.projects.models.EvalRunResultComparison": "Azure.AI.Projects.EvalRunResultComparison", @@ -120,6 +117,7 @@ "azure.ai.projects.models.EvaluatorMetric": "Azure.AI.Projects.EvaluatorMetric", "azure.ai.projects.models.EvaluatorVersion": "Azure.AI.Projects.EvaluatorVersion", "azure.ai.projects.models.FabricDataAgentToolParameters": "Azure.AI.Projects.FabricDataAgentToolParameters", + "azure.ai.projects.models.FabricIQPreviewTool": "Azure.AI.Projects.FabricIQPreviewTool", 
"azure.ai.projects.models.FieldMapping": "Azure.AI.Projects.FieldMapping", "azure.ai.projects.models.FileDatasetVersion": "Azure.AI.Projects.FileDatasetVersion", "azure.ai.projects.models.FileSearchTool": "OpenAI.FileSearchTool", @@ -130,7 +128,6 @@ "azure.ai.projects.models.FunctionShellToolParamEnvironmentContainerReferenceParam": "OpenAI.FunctionShellToolParamEnvironmentContainerReferenceParam", "azure.ai.projects.models.FunctionShellToolParamEnvironmentLocalEnvironmentParam": "OpenAI.FunctionShellToolParamEnvironmentLocalEnvironmentParam", "azure.ai.projects.models.FunctionTool": "OpenAI.FunctionTool", - "azure.ai.projects.models.HeaderIsolationKeySource": "Azure.AI.Projects.HeaderIsolationKeySource", "azure.ai.projects.models.TelemetryEndpointAuth": "Azure.AI.Projects.TelemetryEndpointAuth", "azure.ai.projects.models.HeaderTelemetryEndpointAuth": "Azure.AI.Projects.HeaderTelemetryEndpointAuth", "azure.ai.projects.models.HostedAgentDefinition": "Azure.AI.Projects.HostedAgentDefinition", @@ -186,7 +183,7 @@ "azure.ai.projects.models.TelemetryEndpoint": "Azure.AI.Projects.TelemetryEndpoint", "azure.ai.projects.models.OtlpTelemetryEndpoint": "Azure.AI.Projects.OtlpTelemetryEndpoint", "azure.ai.projects.models.PendingUploadRequest": "Azure.AI.Projects.PendingUploadRequest", - "azure.ai.projects.models.PendingUploadResponse": "Azure.AI.Projects.PendingUploadResponse", + "azure.ai.projects.models.PendingUploadResult": "Azure.AI.Projects.PendingUploadResponse", "azure.ai.projects.models.PromptAgentDefinition": "Azure.AI.Projects.PromptAgentDefinition", "azure.ai.projects.models.PromptAgentDefinitionTextOptions": "Azure.AI.Projects.PromptAgentDefinitionTextOptions", "azure.ai.projects.models.PromptBasedEvaluatorDefinition": "Azure.AI.Projects.PromptBasedEvaluatorDefinition", @@ -202,12 +199,12 @@ "azure.ai.projects.models.Schedule": "Azure.AI.Projects.Schedule", "azure.ai.projects.models.ScheduleRun": "Azure.AI.Projects.ScheduleRun", 
"azure.ai.projects.models.SessionDirectoryEntry": "Azure.AI.Projects.SessionDirectoryEntry", - "azure.ai.projects.models.SessionDirectoryListResponse": "Azure.AI.Projects.SessionDirectoryListResponse", - "azure.ai.projects.models.SessionFileWriteResponse": "Azure.AI.Projects.SessionFileWriteResponse", + "azure.ai.projects.models.SessionDirectoryListResult": "Azure.AI.Projects.SessionDirectoryListResponse", + "azure.ai.projects.models.SessionFileWriteResult": "Azure.AI.Projects.SessionFileWriteResponse", "azure.ai.projects.models.SessionLogEvent": "Azure.AI.Projects.SessionLogEvent", "azure.ai.projects.models.SharepointGroundingToolParameters": "Azure.AI.Projects.SharepointGroundingToolParameters", "azure.ai.projects.models.SharepointPreviewTool": "Azure.AI.Projects.SharepointPreviewTool", - "azure.ai.projects.models.SkillObject": "Azure.AI.Projects.SkillObject", + "azure.ai.projects.models.SkillDetails": "Azure.AI.Projects.SkillObject", "azure.ai.projects.models.SkillReferenceParam": "OpenAI.SkillReferenceParam", "azure.ai.projects.models.ToolChoiceParam": "OpenAI.ToolChoiceParam", "azure.ai.projects.models.SpecificApplyPatchParam": "OpenAI.SpecificApplyPatchParam", @@ -223,6 +220,7 @@ "azure.ai.projects.models.TextResponseFormatText": "OpenAI.TextResponseFormatConfigurationResponseFormatText", "azure.ai.projects.models.ToolboxObject": "Azure.AI.Projects.ToolboxObject", "azure.ai.projects.models.ToolboxPolicies": "Azure.AI.Projects.ToolboxPolicies", + "azure.ai.projects.models.ToolboxSearchPreviewTool": "Azure.AI.Projects.ToolboxSearchPreviewTool", "azure.ai.projects.models.ToolboxVersionObject": "Azure.AI.Projects.ToolboxVersionObject", "azure.ai.projects.models.ToolChoiceAllowed": "OpenAI.ToolChoiceAllowed", "azure.ai.projects.models.ToolChoiceCodeInterpreter": "OpenAI.ToolChoiceCodeInterpreter", @@ -249,7 +247,6 @@ "azure.ai.projects.models.WeeklyRecurrenceSchedule": "Azure.AI.Projects.WeeklyRecurrenceSchedule", 
"azure.ai.projects.models.WorkflowAgentDefinition": "Azure.AI.Projects.WorkflowAgentDefinition", "azure.ai.projects.models.WorkIQPreviewTool": "Azure.AI.Projects.WorkIQPreviewTool", - "azure.ai.projects.models.WorkIQPreviewToolParameters": "Azure.AI.Projects.WorkIQPreviewToolParameters", "azure.ai.projects.models.AgentObjectType": "Azure.AI.Projects.AgentObjectType", "azure.ai.projects.models.AgentKind": "Azure.AI.Projects.AgentKind", "azure.ai.projects.models.ToolType": "OpenAI.ToolType", @@ -278,7 +275,6 @@ "azure.ai.projects.models.VersionSelectorType": "Azure.AI.Projects.VersionSelectorType", "azure.ai.projects.models.AgentEndpointProtocol": "Azure.AI.Projects.AgentEndpointProtocol", "azure.ai.projects.models.AgentEndpointAuthorizationSchemeType": "Azure.AI.Projects.AgentEndpointAuthorizationSchemeType", - "azure.ai.projects.models.IsolationKeySourceKind": "Azure.AI.Projects.IsolationKeySourceKind", "azure.ai.projects.models.VersionIndicatorType": "Azure.AI.Projects.VersionIndicatorType", "azure.ai.projects.models.AgentSessionStatus": "Azure.AI.Projects.AgentSessionStatus", "azure.ai.projects.models.PageOrder": "Azure.AI.Projects.PageOrder", diff --git a/sdk/ai/azure-ai-projects/assets.json b/sdk/ai/azure-ai-projects/assets.json index 87ee354cd9f9..40aa8f703495 100644 --- a/sdk/ai/azure-ai-projects/assets.json +++ b/sdk/ai/azure-ai-projects/assets.json @@ -2,5 +2,5 @@ "AssetsRepo": "Azure/azure-sdk-assets", "AssetsRepoPrefixPath": "python", "TagPrefix": "python/ai/azure-ai-projects", - "Tag": "python/ai/azure-ai-projects_64257c2deb" + "Tag": "python/ai/azure-ai-projects_1e9c04cf49" } diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py b/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py index ccb75164d3bc..8abefc77cabd 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/_version.py @@ -6,4 +6,4 @@ # Changes may cause incorrect behavior and will be lost if the code is 
regenerated. # -------------------------------------------------------------------------- -VERSION = "2.1.0" +VERSION = "2.2.0" diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py index 41e22e8fa2e8..916512d21b0b 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_operations.py @@ -2131,7 +2131,7 @@ async def pending_upload( *, content_type: str = "application/json", **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -2143,8 +2143,8 @@ async def pending_upload( :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ @@ -2157,7 +2157,7 @@ async def pending_upload( *, content_type: str = "application/json", **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -2169,8 +2169,8 @@ async def pending_upload( :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: PendingUploadResponse. 
The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ @@ -2183,7 +2183,7 @@ async def pending_upload( *, content_type: str = "application/json", **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -2195,8 +2195,8 @@ async def pending_upload( :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ @@ -2207,7 +2207,7 @@ async def pending_upload( version: str, pending_upload_request: Union[_models.PendingUploadRequest, JSON, IO[bytes]], **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -2218,8 +2218,8 @@ async def pending_upload( types: PendingUploadRequest, JSON, IO[bytes] Required. :type pending_upload_request: ~azure.ai.projects.models.PendingUploadRequest or JSON or IO[bytes] - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. 
The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -2234,7 +2234,7 @@ async def pending_upload( _params = kwargs.pop("params", {}) or {} content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.PendingUploadResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.PendingUploadResult] = kwargs.pop("cls", None) content_type = content_type or "application/json" _content = None @@ -2277,7 +2277,7 @@ async def pending_upload( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.PendingUploadResponse, response.json()) + deserialized = _deserialize(_models.PendingUploadResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -3018,7 +3018,7 @@ async def patch_agent_details( agent_name: str, *, content_type: str = "application/merge-patch+json", - agent_endpoint: Optional[_models.AgentEndpoint] = None, + agent_endpoint: Optional[_models.AgentEndpointConfig] = None, agent_card: Optional[_models.AgentCard] = None, **kwargs: Any ) -> _models.AgentDetails: @@ -3030,7 +3030,7 @@ async def patch_agent_details( Default value is "application/merge-patch+json". :paramtype content_type: str :keyword agent_endpoint: The endpoint configuration for the agent. Default value is None. - :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpoint + :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpointConfig :keyword agent_card: Optional agent card for the agent. Default value is None. :paramtype agent_card: ~azure.ai.projects.models.AgentCard :return: AgentDetails. 
The AgentDetails is compatible with MutableMapping @@ -3080,7 +3080,7 @@ async def patch_agent_details( agent_name: str, body: Union[JSON, IO[bytes]] = _Unset, *, - agent_endpoint: Optional[_models.AgentEndpoint] = None, + agent_endpoint: Optional[_models.AgentEndpointConfig] = None, agent_card: Optional[_models.AgentCard] = None, **kwargs: Any ) -> _models.AgentDetails: @@ -3091,7 +3091,7 @@ async def patch_agent_details( :param body: Is either a JSON type or a IO[bytes] type. Required. :type body: JSON or IO[bytes] :keyword agent_endpoint: The endpoint configuration for the agent. Default value is None. - :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpoint + :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpointConfig :keyword agent_card: Optional agent card for the agent. Default value is None. :paramtype agent_card: ~azure.ai.projects.models.AgentCard :return: AgentDetails. The AgentDetails is compatible with MutableMapping @@ -3171,7 +3171,6 @@ async def create_session( self, agent_name: str, *, - isolation_key: str, version_indicator: _models.VersionIndicator, content_type: str = "application/json", agent_session_id: Optional[str] = None, @@ -3183,9 +3182,6 @@ async def create_session( :param agent_name: The name of the agent to create a session for. Required. :type agent_name: str - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword version_indicator: Determines which agent version backs the session. Required. :paramtype version_indicator: ~azure.ai.projects.models.VersionIndicator :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. 
@@ -3201,7 +3197,7 @@ async def create_session( @overload async def create_session( - self, agent_name: str, body: JSON, *, isolation_key: str, content_type: str = "application/json", **kwargs: Any + self, agent_name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any ) -> _models.AgentSessionResource: """Creates a new session for an agent endpoint. The endpoint resolves the backing agent version from ``version_indicator`` and enforces session ownership using the provided isolation key for @@ -3211,9 +3207,6 @@ async def create_session( :type agent_name: str :param body: Required. :type body: JSON - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str @@ -3224,13 +3217,7 @@ async def create_session( @overload async def create_session( - self, - agent_name: str, - body: IO[bytes], - *, - isolation_key: str, - content_type: str = "application/json", - **kwargs: Any + self, agent_name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any ) -> _models.AgentSessionResource: """Creates a new session for an agent endpoint. The endpoint resolves the backing agent version from ``version_indicator`` and enforces session ownership using the provided isolation key for @@ -3240,9 +3227,6 @@ async def create_session( :type agent_name: str :param body: Required. :type body: IO[bytes] - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". 
:paramtype content_type: str @@ -3257,7 +3241,6 @@ async def create_session( agent_name: str, body: Union[JSON, IO[bytes]] = _Unset, *, - isolation_key: str, version_indicator: _models.VersionIndicator = _Unset, agent_session_id: Optional[str] = None, **kwargs: Any @@ -3270,9 +3253,6 @@ async def create_session( :type agent_name: str :param body: Is either a JSON type or a IO[bytes] type. Required. :type body: JSON or IO[bytes] - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword version_indicator: Determines which agent version backs the session. Required. :paramtype version_indicator: ~azure.ai.projects.models.VersionIndicator :keyword agent_session_id: Optional caller-provided session ID. If specified, it must be unique @@ -3310,7 +3290,6 @@ async def create_session( _request = build_beta_agents_create_session_request( agent_name=agent_name, - isolation_key=isolation_key, content_type=content_type, api_version=self._config.api_version, content=_content, @@ -3422,7 +3401,7 @@ async def get_session(self, agent_name: str, session_id: str, **kwargs: Any) -> return deserialized # type: ignore @distributed_trace_async - async def delete_session(self, agent_name: str, session_id: str, *, isolation_key: str, **kwargs: Any) -> None: + async def delete_session(self, agent_name: str, session_id: str, **kwargs: Any) -> None: """Deletes a session synchronously. Returns 204 No Content when the session is deleted or does not exist. @@ -3430,9 +3409,6 @@ async def delete_session(self, agent_name: str, session_id: str, *, isolation_ke :type agent_name: str :param session_id: The session identifier. Required. :type session_id: str - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. 
- :paramtype isolation_key: str :return: None :rtype: None :raises ~azure.core.exceptions.HttpResponseError: @@ -3453,7 +3429,6 @@ async def delete_session(self, agent_name: str, session_id: str, *, isolation_ke _request = build_beta_agents_delete_session_request( agent_name=agent_name, session_id=session_id, - isolation_key=isolation_key, api_version=self._config.api_version, headers=_headers, params=_params, @@ -3679,7 +3654,7 @@ async def get_session_log_stream( @distributed_trace_async async def _upload_session_file( self, agent_name: str, agent_session_id: str, content: bytes, *, path: str, **kwargs: Any - ) -> _models.SessionFileWriteResponse: + ) -> _models.SessionFileWriteResult: """Upload a file to the session sandbox via binary stream. Maximum file size is 50 MB. Uploads exceeding this limit return 413 Payload Too Large. @@ -3692,9 +3667,8 @@ async def _upload_session_file( :keyword path: The destination file path within the sandbox, relative to the session home directory. Required. :paramtype path: str - :return: SessionFileWriteResponse. The SessionFileWriteResponse is compatible with - MutableMapping - :rtype: ~azure.ai.projects.models.SessionFileWriteResponse + :return: SessionFileWriteResult. 
The SessionFileWriteResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SessionFileWriteResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -3709,7 +3683,7 @@ async def _upload_session_file( _params = kwargs.pop("params", {}) or {} content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/octet-stream")) - cls: ClsType[_models.SessionFileWriteResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.SessionFileWriteResult] = kwargs.pop("cls", None) _content = content @@ -3752,7 +3726,7 @@ async def _upload_session_file( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SessionFileWriteResponse, response.json()) + deserialized = _deserialize(_models.SessionFileWriteResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -3833,7 +3807,7 @@ async def download_session_file( @distributed_trace_async async def get_session_files( self, agent_name: str, agent_session_id: str, *, path: str, **kwargs: Any - ) -> _models.SessionDirectoryListResponse: + ) -> _models.SessionDirectoryListResult: """List files and directories at a given path in the session sandbox. Returns only the immediate children of the specified directory (non-recursive). @@ -3843,9 +3817,9 @@ async def get_session_files( :type agent_session_id: str :keyword path: The directory path to list, relative to the session home directory. Required. :paramtype path: str - :return: SessionDirectoryListResponse. The SessionDirectoryListResponse is compatible with + :return: SessionDirectoryListResult. 
The SessionDirectoryListResult is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SessionDirectoryListResponse + :rtype: ~azure.ai.projects.models.SessionDirectoryListResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -3859,7 +3833,7 @@ async def get_session_files( _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[_models.SessionDirectoryListResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.SessionDirectoryListResult] = kwargs.pop("cls", None) _request = build_beta_agents_get_session_files_request( agent_name=agent_name, @@ -3898,7 +3872,7 @@ async def get_session_files( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SessionDirectoryListResponse, response.json()) + deserialized = _deserialize(_models.SessionDirectoryListResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -4196,14 +4170,14 @@ async def delete(self, name: str, **kwargs: Any) -> None: @overload async def create( - self, name: str, body: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: ~azure.ai.projects.models.EvaluationTaxonomy + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". 
:paramtype content_type: str @@ -4214,14 +4188,14 @@ async def create( @overload async def create( - self, name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: JSON, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: JSON + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: JSON :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str @@ -4232,14 +4206,14 @@ async def create( @overload async def create( - self, name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: IO[bytes], *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: IO[bytes] + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: IO[bytes] :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str @@ -4250,15 +4224,15 @@ async def create( @distributed_trace_async async def create( - self, name: str, body: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any + self, name: str, taxonomy: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, JSON, - IO[bytes] Required. 
- :type body: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] + :param taxonomy: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, + JSON, IO[bytes] Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] :return: EvaluationTaxonomy. The EvaluationTaxonomy is compatible with MutableMapping :rtype: ~azure.ai.projects.models.EvaluationTaxonomy :raises ~azure.core.exceptions.HttpResponseError: @@ -4279,10 +4253,10 @@ async def create( content_type = content_type or "application/json" _content = None - if isinstance(body, (IOBase, bytes)): - _content = body + if isinstance(taxonomy, (IOBase, bytes)): + _content = taxonomy else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + _content = json.dumps(taxonomy, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore _request = build_beta_evaluation_taxonomies_create_request( name=name, @@ -4326,14 +4300,14 @@ async def create( @overload async def update( - self, name: str, body: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: ~azure.ai.projects.models.EvaluationTaxonomy + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". 
:paramtype content_type: str @@ -4344,14 +4318,14 @@ async def update( @overload async def update( - self, name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: JSON, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: JSON + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: JSON :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str @@ -4362,14 +4336,14 @@ async def update( @overload async def update( - self, name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: IO[bytes], *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: IO[bytes] + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: IO[bytes] :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str @@ -4380,15 +4354,15 @@ async def update( @distributed_trace_async async def update( - self, name: str, body: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any + self, name: str, taxonomy: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, JSON, - IO[bytes] Required. 
- :type body: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] + :param taxonomy: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, + JSON, IO[bytes] Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] :return: EvaluationTaxonomy. The EvaluationTaxonomy is compatible with MutableMapping :rtype: ~azure.ai.projects.models.EvaluationTaxonomy :raises ~azure.core.exceptions.HttpResponseError: @@ -4409,10 +4383,10 @@ async def update( content_type = content_type or "application/json" _content = None - if isinstance(body, (IOBase, bytes)): - _content = body + if isinstance(taxonomy, (IOBase, bytes)): + _content = taxonomy else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + _content = json.dumps(taxonomy, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore _request = build_beta_evaluation_taxonomies_update_request( name=name, @@ -8061,7 +8035,7 @@ async def create( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Creates a skill. :keyword name: The unique name of the skill. Required. @@ -8081,13 +8055,15 @@ async def create( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload - async def create(self, body: JSON, *, content_type: str = "application/json", **kwargs: Any) -> _models.SkillObject: + async def create( + self, body: JSON, *, content_type: str = "application/json", **kwargs: Any + ) -> _models.SkillDetails: """Creates a skill. :param body: Required. @@ -8095,15 +8071,15 @@ async def create(self, body: JSON, *, content_type: str = "application/json", ** :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload async def create( self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Creates a skill. :param body: Required. @@ -8111,8 +8087,8 @@ async def create( :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @@ -8126,7 +8102,7 @@ async def create( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Creates a skill. :param body: Is either a JSON type or a IO[bytes] type. Required. 
@@ -8145,8 +8121,8 @@ async def create( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -8161,7 +8137,7 @@ async def create( _params = kwargs.pop("params", {}) or {} content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) if body is _Unset: if name is _Unset: @@ -8211,7 +8187,7 @@ async def create( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -8219,13 +8195,13 @@ async def create( return deserialized # type: ignore @distributed_trace_async - async def create_from_package(self, body: bytes, **kwargs: Any) -> _models.SkillObject: + async def create_from_package(self, content: bytes, **kwargs: Any) -> _models.SkillDetails: """Creates a skill from a zip package. - :param body: The zip package used to create the skill. Required. - :type body: bytes - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :param content: The zip package used to create the skill. Required. + :type content: bytes + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -8240,9 +8216,9 @@ async def create_from_package(self, body: bytes, **kwargs: Any) -> _models.Skill _params = kwargs.pop("params", {}) or {} content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/zip")) - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) - _content = body + _content = content _request = build_beta_skills_create_from_package_request( content_type=content_type, @@ -8280,7 +8256,7 @@ async def create_from_package(self, body: bytes, **kwargs: Any) -> _models.Skill if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -8288,13 +8264,13 @@ async def create_from_package(self, body: bytes, **kwargs: Any) -> _models.Skill return deserialized # type: ignore @distributed_trace_async - async def get(self, name: str, **kwargs: Any) -> _models.SkillObject: + async def get(self, name: str, **kwargs: Any) -> _models.SkillDetails: """Retrieves a skill. :param name: The unique name of the skill. Required. :type name: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -8308,7 +8284,7 @@ async def get(self, name: str, **kwargs: Any) -> _models.SkillObject: _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) _request = build_beta_skills_get_request( name=name, @@ -8345,7 +8321,7 @@ async def get(self, name: str, **kwargs: Any) -> _models.SkillObject: if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -8425,7 +8401,7 @@ def list( order: Optional[Union[str, _models.PageOrder]] = None, before: Optional[str] = None, **kwargs: Any - ) -> AsyncItemPaged["_models.SkillObject"]: + ) -> AsyncItemPaged["_models.SkillDetails"]: """Returns the list of all skills. :keyword limit: A limit on the number of objects to be returned. Limit can range between 1 and @@ -8442,14 +8418,14 @@ def list( subsequent call can include before=obj_foo in order to fetch the previous page of the list. Default value is None. 
:paramtype before: str - :return: An iterator like instance of SkillObject - :rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.ai.projects.models.SkillObject] + :return: An iterator like instance of SkillDetails + :rtype: ~azure.core.async_paging.AsyncItemPaged[~azure.ai.projects.models.SkillDetails] :raises ~azure.core.exceptions.HttpResponseError: """ _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[List[_models.SkillObject]] = kwargs.pop("cls", None) + cls: ClsType[List[_models.SkillDetails]] = kwargs.pop("cls", None) error_map: MutableMapping = { 401: ClientAuthenticationError, @@ -8479,7 +8455,7 @@ def prepare_request(_continuation_token=None): async def extract_data(pipeline_response): deserialized = pipeline_response.http_response.json() list_of_elem = _deserialize( - List[_models.SkillObject], + List[_models.SkillDetails], deserialized.get("data", []), ) if cls: @@ -8517,7 +8493,7 @@ async def update( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. @@ -8537,15 +8513,15 @@ async def update( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload async def update( self, name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. 
@@ -8555,15 +8531,15 @@ async def update( :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload async def update( self, name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. @@ -8573,8 +8549,8 @@ async def update( :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @@ -8588,7 +8564,7 @@ async def update( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. @@ -8607,8 +8583,8 @@ async def update( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -8623,7 +8599,7 @@ async def update( _params = kwargs.pop("params", {}) or {} content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) if body is _Unset: body = {"description": description, "instructions": instructions, "metadata": metadata} @@ -8672,7 +8648,7 @@ async def update( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -8680,13 +8656,13 @@ async def update( return deserialized # type: ignore @distributed_trace_async - async def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResponse: + async def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResult: """Deletes a skill. :param name: The unique name of the skill. Required. :type name: str - :return: DeleteSkillResponse. The DeleteSkillResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.DeleteSkillResponse + :return: DeleteSkillResult. 
The DeleteSkillResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.DeleteSkillResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -8700,7 +8676,7 @@ async def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResponse: _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[_models.DeleteSkillResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.DeleteSkillResult] = kwargs.pop("cls", None) _request = build_beta_skills_delete_request( name=name, @@ -8737,7 +8713,7 @@ async def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResponse: if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.DeleteSkillResponse, response.json()) + deserialized = _deserialize(_models.DeleteSkillResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_datasets_async.py b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_datasets_async.py index dc7095c827ea..f4983d79e7f9 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_datasets_async.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_datasets_async.py @@ -22,7 +22,7 @@ FileDatasetVersion, FolderDatasetVersion, PendingUploadRequest, - PendingUploadResponse, + PendingUploadResult, PendingUploadType, ) @@ -48,7 +48,7 @@ async def _create_dataset_and_get_its_container_client( connection_name: Optional[str] = None, ) -> Tuple[ContainerClient, str]: - pending_upload_response: PendingUploadResponse = await self.pending_upload( + pending_upload_response: PendingUploadResult = await self.pending_upload( name=name, version=input_version, pending_upload_request=PendingUploadRequest( diff --git 
a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_sessions_async.py b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_sessions_async.py index c4257fec3242..dd1427541a65 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_sessions_async.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/aio/operations/_patch_sessions_async.py @@ -34,7 +34,7 @@ async def upload_session_file( # type: ignore[override] *, path: str, **kwargs: Any, - ) -> _models.SessionFileWriteResponse: + ) -> _models.SessionFileWriteResult: """Upload a file to the session sandbox. Accepts either a ``bytes`` buffer or a local file path (``str``). @@ -52,9 +52,9 @@ async def upload_session_file( # type: ignore[override] :keyword path: The destination file path within the sandbox, relative to the session home directory. Required. :paramtype path: str - :return: SessionFileWriteResponse. The SessionFileWriteResponse is compatible with + :return: SessionFileWriteResult. The SessionFileWriteResult is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SessionFileWriteResponse + :rtype: ~azure.ai.projects.models.SessionFileWriteResult :raises ~azure.core.exceptions.HttpResponseError: :raises FileNotFoundError: If *content_or_file_path* is a ``str`` and the file does not exist. 
""" diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/models/__init__.py b/sdk/ai/azure-ai-projects/azure/ai/projects/models/__init__.py index 8dfca8b30a03..b7f356f41778 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/models/__init__.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/models/__init__.py @@ -23,8 +23,8 @@ AgentClusterInsightResult, AgentDefinition, AgentDetails, - AgentEndpoint, AgentEndpointAuthorizationScheme, + AgentEndpointConfig, AgentIdentity, AgentObjectVersions, AgentSessionResource, @@ -68,7 +68,6 @@ ClusterInsightResult, ClusterTokenUsage, CodeBasedEvaluatorDefinition, - CodeConfiguration, CodeInterpreterTool, ComparisonFilter, CompoundFilter, @@ -95,12 +94,11 @@ DeleteAgentResponse, DeleteAgentVersionResponse, DeleteMemoryStoreResult, - DeleteSkillResponse, + DeleteSkillResult, Deployment, EmbeddingConfiguration, EntraAuthorizationScheme, EntraIDCredentials, - EntraIsolationKeySource, EvalResult, EvalRunResultCompareItem, EvalRunResultComparison, @@ -114,12 +112,14 @@ EvaluationRunClusterInsightRequest, EvaluationRunClusterInsightResult, EvaluationScheduleTask, + EvaluationTarget, EvaluationTaxonomy, EvaluationTaxonomyInput, EvaluatorDefinition, EvaluatorMetric, EvaluatorVersion, FabricDataAgentToolParameters, + FabricIQPreviewTool, FieldMapping, FileDatasetVersion, FileSearchTool, @@ -130,7 +130,6 @@ FunctionShellToolParamEnvironmentContainerReferenceParam, FunctionShellToolParamEnvironmentLocalEnvironmentParam, FunctionTool, - HeaderIsolationKeySource, HeaderTelemetryEndpointAuth, HostedAgentDefinition, HourlyRecurrenceSchedule, @@ -150,7 +149,6 @@ InsightScheduleTask, InsightSummary, InsightsMetadata, - IsolationKeySource, LocalShellToolParam, LocalSkillParam, MCPTool, @@ -190,7 +188,7 @@ OpenApiTool, OtlpTelemetryEndpoint, PendingUploadRequest, - PendingUploadResponse, + PendingUploadResult, PromptAgentDefinition, PromptAgentDefinitionTextOptions, PromptBasedEvaluatorDefinition, @@ -201,6 +199,7 @@ RecurrenceSchedule, 
RecurrenceTrigger, RedTeam, + RedTeamTargetConfig, ResponseUsageInputTokensDetails, ResponseUsageOutputTokensDetails, SASCredentials, @@ -208,19 +207,17 @@ ScheduleRun, ScheduleTask, SessionDirectoryEntry, - SessionDirectoryListResponse, - SessionFileWriteResponse, + SessionDirectoryListResult, + SessionFileWriteResult, SessionLogEvent, SharepointGroundingToolParameters, SharepointPreviewTool, - SkillObject, + SkillDetails, SkillReferenceParam, SpecificApplyPatchParam, SpecificFunctionShellParam, StructuredInputDefinition, StructuredOutputDefinition, - Target, - TargetConfig, TaxonomyCategory, TaxonomySubCategory, TelemetryConfig, @@ -246,6 +243,7 @@ ToolProjectConnection, ToolboxObject, ToolboxPolicies, + ToolboxSearchPreviewTool, ToolboxVersionObject, Trigger, UpdateToolboxRequest, @@ -261,7 +259,6 @@ WebSearchToolFilters, WeeklyRecurrenceSchedule, WorkIQPreviewTool, - WorkIQPreviewToolParameters, WorkflowAgentDefinition, ) @@ -300,7 +297,6 @@ IndexType, InputFidelity, InsightType, - IsolationKeySourceKind, MemoryItemKind, MemoryOperationKind, MemoryStoreKind, @@ -344,8 +340,8 @@ "AgentClusterInsightResult", "AgentDefinition", "AgentDetails", - "AgentEndpoint", "AgentEndpointAuthorizationScheme", + "AgentEndpointConfig", "AgentIdentity", "AgentObjectVersions", "AgentSessionResource", @@ -389,7 +385,6 @@ "ClusterInsightResult", "ClusterTokenUsage", "CodeBasedEvaluatorDefinition", - "CodeConfiguration", "CodeInterpreterTool", "ComparisonFilter", "CompoundFilter", @@ -416,12 +411,11 @@ "DeleteAgentResponse", "DeleteAgentVersionResponse", "DeleteMemoryStoreResult", - "DeleteSkillResponse", + "DeleteSkillResult", "Deployment", "EmbeddingConfiguration", "EntraAuthorizationScheme", "EntraIDCredentials", - "EntraIsolationKeySource", "EvalResult", "EvalRunResultCompareItem", "EvalRunResultComparison", @@ -435,12 +429,14 @@ "EvaluationRunClusterInsightRequest", "EvaluationRunClusterInsightResult", "EvaluationScheduleTask", + "EvaluationTarget", "EvaluationTaxonomy", 
"EvaluationTaxonomyInput", "EvaluatorDefinition", "EvaluatorMetric", "EvaluatorVersion", "FabricDataAgentToolParameters", + "FabricIQPreviewTool", "FieldMapping", "FileDatasetVersion", "FileSearchTool", @@ -451,7 +447,6 @@ "FunctionShellToolParamEnvironmentContainerReferenceParam", "FunctionShellToolParamEnvironmentLocalEnvironmentParam", "FunctionTool", - "HeaderIsolationKeySource", "HeaderTelemetryEndpointAuth", "HostedAgentDefinition", "HourlyRecurrenceSchedule", @@ -471,7 +466,6 @@ "InsightScheduleTask", "InsightSummary", "InsightsMetadata", - "IsolationKeySource", "LocalShellToolParam", "LocalSkillParam", "MCPTool", @@ -511,7 +505,7 @@ "OpenApiTool", "OtlpTelemetryEndpoint", "PendingUploadRequest", - "PendingUploadResponse", + "PendingUploadResult", "PromptAgentDefinition", "PromptAgentDefinitionTextOptions", "PromptBasedEvaluatorDefinition", @@ -522,6 +516,7 @@ "RecurrenceSchedule", "RecurrenceTrigger", "RedTeam", + "RedTeamTargetConfig", "ResponseUsageInputTokensDetails", "ResponseUsageOutputTokensDetails", "SASCredentials", @@ -529,19 +524,17 @@ "ScheduleRun", "ScheduleTask", "SessionDirectoryEntry", - "SessionDirectoryListResponse", - "SessionFileWriteResponse", + "SessionDirectoryListResult", + "SessionFileWriteResult", "SessionLogEvent", "SharepointGroundingToolParameters", "SharepointPreviewTool", - "SkillObject", + "SkillDetails", "SkillReferenceParam", "SpecificApplyPatchParam", "SpecificFunctionShellParam", "StructuredInputDefinition", "StructuredOutputDefinition", - "Target", - "TargetConfig", "TaxonomyCategory", "TaxonomySubCategory", "TelemetryConfig", @@ -567,6 +560,7 @@ "ToolProjectConnection", "ToolboxObject", "ToolboxPolicies", + "ToolboxSearchPreviewTool", "ToolboxVersionObject", "Trigger", "UpdateToolboxRequest", @@ -582,7 +576,6 @@ "WebSearchToolFilters", "WeeklyRecurrenceSchedule", "WorkIQPreviewTool", - "WorkIQPreviewToolParameters", "WorkflowAgentDefinition", "AgentBlueprintReferenceType", "AgentEndpointAuthorizationSchemeType", @@ 
-618,7 +611,6 @@ "IndexType", "InputFidelity", "InsightType", - "IsolationKeySourceKind", "MemoryItemKind", "MemoryOperationKind", "MemoryStoreKind", diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py index 588927c00421..faecfb8689bf 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py @@ -26,8 +26,6 @@ class _AgentDefinitionOptInKeys(str, Enum, metaclass=CaseInsensitiveEnumMeta): class _FoundryFeaturesOptInKeys(str, Enum, metaclass=CaseInsensitiveEnumMeta): """Type of _FoundryFeaturesOptInKeys.""" - SKILLS_V1_PREVIEW = "Skills=V1Preview" - """SKILLS_V1_PREVIEW.""" EVALUATIONS_V1_PREVIEW = "Evaluations=V1Preview" """EVALUATIONS_V1_PREVIEW.""" SCHEDULES_V1_PREVIEW = "Schedules=V1Preview" @@ -40,6 +38,8 @@ class _FoundryFeaturesOptInKeys(str, Enum, metaclass=CaseInsensitiveEnumMeta): """MEMORY_STORES_V1_PREVIEW.""" TOOLBOXES_V1_PREVIEW = "Toolboxes=V1Preview" """TOOLBOXES_V1_PREVIEW.""" + SKILLS_V1_PREVIEW = "Skills=V1Preview" + """SKILLS_V1_PREVIEW.""" class AgentBlueprintReferenceType(str, Enum, metaclass=CaseInsensitiveEnumMeta): @@ -69,6 +69,8 @@ class AgentEndpointProtocol(str, Enum, metaclass=CaseInsensitiveEnumMeta): """RESPONSES.""" A2A = "a2a" """A2A.""" + MCP = "mcp" + """MCP.""" INVOCATIONS = "invocations" """INVOCATIONS.""" @@ -106,6 +108,8 @@ class AgentProtocol(str, Enum, metaclass=CaseInsensitiveEnumMeta): """ACTIVITY_PROTOCOL.""" RESPONSES = "responses" """RESPONSES.""" + MCP = "mcp" + """MCP.""" INVOCATIONS = "invocations" """INVOCATIONS.""" @@ -516,15 +520,6 @@ class InsightType(str, Enum, metaclass=CaseInsensitiveEnumMeta): """Evaluation Comparison.""" -class IsolationKeySourceKind(str, Enum, metaclass=CaseInsensitiveEnumMeta): - """Type of IsolationKeySourceKind.""" - - ENTRA = "Entra" - """ENTRA.""" - HEADER = "Header" - """HEADER.""" - - class MemoryItemKind(str, Enum, 
metaclass=CaseInsensitiveEnumMeta): """Memory item kind.""" @@ -840,6 +835,10 @@ class ToolType(str, Enum, metaclass=CaseInsensitiveEnumMeta): """MEMORY_SEARCH_PREVIEW.""" WORK_IQ_PREVIEW = "work_iq_preview" """WORK_IQ_PREVIEW.""" + FABRIC_IQ_PREVIEW = "fabric_iq_preview" + """FABRIC_IQ_PREVIEW.""" + TOOLBOX_SEARCH_PREVIEW = "toolbox_search_preview" + """TOOLBOX_SEARCH_PREVIEW.""" AZURE_AI_SEARCH = "azure_ai_search" """AZURE_AI_SEARCH.""" AZURE_FUNCTION = "azure_function" diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py index f2da2ee9bdb2..1498bf59bb85 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py @@ -29,7 +29,6 @@ FunctionShellToolParamEnvironmentType, IndexType, InsightType, - IsolationKeySourceKind, MemoryItemKind, MemoryStoreKind, MemoryStoreObjectType, @@ -59,16 +58,18 @@ class Tool(_Model): A2APreviewTool, ApplyPatchToolParam, AzureAISearchTool, AzureFunctionTool, BingCustomSearchPreviewTool, BingGroundingTool, BrowserAutomationPreviewTool, CaptureStructuredOutputsTool, CodeInterpreterTool, ComputerUsePreviewTool, CustomToolParam, - MicrosoftFabricPreviewTool, FileSearchTool, FunctionTool, ImageGenTool, LocalShellToolParam, - MCPTool, MemorySearchPreviewTool, OpenApiTool, SharepointPreviewTool, FunctionShellToolParam, - WebSearchTool, WebSearchPreviewTool, WorkIQPreviewTool + MicrosoftFabricPreviewTool, FabricIQPreviewTool, FileSearchTool, FunctionTool, ImageGenTool, + LocalShellToolParam, MCPTool, MemorySearchPreviewTool, OpenApiTool, SharepointPreviewTool, + FunctionShellToolParam, ToolboxSearchPreviewTool, WebSearchTool, WebSearchPreviewTool, + WorkIQPreviewTool :ivar type: Required. 
Known values are: "function", "file_search", "computer_use_preview", "web_search", "mcp", "code_interpreter", "image_generation", "local_shell", "shell", "custom", "web_search_preview", "apply_patch", "a2a_preview", "bing_custom_search_preview", "browser_automation_preview", "fabric_dataagent_preview", "sharepoint_grounding_preview", - "memory_search_preview", "work_iq_preview", "azure_ai_search", "azure_function", - "bing_grounding", "capture_structured_outputs", and "openapi". + "memory_search_preview", "work_iq_preview", "fabric_iq_preview", "toolbox_search_preview", + "azure_ai_search", "azure_function", "bing_grounding", "capture_structured_outputs", and + "openapi". :vartype type: str or ~azure.ai.projects.models.ToolType """ @@ -79,8 +80,8 @@ class Tool(_Model): \"shell\", \"custom\", \"web_search_preview\", \"apply_patch\", \"a2a_preview\", \"bing_custom_search_preview\", \"browser_automation_preview\", \"fabric_dataagent_preview\", \"sharepoint_grounding_preview\", \"memory_search_preview\", \"work_iq_preview\", - \"azure_ai_search\", \"azure_function\", \"bing_grounding\", \"capture_structured_outputs\", - and \"openapi\".""" + \"fabric_iq_preview\", \"toolbox_search_preview\", \"azure_ai_search\", \"azure_function\", + \"bing_grounding\", \"capture_structured_outputs\", and \"openapi\".""" @overload def __init__( @@ -105,6 +106,10 @@ class A2APreviewTool(Tool, discriminator="a2a_preview"): :ivar type: The type of the tool. Always ``"a2a_preview``. Required. A2A_PREVIEW. :vartype type: str or ~azure.ai.projects.models.A2A_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar base_url: Base URL of the agent. :vartype base_url: str :ivar agent_card_path: The path to the agent card relative to the ``base_url``. 
If not @@ -118,6 +123,10 @@ class A2APreviewTool(Tool, discriminator="a2a_preview"): type: Literal[ToolType.A2A_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The type of the tool. Always ``\"a2a_preview``. Required. A2A_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" base_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) """Base URL of the agent.""" agent_card_path: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) @@ -131,6 +140,8 @@ class A2APreviewTool(Tool, discriminator="a2a_preview"): def __init__( self, *, + name: Optional[str] = None, + description: Optional[str] = None, base_url: Optional[str] = None, agent_card_path: Optional[str] = None, project_connection_id: Optional[str] = None, @@ -459,7 +470,7 @@ class AgentDetails(_Model): :ivar versions: The latest version of the agent. Required. :vartype versions: ~azure.ai.projects.models.AgentObjectVersions :ivar agent_endpoint: The endpoint configuration for the agent. - :vartype agent_endpoint: ~azure.ai.projects.models.AgentEndpoint + :vartype agent_endpoint: ~azure.ai.projects.models.AgentEndpointConfig :ivar instance_identity: The instance identity of the agent. :vartype instance_identity: ~azure.ai.projects.models.AgentIdentity :ivar blueprint: The blueprint for the agent. @@ -478,7 +489,7 @@ class AgentDetails(_Model): """The name of the agent. Required.""" versions: "_models.AgentObjectVersions" = rest_field(visibility=["read", "create", "update", "delete", "query"]) """The latest version of the agent. 
Required.""" - agent_endpoint: Optional["_models.AgentEndpoint"] = rest_field( + agent_endpoint: Optional["_models.AgentEndpointConfig"] = rest_field( visibility=["read", "create", "update", "delete", "query"] ) """The endpoint configuration for the agent.""" @@ -498,7 +509,7 @@ def __init__( id: str, # pylint: disable=redefined-builtin name: str, versions: "_models.AgentObjectVersions", - agent_endpoint: Optional["_models.AgentEndpoint"] = None, + agent_endpoint: Optional["_models.AgentEndpointConfig"] = None, agent_card: Optional["_models.AgentCard"] = None, ) -> None: ... @@ -513,8 +524,40 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class AgentEndpoint(_Model): - """AgentEndpoint. +class AgentEndpointAuthorizationScheme(_Model): + """AgentEndpointAuthorizationScheme. + + You probably want to use the sub-classes and not this class directly. Known sub-classes are: + BotServiceAuthorizationScheme, BotServiceRbacAuthorizationScheme, EntraAuthorizationScheme + + :ivar type: Required. Known values are: "Entra", "BotService", and "BotServiceRbac". + :vartype type: str or ~azure.ai.projects.models.AgentEndpointAuthorizationSchemeType + """ + + __mapping__: dict[str, _Model] = {} + type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) + """Required. Known values are: \"Entra\", \"BotService\", and \"BotServiceRbac\".""" + + @overload + def __init__( + self, + *, + type: str, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + + +class AgentEndpointConfig(_Model): + """AgentEndpointConfig. :ivar version_selector: The version selector of the agent endpoint determines how traffic is routed to different versions of the agent. 
@@ -560,38 +603,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class AgentEndpointAuthorizationScheme(_Model): - """AgentEndpointAuthorizationScheme. - - You probably want to use the sub-classes and not this class directly. Known sub-classes are: - BotServiceAuthorizationScheme, BotServiceRbacAuthorizationScheme, EntraAuthorizationScheme - - :ivar type: Required. Known values are: "Entra", "BotService", and "BotServiceRbac". - :vartype type: str or ~azure.ai.projects.models.AgentEndpointAuthorizationSchemeType - """ - - __mapping__: dict[str, _Model] = {} - type: str = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"Entra\", \"BotService\", and \"BotServiceRbac\".""" - - @overload - def __init__( - self, - *, - type: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - class BaseCredentials(_Model): """A base class for connection credentials. @@ -815,14 +826,14 @@ class AgentTaxonomyInput(EvaluationTaxonomyInput, discriminator="agent"): :ivar type: Input type of the evaluation taxonomy. Required. Agent. :vartype type: str or ~azure.ai.projects.models.AGENT :ivar target: Target configuration for the agent. Required. - :vartype target: ~azure.ai.projects.models.Target + :vartype target: ~azure.ai.projects.models.EvaluationTarget :ivar risk_categories: List of risk categories to evaluate against. Required. :vartype risk_categories: list[str or ~azure.ai.projects.models.RiskCategory] """ type: Literal[EvaluationTaxonomyInputType.AGENT] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """Input type of the evaluation taxonomy. Required. 
Agent.""" - target: "_models.Target" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + target: "_models.EvaluationTarget" = rest_field(visibility=["read", "create", "update", "delete", "query"]) """Target configuration for the agent. Required.""" risk_categories: list[Union[str, "_models.RiskCategory"]] = rest_field( name="riskCategories", visibility=["read", "create", "update", "delete", "query"] @@ -833,7 +844,7 @@ class AgentTaxonomyInput(EvaluationTaxonomyInput, discriminator="agent"): def __init__( self, *, - target: "_models.Target", + target: "_models.EvaluationTarget", risk_categories: list[Union[str, "_models.RiskCategory"]], ) -> None: ... @@ -1250,7 +1261,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type: Literal["auto"] = "auto" -class Target(_Model): +class EvaluationTarget(_Model): """Base class for targets with discriminator support. You probably want to use the sub-classes and not this class directly. Known sub-classes are: @@ -1282,7 +1293,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class AzureAIAgentTarget(Target, discriminator="azure_ai_agent"): +class AzureAIAgentTarget(EvaluationTarget, discriminator="azure_ai_agent"): """Represents a target specifying an Azure AI agent. :ivar type: The type of target, always ``azure_ai_agent``. Required. Default value is @@ -1329,7 +1340,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type = "azure_ai_agent" # type: ignore -class AzureAIModelTarget(Target, discriminator="azure_ai_model"): +class AzureAIModelTarget(EvaluationTarget, discriminator="azure_ai_model"): """Represents a target specifying an Azure AI model for operations requiring model selection. :ivar type: The type of target, always ``azure_ai_model``. Required. Default value is @@ -1487,12 +1498,20 @@ class AzureAISearchTool(Tool, discriminator="azure_ai_search"): :ivar type: The object type, which is always 'azure_ai_search'. Required. 
AZURE_AI_SEARCH. :vartype type: str or ~azure.ai.projects.models.AZURE_AI_SEARCH + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar azure_ai_search: The azure ai search index resource. Required. :vartype azure_ai_search: ~azure.ai.projects.models.AzureAISearchToolResource """ type: Literal[ToolType.AZURE_AI_SEARCH] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'azure_ai_search'. Required. AZURE_AI_SEARCH.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" azure_ai_search: "_models.AzureAISearchToolResource" = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -1503,6 +1522,8 @@ def __init__( self, *, azure_ai_search: "_models.AzureAISearchToolResource", + name: Optional[str] = None, + description: Optional[str] = None, ) -> None: ... @overload @@ -1744,7 +1765,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type = ToolType.AZURE_FUNCTION # type: ignore -class TargetConfig(_Model): +class RedTeamTargetConfig(_Model): """Abstract class for target configuration. You probably want to use the sub-classes and not this class directly. Known sub-classes are: @@ -1776,7 +1797,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class AzureOpenAIModelConfiguration(TargetConfig, discriminator="AzureOpenAIModel"): +class AzureOpenAIModelConfiguration(RedTeamTargetConfig, discriminator="AzureOpenAIModel"): """Azure OpenAI model configuration. 
The API version would be selected by the service for querying the model. @@ -1876,6 +1897,10 @@ class BingCustomSearchPreviewTool(Tool, discriminator="bing_custom_search_previe :ivar type: The object type, which is always 'bing_custom_search_preview'. Required. BING_CUSTOM_SEARCH_PREVIEW. :vartype type: str or ~azure.ai.projects.models.BING_CUSTOM_SEARCH_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar bing_custom_search_preview: The bing custom search tool parameters. Required. :vartype bing_custom_search_preview: ~azure.ai.projects.models.BingCustomSearchToolParameters """ @@ -1883,6 +1908,10 @@ class BingCustomSearchPreviewTool(Tool, discriminator="bing_custom_search_previe type: Literal[ToolType.BING_CUSTOM_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'bing_custom_search_preview'. Required. BING_CUSTOM_SEARCH_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" bing_custom_search_preview: "_models.BingCustomSearchToolParameters" = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -1893,6 +1922,8 @@ def __init__( self, *, bing_custom_search_preview: "_models.BingCustomSearchToolParameters", + name: Optional[str] = None, + description: Optional[str] = None, ) -> None: ... @overload @@ -2028,12 +2059,20 @@ class BingGroundingTool(Tool, discriminator="bing_grounding"): :ivar type: The object type, which is always 'bing_grounding'. Required. BING_GROUNDING. 
:vartype type: str or ~azure.ai.projects.models.BING_GROUNDING + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar bing_grounding: The bing grounding search tool parameters. Required. :vartype bing_grounding: ~azure.ai.projects.models.BingGroundingSearchToolParameters """ type: Literal[ToolType.BING_GROUNDING] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'bing_grounding'. Required. BING_GROUNDING.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" bing_grounding: "_models.BingGroundingSearchToolParameters" = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -2044,6 +2083,8 @@ def __init__( self, *, bing_grounding: "_models.BingGroundingSearchToolParameters", + name: Optional[str] = None, + description: Optional[str] = None, ) -> None: ... @overload @@ -2181,6 +2222,10 @@ class BrowserAutomationPreviewTool(Tool, discriminator="browser_automation_previ :ivar type: The object type, which is always 'browser_automation_preview'. Required. BROWSER_AUTOMATION_PREVIEW. :vartype type: str or ~azure.ai.projects.models.BROWSER_AUTOMATION_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar browser_automation_preview: The Browser Automation Tool parameters. Required. 
:vartype browser_automation_preview: ~azure.ai.projects.models.BrowserAutomationToolParameters """ @@ -2188,6 +2233,10 @@ class BrowserAutomationPreviewTool(Tool, discriminator="browser_automation_previ type: Literal[ToolType.BROWSER_AUTOMATION_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'browser_automation_preview'. Required. BROWSER_AUTOMATION_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" browser_automation_preview: "_models.BrowserAutomationToolParameters" = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -2198,6 +2247,8 @@ def __init__( self, *, browser_automation_preview: "_models.BrowserAutomationToolParameters", + name: Optional[str] = None, + description: Optional[str] = None, ) -> None: ... @overload @@ -2691,41 +2742,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type = EvaluatorDefinitionType.CODE # type: ignore -class CodeConfiguration(_Model): - """Code-based deployment configuration for a hosted agent. - - :ivar runtime: The runtime identifier for code execution (e.g., 'python_3_11', 'python_3_12', - 'python_3_13'). Required. - :vartype runtime: str - :ivar entry_point: The entry point command and arguments for the code execution. Required. - :vartype entry_point: list[str] - """ - - runtime: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The runtime identifier for code execution (e.g., 'python_3_11', 'python_3_12', 'python_3_13'). 
- Required.""" - entry_point: list[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The entry point command and arguments for the code execution. Required.""" - - @overload - def __init__( - self, - *, - runtime: str, - entry_point: list[str], - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - class CodeInterpreterTool(Tool, discriminator="code_interpreter"): """Code interpreter. @@ -3953,7 +3969,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class DeleteSkillResponse(_Model): +class DeleteSkillResult(_Model): """A deleted skill Object. :ivar name: The unique name of the skill. Required. @@ -4062,22 +4078,14 @@ class EntraAuthorizationScheme(AgentEndpointAuthorizationScheme, discriminator=" :ivar type: Required. ENTRA. :vartype type: str or ~azure.ai.projects.models.ENTRA - :ivar isolation_key_source: Required. - :vartype isolation_key_source: ~azure.ai.projects.models.IsolationKeySource """ type: Literal[AgentEndpointAuthorizationSchemeType.ENTRA] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """Required. ENTRA.""" - isolation_key_source: "_models.IsolationKeySource" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Required.""" @overload def __init__( self, - *, - isolation_key_source: "_models.IsolationKeySource", ) -> None: ... @overload @@ -4119,65 +4127,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type = CredentialType.ENTRA_ID # type: ignore -class IsolationKeySource(_Model): - """IsolationKeySource. - - You probably want to use the sub-classes and not this class directly. 
Known sub-classes are: - EntraIsolationKeySource, HeaderIsolationKeySource - - :ivar kind: Required. Known values are: "Entra" and "Header". - :vartype kind: str or ~azure.ai.projects.models.IsolationKeySourceKind - """ - - __mapping__: dict[str, _Model] = {} - kind: str = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) - """Required. Known values are: \"Entra\" and \"Header\".""" - - @overload - def __init__( - self, - *, - kind: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - - -class EntraIsolationKeySource(IsolationKeySource, discriminator="Entra"): - """EntraIsolationKeySource. - - :ivar kind: Required. ENTRA. - :vartype kind: str or ~azure.ai.projects.models.ENTRA - """ - - kind: Literal[IsolationKeySourceKind.ENTRA] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. ENTRA.""" - - @overload - def __init__( - self, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = IsolationKeySourceKind.ENTRA # type: ignore - - class EvalResult(_Model): """Result of the evaluation. @@ -5051,6 +5000,70 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) +class FabricIQPreviewTool(Tool, discriminator="fabric_iq_preview"): + """A FabricIQ server-side tool. + + :ivar type: The object type, which is always 'fabric_iq_preview'. Required. FABRIC_IQ_PREVIEW. 
+ :vartype type: str or ~azure.ai.projects.models.FABRIC_IQ_PREVIEW + :ivar project_connection_id: The ID of the FabricIQ project connection. Required. + :vartype project_connection_id: str + :ivar server_label: (Optional) The label of the FabricIQ MCP server to connect to. + :vartype server_label: str + :ivar server_url: (Optional) The URL of the FabricIQ MCP server. If not provided, the URL from + the project connection will be used. + :vartype server_url: str + :ivar require_approval: (Optional) Whether the agent requires approval before executing + actions. Default is always. Is either a MCPToolRequireApproval type or a str type. + :vartype require_approval: ~azure.ai.projects.models.MCPToolRequireApproval or str + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str + """ + + type: Literal[ToolType.FABRIC_IQ_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The object type, which is always 'fabric_iq_preview'. Required. FABRIC_IQ_PREVIEW.""" + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the FabricIQ project connection. Required.""" + server_label: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """(Optional) The label of the FabricIQ MCP server to connect to.""" + server_url: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """(Optional) The URL of the FabricIQ MCP server. If not provided, the URL from the project + connection will be used.""" + require_approval: Optional[Union["_models.MCPToolRequireApproval", str]] = rest_field( + visibility=["read", "create", "update", "delete", "query"] + ) + """(Optional) Whether the agent requires approval before executing actions. 
Default is always. Is + either a MCPToolRequireApproval type or a str type.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" + + @overload + def __init__( + self, + *, + project_connection_id: str, + server_label: Optional[str] = None, + server_url: Optional[str] = None, + require_approval: Optional[Union["_models.MCPToolRequireApproval", str]] = None, + name: Optional[str] = None, + description: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.FABRIC_IQ_PREVIEW # type: ignore + + class FieldMapping(_Model): """Field mapping configuration class. @@ -5508,44 +5521,6 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type = ToolType.FUNCTION # type: ignore -class HeaderIsolationKeySource(IsolationKeySource, discriminator="Header"): - """HeaderIsolationKeySource. - - :ivar kind: Required. HEADER. - :vartype kind: str or ~azure.ai.projects.models.HEADER - :ivar user_isolation_key: The user isolation key header value. Required. - :vartype user_isolation_key: str - :ivar chat_isolation_key: The chat isolation key header value. Required. - :vartype chat_isolation_key: str - """ - - kind: Literal[IsolationKeySourceKind.HEADER] = rest_discriminator(name="kind", visibility=["read", "create", "update", "delete", "query"]) # type: ignore - """Required. HEADER.""" - user_isolation_key: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The user isolation key header value. 
Required.""" - chat_isolation_key: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The chat isolation key header value. Required.""" - - @overload - def __init__( - self, - *, - user_isolation_key: str, - chat_isolation_key: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. - :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self.kind = IsolationKeySourceKind.HEADER # type: ignore - - class TelemetryEndpointAuth(_Model): """Authentication configuration for a telemetry endpoint. @@ -5651,10 +5626,6 @@ class HostedAgentDefinition(AgentDefinition, discriminator="hosted"): :vartype container_configuration: ~azure.ai.projects.models.ContainerConfiguration :ivar protocol_versions: The protocols that the agent supports for ingress communication. :vartype protocol_versions: list[~azure.ai.projects.models.ProtocolVersionRecord] - :ivar code_configuration: Code-based deployment configuration. Provide this for code-based - deployments. Mutually exclusive with container_configuration — the service validates that - exactly one is set. - :vartype code_configuration: ~azure.ai.projects.models.CodeConfiguration :ivar telemetry_config: Optional customer-supplied telemetry configuration for exporting container logs, traces, and metrics. :vartype telemetry_config: ~azure.ai.projects.models.TelemetryConfig @@ -5688,11 +5659,6 @@ class HostedAgentDefinition(AgentDefinition, discriminator="hosted"): visibility=["read", "create", "update", "delete", "query"] ) """The protocols that the agent supports for ingress communication.""" - code_configuration: Optional["_models.CodeConfiguration"] = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """Code-based deployment configuration. Provide this for code-based deployments. 
Mutually - exclusive with container_configuration — the service validates that exactly one is set.""" telemetry_config: Optional["_models.TelemetryConfig"] = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -5712,7 +5678,6 @@ def __init__( image: Optional[str] = None, container_configuration: Optional["_models.ContainerConfiguration"] = None, protocol_versions: Optional[list["_models.ProtocolVersionRecord"]] = None, - code_configuration: Optional["_models.CodeConfiguration"] = None, telemetry_config: Optional["_models.TelemetryConfig"] = None, ) -> None: ... @@ -6814,6 +6779,10 @@ class MemorySearchPreviewTool(Tool, discriminator="memory_search_preview"): :ivar type: The type of the tool. Always ``memory_search_preview``. Required. MEMORY_SEARCH_PREVIEW. :vartype type: str or ~azure.ai.projects.models.MEMORY_SEARCH_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar memory_store_name: The name of the memory store to use. Required. :vartype memory_store_name: str :ivar scope: The namespace used to group and isolate memories, such as a user ID. Limits which @@ -6829,6 +6798,10 @@ class MemorySearchPreviewTool(Tool, discriminator="memory_search_preview"): type: Literal[ToolType.MEMORY_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The type of the tool. Always ``memory_search_preview``. Required. 
MEMORY_SEARCH_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" memory_store_name: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) """The name of the memory store to use. Required.""" scope: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) @@ -6848,6 +6821,8 @@ def __init__( *, memory_store_name: str, scope: str, + name: Optional[str] = None, + description: Optional[str] = None, search_options: Optional["_models.MemorySearchOptions"] = None, update_delay: Optional[int] = None, ) -> None: ... @@ -7296,6 +7271,10 @@ class MicrosoftFabricPreviewTool(Tool, discriminator="fabric_dataagent_preview") :ivar type: The object type, which is always 'fabric_dataagent_preview'. Required. FABRIC_DATAAGENT_PREVIEW. :vartype type: str or ~azure.ai.projects.models.FABRIC_DATAAGENT_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar fabric_dataagent_preview: The fabric data agent tool parameters. Required. :vartype fabric_dataagent_preview: ~azure.ai.projects.models.FabricDataAgentToolParameters """ @@ -7303,6 +7282,10 @@ class MicrosoftFabricPreviewTool(Tool, discriminator="fabric_dataagent_preview") type: Literal[ToolType.FABRIC_DATAAGENT_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'fabric_dataagent_preview'. Required. 
FABRIC_DATAAGENT_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" fabric_dataagent_preview: "_models.FabricDataAgentToolParameters" = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -7313,6 +7296,8 @@ def __init__( self, *, fabric_dataagent_preview: "_models.FabricDataAgentToolParameters", + name: Optional[str] = None, + description: Optional[str] = None, ) -> None: ... @overload @@ -7432,33 +7417,33 @@ class ModelSamplingParams(_Model): """Represents a set of parameters used to control the sampling behavior of a language model during text generation. - :ivar temperature: The temperature parameter for sampling. Required. + :ivar temperature: The temperature parameter for sampling. Defaults to 1.0. :vartype temperature: float - :ivar top_p: The top-p parameter for nucleus sampling. Required. + :ivar top_p: The top-p parameter for nucleus sampling. Defaults to 1.0. :vartype top_p: float - :ivar seed: The random seed for reproducibility. Required. + :ivar seed: The random seed for reproducibility. Defaults to 42. :vartype seed: int - :ivar max_completion_tokens: The maximum number of tokens allowed in the completion. Required. + :ivar max_completion_tokens: The maximum number of tokens allowed in the completion. :vartype max_completion_tokens: int """ - temperature: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The temperature parameter for sampling. Required.""" - top_p: float = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The top-p parameter for nucleus sampling. 
Required.""" - seed: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The random seed for reproducibility. Required.""" - max_completion_tokens: int = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The maximum number of tokens allowed in the completion. Required.""" + temperature: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The temperature parameter for sampling. Defaults to 1.0.""" + top_p: Optional[float] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The top-p parameter for nucleus sampling. Defaults to 1.0.""" + seed: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The random seed for reproducibility. Defaults to 42.""" + max_completion_tokens: Optional[int] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The maximum number of tokens allowed in the completion.""" @overload def __init__( self, *, - temperature: float, - top_p: float, - seed: int, - max_completion_tokens: int, + temperature: Optional[float] = None, + top_p: Optional[float] = None, + seed: Optional[int] = None, + max_completion_tokens: Optional[int] = None, ) -> None: ... @overload @@ -7998,8 +7983,8 @@ class PendingUploadRequest(_Model): :ivar connection_name: Azure Storage Account connection name to use for generating temporary SAS token. :vartype connection_name: str - :ivar pending_upload_type: BlobReference is the only supported type. Required. Blob Reference - is the only supported type. + :ivar pending_upload_type: The type of pending upload. Required. Blob Reference is the only + supported type. 
:vartype pending_upload_type: str or ~azure.ai.projects.models.BLOB_REFERENCE """ @@ -8014,7 +7999,7 @@ class PendingUploadRequest(_Model): pending_upload_type: Literal[PendingUploadType.BLOB_REFERENCE] = rest_field( name="pendingUploadType", visibility=["read", "create", "update", "delete", "query"] ) - """BlobReference is the only supported type. Required. Blob Reference is the only supported type.""" + """The type of pending upload. Required. Blob Reference is the only supported type.""" @overload def __init__( @@ -8036,7 +8021,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class PendingUploadResponse(_Model): +class PendingUploadResult(_Model): """Represents the response for a pending upload request. :ivar blob_reference: Container-level read, write, list SAS. Required. @@ -8046,8 +8031,8 @@ class PendingUploadResponse(_Model): :ivar version: Version of asset to be created if user did not specify version when initially creating upload. :vartype version: str - :ivar pending_upload_type: BlobReference is the only supported type. Required. Blob Reference - is the only supported type. + :ivar pending_upload_type: The type of pending upload. Required. Blob Reference is the only + supported type. :vartype pending_upload_type: str or ~azure.ai.projects.models.BLOB_REFERENCE """ @@ -8064,7 +8049,7 @@ class PendingUploadResponse(_Model): pending_upload_type: Literal[PendingUploadType.BLOB_REFERENCE] = rest_field( name="pendingUploadType", visibility=["read", "create", "update", "delete", "query"] ) - """BlobReference is the only supported type. Required. Blob Reference is the only supported type.""" + """The type of pending upload. Required. Blob Reference is the only supported type.""" @overload def __init__( @@ -8270,7 +8255,7 @@ class ProtocolVersionRecord(_Model): """A record mapping for a single protocol and its version. :ivar protocol: The protocol type. Required. 
Known values are: "activity_protocol", - "responses", and "invocations". + "responses", "mcp", and "invocations". :vartype protocol: str or ~azure.ai.projects.models.AgentProtocol :ivar version: The version string for the protocol, e.g. 'v0.1.1'. Required. :vartype version: str @@ -8279,8 +8264,8 @@ class ProtocolVersionRecord(_Model): protocol: Union[str, "_models.AgentProtocol"] = rest_field( visibility=["read", "create", "update", "delete", "query"] ) - """The protocol type. Required. Known values are: \"activity_protocol\", \"responses\", and - \"invocations\".""" + """The protocol type. Required. Known values are: \"activity_protocol\", \"responses\", \"mcp\", + and \"invocations\".""" version: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) """The version string for the protocol, e.g. 'v0.1.1'. Required.""" @@ -8511,7 +8496,7 @@ class RedTeam(_Model): :ivar status: Status of the red-team. It is set by service and is read-only. :vartype status: str :ivar target: Target configuration for the red-team run. Required. - :vartype target: ~azure.ai.projects.models.TargetConfig + :vartype target: ~azure.ai.projects.models.RedTeamTargetConfig """ name: str = rest_field(name="id", visibility=["read"]) @@ -8546,14 +8531,14 @@ class RedTeam(_Model): removed.""" status: Optional[str] = rest_field(visibility=["read"]) """Status of the red-team. It is set by service and is read-only.""" - target: "_models.TargetConfig" = rest_field(visibility=["read", "create", "update", "delete", "query"]) + target: "_models.RedTeamTargetConfig" = rest_field(visibility=["read", "create", "update", "delete", "query"]) """Target configuration for the red-team run. 
Required.""" @overload def __init__( self, *, - target: "_models.TargetConfig", + target: "_models.RedTeamTargetConfig", display_name: Optional[str] = None, num_turns: Optional[int] = None, attack_strategies: Optional[list[Union[str, "_models.AttackStrategy"]]] = None, @@ -8836,7 +8821,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class SessionDirectoryListResponse(_Model): +class SessionDirectoryListResult(_Model): """Response from listing a directory in a session sandbox. :ivar path: The path that was listed, relative to the session home directory. Required. @@ -8871,7 +8856,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) -class SessionFileWriteResponse(_Model): +class SessionFileWriteResult(_Model): """Response from uploading a file to a session sandbox. :ivar path: The path where the file was written, relative to the session home directory. @@ -9001,6 +8986,10 @@ class SharepointPreviewTool(Tool, discriminator="sharepoint_grounding_preview"): :ivar type: The object type, which is always 'sharepoint_grounding_preview'. Required. SHAREPOINT_GROUNDING_PREVIEW. :vartype type: str or ~azure.ai.projects.models.SHAREPOINT_GROUNDING_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str :ivar sharepoint_grounding_preview: The sharepoint grounding tool parameters. Required. :vartype sharepoint_grounding_preview: ~azure.ai.projects.models.SharepointGroundingToolParameters @@ -9009,6 +8998,10 @@ class SharepointPreviewTool(Tool, discriminator="sharepoint_grounding_preview"): type: Literal[ToolType.SHAREPOINT_GROUNDING_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'sharepoint_grounding_preview'. Required. 
SHAREPOINT_GROUNDING_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" sharepoint_grounding_preview: "_models.SharepointGroundingToolParameters" = rest_field( visibility=["read", "create", "update", "delete", "query"] ) @@ -9019,6 +9012,8 @@ def __init__( self, *, sharepoint_grounding_preview: "_models.SharepointGroundingToolParameters", + name: Optional[str] = None, + description: Optional[str] = None, ) -> None: ... @overload @@ -9033,7 +9028,7 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: self.type = ToolType.SHAREPOINT_GROUNDING_PREVIEW # type: ignore -class SkillObject(_Model): +class SkillDetails(_Model): """A skill object. :ivar skill_id: The unique identifier of the skill. Required. @@ -9666,6 +9661,46 @@ def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) +class ToolboxSearchPreviewTool(Tool, discriminator="toolbox_search_preview"): + """A tool for searching over the agent's toolbox. When present, deferred tools are hidden from + ``tools/list`` and only discoverable via ``search_tools`` queries at runtime. + + :ivar type: The type of the tool. Always ``toolbox_search_preview``. Required. + TOOLBOX_SEARCH_PREVIEW. + :vartype type: str or ~azure.ai.projects.models.TOOLBOX_SEARCH_PREVIEW + :ivar name: Optional user-defined name for this tool or configuration. + :vartype name: str + :ivar description: Optional user-defined description for this tool or configuration. + :vartype description: str + """ + + type: Literal[ToolType.TOOLBOX_SEARCH_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore + """The type of the tool. Always ``toolbox_search_preview``. 
Required. TOOLBOX_SEARCH_PREVIEW.""" + name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined name for this tool or configuration.""" + description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """Optional user-defined description for this tool or configuration.""" + + @overload + def __init__( + self, + *, + name: Optional[str] = None, + description: Optional[str] = None, + ) -> None: ... + + @overload + def __init__(self, mapping: Mapping[str, Any]) -> None: + """ + :param mapping: raw JSON to initialize the model. + :type mapping: Mapping[str, Any] + """ + + def __init__(self, *args: Any, **kwargs: Any) -> None: + super().__init__(*args, **kwargs) + self.type = ToolType.TOOLBOX_SEARCH_PREVIEW # type: ignore + + class ToolboxVersionObject(_Model): """A specific version of a toolbox. @@ -10611,30 +10646,28 @@ class WorkIQPreviewTool(Tool, discriminator="work_iq_preview"): :ivar type: The object type, which is always 'work_iq_preview'. Required. WORK_IQ_PREVIEW. :vartype type: str or ~azure.ai.projects.models.WORK_IQ_PREVIEW + :ivar project_connection_id: The ID of the WorkIQ project connection. Required. + :vartype project_connection_id: str :ivar name: Optional user-defined name for this tool or configuration. :vartype name: str :ivar description: Optional user-defined description for this tool or configuration. :vartype description: str - :ivar work_iq_preview: The WorkIQ tool parameters. Required. - :vartype work_iq_preview: ~azure.ai.projects.models.WorkIQPreviewToolParameters """ type: Literal[ToolType.WORK_IQ_PREVIEW] = rest_discriminator(name="type", visibility=["read", "create", "update", "delete", "query"]) # type: ignore """The object type, which is always 'work_iq_preview'. Required. 
WORK_IQ_PREVIEW.""" + project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) + """The ID of the WorkIQ project connection. Required.""" name: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) """Optional user-defined name for this tool or configuration.""" description: Optional[str] = rest_field(visibility=["read", "create", "update", "delete", "query"]) """Optional user-defined description for this tool or configuration.""" - work_iq_preview: "_models.WorkIQPreviewToolParameters" = rest_field( - visibility=["read", "create", "update", "delete", "query"] - ) - """The WorkIQ tool parameters. Required.""" @overload def __init__( self, *, - work_iq_preview: "_models.WorkIQPreviewToolParameters", + project_connection_id: str, name: Optional[str] = None, description: Optional[str] = None, ) -> None: ... @@ -10649,31 +10682,3 @@ def __init__(self, mapping: Mapping[str, Any]) -> None: def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) self.type = ToolType.WORK_IQ_PREVIEW # type: ignore - - -class WorkIQPreviewToolParameters(_Model): - """The WorkIQ tool parameters. - - :ivar project_connection_id: The ID of the WorkIQ project connection. Required. - :vartype project_connection_id: str - """ - - project_connection_id: str = rest_field(visibility=["read", "create", "update", "delete", "query"]) - """The ID of the WorkIQ project connection. Required.""" - - @overload - def __init__( - self, - *, - project_connection_id: str, - ) -> None: ... - - @overload - def __init__(self, mapping: Mapping[str, Any]) -> None: - """ - :param mapping: raw JSON to initialize the model. 
- :type mapping: Mapping[str, Any] - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py index 52f7a21b1821..00e5948d22f7 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_operations.py @@ -826,7 +826,7 @@ def build_beta_agents_patch_agent_details_request( # pylint: disable=name-too-l return HttpRequest(method="PATCH", url=_url, params=_params, headers=_headers, **kwargs) -def build_beta_agents_create_session_request(agent_name: str, *, isolation_key: str, **kwargs: Any) -> HttpRequest: +def build_beta_agents_create_session_request(agent_name: str, **kwargs: Any) -> HttpRequest: _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) _params = case_insensitive_dict(kwargs.pop("params", {}) or {}) @@ -846,7 +846,6 @@ def build_beta_agents_create_session_request(agent_name: str, *, isolation_key: _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str") # Construct headers - _headers["x-session-isolation-key"] = _SERIALIZER.header("isolation_key", isolation_key, "str") if content_type is not None: _headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str") _headers["Accept"] = _SERIALIZER.header("accept", accept, "str") @@ -879,10 +878,7 @@ def build_beta_agents_get_session_request(agent_name: str, session_id: str, **kw return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs) -def build_beta_agents_delete_session_request( - agent_name: str, session_id: str, *, isolation_key: str, **kwargs: Any -) -> HttpRequest: - _headers = case_insensitive_dict(kwargs.pop("headers", {}) or {}) +def build_beta_agents_delete_session_request(agent_name: str, session_id: str, **kwargs: Any) -> HttpRequest: _params = 
case_insensitive_dict(kwargs.pop("params", {}) or {}) api_version: str = kwargs.pop("api_version", _params.pop("api-version", "v1")) @@ -898,10 +894,7 @@ def build_beta_agents_delete_session_request( # Construct parameters _params["api-version"] = _SERIALIZER.query("api_version", api_version, "str") - # Construct headers - _headers["x-session-isolation-key"] = _SERIALIZER.header("isolation_key", isolation_key, "str") - - return HttpRequest(method="DELETE", url=_url, params=_params, headers=_headers, **kwargs) + return HttpRequest(method="DELETE", url=_url, params=_params, **kwargs) def build_beta_agents_list_sessions_request( @@ -4303,7 +4296,7 @@ def pending_upload( *, content_type: str = "application/json", **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -4315,8 +4308,8 @@ def pending_upload( :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ @@ -4329,7 +4322,7 @@ def pending_upload( *, content_type: str = "application/json", **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -4341,8 +4334,8 @@ def pending_upload( :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". 
:paramtype content_type: str - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ @@ -4355,7 +4348,7 @@ def pending_upload( *, content_type: str = "application/json", **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -4367,8 +4360,8 @@ def pending_upload( :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ @@ -4379,7 +4372,7 @@ def pending_upload( version: str, pending_upload_request: Union[_models.PendingUploadRequest, JSON, IO[bytes]], **kwargs: Any - ) -> _models.PendingUploadResponse: + ) -> _models.PendingUploadResult: """Start a new or get an existing pending upload of a dataset for a specific version. :param name: The name of the resource. Required. @@ -4390,8 +4383,8 @@ def pending_upload( types: PendingUploadRequest, JSON, IO[bytes] Required. :type pending_upload_request: ~azure.ai.projects.models.PendingUploadRequest or JSON or IO[bytes] - :return: PendingUploadResponse. The PendingUploadResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.PendingUploadResponse + :return: PendingUploadResult. 
The PendingUploadResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.PendingUploadResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -4406,7 +4399,7 @@ def pending_upload( _params = kwargs.pop("params", {}) or {} content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.PendingUploadResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.PendingUploadResult] = kwargs.pop("cls", None) content_type = content_type or "application/json" _content = None @@ -4449,7 +4442,7 @@ def pending_upload( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.PendingUploadResponse, response.json()) + deserialized = _deserialize(_models.PendingUploadResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -5190,7 +5183,7 @@ def patch_agent_details( agent_name: str, *, content_type: str = "application/merge-patch+json", - agent_endpoint: Optional[_models.AgentEndpoint] = None, + agent_endpoint: Optional[_models.AgentEndpointConfig] = None, agent_card: Optional[_models.AgentCard] = None, **kwargs: Any ) -> _models.AgentDetails: @@ -5202,7 +5195,7 @@ def patch_agent_details( Default value is "application/merge-patch+json". :paramtype content_type: str :keyword agent_endpoint: The endpoint configuration for the agent. Default value is None. - :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpoint + :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpointConfig :keyword agent_card: Optional agent card for the agent. Default value is None. :paramtype agent_card: ~azure.ai.projects.models.AgentCard :return: AgentDetails. 
The AgentDetails is compatible with MutableMapping @@ -5252,7 +5245,7 @@ def patch_agent_details( agent_name: str, body: Union[JSON, IO[bytes]] = _Unset, *, - agent_endpoint: Optional[_models.AgentEndpoint] = None, + agent_endpoint: Optional[_models.AgentEndpointConfig] = None, agent_card: Optional[_models.AgentCard] = None, **kwargs: Any ) -> _models.AgentDetails: @@ -5263,7 +5256,7 @@ def patch_agent_details( :param body: Is either a JSON type or a IO[bytes] type. Required. :type body: JSON or IO[bytes] :keyword agent_endpoint: The endpoint configuration for the agent. Default value is None. - :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpoint + :paramtype agent_endpoint: ~azure.ai.projects.models.AgentEndpointConfig :keyword agent_card: Optional agent card for the agent. Default value is None. :paramtype agent_card: ~azure.ai.projects.models.AgentCard :return: AgentDetails. The AgentDetails is compatible with MutableMapping @@ -5343,7 +5336,6 @@ def create_session( self, agent_name: str, *, - isolation_key: str, version_indicator: _models.VersionIndicator, content_type: str = "application/json", agent_session_id: Optional[str] = None, @@ -5355,9 +5347,6 @@ def create_session( :param agent_name: The name of the agent to create a session for. Required. :type agent_name: str - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword version_indicator: Determines which agent version backs the session. Required. :paramtype version_indicator: ~azure.ai.projects.models.VersionIndicator :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. 
@@ -5373,7 +5362,7 @@ def create_session( @overload def create_session( - self, agent_name: str, body: JSON, *, isolation_key: str, content_type: str = "application/json", **kwargs: Any + self, agent_name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any ) -> _models.AgentSessionResource: """Creates a new session for an agent endpoint. The endpoint resolves the backing agent version from ``version_indicator`` and enforces session ownership using the provided isolation key for @@ -5383,9 +5372,6 @@ def create_session( :type agent_name: str :param body: Required. :type body: JSON - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str @@ -5396,13 +5382,7 @@ def create_session( @overload def create_session( - self, - agent_name: str, - body: IO[bytes], - *, - isolation_key: str, - content_type: str = "application/json", - **kwargs: Any + self, agent_name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any ) -> _models.AgentSessionResource: """Creates a new session for an agent endpoint. The endpoint resolves the backing agent version from ``version_indicator`` and enforces session ownership using the provided isolation key for @@ -5412,9 +5392,6 @@ def create_session( :type agent_name: str :param body: Required. :type body: IO[bytes] - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". 
:paramtype content_type: str @@ -5429,7 +5406,6 @@ def create_session( agent_name: str, body: Union[JSON, IO[bytes]] = _Unset, *, - isolation_key: str, version_indicator: _models.VersionIndicator = _Unset, agent_session_id: Optional[str] = None, **kwargs: Any @@ -5442,9 +5418,6 @@ def create_session( :type agent_name: str :param body: Is either a JSON type or a IO[bytes] type. Required. :type body: JSON or IO[bytes] - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. - :paramtype isolation_key: str :keyword version_indicator: Determines which agent version backs the session. Required. :paramtype version_indicator: ~azure.ai.projects.models.VersionIndicator :keyword agent_session_id: Optional caller-provided session ID. If specified, it must be unique @@ -5482,7 +5455,6 @@ def create_session( _request = build_beta_agents_create_session_request( agent_name=agent_name, - isolation_key=isolation_key, content_type=content_type, api_version=self._config.api_version, content=_content, @@ -5595,7 +5567,7 @@ def get_session(self, agent_name: str, session_id: str, **kwargs: Any) -> _model @distributed_trace def delete_session( # pylint: disable=inconsistent-return-statements - self, agent_name: str, session_id: str, *, isolation_key: str, **kwargs: Any + self, agent_name: str, session_id: str, **kwargs: Any ) -> None: """Deletes a session synchronously. Returns 204 No Content when the session is deleted or does not exist. @@ -5604,9 +5576,6 @@ def delete_session( # pylint: disable=inconsistent-return-statements :type agent_name: str :param session_id: The session identifier. Required. :type session_id: str - :keyword isolation_key: Isolation key used by the agent endpoint to enforce session ownership - for session-mutating operations. Required. 
- :paramtype isolation_key: str :return: None :rtype: None :raises ~azure.core.exceptions.HttpResponseError: @@ -5627,7 +5596,6 @@ def delete_session( # pylint: disable=inconsistent-return-statements _request = build_beta_agents_delete_session_request( agent_name=agent_name, session_id=session_id, - isolation_key=isolation_key, api_version=self._config.api_version, headers=_headers, params=_params, @@ -5853,7 +5821,7 @@ def get_session_log_stream( @distributed_trace def _upload_session_file( self, agent_name: str, agent_session_id: str, content: bytes, *, path: str, **kwargs: Any - ) -> _models.SessionFileWriteResponse: + ) -> _models.SessionFileWriteResult: """Upload a file to the session sandbox via binary stream. Maximum file size is 50 MB. Uploads exceeding this limit return 413 Payload Too Large. @@ -5866,9 +5834,8 @@ def _upload_session_file( :keyword path: The destination file path within the sandbox, relative to the session home directory. Required. :paramtype path: str - :return: SessionFileWriteResponse. The SessionFileWriteResponse is compatible with - MutableMapping - :rtype: ~azure.ai.projects.models.SessionFileWriteResponse + :return: SessionFileWriteResult. 
The SessionFileWriteResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SessionFileWriteResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -5883,7 +5850,7 @@ def _upload_session_file( _params = kwargs.pop("params", {}) or {} content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/octet-stream")) - cls: ClsType[_models.SessionFileWriteResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.SessionFileWriteResult] = kwargs.pop("cls", None) _content = content @@ -5926,7 +5893,7 @@ def _upload_session_file( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SessionFileWriteResponse, response.json()) + deserialized = _deserialize(_models.SessionFileWriteResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -6007,7 +5974,7 @@ def download_session_file( @distributed_trace def get_session_files( self, agent_name: str, agent_session_id: str, *, path: str, **kwargs: Any - ) -> _models.SessionDirectoryListResponse: + ) -> _models.SessionDirectoryListResult: """List files and directories at a given path in the session sandbox. Returns only the immediate children of the specified directory (non-recursive). @@ -6017,9 +5984,9 @@ def get_session_files( :type agent_session_id: str :keyword path: The directory path to list, relative to the session home directory. Required. :paramtype path: str - :return: SessionDirectoryListResponse. The SessionDirectoryListResponse is compatible with + :return: SessionDirectoryListResult. 
The SessionDirectoryListResult is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SessionDirectoryListResponse + :rtype: ~azure.ai.projects.models.SessionDirectoryListResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -6033,7 +6000,7 @@ def get_session_files( _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[_models.SessionDirectoryListResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.SessionDirectoryListResult] = kwargs.pop("cls", None) _request = build_beta_agents_get_session_files_request( agent_name=agent_name, @@ -6072,7 +6039,7 @@ def get_session_files( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SessionDirectoryListResponse, response.json()) + deserialized = _deserialize(_models.SessionDirectoryListResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -6370,14 +6337,14 @@ def delete(self, name: str, **kwargs: Any) -> None: # pylint: disable=inconsist @overload def create( - self, name: str, body: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: ~azure.ai.projects.models.EvaluationTaxonomy + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". 
:paramtype content_type: str @@ -6388,14 +6355,14 @@ def create( @overload def create( - self, name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: JSON, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: JSON + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: JSON :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str @@ -6406,14 +6373,14 @@ def create( @overload def create( - self, name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: IO[bytes], *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: IO[bytes] + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: IO[bytes] :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str @@ -6424,15 +6391,15 @@ def create( @distributed_trace def create( - self, name: str, body: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any + self, name: str, taxonomy: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any ) -> _models.EvaluationTaxonomy: """Create an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, JSON, - IO[bytes] Required. 
- :type body: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] + :param taxonomy: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, + JSON, IO[bytes] Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] :return: EvaluationTaxonomy. The EvaluationTaxonomy is compatible with MutableMapping :rtype: ~azure.ai.projects.models.EvaluationTaxonomy :raises ~azure.core.exceptions.HttpResponseError: @@ -6453,10 +6420,10 @@ def create( content_type = content_type or "application/json" _content = None - if isinstance(body, (IOBase, bytes)): - _content = body + if isinstance(taxonomy, (IOBase, bytes)): + _content = taxonomy else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + _content = json.dumps(taxonomy, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore _request = build_beta_evaluation_taxonomies_create_request( name=name, @@ -6500,14 +6467,14 @@ def create( @overload def update( - self, name: str, body: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: _models.EvaluationTaxonomy, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: ~azure.ai.projects.models.EvaluationTaxonomy + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". 
:paramtype content_type: str @@ -6518,14 +6485,14 @@ def update( @overload def update( - self, name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: JSON, *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: JSON + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: JSON :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str @@ -6536,14 +6503,14 @@ def update( @overload def update( - self, name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any + self, name: str, taxonomy: IO[bytes], *, content_type: str = "application/json", **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Required. - :type body: IO[bytes] + :param taxonomy: The evaluation taxonomy. Required. + :type taxonomy: IO[bytes] :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str @@ -6554,15 +6521,15 @@ def update( @distributed_trace def update( - self, name: str, body: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any + self, name: str, taxonomy: Union[_models.EvaluationTaxonomy, JSON, IO[bytes]], **kwargs: Any ) -> _models.EvaluationTaxonomy: """Update an evaluation taxonomy. :param name: The name of the evaluation taxonomy. Required. :type name: str - :param body: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, JSON, - IO[bytes] Required. 
- :type body: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] + :param taxonomy: The evaluation taxonomy. Is one of the following types: EvaluationTaxonomy, + JSON, IO[bytes] Required. + :type taxonomy: ~azure.ai.projects.models.EvaluationTaxonomy or JSON or IO[bytes] :return: EvaluationTaxonomy. The EvaluationTaxonomy is compatible with MutableMapping :rtype: ~azure.ai.projects.models.EvaluationTaxonomy :raises ~azure.core.exceptions.HttpResponseError: @@ -6583,10 +6550,10 @@ def update( content_type = content_type or "application/json" _content = None - if isinstance(body, (IOBase, bytes)): - _content = body + if isinstance(taxonomy, (IOBase, bytes)): + _content = taxonomy else: - _content = json.dumps(body, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore + _content = json.dumps(taxonomy, cls=SdkJSONEncoder, exclude_readonly=True) # type: ignore _request = build_beta_evaluation_taxonomies_update_request( name=name, @@ -10230,7 +10197,7 @@ def create( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Creates a skill. :keyword name: The unique name of the skill. Required. @@ -10250,13 +10217,13 @@ def create( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload - def create(self, body: JSON, *, content_type: str = "application/json", **kwargs: Any) -> _models.SkillObject: + def create(self, body: JSON, *, content_type: str = "application/json", **kwargs: Any) -> _models.SkillDetails: """Creates a skill. 
:param body: Required. @@ -10264,13 +10231,13 @@ def create(self, body: JSON, *, content_type: str = "application/json", **kwargs :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload - def create(self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any) -> _models.SkillObject: + def create(self, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any) -> _models.SkillDetails: """Creates a skill. :param body: Required. @@ -10278,8 +10245,8 @@ def create(self, body: IO[bytes], *, content_type: str = "application/json", **k :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @@ -10293,7 +10260,7 @@ def create( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Creates a skill. :param body: Is either a JSON type or a IO[bytes] type. Required. @@ -10312,8 +10279,8 @@ def create( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. 
The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -10328,7 +10295,7 @@ def create( _params = kwargs.pop("params", {}) or {} content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) if body is _Unset: if name is _Unset: @@ -10378,7 +10345,7 @@ def create( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -10386,13 +10353,13 @@ def create( return deserialized # type: ignore @distributed_trace - def create_from_package(self, body: bytes, **kwargs: Any) -> _models.SkillObject: + def create_from_package(self, content: bytes, **kwargs: Any) -> _models.SkillDetails: """Creates a skill from a zip package. - :param body: The zip package used to create the skill. Required. - :type body: bytes - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :param content: The zip package used to create the skill. Required. + :type content: bytes + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -10407,9 +10374,9 @@ def create_from_package(self, body: bytes, **kwargs: Any) -> _models.SkillObject _params = kwargs.pop("params", {}) or {} content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/zip")) - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) - _content = body + _content = content _request = build_beta_skills_create_from_package_request( content_type=content_type, @@ -10447,7 +10414,7 @@ def create_from_package(self, body: bytes, **kwargs: Any) -> _models.SkillObject if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -10455,13 +10422,13 @@ def create_from_package(self, body: bytes, **kwargs: Any) -> _models.SkillObject return deserialized # type: ignore @distributed_trace - def get(self, name: str, **kwargs: Any) -> _models.SkillObject: + def get(self, name: str, **kwargs: Any) -> _models.SkillDetails: """Retrieves a skill. :param name: The unique name of the skill. Required. :type name: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -10475,7 +10442,7 @@ def get(self, name: str, **kwargs: Any) -> _models.SkillObject: _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) _request = build_beta_skills_get_request( name=name, @@ -10512,7 +10479,7 @@ def get(self, name: str, **kwargs: Any) -> _models.SkillObject: if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -10592,7 +10559,7 @@ def list( order: Optional[Union[str, _models.PageOrder]] = None, before: Optional[str] = None, **kwargs: Any - ) -> ItemPaged["_models.SkillObject"]: + ) -> ItemPaged["_models.SkillDetails"]: """Returns the list of all skills. :keyword limit: A limit on the number of objects to be returned. Limit can range between 1 and @@ -10609,14 +10576,14 @@ def list( subsequent call can include before=obj_foo in order to fetch the previous page of the list. Default value is None. 
:paramtype before: str - :return: An iterator like instance of SkillObject - :rtype: ~azure.core.paging.ItemPaged[~azure.ai.projects.models.SkillObject] + :return: An iterator like instance of SkillDetails + :rtype: ~azure.core.paging.ItemPaged[~azure.ai.projects.models.SkillDetails] :raises ~azure.core.exceptions.HttpResponseError: """ _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[List[_models.SkillObject]] = kwargs.pop("cls", None) + cls: ClsType[List[_models.SkillDetails]] = kwargs.pop("cls", None) error_map: MutableMapping = { 401: ClientAuthenticationError, @@ -10646,7 +10613,7 @@ def prepare_request(_continuation_token=None): def extract_data(pipeline_response): deserialized = pipeline_response.http_response.json() list_of_elem = _deserialize( - List[_models.SkillObject], + List[_models.SkillDetails], deserialized.get("data", []), ) if cls: @@ -10684,7 +10651,7 @@ def update( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. @@ -10704,15 +10671,15 @@ def update( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload def update( self, name: str, body: JSON, *, content_type: str = "application/json", **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. 
@@ -10722,15 +10689,15 @@ def update( :keyword content_type: Body Parameter content-type. Content type parameter for JSON body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @overload def update( self, name: str, body: IO[bytes], *, content_type: str = "application/json", **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. @@ -10740,8 +10707,8 @@ def update( :keyword content_type: Body Parameter content-type. Content type parameter for binary body. Default value is "application/json". :paramtype content_type: str - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ @@ -10755,7 +10722,7 @@ def update( instructions: Optional[str] = None, metadata: Optional[dict[str, str]] = None, **kwargs: Any - ) -> _models.SkillObject: + ) -> _models.SkillDetails: """Updates an existing skill. :param name: The unique name of the skill. Required. @@ -10774,8 +10741,8 @@ def update( Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters. Default value is None. :paramtype metadata: dict[str, str] - :return: SkillObject. The SkillObject is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SkillObject + :return: SkillDetails. 
The SkillDetails is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.SkillDetails :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -10790,7 +10757,7 @@ def update( _params = kwargs.pop("params", {}) or {} content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None)) - cls: ClsType[_models.SkillObject] = kwargs.pop("cls", None) + cls: ClsType[_models.SkillDetails] = kwargs.pop("cls", None) if body is _Unset: body = {"description": description, "instructions": instructions, "metadata": metadata} @@ -10839,7 +10806,7 @@ def update( if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.SkillObject, response.json()) + deserialized = _deserialize(_models.SkillDetails, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore @@ -10847,13 +10814,13 @@ def update( return deserialized # type: ignore @distributed_trace - def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResponse: + def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResult: """Deletes a skill. :param name: The unique name of the skill. Required. :type name: str - :return: DeleteSkillResponse. The DeleteSkillResponse is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.DeleteSkillResponse + :return: DeleteSkillResult. 
The DeleteSkillResult is compatible with MutableMapping + :rtype: ~azure.ai.projects.models.DeleteSkillResult :raises ~azure.core.exceptions.HttpResponseError: """ error_map: MutableMapping = { @@ -10867,7 +10834,7 @@ def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResponse: _headers = kwargs.pop("headers", {}) or {} _params = kwargs.pop("params", {}) or {} - cls: ClsType[_models.DeleteSkillResponse] = kwargs.pop("cls", None) + cls: ClsType[_models.DeleteSkillResult] = kwargs.pop("cls", None) _request = build_beta_skills_delete_request( name=name, @@ -10904,7 +10871,7 @@ def delete(self, name: str, **kwargs: Any) -> _models.DeleteSkillResponse: if _stream: deserialized = response.iter_bytes() if _decompress else response.iter_raw() else: - deserialized = _deserialize(_models.DeleteSkillResponse, response.json()) + deserialized = _deserialize(_models.DeleteSkillResult, response.json()) if cls: return cls(pipeline_response, deserialized, {}) # type: ignore diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_datasets.py b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_datasets.py index bf2c0db51271..e8c13ff64627 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_datasets.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_datasets.py @@ -21,7 +21,7 @@ FileDatasetVersion, FolderDatasetVersion, PendingUploadRequest, - PendingUploadResponse, + PendingUploadResult, PendingUploadType, ) @@ -47,7 +47,7 @@ def _create_dataset_and_get_its_container_client( connection_name: Optional[str] = None, ) -> Tuple[ContainerClient, str]: - pending_upload_response: PendingUploadResponse = self.pending_upload( + pending_upload_response: PendingUploadResult = self.pending_upload( name=name, version=input_version, pending_upload_request=PendingUploadRequest( diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_sessions.py 
b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_sessions.py index 053bbc8292ed..8d417620e725 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_sessions.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/operations/_patch_sessions.py @@ -34,7 +34,7 @@ def upload_session_file( # type: ignore[override] *, path: str, **kwargs: Any, - ) -> _models.SessionFileWriteResponse: + ) -> _models.SessionFileWriteResult: """Upload a file to the session sandbox. Accepts either a ``bytes`` buffer or a local file path (``str``). @@ -52,9 +52,9 @@ def upload_session_file( # type: ignore[override] :keyword path: The destination file path within the sandbox, relative to the session home directory. Required. :paramtype path: str - :return: SessionFileWriteResponse. The SessionFileWriteResponse is compatible with + :return: SessionFileWriteResult. The SessionFileWriteResult is compatible with MutableMapping - :rtype: ~azure.ai.projects.models.SessionFileWriteResponse + :rtype: ~azure.ai.projects.models.SessionFileWriteResult :raises ~azure.core.exceptions.HttpResponseError: :raises FileNotFoundError: If *content_or_file_path* is a ``str`` and the file does not exist. 
""" diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_ai_project_instrumentor.py b/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_ai_project_instrumentor.py index 1a22ca314704..c23a994b795e 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_ai_project_instrumentor.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_ai_project_instrumentor.py @@ -645,7 +645,7 @@ def _add_message_event( # pylint: disable=too-many-branches,too-many-statements attribute_name = GEN_AI_INPUT_MESSAGES # Set the attribute on the span - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): span.add_attribute(attribute_name, message_json) def _get_field(self, obj: Any, field: str) -> Any: @@ -722,7 +722,7 @@ def _add_instructions_event( # Use attributes for instructions tracing # System instructions format: array of content objects without role/parts wrapper message_json = json.dumps(content_array, ensure_ascii=False) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): span.add_attribute(GEN_AI_SYSTEM_MESSAGE, message_json) def _status_to_string(self, status: Any) -> str: @@ -782,7 +782,7 @@ def start_create_agent_span( # pylint: disable=too-many-locals reasoning_summary=reasoning_summary, structured_inputs=(str(structured_inputs) if structured_inputs is not None else None), ) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): span.add_attribute(GEN_AI_OPERATION_NAME, OperationName.CREATE_AGENT.value) if name: span.add_attribute(GEN_AI_AGENT_NAME, name) @@ -842,7 +842,7 @@ def start_create_thread_span( # _tool_resources: Optional["ToolResources"] = None, ) -> "Optional[AbstractSpan]": span = start_span(OperationName.CREATE_THREAD, server_address=server_address, port=port) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): for message in messages or []: 
self.add_thread_message_event(span, message) diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_responses_instrumentor.py b/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_responses_instrumentor.py index 95cb28183b35..fdb3cc456214 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_responses_instrumentor.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_responses_instrumentor.py @@ -523,7 +523,7 @@ def _set_attributes(self, span: "AbstractSpan", *attrs: Tuple[str, Any]) -> None def _set_span_attribute_safe(self, span: "AbstractSpan", key: str, value: Any) -> None: """Safely set a span attribute only if the value is meaningful.""" - if not span or not span.span_instance.is_recording: + if not span or not span.span_instance.is_recording(): return # Only set attribute if value exists and is meaningful @@ -846,7 +846,7 @@ def _add_workflow_action_events( conversation_id: Optional[str] = None, ) -> None: """Add workflow action events to the span for workflow agents.""" - if not span or not span.span_instance.is_recording: + if not span or not span.span_instance.is_recording(): return # Check if response has output items @@ -1149,7 +1149,7 @@ def _add_tool_call_events( # pylint: disable=too-many-branches conversation_id: Optional[str] = None, ) -> None: """Add tool call events to the span from response output.""" - if not span or not span.span_instance.is_recording: + if not span or not span.span_instance.is_recording(): return # Extract function calls and tool calls from response output @@ -1638,7 +1638,7 @@ def start_responses_span( gen_ai_provider=RESPONSES_PROVIDER, ) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): # Set operation name attribute (start_span doesn't set this automatically) self._set_attributes( span, @@ -2614,7 +2614,7 @@ def cleanup(self): # Join all accumulated output content complete_content = "".join(self.accumulated_output) - if 
self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): # Add tool call events if we detected any output items (tool calls, etc.) if self.has_output_items: # Create mock response with output items for event generation @@ -2721,7 +2721,7 @@ def __init__( ) # End span with proper status - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): self.span.span_instance.set_status( # pyright: ignore [reportPossiblyUnboundVariable] StatusCode.OK @@ -2764,7 +2764,7 @@ def __next__(self): span_attributes=span_attributes, error_type=str(type(e).__name__), ) - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): self.span.span_instance.set_status( # pyright: ignore [reportPossiblyUnboundVariable] StatusCode.ERROR, @@ -2791,7 +2791,7 @@ def _finalize_span(self): span_attributes=span_attributes, ) - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): # Note: For streaming responses, response metadata like tokens, finish_reasons # are typically not available in individual chunks, so we focus on content. @@ -3092,7 +3092,7 @@ def cleanup(self): # Join all accumulated output content complete_content = "".join(self.accumulated_output) - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): # Add tool call events if we detected any output items (tool calls, etc.) 
if self.has_output_items: # Create mock response with output items for event generation @@ -3199,7 +3199,7 @@ def __init__( ) # End span with proper status - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): self.span.span_instance.set_status( # pyright: ignore [reportPossiblyUnboundVariable] StatusCode.OK @@ -3241,7 +3241,7 @@ async def __anext__(self): span_attributes=span_attributes, error_type=str(type(e).__name__), ) - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): self.span.span_instance.set_status( # pyright: ignore [reportPossiblyUnboundVariable] StatusCode.ERROR, @@ -3268,7 +3268,7 @@ def _finalize_span(self): span_attributes=span_attributes, ) - if self.span.span_instance.is_recording: + if self.span.span_instance.is_recording(): # Note: For streaming responses, response metadata like tokens, finish_reasons # are typically not available in individual chunks, so we focus on content. @@ -3407,7 +3407,7 @@ def start_create_conversation_span( gen_ai_provider=RESPONSES_PROVIDER, ) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): self._set_span_attribute_safe(span, GEN_AI_OPERATION_NAME, OperationName.CREATE_CONVERSATION.value) return span @@ -3605,7 +3605,7 @@ def start_list_conversation_items_span( gen_ai_provider=RESPONSES_PROVIDER, ) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): # Set operation name attribute (start_span doesn't set this automatically) self._set_attributes( span, @@ -3624,7 +3624,7 @@ def _add_conversation_item_event( # pylint: disable=too-many-branches,too-many- item: Any, ) -> None: """Add a conversation item event to the span.""" - if not span or not span.span_instance.is_recording: + if not span or not span.span_instance.is_recording(): return # Extract basic item information diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_utils.py 
b/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_utils.py index 931c3d2abf7b..47047e2720c3 100644 --- a/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_utils.py +++ b/sdk/ai/azure-ai-projects/azure/ai/projects/telemetry/_utils.py @@ -223,7 +223,7 @@ def start_span( schema_version=GEN_AI_SEMANTIC_CONVENTIONS_SCHEMA_VERSION, ) - if span and span.span_instance.is_recording: + if span and span.span_instance.is_recording(): span.add_attribute(AZ_NAMESPACE, AZ_NAMESPACE_VALUE) span.add_attribute(GEN_AI_PROVIDER_NAME, AGENTS_PROVIDER) diff --git a/sdk/ai/azure-ai-projects/cspell.json b/sdk/ai/azure-ai-projects/cspell.json index b3584c59e861..7decf206d14a 100644 --- a/sdk/ai/azure-ai-projects/cspell.json +++ b/sdk/ai/azure-ai-projects/cspell.json @@ -12,6 +12,7 @@ "closefd", "cogsvc", "CSDL", + "dargilco", "dedup", "evals", "FineTuning", @@ -31,8 +32,8 @@ "Tadmaq", "Udbk", "UPIA", - "xhigh", - "Vnext" + "Vnext", + "xhigh" ], "ignorePaths": [ "*.csv", diff --git a/sdk/ai/azure-ai-projects/samples/agents/tools/sample_toolboxes_with_search_preview.py b/sdk/ai/azure-ai-projects/samples/agents/tools/sample_toolboxes_with_search_preview.py new file mode 100644 index 000000000000..7c8a830f55f7 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/tools/sample_toolboxes_with_search_preview.py @@ -0,0 +1,124 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to create a Toolbox in tool-search mode and + invoke it from a Prompt Agent using the synchronous AIProjectClient and + the OpenAI-compatible client. + + A toolbox version that includes 'ToolboxSearchPreviewTool' exposes only + two meta tools at its '/mcp' endpoint -- 'tool_search' and 'call_tool' + -- and defers every other tool behind them. 
The agent uses an 'MCPTool' + pointed at the toolbox's versioned '/mcp' URL to discover and invoke + those inner tools. + + Toolboxes and tool search are preview features. CRUD goes through + 'project_client.beta.toolboxes'. + +USAGE: + python sample_toolboxes_with_search_preview.py + + Before running the sample: + + pip install "azure-ai-projects>=2.2.0" python-dotenv openai + + Set these environment variables with your own values: + 1) FOUNDRY_PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Microsoft Foundry portal. + 2) FOUNDRY_MODEL_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Microsoft Foundry project. + 3) MCP_PROJECT_CONNECTION_ID - The connection resource ID in Custom keys used by + the inner MCP server inside the toolbox. +""" + +import os +from dotenv import load_dotenv +from azure.identity import DefaultAzureCredential +from azure.ai.projects import AIProjectClient +from azure.ai.projects.models import ( + MCPTool, + PromptAgentDefinition, + ToolboxSearchPreviewTool, +) + +load_dotenv() + +endpoint = os.environ["FOUNDRY_PROJECT_ENDPOINT"] + +TOOLBOX_NAME = "toolbox_with_mcp_tool" +INNER_MCP_LABEL = "github" +INNER_MCP_URL = "https://api.githubcopilot.com/mcp" +TOOLBOX_MCP_LABEL = "search-tool" + + +with ( + DefaultAzureCredential() as credential, + AIProjectClient(endpoint=endpoint, credential=credential) as project_client, + project_client.get_openai_client() as openai_client, +): + + inner_mcp_tool = MCPTool( + server_label=INNER_MCP_LABEL, + server_url=INNER_MCP_URL, + require_approval="never", + project_connection_id=os.environ["MCP_PROJECT_CONNECTION_ID"], + ) + + toolbox_version = project_client.beta.toolboxes.create_version( + name=TOOLBOX_NAME, + description=f"Toolbox with `{INNER_MCP_LABEL}` MCP server and tool search enabled.", + tools=[inner_mcp_tool, ToolboxSearchPreviewTool()], + ) + print(f"Created toolbox `{TOOLBOX_NAME}` 
(version {toolbox_version.version}).") + + toolbox_mcp_url = f"{endpoint}/toolboxes/{TOOLBOX_NAME}/versions/{toolbox_version.version}/mcp?api-version=v1" + token = credential.get_token("https://ai.azure.com/.default").token + + toolbox_mcp_tool = MCPTool( + server_label=TOOLBOX_MCP_LABEL, + server_url=toolbox_mcp_url, + authorization=token, + headers={"Foundry-Features": "Toolboxes=V1Preview"}, + require_approval="never", + ) + + agent = project_client.agents.create_version( + agent_name="MyAgent", + definition=PromptAgentDefinition( + model=os.environ["FOUNDRY_MODEL_NAME"], + instructions=( + "Always use the toolbox search tool to answer questions and perform tasks. " + "Use `tool_search` to discover a relevant tool, then `call_tool` " + "with the tool name returned by the search." + ), + tools=[toolbox_mcp_tool], + ), + ) + print(f"Agent created (id: {agent.id}, name: {agent.name}, version: {agent.version}).") + + response = openai_client.responses.create( + input="What is my username in my GitHub profile?", + extra_body={"agent_reference": {"name": agent.name, "type": "agent_reference"}}, + ) + + for item in response.output: + if item.type == "mcp_approval_request": + print(f"server_label={item.server_label}, name={item.name}") + elif item.type == "mcp_list_tools": + print(f"server_label={item.server_label}, tools={[t.name for t in (item.tools or [])]}") + elif item.type == "mcp_call": + print(f"server_label={item.server_label}, name={item.name}, error={item.error}") + else: + print() + + print(f"Response: {response.output_text}") + + project_client.beta.toolboxes.delete_version(name=toolbox_version.name, version=toolbox_version.version) + print(f"Toolbox version {toolbox_version.version} deleted.") + + project_client.agents.delete_version(agent_name=agent.name, agent_version=agent.version) + print(f"Agent version {agent.version} deleted.") diff --git a/sdk/ai/azure-ai-projects/samples/agents/tools/sample_toolboxes_with_search_preview_async.py
b/sdk/ai/azure-ai-projects/samples/agents/tools/sample_toolboxes_with_search_preview_async.py new file mode 100644 index 000000000000..b5bcc18661d6 --- /dev/null +++ b/sdk/ai/azure-ai-projects/samples/agents/tools/sample_toolboxes_with_search_preview_async.py @@ -0,0 +1,130 @@ +# pylint: disable=line-too-long,useless-suppression +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ + +""" +DESCRIPTION: + This sample demonstrates how to create a Toolbox in tool-search mode and + invoke it from a Prompt Agent using the asynchronous AIProjectClient and + the OpenAI-compatible client. + + A toolbox version that includes 'ToolboxSearchPreviewTool' exposes only + two meta tools at its '/mcp' endpoint -- 'tool_search' and 'call_tool' + -- and defers every other tool behind them. The agent uses an 'MCPTool' + pointed at the toolbox's versioned '/mcp' URL to discover and invoke + those inner tools. + + Toolboxes and tool search are preview features. CRUD goes through + 'project_client.beta.toolboxes'. + +USAGE: + python sample_toolboxes_with_search_preview_async.py + + Before running the sample: + + pip install "azure-ai-projects>=2.2.0" python-dotenv openai aiohttp + + Set these environment variables with your own values: + 1) FOUNDRY_PROJECT_ENDPOINT - The Azure AI Project endpoint, as found in the Overview + page of your Microsoft Foundry portal. + 2) FOUNDRY_MODEL_NAME - The deployment name of the AI model, as found under the "Name" column in + the "Models + endpoints" tab in your Microsoft Foundry project. + 3) MCP_PROJECT_CONNECTION_ID - The connection resource ID in Custom keys used by + the inner MCP server inside the toolbox. 
+""" + +import asyncio +import os +from dotenv import load_dotenv +from azure.identity.aio import DefaultAzureCredential +from azure.ai.projects.aio import AIProjectClient +from azure.ai.projects.models import ( + MCPTool, + PromptAgentDefinition, + ToolboxSearchPreviewTool, +) + +load_dotenv() + +endpoint = os.environ["FOUNDRY_PROJECT_ENDPOINT"] + +TOOLBOX_NAME = "toolbox_with_mcp_tool" +INNER_MCP_LABEL = "github" +INNER_MCP_URL = "https://api.githubcopilot.com/mcp" +TOOLBOX_MCP_LABEL = "search-tool" + + +async def main() -> None: + async with ( + DefaultAzureCredential() as credential, + AIProjectClient(endpoint=endpoint, credential=credential) as project_client, + project_client.get_openai_client() as openai_client, + ): + + inner_mcp_tool = MCPTool( + server_label=INNER_MCP_LABEL, + server_url=INNER_MCP_URL, + require_approval="never", + project_connection_id=os.environ["MCP_PROJECT_CONNECTION_ID"], + ) + + toolbox_version = await project_client.beta.toolboxes.create_version( + name=TOOLBOX_NAME, + description=f"Toolbox with `{INNER_MCP_LABEL}` MCP server and tool search enabled.", + tools=[inner_mcp_tool, ToolboxSearchPreviewTool()], + ) + print(f"Created toolbox `{TOOLBOX_NAME}` (version {toolbox_version.version}).") + + toolbox_mcp_url = f"{endpoint}/toolboxes/{TOOLBOX_NAME}/versions/{toolbox_version.version}/mcp?api-version=v1" + token = (await credential.get_token("https://ai.azure.com/.default")).token + + toolbox_mcp_tool = MCPTool( + server_label=TOOLBOX_MCP_LABEL, + server_url=toolbox_mcp_url, + authorization=token, + headers={"Foundry-Features": "Toolboxes=V1Preview"}, + require_approval="never", + ) + + agent = await project_client.agents.create_version( + agent_name="MyAgent", + definition=PromptAgentDefinition( + model=os.environ["FOUNDRY_MODEL_NAME"], + instructions=( + "Always use the toolbox search tool to answer questions and perform tasks. 
" + "Use `tool_search` to discover a relevant tool, then `call_tool` " + "with the tool name returned by the search." + ), + tools=[toolbox_mcp_tool], + ), + ) + print(f"Agent created (id: {agent.id}, name: {agent.name}, version: {agent.version}).") + + response = await openai_client.responses.create( + input="What is my username in my GitHub profile?", + extra_body={"agent_reference": {"name": agent.name, "type": "agent_reference"}}, + ) + + for item in response.output: + if item.type == "mcp_approval_request": + print(f"server_label={item.server_label}, name={item.name}") + elif item.type == "mcp_list_tools": + print(f"server_label={item.server_label}, tools={[t.name for t in (item.tools or [])]}") + elif item.type == "mcp_call": + print(f"server_label={item.server_label}, name={item.name}, error={item.error}") + else: + print() + + print(f"Response: {response.output_text}") + + await project_client.beta.toolboxes.delete_version(name=toolbox_version.name, version=toolbox_version.version) + print(f"Toolbox version {toolbox_version.version} deleted.") + + await project_client.agents.delete_version(agent_name=agent.name, agent_version=agent.version) + print(f"Agent version {agent.version} deleted.") + + +if __name__ == "__main__": + asyncio.run(main()) diff --git a/sdk/ai/azure-ai-projects/samples/evaluations/sample_redteam_evaluations.py b/sdk/ai/azure-ai-projects/samples/evaluations/sample_redteam_evaluations.py index 82cd63dfe664..a88d71125b12 100644 --- a/sdk/ai/azure-ai-projects/samples/evaluations/sample_redteam_evaluations.py +++ b/sdk/ai/azure-ai-projects/samples/evaluations/sample_redteam_evaluations.py @@ -95,7 +95,7 @@ def main() -> None: # pylint: disable=too-many-statements description="Taxonomy for red teaming evaluation", taxonomy_input=agent_taxonomy_input ) - taxonomy = project_client.beta.evaluation_taxonomies.create(name=agent_name, body=eval_taxonomy_input) + taxonomy = project_client.beta.evaluation_taxonomies.create(name=agent_name,
taxonomy=eval_taxonomy_input) taxonomy_path = os.path.join(tempfile.gettempdir(), f"taxonomy_{agent_name}.json") with open(taxonomy_path, "w", encoding="utf-8") as f: f.write(json.dumps(_to_json_primitive(taxonomy), indent=2)) diff --git a/sdk/ai/azure-ai-projects/samples/evaluations/sample_scheduled_evaluations.py b/sdk/ai/azure-ai-projects/samples/evaluations/sample_scheduled_evaluations.py index 85c89d83abad..5b66ce656b07 100644 --- a/sdk/ai/azure-ai-projects/samples/evaluations/sample_scheduled_evaluations.py +++ b/sdk/ai/azure-ai-projects/samples/evaluations/sample_scheduled_evaluations.py @@ -380,7 +380,7 @@ def schedule_redteam_evaluation() -> None: # pylint: disable=too-many-locals description="Taxonomy for red teaming evaluation", taxonomy_input=agent_taxonomy_input ) - taxonomy = project_client.beta.evaluation_taxonomies.create(name=agent_name, body=eval_taxonomy_input) + taxonomy = project_client.beta.evaluation_taxonomies.create(name=agent_name, taxonomy=eval_taxonomy_input) taxonomy_path = os.path.join(data_folder, f"taxonomy_{agent_name}.json") # Create the data folder if it doesn't exist os.makedirs(data_folder, exist_ok=True) diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/hosted_agents_util.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/hosted_agents_util.py index 9641a6fb2616..5e64d18a80eb 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/hosted_agents_util.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/hosted_agents_util.py @@ -86,7 +86,6 @@ def create_agent_and_session( project_client: AIProjectClient, agent_name: str, image: str, - isolation_key: str = "sample-isolation-key", ): agent = project_client.agents.create_version( agent_name=agent_name, @@ -110,7 +109,6 @@ def create_agent_and_session( session = project_client.beta.agents.create_session( agent_name=agent_name, - isolation_key=isolation_key, version_indicator=VersionRefIndicator(agent_version=agent.version), ) print(f"Session created (id: 
{session.agent_session_id}, status: {session.status})") @@ -121,7 +119,6 @@ def create_agent_and_session( project_client.beta.agents.delete_session( agent_name=agent_name, session_id=session.agent_session_id, - isolation_key=isolation_key, ) print(f"Session with id: {session.agent_session_id} deleted.") @@ -134,7 +131,6 @@ async def create_agent_and_session_async( project_client: AsyncAIProjectClient, agent_name: str, image: str, - isolation_key: str = "sample-isolation-key", ) -> AsyncGenerator[tuple[str, str], None]: agent = await project_client.agents.create_version( agent_name=agent_name, @@ -158,7 +154,6 @@ async def create_agent_and_session_async( session = await project_client.beta.agents.create_session( agent_name=agent_name, - isolation_key=isolation_key, version_indicator=VersionRefIndicator(agent_version=agent.version), ) print(f"Session created (id: {session.agent_session_id}, status: {session.status})") @@ -169,7 +164,6 @@ async def create_agent_and_session_async( await project_client.beta.agents.delete_session( agent_name=agent_name, session_id=session.agent_session_id, - isolation_key=isolation_key, ) print(f"Session with id: {session.agent_session_id} deleted.") diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint.py index 0833548e857e..c23fb5bbbed2 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint.py @@ -42,7 +42,7 @@ from azure.ai.projects import AIProjectClient from azure.ai.projects.models import ( - AgentEndpoint, + AgentEndpointConfig, AgentEndpointProtocol, FixedRatioVersionSelectionRule, VersionSelector, @@ -66,7 +66,7 @@ ): # Configure endpoint routing so this agent name serves the created version. # 100% of traffic is routed to the single created version. 
- endpoint_config = AgentEndpoint( + endpoint_config = AgentEndpointConfig( version_selector=VersionSelector( version_selection_rules=[ FixedRatioVersionSelectionRule(agent_version=agent.version, traffic_percentage=100), diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint_async.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint_async.py index 91a9d8ac2437..3312464993c8 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint_async.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_agent_endpoint_async.py @@ -43,7 +43,7 @@ from azure.ai.projects.aio import AIProjectClient from azure.ai.projects.models import ( - AgentEndpoint, + AgentEndpointConfig, AgentEndpointProtocol, FixedRatioVersionSelectionRule, VersionSelector, @@ -69,7 +69,7 @@ async def main() -> None: ): # Configure endpoint routing so this agent name serves the created version. # 100% of traffic is routed to the single created version. - endpoint_config = AgentEndpoint( + endpoint_config = AgentEndpointConfig( version_selector=VersionSelector( version_selection_rules=[ FixedRatioVersionSelectionRule(agent_version=agent_version, traffic_percentage=100), diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream.py index e2658265cbb7..cb811839f9b4 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream.py @@ -41,7 +41,7 @@ from azure.ai.projects import AIProjectClient from azure.ai.projects.models import ( - AgentEndpoint, + AgentEndpointConfig, AgentEndpointProtocol, FixedRatioVersionSelectionRule, VersionSelector, @@ -93,7 +93,7 @@ def _iter_sse_frames(stream, max_log_events: int): ) as project_client, create_agent_and_session(project_client, agent_name, image) as (agent, session), ): - endpoint_config 
= AgentEndpoint( + endpoint_config = AgentEndpointConfig( version_selector=VersionSelector( version_selection_rules=[ FixedRatioVersionSelectionRule(agent_version=agent.version, traffic_percentage=100), diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream_async.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream_async.py index aeafaf39b7ae..2b6bc5fa6639 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream_async.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_session_log_stream_async.py @@ -42,7 +42,7 @@ from azure.ai.projects.aio import AIProjectClient from azure.ai.projects.models import ( - AgentEndpoint, + AgentEndpointConfig, AgentEndpointProtocol, FixedRatioVersionSelectionRule, VersionSelector, @@ -95,7 +95,7 @@ async def main() -> None: ) as project_client, create_agent_and_session_async(project_client, agent_name, image) as (agent_version, session_id), ): - endpoint_config = AgentEndpoint( + endpoint_config = AgentEndpointConfig( version_selector=VersionSelector( version_selection_rules=[ FixedRatioVersionSelectionRule(agent_version=agent_version, traffic_percentage=100), diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud.py index 5a840d7d0df1..add7a7631e51 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud.py @@ -77,13 +77,10 @@ agent_version=agent.version, ) - isolation_key = "sample-isolation-key" - # Create a session for the agent print(f"Creating {3} sessions for the agent...") session = project_client.beta.agents.create_session( agent_name=agent_name, - isolation_key=isolation_key, version_indicator=VersionRefIndicator(agent_version=agent.version), ) print(f"Session created (id: {session.agent_session_id}, status: {session.status})") @@ 
-107,6 +104,5 @@ project_client.beta.agents.delete_session( agent_name=agent_name, session_id=session.agent_session_id, - isolation_key=isolation_key, ) print(f"Session with id: {session.agent_session_id} deleted.") diff --git a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud_async.py b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud_async.py index 49169702195b..895075364f5e 100644 --- a/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud_async.py +++ b/sdk/ai/azure-ai-projects/samples/hosted_agents/sample_sessions_crud_async.py @@ -80,10 +80,8 @@ async def main() -> None: agent_version=agent.version, ) - isolation_key = "sample-isolation-key" session = await project_client.beta.agents.create_session( agent_name=agent_name, - isolation_key=isolation_key, version_indicator=VersionRefIndicator(agent_version=agent.version), ) print(f"Session created (id: {session.agent_session_id}, status: {session.status})") @@ -107,7 +105,6 @@ async def main() -> None: await project_client.beta.agents.delete_session( agent_name=agent_name, session_id=session.agent_session_id, - isolation_key=isolation_key, ) print(f"Session with id: {session.agent_session_id} deleted.") diff --git a/sdk/ai/azure-ai-projects/tests/agents/telemetry/test_non_recording_span.py b/sdk/ai/azure-ai-projects/tests/agents/telemetry/test_non_recording_span.py new file mode 100644 index 000000000000..a94ed086c203 --- /dev/null +++ b/sdk/ai/azure-ai-projects/tests/agents/telemetry/test_non_recording_span.py @@ -0,0 +1,199 @@ +# ------------------------------------ +# Copyright (c) Microsoft Corporation. +# Licensed under the MIT License. +# ------------------------------------ +""" +Tests verifying that instrumentors correctly skip non-recording spans. + +When a span is not recording, the instrumentor must not attempt to write +attributes or events to it. 
These tests use a mock span whose +``is_recording()`` returns False and whose mutation methods raise +``AssertionError`` if called, ensuring the guards work correctly. +""" + +from unittest.mock import MagicMock + +from azure.ai.projects.telemetry._ai_project_instrumentor import ( + _AIAgentsInstrumentorPreview, +) + +from azure.ai.projects.telemetry._responses_instrumentor import ( + _ResponsesInstrumentorPreview, +) + + +def _make_non_recording_span(): + """Return a mock AbstractSpan wrapping a non-recording OTel span. + + * ``span_instance.is_recording()`` returns ``False`` + * ``span_instance.is_recording`` accessed without calling is a truthy + mock object, so only a guard that actually calls ``is_recording()`` + skips writes. + * Any call to ``add_event``, ``set_status``, ``record_exception`` or + ``add_attribute`` raises ``AssertionError``, catching any code path + that fails to check ``is_recording()`` properly. + """ + span_instance = MagicMock() + span_instance.is_recording = MagicMock(return_value=False) + span_instance.add_event = MagicMock(side_effect=AssertionError("add_event called on non-recording span")) + span_instance.set_status = MagicMock(side_effect=AssertionError("set_status called on non-recording span")) + span_instance.record_exception = MagicMock( + side_effect=AssertionError("record_exception called on non-recording span") + ) + + span = MagicMock() + span.span_instance = span_instance + span.add_attribute = MagicMock(side_effect=AssertionError("add_attribute called on non-recording span")) + return span + + +class TestNonRecordingSpanProjectInstrumentor: + """Verify _AIAgentsInstrumentorPreview skips non-recording spans.""" + + def test_add_message_event_skips_non_recording_span(self): + """_add_message_event should not write to a non-recording span.""" + instrumentor = _AIAgentsInstrumentorPreview() + span = _make_non_recording_span() + + # This must not raise; the guard should return early.
+ instrumentor._add_message_event(span, role="user", content="hello") + + def test_add_instructions_event_skips_non_recording_span(self): + """_add_instructions_event should not write to a non-recording span.""" + instrumentor = _AIAgentsInstrumentorPreview() + span = _make_non_recording_span() + + instrumentor._add_instructions_event(span, instructions="Be helpful", additional_instructions=None) + + def test_start_create_agent_span_skips_non_recording_span(self): + """start_create_agent_span should not write attributes to a non-recording span.""" + instrumentor = _AIAgentsInstrumentorPreview() + + # We need to patch start_span to return our non-recording span + from unittest.mock import patch + + non_recording_span = _make_non_recording_span() + + with patch( + "azure.ai.projects.telemetry._ai_project_instrumentor.start_span", + return_value=non_recording_span, + ): + result = instrumentor.start_create_agent_span( + server_address="test.openai.azure.com", + port=443, + model="gpt-4", + name="test-agent", + instructions="Be helpful", + ) + + # Should return the span but not have written any attributes/events to it + assert result is non_recording_span + non_recording_span.add_attribute.assert_not_called() + non_recording_span.span_instance.add_event.assert_not_called() + + +class TestNonRecordingSpanResponsesInstrumentor: + """Verify _ResponsesInstrumentorPreview skips non-recording spans.""" + + def test_set_span_attribute_safe_skips_non_recording_span(self): + """_set_span_attribute_safe should not write to a non-recording span.""" + instrumentor = _ResponsesInstrumentorPreview() + span = _make_non_recording_span() + + # This must not raise; the guard should return early. 
+        instrumentor._set_span_attribute_safe(span, "test.key", "test_value")
+
+    def test_start_responses_span_skips_non_recording_span(self):
+        """start_responses_span should not write attributes to a non-recording span."""
+        instrumentor = _ResponsesInstrumentorPreview()
+
+        from unittest.mock import patch
+
+        non_recording_span = _make_non_recording_span()
+
+        with patch(
+            "azure.ai.projects.telemetry._responses_instrumentor.start_span",
+            return_value=non_recording_span,
+        ):
+            result = instrumentor.start_responses_span(
+                server_address="test.openai.azure.com",
+                port=443,
+                model="gpt-4",
+                assistant_name="test-agent",
+                conversation_id="conv-123",
+                input_text="Hello",
+            )
+
+        assert result is non_recording_span
+        non_recording_span.add_attribute.assert_not_called()
+        non_recording_span.span_instance.add_event.assert_not_called()
+
+    def test_start_create_conversation_span_skips_non_recording_span(self):
+        """start_create_conversation_span should not write to a non-recording span."""
+        instrumentor = _ResponsesInstrumentorPreview()
+
+        from unittest.mock import patch
+
+        non_recording_span = _make_non_recording_span()
+
+        with patch(
+            "azure.ai.projects.telemetry._responses_instrumentor.start_span",
+            return_value=non_recording_span,
+        ):
+            result = instrumentor.start_create_conversation_span(
+                server_address="test.openai.azure.com",
+                port=443,
+            )
+
+        assert result is non_recording_span
+        non_recording_span.add_attribute.assert_not_called()
+        non_recording_span.span_instance.add_event.assert_not_called()
+
+    def test_start_list_conversation_items_span_skips_non_recording_span(self):
+        """start_list_conversation_items_span should not write to a non-recording span."""
+        instrumentor = _ResponsesInstrumentorPreview()
+
+        from unittest.mock import patch
+
+        non_recording_span = _make_non_recording_span()
+
+        with patch(
+            "azure.ai.projects.telemetry._responses_instrumentor.start_span",
+            return_value=non_recording_span,
+        ):
+            result = instrumentor.start_list_conversation_items_span(
+                server_address="test.openai.azure.com",
+                port=443,
+                conversation_id="conv-123",
+            )
+
+        assert result is non_recording_span
+        non_recording_span.add_attribute.assert_not_called()
+        non_recording_span.span_instance.add_event.assert_not_called()
+
+    def test_add_tool_call_events_skips_non_recording_span(self):
+        """_add_tool_call_events should not write to a non-recording span."""
+        instrumentor = _ResponsesInstrumentorPreview()
+        span = _make_non_recording_span()
+
+        # Create a mock response with function call output
+        mock_response = MagicMock()
+        mock_output_item = MagicMock()
+        mock_output_item.type = "function_call"
+        mock_output_item.name = "get_weather"
+        mock_output_item.call_id = "call_123"
+        mock_output_item.arguments = '{"city": "Seattle"}'
+        mock_response.output = [mock_output_item]
+
+        instrumentor._add_tool_call_events(span, mock_response)
+
+    def test_add_conversation_item_event_skips_non_recording_span(self):
+        """_add_conversation_item_event should not write to a non-recording span."""
+        instrumentor = _ResponsesInstrumentorPreview()
+        span = _make_non_recording_span()
+
+        mock_item = MagicMock()
+        mock_item.id = "item_123"
+        mock_item.type = "message"
+        mock_item.role = "user"
+        mock_item.content = []
+
+        instrumentor._add_conversation_item_event(span, mock_item)
diff --git a/sdk/ai/azure-ai-projects/tests/samples/llm_instructions.py b/sdk/ai/azure-ai-projects/tests/samples/llm_instructions.py
index af98d794a98d..0e0f776441cf 100644
--- a/sdk/ai/azure-ai-projects/tests/samples/llm_instructions.py
+++ b/sdk/ai/azure-ai-projects/tests/samples/llm_instructions.py
@@ -18,7 +18,8 @@
 
 from typing import Final
 
-agent_tools_instructions: Final[str] = """
+agent_tools_instructions: Final[str] = (
+    """
 We just ran Python code and captured print/log output in an attached log file (TXT).
 Validate whether sample execution/output is correct for a tool-driven assistant workflow.
 
@@ -43,9 +44,11 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
-memories_instructions: Final[str] = """
+memories_instructions: Final[str] = (
+    """
 We just ran Python code and captured print/log output in an attached log file (TXT).
 Validate whether sample execution/output is correct for a memories workflow.
 
@@ -70,9 +73,11 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
-agents_instructions: Final[str] = """
+agents_instructions: Final[str] = (
+    """
 We just ran Python code and captured print/log output in an attached log file (TXT).
 Validate whether sample execution/output is correct.
 
@@ -103,9 +108,11 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
-chat_completions_instructions: Final[str] = """
+chat_completions_instructions: Final[str] = (
+    """
 We just ran Python code and captured print/log output in an attached log file (TXT).
 Validate whether sample execution/output is correct for Chat Completions scenarios.
 
@@ -124,9 +131,11 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
-resource_management_instructions: Final[str] = """
+resource_management_instructions: Final[str] = (
+    """
 We just ran Python code and captured print/log output in an attached log file (TXT).
 Validate whether sample execution/output is correct for resource-management samples
 (for example connections, files, and deployments).
@@ -152,9 +161,11 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
-fine_tuning_instructions: Final[str] = """
+fine_tuning_instructions: Final[str] = (
+    """
 We just ran Python code and captured print/log output in an attached log file (TXT).
 Validate whether sample execution/output is correct for a fine-tuning workflow.
 
@@ -178,9 +189,11 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
-evaluations_instructions: Final[str] = """
+evaluations_instructions: Final[str] = (
+    """
 We just ran Python code for an evaluation sample and captured print/log output in an attached log file (TXT).
 Your job: determine if the sample code executed to completion WITHOUT throwing an unhandled exception.
 
@@ -202,9 +215,11 @@
 
 Always respond with `reason` indicating the reason for the response.
 """.strip()
+)
 
 
-hosted_agents_instructions: Final[str] = """
+hosted_agents_instructions: Final[str] = (
+    """
 We just ran Python code for a hosted-agent sample and captured print/log output in an attached log file (TXT).
 Validate whether the sample executed correctly.
 
@@ -226,6 +241,7 @@
 
 Always include `reason` with a concise explanation tied to the observed print output.
 """.strip()
+)
 
 
 # Folder (under samples/) -> instructions.
diff --git a/sdk/ai/azure-ai-projects/tests/samples/test_samples.py b/sdk/ai/azure-ai-projects/tests/samples/test_samples.py
index 4ae4f4010925..dd7367b6a406 100644
--- a/sdk/ai/azure-ai-projects/tests/samples/test_samples.py
+++ b/sdk/ai/azure-ai-projects/tests/samples/test_samples.py
@@ -87,6 +87,7 @@ def test_memory_samples(self, sample_path: str, **kwargs) -> None:
             samples_to_skip=[
                 "sample_workflow_multi_agent.py",  # No issue to run. Just postpone recording.
                 "sample_workflow_multi_agent_with_mcp_approval.py",  # No issue to run. Just postpone recording.
+                "sample_toolboxes_with_search_preview.py",
             ],
         ),
     )
diff --git a/sdk/ai/azure-ai-projects/tests/samples/test_samples_async.py b/sdk/ai/azure-ai-projects/tests/samples/test_samples_async.py
index 1c742537c5fc..100b431d935a 100644
--- a/sdk/ai/azure-ai-projects/tests/samples/test_samples_async.py
+++ b/sdk/ai/azure-ai-projects/tests/samples/test_samples_async.py
@@ -69,7 +69,7 @@ async def test_memory_samples(self, sample_path: str, **kwargs) -> None:
         "sample_path",
         get_async_sample_paths(
             "agents",
-            samples_to_skip=["sample_workflow_multi_agent_async.py"],
+            samples_to_skip=["sample_workflow_multi_agent_async.py", "sample_toolboxes_with_search_preview_async.py"],
         ),
     )
     @servicePreparer()