
Python: Fix single-tool input handling in OpenAIResponsesClient._prepare_tools_for_openai#4312

Merged
moonbox3 merged 2 commits into microsoft:main from moonbox3:agent/fix-4304-1
Feb 26, 2026
Conversation

@moonbox3 (Contributor)

Motivation and Context

When a single tool (e.g., a FunctionTool or a dict) was passed directly to OpenAIResponsesClient instead of wrapped in a list, _prepare_tools_for_openai would iterate over the object's fields/characters instead of treating it as one tool. This caused silent misinterpretation of tool definitions and runtime failures.
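The failure mode can be shown in a few lines. This is an illustrative sketch, not the library code: the tool dict below is a simplified stand-in for a real tool definition, but the core problem is the same since iterating a bare dict yields its keys, not the tool itself.

```python
# A single tool passed as a bare dict (fields are illustrative,
# not the exact agent_framework tool schema).
tool = {"type": "function", "name": "get_weather"}

# What the old code effectively did when handed the bare object:
# iterate over it directly, producing the dict's keys.
broken = [item for item in tool]
print(broken)  # ['type', 'name'] -- keys, not tools

# What callers intended: the object treated as one tool in a list.
intended = [item for item in [tool]]
print(intended)  # [{'type': 'function', 'name': 'get_weather'}]
```

For a string-like or attribute-based tool object the symptom differs (characters or fields instead of keys), but the root cause is the same missing normalization step.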

Fixes #4304

Description

The root cause was that _prepare_tools_for_openai iterated directly over its tools parameter assuming it was always a sequence, but callers could pass a single tool object. The fix uses the existing normalize_tools utility to coerce a single tool into a one-element list before iteration, and widens the type signature to Sequence[Any] | Any | None to reflect the accepted inputs. Two new tests verify that both a single FunctionTool and a single dict tool are correctly normalized into a one-element list of prepared tools.
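The normalization step can be sketched as follows. This is a simplified stand-in for the framework's normalize_tools utility, with its behavior inferred from the PR description rather than copied from the library; the real implementation also handles bare callables and framework tool classes.

```python
from collections.abc import Sequence
from typing import Any


def normalize_tools(tools: Any) -> list[Any]:
    """Stand-in normalizer (behavior assumed, not the library code):
    None -> [], a sequence -> list(sequence), anything else -> [tools]."""
    if tools is None:
        return []
    # Strings/bytes are sequences but should never be unpacked per-character.
    if isinstance(tools, Sequence) and not isinstance(tools, (str, bytes)):
        return list(tools)
    return [tools]


# A single dict tool is now one tool, not iterated field-by-field.
assert normalize_tools({"type": "function"}) == [{"type": "function"}]
# A list passes through unchanged; None yields an empty list.
assert normalize_tools([{"type": "function"}]) == [{"type": "function"}]
assert normalize_tools(None) == []
```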

Contribution Checklist

  • The code builds clean without any errors or warnings
  • The PR follows the Contribution Guidelines
  • All unit tests pass, and I have added new tests where possible
  • Is this a breaking change? If yes, add "[BREAKING]" prefix to the title of the PR.

Note: PR autogenerated by moonbox3's agent


Use normalize_tools() in _prepare_tools_for_openai to wrap single tools
(FunctionTool or dict) in a list before iteration, consistent with the
chat client implementation.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 26, 2026 11:44
@moonbox3 (Contributor, Author) left a comment


Automated Code Review

Reviewers: 3 | Confidence: 84%

✓ Correctness

The diff adds support for passing a single tool (not wrapped in a list) to _prepare_tools_for_openai by using normalize_tools to coerce the input. The change is straightforward and the tests cover the new behavior. There are no blocking correctness issues. One minor observation: the type annotation Sequence[Any] | Any | None is effectively just Any since Any already subsumes all other types, which makes the signature less informative than intended. The if not tools: guard on line 440 runs before normalize_tools, which is fine for None and empty sequences, but could theoretically swallow a single falsy tool object (e.g., an empty dict); in practice this is extremely unlikely for valid tool inputs.

✓ Security Reliability

This diff broadens the _prepare_tools_for_openai signature to accept a single tool in addition to a sequence, delegating normalization to normalize_tools. The change is small and well-tested. No security or reliability issues were found: there are no injection risks, no secret exposure, and no resource leaks. The if not tools: early-return guard executes before normalize_tools, which is safe because a single valid tool object will always be truthy.

✓ Test Coverage

The diff adds support for passing a single tool (not wrapped in a list) to _prepare_tools_for_openai by using normalize_tools(). Two new tests cover the happy path for a single FunctionTool and a single dict tool. However, the tests are missing edge cases: passing None, passing an empty list, and passing a mixed list that includes both single tools and sequences should all be validated. The existing tests may already cover some of these, but a test for None input after the signature change (now Any | None instead of Sequence[Any] | None) would be valuable to confirm the early return still works correctly with normalize_tools in the path.

Suggestions

  • The type annotation Sequence[Any] | Any | None collapses to just Any. Consider a more precise union like Tool | Sequence[Tool] | None (with an appropriate Tool type alias) to provide meaningful type information to callers.
  • The if not tools: guard on line 440 runs before normalize_tools. If a caller ever passes a single falsy tool object (e.g., an empty dict {}), it would be silently swallowed. Consider moving the emptiness check after normalize_tools: tools_list = normalize_tools(tools); if not tools_list: return [].
  • Add a test for _prepare_tools_for_openai(None) to verify the early return still works correctly now that normalize_tools is in the code path. If normalize_tools(None) is called before the if not tools guard, this could raise an error.
  • Add a test passing a single string or other non-tool type to verify normalize_tools handles or rejects invalid input gracefully.
  • Consider verifying the parameters and strict fields in test_prepare_tools_for_openai_single_function_tool to match the assertion depth of existing tests like test_prepare_tools_for_openai_with_function_tool.

Automated review by moonbox3's agents

@markwallace-microsoft (Member)

markwallace-microsoft commented Feb 26, 2026

Python Test Coverage

Python Test Coverage Report

File                                     Stmts   Miss  Cover  Missing
packages/core/agent_framework/openai/
  _responses_client.py                     640     86    86%  292–295, 299–300, 303–304, 310–311, 316, 329–335, 356, 364, 387, 556, 611, 615, 617, 619, 621, 697, 707, 712, 755, 834, 851, 864, 925, 1016, 1021, 1025–1027, 1031–1032, 1055, 1124, 1146–1147, 1162–1163, 1181–1182, 1223–1226, 1335–1336, 1352, 1354, 1433–1441, 1560, 1615, 1630, 1673–1676, 1684–1685, 1687–1689, 1703–1705, 1715–1716, 1722, 1737
TOTAL                                    22175   2761    87%

Python Unit Test Overview

Tests: 4675 | Skipped: 247 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 1m 20s ⏱️

Copilot AI (Contributor) left a comment


Pull request overview

Fixes a bug in the Python OpenAIResponsesClient tool-preparation path where passing a single tool object (instead of a list) could be iterated incorrectly, leading to mis-parsed tool definitions or runtime errors.

Changes:

  • Normalize tools input in OpenAIResponsesClient._prepare_tools_for_openai via normalize_tools(...) before iteration.
  • Widen _prepare_tools_for_openai’s accepted input type to allow single-tool inputs.
  • Add unit tests covering single FunctionTool and single dict-tool inputs.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

  • python/packages/core/agent_framework/openai/_responses_client.py: Normalizes tool inputs before iteration to correctly handle single-tool values.
  • python/packages/core/tests/openai/test_openai_responses_client.py: Adds regression tests ensuring single-tool inputs are treated as one tool, not iterated.

- Use precise type annotation matching normalize_tools/OpenAIChatClient signature
  instead of collapsed Sequence[Any] | Any | None
- Move emptiness guard after normalize_tools() call so single falsy tool
  objects are not silently swallowed
- Import ToolTypes for the type annotation
- Expand test_prepare_tools_for_openai_single_function_tool assertions to
  verify parameters, strict, and parameter schema fields
- Add test_prepare_tools_for_openai_none to verify None input returns []

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@moonbox3 (Contributor, Author) left a comment


Automated Code Review

Reviewers: 3 | Confidence: 89%

✓ Correctness

The diff refactors _prepare_tools_for_openai to normalize the tools list before checking emptiness, which is a correctness improvement over the old code that checked the raw input's truthiness (e.g., an empty list vs. None vs. a single falsy-looking object). The type signature is tightened using ToolTypes, and new tests cover both None input and deeper schema assertions. No bugs, race conditions, or incorrect API usage detected.

✓ Security Reliability

This diff is a minor refactor that narrows type annotations on _prepare_tools_for_openai and reorders the call to normalize_tools so it happens before the emptiness check. The change is behaviorally equivalent for all practical inputs and introduces no new security or reliability concerns. No secrets, injection vectors, resource leaks, or unsafe deserialization patterns are present. The new tests appropriately cover the None input case and validate tool schema properties.

✓ Test Coverage

The diff refactors _prepare_tools_for_openai to call normalize_tools before the emptiness check and broadens the type signature to explicitly accept ToolTypes, Callable, and sequences thereof. Test coverage is improved with enhanced assertions on the FunctionTool test and a new None-input test. However, there are notable gaps: no test for an empty list [] (which exercises the early-return path differently than None), no test for passing a bare Callable (now explicitly in the type signature but never tested), and no test for a mixed-type sequence (e.g., a FunctionTool alongside a dict tool).

Suggestions

  • Minor: consider adding a test for an empty list input ([]) to _prepare_tools_for_openai alongside the None test, to ensure normalize_tools handles both empty cases consistently.
  • Add a test for empty list input (_prepare_tools_for_openai([])) to verify the early-return path when normalize_tools returns an empty list.
  • Add a test passing a bare callable (plain function, not wrapped in FunctionTool) since Callable[..., Any] is now explicitly in the type signature — verify it is correctly converted.
  • Add a test for a mixed-type sequence (e.g., [function_tool, dict_tool]) to confirm heterogeneous tool lists are handled correctly after the refactor.

Automated review by moonbox3's agents

@moonbox3 moonbox3 self-assigned this Feb 26, 2026
@moonbox3 moonbox3 moved this to In Progress in Agent Framework Feb 26, 2026
@moonbox3 moonbox3 moved this from In Progress to In Review in Agent Framework Feb 26, 2026
@moonbox3 moonbox3 added this pull request to the merge queue Feb 26, 2026
Merged via the queue into microsoft:main with commit ff124c4 Feb 26, 2026
29 checks passed
@github-project-automation github-project-automation bot moved this from In Review to Done in Agent Framework Feb 26, 2026

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

Python: OpenAIResponsesClient mishandles single-tool inputs

5 participants