
fix(python): add default 60s timeout to all AI provider clients #13696

Open
badhra-ajaz wants to merge 1 commit into microsoft:main from badhra-ajaz:peakinfer/add-timeout-to-llm-clients

Conversation

@badhra-ajaz

Summary

Adds a default 60-second timeout to all five AI provider client instantiations across the OpenAI, Azure OpenAI, Anthropic, and NVIDIA connectors, to prevent indefinite hangs on API calls.

Problem

Currently, Semantic Kernel creates AI provider clients without timeout parameters. This means API calls can hang indefinitely on network issues or server unresponsiveness, causing:

  • Kernel operations stuck waiting forever on LLM API calls
  • Resource exhaustion in production applications
  • Poor UX (AI features hang with no error feedback)

Solution

Added timeout=60.0 to all AI provider client instantiations:

| File | Client | Provider |
| --- | --- | --- |
| open_ai_config_base.py | AsyncOpenAI | OpenAI |
| azure_config_base.py | AsyncAzureOpenAI | Azure OpenAI |
| anthropic_chat_completion.py | AsyncAnthropic | Anthropic |
| nvidia_chat_completion.py | AsyncOpenAI | NVIDIA |
| nvidia_text_embedding.py | AsyncOpenAI | NVIDIA |

PeakInfer Analysis

Category: Reliability + Latency
Issue: Missing default timeout on LLM API clients
Impact: Prevents indefinite hangs across all Semantic Kernel AI connectors

Testing

  • All 5 client instantiations now have 60s timeout
  • Covers OpenAI, Azure, Anthropic, and NVIDIA providers
  • User-provided clients (passed via constructor) are unaffected
  • 60s timeout appropriate for chat completion and embedding operations

🤖 Powered by PeakInfer LLM inference optimization

@badhra-ajaz badhra-ajaz requested a review from a team as a code owner March 23, 2026 04:16
@microsoft-github-policy-service

@badhra-ajaz please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"
Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your
contributions to Microsoft open source projects. This Agreement is effective as of the latest signature
date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.


@github-actions bot left a comment


Automated Code Review

Reviewers: 4 | Confidence: 90%

✓ Correctness

This PR adds a hardcoded 60-second timeout to all internally-created AI client instances (OpenAI, Azure OpenAI, Anthropic, NVIDIA). The changes are mechanically correct — each timeout is added only when the library constructs its own client (i.e., when no user-provided client is passed), and the variable scoping in azure_config_base.py is fine since args is reassigned on the next line after client creation. However, the SDK defaults for both OpenAI and Anthropic clients are 600 seconds, so this is a 10x reduction that could break users with long-running completions. The timeout is also not user-configurable without providing a pre-built client, which is a usability concern but not a correctness bug.

✓ Security Reliability

This PR adds explicit timeout=60.0 to all internally-constructed AI clients (Anthropic, NVIDIA, OpenAI, Azure OpenAI). This is a reliability improvement that prevents indefinite hangs when upstream services are unresponsive. The 60-second value is reasonable for LLM calls. The timeout is only applied when the SDK creates the client internally (not when a user passes their own client), which is the correct pattern. No security or reliability issues found.

✗ Test Coverage

This PR adds a hardcoded timeout=60.0 to 5 AI client constructors (Anthropic, NVIDIA chat/embedding, OpenAI, Azure OpenAI), reducing the SDK default from 600s to 60s. However, none of the existing initialization tests are updated to verify the timeout is correctly set on the constructed client. The magic number 60.0 is duplicated across all 5 files with no shared constant, and there is no way for users to configure the timeout without providing their own pre-built client.

✗ Design Approach

The PR hardcodes a 60-second timeout at the HTTP client construction level across all five service constructors. This is a symptom-level fix (presumably addressing hanging requests) that introduces a non-configurable magic constant. Users with legitimate long-running workloads (large context windows, streaming completions, batch embeddings) cannot increase this limit without providing a pre-built client instance. Additionally, the NVIDIA and OpenAI embedding services already expose a per-request timeout field in their PromptExecutionSettings, so the correct pattern already exists in the codebase — per-request timeout in execution settings should be the primary mechanism. A client-level cap of 60s would silently shadow any per-request timeout exceeding 60s for those execution paths that do carry a timeout in settings, making the behavior surprising. The right approach is to expose timeout as an optional constructor parameter (defaulting to None to preserve SDK defaults) rather than hardcoding it.

Flagged Issues

  • The 60s timeout is hardcoded and not user-configurable. Users with long-running workloads (large context windows, streaming completions, batch embeddings) have no way to increase it without supplying their own pre-built client. The constructor should accept an optional timeout parameter (default None) passed through to the underlying SDK client, preserving the SDK's own default when unset.
  • The NVIDIA services already expose timeout in NvidiaChatPromptExecutionSettings and NvidiaEmbeddingPromptExecutionSettings. A hardcoded 60s client-level cap creates an invisible ceiling that will silently truncate any per-request timeout greater than 60s, making the two timeout mechanisms inconsistent and confusing.
  • No tests verify the new timeout=60.0 behavior. Each service has init tests that assert on model_id, client type, and headers, but none assert that client.timeout is set to 60.0. Since this is a behavioral change (reducing timeout from 600s to 60s), it should have corresponding test coverage to prevent regressions.

Suggestions

  • Extract the magic number 60.0 into a shared module-level constant (e.g., DEFAULT_CLIENT_TIMEOUT = 60.0) to avoid duplication across 5 files and improve maintainability.
  • Document this behavioral change — the 60s timeout is a significant reduction from the SDK default of 600s, and users may not realize their previously-working long completions will now fail with timeout errors.
  • Consider aligning with the existing per-request timeout pattern already present in OpenAI/NVIDIA execution settings and extending it to Anthropic, rather than adding a new implicit client-level layer.
  • Add assertions to existing init tests to verify the timeout is correctly propagated to the client (e.g., assert client.timeout == httpx.Timeout(60.0)).
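The last suggestion could look roughly like the sketch below. `FakeAsyncClient` is a stand-in so the sketch runs on its own; a real test would construct the service and compare the underlying client's timeout against `httpx.Timeout(60.0)`:

```python
# Hypothetical init-test sketch: assert the default timeout actually
# reaches the constructed client, guarding against regressions.
DEFAULT_CLIENT_TIMEOUT = 60.0


class FakeAsyncClient:  # stand-in for AsyncOpenAI and friends
    def __init__(self, api_key: str, timeout: float = DEFAULT_CLIENT_TIMEOUT):
        self.api_key = api_key
        self.timeout = timeout


def test_init_sets_default_timeout():
    client = FakeAsyncClient(api_key="sk-test")
    # In the real test: assert service.client.timeout == httpx.Timeout(60.0)
    assert client.timeout == 60.0


test_init_sets_default_timeout()
```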

Automated review by badhra-ajaz's agents

```python
api_key=api_key,
organization=org_id,
default_headers=merged_headers,
timeout=60.0,
```


Hardcoded 60s with no way for the caller to override. Should be an optional constructor parameter passed through to AsyncOpenAI. Also missing test coverage — test_init in test_openai_chat_completion.py does not assert client.timeout.

Suggested change

```diff
 client = AsyncOpenAI(
     api_key=api_key,
     organization=org_id,
     default_headers=merged_headers,
-    timeout=60.0,
 )
```

```python
if "websocket_base_url" in kwargs:
    args["websocket_base_url"] = kwargs.pop("websocket_base_url")

args["timeout"] = 60.0
```


Hardcoded 60s is not surfaced as a constructor parameter. Azure OpenAI deployments used for long-context or batch workloads routinely exceed 60s. This should be an __init__ parameter or removed to rely on the SDK default. Also missing test coverage — test_init in test_azure_chat_completion.py does not assert the timeout.

Suggested change

```diff
-args["timeout"] = 60.0
 client = AsyncAzureOpenAI(**args)
```

```python
if not async_client:
    async_client = AsyncAnthropic(
        api_key=anthropic_settings.api_key.get_secret_value(),
        timeout=60.0,
```


Hardcoded 60s timeout is not configurable and will silently break long-running Anthropic requests. Expose as an __init__ parameter or omit to rely on the SDK default. Also no test exercises this code path — existing tests use a pre-mocked client.

Suggested change

```diff
 async_client = AsyncAnthropic(
     api_key=anthropic_settings.api_key.get_secret_value(),
-    timeout=60.0,
 )
```

```python
client = AsyncOpenAI(
    api_key=nvidia_settings.api_key.get_secret_value() if nvidia_settings.api_key else None,
    base_url=nvidia_settings.base_url,
    timeout=60.0,
```


NvidiaChatPromptExecutionSettings already has a per-request timeout field. This hardcoded 60s client-level cap creates an invisible ceiling that silently overrides caller-specified per-request timeouts. Also missing test coverage in test_init_with_defaults.

Suggested change

```diff
 client = AsyncOpenAI(
     api_key=nvidia_settings.api_key.get_secret_value() if nvidia_settings.api_key else None,
     base_url=nvidia_settings.base_url,
-    timeout=60.0,
 )
```

```python
client = AsyncOpenAI(
    api_key=nvidia_settings.api_key.get_secret_value() if nvidia_settings.api_key else None,
    base_url=nvidia_settings.base_url,
    timeout=60.0,
```


Same issue: NvidiaEmbeddingPromptExecutionSettings exposes per-request timeout, so hardcoding 60s at the client level creates a hidden cap. Also missing timeout assertion in test_init.

Suggested change

```diff
 client = AsyncOpenAI(
     api_key=nvidia_settings.api_key.get_secret_value() if nvidia_settings.api_key else None,
     base_url=nvidia_settings.base_url,
-    timeout=60.0,
 )
```
