fix(models): allow slash-containing model IDs with unknown prefixes#2493

Closed
OiPunk wants to merge 1 commit into openai:main from OiPunk:codex/openai-agents-2492-openrouter-prefix

Conversation


@OiPunk OiPunk commented Feb 15, 2026

Summary

Fixes #2492 by making MultiProvider treat unknown prefixes as plain model names instead of hard-failing with an "Unknown prefix" error.

This keeps explicit provider prefixes (openai/, litellm/, and prefixes configured in provider_map) working as before, while allowing model IDs that naturally contain slashes (for example openrouter/openai/gpt-5) to be sent to the default OpenAI provider unchanged.

What changed

  • Added _is_known_prefix() in MultiProvider.
  • Updated get_model() routing logic:
    • Known prefix: keep prefix-based routing.
    • Unknown prefix: do not treat it as provider prefix; route as plain model name.
  • Added new tests in tests/models/test_multi_provider.py covering:
    • unknown prefix passthrough
    • known fallback prefix routing
    • provider_map routing
    • helper map mutations
    • fallback cache behavior
    • unknown fallback error path
    • dynamic litellm fallback import path

Validation

  • uv run --with ruff ruff check src/agents/models/multi_provider.py tests/models/test_multi_provider.py
  • uv run mypy src/agents/models/multi_provider.py tests/models/test_multi_provider.py
  • uv run --with pytest pytest -q tests/models/test_multi_provider.py
  • uv run --with coverage --with pytest coverage run -m pytest -q tests/models/test_multi_provider.py
  • uv run --with coverage coverage report -m src/agents/models/multi_provider.py

Coverage result for changed source file:

  • src/agents/models/multi_provider.py: 100%


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9b438acb60


Comment on lines +149 to +152
if self._is_known_prefix(prefix):
    model_name = parsed_model_name
else:
    prefix = None


P2: Preserve fallback routing for custom prefixes

Clearing prefix for every non-hardcoded/non-provider_map prefix means get_model("custom/...") can no longer reach _get_fallback_provider, so subclasses that extend MultiProvider by overriding _create_fallback_provider lose custom prefix routing. Before this change, those prefixes flowed into fallback creation; now they are silently sent to the OpenAI provider with the unmodified model string, which is a behavior regression for existing custom MultiProvider extensions.
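The regression the reviewer describes can be illustrated with a toy model of the dispatch pattern. Class names, return values, and the simplified routing below are all hypothetical; this is a sketch of the subclass-hook issue, not the library's actual API:

```python
class ToyMultiProvider:
    KNOWN = {"openai", "litellm"}

    def get_model(self, name: str) -> str:
        prefix, _, rest = name.partition("/") if "/" in name else (None, None, name)
        if prefix in self.KNOWN:
            return f"builtin:{prefix}:{rest}"
        if prefix is not None and self._is_known_prefix(prefix):  # the PR's new gate
            return self._create_fallback_provider(prefix, rest)
        return f"openai:{name}"  # unknown prefix now passes through unchanged

    def _is_known_prefix(self, prefix: str) -> bool:
        return prefix in self.KNOWN

    def _create_fallback_provider(self, prefix: str, rest: str) -> str:
        raise ValueError(f"Unknown prefix: {prefix}")


class CustomProvider(ToyMultiProvider):
    # A subclass that overrides only the fallback hook. Before the PR's
    # gate, "custom/..." reached this method; with the gate in place the
    # unknown prefix is cleared first, so this override is never called.
    def _create_fallback_provider(self, prefix: str, rest: str) -> str:
        return f"custom:{prefix}:{rest}"
```

In this sketch, `CustomProvider().get_model("custom/x")` returns `"openai:custom/x"` instead of reaching the subclass hook, which is the silent behavior change being flagged.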


@seratch seratch marked this pull request as draft February 16, 2026 05:37

seratch commented Feb 16, 2026

These changes may actually work, but we generally recommend using either LiteLLM or a model object for non-OpenAI models. Making the parser more complex for further flexibility may not be a change we would like to have (litellm was the one special case, because we generally recommend litellm for non-OpenAI models).


OiPunk commented Feb 16, 2026

Thanks for the clear guidance; this makes sense.

I agree we should keep the parser behavior conservative and align with the current recommendation (LiteLLM or explicit model objects for non-OpenAI providers). I can close this PR to avoid adding parser complexity unless you prefer a much narrower variant.

@seratch seratch closed this Feb 17, 2026

Development

Successfully merging this pull request may close these issues.

Unknown prefix when using openrouter models
