fix(models): allow slash-containing model IDs with unknown prefixes #2493
OiPunk wants to merge 1 commit into openai:main
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9b438acb60
```python
if self._is_known_prefix(prefix):
    model_name = parsed_model_name
else:
    prefix = None
```
Preserve fallback routing for custom prefixes

Clearing `prefix` for every non-hardcoded, non-`provider_map` prefix means `get_model("custom/...")` can no longer reach `_get_fallback_provider`, so subclasses that extend `MultiProvider` by overriding `_create_fallback_provider` lose custom prefix routing. Before this change, those prefixes flowed into fallback creation; now they are silently sent to the OpenAI provider with the unmodified model string, which is a behavior regression for existing `MultiProvider` extensions.
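To make the regression concrete, here is a minimal sketch of the kind of subclass the review describes. The class names, the stub provider, and the import paths are assumptions based on the repository layout, not code from this PR:

```python
# Illustrative only: class names and import paths are assumptions based on
# the repository layout, not code from this PR.
from agents.models.interface import Model, ModelProvider
from agents.models.multi_provider import MultiProvider


class CustomProvider(ModelProvider):
    """Stand-in for a user-defined provider behind a "custom/" prefix."""

    def get_model(self, model_name: str | None) -> Model:
        raise NotImplementedError  # a real provider would return a Model


class ExtendedMultiProvider(MultiProvider):
    def _create_fallback_provider(self, prefix: str) -> ModelProvider:
        # Pre-PR: get_model("custom/some-model") parses prefix="custom" and,
        # finding no hardcoded or provider_map match, reaches this hook.
        # Post-PR: the unknown prefix is cleared to None first, so this hook
        # is never called and "custom/some-model" goes verbatim to the
        # default OpenAI provider.
        if prefix == "custom":
            return CustomProvider()
        return super()._create_fallback_provider(prefix)
```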
These changes may actually work, but we generally recommend using either LiteLLM or a model object for non-OpenAI models. The `litellm/` prefix was a special case precisely because we recommend LiteLLM for non-OpenAI models, so making the parser more complex for extra flexibility may not be a change we would like to take.
Thanks for the clear guidance; this makes sense. I agree we should keep the parser behavior conservative and align with the current recommendation (LiteLLM or explicit model objects for non-OpenAI providers). I can close this PR to avoid adding parser complexity unless you would prefer a much narrower variant.
Summary
Fixes #2492 by making `MultiProvider` treat unknown prefixes as plain model names instead of hard-failing with `Unknown prefix`. This keeps explicit provider prefixes (`openai/`, `litellm/`, and prefixes configured in `provider_map`) working as before, while allowing model IDs that naturally contain slashes (for example `openrouter/openai/gpt-5`) to be sent to the default OpenAI provider unchanged.
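For illustration, a minimal sketch of the resulting behavior (the import path is assumed from the repository layout, and calls that reach the OpenAI provider may require OPENAI_API_KEY to be set):

```python
from agents.models.multi_provider import MultiProvider

provider = MultiProvider()

# Explicit prefixes keep their existing routing:
provider.get_model("openai/gpt-4o")  # OpenAI provider, model name "gpt-4o"

# Previously this raised an "Unknown prefix" error; with this change the
# whole ID is treated as a plain model name and handed to the default
# OpenAI provider unchanged:
provider.get_model("openrouter/openai/gpt-5")
```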
What changed

- Added a `_is_known_prefix()` helper in `MultiProvider`.
- Updated the `get_model()` routing logic so that unknown prefixes no longer raise; the full, slash-containing ID is passed through as the model name (see the sketch below).
- Added tests in `tests/models/test_multi_provider.py` covering the new routing behavior.
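A hedged reconstruction of that routing, pieced together from the diff excerpt and the description above; `_get_prefix_and_model_name` and `provider_map.has_prefix` are assumptions about the surrounding code, not verbatim from the PR:

```python
# Inside MultiProvider (sketch); only _is_known_prefix and the branch shown
# in the diff excerpt are named by the PR — the rest is assumed.
def _is_known_prefix(self, prefix: str | None) -> bool:
    if prefix is None:
        return False
    if prefix in ("openai", "litellm"):
        return True
    return self.provider_map is not None and self.provider_map.has_prefix(prefix)

def get_model(self, model_name: str | None) -> Model:
    prefix, parsed_model_name = self._get_prefix_and_model_name(model_name)
    if self._is_known_prefix(prefix):
        model_name = parsed_model_name
    else:
        # Unknown prefix: keep the full, slash-containing ID as the model
        # name and route it to the default provider instead of raising.
        prefix = None
    ...
```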
Validation

- `uv run --with ruff ruff check src/agents/models/multi_provider.py tests/models/test_multi_provider.py`
- `uv run mypy src/agents/models/multi_provider.py tests/models/test_multi_provider.py`
- `uv run --with pytest pytest -q tests/models/test_multi_provider.py`
- `uv run --with coverage --with pytest coverage run -m pytest -q tests/models/test_multi_provider.py`
- `uv run --with coverage coverage report -m src/agents/models/multi_provider.py`

Coverage result for the changed source file:
`src/agents/models/multi_provider.py`: 100%
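For reference, a sketch of the kind of case the new tests exercise; the test name and assertion are illustrative (the real suite lives in `tests/models/test_multi_provider.py`), and running it as-is likely requires an OPENAI_API_KEY since the default provider builds a real client:

```python
# Illustrative test in the spirit of tests/models/test_multi_provider.py.
from agents.models.multi_provider import MultiProvider


def test_unknown_prefix_with_slashes_is_not_rejected():
    provider = MultiProvider()
    # Previously this raised an "Unknown prefix" error; now the full ID is
    # accepted and routed to the default OpenAI provider unchanged.
    model = provider.get_model("openrouter/openai/gpt-5")
    assert model is not None
```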