
feat: add model capability preset picker for OpenAI Compatible provider#11902

Draft
roomote-v0[bot] wants to merge 3 commits into main from feature/openai-compatible-model-capability-presets

Conversation


@roomote-v0 roomote-v0 bot commented Mar 9, 2026

Related GitHub Issue

Closes: #11674

Description

This PR attempts to address Issue #11674 by adding a Model Capability Preset dropdown to the OpenAI Compatible provider settings.

When users select the OpenAI Compatible provider, they now see a searchable dropdown that lists all known models across every provider Roo supports. Selecting a model automatically populates the capability fields (context window, max tokens, image support, prompt caching, pricing, etc.) so users running local models via llama.cpp, exo, NVIDIA NIM, or similar tools get the same model-specific behavior as native provider users.

Key implementation details:

  • New modelCapabilityPresets aggregation in packages/types that collects model definitions from Anthropic, OpenAI, DeepSeek, Gemini, MiniMax, Mistral, Moonshot/Kimi, Qwen, SambaNova, xAI, and ZAi/GLM
  • Searchable combobox UI in OpenAICompatible.tsx using the existing Command/Popover components, grouped by provider
  • "Custom (configure manually)" option preserves the current behavior for models not in the preset list
  • Users can still override any auto-populated field after selecting a preset
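The aggregation described above could look something like the following sketch. This is illustrative only: the interface and map below use hypothetical names and example values, not the actual identifiers or model data in packages/types.

```typescript
// Hypothetical shape of an aggregated capability preset (field names are
// illustrative, not the real ones in packages/types).
interface ModelCapabilityPreset {
  contextWindow: number;
  maxTokens: number;
  supportsImages: boolean;
  supportsPromptCache: boolean;
  inputPrice?: number; // USD per million tokens
  outputPrice?: number;
}

// Presets keyed by provider, then by model ID, mirroring how the combobox
// groups entries by provider. Example values only.
const modelCapabilityPresets: Record<string, Record<string, ModelCapabilityPreset>> = {
  anthropic: {
    "claude-3-5-sonnet": {
      contextWindow: 200_000,
      maxTokens: 8192,
      supportsImages: true,
      supportsPromptCache: true,
    },
  },
  deepseek: {
    "deepseek-chat": {
      contextWindow: 64_000,
      maxTokens: 8192,
      supportsImages: false,
      supportsPromptCache: true,
    },
  },
};

// Flatten to the { provider, id, preset } entries a searchable dropdown needs.
function listPresets() {
  return Object.entries(modelCapabilityPresets).flatMap(([provider, models]) =>
    Object.entries(models).map(([id, preset]) => ({ provider, id, preset })),
  );
}

console.log(listPresets().map((e) => `${e.provider}/${e.id}`).join(", "));
```

A flat list like this is easy to filter in a Command/Popover combobox while still rendering provider group headings from the `provider` field.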

Trade-offs:

  • Cloud-only routing providers (OpenRouter, Requesty, etc.) and platform-locked providers (Bedrock, Vertex, etc.) are excluded since their model IDs do not map to local inference
  • Pricing fields are populated from the preset but may not be relevant for local inference; users can clear them
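The override behavior in the trade-offs above could be implemented as a simple merge where user-supplied values win over preset values. A minimal sketch, assuming hypothetical field and function names (not the actual code in OpenAICompatible.tsx):

```typescript
// Hypothetical settings shape; every field is optional because the user may
// override or clear any of them after picking a preset.
interface CustomModelInfo {
  contextWindow?: number;
  maxTokens?: number;
  supportsImages?: boolean;
  inputPrice?: number;
  outputPrice?: number;
}

// Preset values fill every field; user-supplied overrides win via spread order.
function applyPreset(
  preset: Required<CustomModelInfo>,
  overrides: CustomModelInfo = {},
): CustomModelInfo {
  return { ...preset, ...overrides };
}

// Example values only. A local-inference user clears pricing after selecting
// a preset, while keeping the capability fields it populated:
const examplePreset = {
  contextWindow: 262_144,
  maxTokens: 8192,
  supportsImages: false,
  inputPrice: 0.6,
  outputPrice: 2.5,
};
const settings = applyPreset(examplePreset, { inputPrice: 0, outputPrice: 0 });
console.log(settings.contextWindow, settings.inputPrice);
```

Because the merge is shallow and order-dependent, any field the user edits simply shadows the preset value, which matches the "users can still override any auto-populated field" behavior described above.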

Test Procedure

  • Unit tests added for modelCapabilityPresets data (6 tests in packages/types)
  • Unit tests added for preset picker UI rendering (4 tests in webview-ui)
  • All existing tests continue to pass
  • Run: cd packages/types && npx vitest run
  • Run: cd webview-ui && npx vitest run src/components/settings/providers/__tests__/OpenAICompatible.spec.tsx

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue
  • Scope: Changes are focused on the linked issue
  • Self-Review: Performed a self-review of the code
  • Testing: New tests added covering the changes
  • Documentation Impact: No documentation updates required
  • Contribution Guidelines: Read and agree to the Contributor Guidelines

Documentation Updates

  • No documentation updates are required.

Additional Notes

Feedback and guidance are welcome.


roomote added 2 commits March 9, 2026 18:49
Adds a searchable dropdown to the OpenAI Compatible provider settings
that lets users select from all known model capabilities across every
provider Roo supports (Anthropic, OpenAI, DeepSeek, Gemini, MiniMax,
Mistral, Moonshot/Kimi, Qwen, SambaNova, xAI, ZAi/GLM).

When a preset is selected, the model capability fields (context window,
max tokens, image support, prompt caching, pricing, etc.) are
automatically populated. Users can still choose "Custom" to configure
everything manually as before.

Changes:
- packages/types: new all-model-capabilities.ts aggregating presets
- webview-ui: preset picker dropdown in OpenAICompatible.tsx
- i18n: English translation keys for the new UI
- Tests for both the preset data and the UI component

Addresses #11674

bozoweed commented Mar 9, 2026

@roomote please ensure all translations are correctly defined thx


roomote-v0 bot commented Mar 9, 2026


Added capabilityPreset translations to all 17 non-EN locales (ca, de, es, fr, hi, id, it, ja, ko, nl, pl, pt-BR, ru, tr, vi, zh-CN, zh-TW). All JSON files validated, missing translations script reports zero gaps, and all existing tests pass.


@bozoweed

This appears to work fine now <3. Please merge this, or something like it; I can finally use Kimi K2.5 on the NVIDIA NIM API, and it closes the issue about wrong model usage on the OpenAI Compatible endpoint.

What does this PR solve?

  • the issue with Kimi K2.5 not supporting thinking correctly
  • the issue with DeepSeek V3.2 with thinking enabled
  • all issues about wrong model usage during LLM auto turns

Why should we merge this?
@hannesrudolph told me this issue should have been fixed a few days ago (the initial issue about NVIDIA API usage in Roo Code) and to open a new issue if it still failed. I opened a new issue because it was not solved at all; Roo Code was still not using models correctly. With this change, everything seems to work like a charm. I don't know what you think of these changes, but I can guarantee it is working perfectly now on NVIDIA: I built this PR and am currently using it in my VS Code.

Thanks in advance <3



Successfully merging this pull request may close these issues.

[ENHANCEMENT] For OpenAI Compatible endpoints provide a dropdown from which users can select among all the models that Roo has specific capabilities for
