Problem (one or two sentences)
Users running LLMs locally via llama.cpp miss out on the model-specific support Roo provides. For example, support for MiniMax M2.5 was just added, but only when the model is used through the MiniMax provider, even though the model itself is identical when served via llama.cpp.
Context (who is affected and when)
Anyone using models via llama.cpp (or exo, mlx-openai-server, etc.) through the OpenAI Compatible endpoint is affected: they miss out on the model-specific interfaces that make those models work better with Roo.
Desired behavior (conceptual, not technical)
When the OpenAI Compatible provider is selected, show a second dropdown beneath it listing all the models Roo has specific support for, so the user can pick the entry that matches the model they are actually running.
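A minimal sketch of the resolution logic this dropdown implies, in TypeScript. All names here (`ModelProfile`, `KNOWN_PROFILES`, `resolveProfile`, the `universal` fallback id) are hypothetical illustrations, not Roo's actual API; the point is only that an unrecognized selection falls back to a generic profile equivalent to today's behavior.

```typescript
// Hypothetical per-model capability profile (names are illustrative, not Roo's API).
interface ModelProfile {
  id: string;
  contextWindow: number;
  supportsTools: boolean;
}

// A couple of example entries; in practice this would be Roo's supported-model list.
const KNOWN_PROFILES: ModelProfile[] = [
  { id: "minimax-m2.5", contextWindow: 200_000, supportsTools: true },
  { id: "qwen2.5-coder", contextWindow: 128_000, supportsTools: true },
];

// Generic fallback matching the current behavior for unlisted models.
const UNIVERSAL_PROFILE: ModelProfile = {
  id: "universal",
  contextWindow: 32_000,
  supportsTools: false,
};

// Resolve the dropdown selection to a profile, falling back to 'Universal'.
function resolveProfile(selectedId: string): ModelProfile {
  return KNOWN_PROFILES.find((p) => p.id === selectedId) ?? UNIVERSAL_PROFILE;
}
```

With this shape, selecting a known model applies its tuned settings, while any other selection degrades gracefully to the generic OpenAI Compatible handling that exists today.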
Constraints / preferences (optional)
No response
Request checklist
Roo Code Task Links (optional)
No response
Acceptance criteria (optional)
No response
Proposed approach (optional)
No response
Trade-offs / risks (optional)
For models the user may be running that aren't in Roo's supported list, provide a 'Universal' option in the dropdown, which is essentially the behavior we have today.