Conversation
The PR description has been updated. Please fill out the template for your PR to be reviewed.
markstur left a comment
helpers/server_type.py still has is_vllm_server_with_structured_output() (and its tests), but it looks like that is for openai-vllm, which we keep. Intentional, right?
# There's also some risk that a group could unintentionally fully import the dependencies of
# another group (without explicitly listing it). That will lead to false positives here on
# the should_fail side. This happens with vllm (which imports all the parts of hf) so we just
# exclude the hf import statements from that test.
Can we please leave this comment in the tests? I'm okay if we want to explicitly remove the vllm stuff at the end.
Perhaps we should name this something backend agnostic so that future changes don't require a change in the script name?
Yes; we want to remove our vLLM backend but not the vLLM compatibility with the OpenAI backend.
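For context, the import-isolation test discussed in the quoted comment can be sketched roughly like this. The group and module names below are stand-ins (stdlib modules), not the project's actual optional-dependency layout; the point is only the should_succeed/should_fail split and the exclusion escape hatch for overlaps like vllm transitively importing hf:

```python
import importlib

# Illustrative mapping of optional dependency groups to the modules they
# make importable. These are stand-ins, NOT the project's real groups.
GROUP_IMPORTS = {
    "hf": ["json", "math"],    # pretend these are the hf group's imports
    "vllm": ["base64"],        # pretend this is the vllm group's import
}


def partition_imports(installed_groups, excluded=frozenset()):
    """Return (should_succeed, should_fail) module-name sets for a given
    set of installed groups. Modules in `excluded` are skipped entirely,
    which is the escape hatch for one group transitively importing
    another group's dependencies (the false-positive case above)."""
    should_succeed, should_fail = set(), set()
    for group, modules in GROUP_IMPORTS.items():
        bucket = should_succeed if group in installed_groups else should_fail
        bucket.update(m for m in modules if m not in excluded)
    return should_succeed, should_fail


# With only "vllm" installed, the hf modules land on the should_fail side
# even though vllm would pull them in anyway; excluding them avoids the
# false positive described in the comment.
ok, fail = partition_imports({"vllm"})
assert fail == {"json", "math"}
ok, fail = partition_imports({"vllm"}, excluded={"json", "math"})
assert fail == set()

# The should_succeed side can then be verified by actually importing.
for mod in ok:
    importlib.import_module(mod)
```

In the real test suite the should_fail set would be checked by asserting that importing each module raises ImportError in an environment where that group is not installed.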
Remove vLLM backend to unblock transformers v5 upgrade
Type of PR
Description
Related issues: mellea.backends.vllm #777, Remove vllm backend and dependency #780

vLLM pins an older version of transformers as a hard dependency, which blocks us from upgrading to transformers v5. Since vLLM is the only thing holding us back, the cleanest path forward is to remove it from the project entirely rather than waiting for vLLM to catch up.
Changes in this PR:
- Removed LocalVLLMBackend and the mellea/backends/vllm module
- Removed the [vllm] optional dependency and dropped it from [all]/[backends]
- Removed the OpenAI-compatible server fixture
- uv.lock — the transformers version constraint is now freed

Out of scope: the transformers v5 upgrade itself is being done here: #418
Testing