Allow custom VLLM endpoint URL #306
Conversation
Review by Korbit AI
Korbit automatically attempts to detect when you fix issues in new commits.
| Category | Issue | Status |
|---|---|---|
| Performance | Repeated environment variable lookup on model instantiation | |
Files scanned
| File Path | Reviewed |
|---|---|
| src/agentlab/llm/chat_api.py | ✅ |
```diff
 api_key_env_var="VLLM_API_KEY",
 client_class=OpenAI,
-client_args={"base_url": "http://0.0.0.0:8000/v1"},
+client_args={"base_url": os.getenv("VLLM_API_URL", "http://localhost:8000/v1")},
```
Repeated environment variable lookup on model instantiation 
What is the issue?
The os.getenv() call is executed on every VLLMChatModel instantiation, performing an unnecessary environment variable lookup each time.
Why this matters
This performs a redundant lookup each time a `VLLMChatModel` instance is created, even though the environment variable is unlikely to change during program execution (in CPython, `os.getenv()` reads the cached `os.environ` mapping rather than making a system call, but the repeated work is still avoidable). The overhead grows in scenarios with frequent model instantiation.
Suggested change
Cache the environment variable lookup at module level or class level to avoid repeated os.getenv() calls:
```python
# At module level
VLLM_BASE_URL = os.getenv("VLLM_API_URL", "http://localhost:8000/v1")

# Then in __init__:
client_args={"base_url": VLLM_BASE_URL}
```
When deploying a VLLM server on a different node/GPU than the one running the agents, the base URL for the endpoint cannot be `http://0.0.0.0:8000`.

AgentLab/src/agentlab/llm/chat_api.py, lines 463 to 482 in da8cb7c

This PR introduces a new environment variable `VLLM_API_URL` that allows using a custom endpoint URL or falling back to the default local server.

Description by Korbit AI
What change is being made?
Allow configuring the VLLM endpoint URL via the environment variable `VLLM_API_URL`, defaulting to `http://localhost:8000/v1` if not set.
Why are these changes being made?
To enable configuring the VLLM backend URL without code changes, using a sensible default when the variable is not provided.
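For illustration, this is roughly what the resolved `client_args` amount to when used through the `openai` client directly (a hedged sketch: the served model name and the API key value are placeholders, and the actual wiring lives in `src/agentlab/llm/chat_api.py`):

```python
import os
from openai import OpenAI

# Sketch of the patched client_args once resolved: the endpoint falls back
# to the local default when VLLM_API_URL is unset.
client = OpenAI(
    base_url=os.getenv("VLLM_API_URL", "http://localhost:8000/v1"),
    api_key=os.getenv("VLLM_API_KEY", "EMPTY"),  # vLLM servers commonly accept a dummy key
)

response = client.chat.completions.create(
    model="my-served-model",  # hypothetical: whatever model the vLLM server serves
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```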