Problem
There's no way to verify that configured models are working before running a pipeline. You only find out when a command fails mid-run, potentially after spending time/money on partial execution.
Proposed Solution
Add an extropy config test command that:
- Pings each configured model (pipeline fast, pipeline strong, simulation fast, simulation strong) with a minimal request
- Reports success/failure for each
- Shows latency and any error messages
Example Output
$ extropy config test
Testing model connectivity...
Pipeline models:
✓ anthropic/claude-sonnet-4-6 (fast) — 342ms
✓ anthropic/claude-sonnet-4-6 (strong) — 358ms
Simulation models:
✓ azure/gpt-5-mini (fast) — 128ms
✓ azure/gpt-5-mini (strong) — 131ms
All models responding.
With a failing model:
$ extropy config test
Testing model connectivity...
Pipeline models:
✓ anthropic/claude-sonnet-4-6 (fast) — 342ms
✗ anthropic/claude-sonnet-4-6 (strong) — Error: Invalid API key
Simulation models:
✓ azure/gpt-5-mini (fast) — 128ms
✓ azure/gpt-5-mini (strong) — 131ms
1 model failed. Check API keys and model availability.
Implementation Notes
- Use the existing LLM client infrastructure
- Send a trivial completion request (e.g., "Say 'ok'" with max_tokens=5)
- Consider adding a --model <model> flag to test a specific model
- Exit code 0 if all pass, non-zero if any fail
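The --model flag and exit-code behavior might be wired up roughly like this. A minimal sketch assuming argparse and a hard-coded model map; the real command would read models from the extropy config and ping them as described above.

```python
import argparse

def main(argv=None) -> int:
    parser = argparse.ArgumentParser(prog="extropy config test")
    parser.add_argument("--model", help="test only this model")
    args = parser.parse_args(argv)

    # Placeholder config; the real command would load this from extropy's config.
    configured = {
        "pipeline fast": "anthropic/claude-sonnet-4-6",
        "pipeline strong": "anthropic/claude-sonnet-4-6",
        "simulation fast": "azure/gpt-5-mini",
        "simulation strong": "azure/gpt-5-mini",
    }
    if args.model:
        configured = {r: m for r, m in configured.items() if m == args.model}
        if not configured:
            print(f"No configured model matches {args.model!r}.")
            return 2  # non-zero: nothing was tested

    for role, model in configured.items():
        # Real command: send the trivial completion request and report status.
        print(f"testing {model} ({role})")
    return 0  # 0 only when every tested model responded
```

Returning a distinct code for "no matching model" (here 2) keeps scripted callers from mistaking a typo in --model for a passing check.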