# LLM Model Evaluation Results

## Overview

This document tracks evaluation results of LLM models used with the StackRox MCP server. Evaluations measure how well a model selects the correct MCP tools, passes appropriate parameters, stays within expected tool call bounds, and produces accurate responses.

All evaluations use the [mcpchecker](https://github.com/mcpchecker/mcpchecker) framework against a deterministic WireMock-based mock backend, ensuring reproducible results across runs.

## Evaluation Methodology

### Test Framework

Evaluations are run using **mcpchecker**, configured in [`e2e-tests/mcpchecker/eval.yaml`](../e2e-tests/mcpchecker/eval.yaml). The framework:

1. Sends a natural language prompt to the model under test
2. Lets the model interact with the MCP server (tool calls, parameter selection)
3. Validates the model's tool usage against the expected behavior using the task's assertions
4. Has an LLM judge evaluate response quality against reference answers

### Test Environment

- **Backend**: WireMock mock server with deterministic fixtures (no live StackRox Central required)
- **MCP Config**: [`e2e-tests/mcpchecker/mcp-config-mock.yaml`](../e2e-tests/mcpchecker/mcp-config-mock.yaml)
- **Task definitions**: [`e2e-tests/mcpchecker/tasks/`](../e2e-tests/mcpchecker/tasks/)

### Assertions

Each task defines assertions from the following set:

| Assertion | Description |
|-----------|-------------|
| `toolsUsed` | Required tool(s) must be called, optionally with matching arguments (`argumentsMatch`) |
| `minToolCalls` | Minimum total tool calls across all tools |
| `maxToolCalls` | Maximum total tool calls (prevents runaway tool usage) |

A task passes when **all** its assertions pass **and** the LLM judge approves the response.
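
For orientation, a task definition combining these assertions might look like the sketch below. The field names, tool name, and values are illustrative assumptions, not the verified mcpchecker schema; the actual definitions live under [`e2e-tests/mcpchecker/tasks/`](../e2e-tests/mcpchecker/tasks/).

```yaml
# Hypothetical task sketch -- field and tool names are assumptions,
# not the exact mcpchecker schema.
name: cve-detected-clusters
prompt: "Which clusters have CVE-2021-44228 detected?"
assertions:
  toolsUsed:
    - tool: get_cve_affected_clusters   # assumed tool name
      argumentsMatch:
        cve: "CVE-2021-44228"
  minToolCalls: 1
  maxToolCalls: 3
judge:
  referenceAnswer: >
    CVE-2021-44228 is detected on the production and staging clusters.
```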

## Evaluation Results

<!-- model:gpt-5-mini start -->

### gpt-5-mini — 2026-03-31

**Overall: 10/11 tasks passed (90%)**

#### Task Results

| # | Task | Result | toolsUsed | minCalls | maxCalls | Input Tokens | Output Tokens |
|---|------|--------|-----------|----------|----------|--------------|---------------|
| 1 | list-clusters | Pass | Pass | Pass | Pass | 1728 | 962 |
| 2 | cve-detected-workloads | Pass | Pass | Pass | Pass | 565 | 1187 |
| 3 | cve-detected-clusters | Pass | **Fail** | Pass | Pass | 640 | 1998 |
| 4 | cve-nonexistent | Pass | Pass | Pass | Pass | 1077 | 2605 |
| 5 | cve-cluster-does-exist | **Fail** | Pass | Pass | Pass | 539 | 1285 |
| 6 | cve-cluster-does-not-exist | Pass | **Fail** | Pass | Pass | 1528 | 1324 |
| 7 | cve-clusters-general | Pass | Pass | Pass | Pass | 796 | 2304 |
| 8 | cve-cluster-list | Pass | Pass | Pass | Pass | 488 | 1917 |
| 9 | cve-log4shell | Pass | Pass | Pass | Pass | 1008 | 2936 |
| 10 | cve-multiple | Pass | Pass | Pass | Pass | 1142 | 2493 |
| 11 | rhsa-not-supported | Pass | — | Pass | Pass | 650 | 2488 |

**Total input tokens**: 10161 | **Total output tokens**: 21499
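
The summary lines are plain aggregates of the per-task rows (the overall line reports the percentage truncated to a whole number). As a sketch of the arithmetic:

```shell
# Recompute the gpt-5-mini summary line from the table above.
passed=10
total=11
awk -v p="$passed" -v t="$total" \
    'BEGIN { printf "%d/%d tasks passed (%.1f%%)\n", p, t, 100 * p / t }'
# -> 10/11 tasks passed (90.9%)

# The token totals are simple sums over the per-task columns.
printf '%s\n' 1728 565 640 1077 539 1528 796 488 1008 1142 650 \
    | awk '{ s += $1 } END { print "Total input tokens: " s }'
# -> Total input tokens: 10161
```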

<!-- model:gpt-5-mini end -->

<!-- model:gpt-5 start -->

### gpt-5 — 2026-03-31

**Overall: 9/11 tasks passed (81%)**

#### Task Results

| # | Task | Result | toolsUsed | minCalls | maxCalls | Input Tokens | Output Tokens |
|---|------|--------|-----------|----------|----------|--------------|---------------|
| 1 | list-clusters | Pass | Pass | Pass | Pass | 1720 | 552 |
| 2 | cve-detected-workloads | Pass | Pass | Pass | Pass | 1589 | 1003 |
| 3 | cve-detected-clusters | Pass | Pass | Pass | Pass | 521 | 1702 |
| 4 | cve-nonexistent | **Fail** | Pass | Pass | Pass | 2406 | 2085 |
| 5 | cve-cluster-does-exist | Pass | Pass | Pass | Pass | 1563 | 1682 |
| 6 | cve-cluster-does-not-exist | **Fail** | **Fail** | Pass | Pass | 504 | 1868 |
| 7 | cve-clusters-general | Pass | Pass | Pass | Pass | 516 | 1477 |
| 8 | cve-cluster-list | Pass | Pass | Pass | Pass | 706 | 1964 |
| 9 | cve-log4shell | Pass | Pass | Pass | Pass | 1008 | 2304 |
| 10 | cve-multiple | Pass | Pass | Pass | Pass | 2166 | 2492 |
| 11 | rhsa-not-supported | Pass | — | Pass | Pass | 818 | 2187 |

**Total input tokens**: 13517 | **Total output tokens**: 19316

<!-- model:gpt-5 end -->

## How to Run Evaluations

### Prerequisites

- Go 1.25+
- LLM judge credentials configured via environment variables (see below)

### Running an Evaluation

1. **Configure the agent model** via environment variable or in `e2e-tests/mcpchecker/eval.yaml`:

   ```bash
   export MODEL_NAME=gpt-5-nano
   ```

2. **Set judge environment variables**:

   ```bash
   export JUDGE_TYPE=openai
   export JUDGE_API_KEY=<your-key>
   export JUDGE_MODEL_NAME=<judge-model>
   ```

3. **Run the evaluation**:

   ```bash
   make e2e-test
   ```

4. **Update this document** with the results:

   ```bash
   ./scripts/update-model-evaluation.sh \
     --model-id <model-id> \
     --results e2e-tests/mcpchecker/mcpchecker-stackrox-mcp-e2e-out.json
   ```

   The script generates a markdown section with the task results table and
   inserts or updates it in this document using HTML comment markers.

   If results for the given `--model-id` already exist, the script replaces
   the existing section. Otherwise, it appends a new section.
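
A missing judge variable otherwise only surfaces partway through a run. The variables from step 2 can be sanity-checked up front with a small POSIX-sh sketch (the `check_judge_env` helper name is made up for illustration; it is not part of mcpchecker):

```shell
# check_judge_env: verify the judge variables from step 2 are all set.
# The helper name is illustrative, not part of mcpchecker or this repo.
check_judge_env() {
  missing=0
  for v in JUDGE_TYPE JUDGE_API_KEY JUDGE_MODEL_NAME; do
    # indirect expansion via eval keeps this POSIX-sh compatible
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing required variable: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: with placeholder values set, the check passes.
export JUDGE_TYPE=openai JUDGE_API_KEY=dummy-key JUDGE_MODEL_NAME=dummy-judge
check_judge_env && echo "judge environment looks complete"
# -> judge environment looks complete
```

Running a check like this before `make e2e-test` fails fast instead of wasting a partially completed evaluation run.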