Commit 94daa3c (1 parent: ad5067f)

fix(tests): stabilize unreadable-source CLI assertion across runners by checking combined stdout/stderr output; fix README.md; fix matrix error in GAW

3 files changed: 55 additions & 34 deletions

.github/workflows/benchmark.yml (44 additions, 10 deletions)

```diff
@@ -64,22 +64,52 @@ jobs:
         memory: ""
         timeout_minutes: 60
 
-    if: >
-      (github.event_name != 'workflow_dispatch' && matrix.profile == 'smoke') ||
-      (github.event_name == 'workflow_dispatch' && matrix.profile == inputs.profile)
-
     steps:
+      - name: Resolve run profile gate
+        shell: bash
+        run: |
+          enabled=0
+          if [ "${{ github.event_name }}" != "workflow_dispatch" ]; then
+            if [ "${{ matrix.profile }}" = "smoke" ]; then
+              enabled=1
+            fi
+          else
+            if [ "${{ matrix.profile }}" = "${{ inputs.profile }}" ]; then
+              enabled=1
+            fi
+          fi
+          echo "BENCH_ENABLED=$enabled" >> "$GITHUB_ENV"
+
       - name: Checkout
-        uses: actions/checkout@v6
+        if: env.BENCH_ENABLED == '1'
+        uses: actions/checkout@v6.0.2
+
+      - name: Set up Python (macOS local benchmark)
+        if: env.BENCH_ENABLED == '1' && runner.os == 'macOS'
+        uses: actions/setup-python@v6.2.0
+        with:
+          python-version: "3.13"
+          allow-prereleases: true
+
+      - name: Set up uv (macOS local benchmark)
+        if: env.BENCH_ENABLED == '1' && runner.os == 'macOS'
+        uses: astral-sh/setup-uv@v5
+        with:
+          enable-cache: true
+
+      - name: Install dependencies (macOS local benchmark)
+        if: env.BENCH_ENABLED == '1' && runner.os == 'macOS'
+        run: uv sync --all-extras --dev
 
       - name: Set benchmark output path
+        if: env.BENCH_ENABLED == '1'
         shell: bash
         run: |
           mkdir -p .cache/benchmarks
           echo "BENCH_JSON=.cache/benchmarks/codeclone-benchmark-${{ matrix.label }}.json" >> "$GITHUB_ENV"
 
       - name: Build and run Docker benchmark (Linux)
-        if: runner.os == 'Linux'
+        if: env.BENCH_ENABLED == '1' && runner.os == 'Linux'
         env:
           RUNS: ${{ matrix.runs }}
           WARMUPS: ${{ matrix.warmups }}
@@ -90,7 +120,7 @@ jobs:
           cp .cache/benchmarks/codeclone-benchmark.json "$BENCH_JSON"
 
       - name: Run local benchmark (macOS)
-        if: runner.os == 'macOS'
+        if: env.BENCH_ENABLED == '1' && runner.os == 'macOS'
         run: |
           uv run python benchmarks/run_benchmark.py \
             --target . \
@@ -100,7 +130,7 @@ jobs:
             --output "$BENCH_JSON"
 
       - name: Print benchmark summary
-        if: always()
+        if: env.BENCH_ENABLED == '1'
         shell: bash
         run: |
           python - <<'PY'
@@ -184,9 +214,13 @@ jobs:
           fh.write("\n".join(lines) + "\n")
           PY
 
+      - name: Skip non-selected profile
+        if: env.BENCH_ENABLED != '1'
+        run: echo "Skipping matrix profile '${{ matrix.profile }}' for event '${{ github.event_name }}'"
+
       - name: Upload benchmark artifact
-        if: always()
-        uses: actions/upload-artifact@v7
+        if: env.BENCH_ENABLED == '1'
+        uses: actions/upload-artifact@v4
         with:
           name: codeclone-benchmark-${{ matrix.label }}
           path: ${{ env.BENCH_JSON }}
```
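The commit moves the profile gate from a job-level `if:` into a step that writes `BENCH_ENABLED` to `$GITHUB_ENV`, so non-selected matrix entries still start and report a status instead of vanishing. The gate's decision logic can be mirrored as a small pure function — a sketch only; `bench_enabled` is my name for it, not part of the workflow:

```python
def bench_enabled(event_name: str, profile: str, input_profile: str = "") -> bool:
    """Restate the workflow's bash gate: non-dispatch events run only the
    'smoke' profile; a manual workflow_dispatch runs only the chosen profile."""
    if event_name != "workflow_dispatch":
        return profile == "smoke"
    return profile == input_profile

# Push/schedule events execute only the smoke profile...
print(bench_enabled("push", "smoke"), bench_enabled("push", "full"))    # True False
# ...while a manual dispatch executes exactly the requested profile.
print(bench_enabled("workflow_dispatch", "full", "full"))               # True
```

Every subsequent step then checks `env.BENCH_ENABLED == '1'`, and the new "Skip non-selected profile" step gives gated-off entries an explicit log line.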

README.md (6 additions, 20 deletions)

````diff
@@ -29,7 +29,7 @@ all with baseline-aware governance that separates **known** technical debt from
 - **Baseline governance** — known debt stays accepted; CI blocks only new clones and metric regressions
 - **Reports** — interactive HTML, deterministic JSON/TXT plus Markdown and SARIF projections from one canonical report
 - **CI-first** — deterministic output, stable ordering, exit code contract, pre-commit support
-- **Fast** — incremental caching, parallel processing, warm-run optimization
+- **Fast*** — incremental caching, parallel processing, warm-run optimization, and reproducible benchmark coverage
 
 ## Quick Start
 
@@ -42,21 +42,6 @@ codeclone . --json --md --sarif --text # generate machine-readable reports
 codeclone . --ci # CI mode (--fail-on-new --no-color --quiet)
 ```
 
-## Reproducible Docker Benchmark
-
-```bash
-./benchmarks/run_docker_benchmark.sh
-```
-
-The wrapper builds `benchmarks/Dockerfile`, runs isolated container benchmarks, and
-writes deterministic results to `.cache/benchmarks/codeclone-benchmark.json`.
-Use environment overrides to pin benchmark envelope:
-
-```bash
-CPUSET=0 CPUS=1.0 MEMORY=2g RUNS=16 WARMUPS=4 \
-./benchmarks/run_docker_benchmark.sh
-```
-
 <details>
 <summary>Run without install</summary>
 
@@ -273,10 +258,10 @@ Architecture: [`docs/architecture.md`](docs/architecture.md) · CFG semantics: [
 | Docker benchmark contract | [`docs/book/18-benchmarking.md`](docs/book/18-benchmarking.md) |
 | Determinism | [`docs/book/12-determinism.md`](docs/book/12-determinism.md) |
 
-<details>
-<summary>Benchmarking</summary>
+## Benchmarking
 
-## Reproducible Docker Benchmark
+<details>
+<summary>Reproducible Docker Benchmark</summary>
 
 ```bash
 ./benchmarks/run_docker_benchmark.sh
@@ -292,7 +277,8 @@ CPUSET=0 CPUS=1.0 MEMORY=2g RUNS=16 WARMUPS=4 \
 ./benchmarks/run_docker_benchmark.sh
 ```
 
-Benchmark contract: [docs/book/18-benchmarking.md](docs/book/18-benchmarking.md)
+* Performance claims are backed by the reproducible benchmark workflow documented
+  in [docs/book/18-benchmarking.md](docs/book/18-benchmarking.md)
 
 </details>
 
````
tests/test_cli_inprocess.py (5 additions, 4 deletions)

```diff
@@ -2930,10 +2930,11 @@ def _source_read_error(
     monkeypatch.setattr(pipeline, "process_file", _source_read_error)
     _patch_parallel(monkeypatch)
     _run_main(monkeypatch, [str(tmp_path), "--no-progress"])
-    out = capsys.readouterr().out
-    assert "Cannot read file" in out
-    assert "CONTRACT ERROR:" not in out
-    assert _summary_metric(out, "Files skipped") == 1
+    captured = capsys.readouterr()
+    combined = captured.out + captured.err
+    assert "Cannot read file" in combined
+    assert "CONTRACT ERROR:" not in combined
+    assert _summary_metric(captured.out, "Files skipped") == 1
 
 
 def test_cli_unreadable_source_fails_in_ci_with_contract_error(
```
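The test change is stable because some runners route the "Cannot read file" diagnostic to stderr rather than stdout, so asserting on `.out` alone is flaky. The same pattern can be shown without pytest — a minimal sketch in which `emit_diagnostics` is a hypothetical stand-in for the CLI under test:

```python
import contextlib
import io
import sys

def emit_diagnostics() -> None:
    # Hypothetical stand-in: the warning may land on stderr, the summary on stdout.
    print("Cannot read file: example.py", file=sys.stderr)
    print("Files skipped: 1")

out, err = io.StringIO(), io.StringIO()
with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
    emit_diagnostics()

combined = out.getvalue() + err.getvalue()
assert "Cannot read file" not in out.getvalue()  # stdout alone misses the warning
assert "Cannot read file" in combined            # the combined check always sees it
```

Note the commit still reads the summary metric from `captured.out` only, since the summary table's stream is deterministic; only the free-form diagnostic needs the combined check.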
