Fail fast when no tests are selected after collection #6558 (Draft)
p-datadog pushed a commit to p-datadog/system-tests that referenced this pull request (Mar 20, 2026):

Reverts:
- 1f95bf3 selected test detection fixed
- 69e93c4 terminate self more efficiently
- 100a53d exit faster when nothing is collected
- 848db3c use rsync instead of cp
- 42f9453 use dtr

These are replaced by:
- DataDog#6558 (fail fast when no tests are selected)
- DataDog#6560 (fix --binary-path for source repos)
- DataDog#6561 (skip full wipe for --binary-path)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Commit message (force-pushed 0f88657 to 5e0353d):

When pytest collects zero tests for a scenario (e.g. a wrong scenario name or misconfigured test filters), the run would previously either silently succeed or hang. This is a footgun in CI: a green build that tested nothing. Add an early exit in `pytest_collection_finish` that calls `pytest.exit()` with returncode=1 when no items survive collection filtering.

The check correctly skips these legitimate zero-test modes:
- `--collect-only` (inspection, not execution)
- `--declaration-report` (metadata collection)
- `--sleep` (intentionally deselects all tests)
- `--skip-empty-scenario` (all tests are xfail/skip)

Note: `session.testscollected` cannot be used here because in pytest 7.1.3 it is assigned on the line *after* the `collection_finish` hook fires in `Session.perform_collect`. `len(session.items)` is the correct check.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
What does this PR do?
When pytest collects zero tests for a scenario, the run exits with code 0 after spending 10-45 seconds sleeping on interface timeouts. A misconfigured scenario name or broken test filter produces a passing CI build that ran no tests.
This PR adds a check in `pytest_collection_finish` that calls `pytest.exit()` with return code 1 when no test items survive collection filtering.

Why `len(session.items)` instead of `session.testscollected`

In pytest 7.1.3, `session.testscollected` is always `0` inside `pytest_collection_finish`. This is a hook ordering issue in `Session.perform_collect`: `len(session.items)` reflects the already-filtered list and is correct at this point.

What happens today with zero tests
1. `pytest_sessionstart` starts containers (weblog, agent, backend) before collection runs.
2. `pytest_collection_modifyitems` filters all items out; `session.items` is empty.
3. `pytest_collection_finish` runs: the `for item in session.items` loop (line 411) iterates zero times, which is harmless, but execution falls through to line 457.
4. For end-to-end scenarios, `post_setup` calls `_wait_and_stop_containers` (endtoend.py:400). Since `skip_empty_scenario` is false, `force_interface_timout_to_zero` is false, so the real per-language timeouts are used: `_wait_interface` calls `interface.wait(timeout)` (endtoend.py:451), which is `ProxyBasedInterfaceValidator.wait()` in _core.py:95. This is an unconditional `time.sleep()`; it always sleeps the full duration regardless of whether any data arrived.
5. The `library_interface_timeout` values are set per language in endtoend.py:307-320, plus the 5s `agent_interface_timeout`. So with zero tests, the run sleeps for 5-45 seconds waiting for interface data from tests that never ran.
6. After the sleeps, containers stop, and pytest exits with code 0.
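The unconditional wait described above can be sketched as follows. This is a minimal illustration of the behavior only; the real `ProxyBasedInterfaceValidator` in _core.py does more than this:

```python
import time


class ProxyBasedInterfaceValidator:
    """Sketch only: illustrates the unconditional wait, not the real class."""

    def wait(self, timeout: float) -> None:
        # Sleeps the full duration even when all expected interface data
        # has already arrived -- there is no early-exit condition.
        time.sleep(timeout)
```

With zero tests collected, the session still pays the full library timeout plus the 5s agent timeout before the containers are stopped.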
Excluded modes
The check skips modes where zero tests are intentional:
- `--collect-only` and `--declaration-report`: return before the check (existing early returns)
- `--sleep`: deselects all items; keeps the environment running for manual exploration
- `--skip-empty-scenario`: all tests are xfail/skip; `pytest_sessionfinish` converts `NO_TESTS_COLLECTED` to exit code 0

The change
`conftest.py`: 6 lines added in `pytest_collection_finish`, between the existing early-return guards and the sleep mode handler.

Before / after
Before:
After:
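The added check could look roughly like this. This is a sketch, not the actual diff: the helper and the option names `declaration_report`, `sleep`, and `skip_empty_scenario` are assumptions; the real guard conditions live in conftest.py.

```python
def should_fail_for_empty_collection(n_items: int, collect_only: bool,
                                     declaration_report: bool, sleep_mode: bool,
                                     skip_empty_scenario: bool) -> bool:
    """Return True when zero collected items indicate a misconfiguration."""
    if collect_only or declaration_report or sleep_mode or skip_empty_scenario:
        return False  # zero tests are legitimate in these modes
    return n_items == 0


def pytest_collection_finish(session) -> None:
    import pytest  # local import keeps the helper above usable standalone

    config = session.config
    if should_fail_for_empty_collection(
        n_items=len(session.items),  # session.testscollected is still 0 here
        collect_only=config.option.collectonly,
        declaration_report=config.option.declaration_report,  # assumed name
        sleep_mode=config.option.sleep,                       # assumed name
        skip_empty_scenario=config.option.skip_empty_scenario,  # assumed name
    ):
        pytest.exit("No tests were selected after collection", returncode=1)
```

`pytest.exit(..., returncode=1)` aborts the session immediately, so the container wait/stop phase described above never runs.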
Modes unaffected
- `--collect-only`
- `--declaration-report`
- `--sleep`
- `--skip-empty-scenario`

Test plan
- `--collect-only`: works as before
- `--skip-empty-scenario` on a scenario with only xfail tests: exits 0
- `--sleep`: enters sleep mode as before

🤖 Generated with Claude Code