
Fail fast when no tests are selected after collection #6558

Draft
p-datadog wants to merge 2 commits into main from p-datadog/fail-fast-zero-tests

Conversation

@p-datadog
Member

What does this PR do?

When pytest collects zero tests for a scenario, the run exits with code 0 after spending 5-45 seconds sleeping on interface timeouts. A misconfigured scenario name or a broken test filter produces a passing CI build that ran no tests.

This PR adds a check in pytest_collection_finish that calls pytest.exit() with return code 1 when no test items survive collection filtering.

Why len(session.items) instead of session.testscollected

In pytest 7.1.3, session.testscollected is always 0 inside pytest_collection_finish. This is a hook ordering issue in Session.perform_collect:

# _pytest/main.py, Session.perform_collect (pytest 7.1.3)
hook.pytest_collection_modifyitems(...)      # line 57 — filters items
hook.pytest_collection_finish(session=self)  # line 61 — our hook runs here
self.testscollected = len(items)             # line 63 — assigned AFTER the hook

len(session.items) reflects the already-filtered list and is correct at this point.
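The ordering pitfall can be reproduced without pytest. The sketch below is illustrative only: FakeSession, perform_collect, and on_collection_finish are stand-ins invented here to mimic the assignment order shown above, not pytest's actual classes.

```python
class FakeSession:
    """Stand-in mimicking the hook ordering in Session.perform_collect (pytest 7.1.3)."""

    def __init__(self):
        self.testscollected = 0  # initialized to 0, like pytest's Session
        self.items = []

    def perform_collect(self, collected, collection_finish_hook):
        self.items = collected                 # items are filtered and assigned first
        collection_finish_hook(self)           # the collection_finish hook fires here
        self.testscollected = len(self.items)  # assigned only AFTER the hook returns


seen = {}


def on_collection_finish(session):
    # Inside the hook, testscollected still holds its stale initial value...
    seen["testscollected"] = session.testscollected
    # ...while len(session.items) already reflects the filtered list.
    seen["len_items"] = len(session.items)


s = FakeSession()
s.perform_collect(["test_a", "test_b"], on_collection_finish)
print(seen["testscollected"], seen["len_items"])  # 0 2
```

Inside the callback the attribute reads 0 even though two items were collected, which is exactly why the PR checks len(session.items) instead.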

What happens today with zero tests

  1. pytest_sessionstart starts containers (weblog, agent, backend) before collection runs.

  2. pytest_collection_modifyitems filters all items out — session.items is empty.

  3. pytest_collection_finish runs — the for item in session.items loop (line 411) iterates zero times, which is harmless. But execution falls through to line 457:

    context.scenario.post_setup(session)
  4. For end-to-end scenarios, post_setup calls _wait_and_stop_containers (endtoend.py:400). Since skip_empty_scenario is false, force_interface_timout_to_zero is also false, so the real per-language timeouts are used:

    self._wait_interface(interfaces.library, self.library_interface_timeout)
    self._wait_interface(interfaces.agent, self.agent_interface_timeout)      # 5s
    self._wait_interface(interfaces.backend, self.backend_interface_timeout)  # 0s
  5. _wait_interface calls interface.wait(timeout) (endtoend.py:451), which is ProxyBasedInterfaceValidator.wait() in _core.py:95:

    def wait(self, timeout: int):
        time.sleep(timeout)

    This is an unconditional time.sleep() — it always sleeps the full duration regardless of whether any data arrived. The library_interface_timeout values are set per language in endtoend.py:307-320:

    | Language | library_interface_timeout |
    | --- | --- |
    | Java | 25s |
    | Go | 10s |
    | Node.js, Ruby | 0s |
    | PHP | 10s |
    | Python | 5s |
    | .NET, C++, Rust (default) | 40s |

    Plus the 5s agent_interface_timeout. So with zero tests, the run sleeps for 5-45 seconds waiting for interface data from tests that never ran.

  6. After the sleeps, containers stop, and pytest exits with code 0.
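Putting the numbers above together gives the total wasted sleep per language. A small sketch with the values copied from the table; the dict and the wasted_sleep helper are an illustrative reconstruction, not the actual data structures in endtoend.py:

```python
# Per-language library interface timeouts, copied from the table above.
library_interface_timeout = {
    "java": 25,
    "golang": 10,
    "nodejs": 0,
    "ruby": 0,
    "php": 10,
    "python": 5,
    "default": 40,  # .NET, C++, Rust fall back to the default
}
AGENT_INTERFACE_TIMEOUT = 5
BACKEND_INTERFACE_TIMEOUT = 0


def wasted_sleep(language: str) -> int:
    """Seconds a zero-test run spends sleeping before exiting."""
    lib = library_interface_timeout.get(language, library_interface_timeout["default"])
    return lib + AGENT_INTERFACE_TIMEOUT + BACKEND_INTERFACE_TIMEOUT


totals = {lang: wasted_sleep(lang) for lang in library_interface_timeout}
print(wasted_sleep("java"), wasted_sleep("ruby"))     # 30 5
print(min(totals.values()), max(totals.values()))     # 5 45
```

The minimum and maximum match the 5-45 second range quoted above: Node.js and Ruby runs pay only the 5s agent timeout, while the 40s default plus the agent timeout gives 45s.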

Excluded modes

The check skips modes where zero tests are intentional:

  • --collect-only and --declaration-report — return before the check (existing early returns)
  • --sleep — deselects all items; keeps environment running for manual exploration
  • --skip-empty-scenario — all tests are xfail/skip; pytest_sessionfinish converts NO_TESTS_COLLECTED to exit code 0

The change

conftest.py — 6 lines added in pytest_collection_finish, between the existing early-return guards and the sleep mode handler:

if len(session.items) == 0 and not session.config.option.sleep and not session.config.option.skip_empty_scenario:
    pytest.exit("No tests were selected — check scenario name and test filters", returncode=1)
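The guard condition can be exercised in isolation with stand-in objects. In this sketch, should_fail_fast and make_session are hypothetical helpers invented for illustration; the PR inlines the condition directly in pytest_collection_finish rather than factoring it out:

```python
from types import SimpleNamespace


def should_fail_fast(session) -> bool:
    """Guard condition from the PR, factored out for illustration.

    `session` is any object exposing .items and
    .config.option.{sleep, skip_empty_scenario}.
    """
    return (
        len(session.items) == 0
        and not session.config.option.sleep
        and not session.config.option.skip_empty_scenario
    )


def make_session(items, sleep=False, skip_empty_scenario=False):
    # Minimal stand-in for pytest's Session/Config objects.
    option = SimpleNamespace(sleep=sleep, skip_empty_scenario=skip_empty_scenario)
    return SimpleNamespace(items=items, config=SimpleNamespace(option=option))


print(should_fail_fast(make_session([])))                            # True: zero tests, normal run
print(should_fail_fast(make_session(["test_x"])))                    # False: tests selected
print(should_fail_fast(make_session([], sleep=True)))                # False: --sleep excluded
print(should_fail_fast(make_session([], skip_empty_scenario=True)))  # False: --skip-empty-scenario excluded
```

Only the zero-tests, no-flags case trips the exit; both opt-out modes fall through as intended.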

Before / after

Before:

$ ./run.sh --scenario DOES_NOT_EXIST
...
===== no tests ran =====      # exit code 0

After:

$ ./run.sh --scenario DOES_NOT_EXIST
...
!!! No tests were selected — check scenario name and test filters
                              # exit code 1

Modes unaffected

| Mode | Zero tests expected? | Behavior |
| --- | --- | --- |
| --collect-only | N/A | Returns before the check |
| --declaration-report | N/A | Returns before the check |
| --sleep | Yes | Excluded from check |
| --skip-empty-scenario | Yes | Excluded from check |
| Normal run | No | Fails with exit code 1 |

Test plan

  • Run a valid scenario — tests execute normally
  • Run with a bogus scenario name — exits with code 1 and clear message
  • Run with --collect-only — works as before
  • Run with --skip-empty-scenario on a scenario with only xfail tests — exits 0
  • Run with --sleep — enters sleep mode as before

🤖 Generated with Claude Code

@github-actions
Contributor

CODEOWNERS have been resolved as:

conftest.py                                                             @DataDog/system-tests-core

p-datadog pushed a commit to p-datadog/system-tests that referenced this pull request Mar 20, 2026
Reverts:
- 1f95bf3 selected test detection fixed
- 69e93c4 terminate self more efficiently
- 100a53d exit faster when nothing is collected
- 848db3c use rsync instead of cp
- 42f9453 use dtr

These are replaced by:
- DataDog#6558 (fail fast when no tests are selected)
- DataDog#6560 (fix --binary-path for source repos)
- DataDog#6561 (skip full wipe for --binary-path)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@datadog-official

datadog-official bot commented Mar 20, 2026

⚠️ Tests


⚠️ Warnings

🧪 1 Test failed

tests.remote_config.test_remote_configuration.Test_RemoteConfigurationUpdateSequenceLiveDebugging.test_tracer_update_sequence[apache-mod-7.4-zts] from system_tests_suite
utils.interfaces._core.ValidationError: ("{'path': 'datadog/2/LIVE_DEBUGGING/metricProbe_33a64d99-fbed-5eab-bb10-80735405c09b/config', 'length': 360, 'hashes': [{'algorithm': 'sha256', 'hash': '6daaa0eb13996d340d99983bb014ef17453bad39edf19041f24a87a159ff94fe'}]} should be in cached_target_files property: [{'path': 'datadog/2/LIVE_DEBUGGING/metricProbe_33a64d99-fbed-5eab-bb10-80735405c09b/config', 'length': 365, 'hashes': [{'algorithm': 'sha256', 'hash': '4f12b33894fd7178f2464d3fc2c63223c3ee2a29a5cf0936de60ceee88fd0656'}]}, {'path': 'datadog/2/LIVE_DEBUGGING/logProbe_22953c88-eadc-4f9a-aa0f-7f6243f4bf8a/config', 'length': 239, 'hashes': [{'algorithm': 'sha256', 'hash': '8176095e451a5f4d49db40e5eadf7d79b0ca6956cf28c83f87d18f4d66ea2583'}]}, {'path': 'datadog/2/LIVE_DEBUGGING/spanProbe_kepf0cf2-9top-45cf-9f39-59installed/config', 'length': 188, 'hashes': [{'algorithm': 'sha256', 'hash': 'd22df7cf36e9f2b0134c4f6535a7340b9a4435876b79280f91d80942c9562b5b'}]}]", 'SUCCESS - Add back the initial config along with the second (add multiple). RFC about integrating with remote-config: https://docs.google.com/document/d/1u_G7TOr8wJX0dOM_zUDKuRJgxoJU_hVTd5SeaMucQUs')

self = <tests.remote_config.test_remote_configuration.Test_RemoteConfigurationUpdateSequenceLiveDebugging object at 0x7f9bac688b30>

    def test_tracer_update_sequence(self):
        """Test update sequence, based on a scenario mocked in the proxy"""
    
        # Index the request number by runtime ID so that we can support applications
        # that spawns multiple worker processes, each running its own RCM client.
        request_number: dict = defaultdict(int)
...

ℹ️ Info

No other issues found

❄️ No new flaky tests detected

This comment will be updated automatically if new data arrives.
🔗 Commit SHA: 5e0353d

Unicorn Enterprises and others added 2 commits March 26, 2026 20:52
When pytest collects zero tests for a scenario (e.g. wrong scenario name,
misconfigured test filters), the run would previously either silently
succeed or hang. This is a footgun in CI — a green build that tested
nothing.

Add an early exit in pytest_collection_finish that calls pytest.exit()
with returncode=1 when no items survive collection filtering.

The check correctly skips these legitimate zero-test modes:
- --collect-only (inspection, not execution)
- --declaration-report (metadata collection)
- --sleep (intentionally deselects all tests)
- --skip-empty-scenario (all tests are xfail/skip)

Note: session.testscollected cannot be used here because in pytest 7.1.3
it is assigned on the line *after* the collection_finish hook fires in
Session.perform_collect. len(session.items) is the correct check.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@p-datadog p-datadog force-pushed the p-datadog/fail-fast-zero-tests branch from 0f88657 to 5e0353d on March 27, 2026 01:03