[TRTLLM-10030][perf] beam search (remove GPU sync + fix batching + refactor) #11276

Merged: ixlmar merged 3 commits into NVIDIA:main from ixlmar:perf/beam-search on Feb 5, 2026

Conversation

@ixlmar commented Feb 4, 2026:

Description

In developer testing, this improves TorchSampler beam search iteration time from 20% slower than TRTLLMSampler to 14% faster.

Detailed changes:

  • fix batching of beam searches (grouping per beam width; see the sketch after this list)
  • fix various typing issues (and silence issues in other parts of the sampling code)
  • avoid a redundant H2D/D2H round trip and the associated stream syncs in sample batching
  • add StrategyImpl.get_temperature
  • unpack sampling batch components by name (not by position)
  • remove the no-longer-needed return_probs argument from strategy_grouping_key()
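
A minimal sketch of the per-beam-width batching fix, assuming hypothetical request objects with strategy and beam_width attributes (these names are illustrative, not the actual TorchSampler data structures):

```python
# Hypothetical illustration: requests are only batched together when
# they share both the sampling strategy key and the beam width, so a
# batched beam-search call sees a uniform beam width.
from collections import defaultdict
from typing import Any, Hashable

def group_for_beam_search(requests: list[Any]) -> dict[Hashable, list[Any]]:
    """Group requests so each batch shares a strategy and beam width."""
    groups: dict[Hashable, list[Any]] = defaultdict(list)
    for request in requests:
        # The composite key is hashable, so requests with different
        # beam widths never end up in the same sampling batch.
        groups[(request.strategy, request.beam_width)].append(request)
    return dict(groups)
```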

Test Coverage

n/a

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
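
For example, a run that disables fail-fast and restricts testing to a single stage (reusing the stage name from the examples above; the combination is illustrative) would be issued as a PR comment:

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"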

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since skipping tests without due care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since reusing results without due care and validation can break top of tree.

Summary by CodeRabbit

  • Refactor
    • Improved type system constraints and validation across sampling utilities for enhanced code reliability.
    • Refined API signatures for strategy sampling methods to strengthen type safety.
    • Enhanced optional field handling in beam search structures for better flexibility.
    • Removed redundant public methods from data classes to streamline public interfaces.

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
@ixlmar commented Feb 4, 2026:

/bot run

@tensorrt-cicd:

PR_Github #34760 [ run ] triggered by Bot. Commit: b34d8a4

@ixlmar commented Feb 4, 2026:

/bot run

@tensorrt-cicd:

PR_Github #34777 [ run ] triggered by Bot. Commit: b34d8a4

@tensorrt-cicd:

PR_Github #34777 [ run ] completed with state FAILURE. Commit: b34d8a4
/LLM/main/L0_MergeRequest_PR pipeline #26834 completed with status: 'ABORTED'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@ixlmar commented Feb 4, 2026:

/bot run

@tensorrt-cicd:

PR_Github #34781 [ run ] triggered by Bot. Commit: b34d8a4

@tensorrt-cicd:

PR_Github #34781 [ run ] completed with state DISABLED
CI server is currently disabled for unplanned maintenance. Estimated completion time: 8 AM PST on 1/30.

Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
@ixlmar commented Feb 4, 2026:

/bot run

@tensorrt-cicd:

PR_Github #34789 [ run ] triggered by Bot. Commit: c6ae779

@tensorrt-cicd:

PR_Github #34789 [ run ] completed with state SUCCESS. Commit: c6ae779
/LLM/main/L0_MergeRequest_PR pipeline #26835 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@ixlmar commented Feb 5, 2026:

/bot run --disable-fail-fast

@tensorrt-cicd:

PR_Github #34941 [ run ] triggered by Bot. Commit: c6ae779

@tensorrt-cicd:

PR_Github #34941 [ run ] completed with state FAILURE. Commit: c6ae779

@ixlmar commented Feb 5, 2026:

/bot run --disable-fail-fast

@tensorrt-cicd:

PR_Github #34942 [ run ] triggered by Bot. Commit: c6ae779

@ixlmar ixlmar requested review from Funatiq and stnie February 5, 2026 10:34
@ixlmar ixlmar marked this pull request as ready for review February 5, 2026 10:35
@ixlmar ixlmar requested a review from a team as a code owner February 5, 2026 10:35
@coderabbitai bot commented Feb 5, 2026:

📝 Walkthrough

This pull request refactors the sampling infrastructure to improve type safety and simplify strategy grouping. Key changes include removing the generator parameter from beam search, introducing Hashable-constrained strategy keys, adding temperature retrieval to strategies, reworking grouped strategy sampling logic, and expanding optional field handling across sampling state dataclasses.

Changes

  • Core Sampler Typing & State — tensorrt_llm/_torch/pyexecutor/sampler.py: Extensive typing refactoring: removed __iter__ and __len__ from public dataclasses (RequestGroupValue, RequestGroupValueWithMetadata), converted fields to Optional types, added type: ignore annotations for compatibility, updated tensor shapes to tuples, and introduced runtime assertions for non-None buffers in beam-search paths.
  • Sampling Interface Updates — tensorrt_llm/_torch/pyexecutor/sampling_utils.py: Constrained GenericStrategyKeyType to a Hashable bound, removed the generator parameter from beam_search_sampling_batch, simplified the strategy_grouping_key signature (dropped the return_probs parameter), and updated the sample_grouped_strategies return type to include an optional temperature.
  • FlashInfer Strategy Grouping — tensorrt_llm/_torch/pyexecutor/sampling_utils_flashinfer.py: Introduced the _STRATEGY_KEY_TYPE alias, added a get_temperature method to StrategyImpl, converted BeamSearchMixin to accept int beam widths instead of tensors, removed the generator parameter, and reworked strategy dispatch to use pattern matching instead of dynamic class specialization.
  • Test Updates — tests/unittest/_torch/sampler/test_beam_search.py, tests/unittest/_torch/sampler/test_torch_sampler.py: Removed the generator argument from the beam_search_sampling_batch call and updated flashinfer key tracking to store (group_key, return_probs) tuples.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks (2 passed, 1 failed)

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 30.95%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2)
  • Title check — ✅ Passed: The title clearly summarizes the main changes: beam search optimization involving GPU sync removal, batching fixes, and refactoring. It directly relates to the primary focus of the PR.
  • Description check — ✅ Passed: The PR description includes a clear summary of changes and performance improvements, but lacks details on specific test coverage and verification. The description explains what and why, though test coverage is marked as n/a.


@coderabbitai bot left a comment:

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In tensorrt_llm/_torch/pyexecutor/sampler.py, around lines 1530-1532: setup_sampler_step has the @override decorator applied twice. Remove the redundant decorator so that a single @override precedes the def setup_sampler_step(self, scheduled_requests: ScheduledRequests): declaration, leaving the rest of the method intact and the method signature and behavior unchanged.
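
A minimal sketch of the corrected form (the class hierarchy here is illustrative, not the actual sampler classes):

```python
from typing_extensions import override

class SamplerBase:
    def setup_sampler_step(self, scheduled_requests): ...

class MySampler(SamplerBase):
    @override  # a single decorator suffices; the duplicate was redundant
    def setup_sampler_step(self, scheduled_requests): ...
```
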
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/sampler.py (1)

1577-1583: Assertions added for beam search buffer initialization.

The assertions ensure non-None values before accessing beam search buffers. While these add safety, note that per the codebase learnings, "performance is prioritized over additional validation checks." These assertions will run every iteration during beam search preparation.

Consider whether these could be replaced with a single early check that _use_beam_search implies all buffers are initialized, or moved to _create_store validation.
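
A hedged sketch of the suggested consolidation, with assumed buffer names (cum_log_probs, beam_indices, sequence_lengths are illustrative; the actual store fields may differ):

```python
def validate_beam_search_store(store) -> None:
    # Run once when the store is created, instead of asserting each
    # buffer on every sampler iteration: beam search being enabled
    # implies all beam-search buffers are initialized.
    if not store.use_beam_search:
        return
    missing = [
        name
        for name in ("cum_log_probs", "beam_indices", "sequence_lengths")
        if getattr(store, name) is None
    ]
    if missing:
        raise RuntimeError(f"beam search buffers not initialized: {missing}")
```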

@ixlmar commented Feb 5, 2026:

/bot run --disable-fail-fast

1 similar comment

@tensorrt-cicd:

PR_Github #34967 [ run ] triggered by Bot. Commit: c6ae779

@tensorrt-cicd:

PR_Github #34967 [ run ] completed with state SUCCESS. Commit: c6ae779
/LLM/main/L0_MergeRequest_PR pipeline #26977 completed with status: 'SUCCESS'

@ixlmar ixlmar merged commit 719e82c into NVIDIA:main Feb 5, 2026
7 checks passed
@ixlmar ixlmar deleted the perf/beam-search branch February 5, 2026 14:33