
[https://nvbugs/5962106][fix] Exclude NCCL_SYMMETRIC from allreduce auto-tuner tactics#12709

Open
nv-lschneider wants to merge 1 commit into NVIDIA:main from nv-lschneider:disable-nccl-symmetri-auto-ar

Conversation

@nv-lschneider
Collaborator

@nv-lschneider nv-lschneider commented Apr 2, 2026

Summary by CodeRabbit

  • Performance Optimization
    • Refined auto-tuning strategies for distributed collective operations to improve selection efficiency during inference and training workloads.

Description

This removes NCCL_SYMMETRIC from the auto-tuned tactics.
On some CPU-starved platforms, the auto-tuner is unable to evaluate the memcpy + kernel workload for NCCL_SYMMETRIC.
This is a temporary change until #11589, which removes the two-part NCCL_SYMMETRIC execution, can be merged.

See the linked bug for details.
This may need to be cherry-picked into release 1.3.

Test Coverage

No specific test necessary.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: Ludwig Schneider <lschneider@nvidia.com>
@nv-lschneider nv-lschneider requested a review from a team as a code owner April 2, 2026 19:31
@nv-lschneider nv-lschneider requested a review from hyukn April 2, 2026 19:31
@coderabbitai
Contributor

coderabbitai bot commented Apr 2, 2026

📝 Walkthrough

Walkthrough

Modified AllReduceRunner.get_valid_tactics to exclude AllReduceStrategy.NCCL_SYMMETRIC from the initial candidate strategies set, while preserving fallback behavior that still selects this strategy when the autotuner cache misses.

Changes

Cohort / File(s) Summary
All-Reduce Strategy Selection
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py
Removed AllReduceStrategy.NCCL_SYMMETRIC from candidate strategies during tuning, keeping only NCCL with conditional additions of ONESHOT and TWOSHOT. Fallback path and buffer preallocation remain unchanged.
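The shape of the change can be sketched as follows. This is a hypothetical, simplified reconstruction, not the actual TensorRT-LLM source: the enum values, the `get_valid_tactics` signature, and the `max_fusion_size` threshold are all assumptions for illustration.

```python
# Hypothetical sketch of the described change -- NOT the real
# tensorrt_llm/_torch/custom_ops/torch_custom_ops.py code.
from enum import IntEnum


class AllReduceStrategy(IntEnum):
    # Illustrative values; only NCCL_SYMMETRIC == 8 is suggested by the review.
    NCCL = 0
    ONESHOT = 1
    TWOSHOT = 2
    NCCL_SYMMETRIC = 8


def get_valid_tactics(message_size: int, max_fusion_size: int) -> list[int]:
    """Candidate strategies offered to the auto-tuner.

    NCCL_SYMMETRIC is deliberately absent: on CPU-starved platforms the
    tuner cannot reliably evaluate its two-part memcpy + kernel execution.
    """
    strategies = [AllReduceStrategy.NCCL]
    # ONESHOT/TWOSHOT are only added conditionally; the size check here is
    # an illustrative stand-in for the real condition.
    if message_size <= max_fusion_size:
        strategies.append(AllReduceStrategy.ONESHOT)
        strategies.append(AllReduceStrategy.TWOSHOT)
    return [int(s) for s in strategies]
```

Under this sketch, the tuner never profiles tactic 8, while the fallback path (discussed in the review below) can still select it on a cache miss.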

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: The title clearly and specifically summarizes the main change (excluding NCCL_SYMMETRIC from allreduce auto-tuner tactics) with proper formatting following the template.
  • Description check — ✅ Passed: The description explains the issue (auto-tuner unable to evaluate NCCL_SYMMETRIC on CPU-starved platforms) and the solution, though the PR checklist acknowledgment seems premature without full completion of all items.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tensorrt_llm/_torch/custom_ops/torch_custom_ops.py (1)

1805-1827: ⚠️ Potential issue | 🔴 Critical

Also clamp cache misses and persisted tactics to the new tactic set.

At Lines 1813-1819 you intentionally collapse the choices to [NCCL] for large workspaces, but Line 1856 still hard-codes NCCL_SYMMETRIC when tactic == -1. During preparation, Line 1841 will now skip the symmetric-buffer preallocation for that path as well. So an uncached shape can still automatically take the very tactic this change is trying to exclude, including the large-workspace case you call out as a hang risk. tensorrt_llm/_torch/autotuner.py:553-575 also reloads cached tactic IDs without revalidating them against get_valid_tactics(), so stale NCCL_SYMMETRIC entries bypass this filter too. Please make the miss path and cached-tactic validation derive from valid_strategies instead of hard-coding tactic 8 (for example, fall back with if tactic not in valid_tactics: tactic = valid_tactics[0]).

Also applies to: 1837-1856
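The clamping fallback the review suggests can be sketched as below. This is a hypothetical illustration under assumed names (`resolve_tactic`, `valid_tactics`), not the real autotuner API: it validates a persisted or miss-path tactic id against the current candidate set instead of hard-coding NCCL_SYMMETRIC (id 8).

```python
# Hypothetical sketch of the suggested fix -- names are assumptions,
# not the actual TensorRT-LLM autotuner interface.
def resolve_tactic(cached_tactic: int, valid_tactics: list[int]) -> int:
    """Clamp a cached or miss-path tactic id to the current valid set.

    A cache miss is conventionally signalled by -1; a stale cache may also
    hold an id (such as 8 / NCCL_SYMMETRIC) that is no longer offered.
    """
    if cached_tactic not in valid_tactics:
        # Miss or stale entry: fall back to the first still-valid tactic
        # rather than a hard-coded strategy id.
        return valid_tactics[0]
    return cached_tactic
```

Applying this in both the preparation/miss path and the cached-tactic reload path would make the exclusion effective for uncached shapes and persisted caches alike.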

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tensorrt_llm/_torch/custom_ops/torch_custom_ops.py` around lines 1805 - 1827,
The code narrows allowed strategies into valid_strategies but still hard-codes
NCCL_SYMMETRIC/tactic == -1 (and tactic id 8) on the miss path and when
reloading cached tactics, so update the miss and cache-reload paths to clamp
tactic choices to the computed valid_strategies: wherever you see the fallback
logic that sets tactic == -1 or uses literal id 8, replace it with a validation
like "valid_tactics = get_valid_tactics(...); if tactic not in valid_tactics:
tactic = valid_tactics[0]" (apply this in the preparation/miss path around the
tactic selection and in the cached-tactic reload logic in the autotuner),
ensuring both uncached misses and persisted cached IDs are validated against
valid_strategies before proceeding.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ae5cd3c7-1ba4-49ee-bee3-e22e3810e8f9

📥 Commits

Reviewing files that changed from the base of the PR and between 11c40bb and a39e45c.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/custom_ops/torch_custom_ops.py

@nv-lschneider
Collaborator Author

/bot run --add-multi-gpu --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41508 [ run ] triggered by Bot. Commit: a39e45c Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #41508 [ run ] completed with state SUCCESS. Commit: a39e45c
/LLM/main/L0_MergeRequest_PR pipeline #32425 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nv-lschneider
Collaborator Author

/bot run --add-multi-gpu --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #41552 [ run ] triggered by Bot. Commit: a39e45c Link to invocation

Collaborator

@hyukn hyukn left a comment


Thanks @nv-lschneider. We can bypass the issue from the bug for now, then re-enable the tactic after #11589 lands and resolves it.

@tensorrt-cicd
Collaborator

PR_Github #41552 [ run ] completed with state SUCCESS. Commit: a39e45c
/LLM/main/L0_MergeRequest_PR pipeline #32461 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation
