[None][doc] add attention developer guide #12693

Open

QiJune wants to merge 1 commit into NVIDIA:main from QiJune:attention

Conversation

@QiJune (Collaborator) commented Apr 2, 2026

Summary by CodeRabbit

  • Documentation
    • Added comprehensive developer guide documenting the attention stack architecture, including module organization, backend selection, metadata contracts, and KV-cache behavior.
    • Updated documentation index with reference to the new guide.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

To see a list of available CI bot commands, please comment /bot help.

Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
@QiJune QiJune requested review from a team as code owners April 2, 2026 12:36
@QiJune QiJune requested review from kaiyux and lfr-0531 April 2, 2026 12:37
@coderabbitai bot (Contributor) commented Apr 2, 2026

📝 Walkthrough

Documentation-only changes adding a new comprehensive developer guide for TRT-LLM's PyTorch attention module stack and updating the AGENTS.md reference index. The guide covers attention architecture, backend families, metadata contracts, KV-cache semantics, and testing guidelines.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Documentation Index**<br>`AGENTS.md` | Added a reference row pointing to the new attention developer guide with a high-level description of the covered topics. |
| **New Attention Developer Guide**<br>`tensorrt_llm/_torch/modules/ATTENTION_DEVELOPER_GUIDE.md` | Comprehensive new documentation covering attention module architecture, backend selection logic, metadata/runtime contracts, KV-cache ownership and decode semantics, MLA dispatch control flow, backend families and sparse registrations, testing pitfalls, anti-patterns, and evaluating new attention implementations. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~5 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description check | ⚠️ Warning | The PR description only contains the repository's template with no actual content filled in; all required sections (Description, Test Coverage) are empty and the checklist is incomplete. | Fill in the Description section explaining the purpose of the attention developer guide, and the Test Coverage section noting that documentation-only changes don't require code tests. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title clearly and concisely summarizes the main change: adding an attention developer guide documentation file. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tensorrt_llm/_torch/modules/ATTENTION_DEVELOPER_GUIDE.md (1)

285-285: Reword “needs paged KV” to standard technical English.

At Line 285, consider rephrasing to “needs paged-KV support” or “requires paged KV” for clearer phrasing.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a48b0f0f-ab23-48c6-b681-4097f0d22b92

📥 Commits

Reviewing files that changed from the base of the PR and between dbb1c8c and 81c61d9.

📒 Files selected for processing (2)
  • AGENTS.md
  • tensorrt_llm/_torch/modules/ATTENTION_DEVELOPER_GUIDE.md


`torch.compile` path:

- the compiled path may use custom-op based execution paths
Contributor (coderabbitai bot):

⚠️ Potential issue | 🟡 Minor

Fix compound-modifier hyphenation for consistency.

Please hyphenate these phrases for technical-doc readability:

  • Line 164: "custom-op based" → "custom-op-based"
  • Line 271: "cross attention" → "cross-attention"
  • Line 325: "KT-cache related" → "KT-cache-related"

Also applies to: 271-271, 325-325

🧰 Tools
🪛 LanguageTool

[grammar] ~164-~164: Use a hyphen to join words.
Context: ...: - the compiled path may use custom-op based execution paths - under `torch.com...

(QB_NEW_EN_HYPHEN)


| `tensorrt_llm/executor/executor.py` | Execution abstraction (`GenerationExecutor`) |
| `tensorrt_llm/models/automodel.py` | Auto-discovery and model registry |
| `tensorrt_llm/_torch/models/` | PyTorch backend model implementations (distinct from `models/` used by TensorRT backend) |
| `tensorrt_llm/_torch/modules/ATTENTION_DEVELOPER_GUIDE.md` | Attention, MLA, backend families, sparse backends, metadata contracts, and KV-cache behavior - **read before modifying `tensorrt_llm/_torch/modules/attention.py` or `_torch/attention_backend/`** |
@yuxianq (Collaborator) commented Apr 3, 2026

Should we use `tensorrt_llm/_torch/attention_backend/` instead of `_torch/attention_backend/`?


- `is_lite` changes the projection structure, not just a small code path.
- `self.is_dsa == True` means the DSA path is active.
- `self.mqa` is the sparse DSA backend.
Collaborator:

`self.mqa` is not just the sparse DSA backend; it is also the dense absorption generation backend and the DSA indexer backend.
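
For readers skimming the thread, a purely illustrative sketch of the multi-role pattern described above: a single backend handle reused for sparse DSA attention, dense absorption generation, and DSA indexing. Every name and the selection logic below are hypothetical; the real wiring lives in `tensorrt_llm/_torch/modules/attention.py`.

```python
from enum import Enum, auto


class MqaRole(Enum):
    """Roles the single `self.mqa` backend is said to serve (per the comment above)."""
    SPARSE_DSA = auto()            # sparse DSA attention
    DENSE_ABSORPTION_GEN = auto()  # dense absorption path during generation
    DSA_INDEXER = auto()           # indexer feeding DSA's sparse selection


def pick_mqa_role(is_dsa: bool, is_generation: bool, indexing: bool) -> MqaRole:
    # Hypothetical flag-based selection; the flags mirror names from the snippet
    # above (`self.is_dsa`), but the branching here is an illustration only.
    if is_dsa and indexing:
        return MqaRole.DSA_INDEXER
    if is_dsa:
        return MqaRole.SPARSE_DSA
    if is_generation:
        return MqaRole.DENSE_ABSORPTION_GEN
    raise ValueError("dense prefill does not route through the mqa backend in this sketch")


print(pick_mqa_role(is_dsa=False, is_generation=True, indexing=False))
# MqaRole.DENSE_ABSORPTION_GEN
```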

`torch.compile` path:

- the compiled path may use custom-op based execution paths
- under `torch.compile`, `_should_use_short_mha()` returns `False`, so the
Collaborator:

Disabling short MHA under `torch.compile` is a workaround, not by design. Should we record the workaround in the guide?
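
If the workaround does get written down, a minimal sketch of the guard being discussed might look like the following. Only the `torch.compile` bail-out reflects the guide; the function name (underscore dropped), the token threshold, and the use of `torch.compiler.is_compiling()` are assumptions.

```python
import torch


def should_use_short_mha(num_tokens: int, short_mha_threshold: int = 256) -> bool:
    """Hypothetical stand-in for `_should_use_short_mha()`."""
    # Workaround, not by design: skip the short-MHA fast path whenever
    # torch.compile is tracing, so the compiled graph stays on a single code path.
    if torch.compiler.is_compiling():
        return False
    return num_tokens <= short_mha_threshold
```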


### 1.3 MLA dispatch reference

For DSA-style MLA models, the dispatch is:
Collaborator:

Should we also add the dispatch for normal MLA models, e.g., covering absorption and chunked prefill?

`TrtllmAttention` can dispatch to `trtllm_gen.py` for supported dense cases.
That is an internal fast path, not a separate top-level backend selection.

It only applies to a subset of dense cases. If it does not apply,
Collaborator:

Dispatch is also gated by `TRTLLM_ENABLE_TRTLLM_GEN_ATTENTION`, which is disabled by default, so no case is dispatched to `trtllm_gen.py` by default.
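
For readers of the thread, a minimal sketch of flipping that flag on; only the variable name comes from the comment above, while the enabling value and the point at which it is read are assumptions.

```python
import os

# Assumption: the flag must be set before the attention backend is constructed,
# and "1" is assumed to be the enabling value.
os.environ["TRTLLM_ENABLE_TRTLLM_GEN_ATTENTION"] = "1"
```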
