
[https://nvbugs/5814504][fix] Add skip_pre_hopper flag on NVILA & Nano V2 VLMs#11275

Merged
litaotju merged 1 commit into NVIDIA:release/1.2 from yechank-nvidia:fix_prehopper_test
Feb 10, 2026

Conversation


yechank-nvidia (Collaborator) commented Feb 4, 2026

Summary by CodeRabbit

  • Tests
    • Updated test configurations for multimodal LLM functionality to optimize execution across different hardware environments.

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
yechank-nvidia self-assigned this Feb 4, 2026
yechank-nvidia requested a review from a team as a code owner February 4, 2026 10:29
yechank-nvidia changed the title [https://nvbugs/5845769][fix] Add skip_pre_hopper flag on NVILA & Nano V2 VLMs → [https://nvbugs/5814504][fix] Add skip_pre_hopper flag on NVILA & Nano V2 VLMs Feb 4, 2026
yechank-nvidia (Collaborator, Author):

/bot run


coderabbitai bot commented Feb 4, 2026

📝 Walkthrough

Two test classes in the multimodal LLM API test suite are annotated with the @skip_pre_hopper decorator so that they are skipped on pre-Hopper GPU architectures. No test logic or functionality is modified.

Changes

  • Test Skipping Configuration — tests/integration/defs/accuracy/test_llm_api_pytorch_multimodal.py
    Added the @skip_pre_hopper decorator to the TestNVILA_8B and TestNemotron_Nano_12B_V2_VL test classes so they are skipped in pre-Hopper environments.
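The class-level decorator usage can be sketched as follows. This is a minimal stand-in, not the repository's actual implementation: the real skip_pre_hopper marker lives in the TensorRT-LLM test utilities, and get_sm_version here is a hypothetical placeholder for a GPU compute-capability query.

```python
import unittest

HOPPER_SM = 90  # Hopper GPUs report compute capability 9.0 (SM90)


def get_sm_version():
    # Hypothetical placeholder: the real helper would query the GPU.
    # Pretend we are on an Ampere (SM86), i.e. pre-Hopper, part.
    return 86


# Stand-in for the repo's skip_pre_hopper marker: skip the decorated
# tests when the GPU predates the Hopper architecture.
skip_pre_hopper = unittest.skipIf(
    get_sm_version() < HOPPER_SM,
    "Requires Hopper (SM90) or newer GPU",
)


@skip_pre_hopper
class TestNVILA_8B(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)


@skip_pre_hopper
class TestNemotron_Nano_12B_V2_VL(unittest.TestCase):
    def test_placeholder(self):
        self.assertTrue(True)
```

Because the decorator is applied at class level, every test method in both classes is skipped on pre-Hopper hardware without touching the test bodies themselves, which matches the "no test logic modified" summary above.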

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Description check ⚠️ Warning — The PR description is entirely missing; only the placeholder '@coderabbitai summary' is present, with no explanation of the change, rationale, or test coverage details. Resolution: add a complete description explaining what the skip_pre_hopper flag does, why these specific tests need it, and confirm that test coverage is adequate for the changes.

✅ Passed checks (2 passed)
  • Docstring Coverage ✅ Passed — No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.
  • Title check ✅ Passed — The PR title accurately describes the main change: adding @skip_pre_hopper decorators to two test classes (NVILA_8B and Nemotron_Nano_12B_V2_VL).



coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/accuracy/test_llm_api_pytorch_multimodal.py (1)

1-1: ⚠️ Potential issue | 🟠 Major

Add the required NVIDIA copyright header.

Line 1: this file has been modified but still lacks the required NVIDIA copyright header with the latest modification year. Please add the repo-standard header (updated to 2026) at the top of the file. As per coding guidelines, “All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification”.
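For reference, an NVIDIA copyright header of the general shape the guideline asks for might look like the sketch below; the exact repo-standard wording and the license identifier are assumptions here, not taken from the PR.

```python
# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
# SPDX-License-Identifier: Apache-2.0
```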

tensorrt-cicd (Collaborator):

PR_Github #34759 [ run ] triggered by Bot. Commit: f098163

tensorrt-cicd (Collaborator):

PR_Github #34790 [ run ] triggered by Bot. Commit: f098163

tensorrt-cicd (Collaborator):

PR_Github #34790 [ run ] completed with state SUCCESS. Commit: f098163
/LLM/release-1.2/L0_MergeRequest_PR pipeline #319 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

yechank-nvidia (Collaborator, Author):

/bot run

tensorrt-cicd (Collaborator):

PR_Github #34862 [ run ] triggered by Bot. Commit: f098163

tensorrt-cicd (Collaborator):

PR_Github #34862 [ run ] completed with state SUCCESS. Commit: f098163
/LLM/release-1.2/L0_MergeRequest_PR pipeline #326 completed with status: 'SUCCESS'

yechank-nvidia added the Multimodal label (for issues & PRs regarding Multimodal related objects) Feb 5, 2026
litaotju merged commit daabeab into NVIDIA:release/1.2 Feb 10, 2026
9 checks passed

Labels

Multimodal Label for issues & PRs regarding Multimodal related objects
