fix : unwaive skipped/special TRT-RTX tests#4156

Open
tp5uiuc wants to merge 2 commits into pytorch:main from tp5uiuc:dev/tejaswinp/unwaive-trtrtx-test
Conversation


@tp5uiuc tp5uiuc commented Mar 31, 2026

Description

This change unwaives TensorRT-RTX behavior that was previously skipped or hard-blocked:

  • quantize.py: Drops `validate_int8_activation_quantization`, which raised `NotImplementedError` on RTX for INT8 activation quantization (`.input_quantizer`). Also removes the now-unused `ENABLED_FEATURES` import and the validation call from the quantize conversion path, so RTX uses the same INT8 activation path as other backends (runtime support landed in TRT-RTX 1.4).
  • test_models_export.py: Removes the RTX-only `pytest.skip` for bfloat16 in `test_base_int8` and `test_base_int8_dynamic_shape`, and removes the RTX workaround that disabled `*input_quantizer` under ModelOpt's default INT8 config, so these tests now exercise the full default INT8 path (weights + activations) on RTX. This is supported as of TRT-RTX 1.4.
  • test_004_weight_streaming.py: Removes the `@unittest.skipIf` that skipped a test when `tensorrt_rtx` was enabled (previously attributed to a cudagraphs issue).

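For context, the waivers removed here follow the standard `unittest`/`pytest` skip patterns. A minimal sketch of the decorator-style waiver (the helper `is_tensorrt_rtx` is illustrative, not the repository's actual backend check):

```python
import unittest

def is_tensorrt_rtx():
    # Stand-in for the real TRT-RTX backend check (assumption for illustration).
    return False

class WeightStreamingTest(unittest.TestCase):
    # Before this PR, a decorator like the one below skipped the test whenever
    # TRT-RTX was active; the PR deletes the decorator so the test always runs.
    # (test_models_export.py used the in-body analog, `pytest.skip(...)`.)
    @unittest.skipIf(is_tensorrt_rtx(), "cudagraphs issue on TRT-RTX")
    def test_weight_streaming(self):
        self.assertTrue(True)  # placeholder body; the real test streams weights
```

With the decorator removed (or the condition false), the test is collected and executed on RTX just like on other backends.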
Motivation: Align RTX with non-RTX coverage by removing waivers now that the underlying limitations or bugs are considered resolved.

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

Signed-off-by: tejaswinp <tejaswinp@nvidia.com>
@meta-cla meta-cla bot added the cla signed label Mar 31, 2026
@github-actions github-actions bot added component: tests Issues re: Tests component: conversion Issues re: Conversion stage component: core Issues re: The core compiler component: converters Issues re: Specific op converters component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Mar 31, 2026
@github-actions github-actions bot requested a review from cehongwang March 31, 2026 20:50
@narendasan narendasan requested a review from lanluo-nvidia April 2, 2026 21:15
@github-actions github-actions bot added the component: build system Issues re: Build system label Apr 3, 2026
Signed-off-by: tejaswinp <tejaswinp@nvidia.com>

fix : tmpl links

Signed-off-by: tejaswinp <tejaswinp@nvidia.com>
@tp5uiuc tp5uiuc force-pushed the dev/tejaswinp/unwaive-trtrtx-test branch from b9ddf76 to 59ff9ce Compare April 3, 2026 18:39