Conversation

@mxinO (Contributor) commented Jan 14, 2026

What does this PR do?

Type of change: improvement

Overview:

  • For quantization-aware reinforcement learning, after folding the rollout model's weights, we want to keep the quantization attributes for the next step.
  • Minor fix for handling an empty extra state.

Usage

mtq.fold_weight(keep_attrs=True) keeps the quantizer attributes after folding the weights.
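To illustrate what "keeping quantizer attrs after folding" means, here is a minimal, self-contained sketch. Note this uses mock classes, not the real modelopt API (the real call takes the model, e.g. mtq.fold_weight(model, keep_attrs=True)); the MockQuantizer/MockQuantLinear names and the amax attribute are illustrative assumptions.

```python
# Illustrative sketch only -- mock objects, NOT the real modelopt API.
# Assumption: a quantized linear layer holds a weight_quantizer whose
# calibration attributes (e.g. amax) should survive weight folding so
# the next QRL rollout step can reuse them.

class MockQuantizer:
    def __init__(self, amax):
        self.amax = amax  # calibration attribute we want to preserve

    def quantize(self, w):
        # Stand-in "quantization": clamp the weight to [-amax, amax].
        return max(-self.amax, min(self.amax, w))


class MockQuantLinear:
    def __init__(self, weight, amax):
        self.weight = weight
        self.weight_quantizer = MockQuantizer(amax)


def fold_weight(module, keep_attrs=False):
    """Bake the quantized weight into module.weight.

    With keep_attrs=True (the behavior this PR adds), the quantizer and
    its attributes are kept for the next step; otherwise the quantizer
    is dropped along with its calibration state.
    """
    module.weight = module.weight_quantizer.quantize(module.weight)
    if not keep_attrs:
        module.weight_quantizer = None
    return module


layer = MockQuantLinear(weight=3.0, amax=2.0)
fold_weight(layer, keep_attrs=True)
print(layer.weight)                 # 2.0 -- folded (clamped) weight
print(layer.weight_quantizer.amax)  # 2.0 -- attrs kept for next step
```

Without keep_attrs=True the quantizer (and its amax) would be gone after folding, so a subsequent quantization-aware step would have to re-calibrate from scratch.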

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes
  • Did you write any new necessary tests?: NA
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: No

Additional Information

Signed-off-by: Meng Xin <mxin@nvidia.com>

copy-pr-bot bot commented Jan 14, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.


coderabbitai bot commented Jan 14, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.




codecov bot commented Jan 14, 2026

Codecov Report

❌ Patch coverage is 20.00000% with 12 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.22%. Comparing base (18d9b1e) to head (0552699).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
...lopt/torch/quantization/nn/modules/quant_linear.py 14.28% 6 Missing ⚠️
...lopt/torch/quantization/nn/modules/quant_module.py 16.66% 5 Missing ⚠️
modelopt/torch/quantization/model_quant.py 50.00% 1 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main     #779   +/-   ##
=======================================
  Coverage   74.22%   74.22%           
=======================================
  Files         192      192           
  Lines       19027    19035    +8     
=======================================
+ Hits        14123    14129    +6     
- Misses       4904     4906    +2     

☔ View full report in Codecov by Sentry.
