Add static per block MSE for NVFP4 weight #613
Conversation
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Codecov Report
❌ Patch coverage is

Additional details and impacted files:

@@           Coverage Diff            @@
##             main     #613    +/-   ##
==========================================
+ Coverage   74.57%   74.64%   +0.07%
==========================================
  Files         183      192       +9
  Lines       18412    19027     +615
==========================================
+ Hits        13730    14202     +472
- Misses       4682     4825     +143

☔ View full report in Codecov by Sentry.
…nce; quant scale to FP8; rename static kernel Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
can we add step_size as an mse_calib argument as well?
…nel launch func Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
…calibrate Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
realAsma
left a comment
Looks great!!
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
## What does this PR do?
**Type of change:** new feature

**Overview:**
Support static block-wise MSE calibration for NVFP4 weight quantization.
Add an FP4 Triton kernel that takes per-block scales as input; the kernel also
quantizes the block scales to FP8.
This PR does the following:
1. Enable a static NVFP4 implementation, i.e. block scales for weights are
calculated during calibration and fed into the fake-quant kernels.
2. Extend mse_calibrate to support static NVFP4, with block scales searched by
MSE and the global scale set to MAX (see the sketch after this list).
3. Refinements: calibrate weight quantizers only once during MSE calibration.
## Usage
<!-- You can potentially add a usage example below. -->
Example config:
```python
NVFP4_WEIGHT_MSE_CFG = {
"quant_cfg": {
"*weight_quantizer": {
"num_bits": (2, 1),
"block_sizes": {-1: 16, "type": "static", "scale_bits": (4, 3)},
"axis": None,
"enable": True,
},
"*input_quantizer": {
"enable": False,
},
**_default_disabled_quantizer_cfg,
},
"algorithm": {
"method": "mse",
"step_size": 0.25,
"start_multiplier": 0.25,
"stop_multiplier": 2.0,
},
}
NVFP4_WEIGHT_ACT_MSE_CFG = {
"quant_cfg": {
"*weight_quantizer": {
"num_bits": (2, 1),
"block_sizes": {-1: 16, "type": "static", "scale_bits": (4, 3)},
"axis": None,
"enable": True,
},
"*input_quantizer": {
"num_bits": (2, 1),
"block_sizes": {-1: 16, "type": "dynamic", "scale_bits": (4, 3)},
"axis": None,
"enable": True,
},
**_default_disabled_quantizer_cfg,
},
"algorithm": {
"method": "mse",
"step_size": 0.25,
"start_multiplier": 0.25,
"stop_multiplier": 2.0,
},
}
```
## Testing
<!-- Mention how have you tested your change if applicable. -->
## Before your PR is "*Ready for review*"
<!-- If you haven't finished some of the above items you can still open
`Draft` PR. -->
- **Make sure you read and follow [Contributor
guidelines](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CONTRIBUTING.md)**
and your commits are signed.
- **Is this change backward compatible?**: Yes/No <!--- If No, explain
why. -->
- **Did you write any new necessary tests?**: Yes/No
- **Did you add or update any necessary documentation?**: Yes/No
- **Did you update
[Changelog](https://github.com/NVIDIA/TensorRT-Model-Optimizer/blob/main/CHANGELOG.rst)?**:
Yes/No <!--- Only for new features, API changes, critical bug fixes or
bw breaking changes. -->
## Additional Information
<!-- E.g. related issue. -->
---------
Signed-off-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Jingyu Xin <jingyux@nvidia.com>