When performing weight-only quantization (e.g., NVFP4) on models with `QuantTEColumnParallelGroupedLinear` or other GroupedLinear modules, the quantization process fails to compute and store the `amax` values.
This is likely because GroupedLinear modules use multiple weight parameters (`weight0`, `weight1`, `weight2`, ...) instead of a single `weight` parameter, and share a single `weight_quantizer` across all of them.
The existing `weight_attr_names()` function cannot detect these numbered weight parameters, causing `weight_only_quantize()` to skip these modules entirely.
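A minimal sketch of the kind of fix this would need, assuming `weight_attr_names()` currently only looks for a literal `weight` attribute (the `DummyGroupedLinear` class and the exact matching logic below are hypothetical illustrations, not the real modelopt code):

```python
import re

# Numbered weights used by GroupedLinear modules: weight0, weight1, ...
_NUMBERED_WEIGHT = re.compile(r"^weight\d+$")

def weight_attr_names(module):
    """Return all weight attribute names on `module`, including the
    numbered weight0/weight1/... parameters of GroupedLinear modules.

    Sketch only: a real implementation would inspect the module's
    registered parameters rather than plain instance attributes.
    """
    names = [
        n for n in vars(module)
        if n == "weight" or _NUMBERED_WEIGHT.match(n)
    ]
    # Sort "weight" first, then numerically: (len, lexical) orders
    # weight2 before weight10 because shorter names sort first.
    return sorted(names, key=lambda n: (len(n), n))

# Hypothetical stand-in for a GroupedLinear module with N weight tensors.
class DummyGroupedLinear:
    def __init__(self, num_groups):
        for i in range(num_groups):
            setattr(self, f"weight{i}", object())

print(weight_attr_names(DummyGroupedLinear(3)))
# ['weight0', 'weight1', 'weight2']
```

With a detection step like this, `weight_only_quantize()` could iterate over every returned name and run each tensor through the module's shared `weight_quantizer`, so the `amax` statistics get computed across all grouped weights instead of the module being skipped.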