From ba70b4ac510e9aaf9500e8cdb14111b4eee99d83 Mon Sep 17 00:00:00 2001
From: Aamir Nazir
Date: Mon, 11 May 2026 17:11:05 +0400
Subject: [PATCH 1/5] Update openvino_quantizer.rst

---
 unstable_source/openvino_quantizer.rst | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/unstable_source/openvino_quantizer.rst b/unstable_source/openvino_quantizer.rst
index f8609d2a70d..59b89a0f030 100644
--- a/unstable_source/openvino_quantizer.rst
+++ b/unstable_source/openvino_quantizer.rst
@@ -118,7 +118,8 @@ After we capture the FX Module to be quantized, we will import the OpenVINOQuant
 
 .. code-block:: python
 
-   from nncf.experimental.torch.fx import OpenVINOQuantizer
+   from executorch.backends.openvino.quantizer import OpenVINOQuantizer
+   from executorch.backends.openvino.quantizer import QuantizationMode
 
    quantizer = OpenVINOQuantizer()
 
@@ -126,21 +127,20 @@ After we capture the FX Module to be quantized, we will import the OpenVINOQuant
 
 Below is the list of essential parameters and their description:
 
-* ``preset`` - defines quantization scheme for the model. Two types of presets are available:
+* ``mode`` - defines the quantization scheme for the model. Multiple modes are supported:
 
-  * ``PERFORMANCE`` (default) - defines symmetric quantization of weights and activations
+  * ``INT8_SYM`` (default) - defines symmetric quantization of weights and activations. This mode usually gives the best performance.
 
-  * ``MIXED`` - weights are quantized with symmetric quantization and the activations are quantized with asymmetric quantization. This preset is recommended for models with non-ReLU and asymmetric activation functions, e.g. ELU, PReLU, GELU, etc.
+  * ``INT8_MIXED`` - weights are quantized with symmetric quantization and the activations are quantized with asymmetric quantization. This mode is recommended for models with non-ReLU and asymmetric activation functions, e.g. ELU, PReLU, GELU, etc.
 
-    .. code-block:: python
-
-        OpenVINOQuantizer(preset=nncf.QuantizationPreset.MIXED)
+  * ``INT8_TRANSFORMER`` - a special quantization scheme to preserve accuracy after quantization of Transformer models (BERT, Llama, etc.).
 
-* ``model_type`` - used to specify quantization scheme required for specific type of the model. Transformer is the only supported special quantization scheme to preserve accuracy after quantization of Transformer models (BERT, Llama, etc.). None is default, i.e. no specific scheme is defined.
+  * ``INT8WO_SYM``, ``INT8WO_ASYM``, ``INT4WO_SYM``, ``INT4WO_ASYM`` - these are weights-only quantization schemes. They apply vanilla min-max quantization to model weights to INT8/INT4 with Symmetric and Asymmetric schemes.
 
   .. code-block:: python
 
-      OpenVINOQuantizer(model_type=nncf.ModelType.Transformer)
+      OpenVINOQuantizer(mode=QuantizationMode.INT8_SYM)
+
 * ``ignored_scope`` - this parameter can be used to exclude some layers from the quantization process to preserve the model accuracy. For example, when you want to exclude the last layer of the model from quantization. Below are some examples of how to use this parameter:

From d89f3e889f904d728351b9ef3f112d04d7bb92ca Mon Sep 17 00:00:00 2001
From: Aamir Nazir
Date: Mon, 11 May 2026 17:17:44 +0400
Subject: [PATCH 2/5] Update openvino_quantizer.rst

---
 unstable_source/openvino_quantizer.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/unstable_source/openvino_quantizer.rst b/unstable_source/openvino_quantizer.rst
index 59b89a0f030..52ba3110477 100644
--- a/unstable_source/openvino_quantizer.rst
+++ b/unstable_source/openvino_quantizer.rst
@@ -15,7 +15,7 @@ Introduction
 
 This is an experimental feature, the quantization API is subject to change.
 
-This tutorial demonstrates how to use ``OpenVINOQuantizer`` from `Neural Network Compression Framework (NNCF) `_ in PyTorch 2 Export Quantization flow to generate a quantized model customized for the `OpenVINO torch.compile backend `_ and explains how to lower the quantized model into the `OpenVINO `_ representation.
+This tutorial demonstrates how to use ``OpenVINOQuantizer`` from `Executorch `_ in PyTorch 2 Export Quantization flow to generate a quantized model customized for the `OpenVINO torch.compile backend `_ and explains how to lower the quantized model into the `OpenVINO `_ representation.
 
 ``OpenVINOQuantizer`` unlocks the full potential of low-precision OpenVINO kernels due to the placement of quantizers designed specifically for the OpenVINO.
 The PyTorch 2 export quantization flow uses ``torch.export`` to capture the model into a graph and performs quantization transformations on top of the ATen graph.
@@ -135,7 +135,7 @@ Below is the list of essential parameters and their description:
 
   * ``INT8_TRANSFORMER`` - a special quantization scheme to preserve accuracy after quantization of Transformer models (BERT, Llama, etc.).
 
-  * ``INT8WO_SYM``, ``INT8WO_ASYM``, ``INT4WO_SYM``, ``INT4WO_ASYM`` - these are weights-only quantization schemes. They apply vanilla min-max quantization to model weights to INT8/INT4 with Symmetric and Asymmetric schemes.
+  * ``INT8WO_SYM``, ``INT8WO_ASYM``, ``INT4WO_SYM``, ``INT4WO_ASYM`` - these are weights-only quantization schemes. They apply simple min-max quantization of model weights to INT8/INT4 with symmetric and asymmetric schemes.
 
   .. code-block:: python

From d652ee07f6bb7995a239524ffa37078863ae4c43 Mon Sep 17 00:00:00 2001
From: Aamir Nazir
Date: Mon, 11 May 2026 17:19:14 +0400
Subject: [PATCH 3/5] Update openvino_quantizer.rst

---
 unstable_source/openvino_quantizer.rst | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/unstable_source/openvino_quantizer.rst b/unstable_source/openvino_quantizer.rst
index 52ba3110477..7108d2e6231 100644
--- a/unstable_source/openvino_quantizer.rst
+++ b/unstable_source/openvino_quantizer.rst
@@ -165,12 +165,6 @@ Below is the list of essential parameters and their description:
 
           OpenVINOQuantizer(ignored_scope=nncf.IgnoredScope(subgraphs=[subgraph]))
 
-* ``target_device`` - defines the target device, the specificity of which will be taken into account during optimization. The following values are supported: ``ANY`` (default), ``CPU``, ``CPU_SPR``, ``GPU``, and ``NPU``.
-
-  .. code-block:: python
-
-      OpenVINOQuantizer(target_device=nncf.TargetDevice.CPU)
-
 For further details on `OpenVINOQuantizer` please see the `documentation `_.
 
 After we import the backend-specific Quantizer, we will prepare the model for post-training quantization.

From 695bd7de734d895e7c981c44be2348366df4fe37 Mon Sep 17 00:00:00 2001
From: Aamir Nazir
Date: Mon, 11 May 2026 17:21:03 +0400
Subject: [PATCH 4/5] update ovquantizer location in executorch

---
 unstable_source/openvino_quantizer.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/unstable_source/openvino_quantizer.rst b/unstable_source/openvino_quantizer.rst
index 7108d2e6231..ce3afebb33d 100644
--- a/unstable_source/openvino_quantizer.rst
+++ b/unstable_source/openvino_quantizer.rst
@@ -15,7 +15,7 @@ Introduction
 
 This is an experimental feature, the quantization API is subject to change.
 
-This tutorial demonstrates how to use ``OpenVINOQuantizer`` from `Executorch `_ in PyTorch 2 Export Quantization flow to generate a quantized model customized for the `OpenVINO torch.compile backend `_ and explains how to lower the quantized model into the `OpenVINO `_ representation.
+This tutorial demonstrates how to use ``OpenVINOQuantizer`` from `Executorch `_ in PyTorch 2 Export Quantization flow to generate a quantized model customized for the `OpenVINO torch.compile backend `_ and explains how to lower the quantized model into the `OpenVINO `_ representation.
 
 ``OpenVINOQuantizer`` unlocks the full potential of low-precision OpenVINO kernels due to the placement of quantizers designed specifically for the OpenVINO.
 The PyTorch 2 export quantization flow uses ``torch.export`` to capture the model into a graph and performs quantization transformations on top of the ATen graph.

From 6397dec631def3388cf13629078e5a0c23e598d9 Mon Sep 17 00:00:00 2001
From: Aamir Nazir
Date: Tue, 12 May 2026 16:27:15 +0400
Subject: [PATCH 5/5] Update openvino_quantizer.rst

---
 unstable_source/openvino_quantizer.rst | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/unstable_source/openvino_quantizer.rst b/unstable_source/openvino_quantizer.rst
index ce3afebb33d..9af50819754 100644
--- a/unstable_source/openvino_quantizer.rst
+++ b/unstable_source/openvino_quantizer.rst
@@ -211,9 +211,8 @@ This should significantly speed up inference time in comparison with the eager m
 4. Optional: Improve quantized model metrics
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-NNCF implements advanced quantization algorithms like `SmoothQuant `_ and `BiasCorrection `_, which help
-to improve the quantized model metrics while minimizing the output discrepancies between the original and compressed models.
-These advanced NNCF algorithms can be accessed via the NNCF `quantize_pt2e` API:
+NNCF implements advanced quantization algorithms like `SmoothQuant `_ and `BiasCorrection `_ for static activation and weight quantization. For weights-only quantization, there are the `AWQ <https://arxiv.org/abs/2306.00978>`_ and `Scale Estimation <https://github.com/openvinotoolkit/nncf/blob/develop/src/nncf/quantization/algorithms/weight_compression/scale_estimation.py>`_ algorithms. These techniques help improve the quantized model metrics while minimizing the output discrepancies between the original and compressed models.
+These advanced NNCF algorithms can be accessed via the NNCF `quantize_pt2e` API for static activation and weight quantization, or via `compress_pt2e` for weights-only quantization:
 
 .. code-block:: python
 
@@ -234,7 +233,7 @@
 
 For further details, please see the `documentation `_
-and a complete `example on Resnet18 quantization `_.
+and `examples with llama and stable_diffusion `_, as well as a `YoloV26 example with this API `_.
 
 Conclusion
 ------------
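The ``INT8_SYM`` / ``INT8_MIXED`` distinction introduced by these patches comes down to symmetric versus asymmetric min-max quantization of activations. A minimal plain-Python sketch of that difference (illustrative only; this is not the NNCF or ExecuTorch implementation, and the helper names are made up for this example):

```python
def quantize_symmetric_int8(values):
    """Symmetric min-max: zero-point is fixed at 0, scale covers max |value|."""
    amax = max(abs(v) for v in values)
    scale = amax / 127.0 if amax else 1.0
    return [max(-128, min(127, round(v / scale))) for v in values], scale

def quantize_asymmetric_int8(values):
    """Asymmetric min-max: a zero-point shifts the INT8 grid to cover [min, max]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 if hi != lo else 1.0
    zero_point = round(-lo / scale) - 128
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values], scale, zero_point

def dequantize(q, scale, zero_point=0):
    return [(x - zero_point) * scale for x in q]

# A skewed, GELU-like activation range wastes half of the symmetric grid:
acts = [-0.1, 0.0, 0.5, 1.0, 2.0]
q_sym, s_sym = quantize_symmetric_int8(acts)
q_asym, s_asym, zp = quantize_asymmetric_int8(acts)
err_sym = max(abs(a - d) for a, d in zip(acts, dequantize(q_sym, s_sym)))
err_asym = max(abs(a - d) for a, d in zip(acts, dequantize(q_asym, s_asym, zp)))
```

For a skewed range like this one, the asymmetric scheme spends all 256 levels on the observed ``[min, max]`` interval instead of an interval symmetric around zero, so its round-trip error is smaller; that is the rationale behind recommending ``INT8_MIXED`` for models with ELU/PReLU/GELU-style activations.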
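Similarly, the ``INT8WO_SYM``/``INT4WO_SYM`` weights-only modes round only the weight tensor onto an integer grid while activations stay in floating point. A hypothetical sketch of symmetric INT4 weights-only min-max quantization (a conceptual illustration, not the actual ExecuTorch/OpenVINO kernels):

```python
def quantize_weights_int4_sym(weights):
    """Symmetric INT4 min-max: weights are rounded onto the 16-level [-8, 7] grid."""
    amax = max(abs(w) for w in weights)
    scale = amax / 7.0 if amax else 1.0  # zero-point is 0 in the symmetric scheme
    return [max(-8, min(7, round(w / scale))) for w in weights], scale

def int4_matvec(q_weights, scale, activations):
    """Weights-only: weights are dequantized on the fly, activations stay float."""
    return sum(q * scale * a for q, a in zip(q_weights, activations))

w = [0.7, -0.3, 0.1, 0.02]   # one "row" of a weight matrix
x = [1.0, 2.0, 3.0, 4.0]     # float activations are not quantized
qw, s = quantize_weights_int4_sym(w)
approx = int4_matvec(qw, s, x)
exact = sum(wi * xi for wi, xi in zip(w, x))
```

With only 16 levels per weight, the rounding error per dot product is visibly larger than in the INT8 case, which is why the patches pair the INT4 modes with accuracy-recovery algorithms such as AWQ and Scale Estimation.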