Conversation

@daniil-lyakhov commented Jul 21, 2025

Context

The torch.ao directory is being moved to a separate repo, torchao, and the legacy torch.ao implementation was deprecated in the latest release of PyTorch (see details here)

The solution on our side is to:

  • Deprecate OpenVINOQuantizer in nncf, leaving only the ExecuTorch implementation
  • Eventually remove all dependencies on torch.ao from the nncf.quantize TorchFX backend
  • Introduce a torchao dependency for the quantize_pt2e API, or remove all dependencies on torch.ao from quantize_pt2e and torch_ao_adapter as well

This PR does not achieve the full goal, but it takes the necessary first steps toward it.

Changes

  • OpenVINOQuantizer, TorchAOQuantizerAdapter, and quantize_pt2e now use torchao classes whenever possible, via a conditional import (see the first sketch after this list)
  • torch_fx_MinMaxBackend and the TorchFX transformations no longer use the torch.ao FakeQuantize class. Instead, a TorchQDQParameters structure is introduced in src/nncf/experimental/torch/fx/quantization/qdq_parameters.py (see the second sketch after this list)
  • The TorchFX transformations.py dependency on torch.ao is resolved (by moving the _fuse_conv_bn_ import to other files and moving the create_getattr_from_value function to the nncf transformations.py file)
  • XNNPACKQuantizer is removed from the tests, as the actual torchao implementation has moved to ExecuTorch
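
A minimal sketch of the conditional-import pattern, assuming the quantizer base class lives under torchao.quantization.pt2e and that an IS_TORCHAO_AVAILABLE flag is used; the exact module paths and flag name in the PR may differ:

try:
    # Prefer the torchao implementation when it is installed.
    from torchao.quantization.pt2e.quantizer import Quantizer  # assumed path
    IS_TORCHAO_AVAILABLE = True
except ImportError:
    # Fall back to the deprecated torch.ao implementation.
    from torch.ao.quantization.quantizer import Quantizer
    IS_TORCHAO_AVAILABLE = False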

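A sketch of the TorchQDQParameters structure; the field set follows the docstring quoted later in this conversation, while the @dataclass form is an assumption:

from dataclasses import dataclass

import torch


@dataclass
class TorchQDQParameters:
    quant_min: int            # minimum quant value
    quant_max: int            # maximum quant value
    scale: torch.Tensor       # scale factor used for quantization
    zero_point: torch.Tensor  # quantized value to which floating-point 0 maps
    is_per_channel: bool      # whether quantization is applied per channel
    ch_axis: int              # channel axis used for per-channel quantization
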
Reason for changes

  • To support OpenVINOQuantizer from ExecuTorch in quantize_pt2e
  • To eliminate dependencies on torch.ao from transformations.py

Related tickets

170072

Tests

test_openvino_quantizer_with_torch_ao_convert_pt2e is enabled only for the torchao implementation (a possible gating sketch is shown below)
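
One plausible way to gate the test, assuming a module-level availability flag such as IS_TORCHAO_AVAILABLE set by the conditional import sketched above; the actual gating in the PR may differ:

import pytest

@pytest.mark.skipif(not IS_TORCHAO_AVAILABLE, reason="requires the torchao implementation")
def test_openvino_quantizer_with_torch_ao_convert_pt2e():
    ...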

@daniil-lyakhov requested a review from a team as a code owner on July 21, 2025 15:53
@github-actions bot added the API (Public API-impacting changes) label on Jul 21, 2025
@daniil-lyakhov force-pushed the dl/fx/migrate_to_torchao branch 5 times, most recently from 8695761 to 3432700 on July 22, 2025 16:16
return PassResult(graph_module, True)


def get_device(module: torch.nn.Module) -> torch.device:

Please reuse

def get_model_device(model: torch.nn.Module) -> torch.device:
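
For reference, a typical implementation of such a helper (a sketch, not necessarily nncf's exact code):

def get_model_device(model: torch.nn.Module) -> torch.device:
    # Return the device of the first parameter; fall back to CPU for
    # parameterless models.
    try:
        return next(model.parameters()).device
    except StopIteration:
        return torch.device("cpu")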

Comment on lines +23 to +34
:param quant_min: Minimum quant value.
:type quant_min: int
:param quant_max: Maximum quant value.
:type quant_max: int
:param scale: Defines the scale factor used for quantization.
:type scale: torch.Tensor
:param zero_point: Specifies the quantized value to which 0 in floating point maps to.
:type zero_point: torch.Tensor
:param is_per_channel: Whether quantization is applied per channel.
:type is_per_channel: bool
:param ch_axis: Channel axis used for per-channel quantization.
:type ch_axis: int

Suggested change
- :param quant_min: Minimum quant value.
- :type quant_min: int
- :param quant_max: Maximum quant value.
- :type quant_max: int
- :param scale: Defines the scale factor used for quantization.
- :type scale: torch.Tensor
- :param zero_point: Specifies the quantized value to which 0 in floating point maps to.
- :type zero_point: torch.Tensor
- :param is_per_channel: Whether quantization is applied per channel.
- :type is_per_channel: bool
- :param ch_axis: Channel axis used for per-channel quantization.
- :type ch_axis: int
+ :param quant_min: Minimum quant value.
+ :param quant_max: Maximum quant value.
+ :param scale: Defines the scale factor used for quantization.
+ :param zero_point: Specifies the quantized value to which 0 in floating point maps to.
+ :param is_per_channel: Whether quantization is applied per channel.
+ :param ch_axis: Channel axis used for per-channel quantization.

:type in docstrings is used only for API objects

return named_param.device


def create_getattr_from_value(module: torch.nn.Module, graph: torch.fx.Graph, prefix: str, value: Any) -> torch.fx.Node:

I couldn't find a place where value is not a torch.Tensor; is Any really needed here?

"""

def get_new_attr_name(module: torch.nn.Module, prefix: str):
    def get_attr_name(i: int):
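
For context, a sketch of how the whole helper plausibly fits together, modeled on the torch.ao original it was moved from; the details below are assumptions, not the PR's exact code:

from typing import Any

import torch
import torch.fx


def create_getattr_from_value(module: torch.nn.Module, graph: torch.fx.Graph, prefix: str, value: Any) -> torch.fx.Node:
    # Register `value` as a buffer under a fresh attribute name and return
    # a get_attr node that reads it.
    def get_new_attr_name(module: torch.nn.Module, prefix: str):
        def get_attr_name(i: int):
            return prefix + str(i)

        # Probe prefix0, prefix1, ... until a free attribute name is found.
        i = 0
        while hasattr(module, get_attr_name(i)):
            i += 1
        return get_attr_name(i)

    attr_name = get_new_attr_name(module, prefix)
    device = get_device(module)  # get_device is the helper shown earlier in this diff
    new_value = value.detach().clone() if isinstance(value, torch.Tensor) else torch.tensor(value, device=device)
    module.register_buffer(attr_name, new_value)
    return graph.create_node("get_attr", attr_name)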