[WIP] [core] fix group offloading when using torchao #13276
Draft
Conversation
jerryzh168 reviewed Mar 17, 2026
pinned_dict = None

def _transfer_tensor_to_device(self, tensor, source_tensor, default_stream):
    tensor.data = source_tensor.to(self.onload_device, non_blocking=self.non_blocking)
Does this mean the `to` op is not implemented properly for torchao tensors? If you have a minimal repro, we might be able to fix it, I think.
The PR author (Member) replied with a minimal repro:
import torch
from torchao.quantization import Int8WeightOnlyConfig, quantize_

linear = torch.nn.Linear(64, 64, dtype=torch.bfloat16)
quantize_(linear, Int8WeightOnlyConfig(version=2))
p = linear.weight

# Move a copy to CUDA and assign via .data
cpu_copy = p.data.cpu()
cuda_copy = cpu_copy.to("cuda")
p.data = cuda_copy

# The outer wrapper was swapped, but the parameter's inner tensors were not moved
print(f"p.qdata.device = {p.qdata.device}")  # cpu
print(f"cuda_copy.qdata.device = {cuda_copy.qdata.device}")  # cuda:0

# Forward fails: input on cuda, weight internals still on cpu
linear.bias.data = linear.bias.data.to("cuda")
x = torch.randn(1, 64, device="cuda", dtype=torch.bfloat16)
linear(x)  # RuntimeError: mat2 is on cpu
jerryzh168 reviewed Mar 17, 2026
for group_module in self.modules:
    group_module.to(self.offload_device, non_blocking=False)
for param in self.parameters:
    param.data = param.data.to(self.offload_device, non_blocking=False)
I remember hearing from Brian and Alban before that param.data is a private API and we should not rely on it. I think it also does not work with tensor subclasses.
What does this PR do?
This PR is opened to discuss whether these changes should be made in TorchAO or live in Diffusers.
The benefit is that new model releases can use quantization schemes that are robustly implemented and tested in TorchAO. But quantization alone rarely helps; we need offloading too. Many large models need group offloading (overlapping compute with data transfer).
Problem
Group offloading moves parameters between CPU and GPU by reassigning param.data, as in the snippet below. This works for regular tensors but breaks for TorchAO quantized tensors.
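A minimal illustration of that pattern (mirroring the hook code excerpted above; `module` and the device names are placeholders):

# Offload: move every parameter to CPU by swapping its .data
for param in module.parameters():
    param.data = param.data.to("cpu", non_blocking=False)

# Onload: move it back before compute
for param in module.parameters():
    param.data = param.data.to("cuda", non_blocking=True)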
TorchAO tensors are wrapper subclasses that store their actual data in internal attributes (e.g., .qdata, .scale), not in the standard tensor storage. The .data assignment replaces the outer wrapper but leaves these internal attributes on the original device, causing a device mismatch at compute time.
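For reference, this structure can be inspected directly on the parameter (attribute names as in the Int8 example above; other torchao configs may use different inner names):

w = linear.weight
print(w.qdata.device, w.qdata.dtype)  # quantized int8 payload
print(w.scale.device)                 # quantization scales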
A further subtlety: accessing .data on a wrapper-subclass parameter returns a new wrapper object each time, so mutating attributes on param.data doesn't persist either; the sketch below illustrates this.
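A quick illustration of the subtlety (hypothetical, continuing the Int8 repro above):

tmp = p.data                      # fresh wrapper object on every access
tmp.qdata = tmp.qdata.to("cuda")  # mutates only the throwaway wrapper
print(p.qdata.device)             # still cpu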
This PR
For TorchAO tensors, instead of reassigning .data, we update the internal tensor attributes directly on the parameter object itself; a sketch of the idea follows.
Related issue: pytorch/ao#4088.
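A minimal sketch of the approach, not the exact PR code. It assumes the parameter implements the standard traceable-wrapper-subclass `__tensor_flatten__` protocol (which returns the names of the inner tensor attributes); the hypothetical helper name is ours:

import torch

def move_inner_tensors_(param: torch.Tensor, device, non_blocking: bool = False):
    # Mutate the inner tensors (e.g. .qdata, .scale) on the parameter itself
    # instead of reassigning `param.data`, which only swaps the outer wrapper.
    attr_names, _ = param.__tensor_flatten__()
    for name in attr_names:
        inner = getattr(param, name)
        setattr(param, name, inner.to(device, non_blocking=non_blocking))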
Happens with nightlies as well.
Code to test: https://gist.github.com/sayakpaul/929678132809874c5dbf9c5215460d33#file-check_torchao_offload_compile-py (run with --quantize and --group-offload, and potentially with --full-compile).
Nice results (with quantization + group offloading + full compile):