[WIP] Attempt at adjusting SVD pullback tolerance to make its accuracy independent of the normalization #285
A follow-up on the issue with the accuracy of the SVD pullback for tensors with a very small initial norm (see #276 and QuantumKitHub/MatrixAlgebraKit.jl#89). With the changes here, I managed to achieve the same accuracy as after #276, but without explicitly normalizing the half-infinite and infinite environments before the SVD in the forward pass through CTMRG. This doesn't necessarily need to be merged: I'm not exactly in a rush to remove the normalization before the SVD again, now that it's clear that it massively improved the accuracy of our gradients. Rather, it serves as a demonstration of some underlying issues that could be solved in the future (in particular in MatrixAlgebraKit.jl).
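To illustrate the underlying problem, here is a minimal sketch (in Python, not the actual MatrixAlgebraKit.jl code; `degenerate_pairs` and the tolerance choices are hypothetical stand-ins): a pullback gauge tolerance with an absolute floor misbehaves when the primal input has a very small norm, because rescaling the input rescales every singular value, so the absolute tolerance suddenly declares the whole spectrum degenerate, while a tolerance scaled by the input's norm is unaffected.

```python
def degenerate_pairs(s, tol):
    """Count singular-value pairs closer than `tol` (treated as gauge-degenerate)."""
    return sum(
        1
        for i, si in enumerate(s)
        for sj in s[i + 1:]
        if abs(si - sj) < tol
    )

s = [1.0, 0.5, 0.25]                # a well-separated singular spectrum
small = [x * 1e-8 for x in s]       # same spectrum, but from a tiny-norm input

tol_abs = 1e-6                      # absolute tolerance floor, independent of the input
tol_rel = 1e-6 * max(small)         # hypothetical norm-relative tolerance

pairs_abs = degenerate_pairs(small, tol_abs)  # 3: every pair looks degenerate
pairs_rel = degenerate_pairs(small, tol_rel)  # 0: the separation is preserved
```

With the absolute floor, the pullback would start mixing gauge directions between singular vectors that are in fact well separated, which is exactly the small-norm accuracy loss described above.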
The issues I tracked down and their solutions(?):
- Remove the `max` from the scaling factor in `_default_pullback_gaugetol`, as was suggested in [Bug] SVD pullback does not properly take into account the norm of the initial matrix (MatrixAlgebraKit.jl#89). In particular, this should definitely also be done in MatrixAlgebraKit.jl (`_default_pullback_gaugetol`).
- Scale the `broadening` coefficient with the same scaling factor based on the norm of the primal input (this is the aspect I missed in MatrixAlgebraKit.jl#89; removing the `max` in `default_pullback_gaugetol` actually did fix the issue from the MatrixAlgebraKit.jl side).
- Square the broadening `ε` as used in `_lorentz_broaden` to make sure the pullback remained accurate with varying initial normalization. This is more of a heuristic than anything else, but in practice the added square seems to be consistent with how things scale under scaling of the primal input.
- Adjust the default `broadening` setting and the corresponding test, since now this parameter needs to be larger to get the same relative broadening as before. Things seem to work for a new default value of `1.0e-10`, but this was more of a guess than anything else.

TLDR: while I definitely don't think we should remove the normalization before the SVD in the CTMRG projector computation in PEPSKit.jl for now, this reflects some of the changes that should end up in MatrixAlgebraKit.jl if we want an SVD pullback whose accuracy is independent of the normalization of the primal input.
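The scaling heuristic for the broadening can be checked with a small sketch. Assuming a Lorentzian-broadened inverse of the form `f(x, ε) = x / (x² + ε²)` (the squared-`ε` form; the exact definition in `_lorentz_broaden` may differ), scaling `ε` with the same factor as `x` makes `f` transform exactly like `1/x` under a rescaling of the input, i.e. `f(λx, λε) = f(x, ε) / λ`, whereas a fixed absolute `ε` loses this covariance:

```python
def lorentz_broaden(x, eps):
    # assumed regularized inverse: ~ 1/x for |x| >> eps, -> 0 smoothly as x -> 0
    return x / (x * x + eps * eps)

x, eps, lam = 0.3, 1e-5, 1e-8   # lam mimics a tiny primal norm

covariant = lorentz_broaden(lam * x, lam * eps)   # eps scaled with the input
fixed = lorentz_broaden(lam * x, eps)             # eps kept absolute

reference = lorentz_broaden(x, eps)
# f(lam*x, lam*eps) * lam recovers f(x, eps) up to floating-point rounding,
# while the absolute-eps version is off by orders of magnitude
rel_err_covariant = abs(covariant * lam - reference) / reference
rel_err_fixed = abs(fixed * lam - reference) / reference
```

This is why tying the broadening to the norm of the primal input keeps the *relative* broadening fixed, and why the default `broadening` value then has to change to reproduce the previous behavior.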
Tagging @lkdvos, @pbrehmer and @Jutho.