
Conversation

@pbrehmer
Collaborator

This PR will add generic contractions which allow us to use multi-site operators in expectation_value(pf::InfinitePartitionFunction, ...). Here, we will try to reuse the local operator contraction mechanism which automatically constructs the correct enlarged environment and inserts states/operators based on the supplied inds.

@codecov

codecov bot commented Jan 14, 2025

Codecov Report

Attention: Patch coverage is 94.82759% with 3 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/algorithms/contractions/localoperator.jl | 94.44% | 3 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/algorithms/toolbox.jl | 97.50% <100.00%> (+0.06%) ⬆️ |
| src/algorithms/contractions/localoperator.jl | 94.77% <94.44%> (-0.43%) ⬇️ |

@pbrehmer
Collaborator Author

pbrehmer commented Jan 14, 2025

I have added an initial sketch of what I was talking about yesterday in #111. Operators that act on N sites are supplied via an NTuple{N,Pair{CartesianIndex{2},AbstractTensorMap{S,2,2}}}, so e.g. ops = ((1, 1) => op1, (2, 1) => op2) for a nearest-neighbor operator.

While I do think that this is functional, I'm not sure about a few things. For example, to compute the denominator in expectation_value, I reuse contract_local_tensors by supplying nothing instead of the operator - not sure if I like that.

Overall, I'd be okay with having an interface like this since you can readily evaluate correlation functions or arbitrary operator strings and it is kind of similar to what we have with LocalOperator with the difference that we only specify on-site tensors and can't have higher-rank tensors. But of course I'm not fixed to this idea and I'm happy to discuss and discard this approach if necessary.
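As a rough sketch of what this interface would look like (the `TensorMap` constructor syntax is illustrative and may differ between TensorKit versions; `op1`, `op2`, and the final call are assumptions of this PR, not an existing API):

```julia
using TensorKit

# Virtual space of the partition-function tensors (illustrative choice).
V = ℂ^2

# Two single-site rank-(2,2) operators acting within the network.
op1 = TensorMap(randn, ComplexF64, V ⊗ V, V ⊗ V)
op2 = TensorMap(randn, ComplexF64, V ⊗ V, V ⊗ V)

# Nearest-neighbor operator string, keyed by lattice coordinates:
ops = (CartesianIndex(1, 1) => op1, CartesianIndex(2, 1) => op2)

# Hypothetical call as proposed in this PR:
# expectation_value(pf, ops, env)
```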

@lkdvos
Member

lkdvos commented Jan 14, 2025

The thing I don't like about this approach is that it is fully biased towards non-symmetric tensors and does not generalize. For example, even in the Ising model, inserting a single flip in a symmetric partition function does not work with the Z2 symmetry: what you really need is an auxiliary leg to carry that charge, connecting to the other flip, in order to obtain a non-zero result. In that sense, this is not a clean generalization of the quantum case, where you can insert pairs of spin flips at two locations because they are represented by a single operator. Additionally, the ntuple syntax doesn't provide enough information: even if we gave each tensor an additional leg, you would also need to specify how to pair the legs up when N > 2.
In general, I would say that this is the kind of code that is a bit against the design of TensorKit, given that it is not agnostic about the symmetry. If at all possible, I would really prefer to have an API that does not need to know about the symmetry first, before considering the implementation.
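The Z2 obstruction can be made concrete with a small TensorKit sketch (the spaces and the `ones` filling are purely illustrative, and the constructor syntax may differ between TensorKit versions): a charge-odd insertion has no symmetric blocks as a map `V ← V`, but becomes a legitimate symmetric tensor once an auxiliary leg carries the odd charge over to the partner insertion.

```julia
using TensorKit

V = Z2Space(0 => 1, 1 => 1)  # Z2-graded local space
A = Z2Space(1 => 1)          # auxiliary space carrying the odd charge

# As a map V ← V, a charge-odd operator (e.g. a single flip) has no
# symmetric blocks, so it vanishes identically. With one extra leg in
# the odd sector it is a perfectly fine symmetric tensor:
flip = TensorMap(ones, ComplexF64, V, V ⊗ A)

# The auxiliary legs of the two flips are then contracted with each
# other, which is what produces a nonzero two-point function.
```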

Given these constraints, I would maybe suggest that we don't try to push two-point correlators into the same formalism, given that there is not really that much shared code anyways. In MPSKit.jl, there is a function

"""
    correlator(ψ, O1, O2, i, j)
    correlator(ψ, O12, i, j)

Compute the 2-point correlator <ψ|O1[i]O2[j]|ψ> for inserting `O1` at `i` and `O2` at `j`.
Also accepts ranges for `j`.
"""

While I don't think the second syntax makes sense here, I would be tempted to add the first. It is a more intuitive syntax for what we're trying to compute anyway, and does not need to cover the generic N > 2 case, so we could have O1::AbstractTensorMap{T,S,2,3} be an "excited" partition-function tensor, which can then be matched up uniquely.
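A hypothetical PEPSKit analogue of that first MPSKit signature might look as follows (purely a sketch of the proposal, not existing API):

```julia
"""
    correlator(pf::InfinitePartitionFunction, O1, O2, i::CartesianIndex{2}, j::CartesianIndex{2})

Compute the two-point correlator with `O1` inserted at site `i` and `O2` at
site `j`. For symmetric partition functions, `O1::AbstractTensorMap{T,S,2,3}`
and `O2::AbstractTensorMap{T,S,3,2}` carry one auxiliary leg each, so that
the charged legs can be matched up unambiguously.
"""
function correlator end
```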

@pbrehmer
Collaborator Author

Indeed, I forgot about symmetric tensors. Then let's forget about the N>2 case and try to implement the correlator approach as in MPSKit. I would have two questions:

  • Is it necessary that O1 and O2 are adjacent, i.e. nearest-neighbor tensors? Or can the auxiliary leg also be connected across multiple sites?
  • Should we still allow for AbstractTensorMap{T,S,2,2} as an input (since that is the usual case of non-symmetric tensors) and then just convert to an AbstractTensorMap{T,S,2,3} (or AbstractTensorMap{T,S,3,2}) with a dummy leg?

@lkdvos
Member

lkdvos commented Jan 15, 2025

It's definitely not necessary for both to be adjacent, and I have no problem also supporting the case without an additional leg. The bigger question is whether implementing this via a full contraction is the right approach. Keep in mind that this code is written for evaluating local expectation values, i.e. nearest-neighbour, next-nearest-neighbour, etc., and really does not scale well. If the correlator lies along a vertical or horizontal line, you really would not want to hand the entire contraction to @tensor with @autoopt, because the cost of that scales exponentially with the number of tensors, while we actually know how to contract it efficiently. For diagonal correlators, there are probably smarter approaches using channel methods etc. It's not because we can write down the contraction for such a correlation function that we can actually contract it 😉
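The scaling argument can be sketched as follows; all function names here are hypothetical placeholders, not PEPSKit API:

```julia
# Sketch: with converged boundary environments, a horizontal correlator
# reduces to repeated application of a channel (column transfer) operator,
# so the cost is linear in the separation j - i, instead of handing the
# whole network to a generic contractor at once.
function horizontal_correlator(env, O1, O2, i, j)
    v = start_channel(env, O1, i)         # left boundary with O1 inserted
    for k in (i + 1):(j - 1)
        v = apply_channel(env, v, k)      # one column transfer step
    end
    return finish_channel(env, v, O2, j)  # close with O2 and the right boundary
end
```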

@pbrehmer
Collaborator Author

Well, you have a point. Probably the best way forward is to wait until someone actually needs to evaluate correlators and then implement it (well, who could have thought :P). Since I'm short on time and really need to get some other stuff done, I'll probably let this PR sleep for a while, unless someone else implements it.

@leburgel leburgel marked this pull request as draft March 10, 2025 16:01
@sanderdemeyer
Contributor

sanderdemeyer commented May 20, 2025

The day that someone needs this has come. I am interested in implementing the expectation value of a LocalOperator of a PEPOTraceSandwich (see PR #184). This reduces to the expectation_value function you implemented in this PR if we allow operators to be of the form AbstractTensorMap{T,S,2,3} and AbstractTensorMap{T,S,3,2}. I will take a look at implementing this in the most general way possible. I'm not 100% sure about the syntax though, since we could in principle allow for multiple two-site operators to be contracted (in this language: multiple pairs of an AbstractTensorMap{T,S,2,3} and an AbstractTensorMap{T,S,3,2}). Alternatively, we could restrict to only one pair, since in the case of a LocalOperator we would just need the sum over these contributions anyway.

Additionally, I would like to look at the case where we calculate the correlation function on a horizontal or vertical patch. As mentioned above, this can be calculated much more efficiently than just contracting the full tensor contraction by doing something similar to how it's defined in MPSKit. We should probably postpone this to a separate PR, though.

@sanderdemeyer
Contributor

After thinking about it a bit more, I would propose that, instead of a Tuple of Pairs of indices and operators, we generalise to the situation where multiple (possibly excited) partition-function tensors are inserted. We would then need a way to denote which excited tensors should be contracted with each other, and how. To this end, we could change the input argument from op::NTuple{N,Pair{CartesianIndex{2},<:AbstractTensorMap{T,S,2,2}}} (currently the case in this PR) to op::NTuple{N,Array{<:Pair{CartesianIndex{2},<:AbstractTensorMap}}}. That is, instead of a tuple of pairs of indices and operators, we change to a tuple of arrays thereof, where the elements of each array are contracted over their extra legs. This means that the operator of an array of size 1 has type AbstractTensorMap{T,S,2,2}; of size 2, AbstractTensorMap{T,S,2,3} and AbstractTensorMap{T,S,3,2}; of size 3, AbstractTensorMap{T,S,2,3}, AbstractTensorMap{T,S,3,3}, and AbstractTensorMap{T,S,3,2}; and so on.
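In this proposal, the input would be structured like the following sketch (all tensor names are hypothetical):

```julia
# A tuple of groups; each group is an array of (site => tensor) pairs whose
# extra legs are contracted among themselves.
op = (
    # group of size 1: ordinary rank-(2,2) on-site operator
    [CartesianIndex(1, 1) => op_single],                 # {T,S,2,2}
    # group of size 2: a pair connected by one auxiliary leg
    [CartesianIndex(1, 2) => op_left,                    # {T,S,2,3}
     CartesianIndex(3, 2) => op_right],                  # {T,S,3,2}
)
```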

Any suggestions on this are very welcome. It should be relatively straightforward to change the current code to the more general case, and I'll look into this when we have settled on a convention. As mentioned above, I think that additional functionality like the expectation value of a LocalOperator of an InfinitePartitionFunction and horizontal/vertical correlation functions should be postponed to a later PR.

@lkdvos
Member

lkdvos commented May 22, 2025

I'm not entirely sure I'm following what you are saying. It seems to me that trying to reduce the expectation value for finite temperature things to this case would reintroduce the problems discussed before, and then we have an interface where you would have to provide the partially contracted operator with the density matrix instead of simply giving the operator and the density matrix.

What I mean is this: say we do some finite-temperature calculations to construct rho. If I now wish to measure the energy, I would simply call expectation_value(rho, H), keeping the exact same structure as the regular (2+1D) quantum case and simply tracing the additional legs.
Unless I'm misunderstanding, you are suggesting that I would have to take the gates in H, apply them manually to some part of the tensors in rho, obtaining new pepotensors with possible additional legs, and then hand this back to expectation_value? This does not really sound like a better interface to me, so I don't see why you would not just keep the same structure as the regular (2+1D) InfinitePEPS case.

@sanderdemeyer
Contributor

No, I mean it exactly like you said, and would like to keep the interface expectation_value(rho, H).

I mean that internally, to calculate that expression, we would need to sum over the different gates in H (I don't think there is a way around that if it is a LocalOperator; this could be avoided in the PEPO case, though). For each gate, this then reduces to inserting 'excited' PartitionFunctionTensors in the network. The only choice is whether we want to split the tensors obtained by applying the gate to a patch of PEPOTensors and tracing out the physical legs. If so, we can insert them in the way I've explained. If not, we would need to insert the full tensor, which I think would be less efficient and would require us to also take into account whether the indices are adjacent.

@lkdvos
Member

lkdvos commented May 22, 2025

Why do you think this can be avoided in the PEPO case, but not the PEPS case? I agree that there are some parts of the contractions that can be reused in general, and we might invest some time in optimizing this for specific examples, but I don't see why this would be different for pepos than for peps. Unless I'm missing something, I would say that the finite temperature case can really just be understood as a superstate with two physical legs, and the local operators then act on only one of them, implemented slightly more efficiently. Is that incorrect?

@sanderdemeyer
Contributor

I meant that it's different to evaluate expectation_value(rho, H) when H is a LocalOperator versus when it is a PEPO. The latter reduces to a simple evaluation of the network_value of the double rho-H layer, once the other PR is merged.

sanderdemeyer added a commit to sanderdemeyer/PEPSKit.jl that referenced this pull request May 27, 2025
Based on PR QuantumKitHub#117 of PEPSKit, this generalizes the expectation value for InfinitePEPOs and InfinitePartitionFunctions
@lkdvos lkdvos force-pushed the master branch 2 times, most recently from ad4945f to 37ace4c Compare June 13, 2025 12:14