
Conversation


@lkdvos lkdvos commented Dec 11, 2025

This is a proposal for reworking the dispatch strategies of the TensorMap constructors to avoid having to special-case GPU-backed tensors.

The main idea is that, instead of using TensorMap{T} as an entry point, I want to use (TensorMap{T, S, N1, N2, A} where {S, N1, N2}) as the entry point, so that it becomes easier to work with type aliases that specify the storage type.
At least for now, I expect we will only have a very limited (finite) set of storage types, so it seems reasonable for each of these to have its own type alias, for which the generic functionality is defined in this PR.
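For illustration, such a storage-specific alias could look something like the sketch below (the name CuTensorMap and the use of CUDA.jl's CuVector are purely illustrative, not part of this PR):

# hypothetical alias that fixes the storage type to a GPU-backed vector,
# leaving the space type S and the numbers of indices N1, N2 free:
const CuTensorMap{T, S, N1, N2} = TensorMap{T, S, N1, N2, CuVector{T}}
# e.g. CuTensorMap{Float64}(undef, codomain ← domain) could then reuse the generic constructors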

Furthermore, I tried to leave a hook (_tensormap_storagetype) that maps array types to A, in order to correctly infer the storage type for various inputs. This is definitely not exhaustive and we may have to rethink the strategy, but I do think we can at least make it work for a number of commonly occurring cases.
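Concretely, the kind of mapping I have in mind looks roughly like this (just a sketch; the actual methods and fallbacks in the PR may differ, and parenttype stands for a helper that returns the parent array type):

# map a scalar type or an array type to the storage type A
_tensormap_storagetype(::Type{T}) where {T <: Number} = Vector{T}   # default CPU storage
_tensormap_storagetype(::Type{A}) where {A <: DenseVector} = A      # dense vectors are used as-is
# wrapper array types (views, reshapes, ...) defer to their parent array type
_tensormap_storagetype(::Type{A}) where {A <: AbstractArray} =
    _tensormap_storagetype(parenttype(A))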

Obviously comments are welcome, and names are tentative. It would also be nice to have some input from @kshyatt to see if this actually alleviates the need for the GPU constructors, as I did not test that. (I did locally test some of these with _TensorMap{T, Memory{T}}, which actually worked 🎉)

While going over this, I also thought of a different way of going from a Dict{<:Sector, <:AbstractMatrix} to a data vector. The main idea is to leverage mapreduce(x -> reshape(x, :), vcat, data) and simply let dispatch figure out what kind of vector comes out (up to making sure the order is correct, of course). This might work out of the box for a large variety of input matrix types (and maybe also abstract array types?), although it would probably be a little less efficient. This is definitely also worth exploring.
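A rough sketch of that idea (the function name is made up here, and I am assuming the sectors can simply be sorted to fix the block order):

# flatten a sector => matrix dictionary into a single data vector,
# letting dispatch on vcat determine the resulting vector type (Vector, CuVector, ...)
function _flatten_blocks(data::AbstractDict)
    sectors = sort!(collect(keys(data)))  # fix a consistent block order
    return mapreduce(c -> reshape(data[c], :), vcat, sectors)
end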


codecov bot commented Dec 11, 2025

Codecov Report

❌ Patch coverage is 72.72727% with 27 lines in your changes missing coverage. Please review.

Files with missing lines        Patch %   Lines
src/tensors/abstracttensor.jl   71.15%    15 Missing ⚠️
src/tensors/tensor.jl           70.73%    12 Missing ⚠️

Files with missing lines          Coverage            Δ
src/tensors/tensoroperations.jl   97.48% <100.00%>    +0.02% ⬆️
src/tensors/tensor.jl             85.29% <70.73%>     -2.47% ⬇️
src/tensors/abstracttensor.jl     58.89% <71.15%>     +1.88% ⬆️

... and 1 file with indirect coverage changes


@lkdvos lkdvos requested a review from Jutho December 14, 2025 01:58

lkdvos commented Dec 14, 2025

I think this is ready for review and shouldn't be breaking.

@lkdvos lkdvos force-pushed the ld-constructors branch 3 times, most recently from 52d85bb to edead13, December 15, 2025 17:58
@lkdvos lkdvos requested a review from Jutho December 15, 2025 19:25
@kshyatt kshyatt self-requested a review December 16, 2025 12:22
kshyatt previously approved these changes Dec 16, 2025
@lkdvos lkdvos requested a review from kshyatt December 16, 2025 12:22

lkdvos commented Dec 16, 2025

I incorporated some of the suggested changes, and while doing so also included some of the similar constructors, making sure everything is consistent. One thing I've noticed, though, is that we now have two separate places where the storage type is being constructed:

  • _tensormap_storagetype(TorA)
  • similarstoragetype(t, [T])

The former attempts to use inference via _tensormap_storagetype(parenttype(TorA)), while the latter uses inference via similar(storagetype(t), T).
I was wondering whether it might be worth merging these approaches and simply going all-in on the similar-based inference, i.e. something like

similarstoragetype(t::AbstractTensorMap, [::Type{T}]) where {T <: Number} =
    similarstoragetype(storagetype(t), [T])

similarstoragetype(::Type{TT}, [::Type{T}]) where {TT <: AbstractTensorMap, T <: Number} =
    similarstoragetype(storagetype(TT), [T])

similarstoragetype(::Type{A}, [::Type{T}]) where {A <: AbstractArray, T <: Number} =
    Core.Compiler.return_type(similar, Tuple{A, Type{T}, Int})
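For a concrete illustration of what that last definition computes (not part of the PR, using a plain Vector as an example):

A = Vector{Float64}
T = ComplexF64
Core.Compiler.return_type(similar, Tuple{A, Type{T}, Int})  # == Vector{ComplexF64}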


Jutho commented Dec 16, 2025

> One thing that I've noticed though is that we now have two separate places where the storage type is being constructed: _tensormap_storagetype(TorA) and similarstoragetype(t, [T]). [...] I was wondering if it might be worth it to merge these approaches and simply go all-in on the similar inference [...]

I agree there is quite a strong overlap between those two bits of functionality, so it would be great if there could be a way to simplify/unify them.


lkdvos commented Dec 16, 2025

I made an attempt to unify them: the single-argument and two-argument methods are now the respective ways to determine the storage types. I think this should work now 🤞, so this could do with another round of review.

return similar(t, T, codomain ← one(codomain))
end
Base.similar(t::AbstractTensorMap, codomain::TensorSpace, domain::TensorSpace) =
    similar(t, similarstoragetype(t, scalartype(t)), codomain ← domain)
@Jutho Dec 16, 2025

Is there a reason this one has similarstoragetype(t, scalartype(t)) as second argument (which would be equivalent to the shorter similarstoragetype(t)), but then the definitions below simply use scalartype(t) as second argument?

Member Author

Yes, which might actually be a good reason to revert my last several commits:

For the constructors, we want to take the type of the provided raw vector, so there I need similarstoragetype(data::DenseVector) = typeof(data). On the other hand, for the similar calls I need the type of what similar would return, which does not always coincide. This is what was failing: the PtrArray data from the manual allocator in TensorOperations has PtrArray{T,1} <: DenseVector, but similar(::PtrArray{T,1})::Vector{T}.
In order to still combine these two behaviours in a single function, I distinguish them via the one- and two-argument versions.
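In other words, the proposed convention roughly amounts to (a sketch; the actual definitions may differ):

# one-argument form: the storage type of the data exactly as provided
similarstoragetype(data::DenseVector) = typeof(data)
# two-argument form: the storage type that similar would produce for scalar type T
similarstoragetype(data::DenseVector, ::Type{T}) where {T <: Number} =
    Core.Compiler.return_type(similar, Tuple{typeof(data), Type{T}, Int})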

return TT′(undef, V)
end
Base.similar(::Type{TT}, cod::TensorSpace, dom::TensorSpace) where {TT <: AbstractTensorMap} =
    similar(TT, cod ← dom)
Member

Are there no type-domain similar methods with a TorA argument?

Member Author

There were none before, which is in line with Base, which also does not define a 3-argument similar in the type domain.


Jutho commented Dec 16, 2025

Ok, if all tests run and you take a look at my final comments, I think this is good for me!

lkdvos and others added 11 commits December 16, 2025 22:24

  • try handle PtrArrays
  • fix docstrings
  • fix length check
  • fix reshape
  • Fix typos [skip ci]
  • also update `tensoralloc`
  • fix ambiguity
  • more careful with storagetypes
  • more careful with tensoroperations
  • even more careful
  • the carefulest!

Co-authored-by: Jutho <Jutho@users.noreply.github.com>

lkdvos commented Dec 17, 2025

I think I responded to all comments, and included the diagonal changes here as well. Feel free to merge tomorrow if the tests pass.

Should we consider tagging another patch release?

@Jutho Jutho merged commit d60855e into main Dec 17, 2025
38 of 42 checks passed
@Jutho Jutho deleted the ld-constructors branch December 17, 2025 08:33