rework tensor constructors to allow storagetype specification #327
Conversation
I think this is ready for review and shouldn't be breaking.
(force-pushed from 52d85bb to edead13)
I incorporated some of the suggested changes, and while doing so also included some of the … The former attempts to use inference with:

```julia
similarstoragetype(t::AbstractTensorMap, [::Type{T}]) where {T <: Number} =
    similarstoragetype(storagetype(t), [T])
similarstoragetype(::Type{TT}, [::Type{T}]) where {TT <: AbstractTensorMap, T <: Number} =
    similarstoragetype(storagetype(TT), [T])
similarstoragetype(::Type{A}, [::Type{T}]) where {A <: AbstractArray, T <: Number} =
    Core.Compiler.return_type(similar, Tuple{A, T, Int})
```
I agree there is quite a strong overlap between those two bits of functionality, so it would be great if there could be a way to simplify/unify them.
I made an attempt to unify them; the point is that the one-argument and two-argument methods are the respective ways to determine the storage types. I think this should now work 🤞, so it could do with another round of review.
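To make the distinction concrete, a hypothetical illustration based on the definitions quoted above (not output from the actual PR):

```julia
# one-argument form: the storage type is taken from the tensor/array itself
similarstoragetype(Vector{Float64})              # expected: Vector{Float64}

# two-argument form: the scalar type is changed, and inference on `similar`
# decides what storage comes out
similarstoragetype(Vector{Float64}, ComplexF64)  # expected: Vector{ComplexF64}
```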
```julia
    return similar(t, T, codomain ← one(codomain))
end
Base.similar(t::AbstractTensorMap, codomain::TensorSpace, domain::TensorSpace) =
    similar(t, similarstoragetype(t, scalartype(t)), codomain ← domain)
```
Is there a reason this one has `similarstoragetype(t, scalartype(t))` as second argument (which would be equivalent to the shorter `similarstoragetype(t)`), while the definitions below simply use `scalartype(t)` as the second argument?
Yes, and this might actually be a good reason to revert my last several commits: for the constructors, we want to take the type of the provided raw vector, so there I need `similarstoragetype(data::DenseVector) = typeof(data)`. On the other hand, for the `similar` calls I need the type of what the result of `similar` would be, which doesn't always align. This is what was failing: the `PtrArray` data from the manual allocator in TensorOperations has `PtrArray{T,1} <: DenseVector`, but `similar(::PtrArray{T,1})::Vector{T}`.
In order to still force these two methods together, I distinguish between the one- and two-argument versions.
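A minimal demonstration of that mismatch, assuming PtrArrays.jl (which the manual allocator in TensorOperations builds on):

```julia
using PtrArrays

p = malloc(Float64, 4)   # p isa PtrArray{Float64, 1} <: DenseVector{Float64}
s = similar(p)           # hits Base's generic fallback, which allocates an Array
typeof(s)                # Vector{Float64} -- not a PtrArray, so the two notions diverge
free(p)                  # PtrArrays must be freed explicitly
```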
```julia
    return TT′(undef, V)
end
Base.similar(::Type{TT}, cod::TensorSpace, dom::TensorSpace) where {TT <: AbstractTensorMap} =
    similar(TT, cod ← dom)
```
Are there no type-domain `similar` methods with a `TorA` argument?
There were none before, which is in line with `Base.similar`, which also doesn't have a three-argument method in the type domain.
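For comparison, Base's type-domain `similar` likewise only takes the array type and a shape:

```julia
similar(Array{Int}, (2, 3))     # 2-argument type-domain form: type + dims
# similar(Array, Int, (2, 3))   # no such 3-argument type-domain method exists in Base
```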
Ok, if all tests run and you take a look at my final comments, I think this is good for me!
Commits in this push:

- try handle PtrArrays
- fix docstrings
- fix length check
- fix reshape
- Fix typos [skip ci]
- also update `tensoralloc`
- fix ambiguity
- more careful with storagetypes
- more careful with tensoroperations
- even more careful
- the carefulest!

Co-authored-by: Jutho <Jutho@users.noreply.github.com>
(force-pushed from e4ba512 to fe2c5d5)
I think I responded to all comments, and included the diagonal changes here as well. Feel free to merge tomorrow if the tests pass. Should we consider tagging another patch release?
This is a proposal for reworking the dispatch strategies of the `TensorMap` constructors to avoid having to special-case GPU-backed tensors. The main idea is that instead of using `TensorMap{T}` as an entry point, I want to use `TensorMap{T, S, N1, N2, A} where {S, N1, N2}` as an entry point, so that it becomes easier to work with type aliases that specify the storage type. I imagine that, at least for now, we will have a very limited (finite) set of these, so it might be reasonable for each of them to have its own type alias, for which the generic functionality is defined in this PR.
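For instance, a hypothetical alias along those lines (not defined in this PR; `CuVector` from CUDA.jl is used purely as an illustration of a GPU storage type):

```julia
using CUDA  # illustration only

# pin the storage type while leaving the space and index parameters free
const CuTensorMap{T, S, N1, N2} = TensorMap{T, S, N1, N2, CuVector{T}}
```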
Furthermore, I tried to leave a hook (`_tensormap_storagetype`) to map array types to `A`, to correctly infer the storage type for various inputs. This is definitely not an exhaustive way of doing this and we might have to rethink this strategy, but I do think we can at least make it work for a number of often-occurring cases.
Obviously comments are welcome, names are tentative, and it would be nice to have some input from @kshyatt to see if this actually alleviates the need for the GPU constructors, as I did not test this. (I did locally test some of these with `_TensorMap{T, Memory{T}}`, which did actually work 🎉)
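A sketch of the kind of local smoke test mentioned above (assuming Julia ≥ 1.11 for `Memory` and the PR's internal `_TensorMap{T, A}` alias; the exact constructor signature may differ):

```julia
using TensorKit

# construct a 1 ← 1 tensor backed by Memory{Float64} storage
t = _TensorMap{Float64, Memory{Float64}}(undef, ℂ^2 ← ℂ^2)
storagetype(t)  # expected: Memory{Float64}
```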
While going over this, I did think of a different way of going from a `Dict{<:Sector, <:AbstractMatrix}` to a data vector. The main idea is to leverage `mapreduce(x -> reshape(x, :), vcat, data)` and simply let dispatch figure out what kind of vector comes out (up to making sure the order is correct, of course). This might just work out of the gate for a large variety of input matrix types (and maybe also abstract array types?), although it would probably be a little less efficient. This is definitely also something worth exploring.
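A quick sketch of that idea with plain arrays (symbols stand in for hypothetical sector keys; a real implementation would need a well-defined sector order):

```julia
data = Dict(:a => rand(2, 2), :b => rand(3, 3))

# fix an iteration order over the "sectors", then let `vcat` dispatch
# decide the output vector type
blocks = sort!(collect(data); by = first)
v = mapreduce(kv -> reshape(last(kv), :), vcat, blocks)  # Vector{Float64}, length 13
```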