Conversation

@lkdvos (Member) commented Oct 1, 2025

This is a small feature which I noticed can be useful in various contexts, and which I have definitely wanted to have in the toolbox myself at some point. It still needs some more tests and a check that the implementations are correct, but I think this would be a good addition to the package.

codecov bot commented Oct 1, 2025

Codecov Report

❌ Patch coverage is 93.98496% with 8 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/common/matrixproperties.jl | 92.85% | 4 Missing ⚠️ |
| src/implementations/projections.jl | 95.00% | 3 Missing ⚠️ |
| src/interface/projections.jl | 80.00% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/MatrixAlgebraKit.jl | 100.00% <ø> (ø) |
| src/pullbacks/eigh.jl | 91.04% <100.00%> (ø) |
| src/pullbacks/lq.jl | 95.31% <100.00%> (ø) |
| src/pullbacks/polar.jl | 100.00% <100.00%> (ø) |
| src/pullbacks/qr.jl | 95.23% <100.00%> (ø) |
| src/pullbacks/svd.jl | 96.26% <100.00%> (ø) |
| src/interface/projections.jl | 80.00% <80.00%> (ø) |
| src/implementations/projections.jl | 95.00% <95.00%> (ø) |
| src/common/matrixproperties.jl | 88.70% <92.85%> (-1.30%) ⬇️ |

@lkdvos lkdvos linked an issue Oct 1, 2025 that may be closed by this pull request
@lkdvos lkdvos marked this pull request as draft October 1, 2025 20:13
@Jutho (Member) commented Oct 5, 2025

Ok, I've added a blocked version of the (anti)hermitianpart! function and implemented it using our @functiondef infrastructure. This means that the output can be a different matrix, but by default the input matrix itself is reused.

The cache-friendliness of the blocked strategy only seems to show up on my Mac M4 for really large matrices (several thousand rows/columns), but that might be because the Apple M processors have very big caches, if I remember correctly. The performance seems roughly constant as soon as the blocksize is somewhere in the range 32-512, though I did not do any serious benchmarking. This will also be hardware-dependent, and I assume older hardware would benefit from blocksizes at the smaller end of this range.
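For reference, the blocked strategy can be sketched as follows. This is a minimal illustration of the idea, not the package's actual implementation; the function name and the default blocksize here are made up:

```julia
using LinearAlgebra

# Minimal sketch of a blocked Hermitian projection A ← (A + A')/2.
# The matrix is traversed in bs × bs tiles of the lower triangle, so that a
# tile and its transposed counterpart are both cache-resident while being
# combined, instead of striding through a whole row of A' per column of A.
function blocked_hermitianpart!(A::AbstractMatrix, bs::Int = 64)
    n = LinearAlgebra.checksquare(A)
    for J in 1:bs:n, I in J:bs:n
        for j in J:min(J + bs - 1, n), i in max(I, j):min(I + bs - 1, n)
            x = (A[i, j] + conj(A[j, i])) / 2
            A[i, j] = x
            A[j, i] = conj(x)   # also makes the diagonal real
        end
    end
    return A
end
```

Each unordered index pair (i, j) is visited exactly once, so the result agrees with the unblocked `(A + A') / 2` for any blocksize.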

Interface-wise, I would still like to propose the following changes:

  • Rename hermitianpart! to project_hermitian!, and similarly for the antihermitian case.
  • Similarly, rename the hermitian.jl file to projections.jl.
  • Also change uppertriangular! to project_uppertriangular! (and similarly for the lower-triangular case) and add it there.
  • Maybe also have project_isometric! (not necessarily in this PR), although this is in principle covered by left_orth! / left_polar!, and it is "mathematically" somewhat different: it is a manifold projection rather than a subspace projection, and is therefore nonlinear.
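As a reference point for the proposed names, the unblocked projections are simply the following (non-mutating sketches; the PR itself provides in-place `!` variants):

```julia
using LinearAlgebra

# Unblocked reference versions of the proposed projections. These are all
# linear, idempotent subspace projections, which is what motivates grouping
# them under a common project_* prefix.
project_hermitian(A) = (A + A') / 2
project_antihermitian(A) = (A - A') / 2
project_uppertriangular(A) = triu(A)
project_lowertriangular(A) = tril(A)
```

Note that `project_hermitian(A) + project_antihermitian(A) == A` and each map squares to itself, in contrast with an isometric projection, which is nonlinear in `A`.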

Of course this will need dedicated GPU implementations (@kshyatt 😇). And maybe ChainRules?

I have not yet looked at ishermitian and friends, but I guess something similar can be done there, although I am not sure it is really necessary.

@lkdvos (Member, Author) left a comment

Definitely happy with project_hermitian etc. instead; it might even be nicer not to name-clash with LinearAlgebra.

I think it might even pay off to add the optimization to the ishermitian check, since this is actually called in all the check_input(eigh_*, ...) methods.

@Jutho (Member) commented Oct 6, 2025

> I think it might even pay off to add the optimization to the ishermitian check, since this is actually called in all the check_input(eigh_*, ...) methods.

The problem with this is that we would then have to provide a blocksize, or more generally an algorithm, for ishermitian, which requires thinking about an interface for these methods as well. From some elementary benchmarks, it does indeed seem to pay off (about a 40-50% runtime reduction for large matrices, when testing for exact hermiticity).

I have some more questions about the current ishermitian implementation for the case of nonzero tolerance. With e.g. a standard norm comparison, you cannot do this elementwise without allocations the way it is done now, right? You actually need to compute norm(A - A'), which I would write as 2 * norm(project_antihermitian(A)).
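The identity is easy to check numerically; a throwaway verification, using an allocating expression for the antihermitian part:

```julia
using LinearAlgebra

# norm(A - A') is exactly twice the norm of the antihermitian part
# (A - A')/2, since the two matrices differ by a factor of 2 and the norm
# is absolutely homogeneous.
A = ComplexF64[1 2im 0; 3 4 1; 1-im 0 2]
antiherm = (A - A') / 2
@assert norm(A - A') ≈ 2 * norm(antiherm)
```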

A final question concerns the naming. Are we fine shipping our own ishermitian that may conflict with LinearAlgebra.ishermitian? So far, with the factorizations, we have been able to avoid this.

@kshyatt (Member) commented Oct 6, 2025

For the GPU support, since ishermitian is a LinearAlgebra function, I wonder if it makes sense to implement that one specifically in GPUArrays.jl and the others here? I have a rough idea of how to set this up...

@Jutho (Member) commented Oct 6, 2025

The difference with LinearAlgebra.ishermitian is that our implementation accepts a tolerance. We currently have ishermitian, isisometry, is_left_isometry, and so forth.

One proposal could be to always use underscores: is_hermitian, is_antihermitian, is_isometry, … (even though I am not a big fan of underscores myself). That seems quite orthogonal to Base, which has a whole bunch of is... methods without any underscores.

Alternatively, we could use isapprox in the name: isapprox_hermitian, isapprox_isometry, …
However, I think we will often be using the atol = rtol = 0 version.

More ideas are welcome.

@Jutho (Member) commented Oct 6, 2025

On a different note, the blocked versions should probably be restricted to StridedMatrix.

@kshyatt (Member) commented Oct 6, 2025

For the approx versions, we can probably adapt the one in GPUArrays then, and revisit it if it turns out to be a performance problem.

@lkdvos (Member, Author) commented Oct 6, 2025

> The problem with this is that we then have to provide a blocksize or more generally an algorithm for ishermitian, which requires thinking about an interface for these methods as well. From some elementary benchmarks, it does however seem to pay off indeed (e.g. about 40 - 50 % reduction of runtime for large matrices, testing for exact hermiticity).

In principle I would think we just provide one set of parameters for that and go from there. I'm okay with trying to pass an algorithm too, but realistically this is not something I will ever try to tweak, so if we can run just a few benchmarks for the BLAS float types and select reasonable defaults from there, I'd be happy.

> I have some more questions about the current ishermitian implementation for the case with nonzero tolerance. With e.g. standard norm comparison, you cannot do this elementwise without allocations the way it is done now, right? You do actually need to compute norm(A-A'), which I would write as 2*norm(project_antihermitian(A)).

Yes, I wasn't thinking about it too much. In principle I guess an optimal implementation should compute both the norm of A and the norm of project_antihermitian(A) in a single pass, and compare them at the end? That is presumably overkill?
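Such a single-pass variant could look roughly like this. A hypothetical sketch: the name, the keyword defaults, and the choice of Frobenius norm are all assumptions, not what the package does:

```julia
using LinearAlgebra

# Approximate hermiticity check that accumulates ‖A‖² and ‖(A - A')/2‖²
# (Frobenius norm) in one sweep over the lower triangle, comparing only at
# the end; no temporary A - A' is ever allocated.
function ishermitian_singlepass(A::AbstractMatrix; atol::Real = 0, rtol::Real = 0)
    n = LinearAlgebra.checksquare(A)
    normA² = normΔ² = 0.0
    for j in 1:n
        normA² += abs2(A[j, j])
        normΔ² += abs2(imag(A[j, j]))   # antihermitian part of the diagonal
        for i in (j + 1):n
            normA² += abs2(A[i, j]) + abs2(A[j, i])
            # off-diagonal entries of (A - A')/2 come in conjugate pairs,
            # hence the factor 2
            normΔ² += 2 * abs2((A[i, j] - conj(A[j, i])) / 2)
        end
    end
    return sqrt(normΔ²) <= max(atol, rtol * sqrt(normA²))
end
```

With `atol = rtol = 0` this reduces to the exact test `norm(project_antihermitian(A)) == 0`, so the exact and approximate paths share one loop.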

> Final question concerns the naming. Are we fine shipping our own ishermitian that may conflict with LinearAlgebra.ishermitian? So far with the factorizations we have been able to avoid this.

I think I'm okay with shipping our own. Given that we already have isunitary and isisometry, which also have an isapprox option, I wouldn't like to then have is_hermitian etc., and I do also prefer the look without underscores...

@Jutho (Member) commented Oct 6, 2025

Ok, so we keep the current naming? Another question I had while modifying these implementations is when to restrict A to ::AbstractMatrix and when not. I guess we leave it unrestricted whenever that can make sense and we don't use anything specific to matrices, but it is still sometimes ambiguous.

@Jutho (Member) commented Oct 6, 2025

One more question: for isisometry we need a default tolerance (this is now handed off to isapprox, but I think there is a more efficient implementation, for which we would then need to set the default ourselves). For ishermitian we probably want this to be zero? Is that inconsistent?
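For concreteness, an approximate isometry check with a self-chosen default could look like this. An illustrative sketch only: the name, the signature, and the default tolerance are assumptions, not the package's choices:

```julia
using LinearAlgebra

# Sketch of an approximate isometry check, A' * A ≈ I, with a default
# tolerance that scales with the column dimension and the element type's eps;
# unlike hermiticity, exact isometry essentially never holds in floating
# point, so a zero default would be useless here.
function isisometry_sketch(A::AbstractMatrix;
                           rtol::Real = size(A, 2) * eps(real(float(eltype(A)))))
    m, n = size(A)
    m >= n || return false   # an isometry must be square or tall
    return norm(A' * A - I) <= rtol * sqrt(n)
end
```

This illustrates why the two defaults differ: a rotated Q from any QR factorization passes only up to roundoff, whereas a matrix can be exactly Hermitian bit-for-bit.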

@lkdvos (Member, Author) commented Oct 6, 2025

I'm happy with the current names. I'm also happy with the current default tolerances, since I would like the is* checks to be exact, with the option of passing a tolerance; whenever exactness is not feasible because of floating-point arithmetic, we additionally account for that. I'm assuming that this is precisely why LinearAlgebra doesn't provide isunitary etc., but I think that as long as we are clear in our docstrings, we should just take whichever convention is more practical.

@lkdvos lkdvos marked this pull request as ready for review October 6, 2025 15:35
@Jutho (Member) commented Oct 6, 2025

Ok, for me this is also ready, I think. I know I mentioned also adding project_uppertriangular! in this style, but for now I would like to keep it simple.

Co-authored-by: Jutho <Jutho@users.noreply.github.com>
@lkdvos lkdvos merged commit 07f1ee2 into main Oct 6, 2025
8 of 9 checks passed
@lkdvos lkdvos deleted the ld-hermitianpart branch October 6, 2025 23:27
@lkdvos lkdvos mentioned this pull request Nov 14, 2025
lkdvos referenced this pull request Nov 14, 2025

Successfully merging this pull request may close these issues.

Support passing Hermitian matrix to eigh_x

4 participants