`docs/src/user_interface/decompositions.md` (+14 −11)
@@ -55,6 +55,7 @@ Not all matrices can be diagonalized, and some real matrices can only be diagona
In particular, the resulting decomposition can only be guaranteed to be real for real symmetric inputs `A`.
Therefore, we provide `eig_` and `eigh_` variants, where `eig` always results in complex-valued `V` and `D`, while `eigh` requires symmetric inputs but retains the scalartype of the input.

The full set of eigenvalues and eigenvectors can be computed using the [`eig_full`](@ref) and [`eigh_full`](@ref) functions.
If only the eigenvalues are required, the [`eig_vals`](@ref) and [`eigh_vals`](@ref) functions can be used.
These functions return the diagonal elements of `D` in a vector.
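A minimal sketch of these calls, following the return conventions described above (`A * V ≈ V * D` is the defining property):

```julia
using MatrixAlgebraKit

A = [2.0 1.0; 1.0 3.0]   # real symmetric input

D, V = eigh_full(A)      # real-valued D and V, with A * V ≈ V * D
Dc, Vc = eig_full(A)     # always complex-valued, regardless of the input
λ = eigh_vals(A)         # only the diagonal of D, as a vector
```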
@@ -99,7 +100,7 @@ Filter = t -> t isa Type && t <: MatrixAlgebraKit.LAPACK_EigAlgorithm
The [Schur decomposition](https://en.wikipedia.org/wiki/Schur_decomposition) transforms a complex square matrix `A` into a product `Q * T * Qᴴ`, where `Q` is unitary and `T` is upper triangular.
It rewrites an arbitrary complex square matrix as unitarily similar to an upper triangular matrix whose diagonal elements are the eigenvalues of `A`.
For real matrices, the same decomposition can be achieved in real arithmetic by allowing `T` to be quasi-upper triangular, i.e. triangular with blocks of size `(1, 1)` and `(2, 2)` on the diagonal.
This decomposition is also useful for computing the eigenvalues of a matrix, which is exposed through the [`schur_vals`](@ref) function.
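As a small illustration, using only `schur_vals`, whose purpose is described above:

```julia
using MatrixAlgebraKit

A = [0.0 -1.0; 1.0 0.0]   # real rotation matrix
vals = schur_vals(A)      # eigenvalues of A, here the complex pair ±im
```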
@@ -117,12 +118,12 @@ Filter = t -> t isa Type && t <: MatrixAlgebraKit.LAPACK_EigAlgorithm
## Singular Value Decomposition

The [Singular Value Decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) transforms a matrix `A` into a product `U * Σ * Vᴴ`, where `U` and `Vᴴ` are unitary, and `Σ` is diagonal, real and non-negative.
For a square matrix `A`, both `U` and `Vᴴ` are unitary, and if the singular values are distinct, the decomposition is unique.

For rectangular matrices `A` of size `(m, n)`, there are two modes of operation, [`svd_full`](@ref) and [`svd_compact`](@ref).
The former ensures that the resulting `U` and `Vᴴ` remain square unitary matrices, of size `(m, m)` and `(n, n)`, with rectangular `Σ` of size `(m, n)`.
The latter creates an isometric `U` of size `(m, min(m, n))`, and `V = (Vᴴ)'` of size `(n, min(m, n))`, with a square `Σ` of size `(min(m, n), min(m, n))`.

It is also possible to compute the singular values only, using the [`svd_vals`](@ref) function.
This then returns a vector of the values on the diagonal of `Σ`.
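The two modes and their output shapes can be sketched as follows, with the factor order matching the product `U * Σ * Vᴴ`:

```julia
using MatrixAlgebraKit

A = randn(4, 3)                 # rectangular, m = 4, n = 3

U, Σ, Vᴴ = svd_full(A)          # sizes (4, 4), (4, 3), (3, 3)
Uc, Σc, Vᴴc = svd_compact(A)    # sizes (4, 3), (3, 3), (3, 3)
σ = svd_vals(A)                 # the singular values as a vector

Uc * Σc * Vᴴc ≈ A               # defining property of the compact form
```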
@@ -147,21 +148,23 @@ Filter = t -> t isa Type && t <: MatrixAlgebraKit.LAPACK_SVDAlgorithm
The [Polar Decomposition](https://en.wikipedia.org/wiki/Polar_decomposition) of a matrix `A` is a factorization `A = W * P`, where `W` is unitary and `P` is positive semi-definite.
If `A` is invertible (and therefore square), the polar decomposition always exists and is unique.
For non-square matrices `A` of size `(m, n)`, the decomposition `A = W * P` with `P` positive semi-definite of size `(n, n)` and `W` isometric of size `(m, n)` exists only if `m >= n`, and is unique if `A`, and thus `P`, has full rank.
For `m <= n`, we can analogously decompose `A` as `A = P * Wᴴ` with `P` positive semi-definite of size `(m, m)` and `Wᴴ` of size `(m, n)` such that `W = (Wᴴ)'` is isometric. Only in the case `m = n` do both decompositions exist.

The decompositions `A = W * P` and `A = P * Wᴴ` can be computed with the [`left_polar`](@ref) and [`right_polar`](@ref) functions, respectively.

```@docs; canonical=false
left_polar
right_polar
```

These functions can be implemented by first computing a singular value decomposition and then constructing the polar decomposition from the singular values and vectors. Alternatively, the polar decomposition can be computed using an iterative method based on Newton's method, which can be more efficient for large matrices, especially if they are already close to being isometric.

```@docs; canonical=false
PolarViaSVD
PolarNewton
```
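A short sketch of these functions, assuming the factors are returned in the order in which they appear in the respective products:

```julia
using MatrixAlgebraKit

A = randn(5, 3)          # m = 5 >= n = 3, full rank with probability one

W, P = left_polar(A)     # A ≈ W * P, W isometric (5, 3), P psd (3, 3)
W * P ≈ A                # defining property
W' * W ≈ one(P)          # isometry of W

P2, W2ᴴ = right_polar(Matrix(A'))   # for the m <= n case: A' ≈ P2 * W2ᴴ
```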
## Orthogonal Subspaces
@@ -179,7 +182,7 @@ right_orth
## Null Spaces
Similarly, it can be convenient to obtain an orthogonal basis for the kernel or cokernel of a matrix.
These are the complements of the coimage and image, respectively, and can be computed using the [`left_null`](@ref) and [`right_null`](@ref) functions.
Again, this is typically implemented through a combination of the decompositions mentioned above, and serves as a convenient interface to these operations.
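As an illustration (the orientation of the returned factors is an assumption here; consult the docstrings for the exact conventions):

```julia
using MatrixAlgebraKit

A = [1.0 2.0 3.0; 2.0 4.0 6.0]   # rank 1, so both null spaces are nontrivial

N = left_null(A)     # orthonormal basis for the cokernel: N' * A ≈ 0
Nᴴ = right_null(A)   # orthonormal basis for the kernel: A * Nᴴ' ≈ 0
```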
`docs/src/user_interface/truncations.md` (+110 −6)
@@ -5,7 +5,112 @@ CollapsedDocStrings = true
# Truncations
Truncation strategies allow you to control which eigenvalues or singular values to keep when computing partial or truncated decompositions. These strategies are used in the functions [`eigh_trunc`](@ref), [`eig_trunc`](@ref), and [`svd_trunc`](@ref) to reduce the size of the decomposition while retaining the most important information.
## Using Truncations in Decompositions
Truncation strategies can be used with truncated decomposition functions in two ways, as illustrated below.
For concreteness, we use the following matrix as an example:
```jldoctest truncations
using MatrixAlgebraKit
using MatrixAlgebraKit: diagview

A = [2 1 0; 1 3 1; 0 1 4];
D, V = eigh_full(A);

diagview(D) ≈ [3 - √3, 3, 3 + √3]

# output

true
```
### 1. Using the `trunc` keyword with a `NamedTuple`
The simplest approach is to pass a `NamedTuple` with the truncation parameters.
For example, keeping only the largest 2 eigenvalues:
## Truncation with SVD vs Eigenvalue Decompositions
When using truncations with different decomposition types, keep in mind:
- **`svd_trunc`**: Singular values are always real and non-negative, sorted in descending order. Truncation by value typically keeps the largest singular values.
- **`eigh_trunc`**: Eigenvalues are real but can be negative for symmetric matrices. By default, `truncrank` sorts by absolute value, so `truncrank(k)` keeps the `k` eigenvalues with largest magnitude (positive or negative).
- **`eig_trunc`**: For general (non-symmetric) matrices, eigenvalues can be complex. Truncation by absolute value considers the complex magnitude.
## Truncation Strategies
MatrixAlgebraKit provides several built-in truncation strategies:
```@docs; canonical=false
notrunc
truncfilter
truncerror
```
Truncation strategies can be combined using the `&` operator to create intersection-based truncation.
When strategies are combined, only the values that satisfy all conditions are kept.
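A hedged sketch of combining strategies, using `truncrank` from the section above and `truncfilter` from the list of strategies (the exact predicate form accepted by `truncfilter` is an assumption):

```julia
using MatrixAlgebraKit

A = [2 1 0; 1 3 1; 0 1 4]

# keep at most 10 values, and among those only the ones with
# magnitude at least 1e-6 (both conditions must hold)
strategy = truncrank(10) & truncfilter(x -> abs(x) >= 1e-6)
D, V = eigh_trunc(A; trunc=strategy)
```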