Eigenvalues in Arb
December 2, 2018
When I discuss Arb with someone for the first time, I usually get asked one of the following questions:
- What is the difference between interval and ball arithmetic?
- Does it compute eigenvalues?
To everyone's disappointment, my answer to the second question has been no, not yet, or sometimes well, kind of, indirectly. It has been possible to compute eigenvalues via roots of the characteristic polynomial, but this is a bad algorithm for large matrices. The polynomial root-finding code in Arb is also a bit of a hassle to use and currently does not support overlapping roots, i.e. overlapping/multiple eigenvalues.
Well, I've just spent two very productive weeks in Germany, specifically in Kaiserslautern (working on FLINT with Bill Hart and Daniel Schultz), Trier (at the MPFR/MPC/iRRAM Workshop), and Berlin (at the OSCAR meeting). In between other issues, I finally had time and motivation to work on this long-standing feature request, and now the answer is yes, Arb computes eigenvalues! At least, the git master now does, and a new release will be available soon.
Methods
Given an $n$ by $n$ matrix $A$ with real or complex entries, the ordinary eigenvalue problem consists of solving $Ax = \lambda x$ for some or all of the eigenvalues $\lambda$ and corresponding eigenvectors $x$. It is natural to compute all eigenvalues and eigenvectors simultaneously. If all eigenvalues are simple (there are $n$ distinct $\lambda$), this results in an explicit diagonalization $X^{-1} A X = D$.
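For instance (a small illustrative example, not specific to Arb), the symmetric matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ has eigenvalues $3$ and $1$ with eigenvectors $(1, 1)^T$ and $(1, -1)^T$, so taking $X = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ gives $X^{-1} A X = \operatorname{diag}(3, 1)$.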
Galois theory implies that the eigenvalues of a general matrix of size $n > 4$ cannot be expressed using a finite combination of arithmetic operations and radicals and therefore must be approximated using iterative methods. In interval/ball arithmetic, we have two added difficulties: obtaining rigorous error bounds (heuristic numerical convergence tests are not enough), and handling overlapping eigenvalues in a sensible way. Arb follows the standard approach for finding eigenvalues and/or eigenvectors of an interval matrix, which proceeds in two separate stages:
- Computing approximate solutions (non-rigorously)
- Computing certified error bounds for the approximate solutions
The two stages are independent; in particular, any old numerical eigenvalue algorithm can be used for stage 1. The user can even use their favorite external library for stage 1 and just plug the approximations into Arb for stage 2, if they so prefer.
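For instance, here is a minimal sketch of what stage 2 applied to externally computed approximations might look like. The helper name certify_external and the double-precision input arrays (eigenvalue real/imaginary parts, and eigenvectors stored contiguously) are hypothetical; the certification entry point acb_mat_eig_simple is one of the stage 2 functions described below.

```c
#include "acb_mat.h"

/* Hypothetical helper: certify externally computed double-precision
   approximations (e.g. from some LAPACK-based solver). Approximate
   eigenvector i is stored in vec_re[i*n + j], vec_im[i*n + j], j = 0..n-1. */
int certify_external(acb_ptr E, acb_mat_t L, acb_mat_t R, const acb_mat_t A,
                     const double *lam_re, const double *lam_im,
                     const double *vec_re, const double *vec_im,
                     slong n, slong prec)
{
    acb_ptr E_approx = _acb_vec_init(n);
    acb_mat_t R_approx;
    slong i, j;
    int success;

    acb_mat_init(R_approx, n, n);

    for (i = 0; i < n; i++)
    {
        acb_set_d_d(E_approx + i, lam_re[i], lam_im[i]);
        for (j = 0; j < n; j++)  /* column i of R_approx = approximate eigenvector i */
            acb_set_d_d(acb_mat_entry(R_approx, j, i),
                        vec_re[i * n + j], vec_im[i * n + j]);
    }

    /* Stage 2 only: verify the approximations and produce certified
       enclosures E (eigenvalues) and L, R (left/right eigenvectors). */
    success = acb_mat_eig_simple(E, L, R, A, E_approx, R_approx, prec);

    _acb_vec_clear(E_approx, n);
    acb_mat_clear(R_approx);
    return success;
}
```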
Stage 1
Right now, Arb only provides one method for stage 1:
- acb_mat_approx_eig_qr - this computes floating-point approximations of the eigenvalues (and optionally left and right eigenvectors) using the standard implicitly shifted QR algorithm with reduction to Hessenberg form. Uses $O(n^3)$ operations. The code is an adaptation of the eig() function in mpmath, contributed to that project by Timo Hartmann a few years ago. I have only made minor changes to the algorithm; it should produce similar results (up to rounding differences), and for most purposes could be used as a drop-in replacement for mpmath's eig() for users who just want more speed and don't care about stage 2.
This implementation handles arbitrary complex matrices and should be quite robust, but it is suboptimal for real matrices and for symmetric/Hermitian matrices, so more algorithms should be implemented in the future. Eigenvalue algorithms are a major subfield of numerical analysis; suffice it to say, there are textbook methods that do the job well, though trying to do it really well in all kinds of special cases is guaranteed to lead down a rabbit hole!
Stage 2
Various algorithms for stage 2 exist in the literature. The usual idea is to view the eigenproblem as a fixed-point problem: if we have a sufficiently accurate approximation, we can find a neighborhood of this approximation where a suitable continuous mapping is contracting; fixed-point theorems then imply the existence of a solution in this neighborhood (it takes some more work to establish uniqueness). A schematic version of this idea is sketched after the list below. For stage 2, Arb currently provides the following methods:
- acb_mat_eig_global_enclosure - fast $O(n^3)$ algorithm implementing a theorem by Shinya Miyajima. This gives a good enclosure for the entire spectrum, but does not isolate the individual eigenvalues from each other.
- acb_mat_eig_enclosure_rump - verifies existence and computes error bounds for a given eigenvalue-eigenvector pair or a given eigenvalue cluster along with basis vectors spanning the invariant subspace associated to that cluster. This function implements the Newton-like algorithm by Siegfried Rump, which is the basis of the verifyeig() function in INTLAB. Uses $O(n^3)$ operations.
- acb_mat_eig_simple_rump - certifies all eigenvalues and optionally eigenvectors of the matrix, producing a complete diagonalization. Uses $O(n^4)$ operations since the eigenvalues are certified one at a time. This algorithm can only succeed for isolated simple eigenvalues (success proves that there are $n$ simple eigenvalues); it reports failure in case of possible multiple/overlapping eigenvalues.
- acb_mat_eig_simple_vdhoeven_mourrain - as above, but uses the recent algorithm by Joris van der Hoeven and Bernard Mourrain, which certifies all eigenvalues and eigenvectors in a single step, requiring only $O(n^3)$ operations. The drawback seems to be that the enclosures are a little worse.
- acb_mat_eig_simple - currently equivalent to acb_mat_eig_simple_vdhoeven_mourrain.
- acb_mat_eig_multiple_rump - certifies all eigenvalues, with correct multiplicities in case of overlap. Possible overlapping eigenvalues get grouped into clusters automatically. Requires $O(m n^3)$ operations for $m$ clusters.
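To make the fixed-point idea a little more concrete, here is a schematic sketch; this is the general shape of such certification arguments, not the exact formulation used by any particular function listed above. For a simple eigenpair, one can fix a component of the eigenvector, say $e_k^T x = 1$, and write the eigenproblem as the square nonlinear system
$$F(x, \lambda) = \begin{pmatrix} A x - \lambda x \\ e_k^T x - 1 \end{pmatrix} = 0.$$
Given an approximate solution $\tilde y = (\tilde x, \tilde \lambda)$ and an approximate inverse $C$ of the Jacobian $F'(\tilde y)$, one evaluates a Krawczyk-type operator
$$K(Y) = \tilde y - C F(\tilde y) + \bigl(I - C F'(Y)\bigr)(Y - \tilde y)$$
on a ball vector $Y$ containing $\tilde y$. If $K(Y)$ lies in the interior of $Y$, then $F$ has a zero in $Y$; a bound such as $\|I - C F'(Y)\| < 1$ additionally gives uniqueness.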
That's a lot of methods, but most users will not need all the details! The typical usage pattern is simply to call acb_mat_approx_eig_qr followed by acb_mat_eig_simple. If that fails, the user can either retry with higher precision (if the goal is to isolate all eigenvalues and diagonalize the matrix) or call acb_mat_eig_multiple_rump (if isolating eigenvalue clusters is sufficient). The user might also try acb_mat_eig_simple_rump instead of (or after) acb_mat_eig_simple since it seems to give better error bounds, though this might be slower than just using higher precision.
In special applications, users might want to invoke acb_mat_eig_enclosure_rump manually for fine-grained handling of clusters; or they might want to use acb_mat_eig_global_enclosure to check some property of the spectrum. More methods should be added in the future; for example, there should be some interface to perform block diagonalization in case of overlapping eigenvalues. There is also a more general version of the van der Hoeven and Mourrain algorithm that can deal with clusters in $O(n^3)$ operations, but it has not yet been implemented.
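Put together, the recommended pattern looks roughly like the following sketch (assuming A, E, L, R, n and prec have been set up as in the complete example programs further down):

```c
/* Stage 1: floating-point approximations of eigenvalues and right eigenvectors */
acb_mat_approx_eig_qr(E, NULL, R, A, NULL, 0, prec);

/* Stage 2: try to certify a full diagonalization first */
if (acb_mat_eig_simple(E, L, R, A, E, R, prec))
{
    /* all n eigenvalues certified simple; E, L, R form a diagonalization */
}
else if (acb_mat_eig_multiple_rump(E, A, E, R, prec))
{
    /* eigenvalues certified with correct multiplicities (clusters grouped),
       but without a certified diagonalization */
}
else
{
    /* neither certification succeeded: retry the whole computation
       at higher precision */
}
```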
Example: a nice matrix
As a well-conditioned complex test matrix, we can take $A_{j,k} = \exp(i (jn+k)^2)$, $0 \le j, k < n$. The table shows timings for mpmath (arbitrary-precision floating-point without rigorous error bounds), the Julia package GenericLinearAlgebra.jl with BigFloat matrices (also arbitrary-precision floating-point without rigorous error bounds), and three different ways using Arb:
- Stage 1 approximation alone (qr)
- Stage 1 + stage 2 using the slower more accurate Rump certification method (qr+simple_rump)
- Stage 1 + stage 2 using the faster default van der Hoeven - Mourrain method (qr+simple)
n | prec (bits) | mpmath | GenericLinearAlgebra.jl | Arb (qr) | Arb (qr+simple_rump) | Arb (qr+simple) |
---|---|---|---|---|---|---|
10 | 128 | 0.22 s | 0.021 s | 0.0036 s | 0.0082 s | 0.0045 s |
10 | 384 | 0.29 s | 0.043 s | 0.011 s | 0.022 s | 0.013 s |
100 | 128 | 171 s | 8.8 s | 2.5 s | 18.2 s | 2.9 s |
100 | 384 | 223 s | 18.5 s | 8.7 s | 59 s | 9.8 s |
It is nice to see that Arb has a comfortable lead in performance (at least for a generic complex matrix -- both mpmath and GenericLinearAlgebra.jl have better code for structured matrices)! Moreover, the fast certification method has almost negligible overhead; we pay very little for error bounds.
I used the following mpmath code, which returns both eigenvalues and right eigenvectors (with an additional flag, it is possible to compute the left eigenvectors as well):
```python
mp.prec = 128; n = 100
A = matrix([[expj((i*n+j)**2) for j in range(n)] for i in range(n)])
timing(eig, A)
```
I used the following Julia code, which only computes eigenvalues (GenericLinearAlgebra.jl seems to have an incomplete interface and lacks documentation; I could not figure out how to obtain eigenvectors):
```julia
setprecision(128); n = 100
A = [exp(big((i*n+j)^2)*1im) for i in 0:n-1, j in 0:n-1];
@time GenericLinearAlgebra._eigvals!(A)
```
Here is the C code for Arb. In fact, this program computes both left and right eigenvectors in addition to eigenvalues, so it is doing slightly more work. Of course, convenient interfaces will soon be available in Nemo, Python-FLINT, and elsewhere, making one-liners possible!
#include "acb_mat.h" #include "flint/profiler.h" int main() { acb_mat_t A, L, R, T, D; acb_ptr E; slong n, i, j, prec; int success; prec = 128; n = 100; acb_mat_init(A, n, n); acb_mat_init(L, n, n); acb_mat_init(R, n, n); acb_mat_init(D, n, n); acb_mat_init(T, n, n); E = _acb_vec_init(n); for (i = 0; i < n; i++) { for (j = 0; j < n; j++) { acb_set_ui(acb_mat_entry(A, i, j), (i * n + j) * (i * n + j)); acb_mul_onei(acb_mat_entry(A, i, j), acb_mat_entry(A, i, j)); acb_exp(acb_mat_entry(A, i, j), acb_mat_entry(A, i, j), prec); } } TIMEIT_START /* Stage 1 */ acb_mat_approx_eig_qr(E, NULL, R, A, NULL, 0, prec); /* Stage 2 */ #if 1 /* Faster, less accurate */ success = acb_mat_eig_simple(E, L, R, A, E, R, prec); #else /* Slower, more accurate */ success = acb_mat_eig_simple_rump(E, L, R, A, E, R, prec); #endif TIMEIT_STOP if (success) { printf("Eigenvalues:\n"); for (i = 0; i < n; i++) { acb_printn(E + i, prec * 0.333, 0); printf("\n"); } for (i = 0; i < n; i++) acb_set(acb_mat_entry(D, i, i), E + i); acb_mat_mul(T, D, L, prec); acb_mat_mul(T, R, T, prec); acb_mat_sub(T, T, A, prec); printf("RDL - A = \n"); acb_printn(acb_mat_entry(T, 0, 0), 10, 0); printf(" ... "); acb_printn(acb_mat_entry(T, 0, n - 1), 10, 0); printf("\n ... \n"); acb_printn(acb_mat_entry(T, n - 1, 0), 10, 0); printf(" ... "); acb_printn(acb_mat_entry(T, n - 1, n - 1), 10, 0); printf("\n"); } acb_mat_clear(A); acb_mat_clear(L); acb_mat_clear(R); acb_mat_clear(T); acb_mat_clear(D); _acb_vec_clear(E, n); }
This is the output of the program ($n = 100$, 128-bit precision):
```
cpu/wall(s): 2.898 2.898
Eigenvalues:
[5.56068827954618728916732365251909 +/- 7.80e-33] + [-9.3408543292239753150589171562723 +/- 2.51e-32]*I
[4.2365569359408181242510429145996 +/- 5.23e-32] + [-9.7204690652877413035789086363927 +/- 2.11e-32]*I
[4.22669960816308674112303111406704 +/- 9.37e-33] + [-9.21193845169088313034537528933005 +/- 9.30e-33]*I
[9.1920595249637687353709979820349 +/- 3.40e-32] + [-3.4848536118003011857135351969476 +/- 5.56e-32]*I
[-2.0337682036412271375391843005501 +/- 3.15e-32] + [-9.7140402264690709257114992644427 +/- 5.51e-32]*I
...
[0.0436675130324122521996807479373 +/- 1.36e-32] + [0.02868114819405961293161026215550 +/- 9.53e-33]*I
[0.0358798755734110362961328968563 +/- 4.18e-32] + [0.08818619538306771869366908847318 +/- 8.43e-33]*I
[-0.02085291987296907286863875234738 +/- 8.62e-33] + [0.13781646138722583186333043461477 +/- 9.60e-33]*I
[-0.0856748889145463293201183909735 +/- 4.13e-32] + [0.2719270607624117267774103444137 +/- 3.18e-32]*I
[0.0996054545744685766267403307648 +/- 4.41e-32] + [0.4631153073409059953970733582965 +/- 2.52e-32]*I
RDL - A = 
[+/- 1.89e-22] + [+/- 1.89e-22]*I ... [+/- 6.95e-22] + [+/- 6.95e-22]*I
 ... 
[+/- 2.04e-22] + [+/- 2.04e-22]*I ... [+/- 7.51e-22] + [+/- 7.51e-22]*I
```
This is the output with the certification method changed to acb_mat_eig_simple_rump:
```
cpu/wall(s): 18.239 18.256
Eigenvalues:
[5.56068827954618728916732365251909001 +/- 6.19e-36] + [-9.34085432922397531505891715627228269 +/- 3.17e-36]*I
[4.23655693594081812425104291459964451 +/- 4.38e-36] + [-9.72046906528774130357890863639271330 +/- 3.98e-36]*I
[4.22669960816308674112303111406703842 +/- 4.35e-36] + [-9.21193845169088313034537528933004849 +/- 3.74e-36]*I
[9.1920595249637687353709979820348739 +/- 5.29e-35] + [-3.4848536118003011857135351969476478 +/- 4.03e-35]*I
[-2.03376820364122713753918430055007634 +/- 8.67e-36] + [-9.71404022646907092571149926444274722 +/- 6.19e-36]*I
...
[0.04366751303241225219968074793730582 +/- 3.75e-36] + [0.02868114819405961293161026215550181 +/- 7.77e-36]*I
[0.03587987557341103629613289685633402 +/- 7.46e-36] + [0.08818619538306771869366908847318071 +/- 3.70e-36]*I
[-0.02085291987296907286863875234738090 +/- 2.20e-36] + [0.13781646138722583186333043461477188 +/- 3.70e-36]*I
[-0.08567488891454632932011839097353355 +/- 3.75e-36] + [0.27192706076241172677741034441372401 +/- 3.59e-36]*I
[0.09960545457446857662674033076483631 +/- 2.70e-36] + [0.46311530734090599539707335829648252 +/- 2.93e-36]*I
RDL - A = 
[+/- 8.10e-30] + [+/- 8.10e-30]*I ... [+/- 3.00e-29] + [+/- 3.00e-29]*I
 ... 
[+/- 8.75e-30] + [+/- 8.75e-30]*I ... [+/- 3.23e-29] + [+/- 3.23e-29]*I
```
We lose some digits in the certification stage with either method (128 bits should give 39 digits), but the accuracy looks good enough for practical use!
Example: a matrix with repeated eigenvalues
To demonstrate that overlapping eigenvalues can be handled, we consider the DFT matrix $$A_{j,k} = \frac{\omega^{jk}}{\sqrt{n}}, \quad \omega = e^{-2\pi i/n}.$$ The exact version of this matrix only has eigenvalues in the set $\{1, -1, i, -i\}$, each appearing with multiplicity about $n/4$. The code snippets above can be modified as follows:
mpmath:

```python
A = matrix([[exp(1j*2*pi*i*j/n)/sqrt(n) for j in range(n)] for i in range(n)])
```

Julia:

```julia
A = [exp(-2im*big(pi)*i*j/n)/sqrt(big(n)) for i in 0:n-1, j in 0:n-1];
```

Arb:

```c
acb_mat_dft(A, 0, prec);
...
success = acb_mat_eig_multiple_rump(E, A, E, R, prec);
```
The Arb program isolates the four eigenvalue clusters with the correct multiplicities. Of course, using interval computations alone, it is impossible to distinguish between eigenvalues that actually have multiplicity $k > 1$ and $k$-fold clusters that are merely too close to resolve. Here for $n = 10$ and with 128-bit precision:
```
cpu/wall(s): 0.00481 0.00481
Eigenvalues:
[+/- 1.55e-37] + [1.00000000000000000000000000000000000 +/- 2.33e-37]*I
[+/- 1.55e-37] + [1.00000000000000000000000000000000000 +/- 2.33e-37]*I
[-1.00000000000000000000000000000000000 +/- 2.42e-37] + [+/- 1.74e-37]*I
[-1.00000000000000000000000000000000000 +/- 2.42e-37] + [+/- 1.74e-37]*I
[-1.00000000000000000000000000000000000 +/- 2.42e-37] + [+/- 1.74e-37]*I
[1.00000000000000000000000000000000000 +/- 2.03e-37] + [+/- 1.51e-37]*I
[1.00000000000000000000000000000000000 +/- 2.03e-37] + [+/- 1.51e-37]*I
[1.00000000000000000000000000000000000 +/- 2.03e-37] + [+/- 1.51e-37]*I
[+/- 1.08e-37] + [-1.00000000000000000000000000000000000 +/- 1.15e-37]*I
[+/- 1.08e-37] + [-1.00000000000000000000000000000000000 +/- 1.15e-37]*I
```
The certification step fails for $n = 100$ with 128-bit precision. The approximate eigenvalues computed by the QR algorithm are in fact accurate to nearly the full precision here, but the certification is too sensitive. The certification does succeed at 384 bits (in fact, it also succeeds at 192 bits).
n | prec (bits) | mpmath | GenericLinearAlgebra.jl | Arb (qr) | Arb (qr+multiple_rump) |
---|---|---|---|---|---|
10 | 128 | 0.17 s | 0.016 s | 0.0026 s | 0.0048 s |
10 | 384 | 0.17 s | 0.036 s | 0.0065 s | 0.011 s |
100 | 128 | 111 s | 5.6 s | 1.7 s | fails |
100 | 384 | 129 s | 8.9 s | 4.7 s | 8.5 s |
Example: many tiny eigenvalues
As the third and last example for this blog post, we consider the ill-conditioned Hilbert-like matrix $A_{j,k} = 1/(j+k+i)$ (where $i = \sqrt{-1}$). This matrix has $n$ simple eigenvalues, but the eigenvalues are clustered extremely close to 0. Arbitrary-precision ball arithmetic comes in handy; we can just double the precision until all the eigenvalues have been isolated.
#include "acb_mat.h" #include "flint/profiler.h" int main() { acb_mat_t A, L, R, T, D; acb_ptr E; slong n, i, j, prec; int success; n = 100; acb_mat_init(A, n, n); acb_mat_init(L, n, n); acb_mat_init(R, n, n); acb_mat_init(D, n, n); acb_mat_init(T, n, n); E = _acb_vec_init(n); TIMEIT_START for (prec = 64; ; prec *= 2) { for (i = 0; i < n; i++) { for (j = 0; j < n; j++) { acb_set_d_d(acb_mat_entry(A, i, j), i + j, 1); acb_inv(acb_mat_entry(A, i, j), acb_mat_entry(A, i, j), prec); } } acb_mat_approx_eig_qr(E, L, R, A, NULL, 0, prec); success = acb_mat_eig_simple(E, L, R, A, E, R, prec); if (success) break; } TIMEIT_STOP printf("Finished at prec = %ld\n", prec); printf("Eigenvalues:\n"); for (i = 0; i < n; i++) { acb_printn(E + i, 10, 0); printf("\n"); } for (i = 0; i < n; i++) acb_set(acb_mat_entry(D, i, i), E + i); acb_mat_mul(T, D, L, prec); acb_mat_mul(T, R, T, prec); acb_mat_sub(T, T, A, prec); printf("RDL - A = \n"); acb_printn(acb_mat_entry(T, 0, 0), 10, 0); printf(" ... "); acb_printn(acb_mat_entry(T, 0, n - 1), 10, 0); printf("\n ... \n"); acb_printn(acb_mat_entry(T, n - 1, 0), 10, 0); printf(" ... "); acb_printn(acb_mat_entry(T, n - 1, n - 1), 10, 0); printf("\n"); acb_mat_clear(A); acb_mat_clear(L); acb_mat_clear(R); acb_mat_clear(T); acb_mat_clear(D); _acb_vec_clear(E, n); }
Output for $n = 10$:
```
cpu/wall(s): 0.00678 0.00678
Finished at prec = 128
Eigenvalues:
[1.366155652 +/- 1.75e-10] + [-0.8100049688 +/- 2.52e-11]*I
[-0.05300699850 +/- 3.65e-12] + [-0.4074452918 +/- 4.65e-12]*I
[-0.02118801282 +/- 4.48e-12] + [-0.1033603262 +/- 8.83e-12]*I
[-0.0008899507858 +/- 4.40e-14] + [-0.008746380333 +/- 3.21e-13]*I
[-1.919387003e-13 +/- 1.03e-23] + [-5.835328934e-13 +/- 5.79e-24]*I
[-3.289015028e-11 +/- 4.18e-21] + [-1.157824254e-10 +/- 2.17e-20]*I
[-2.488965866e-9 +/- 3.86e-20] + [-1.041748265e-8 +/- 2.43e-18]*I
[-1.090083000e-7 +/- 4.97e-17] + [-5.609401403e-7 +/- 4.98e-17]*I
[-3.053581711e-6 +/- 3.81e-16] + [-2.006557694e-5 +/- 1.48e-15]*I
[-5.838546284e-5 +/- 2.91e-16] + [-0.0004992252671 +/- 4.04e-14]*I
RDL - A = 
[+/- 3.34e-15] + [+/- 3.34e-15]*I ... [+/- 1.95e-14] + [+/- 1.95e-14]*I
 ... 
[+/- 7.18e-16] + [+/- 7.18e-16]*I ... [+/- 4.20e-15] + [+/- 4.20e-15]*I
```
Output for $n = 30$:
```
cpu/wall(s): 0.611 0.611
Finished at prec = 512
Eigenvalues:
[1.687077371 +/- 2.35e-10] + [-0.6297920399 +/- 3.99e-11]*I
[0.1880138546 +/- 3.19e-11] + [-0.4493444206 +/- 6.56e-12]*I
[-0.02852125579 +/- 4.74e-12] + [-0.2235084246 +/- 1.27e-12]*I
[0.008684977140 +/- 3.66e-13] + [-0.03936255529 +/- 2.18e-12]*I
[0.001313131456 +/- 2.20e-13] + [-0.005198151724 +/- 4.71e-13]*I
[0.0001503259292 +/- 2.04e-14] + [-0.0005992190914 +/- 4.08e-14]*I
[1.435903435e-5 +/- 3.47e-15] + [-6.062632874e-5 +/- 1.19e-15]*I
[1.178937306e-6 +/- 3.66e-16] + [-5.429673203e-6 +/- 2.84e-17]*I
[8.431431171e-8 +/- 7.9e-20] + [-4.330614883e-7 +/- 4.19e-17]*I
[5.282830896e-9 +/- 4.91e-19] + [-3.088670229e-8 +/- 1.98e-18]*I
[2.902835594e-10 +/- 3.27e-20] + [-1.975167415e-9 +/- 2.50e-19]*I
[1.394282269e-11 +/- 4.48e-21] + [-1.134310460e-10 +/- 1.96e-20]*I
[5.803704683e-13 +/- 4.95e-23] + [-5.853885237e-12 +/- 4.07e-22]*I
[2.056817313e-14 +/- 3.47e-24] + [-2.714430579e-13 +/- 6.87e-24]*I
[5.975095704e-16 +/- 2.81e-26] + [-1.129910787e-14 +/- 4.96e-24]*I
[1.285534429e-17 +/- 2.87e-27] + [-4.215231323e-16 +/- 3.12e-26]*I
[-5.316892632e-21 +/- 2.13e-31] + [-4.178165550e-19 +/- 3.24e-29]*I
[-3.699945099e-22 +/- 7.12e-33] + [-1.101700146e-20 +/- 8.61e-31]*I
[-1.383247874e-23 +/- 4.03e-33] + [-2.563294313e-22 +/- 2.06e-32]*I
[-6.231879504e-44 +/- 1.16e-55] + [-2.640161375e-43 +/- 6.78e-54]*I
[-3.773231929e-41 +/- 2.09e-51] + [-1.720439248e-40 +/- 2.79e-50]*I
[-1.102552431e-38 +/- 1.10e-48] + [-5.449791797e-38 +/- 2.29e-48]*I
[-2.068511509e-36 +/- 3.86e-46] + [-1.118084391e-35 +/- 1.19e-45]*I
[-2.795459525e-34 +/- 3.93e-44] + [-1.670093916e-33 +/- 1.32e-43]*I
[-2.892579877e-32 +/- 5.26e-43] + [-1.935762120e-31 +/- 4.98e-41]*I
[-2.375921843e-30 +/- 2.24e-40] + [-1.812000229e-29 +/- 3.58e-39]*I
[-1.582782873e-28 +/- 4.84e-38] + [-1.407635988e-27 +/- 3.81e-37]*I
[-8.641388534e-27 +/- 8.82e-38] + [-9.254884331e-26 +/- 1.77e-36]*I
[-3.861307198e-25 +/- 8.09e-36] + [-5.226072728e-24 +/- 4.49e-34]*I
[1.213330658e-19 +/- 6.65e-30] + [-1.405879965e-17 +/- 3.60e-27]*I
RDL - A = 
[+/- 2.63e-80] + [+/- 2.63e-80]*I ... [+/- 2.73e-73] + [+/- 2.73e-73]*I
 ... 
[+/- 4.12e-81] + [+/- 4.12e-81]*I ... [+/- 4.27e-74] + [+/- 4.27e-74]*I
```
Output for $n = 100$:
cpu/wall(s): 96.914 96.93 Finished at prec = 2048 Eigenvalues: [1.943480811 +/- 4.42e-10] + [-0.4816376154 +/- 2.54e-11]*I [0.4799100437 +/- 2.06e-11] + [-0.4651667935 +/- 1.49e-11]*I [-0.01622370499 +/- 1.48e-12] + [-0.2825534762 +/- 3.97e-11]*I [0.04438676167 +/- 1.70e-14] + [-0.1001354185 +/- 3.84e-11]*I [0.01041211575 +/- 2.22e-12] + [-0.01972058401 +/- 4.58e-12]*I [0.002101687354 +/- 2.27e-13] + [-0.003793420616 +/- 1.39e-13]*I [0.0003851729469 +/- 1.57e-14] + [-0.0006856534531 +/- 4.21e-14]*I [6.534312545e-5 +/- 1.65e-15] + [-0.0001166863523 +/- 2.96e-14]*I [1.037827781e-5 +/- 1.53e-15] + [-1.877189987e-5 +/- 5.23e-16]*I [1.554461268e-6 +/- 2.36e-16] + [-2.864793694e-6 +/- 1.46e-16]*I [2.206681813e-7 +/- 4.14e-17] + [-4.159558286e-7 +/- 4.10e-17]*I [2.979913328e-8 +/- 4.83e-18] + [-5.760126057e-8 +/- 3.66e-18]*I [3.838849263e-9 +/- 4.17e-19] + [-7.623306916e-9 +/- 1.17e-20]*I [4.728351719e-10 +/- 7.58e-21] + [-9.659281431e-10 +/- 8.64e-21]*I [5.578629723e-11 +/- 1.02e-21] + [-1.173526849e-10 +/- 2.63e-20]*I [6.314196219e-12 +/- 1.67e-22] + [-1.368856294e-11 +/- 2.79e-21]*I [6.865023122e-13 +/- 1.23e-23] + [-1.534748270e-12 +/- 2.19e-22]*I [7.177586545e-14 +/- 4.97e-24] + [-1.655649735e-13 +/- 5.12e-24]*I [7.223381555e-15 +/- 1.04e-25] + [-1.720038713e-14 +/- 2.43e-25]*I [7.003048810e-16 +/- 2.97e-26] + [-1.722221742e-15 +/- 2.09e-25]*I [6.545327537e-17 +/- 2.58e-27] + [-1.663128360e-16 +/- 2.82e-26]*I [5.901297596e-18 +/- 4.39e-28] + [-1.549958405e-17 +/- 4.95e-28]*I [5.135423126e-19 +/- 1.96e-29] + [-1.394808303e-18 +/- 1.43e-29]*I [4.315477405e-20 +/- 3.93e-31] + [-1.212623710e-19 +/- 8.24e-30]*I [3.503407823e-21 +/- 2.10e-31] + [-1.018939614e-20 +/- 3.16e-31]*I [2.748678105e-22 +/- 2.42e-32] + [-8.278559730e-22 +/- 2.98e-32]*I [2.084824626e-23 +/- 4.15e-33] + [-6.505780438e-23 +/- 3.44e-34]*I [1.529148270e-24 +/- 3.02e-34] + [-4.946745039e-24 +/- 3.63e-34]*I [1.084846797e-25 +/- 7.80e-36] + [-3.640296178e-25 +/- 1.11e-35]*I [7.445842868e-27 +/- 1.65e-37] + [-2.593331792e-26 +/- 1.67e-36]*I [4.944910513e-28 +/- 2.73e-38] + [-1.788863951e-27 +/- 3.28e-37]*I [3.178044355e-29 +/- 2.80e-39] + [-1.195021125e-28 +/- 2.46e-38]*I [1.976797700e-30 +/- 2.72e-40] + [-7.732542808e-30 +/- 3.68e-40]*I [1.190132959e-31 +/- 1.79e-41] + [-4.847032399e-31 +/- 3.95e-41]*I [6.935482229e-33 +/- 1.16e-43] + [-2.943636655e-32 +/- 4.27e-42]*I [3.912096624e-34 +/- 3.43e-45] + [-1.732148008e-33 +/- 3.57e-44]*I [1.128703677e-36 +/- 4.20e-46] + [-5.457173233e-36 +/- 3.57e-47]*I [5.772428016e-38 +/- 3.72e-48] + [-2.921989229e-37 +/- 4.33e-47]*I [2.856737124e-39 +/- 4.96e-49] + [-1.516143717e-38 +/- 4.30e-49]*I [1.367886380e-40 +/- 2.18e-50] + [-7.623347080e-40 +/- 3.88e-50]*I [6.336024096e-42 +/- 6.4e-54] + [-3.714315864e-41 +/- 6.41e-52]*I [2.838394121e-43 +/- 3.78e-53] + [-1.753540190e-42 +/- 3.47e-52]*I [1.229422235e-44 +/- 1.63e-54] + [-8.020917188e-44 +/- 3.70e-54]*I [5.147123787e-46 +/- 4.99e-57] + [-3.554374440e-45 +/- 4.37e-55]*I [2.082105239e-47 +/- 5.27e-58] + [-1.525753265e-46 +/- 2.84e-57]*I [3.067758194e-50 +/- 3.78e-61] + [-2.554085325e-49 +/- 3.62e-59]*I [1.116160792e-51 +/- 4.07e-61] + [-9.956933814e-51 +/- 2.74e-61]*I [3.915107085e-53 +/- 2.98e-63] + [-3.757664802e-52 +/- 4.53e-62]*I [1.322863291e-54 +/- 3.13e-64] + [-1.372526876e-53 +/- 1.01e-63]*I [4.301434802e-56 +/- 3.41e-66] + [-4.851020547e-55 +/- 4.05e-65]*I [1.344373239e-57 +/- 3.03e-67] + [-1.658610372e-56 +/- 1.61e-66]*I [4.032677559e-59 +/- 1.75e-69] + [-5.484466891e-58 +/- 2.90e-68]*I [1.158844817e-60 +/- 4.89e-70] + 
[-1.753367034e-59 +/- 1.95e-70]*I [3.182487112e-62 +/- 3.25e-73] + [-5.417745727e-61 +/- 2.02e-71]*I [8.325542481e-64 +/- 3.67e-74] + [-1.617408256e-62 +/- 3.45e-72]*I [2.065377391e-65 +/- 1.67e-75] + [-4.663502336e-64 +/- 5.39e-75]*I [4.826486351e-67 +/- 4.86e-77] + [-1.298136146e-65 +/- 4.47e-76]*I [1.051279623e-68 +/- 1.27e-78] + [-3.487023251e-67 +/- 3.04e-77]*I [2.095198047e-70 +/- 4.51e-81] + [-9.034691954e-69 +/- 3.18e-79]*I [3.679538511e-72 +/- 2.35e-82] + [-2.256724534e-70 +/- 3.97e-80]*I [5.153870935e-74 +/- 2.13e-84] + [-5.431474176e-72 +/- 3.75e-82]*I [-1.106556567e-77 +/- 1.74e-87] + [-2.808037704e-75 +/- 1.58e-85]*I [-6.360299162e-79 +/- 4.38e-89] + [-6.024148669e-77 +/- 2.76e-87]*I [-2.125546686e-80 +/- 3.00e-90] + [-1.242089667e-78 +/- 3.37e-88]*I [-5.805954933e-82 +/- 2.69e-92] + [-2.459492972e-80 +/- 3.60e-90]*I [-1.403872182e-83 +/- 4.52e-94] + [-4.673262893e-82 +/- 9.84e-94]*I [-3.100265990e-85 +/- 3.14e-95] + [-8.513282813e-84 +/- 3.98e-94]*I [-6.348456644e-87 +/- 1.96e-98] + [-1.485482431e-85 +/- 3.77e-95]*I [-1.215348184e-88 +/- 1.55e-98] + [-2.480226502e-87 +/- 3.21e-97]*I [-2.185406674e-90 +/- 8.56e-101] + [-3.958161037e-89 +/- 2.07e-99]*I [-3.701068723e-92 +/- 3.24e-102] + [-6.030596918e-91 +/- 2.74e-101]*I [-5.911613971e-94 +/- 1.06e-104] + [-8.760635019e-93 +/- 2.02e-103]*I [-8.910921278e-96 +/- 2.38e-106] + [-1.211756681e-94 +/- 1.47e-104]*I [-1.267545686e-97 +/- 3.26e-107] + [-1.593465264e-96 +/- 1.40e-106]*I [-1.700622625e-99 +/- 3.46e-109] + [-1.988840351e-98 +/- 1.74e-108]*I [-2.150110040e-101 +/- 4.58e-112] + [-2.351810505e-100 +/- 4.23e-110]*I [-2.558420582e-103 +/- 4.09e-113] + [-2.629586962e-102 +/- 7.18e-113]*I [-2.860520397e-105 +/- 1.56e-115] + [-2.773990056e-104 +/- 1.03e-114]*I [-2.999416566e-107 +/- 5.01e-117] + [-2.754240689e-106 +/- 4.44e-116]*I [-2.942732987e-109 +/- 2.85e-119] + [-2.566892738e-108 +/- 4.25e-118]*I [-2.694170329e-111 +/- 1.48e-121] + [-2.238789094e-110 +/- 1.37e-120]*I [-2.294624592e-113 +/- 4.41e-123] + [-1.821149410e-112 +/- 4.18e-122]*I [-1.811548910e-115 +/- 1.30e-125] + [-1.376376283e-114 +/- 1.09e-124]*I [-1.320169929e-117 +/- 3.30e-127] + [-9.622434683e-117 +/- 6.15e-128]*I [-8.837733397e-120 +/- 3.56e-130] + [-6.191536078e-119 +/- 2.36e-129]*I [-5.403891375e-122 +/- 3.94e-132] + [-3.645276171e-121 +/- 4.25e-131]*I [-2.997728722e-124 +/- 4.10e-134] + [-1.950223764e-123 +/- 3.42e-133]*I [-1.496504616e-126 +/- 3.84e-136] + [-9.403415631e-126 +/- 3.20e-136]*I [-6.656969031e-129 +/- 2.92e-139] + [-4.045740498e-128 +/- 3.13e-139]*I [-2.606540912e-131 +/- 2.19e-142] + [-1.534104432e-130 +/- 4.15e-140]*I [-8.844423015e-134 +/- 1.41e-144] + [-5.047133393e-133 +/- 3.56e-143]*I [-2.547997802e-136 +/- 2.21e-146] + [-1.411367673e-135 +/- 1.69e-145]*I [-6.060065846e-139 +/- 3.90e-149] + [-3.261621649e-138 +/- 7.27e-149]*I [-1.142433244e-141 +/- 4.22e-151] + [-5.980297546e-141 +/- 4.98e-152]*I [-1.600600559e-144 +/- 7.48e-155] + [-8.156525245e-144 +/- 4.20e-154]*I [-1.481598962e-147 +/- 4.32e-157] + [-7.356206837e-147 +/- 1.55e-158]*I [-6.796451170e-151 +/- 6.07e-162] + [-3.290452259e-150 +/- 2.90e-160]*I [3.450608848e-76 +/- 1.22e-86] + [-1.258871913e-73 +/- 2.77e-83]*I [8.134442110e-49 +/- 2.34e-59] + [-6.343519049e-48 +/- 6.79e-59]*I [2.135919472e-35 +/- 2.74e-45] + [-9.876578677e-35 +/- 1.22e-45]*I RDL - A = [+/- 5.37e-344] + [+/- 5.37e-344]*I ... [+/- 9.87e-328] + [+/- 9.87e-328]*I ... [+/- 3.51e-345] + [+/- 3.51e-345]*I ... [+/- 6.46e-329] + [+/- 6.46e-329]*I
I did not do a detailed comparison with mpmath and GenericLinearAlgebra.jl for this example, but basically, these implementations just produce noise for anything smaller than the epsilon of the working precision, so the user has to detect such cases with heuristics which may or may not be reliable...
To do
As I already pointed out, eigenvalue algorithms are a huge research area, and this is just the first iteration of the code, so a lot of things can be improved. In particular, much better code is possible for real/symmetric/Hermitian matrices. Contributions would of course be most welcome, since my own time is finite. A contributor did actually submit a pull request for eigenvalues of symmetric matrices a few years ago, but I did not merge it since some issues with the code were not resolved.
Significant improvements are also possible for the generic QR algorithm, including low-level optimizations, balancing for poorly scaled matrices, use of blocks to benefit from matrix multiplication (for huge matrices), and parallelization.
There is also the possibility of using mixed precision and iterative refinement. The support for eigenvalue clusters can definitely be improved. The list goes on...
Independently of this, it would be nice to improve the polynomial root-finding code in Arb too; it would then be interesting to compare finding eigenvalues via the characteristic polynomial and finding polynomial roots via the companion matrix, for different ranges of precisions and sizes.
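As a hypothetical sketch of the companion matrix direction (the helper name and calling convention are made up for illustration, using the stage 1 and stage 2 functions from above plus basic acb operations), one could do something like this for a monic polynomial $x^n + c_{n-1} x^{n-1} + \cdots + c_0$ with coefficients given as an acb vector:

```c
#include "acb_mat.h"

/* Hypothetical sketch: enclose the roots of a monic polynomial by computing
   the eigenvalues of its companion matrix. Returns the success flag of the
   certification; on failure, one would retry at higher precision. */
int roots_via_companion(acb_ptr E, acb_srcptr c, slong n, slong prec)
{
    acb_mat_t A, R;
    slong i;
    int success;

    acb_mat_init(A, n, n);
    acb_mat_init(R, n, n);

    acb_mat_zero(A);
    for (i = 0; i + 1 < n; i++)
        acb_one(acb_mat_entry(A, i + 1, i));          /* ones on the subdiagonal */
    for (i = 0; i < n; i++)
        acb_neg(acb_mat_entry(A, i, n - 1), c + i);   /* last column: -c[i] */

    acb_mat_approx_eig_qr(E, NULL, R, A, NULL, 0, prec);     /* stage 1 */
    success = acb_mat_eig_multiple_rump(E, A, E, R, prec);   /* stage 2 */

    acb_mat_clear(A);
    acb_mat_clear(R);
    return success;
}
```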