
Ball linear algebra

September 29, 2012

I’ve tagged version 0.2 of Arb (documentation here). The changes from version 0.1 are:

There are a couple of features on that list that I haven’t written about yet; let’s take matrices today. Some of the usual algorithms of linear algebra translate to ball/interval arithmetic in a straightforward way, but some subtle issues arise if we want to obtain meaningful results when matrices don’t have full rank, or rather don’t have full rank numerically.

Computing the determinant of a matrix by reducing it to upper triangular form is a good example. If you do Gaussian elimination in exact arithmetic, you either keep finding nonzero pivot elements, meaning that the matrix certainly is nonsingular, or at some point you only find zeros in a column, meaning that the matrix certainly is singular.

With ball or interval arithmetic, what can happen is that the entries in a column are intervals containing zero. All we can say in this situation is that the matrix might be singular. Stopping and returning zero would be incorrect, because the determinant could be some small number, which we simply failed to isolate from zero due to the numerical precision being insufficient. If we ignore the fact that the intervals contain zero and just continue with the standard Gaussian elimination algorithm, we end up with an interval of $[-\infty, +\infty]$, which is correct but useless, because the determinant is certainly bounded (unless there were infinities in the matrix to begin with).
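To make this concrete, the pivot search has to treat a ball that straddles zero as unusable. Here is a minimal sketch written against the present-day Arb API (arb_mat_entry, arb_contains_zero; the version 0.2 interface used different names), where find_pivot is a hypothetical helper rather than Arb’s internal code:

```c
#include "arb_mat.h"

/* Hypothetical helper: find a row at or below `start` whose entry in
   column `col` is a ball provably excluding zero. Returns -1 if every
   candidate contains zero, i.e. the matrix might be singular at the
   working precision. */
slong find_pivot(const arb_mat_t A, slong col, slong start)
{
    slong i;

    for (i = start; i < arb_mat_nrows(A); i++)
        if (!arb_contains_zero(arb_mat_entry(A, i, col)))
            return i;

    return -1;
}
```

When find_pivot returns -1, the elimination cannot continue safely, and this is exactly where the fallback described next comes in.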

A better solution is to do Gaussian elimination as far as possible and then switch to a division-free algorithm to bound the determinant of the remaining submatrix. Note that at this point we have

$$\det\begin{pmatrix}U & R\\ 0 & S\end{pmatrix} = \det(U) \det(S)$$

where $U$ is the reduced upper triangular part (which certainly has a nonzero determinant) and $S$ is the remaining, possibly singular, part. What I’ve done for now is to use the Hadamard bound to compute a bound $[-B, B]$ for the determinant of the remaining submatrix $S$. This has the advantage of requiring only $O(n^2)$ operations; an alternative would be to switch to, say, the Berkowitz algorithm, which might give a more accurate bound, but then the cost would increase to $O(n^4)$.
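For reference, the Hadamard bound for an $m \times m$ matrix $S$ is the product of the Euclidean norms of its rows,

$$|\det(S)| \le \prod_{i=1}^{m} \left( \sum_{j=1}^{m} |s_{ij}|^2 \right)^{1/2},$$

and with ball entries we can evaluate the right-hand side using an upper bound for each $|s_{ij}|$, which gives the $B$ above.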

To see how this plays out, we try computing the determinant of the $n$th Hilbert matrix, a notoriously ill-conditioned beast which only has small fractions as entries but whose determinant decreases roughly like $4^{-n^2}$. Using 53-bit precision, we get the following results, where I have included the decimal value of the exact determinant as computed by FLINT’s fmpq_mat_det function (all decimal values below have been rounded to five digits to keep the table compact):

 n   Exact          Ball arithmetic
 0   1              1 +/- 0
 1   1              1 +/- 0
 2   0.083333       0.083333 +/- 5.5511e-17
 3   0.00046296     0.00046296 +/- 1.8455e-17
 4   1.6534e-07     1.6534e-07 +/- 3.4645e-19
 5   3.7493e-12     3.7493e-12 +/- 1.2605e-21
 6   5.3673e-18     5.3673e-18 +/- 2.1465e-25
 7   4.8358e-25     4.8358e-25 +/- 2.3744e-30
 8   2.7371e-33     2.737e-33 +/- 3.5949e-36
 9   9.7202e-43     9.7202e-43 +/- 1.8249e-43
10   2.1642e-53     0 +/- 9.3621e-52
11   3.0191e-65     0 +/- 3.4621e-60
12   2.6378e-78     0 +/- 3.4343e-68
13   1.4429e-92     0 +/- 1.8851e-75
14   4.9403e-108    0 +/- 3.2644e-82
15   1.0585e-124    0 +/- 1.2476e-89
16   1.4191e-142    0 +/- 4.8906e-96
17   1.1903e-161    0 +/- 9.8846e-102
18   6.2449e-182    0 +/- 1.4861e-107
19   2.0493e-203    0 +/- 1.2648e-105
20   4.2062e-226    0 +/- 1.732e-112
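As a sketch of how this experiment can be reproduced with the current Arb API (arb_mat_hilbert, arb_mat_det and arb_printd all exist there; the 0.2 interface uses different names, but the structure is the same):

```c
#include "arb_mat.h"

int main(void)
{
    slong n, prec = 53;

    for (n = 0; n <= 20; n++)
    {
        arb_mat_t A;
        arb_t det;

        arb_mat_init(A, n, n);
        arb_init(det);

        arb_mat_hilbert(A, prec);   /* entries 1/(i+j+1), rounded to balls */
        arb_mat_det(det, A, prec);  /* determinant as a ball */

        flint_printf("%wd  ", n);
        arb_printd(det, 5);
        flint_printf("\n");

        arb_clear(det);
        arb_mat_clear(A);
    }

    return 0;
}
```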

We see that 53-bit ball arithmetic is only sufficient to distinguish Hilbert matrices below size 10 from singular matrices; from then on, we obtain a magnitude bound which is correct but progressively loses accuracy. This is fine, since the bound is obtained cheaply; if we know that the matrix is nonsingular, we just have to keep increasing the precision.

To make things interesting, we take a much larger Hilbert matrix, of size 200. The determinant is a fraction $1/q$ where $q$ has 23924 digits. FLINT computes the exact rational determinant in 22 seconds, using a $p$-adic lifting algorithm behind the scenes (Gaussian elimination over the rational numbers would be incredibly slow!). Starting from 100 bits of precision and doubling until the computed ball excludes zero, the determinant bounds computed using ball arithmetic are:

Bits   Result                                       Time
 100   0 +/- 2.1027e-1061                           0.74 s
 200   0 +/- 2.738e-2419                            1.2 s
 400   0 +/- 9.2614e-5247                           2.0 s
 800   0 +/- 2.1169e-11259                          3.6 s
1600   0 +/- 1.2474e-21119                          7.0 s
3200   2.95545429708284e-23924 +/- 1.4149e-24236    15 s
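A sketch of this precision-doubling loop, again in terms of the current FLINT/Arb interface (fmpq_mat_hilbert_matrix, fmpq_mat_det, arb_mat_set_fmpq_mat and arb_contains_zero; the 0.2 names differ):

```c
#include "fmpq_mat.h"
#include "arb_mat.h"

int main(void)
{
    slong n = 200, prec;
    fmpq_mat_t Q;
    arb_mat_t A;
    arb_t det;

    fmpq_mat_init(Q, n, n);
    fmpq_mat_hilbert_matrix(Q);   /* exact rational entries 1/(i+j+1) */

    arb_mat_init(A, n, n);
    arb_init(det);

    /* double the working precision until the ball excludes zero */
    for (prec = 100; ; prec *= 2)
    {
        arb_mat_set_fmpq_mat(A, Q, prec);  /* round entries to balls */
        arb_mat_det(det, A, prec);
        if (!arb_contains_zero(det))
            break;
    }

    flint_printf("nonzero at %wd bits: ", prec);
    arb_printd(det, 15);
    flint_printf("\n");

    arb_clear(det);
    arb_mat_clear(A);
    fmpq_mat_clear(Q);
    return 0;
}
```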

We need a little less than 1000 decimal digits of precision to distinguish the determinant from zero. This turns out to be about as fast as the exact computation, but we obtain much less information; we would have to increase the precision to 25000 digits or so to be able to recover the exact fraction from the numerical interval. In fact, using 83250 bits of precision, it takes 1274 seconds to obtain the value 2.95545429708284e-23924 +/- 5.0573e-48334.

For just determining that the matrix is nonsingular, it’s much faster to compute the determinant modulo a single prime. Using FLINT’s nmod_mat module, it takes 0.005 seconds to find that the determinant is congruent to 400 modulo 401, or 0.012 seconds to compute it modulo a 64-bit prime.
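A sketch of that check, assuming the current nmod_mat interface (hilbert_det_mod is just an illustrative helper; note that $p = 401$ exceeds every denominator $i + j + 1 \le 399$ occurring in the size-200 matrix, so all the modular inverses exist):

```c
#include "ulong_extras.h"
#include "nmod_mat.h"

/* Illustrative helper: determinant of the n x n Hilbert matrix mod p.
   Requires that p divides none of the denominators i + j + 1. */
mp_limb_t hilbert_det_mod(slong n, mp_limb_t p)
{
    nmod_mat_t A;
    mp_limb_t d;
    slong i, j;

    nmod_mat_init(A, n, n, p);
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            nmod_mat_entry(A, i, j) = n_invmod((i + j + 1) % p, p);

    d = nmod_mat_det(A);
    nmod_mat_clear(A);
    return d;
}

int main(void)
{
    flint_printf("%wu\n", hilbert_det_mod(200, 401));  /* 400, per the text */
    return 0;
}
```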

Ball or interval matrices are probably most useful for doing computations with irrational numbers, but there are still some cases where they can help with computations over the rational numbers. For example, to prove that a matrix is singular, we generally have to keep computing determinants modulo several small primes until the product of the primes exceeds some bound for the magnitude of the determinant (if the determinant vanishes modulo all those primes, it must then be exactly zero). In some cases, a numerical determinant calculation might give a bound that is much tighter than the Hadamard bound for the whole matrix, so that fewer primes are needed.


