
High-precision linear algebra in Julia: BigFloat vs Arb

July 31, 2018

A few people have asked me about the relative performance of Julia's native matrices with BigFloat elements and the Arb matrices available through Nemo.jl. Unlike machine-precision matrices, which build on BLAS technology, BigFloat matrices in Julia use generic code that has not been optimized specifically for BigFloat elements, so we can expect Arb's specialized C implementations to be faster. But by how much?

In this blog post, I provide some quick benchmarks done with Nemo 0.8.5 in Julia 0.6.4. We take the test matrix $A$ to be the DCT-II matrix of size $N$. This matrix can be constructed as a Julia BigFloat matrix and as a Nemo Arb matrix as follows:

setprecision(256)
N = 10; A = [j == 0 ? 1/sqrt(BigFloat(N)) : cos(j*(k+0.5)*BigFloat(pi)/N) * sqrt(BigFloat(2)/N) for j in 0:N-1, k in 0:N-1];

using Nemo
R = ArbField(256)
N = 10; A = matrix(R, [j == 0 ? 1/sqrt(R(N)) : cos(j*(k+0.5)*R(pi)/N) * sqrt(R(2)/N) for j in 0:N-1, k in 0:N-1]);

For these benchmarks, the precision is set to 256 bits, which is the default BigFloat precision in Julia. The relative performance will be slightly different with higher or lower precision. I only run the benchmarks with real matrices to keep things brief; complex matrices are about 3-4 times slower than real matrices with either BigFloat or Arb, so the relative performance figures should be similar.

Matrix multiplication

We time computing A * B where B = transpose(A).
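
For reference, a timing can be reproduced along these lines (a sketch only: the post does not spell out the exact methodology, so @time on a second, already-compiled run stands in for it here):

B = transpose(A)   # works for both the Julia BigFloat matrix and the Nemo Arb matrix
A * B;             # warm-up run (triggers JIT compilation)
@time A * B;       # a timing like the ones in the table below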

$N$     Julia BigFloat    Arb          Speedup
10      0.000310 s        0.000112 s   2.77
30      0.00940 s         0.00243 s    3.87
100     0.649 s           0.0934 s     6.95
300     23.645 s          1.151 s      20.54
1000    924.843 s         24.606 s     37.59

Not surprisingly, the clever matrix multiplication code in Arb (which uses FLINT behind the scenes) is leaps and bounds faster than Julia's generic code doing classical matrix multiplication over BigFloats.

Inverse

We time computing inv(A), which is equivalent to solving the system $AX = I$ in both Arb and Julia.
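
Incidentally, since the orthonormalized DCT-II matrix is orthogonal, its inverse is just its transpose, which gives a convenient sanity check on the result (a quick sketch for the BigFloat version):

X = inv(A)                        # A is the BigFloat test matrix from above
maximum(abs.(X - transpose(A)))   # should be tiny, near the 256-bit working precision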

The overhead inflicted by tracking error bounds in Arb is negligible for matrix multiplication. However, linear solving switches from straightforward LU factorization to a significantly more expensive algorithm using preconditioning to prevent the error bounds from blowing up exponentially with $N$.

Arb 2.14 introduced separate "approx" functions for performing a few operations (including LU factorization and solving) without error bounds. These functions may be useful as drop-in replacements for the usual numerical Julia functions when heuristic error estimates are sufficient. Wrappers for these functions are not yet implemented in Nemo, but it is easy to write such functions manually in Julia. Here is a wrapper for in-place approximate solving, and a corresponding approximate matrix inverse function:

# In-place approximate solve: overwrites z with an approximate solution of x*z = y,
# using arb_mat_approx_solve (midpoints only, no rigorous error bounds).
function approx_solve!(z::arb_mat, x::arb_mat, y::arb_mat)
  r = ccall((:arb_mat_approx_solve, :libarb), Cint,
            (Ref{arb_mat}, Ref{arb_mat}, Ref{arb_mat}, Int),
            z, x, y, prec(base_ring(x)))
  r == 0 && error("Matrix cannot be inverted numerically")
  nothing
end

# Approximate inverse: solves x*y = I, writing the result over the identity matrix y.
function approx_inv(x::arb_mat)
  y = one(parent(x))
  approx_solve!(y, x, y)
  return y
end
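
Usage is straightforward. For example, with A the Nemo Arb matrix constructed above (keep in mind that approx_inv returns midpoints only, so the radii of the product below do not account for the approximation error in the inverse):

Ainv = approx_inv(A)
Ainv * A             # numerically close to the identity matrix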

The following table compares performance separately for the rigorous and the approximate versions:

$N$     Julia BigFloat    Arb          Speedup   Arb (approx)   Speedup (approx)
10      0.000308 s        0.000250 s   1.23      0.000205 s     1.50
30      0.00983 s         0.00868 s    1.13      0.00400 s      2.46
100     0.786 s           0.287 s      2.74      0.111 s        7.08
300     29.018 s          4.372 s      6.64      2.378 s        12.20
1000    1641.659 s        84.715 s     19.38     51.168 s       32.08

The rigorous algorithm in Arb is barely faster for small $N$ but eventually picks up steam. The approximate version performs even better.

Matrix exponential

We time computing exp(A) with Arb (note that this gives the matrix exponential, not the elementwise exponential that exp gives when applied to a normal Julia array).

Unfortunately, Julia's matrix exponential function expm does not work with BigFloat matrices, and I'm not aware of any other Julia package that offers such a function; the only alternative I know of is Python's mpmath called via PyCall. So here is a totally lopsided comparison to mpmath.expm (also at 256-bit precision) to have a point of reference.

$N$     mpmath      Arb          Speedup
10      0.175 s     0.00236 s    74.1
30      3.762 s     0.0456 s     82.5
100     145.82 s    1.334 s      109.3
300     -           16.600 s     -
1000    -           337.495 s    -
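
For completeness, here is roughly how the two sides of this comparison can be driven from Julia. This is a sketch only: it assumes PyCall and Python's mpmath are installed, the attribute syntax shown is for the PyCall versions of that era, and Abig is a stand-in name for the BigFloat version of the test matrix (its entries are converted to mpf via decimal strings just to keep the example self-contained):

@time E = exp(A)                # A as the Nemo Arb matrix: the matrix exponential

using PyCall
@pyimport mpmath
mpmath.mp[:prec] = 256          # match the 256-bit working precision
Ampf = mpmath.matrix([[mpmath.mpf(string(Abig[j, k])) for k in 1:N] for j in 1:N])
@time Empf = mpmath.expm(Ampf)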

Conclusions

Arb matrices are likely to be slightly faster than Julia's native BigFloat matrices in general, just due to having much less garbage collection and memory allocation overhead. In addition, Arb matrices are likely to be much faster for large dimensions due to using algorithms based on fast matrix multiplication.

Should you be using Nemo's Arb matrices instead of BigFloat matrices when 53-bit doubles don't cut it? If rigorous error bounds are useful, then the answer should be obvious! Otherwise, the short answer is that it depends. The benchmark results speak for themselves, but here are two more points to consider:


