# Arb 2.0 available

May 27, 2014

Version 2.0.0 of Arb is now available. This version includes new types which generally are twice as fast at modest precision (10-100 digits). Measurable speedups can also be expected at higher precision (100-1000 digits).

The following table shows the time in seconds to multiply two 100 x 100 matrices with complex entries at prec-bit precision using fmpcb_mat_mul (old) and acb_mat_mul (new). Timings are also shown for matrix multiplication in Sage 6.1.1 over CF = ComplexField(prec) and CIF = ComplexIntervalField(prec).

| prec | Sage (CF) | Sage (CIF) | Arb (old) | Arb (new) | Speedup |
|------|-----------|------------|-----------|-----------|---------|
| 32   | 1.01      | 1.83       | 0.64      | 0.27      | 2.37x   |
| 64   | 1.25      | 1.92       | 0.93      | 0.315     | 2.95x   |
| 128  | 1.21      | 2.01       | 1.06      | 0.379     | 2.80x   |
| 256  | 1.40      | 2.33       | 1.71      | 0.588     | 2.91x   |
| 512  | 1.85      | 3.06       | 1.451     | 0.857     | 1.69x   |
| 1024 | 3.07      | 5.39       | 2.509     | 1.889     | 1.33x   |
| 2048 | 5.43      | 10.57      | 5.808     | 5.055     | 1.15x   |
| 4096 | 13.08     | 25.99      | 15.373    | 12.554    | 1.22x   |

I've made vague promises about a mythical 2x speedup to various people at various times in the past, so it's nice to finally get the code out and remove this big item from the todo list!

## The new types

The new types have similar semantics to the respective old types and can generally be used as drop-in replacements:

• mag_t replaces fmpr_t for fixed-precision radii
• arf_t replaces fmpr_t for arbitrary-precision midpoints
• arb_t replaces fmprb_t for real numbers
• acb_t replaces fmpcb_t for complex numbers
• arb_poly_t replaces fmprb_poly_t for real polynomials
• acb_poly_t replaces fmpcb_poly_t for complex polynomials
• arb_mat_t replaces fmprb_mat_t for real matrices
• acb_mat_t replaces fmpcb_mat_t for complex matrices

Some points worth noting:

• All old types and methods are still available in Arb 2.0, so users can safely transition to the new types. Most code can be converted using search-and-replace of type prefixes (fmpcb → acb, fmprb → arb, fmpr → arf). Further (easy) changes must be made to any code that uses fmprb_radref, fmpr_manref or fmpr_expref to access the internal components of numbers.
• Most transcendental functions (such as arb_exp) are just wrappers of the fmprb/fmpcb versions right now, and thus not faster. Fast implementations will be added in the near future.
• The code for the gamma function, zeta function, hypergeometric series, Bernoulli numbers and the partition function (among others) has not yet been ported to the new types. Users who require these functions should stick with the old types for now.
• Since the new types have not yet been field-tested, use with care! I still have some documentation to write and some unit tests to strengthen, but I believe the code is perfectly usable right now, so I'll save that work for a future version.
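The prefix renaming mentioned above can be scripted. Here is a minimal sketch (my own illustration, not an official migration tool); note that the replacement order matters, since fmpr is a prefix of fmprb:

```python
# Sketch of the suggested search-and-replace migration of type prefixes.
# Longer prefixes must be replaced first: replacing "fmpr" before "fmprb"
# would mangle the latter. This does NOT cover code that touches internals
# via fmprb_radref / fmpr_manref / fmpr_expref, which needs manual edits.

RENAMES = [
    ("fmpcb", "acb"),  # complex numbers
    ("fmprb", "arb"),  # real numbers
    ("fmpr", "arf"),   # arbitrary-precision midpoints
]

def migrate(source: str) -> str:
    """Apply the prefix renames to a chunk of C source text."""
    for old, new in RENAMES:
        source = source.replace(old, new)
    return source

old_code = "fmpcb_mat_mul(C, A, B, prec); fmprb_poly_t f; fmpr_t m;"
new_code = migrate(old_code)
# new_code == "acb_mat_mul(C, A, B, prec); arb_poly_t f; arf_t m;"
```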

## More benchmarks

See also the preliminary benchmarks for single arithmetic operations, reported in the previous blog post.

### Hilbert matrices

Here are old and new timings for computing an accurate numerical approximation of the determinant of an $n \times n$ Hilbert matrix (using the hilbert_matrix and hilbert_matrix2 example programs). Note that the speedup decreases for larger $n$, as high precision has to be used to combat the ill-conditioning (for the $n$ below, the algorithm terminates at a precision of 160, 640, 1280 and 2560 bits, respectively).

| n   | Arb (old) | Arb (new) | Speedup |
|-----|-----------|-----------|---------|
| 20  | 0.00279   | 0.00129   | 2.16x   |
| 50  | 0.0613    | 0.0315    | 1.95x   |
| 100 | 0.637     | 0.407     | 1.57x   |
| 200 | 9.115     | 7.237     | 1.26x   |
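To see why so much precision is needed, one can compute the determinant exactly over the rationals (a toy illustration of the ill-conditioning, not the adaptive ball-arithmetic algorithm the example programs use):

```python
# Exact determinant of the n x n Hilbert matrix H[i][j] = 1/(i+j+1),
# via Gaussian elimination over the rationals (no pivoting is needed,
# since Hilbert matrices are positive definite). The determinant shrinks
# extremely fast with n, which is why high precision is required to
# approximate it accurately.
from fractions import Fraction

def hilbert_det(n):
    H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        det *= H[k][k]                 # determinant = product of pivots
        for i in range(k + 1, n):
            factor = H[i][k] / H[k][k]
            for j in range(k, n):
                H[i][j] -= factor * H[k][j]
    return det
```

For example, hilbert_det(3) is 1/2160, and the magnitude keeps collapsing roughly exponentially in $n^2$.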

### Polynomial root isolation

The poly_roots example program isolates all the complex roots of a polynomial with integer coefficients, using functions in the fmpcb_poly module. The poly_roots2 program is similar, but uses the new acb_poly instead.

The root isolation algorithm in Arb computes approximate roots using floating-point arithmetic, and then uses ball arithmetic to determine a rigorous radius bound around each tentative root. Specifically, the Durand-Kerner method is used in the floating-point stage. The precision is automatically doubled every once in a while. The bulk of the work consists of evaluating the polynomial over and over again at each approximate root, so it's a good benchmark for complex floating-point arithmetic (however, it does not exercise the ball arithmetic much). No optimizations are made for real roots.
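The floating-point stage can be sketched in a few lines. This is a deliberately simplified illustration (plain machine doubles, a fixed iteration count, no precision doubling and no certification step), not Arb's actual implementation:

```python
# Simplified Durand-Kerner iteration: refine all n root approximations of a
# monic polynomial simultaneously. Each update evaluates the polynomial at
# the current approximation, which is why polynomial evaluation dominates
# the cost, as described above.

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs has the leading coefficient first."""
    y = 0
    for c in coeffs:
        y = y * x + c
    return y

def durand_kerner(coeffs, iters=100):
    """Approximate all complex roots of a monic polynomial.

    coeffs: [1, c_{n-1}, ..., c_0], leading coefficient first.
    """
    n = len(coeffs) - 1
    # Customary starting values: powers of a non-real number of modulus != 1
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            num = poly_eval(coeffs, roots[i])
            den = 1
            for j in range(n):
                if j != i:
                    den *= roots[i] - roots[j]
            roots[i] -= num / den
    return roots

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
approx = sorted(durand_kerner([1, -6, 11, -6]), key=lambda z: z.real)
```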

Sage uses a similar strategy. The floating-point stage uses NumPy (at low precision) and Pari (at high precision). Note that Sage starts by computing a squarefree factorization of the input polynomial. The Arb benchmark program requires an input that is already squarefree. Computing the squarefree factorization is cheap, though, so this shouldn't affect the comparison too much.

Bernoulli polynomial $B_n(x) = \sum_{k=0}^n {n \choose k} B_{n-k} x^k$:

| $n$ | Sage   | Arb (old) | Arb (new) | Speedup |
|-----|--------|-----------|-----------|---------|
| 20  | 0.030  | 0.00665   | 0.00301   | 2.21x   |
| 50  | 0.502  | 0.071     | 0.0377    | 1.88x   |
| 100 | 7.04   | 1.097     | 0.421     | 2.61x   |
| 200 | > 1000 | 10.359    | 4.618     | 2.24x   |
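The benchmark input itself is easy to reproduce. A sketch of the Bernoulli polynomial coefficients using the formula above, with exact rationals (the Bernoulli numbers $B_m$ are generated by the standard recurrence $B_m = -\frac{1}{m+1} \sum_{k=0}^{m-1} \binom{m+1}{k} B_k$, with the convention $B_1 = -1/2$):

```python
# Exact coefficients of the Bernoulli polynomial
#   B_n(x) = sum_{k=0}^n C(n,k) B_{n-k} x^k
# using the convention B_1 = -1/2.
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

def bernoulli_poly_coeffs(n):
    """Coefficients of B_n(x), constant term first."""
    B = bernoulli_numbers(n)
    return [comb(n, k) * B[n - k] for k in range(n + 1)]

# B_2(x) = x^2 - x + 1/6
coeffs = bernoulli_poly_coeffs(2)
```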

Wilkinson polynomial $W_n(x) = \prod_{k=1}^n (x-k)$:

| $n$ | Sage  | Arb (old) | Arb (new) | Speedup |
|-----|-------|-----------|-----------|---------|
| 20  | 0.058 | 0.0203    | 0.00711   | 2.86x   |
| 50  | 0.964 | 0.359     | 0.196     | 1.83x   |
| 100 | 16.65 | 4.086     | 1.932     | 2.11x   |
| 200 | 226   | 43.298    | 23.557    | 1.84x   |

Exponential Taylor polynomial $E_n(x) = \sum_{k=0}^n \frac{x^k}{k!}$:

| $n$ | Sage  | Arb (old) | Arb (new) | Speedup |
|-----|-------|-----------|-----------|---------|
| 20  | 0.039 | 0.00759   | 0.00354   | 2.14x   |
| 50  | 0.338 | 0.205     | 0.0666    | 3.08x   |
| 100 | 2.62  | 2.046     | 0.503     | 4.06x   |
| 200 | 31.93 | 12.76     | 5.551     | 2.30x   |

NB: poly_roots and poly_roots2 use slightly different precision steps, causing some variation in the timings.

### Power series exponentials

Finally, here are timings for computing the Taylor expansion of $\exp(\exp(1+x))$ to order $x^n$ at a precision of $n$ decimal digits. This problem is taken from the FLINT benchmarks page. The new type gives a nice speedup in the basecase range, finally achieving pari-ty with the floating-point arithmetic in Pari 2.5.5. For large $n$, FLINT's asymptotically fast polynomial arithmetic is used, and performance is unchanged.

| n     | Pari     | Arb (old) | Arb (new) | Speedup |
|-------|----------|-----------|-----------|---------|
| 10    | 0.000014 | 0.0000166 | 0.0000113 | 1.47x   |
| 30    | 0.000066 | 0.000147  | 0.0000583 | 2.52x   |
| 100   | 0.00105  | 0.0015    | 0.000859  | 1.75x   |
| 300   | 0.0273   | 0.0117    | 0.0116    | 1.00x   |
| 1000  | 1.53     | 0.181     | 0.180     | 1.00x   |
| 3000  | 69       | 2.314     | 2.313     | 1.00x   |
| 10000 |          | 29.96     | 29.81     | 1.00x   |
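
The basecase power series exponential that dominates at small $n$ can be sketched with the standard $O(n^2)$ recurrence derived from $f' = g'f$ for $f = \exp(g)$, namely $m f_m = \sum_{k=1}^{m} k\, g_k f_{m-k}$. This is only an illustration of the basecase technique (with exact rationals and the simplifying assumption $g(0) = 0$; the actual benchmark works in ball arithmetic, factors out the constant term, and switches to asymptotically fast methods for large $n$):

```python
# Basecase power series exponential: f = exp(g) mod x^n, assuming g[0] == 0.
# From f' = g' f one gets the O(n^2) coefficient recurrence
#     m * f_m = sum_{k=1}^{m} k * g_k * f_{m-k}.
from fractions import Fraction

def series_exp(g, n):
    """Return the first n coefficients of exp(g), constant term first."""
    g = list(g[:n]) + [Fraction(0)] * max(0, n - len(g))
    assert g[0] == 0, "constant term must vanish in this simplified sketch"
    f = [Fraction(1)] + [Fraction(0)] * (n - 1)
    for m in range(1, n):
        f[m] = sum(k * g[k] * f[m - k] for k in range(1, m + 1)) / m
    return f

# exp(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + ...
coeffs = series_exp([0, 1], 5)
```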