# Arb 0.3, with complex numbers

November 7, 2012

I’ve tagged version 0.3 of my C library for ball interval arithmetic, Arb (documentation here).

Most importantly, this version adds support for complex numbers. It also includes some improvements behind the scenes, such as the automatic hypergeometric series code, and a new module supporting computations with holonomic functions. This is intended to allow implementing many special functions in an efficient and general way in the future, but it requires some more work first.

The new fmpcb_t type represents a complex number as a rectangular interval, defined by two real ball intervals of type fmprb_t. Arithmetic operations are implemented in the straightforward way, by operating on the real and imaginary parts separately. An alternative approach to complex interval arithmetic is to represent a complex interval as a midpoint with a single radius. This only requires storing three real floating-point numbers rather than four, and is sometimes nicer to work with mathematically, but overall it seems much less convenient from an implementation point of view. It is also extremely convenient in many applications that the error bounds for the real and imaginary parts can vary independently; in particular, one part can be exactly zero even when the other is approximate.

The new fmpcb_mat module implements matrices over the complex numbers, including functions for LU decomposition, nonsingular solving, inverse and determinant. It works just like the previously-described code for real matrices.

The new fmpcb_poly module implements polynomials over the complex numbers. It does not yet provide any arithmetic operations (this is planned for the next version), but does contain code for computing all the roots of a polynomial. The code allows the input polynomial (assumed squarefree) to have interval coefficients, and computes rigorous isolating intervals for the roots (or signals failure if the working precision is too low or the input coefficients are too imprecise for successful isolation). It uses a simple algorithm: it computes approximations of the roots numerically using the Durand-Kerner method, and then determines a rigorous bounding interval for each approximate root as a post-processing step. It can happen that D-K iteration fails to produce approximations which are good enough to successfully isolate the roots, but it always succeeds in practice at sufficiently high precision.

It’s pretty easy to use the code to, for instance, isolate the roots of an fmpq_poly. Here is some example code:

#include "fmpcb.h"
#include "fmpcb_poly.h"
#include "arith.h"
#include "profiler.h"

void
fmpq_poly_print_roots(const fmpq_poly_t poly, long initial_prec)
{
    long prec, deg, isolated, maxiter;
    fmpcb_poly_t cpoly, roots;
    timeit_t t0;

    deg = poly->length - 1;

    fmpcb_poly_init(cpoly);
    fmpcb_poly_init2(roots, deg);
    _fmpcb_poly_set_length(roots, deg);

    for (prec = initial_prec; ; prec *= 2)
    {
        printf("trying with precision %ld\n", prec);
        fmpcb_poly_set_fmpq_poly(cpoly, poly, prec);
        maxiter = FLINT_MIN(deg, prec);

        timeit_start(t0);
        isolated = fmpcb_poly_find_roots(roots->coeffs, cpoly,
            prec == initial_prec ? NULL : roots->coeffs,
            maxiter, prec);
        timeit_stop(t0);

        printf("elapsed time %ld ms, found %ld isolated roots\n",
            t0->cpu, isolated);

        if (isolated == deg)
        {
            fmpcb_poly_printd(roots, 15);
            printf("\n");
            break;
        }
    }

    fmpcb_poly_clear(cpoly);
    fmpcb_poly_clear(roots);
}

int main()
{
    fmpq_poly_t poly;
    fmpq_poly_init(poly);
    fmpq_poly_set_coeff_ui(poly, 1, 1);
    fmpq_poly_exp_series(poly, poly, 257);
    fmpq_poly_print_roots(poly, 53);
    fmpq_poly_clear(poly);
}


This isolates the roots of the polynomial $1 + x + \frac{x^2}{2} + \ldots + \frac{x^{256}}{256!}$.

On one 32-bit system, the output is:

trying with precision 53
elapsed time 9700 ms, found 0 isolated roots
trying with precision 106
elapsed time 19780 ms, found 0 isolated roots
trying with precision 212
elapsed time 42140 ms, found 0 isolated roots
trying with precision 424
elapsed time 2380 ms, found 256 isolated roots
[(-68.8865324641218 + -25.4303424859427j)  +/-  (2.36e-25, 2.36e-25j)
(87.9887751889355 + 102.299996578322j)  +/-  (9.76e-08, 9.76e-08j)
(-40.4066020990447 + 71.3703317844642j)  +/-  (1.01e-43, 1.01e-43j)
...


The total running time is 74 seconds; for comparison, Mathematica’s RootIntervals isolates the roots of this polynomial in 83 seconds and Sage’s complex_roots (which calls Pari and then uses interval arithmetic to verify the roots in a similar fashion) takes 366 seconds. Once the roots are isolated, convergence is fast and obtaining $b$ bits only takes $O(\log b)$ steps.

It also works as expected when given polynomials with clustered roots, such as $x^{100} + (300x+1)^4$. Here four roots are so close together that a very high precision is necessary to isolate them; the other 96 roots are found quickly:

trying with precision 53
elapsed time 1290 ms, found 0 isolated roots
trying with precision 106
elapsed time 350 ms, found 0 isolated roots
trying with precision 212
elapsed time 1510 ms, found 96 isolated roots
trying with precision 424
elapsed time 3980 ms, found 96 isolated roots
trying with precision 848
elapsed time 7040 ms, found 96 isolated roots
trying with precision 1696
elapsed time 17260 ms, found 100 isolated roots


On this input Sage takes 25 seconds and Mathematica takes 132 seconds.

There are much more sophisticated ways to isolate the roots of a polynomial, and much better methods are available for real roots. But so far this approach seems to scale reasonably well (and will be even faster at low precision after future optimizations).