# Computing 100,000 digits of zetazero(1)

July 29, 2013

One of my goals with the Arb library is to support evaluating special functions of a power series argument to very high precision and with rigorous error bounds. I’ve recently added most of the elementary functions (exp, log, sin, cos, …) as well as the gamma function (in fact, there are three different versions: $\Gamma(z)$, $1/\Gamma(z)$, and $\log \Gamma(z)$ with the correct branch structure). The Hurwitz zeta function (and as a special case the Riemann zeta function) was one of the first functions I implemented, several months ago; I’ve recently added a more user-friendly wrapper for it which also uses the reflection formula in the negative half-plane (this makes use of the gamma function power series).
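As a point of reference, mpmath (used later in this post for comparison) exposes arbitrary-precision versions of the same functions, though without Arb's rigorous error bounds. A quick illustration:

```python
from mpmath import mp, zeta, gamma, loggamma

# High-precision evaluations with mpmath; Arb provides ball-arithmetic
# versions of the same functions.
mp.dps = 50

z2 = zeta(2)                    # Riemann zeta: pi^2/6
h = zeta(2, mp.mpf(1) / 3)      # Hurwitz zeta(s, a) with a = 1/3
g = loggamma(mp.mpc(-2.5, 1))   # log-gamma with the correct branch structure

print(z2)
print(h)
print(g)
```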

One of the nice things you can do with power series is to implement various analytic operations in an efficient and reliable way. For example, when you have a truncated power series approximation of a function, the roots of the Taylor polynomial close to the expansion point are approximately roots of the function itself, and with some care, you can rigorously count the number of roots in a given region and compute them (again with rigorous error bounds) to very high precision.
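The idea can be sketched with plain mpmath (again without the rigorous bounds that Arb attaches): expand a function in a truncated Taylor series, find the roots of the Taylor polynomial, and read off approximate roots of the function near the expansion point.

```python
from mpmath import mp, taylor, polyroots, sin

# Illustrative sketch: approximate a root of sin near x0 = 3 from the
# roots of its degree-8 Taylor polynomial.
mp.dps = 30
x0 = mp.mpf(3)

coeffs = taylor(sin, x0, 8)        # coefficients in ascending order
roots = polyroots(coeffs[::-1])    # polyroots wants descending order

# The polynomial root closest to the expansion point approximates a
# root of sin itself; shifting back by x0 gives a value close to pi.
r = min(roots, key=abs)
print(x0 + r)
```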

This weekend, I decided to try to use Arb to compute a high-precision value of the first nontrivial root of the Riemann zeta function, which is

$$\rho_1 \approx 0.5 + 14.13472514173469379i.$$

I did this by implementing the Z-function of a power series argument, and then using Newton iteration (with strict error bounds obtained from the function derivatives). Of course, Newton iteration is essentially successive approximation by first-order Taylor polynomials. I copied the Newton iteration code that was already available in Arb for polynomial root polishing (see: precise computation of roots of unity) and basically just replaced the polynomial evaluation with the Z-function power series evaluation. In fact, I cheated a bit by starting the computation from a given double-precision bounding interval for $\rho_1$, but you could generate such an interval quite easily from scratch, for example using bisection (it’s just a bit more coding to do).
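A heuristic version of this computation fits in a few lines of mpmath, using its Riemann-Siegel Z-function (`siegelz`) and a numerical derivative in place of Arb's power series evaluation. The precision-doubling schedule below reflects the fact that Newton's method roughly doubles the number of correct digits per iteration; unlike the Arb code, nothing here is rigorously bounded.

```python
from mpmath import mp, siegelz, diff

# Sketch: Newton iteration on the Z-function, starting from a
# double-precision value for the imaginary part of the first zero.
t = 14.134725141734693  # double-precision starting value

target_dps = 100
mp.dps = 20
while mp.dps < target_dps:
    # Roughly double the working precision at each step, since Newton's
    # method doubles the number of correct digits per iteration.
    mp.dps = min(2 * mp.dps, target_dps + 10)
    t = t - siegelz(t) / diff(siegelz, t)

print(t)
```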

For reference, I computed $\rho_1$ to 100, 1000 and 10,000 digits using the ZetaZero function in Mathematica 9.0, the zetazero function in mpmath in Sage, and using the Arb code. Then I computed $\rho_1$ to 100,000 digits using Arb. In each case, I also evaluated $\zeta(\rho_1)$ to the same precision to get a check on the result. Here are the timings (in seconds, obtained on a 64-bit Xeon X5675 3.07 GHz system):

| Digits | Mathematica $\rho_1$ | Mathematica $\zeta(\rho_1)$ | mpmath $\rho_1$ | mpmath $\zeta(\rho_1)$ | Arb $\rho_1$ | Arb $\zeta(\rho_1)$ |
|---|---|---|---|---|---|---|
| 100 | 0.040 | 0.0080 | 0.071 | 0.0027 | 0.011 | 0.0014 |
| 1000 | 10 | 1.4 | 6.6 | 0.23 | 0.53 | 0.090 |
| 10,000 | 4394 | 621 | 6584 | 216 | 159 | 20 |
| 100,000 | - | - | - | - | 55507 | 5182 |


The first thing one might observe is that, as intended, Arb is much faster than Mathematica and mpmath (and should be even faster in the future).

The second thing one might observe is that, in all implementations, computing the root takes much longer than evaluating the zeta function once. This is not quite optimal: ideally, computing the root should be perhaps a factor of two slower than a single full-precision evaluation close to the root. I don’t know what Mathematica does internally, but in mpmath, most of the overhead comes from the fact that the root-finding algorithm doesn’t increase the working precision adaptively. In Arb, the zeta function of a power series argument (used for the Newton iteration to compute $\zeta(s)$ and $\zeta’(s)$ simultaneously) is currently less optimized than the scalar version, and that should account for perhaps a factor of 2-4. Finally, Newton’s method is not necessarily optimal; the secant method (which does not require the derivative) might perform better.

The interval computed for the imaginary part of $\rho_1$ was 14.134725 (truncated) +/- 1.364e-100002. Reassuringly, evaluating $\zeta(\rho_1)$ then produced the complex interval (-1.49497253e-99999 + 4.419940914e-100000j) +/- (4.17e-99998, 4.25e-99998j), which indeed contains zero and approximates zero with an absolute accuracy of almost 100,000 digits (a couple of digits can be expected to be lost in the roundtrip).
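A smaller-scale version of this sanity check can be run with mpmath: compute the zero to 100 digits and confirm that the zeta function evaluated there is correspondingly tiny (here without interval bounds, so the check is only heuristic).

```python
from mpmath import mp, zetazero, zeta

mp.dps = 100
rho = zetazero(1)          # first nontrivial zero, 0.5 + 14.1347...i
residual = abs(zeta(rho))
print(residual)            # tiny: near the limit of the working precision
```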

You can find the program used for the computation here. Please note that this is just a script I put together in a hurry; the code is undocumented and very messy (a lot of it is just moving coefficients back and forth between polynomials, due to a current lack of various convenience functions in Arb for doing such things more cleanly). Finally, here are data files with the 100,000-digit approximation of $\rho_1$: a decimal representation of the imaginary part, and the internal representation consisting of integers $a, b, c, d$ such that the imaginary part is contained in $[m-r, m+r]$, where $m = a \times 2^b$ and $r = c \times 2^d$.
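Decoding that representation is straightforward with exact rational arithmetic; here is a sketch using small made-up values of $a, b, c, d$ (the actual files contain very large integers):

```python
from fractions import Fraction

def decode(a, b, c, d):
    # Midpoint m = a * 2^b and radius r = c * 2^d, computed exactly;
    # Fraction supports negative integer exponents of 2.
    m = Fraction(a) * Fraction(2) ** b
    r = Fraction(c) * Fraction(2) ** d
    return m - r, m + r

# Illustrative values only (not from the actual data files)
lo, hi = decode(7, -2, 1, -10)
print(float(lo), float(hi))  # 1.7490234375 1.7509765625
```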