The Riemann-Siegel formula in Arb
October 25, 2016
This is a plot of $\zeta(1/2 + it)$ with t between $10^{20}-1$ and $10^{20}+1$, computed with the latest git version of Arb:
(Red is the real part, orange is the imaginary part, blue is the Z-function.)
Computing the Riemann zeta function $\zeta(s)$, $s = \sigma + it$, in and around the critical strip $0 \le \sigma \le 1$ when $t \gg 0$ is an interesting problem. The classical approach, Euler-Maclaurin summation, requires $O(t)$ operations, since an accurate result needs the truncated zeta sum $\sum_{k=1}^N k^{-s}$ with $N \sim t$. The Riemann-Siegel formula reduces the number of terms to $N = O(t^{1/2})$, which is a huge improvement when $t > 10^6$, say. I finally implemented the Riemann-Siegel formula in Arb in this commit a couple of days ago. With the new code for the zeta function, it took a day to compute the 257 samples for the plot above on a 12-core machine (courtesy of Ricky Farr). With the Euler-Maclaurin formula, it would have taken thousands of years!
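To put the term counts in perspective, here is a quick back-of-the-envelope calculation in plain Python (an illustration only, nothing to do with Arb's internals) of the main-sum length each method needs:

```python
import math

def terms_euler_maclaurin(t):
    # Euler-Maclaurin: the truncated zeta sum needs N ~ t terms
    return int(t)

def terms_riemann_siegel(t):
    # Riemann-Siegel: the main sum has N = floor(sqrt(t / (2 pi))) terms
    return int(math.sqrt(t / (2 * math.pi)))

for t in [1e6, 1e12, 1e20]:
    print(f"t = {t:.0e}: EM needs {terms_euler_maclaurin(t):.2e} terms, "
          f"RS needs {terms_riemann_siegel(t):.2e}")
```

Note that at $t = 10^{20}$ the Riemann-Siegel main sum still has about $4 \times 10^9$ terms, which is why the plot above took a day on 12 cores rather than milliseconds.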
How it works
In 2010, I wrote about the implementation of the Riemann-Siegel formula in mpmath done by Juan Arias de Reyna. Implementing it in interval arithmetic in Arb was quite easy thanks to the good error bounds provided in his paper (J. A. de R., High precision computation of Riemann's zeta function by the Riemann-Siegel formula, I, Math. Comp., 80 (2011), 995--1009.). Following the paper, the Riemann-Siegel formula reads:
$$\zeta(s) = \mathcal{R}(s) + X(s) \overline{\mathcal{R}}(1-s), \quad X(s) = \pi^{s-1/2} \frac{\Gamma((1-s)/2)}{\Gamma(s/2)}$$ $$\mathcal{R}(s) = \sum_{k=1}^N \frac{1}{k^s} + (-1)^{N-1} U a^{-\sigma} \left[ \sum_{k=0}^K \frac{C_k(p)}{a^k} + RS_K \right]$$ $$U = \exp\left(-i\left[ \frac{t}{2} \log\left(\frac{t}{2\pi}\right)-\frac{t}{2}-\frac{\pi}{8} \right]\right), \quad a = \sqrt{\frac{t}{2\pi}}, \quad N = \lfloor a \rfloor, \quad p = 1-2(a-N).$$
Here, $C_k(p)$ are certain coefficients that can be computed using a combination of recurrence relations and power series operations. The Arb implementation computes a provably correct enclosure for $\zeta(s)$ by performing all steps with interval arithmetic and adding a rigorous bound for the error term $RS_K$. Selecting $K$ and the internal working precision is done using simple heuristics (which probably could be fine-tuned, but don't affect correctness). In mpmath, managing the precision is much more complicated (since all intermediate errors have to be estimated manually), and the end result is not quite rigorous, though Juan did a very good job making it as robust as feasible. In any case, getting error bounds is not the only advantage of Arb -- the underlying arithmetic is a lot faster too!
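To see the formula in action, here is a bare-bones floating-point sketch for the critical line $s = 1/2 + it$ in plain Python -- machine doubles, no error bounds, nothing like the rigorous Arb code, and meaningful only for moderately large $t$. It keeps only the leading coefficient, using the classical closed form $C_0(p) = \cos(\pi(p^2/2 + 3/8))/\cos(\pi p)$ (adapted to the convention $p = 1-2(a-N)$ above; the convention-matching is my own bookkeeping), and it replaces $X(s)$ by the critical-line identity $X(1/2+it) = e^{-2i\theta(t)}$, where $\theta$ is the Riemann-Siegel theta function. On the critical line, $\overline{\mathcal{R}}(1-s)$ is simply $\overline{\mathcal{R}(s)}$, so one evaluation of the sum suffices.

```python
import cmath
import math

def theta(t):
    # Riemann-Siegel theta function (first terms of its asymptotic expansion)
    return t/2 * math.log(t / (2*math.pi)) - t/2 - math.pi/8 + 1/(48*t)

def C0(p):
    # leading Riemann-Siegel coefficient in the convention p = 1 - 2(a - N);
    # p = +-1/2 is a removable singularity, ignored in this sketch
    return math.cos(math.pi * (p*p/2 + 3/8)) / math.cos(math.pi * p)

def zeta_rs(t):
    # rough value of zeta(1/2 + i t): main sum plus the C_0 correction only
    a = math.sqrt(t / (2*math.pi))
    N = int(a)
    p = 1 - 2*(a - N)
    s = 0.5 + 1j*t
    R = sum(k**(-s) for k in range(1, N+1))
    U = cmath.exp(-1j*theta(t))   # theta's 1/(48t) term is negligible in U here
    R += (-1)**(N-1) * U * C0(p) / math.sqrt(a)
    # on the critical line, X(s) = exp(-2 i theta(t)) and R-bar(1-s) = conj(R(s))
    return R + cmath.exp(-2j*theta(t)) * R.conjugate()

def Z(t):
    # Hardy's Z-function (the blue curve in the plot): real-valued for real t
    return (cmath.exp(1j*theta(t)) * zeta_rs(t)).real

print(zeta_rs(100.0))   # a rough value of zeta(1/2 + 100 i)
```

Even this crude version tracks the first few zeros: $Z$ changes sign near $t \approx 14.13$, $21.02$, $25.01$, and so on. The real implementation adds more coefficients $C_k$, the rigorous tail bound $RS_K$, and interval arithmetic throughout.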
Timings
The following table compares the time in seconds to compute $\zeta(1/2 + it)$ at 53-bit precision. I timed mpmath inside Sage, since it uses Cython code and MPFR to speed up core operations. I also timed mpmath.fp.zeta, which uses machine doubles. This is a lot faster, but one loses about $d$ digits of precision at a height of $10^d$ (mpmath.mp.zeta and Arb, which work with multiprecision arithmetic, automatically increase the internal precision to compensate). I've indicated the number of accurate digits from mpmath.fp.zeta in parentheses. Arb (old) is using Euler-Maclaurin summation, and Arb (new) is using the Riemann-Siegel formula.
| $t$ | mpmath.mp.zeta | mpmath.fp.zeta | Arb (old) | Arb (new) |
|---|---|---|---|---|
| $10^4$ | 0.0285 | 0.0033 (10) | 0.0010 | 0.000571 |
| $10^5$ | 0.0084 | 0.0016 (10) | 0.0069 | 0.000487 |
| $10^6$ | 0.0081 | 0.0014 (8) | 0.067 | 0.000525 |
| $10^7$ | 0.0093 | 0.0015 (6) | 0.58 | 0.000864 |
| $10^8$ | 0.016 | 0.0024 (6) | 5.35 | 0.0017 |
| $10^9$ | 0.051 | 0.0058 (5) | 53.4 | 0.0050 |
| $10^{10}$ | 0.136 | 0.016 (4) | | 0.0149 |
| $10^{11}$ | 0.444 | 0.052 (3) | | 0.0438 |
| $10^{12}$ | 1.30 | 0.158 (1) | | 0.147 |
| $10^{13}$ | 4.99 | 0.492 (1) | | 0.639 |
| $10^{14}$ | 16.6 | 2.00 (0) | | 1.90 |
| $10^{15}$ | 54.9 | | | 5.67 |
| $10^{16}$ | 182 | | | 18.5 |
| $10^{17}$ | 579 | | | 58.1 |
| $10^{18}$ | | | | 279 |
| $10^{19}$ | | | | 896 |
| $10^{20}$ | | | | 2884 |
Arb is consistently about 10 times faster than mpmath.mp.zeta, and about as fast as mpmath.fp.zeta (but without the precision loss). The limit for the multiprecision version in mpmath is about $t = 10^{17}$ on my computer, since the sieving algorithm used to compute $\sum_{k=1}^N k^{-s}$ starts using too much memory (more than 16 GB) beyond that point. In Arb, I have currently set the code to use at most 4 GB, switching to a slower algorithm that only performs partial sieving. I have not tried any values larger than $10^{20}$, but everything should continue working smoothly beyond that point, if you can wait for the result. You can see in the table that the time generally increases by the expected factor $\sqrt{10} \approx 3.1$ when $t$ increases by a factor of 10, but between $10^{17}$ and $10^{18}$ it jumps by a factor of five. This is caused by the switch to the less memory-intensive algorithm, which is about half as efficient. With more memory, that factor of two could be saved up to larger heights.
The Riemann-Siegel formula is a divergent asymptotic expansion. The code automatically falls back on Euler-Maclaurin summation when the precision is too high or $t$ is too small. In practice, the precision that can be achieved with the Riemann-Siegel formula is quite high even for modest $t$, and increasing the precision does not slow things down too much. It takes 0.038 seconds, only 2.5 times longer than at 16-digit precision, to compute $\zeta(1/2+10^{10} i)$ to 100 digits, giving the following result:
[0.3568002308560733825395879104841957210373014420874373050142712175339008343141482693777263151130786228 +/- 4.82e-101] + [0.2865058490958361032920930146630741610609031719754977429540826783781992120435878412328144305891108445 +/- 4.04e-101]i
It takes just a bit more time, 0.084 seconds, to compute the value off the critical line $\zeta(1+10^{10} i)$, giving:
[0.5418173564211820524034624121659667953558431041044457216964715299375307905569065108676877748904826898 +/- 1.02e-101] + [0.6353035818958803226792469545696694740690293250470023077031856365134468567333873901985580729268928249 +/- 4.46e-101]i
Missing features
As always, there are things left to do:
- Right now, the Riemann-Siegel formula is only used if $s$ is exact. The formula has internal discontinuities which cause trouble when it is applied directly to an interval. This is a minor technical problem that will be fixed soon.
- The Riemann-Siegel formula is not currently used to compute derivatives. Figuring out how to do this rigorously will be much more complicated.
- Evaluation works for general complex s, i.e. not necessarily with real part 1/2. However, this requires roughly twice the work due to computing both $\sum_{k=1}^N 1/k^s$ and $\sum_{k=1}^N 1/k^{1-s}$. In mpmath, these computations are combined to save time. This trick should be possible to implement in Arb as well.
- The work to evaluate a single zeta value could easily be parallelized. This is not a critical improvement, because in practice one is usually interested in computing several values which can simply be processed in parallel by the user. Nonetheless, I will probably implement parallelization for single values in the near future.
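To illustrate the algebraic identity behind the shared-sum trick mentioned above (a plain-Python sketch; mpmath actually merges the computations at a lower level, inside the sieving code): since $1/k^{1-s} = k^s/k = 1/(k \cdot k^{-s})$, both sums can be accumulated with a single complex power per term, trading one transcendental evaluation for a cheap division.

```python
def zeta_sums_shared(s, N):
    # compute S1 = sum_{k=1}^N k^(-s) and S2 = sum_{k=1}^N k^(-(1-s))
    # with one complex power per term, using k^(-(1-s)) = 1/(k * k^(-s))
    S1 = 0
    S2 = 0
    for k in range(1, N + 1):
        w = k ** (-s)        # the only transcendental operation per term
        S1 += w
        S2 += 1 / (k * w)    # equals k^(s-1)
    return S1, S2

S1, S2 = zeta_sums_shared(complex(0.75, 100.0), 1000)
```

The division is far cheaper than a second complex power, so the combined loop costs only marginally more than computing one of the sums alone.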
Lower complexity
The Riemann-Siegel formula was discovered by Bernhard Riemann in the 1850s, yet essentially remains the best available tool for evaluating the Riemann zeta function at large $t$. However, some tricks have been discovered that allow approximating the main sum of length $N = O(t^{1/2})$ in the Riemann-Siegel formula using fewer than $O(N)$ operations. Ghaith Hiary and Jonathan Bober have implemented an algorithm with $O(t^{1/3})$ complexity and used it for computations around $t = 10^{30}$. From the complexity estimates alone, a single evaluation at that height should be about as hard as one at $t = 10^{20}$ with the $O(t^{1/2})$ algorithm, since $(10^{30})^{1/3} = (10^{20})^{1/2} = 10^{10}$. It would be interesting to have a version of the $O(t^{1/3})$ method in Arb.
The best theoretical complexity for the Riemann zeta function achieved so far is $O(t^{0.308})$ by Ghaith Hiary. It's an open problem to lower this exponent or prove a nontrivial lower complexity bound (nothing currently seems to rule out the possibility of some day finding an algorithm with complexity subexponential in $\log t$).