
Testing Li’s criterion

March 28, 2013

In this post, I will look at testing the Riemann hypothesis via Li’s criterion and numerical evaluation. Li’s criterion involves a property of the Riemann xi function

$$\xi(s) = (s-1) \pi^{-s/2} \Gamma\left(1+\tfrac{1}{2} s\right) \zeta(s),$$

so-defined because it puts the functional equation of the Riemann zeta function into the neat form $\xi(1-s) = \xi(s)$. If we look at the Taylor expansion

$$\log \xi\left(\frac{z}{z-1}\right) = \sum_{n=0}^{\infty} \lambda_n z^n = -0.693147 + 0.0230957 z + 0.0461729 z^2 + \ldots,$$

then Li’s criterion says that the Riemann hypothesis is equivalent to the statement that $\lambda_n > 0$ for all $n > 0$. We can thus gather experimental evidence in support of the Riemann hypothesis by numerically testing that $\lambda_n > 0$ for many $n$. Better yet, if we could find an $n$ such that $\lambda_n \le 0$ (the rigorous error bounds in Arb would even allow us to prove the strict inequality $\lambda_n < 0$), we would have a proof that the Riemann hypothesis is false.

(Footnote: Li's coefficients are sometimes defined with a slightly different scaling, $\tilde \lambda_n = n \lambda_n$ for $n > 0$. I’m not sure which convention is more common.)

Computing just a handful of coefficients is very easy with mpmath, but this method is nonrigorous and becomes very slow for $n$ much beyond $10$:

>>> from mpmath import *
>>> mp.dps = 5; mp.pretty = True
>>> xi = lambda s: (s-1) * pi**(-0.5*s) * gamma(1+0.5*s) * zeta(s)
>>> taylor(lambda z: log(xi(z/(z-1))), 0, 20)
[-0.69315, 0.023096, 0.046173, 0.069213, 0.092198, 0.11511, 0.13793, 0.16064, 0.18322, 0.20566, 0.22793, 0.25003, 0.27194, 0.29363, 0.31511, 0.33634, 0.35732, 0.37803, 0.39847, 0.41862, 0.43846]

Using Arb, we can compute the $\lambda_n$ coefficients rigorously via the Taylor series of the constituent functions and some power series arithmetic. We write $\log \xi(s)$ as follows to avoid poles and logarithms of negative numbers when expanded around $s = 0$:

$$\log \xi(s) = \log(1-s) - \frac{1}{2} s \log \pi + \log \Gamma\left(1 + \tfrac{1}{2} s\right) + \log(-\zeta(s)).$$
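As a quick, nonrigorous sanity check of this rearrangement, we can compare both sides in mpmath at an arbitrary test point (the choice $s = 0.25$ below is mine, purely for illustration):

from mpmath import *

mp.dps = 25
s = mpf("0.25")  # arbitrary point with 0 < s < 1

# direct evaluation of log(xi(s))
lhs = log((s - 1) * pi**(-0.5*s) * gamma(1 + 0.5*s) * zeta(s))

# the rearranged form above
rhs = log(1 - s) - 0.5*s*log(pi) + loggamma(1 + 0.5*s) + log(-zeta(s))

print(lhs - rhs)  # should be zero up to the working precision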

From here, we need:

* The Taylor expansion of $\log \Gamma(1+z)$, which is essentially just the sequence of integer zeta values (written out right after this list)
* The Taylor expansion of the Riemann zeta function (at $s = 0$)
* A power series logarithm
* A final power series composition by $z/(z-1)$
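For reference, the expansion in the first item is the classical series

$$\log \Gamma(1+z) = -\gamma z + \sum_{k=2}^{\infty} \frac{(-1)^k \zeta(k)}{k} z^k, \qquad |z| < 1,$$

which, after the rescaling $z \to s/2$ needed to match $\Gamma\left(1+\tfrac{1}{2} s\right)$, is exactly what the log_gamma function below computes.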

All the necessary algorithmic code is already available in Arb, though it takes a bit of work to glue the parts together. In the future, I will make this kind of thing much easier by allowing you to just call power series versions of the respective special functions which take polynomial objects as input and output. Here is how you can implement it right now:

#include "fmprb.h"
#include "fmpcb.h"
#include "fmprb_poly.h"
#include "zeta.h"
#include "profiler.h"

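/* Taylor expansion of log(-zeta(s)) at s = 0, to length n
   (zeta_series with a = 1 gives the Riemann zeta function). */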
void log_zeta(fmprb_struct * z, long n, long prec)
{
    fmpcb_struct * w;
    fmprb_struct * t;
    fmpcb_t s, a;
    long i;

    t = _fmprb_vec_init(n);
    w = _fmpcb_vec_init(n);

    fmpcb_init(s);
    fmpcb_init(a);

    fmpcb_zero(s);
    fmpcb_one(a);

    zeta_series(w, s, a, 0, n, prec);

    for (i = 0; i < n; i++)
        fmprb_neg(t + i, fmpcb_realref(w + i));

    _fmprb_poly_log_series(z, t, n, prec);

    fmpcb_clear(s);
    fmpcb_clear(a);

    _fmpcb_vec_clear(w, n);
    _fmprb_vec_clear(t, n);
}

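/* Taylor expansion of log(Gamma(1 + s/2)) at s = 0, to length n, via
   log(Gamma(1+z)) = -euler*z + sum_{k>=2} (-1)^k zeta(k) z^k / k
   rescaled by z = s/2. */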
void log_gamma(fmprb_struct * z, long n, long prec)
{
    long i;

    if (n > 0) fmprb_zero(z);
    if (n > 1) fmprb_const_euler(z + 1, prec);
    if (n > 2) zeta_ui_vec(z + 2, 2, n - 2, prec);

    for (i = 2; i < n; i++)
        fmprb_div_ui(z + i, z + i, i, prec);

    for (i = 1; i < n; i += 2)
        fmprb_neg(z + i, z + i);

    for (i = 0; i < n; i++)
        fmprb_mul_2exp_si(z + i, z + i, -i);
}

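/* Taylor expansion of log(1-s) - (s/2) log(pi) at s = 0, to length n
   (despite the name, the -(s/2) log(pi) term is folded in here). */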
void log_1s(fmprb_struct * z, long n, long prec)
{
    long i;

    if (n > 0) fmprb_zero(z);

    if (n > 1)
    {
        fmprb_const_pi(z + 1, prec);
        fmprb_log(z + 1, z + 1, prec);
        fmprb_mul_2exp_si(z + 1, z + 1, -1);
        fmprb_add_ui(z + 1, z + 1, 1, prec);
    }

    for (i = 2; i < n; i++)
    {
        fmprb_one(z + i);
        fmprb_div_ui(z + i, z + i, i, prec);
    }

    _fmprb_vec_neg(z, z, n);
}

int main()
{
    fmprb_struct *z, *t, *a;
    timeit_t t0;

    long i, n, prec;

    n = 100 + 1;
    prec = 6 * n;  /* must choose wisely */

    z = _fmprb_vec_init(n);
    t = _fmprb_vec_init(n);
    a = _fmprb_vec_init(n);

    timeit_start(t0);
    log_zeta(z, n, prec);
    timeit_stop(t0);
    printf("zeta: %ld ms\n", t0->cpu);

    timeit_start(t0);
    log_gamma(t, n, prec);
    timeit_stop(t0);
    printf("gamma: %ld ms\n", t0->cpu);

    _fmprb_vec_add(z, z, t, n, prec);
    log_1s(t, n, prec);
    _fmprb_vec_add(z, z, t, n, prec);

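    /* power series of z/(z-1) = -z - z^2 - z^3 - ... */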
    for (i = 0; i < n; i++)
        fmprb_set_si(t + i, (i == 0) ? 0 : -1);

    timeit_start(t0);
    _fmprb_poly_compose_series(a, z, n, t, n, n, prec);
    timeit_stop(t0);
    printf("composition: %ld ms\n", t0->cpu);

    for (i = 0; i < n; i++)
    {
        printf("%ld: ", i); fmprb_printd(a + i, 15);
        printf("\n");
    }

    _fmprb_vec_clear(z, n);
    _fmprb_vec_clear(t, n);
    _fmprb_vec_clear(a, n);
    return 0;
}
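(For reference, building this against the Arb version of the time should look something like gcc li.c -larb -lflint -lmpfr -lgmp, though the file name and exact linker flags here are my assumptions and depend on how Arb, FLINT, MPFR and GMP are installed.)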

Now some output.

With $n = 101$, and a working precision of $6n$ bits:

zeta: 50 ms
gamma: 0 ms
composition: 10 ms
0: -0.693147180559945 +/- 6.7256e-23
1: 0.023095708966121 +/- 1.3451e-22
2: 0.0461728676140233 +/- 2.0177e-22
3: 0.0692129735181083 +/- 2.6902e-22

  (snip)

98: 1.174991905549 +/- 6.6583e-21
99: 1.18052415096895 +/- 6.7256e-21
100: 1.18603775376791 +/- 6.7929e-21

With $n = 1001$, and a working precision of $7n$ bits:

zeta: 33170 ms
gamma: 590 ms
composition: 1690 ms
0: -0.693147180559945 +/- 4.6182e-133
1: 0.023095708966121 +/- 9.2364e-133
2: 0.0461728676140233 +/- 1.3855e-132
3: 0.0692129735181083 +/- 1.8473e-132

  (snip)

998: 2.32465175910327 +/- 4.6136e-130
999: 2.32535482758614 +/- 4.6182e-130
1000: 2.32605316168647 +/- 4.6228e-130

With $n = 4001$, and a working precision of $8n$ bits:

zeta: 4144350 ms
gamma: 29470 ms
composition: 97590 ms
0: -0.693147180559945 +/- 8.1489e-634
1: 0.023095708966121 +/- 1.6298e-633
2: 0.0461728676140233 +/- 2.4447e-633
3: 0.0692129735181083 +/- 3.2596e-633

  (snip)

3998: 3.0173210353597 +/- 3.2587e-630
3999: 3.01746810245569 +/- 3.2596e-630
4000: 3.01761690589927 +/- 3.2604e-630

Unsurprisingly, the Riemann hypothesis survived this attack.

It turns out that quite a high working precision is needed to get accurate values for large $n$ (though, for fixed $n$, once the precision is high enough to give just a few accurate bits, adding $b$ bits of precision gains $b$ accurate bits). This is unfortunate, because evaluating the zeta function to high precision is expensive. It appears that the error bounds get inflated very significantly when applying the series logarithm, so a slower but more accurate version of that function would probably be a worthwhile tradeoff, since tighter bounds would allow the whole computation to run at a lower working precision (I'm guessing that a 2x overall speedup is possible, perhaps even more).

In the computations above, I guessed a reasonable working precision; you could also determine it automatically by repeated precision doubling, which at worst would be a couple of times slower than optimal.
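Here is a minimal sketch of such a doubling loop, assuming the computation in main above has been wrapped in a hypothetical helper li_coefficients(a, n, prec), and assuming a function along the lines of fmprb_rel_accuracy_bits for measuring how many accurate bits a ball carries (both names are assumptions on my part, not existing code):

/* Hypothetical: recompute at doubled precision until the last
   coefficient is accurate to at least `goal` bits. */
void li_coefficients_auto(fmprb_struct * a, long n, long goal)
{
    long prec;

    for (prec = goal + 32; ; prec *= 2)
    {
        /* hypothetical wrapper around the computation in main */
        li_coefficients(a, n, prec);

        /* assumed accessor for the relative accuracy of a ball */
        if (fmprb_rel_accuracy_bits(a + n - 1) >= goal)
            break;
    }
}

Since the cost of each step grows at least linearly with the precision, the last iteration dominates, which is where the "couple of times slower than optimal" estimate comes from.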

Reassuringly, the value of $\lambda_{4000}$ agrees with the one given by Keiper. Keiper computes the coefficients using the zeros of the Riemann zeta function (if I read the paper correctly), which is probably more efficient for large $n$ but less efficient for very high precision. It's also probably much harder to obtain guaranteed error bounds with Keiper's method (indeed, Keiper does not provide any). Maslanka has also recently computed the coefficients up to $\lambda_{3000}$.


