fredrikj.net / blog /

# Easy hypergeometric series at unity

*June 25, 2014*

In many applications, one needs the numerical value of a "balanced" hypergeometric series such as $${}_2F_1(a,b;c;z) = \sum_{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!}$$ or $${}_3F_2(a,b,c;d,e;z) = \sum_{k=0}^\infty \frac{(a)_k (b)_k (c)_k}{(d)_k (e)_k} \frac{z^k}{k!}$$

or ${}_4F_3, {}_5F_4$, etc. Here $(a)_k = \Gamma(a+k) / \Gamma(a)$ denotes a rising factorial as usual. These series are "balanced" in the sense that there are equally many factorials or gamma functions in the numerator and denominator, so convergence is controlled by the factor $z^k$. Alas, $z = 1$ is often the most interesting point, and there this factor obviously does not decrease. There is a closed-form solution for ${}_2F_1$ in this case (Gauss's summation theorem), but not for the higher hypergeometric functions. What then?

Note that we can write any balanced hypergeometric series as $$C \sum_{k=0}^\infty G(k) z^k, \quad G(k) = \frac{\Gamma(a_1+k) \cdots \Gamma(a_p+k)}{\Gamma(b_1+k) \cdots \Gamma(b_p+k)}$$

where we have pulled out a constant factor $C$ and converted the $k!$ into an extra factor $\Gamma(1+k)$ (or cancelled it against a $\Gamma(1+k)$ in the numerator). From Stirling's series, one can check that $G(k) \sim k^{\gamma}$ as $k \to \infty$, where $\gamma = \sum_{j=1}^p (a_j - b_j)$. So the balanced hypergeometric series with $z = 1$ converges when $\operatorname{Re}[\gamma] < -1$. But unless $\gamma$ happens to be a monstrously negative number, the convergence is far too slow for high-precision computation. If $\operatorname{Re}[\gamma]$ is very close to $-1$, getting even a few digits by direct summation of the series is absolutely hopeless!

The solution is to use some form of convergence acceleration. This is what mpmath attempts, and I blogged about this four years ago. To summarize: Shanks extrapolation is useless here, and Richardson extrapolation is only effective when $\gamma$ is an integer. I ended up using Euler-Maclaurin summation, which, well, "kinda-sorta" works (slowly), at least for some $\gamma$ and not too high precision. But not always:

```python
from mpmath import *
mp.dps = 15
a, b, c, d, e = mpf(16)/10 + 7j, mpf(24)/10 - 1j, sqrt(2), 3+j, sqrt(6)+1j
hyp3f2(a, b, c, d, e, 1)
```

(cue 15 seconds of dramatic music here)

```
Traceback (most recent call last):
  ...
mpmath.libmp.libhyper.NoConvergence: Euler-Maclaurin summation did not converge
```

Gnargh! Not good at all! Here $\operatorname{Re}[\gamma] \approx -1.03527$, so convergence is indeed nefariously slow (this probably ruins the numerical integration in the E-M summation).
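Incidentally, $\gamma$ for this example is easy to compute from the parameters themselves. A small mpmath sketch (the variable names simply mirror the example above, and the implicit $k!$ is counted as an extra lower parameter equal to $1$):

```python
from mpmath import mp, mpf, sqrt, re

mp.dps = 15
a, b, c = mpf(16)/10 + 7j, mpf(24)/10 - 1j, sqrt(2)
d, e = 3 + 1j, sqrt(6) + 1j
# gamma = (sum of upper parameters) - (sum of lower parameters),
# where the k! contributes an extra lower parameter equal to 1
g = (a + b + c) - (d + e + 1)
# re(g) is approximately -1.0352762, just below the divergence threshold -1
```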

A different method is given in Bogolubsky and Skorokhodov, "Fast evaluation of the hypergeometric function ${}_pF_{p-1}(a;b;z)$ at the singular point $z = 1$ by means of the Hurwitz zeta function $\zeta(\alpha,s)$", *Programming and Computer Software*, 2006, Volume 32, Issue 3, pp. 145-153 (DOI). In fact, the example above is taken from their paper.

I have known about this paper for a long time, but hesitated to implement it since the formulas for the coefficients looked messy. On closer look, they are actually not that bad. As I will show next, there is an even simpler way to compute the coefficients than the one given in the paper. The combined code for evaluating hypergeometric functions at $z = 1$ only takes 20 lines of code, assuming that we already have the gamma function, the Riemann zeta function (we use the Hurwitz zeta function, but the second parameter is a positive integer, so it's actually just the Riemann zeta function plus a finite power sum), and Bernoulli polynomials (easily implemented using a recursion formula if needed).
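For reference, here is what such a Bernoulli polynomial recursion could look like. This is a hypothetical standalone sketch over exact rationals, not the `bernpoly` that mpmath actually provides; it uses the identity $\sum_{k=0}^{n} \binom{n+1}{k} B_k(x) = (n+1)x^n$, solved for the top term $B_n(x)$:

```python
from fractions import Fraction
from math import comb

def bernoulli_poly(n, x):
    # Evaluate the Bernoulli polynomial B_n(x) by solving
    # sum_{k=0}^{n} C(n+1, k) B_k(x) = (n+1) x^n for the top term.
    if n == 0:
        return Fraction(1)
    s = sum(comb(n + 1, k) * bernoulli_poly(k, x) for k in range(n))
    return x**n - s / (n + 1)
```

For example, `bernoulli_poly(2, Fraction(0))` gives $1/6$ and `bernoulli_poly(4, Fraction(0))` gives $-1/30$; the naive recursion is exponential in $n$, so memoization would be needed to make repeated calls efficient.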

The algorithm is a combination of two tricks. The function $G(k)$ has an asymptotic expansion $$G(k) \sim k^{\gamma} + c_1 k^{\gamma-1} + c_2 k^{\gamma-2} + \ldots$$ (with $c_0 = 1$) valid as $k \to \infty$. So we can write $$\sum_{k=0}^{\infty} G(k) = \sum_{k=0}^{N-1} G(k) + \sum_{k=N}^{\infty} G(k) \sim \sum_{k=0}^{N-1} G(k) + \sum_{k=N}^{\infty} \sum_{j=0}^{M-1} c_j k^{\gamma-j}.$$ When we truncate the original series at some moderately large $N$, a small number $M$ of terms of the asymptotic series should give a good approximation of each term for all $k \ge N$. This is the first trick. The second trick is to rearrange the last sum to obtain $$\sum_{k=N}^{\infty} \sum_{j=0}^{M-1} c_j k^{\gamma-j} = \sum_{j=0}^{M-1} c_j \sum_{k=N}^{\infty} k^{\gamma-j} = \sum_{j=0}^{M-1} c_j \zeta(-\gamma+j, N)$$ where $\zeta(s,a) = \sum_{k=0}^\infty (a+k)^{-s}$ is the Hurwitz zeta function, which can be computed efficiently using Euler-Maclaurin summation. This last trick is called zeta function convergence acceleration, and it has many other uses.
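As a toy illustration of zeta function convergence acceleration (not from the paper, just the trick in its simplest form): for $\sum_{k=1}^\infty k^{-2} = \pi^2/6$, summing only the first few terms and replacing the entire tail by a single Hurwitz zeta value already gives essentially full working precision:

```python
from mpmath import mp, mpf, zeta, pi

mp.dps = 30
N = 10
# head of the series, summed directly
head = sum(mpf(1)/k**2 for k in range(1, N))
# the tail sum_{k=N}^inf k^{-2} is exactly the Hurwitz zeta value zeta(2, N)
total = head + zeta(2, N)
# total now agrees with pi^2/6 to (nearly) full working precision
```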

Now the problem is just to obtain the coefficients $c_j$. Note that we have

$$G(k) = \exp\left(\sum_{n=1}^p \log(\Gamma(a_n+k)) - \log(\Gamma(b_n+k))\right).$$ By formula 5.11.8 in the DLMF, $$\log(\Gamma(a+k)) \sim \left(a+k-\tfrac{1}{2}\right) \log(k) - k + \tfrac{1}{2} \log(2\pi) + \sum_{i=2}^{\infty} \frac{(-1)^i B_i(a)}{i (i-1) k^{i-1}}$$ holds when $k \to \infty$ (it is stated there that we should have $a \in [0,1]$, but this is clearly not necessary). Adding up these expansions, we obtain $$G(k) \sim \exp\left( \gamma \log(k) + A(1/k) \right) = k^{\gamma} \exp(A(1/k))$$ where $$A(x) = \sum_{n=1}^p \sum_{i=2}^{\infty} \frac{(-1)^i (B_i(a_n) - B_i(b_n))}{i (i-1)} x^{i-1}$$ is a power series with zero constant term. The coefficients $c_j$ now pop out if we compute the exponential function of the power series $A(x)$.

Here is the full implementation with mpmath:

```python
def exp_series(A):
    # exponential of a power series with zero constant term
    B = [1]
    for k in range(1, len(A)):
        B.append(sum(j*A[j]*B[k-j]/k for j in range(1,k+1)))
    return B

def gammaprod_series(As, Bs, M):
    # coefficients c_j of the asymptotic expansion of G(k), without the k^gamma factor
    A = [0] + [sum((-1)**k * (bernpoly(k,As[j])-bernpoly(k,Bs[j]))/(k*(k-1))
        for j in range(len(As))) for k in range(2,M+1)]
    return exp_series(A)

def hyper1(As, Bs, N, M):
    # hypergeometric series at z = 1: N direct terms plus an M-term zeta tail
    with extraprec(mp.prec):
        s = t = 1
        for j in range(1, N):
            t *= fprod(a+j-1 for a in As) / fprod(b+j-1 for b in Bs)
            s += t
        if M > 0:
            s2 = 0
            g = sum(As)-sum(Bs)
            for (j, c) in enumerate(gammaprod_series(As, Bs, M)):
                s2 += c * zeta(-(g-j), N)
            s += s2 * gammaprod(Bs, As)
        return s
```
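As a standalone sanity check of the `exp_series` recurrence (restated here over exact rationals so it needs nothing from mpmath): exponentiating the power series $A(x) = x$ must reproduce the Taylor coefficients $1/k!$ of $e^x$:

```python
from fractions import Fraction

def exp_series(A):
    # B = exp(A) as a power series, assuming A[0] = 0; the recurrence
    # follows from B' = A'B, i.e. k*B[k] = sum_j j*A[j]*B[k-j]
    B = [Fraction(1)]
    for k in range(1, len(A)):
        B.append(sum(j * A[j] * B[k - j] for j in range(1, k + 1)) / k)
    return B

coeffs = exp_series([Fraction(0), Fraction(1), Fraction(0), Fraction(0), Fraction(0)])
# coeffs == [1, 1, 1/2, 1/6, 1/24]
```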

*Edit 2014-06-26: fixed hyper1 to avoid dividing by zero for negative integer parameters.*

The working precision is doubled in the main function to circumvent any loss of accuracy in the zeta function or elsewhere (this is just a suboptimal heuristic, of course). Now we try it with the hard example, and with high precision:

```python
mp.dps = 50
a, b, c, d, e = mpf(16)/10 + 7j, mpf(24)/10 - 1j, sqrt(2), 3+j, sqrt(6)+1j
print hyper1([a,b,c], [d,e,1], 100, 10)
print hyper1([a,b,c], [d,e,1], 200, 20)
print hyper1([a,b,c], [d,e,1], 300, 30)
print hyper1([a,b,c], [d,e,1], 400, 40)
```

The output is:

```
(-1.8386690519028601298520368327843164545326107359488 - 4.7233286369910786116403837203301101298831069274951j)
(-1.8386690511111322419029645995417155507439082060626 - 4.7233286419923547231570869262609708452747158228716j)
(-1.8386690511111322419029645994904354439724950900321 - 4.7233286419923547231570869261852035804994544855457j)
(-1.838669051111132241902964599490435443972495090032 - 4.7233286419923547231570869261852035804994544855458j)
```

The last value is indeed **correct to full precision**. Mathematica agrees with it:

```
In[199]:= N[HypergeometricPFQ[{16/10 + 7 I, 24/10 - I, Sqrt[2]}, {3 + I, Sqrt[6] + I}, 1], 50]

Out[199]= -1.8386690511111322419029645994904354439724950900320 - 4.7233286419923547231570869261852035804994544855458 I
```

We get 50 digits with just $N = 300$ terms of the main series and $M = 30$ terms of the asymptotic series. The evaluation only takes 0.2 seconds. It could be done even faster with decent multi-evaluation code for the zeta function, but mpmath currently doesn't provide this. Going up to several hundred digits seems to work well too.

By contrast, consider what happens if we do no convergence acceleration at all:

```python
print hyper1([a,b,c], [d,e,1], 100, 0)
print hyper1([a,b,c], [d,e,1], 1000, 0)
print hyper1([a,b,c], [d,e,1], 10000, 0)
print hyper1([a,b,c], [d,e,1], 100000, 0)
```

```
(-7.2983680014720681627715084355524037801559880909067 - 904.27457907588532032257022492605090915896273179118j)
(277.71827990881239685094741554644515030263289969297 + 986.81591095410803675977592602521097798205544940526j)
(-462.34929641977445114546309496298588915452622813428 - 859.44010041162925556952307462831886340784594085071j)
(582.32670843855307876165179061985950381399218740935 + 676.12050657616763757180821842680454425453621796957j)
```

Even with a hundred thousand terms, we are nowhere near the ballpark. Taking just one term of the asymptotic series gets us pretty close to the right magnitude:

```python
print hyper1([a,b,c], [d,e,1], 100, 1)
print hyper1([a,b,c], [d,e,1], 1000, 1)
print hyper1([a,b,c], [d,e,1], 10000, 1)
print hyper1([a,b,c], [d,e,1], 100000, 1)
```

```
(90.948309372882257116191778727742086671730654525454 + 236.70974922309771973966280272891340184345157091749j)
(-16.343045413523289847073967722785531166042821084249 - 27.024075439971160104801093662464604873635815629349j)
(-0.075843059117630657838154958175836613724593502217172 - 2.9784296000005873446715690311399719635854610781842j)
(-2.0319203982213791526961621422730335325896973793371 - 4.8460843541629192126109290158276501384501029179505j)
```

Two terms and we're nearly in business:

```python
print hyper1([a,b,c], [d,e,1], 100, 2)
print hyper1([a,b,c], [d,e,1], 1000, 2)
print hyper1([a,b,c], [d,e,1], 10000, 2)
print hyper1([a,b,c], [d,e,1], 100000, 2)
```

```
(-18.414420018986148737987474801358589559255254901201 - 35.859309956763290845650594649631593350518335026858j)
(-1.6187732496130587318316255311684534662301752626194 - 4.4519371973552460822796478778724575246039260858465j)
(-1.8411961002835111419315030479105059273866507092272 - 4.7253629000452790193148164471866906756920003195248j)
(-1.8386422781663589754856672085111489465395717608016 - 4.7233152577800890346984038780297376722461091450247j)
```

Since the expansion of $G(k)$ is an asymptotic (divergent) series, $M$ cannot be made arbitrarily large: past the optimal truncation point, increasing $M$ gives worse results:

```python
print hyper1([a,b,c], [d,e,1], 100, 100)
print hyper1([a,b,c], [d,e,1], 200, 200)
print hyper1([a,b,c], [d,e,1], 300, 300)
```

```
(-1.8386690510961359317382563224017243051564540019873 - 4.7233286419267482390810961300485644352058515470777j)
(6.6084066611048668544725042277072365564360194555749e+57 + 9.563710960868873888062595897976634761743753910953e+56j)
(-88703377705444960847236.545379221954510755520815982 - 71126571164401998673619.19208238439441087435654002j)
```

Exercise for the reader: write a version that selects $N$ and $M$ adaptively for a given precision, test and tune the code for lots of input, and submit a patch for mpmath to speed up `hyper()` for $z = 1$.