
# Arb 0.7: getting serious with power series

*August 7, 2013*

I’ve tagged version 0.7 of Arb, as the changelog was starting to grow a bit long. Most importantly, Arb now has much better support for power series, providing operations such as reversion and evaluation of various special functions, for power series over both the real and complex numbers.

Here’s a demo: the following program computes the terms in the Taylor expansion of $\tan(\log(1+\operatorname{atan}(\cos(\exp(\sqrt{1+x})))))$ at $x = 0$ to order $O(x^{500})$ using 1000-digit working precision.

```c
#include <stdio.h>
#include "fmprb_poly.h"

int main()
{
    fmprb_poly_t a, b, c, d, e, f, g;
    long n, prec;

    n = 500;
    prec = 1000 * 3.323;

    fmprb_poly_init(a);
    fmprb_poly_init(b);
    fmprb_poly_init(c);
    fmprb_poly_init(d);
    fmprb_poly_init(e);
    fmprb_poly_init(f);
    fmprb_poly_init(g);

    /* a = 1 + x */
    fmprb_poly_zero(a);
    fmprb_poly_set_coeff_si(a, 0, 1);
    fmprb_poly_set_coeff_si(a, 1, 1);

    fmprb_poly_sqrt_series(b, a, n, prec);
    fmprb_poly_exp_series(c, b, n, prec);
    fmprb_poly_cos_series(d, c, n, prec);
    fmprb_poly_atan_series(e, d, n, prec);
    fmprb_add_ui(e->coeffs, e->coeffs, 1, prec);  /* e = 1 + atan(...) */
    fmprb_poly_log_series(f, e, n, prec);
    fmprb_poly_tan_series(g, f, n, prec);

    fmprb_poly_printd(g, 10);
    printf("\n\n");

    fmprb_poly_clear(a);
    fmprb_poly_clear(b);
    fmprb_poly_clear(c);
    fmprb_poly_clear(d);
    fmprb_poly_clear(e);
    fmprb_poly_clear(f);
    fmprb_poly_clear(g);
    flint_cleanup();
    return 0;
}
```

This program takes 0.5 seconds to run (64-bit Xeon X5675 3.07 GHz), and the last coefficients are accurate to about 900 digits (setting the precision to about 1100 digits would give 1000 accurate digits while only slightly increasing the running time).
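The working precision in bits is obtained from the decimal digit count by multiplying with $\log_2(10) \approx 3.3219$ (the program above rounds this factor up to 3.323). As a sketch of the conversion (the `digits_to_bits` helper is hypothetical, not part of Arb):

```python
import math

def digits_to_bits(d):
    # Bits of working precision needed to cover d decimal digits:
    # multiply by log2(10) and round up.
    return math.ceil(d * math.log2(10))

print(digits_to_bits(1000))  # 3322, close to the 1000 * 3.323 used above
```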

In Mathematica 9.0.1, evaluating

```
Timing[Series[Tan[Log[1 + ArcTan[Cos[Exp[Sqrt[1`1000 + x]]]]]], {x, 0, 500 - 1}];]
```

on the same computer takes 752 seconds. Mathematica’s slower algorithm is more numerically stable, giving about 990 accurate digits (it does not guarantee that they are provably correct).

The gamma, log gamma and zeta functions of power series are also supported, and are much faster than in any other software I’m aware of (with the added bonus of using rigorous error bounds!). For example, one can compute the Taylor series of $\Gamma(\exp(1+x))$ at $x = 0$ with:

```c
fmprb_poly_set_coeff_si(a, 0, 1);
fmprb_poly_set_coeff_si(a, 1, 1);
fmprb_poly_exp_series(b, a, n, prec);
fmprb_poly_gamma_series(c, b, n, prec);
```

Computing 100 terms to 100 accurate digits takes 0.03 seconds, and computing 1000 terms to 1000 accurate digits takes 17 seconds. Meanwhile, Mathematica takes one minute to compute just 13 terms to 100 digit accuracy with the command `Series[Gamma[Exp[1`100+x]], {x,0,12}]` (the running time appears to increase exponentially with the number of terms).

Regarding the zeta function, the code for computing many terms in the Taylor series has been available for some time (I wrote about it in the earlier post about Arb 0.4). The news in 0.7 is that user-friendly functions are provided for the `fmprb_poly` and `fmpcb_poly` types, and they use the reflection formula in the negative half-plane. Computing no more than a few terms has also gotten much faster, as mentioned in the recent post about computing zeros of the zeta function.
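For reference, the reflection formula in question is the functional equation

$$\zeta(s) = 2^s \pi^{s-1} \sin\!\left(\frac{\pi s}{2}\right) \Gamma(1-s) \, \zeta(1-s),$$

which can be differentiated termwise: the Taylor coefficients of $\zeta$ at a point $s$ in the negative half-plane thus reduce to series multiplications involving $\sin$, $\Gamma$ and $\zeta$ expanded at $1-s$, where direct evaluation is much better behaved.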

Here is the overview of changes from 0.6 to 0.7, copied from the documentation:

- floating-point and ball functions
  - documented, added test code, and fixed bugs for various operations involving a ball containing an infinity or NaN
  - added reciprocal square root functions (fmpr_rsqrt, fmprb_rsqrt) based on mpfr_rec_sqrt
  - faster high-precision division by not computing an explicit remainder
  - slightly faster computation of pi by using the new reciprocal square root and division code
  - added an fmpr function for approximate division to speed up certain radius operations
  - added fmpr_set_d for conversion from double
  - allow use of doubles to optionally compute the partition function faster but without an error bound
  - bypass mpfr overflow when computing the exponential function to extremely high precision (approximately 1 billion digits)
  - made fmprb_exp faster for large numbers at extremely high precision by skipping the log(2) removal
  - made fmpcb_lgamma faster at high precision by speeding up the argument reduction branch computation
  - added fmprb_asin, fmprb_acos
  - added various other utility functions to the fmprb module
  - added a function for computing the Glaisher constant
  - optimized evaluation of the Riemann zeta function at high precision
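The reciprocal square root mentioned above is a natural fit for Newton’s method, which is also what mpfr_rec_sqrt is built around: the iteration $y \leftarrow y(3 - ay^2)/2$ converges quadratically to $1/\sqrt{a}$ using only multiplications and a division by 2 (a bit shift). A toy double-precision sketch, not Arb code (real implementations start from a table or hardware estimate, so that a handful of iterations suffice for any magnitude of $a$; the crude starting value here only converges quickly for moderate $a$):

```python
def rsqrt(a, iters=10):
    """Approximate 1/sqrt(a), a > 0, with the Newton iteration
    y <- y * (3 - a*y*y) / 2 (division-free apart from the halving)."""
    assert a > 0
    # Starting value chosen so that a*y^2 <= 1 < 3, which keeps the
    # iteration in its basin of convergence.
    y = 1.0 / a if a >= 1.0 else 1.0
    for _ in range(iters):
        y = y * (3.0 - a * y * y) / 2.0
    return y

print(rsqrt(2.0))  # ~0.70710678...
```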

- polynomials and power series
  - made squaring of polynomials faster than generic multiplication
  - implemented power series reversion (various algorithms) for the fmprb_poly type
  - added many fmprb_poly utility functions (shifting, truncating, setting/getting coefficients, etc.)
  - improved power series division when either operand is short
  - improved power series logarithm when the input is short
  - improved power series exponential to use the basecase algorithm for short input regardless of the output size
  - added power series square root and reciprocal square root
  - added atan, tan, sin, cos, sin_cos, asin, acos fmprb_poly power series functions
  - added Newton iteration macros to simplify various functions
  - added gamma functions of real and complex power series ([fmprb/fmpcb]_poly_[gamma/rgamma/lgamma]_series)
  - added wrappers for computing the Hurwitz zeta function of a power series ([fmprb/fmpcb]_poly_zeta_series)
  - implemented sieving and other optimizations to improve performance for evaluating the zeta function of a short power series
  - improved power series composition when the inner series is linear
  - added many fmpcb_poly versions of nearly all fmprb_poly functions
  - improved speed and stability of series composition/reversion by balancing the power table exponents
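Power series reversion means computing $g$ with $f(g(x)) = x$ given $f$ with $f(0) = 0$, $f'(0) \ne 0$. It can be done naively one coefficient at a time, since the $x^k$ coefficient of $f(g)$ equals $f_1 g_k$ plus terms depending only on lower-order coefficients of $g$; the algorithms in Arb (Newton iteration, fast Lagrange inversion, etc.) are asymptotically faster but compute the same thing. A small exact-arithmetic sketch in Python, not Arb’s algorithm:

```python
from fractions import Fraction

def mul_trunc(a, b, n):
    """Multiply two power series (coefficient lists) modulo x^n."""
    res = [Fraction(0)] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[:n - i]):
                res[i + j] += ai * bj
    return res

def compose_trunc(f, g, n):
    """Compute f(g(x)) mod x^n by Horner's rule."""
    res = [Fraction(0)] * n
    for c in reversed(f):
        res = mul_trunc(res, g, n)
        res[0] += c
    return res

def revert_series(f, n):
    """Find g with f(g(x)) = x mod x^n, given f[0] == 0, f[1] != 0.

    Solves coefficient by coefficient: with g known mod x^k, the x^k
    coefficient of f(g) is off by exactly f[1]*g[k]."""
    f = [Fraction(c) for c in f]
    assert f[0] == 0 and f[1] != 0
    g = [Fraction(0)] * n
    g[1] = 1 / f[1]
    for k in range(2, n):
        err = compose_trunc(f, g, k + 1)[k]  # computed with g[k] = 0
        g[k] = -err / f[1]
    return g

# Reverting x + x^2 gives the signed Catalan series
# x - x^2 + 2x^3 - 5x^4 + 14x^5 - ...
print(revert_series([0, 1, 1], 6))  # [0, 1, -1, 2, -5, 14]
```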

- other
  - added support for freeing all cached data by calling flint_cleanup()
  - introduced fmprb_ptr, fmprb_srcptr, fmpcb_ptr, fmpcb_srcptr typedefs for cleaner function signatures
  - various bug fixes and general cleanup

I did not post about the release of version 0.6 to this blog, so here is the overview of changes from 0.5 to 0.6 as well:

- made fast polynomial multiplication over the reals numerically stable by using a blockwise algorithm
- disabled default use of the Gauss formula for multiplication of complex polynomials, to improve numerical stability
- added division and remainder for complex polynomials
- added fast multipoint evaluation and interpolation for complex polynomials
- added missing fmprb_poly_sub and fmpcb_poly_sub functions
- faster exponentials (fmprb_exp and dependent functions) at low precision, using precomputation
- rewrote fmpr_add and fmpr_sub using mpn level code, improving efficiency at low precision
- ported the partition function implementation from flint (using ball arithmetic in all steps of the calculation to guarantee correctness)
- ported the algorithm for computing the cosine minimal polynomial from flint (using ball arithmetic to guarantee correctness)
- support using gmp instead of mpir
- only use thread-local storage when enabled in flint
- slightly faster error bounding for the zeta function
- added some other helper functions

On a final note, I’ve made available the slides from my ISSAC ’13 presentation about Arb (for which I received the best software demo award, winning an awesome gömböc courtesy of Maplesoft!).
