
Numerical power series reversion

July 11, 2013

In some cases, fast algorithms are less numerically stable than slow algorithms (see the earlier posts about polynomial multiplication and fast multipoint evaluation). In other cases, the fast algorithms automatically turn out to be more numerically stable than the slow ones. It appears that reversion of power series (at least sometimes) falls into the latter category. I have at last implemented reversion for power series over the reals in Arb; as previously for the various exact FLINT types, the algorithms are Lagrange inversion, Newton iteration combined with Brent-Kung composition, and fast Lagrange inversion (the three variants compared in the table below).
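
For illustration, here is roughly what such a call looks like through the arb_poly interface (the identifiers below are the current ones: arb_poly_revert_series chooses a default algorithm, while arb_poly_revert_series_lagrange, arb_poly_revert_series_newton and arb_poly_revert_series_lagrange_fast select the individual variants). This is just a sketch, not the exact benchmark code.

#include "arb_poly.h"

int main(void)
{
    slong k, n = 10, prec = 10 * n;
    arb_poly_t f, g;
    arb_t t;

    arb_poly_init(f);
    arb_poly_init(g);
    arb_init(t);

    /* Build f = -log(1-x) = sum_{k>=1} x^k / k, truncated to O(x^n). */
    for (k = 1; k < n; k++)
    {
        arb_one(t);
        arb_div_ui(t, t, k, prec);
        arb_poly_set_coeff_arb(f, k, t);
    }

    /* Compositional inverse g with g(f(x)) = x + O(x^n); here g = 1 - exp(-x). */
    arb_poly_revert_series(g, f, n, prec);

    /* Print the coefficients as balls (midpoint +/- radius). */
    arb_poly_printd(g, 15);
    flint_printf("\n");

    arb_poly_clear(f);
    arb_poly_clear(g);
    arb_clear(t);
    flint_cleanup();
    return 0;
}

The last coefficient printed this way corresponds to the entries in the first block of the table below.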

Here is a comparison for reversion of $-\log(1-x) = \sum_{k \ge 1} x^k / k$ to order $O(x^n)$, with a working precision of $10n$ bits:

n    algorithm      time     last coefficient (midpoint +/- radius)
10   Lagrange       130 us   2.75573192239859e-06 +/- 8.6736e-19
10   Newton+BK      160 us   2.75573192239859e-06 +/- 1.7347e-17
10   Fast Lagrange  85 us    2.75573192239859e-06 +/- 3.4121e-24

100  Lagrange       100 ms   1.07151028812547e-156 +/- 3.9827e-59
100  Newton+BK      40 ms    1.07151028812547e-156 +/- 5.9409e-211
100  Fast Lagrange  27 ms    1.07151028812547e-156 +/- 3.0194e-245

1000 Lagrange       203 s    2.48516814326678e-2565 +/- 1.1505e+310
1000 Newton+BK      39 s     2.48516814326678e-2565 +/- 2.1049e-2504
1000 Fast Lagrange  22 s     2.48516814326678e-2565 +/- 1.1864e-2670
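
For comparison with the midpoints above: if $y = -\log(1-x)$, then $x = 1 - e^{-y}$, so the exact reverted series is $1 - e^{-x} = \sum_{k \ge 1} (-1)^{k-1} x^k / k!$, and the last reported coefficient is $1/(n-1)!$ in each case (for instance $1/9! \approx 2.7557 \cdot 10^{-6}$ for $n = 10$).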

Interestingly, the true numerical error (the difference between the computed midpoint and the mathematically exact result) is of roughly the same order of magnitude with all algorithms, and the instability occurs in the error bound calculation. This raises the question of whether the algorithms (even just the naive one) can be modified in such a way that the computed error bound more closely matches the true error.
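
One concrete way to look at this (a hypothetical sketch with illustrative precisions, again using the current function names rather than the code behind the table) is to recompute the reversion at a much higher reference precision and compare the difference of the midpoints against the radius obtained at the working precision:

#include "arb_poly.h"

int main(void)
{
    slong k, n = 100, prec = 10 * n, ref_prec = 100 * n;
    arb_poly_t f, g, gref;
    arb_t t, c, cref;
    arf_t true_err;

    arb_poly_init(f); arb_poly_init(g); arb_poly_init(gref);
    arb_init(t); arb_init(c); arb_init(cref);
    arf_init(true_err);

    /* f = -log(1-x) + O(x^n), with coefficients computed at the reference precision. */
    for (k = 1; k < n; k++)
    {
        arb_one(t);
        arb_div_ui(t, t, k, ref_prec);
        arb_poly_set_coeff_arb(f, k, t);
    }

    arb_poly_revert_series(g, f, n, prec);          /* working precision */
    arb_poly_revert_series(gref, f, n, ref_prec);   /* high-precision reference */

    arb_poly_get_coeff_arb(c, g, n - 1);
    arb_poly_get_coeff_arb(cref, gref, n - 1);

    /* The difference of the midpoints approximates the true numerical error,
       while the radius of c is the computed error bound. */
    arf_sub(true_err, arb_midref(c), arb_midref(cref), 64, ARF_RND_NEAR);

    flint_printf("last coefficient:    ");
    arb_printd(c, 15);
    flint_printf("\ntrue midpoint error: ");
    arf_printd(true_err, 5);
    flint_printf("\n");

    arb_poly_clear(f); arb_poly_clear(g); arb_poly_clear(gref);
    arb_clear(t); arb_clear(c); arb_clear(cref);
    arf_clear(true_err);
    flint_cleanup();
    return 0;
}

If the observation above holds, the printed midpoint error should come out much smaller than the radius that accompanies the last coefficient.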


