fredrikj.net / blog /

# Arb 1.1 released, 2.0 right around the corner

*May 3, 2014*

## Arb 1.1

I've tagged Arb 1.1.0, which includes the following changes since 1.0.0:

- Improved polynomial multiplication: propagated error bounds are now always as good as with classical multiplication, and multiplying high-degree polynomials with fixed-magnitude coefficients is faster.
- Improved the partition function to support *n* bigger than a single word, and made it possible to use two threads for the computation. Now you too can compute the number of partitions of $10^{20}$ (if you have enough RAM)!
- Faster and much less memory-hungry exponentials at very high precision (see the partition function blog post for details).
- Fixed a bug in floating-point arithmetic that caused a too small bound for the rounding error to be reported when the result of an inexact operation was rounded up to a power of two. Fortunately, this bug did not affect the correctness of ball arithmetic, because operations on ball midpoints always round down (after fixing the bug, I went through the codebase to be sure). Indeed, if this bug had affected ball operations, it would almost certainly have been picked up by the ball arithmetic test code. I have now strengthened the floating-point test code to verify that the rounding bound is exactly what it is supposed to be.
- Minor optimizations to floating-point arithmetic.
- Improved argument reduction of the digamma function and short series expansions of the rising factorial.
- Removed the holonomic module for now, as it did not really do anything very useful.

## Toward Arb 2.0

At least one person has complained that it can be hard to distinguish between the following type names (all are unpronounceable and look like random permutations of each other):

- `mpf_t` (GMP/MPIR float)
- `mpfr_t` (MPFR float)
- `fmpr_t` (Arb float)
- `fmprb_t` (Arb ball)

In fact, the very first development version of Arb (two years ago) just had a ball type logically called `arb_t`. But I didn't like the design of that type, which is why I replaced it with the `fmpr_t` and `fmprb_t` types, though without changing the name of the library.

Anyway, I'm now working on a new "Arb 2.0" floating-point type called `arf_t`, to be followed by a new `arb_t` ball type (unrelated to the ancient type with that name). This name should be easier to remember, and opens up the door for much more mnemonic derived types such as `barf_t`, `barc_t`, etcetera. More seriously, my target with this rewrite is a 2x speedup (as would seem appropriate when bumping the version number from 1.x to 2.x) for ball arithmetic at precision up to a couple hundred bits, and thus for the majority of computations. There should be a smaller but still noticeable improvement up to at least a few thousand bits.

Work on the floating-point type is almost complete. Addition and multiplication are done, so it's already possible to benchmark matrix multiplication. Here are timings in seconds for multiplying two 200 x 200 matrices with entries $(\pm \sqrt{200i+j}), 0 \le i, j < 200$:

Precision (bits) | mpf_t | mpfr_t | fmpr_t | arf_t
---|---|---|---|---
32 | 0.66 | 0.79 | 0.60 | 0.50
64 | 0.66 | 0.89 | 1.27 | 0.50
128 | 0.73 | 0.94 | 1.36 | 0.56
256 | 0.91 | 1.30 | 1.69 | 0.92
512 | 1.46 | 1.96 | 2.51 | 1.61
1024 | 3.78 | 4.20 | 5.59 | 4.00
2048 | 10.58 | 9.78 | 12.53 | 11.12*

Note: (*) around 2048 bits, it becomes worthwhile to multiply via MPFR (which implements the mulhigh algorithm), so here the timing can be reduced to about 10 seconds. In fact, this is already implemented, but I didn't use it in the benchmark.

I hope that the improvement going from `fmprb_t` to the forthcoming `arb_t` type will be even more dramatic than that between the `fmpr_t` and `arf_t` types, as I will use some tricks to speed up error bound computations. But that is yet to come! Expect news within the next couple of weeks.

The `arf_t` type differs in its internal representation from the `fmpr_t` type, but is functionally equivalent: it supports correct rounding to a specified number of bits, uses dynamic allocation (so even if the working precision is 10000 bits, an exact 100-bit integer value just takes up a few words), and supports arbitrary-precision exponents as well as nan/−inf/+inf special values. However, the new representation makes certain operations more efficient. For example, conversion to an `mpfr_t` and back is very cheap (sometimes a shallow pointer copy is sufficient). Most importantly, the bound computations required for ball arithmetic can be done faster.

I'm creating completely new types that will live in parallel with the old ones for the time being. I can't easily just replace the `fmpr_t` type since quite a lot of the Arb 1.x ball arithmetic code depends on implementation details that will be done differently. I don't want to (1) break anything that currently works, or (2) be stuck for months porting absolutely everything. I should be able to release an Arb 2.0 very soon that contains a subset of the Arb 1.x functionality ported to the new types, plus all the Arb 1.x code intact for interface compatibility and feature completeness.

## New website for mpmath

In other news, I've finally set up a proper website for mpmath at http://mpmath.org. This URL used to be a redirect to the Google Code project site, but all development now takes place on GitHub.