Long-term development of some mathematical software projects
February 28, 2022
This post discusses the current state and long-term development plans (features, maintenance, funding) of some free software projects that I help maintain or depend on (mpmath, FLINT, Arb, Calcium, Fungrim, MPFR, GMP).
I have only done minimal work on mpmath in recent years, and regret to say that bugfixes, PR reviews and releases are lagging behind severely. A possible solution is on the horizon — I have been discussing with the SymPy developers to let them take over the basic maintenance of mpmath to get things back rolling more smoothly, and this could happen imminently.
Why the neglect of mpmath? Reason number one is that mpmath development does not align all that well with my present research activities: mpmath lacks the foundations for the kind of algorithms I'm implementing in Arb, and Python is the wrong language for implementing fast algorithms. Reason number two is that there is quite a bit of technical debt standing in the way of substantial improvements. Reason number three is that mpmath already does its thing quite well for me (I use it every day), so I have no urgent reason to improve it.
There are some features I'd like to add to mpmath if I ever find the time, but the most significant improvement I can think of would be an Arb-based backend. There are currently three backends in mpmath: the pure-Python version, a pure-Python version using GMPY, and a Cython version that is part of SageMath (in a weird dependency loop, since SageMath depends on mpmath). An Arb-based backend written in Cython would be quite a bit more efficient than any of these, and could perhaps replace at least the SageMath backend entirely. (Some optimizations could also be achieved by wrapping Python-FLINT, but a dedicated mpmath backend module would be better.)
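For readers unfamiliar with how these backends work: mpmath represents a number internally as a mantissa-exponent pair of arbitrary-size integers (normalized (sign, man, exp, bc) tuples in the pure-Python backend), which is what the GMPY and Arb backends can accelerate. Here is a simplified sketch of the idea; the function name and the round-toward-zero rounding are mine for illustration, not mpmath's actual API:

```python
# Simplified sketch of the mantissa/exponent arithmetic that mpmath's
# pure-Python backend performs internally. The real backend works with
# normalized (sign, man, exp, bc) tuples and supports several rounding
# modes; this toy version rounds toward zero.

def mul_fixed(man1, exp1, man2, exp2, prec):
    """Multiply man1*2^exp1 by man2*2^exp2, truncating the product
    mantissa to at most `prec` bits."""
    man = man1 * man2
    exp = exp1 + exp2
    bits = man.bit_length()
    if bits > prec:
        shift = bits - prec
        man >>= shift          # discard low bits (round toward zero)
        exp += shift
    return man, exp

# (3 * 2^-1) * (5 * 2^-2) = 15 * 2^-3 = 1.875, exact at 53 bits
man, exp = mul_fixed(3, -1, 5, -2, prec=53)
print(man * 2.0**exp)          # 1.875
```

Every arithmetic operation bottoms out in big-integer multiplications and shifts like these, which is why swapping in GMP-backed (or Arb-backed) integer arithmetic speeds up the whole library.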
The SymPy developers also seem interested in optionally making use of FLINT and Arb (presumably via Python-FLINT) to speed up symbolic computation performance in SymPy, and that is also something I would like to assist with.
FLINT, Arb and Calcium
The main developers of FLINT in the last few years are Bill Hart, Daniel Schultz, and myself. (Several other contributors are also putting in great work, for example Albin Ahlbäck has provided some nice performance patches for fmpz functions in FLINT.) Arb and Calcium are essentially one-person operations, though I do get plenty of helpful patches and feedback from the community.
Within the next year or so, Bill Hart will step down as maintainer of FLINT, most likely to quit the mathematical software field entirely, and Daniel Schultz may leave as well. The most likely scenario is that I will remain as the sole maintainer responsible for FLINT.
The possible improvements to these projects are too numerous to even begin to list here, so I will certainly have to do some hard thinking about priorities.
Unfortunately, I have no direct funding for Arb, FLINT, Calcium or related software projects apart from my own salary, and the prospects to find such funding to cover extra staffing don't look good. Last year, I tried to apply for Inria resources to hire a research engineer to work on Arb, which was rejected. The year before that, I tried to apply for an ANR grant to fund a postdoc to work on Calcium, which was also rejected. I have considered applying for an ERC Consolidator Grant, but I'm not optimistic about that either (ERC grants are hard to get, emphatically not meant for software development, and involve a lot of bureaucracy).
As some readers already know, Maple and Wolfram Mathematica now both use FLINT and Arb, but they are unlikely to be viable sources of funding. (Wolfram plainly doesn't do open source funding. MapleSoft interacts more generously with the research community, for instance by sponsoring the open access journal Maple Transactions, but I can't see substantial donations to peripheral open source projects being in their interests.)
I am fortunately part of a different ANR project (NuSCAP) covering a few Arb-related research tasks, thanks to which I currently at least have coverage for normal working expenses for myself (travel, equipment). (I pay things like website hosting and domain names for my projects out of my own pocket, but this is not a huge expense.)
Indeed, I finally ordered a new laptop four months ago with NuSCAP money. It is about time: my existing eight-year-old laptop suffers random blackouts and is virtually an antique anyway; referees have complained that I run my benchmarks on obsolete hardware! The new laptop has yet to be delivered, presumably because of the current worldwide supply chain issues, but when it arrives (hopefully!), I will finally have a proper multicore development machine (an 8-core AMD Ryzen 5000 instead of my current dual-core Intel Core i5). This should make life easier by significantly cutting compile times, and I should finally be able to do some meaningful work on multithreaded algorithms (a weak point of both Arb and FLINT).
Anyway, people who are interested in contributing to any of these projects are most welcome to get in touch. I'm interested in direct contributions (maintenance, documentation, testing, implementation and optimization of features), in research projects involving FLINT, Arb or Calcium, and in ideas for sources of funding that could support more ambitious research and development.
Barring any major surprises, my own research in the near future will probably revolve around Arb (functionality, optimization) and Calcium. I have some vague plans to develop a C-level generics layer for FLINT, Arb and Calcium that could help interfacing externally with these libraries and avoid some internal code duplication. (Yes, I've heard about C++. It's a possible solution, not necessarily the solution.) I am also thinking about more general ideas to develop a new object system (and maybe a domain-specific programming language) for doing computer algebra and symbolic computation focused on exact and rigorous real/complex analysis, greatly generalizing the kind of functionality found in FLINT, Arb and Calcium, but that is more pie-in-the-sky and it's too early to say whether it will lead anywhere.
Fungrim is still up and running, but I have not added any new content recently. I would like to develop a more robust backend for checking and defining formal semantics for formulas. In the long term, it would be nice to link the semantics to a formal proof system (Lean, Coq?), but I don't have concrete plans for how to get this done.
MPFR is currently maintained by Paul Zimmermann and Vincent Lefèvre. MPFR is a dependency of FLINT and Arb and of a thousand other important projects (GCC, for example). The complex extension of MPFR, MPC, is maintained by Andreas Enge, who is head of my Inria team (LFANT). I have only made minor direct code contributions to MPFR itself; as far as I can recall, I contributed the code to compute Euler's constant $\gamma$, some bug reports, and some optimization suggestions.
Recently, Paul Zimmermann asked me if I'd be interested in taking over maintenance of MPFR (especially for the mathematical side of MPFR) when he retires in a few years. This would certainly be an interesting responsibility, though it might be tough to balance with my other commitments.
This is probably a good time to discuss some pros and cons of MPFR, how it could be improved in the long term, and its relation to Arb.
When I started Arb in 2012, it was based on the realization that the programming model in libraries like MPFR and mpmath is not suitable for implementing certain types of mathematical functionality: higher transcendental functions, rigorous polynomial arithmetic and linear algebra, etc. Automatic error propagation via ball arithmetic not only makes functionality easier to implement, it makes optimization easier (all transcendental functions in Arb are significantly faster than their MPFR counterparts). Another key difference is that Arb functions can "give up early", while MPFR will get slow, hang or crash on some inputs, since its contract requires terminating with a correctly rounded result even when this is arbitrarily difficult to guarantee. The Arb semantics are often more useful for computer algebra, and even for automated testing.
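To make the idea of automatic error propagation concrete, here is a toy ball arithmetic class in Python. Arb stores an arbitrary-precision midpoint and a low-precision radius; this sketch just uses machine floats and ignores the rounding error of the float operations themselves, so it illustrates the propagation formulas only:

```python
# Toy illustration of ball arithmetic (the principle behind Arb's arb_t):
# a value x is represented as the interval [mid - rad, mid + rad], and
# every operation propagates the radius automatically.

class Ball:
    def __init__(self, mid, rad=0.0):
        self.mid = mid
        self.rad = rad

    def __add__(self, other):
        # |x + y - (mx + my)| <= rx + ry
        return Ball(self.mid + other.mid, self.rad + other.rad)

    def __mul__(self, other):
        # |x*y - mx*my| <= |mx|*ry + |my|*rx + rx*ry
        rad = (abs(self.mid) * other.rad
               + abs(other.mid) * self.rad
               + self.rad * other.rad)
        return Ball(self.mid * other.mid, rad)

    def __repr__(self):
        return f"[{self.mid} +/- {self.rad}]"

x = Ball(2.0, 0.25)
y = Ball(3.0, 0.5)
print(x + y)   # [5.0 +/- 0.75]
print(x * y)   # [6.0 +/- 1.875]
```

A function built on such balls can always return a finite-width enclosure, however wide; this is the "give up early" semantics, in contrast to a correctly rounded library that must keep computing until the result can be rounded.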
There were also some technical issues with MPFR that led me to implement floating-point numbers and balls from scratch in Arb instead of wrapping mpfr_t:
- MPFR uses global state (exceptions, exponent range). This is not good library design — a criticism that also applies to the IEEE 754 standard and the C standard library.
- The mpfr_t type has a bounded exponent range, which is annoying in some cases since one cannot assume that for example multiplication by $2^e$ is exact. At least in the past, there have also been various bugs in MPFR for input near the exponent boundaries, and I suspect some bugs of this kind remain. (In Arb, exponents are bignums.)
- The mpfr_t type always allocates heap space, up to the full working precision. Arb stores data directly in the struct up to 128-bit precision, and does not allocate space for trailing zero limbs. The latter feature makes it easier to implement some algorithms with optimal asymptotic complexity. (This is a tradeoff: the MPFR model is more efficient in some situations.)
- The mpfr_t type stores a precision in each variable, while Arb passes it as a function argument. (This has pros and cons.)
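The bounded-exponent issue can be seen in miniature with machine doubles: multiplication by $2^e$ ought to be exact (it only shifts the exponent), but with a bounded exponent range it can overflow instead. MPFR's much larger (but still bounded) exponent range pushes the problem further out; Arb's bignum exponents eliminate it.

```python
# Multiplication by 2^e with a bounded exponent range, using Python's
# math.ldexp (which computes x * 2**i on machine doubles).
import math

print(math.ldexp(1.0, 1000))    # exact: 2^1000 fits in a double's exponent
try:
    math.ldexp(1.0, 2000)       # 2^2000 exceeds the exponent range
except OverflowError as e:
    print("overflow:", e)
```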
Another technical point: MPFR largely avoids using machine floating-point arithmetic internally. This is mainly done because it is very difficult to guarantee consistent cross-platform behavior (regardless of what standards documents say). I use machine FP sparingly in Arb for the same reason, though more promiscuously than MPFR. Avoiding machine FP unfortunately precludes many optimizations for modern hardware; this is a weak point of both MPFR and Arb.
I see three avenues of improvement for the kind of functionality that is overlapping between MPFR and Arb:
- Port more of the efficient algorithms in Arb to MPFR. In some sense this feels like a waste of time since the code in Arb already works, and it will be difficult to adapt some of the code to MPFR because of the semantic and technical differences discussed above. Some algorithms in Arb also depend on FLINT features (polynomial arithmetic, etc.) that I am not keen to reimplement from scratch and that would make little sense to replace with poor placeholder code. On the other hand, work of this kind could well be worth the pain, since improvements to MPFR would benefit a lot of users.
- Add more MPFR-type functionality to Arb. For example, the arb_fpwrap module could be extended with functions for MPFR and MPC types. The arf_t type in Arb could be extended to provide a more complete MPFR-like interface with correct rounding for a wide range of operations. These things are relatively easy to do, and should help projects like SageMath which use both MPFR and Arb, but will not help projects like GCC which are unlikely to add Arb as a dependency.
- Develop even better mathematical functions code that can be used as a basis for MPFR, Arb, and beyond. What I have in mind are algorithms that mainly use low-overhead machine arithmetic together with GMP mpn functions for performance, possibly in combination with increased use of precomputed tables and code generation. There are endless possibilities for this kind of work (and also endless timesinks in terms of development effort).
Also relevant is Paul Zimmermann's new CORE-MATH project, which aims to get correctly rounded fp32, fp64 and fp128 elementary functions into standard math libraries. Personally (and related to point 3 above), I'd be interested in more work on machine-precision transcendental functions with rigorous error bounds but not necessarily correct rounding, especially for complex functions beyond the scope of the IEEE 754 standard.
GMP is the computational engine behind most multiprecision software. The active GMP developers in the last few years seem to be Torbjörn Granlund (lead developer/maintainer), Marco Bodrato and Niels Möller. I am not involved in GMP development, but I depend on it heavily and have a reasonably good understanding of its internals.
The best feature of GMP is that it is rock-solid. The developers take correctness, performance and backward compatibility very seriously. I can't remember the last time I ran into a serious bug in GMP itself. I've had to patch code after upgrading GMP, but only to fix documented breaking changes, and this has never been a nuisance.
The algorithms in GMP are generally top notch and keep being improved. Just to give some examples, GMP 6.2 (the most recent version) switched from a pure Miller-Rabin test to a BPSW test for probabilistic primality testing, added noticeable GCD improvements, and added asymptotically fast binomial coefficient code (among many other improvements).
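To illustrate what changed in the primality testing: BPSW combines a strong base-2 Fermat (Miller-Rabin) test with a strong Lucas test. Here is a sketch, in Python for readability, of the first half only; GMP's actual implementation is in C and layers the Lucas test and extra Miller-Rabin rounds on top.

```python
# Strong probable prime test to base 2: one half of the BPSW test.
# The base-2 test alone has pseudoprimes (2047 = 23 * 89 is the
# smallest), which is exactly why BPSW adds the strong Lucas test;
# no composite passing both halves is known.
def is_strong_probable_prime_base2(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

print(is_strong_probable_prime_base2(97))    # True (prime)
print(is_strong_probable_prime_base2(91))    # False (91 = 7 * 13)
print(is_strong_probable_prime_base2(2047))  # True, yet composite!
```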
GMP also has various shortcomings:
- No small-value optimization in the mpz_t type
- No vector operands
- 32-bit size limit for mpz_t and in some functions
- No multithreaded algorithms
- Few optimizations for modern SIMD architectures
- No GPU support
- Uses the Schönhage-Strassen FFT instead of the faster floating-point FFT (for medium operands) and NTT (for huge operands)
- Some inconvenient API choices (e.g. use of 32-bit long in _si methods on some 64-bit systems)
- Few public functions/macros for few-word operands
- Missing public interfaces for division with precomputed inverses
- No interfaces for working with FFT-transformed operands
- Some missing arithmetic functions, e.g. truncating (high) multiplication
- No interface for preallocated scratch space in low-level functions (only available for some crypto-oriented functions)
- Missing various useful number-theoretical functions
- The floating-point type (mpf_t) is not very useful (though MPFR fills this gap)
As a result, advanced GMP users often end up working around or reimplementing parts of GMP. For example, FLINT contains its own Schönhage-Strassen FFT, its own fmpz_t type with small-value optimizations, wrappers around GMP functions needed to work correctly on 64-bit Windows, and dozens of functions with miscellaneous low-level GMP-type functionality. A lot of GMP replacement code also gets written for isolated research projects or non-free software and unfortunately never ends up in maintained libraries or even in public repositories. This state of affairs is clearly suboptimal.
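To illustrate the small-value optimization that FLINT's fmpz_t provides and mpz_t lacks: values fitting in a machine word minus a couple of tag bits are stored inline, and only larger values trigger a heap allocation. The following toy Python version shows the tagging idea; the names and the 62-bit threshold are only loosely modeled on FLINT, and the "heap" here is just a list.

```python
# Toy sketch of fmpz_t-style small-value optimization: small integers
# are stored inline in the word itself, big ones behind a "pointer".
# Real fmpz_t packs a tagged mpz pointer into the same 64-bit slot;
# here a tuple plays the role of the tagged word.

SMALL_MAX = 2**62 - 1          # fits inline (roughly FLINT's COEFF_MAX)
HEAP = []                      # stand-in for heap-allocated mpz structs

def fmpz_set(value):
    """Return the tagged word representing `value`."""
    if -SMALL_MAX <= value <= SMALL_MAX:
        return ("small", value)        # no allocation
    HEAP.append(value)                 # "allocate" a bignum
    return ("ptr", len(HEAP) - 1)

def fmpz_get(word):
    tag, payload = word
    return payload if tag == "small" else HEAP[payload]

a = fmpz_set(42)               # stored inline, no allocation
b = fmpz_set(2**100)           # too big: goes to the "heap"
print(a[0], fmpz_get(a))       # small 42
print(b[0], fmpz_get(b))
```

The payoff is that loops over mostly-small coefficients (the common case in polynomial and matrix code) never touch the allocator, which is a large constant-factor win over plain mpz_t.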
It would certainly be nice to see more improvements in GMP itself, to better live up to the promise of being "carefully designed to be as fast as possible, both for small operands and for huge operands". Unfortunately, some potential improvements to GMP would necessarily break ABI compatibility, others are difficult to make portable, and some would add bloat that is hard to justify for a core GNU library. GMP's first priority is to provide stable multiprecision arithmetic support for compilers, cryptography and the like, and this goal does not always align with shipping bleeding-edge features and performance. Another issue that may hold back GMP development is that the codebase puts up quite a few barriers for new developers: heavy reliance on assembly code, a complex build system, the need to support every platform known to GNU developers in the last 30 years, and various sources of code complexity (like conditional support for "nails").
Attempts to replace GMP with simpler or more optimized alternatives have been launched and failed. It is not too hard to beat GMP's performance in one narrow area on a specific architecture; doing so across the entire feature set is incredibly hard. (Part of the problem is that compilers are terrible at optimizing C/C++ code for multiprecision arithmetic, so GMP's meticulous assembly optimizations remain indispensable.)
Where am I going with this? Serious efforts to improve GMP could potentially benefit many downstream projects. For various reasons much of this effort is happening elsewhere (or not happening at all). It's worth having a discussion about this.