Wrapping up FLINT's Summer of Code 2014
October 3, 2014
The next version of FLINT will feature greatly enhanced linear algebra over the integers, thanks to stellar work done by our two Google Summer of Code students this year: Abhinav Baid and Alex Best.
Abhinav implemented lattice reduction (LLL); Alex implemented Smith/Hermite normal form (SNF/HNF) and also improved our rank/nullspace/RREF computation. These have all been among the most-requested FLINT features for several years. Both Abhinav's and Alex's code has now been merged into the FLINT git trunk, and we will soon prepare a new release. This comes just in time, as Sage is now switching its integer matrix implementation to a wrapper around FLINT's fmpz_mat_t.
The new FLINT functions are competitive with state-of-the-art implementations. Here are benchmark results (courtesy of Bill Hart) for LLL-reducing an integer relations matrix of dimension d with entries between 10d and 40d bits in size, compared against Damien Stehlé's fpLLL (timings in seconds):
|d \ bits|10d|20d|30d|40d|
|---|---|---|---|---|
We see that FLINT is very close to fpLLL. One of the most important features of the new LLL is that you can specify the input as a Gram matrix (no other open source implementation of LLL currently allows that).
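To make the discussion concrete, here is a minimal textbook LLL sketch using exact rational Gram-Schmidt. This is purely didactic and is not FLINT's implementation, which (like fpLLL) uses floating-point Gram-Schmidt with provable guarantees; the function name `lll_reduce` and the repeated full orthogonalization are simplifications for clarity.

```python
from fractions import Fraction

def lll_reduce(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction of an integer lattice basis (rows).
    A didactic sketch, NOT FLINT's algorithm: it recomputes the exact
    rational Gram-Schmidt orthogonalization at every step."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Exact Gram-Schmidt: bstar are the orthogonalized vectors,
        # mu[i][j] the projection coefficients.
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        # Size-reduce b[k] against the earlier vectors.
        for j in range(k - 1, -1, -1):
            bstar, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovász condition: advance if it holds, otherwise swap and back up.
        lhs = dot(bstar[k], bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= rhs:
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]
```

The reduced basis spans the same lattice (the determinant is preserved up to sign) while the vectors become short and nearly orthogonal.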
Here are the ratios (courtesy of Alex) between FLINT's new HNF and the Pernet-Stein HNF implementation in Sage (lower is better, i.e. 0.5 means that FLINT is twice as fast):
|d \ bits|2|4|8|16|32|64|128|256|512|
|---|---|---|---|---|---|---|---|---|
A very nice speedup overall.
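For readers unfamiliar with the Hermite normal form, here is a naive row-reduction sketch: Euclidean elimination on columns, positive pivots, and entries above each pivot reduced modulo it. This is only an illustration of what HNF computes; FLINT's new code and Sage's Pernet-Stein implementation use far more sophisticated (modular and asymptotically fast) methods, since this textbook elimination suffers from coefficient blowup.

```python
def hnf(A):
    """Row Hermite normal form by naive integer row reduction.
    Illustrative only -- not the algorithm FLINT or Sage actually use."""
    H = [row[:] for row in A]
    m, n = len(H), len(H[0])
    r = 0
    for c in range(n):
        if r == m:
            break
        # Clear column c below row r by a Euclidean process on rows:
        # repeatedly move the smallest nonzero entry to the pivot row
        # and subtract multiples of it from the rows below.
        while True:
            nz = [i for i in range(r, m) if H[i][c] != 0]
            if not nz:
                break
            i_min = min(nz, key=lambda i: abs(H[i][c]))
            H[r], H[i_min] = H[i_min], H[r]
            for i in range(r + 1, m):
                q = H[i][c] // H[r][c]
                if q:
                    H[i] = [a - q * b for a, b in zip(H[i], H[r])]
            if all(H[i][c] == 0 for i in range(r + 1, m)):
                break
        if H[r][c] != 0:
            if H[r][c] < 0:          # normalize: pivots are positive
                H[r] = [-a for a in H[r]]
            for i in range(r):       # reduce entries above the pivot mod pivot
                q = H[i][c] // H[r][c]
                if q:
                    H[i] = [a - q * b for a, b in zip(H[i], H[r])]
            r += 1
    return H
```

Only unimodular row operations are used, so the result generates the same row lattice as the input.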
Regarding RREF, the git version of FLINT now computes the RREF or nullspace of a random 512 by 513 integer matrix with 1-bit entries in 0.35 seconds on my computer, whereas the old FLINT takes 14 seconds! Sage does it in 0.54 seconds.
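As a reference point for what is being computed, here is a straightforward RREF over the rationals using exact fractions. It is a sketch of the definition, not of FLINT's method: working entrywise with rationals is exactly the slow approach that fraction-free and modular techniques (as in the new FLINT code) are designed to avoid.

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form of an integer matrix over Q, with rank.
    A definitional sketch using exact rationals -- FLINT avoids rational
    arithmetic entirely for performance."""
    R = [[Fraction(x) for x in row] for row in A]
    m, n = len(R), len(R[0])
    rank = 0
    for c in range(n):
        # Find a pivot in column c at or below the current rank row.
        piv = next((i for i in range(rank, m) if R[i][c]), None)
        if piv is None:
            continue
        R[rank], R[piv] = R[piv], R[rank]
        p = R[rank][c]
        R[rank] = [x / p for x in R[rank]]       # scale pivot row to 1
        for i in range(m):                       # eliminate the column
            if i != rank and R[i][c]:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[rank])]
        rank += 1
    return R, rank
```

The rank and nullspace fall out of the same computation: the nullspace dimension of an m × n matrix is n minus the rank.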
There was a proposed third linear algebra-related GSoC project which we unfortunately did not get: optimizing linear algebra modulo small primes, mainly by wrapping BLAS (which would be an optional dependency for FLINT). Using BLAS is a trick employed by many computer algebra systems and libraries, such as LinBox, IML, Sage and Magma. It can give a speedup of 3x or more for matrices with several hundred or thousands of rows. Since FLINT does not use this trick, it sometimes loses out to other implementations when working with very large matrices. Nonetheless, FLINT now generally seems to have linear algebra over the integers that is among the fastest available (if not the fastest), at least for matrices with up to a few hundred rows and columns.
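The idea behind the trick can be sketched in a few lines. Double-precision floats represent integers exactly up to 2^53, so as long as the dot products cannot exceed that bound, a matrix product modulo a small prime can be routed through an optimized floating-point BLAS and reduced afterwards. Below is an illustration using NumPy (whose `@` operator dispatches to BLAS dgemm); the function name and overflow bound check are our own, not from any of the libraries mentioned above.

```python
import numpy as np

def matmul_mod_p(A, B, p):
    """Multiply integer matrices modulo a small prime p via floating-point
    BLAS. Illustrative sketch: exact as long as the inner dimension times
    (p-1)^2 stays below 2^53, the double mantissa limit."""
    n = A.shape[1]
    assert n * (p - 1) ** 2 < 2 ** 53, "dot products would lose precision"
    # Reduce inputs, multiply in floating point (BLAS), reduce the result.
    C = (A.astype(np.float64) % p) @ (B.astype(np.float64) % p)
    return (C % p).astype(np.int64)
```

For larger primes or dimensions, real implementations split the work into blocks with intermediate reductions so the bound is never violated.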
The BLAS/optimization project should still be available next year, unless someone steps in and does the work before that. In any case, getting two GSoC slots was more than we had hoped for, since FLINT is a small and specialized project. Getting two excellent students, on top of that, has been a special pleasure! Huge thanks to Burcin Erocal, and of course Google, for making FLINT's GSoC participation possible this year. I was the primary mentor for Alex (and had to do very little mentoring); Curtis Bright and Bill Hart mentored Abhinav.