Wrapping up FLINT's Summer of Code 2014
October 3, 2014
The next version of FLINT will feature greatly enhanced linear algebra over the integers, thanks to stellar work done by our two Google Summer of Code students this year: Abhinav Baid and Alex Best.
Abhinav implemented lattice reduction (LLL); Alex implemented Smith and Hermite normal forms (SNF/HNF) and also improved our rank/nullspace/RREF computation. These have all been among the most-requested FLINT features for several years. Both Abhinav's and Alex's code has now been merged into the FLINT git trunk, and we will soon prepare a new release. This comes just in time, as Sage is now switching its integer matrix implementation to a wrapper around FLINT's fmpz_mat_t.
The new FLINT functions are competitive with state-of-the-art implementations. Here are benchmark results (courtesy of Bill Hart) for LLL-reducing an integer relations matrix of dimension d with entries from 10d to 40d bits in size, compared against Damien Stehlé's fpLLL (one table per implementation; timings in seconds):
d \ bits |   10d |    20d |    30d |    40d
32       |  0.04 |   0.1  |   0.16 |   0.22
64       |  0.84 |   1.82 |   2.97 |   4.48
96       |  4.69 |  10.78 |  17.77 |  27.64
128      | 16.27 |  38.32 |  66.07 | 108.37
160      | 42.33 | 109.62 | 196    |

d \ bits |   10d |    20d |    30d |    40d
32       |  0.06 |   0.13 |   0.19 |   0.24
64       |  1.02 |   2.12 |   3.3  |   4.5
96       |  5.24 |  11.68 |  18.75 |  26.01
128      | 17.8  |  42.57 |  70.44 | 101.36
160      | 46.56 | 120.72 | 199.08 |
We see that FLINT is very close to fpLLL. One of the most important features of the new LLL is that you can specify the input as a Gram matrix (no other open source implementation of LLL currently allows that).
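As a quick illustration, here is a minimal sketch of driving the new reduction through the fmpz_lll interface. I am assuming the context-based API (fmpz_lll_context_init / fmpz_lll) with the usual delta/eta parameters and the Z_BASIS/GRAM and APPROX representation flags; treat the exact names and signatures as assumptions and check them against the documentation of the release.

```c
#include "flint/flint.h"
#include "flint/fmpz_mat.h"
#include "flint/fmpz_lll.h"

int main(void)
{
    slong d = 32;
    fmpz_mat_t B, U;
    fmpz_lll_t fl;
    flint_rand_t state;

    flint_randinit(state);
    fmpz_mat_init(B, d, d);
    fmpz_mat_init(U, d, d);

    fmpz_mat_randtest(B, state, 100);   /* rows of B: a basis with ~100-bit entries */
    fmpz_mat_one(U);                    /* U accumulates the unimodular transformation */

    /* delta = 0.99, eta = 0.51; Z_BASIS says B holds a lattice basis
       (GRAM would say B is a Gram matrix), APPROX selects approximate
       floating-point Gram-Schmidt data -- flag names assumed from the
       fmpz_lll module. */
    fmpz_lll_context_init(fl, 0.99, 0.51, Z_BASIS, APPROX);
    fmpz_lll(B, U, fl);                 /* B is LLL-reduced in place */

    fmpz_mat_clear(B);
    fmpz_mat_clear(U);
    flint_randclear(state);
    return 0;
}
```

To reduce a Gram matrix rather than a basis, the representation flag in the context would be set to GRAM instead of Z_BASIS.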
Here are the ratios of running times (courtesy of Alex) between FLINT's new HNF and the Pernet-Stein HNF implementation in Sage (lower is better; e.g. a ratio of 0.5 means that FLINT is twice as fast):
d \ bits | 2 | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 |
30 | 0.6834 | 0.61456 | 0.6601 | 0.7135 | 0.7201 | 0.6785 | 0.6077 | 0.6247 | 0.7476 |
50 | 0.4684 | 0.5343 | 0.5328 | 0.4016 | 0.3073 | 0.2302 | 0.2070 | 0.2098 | 0.2480 |
70 | 0.5548 | 0.3918 | 0.3588 | 0.2341 | 0.1503 | 0.1210 | 0.1076 | 0.1117 | 0.1388 |
90 | 0.1379 | 0.1578 | 0.1613 | 0.2214 | 0.2954 | 0.4492 | 0.5713 | 0.5418 | 0.4356 |
110 | 0.1596 | 0.1568 | 0.1762 | 0.2141 | 0.2979 | 0.4569 | 0.5511 | 0.5179 | 0.4471 |
130 | 0.1747 | 0.1809 | 0.1932 | 0.2920 | 0.4041 | 0.5131 | 0.5367 | 0.5039 | 0.4196 |
150 | 0.1384 | 0.2127 | 0.2092 | 0.2691 | 0.3202 | 0.4880 | 0.5438 | 0.5192 | 0.4914 |
170 | 0.2267 | 0.1844 | 0.2016 | 0.2613 | 0.3657 | 0.5138 | 0.5839 | 0.5665 | 0.5127 |
190 | 0.2571 | 0.2681 | 0.3382 | 0.2975 | 0.4802 | 0.5156 | 0.5938 | 0.6588 | 0.5198 |
210 | 0.2654 | 0.2674 | 0.2351 | 0.3550 | 0.3833 | 0.5657 | 0.6014 | 0.6737 | 0.5325 |
230 | 0.2298 | 0.3567 | 0.1902 | 0.3501 | 0.4248 | 0.6056 | 0.6249 | 0.6543 | 0.5789 |
250 | 0.3325 | 0.3008 | 0.3782 | 0.3159 | 0.5064 | 0.6179 | 0.6544 | 0.6392 | 0.5813 |
A very nice speedup overall.
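For completeness, here is a minimal sketch of calling the new normal form code at the fmpz_mat level. I am assuming fmpz_mat_hnf and fmpz_mat_snf as the top-level entry points; consult the release documentation for the full set of variants.

```c
#include "flint/flint.h"
#include "flint/fmpz_mat.h"

int main(void)
{
    fmpz_mat_t A, H, S;
    flint_rand_t state;

    flint_randinit(state);
    fmpz_mat_init(A, 4, 4);
    fmpz_mat_init(H, 4, 4);
    fmpz_mat_init(S, 4, 4);

    fmpz_mat_randtest(A, state, 8);   /* random 4 x 4 matrix with up to 8-bit entries */

    fmpz_mat_hnf(H, A);               /* Hermite normal form of A */
    fmpz_mat_snf(S, A);               /* Smith normal form of A */

    fmpz_mat_print_pretty(H); flint_printf("\n");
    fmpz_mat_print_pretty(S); flint_printf("\n");

    fmpz_mat_clear(A);
    fmpz_mat_clear(H);
    fmpz_mat_clear(S);
    flint_randclear(state);
    return 0;
}
```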
Regarding RREF, the git version of FLINT now computes the RREF or nullspace of a random 512 by 513 integer matrix with 1-bit entries in 0.35 seconds on my computer, whereas the old FLINT takes 14 seconds! Sage does it in 0.54 seconds.
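Here is a sketch of what such a computation looks like in code, assuming the fmpz_mat_rref interface (fraction-free RREF returning the rank and a common denominator) and fmpz_mat_nullspace. The matrix shape and entry size mirror the benchmark above; no timings are implied.

```c
#include "flint/flint.h"
#include "flint/fmpz.h"
#include "flint/fmpz_mat.h"

int main(void)
{
    fmpz_mat_t A, R, N;
    fmpz_t den;
    flint_rand_t state;
    slong rank, nullity;

    flint_randinit(state);
    fmpz_init(den);
    fmpz_mat_init(A, 512, 513);
    fmpz_mat_init(R, 512, 513);
    fmpz_mat_init(N, 513, 513);

    fmpz_mat_randtest(A, state, 1);       /* random 512 x 513 matrix with 1-bit entries */

    rank = fmpz_mat_rref(R, den, A);      /* R/den is the RREF of A */
    nullity = fmpz_mat_nullspace(N, A);   /* columns of N span the right nullspace */

    flint_printf("rank = %wd, nullity = %wd\n", rank, nullity);

    fmpz_mat_clear(A);
    fmpz_mat_clear(R);
    fmpz_mat_clear(N);
    fmpz_clear(den);
    flint_randclear(state);
    return 0;
}
```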
There was a proposed third linear algebra-related GSoC project which we unfortunately did not get: optimizing linear algebra modulo small primes, mainly by wrapping BLAS (which would become an optional dependency for FLINT). Using BLAS is a trick employed by many computer algebra systems and libraries, such as LinBox, IML, Sage and Magma. It can give a speedup of 3x or more for matrices with several hundred or thousands of rows. Since FLINT does not use this trick, it sometimes loses out to other implementations when working with very large matrices. Nonetheless, FLINT's linear algebra over the integers now generally seems to be among the fastest available (if not the fastest), at least for matrices with up to a few hundred rows and columns.
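To make the idea concrete, here is a rough sketch of the BLAS trick for multiplying matrices modulo a small prime p: convert to doubles, multiply with dgemm, and reduce the result. This is an illustration of the general technique only, not FLINT code or any particular library's implementation; it requires a CBLAS to link against and only works when k*(p-1)^2 < 2^53, so that every entry of the product is exactly representable in a double.

```c
#include <stdlib.h>
#include <cblas.h>

/* C = A*B mod p, with A of size m x k, B of size k x n, C of size m x n,
   all stored row-major as flat arrays of entries in [0, p). */
void mul_mod_p_blas(unsigned long *C, const unsigned long *A,
                    const unsigned long *B, int m, int k, int n,
                    unsigned long p)
{
    double *Ad = malloc(sizeof(double) * m * k);
    double *Bd = malloc(sizeof(double) * k * n);
    double *Cd = malloc(sizeof(double) * m * n);
    int i;

    for (i = 0; i < m * k; i++) Ad[i] = (double) A[i];
    for (i = 0; i < k * n; i++) Bd[i] = (double) B[i];

    /* exact integer product computed over the reals by an optimized BLAS */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, Ad, k, Bd, n, 0.0, Cd, n);

    /* reduce the exact results mod p */
    for (i = 0; i < m * n; i++) C[i] = ((unsigned long) Cd[i]) % p;

    free(Ad); free(Bd); free(Cd);
}
```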
The BLAS/optimization project should still be available next year, unless someone steps in and does the work before that. In any case, getting two GSoC slots was more than we had hoped for, since FLINT is a small and specialized project. Getting two excellent students, on top of that, has been a special pleasure! Huge thanks to Burcin Erocal, and of course Google, for making FLINT's GSoC participation possible this year. I was the primary mentor for Alex (and had to do very little mentoring); Curtis Bright and Bill Hart mentored Abhinav.