An fmprb_poly_t represents a polynomial over the real numbers, implemented as an array of coefficients of type fmprb_struct.
Most functions are provided in two versions: an underscore method which operates directly on pre-allocated arrays of coefficients and generally has some restrictions (such as requiring the lengths to be nonzero and not supporting aliasing of the input and output arrays), and a non-underscore method which performs automatic memory management and handles degenerate cases.
An fmprb_poly_struct contains a pointer to an array of coefficients (coeffs), the used length (length), and the allocated size of the array (alloc).
An fmprb_poly_t is defined as an array of length one of type fmprb_poly_struct, permitting an fmprb_poly_t to be passed by reference.
Initializes the polynomial for use, setting it to the zero polynomial.
Clears the polynomial, deallocating all coefficients and the coefficient array.
Makes sure that the coefficient array of the polynomial contains at least len initialized coefficients.
Directly changes the length of the polynomial, without allocating or deallocating coefficients. The value should not exceed the allocation length.
Strips any trailing coefficients which are identical to zero.
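As a brief illustration of this low-level workflow, the following sketch builds \(1 + 2x + 3x^2\) directly on the coefficient array (the field and function names follow the descriptions above; exact signatures should be checked against fmprb_poly.h):

    #include "fmprb_poly.h"

    /* Sketch: construct 1 + 2x + 3x^2 at the underscore level. */
    void build_example(fmprb_poly_t f)
    {
        fmprb_poly_fit_length(f, 3);       /* ensure 3 initialized coefficients  */
        fmprb_set_si(f->coeffs + 0, 1);
        fmprb_set_si(f->coeffs + 1, 2);
        fmprb_set_si(f->coeffs + 2, 3);
        _fmprb_poly_set_length(f, 3);      /* must not exceed the allocated size */
        _fmprb_poly_normalise(f);          /* strip trailing exact zeros, if any */
    }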
Sets poly to the constant 0 or the constant 1, respectively.
Sets the coefficient with index n in poly to the value c. We require that n is nonnegative.
Sets v to the value of the coefficient with index n in poly. We require that n is nonnegative.
Given \(n \ge 0\), returns a pointer to coefficient n of poly, or NULL if n is greater than or equal to the length of poly.
Sets res to poly divided by \(x^n\), throwing away the lower coefficients. We require that n is nonnegative.
Sets res to poly multiplied by \(x^n\). We require that n is nonnegative.
Truncates poly to have length at most n, i.e. degree strictly smaller than n.
Returns the length of poly, i.e. zero if poly is identically zero, and otherwise one more than the index of the highest term that is not identically zero.
Returns the degree of poly, defined as one less than its length. Note that if one or several leading coefficients are balls containing zero, this value can be larger than the true degree of the exact polynomial represented by poly, so the return value of this function is effectively an upper bound.
Sets poly to src, rounding the coefficients to prec bits.
Prints the polynomial as an array of coefficients, printing each coefficient using fmprb_printd.
Creates a random polynomial with length at most len.
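A minimal sketch of the managed (non-underscore) interface, assuming the usual naming conventions for the accessors described above:

    #include <stdio.h>
    #include "fmprb_poly.h"

    int main(void)
    {
        fmprb_poly_t f;
        fmprb_poly_init(f);

        fmprb_poly_set_coeff_si(f, 0, -1);   /* f = -1 + x^4 */
        fmprb_poly_set_coeff_si(f, 4, 1);

        printf("length = %ld, degree = %ld\n",
            fmprb_poly_length(f), fmprb_poly_degree(f));
        fmprb_poly_printd(f, 15);            /* print coefficients to 15 digits */

        fmprb_poly_clear(f);
        return 0;
    }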
Returns nonzero iff poly1 contains poly2.
Returns nonzero iff A and B are equal as polynomial balls, i.e. all coefficients have equal midpoint and radius.
Returns nonzero iff poly1 overlaps with poly2. The underscore function requires that len1 is at least as large as len2.
Sets {C, max(lenA, lenB)} to the sum of {A, lenA} and {B, lenB}. Allows aliasing of the input and output operands.
Sets C to the sum of A and B.
Sets {C, max(lenA, lenB)} to the difference of {A, lenA} and {B, lenB}. Allows aliasing of the input and output operands.
Sets C to the difference of A and B.
Sets C to the negation of A.
Sets C to A multiplied by \(2^c\).
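For example, assuming the addition and scaling functions take a target precision as their last argument where rounding occurs (the name of the \(2^c\) scaling function is an assumption), a sum followed by an exact power-of-two scaling might look like:

    /* Sketch: C = (A + B) * 2^(-10) at 53-bit precision. */
    void add_scale_demo(fmprb_poly_t C, const fmprb_poly_t A,
        const fmprb_poly_t B)
    {
        fmprb_poly_add(C, A, B, 53);               /* rounded to 53 bits     */
        fmprb_poly_scalar_mul_2exp_si(C, C, -10);  /* exact scaling by 2^-10 */
    }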
Sets {C, n} to the product of {A, lenA} and {B, lenB}, truncated to length n. The output is not allowed to be aliased with either of the inputs. We require \(\mathrm{lenA} \ge \mathrm{lenB} > 0\), \(n > 0\), \(\mathrm{lenA} + \mathrm{lenB} - 1 \ge n\).
The classical version uses a plain loop. This has good numerical stability but gets slow for large n.
The ztrunc version puts each input polynomial on a common exponent, truncates to prec bits, and multiplies exactly over the integers. The output error is computed by cross-multiplying the max norms. This is fast but has poor numerical stability unless all coefficients are of the same magnitude.
The block version decomposes the product into several subproducts which are computed exactly over the integers. This is typically nearly as fast as ztrunc, and the numerical stability is essentially as good as classical.
The block_scaled version attempts to find an integer \(c\) such that \(A(2^c x)\) and \(B(2^c x)\) have slowly varying coefficients, then multiplies the scaled polynomials using the block algorithm, and finally unscales the result.
The scaling factor \(c\) is chosen in a quick, heuristic way by picking the first and last nonzero terms in each polynomial. If the indices in \(A\) are \(a_2, a_1\) and the log-2 magnitudes are \(e_2, e_1\), and the indices in \(B\) are \(b_2, b_1\) with corresponding magnitudes \(f_2, f_1\), then we compute \(c\) as the weighted arithmetic mean of the slopes, rounded to the nearest integer:
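\[
c \approx \frac{(e_1 - e_2) + (f_1 - f_2)}{(a_1 - a_2) + (b_1 - b_2)}
\]
(this is the weighted mean implied by the description above; the exact index and sign convention should be checked against the implementation).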
This strategy is used because it is simple. It is not optimal in all cases, but will typically give good performance when multiplying two power series with a similar decay rate.
The default algorithm chooses the classical algorithm for small polynomials, the block algorithm for medium polynomials, and the block_scaled algorithm for large polynomials.
If the input pointers are identical (and the lengths are the same), they are assumed to represent the same polynomial, and its square is computed.
Sets C to the product of A and B, truncated to length n. If the same variable is passed for A and B, sets C to the square of A truncated to length n.
Sets {C, lenA + lenB - 1} to the product of {A, lenA} and {B, lenB}. The output is not allowed to be aliased with either of the inputs. We require \(\mathrm{lenA} \ge \mathrm{lenB} > 0\). This function is implemented as a simple wrapper for _fmprb_poly_mullow().
If the input pointers are identical (and the lengths are the same), they are assumed to represent the same polynomial, and its square is computed.
Sets C to the product of A and B. If the same variable is passed for A and B, sets C to the square of A.
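As a sketch (assuming the standard argument order of output, inputs, truncation length, precision):

    /* Sketch: full product and truncated square of a power series. */
    void product_demo(fmprb_poly_t AB, fmprb_poly_t A2,
        const fmprb_poly_t A, const fmprb_poly_t B, long n, long prec)
    {
        fmprb_poly_mul(AB, A, B, prec);       /* full product               */
        fmprb_poly_mullow(A2, A, A, n, prec); /* square, truncated mod x^n  */
    }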
Sets {Q, len} to the power series inverse of {A, Alen}. Uses Newton iteration.
Sets Q to the power series inverse of A, truncated to length n.
Sets {Q, n} to the power series quotient of {A, Alen} by {B, Blen}. Uses Newton iteration followed by multiplication.
Sets Q to the power series quotient A divided by B, truncated to length n.
Performs polynomial division with remainder, computing a quotient \(Q\) and a remainder \(R\) such that \(A = BQ + R\). The leading coefficient of \(B\) must not contain zero. The implementation reverses the inputs and performs power series division.
Divides \(A\) by the polynomial \(x - c\), computing the quotient \(Q\) as well as the remainder \(R = A(c)\).
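A sketch of a consistency check built on the division functions described above (function names follow the surrounding descriptions; exact signatures are assumptions to verify against the header):

    /* Sketch: after divrem, B*Q + R should overlap A coefficientwise. */
    void divrem_check(const fmprb_poly_t A, const fmprb_poly_t B, long prec)
    {
        fmprb_poly_t Q, R, T;
        fmprb_poly_init(Q); fmprb_poly_init(R); fmprb_poly_init(T);

        fmprb_poly_divrem(Q, R, A, B, prec);  /* leading coeff of B must not contain 0 */

        fmprb_poly_mul(T, B, Q, prec);
        fmprb_poly_add(T, T, R, prec);        /* T should overlap A */

        fmprb_poly_clear(Q); fmprb_poly_clear(R); fmprb_poly_clear(T);
    }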
Sets res to the composition \(h(x) = f(g(x))\) where \(f\) is given by poly1 and \(g\) is given by poly2, respectively using Horner’s rule, divide-and-conquer, and an automatic choice between the two algorithms. The underscore methods do not support aliasing of the output with either input polynomial.
Sets res to the power series composition \(h(x) = f(g(x))\) truncated to order \(O(x^n)\) where \(f\) is given by poly1 and \(g\) is given by poly2, respectively using Horner’s rule, the Brent-Kung baby step-giant step algorithm, and an automatic choice between the two algorithms. We require that the constant term in \(g(x)\) is exactly zero. The underscore methods do not support aliasing of the output with either input polynomial.
Sets \(h\) to the power series reversion of \(f\), i.e. the expansion of the compositional inverse function \(f^{-1}(x)\), truncated to order \(O(x^n)\), using respectively Lagrange inversion, Newton iteration, fast Lagrange inversion, and a default algorithm choice.
We require that the constant term in \(f\) is exactly zero and that the linear term is nonzero. The underscore methods assume that flen is at least 2, and do not support aliasing.
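A sketch combining composition and reversion (assuming the series functions take the truncation order and precision as trailing arguments):

    /* Sketch: the reversion g of f satisfies f(g(x)) = x + O(x^n). */
    void revert_check(const fmprb_poly_t f, long n, long prec)
    {
        fmprb_poly_t g, h;
        fmprb_poly_init(g); fmprb_poly_init(h);

        fmprb_poly_revert_series(g, f, n, prec);      /* compositional inverse        */
        fmprb_poly_compose_series(h, f, g, n, prec);  /* should contain x, modulo x^n */

        fmprb_poly_clear(g); fmprb_poly_clear(h);
    }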
Sets \(y = f(x)\), evaluated respectively using Horner’s rule, rectangular splitting, and an automatic algorithm choice.
Sets \(y = f(x)\) where \(x\) is a complex number, evaluating the polynomial respectively using Horner’s rule, rectangular splitting, and an automatic algorithm choice.
Sets \(y = f(x), z = f'(x)\), evaluated respectively using Horner’s rule, rectangular splitting, and an automatic algorithm choice.
When Horner’s rule is used, the only advantage of evaluating the function and its derivative simultaneously is that one does not have to generate the derivative polynomial explicitly. With the rectangular splitting algorithm, the powers can be reused, making simultaneous evaluation slightly faster.
Sets \(y = f(x), z = f'(x)\), evaluated respectively using Horner’s rule, rectangular splitting, and an automatic algorithm choice.
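For instance, a simultaneous evaluation of \(f\) and \(f'\) at \(x = 1/3\) might be written as follows (signatures assumed):

    /* Sketch: y = f(1/3), z = f'(1/3) computed simultaneously. */
    void eval_demo(const fmprb_poly_t f, long prec)
    {
        fmprb_t x, y, z;
        fmprb_init(x); fmprb_init(y); fmprb_init(z);

        fmprb_set_ui(x, 1);
        fmprb_div_ui(x, x, 3, prec);             /* x = ball containing 1/3 */

        fmprb_poly_evaluate2(y, z, f, x, prec);  /* y = f(x), z = f'(x)     */

        fmprb_clear(x); fmprb_clear(y); fmprb_clear(z);
    }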
Generates the polynomial \((x-x_0)(x-x_1)\cdots(x-x_{n-1})\).
Returns an initialized data structure capable of representing a remainder tree (product tree) of len roots.
Deallocates a tree structure as allocated using _fmprb_poly_tree_alloc.
Constructs a product tree from a given array of len roots. The tree structure must be pre-allocated to the specified length using _fmprb_poly_tree_alloc().
Evaluates the polynomial simultaneously at n given points, calling _fmprb_poly_evaluate() repeatedly.
Evaluates the polynomial simultaneously at n given points, using fast multipoint evaluation.
Recovers the unique polynomial of length at most n that interpolates the given x and y values. This implementation first interpolates in the Newton basis and then converts back to the monomial basis.
Recovers the unique polynomial of length at most n that interpolates the given x and y values. This implementation uses the barycentric form of Lagrange interpolation.
Recovers the unique polynomial of length at most n that interpolates the given x and y values, using fast Lagrange interpolation. The precomp function takes a precomputed product tree over the x values and a vector of interpolation weights as additional inputs.
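A sketch of a multipoint evaluation / interpolation round trip (the vector helpers and signatures are assumptions modeled on the descriptions above):

    /* Sketch: evaluate f at n distinct points, then interpolate back. */
    void multipoint_demo(const fmprb_poly_t f, long n, long prec)
    {
        long i;
        fmprb_ptr xs, ys;
        fmprb_poly_t g;

        xs = _fmprb_vec_init(n);
        ys = _fmprb_vec_init(n);
        fmprb_poly_init(g);

        for (i = 0; i < n; i++)
            fmprb_set_si(xs + i, i);                      /* nodes 0, 1, ..., n-1 */

        fmprb_poly_evaluate_vec_fast(ys, f, xs, n, prec); /* fast multipoint evaluation     */
        fmprb_poly_interpolate_fast(g, xs, ys, n, prec);  /* g overlaps f if length(f) <= n */

        fmprb_poly_clear(g);
        _fmprb_vec_clear(xs, n);
        _fmprb_vec_clear(ys, n);
    }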
Sets {res, len - 1} to the derivative of {poly, len}. Allows aliasing of the input and output.
Sets res to the derivative of poly.
Sets {res, len} to the integral of {poly, len - 1}. Allows aliasing of the input and output.
Sets res to the integral of poly.
Computes the Borel transform of the input polynomial, mapping \(\sum_k a_k x^k\) to \(\sum_k (a_k / k!) x^k\). The underscore method allows aliasing.
Computes the inverse Borel transform of the input polynomial, mapping \(\sum_k a_k x^k\) to \(\sum_k a_k k! x^k\). The underscore method allows aliasing.
Computes the binomial transform of the input polynomial, truncating the output to length len. The binomial transform maps the coefficients \(a_k\) in the input polynomial to the coefficients \(b_k\) in the output polynomial via \(b_n = \sum_{k=0}^n (-1)^k {n \choose k} a_k\). The binomial transform is equivalent to the power series composition \(f(x) \to (1-x)^{-1} f(x/(x-1))\), and is its own inverse.
The basecase version evaluates coefficients one by one from the definition, generating the binomial coefficients by a recurrence relation.
The convolution version uses the identity \(T(f(x)) = B^{-1}(e^x B(f(-x)))\) where \(T\) denotes the binomial transform operator and \(B\) denotes the Borel transform operator. This only costs a single polynomial multiplication, plus some scalar operations.
The default version automatically chooses an algorithm.
The underscore methods do not support aliasing, and assume that the lengths are nonzero.
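For instance, since the transform is an involution, applying it twice should recover the original polynomial (the function name and argument order are assumed from the description above):

    /* Sketch: the binomial transform is its own inverse. */
    void binomial_involution_check(const fmprb_poly_t f, long len, long prec)
    {
        fmprb_poly_t g, h;
        fmprb_poly_init(g); fmprb_poly_init(h);

        fmprb_poly_binomial_transform(g, f, len, prec);
        fmprb_poly_binomial_transform(h, g, len, prec);  /* h should overlap f (up to length len) */

        fmprb_poly_clear(g); fmprb_poly_clear(h);
    }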
Sets {res, len} to {f, flen} raised to the power exp, truncated to length len. Requires that len is no longer than the length of the power as computed without truncation (i.e. no zero-padding is performed). Does not support aliasing of the input and output, and requires that flen and len are positive. Uses binary exponentiation.
Sets res to poly raised to the power exp, truncated to length len. Uses binary exponentiation.
Sets res to {f, flen} raised to the power exp. Does not support aliasing of the input and output, and requires that flen is positive.
Sets res to poly raised to the power exp.
Sets {h, len} to the power series \(f(x)^{g(x)} = \exp(g(x) \log f(x))\) truncated to length len. This function detects special cases such as g being an exact small integer or \(\pm 1/2\), and computes such powers more efficiently. This function does not support aliasing of the output with either of the input operands. It requires that all lengths are positive, and assumes that flen and glen do not exceed len.
Sets h to the power series \(f(x)^{g(x)} = \exp(g(x) \log f(x))\) truncated to length len. This function detects special cases such as g being an exact small integer or \(\pm 1/2\), and computes such powers more efficiently.
Sets {h, len} to the power series \(f(x)^g = \exp(g \log f(x))\) truncated to length len. This function detects special cases such as g being an exact small integer or \(\pm 1/2\), and computes such powers more efficiently. This function does not support aliasing of the output with either of the input operands. It requires that all lengths are positive, and assumes that flen does not exceed len.
Sets h to the power series \(f(x)^g = \exp(g \log f(x))\) truncated to length len.
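As a sketch of the constant-exponent case (the name of the constant-exponent variant and its argument order are assumptions):

    /* Sketch: h = f(x)^(3/2) as a power series of length len. */
    void pow_demo(fmprb_poly_t h, const fmprb_poly_t f, long len, long prec)
    {
        fmprb_t g;
        fmprb_init(g);

        fmprb_set_si(g, 3);
        fmprb_mul_2exp_si(g, g, -1);    /* g = 3/2 exactly */

        fmprb_poly_pow_fmprb_series(h, f, g, len, prec);

        fmprb_clear(g);
    }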
Sets g to the power series square root of h, truncated to length n. Uses division-free Newton iteration for the reciprocal square root, followed by a multiplication.
The underscore method does not support aliasing of the input and output arrays. It requires that hlen and n are greater than zero.
Sets g to the reciprocal power series square root of h, truncated to length n. Uses division-free Newton iteration.
The underscore method does not support aliasing of the input and output arrays. It requires that hlen and n are greater than zero.
Sets res to the power series logarithm of f, truncated to length n. Uses the formula \(\log(f(x)) = \int f'(x) / f(x) dx\), adding the logarithm of the constant term in f as the constant of integration.
The underscore method supports aliasing of the input and output arrays. It requires that flen and n are greater than zero.
Sets res respectively to the power series inverse tangent, inverse sine and inverse cosine of f, truncated to length n.
Uses the formulas
\[
\operatorname{atan}(f(x)) = \int \frac{f'(x)}{1 + f(x)^2} \, dx, \qquad
\operatorname{asin}(f(x)) = \int \frac{f'(x)}{\sqrt{1 - f(x)^2}} \, dx, \qquad
\operatorname{acos}(f(x)) = -\int \frac{f'(x)}{\sqrt{1 - f(x)^2}} \, dx,
\]
adding the inverse function of the constant term in f as the constant of integration.
The underscore methods support aliasing of the input and output arrays. They require that flen and n are greater than zero.
Sets \(f\) to the power series exponential of \(h\), truncated to length \(n\).
The basecase version uses a simple recurrence for the coefficients, requiring \(O(nm)\) operations where \(m\) is the length of \(h\).
The main implementation uses Newton iteration, starting from a small number of terms given by the basecase algorithm. The complexity is \(O(M(n))\). Redundant operations in the Newton iteration are avoided by using the scheme described in [HZ2004].
The underscore methods support aliasing and allow the input to be shorter than the output, but require the lengths to be nonzero.
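A sketch of an exp/log round trip on power series (signatures assumed as elsewhere):

    /* Sketch: log(exp(h)) should give back a ball containing h, mod x^n. */
    void exp_log_check(const fmprb_poly_t h, long n, long prec)
    {
        fmprb_poly_t f, g;
        fmprb_poly_init(f); fmprb_poly_init(g);

        fmprb_poly_exp_series(f, h, n, prec);  /* f = exp(h) + O(x^n) */
        fmprb_poly_log_series(g, f, n, prec);  /* g should overlap h  */

        fmprb_poly_clear(f); fmprb_poly_clear(g);
    }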
Sets s and c to the power series sine and cosine of h, computed simultaneously.
The basecase version uses a simple recurrence for the coefficients, requiring \(O(nm)\) operations where \(m\) is the length of \(h\).
The tangent version uses the tangent half-angle formulas to compute the sine and cosine via _fmprb_poly_tan_series(). This requires \(O(M(n))\) operations. When \(h = h_0 + h_1\) where the constant term \(h_0\) is nonzero, the evaluation is done as \(\sin(h_0 + h_1) = \cos(h_0) \sin(h_1) + \sin(h_0) \cos(h_1)\), \(\cos(h_0 + h_1) = \cos(h_0) \cos(h_1) - \sin(h_0) \sin(h_1)\), to improve accuracy and avoid dividing by zero at the poles of the tangent function.
The default version automatically selects between the basecase and tangent algorithms depending on the input.
The underscore methods support aliasing and require the lengths to be nonzero.
Respectively evaluates the power series sine or cosine. These functions simply wrap _fmprb_poly_sin_cos_series(). The underscore methods support aliasing and require the lengths to be nonzero.
Sets g to the power series tangent of h.
For small n takes the quotient of the sine and cosine as computed using the basecase algorithm. For large n, uses Newton iteration to invert the inverse tangent series. The complexity is \(O(M(n))\).
The underscore version does not support aliasing, and requires the lengths to be nonzero.
Sets res to the series expansion of \(\Gamma(h(x))\), \(1/\Gamma(h(x))\), or \(\log \Gamma(h(x))\), truncated to length n.
These functions first generate the Taylor series at the constant term of h, and then call _fmprb_poly_compose_series(). The Taylor coefficients are generated using the Riemann zeta function if the constant term of h is a small integer, and with Stirling’s series otherwise.
The underscore methods support aliasing of the input and output arrays, and require that hlen and n are greater than zero.
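For example, the Taylor expansion of \(\log \Gamma(2 + x)\) might be computed as follows (the exact name of the log-gamma variant is an assumption):

    /* Sketch: series expansion of log Gamma(2 + x) to length n. */
    void lgamma_series_demo(long n, long prec)
    {
        fmprb_poly_t h, res;
        fmprb_poly_init(h);
        fmprb_poly_init(res);

        fmprb_poly_set_coeff_si(h, 0, 2);   /* h = 2 + x */
        fmprb_poly_set_coeff_si(h, 1, 1);

        fmprb_poly_lgamma_series(res, h, n, prec);

        fmprb_poly_clear(h);
        fmprb_poly_clear(res);
    }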
Sets res to the rising factorial \((f) (f+1) (f+2) \cdots (f+r-1)\), truncated to length trunc. The underscore method assumes that flen, r and trunc are at least 1, and does not support aliasing. Uses binary splitting.
Sets res to the Hurwitz zeta function \(\zeta(s,a)\) where \(s\) is a power series and \(a\) is a constant, truncated to length n. To evaluate the usual Riemann zeta function, set \(a = 1\).
If deflate is nonzero, evaluates \(\zeta(s,a) + 1/(1-s)\), which is well-defined as a limit when the constant term of \(s\) is 1. In particular, expanding \(\zeta(s,a) + 1/(1-s)\) with \(s = 1+x\) gives the Stieltjes constants
\[
\sum_{k=0}^{n-1} \frac{(-1)^k}{k!} \gamma_k(a) x^k.
\]
If \(a = 1\), this implementation uses the reflection formula if the midpoint of the constant term of \(s\) is negative.
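A sketch of extracting the Stieltjes expansion using the deflation option (the argument order, with deflate passed as an int flag, is an assumption):

    /* Sketch: deflated zeta series at s = 1 + x gives (-1)^k gamma_k(1) / k!. */
    void stieltjes_demo(long n, long prec)
    {
        fmprb_t a;
        fmprb_poly_t s, z;

        fmprb_init(a);
        fmprb_poly_init(s);
        fmprb_poly_init(z);

        fmprb_one(a);                       /* a = 1: ordinary Riemann zeta */
        fmprb_poly_set_coeff_si(s, 0, 1);   /* s = 1 + x                    */
        fmprb_poly_set_coeff_si(s, 1, 1);

        fmprb_poly_zeta_series(z, s, a, 1, n, prec);  /* deflate = 1 */
        fmprb_poly_printd(z, 15);

        fmprb_poly_clear(s);
        fmprb_poly_clear(z);
        fmprb_clear(a);
    }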
Sets res to the series expansion of the Riemann-Siegel theta function
\[
\theta(t) = \arg \Gamma\!\left(\frac{2it+1}{4}\right) - \frac{\log \pi}{2} t,
\]
where the argument of the gamma function is chosen continuously as the imaginary part of the log gamma function.
The underscore method does not support aliasing of the input and output arrays, and requires that the lengths are greater than zero.
Sets res to the series expansion of the Riemann-Siegel Z-function
\[
Z(t) = e^{i \theta(t)} \zeta\!\left(\tfrac{1}{2} + it\right).
\]
The zeros of the Z-function on the real line precisely correspond to the imaginary parts of the zeros of the Riemann zeta function on the critical line.
The underscore method supports aliasing of the input and output arrays, and requires that the lengths are greater than zero.
Given an interval \(I\) specified by convergence_interval, evaluates a bound for \(C = \sup_{t,u \in I} \frac{1}{2} |f''(t)| / |f'(u)|\), where \(f\) is the polynomial defined by the coefficients {poly, len}. The bound is obtained by evaluating \(f'(I)\) and \(f''(I)\) directly. If \(f\) has large coefficients, \(I\) must be extremely precise in order to get a finite factor.
Performs a single step with Newton’s method.
The input consists of the polynomial \(f\) specified by the coefficients {poly, len}, an interval \(x = [m-r, m+r]\) known to contain a single root of \(f\), an interval \(I\) (convergence_interval) containing \(x\) with an associated bound (convergence_factor) for \(C = \sup_{t,u \in I} \frac{1}{2} |f''(t)| / |f'(u)|\), and a working precision prec.
The Newton update consists of setting \(x' = [m'-r', m'+r']\) where \(m' = m - f(m) / f'(m)\) and \(r' = C r^2\). The expression \(m - f(m) / f'(m)\) is evaluated using ball arithmetic at a working precision of prec bits, and the rounding error during this evaluation is accounted for in the output. We now check that \(x' \subseteq I\) and \(r' < r\). If both conditions are satisfied, we set xnew to \(x'\) and return nonzero. If either condition fails, we set xnew to \(x\) and return zero, indicating that no progress was made.
Refines a precise estimate of a polynomial root to high precision by performing several Newton steps, using nearly optimally chosen precision-doubling steps.
The inputs are defined as for _fmprb_poly_newton_step, except for the precision parameters: prec is the target accuracy and eval_extra_prec is the estimated number of guard bits that need to be added to evaluate the polynomial accurately close to the root (typically, if the polynomial has large coefficients of alternating signs, this needs to be approximately the bit size of the coefficients).