
Is this valid? Should it be another algorithm for arb_poly_eval? #413

Open
postmath opened this issue Apr 1, 2022 · 8 comments

@postmath
Contributor

postmath commented Apr 1, 2022

It looks like this will sometimes give a much tighter box than the other algorithms: when evaluating a polynomial at x = x0 +/- delta, first do a Taylor shift by x0 +/- 0, then evaluate at 0 +/- delta. For example, for 49*x^4 - 188*x^2 + 72*x + 292 (radius 0 for all coefficients) at x = -1.5 +/- 0.1, I see:

  • 9.0625 +/- 70.1889 for _arb_poly_evaluate_horner;
  • 9.0625 +/- 138.544 for _arb_poly_evaluate_rectangular;
  • 9.0625 +/- 70.1889 for _arb_poly_taylor_shift_horner then taking the constant coefficient;
  • 9.0625 +/- 138.544 for _arb_poly_taylor_shift_divconquer then taking the constant coefficient;
  • 9.0625 +/- 138.544 for _arb_poly_taylor_shift_convolution then taking the constant coefficient;
  • 9.0625 +/- 7.5839 for this new approach.

If instead we take x = -1.5 +/- 1e-6, we see:

  • 9.0625 +/- 6.36001e-4 for the *_horner options above;
  • 9.0625 +/- 1.2975e-3 for the other three options above;
  • 9.0625 +/- 2.55005e-5 for the new approach.
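
For concreteness, here is a minimal sketch (not from the issue; it uses the public arb_poly interface) that reproduces the Horner number from the first list:

#include "arb_poly.h"

int main(void)
{
    arb_poly_t f;
    arb_t x, y;

    arb_poly_init(f);
    arb_init(x);
    arb_init(y);

    /* f = 49*x^4 - 188*x^2 + 72*x + 292, all coefficient radii zero */
    arb_poly_set_coeff_si(f, 0, 292);
    arb_poly_set_coeff_si(f, 1, 72);
    arb_poly_set_coeff_si(f, 2, -188);
    arb_poly_set_coeff_si(f, 4, 49);

    /* x = -1.5 +/- 0.1 */
    arb_set_d(x, -1.5);
    mag_set_d(arb_radref(x), 0.1);

    arb_poly_evaluate_horner(y, f, x, 53);
    arb_printd(y, 10);   /* expect roughly 9.0625 +/- 70.19, as above */
    flint_printf("\n");

    arb_poly_clear(f);
    arb_clear(x);
    arb_clear(y);
    return 0;
}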
@postmath
Contributor Author

postmath commented Apr 1, 2022

Rough code:

void _arb_poly_evaluate_new(arb_t y, arb_srcptr f, slong len, const arb_t x, slong prec) {
        arb_set_arf(y, arb_midref(x));            /* y = exact midpoint of x */
        mag_zero(arb_radref(y));
        _arb_poly_taylor_shift(f, y, len, prec);  /* shift f by mid(x), in place */
        arf_zero(arb_midref(y));                  /* now y = 0 +/- rad(x) */
        mag_set(arb_radref(y), arb_radref(x));
        _arb_poly_evaluate(y, f, len, y, prec);   /* evaluate the shifted polynomial */
}

@postmath
Contributor Author

postmath commented Apr 4, 2022

And the arb_srcptr type above should of course be arb_ptr, since the Taylor shift modifies the coefficients in place. This means the function would either have a different signature than the other implementations of evaluate, or it would have to make an otherwise unnecessary copy.
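
A sketch of the copying variant that keeps the standard signature (the copy of f is the "otherwise unnecessary" one just mentioned):

void _arb_poly_evaluate_new(arb_t y, arb_srcptr f, slong len, const arb_t x, slong prec) {
        arb_ptr g = _arb_vec_init(len);
        arb_t t;
        arb_init(t);
        _arb_vec_set(g, f, len);                 /* copy, so f can stay arb_srcptr */
        arb_set_arf(t, arb_midref(x));           /* shift point: exact midpoint of x */
        _arb_poly_taylor_shift(g, t, len, prec);
        arf_zero(arb_midref(t));                 /* evaluation point: 0 +/- rad(x) */
        mag_set(arb_radref(t), arb_radref(x));
        _arb_poly_evaluate(y, g, len, t, prec);
        arb_clear(t);
        _arb_vec_clear(g, len);
}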

@fredrik-johansson
Collaborator

arb_poly_evaluate should probably not do a full Taylor shift by default since this is a higher-complexity operation (O(n^2) or O(n log n) instead of O(n)). However, it makes sense to evaluate at the midpoint and compute the error propagation separately. What happens if you compute a low-order (linear, quadratic, cubic?) Taylor enclosure for the error?

In some circumstances, you can maybe also figure out monotonicity conditions to determine optimal upper/lower bounds.
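
For illustration, a sketch of the linear case via the mean value form f(X) ⊆ f(m) + f'(X)*(X - m), m = mid(X); this is not code from the issue, and the function name is made up:

void evaluate_mean_value(arb_t y, arb_srcptr f, slong len, const arb_t x, slong prec) {
        /* assumes len >= 2 */
        arb_t m, fm, d;
        arb_ptr fprime = _arb_vec_init(len - 1);

        arb_init(m);
        arb_init(fm);
        arb_init(d);

        arb_set_arf(m, arb_midref(x));                    /* m = exact midpoint */
        _arb_poly_evaluate(fm, f, len, m, prec);          /* f(m) */

        _arb_poly_derivative(fprime, f, len, prec);       /* f' */
        _arb_poly_evaluate(d, fprime, len - 1, x, prec);  /* f'(X) over the whole ball */

        arb_sub(y, x, m, prec);                           /* X - m = 0 +/- rad(x) */
        arb_mul(y, y, d, prec);
        arb_add(y, y, fm, prec);                          /* f(m) + f'(X)*(X - m) */

        arb_clear(m);
        arb_clear(fm);
        arb_clear(d);
        _arb_vec_clear(fprime, len - 1);
}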

@postmath
Contributor Author

postmath commented Apr 5, 2022

Brilliant. Yes, a Taylor enclosure is just what's needed. I think a linear enclosure gives the results I got above; that's easy to do, and it fits within the calling sequence of _?arb_poly_evaluate* when doing the obvious conversion from a linear model plus an interval to an interval. Higher degrees might be useful for particularly wide intervals, but then I think it makes more sense to return the model and the bound separately. I propose something like

_arb_poly_to_taylor_model(mag_t radius, arb_ptr g, slong glen, arb_srcptr f, slong len, const arb_t x, slong prec)

where the user allocates and initializes glen entries for g, and upon exit, abs(f(y) - g(y)) <= radius for y in x. The straightforward implementation is O(len * glen), which should be good enough as we expect this to be used for small, constant glen.
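
A straightforwardly quadratic sketch of that interface (my reading of the proposal, not code from the thread or the PR): Taylor-shift by mid(x), keep the first glen coefficients, bound the discarded tail on |t| <= rad(x), and shift back:

void _arb_poly_to_taylor_model(mag_t radius, arb_ptr g, slong glen,
                               arb_srcptr f, slong len, const arb_t x, slong prec) {
        /* assumes 1 <= glen <= len */
        arb_ptr c = _arb_vec_init(len);
        arb_t m;
        mag_t tk, term;
        slong k;

        arb_init(m);
        mag_init(tk);
        mag_init(term);

        arb_set_arf(m, arb_midref(x));
        _arb_vec_set(c, f, len);
        _arb_poly_taylor_shift(c, m, len, prec);   /* c(t) = f(mid(x) + t) */

        /* radius >= sum_{k >= glen} |c_k| rad(x)^k, valid for |t| <= rad(x) */
        mag_zero(radius);
        mag_pow_ui(tk, arb_radref(x), glen);
        for (k = glen; k < len; k++)
        {
            arb_get_mag(term, c + k);
            mag_mul(term, term, tk);
            mag_add(radius, radius, term);
            mag_mul(tk, tk, arb_radref(x));
        }

        /* g(y) = (truncation of c)(y - mid(x)), so g approximates f on x;
           the coefficient radii picked up here would still need handling
           if g is meant to have exact coefficients */
        _arb_vec_set(g, c, glen);
        arb_neg(m, m);
        _arb_poly_taylor_shift(g, m, glen, prec);

        arb_clear(m);
        mag_clear(tk);
        mag_clear(term);
        _arb_vec_clear(c, len);
}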

I expect (optimistically) that I'll be able to implement this in the next week or two.

@postmath
Contributor Author

postmath commented Apr 5, 2022

Wait -- in my mind I was going to make g have arf_t coefficients, not arb_t ones.

@fredrik-johansson
Collaborator

fredrik-johansson commented Apr 5, 2022

For the time being I would just use arb_t coefficients with zero radii since this is easy to use with all the existing polynomial methods.

I think two separate issues here are how to evaluate an arb_poly once and how to precondition/precompile a polynomial or power series for fast and accurate repeated evaluations on a fixed interval.

You'd ideally want some kind of dedicated, opaque data structure for the latter. Basically, you'd want to truncate the coefficients to the optimal precision individually, pack them into a single mpn array, then use Horner's rule in mpn arithmetic for the evaluation. You could further split such a representation into a multiprecision body and a double tail for the terms that need <= 53 bits of precision. This is one of those things I've been thinking about for a while; just need to sit down and write some code :-)
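
To make the "precompile" idea concrete, a hypothetical layout sketch (everything here is invented for illustration; no such type exists in Arb):

typedef struct {
        mp_ptr body;        /* packed fixed-point limbs for high-precision coefficients */
        slong * offsets;    /* start of each coefficient within body */
        slong nbody;        /* number of multiprecision terms */
        double * tail;      /* coefficients needing <= 53 bits */
        slong ntail;
        mag_t err;          /* precomputed truncation/rounding error bound */
} precompiled_poly_struct;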

In any case, the interface you proposed is a good start for doing something a bit simpler (and certainly good enough for low-degree approximations). I would just swap radius and g in the argument list, and I think the most consistent with existing functions would be to put glen after x.

I disagree that "we expect this to be used for small, constant glen". It will eventually be used with glen ~= len, and it had better be quasilinear in that case. But a quadratic version that works at all is fine to start with, of course.

@postmath
Contributor Author

postmath commented Apr 6, 2022

Will do. Thanks!

@postmath
Contributor Author

postmath commented Sep 8, 2022

Hi @fredrik-johansson - just wondering if you had any further thoughts on this. There's a pull request at #414. If it's just that you currently don't have time for this, that's fine, too - then we'll merge something similar to this into our own wrapper library. Thanks!
