Chebyshev approximation and how it can help (2012) (embeddedrelated.com)
140 points by y04nn 8 months ago | 19 comments



Some other useful things about Chebyshev approximations:

1. You can use a Fourier transform to get the coefficients in O(n log n) time (see the sketch after this list).

2. So, multiplying two approximations only takes O(n log n) time.

3. Also, adding, integrating, and differentiating take only O(n) time.
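
For example, here is a minimal numpy sketch of fact 1 (my illustration, not from the article): sample f at the Chebyshev extreme points and recover the coefficients with an FFT-based DCT.

    import numpy as np

    def cheb_coeffs(f, n):
        # sample f at the n+1 Chebyshev extreme points x_k = cos(pi*k/n)
        k = np.arange(n + 1)
        v = f(np.cos(np.pi * k / n))
        # mirror the samples into an even sequence and FFT it; this is
        # the standard FFT route to the type-I DCT, O(n log n) overall
        c = np.fft.rfft(np.concatenate([v, v[-2:0:-1]])).real / n
        c[0] /= 2
        c[n] /= 2
        return c  # f(x) ~ sum_j c[j] * T_j(x) on [-1, 1]

Sanity check: cheb_coeffs(lambda x: 2*x**2 - 1, 2) returns [0, 0, 1], i.e. T_2 itself. Fact 2 then falls out: to multiply two approximations, evaluate both on a fine enough grid, multiply pointwise, and transform back.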

This is why chebfun/chebpy can run so fast while magically finding roots/derivatives/etc. A couple other interesting facts:

1. Remember the double-angle formula? There's a more general recursion for the Chebyshev polynomials (sketched in code after this list):

\[ T_n(x) = 2x T_{n-1}(x) - T_{n-2}(x). \]

So, e.g.

\[ T_2(\cos\theta) = \cos(2\theta) = 2\cos^2\theta - 1 = 2\cos\theta \, T_1(\cos\theta) - T_0(\cos\theta) \]

2. Computers actually use this recursion to calculate sines and cosines! So, it's a little inefficient to code your Chebyshev polynomials using `math.sin`.

3. Using generating functions, you can get a closed form for T_n(x) that only takes O(log n) time to calculate. (Note: assuming you count multiplications as constant. However, you actually need O(log n) bits to accurately represent x, so it's more accurately O((log n)^2 log log n).)
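
Here is a small Python sketch of facts 1 and 3 from this second list (again my own illustration): the three-term recurrence, and the generating-function closed form T_n(x) = ((x + sqrt(x^2-1))^n + (x - sqrt(x^2-1))^n)/2 evaluated in O(log n) multiplications by repeated squaring.

    import cmath

    def cheb_rec(n, x):
        # T_n(x) by the recurrence T_n = 2x T_{n-1} - T_{n-2}; O(n)
        t0, t1 = 1.0, x
        for _ in range(n):
            t0, t1 = t1, 2 * x * t1 - t0
        return t0

    def powi(z, n):
        # integer power by repeated squaring: O(log n) multiplications
        r = 1
        while n:
            if n & 1:
                r *= z
            z *= z
            n >>= 1
        return r

    def cheb_closed(n, x):
        # closed form from the generating function; sqrt(x^2 - 1) is
        # imaginary for |x| < 1, so work over the complex numbers --
        # the imaginary parts cancel in the sum
        s = cmath.sqrt(x * x - 1)
        return (powi(x + s, n) + powi(x - s, n)).real / 2

Both agree, e.g. cheb_rec(5, 0.3) and cheb_closed(5, 0.3) match up to rounding.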


I’m gonna add this to my GPT system prompt as an example of how I want everything explained :)


Good luck. It used to be this clear (modulo "reasoning" abilities), but it gets dumber and more waffly with every update.


Agreed.


If you are ready to spend some precomputation time on a good approximation, you can use the Remez algorithm [1]. It is implemented in the Sollya library for machine-precision work [2,3], and it has notably been used to implement the Core Math library [4], which provides correctly rounded implementations of the libc math functions. (A rough sketch of the exchange step follows the links.)

[1]: https://en.wikipedia.org/wiki/Remez_algorithm

[2]: https://www.sollya.org/

[3]: https://www.sollya.org/sollya-weekly/sollya.php

[4]: https://core-math.gitlabpages.inria.fr/
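
To give a flavor of what the Remez exchange does, here is a crude numpy sketch (my toy illustration; Sollya's implementation is far more careful about extremum location and conditioning): solve for a polynomial whose error alternates equally over deg+2 reference points, then move the references to the extrema of the actual error and repeat.

    import numpy as np

    def remez(f, deg, a, b, iters=10):
        # start from Chebyshev extreme points mapped into [a, b]
        k = np.arange(deg + 2)
        xs = (a + b) / 2 + (b - a) / 2 * np.cos(np.pi * k / (deg + 1))
        grid = np.linspace(a, b, 4000)
        for _ in range(iters):
            # solve p(x_i) + (-1)^i * E = f(x_i) for the deg+1
            # polynomial coefficients and the levelled error E
            A = np.column_stack([np.vander(xs, deg + 1), (-1.0) ** k])
            *c, E = np.linalg.solve(A, f(xs))
            p = np.poly1d(c)
            # exchange step: between sign changes of the error, move
            # each reference to the local extremum (crude grid search)
            err = f(grid) - p(grid)
            segs = np.split(np.arange(grid.size),
                            np.where(np.diff(np.sign(err)))[0] + 1)
            if len(segs) != deg + 2:
                break  # degenerate reference set; keep the last fit
            xs = np.array([grid[s[np.argmax(np.abs(err[s]))]] for s in segs])
        return p, abs(E)

For instance, remez(np.exp, 3, 0.0, 1.0) settles after a few iterations on the degree-3 minimax approximation of exp on [0, 1], with equioscillating error abs(E).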


A great book on this subject is Approximation Theory and Approximation Practice:

https://people.maths.ox.ac.uk/trefethen/ATAP/

Also chebfun!

https://www.chebfun.org/


This is so strange: a few days ago I commented on an HN post (https://news.ycombinator.com/item?id=40582712) about when "programmer" became an acknowledged job title, and, in my comment, mentioned how I used a Chebyshev approximation followed by two iterations of Newton's method to compute sqrt, and then today this article shows up with exactly that use case!

I wrote that code (to compute sqrt 2) in 1974 or 1975 in IBM 360 Assembler. I used a conditional macro constant that increased the number of iterations of Newton's from 2 to 3 just in case the client wanted double precision.
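
That combination is easy to sketch in modern terms; here is my Python reconstruction of the idea (certainly not the original 360 assembler), using numpy's Chebyshev interpolation for brevity:

    import numpy as np
    from numpy.polynomial.chebyshev import Chebyshev

    # degree-1 Chebyshev interpolant of sqrt on [1, 4] as the initial guess
    guess = Chebyshev.interpolate(np.sqrt, 1, domain=[1, 4])

    def cheb_newton_sqrt(a, iters=2):
        # Newton's method for x^2 = a: x <- (x + a/x) / 2. each step
        # roughly squares the relative error, so the few-percent guess
        # is down to ~1e-6 after two iterations. (range reduction of a
        # into [1, 4] is omitted for brevity.)
        x = guess(a)
        for _ in range(iters):
            x = 0.5 * (x + a / x)
        return x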


chebyshev approximations are fucking awesome, but this article gives short shrift to table lookup. it does go a bit beyond nearest-neighbor interpolation to linear interpolation, and correctly points out that this gives you error that is quadratic in the distance from the x-coordinate of the nearest table entry (and therefore worst-case error quadratic in your point spacing), and that this gives you half the error of the fifth-order chebyshev approximation. it says that this is a 'rare case', but in fact you will always get a lower error from table lookup if you use enough points; it's just that with only linear interpolation, the number of points rapidly becomes impractical (a little sketch follows below)

as i understand it, other commonly-used strategies include spline interpolation (using second-, third-, or even fourth-order interpolation, requiring respectively three, four, and five multiplications, which can be done concurrently) and, in suitable cases like this example, newton iteration from an initial table-lookup guess

unlike the piecewise-taylor approach outlined early in the article, spline interpolation only requires storing a tiny amount more data than simple nearest-neighbor table lookup (potentially three more points for fourth-order interpolation, so a 256-entry table becomes 259 entries)
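
for concreteness, here is a little python sketch of the table-lookup-plus-linear-interpolation scheme discussed above (table size, range, and function are my arbitrary choices):

    import numpy as np

    N = 256
    XS = np.linspace(1.0, 4.0, N)  # table abscissas, spacing H
    YS = np.sqrt(XS)               # precomputed table of sqrt values
    H = XS[1] - XS[0]

    def lerp_sqrt(x):
        # linear interpolation between the two nearest table entries;
        # worst-case error is (max|f''|/8) * H**2, i.e. quadratic in
        # the point spacing, as described above
        i = min(int((x - XS[0]) / H), N - 2)
        t = (x - XS[i]) / H
        return (1 - t) * YS[i] + t * YS[i + 1]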

on a different topic, i think it's easy to find embedded dsp applications where the easiest solution uses fourier transforms, which usually do require high-precision floating point. machine vision, radio communication, musical applications, etc.

incidentally, if you find yourself in a situation where you actually need the taylor expansion of √(1+x) or √(½+x) or something, and you don't want to do a bunch of pencil-and-paper algebra (or don't trust yourself), pari/gp has your back:

    ? sqrt(1+x) + O(x^5)
    %5 = 1 + 1/2*x - 1/8*x^2 + 1/16*x^3 - 5/128*x^4 + O(x^5)
    ? sqrt(1+x) + O(x^7)
    %7 = 1 + 1/2*x - 1/8*x^2 + 1/16*x^3 - 5/128*x^4 + 7/256*x^5 - 21/1024*x^6 + O(x^7)
    ? sqrt(1/2+x) + O(x^5)
    %6 = 0.70710678118654752440084436210484903928 + 0.70710678118654752440084436210484903928*x - 0.35355339059327376220042218105242451964*x^2 + 0.35355339059327376220042218105242451964*x^3 - 0.44194173824159220275052772631553064955*x^4 + O(x^5)


As Boyd says in his book on Chebyshev methods: when in doubt, use Chebyshev polynomials.

I use Chebyshev polynomials extensively in finance, and I have tried problems like MNIST with Chebyshev approximations; they get close to CNNs in accuracy.

The ApproxFun Julia package is pretty cool for Chebyshev work:

https://juliaapproximation.github.io/ApproxFun.jl/latest/


What do you mean by close to CNNs?

What is your architecture? Is it just a fully connected layer of Chebyshev polynomials?


Related:

Chebyshev Approximation - https://news.ycombinator.com/item?id=10115336 - Aug 2015 (60 comments)


Nice article, thanks for sharing!


Once more, this is _exactly_ why Ada has arbitrary precision decimal arithmetic. One merely needs to specify

type Result is delta 0.0001 range -100.0 .. 100.0;

and the compiler will figure out how to give you fast math with only the accuracy and resolution that you need!


How does that feature work without a solution to the tablemaker's dilemma? Does the compiler just give up and give you arbitrary precision every time you use transcendentals, or does it give up and use lower precision if the estimate exceeds some arbitrary bound?


Looking at the spec [0], it only demands 1 ULP of precision for the elementary operations, and 2, 4, or 8 ULPs for special functions. So there's no magic 1/2-ULP guarantee.

[0] http://ada-auth.org/standards/22rm/html/RM-G-2-3.html


have you tested this? how fast and accurate were the results? or are you simply assuming that all compilers are perfect?

my experience outside of ada is that decimal arithmetic is never fast on binary computers


As I understand it, that Ada line would declare a fixed-point type, so decimal vs. binary shouldn't really come up here, except for overflow handling. (Haven't tried it, though.)
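
Right: a fixed-point value is just an integer count of 'delta' units, so the arithmetic is exact integer arithmetic whatever the base (with the caveat that Ada rounds the declared delta down to a power-of-two 'small' unless you declare a decimal type). A toy Python illustration of the representation, mine and not how GNAT actually compiles it:

    from fractions import Fraction

    DELTA = Fraction(1, 10**4)  # the 'delta 0.0001' from the declaration

    def to_fixed(x):
        # represent a value as an integer number of DELTA units
        return round(Fraction(x) / DELTA)

    def fixed_add(a, b):
        return a + b  # exact integer addition; no decimal-vs-binary issue

    def fixed_mul(a, b):
        # counts multiply into DELTA**2 units; rescale back once
        return round(Fraction(a * b) * DELTA)

    def to_float(a):
        return float(a * DELTA)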


2012


Added. Thanks!



