🌱 Taylor series

we're back with more bad sines:

[interactive plot: a taylor-series approximation of sin(x), with sliders for order and a, an error plot, and a checkbox]

this time we've got controools! you're looking at a "taylor series" approximating sin(x). loosely speaking, the order is how many steps to take in the approximation (or, the number of iterations in the for-loop, minus 1), and a is the "anchor point", around which the approximation is most accurate (notice how the error plot drops around it). the checkbox enables the triangle-wave symmetry-hack from last post.

before we continue you may have a complaint about the code:

isn't it weird that the approximation of sin(x) itself contains sin(x)? aren't we going in circles here?
well, maybe, but no: order and a would normally be constants at run-time; they've only been made dynamic so you can play with them. if order and a are constant, then every sin(a) and cos(a) in there is a constant too, and the whole thing collapses into a plain polynomial in x.
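for example, here's a hypothetical hand-unrolled version for order=3 and a=0 (my names, not the editor's code): sin(0)=0 and cos(0)=1 get baked in as constants, and no sin(x) remains inside:

```javascript
// order=3 taylor approximation of sin around a=0, fully unrolled:
// sin(x) ≈ 0 + 1*x + 0*x^2 - x^3/6 -- the sin/cos calls are gone.
function sin3(x) {
  return x - (x * x * x) / 6;
}
```

near x=0 this already agrees with sin(x) to several decimals; the error grows as you move away from the anchor.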


taylor series can also approximate other (but not all) non-siney functions, but for today we'll stick with sines. here's the general taylor series formula:

    f(x) = f(a) + f'(a)/1! * (x-a)^1 + f''(a)/2! * (x-a)^2 + f'''(a)/3! * (x-a)^3 + ...
if you're new to derivatives they might seem a bit scary (sure scares me from time to time). the tool we're using is called differentiation, and it's a tool in the calculus-toolbox. when you "differentiate a function" f(x), you find its "derivative" f'(x). now you have stuff you can look up! but tl;dr: f'(x) is a function of x, just like f(x), except it returns the slope of f(x) (and f''(x) returns the slope of f'(x), and so on).

try setting order=1 above (open in new tab to avoid scrolling). this gives you a 1st order polynomial, also known as: "a line". notice how the line always tangents ("touches, but only barely") f(x) at x=a (drag the a-slider around to see it). that's because its slope is f'(a).
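as a formula-turned-code sketch (the function name is mine), that order=1 approximation around a is just:

```javascript
// 1st order taylor approximation of sin around anchor a:
// a line through (a, sin(a)) with slope sin'(a) = cos(a).
function sinTangent(a, x) {
  return Math.sin(a) + Math.cos(a) * (x - a);
}
```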

ok sure but how do you find the derivative of a function then? yeah well that's beyond the scope of this post, but i'll give you the cheatsheet for sin(x):

    sin'(x)    =  cos(x)
    sin''(x)   = -sin(x)
    sin'''(x)  = -cos(x)
    sin''''(x) =  sin(x)

notice how the rules are circular (f''''(x)=f(x)). they are handled by the switch/case in the code, and the i%4 in switch(i%4) makes it circular.
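in code, that cycle can be sketched like this (a standalone toy in the same spirit, not the editor's exact bad()):

```javascript
// i-th derivative of sin, evaluated at a, using the 4-cycle
// sin -> cos -> -sin -> -cos -> sin -> ...
function sinDerivAt(i, a) {
  switch (i % 4) {
    case 0: return  Math.sin(a);
    case 1: return  Math.cos(a);
    case 2: return -Math.sin(a);
    default: return -Math.cos(a); // i % 4 === 3
  }
}
```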

to get a more intuitive feel for differentiation, since we've said that "f'(x) is the slope of f(x)", let's consider sin(x) in various places:

you can find these and other differentiation cheatsheets on wikipedia and elsewhere. the syntax is different (of course) due to some tabs-vs-spaces kind of nerd war in the late 17th century, but d/dx sin(x) = cos(x) means the same as sin'(x) = cos(x) *ducks*

we can also briefly explore something called numerical differentiation, which is a generic way to differentiate a function without actually knowing any "differentiation rules":

      to differentiate sin(x) we find the slope of the line between
      sin(x-delta) and sin(x+delta), for some small delta. in the code
      above we do it recursively to find higher order derivatives. one
      nasty thing here is that if you lower delta, you get better
      results early on, but it blows up faster at higher orders.
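a minimal sketch of that idea (the function name and parameters are my own, not the editor's):

```javascript
// numerically approximate the order-th derivative of f at x
// with a central difference; order > 1 recurses one level per derivative.
function numDeriv(f, x, order, delta) {
  if (order === 0) return f(x);
  const lo = numDeriv(f, x - delta, order - 1, delta);
  const hi = numDeriv(f, x + delta, order - 1, delta);
  return (hi - lo) / (2 * delta); // slope of the line between the two samples
}
```

numDeriv(Math.sin, x, 1, 1e-5) lands very close to Math.cos(x); push order up with the same delta and the floating-point rounding noise gets amplified by roughly 1/delta per level, which is the blow-up mentioned above.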
      

if you've played around with the controls you may have noticed that sometimes adjacent orders look the same. for instance, if a=0, then the plots for order=5 and order=6 look the same. this is because some of the derivatives are zero (which also makes the polynomial coefficient zero) when a=0:

this makes a=0 a good choice because you kinda get half of the calculations for free (0*x is a calculation you don't have to make)
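for example (a hypothetical standalone order=7 sine, not the editor's bad()): with a=0 only the odd powers survive, so four terms is all there is:

```javascript
// maclaurin (a=0) sine of order 7: the even-power coefficients
// are all zero, so only the x, x^3, x^5, x^7 terms remain.
function sin7(x) {
  return x - x ** 3 / 6 + x ** 5 / 120 - x ** 7 / 5040;
}
```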

here's a hack to try in the editor above (open in new tab to avoid scrolling)

now the max error is around 0.000073 (~half) and is more evenly distributed. i handpicked/eyeballed this multiplier to lower max error.

if you need sin(pi/2) to be 1 as exactly as possible, you can also try "auto-calibrating" it:
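one way to sketch that auto-calibration (badsin and calibrate are my names): sample the approximation at pi/2 once, and divide the error out globally:

```javascript
// hypothetical auto-calibration: rescale an approximation so it
// returns exactly 1 (up to rounding) at pi/2.
function calibrate(badsin) {
  const scale = 1 / badsin(Math.PI / 2); // computed once, at startup
  return (x) => badsin(x) * scale;
}
```

this trades a tiny bit of accuracy elsewhere for hitting sin(pi/2)=1 dead on.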

to get back to plotting, let's find a fine sine. to get the polynomial coefficients we can insert this at the bottom of the for-loop in bad():

print([i,fa/factorial(i)]);
and this at the bottom of the code, outside of the function, to trigger it:
bad(0);
(debug output is rendered after eval but before the plot, because otherwise this print-hack would spam output for every x coordinate due to the plotting code)

so applying everything until now for an order=9 sine, with sin(0)=0 and sin(pi/2)=1, i get...

[code: the resulting polynomial sine]
in C this actually performs slightly worse than our previous best sine from last post (which puzzles me a little bit because the previous one has a division, and the same number of multiplications, but maybe it has better parallelism?). and in terms of accuracy it's also worse! document your failures! some say there are more to come.

but one cool thing you can't take away from it: we analyzed and plotted and understood what was going on (if only a little bit) instead of just copying coefficients from wikipedia.

we have a reusable tool now!

we're going to be using it soon to implement exp2(x). but before we get to that we need another tool: delicious low-level floating-point binary hackery!

also re: Horner's method: polynomials like

c0 + c1*(x**1) + c2*(x**2) + c3*(x**3) + ...
can be rewritten as
c0 + x*(c1 + x*(c2 + x*(c3 + ...)))
cool?
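as a tiny sketch (the function name is mine), evaluating coefficients [c0, c1, c2, ...] the horner way:

```javascript
// evaluate c0 + c1*x + c2*x^2 + ... with one multiply and one
// add per coefficient, working from the highest power down.
function horner(coeffs, x) {
  let acc = 0;
  for (let i = coeffs.length - 1; i >= 0; i--) {
    acc = acc * x + coeffs[i];
  }
  return acc;
}
```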

also-also, a few last words on differentiation: it's the "magic part" of taylor series, where you'll find its strengths and weaknesses. some software like wxmaxima and wolfram alpha can differentiate functions for you (see if it can turn sin(x) into cos(x)). in the code above, you only need to change the code that calculates fa in order to approximate a completely different function (and maybe remove the triangle wave stuff that's tailored to sines).

but taylor series can't deal with functions that have discontinuities, unless their derivatives somehow "know about them". consider a square wave, defined as (mixed math/JS syntax)

f(x)=(x%1)<.5?1:-1
you could say its derivative is
f'(x)=0
because square waves are flat most of the time, and maybe it's "undefined" at the discontinuities where it jumps between 1 and -1, but neither answer helps the "taylor series engine" approximate your function in any way.

however, froos has talked about fourier series before, which can approximate square waves and, well, anything, and those can be differentiated! it probably still doesn't mean taylor series will be any good at sharp edges, but at least they'll have a chance.
