let's reinvent the wheel! i mean have you seen how many types of wheels there are? do we have enough?
no i mean let's reinvent sin(x): 💡 hit ctrl+enter to update the code. wheel/drag to zoom/pan in plot.
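(the interactive editor isn't reproduced in this text, but the first bad(x) is, give or take the exact constants, the classic parabola trick; a sketch, not necessarily the demo's exact code:)

const good = x => Math.sin(x);  // reference sine
// parabola through (0,0), (pi/2,1), (pi,0), mirrored into the negative half by Math.abs(x)
const bad = x => (4 / Math.PI) * x - (4 / (Math.PI * Math.PI)) * x * Math.abs(x);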
it starts to diverge outside [-pi, pi], and that's a bad sine, so let's fix it by wrapping x back into that interval:
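(a sketch of the wrap, assuming we just subtract whole 2*pi periods; the demo may do it differently)

// wrap any x back into [-pi, pi] by removing whole periods
const wrap = x => x - Math.round(x / (2 * Math.PI)) * 2 * Math.PI;
const badWrapped = x => bad(wrap(x));  // hypothetical name; in the demo this may simply replace bad(x)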
nice! the dark red plot in the background shows the logarithm of the error between good(x) and bad(x). logarithms help us "compress" many orders of magnitude into a plottable range. mousing over the highest peaks lets us find the max error of about 0.056. it doesn't sound like much, but please don't fly me to the moon in a rocket that uses this "sine" for navigation.
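(for reference, that error curve is presumably something like the line below; base-10 log shown here, the demo's base may differ)

// semi-log error: log of the absolute difference between the two sines
const logError = x => Math.log10(Math.abs(good(x) - badWrapped(x)));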
i call it a "parabolic sine" because it's just a cleverly mirrored parabola (x**2). try changing Math.abs(x) into just x in the first plot to see what i mean.
let's cheat a bit and grab a better sine from some wiki page
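(again not the demo's exact code; one candidate you'd find on such a page is the truncated Taylor series, which is what this sketch uses, so the coefficients may differ)

// x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - x^11/11!
const better = x => {
  const x2 = x * x;
  return x * (1 - x2 / 6 * (1 - x2 / 20 * (1 - x2 / 42 * (1 - x2 / 72 * (1 - x2 / 110)))));
};
const betterWrapped = x => better(wrap(x));  // hypothetical name; still fed the wrapped x from before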
that's a lot better! around x=0 it's so accurate it even "escapes" the large "dynamic range" of our semi-log error plot. also, a subtle detail i hadn't noticed before i started plotting: if you zoom in on one of the error peaks it's actually a discontinuity: a jump from about -0.0005 to 0.0005. since sines are symmetric around x=pi/2 we can fix it by feeding a triangle wave into the sine approximation:
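(a sketch of the fold, using sin(pi - x) == sin(x) to bounce x back once it passes pi/2; the path x traces is a triangle wave, so the approximation never sees values outside [-pi/2, pi/2])

const fold = x => {
  x = wrap(x);                              // into [-pi, pi]
  if (x >  Math.PI / 2) x =  Math.PI - x;   // reflect the right side of the hump
  if (x < -Math.PI / 2) x = -Math.PI - x;   // same for the negative hump
  return x;
};
const fine = x => better(fold(x));          // hypothetical name for the final sine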
now that's a fine sine! no jumps, just a max error of about 0.0000001, so small that scientific notation is starting to show its utility: 1e-7. please don't fly me to the moon in general, but if you have to, i don't think this sine would be my biggest problem.
in fact it might be a bit overkill for our needs
a quick&bad performance benchmark in C even suggests that our new sine is 30% slower than sinf(x). on the other hand, the first "good-bad" "parabolic sine" is about 22% faster than sinf(x). so maybe once we understand how all this works we can find good tradeoffs between speed and accuracy?
but tbh our original motivation wasn't even performance. we wanted to try doing minimal/barebones/nostdlib builds of dough, in which case you lose most math functions (except the ones your computer is born with, notably sqrt(x)). froos also found a similar use in barebones wat compilation.
anyway this post is also just to get us started and plotting. to get a feel for function approximation without really understanding most of it yet. future posts will explore:
one last "trick" before i go: in dough we didn't really use exp2/log2, but instead pow(x,y), so why isn't pow(x,y) on the list? well, a cool and pertinent math equivalence is:
now you can see we don't need pow(x,y) if we have the other 2. also, pow(x,y) is a function of 2 variables which seems harder to approximate than 1-variable functions. also-also, in many of our cases x is constant, like pow(10,y), and then we can simply skip the log2() calculation:pow(x,y) = exp2(y * log2(x))
pow(10,y) = exp2(y * log2(10)) = exp2(y * 3.321928094887362)
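(in javascript terms, something like the snippet below, with the built-in 2 ** standing in for whatever exp2(x) approximation we end up writing)

const LOG2_10 = 3.321928094887362;       // log2(10), precomputed once
const pow10 = y => 2 ** (y * LOG2_10);   // pow(10, y) without ever calling pow or log2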
here a quick&bad performance benchmark in C shows a 6× speed-up! yessss
we don't need to replace sqrt(x) because our computer already has a square root operation, but should the need ever arise it can be done with exp2(.5*log2(x)) (because sqrt(x)==x**.5).