The cos and sin addition formulas are:
sin(x + y) = sin(x) cos(y) + cos(x) sin(y)
cos(x + y) = cos(x) cos(y) - sin(x) sin(y)
Switching all the + and − signs gives the subtraction formulas.
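A quick numerical spot check of the addition formulas (a sketch in Python; the angles and tolerance are arbitrary choices):

```python
import math

# Check both addition formulas at an arbitrary pair of angles.
x, y = 0.3, 0.7
lhs_s = math.sin(x + y)
rhs_s = math.sin(x) * math.cos(y) + math.cos(x) * math.sin(y)
lhs_c = math.cos(x + y)
rhs_c = math.cos(x) * math.cos(y) - math.sin(x) * math.sin(y)
assert abs(lhs_s - rhs_s) < 1e-12
assert abs(lhs_c - rhs_c) < 1e-12
```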
Writing
z[k] = x
s[k] = sin(z[k])
c[k] = cos(z[k])
z[k+1] = z[k] ± y
these become (allowing for a different y at each step)
s[k+1] = s[k] cos(y[k]) ± c[k] sin(y[k])
c[k+1] = c[k] cos(y[k]) ∓ s[k] sin(y[k])
z[k+1] = z[k] ± y[k]
so we can construct an iterative scheme for sin(zInf) and cos(zInf). All we have to do is pick a starting point z[0] and a
sequence of y's so that
z[k] --> zInf
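One step of this scheme can be sketched in Python (floating point for clarity; the function `step` and its argument names are illustrative choices, not from the text):

```python
import math

def step(s, c, z, y, add=True):
    # Advance z by +y or -y and update s = sin(z), c = cos(z)
    # using the addition/subtraction formulas.
    sy, cy = math.sin(y), math.cos(y)
    if add:
        return s * cy + c * sy, c * cy - s * sy, z + y
    return s * cy - c * sy, c * cy + s * sy, z - y

# Start at z[0] = 0 with s[0] = 0, c[0] = 1, and drive z toward a target.
s, c, z = 0.0, 1.0, 0.0
for y in (0.5, 0.25, 0.0625):   # any sequence whose partial sums approach zInf
    s, c, z = step(s, c, z, y)
# Each step is exact, so s and c track sin(z) and cos(z) throughout.
```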
Simplest Binary Algorithm:
The following is not the fastest, but it will work and it only involves
the addition formulas. An example will make this clear. If we
have a binimal (the binary analogue of a decimal)
0.11011
then we can start with
z[0] = 0 and c[0]=1 and s[0]=0
and add 1/2, or 0.1, to z; in other words y[0] = 1/2 (or {010000000...} in our fixed point representation). Then we
compute z[1], s[1], and c[1] (of course we need to know two constants, namely cos(1/2) and sin(1/2)).
Then we add 1/4, or 0.01 (in other words we choose y[1] = {00100000...}), and compute z[2], s[2], and c[2]
(of course we need the values of sine and cosine of 1/4). We
do not need to add anything for the third bit (if we did we
would need the values of sine and cosine of 1/8), and we continue.
Notes: This is pretty easy to code.
- All the terms stay less than one! They do not get too big for our bit fields.
- The values for cos(2^-k) and sin(2^-k) are precomputed and defined in a suitable array.
- Bit field addition of our bit strings is the same as integer addition.
- Bit field multiplication of our fields is the same as integer multiplication plus a bit shift.
- If you cannot get the bit arithmetic to work out, do floating point arithmetic instead.
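Following the last note, here is a floating point sketch of the whole algorithm (the function name and the bit-list convention are my choices, not from the text; a fixed point version would store the table as bit fields):

```python
import math

# Precomputed tables: sin(2^-k) and cos(2^-k) for bit positions k = 1..N.
N = 24
SIN = [math.sin(2.0 ** -k) for k in range(1, N + 1)]
COS = [math.cos(2.0 ** -k) for k in range(1, N + 1)]

def sincos_bits(bits):
    # bits[k] is the binimal digit of weight 2^-(k+1),
    # so bits = [1, 1, 0, 1, 1] means 0.11011.
    s, c = 0.0, 1.0          # z[0] = 0, s[0] = sin 0 = 0, c[0] = cos 0 = 1
    for k, bit in enumerate(bits):
        if bit:              # add 2^-(k+1) to z via the addition formulas
            s, c = s * COS[k] + c * SIN[k], c * COS[k] - s * SIN[k]
    return s, c

# The binimal 0.11011 from the example: 1/2 + 1/4 + 1/16 + 1/32 = 0.84375
s, c = sincos_bits([1, 1, 0, 1, 1])
```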
Second Simplest Binary Algorithm:
We can use the tangent instead of both the cosine and the sine. Rewriting the addition formulas as
sin(x + y) = cos(y) [sin(x) + cos(x) tan(y)]
cos(x + y) = cos(y)[ cos(x) - sin(x) tan(y)]
the trick (which saves some storage and operations) is that we do not need to
apply the cosine multipliers at every step. We can
multiply by their product once at the end. Writing
z[k] = x
S[k] = sin(z[k]) / product[cos(y[i]), i, 0, k-1]
C[k] = cos(z[k]) / product[cos(y[i]), i, 0, k-1]
z[k+1] = z[k] ± y
these become (allowing for a different y at each step)
S[k+1] = S[k] ± C[k] tan(y[k])
C[k+1] = C[k] ∓ S[k] tan(y[k])
z[k+1] = z[k] ± y[k]
The following is not the fastest but it
will work and it only involves the addition formulas. An example will
make this clear. If we have a binimal
0.11011
then we can start with
z[0] = 0 and C[0]=1 and S[0]=0
and add 1/2, or 0.1, to z; in other words y[0] = 1/2 (or {010000000...} in our fixed point representation). Then we
compute z[1], S[1], and C[1] (of course we need to know one constant, namely tan(1/2)).
Then we add 1/4, or 0.01 (in other words we choose y[1] = {00100000...}), and compute z[2], S[2], and C[2]
(of course we need the value of tan(1/4)). We do not
need to add anything for the third bit (if we did we would need
the value of tan(1/8)), and we continue. At the end we need to multiply by
the product of cosines to recover the appropriate values of the sine
and cosine. There are things to check in the computation here:
do the quantities get too big or too small for our representation?
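A floating point sketch of the tangent version (again, the names are illustrative; since we skip the zero bits, the product of cosines depends on which bits are set, so this sketch accumulates it as it goes and applies it once at the end):

```python
import math

# Precomputed tables for bit positions k = 1..N.
N = 24
TAN = [math.tan(2.0 ** -k) for k in range(1, N + 1)]   # tan(1/2), tan(1/4), ...
COS = [math.cos(2.0 ** -k) for k in range(1, N + 1)]

def sincos_tan(bits):
    # bits[k] is the binimal digit of weight 2^-(k+1).
    # S and C carry sin(z) and cos(z) divided by the product of the
    # cosines of the angles added so far; one multiply at the end
    # fixes the scale.
    S, C = 0.0, 1.0
    scale = 1.0
    for k, bit in enumerate(bits):
        if bit:
            S, C = S + C * TAN[k], C - S * TAN[k]
            scale *= COS[k]
    return S * scale, C * scale

# The binimal 0.11011 again: x = 1/2 + 1/4 + 1/16 + 1/32 = 0.84375
s, c = sincos_tan([1, 1, 0, 1, 1])
```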
Comments:
At the cost of precomputing some constants (a bunch of tangent
values and the products of cosines for the angles actually used) we should be able to compute
accurate values of sines and cosines for all of our binimal values.
This should run pretty fast in the bit encoded form. The second
version should run almost twice as fast as the first version.