$\begingroup$

Navigation with a sextant or maneuvers using gimbal angles might be two examples of cases where an Apollo computer might need to do trigonometry.

Trigonometric functions like sine, arctangent, etc. are transcendental, as are logarithms. You can't evaluate these functions with a simple expression built from, for example, multiplications and divisions, without at least an iterative algorithm.

An engineer on the ground would grab a slide rule for two or three digits, and go to a book of trig, log and other tables for more. Between two table entries you could interpolate by hand for even more digits.

But how did the Apollo computers evaluate transcendental functions, or, at least, how were calculations that required transcendental functions implemented in the programs?

$\endgroup$
  • 7
    $\begingroup$ Are you sure about the need to do trigonometry for gimbaling? I'm pretty sure it would be easier to stay within vector math if you use a linear actuator. Anyway, trigonometry functions are usually implemented by table lookup or approximated as a Taylor expansion. No source directly related to Apollo, sorry. $\endgroup$
    – Christoph
    Commented Sep 27, 2018 at 11:14
  • 14
    $\begingroup$ @Christoph: Already in 1956 we had a better algorithm than Taylor, namely CORDIC. I don't know if that was used in Apollo. $\endgroup$
    – MSalters
    Commented Sep 27, 2018 at 11:43
  • 3
    $\begingroup$ @Joshua consider that these were developed in the early 1960s. Given the size and weight constraints for landing on the Moon and returning to Earth, this isn't just "any computer". In fact, technologies developed for computers in the space program helped pave the way for "personal" scientific calculators a decade later. It's the totality of the situation that makes this particular question compelling. $\endgroup$
    – uhoh
    Commented Sep 27, 2018 at 20:19
  • 7
    $\begingroup$ @MSalters: CORDIC needs tables of constants, about 50 values. Not very useful for the Apollo computers with core rope memory for program and constants. $\endgroup$
    – Uwe
    Commented Sep 27, 2018 at 21:03
  • 7
    $\begingroup$ @Joshua exactly - every digital computer does it by this or similar methods. Another way to look at it is "where did the tables in the books we used to look this stuff up in come from?" In the old days, some poor schmucks had to grind through those series by hand, and later with the aid of mechanical calculators that could multiply and divide. The general field of study is called "Numerical methods" and you can easily find numerous books under that subject heading. $\endgroup$ Commented Sep 28, 2018 at 6:51

3 Answers

$\begingroup$

Since the Apollo 11 code is on GitHub, I was able to find what looks like an implementation of the sine and cosine functions: see here for the command module and here for the lunar lander (it appears to be the same code).

For convenience, here is a copy of the code:

 # Page 1102
            BLOCK   02

# SINGLE PRECISION SINE AND COSINE

            COUNT*  $$/INTER
SPCOS       AD      HALF            # ARGUMENTS SCALED AT PI
SPSIN       TS      TEMK
            TCF     SPT
            CS      TEMK
SPT         DOUBLE
            TS      TEMK
            TCF     POLLEY
            XCH     TEMK
            INDEX   TEMK
            AD      LIMITS
            COM
            AD      TEMK
            TS      TEMK
            TCF     POLLEY
            TCF     ARG90
POLLEY      EXTEND
            MP      TEMK
            TS      SQ
            EXTEND
            MP      C5/2
            AD      C3/2
            EXTEND
            MP      SQ
            AD      C1/2
            EXTEND
            MP      TEMK
            DDOUBL
            TS      TEMK
            TC      Q
ARG90       INDEX   A
            CS      LIMITS
            TC      Q       # RESULT SCALED AT 1.

The comment

# SINGLE PRECISION SINE AND COSINE

indicates that the following is indeed an implementation of the sine and cosine functions.

Information about the type of assembler used can be found on Wikipedia.

Partial explanation of the code:

The subroutine SPSIN actually calculates $\sin(\pi x)$, and SPCOS calculates $\cos(\pi x)$.

The subroutine SPCOS first adds one half to the input, and then proceeds to calculate the sine (this is valid because of $\cos(\pi x) = \sin(\pi (x+\tfrac12))$). The argument is doubled at the beginning of the SPT subroutine. That is why we now have to calculate $\sin(\tfrac\pi2 y)$ for $y=2x$.

The subroutine POLLEY calculates an almost Taylor polynomial approximation of $\sin(\tfrac\pi2 x)$. First, we store $x^2$ in the register SQ (where $x$ denotes the input). This is used to calculate the polynomial $$ ((( C_{5/2} x^2 ) + C_{3/2} ) x^2 + C_{1/2}) x. $$ The values for the constants can be found in the same GitHub repository and are

$$\begin{aligned} C_{5/2} &= .0363551 \approx \left(\frac\pi2\right)^5 \cdot \frac1{2\cdot 5!}\\ C_{3/2} &= -.3216147 \approx -\left(\frac\pi2\right)^3 \cdot \frac1{2\cdot 3!}\\ C_{1/2} &= .7853134 \approx \frac\pi2 \cdot \frac12\\ \end{aligned}$$

which look like the first Taylor coefficients for the function $\frac12 \sin(\tfrac\pi2 x)$.

These values are not exact! So this is a polynomial approximation, which is very close to the Taylor approximation, but even better (see below, also thanks to @uhoh and @zch).

Finally, the result is doubled with the DDOUBL command, and the subroutine POLLEY returns an approximation to $\sin(\tfrac\pi2 x)$.
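Translated into modern floating-point Python, the arithmetic of SPSIN/SPCOS might look roughly like this (a sketch, not a cycle-accurate emulation; the overflow-folding path of SPT, explained in another answer below, only matters beyond ±90° and is omitted because Python floats do not overflow at 1):

```python
import math

# Coefficients from the Apollo 11 source; they approximate (1/2)sin(pi/2 * y)
C5_2 = 0.0363551
C3_2 = -0.3216147
C1_2 = 0.7853134

def spsin(x):
    """Approximate sin(pi * x) the way SPSIN does, for x in [-0.5, 0.5].

    The AGC stores angles "scaled at pi", i.e. x = 0.5 means 90 degrees.
    """
    y = 2.0 * x                                    # DOUBLE: rescale to pi/2
    sq = y * y                                     # MP TEMK -> SQ register
    half = ((C5_2 * sq + C3_2) * sq + C1_2) * y    # POLLEY: Horner scheme
    return 2.0 * half                              # DDOUBL

def spcos(x):
    """cos(pi*x) = sin(pi*(x + 1/2)), as in SPCOS
    (valid here as long as x + 0.5 stays within [-0.5, 0.5])."""
    return spsin(x + 0.5)
```

Evaluating `spsin(0.25)` (i.e. 45°) agrees with `math.sin(math.pi/4)` to about four decimal places, matching the error estimates discussed below.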

As for the scaling (first halve, then double, ...), @Christoph mentioned in the comments that the 16-bit fixed-point number could only store values from -1 to +1. Therefore, scaling is necessary. See here for a source and further details on the data representation. Details for the assembler instructions can be found on the same page.
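To illustrate that representation, here is a sketch under the assumption of the format described in the linked source (a 15-bit one's-complement fraction: one sign bit, 14 magnitude bits, with the parity bit invisible to software; the helper names are mine):

```python
def to_agc(x):
    """Encode a fraction -1 < x < 1 as a 15-bit one's-complement word
    (1 sign bit + 14 magnitude bits; the parity bit is ignored here).
    Assumes |x| < 1 - 2**-15 so the magnitude fits in 14 bits."""
    mag = round(abs(x) * (1 << 14))          # quantize to steps of 2**-14
    return mag if x >= 0 else mag ^ 0x7FFF   # negative: invert all 15 bits

def from_agc(w):
    """Decode a 15-bit one's-complement word back to a float."""
    if w & 0x4000:                           # sign bit set -> negative
        return -((w ^ 0x7FFF) / (1 << 14))
    return w / (1 << 14)
```

The 14 magnitude bits give a resolution of $2^{-14} \approx 6\cdot10^{-5}$, consistent with the "around 13 bits" of precision mentioned in the comments.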

How accurate is this almost-Taylor approximation? Here you can see a plot on WolframAlpha for the sine, and it looks like a good approximation for $x$ from $-0.6$ to $+0.6$. The cosine function and its approximation is plotted here. (I hope they never had to calculate the cosine for a value $\geq \tfrac\pi2$, because then the error would be unpleasantly large.)

@uhoh wrote some Python code, which compares the coefficients $C_{1/2}, C_{3/2}, C_{5/2}$ from the Apollo code with the Taylor coefficients and calculates the optimal coefficients (based on the maximal error for $-\tfrac\pi2 \leq x \leq \tfrac\pi2$ and quadratic error on that domain). It shows that the Apollo coefficients are closer to the optimal coefficients than the Taylor coefficients.

In this plot the differences between $\sin(\pi x)$ and the approximations (Apollo/Taylor) are displayed. One can see that the Taylor approximation is much worse for $x\geq 0.3$, but much better for $x\leq 0.1$. Mathematically, this is not a huge surprise, because Taylor approximations are only locally defined, and therefore they are often only useful close to a single point (here $x=0$).
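The comparison is easy to reproduce with a few lines of Python (a brute-force maximum over a grid, not @uhoh's exact script):

```python
import math

# Apollo coefficients vs truncated Taylor coefficients of (1/2)sin(pi/2 * y)
APOLLO = (0.7853134, -0.3216147, 0.0363551)
TAYLOR = ((math.pi / 2) / 2,
          -((math.pi / 2) ** 3) / (2 * math.factorial(3)),
          ((math.pi / 2) ** 5) / (2 * math.factorial(5)))

def max_err(c1, c3, c5, n=2001):
    """Largest |2*p(y) - sin(pi/2 * y)| on a grid over y in [-1, 1],
    where p(y) = c5*y**5 + c3*y**3 + c1*y."""
    worst = 0.0
    for i in range(n):
        y = -1.0 + 2.0 * i / (n - 1)
        p = ((c5 * y * y + c3) * y * y + c1) * y
        worst = max(worst, abs(2.0 * p - math.sin(math.pi / 2 * y)))
    return worst
```

On this grid the Apollo coefficients give a maximum error on the order of $10^{-4}$ over the whole interval, while the truncated Taylor polynomial is roughly an order of magnitude worse near the endpoints.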

Note that this polynomial approximation needs only four multiplications and two additions (MP and AD in the code). On the Apollo Guidance Computer, both memory and CPU cycles were in short supply.

There are some ways to increase accuracy and input range, which would have been available to them, but would have cost more code and more computation time. For example, exploiting symmetry and periodicity of sine and cosine, using the Taylor expansion for cosine, or simply adding more terms of the Taylor expansion would have improved the accuracy and would also have allowed for arbitrarily large input values.

$\endgroup$
  • 4
    $\begingroup$ The scaling is due to the fact that single precision could only store values from -1 to +1 :). Precision is around 13 bits, which fits the single precision type (16 bits, one of which is the sign bit and one a parity bit not accessible to software). $\endgroup$
    – Christoph
    Commented Sep 27, 2018 at 12:45
  • 13
    $\begingroup$ On topic: error graph. Maximum error within the domain would be around 0.0001 $\endgroup$
    – SF.
    Commented Sep 27, 2018 at 13:13
  • 20
    $\begingroup$ "I hope they never had to calculate the cosine for a value ≥ π/2." It is not necessary using the relation cos(π - α) = -cos(α). Using this and similar relations, only the range 0 ≤ α ≤ π/4 has to be computed. These transformations may be used for sin, cos, tan and cot. There may be other functions within the Apollo 11 code using these transformations when bigger arguments are possible. $\endgroup$
    – Uwe
    Commented Sep 27, 2018 at 15:55
  • 43
    $\begingroup$ @MagicOctopusUrn The Brogrammers among us who claim that "girls can't code" should take note that this code was developed by a woman. $\endgroup$
    – Philipp
    Commented Sep 28, 2018 at 11:43
  • 7
    $\begingroup$ @Philipp: This code was developed by a team of mostly men and a few women, led by Margaret Hamilton. She did not say whether she was the only woman on the team. $\endgroup$
    – Uwe
    Commented Sep 29, 2018 at 14:21
$\begingroup$

You also asked for the logarithm, so let's do this as well. As opposed to the sine and cosine functions, this one is not implemented with a Taylor-series-like approach alone. The algorithm is based on shifting the input and counting the number of shifts needed to arrive at the required scale. I don't know the name of this algorithm; this answer on SO describes the basic principle.

The LOG implementation is part of the CGM_GEOMETRY module, and labeled

SUBROUTINE TO COMPUTE THE NATURAL LOG

The routine uses the NORM assembly instruction which, according to its documentation, left-shifts the number in the accumulator ("MPAC" register) until it ends up with a value $\geq 0.5$ and "nearly $1$"[1], and writes the number of shift operations performed into the memory location specified as its second argument. The mathematical meaning of the left shift is multiplication by a power of two, $2^c$; exponents in the argument of a logarithm can be expressed as factors, and products in the argument as additions, so the simplification of $\ln(2^c \cdot s)$ into $c \cdot \ln(2) + \ln(s)$ works, where $c$ is the shift count, $s$ is the normalized ("scaled") value, and $\ln(2)$ is a precomputed constant.

Then it uses a polynomial of the third degree to approximate $\ln$ on that interval, with hardcoded coefficients[3]:

(image of the assembly listing; the coefficients are .031335467, .0130145859 and .0215738898)

Eventually the shift count obtained earlier is multiplied back in (times the 0.0216608494 constant[2], using SHORTMP).

Optimization pressure must have been so high that they did not fix the inverted sign before returning from the subroutine, requiring all call sites to take that into account instead.

Application of the logarithm subroutine: for example, as part of range prediction in re-entry control.
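Putting range reduction and polynomial together, the scheme can be sketched in Python. The coefficients are the ones from the listing; the function name, and the reading that the routine returns $-\ln(x)/32$ (consistent with the inverted sign and the $\ln(2)/32$ constant), are my assumptions:

```python
import math

# Cubic in (1 - b) approximating -ln(b)/32 for 0.5 <= b < 1
# (coefficients from the AGC listing, as discussed in the comments)
K1, K2, K3 = 0.031335467, 0.0130145859, 0.0215738898
LN2_32 = 0.0216608494                      # the ln(2)/32 constant in the code

def agc_log(x):
    """Return -ln(x)/32 for x > 0, mimicking the routine's scaling and
    inverted sign (my reading; treat the exact scaling as an assumption).

    Range reduction: x = b * 2**e with 0.5 <= b < 1, so that
    ln(x) = ln(b) + e*ln(2); the shift count counted by NORM is -e.
    """
    b, e = math.frexp(x)                   # normalize, as NORM would
    u = 1.0 - b
    poly = ((K3 * u + K2) * u + K1) * u    # polynomial: ~ -ln(b)/32
    return poly - e * LN2_32               # fold the shift count back in
```

Multiplying the result by $-32$ recovers the natural logarithm to three or four decimal places, which is in line with single-precision accuracy on the AGC.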

---

[1] The storage format for a double-precision number was built from two 16-bit words, where the MSB of each is the sign, together forming a one's-complement representation of the range $-1 < x < 1$, and the LSB of each is a parity bit. So we deal with a 30-bit format containing two sign bits, causing some headache for emulator implementations.

[2] The AGC assembly language permits CLOG2/32 as an identifier name. This caused some more headache for emulator implementations.

[3] How were the coefficients found? Code comments in the assembly listing of the interpretive trigonometry module (yes, astronauts could make the AGC interpret dynamic instructions) suggest that the method was based on works by C. Hastings, especially Approximations for Digital Computers, Princeton University Press, 1955. The most complex polynomial of that kind in the AGC is one of seventh order, in the same module, used to compute $\arccos(x) \approx \sqrt{1-x} \cdot P(x)$.

$\endgroup$
  • 2
    $\begingroup$ I suspect the equation ln(arg) = ln((2^a)*b) = a*ln(2) + ln(b) is used. a is the shift count needed to get 0.5 < b < 1 so that arg = (2^a)*b. But there should be a constant ln(2) = 0.693147 which I could not find. But ln(2)/32 = 0.02166 is close to the constant .0215738898. 32 * .0215738898 is 0.69036. $\endgroup$
    – Uwe
    Commented Oct 1, 2018 at 14:03
  • 2
    $\begingroup$ on second sight, there is a polynomial involved. line 274 evaluates a polynomial of the third degree (line 276, 2+1) with coefficients .031335467 etc. that follow. I'll emulate that and add a plot if I have time later. $\endgroup$
    – dlatikay
    Commented Oct 1, 2018 at 14:24
  • 3
    $\begingroup$ @Uwe The constant for $\ln(2)/32$ is present in line 302. $\endgroup$
    – Litho
    Commented Oct 3, 2018 at 8:27
  • 3
    $\begingroup$ 0.0216608494 * 32 is 0.693147181 indeed. I'm still struggling with that weird double-precision one's complement-with-overflow and parity storage, don't get anywhere near yet with an attempted emulation in C. $\endgroup$
    – dlatikay
    Commented Oct 3, 2018 at 10:33
  • 4
    $\begingroup$ @uhoh It looks to me that $0.031335467(1-x)+0.0130145859(1-x)^2+0.0215738898(1-x)^3$ is a good approximation of $-\ln(x)/32$ for $x$ between $0.5$ and $1$. $\endgroup$
    – Litho
    Commented Oct 3, 2018 at 15:11
$\begingroup$

One addition to the answer by @supinf:

a) The initial DOUBLE in SPT

b) overflows for input x (register A) above +0.5 (+90°) and underflows for x below -0.5 (-90°). In this case A is >+1 or <-1, and the following TS stores the corrected value (effectively adding one if it is below -1, or subtracting one if it is above +1) in TEMK and sets A to +1 (overflow) or -1 (underflow). Furthermore, the jump (TCF) to POLLEY is skipped.

c) XCH swaps TEMK with A, so A now contains the corrected value and TEMK contains ±1.

d) INDEX adds the value from TEMK (±1) to the value of the next AD instruction, which silently corrects overflows. Because LIMITS is equal to -0.5, this results in an addition of 0.5 in the overflow case (-0.5 + +1 = 0.5) and in a subtraction of 0.5 in the underflow case (-0.5 + -1 = -1.5 = -0.5).

e) COM negates the value of A – this includes inverting the overflow bit – and

f) the final AD adds one in the overflow case and subtracts one in the underflow case. AD does not perform an overflow correction prior to the addition and sets the overflow flag afterwards. So every overflowed value (>+135° and <-135°) comes back into the range [-1,+1].

Visualisation of the calculations a-f
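Taking steps a) to f) literally, the reduction can be emulated in Python. Floats stand in for the one's-complement registers, with the silent overflow correction from step d) modelled explicitly; this encodes my reading of the code above, not verified hardware behaviour:

```python
def spt_reduce(x):
    """Argument reduction of SPT as described in steps a) to f).

    x is the angle scaled at pi (x = 0.5 means +90 degrees); the return
    value is what gets handed to POLLEY, scaled at pi/2.
    """
    a = 2.0 * x                             # a) DOUBLE
    if -1.0 <= a <= 1.0:
        return a                            # no overflow: TCF POLLEY
    # b) TS: store the corrected value in TEMK, leave +-1 in A
    temk = a - 1.0 if a > 1.0 else a + 1.0
    a = 1.0 if a > 1.0 else -1.0
    a, temk = temk, a                       # c) XCH
    const = -0.5 + temk                     # d) INDEX modifies AD's operand
    if const > 1.0:                         #    (LIMITS = -0.5); model the
        const -= 1.0                        #    silent overflow correction
    elif const < -1.0:
        const += 1.0
    a += const
    a = -a                                  # e) COM
    return a + temk                         # f) AD
```

For inputs within ±90° this is the identity scaling (e.g. 0.25 maps to 0.5); beyond that, overflowed inputs do come back into [-1, +1], e.g. 0.6 (108°) maps to 0.3, which is exactly the mapping questioned below.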

If this AD under- or overflows (I see no way this could happen), it sets A to ±1, jumps to ARG90, and sets A to -(LIMITS+A), which is -(-0.5+1)=-(+0.5)=-0.5 in the overflow case and -(-0.5-1)=-(-1.5)=-(-0.5)=+0.5 in the underflow case. I initially thought this would happen for x > +135° or x < -135°, but that does not seem to be the case.

But this adjustment for the cases below -90° and above +90° looks kind of wrong to me. I would expect line f to go from (+0.5,+1.0) to (+1.0,+0.0) and from (-0.5,-1.0) to (-1.0,-0.0). That would be the case if COM followed XCH directly, without step d.

Please correct me if I have some pieces wrong; I only recently saw this code and tried to figure it out with the AGC Instruction Set.

$\endgroup$
