Using $$ \sinh x = x + \tfrac{x^3}{3!}+ \tfrac{x^5}{5!} + \tfrac{x^7}{7!}+ \cdots$$ as the standard power series is very slow. Can it be written without computing high powers divided by huge factorials? The example functions in "Is there a way to get trig functions without a calculator?", which use a "Tailored Taylor" series representation for sine and cosine, are very fast and give the same answers. I want to use this within my calculator program.

Thank you very much.

Comments:

  • "hyperbolic sin" sounds very scary! :) – Commented Mar 6, 2019 at 10:54
  • Things that humans perform easily and quickly are not necessarily easy and quick for calculators, so it is better to look for computer algorithms instead. For example stackoverflow.com/questions/2284860 – user202729, Mar 6, 2019 at 11:37
  • Also, if it is for a calculator, the available program size may be small, so mention that too if it is important. – user202729, Mar 6, 2019 at 11:38
  • Here you can find a C++ implementation based on the CORDIC family of algorithms. – Commented Mar 6, 2019 at 14:37

3 Answers

Answer 1 (score 13)
Note that $$\sinh x=\frac{e^x-e^{-x}}2$$ So all you need is a fast way to calculate the exponential $e^x$. You can use the regular Taylor series, but that's slow. Instead, use the definition $$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n$$ For calculation purposes, take $n$ to be a power of $2$, $n=2^k$. First calculate $y=1+\frac x{2^k}$, then repeat the $y=y\cdot y$ operation $k$ times. I got the idea for calculating the fast exponential from this article.
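A minimal sketch of this squaring scheme in Python (the choice $k=20$ is an assumption: larger $k$ makes $(1+x/2^k)^{2^k}$ a better approximation of $e^x$, at the cost of more squarings and some floating-point round-off):

```python
import math

def exp_approx(x, k=20):
    """Approximate e^x as (1 + x/2^k)^(2^k) via k repeated squarings."""
    y = 1.0 + x / 2.0 ** k
    for _ in range(k):
        y *= y
    return y

def sinh_approx(x, k=20):
    """sinh x = (e^x - e^(-x)) / 2, built on the fast exponential above."""
    return (exp_approx(x, k) - exp_approx(-x, k)) / 2.0

print(sinh_approx(1.0), math.sinh(1.0))  # agree to roughly 5e-7 for k=20
```

The relative error of the limit approximation is about $x^2/2^{k+1}$, so $k$ should grow with the desired accuracy.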

Answer 2 (score 5)
Let me consider the problem from a computing point of view, assuming that you do not know how to compute $e^x$.

The infinite series is $$\sinh(x)=\sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!}$$ If you compute each term independently of the others, it is certainly expensive, since you have to compute each power of $x$ as well as each factorial.

But suppose that you write instead $$\sinh(x)=\sum_{n=0}^\infty T_n \qquad \text{where} \qquad T_n=\frac{x^{2n+1}}{(2n+1)!}\qquad \text{and} \qquad T_0=x$$ then $$T_{n+1}= \frac {t\,\, T_n}{m(m+1)}\qquad \text{where} \qquad t=x^2\qquad \text{and} \qquad m=2n+2$$ This is much less expensive, both in the kind of basic operations required and in their number.

You could use the same trick for most functions expressed as infinite series.
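A minimal sketch of this recurrence in Python (the stopping tolerance `tol` is an assumption, chosen near double-precision round-off; it realizes the "know when to stop" test discussed in the comments below):

```python
def sinh_series(x, tol=1e-16):
    """Sum sinh(x) = sum of T_n using T_{n+1} = T_n * x^2 / (m*(m+1)), m = 2n+2."""
    t = x * x
    term = x          # T_0 = x
    total = term
    m = 2
    while abs(term) > tol * abs(total):
        term *= t / (m * (m + 1))   # one multiply, one divide per new term
        total += term
        m += 2
    return total

print(sinh_series(1.0))  # ~1.1752011936438014
```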

Comments:

  • This approach also has the advantage that you know when to stop (as a function of $x$): one can estimate an upper bound for the error. +1 – Andrei, Mar 6, 2019 at 19:40
  • @Andrei Could you elaborate on the upper bound for the error? In fact, I wanted to show that we can skip all IF tests by "knowing" in advance the number of terms to be added for an accuracy of $10^{-k}$. Thanks. – Commented Mar 7, 2019 at 2:25
  • See something like math.dartmouth.edu/~m8w19/ErrorEstimates.pdf. You calculate the difference between $\sinh x$ and the order-$n$ expansion. The difference will be less than $\sinh(\xi)\,\frac{x^{2n+2}}{(2n+2)!}$, where $0\le \xi\le x$. This is a generalization of the mean value theorem. – Andrei, Mar 7, 2019 at 4:13
  • @Andrei This one I know! The problem is precisely $\sinh(\xi)$, which leads to a circular estimate. Any other idea? Thanks and cheers. – Commented Mar 7, 2019 at 4:16
  • Use the current approximation as the estimate: for $x>0$, $\sinh$ is an increasing function, so $\sinh(\xi)\le\sinh(x)\approx\sum_n T_n$ – Andrei, Mar 7, 2019 at 4:55
Answer 3 (score 2)
A much better option than Andrei's answer is to use the identity $\exp(x) = \exp(x·2^{-k})^{2^k}$ for a judicious choice of $k$, and to use the Taylor series to approximate $\exp(x·2^{-k})$ to sufficient precision.

Suppose we want to compute $\exp(x)$ using the above identity to $p$-bit precision, meaning a relative error of $2^{-p}$. We shall perform each arithmetic operation with $q$-bit precision, where $q=p+k+m$, and $k,m$ are positive integers that we will determine later. To compute $\exp(x·2^{-k})^{2^k}$ with relative error $2^{-p}$ we need to compute $\exp(x·2^{-k})$ with relative error at most about $r_0 := 2^{-p-k-1}$, because on each squaring the error $r_n$ changes to about $r_{n+1} ≤ 2r_n+2^{-q}$, giving $r_k+2^{-q} ≤ (r_0+2^{-q})·2^k$ and hence $r_k ≤ 2^{-p}$.

Therefore we need to use enough terms of the Taylor expansion for $\exp(x·2^{-k})$ so that our error is at most $|\exp(x·2^{-k})|·2^{-p-k-1}$. If $k = \max(0,\log_2 |x|) + c$ for some positive integer $c$, then $|x·2^{-k}|<2^{-c}$ and so $|\exp(x·2^{-k})| > \exp(-1/2) > 1/2$, and thus it suffices to have our error less than $2^{-p-k-1}/2$. We allocate this error margin in two halves: one half for the Taylor remainder and one half for error in our arithmetic operations.

Letting $z := x·2^{-k}$, we have $\sum_{i=n}^∞ |z^i/i!| ≤ |z|^n/n! · \sum_{i=0}^∞ (|z|/n)^i ≤ 2·|z|^n/n! ≤ 2^{-c·n+1}$ for any $n ≥ 1$ (since $|z| < 1/2$, the geometric series sums to at most $2$), so we only need to compute $\sum_{i=0}^{n-1} z^i/i!$ where $n ≥ 1$ and $2^{-c·n+1} < 2^{-p-k-1}/4$, both of which hold if $c·n ≥ p+k+5$.

Each term requires one addition, one multiplication and one division, via the trivial identity $z^{n+1}/(n+1)! = z^n/n! · z/n$, and so if we start with $z$ at $q$-bit precision then the cumulative relative error is at most about $2n·2^{-q}$, since each "$· z/n$" introduces an extra error factor of about $(1+2^{-q})^2$. Since we want $2n·2^{-q} < 2^{-p-k-1}/4$, equivalently $n < 2^{m-4}$, it suffices to take any $m > \log_2 n + 4$.

Even if we use schoolbook multiplication, namely that multiplying at $q$-bit precision takes $O(q^2)$ time, the above method yields a relatively efficient algorithm by choosing $k$ appropriately. The Taylor phase takes $O(n·q^2)$ time, and the exponentiation phase takes $O(k·q^2)$ time. If we choose $c = \sqrt{p}$ we can choose $n = \sqrt{p}+k$, which will give $k,n ∈ O( \sqrt{p} + \log |x| )$ and $q ∈ O( p + \log |x| )$. Thus for $x$ within a bounded domain, the whole algorithm takes $O(p^{2.5})$ time.
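As a concrete illustration, here is a minimal Python sketch of the scale–Taylor–square scheme using the standard `decimal` module. It works in decimal digits rather than bits, uses a fixed reduction depth `k`, and replaces the careful $q = p+k+m$ bookkeeping above with a crude guard-digit margin, so it is an assumption-laden toy rather than the analyzed algorithm:

```python
from decimal import Decimal, getcontext

def exp_scaled(x, p=50, k=32):
    """exp(x) via exp(x) = exp(x * 2^-k)^(2^k):
    scale the argument down, sum a short Taylor series, square k times."""
    getcontext().prec = p + 25              # crude guard digits in place of q = p+k+m
    z = Decimal(x) / (Decimal(2) ** k)      # z = x * 2^-k is tiny, so few terms are needed
    term = total = Decimal(1)
    n = 1
    while abs(term) > Decimal(10) ** -(p + 20):
        term *= z / n                       # z^n/n! = z^(n-1)/(n-1)! * (z/n)
        total += term
        n += 1
    for _ in range(k):                      # undo the scaling by k squarings
        total *= total
    getcontext().prec = p
    return +total                           # unary + rounds to the final precision

print(exp_scaled("1"))                      # 2.7182818284590452... (50 digits)
```

With $k=32$ the reduced argument is about $2^{-32}$, so fewer than ten Taylor terms suffice for 50 digits, and the 25 guard digits absorb the roughly $2^k$-fold error amplification from the squarings.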


The above is based on purely elementary techniques. A more careful analysis using the same techniques yields an even better algorithm (see Efficient Multiple-Precision Evaluation of Elementary Functions).

There are ways to do much much better, basically coming down to using an AM-GM iteration to compute $\ln$, and then using Newton-Raphson inversion to obtain $\exp$ (see The Arithmetic-Geometric Mean and Fast Computation of Elementary Functions).
