
(I wonder whether this is appropriate for the Math StackExchange or whether it'd be better on Stack Overflow as it deals with computing, but I'm asking about mathematical details, not about programming details, so I suspect here is better. But if it should be put somewhere else, feel free to move it.)

I was wondering about computing $e^{1/e}$ -- what I call the "tetrational constant" or "$\eta$", since it is important in tetration theory -- to a very large number of digits, similar to how $\pi$ and $e$ have been computed. In the case of $e$, it can be computed fairly efficiently via

$$e = \sum_{n=0}^{\infty} \frac{1}{n!}.$$

Naively summing this series with simple multiplication methods yields a time complexity of $O(n^2)$, where $n$ is the number of digits to compute (perhaps a little worse for very large $n$ that don't fit in a computer's word, though there is also an improving factor because $\frac{1}{n!}$ converges to $0$ somewhat faster than exponentially -- I haven't worked this all out). We can go much faster, however, by applying what is called "binary splitting", which changes the cost from $O(n^2)$ to $O(\log(n)\,M(n))$, where $M(n)$ is the cost to multiply two $n$-digit numbers. Using classical grade-school multiplication with $M(n) = O(n^2)$ gives no advantage; in fact, it is worse, with $O(n^2 \log(n))$ complexity instead of just $O(n^2)$. But if we leverage the faster FFT technique with $M(n) = O(n \log(n))$ (an idealized bound -- in practice the FFT is done with limited precision, so we must eventually pack fewer digits per FFT element and ultimately increase the working precision of the FFT itself), then we get $O(n \log(n)^2)$, which is much better than the naive approach.
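To make the idea concrete, here is a minimal sketch of binary splitting for $e$ in Python (the function names and the digit-count heuristic are mine). It accumulates the series exactly as one big rational $p/q$, halving the interval recursively, and only does one full-precision division at the end. Python's built-in integers keep everything exact; their multiplication is subquadratic, though not the idealized FFT bound.

```python
import math

def bsplit(a, b):
    # Exact integers (p, q) with p/q = sum_{k=a+1}^{b} a!/k!,
    # so for a = 0 this is 1/1! + 1/2! + ... + 1/b!.
    if b - a == 1:
        return 1, b
    m = (a + b) // 2
    p1, q1 = bsplit(a, m)
    p2, q2 = bsplit(m, b)
    # sum over [a,b) = sum over [a,m) + (1/q1) * sum over [m,b)
    return p1 * q2 + p2, q1 * q2

def e_digits(n):
    # Smallest N with N! >= 10^n, so the truncated tail is below ~10^-n.
    N = 2
    while math.lgamma(N + 1) < n * math.log(10):
        N += 1
    p, q = bsplit(0, N)
    # e = 1 + p/q (plus a negligible tail); return floor(e * 10^n).
    return (q + p) * 10**n // q

print(e_digits(10))  # 27182818284, the first digits of e
```

The recursion does $O(\log N)$ levels of balanced multiplications, which is exactly where the $O(\log(n)\,M(n))$ bound comes from.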

"Binary splitting" can in turn be generalized to arbitrary hypergeometric series. But consider now $e^{1/e}$. Plugging $\frac{1}{e}$ into $\exp$'s series yields a naive algorithm in which we must do full-precision work just to compute the terms of the series -- after first computing $\frac{1}{e}$! This algorithm has bound $O(n\,M(n))$ (or perhaps a little better, due to the same considerations about the convergence of the reciprocal factorial as for $e$), which is pretty bad: with classical multiplication it's $O(n^3)$ (somewhat better than that, but still worse than $O(n^2)$), and even using the FFT technique it's still $O(n^2 \log(n))$ -- worse than computing $e$ via the purely classical approach! Trying to use binary splitting doesn't save us jack, since full precision is required for every term. We can try to get another series by taking $f(x) = e^{e^x - 1} - 1$ at $x = -1$ to compute $\frac{e^{1/e}}{e} - 1$ as

$$\frac{e^{1/e}}{e} - 1 = \sum_{n=1}^{\infty} (-1)^n \frac{B_n}{n!}$$

where $B_n$ are the Bell numbers. This has only rational terms, but the series is not hypergeometric, so binary splitting does not apply. Worse yet, it converges even more slowly than the first approach, and the lack of a simple recurrence for its terms suggests we don't save anything by this method. So, is there an efficient formula or algorithm to compute $e^{1/e}$? Note that $\pi$ can be computed both by binary splitting on, say, the Chudnovsky formula, and by the AGM iteration (Gauss-Legendre and other methods), both of which yield algorithms (when FFT multiplication is employed) with complexities of the form $O(n \log(n)^k)$ for some positive integer $k$. So that complexity -- $O(n \log(n)^k)$ -- is what I take as "efficient" for the purposes of this question.
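For reference, the naive approach I'm complaining about looks like this (a sketch using Python's standard decimal module; the function name and the stopping threshold are mine). Every term update is a full-precision multiply and divide, which is what makes the whole thing $O(n\,M(n))$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def eta_naive():
    # Sum exp(y) = sum_{n>=0} y^n / n! at y = 1/e.  Each term update is a
    # full-precision multiply-divide, so the cost is O(n M(n)) overall.
    y = 1 / Decimal(1).exp()          # 1/e, itself a full-precision value
    total = term = Decimal(1)
    n = 1
    eps = Decimal(10) ** (-getcontext().prec - 5)
    while term > eps:                 # stop once terms fall below precision
        term = term * y / n
        total += term
        n += 1
    return total

print(eta_naive())  # ~1.4446678610097661...
```

About $\mathrm{prec}/\log_{10}(\mathrm{prec})$ terms are needed, and each one costs a full-precision operation, in contrast to the small-integer work that binary splitting does at the leaves.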

  •
    What about using continued fractions? For example en.wikipedia.org/wiki/Generalized_continued_fraction#Examples
    – jimjim
    Commented Sep 5, 2013 at 7:12
  • @Arjang: That doesn't seem like it converges fast enough, and would therefore still take a lot of full-precision operations. Commented Sep 5, 2013 at 7:35
  • It'd be nice if, in case nothing can be found that avoids doing everything at full precision, there were some kind of quadratically-convergent process involving only arithmetical operations and perhaps radicals (i.e. no transcendental functions that would require series evaluations), perhaps with efficiently-computable precomputed constants such as $e$. Commented Sep 5, 2013 at 7:36

1 Answer


I would use Newton's method on the function $f$ defined on $(1,\infty)$ by: $$f(x)=\log\left(\frac{1}{\log(x)}\right)-1$$

whose root is exactly $x = e^{1/e}$, since $f(x)=0$ forces $\log(x) = e^{-1}$. Its derivative is

$$f'(x)=\frac{-1}{x\log(x)}$$

Newton's method converges quadratically.

The computation of the logarithm can use the Arithmetic-Geometric Mean, which also converges quadratically (see this article).
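For concreteness, here is a double-precision sketch of the AGM logarithm (the helper names and the guard-bit choices are mine; a real implementation would run the same iteration in arbitrary precision). It uses the identity $\ln(s) \approx \pi/(2\,M(1, 4/s))$, valid when $s$ is large, after scaling $x$ by a power of two:

```python
import math

def agm(a, b, iterations=10):
    # Arithmetic-geometric mean; once a and b are close, each pass
    # roughly doubles the number of correct digits.
    for _ in range(iterations):
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ln_agm(x, p=53):
    # ln(s) ~= pi / (2 M(1, 4/s)) with error O(log(s)/s^2), so scale
    # s = x * 2^m up to about 2^(p/2) and subtract m*ln(2) afterwards.
    m = max(0, p // 2 + 2 - int(math.log2(x)))
    s = x * 2.0 ** m
    return math.pi / (2 * agm(1.0, 4.0 / s)) - m * math.log(2)

print(ln_agm(10.0))  # ~2.302585..., agreeing with math.log(10)
```

Note that $\pi$ and $\ln 2$ must themselves be available to full precision, but both are efficiently computable (e.g. $\pi$ by Gauss-Legendre), so they can be treated as precomputed constants.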

Asymptotically, this means that to obtain $n$ digits you need $O(\log(n))$ Newton iterations, each costing one logarithm evaluation, and the AGM delivers that logarithm in $O(\log(n))$ full-precision multiplications -- so the total cost is $O(M(n)\log(n)^2) = O(n \log(n)^3)$ with FFT multiplication, which is of the requested $O(n \log(n)^k)$ form and hence quite good.
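To make the scheme concrete, here is a sketch in Python using the standard decimal module (its built-in ln() stands in for the AGM logarithm, and in a serious implementation each Newton step would raise the working precision gradually rather than run everything at full precision from the start). For this $f$ the Newton update simplifies to $x \leftarrow x\,\bigl(1 - \log(x)\,(1 + \log(\log(x)))\bigr)$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60              # working precision in significant digits

def eta(iterations=10):
    # Newton's method on f(x) = log(1/log(x)) - 1, whose root on (1, inf)
    # is x = e^(1/e).  Since f/f' = x*log(x)*(1 + log(log(x))), the update is
    #   x <- x * (1 - log(x) * (1 + log(log(x)))).
    x = Decimal("1.4")              # start near the root (~1.4447)
    for _ in range(iterations):
        lx = x.ln()
        x = x * (1 - lx * (1 + lx.ln()))
    return x

print(eta())  # 1.4446678610097661...
```

Starting from $1.4$ the error roughly squares each step, so about six iterations already saturate 60 digits; doubling the precision per step instead makes the total cost only a constant factor more than the final iteration.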

  • You can eliminate the reciprocal in $f(x)$. See here
    – PM 2Ring
    Commented Sep 21, 2021 at 10:16
  • And we can reduce it to one log per loop if we pre-compute $e^{-1}$, like this
    – PM 2Ring
    Commented Sep 22, 2021 at 14:12
