
I want to find the complex fixpoint $t=b^t $ for real bases $b> \eta = \exp(\exp(-1))$.

Added remark: I'm aware that there is a solution using branches of the Lambert W function, but I have no Maple/Mathematica and only a rough implementation in Pari/GP for real values. That motivated me to try a solution via Newton-Raphson, and understanding and implementing such a scheme (which has to deal with derivatives and complex values) is/was then my question here. See also my comment to Fabian's answer below. I seem to have solved it myself for the "principal branch" (see my own answer below), but it is still open for the general case of the k'th branch. [end of remark]

What I have is a function depending on a parameter $ \beta $ giving the auxiliary values
$$ u = \frac{\beta}{ \sin(\beta) } *\exp( i * \beta) $$ $$ t=\exp(u) $$ $$ b= f(\beta) = \exp(u/t) = \exp(u * \exp(-u)) $$

By this I can approximate a given base $B$ using binary search. I find the bounds of an interval by taking lower- and upper-limit betas $ \beta_l = \epsilon $ and $ \beta_u = \pi-\epsilon $ with small epsilons, giving the lower and upper bases $b_l$ and $b_u$ respectively. Then, comparing $b_m = f(\beta_m)$, where $ \beta_m = (\beta_l + \beta_u)/2 $, with my given base $B$, I can implement a binary search which approximates $b_m$ to $B$ arbitrarily well; having $\beta_m$, I can reconstruct $u$ and the fixpoint $t$ by the formulas above.
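That binary search can be sketched as follows (a minimal Python sketch, not my original Pari/GP code; the function names are mine). It uses the fact that $u\,e^{-u} = \frac{\beta}{\sin\beta}\,e^{-\beta\cot\beta}$ is real, so the base can be compared without any complex arithmetic:

```python
import cmath
import math

def log_base(beta):
    """ln b(β): since u·e^{-u} = (β/sin β)·e^{-β·cot β} is real, the base
    b(β) = exp(u·e^{-u}) can be evaluated without complex arithmetic."""
    return beta / math.sin(beta) * math.exp(-beta / math.tan(beta))

def fixpoint_by_bisection(B, iters=200):
    """Binary search for β in (0, π) with b(β) = B, then t = exp(u(β))."""
    lo, hi = 1e-9, math.pi - 0.01          # stay clear of the pole at β = π
    target = math.log(B)                   # compare logs to avoid overflow near π
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if log_base(mid) < target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    u = complex(beta / math.tan(beta), beta)   # u = β·cot β + iβ
    return cmath.exp(u)
```

For $B=4$ this returns $t \approx 0.0639598 + 1.09084\,i$, which satisfies $t = B^t$ to machine precision.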

However, that binary search needs surprisingly many iterations, and I thought a Newton-like method might be more efficient for this approximation. But since complex values are involved, I do not even see the derivative, much less the formula for how to use that derivative in such an approximation scheme and how to apply it to actually perform the iterations...

[update 4] Moved my own findings into a separate answer (as suggested in meta.***)

[update 1] Plot of the function $f(\beta)$ (in this old plot I used the letter $s$ instead of $b$)

  • In your formulas, is $*$ used for the simple product or for convolution? – mpiktas, Mar 3, 2011 at 9:31
  • @mpiktas: just multiplication. – Mar 3, 2011 at 12:18
  • There is also a nice treatise by L. Euler on that question of finding a fixpoint in exponentiation. Unfortunately it seems that there is only the Latin version and no translation yet. But it is nice to see the formulae in that old paper ... math.dartmouth.edu/~euler/docs/originals/E489.pdf – Mar 3, 2011 at 13:30

4 Answers


It might be advantageous to reduce the problem to something which has been studied in the literature: the keyword is Lambert W function. The Lambert W function $W(z)$ is defined via $$z= W(z)e^{W(z)}.$$ On many systems there are already implementations of this function. In terms of the Lambert W function your $t$ is given via the relation $$t = - \frac{W(-\log b)}{\log b}.$$

We need to find a way to calculate $w=W(z)$ for $z<-1/e$. I follow an idea from this page. Let $z=-r$ (with $r>1/e$) and $w=R e^{i \theta}$ ($R>0$). Writing $w e^{w} =z$ in polar coordinates reduces it to the set of equations $$r= R e^{R \cos \theta}$$ and $$\pi = \theta + R \sin \theta.$$ This set of equations can be solved for $R>0$ and $\theta \in [0,\pi]$.

The second equation can be rewritten as $R= (\pi - \theta)/\sin \theta$. Plugging this into the first equation yields $$r \sin \theta= (\pi - \theta) e^{(\pi - \theta) \cot \theta}.$$ This last equation can be solved using bisection (we know that the solution is between 0 and $\pi$).
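The whole procedure fits in a few lines (a Python sketch of the bisection just described, with my own naming; the iteration count is arbitrary):

```python
import cmath
import math

def solve_theta(r, iters=200):
    """Bisection for r·sin θ = (π-θ)·e^{(π-θ)·cot θ} on (0, π); the left-out
    difference g(θ) falls monotonically from +∞ near 0 to below 0 near π."""
    def g(theta):
        return (math.pi - theta) * math.exp((math.pi - theta) / math.tan(theta)) - r * math.sin(theta)
    lo, hi = 0.005, math.pi - 1e-9        # lo kept away from 0 so exp() cannot overflow
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fixpoint_via_polar(b):
    """Complex fixpoint t = b**t for real b > e**(1/e), via W(-log b) in polar form."""
    r = math.log(b)                        # z = -r, with r > 1/e
    theta = solve_theta(r)
    R = (math.pi - theta) / math.sin(theta)
    w = R * cmath.exp(1j * theta)          # w·e^w = -r
    return -w / r                          # t = -W(-log b)/log b
```

For $b=4$ this gives $t \approx 0.0639598 - 1.09084\,i$; the conjugate is equally a fixpoint, since the base is real.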

  • Thanks! I just didn't think of the W-function, although I knew of the Galidakis site you mention. Anyway, what I was doing was (using Pari/GP, which has no W-implementation) searching for that approximation, and then I became aware that I would not even know how to implement a Newton approximation in this case. So that was the original motivation. (For the Lambert W I have only a real version from some open-source collection; after your hint I asked Wolfram Alpha and got a solution using the "-1"-branch of the W - not usable with my Pari/GP function.) – Mar 1, 2011 at 20:37
  • But solving the equation $r \sin \theta= (\pi - \theta) e^{(\pi - \theta) \cot \theta}$ for $\theta$ with bisection should work...? – Fabian, Mar 1, 2011 at 20:40
  • I think so. Well, meanwhile it has become late here; I'll come back to this tomorrow morning. – Mar 1, 2011 at 20:58

A bit late for an interesting problem.

Starting from @Fabian's answer, we look for the inverse of $$r=(\pi -\theta )\, \, e^{(\pi -\theta ) \cot(\theta )}\,\, \csc (\theta )$$ which is very stiff.

Better would be to take the logarithm of both sides. If we are not too concerned about the cases where $\theta$ is close to $0$ or to $\pi$, let $\theta=x+\frac \pi 2$ and use a series expansion around $x=0$.

This would give $$y=\log \left(\frac{2 r}{\pi }\right)=\sum_{i=1}^n (-1)^i\,\, a_i\,x^i$$ the first coefficients being $$\left\{\frac{4+\pi ^2}{2 \pi },\frac{3 \pi ^2-4}{2 \pi ^2},\frac{16+\pi ^4}{6 \pi ^3},\frac{5 \pi ^4-48}{12 \pi ^4},\frac{96+\pi ^6}{15 \pi ^5},\frac{7 \pi ^6-480}{45 \pi ^6},\cdots\right\}$$ and we could use power series reversion to obtain an estimate of $x$.

However, much better would be to turn the series into a $[k+1,k]$ Padé approximant $P_k$ $$P_k=x\, \frac{\sum_{i=0}^k b_i\,x^i } {1+\sum_{i=1}^k c_i\,x^i }$$ which is quite accurate even close to the end points. If we use $k=2$, we just need to solve a cubic equation in $x$.

To check how good or bad this simple approximation is, give $x$ a value, compute the corresponding $y$, and solve the cubic equation using the trigonometric solution (the smallest positive root is the solution).

$$\left( \begin{array}{cc} x_{\text{given}} & x_{\text{recomputed}} \\ -1.5 & -1.53969 \\ -1.4 & -1.42305 \\ -1.3 & -1.31332 \\ -1.2 & -1.20754 \\ -1.1 & -1.10414 \\ -1.0 & -1.00218 \\ -0.9 & -0.90108 \\ -0.8 & -0.80050 \\ -0.7 & -0.70021 \\ -0.6 & -0.60008 \\ -0.5 & -0.50003 \\ -0.4 & -0.40001 \\ -0.3 & -0.30000 \\ -0.2 & -0.20000 \\ -0.1 & -0.10000 \\ 0.0 & 0.00000 \\ 0.1 & 0.10000 \\ 0.2 & 0.20000 \\ 0.3 & 0.30000 \\ 0.4 & 0.39999 \\ 0.5 & 0.49998 \\ 0.6 & 0.59993 \\ 0.7 & 0.69981 \\ 0.8 & 0.79956 \\ 0.9 & 0.89904 \\ 1.0 & 0.99800 \\ 1.1 & 1.09600 \\ 1.2 & 1.19208 \\ 1.3 & 1.28418 \\ 1.4 & 1.36720 \\ 1.5 & 1.42783 \\ \end{array} \right)$$

For $P_2$, made rational, the coefficients are $$b_0=-\frac{13037}{5906} \quad b_1=-\frac{166}{4587} \quad b_2=\frac{321}{1720}\quad c_1=\frac{1935}{3203}\quad c_2=-\frac{40}{7081}$$
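These coefficients can be sanity-checked numerically (a Python sketch, names mine): evaluate $P_2(x)$ and compare against the exact $y(x)=\log(2r/\pi)$ from the formula at the top of this answer.

```python
import math

# Rational coefficients of P_2 as quoted above
b0, b1, b2 = -13037 / 5906, -166 / 4587, 321 / 1720
c1, c2 = 1935 / 3203, -40 / 7081

def pade(x):
    """P_2(x) = x·(b0 + b1·x + b2·x²) / (1 + c1·x + c2·x²)."""
    return x * (b0 + b1 * x + b2 * x * x) / (1.0 + c1 * x + c2 * x * x)

def y_exact(x):
    """y = log(2r/π), with θ = x + π/2 and r = (π-θ)·e^{(π-θ)·cot θ}·csc θ."""
    theta = x + math.pi / 2
    r = (math.pi - theta) * math.exp((math.pi - theta) / math.tan(theta)) / math.sin(theta)
    return math.log(2 * r / math.pi)
```

Near $x=\pm 0.5$ the approximant matches the exact curve to about $10^{-4}$, consistent with the table above.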

Trying for $y=5$, the approximate result is $-0.992816$ while the "exact" solution given by Newton method is $-0.990771$.

Edit

In order to improve the accuracy around $x=-\frac \pi 2$, we can slightly alter $P_2$ writing it as $$Q_2= x \, \frac {a_0+a_1 x+a_2 x^2} {\left(x+\frac{\pi }{2}\right) (1+b_1 x) }$$ where $$a_0=-\frac{\pi ^2+4}{4} \qquad a_1=\frac{-512+624 \pi ^2-64 \pi ^4+\pi^6}{8 \pi \left(40-18 \pi ^2+\pi^4\right)} $$ $$a_2=\frac{832+2544 \pi ^2-844 \pi ^4+75 \pi ^6-2 \pi ^8}{24 \pi ^2 \left(40-18\pi ^2+\pi ^4\right)}\qquad b_1=\frac{\pi ^4-112}{2 \pi \left(40-18 \pi ^2+\pi ^4\right)}$$

  • Hi Claude - nice to see a new answer even after a long time; thanks for that. I'm currently on a holiday and have no time to look at it deeply. Back home in - say - 2 weeks. – Jun 2 at 12:06
  • @GottfriedHelms. I saw this old problem and I could not resist the equation. It was fun. Cheers :-) – Jun 2 at 14:06

(As suggested by a thread in meta.stackexchange I moved my own solution (which is however still partial) into a separate answer)

The following solves the problem of finding the Newton iteration, at least for the "principal" fixpoint.

Using $c(b) = \ln(\ln(b))$ not only extends the range for numerical evaluation but also simplifies the whole problem. First I remove the use of the named variable $u$ in favor of a function properly dependent on $\beta$ : $$u(\beta) = \frac{\beta}{ \sin(\beta) } *\exp( i * \beta) = \beta \cot(\beta) + i \beta $$

Then I consider replacing the binary search on $ v(\beta) = \ln(u(\beta))-u(\beta) $ by a Newton-Raphson iteration scheme which approximates a zero of $$ v(\beta,b) = v(\beta) - c(b) $$

Btw., here $v$ has an interesting power series: its coefficients are rational scalings of Bernoulli numbers, resp. of zeta at negative arguments: $$v(x)=-1 + \sum_{k=2}^{\infty} (2 i)^k\,\frac{k+1}{k!}\, \zeta (1-k)\, x^k $$

That value $c(b)$ to which I want to approximate defines a constant offset from $v(x)$, so I include it in the constant term: $$v(x,b)=-1 - c(b) + \sum_{k=2}^{\infty} (2 i)^k\,\frac{k+1}{k!}\, \zeta (1-k)\, x^k $$

The derivative, by termwise differentiation, is $$ v'(x) = \sum_{k=2}^{\infty} (2 i)^k\,\frac{(k+1)\,k}{k!}\, \zeta (1-k)\, x^{k-1} $$ and is the same as that of $ v(x,b) $.
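The series coefficients can be checked numerically (a Python sketch, names mine). It generates $\zeta(1-k)=-B_k/k$ from the Bernoulli numbers and compares the truncated series against the closed form $v(\beta)=\ln(\beta/\sin\beta)-\beta\cot\beta$, which equals $\ln(u)-u$ and is real because $u\,e^{-u}$ is real:

```python
import math
from fractions import Fraction

def bernoulli_numbers(n):
    """B_0 .. B_n (convention B_1 = -1/2) via sum_{j<=m} C(m+1,j)·B_j = 0."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -sum(math.comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B

def v_series(x, nterms=20):
    """Truncation of v(x) = -1 + Σ_{k≥2} (2i)^k (k+1)/k! · ζ(1-k) · x^k,
    with ζ(1-k) = -B_k/k for k ≥ 2 (zero for odd k ≥ 3)."""
    B = bernoulli_numbers(nterms)
    total = complex(-1.0)
    for k in range(2, nterms + 1):
        zeta = float(-B[k] / k)
        total += (2j) ** k * (k + 1) / math.factorial(k) * zeta * x ** k
    return total

def v_closed(beta):
    """v(β) = ln(u) - u with u = β·cot β + iβ; real because u·e^{-u} is real."""
    return math.log(beta / math.sin(beta)) - beta / math.tan(beta)
```

All surviving terms have even $k$, so the series is indeed real for real $x$; at $x=0.3$ twenty terms already reproduce the closed form to about ten digits.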

Now the Newton-iteration for finding the root of $v(\beta,b) $ is $$ \beta_0 = \frac{\pi}2 \text{ and } \beta_{n+1} = \beta_n - \frac{v(\beta_n,b)}{v'(\beta_n)} $$ and with some n'th iteration I get $\beta_n$ which provides approximations to the parameters $$ t \approx t_n = \exp(u(\beta_n)) $$ $$ B \approx b_n = \exp(\exp(v(\beta_n,B))) $$ where also $$ B = t^{\frac1t} \approx t_n \ ^{\frac1{t_n}} $$ and B is the given real base for which I search the complex fixpoint t.
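For reference, here is a sketch of this Newton iteration in Python (my own naming; it uses the closed form $v(\beta)=\ln(\beta/\sin\beta)-\beta\cot\beta$ and its derivative $v'(\beta)=1/\beta-2\cot\beta+\beta/\sin^2\beta$ rather than the truncated power series, which gives the same iteration):

```python
import cmath
import math

def v(beta, b):
    """v(β, b) = ln(u(β)) - u(β) - ln(ln b); real on (0, π) since u·e^{-u} is real."""
    return math.log(beta / math.sin(beta)) - beta / math.tan(beta) - math.log(math.log(b))

def dv(beta):
    """v'(β) = 1/β - 2·cot β + β/sin²β (derivative of the closed form above)."""
    return 1.0 / beta - 2.0 / math.tan(beta) + beta / math.sin(beta) ** 2

def fixpoint_newton(b, iters=50):
    """Newton iteration β_{n+1} = β_n - v(β_n, b)/v'(β_n), started at β₀ = π/2."""
    beta = math.pi / 2
    for _ in range(iters):
        beta -= v(beta, b) / dv(beta)
    u = complex(beta / math.tan(beta), beta)
    return cmath.exp(u)
```

For $b=4$ the iteration converges in a handful of steps to $\beta \approx 1.51223$, hence $t \approx 0.0639598 + 1.09084\,i$.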

Numerically this iteration converges well with few (say 64) terms; I now need only 4 to 8 iterations to find a well-approximated $t$ for base $b=4$, where I needed hundreds of iterations of the binary search to get an accuracy of, say, forty digits.

(I've not yet tried to extend this to other complex solutions in t, so this is still open)

[update 2] The selection of another branch (meaning another fixpoint) is as simple as can be: just adjust the search interval for the iterated function to the appropriate interval $k*\pi+\epsilon \ldots (k+1)*\pi-\epsilon$. Using the Pari/GP built-in binary search with the new functions, I get the first seven fixpoints as follows:

$$ \begin{array} {rrrr} k & \log(t) & t \ \ (=\text{fixpoint}) & \text{error: } \ 4 - t^{\frac1t} \\ 0 & 0.0886671+1.51223*I & 0.0639598+1.09084*I & 2.04128E-202 \\ 1 & 1.20116+4.44867*I & -0.866450-3.20904*I & 8.36925E-201 \\ 2 & 1.73066+7.63096*I & 1.24841+5.50457*I & 1.49014E-200 \\ 3 & 2.07153+10.8062*I & -1.49429-7.79501*I & 7.14449E-201 \\ 4 & 2.32409+13.9723*I & 1.67648+10.0789*I & 1.16353E-200 \\ 5 & 2.52508+17.1324*I & -1.82146-12.3584*I & 1.01043E-199 \\ 6 & 2.69214+20.2884*I & 1.94197+14.6350*I & 1.85757E-200 \end{array} $$

[update] Hmm, the number of terms $m$ needed in the power series of $v(x)$ to get, for instance, accuracy to 40 decimal digits can be fairly well estimated, for bases $2 \le b \le 2000$, by the following formula.
$$ m \approx 62.448 * \ln(\ln(b)) + 100 \text{ (formula estimated using Excel) } $$

For a fixed base $b$ one can determine the achieved accuracy (in digits) as a function of the number of terms $n$ of the power series $v(\beta)$. Here is a plot for $b=2$, $b=4$, $b=16$, $b=256$.

For instance, for base $B=4$, using 150 terms of the power series of $v(x)$ and 6 iterations, I get the complex fixpoint $t$ to 40 digits as

t =  0.06395981030134317249921085931098535528761 +     
       1.090843376539957576373598843800166909991*I
4 - t^(1/t) = 1.67294184828 E-46



You are interested in evaluating the function $\exp(-W_k(-\log\,b))$ for various $k$; we can restrict to nonnegative $k$ here due to the conjugate relationship between positive and negative branches. A truncation of formula 86 from this article can serve as a starting point for approximating $W_k(z)$, which can be further polished with Newton-Raphson or Halley:

$$W_k(z)\approx \log\,z+2\pi ik-\log(\log\,z+2\pi ik)-\log\left(1-\frac{\log\,\log\,z}{\log\,z}\right)\left(1+\frac1{\log\,z-\log\,\log\,z}\right)$$

Your course of action, then, is to take $z=-\log\,b$, pick your $k$, evaluate the approximation given above, polish with an iterative method, and finally negate and exponentiate.
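That recipe can be sketched as follows (Python, names mine; the starting value uses only the leading terms $\log z + 2\pi i k - \log(\log z + 2\pi i k)$ of the quoted approximation, and Newton on $f(w)=w e^w - z$ as the polishing step):

```python
import cmath

def lambertw_branch(z, k=0, iters=60):
    """Start from the leading terms log z + 2πik - log(log z + 2πik) of the
    asymptotic approximation, then polish with Newton on f(w) = w·e^w - z."""
    L = cmath.log(z) + 2j * cmath.pi * k
    w = L - cmath.log(L)
    for _ in range(iters):
        e = cmath.exp(w)
        w -= (w * e - z) / (e * (w + 1.0))
    return w

def fixpoint_branch(b, k=0):
    """t = b**t via t = exp(-W_k(-log b))."""
    return cmath.exp(-lambertw_branch(-cmath.log(b), k))
```

Note that $t=e^{-w}$ satisfies $b^t=t$ for any root of $w e^w = -\log b$, since $t\log b = -z\,e^{-w} = -w$, so whichever branch root the polishing step settles on yields a valid fixpoint.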

