
In general, Newton's method for root finding has a "bubbly" boundary between the basins of convergence for different roots; this is where the familiar Newton fractals come from. But outside these "bubbly" boundaries there are large, clear regions where there is no ambiguity about which root Newton's method will converge to.

Is it possible to build conservative bounds around the basins of convergence, so that if you start inside such a bound you will definitely converge to the corresponding root? It might still be possible to converge to that root from outside the bound, if you start in the "bubbly" region, so the bound won't be tight...

But is there a way to use information about a function to set up bounds within which you know you are sufficiently close to a root for Newton's method to converge, and to converge specifically to that root of interest?

...

Basically, I'm trying to explore numerical methods for finding all the roots of a function in a given interval, including roots of even multiplicity. Since you can't bracket an even root and use bisection, Newton's method becomes a good choice, but only if the iterations actually converge.

I have a method in mind ("conservative advancement") for approaching a known even root: it takes safe step sizes based on the maximum error bound of a Taylor approximation and on how far the function value is from 0. It takes large steps while the function is far from 0, but as it approaches 0 near a non-simple root it converges very slowly (possibly millions of iterations). Newton's method converges much faster, even though its convergence is only linear at non-simple roots. But I want to be sure that it converges to the root I'm actually aiming for, and doesn't skip the nearest root and converge to another root farther away, since I wouldn't have any way to detect that I missed a root.
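
For concreteness, here is a minimal sketch of the kind of stepping I mean; the particular step rule shown ($|f(x)|$ divided by a bound on $|f'|$ over the remaining interval) is just one way to realize the idea, and the helper `df_bound` is a placeholder for whatever derivative bound is available.

```python
def conservative_advance(f, df_bound, x, x_max, tol=1e-12, max_iter=10_000_000):
    """Advance x toward the next zero of f in [x, x_max].

    df_bound(a, b) must return an upper bound on |f'| over [a, b].
    Since |f(x + h) - f(x)| <= df_bound(x, x_max) * h for x + h <= x_max,
    a step of |f(x)| / df_bound(x, x_max) can never jump past a zero of f.
    """
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x                   # close enough to call it a root
        bound = df_bound(x, x_max)     # bound on |f'| ahead of the current point
        step = abs(fx) / bound         # safe step: cannot skip a zero
        if x + step > x_max:
            return None                # no zero left in the interval
        x += step
    return x                           # stalled, e.g. creeping up on an even root
```

Near an even root both $|f(x)|$ and the admissible step shrink together, which is why this crawls and why Newton's method looks so much more attractive there.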

...

My guess is that you need to know the roots a priori to be able to bound the basins of convergence (something like a Voronoi region around the roots, probably). But I haven't read any literature about it either way, so I'm curious whether there are any known tricks.

  • You might want to peruse Bahman Kalantari's work; I'm pretty sure he has delved into the basins of attraction for various iterative methods. Note that there are at least two standard ways of handling roots of a function $f(x)$ with even multiplicity: treat the problem as an optimization problem, or apply a rootfinding method to $\frac{f(x)}{f^\prime(x)}$ (see the sketch after these comments). Commented Apr 24, 2011 at 13:29
  • You're looking for the largest disk (or at least a large one) totally contained in the immediate basin of attraction of each root. I just mention this because the key word here is immediate.
    – lhf
    Commented Apr 24, 2011 at 14:05
  • Minimization would work pretty well if I could bracket a region that I know contains either a single root or a single min/max "hump". That's actually fairly tricky to do, though... I can bound the max of the first derivative in a region with a linear function (I use that for conservative advancement), so maybe there's a way to use that somehow...
    – Jay Lemmon
    Commented Apr 25, 2011 at 0:27
  • You might want to have a look at How to find all roots of complex polynomials by Newton's method by Hubbard et al.
    – lhf
    Commented Apr 25, 2011 at 3:40
  • Look also into the alpha-beta-gamma theory of Blum–Cucker–Shub–Smale; it was developed to quantify the size of the region of quadratic convergence. Commented Aug 9, 2018 at 8:24
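
Regarding the $\frac{f(x)}{f'(x)}$ trick mentioned in the first comment, here is a minimal, hypothetical sketch (not code from any of the cited references): $u(x) = f(x)/f'(x)$ has a simple zero wherever $f$ has a zero of any multiplicity, and the Newton update for $u$ simplifies to $x - \frac{f f'}{(f')^2 - f f''}$.

```python
def newton_on_u(f, fp, fpp, x, tol=1e-12, max_iter=100):
    """Newton's method applied to u(x) = f(x)/f'(x).

    u has a simple zero at every zero of f, whatever its multiplicity,
    so quadratic convergence is restored at even roots.
    The update x - u/u' simplifies to x - f*fp / (fp**2 - f*fpp).
    """
    for _ in range(max_iter):
        fx, fpx, fppx = f(x), fp(x), fpp(x)
        denom = fpx * fpx - fx * fppx
        if denom == 0:
            break                      # degenerate step; bail out
        step = fx * fpx / denom
        x -= step
        if abs(step) < tol:
            return x
    return x

# Example: f(x) = (x - 1)**2 has a double (even) root at x = 1.
root = newton_on_u(lambda x: (x - 1) ** 2,
                   lambda x: 2 * (x - 1),
                   lambda x: 2.0,
                   x=3.0)
```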

1 Answer


Chapter 6 of Chee Yap's book "Fundamental Problems of Algorithmic Algebra" deals with precisely this problem.

Yap cites Friedman's "On the convergence of Newton's method" and Smale's "The fundamental theorem of algebra and complexity theory" as references for the following result (Yap, Theorem 6.37):

Let $f(X) \in \mathbb{Z}[X]$ be squarefree with $m = \deg f$ and $M = 1 + \|f\|_{\infty}$. Then Newton iteration for $f(X)$ is guaranteed to converge to the root $X^*$ provided the initial approximation is within $$\delta_0 = \left[m^{3m+9}(1+M)^{6m}\right]^{-1}$$ of $X^*$.

Here $\|f\|_{\infty}$ denotes the largest absolute value among the coefficients of $f(X)$.
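
As a rough illustration of how conservative this bound is (my own example, not one from Yap): for $f(X) = X^2 - 2$ we have $m = 2$ and $\|f\|_{\infty} = 2$, hence $M = 3$ and
$$\delta_0 = \left[2^{3\cdot 2+9}(1+3)^{6\cdot 2}\right]^{-1} = \left[2^{15}\cdot 4^{12}\right]^{-1} = 2^{-39} \approx 1.8\times 10^{-12},$$
even though Newton's method for $X^2 - 2$ actually converges to $\sqrt{2}$ from every starting point $x_0 > 0$.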

Hope that helps.

  • What does $!$ represent in your equation? Or did you mean $1$? Anyway, that's a promising start... The function I have in mind is not a polynomial, though I can bound it above and below with a polynomial. I'll look at the links you provided and see if I can work out something similar for my purposes.
    – Jay Lemmon
    Commented Apr 25, 2011 at 0:21
  • @Jay, Theorem 2.2 in Friedman gives an explicit ball in the immediate basin of attraction of a root.
    – lhf
    Commented Apr 25, 2011 at 3:37
