There are many reasons why root finding can fail; here are just a few of the more common examples:
If you start in the wrong place, such as z0 here:
Then almost any root-finding algorithm will descend into a local minimum rather than find the root.
In this example, we're starting from a location (z0) where the first derivative is essentially zero:
In this situation the next iteration will shoot off to infinity (assuming we're using derivatives, that is). Our code guards against this by insisting that the root is always bracketed, and then never stepping outside those bounds. In a case like this, no root-finding algorithm can do better than bisecting until the root is found.
Note that there is no scale on the graph; we have seen examples of this situation occur in practice even when several decimal places of the initial guess z0 are correct.
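As a concrete sketch of how the bracketing guard is used, the snippet below calls the library's newton_raphson_iterate with explicit min/max bounds. The function x³ - 27 and the deliberately poor starting guess near zero (where the first derivative is essentially zero) are purely illustrative choices, not the function shown in the graph:

    #include <boost/math/tools/roots.hpp>
    #include <cstdint>
    #include <iostream>
    #include <limits>
    #include <utility>

    int main()
    {
       // f(x) = x^3 - 27 has its root at x = 3.  A guess near zero sits where
       // the first derivative (3x^2) is almost zero, so an unguarded Newton
       // step would shoot off to a huge value.
       auto f = [](double x)
       {
          // Return the function value and its first derivative.
          return std::make_pair(x * x * x - 27, 3 * x * x);
       };

       double guess = 0.01;          // deliberately poor starting point
       double min = 0.0, max = 10.0; // the root is known to lie in [min, max]
       // Newton's method roughly doubles the correct digits each step, so a
       // target of just over half the bits is enough for full accuracy.
       int digits = static_cast<int>(std::numeric_limits<double>::digits * 0.6);
       std::uintmax_t max_iter = 50;

       double r = boost::math::tools::newton_raphson_iterate(f, guess, min, max, digits, max_iter);

       std::cout << "root = " << r << " after " << max_iter << " iterations\n";
    }

Because the first Newton step from the flat region would land far outside [0, 10], the iterate is pulled back inside the bracket (effectively bisecting) and still converges to 3.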
This is really a special case of a more common situation where root finding with derivatives is divergent. Consider starting at z0 in this case:
An initial Newton step would take you further from the root than you started, as would all subsequent steps.
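The graph's function isn't given, but a classic function with exactly this behaviour is arctan: for a starting guess far enough from zero, each unguarded Newton step lands further from the root than the last. A minimal standalone sketch (the starting value 2 is just an illustrative choice):

    #include <cmath>
    #include <cstdio>

    int main()
    {
       // f(x) = atan(x) has its only root at x = 0, but for |x0| large enough
       // each raw Newton step x - f(x)/f'(x) = x - atan(x) * (1 + x*x)
       // overshoots the root and lands further away than where it started.
       double x = 2.0;
       for (int i = 0; i < 6; ++i)
       {
          std::printf("iter %d: x = %g\n", i, x);
          x -= std::atan(x) * (1.0 + x * x); // unguarded Newton step for atan(x)
       }
       // |x| grows at every step: the iteration is divergent.
    }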
Consider starting at z0 in this situation:
The first derivative is essentially infinite, and the second close to zero (and so offers no correction if we use it); as a result we take a very small first step. In the worst case, the first step is so small - perhaps even so small that subtracting it from z0 has no effect at the current working precision - that our algorithm will assume we are at the root already and terminate. Otherwise we will take lots of very small steps which never converge on the root: our algorithms protect against that by reverting to bisection.
An example of this situation would be trying to find the root of e^(-1/z²) - this function has a single root at z = 0, but for z0 < 0 neither Newton nor Halley steps will ever converge on the root, and for z0 > 0 the steps are actually divergent.
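As a small standalone sketch of how little progress an unguarded Newton iteration makes on this function (the starting guess -0.05, on the z0 < 0 side, is just an illustrative choice):

    #include <cmath>
    #include <cstdio>

    int main()
    {
       // f(z) = exp(-1/z^2) has its only root at z = 0.
       // f'(z) = exp(-1/z^2) * 2 / z^3, so the Newton step f/f' simplifies to z^3/2.
       double z = -0.05; // starting guess on the z0 < 0 side
       for (int i = 0; i < 10; ++i)
       {
          double f    = std::exp(-1.0 / (z * z));
          double fp   = f * 2.0 / (z * z * z);
          double step = f / fp; // equals z^3/2, vanishingly small compared to |z|
          std::printf("iter %2d: z = %.15g, step = %g\n", i, z, step);
          z -= step;
       }
       // The step shrinks as the cube of z, so the iterates creep towards zero
       // without ever converging on the root.
    }

After ten iterations the iterate has covered only about one percent of the distance to the root, which is why the guarded algorithms described above fall back to bisection instead.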