Convergence of the Newton-Raphson Method

For systems of equations, the Newton-Raphson method is widely used, especially for the nonlinear algebraic equations that arise when solving differential equations numerically.

Write a Taylor expansion in several variables:

f_i(x) = f_i(x_0) + \sum_j \frac{\partial f_i}{\partial x_j}\bigg|_{x_0} (x_j - x_{0,j}) + \cdots

The Jacobian matrix is defined as

J_{ij} = \frac{\partial f_i}{\partial x_j}

and the Newton-Raphson method is

J(x_k)\,(x_{k+1} - x_k) = -f(x_k), \quad \text{that is,} \quad x_{k+1} = x_k - J^{-1}(x_k)\, f(x_k).
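The update step can be sketched in code. Below is a minimal pure-Python sketch for a 2x2 system (the test system and function names are hypothetical illustrations, not from the text), solving J delta = -f by Cramer's rule:

```python
# Newton-Raphson for a 2x2 system with an analytic Jacobian.
# Hypothetical test system:  f1 = x^2 + y^2 - 4 = 0,  f2 = x*y - 1 = 0.

def f(x, y):
    return (x**2 + y**2 - 4.0, x * y - 1.0)

def jacobian(x, y):
    # J[i][j] = d f_i / d x_j
    return ((2.0 * x, 2.0 * y),
            (y,       x))

def newton_system(x, y, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if max(abs(f1), abs(f2)) < tol:
            break
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c          # J must be invertible (det != 0)
        # Solve J * (dx, dy) = -(f1, f2) by Cramer's rule.
        dx = (-f1 * d + f2 * b) / det
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
    return x, y

x, y = newton_system(2.0, 0.5)
```

For larger systems one would replace the hand-written Cramer's rule with a general linear solve of J delta = -f at each iteration.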

Theorem. Assume x0 is such that

\|J^{-1}(x)\| \le a \quad \text{for} \quad \|x - x_0\| \le 2b

and

\|J^{-1}(x_0)\, f(x_0)\| \le b

and

\sum_{j,k} \left| \frac{\partial^2 f_i}{\partial x_j \partial x_k} \right| \le c \quad \text{for} \quad \|x - x_0\| \le 2b.

Then the Newton iterates lie in the 2b sphere,

\|x_k - x_0\| \le 2b,

and converge to a solution x^* of f(x^*) = 0, with

\|x_k - x^*\| \le \frac{b}{2^{k-1}},

where the error is

e_k = x_k - x^*

and, near the solution,

\|e_{k+1}\| \le \tfrac{1}{2}\, a\, c\, \|e_k\|^2.
Convergence is guaranteed provided the inverse Jacobian J^{-1} is bounded in norm, the first Newton step at the initial guess is bounded, and the second derivatives of f(x) with respect to all variables are bounded. See Isaacson and Keller [1966]. Note also that the error decreases from iteration to iteration by at least a factor of 2. In practice, the convergence is much faster once the approximation approaches the true solution, since the error is then essentially squared at each step.
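The factor-of-2 claim, and the much faster convergence near the solution, can be observed numerically. A small sketch (the function f(x) = x^2 - 3 is an arbitrary choice for illustration, not from the text):

```python
# Watch the error shrink under Newton iteration for f(x) = x**2 - 3,
# whose root is sqrt(3).  Each error should be at most half the previous
# one, and near the root the reduction is far stronger (quadratic).
import math

root = math.sqrt(3.0)
x = 1.0                              # initial guess
errors = []
for _ in range(5):
    x = x - (x * x - 3.0) / (2.0 * x)    # Newton step: x - f(x)/f'(x)
    errors.append(abs(x - root))

# Successive error ratios e_{k+1}/e_k; all should be below 1/2.
ratios = [e2 / e1 for e1, e2 in zip(errors, errors[1:]) if e1 > 0]
```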

Let's apply this theorem to the simple problem solved above:

f(x) = x^2 - 2 = 0; find x.

The function, first derivative, and second derivative are

f(x) = x^2 - 2, \quad f'(x) = 2x, \quad f''(x) = 2.

Thus the Jacobian is J = 2x. The Theorem requires

\|J^{-1}(x)\| = \frac{1}{2|x|} \le a,

which is true for some "a" as long as x is not 0. The theorem also requires

\|J^{-1}(x_0)\, f(x_0)\| = \frac{|x_0^2 - 2|}{2|x_0|} \le b,

which is true for some "b" as long as x0 is not 0. The second derivative is constant, and hence bounded. Use as the initial guess x0 = 1 and use a ball of radius 0.9. Thus we look at x = 1 ± b, or 0.1 ≤ x ≤ 1.9, which requires that a = 1/(2 × 0.1) = 5. The last condition takes the value 0.5 for x0 = 1, which is less than b = 0.9; thus, it is satisfied. All conditions are satisfied, and the theorem says the method will converge, and it does.
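The checks above can be reproduced numerically. A minimal sketch, assuming (as in the text) the interval [x0 - b, x0 + b] as the ball:

```python
# Verify the theorem's quantities for f(x) = x**2 - 2 with x0 = 1 and a
# ball of radius b = 0.9, then iterate Newton's method toward sqrt(2).
import math

x0 = 1.0
b = 0.9

# "a" bounds |1/f'(x)| = 1/(2|x|) on [x0 - b, x0 + b] = [0.1, 1.9].
# The bound 1/(2|x|) decreases as |x| grows, so the maximum sits at the
# endpoint nearest zero; checking both endpoints suffices.
a = max(1.0 / (2.0 * abs(x)) for x in (x0 - b, x0 + b))

# The first Newton correction |f(x0)/f'(x0)| must not exceed b.
first_step = abs((x0 * x0 - 2.0) / (2.0 * x0))

# Iterate: convergence to sqrt(2) is rapid from x0 = 1.
x = x0
for _ in range(6):
    x = x - (x * x - 2.0) / (2.0 * x)
```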

Take Home Message: If the Newton-Raphson method converges, then near the solution the error is reduced by at least a factor of two at each iteration (usually much more).