   # Newton's method for unconstrained minimization

Since Newton's method for minimizing $f$ is nothing more than Newton's method applied to the nonlinear system $\nabla f(x) = 0$, the following theorem is a corollary of Theorem 2.2:

Theorem 3.1   Suppose $f$ is twice continuously differentiable, and $x^*$ satisfies
1. $\nabla f(x^*) = 0$;
2. $\nabla^2 f(x^*)$ is positive definite (and hence, in particular, nonsingular);
3. $\nabla^2 f$ is Lipschitz continuous on a neighborhood of $x^*$.
Then $x^*$ is a strict local minimizer of $f$ and, for any $x^{(0)}$ sufficiently close to $x^*$, Newton's method defines a sequence $\{x^{(k)}\}$ that converges quadratically to $x^*$.
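The iteration in the theorem can be sketched in a few lines of Python: at each step, solve the Newton system $\nabla^2 f(x^{(k)}) s = -\nabla f(x^{(k)})$ and set $x^{(k+1)} = x^{(k)} + s$. This is only an illustrative sketch; the function name, tolerance, and stopping rule below are my own choices, not from the text.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-12, max_iter=50):
    """Sketch of Newton's method for unconstrained minimization.

    grad, hess: callables returning the gradient vector and Hessian matrix.
    Stops when the gradient norm falls below tol (illustrative criterion).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Solve the Newton system H(x) s = -g(x) for the step s
        s = np.linalg.solve(hess(x), -g)
        x = x + s
    return x
```

On a strictly convex quadratic the iteration terminates after a single step, since the quadratic model is exact.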

I now give an example that shows that Newton's method can still converge if the hypotheses of the above theorem fail, specifically, if $\nabla^2 f(x^*)$ is positive semidefinite and singular. I define $f:\mathbf{R}^2\to\mathbf{R}$ by

$$f(x) = (x_1 + x_2 - 3)^2 + (x_1 - x_2 + 1)^4.$$

Then

$$\nabla f(x) = \begin{bmatrix} 2(x_1+x_2-3) + 4(x_1-x_2+1)^3 \\ 2(x_1+x_2-3) - 4(x_1-x_2+1)^3 \end{bmatrix},\qquad
\nabla^2 f(x) = \begin{bmatrix} 2+12(x_1-x_2+1)^2 & 2-12(x_1-x_2+1)^2 \\ 2-12(x_1-x_2+1)^2 & 2+12(x_1-x_2+1)^2 \end{bmatrix}.$$

An easy calculation shows that, with $x^* = (1,2)$, $\nabla f(x^*) = 0$ and $x^*$ is the unique global minimizer of $f$. However, the eigenvalues of $\nabla^2 f(x^*)$ are 0 and 4, and so $\nabla^2 f(x^*)$ is positive semidefinite and singular.
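These claims are easy to verify numerically. The sketch below (helper names are my own) evaluates the gradient and the eigenvalues of the Hessian at $x^* = (1,2)$.

```python
import numpy as np

def grad_f(x):
    """Gradient of f(x) = (x1 + x2 - 3)^2 + (x1 - x2 + 1)^4."""
    u, v = x[0] + x[1] - 3, x[0] - x[1] + 1
    return np.array([2*u + 4*v**3, 2*u - 4*v**3])

def hess_f(x):
    """Hessian of the same f."""
    t = 12*(x[0] - x[1] + 1)**2
    return np.array([[2 + t, 2 - t],
                     [2 - t, 2 + t]])

x_star = np.array([1.0, 2.0])
print(grad_f(x_star))                      # gradient vanishes at x*
print(np.linalg.eigvalsh(hess_f(x_star)))  # eigenvalues 0 and 4
```

At $x^*$ the factor $x_1 - x_2 + 1$ vanishes, so the Hessian collapses to the rank-one matrix with constant entry 2, which explains the zero eigenvalue.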

Beginning with $x^{(0)} = (2,2)$, Newton's method produces the results shown in Table 2. (To save space, I only show iterates $x^{(k)}$ for $k = 16, \ldots, 20$.)

Table 2: Results of applying Newton's method to minimize a function of two variables.
| $k$ | $x_1^{(k)}$ | $x_2^{(k)}$ |
|----|--------------------|--------------------|
| 16 | 1.0007612194201734 | 1.9992387805798266 |
| 17 | 1.0005074796134474 | 1.9994925203865526 |
| 18 | 1.0003383197422977 | 1.9996616802577023 |
| 19 | 1.0002255464948688 | 1.9997744535051312 |
| 20 | 1.0001503643299121 | 1.9998496356700879 |

The results suggest that Newton's method produces a sequence that converges to $x^*$. However, the convergence is definitely not quadratic. Indeed, the ratios $\|x^{(k+1)} - x^*\| / \|x^{(k)} - x^*\|$ are very nearly $2/3$ at every step, which strongly suggests that $x^{(k)} \to x^*$ linearly. A comparison between Tables 1 and 2 shows the desirability of quadratic (or at least superlinear) convergence.
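The linear rate can be reproduced directly. The sketch below (variable names are my own) runs the Newton iteration on this $f$ from $x^{(0)} = (2,2)$ and records the successive error ratios, which settle near $2/3$.

```python
import numpy as np

# f(x) = (x1 + x2 - 3)**2 + (x1 - x2 + 1)**4, with minimizer x* = (1, 2)
def grad(x):
    u, v = x[0] + x[1] - 3, x[0] - x[1] + 1
    return np.array([2*u + 4*v**3, 2*u - 4*v**3])

def hess(x):
    t = 12*(x[0] - x[1] + 1)**2
    return np.array([[2 + t, 2 - t], [2 - t, 2 + t]])

x_star = np.array([1.0, 2.0])
x = np.array([2.0, 2.0])
errs = []
for k in range(21):
    errs.append(np.linalg.norm(x - x_star))
    x = x + np.linalg.solve(hess(x), -grad(x))

# Successive error ratios for k = 15, ..., 19: roughly constant near 2/3,
# the signature of linear (not quadratic) convergence.
ratios = [errs[k + 1] / errs[k] for k in range(15, 20)]
print(ratios)
```

The rate $2/3$ is no accident: in the variables $u = x_1 + x_2 - 3$, $v = x_1 - x_2 + 1$ the Hessian is diagonal, the Newton step sends $u$ to $0$ and $v$ to $v - 4v^3/(12v^2) = (2/3)v$, so the error contracts by exactly $2/3$ per iteration.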