Next: The logarithmic barrier method
Up: Introduction to inequality-constrained optimization:
Previous: Introduction to inequality-constrained optimization:
I now turn to the inequality-constrained nonlinear program
\begin{align}
\min\ & f(x) \tag{1}\\
\text{s.t. } & g(x) \ge 0 \tag{2}
\end{align}
where $f:\mathbf{R}^n\rightarrow\mathbf{R}$ and $g:\mathbf{R}^n\rightarrow\mathbf{R}^p$.
The reader should notice that the inequality $g(x)\ge 0$ is to be interpreted
componentwise; that is, $g(x)\ge 0$ if and only if $g_i(x)\ge 0$ for each
$i=1,2,\ldots,p$.
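The componentwise interpretation of the constraint can be sketched in a few lines of Python; the constraint function `g` used here is hypothetical, chosen only to illustrate the check, and is not one of the examples from the text:

```python
import numpy as np

def is_feasible(g, x, tol=0.0):
    """Check g(x) >= 0 componentwise: every g_i(x) must be >= -tol."""
    return bool(np.all(np.asarray(g(x)) >= -tol))

# Hypothetical two-constraint example: g1(x) = x1, g2(x) = 1 - x1^2 - x2^2.
g = lambda x: np.array([x[0], 1.0 - x[0]**2 - x[1]**2])

print(is_feasible(g, np.array([0.5, 0.5])))   # True: inside the half-disk
print(is_feasible(g, np.array([-0.5, 0.0])))  # False: violates g1 >= 0
```

The point of the `tol` parameter is that, in floating-point arithmetic, a constraint that is exactly active may evaluate to a tiny negative number.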
Inequality constraints are common in optimization problems;
indeed, almost every optimization problem does or could have inequality
constraints, although it is sometimes safe to ignore them.
The most common type of inequality constraints are simple bounds
that the variables must satisfy. For example, if every variable must
satisfy both lower and upper bounds, then the constraints could be written
as
\[ l \le x \le u, \]
where $l,u\in\mathbf{R}^n$ with $l<u$.
To incorporate these constraints
into the standard form (2), one would write them as
\[ x - l \ge 0,\qquad u - x \ge 0. \]
Therefore, if a problem contained only these simple bounds, the constraint
function $g:\mathbf{R}^n\rightarrow\mathbf{R}^p$ ($p=2n$) would be defined as
\[ g(x) = \begin{bmatrix} x - l \\ u - x \end{bmatrix}. \]
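As a sketch in Python (the helper name is my own, not from the text), the conversion of simple bounds into the standard constraint function with $p=2n$ components can be written as:

```python
import numpy as np

def bounds_to_standard_form(l, u):
    """Return g with g(x) = (x - l, u - x), so that
    l <= x <= u componentwise  <=>  g(x) >= 0.
    g maps R^n to R^p with p = 2n."""
    l = np.asarray(l, dtype=float)
    u = np.asarray(u, dtype=float)
    def g(x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x - l, u - x])
    return g

g = bounds_to_standard_form([0.0, -1.0], [2.0, 1.0])
print(g([1.0, 0.0]))  # [1. 1. 1. 1.] -- all components nonnegative, so feasible
```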
Simple bounds are common because variables typically have a physical
interpretation and some real numbers do not make physical sense for a given
variable. For example, physical parameters (density, elasticity,
thermal conductivity, etc.) typically cannot take negative values, so
nonnegativity constraints of the form $x_i \ge 0$
are common. The same constraint appears when the variables represent
quantities (for example, the number of barrels of petroleum) that cannot be
negative. Upper bounds often represent limited resources.
In some cases, it may seem that the appropriate constraint is a strict
inequality, as when the variables represent physical parameters that must
be strictly positive. However, a strict inequality constraint may lead
to an optimization problem that is ill-posed, in that the minimum value is
attained only at a point on the boundary of the feasible set, where the
strict inequality fails. In such a case, there is no
solution to the optimization problem. A simple example of this
is
\[ \min\ x \quad \text{s.t. } x > 0, \]
whose infimum is $0$; but $x=0$ is infeasible, and no feasible $x$ attains
the infimum.
For this reason, strict inequality constraints are not used in nonlinear
programming.
When the appropriate constraint seems to be a strict inequality, one of the
following is usually true:
1.
The problem is expected to have a solution that easily satisfies the
strict inequality. In this case, as I will show below, the constraint
plays little part in the theory or algorithm other than as a ``sanity check''
on the variable. Therefore, the nonstrict inequality constraint is just
as useful.
2.
Due to noise in the data or other complications, the solution to
the optimization problem may lie on the boundary of the feasible set, even
though such a solution is not physically meaningful. In this case, the
inequality must be perturbed slightly and written as a nonstrict inequality.
For example, a constraint of the form $x_i>0$ should be replaced with
$x_i \ge a_i$,
where $a_i>0$ is the smallest value of $x_i$ that is
physically meaningful.
It may not be clear how to distinguish the first case from the second;
if it is not, the second approach is always valid. The first case can
usually be identified by the fact that the solution is not expected to be
close to satisfying the inequality as an equation.
Inequality constraints that are not simple bounds are usually bounds on
derived quantities. For example, if $x$ represents design parameters for
a certain object, the mass $m$ of the object may be represented as a
function of $x$: $m=q(x)$. In this case, constraints of the form
$q(x) \le \bar{m}$ or $q(x) \ge \underline{m}$
(or both) may be appropriate.
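A minimal sketch of putting such constraints on a derived quantity into the standard form $g(x)\ge 0$, using a hypothetical mass model `q` of my own for illustration:

```python
def derived_bound_constraints(q, m_lo=None, m_hi=None):
    """Build constraints for m_lo <= q(x) <= m_hi in the standard
    form g(x) >= 0; either bound may be omitted."""
    def g(x):
        m = q(x)
        out = []
        if m_lo is not None:
            out.append(m - m_lo)   # enforces q(x) >= m_lo
        if m_hi is not None:
            out.append(m_hi - m)   # enforces q(x) <= m_hi
        return out
    return g

# Hypothetical mass model q(x) = x1 * x2 with an upper bound of 10.
g = derived_bound_constraints(lambda x: x[0] * x[1], m_hi=10.0)
print(g([2.0, 3.0]))  # [4.0] -- nonnegative, so the mass bound is satisfied
```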
I now present a simple example of an inequality-constrained nonlinear
program.
Example 1.1
I define $f:\mathbf{R}^2\rightarrow\mathbf{R}$ and
$g:\mathbf{R}^2\rightarrow\mathbf{R}^2$.
The feasible set for $g(x)\ge 0$ is shown in Figure
1, together with the contours of the
objective function $f$.
Figure 1:
The contours of $f$ and the feasible set determined by $g$ (see Example 1.1).
The feasible set is half of the unit disk
(the shaded region). The minimizer
is indicated by an asterisk.

An important aspect of this example is that the second constraint,
$g_2(x)\ge 0$,
does not affect the solution. If the second constraint
were changed to $g_2(x)\ge -u$
for some value of $u$ that is not too large,
or if the second constraint were simply omitted, the optimization problem
would have the same solution. Both the theory of and algorithms for
inequality-constrained problems must address the issue of ``inactive''
constraints.
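Numerically, the inactive constraints at a point are those components of $g$ that are strictly positive there; the active ones are those that vanish. The following Python sketch (with hypothetical constraints loosely echoing the half-disk example) illustrates the distinction:

```python
import numpy as np

def active_constraints(g, x, tol=1e-8):
    """Return the indices i with g_i(x) ~ 0 (active constraints).
    Components with g_i(x) > tol are inactive: locally, they could be
    perturbed or omitted without changing the solution."""
    gx = np.asarray(g(x))
    return [i for i, gi in enumerate(gx) if abs(gi) <= tol]

# Hypothetical constraints: g1(x) = 1 - x1^2 - x2^2, g2(x) = x2 + 1.
g = lambda x: np.array([1.0 - x[0]**2 - x[1]**2, x[1] + 1.0])
print(active_constraints(g, np.array([1.0, 0.0])))  # [0]: g1 active, g2 inactive
```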
The first step in analyzing the NLP (1)-(2) should
be to derive the optimality conditions. However, since the optimality
conditions are somewhat more complicated than in the case of equality
constraints, I will begin by presenting an algorithm for solving
(1)-(2), namely, the logarithmic barrier method. From this
algorithm, I will deduce the optimality conditions for the
inequality-constrained NLP. A rigorous derivation of these optimality
conditions will be given later.
Mark S. Gockenbach
2003-03-30