Consider an optimization problem in the standard form (we call this the primal problem):

$$
\begin{aligned}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\
& h_i(x) = 0, \quad i = 1, \dots, p
\end{aligned}
\tag{1}
$$

with variable $x \in \mathbf{R}^n$ and domain $\mathcal{D} = \bigcap_{i=0}^{m} \mathbf{dom}\, f_i \;\cap\; \bigcap_{i=1}^{p} \mathbf{dom}\, h_i$, which we assume is nonempty. We denote the optimal value of this problem by $p^\star$.
The Lagrange dual function
We define the Lagrangian $L : \mathbf{R}^n \times \mathbf{R}^m \times \mathbf{R}^p \to \mathbf{R}$ associated with the problem (1) as

$$
L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{i=1}^{p} \nu_i h_i(x),
$$

where $\lambda_i$ is the Lagrange multiplier associated with the inequality constraint $f_i(x) \le 0$ and $\nu_i$ is the Lagrange multiplier associated with the equality constraint $h_i(x) = 0$.

We define the Lagrange dual function (or just dual function) $g : \mathbf{R}^m \times \mathbf{R}^p \to \mathbf{R}$ as the minimum value of the Lagrangian over $x$:

$$
g(\lambda, \nu) = \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) = \inf_{x \in \mathcal{D}} \left( f_0(x) + \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{i=1}^{p} \nu_i h_i(x) \right).
$$

Note that once we choose an $x$, the Lagrangian $L(x, \lambda, \nu)$ is an affine function of $(\lambda, \nu)$. The dual function is therefore a pointwise infimum of a family of affine functions, so it is concave in $(\lambda, \nu)$, even when the problem (1) is not convex.
Lower bound property
The dual function yields lower bounds on the optimal value $p^\star$ of the problem (1): for any $\lambda \succeq 0$ and any $\nu$,

$$
g(\lambda, \nu) \le p^\star.
$$

Suppose $\tilde{x}$ is a feasible point for the problem (1), i.e., $f_i(\tilde{x}) \le 0$ and $h_i(\tilde{x}) = 0$, and suppose $\lambda \succeq 0$. Then

$$
\sum_{i=1}^{m} \lambda_i f_i(\tilde{x}) + \sum_{i=1}^{p} \nu_i h_i(\tilde{x}) \le 0,
$$

since each term in the first sum is nonpositive and each term in the second sum is zero. Hence

$$
L(\tilde{x}, \lambda, \nu) = f_0(\tilde{x}) + \sum_{i=1}^{m} \lambda_i f_i(\tilde{x}) + \sum_{i=1}^{p} \nu_i h_i(\tilde{x}) \le f_0(\tilde{x}).
$$

Since

$$
g(\lambda, \nu) = \inf_{x \in \mathcal{D}} L(x, \lambda, \nu) \le L(\tilde{x}, \lambda, \nu) \le f_0(\tilde{x})
$$

holds for every feasible point $\tilde{x}$, taking the infimum over all feasible $\tilde{x}$ gives $g(\lambda, \nu) \le p^\star$.
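As a quick sanity check of the lower bound property, here is a small Python sketch on a toy problem of my own (not from the reference): minimize $x^2$ subject to $1 - x \le 0$, with $p^\star = 1$. The Lagrangian minimizer is $x = \lambda/2$, so $g(\lambda) = \lambda - \lambda^2/4$, and the script confirms $g(\lambda) \le p^\star$ on a grid of $\lambda \ge 0$.

```python
import numpy as np

# Toy problem (not from the text): minimize x^2 subject to 1 - x <= 0, so p* = 1 at x* = 1.
p_star = 1.0

def lagrangian(x, lam):
    return x**2 + lam * (1.0 - x)

def dual_function(lam):
    # inf_x L(x, lam) is attained at x = lam / 2, giving g(lam) = lam - lam^2 / 4.
    x_min = lam / 2.0
    return lagrangian(x_min, lam)

# Lower bound property: g(lam) <= p* for every lam >= 0.
for lam in np.linspace(0.0, 5.0, 11):
    g = dual_function(lam)
    assert g <= p_star + 1e-12
    print(f"lambda = {lam:4.1f}   g(lambda) = {g:7.4f}   <= p* = {p_star}")
```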
Let us derive an analytical expression for the Lagrange dual function of each of the following two problems.
Practice problem 1: Least-squares solution of linear equations

Consider the problem

$$
\begin{aligned}
\text{minimize} \quad & x^T x \\
\text{subject to} \quad & Ax = b,
\end{aligned}
$$

with variable $x \in \mathbf{R}^n$, where $A \in \mathbf{R}^{p \times n}$. There are no inequality constraints, so the Lagrangian is

$$
L(x, \nu) = x^T x + \nu^T (Ax - b),
$$

with $\nu \in \mathbf{R}^p$. Since $L(x, \nu)$ is a convex quadratic function of $x$, we can find its minimizer from the optimality condition

$$
\nabla_x L(x, \nu) = 2x + A^T \nu = 0,
$$

which yields $x = -\tfrac{1}{2} A^T \nu$. Therefore,

$$
g(\nu) = L\!\left(-\tfrac{1}{2} A^T \nu, \nu\right) = -\tfrac{1}{4} \nu^T A A^T \nu - b^T \nu,
$$

which is a concave quadratic function of $\nu$. By the lower bound property, for any $\nu \in \mathbf{R}^p$ we have

$$
p^\star \ge -\tfrac{1}{4} \nu^T A A^T \nu - b^T \nu.
$$
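As a numerical check of this expression (a sketch with randomly generated $A$ and $b$, not from the reference): the primal optimum is the least-norm solution $x^\star = A^T (A A^T)^{-1} b$ with $p^\star = b^T (A A^T)^{-1} b$, and the script below confirms that $g(\nu)$ lower-bounds $p^\star$ for random $\nu$ and that $\nu^\star = -2 (A A^T)^{-1} b$ attains the bound (a preview of strong duality, discussed below).

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 6
A = rng.standard_normal((p, n))   # assume full row rank (true with probability 1)
b = rng.standard_normal(p)

# Primal optimum of: minimize x^T x subject to A x = b  (least-norm solution).
AAt = A @ A.T
x_star = A.T @ np.linalg.solve(AAt, b)
p_star = x_star @ x_star

def g(nu):
    """Dual function g(nu) = -1/4 nu^T A A^T nu - b^T nu."""
    return -0.25 * nu @ AAt @ nu - b @ nu

# Lower bound property: g(nu) <= p* for every nu.
for _ in range(5):
    nu = rng.standard_normal(p)
    assert g(nu) <= p_star + 1e-9

# The maximizer nu* = -2 (A A^T)^{-1} b attains the bound here.
nu_star = -2.0 * np.linalg.solve(AAt, b)
print(f"p* = {p_star:.6f}, g(nu*) = {g(nu_star):.6f}")   # the two numbers agree up to rounding
```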
Practice problem 2: Standard form Linear Programming

Consider a linear program in standard form,

$$
\begin{aligned}
\text{minimize} \quad & c^T x \\
\text{subject to} \quad & Ax = b \\
& x \succeq 0,
\end{aligned}
$$

where the inequality constraints are $f_i(x) = -x_i \le 0$, $i = 1, \dots, n$. The Lagrangian is

$$
L(x, \lambda, \nu) = c^T x - \sum_{i=1}^{n} \lambda_i x_i + \nu^T (Ax - b) = -b^T \nu + (c + A^T \nu - \lambda)^T x.
$$

The dual function is

$$
g(\lambda, \nu) = \inf_x L(x, \lambda, \nu) = -b^T \nu + \inf_x \,(c + A^T \nu - \lambda)^T x,
$$

and since a linear function is bounded below only when it is identically zero,

$$
g(\lambda, \nu) =
\begin{cases}
-b^T \nu & \text{if } A^T \nu - \lambda + c = 0, \\
-\infty & \text{otherwise.}
\end{cases}
$$

We see that $g$ is finite only on a proper affine subset of $\mathbf{R}^n \times \mathbf{R}^p$.

The lower bound property is nontrivial only when $\lambda \succeq 0$ and $A^T \nu - \lambda + c = 0$; in that case $-b^T \nu$ is a lower bound on the optimal value, i.e., $p^\star \ge -b^T \nu$. Searching for the best lower bound of this form means solving

$$
\begin{aligned}
\text{maximize} \quad & -b^T \nu \\
\text{subject to} \quad & A^T \nu - \lambda + c = 0 \\
& \lambda \succeq 0.
\end{aligned}
$$

This problem, in turn, can be expressed as

$$
\begin{aligned}
\text{maximize} \quad & -b^T \nu \\
\text{subject to} \quad & A^T \nu + c \succeq 0,
\end{aligned}
$$

by eliminating the slack variable $\lambda$.
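To make this concrete, here is a sketch on a tiny standard form LP of my own choosing: minimize $x_1 + 2x_2$ subject to $x_1 + x_2 = 1$, $x \succeq 0$, whose optimal value is $p^\star = 1$ at $x = (1, 0)$. The reformulated dual is maximize $-\nu$ subject to $\nu \ge -1$, so the best bound is $1$ at $\nu^\star = -1$; the code scans values of $\nu$ and confirms the lower bound.

```python
import numpy as np

# Toy standard-form LP (my own example): minimize c^T x  s.t.  A x = b, x >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
p_star = 1.0          # attained at x = (1, 0)

def g(nu):
    """Dual function: -b^T nu if A^T nu + c >= 0 (taking lambda = A^T nu + c), else -inf."""
    if np.all(A.T @ nu + c >= 0):
        return -b @ nu
    return -np.inf

# Every dual-feasible nu gives a lower bound on p*.
best = -np.inf
for nu_val in np.linspace(-3.0, 3.0, 61):
    val = g(np.array([nu_val]))
    assert val <= p_star + 1e-9
    best = max(best, val)

print("best bound on the grid:", best)          # ~ 1.0
print("g at nu* = -1:", g(np.array([-1.0])))    # = 1.0 = p*
```

Any LP solver would recover the same $\nu^\star$; the grid scan is only to keep the sketch dependency-free.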

The Lagrange dual problem
For each pair $(\lambda, \nu)$ with $\lambda \succeq 0$, the Lagrange dual function gives us a lower bound on the optimal value $p^\star$ of the problem (1). A natural question is: what is the best lower bound that can be obtained from the Lagrange dual function? This leads to the optimization problem

$$
\begin{aligned}
\text{maximize} \quad & g(\lambda, \nu) \\
\text{subject to} \quad & \lambda \succeq 0.
\end{aligned}
\tag{2}
$$

This problem is called the Lagrange dual problem associated with the problem (1). In this context the original problem (1) is sometimes called the primal problem. We refer to $(\lambda^\star, \nu^\star)$ as dual optimal or optimal Lagrange multipliers if they are optimal for the dual problem (2).

Note: the Lagrange dual problem (2) is always a convex optimization problem, since the objective to be maximized is concave and the constraint set is convex; this is the case whether or not the primal problem (1) is convex.
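For the toy problem used in the earlier sketch (minimize $x^2$ subject to $1 - x \le 0$), the dual problem is maximize $\lambda - \lambda^2/4$ subject to $\lambda \ge 0$, whose solution is $\lambda^\star = 2$ with optimal value $1$. The short sketch below just locates this maximum on a grid; note that the dual is a concave maximization, hence a convex problem.

```python
import numpy as np

# Dual problem for the toy problem: maximize g(lambda) = lambda - lambda^2 / 4  s.t.  lambda >= 0.
# g is concave, so this is a convex problem; here it is solved crudely on a grid.
lams = np.linspace(0.0, 10.0, 100001)
g = lams - lams**2 / 4.0
i = np.argmax(g)
print(f"lambda* ~ {lams[i]:.3f}, optimal dual value ~ {g[i]:.4f}")   # analytically: 2 and 1
```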
Weak and strong duality

The optimal value of the Lagrange dual problem, which we denote by $d^\star$, is by definition the best lower bound on $p^\star$ that can be obtained from the Lagrange dual function. In particular, we have the important inequality

$$
d^\star \le p^\star,
$$

which holds even if the original problem is not convex. This property is called weak duality.

We refer to the difference $p^\star - d^\star$ as the optimal duality gap of the original problem; it is always nonnegative.

We say that strong duality holds if

$$
d^\star = p^\star,
$$

i.e., the optimal duality gap is zero.

Note that strong duality does not hold in general. But if the primal problem (1) is convex, i.e., of the form

$$
\begin{aligned}
\text{minimize} \quad & f_0(x) \\
\text{subject to} \quad & f_i(x) \le 0, \quad i = 1, \dots, m \\
& Ax = b
\end{aligned}
$$

with $f_0, \dots, f_m$ convex, strong duality usually (but not always) holds. Conditions under which strong duality is guaranteed for convex problems are called constraint qualifications.
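Weak duality is still useful when the problem is nonconvex. Here is a sketch on a small binary quadratic (two-way partitioning style) problem, chosen purely for illustration: minimize $x^T W x$ subject to $x_i^2 = 1$. The dual function works out to $g(\nu) = -\mathbf{1}^T \nu$ when $W + \mathbf{diag}(\nu) \succeq 0$ and $-\infty$ otherwise, so $\nu = -\lambda_{\min}(W)\,\mathbf{1}$ is always dual feasible and gives the cheap bound $n\,\lambda_{\min}(W) \le p^\star$ without touching the combinatorial primal; strong duality need not hold for this problem.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                      # random symmetric matrix

# Nonconvex primal: minimize x^T W x subject to x_i^2 = 1 (x_i in {-1, +1}); brute force p*.
p_star = min(np.array(x) @ W @ np.array(x)
             for x in itertools.product((-1.0, 1.0), repeat=n))

# Dual function: g(nu) = -sum(nu) if W + diag(nu) is positive semidefinite, else -inf.
# nu = -lambda_min(W) * ones is always dual feasible, giving the bound n * lambda_min(W).
lower_bound = n * np.linalg.eigvalsh(W)[0]

print(f"p* (brute force) = {p_star:.4f}")
print(f"dual lower bound = {lower_bound:.4f}")   # weak duality: bound <= p*, usually strictly
```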
Slater’s condition
Slater’s condition: There exists an $x \in \mathbf{relint}\, \mathcal{D}$ such that

$$
f_i(x) < 0, \quad i = 1, \dots, m, \qquad Ax = b.
$$

Such a point is called strictly feasible, since the inequality constraints hold with strict inequalities.

Slater’s theorem: If Slater’s condition holds for a convex problem, then strong duality holds (and the dual optimal value is attained whenever $d^\star > -\infty$).
Complementary slackness
Suppose strong duality holds. Let $x^\star$ be a primal optimal point and $(\lambda^\star, \nu^\star)$ a dual optimal point. Then

$$
\begin{aligned}
f_0(x^\star) &= g(\lambda^\star, \nu^\star) \\
&= \inf_x \left( f_0(x) + \sum_{i=1}^{m} \lambda_i^\star f_i(x) + \sum_{i=1}^{p} \nu_i^\star h_i(x) \right) \\
&\le f_0(x^\star) + \sum_{i=1}^{m} \lambda_i^\star f_i(x^\star) + \sum_{i=1}^{p} \nu_i^\star h_i(x^\star) \\
&\le f_0(x^\star).
\end{aligned}
$$

The first line states that the optimal duality gap is zero, the second line is the definition of the dual function, the third line holds because the infimum of the Lagrangian over $x$ is no larger than its value at $x = x^\star$, and the last line follows from $\lambda_i^\star \ge 0$, $f_i(x^\star) \le 0$, and $h_i(x^\star) = 0$.

We conclude that the two inequalities in this chain hold with equality. Since the inequality in the third line is an equality, we conclude that $x^\star$ minimizes $L(x, \lambda^\star, \nu^\star)$ over $x$. Another important conclusion is that

$$
\sum_{i=1}^{m} \lambda_i^\star f_i(x^\star) = 0.
$$

Since each term in this sum is nonpositive, we in fact have

$$
\lambda_i^\star f_i(x^\star) = 0, \quad i = 1, \dots, m.
$$

This condition is known as complementary slackness. It can be expressed as

$$
\lambda_i^\star > 0 \implies f_i(x^\star) = 0, \qquad \text{or equivalently} \qquad f_i(x^\star) < 0 \implies \lambda_i^\star = 0.
$$
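Here is a tiny numerical illustration (my own toy problem): minimize $(x-2)^2$ subject to $x - 1 \le 0$ and $-x - 3 \le 0$. The optimum is $x^\star = 1$ with multipliers $\lambda^\star = (2, 0)$, found by hand from the stationarity condition $2(x-2) + \lambda_1 - \lambda_2 = 0$; the active constraint carries a positive multiplier and the slack constraint a zero one, exactly as complementary slackness requires.

```python
import numpy as np

# Toy problem: minimize (x - 2)^2  s.t.  f1(x) = x - 1 <= 0,  f2(x) = -x - 3 <= 0.
# Optimal point and multipliers (worked out by hand):
x_star = 1.0
lam_star = np.array([2.0, 0.0])

f = np.array([x_star - 1.0, -x_star - 3.0])      # constraint values at x*: (0, -4)

# Complementary slackness: lambda_i* f_i(x*) = 0 for every i.
print(lam_star * f)                              # every entry is zero

# Equivalently: positive multiplier => active constraint, slack constraint => zero multiplier.
for lam_i, f_i in zip(lam_star, f):
    assert lam_i * f_i == 0.0
    if lam_i > 0:
        assert f_i == 0.0
    if f_i < 0:
        assert lam_i == 0.0
```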
KKT optimality conditions
We now assume that the functions $f_0, \dots, f_m, h_1, \dots, h_p$ are differentiable, but we make no assumptions yet about convexity.
KKT conditions for nonconvex problems
Suppose strong duality holds. Let $x^\star$ and $(\lambda^\star, \nu^\star)$ be any primal and dual optimal points, so that the duality gap is zero. Since $x^\star$ minimizes $L(x, \lambda^\star, \nu^\star)$ over $x$, the gradient of the Lagrangian must vanish at $x^\star$:

$$
\nabla f_0(x^\star) + \sum_{i=1}^{m} \lambda_i^\star \nabla f_i(x^\star) + \sum_{i=1}^{p} \nu_i^\star \nabla h_i(x^\star) = 0.
$$

The KKT conditions are the following:

$$
\begin{aligned}
f_i(x^\star) &\le 0, \quad i = 1, \dots, m \\
h_i(x^\star) &= 0, \quad i = 1, \dots, p \\
\lambda_i^\star &\ge 0, \quad i = 1, \dots, m \\
\lambda_i^\star f_i(x^\star) &= 0, \quad i = 1, \dots, m \\
\nabla f_0(x^\star) + \sum_{i=1}^{m} \lambda_i^\star \nabla f_i(x^\star) + \sum_{i=1}^{p} \nu_i^\star \nabla h_i(x^\star) &= 0.
\end{aligned}
$$
For any optimization problem with differentiable objective and constraint functions for which strong duality obtains, any pair of primal and dual optimal points must satisfy the KKT conditions.
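As a minimal check of these conditions, the sketch below verifies each of them at the known optimum of the toy problem minimize $x^2$ subject to $1 - x \le 0$, where $x^\star = 1$ and $\lambda^\star = 2$ (values assumed known here, e.g. from the dual solved earlier).

```python
# Toy problem: minimize f0(x) = x^2  subject to  f1(x) = 1 - x <= 0.
# Known optimum: x* = 1, lambda* = 2.
x, lam = 1.0, 2.0

f1 = 1.0 - x                 # constraint value at x*
grad_f0 = 2.0 * x            # gradient of the objective
grad_f1 = -1.0               # gradient of the constraint

assert f1 <= 0.0                          # 1) primal feasibility
assert lam >= 0.0                         # 2) dual feasibility
assert lam * f1 == 0.0                    # 3) complementary slackness
assert grad_f0 + lam * grad_f1 == 0.0     # 4) stationarity of the Lagrangian

print("All KKT conditions hold at (x*, lambda*) = (1, 2).")
```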
KKT conditions for convex problems
When the primal problem is convex, the KKT conditions are also sufficient for the points to be primal and dual optimal. That is, if $f_0, \dots, f_m$ are convex, $h_1, \dots, h_p$ are affine, and $\tilde{x}, \tilde{\lambda}, \tilde{\nu}$ are any points that satisfy the KKT conditions

$$
\begin{aligned}
f_i(\tilde{x}) &\le 0, \quad i = 1, \dots, m \\
h_i(\tilde{x}) &= 0, \quad i = 1, \dots, p \\
\tilde{\lambda}_i &\ge 0, \quad i = 1, \dots, m \\
\tilde{\lambda}_i f_i(\tilde{x}) &= 0, \quad i = 1, \dots, m \\
\nabla f_0(\tilde{x}) + \sum_{i=1}^{m} \tilde{\lambda}_i \nabla f_i(\tilde{x}) + \sum_{i=1}^{p} \tilde{\nu}_i \nabla h_i(\tilde{x}) &= 0,
\end{aligned}
$$

then $\tilde{x}$ and $(\tilde{\lambda}, \tilde{\nu})$ are primal and dual optimal, with zero duality gap. To see this, note that the first two conditions state that $\tilde{x}$ is primal feasible. Since $\tilde{\lambda} \succeq 0$, $L(x, \tilde{\lambda}, \tilde{\nu})$ is convex in $x$, and the last KKT condition states that its gradient with respect to $x$ vanishes at $x = \tilde{x}$; it follows that $\tilde{x}$ minimizes $L(x, \tilde{\lambda}, \tilde{\nu})$ over $x$. Hence

$$
g(\tilde{\lambda}, \tilde{\nu}) = L(\tilde{x}, \tilde{\lambda}, \tilde{\nu}) = f_0(\tilde{x}) + \sum_{i=1}^{m} \tilde{\lambda}_i f_i(\tilde{x}) + \sum_{i=1}^{p} \tilde{\nu}_i h_i(\tilde{x}) = f_0(\tilde{x}),
$$

where the last equality uses $\tilde{\lambda}_i f_i(\tilde{x}) = 0$ and $h_i(\tilde{x}) = 0$. This shows that $\tilde{x}$ and $(\tilde{\lambda}, \tilde{\nu})$ have zero duality gap, and therefore are primal and dual optimal.
We conclude the following:
- For any convex optimization problem with differentiable objective and constraint functions, any points that satisfy the KKT conditions are primal and dual optimal, and have zero duality gap.
- If a convex optimization problem with differentiable objective and constraint functions satisfies Slater’s condition, then the KKT conditions provide necessary and sufficient conditions for optimality: Slater’s condition implies that the optimal duality gap is zero and the dual optimum is attained, so
$x$ is optimal if and only if there exist $(\lambda, \nu)$ that, together with $x$, satisfy the KKT conditions.
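For convex problems the KKT conditions can often be solved directly. A minimal sketch (my own, with random data, assuming $P \succ 0$ and $A$ full row rank): for the equality-constrained quadratic program minimize $\tfrac{1}{2} x^T P x + q^T x$ subject to $Ax = b$, the KKT conditions reduce to a single linear system in $(x, \nu)$, solved below with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)            # positive definite
q = rng.standard_normal(n)
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)

# KKT conditions for: minimize 1/2 x^T P x + q^T x  s.t.  A x = b  (no inequality constraints):
#   stationarity:        P x + q + A^T nu = 0
#   primal feasibility:  A x = b
# Stacked, this is one linear system in (x, nu).
KKT = np.block([[P, A.T],
                [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x_star, nu_star = sol[:n], sol[n:]

print("feasibility residual :", np.linalg.norm(A @ x_star - b))                   # ~ 0
print("stationarity residual:", np.linalg.norm(P @ x_star + q + A.T @ nu_star))   # ~ 0
```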
Solving the primal problem via the dual
Note that if strong duality holds and a dual optimal solution $(\lambda^\star, \nu^\star)$ exists, then any primal optimal point is also a minimizer of $L(x, \lambda^\star, \nu^\star)$. This fact sometimes allows us to compute a primal optimal solution from a dual optimal solution.

More precisely, suppose we have strong duality and an optimal $(\lambda^\star, \nu^\star)$ is known. Suppose that the minimizer of $L(x, \lambda^\star, \nu^\star)$, i.e., the solution of

$$
\text{minimize} \quad f_0(x) + \sum_{i=1}^{m} \lambda_i^\star f_i(x) + \sum_{i=1}^{p} \nu_i^\star h_i(x),
$$

is unique. (For a convex problem this occurs, for example, when $L(x, \lambda^\star, \nu^\star)$ is a strictly convex function of $x$.) Then if the solution of this problem is primal feasible, it must be primal optimal; if it is not primal feasible, then no primal optimal point can exist, i.e., we can conclude that the primal optimum is not attained.
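Here is a sketch of this recipe on the least-squares problem from Practice problem 1 (my own numerical check): the minimizer of $L(x, \nu)$ is $x = -\tfrac{1}{2} A^T \nu$ and is unique, so plugging in the dual optimal $\nu^\star = -2 (A A^T)^{-1} b$ yields a point that is primal feasible and hence primal optimal.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 6
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)
AAt = A @ A.T

# Dual optimum of: maximize g(nu) = -1/4 nu^T A A^T nu - b^T nu   =>   nu* = -2 (A A^T)^{-1} b.
nu_star = -2.0 * np.linalg.solve(AAt, b)

# Unique minimizer of the Lagrangian L(x, nu*) = x^T x + nu*^T (A x - b)  =>  x = -1/2 A^T nu*.
x_from_dual = -0.5 * A.T @ nu_star

# It is primal feasible (A x = b), so by the argument above it is primal optimal.
print("feasibility residual:", np.linalg.norm(A @ x_from_dual - b))     # ~ 0
print("primal objective    :", x_from_dual @ x_from_dual)               # equals g(nu*) = d* = p*
print("g(nu*)              :", -0.25 * nu_star @ AAt @ nu_star - b @ nu_star)
```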
Reference:
- *Convex Optimization* by Stephen Boyd and Lieven Vandenberghe.