I need to relearn optimization, so here’s my incredibly short review.

The most basic problem to solve is to minimize $f(x)$ subject to $g(x) = 0$. Since the constraint is an equality, we use a Lagrange multiplier. We look at the Lagrangian $\mathcal{L}(x, \lambda) = f(x) - \lambda g(x)$; it's easy to see that a minimizer of the original problem corresponds to a saddle point of the Lagrangian. The stationary points are found by taking the gradients with respect to both $x$ and $\lambda$ and setting them equal to zero.
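As a sanity check, here is the recipe on a toy problem (my own example, not tied to anything below): minimize $x^2 + y^2$ subject to $x + y = 1$. Setting the gradients of the Lagrangian to zero gives a small linear system, which we can solve with NumPy:

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# Lagrangian: L(x, y, lam) = x^2 + y^2 - lam * (x + y - 1).
# Stationarity:
#   dL/dx   = 2x - lam = 0
#   dL/dy   = 2y - lam = 0
#   dL/dlam = -(x + y - 1) = 0
# This is a linear system in (x, y, lam):
K = np.array([
    [2.0, 0.0, -1.0],   # 2x - lam = 0
    [0.0, 2.0, -1.0],   # 2y - lam = 0
    [1.0, 1.0,  0.0],   # x + y = 1
])
rhs = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(K, rhs)
# Constrained minimizer: x = y = 1/2, with multiplier lam = 1.
```

By symmetry the minimizer must have $x = y$, and the constraint then forces $x = y = 1/2$, which is exactly what the linear solve returns.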

As an example, let's consider the curl-curl problem I've been looking at: \begin{align*} A u &= f, \\ B u &= 0, \end{align*} with some appropriate boundary conditions which I will skip. The matrix $A$ corresponds to a curl-curl operator in strong form and $B$ is a div operator. The zero-divergence condition on the function is critical for the physics; $u$ should be thought of as a magnetic field and thus satisfy Gauss's law.

Since $A$ is positive semi-definite, this is really a minimization problem $\min_u J(u) = u^T A u/2 - f^T u$ with corresponding Lagrangian $\mathcal{L}(u, \lambda) = u^T A u/2 - f^T u - \lambda^T B u$ (note that in this case, $\lambda$ is a vector). The gradient with respect to the multiplier gives $Bu = 0$ as expected, and the gradient with respect to $u$ gives $Au - B^T\lambda = f$. Combining these gives the saddle point problem that we are familiar with. We can also obtain this sort of result using the functional formulation; without diving into too much detail, it's a similar process except with the Euler-Lagrange equation.
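The two gradient conditions stack into one block system, which is easy to demonstrate numerically. The matrices below are small random stand-ins (an SPD $A$ and a full-row-rank $B$), not actual curl-curl or divergence operators, but the block structure is the same:

```python
import numpy as np

# Toy saddle-point system for: A u - B^T lam = f,  B u = 0.
# A and B are random stand-ins, NOT discretized curl-curl/div operators.
rng = np.random.default_rng(0)
n, m = 5, 2
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)          # symmetric positive definite stand-in
B = rng.standard_normal((m, n))  # full-row-rank constraint stand-in
f = rng.standard_normal(n)

# Block system from the two gradient conditions:
#   [ A  -B^T ] [ u   ]   [ f ]
#   [ B   0   ] [ lam ] = [ 0 ]
K = np.block([
    [A, -B.T],
    [B, np.zeros((m, m))],
])
rhs = np.concatenate([f, np.zeros(m)])
sol = np.linalg.solve(K, rhs)
u, lam = sol[:n], sol[n:]

# u satisfies both the constraint and the stationarity condition:
assert np.allclose(B @ u, 0)
assert np.allclose(A @ u - B.T @ lam, f)
```

Note the zero block in the lower-right corner: the system is symmetric but indefinite, which is why these saddle point problems need specialized solvers in the large-scale setting.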

So what in the world are the KKT conditions then? They're just the generalization of Lagrange multipliers to inequality constraints. One forms the Lagrangian again, but stationarity alone is no longer enough: the full conditions also require primal feasibility, dual feasibility (the multipliers on inequality constraints must be nonnegative), and complementary slackness (a multiplier is zero unless its constraint is active). Satisfy all of these and, under suitable constraint qualifications, we magically get the correct minimum. Now looking back on this, this is a really strong result. But man, the way the econ professors taught this was tragically bad.
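To make the four conditions concrete, here is a hand-solved toy problem of my own (not from anything above): minimize $(x - 2)^2$ subject to $x \le 1$. The unconstrained minimum $x = 2$ is infeasible, so the constraint is active at the optimum:

```python
# Minimize (x - 2)^2 subject to x <= 1 (hypothetical toy example).
# Lagrangian: L(x, mu) = (x - 2)^2 + mu * (x - 1), with mu >= 0.
# KKT conditions at the optimum x*:
#   stationarity:            2(x* - 2) + mu = 0
#   primal feasibility:      x* - 1 <= 0
#   dual feasibility:        mu >= 0
#   complementary slackness: mu * (x* - 1) = 0
# The constraint is active, so x* = 1, and stationarity gives mu = 2.
x_star, mu = 1.0, 2.0
assert abs(2 * (x_star - 2) + mu) < 1e-12   # stationarity
assert x_star - 1 <= 0                      # primal feasibility
assert mu >= 0                              # dual feasibility
assert abs(mu * (x_star - 1)) < 1e-12       # complementary slackness
```

If the constraint had been $x \le 3$ instead, the unconstrained minimum would be feasible, complementary slackness would force $\mu = 0$, and we'd recover the plain unconstrained condition $2(x - 2) = 0$.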