In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. As I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my own expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the {P \neq NP} problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The {P \neq NP} problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as {k}-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and {k}-SAT may be substituted with any of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge determines uniquely (via a specified matching, or permutation, of the colour set) the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching): one can quickly either produce such a colouring or certify that none exists. The conjecture asserts that the problem becomes extremely difficult once only approximate solutions are available, in the sense that when no exact solution exists, it is hard even to distinguish instances in which almost all of the edges can be simultaneously satisfied from instances in which only a small fraction can. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, that conjecture would imply asymptotically sharp approximation thresholds for many of these problems (for MAX-CUT, for instance, it would imply, assuming {P \neq NP}, that the approximation ratio of the Goemans-Williamson semidefinite programming algorithm cannot be beaten by any polynomial-time algorithm). This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.
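To make the contrast between the exact and approximate regimes a little more concrete, here is a small illustrative sketch in Python (entirely my own, purely for exposition; the encoding of the game and the function name exactly_satisfiable are invented, and nothing here is specific to Khot's work). Each edge carries a permutation of the {k} colours; once one vertex in a connected component is coloured, every other colour in that component is forced, so exact satisfiability can be decided in polynomial time by trying the {k} possible colours of one vertex per component and propagating. It is only the approximate version of the problem, where one tries to satisfy as many edges as possible when no exact solution exists, that is conjecturally hard.

```python
from collections import deque

def exactly_satisfiable(n, k, edges):
    """Decide whether a unique game admits an exact (100%) solution.

    `edges` is a list of triples (u, v, pi), where pi is a length-k tuple
    encoding the constraint colour[v] == pi[colour[u]].  Within each
    connected component the colour of a single vertex determines every
    other colour, so we try all k guesses for one vertex per component,
    propagate along edges, and check consistency.
    """
    adj = [[] for _ in range(n)]
    for u, v, pi in edges:
        inv = tuple(pi.index(c) for c in range(k))  # inverse permutation
        adj[u].append((v, pi))    # colour[v] is pi[colour[u]]
        adj[v].append((u, inv))   # colour[u] is inv[colour[v]]
    colour = [None] * n
    for start in range(n):
        if colour[start] is not None:
            continue
        for guess in range(k):
            trial, queue, ok = {start: guess}, deque([start]), True
            while queue and ok:
                u = queue.popleft()
                for v, pi in adj[u]:
                    c = pi[trial[u]]
                    if v not in trial:
                        trial[v] = c
                        queue.append(v)
                    elif trial[v] != c:
                        ok = False   # constraint violated in this component
                        break
            if ok:
                for v, c in trial.items():
                    colour[v] = c
                break
        else:
            return False   # no guess works for this component
    return True
```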

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modelling fluids for major animation studios such as Pixar or DreamWorks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector {x \in {\bf R}^n} in an affine space {\{ x \in {\bf R}^n: Ax = b \}} (where {b \in {\bf R}^m} and {A \in {\bf R}^{m\times n}} are given, and {m} is typically somewhat smaller than {n}) which minimises the {\ell^1}-norm {\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|} of the vector {x} (abbreviated {\|x\|_1} in what follows). This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once {n} and {m} get moderately large (e.g. of the order of {10^6}), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix {A}. Much of the problem comes from the fact that the functional {x \mapsto \|x\|_1} is only barely convex. One way to speed up the optimisation is to relax the problem by replacing the constraint {Ax=b} with a convex penalty term {\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}, so that one is now trying to minimise the unconstrained functional

\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.
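(As a concrete aside on the original constrained problem: basis pursuit can be recast as a linear program by introducing auxiliary variables {t_i \geq |x_i|} and minimising {\sum_i t_i}, which is essentially how generic solvers treat it. The sketch below is entirely my own illustration using SciPy, not anything specific to Osher's work, and the function name basis_pursuit_lp is made up; it shows the baseline approach which becomes expensive once {n} and {m} are large and which does not exploit any structure in {A}.)

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program.

    Standard reformulation: variables z = (x, t) with |x_i| <= t_i,
    objective sum(t).  This is the generic "solvable in principle"
    baseline that Bregman-type methods aim to outperform at scale.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])      # minimise sum of t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])               # x - t <= 0, -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])            # Ax = b (t unconstrained)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]
```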

The relaxed functional is more convex than {\|x\|_1} alone, and is defined on the computationally simpler domain {{\bf R}^n} rather than the affine space {\{x \in {\bf R}^n: Ax=b\}}, so it is easier (though still not entirely trivial) to minimise. However, the minimiser {x^\epsilon} of this relaxed problem need not match the minimiser {x^0} of the original problem, particularly if the (sub-)gradient {\partial \|x\|_1} of the original functional {\|x\|_1} is large at {x^0} and {\epsilon} is not set to be small. (And setting {\epsilon} too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) If, on the other hand, one modifies the objective function by subtracting an additional linear term {\langle p, x \rangle}, so that one now minimises

\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2

then some simple convexity considerations reveal that the minimiser of this new problem will match the minimiser {x^0} of the original problem, provided that {p} is (or more precisely, lies in) the (sub-)gradient {\partial \|x\|_1} of {\|x\|_1} at {x^0}, even if {\epsilon} is not small. But of course one does not know in advance what the correct value of {p} should be, because one does not know what the minimiser {x^0} is.
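(To spell out the convexity considerations, which are standard: {x^0} minimises the modified functional precisely when

\displaystyle 0 \in \partial \|x\|_1 \big|_{x=x^0} - p + \frac{1}{\epsilon} A^T (Ax^0 - b),

and since {Ax^0 = b} the final term vanishes, so the condition holds exactly when {p} lies in the subgradient {\partial \|x\|_1} at {x^0}; as the functional is convex, this first-order condition already guarantees a global minimum.)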

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for {x} and {p} simultaneously: given current guesses {x^k, p^k} for the minimiser {x^0} and the subgradient {p} respectively, one first updates {x^k} to the minimiser {x^{k+1} \in {\bf R}^n} of the convex functional

\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)

and then takes {p^{k+1}} to be the natural candidate for an element of the subgradient {\partial \|x\|_1} at {x^{k+1}}, namely

\displaystyle p^{k+1} := p^k - \nabla \left( \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \right) \Big|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)

(note upon taking the first variation of (1) that {p^{k+1}} does indeed lie in the subgradient {\partial \|x\|_1} at {x^{k+1}}). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser {x^0}, even for non-small values of {\epsilon}; it also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
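For readers who, like me, find it easiest to absorb such schemes in concrete form, here is a minimal sketch of the iteration just described. It is emphatically not the implementation of Osher, Yin, Goldfarb and Darbon: the inner minimisation of (1) is performed with a crude proximal-gradient (soft-thresholding) loop purely for illustration, and the function names and parameters (bregman_l1, n_outer, n_inner) are my own inventions.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1 (componentwise shrinkage)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bregman_l1(A, b, eps=1.0, n_outer=50, n_inner=200):
    """Sketch of the Bregman iteration: alternately minimise the functional
    ||x||_1 - <p^k, x> + (1/(2*eps)) ||Ax - b||^2 over x, then update p."""
    m, n = A.shape
    x, p = np.zeros(n), np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2 / eps  # Lipschitz constant of the smooth part
    for _ in range(n_outer):
        # inner loop: proximal gradient descent on the convex functional (1)
        for _ in range(n_inner):
            grad = A.T @ (A @ x - b) / eps - p  # gradient of the smooth terms
            x = soft_threshold(x - grad / L, 1.0 / L)
        # subgradient update: p^{k+1} = p^k - (1/eps) A^T (A x^{k+1} - b)
        p = p - A.T @ (A @ x - b) / eps
    return x

# toy usage: attempt to recover a sparse vector from a few random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 61]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_rec = bregman_l1(A, b)
```

Needless to say, this toy loop is meant only to illustrate the structure of the outer iteration in {x} and {p}, not the performance of the actual algorithms.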

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth {n}-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space {{\bf R}^m}. The Nash embedding theorem guarantees that one can do this if {m} is large enough depending on {n}, but what is the minimal value of {m} one can get away with? Many years ago, my colleague Robert Greene showed that {m = \frac{n(n+1)}{2} + n} sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevskaya argument shows that {m = \frac{n(n+1)}{2}} is possible, and no better. This suggests that {m = \frac{n(n+1)}{2}} is the threshold for the smooth problem as well, but this remains open in general. The case {n=1} is trivial, and the {n=2} case is not too difficult (if the curvature is non-zero), as the codimension {m-n} is one in this case and the problem reduces to that of solving a Monge-Ampère equation. With Bryant and Yang, Griffiths settled the {n=3} case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper, over 100 pages long, combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (for instance, they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.
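(To indicate where the number {\frac{n(n+1)}{2}} comes from, here is the standard heuristic count, which is of course not specific to any of the papers mentioned above: a smooth map {u: M \rightarrow {\bf R}^m} from an {n}-dimensional Riemannian manifold {(M,g)} is a local isometric embedding precisely when, in local coordinates,

\displaystyle \partial_i u \cdot \partial_j u = g_{ij}, \qquad 1 \leq i \leq j \leq n,

which by symmetry is a system of {\frac{n(n+1)}{2}} equations in the {m} unknown components of {u}; thus {m = \frac{n(n+1)}{2}} is the natural “determined” threshold at which one has just as many unknowns as equations.)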