You are currently browsing the monthly archive for March 2007.

This is a well known problem (see for instance this survey) in the area of “quantum chaos” or “quantum unique ergodicity”; I am attracted to it both for its simplicity of statement (which I will get to eventually), and also because it focuses on one of the key weaknesses in our current understanding of the Laplacian, namely that it is difficult with the tools we know to distinguish between eigenfunctions (exact solutions to $-\Delta u_k = \lambda_k u_k$) and quasimodes (approximate solutions to the same equation), unless one is willing to work with generic energy levels rather than specific energy levels.

The Bunimovich stadium $\Omega$ is the name given to any planar domain consisting of a rectangle bounded at both ends by semicircles. Thus the stadium has two flat edges (which are traditionally drawn horizontally) and two round edges, as this picture from Wikipedia shows:

Despite the simple nature of this domain, the stadium enjoys some interesting classical and quantum dynamics. The classical dynamics, or billiard dynamics on $\Omega$ is ergodic (as shown by Bunimovich) but not uniquely ergodic. In more detail: we say the dynamics is ergodic because a billiard ball with randomly chosen initial position and velocity (as depicted above) will, over time, be uniformly distributed across the billiard (as well as in the energy surface of the phase space of the billiard). On the other hand, we say that the dynamics is not uniquely ergodic because there do exist some exceptional choices of initial position and velocity for which one does not have uniform distribution, namely the vertical trajectories in which the billiard reflects orthogonally off of the two flat edges indefinitely.
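The contrast between the generic and the exceptional trajectories is easy to see numerically. The following Python sketch is only a crude time-stepping simulation of the billiard (not the exact flow, and all parameters and starting data are chosen purely for illustration): a generic trajectory wanders across the whole stadium, while a vertical trajectory launched from the rectangular part bounces between the two flat edges forever.

```python
import math

def run_billiard(pos, vel, a=1.0, r=1.0, dt=1e-3, steps=100000):
    """Crude time-stepping simulation of billiard motion in a Bunimovich
    stadium: the rectangle [-a, a] x [-r, r] capped by radius-r semicircles
    centred at (-a, 0) and (+a, 0).  Whenever a step carries the ball
    outside the domain, we reflect the velocity specularly off the local
    boundary normal and project the ball back onto the boundary."""
    x, y = pos
    vx, vy = vel
    xs = []
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if -a <= x <= a:
            if abs(y) > r:                  # flat edge: normal is (0, +-1)
                vy = -vy
                y = math.copysign(r, y)
        else:                               # circular cap: normal is radial
            cx = math.copysign(a, x)
            dx, dy = x - cx, y
            d = math.hypot(dx, dy)
            if d > r:
                nx, ny = dx / d, dy / d
                dot = vx * nx + vy * ny
                vx, vy = vx - 2 * dot * nx, vy - 2 * dot * ny
                x, y = cx + r * nx, r * ny
        xs.append(x)
    return xs

# A generic trajectory explores the whole stadium...
xs_generic = run_billiard((0.3, 0.1), (0.71, 0.70))
# ...while the exceptional vertical trajectory bounces between the two
# flat edges forever, never leaving the vertical line x = 0.3.
xs_vertical = run_billiard((0.3, 0.0), (0.0, 1.0))
print(max(xs_vertical) - min(xs_vertical))   # 0.0: no horizontal motion at all
```

Of course, this discretised bouncing is only an approximation to the true billiard dynamics, but it already exhibits the failure of unique ergodicity: the vertical trajectory never equidistributes.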

I am very saddened to find out (first via Wikipedia, then by several independent confirmations) that Paul Cohen died on Friday, aged 72.

Paul Cohen is of course best known in mathematics for his Fields Medal-winning proof of the undecidability of the continuum hypothesis within the standard Zermelo-Fraenkel-Choice (ZFC) axioms of set theory, by introducing the now standard method of forcing in model theory. (More precisely, assuming ZFC is consistent, Cohen proved that models of ZFC exist in which the continuum hypothesis fails; Gödel had previously shown under the same assumption that models exist in which the continuum hypothesis is true.) Cohen’s method also showed that the axiom of choice was independent of ZF. The friendliest introduction to forcing is perhaps still Timothy Chow‘s “Forcing for dummies“, though I should warn that Tim has a rather stringent definition of “dummy”.

But Cohen was also a noted analyst. For instance, the Cohen idempotent theorem in harmonic analysis classifies the idempotent measures $\mu$ in a locally compact abelian group G (i.e. the finite regular measures for which $\mu * \mu = \mu$); specifically, a finite regular measure $\mu$ is idempotent if and only if the Fourier transform $\hat \mu$ of the measure only takes values 0 and 1, and furthermore can be expressed as a finite linear combination of indicator functions of cosets of open subgroups of the Pontryagin dual $\hat G$ of G. (Earlier results in this direction were obtained by Helson and by Rudin; a non-commutative version was subsequently given by Host. These results play an important role in abstract harmonic analysis.) Recently, Ben Green and Tom Sanders connected this classical result to the very recent work on Freiman-type theorems in additive combinatorics, using the latter to create a quantitative version of the former, which in particular is suitable for use in finite abelian groups.
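To make the statement concrete, here is a quick numerical check (in Python) of the easy direction of the idempotent theorem on the cyclic group ${\Bbb Z}/12{\Bbb Z}$; the particular subgroup used is chosen purely for illustration. The normalised indicator measure of the subgroup $\{0,4,8\}$ is idempotent under convolution, and its Fourier transform is the indicator of the annihilator $\{0,3,6,9\}$, a subgroup of the dual group, as the classification predicts.

```python
import cmath

n, H = 12, {0, 4, 8}      # the subgroup H = {0, 4, 8} of Z/12Z
# normalised indicator measure of H (a probability measure on Z/12Z)
mu = [1 / len(H) if x in H else 0.0 for x in range(n)]

# convolution on Z/nZ: (mu * mu)(x) = sum_y mu(y) mu(x - y)
conv = [sum(mu[y] * mu[(x - y) % n] for y in range(n)) for x in range(n)]
assert all(abs(conv[x] - mu[x]) < 1e-12 for x in range(n))   # mu * mu = mu

# Fourier transform: mu_hat(xi) = sum_x mu(x) exp(-2 pi i x xi / n)
mu_hat = [sum(mu[x] * cmath.exp(-2j * cmath.pi * x * xi / n) for x in range(n))
          for xi in range(n)]
# The values are 0 and 1, equal to 1 exactly on the annihilator {0, 3, 6, 9},
# a subgroup of the dual group, in line with Cohen's classification.
print([round(abs(z)) for z in mu_hat])   # [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
```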

Paul Cohen’s legacy also includes the advisorship of outstanding mathematicians such as the number theorist and analyst Peter Sarnak (who, incidentally, taught me analytic number theory when I was a graduate student). Cohen was in fact my “uncle”; his advisor, Antoni Zygmund, was the advisor of my own advisor Elias Stein.

It is a great loss for the world of mathematics.

[Update, Mar 25: Added the hypothesis that ZFC is consistent to the description of Cohen’s result. Several other minor edits also.]

Ben Green and I have just uploaded the paper “Freiman’s theorem in finite fields via extremal set theory” to the arXiv, which we have submitted to Combinatorics, Probability, and Computing. This paper is our third regarding various refinements of Freiman’s theorem; the first one concerned approximating a set of small doubling in a torsion-free group by a progression of really small rank using compressions and convex geometry, while the second one obtained an intersection version of Freiman’s theorem (and the Balog-Szemerédi-Gowers theorem) in the 2-torsion case using an induction-on-dimension argument. Here, we return to compressions to obtain particularly sharp bounds on Freiman’s theorem in the 2-torsion case, and in particular to obtain some partial results on the polynomial Freiman-Ruzsa conjecture. Specifically, we show that if $A \subset {\Bbb F}_2^n$ has doubling constant at most K, thus $|A+A| \leq K|A|$, then A is contained in an affine subspace of cardinality at most $2^{2K + O(\sqrt{K} \log K)} |A|$. Except for the O() factor, this is sharp. If in addition A is a downset, we show that A can be covered by $O(K^{46})$ translates of a coordinate subspace of cardinality at most |A|. (The constant 46 is definitely not best possible, though we show that it cannot be lowered to below 1.46.) In particular, this settles the polynomial Freiman-Ruzsa conjecture in the case when the set is a downset.
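The objects in this statement are easy to experiment with on a computer. In the following Python sketch (the small set A is an arbitrary example, not one from the paper), elements of ${\Bbb F}_2^n$ are encoded as bitmasks, addition is XOR, and we compute the doubling constant of a downset together with the size of the smallest subspace containing it:

```python
def doubling_constant(A):
    """|A+A| / |A| for A in F_2^n, with elements encoded as bitmasks
    and addition given by XOR."""
    return len({a ^ b for a in A for b in A}) / len(A)

def span_dim(vectors):
    """Dimension of the GF(2)-linear span, via leading-bit elimination."""
    basis = {}                       # leading bit position -> basis vector
    for v in vectors:
        while v:
            k = v.bit_length() - 1
            if k not in basis:
                basis[k] = v
                break
            v ^= basis[k]            # cancel the leading bit and continue
    return len(basis)

# A small downset in F_2^3 (closed under zeroing out coordinates):
A = {0b000, 0b001, 0b010, 0b011, 0b100}
K = doubling_constant(A)
# A contains 0, so the smallest affine subspace containing A is its linear span.
span_size = 2 ** span_dim(A)
print(K, span_size)                  # 1.6 and 8
```

Here the containing subspace has size $8 \leq 2^{2K}|A|$, consistent with the shape of the bound in the theorem (though of course a single tiny example proves nothing).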

The main ideas are to use ideas from extremal set systems theory, notably the method of compressions (as used for instance by Bollobás and Leader) and of shifts (as used for instance by Frankl). Using all this machinery, things eventually reduce to understanding sumsets of a very structured class of sets – the shift-minimal downsets – and their relationship with Hamming balls. This in turn ultimately rests on a certain “gamblers’ ruin” problem already considered by Frankl.

It is always dangerous to venture an opinion as to why a problem is hard (cf. Clarke’s first law), but I’m going to stick my neck out on this one, because (a) it seems that there has been a lot of effort expended on this problem recently, sometimes perhaps without full awareness of the main difficulties, and (b) I would love to be proved wrong on this opinion :-) .

The global regularity problem for Navier-Stokes is of course a Clay Millennium Prize problem and it would be redundant to describe it again here. I will note, however, that it asks for existence of global smooth solutions to a Cauchy problem for a nonlinear PDE. There are countless other global regularity results of this type for many (but certainly not all) other nonlinear PDE; for instance, global regularity is known for Navier-Stokes in two spatial dimensions rather than three (this result essentially dates all the way back to Leray’s thesis in 1933!). Why is the three-dimensional Navier-Stokes global regularity problem considered so hard, when global regularity for so many other equations is easy, or at least achievable?

(For this post, I am only considering the global regularity problem for Navier-Stokes, from a purely mathematical viewpoint, and in the precise formulation given by the Clay Institute; I will not discuss at all the question as to what implications a rigorous solution (either positive or negative) to this problem would have for physics, computational fluid dynamics, or other disciplines, as these are beyond my area of expertise. But if anyone qualified in these fields wants to make a comment along these lines, by all means do so.)

I’ve received quite a lot of inquiries regarding a recent article in the New York Times, so I am borrowing some space on this blog to respond to some of the more common of these, and also to initiate a discussion on maths education, which was briefly touched upon in the article.

Most of the feedback I received, though, concerned the issue of maths education. I mentioned in the article that I feel that the skill of thinking in a mathematical and rigorous way is one which can be taught to virtually anyone, and I would in the future hope to be involved in some project aimed towards this goal. I received a surprising number of inquiries on this, particularly from parents of school-age children. Unfortunately, my maths teaching experience is almost completely restricted to the undergraduate and graduate levels – and my own school experience was perhaps somewhat unusual – so I currently have close to zero expertise in K-12 maths education. (This may change though as my son gets older…) Still, I think it is a worthy topic of discussion as to what the mathematical academic community can do to promote interest in mathematics, and to encourage mathematical ways of thinking and of looking at the world, so I am opening the discussion to others who may have something of interest to say on these matters.

(Update, March 13: A bad link has been repaired. Also, I can’t resist a somewhat political plug: for Californian readers, there is an open letter in support of California’s K-12 education standards, together with some background information.)

I’ve arXived the paper “On the condition number of a randomly perturbed matrix”, authored with Van Vu, which will appear at STOC 2007. (This paper was actually finished and submitted several months ago, but for some reason I neglected to upload it to the arXiv until now.) Here we show that given an arbitrary $n \times n$ matrix M with polynomially bounded entries (in particular, M could be highly singular), a random discrete perturbation M+N, where N is (say) a random Bernoulli $\pm 1$ matrix, will have polynomially bounded condition number with high probability (specifically, given any A there exists a B such that the condition number is $O(n^B)$ with probability $1-O(n^{-A})$). This is the discrete analogue of a corresponding result by Spielman and Teng for continuous random perturbations. The proof uses the inverse Littlewood-Offord technology as well as a discretisation lemma for generalized arithmetic progressions, both from our previous paper (which essentially addressed the M=0 case). It turns out that the effect of M is essentially just to add a few deterministic rows to a random matrix for which one needs to obtain lower bounds on the least singular value.
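The phenomenon is easy to observe numerically. The following Python/NumPy sketch (with dimension and random seed chosen arbitrarily) starts with a rank-one, hence infinitely ill-conditioned, matrix M with polynomially bounded entries, adds a random Bernoulli $\pm 1$ matrix N, and inspects the condition number of the perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# A rank-one (hence infinitely ill-conditioned) matrix M whose entries
# are polynomially bounded in n.
M = np.outer(np.arange(1.0, n + 1), np.ones(n))

# A random Bernoulli +-1 perturbation N.
N = rng.choice([-1.0, 1.0], size=(n, n))

print(np.linalg.matrix_rank(M))       # 1: M itself is as singular as can be
cond = np.linalg.cond(M + N)
print(cond)                           # finite, and far from astronomically large
```

One run of course only illustrates the "with high probability" statement; the content of the theorem is the quantitative tail bound on the least singular value of M+N.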

[This post is authored by Ben Green, who has kindly “guest blogged” this week’s “open problem of the week”. – T.]

In an earlier blog post Terry discussed Freiman’s theorem. The name of Freiman is attached to a growing body of theorems which take some rather “combinatorial” hypothesis, such as that the sumset A+A of some set A is small, and deduce from it rather “algebraic” information (such as that A is contained in a subspace or a grid).

The easiest place to talk about Freiman’s theorem is in the finite field model ${\Bbb F}_2^n$ (see my survey article on this subject for a full discussion). Here it was shown by Ruzsa that if |A+A| is at most $K |A|$ then A is contained in a subspace of size no more than about $2^{K^4}|A|$. The exponent has been improved a few times since Ruzsa’s paper, the best result currently in print being due to Sanders, who obtains an upper bound of $2^{K^{3/2}\log K}|A|$. Terry and I are in the process of writing a paper which obtains $2^{2K + o(K)}|A|$, which is best possible in view of the example $A := \{e_1,...,e_m\}$ where $m := 2K + O(1)$; this set has doubling roughly K but is not contained in a subspace of dimension smaller than 2K.
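The extremal example above can be checked directly in a few lines of Python, encoding $e_i \in {\Bbb F}_2^n$ as the bitmask $2^i$ so that addition becomes XOR:

```python
m = 12
A = [1 << i for i in range(m)]   # the standard basis vectors e_1, ..., e_m

# Under XOR, e_i + e_i = 0 and the sums e_i + e_j with i < j are all
# distinct, so |A + A| = C(m,2) + 1 and the doubling constant is ~ m/2.
sumset = {a ^ b for a in A for b in A}
K = len(sumset) / len(A)
print(len(sumset), K)            # 67 sums, K = 67/12, i.e. roughly m/2
# Yet the smallest affine subspace containing A has dimension m - 1 = 2K + O(1),
# so a bound of shape 2^{2K + o(K)} |A| cannot be improved in the exponent.
```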

This result has an air of finality (except for the true nature of the o(K) term, which represents an interesting open problem). This is something of an illusion, however. Even using this theorem, one loses an exponential every time one tries to transition between “combinatorial” structure and “algebraic” structure and back again. Indeed if one knows that A is contained in a subspace of size $2^{2K}|A|$ then the strongest assertion one can make about the doubling of A is that it is at most $2^{2K}$.

The Polynomial Freiman-Ruzsa conjecture (PFR), in ${\Bbb F}_2^n$, hypothesises a more precise structure theorem for sets with small doubling. Using this conjecture, one may flit back and forth between combinatorial and algebraic structure with only polynomial losses. Ruzsa attributes the conjecture to Marton: it states that if A has doubling at most K then A is contained in the union of $K^{O(1)}$ translates of some subspace H of size at most |A|.

It seems that I have unwittingly started an “open problem of the week” column here; certainly it seems easier for me to pose unsolved problems than to write papers :-) .

This question in convex geometry has been around for a while; I am fond of it because it attempts to capture the intuitively obvious fact that cubes and octahedra are the “pointiest” possible symmetric convex bodies one can create. Sadly, we still have very few tools to make this intuition rigorous (especially when compared against the assertion that the Euclidean ball is the “roundest” possible convex body, for which we have many rigorous and useful formulations).

To state the conjecture I need a little notation. Suppose we have a symmetric convex body $B \subset {\Bbb R}^d$ in a Euclidean space, thus B is open, convex, bounded, and symmetric around the origin. We can define the polar body $B^\circ \subset {\Bbb R}^d$ by

$B^\circ := \{ \xi \in {\Bbb R}^d: x \cdot \xi < 1 \hbox{ for all } x \in B \}$.

This is another symmetric convex body. One can interpret B as the unit ball of a Banach space norm on ${\Bbb R}^d$, in which case $B^\circ$ is simply the unit ball of the dual norm. The Mahler volume $M(B)$ of the body is defined as the product of the volumes of B and its polar body:

$M(B) := \hbox{vol}(B) \hbox{vol}(B^\circ).$
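For the extreme examples mentioned above, the Mahler volume can be computed in closed form. The following Python snippet uses the classical Dirichlet formula $\hbox{vol}(B_p^d) = (2\Gamma(1/p+1))^d/\Gamma(d/p+1)$ for the unit ball of the $\ell_p$ norm on ${\Bbb R}^d$, together with the fact that the polar of the $\ell_p$ ball is the $\ell_q$ ball with $1/p + 1/q = 1$:

```python
import math

def lp_ball_volume(d, p):
    """Volume of the unit ball of the l_p norm on R^d, via the classical
    Dirichlet formula vol = (2 * Gamma(1/p + 1))**d / Gamma(d/p + 1)."""
    if p == math.inf:
        return 2.0 ** d
    return (2 * math.gamma(1 / p + 1)) ** d / math.gamma(d / p + 1)

d = 3
# The polar of the l_p ball is the l_q ball, where 1/p + 1/q = 1.
mahler_cube = lp_ball_volume(d, math.inf) * lp_ball_volume(d, 1)  # cube x octahedron
mahler_ball = lp_ball_volume(d, 2) ** 2                           # Euclidean ball is self-polar

print(mahler_cube)   # 8 * 4/3 = 32/3, about 10.67
print(mahler_ball)   # (4*pi/3)**2, about 17.55
```

So in three dimensions the cube has Mahler volume $32/3 \approx 10.67$, comfortably below the ball’s $(4\pi/3)^2 \approx 17.55$, consistent with the intuition that the cube is “pointier” (i.e. has smaller Mahler volume) than the Euclidean ball.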

This is another one of my favourite open problems, falling under the heading of inverse theorems in arithmetic combinatorics. “Direct” theorems in arithmetic combinatorics take a finite set A in a group or ring and study things like the size of its sum set $A+A := \{ a+b: a,b \in A \}$ or product set $A \cdot A := \{ ab: a,b \in A \}$. For example, a typical result in this area is the sum-product theorem, which asserts that whenever $A \subset {\Bbb F}_p$ is a subset of a finite field of prime order with $1 \leq |A| \leq p^{1-\delta}$, then

$\max( |A+A|, |A \cdot A| ) \geq |A|^{1+\epsilon}$

for some $\epsilon = \epsilon(\delta) > 0$. (This particular theorem was first proven here, with an earlier partial result here; more recent and elementary proofs with civilised bounds can be found here, here or here. It has a number of applications.)
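The two extreme cases behind the sum-product phenomenon, arithmetic progressions (small sumset, large product set) and geometric progressions (large sumset, small product set), are easy to see numerically. The toy computation below works over the integers rather than ${\Bbb F}_p$, which suffices to illustrate the dichotomy:

```python
def sumset(A):  return {a + b for a in A for b in A}
def prodset(A): return {a * b for a in A for b in A}

AP = set(range(1, 51))              # arithmetic progression: sums collide
GP = {2 ** i for i in range(50)}    # geometric progression: products collide

for A in (AP, GP):
    # at least one of |A+A|, |A.A| is much larger than |A|
    expansion = max(len(sumset(A)), len(prodset(A))) / len(A)
    print(expansion)                # well above 1 in both cases
```

For the geometric progression, for instance, the product set has only $2|A|-1$ elements, but the sumset has $\binom{|A|}{2} + |A|$ elements, since the binary expansions of the sums $2^i + 2^j$ are all distinct.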

In contrast, inverse theorems in this subject start with a hypothesis that, say, the sum set A+A of an unknown set A is small, and try to deduce structural information about A. A typical goal is to completely classify all sets A for which A+A has comparable size with A. In the case of finite subsets of integers, this is Freiman’s theorem, which roughly speaking asserts that $|A+A| = O(|A|)$ if and only if A is a dense subset of a generalised arithmetic progression P of rank O(1), where we say that A is a dense subset of B if $A \subset B$ and $|B| = O(|A|)$. (The “if and only if” has to be interpreted properly; in either the “if” or the “only if” direction, the implicit constants in the conclusion depend on the implicit constants in the hypothesis, but these dependencies are not inverses of each other.) In the case of finite subsets A of an arbitrary abelian group, we have the Freiman-Green-Ruzsa theorem, which asserts that $|A+A| = O(|A|)$ if and only if A is a dense subset of a sum P+H of a finite subgroup H and a generalised arithmetic progression P of rank O(1).