The square root cancellation heuristic, briefly mentioned in the preceding set of notes, predicts that if a collection {z_1,\dots,z_n} of complex numbers have phases that are sufficiently “independent” of each other, then

\displaystyle |\sum_{j=1}^n z_j| \approx (\sum_{j=1}^n |z_j|^2)^{1/2};

similarly, if {f_1,\dots,f_n} are a collection of functions in a Lebesgue space {L^p(X,\mu)} that oscillate “independently” of each other, then we expect

\displaystyle \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)} \approx \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)}.
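
As a quick sanity check of the first heuristic, here is a toy numerical experiment (purely illustrative, and not part of the notes proper; the sample sizes and the use of NumPy are incidental choices of mine): summing {n} unit vectors with independent, uniformly random phases typically produces a sum of size comparable to {n^{1/2}} rather than {n}.

```python
# Toy illustration of square root cancellation: n unit complex numbers with
# independent, uniformly random phases typically sum to something of size
# about sqrt(n), not n.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 200
phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, n))
sums = np.abs(np.exp(1j * phases).sum(axis=1))

print(np.mean(sums))  # empirically about 0.89 * sqrt(n), i.e. close to 89 here
print(np.sqrt(n))     # the square root cancellation prediction: 100
```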

We have already seen one instance in which this heuristic can be made precise, namely when the phases of {z_j,f_j} are randomised by a random sign, so that Khintchine’s inequality (Lemma 4 from Notes 1) can be applied. There are other contexts in which a square function estimate

\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)} \lesssim \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)}

or a reverse square function estimate

\displaystyle \| \sum_{j=1}^n f_j \|_{L^p(X,\mu)} \lesssim \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p(X,\mu)}

(or both) are known or conjectured to hold. For instance, the useful Littlewood-Paley inequality implies (among other things) that for any {1 < p < \infty}, we have the reverse square function estimate

\displaystyle \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)} \lesssim_{p,d} \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)}, \ \ \ \ \ (1)

whenever the Fourier transforms {\hat f_j} of the {f_j} are supported on disjoint annuli {\{ \xi \in {\bf R}^d: 2^{k_j} \leq |\xi| < 2^{k_j+1} \}}, and we also have the matching square function estimate

\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)} \lesssim_{p,d} \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)}

if there is some separation between the annuli (for instance if the {k_j} are {2}-separated). We recall the proofs of these facts below the fold. In the {p=2} case, we of course have Pythagoras’ theorem, which tells us that if the {f_j} are all orthogonal elements of {L^2(X,\mu)}, then

\displaystyle \| \sum_{j=1}^n f_j \|_{L^2(X,\mu)} = (\sum_{j=1}^n \| f_j \|_{L^2(X,\mu)}^2)^{1/2} = \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^2(X,\mu)}.

In particular, this identity holds if the {f_j \in L^2({\bf R}^d)} have disjoint Fourier supports in the sense that their Fourier transforms {\hat f_j} are supported on disjoint sets. For {p=4}, the technique of bi-orthogonality can also give square function and reverse square function estimates in some cases, as we shall see below the fold.
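As a toy numerical check of the disjoint Fourier support case (again purely illustrative; the discrete circle model, the particular frequency bands, and the normalisation below are my own choices), one can generate a few functions with disjoint Fourier supports and confirm that the {L^2} norm of their sum matches the square function expression:

```python
# Toy check of Pythagoras/Plancherel: functions on a discrete circle whose
# Fourier coefficients live on disjoint frequency bands satisfy
# ||sum_j f_j||_2 = (sum_j ||f_j||_2^2)^{1/2}, up to rounding error.
import numpy as np

rng = np.random.default_rng(0)
N = 1024
bands = [range(1, 2), range(2, 4), range(4, 8), range(8, 16)]  # disjoint supports

fs = []
for band in bands:
    fhat = np.zeros(N, dtype=complex)
    idx = np.array(band)
    fhat[idx] = rng.standard_normal(len(idx)) + 1j * rng.standard_normal(len(idx))
    fs.append(np.fft.ifft(fhat))

def l2_norm(f):
    # discrete analogue of the L^2 norm (averaging over the N sample points)
    return np.sqrt(np.mean(np.abs(f) ** 2))

lhs = l2_norm(sum(fs))
rhs = np.sqrt(sum(l2_norm(f) ** 2 for f in fs))
print(lhs, rhs)  # the two values agree to machine precision
```
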
In recent years, it has begun to be realised that in the regime {p > 2}, variants of reverse square function estimates such as (1) are also useful, namely decoupling estimates such as

\displaystyle \| \sum_{j=1}^n f_j \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \|f_j\|_{L^p({\bf R}^d)}^2)^{1/2} \ \ \ \ \ (2)

(actually in practice we often permit small losses such as {n^\varepsilon} on the right-hand side). An estimate such as (2) is weaker than (1) when {p\geq 2} (and equivalent to it when {p=2}), as can be seen by starting with the triangle inequality in {L^{p/2}({\bf R}^d)} (which is applicable since {p/2 \geq 1})

\displaystyle \| \sum_{j=1}^n |f_j|^2 \|_{L^{p/2}({\bf R}^d)} \leq \sum_{j=1}^n \| |f_j|^2 \|_{L^{p/2}({\bf R}^d)},

and taking the square root of both sides to conclude that

\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)} \leq (\sum_{j=1}^n \|f_j\|_{L^p({\bf R}^d)}^2)^{1/2}. \ \ \ \ \ (3)
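
Unpacking the norms, this chain of inequalities can be written out in full (this is just the preceding argument made explicit) as

\displaystyle \| (\sum_{j=1}^n |f_j|^2)^{1/2} \|_{L^p({\bf R}^d)}^2 = \| \sum_{j=1}^n |f_j|^2 \|_{L^{p/2}({\bf R}^d)} \leq \sum_{j=1}^n \| |f_j|^2 \|_{L^{p/2}({\bf R}^d)} = \sum_{j=1}^n \| f_j \|_{L^p({\bf R}^d)}^2,

using the elementary identity {\| |g|^2 \|_{L^{p/2}({\bf R}^d)} = \| g \|_{L^p({\bf R}^d)}^2}. Concatenating (1) with (3) then recovers (2) in this regime.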

However, the flip side of this weakness is that (2) can be easier to prove. One key reason for this is the ability to iterate decoupling estimates such as (2), in a way that does not seem to be possible with reverse square function estimates such as (1). For instance, suppose that one has a decoupling inequality such as (2), and furthermore each {f_j} can be split further into components {f_j= \sum_{k=1}^m f_{j,k}} for which one has the decoupling inequalities

\displaystyle \| \sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2}.

Then by inserting these bounds back into (2) we see that we have the combined decoupling inequality

\displaystyle \| \sum_{j=1}^n\sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2}.
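
Written out with the intermediate step, the two applications of decoupling here are

\displaystyle \| \sum_{j=1}^n\sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)} \lesssim_{p,d} (\sum_{j=1}^n \| \sum_{k=1}^m f_{j,k} \|_{L^p({\bf R}^d)}^2)^{1/2} \lesssim_{p,d} (\sum_{j=1}^n \sum_{k=1}^m \|f_{j,k}\|_{L^p({\bf R}^d)}^2)^{1/2},

where the first inequality is just (2) (since {f_j = \sum_{k=1}^m f_{j,k}}), and the second comes from substituting the decoupling inequalities for the individual {f_j} into the {\ell^2} sum.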

This iterative feature of decoupling inequalities means that such inequalities work well with the method of induction on scales that we introduced in the previous set of notes.
In fact, decoupling estimates share many features in common with restriction theorems; in addition to induction on scales, there are several other techniques that first emerged in the restriction theory literature, such as wave packet decompositions, rescaling, and bilinear or multilinear reductions, that turned out to also be well suited to proving decoupling estimates. As with restriction, the curvature or transversality of the different Fourier supports of the {f_j} will be crucial in obtaining non-trivial estimates.
Strikingly, in many important model cases, the optimal decoupling inequalities (except possibly for epsilon losses in the exponents) are now known. These estimates have in turn had a number of important applications, such as establishing certain discrete analogues of the restriction conjecture, or providing the first proof of the main conjecture for the Vinogradov mean value theorem in analytic number theory.
These notes only serve as a brief introduction to decoupling. A systematic exploration of this topic can be found in this recent text of Demeter.

I was greatly saddened to learn that John Conway died yesterday from COVID-19, aged 82.

My own mathematical areas of expertise are somewhat far from Conway’s; I have, for instance, played with finite simple groups on occasion, but have not studied his work on moonshine and the monster group.  But I have certainly encountered his results every so often in surprising contexts; most recently, when working on the Collatz conjecture, I looked into Conway’s wonderfully preposterous FRACTRAN language, which can encode any Turing machine as an iteration of a Collatz-type map, showing in particular that there are generalisations of the Collatz conjecture that are undecidable in axiomatic frameworks such as ZFC.  [EDIT: my belief that the Navier-Stokes equations admit solutions that blow up in finite time is also highly influenced by the ability of Conway’s game of life to generate self-replicating “von Neumann machines”.]
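
(For readers who have not encountered FRACTRAN, here is a minimal sketch of an interpreter, just to illustrate the “iteration of a Collatz-type map” point above; the interpreter and the one-fraction “adder” program in it are my own toy illustrations rather than any of Conway’s actual programs, which are of course far more ingenious.)

```python
# A FRACTRAN program is a finite list of fractions.  The state is a single
# positive integer n, and each step replaces n by n*f for the first fraction f
# in the list such that n*f is again an integer; if no fraction works, halt.
from fractions import Fraction

def fractran(program, n, max_steps=10_000):
    """Iterate the FRACTRAN map starting from n, yielding each state reached."""
    for _ in range(max_steps):
        yield n
        for f in program:
            if (n * f).denominator == 1:   # first fraction that keeps n an integer
                n = int(n * f)
                break
        else:
            return                         # no fraction applies: the program halts

# The one-fraction program [3/2] sends 2^a * 3^b to 3^(a+b): each step trades one
# factor of 2 for one factor of 3, so the iteration performs addition in unary.
adder = [Fraction(3, 2)]
states = list(fractran(adder, 2**3 * 3**4))   # start with a = 3, b = 4
print(states[-1] == 3**7)                     # True: the iteration halts at 3^(3+4)
```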

I first met John in 1992, when I arrived at Princeton as an incoming graduate student; indeed, a talk he gave there, on “Extreme proofs” (proofs that are in some sense “extreme points” in the “convex hull” of all proofs of a given result), may well have been the first research-level talk I ever attended, and it set a high standard for all the subsequent talks I went to; Conway’s ability to tease out deep and interesting mathematics from seemingly frivolous questions made a particular impact on me.  (Some version of this talk eventually became this paper of Conway and Shipman many years later.)

Conway was fond of hanging out in the Princeton graduate lounge at the time of my studies there, often tinkering with some game or device, and often enlisting any nearby graduate students to assist him with some experiment or other.  I have a vague memory of being drafted into holding various lengths of cloth with several other students in order to compute some element of a braid group; on another occasion he challenged me to a board game (now known as “Phutball”) that he had recently invented with Elwyn Berlekamp and Richard Guy (who, by sad coincidence, both also passed away in the last 12 months).  I still remember being repeatedly obliterated in that game, which was a healthy and needed lesson in humility for me (and several of my fellow graduate students) at the time.  I also recall Conway spending several weeks trying to construct a strange periscope-type device to help him visualize four-dimensional objects by giving his eyes vertical parallax in addition to the usual horizontal parallax, although he later told me that the only thing the device made him experience was a headache.

About ten years ago we ran into each other at some large mathematics conference, and lacking any other plans, we had a pleasant dinner together at the conference hotel.  We talked a little bit of math, but mostly the conversation was philosophical.  I regrettably do not remember precisely what we discussed, but it was very refreshing and stimulating to have an extremely frank and heartfelt interaction with someone with Conway’s level of insight and intellectual clarity.

Conway was arguably an extreme point in the convex hull of all mathematicians.  He will very much be missed.

My student, Jaume de Dios, has set up a web site to collect upcoming mathematics seminars from any institution that are open online.  (For instance, it has a talk that I will be giving in an hour.)   There is a form for adding further talks to the site; please feel free to contribute (or make other suggestions) in order to make the seminar list more useful.

UPDATE: Here are some other lists of mathematical seminars online:

Perhaps further links of this type could be added in the comments.  It would perhaps make sense to somehow unify these lists into a single one that can be updated through crowdsourcing.

EDIT: See also IPAM’s advice page on running virtual seminars.
