
A bright high school student I am working with asked me the following interesting question, to which I did not immediately know the answer:

Question 1 Does there exist a smooth function ${f: {\bf R} \rightarrow {\bf R}}$ which is not real analytic, but such that all the differences ${x \mapsto f(x+h) - f(x)}$ are real analytic for every ${h \in {\bf R}}$?

The hypothesis implies that the Newton quotients ${\frac{f(x+h)-f(x)}{h}}$ are real analytic for every ${h \neq 0}$. If analyticity were preserved by smooth limits, this would imply that ${f'}$ is real analytic, which would make ${f}$ real analytic. However, we are not assuming any uniformity in the analyticity of the Newton quotients, so this simple argument does not seem to resolve the question immediately.

In the case that ${f}$ is periodic, say periodic with period ${1}$, one can answer the question in the negative by Fourier series. Perform a Fourier expansion ${f(x) = \sum_{n \in {\bf Z}} c_n e^{2\pi i nx}}$. If ${f}$ is not real analytic, then there is a sequence ${n_j}$ going to infinity such that ${|c_{n_j}| = e^{-o(n_j)}}$ as ${j \rightarrow \infty}$. From the Borel-Cantelli lemma one can then find a real number ${h}$ such that ${|e^{2\pi i h n_j} - 1| \gg \frac{1}{n^2_j}}$ (say) for infinitely many ${j}$, hence ${|(e^{2\pi i h n_j} - 1) c_{n_j}| \gg n_j^{-2} e^{-o(n_j)}}$ for infinitely many ${j}$. Thus the Fourier coefficients of ${x \mapsto f(x+h) - f(x)}$ do not decay exponentially, and hence this function is not analytic, a contradiction.
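The multiplier identity underlying this argument, namely that the ${n}$-th Fourier coefficient of ${x \mapsto f(x+h)-f(x)}$ is ${(e^{2\pi i nh}-1)c_n}$, is easy to check numerically. The following sketch is my own illustration using numpy's FFT (not part of the original discussion); it compares the coefficients of the sampled difference against the multiplier applied to the coefficients of ${f}$.

```python
import numpy as np

# My own sanity check: for a smooth 1-periodic f, the Fourier coefficients
# of x -> f(x+h) - f(x) should equal (e^{2 pi i n h} - 1) c_n.
N = 256
x = np.arange(N) / N
f = lambda t: np.exp(np.cos(2 * np.pi * t))  # a smooth 1-periodic test function
h = 0.3

c = np.fft.fft(f(x)) / N                # approximate Fourier coefficients c_n
g = f(x + h) - f(x)                     # the difference function, sampled
c_diff = np.fft.fft(g) / N              # its Fourier coefficients

n = np.fft.fftfreq(N, d=1.0 / N)        # integer frequency n for each FFT bin
multiplier = np.exp(2j * np.pi * n * h) - 1
err = np.max(np.abs(c_diff - multiplier * c))
print(err)  # tiny, up to aliasing of the exponentially decaying tail
```

The error is at machine-precision level because the coefficients of this test function decay super-exponentially, so aliasing is negligible.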

I was not able to quickly resolve the non-periodic case, but I thought perhaps this might be a good problem to crowdsource, so I invite readers to contribute their thoughts on this problem here. In the spirit of the polymath projects, I would encourage comments that contain thoughts that fall short of a complete solution, in the event that some other reader may be able to take the thought further.

In this previous blog post I noted the following easy application of Cauchy-Schwarz:

Lemma 1 (Van der Corput inequality) Let ${v,u_1,\dots,u_n}$ be unit vectors in a Hilbert space ${H}$. Then

$\displaystyle (\sum_{i=1}^n |\langle v, u_i \rangle_H|)^2 \leq \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle_H|.$

Proof: The left-hand side may be written as ${|\langle v, \sum_{i=1}^n \epsilon_i u_i \rangle_H|^2}$ for some unit complex numbers ${\epsilon_i}$. By Cauchy-Schwarz we have

$\displaystyle |\langle v, \sum_{i=1}^n \epsilon_i u_i \rangle_H|^2 \leq \langle \sum_{i=1}^n \epsilon_i u_i, \sum_{j=1}^n \epsilon_j u_j \rangle_H$

and the claim now follows from the triangle inequality. $\Box$

As a corollary, correlation becomes transitive in a statistical sense (even though it is not transitive in an absolute sense):

Corollary 2 (Statistical transitivity of correlation) Let ${v,u_1,\dots,u_n}$ be unit vectors in a Hilbert space ${H}$ such that ${|\langle v,u_i \rangle_H| \geq \delta}$ for all ${i=1,\dots,n}$ and some ${0 < \delta \leq 1}$. Then we have ${|\langle u_i, u_j \rangle_H| \geq \delta^2/2}$ for at least ${\delta^2 n^2/2}$ of the pairs ${(i,j) \in \{1,\dots,n\}^2}$.

Proof: From the lemma, we have

$\displaystyle \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle_H| \geq \delta^2 n^2.$

The contribution of those ${i,j}$ with ${|\langle u_i, u_j \rangle_H| < \delta^2/2}$ is at most ${\delta^2 n^2/2}$, and all the remaining summands are at most ${1}$, giving the claim. $\Box$
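Corollary 2 is concrete enough to test numerically. The sketch below is my own illustration with numpy (all names are mine): it constructs unit vectors with ${\langle v, u_i \rangle = \delta}$ exactly and counts the pairs that correlate at level ${\delta^2/2}$; the corollary guarantees at least ${\delta^2 n^2/2}$ of them.

```python
import numpy as np

# My own illustration of Corollary 2: build unit vectors u_i with
# <v, u_i> = delta and count pairs (i, j) with |<u_i, u_j>| >= delta^2 / 2.
rng = np.random.default_rng(0)
d, n, delta = 50, 200, 0.4

v = np.zeros(d)
v[0] = 1.0
U = np.empty((n, d))
for i in range(n):
    w = rng.standard_normal(d)
    w -= (w @ v) * v                    # component orthogonal to v
    w /= np.linalg.norm(w)
    U[i] = delta * v + np.sqrt(1 - delta**2) * w   # unit vector, <v, u_i> = delta

G = np.abs(U @ U.T)                     # |<u_i, u_j>| for all pairs
good_pairs = int((G >= delta**2 / 2).sum())
print(good_pairs, delta**2 * n**2 / 2)  # corollary: first >= second
```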

One drawback with this corollary is that it does not tell us which pairs ${u_i,u_j}$ correlate. In particular, if the vector ${v}$ also correlates with a separate collection ${w_1,\dots,w_n}$ of unit vectors, the pairs ${(i,j)}$ for which ${u_i,u_j}$ correlate may have no intersection whatsoever with the pairs in which ${w_i,w_j}$ correlate (except of course on the diagonal ${i=j}$ where they must correlate).

While working on an ongoing research project, I recently found that there is a very simple way to get around the latter problem by exploiting the tensor power trick:

Corollary 3 (Simultaneous statistical transitivity of correlation) Let ${v, u^k_i}$ be unit vectors in a Hilbert space for ${i=1,\dots,n}$ and ${k=1,\dots,K}$ such that ${|\langle v, u^k_i \rangle_H| \geq \delta_k}$ for all ${i=1,\dots,n}$, ${k=1,\dots,K}$ and some ${0 < \delta_k \leq 1}$. Then there are at least ${(\delta_1 \dots \delta_K)^2 n^2/2}$ pairs ${(i,j) \in \{1,\dots,n\}^2}$ such that ${\prod_{k=1}^K |\langle u^k_i, u^k_j \rangle_H| \geq (\delta_1 \dots \delta_K)^2/2}$. In particular (by Cauchy-Schwarz) we have ${|\langle u^k_i, u^k_j \rangle_H| \geq (\delta_1 \dots \delta_K)^2/2}$ for all ${k}$.

Proof: Apply Corollary 2 to the unit vectors ${v^{\otimes K}}$ and ${u^1_i \otimes \dots \otimes u^K_i}$, ${i=1,\dots,n}$ in the tensor power Hilbert space ${H^{\otimes K}}$. $\Box$
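The proof rests on the elementary tensor identity ${\langle a \otimes b, c \otimes d \rangle = \langle a, c\rangle \langle b, d \rangle}$, which can be realised concretely with np.kron; this small check is my own illustration, not from the post.

```python
import numpy as np

# My own check of the identity behind the tensor power trick:
# <a (x) b, c (x) d> = <a, c> <b, d> for real vectors, via np.kron.
rng = np.random.default_rng(1)
a, b, c, d = (rng.standard_normal(7) for _ in range(4))

lhs = np.kron(a, b) @ np.kron(c, d)
rhs = (a @ c) * (b @ d)
print(abs(lhs - rhs))  # ~ 0 up to rounding
```

In particular, tensoring unit vectors gives unit vectors, so Corollary 2 applies verbatim in ${H^{\otimes K}}$.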

It is surprisingly difficult to obtain even a qualitative version of the above conclusion (namely, if ${v}$ correlates with all of the ${u^k_i}$, then there are many pairs ${(i,j)}$ for which ${u^k_i}$ correlates with ${u^k_j}$ for all ${k}$ simultaneously) without some version of the tensor power trick. For instance, even the powerful Szemerédi regularity lemma, when applied to the set of pairs ${i,j}$ for which one has correlation of ${u^k_i}$, ${u^k_j}$ for a single ${k}$, does not seem to be sufficient. However, there is a reformulation of the argument using the Schur product theorem as a substitute for (or really, a disguised version of) the tensor power trick. For simplicity of notation let us just work with real Hilbert spaces to illustrate the argument. We start with the identity

$\displaystyle \langle u^k_i, u^k_j \rangle_H = \langle v, u^k_i \rangle_H \langle v, u^k_j \rangle_H + \langle \pi(u^k_i), \pi(u^k_j) \rangle_H$

where ${\pi}$ is the orthogonal projection to the complement of ${v}$. This implies a Gram matrix inequality

$\displaystyle (\langle u^k_i, u^k_j \rangle_H)_{1 \leq i,j \leq n} \succ (\langle v, u^k_i \rangle_H \langle v, u^k_j \rangle_H)_{1 \leq i,j \leq n} \succ 0$

for each ${k}$ where ${A \succ B}$ denotes the claim that ${A-B}$ is positive semi-definite. By the Schur product theorem, we conclude that

$\displaystyle (\prod_{k=1}^K \langle u^k_i, u^k_j \rangle_H)_{1 \leq i,j \leq n} \succ (\prod_{k=1}^K \langle v, u^k_i \rangle_H \langle v, u^k_j \rangle_H)_{1 \leq i,j \leq n}$

and hence for a suitable choice of signs ${\epsilon_1,\dots,\epsilon_n}$,

$\displaystyle \sum_{1 \leq i, j \leq n} \epsilon_i \epsilon_j \prod_{k=1}^K \langle u^k_i, u^k_j \rangle_H \geq \delta_1^2 \dots \delta_K^2 n^2.$

One now argues as in the proof of Corollary 2.
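The Schur product theorem invoked above (the entrywise product of positive semi-definite matrices is positive semi-definite) can be spot-checked numerically; the following is my own sketch on random Gram matrices.

```python
import numpy as np

# My own numerical check of the Schur product theorem: the entrywise
# (Hadamard) product of two PSD matrices is again PSD.
rng = np.random.default_rng(2)
X = rng.standard_normal((6, 10))
Y = rng.standard_normal((6, 10))
A = X @ X.T          # Gram matrices: PSD by construction
B = Y @ Y.T
min_eig = np.linalg.eigvalsh(A * B).min()   # A * B is the entrywise product
print(min_eig)  # nonnegative up to rounding
```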

A separate application of tensor powers to amplify correlations was also noted in this previous blog post giving a cheap version of the Kabatjanskii-Levenstein bound, but this seems to not be directly related to this current application.

Previous set of notes: Notes 1. Next set of notes: Notes 3.

In Exercise 5 (and Lemma 1) of 246A Notes 4 we already observed some links between complex analysis on the disk (or annulus) and Fourier series on the unit circle:

• (i) Functions ${f}$ that are holomorphic on a disk ${\{ |z| < R \}}$ are expressed by a convergent Fourier series (and also Taylor series) ${f(re^{i\theta}) = \sum_{n=0}^\infty r^n a_n e^{in\theta}}$ for ${0 \leq r < R}$ (so in particular ${a_n = \frac{1}{n!} f^{(n)}(0)}$), where

$\displaystyle \limsup_{n \rightarrow +\infty} |a_n|^{1/n} \leq \frac{1}{R}; \ \ \ \ \ (1)$

conversely, every infinite sequence ${(a_n)_{n=0}^\infty}$ of coefficients obeying (1) arises from such a function ${f}$.
• (ii) Functions ${f}$ that are holomorphic on an annulus ${\{ r_- < |z| < r_+ \}}$ are expressed by a convergent Fourier series (and also Laurent series) ${f(re^{i\theta}) = \sum_{n=-\infty}^\infty r^n a_n e^{in\theta}}$, where

$\displaystyle \limsup_{n \rightarrow +\infty} |a_n|^{1/n} \leq \frac{1}{r_+}; \quad \limsup_{n \rightarrow -\infty} |a_n|^{1/|n|} \leq r_-; \ \ \ \ \ (2)$

conversely, every doubly infinite sequence ${(a_n)_{n=-\infty}^\infty}$ of coefficients obeying (2) arises from such a function ${f}$.
• (iii) In the situation of (ii), there is a unique decomposition ${f = f_1 + f_2}$ where ${f_1}$ extends holomorphically to ${\{ z: |z| < r_+\}}$, and ${f_2}$ extends holomorphically to ${\{ z: |z| > r_-\}}$ and goes to zero at infinity; these components are given by the formulae

$\displaystyle f_1(z) = \sum_{n=0}^\infty a_n z^n = \frac{1}{2\pi i} \int_\gamma \frac{f(w)}{w-z}\ dw$

where ${\gamma}$ is any anticlockwise contour in the annulus ${\{ w: r_- < |w| < r_+\}}$ enclosing ${z}$, and

$\displaystyle f_2(z) = \sum_{n=-\infty}^{-1} a_n z^n = - \frac{1}{2\pi i} \int_\gamma \frac{f(w)}{w-z}\ dw$

where ${\gamma}$ is any anticlockwise contour in the annulus ${\{ w: r_- < |w| < r_+\}}$ enclosing ${0}$ but not ${z}$.

This connection lets us interpret various facts about Fourier series through the lens of complex analysis, at least for some special classes of Fourier series. For instance, the Fourier inversion formula ${a_n = \frac{1}{2\pi} \int_0^{2\pi} f(e^{i\theta}) e^{-in\theta}\ d\theta}$ becomes the Cauchy-type formula for the Laurent or Taylor coefficients of ${f}$, in the event that the coefficients are doubly infinite and obey (2) for some ${r_- < 1 < r_+}$, or singly infinite and obey (1) for some ${R > 1}$.
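As a concrete illustration of this dictionary (my own numerical sketch, not part of the notes): sampling a function holomorphic on ${\{|z| < 2\}}$ on the unit circle and applying the FFT recovers its Taylor coefficients, which obey the decay (1).

```python
import numpy as np

# My own illustration: for f holomorphic on {|z| < 2}, the Fourier
# inversion formula on the unit circle is the Cauchy formula for the
# Taylor coefficients, computable by the FFT.
N = 64
theta = 2 * np.pi * np.arange(N) / N
f = lambda z: 1.0 / (1.0 - z / 2.0)     # holomorphic for |z| < 2, a_n = 2^{-n}
a = np.fft.fft(f(np.exp(1j * theta))) / N   # a_n for n = 0..N-1, up to aliasing

err = np.max(np.abs(a[:10] - 0.5 ** np.arange(10)))
print(err)  # tiny: the tail 2^{-n} aliases in at size about 2^{-N}
```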

It turns out that there are similar links between complex analysis on a half-plane (or strip) and Fourier integrals on the real line, which we will explore in these notes.

We first fix a normalisation for the Fourier transform. If ${f \in L^1({\bf R})}$ is an absolutely integrable function on the real line, we define its Fourier transform ${\hat f: {\bf R} \rightarrow {\bf C}}$ by the formula

$\displaystyle \hat f(\xi) := \int_{\bf R} f(x) e^{-2\pi i x \xi}\ dx. \ \ \ \ \ (3)$

From the dominated convergence theorem ${\hat f}$ will be a bounded continuous function; from the Riemann-Lebesgue lemma it also decays to zero as ${\xi \rightarrow \pm \infty}$. My choice to place the ${2\pi}$ in the exponent is a personal preference (it is slightly more convenient for some harmonic analysis formulae such as the identities (4), (5), (6) below), though in the complex analysis and PDE literature there are also some slight advantages in omitting this factor. In any event it is not difficult to adapt the discussion in these notes to other choices of normalisation. It is of interest to extend the Fourier transform beyond the ${L^1({\bf R})}$ class into other function spaces, such as ${L^2({\bf R})}$ or the space of tempered distributions, but we will not pursue this direction here; see for instance these lecture notes of mine for a treatment.

Exercise 1 (Fourier transform of Gaussian) If ${a}$ is a complex number with ${\mathrm{Re} a>0}$ and ${f}$ is the Gaussian function ${f(x) := e^{-\pi a x^2}}$, show that the Fourier transform ${\hat f}$ is given by the Gaussian ${\hat f(\xi) = a^{-1/2} e^{-\pi \xi^2/a}}$, where we use the standard branch for ${a^{-1/2}}$.
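One can sanity-check the exercise numerically; the following is my own sketch, approximating the integral (3) by a plain Riemann sum and using numpy's principal branch for the complex power.

```python
import numpy as np

# My own numerical check of Exercise 1 for complex a with Re(a) > 0:
# approximate hat f(xi) for f(x) = e^{-pi a x^2} and compare with
# a^{-1/2} e^{-pi xi^2 / a} (principal branch).
a = 1.0 + 0.5j
xi = 0.7
x = np.linspace(-30.0, 30.0, 60001)
dx = x[1] - x[0]

lhs = np.sum(np.exp(-np.pi * a * x**2 - 2j * np.pi * x * xi)) * dx
rhs = a ** (-0.5) * np.exp(-np.pi * xi**2 / a)
err = abs(lhs - rhs)
print(err)
```

The Riemann sum is spectrally accurate here because the integrand is analytic and decays like a Gaussian.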

The Fourier transform has many remarkable properties. On the one hand, as long as the function ${f}$ is sufficiently “reasonable”, the Fourier transform enjoys a number of very useful identities, such as the Fourier inversion formula

$\displaystyle f(x) = \int_{\bf R} \hat f(\xi) e^{2\pi i x \xi} d\xi, \ \ \ \ \ (4)$

the Plancherel identity

$\displaystyle \int_{\bf R} |f(x)|^2\ dx = \int_{\bf R} |\hat f(\xi)|^2\ d\xi, \ \ \ \ \ (5)$

and the Poisson summation formula

$\displaystyle \sum_{n \in {\bf Z}} f(n) = \sum_{k \in {\bf Z}} \hat f(k). \ \ \ \ \ (6)$
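The Poisson summation formula (6) can be verified to high accuracy for the Gaussian of Exercise 1, since both sides then converge extremely fast; this check is my own sketch.

```python
import numpy as np

# My own check of Poisson summation (6) for f(x) = e^{-pi a x^2}, a = 2,
# using hat f(xi) = a^{-1/2} e^{-pi xi^2 / a} from Exercise 1.
a = 2.0
n = np.arange(-50, 51)
lhs = np.sum(np.exp(-np.pi * a * n**2))                # sum_n f(n)
rhs = np.sum(a ** (-0.5) * np.exp(-np.pi * n**2 / a))  # sum_k hat f(k)
err = abs(lhs - rhs)
print(err)
```

This is of course just the classical Jacobi theta function identity in disguise.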

On the other hand, the Fourier transform also intertwines various qualitative properties of a function ${f}$ with “dual” qualitative properties of its Fourier transform ${\hat f}$; in particular, “decay” properties of ${f}$ tend to be associated with “regularity” properties of ${\hat f}$, and vice versa. For instance, the Fourier transforms of rapidly decreasing functions tend to be smooth. There are complex analysis counterparts of this Fourier dictionary, in which “decay” properties are described in terms of exponentially decaying pointwise bounds, and “regularity” properties are expressed using holomorphicity on various strips, half-planes, or the entire complex plane. The following exercise gives some examples of this:

Exercise 2 (Decay of ${f}$ implies regularity of ${\hat f}$) Let ${f \in L^1({\bf R})}$ be an absolutely integrable function.
• (i) If ${f}$ has super-exponential decay in the sense that ${f(x) \lesssim_{f,M} e^{-M|x|}}$ for all ${x \in {\bf R}}$ and ${M>0}$ (that is to say one has ${|f(x)| \leq C_{f,M} e^{-M|x|}}$ for some finite quantity ${C_{f,M}}$ depending only on ${f,M}$), then ${\hat f}$ extends uniquely to an entire function ${\hat f : {\bf C} \rightarrow {\bf C}}$. Furthermore, this function continues to be defined by (3).
• (ii) If ${f}$ is supported on a compact interval ${[a,b]}$ then the entire function ${\hat f}$ from (i) obeys the bounds ${\hat f(\xi) \lesssim_f \max( e^{2\pi a \mathrm{Im} \xi}, e^{2\pi b \mathrm{Im} \xi} )}$ for ${\xi \in {\bf C}}$. In particular, if ${f}$ is supported in ${[-M,M]}$ then ${\hat f(\xi) \lesssim_f e^{2\pi M |\mathrm{Im}(\xi)|}}$.
• (iii) If ${f}$ obeys the bound ${f(x) \lesssim_{f,a} e^{-2\pi a|x|}}$ for all ${x \in {\bf R}}$ and some ${a>0}$, then ${\hat f}$ extends uniquely to a holomorphic function ${\hat f}$ on the horizontal strip ${\{ \xi: |\mathrm{Im} \xi| < a \}}$, and obeys the bound ${\hat f(\xi) \lesssim_{f,a} \frac{1}{a - |\mathrm{Im}(\xi)|}}$ in this strip. Furthermore, this function continues to be defined by (3).
• (iv) If ${f}$ is supported on ${[0,+\infty)}$ (resp. ${(-\infty,0]}$), then there is a unique continuous extension of ${\hat f}$ to the lower half-plane ${\{ \xi: \mathrm{Im} \xi \leq 0\}}$ (resp. the upper half-plane ${\{ \xi: \mathrm{Im} \xi \geq 0 \}}$) which is holomorphic in the interior of this half-plane, and such that ${\hat f(\xi) \rightarrow 0}$ uniformly as ${\mathrm{Im} \xi \rightarrow -\infty}$ (resp. ${\mathrm{Im} \xi \rightarrow +\infty}$). Furthermore, this function continues to be defined by (3).
Hint: to establish holomorphicity in each of these cases, use Morera’s theorem and the Fubini-Tonelli theorem. For uniqueness, use analytic continuation, or (for part (iv)) the Cauchy integral formula.

Later in these notes we will give a partial converse to part (ii) of this exercise, known as the Paley-Wiener theorem; there are also partial converses to the other parts of this exercise.

From (3) we observe the following intertwining property between multiplication by an exponential and complex translation: if ${\xi_0}$ is a complex number and ${f: {\bf R} \rightarrow {\bf C}}$ is an absolutely integrable function such that the modulated function ${f_{\xi_0}(x) := e^{2\pi i \xi_0 x} f(x)}$ is also absolutely integrable, then we have the identity

$\displaystyle \widehat{f_{\xi_0}}(\xi) = \hat f(\xi - \xi_0) \ \ \ \ \ (7)$

whenever ${\xi}$ is a complex number such that at least one of the two sides of the equation in (7) is well defined. Thus, multiplication of a function by an exponential weight corresponds (formally, at least) to translation of its Fourier transform. By using contour shifting, we will also obtain a dual relationship: under suitable holomorphicity and decay conditions on ${f}$, translation by a complex shift will correspond to multiplication of the Fourier transform by an exponential weight. It turns out to be possible to exploit this property to derive many Fourier-analytic identities, such as the inversion formula (4) and the Poisson summation formula (6), which we do later in these notes. (The Plancherel theorem can also be established by complex analytic methods, but this requires a little more effort; see Exercise 8.)
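For a Gaussian, both sides of (7) are explicit even for complex shifts, which makes the identity easy to test; the following numerical sketch is my own, using a plain Riemann sum for the quadrature.

```python
import numpy as np

# My own check of identity (7) for f(x) = e^{-pi x^2}, whose transform
# hat f(xi) = e^{-pi xi^2} extends entirely, with a complex shift xi0.
xi0 = 0.3 + 0.2j
xi = -0.4
x = np.linspace(-30.0, 30.0, 60001)
dx = x[1] - x[0]

f_mod = np.exp(2j * np.pi * xi0 * x) * np.exp(-np.pi * x**2)
lhs = np.sum(f_mod * np.exp(-2j * np.pi * x * xi)) * dx   # hat{f_mod}(xi)
rhs = np.exp(-np.pi * (xi - xi0) ** 2)                    # hat f(xi - xi0)
err = abs(lhs - rhs)
print(err)
```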

The material in these notes is loosely adapted from Chapter 4 of Stein-Shakarchi’s “Complex Analysis”.

Laura Cladek and I have just uploaded to the arXiv our paper “Additive energy of regular measures in one and higher dimensions, and the fractal uncertainty principle”. This paper concerns a continuous version of the notion of additive energy. Given a finite measure ${\mu}$ on ${{\bf R}^d}$ and a scale ${r>0}$, define the energy ${\mathrm{E}(\mu,r)}$ at scale ${r}$ to be the quantity

$\displaystyle \mathrm{E}(\mu,r) := \mu^4\left( \{ (x_1,x_2,x_3,x_4) \in ({\bf R}^d)^4: |x_1+x_2-x_3-x_4| \leq r \}\right) \ \ \ \ \ (1)$

where ${\mu^4}$ is the product measure on ${({\bf R}^d)^4}$ formed from four copies of the measure ${\mu}$ on ${{\bf R}^d}$. We will be interested in Cantor-type measures ${\mu}$, supported on a compact set ${X \subset B(0,1)}$ and obeying the Ahlfors-David regularity condition

$\displaystyle \mu(B(x,r)) \leq C r^\delta$

for all balls ${B(x,r)}$ and some constants ${C, \delta > 0}$, as well as the matching lower bound

$\displaystyle \mu(B(x,r)) \geq C^{-1} r^\delta$

when ${x \in X}$ whenever ${0 < r < 1}$. One should think of ${X}$ as a ${\delta}$-dimensional fractal set, and ${\mu}$ as some vaguely self-similar measure on this set.

Note that once one fixes ${x_1,x_2,x_3}$, the variable ${x_4}$ in (1) is constrained to a ball of radius ${r}$, hence we obtain the trivial upper bound

$\displaystyle \mathrm{E}(\mu,r) \leq C^4 r^\delta. \ \ \ \ \ (2)$
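For intuition, one can estimate this energy by Monte Carlo in the simplest case ${d = \delta = 1}$ with ${\mu}$ Lebesgue measure on ${[0,1]}$, where the trivial bound is sharp; this sketch is my own illustration, not from the paper. (A short computation with triangular densities shows ${\mathrm{E}(\mu,r)/r}$ should approach ${4/3}$ as ${r \rightarrow 0}$.)

```python
import numpy as np

# My own Monte Carlo estimate of E(mu, r) as in (1), with mu Lebesgue
# measure on [0,1] (so d = 1, delta = 1); the trivial bound (2) is sharp
# here and E(mu, r) / r tends to 4/3 for small r.
rng = np.random.default_rng(3)
N = 1_000_000
x1, x2, x3, x4 = rng.random((4, N))
for r in (0.2, 0.1, 0.05):
    E = np.mean(np.abs(x1 + x2 - x3 - x4) <= r)
    print(r, E, E / r)   # E / r hovers near 4/3 for small r
```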

If the set ${X}$ contains a lot of “additive structure”, one can expect this bound to be basically sharp; for instance, if ${\delta}$ is an integer, ${X}$ is a ${\delta}$-dimensional unit disk, and ${\mu}$ is Lebesgue measure on this disk, one can verify that ${\mathrm{E}(\mu,r) \sim r^\delta}$ (where we allow implied constants to depend on ${d,\delta}$). However, we show that if the dimension is non-integer, then one obtains a gain:

Theorem 1 If ${0 < \delta < d}$ is not an integer, and ${X, \mu}$ are as above, then

$\displaystyle \mathrm{E}(\mu,r) \lesssim_{C,\delta,d} r^{\delta+\beta}$

for some ${\beta>0}$ depending only on ${C,\delta,d}$.

Informally, this asserts that Ahlfors-David regular fractal sets of non-integer dimension cannot behave as if they are approximately closed under addition. In fact the gain ${\beta}$ we obtain is quasipolynomial in the regularity constant ${C}$:

$\displaystyle \beta = \exp\left( - O_{\delta,d}( 1 + \log^{O_{\delta,d}(1)}(C) ) \right).$

(We also obtain a localised version in which the regularity condition is only required to hold at scales between ${r}$ and ${1}$.) Such a result was previously obtained (with more explicit values of the ${O_{\delta,d}()}$ implied constants) in the one-dimensional case ${d=1}$ by Dyatlov and Zahl; but in higher dimensions there do not appear to have been any results for this general class of sets ${X}$ and measures ${\mu}$. In the paper of Dyatlov and Zahl it is noted that some dependence on ${C}$ is necessary; in particular, ${\beta}$ cannot be much better than ${1/\log C}$. This reflects the fact that there are fractal sets that do behave reasonably well with respect to addition (basically because they are built out of long arithmetic progressions at many scales); however, such sets are not very Ahlfors-David regular. Among other things, this result readily implies a dimension expansion result

$\displaystyle \mathrm{dim}( f( X, X) ) \geq \delta + \beta$

for any non-degenerate smooth map ${f: {\bf R}^d \times {\bf R}^d \rightarrow {\bf R}^d}$, including the sum map ${f(x,y) := x+y}$ and (in one dimension) the product map ${f(x,y) := x \cdot y}$, where the non-degeneracy condition required is that the gradients ${D_x f(x,y), D_y f(x,y): {\bf R}^d \rightarrow {\bf R}^d}$ are invertible for every ${x,y}$. We refer to the paper for the formal statement.

Our higher-dimensional argument shares many features in common with that of Dyatlov and Zahl, notably a reliance on the modern tools of additive combinatorics (and specifically the Bogolyubov-Ruzsa lemma of Sanders). However, in one dimension we were also able to find a completely elementary argument, avoiding any particularly advanced additive combinatorics and instead primarily exploiting the order-theoretic properties of the real line, that gave a superior value of ${\beta}$, namely

$\displaystyle \beta := c \min(\delta,1-\delta) C^{-25}.$

One of the main reasons for obtaining such improved energy bounds is that they imply a fractal uncertainty principle in some regimes. We focus attention on the model case of obtaining such an uncertainty principle for the semiclassical Fourier transform

$\displaystyle {\mathcal F}_h f(\xi) := (2\pi h)^{-d/2} \int_{{\bf R}^d} e^{-i x \cdot \xi/h} f(x)\ dx$

where ${h>0}$ is a small parameter. If ${X, \mu, \delta}$ are as above, and ${X_h}$ denotes the ${h}$-neighbourhood of ${X}$, then from the Hausdorff-Young inequality one obtains the trivial bound

$\displaystyle \| 1_{X_h} {\mathcal F}_h 1_{X_h} \|_{L^2({\bf R}^d) \rightarrow L^2({\bf R}^d)} \lesssim_{C,d} h^{\max\left(\frac{d}{2}-\delta,0\right)}.$

(There are also variants involving pairs of sets ${X_h, Y_h}$, but for simplicity we focus on the uncertainty principle for a single set ${X_h}$.) The fractal uncertainty principle, when it applies, asserts that one can improve this to

$\displaystyle \| 1_{X_h} {\mathcal F}_h 1_{X_h} \|_{L^2({\bf R}^d) \rightarrow L^2({\bf R}^d)} \lesssim_{C,d} h^{\max\left(\frac{d}{2}-\delta,0\right) + \beta}$

for some ${\beta>0}$; informally, this asserts that a function and its Fourier transform cannot simultaneously be concentrated in the set ${X_h}$ when ${\delta \leq \frac{d}{2}}$, and that a function cannot be concentrated on ${X_h}$ and have its Fourier transform be of maximum size on ${X_h}$ when ${\delta \geq \frac{d}{2}}$. A modification of the disk example mentioned previously shows that such a fractal uncertainty principle cannot hold if ${\delta}$ is an integer. However, in one dimension, the fractal uncertainty principle is known to hold for all ${0 < \delta < 1}$. The above-mentioned results of Dyatlov and Zahl were able to establish this for ${\delta}$ close to ${1/2}$, and the remaining cases ${1/2 < \delta < 1}$ and ${0 < \delta < 1/2}$ were later established by Bourgain-Dyatlov and Dyatlov-Jin respectively. Such uncertainty principles have applications to hyperbolic dynamics, in particular in establishing spectral gaps for certain Selberg zeta functions.

It remains a largely open problem to establish a fractal uncertainty principle in higher dimensions. Our results allow one to establish such a principle when the dimension ${\delta}$ is close to ${d/2}$, and ${d}$ is assumed to be odd (to make ${d/2}$ a non-integer). There is also work of Han and Schlag that obtains such a principle when one of the copies of ${X_h}$ is assumed to have a product structure. We hope to obtain further higher-dimensional fractal uncertainty principles in subsequent work.

We now sketch how our main theorem is proved. In both one dimension and higher dimensions, the main point is to get a preliminary improvement

$\displaystyle \mathrm{E}(\mu,r_0) \leq \varepsilon r_0^\delta \ \ \ \ \ (3)$

over the trivial bound (2) for any small ${\varepsilon>0}$, provided ${r_0}$ is sufficiently small depending on ${\varepsilon, \delta, d}$; one can then iterate this bound by a fairly standard “induction on scales” argument (which roughly speaking can be used to show that energies ${\mathrm{E}(\mu,r)}$ behave somewhat multiplicatively in the scale parameter ${r}$) to propagate the bound to a power gain at smaller scales. We found that a particularly clean way to run the induction on scales was via use of the Gowers uniformity norm ${U^2}$, and particularly via a clean Fubini-type inequality

$\displaystyle \| f \|_{U^2(V \times V')} \leq \|f\|_{U^2(V; U^2(V'))}$

(ultimately proven using the Gowers-Cauchy-Schwarz inequality) that allows one to “decouple” coarse and fine scale aspects of the Gowers norms (and hence of additive energies).
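The ${U^2}$ norm that drives this induction has a clean Fourier-analytic description on a finite cyclic group, ${\|f\|_{U^2}^4 = \sum_\xi |\hat f(\xi)|^4}$, which can be checked directly; the sketch below is my own finite toy illustration, not from the paper.

```python
import numpy as np

# My own check, on Z/NZ with normalised counting measure, that the Gowers
# U^2 norm ||f||_{U^2}^4 = E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2))
# f(x+h1+h2) equals sum_xi |hat f(xi)|^4.
rng = np.random.default_rng(4)
N = 16
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

u4 = 0.0
for x in range(N):
    for h1 in range(N):
        for h2 in range(N):
            u4 += (f[x] * np.conj(f[(x + h1) % N])
                   * np.conj(f[(x + h2) % N]) * f[(x + h1 + h2) % N])
u4 = (u4 / N**3).real

fhat = np.fft.fft(f) / N                 # normalised Fourier coefficients
err = abs(u4 - np.sum(np.abs(fhat) ** 4))
print(err)
```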

It remains to obtain the preliminary improvement. In one dimension this is done by identifying some “left edges” of the set ${X}$ that supports ${\mu}$: intervals ${[x, x+K^{-n}]}$ that intersect ${X}$, but such that a large interval ${[x-K^{-n+1},x]}$ just to the left of this interval is disjoint from ${X}$. Here ${K}$ is a large constant and ${n}$ is a scale parameter. It is not difficult to show (using in particular the Archimedean nature of the real line) that if one has the Ahlfors-David regularity condition for some ${0 < \delta < 1}$ then left edges exist in abundance at every scale; for instance most points of ${X}$ would be expected to lie in quite a few of these left edges (much as most elements of, say, the ternary Cantor set ${\{ \sum_{n=1}^\infty \varepsilon_n 3^{-n}: \varepsilon_n \in \{0,2\} \}}$ would be expected to contain a lot of ${0}$s in their base ${3}$ expansion). In particular, most pairs ${(x_1,x_2) \in X \times X}$ would be expected to lie in a pair ${[x,x+K^{-n}] \times [y,y+K^{-n}]}$ of left edges of equal length. The key point is then that if ${(x_1,x_2) \in X \times X}$ lies in such a pair with ${K^{-n} \geq r}$, then there are relatively few pairs ${(x_3,x_4) \in X \times X}$ at distance ${O(K^{-n+1})}$ from ${(x_1,x_2)}$ for which one has the relation ${x_1+x_2 = x_3+x_4 + O(r)}$, because ${x_3,x_4}$ will both tend to be to the right of ${x_1,x_2}$ respectively. This causes a decrement in the energy at scale ${K^{-n+1}}$, and by carefully combining all these energy decrements one can eventually cobble together the energy bound (3).

We were not able to make this argument work in higher dimensions (though perhaps the cases ${0 < \delta < 1}$ and ${d-1 < \delta < d}$ might not be completely out of reach from these methods). Instead we return to additive combinatorics methods. If the claim (3) failed, then by applying the Balog-Szemerédi-Gowers theorem we can show that the set ${X}$ has high correlation with an approximate group ${H}$, and hence (by the aforementioned Bogolyubov-Ruzsa type theorem of Sanders, which is the main source of the quasipolynomial bounds in our final exponent) ${X}$ will exhibit an approximate “symmetry” along some non-trivial arithmetic progression of some spacing length ${r}$ and some diameter ${R \gg r}$. The ${r}$-neighbourhood ${X_r}$ of ${X}$ will then resemble the union of parallel “cylinders” of dimensions ${r \times R}$. If we focus on a typical ${R}$-ball of ${X_r}$, the set now resembles a Cartesian product of an interval of length ${R}$ with a subset of a ${d-1}$-dimensional hyperplane, which behaves approximately like an Ahlfors-David regular set of dimension ${\delta-1}$ (this already lets us conclude a contradiction if ${\delta<1}$). Note that if the original dimension ${\delta}$ was non-integer then this new dimension ${\delta-1}$ will also be non-integer. It is then possible to contradict the failure of (3) by appealing to a suitable induction hypothesis at one lower dimension.

Asgar Jamneshan and I have just uploaded to the arXiv our paper “Foundational aspects of uncountable measure theory: Gelfand duality, Riesz representation, canonical models, and canonical disintegration“. This paper arose from our longer-term project to systematically develop “uncountable” ergodic theory – ergodic theory in which the groups acting are not required to be countable, the probability spaces one acts on are not required to be standard Borel, or Polish, and the compact groups that arise in the structural theory (e.g., the theory of group extensions) are not required to be separable. One of the motivations of doing this is to allow ergodic theory results to be applied to ultraproducts of finite dynamical systems, which can then hopefully be transferred to establish combinatorial results with good uniformity properties. An instance of this is the uncountable Mackey-Zimmer theorem, discussed in this companion blog post.

In the course of this project, we ran into the obstacle that many foundational results, such as the Riesz representation theorem, often require one or more of these countability hypotheses when encountered in textbooks. Other technical issues also arise in the uncountable setting, such as the need to distinguish the Borel ${\sigma}$-algebra from the (two different types of) Baire ${\sigma}$-algebra. We therefore needed to spend some time reviewing and synthesizing the known literature on some foundational results of “uncountable” measure theory, which led to this paper. As such, most of the results of this paper are already in the literature, either explicitly or implicitly, in one form or another (with perhaps the exception of the canonical disintegration, which we discuss below); we view the main contribution of this paper as presenting the results in a coherent and unified fashion. In particular we found that the language of category theory was invaluable in clarifying and organizing all the different results. In subsequent work we (and some other authors) will use the results in this paper for various applications in uncountable ergodic theory.

The foundational results covered in this paper can be divided into a number of subtopics (Gelfand duality, Baire ${\sigma}$-algebras and Riesz representation, canonical models, and canonical disintegration), which we discuss further below the fold.

I have uploaded to the arXiv my paper “Exploring the toolkit of Jean Bourgain”. This is one of a collection of papers to be published in the Bulletin of the American Mathematical Society describing aspects of the work of Jean Bourgain; other contributors to this collection include Keith Ball, Ciprian Demeter, and Carlos Kenig. Because the other contributors will be covering specific areas of Jean’s work in some detail, I decided to take a non-overlapping tack, and focus instead on some basic tools of Jean that he frequently used across many of the fields he contributed to. Jean had a surprising number of these “basic tools” that he wielded with great dexterity, and in this paper I focus on just a few of them:

• Reducing qualitative analysis results (e.g., convergence theorems or dimension bounds) to quantitative analysis estimates (e.g., variational inequalities or maximal function estimates).
• Using dyadic pigeonholing to locate good scales to work in or to apply truncations.
• Using random translations to amplify small sets (low density) into large sets (positive density).
• Combining large deviation inequalities with metric entropy bounds to control suprema of various random processes.

Each of these techniques is individually not too difficult to explain, and each was certainly employed on occasion by various mathematicians prior to Bourgain’s work; but Jean had internalized them to the point where he would instinctively use them as soon as they became relevant to a given problem at hand. I illustrate this at the end of the paper with an exposition of one particular result of Jean, on the Erdős similarity problem, in which his main result (that any sum ${S = S_1+S_2+S_3}$ of three infinite sets of reals has the property that there exists a positive measure set ${E}$ that does not contain any homothetic copy ${x+tS}$ of ${S}$) is basically proven by a sequential application of these tools (except for dyadic pigeonholing, which turns out not to be needed here).

I had initially intended to also cover some other basic tools in Jean’s toolkit, such as the uncertainty principle and the use of probabilistic decoupling, but was having trouble keeping the paper coherent with such a broad focus (certainly I could not identify a single paper of Jean’s that employed all of these tools at once). I hope though that the examples given in the paper give some reasonable impression of Jean’s research style.

I’ve just uploaded to the arXiv my paper “The Ionescu-Wainger multiplier theorem and the adeles”. This paper revisits a useful multiplier theorem of Ionescu and Wainger on “major arc” Fourier multiplier operators on the integers ${{\bf Z}}$ (or lattices ${{\bf Z}^d}$), and strengthens the bounds while also interpreting it from the viewpoint of the adelic integers ${{\bf A}_{\bf Z}}$ (which were also used in my recent paper with Krause and Mirek).

For simplicity let us just work in one dimension. Any smooth function ${m: {\bf R}/{\bf Z} \rightarrow {\bf C}}$ then defines a discrete Fourier multiplier operator ${T_m: \ell^p({\bf Z}) \rightarrow \ell^p({\bf Z})}$ for any ${1 \leq p \leq \infty}$ by the formula

$\displaystyle {\mathcal F}_{\bf Z} T_m f(\xi) := m(\xi) {\mathcal F}_{\bf Z} f(\xi)$

where ${{\mathcal F}_{\bf Z} f(\xi) := \sum_{n \in {\bf Z}} f(n) e(n \xi)}$ is the Fourier transform on ${{\bf Z}}$; similarly, any test function ${m: {\bf R} \rightarrow {\bf C}}$ defines a continuous Fourier multiplier operator ${T_m: L^p({\bf R}) \rightarrow L^p({\bf R})}$ by the formula

$\displaystyle {\mathcal F}_{\bf R} T_m f(\xi) := m(\xi) {\mathcal F}_{\bf R} f(\xi)$

where ${{\mathcal F}_{\bf R} f(\xi) := \int_{\bf R} f(x) e(x \xi)\ dx}$. In both cases we refer to ${m}$ as the symbol of the multiplier operator ${T_m}$.
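As a quick sanity check of these definitions (a toy Python illustration of my own, not taken from the paper), one can verify that the symbol ${m(\xi) = e(k\xi)}$ acts on sequences as translation by ${k}$; the inversion integral is approximated here by a Riemann sum, which is exact because every function in sight is a trigonometric polynomial of low degree.

```python
import cmath

def e(x):
    # e(x) = exp(2*pi*i*x), the standard character notation used above
    return cmath.exp(2j * cmath.pi * x)

def fourier_Z(f, xi):
    # F_Z f(xi) = sum_n f(n) e(n xi), for a finitely supported f given as a dict
    return sum(c * e(n * xi) for n, c in f.items())

def T(m, f, n, M=256):
    # T_m f(n) = int_0^1 m(xi) F_Z f(xi) e(-n xi) d xi, via an M-point
    # Riemann sum (exact here: all factors are trig polynomials of degree << M)
    return sum(m(j / M) * fourier_Z(f, j / M) * e(-n * j / M) for j in range(M)) / M

f = {0: 1.0, 1: 2.0, 2: -1.0}       # a finitely supported sequence on Z
k = 3
m = lambda xi: e(k * xi)            # the symbol e(k xi)

# the symbol e(k xi) should act as the shift f(n) -> f(n - k)
for n in range(-2, 8):
    assert abs(T(m, f, n) - f.get(n - k, 0.0)) < 1e-9
print("e(k xi) acts as translation by k")
```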

We will be interested in discrete Fourier multiplier operators whose symbols are supported on a finite union of arcs. One way to construct such operators is by “folding” continuous Fourier multiplier operators into various target frequencies. To make this folding operation precise, given any continuous Fourier multiplier operator ${T_m: L^p({\bf R}) \rightarrow L^p({\bf R})}$ and any frequency shift ${\alpha \in {\bf R}/{\bf Z}}$, we define the discrete Fourier multiplier operator ${T_{m;\alpha}: \ell^p({\bf Z}) \rightarrow \ell^p({\bf Z})}$ by the formula

$\displaystyle {\mathcal F}_{\bf Z} T_{m;\alpha} f(\xi) := \sum_{\theta \in {\bf R}: \xi = \alpha + \theta} m(\theta) {\mathcal F}_{\bf Z} f(\xi)$

or equivalently

$\displaystyle T_{m;\alpha} f(n) = \int_{\bf R} m(\theta) {\mathcal F}_{\bf Z} f(\alpha+\theta) e( n(\alpha+\theta) )\ d\theta.$

More generally, given any finite set ${\Sigma \subset {\bf R}/{\bf Z}}$, we can form a multifrequency projection operator ${T_{m;\Sigma}}$ on ${\ell^p({\bf Z})}$ by the formula

$\displaystyle T_{m;\Sigma} := \sum_{\alpha \in \Sigma} T_{m;\alpha}$

thus

$\displaystyle T_{m;\Sigma} f(n) = \sum_{\alpha \in \Sigma} \int_{\bf R} m(\theta) {\mathcal F}_{\bf Z} f(\alpha+\theta) e( n(\alpha+\theta) )\ d\theta.$

This construction gives discrete Fourier multiplier operators whose symbol can be localised to a finite union of arcs. For instance, if ${m: {\bf R} \rightarrow {\bf C}}$ is supported on ${[-\varepsilon,\varepsilon]}$, then ${T_{m;\Sigma}}$ is a Fourier multiplier whose symbol is supported on the set ${\bigcup_{\alpha \in \Sigma} \alpha + [-\varepsilon,\varepsilon]}$.

There is a body of results relating the ${\ell^p({\bf Z})}$ theory of discrete Fourier multiplier operators such as ${T_{m;\alpha}}$ or ${T_{m;\Sigma}}$ with the ${L^p({\bf R})}$ theory of their continuous counterparts. For instance, we have the basic result of Magyar, Stein, and Wainger:

Proposition 1 (Magyar-Stein-Wainger sampling principle) Let ${1 \leq p \leq \infty}$ and ${\alpha \in {\bf R}/{\bf Z}}$.
• (i) If ${m: {\bf R} \rightarrow {\bf C}}$ is a smooth function supported in ${[-1/2,1/2]}$, then ${\|T_{m;\alpha}\|_{B(\ell^p({\bf Z}))} \lesssim \|T_m\|_{B(L^p({\bf R}))}}$, where ${\|T\|_{B(V)}}$ denotes the operator norm of an operator ${T: V \rightarrow V}$.
• (ii) More generally, if ${m: {\bf R} \rightarrow {\bf C}}$ is a smooth function supported in ${[-\frac{1}{2Q},\frac{1}{2Q}]}$ for some natural number ${Q}$, then ${\|T_{m;\alpha + \frac{1}{Q}{\bf Z}/{\bf Z}}\|_{B(\ell^p({\bf Z}))} \lesssim \|T_m\|_{B(L^p({\bf R}))}}$.

When ${p=2}$ the implied constant in these bounds can be set to equal ${1}$. In the paper of Magyar, Stein, and Wainger it was posed as an open problem whether this is the case for other ${p}$; in an appendix to this paper I show that the answer is negative if ${p}$ is sufficiently close to ${1}$ or ${\infty}$, but I do not know the full answer to this question.

This proposition allows one to get a good multiplier theory for symbols supported near cyclic groups ${\frac{1}{Q}{\bf Z}/{\bf Z}}$; for instance it shows that a discrete Fourier multiplier with symbol ${\sum_{\alpha \in \frac{1}{Q}{\bf Z}/{\bf Z}} \phi(Q(\xi-\alpha))}$ for a fixed test function ${\phi}$ is bounded on ${\ell^p({\bf Z})}$, uniformly in ${Q}$. For many applications in discrete harmonic analysis, one would similarly like a good multiplier theory for symbols supported in “major arc” sets such as

$\displaystyle \bigcup_{q=1}^N \bigcup_{\alpha \in \frac{1}{q}{\bf Z}/{\bf Z}} \alpha + [-\varepsilon,\varepsilon] \ \ \ \ \ (1)$

and in particular to get a good Littlewood-Paley theory adapted to major arcs. (This is particularly the case when trying to control “true complexity zero” expressions for which the minor arc contributions can be shown to be negligible; my recent paper with Krause and Mirek is focused on expressions of this type.) At present we do not have a good multiplier theory that is directly adapted to the classical major arc set (1) (though I do not know of rigorous negative results that show that such a theory is not possible); however, Ionescu and Wainger were able to obtain a useful substitute theory in which (1) was replaced by a somewhat larger set that had better multiplier behaviour. Starting with a finite collection ${S}$ of pairwise coprime natural numbers, and a natural number ${k}$, one can form the major arc type set

$\displaystyle \bigcup_{\alpha \in \Sigma_{\leq k}} \alpha + [-\varepsilon,\varepsilon] \ \ \ \ \ (2)$

where ${\Sigma_{\leq k} \subset {\bf R}/{\bf Z}}$ consists of all rational points in the unit circle of the form ${\frac{a}{Q} \mod 1}$ where ${Q}$ is the product of at most ${k}$ elements from ${S}$ and ${a}$ is an integer. For suitable choices of ${S}$ and ${k}$ not too large, one can make this set (2) contain the set (1) while still having a somewhat controlled size (very roughly speaking, one chooses ${S}$ to consist of (small powers of) large primes between ${N^\rho}$ and ${N}$ for some small constant ${\rho>0}$, together with something like the product of all the primes up to ${N^\rho}$ (raised to suitable powers)).
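To get a concrete feel for the sets ${\Sigma_{\leq k}}$, here is a small Python enumeration with a toy choice of ${S}$ (far smaller than the choices of ${S}$ used in practice, and purely for illustration):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def sigma_le_k(S, k):
    # all rationals a/Q mod 1, with Q a product of at most k distinct elements of S
    points = set()
    for j in range(k + 1):
        for subset in combinations(S, j):
            Q = prod(subset)          # empty product = 1, giving the point 0
            points.update(Fraction(a, Q) for a in range(Q))
    return points

S = [3, 5, 7]                         # toy pairwise coprime moduli
for k in range(len(S) + 1):
    print(k, len(sigma_le_k(S, k)))   # the set grows rapidly with k
```

For this toy ${S}$ the counts are ${1, 13, 57, 105}$ for ${k = 0,1,2,3}$, as one can also verify by summing Euler totients over the admissible denominators.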

In the regime where ${k}$ is fixed and ${\varepsilon}$ is small, there is a good theory:

Theorem 2 (Ionescu-Wainger theorem, rough version) If ${p}$ is an even integer or the dual of an even integer, and ${m: {\bf R} \rightarrow {\bf C}}$ is supported on ${[-\varepsilon,\varepsilon]}$ for a sufficiently small ${\varepsilon > 0}$, then

$\displaystyle \|T_{m;\Sigma_{\leq k}}\|_{B(\ell^p({\bf Z}))} \lesssim_{p, k} (\log(1+|S|))^{O_k(1)} \|T_m\|_{B(L^p({\bf R}))}.$

There is a more explicit description of how small ${\varepsilon}$ needs to be for this theorem to work (roughly speaking, it is not much more than what is needed for all the arcs ${\alpha + [-\varepsilon,\varepsilon]}$ in (2) to be disjoint), but we will not give it here. The logarithmic loss of ${(\log(1+|S|))^{O_k(1)}}$ was reduced to ${\log(1+|S|)}$ by Mirek. In this paper we refine the bound further to

$\displaystyle \|T_{m;\Sigma_{\leq k}}\|_{B(\ell^p({\bf Z}))} \leq O(r \log(2+kr))^k \|T_m\|_{B(L^p({\bf R}))}. \ \ \ \ \ (3)$

when ${p = 2r}$ or ${p = (2r)'}$ for some integer ${r}$. In particular there is no longer any logarithmic loss in the cardinality of the set ${S}$.

The proof of (3) follows a strategy similar to previous proofs of Ionescu-Wainger type. By duality we may assume ${p=2r}$. We use the following standard sequence of steps:

• (i) (Denominator orthogonality) First one splits ${T_{m;\Sigma_{\leq k}} f}$ into various pieces depending on the denominator ${Q}$ appearing in the element of ${\Sigma_{\leq k}}$, and exploits “superorthogonality” in ${Q}$ to estimate the ${\ell^p}$ norm by the ${\ell^p}$ norm of an appropriate square function.
• (ii) (Nonconcentration) One expands out the ${p^{th}}$ power of the square function and estimates it by a “nonconcentrated” version in which various factors that arise in the expansion are “disjoint”.
• (iii) (Numerator orthogonality) We now decompose based on the numerators ${a}$ appearing in the relevant elements of ${\Sigma_{\leq k}}$, and exploit some residual orthogonality in this parameter to reduce to estimating a square-function type expression involving sums over various cosets ${\alpha + \frac{1}{Q}{\bf Z}/{\bf Z}}$.
• (iv) (Marcinkiewicz-Zygmund) One uses the Marcinkiewicz-Zygmund theorem relating scalar and vector valued operator norms to eliminate the role of the multiplier ${m}$.
• (v) (Rubio de Francia) Use a reverse square function estimate of Rubio de Francia type to conclude.

The main innovations are the use of the probabilistic decoupling method to remove some logarithmic losses in (i), and of recent progress on the Erdős-Rado sunflower conjecture (as discussed in this recent post) to improve the bounds in (ii). For (i), the key point is that one can express a sum such as

$\displaystyle \sum_{A \in \binom{S}{k}} f_A,$

where ${\binom{S}{k}}$ is the set of ${k}$-element subsets of an index set ${S}$, and ${f_A}$ are various complex numbers, as an average

$\displaystyle \sum_{A \in \binom{S}{k}} f_A = \frac{k^k}{k!} {\bf E} \sum_{s_1 \in {\bf S}_1,\dots,s_k \in {\bf S}_k} f_{\{s_1,\dots,s_k\}}$

where ${S = {\bf S}_1 \cup \dots \cup {\bf S}_k}$ is a random partition of ${S}$ into ${k}$ subclasses (chosen uniformly over all such partitions), basically because every ${k}$-element subset ${A}$ of ${S}$ has a probability exactly ${\frac{k!}{k^k}}$ of being completely shattered by such a random partition. This “decouples” the index set ${\binom{S}{k}}$ into a Cartesian product ${{\bf S}_1 \times \dots \times {\bf S}_k}$ which is more convenient for application of the superorthogonality theory. For (ii), the point is to efficiently obtain estimates of the form

$\displaystyle (\sum_{A \in \binom{S}{k}} F_A)^r \lesssim_{k,r} \sum_{A_1,\dots,A_r \in \binom{S}{k} \hbox{ sunflower}} F_{A_1} \dots F_{A_r}$

where ${F_A}$ are various non-negative quantities, and a sunflower is a collection of sets ${A_1,\dots,A_r}$ that consist of a common “core” ${A_0}$ and disjoint “petals” ${A_1 \backslash A_0,\dots,A_r \backslash A_0}$. The other parts of the argument are relatively routine; see for instance this survey of Pierce for a discussion of them in the simple case ${k=1}$.
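The combinatorial fact underlying the averaging identity above — that a fixed ${k}$-element set is completely shattered with probability exactly ${\frac{k!}{k^k}}$ — is easy to check directly. The sketch below (illustrative only) realizes the uniform random partition into ${k}$ labelled classes as independent uniform class labels, which is one standard way to do so:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def shatter_probability(k):
    # A uniform random partition of S into k labelled classes amounts to giving
    # each element an independent uniform label in {0,...,k-1}.  Only the labels
    # of the k elements of a fixed set A matter, so enumerate those k^k choices
    # and count the ones where all labels are distinct (i.e. A is "shattered").
    shattered = sum(1 for labels in product(range(k), repeat=k)
                    if len(set(labels)) == k)
    return Fraction(shattered, k ** k)

for k in range(1, 7):
    assert shatter_probability(k) == Fraction(factorial(k), k ** k)
print("P(A shattered) = k!/k^k for k = 1..6")
```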

In this paper we interpret the Ionescu-Wainger multiplier theorem as being essentially a consequence of various quantitative versions of the Shannon sampling theorem. Recall that this theorem asserts that if a (Schwartz) function ${f: {\bf R} \rightarrow {\bf C}}$ has its Fourier transform supported on ${[-1/2,1/2]}$, then ${f}$ can be recovered uniquely from its restriction ${f|_{\bf Z}: {\bf Z} \rightarrow {\bf C}}$. In fact, as can be shown from a little bit of routine Fourier analysis, if we narrow the support of the Fourier transform slightly to ${[-c,c]}$ for some ${0 < c < 1/2}$, then the restriction ${f|_{\bf Z}}$ has the same ${L^p}$ behaviour as the original function, in the sense that

$\displaystyle \| f|_{\bf Z} \|_{\ell^p({\bf Z})} \sim_{c,p} \|f\|_{L^p({\bf R})} \ \ \ \ \ (4)$

for all ${0 < p \leq \infty}$; see Theorem 4.18 of this paper of myself with Krause and Mirek. This is consistent with the uncertainty principle, which suggests that such functions ${f}$ should behave like a constant at scales ${\sim 1/c}$.
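As a sanity check of (4) in the easiest case ${p=2}$ — where, by Parseval and Poisson summation, the comparability is in fact an exact identity whenever the Fourier support lies in ${[-c,c]}$ with ${2c < 1}$ — here is a short numerical verification with the band-limited function ${f(x) = \mathrm{sinc}(cx)^2}$, ${c = 1/4}$ (my own choice of test function, purely for illustration):

```python
from math import sin, pi

C = 0.25   # band limit: Fourier support of f is [-C, C], well inside [-1/2, 1/2]

def f(x):
    # f(x) = sinc(Cx)^2 with sinc(t) = sin(pi t)/(pi t); its Fourier transform
    # is a triangle function supported on [-C, C]
    if x == 0.0:
        return 1.0
    t = pi * C * x
    return (sin(t) / t) ** 2

# l^2 norm (squared) of the samples f|_Z; the tail decays like |n|^{-4}
sample_sq = sum(f(n) ** 2 for n in range(-100000, 100001))

# L^2 norm (squared) of f, by a midpoint rule on a long interval
h = 0.01
integral_sq = h * sum(f(-2000 + (j + 0.5) * h) ** 2 for j in range(400000))

# for p = 2 the comparability is an exact identity (both sides equal 2/(3C))
assert abs(sample_sq - integral_sq) / integral_sq < 1e-3
print(sample_sq, integral_sq, 2 / (3 * C))
```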

The quantitative sampling theorem (4) can be used to give an alternate proof of Proposition 1(i), basically thanks to the identity

$\displaystyle T_{m;0} (f|_{\bf Z}) = (T_m f)|_{\bf Z}$

whenever ${f: {\bf R} \rightarrow {\bf C}}$ is Schwartz and has Fourier transform supported in ${[-1/2,1/2]}$, and ${m}$ is also supported on ${[-1/2,1/2]}$; this identity can be easily verified from the Poisson summation formula. A variant of this argument also yields an alternate proof of Proposition 1(ii), where the role of ${{\bf R}}$ is now played by ${{\bf R} \times {\bf Z}/Q{\bf Z}}$, and the standard embedding of ${{\bf Z}}$ into ${{\bf R}}$ is now replaced by the embedding ${\iota_Q: n \mapsto (n, n \hbox{ mod } Q)}$ of ${{\bf Z}}$ into ${{\bf R} \times {\bf Z}/Q{\bf Z}}$; the analogue of (4) is now

$\displaystyle \| f \circ \iota_Q \|_{\ell^p({\bf Z})} \sim_{c,p} \|f\|_{L^p({\bf R} \times {\bf Z}/Q{\bf Z})} \ \ \ \ \ (5)$

whenever ${f: {\bf R} \times {\bf Z}/Q{\bf Z} \rightarrow {\bf C}}$ is Schwartz and has Fourier transform ${{\mathcal F}_{{\bf R} \times {\bf Z}/Q{\bf Z}} f\colon {\bf R} \times \frac{1}{Q}{\bf Z}/{\bf Z} \rightarrow {\bf C}}$ supported in ${[-c/Q,c/Q] \times \frac{1}{Q}{\bf Z}/{\bf Z}}$, and ${{\bf Z}/Q{\bf Z}}$ is endowed with probability Haar measure.

The locally compact abelian groups ${{\bf R}}$ and ${{\bf R} \times {\bf Z}/Q{\bf Z}}$ can all be viewed as projections of the adelic integers ${{\bf A}_{\bf Z} := {\bf R} \times \hat {\bf Z}}$ (the product of the reals and the profinite integers ${\hat {\bf Z}}$). By using the Ionescu-Wainger multiplier theorem, we are able to obtain an adelic version of the quantitative sampling estimate (5), namely

$\displaystyle \| f \circ \iota \|_{\ell^p({\bf Z})} \sim_{c,p} \|f\|_{L^p({\bf A}_{\bf Z})}$

whenever ${1 < p < \infty}$, ${f: {\bf A}_{\bf Z} \rightarrow {\bf C}}$ is Schwartz-Bruhat and has Fourier transform ${{\mathcal F}_{{\bf A}_{\bf Z}} f: {\bf R} \times {\bf Q}/{\bf Z} \rightarrow {\bf C}}$ supported on ${[-\varepsilon,\varepsilon] \times \Sigma_{\leq k}}$ for some sufficiently small ${\varepsilon}$ (the precise bound on ${\varepsilon}$ depends on ${S, p, c}$ in a fashion not detailed here). This allows one to obtain an “adelic” extension of the Ionescu-Wainger multiplier theorem, in which the ${\ell^p({\bf Z})}$ operator norm of any discrete multiplier operator whose symbol is supported on major arcs can be shown to be comparable to the ${L^p({\bf A}_{\bf Z})}$ operator norm of an adelic counterpart to that multiplier operator; in principle this reduces “major arc” harmonic analysis on the integers ${{\bf Z}}$ to “low frequency” harmonic analysis on the adelic integers ${{\bf A}_{\bf Z}}$, which is a simpler setting in many ways (mostly because the set of major arcs (2) is now replaced with a product set ${[-\varepsilon,\varepsilon] \times \Sigma_{\leq k}}$).

Ben Krause, Mariusz Mirek, and I have uploaded to the arXiv our paper Pointwise ergodic theorems for non-conventional bilinear polynomial averages. This paper is a contribution to the decades-long program of extending the classical ergodic theorems to “non-conventional” ergodic averages. Here, the focus is on pointwise convergence theorems, and in particular looking for extensions of the pointwise ergodic theorem of Birkhoff:

Theorem 1 (Birkhoff ergodic theorem) Let ${(X,\mu,T)}$ be a measure-preserving system (by which we mean ${(X,\mu)}$ is a ${\sigma}$-finite measure space, and ${T: X \rightarrow X}$ is invertible and measure-preserving), and let ${f \in L^p(X)}$ for any ${1 \leq p < \infty}$. Then the averages ${\frac{1}{N} \sum_{n=1}^N f(T^n x)}$ converge pointwise for ${\mu}$-almost every ${x \in X}$.
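As a quick illustration of the theorem (a standard toy example of my own choosing, not tied to the paper): for an irrational circle rotation, the Birkhoff averages of a mean-zero function tend to the space average — indeed for every starting point, since the rotation is uniquely ergodic, which is stronger than the almost-everywhere conclusion.

```python
from math import cos, pi, sqrt

# toy measure-preserving system: the circle rotation T x = x + alpha (mod 1)
# on X = [0,1) with Lebesgue measure, alpha irrational
alpha = sqrt(2) - 1

def f(x):
    return cos(2 * pi * x)             # space average of f is 0

def birkhoff_average(x, N):
    s, y = 0.0, x
    for _ in range(N):
        y = (y + alpha) % 1.0          # apply T
        s += f(y)
    return s / N

for N in [100, 10000, 1000000]:
    print(N, birkhoff_average(0.3, N))  # tends to the space average 0
```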

Pointwise ergodic theorems have an inherently harmonic analysis content to them, as they are closely tied to maximal inequalities. For instance, the Birkhoff ergodic theorem is closely tied to the Hardy-Littlewood maximal inequality.

The above theorem was generalized by Bourgain (conceding the endpoint ${p=1}$, where pointwise almost everywhere convergence is now known to fail) to polynomial averages:

Theorem 2 (Pointwise ergodic theorem for polynomial averages) Let ${(X,\mu,T)}$ be a measure-preserving system, and let ${f \in L^p(X)}$ for any ${1 < p < \infty}$. Let ${P \in {\bf Z}[{\mathrm n}]}$ be a polynomial with integer coefficients. Then the averages ${\frac{1}{N} \sum_{n=1}^N f(T^{P(n)} x)}$ converge pointwise for ${\mu}$-almost every ${x \in X}$.

For bilinear averages, we have a separate 1990 result of Bourgain (for ${L^\infty}$ functions), extended to other ${L^p}$ spaces by Lacey, and with an alternate proof given by Demeter:

Theorem 3 (Pointwise ergodic theorem for two linear polynomials) Let ${(X,\mu,T)}$ be a measure-preserving system with finite measure, and let ${f \in L^{p_1}(X)}$, ${g \in L^{p_2}(X)}$ for some ${1 < p_1,p_2 \leq \infty}$ with ${\frac{1}{p_1}+\frac{1}{p_2} < \frac{3}{2}}$. Then for any integers ${a,b}$, the averages ${\frac{1}{N} \sum_{n=1}^N f(T^{an} x) g(T^{bn} x)}$ converge pointwise almost everywhere.

It has been an open question for some time (see e.g., Problem 11 of this survey of Frantzikinakis) to extend this result to other bilinear ergodic averages. In our paper we are able to achieve this in the partially linear case:

Theorem 4 (Pointwise ergodic theorem for one linear and one nonlinear polynomial) Let ${(X,\mu,T)}$ be a measure-preserving system, and let ${f \in L^{p_1}(X)}$, ${g \in L^{p_2}(X)}$ for some ${1 < p_1,p_2 < \infty}$ with ${\frac{1}{p_1}+\frac{1}{p_2} \leq 1}$. Then for any polynomial ${P \in {\bf Z}[{\mathrm n}]}$ of degree ${d \geq 2}$, the averages ${\frac{1}{N} \sum_{n=1}^N f(T^{n} x) g(T^{P(n)} x)}$ converge pointwise almost everywhere.

We actually prove a bit more than this, namely a maximal function estimate and a variational estimate, together with some additional estimates that “break duality” by applying in certain ranges with ${\frac{1}{p_1}+\frac{1}{p_2}>1}$, but we will not discuss these extensions here. A good model case to keep in mind is when ${p_1=p_2=2}$ and ${P(n) = n^2}$ (which is the case we started with). We note that norm convergence for these averages was established much earlier by Furstenberg and Weiss (in the ${d=2}$ case at least), and in fact norm convergence for arbitrary polynomial averages is now known thanks to the work of Host-Kra, Leibman, and Walsh.

Our proof of Theorem 4 is much closer in spirit to Theorem 2 than to Theorem 3. The property shared by the averages in Theorems 2 and 4 is that they have “true complexity zero”, in the sense that they can only be large if the functions ${f,g}$ involved are “major arc” or “profinite”, in that they behave periodically over very long intervals (or like a linear combination of such periodic functions). In contrast, the average in Theorem 3 has “true complexity one”, in the sense that it can also be large if ${f,g}$ are “almost periodic” (a linear combination of eigenfunctions, or plane waves), and as such all proofs of the latter theorem have relied (either explicitly or implicitly) on some form of time-frequency analysis. In principle, the true complexity zero property reduces one to studying the behaviour of averages on major arcs. However, until recently the available estimates quantifying this true complexity zero property were not strong enough to achieve a good reduction of this form, and even once one is in the major arc setting the bilinear averages in Theorem 4 are still quite complicated, exhibiting a mixture of both continuous and arithmetic aspects, both of which are genuinely bilinear in nature.

After applying standard reductions such as the Calderón transference principle, the key task is to establish a suitably “scale-invariant” maximal (or variational) inequality on the integer shift system (in which ${X = {\bf Z}}$ with counting measure, and ${T(n) = n-1}$). A model problem is to establish the maximal inequality

$\displaystyle \| \sup_N |A_N(f,g)| \|_{\ell^1({\bf Z})} \lesssim \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})} \ \ \ \ \ (1)$

where ${N}$ ranges over powers of two and ${A_N}$ is the bilinear operator

$\displaystyle A_N(f,g)(x) := \frac{1}{N} \sum_{n=1}^N f(x-n) g(x-n^2).$

The single scale estimate

$\displaystyle \| A_N(f,g) \|_{\ell^1({\bf Z})} \lesssim \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})}$

or equivalently (by duality)

$\displaystyle \frac{1}{N} \sum_{n=1}^N \sum_{x \in {\bf Z}} h(x) f(x-n) g(x-n^2) \lesssim \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})} \|h\|_{\ell^\infty({\bf Z})} \ \ \ \ \ (2)$

is immediate from Hölder’s inequality; the difficulty is how to take the supremum over scales ${N}$.
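The single-scale bound can also be checked numerically; the Python sketch below (illustrative only, with my own toy choice of random inputs) verifies ${\| A_N(f,g) \|_{\ell^1({\bf Z})} \leq \|f\|_{\ell^2({\bf Z})}\|g\|_{\ell^2({\bf Z})}}$, with constant ${1}$ as the Cauchy-Schwarz argument predicts, for finitely supported ${f,g}$.

```python
import random
from math import sqrt

def A_N(f, g, N):
    # A_N(f,g)(x) = (1/N) sum_{n=1}^N f(x-n) g(x-n^2), for finitely
    # supported sequences f, g given as dicts on Z
    out = {}
    for n in range(1, N + 1):
        for y, fy in f.items():
            x = y + n                      # so that f(x - n) = f(y)
            gy = g.get(x - n * n)
            if gy is not None:
                out[x] = out.get(x, 0.0) + fy * gy / N
    return out

random.seed(0)
M, N = 200, 10
f = {x: random.gauss(0, 1) for x in range(M)}
g = {x: random.gauss(0, 1) for x in range(M)}

lhs = sum(abs(v) for v in A_N(f, g, N).values())   # ||A_N(f,g)||_{l^1}
rhs = sqrt(sum(v * v for v in f.values())) * sqrt(sum(v * v for v in g.values()))
assert lhs <= rhs + 1e-9   # single-scale bound with constant 1, via Cauchy-Schwarz
print(lhs, rhs)
```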

The first step is to understand when the single-scale estimate (2) can come close to equality. A key example to keep in mind is when ${f(x) = e(ax/q) F(x)}$, ${g(x) = e(bx/q) G(x)}$, ${h(x) = e(cx/q) H(x)}$ where ${q=O(1)}$ is a small modulus, ${a,b,c}$ are such that ${a+b+c=0 \hbox{ mod } q}$, ${G}$ is a smooth cutoff to an interval ${I}$ of length ${O(N^2)}$, and ${F=H}$ is also supported on ${I}$ and behaves like a constant on intervals of length ${O(N)}$. Then one can check that (barring some unusual cancellation) (2) is basically sharp for this example. A remarkable result of Peluse and Prendiville (generalised to arbitrary nonlinear polynomials ${P}$ by Peluse) asserts, roughly speaking, that this example is basically the only way in which (2) can be saturated, at least when ${f,g,h}$ are supported on a common interval ${I}$ of length ${O(N^2)}$ and are normalised in ${\ell^\infty}$ rather than ${\ell^2}$. (Strictly speaking, the above paper of Peluse and Prendiville only says something like this regarding the ${f,h}$ factors; the corresponding statement for ${g}$ was established in a subsequent paper of Peluse and Prendiville.) The argument requires tools from additive combinatorics such as the Gowers uniformity norms, and hinges in particular on the “degree lowering argument” of Peluse and Prendiville, which I discussed in this previous blog post. Crucially for our application, the estimates are very quantitative, with all bounds being polynomial in the ratio between the left and right hand sides of (2) (or more precisely, the ${\ell^\infty}$-normalized version of (2)).

For our applications we had to extend the ${\ell^\infty}$ inverse theory of Peluse and Prendiville to an ${\ell^2}$ theory. This turned out to require a certain amount of “sleight of hand”. Firstly, one can dualise the theorem of Peluse and Prendiville to show that the “dual function”

$\displaystyle A^*_N(h,g)(x) = \frac{1}{N} \sum_{n=1}^N h(x+n) g(x+n-n^2)$

can be well approximated in ${\ell^1}$ by a function that has Fourier support on “major arcs” if ${g,h}$ enjoy ${\ell^\infty}$ control. To get the required extension to ${\ell^2}$ in the ${f}$ aspect one has to improve the control on the error from ${\ell^1}$ to ${\ell^2}$; this can be done by some interpolation theory combined with the useful Fourier multiplier theory of Ionescu and Wainger on major arcs. Then, by further interpolation using recent ${\ell^p({\bf Z})}$ improving estimates of Han, Kovac, Lacey, Madrid, and Yang for linear averages such as ${x \mapsto \frac{1}{N} \sum_{n=1}^N g(x+n-n^2)}$, one can relax the ${\ell^\infty}$ hypothesis on ${g}$ to an ${\ell^2}$ hypothesis, and then by undoing the duality one obtains a good inverse theorem for (2) for the function ${f}$; a modification of the arguments also gives something similar for ${g}$.

Using these inverse theorems (and the Ionescu-Wainger multiplier theory) one still has to understand the “major arc” portion of (1); a model case arises when ${f,g}$ are supported near rational numbers ${a/q}$ with ${q \sim 2^l}$ for some moderately large ${l}$. The inverse theory gives good control (with an exponential decay in ${l}$) on individual scales ${N}$, and one can leverage this with a Rademacher-Menshov type argument (see e.g., this blog post) and some closer analysis of the bilinear Fourier symbol of ${A_N}$ to eventually handle all “small” scales, with ${N}$ ranging up to say ${2^{2^u}}$ where ${u = C 2^{\rho l}}$ for some small constant ${\rho}$ and large constant ${C}$. For the “large” scales, it becomes feasible to place all the major arcs simultaneously under a single common denominator ${Q}$, and then a quantitative version of the Shannon sampling theorem allows one to transfer the problem from the integers ${{\bf Z}}$ to the locally compact abelian group ${{\bf R} \times {\bf Z}/Q{\bf Z}}$. Actually it was conceptually clearer for us to work instead with the adelic integers ${{\mathbf A}_{\bf Z} ={\bf R} \times \hat {\bf Z}}$, which is the inverse limit of the groups ${{\bf R} \times {\bf Z}/Q{\bf Z}}$. Once one transfers to the adelic integers, the bilinear operators involved split up as tensor products of the “continuous” bilinear operator

$\displaystyle A_{N,{\bf R}}(f,g)(x) := \frac{1}{N} \int_0^N f(x-t) g(x-t^2)\ dt$

on ${{\bf R}}$, and the “arithmetic” bilinear operator

$\displaystyle A_{\hat {\bf Z}}(f,g)(x) := \int_{\hat {\bf Z}} f(x-y) g(x-y^2) d\mu_{\hat {\bf Z}}(y)$

on the profinite integers ${\hat {\bf Z}}$, equipped with probability Haar measure ${\mu_{\hat {\bf Z}}}$. After a number of standard manipulations (interpolation, Fubini’s theorem, Hölder’s inequality, variational inequalities, etc.) the task of estimating this tensor product boils down to establishing an ${L^q}$ improving estimate

$\displaystyle \| A_{\hat {\bf Z}}(f,g) \|_{L^q(\hat {\bf Z})} \lesssim \|f\|_{L^2(\hat {\bf Z})} \|g\|_{L^2(\hat {\bf Z})}$

for some ${q>2}$. Splitting the profinite integers ${\hat {\bf Z}}$ into the product of the ${p}$-adic integers ${{\bf Z}_p}$, it suffices to establish this claim for each ${{\bf Z}_p}$ separately (so long as we keep the implied constant equal to ${1}$ for sufficiently large ${p}$). This turns out to be possible using an arithmetic version of the Peluse-Prendiville inverse theorem as well as an arithmetic ${L^q}$ improving estimate for linear averaging operators which ultimately arises from some estimates on the distribution of polynomials on the ${p}$-adic field ${{\bf Q}_p}$, which are a variant of some estimates of Kowalski and Wright.

Kari Astala, Steffen Rohde, Eero Saksman and I have (finally!) uploaded to the arXiv our preprint “Homogenization of iterated singular integrals with applications to random quasiconformal maps”. This project started (and was largely completed) over a decade ago, but for various reasons it was not finalised until very recently. The motivation for this project was to study the behaviour of “random” quasiconformal maps. Recall that a (smooth) quasiconformal map is a homeomorphism ${f: {\bf C} \rightarrow {\bf C}}$ that obeys the Beltrami equation

$\displaystyle \frac{\partial f}{\partial \overline{z}} = \mu \frac{\partial f}{\partial z}$

for some Beltrami coefficient ${\mu: {\bf C} \rightarrow D(0,1)}$; this can be viewed as a deformation of the Cauchy-Riemann equation ${\frac{\partial f}{\partial \overline{z}} = 0}$. Assuming that ${f(z)}$ is asymptotic to ${z}$ at infinity, one can (formally, at least) solve for ${f}$ in terms of ${\mu}$ using the Beurling transform

$\displaystyle Tf(z) := \frac{\partial}{\partial z}(\frac{\partial}{\partial \overline{z}})^{-1} f(z) = -\frac{1}{\pi} p.v. \int_{\bf C} \frac{f(w)}{(w-z)^2}\ dw$

by the Neumann series

$\displaystyle \frac{\partial f}{\partial \overline{z}} = \mu + \mu T \mu + \mu T \mu T \mu + \dots.$
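As a toy finite-dimensional illustration of this Neumann-series inversion (not from the paper: the Beurling transform is replaced here by an arbitrary contraction ${T}$, and ${\mu}$ acts by pointwise multiplication with ${|\mu| < 1}$), one can check that the partial sums of the series converge to a solution of the fixed point equation ${u = \mu + \mu T u}$:

```python
import random

# Toy analogue of the Neumann-series inversion: solve u = mu + mu T u,
# i.e. u = mu + mu T mu + mu T mu T mu + ..., with "mu" acting by pointwise
# multiplication (|mu| < 1) and T a contraction standing in for the Beurling
# transform (which is an L^2 isometry).  Illustrative only.
random.seed(1)
n = 8
mu = [random.uniform(-0.5, 0.5) for _ in range(n)]

T = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n)]
bound = max(sum(abs(v) for v in row) for row in T)   # crude operator norm bound
T = [[v / bound for v in row] for row in T]          # now ||T|| <= 1

def apply_T(v):
    return [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]

def apply_mu(v):
    return [mu[i] * v[i] for i in range(n)]

# partial sums of the Neumann series; ||mu T|| <= 1/2 guarantees convergence
term, u = mu[:], mu[:]
for _ in range(60):
    term = apply_mu(apply_T(term))
    u = [a + b for a, b in zip(u, term)]

# verify the fixed-point equation u = mu + mu T u
muTu = apply_mu(apply_T(u))
residual = max(abs(u[i] - mu[i] - muTu[i]) for i in range(n))
assert residual < 1e-9
print("Neumann series residual:", residual)
```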

We looked at the question of the asymptotic behaviour of ${f}$ if ${\mu = \mu_\delta}$ is a random field that oscillates at some fine spatial scale ${\delta>0}$. A simple model to keep in mind is

$\displaystyle \mu_\delta(z) = \varphi(z) \sum_{n \in {\bf Z}^2} \epsilon_n 1_{n\delta + [0,\delta]^2}(z) \ \ \ \ \ (1)$

where ${\epsilon_n = \pm 1}$ are independent random signs and ${\varphi: {\bf C} \rightarrow D(0,1)}$ is a bump function. For models such as these, we show that a homogenisation occurs in the limit ${\delta \rightarrow 0}$; each multilinear expression

$\displaystyle \mu_\delta T \mu_\delta \dots T \mu_\delta \ \ \ \ \ (2)$

converges weakly in probability (and almost surely, if we restrict ${\delta}$ to a lacunary sequence) to a deterministic limit, and the associated quasiconformal map ${f = f_\delta}$ similarly converges weakly in probability (or almost surely). (Results of this latter type were also recently obtained by Ivrii and Markovic by a more geometric method which is simpler, but is applied to a narrower class of Beltrami coefficients.) In the specific case (1), the limiting quasiconformal map is just the identity map ${f(z)=z}$, but if one for instance replaces the ${\epsilon_n}$ by non-symmetric random variables then one can have significantly more complicated limits. The convergence theorem for multilinear expressions such as (2) is not specific to the Beurling transform ${T}$; any other translation and dilation invariant singular integral can be used here.

The random expression (2) is somewhat reminiscent of a moment of a random matrix, and one can start computing it analogously. For instance, if one has a decomposition ${\mu_\delta = \sum_{n \in {\bf Z}^2} \mu_{\delta,n}}$ such as (1), then (2) expands out as a sum

$\displaystyle \sum_{n_1,\dots,n_k \in {\bf Z}^2} \mu_{\delta,n_1} T \mu_{\delta,n_2} \dots T \mu_{\delta,n_k}$

The random fluctuations of this sum can be treated by a routine second moment estimate, and the main task is to show that the expected value

$\displaystyle \sum_{n_1,\dots,n_k \in {\bf Z}^2} \mathop{\bf E}(\mu_{\delta,n_1} T \mu_{\delta,n_2} \dots T \mu_{\delta,n_k}) \ \ \ \ \ (3)$

becomes asymptotically independent of ${\delta}$.

If all the ${n_1,\dots,n_k}$ were distinct then one could use independence to factor the expectation to get

$\displaystyle \sum_{n_1,\dots,n_k \in {\bf Z}^2} \mathop{\bf E}(\mu_{\delta,n_1}) T \mathop{\bf E}(\mu_{\delta,n_2}) \dots T \mathop{\bf E}(\mu_{\delta,n_k})$

which is a relatively straightforward expression to calculate (particularly in the model (1), where all the expectations here in fact vanish). The main difficulty is that there are a number of configurations in (3) in which various of the ${n_j}$ collide with each other, preventing one from easily factoring the expression. A typical problematic contribution for instance would be a sum of the form

$\displaystyle \sum_{n_1,n_2 \in {\bf Z}^2: n_1 \neq n_2} \mathop{\bf E}(\mu_{\delta,n_1} T \mu_{\delta,n_2} T \mu_{\delta,n_1} T \mu_{\delta,n_2}). \ \ \ \ \ (4)$

This is an example of what we call a non-split sum. This can be compared with the split sum

$\displaystyle \sum_{n_1,n_2 \in {\bf Z}^2: n_1 \neq n_2} \mathop{\bf E}(\mu_{\delta,n_1} T \mu_{\delta,n_1} T \mu_{\delta,n_2} T \mu_{\delta,n_2}). \ \ \ \ \ (5)$

If we ignore the constraint ${n_1 \neq n_2}$ in the latter sum, then it splits into

$\displaystyle f_\delta T g_\delta$

where

$\displaystyle f_\delta := \sum_{n_1 \in {\bf Z}^2} \mathop{\bf E}(\mu_{\delta,n_1} T \mu_{\delta,n_1})$

and

$\displaystyle g_\delta := \sum_{n_2 \in {\bf Z}^2} \mathop{\bf E}(\mu_{\delta,n_2} T \mu_{\delta,n_2})$

and one can hope to treat this sum by an induction hypothesis. (To actually deal with constraints such as ${n_1 \neq n_2}$ requires an inclusion-exclusion argument that creates some notational headaches but is ultimately manageable.) As the name suggests, the non-split configurations such as (4) cannot be factored in this fashion, and are the most difficult to handle. A direct computation using the triangle inequality (and a certain amount of combinatorics and induction) reveals that these sums are somewhat localised, in that dyadic portions such as

$\displaystyle \sum_{n_1,n_2 \in {\bf Z}^2: |n_1 - n_2| \sim R} \mathop{\bf E}(\mu_{\delta,n_1} T \mu_{\delta,n_2} T \mu_{\delta,n_1} T \mu_{\delta,n_2})$

exhibit power decay in ${R}$ (when measured in suitable function space norms), basically because of the large number of times one has to transition back and forth between ${n_1}$ and ${n_2}$. Thus, morally at least, the dominant contribution to a non-split sum such as (4) comes from the local portion when ${n_2=n_1+O(1)}$. From the translation and dilation invariance of ${T}$ this type of expression then simplifies to something like

$\displaystyle \varphi(z)^4 \sum_{n \in {\bf Z}^2} \eta( \frac{n-z}{\delta} )$

(plus negligible errors) for some reasonably decaying function ${\eta}$, and this can be shown to converge to a weak limit as ${\delta \rightarrow 0}$.

In principle all of these limits are computable, but the combinatorics is remarkably complicated, and while there is certainly some algebraic structure to the calculations, it does not seem to be easily describable in terms of an existing framework (e.g., that of free probability).

This set of notes discusses aspects of one of the oldest questions in Fourier analysis, namely the nature of convergence of Fourier series.

If ${f: {\bf R}/{\bf Z} \rightarrow {\bf C}}$ is an absolutely integrable function, its Fourier coefficients ${\hat f: {\bf Z} \rightarrow {\bf C}}$ are defined by the formula

$\displaystyle \hat f(n) := \int_{{\bf R}/{\bf Z}} f(x) e^{-2\pi i nx}\ dx.$
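As a quick numerical illustration (a hypothetical Python sketch; the discretisation of the integral by an equispaced Riemann sum is an ad hoc choice, not part of the notes), one can approximate these coefficients for, say, ${f(x) = \cos(2\pi x)}$, for which ${\hat f(\pm 1) = 1/2}$ and all other coefficients vanish:

```python
import numpy as np

def fourier_coeff(f, n, M=4096):
    # Approximate hat f(n) = int_{R/Z} f(x) e^{-2 pi i n x} dx
    # by a Riemann sum on the equispaced grid x_j = j/M.
    x = np.arange(M) / M
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

f = lambda x: np.cos(2 * np.pi * x)
c1 = fourier_coeff(f, 1)  # close to 1/2
c2 = fourier_coeff(f, 2)  # close to 0
```

For trigonometric polynomials of degree less than ${M}$, the equispaced Riemann sum happens to integrate each character exactly, so the approximation here is accurate up to floating-point roundoff.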

If ${f}$ is smooth, then the Fourier coefficients ${\hat f}$ are absolutely summable, and we have the Fourier inversion formula

$\displaystyle f(x) = \sum_{n \in {\bf Z}} \hat f(n) e^{2\pi i nx}$

where the series here is uniformly convergent. In particular, if we define the partial summation operators

$\displaystyle S_N f(x) := \sum_{|n| \leq N} \hat f(n) e^{2\pi i nx}$

then ${S_N f}$ converges uniformly to ${f}$ when ${f}$ is smooth.
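One can observe this uniform convergence numerically. The following is an illustrative sketch (the smooth test function ${e^{\cos 2\pi x}}$ and the grid size are arbitrary choices of mine): the sup-norm error of ${S_N f}$ decays rapidly in ${N}$.

```python
import numpy as np

M = 4096
x = np.arange(M) / M

def partial_sum(fx, N):
    # S_N f on the grid, with each hat f(n) approximated by a Riemann sum
    out = np.zeros(M, dtype=complex)
    for n in range(-N, N + 1):
        out += np.mean(fx * np.exp(-2j * np.pi * n * x)) * np.exp(2j * np.pi * n * x)
    return out.real

fx = np.exp(np.cos(2 * np.pi * x))  # a smooth periodic test function
errs = [np.abs(partial_sum(fx, N) - fx).max() for N in (2, 4, 8)]
# the sup-norm errors decay rapidly with N, reflecting the fast decay
# of the Fourier coefficients of a smooth function
```

(For this particular ${f}$ the coefficients are modified Bessel functions ${I_n(1)}$, which decay faster than exponentially, so even ${N=8}$ already gives sup-norm error well below ${10^{-4}}$.)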

What if ${f}$ is not smooth, but merely lies in an ${L^p({\bf R}/{\bf Z})}$ class for some ${1 \leq p \leq \infty}$? The Fourier coefficients ${\hat f}$ remain well-defined, as do the partial summation operators ${S_N}$. The question of convergence in norm is relatively easy to settle:

Exercise 1
• (i) If ${1 < p < \infty}$ and ${f \in L^p({\bf R}/{\bf Z})}$, show that ${S_N f}$ converges in ${L^p({\bf R}/{\bf Z})}$ norm to ${f}$. (Hint: first use the boundedness of the Hilbert transform to show that ${S_N}$ is bounded in ${L^p({\bf R}/{\bf Z})}$ uniformly in ${N}$.)
• (ii) If ${p=1}$ or ${p=\infty}$, show that there exists ${f \in L^p({\bf R}/{\bf Z})}$ such that the sequence ${S_N f}$ is unbounded in ${L^p({\bf R}/{\bf Z})}$ (so in particular it certainly does not converge in ${L^p({\bf R}/{\bf Z})}$ norm to ${f}$). (Hint: first show that ${S_N}$ is not bounded in ${L^p({\bf R}/{\bf Z})}$ uniformly in ${N}$, then apply the uniform boundedness principle in the contrapositive.)
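
The failure at ${p = \infty}$ is related to the classical Gibbs phenomenon: for a function with a jump discontinuity, the partial sums ${S_N f}$ overshoot the jump by a fixed proportion (about ${9\%}$ of the jump) no matter how large ${N}$ is, so ${S_N f}$ cannot converge uniformly. (This illustrates, though does not by itself prove, part (ii).) Here is a quick numerical sketch for the square wave, whose Fourier series is ${\frac{4}{\pi} \sum_{k \hbox{ odd}} \frac{\sin(2\pi k x)}{k}}$:

```python
import numpy as np

def SN_square(x, N):
    # Partial Fourier sum of the square wave (+1 on (0,1/2), -1 on (1/2,1)):
    # (4/pi) * sum over odd k <= N of sin(2 pi k x) / k
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):
        s += np.sin(2 * np.pi * k * x) / k
    return (4 / np.pi) * s

x = np.linspace(0, 0.5, 20001)
peaks = [SN_square(x, N).max() for N in (51, 201, 801)]
# each peak hovers near (2/pi) Si(pi) ~ 1.179 rather than 1:
# the overshoot near the jump does not go away as N increases
```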

The question of pointwise almost everywhere convergence turned out to be a significantly harder problem:

Theorem 2 (Pointwise almost everywhere convergence)
• (i) (Kolmogorov, 1923) There exists ${f \in L^1({\bf R}/{\bf Z})}$ such that ${S_N f(x)}$ is unbounded in ${N}$ for almost every ${x}$.
• (ii) (Carleson, 1966; conjectured by Lusin, 1913) For every ${f \in L^2({\bf R}/{\bf Z})}$, ${S_N f(x)}$ converges to ${f(x)}$ as ${N \rightarrow \infty}$ for almost every ${x}$.
• (iii) (Hunt, 1967) For every ${1 < p \leq \infty}$ and ${f \in L^p({\bf R}/{\bf Z})}$, ${S_N f(x)}$ converges to ${f(x)}$ as ${N \rightarrow \infty}$ for almost every ${x}$.

Note from Hölder’s inequality that ${L^2({\bf R}/{\bf Z})}$ contains ${L^p({\bf R}/{\bf Z})}$ for all ${p\geq 2}$, so Carleson’s theorem covers the ${p \geq 2}$ case of Hunt’s theorem. We remark that the precise threshold near ${L^1}$ between Kolmogorov-type divergence results and Carleson-Hunt pointwise convergence results, in the category of Orlicz spaces, is still an active area of research; see this paper of Lie for further discussion.

Carleson’s theorem in particular was a surprisingly difficult result, lying just out of reach of classical methods (as we shall see later, the result is much easier if we smooth either the function ${f}$ or the summation method ${S_N}$ by a tiny bit). Nowadays we realise that the reason for this is that Carleson’s theorem essentially contains a frequency modulation symmetry in addition to the more familiar translation symmetry and dilation symmetry. This basically rules out the possibility of attacking Carleson’s theorem with tools such as Calderón-Zygmund theory or Littlewood-Paley theory, which respect the latter two symmetries but not the former. Instead, tools from “time-frequency analysis” that essentially respect all three symmetries should be employed. We will illustrate this by giving a relatively short proof of Carleson’s theorem due to Lacey and Thiele. (There are other proofs of Carleson’s theorem, including Carleson’s original proof, its modification by Hunt, and a later time-frequency proof by Fefferman; see Remark 18 below.)
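The modulation symmetry alluded to above can be made concrete: modulating ${f}$ by ${e^{2\pi i k x}}$ translates its Fourier coefficients, ${\widehat{e^{2\pi i k \cdot} f}(n) = \hat f(n-k)}$, so the frequency window ${\{|n| \leq N\}}$ cut out by ${S_N}$ gets shifted, and any bound for the maximal operator ${\sup_N |S_N f|}$ must be uniform over all such frequency translations. A small numerical check of the shift identity (an illustrative sketch; the test function is an arbitrary choice):

```python
import numpy as np

M = 1024
x = np.arange(M) / M

def coeffs(g, K=10):
    # hat g(n) for |n| <= K, approximated by Riemann sums on the grid
    return {n: np.mean(g * np.exp(-2j * np.pi * n * x)) for n in range(-K, K + 1)}

f = np.exp(np.cos(2 * np.pi * x))              # arbitrary smooth test function
k = 3
cf = coeffs(f)
cmf = coeffs(np.exp(2j * np.pi * k * x) * f)   # modulated copy of f
# cmf[n] agrees with cf[n - k]: modulation translates the frequency variable
```

Translation and dilation invariance can be checked in the same way; it is the presence of all three symmetries simultaneously that forces the time-frequency decompositions used by Lacey and Thiele.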