You are currently browsing the category archive for the ‘math.CA’ category.

In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable , for instance, is not allowed to equal and simultaneously. We will call such variables *completely specified*. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.

However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

Strictly speaking, the expression is not well-formed according to the grammar of first-order logic; one should instead use something like

or

or

in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

is also not a first-order sentence; one would instead have to write something like or instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as which are used once and then discarded.Another example of partially specified notation is the innocuous notation. For instance, the assertion

when written formally using first-order logic, would become something like which is not exactly an elegant reformulation. Similarly with statements such as or.

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using *ambient parameters*, we become able to formally introduce *asymptotic notation* such as the big-O notation or the little-o notation . We will explain how to do this at the end of this post.
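For concreteness, the quadratic-formula example can be sketched as follows (using the standard form of the formula; the exact variants intended above may differ in detail). The partially specified expression is

```latex
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
```

while a strictly first-order reformulation would be something like

```latex
\left(x = \frac{-b + \sqrt{b^2 - 4ac}}{2a}\right) \vee \left(x = \frac{-b - \sqrt{b^2 - 4ac}}{2a}\right),
```

or, with a temporary variable, $\exists\, \epsilon \in \{-1,+1\}:\ x = \frac{-b + \epsilon \sqrt{b^2 - 4ac}}{2a}$.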

A few months ago I posted a question about analytic functions that I received from a bright high school student, which turned out to be studied and resolved by de Bruijn. Based on this positive resolution, I thought I might try my luck again and list three further questions that this student asked which do not seem to be trivially resolvable.

- Does there exist a smooth function which is nowhere analytic, but is such that the Taylor series converges for every ? (Of course, this series would not then converge to , but instead to some analytic function for each .) I have a vague feeling that perhaps the Baire category theorem should be able to resolve this question, but it seems to require a bit of effort. (Update: answered by Alexander Shaposhnikov in comments.)
- Is there a function which meets every polynomial to infinite order in the following sense: for every polynomial , there exists such that for all ? Such a function would be rather pathological, perhaps resembling a space-filling curve. (Update: solved for smooth by Aleksei Kulikov in comments. The situation currently remains unclear in the general case.)
- Is there a power series that diverges everywhere (except at ), but which becomes pointwise convergent after dividing each of the monomials into pieces for some summing absolutely to , and then rearranging, i.e., there is some rearrangement of that is pointwise convergent for every ? (Update: solved by Jacob Manaker in comments.)

Feel free to post answers or other thoughts on these questions in the comments.

Louis Esser, Burt Totaro, Chengxi Wang, and I have just uploaded to the arXiv our preprint “Varieties of general type with many vanishing plurigenera, and optimal sine and sawtooth inequalities“. This is an interdisciplinary paper that arose because in order to optimize a certain algebraic geometry construction it became necessary to solve a purely analytic question which, while simple, did not seem to have been previously studied in the literature. We were able to solve the analytic question exactly and thus fully optimize the algebraic geometry construction, though the analytic question may have some independent interest.

Let us first discuss the algebraic geometry application. Given a smooth complex -dimensional projective variety there is a standard line bundle attached to it, known as the canonical line bundle; -forms on the variety become sections of this bundle. The bundle may not actually admit global sections; that is to say, the dimension of global sections may vanish. But as one raises the canonical line bundle to higher and higher powers to form further line bundles , the number of global sections tends to increase; in particular, the dimension of global sections (known as the plurigenus) always obeys an asymptotic of the form

as for some non-negative number , which is called the *volume* of the variety ; this is an invariant that reveals some information about the birational geometry of . For instance, if the canonical line bundle is ample (or more generally, nef), this volume is equal to the intersection number (roughly speaking, the number of common zeroes of generic sections of the canonical line bundle); this is a special case of the asymptotic Riemann-Roch theorem. In particular, the volume is a natural number in this case. However, it is possible for the volume to also be fractional in nature. One can then ask: how small can the volume get without vanishing entirely? (By definition, varieties with non-vanishing volume are known as varieties of general type.)

It follows from a deep result obtained independently by Hacon–McKernan, Takayama and Tsuji that there is a uniform lower bound for the volume of all -dimensional projective varieties of general type. However, the precise lower bound is not known, and the current paper is a contribution towards probing this bound by constructing varieties of particularly small volume in the high-dimensional limit . Prior to this paper, the best such constructions of -dimensional varieties basically had exponentially small volume, with a construction of volume at most given by Ballico–Pignatelli–Tasin, and an improved construction with a volume bound of given by Totaro and Wang. In this paper, we obtain a variant construction with the somewhat smaller volume bound of ; the method also gives comparable bounds for some other related algebraic geometry statistics, such as the largest for which the pluricanonical map associated to the linear system is not a birational embedding into projective space.

The space is constructed by taking a general hypersurface of a certain degree in a weighted projective space and resolving the singularities. These varieties are relatively tractable to work with, as one can use standard algebraic geometry tools (such as the Reid–Tai inequality) to provide sufficient conditions to guarantee that the hypersurface has only canonical singularities and that the canonical bundle is a reflexive sheaf, which allows one to calculate the volume exactly in terms of the degree and weights . The problem then reduces to optimizing the resulting volume given the constraints needed for the above-mentioned sufficient conditions to hold. After working with a particular choice of weights (which consist of products of mostly consecutive primes, with each product occurring with suitable multiplicities ), the problem eventually boils down to trying to minimize the total multiplicity , subject to certain congruence conditions and other bounds on the . Using crude bounds on the eventually leads to a construction with volume at most , but by taking advantage of the ability to “dilate” the congruence conditions and optimizing over all dilations, we are able to improve the constant to .

Now it is time to turn to the analytic side of the paper by describing the optimization problem that we solve. We consider the *sawtooth* function , with defined as the unique real number in that is equal to mod . We consider a (Borel) probability measure on the real line, and then compute the average value of this sawtooth function

If one considers the deterministic case in which is a Dirac mass supported at some real number , then the Dirichlet approximation theorem tells us that there is such that is within of an integer, so we have

in this case, and this bound is sharp for deterministic measures . Thus we have

However, both of these bounds turn out to be far from the truth, and the optimal value of is comparable to . In fact we were able to compute this quantity precisely:

Theorem 1 (Optimal bound for sawtooth inequality) Let .

- (i) If for some natural number , then .
- (ii) If for some natural number , then .

In particular, we have as .
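As a quick numerical illustration of the objects in play, here is a minimal sketch of the sawtooth function together with the Dirichlet approximation bound mentioned above. The convention that the sawtooth takes values in $[-1/2,1/2)$ is an assumption made for concreteness, since the precise interval was left implicit.

```python
import math

def sawtooth(x):
    # Representative of x mod 1 in the interval [-1/2, 1/2) (assumed convention).
    return x - math.floor(x + 0.5)

def dirichlet_min(x, N):
    # Dirichlet's approximation theorem guarantees that
    # min over 1 <= n <= N of |sawtooth(n*x)| is at most 1/(N+1).
    return min(abs(sawtooth(n * x)) for n in range(1, N + 1))

x = math.sqrt(2)
for N in (5, 10, 50):
    assert dirichlet_min(x, N) <= 1 / (N + 1)
```

For a Dirac mass at an irrational such as $\sqrt{2}$, this reproduces the deterministic bound discussed above.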

We establish this bound through duality. Indeed, suppose we could find non-negative coefficients such that one had the pointwise bound

for all real numbers . Integrating this against an arbitrary probability measure , we would conclude and hence . Conversely, one can find lower bounds on by selecting suitable candidate measures and computing the means . The theory of linear programming duality tells us that this method must give us the optimal bound, but one has to locate the optimal measure and optimal weights . This we were able to do by first doing some extensive numerics to discover these weights and measures for small values of , and then doing some educated guesswork to extrapolate these examples to the general case, and then to verify the required inequalities. In case (i) the situation is particularly simple, as one can take to be the discrete measure that assigns a probability to the numbers and the remaining probability of to , while the optimal weighted inequality (1) turns out to be which is easily proven by telescoping series. However, the general case turned out to be significantly trickier to work out, and the verification of the optimal inequality required a delicate case analysis (reflecting the fact that equality was attained in this inequality in a large number of places).

After solving the sawtooth problem, we became interested in the analogous question for the sine function, that is to say, what is the best bound for the inequality

The left-hand side is the smallest imaginary part of the first Fourier coefficients of . To our knowledge this quantity has not previously been studied in the Fourier analysis literature. By adopting an approach similar to that used for the sawtooth problem, we were able to compute this quantity exactly as well:

Theorem 2 For any , one has . In particular,

Interestingly, a closely related cotangent sum recently appeared in this MathOverflow post. Verifying the lower bound on boils down to choosing the right test measure ; it turns out that one should pick the probability measure supported on the with odd, with probability proportional to , and the lower bound verification eventually follows from a classical identity

for , first posed by Eisenstein in 1844 and proved by Stern in 1861. The upper bound arises from establishing the trigonometric inequality for all real numbers , which to our knowledge is new; the left-hand side has a Fourier-analytic interpretation as convolving the Fejér kernel with a certain discretized square wave function, and this interpretation is used heavily in our proof of the inequality.

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms for . For finitely supported functions , one can define the (non-normalised) Gowers norm by the formula

where denotes complex conjugation; then on any discrete interval and for any function we can define the (normalised) Gowers norm where is the extension of by zero to all of . Thus for instance (which technically makes a seminorm rather than a norm), and one can calculate where , and we use the averaging notation .

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials and functions , we define the multilinear form

(assuming that the denominator is finite and non-zero). Thus for instance where we view as formal (indeterminate) variables, and are understood to be extended by zero to all of . These forms are used to count patterns in various sets; for instance, the quantity is closely related to the number of length three arithmetic progressions contained in . Let us informally say that a form is *controlled* by the norm if the form is small whenever are -bounded functions with at least one of the small in norm. This definition was made more precise by Gowers and Wolf, who then defined the *true complexity* of a form to be the least such that is controlled by the norm. For instance,

- and have true complexity ;
- has true complexity ;
- has true complexity ;
- The form (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials ; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.
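To make the non-normalised Gowers norm concrete, here is a brute-force evaluation of the $U^2$ case of the definition quoted earlier (the convention of conjugating the factors with an odd number of shifts is the standard one, assumed here). For the indicator function of an interval of length $N$ it returns the number of additive quadruples $a+b=c+d$, which is $(2N^3+N)/3$.

```python
def u2_fourth_power(f, support):
    """Non-normalised U^2 norm of f to the fourth power:
    sum over x, h1, h2 of f(x) * conj(f(x+h1)) * conj(f(x+h2)) * f(x+h1+h2).
    f should vanish outside `support`, so it suffices to let x, x+h1, x+h2
    range over the support (all other terms vanish)."""
    pts = list(support)
    total = 0
    for x in pts:
        for p in pts:          # p = x + h1
            for q in pts:      # q = x + h2
                h1, h2 = p - x, q - x
                total += (f(x) * f(x + h1).conjugate()
                          * f(x + h2).conjugate() * f(x + h1 + h2))
    return total

N = 3
f = lambda x: 1 if 0 <= x < N else 0
# for this indicator, the result counts quadruples a + b = c + d in {0,...,N-1}
```

For `N = 3` this gives 19, matching $(2 \cdot 27 + 3)/3$.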

The (semi-)norm is so weak that it barely controls any averages at all. For instance the average

is not controlled by the semi-norm: it is perfectly possible for a -bounded function to even have vanishing norm but have large value of (consider for instance the parity function ).

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the and norms, which I will call the (or “profinite ”) norm:

where ranges over all arithmetic progressions in . This can easily be seen to be a norm on functions that controls the norm. It is also basically controlled by the norm for -bounded functions ; indeed, if is an arithmetic progression in of some spacing , then we can write as the intersection of an interval with a residue class modulo , and from Fourier expansion we have If we let be a standard bump function supported on with total mass and is a parameter, then (extending by zero outside of ), as can be seen by using the triangle inequality and the estimate After some Fourier expansion of we now have Writing as a linear combination of and using the Gowers–Cauchy–Schwarz inequality, we conclude hence on optimising in we have Forms which are controlled by the norm (but not ) would then have their true complexity adjusted to with this insertion.

The norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form had true complexity in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function ; for the first two functions one needs to localize the norm to intervals of length . But I will ignore this technical point to keep the exposition simple.] The weaker claim that has true complexity is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).
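As a toy illustration of why such a norm sees structure that the plain mean does not, here is a brute-force supremum over all arithmetic progressions. The normalisation used (dividing the progression sum by $N$) is an assumption made for this sketch, not necessarily the precise definition above.

```python
def u1_plus(f_vals):
    # sup over arithmetic progressions P in {0,...,N-1} of |sum_{x in P} f(x)| / N
    # (assumed normalisation); every AP is a prefix of a maximal progression,
    # so running partial sums over each (start, spacing) covers them all
    N = len(f_vals)
    best = 0.0
    for a in range(N):              # starting point
        for d in range(1, N + 1):   # spacing
            partial = 0.0
            for x in range(a, N, d):
                partial += f_vals[x]
                best = max(best, abs(partial) / N)
    return best

parity = [(-1) ** n for n in range(8)]
# the mean of the parity function vanishes, but the progression of even
# numbers carries half the total mass, so the profinite-type norm is large
```

For the parity function on 8 points, the mean is 0 while the progression supremum is 0.5, matching the discussion of the parity example above.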

The well-known inverse theorem for the norm tells us that if a -bounded function has norm at least for some , then there is a Fourier phase such that

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

For one has a trivial inverse theorem; by definition, the norm of is at least if and only if

Thus the frequency appearing in the inverse theorem can be taken to be zero when working instead with the norm.

For one has the intermediate situation in which the frequency is not taken to be zero, but is instead major arc. Indeed, suppose that is -bounded with , thus

for some progression . This forces the spacing of this progression to be . We write the above inequality as for some residue class and some interval . By Fourier expansion and the triangle inequality we then have for some integer . Convolving by for a small multiple of and a Schwartz function of unit mass with Fourier transform supported on , we have The Fourier transform of is bounded by and supported on , thus by Fourier expansion and the triangle inequality we have for some , so in particular . Thus we have for some of the major arc form with . Conversely, for of this form, some routine summation by parts gives the bound so if (2) holds for a -bounded then one must have .

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes of functions (where each class of functions induces a dual norm ):

Here I have included the three classes of functions that one can choose from for the inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems , known as *Host-Kra seminorms*. The norm can be defined for as ; it is orthogonal to the *invariant factor* (generated by the (almost everywhere) invariant measurable subsets of ), in the sense that a function has vanishing seminorm if and only if it is orthogonal to all -measurable (bounded) functions. Similarly, the norm is orthogonal to the *Kronecker factor*, generated by the eigenfunctions of (that is to say, those obeying an identity for some -invariant ); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers norm, one can then define the Host-Kra seminorm by ; it is orthogonal to the *profinite factor*, generated by the periodic sets of (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.

I was asked the following interesting question by a bright high school student I am working with, to which I did not immediately know the answer:

Question 1 Does there exist a smooth function which is not real analytic, but such that all the differences are real analytic for every ?

The hypothesis implies that the Newton quotients are real analytic for every . If analyticity were preserved by smooth limits, this would imply that is real analytic, which would make real analytic. However, we are not assuming any uniformity in the analyticity of the Newton quotients, so this simple argument does not seem to resolve the question immediately.

In the case that is periodic, say periodic with period , one can answer the question in the negative by Fourier series. Perform a Fourier expansion . If is not real analytic, then there is a sequence going to infinity such that as . From the Borel-Cantelli lemma one can then find a real number such that (say) for infinitely many , hence for infinitely many . Thus the Fourier coefficients of do not decay exponentially and hence this function is not analytic, a contradiction.

I was not able to quickly resolve the non-periodic case, but I thought perhaps this might be a good problem to crowdsource, so I invite readers to contribute their thoughts on this problem here. In the spirit of the polymath projects, I would encourage comments that contain thoughts that fall short of a complete solution, in the event that some other reader may be able to take the thought further.

In this previous blog post I noted the following easy application of Cauchy-Schwarz:

Lemma 1 (Van der Corput inequality) Let be unit vectors in a Hilbert space . Then

*Proof:* The left-hand side may be written as for some unit complex numbers . By Cauchy-Schwarz we have
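A quick numerical sanity check of the inequality, read here in the form $(\sum_i |\langle v, u_i\rangle|)^2 \le \sum_{i,j} |\langle u_i, u_j\rangle|$, which is the version the proof above suggests (this reading of the elided display is an assumption):

```python
import math, random

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def unit(dim, rng):
    # random unit vector in R^dim
    v = [rng.gauss(0, 1) for _ in range(dim)]
    n = math.sqrt(inner(v, v))
    return [a / n for a in v]

rng = random.Random(0)
dim, n = 5, 8
v = unit(dim, rng)
us = [unit(dim, rng) for _ in range(n)]

lhs = sum(abs(inner(v, u)) for u in us) ** 2
rhs = sum(abs(inner(ui, uj)) for ui in us for uj in us)
assert lhs <= rhs + 1e-9   # the van der Corput inequality
```

Since the inequality is a theorem, the assertion holds for any choice of unit vectors, not just this random sample.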

As a corollary, correlation becomes transitive in a statistical sense (even though it is not transitive in an absolute sense):

Corollary 2 (Statistical transitivity of correlation) Let be unit vectors in a Hilbert space such that for all and some . Then we have for at least of the pairs .

*Proof:* From the lemma, we have

One drawback with this corollary is that it does not tell us *which* pairs correlate. In particular, if the vector also correlates with a separate collection of unit vectors, the pairs for which correlate may have no intersection whatsoever with the pairs in which correlate (except of course on the diagonal where they must correlate).

While working on an ongoing research project, I recently found that there is a very simple way to get around the latter problem by exploiting the tensor power trick:

Corollary 3 (Simultaneous statistical transitivity of correlation) Let be unit vectors in a Hilbert space for and such that for all , and some . Then there are at least pairs such that . In particular (by Cauchy-Schwarz) we have for all .

*Proof:* Apply Corollary 2 to the unit vectors and , in the tensor power Hilbert space .

It is surprisingly difficult to obtain even a qualitative version of the above conclusion (namely, if correlates with all of the , then there are many pairs for which correlates with for all simultaneously) without some version of the tensor power trick. For instance, even the powerful Szemerédi regularity lemma, when applied to the set of pairs for which one has correlation of , for a single , does not seem to be sufficient. However, there is a reformulation of the argument using the Schur product theorem as a substitute for (or really, a disguised version of) the tensor power trick. For simplicity of notation let us just work with real Hilbert spaces to illustrate the argument. We start with the identity

where is the orthogonal projection to the complement of . This implies a Gram matrix inequality for each , where denotes the claim that is positive semi-definite. By the Schur product theorem, we conclude that and hence for a suitable choice of signs . One now argues as in the proof of Corollary 2.

A separate application of tensor powers to amplify correlations was also noted in this previous blog post giving a cheap version of the Kabatjanskii-Levenstein bound, but this seems not to be directly related to the current application.
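The identity underlying the tensor power trick, $\langle u \otimes w, v \otimes z\rangle = \langle u, v\rangle\langle w, z\rangle$, can be checked directly; the small vectors below are arbitrary illustrative choices.

```python
def inner(u, v):
    # complex inner product, conjugate-linear in the second argument
    return sum(a * b.conjugate() for a, b in zip(u, v))

def tensor(u, w):
    # coordinates of u (x) w, indexed by pairs
    return [a * b for a in u for b in w]

u = [0.6, 0.8j]                  # a unit vector in C^2
v = [2 ** -0.5, 1j * 2 ** -0.5]  # another unit vector

lhs = inner(tensor(u, u), tensor(v, v))
rhs = inner(u, v) ** 2
assert abs(lhs - rhs) < 1e-12
# squaring the inner product turns a correlation |<u,v>| >= eps into a
# correlation |<u (x) u, v (x) v>| >= eps^2, which is how Corollary 3
# reduces to Corollary 2 in the tensor power space
```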

Previous set of notes: Notes 1. Next set of notes: Notes 3.

In Exercise 5 (and Lemma 1) of 246A Notes 4 we already observed some links between complex analysis on the disk (or annulus) and Fourier series on the unit circle:

- (i) Functions that are holomorphic on a disk are expressed by a convergent Fourier series (and also Taylor series) for (so in particular ), where conversely, every infinite sequence of coefficients obeying (1) arises from such a function .
- (ii) Functions that are holomorphic on an annulus are expressed by a convergent Fourier series (and also Laurent series) , where conversely, every doubly infinite sequence of coefficients obeying (2) arises from such a function .
- (iii) In the situation of (ii), there is a unique decomposition where extends holomorphically to , and extends holomorphically to and goes to zero at infinity, and are given by the formulae where is any anticlockwise contour in enclosing , and where is any anticlockwise contour in enclosing but not .

This connection lets us interpret various facts about Fourier series through the lens of complex analysis, at least for some special classes of Fourier series. For instance, the Fourier inversion formula becomes the Cauchy-type formula for the Laurent or Taylor coefficients of , in the event that the coefficients are doubly infinite and obey (2) for some , or singly infinite and obey (1) for some .
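A small numerical check of the point in (i): for a function holomorphic past the unit circle, its Taylor coefficients can be recovered as Fourier coefficients of its boundary values. The function $f(z) = 1/(1 - z/2) = \sum_n 2^{-n} z^n$ below is an illustrative choice.

```python
import cmath

def circle_coeff(f, n, M=64):
    # average of f over the M-th roots of unity against e^{-2 pi i k n / M};
    # this approximates the n-th Fourier (= Taylor) coefficient, up to
    # aliasing by the coefficients n + M, n + 2M, ... (here negligible)
    return sum(f(cmath.exp(2j * cmath.pi * k / M))
               * cmath.exp(-2j * cmath.pi * k * n / M)
               for k in range(M)) / M

f = lambda z: 1 / (1 - z / 2)   # Taylor coefficients 2^{-n}
```

The discrete average reproduces $2^{-n}$ to high accuracy, a discretised form of the Cauchy-type formula for the coefficients.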

It turns out that there are similar links between complex analysis on a half-plane (or strip) and Fourier *integrals* on the real line, which we will explore in these notes.

We first fix a normalisation for the Fourier transform. If is an absolutely integrable function on the real line, we define its Fourier transform by the formula

From the dominated convergence theorem will be a bounded continuous function; from the Riemann-Lebesgue lemma it also decays to zero as . My choice to place the in the exponent is a personal preference (it is slightly more convenient for some harmonic analysis formulae such as the identities (4), (5), (6) below), though in the complex analysis and PDE literature there are also some slight advantages in omitting this factor. In any event it is not difficult to adapt the discussion in these notes for other choices of normalisation. It is of interest to extend the Fourier transform beyond the class into other function spaces, such as or the space of tempered distributions, but we will not pursue this direction here; see for instance these lecture notes of mine for a treatment.

Exercise 1 (Fourier transform of Gaussian) If is a complex number with and is the Gaussian function , show that the Fourier transform is given by the Gaussian , where we use the standard branch for .
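One can sanity-check the self-duality of the Gaussian $e^{-\pi x^2}$ numerically under the $2\pi$-in-the-exponent convention described above (the quadrature parameters here are arbitrary illustrative choices):

```python
import math

def ft_gauss(xi, step=1e-3, lim=8.0):
    # Riemann-sum approximation to the real part of
    # \int e^{-pi x^2} e^{-2 pi i x xi} dx (the imaginary part vanishes by
    # symmetry); for this convention the answer should be e^{-pi xi^2}
    n = int(2 * lim / step)
    return sum(math.exp(-math.pi * (-lim + k * step) ** 2)
               * math.cos(2 * math.pi * (-lim + k * step) * xi)
               for k in range(n + 1)) * step

assert abs(ft_gauss(0.0) - 1.0) < 1e-4
assert abs(ft_gauss(1.0) - math.exp(-math.pi)) < 1e-4
```

This is the special case of Exercise 1 in which the Gaussian is exactly its own Fourier transform.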

The Fourier transform has many remarkable properties. On the one hand, as long as the function is sufficiently “reasonable”, the Fourier transform enjoys a number of very useful identities, such as the Fourier inversion formula

the Plancherel identity and the Poisson summation formula

On the other hand, the Fourier transform also intertwines various *qualitative* properties of a function with “dual” qualitative properties of its Fourier transform ; in particular, “decay” properties of tend to be associated with “regularity” properties of , and vice versa. For instance, the Fourier transforms of rapidly decreasing functions tend to be smooth. There are complex analysis counterparts of this Fourier dictionary, in which “decay” properties are described in terms of exponentially decaying pointwise bounds, and “regularity” properties are expressed using holomorphicity on various strips, half-planes, or the entire complex plane. The following exercise gives some examples of this:

Exercise 2 (Decay of implies regularity of ) Let be an absolutely integrable function.

- (i) If has super-exponential decay in the sense that for all and (that is to say one has for some finite quantity depending only on ), then extends uniquely to an entire function . Furthermore, this function continues to be defined by (3).
- (ii) If is supported on a compact interval then the entire function from (i) obeys the bounds for . In particular, if is supported in then .
- (iii) If obeys the bound for all and some , then extends uniquely to a holomorphic function on the horizontal strip , and obeys the bound in this strip. Furthermore, this function continues to be defined by (3).
- (iv) If is supported on (resp. ), then there is a unique continuous extension of to the lower half-plane (resp. the upper half-plane ) which is holomorphic in the interior of this half-plane, and such that uniformly as (resp. ). Furthermore, this function continues to be defined by (3).
Hint: to establish holomorphicity in each of these cases, use Morera’s theorem and the Fubini-Tonelli theorem. For uniqueness, use analytic continuation, or (for part (iv)) the Schwarz reflection principle.

Later in these notes we will give a partial converse to part (ii) of this exercise, known as the Paley-Wiener theorem; there are also partial converses to the other parts of this exercise.

From (3) we observe the following intertwining property between multiplication by an exponential and complex translation: if is a complex number and is an absolutely integrable function such that the modulated function is also absolutely integrable, then we have the identity

whenever is a complex number such that at least one of the two sides of the equation in (7) is well defined. Thus, multiplication of a function by an exponential weight corresponds (formally, at least) to translation of its Fourier transform. By using contour shifting, we will also obtain a dual relationship: under suitable holomorphicity and decay conditions on , translation by a complex shift will correspond to multiplication of the Fourier transform by an exponential weight. It turns out to be possible to exploit this property to derive many Fourier-analytic identities, such as the inversion formula (4) and the Poisson summation formula (6), which we do later in these notes. (The Plancherel theorem can also be established by complex analytic methods, but this requires a little more effort; see Exercise 8.)

The material in these notes is loosely adapted from Chapter 4 of Stein-Shakarchi’s “Complex Analysis”.
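The Poisson summation formula (6) can likewise be sanity-checked numerically on Gaussians: applied to $f(x) = e^{-\pi t x^2}$ (whose Fourier transform, under the convention of these notes, is $t^{-1/2} e^{-\pi \xi^2/t}$), it yields the theta functional equation $\theta(t) = t^{-1/2}\,\theta(1/t)$.

```python
import math

def theta(t, M=50):
    # theta(t) = sum over integers n of e^{-pi t n^2}, truncated at |n| <= M
    # (the tail beyond |n| = 50 is far below double precision here)
    return sum(math.exp(-math.pi * t * n * n) for n in range(-M, M + 1))

t = 2.0
lhs = theta(t)
rhs = theta(1 / t) / math.sqrt(t)
assert abs(lhs - rhs) < 1e-9   # Poisson summation applied to the Gaussian
```

The two sides agree to machine precision, as Poisson summation predicts for any $t > 0$.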

Laura Cladek and I have just uploaded to the arXiv our paper “Additive energy of regular measures in one and higher dimensions, and the fractal uncertainty principle“. This paper concerns a continuous version of the notion of additive energy. Given a finite measure on and a scale , define the energy at scale to be the quantity

where is the product measure on formed from four copies of the measure on . We will be interested in Cantor-type measures , supported on a compact set and obeying the Ahlfors-David regularity condition for all balls and some constants , as well as the matching lower bound when whenever . One should think of as a -dimensional fractal set, and as some vaguely self-similar measure on this set.

Note that once one fixes , the variable in (1) is constrained to a ball of radius , hence we obtain the trivial upper bound

If the set contains a lot of “additive structure”, one can expect this bound to be basically sharp; for instance, if is an integer, is a -dimensional unit disk, and is Lebesgue measure on this disk, one can verify that (where we allow implied constants to depend on ). However we show that if the dimension is non-integer, then one obtains a gain:

Theorem 1 If is not an integer, and are as above, then for some depending only on .

Informally, this asserts that Ahlfors-David regular fractal sets of non-integer dimension cannot behave as if they are approximately closed under addition. In fact the gain we obtain is quasipolynomial in the regularity constant :

(We also obtain a localised version in which the regularity condition is only required to hold at scales between and .) Such a result was previously obtained (with more explicit values of the implied constants) in the one-dimensional case by Dyatlov and Zahl; but in higher dimensions there do not appear to have been any results for this general class of sets and measures . In the paper of Dyatlov and Zahl it is noted that some dependence on is necessary; in particular, cannot be much better than . This reflects the fact that there *are* fractal sets that do behave reasonably well with respect to addition (basically because they are built out of long arithmetic progressions at many scales); however, such sets are not very Ahlfors-David regular. Among other things, this result readily implies a dimension expansion result for any non-degenerate smooth map , including the sum map and (in one dimension) the product map , where the non-degeneracy condition required is that the gradients are invertible for every . We refer to the paper for the formal statement.
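The phenomenon in Theorem 1 has a simple discrete analogue that one can compute directly: a Cantor-like set has noticeably smaller additive energy (the number of quadruples $a+b=c+d$) than an interval of the same cardinality. The sets below are illustrative toy choices, not the measures of the theorem.

```python
from itertools import product

def additive_energy(A):
    # number of quadruples (a, b, c, d) in A^4 with a + b = c + d,
    # computed as the sum of squares of the sumset multiplicities
    counts = {}
    for a, b in product(A, repeat=2):
        counts[a + b] = counts.get(a + b, 0) + 1
    return sum(c * c for c in counts.values())

interval = list(range(8))   # 8 consecutive integers
cantor = [a + 3 * b + 9 * c for a in (0, 2) for b in (0, 2) for c in (0, 2)]
# third stage of a discrete middle-thirds Cantor construction, also 8 points

assert additive_energy(interval) == 344   # = (2 * 8**3 + 8) / 3
assert additive_energy(cantor) == 216
assert additive_energy(cantor) < additive_energy(interval)
```

The interval attains the maximal-order energy $(2N^3+N)/3$, while the Cantor-type set falls short, a toy version of the claim that such sets are not approximately closed under addition.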

Our higher-dimensional argument shares many features in common with that of Dyatlov and Zahl, notably a reliance on the modern tools of additive combinatorics (and specifically the Bogolyubov-Ruzsa lemma of Sanders). However, in one dimension we were also able to find a completely elementary argument, avoiding any particularly advanced additive combinatorics and instead primarily exploiting the order-theoretic properties of the real line, that gave a superior value of , namely

One of the main reasons for obtaining such improved energy bounds is that they imply a *fractal uncertainty principle* in some regimes. We focus attention on the model case of obtaining such an uncertainty principle for the semiclassical Fourier transform

The *fractal uncertainty principle*, when it applies, asserts that one can improve this to for some ; informally, this asserts that a function and its Fourier transform cannot simultaneously be concentrated in the set when , and that a function cannot be concentrated on and have its Fourier transform be of maximum size on when . A modification of the disk example mentioned previously shows that such a fractal uncertainty principle cannot hold if is an integer. However, in one dimension, the fractal uncertainty principle is known to hold for all . The above-mentioned results of Dyatlov and Zahl were able to establish this for close to , and the remaining cases and were later established by Bourgain-Dyatlov and Dyatlov-Jin respectively. Such uncertainty principles have applications to hyperbolic dynamics, in particular in establishing spectral gaps for certain Selberg zeta functions.

It remains a largely open problem to establish a fractal uncertainty principle in higher dimensions. Our results allow one to establish such a principle when the dimension is close to , and is assumed to be odd (to make a non-integer). There is also work of Han and Schlag that obtains such a principle when one of the copies of is assumed to have a product structure. We hope to obtain further higher-dimensional fractal uncertainty principles in subsequent work.

We now sketch how our main theorem is proved. In both one dimension and higher dimensions, the main point is to get a preliminary improvement

over the trivial bound (2) for any small , provided is sufficiently small depending on ; one can then iterate this bound by a fairly standard “induction on scales” argument (which roughly speaking can be used to show that energies behave somewhat multiplicatively in the scale parameter ) to propagate the bound to a power gain at smaller scales. We found that a particularly clean way to run the induction on scales was via use of the Gowers uniformity norm , and particularly via a clean Fubini-type inequality (ultimately proven using the Gowers-Cauchy-Schwarz inequality) that allows one to “decouple” coarse and fine scale aspects of the Gowers norms (and hence of additive energies).

It remains to obtain the preliminary improvement. In one dimension this is done by identifying some “left edges” of the set that supports : intervals that intersect , but such that a large interval just to the left of this interval is disjoint from . Here is a large constant and is a scale parameter. It is not difficult to show (using in particular the Archimedean nature of the real line) that if one has the Ahlfors-David regularity condition for some then left edges exist in abundance at every scale; for instance most points of would be expected to lie in quite a few of these left edges (much as most elements of, say, the ternary Cantor set would be expected to contain a lot of s in their base expansion). In particular, most pairs would be expected to lie in a pair of left edges of equal length. The key point is then that if lies in such a pair with , then there are relatively few pairs at distance from for which one has the relation , because will both tend to be to the right of respectively. This causes a decrement in the energy at scale , and by carefully combining all these energy decrements one can eventually cobble together the energy bound (3).
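As a toy illustration of why Cantor-type sets have anomalously small additive energy compared to arithmetic progressions, one can work with the discrete additive energy $E(A) = \#\{(a,b,c,d) \in A^4 : a+b = c+d\}$ (a discrete caricature of the continuous energies in the paper, not the paper's actual setup). The following sketch compares the base-$3$ Cantor set with digits $\{0,2\}$ against an arithmetic progression of the same cardinality:

```python
from collections import Counter
from itertools import product

def additive_energy(A):
    """E(A) = sum_s r(s)^2, where r(s) = #{(a,b) in A^2 : a + b = s}."""
    r = Counter(a + b for a, b in product(A, repeat=2))
    return sum(c * c for c in r.values())

k = 6
# 2^k-point Cantor set in [0, 3^k): digits 0 or 2 in base 3.
cantor = [sum(d * 3**j for j, d in enumerate(ds))
          for ds in product((0, 2), repeat=k)]
progression = list(range(2**k))  # arithmetic progression of the same size

# Sums of two Cantor points decompose digit by digit (digit sums 0, 2, 4
# arise in 1, 2, 1 ways, and the resulting representations are distinct),
# giving E(cantor) = 6^k, versus E(progression) = (2n^3 + n)/3 with n = 2^k.
```

For $k = 6$ this gives $6^6 = 46656$ against $174784$: the fractal set supports far fewer additive coincidences, which is the phenomenon the energy bound quantifies.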

We were not able to make this argument work in higher dimensions (though perhaps the cases and might not be completely out of reach from these methods). Instead we return to additive combinatorics methods. If the claim (3) failed, then by applying the Balog-Szemerédi-Gowers theorem we can show that the set has high correlation with an approximate group , and hence (by the aforementioned Bogolyubov-Ruzsa type theorem of Sanders, which is the main source of the quasipolynomial bounds in our final exponent) will exhibit an approximate “symmetry” along some non-trivial arithmetic progression of some spacing length and some diameter . The -neighbourhood of will then resemble the union of parallel “cylinders” of dimensions . If we focus on a typical -ball of , the set now resembles a Cartesian product of an interval of length with a subset of a -dimensional hyperplane, which behaves approximately like an Ahlfors-David regular set of dimension (this already lets us conclude a contradiction if ). Note that if the original dimension was non-integer then this new dimension will also be non-integer. It is then possible to contradict the failure of (3) by appealing to a suitable induction hypothesis at one lower dimension.

Asgar Jamneshan and I have just uploaded to the arXiv our paper “Foundational aspects of uncountable measure theory: Gelfand duality, Riesz representation, canonical models, and canonical disintegration“. This paper arose from our longer-term project to systematically develop “uncountable” ergodic theory – ergodic theory in which the groups acting are not required to be countable, the probability spaces one acts on are not required to be standard Borel or Polish, and the compact groups that arise in the structural theory (e.g., the theory of group extensions) are not required to be separable. One of the motivations for doing this is to allow ergodic theory results to be applied to ultraproducts of finite dynamical systems, which can then hopefully be transferred to establish combinatorial results with good uniformity properties. An instance of this is the uncountable Mackey-Zimmer theorem, discussed in this companion blog post.

In the course of this project, we ran into the obstacle that many foundational results, such as the Riesz representation theorem, often require one or more of these countability hypotheses when encountered in textbooks. Other technical issues also arise in the uncountable setting, such as the need to distinguish the Borel σ-algebra from the (two different types of) Baire σ-algebra. As such we needed to spend some time reviewing and synthesizing the known literature on some foundational results of “uncountable” measure theory, which led to this paper. Most of the results of this paper are thus already in the literature, either explicitly or implicitly, in one form or another (with perhaps the exception of the canonical disintegration, which we discuss below); we view the main contribution of this paper as presenting the results in a coherent and unified fashion. In particular we found that the language of category theory was invaluable in clarifying and organizing all the different results. In subsequent work we (and some other authors) will use the results in this paper for various applications in uncountable ergodic theory.

The foundational results covered in this paper can be divided into a number of subtopics (Gelfand duality, Baire σ-algebras and Riesz representation, canonical models, and canonical disintegration), which we discuss further below the fold.

I have uploaded to the arXiv my paper “Exploring the toolkit of Jean Bourgain“. This is one of a collection of papers to be published in the Bulletin of the American Mathematical Society describing aspects of the work of Jean Bourgain; other contributors to this collection include Keith Ball, Ciprian Demeter, and Carlos Kenig. Because the other contributors will be covering specific areas of Jean’s work in some detail, I decided to take a non-overlapping tack, and focus instead on some basic tools of Jean that he frequently used across many of the fields he contributed to. Jean had a surprising number of these “basic tools” that he wielded with great dexterity, and in this paper I focus on just a few of them:

- Reducing qualitative analysis results (e.g., convergence theorems or dimension bounds) to quantitative analysis estimates (e.g., variational inequalities or maximal function estimates).
- Using dyadic pigeonholing to locate good scales to work in or to apply truncations.
- Using random translations to amplify small sets (low density) into large sets (positive density).
- Combining large deviation inequalities with metric entropy bounds to control suprema of various random processes.
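The third item in this list lends itself to a quick numerical sketch (my own illustration of the general phenomenon, not an example taken from the paper): in the cyclic group ${\bf Z}/N{\bf Z}$, a set of small density $\varepsilon$ can be amplified by taking the union of roughly $\varepsilon^{-1}\log(1/\kappa)$ independent random translates, after which the uncovered density drops to about $\kappa$:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000
eps = 0.01
A = rng.choice(N, size=int(eps * N), replace=False)  # a "small" set of density 1%

m = 500  # number of random translates, roughly (1/eps) * log(1/kappa)
covered = np.zeros(N, dtype=bool)
for t in rng.integers(0, N, size=m):
    covered[(A + t) % N] = True  # add the translate A + t to the union

# Each point is missed by a single random translate with probability 1 - eps,
# so the uncovered density is roughly (1 - eps)^m, about e^{-5} < 1% here.
density = covered.mean()
```

The point, as in Bourgain's applications, is that positive-density (or near-full-density) sets are much easier to work with than sparse ones, and the randomness costs only a multiplicative factor in the estimates.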

Each of these techniques is individually not too difficult to explain, and each was certainly employed on occasion by various mathematicians prior to Bourgain’s work; but Jean had internalized them to the point where he would instinctively use them as soon as they became relevant to the problem at hand. I illustrate this at the end of the paper with an exposition of one particular result of Jean, on the Erdős similarity problem, in which his main result (that any sum of three infinite sets of reals has the property that there exists a positive measure set that does not contain any homothetic copy of ) is basically proven by a sequential application of these tools (except for dyadic pigeonholing, which turns out not to be needed here).

I had initially intended to also cover some other basic tools in Jean’s toolkit, such as the uncertainty principle and the use of probabilistic decoupling, but was having trouble keeping the paper coherent with such a broad focus (certainly I could not identify a single paper of Jean’s that employed all of these tools at once). I hope though that the examples given in the paper give some reasonable impression of Jean’s research style.
