- Reducing qualitative analysis results (e.g., convergence theorems or dimension bounds) to quantitative analysis estimates (e.g., variational inequalities or maximal function estimates).
- Using dyadic pigeonholing to locate good scales to work in or to apply truncations.
- Using random translations to amplify small sets (low density) into large sets (positive density).
- Combining large deviation inequalities with metric entropy bounds to control suprema of various random processes.

Each of these techniques is individually not too difficult to explain, and each was certainly employed on occasion by various mathematicians prior to Bourgain’s work; but Jean had internalized them to the point where he would instinctively use them as soon as they became relevant to the problem at hand. I illustrate this at the end of the paper with an exposition of one particular result of Jean, on the Erdős similarity problem, in which his main result (that any sum of three infinite sets of reals has the property that there exists a positive measure set that does not contain any homothetic copy of that sum) is basically proven by a sequential application of these tools (except for dyadic pigeonholing, which turns out not to be needed here).
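To give the flavour of the last item on this list, here is a toy numerical illustration of my own devising (not taken from any specific paper of Bourgain’s): the supremum of a random trigonometric polynomial can be controlled by combining a Hoeffding-type large deviation inequality at each point of a finite net (whose cardinality is the metric entropy input) with a union bound over the net and a Lipschitz estimate off the net.

```python
import numpy as np

# Random process: X(t) = K^{-1/2} * sum_k eps_k cos(2 pi k t), eps_k = +-1 iid.
# Strategy: (a) Hoeffding's inequality bounds |X(t)| at each fixed t,
# (b) a union bound over a delta-net of [0,1] (the metric entropy input),
# (c) a Lipschitz estimate transfers the bound from the net to all of [0,1].
rng = np.random.default_rng(0)
K = 200
eps = rng.choice([-1.0, 1.0], size=K)
ks = np.arange(1, K + 1)

def X(t):
    return np.sum(eps * np.cos(2 * np.pi * ks * t)) / np.sqrt(K)

lip = 2 * np.pi * ks.sum() / np.sqrt(K)   # |X'(t)| <= sum_k 2 pi k / sqrt(K)

delta = 1e-5
net_size = int(1 / delta)                 # the net 0, delta, 2*delta, ...
failure_prob = 0.01
# Hoeffding at fixed t: P(|X(t)| > u) <= 2 exp(-u^2/2), since the squares of
# the coefficients cos(2 pi k t)/sqrt(K) sum to at most 1.  Union bound over
# the net, then add the Lipschitz error off the net:
u = np.sqrt(2 * np.log(2 * net_size / failure_prob))
bound = u + lip * delta

emp_sup = max(abs(X(t)) for t in np.linspace(0, 1, 20001))
print(f"high-probability bound on sup|X|: {bound:.2f}")
print(f"empirical sup|X| on a fine grid:  {emp_sup:.2f}")
```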

I had initially intended to also cover some other basic tools in Jean’s toolkit, such as the uncertainty principle and the use of probabilistic decoupling, but was having trouble keeping the paper coherent with such a broad focus (certainly I could not identify a single paper of Jean’s that employed all of these tools at once). I hope though that the examples given in the paper give some reasonable impression of Jean’s research style.

Vaughan and I grew up in extremely culturally similar countries, worked in adjacent areas of mathematics, shared (as of this week) a coauthor in Dima Shlyakhtenko, started out our careers with the same postdoc position (as UCLA Hedrick Assistant Professors, sixteen years apart) and even ended up in sister campuses of the University of California, but surprisingly we only interacted occasionally, via chance meetings at conferences or emails on some committee business. I found him extremely easy to get along with when we did meet, though, perhaps because of our similar cultural upbringing.

I have not had much occasion to directly use much of Vaughan’s mathematical contributions, but I did very much enjoy reading his influential 1999 preprint on planar algebras (which, for some odd reason, has never been formally published). Traditional algebra notation is one-dimensional in nature, with algebraic expressions being described by strings of mathematical symbols; a linear operator , for instance, might appear in the middle of such a string, taking in an input on the right and returning an output on its left that might then be fed into some other operation. There are a few mathematical notations which are two-dimensional, such as the commutative diagrams in homological algebra, the tree expansions of solutions to nonlinear PDE (particularly stochastic nonlinear PDE), or the Feynman diagrams and Penrose graphical notations from physics, but these are the exception rather than the rule, and the notation is often still concentrated on a one-dimensional complex of vertices and edges (or arrows) in the plane. Planar algebras, by contrast, fully exploit the topological nature of the plane; a planar “operator” (or “operad”) inhabits some punctured region of the plane, such as an annulus, with “inputs” entering from the inner boundaries of the region and “outputs” emerging from the outer boundary. These algebras arose for Vaughan in both operator theory and knot theory, and have since been used in some other areas of mathematics such as representation theory and homology. I myself have not found a direct use for this type of algebra in my own work, but nevertheless I found the mere possibility of higher dimensional notation being the natural choice for a given mathematical problem to be conceptually liberating.

A basic motivating example is the question of counting the number of incidences between points and lines (or between points and other geometric objects). Suppose one has $m$ points and $n$ lines in a plane. How many incidences can there be between these points and lines? The utterly trivial bound is $mn$, but by using the basic fact that two points determine a line (or two lines intersect in at most one point), a simple application of Cauchy-Schwarz improves this bound to $O(mn^{1/2} + n)$. In graph theoretic terms, the point is that the bipartite incidence graph between points and lines does not contain a copy of $K_{2,2}$ (there do not exist two points and two lines that are all incident to each other). Without any other further hypotheses, this bound is basically sharp: consider for instance the collection of all $q^2$ points and all $q^2+q$ lines in a finite plane ${\bf F}_q^2$, which has $\sim q^3$ incidences (one can make the situation more symmetric by working with a projective plane rather than an affine plane). If however one considers lines in the real plane ${\bf R}^2$, the famous Szemerédi-Trotter theorem improves the incidence bound further from $O(mn^{1/2}+n)$ to $O(m^{2/3} n^{2/3} + m + n)$. Thus the incidence graph between real points and lines contains more structure than merely the absence of $K_{2,2}$.
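To make the finite field example concrete, here is a quick brute-force count (a toy verification with an arbitrary choice of $p$) of the incidences between all $p^2$ points and all $p^2+p$ lines of ${\bf F}_p^2$:

```python
# Count incidences between all p^2 points and all p^2 + p lines of F_p^2.
# Lines: y = a x + b (p^2 of them) plus the p vertical lines x = c.
# Each line contains exactly p points, so the incidence count is
# p(p^2 + p) ~ p^3, matching the Cauchy-Schwarz bound m n^{1/2} with
# m = n ~ p^2 up to constants, and showing it is sharp over finite fields.
p = 11

points = [(x, y) for x in range(p) for y in range(p)]

def on_line(pt, line):
    x, y = pt
    if line[0] == "slope":          # line y = a x + b
        _, a, b = line
        return (a * x + b) % p == y
    _, c = line                     # vertical line x = c
    return x == c

lines = [("slope", a, b) for a in range(p) for b in range(p)]
lines += [("vert", c) for c in range(p)]

incidences = sum(on_line(pt, ln) for pt in points for ln in lines)
print(incidences, p * (p**2 + p))   # both equal p^3 + p^2 = 1452 for p = 11
```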

More generally, bounding the size of bipartite graphs (or multipartite hypergraphs) not containing a copy of some complete bipartite subgraph $K_{k,k}$ (or $K_{k,\dots,k}$ in the hypergraph case) is known as *Zarankiewicz’s problem*. We have results for all $k$ and all orders of hypergraph, but for the sake of this post I will focus on the bipartite case.

In our paper we improve the bound to a near-linear bound in the case that the incidence graph is “semilinear”. A model case occurs when one considers incidences between points and axis-parallel rectangles in the plane. Now the $K_{k,k}$-freeness condition is not automatic (it is of course possible for two distinct points to both lie in two distinct rectangles), so we impose this condition by *fiat*:

Theorem 1. Suppose one has $m$ points and $n$ axis-parallel rectangles in the plane, whose incidence graph contains no $K_{k,k}$’s, for some large $k$.

- (i) The total number of incidences is .
- (ii) If all the rectangles are dyadic, the bound can be improved to .
- (iii) The bound in (ii) is best possible (up to the choice of implied constant).

We don’t know whether the bound in (i) is similarly tight for non-dyadic boxes; the usual tricks for reducing the non-dyadic case to the dyadic case strangely fail to apply here. One can generalise to higher dimensions, replacing rectangles by polytopes with faces in some fixed finite set of orientations, at the cost of adding several more logarithmic factors; also, one can replace the reals by other ordered division rings, and replace polytopes by other sets of bounded “semilinear descriptive complexity”, e.g., unions of boundedly many polytopes, or sets cut out by boundedly many functions that enjoy coordinatewise monotonicity properties. For certain specific graphs we can remove the logarithmic factors entirely. We refer to the preprint for precise details.

The proof techniques are combinatorial. The proof of (i) relies primarily on the order structure of ${\bf R}$ to implement a “divide and conquer” strategy in which one can efficiently control incidences between points and rectangles by incidences between smaller collections of points and boxes. For (ii) there is additional order-theoretic structure one can work with: first there is an easy pruning device to reduce to the case when no rectangle is completely contained inside another, and then one can impose the “tile partial order” in which one dyadic rectangle $I \times J$ is less than another $I' \times J'$ if $I \subseteq I'$ and $J' \subseteq J$. The point is that this order is “locally linear” in the sense that for any two dyadic rectangles $R_1, R_2$, the set of dyadic rectangles less than or equal to both $R_1$ and $R_2$ in this order is linearly ordered, and this can be exploited by elementary double counting arguments to obtain a bound which eventually becomes the bound in (ii) after optimising certain parameters in the argument. The proof also suggests how to construct the counterexample in (iii), which is achieved by an elementary iterative construction.
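The objects in Theorem 1 are easy to experiment with. The following brute-force sketch (an illustrative check of the setup, not the divide-and-conquer argument of the paper) generates random points and dyadic rectangles, counts incidences, and tests the $K_{2,2}$-freeness hypothesis directly:

```python
import random

random.seed(1)

# A dyadic interval of scale s is [a*2^s, (a+1)*2^s); a dyadic rectangle is
# a product of two dyadic intervals (not necessarily of the same scale).
def random_dyadic_rectangle(max_scale=3, extent=64):
    sx, sy = random.randint(0, max_scale), random.randint(0, max_scale)
    ax, ay = random.randrange(extent >> sx), random.randrange(extent >> sy)
    return (ax << sx, (ax + 1) << sx, ay << sy, (ay + 1) << sy)

def incident(pt, r):
    return r[0] <= pt[0] < r[1] and r[2] <= pt[1] < r[3]

points = [(random.randrange(64), random.randrange(64)) for _ in range(120)]
rects = [random_dyadic_rectangle() for _ in range(120)]

incidences = sum(incident(p, r) for p in points for r in rects)

# K_{2,2}-freeness: no two points should lie in two common rectangles.
max_common = max(
    sum(1 for r in rects if incident(points[i], r) and incident(points[j], r))
    for i in range(len(points)) for j in range(i + 1, len(points)))

print("incidences:", incidences)
print("K_{2,2}-free" if max_common <= 1 else
      f"not K_{{2,2}}-free: two points share {max_common} rectangles")
```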


When is large and the matrix is a random matrix with empirical spectral distribution converging to some compactly supported probability measure on the real line, then under suitable hypotheses (e.g., unitary conjugation invariance of the random matrix ensemble ), a “concentration of measure” effect occurs, with the spectral distribution of the minors for for any fixed converging to a specific measure that depends only on and . The reason for this notation is that there is a surprising description of this measure when is a natural number, namely it is the free convolution of copies of , pushed forward by the dilation map . For instance, if is the Wigner semicircular measure , then . At the random matrix level, this reflects the fact that the minor of a GUE matrix is again a GUE matrix (up to a renormalizing constant).
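This last fact is easy to test numerically. The following sketch (an illustrative computation with my own normalisations) samples a GUE matrix normalised to have semicircle limit on $[-2,2]$, extracts the top left minor of half the dimension, renormalises, and compares quantiles against an independent GUE sample of the smaller size:

```python
import numpy as np

# Sample an N x N GUE matrix, normalised so that the empirical spectral
# distribution converges to the semicircle law on [-2, 2].
rng = np.random.default_rng(0)
N = 1000
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (A + A.conj().T) / 2 / np.sqrt(N)

# The top left m x m minor of a GUE matrix is again GUE, but with the
# "wrong" normalisation: rescaling by sqrt(N/m) should recover the
# semicircle law on [-2, 2].
m = N // 2
minor_eigs = np.linalg.eigvalsh(H[:m, :m]) * np.sqrt(N / m)

# Reference: an independent m x m GUE with the correct normalisation.
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
ref_eigs = np.linalg.eigvalsh((B + B.conj().T) / 2 / np.sqrt(m))

for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, np.quantile(minor_eigs, q).round(3),
          np.quantile(ref_eigs, q).round(3))
```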

As first observed by Bercovici and Voiculescu and developed further by Nica and Speicher, among other authors, the notion of a free convolution power of can be extended to non-integer , thus giving the notion of a “fractional free convolution power”. This notion can be defined in several different ways. One of them proceeds via the Cauchy transform

$$G_\mu(z) := \int_{\bf R} \frac{d\mu(x)}{z-x}$$

of the measure $\mu$, and can be defined by solving the Burgers-type equation with initial condition (see this previous blog post for a derivation). This equation can be solved explicitly using the method of characteristics. Nica and Speicher also gave a free probability interpretation of the fractional free convolution power: if is a noncommutative random variable in a noncommutative probability space with distribution , and is a real projection operator free of with trace , then the “minor” of (viewed as an element of a new noncommutative probability space whose elements are minors , with trace ) has the law of (we give a self-contained proof of this in an appendix to our paper). This suggests that the minor process (or fractional free convolution) can be studied within the framework of free probability theory.

One of the known facts about integer free convolution powers is monotonicity of the *free entropy*

$$\chi(\mu) := \int_{\bf R} \int_{\bf R} \log|s-t|\ d\mu(s)\ d\mu(t) + \frac{3}{4} + \frac{1}{2} \log 2\pi.$$

Our first main result is to extend the monotonicity results of Shlyakhtenko to fractional . We give two proofs of this fact, one using free probability machinery, and a more self-contained (but less motivated) proof using integration by parts and contour integration. The free probability proof relies on the concept of the *free score* of a noncommutative random variable, which is the analogue of the classical score. The free score, also introduced by Voiculescu, can be defined by duality as measuring the perturbation with respect to semicircular noise, or more precisely

The free score interacts very well with the free minor process ; in particular, by standard calculations one can establish the identity

whenever is a noncommutative random variable, is an algebra of noncommutative random variables, and is a real projection of trace that is free of both and . The monotonicity of free Fisher information then follows from an application of Pythagoras’s theorem (which implies in particular that conditional expectation operators are contractions on ). The monotonicity of free entropy then follows from an integral representation of free entropy as an integral of free Fisher information along the free Ornstein-Uhlenbeck process (or equivalently, free Fisher information is essentially the rate of change of free entropy with respect to perturbation by semicircular noise). The argument also shows when equality holds in the monotonicity inequalities; this occurs precisely when is a semicircular measure up to affine rescaling.

After an extensive amount of calculation of all the quantities that were implicit in the above free probability argument (in particular computing the various terms involved in the application of Pythagoras’ theorem), we were able to extract a self-contained proof of monotonicity that relied on differentiating the quantities in and using the differential equation (1). It turns out that if for sufficiently regular , then there is an identity

where is the kernel and . It is not difficult to show that is a positive semi-definite kernel, which gives the required monotonicity. It would be interesting to obtain some more insightful interpretation of the kernel and the identity (2).

These monotonicity properties hint at the minor process being associated to some sort of “gradient flow” in the parameter. We were not able to formalize this intuition; indeed, it is not clear what a gradient flow on a varying noncommutative probability space even means. However, after substantial further calculation we were able to formally describe the minor process as the Euler-Lagrange equation for an intriguing Lagrangian functional that we conjecture to have a random matrix interpretation. We first work in “Lagrangian coordinates”, defining the quantity on the “Gelfand-Tsetlin pyramid”

by the formula , which is well defined if the density of is sufficiently well behaved. The random matrix interpretation of is that it is the asymptotic location of the eigenvalue of the upper left minor of a random matrix with asymptotic empirical spectral distribution and with unitarily invariant distribution; thus is in some sense a continuum limit of Gelfand-Tsetlin patterns. Thus for instance the Cauchy interlacing laws in this asymptotic limit regime become . After a lengthy calculation (involving extensive use of the chain rule and product rule), the equation (1) is equivalent to the Euler-Lagrange equation , where is the Lagrangian density . Thus the minor process is formally a critical point of the integral . The quantity measures the mean eigenvalue spacing at some location of the Gelfand-Tsetlin pyramid, and the ratio measures mean eigenvalue drift in the minor process. This suggests that this Lagrangian density is some sort of measure of entropy of the asymptotic microscale point process emerging from the minor process at this spacing and drift. There is work of Metcalfe demonstrating that this point process is given by the Boutillier bead model, so we conjecture that this Lagrangian density somehow measures the entropy density of this process.

For simplicity let us just work in one dimension. Any smooth function then defines a discrete Fourier multiplier operator for any by the formula

where is the Fourier transform on ; similarly, any test function defines a continuous Fourier multiplier operator by the formula where . In both cases we refer to as the *symbol* of the multiplier operator.

We will be interested in discrete Fourier multiplier operators whose symbols are supported on a finite union of arcs. One way to construct such operators is by “folding” continuous Fourier multiplier operators into various target frequencies. To make this folding operation precise, given any continuous Fourier multiplier operator , and any frequency , we define the discrete Fourier multiplier operator for any frequency shift by the formula

or equivalently . More generally, given any finite set , we can form a multifrequency projection operator on by the formula , thus . This construction gives discrete Fourier multiplier operators whose symbol can be localised to a finite union of arcs. For instance, if is supported on , then is a Fourier multiplier whose symbol is supported on the set .

There is a body of results relating the theory of discrete Fourier multiplier operators such as or with the theory of their continuous counterparts. For instance, we have the basic result of Magyar, Stein, and Wainger:

Proposition 1 (Magyar-Stein-Wainger sampling principle). Let and .

- (i) If is a smooth function supported in , then , where denotes the operator norm of an operator .
- (ii) More generally, if is a smooth function supported in for some natural number , then .

When , the implied constant in these bounds can be set to equal . In the paper of Magyar, Stein, and Wainger it was posed as an open problem whether this is the case for other ; in an appendix to this paper I show that the answer is negative if is sufficiently close to or , but I do not know the full answer to this question.
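For readers who like to experiment, here is a minimal numerical sketch (my own toy implementation, not code from the Magyar-Stein-Wainger paper) of a discrete Fourier multiplier on the cyclic group ${\bf Z}/N{\bf Z}$ whose symbol is a smooth bump supported on an arc, together with a check of the trivial $\ell^2$ bound by the sup of the symbol:

```python
import numpy as np

# Discrete Fourier multiplier on Z/NZ: (T_m f)^(xi) = m(xi/N) * fhat(xi),
# with the frequencies xi/N identified with points of the circle R/Z.
rng = np.random.default_rng(0)
N = 512

def smooth_bump(theta, width=0.1):
    """A smooth symbol supported on the arc |theta| < width of R/Z."""
    t = (theta + 0.5) % 1.0 - 0.5          # representative in [-1/2, 1/2)
    out = np.zeros_like(t)
    inside = np.abs(t) < width
    out[inside] = np.exp(-1.0 / (1.0 - (t[inside] / width) ** 2))
    return out

def multiplier(f, symbol):
    freqs = np.arange(len(f)) / len(f)     # xi/N in [0,1) ~ R/Z
    return np.fft.ifft(symbol(freqs) * np.fft.fft(f))

f = rng.standard_normal(N)
Tf = multiplier(f, smooth_bump)

# The l^2 -> l^2 norm of T_m is exactly sup |m|; check the inequality.
ratio = np.linalg.norm(Tf) / np.linalg.norm(f)
sup_m = smooth_bump(np.linspace(0, 1, 10000, endpoint=False)).max()
print(f"||Tf||_2 / ||f||_2 = {ratio:.4f} <= sup|m| = {sup_m:.4f}")
```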

This proposition allows one to get a good multiplier theory for symbols supported near cyclic groups ; for instance it shows that a discrete Fourier multiplier with symbol for a fixed test function is bounded on , uniformly in and . For many applications in discrete harmonic analysis, one would similarly like a good multiplier theory for symbols supported in “major arc” sets such as

and in particular to get a good Littlewood-Paley theory adapted to major arcs. (This is particularly the case when trying to control “true complexity zero” expressions for which the minor arc contributions can be shown to be negligible; my recent paper with Krause and Mirek is focused on expressions of this type.) At present we do not have a good multiplier theory that is directly adapted to the classical major arc set (1) (though I do not know of rigorous negative results that show that such a theory is not possible); however, Ionescu and Wainger were able to obtain a useful substitute theory in which (1) was replaced by a somewhat larger set that had better multiplier behaviour. Starting with a finite collection of pairwise coprime natural numbers, and a natural number , one can form the major arc type set where consists of all rational points in the unit circle of the form where is the product of at most elements from and is an integer. For suitable choices of and not too large, one can make this set (2) contain the set (1) while still having a somewhat controlled size (very roughly speaking, one chooses to consist of (small powers of) large primes between and for some small constant , together with something like the product of all the primes up to (raised to suitable powers)).

In the regime where is fixed and is small, there is a good theory:

Theorem 2 (Ionescu-Wainger theorem, rough version). If is an even integer or the dual of an even integer, and is supported on for a sufficiently small , then

There is a more explicit description of how small needs to be for this theorem to work (roughly speaking, it is not much more than what is needed for all the arcs in (2) to be disjoint), but we will not give it here. The logarithmic loss of was reduced to by Mirek. In this paper we refine the bound further to

when or for some integer . In particular there is no longer any logarithmic loss in the cardinality of the set .

The proof of (3) follows a similar strategy to previous proofs of Ionescu-Wainger type. By duality we may assume . We use the following standard sequence of steps:

- (i) (Denominator orthogonality) First one splits into various pieces depending on the denominator appearing in the element of , and exploits “superorthogonality” in to estimate the norm by the norm of an appropriate square function.
- (ii) (Nonconcentration) One expands out the power of the square function and estimates it by a “nonconcentrated” version in which various factors that arise in the expansion are “disjoint”.
- (iii) (Numerator orthogonality) We now decompose based on the numerators appearing in the relevant elements of , and exploit some residual orthogonality in this parameter to reduce to estimating a square-function type expression involving sums over various cosets .
- (iv) (Marcinkiewicz-Zygmund) One uses the Marcinkiewicz-Zygmund theorem relating scalar and vector valued operator norms to eliminate the role of the multiplier .
- (v) (Rubio de Francia) Use a reverse square function estimate of Rubio de Francia type to conclude.

The main innovations are the use of the probabilistic decoupling method to remove some logarithmic losses in (i), and recent progress on the Erdős-Rado sunflower conjecture (as discussed in this recent post) to improve the bounds in (ii). For (i), the key point is that one can express a sum such as

where is the set of -element subsets of an index set , and are various complex numbers, as an average where is a random partition of into subclasses (chosen uniformly over all such partitions), basically because every -element subset of has a probability exactly of being completely shattered by such a random partition. This “decouples” the index set into a Cartesian product which is more convenient for application of the superorthogonality theory. For (ii), the point is to efficiently obtain estimates of the form where are various non-negative quantities, and a sunflower is a collection of sets that consist of a common “core” and disjoint “petals” . The other parts of the argument are relatively routine; see for instance this survey of Pierce for a discussion of them in the simple case .

In this paper we interpret the Ionescu-Wainger multiplier theorem as being essentially a consequence of various quantitative versions of the Shannon sampling theorem. Recall that this theorem asserts that if a (Schwartz) function has its Fourier transform supported on , then can be recovered uniquely from its restriction . In fact, as can be shown from a little bit of routine Fourier analysis, if we narrow the support of the Fourier transform slightly to for some , then the restriction has the same behaviour as the original function, in the sense that

for all ; see Theorem 4.18 of this paper of myself with Krause and Mirek. This is consistent with the uncertainty principle, which suggests that such functions should behave like a constant at scales .

The quantitative sampling theorem (4) can be used to give an alternate proof of Proposition 1(i), basically thanks to the identity

whenever is Schwartz and has Fourier transform supported in , and is also supported on ; this identity can be easily verified from the Poisson summation formula. A variant of this argument also yields an alternate proof of Proposition 1(ii), where the role of is now played by , and the standard embedding of into is now replaced by the embedding of into ; the analogue of (4) is now whenever is Schwartz and has Fourier transform supported in , and is endowed with probability Haar measure.

The locally compact abelian groups and can all be viewed as projections of the adelic integers (the product of the reals and the profinite integers ). By using the Ionescu-Wainger multiplier theorem, we are able to obtain an adelic version of the quantitative sampling estimate (5), namely

whenever , is Schwartz-Bruhat and has Fourier transform supported on for some sufficiently small (the precise bound on depends on in a fashion not detailed here). This allows one to obtain an “adelic” extension of the Ionescu-Wainger multiplier theorem, in which the operator norm of any discrete multiplier operator whose symbol is supported on major arcs can be shown to be comparable to the operator norm of an adelic counterpart to that multiplier operator; in principle this reduces “major arc” harmonic analysis on the integers to “low frequency” harmonic analysis on the adelic integers , which is a simpler setting in many ways (mostly because the set of major arcs (2) is now replaced with a product set ).
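As an aside, the combinatorial fact powering the probabilistic decoupling step (i) from earlier, namely that a uniformly random partition of the index set into $k$ classes completely shatters a fixed $k$-element subset with probability exactly $k!/k^k$, is easy to check by simulation (a toy Monte Carlo of my own):

```python
import math
import numpy as np

# A uniformly random partition of an index set into k classes (each index
# assigned independently and uniformly) "completely shatters" a fixed
# k-element subset (one element in each class) with probability k!/k^k.
rng = np.random.default_rng(0)
k, trials = 4, 200_000

# Only the class labels of the k subset elements matter, so we sample just
# those; check whether the k labels are pairwise distinct.
classes = rng.integers(0, k, size=(trials, k))
shattered = np.array([len(set(row)) == k for row in classes])

print(f"empirical: {shattered.mean():.4f}  "
      f"exact k!/k^k: {math.factorial(k) / k**k:.4f}")
```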

Theorem 1 (Birkhoff ergodic theorem). Let $(X, \mu, T)$ be a measure-preserving system (by which we mean $(X,\mu)$ is a $\sigma$-finite measure space, and $T: X \to X$ is invertible and measure-preserving), and let $f \in L^p(X)$ for any $1 \le p < \infty$. Then the averages $\frac{1}{N} \sum_{n=1}^N f(T^n x)$ converge pointwise for $\mu$-almost every $x \in X$.

Pointwise ergodic theorems have an inherently harmonic-analytic content to them, as they are closely tied to maximal inequalities. For instance, the Birkhoff ergodic theorem is closely tied to the Hardy-Littlewood maximal inequality.
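As a quick illustration (a toy example of my own), one can watch the averages of Theorem 1 converge for the irrational rotation $x \mapsto x + \alpha \bmod 1$, which is uniquely ergodic, so the Birkhoff averages converge to the space average for every starting point:

```python
import numpy as np

# Birkhoff averages (1/N) sum_{n<=N} f(T^n x) for the rotation
# T x = x + alpha mod 1 with alpha irrational; the averages converge to the
# space average of f (here 1/2) for every x, by unique ergodicity.
alpha = np.sqrt(2) - 1
f = lambda x: np.sin(2 * np.pi * x) ** 2        # space average = 1/2
x0 = 0.1

for N in (10, 100, 1000, 10000, 100000):
    orbit = (x0 + alpha * np.arange(1, N + 1)) % 1.0
    print(N, f(orbit).mean().round(5))
```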

The above theorem was generalized by Bourgain (conceding the endpoint $p=1$, where pointwise almost everywhere convergence is now known to fail) to polynomial averages:

Theorem 2 (Pointwise ergodic theorem for polynomial averages). Let $(X, \mu, T)$ be a measure-preserving system, and let $f \in L^p(X)$ for any $1 < p < \infty$. Let $P$ be a polynomial with integer coefficients. Then the averages $\frac{1}{N} \sum_{n=1}^N f(T^{P(n)} x)$ converge pointwise for $\mu$-almost every $x \in X$.

For bilinear averages, we have a separate 1990 result of Bourgain (for $L^\infty$ functions), extended to other $L^p$ spaces by Lacey, and with an alternate proof given by Demeter:

Theorem 3 (Pointwise ergodic theorem for two linear polynomials). Let $(X, \mu, T)$ be a measure-preserving system with finite measure, and let $f \in L^p(X)$, $g \in L^q(X)$ for some with . Then for any integers $a, b$, the averages $\frac{1}{N} \sum_{n=1}^N f(T^{an} x)\, g(T^{bn} x)$ converge pointwise almost everywhere.

It has been an open question for some time (see e.g., Problem 11 of this survey of Frantzikinakis) to extend this result to other bilinear ergodic averages. In our paper we are able to achieve this in the partially linear case:

Theorem 4 (Pointwise ergodic theorem for one linear and one nonlinear polynomial). Let $(X, \mu, T)$ be a measure-preserving system, and let $f \in L^p(X)$, $g \in L^q(X)$ for some with . Then for any polynomial $P$ of degree $\ge 2$, the averages $\frac{1}{N} \sum_{n=1}^N f(T^{n} x)\, g(T^{P(n)} x)$ converge pointwise almost everywhere.

We actually prove a bit more than this, namely a maximal function estimate and a variational estimate, together with some additional estimates that “break duality” by applying in certain ranges with , but we will not discuss these extensions here. A good model case to keep in mind is when and (which is the case we started with). We note that norm convergence for these averages was established much earlier by Furstenberg and Weiss (in the case at least), and in fact norm convergence for arbitrary polynomial averages is now known thanks to the work of Host-Kra, Leibman, and Walsh.

Our proof of Theorem 4 is much closer in spirit to Theorem 2 than to Theorem 3. The property of the averages shared in common by Theorems 2, 4 is that they have “true complexity zero”, in the sense that they can only be large if the functions involved are “major arc” or “profinite”, in that they behave periodically over very long intervals (or like a linear combination of such periodic functions). In contrast, the average in Theorem 3 has “true complexity one”, in the sense that it can also be large if are “almost periodic” (a linear combination of eigenfunctions, or plane waves), and as such all proofs of the latter theorem have relied (either explicitly or implicitly) on some form of time-frequency analysis. In principle, the true complexity zero property reduces one to the study of the behaviour of averages on major arcs. However, until recently the available estimates to quantify this true complexity zero property were not strong enough to achieve a good reduction of this form, and even once one was in the major arc setting the bilinear averages in Theorem 4 were still quite complicated, exhibiting a mixture of both continuous and arithmetic aspects, both of which are genuinely bilinear in nature.

After applying standard reductions such as the Calderón transference principle, the key task is to establish a suitably “scale-invariant” maximal (or variational) inequality on the integer shift system (in which $X = {\bf Z}$ with counting measure, and $T$ is the standard shift). A model problem is to establish the maximal inequality

where ranges over powers of two and is the bilinear operator . The single scale estimate , or equivalently (by duality) , is immediate from Hölder’s inequality; the difficulty is how to take the supremum over scales .

The first step is to understand when the single-scale estimate (2) can come close to equality. A key example to keep in mind is when , , where is a small modulus, are such that , is a smooth cutoff to an interval of length , and is also supported on and behaves like a constant on intervals of length . Then one can check that (barring some unusual cancellation) (2) is basically sharp for this example. A remarkable result of Peluse and Prendiville (generalised to arbitrary nonlinear polynomials by Peluse) asserts, roughly speaking, that this example is basically the only way in which (2) can be saturated, at least when are supported on a common interval of length and are normalised in rather than . (Strictly speaking, the above paper of Peluse and Prendiville only says something like this regarding the factors; the corresponding statement for was established in a subsequent paper of Peluse and Prendiville.) The argument requires tools from additive combinatorics such as the Gowers uniformity norms, and hinges in particular on the “degree lowering argument” of Peluse and Prendiville, which I discussed in this previous blog post. Crucially for our application, the estimates are very quantitative, with all bounds being polynomial in the ratio between the left and right hand sides of (2) (or more precisely, the -normalized version of (2)).

For our applications we had to extend the inverse theory of Peluse and Prendiville to an theory. This turned out to require a certain amount of “sleight of hand”. Firstly, one can dualise the theorem of Peluse and Prendiville to show that the “dual function”

can be well approximated in by a function that has Fourier support on “major arcs” if enjoy control. To get the required extension to in the aspect one has to improve the control on the error from to ; this can be done by some interpolation theory combined with the useful Fourier multiplier theory of Ionescu and Wainger on major arcs. Then, by further interpolation using recent improving estimates of Han, Kovač, Lacey, Madrid, and Yang for linear averages such as , one can relax the hypothesis on to an hypothesis, and then by undoing the duality one obtains a good inverse theorem for (2) for the function ; a modification of the arguments also gives something similar for .

Using these inverse theorems (and the Ionescu-Wainger multiplier theory) one still has to understand the “major arc” portion of (1); a model case arises when are supported near rational numbers with for some moderately large . The inverse theory gives good control (with an exponential decay in ) on individual scales , and one can leverage this with a Rademacher-Menshov type argument (see e.g., this blog post) and some closer analysis of the bilinear Fourier symbol of to eventually handle all “small” scales, with ranging up to say where for some small constant and large constant . For the “large” scales, it becomes feasible to place all the major arcs simultaneously under a single common denominator , and then a quantitative version of the Shannon sampling theorem allows one to transfer the problem from the integers to the locally compact abelian group . Actually it was conceptually clearer for us to work instead with the adelic integers , which is the inverse limit of the . Once one transfers to the adelic integers, the bilinear operators involved split up as tensor products of the “continuous” bilinear operator

on , and the “arithmetic” bilinear operator on the profinite integers , equipped with probability Haar measure . After a number of standard manipulations (interpolation, Fubini’s theorem, Hölder’s inequality, variational inequalities, etc.) the task of estimating this tensor product boils down to establishing an improving estimate for some . Splitting the profinite integers into the product of the -adic integers , it suffices to establish this claim for each separately (so long as we keep the implied constant equal to for sufficiently large ). This turns out to be possible using an arithmetic version of the Peluse-Prendiville inverse theorem as well as an arithmetic improving estimate for linear averaging operators which ultimately arises from some estimates on the distribution of polynomials on the -adic field , which are a variant of some estimates of Kowalski and Wright.

The conjecture gets more difficult as increases, and also becomes more difficult the more slowly grows with . The conjecture is equivalent to the assertion

which was proven (for arbitrarily slowly growing ) in a landmark paper of Matomäki and Radziwill, discussed for instance in this blog post.

For , the conjecture is equivalent to the assertion

This remains open for sufficiently slowly growing (and it would be a major breakthrough in particular if one could obtain this bound for as small as for any fixed , particularly if applicable to more general bounded multiplicative functions than , as this would have new implications for a generalization of the Chowla conjecture known as the Elliott conjecture). Recently, Kaisa, Maks and myself were able to establish this conjecture in the range (in fact we have since worked out in the current paper that we can get as small as ). In our current paper we establish the Fourier uniformity conjecture for higher for the same range of . This in particular implies local orthogonality to polynomial phases, where denotes the polynomials of degree at most , but the full conjecture is a bit stronger than this, establishing the more general statement for any degree filtered nilmanifold and Lipschitz function , where now ranges over polynomial maps from to . The method of proof follows the same general strategy as in the previous paper with Kaisa and Maks. (The equivalence of (4) and (1) follows from the inverse conjecture for the Gowers norms, proven in this paper.) We first quickly sketch the proof of (3), using very informal language to avoid many technicalities regarding the precise quantitative form of various estimates. If the estimate (3) fails, then we have the correlation estimate for many and some polynomial depending on . The difficulty here is to understand how can depend on . We write the above correlation estimate more suggestively as . Because of the multiplicativity at small primes , one expects to have a relation of the form for many for which for some small primes . (This can be formalised using an inequality of Elliott related to the Turán-Kubilius theorem.) This gives a relationship between and for “edges” in a rather sparse “graph” connecting the elements of say . Using some graph theory one can locate some non-trivial “cycles” in this graph that eventually lead (in conjunction with a certain technical but important “Chinese remainder theorem” step to modify the to eliminate a rather serious “aliasing” issue that was already discussed in this previous post) to functional equations of the form for some large and close (but not identical) integers , where should be viewed in a first approximation (ignoring a certain “profinite” or “major arc” term for simplicity) as “differing by a slowly varying polynomial”, and the polynomials should now be viewed as taking values on the reals rather than the integers. This functional equation can be solved to obtain a relation of the form for some real number of polynomial size, and with further analysis of the relation (5) one can make basically independent of . This simplifies (3) to something like , and this is now of a form that can be treated by the theorem of Matomäki and Radziwill (because is a bounded multiplicative function). (Actually because of the profinite term mentioned previously, one also has to insert a Dirichlet character of bounded conductor into this latter conclusion, but we will ignore this technicality.)

Now we apply the same strategy to (4). For abelian the claim follows easily from (3), so we focus on the non-abelian case. One now has a polynomial sequence attached to many , and after a somewhat complicated adaptation of the above arguments one again ends up with an approximate functional equation

where the relation is rather technical and will not be detailed here. A new difficulty arises in that there are some unwanted solutions to this equation, such as for some , which do not necessarily lead to multiplicative characters like as in the polynomial case, but instead to some unfriendly looking “generalized multiplicative characters” (think of as a rough caricature). To avoid this problem, we rework the graph theory portion of the argument to produce not just one functional equation of the form (6) for each , but a family of such equations.

We give two applications of this higher order Fourier uniformity. One regards the growth of the number

of length sign patterns in the Liouville function. The Chowla conjecture implies that , but even the weaker conjecture of Sarnak that for some remains open. Until recently, the best asymptotic lower bound on was , due to McNamara; with our result, we can now show for any (in fact we can get for any ). The idea is to repeat the now-standard argument to exploit multiplicativity at small primes to deduce Chowla-type conjectures from Fourier uniformity conjectures, noting that the Chowla conjecture would give all the sign patterns one could hope for. The usual argument here uses the “entropy decrement argument” to eliminate a certain error term (involving the large but mean zero factor ). However, the observation is that if there are extremely few sign patterns of length , then the entropy decrement argument is unnecessary (there isn’t much entropy to begin with), and a more low-tech moment method argument (similar to the derivation of Chowla’s conjecture from Sarnak’s conjecture, as discussed for instance in this post) gives enough of Chowla’s conjecture to produce plenty of length sign patterns. If there are not extremely few sign patterns of length then we are done anyway. One quirk of this argument is that the sign patterns it produces may only appear exactly once; in contrast with preceding arguments, we were not able to produce a large number of sign patterns that each occur infinitely often.

The second application is to obtain cancellation for various polynomial averages involving the Liouville function or von Mangoldt function , such as

or where are polynomials of degree at most , no two of which differ by a constant (the latter is essential to avoid having to establish the Chowla or Hardy-Littlewood conjectures, which of course remain open). Results of this type were previously obtained by Tamar Ziegler and myself in the “true complexity zero” case when the polynomials had distinct degrees, in which one could use the theory of Matomäki and Radziwill; now that higher uniformity is available at the scale , we can remove this restriction.

Rao’s argument used the Shannon noiseless coding theorem. It turns out that the argument can be arranged in the very slightly different language of Shannon entropy, and I would like to present it here. The argument proceeds by locating the core and petals of the sunflower separately (this strategy is also followed in Alweiss-Lovett-Wu-Zhang). In both cases the following definition will be key. In this post all random variables, such as random sets, will be understood to be discrete random variables taking values in a finite range. We always use boldface symbols to denote random variables, and non-boldface for deterministic quantities.

Definition 1 (Spread set). Let $R \ge 1$. A random set ${\bf A}$ is said to be $R$-spread if one has ${\bf P}(S \subset {\bf A}) \le R^{-|S|}$ for all sets $S$. A family of sets is said to be $R$-spread if is non-empty and the random variable is $R$-spread, where is drawn uniformly from .

The core can then be selected greedily in such a way that the remainder of a family becomes spread:

Lemma 2 (Locating the core). Let be a family of subsets of a finite set , each of cardinality at most , and let . Then there exists a “core” set of cardinality at most such that the set has cardinality at least , and such that the family is -spread. Furthermore, if and the are distinct, then .

*Proof:* We may assume is non-empty, as the claim is trivial otherwise. For any , define the quantity

Let be the set (3). Since , is non-empty. It remains to check that the family is -spread. But for any and drawn uniformly at random from one has

Since and , we obtain the claim.

In view of the above lemma, the bound (2) will then follow from

Proposition 3 (Locating the petals). Let be natural numbers, and suppose that for a sufficiently large constant . Let be a finite family of subsets of a finite set , each of cardinality at most , which is -spread. Then there exist such that is disjoint.

Indeed, to prove (2), we assume that is a family of sets of cardinality greater than for some ; by discarding redundant elements and sets we may assume that is finite and that all the are contained in a common finite set . Apply Lemma 2 to find a set of cardinality such that the family is -spread. By Proposition 3 we can find such that are disjoint; since these sets have cardinality , this implies that the are distinct. Hence form a sunflower as required.

Remark 4. Proposition 3 is easy to prove if we strengthen the condition on to . In this case, we have for every , hence by the union bound we see that for any with there exists such that is disjoint from the set , which has cardinality at most . Iterating this, we obtain the conclusion of Proposition 3 in this case. This recovers a bound of the form , and by pursuing this idea a little further one can recover the original upper bound (1) of Erdős and Rado.
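The greedy argument sketched in Remark 4 translates directly into code. Here is a short illustrative implementation of the classical Erdős-Rado extraction (a sketch of my own; no attempt is made at the sharper bounds discussed in this post): either the family contains $k$ pairwise disjoint sets, which form a sunflower with empty core, or some popular element can be added to the core and one recurses.

```python
# Classical Erdos-Rado sunflower extraction: given a family of finite sets,
# find k of them forming a sunflower (all pairwise intersections equal to a
# common core), by greedily extracting disjoint sets or recursing on a
# popular element.  Succeeds whenever the family is large enough.
def find_sunflower(family, k):
    family = list({frozenset(s) for s in family})   # remove duplicates
    disjoint, used = [], set()
    for s in family:                # greedy maximal disjoint subfamily
        if not (s & used):
            disjoint.append(s)
            used |= s
    if len(disjoint) >= k:
        return disjoint[:k]         # sunflower with empty core
    if not used:
        return None                 # family too small to continue
    # Every set meets the union of the disjoint subfamily, so some element
    # of that small union is popular: add it to the core and recurse.
    x = max(used, key=lambda e: sum(e in s for s in family))
    petals = find_sunflower([s - {x} for s in family if x in s], k)
    return None if petals is None else [p | {x} for p in petals]

family = [{1, 2}, {2, 3}, {3, 4}, {1, 4}, {1, 5}, {2, 6}, {5, 6}]
print(find_sunflower(family, 3))    # three sets with a common core
```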

It remains to prove Proposition 3. In fact we can locate the petals one at a time, placing each petal inside a random set.

Proposition 5 (Locating a single petal). Let the notation and hypotheses be as in Proposition 3. Let be a random subset of , such that each lies in with an independent probability of . Then with probability greater than , contains one of the .

To see that Proposition 5 implies Proposition 3, we randomly partition into by placing each into one of the , chosen uniformly and independently at random. By Proposition 5 and the union bound, we see that with positive probability, it is simultaneously true for all that each contains one of the . Selecting one such for each , we obtain the required disjoint petals.

We will prove Proposition 5 by gradually increasing the density of the random set and arranging the sets to get quickly absorbed by this random set. The key iteration step is

Proposition 6 (Refinement inequality). Let and . Let be a random subset of a finite set which is -spread, and let be a random subset of independent of , such that each lies in with an independent probability of . Then there exists another random subset of with the same distribution as , such that and

Note that a direct application of the first moment method gives only the bound

but the point is that by switching from to an equivalent we can replace the factor by a quantity significantly smaller than .

One can iterate the above proposition, repeatedly replacing with (noting that this preserves the -spread nature) to conclude

Corollary 7 (Iterated refinement inequality). Let , , and . Let be a random subset of a finite set which is -spread, and let be a random subset of independent of , such that each lies in with an independent probability of . Then there exists another random subset of with the same distribution as , such that

Now we can prove Proposition 5. Let be a parameter to be chosen shortly. Applying Corollary 7 with drawn uniformly at random from the , and setting , or equivalently , we have

In particular, if we set , so that , then by choice of we have , hence . In particular, with probability at least , there must exist such that , giving the proposition.

It remains to establish Proposition 6. This is the difficult step, and requires a clever way to find the variant of that has better containment properties in than does. The main trick is to make a conditional copy of that is conditionally independent of subject to the constraint . The point here is that this constraint implies the inclusions

and . Because of the -spread hypothesis, it is hard for to contain any fixed large set. If we could apply this observation in the contrapositive to , we could hope to get a good upper bound on the size of and hence on , thanks to (4). One can also hope to improve such an upper bound by also employing (5), since it is also hard for the random set to contain a fixed large set. There are, however, difficulties with implementing this approach due to the fact that the random sets are coupled with in a moderately complicated fashion. In Rao’s argument a somewhat complicated encoding scheme was created to give information-theoretic control on these random variables; below the fold we accomplish a similar effect by using Shannon entropy inequalities in place of explicit encoding. A certain amount of information-theoretic sleight of hand is required to decouple certain random variables to the extent that the Shannon inequalities can be effectively applied. The argument bears some resemblance to the “entropy compression method” discussed in this previous blog post; there may be a way to more explicitly express the argument below in terms of that method. (There is also some kinship with the method of dependent random choice, which is used for instance to establish the Balog-Szemerédi-Gowers lemma, and was also translated into information theoretic language in these unpublished notes of Van Vu and myself.)

** — 1. Shannon entropy — **

In this section we lay out all the tools from the theory of Shannon entropy that we will need.

Define an *empirical sequence* for a random variable ${\bf X}$ taking values in a discrete set to be a sequence $x_1, x_2, x_3, \dots$ in that set such that the empirical samples of this sequence converge in distribution to ${\bf X}$, in the sense that

$$\lim_{n \to \infty} \frac{1}{n} \# \{ 1 \le i \le n : x_i = x \} = {\bf P}( {\bf X} = x )$$

for every $x$.

If ${\bf X}$ is a random variable taking values in some set , its *Shannon entropy* is defined by the formula

$${\bf H}({\bf X}) := \sum_x {\bf P}({\bf X} = x) \log_2 \frac{1}{{\bf P}({\bf X} = x)},$$

with the convention that the summand vanishes when ${\bf P}({\bf X} = x) = 0$.

We record the following standard and easily verified facts:

Lemma 8 (Basic Shannon inequalities). Let be random variables.

- (i) (Monotonicity) If is a deterministic function of , then . More generally, if is a deterministic function of and , then . If is a deterministic function of , then .
- (ii) (Subadditivity) One has , with equality iff , are independent. More generally, one has , with equality iff , are conditionally independent with respect to .
- (iii) (Chain rule) One has . More generally . In particular , and iff are independent; similarly, , and iff are conditionally independent with respect to .
- (iv) (Jensen’s inequality) If takes values in a finite set then , with equality iff is uniformly distributed in . More generally, if takes values in a set that depends on , then , with equality iff is uniformly distributed in after conditioning on .
- (v) (Gibbs inequality) If take values in the same finite set , then (we permit the right-hand side to be infinite, which makes the inequality vacuously true).

See this previous blog post for some intuitive analogies to understand Shannon entropy.
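Before putting these inequalities to work, it may help to see them numerically. The following quick check (an illustrative computation of my own) verifies the chain rule and subadditivity from Lemma 8 on a small explicit joint distribution, with base-$2$ logarithms throughout:

```python
import numpy as np

# Verify the chain rule H(X,Y) = H(Y) + H(X|Y) and the subadditivity bound
# H(X,Y) <= H(X) + H(Y) on a small joint pmf (base-2 logarithms).
def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

joint = np.array([[0.3, 0.1],      # rows: values of X, columns: values of Y
                  [0.1, 0.2],
                  [0.0, 0.3]])

pX, pY = joint.sum(axis=1), joint.sum(axis=0)
HXY, HX, HY = H(joint.flatten()), H(pX), H(pY)
# H(X|Y) computed directly as the average entropy of the conditional laws.
HX_given_Y = sum(pY[j] * H(joint[:, j] / pY[j]) for j in range(len(pY)))

print(f"H(X,Y)        = {HXY:.4f}")
print(f"H(Y) + H(X|Y) = {HY + HX_given_Y:.4f}   (chain rule)")
print(f"H(X) + H(Y)   = {HX + HY:.4f}           (subadditivity upper bound)")
```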

Now we establish some inequalities of relevance to random sets.

We first observe that any small random set largely determines any of its subsets. Define a *random subset* of a random set to be a random set such that holds almost surely.

Lemma 9 (Subsets of small sets have small conditional entropy). Let be a random finite set.

- (i) One has for any random subset of .
- (ii) One has . If is almost surely non-empty, we can improve this to .

*Proof:* The set takes values in the power set of , so the claim (i) follows from Lemma 8(iv). (Note how it is convenient here that we are using the base $2$ for the logarithm.)

For (ii), apply Lemma 8(v) with and the geometric random variable for natural numbers (or for positive , if is non-empty).

Now we encode the property of a random variable being -spread in the language of Shannon entropy.

Lemma 10 (Information-theoretic interpretation of spread). Let be a random finite set that is -spread for some .

- (i) If is uniformly distributed amongst some finite collection of sets, then for all random subsets of .
- (ii) In the general case, if are an empirical sequence of , then as , where is drawn uniformly from and is a random subset of .

Informally: large random subsets of an -spread set necessarily have a lot of mutual information with . Conversely, one can bound the size of a random subset of an -spread set by bounding its mutual information with .

*Proof:* In case (i), it suffices by Lemma 8(iv) to establish the bound

Given a finite non-empty set and , let denote the collection of -element subsets of . A uniformly chosen element of is thus a random -element subset of ; we refer to the quantity as the *density* of this random subset, and as a *uniformly chosen random subset of of density *. (Of course, this is only defined when is an integer multiple of .) Uniformly chosen random sets have the following information-theoretic relationships to small random sets:

Lemma 11. Let be a finite non-empty set, and let be a uniformly chosen random subset of of some density (which is a multiple of ).

- (i) (Spread) If is a random subset of , then
- (ii) (Absorption) If is a random subset of , then

*Proof:* To prove (i), it suffices by Lemma 10(i) to show that is -spread, which amounts to showing that

For (ii), by replacing with we may assume that are disjoint. From Lemma 8(iii) and Lemma 9(ii) it suffices to show that

which in turn is implied by for each . By Lemma 8(iv) it suffices to show that ; but this follows from multiplying together the inequalities for .

The following “relative product” construction will be important for us. Given a random variable and a deterministic function of that variable, one can construct a conditionally independent copy of subject to the condition , with the joint distribution

Note that this is usually not the same as taking a genuinely independent copy, since the two copies remain coupled through the common value of the deterministic function.
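The relative product construction is concrete enough to simulate. The sketch below (an illustrative simulation with a small example of my own) builds a conditionally independent copy of a random variable given the value of a deterministic function of it, and checks that the copy has the same distribution, always agrees on the function value, yet is not a verbatim copy:

```python
import random
from collections import Counter

# Relative product: given X and the value of f(X), draw X' from the
# conditional law of X given that value.  Then X' has the same law as X and
# f(X') = f(X) always, but X' is not a verbatim copy of X.
random.seed(0)
values = [0, 1, 2, 3]
probs = [0.1, 0.2, 0.3, 0.4]
f = lambda x: x % 2                          # the deterministic function

def sample_pair():
    x = random.choices(values, probs)[0]
    fibre = [v for v in values if f(v) == f(x)]
    weights = [probs[values.index(v)] for v in fibre]
    x_prime = random.choices(fibre, weights)[0]
    return x, x_prime

pairs = [sample_pair() for _ in range(100_000)]
print("law of X:  ", Counter(x for x, _ in pairs))
print("law of X': ", Counter(xp for _, xp in pairs))
print("f always matches:", all(f(x) == f(xp) for x, xp in pairs))
print("X' == X fraction:", sum(x == xp for x, xp in pairs) / len(pairs))
```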

** — 2. Proof of refinement inequality — **

Now we have enough tools to prove Proposition 6. Let be as in that proposition. On the event that is empty we can set , so we can instead condition on the event that is non-empty. In particular

In order to use Lemma 10 we fix an empirical sequence for . We relabel as , and let be a parameter going off to infinity (so in particular is identified with a subset of ). We let be drawn uniformly at random from , and let be a uniform random subset of of density independent of . Observe from Stirling’s formula that converges in distribution to . Thus it will suffice to find another uniform random variable from such that

as , since we can pass to a subsequence in which converges in distribution to . From (8) we have .

From we can form the random set ; we then form a conditionally independent copy of subject to the constraint

We use as the uniform variable to establish (9). The point is that the relation (11) implies that , so it will suffice to show that , and hence by (7) that , and hence, by Lemma 8(ii) and independence of , that .

Now we try to relate the first term on the left-hand side with . Note from (11) that we have the identity , and hence by Lemma 8(i) . We estimate the relative entropy here by selecting first , then , then . More precisely, using the chain rule and monotonicity (Lemma 8(i), (iii)) we have . From Lemma 9(i) we have and . Putting all this together, we conclude . If we apply Lemma 10(ii) right away we will get the estimate , which is a bound resembling (12), but the dependence on the parameters is too weak. To do better we return to the relative product construction to decouple some of the random variables here. From the tuple we can form the random variable , then form a conditionally independent copy of subject to the constraints . From (11) and Lemma 8(i) we then have . The point is that is now conditionally independent of relative to , so we can also rewrite the above conditional entropy as . We now use the chain rule to disentangle the role of , writing the previous expression as . From independence we have , and from Lemma 9(i) we have . We discard the negative term . Putting all this together, we obtain and . Thus by Lemma 8(i), (ii), followed by Lemma 10(ii) and Lemma 11(i), we have , which when inserted back into (14), using , simplifies to , and the claim follows.

The random expression (2) is somewhat reminiscent of a moment of a random matrix, and one can start computing it analogously. For instance, if one has a decomposition such as (1), then (2) expands out as a sum

The random fluctuations of this sum can be treated by a routine second moment estimate, and the main task is to show that the expected value becomes asymptotically independent of .

If all the were distinct, then one could use independence to factor the expectation to get

which is a relatively straightforward expression to calculate (particularly in the model (1), where all the expectations here in fact vanish). The main difficulty is that there are a number of configurations in (3) in which various of the collide with each other, preventing one from easily factoring the expression. A typical problematic contribution, for instance, would be a sum of the form . This is an example of what we call a . In principle all of these limits are computable, but the combinatorics is remarkably complicated, and while there is certainly some algebraic structure to the calculations, it does not seem to be easily describable in terms of an existing framework (e.g., that of free probability).
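The way colliding indices dominate such a computation can be seen in miniature (a toy analogue of my own, not the actual computation from the paper) in the fourth moment of a normalised sum of random signs: the limiting answer $3$ comes entirely from counting the quadruples of indices that collide in two pairs.

```python
import numpy as np

# Toy moment computation with collisions: for iid signs eps_i = +-1,
# E[(n^{-1/2} sum_i eps_i)^4] = (3n(n-1) + n) / n^2  ->  3.
# The 3n(n-1) term counts quadruples (i,j,k,l) colliding in two pairs
# (patterns iijj, ijij, ijji); quadruples with a lone index vanish in
# expectation, which is exactly the "factoring" obstruction above.
rng = np.random.default_rng(0)
n, trials = 50, 100_000

eps = rng.choice((-1, 1), size=(trials, n))
mc = ((eps.sum(axis=1) / np.sqrt(n)) ** 4).mean()
exact = (3 * n * (n - 1) + n) / n**2

print(f"Monte Carlo: {mc:.3f}   collision count: {exact:.3f}")
```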
