There is a very nice recent paper by Lemke Oliver and Soundararajan (complete with a popular science article about it by the consistently excellent Erica Klarreich for Quanta) about a surprising (but now satisfactorily explained) bias in the distribution of pairs of consecutive primes $p_n, p_{n+1}$ when reduced to a small modulus $q$.
This phenomenon is superficially similar to the more well known Chebyshev bias concerning the reduction of a single prime $p$ to a small modulus $q$, but is in fact a rather different (and much stronger) bias than the Chebyshev bias, and seems to arise from a completely different source. The Chebyshev bias asserts, roughly speaking, that a randomly selected prime $p$ of a large magnitude $x$ will typically (though not always) be slightly more likely to be a quadratic non-residue modulo $q$ than a quadratic residue, but the bias is small (the difference in probabilities is only about $O(1/\sqrt{x})$ for typical choices of $x$), and certainly consistent with known or conjectured positive results such as Dirichlet's theorem or the generalised Riemann hypothesis. The reason for the Chebyshev bias can be traced back to the von Mangoldt explicit formula, which relates the distribution of the von Mangoldt function $\Lambda$ modulo $q$ with the zeroes of the $L$-functions with period $q$. This formula predicts (assuming some standard conjectures like GRH) that the von Mangoldt function $\Lambda$ is quite unbiased modulo $q$. The von Mangoldt function is mostly concentrated in the primes, but it also has a medium-sized contribution coming from squares of primes, which are of course all located in the quadratic residues modulo $q$. (Cubes and higher powers of primes also make a small contribution, but these are quite negligible asymptotically.) To balance everything out, the contribution of the primes must then exhibit a small preference towards quadratic non-residues, and this is the Chebyshev bias. (See this article of Rubinstein and Sarnak for a more technical discussion of the Chebyshev bias, and this survey of Granville and Martin for an accessible introduction. The story of the Chebyshev bias is also related to Skewes' number, once considered the largest explicit constant to naturally appear in a mathematical argument.)
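(As an aside, the Chebyshev bias is easy to observe numerically. The following Python snippet, a quick illustration rather than part of the analysis above, and assuming the sympy library is available, counts primes up to a million in the two residue classes modulo $4$; the quadratic non-residue class $3 \bmod 4$ stays slightly ahead.)

```python
from sympy import primerange

# Count primes up to N in the residue classes 1 and 3 mod 4.
# The class 3 mod 4 (the quadratic non-residues) tends to stay
# slightly ahead -- this is the Chebyshev bias.
N = 10**6
res, nonres = 0, 0
for p in primerange(3, N):
    if p % 4 == 1:
        res += 1
    else:
        nonres += 1
print(f"primes 1 mod 4 up to {N}: {res}")
print(f"primes 3 mod 4 up to {N}: {nonres}")
print(f"lead of the non-residue class: {nonres - res}")
```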
The paper of Lemke Oliver and Soundararajan considers instead the distribution of the pairs $(p_n \bmod q, p_{n+1} \bmod q)$ for small $q$ and for large consecutive primes $p_n, p_{n+1}$, say drawn at random from the primes comparable to some large $x$. For sake of discussion let us just take $q = 3$. Then all primes larger than $3$ are either $1 \bmod 3$ or $2 \bmod 3$; Chebyshev's bias gives a very slight preference to the latter (of order $O(1/\sqrt{x})$, as discussed above), but apart from this, we expect the primes to be more or less equally distributed in both classes. For instance, assuming GRH, the probability that $p_n$ lands in $1 \bmod 3$ would be $\frac{1}{2} + O(x^{-1/2+o(1)})$, and similarly for $2 \bmod 3$.
In view of this, one would expect that up to errors of $O(x^{-1/2+o(1)})$ or so, the pair $(p_n \bmod 3, p_{n+1} \bmod 3)$ should be equally distributed amongst the four options $(1,1)$, $(1,2)$, $(2,1)$, $(2,2)$, thus for instance the probability that this pair is $(1,1)$ would naively be expected to be $\frac{1}{4}$, and similarly for the other three tuples. These assertions are not yet proven (although some non-trivial upper and lower bounds for such probabilities can be obtained from recent work of Maynard).
However, Lemke Oliver and Soundararajan argue (backed by both plausible heuristic arguments (based ultimately on the Hardy-Littlewood prime tuples conjecture), as well as substantial numerical evidence) that there is a significant bias away from the tuples $(1,1)$ and $(2,2)$ – informally, adjacent primes don't like being in the same residue class! For instance, they predict that the probability of attaining $(1,1)$ is in fact slightly less than $\frac{1}{4}$, with a deficit comparable to $\frac{\log\log x}{\log x}$, with similar predictions for the other three pairs (in fact they give a somewhat more precise prediction than this). The magnitude of this bias, being comparable to $\frac{\log\log x}{\log x}$, is significantly stronger than the Chebyshev bias of $O(1/\sqrt{x})$.
One consequence of this prediction is that the prime gaps $p_{n+1} - p_n$ are slightly less likely to be divisible by $3$ than naive random models of the primes would predict. Indeed, if the four options $(1,1)$, $(1,2)$, $(2,1)$, $(2,2)$ all occurred with equal probability $\frac{1}{4}$, then $p_{n+1} - p_n$ should equal $0 \bmod 3$ with probability $\frac{1}{2}$, and $1 \bmod 3$ and $2 \bmod 3$ with probability $\frac{1}{4}$ each (as would be the case when taking the difference of two random numbers drawn from those integers not divisible by $3$); but the Lemke Oliver-Soundararajan bias predicts that the probability of $p_{n+1} - p_n$ being divisible by three should be slightly lower, falling short of $\frac{1}{2}$ by an amount comparable to $\frac{\log\log x}{\log x}$.
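The bias is large enough to be plainly visible in computations of modest size. Here is a quick Python experiment (an illustration of the counts discussed above, not of the prime tuples heuristics; it assumes sympy, and Python 3.10+ for itertools.pairwise):

```python
from collections import Counter
from itertools import pairwise
from sympy import primerange

# Tabulate (p_n mod 3, p_{n+1} mod 3) over consecutive primes up to N.
# The "repeated residue" pairs (1,1) and (2,2) occur noticeably less
# often than the naive prediction of 25% each.
N = 10**6
primes = list(primerange(5, N))   # skip 2 and 3
counts = Counter((p % 3, q % 3) for p, q in pairwise(primes))
total = sum(counts.values())
for pattern in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    print(pattern, f"{counts[pattern] / total:.4f}")
```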
Below the fold we will give a somewhat informal justification of (a simplified version of) this phenomenon, based on the Lemke Oliver-Soundararajan calculation using the prime tuples conjecture.
Van Vu and I just posted to the arXiv our paper "Sum-free sets in groups" (submitted to Discrete Analysis), as well as a companion survey article (submitted to J. Comb.). Given a subset $A$ of an additive group $G$, define the quantity $\phi(A)$ to be the cardinality of the largest subset $B$ of $A$ which is sum-free in $A$, in the sense that all the sums $b_1 + b_2$ with $b_1, b_2$ distinct elements of $B$ lie outside of $A$. For instance, if $A$ is itself a group, then $\phi(A) = 1$, since no two elements of $A$ can sum to something outside of $A$. More generally, if $A$ is the union of $k$ groups, then $\phi(A)$ is at most $k$, thanks to the pigeonhole principle.
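To make the definition concrete, here is a small brute-force Python routine (purely illustrative and exponential in $|A|$, so only usable for tiny sets; the function name phi is of course just for this sketch) that computes $\phi(A)$ for subsets of a cyclic group $\mathbb{Z}/n\mathbb{Z}$:

```python
from itertools import combinations

def phi(A, n):
    """Size of the largest subset B of A that is sum-free in A,
    i.e. b1 + b2 (mod n) lies outside A for all distinct b1, b2 in B.
    Brute force over all subsets, so only suitable for tiny A."""
    A = set(a % n for a in A)
    for r in range(len(A), 0, -1):
        for B in combinations(A, r):
            if all((b1 + b2) % n not in A
                   for b1, b2 in combinations(B, 2)):
                return r
    return 0

# A subgroup of Z/12Z: phi equals 1 (a singleton is vacuously sum-free).
print(phi({0, 4, 8}, 12))      # -> 1
# A union of a subgroup with one extra coset element: phi is 2 here.
print(phi({0, 4, 8, 6}, 12))   # -> 2
```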
If $G$ is the integers, then there are no non-trivial finite subgroups, and one can thus expect $\phi(A)$ to start growing with $|A|$. For instance, one has the following easy result:

Proposition 1 If $A$ is a finite set of natural numbers, then $\phi(A) \gg \log |A|$.
Proof: We use an argument of Ruzsa, which is based in turn on an older argument of Choi. Let $b_1$ be the largest element of $A$, and then recursively, once $b_1, \dots, b_k$ have been selected, let $b_{k+1}$ be the largest element of $A$ not equal to any of the $b_1, \dots, b_k$, such that $b_{k+1} + b_i \not\in A$ for all $1 \le i \le k$, terminating this construction when no such $b_{k+1}$ can be located. This gives a sequence $b_1 > b_2 > \dots > b_m$ of elements in $A$ which are sum-free in $A$, and with the property that for any $a \in A$, either $a$ is equal to one of the $b_i$, or else $a + b_i \in A$ for some $i$ with $b_i > a$. Iterating this, we see that any $a \in A$ is of the form $b_{i_0} - b_{i_1} - \dots - b_{i_j}$ for some $j \ge 0$ and distinct indices $i_0, i_1, \dots, i_j$. The number of such expressions is at most $m 2^m$, thus $|A| \le m 2^m$, which implies $m \gg \log |A|$. Since $\phi(A) \ge m$, the claim follows.
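The greedy selection in this proof is easy to simulate. The following Python sketch is my own illustrative rendering of the procedure for sets of positive integers; it scans $A$ from largest to smallest and keeps an element whenever all its sums with previously kept elements avoid $A$:

```python
def greedy_sum_free(A):
    """Greedy Ruzsa/Choi-type selection: scan A from largest to
    smallest, keeping b whenever b + b' avoids A for every element
    b' selected so far.  The output is sum-free in A."""
    A = set(A)
    B = []
    for b in sorted(A, reverse=True):
        if all(b + bp not in A for bp in B):
            B.append(b)
    return B

A = {1, 2, 3, 5, 8, 13, 21, 34}   # a small test set
B = greedy_sum_free(A)
print(B)                          # e.g. [34, 21, 8, 3, 1]
# Sanity check: all pairwise sums of distinct elements of B avoid A.
assert all(b1 + b2 not in A
           for i, b1 in enumerate(B) for b2 in B[:i])
```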
In particular, we have $\phi(A) \gg \log |A|$ for subsets $A$ of the integers. It has been possible to improve upon this easy bound, but only with remarkable effort, and the best lower bound currently known improves upon the logarithm by only very slowly growing factors.
Using the standard tool of Freiman homomorphisms, the above results for the integers extend to other torsion-free abelian groups $G$. In our paper we study the opposite case where $G$ is finite (but still abelian). In this paper of Erdös (in which the quantity $\phi(A)$ was first introduced), the following question was posed: if $\phi(A)$ is bounded and $A$ is sufficiently large, does this imply the existence of two elements $x, y \in A$ with $x + y = 0$? As it turns out, we were able to find some simple counterexamples to this statement. For instance, if $G$ is any finite additive group, then one can exhibit a set $A$ with bounded $\phi(A)$ but with no two elements of $A$ summing to zero; this type of example in fact works with the Mersenne prime used in the construction replaced by any larger Mersenne prime, and we also have a counterexample in cyclic groups of arbitrarily large order. However, in the positive direction, we can show that the answer to Erdös's question is positive if the order of $G$ is assumed to have no small prime factors. That is to say,
Theorem 2 For every $k \ge 1$ there exists a constant $C_k \ge 1$ such that if $G$ is a finite abelian group whose order is not divisible by any prime less than or equal to $C_k$, and $A$ is a subset of $G$ with order at least $C_k$ and $\phi(A) \le k$, then there exist $x, y \in A$ with $x + y = 0$.
There are two main tools used to prove this result. One is an "arithmetic removal lemma" proven by Král, Serra, and Vena. Note that the condition $\phi(A) \le k$ means that for any distinct $x_1, \dots, x_{k+1} \in A$, at least one of the sums $x_i + x_j$, $1 \le i < j \le k+1$, must also lie in $A$. Roughly speaking, the arithmetic removal lemma allows one to "almost" remove the requirement that the $x_1, \dots, x_{k+1}$ be distinct, which basically now means that $2x \in A$ for almost all $x \in A$. This near-dilation symmetry, when combined with the hypothesis that the order of $G$ has no small prime factors, gives a lot of "dispersion" in the Fourier coefficients of $1_A$, which can now be exploited to prove the theorem.
The second tool is the following structure theorem, which is the main result of our paper, and goes a fair ways towards classifying sets $A$ for which $\phi(A)$ is small:
Theorem 3 Let $A$ be a finite subset of an arbitrary additive group $G$, with $\phi(A) \le k$. Then one can find finite subgroups $H_1, \dots, H_m$ with $m$ bounded in terms of $k$, such that all of $A$ outside of a small exceptional set is covered by $H_1 \cup \dots \cup H_m$, with $A$ occupying a large fraction of each $H_i$. Furthermore, in a suitable regime of the parameters, the exceptional set is empty.
Roughly speaking, this theorem shows that the example of the union of $k$ subgroups mentioned earlier is more or less the "only" example of sets $A$ with $\phi(A) \le k$, modulo the addition of some small exceptional sets and some refinement of the subgroups to dense subsets.
This theorem has the flavour of other inverse theorems in additive combinatorics, such as Freiman's theorem, and indeed one can use Freiman's theorem (and related tools, such as the Balog-Szemerédi theorem) to easily get a weaker version of this theorem. Indeed, if there are no sum-free subsets of $A$ of order $k+1$, then a positive fraction (depending on $k$) of all pairs in $A \times A$ must have their sum also in $A$ (otherwise one could take $k+1$ random elements of $A$ and they would be sum-free in $A$ with positive probability). From this and the Balog-Szemerédi theorem and Freiman's theorem (in arbitrary abelian groups, as established by Green and Ruzsa), we see that $A$ must be "commensurate" with a "coset progression" of bounded rank. One can then eliminate the torsion-free component of this coset progression by a number of methods (e.g. by using variants of the argument in Proposition 1), with the upshot being that one can locate a finite group $H$ that has large intersection with $A$.
At this point it is tempting to simply remove $H$ from $A$ and iterate. But one runs into a technical difficulty that removing a set such as $A \cap H$ from $A$ can alter the quantity $\phi(A)$ in unpredictable ways, so one has to still keep $A \cap H$ around when analysing the residual set $A \setminus H$. A second difficulty is that the latter set could be considerably smaller than $A$ or $H$, but still large in absolute terms, so in particular any error term whose size is only bounded by a small multiple of $|H|$ could be massive compared with the residual set, and so such error terms would be unacceptable. One can get around these difficulties if one first performs some preliminary "normalisation" of the group $H$, so that the residual set does not intersect any coset of $H$ too strongly. The arguments become even more complicated when one starts removing more than one group $H_1, H_2, \dots$ from $A$ and analyses the residual set $A \setminus (H_1 \cup H_2 \cup \dots)$; indeed the "epsilon management" involved became so fearsomely intricate that we were forced to use a nonstandard analysis formulation of the problem in order to keep the complexity of the argument at a reasonable level (cf. my previous blog post on this topic). One drawback of doing so is that we have no effective bounds for the implied constants in our main theorem; it would be of interest to obtain a more direct proof of our main theorem that would lead to effective bounds.
I've just uploaded to the arXiv my paper "Finite time blowup for high dimensional nonlinear wave systems with bounded smooth nonlinearity", submitted to Comm. PDE. This paper is in the same spirit as (though not directly related to) my previous paper on finite time blowup of supercritical NLW systems, and was inspired by a question posed to me some time ago by Jeffrey Rauch. Here, instead of looking at supercritical equations, we look at an extremely subcritical equation, namely a system of the form
$$\Box u = f(u) \qquad (1)$$
where $u: \mathbb{R}^{1+d} \to \mathbb{R}^m$ is the unknown field, and $f: \mathbb{R}^m \to \mathbb{R}^m$ is the nonlinearity, which we assume to have all derivatives bounded. A typical example of such an equation is the higher-dimensional sine-Gordon equation
$$\Box u = \sin u$$
for a scalar field $u: \mathbb{R}^{1+d} \to \mathbb{R}$. Here $\Box := -\partial_t^2 + \Delta$ is the d'Alembertian operator. We restrict attention here to classical (i.e. smooth) solutions to (1).
We do not assume any Hamiltonian structure, so we do not require $f$ to be a gradient $\nabla F$ of a potential $F: \mathbb{R}^m \to \mathbb{R}$. But even without such Hamiltonian structure, the equation (1) is very well behaved, with many a priori bounds available. For instance, if the initial position $u(0)$ and initial velocity $\partial_t u(0)$ are smooth and compactly supported, then from finite speed of propagation $u(t)$ has uniformly bounded compact support for all $t$ in a bounded interval. As the nonlinearity $f$ is bounded, this immediately places $f(u)$ in $L^\infty_t L^2_x$ in any bounded time interval, which by the energy inequality gives an a priori $L^\infty_t H^1_x$ bound on $u$ in this time interval. Next, from the chain rule we have
$$\nabla f(u) = (Df)(u) \, \nabla u,$$
which (from the assumption that $Df$ is bounded) shows that $\nabla f(u)$ is in $L^\infty_t L^2_x$, which by the energy inequality again now gives an a priori $L^\infty_t H^2_x$ bound on $u$.
One might expect that one could keep iterating this and obtain a priori bounds on $u$ in arbitrarily smooth norms. In low dimensions such as $d \le 3$, this is a fairly easy task, since the above estimates and Sobolev embedding already place one in $L^\infty_t L^\infty_x$, and the nonlinear map $u \mapsto f(u)$ is easily verified to preserve the space $L^\infty_t H^k_x \cap L^\infty_t L^\infty_x$ for any natural number $k$, from which one obtains a priori bounds in any Sobolev space; from this and standard energy methods, one can then establish global regularity for this equation (that is to say, any smooth choice of initial data generates a global smooth solution). However, one starts running into trouble in higher dimensions, in which no $L^\infty_t L^\infty_x$ bound is available. The main problem is that even a really nice nonlinearity such as $u \mapsto \sin u$ is unbounded in higher Sobolev norms. The estimates
$$|\sin u| \le |u|, \qquad |\nabla(\sin u)| \le |\nabla u|$$
ensure that the map $u \mapsto \sin u$ is bounded in low regularity spaces like $L^2$ or $H^1$, but one already runs into trouble with the second derivative
$$\nabla^2 (\sin u) = (\cos u)\, \nabla^2 u - (\sin u)\, \nabla u \otimes \nabla u,$$
where there is a troublesome lower order term of size $O(|\nabla u|^2)$ which becomes difficult to control in higher dimensions, preventing the map $u \mapsto \sin u$ from being bounded in $H^2$. Ultimately, the issue here is that when $u$ is not controlled in $L^\infty$, the function $\sin u$ can oscillate at a much higher frequency than $u$; for instance, if $u$ is a one-dimensional wave of amplitude $A$ and frequency $\xi$, then $u$ oscillates at frequency $\xi$, but the function $\sin u$ more or less oscillates at the larger frequency $A \xi$.
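The troublesome term can be displayed with a one-line symbolic computation. This quick sympy check (illustrative only) expands two derivatives of $\sin u$:

```python
import sympy as sp

# For the sine-Gordon nonlinearity, two derivatives of sin(u(x))
# produce the lower order term -sin(u) * (u')^2, quadratic in the
# first derivatives of u; this is the term that obstructs
# boundedness of u -> sin(u) in higher Sobolev norms.
x = sp.symbols('x')
u = sp.Function('u')(x)

print(sp.expand(sp.diff(sp.sin(u), x, 2)))
# -> -sin(u(x))*Derivative(u(x), x)**2 + cos(u(x))*Derivative(u(x), (x, 2))
```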
In medium dimensions, it is possible to use dispersive estimates for the wave equation (such as the famous Strichartz estimates) to overcome these problems. This line of inquiry was pursued (albeit for slightly different classes of nonlinearity than those considered here) by Heinz-von Wahl, Pecher (in a series of papers), Brenner, and Brenner-von Wahl; to cut a long story short, one of the conclusions of these papers was that one had global regularity for equations such as (1) in dimensions $d \le 9$. (I reprove this result using modern Strichartz estimates and Littlewood-Paley techniques in an appendix to my paper. The references given also allow for some growth in the nonlinearity $f$, but we will not detail the precise hypotheses used in these papers here.)
In my paper, I complement these positive results with an almost matching negative result:
Theorem 1 If $d \ge 11$ and the number of components $m$ is large enough, then there exists a nonlinearity $f: \mathbb{R}^m \to \mathbb{R}^m$ with all derivatives bounded, and a solution to (1) that is smooth at time zero, but develops a singularity in finite time.
The construction crucially relies on the ability to choose the nonlinearity $f$, and also needs some injectivity properties on the solution $u$ (after making a symmetry reduction using an assumption of spherical symmetry to view $u$ as a function of just two variables rather than $d+1$), which restricts our counterexample to the case of systems with more than one component. Thus the model case of the higher-dimensional sine-Gordon equation is not covered by our arguments. Nevertheless (as with previous finite-time blowup results discussed on this blog), one can view this result as a barrier to trying to prove regularity for equations such as sine-Gordon in eleven and higher dimensions, as any such argument must somehow use a property of that equation that is not applicable to the more general system (1).
Let us first give some back-of-the-envelope calculations suggesting why there could be finite time blowup in eleven and higher dimensions. For sake of this discussion let us restrict attention to the sine-Gordon equation $\Box u = \sin u$. The blowup ansatz we will use is as follows: for each frequency in a sequence of large quantities going to infinity, there will be a spacetime "cube" on which the solution oscillates with a prescribed "amplitude" and "frequency", governed by an exponent to be chosen later; this ansatz is of course compatible with the uncertainty principle. Since the cubes shrink to a point as the frequency goes to infinity, this will create a singularity at the spacetime origin. To make this ansatz plausible, we wish to make the oscillation of the solution on each cube driven primarily by the forcing term generated at the previous scale. Thus, by Duhamel's formula, we expect a relation roughly of the form
on , where is the usual free wave propagator, and is the indicator function of .
On , oscillates with amplitude and frequency , we expect the derivative to be of size about , and so from the principle of stationary phase we expect to oscillate at frequency about . Since the wave propagator preserves frequencies, and is supposed to be of frequency on we are thus led to the requirement
where is surface measure on the unit sphere , and is the volume of that sphere. In our setting, is comparable to , and so we have the informal approximation
Since is bounded, is bounded as well. This gives a (non-rigorous) upper bound
which, when combined with our ansatz for the amplitude of $u$ on each cube, gives the constraint
which on applying (2) gives the further constraint
which can be rearranged as
It is now clear that the optimal choice of is
and this blowup ansatz is only self-consistent when
or equivalently if $d \ge 11$.
To turn this ansatz into an actual blowup example, we will construct as the sum of various functions that solve the wave equation with forcing term in , and which concentrate in with the amplitude and frequency indicated by the above heuristic analysis. The remaining task is to show that can be written in the form for some with all derivatives bounded. For this one needs some injectivity properties of (after imposing spherical symmetry to impose a dimensional reduction on the domain of from dimensions to ). This requires one to construct some solutions to the free wave equation that have some unusual restrictions on the range (for instance, we will need a solution taking values in the plane that avoid one quadrant of that plane). In order to do this we take advantage of the very explicit nature of the fundamental solution to the wave equation in odd dimensions (such as ), particularly under the assumption of spherical symmetry. Specifically, one can show that in odd dimension , any spherically symmetric function of the form
for an arbitrary smooth function , will solve the free wave equation; this is ultimately due to iterating the “ladder operator” identity
This precise and relatively simple formula for allows one to create “bespoke” solutions that obey various unusual properties, without too much difficulty.
It is not clear to me what to conjecture for $d = 10$. The blowup ansatz given above is a little inefficient, in that each frequency component of the solution is only generated from a portion of the previous component, namely the portion close to a certain light cone. In particular, the solution does not saturate the Strichartz estimates that are used to establish the positive results for $d \le 9$, which helps explain the slight gap between the positive and negative results. It may be that a more complicated ansatz could work to give a negative result in ten dimensions; conversely, it is also possible that one could use more advanced estimates than the Strichartz estimate (that somehow capture the "thinness" of the fundamental solution, and not just its dispersive properties) to stretch the positive results to ten dimensions. Which side the $d = 10$ case falls on will come down to some rather delicate numerology.
I've been meaning to return to fluids for some time now, in order to build upon my construction two years ago of a solution to an averaged Navier-Stokes equation that exhibited finite time blowup. (I spoke on this work recently at a conference in Princeton in honour of Sergiu Klainerman; my slides for that talk are here.)
One of the biggest deficiencies with my previous result is the fact that the averaged Navier-Stokes equation does not enjoy any good equation for the vorticity $\omega = \nabla \times u$, in contrast to the true Navier-Stokes equations which, when written in vorticity-stream formulation, become
(Throughout this post we will be working in three spatial dimensions $\mathbb{R}^3$.) So one of my main near-term goals in this area is to exhibit an equation resembling Navier-Stokes as much as possible which enjoys a vorticity equation, and for which there is finite time blowup.
Heuristically, this task should be easier for the Euler equations (i.e. the zero viscosity case of Navier-Stokes) than the viscous Navier-Stokes equation, as one expects the viscosity to only make it easier for the solution to stay regular. Indeed, morally speaking, the assertion that finite time blowup solutions of Navier-Stokes exist should be roughly equivalent to the assertion that finite time blowup solutions of Euler exist which are “Type I” in the sense that all Navier-Stokes-critical and Navier-Stokes-subcritical norms of this solution go to infinity (which, as explained in the above slides, heuristically means that the effects of viscosity are negligible when compared against the nonlinear components of the equation). In vorticity-stream formulation, the Euler equations can be written as
As discussed in this previous blog post, a natural generalisation of this system of equations is the system
where $T$ is a linear operator on divergence-free vector fields that is "zeroth order" in some sense; ideally it should also be invertible, self-adjoint, and positive definite (in order to have a Hamiltonian that is comparable to the kinetic energy $\frac{1}{2} \int |u|^2$). (In the previous blog post, it was observed that the surface quasi-geostrophic (SQG) equation could be embedded in a system of the form (1).) The system (1) has many features in common with the Euler equations; for instance vortex lines are transported by the velocity field $u$, and Kelvin's circulation theorem is still valid.
So far, I have not been able to fully achieve this goal. However, I have the following partial result, stated somewhat informally:
Theorem 1 There is a "zeroth order" linear operator $T$ (which, unfortunately, is not invertible, self-adjoint, or positive definite) for which the system (1) exhibits smooth solutions that blowup in finite time.
The operator $T$ constructed in the proof is built as a sum of rescaled copies of a single fixed operator. It is still bounded on all $L^p$ spaces, $1 < p < \infty$, and so is arguably still a zeroth order operator, though not as convincingly as I would like. Another, less significant, issue with the result is that the solution constructed does not have good spatial decay properties, but this is mostly for convenience and it is likely that the construction can be localised to give solutions that have reasonable decay in space. But the biggest drawback of this theorem is the fact that $T$ is not invertible, self-adjoint, or positive definite, so in particular there is no non-negative Hamiltonian for this equation. It may be that some modification of the arguments below can fix these issues, but I have so far been unable to do so. Still, the construction does show that the circulation theorem is insufficient by itself to prevent blowup.
We sketch the proof of the above theorem as follows. We use the barrier method, introducing the time-varying hyperboloid domains
for (expressed in cylindrical coordinates ). We will select initial data to be for some non-negative even bump function supported on , normalised so that
in particular is divergence-free supported in , with vortex lines connecting to . Suppose for contradiction that we have a smooth solution to (1) with this initial data; to simplify the discussion we assume that the solution behaves well at spatial infinity (this can be justified with the choice (2) of vorticity-stream operator, but we will not do so here). Since the domains disconnect from at time , there must exist a time which is the first time where the support of touches the boundary of , with supported in .
From (1) we see that the support of is transported by the velocity field . Thus, at the point of contact of the support of with the boundary of , the inward component of the velocity field cannot exceed the inward velocity of . We will construct the functions so that this is not the case, leading to the desired contradiction. (Geometrically, what is going on here is that the operator is pinching the flow to pass through the narrow cylinder , leading to a singularity by time at the latest.)
First we observe from conservation of circulation, and from the fact that is supported in , that the integrals
are constant in both space and time for . From the choice of initial data we thus have
for all and all . On the other hand, if is of the form (2) with for some bump function that only has -components, then is divergence-free with mean zero, and
where . We choose to be supported in the slab for some large constant , and to equal a function depending only on on the cylinder , normalised so that . If , then passes through this cylinder, and we conclude that
for some coefficients . We will not be able to control these coefficients , but fortunately we only need to understand on the boundary , for which . So, if happens to be supported on an annulus , then vanishes on if is large enough. We then have
on the boundary of .
Let be a function of the form
where is a bump function supported on that equals on . We can perform a dyadic decomposition where
where is a bump function supported on with . If we then set
then one can check that for a function that is divergence-free and mean zero, and supported on the annulus , and
so on (where ) we have
One can manually check that the inward velocity of this vector on exceeds the inward velocity of if is large enough, and the claim follows.
Remark 2 The type of blowup suggested by this construction, where a unit amount of circulation is squeezed into a narrow cylinder, is of "Type II" with respect to the Navier-Stokes scaling, because Navier-Stokes-critical norms such as $L^3$ (or at least the weak norm $L^{3,\infty}$) look like they stay bounded during this squeezing procedure (the velocity field is of size about $1/r$ in cylinders of radius and length about $r$). So even if the various issues with $T$ are repaired, it does not seem likely that this construction can be directly adapted to obtain a corresponding blowup for a Navier-Stokes type equation. To get a "Type I" blowup that is consistent with Kelvin's circulation theorem, it seems that one needs to coil the vortex lines around a loop multiple times in order to get increased circulation in a small space. This seems to me to be possible to pull off – there don't appear to be any unavoidable obstructions coming from topology, scaling, or conservation laws – but it would require a more complicated construction than the one given above.
In this blog post, I would like to specialise the arguments of Bourgain, Demeter, and Guth from the previous post to the two-dimensional case of the Vinogradov main conjecture, namely

Theorem 1 (Two-dimensional Vinogradov main conjecture) One has
$$\int_0^1 \int_0^1 \Big| \sum_{j=0}^{N} e(jx + j^2 y) \Big|^6 \, dx \, dy \ll N^{3+o(1)}$$
as $N \to \infty$, where $e(\theta) := e^{2\pi i \theta}$.
This particular case of the main conjecture has a classical proof using some elementary number theory. Indeed, the left-hand side can be viewed as the number of solutions to the system of equations
$$j_1 + j_2 + j_3 = k_1 + k_2 + k_3$$
$$j_1^2 + j_2^2 + j_3^2 = k_1^2 + k_2^2 + k_3^2$$
with $j_1, j_2, j_3, k_1, k_2, k_3 \in \{0, \dots, N\}$. These two equations can combine (using the algebraic identity $(a+b-c)^2 - (a^2+b^2-c^2) = 2(a-c)(b-c)$ applied to $(a,b,c) = (j_1, j_2, k_3)$ and $(k_1, k_2, j_3)$) to imply the further equation
$$(j_1 - k_3)(j_2 - k_3) = (k_1 - j_3)(k_2 - j_3)$$
which, when combined with the divisor bound, shows that each $(j_1, j_2, k_3)$ is associated to $N^{o(1)}$ choices of $(k_1, k_2, j_3)$, excluding diagonal cases when two of the $j_1, j_2, j_3, k_1, k_2, k_3$ collide, and this easily yields Theorem 1. However, the Bourgain-Demeter-Guth argument (which, in the two dimensional case, is essentially contained in a previous paper of Bourgain and Demeter) does not require the divisor bound, and extends for instance to the more general case where $j$ ranges in a $1$-separated set of reals between $0$ and $N$.
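The solution count in this classical argument can be checked numerically for small $N$. The following Python brute force (an illustration, hashing triples by their first two power sums to avoid a six-fold loop) counts solutions and compares with the predicted $N^{3+o(1)}$ scaling; the slow growth of the normalised count reflects the $N^{o(1)}$ factor:

```python
from collections import Counter
from itertools import product

# Count solutions to j1+j2+j3 = k1+k2+k3 and
# j1^2+j2^2+j3^2 = k1^2+k2^2+k3^2 with all variables in {0,...,N}.
# Each triple is hashed by (sum, sum of squares); the number of
# solutions is the sum of the squared multiplicities.
for N in [5, 10, 20, 40]:
    stats = Counter()
    for j in product(range(N + 1), repeat=3):
        stats[(sum(j), sum(x * x for x in j))] += 1
    solutions = sum(c * c for c in stats.values())
    print(N, solutions, solutions / (N + 1) ** 3)
```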
In this special case, the Bourgain-Demeter argument simplifies, as the lower dimensional inductive hypothesis becomes a simple almost orthogonality claim, and the multilinear Kakeya estimate needed is also easy (collapsing to just Fubini’s theorem). Also one can work entirely in the context of the Vinogradov main conjecture, and not turn to the increased generality of decoupling inequalities (though this additional generality is convenient in higher dimensions). As such, I am presenting this special case as an introduction to the Bourgain-Demeter-Guth machinery.
We now give the specialisation of the Bourgain-Demeter argument to Theorem 1. It will suffice to establish the bound
for all , (where we keep fixed and send to infinity), as the bound then follows by combining the above bound with the trivial bound . Accordingly, for any and , we let denote the claim that
as . Clearly, for any fixed , holds for some large , and it will suffice to establish
Proposition 2 Let , and let be such that holds. Then there exists such that holds.
Indeed, this proposition shows that for , the infimum of the for which holds is zero.
We prove the proposition below the fold, using a simplified form of the methods discussed in the previous blog post. To simplify the exposition we will be a bit cavalier with the uncertainty principle, for instance by essentially ignoring the tails of rapidly decreasing functions.
Given any finite collection of elements $(f_i)_{i \in I}$ in some Banach space $X$, the triangle inequality tells us that
$$\Big\| \sum_{i \in I} f_i \Big\|_X \le \sum_{i \in I} \| f_i \|_X.$$
However, when the $f_i$ all "oscillate in different ways", one expects to improve substantially upon the triangle inequality. For instance, if $X$ is a Hilbert space and the $f_i$ are mutually orthogonal, we have the Pythagorean theorem
$$\Big\| \sum_{i \in I} f_i \Big\|_X = \Big( \sum_{i \in I} \| f_i \|_X^2 \Big)^{1/2}.$$
For sake of comparison, from the triangle inequality and Cauchy-Schwarz one has the general inequality
$$\Big\| \sum_{i \in I} f_i \Big\|_X \le |I|^{1/2} \Big( \sum_{i \in I} \| f_i \|_X^2 \Big)^{1/2}$$
for any finite collection $(f_i)_{i \in I}$ in any Banach space $X$, where $|I|$ denotes the cardinality of $I$. Thus orthogonality in a Hilbert space yields "square root cancellation", saving a factor of $|I|^{1/2}$ or so over the trivial bound coming from the triangle inequality.
More generally, let us somewhat informally say that a collection $(f_i)_{i \in I}$ exhibits decoupling in $X$ if one has the Pythagorean-like inequality
$$\Big\| \sum_{i \in I} f_i \Big\|_X \ll_\epsilon |I|^\epsilon \Big( \sum_{i \in I} \| f_i \|_X^2 \Big)^{1/2}$$
for any $\epsilon > 0$, thus one obtains almost the full square root cancellation in the $X$ norm. The theory of almost orthogonality can then be viewed as the theory of decoupling in Hilbert spaces such as $L^2$. In $L^p$ spaces for $p < 2$ one usually does not expect this sort of decoupling; for instance, if the $f_i$ are disjointly supported one has
$$\Big\| \sum_{i \in I} f_i \Big\|_{L^p} = \Big( \sum_{i \in I} \| f_i \|_{L^p}^p \Big)^{1/p}$$
and the right-hand side can be much larger than $(\sum_{i \in I} \| f_i \|_{L^p}^2)^{1/2}$ when $p < 2$. At the opposite extreme, one usually does not expect to get decoupling in $L^\infty$, since one could conceivably align the $f_i$ to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in $L^\infty$ becomes sharp.
However, in some cases one can get decoupling for certain $p > 2$. For instance, suppose we are in $L^4$, and that the $f_i$ are bi-orthogonal in the sense that the products $f_i f_j$ for $i < j$ are pairwise orthogonal in $L^2$. Then we have
$$\Big\| \sum_i f_i \Big\|_{L^4}^4 = \Big\| \Big( \sum_i f_i \Big)^2 \Big\|_{L^2}^2 \ll \sum_{i \le j} \| f_i f_j \|_{L^2}^2 \ll \Big( \sum_i \| f_i \|_{L^4}^2 \Big)^2,$$
giving decoupling in $L^4$. (Similarly if each of the products $f_i f_j$ is orthogonal to all but $O(1)$ of the other products.) A similar argument also gives $L^6$ decoupling when one has tri-orthogonality (with the products $f_i f_j f_k$ mostly orthogonal to each other), and so forth. As a slight variant, Khintchine's inequality also indicates that decoupling should occur for any fixed $0 < p < \infty$ if one multiplies each of the $f_i$ by an independent random sign $\epsilon_i \in \{-1, +1\}$.
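The square root cancellation coming from random signs is easy to see numerically. Here is a quick numpy illustration (a sketch, not tied to any particular decoupling setup): taking $n$ copies of the same unit-norm function, the worst case for the triangle inequality, and attaching independent random signs brings the $L^p$ norm of the sum down from $n$ to about $\sqrt{n}$:

```python
import numpy as np

# Khintchine-type cancellation: n copies of the same unit-norm
# function with independent random signs.  The L^p norm of the
# signed sum is typically about sqrt(n), not n.
rng = np.random.default_rng(0)
n, trials, p = 1000, 200, 4.0
x = np.linspace(0.0, 1.0, 512)
f = np.ones_like(x)          # each f_i is the same unit-norm function

norms = []
for _ in range(trials):
    signs = rng.choice([-1.0, 1.0], size=n)
    g = signs.sum() * f      # the signed sum of the f_i
    norms.append(np.mean(np.abs(g) ** p) ** (1.0 / p))
print("triangle inequality bound:", n)
print("typical L^p norm of signed sum:", np.mean(norms))
print("sqrt(n):", n ** 0.5)
```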
In recent years, Bourgain and Demeter have been establishing decoupling theorems in $L^p$ spaces for various key exponents $p$, in the "restriction theory" setting in which the $f_i$ are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve $\gamma \subset \mathbb{R}^k$ parameterised by the polynomial map
$$\gamma(t) := (t, t^2, \dots, t^k).$$
For any ball $B = B(x_0, r)$ in $\mathbb{R}^k$, let $w_B$ denote the weight
$$w_B(x) := \frac{1}{(1 + |x - x_0|/r)^C}$$
for a suitably large constant $C$, which should be viewed as a smoothed out version of the indicator function $1_B$ of $B$. In particular, the space $L^p(w_B)$ can be viewed as a smoothed out version of the space $L^p(B)$. For future reference we observe a fundamental self-similarity of the curve $\gamma$: any arc $\gamma(I)$ in this curve, with $I$ a compact interval, is affinely equivalent to the standard arc $\gamma([0,1])$.
Theorem 1 (Decoupling theorem) Let $n \ge 1$, and for each $1 \le i \le n$ let $f_i$ be the Fourier transform of a finite Borel measure on the arc $\gamma([\frac{i-1}{n}, \frac{i}{n}])$, where $\gamma$ is the curve above. Then the $f_i$ exhibit decoupling in $L^{k(k+1)}(w_B)$ for any ball $B$ of radius $n^k$.
Orthogonality gives the $p = 2$ case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling up to exponents such as $p = 4$ or $p = 6$; the point here is that we can now get a much larger value of $p$. The $k = 2$ case of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent $k(k+1)$ (and the radius $n^k$) is best possible, as can be seen by the following basic example. If
where is a bump function adapted to , then standard Fourier-analytic computations show that will be comparable to on a rectangular box of dimensions (and thus volume ) centred at the origin, and exhibit decay away from this box, with comparable to
On the other hand, is comparable to on a ball of radius comparable to centred at the origin, so is , which is just barely consistent with decoupling. This calculation shows that decoupling will fail if is replaced by any larger exponent, and also if the radius of the ball is reduced to be significantly smaller than .
This theorem has the following consequence of importance in analytic number theory:
Corollary 2 (Vinogradov main conjecture) Let $s, k, N \ge 1$ be integers, and let $\epsilon > 0$. Then
$$\int_0^1 \cdots \int_0^1 \Big| \sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^k x_k ) \Big|^{2s} \, dx_1 \cdots dx_k \ll_{\epsilon, s, k} N^{s + \epsilon} + N^{2s - \frac{k(k+1)}{2} + \epsilon}.$$
Proof: By the Hölder inequality (and the trivial bound of $N$ for the exponential sum), it suffices to treat the critical case $s = \frac{k(k+1)}{2}$, that is to say to show that
$$\int_0^1 \cdots \int_0^1 \Big| \sum_{j=1}^N e( j x_1 + \dots + j^k x_k ) \Big|^{k(k+1)} \, dx_1 \cdots dx_k \ll_{\epsilon, k} N^{\frac{k(k+1)}{2} + \epsilon}.$$
We can rescale this as
As the integrand is periodic along the lattice , this is equivalent to
The left-hand side may be bounded by , where and . Since
the claim now follows from the decoupling theorem and a brief calculation.
Using the Plancherel formula, one may equivalently (when $s$ is an integer) write the Vinogradov main conjecture in terms of the number of solutions to the system of equations
$$j_1^i + \dots + j_s^i = k_1^i + \dots + k_s^i \quad \text{for all } 1 \le i \le k,$$
with $j_1, \dots, j_s, k_1, \dots, k_s \in \{1, \dots, N\}$, but we will not use this formulation here.
A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely only in low degrees, or for exponents $s$ either well below or well above the critical value, with the bulk of recent progress coming from the efficient congruencing technique of Wooley. It has numerous applications to exponential sums, Waring's problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of a fixed number of fifth powers (a smaller number of fifth powers than was previously known to suffice). The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem, but it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of "Archimedean" version of the latter (with the intervals in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set $\{1, \dots, N\}$ that $j$ is drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)
Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.
I thank Jean Bourgain and Andrew Granville for helpful discussions.
Let $\lambda$ denote the Liouville function. The prime number theorem is equivalent to the estimate
$$\sum_{n \le x} \lambda(n) = o(x)$$
as $x \to \infty$, that is to say that $\lambda$ exhibits cancellation on large intervals such as $[1, x]$. This result can be improved to give cancellation on shorter intervals. For instance, using the known zero density estimates for the Riemann zeta function, one can establish that
$$\frac{1}{H} \sum_{x \le n \le x+H} \lambda(n) = o(1) \qquad (1)$$
as $x \to \infty$ if $H \ge x^{7/12 + \epsilon}$ for some fixed $\epsilon > 0$; I believe this result is due to Ramachandra (see also Exercise 21 of this previous blog post), and in fact one could obtain a better error term on the right-hand side that for instance gained an arbitrary power of $\log x$. On the Riemann hypothesis (or the weaker density hypothesis), it was known that the exponent $7/12$ could be lowered to $1/2$.
Early this year, there was a major breakthrough by Matomaki and Radziwill, who (among other things) showed that the asymptotic (1) was in fact valid for any $H = H(x)$ that went to infinity as $x \to \infty$, thus yielding cancellation on extremely short intervals. This has many further applications; for instance, this estimate, or more precisely its extension to other "non-pretentious" bounded multiplicative functions, was a key ingredient in my recent solution of the Erdös discrepancy problem, as well as in obtaining logarithmically averaged cases of Chowla's conjecture, such as
$$\sum_{n \le x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x). \qquad (2)$$
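One can get a feel for this cancellation numerically. The snippet below (illustrative; it computes the Liouville function naively via sympy's factorint, so it is only suitable for small ranges) averages $\lambda$ over short intervals of various lengths near $10^6$:

```python
import random
from sympy import factorint

def liouville(n):
    """Liouville function: (-1)^(number of prime factors, with multiplicity)."""
    return -1 if sum(factorint(n).values()) % 2 else 1

# Average |H^{-1} sum_{x <= n < x+H} lambda(n)| over random x near 10^6.
# The averages shrink as H grows, illustrating cancellation in short
# intervals (the Matomaki-Radziwill theorem concerns the regime where
# H grows arbitrarily slowly compared with x).
random.seed(0)
X = 10**6
for H in [10, 100, 1000]:
    avgs = []
    for _ in range(30):
        x = random.randrange(X, 2 * X)
        avgs.append(abs(sum(liouville(n) for n in range(x, x + H))) / H)
    print(H, sum(avgs) / len(avgs))
```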
It is of interest to twist the above estimates by phases such as the linear phase $n \mapsto e(\alpha n)$, where $e(\theta) := e^{2\pi i \theta}$. In 1937, Davenport showed that
$$\sup_\alpha \Big| \sum_{n \le x} \lambda(n) e(\alpha n) \Big| \ll_A x \log^{-A} x$$
for any $A > 0$. The short interval analogue would assert that
$$\sum_{x \le n \le x+H} \lambda(n) e(\alpha n) = o(H) \qquad (3)$$
for each fixed $\alpha$, and more ambitiously that
$$\sup_\alpha \Big| \sum_{x \le n \le x+H} \lambda(n) e(\alpha n) \Big| = o(H), \qquad (4)$$
from which one can see that this is another averaged form of Chowla's conjecture (stronger than the one I was able to prove with Matomaki and Radziwill, but a consequence of the unaveraged Chowla conjecture). If one inserted such a bound into the machinery I used to solve the Erdös discrepancy problem, it should lead to further averaged cases of Chowla's conjecture, such as
$$\sum_{n \le x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x), \qquad (5)$$
though I have not fully checked the details of this implication. It should also have a number of new implications for sign patterns of the Liouville function, though we have not explored these in detail yet.
One can write (4) equivalently in the form
$$\sum_{x \le n \le x+H} \lambda(n) e(\alpha(x) n) = o(H) \qquad (6)$$
uniformly for all $x$-dependent phases $\alpha = \alpha(x)$. In contrast, (3) is equivalent to the subcase of (6) when the linear phase coefficient $\alpha$ is independent of $x$. This dependency of $\alpha$ on $x$ seems to necessitate some highly nontrivial additive combinatorial analysis of the function $x \mapsto \alpha(x)$ in order to establish (4) when $H$ is small. To date, this analysis has proven to be elusive, but I would like to record what one can do with more classical methods like Vaughan's identity, namely:
The values of $H$ covered by this proposition are far too large to yield implications such as new cases of the Chowla conjecture, but it appears that the exponent appearing in it is the limit of "classical" methods (at least as far as I was able to apply them), in the sense that one does not do any combinatorial analysis on the function $x \mapsto \alpha(x)$, nor does one use modern equidistribution results on "Type III sums" that require deep estimates on Kloosterman-type sums. The latter may shave a little bit off of the exponent, but I don't see how one would ever hope to go much below it without doing some non-trivial combinatorics on the function $x \mapsto \alpha(x)$. UPDATE: I have come across this paper of Zhan which uses mean-value theorems for L-functions to lower the exponent somewhat further.
Let me now sketch the proof of the proposition, omitting many of the technical details. We first remark that known estimates on sums of the Liouville function (or similar functions such as the von Mangoldt function) in short arithmetic progressions, based on zero-density estimates for Dirichlet $L$-functions, can handle the "major arc" case of (4) (or (6)) where $\alpha$ is restricted to lie close to a rational with small denominator (the exponent here being of the same numerology as the $7/12$ exponent in the classical result of Ramachandra, tied to the best zero density estimates currently available); for instance a modification of the arguments in this recent paper of Koukoulopoulos would suffice. Thus we can restrict attention to "minor arc" values of $\alpha$ (or $\alpha(x)$, using the interpretation of (6)).
Next, one breaks up $\lambda$ (or the closely related Möbius function $\mu$) into Dirichlet convolutions using one of the standard identities (e.g. Vaughan's identity or Heath-Brown's identity), as discussed for instance in this previous post (which is focused more on the von Mangoldt function, but analogous identities exist for the Liouville and Möbius functions). The exact choice of identity is not terribly important, but the upshot is that $\lambda(n)$ can be decomposed into terms, each of which is either of the "Type I" form
for some coefficients that are roughly of logarithmic size on the average, and scales with and , or else of the “Type II” form
for some coefficients that are roughly of logarithmic size on the average, with the scales constrained analogously to the Type I case. As discussed in the previous post, the exponent appearing in the proposition is a natural barrier in these identities if one is unwilling to also consider "Type III" type terms, which are roughly of the shape of the third divisor function $d_3(n) := \sum_{abc = n} 1$.
A Type I sum makes a contribution to (6) that can be bounded (via Cauchy-Schwarz) in terms of an expression such as
The inner sum exhibits a lot of cancellation unless the phase $\alpha d$ is within a small distance of an integer. (Here, "a lot" should be loosely interpreted as "gaining many powers of $\log x$ over the trivial bound".) Since the inner range is significantly longer than the outer range, standard Vinogradov-type manipulations (see e.g. Lemma 13 of these previous notes) show that this bad case occurs for many $d$ only when $\alpha$ is "major arc", which is the case we have specifically excluded. This lets us dispose of the Type I contributions.
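The cancellation claim for the inner sum is just the standard geometric series estimate, which bounds $|\sum_{m \le M} e(\alpha m)|$ by $\min(M, \frac{1}{2\|\alpha\|})$, where $\|\alpha\|$ denotes the distance from $\alpha$ to the nearest integer. A quick numerical check (illustrative):

```python
import cmath

def e(t):
    return cmath.exp(2j * cmath.pi * t)

# Standard bound: |sum_{m=1}^{M} e(alpha*m)| <= min(M, 1/(2*||alpha||)),
# where ||alpha|| is the distance from alpha to the nearest integer.
# The sum is large only when alpha is close to an integer.
M = 10**4
for alpha in [0.00003, 0.00123, 0.123456, 0.5]:
    s = abs(sum(e(alpha * m) for m in range(1, M + 1)))
    dist = min(alpha % 1, 1 - alpha % 1)
    print(f"alpha={alpha}: |sum|={s:.1f}, bound={min(M, 1/(2*dist)):.1f}")
```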
A Type II sum makes a contribution to (6) roughly of the form
We can break this up into a number of sums roughly of the form
for various dyadic blocks; note that this range is non-trivial because the inner variable ranges over a much longer interval than the outer one. Applying the usual bilinear sum Cauchy-Schwarz methods (e.g. Theorem 14 of these notes) we conclude that there is a lot of cancellation unless $\alpha$ is close to a rational with small denominator. But such a denominator is well below the threshold for the definition of major arc, so we can exclude this case and obtain the required cancellation.
A basic estimate in multiplicative number theory (particularly if one is using the Granville-Soundararajan "pretentious" approach to this subject) is the following inequality of Halasz (formulated here in a quantitative form introduced by Montgomery and Tenenbaum).

Theorem 1 (Halasz inequality) Let $f: \mathbb{N} \to \mathbb{C}$ be a multiplicative function bounded in magnitude by $1$, and suppose that $x \ge 3$, $T \ge 1$, and $M \ge 0$ are such that
$$\sum_{p \le x} \frac{1 - \mathrm{Re}(f(p) p^{-it})}{p} \ge M \qquad (1)$$
for all $|t| \le T$. Then
$$\frac{1}{x} \Big| \sum_{n \le x} f(n) \Big| \ll (1 + M) e^{-M} + \frac{1}{\sqrt{T}}.$$
As a qualitative corollary, we conclude (by standard compactness arguments) that if
$$\sum_{p \le x} \frac{1 - \mathrm{Re}(f(p) p^{-it})}{p} \to \infty$$
as $x \to \infty$ for each fixed $t$, then
$$\frac{1}{x} \sum_{n \le x} f(n) \to 0 \qquad (2)$$
as $x \to \infty$.
as . In the more recent work of this paper of Granville and Soundararajan, the sharper bound
is obtained (with a more precise description of the term).
The usual proofs of Halasz's theorem are somewhat lengthy (though there has been a recent simplification, in forthcoming work of Granville, Harper, and Soundararajan). Below the fold I would like to give a relatively short proof of the following "cheap" version of the inequality, which has slightly weaker quantitative bounds, but still suffices to give qualitative conclusions such as (2).
Theorem 2 (Cheap Halasz inequality) Let $f$ be a multiplicative function bounded in magnitude by $1$. Let $T \ge 1$ and $M \ge 0$, and suppose that $x$ is sufficiently large depending on $T$ and $M$. If (1) holds for all $|t| \le T$, then $\frac{1}{x} |\sum_{n \le x} f(n)|$ obeys a bound of the same shape as in Theorem 1, but with the exponential decay in $M$ weakened somewhat.
The non-optimal exponent can probably be improved a bit by being more careful with the exponents, but I did not try to optimise it here. A similar bound appears in the first paper of Halasz on this topic.
The idea of the argument is to split $f$ as a Dirichlet convolution $f = f_1 * f_2 * f_3$, where $f_1, f_2, f_3$ are the portions of $f$ coming from "small", "medium", and "large" primes respectively (with the dividing line between the three types of primes being given by various powers of $x$). Using a Perron-type formula, one can express this convolution in terms of the product of the Dirichlet series of $f_1, f_2, f_3$ respectively at various complex numbers $s$ on or near the line $\mathrm{Re}(s) = 1$. One can use $L^2$ based estimates to control two of these Dirichlet series, while using the hypothesis (1) one can get pointwise estimates on the remaining Dirichlet series. (This is similar to the Fourier-analytic approach to ternary additive problems, such as Vinogradov's theorem on representing large odd numbers as the sum of three primes.) This idea was inspired by a similar device used in the work of Granville, Harper, and Soundararajan. A variant of this argument also appears in unpublished work of Adam Harper.
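The three-fold splitting of $f$ is a purely multiplicative bookkeeping device, and can be verified numerically. The sketch below (illustrative; the cut points $10$ and $100$ are arbitrary placeholders rather than the powers of $x$ used in the actual argument) factors the Liouville function into small, medium, and large prime parts and checks the Dirichlet convolution identity $f = f_1 * f_2 * f_3$:

```python
from sympy import factorint

def liouville(n):
    return -1 if sum(factorint(n).values()) % 2 else 1

def piece(n, lo, hi):
    """The part of the Liouville function supported on integers all of
    whose prime factors p satisfy lo <= p < hi; zero otherwise."""
    if n == 1:
        return 1
    if all(lo <= p < hi for p in factorint(n)):
        return liouville(n)
    return 0

def dirichlet(f, g, n):
    """Dirichlet convolution (f*g)(n) = sum over divisors d of n
    of f(d) * g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

INF = float('inf')
f1 = lambda n: piece(n, 2, 10)      # small primes
f2 = lambda n: piece(n, 10, 100)    # medium primes
f3 = lambda n: piece(n, 100, INF)   # large primes
f12 = lambda m: dirichlet(f1, f2, m)
for n in range(1, 200):
    assert dirichlet(f12, f3, n) == liouville(n)
print("f = f1 * f2 * f3 as a Dirichlet convolution, verified up to 200")
```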
I thank Andrew Granville for helpful comments which led to significant simplifications of the argument.
This is however not the end of the matter; there are many variants, refinements, and generalisations of the central limit theorem, and the purpose of this set of notes is to present a small sample of these variants.
First of all, the above theorem does not quantify the rate of convergence in (1). We have already addressed this issue to some extent with the Berry-Esséen theorem, which roughly speaking gives a convergence rate of $O(1/\sqrt{n})$ uniformly if we assume that the underlying distribution has finite third moment. However there are still some quantitative versions of (1) which are not addressed by the Berry-Esséen theorem. For instance one may be interested in bounding the large deviation probabilities
$$\mathbb{P}\big( |S_n - n \mu| \ge \lambda \sqrt{n} \big) \qquad (2)$$
in the setting where $\lambda$ grows with $n$. Chebyshev's inequality gives an upper bound of $O(1/\lambda^2)$ for this quantity, but one can often do much better than this in practice. For instance, the central limit theorem (1) suggests that this probability should be bounded by something like $O(e^{-\lambda^2/2})$; however, this theorem only kicks in when $n$ is very large compared with $\lambda$. For instance, if one uses the Berry-Esséen theorem, one would need $n$ as large as $e^{\lambda^2}$ or so to reach the desired bound of $O(e^{-\lambda^2/2})$, even under the assumption of finite third moment. Basically, the issue is that convergence-in-distribution results, such as the central limit theorem, only really control the typical behaviour of statistics such as $S_n$; they are much less effective at controlling the very rare outlier events in which the statistic strays far from its typical behaviour. Fortunately, there are large deviation inequalities (or concentration of measure inequalities) that do provide exponential type bounds for quantities such as (2), which are valid for both small and large values of $n$. A basic example of this is the Chernoff bound that made an appearance in Exercise 47 of Notes 4; here we give some further basic inequalities of this type, including versions of the Bennett and Hoeffding inequalities.
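To illustrate the gap between the Chebyshev bound and the exponential regime, here is a small numpy simulation (a sketch, using sums of independent signs as the model) comparing the empirical tail probability with the Chebyshev bound $1/\lambda^2$ and the Hoeffding-type bound $2 e^{-\lambda^2/2}$:

```python
import numpy as np

# Compare the empirical tail P(|S_n| >= lambda * sqrt(n)) for a sum
# S_n of n independent +-1 signs with Chebyshev's bound 1/lambda^2
# and the exponential Hoeffding bound 2*exp(-lambda^2/2).
rng = np.random.default_rng(0)
n, trials = 1000, 200000
# S_n = 2*Binomial(n, 1/2) - n has the law of a sum of n signs.
S = 2 * rng.binomial(n, 0.5, size=trials) - n
for lam in [1.0, 2.0, 3.0]:
    empirical = np.mean(np.abs(S) >= lam * np.sqrt(n))
    print(f"lambda={lam}: empirical={empirical:.5f}, "
          f"Chebyshev={1/lam**2:.3f}, Hoeffding={2*np.exp(-lam**2/2):.5f}")
```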
where the window is now of bounded size (though its location can grow with $n$). The central limit theorem predicts that this quantity should be roughly of the size of the corresponding normal density, but even if one is able to invoke the Berry-Esséen theorem, one cannot quite see this main term because it is dominated by the error term in Berry-Esséen. There is good reason for this: if for instance $X$ takes integer values, then $S_n$ also takes integer values, and the probability in question can vanish when the window has length less than $1$ and is positioned to avoid the integers. However, this turns out to essentially be the only obstruction; if the distribution of $X$ does not lie in a lattice such as $\mathbb{Z}$, then we can establish a local limit theorem controlling (3), and when $X$ does take values in a lattice like $\mathbb{Z}$, there is a discrete local limit theorem that controls probabilities such as $\mathbb{P}(S_n = m)$. Both of these limit theorems will be proven by the Fourier-analytic method used in the previous set of notes.
We also discuss other limit theorems in which the limiting distribution is something other than the normal distribution. Perhaps the most common example of these theorems is the Poisson limit theorem, in which one sums a large number of indicator variables (or approximate indicator variables), each of which is rarely non-zero, but which collectively add up to a random variable of medium-sized mean. In this case, it turns out that the limiting distribution should be a Poisson random variable; this again is an easy application of the Fourier method. Finally, we briefly discuss limit theorems for other stable laws than the normal distribution, which are suitable for summing random variables of infinite variance, such as the Cauchy distribution.
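As a quick illustration of the Poisson limit theorem (a simulation sketch, with arbitrarily chosen parameters), a sum of $2000$ independent indicators, each with success probability $0.002$, matches the Poisson distribution of mean $4$ closely:

```python
import math
import numpy as np

# Sum n = 2000 independent indicator variables, each equal to 1 with
# small probability p = 0.002, so the mean is lam = n*p = 4.
# The distribution of the sum is close to Poisson(4).
rng = np.random.default_rng(0)
n, p, trials = 2000, 0.002, 100000
sums = rng.binomial(n, p, size=trials)
lam = n * p
for k in range(8):
    empirical = np.mean(sums == k)
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    print(f"k={k}: empirical={empirical:.4f}, Poisson={poisson:.4f}")
```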
Finally, we mention a very important class of generalisations to the CLT (and to the variants of the CLT discussed in this post), in which the hypothesis of joint independence between the summands is relaxed, for instance one could assume only that the partial sums form a martingale. Many (though not all) of the proofs of the CLT extend to these more general settings, and this turns out to be important for many applications in which one does not expect joint independence. However, we will not discuss these generalisations in this course, as they are better suited for subsequent courses in this series when the theory of martingales, conditional expectation, and related tools are developed.