Kaisa Matomäki, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint “Higher uniformity of arithmetic functions in short intervals I. All intervals“. This paper investigates the higher order (Gowers) uniformity of standard arithmetic functions in analytic number theory (and specifically, the Möbius function $\mu(n)$, the von Mangoldt function $\Lambda(n)$, and the generalised divisor functions $d_k(n)$) in short intervals $(x, x+H]$, where $x$ is large and $H$ lies in the range $x^{\theta+\varepsilon} \leq H \leq x^{1-\varepsilon}$ for a fixed constant $0 < \theta < 1$ (that one would like to be as small as possible). If we let $f$ denote one of the functions $\mu, \Lambda, d_k$, then there is extensive literature on the estimation of short sums

$$\sum_{x < n \leq x+H} f(n)$$
and some literature also on the estimation of exponential sums such as $\sum_{x < n \leq x+H} f(n) e(-\alpha n)$ for a real frequency $\alpha$, where $e(\theta) := e^{2\pi i \theta}$. For applications in the additive combinatorics of such functions $f$, it is also necessary to consider more general correlations, such as polynomial correlations $\sum_{x < n \leq x+H} f(n) e(-P(n))$ where $P: \mathbb{Z} \to \mathbb{R}$ is a polynomial of some fixed degree, or more generally

$$\sum_{x < n \leq x+H} f(n) \overline{F}(g(n) \Gamma)$$

where $G/\Gamma$ is a nilmanifold of fixed degree and dimension (and with some control on structure constants), $g: \mathbb{Z} \to G$ is a polynomial map, and $F: G/\Gamma \to \mathbb{C}$ is a Lipschitz function (with some bound on the Lipschitz constant). Indeed, thanks to the inverse theorem for the Gowers uniformity norm, such correlations let one control the Gowers uniformity norm of $f$ (possibly after subtracting off some renormalising factor) on such short intervals $(x, x+H]$, which can in turn be used to control other multilinear correlations involving such functions.

Traditionally, asymptotics for such sums are expressed in terms of a “main term” of some arithmetic nature, plus an error term that is estimated in magnitude. For instance, a sum such as $\sum_{x < n \leq x+H} \Lambda(n) e(-\alpha n)$ would be approximated in terms of a main term that vanishes (or is negligible) if $\alpha$ is “minor arc”, but would be expressible in terms of something like a Ramanujan sum if $\alpha$ is “major arc”, together with an error term. We found it convenient to cancel off such main terms by subtracting an approximant $f^\sharp$ from each of the arithmetic functions $f$ and then getting upper bounds on remainder correlations such as

$$\left| \sum_{x < n \leq x+H} (f(n) - f^\sharp(n)) \overline{F}(g(n) \Gamma) \right| \quad (1)$$
(actually for technical reasons we also allow the $n$ variable to be restricted further to a subprogression of $(x, x+H]$, but let us ignore this minor extension for this discussion). There is some flexibility in how to choose these approximants, but we eventually found it convenient to use the following choices.
- For the Möbius function $\mu$, we simply set $\mu^\sharp = 0$, as per the Möbius pseudorandomness conjecture. (One could choose a more sophisticated approximant in the presence of a Siegel zero, as I did with Joni in this recent paper, but we do not do so here.)
- For the von Mangoldt function $\Lambda$, we eventually went with the Cramér-Granville approximant $\Lambda^\sharp(n) := \frac{W}{\phi(W)} 1_{(n,W)=1}$, where $W := \prod_{p \leq R} p$ and $R := \exp(\log^{1/10} x)$.
- For the divisor functions $d_k$, we used a somewhat complicated-looking approximant $d_k^\sharp(n)$ built out of explicit polynomials $P_m(\log n)$, chosen so that $d_k^\sharp$ and $d_k$ have almost exactly the same sums along arithmetic progressions (see the paper for details).
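As a quick numerical sanity check on the Cramér-Granville choice $\Lambda^\sharp(n) = \frac{W}{\phi(W)} 1_{(n,W)=1}$, one can compare short-interval sums of $\Lambda$ and $\Lambda^\sharp$ directly; in the sketch below the cutoff `R` and the interval parameters are vastly smaller than anything used in the paper and are purely illustrative:

```python
from math import gcd, log

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, flag in enumerate(sieve) if flag]

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^j for a prime p, and 0 otherwise."""
    for p in primes_up_to(int(n ** 0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n) if n > 1 else 0.0  # no prime factor <= sqrt(n): n is prime (or 1)

# Cramér-Granville approximant: Lambda_sharp(n) = (W/phi(W)) * 1_{gcd(n,W)=1},
# with W the product of the primes up to a (here tiny, purely illustrative) cutoff R.
R = 5
W = phi_W = 1
for p in primes_up_to(R):
    W *= p
    phi_W *= p - 1  # phi is multiplicative and phi(p) = p - 1

def lambda_sharp(n):
    return W / phi_W if gcd(n, W) == 1 else 0.0

x, H = 10_000, 1_000
S = sum(von_mangoldt(n) for n in range(x + 1, x + H + 1))
S_sharp = sum(lambda_sharp(n) for n in range(x + 1, x + H + 1))
print(f"sum of Lambda       over ({x}, {x + H}]: {S:.1f}")
print(f"sum of Lambda_sharp over ({x}, {x + H}]: {S_sharp:.1f}")
# Both sums come out close to H, consistent with the prime number theorem.
```

Even with these toy parameters the two sums agree to within a few percent, which is the sense in which $\Lambda^\sharp$ captures the “main term” behaviour of $\Lambda$.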
The objective is then to obtain bounds on sums such as (1) that improve upon the “trivial bound” that one can get with the triangle inequality and standard number theory bounds such as the Brun-Titchmarsh inequality. For $f = \mu, \Lambda$, the Siegel-Walfisz theorem suggests that it is reasonable to expect error terms that have “strongly logarithmic savings” in the sense that they gain a factor of $\log^{-A} x$ over the trivial bound for any $A > 0$; for $f = d_k$, the Dirichlet hyperbola method suggests instead that one has “power savings” in that one should gain a factor of $x^{-c_k}$ over the trivial bound for some $c_k > 0$. In the case of the Möbius function $\mu$, there is an additional trick (introduced by Matomäki and Teräväinen) that allows one to lower the exponent $\theta$ somewhat at the cost of only obtaining “weakly logarithmic savings” of shape $\log^{-c} x$ for some small constant $c > 0$.
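For orientation, here is a toy illustration of the trivial bound versus actual cancellation in the simplest instance of (1), namely $f = \mu$, $f^\sharp = 0$, and trivial nilsequence $F \equiv 1$; the parameters are of course far too small to probe the theorems, and the observed cancellation is merely suggestive:

```python
def mobius_up_to(N):
    """Sieve mu(1..N): mu(n) = (-1)^k if n is squarefree with k prime factors, else 0."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1          # one sign flip per distinct prime factor
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0            # kill non-squarefree n
    return mu

x, H = 100_000, 2_000
mu = mobius_up_to(x + H)
S = sum(mu[n] for n in range(x + 1, x + H + 1))
print(f"|sum of mu over ({x}, {x + H}]| = {abs(S)}   (trivial bound: {H})")
```

The short Möbius sum is typically of size comparable to $\sqrt{H}$ in such experiments, far below the trivial bound $H$; the theorems above assert (much weaker, but provable) savings of this general flavour uniformly in the nilsequence.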
Our main estimates on sums of the form (1) work in the following ranges:
- For $\theta > 5/8$, one can obtain strongly logarithmic savings on (1) for $f = \mu, \Lambda$, and power savings for $f = d_k$.
- For $\theta > 3/5$, one can obtain weakly logarithmic savings for $f = \mu$.
- For $\theta > 5/9$, one can obtain power savings for $f = d_k$.
- For $\theta > 1/3$, one can obtain power savings for $f = d_2$.
Conjecturally, one should be able to obtain power savings in all cases, and lower $\theta$ down to zero, but the ranges of exponents and savings given here seem to be the limit of current methods unless one assumes additional hypotheses, such as GRH. The result for $f = \Lambda$ for correlation against Fourier phases $e(\alpha n)$ was established previously by Zhan, and the corresponding result for $f = \mu$ was established previously by Matomäki and Teräväinen.
By combining these results with tools from additive combinatorics, one can obtain a number of applications:
- Direct insertion of our bounds in the recent work of Kanigowski, Lemańczyk, and Radziwiłł on prime number theorems for dynamical systems that are analytic skew products gives some improvements in the exponents there.
- We can obtain a “short interval” version of a multiple ergodic theorem along primes established by Frantzikinakis-Host-Kra and Wooley-Ziegler, in which we average over intervals of the form $[N, N+H]$ rather than $[1, N]$.
- We can obtain a “short interval” version of the “linear equations in primes” asymptotics obtained by Ben Green, Tamar Ziegler, and myself in this sequence of papers, where the variables in these equations lie in short intervals $[N, N+H]$ rather than long intervals such as $[1, N]$.
We now briefly discuss some of the ingredients of proof of our main results. The first step is standard, using combinatorial decompositions (based on the Heath-Brown identity and (for the $\mu$ result) the Ramaré identity) to decompose $\mu, \Lambda, d_k$ into more tractable sums of the following types:
- Type $I$ sums, which are basically of the form $\sum_{d \leq D} a_d \sum_{m: x < dm \leq x+H} \overline{F}(g(dm)\Gamma)$ for some weights $a_d$ of controlled size and some cutoff $D$ that is not too large;
- Type $II$ sums, which are basically of the form $\sum_{D_- \leq d \leq D_+} a_d \sum_{m: x < dm \leq x+H} b_m \overline{F}(g(dm)\Gamma)$ for some weights $a_d$, $b_m$ of controlled size and some cutoffs $D_-, D_+$ that are not too close to $1$ or to $x$;
- Type $I_2$ sums, which are basically of the form $\sum_{d \leq D} a_d \sum_{m_1, m_2: x < d m_1 m_2 \leq x+H} \overline{F}(g(d m_1 m_2)\Gamma)$ for some weights $a_d$ of controlled size and some cutoff $D$ that is not too large.
The precise ranges of the cutoffs depend on the choice of $f$; our methods fail once these cutoffs pass a certain threshold, and this is the reason for the exponents being what they are in our main results.
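To get a feel for why Type $I$ sums are the most tractable, consider the simplest abelian model in which the nilsequence is a linear phase $e(\alpha n)$: after fixing $d$, the inner sum over $m$ is a geometric series, bounded by $\min(H/d, \tfrac{1}{2\|d\alpha\|})$ where $\|\cdot\|$ denotes distance to the nearest integer. The sketch below checks this standard bound numerically with illustrative weights $a_d = 1$; it is a toy model only, not the argument of the paper:

```python
import cmath
import math

def e(t):
    """The additive character e(t) = exp(2*pi*i*t)."""
    return cmath.exp(2j * math.pi * t)

def dist_to_int(t):
    """Distance from t to the nearest integer."""
    return abs(t - round(t))

def inner_sum(alpha, d, x, H):
    """Geometric series: sum of e(alpha*d*m) over m with x < d*m <= x+H."""
    return sum(e(alpha * d * m) for m in range(x // d + 1, (x + H) // d + 1))

alpha, x, H, D = math.sqrt(2), 10_000, 1_000, 50

total = 0.0
for d in range(1, D + 1):
    num_terms = (x + H) // d - x // d
    bound = min(num_terms, 1 / (2 * dist_to_int(alpha * d)))
    inner = abs(inner_sum(alpha, d, x, H))
    assert inner <= bound + 1e-6  # standard geometric-series bound
    total += inner
print(f"Type I model sum at most {total:.1f}, versus trivial bound ~ H log D = {H * math.log(D):.0f}")
```

The point is that for a “minor arc” $\alpha$ (here $\sqrt{2}$), most values of $d$ have $\|d\alpha\|$ bounded away from zero, so almost every inner sum exhibits essentially complete cancellation; Type $II$ and Type $I_2$ sums offer no such free inner summation, which is why they require the new arguments described below.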
The Type $I$ sums involving nilsequences can be treated by methods similar to those in this previous paper of Ben Green and myself; the main innovations are in the treatment of the Type $II$ and Type $I_2$ sums.
For the Type $II$ sums, one can split into the “abelian” case in which (after some Fourier decomposition) the nilsequence is basically of the form $e(P(n))$ for some polynomial $P$, and the “non-abelian” case in which $G$ is non-abelian and the nilsequence exhibits non-trivial oscillation in a central direction. In the abelian case we can adapt arguments of Matomäki and Shao, which use Cauchy-Schwarz and the equidistribution properties of polynomials to obtain good bounds unless $e(P(n))$ is “major arc” in the sense that it resembles (or “pretends to be”) $\chi(n) n^{it}$ for some Dirichlet character $\chi$ and some frequency $t$, but in this case one can use classical multiplicative number theory methods to control the correlation. It turns out that the non-abelian case can be treated similarly. After applying Cauchy-Schwarz, one ends up analyzing the equidistribution of the four-variable polynomial sequence

$$(n, m, n', m') \mapsto (g(nm)\Gamma, g(n'm)\Gamma, g(nm')\Gamma, g(n'm')\Gamma)$$
as $n, m, n', m'$ range in various dyadic intervals. Using the known multidimensional equidistribution theory of polynomial maps in nilmanifolds, one can eventually show in the non-abelian case that this sequence either has enough equidistribution to give cancellation, or else the nilsequence involved can be replaced with one from a lower dimensional nilmanifold, in which case one can apply an induction hypothesis.

For the Type $I_2$ sum, a model sum to study is

$$\sum_{x < n \leq x+H} d_2(n) \overline{F}(g(n)\Gamma)$$
which one can expand as

$$\sum_{n, m: x < nm \leq x+H} \overline{F}(g(nm)\Gamma).$$

We experimented with a number of ways to treat this type of sum (including automorphic form methods, or methods based on the Voronoi formula or van der Corput’s inequality), but somewhat to our surprise, the most efficient approach was an elementary one, in which one uses the Dirichlet approximation theorem to decompose the hyperbolic region $\{ (n,m) \in \mathbb{N}^2: x < nm \leq x+H \}$ into a number of arithmetic progressions, and then uses equidistribution theory to establish cancellation of sequences such as $\overline{F}(g(nm)\Gamma)$ on the majority of these progressions. As it turns out, this strategy works well in the regime $\theta > 1/3$ unless the nilsequence involved is “major arc”, but the latter case is treatable by existing methods as discussed previously; this is why the exponent $\theta$ for our $d_2$ result can be as low as $1/3$.

In a sequel to this paper (currently in preparation), we will obtain analogous results for almost all intervals $(x, x+H]$ with $x$ in the range $[X, 2X]$, in which we will be able to lower $\theta$ all the way to zero.
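The Dirichlet approximation step can be illustrated in miniature: for any real $\alpha$ and any $Q \geq 1$ there is a rational $p/q$ with $1 \leq q \leq Q$ and $|q\alpha - p| \leq 1/(Q+1)$, and along an arithmetic progression of common difference $q$ a linear phase $e(\alpha n)$ then drifts only very slowly, which is what makes such progressions the natural pieces on which to run equidistribution arguments. A hedged sketch (brute-force search in place of the continued fraction algorithm, and $\alpha = \pi$ purely for illustration):

```python
import math

def dirichlet_approx(alpha, Q):
    """Brute-force Dirichlet approximation: the q in [1, Q] minimising ||q*alpha||,
    where ||.|| is the distance to the nearest integer.  By the pigeonhole
    principle the minimum satisfies ||q*alpha|| <= 1/(Q+1)."""
    q = min(range(1, Q + 1), key=lambda q: abs(q * alpha - round(q * alpha)))
    return round(q * alpha), q

alpha, Q = math.pi, 1000
p, q = dirichlet_approx(alpha, Q)
assert abs(q * alpha - p) <= 1 / (Q + 1)  # Dirichlet's theorem

# Along the progression n = n0 + q*k, the fractional part of alpha*n moves by
# only ||q*alpha|| per step, so e(alpha*n) is essentially constant over many steps.
drift = abs(q * alpha - p)
print(f"alpha ~ {p}/{q}, phase drift per step along the progression: {drift:.2e}")
```

Here the search returns the classical approximation $355/113$ to $\pi$, with a phase drift of order $10^{-5}$ per step; on progressions where no such slow drift is available, the phase equidistributes and one gets cancellation instead.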
12 comments
11 April, 2022 at 6:46 am
Will Sawin
Beautiful result!
> the ranges of exponents and savings given here seem to be the limit of current methods unless one assumes additional hypotheses, such as GRH.
What sort of improvements could be made under GRH?
11 April, 2022 at 1:53 pm
Terence Tao
Certainly on GRH I expect all the strongly logarithmic savings can be improved to power savings $x^{-c}$ for some $c > 0$. There should also be some improvements on the value of $\theta$, because GRH should allow one to get better control on major arc sums like $\sum_{x < n \leq x+H} \Lambda(n) \chi(n) e(-\alpha n)$ for various $\chi, \alpha$, and this should move the needle on the constraints on $\theta$. Note though that even for the extremely well studied sum $\sum_{x < n \leq x+H} \Lambda(n)$ we don't know how to get $\theta$ below 1/2 even on GRH, so there are still limitations to this method.
Also for $f = d_k$ it may be possible to use the more sophisticated progress on the divisor sum problem of estimating the error term in $\sum_{n \leq x} d_k(n)$ to lower these exponents further, although our methods for tackling $d_2$ seem to have a hard barrier at $\theta = 1/3$.
15 April, 2022 at 7:56 am
Liewyee
I have to leave with a pity, because someone we know, which you choose to care, thanks for your polite company, anyway… goodbye and good luck…
11 April, 2022 at 10:06 am
Ian Finn
“to obtain bounded on sums” –> I think you mean bounds?
[corrected, thanks – T.]
11 April, 2022 at 12:12 pm
Anonymous
Are these methods sufficient to extend the estimates for smaller values of $\theta$?
11 April, 2022 at 2:05 pm
Terence Tao
Our arguments for $f = \mu, \Lambda$ all hit the same barrier at $\theta = 5/8$. The model problem is to obtain some non-trivial asymptotic for

$$\sum_{x < n \leq x+H} \Lambda(n) e(-\alpha n)$$
with $H = x^{5/8}$ and $\alpha$ a real frequency (note that by a suitable Taylor expansion one can approximate $n^{it}$ by $e(P(n))$ for a suitable polynomial $P$). The most dangerous contribution to this sum comes from the case where all four factors (arising from the Heath-Brown decomposition) are comparable to $x^{1/4}$. Here it turns out that all of our techniques (Type $I$ methods, Type $II$ methods, Type $I_2$ methods, or major arc methods) break down once $\theta$ dips below $5/8$ (in fact only the major arc methods come close to tackling this sum in this regime). (This obstruction was known to previous authors such as Zhan and Baker-Harman-Pintz; unfortunately we do not have anything new to say about this obstacle.) As mentioned in my other comment, though, one can presumably improve this exponent on GRH. Also, any improvement on the $5/8$ exponent for the $\Lambda$ problem is likely to also lead to similar improvements for $\mu$, or for $d_k$ (the model problem above is in some sense the “only” thing preventing improvement of the 5/8 exponent).
There may be more hope in improving the 5/9 exponent for $d_k$. This estimate is basically a consequence of the 1/3 exponent estimate for $d_2$ and the triangle inequality. Any improvement at all over the crude triangle inequality here should lead to some gain, but we were not able to secure any additional cancellation for this purpose.
12 April, 2022 at 12:03 pm
Adrian Fellhauer
I’ve just been reading about https://en.wikipedia.org/wiki/Jang_Yeong-sil , who was promoted to a major scientific figure under Sejong the Great, despite coming from a lower-class social background.
Jesus Christ, how long shall my results remain un-used?
If I survive, I’ll get a readable version on-line next month (there was indeed a minor mistake; the n-th power does not generalise Buchstab’s function adequately, but the function which is obtained instead satisfies the same limit behaviour). I’m even in the process of learning about the geometry of numbers, so that I can calculate the error integrals for prime ideals of a number ring.
It’s just not fair.
14 April, 2022 at 10:49 pm
Jas, the Physicist
Just keep going. The Internet will connect you to who needs to see it.
16 May, 2022 at 1:37 am
Adrian Fellhauer
I proudly present the corrected version of my calculations:
Click to access THE-ASYMPTOTIC-DISTRIBUTION-OF-PRIME-ELEMENTS.pdf
I don’t wish for a conflict, but I’m beginning to fail to care about my work being ignored by the experts. Instead, I’m beginning to re-prove many mathematical theorems in simpler ways, and also to introduce new terminology which makes more sense. (How on the Earth is a border-free set “open”? How is a limit-faithful function continuous, even if it is defined on two disjoint intervals? And how are the Gaussian integers, a classic example for an arithmonid, a ring???) The new terminology will prevail in the end.
See for example my proof of the Nullstellensatz, which is, in my opinion, much more in the spirit of Grothendieck than the other ones: https://mathoverflow.net/questions/15226/elementary-interesting-proofs-of-the-nullstellensatz/410619#410619
While I’m at it, let me give you a bit of advice. Instead of defeatistically giving up on Sendov’s conjecture, why don’t you lower the exponent to zero? (Similar for other “almost proved” conjectures.) Why don’t you give courses on your good results, such as sets with few ordinary lines, the Conze-Lesigne classification, the joint work with Van Vu etc.? Can these proofs be simplified? Can parts of the proofs be re-used for other results?
Finally: There are enough conjectures for all of us.
21 April, 2022 at 2:12 am
Anonymous
Is it possible that the required size of the averaging interval H is somehow dependent on the algorithmic complexity of the function f(n)?
22 April, 2022 at 9:04 am
Terence Tao
I think there is only a loose connection at best. These sorts of uniformity results aim to establish statistical pseudorandomness properties of arithmetic functions such as the Mobius function in short intervals. Statistical pseudorandomness is somewhat analogous to computational complexity pseudorandomness, but there is no tight relationship between the two properties. In any event, lower bounds on the computational complexity of functions such as the Mobius function are extremely difficult to establish and unlikely to be the route to make further progress on these statistical uniformity questions.
5 May, 2022 at 4:49 am
Liewyee
I take back what I said… hope it's not too late… so how is everything going? :-)