
Joni Teräväinen and I have just uploaded to the arXiv our paper "The structure of correlations of multiplicative functions at almost all scales, with applications to the Chowla and Elliott conjectures". This is a sequel to our previous paper that studied logarithmic correlations of the form

$$f(a) := \lim{}^*_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{g_0(n+ah_0) \cdots g_k(n+ah_k)}{n},$$

where $g_0,\dots,g_k$ were bounded multiplicative functions, $h_0,\dots,h_k$ were fixed shifts, $x$ was a quantity going off to infinity, and $\lim^*$ was a generalised limit functional. Our main technical result asserted that these correlations were necessarily the uniform limit of periodic functions $f_i$. Furthermore, if $g_0 \cdots g_k$ (weakly) pretended to be a Dirichlet character $\chi$, then the $f_i$ could be chosen to be $\chi$-isotypic in the sense that $f_i(ab) = f_i(a) \chi(b)$ whenever $a,b$ are integers with $b$ coprime to the periods of $\chi$ and $f_i$; otherwise, if $g_0 \cdots g_k$ did not weakly pretend to be any Dirichlet character $\chi$, then $f$ vanished completely. This was then used to verify several cases of the logarithmically averaged Elliott and Chowla conjectures.

The purpose of this paper was to investigate the extent to which the methods could be extended to non-logarithmically averaged settings. For our main technical result, we now considered the unweighted averages

$$f_x(a) := \mathbb{E}_{n \leq x}\ g_0(n+ah_0) \cdots g_k(n+ah_k),$$

where $x$ is an additional parameter. Our main result was now as follows. If $g_0 \cdots g_k$ did not weakly pretend to be a twisted Dirichlet character $n \mapsto \chi(n) n^{it}$, then $f_x(a)$ converged to zero on (doubly logarithmic) average as $x \to \infty$. If instead $g_0 \cdots g_k$ did pretend to be such a twisted Dirichlet character, then $f_x(a) x^{-it}$ converged on (doubly logarithmic) average to a limit $f(a)$ of $\chi$-isotypic functions $f_i$. Thus, roughly speaking, one has the approximation

$$f_x(a) \approx f(a) x^{it}$$

for most $x$.

Informally, this says that at almost all scales $x$ (where "almost all" means "outside of a set of logarithmic density zero"), the non-logarithmic averages behave much like their logarithmic counterparts except for a possible additional twisting by an Archimedean character $x \mapsto x^{it}$ (which interacts with the Archimedean parameter $x$ in much the same way that the Dirichlet character $\chi$ interacts with the non-Archimedean parameter $a$). One consequence of this is that most of the recent results on the logarithmically averaged Chowla and Elliott conjectures can now be extended to their non-logarithmically averaged counterparts, so long as one excludes a set of exceptional scales of logarithmic density zero. For instance, the Chowla conjecture

$$\mathbb{E}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = o(1)$$

is now established for $k$ either odd or equal to $2$, so long as one excludes an exceptional set of scales.

In the logarithmically averaged setup, the main idea was to combine two very different pieces of information on the correlation function $f$. The first, coming from recent results in ergodic theory, was to show that $f$ was well approximated in some sense by a nilsequence. The second was to use the "entropy decrement argument" to obtain an approximate isotopy property of the form

$$f(ap)\ g_0 \cdots g_k(p) \approx f(a)$$

for "most" primes $p$ and integers $a$. Combining the two facts, one eventually finds that only the almost periodic components of the nilsequence are relevant.

In the current situation, each $f_x(a)$ is approximated by a nilsequence, but the nilsequence can vary with $x$ (although there is some useful "Lipschitz continuity" of this nilsequence with respect to the parameter $x$). Meanwhile, the entropy decrement argument gives an approximation basically of the form

$$f_x(ap)\ g_0 \cdots g_k(p) \approx f_x(a)$$

for "most" $p$, $a$, and $x$. The arguments then proceed largely as in the logarithmically averaged case. A key lemma to handle the dependence on the new parameter $x$ is the following cohomological statement: if one has a map $x \mapsto \xi(x)$ that is a quasimorphism in the sense that $\xi(xy) = \xi(x) + \xi(y) + O(\varepsilon)$ for all $x,y$ and some small $\varepsilon$, then there exists a real number $t$ such that $\xi(x) = t \log x + O(\varepsilon)$ for all $x$. This is achieved by applying a standard "cocycle averaging argument" to the cocycle $(x,y) \mapsto \xi(xy) - \xi(x) - \xi(y)$.
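The quasimorphism lemma is easy to test numerically in a toy model (illustrative only; the slope $t$ and the error size $\varepsilon$ below are assumptions of the model, not quantities from the paper): perturbing $t \log x$ by a bounded error produces a quasimorphism with a defect of size $O(\varepsilon)$, and the slope is recovered from $\xi(x)/\log x$ at large $x$.

```python
import math
import random

# Toy model of the quasimorphism lemma: xi(x) = t*log(x) plus a bounded
# error of size eps/2 is a quasimorphism with defect O(eps), and the
# hidden slope t is recoverable from xi(x)/log(x) at large x.

random.seed(0)
t = 0.7      # hidden slope (an assumption of this toy model)
eps = 0.01   # size of the allowed error

noise = {}
def xi(x):
    # bounded error, chosen once per input and then frozen
    if x not in noise:
        noise[x] = random.uniform(-eps / 2, eps / 2)
    return t * math.log(x) + noise[x]

# quasimorphism defect xi(x*y) - xi(x) - xi(y) is at most 1.5 * eps here
defect = max(abs(xi(a * b) - xi(a) - xi(b))
             for a in range(2, 50) for b in range(2, 50))
print(defect <= 1.5 * eps)   # True

# the slope t is recovered up to O(eps / log x)
print(abs(xi(10**6) / math.log(10**6) - t) < 0.001)   # True
```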

It would of course be desirable to not have the set of exceptional scales. We only know of one (implausible) scenario in which we can do this, namely when one has far fewer (in particular, subexponentially many) sign patterns for (say) the Liouville function than predicted by the Chowla conjecture. In this scenario (roughly analogous to the "Siegel zero" scenario in multiplicative number theory), the entropy of the Liouville sign patterns is so small that the entropy decrement argument becomes powerful enough to control all scales rather than almost all scales. On the other hand, this scenario seems to be self-defeating, in that it allows one to establish a large number of cases of the Chowla conjecture, yet the full Chowla conjecture is inconsistent with having unusually few sign patterns. Still, it hints that future work in this direction may need to split into "low entropy" and "high entropy" cases, in analogy to how many arguments in multiplicative number theory have to split into the "Siegel zero" and "no Siegel zero" cases.

Let $\lambda$ be the Liouville function, thus $\lambda(n)$ is defined to equal $+1$ when $n$ is the product of an even number of primes, and $-1$ when $n$ is the product of an odd number of primes. The Chowla conjecture asserts that $\lambda$ has the statistics of a random sign pattern, in the sense that

$$\lim_{x \to \infty} \mathbb{E}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0 \ \ \ \ \ (1)$$

for all $k \geq 1$ and all distinct natural numbers $h_1, \dots, h_k$, where we use the averaging notation

$$\mathbb{E}_{n \leq x} f(n) := \frac{1}{x} \sum_{n \leq x} f(n).$$

For $k=1$, this conjecture is equivalent to the prime number theorem (as discussed in this previous blog post), but the conjecture remains open for any $k \geq 2$.
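For concreteness, here is a direct (unoptimised) implementation of the Liouville function, together with a numerical look at the $k=1$ average; the cutoff $10^5$ is an arbitrary choice for illustration.

```python
# A direct implementation of the Liouville function lambda(n) = (-1)^Omega(n),
# where Omega(n) counts prime factors with multiplicity, plus a quick look at
# the k=1 average (1/x) * sum_{n <= x} lambda(n), which the prime number
# theorem asserts tends to zero.

def liouville(n: int) -> int:
    """Return (-1)**Omega(n) for a positive integer n (trial division)."""
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

assert liouville(1) == 1     # Omega(1) = 0
assert liouville(2) == -1    # one prime factor
assert liouville(12) == -1   # 12 = 2*2*3, so Omega(12) = 3

x = 10**5
mean = sum(liouville(n) for n in range(1, x + 1)) / x
print(mean)   # small, consistent with the prime number theorem
```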

In recent years, it has been realised that one can make more progress on this conjecture if one works instead with the logarithmically averaged version

$$\lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0 \ \ \ \ \ (2)$$

of the conjecture, where we use the logarithmic averaging notation

$$\mathbb{E}^{\log}_{n \leq x} f(n) := \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n}.$$
Using the summation by parts (or telescoping series) identity

$$\mathbb{E}^{\log}_{n \leq x} f(n) = \frac{1}{\log x} \int_1^x \mathbb{E}_{n \leq t} f(n)\ \frac{dt}{t} + O\Big( \frac{1}{\log x} \Big),$$

valid for any bounded $f$, it is not difficult to show that the Chowla conjecture (1) for a given $k$ implies the logarithmically averaged conjecture (2). However, the converse implication is not at all clear. For instance, for $k=1$, we have already mentioned that the Chowla conjecture

$$\lim_{x \to \infty} \mathbb{E}_{n \leq x}\ \lambda(n) = 0$$

is equivalent to the prime number theorem; but the logarithmically averaged analogue

$$\lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ \lambda(n) = 0$$

is significantly easier to show (a proof with the Liouville function replaced by the closely related Möbius function is given in this previous blog post). And indeed, significantly more is now known for the logarithmically averaged Chowla conjecture; in this paper of mine I had proven (2) for $k=2$, and in this recent paper with Joni Teravainen, we proved the conjecture for all odd $k$ (with a different proof also given here).
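The two averaging notions can be compared side by side numerically for a two-point correlation (a small experiment, not a proof; the cutoff $10^5$ and the shift $1$ are arbitrary choices):

```python
# Numerical comparison of the Cesaro average (1/x) * sum_{n <= x} and the
# logarithmic average (1/log x) * sum_{n <= x} (1/n) * ... for the two-point
# correlation lambda(n) * lambda(n+1).  Both are expected to be small (the
# logarithmic case is a theorem; the Cesaro case is the open Chowla case).

import math

def liouville(n: int) -> int:
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

x = 10**5
lam = [0] + [liouville(n) for n in range(1, x + 2)]   # lam[n] = lambda(n)

cesaro = sum(lam[n] * lam[n + 1] for n in range(1, x + 1)) / x
logavg = sum(lam[n] * lam[n + 1] / n for n in range(1, x + 1)) / math.log(x)

print(cesaro, logavg)   # both small at this cutoff
```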

In view of this emerging consensus that the logarithmically averaged Chowla conjecture was easier than the ordinary Chowla conjecture, it was thus somewhat of a surprise for me to read a recent paper of Gomilko, Kwietniak, and Lemanczyk, who (among other things) established the following statement:

Theorem 1. Assume that the logarithmically averaged Chowla conjecture (2) is true for all $k$. Then there exists a sequence $x_m$ going to infinity such that the Chowla conjecture (1) is true for all $k$ along that sequence, that is to say

$$\lim_{m \to \infty} \mathbb{E}_{n \leq x_m}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$$

for all $k$ and all distinct $h_1, \dots, h_k$.

This implication does not use any special properties of the Liouville function (other than that it is bounded), and in fact proceeds by ergodic theoretic methods, focusing in particular on the ergodic decomposition of invariant measures of a shift into ergodic measures. Ergodic methods have proven remarkably fruitful in understanding these sorts of number theoretic and combinatorial problems, as could already be seen by the ergodic theoretic proof of Szemerédi’s theorem by Furstenberg, and more recently by the work of Frantzikinakis and Host on Sarnak’s conjecture. (My first paper with Teravainen also uses ergodic theory tools.) Indeed, many other results in the subject were first discovered using ergodic theory methods.

On the other hand, many results in this subject that were first proven ergodic theoretically have since been reproven by more combinatorial means; my second paper with Teravainen is an instance of this. As it turns out, one can also prove Theorem 1 by a standard combinatorial (or probabilistic) technique known as the second moment method. In fact, one can prove slightly more:

Theorem 2. Let $k$ be a natural number. Assume that the logarithmically averaged Chowla conjecture (2) is true for $2k$. Then there exists a set $A$ of natural numbers of logarithmic density $1$ (that is, $\lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x} 1_A(n) = 1$) such that

$$\lim_{x \to \infty; x \in A} \mathbb{E}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$$

for any distinct $h_1, \dots, h_k$.

It is not difficult to deduce Theorem 1 from Theorem 2 using a diagonalisation argument. Unfortunately, the known cases of the logarithmically averaged Chowla conjecture ($k=2$ and odd $k$) are currently insufficient to use Theorem 2 for any purpose other than to reprove what is already known to be true from the prime number theorem. (Indeed, the even order cases of Chowla, in either logarithmically averaged or non-logarithmically averaged forms, seem to be far more powerful than the odd order cases; see Remark 1.7 of this paper of myself and Teravainen for a related observation in this direction.)

We now sketch the proof of Theorem 2. For any distinct $h_1, \dots, h_k$, we take a large number $H$ and consider the limiting second moment

$$\limsup_{x \to \infty}\ \mathbb{E}^{\log}_{n \leq x} \Big| \frac{1}{H} \sum_{j=1}^H \lambda(n+j+h_1) \cdots \lambda(n+j+h_k) \Big|^2.$$

We can expand this as

$$\limsup_{x \to \infty}\ \frac{1}{H^2} \sum_{j=1}^H \sum_{j'=1}^H \mathbb{E}^{\log}_{n \leq x}\ \lambda(n+j+h_1) \cdots \lambda(n+j+h_k)\, \lambda(n+j'+h_1) \cdots \lambda(n+j'+h_k).$$

If all the shifts $j+h_1, \dots, j+h_k, j'+h_1, \dots, j'+h_k$ are distinct, the hypothesis (2) (at order $2k$) tells us that the inner averages go to zero as $x \to \infty$. The remaining averages are $O(1)$, and there are $O(H)$ of these averages. We conclude that

$$\limsup_{x \to \infty}\ \mathbb{E}^{\log}_{n \leq x} \Big| \frac{1}{H} \sum_{j=1}^H \lambda(n+j+h_1) \cdots \lambda(n+j+h_k) \Big|^2 \ll \frac{1}{H}. \ \ \ \ \ (3)$$
By Markov’s inequality (and (3), combined with the summation by parts identity relating logarithmic and Cesàro averages), we conclude that for any fixed $H$, there exists a set $A_H$ of natural numbers of upper logarithmic density at least $1 - O(H^{-1/2})$, thus

$$\limsup_{x \to \infty}\ \mathbb{E}^{\log}_{n \leq x} 1_{A_H}(n) \geq 1 - O(H^{-1/2}),$$

such that

$$\mathbb{E}_{n \leq x} \Big| \frac{1}{H} \sum_{j=1}^H \lambda(n+j+h_1) \cdots \lambda(n+j+h_k) \Big|^2 \leq H^{-1/2}$$

for all $x \in A_H$. By deleting at most finitely many elements, we may assume that $A_H$ consists only of elements of size at least $H^4$ (say).

For any $H$, if we let $B_H$ be the union of the $A_{H'}$ for $H' \geq H$, then $B_H$ has logarithmic density $1$. By a diagonalisation argument (using the fact that the set of thresholds $H$ is countable), we can then find a set $A$ of natural numbers of logarithmic density $1$, such that for every $H$, every sufficiently large element of $A$ lies in $B_H$. Thus for every sufficiently large $x$ in $A$, one has

$$\mathbb{E}_{n \leq x} \Big| \frac{1}{H'} \sum_{j=1}^{H'} \lambda(n+j+h_1) \cdots \lambda(n+j+h_k) \Big|^2 \leq H'^{-1/2}$$

for some $H'$ with $H' \geq H$. By Cauchy-Schwarz, this implies that

$$\Big| \mathbb{E}_{n \leq x}\ \frac{1}{H'} \sum_{j=1}^{H'} \lambda(n+j+h_1) \cdots \lambda(n+j+h_k) \Big| \leq H'^{-1/4};$$

interchanging the sums and using the boundedness of $\lambda$ and the lower bound $x \geq H'^4$, this implies that

$$\Big| \mathbb{E}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) \Big| \ll H^{-1/4}.$$

We conclude on taking $H$ to infinity that

$$\lim_{x \to \infty; x \in A} \mathbb{E}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$$

as required.
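The second moment computation above can be sanity-checked in the random model where $\lambda$ is replaced by i.i.d. signs (a toy model of the Chowla prediction, with the illustrative shifts $h_1 = 0$, $h_2 = 1$; none of the parameters below come from the paper):

```python
# Toy check of the second moment bound (3): for i.i.d. random signs s(n),
# the local averages (1/H) * sum_{j <= H} c(n+j) of the correlation sequence
# c(n) = s(n) * s(n+1) have mean square about 1/H.

import random

random.seed(1)
N, H = 200_000, 100
s = [random.choice((-1, 1)) for _ in range(N + H + 10)]
c = [s[n] * s[n + 1] for n in range(N + H + 5)]

window = sum(c[1:H + 1])          # sum of c(n+j), j = 1..H, at n = 0
total = 0.0
for n in range(N):
    total += (window / H) ** 2
    window += c[n + H + 1] - c[n + 1]   # slide the window from n to n+1
second_moment = total / N

print(second_moment)   # close to 1/H = 0.01
```

By Markov's inequality, such a second moment bound forces the local average to be small outside a sparse exceptional set, which is the shape of the argument in the proof sketch.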

Joni Teräväinen and I have just uploaded to the arXiv our paper "Odd order cases of the logarithmically averaged Chowla conjecture", submitted to J. Numb. Thy. Bordeaux. This paper gives an alternate route to one of the main results of our previous paper, and more specifically reproves the asymptotic

$$\sum_{n \leq x} \frac{\lambda(n+h_1) \cdots \lambda(n+h_k)}{n} = o(\log x) \ \ \ \ \ (1)$$

for all odd $k$ and all integers $h_1, \dots, h_k$ (that is to say, all the odd order cases of the logarithmically averaged Chowla conjecture). Our previous argument relied heavily on some deep ergodic theory results of Bergelson-Host-Kra, Leibman, and Le (and was applicable to more general multiplicative functions than the Liouville function $\lambda$); here we give a shorter proof that avoids ergodic theory (but instead requires the Gowers uniformity of the (W-tricked) von Mangoldt function, established in several papers of Ben Green, Tamar Ziegler, and myself). The proof follows the lines sketched in the previous blog post. In principle, due to the avoidance of ergodic theory, the arguments here have a greater chance to be made quantitative; however, at present the known bounds on the Gowers uniformity of the von Mangoldt function are qualitative, except at the $U^2$ level, which is unfortunate since the first non-trivial odd case $k=3$ requires quantitative control at the $U^3$ level. (But it may be possible to make the Gowers uniformity bounds at the $U^3$ level quantitative if one assumes GRH, although when one puts everything together, the actual decay rate obtained in (1) is likely to be poor.)

Joni Teräväinen and I have just uploaded to the arXiv our paper "The structure of logarithmically averaged correlations of multiplicative functions, with applications to the Chowla and Elliott conjectures", submitted to Duke Mathematical Journal. This paper builds upon my previous paper in which I introduced an "entropy decrement method" to prove the two-point (logarithmically averaged) cases of the Chowla and Elliott conjectures. A bit more specifically, I showed that

$$\frac{1}{\log \omega_m} \sum_{x_m/\omega_m \leq n \leq x_m} \frac{g_0(n+h_0)\, g_1(n+h_1)}{n} = o(1) \ \ \ \ \ (1)$$

as $m \to \infty$, whenever $x_m, \omega_m$ were sequences going to infinity, $h_0, h_1$ were distinct integers, and $g_0, g_1$ were $1$-bounded multiplicative functions which were *non-pretentious* in the sense that

$$\sum_{p} \frac{1 - \mathrm{Re}\big( g_j(p) \overline{\chi(p)} p^{-it} \big)}{p} = \infty$$

for all Dirichlet characters $\chi$ and real numbers $t$, for $j = 0, 1$. Thus, for instance, one had the logarithmically averaged two-point Chowla conjecture

$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n+h)}{n} = o(\log x)$$

for any fixed non-zero $h$, where $\lambda$ was the Liouville function.

One would certainly like to extend these results to higher order correlations than the two-point correlations. This looks to be difficult (though perhaps not completely impossible if one allows for logarithmic averaging): in a previous paper I showed that achieving this in the context of the Liouville function would be equivalent to resolving the logarithmically averaged Sarnak conjecture, as well as establishing logarithmically averaged local Gowers uniformity of the Liouville function. However, in this paper we are able to avoid having to resolve these difficult conjectures to obtain partial results towards the (logarithmically averaged) Chowla and Elliott conjectures. For the Chowla conjecture, we can obtain all odd order correlations, in that

$$\sum_{n \leq x} \frac{\lambda(n+h_1) \cdots \lambda(n+h_k)}{n} = o(\log x) \ \ \ \ \ (2)$$

for all odd $k$ and all integers $h_1, \dots, h_k$ (which, in the odd order case, are no longer required to be distinct). (Superficially, this looks like we have resolved "half of the cases" of the logarithmically averaged Chowla conjecture; but it seems the odd order correlations are significantly easier than the even order ones. For instance, because of the Katai-Bourgain-Sarnak-Ziegler criterion, one can basically deduce the odd order cases of (2) from the even order cases, after allowing for some dilations in the shifts $h_i$.)

For the more general Elliott conjecture, we can show that

$$\sum_{n \leq x} \frac{g_0(n+h_0) \cdots g_k(n+h_k)}{n} = o(\log x)$$

for any $k$, any distinct integers $h_0, \dots, h_k$ and any bounded multiplicative functions $g_0, \dots, g_k$, unless the product $g_0 \cdots g_k$ *weakly pretends to be a Dirichlet character $\chi$* in the sense that

$$\sum_{p \leq x} \frac{1 - \mathrm{Re}\big( g_0 \cdots g_k(p)\, \overline{\chi(p)} \big)}{p} = o(\log \log x).$$

This can be seen to imply (2) as a special case. Even when $g_0 \cdots g_k$ *does* pretend to be a Dirichlet character $\chi$, we can still say something: if the limits

$$f(a) := \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{g_0(n+a h_0) \cdots g_k(n+a h_k)}{n}$$

exist for each $a$ (which can be guaranteed if we pass to a suitable subsequence), then $f$ is the uniform limit of periodic functions $f_i$, each of which is $\chi$-*isotypic* in the sense that $f_i(ab) = f_i(a) \chi(b)$ whenever $a, b$ are integers with $b$ coprime to the periods of $\chi$ and $f_i$. This does not pin down the value of any single correlation $f(a)$, but does put significant constraints on how these correlations may vary with $a$.

Among other things, this allows us to show that all possible length four sign patterns of the Liouville function occur with positive density, and all possible length three sign patterns occur with the conjectured logarithmic density of $1/8$. (In a previous paper with Matomaki and Radziwill, we obtained comparable results for length three patterns of Liouville and length two patterns of Möbius.)

To describe the argument, let us focus for simplicity on the case of the Liouville correlations

$$f(a) := \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(n+a) \lambda(n+2a) \cdots \lambda(n+ka)}{n}, \ \ \ \ \ (3)$$

assuming for sake of discussion that all limits exist. (In the paper, we instead use the device of generalised limits, as discussed in this previous post.) The idea is to combine together two rather different ways to control this function $f$. The first proceeds by the entropy decrement method mentioned earlier, which roughly speaking works as follows. Firstly, we pick a prime $p$ and observe that $\lambda(pn) = -\lambda(n)$ for any $n$, which allows us to rewrite (3) as

$$f(a) = (-1)^k \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(pn+ap) \lambda(pn+2ap) \cdots \lambda(pn+kap)}{n}.$$

Making the change of variables $n' = pn$, we obtain

$$f(a) = (-1)^k \lim_{x \to \infty} \frac{1}{\log x} \sum_{n' \leq px} \frac{\lambda(n'+ap) \lambda(n'+2ap) \cdots \lambda(n'+kap)}{n'}\ p\, 1_{p | n'}.$$

The difference between the constraint $n' \leq px$ and the constraint $n' \leq x$ is negligible in the limit (here is where we crucially rely on the log-averaging), hence

$$f(a) = (-1)^k \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(n+ap) \lambda(n+2ap) \cdots \lambda(n+kap)}{n}\ p\, 1_{p | n},$$

and thus by (3) we have

$$f(a) = (-1)^k f(ap) + (-1)^k \lim_{x \to \infty} \frac{1}{\log x} \sum_{n \leq x} \frac{\lambda(n+ap) \lambda(n+2ap) \cdots \lambda(n+kap)}{n} \big( p\, 1_{p | n} - 1 \big).$$

The entropy decrement argument can be used to show that the latter limit is small for most $p$ (roughly speaking, this is because the factors $p\, 1_{p|n} - 1$ behave like independent random variables of mean zero as $p$ varies, so that concentration of measure results such as Hoeffding’s inequality can apply, after using entropy inequalities to decouple somewhat these random variables from the factors $\lambda(n+ap) \cdots \lambda(n+kap)$). We thus obtain the approximate isotopy property

$$f(ap) \approx (-1)^k f(a) \ \ \ \ \ (4)$$

for most primes $p$.
On the other hand, by the Furstenberg correspondence principle (as discussed in these previous posts), it is possible to express $f(a)$ as a multiple correlation

$$f(a) = \int_X g(T^a x)\, g(T^{2a} x) \cdots g(T^{ka} x)\ d\mu(x)$$

for some probability space $(X, \mu)$ equipped with a measure-preserving invertible map $T: X \to X$ and an observable $g$ corresponding to $\lambda$. Using results of Bergelson-Host-Kra, Leibman, and Le, this allows us to obtain a decomposition of the form

$$f(a) = f_1(a) + f_2(a), \ \ \ \ \ (5)$$

where $f_1$ is a nilsequence, and $f_2$ goes to zero in density (even along the primes, or constant multiples of the primes). The original work of Bergelson-Host-Kra required ergodicity on $X$, which is very definitely a hypothesis that is not available here; however, the later work of Leibman removed this hypothesis, and the work of Le refined the control on $f_2$ so that one still has good control when restricting to primes, or constant multiples of primes.

Ignoring the small error $f_2$, we can now combine (4) and (5) to conclude that

$$f_1(ap) \approx (-1)^k f_1(a) \hbox{ for most primes } p.$$

Using the equidistribution theory of nilsequences (as developed in this previous paper of Ben Green and myself), one can break up $f_1$ further into a periodic piece $f_0$ and an "irrational" or "minor arc" piece $f_3$. The contribution of the minor arc piece $f_3$ can be shown to mostly cancel itself out after dilating by primes $p$ and averaging, thanks to Vinogradov-type bilinear sum estimates (transferred to the primes). So we end up with

$$f(a) \approx f_0(a),$$

which already shows (heuristically, at least) the claim that $f$ can be approximated by periodic functions $f_0$ which are isotopic in the sense that

$$f_0(ap) \approx (-1)^k f_0(a) \hbox{ for most primes } p.$$

But if $k$ is odd, one can use Dirichlet’s theorem on primes in arithmetic progressions to restrict to primes $p$ that are $1$ modulo the period of $f_0$, and conclude now that $f_0$ vanishes identically, which (heuristically, at least) gives (2).
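The identity $\lambda(pn) = -\lambda(n)$ driving the entropy decrement step is just complete multiplicativity (multiplying by a prime adds exactly one prime factor), and is cheap to verify exhaustively for small parameters:

```python
# Check complete multiplicativity of the Liouville function at primes:
# lambda(p*n) = -lambda(n), since Omega(p*n) = Omega(n) + 1.

def liouville(n: int) -> int:
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

for p in (2, 3, 5, 7, 11):
    for n in range(1, 2000):
        assert liouville(p * n) == -liouville(n)
print("lambda(p*n) == -lambda(n) verified for small p, n")
```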

The same sort of argument works to give the more general bounds on correlations of bounded multiplicative functions. But for the specific task of proving (2), we initially used a slightly different argument that avoids using the ergodic theory machinery of Bergelson-Host-Kra, Leibman, and Le, but replaces it instead with the Gowers uniformity norm theory used to count linear equations in primes. Basically, by averaging (4) in $p$ using the "$W$-trick", as well as known facts about the Gowers uniformity of the von Mangoldt function, one can obtain an approximation of the form

$$f(a) \approx (-1)^k\, \mathbb{E}_b\, f(ab),$$

where $b$ ranges over a large range of integers coprime to some primorial $W$. On the other hand, by iterating (4) we have

$$f(apq) \approx f(a)$$

for most semiprimes $pq$, and by again averaging over semiprimes one can obtain an approximation of the form

$$f(a) \approx \mathbb{E}_b\, f(ab).$$

For $k$ odd, one can combine the two approximations to conclude that $f(a) = 0$. (This argument is not given in the current paper, but we plan to detail it in a subsequent one.)

Given a function $f: \mathbb{N} \to \{-1,+1\}$ on the natural numbers taking values in $\{-1,+1\}$, one can invoke the Furstenberg correspondence principle to locate a measure preserving system – a probability space $(X, \mu)$ together with a measure-preserving shift $T: X \to X$ (or equivalently, a measure-preserving $\mathbb{Z}$-action on $X$) – together with a measurable function (or "observable") $g: X \to \{-1,+1\}$ that has essentially the same statistics as $f$ in the sense that

$$\int_X g(T^{h_1} x) \cdots g(T^{h_k} x)\ d\mu(x) = \lim{}^*_{x \to \infty} \mathbb{E}_{n \leq x}\ f(n+h_1) \cdots f(n+h_k) \ \ \ \ \ (1)$$

for any integers $h_1, \dots, h_k$, where $\lim^*$ denotes a generalised limit. In particular, one has

$$\int_X g(T^{h_1} x) \cdots g(T^{h_k} x)\ d\mu(x) = \lim_{x \to \infty} \mathbb{E}_{n \leq x}\ f(n+h_1) \cdots f(n+h_k)$$

whenever the limit on the right-hand side exists. We will refer to the system $(X, T, \mu)$ together with the designated function $g$ as a *Furstenberg limit* of the sequence $f$. These Furstenberg limits capture some, but not all, of the asymptotic behaviour of $f$; roughly speaking, they control the typical "local" behaviour of $f$, involving correlations such as $\mathbb{E}_{n \leq x} f(n+h_1) \cdots f(n+h_k)$ in the regime where the shifts $h_1,\dots,h_k$ are much smaller than $x$. However, the control on error terms here is usually only qualitative at best, and one usually does not obtain non-trivial control on correlations in which the $h_1,\dots,h_k$ are allowed to grow at some significant rate with $x$ (e.g. like some power of $x$).

The correspondence principle is discussed in these previous blog posts. One way to establish the principle is by introducing a Banach limit $\lim^*$ that extends the usual limit functional on the subspace of $\ell^\infty(\mathbb{N})$ consisting of convergent sequences while still having operator norm one. Such functionals cannot be constructed explicitly, but can be proven to exist (non-constructively and non-uniquely) using the Hahn-Banach theorem; one can also use a non-principal ultrafilter here if desired. One can then seek to construct a system $(X, T, \mu)$ and a measurable function $g$ for which one has the statistics

$$\int_X g(T^{h_1} x) \cdots g(T^{h_k} x)\ d\mu(x) = \lim{}^*_{x \to \infty} \mathbb{E}_{n \leq x}\ f(n+h_1) \cdots f(n+h_k) \ \ \ \ \ (2)$$

for all $h_1, \dots, h_k$. One can explicitly construct such a system as follows. One can take $X$ to be the Cantor space $\{-1,+1\}^{\mathbb{Z}}$ with the product $\sigma$-algebra and the shift

$$T\big( (x_n)_{n \in \mathbb{Z}} \big) := (x_{n+1})_{n \in \mathbb{Z}},$$

with the function $g: X \to \{-1,+1\}$ being the coordinate function at zero:

$$g\big( (x_n)_{n \in \mathbb{Z}} \big) := x_0$$

(so in particular $g\big( T^h (x_n)_{n \in \mathbb{Z}} \big) = x_h$ for any $h \in \mathbb{Z}$). The only thing remaining is to construct the invariant measure $\mu$. In order to be consistent with (2), one must have

$$\mu\big( \{ (x_n)_{n \in \mathbb{Z}}: x_{h_1} = \epsilon_1, \dots, x_{h_k} = \epsilon_k \} \big) = \lim{}^*_{x \to \infty} \mathbb{E}_{n \leq x}\ 1_{f(n+h_1)=\epsilon_1} \cdots 1_{f(n+h_k)=\epsilon_k}$$

for any distinct integers $h_1, \dots, h_k$ and signs $\epsilon_1, \dots, \epsilon_k$. One can check that this defines a premeasure on the Boolean algebra of $\{-1,+1\}^{\mathbb{Z}}$ defined by cylinder sets, and the existence of $\mu$ then follows from the Hahn-Kolmogorov extension theorem (or the closely related Kolmogorov extension theorem). One can then check that the correspondence (2) holds, and that $\mu$ is translation-invariant; the latter comes from the translation invariance of the (Banach-)Cesàro averaging operation $f \mapsto \lim^*_{x \to \infty} \mathbb{E}_{n \leq x} f(n)$. A variant of this construction shows that the Furstenberg limit is unique up to equivalence if and only if all the limits appearing in (1) actually exist.
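The cylinder-set densities that define the premeasure can be estimated empirically for the Liouville function; for instance, at the modest cutoff $10^5$ (an arbitrary choice), every length-three sign pattern already appears with a healthy frequency:

```python
# Empirical cylinder-set frequencies for the Liouville function: count how
# often each length-3 sign pattern (lambda(n), lambda(n+1), lambda(n+2))
# occurs for n up to x.  All 8 patterns show up with comparable frequencies.

from collections import Counter

def liouville(n: int) -> int:
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

x = 10**5
lam = [liouville(n) for n in range(1, x + 4)]
patterns = Counter(tuple(lam[i:i + 3]) for i in range(x))

assert len(patterns) == 8          # every length-3 pattern occurs
for pat, cnt in sorted(patterns.items()):
    print(pat, cnt / x)            # comparable frequencies for all patterns
```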

One can obtain a slightly tighter correspondence by using a smoother average than the Cesàro average. For instance, one can use the logarithmic Cesàro averages $\lim^*_{x \to \infty} \mathbb{E}^{\log}_{n \leq x} f(n)$, where $\mathbb{E}^{\log}_{n \leq x} f(n) := \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n}$, in place of the Cesàro average $\lim^*_{x \to \infty} \mathbb{E}_{n \leq x} f(n)$, thus one replaces (2) by

$$\int_X g(T^{h_1} x) \cdots g(T^{h_k} x)\ d\mu(x) = \lim{}^*_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ f(n+h_1) \cdots f(n+h_k).$$

Whenever the Cesàro average of a bounded sequence exists, then the logarithmic Cesàro average exists and is equal to the Cesàro average. Thus, a Furstenberg limit constructed using logarithmic Banach-Cesàro averaging still obeys (1) for all $h_1, \dots, h_k$ when the right-hand side limit exists, but also obeys the more general assertion

$$\int_X g(T^{h_1} x) \cdots g(T^{h_k} x)\ d\mu(x) = \lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ f(n+h_1) \cdots f(n+h_k)$$

whenever the limit of the right-hand side exists.

In a recent paper of Frantzikinakis, the Furstenberg limits of the Liouville function $\lambda$ (with logarithmic averaging) were studied. Some (but not all) of the known facts and conjectures about the Liouville function can be interpreted in the Furstenberg limit. For instance, in a recent breakthrough result of Matomaki and Radziwill (discussed previously here), it was shown that the Liouville function exhibited cancellation on short intervals in the sense that

$$\lim_{X \to \infty} \frac{1}{X} \int_X^{2X} \Big| \frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) \Big|\ dx = 0$$

for any $H = H(X)$ going to infinity as $X \to \infty$. In terms of Furstenberg limits of the Liouville function, this assertion is equivalent to the assertion that

$$\lim_{H \to \infty} \Big\| \frac{1}{H} \sum_{h=1}^H T^h g \Big\|_{L^2(X, \mu)} = 0$$

for all Furstenberg limits $(X, T, \mu, g)$ of Liouville (including those without logarithmic averaging). Invoking the mean ergodic theorem (discussed in this previous post), this assertion is in turn equivalent to the observable $g$ that corresponds to the Liouville function being orthogonal to the invariant factor of $X$; equivalently, the first Gowers-Host-Kra seminorm $\|g\|_{U^1(X)}$ of $g$ (as defined for instance in this previous post) vanishes. The Chowla conjecture, which asserts that

$$\lim_{x \to \infty} \mathbb{E}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$$

for all distinct integers $h_1, \dots, h_k$, is equivalent to the assertion that all the Furstenberg limits of Liouville are equivalent to the Bernoulli system ($\{-1,+1\}^{\mathbb{Z}}$ with the product measure arising from the uniform distribution on $\{-1,+1\}$, with the shift and observable as before). Similarly, the logarithmically averaged Chowla conjecture

$$\lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$$

is equivalent to the assertion that all the Furstenberg limits of Liouville with logarithmic averaging are equivalent to the Bernoulli system. Recently, I was able to prove the two-point version

$$\lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ \lambda(n) \lambda(n+h) = 0 \ \ \ \ \ (3)$$

of the logarithmically averaged Chowla conjecture, for any non-zero integer $h$; this is equivalent to the perfect strong mixing property

$$\int_X g(x)\, g(T^h x)\ d\mu(x) = 0$$

for any Furstenberg limit of Liouville with logarithmic averaging, and any non-zero $h$.

The situation is more delicate with regards to the Sarnak conjecture, which is equivalent to the assertion that

$$\lim_{x \to \infty} \mathbb{E}_{n \leq x}\ \lambda(n)\, a(n) = 0$$

for any bounded zero-entropy sequence $a(n)$ (see this previous blog post for more discussion). Morally speaking, this conjecture should be equivalent to the assertion that any Furstenberg limit of Liouville is disjoint from any zero entropy system, but I was not able to formally establish an implication in either direction due to some technical issues regarding the fact that the Furstenberg limit does not directly control long-range correlations, only short-range ones. (There are however ergodic theoretic interpretations of the Sarnak conjecture that involve the notion of generic points; see this paper of El Abdalaoui, Lemanczyk, and de la Rue.) But the situation is currently better with the logarithmically averaged Sarnak conjecture

$$\lim_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ \lambda(n)\, a(n) = 0,$$

as I was able to show that this conjecture was equivalent to the logarithmically averaged Chowla conjecture, and hence to all Furstenberg limits of Liouville with logarithmic averaging being Bernoulli; I also showed the conjecture was equivalent to local Gowers uniformity of the Liouville function, which is in turn equivalent to the observable $g$ having all Gowers-Host-Kra seminorms vanishing in every Furstenberg limit with logarithmic averaging. In this recent paper of Frantzikinakis, this analysis was taken further, showing that the logarithmically averaged Chowla and Sarnak conjectures were in fact equivalent to the much milder seeming assertion that all Furstenberg limits with logarithmic averaging were ergodic.

Actually, the logarithmically averaged Furstenberg limits have more structure than just a $\mathbb{Z}$-action $T$ on a measure preserving system $(X, \mu)$ with a single observable $g$. Let $\mathrm{Aff}_+(\mathbb{Z})$ denote the semigroup of affine maps $n \mapsto an+b$ on the integers with $a, b$ integers and $a$ positive. Also, let $\hat{\mathbb{Z}}$ denote the profinite integers (the inverse limit of the cyclic groups $\mathbb{Z}/q\mathbb{Z}$). Observe that $\mathrm{Aff}_+(\mathbb{Z})$ acts on $\hat{\mathbb{Z}}$ by taking the inverse limit of the obvious actions of $\mathrm{Aff}_+(\mathbb{Z})$ on $\mathbb{Z}/q\mathbb{Z}$.

Proposition 1 (Enriched logarithmically averaged Furstenberg limit of Liouville). Let $\lim^*$ be a Banach limit. Then there exists a probability space $(X, \mu)$ with an action $(T^L)_{L \in \mathrm{Aff}_+(\mathbb{Z})}$ of the affine semigroup $\mathrm{Aff}_+(\mathbb{Z})$, as well as measurable functions $g: X \to \{-1,+1\}$ and $M: X \to \hat{\mathbb{Z}}$, with the following properties:

- (i) (Affine Furstenberg limit) For any $L_1, \dots, L_k \in \mathrm{Aff}_+(\mathbb{Z})$, and any congruence class $a\ (q)$, one has

$$\lim{}^*_{x \to \infty} \mathbb{E}^{\log}_{n \leq x}\ \lambda(L_1(n)) \cdots \lambda(L_k(n))\, 1_{n = a\ (q)} = \int_X g(T^{L_1} x) \cdots g(T^{L_k} x)\, 1_{M(x) = a\ (q)}\ d\mu(x).$$

- (ii) (Equivariance of $M$) For any $L \in \mathrm{Aff}_+(\mathbb{Z})$, one has

$$M(T^L x) = L(M(x))$$

for $\mu$-almost every $x$.

- (iii) (Multiplicativity at fixed primes) For any prime $p$, one has

$$g(T^{L_p} x) = -g(x)$$

for $\mu$-almost every $x$, where $L_p \in \mathrm{Aff}_+(\mathbb{Z})$ is the dilation map $n \mapsto pn$.

- (iv) (Measure pushforward) If $L \in \mathrm{Aff}_+(\mathbb{Z})$ is of the form $L(n) = an+b$ and $E_L \subset X$ is the set $E_L := \{ x \in X: M(x) \in L(\hat{\mathbb{Z}}) \}$, then the pushforward of $\mu$ by $T^L$ is equal to $a$ times the restriction of $\mu$ to $E_L$, that is to say one has

$$\mu\big( (T^L)^{-1}(E) \big) = a\, \mu( E \cap E_L )$$

for every measurable $E \subset X$.

Note that $\mathbb{Z}$ can be viewed as the subgroup of $\mathrm{Aff}_+(\mathbb{Z})$ consisting of the translations $n \mapsto n+b$. If one only keeps the $\mathbb{Z}$-portion of the action and forgets the rest (as well as the function $M$) then the action becomes measure-preserving, and we recover an ordinary Furstenberg limit with logarithmic averaging. However, the additional structure here can be quite useful; for instance, one can transfer the proof of (3) to this setting, which we sketch below the fold, after proving the proposition.

The observable $M$, roughly speaking, means that points $x$ in the Furstenberg limit constructed by this proposition are still "virtual integers" in the sense that one can meaningfully compute the residue class of $x$ modulo any natural number modulus $q$, by first applying $M$ and then reducing mod $q$. The action of $\mathrm{Aff}_+(\mathbb{Z})$ means that one can also meaningfully multiply $x$ by any natural number, and translate it by any integer. As with other applications of the correspondence principle, the main advantage of moving to this more "virtual" setting is that one now acquires a probability measure $\mu$, so that the tools of ergodic theory can be readily applied.
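At any finite level the "virtual integer" structure is elementary: reduction mod $q$ intertwines the affine action on the integers with the action on $\mathbb{Z}/q\mathbb{Z}$, which is the finite shadow of the equivariance property $M(T^L x) = L(M(x))$ in Proposition 1. A toy check (the ranges of $a$, $b$, $q$, $n$ below are arbitrary illustrative choices):

```python
# Finite-level check: the affine map L(n) = a*n + b commutes with reduction
# mod q, i.e. L(n) mod q depends only on n mod q.  This is the Z/qZ shadow
# of the equivariance property of the profinite observable M.

def L(a, b, n):
    return a * n + b    # an element of the affine semigroup, with a > 0

for q in (2, 3, 5, 12, 30):
    for a in (1, 2, 7):
        for b in (-3, 0, 4):
            for n in range(-50, 50):
                assert L(a, b, n) % q == L(a, b, n % q) % q
print("affine action commutes with reduction mod q")
```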

I’ve just uploaded to the arXiv my paper "Equivalence of the logarithmically averaged Chowla and Sarnak conjectures", submitted to the Festschrift "Number Theory – Diophantine problems, uniform distribution and applications" in honour of Robert F. Tichy. This paper is a spinoff of my previous paper establishing a logarithmically averaged version of the Chowla (and Elliott) conjectures in the two-point case. In that paper, the estimate

$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n+h)}{n} = o(\log x)$$

as $x \to \infty$ was demonstrated, where $h$ was any positive integer and $\lambda$ denoted the Liouville function. The proof proceeded using a method I call the "entropy decrement argument", which ultimately reduced matters to establishing a bound of the form

$$\frac{1}{X} \int_X^{2X} \Big| \frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) \Big|\ dx = o(1)$$

whenever $H = H(X)$ was a slowly growing function of $X$. This was in turn established in a previous paper of Matomaki, Radziwill, and myself, using the recent breakthrough of Matomaki and Radziwill.

It is natural to see to what extent the arguments can be adapted to attack the higher-point cases of the logarithmically averaged Chowla conjecture (ignoring for this post the more general Elliott conjecture for other bounded multiplicative functions than the Liouville function). That is to say, one would like to prove that

$$\sum_{n \leq x} \frac{\lambda(n+h_1) \cdots \lambda(n+h_k)}{n} = o(\log x)$$

as $x \to \infty$ for any fixed distinct integers $h_1, \dots, h_k$. As it turns out (and as is detailed in the current paper), the entropy decrement argument extends to this setting (after using some known facts about linear equations in primes), and allows one to reduce the above estimate to an estimate of the form

$$\frac{1}{X} \int_X^{2X} \| \lambda \|_{U^d([x, x+H])}\ dx = o(1)$$

for $H$ a slowly growing function of $X$ and some fixed $d$ (in fact we can take $d = k-1$), where $\| \lambda \|_{U^d([x, x+H])}$ is the (normalised) local Gowers uniformity norm. (In the case $k=3$, $d=2$, this becomes the Fourier-uniformity conjecture discussed in this previous post.) If one then applied the (now proven) inverse conjecture for the Gowers norms, this estimate is in turn equivalent to the more complicated looking assertion

$$\frac{1}{X} \int_X^{2X} \sup \Big| \frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n)\, F(g(n) \Gamma) \Big|\ dx = o(1) \ \ \ \ \ (1)$$

where the supremum is over all possible choices of *nilsequences* $n \mapsto F(g(n)\Gamma)$ of controlled step and complexity (see the paper for definitions of these terms).

The main novelty in the paper (elaborating upon a previous comment I had made on this blog) is to observe that this latter estimate in turn follows from the logarithmically averaged form of Sarnak’s conjecture (discussed in this previous post), namely that

$$\sum_{n \leq x} \frac{\lambda(n)\, a(n)}{n} = o(\log x)$$

whenever $a(n)$ is a bounded zero entropy (i.e. deterministic) sequence. Morally speaking, this follows from the well-known fact that nilsequences have zero entropy, but the presence of the supremum in (1) means that we need a little bit more; roughly speaking, we need the *class* of nilsequences of a given step and complexity to have "uniformly zero entropy" in some sense.

On the other hand, it was already known (see previous post) that the Chowla conjecture implied the Sarnak conjecture, and similarly for the logarithmically averaged form of the two conjectures. Putting all these implications together, we obtain the pleasant fact that the logarithmically averaged Sarnak and Chowla conjectures are equivalent, which is the main result of the current paper. There have been a large number of special cases of the Sarnak conjecture worked out (when the deterministic sequence involved came from a special dynamical system), so these results can now also be viewed as partial progress towards the Chowla conjecture (at least with logarithmic averaging). However, my feeling is that the full resolution of these conjectures will not come from these sorts of special cases; instead, conjectures like the Fourier-uniformity conjecture in this previous post look more promising to attack.

It would also be nice to get rid of the pesky logarithmic averaging, but this seems to be an inherent requirement of the entropy decrement argument method, so one would probably have to find a way to avoid that argument if one were to remove the log averaging.

Let $\lambda$ denote the Liouville function. The prime number theorem is equivalent to the estimate

$$\sum_{n \leq x} \lambda(n) = o(x)$$

as $x \to \infty$, that is to say that $\lambda$ exhibits cancellation on large intervals such as $[1, x]$. This result can be improved to give cancellation on shorter intervals. For instance, using the known zero density estimates for the Riemann zeta function, one can establish that

$$\frac{1}{X} \int_X^{2X} \Big| \sum_{x \leq n \leq x+H} \lambda(n) \Big|\ dx = o(H) \ \ \ \ \ (1)$$

as $X \to \infty$ if $X^{1/6+\varepsilon} \leq H \leq X$ for some fixed $\varepsilon > 0$; I believe this result is due to Ramachandra (see also Exercise 21 of this previous blog post), and in fact one could obtain a better error term on the right-hand side that for instance gained an arbitrary power of $\log X$. On the Riemann hypothesis (or the weaker density hypothesis), it was known that the exponent $1/6$ could be lowered to $\varepsilon$.

Early this year, there was a major breakthrough by Matomaki and Radziwill, who (among other things) showed that the asymptotic (1) was in fact valid for *any* $H = H(X)$ that went to infinity as $X \to \infty$, thus yielding cancellation on extremely short intervals. This has many further applications; for instance, this estimate, or more precisely its extension to other "non-pretentious" bounded multiplicative functions, was a key ingredient in my recent solution of the Erdös discrepancy problem, as well as in obtaining logarithmically averaged cases of Chowla’s conjecture, such as

$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x).$$
It is of interest to twist the above estimates by phases such as the linear phase $n \mapsto e(\alpha n)$. In 1937, Davenport showed that

$$\sup_\alpha \Big| \sum_{n \leq x} \lambda(n)\, e(\alpha n) \Big| \ll_A x \log^{-A} x \ \ \ \ \ (2)$$

for any $A > 0$, which of course improves the prime number theorem. Recently with Matomaki and Radziwill, we obtained a common generalisation of this estimate with (1), showing that

$$\sup_\alpha \frac{1}{X} \int_X^{2X} \Big| \sum_{x \leq n \leq x+H} \lambda(n)\, e(\alpha n) \Big|\ dx = o(H) \ \ \ \ \ (3)$$

as $X \to \infty$, for any $H = H(X)$ that went to infinity as $X \to \infty$. We were able to use this estimate to obtain an averaged form of Chowla’s conjecture.

In that paper, we asked whether one could improve this estimate further by moving the supremum inside the integral, that is to say to establish the bound

$\displaystyle \int_X^{2X} \sup_\alpha \Big| \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n) \Big|\ dx = o(XH) \ \ \ \ \ (4)$

as $X \rightarrow \infty$, for any $H = H(X) \leq X$ that went to infinity as $X \rightarrow \infty$. This bound is asserting that $\lambda$ is locally Fourier-uniform on most short intervals; it can be written equivalently in terms of the "local Gowers norm" as

$\displaystyle \int_X^{2X} \| \lambda \|_{U^2([x,x+H])}\ dx = o(X) \ \ \ \ \ (5)$
from which one can see that this is another averaged form of Chowla’s conjecture (stronger than the one I was able to prove with Matomaki and Radziwill, but a consequence of the unaveraged Chowla conjecture). If one inserted such a bound into the machinery I used to solve the Erdös discrepancy problem, it should lead to further averaged cases of Chowla’s conjecture, such as

though I have not fully checked the details of this implication. It should also have a number of new implications for sign patterns of the Liouville function, though we have not explored these in detail yet.

One can write (4) equivalently in the form

$\displaystyle \int_X^{2X} \Big| \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha(x) n) \Big|\ dx = o(XH) \ \ \ \ \ (6)$

uniformly for all $x$-dependent phases $\alpha: [X,2X] \rightarrow {\bf R}$. In contrast, (3) is equivalent to the subcase of (6) when the linear phase coefficient $\alpha(x)$ is independent of $x$. This dependency of $\alpha(x)$ on $x$ seems to necessitate some highly nontrivial additive combinatorial analysis of the function $x \mapsto \alpha(x)$ in order to establish (4) when $H$ is small. To date, this analysis has proven to be elusive, but I would like to record what one can do with more classical methods like Vaughan's identity, namely:

Proposition 1. The estimate (4) (or equivalently (6)) holds in the range for any fixed . (In fact one can improve the right-hand side by an arbitrary power of in this case.)

The values of in this range are far too large to yield implications such as new cases of the Chowla conjecture, but it appears that the exponent is the limit of “classical” methods (at least as far as I was able to apply them), in the sense that one does not do any combinatorial analysis on the function , nor does one use modern equidistribution results on “Type III sums” that require deep estimates on Kloosterman-type sums. The latter may shave a little bit off of the exponent, but I don’t see how one would ever hope to go below without doing some non-trivial combinatorics on the function . UPDATE: I have come across this paper of Zhan which uses mean-value theorems for L-functions to lower the exponent to .

Let me now sketch the proof of the proposition, omitting many of the technical details. We first remark that known estimates on sums of the Liouville function (or similar functions such as the von Mangoldt function) in short arithmetic progressions, based on zero-density estimates for Dirichlet -functions, can handle the “major arc” case of (4) (or (6)) where is restricted to be of the form for (the exponent here being of the same numerology as the exponent in the classical result of Ramachandra, tied to the best zero density estimates currently available); for instance a modification of the arguments in this recent paper of Koukoulopoulos would suffice. Thus we can restrict attention to “minor arc” values of (or , using the interpretation of (6)).

Next, one breaks up (or the closely related Möbius function) into Dirichlet convolutions using one of the standard identities (e.g. Vaughan’s identity or Heath-Brown’s identity), as discussed for instance in this previous post (which is focused more on the von Mangoldt function, but analogous identities exist for the Liouville and Möbius functions). The exact choice of identity is not terribly important, but the upshot is that can be decomposed into terms, each of which is either of the “Type I” form

for some coefficients that are roughly of logarithmic size on the average, and scales with and , or else of the “Type II” form

for some coefficients that are roughly of logarithmic size on the average, and scales with and . As discussed in the previous post, the exponent is a natural barrier in these identities if one is unwilling to also consider “Type III” type terms which are roughly of the shape of the third divisor function .

A Type I sum makes a contribution to that can be bounded (via Cauchy-Schwarz) in terms of an expression such as

The inner sum exhibits a lot of cancellation unless is within of an integer. (Here, “a lot” should be loosely interpreted as “gaining many powers of over the trivial bound”.) Since is significantly larger than , standard Vinogradov-type manipulations (see e.g. Lemma 13 of these previous notes) show that this bad case occurs for many only when is “major arc”, which is the case we have specifically excluded. This lets us dispose of the Type I contributions.
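The cancellation in the inner sum can be made concrete with the classical geometric series bound $|\sum_{m=1}^{M} e(\theta m)| \leq \min(M, \frac{1}{2\|\theta\|})$, where $\|\theta\|$ denotes the distance from $\theta$ to the nearest integer. The snippet below (my own illustration; the helper names are mine) verifies this bound numerically for a few phases.

```python
import cmath

def e(theta):
    """e(theta) = exp(2 pi i theta), the standard additive character."""
    return cmath.exp(2j * cmath.pi * theta)

def dist_to_int(theta):
    """Distance from theta to the nearest integer."""
    return abs(theta - round(theta))

M = 1000
checked = []
for theta in [0.5, 0.1, 0.01, 2 ** 0.5 - 1]:
    S = abs(sum(e(theta * m) for m in range(1, M + 1)))
    bound = min(M, 1 / (2 * dist_to_int(theta)))
    checked.append(S <= bound + 1e-9)
print(checked)
```

The bound gains many powers of $\log$ over the trivial bound $M$ precisely when $\theta$ stays away from integers, which is the "minor arc" situation exploited above.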

A Type II sum makes a contribution to roughly of the form

We can break this up into a number of sums roughly of the form

for ; note that the range is non-trivial because is much larger than . Applying the usual bilinear sum Cauchy-Schwarz methods (e.g. Theorem 14 of these notes) we conclude that there is a lot of cancellation unless one has for some . But with , is well below the threshold for the definition of major arc, so we can exclude this case and obtain the required cancellation.

The Chowla conjecture asserts, among other things, that one has the asymptotic

$\displaystyle \frac{1}{x} \sum_{n \leq x} \lambda(n+h_1) \cdots \lambda(n+h_k) = o(1) \ \ \ \ \ (1)$

as $x \rightarrow \infty$ for any distinct integers $h_1,\dots,h_k$, where $\lambda$ is the Liouville function. (The usual formulation of the conjecture also allows one to consider more general linear forms than the shifts $n+h_1,\dots,n+h_k$, but for sake of discussion let us focus on the shift case.) This conjecture remains open for $k \geq 2$, though there are now some partial results when one averages either in $x$ or in the $h_1,\dots,h_k$, as discussed in this recent post.
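As a quick sanity check (my own numerics, not a result from any of the cited papers), the simplest two-point correlation can be computed directly for moderate ranges and is indeed observed to be small:

```python
# Empirical two-point correlation of the Liouville function (illustration only).
def liouville_up_to(N):
    lam = [1] * (N + 1)
    spf = list(range(N + 1))            # smallest prime factor sieve
    for p in range(2, N + 1):
        if spf[p] == p:
            for m in range(2 * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

N = 10**5
lam = liouville_up_to(N + 1)
corr = sum(lam[n] * lam[n + 1] for n in range(1, N + 1)) / N
print(corr)   # small, consistent with the two-point case of the conjecture
```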

A natural generalisation of the Chowla conjecture is the Elliott conjecture. Its original formulation was basically as follows: one had

whenever were bounded completely multiplicative functions and were distinct integers, and one of the was “non-pretentious” in the sense that

for all Dirichlet characters and real numbers . It is easy to see that some condition like (2) is necessary; for instance if and has period then can be verified to be bounded away from zero as .

In a previous paper with Matomaki and Radziwill, we provided a counterexample to the original formulation of the Elliott conjecture, and proposed that (2) be replaced with the stronger condition

as for any Dirichlet character . To support this conjecture, we proved an averaged and non-asymptotic version of this conjecture which roughly speaking showed a bound of the form

whenever was an arbitrarily slowly growing function of , was sufficiently large (depending on and the rate at which grows), and one of the obeyed the condition

for some that was sufficiently large depending on , and all Dirichlet characters of period at most . As further support of this conjecture, I recently established the bound

under the same hypotheses, where is an arbitrarily slowly growing function of .

In view of these results, it is tempting to conjecture that the condition (4) for one of the should be sufficient to obtain the bound

when is large enough depending on . This may well be the case for . However, the purpose of this blog post is to record a simple counterexample for . Let’s take for simplicity. Let be a quantity much larger than but much smaller than (e.g. ), and set

For , Taylor expansion gives

and

and hence

and hence

On the other hand one can easily verify that all of the obey (4) (the restriction there prevents from getting anywhere close to ). So it seems the correct non-asymptotic version of the Elliott conjecture is the following:

Conjecture 1 (Non-asymptotic Elliott conjecture). Let be a natural number, and let be integers. Let , let be sufficiently large depending on , and let be sufficiently large depending on . Let be bounded multiplicative functions such that for some , one has

for all Dirichlet characters of conductor at most . Then

The case of this conjecture follows from the work of Halasz; in my recent paper a logarithmically averaged version of the case of this conjecture is established. The requirement to take to be as large as does not emerge in the averaged Elliott conjecture in my previous paper with Matomaki and Radziwill; it thus seems that this averaging has concealed some of the subtler features of the Elliott conjecture. (However, this subtlety does not seem to affect the asymptotic version of the conjecture formulated in that paper, in which the hypothesis is of the form (3), and the conclusion is of the form (1).)

A similar subtlety arises when trying to control the maximal integral

In my previous paper with Matomaki and Radziwill, we could show that the easier expression

was small (for a slowly growing function of ) if was bounded and completely multiplicative, and one had a condition of the form

for some large . However, to obtain an analogous bound for (5) it now appears that one needs to strengthen the above condition to

in order to address the counterexample in which for some between and . This seems to suggest that proving (5) (which is closely related to the case of the Chowla conjecture) could in fact be rather difficult; the estimation of (6) relied primarily on prior work of Matomaki and Radziwill which used the hypothesis (7), but as this hypothesis is not sufficient to conclude (5), some additional input must also be used.

I’ve just uploaded two related papers to the arXiv:

- The logarithmically averaged Chowla and Elliott conjectures for two-point correlations, submitted to Forum of Mathematics, Pi; and
- The Erdos discrepancy problem, submitted to the new arXiv overlay journal, Discrete Analysis (see this recent announcement on Tim Gowers’ blog).

This pair of papers is an outgrowth of these two recent blog posts and the ensuing discussion. In the first paper, we establish the following logarithmically averaged version of the Chowla conjecture (in the case of two-point correlations (or “pair correlations”)):

Theorem 1 (Logarithmically averaged Chowla conjecture). Let $a_1, a_2$ be natural numbers, and let $b_1, b_2$ be integers such that $a_1 b_2 - a_2 b_1 \neq 0$. Let $1 \leq \omega(x) \leq x$ be a quantity depending on $x$ that goes to infinity as $x \rightarrow \infty$. Let $\lambda$ denote the Liouville function. Then one has

$\displaystyle \sum_{x/\omega(x) < n \leq x} \frac{\lambda(a_1 n + b_1)\, \lambda(a_2 n + b_2)}{n} = o(\log \omega(x))$

as $x \rightarrow \infty$.
For comparison, the non-averaged Chowla conjecture would imply that

which is a strictly stronger estimate than (2), and remains open.

The arguments also extend to other completely multiplicative functions than the Liouville function. In particular, one obtains a slightly averaged version of the non-asymptotic Elliott conjecture that was shown in the previous blog post to imply a positive solution to the Erdos discrepancy problem. The averaged version of the conjecture established in this paper is slightly weaker than the one assumed in the previous blog post, but it turns out that the arguments there can be modified without much difficulty to accept this averaged Elliott conjecture as input. In particular, we obtain an unconditional solution to the Erdos discrepancy problem as a consequence; this is detailed in the second paper listed above. In fact we can also handle the vector-valued version of the Erdos discrepancy problem, in which the sequence takes values in the unit sphere of an arbitrary Hilbert space, rather than in .

Estimates such as (2) or (3) are known to be subject to the “parity problem” (discussed numerous times previously on this blog), which roughly speaking means that they cannot be proven solely using “linear” estimates on functions such as the von Mangoldt function. However, it is known that the parity problem can be circumvented using “bilinear” estimates, and this is basically what is done here.

We now describe in informal terms the proof of Theorem 1, focusing on the model case (2) for simplicity. Suppose for contradiction that the left-hand side of (2) was large and (say) positive. Using the multiplicativity $\lambda(pn) = -\lambda(n)$, we conclude that

is also large and positive for all primes $p$ that are not too large; note here how the logarithmic averaging allows us to leave the constraint $n \leq x$ unchanged. Summing in $p$, we conclude that

is large and positive for any given set of medium-sized primes. By a standard averaging argument, this implies that

is large for many choices of , where is a medium-sized parameter at our disposal to choose, and we take to be some set of primes that are somewhat smaller than . (A similar approach was taken in this recent paper of Matomaki, Radziwill, and myself to study sign patterns of the Möbius function.) To obtain the required contradiction, one thus wants to demonstrate significant cancellation in the expression (4). As in that paper, we view as a random variable, in which case (4) is essentially a bilinear sum of the random sequence along a random graph on , in which two vertices are connected if they differ by a prime in that divides . A key difficulty in controlling this sum is that for randomly chosen , the sequence and the graph need not be independent. To get around this obstacle we introduce a new argument which we call the “entropy decrement argument” (in analogy with the “density increment argument” and “energy increment argument” that appear in the literature surrounding Szemerédi’s theorem on arithmetic progressions, and also reminiscent of the “entropy compression argument” of Moser and Tardos, discussed in this previous post). This argument, which is a simple consequence of the Shannon entropy inequalities, can be viewed as a quantitative version of the standard subadditivity argument that establishes the existence of Kolmogorov-Sinai entropy in topological dynamical systems; it allows one to select a scale parameter (in some suitable range ) for which the sequence and the graph exhibit some weak independence properties (or more precisely, the mutual information between the two random variables is small).
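The multiplicativity step above can be checked concretely: since $\lambda$ is completely multiplicative with $\lambda(p) = -1$ at every prime, one has $\lambda(pn)\lambda(pn+p) = \lambda(n)\lambda(n+1)$, so the two-point correlation is unchanged when the pair $(n, n+1)$ is dilated by $p$. A small numerical confirmation (my own illustration, with $p = 7$):

```python
# Check the identity lambda(p n) * lambda(p n + p) = lambda(n) * lambda(n + 1),
# which drives the dilation step in the argument sketched above.
def liouville_up_to(N):
    lam = [1] * (N + 1)
    spf = list(range(N + 1))            # smallest prime factor sieve
    for p in range(2, N + 1):
        if spf[p] == p:
            for m in range(2 * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

X = 2000
p = 7
lam = liouville_up_to(p * (X + 1))
ok = all(lam[p * n] * lam[p * n + p] == lam[n] * lam[n + 1]
         for n in range(1, X + 1))
print(ok)
```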

Informally, the entropy decrement argument goes like this: if the sequence has significant mutual information with , then the entropy of the sequence for will grow a little slower than linearly, due to the fact that the graph has zero entropy (knowledge of more or less completely determines the shifts of the graph); this can be formalised using the classical Shannon inequalities for entropy (and specifically, the non-negativity of conditional mutual information). But the entropy cannot drop below zero, so by increasing as necessary, at some point one must reach a metastable region (cf. the finite convergence principle discussed in this previous blog post), within which very little mutual information can be shared between the sequence and the graph . Curiously, for the application it is not enough to have a purely qualitative version of this argument; one needs a quantitative bound (which gains a factor of a bit more than on the trivial bound for mutual information), and this is surprisingly delicate (it ultimately comes down to the fact that the series diverges, which is only barely true).
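For readers unfamiliar with the information-theoretic quantities involved: the mutual information $I(X;Y) = H(X) + H(Y) - H(X,Y)$ is nonnegative, and vanishes exactly when $X$ and $Y$ are independent, which is the sense in which "weak independence" is quantified above. A toy computation (illustrative only, and unrelated to the actual random variables in the paper):

```python
from math import log2
from collections import Counter

def mutual_information(pairs):
    """Empirical I(X;Y) in bits, from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Independent uniform coordinates: I(X;Y) = 0.
indep = [(a, b) for a in range(4) for b in range(4)]
# Fully dependent: I(X;X) = H(X) = 2 bits for a uniform 4-valued variable.
dep = [(a, a) for a in range(4)]
print(mutual_information(indep), mutual_information(dep))
```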

Once one locates a scale with the low mutual information property, one can use standard concentration of measure results such as the Hoeffding inequality to approximate (4) by the significantly simpler expression

The important thing here is that Hoeffding’s inequality gives exponentially strong bounds on the failure probability, which is needed to counteract the logarithms that are inevitably present whenever trying to use entropy inequalities. The expression (5) can then be controlled in turn by an application of the Hardy-Littlewood circle method and a non-trivial estimate

for averaged short sums of a modulated Liouville function established in another recent paper by Matomäki, Radziwill and myself.

When one uses this method to study more general sums such as

one ends up having to consider expressions such as

where is the coefficient . When attacking this sum with the circle method, one soon finds oneself in the situation of wanting to locate the large Fourier coefficients of the exponential sum

In many cases (such as in the application to the Erdös discrepancy problem), the coefficient is identically , and one can understand this sum satisfactorily using the classical results of Vinogradov: basically, is large when lies in a “major arc” and is small when it lies in a “minor arc”. For more general functions , the coefficients are more or less arbitrary; the large values of are no longer confined to the major arc case. Fortunately, even in this general situation one can use a restriction theorem for the primes established some time ago by Ben Green and myself to show that there are still only a bounded number of possible locations (up to the uncertainty mandated by the Heisenberg uncertainty principle) where is large, and we can still conclude by using (6). (Actually, as recently pointed out to me by Ben, one does not need the full strength of our result; one only needs the restriction theorem for the primes, which can be proven fairly directly using Plancherel’s theorem and some sieve theory.)
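One way to visualise "locating the large Fourier coefficients" is simply to compute the discrete Fourier transform of the sign sequence; for $\lambda$ itself, no single frequency carries a large share of the mass, in line with the Fourier-uniformity heuristics. The naive $O(N^2)$ computation below is my own illustration (standard library only, small $N$):

```python
import cmath

# Discrete Fourier transform of the Liouville sign sequence (illustration).
def liouville_up_to(N):
    lam = [1] * (N + 1)
    spf = list(range(N + 1))            # smallest prime factor sieve
    for p in range(2, N + 1):
        if spf[p] == p:
            for m in range(2 * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

N = 512
seq = liouville_up_to(N)[1:N + 1]
coeffs = [abs(sum(seq[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                  for n in range(N))) for k in range(N)]
peak = max(coeffs) / N
print(peak)   # no frequency captures a large fraction of the mass
```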

It is tempting to also use the method to attack higher order cases of the (logarithmically) averaged Chowla conjecture, for instance one could try to prove the estimate

The above arguments reduce matters to obtaining some non-trivial cancellation for sums of the form

A little bit of “higher order Fourier analysis” (as was done for very similar sums in the ergodic theory context by Frantzikinakis-Host-Kra and Wooley-Ziegler) lets one control this sort of sum if one can establish a bound of the form

where goes to infinity and is a very slowly growing function of . This looks very similar to (6), but the fact that the supremum is now inside the integral makes the problem much more difficult. However it looks worth attacking (7) further, as this estimate looks like it should have many nice applications (beyond just the case of the logarithmically averaged Chowla or Elliott conjectures, which is already interesting).

For higher than , the same line of analysis requires one to replace the linear phase by more complicated phases, such as quadratic phases or even -step nilsequences. Given that (7) is already beyond the reach of current literature, these even more complicated expressions are also unavailable at present, but one can imagine that they will eventually become tractable, in which case we would obtain an averaged form of the Chowla conjecture for all , which would have a number of consequences (such as a logarithmically averaged version of Sarnak’s conjecture, as per this blog post).

It would of course be very nice to remove the logarithmic averaging, and be able to establish bounds such as (3). I did attempt to do so, but I do not see a way to use the entropy decrement argument in a manner that does not require some sort of averaging of logarithmic type, as it requires one to pick a scale that one cannot specify in advance, which is not a problem for logarithmic averages (which are quite stable with respect to dilations) but is problematic for ordinary averages. But perhaps the problem can be circumvented by some clever modification of the argument. One possible approach would be to start exploiting multiplicativity at products of primes, and not just individual primes, to try to keep the scale fixed, but this makes the concentration of measure part of the argument much more complicated as one loses some independence properties (coming from the Chinese remainder theorem) which allowed one to conclude just from the Hoeffding inequality.

Kaisa Matomäki, Maksym Radziwiłł, and I have just uploaded to the arXiv our paper "Sign patterns of the Liouville and Möbius functions". This paper is somewhat similar to our previous paper in that it uses the recent breakthrough of Matomäki and Radziwiłł on mean values of multiplicative functions to obtain partial results towards the Chowla conjecture. This conjecture can be phrased, roughly speaking, as follows: if $k$ is a fixed natural number and $n$ is selected at random from a large interval $[1,x]$, then the sign pattern $(\lambda(n+1),\dots,\lambda(n+k)) \in \{-1,+1\}^k$ becomes asymptotically equidistributed in $\{-1,+1\}^k$ in the limit $x \rightarrow \infty$. This remains open for $k \geq 2$. In fact even the significantly weaker statement that each of the $2^k$ sign patterns in $\{-1,+1\}^k$ is attained infinitely often is open for $k \geq 4$. However, in 1986, Hildebrand showed that for $k \leq 3$ all sign patterns are indeed attained infinitely often. Our first result is a strengthening of Hildebrand's, moving a little bit closer to Chowla's conjecture:

Theorem 1. Let $k \leq 3$. Then each of the $2^k$ sign patterns in $\{-1,+1\}^k$ is attained by the Liouville function for a set of natural numbers of positive lower density.

Thus for instance one has $\lambda(n) = \lambda(n+1) = \lambda(n+2) = +1$ for a set of $n$ of positive lower density. The case $k \leq 2$ of this theorem already appears in the original paper of Matomäki and Radziwiłł (and the significantly simpler case of the sign patterns $++$ and $--$ was treated previously by Harman, Pintz, and Wolke).

The basic strategy in all of these arguments is to assume for sake of contradiction that a certain sign pattern occurs extremely rarely, and then exploit the complete multiplicativity of (which implies in particular that , , and for all ) together with some combinatorial arguments (vaguely analogous to solving a Sudoku puzzle!) to establish more complex sign patterns for the Liouville function, that are either inconsistent with each other, or with results such as the Matomäki-Radziwiłł result. To illustrate this, let us give some examples, arguing a little informally to emphasise the combinatorial aspects of the argument. First suppose that the sign pattern almost never occurs. The prime number theorem tells us that and are each equal to about half of the time, which by inclusion-exclusion implies that the sign pattern almost never occurs. In other words, we have for almost all . But from the multiplicativity property this implies that one should have

and

for almost all . But the above three statements are contradictory, and the claim follows.
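Chowla's conjecture for $k = 2$ predicts that each of the four length-two sign patterns occurs with asymptotic density $1/4$, and even at modest heights the empirical frequencies are already close to this (a numerical aside of my own, not a result from the paper):

```python
from collections import Counter

# Empirical frequencies of the length-two sign patterns of lambda.
def liouville_up_to(N):
    lam = [1] * (N + 1)
    spf = list(range(N + 1))            # smallest prime factor sieve
    for p in range(2, N + 1):
        if spf[p] == p:
            for m in range(2 * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

N = 10**5
lam = liouville_up_to(N + 1)
freq = Counter((lam[n], lam[n + 1]) for n in range(1, N + 1))
for pattern in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(pattern, freq[pattern] / N)   # each close to 1/4
```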

Similarly, if we assume that the sign pattern almost never occurs, then a similar argument to the above shows that for any fixed , one has for almost all . But this means that the mean is abnormally large for most , which (for large enough) contradicts the results of Matomäki and Radziwiłł. Here we see that the “enemy” to defeat is the scenario in which only changes sign very rarely, in which case one rarely sees the pattern .

It turns out that similar (but more combinatorially intricate) arguments work for sign patterns of length three (but are unlikely to work for most sign patterns of length four or greater). We give here one fragment of such an argument (due to Hildebrand) which hopefully conveys the Sudoku-type flavour of the combinatorics. Suppose for instance that the sign pattern almost never occurs. Now suppose is a typical number with . Since we almost never have the sign pattern , we must (almost always) then have . By multiplicativity this implies that

We claim that this (almost always) forces . For if , then by the lack of the sign pattern , this (almost always) forces , which by multiplicativity forces , which by lack of (almost always) forces , which by multiplicativity contradicts . Thus we have ; a similar argument gives almost always, which by multiplicativity gives , a contradiction. Thus we almost never have , which by the inclusion-exclusion argument mentioned previously shows that for almost all .

One can continue these Sudoku-type arguments and conclude eventually that for almost all . To put it another way, if $\chi$ denotes the non-principal Dirichlet character of modulus $3$, then $\lambda \chi$ is almost always constant away from the multiples of $3$. (Conversely, if $\lambda$ changed sign very rarely outside of the multiples of three, then the sign pattern would never occur.) Fortunately, the main result of Matomäki and Radziwiłł shows that this scenario cannot occur, which establishes that the sign pattern must occur rather frequently. The other sign patterns are handled by variants of these arguments.

Excluding a sign pattern of length three leads to useful implications like "if $\lambda(n) = \lambda(n+1) = +1$, then $\lambda(n+2) = -1$", which turn out to be just barely strong enough to quite rigidly constrain the Liouville function using Sudoku-like arguments. In contrast, excluding a sign pattern of length four only gives rise to implications like "if $\lambda(n) = \lambda(n+1) = \lambda(n+2) = +1$, then $\lambda(n+3) = -1$", and these seem to be much weaker for this purpose (the hypothesis in these implications just isn't satisfied nearly often enough). So a different idea seems to be needed if one wishes to extend the above theorem to larger values of $k$.

Our second theorem gives an analogous result for the Möbius function (which takes values in $\{-1,0,+1\}$ rather than $\{-1,+1\}$), but the analysis turns out to be remarkably difficult and we are only able to get up to $k = 2$:

Theorem 2. Let $k \leq 2$. Then each of the $3^k$ sign patterns in $\{-1,0,+1\}^k$ is attained by the Möbius function for a set of $n$ of positive lower density.

It turns out that the prime number theorem and elementary sieve theory can be used to handle the case $k = 1$ and all the cases that involve at least one $0$, leaving only the four sign patterns $(\pm 1, \pm 1)$ to handle. It is here that the zeroes of the Möbius function cause a significant new obstacle. Suppose for instance that the sign pattern $(+1,+1)$ almost never occurs for the Möbius function. The same arguments that were used in the Liouville case then show that will be almost always equal to , *provided that are both square-free*. One can try to chain this together as before to create a long string where the Möbius function is constant, but this cannot work for any string longer than three, because the Möbius function vanishes at every multiple of four.

The constraints we assume on the Möbius function can be depicted using a graph on the squarefree natural numbers, in which any two adjacent squarefree natural numbers are connected by an edge. The main difficulty is then that this graph is highly disconnected due to the multiples of four not being squarefree.

To get around this, we need to enlarge the graph. Note from multiplicativity that if is almost always equal to when are squarefree, then is almost always equal to when are squarefree and is divisible by . We can then form a graph on the squarefree natural numbers by connecting to whenever are squarefree and is divisible by . If this graph is “locally connected” in some sense, then will be constant on almost all of the squarefree numbers in a large interval, which turns out to be incompatible with the results of Matomäki and Radziwiłł. Because of this, matters are reduced to establishing the connectedness of a certain graph. More precisely, it turns out to be sufficient to establish the following claim:

Theorem 3. For each prime , let be a residue class chosen uniformly at random. Let be the random graph whose vertices consist of those integers not equal to for any , and whose edges consist of pairs in with . Then with probability , the graph is connected.

We were able to show the connectedness of this graph, though it turned out to be remarkably tricky to do so. Roughly speaking (and suppressing a number of technicalities), the main steps in the argument were as follows.

- (Early stage) Pick a large number (in our paper we take to be odd, but I’ll ignore this technicality here). Using a moment method to explore neighbourhoods of a single point in , one can show that a vertex in is almost always connected to at least numbers in , using relatively short paths of short diameter. (This is the most computationally intensive portion of the argument.)
- (Middle stage) Let be a typical number in , and let be a scale somewhere between and . By using paths involving three primes, and using a variant of Vinogradov’s theorem and some routine second moment computations, one can show that with quite high probability, any “good” vertex in is connected to a “good” vertex in by paths of length three, where the definition of “good” is somewhat technical but encompasses almost all of the vertices in .
- (Late stage) Combining the two previous results together, we can show that most vertices will be connected to a vertex in for any in . In particular, will be connected to a set of vertices in . By tracking everything carefully, one can control the length and diameter of the paths used to connect to this set, and one can also control the parity of the elements in this set.
- (Final stage) Now suppose we have two vertices at a distance apart. By the previous item, one can connect to a large set of vertices in , and one can similarly connect to a large set of vertices in . Now, by using a Vinogradov-type theorem and second moment calculations again (and ensuring that the elements of and have opposite parity), one can connect many of the vertices in to many of the vertices by paths of length three, which then connects to , and gives the claim.
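The precise graph in the theorem is technical, but its flavour can be conveyed by a toy model (entirely my own simplification, not the graph from the paper): delete one residue class for each small prime $p$, join surviving integers at distance $p$, and test connectivity with a union-find structure.

```python
# Toy connectivity experiment, a simplified stand-in for the random graphs
# discussed above: vertices are n in [1, X] avoiding one forbidden residue
# class mod p for each prime p in the list; edges join n to n + p.
def num_components(X, primes, forbidden):
    verts = [n for n in range(1, X + 1)
             if all(n % p != forbidden[p] for p in primes)]
    index = {n: i for i, n in enumerate(verts)}
    parent = list(range(len(verts)))

    def find(i):                        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for n in verts:
        for p in primes:
            if n + p in index:
                parent[find(index[n])] = find(index[n + p])
    return len({find(i) for i in range(len(verts))})

# Deleting the odd numbers leaves the even chain n -> n + 2: one component.
print(num_components(100, [2], {2: 1}))
# Deleting 0 mod 3 leaves the classes 1 and 2 mod 3: two separate chains.
print(num_components(100, [3], {3: 0}))
```

In the theorem itself, connectivity has to be produced by paths built from many different primes at once, which is what the multi-stage argument above supplies.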

It seems of interest to understand random graphs like further. In particular, the graph on the integers formed by connecting to for all in a randomly selected residue class mod for each prime is particularly interesting (it is to the Liouville function as is to the Möbius function); if one could show some "local expander" properties of this graph , then one would have a chance of modifying the above methods to attack the first unsolved case of the Chowla conjecture, namely that has asymptotic density zero (perhaps working with logarithmic density instead of natural density to avoid some technicalities).
