Joni Teräväinen and I have just uploaded to the arXiv our preprint “The Hardy–Littlewood–Chowla conjecture in the presence of a Siegel zero”. This paper is a development of the theme that certain conjectures in analytic number theory become easier if one makes the hypothesis that Siegel zeroes exist; this places one in a presumably “illusory” universe, since the widely believed Generalised Riemann Hypothesis (GRH) precludes the existence of such zeroes, yet this illusory universe seems remarkably self-consistent and notoriously impossible to eliminate from one’s analysis.
For the purposes of this paper, a Siegel zero is a zero of a Dirichlet L-function corresponding to a primitive quadratic character of some conductor, which is close to 1 in the sense that
for some large quantity (which we will call the quality of the Siegel zero). The significance of these zeroes is that they force the Möbius function and the Liouville function to “pretend” to be like the exceptional character at primes of magnitude comparable to the conductor. Indeed, if we define an exceptional prime to be a prime at which this pretension fails, then very few primes near this scale will be exceptional; in our paper we use some elementary arguments to establish bounds on the number of exceptional primes in the indicated range, and this bound is non-trivial over a wide range of scales. (See Section 1 of this blog post for some variants of this argument, which were inspired by work of Heath-Brown.) There is also a companion bound (somewhat weaker) that covers a slightly lower range of scales.

One of the early influential results in this area was the following result of Heath-Brown, which I previously blogged about here:
Theorem 1 (Hardy-Littlewood assuming Siegel zero) Let be a fixed natural number. Suppose one has a Siegel zero associated to some conductor . Then we have for all , where is the von Mangoldt function and is the singular series
In particular, Heath-Brown showed that if there are infinitely many Siegel zeroes, then there are also infinitely many twin primes, with the correct asymptotic predicted by the Hardy-Littlewood prime tuple conjecture at infinitely many scales.
Very recently, Chinis established an analogous result for the Chowla conjecture (building upon earlier work of Germán and Kátai):
Theorem 2 (Chowla assuming Siegel zero) Let be distinct fixed natural numbers. Suppose one has a Siegel zero associated to some conductor . Then one has in the range , where is the Liouville function.
In our paper we unify these results and also improve the quantitative estimates and range of :
Theorem 3 (Hardy–Littlewood–Chowla assuming Siegel zero) Let be distinct fixed natural numbers with . Suppose one has a Siegel zero associated to some conductor . Then one has for any fixed .
Our argument proceeds by a series of steps in which we replace and by more complicated looking, but also more tractable, approximations, until the correlation is one that can be computed in a tedious but straightforward fashion by known techniques. More precisely, the steps are as follows:
- (i) Replace the Liouville function with an approximant , which is a completely multiplicative function that agrees with at small primes and agrees with at large primes.
- (ii) Replace the von Mangoldt function with an approximant , which is the Dirichlet convolution multiplied by a Selberg sieve weight to essentially restrict that convolution to almost primes.
- (iii) Replace with a more complicated truncation which has the structure of a “Type I sum”, and which agrees with on numbers that have a “typical” factorization.
- (iv) Replace the approximant with a more complicated approximant which has the structure of a “Type I sum”.
- (v) Now that all terms in the correlation have been replaced with tractable Type I sums, use standard Euler product calculations and Fourier analysis, similar in spirit to the proof of the pseudorandomness of the Selberg sieve majorant for the primes in this paper of Ben Green and myself, to evaluate the correlation to high accuracy.
Steps (i) and (ii) proceed mainly through estimates such as (1) and standard sieve-theoretic bounds. Step (iii) is based primarily on estimates on the number of smooth numbers of a certain size.
The restriction in our main theorem is needed only to execute step (iv) of the above strategy. Roughly speaking, the Siegel approximant to is a twisted, sieved version of the divisor function , and the types of correlation one is faced with at the start of step (iv) are a more complicated version of the divisor correlation sum
For this sum can be easily controlled by the Dirichlet hyperbola method. For one needs the fact that has a level of distribution greater than ; in fact Kloosterman sum bounds give a level of distribution of , a folklore fact that seems to have first been observed by Linnik and Selberg. We use a (notationally more complicated) version of this argument to treat the sums arising in (iv) for . Unfortunately for there are no known techniques to unconditionally obtain asymptotics, even for the model sum, although we do at least have fairly convincing conjectures as to what the asymptotics should be. Because of this, it seems unlikely that one will be able to relax the hypothesis in our main theorem at our current level of understanding of analytic number theory.

Step (v) is a tedious but straightforward sieve-theoretic computation, similar in many ways to the correlation estimates of Goldston and Yıldırım used in their work on small gaps between primes (as discussed for instance here), and then also used by Ben Green and myself to locate arithmetic progressions in primes.
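As a toy numerical illustration, the model divisor correlation sum mentioned above (the sum of d(n) d(n+h) over n up to x) can be evaluated by sieving the divisor function; the following Python sketch (the function names are ours, purely illustrative) does this by brute force.

```python
def divisor_sieve(limit):
    """Compute the divisor function d(n) for 1 <= n <= limit
    by a standard O(limit log limit) sieve."""
    d = [0] * (limit + 1)
    for i in range(1, limit + 1):
        for m in range(i, limit + 1, i):
            d[m] += 1
    return d

def divisor_correlation(x, h):
    """Evaluate the model sum of d(n) * d(n+h) over 1 <= n <= x."""
    d = divisor_sieve(x + h)
    return sum(d[n] * d[n + h] for n in range(1, x + 1))

print(divisor_correlation(10, 1))  # 74
```

Of course, the analytic difficulty discussed above concerns proving asymptotics for such sums, not computing individual values, but even naive experiments of this kind make the shift correlations tangible.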
A few months ago I posted a question about analytic functions that I received from a bright high school student, which turned out to be studied and resolved by de Bruijn. Based on this positive resolution, I thought I might try my luck again and list three further questions that this student asked which do not seem to be trivially resolvable.
- Does there exist a smooth function which is nowhere analytic, but is such that the Taylor series converges for every ? (Of course, this series would not then converge to , but instead to some analytic function for each .) I have a vague feeling that perhaps the Baire category theorem should be able to resolve this question, but it seems to require a bit of effort. (Update: answered by Alexander Shaposhnikov in comments.)
- Is there a function which meets every polynomial to infinite order in the following sense: for every polynomial , there exists such that for all ? Such a function would be rather pathological, perhaps resembling a space-filling curve. (Update: solved for smooth by Aleksei Kulikov in comments. The situation currently remains unclear in the general case.)
- Is there a power series that diverges everywhere (except at ), but which becomes pointwise convergent after dividing each of the monomials into pieces for some summing absolutely to , and then rearranging, i.e., there is some rearrangement of that is pointwise convergent for every ?
Feel free to post answers or other thoughts on these questions in the comments.
Rachel Greenfeld and I have just uploaded to the arXiv our preprint “Undecidable translational tilings with only two tiles, or one nonabelian tile”. This paper studies the following question: given a finitely generated group , a (periodic) subset of , and finite sets in , is it possible to tile by translations of the tiles ? That is to say, is there a solution to the (translational) tiling equation
for some subsets of , where denotes the set of sums if the sums are all disjoint (and is undefined otherwise), and denotes disjoint union. (One can also write the tiling equation in the language of convolutions as .)

A bit more specifically, the paper studies the decidability of the above question. There are two slightly different types of decidability one could consider here:
- Logical decidability. For a given , one can ask whether the solvability of the tiling equation (1) is provable or disprovable in ZFC (where we encode all the data by appropriate constructions in ZFC). If this is the case we say that the tiling equation (1) (or more precisely, the solvability of this equation) is logically decidable, otherwise it is logically undecidable.
- Algorithmic decidability. For data in some specified class (and encoded somehow as binary strings), one can ask whether the solvability of the tiling equation (1) can be correctly determined for all choices of data in this class by the output of some Turing machine that takes the data as input (encoded as a binary string) and halts in finite time, returning either YES if the equation can be solved or NO otherwise. If this is the case, we say the tiling problem of solving (1) for data in the given class is algorithmically decidable, otherwise it is algorithmically undecidable.
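For a finite cyclic group, the tiling equation above can be checked mechanically: the equation A ⊕ F = Z/nZ just says that every residue is represented exactly once. Here is a minimal Python sketch (our own notation, not from the paper):

```python
def tiles(A, F, n):
    """Check the tiling equation A ⊕ F = Z/nZ: every residue mod n
    must arise exactly once as a + f with a in A and f in F."""
    counts = [0] * n
    for a in A:
        for f in F:
            counts[(a + f) % n] += 1
    return all(c == 1 for c in counts)

print(tiles({0, 2}, {0, 1}, 4))  # True: 0,1,2,3 each covered once
print(tiles({0, 1}, {0, 1}, 4))  # False: 1 covered twice, 3 never
```

For finite groups this check is a complete decision procedure; the subtleties below only arise for infinite groups, where no such exhaustive verification is possible.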
Note that the notion of logical decidability is “pointwise” in the sense that it pertains to a single choice of data , whereas the notion of algorithmic decidability pertains instead to classes of data, and is only interesting when this class is infinite. Indeed, any tiling problem with a finite class of data is trivially decidable because one could simply code a Turing machine that is basically a lookup table that returns the correct answer for each choice of data in the class. (This is akin to how a student with a good memory could pass any exam if the questions are drawn from a finite list, merely by memorising an answer key for that list of questions.)
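The lookup-table observation can be made concrete. In the sketch below the instance names and answers are entirely hypothetical; the point is only that a finite answer key yields a terminating decision procedure.

```python
def make_lookup_decider(answer_key):
    """Return a decision procedure for a *finite* class of instances:
    a table of precomputed YES/NO answers, consulted in O(1) time."""
    table = dict(answer_key)
    def decide(instance):
        return table[instance]  # always halts, trivially correct
    return decide

# Hypothetical finite class of three encoded tiling instances.
decide = make_lookup_decider({"E1": True, "E2": False, "E3": True})
print(decide("E2"))  # False
```

Note that such a decider need not "know" why the answers are correct, which is exactly why algorithmic decidability is only interesting for infinite classes of data.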
The two notions are related as follows: if a tiling problem (1) is algorithmically undecidable for some class of data, then the tiling equation must be logically undecidable for at least one choice of data for this class. For if this is not the case, one could algorithmically decide the tiling problem by searching for proofs or disproofs that the equation (1) is solvable for a given choice of data; the logical decidability of all such solvability questions will ensure that this algorithm always terminates in finite time.
One can use the Gödel completeness theorem to interpret logical decidability in terms of universes (also known as structures or models) of ZFC. In addition to the “standard” universe of sets that we believe satisfies the axioms of ZFC, there are also other “nonstandard” universes that also obey the axioms of ZFC. If the solvability of a tiling equation (1) is logically undecidable, this means that such a tiling exists in some universes of ZFC, but not in others.
(To continue the exam analogy, we thus see that a yes-no exam question is logically undecidable if the answer to the question is yes in some parallel universes, but not in others. A course syllabus is algorithmically undecidable if there is no way to prepare for the final exam for the course in a way that guarantees a perfect score (in the standard universe).)
Questions of decidability are also related to the notion of aperiodicity. For a given , a tiling equation (1) is said to be aperiodic if the equation (1) is solvable (in the standard universe of ZFC), but none of the solutions (in that universe) are completely periodic (i.e., there are no solutions where all of the are periodic). Perhaps the most well-known examples of aperiodic tilings (in the context of , and using rotations as well as translations) come from the Penrose tilings, but there are many others besides.
It was (essentially) observed by Hao Wang in the 1960s that if a tiling equation is logically undecidable, then it must necessarily be aperiodic. Indeed, if a tiling equation fails to be aperiodic, then (in the standard universe) either there is a periodic tiling, or there are no tilings whatsoever. In the former case, the periodic tiling can be used to give a finite proof that the tiling equation is solvable; in the latter case, the compactness theorem implies that there is some finite fragment of that is not compatible with being tiled by , and this provides a finite proof that the tiling equation is unsolvable. Thus in either case the tiling equation is logically decidable.
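Wang's observation has an algorithmic shadow: a periodic tiling is a finite, mechanically checkable certificate of solvability. The following Python sketch (nothing like Berger's construction, and assuming m, n ≥ 2) searches by backtracking for a doubly periodic tiling of the plane by Wang tiles, i.e. a valid matching assignment on an m × n torus.

```python
from itertools import product

def tiles_torus(tiles, m, n):
    """Backtracking search for an assignment of Wang tiles, given as
    (N, E, S, W) colour tuples, to an m x n torus so that adjacent
    edge colours match.  Success certifies a doubly periodic tiling
    of the plane.  Assumes m, n >= 2."""
    grid = {}

    def ok(i, j, t):
        # compare with already-placed neighbours (torus wraparound)
        left = grid.get((i, (j - 1) % n))
        if left is not None and left[1] != t[3]:   # left.E == t.W
            return False
        up = grid.get(((i - 1) % m, j))
        if up is not None and up[2] != t[0]:       # up.S == t.N
            return False
        right = grid.get((i, (j + 1) % n))
        if right is not None and t[1] != right[3]:
            return False
        down = grid.get(((i + 1) % m, j))
        if down is not None and t[2] != down[0]:
            return False
        return True

    cells = list(product(range(m), range(n)))

    def solve(k):
        if k == len(cells):
            return True
        i, j = cells[k]
        for t in tiles:
            if ok(i, j, t):
                grid[(i, j)] = t
                if solve(k + 1):
                    return True
                del grid[(i, j)]
        return False

    return solve(0)

# One tile whose opposite edges agree tiles any torus...
print(tiles_torus([(0, 0, 0, 0)], 2, 2))  # True
# ...but a tile whose East and West colours disagree cannot tile a 2x2 torus.
print(tiles_torus([(0, 0, 0, 1)], 2, 2))  # False
```

A successful search is a proof of solvability, but a failed search only rules out one period; the impossibility of bounding the period in advance is precisely where undecidability enters.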
This observation of Wang clarifies somewhat how logically undecidable tiling equations behave in the various universes of ZFC. In the standard universe, tilings exist, but none of them will be periodic. In nonstandard universes, tilings may or may not exist, and the tilings that do exist may be periodic (albeit with a nonstandard period); but there must be at least one universe in which no tiling exists at all.
In one dimension when (or more generally with a finite group), a simple pigeonholing argument shows that no tiling equations are aperiodic, and hence all tiling equations are decidable. However the situation changes in two dimensions. In 1966, Berger (a student of Wang) famously showed that there exist tiling equations (1) in the discrete plane that are aperiodic, or even logically undecidable; in fact he showed that the tiling problem in this case (with arbitrary choices of data ) was algorithmically undecidable. (Strictly speaking, Berger established this for a variant of the tiling problem known as the domino problem, but later work of Golomb showed that the domino problem could be easily encoded within the tiling problem.) This was accomplished by encoding the halting problem for Turing machines into the tiling problem (or domino problem); the former is well known to be algorithmically undecidable (and thus to have logically undecidable instances), and so the latter is also. However, the number of tiles required for Berger’s construction was quite large: his construction of an aperiodic tiling required tiles, and his construction of a logically undecidable tiling required an even larger (and not explicitly specified) collection of tiles. Subsequent work by many authors did reduce the number of tiles required; in the setting, the current world record for the smallest number of tiles in an aperiodic tiling is (due to Ammann, Grünbaum, and Shephard) and for a logically undecidable tiling is (due to Ollinger). On the other hand, it is conjectured (see Grünbaum–Shephard and Lagarias–Wang) that one cannot lower the number of tiles all the way to one:
Conjecture 1 (Periodic tiling conjecture) If is a periodic subset of a finitely generated abelian group , and is a finite subset of , then the tiling equation is not aperiodic.
This conjecture is known to be true in two dimensions (by work of Bhattacharya when , and more recently by us when ), but remains open in higher dimensions. By the preceding discussion, the conjecture implies that every tiling equation with a single tile is logically decidable, and the problem of whether a given periodic set can be tiled by a single tile is algorithmically decidable.
In this paper we show on the other hand that aperiodic and undecidable tilings exist when , at least if one is permitted to enlarge the group a bit:
Theorem 2 (Logically undecidable tilings)
- (i) There exists a group of the form for some finite abelian , a subset of , and finite sets such that the tiling equation is logically undecidable (and hence also aperiodic).
- (ii) There exists a dimension , a periodic subset of , and finite sets such that tiling equation is logically undecidable (and hence also aperiodic).
- (iii) There exists a non-abelian finite group (with the group law still written additively), a subset of , and a finite set such that the nonabelian tiling equation is logically undecidable (and hence also aperiodic).
We also have algorithmic versions of this theorem. For instance, the algorithmic version of (i) is that the problem of determining solvability of the tiling equation for a given choice of finite abelian group , subset of , and finite sets is algorithmically undecidable. Similarly for (ii), (iii).
This result (together with a negative result discussed below) suggests to us that there is a significant qualitative difference between the theory of tiling by a single (abelian) tile and the theory of tiling with multiple tiles (or one non-abelian tile). (The positive results on the periodic tiling conjecture certainly rely heavily on the fact that there is only one tile; in particular there is a “dilation lemma”, only available in this setting, that is of key importance in the two-dimensional theory.) It would be nice to eliminate the group from (i) (or to set in (ii)), but I think this would require a fairly significant modification of our methods.
Like many other undecidability results, the proof of Theorem 2 proceeds by a sequence of reductions, in which the undecidability of one problem is shown to follow from the undecidability of another, more “expressive” problem that can be encoded inside the original problem, until one reaches a problem that is so expressive that it encodes a problem already known to be undecidable. Indeed, all three undecidability results are ultimately obtained from Berger’s undecidability result on the domino problem.
The first step in increasing expressiveness is to observe that the undecidability of a single tiling equation follows from the undecidability of a system of tiling equations. More precisely, suppose we have non-empty finite subsets of a finitely generated group for and , as well as periodic sets of for , such that it is logically undecidable whether the system of tiling equations
for has no solution in . Then, for any , we can “stack” these equations into a single tiling equation in the larger group , and specifically to the equation where and It is a routine exercise to check that the system of equations (2) admits a solution in if and only if the single equation (3) admits a solution in . Thus, to prove the undecidability of a single equation of the form (3) it suffices to establish undecidability of a system of the form (2); note how the freedom to select the auxiliary group is important here.

We view systems of the form (2) as belonging to a kind of “language” in which each equation in the system is a “sentence” in the language imposing additional constraints on a tiling. One can now pick and choose various sentences in this language to try to encode various interesting problems. For instance, one can encode the concept of a function taking values in a finite group as a single tiling equation
since the solutions to this equation are precisely the graphs of a function . By adding more tiling equations to this equation to form a larger system, we can start imposing additional constraints on this function . For instance, if is a coset of some subgroup of , we can add a further equation to impose the constraint that for all , if we desire. If happens to contain two distinct elements , and , then the additional equation imposes the additional constraints that for all , and additionally that for all .

This begins to resemble the equations that come up in the domino problem. Here one has a finite set of Wang tiles – unit squares where each of the four sides is colored with a color (corresponding to the four cardinal directions North, South, East, and West) from some finite set of colors. The domino problem is then to tile the plane with copies of these tiles in such a way that adjacent sides match. In terms of equations, one is seeking to find functions obeying the pointwise constraint
for all where is the set of colors associated to the set of Wang tiles being used, and the matching constraints for all . As it turns out, the pointwise constraint (7) can be encoded by tiling equations that are fancier versions of (4), (5), (6) that involve only one unknown tiling set , but in order to encode the matching constraints (8) we were forced to introduce a second tile (or work with nonabelian tiling equations). This appears to be an inherent feature of the method, since we found a partial rigidity result for tilings by a single tile in one dimension that obstructs this encoding strategy from working when only one tile is available. The result is as follows:
Proposition 3 (Swapping property) Consider the solutions to a tiling equation in a one-dimensional group (with a finite abelian group, finite, and periodic). Suppose there are two solutions to this equation that agree on the left in the sense that For any function , define the “swap” of and to be the set Then also solves the equation (9).
One can think of and as “genes” with “nucleotides” , at each position , and is a new gene formed by choosing one of the nucleotides from the “parent” genes , at each position. The above proposition then says that the solutions to the equation (9) must be closed under “genetic transfer” among any pair of genes that agree on the left. This seems to present an obstruction to trying to encode equations such as
for two functions (say), which is a toy version of the matching constraint (8), since the class of solutions to this equation turns out not to obey this swapping property. On the other hand, it is easy to encode such equations using two tiles instead of one, and an elaboration of this construction is used to prove our main theorem.

Louis Esser, Burt Totaro, Chengxi Wang, and myself have just uploaded to the arXiv our preprint “Varieties of general type with many vanishing plurigenera, and optimal sine and sawtooth inequalities”. This is an interdisciplinary paper that arose because, in order to optimize a certain algebraic geometry construction, it became necessary to solve a purely analytic question which, while simple, did not seem to have been previously studied in the literature. We were able to solve the analytic question exactly and thus fully optimize the algebraic geometry construction, though the analytic question may have some independent interest.
Let us first discuss the algebraic geometry application. Given a smooth complex -dimensional projective variety there is a standard line bundle attached to it, known as the canonical line bundle; top-degree holomorphic forms on the variety become sections of this bundle. The bundle may not actually admit global sections; that is to say, the space of global sections may vanish. But as one raises the canonical line bundle to higher and higher powers to form further line bundles , the number of global sections tends to increase; in particular, the dimension of the space of global sections (known as the plurigenus) always obeys an asymptotic of the form
as for some non-negative number , which is called the volume of the variety ; this is an invariant that reveals some information about the birational geometry of . For instance, if the canonical line bundle is ample (or more generally, nef), this volume is equal to the intersection number (roughly speaking, the number of common zeroes of generic sections of the canonical line bundle); this is a special case of the asymptotic Riemann-Roch theorem. In particular, the volume is a natural number in this case. However, it is possible for the volume to also be fractional in nature. One can then ask: how small can the volume get without vanishing entirely? (By definition, varieties with non-vanishing volume are known as varieties of general type.)

It follows from a deep result obtained independently by Hacon–McKernan, Takayama, and Tsuji that there is a uniform lower bound for the volume of all -dimensional projective varieties of general type. However, the precise lower bound is not known, and the current paper is a contribution towards probing this bound by constructing varieties of particularly small volume in the high-dimensional limit . Prior to this paper, the best such constructions of -dimensional varieties basically had exponentially small volume, with a construction of volume at most given by Ballico–Pignatelli–Tasin, and an improved construction with a volume bound of given by Totaro and Wang. In this paper, we obtain a variant construction with the somewhat smaller volume bound of ; the method also gives comparable bounds for some other related algebraic geometry statistics, such as the largest for which the pluricanonical map associated to the linear system is not a birational embedding into projective space.
The space is constructed by taking a general hypersurface of a certain degree in a weighted projective space and resolving the singularities. These varieties are relatively tractable to work with, as one can use standard algebraic geometry tools (such as the Reid–Tai inequality) to provide sufficient conditions to guarantee that the hypersurface has only canonical singularities and that the canonical bundle is a reflexive sheaf, which allows one to calculate the volume exactly in terms of the degree and weights . The problem then reduces to optimizing the resulting volume given the constraints needed for the above-mentioned sufficient conditions to hold. After working with a particular choice of weights (which consist of products of mostly consecutive primes, with each product occurring with suitable multiplicities ), the problem eventually boils down to trying to minimize the total multiplicity , subject to certain congruence conditions and other bounds on the . Using crude bounds on the eventually leads to a construction with volume at most , but by taking advantage of the ability to “dilate” the congruence conditions and optimizing over all dilations, we are able to improve the constant to .
Now it is time to turn to the analytic side of the paper by describing the optimization problem that we solve. We consider the sawtooth function , with defined as the unique real number in that is equal to mod . We consider a (Borel) probability measure on the real line, and then compute the average value of this sawtooth function
as well as various dilates of this expectation. Since is bounded above by , we certainly have the trivial bound However, this bound is not very sharp. For instance, the only way in which could attain the value of is if the probability measure was supported on half-integers, but in that case would vanish. For the algebraic geometry application discussed above one is then led to the following question: for a given choice of , what is the best upper bound on the quantity that holds for all probability measures ?

If one considers the deterministic case in which is a Dirac mass supported at some real number , then the Dirichlet approximation theorem tells us that there is such that is within of an integer, so we have
in this case, and this bound is sharp for deterministic measures . Thus we have However, both of these bounds turn out to be far from the truth, and the optimal value of is comparable to . In fact we were able to compute this quantity precisely:
Theorem 1 (Optimal bound for sawtooth inequality) Let .In particular, we have as .
- (i) If for some natural number , then .
- (ii) If for some natural number , then .
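For contrast with the theorem, the deterministic (Dirac mass) bound coming from the Dirichlet approximation theorem, mentioned before the statement, is easy to test numerically; this sketch uses our own notation and checks that some dilate qx with 1 ≤ q ≤ n lies within 1/(n+1) of an integer.

```python
import math

def nearest_int_dist(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def dirichlet_check(x, n):
    """Dirichlet approximation: some q in {1,...,n} has q*x within
    1/(n+1) of an integer.  Return the best (q, distance)."""
    q, d = min(((q, nearest_int_dist(q * x)) for q in range(1, n + 1)),
               key=lambda t: t[1])
    assert d <= 1 / (n + 1) + 1e-12  # guaranteed by Dirichlet's theorem
    return q, d

q, d = dirichlet_check(math.sqrt(2), 10)
print(q, d)  # q = 5: 5*sqrt(2) = 7.07..., within 1/11 of the integer 7
```

The point of the theorem above is that for non-deterministic measures the truly optimal bound sits between this Dirichlet-type bound and the trivial one.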
We establish this bound through duality. Indeed, suppose we could find non-negative coefficients such that one had the pointwise bound
for all real numbers . Integrating this against an arbitrary probability measure , we would conclude and hence Conversely, one can find lower bounds on by selecting suitable candidate measures and computing the means . The theory of linear programming duality tells us that this method must give us the optimal bound, but one has to locate the optimal measure and optimal weights . This we were able to do by first doing some extensive numerics to discover these weights and measures for small values of , then doing some educated guesswork to extrapolate these examples to the general case, and finally verifying the required inequalities. In case (i) the situation is particularly simple, as one can take to be the discrete measure that assigns a probability to the numbers and the remaining probability of to , while the optimal weighted inequality (1) turns out to be which is easily proven by telescoping series. However the general case turned out to be significantly trickier to work out, and the verification of the optimal inequality required a delicate case analysis (reflecting the fact that equality was attained in this inequality in a large number of places).

After solving the sawtooth problem, we became interested in the analogous question for the sine function, that is to say what is the best bound for the inequality
The left-hand side is the smallest imaginary part of the first Fourier coefficients of . To our knowledge this quantity has not previously been studied in the Fourier analysis literature. By adopting a similar approach as for the sawtooth problem, we were able to compute this quantity exactly also:
Theorem 2 For any , one has In particular,
Interestingly, a closely related cotangent sum recently appeared in this MathOverflow post. Verifying the lower bound on boils down to choosing the right test measure ; it turns out that one should pick the probability measure supported on the with odd, with probability proportional to , and the lower bound verification eventually follows from a classical identity
for , first posed by Eisenstein in 1844 and proved by Stern in 1861. The upper bound arises from establishing the trigonometric inequality for all real numbers , which to our knowledge is new; the left-hand side has a Fourier-analytic interpretation as convolving the Fejér kernel with a certain discretized square wave function, and this interpretation is used heavily in our proof of the inequality.

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms for . For finitely supported functions , one can define the (non-normalised) Gowers norm by the formula
where denotes complex conjugation, and then, on any discrete interval and any function , define the (normalised) Gowers norm where is the extension of by zero to all of . Thus for instance (which technically makes a seminorm rather than a norm), and one can calculate where , and we use the averaging notation .

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials and functions , we define the multilinear form
(assuming that the denominator is finite and non-zero). Thus for instance where we view as formal (indeterminate) variables, and are understood to be extended by zero to all of . These forms are used to count patterns in various sets; for instance, the quantity is closely related to the number of length three arithmetic progressions contained in . Let us informally say that a form is controlled by the norm if the form is small whenever are -bounded functions with at least one of the small in norm. This definition was made more precise by Gowers and Wolf, who then defined the true complexity of a form to be the least such that is controlled by the norm. For instance:
- and have true complexity ;
- has true complexity ;
- has true complexity ;
- The form (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials ; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.
The (semi-)norm is so weak that it barely controls any averages at all. For instance the average
is not controlled by the semi-norm: it is perfectly possible for a -bounded function to even have vanishing norm but have large value of (consider for instance the parity function ).

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the and norms, which I will call the (or “profinite ”) norm:
where ranges over all arithmetic progressions in . This can easily be seen to be a norm on functions that controls the norm. It is also basically controlled by the norm for -bounded functions ; indeed, if is an arithmetic progression in of some spacing , then we can write as the intersection of an interval with a residue class modulo , and from Fourier expansion we have If we let be a standard bump function supported on with total mass and is a parameter then (extending by zero outside of ), as can be seen by using the triangle inequality and the estimate After some Fourier expansion of we now have Writing as a linear combination of and using the Gowers–Cauchy–Schwarz inequality, we conclude hence on optimising in we have Forms which are controlled by the norm (but not ) would then have their true complexity adjusted to with this insertion.

The norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form had true complexity in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function ; for the first two functions one needs to localize the norm to intervals of length . But I will ignore this technical point to keep the exposition simple.] The weaker claim that has true complexity is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).
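To make the new norm concrete, here is a brute-force Python sketch; we emphasise that the normalisation (dividing each progression sum by N) is our guess at a reasonable convention, not necessarily the one intended above. It illustrates the earlier point that the parity function has vanishing mean but a large profinite norm.

```python
def u1plus(f, N):
    """Brute-force sup over arithmetic progressions P in {1,...,N} of
    |sum over n in P of f(n)| / N.  (The division by N is an assumed
    normalisation, chosen so that constants have norm 1.)"""
    best = 0.0
    for a in range(1, N + 1):       # first term of the progression
        for d in range(1, N + 1):   # spacing
            s, n = 0.0, a
            while n <= N:           # every prefix is also a progression
                s += f(n)
                best = max(best, abs(s) / N)
                n += d
    return best

N = 64
parity = lambda n: (-1) ** n
print(abs(sum(parity(n) for n in range(1, N + 1))) / N)  # 0.0 (mean vanishes)
print(u1plus(parity, N))  # 0.5: the even numbers form a progression
```

The even numbers carry half the total mass of the parity function along a single progression, which is why this norm, unlike the mean, detects it.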
The well known inverse theorem for the norm tells us that if a -bounded function has norm at least for some , then there is a Fourier phase such that
this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

For one has a trivial inverse theorem; by definition, the norm of is at least if and only if
Thus the frequency appearing in the inverse theorem can be taken to be zero when working instead with the norm.

For one has the intermediate situation in which the frequency is not taken to be zero, but is instead major arc. Indeed, suppose that is -bounded with , thus
for some progression . This forces the spacing of this progression to be . We write the above inequality as

for some residue class and some interval . By Fourier expansion and the triangle inequality we then have

for some integer . Convolving by for a small multiple of and a Schwartz function of unit mass with Fourier transform supported on , we have

The Fourier transform of is bounded by and supported on , thus by Fourier expansion and the triangle inequality we have

for some , so in particular . Thus we have for some of the major arc form with . Conversely, for of this form, some routine summation by parts gives the bound

so if (2) holds for a -bounded then one must have .

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes of functions (where each class of functions induces a dual norm ):
Here I have included the three classes of functions that one can choose from for the inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.
The Gowers norms have counterparts for measure-preserving systems , known as Host-Kra seminorms. The norm can be defined for as
and the norm can be defined as

The seminorm is orthogonal to the invariant factor (generated by the (almost everywhere) invariant measurable subsets of ) in the sense that a function has vanishing seminorm if and only if it is orthogonal to all -measurable (bounded) functions. Similarly, the norm is orthogonal to the Kronecker factor , generated by the eigenfunctions of (that is to say, those obeying an identity for some -invariant ); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers norm, one can then define the Host-Kra seminorm by

it is orthogonal to the profinite factor , generated by the periodic sets of (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.

Joni Teräväinen and I have just uploaded to the arXiv our preprint “Quantitative bounds for Gowers uniformity of the Möbius and von Mangoldt functions“. This paper makes quantitative the Gowers uniformity estimates on the Möbius function and the von Mangoldt function .
To discuss the results we first discuss the situation of the Möbius function, which is technically simpler in some (though not all) ways. We assume familiarity with Gowers norms and standard notations around these norms, such as the averaging notation and the exponential notation . The prime number theorem in qualitative form asserts that
as . With the Vinogradov-Korobov error term, the prime number theorem is strengthened to

we refer to such decay bounds (with type factors) as pseudopolynomial decay. Equivalently, we obtain pseudopolynomial decay of the Gowers seminorm of :

As is well known, the Riemann hypothesis would be equivalent to an upgrade of this estimate to polynomial decay of the form

for any .

Once one restricts to arithmetic progressions, the situation gets worse: the Siegel-Walfisz theorem gives the bound
for any residue class and any , but with the catch that the implied constant is ineffective in . This ineffectivity cannot be removed without further progress on the notorious Siegel zero problem.

In 1937, Davenport was able to show the discorrelation estimate
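As a quick numerical illustration of the qualitative decay of the mean of the Möbius function discussed above (a toy computation of my own; the sieve and the cutoff of $10^5$ are illustrative choices, and of course no finite computation substitutes for the asymptotic statement):

```python
def mobius_sieve(N):
    # mu[n] for 0 <= n <= N: multiply by -1 for each prime factor,
    # then zero out anything divisible by the square of a prime
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] *= -1
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 10**5
mu = mobius_sieve(N)
avg = sum(mu[1:]) / N
assert abs(avg) < 0.01  # already far below the trivial bound of 1
```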
for any uniformly in , which leads (by standard Fourier arguments) to the Fourier uniformity estimate

Again, the implied constant is ineffective. If one insists on effective constants, the best bound currently available is

for some small effective constant .

For the situation with the norm, the previously known results were much weaker. Ben Green and I showed that
uniformly for any , any degree two (filtered) nilmanifold , any polynomial sequence , and any Lipschitz function ; again, the implied constants are ineffective. On the other hand, in a separate paper of Ben Green and myself, we established the following inverse theorem: if for instance we knew that

for some , then there exists a degree two nilmanifold of dimension , complexity , a polynomial sequence , and Lipschitz function of Lipschitz constant such that

Putting the two assertions together and comparing all the dependencies on parameters, one can establish the qualitative decay bound

However the decay rate produced by this argument is completely ineffective: obtaining a bound on when this quantity dips below a given threshold depends on the implied constant in (3) for some whose dimension depends on , and the dependence on obtained in this fashion is ineffective in the face of a Siegel zero.

For higher norms , the situation is even worse, because the quantitative inverse theory for these norms is poorer, and indeed it was only with the recent work of Manners that any such bound is available at all (at least for ). Basically, Manners establishes that if
then there exists a degree nilmanifold of dimension , complexity , a polynomial sequence , and Lipschitz function of Lipschitz constant such that

(We allow all implied constants to depend on .) Meanwhile, the bound (3) was extended to arbitrary nilmanifolds by Ben and myself. Again, the two results when concatenated give the qualitative decay

but the decay rate is completely ineffective.

Our first result gives an effective decay bound:
Theorem 1 For any , we have for some . The implied constants are effective.
This is off by a logarithm from the best effective bound (2) in the case. In the case there is some hope to remove this logarithm based on the improved quantitative inverse theory currently available in this case, but there is a technical obstruction to doing so which we will discuss later in this post. For the above bound is the best one could hope to achieve purely using the quantitative inverse theory of Manners.
We have analogues of all the above results for the von Mangoldt function . Here a complication arises: does not have mean close to zero, and one has to subtract off a suitable approximant to before one can expect good Gowers norm bounds. For the prime number theorem one can just use the approximant , giving
but even for the prime number theorem in arithmetic progressions one needs a more accurate approximant. In our paper it is convenient to use the “Cramér approximant” where and is the quasipolynomial quantity Then one can show from the Siegel-Walfisz theorem and standard bilinear sum methods that and for all and (with an ineffective dependence on ), again regaining effectivity if is replaced by a sufficiently small constant . All the previously stated discorrelation and Gowers uniformity results for then have analogues for , and our main result is similarly analogous:
Theorem 2 For any , we have for some . The implied constants are effective.
By standard methods, this result also gives quantitative asymptotics for counting solutions to various systems of linear equations in primes, with error terms that gain a factor of with respect to the main term.
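To illustrate the role of the simplest approximant numerically (again a toy computation of my own, with an illustrative cutoff), one can check that the partial sums of the von Mangoldt function and of the constant approximant agree to within a fraction of a percent by $N = 10^5$:

```python
import math

def chebyshev_psi(N):
    # psi(N) = sum of Lambda(n) over n <= N, i.e. log p over prime powers p^k <= N
    is_prime = [True] * (N + 1)
    psi = 0.0
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            q = p
            while q <= N:
                psi += math.log(p)
                q *= p
    return psi

N = 10**5
assert abs(chebyshev_psi(N) / N - 1) < 0.01  # psi(N) ~ N, as the prime number theorem predicts
```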
We now discuss the methods of proof, focusing first on the case of the Möbius function. Suppose first that there is no “Siegel zero”, by which we mean a quadratic character of some conductor with a zero with for some small absolute constant . In this case the Siegel-Walfisz bound (1) improves to a quasipolynomial bound
To establish Theorem 1 in this case, it suffices by Manners’ inverse theorem to establish the polylogarithmic bound

for all degree nilmanifolds of dimension and complexity , all polynomial sequences , and all Lipschitz functions of norm . If the nilmanifold had bounded dimension, then one could repeat the arguments of Ben and myself more or less verbatim to establish this claim from (5), which relied on the quantitative equidistribution theory on nilmanifolds developed in a separate paper of Ben and myself. Unfortunately, in the latter paper the dependence of the quantitative bounds on the dimension was not explicitly given. In an appendix to the current paper, we go through that paper to account for this dependence, showing that all exponents depend at most doubly exponentially in the dimension , which is barely sufficient to handle the dimension of that arises here.

Now suppose we have a Siegel zero . In this case the bound (5) will not hold in general, and hence also (6) will not hold either. Here, the usual way out (while still maintaining effective estimates) is to approximate not by , but rather by a more complicated approximant that takes the Siegel zero into account, and in particular is such that one has the (effective) pseudopolynomial bound
for all residue classes . The Siegel approximant to is actually a little bit complicated, and to our knowledge this sort of approximant first appears as late as this 2010 paper of Germán and Katai. Our version of this approximant is defined as the multiplicative function such that

when , and

when is coprime to all primes , and is a normalising constant given by the formula

(this constant ends up being of size and plays only a minor role in the analysis). This is a rather complicated formula, but it seems to be virtually the only choice of approximant that allows for bounds such as (7) to hold. (This is the one aspect of the problem where the von Mangoldt theory is simpler than the Möbius theory, as in the former one only needs to work with very rough numbers , for which one does not need to make any special accommodations for the behavior at small primes when introducing the Siegel correction term.)

With this starting point it is then possible to repeat the analysis of my previous papers with Ben and obtain the pseudopolynomial discorrelation bound

for as before, which when combined with Manners’ inverse theorem gives the doubly logarithmic bound

Meanwhile, a direct sieve-theoretic computation ends up giving the singly logarithmic bound

(indeed, there is a good chance that one could improve the bounds even further, though it is not helpful for this current argument to do so). Theorem 1 then follows from the triangle inequality for the Gowers norm. It is interesting that the Siegel approximant seems to play a rather essential role in the proof, even if it is absent in the final statement. We note that this approximant seems to be a useful tool to explore the “illusory world” of the Siegel zero further; see for instance the recent paper of Chinis for some work in this direction.

For the analogous problem with the von Mangoldt function (assuming a Siegel zero for sake of discussion), the approximant is simpler; we ended up using
which allows one to state the standard prime number theorem in arithmetic progressions with classical error term and Siegel zero term compactly as

Routine modifications of previous arguments also give

and

The one tricky new step is getting from the discorrelation estimate (8) to the Gowers uniformity estimate

One cannot directly apply Manners’ inverse theorem here because and are unbounded. There is a standard tool for getting around this issue, now known as the dense model theorem, which is the standard engine powering the transference principle from theorems about bounded functions to theorems about certain types of unbounded functions. However, the quantitative versions of the dense model theorem in the literature are expensive and would basically weaken the doubly logarithmic gain here to a triply logarithmic one. Instead, we bypass the dense model theorem and directly transfer the inverse theorem for bounded functions to an inverse theorem for unbounded functions by using the densification approach to transference introduced by Conlon, Fox, and Zhao. This technique turns out to be quantitatively quite efficient (the dependencies of the main parameters in the transference are polynomial in nature), and also has the technical advantage of avoiding the somewhat tricky “correlation condition” present in early transference results, which is also not beneficial for quantitative bounds.

In principle, the above results can be improved for due to the stronger quantitative inverse theorems in the setting. However, there is a bottleneck that prevents us from achieving this, namely that the equidistribution theory of two-step nilmanifolds has exponents which are exponential in the dimension rather than polynomial in the dimension, and as a consequence we were unable to improve upon the doubly logarithmic results.
Specifically, if one is given a sequence of bracket quadratics such as that fails to be -equidistributed, one would need to establish a nontrivial linear relationship modulo 1 between the (up to errors of ), where the coefficients are of size ; current methods only give coefficient bounds of the form . An old result of Schmidt gives a proof of concept that these sorts of polynomial dependencies on exponents are possible in principle, but actually implementing Schmidt’s methods here seems to be quite a non-trivial task. There is also another possible route to removing a logarithm, which is to strengthen the inverse theorem to make the dimension of the nilmanifold logarithmic in the uniformity parameter rather than polynomial. Again, the Freiman-Bilu theorem (see for instance this paper of Ben and myself) gives a proof of concept that such an improvement in dimension is possible, but some work would be needed to implement it.
Kaisa Matomäki, Maksym Radziwill, Xuancheng Shao, Joni Teräväinen, and I have just uploaded to the arXiv our preprint “Singmaster’s conjecture in the interior of Pascal’s triangle“. This paper leverages the theory of exponential sums over primes to make progress on a well known conjecture of Singmaster which asserts that any natural number larger than appears at most a bounded number of times in Pascal’s triangle. That is to say, for any integer , there are at most solutions to the equation
with . Currently, the largest number of solutions that is known to be attainable is eight, with equal to

Because of the symmetry of Pascal’s triangle it is natural to restrict attention to the left half of the triangle.

Our main result settles this conjecture in the “interior” region of the triangle:
Theorem 1 (Singmaster’s conjecture in the interior of the triangle) If and is sufficiently large depending on , there are at most two solutions to (1) in the region and hence at most four in the region Also, there is at most one solution in the region
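The record of eight representations mentioned above is attained by the value 3003, which occurs as $\binom{3003}{1}$, $\binom{78}{2}$, $\binom{15}{5}$, $\binom{14}{6}$, together with the four mirror entries $\binom{n}{n-m}$. This is easy to confirm by brute force (illustrative code of my own; it uses $\binom{n}{1} = n$ to truncate the search at $n = t$, and monotonicity in $m$ on the left half of each row to truncate the inner loop):

```python
from math import comb

def pascal_multiplicity(t):
    # number of pairs (n, m) with 0 <= m <= n and C(n, m) = t, for t >= 2;
    # any solution has n <= t, since C(n, 1) = n
    count = 0
    for n in range(2, t + 1):
        for m in range(1, n // 2 + 1):
            c = comb(n, m)
            if c == t:
                count += 1 if 2 * m == n else 2  # count the mirror entry C(n, n-m)
            if c >= t:
                break  # C(n, m) increases in m on the left half of the row
    return count

assert pascal_multiplicity(3003) == 8
```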
To verify Singmaster’s conjecture in full, it thus suffices in view of this result to verify the conjecture in the boundary region
(or equivalently ); we have deleted the case as it of course automatically supplies exactly one solution to (1). It is in fact possible that for sufficiently large there are no further collisions for in the region (3), in which case there would never be more than eight solutions to (1) for sufficiently large . This latter claim is known for bounded values of by Beukers, Shorey, and Tijdeman, with the main tool used being Siegel’s theorem on integral points.

The upper bound of two here for the number of solutions in the region (2) is best possible, due to the infinite family of solutions to the equation
coming from , and is the Fibonacci number.

The appearance of the quantity in Theorem 1 may be familiar to readers who are acquainted with Vinogradov’s bounds on exponential sums, which end up being the main new ingredient in our arguments. In principle this threshold could be lowered if we had stronger bounds on exponential sums.
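In one common normalisation of the Fibonacci numbers ($F_1 = F_2 = 1$), the infinite family in question consists of the collisions $\binom{n}{m} = \binom{n-1}{m+1}$ with $n = F_{2i+2}F_{2i+3}$ and $m = F_{2i}F_{2i+3}$; the first instance is $\binom{15}{5} = \binom{14}{6} = 3003$. A sketch checking the first few cases (code mine):

```python
from math import comb

def fib(k):
    # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

for i in range(1, 4):
    n = fib(2 * i + 2) * fib(2 * i + 3)
    m = fib(2 * i) * fib(2 * i + 3)
    # the collision C(n, m) = C(n - 1, m + 1), equivalently n(m+1) = (n-m)(n-m-1)
    assert comb(n, m) == comb(n - 1, m + 1)
    assert n * (m + 1) == (n - m) * (n - m - 1)
```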
To try to control solutions to (1) we use a combination of “Archimedean” and “non-Archimedean” approaches. In the “Archimedean” approach (following earlier work of Kane on this problem) we view primarily as real numbers rather than integers, and express (1) in terms of the Gamma function as
One can use this equation to solve for in terms of as for a certain real analytic function whose asymptotics are easily computable (for instance one has the asymptotic ). One can then view the problem as one of trying to control the number of lattice points on the graph . Here we can take advantage of the fact that in the regime (which corresponds to working in the left half of Pascal’s triangle), the function can be shown to be convex, but not too convex, in the sense that one has both upper and lower bounds on the second derivative of (in fact one can show that ). This can be used to preclude the possibility of having a cluster of three or more nearby lattice points on the graph , basically because the area subtended by the triangle connecting three of these points would lie between and , contradicting Pick’s theorem. Developing these ideas, we were able to show
Proposition 2 Let , and suppose is sufficiently large depending on . If is a solution to (1) in the left half of Pascal’s triangle, then there is at most one other solution to this equation in the left half with
Again, the example of (4) shows that a cluster of two solutions is certainly possible; the convexity argument only kicks in once one has a cluster of three or more solutions.
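To make the Archimedean viewpoint concrete, here is a toy version (my own illustration, using the standard `lgamma` function) of solving the real-variable equation $\binom{n}{m} = t$ for $n$ at fixed $m$, recovering the integer solution $n = 15$ for $t = 3003$, $m = 5$:

```python
import math

def log_binom(n, m):
    # log of the real-variable binomial Gamma(n+1) / (Gamma(m+1) Gamma(n-m+1))
    return math.lgamma(n + 1) - math.lgamma(m + 1) - math.lgamma(n - m + 1)

def solve_n(t, m):
    # bisect for the real n >= 2m with C(n, m) = t; on this range C(n, m)
    # is increasing in n, so a solution is unique (assumes t >= C(2m, m))
    target = math.log(t)
    lo, hi = 2.0 * m, float(t) + 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if log_binom(mid, m) < target:
            lo = mid
        else:
            hi = mid
    return lo

assert abs(solve_n(3003, 5) - 15) < 1e-6
```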
To finish the proof of Theorem 1, one has to show that any two solutions to (1) in the region of interest must be close enough for the above proposition to apply. Here we switch to the “non-Archimedean” approach, in which we look at the -adic valuations of the binomial coefficients, defined as the number of times a prime divides . From the fundamental theorem of arithmetic, a collision
between binomial coefficients occurs if and only if one has agreement of valuations

From the Legendre formula we can rewrite this latter identity (5) as

where denotes the fractional part of . (These sums are not truly infinite, because the summands vanish once is larger than .)

A key idea in our approach is to view this condition (6) statistically, for instance by viewing as a prime drawn randomly from an interval such as for some suitably chosen scale parameter , so that the two sides of (6) now become random variables. It then becomes advantageous to compare correlations between these two random variables and some additional test random variable. For instance, if and are far apart from each other, then one would expect the left-hand side of (6) to have a higher correlation with the fractional part , since this term shows up in the summation on the left-hand side but not the right. Similarly if and are far apart from each other (although there are some annoying cases one has to treat separately when there is some “unexpected commensurability”, for instance if is a rational multiple of where the rational has bounded numerator and denominator). In order to execute this strategy, it turns out (after some standard Fourier expansion) that one needs to get good control on exponential sums such as
for various choices of parameters , where . Fortunately, the methods of Vinogradov (which more generally can handle sums such as and for various analytic functions ) can give useful bounds on such sums as long as and are not too large compared to ; more specifically, Vinogradov’s estimates are non-trivial in the regime , and this ultimately leads to a distance bound

between any colliding pair in the left half of Pascal’s triangle, as well as the variant bound

under the additional assumption

Comparing these bounds with Proposition 2 and using some basic estimates about the function , we can conclude Theorem 1.

A modification of the arguments also gives similar results for the equation
where is the falling factorial:
Theorem 3 If and is sufficiently large depending on , there are at most two solutions to (7) in the region
Again the upper bound of two is best possible, thanks to identities such as
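As a footnote to the non-Archimedean discussion above, the fractional-part form of Legendre’s formula is easy to verify numerically. The following sketch (my own illustration) computes the $p$-adic valuation of a binomial coefficient from the fractional-part sums and compares it with direct factorization:

```python
from math import comb

def vp(t, p):
    # exact p-adic valuation of the integer t, by repeated division
    v = 0
    while t % p == 0:
        t //= p
        v += 1
    return v

def vp_binom(n, m, p):
    # nu_p(C(n, m)) via Legendre's formula; each term equals the integer
    # {m/q} + {(n-m)/q} - {n/q} in fractional-part form, which is 0 or 1
    v, q = 0, p
    while q <= n:
        v += (m % q + (n - m) % q - n % q) // q
        q *= p
    return v

for p in (2, 3, 5, 7, 11, 13):
    assert vp_binom(62, 26, p) == vp(comb(62, 26), p)
```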
I’m collecting in this blog post a number of simple group-theoretic lemmas, all of the following flavour: if is a subgroup of some product of groups, then one of three things has to happen:
- ( too small) is contained in some proper subgroup of , or the elements of are constrained to some sort of equation that the full group does not satisfy.
- ( too large) contains some non-trivial normal subgroup of , and as such actually arises by pullback from some subgroup of the quotient group .
- (Structure) There is some useful structural relationship between and the groups .
It is perhaps easiest to explain the flavour of these lemmas with some simple examples, starting with the case where we are just considering subgroups of a single group .
Lemma 1 Let be a subgroup of a group . Then exactly one of the following hold:
- (i) ( too small) There exists a non-trivial group homomorphism into a group such that for all .
- (ii) ( normally generates ) is generated as a group by the conjugates of .
Proof: Let be the group normally generated by , that is to say the group generated by the conjugates of . This is a normal subgroup of containing (indeed it is the smallest such normal subgroup). If is all of we are in option (ii); otherwise we can take to be the quotient group and to be the quotient map. Finally, if (i) holds, then all of the conjugates of lie in the kernel of , and so (ii) cannot hold.
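As a toy finite illustration of Lemma 1 (code and conventions mine, with $S_3$ realised as permutations of $\{0,1,2\}$): the subgroup generated by a single transposition falls into option (ii), since its conjugates are all the transpositions and these generate $S_3$:

```python
from itertools import permutations

def compose(a, b):
    # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def generated(gens):
    # subgroup generated by gens (closure under composition suffices in a finite group)
    elems = {tuple(range(3))}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {compose(a, b) for a in elems for b in elems} - elems
    return elems

G = set(permutations(range(3)))  # S_3
t = (1, 0, 2)                    # the transposition swapping 0 and 1
conjugates = {compose(compose(g, t), inverse(g)) for g in G}
assert generated(conjugates) == G  # <(0 1)> normally generates S_3: option (ii)
```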
Here is a “dual” to the above lemma:
Lemma 2 Let be a subgroup of a group . Then exactly one of the following hold:
- (i) ( too large) is the pullback of some subgroup of for some non-trivial normal subgroup of , where is the quotient map.
- (ii) ( is core-free) does not contain any non-trivial conjugacy class .
Proof: Let be the normal core of , that is to say the intersection of all the conjugates of . This is the largest normal subgroup of that is contained in . If is non-trivial, we can quotient it out and end up with option (i). If instead is trivial, then there is no non-trivial element that lies in the core, hence no non-trivial conjugacy class lies in and we are in option (ii). Finally, if (i) holds, then every conjugacy class of an element of is contained in and hence in , so (ii) cannot hold.
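A companion toy computation for Lemma 2 (again with $S_3$ acting on $\{0,1,2\}$, code mine): the order-two subgroup generated by a transposition has trivial normal core, so it is core-free and we are in option (ii):

```python
from itertools import permutations

def compose(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

G = set(permutations(range(3)))  # S_3
e = tuple(range(3))
H = {e, (1, 0, 2)}               # the subgroup generated by the transposition (0 1)

# the normal core of H: the intersection of all conjugates g H g^{-1}
core = set(G)
for g in G:
    core &= {compose(compose(g, h), inverse(g)) for h in H}
assert core == {e}               # H is core-free: option (ii) of Lemma 2
```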
For subgroups of nilpotent groups, we have a nice dichotomy that detects properness of a subgroup through abelian representations:
Lemma 3 Let be a subgroup of a nilpotent group . Then exactly one of the following hold:
- (i) ( too small) There exists non-trivial group homomorphism into an abelian group such that for all .
- (ii) .
Informally: if is a variable ranging in a subgroup of a nilpotent group , then either is unconstrained (in the sense that it really ranges in all of ), or it obeys some abelian constraint .
Proof: By definition of nilpotency, the lower central series
eventually becomes trivial. Since is a normal subgroup of , is also a subgroup of . Suppose first that is a proper subgroup of ; then the quotient map is a non-trivial homomorphism to an abelian group that annihilates , and we are in option (i). Thus we may assume that , and thus
Note that modulo the normal group , commutes with , hence and thus We conclude that . One can continue this argument by induction to show that for every ; taking large enough we end up in option (ii). Finally, it is clear that (i) and (ii) cannot both hold.
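The inductive step above can be sanity-checked in a small finite example (illustrative code of my own; the dihedral group $D_4$, nilpotent of class 2, stands in for a general nilpotent group). The finite content of the lemma is that a subgroup $H$ with $H[G,G] = G$ must already equal $G$:

```python
from itertools import combinations

def compose(a, b):
    return tuple(a[b[i]] for i in range(4))

def inverse(a):
    inv = [0] * 4
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def closure(gens):
    # subgroup generated by gens (finite, so closure under products suffices)
    S = {(0, 1, 2, 3)} | set(gens)
    while True:
        new = {compose(a, b) for a in S for b in S} - S
        if not new:
            return frozenset(S)
        S |= new

r, s = (1, 2, 3, 0), (0, 3, 2, 1)   # a rotation and a reflection of the square
G = closure([r, s])                 # the dihedral group D_4, of order 8
comm = closure([compose(compose(g, h), compose(inverse(g), inverse(h)))
                for g in G for h in G])
assert len(G) == 8 and len(comm) == 2   # [D_4, D_4] = {e, r^2}

# every subgroup of D_4 is generated by at most 2 elements, so this finds them all
subgroups = {closure(gens) for k in range(3) for gens in combinations(sorted(G), k)}
for H in subgroups:
    if {compose(h, c) for h in H for c in comm} == G:
        assert H == G               # H [G, G] = G forces H = G, as in Lemma 3
```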
Remark 4 When the group is locally compact and is closed, one can take the homomorphism in Lemma 3 to be continuous, and by using Pontryagin duality one can also take the target group to be the unit circle . Thus is now a character of . Similar considerations hold for some of the later lemmas in this post. Discrete versions of the above lemma, in which the group is replaced by some orbit of a polynomial map on a nilmanifold, were obtained by Leibman and are important in the equidistribution theory of nilmanifolds; see this paper of Ben Green and myself for further discussion.
Here is an analogue of Lemma 3 for special linear groups, due to Serre (IV-23):
Lemma 5 Let be a prime, and let be a closed subgroup of , where is the ring of -adic integers. Then exactly one of the following hold:
- (i) ( too small) There exists a proper subgroup of such that for all .
- (ii) .
Proof: It is a standard fact that the reduction of mod is , hence (i) and (ii) cannot both hold.
Suppose that (i) fails; then for every there exists such that , which we write as
We now claim inductively that for any and , there exists with ; taking limits as using the closed nature of will then place us in option (ii).

The case is already handled, so now suppose . If , we see from the case that we can write where and . Thus to establish the claim it suffices to do so under the additional hypothesis that .
First suppose that for some with . By the case, we can find of the form for some . Raising to the power and using and , we note that
giving the claim in this case.

Any matrix of trace zero with coefficients in is a linear combination of , , and is thus a sum of matrices that square to zero. Hence, if is of the form , then for some matrix of trace zero, and thus one can write (up to errors) as the finite product of matrices of the form with . By the previous arguments, such a matrix lies in up to errors, and hence does also. This completes the proof of the case.
Now suppose and the claim has already been proven for . Arguing as before, it suffices to close the induction under the additional hypothesis that , thus we may write . By induction hypothesis, we may find with . But then , and we are done.
We note a generalisation of Lemma 3 that involves two groups rather than just one:
Lemma 6 Let be a subgroup of a product of two nilpotent groups . Then exactly one of the following hold:
- (i) ( too small) There exists group homomorphisms , into an abelian group , with non-trivial, such that for all , where is the projection of to .
- (ii) for some subgroup of .
Proof: Consider the group . This is a subgroup of . If it is all of , then must be a Cartesian product and option (ii) holds. So suppose that this group is a proper subgroup of . Applying Lemma 3, we obtain a non-trivial group homomorphism into an abelian group such that whenever . For any in the projection of to , there is thus a unique quantity such that whenever . One easily checks that is a homomorphism, so we are in option (i).
Finally, it is clear that (i) and (ii) cannot both hold, since (i) places a non-trivial constraint on the second component of an element of for any fixed choice of .
We also note a similar variant of Lemma 5, which is Lemme 10 of this paper of Serre:
Lemma 7 Let be a prime, and let be a closed subgroup of . Then exactly one of the following hold:
- (i) ( too small) There exists a proper subgroup of such that for all .
- (ii) .
Proof: As in the proof of Lemma 5, (i) and (ii) cannot both hold. Suppose that (i) does not hold, then for any there exists such that . Similarly, there exists with . Taking commutators of and , we can find with . Continuing to take commutators with and extracting a limit (using compactness and the closed nature of ), we can find with . Thus, the closed subgroup of does not obey conclusion (i) of Lemma 5, and must therefore obey conclusion (ii); that is to say, contains . Similarly contains ; multiplying, we end up in conclusion (ii).
The most famous result of this type is of course the Goursat lemma, which we phrase here in a somewhat idiosyncratic manner to conform to the pattern of the other lemmas in this post:
Lemma 8 (Goursat lemma) Let be a subgroup of a product of two groups . Then one of the following hold:
- (i) ( too small) is contained in for some subgroups , of respectively, with either or (or both).
- (ii) ( too large) There exist normal subgroups of respectively, not both trivial, such that arises from a subgroup of , where is the quotient map.
- (iii) (Isomorphism) There is a group isomorphism such that is the graph of . In particular, and are isomorphic.
Here we almost have a trichotomy, because option (iii) is incompatible with both option (i) and option (ii). However, it is possible for options (i) and (ii) to simultaneously hold.
Proof: If either of the projections , from to the factor groups (thus and ) fails to be surjective, then we are in option (i). Thus we may assume that these maps are surjective.
Next, if either of the maps , fail to be injective, then at least one of the kernels , is non-trivial. We can then descend down to the quotient and end up in option (ii).
The only remaining case is when the group homomorphisms are both bijections, hence are group isomorphisms. If we set we end up in case (iii).
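The trichotomy can be visualised in a small abelian example (code and labels mine): for subgroups of $\mathbb{Z}_4 \times \mathbb{Z}_4$, surjectivity of the two projections together with triviality of the two “slice” kernels is exactly the graph case:

```python
from itertools import product

N = 4
G = set(product(range(N), range(N)))   # the group Z_4 x Z_4

def subgroup(gens):
    S = {(0, 0)} | set(gens)
    while True:
        new = {((a + c) % N, (b + d) % N) for (a, b) in S for (c, d) in S} - S
        if not new:
            return S
        S |= new

def goursat_case(H):
    # the three branches of the Goursat lemma for H <= Z_4 x Z_4
    if {a for a, b in H} != set(range(N)) or {b for a, b in H} != set(range(N)):
        return "too small"        # a projection fails to be surjective
    if len({a for a, b in H if b == 0}) > 1 or len({b for a, b in H if a == 0}) > 1:
        return "too large"        # H is a pullback from a proper quotient
    return "graph"                # H is the graph of an isomorphism

assert goursat_case(subgroup([(1, 1)])) == "graph"      # the diagonal
assert goursat_case(subgroup([(1, 0)])) == "too small"  # Z_4 x {0}
assert goursat_case(G) == "too large"                   # all of Z_4 x Z_4
```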
We can combine the Goursat lemma with Lemma 3 to obtain a variant:
Corollary 9 (Nilpotent Goursat lemma) Let be a subgroup of a product of two nilpotent groups . Then one of the following hold:
- (i) ( too small) There exists and a non-trivial group homomorphism such that for all .
- (ii) ( too large) There exist normal subgroups of respectively, not both trivial, such that arises from a subgroup of .
- (iii) (Isomorphism) There is a group isomorphism such that is the graph of . In particular, and are isomorphic.
Proof: If Lemma 8(i) holds, then by applying Lemma 3 we arrive at our current option (i). The other options are unchanged from Lemma 8, giving the claim.
Now we present a lemma involving three groups that is known in ergodic theory contexts as the “Furstenberg-Weiss argument”, as an argument of this type arose in this paper of Furstenberg and Weiss, though it may also appear implicitly in other contexts. It has the remarkable feature of being able to enforce the abelian nature of one of the groups once the other options of the lemma are excluded.
Lemma 10 (Furstenberg-Weiss lemma) Let be a subgroup of a product of three groups . Then one of the following hold:
- (i) ( too small) There is some proper subgroup of and some such that whenever and .
- (ii) ( too large) There exists a non-trivial normal subgroup of with abelian, such that arises from a subgroup of , where is the quotient map.
- (iii) is abelian.
Proof: If the group is a proper subgroup of , then we are in option (i) (with ), so we may assume that
Similarly we may assume that

Now let be any two elements of . By the above assumptions, we can find such that

and

Taking commutators to eliminate the terms, we conclude that

Thus the group contains every commutator , and thus contains the entire group generated by these commutators. If fails to be abelian, then is a non-trivial normal subgroup of , and now arises from in the obvious fashion, placing one in option (ii). Hence the only remaining case is when is abelian, giving us option (iii).

As before, we can combine this with previous lemmas to obtain a variant in the nilpotent case:
Lemma 11 (Nilpotent Furstenberg-Weiss lemma) Let be a subgroup of a product of three nilpotent groups . Then one of the following hold:
- (i) ( too small) There exists and group homomorphisms , for some abelian group , with non-trivial, such that whenever , where is the projection of to .
- (ii) ( too large) There exists a non-trivial normal subgroup of , such that arises from a subgroup of .
- (iii) is abelian.
Informally, this lemma asserts that if is a variable ranging in some subgroup , then either (i) there is a non-trivial abelian equation that constrains in terms of either or ; (ii) is not fully determined by and ; or (iii) is abelian.
Proof: Applying Lemma 10, we are already done if conclusions (ii) or (iii) of that lemma hold, so suppose instead that conclusion (i) holds for say . Then the group is not of the form , since it only contains those with . Applying Lemma 6, we obtain group homomorphisms , into an abelian group , with non-trivial, such that whenever , placing us in option (i).
The Furstenberg-Weiss argument is often used (though not precisely in this form) to establish that certain key structure groups arising in ergodic theory are abelian; see for instance Proposition 6.3(1) of this paper of Host and Kra for an example.
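As a sanity check of the commutator argument in Lemma 10, one can verify the conclusion on a small concrete example (the particular groups and subgroup below are an illustrative choice of mine, not drawn from the lemma): take $G_1 = G_2 = G_3 = S_3$ and let $H$ consist of those triples of permutations whose signs multiply to $+1$. The pairwise projections of $H$ are then surjective, and $H$ indeed contains $\{1\} \times \{1\} \times [S_3, S_3] = \{1\} \times \{1\} \times A_3$:

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))      # the symmetric group S_3, as tuples
e = (0, 1, 2)                          # identity permutation

def mul(p, q):                         # composition p o q
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def sgn(p):                            # sign of a permutation of 3 letters
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

# H = {(a,b,c) : sgn(a) sgn(b) sgn(c) = 1}, a subgroup of S3 x S3 x S3
H = {(a, b, c) for a, b, c in product(S3, repeat=3)
     if sgn(a) * sgn(b) * sgn(c) == 1}

# the pairwise projections of H are surjective, so option (i) does not apply
assert {(a, c) for a, b, c in H} == set(product(S3, S3))
assert {(b, c) for a, b, c in H} == set(product(S3, S3))

# the commutator argument then forces {1} x {1} x [G_3, G_3] to lie in H;
# here [S_3, S_3] = A_3, the alternating group
A3 = {mul(mul(p, q), mul(inv(p), inv(q))) for p in S3 for q in S3}
assert A3 == {p for p in S3 if sgn(p) == 1}
assert {c for a, b, c in H if a == e and b == e} == A3
```

Of course $G_3 = S_3$ is not abelian here, and $H$ correspondingly arises from a subgroup of $S_3 \times S_3 \times (S_3/A_3)$, as in option (ii).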
One can get more structural control on $H$ in the Furstenberg-Weiss lemma in option (iii) if one also broadens options (i) and (ii):
Lemma 12 (Variant of Furstenberg-Weiss lemma) Let $H$ be a subgroup of a product $G_1 \times G_2 \times G_3$ of three groups $G_1, G_2, G_3$. Then one of the following holds:
- (i) ($H$ too small) There is some proper subgroup $G'$ of $G_i \times G_j$ for some $1 \leq i < j \leq 3$ such that $(g_i, g_j) \in G'$ whenever $(g_1, g_2, g_3) \in H$. (In other words, the projection of $H$ to $G_i \times G_j$ is not surjective.)
- (ii) ($H$ too large) There exist normal subgroups $N_1, N_2, N_3$ of $G_1, G_2, G_3$ respectively, not all trivial, such that $H$ arises from a subgroup $H'$ of $G_1/N_1 \times G_2/N_2 \times G_3/N_3$, in the sense that $H = \pi^{-1}(H')$, where $\pi \colon G_1 \times G_2 \times G_3 \to G_1/N_1 \times G_2/N_2 \times G_3/N_3$ is the quotient map.
- (iii) $G_1, G_2, G_3$ are abelian and isomorphic. Furthermore, there exist isomorphisms $\phi_1 \colon G_1 \to A$, $\phi_2 \colon G_2 \to A$, $\phi_3 \colon G_3 \to A$ to an abelian group $A$ such that

$H = \{ (g_1, g_2, g_3) \in G_1 \times G_2 \times G_3 : \phi_3(g_3) = \phi_1(g_1) + \phi_2(g_2) \}.$
The ability to encode an abelian additive relation in terms of group-theoretic properties is vaguely reminiscent of the group configuration theorem.
Proof: We apply Lemma 10. Option (i) of that lemma implies option (i) of the current lemma, and similarly for option (ii), so we may assume without loss of generality that $G_3$ is abelian. By permuting $G_1, G_2, G_3$ we may also assume that $G_1, G_2$ are abelian, and we will use additive notation for these groups.

We may assume that the projections of $H$ to $G_1 \times G_2$ and $G_3$ are surjective, else we are in option (i). The group $N := \{ g_3 \in G_3 : (0, 0, g_3) \in H \}$ is then a normal subgroup of $G_3$; we may assume it is trivial, otherwise we can quotient it out and be in option (ii). Thus $H$ can be expressed as a graph $\{ (g_1, g_2, \phi(g_1, g_2)) : (g_1, g_2) \in G_1 \times G_2 \}$ for some map $\phi \colon G_1 \times G_2 \to G_3$. As $H$ is a group, $\phi$ must be a homomorphism, and we can write it as $\phi(g_1, g_2) = \phi_1(g_1) + \phi_2(g_2)$ for some homomorphisms $\phi_1 \colon G_1 \to G_3$, $\phi_2 \colon G_2 \to G_3$. Thus elements $(g_1, g_2, g_3)$ of $H$ obey the constraint $g_3 = \phi_1(g_1) + \phi_2(g_2)$.

If $\phi_1$ or $\phi_2$ fails to be injective, then we can quotient out by their kernels and end up in option (ii). If $\phi_1$ fails to be surjective, then the projection of $H$ to $G_2 \times G_3$ also fails to be surjective (since for $g_2 = 0$, $g_3$ is now constrained to lie in the range of $\phi_1$) and we are in option (i). Similarly if $\phi_2$ fails to be surjective. Thus we may assume that the homomorphisms $\phi_1, \phi_2$ are bijective and thus group isomorphisms. Setting $\phi_3$ to the identity, we arrive at option (iii). $\Box$
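The graph structure appearing in this proof can be checked on a small abelian example (the cyclic groups and the particular homomorphisms here are an illustrative choice of mine): in ${\bf Z}_5^3$, the subgroup $\{(a, b, c) : c = 2a + 3b\}$ projects onto each pair of coordinates, meets $\{0\} \times \{0\} \times {\bf Z}_5$ trivially, and is the graph of $(a, b) \mapsto \phi_1(a) + \phi_2(b)$ with $\phi_1(a) = 2a$, $\phi_2(b) = 3b$:

```python
from itertools import product

q = 5
# the graph subgroup H = {(a, b, 2a + 3b)} inside Z_5 x Z_5 x Z_5
H = {(a, b, (2 * a + 3 * b) % q) for a, b in product(range(q), repeat=2)}

# closed under addition, hence a subgroup of the (finite) product
assert all(((x[0] + y[0]) % q, (x[1] + y[1]) % q, (x[2] + y[2]) % q) in H
           for x in H for y in H)

# projections to each pair of coordinates are surjective...
assert {(a, b) for a, b, c in H} == set(product(range(q), repeat=2))
assert {(a, c) for a, b, c in H} == set(product(range(q), repeat=2))
assert {(b, c) for a, b, c in H} == set(product(range(q), repeat=2))
# ...and H meets {0} x {0} x Z_5 trivially, so it is a graph over the
# first two coordinates
assert {h for h in H if h[0] == 0 and h[1] == 0} == {(0, 0, 0)}

# the graphing map splits as phi_1(a) + phi_2(b), with phi_1, phi_2 read off
# from the generators (1,0) and (0,1)
phi1 = next(c for a, b, c in H if (a, b) == (1, 0))
phi2 = next(c for a, b, c in H if (a, b) == (0, 1))
assert (phi1, phi2) == (2, 3)
```

Since $2$ and $3$ are invertible mod $5$, the maps $\phi_1, \phi_2$ are isomorphisms here, so this example lands in option (iii).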
Combining this lemma with Lemma 3, we obtain a nilpotent version:
Corollary 13 (Variant of nilpotent Furstenberg-Weiss lemma) Let $H$ be a subgroup of a product $G_1 \times G_2 \times G_3$ of three nilpotent groups $G_1, G_2, G_3$. Then one of the following holds:
- (i) ($H$ too small) There are homomorphisms $\eta_i \colon G_i \to A$, $\eta_j \colon G_j \to A$ to some abelian group $A$ for some $1 \leq i < j \leq 3$, with $\eta_i, \eta_j$ not both trivial, such that $\eta_i(g_i) = \eta_j(g_j)$ whenever $(g_1, g_2, g_3) \in H$.
- (ii) ($H$ too large) There exist normal subgroups $N_1, N_2, N_3$ of $G_1, G_2, G_3$ respectively, not all trivial, such that $H$ arises from a subgroup of $G_1/N_1 \times G_2/N_2 \times G_3/N_3$, where $\pi \colon G_1 \times G_2 \times G_3 \to G_1/N_1 \times G_2/N_2 \times G_3/N_3$ is the quotient map.
- (iii) $G_1, G_2, G_3$ are abelian and isomorphic. Furthermore, there exist isomorphisms $\phi_1 \colon G_1 \to A$, $\phi_2 \colon G_2 \to A$, $\phi_3 \colon G_3 \to A$ to an abelian group $A$ such that

$H = \{ (g_1, g_2, g_3) \in G_1 \times G_2 \times G_3 : \phi_3(g_3) = \phi_1(g_1) + \phi_2(g_2) \}.$
Here is another variant of the Furstenberg-Weiss lemma, attributed to Serre by Ribet (see Lemma 3.3):
Lemma 14 (Serre's lemma) Let $H$ be a subgroup of a finite product $G_1 \times \dots \times G_k$ of groups $G_1, \dots, G_k$ with $k \geq 2$. Then one of the following holds:
- (i) ($H$ too small) There is some proper subgroup $G'$ of $G_i \times G_j$ for some $1 \leq i < j \leq k$ such that $(g_i, g_j) \in G'$ whenever $(g_1, \dots, g_k) \in H$.
- (ii) ($H$ too large) One has $H = G_1 \times \dots \times G_k$.
- (iii) One of the $G_i$ has a non-trivial abelian quotient $G_i/[G_i, G_i]$.
Proof: The claim is trivial for $k = 2$ (and we don't need (iii) in this case), so suppose that $k \geq 3$. We can assume that each $G_i$ is a perfect group, $G_i = [G_i, G_i]$, otherwise we can quotient out by the commutator subgroup $[G_i, G_i]$ and arrive in option (iii). Similarly, we may assume that all the projections of $H$ to $G_i \times G_j$, $1 \leq i < j \leq k$, are surjective, otherwise we are in option (i).

We now claim that for any $1 \leq i \leq k$ and any $g \in G_i$, one can find $(g_1, \dots, g_k) \in H$ with $g_j = 1$ for $1 \leq j < i$ and $g_i = g$. For $i = 1, 2$ this follows from the surjectivity of the projection of $H$ to $G_1 \times G_2$. Now suppose inductively that $3 \leq i \leq k$ and the claim has already been proven for $i - 1$. Since $G_i$ is perfect, it suffices to establish this claim for $g$ of the form $g = [g', g'']$ for some $g', g'' \in G_i$. By induction hypothesis (applied after swapping the roles of the $(i-1)^{\mathrm{th}}$ and $i^{\mathrm{th}}$ factors), we can find $(g_1, \dots, g_k) \in H$ with $g_j = 1$ for $1 \leq j \leq i - 2$ and $g_i = g'$. By surjectivity of the projection of $H$ to $G_{i-1} \times G_i$, one can find $(h_1, \dots, h_k) \in H$ with $h_{i-1} = 1$ and $h_i = g''$. Taking commutators of these two elements, we obtain the claim.

Setting $i = k$, we conclude that $H$ contains $\{1\} \times \dots \times \{1\} \times G_k$. Similarly for permutations. Multiplying these together, we see that $H$ contains all of $G_1 \times \dots \times G_k$, and we are in option (ii). $\Box$
I was recently asked the following interesting question by a bright high school student I am working with, and I did not immediately know the answer:
Question 1 Does there exist a smooth function $f \colon {\bf R} \to {\bf R}$ which is not real analytic, but such that all the differences $x \mapsto f(x+h) - f(x)$ are real analytic for every $h \in {\bf R}$?
The hypothesis implies that the Newton quotients $\frac{f(x+h) - f(x)}{h}$ are real analytic for every $h \neq 0$. If analyticity were preserved by smooth limits, this would imply that the derivative $f'$ is real analytic, which would make $f$ real analytic. However, we are not assuming any uniformity in the analyticity of the Newton quotients, so this simple argument does not seem to resolve the question immediately.
In the case that $f$ is periodic, say periodic with period $1$, one can answer the question in the negative by Fourier series. Perform a Fourier expansion $f(x) = \sum_{k \in {\bf Z}} c_k e^{2\pi i k x}$. If $f$ is not real analytic, then there is a sequence $k_j$ going to infinity such that $|c_{k_j}| = e^{-o(k_j)}$ as $j \to \infty$. From the Borel-Cantelli lemma one can then find a real number $h$ such that $|e^{2\pi i k_j h} - 1| \geq \frac{1}{2}$ (say) for infinitely many $j$, hence $|(e^{2\pi i k_j h} - 1) c_{k_j}| = e^{-o(k_j)}$ for infinitely many $j$. Thus the Fourier coefficients $(e^{2\pi i k h} - 1) c_k$ of $f(x+h) - f(x)$ do not decay exponentially and hence this function is not analytic, a contradiction.
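This mechanism is easy to see numerically on a toy model (the choice of coefficients and of the shift below are my own illustrative choices): the coefficients $c_k = e^{-\sqrt{k}}$ decay faster than any polynomial but slower than any exponential, so they model a smooth non-analytic periodic function, and for a typical shift $h$ the coefficients $(e^{2\pi i k h} - 1) c_k$ of the difference again fail to decay exponentially:

```python
import cmath
import math

def c(k):                 # model Fourier coefficients: smooth but not analytic
    return math.exp(-math.sqrt(k))

h = math.sqrt(2)          # an irrational shift; k*h is rarely near an integer

# frequencies where the multiplier e^{2 pi i k h} - 1 is bounded away from 0
ks = [k for k in range(1, 30000)
      if abs(cmath.exp(2j * math.pi * k * h) - 1) > 0.1]

# along many such frequencies, the coefficients (e^{2 pi i k h} - 1) c_k of
# the difference f(x+h) - f(x) still beat the exponential threshold e^{-k/100}
big = [k for k in ks
       if abs((cmath.exp(2j * math.pi * k * h) - 1) * c(k)) > math.exp(-k / 100)]
assert len(big) > 1000    # so no exponential decay for the difference either
```

(The threshold $e^{-k/100}$ only starts to lose to $e^{-\sqrt{k}}$ around $k \approx 10^4$, which is why the frequency range is taken fairly large.)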
I was not able to quickly resolve the non-periodic case, but I thought perhaps this might be a good problem to crowdsource, so I invite readers to contribute their thoughts on this problem here. In the spirit of the polymath projects, I would encourage comments that contain thoughts that fall short of a complete solution, in the event that some other reader may be able to take the thought further.
In this previous blog post I noted the following easy application of Cauchy-Schwarz:
Lemma 1 (Van der Corput inequality) Let $v, u_1, \dots, u_n$ be unit vectors in a Hilbert space $H$. Then

$\left( \sum_{i=1}^n |\langle v, u_i \rangle| \right)^2 \leq \sum_{1 \leq i, j \leq n} |\langle u_i, u_j \rangle|.$

Proof: The left-hand side may be written as $|\langle v, \sum_{i=1}^n \epsilon_i u_i \rangle|^2$ for some unit complex numbers $\epsilon_i$. By Cauchy-Schwarz we have

$\left|\left\langle v, \sum_{i=1}^n \epsilon_i u_i \right\rangle\right|^2 \leq \left\| \sum_{i=1}^n \epsilon_i u_i \right\|^2 = \sum_{1 \leq i, j \leq n} \epsilon_i \overline{\epsilon_j} \langle u_i, u_j \rangle$

and the claim now follows from the triangle inequality. $\Box$

As a corollary, correlation becomes transitive in a statistical sense (even though it is not transitive in an absolute sense):
Corollary 2 (Statistical transitivity of correlation) Let $v, u_1, \dots, u_n$ be unit vectors in a Hilbert space such that $|\langle v, u_i \rangle| \geq \delta$ for all $i = 1, \dots, n$ and some $\delta > 0$. Then we have $|\langle u_i, u_j \rangle| \geq \delta^2/2$ for at least $\delta^2 n^2/2$ of the pairs $(i, j)$.
Proof: From the lemma, we have

$\delta^2 n^2 \leq \sum_{1 \leq i, j \leq n} |\langle u_i, u_j \rangle|.$

The contribution of those $(i, j)$ with $|\langle u_i, u_j \rangle| < \delta^2/2$ is at most $\delta^2 n^2/2$, and all the remaining summands are at most $1$, giving the claim. $\Box$

One drawback with this corollary is that it does not tell us which pairs $u_i, u_j$ correlate. In particular, if the vector $v$ also correlates with a separate collection $u'_1, \dots, u'_n$ of unit vectors, the pairs $(i, j)$ for which $u_i, u_j$ correlate may have no intersection whatsoever with the pairs in which $u'_i, u'_j$ correlate (except of course on the diagonal $i = j$, where they must correlate).
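Both Lemma 1 and Corollary 2 are easy to test numerically on random data; the following snippet (with illustrative parameters $d, n, \delta$ of my own choosing) verifies both bounds for random unit vectors tilted toward a common $v$:

```python
import math
import random

random.seed(0)
d, n, delta = 8, 200, 0.5

def unit(w):
    r = math.sqrt(sum(x * x for x in w))
    return [x / r for x in w]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = unit([random.gauss(0, 1) for _ in range(d)])

# unit vectors tilted toward v, so that |<v, u_i>| >= delta for every i
us = []
while len(us) < n:
    w = unit([random.gauss(0, 1) for _ in range(d)])
    u = unit([2 * vi + wi for vi, wi in zip(v, w)])
    if abs(dot(v, u)) >= delta:
        us.append(u)

# Lemma 1: (sum_i |<v, u_i>|)^2 <= sum_{i,j} |<u_i, u_j>|
lhs = sum(abs(dot(v, u)) for u in us) ** 2
rhs = sum(abs(dot(ui, uj)) for ui in us for uj in us)
assert lhs <= rhs

# Corollary 2: at least delta^2 n^2 / 2 pairs with |<u_i, u_j>| >= delta^2 / 2
good = sum(1 for ui in us for uj in us if abs(dot(ui, uj)) >= delta ** 2 / 2)
assert good >= delta ** 2 * n ** 2 / 2
```

Both assertions are of course guaranteed by the proofs above; the point of the experiment is only to see the quantities involved on concrete data.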
While working on an ongoing research project, I recently found that there is a very simple way to get around the latter problem by exploiting the tensor power trick:
Corollary 3 (Simultaneous statistical transitivity of correlation) Let $v_k, u_{k,1}, \dots, u_{k,n}$ be unit vectors in a Hilbert space $H_k$ for each $k = 1, \dots, K$, such that $|\langle v_k, u_{k,i} \rangle| \geq \delta$ for all $k = 1, \dots, K$, $i = 1, \dots, n$, and some $\delta > 0$. Then there are at least $\delta^{2K} n^2/2$ pairs $(i, j)$ such that $\prod_{k=1}^K |\langle u_{k,i}, u_{k,j} \rangle| \geq \delta^{2K}/2$. In particular (by Cauchy-Schwarz) we have $|\langle u_{k,i}, u_{k,j} \rangle| \geq \delta^{2K}/2$ for all $k = 1, \dots, K$ and all such pairs $(i, j)$.
Proof: Apply Corollary 2 (with $\delta$ replaced by $\delta^K$) to the unit vectors $v_1 \otimes \dots \otimes v_K$ and $u_{1,i} \otimes \dots \otimes u_{K,i}$, $i = 1, \dots, n$, in the tensor product Hilbert space $H_1 \otimes \dots \otimes H_K$, noting that $|\langle v_1 \otimes \dots \otimes v_K, u_{1,i} \otimes \dots \otimes u_{K,i} \rangle| = \prod_{k=1}^K |\langle v_k, u_{k,i} \rangle| \geq \delta^K$. $\Box$
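The mechanism here is that inner products multiply under tensor products, so a single application of Corollary 2 in the tensor product space forces the same pairs $(i, j)$ to work for every $k$ simultaneously. This can again be checked numerically (the toy dimensions and parameters are my own illustrative choices):

```python
import math
import random

random.seed(1)
d, n, K, delta = 6, 100, 3, 0.6

def unit(w):
    r = math.sqrt(sum(x * x for x in w))
    return [x / r for x in w]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def tensor(a, b):                     # outer product, flattened to a vector
    return [x * y for x in a for y in b]

# K families: v_k and u_{k,1..n} with |<v_k, u_{k,i}>| >= delta for all k, i
vs, us = [], []
for _ in range(K):
    v = unit([random.gauss(0, 1) for _ in range(d)])
    family = []
    while len(family) < n:
        w = unit([random.gauss(0, 1) for _ in range(d)])
        u = unit([3 * x + y for x, y in zip(v, w)])
        if abs(dot(v, u)) >= delta:
            family.append(u)
    vs.append(v)
    us.append(family)

# build the tensor products u_{1,i} x ... x u_{K,i}
Ui = []
for i in range(n):
    t = us[0][i]
    for k in range(1, K):
        t = tensor(t, us[k][i])
    Ui.append(t)

# inner products multiply under tensor product
assert abs(dot(Ui[0], Ui[1]) -
           math.prod(dot(us[k][0], us[k][1]) for k in range(K))) < 1e-9

# Corollary 2 in the tensor space (correlation level delta**K)
good = [(i, j) for i in range(n) for j in range(n)
        if abs(dot(Ui[i], Ui[j])) >= delta ** (2 * K) / 2]
assert len(good) >= delta ** (2 * K) * n ** 2 / 2

# each good pair correlates in EVERY coordinate k simultaneously
for i, j in good:
    for k in range(K):
        assert abs(dot(us[k][i], us[k][j])) >= delta ** (2 * K) / 2
```

The final loop is the point of the corollary: since each factor $|\langle u_{k,i}, u_{k,j} \rangle|$ is at most $1$, a lower bound on the product gives the same lower bound on every individual factor.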
It is surprisingly difficult to obtain even a qualitative version of the above conclusion (namely, if $v_k$ correlates with all of the $u_{k,i}$, then there are many pairs $(i, j)$ for which $u_{k,i}$ correlates with $u_{k,j}$ for all $k$ simultaneously) without some version of the tensor power trick. For instance, even the powerful Szemerédi regularity lemma, when applied to the set of pairs $(i, j)$ for which one has correlation of $u_{k,i}$ with $u_{k,j}$ for a single $k$, does not seem to be sufficient. However, there is a reformulation of the argument using the Schur product theorem as a substitute for (or really, a disguised version of) the tensor power trick. For simplicity of notation let us just work with real Hilbert spaces to illustrate the argument. We start with the identity

$\langle u_{k,i}, u_{k,j} \rangle = \langle v_k, u_{k,i} \rangle \langle v_k, u_{k,j} \rangle + \langle \pi_k u_{k,i}, \pi_k u_{k,j} \rangle$

where $\pi_k$ is the orthogonal projection to the complement of $v_k$. This implies a Gram matrix inequality

$(\langle u_{k,i}, u_{k,j} \rangle)_{1 \leq i, j \leq n} \succeq (\langle v_k, u_{k,i} \rangle \langle v_k, u_{k,j} \rangle)_{1 \leq i, j \leq n} \succeq 0$

for each $k$, where $A \succeq B$ denotes the claim that $A - B$ is positive semi-definite. By the Schur product theorem, we conclude that

$\left( \prod_{k=1}^K \langle u_{k,i}, u_{k,j} \rangle \right)_{1 \leq i, j \leq n} \succeq \left( \prod_{k=1}^K \langle v_k, u_{k,i} \rangle \langle v_k, u_{k,j} \rangle \right)_{1 \leq i, j \leq n}$

and hence, for a suitable choice of signs $\epsilon_1, \dots, \epsilon_n \in \{-1, +1\}$,

$\sum_{1 \leq i, j \leq n} \epsilon_i \epsilon_j \prod_{k=1}^K \langle u_{k,i}, u_{k,j} \rangle \geq \left( \sum_{i=1}^n \prod_{k=1}^K |\langle v_k, u_{k,i} \rangle| \right)^2 \geq \delta^{2K} n^2.$

One now argues as in the proof of Corollary 2. $\Box$

A separate application of tensor powers to amplify correlations was also noted in this previous blog post giving a cheap version of the Kabatjanskii-Levenstein bound, but this seems to not be directly related to this current application.
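The projection identity and the resulting quadratic form inequality at the heart of this reformulation are easy to verify numerically (again with illustrative random data of my own choosing, in a single real Hilbert space):

```python
import math
import random

random.seed(2)
d, n = 5, 40

def unit(w):
    r = math.sqrt(sum(x * x for x in w))
    return [x / r for x in w]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = unit([random.gauss(0, 1) for _ in range(d)])
us = [unit([random.gauss(0, 1) for _ in range(d)]) for _ in range(n)]

def proj(u):                          # pi u = u - <v, u> v
    cvu = dot(v, u)
    return [x - cvu * y for x, y in zip(u, v)]

# the identity <u_i, u_j> = <v,u_i><v,u_j> + <pi u_i, pi u_j>
for ui in us:
    for uj in us:
        lhs = dot(ui, uj)
        rhs = dot(v, ui) * dot(v, uj) + dot(proj(ui), proj(uj))
        assert abs(lhs - rhs) < 1e-9

# consequence: for ANY vector of signs (e_1, ..., e_n),
#   sum_{i,j} e_i e_j <u_i, u_j>  >=  (sum_i e_i <v, u_i>)^2
e = [random.choice([-1, 1]) for _ in range(n)]
quad = sum(e[i] * e[j] * dot(us[i], us[j]) for i in range(n) for j in range(n))
assert quad >= sum(e[i] * dot(v, us[i]) for i in range(n)) ** 2 - 1e-9
```

The second assertion is exactly the $K = 1$ case of the quadratic form inequality above; the Schur product theorem is what lets one multiply several such Gram matrix inequalities together entrywise in the general case.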