Rachel Greenfeld and I have just uploaded to the arXiv our announcement “A counterexample to the periodic tiling conjecture“. This is an announcement of a longer paper that we are currently in the process of writing up (and hope to release in a few weeks), in which we disprove the periodic tiling conjecture of Grünbaum-Shephard and Lagarias-Wang. This conjecture can be formulated in both discrete and continuous settings:
Conjecture 1 (Discrete periodic tiling conjecture) Suppose that $F \subset {\bf Z}^d$ is a finite set that tiles ${\bf Z}^d$ by translations (i.e., ${\bf Z}^d$ can be partitioned into translates of $F$). Then $F$ also tiles ${\bf Z}^d$ by translations periodically (i.e., the set of translations can be taken to be a periodic subset of ${\bf Z}^d$).
Conjecture 2 (Continuous periodic tiling conjecture) Suppose that $\Omega \subset {\bf R}^d$ is a bounded measurable set of positive measure that tiles ${\bf R}^d$ by translations up to null sets. Then $\Omega$ also tiles ${\bf R}^d$ by translations periodically up to null sets.
The discrete periodic tiling conjecture can be easily established for $d=1$ by the pigeonhole principle (as first observed by Newman), and was proven for $d=2$ by Bhattacharya (with a new proof given by Greenfeld and myself). The continuous periodic tiling conjecture was established for $d=1$ by Lagarias and Wang. By an old observation of Hao Wang, one of the consequences of the (discrete) periodic tiling conjecture is that the problem of determining whether a given finite subset of ${\bf Z}^d$ tiles ${\bf Z}^d$ by translations is (algorithmically and logically) decidable.
On the other hand, once one allows tilings by more than one tile, it is well known that aperiodic tile sets exist, even in dimension two – finite collections of discrete or continuous tiles that can tile the given domain by translations, but not periodically. Perhaps the most famous examples of such aperiodic tilings are the Penrose tilings, but there are many other constructions; for instance, there is a construction of Ammann, Grünbaum, and Shephard of eight tiles that tile aperiodically. Recently, Rachel and I constructed a pair of tiles that tiled a periodic subset of their ambient group aperiodically (in fact we could even make the tiling question logically undecidable in ZFC).
Our main result is then
Theorem 3 Both the discrete and continuous periodic tiling conjectures fail for sufficiently large $d$. Also, there is a finite abelian group $G_0$ such that the analogue of the discrete periodic tiling conjecture for ${\bf Z}^2 \times G_0$ is false.
This suggests that the techniques used to prove the discrete periodic tiling conjecture in ${\bf Z}^2$ are already close to the limit of their applicability, as they cannot handle even virtually two-dimensional discrete abelian groups such as ${\bf Z}^2 \times G_0$. The main difficulty is in constructing the counterexample in the ${\bf Z}^2 \times G_0$ setting.
The approach starts by adapting some of the methods of a previous paper of Rachel and myself. The first step is to make the problem easier to solve by disproving a “multiple periodic tiling conjecture” instead of the traditional periodic tiling conjecture. At present, Theorem 3 asserts the existence of a “tiling equation” $A \oplus F = G$ (where one should think of $F$ and $G$ as given, and the tiling set $A$ as unknown), which admits solutions, all of which are non-periodic. It turns out that it is enough to instead assert the existence of a system
$$A \oplus F^{(1)} = G, \quad \dots, \quad A \oplus F^{(M)} = G$$
of tiling equations, which admits solutions $A$, all of which are non-periodic. This is basically because one can “stack” together a system of tiling equations into an essentially equivalent single tiling equation in a slightly larger group. The advantage of this reformulation is that it creates a “tiling language”, in which each sentence in the language expresses a different type of constraint on the unknown set $A$. The strategy then is to locate a non-periodic set which one can try to “describe” by sentences in the tiling language that are obeyed by this non-periodic set, and which are “structured” enough that one can capture their non-periodic nature through enough of these sentences.

It is convenient to replace sets by functions, so that this tiling language can be translated to a more familiar language, namely the language of (certain types of) functional equations. The key point here is that the tiling equation
$$A \oplus (\{0\} \times H) = G \times H$$
for some abelian groups $G$, $H$ is precisely asserting that $A$ is the graph $\{ (x, f(x)): x \in G \}$ of some function $f: G \to H$ (this is sometimes referred to as the “vertical line test” in U.S. undergraduate math classes). Using this translation, it is possible to encode a variety of functional equations relating one or more functions taking values in some finite group (such as a cyclic group).

The non-periodic behaviour that we ended up trying to capture was that of a certain “$p$-adically structured function” $f_p$ associated to a fixed and sufficiently large prime $p$ (in fact any sufficiently large prime would suffice for our arguments), defined by the formula
$$f_p(n) := \frac{n}{p^{\nu_p(n)}} \hbox{ mod } p$$
for non-zero integers $n$, where $\nu_p(n)$ is the number of times $p$ divides $n$. In other words, $f_p(n)$ is the last non-zero digit in the base $p$ expansion of $n$ (with a suitable convention for the last non-zero digit of $0$). This function is not periodic, and yet obeys a lot of functional equations; for instance, one has $f_p(pn) = f_p(n)$ for all $n$, and also $f_p(n) = n \hbox{ mod } p$ whenever $p$ does not divide $n$ (and in fact these two equations, together with the convention at $0$, completely determine $f_p$). Here is what the function looks like:
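In code, this function and its two defining equations are easy to experiment with (my own illustration, not code from the paper; the value at $0$ below is just one possible convention):

```python
def last_nonzero_digit(n, p=5):
    """f_p(n): the last non-zero digit of n in base p (here p = 5).

    The value at n = 0 is one possible convention, chosen for illustration.
    """
    if n == 0:
        return 0
    while n % p == 0:
        n //= p            # strip a trailing zero in base p
    return n % p

# The two functional equations from the text:
# f_p(p*n) = f_p(n) for all n, and f_p(n) = n mod p when p does not divide n.
assert all(last_nonzero_digit(5 * n) == last_nonzero_digit(n) for n in range(1, 200))
assert all(last_nonzero_digit(n) == n % 5 for n in range(1, 200) if n % 5 != 0)
```

One can see the self-similarity responsible for the non-periodicity directly: the values along the multiples of $5$ reproduce the whole sequence.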
It turns out that we cannot describe this one-dimensional non-periodic function directly via tiling equations. However, we can describe two-dimensional non-periodic functions such as $(n,m) \mapsto f_p(An+Bm)$ for some coefficients $A, B$ via a suitable system of tiling equations. A typical such function looks like this:
A feature of this function is that when one restricts to a row or diagonal of such a function, the resulting one-dimensional function exhibits “$p$-adic structure” in the sense that it behaves like a rescaled version of $f_p$; see the announcement for a precise version of this statement. It turns out that the converse is essentially true: after excluding some degenerate solutions in which the function is constant along one or more of the columns, all two-dimensional functions which exhibit $p$-adic structure along (non-vertical) lines must behave like one of the functions mentioned earlier, and in particular are non-periodic. The proof of this result is strongly reminiscent of the type of reasoning needed to solve a Sudoku puzzle, and so we have adopted some Sudoku-like terminology in our arguments to provide intuition and visuals. One key step is to perform a shear transformation to the puzzle so that many of the rows become constant, as displayed in this example,
and then perform a “Tetris” move of eliminating the constant rows to arrive at a secondary Sudoku puzzle which one then analyzes in turn:
It is the iteration of this procedure that ultimately generates the non-periodic $p$-adic structure.
Kaisa Matomäki, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint “Higher uniformity of arithmetic functions in short intervals I. All intervals“. This paper investigates the higher order (Gowers) uniformity of standard arithmetic functions in analytic number theory (and specifically, the Möbius function $\mu$, the von Mangoldt function $\Lambda$, and the generalised divisor functions $d_k$) in short intervals $(x, x+H]$, where $x$ is large and $H$ is comparable to a power $x^\theta$ of $x$ for a fixed constant $0 < \theta < 1$ (that one would like to be as small as possible). If we let $f$ denote one of the functions $\mu, \Lambda, d_k$, then there is extensive literature on the estimation of short sums
$$\sum_{x < n \leq x+H} f(n)$$
and some literature also on the estimation of exponential sums such as
$$\sum_{x < n \leq x+H} f(n) e(-\alpha n)$$
for a real frequency $\alpha$, where $e(\theta) := e^{2\pi i \theta}$. For applications in the additive combinatorics of such functions $f$, it is also necessary to consider more general correlations, such as polynomial correlations
$$\sum_{x < n \leq x+H} f(n) e(-P(n)),$$
where $P$ is a polynomial of some fixed degree, or more generally
$$\sum_{x < n \leq x+H} f(n) \overline{F}(g(n) \Gamma),$$
where $G/\Gamma$ is a nilmanifold of fixed degree and dimension (and with some control on structure constants), $g: {\bf Z} \to G$ is a polynomial map, and $F: G/\Gamma \to {\bf C}$ is a Lipschitz function (with some bound on the Lipschitz constant). Indeed, thanks to the inverse theorem for the Gowers uniformity norm, such correlations let one control the Gowers uniformity norm of $f$ (possibly after subtracting off some renormalising factor) on such short intervals $(x, x+H]$, which can in turn be used to control other multilinear correlations involving such functions.

Traditionally, asymptotics for such sums are expressed in terms of a “main term” of some arithmetic nature, plus an error term that is estimated in magnitude. For instance, a sum such as $\sum_{x < n \leq x+H} \Lambda(n) e(-\alpha n)$ would be approximated in terms of a main term that vanishes (or is negligible) if $\alpha$ is “minor arc”, but would be expressible in terms of something like a Ramanujan sum if $\alpha$ was “major arc”, together with an error term. We found it convenient to cancel off such main terms by subtracting an approximant $f^\sharp$ from each of the arithmetic functions $f$ and then getting upper bounds on remainder correlations such as
$$\left| \sum_{x < n \leq x+H} (f(n) - f^\sharp(n)) \overline{F}(g(n) \Gamma) \right| \ \ \ \ \ (1)$$
(actually for technical reasons we also allow the variable to be restricted further to a subprogression of , but let us ignore this minor extension for this discussion). There is some flexibility in how to choose these approximants, but we eventually found it convenient to use the following choices.
- For the Möbius function $\mu$, we simply set $\mu^\sharp := 0$, as per the Möbius pseudorandomness conjecture. (One could choose a more sophisticated approximant in the presence of a Siegel zero, as I did with Joni in this recent paper, but we do not do so here.)
- For the von Mangoldt function $\Lambda$, we eventually went with the Cramér-Granville approximant $\Lambda^\sharp(n) := \frac{W}{\phi(W)} 1_{(n,W)=1}$, where $W$ is the product of the primes up to a suitable threshold $R$.
- For the divisor functions $d_k$, we used a somewhat complicated-looking approximant $d_k^\sharp(n) := P_k(\log n)$ for some explicit polynomials $P_k$, chosen so that $d_k^\sharp$ and $d_k$ have almost exactly the same sums along arithmetic progressions (see the paper for details).
The objective is then to obtain bounds on sums such as (1) that improve upon the “trivial bound” that one can get with the triangle inequality and standard number theory bounds such as the Brun-Titchmarsh inequality. For $f = \mu, \Lambda$, the Siegel-Walfisz theorem suggests that it is reasonable to expect error terms that have “strongly logarithmic savings” in the sense that they gain a factor of $O_A(\log^{-A} x)$ over the trivial bound for any $A > 0$; for $f = d_k$, the Dirichlet hyperbola method suggests instead that one has “power savings”, in that one should gain a factor of $x^{-c}$ over the trivial bound for some $c > 0$. In the case of the Möbius function $\mu$, there is an additional trick (introduced by Matomäki and Teräväinen) that allows one to lower the exponent $\theta$ somewhat at the cost of only obtaining “weakly logarithmic savings” of shape $\log^{-c} x$ for some small $c > 0$.
Our main estimates on sums of the form (1) work in the following ranges:
- For , one can obtain strongly logarithmic savings on (1) for , and power savings for .
- For , one can obtain weakly logarithmic savings for .
- For , one can obtain power savings for .
- For , one can obtain power savings for .
Conjecturally, one should be able to obtain power savings in all cases, and lower $\theta$ all the way down to zero, but the ranges of exponents and savings given here seem to be the limit of current methods unless one assumes additional hypotheses, such as GRH. The result for $\Lambda$ with correlation against Fourier phases $e(-\alpha n)$ was established previously by Zhan, and the result for such phases and $\mu$ was established previously by Matomäki and Teräväinen.
By combining these results with tools from additive combinatorics, one can obtain a number of applications:
- Direct insertion of our bounds in the recent work of Kanigowski, Lemańczyk, and Radziwiłł on the prime number theorem for dynamical systems that are analytic skew products gives some improvements in the exponents there.
- We can obtain a “short interval” version of a multiple ergodic theorem along primes established by Frantzikinakis-Host-Kra and Wooley-Ziegler, in which we average over intervals of the form $(x, x+H]$ rather than $[1, x]$.
- We can obtain a “short interval” version of the “linear equations in primes” asymptotics obtained by Ben Green, Tamar Ziegler, and myself in this sequence of papers, where the variables in these equations lie in short intervals $(x, x+H]$ rather than long intervals such as $[1, x]$.
We now briefly discuss some of the ingredients of the proof of our main results. The first step is standard, using combinatorial decompositions (based on the Heath-Brown identity and (for the Möbius result) the Ramaré identity) to decompose $f(n)$ into more tractable sums of the following types:
- Type $I$ sums, which are basically of the form $\sum_{x < ab \leq x+H:\ a \leq A} \alpha(a)$ for some weights $\alpha(a)$ of controlled size and some cutoff $A$ that is not too large;
- Type $II$ sums, which are basically of the form $\sum_{x < ab \leq x+H:\ A \leq a \leq A'} \alpha(a) \beta(b)$ for some weights $\alpha(a)$, $\beta(b)$ of controlled size and some cutoffs $A, A'$ that are not too close to $1$ or to $x$;
- Type $I_2$ sums, which are basically of the form $\sum_{x < abc \leq x+H:\ a \leq A} \alpha(a)$ for some weights $\alpha(a)$ of controlled size and some cutoff $A$ that is not too large.
The precise ranges of the cutoffs depend on the choice of $f$; our methods fail once these cutoffs pass a certain threshold, and this is the reason for the exponents $\theta$ being what they are in our main results.
The Type $I$ sums involving nilsequences can be treated by methods similar to those in this previous paper of Ben Green and myself; the main innovations are in the treatment of the Type $II$ and Type $I_2$ sums.
For the Type $II$ sums, one can split into the “abelian” case in which (after some Fourier decomposition) the nilsequence $F(g(n)\Gamma)$ is basically of the form $e(P(n))$ for some polynomial $P$, and the “non-abelian” case in which $G$ is non-abelian and $F$ exhibits non-trivial oscillation in a central direction. In the abelian case we can adapt arguments of Matomäki and Shao, which use Cauchy-Schwarz and the equidistribution properties of polynomials to obtain good bounds unless $e(P(n))$ is “major arc” in the sense that it resembles (or “pretends to be”) $\chi(n) n^{it}$ for some Dirichlet character $\chi$ and some frequency $t$, but in this case one can use classical multiplicative methods to control the correlation. It turns out that the non-abelian case can be treated similarly. After applying Cauchy-Schwarz, one ends up analyzing the equidistribution of a certain four-variable polynomial sequence
as the four variables range in various dyadic intervals. Using the known multidimensional equidistribution theory of polynomial maps in nilmanifolds, one can eventually show in the non-abelian case that this sequence either has enough equidistribution to give cancellation, or else the nilsequence involved can be replaced with one from a lower dimensional nilmanifold, in which case one can apply an induction hypothesis.

For the Type $I_2$ sum, a model sum to study is
which one can expand as a sum of nilsequence values $\overline{F}(g(ab)\Gamma)$ over the hyperbolic region $\{ (a,b): x < ab \leq x+H \}$. We experimented with a number of ways to treat this type of sum (including automorphic form methods, or methods based on the Voronoi formula or van der Corput’s inequality), but somewhat to our surprise, the most efficient approach was an elementary one, in which one uses the Dirichlet approximation theorem to decompose the hyperbolic region into a number of arithmetic progressions, and then uses equidistribution theory to establish cancellation of the resulting nilsequences on the majority of these progressions. As it turns out, this strategy works well in the stated regime unless the nilsequence involved is “major arc”, but the latter case is treatable by existing methods as discussed previously; this is why the exponent in our $d_k$ result can be taken as low as it is.

In a sequel to this paper (currently in preparation), we will obtain analogous results for almost all intervals $(x, x+H]$, in which we will be able to lower the exponent $\theta$ all the way down towards zero.
Jan Grebík, Rachel Greenfeld, Václav Rozhoň and I have just uploaded to the arXiv our preprint “Measurable tilings by abelian group actions“. This paper is related to an earlier paper of Rachel Greenfeld and myself concerning tilings of lattices ${\bf Z}^d$, but now we consider the more general situation of tiling a measure space $X$ by a tile $A \subset X$ shifted by a finite subset $F$ of shifts of an abelian group $G = (G,+)$ that acts in a measure-preserving (or at least quasi-measure-preserving) fashion on $X$. For instance, $X$ could be a torus ${\bf T}^d = ({\bf R}/{\bf Z})^d$, $A$ could be a positive measure subset of that torus, and $G$ could be the group ${\bf R}^d$, acting on $X$ by translation.
If $F$ is a finite subset of $G$ with the property that the translates $f+A$, $f \in F$ of $A$ partition $X$ up to null sets, we write $A \oplus F = X$, and refer to this as a measurable tiling of $X$ by $A$ (with tiling set $F$). For instance, if $X$ is the torus ${\bf R}/{\bf Z}$, we can create a measurable tiling with $A = [0,1/2] \hbox{ mod } 1$ and $F = \{0, 1/2\}$. Our main results are the following:
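As a quick numerical sanity check of this toy example (my own illustration, not code from the paper; I use the half-open interval $[0,1/2)$, which agrees with $[0,1/2]$ up to null sets):

```python
# Check that the translates A + f, f in F, cover generic points of the
# torus R/Z exactly once, i.e. that A + F = R/Z is a measurable tiling.

def tiles_torus(A_indicator, F, samples=10000):
    """Return True if each sample point lies in exactly one translate A + f."""
    for i in range(samples):
        x = (i + 0.5) / samples              # generic points of R/Z
        count = sum(A_indicator((x - f) % 1.0) for f in F)
        if count != 1:
            return False
    return True

A = lambda x: x < 0.5                        # indicator of [0, 1/2)
assert tiles_torus(A, [0.0, 0.5])            # a measurable tiling
assert not tiles_torus(A, [0.0, 0.25])       # overlapping translates fail
```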
- By modifying arguments from previous papers (including the one with Greenfeld mentioned above), we can establish the following “dilation lemma”: a measurable tiling $A \oplus F = X$ automatically implies further measurable tilings $A \oplus rF = X$, whenever $r$ is an integer coprime to all primes up to the cardinality of $F$.
- By averaging the above dilation lemma, we can also establish a “structure theorem” that decomposes the indicator function $1_A$ of $A$ into components, each of which is invariant with respect to a certain shift in $G$. We can establish this theorem in the case of measure-preserving actions on probability spaces via the ergodic theorem, but one can also generalize to other settings by using the device of “measurable medial means” (which relates to the concept of a universally measurable set).
- By applying this structure theorem, we can show that all measurable tilings $A \oplus F = {\bf R}/{\bf Z}$ of the one-dimensional torus are rational, in the sense that $F$ lies in a coset of the rationals ${\bf Q}/{\bf Z}$. This answers a recent conjecture of Conley, Grebík, and Pikhurko; we also give an alternate proof of this conjecture using some previous results of Lagarias and Wang.
- For tilings $A \oplus F = {\bf T}^d$ of higher-dimensional tori, the tiling need not be rational. However, we can show that we can “slide” the tiling to be rational by giving each translate $f + A$, $f \in F$, of $A$ a “velocity” $v_f$, so that for every time $t$, the translates $f + t v_f + A$ still form a partition of ${\bf T}^d$ modulo null sets, and at time $t=1$ the tiling becomes rational. In particular, if a set $A$ can tile a torus in an irrational fashion, then it must also be able to tile the torus in a rational fashion.
- In the two-dimensional case $d=2$ one can arrange matters so that all the velocities $v_f$ are parallel. If we furthermore assume that the tile $A$ is connected, we can also show that the union of all the translates $f + A$ with a common velocity $v_f$ forms a subset of the torus invariant under translation in the direction of that velocity.
- Finally, we show that tilings $A \oplus F = G \times H$ of the product of a finitely generated discrete group $G$ with a finite group $H$ cannot be constructed in a “local” fashion (we formalize this probabilistically using the notion of a “factor of iid” process) unless the tile $A$ is contained in a single coset of $\{0\} \times H$. (Nonabelian local tilings, for instance of the sphere by rotations, are of interest due to connections with the Banach-Tarski paradox; see the aforementioned paper of Conley, Grebik, and Pikhurko. Unfortunately, our methods seem to break down completely in the nonabelian case.)
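The dilation lemma in the first item has a simple finite toy analogue that is easy to test numerically (my own illustration on a cyclic group, not code from the paper):

```python
# Toy check of the dilation phenomenon: if F + A = Z_N is a tiling of the
# cyclic group Z_N, then dilating F by r coprime to every prime up to |F|
# preserves the tiling.

from math import gcd

def is_tiling(F, A, N):
    """Check that the sums f + a mod N cover Z_N exactly once."""
    sums = [(f + a) % N for f in F for a in A]
    return sorted(sums) == list(range(N))

N, F, A = 6, [0, 1, 2], [0, 3]
assert is_tiling(F, A, N)

# Here |F| = 3, so the lemma applies to any r coprime to 2 and 3.
for r in range(1, N + 1):
    if gcd(r, 6) == 1:
        assert is_tiling([r * f % N for f in F], A, N)
```

The lemma gives a sufficient condition only; for particular tilings other dilates may happen to work as well.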
I’ve just uploaded to the arXiv my preprint “Perfectly packing a square by squares of nearly harmonic sidelength“. This paper concerns a variant of an old problem of Meir and Moser, who asked whether it is possible to perfectly pack squares of sidelength $\frac{1}{n}$ for $n \geq 2$ into a single square or rectangle of area $\sum_{n=2}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} - 1$. (The following variant problem, also posed by Meir and Moser and discussed for instance in this MathOverflow post, is perhaps even more well known: is it possible to perfectly pack rectangles of dimensions $\frac{1}{n} \times \frac{1}{n+1}$ for $n \geq 1$ into a single square of area $\sum_{n=1}^\infty \frac{1}{n(n+1)} = 1$?) For the purposes of this paper, rectangles and squares are understood to have sides parallel to the axes, and a packing is perfect if it partitions the region being packed up to sets of measure zero. As one partial result towards these problems, it was shown by Paulhus that squares of sidelength $\frac{1}{n}$ for $n \geq 2$ can be packed (not quite perfectly) into a single rectangle of area just slightly larger than $\frac{\pi^2}{6} - 1$, and rectangles of dimensions $\frac{1}{n} \times \frac{1}{n+1}$ for $n \geq 1$ can be packed (again not quite perfectly) into a single square of area just slightly larger than $1$. (Paulhus’s paper had some gaps in it, but these were subsequently repaired by Grzegorek and Januszewski.)
Another direction in which partial progress has been made is to consider instead the problem of packing squares of sidelength $n^{-t}$, $n \geq 1$, perfectly into a square or rectangle of total area $\sum_{n=1}^\infty n^{-2t}$, for some fixed constant $t > 1/2$ (this lower bound on $t$ is needed to make the total area finite), with the aim being to get $t$ as close to $1$ as possible. Prior to this paper, the most recent advance in this direction was by Januszewski and Zielonka last year, who achieved such a packing in the range $1/2 < t \leq 2/3$.
In this paper we are able to get $t$ arbitrarily close to $1$ (which turns out to be a “critical” value of this parameter), but at the expense of deleting the first few tiles:
Theorem 1 If $1/2 < t < 1$, and $n_0$ is sufficiently large depending on $t$, then one can pack squares of sidelength $n^{-t}$, $n \geq n_0$, perfectly into a square of area $\sum_{n=n_0}^\infty n^{-2t}$.
As in previous works, the general strategy is to execute a greedy algorithm, which can be described somewhat incompletely as follows.
- Step 1: Suppose that one has already managed to perfectly pack a square of area $\sum_{n=n_0}^\infty n^{-2t}$ by squares of sidelength $n^{-t}$ for $n_0 \leq n < m$, together with a further finite collection ${\mathcal R}$ of rectangles with disjoint interiors. (Initially, we would have $m = n_0$ and ${\mathcal R}$ consisting of the entire square to be packed, but these parameters will change over the course of the algorithm.)
- Step 2: Amongst all the rectangles in ${\mathcal R}$, locate the rectangle $R$ of the largest width (defined as the shorter of the two sidelengths of $R$).
- Step 3: Pack (as efficiently as one can) squares of sidelength $n^{-t}$ for $m \leq n < m'$ into $R$ for some $m' > m$, and decompose the portion of $R$ not covered by this packing into rectangles $R_1, \dots, R_k$.
- Step 4: Replace $m$ by $m'$, replace ${\mathcal R}$ by $({\mathcal R} \setminus \{R\}) \cup \{R_1,\dots,R_k\}$, and return to Step 1.
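In toy form, the greedy skeleton of Steps 1-4 might be sketched as follows (my own much-simplified illustration: the paper's Step 3 packs far more efficiently, whereas here each step places a single square and splits the leftover into two rectangles, so this version gets stuck quickly):

```python
# A crude version of the greedy algorithm: repeatedly place the next square
# in the rectangle of largest width, then split the uncovered L-shape into
# two rectangles.  Rectangles are (w, h) pairs; t and n0 are illustrative.

t, n0 = 0.6, 10
side = lambda n: n ** (-t)

rects = [(0.5, 0.5)]        # start with one rectangle (area 0.25)
n = n0
while True:
    s = side(n)
    # Step 2: pick the rectangle of largest width (shorter sidelength).
    rects.sort(key=lambda r: min(r), reverse=True)
    w, h = rects[0]
    if min(w, h) < s:       # stuck: no rectangle can hold the next square
        break
    rects.pop(0)
    W, H = max(w, h), min(w, h)
    # Step 3 (crude): cut off the square, leaving two leftover rectangles
    # with disjoint interiors (total area is conserved exactly).
    rects.append((W - s, H))
    rects.append((s, H - s))
    n += 1

packed = n - n0             # number of squares successfully placed
```

Even this crude splitting conserves area exactly at every step, which is the bookkeeping invariant the real algorithm refines.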
The main innovation of this paper is to perform Step 3 somewhat more efficiently than in previous papers.
The above algorithm can get stuck if one reaches a point where one has already packed squares of sidelength $n^{-t}$ for $n_0 \leq n < m$, but all remaining rectangles $R$ in ${\mathcal R}$ have width less than $m^{-t}$, in which case there is no obvious way to fit in the next square. If we let $w(R)$ and $h(R)$ denote the width and height of these rectangles $R$, then the total area of the rectangles must be
$$\sum_{R \in {\mathcal R}} w(R) h(R) = \sum_{n \geq m} n^{-2t} \asymp \frac{m^{1-2t}}{2t-1},$$
and if all the widths $w(R)$ are less than $m^{-t}$, then the total perimeter of these rectangles is at least
$$\sum_{R \in {\mathcal R}} h(R) \geq m^t \sum_{R \in {\mathcal R}} w(R) h(R) \gg \frac{m^{1-t}}{2t-1}.$$
Thus, to ensure that there is at least one rectangle of width at least $m^{-t}$, it would be enough to have the perimeter bound
$$\sum_{R \in {\mathcal R}} h(R) \leq \frac{c}{2t-1} m^{1-t}$$
for a sufficiently small constant $c > 0$. It is here that we now see the critical nature of the exponent $t=1$: for $t < 1$, the amount of perimeter we are permitted to have in the remaining rectangles increases as one progresses with the packing, but for $t=1$ the amount of perimeter one is “budgeted” for stays constant (and for $t > 1$ the situation is even worse, in that the remaining rectangles should steadily decrease in total perimeter).

In comparison, the perimeter of the squares that one has already packed is equal to
$$\sum_{n_0 \leq n < m} 4 n^{-t},$$
which is comparable to $\frac{m^{1-t}}{1-t}$ for large $m$ (with the constants blowing up as $t$ approaches the critical value of $1$). In previous algorithms, the total perimeter of the remainder rectangles ${\mathcal R}$ was basically comparable to the perimeter of the squares already packed, and this is the main reason why the results only worked when $t$ was sufficiently far away from $1$. In my paper, I am able to get the perimeter of ${\mathcal R}$ significantly smaller than the perimeter of the squares already packed, by grouping those squares into lattice-like clusters (of about $M^2$ squares arranged in an $M \times M$ pattern, for a parameter $M$), and sliding the squares in each cluster together to almost entirely eliminate the wasted space between each square, leaving only the space around the cluster as the main source of residual perimeter, which will be comparable to about $M$ sidelengths per cluster, as compared to the total perimeter of the squares in the cluster, which is comparable to $M^2$ sidelengths. This strategy is perhaps easiest to illustrate with a picture, in which squares of slowly decreasing sidelength are packed together with relatively little wasted space:
By choosing the parameter $M$ suitably large (and taking $n_0$ sufficiently large depending on $t$), one can then prove the theorem. (In order to do some technical bookkeeping and to allow one to close an induction in the verification of the algorithm’s correctness, it is convenient to replace the perimeter by a slightly weighted variant involving a small exponent $\delta > 0$, but this is a somewhat artificial device that somewhat obscures the main ideas.)
Asgar Jamneshan and myself have just uploaded to the arXiv our preprint “The inverse theorem for the Gowers uniformity norm on arbitrary finite abelian groups: Fourier-analytic and ergodic approaches“. This paper, which is a companion to another recent paper of ourselves and Or Shalom, studies the inverse theory for the third Gowers uniformity norm
$$\| f \|_{U^3(G)}^8 := {\bf E}_{x, h_1, h_2, h_3 \in G} \partial_{h_1} \partial_{h_2} \partial_{h_3} f(x)$$
on an arbitrary finite abelian group $G$, where $\partial_h f(x) := f(x+h) \overline{f(x)}$ is the multiplicative derivative. Our main result is as follows:
Theorem 1 (Inverse theorem for $U^3(G)$) Let $G$ be a finite abelian group, and let $f: G \to {\bf C}$ be a $1$-bounded function with $\| f \|_{U^3(G)} \geq \eta$ for some $0 < \eta \leq 1/2$. Then:
- (i) (Correlation with locally quadratic phase) There exists a regular Bohr set $B(S,\rho) \subset G$ with $|S| = O(\eta^{-O(1)})$ and $\rho \gg \eta^{O(1)}$, a locally quadratic function $\phi: B(S,\rho) \to {\bf R}/{\bf Z}$, and a function $\xi$ such that
- (ii) (Correlation with nilsequence) There exists an explicit degree two filtered nilmanifold $H/\Lambda$ of dimension $O(\eta^{-O(1)})$, a polynomial map $g: G \to H$, and a Lipschitz function $F: H/\Lambda \to {\bf C}$ of constant $O(\eta^{-O(1)})$ such that
Such a theorem was proven by Ben Green and myself in the case when the order of $G$ was odd, and by Samorodnitsky in the $2$-torsion case $G = {\bf F}_2^n$. In all cases one uses the “higher order Fourier analysis” techniques introduced by Gowers. After some now-standard manipulations (using for instance what is now known as the Balog-Szemerédi-Gowers lemma), one arrives (for arbitrary $G$) at an estimate that is roughly of the form
where $b$ denotes various $1$-bounded functions whose exact values are not too important, and $B$ is a symmetric locally bilinear form. The idea is then to “integrate” this form by expressing it in the form
$$B(h,k) = Q(h+k) - Q(h) - Q(k) \ \ \ \ \ (1)$$
for some locally quadratic $Q$; this then allows us to write the above correlation in terms of $Q$ (after adjusting the functions $b$ suitably), and one can now conclude part (i) of the above theorem using some linear Fourier analysis. Part (ii) follows by encoding locally quadratic phase functions as nilsequences; for this we adapt an algebraic construction of Manners.

So the key step is to obtain a representation of the form (1), possibly after shrinking the Bohr set a little if needed. This has been done in the literature in two ways:
- When the order of $G$ is odd, one has the ability to divide by $2$, and on the Bohr set one can establish (1) with $Q(h) := B(h,h)/2$. (This is similar to how in single variable calculus the function $x^2/2$ is a function whose second derivative is equal to $1$.)
- When $G = {\bf F}_2^n$, then after a change of basis one can take the Bohr set to be ${\bf F}_2^m$ for some $m$, and the bilinear form can be written in coordinates as $B(h,k) = \sum_{i,j} b_{ij} h_i k_j$ for some symmetric coefficients $b_{ij} = b_{ji} \in {\bf F}_2$. The diagonal terms $b_{ii}$ cause a problem, but by subtracting off a rank one form one can write $B$, on the orthogonal complement of a suitable vector, in terms of coefficients $b'_{ij}$ which now vanish on the diagonal: $b'_{ii} = 0$. One can now obtain (1) on this complement by taking $Q(h) := \sum_{i < j} b'_{ij} h_i h_j$.
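The “division by 2” step in the odd-order case is easy to verify concretely (my own illustration on the cyclic group ${\bf Z}/9{\bf Z}$, not code from the paper):

```python
# On Z_N with N odd, 2 is invertible mod N, so Q(x) := B(x,x)/2 "integrates"
# a symmetric bilinear form B in the sense of equation (1):
#   Q(x+y) - Q(x) - Q(y) = B(x,y)  (mod N).

N = 9                                        # odd, so 2 is invertible mod N
inv2 = pow(2, -1, N)                         # the inverse of 2 mod 9 (= 5)
B = lambda x, y: (4 * x * y) % N             # a symmetric bilinear form on Z_9
Q = lambda x: (inv2 * B(x, x)) % N           # the "integrated" quadratic

for x in range(N):
    for y in range(N):
        assert (Q((x + y) % N) - Q(x) - Q(y)) % N == B(x, y)
```

When $N$ is even this division fails, which is exactly why the $2$-torsion case above needs the separate diagonal-removal argument.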
In our paper we can now treat the case of arbitrary finite abelian groups $G$, by means of the following two new ingredients:
- (i) Using some geometry of numbers, we can lift the group $G$ to a larger (possibly infinite, but still finitely generated) abelian group $G'$ with a projection map $\pi: G' \to G$, and find a globally bilinear map $B': G' \times G' \to {\bf R}/{\bf Z}$ on the latter group, such that one has a representation
$$B(\pi(x), \pi(y)) = B'(x, y) \ \ \ \ \ (2)$$
of the locally bilinear form $B$ by the globally bilinear form $B'$ when $x, y$ are close enough to the origin.
- (ii) Using an explicit construction, one can show that every globally bilinear map $B': G' \times G' \to {\bf R}/{\bf Z}$ has a representation of the form (1) for some globally quadratic function $Q': G' \to {\bf R}/{\bf Z}$.
To illustrate (i), consider the Bohr set $\{ x \in {\bf Z}/N{\bf Z}: \| x/N \| < \varepsilon \}$ in ${\bf Z}/N{\bf Z}$ for a small $\varepsilon$ (where $\| \theta \|$ denotes the distance of $\theta$ to the nearest integer), and consider a locally bilinear form of the form $B(x,y) := \alpha x y \hbox{ mod } 1$ for some real number $\alpha$ and all integers $x, y$ in the interval $(-\varepsilon N, \varepsilon N)$ (which we identify with elements of the Bohr set). For generic $\alpha$, this form cannot be extended to a globally bilinear form on ${\bf Z}/N{\bf Z}$; however if one lifts ${\bf Z}/N{\bf Z}$ to the finitely generated abelian group
(with projection map $\pi$) and introduces a suitable globally bilinear form $B'$ on this group by an explicit formula, then one has (2) when $x, y$ lie in a sufficiently small interval around the origin. A similar construction works for higher rank Bohr sets.

To illustrate (ii), the key case turns out to be when $G'$ is a cyclic group, in which case the globally bilinear form will take the form
$$B'(x,y) := \frac{a x y}{N} \hbox{ mod } 1$$
for some integer $a$. One can then check by direct construction that (1) will be obeyed with an explicit choice of globally quadratic $Q'$, regardless of whether $N$ is even or odd. A variant of this construction also works for the integers ${\bf Z}$, and the general case follows from a short calculation verifying that the claim (ii) for any two groups implies the corresponding claim (ii) for the product of those groups.

This concludes the Fourier-analytic proof of Theorem 1. In this paper we also give an ergodic theory proof of (a qualitative version of) Theorem 1(ii), using a correspondence principle argument adapted from this previous paper of Ziegler and myself. Basically, the idea is to randomly generate a dynamical system on the group $G$, by selecting an infinite number of random shifts $g_1, g_2, \ldots \in G$, which induces an action of the infinitely generated free abelian group ${\bf Z}^\omega$ on $G$ by the formula
$$T^{(n_1, n_2, \ldots)} x := x + n_1 g_1 + n_2 g_2 + \ldots$$
(where all but finitely many of the $n_i$ vanish).
Much as the law of large numbers ensures the almost sure convergence of Monte Carlo integration, one can show that this action is almost surely ergodic (after passing to a suitable Furstenberg-type limit where the size of $G$ goes to infinity), and that the dynamical Host-Kra-Gowers seminorms of that system coincide with the combinatorial Gowers norms of the original functions. One is then well placed to apply an inverse theorem for the third Host-Kra-Gowers seminorm for ${\bf Z}^\omega$-actions, which was accomplished in the companion paper to this one. After doing so, one almost gets the desired conclusion of Theorem 1(ii), except that after undoing the application of the Furstenberg correspondence principle, the map $g$ is merely an almost polynomial rather than a polynomial, which roughly speaking means that instead of certain derivatives of $g$ vanishing, they instead are merely very small outside of a small exceptional set. To conclude we need to invoke a “stability of polynomials” result, which at this level of generality was first established by Candela and Szegedy (though we also provide an independent proof here in an appendix), which roughly speaking asserts that every approximate polynomial is close in measure to an actual polynomial. (This general strategy is also employed in the Candela-Szegedy paper, though in the absence of the ergodic inverse theorem input that we rely upon here, the conclusion is weaker in that the filtered nilmanifold is replaced with a general space known as a “CFR nilspace”.)

This transference principle approach seems to work well for the higher step cases (for instance, the stability of polynomials result is known in arbitrary degree); the main difficulty is to establish a suitable higher step inverse theorem in the ergodic theory setting, which we hope to do in future research.
Asgar Jamneshan, Or Shalom, and myself have just uploaded to the arXiv our preprint “The structure of arbitrary Conze–Lesigne systems“. As the title suggests, this paper is devoted to the structural classification of Conze-Lesigne systems, which are a type of measure-preserving system that are “quadratic” or of “complexity two” in a certain technical sense, and are of importance in the theory of multiple recurrence. There are multiple ways to define such systems; here is one. Take a countable abelian group $\Gamma$ acting in a measure-preserving fashion on a probability space $(X, \mu)$, thus each group element $\gamma \in \Gamma$ gives rise to a measure-preserving map $T^\gamma: X \to X$. Define the third Gowers-Host-Kra seminorm $\| f \|_{U^3(X)}$ of a function $f \in L^\infty(X)$ via the formula
$$\| f \|_{U^3(X)}^8 := \lim_{N \to \infty} {\bf E}_{h_1, h_2, h_3 \in \Phi_N} \int_X \prod_{\omega_1, \omega_2, \omega_3 \in \{0,1\}} {\mathcal C}^{\omega_1+\omega_2+\omega_3} f(T^{\omega_1 h_1 + \omega_2 h_2 + \omega_3 h_3} x)\ d\mu(x)$$
where $(\Phi_N)_{N=1}^\infty$ is a Følner sequence for $\Gamma$ and ${\mathcal C}: z \mapsto \overline{z}$ is the complex conjugation map. One can show that this limit exists and is independent of the choice of Følner sequence, and that the $U^3(X)$ seminorm is indeed a seminorm. A Conze-Lesigne system is an ergodic measure-preserving system in which the $U^3(X)$ seminorm is in fact a norm, thus $\| f \|_{U^3(X)} > 0$ whenever $f \in L^\infty(X)$ is non-zero. Informally, this means that when one considers a generic parallelepiped in a Conze–Lesigne system $X$, the location of any vertex of that parallelepiped is more or less determined by the location of the other seven vertices. These are the important systems to understand in order to study “complexity two” patterns, such as arithmetic progressions of length four. While not all systems are Conze-Lesigne systems, it turns out that they always have a maximal factor that is a Conze-Lesigne system, known as the Conze-Lesigne factor or the second Host-Kra-Ziegler factor of the system, and this factor controls all the complexity two recurrence properties of the system.

The analogous theory in complexity one is well understood. Here, one replaces the $U^3(X)$ norm by the $U^2(X)$ norm
$$\| f \|_{U^2(X)}^4 := \lim_{N \to \infty} {\bf E}_{h_1, h_2 \in \Phi_N} \int_X \prod_{\omega_1, \omega_2 \in \{0,1\}} {\mathcal C}^{\omega_1+\omega_2} f(T^{\omega_1 h_1 + \omega_2 h_2} x)\ d\mu(x)$$
and the ergodic systems for which is a norm are called Kronecker systems. These systems are completely classified: a system is Kronecker if and only if it arises from a compact abelian group equipped with Haar probability measure and a translation action for some homomorphism with dense image. Such systems can then be analyzed quite efficiently using the Fourier transform, and this can then be used to satisfactorily analyze “complexity one” patterns, such as length three progressions, in arbitrary systems (or, when translated back to combinatorial settings, in arbitrary dense sets of abelian groups).

We return now to the complexity two setting. The most famous examples of Conze-Lesigne systems are (order two) nilsystems, in which the space is a quotient of a two-step nilpotent Lie group by a lattice (equipped with Haar probability measure), and the action is given by a translation for some group homomorphism . For instance, the Heisenberg -nilsystem
with a shift of the form for two real numbers with linearly independent over , is a Conze-Lesigne system. As the base case of a well known result of Host and Kra, it is shown in fact that all Conze-Lesigne -systems are inverse limits of nilsystems (previous results in this direction were obtained by Conze-Lesigne, Furstenberg-Weiss, and others). Similar results are known for -systems when is finitely generated, thanks to the thesis work of Griesmer (with further proofs by Gutman-Lian and Candela-Szegedy). However, this is not the case once is not finitely generated; as a recent example of Shalom shows, Conze-Lesigne systems need not be the inverse limit of nilsystems in this case.

Our main result is that even in the infinitely generated case, Conze-Lesigne systems are still inverse limits of a slight generalisation of the nilsystem concept, in which is a locally compact Polish group rather than a Lie group:
Theorem 1 (Classification of Conze-Lesigne systems) Let be a countable abelian group, and an ergodic measure-preserving -system. Then is a Conze-Lesigne system if and only if it is the inverse limit of translational systems , where is a nilpotent locally compact Polish group of nilpotency class two, and is a lattice in (and also a lattice in the commutator group ), with equipped with the Haar probability measure and a translation action for some homomorphism .
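Returning for a moment to the complexity-one discussion above: the statement that the U^2-type norm is governed by the Fourier transform can be made concrete on a finite cyclic group, where the fourth power of the U^2 norm equals the sum of fourth powers of the Fourier coefficients. A quick numerical check of this identity (the finite model, the choice of N, and the random test function are ours, purely for illustration):

```python
import numpy as np

def u2_norm_combinatorial(f):
    """Compute ||f||_{U^2}^4 on Z/NZ directly from the parallelepiped average
    E_{x,h,k} f(x) conj(f(x+h)) conj(f(x+k)) f(x+h+k)."""
    N = len(f)
    total = 0.0 + 0.0j
    for x in range(N):
        for h in range(N):
            for k in range(N):
                total += (f[x] * np.conj(f[(x + h) % N])
                          * np.conj(f[(x + k) % N]) * f[(x + h + k) % N])
    return (total / N ** 3).real

def u2_norm_fourier(f):
    """Same quantity via the Fourier identity: the sum of |f_hat(xi)|^4,
    where f_hat(xi) = (1/N) sum_x f(x) e(-x xi / N)."""
    fhat = np.fft.fft(f) / len(f)
    return np.sum(np.abs(fhat) ** 4)

rng = np.random.default_rng(0)
f = rng.normal(size=8) + 1j * rng.normal(size=8)
print(u2_norm_combinatorial(f), u2_norm_fourier(f))  # the two values agree
```

There is no analogous exact Fourier formula at complexity two, which is precisely why the classification of Conze-Lesigne systems is needed.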
In a forthcoming companion paper to this one, Asgar Jamneshan and I will use this theorem to derive an inverse theorem for the Gowers norm for an arbitrary finite abelian group (with no restrictions on the order of , in particular our result handles the case of even and odd in a unified fashion). In principle, having a higher order version of this theorem will similarly allow us to derive inverse theorems for norms for arbitrary and finite abelian ; we hope to investigate this further in future work.
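For context on the inverse theorem mentioned above: the extreme examples of functions with large third Gowers norm are quadratic phases, whose U^3 norm on a cyclic group is as large as possible (since all third discrete derivatives of a quadratic phase vanish). A small numerical illustration (the group size and test functions are our choices):

```python
import numpy as np
from itertools import product

def gowers_u3_8th_power(f):
    """||f||_{U^3}^8 on Z/NZ: the average over x, h1, h2, h3 of the product of f
    over the parallelepiped {x + w . h}, conjugated at vertices of odd weight."""
    N = len(f)
    total = 0.0 + 0.0j
    for x, h1, h2, h3 in product(range(N), repeat=4):
        val = 1.0 + 0.0j
        for w in product((0, 1), repeat=3):
            y = (x + w[0] * h1 + w[1] * h2 + w[2] * h3) % N
            val *= f[y] if sum(w) % 2 == 0 else np.conj(f[y])
        total += val
    return (total / N ** 4).real

N = 16
# Quadratic phase: its third derivatives vanish, so every parallelepiped
# contributes exactly 1 and the normalized 8th power of the norm equals 1.
quadratic_phase = np.exp(2j * np.pi * np.arange(N) ** 2 / N)
rng = np.random.default_rng(1)
random_signs = rng.choice([-1.0, 1.0], size=N)
print(gowers_u3_8th_power(quadratic_phase))  # 1.0
print(gowers_u3_8th_power(random_signs))     # much smaller for a generic function
```

The inverse theorem asserts a converse of sorts: any function with large U^3 norm must correlate with such structured (quadratic/nilpotent) objects.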
We sketch some of the main ideas used to prove the theorem. The existing machinery developed by Conze-Lesigne, Furstenberg-Weiss, Host-Kra, and others allows one to describe an arbitrary Conze-Lesigne system as a group extension , where is a Kronecker system (a rotational system on a compact abelian group and translation action ), is another compact abelian group, and the cocycle is a collection of measurable maps obeying the cocycle equation
for almost all . Furthermore, is of “type two”, which means in this concrete setting that it obeys an additional equation for all and almost all , and some measurable function ; roughly speaking this asserts that is “linear up to coboundaries”. For technical reasons it is also convenient to reduce to the case where is separable. The problem is that the equation (2) is unwieldy to work with. In the model case when the target group is a circle , one can use some Fourier analysis to convert (2) into the more tractable Conze-Lesigne equation for all , all , and almost all , where for each , is a measurable function, and is a homomorphism. (For technical reasons it is often also convenient to enforce that depend in a measurable fashion on ; this can always be achieved, at least when the Conze-Lesigne system is separable, but actually verifying that this is possible requires a certain amount of effort, which we devote an appendix to in our paper.) It is not difficult to see that (3) implies (2) for any group (as long as one has the measurability in mentioned previously), but the converse turns out to fail for some groups , such as solenoid groups (e.g., inverse limits of as ), as was essentially shown by Rudolph. However, in our paper we were able to find a separate argument that also derived the Conze-Lesigne equation in the case of a cyclic group . Putting together the and cases, one can then derive the Conze-Lesigne equation for arbitrary compact abelian Lie groups (as such groups are isomorphic to direct products of finitely many tori and cyclic groups). As has been known for some time (see e.g., this paper of Host and Kra), once one has a Conze-Lesigne equation, one can more or less describe the system as a translational system , where the Host-Kra group is the set of all pairs that solve an equation of the form (3) (with these pairs acting on by the law ), and is the stabiliser of a point in this system.
This then establishes the theorem in the case when is a Lie group, and the general case basically comes from the fact (from Fourier analysis or the Peter-Weyl theorem) that an arbitrary compact abelian group is an inverse limit of Lie groups. (There is a technical issue here in that one has to check that the space of translational system factors of form a directed set in order to have a genuine inverse limit, but this can be dealt with by modifications of the tools mentioned here.)

There is an additional technical issue worth pointing out here (which unfortunately was glossed over in some previous work in the area). Because the cocycle equation (1) and the Conze-Lesigne equation (3) are only valid almost everywhere instead of everywhere, the action of on is technically only a near-action rather than a genuine action, and as such one cannot directly define to be the stabiliser of a point without running into multiple problems. To fix this, one has to pass to a topological model of in which the action becomes continuous, and the stabilizer becomes well defined, although one then has to work a little more to check that the action is still transitive. This can be done via Gelfand duality; we proceed using a mixture of a construction from this book of Host and Kra, and the machinery in this recent paper of Asgar and myself.
Now we discuss how to establish the Conze-Lesigne equation (3) in the cyclic group case . As this group embeds into the torus , it is easy to use existing methods to obtain (3) but with the homomorphism and the function taking values in rather than in . The main task is then to fix up the homomorphism so that it takes values in , that is to say that vanishes. This only needs to be done locally near the origin, because the claim is easy when lies in the dense subgroup of , and also because the claim can be shown to be additive in . Near the origin one can leverage the Steinhaus lemma to make depend linearly (or more precisely, homomorphically) on , and because the cocycle already takes values in , vanishes and must be an eigenvalue of the system . But as was assumed to be separable, there are only countably many eigenvalues, and by another application of Steinhaus and linearity one can then make vanish on an open neighborhood of the identity, giving the claim.
Joni Teräväinen and I have just uploaded to the arXiv our preprint “The Hardy–Littlewood–Chowla conjecture in the presence of a Siegel zero“. This paper is a development of the theme that certain conjectures in analytic number theory become easier if one makes the hypothesis that Siegel zeroes exist; this places one in a presumably “illusory” universe, since the widely believed Generalised Riemann Hypothesis (GRH) precludes the existence of such zeroes, yet this illusory universe seems remarkably self-consistent and notoriously impossible to eliminate from one’s analysis.
For the purposes of this paper, a Siegel zero is a zero of a Dirichlet -function corresponding to a primitive quadratic character of some conductor , which is close to in the sense that
for some large (which we will call the quality) of the Siegel zero. The significance of these zeroes is that they force the Möbius function and the Liouville function to “pretend” to be like the exceptional character for primes of magnitude comparable to . Indeed, if we define an exceptional prime to be a prime in which , then very few primes near will be exceptional; in our paper we use some elementary arguments to establish the bounds for any and , where the sum is over exceptional primes in the indicated range ; this bound is non-trivial for as large as . (See Section 1 of this blog post for some variants of this argument, which were inspired by work of Heath-Brown.) There is also a companion bound (somewhat weaker) that covers a range of a little bit below .

One of the early influential results in this area was the following result of Heath-Brown, which I previously blogged about here:
Theorem 1 (Hardy-Littlewood assuming Siegel zero) Let be a fixed natural number. Suppose one has a Siegel zero associated to some conductor . Then we have for all , where is the von Mangoldt function and is the singular series
In particular, Heath-Brown showed that if there are infinitely many Siegel zeroes, then there are also infinitely many twin primes, with the correct asymptotic predicted by the Hardy-Littlewood prime tuple conjecture at infinitely many scales.
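For orientation, the Hardy-Littlewood prediction for twin primes is already easy to test unconditionally at small scales (without any Siegel zero): the number of twin prime pairs up to x should be roughly 2 C_2 x / log^2 x to leading order, where C_2 ≈ 0.66016 is the twin prime constant (the logarithmic-integral version of the prediction is more accurate, which accounts for the modest discrepancy below). A rough numerical sketch, with the cutoff 10^5 our own choice:

```python
from math import log

def primes_up_to(n):
    """Boolean primality table up to n via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return sieve

x = 10 ** 5
is_prime = primes_up_to(x + 2)
twin_count = sum(1 for p in range(3, x) if is_prime[p] and is_prime[p + 2])

C2 = 0.6601618158  # twin prime constant, the product over p > 2 of 1 - 1/(p-1)^2
prediction = 2 * C2 * x / log(x) ** 2
print(twin_count, round(prediction))  # same order of magnitude
```

Heath-Brown's theorem asserts that a sequence of Siegel zeroes would force agreement with this prediction at infinitely many scales.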
Very recently, Chinis established an analogous result for the Chowla conjecture (building upon earlier work of Germán and Katai):
Theorem 2 (Chowla assuming Siegel zero) Let be distinct fixed natural numbers. Suppose one has a Siegel zero associated to some conductor . Then one has in the range , where is the Liouville function.
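Numerically, and again unconditionally, the two-point correlation of the Liouville function is already small at modest scales, consistent with the Chowla conjecture that these theorems address; the shift 2 and the cutoff below are our choices for illustration:

```python
def liouville_up_to(limit):
    """Liouville function lambda(n) for all n <= limit, computed with a
    smallest-prime-factor sieve: lambda(n) = -lambda(n / spf(n))."""
    spf = list(range(limit + 1))  # smallest prime factor of each n
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (limit + 1)
    lam[1] = 1
    for n in range(2, limit + 1):
        lam[n] = -lam[n // spf[n]]
    return lam

x = 10 ** 5
lam = liouville_up_to(x + 2)
corr = sum(lam[n] * lam[n + 2] for n in range(1, x + 1)) / x
print(corr)  # close to zero, consistent with the Chowla conjecture
```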
In our paper we unify these results and also improve the quantitative estimates and range of :
Theorem 3 (Hardy-Littlewood-Chowla assuming Siegel zero) Let be distinct fixed natural numbers with . Suppose one has a Siegel zero associated to some conductor . Then one has for any fixed .
Our argument proceeds by a series of steps in which we replace and by more complicated looking, but also more tractable, approximations, until the correlation is one that can be computed in a tedious but straightforward fashion by known techniques. More precisely, the steps are as follows:
- (i) Replace the Liouville function with an approximant , which is a completely multiplicative function that agrees with at small primes and agrees with at large primes.
- (ii) Replace the von Mangoldt function with an approximant , which is the Dirichlet convolution multiplied by a Selberg sieve weight to essentially restrict that convolution to almost primes.
- (iii) Replace with a more complicated truncation which has the structure of a “Type I sum”, and which agrees with on numbers that have a “typical” factorization.
- (iv) Replace the approximant with a more complicated approximant which has the structure of a “Type I sum”.
- (v) Now that all terms in the correlation have been replaced with tractable Type I sums, use standard Euler product calculations and Fourier analysis, similar in spirit to the proof of the pseudorandomness of the Selberg sieve majorant for the primes in this paper of Ben Green and myself, to evaluate the correlation to high accuracy.
Steps (i), (ii) proceed mainly through estimates such as (1) and standard sieve theory bounds. Step (iii) is based primarily on estimates on the number of smooth numbers of a certain size.
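A toy version of the approximant in step (i) can be written down explicitly as a completely multiplicative function built from a factorization. Everything concrete below is our own illustrative choice, not the paper's actual construction: we use the quadratic character mod 3 as a stand-in for the exceptional character, a crude threshold z separating "small" from "large" primes, and one possible reading of which function is used at which range of primes:

```python
def prime_factors(n):
    """Return the prime factorization of n as a list of (p, exponent) pairs."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def chi3(n):
    """Quadratic character mod 3 (a stand-in for the exceptional character)."""
    return {0: 0, 1: 1, 2: -1}[n % 3]

def liouville(n):
    """Liouville function: (-1)^(number of prime factors with multiplicity)."""
    return (-1) ** sum(e for _, e in prime_factors(n))

def liouville_approximant(n, z=10):
    """Completely multiplicative approximant: agrees with the Liouville
    function at primes p <= z and with the character at primes p > z."""
    value = 1
    for p, e in prime_factors(n):
        value *= (-1) ** e if p <= z else chi3(p) ** e
    return value

print([liouville(n) for n in range(1, 11)])
print([liouville_approximant(n) for n in range(1, 11)])  # identical on this range
print(liouville(13), liouville_approximant(13))          # first disagreement
```

The point of a Siegel zero is precisely that such an approximant disagrees with the Liouville function at very few primes, so replacing one by the other costs little.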
The restriction in our main theorem is needed only to execute step (iv) of this strategy. Roughly speaking, the Siegel approximant to is a twisted, sieved version of the divisor function , and the types of correlation one is faced with at the start of step (iv) are a more complicated version of the divisor correlation sum
For this sum can be easily controlled by the Dirichlet hyperbola method. For one needs the fact that has a level of distribution greater than ; in fact Kloosterman sum bounds give a level of distribution of , a folklore fact that seems to have first been observed by Linnik and Selberg. We use a (notationally more complicated) version of this argument to treat the sums arising in (iv) for . Unfortunately for there are no known techniques to unconditionally obtain asymptotics, even for the model sum , although we do at least have fairly convincing conjectures as to what the asymptotics should be. Because of this, it seems unlikely that one will be able to relax the hypothesis in our main theorem at our current level of understanding of analytic number theory.

Step (v) is a tedious but straightforward sieve theoretic computation, similar in many ways to the correlation estimates of Goldston and Yildirim used in their work on small gaps between primes (as discussed for instance here), and then also used by Ben Green and myself to locate arithmetic progressions in primes.
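The hyperbola method mentioned above is easiest to see in its simplest incarnation, the plain divisor sum: counting lattice points (a, b) with ab ≤ x by symmetry about the line a = b gives the identity sum_{n ≤ x} d(n) = 2 sum_{a ≤ sqrt(x)} floor(x/a) − floor(sqrt(x))^2, reducing a length-x computation to a length-sqrt(x) one. A quick sanity check (the cutoff is ours):

```python
from math import isqrt

def divisor_sum_naive(x):
    """Sum of d(n) over n <= x, by summing over each divisor a the
    number floor(x/a) of its multiples up to x."""
    return sum(x // a for a in range(1, x + 1))

def divisor_sum_hyperbola(x):
    """Same sum via the Dirichlet hyperbola method: count pairs (a, b)
    with ab <= x, splitting the count at a, b <= sqrt(x)."""
    s = isqrt(x)
    return 2 * sum(x // a for a in range(1, s + 1)) - s * s

print(divisor_sum_naive(100), divisor_sum_hyperbola(100))  # both 482
```

The shifted correlation sums in step (iv) require a genuinely deeper input (Kloosterman sum bounds), but the basic divide-the-hyperbola geometry is the same.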
Rachel Greenfeld and I have just uploaded to the arXiv our preprint “Undecidable translational tilings with only two tiles, or one nonabelian tile“. This paper studies the following question: given a finitely generated group , a (periodic) subset of , and finite sets in , is it possible to tile by translations of the tiles ? That is to say, is there a solution to the (translational) tiling equation
for some subsets of , where denotes the set of sums if the sums are all disjoint (and is undefined otherwise), and denotes disjoint union. (One can also write the tiling equation in the language of convolutions as .)

A bit more specifically, the paper studies the decidability of the above question. There are two slightly different types of decidability one could consider here:
- Logical decidability. For a given , one can ask whether the solvability of the tiling equation (1) is provable or disprovable in ZFC (where we encode all the data by appropriate constructions in ZFC). If this is the case we say that the tiling equation (1) (or more precisely, the solvability of this equation) is logically decidable, otherwise it is logically undecidable.
- Algorithmic decidability. For data in some specified class (and encoded somehow as binary strings), one can ask whether the solvability of the tiling equation (1) can be correctly determined for all choices of data in this class by the output of some Turing machine that takes the data as input (encoded as a binary string) and halts in finite time, returning either YES if the equation can be solved or NO otherwise. If this is the case, we say the tiling problem of solving (1) for data in the given class is algorithmically decidable, otherwise it is algorithmically undecidable.
Note that the notion of logical decidability is “pointwise” in the sense that it pertains to a single choice of data , whereas the notion of algorithmic decidability pertains instead to classes of data, and is only interesting when this class is infinite. Indeed, any tiling problem with a finite class of data is trivially decidable because one could simply code a Turing machine that is basically a lookup table that returns the correct answer for each choice of data in the class. (This is akin to how a student with a good memory could pass any exam if the questions are drawn from a finite list, merely by memorising an answer key for that list of questions.)
The two notions are related as follows: if a tiling problem (1) is algorithmically undecidable for some class of data, then the tiling equation must be logically undecidable for at least one choice of data for this class. For if this is not the case, one could algorithmically decide the tiling problem by searching for proofs or disproofs that the equation (1) is solvable for a given choice of data; the logical decidability of all such solvability questions will ensure that this algorithm always terminates in finite time.
One can use the Gödel completeness theorem to interpret logical decidability in terms of universes (also known as structures or models) of ZFC. In addition to the “standard” universe of sets that we believe satisfies the axioms of ZFC, there are also other “nonstandard” universes that also obey the axioms of ZFC. If the solvability of a tiling equation (1) is logically undecidable, this means that such a tiling exists in some universes of ZFC, but not in others.
(To continue the exam analogy, we thus see that a yes-no exam question is logically undecidable if the answer to the question is yes in some parallel universes, but not in others. A course syllabus is algorithmically undecidable if there is no way to prepare for the final exam for the course in a way that guarantees a perfect score (in the standard universe).)
Questions of decidability are also related to the notion of aperiodicity. For a given , a tiling equation (1) is said to be aperiodic if the equation (1) is solvable (in the standard universe of ZFC), but none of the solutions (in that universe) are completely periodic (i.e., there are no solutions where all of the are periodic). Perhaps the most well-known example of an aperiodic tiling (in the context of , and using rotations as well as translations) comes from the Penrose tilings, but there are many others besides.
It was (essentially) observed by Hao Wang in the 1960s that if a tiling equation is logically undecidable, then it must necessarily be aperiodic. Indeed, if a tiling equation fails to be aperiodic, then (in the standard universe) either there is a periodic tiling, or there are no tilings whatsoever. In the former case, the periodic tiling can be used to give a finite proof that the tiling equation is solvable; in the latter case, the compactness theorem implies that there is some finite fragment of that is not compatible with being tiled by , and this provides a finite proof that the tiling equation is unsolvable. Thus in either case the tiling equation is logically decidable.
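A toy version of the finitary searches underlying Wang's observation: deciding whether a given finite region can be exactly partitioned by translates of a single tile is a finite (if potentially expensive) backtracking search, and an exhaustive failure of that search is exactly the kind of finite disproof the compactness argument produces. A minimal checker, with the domino example our own:

```python
def can_tile(region, tile):
    """Decide whether the finite set `region` (a set of (x, y) cells) can be
    exactly partitioned by translates of `tile` (a set of (x, y) offsets)."""
    region = set(region)
    if not region:
        return True
    # Branch on the lexicographically smallest uncovered cell: any tiling must
    # cover it by some translate, so try every translate containing it.
    cell = min(region)
    for (fx, fy) in tile:
        t = (cell[0] - fx, cell[1] - fy)
        translate = {(gx + t[0], gy + t[1]) for (gx, gy) in tile}
        if translate <= region:
            if can_tile(region - translate, tile):
                return True
    return False

domino = {(0, 0), (1, 0)}
square = {(x, y) for x in range(2) for y in range(2)}
odd_square = {(x, y) for x in range(3) for y in range(3)}
print(can_tile(square, domino), can_tile(odd_square, domino))  # True False
```

Of course, the content of the undecidability results is that no single such finite search can settle the tiling question uniformly over all data.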
This observation of Wang clarifies somewhat how logically undecidable tiling equations behave in the various universes of ZFC. In the standard universe, tilings exist, but none of them will be periodic. In nonstandard universes, tilings may or may not exist, and the tilings that do exist may be periodic (albeit with a nonstandard period); but there must be at least one universe in which no tiling exists at all.
In one dimension when (or more generally with a finite group), a simple pigeonholing argument shows that no tiling equations are aperiodic, and hence all tiling equations are decidable. However the situation changes in two dimensions. In 1966, Berger (a student of Wang) famously showed that there exist tiling equations (1) in the discrete plane that are aperiodic, or even logically undecidable; in fact he showed that the tiling problem in this case (with arbitrary choices of data ) was algorithmically undecidable. (Strictly speaking, Berger established this for a variant of the tiling problem known as the domino problem, but later work of Golomb showed that the domino problem could be easily encoded within the tiling problem.) This was accomplished by encoding the halting problem for Turing machines into the tiling problem (or domino problem); the former is well known to be algorithmically undecidable (and thus to have logically undecidable instances), and so the latter is as well. However, the number of tiles required for Berger’s construction was quite large: his construction of an aperiodic tiling required tiles, and his construction of a logically undecidable tiling required an even larger (and not explicitly specified) collection of tiles. Subsequent work by many authors did reduce the number of tiles required; in the setting, the current world record for the fewest number of tiles in an aperiodic tiling is (due to Ammann, Grünbaum, and Shephard) and for a logically undecidable tiling is (due to Ollinger). On the other hand, it is conjectured (see Grünbaum-Shephard and Lagarias-Wang) that one cannot lower all the way to :
Conjecture 1 (Periodic tiling conjecture) If is a periodic subset of a finitely generated abelian group , and is a finite subset of , then the tiling equation is not aperiodic.
This conjecture is known to be true in two dimensions (by work of Bhattacharya when , and more recently by us when ), but remains open in higher dimensions. By the preceding discussion, the conjecture implies that every tiling equation with a single tile is logically decidable, and the problem of whether a given periodic set can be tiled by a single tile is algorithmically decidable.
In this paper we show on the other hand that aperiodic and undecidable tilings exist when , at least if one is permitted to enlarge the group a bit:
Theorem 2 (Logically undecidable tilings)
- (i) There exists a group of the form for some finite abelian , a subset of , and finite sets such that the tiling equation is logically undecidable (and hence also aperiodic).
- (ii) There exists a dimension , a periodic subset of , and finite sets such that tiling equation is logically undecidable (and hence also aperiodic).
- (iii) There exists a non-abelian finite group (with the group law still written additively), a subset of , and a finite set such that the nonabelian tiling equation is logically undecidable (and hence also aperiodic).
We also have algorithmic versions of this theorem. For instance, the algorithmic version of (i) is that the problem of determining solvability of the tiling equation for a given choice of finite abelian group , subset of , and finite sets is algorithmically undecidable. Similarly for (ii), (iii).
This result (together with a negative result discussed below) suggests to us that there is a significant qualitative difference in the theory of tiling by a single (abelian) tile, and the theory of tiling with multiple tiles (or one non-abelian tile). (The positive results on the periodic tiling conjecture certainly rely heavily on the fact that there is only one tile; in particular, there is a “dilation lemma”, only available in this setting, that is of key importance in the two dimensional theory.) It would be nice to eliminate the group from (i) (or to set in (ii)), but I think this would require a fairly significant modification of our methods.
Like many other undecidability results, the proof of Theorem 2 proceeds by a sequence of reductions, in which the undecidability of one problem is shown to follow from the undecidability of another, more “expressive” problem that can be encoded inside the original problem, until one reaches a problem that is so expressive that it encodes a problem already known to be undecidable. Indeed, all three undecidability results are ultimately obtained from Berger’s undecidability result on the domino problem.
The first step in increasing expressiveness is to observe that the undecidability of a single tiling equation follows from the undecidability of a system of tiling equations. More precisely, suppose we have non-empty finite subsets of a finitely generated group for and , as well as periodic sets of for , such that it is logically undecidable whether the system of tiling equations
for has no solution in . Then, for any , we can “stack” these equations into a single tiling equation in the larger group , and specifically to the equation where and It is a routine exercise to check that the system of equations (2) admits a solution in if and only if the single equation (3) admits a solution in . Thus, to prove the undecidability of a single equation of the form (3) it suffices to establish undecidability of a system of the form (2); note how the freedom to select the auxiliary group is important here.

We view systems of the form (2) as belonging to a kind of “language” in which each equation in the system is a “sentence” in the language imposing additional constraints on a tiling. One can now pick and choose various sentences in this language to try to encode various interesting problems. For instance, one can encode the concept of a function taking values in a finite group as a single tiling equation
since the solutions to this equation are precisely the graphs of a function . By adding more tiling equations to this equation to form a larger system, we can start imposing additional constraints on this function . For instance, if is a coset of some subgroup of , we can impose the additional equation to impose the additional constraint that for all , if we desire. If happens to contain two distinct elements , and , then the additional equation imposes the additional constraints that for all , and additionally that for all .This begins to resemble the equations that come up in the domino problem. Here one has a finite set of Wang tiles – unit squares where each of the four sides is colored with a color (corresponding to the four cardinal directions North, South, East, and West) from some finite set of colors. The domino problem is then to tile the plane with copies of these tiles in such a way that adjacent sides match. In terms of equations, one is seeking to find functions obeying the pointwise constraint
for all where is the set of colors associated to the set of Wang tiles being used, and the matching constraints for all . As it turns out, the pointwise constraint (7) can be encoded by tiling equations that are fancier versions of (4), (5), (6) that involve only one unknown tiling set , but in order to encode the matching constraints (8) we were forced to introduce a second tile (or work with nonabelian tiling equations). This appears to be an inherent feature of the method, since we found a partial rigidity result for tilings of one tile in one dimension that obstructs this encoding strategy from working when one only has one tile available. The result is as follows:
Proposition 3 (Swapping property) Consider the solutions to a tiling equation in a one-dimensional group (with a finite abelian group, finite, and periodic). Suppose there are two solutions to this equation that agree on the left in the sense that . For any function , define the “swap” of and to be the set . Then also solves the equation (9).
One can think of and as “genes” with “nucleotides” , at each position , and is a new gene formed by choosing one of the nucleotides from the “parent” genes , at each position. The above proposition then says that the solutions to the equation (9) must be closed under “genetic transfer” among any pair of genes that agree on the left. This seems to present an obstruction to trying to encode equations such as
for two functions (say), which is a toy version of the matching constraint (8), since the class of solutions to this equation turns out not to obey this swapping property. On the other hand, it is easy to encode such equations using two tiles instead of one, and an elaboration of this construction is used to prove our main theorem.

Louis Esser, Burt Totaro, Chengxi Wang, and myself have just uploaded to the arXiv our preprint “Varieties of general type with many vanishing plurigenera, and optimal sine and sawtooth inequalities“. This is an interdisciplinary paper that arose because in order to optimize a certain algebraic geometry construction it became necessary to solve a purely analytic question which, while simple, did not seem to have been previously studied in the literature. We were able to solve the analytic question exactly and thus fully optimize the algebraic geometry construction, though the analytic question may have some independent interest.
Let us first discuss the algebraic geometry application. Given a smooth complex -dimensional projective variety there is a standard line bundle attached to it, known as the canonical line bundle; -forms on the variety become sections of this bundle. The bundle may not actually admit global sections; that is to say, the dimension of global sections may vanish. But as one raises the canonical line bundle to higher and higher powers to form further line bundles , the number of global sections tends to increase; in particular, the dimension of global sections (known as the plurigenus) always obeys an asymptotic of the form
as for some non-negative number , which is called the volume of the variety ; this is an invariant that reveals some information about the birational geometry of . For instance, if the canonical line bundle is ample (or more generally, nef), this volume is equal to the intersection number (roughly speaking, the number of common zeroes of generic sections of the canonical line bundle); this is a special case of the asymptotic Riemann-Roch theorem. In particular, the volume is a natural number in this case. However, it is possible for the volume to also be fractional in nature. One can then ask: how small can the volume get without vanishing entirely? (By definition, varieties with non-vanishing volume are known as varieties of general type.)

It follows from a deep result obtained independently by Hacon–McKernan, Takayama and Tsuji that there is a uniform lower bound for the volume of all -dimensional projective varieties of general type. However, the precise lower bound is not known, and the current paper is a contribution towards probing this bound by constructing varieties of particularly small volume in the high-dimensional limit . Prior to this paper, the best such constructions of -dimensional varieties basically had exponentially small volume, with a construction of volume at most given by Ballico–Pignatelli–Tasin, and an improved construction with a volume bound of given by Totaro and Wang. In this paper, we obtain a variant construction with the somewhat smaller volume bound of ; the method also gives comparable bounds for some other related algebraic geometry statistics, such as the largest for which the pluricanonical map associated to the linear system is not a birational embedding into projective space.
The space is constructed by taking a general hypersurface of a certain degree in a weighted projective space and resolving the singularities. These varieties are relatively tractable to work with, as one can use standard algebraic geometry tools (such as the Reid–Tai inequality) to provide sufficient conditions to guarantee that the hypersurface has only canonical singularities and that the canonical bundle is a reflexive sheaf, which allows one to calculate the volume exactly in terms of the degree and weights . The problem then reduces to optimizing the resulting volume given the constraints needed for the above-mentioned sufficient conditions to hold. After working with a particular choice of weights (which consist of products of mostly consecutive primes, with each product occurring with suitable multiplicities ), the problem eventually boils down to trying to minimize the total multiplicity , subject to certain congruence conditions and other bounds on the . Using crude bounds on the eventually leads to a construction with volume at most , but by taking advantage of the ability to “dilate” the congruence conditions and optimizing over all dilations, we are able to improve the constant to .
Now it is time to turn to the analytic side of the paper by describing the optimization problem that we solve. We consider the sawtooth function , with defined as the unique real number in that is equal to mod . We consider a (Borel) probability measure on the real line, and then compute the average value of this sawtooth function
as well as various dilates of this expectation. Since is bounded above by , we certainly have the trivial bound . However, this bound is not very sharp. For instance, the only way in which could attain the value of is if the probability measure was supported on half-integers, but in that case would vanish. For the algebraic geometry application discussed above one is then led to the following question: for a given choice of , what is the best upper bound on the quantity that holds for all probability measures ?

If one considers the deterministic case in which is a Dirac mass supported at some real number , then the Dirichlet approximation theorem tells us that there is such that is within of an integer, so we have
in this case, and this bound is sharp for deterministic measures . Thus we have However, both of these bounds turn out to be far from the truth, and the optimal value of is comparable to . In fact we were able to compute this quantity precisely:
Theorem 1 (Optimal bound for sawtooth inequality) Let . In particular, we have as .
- (i) If for some natural number , then .
- (ii) If for some natural number , then .
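Although the exact constants require the duality argument described next, the sup–min quantity behind Theorem 1 can be explored numerically by restricting the probability measure to a finite grid and solving the resulting linear program. The sketch below is purely illustrative and not the computation from the paper: the grid, the nearest-integer convention for the sawtooth, and the use of SciPy's LP solver are all choices made here, and the discretization only yields approximate values.

```python
import numpy as np
from scipy.optimize import linprog

def sawtooth(x):
    # Representative of x mod 1 nearest to zero (a common sawtooth convention).
    return x - np.round(x)

def best_constant(n, grid_size=400):
    # Restrict the probability measure mu to the grid points j/grid_size, and solve
    #   maximize t  subject to  t <= E_mu[sawtooth(k x)] for k = 1..n,
    # a finite linear program approximating the sup-min over all measures.
    xs = np.arange(grid_size) / grid_size
    S = np.array([sawtooth(k * xs) for k in range(1, n + 1)])  # (n, grid_size)
    c = np.zeros(grid_size + 1)
    c[-1] = -1.0                                # variables (p, t); minimize -t
    A_ub = np.hstack([-S, np.ones((n, 1))])     # rows encode t - (S @ p)_k <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, grid_size + 1))
    A_eq[0, -1] = 0.0                           # sum_j p_j = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * grid_size + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return -res.fun

for n in (1, 2, 3, 4):
    print(n, best_constant(n))
```

For n = 1 the grid contains the point 1/2, where the sawtooth attains its maximum, so the program returns the trivial extremal value 1/2; adding more dilation constraints can only decrease the optimum, and the computed values decay in n accordingly.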
We establish this bound through duality. Indeed, suppose we could find non-negative coefficients such that one had the pointwise bound
for all real numbers . Integrating this against an arbitrary probability measure , we would conclude and hence Conversely, one can find lower bounds on by selecting suitable candidate measures and computing the means . The theory of linear programming duality tells us that this method must give us the optimal bound, but one has to locate the optimal measure and optimal weights . This we were able to do by first doing some extensive numerics to discover these weights and measures for small values of , and then doing some educated guesswork to extrapolate these examples to the general case, and then to verify the required inequalities. In case (i) the situation is particularly simple, as one can take to be the discrete measure that assigns a probability to the numbers and the remaining probability of to , while the optimal weighted inequality (1) turns out to be which is easily proven by telescoping series. However, the general case turned out to be significantly trickier to work out, and the verification of the optimal inequality required a delicate case analysis (reflecting the fact that equality was attained in this inequality in a large number of places). After solving the sawtooth problem, we became interested in the analogous question for the sine function, that is to say what is the best bound for the inequality
The left-hand side is the smallest imaginary part of the first Fourier coefficients of . To our knowledge this quantity has not previously been studied in the Fourier analysis literature. By adopting a similar approach as for the sawtooth problem, we were able to compute this quantity exactly also:
Theorem 2 For any , one has In particular,
Interestingly, a closely related cotangent sum recently appeared in this MathOverflow post. Verifying the lower bound on boils down to choosing the right test measure ; it turns out that one should pick the probability measure supported on the with odd, with probability proportional to , and the lower bound verification eventually follows from a classical identity
for , first posed by Eisenstein in 1844 and proved by Stern in 1861. The upper bound arises from establishing the trigonometric inequality for all real numbers , which to our knowledge is new; the left-hand side has a Fourier-analytic interpretation as convolving the Fejér kernel with a certain discretized square wave function, and this interpretation is used heavily in our proof of the inequality.

Joni Teräväinen and myself have just uploaded to the arXiv our preprint “Quantitative bounds for Gowers uniformity of the Möbius and von Mangoldt functions“. This paper makes quantitative the Gowers uniformity estimates on the Möbius function and the von Mangoldt function .
To discuss the results we first discuss the situation of the Möbius function, which is technically simpler in some (though not all) ways. We assume familiarity with Gowers norms and standard notations around these norms, such as the averaging notation and the exponential notation . The prime number theorem in qualitative form asserts that
as . With the Vinogradov–Korobov error term, the prime number theorem is strengthened to ; we refer to such decay bounds (with type factors) as pseudopolynomial decay. Equivalently, we obtain pseudopolynomial decay of the Gowers seminorm of : As is well known, the Riemann hypothesis would be equivalent to an upgrade of this estimate to polynomial decay of the form for any . Once one restricts to arithmetic progressions, the situation gets worse: the Siegel-Walfisz theorem gives the bound
for any residue class and any , but with the catch that the implied constant is ineffective in . This ineffectivity cannot be removed without further progress on the notorious Siegel zero problem. In 1937, Davenport was able to show the discorrelation estimate
for any uniformly in , which leads (by standard Fourier arguments) to the Fourier uniformity estimate Again, the implied constant is ineffective. If one insists on effective constants, the best bound currently available is for some small effective constant . For the situation with the norm, the previously known results were much weaker. Ben Green and I showed that
uniformly for any , any degree two (filtered) nilmanifold , any polynomial sequence , and any Lipschitz function ; again, the implied constants are ineffective. On the other hand, in a separate paper of Ben Green and myself, we established the following inverse theorem: if for instance we knew that for some , then there exists a degree two nilmanifold of dimension , complexity , a polynomial sequence , and Lipschitz function of Lipschitz constant such that Putting the two assertions together and comparing all the dependencies on parameters, one can establish the qualitative decay bound However, the decay rate produced by this argument is completely ineffective: obtaining a bound on when this quantity dips below a given threshold depends on the implied constant in (3) for some whose dimension depends on , and the dependence on obtained in this fashion is ineffective in the face of a Siegel zero. For higher norms , the situation is even worse, because the quantitative inverse theory for these norms is poorer, and indeed it was only with the recent work of Manners that any such bound became available at all (at least for ). Basically, Manners establishes if
then there exists a degree nilmanifold of dimension , complexity , a polynomial sequence , and Lipschitz function of Lipschitz constant such that (We allow all implied constants to depend on .) Meanwhile, the bound (3) was extended to arbitrary nilmanifolds by Ben and myself. Again, the two results when concatenated give the qualitative decay but the decay rate is completely ineffective. Our first result gives an effective decay bound:
Theorem 1 For any , we have for some . The implied constants are effective.
This is off by a logarithm from the best effective bound (2) in the case. In the case there is some hope to remove this logarithm based on the improved quantitative inverse theory currently available in this case, but there is a technical obstruction to doing so which we will discuss later in this post. For the above bound is the best one could hope to achieve purely using the quantitative inverse theory of Manners.
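As a concrete (and of course vastly weaker) illustration of the Möbius cancellation being quantified here, one can sieve the Möbius function and watch its partial sums decay numerically; such experiments say nothing about the delicate effective rates at stake in the theorem. The cutoff of one million and the sieve below are our own illustrative choices:

```python
import numpy as np

def mobius_sieve(N):
    # Compute mu(n) for 0 <= n <= N: flip the sign once for each prime factor,
    # and zero out anything divisible by the square of a prime.
    mu = np.ones(N + 1, dtype=np.int64)
    is_prime = np.ones(N + 1, dtype=bool)
    for p in range(2, N + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False
            mu[p::p] *= -1
            if p * p <= N:
                mu[p * p::p * p] = 0
    mu[0] = 0
    return mu

N = 10**6
mu = mobius_sieve(N)
mertens = np.cumsum(mu)  # M(x) = sum of mu(n) over n <= x
for x in (10**3, 10**4, 10**5, 10**6):
    print(x, mertens[x], mertens[x] / x)
```

The normalized sums M(x)/x are already tiny at these scales, far below the trivial bound of 1, consistent with the qualitative prime number theorem.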
We have analogues of all the above results for the von Mangoldt function . Here a complication arises in that does not have mean close to zero, and one has to subtract off some suitable approximant to before one would expect good Gowers norm bounds. For the prime number theorem one can just use the approximant , giving
but even for the prime number theorem in arithmetic progressions one needs a more accurate approximant. In our paper it is convenient to use the “Cramér approximant” where and is the quasipolynomial quantity Then one can show from the Siegel-Walfisz theorem and standard bilinear sum methods that and for all and (with an ineffective dependence on ), again regaining effectivity if is replaced by a sufficiently small constant . All the previously stated discorrelation and Gowers uniformity results for then have analogues for , and our main result is similarly analogous:
Theorem 2 For any , we have for some . The implied constants are effective.
By standard methods, this result also gives quantitative asymptotics for counting solutions to various systems of linear equations in primes, with error terms that gain a factor of with respect to the main term.
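To get a feel for the Cramér approximant, one can check numerically that it has the same average as the von Mangoldt function. The sketch below assumes the usual Cramér-type shape W/phi(W) on integers coprime to W, where W is the product of the primes below a cutoff Q; the fixed values Q = 30 and N = 10^5 are illustrative choices only, whereas in the paper Q grows quasipolynomially with the scale.

```python
from math import gcd, log

N = 10**5
Q = 30  # illustrative fixed cutoff; in the paper Q is quasipolynomial in the scale

# Sieve of smallest prime factors up to N, from which Lambda(n) can be read off.
spf = list(range(N + 1))
for p in range(2, int(N**0.5) + 1):
    if spf[p] == p:
        for m in range(p * p, N + 1, p):
            if spf[m] == m:
                spf[m] = p

def von_mangoldt(n):
    # log p if n is a power of the prime p, and 0 otherwise
    if n < 2:
        return 0.0
    p = spf[n]
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0

# W = product of primes below Q; the approximant is W/phi(W) on n coprime to W.
primes_below_Q = [p for p in range(2, Q) if spf[p] == p]
W = 1
phi_W = 1
for p in primes_below_Q:
    W *= p
    phi_W *= p - 1

def lambda_cramer(n):
    return W / phi_W if gcd(n, W) == 1 else 0.0

mean_lambda = sum(von_mangoldt(n) for n in range(1, N + 1)) / N
mean_cramer = sum(lambda_cramer(n) for n in range(1, N + 1)) / N
print(mean_lambda, mean_cramer)
```

Both averages come out close to 1, reflecting the prime number theorem on one side and the density computation phi(W)/W times W/phi(W) on the other.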
We now discuss the methods of proof, focusing first on the case of the Möbius function. Suppose first that there is no “Siegel zero”, by which we mean a quadratic character of some conductor with a zero with for some small absolute constant . In this case the Siegel-Walfisz bound (1) improves to a quasipolynomial bound
To establish Theorem 1 in this case, it suffices by Manners’ inverse theorem to establish the polylogarithmic bound for all degree nilmanifolds of dimension and complexity , all polynomial sequences , and all Lipschitz functions of norm . If the nilmanifold had bounded dimension, then one could repeat the arguments of Ben and myself more or less verbatim to establish this claim from (5), which relied on the quantitative equidistribution theory on nilmanifolds developed in a separate paper of Ben and myself. Unfortunately, in the latter paper the dependence of the quantitative bounds on the dimension was not explicitly given. In an appendix to the current paper, we go through that paper to account for this dependence, showing that all exponents depend at most doubly exponentially on the dimension , which is barely sufficient to handle the dimension of that arises here. Now suppose we have a Siegel zero . In this case the bound (5) will not hold in general, and hence also (6) will not hold either. Here, the usual way out (while still maintaining effective estimates) is to approximate not by , but rather by a more complicated approximant that takes the Siegel zero into account, and in particular is such that one has the (effective) pseudopolynomial bound
for all residue classes . The Siegel approximant to is actually a little bit complicated, and to our knowledge this sort of approximant first appears as late as this 2010 paper of Germán and Kátai. Our version of this approximant is defined as the multiplicative function such that when , and when is coprime to all primes , and is a normalising constant given by the formula (this constant ends up being of size and plays only a minor role in the analysis). This is a rather complicated formula, but it seems to be virtually the only choice of approximant that allows for bounds such as (7) to hold. (This is the one aspect of the problem where the von Mangoldt theory is simpler than the Möbius theory, as in the former one only needs to work with very rough numbers for which one does not need to make any special accommodations for the behavior at small primes when introducing the Siegel correction term.) With this starting point it is then possible to repeat the analysis of my previous papers with Ben and obtain the pseudopolynomial discorrelation bound for as before, which when combined with Manners’ inverse theorem gives the doubly logarithmic bound Meanwhile, a direct sieve-theoretic computation ends up giving the singly logarithmic bound (indeed, there is a good chance that one could improve the bounds even further, though it is not helpful for this current argument to do so). Theorem 1 then follows from the triangle inequality for the Gowers norm. It is interesting that the Siegel approximant seems to play a rather essential role in the proof, even if it is absent in the final statement. We note that this approximant seems to be a useful tool to explore the “illusory world” of the Siegel zero further; see for instance the recent paper of Chinis for some work in this direction. For the analogous problem with the von Mangoldt function (assuming a Siegel zero for sake of discussion), the approximant is simpler; we ended up using
which allows one to state the standard prime number theorem in arithmetic progressions with classical error term and Siegel zero term compactly as Routine modifications of previous arguments also give and The one tricky new step is getting from the discorrelation estimate (8) to the Gowers uniformity estimate One cannot directly apply Manners’ inverse theorem here because and are unbounded. There is a standard tool for getting around this issue, now known as the dense model theorem, which is the standard engine powering the transference principle from theorems about bounded functions to theorems about certain types of unbounded functions. However, the quantitative versions of the dense model theorem in the literature are expensive and would basically weaken the doubly logarithmic gain here to a triply logarithmic one. Instead, we bypass the dense model theorem and directly transfer the inverse theorem for bounded functions to an inverse theorem for unbounded functions by using the densification approach to transference introduced by Conlon, Fox, and Zhao. This technique turns out to be quantitatively quite efficient (the dependencies of the main parameters in the transference are polynomial in nature), and also has the technical advantage of avoiding the somewhat tricky “correlation condition” present in early transference results, which is also not beneficial for quantitative bounds. In principle, the above results can be improved for due to the stronger quantitative inverse theorems in the setting. However, there is a bottleneck that prevents us from achieving this, namely that the equidistribution theory of two-step nilmanifolds has exponents which are exponential in the dimension rather than polynomial in the dimension, and as a consequence we were unable to improve upon the doubly logarithmic results.
Specifically, if one is given a sequence of bracket quadratics such as that fails to be -equidistributed, one would need to establish a nontrivial linear relationship modulo 1 between the (up to errors of ), where the coefficients are of size ; current methods only give coefficient bounds of the form . An old result of Schmidt provides a proof of concept that these sorts of polynomial dependencies of exponents are possible in principle, but actually implementing Schmidt’s methods here seems to be quite a non-trivial task. There is also another possible route to removing a logarithm, which is to strengthen the inverse theorem to make the dimension of the nilmanifold logarithmic in the uniformity parameter rather than polynomial. Again, the Freiman-Bilu theorem (see for instance this paper of Ben and myself) provides a proof of concept that such an improvement in dimension is possible, but some work would be needed to implement it.