An unusual lottery result made the news recently: on October 1, 2022, the PCSO Grand Lotto in the Philippines, which draws six numbers from {1} to {55} at random, managed to draw the numbers {9, 18, 27, 36, 45, 54} (though the balls were actually drawn in the order {9, 45, 36, 27, 18, 54}). In other words, they drew exactly six multiples of nine from {1} to {55}. In addition, a total of {433} tickets were bought with this winning combination, whose owners then had to split the {236} million peso jackpot (about {4} million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system, including from the minority leader of the Senate.

Whenever an event like this happens, journalists often contact mathematicians to ask the question: “What are the odds of this happening?”, and in fact I myself received one such inquiry this time around. This is a number that is not too difficult to compute – in this case, the probability of the lottery producing the six numbers {9, 18, 27, 36, 45, 54} in some order turns out to be {1} in {\binom{55}{6} = 28,989,675} – and such a number is often dutifully provided to such journalists, who in turn report it as some sort of quantitative demonstration of how remarkable the event was.
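
This count is a one-line computation; here is a minimal Python check (the variable names are purely illustrative):

```python
from math import comb

# Number of ways to draw 6 distinct numbers from 1..55, ignoring order.
total = comb(55, 6)
print(total)          # 28989675
print(1 / total)      # about 3.4e-8: the chance of any one fixed combination
```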

But on the previous draw of the same lottery, on September 28, 2022, the unremarkable sequence of numbers {11, 26, 33, 45, 51, 55} were drawn (again in a different order), and no tickets ended up claiming the jackpot. The probability of the lottery producing the six numbers {11, 26, 33, 45, 51, 55} is also {1} in {\binom{55}{6} = 28,989,675} – just as likely or as unlikely as the October 1 numbers {9, 18, 27, 36, 45, 54}. Indeed, the whole point of drawing the numbers randomly is to make each of the {28,989,675} possible outcomes (whether they be “unusual” or “unremarkable”) equally likely. So why is it that the October 1 lottery attracted so much attention, but the September 28 lottery did not?

Part of the explanation surely lies in the unusually large number ({433}) of lottery winners on October 1, but I will set that aspect of the story aside until the end of this post. The more general points that I want to make with these sorts of situations are:

  1. The question “what are the odds of this happening” is often easy to answer mathematically, but it is not the correct question to ask.
  2. The question “what is the probability that an alternative hypothesis is the truth” is (one of) the correct questions to ask, but is very difficult to answer (it involves both mathematical and non-mathematical considerations).
  3. The answer to the first question is one of the quantities needed to calculate the answer to the second, but it is far from the only such quantity. Most of the other quantities involved cannot be calculated exactly.
  4. However, by making some educated guesses, one can still sometimes get a very rough gauge of which events are “more surprising” than others, in that they would lead to relatively higher answers to the second question.

To explain these points it is convenient to adopt the framework of Bayesian probability. In this framework, one imagines that there are competing hypotheses to explain the world, and that one assigns a probability to each such hypothesis representing one’s belief in the truth of that hypothesis. For simplicity, let us assume that there are just two competing hypotheses to be entertained: the null hypothesis {H_0}, and an alternative hypothesis {H_1}. For instance, in our lottery example, the two hypotheses might be:

  • Null hypothesis {H_0}: The lottery is run in a completely fair and random fashion.
  • Alternative hypothesis {H_1}: The lottery is rigged by some corrupt officials for their personal gain.

At any given point in time, a person would have a probability {{\bf P}(H_0)} assigned to the null hypothesis, and a probability {{\bf P}(H_1)} assigned to the alternative hypothesis; in this simplified model where there are only two hypotheses under consideration, these probabilities must add to one, but of course if there were additional hypotheses beyond these two then this would no longer be the case.

Bayesian probability does not provide a rule for calculating the initial (or prior) probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} that one starts with; these may depend on the subjective experiences and biases of the person considering the hypothesis. For instance, one person might have quite a bit of prior faith in the lottery system, and assign the probabilities {{\bf P}(H_0) = 0.99} and {{\bf P}(H_1) = 0.01}. Another person might have quite a bit of prior cynicism, and perhaps assign {{\bf P}(H_0)=0.5} and {{\bf P}(H_1)=0.5}. One cannot use purely mathematical arguments to determine which of these two people is “correct” (or whether they are both “wrong”); it depends on subjective factors.

What Bayesian probability does do, however, is provide a rule to update these probabilities {{\bf P}(H_0)}, {{\bf P}(H_1)} in view of new information {E} to provide posterior probabilities {{\bf P}(H_0|E)}, {{\bf P}(H_1|E)}. In our example, the new information {E} would be the fact that the October 1 lottery numbers were {9, 18, 27, 36, 45, 54} (in some order). The update is given by the famous Bayes theorem

\displaystyle  {\bf P}(H_0|E) = \frac{{\bf P}(E|H_0) {\bf P}(H_0)}{{\bf P}(E)}; \quad {\bf P}(H_1|E) = \frac{{\bf P}(E|H_1) {\bf P}(H_1)}{{\bf P}(E)},

where {{\bf P}(E|H_0)} is the probability that the event {E} would have occurred under the null hypothesis {H_0}, and {{\bf P}(E|H_1)} is the probability that the event {E} would have occurred under the alternative hypothesis {H_1}. Let us divide the second equation by the first to cancel the {{\bf P}(E)} denominator, and obtain

\displaystyle  \frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } = \frac{ {\bf P}(H_1) }{ {\bf P}(H_0) } \times \frac{ {\bf P}(E | H_1)}{{\bf P}(E | H_0)}. \ \ \ \ \ (1)

One can interpret {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} as the prior odds of the alternative hypothesis, and {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) } } as the posterior odds of the alternative hypothesis. The identity (1) then says that in order to compute the posterior odds {\frac{ {\bf P}(H_1|E) }{ {\bf P}(H_0|E) }} of the alternative hypothesis in light of the new information {E}, one needs to know three things:
  1. The prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis;
  2. The probability {\mathop{\bf P}(E|H_0)} that the event {E} occurs under the null hypothesis {H_0}; and
  3. The probability {\mathop{\bf P}(E|H_1)} that the event {E} occurs under the alternative hypothesis {H_1}.

As previously discussed, the prior odds {\frac{ {\bf P}(H_1) }{ {\bf P}(H_0) }} of the alternative hypothesis are subjective and vary from person to person; in the example earlier, the person with substantial faith in the lottery may only give prior odds of {\frac{0.01}{0.99} \approx 0.01} (99 to 1 against) of the alternative hypothesis, whereas the cynic might give odds of {\frac{0.5}{0.5}=1} (even odds). The probability {{\bf P}(E|H_0)} is the quantity that can often be calculated by straightforward mathematics; as discussed before, in this specific example we have

\displaystyle  \mathop{\bf P}(E|H_0) = \frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}.

But this still leaves one crucial quantity that is unknown: the probability {{\bf P}(E|H_1)}. This is incredibly difficult to compute, because it requires a precise theory for how events would play out under the alternative hypothesis {H_1}, and in particular is very sensitive as to what the alternative hypothesis {H_1} actually is.

For instance, suppose we replace the alternative hypothesis {H_1} by the following very specific (and somewhat bizarre) hypothesis:

  • Alternative hypothesis {H'_1}: The lottery is rigged by a cult that worships the multiples of {9}, and views October 1 as their holiest day. On this day, they will manipulate the lottery to only select those balls that are multiples of {9}.

Under this alternative hypothesis {H'_1}, we have {{\bf P}(E|H'_1)=1}. So, when {E} happens, the odds of this alternative hypothesis {H'_1} will increase by the dramatic factor of {\frac{{\bf P}(E|H'_1)}{{\bf P}(E|H_0)} = 28,989,675}. So, for instance, someone who already was entertaining odds of {\frac{0.01}{0.99}} of this hypothesis {H'_1} would now have these odds multiply dramatically to {\frac{0.01}{0.99} \times 28,989,675 \approx 290,000}, so that the probability of {H'_1} would have jumped from a mere {1\%} to a staggering {99.9997\%}. This is about as strong a shift in belief as one could imagine. However, this hypothesis {H'_1} is so specific and bizarre that one’s prior odds of this hypothesis would be nowhere near as large as {\frac{0.01}{0.99}} (unless substantial prior evidence of this cult and its hold on the lottery system existed, of course). A more realistic prior odds for {H'_1} would be something like {\frac{10^{-10^{10}}}{1-10^{-10^{10}}}} – which is so minuscule that even multiplying it by a factor such as {28,989,675} barely moves the needle.
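
As a quick sanity check of this arithmetic, one can carry out the Bayesian update (1) numerically; here is a short Python sketch (the priors used are just the illustrative figures from the discussion above, with {10^{-20}} standing in for a truly minuscule prior, since {10^{-10^{10}}} is far below floating point range):

```python
from math import comb

likelihood_ratio = comb(55, 6)     # P(E|H'_1) / P(E|H_0) = 28,989,675

def update(prior_prob, ratio):
    """Posterior probability of the alternative hypothesis via the odds form (1)."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * ratio
    return posterior_odds / (1 + posterior_odds)

print(update(0.01, likelihood_ratio))    # about 0.999997: a dramatic shift
print(update(1e-20, likelihood_ratio))   # about 2.9e-13: the needle barely moves
```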

Remark 1 The contrast between alternative hypothesis {H_1} and alternative hypothesis {H'_1} illustrates a common demagogical rhetorical technique when an advocate is trying to convince an audience of an alternative hypothesis, namely to use suggestive language (“I’m just asking questions here”) rather than precise statements in order to leave the alternative hypothesis deliberately vague. In particular, the advocate may take advantage of the freedom to use a broad formulation of the hypothesis (such as {H_1}) in order to maximize the audience’s prior odds of the hypothesis, simultaneously with a very specific formulation of the hypothesis (such as {H'_1}) in order to maximize the probability of the actual event {E} occurring under this hypothesis. (A related technique is to be deliberately vague about the hypothesized competency of some suspicious actor, so that this actor could be portrayed as being extraordinarily competent when convenient to do so, while simultaneously being portrayed as extraordinarily incompetent when that instead is the more useful hypothesis.) This can lead to wildly inaccurate Bayesian updates of this vague alternative hypothesis, and so precise formulation of such hypotheses is important if one is to approach the topic in anything remotely resembling a scientific manner.

At the opposite extreme, consider instead the following hypothesis:

  • Alternative hypothesis {H''_1}: The lottery is rigged by some corrupt officials, who on October 1 decide to randomly determine the winning numbers in advance, share these numbers with their collaborators, and then manipulate the lottery to choose those numbers that they selected.

If these corrupt officials are indeed choosing their predetermined winning numbers randomly, then the probability {{\bf P}(E|H''_1)} would in fact be just the same probability {\frac{1}{\binom{55}{6}} = \frac{1}{28,989,675}} as {{\bf P}(E|H_0)}, and in this case the seemingly unusual event {E} would in fact have no effect on the odds of the alternative hypothesis, because it was just as unlikely for the alternative hypothesis to generate this multiples-of-nine pattern as for the null hypothesis to. In fact, one would imagine that these corrupt officials would avoid “suspicious” numbers, such as the multiples of {9}, and only choose numbers that look random, in which case {{\bf P}(E|H''_1)} would in fact be less than {{\bf P}(E|H_0)} and so the event {E} would actually lower the odds of the alternative hypothesis in this case. (In fact, one can sometimes use this tendency of fraudsters to not generate truly random data as a statistical tool to detect such fraud; violations of Benford’s law for instance can be used in this fashion, though only in situations where the null hypothesis is expected to obey Benford’s law, as discussed in this previous blog post.)

Now let us consider a third alternative hypothesis:

  • Alternative hypothesis {H'''_1}: On October 1, the lottery machine developed a fault and now only selects numbers that exhibit unusual patterns.

Setting aside the question of precisely what faulty mechanism could induce this sort of effect, it is not clear at all how to compute {{\bf P}(E|H'''_1)} in this case. Using the principle of indifference as a crude rule of thumb, one might expect

\displaystyle  {\bf P}(E|H'''_1) \approx \frac{1}{\# \{ \hbox{unusual patterns}\}}

where the denominator is the number of patterns among the possible {\binom{55}{6}} lottery outcomes that are “unusual”. Among such patterns would presumably be the multiples-of-9 pattern {9,18,27,36,45,54}, but one could easily come up with other patterns that are equally “unusual”, such as consecutive runs like {11, 12, 13, 14, 15, 16}, the first few primes {2, 3, 5, 7, 11, 13}, or the first few squares {1, 4, 9, 16, 25, 36}, and so forth. How many such unusual patterns are there? This is too vague a question to answer with any degree of precision, but as one illustrative statistic, the Online Encyclopedia of Integer Sequences (OEIS) currently hosts about {350,000} sequences. Not all of these would begin with six distinct numbers from {1} to {55}, and several of these sequences might generate the same set of six numbers, but this does suggest that patterns that one would deem to be “unusual” could number in the thousands, tens of thousands, or more. Using this guess, we would then expect the event {E} to boost the odds of this hypothesis {H'''_1} by perhaps a thousandfold or so, which is moderately impressive. But subsequent information can counteract this effect. For instance, on October 3, the same lottery produced the numbers {8, 10, 12, 14, 26, 51}, which exhibit no unusual properties (no search results in the OEIS, for instance); if we denote this event by {E'}, then we have {{\bf P}(E'|H'''_1) \approx 0} and so this new information {E'} should drive the odds for this alternative hypothesis {H'''_1} way down again.

Remark 2 This example demonstrates another demagogical rhetorical technique that one sometimes sees (particularly in political or other emotionally charged contexts), which is to cherry-pick the information presented to their audience by informing them of events {E} which have a relatively high probability of occurring under their alternative hypothesis, but withholding information about other relevant events {E'} that have a relatively low probability of occurring under their alternative hypothesis. When confronted with such new information {E'}, a common defense of a demagogue is to modify the alternative hypothesis {H_1} to a more specific hypothesis {H'_1} that can “explain” this information {E'} (“Oh, clearly we heard about {E'} because the conspiracy in fact extends to the additional organizations {X, Y, Z} that reported {E'}“), taking advantage of the vagueness discussed in Remark 1.

Let us consider a superficially similar hypothesis:

  • Alternative hypothesis {H''''_1}: On October 1, a divine being decided to send a sign to humanity by placing an unusual pattern in a lottery.

Here we (literally) stay agnostic on the prior odds of this hypothesis, and do not address the theological question of why a divine being should choose to use the medium of a lottery to send their signs. At first glance, the probability {{\bf P}(E|H''''_1)} here should be similar to the probability {{\bf P}(E|H'''_1)}, and so perhaps one could use this event {E} to improve the odds of the existence of a divine being by a factor of a thousand or so. But note carefully that the hypothesis {H''''_1} did not specify which lottery the divine being chose to use. The PCSO Grand Lotto is just one of a dozen lotteries run by the Philippine Charity Sweepstakes Office (PCSO), and of course there are over a hundred other countries and thousands of states within these countries, each of which often run their own lotteries. Taking into account these thousands or tens of thousands of additional lotteries to choose from, the probability {{\bf P}(E|H''''_1)} now drops by several orders of magnitude, and is now basically comparable to the probability {{\bf P}(E|H_0)} coming from the null hypothesis. As such one does not expect the event {E} to have a significant impact on the odds of the hypothesis {H''''_1}, despite the small-looking nature {\frac{1}{28,989,675}} of the probability {{\bf P}(E|H_0)}.

In summary, we have failed to locate any alternative hypothesis {H_1} which

  1. Has some non-negligible prior odds of being true (and in particular is not excessively specific, as with hypothesis {H'_1});
  2. Has a significantly higher probability of producing the specific event {E} than the null hypothesis; AND
  3. Does not struggle to also produce other events {E'} that have since been observed.
One needs all three of these factors to be present in order to significantly weaken the plausibility of the null hypothesis {H_0}; in the absence of these three factors, a moderately small numerical value of {{\bf P}(E|H_0)}, such as {\frac{1}{28,989,675}}, does not actually do much to affect this plausibility. In this case one needs to lay out a reasonably precise alternative hypothesis {H_1} and make some actual educated guesses towards the competing probability {{\bf P}(E|H_1)} before one can draw further conclusions. However, if {{\bf P}(E|H_0)} is insanely small, e.g., less than {10^{-1000}}, then the possibility of a previously overlooked alternative hypothesis {H_1} becomes far more plausible; as per the famous quote of Arthur Conan Doyle’s Sherlock Holmes, “When you have eliminated all which is impossible, then whatever remains, however improbable, must be the truth.”

We now return to the fact that for this specific October 1 lottery, there were {433} tickets that managed to select the winning numbers. Let us call this event {F}. In view of this additional information, we should now consider the ratio of the probabilities {{\bf P}(E \& F|H_1)} and {{\bf P}(E \& F|H_0)}, rather than the ratio of the probabilities {{\bf P}(E|H_1)} and {{\bf P}(E|H_0)}. If we augment the null hypothesis to

  • Null hypothesis {H'_0}: The lottery is run in a completely fair and random fashion, and the purchasers of lottery tickets also select their numbers in a completely random fashion.

then {{\bf P}(E \& F|H'_0)} is indeed of the “insanely improbable” category mentioned previously. I was not able to get official numbers on how many tickets are purchased per lottery, but let us say for sake of argument that it is 1 million (the conclusion will not be extremely sensitive to this choice). Then the expected number of tickets that would have the winning numbers would be

\displaystyle  \frac{1 \hbox{ million}}{28,989,675} \approx 0.03

(which is broadly consistent, by the way, with the jackpot being reached every {30} draws or so), and standard probability theory suggests that the number of winners should now follow a Poisson distribution with this mean {\lambda = 0.03}. The probability of obtaining {433} winners would now be

\displaystyle  {\bf P}(F|H'_0) = \frac{\lambda^{433} e^{-\lambda}}{433!} \approx 10^{-1600}

and of course {{\bf P}(E \& F|H'_0)} would be even smaller than this. So this clearly demands some sort of explanation. But in actuality, many purchasers of lottery tickets do not select their numbers completely randomly; they often have some “lucky” numbers (e.g., based on birthdays or other personally significant dates) that they prefer to use, or choose numbers according to a simple pattern rather than go to the trouble of trying to make them truly random. So if we modify the null hypothesis to

  • Null hypothesis {H''_0}: The lottery is run in a completely fair and random fashion, but a significant fraction of the purchasers of lottery tickets only select “unusual” numbers.

then it can now become quite plausible that a highly unusual set of numbers such as {9,18,27,36,45,54} could be selected by as many as {433} purchasers of tickets; for instance, if {10\%} of the 1 million ticket holders chose to select their numbers according to some sort of pattern, then only {0.4\%} of those holders would have to pick {9,18,27,36,45,54} in order for the event {F} to hold (given {E}), and this is not extremely implausible. Given that this reasonable version of the null hypothesis already gives a plausible explanation for {F}, there does not seem to be a pressing need to locate an alternate hypothesis {H_1} that gives some other explanation (cf. Occam’s razor).
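
The rough magnitudes in this discussion are easy to reproduce; here is a hedged Python sketch (the figures of 1 million tickets, a {10\%} pattern-picking fraction, and a {0.43\%} preference for this particular pattern are the same illustrative guesses as in the text):

```python
from math import comb, lgamma, log

tickets = 1_000_000
lam = tickets / comb(55, 6)        # expected number of jackpot winners, about 0.034

# log10 of the Poisson probability of exactly 433 winners under H'_0
k = 433
log10_prob = (k * log(lam) - lam - lgamma(k + 1)) / log(10)
print(round(lam, 3), round(log10_prob))    # about 0.034 and about -1590, i.e. of the order 10^{-1600}

# Under H''_0: if 10% of purchasers pick "patterned" numbers and 0.43% of those
# happen to pick the multiples of 9, the expected number of such winners is
print(tickets * 0.10 * 0.0043)             # 430.0, comparable to the observed 433
```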

Remark 3 In view of the above discussion, one can propose a systematic way to evaluate (in as objective a fashion as possible) rhetorical claims in which an advocate is presenting evidence to support some alternative hypothesis:
  1. State the null hypothesis {H_0} and the alternative hypothesis {H_1} as precisely as possible. In particular, avoid conflating an extremely broad hypothesis (such as the hypothesis {H_1} in our running example) with an extremely specific one (such as {H'_1} in our example).
  2. With the hypotheses precisely stated, give an honest estimate to the prior odds of this formulation of the alternative hypothesis.
  3. Consider if all the relevant information {E} (or at least a representative sample thereof) has been presented to you before proceeding further. If not, consider gathering more information {E'} from further sources.
  4. Estimate how likely the information {E} was to have occurred under the null hypothesis.
  5. Estimate how likely the information {E} was to have occurred under the alternative hypothesis (using exactly the same wording of this hypothesis as you did in previous steps).
  6. If the second estimate is significantly larger than the first, then you have cause to update your prior odds of this hypothesis (though if those prior odds were already vanishingly unlikely, this may not move the needle significantly). If not, the argument is unconvincing and no significant adjustment to the odds (except perhaps in a downwards direction) needs to be made.

Rachel Greenfeld and I have just uploaded to the arXiv our announcement “A counterexample to the periodic tiling conjecture“. This is an announcement of a longer paper that we are currently in the process of writing up (and hope to release in a few weeks), in which we disprove the periodic tiling conjecture of Grünbaum-Shephard and Lagarias-Wang. This conjecture can be formulated in both discrete and continuous settings:

Conjecture 1 (Discrete periodic tiling conjecture) Suppose that {F \subset {\bf Z}^d} is a finite set that tiles {{\bf Z}^d} by translations (i.e., {{\bf Z}^d} can be partitioned into translates of {F}). Then {F} also tiles {{\bf Z}^d} by translations periodically (i.e., the set of translations can be taken to be a periodic subset of {{\bf Z}^d}).

Conjecture 2 (Continuous periodic tiling conjecture) Suppose that {\Omega \subset {\bf R}^d} is a bounded measurable set of positive measure that tiles {{\bf R}^d} by translations up to null sets. Then {\Omega} also tiles {{\bf R}^d} by translations periodically up to null sets.

The discrete periodic tiling conjecture can be easily established for {d=1} by the pigeonhole principle (as first observed by Newman), and was proven for {d=2} by Bhattacharya (with a new proof given by Greenfeld and myself). The continuous periodic tiling conjecture was established for {d=1} by Lagarias and Wang. By an old observation of Hao Wang, one of the consequences of the (discrete) periodic tiling conjecture is that the problem of determining whether a given finite set {F \subset {\bf Z}^d} tiles by translations is (algorithmically and logically) decidable.

On the other hand, once one allows tilings by more than one tile, it is well known that aperiodic tile sets exist, even in dimension two – finite collections of discrete or continuous tiles that can tile the given domain by translations, but not periodically. Perhaps the most famous examples of such aperiodic tilings are the Penrose tilings, but there are many other constructions; for instance, there is a construction of Ammann, Grünbaum, and Shephard of eight tiles in {{\bf Z}^2} which tile aperiodically. Recently, Rachel and I constructed a pair of tiles in {{\bf Z}^d} that tiled a periodic subset of {{\bf Z}^d} aperiodically (in fact we could even make the tiling question logically undecidable in ZFC).

Our main result is then

Theorem 3 Both the discrete and continuous periodic tiling conjectures fail for sufficiently large {d}. Also, there is a finite abelian group {G_0} such that the analogue of the discrete periodic tiling conjecture for {{\bf Z}^2 \times G_0} is false.

This suggests that the techniques used to prove the discrete periodic conjecture in {{\bf Z}^2} are already close to the limit of their applicability, as they cannot handle even virtually two-dimensional discrete abelian groups such as {{\bf Z}^2 \times G_0}. The main difficulty is in constructing the counterexample in the {{\bf Z}^2 \times G_0} setting.

The approach starts by adapting some of the methods of a previous paper of Rachel and myself. The first step is to make the problem easier to solve by disproving a “multiple periodic tiling conjecture” instead of the traditional periodic tiling conjecture. At present, Theorem 3 asserts the existence of a “tiling equation” {A \oplus F = {\bf Z}^2 \times G_0} (where one should think of {F} and {G_0} as given, and the tiling set {A} is unknown), which admits solutions, all of which are non-periodic. It turns out that it is enough to instead assert the existence of a system

\displaystyle  A \oplus F^{(m)} = {\bf Z}^2 \times G_0, m=1,\dots,M

of tiling equations, which admits solutions, all of which are non-periodic. This is basically because one can “stack” together a system of tiling equations into an essentially equivalent single tiling equation in a slightly larger group. The advantage of this reformulation is that it creates a “tiling language”, in which each sentence {A \oplus F^{(m)} = {\bf Z}^2 \times G_0} in the language expresses a different type of constraint on the unknown set {A}. The strategy then is to locate a non-periodic set {A} which one can try to “describe” by sentences in the tiling language that are obeyed by this non-periodic set, and which are “structured” enough that one can capture their non-periodic nature through enough of these sentences.

It is convenient to replace sets by functions, so that this tiling language can be translated to a more familiar language, namely the language of (certain types of) functional equations. The key point here is that the tiling equation

\displaystyle  A \oplus (\{0\} \times H) = G \times H

for some abelian groups {G, H} is precisely asserting that {A} is a graph

\displaystyle  A = \{ (x, f(x)): x \in G \}

of some function {f: G \rightarrow H} (this is sometimes referred to as the “vertical line test” in U.S. undergraduate math classes). Using this translation, it is possible to encode a variety of functional equations relating one or more functions {f_i: G \rightarrow H} taking values in some finite group {H} (such as a cyclic group).

The non-periodic behaviour that we ended up trying to capture was that of a certain “{p}-adically structured function” {f_p: {\bf Z} \rightarrow ({\bf Z}/p{\bf Z})^\times} associated to some fixed and sufficiently large prime {p} (in fact for our arguments any prime larger than {48}, e.g., {p=53}, would suffice), defined by the formula

\displaystyle  f_p(n) := \frac{n}{p^{\nu_p(n)}} \hbox{ mod } p

for {n \neq 0} and {f_p(0)=1}, where {\nu_p(n)} is the number of times {p} divides {n}. In other words, {f_p(n)} is the last non-zero digit in the base {p} expansion of {n} (with the convention that the last non-zero digit of {0} is {1}). This function is not periodic, and yet obeys a lot of functional equations; for instance, one has {f_p(pn) = f_p(n)} for all {n}, and also {f_p(pn+j)=j} for {j=1,\dots,p-1} (and in fact these two equations, together with the condition {f_p(0)=1}, completely determine {f_p}). Here is what the function {f_p} looks like (for {p=5}):
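
For concreteness, {f_p} is also easy to tabulate directly from this definition; here is a minimal Python sketch (for {p=5}):

```python
def f_p(n, p):
    """The last non-zero digit of n in base p, i.e. (n / p^{v_p(n)}) mod p,
    with the convention f_p(0) = 1."""
    if n == 0:
        return 1
    while n % p == 0:
        n //= p
    return n % p

p = 5
print([f_p(n, p) for n in range(1, 26)])
# [1, 2, 3, 4, 1, 1, 2, 3, 4, 2, 1, 2, 3, 4, 3, 1, 2, 3, 4, 4, 1, 2, 3, 4, 1]

# The functional equations mentioned above:
assert all(f_p(p * n, p) == f_p(n, p) for n in range(1, 200))
assert all(f_p(p * n + j, p) == j for n in range(200) for j in range(1, p))
```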

It turns out that we cannot describe this one-dimensional non-periodic function directly via tiling equations. However, we can describe two-dimensional non-periodic functions such as {(n,m) \mapsto f_p(An+Bm+C)} for some coefficients {A,B,C} via a suitable system of tiling equations. A typical such function looks like this:

A feature of this function is that when one restricts to a row or diagonal of such a function, the resulting one-dimensional function exhibits “{p}-adic structure” in the sense that it behaves like a rescaled version of {f_p}; see the announcement for a precise version of this statement. It turns out that the converse is essentially true: after excluding some degenerate solutions in which the function is constant along one or more of the columns, all two-dimensional functions which exhibit {p}-adic structure along (non-vertical) lines must behave like one of the functions {(n,m) \mapsto f_p(An+Bm+C)} mentioned earlier, and in particular is non-periodic. The proof of this result is strongly reminiscent of the type of reasoning needed to solve a Sudoku puzzle, and so we have adopted some Sudoku-like terminology in our arguments to provide intuition and visuals. One key step is to perform a shear transformation to the puzzle so that many of the rows become constant, as displayed in this example,

and then perform a “Tetris” move of eliminating the constant rows to arrive at a secondary Sudoku puzzle which one then analyzes in turn:

It is the iteration of this procedure that ultimately generates the non-periodic {p}-adic structure.

Let {M_{n \times m}({\bf Z})} denote the space of {n \times m} matrices with integer entries, and let {GL_n({\bf Z})} be the group of invertible {n \times n} matrices with integer entries. The Smith normal form takes an arbitrary matrix {A \in M_{n \times m}({\bf Z})} and factorises it as {A = UDV}, where {U \in GL_n({\bf Z})}, {V \in GL_m({\bf Z})}, and {D} is a rectangular diagonal matrix, by which we mean that the principal {\min(n,m) \times \min(n,m)} minor is diagonal, with all other entries zero. Furthermore the diagonal entries of {D} are {\alpha_1,\dots,\alpha_k,0,\dots,0} for some {0 \leq k \leq \min(n,m)} (which is also the rank of {A}), with the numbers {\alpha_1,\dots,\alpha_k} (known as the invariant factors) being positive integers with {\alpha_1 | \dots | \alpha_k}. The invariant factors are uniquely determined; but there can be some freedom to modify the invertible matrices {U,V}. The Smith normal form can be computed easily; for instance, in SAGE, it can be computed by calling the {{\tt smith\_form()}} function from the matrix class. The Smith normal form is also available for principal ideal domains other than the integers, but we will only be focused on the integer case here. For the purposes of this post, we will view the Smith normal form as a primitive operation on matrices that can be invoked as a “black box”.
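
As a complement to the black box, the invariant factors (though not the unimodular factors {U,V}) can also be computed from first principles: the product {\alpha_1 \cdots \alpha_i} equals the greatest common divisor of all {i \times i} minors of {A}. Here is a small self-contained Python sketch along these lines (exact integer arithmetic; intended only for small matrices):

```python
from itertools import combinations
from math import gcd

def det_int(M):
    """Exact determinant of a small integer matrix, by cofactor expansion."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det_int([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def invariant_factors(A):
    """Invariant factors alpha_1 | alpha_2 | ... of an integer matrix A, via
    d_i = gcd of all i x i minors and alpha_i = d_i / d_{i-1}."""
    n, m = len(A), len(A[0])
    factors, prev = [], 1
    for i in range(1, min(n, m) + 1):
        d = 0
        for rows in combinations(range(n), i):
            for cols in combinations(range(m), i):
                d = gcd(d, abs(det_int([[A[r][c] for c in cols] for r in rows])))
        if d == 0:              # all i x i minors vanish, so the rank is i - 1
            break
        factors.append(d // prev)
        prev = d
    return factors

# The matrix A from Example 2 below: rank 2, invariant factors 1 and 8.
print(invariant_factors([[1, 2, 3], [3, -2, 1], [1, 2, 3]]))    # [1, 8]
```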

In this post I would like to record how to use the Smith normal form to computationally manipulate two closely related classes of objects:

  • Subgroups {\Gamma \leq {\bf Z}^d} of a standard lattice {{\bf Z}^d} (or lattice subgroups for short);
  • Closed subgroups {H \leq ({\bf R}/{\bf Z})^d} of a standard torus {({\bf R}/{\bf Z})^d} (or closed torus subgroups for short).
(This arose for me due to the need to actually perform (with a collaborator) some numerical calculations with a number of lattice subgroups and closed torus subgroups.) It’s possible that all of these operations are already encoded in some existing object classes in a computational algebra package; I would be interested to know of such packages and classes for lattice subgroups or closed torus subgroups in the comments.

The above two classes of objects are isomorphic to each other by Pontryagin duality: if {\Gamma \leq {\bf Z}^d} is a lattice subgroup, then the orthogonal complement

\displaystyle  \Gamma^\perp := \{ x \in ({\bf R}/{\bf Z})^d: \langle x, \xi \rangle = 0 \forall \xi \in \Gamma \}

is a closed torus subgroup (with {\langle,\rangle: ({\bf R}/{\bf Z})^d \times {\bf Z}^d \rightarrow {\bf R}/{\bf Z}} the usual Fourier pairing); conversely, if {H \leq ({\bf R}/{\bf Z})^d} is a closed torus subgroup, then

\displaystyle  H^\perp := \{ \xi \in {\bf Z}^d: \langle x, \xi \rangle = 0 \forall x \in H \}

is a lattice subgroup. These two operations invert each other: {(\Gamma^\perp)^\perp = \Gamma} and {(H^\perp)^\perp = H}.

Example 1 The orthogonal complement of the lattice subgroup

\displaystyle  2{\bf Z} \times \{0\} = \{ (2n,0): n \in {\bf Z}\} \leq {\bf Z}^2

is the closed torus subgroup

\displaystyle  (\frac{1}{2}{\bf Z}/{\bf Z}) \times ({\bf R}/{\bf Z}) = \{ (x,y) \in ({\bf R}/{\bf Z})^2: 2x=0\} \leq ({\bf R}/{\bf Z})^2

and conversely.

Let us focus first on lattice subgroups {\Gamma \leq {\bf Z}^d}. As all such subgroups are finitely generated abelian groups, one way to describe a lattice subgroup is to specify a set {v_1,\dots,v_n \in \Gamma} of generators of {\Gamma}. Equivalently, we have

\displaystyle  \Gamma = A {\bf Z}^n

where {A \in M_{d \times n}({\bf Z})} is the matrix whose columns are {v_1,\dots,v_n}. Applying the Smith normal form {A = UDV}, we conclude that

\displaystyle  \Gamma = UDV{\bf Z}^n = UD{\bf Z}^n

so in particular {\Gamma} is isomorphic (with respect to the automorphism group {GL_d({\bf Z})} of {{\bf Z}^d}) to {D{\bf Z}^n}. In particular, we see that {\Gamma} is a free abelian group of rank {k}, where {k} is the rank of {D} (or {A}). This representation also allows one to trim the representation {A {\bf Z}^n} down to {U D'{\bf Z}^k}, where {D' \in M_{d \times k}({\bf Z})} is the matrix formed from the {k} left columns of {D}; the columns of {UD'} then give a basis for {\Gamma}. Let us call this a trimmed representation of {A{\bf Z}^n}.

Example 2 Let {\Gamma \leq {\bf Z}^3} be the lattice subgroup generated by {(1,3,1)}, {(2,-2,2)}, {(3,1,3)}, thus {\Gamma = A {\bf Z}^3} with {A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & -2 & 1 \\ 1 & 2 & 3 \end{pmatrix}}. A Smith normal form for {A} is given by

\displaystyle  A = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & -2 & 1 \\ -1 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}

so {A{\bf Z}^3} is a rank two lattice with a basis of {(3,1,3) \times 1 = (3,1,3)} and {(1,0,1) \times 8 = (8,0,8)} (and the invariant factors are {1} and {8}). The trimmed representation is

\displaystyle  A {\bf Z}^3 = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 8 \\ 0 & 0 \end{pmatrix} {\bf Z}^2 = \begin{pmatrix} 3 & 8 \\ 1 & 0 \\ 3 & 8 \end{pmatrix} {\bf Z}^2.

There are other Smith normal forms for {A}, giving slightly different representations here, but the rank and invariant factors will always be the same.
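
The factorization in Example 2 (and the trimmed basis) can be double-checked with a few lines of exact matrix arithmetic; here is a short sketch using sympy:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3], [3, -2, 1], [1, 2, 3]])
U = Matrix([[3, 1, 1], [1, 0, 0], [3, 1, 0]])
D = Matrix([[1, 0, 0], [0, 8, 0], [0, 0, 0]])
V = Matrix([[3, -2, 1], [-1, 1, 0], [1, 0, 0]])

assert U * D * V == A                             # the claimed factorization
assert abs(U.det()) == 1 and abs(V.det()) == 1    # U and V are unimodular

D_trim = D[:, :2]          # keep the two columns with nonzero invariant factors
print(U * D_trim)          # columns (3,1,3) and (8,0,8): a basis of A*Z^3
```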

By the above discussion we can represent a lattice subgroup {\Gamma \leq {\bf Z}^d} by a matrix {A \in M_{d \times n}({\bf Z})} for some {n}; this representation is not unique, but we will address this issue shortly. For now, we focus on the question of how to use such data representations of subgroups to perform basic operations on lattice subgroups. There are some operations that are very easy to perform using this data representation:

  • (Applying a linear transformation) if {T \in M_{d' \times d}({\bf Z})}, so that {T} is also a linear transformation from {{\bf Z}^d} to {{\bf Z}^{d'}}, then {T} maps lattice subgroups to lattice subgroups, and clearly maps the lattice subgroup {A{\bf Z}^n} to {(TA){\bf Z}^n} for any {A \in M_{d \times n}({\bf Z})}.
  • (Sum) Given two lattice subgroups {A_1 {\bf Z}^{n_1}, A_2 {\bf Z}^{n_2} \leq {\bf Z}^d} for some {A_1 \in M_{d \times n_1}({\bf Z})}, {A_2 \in M_{d \times n_2}({\bf Z})}, the sum {A_1 {\bf Z}^{n_1} + A_2 {\bf Z}^{n_2}} is equal to the lattice subgroup {A {\bf Z}^{n_1+n_2}}, where {A = (A_1 A_2) \in M_{d \times n_1 + n_2}({\bf Z})} is the matrix formed by concatenating the columns of {A_1} with the columns of {A_2}.
  • (Direct sum) Given two lattice subgroups {A_1 {\bf Z}^{n_1} \leq {\bf Z}^{d_1}}, {A_2 {\bf Z}^{n_2} \leq {\bf Z}^{d_2}}, the direct sum {A_1 {\bf Z}^{n_1} \times A_2 {\bf Z}^{n_2}} is equal to the lattice subgroup {A {\bf Z}^{n_1+n_2}}, where {A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} \in M_{d_1+d_2 \times n_1 + n_2}({\bf Z})} is the block matrix formed by taking the direct sum of {A_1} and {A_2}.

One can also use Smith normal form to detect when one lattice subgroup {B {\bf Z}^m \leq {\bf Z}^d} is a subgroup of another lattice subgroup {A {\bf Z}^n \leq {\bf Z}^d}. Using Smith normal form factorization {A = U D V}, with invariant factors {\alpha_1|\dots|\alpha_k}, the relation {B {\bf Z}^m \leq A {\bf Z}^n} is equivalent after some manipulation to

\displaystyle  U^{-1} B {\bf Z}^m \leq D {\bf Z}^n.

The group {U^{-1} B {\bf Z}^m} is generated by the columns of {U^{-1} B}, so this gives a test to determine whether {B {\bf Z}^{m} \leq A {\bf Z}^{n}}: the {i^{th}} row of {U^{-1} B} must be divisible by {\alpha_i} for {i=1,\dots,k}, and all other rows must vanish.

Example 3 To test whether the lattice subgroup {\Gamma'} generated by {(1,1,1)} and {(0,2,0)} is contained in the lattice subgroup {\Gamma = A{\bf Z}^3} from Example 2, we write {\Gamma'} as {B {\bf Z}^2} with {B = \begin{pmatrix} 1 & 0 \\ 1 & 2 \\ 1 & 0\end{pmatrix}}, and observe that

\displaystyle  U^{-1} B = \begin{pmatrix} 1 & 2 \\ -2 & -6 \\ 0 & 0 \end{pmatrix}.

The first row is of course divisible by {1}, and the last row vanishes as required, but the second row is not divisible by {8}, so {\Gamma'} is not contained in {\Gamma} (but {4\Gamma'} is); also a similar computation verifies that {\Gamma} is conversely contained in {\Gamma'}.
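
The divisibility test of Example 3 is mechanical; here is a short sympy sketch (reusing the matrix {U} and the invariant factors from Example 2):

```python
from sympy import Matrix

U = Matrix([[3, 1, 1], [1, 0, 0], [3, 1, 0]])     # from the Smith normal form of A
alphas = [1, 8]                                   # invariant factors of A
B = Matrix([[1, 0], [1, 2], [1, 0]])              # generators (1,1,1), (0,2,0) of Gamma'

M = U.inv() * B
print(M)                                          # rows (1,2), (-2,-6), (0,0)

def contained(M, alphas):
    """Row i of M must be divisible by alpha_i for i < k; later rows must vanish."""
    for i in range(M.rows):
        for entry in M.row(i):
            if i < len(alphas):
                if entry % alphas[i] != 0:
                    return False
            elif entry != 0:
                return False
    return True

print(contained(M, alphas))        # False: Gamma' is not contained in Gamma
print(contained(4 * M, alphas))    # True:  but 4*Gamma' is
```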

One can now test whether {B{\bf Z}^m = A{\bf Z}^n} by testing whether {B{\bf Z}^m \leq A{\bf Z}^n} and {A{\bf Z}^n \leq B{\bf Z}^m} simultaneously hold (there may be more efficient ways to do this, but this is already computationally manageable in many applications). This in principle addresses the issue of non-uniqueness of representation of a subgroup {\Gamma} in the form {A{\bf Z}^n}.

Next, we consider the question of representing the intersection {A{\bf Z}^n \cap B{\bf Z}^m} of two subgroups {A{\bf Z}^n, B{\bf Z}^m \leq {\bf Z}^d} in the form {C{\bf Z}^p} for some {p} and {C \in M_{d \times p}({\bf Z})}. We can write

\displaystyle  A{\bf Z}^n \cap B{\bf Z}^m = \{ Ax: Ax = By \hbox{ for some } x \in {\bf Z}^n, y \in {\bf Z}^m \}

\displaystyle  = (A 0) \{ z \in {\bf Z}^{n+m}: (A B) z = 0 \}

where {(A B) \in M_{d \times n+m}({\bf Z})} is the matrix formed by concatenating {A} and {B}, and similarly for {(A 0) \in M_{d \times n+m}({\bf Z})} (here we use the change of variable {z = \begin{pmatrix} x \\ -y \end{pmatrix}}). We apply the Smith normal form to {(A B)} to write

\displaystyle  (A B) = U D V

where {U \in GL_d({\bf Z})}, {D \in M_{d \times n+m}({\bf Z})}, {V \in GL_{n+m}({\bf Z})} with {D} of rank {k}. We can then write

\displaystyle  \{ z \in {\bf Z}^{n+m}: (A B) z = 0 \} = V^{-1} \{ w \in {\bf Z}^{n+m}: Dw = 0 \}

\displaystyle  = V^{-1} (\{0\}^k \times {\bf Z}^{n+m-k})

(making the change of variables {w = Vz}). Thus we can write {A{\bf Z}^n \cap B{\bf Z}^m = C {\bf Z}^{n+m-k}} where {C \in M_{d \times n+m-k}({\bf Z})} consists of the right {n+m-k} columns of {(A 0) V^{-1} \in M_{d \times n+m}({\bf Z})}.

Example 4 With the lattice {A{\bf Z}^3} from Example 2, we shall compute the intersection of {A{\bf Z}^3} with the subgroup {{\bf Z}^2 \times \{0\}}, which one can also write as {B{\bf Z}^2} with {B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}}. We obtain a Smith normal form

\displaystyle  (A B) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 3 & -2 & 1 & 0 & 1 \\ 1 & 2 & 3 & 1 & 0 \\ 1 & 2 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \end{pmatrix}

so {k=3}. We have

\displaystyle  (A 0) V^{-1} = \begin{pmatrix} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 3 & 0 & -8 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}

and so we can write {A{\bf Z}^3 \cap B{\bf Z}^2 = C{\bf Z}^2} where

\displaystyle  C = \begin{pmatrix} 0 & 0 \\ 0 & -8 \\ 0 & 0 \end{pmatrix}.

One can trim this representation if desired, for instance by deleting the first column of {C} (and replacing {{\bf Z}^2} with {{\bf Z}}). Thus the intersection of {A{\bf Z}^3} with {{\bf Z}^2 \times \{0\}} is the rank one subgroup generated by {(0,-8,0)}.
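
Example 4 can likewise be verified mechanically; here is a sympy sketch:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2, 3], [3, -2, 1], [1, 2, 3]])
B = Matrix([[1, 0], [0, 1], [0, 0]])
AB = A.row_join(B)                     # the 3 x 5 matrix (A B)

U = Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
D = Matrix([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]])
V = Matrix([[3, -2, 1, 0, 1], [1, 2, 3, 1, 0], [1, 2, 3, 0, 0],
            [1, 0, 0, 0, 0], [0, 1, 1, 0, 0]])
assert U * D * V == AB                 # the Smith normal form quoted in Example 4

A0 = A.row_join(zeros(3, 2))           # the matrix (A 0)
k = 3                                  # rank of D
C = (A0 * V.inv())[:, k:]              # the right n + m - k = 2 columns
print(C)                               # columns (0,0,0) and (0,-8,0)
```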

A similar calculation allows one to represent the pullback {T^{-1} (A {\bf Z}^n) \leq {\bf Z}^{d'}} of a subgroup {A{\bf Z}^n \leq {\bf Z}^d} via a linear transformation {T \in M_{d \times d'}({\bf Z})}, since

\displaystyle T^{-1} (A {\bf Z}^n) = \{ x \in {\bf Z}^{d'}: Tx = Ay \hbox{ for some } y \in {\bf Z}^n \}

\displaystyle  = (I 0) \{ z \in {\bf Z}^{d'+n}: (T A) z = 0 \}

where {(I 0) \in M_{d' \times d'+n}({\bf Z})} is the concatenation of the {d' \times d'} identity matrix {I} and the {d' \times n} zero matrix. Applying the Smith normal form to write {(T A) = UDV} with {D} of rank {k}, the same argument as before allows us to write {T^{-1}(A{\bf Z}^n) = C {\bf Z}^{d'+n-k}} where {C \in M_{d' \times d'+n-k}({\bf Z})} consists of the right {d'+n-k} columns of {(I 0) V^{-1} \in M_{d' \times d'+n}({\bf Z})}.

Among other things, this allows one to describe lattices given by systems of linear equations and congruences in the {A{\bf Z}^n} format. Indeed, the set of lattice vectors {x \in {\bf Z}^d} that solve the system of congruences

\displaystyle  \alpha_i | x \cdot v_i \ \ \ \ \ (1)

for {i=1,\dots,k}, some natural numbers {\alpha_i}, and some lattice vectors {v_i \in {\bf Z}^d}, together with an additional system of equations

\displaystyle  x \cdot w_j = 0 \ \ \ \ \ (2)

for {j=1,\dots,l} and some lattice vectors {w_j \in {\bf Z}^d}, can be written as {T^{-1}(A {\bf Z}^k)} where {T \in M_{k+l \times d}({\bf Z})} is the matrix with rows {v_1,\dots,v_k,w_1,\dots,w_l}, and {A \in M_{k+l \times k}({\bf Z})} is the diagonal matrix with diagonal entries {\alpha_1,\dots,\alpha_k}. Conversely, any subgroup {A{\bf Z}^n} can be described in this form by first using the trimmed representation {A{\bf Z}^n = UD'{\bf Z}^k}, at which point membership of a lattice vector {x \in {\bf Z}^d} in {A{\bf Z}^n} is seen to be equivalent to the congruences

\displaystyle  \alpha_i | U^{-1} x \cdot e_i

for {i=1,\dots,k} (where {k} is the rank, {\alpha_1,\dots,\alpha_k} are the invariant factors, and {e_1,\dots,e_d} is the standard basis of {{\bf Z}^d}) together with the equations

\displaystyle  U^{-1} x \cdot e_j = 0

for {j=k+1,\dots,d}. Thus one can obtain a representation in the form (1), (2) with {l=d-k}, and {v_1,\dots,v_k,w_1,\dots,w_{d-k}} to be the rows of {U^{-1}} in order.

Example 5 With the lattice subgroup {A{\bf Z}^3} from Example 2, we have {U^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & -3 & 1 \\ 1 & 0 & -1 \end{pmatrix}}, and so {A{\bf Z}^3} consists of those triples {(x_1,x_2,x_3)} which obey the (redundant) congruence

\displaystyle  1 | x_2,

the congruence

\displaystyle  8 | -3x_2 + x_3

and the identity

\displaystyle  x_1 - x_3 = 0.

Conversely, one can use the above procedure to convert the above system of congruences and identities back into a form {A' {\bf Z}^{n'}} (though depending on which Smith normal form one chooses, the end result may be a different representation of the same lattice group {A{\bf Z}^3}).
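
One can sanity-check Example 5 by brute force over a small box: the points of {A{\bf Z}^3} generated directly from the trimmed basis {(3,1,3), (8,0,8)} should be exactly the points satisfying the congruences and the identity above. A short Python sketch:

```python
from itertools import product

def satisfies(x):
    """The description of A*Z^3 from Example 5 (the congruence 1 | x_2 is vacuous)."""
    x1, x2, x3 = x
    return (-3 * x2 + x3) % 8 == 0 and x1 - x3 == 0

# Points of A*Z^3 in the box [-20,20]^3, generated from the basis (3,1,3), (8,0,8)...
lattice_pts = set()
for a, b in product(range(-40, 41), repeat=2):
    v = (3 * a + 8 * b, a, 3 * a + 8 * b)
    if all(abs(c) <= 20 for c in v):
        lattice_pts.add(v)

# ...and points of the same box satisfying the congruences and the identity.
box_pts = {x for x in product(range(-20, 21), repeat=3) if satisfies(x)}

assert lattice_pts == box_pts
print(len(box_pts), "points of the box lie in the lattice, by either description")
```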

Now we apply Pontryagin duality. We claim the identity

\displaystyle  (A{\bf Z}^n)^\perp = \{ x \in ({\bf R}/{\bf Z})^d: A^Tx = 0 \}

for any {A \in M_{d \times n}({\bf Z})} (where {A^T \in M_{n \times d}({\bf Z})} induces a homomorphism from {({\bf R}/{\bf Z})^d} to {({\bf R}/{\bf Z})^n} in the obvious fashion). This can be verified by direct computation when {A} is a (rectangular) diagonal matrix, and the general case then easily follows from a Smith normal form computation (one can presumably also derive it from the category-theoretic properties of Pontryagin duality, although I will not do so here). So closed torus subgroups that are defined by a system of linear equations (over {{\bf R}/{\bf Z}}, with integer coefficients) are represented in the form {(A{\bf Z}^n)^\perp} of an orthogonal complement of a lattice subgroup. Using the trimmed form {A{\bf Z}^n = U D' {\bf Z}^k}, we see that

\displaystyle  (A{\bf Z}^n)^\perp = \{ x \in ({\bf R}/{\bf Z})^d: (UD')^T x = 0 \}

\displaystyle  = (U^{-1})^T \{ y \in ({\bf R}/{\bf Z})^d: (D')^T y = 0 \}

\displaystyle  = (U^{-1})^T (\frac{1}{\alpha_1} {\bf Z}/{\bf Z} \times \dots \times \frac{1}{\alpha_k} {\bf Z}/{\bf Z} \times ({\bf R}/{\bf Z})^{d-k}),

giving an explicit representation “in coordinates” of such a closed torus subgroup. In particular we can read off the isomorphism class of a closed torus subgroup as the product of a finite number of cyclic groups and a torus:

\displaystyle (A{\bf Z}^n)^\perp \equiv ({\bf Z}/\alpha_1 {\bf Z}) \times \dots \times ({\bf Z}/\alpha_k{\bf Z}) \times ({\bf R}/{\bf Z})^{d-k}.

Example 6 The orthogonal complement of the lattice subgroup {A{\bf Z}^3} from Example 2 is the closed torus subgroup

\displaystyle  (A{\bf Z}^3)^\perp = \{ (x_1,x_2,x_3) \in ({\bf R}/{\bf Z})^3: x_1 + 3x_2 + x_3

\displaystyle  = 2x_1 - 2x_2 + 2x_3 = 3x_1 + x_2 + 3x_3 = 0 \};

using the trimmed representation of {(A{\bf Z}^3)^\perp}, one can simplify this a little to

\displaystyle  (A{\bf Z}^3)^\perp = \{ (x_1,x_2,x_3) \in ({\bf R}/{\bf Z})^3: 3x_1 + x_2 + 3x_3

\displaystyle  = 8 x_1 + 8x_3 = 0 \}

and one can also write this as the image of the group {\{ 0\} \times (\frac{1}{8}{\bf Z}/{\bf Z}) \times ({\bf R}/{\bf Z})} under the torus isomorphism

\displaystyle  (y_1,y_2,y_3) \mapsto (y_3, y_1 - 3y_2, y_2 - y_3).

In other words, one can write

\displaystyle  (A{\bf Z}^3)^\perp = \{ (y,0,-y) + (0,-\frac{3a}{8},\frac{a}{8}): y \in {\bf R}/{\bf Z}; a \in {\bf Z}/8{\bf Z} \}

so that {(A{\bf Z}^3)^\perp} is isomorphic to {{\bf R}/{\bf Z} \times {\bf Z}/8{\bf Z}}.
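
One can also confirm the parametrisation at the end of Example 6 with exact rational arithmetic: every point of the displayed family should pair to an integer with each generator of {A{\bf Z}^3}. A short Python sketch using the fractions module (sampling {y} only at rational values, to keep the arithmetic exact):

```python
from fractions import Fraction

generators = [(1, 3, 1), (2, -2, 2), (3, 1, 3)]    # the columns of A from Example 2

def in_perp(x):
    """Check that <x, xi> vanishes in R/Z for every generator xi of A*Z^3."""
    return all(sum(c * g for c, g in zip(x, gen)) % 1 == 0 for gen in generators)

for a in range(8):                                  # a in Z/8Z
    for y in (Fraction(j, 17) for j in range(17)):  # a sample of rational y in R/Z
        x = (y, Fraction(-3 * a, 8), Fraction(a, 8) - y)
        assert in_perp(x)
print("all sampled points of the parametrised family lie in (A*Z^3)^perp")
```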

We can now dualize all of the previous computable operations on subgroups of {{\bf Z}^d} to produce computable operations on closed subgroups of {({\bf R}/{\bf Z})^d}. For instance:

  • To form the intersection or sum of two closed torus subgroups {(A_1 {\bf Z}^{n_1})^\perp, (A_2 {\bf Z}^{n_2})^\perp \leq ({\bf R}/{\bf Z})^d}, use the identities

    \displaystyle  (A_1 {\bf Z}^{n_1})^\perp \cap (A_2 {\bf Z}^{n_2})^\perp = (A_1 {\bf Z}^{n_1} + A_2 {\bf Z}^{n_2})^\perp

    and

    \displaystyle  (A_1 {\bf Z}^{n_1})^\perp + (A_2 {\bf Z}^{n_2})^\perp = (A_1 {\bf Z}^{n_1} \cap A_2 {\bf Z}^{n_2})^\perp

    and then calculate the sum or intersection of the lattice subgroups {A_1 {\bf Z}^{n_1}, A_2 {\bf Z}^{n_2}} by the previous methods. Similarly, the operation of direct sum of two closed torus subgroups dualises to the operation of direct sum of two lattice subgroups.
  • To determine whether one closed torus subgroup {(A_1 {\bf Z}^{n_1})^\perp \leq ({\bf R}/{\bf Z})^d} is contained in (or equal to) another closed torus subgroup {(A_2 {\bf Z}^{n_2})^\perp \leq ({\bf R}/{\bf Z})^d}, simply use the preceding methods to check whether the lattice subgroup {A_2 {\bf Z}^{n_2}} is contained in (or equal to) the lattice subgroup {A_1 {\bf Z}^{n_1}}.
  • To compute the pullback {T^{-1}( (A{\bf Z}^n)^\perp )} of a closed torus subgroup {(A{\bf Z}^n)^\perp \leq ({\bf R}/{\bf Z})^d} via a linear transformation {T \in M_{d \times d'}({\bf Z})}, use the identity

    \displaystyle T^{-1}( (A{\bf Z}^n)^\perp ) = (T^T A {\bf Z}^n)^\perp.

    Similarly, to compute the image {T( (B {\bf Z}^m)^\perp )} of a closed torus subgroup {(B {\bf Z}^m)^\perp \leq ({\bf R}/{\bf Z})^{d'}}, use the identity

    \displaystyle T( (B{\bf Z}^m)^\perp ) = ((T^T)^{-1} B {\bf Z}^m)^\perp.

Example 7 Suppose one wants to compute the sum of the closed torus subgroup {(A{\bf Z}^3)^\perp} from Example 6 with the closed torus subgroup {\{0\}^2 \times {\bf R}/{\bf Z}}. This latter group is the orthogonal complement of the lattice subgroup {{\bf Z}^2 \times \{0\}} considered in Example 4. Thus we have {(A{\bf Z}^3)^\perp + (\{0\}^2 \times {\bf R}/{\bf Z}) = (C{\bf Z}^2)^\perp} where {C} is the matrix from Example 4; discarding the zero column, we thus have

\displaystyle (A{\bf Z}^3)^\perp + (\{0\}^2 \times {\bf R}/{\bf Z}) = \{ (x_1,x_2,x_3): -8x_2 = 0 \}.

Let {G} be a finite set of order {N}; in applications {G} will be typically something like a finite abelian group, such as the cyclic group {{\bf Z}/N{\bf Z}}. Let us define a {1}-bounded function to be a function {f: G \rightarrow {\bf C}} such that {|f(n)| \leq 1} for all {n \in G}. There are many seminorms {\| \|} of interest that one places on functions {f: G \rightarrow {\bf C}} that are bounded by {1} on {1}-bounded functions, such as the Gowers uniformity seminorms {\| \|_k} for {k \geq 1} (which are genuine norms for {k \geq 2}). All seminorms in this post will be implicitly assumed to obey this property.

In additive combinatorics, a significant role is played by inverse theorems, which abstractly take the following form for certain choices of seminorm {\| \|}, some parameters {\eta, \varepsilon>0}, and some class {{\mathcal F}} of {1}-bounded functions:

Theorem 1 (Inverse theorem template) If {f} is a {1}-bounded function with {\|f\| \geq \eta}, then there exists {F \in {\mathcal F}} such that {|\langle f, F \rangle| \geq \varepsilon}, where {\langle,\rangle} denotes the usual inner product

\displaystyle  \langle f, F \rangle := {\bf E}_{n \in G} f(n) \overline{F(n)}.

Informally, one should think of {\eta} as being somewhat small but fixed independently of {N}, {\varepsilon} as being somewhat smaller but depending only on {\eta} (and on the seminorm), and {{\mathcal F}} as representing the “structured functions” for these choices of parameters. There is some flexibility in exactly how to choose the class {{\mathcal F}} of structured functions, but intuitively an inverse theorem should become more powerful when this class is small. Accordingly, let us define the {(\eta,\varepsilon)}-entropy of the seminorm {\| \|} to be the least cardinality of {{\mathcal F}} for which such an inverse theorem holds. Seminorms with low entropy are ones for which inverse theorems can be expected to be a useful tool. This concept arose in some discussions I had with Ben Green many years ago, but never appeared in print, so I decided to record some observations we had on this concept here on this blog.

Lebesgue norms {\| f\|_{L^p} := ({\bf E}_{n \in G} |f(n)|^p)^{1/p}} for {1 < p < \infty} have exponentially large entropy (and so inverse theorems are not expected to be useful in this case):

Proposition 2 ({L^p} norm has exponentially large inverse entropy) Let {1 < p < \infty} and {0 < \eta < 1}. Then the {(\eta,\eta^p/4)}-entropy of {\| \|_{L^p}} is at most {(1+8/\eta^p)^N}. Conversely, for any {\varepsilon>0}, the {(\eta,\varepsilon)}-entropy of {\| \|_{L^p}} is at least {\exp( c \varepsilon^2 N)} for some absolute constant {c>0}.

Proof: If {f} is {1}-bounded with {\|f\|_{L^p} \geq \eta}, then we have

\displaystyle  |\langle f, |f|^{p-2} f \rangle| \geq \eta^p

and hence by the triangle inequality we have

\displaystyle  |\langle f, F \rangle| \geq \eta^p/2

where {F} is either the real or imaginary part of {|f|^{p-2} f}, which takes values in {[-1,1]}. If we let {\tilde F} be {F} rounded to the nearest multiple of {\eta^p/4}, then by the triangle inequality again we have

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^p/4.

There are only at most {1+8/\eta^p} possible values for each value {\tilde F(n)} of {\tilde F}, and hence at most {(1+8/\eta^p)^N} possible choices for {\tilde F}. This gives the first claim.

Now suppose that there is an {(\eta,\varepsilon)}-inverse theorem for some {{\mathcal F}} of cardinality {M}. If we let {f} be a random sign function (so the {f(n)} are independent random variables taking values in {-1,+1} with equal probability), then there is a random {F \in {\mathcal F}} such that

\displaystyle  |\langle f, F \rangle| \geq \varepsilon

and hence by the pigeonhole principle there is a deterministic {F \in {\mathcal F}} such that

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \geq 1/M.

On the other hand, from the Hoeffding inequality one has

\displaystyle  {\bf P}( |\langle f, F \rangle| \geq \varepsilon ) \ll \exp( - c \varepsilon^2 N )

for some absolute constant {c}, hence

\displaystyle  M \geq \exp( c \varepsilon^2 N )

as claimed. \Box

Most seminorms of interest in additive combinatorics, such as the Gowers uniformity norms, are bounded by some finite {L^p} norm thanks to Hölder’s inequality, so from the above proposition and the obvious monotonicity properties of entropy, we conclude that all Gowers norms on finite abelian groups {G} have at most exponential inverse theorem entropy. But we can do significantly better than this:

  • For the {U^1} seminorm {\|f\|_{U^1(G)} := |{\bf E}_{n \in G} f(n)|}, one can simply take {{\mathcal F} = \{1\}} to consist of the constant function {1}, and the {(\eta,\eta)}-entropy is clearly equal to {1} for any {0 < \eta < 1}.
  • For the {U^2} norm, the standard Fourier-analytic inverse theorem asserts that if {\|f\|_{U^2(G)} \geq \eta} then {|\langle f, e(\xi \cdot) \rangle| \geq \eta^2} for some Fourier character {\xi \in \hat G}. Thus the {(\eta,\eta^2)}-entropy is at most {N}. (A small numerical illustration of this Fourier-analytic test is sketched after this list.)
  • For the {U^k({\bf Z}/N{\bf Z})} norm on cyclic groups for {k > 2}, the inverse theorem proved by Green, Ziegler, and myself gives an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} consisting of nilsequences {n \mapsto F(g(n) \Gamma)} for some filtered nilmanifold {G/\Gamma} of degree {k-1} in a finite collection of cardinality {O_{\eta,k}(1)}, some polynomial sequence {g: {\bf Z} \rightarrow G} (which, as subsequently observed by Candela-Sisask (see also Manners), one can choose to be {N}-periodic), and some Lipschitz function {F: G/\Gamma \rightarrow {\bf C}} of Lipschitz norm {O_{\eta,k}(1)}. By the Arzela-Ascoli theorem, the number of possible {F} (up to uniform errors of size at most {\varepsilon/2}, say) is {O_{\eta,k}(1)}. By standard arguments one can also ensure that the coefficients of the polynomial {g} are {O_{\eta,k}(1)}, and then by periodicity there are only {O(N^{O_{\eta,k}(1)})} such polynomials. As a consequence, the {(\eta,\varepsilon)}-entropy is of polynomial size {O_{\eta,k}( N^{O_{\eta,k}(1)} )} (a fact that seems to have first been implicitly observed in Lemma 6.2 of this paper of Frantzikinakis; thanks to Ben Green for this reference). One can obtain more precise dependence on {\eta,k} using the quantitative version of this inverse theorem due to Manners; back of the envelope calculations using Section 5 of that paper suggest to me that one can take {\varepsilon = \eta^{O_k(1)}} to be polynomial in {\eta} and the entropy to be of the order {O_k( N^{\exp(\exp(\eta^{-O_k(1)}))} )}, or alternatively one can reduce the entropy to {O_k( \exp(\exp(\eta^{-O_k(1)})) N^{\eta^{-O_k(1)}})} at the cost of degrading {\varepsilon} to {1/\exp\exp( O(\eta^{-O(1)}))}.
  • If one replaces the cyclic group {{\bf Z}/N{\bf Z}} by a vector space {{\bf F}_p^n} over some fixed finite field {{\bf F}_p} of prime order (so that {N=p^n}), then the inverse theorem of Ziegler and myself (available in both high and low characteristic) allows one to obtain an {(\eta,\varepsilon)}-inverse theorem for some {\varepsilon \gg_{k,\eta} 1} and {{\mathcal F}} the collection of non-classical degree {k-1} polynomial phases from {{\bf F}_p^n} to {S^1}, which one can normalize to equal {1} at the origin, and then by the classification of such polynomials one can calculate that the {(\eta,\varepsilon)} entropy is of quasipolynomial size {\exp( O_{p,k}(n^{k-1}) ) = \exp( O_{p,k}( \log^{k-1} N ) )} in {N}. By using the recent work of Gowers and Milicevic, one can make the dependence on {p,k} here more precise, but we will not perform these calculations here.
  • For the {U^3(G)} norm on an arbitrary finite abelian group, the recent inverse theorem of Jamneshan and myself gives (after some calculations) a bound of the polynomial form {O( q^{O(n^2)} N^{\exp(\eta^{-O(1)})})} on the {(\eta,\varepsilon)}-entropy for some {\varepsilon \gg \eta^{O(1)}}, which one can improve slightly to {O( q^{O(n^2)} N^{\eta^{-O(1)}})} if one degrades {\varepsilon} to {1/\exp(\eta^{-O(1)})}, where {q} is the maximal order of an element of {G}, and {n} is the rank (the number of elements needed to generate {G}). This bound is polynomial in {N} in the cyclic group case and quasipolynomial in general.
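
To illustrate the Fourier-analytic {U^2} item in the list above numerically, here is a short numpy sketch checking, on an artificial noisy linear phase, that the largest Fourier coefficient of a {1}-bounded function {f} on {{\bf Z}/N{\bf Z}} is at least {\|f\|_{U^2}^2} (which is the content of the {(\eta,\eta^2)}-inverse theorem); the test function and parameters are of course just illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512
n = np.arange(N)
# A 1-bounded test function on Z/NZ: half a linear phase e(7n/N), half random phases.
f = 0.5 * np.exp(2j * np.pi * 7 * n / N) + 0.5 * np.exp(2j * np.pi * rng.random(N))

fhat = np.fft.fft(f) / N                     # fhat[xi] = E_n f(n) e(-xi n / N)
U2 = np.sum(np.abs(fhat) ** 4) ** 0.25       # ||f||_{U^2}^4 = sum_xi |fhat[xi]|^4
best = np.abs(fhat).max()                    # the largest correlation with a character

print(float(U2), float(best), int(np.abs(fhat).argmax()))   # best frequency should be 7
assert best >= U2 ** 2                       # the (eta, eta^2)-inverse theorem bound
```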

For general finite abelian groups {G}, we do not yet have an inverse theorem of comparable power to the ones mentioned above that give polynomial or quasipolynomial upper bounds on the entropy. However, there is a cheap argument that at least gives some subexponential bounds:

Proposition 3 (Cheap subexponential bound) Let {k \geq 2} and {0 < \eta < 1/2}, and suppose that {G} is a finite abelian group of order {N \geq \eta^{-C_k}} for some sufficiently large {C_k}. Then the {(\eta,c_k \eta^{O_k(1)})}-entropy of {\| \|_{U^k(G)}} is at most {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ))}.

Proof: (Sketch) We use a standard random sampling argument, of the type used for instance by Croot-Sisask or Briet-Gopi (thanks to Ben Green for this latter reference). We can assume that {N \geq \eta^{-C_k}} for some sufficiently large {C_k>0}, since otherwise the claim follows from Proposition 2.

Let {A} be a random subset of {G} with the events {n \in A} being iid with probability {0 < p < 1} to be chosen later, conditioned on the event {|A| \leq 2pN}. Let {f} be a {1}-bounded function. By a standard second moment calculation, we see that with probability at least {1/2}, we have

\displaystyle  \|f\|_{U^k(G)}^{2^k} = {\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)

\displaystyle + O((\frac{1}{N^{k+1} p^{2^k-1}})^{1/2}).

Thus, by the triangle inequality, if we choose {p := C \eta^{-2^{k+1}/(2^k-1)} / N^{\frac{k+1}{2^k-1}}} for some sufficiently large {C = C_k > 0}, then for any {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta/2}, one has with probability at least {1/2} that

\displaystyle  |{\bf E}_{n, h_1,\dots,h_k \in G} f(n) \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|} \frac{1}{p} 1_A f(n + \omega \cdot h)|

\displaystyle \geq \eta^{2^k}/2^{2^k+1}.

We can write the left-hand side as {|\langle f, F \rangle|} where {F} is the randomly sampled dual function

\displaystyle  F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} 1_A f(n + \omega \cdot h).

Unfortunately, {F} is not {1}-bounded in general, but we have

\displaystyle  \|F\|_{L^2(G)}^2 \leq {\bf E}_{n, h_1,\dots,h_k ,h'_1,\dots,h'_k \in G}

\displaystyle  \prod_{\omega \in \{0,1\}^k \backslash \{0\}} \frac{1}{p} 1_A(n + \omega \cdot h) \frac{1}{p} 1_A(n + \omega \cdot h')

and the right-hand side can be shown to be {1+o(1)} on the average, so we can condition on the event that the right-hand side is {O(1)} without significant loss in failure probability.

If we then let {\tilde f_A} be {1_A f} rounded to the nearest Gaussian integer multiple of {\eta^{2^k}/2^{2^{10k}}} in the unit disk, one has from the triangle inequality that

\displaystyle  |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{2^k+2}

where {\tilde F} is the discretised randomly sampled dual function

\displaystyle  \tilde F(n) := {\bf E}_{h_1,\dots,h_k \in G} \prod_{\omega \in \{0,1\}^k \backslash \{0\}} {\mathcal C}^{|\omega|+1} \frac{1}{p} \tilde f_A(n + \omega \cdot h).

For any given {A}, there are at most {2pN} places {n} where {\tilde f_A(n)} can be non-zero, and in those places there are {O_k( \eta^{-2^{k}})} possible values for {\tilde f_A(n)}. Thus, if we let {{\mathcal F}_A} be the collection of all possible {\tilde f_A} associated to a given {A}, the cardinality of this set is {O( \exp( \eta^{-O_k(1)} N^{1 - \frac{k+1}{2^k-1}} ) )}, and for any {f} with {\|f\|_{U^k(G)} \geq \eta/2}, we have

\displaystyle  \sup_{\tilde F \in {\mathcal F}_A} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+2}

with probability at least {1/2}.

Now we remove the failure probability by independent resampling. By rounding to the nearest Gaussian integer multiple of {c_k \eta^{2^k}} in the unit disk for a sufficiently small {c_k>0}, one can find a family {{\mathcal G}} of cardinality {O( \eta^{-O_k(N)})} consisting of {1}-bounded functions {\tilde f} of {U^k(G)} norm at least {\eta/2} such that for every {1}-bounded {f} with {\|f\|_{U^k(G)} \geq \eta} there exists {\tilde f \in {\mathcal G}} such that

\displaystyle  \|f-\tilde f\|_{L^\infty(G)} \leq \eta^{2^k}/2^{k+3}.

Now, let {A_1,\dots,A_M} be independent samples of {A} for some {M} to be chosen later. By the preceding discussion, we see that with probability at least {1 - 2^{-M}}, we have

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+2}

for any given {\tilde f \in {\mathcal G}}, so by the union bound, if we choose {M = \lfloor C N \log \frac{1}{\eta} \rfloor} for a large enough {C = C_k}, we can find {A_1,\dots,A_M} such that

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle \tilde f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+2}

for all {\tilde f \in {\mathcal G}}, and hence by the triangle inequality

\displaystyle  \sup_{\tilde F \in \bigcup_{j=1}^M {\mathcal F}_{A_j}} |\langle f, \tilde F \rangle| \geq \eta^{2^k}/2^{k+3}.

Taking {{\mathcal F}} to be the union of the {{\mathcal F}_{A_j}} (applying some truncation and rescaling to these {L^2}-bounded functions to make them {L^\infty}-bounded, and then {1}-bounded), we obtain the claim. \Box
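To give a rough feel for the random sampling step in the above proof, here is a small numerical sketch for the case {k=2} on {{\bf Z}/N{\bf Z}} (a toy illustration of my own, with an unrealistically large sampling density {p}; the actual argument takes {p} polynomially small in {N}):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
p = 0.5   # sampling density; much larger than in the proposition, purely for illustration

n = np.arange(N)
f = np.exp(2j * np.pi * 5 * n / N)          # a 1-bounded function with large U^2 norm
A = rng.random(N) < p                       # random subset A, each n kept independently with prob p
w = np.where(A, 1.0 / p, 0.0)               # the weight (1/p) 1_A

exact = 0j
sampled = 0j
for h1 in range(N):
    for h2 in range(N):
        f1 = np.conj(np.roll(f, -h1))       # f(n+h1), conjugated (|omega| = 1)
        f2 = np.conj(np.roll(f, -h2))       # f(n+h2), conjugated (|omega| = 1)
        f12 = np.roll(f, -(h1 + h2))        # f(n+h1+h2) (|omega| = 2)
        exact += np.mean(f * f1 * f2 * f12)
        sampled += np.mean(f * f1 * np.roll(w, -h1)
                             * f2 * np.roll(w, -h2)
                             * f12 * np.roll(w, -(h1 + h2)))
exact /= N * N
sampled /= N * N
print("||f||_{U^2}^4 exact  :", exact.real)
print("||f||_{U^2}^4 sampled:", sampled.real)   # close to the exact value with high probability
```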

One way to obtain lower bounds on the inverse theorem entropy is to produce a collection of almost orthogonal functions with large norm. More precisely:

Proposition 4 Let {\| \|} be a seminorm, let {0 < \varepsilon \leq \eta < 1}, and suppose that one has a collection {f_1,\dots,f_M} of {1}-bounded functions such that {\|f_i\| \geq \eta} for all {i=1,\dots,M}, and such that for each {i}, one has {|\langle f_i, f_j \rangle| \leq \varepsilon^2/2} for all but at most {L} choices of {j \in \{1,\dots,M\}}. Then the {(\eta, \varepsilon)}-entropy of {\| \|} is at least {\varepsilon^2 M / 2L}.

Proof: Suppose we have an {(\eta,\varepsilon)}-inverse theorem with some family {{\mathcal F}}. Then for each {i=1,\dots,M} there is {F_i \in {\mathcal F}} such that {|\langle f_i, F_i \rangle| \geq \varepsilon}. By the pigeonhole principle, there is thus {F \in {\mathcal F}} such that {|\langle f_i, F \rangle| \geq \varepsilon} for all {i} in a subset {I} of {\{1,\dots,M\}} of cardinality at least {M/|{\mathcal F}|}:

\displaystyle  |I| \geq M / |{\mathcal F}|.

We can sum this to obtain

\displaystyle  |\sum_{i \in I} c_i \langle f_i, F \rangle| \geq |I| \varepsilon

for some complex numbers {c_i} of unit magnitude. By Cauchy-Schwarz, this implies

\displaystyle  \| \sum_{i \in I} c_i f_i \|_{L^2(G)}^2 \geq |I|^2 \varepsilon^2

and hence by the triangle inequality

\displaystyle  \sum_{i,j \in I} |\langle f_i, f_j \rangle| \geq |I|^2 \varepsilon^2.

On the other hand, by hypothesis we can bound the left-hand side by {|I| (L + \varepsilon^2 |I|/2)}. Rearranging, we conclude that

\displaystyle  |I| \leq 2 L / \varepsilon^2

and hence

\displaystyle  |{\mathcal F}| \geq \varepsilon^2 M / 2L

giving the claim. \Box

Thus for instance:

  • For the {U^2(G)} norm, one can take {f_1,\dots,f_M} to be the family of linear exponential phases {n \mapsto e(\xi \cdot n)} with {M = N} and {L=1}, and obtain a linear lower bound of {\varepsilon^2 N/2} for the {(\eta,\varepsilon)}-entropy, thus matching the upper bound of {N} up to constants when {\varepsilon} is fixed. (A quick numerical check of this case is sketched after this list.)
  • For the {U^k({\bf Z}/N{\bf Z})} norm, a similar calculation using polynomial phases of degree {k-1}, combined with the Weyl sum estimates, gives a lower bound of {\gg_{k,\varepsilon} N^{k-1}} for the {(\eta,\varepsilon)}-entropy for any fixed {\eta,\varepsilon}; by considering nilsequences as well, together with nilsequence equidistribution theory, one can replace the exponent {k-1} here by some quantity that goes to infinity as {\eta \rightarrow 0}, though I have not attempted to calculate the exact rate.
  • For the {U^k({\bf F}_p^n)} norm, another similar calculation using polynomial phases of degree {k-1} should give a lower bound of {\gg_{p,k,\eta,\varepsilon} \exp( c_{p,k,\eta,\varepsilon} n^{k-1} )} for the {(\eta,\varepsilon)}-entropy, though I have not fully performed the calculation.
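The first item above is easy to verify numerically (a quick check of my own): the {N} linear phases are pairwise orthogonal, so the hypotheses of Proposition 4 hold with {M=N}, {L=1} and any {0 < \varepsilon \leq \eta < 1}.

```python
import numpy as np

N = 128
n = np.arange(N)
# the N linear phases n -> e(xi n / N); each has U^2(Z/NZ) norm exactly 1
phases = np.exp(2j * np.pi * np.outer(np.arange(N), n) / N)

# Gram matrix of inner products <f_i, f_j> = E_n f_i(n) conj(f_j(n))
gram = phases @ phases.conj().T / N
off_diag = np.abs(gram - np.eye(N))
print("largest off-diagonal |<f_i, f_j>| :", off_diag.max())   # 0 up to rounding error

# so Proposition 4 applies with M = N and L = 1, giving entropy >= eps^2 N / 2
eps = 0.1
print("entropy lower bound eps^2 N / 2   =", eps ** 2 * N / 2)
```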

We close with one final example. Suppose {G} is a product {G = A \times B} of two sets {A,B} of cardinality {\asymp \sqrt{N}}, and we consider the Gowers box norm

\displaystyle  \|f\|_{\Box^2(G)}^4 := {\bf E}_{a,a' \in A; b,b' \in B} f(a,b) \overline{f}(a,b') \overline{f}(a',b) f(a',b').

One possible choice of class {{\mathcal F}} here are the indicators {1_{U \times V}} of “rectangles” {U \times V} with {U \subset A}, {V \subset B} (cf. this previous blog post on cut norms). By standard calculations, one can use this class to show that the {(\eta, \eta^4/10)}-entropy of {\| \|_{\Box^2(G)}} is {O( \exp( O(\sqrt{N}) ) )}, and a variant of the proof of the second part of Proposition 2 shows that this is the correct order of growth in {N}. In contrast, a modification of Proposition 3 only gives an upper bound of the form {O( \exp( O( N^{2/3} ) ) )} (the bottleneck is ensuring that the randomly sampled dual functions stay bounded in {L^2}), which shows that while this cheap bound is not optimal, it can still broadly give the correct “type” of bound (specifically, intermediate growth between polynomial and exponential).
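As a quick numerical illustration of this box norm example (a sketch of my own, not from the text above): viewing {f} as an {|A| \times |B|} matrix {M}, the definition unfolds to {\|f\|_{\Box^2(G)}^4 = \|M M^*\|_F^2 / (|A|^2 |B|^2)}, and rectangle indicators have an easily computed box norm.

```python
import numpy as np

rng = np.random.default_rng(2)
nA, nB = 400, 500        # |A| and |B|, so N = |A| |B| = 200000

def box_norm(M):
    # ||f||_{Box^2}^4 = E_{a,a',b,b'} f(a,b) conj f(a,b') conj f(a',b) f(a',b')
    #                 = ||M M^*||_F^2 / (|A|^2 |B|^2)
    a, b = M.shape
    G = M @ M.conj().T
    return (np.linalg.norm(G, 'fro') ** 2 / (a * a * b * b)) ** 0.25

# the indicator of a rectangle U x V has box norm (|U|/|A|)^{1/2} (|V|/|B|)^{1/2} ...
f_rect = np.zeros((nA, nB))
f_rect[:200, :250] = 1.0
print("box norm of 1_{U x V}    :", box_norm(f_rect))        # = 0.5 here

# ... while random signs have box norm about (1/|A| + 1/|B|)^{1/4}: small, but decaying slowly
f_rand = rng.choice([-1.0, 1.0], size=(nA, nB))
print("box norm of random signs :", box_norm(f_rand))
```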

In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable {x}, for instance, is not allowed to equal {+3} and {-3} simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.

However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

\displaystyle  \hbox{If } x,a,b,c \in {\bf R} \hbox{ with } a \neq 0, \hbox{ then }

\displaystyle  ax^2+bx+c=0 \hbox{ if and only if } x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \ \ \ \ \ (1)

Strictly speaking, the expression {x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}} is not well-formed according to the grammar of first-order logic; one should instead use something like

\displaystyle x = \frac{-b - \sqrt{b^2-4ac}}{2a} \hbox{ or } x = \frac{-b + \sqrt{b^2-4ac}}{2a}

or

\displaystyle x \in \left\{ \frac{-b - \sqrt{b^2-4ac}}{2a}, \frac{-b + \sqrt{b^2-4ac}}{2a} \right\}

or

\displaystyle x = \frac{-b + \epsilon \sqrt{b^2-4ac}}{2a} \hbox{ for some } \epsilon \in \{-1,+1\}

in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

\displaystyle  \hbox{The sum of two odd numbers is an even number} \ \ \ \ \ (2)

is also not a first-order sentence; one would instead have to write something like

\displaystyle  \hbox{For all odd numbers } x, y, \hbox{ the number } x+y \hbox{ is even} \ \ \ \ \ (3)

or

\displaystyle  \hbox{For all odd numbers } x,y \hbox{ there exists an even number } z \ \ \ \ \ (4)

\displaystyle  \hbox{ such that } x+y=z

instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as {x,y,z} which are used once and then discarded.

Another example of partially specified notation is the innocuous {\ldots} notation. For instance, the assertion

\displaystyle \pi=3.14\ldots,

when written formally using first-order logic, would become something like

\displaystyle \pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \sum_{n=3}^\infty \frac{a_n}{10^n} \hbox{ for some sequence } (a_n)_{n=3}^\infty

\displaystyle  \hbox{ with } a_n \in \{0,1,2,3,4,5,6,7,8,9\} \hbox{ for all } n,

which is not exactly an elegant reformulation. Similarly with statements such as

\displaystyle \tan x = x + \frac{x^3}{3} + \ldots \hbox{ for } |x| < \pi/2

or

\displaystyle \tan x = x + \frac{x^3}{3} + O(|x|^5) \hbox{ for } |x| < \pi/2.

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

\displaystyle  \hbox{odd} + \hbox{odd} = \hbox{even}.

When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation {O()} or the little-o notation {o()}. We will explain how to do this at the end of this post.
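As an aside, one crude way to emulate partially specified expressions in ordinary code (a toy model of my own, and far less expressive than the formalism developed below the fold) is to let an expression denote the set of values it is allowed to take, and to lift arithmetic operations to such sets; an assertion such as “odd + odd = even” then becomes a containment between two such sets.

```python
from itertools import product

class Multi:
    """A partially specified value, modelled crudely as the set of values it may take."""
    def __init__(self, values):
        self.values = frozenset(values)
    def __add__(self, other):
        return Multi(a + b for a, b in product(self.values, other.values))
    def __le__(self, other):
        # one reading of "X = Y" for partially specified X, Y: every value of X is a value of Y
        return self.values <= other.values
    def __repr__(self):
        return "{" + ", ".join(map(str, sorted(self.values))) + "}"

# the "plus or minus" in the quadratic formula: the expression denotes a two-element set
a, b, c = 1.0, -3.0, 2.0
disc = (b * b - 4 * a * c) ** 0.5
roots = Multi({(-b + s) / (2 * a) for s in (disc, -disc)})
print(roots)                      # {1.0, 2.0}

# "odd + odd = even", tested on a small window of integers
odd = Multi(n for n in range(-20, 21) if n % 2 == 1)
even = Multi(n for n in range(-40, 41) if n % 2 == 0)
print(odd + odd <= even)          # True
```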


Kaisa Matomäki, Xuancheng Shao, Joni Teräväinen, and myself have just uploaded to the arXiv our preprint “Higher uniformity of arithmetic functions in short intervals I. All intervals“. This paper investigates the higher order (Gowers) uniformity of standard arithmetic functions in analytic number theory (and specifically, the Möbius function {\mu}, the von Mangoldt function {\Lambda}, and the generalised divisor functions {d_k}) in short intervals {(X,X+H]}, where {X} is large and {H} lies in the range {X^{\theta+\varepsilon} \leq H \leq X^{1-\varepsilon}} for a fixed constant {0 < \theta < 1} (that one would like to be as small as possible). If we let {f} denote one of the functions {\mu, \Lambda, d_k}, then there is extensive literature on the estimation of short sums

\displaystyle  \sum_{X < n \leq X+H} f(n)

and some literature also on the estimation of exponential sums such as

\displaystyle  \sum_{X < n \leq X+H} f(n) e(-\alpha n)

for a real frequency {\alpha}, where {e(\theta) := e^{2\pi i \theta}}. For applications in the additive combinatorics of such functions {f}, it is also necessary to consider more general correlations, such as polynomial correlations

\displaystyle  \sum_{X < n \leq X+H} f(n) e(-P(n))

where {P: {\bf Z} \rightarrow {\bf R}} is a polynomial of some fixed degree, or more generally

\displaystyle  \sum_{X < n \leq X+H} f(n) \overline{F}(g(n) \Gamma)

where {G/\Gamma} is a nilmanifold of fixed degree and dimension (and with some control on structure constants), {g: {\bf Z} \rightarrow G} is a polynomial map, and {F: G/\Gamma \rightarrow {\bf C}} is a Lipschitz function (with some bound on the Lipschitz constant). Indeed, thanks to the inverse theorem for the Gowers uniformity norm, such correlations let one control the Gowers uniformity norm of {f} (possibly after subtracting off some renormalising factor) on such short intervals {(X,X+H]}, which can in turn be used to control other multilinear correlations involving such functions.
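For concreteness, here is a small numerical sketch (an illustration of my own, with toy parameters far smaller than the ranges considered in the paper) of the simplest instance of such a correlation, namely {f = \mu} against a linear phase on a short interval:

```python
import numpy as np

def mobius_sieve(limit):
    """mu(n) for 0 <= n <= limit, by a simple sieve."""
    mu = np.ones(limit + 1, dtype=np.int64)
    is_prime = np.ones(limit + 1, dtype=bool)
    is_prime[:2] = False
    for p in range(2, limit + 1):
        if is_prime[p]:
            is_prime[2 * p::p] = False
            mu[p::p] *= -1           # flip sign for each prime factor p
            mu[p * p::p * p] = 0     # kill multiples of p^2
    return mu

# toy parameters: X = 10^6 and H comparable to X^{0.7} (the paper reaches exponents as low as 5/8)
X = 10 ** 6
H = int(X ** 0.7)
mu = mobius_sieve(X + H)

n = np.arange(X + 1, X + H + 1)
alpha = np.sqrt(2)                                    # a typical "minor arc" frequency
S = np.sum(mu[n] * np.exp(-2j * np.pi * alpha * n))
print("H =", H)
print("|sum mu(n) e(-alpha n)| =", abs(S))            # far smaller than the trivial bound H
print("sqrt(H)                 =", H ** 0.5)          # the square-root cancellation heuristic
```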

Traditionally, asymptotics for such sums are expressed in terms of a “main term” of some arithmetic nature, plus an error term that is estimated in magnitude. For instance, a sum such as {\sum_{X < n \leq X+H} \Lambda(n) e(-\alpha n)} would be approximated in terms of a main term that vanishes (or is negligible) if {\alpha} is “minor arc”, but which would be expressible in terms of something like a Ramanujan sum if {\alpha} is “major arc”, together with an error term. We found it convenient to cancel off such main terms by subtracting an approximant {f^\sharp} from each of the arithmetic functions {f} and then getting upper bounds on remainder correlations such as

\displaystyle  |\sum_{X < n \leq X+H} (f(n)-f^\sharp(n)) \overline{F}(g(n) \Gamma)| \ \ \ \ \ (1)

(actually for technical reasons we also allow the {n} variable to be restricted further to a subprogression of {(X,X+H]}, but let us ignore this minor extension for this discussion). There is some flexibility in how to choose these approximants, but we eventually found it convenient to use the following choices.

  • For the Möbius function {\mu}, we simply set {\mu^\sharp = 0}, as per the Möbius pseudorandomness conjecture. (One could choose a more sophisticated approximant in the presence of a Siegel zero, as I did with Joni in this recent paper, but we do not do so here.)
  • For the von Mangoldt function {\Lambda}, we eventually went with the Cramér-Granville approximant {\Lambda^\sharp(n) = \frac{W}{\phi(W)} 1_{(n,W)=1}}, where {W = \prod_{p < R} p} and {R = \exp(\log^{1/10} X)}. (A toy numerical check of this approximant is sketched after this list.)
  • For the divisor functions {d_k}, we used a somewhat complicated-looking approximant {d_k^\sharp(n) = \sum_{m \leq X^{\frac{k-1}{5k}}} P_m(\log n)} for some explicit polynomials {P_m}, chosen so that {d_k^\sharp} and {d_k} have almost exactly the same sums along arithmetic progressions (see the paper for details).
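As a toy sanity check on the von Mangoldt approximant (a computation of my own, using a much smaller cutoff {R} than the one in the paper), one can verify numerically that {\Lambda} and {\Lambda^\sharp} have roughly the same mean value on a short interval:

```python
from math import gcd, log, prod

# toy cutoff R; the paper takes R = exp(log^{1/10} X), also a fairly slowly growing quantity
R = 30
primes = [p for p in range(2, R) if all(p % q for q in range(2, p))]
W = prod(primes)                     # W = product of the primes p < R
phiW = prod(p - 1 for p in primes)   # Euler phi of W

def lambda_sharp(n):
    # Cramer-Granville approximant: (W / phi(W)) 1_{(n, W) = 1}
    return W / phiW if gcd(n, W) == 1 else 0.0

def von_mangoldt(n):
    # Lambda(n) = log p if n = p^j is a prime power, and 0 otherwise
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return log(n) if n > 1 else 0.0

X, H = 10 ** 5, 10 ** 3
interval = range(X + 1, X + H + 1)
print(sum(von_mangoldt(n) for n in interval) / H)    # roughly 1, by the prime number theorem
print(sum(lambda_sharp(n) for n in interval) / H)    # also roughly 1: W/phi(W) times the coprime density
```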

The objective is then to obtain bounds on sums such as (1) that improve upon the “trivial bound” that one can get with the triangle inequality and standard number theory bounds such as the Brun-Titchmarsh inequality. For {\mu} and {\Lambda}, the Siegel-Walfisz theorem suggests that it is reasonable to expect error terms that have “strongly logarithmic savings” in the sense that they gain a factor of {O_A(\log^{-A} X)} over the trivial bound for any {A>0}; for {d_k}, the Dirichlet hyperbola method suggests instead that one has “power savings” in that one should gain a factor of {X^{-c_k}} over the trivial bound for some {c_k>0}. In the case of the Möbius function {\mu}, there is an additional trick (introduced by Matomäki and Teräväinen) that allows one to lower the exponent {\theta} somewhat at the cost of only obtaining “weakly logarithmic savings” of shape {\log^{-c} X} for some small {c>0}.

Our main estimates on sums of the form (1) work in the following ranges:

  • For {\theta=5/8}, one can obtain strongly logarithmic savings on (1) for {f=\mu,\Lambda}, and power savings for {f=d_k}.
  • For {\theta=3/5}, one can obtain weakly logarithmic savings for {f = \mu, d_k}.
  • For {\theta=5/9}, one can obtain power savings for {f=d_3}.
  • For {\theta=1/3}, one can obtain power savings for {f=d_2}.

Conjecturally, one should be able to obtain power savings in all cases, and lower {\theta} down to zero, but the ranges of exponents and savings given here seem to be the limit of current methods unless one assumes additional hypotheses, such as GRH. The {\theta=5/8} result for correlation against Fourier phases {e(\alpha n)} was established previously by Zhan, and the {\theta=3/5} result for such phases and {f=\mu} was established previously by Matomäki and Teräväinen.

By combining these results with tools from additive combinatorics, one can obtain a number of applications:

  • Direct insertion of our bounds in the recent work of Kanigowski, Lemanczyk, and Radziwill on the prime number theorem on dynamical systems that are analytic skew products gives some improvements in the exponents there.
  • We can obtain a “short interval” version of a multiple ergodic theorem along primes established by Frantzikinakis-Host-Kra and Wooley-Ziegler, in which we average over intervals of the form {(X,X+H]} rather than {[1,X]}.
  • We can obtain a “short interval” version of the “linear equations in primes” asymptotics obtained by Ben Green, Tamar Ziegler, and myself in this sequence of papers, where the variables in these equations lie in short intervals {(X,X+H]} rather than long intervals such as {[1,X]}.

We now briefly discuss some of the ingredients of proof of our main results. The first step is standard, using combinatorial decompositions (based on the Heath-Brown identity and (for the {\theta=3/5} result) the Ramaré identity) to decompose {\mu(n), \Lambda(n), d_k(n)} into more tractable sums of the following types:

  • Type {I} sums, which are basically of the form {\sum_{m \leq A:m|n} \alpha(m)} for some weights {\alpha(m)} of controlled size and some cutoff {A} that is not too large;
  • Type {II} sums, which are basically of the form {\sum_{A_- \leq m \leq A_+:m|n} \alpha(m)\beta(n/m)} for some weights {\alpha(m)}, {\beta(n)} of controlled size and some cutoffs {A_-, A_+} that are not too close to {1} or to {X};
  • Type {I_2} sums, which are basically of the form {\sum_{m \leq A:m|n} \alpha(m) d_2(n/m)} for some weights {\alpha(m)} of controlled size and some cutoff {A} that is not too large.

The precise ranges of the cutoffs {A, A_-, A_+} depend on the choice of {\theta}; our methods fail once these cutoffs pass a certain threshold, and this is the reason for the exponents {\theta} being what they are in our main results.

The Type {I} sums involving nilsequences can be treated by methods similar to those in this previous paper of Ben Green and myself; the main innovations are in the treatment of the Type {II} and Type {I_2} sums.

For the Type {II} sums, one can split into the “abelian” case in which (after some Fourier decomposition) the nilsequence {F(g(n)\Gamma)} is basically of the form {e(P(n))}, and the “non-abelian” case in which {G} is non-abelian and {F} exhibits non-trivial oscillation in a central direction. In the abelian case we can adapt arguments of Matomaki and Shao, which uses Cauchy-Schwarz and the equidistribution properties of polynomials to obtain good bounds unless {e(P(n))} is “major arc” in the sense that it resembles (or “pretends to be”) {\chi(n) n^{it}} for some Dirichlet character {\chi} and some frequency {t}, but in this case one can use classical multiplicative methods to control the correlation. It turns out that the non-abelian case can be treated similarly. After applying Cauchy-Schwarz, one ends up analyzing the equidistribution of the four-variable polynomial sequence

\displaystyle  (n,m,n',m') \mapsto (g(nm)\Gamma, g(n'm)\Gamma, g(nm') \Gamma, g(n'm')\Gamma)

as {n,m,n',m'} range in various dyadic intervals. Using the known multidimensional equidistribution theory of polynomial maps in nilmanifolds, one can eventually show in the non-abelian case that this sequence either has enough equidistribution to give cancellation, or else the nilsequence involved can be replaced with one from a lower dimensional nilmanifold, in which case one can apply an induction hypothesis.

For the type {I_2} sum, a model sum to study is

\displaystyle  \sum_{X < n \leq X+H} d_2(n) e(\alpha n)

which one can expand as

\displaystyle  \sum_{n,m: X < nm \leq X+H} e(\alpha nm).

We experimented with a number of ways to treat this type of sum (including automorphic form methods, or methods based on the Voronoi formula or van der Corput’s inequality), but somewhat to our surprise, the most efficient approach was an elementary one, in which one uses the Dirichlet approximation theorem to decompose the hyperbolic region {\{ (n,m) \in {\bf N}^2: X < nm \leq X+H \}} into a number of arithmetic progressions, and then uses equidistribution theory to establish cancellation of sequences such as {e(\alpha nm)} on the majority of these progressions. As it turns out, this strategy works well in the regime {H > X^{1/3+\varepsilon}} unless the nilsequence involved is “major arc”, but the latter case is treatable by existing methods as discussed previously; this is why the {\theta} exponent for our {d_2} result can be as low as {1/3}.
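The expansion of the model sum into a sum over the hyperbolic region is easy to verify numerically; here is a quick sanity check of my own, with small parameters:

```python
import numpy as np

X, H = 10 ** 4, 10 ** 3
alpha = np.sqrt(3)

def d2(n):
    # number of divisors of n
    count, m = 0, 1
    while m * m <= n:
        if n % m == 0:
            count += 1 if m * m == n else 2
        m += 1
    return count

# left-hand side: sum over X < n <= X + H of d_2(n) e(alpha n)
lhs = sum(d2(n) * np.exp(2j * np.pi * alpha * n) for n in range(X + 1, X + H + 1))

# right-hand side: sum of e(alpha n m) over the hyperbolic region X < n m <= X + H
rhs = 0j
for n in range(1, X + H + 1):
    for m in range(X // n + 1, (X + H) // n + 1):
        if X < n * m <= X + H:
            rhs += np.exp(2j * np.pi * alpha * n * m)

print(abs(lhs - rhs))     # zero, up to floating point rounding
```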

In a sequel to this paper (currently in preparation), we will obtain analogous results for almost all intervals {(x,x+H]} with {x} in the range {[X,2X]}, in which we will be able to lower {\theta} all the way to {0}.

Just a brief update to the previous post. Gerhard Paseman and I have now set up a web site for the Short Communication Satellite (SCS) for the virtual International Congress of Mathematicians (ICM), which will be an experimental independent online satellite event in which short communications on topics relevant to one or two of the sections of the ICM can be submitted, reviewed by peers, and (if appropriate for the SCS event) displayed in a virtual “poster room” during the Congress on July 6-14 (which, by the way, has recently released its schedule and list of speakers). Our plan is to open the registration for this event on April 5, and start taking submissions on April 20; we are also currently accepting any expressions of interest in helping out with the event, for instance by serving as a reviewer. For more information about the event, please see the overview page, the guidelines page, and the FAQ page of the web site. As viewers will see, the web site is still somewhat under construction, but will be updated as we move closer to the actual Congress.

The comments section of this post would be a suitable place to ask further questions about this event, or give any additional feedback.

UPDATE: for readers who have difficulty accessing the links above, here are backup copies of the overview page and guidelines page.


Jan Grebik, Rachel Greenfeld, Vaclav Rozhon and I have just uploaded to the arXiv our preprint “Measurable tilings by abelian group actions“. This paper is related to an earlier paper of Rachel Greenfeld and myself concerning tilings of lattices {{\bf Z}^d}, but now we consider the more general situation of tiling a measure space {X} by a tile {A \subset X} shifted by a finite subset {F} of shifts of an abelian group {G = (G,+)} that acts in a measure-preserving (or at least quasi-measure-preserving) fashion on {X}. For instance, {X} could be a torus {{\bf T}^d = {\bf R}^d/{\bf Z}^d}, {A} could be a positive measure subset of that torus, and {G} could be the group {{\bf R}^d}, acting on {X} by translation.


If {F} is a finite subset of {G} with the property that the translates {f+A}, {f \in F} of {A \subset X} partition {X} up to null sets, we write {F \oplus A =_{a.e.} X}, and refer to this as a measurable tiling of {X} by {A} (with tiling set {F}). For instance, if {X} is the torus {{\bf T}^2}, we can create a measurable tiling with {A = [0,1/2]^2 \hbox{ mod } {\bf Z}^2} and {F = \{0,1/2\}^2}. Our main results are the following:

  • By modifying arguments from previous papers (including the one with Greenfeld mentioned above), we can establish the following “dilation lemma”: a measurable tiling {F \oplus A =_{a.e.} X} automatically implies further measurable tilings {rF \oplus A =_{a.e.} X}, whenever {r} is an integer coprime to all primes up to the cardinality {\# F} of {F}. (A toy numerical illustration of this lemma is sketched after this list.)
  • By averaging the above dilation lemma, we can also establish a “structure theorem” that decomposes the indicator function {1_A} of {A} into components, each of which are invariant with respect to a certain shift in {G}. We can establish this theorem in the case of measure-preserving actions on probability spaces via the ergodic theorem, but one can also generalize to other settings by using the device of “measurable medial means” (which relates to the concept of a universally measurable set).
  • By applying this structure theorem, we can show that all measurable tilings {F \oplus A = {\bf T}^1} of the one-dimensional torus {{\bf T}^1} are rational, in the sense that {F} lies in a coset of the rationals {{\bf Q} = {\bf Q}^1}. This answers a recent conjecture of Conley, Grebik, and Pikhurko; we also give an alternate proof of this conjecture using some previous results of Lagarias and Wang.
  • For tilings {F \oplus A = {\bf T}^d} of higher-dimensional tori, the tiling need not be rational. However, we can show that we can “slide” the tiling to be rational by giving each translate {f + A} of {A} a “velocity” {v_f \in {\bf R}^d}, and for every time {t}, the translates {f + tv_f + A} still form a partition of {{\bf T}^d} modulo null sets, and at time {t=1} the tiling becomes rational. In particular, if a set {A} can tile a torus in an irrational fashion, then it must also be able to tile the torus in a rational fashion.
  • In the two-dimensional case {d=2} one can arrange matters so that all the velocities {v_f} are parallel. If we furthermore assume that the tile {A} is connected, we can also show that the union of all the translates {f+A} with a common velocity {v_f = v} form a {v}-invariant subset of the torus.
  • Finally, we show that tilings {F \oplus A = {\bf Z}^d \times G} of a finitely generated discrete group {{\bf Z}^d \times G}, with {G} a finite group, cannot be constructed in a “local” fashion (we formalize this probabilistically using the notion of a “factor of iid process”) unless the tile {F} is contained in a single coset of {\{0\} \times G}. (Nonabelian local tilings, for instance of the sphere by rotations, are of interest due to connections with the Banach-Tarski paradox; see the aforementioned paper of Conley, Grebik, and Pikhurko. Unfortunately, our methods seem to break down completely in the nonabelian case.)
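As a toy illustration of the dilation lemma in the first item above (an example of my own, checked on a discretisation of the circle rather than proved): take {X = {\bf T}^1}, {A = [0,1/4) \cup [1/2,3/4)} and {F = \{0, 1/4\}}, so that {F \oplus A = {\bf T}^1} and {\# F = 2}; the lemma then predicts that {rF \oplus A = {\bf T}^1} for every odd {r}, for instance {r=3}.

```python
import numpy as np

# discretise the circle T^1 = R/Z into Q equal cells
Q = 1200
x = (np.arange(Q) + 0.5) / Q

# the tile A = [0,1/4) u [1/2,3/4), and the tiling set F = {0, 1/4}
A = ((x < 0.25) | ((x >= 0.5) & (x < 0.75))).astype(int)

def covering_multiplicity(shifts):
    # number of translates f + A (f in shifts) containing each point of the discretised circle
    total = np.zeros(Q, dtype=int)
    for f in shifts:
        total += np.roll(A, int(round(f * Q)))
    return total

F = [0.0, 0.25]
print(set(covering_multiplicity(F)))     # {1}: F + A partitions the circle
r = 3                                    # r coprime to all primes up to #F = 2, i.e. r odd
rF = [(r * f) % 1.0 for f in F]          # rF = {0, 3/4}
print(set(covering_multiplicity(rF)))    # {1}: the dilated tiling set still tiles
```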

As I have mentioned in some recent posts, I am interested in exploring unconventional modalities for presenting mathematics, for instance using media with high production value. One such recent example of this I saw was a presentation of the fundamental zero product property (or domain property) of the real numbers – namely, that ab=0 implies a=0 or b=0 for real numbers a,b – expressed through the medium of German-language rap:

EDIT: and here is a lesson on fractions, expressed through the medium of a burger chain advertisement:

I’d be interested to know what further examples of this type are out there.

SECOND EDIT: The following two examples from Wired magazine are slightly more conventional in nature, but still worth mentioning, I think. Firstly, my colleague at UCLA, Amit Sahai, presents the concept of zero knowledge proofs at various levels of technicality:

Secondly, Moon Duchin answers math questions of all sorts from Twitter:

I’ve just uploaded to the arXiv my preprint “Perfectly packing a square by squares of nearly harmonic sidelength”. This paper concerns a variant of an old problem of Meir and Moser, who asked whether it is possible to perfectly pack squares of sidelength {1/n} for {n \geq 2} into a single square or rectangle of area {\sum_{n=2}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} - 1}. (The following variant problem, also posed by Meir and Moser and discussed for instance in this MathOverflow post, is perhaps even more well known: is it possible to perfectly pack rectangles of dimensions {1/n \times 1/(n+1)} for {n \geq 1} into a single square of area {\sum_{n=1}^\infty \frac{1}{n(n+1)} = 1}?) For the purposes of this paper, rectangles and squares are understood to have sides parallel to the axes, and a packing is perfect if it partitions the region being packed up to sets of measure zero. As one partial result towards these problems, it was shown by Paulhus that squares of sidelength {1/n} for {n \geq 2} can be packed (not quite perfectly) into a single rectangle of area {\frac{\pi^2}{6} - 1 + \frac{1}{1244918662}}, and rectangles of dimensions {1/n \times 1/(n+1)} for {n \geq 1} can be packed (again not quite perfectly) into a single square of area {1 + \frac{1}{10^9+1}}. (Paulhus’s paper had some gaps in it, but these were subsequently repaired by Grzegorek and Januszewski.)

Another direction in which partial progress has been made is to consider instead the problem of packing squares of sidelength {n^{-t}}, {n \geq 1} perfectly into a square or rectangle of total area {\sum_{n=1}^\infty \frac{1}{n^{2t}}}, for some fixed constant {t > 1/2} (this lower bound is needed to make the total area {\sum_{n=1}^\infty \frac{1}{n^{2t}}} finite), with the aim being to get {t} as close to {1} as possible. Prior to this paper, the most recent advance in this direction was by Januszewski and Zielonka last year, who achieved such a packing in the range {1/2 < t \leq 2/3}.

In this paper we are able to get {t} arbitrarily close to {1} (which turns out to be a “critical” value of this parameter), but at the expense of deleting the first few tiles:

Theorem 1 If {1/2 < t < 1}, and {n_0} is sufficiently large depending on {t}, then one can pack squares of sidelength {n^{-t}}, {n \geq n_0} perfectly into a square of area {\sum_{n=n_0}^\infty \frac{1}{n^{2t}}}.

As in previous works, the general strategy is to execute a greedy algorithm, which can be described somewhat incompletely as follows.

  • Step 1: Suppose that one has already managed to perfectly pack a square {S} of area {\sum_{n=n_0}^\infty \frac{1}{n^{2t}}} by squares of sidelength {n^{-t}} for {n_0 \leq n < n_1}, together with a further finite collection {{\mathcal R}} of rectangles with disjoint interiors. (Initially, we would have {n_1=n_0} and {{\mathcal R} = \{S\}}, but these parameters will change over the course of the algorithm.)
  • Step 2: Amongst all the rectangles in {{\mathcal R}}, locate the rectangle {R} of the largest width (defined as the shorter of the two sidelengths of {R}).
  • Step 3: Pack (as efficiently as one can) squares of sidelength {n^{-t}} for {n_1 \leq n < n_2} into {R} for some {n_2>n_1}, and decompose the portion of {R} not covered by this packing into rectangles {{\mathcal R}'}.
  • Step 4: Replace {n_1} by {n_2}, replace {{\mathcal R}} by {({\mathcal R} \backslash \{R\}) \cup {\mathcal R}'}, and return to Step 1.

The main innovation of this paper is to perform Step 3 somewhat more efficiently than in previous papers.
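Here is a highly simplified toy implementation of one round of Steps 2-4 (a naive row packing of my own devising, just to make the bookkeeping concrete; it does not implement the more efficient clustered packing of Step 3 that is the main innovation of the paper):

```python
def width(rect):
    # width = the shorter of the two sidelengths
    return min(rect)

def pack_row(rect, t, n1):
    """Pack squares of sidelength n^{-t}, n = n1, n1+1, ..., in a single row along the
    longer side of `rect`, and return (n2, leftover rectangles)."""
    a, b = min(rect), max(rect)            # a = width, b = length of the rectangle
    s_first = n1 ** (-t)
    assert s_first <= a, "widest remaining rectangle is too narrow for the next square"
    leftovers = []
    x, n = 0.0, n1
    while x + n ** (-t) <= b:
        s = n ** (-t)
        if s_first - s > 0:                # sliver left above this square, below the row height
            leftovers.append((s, s_first - s))
        x += s
        n += 1
    if b - x > 0:                          # strip at the end of the row
        leftovers.append((b - x, s_first))
    if a - s_first > 0:                    # strip above the whole row
        leftovers.append((b, a - s_first))
    return n, leftovers

# a few rounds of the greedy algorithm, starting from the unit square, with t = 0.75
t, n1 = 0.75, 10
rects = [(1.0, 1.0)]
for _ in range(5):
    R = max(rects, key=width)              # Step 2: the rectangle of largest width
    rects.remove(R)
    n1, new_rects = pack_row(R, t, n1)     # Step 3 (naive version)
    rects.extend(new_rects)                # Step 4
    print("packed up to n =", n1, ", remaining rectangles:", len(rects))
```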

The above algorithm can get stuck if one reaches a point where one has already packed squares of sidelength {1/n^t} for {n_0 \leq n < n_1}, but all remaining rectangles {R} in {{\mathcal R}} have width less than {n_1^{-t}}, in which case there is no obvious way to fit in the next square. If we let {w(R)} and {h(R)} denote the width and height of these rectangles {R}, then the total area of the rectangles must be

\displaystyle  \sum_{R \in {\mathcal R}} w(R) h(R) = \sum_{n=n_0}^\infty \frac{1}{n^{2t}} - \sum_{n=n_0}^{n_1-1} \frac{1}{n^{2t}} \asymp n_1^{1-2t}

and the total perimeter {\mathrm{perim}({\mathcal R})} of these rectangles is

\displaystyle  \mathrm{perim}({\mathcal R}) = \sum_{R \in {\mathcal R}} 2(w(R)+h(R)) \asymp \sum_{R \in {\mathcal R}} h(R).

Thus we have

\displaystyle  n_1^{1-2t} \ll \mathrm{perim}({\mathcal R}) \sup_{R \in {\mathcal R}} w(R)

and so to ensure that there is at least one rectangle {R} with {w(R) \geq n_1^{-t}} it would be enough to have the perimeter bound

\displaystyle  \mathrm{perim}({\mathcal R}) \leq c n_1^{1-t}

for a sufficiently small constant {c>0}. It is here that we now see the critical nature of the exponent {t=1}: for {t<1}, the amount of perimeter we are permitted to have in the remaining rectangles increases as one progresses with the packing, but for {t=1} the amount of perimeter one is “budgeted” for stays constant (and for {t>1} the situation is even worse, in that the remaining rectangles {{\mathcal R}} should steadily decrease in total perimeter).

In comparison, the perimeter of the squares that one has already packed is equal to

\displaystyle  \sum_{n=n_0}^{n_1-1} 4 n^{-t}

which is comparable to {n_1^{1-t}} for {n_1} large (with the constants blowing up as {t} approaches the critical value of {1}). In previous algorithms, the total perimeter of the remainder rectangles {{\mathcal R}} was basically comparable to the perimeter of the squares already packed, and this is the main reason why the results only worked when {t} was sufficiently far away from {1}. In my paper, I am able to get the perimeter of {{\mathcal R}} significantly smaller than the perimeter of the squares already packed, by grouping those squares into lattice-like clusters (of about {M^2} squares arranged in an {M \times M} pattern), and sliding the squares in each cluster together to almost entirely eliminate the wasted space between each square, leaving only the space around the cluster as the main source of residual perimeter, which will be comparable to about {M n_1^{-t}} per cluster, as compared to the total perimeter of the squares in the cluster which is comparable to {M^2 n_1^{-t}}. This strategy is perhaps easiest to illustrate with a picture, in which {3 \times 4} squares {S_{i,j}} of slowly decreasing sidelength are packed together with relatively little wasted space:

By choosing the parameter {M} suitably large (and taking {n_0} sufficiently large depending on {M}), one can then prove the theorem. (In order to do some technical bookkeeping and to allow one to close an induction in the verification of the algorithm’s correctness, it is convenient to replace the perimeter {\sum_{R \in {\mathcal R}} 2(w(R)+h(R))} by a slightly weighted variant {\sum_{R \in {\mathcal R}} w(R)^\delta h(R)} for a small exponent {\delta}, but this is a somewhat artificial device that somewhat obscures the main ideas.)
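As a crude numerical sanity check of this perimeter accounting (with illustrative parameters of my own choosing, not taken from the paper):

```python
n0 = 100
n1 = 10 ** 5
M = 100          # cluster parameter: squares are grouped into roughly M x M clusters
for t in (0.7, 0.9, 0.99):
    packed = sum(4 * n ** (-t) for n in range(n0, n1))   # perimeter of the squares packed so far
    budget = n1 ** (1 - t)                               # allowed residual perimeter, up to a constant c
    print(f"t = {t}: packed/budget = {packed / budget:7.1f}, "
          f"clustered/budget = {packed / (M * budget):5.2f}")
# a naive packing leaves residual perimeter comparable to `packed`, which overwhelms the budget
# as t -> 1; grouping the squares into M x M clusters heuristically cuts the residual by a factor ~ M
```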
