You are currently browsing Terence Tao’s articles.

Asgar Jamneshan, Or Shalom, and myself have just uploaded to the arXiv our preprints “A Host–Kra {{\bf F}^\omega_2}-system of order 5 that is not Abramov of order 5, and non-measurability of the inverse theorem for the {U^6({\bf F}^n_2)} norm” and “The structure of totally disconnected Host–Kra–Ziegler factors, and the inverse theorem for the {U^k} Gowers uniformity norms on finite abelian groups of bounded torsion“. These two papers are both concerned with advancing the inverse theory for the Gowers norms and Gowers-Host-Kra seminorms; the first paper provides a counterexample in this theory (in particular disproving a conjecture of Bergelson, Ziegler and myself), and the second paper gives new positive results in the case when the underlying group is bounded torsion, or the ergodic system is totally disconnected. I discuss the two papers more below the fold.


The International Center for Mathematical Sciences in Edinburgh recently launched its “Mathematics for Humanity” initiative with a call for research activity proposals (ranging from small collaborations to courses, workshops and conferences) aimed at using mathematics to contribute to the betterment of humanity. (I have agreed to serve on the scientific committee to evaluate these proposals.) We launched this initiative in January and initially set the deadline for April 15, but several people who had expressed interest felt that this was insufficient time to prepare a quality proposal, so we have now extended the deadline to June 1, and welcome further applications.

See also this Mathstodon post from fellow committee member John Baez last year where he solicited some preliminary suggestions for proposals, and my previous Mathstodon announcement of this programme.

This is a somewhat experimental and speculative post. This week I was at the IPAM workshop on machine assisted proof that I was one of the organizers of. We had an interesting and diverse range of talks, both from computer scientists presenting the latest available tools to formally verify proofs or to automate various aspects of proof writing or proof discovery, as well as mathematicians who described their experiences using these tools to solve their research problems. One can find the videos of these talks on the IPAM youtube channel; I also posted about the talks during the event on my Mathstodon account. I am of course not the most objective person to judge, but from the feedback I received it seems that the conference was able to successfully achieve its aim of bringing together the different communities interested in this topic.

As a result of the conference I started thinking about what possible computer tools might now be developed that could be of broad use to mathematicians, particularly those who do not have prior expertise with the finer aspects of writing code or installing software. One idea that came to mind was a potential tool that could take, say, an arXiv preprint as input, and return some sort of diagram detailing the logical flow of the main theorems and lemmas in the paper. This is currently done by hand by authors in some, but not all, papers (and can often also be automatically generated from formally verified proofs, as seen for instance in the graphic accompanying the IPAM workshop, or this diagram generated from Massot’s blueprint software from a manually inputted set of theorems and dependencies as a precursor to formalization of a proof [thanks to Thomas Bloom for this example]). For instance, here is a diagram that my co-author Rachel Greenfeld and I drew for a recent paper:

This particular diagram incorporated a number of subjective design choices regarding layout, which results were deemed important enough to require a dedicated box (as opposed to being viewed as a mere tool to get from one box to another), and how to describe each of these results (and how to colour-code them). This is still a very human-intensive task (and my co-author and I went through several iterations of this particular diagram with much back-and-forth discussion until we were both satisfied). But I could see the possibility of creating an automatic tool that could provide an initial “first approximation” to such a diagram, which a human user could then modify as they see fit (perhaps using some convenient GUI interface, for instance some variant of the Quiver online tool for drawing commutative diagrams in LaTeX).

As a crude first attempt at automatically generating such a diagram, one could perhaps develop a tool to scrape a LaTeX file to locate all the instances of the theorem environment in the text (i.e., all the formally identified lemmas, corollaries, and so forth), and for each such theorem, locate a proof environment instance that looks like it is associated to that theorem (doing this with reasonable accuracy may require a small amount of machine learning, though perhaps one could just hope that proximity of the proof environment instance to the theorem environment instance suffices in many cases). Then identify all the references within that proof environment to other theorems to start building the tree of implications, which one could then depict in a diagram such as the above. Such an approach would likely miss many of the implications; for instance, many lemmas might not be proven using a formal proof environment, but instead by some more free-flowing text discussion, or perhaps a one line justification such as “By combining Lemma 3.4 and Proposition 3.6, we conclude”. Also, some references to other results in the paper might not proceed by direct citation, but by more indirect justifications such as “invoking the previous lemma, we obtain” or “by repeating the arguments in Section 3, we have”. Still, even such a crude diagram could be helpful, both as a starting point for authors to make an improved diagram, and for a student trying to understand a lengthy paper to get some initial idea of the logical structure.
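To make the idea concrete, here is a minimal sketch of what such a scraper could look like; the environment names, the regular expressions, and the "nearest preceding statement" heuristic are all illustrative assumptions on my part rather than a tested tool:

```python
import re

# Hypothetical sketch: scrape a LaTeX source for theorem-like environments and
# build a crude dependency map from the \ref / \cref citations in each proof.
THEOREM_ENVS = ("theorem", "lemma", "proposition", "corollary")

def extract_dependencies(latex):
    # Collect labelled theorem-like statements, in order of appearance.
    stmt_pattern = re.compile(
        r"\\begin\{(" + "|".join(THEOREM_ENVS) + r")\}"
        r".*?\\label\{([^}]*)\}.*?\\end\{\1\}", re.DOTALL)
    statements = [(m.group(2), m.end()) for m in stmt_pattern.finditer(latex)]

    deps = {}
    for m in re.finditer(r"\\begin\{proof\}(.*?)\\end\{proof\}", latex, re.DOTALL):
        # Naive heuristic: pair each proof with the nearest preceding statement.
        preceding = [(label, end) for label, end in statements if end <= m.start()]
        if not preceding:
            continue
        label = preceding[-1][0]
        cited = re.findall(r"\\(?:ref|cref|eqref)\{([^}]*)\}", m.group(1))
        deps[label] = sorted(set(c for c in cited if c != label))
    return deps

doc = r"""
\begin{lemma}\label{lem:a} ... \end{lemma}
\begin{proof} trivial \end{proof}
\begin{theorem}\label{thm:main} ... \end{theorem}
\begin{proof} By Lemma \ref{lem:a} ... \end{proof}
"""
print(extract_dependencies(doc))  # → {'lem:a': [], 'thm:main': ['lem:a']}
```

The resulting dependency map could then be fed to a graph-drawing tool to produce the "first approximation" diagram discussed above.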

More advanced features might be to try to use more of the text of the paper to assign some measure of importance to individual results (and then weight the diagram correspondingly to highlight the more important results), to try to give each result a natural language description, and to somehow capture key statements that are not neatly encapsulated in a theorem environment instance, but I would imagine that such tasks should be deferred until some cruder proof-of-concept prototype can be demonstrated.

Anyway, I would be interested to hear opinions about whether this idea (or some modification thereof) is (a) actually feasible with current technology (or better yet, already exists in some form), and (b) of interest to research mathematicians.

Tamar Ziegler and I have just uploaded to the arXiv our paper “Infinite partial sumsets in the primes“. This is a short paper inspired by a recent result of Kra, Moreira, Richter, and Robertson (discussed for instance in this Quanta article from last December) showing that for any set {A} of natural numbers of positive upper density, there exists a sequence {b_1 < b_2 < b_3 < \dots} of natural numbers and a shift {t} such that {b_i + b_j + t \in A} for all {i<j} (this answers a question of Erdős). In view of the “transference principle“, it is then plausible to ask whether the same result holds if {A} is replaced by the primes. We can show the following results:

Theorem 1
  • (i) If the Hardy-Littlewood prime tuples conjecture (or the weaker conjecture of Dickson) is true, then there exists an increasing sequence {b_1 < b_2 < b_3 < \dots} of primes such that {b_i + b_j + 1} is prime for all {i < j}.
  • (ii) Unconditionally, there exist increasing sequences {a_1 < a_2 < \dots} and {b_1 < b_2 < \dots} of natural numbers such that {a_i + b_j} is prime for all {i<j}.
  • (iii) These conclusions fail if “prime” is replaced by “positive (relative) density subset of the primes” (even if the density is equal to 1).

We remark that it was shown by Balog that there (unconditionally) exist arbitrarily long but finite sequences {b_1 < \dots < b_k} of primes such that {b_i + b_j + 1} is prime for all {i < j \leq k}. (This result can also be recovered from the later results of Ben Green, myself, and Tamar Ziegler.) Also, it had previously been shown by Granville that on the Hardy-Littlewood prime tuples conjecture, there existed increasing sequences {a_1 < a_2 < \dots} and {b_1 < b_2 < \dots} of natural numbers such that {a_i+b_j} is prime for all {i,j}.

The conclusion of (i) is stronger than that of (ii) (which is of course consistent with the former being conditional and the latter unconditional). The conclusion (ii) also implies the well-known theorem of Maynard that for any given {k}, there exist infinitely many {k}-tuples of primes of bounded diameter, and indeed our proof of (ii) uses the same “Maynard sieve” that powers the proof of that theorem (though we use a formulation of that sieve closer to that in this blog post of mine). Indeed, the failure of (iii) basically arises from the failure of Maynard’s theorem for dense subsets of primes, simply by removing those clusters of primes that are unusually closely spaced.

Our proof of (i) was initially inspired by the topological dynamics methods used by Kra, Moreira, Richter, and Robertson, but we managed to condense it to a purely elementary argument (taking up only half a page) that makes no reference to topological dynamics and builds up the sequence {b_1 < b_2 < \dots} recursively by repeated application of the prime tuples conjecture.
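To get a concrete feel for this type of recursive construction, one can run a small unconditional experiment (my own illustration, in the spirit of Balog's finite result, and not the argument used in the paper): greedily extend a seed prime to an increasing sequence of primes with all pairwise sums {b_i + b_j + 1} prime.

```python
def is_prime(n):
    # Simple trial division; adequate for this small-scale experiment.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def greedy_sumset_primes(seed, length, limit):
    """Greedily extend seed to an increasing sequence b_1 < b_2 < ... of primes
    with b_i + b_j + 1 prime for all i < j, giving up at the given limit."""
    b = list(seed)
    p = b[-1] + 1
    while len(b) < length and p <= limit:
        if is_prime(p) and all(is_prime(q + p + 1) for q in b):
            b.append(p)
        p += 1
    return b

b = greedy_sumset_primes([5], 4, 2000)
print(b)  # → [5, 7, 11, 101]
assert all(is_prime(b[i] + b[j] + 1)
           for i in range(len(b)) for j in range(i + 1, len(b)))
```

Of course, such a greedy search says nothing about whether the sequence can be continued indefinitely; that is precisely what the prime tuples conjecture supplies in the proof of (i).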

The proof of (ii) takes up the majority of the paper. It is easiest to phrase the argument in terms of “prime-producing tuples” – tuples {(h_1,\dots,h_k)} for which there are infinitely many {n} with {n+h_1,\dots,n+h_k} all prime. Maynard’s theorem is equivalent to the existence of arbitrarily long prime-producing tuples; our theorem is equivalent to the stronger assertion that there exists an infinite sequence {h_1 < h_2 < \dots} such that every initial segment {(h_1,\dots,h_k)} is prime-producing. The main new tool for achieving this is the following cute measure-theoretic lemma of Bergelson:

Lemma 2 (Bergelson intersectivity lemma) Let {E_1,E_2,\dots} be subsets of a probability space {(X,\mu)} of measure uniformly bounded away from zero, thus {\inf_i \mu(E_i) > 0}. Then there exists a subsequence {E_{i_1}, E_{i_2}, \dots} such that

\displaystyle  \mu(E_{i_1} \cap \dots \cap E_{i_k} ) > 0

for all {k}.

This lemma has a short proof, though not an entirely obvious one. Firstly, by deleting a null set from {X}, one can assume that all finite intersections {E_{i_1} \cap \dots \cap E_{i_k}} are either positive measure or empty. Secondly, a routine application of Fatou’s lemma shows that the maximal function {\limsup_N \frac{1}{N} \sum_{i=1}^N 1_{E_i}} has a positive integral, hence must be positive at some point {x_0}. Thus there is a subsequence {E_{i_1}, E_{i_2}, \dots} whose finite intersections all contain {x_0}, thus have positive measure as desired by the previous reduction.

It turns out that one cannot quite combine the standard Maynard sieve with the intersectivity lemma because the events {E_i} that show up (which roughly correspond to the event that {n + h_i} is prime for some random number {n} (with a well-chosen probability distribution) and some shift {h_i}) have their probability going to zero, rather than being uniformly bounded from below. To get around this, we borrow an idea from a paper of Banks, Freiberg, and Maynard, and group the shifts {h_i} into various clusters {h_{i,1},\dots,h_{i,J_i}}, chosen in such a way that the probability that at least one of {n+h_{i,1},\dots,n+h_{i,J_i}} is prime is bounded uniformly from below. One then applies the Bergelson intersectivity lemma to those events and uses many applications of the pigeonhole principle to conclude.

Over the last few years, I have served on a committee of the National Academy of Sciences to produce some posters and other related media to showcase twenty-first century mathematics and its applications in the real world, suitable for display in classrooms or math departments. Our posters (together with some associated commentary, webinars on related topics, and even a whimsical “comic“) are now available for download here.

This post is an unofficial sequel to one of my first blog posts from 2007, which was entitled “Quantum mechanics and Tomb Raider“.

One of the oldest and most famous allegories is Plato’s allegory of the cave. This allegory centers around a group of people chained to a wall in a cave, who cannot see themselves or each other, but only the two-dimensional shadows of themselves cast on the wall in front of them by some light source they cannot directly see. Because of this, they identify reality with this two-dimensional representation, and have significant conceptual difficulties in trying to view themselves (or the world as a whole) as three-dimensional, until they are freed from the cave and able to venture into the sunlight.

There is a similar conceptual difficulty when trying to understand Einstein’s theory of special relativity (and more so for general relativity, but let us focus on special relativity for now). We are very much accustomed to thinking of reality as a three-dimensional space endowed with a Euclidean geometry that we traverse through in time, but in order to have the clearest view of the universe of special relativity it is better to think of reality instead as a four-dimensional spacetime that is endowed instead with a Minkowski geometry, which mathematically is similar to a (four-dimensional) Euclidean space but with a crucial change of sign in the underlying metric. Indeed, whereas the distance {ds} between two points in Euclidean space {{\bf R}^3} is given by the three-dimensional Pythagorean theorem

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2

under some standard Cartesian coordinate system {(x,y,z)} of that space, and the distance {ds} in a four-dimensional Euclidean space {{\bf R}^4} would be similarly given by

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2 + du^2

under a standard four-dimensional Cartesian coordinate system {(x,y,z,u)}, the spacetime interval {ds} in Minkowski space is given by

\displaystyle  ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2

(though in many texts the opposite sign convention {ds^2 = -dx^2 -dy^2 - dz^2 + c^2dt^2} is preferred) in spacetime coordinates {(x,y,z,t)}, where {c} is the speed of light. The geometry of Minkowski space is then quite similar algebraically to the geometry of Euclidean space (with the sign change replacing the traditional trigonometric functions {\sin, \cos, \tan}, etc. by their hyperbolic counterparts {\sinh, \cosh, \tanh}, and with various factors involving “{c}” inserted in the formulae), but also has some qualitative differences to Euclidean space, most notably a causality structure connected to light cones that has no obvious counterpart in Euclidean space.
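The replacement of trigonometric by hyperbolic functions can be checked concretely: in units with {c=1} and with the {y,z} coordinates suppressed, the interval {s^2 = x^2 - t^2} (using the sign convention of this post) is preserved by a Lorentz boost, which is the Minkowski analogue of a rotation. The following small numerical sketch (standard special relativity, nothing specific to this post) illustrates this:

```python
import math

def boost(x, t, phi):
    # Lorentz boost of rapidity phi along the x-axis, in units with c = 1:
    # the Minkowski analogue of a rotation, with cos/sin replaced by cosh/sinh.
    return (x * math.cosh(phi) - t * math.sinh(phi),
            t * math.cosh(phi) - x * math.sinh(phi))

def interval_squared(x, t):
    # ds^2 = dx^2 - dt^2 (c = 1, with the y and z coordinates suppressed).
    return x * x - t * t

x, t, phi = 3.0, 1.5, 0.7   # an arbitrary event and rapidity
xp, tp = boost(x, t, phi)
print(interval_squared(x, t), interval_squared(xp, tp))  # agree up to rounding
```

The invariance follows from the identity {\cosh^2 \phi - \sinh^2 \phi = 1}, in the same way that ordinary rotations preserve {x^2+y^2} via {\cos^2 \theta + \sin^2 \theta = 1}.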

That said, the analogy between Minkowski space and four-dimensional Euclidean space is strong enough that it serves as a useful conceptual aid when first learning special relativity; for instance the excellent introductory text “Spacetime physics” by Taylor and Wheeler very much adopts this view. On the other hand, this analogy doesn’t directly address the conceptual problem mentioned earlier of viewing reality as a four-dimensional spacetime in the first place, rather than as a three-dimensional space that objects move around in as time progresses. Of course, part of the issue is that we aren’t good at directly visualizing four dimensions in the first place. This latter problem can at least be easily addressed by removing one or two spatial dimensions from this framework – and indeed many relativity texts start with the simplified setting of only having one spatial dimension, so that spacetime becomes two-dimensional and can be depicted with relative ease by spacetime diagrams – but still there is conceptual resistance to the idea of treating time as another spatial dimension, since we clearly cannot “move around” in time as freely as we can in space, nor do we seem able to easily “rotate” between the spatial and temporal axes, the way that we can between the three coordinate axes of Euclidean space.

With this in mind, I thought it might be worth attempting a Plato-type allegory to reconcile the spatial and spacetime views of reality, in a way that can be used to describe (analogues of) some of the less intuitive features of relativity, such as time dilation, length contraction, and the relativity of simultaneity. I have (somewhat whimsically) decided to place this allegory in a Tolkienesque fantasy world (similarly to how my previous allegory to describe quantum mechanics was phrased in a world based on the computer game “Tomb Raider”). This is something of an experiment, and (like any other analogy) the allegory will not be able to perfectly capture every aspect of the phenomenon it is trying to represent, so any feedback to improve the allegory would be appreciated.


If {\lambda>0}, a Poisson random variable {{\bf Poisson}(\lambda)} with mean {\lambda} is a random variable taking values in the natural numbers with probability distribution

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = k) = e^{-\lambda} \frac{\lambda^k}{k!}.

One is often interested in bounding upper tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u))

for {u \geq 0}, or lower tail probabilities

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u))

for {-1 < u \leq 0}. A standard tool for this is Bennett’s inequality:

Proposition 1 (Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \leq \exp(-\lambda h(u))

for {-1 < u \leq 0}, where

\displaystyle  h(u) := (1+u) \log(1+u) - u.

From the Taylor expansion {h(u) = \frac{u^2}{2} + O(u^3)} for {u=O(1)} we conclude Gaussian type tail bounds in the regime {u = o(1)} (and in particular when {u = O(1/\sqrt{\lambda})}), in the spirit of the Chernoff, Bernstein, and Hoeffding inequalities; but in the regime where {u} is large and positive one obtains a slight gain over these other classical bounds (of {\exp(- \lambda u \log u)} type, rather than {\exp(-\lambda u)}).

Proof: We use the exponential moment method. For any {t \geq 0}, we have from Markov’s inequality that

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq e^{-t \lambda(1+u)} {\bf E} \exp( t {\bf Poisson}(\lambda) ).

A standard computation shows that the moment generating function of the Poisson distribution is given by

\displaystyle  \exp( t {\bf Poisson}(\lambda) ) = \exp( (e^t - 1) \lambda )

and hence

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \leq \exp( (e^t - 1)\lambda - t \lambda(1+u) ).

For {u \geq 0}, it turns out that the right-hand side is optimized by setting {t = \log(1+u)}, in which case the right-hand side simplifies to {\exp(-\lambda h(u))}. This proves the first inequality; the second inequality is proven similarly (but now {u} and {t} are non-positive rather than non-negative). \Box
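As a quick numerical sanity check of Proposition 1 (an illustration of mine, not part of the original argument), one can compare the exact Poisson upper tail against the Bennett bound:

```python
import math

def h(u):
    # The Bennett rate function h(u) = (1+u) log(1+u) - u.
    return (1 + u) * math.log(1 + u) - u

def poisson_upper_tail(lam, u, terms=200):
    # P(Poisson(lam) >= lam*(1+u)) by direct summation of the pmf
    # (truncated; the omitted terms are negligible at this scale).
    k0 = math.ceil(lam * (1 + u))
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(k0, k0 + terms))

lam = 10.0
for u in (0.5, 1.0, 2.0):
    tail, bound = poisson_upper_tail(lam, u), math.exp(-lam * h(u))
    assert tail <= bound
    print(f"u={u}: tail {tail:.3e} <= Bennett bound {bound:.3e}")
```

The gap between the tail and the bound visible here is the polynomial factor recovered by Proposition 3 below.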

Remark 2 Bennett’s inequality also applies for (suitably normalized) sums of bounded independent random variables. In some cases there are direct comparison inequalities available to relate those variables to the Poisson case. For instance, suppose {S = X_1 + \dots + X_n} is the sum of independent Boolean variables {X_1,\dots,X_n \in \{0,1\}} of total mean {\sum_{j=1}^n {\bf E} X_j = \lambda} and with {\sup_i {\bf P}(X_i=1) \leq \varepsilon} for some {0 < \varepsilon < 1}. Then for any natural number {k}, we have

\displaystyle  {\bf P}(S=k) = \sum_{1 \leq i_1 < \dots < i_k \leq n} {\bf P}(X_{i_1}=1) \dots {\bf P}(X_{i_k}=1) \prod_{i \neq i_1,\dots,i_k} {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\sum_{i=1}^n \frac{{\bf P}(X_i=1)}{{\bf P}(X_i=0)})^k \times \prod_{i=1}^n {\bf P}(X_i=0)

\displaystyle  \leq \frac{1}{k!} (\frac{\lambda}{1-\varepsilon})^k \prod_{i=1}^n \exp( - {\bf P}(X_i = 1))

\displaystyle  \leq e^{-\lambda} \frac{\lambda^k}{(1-\varepsilon)^k k!}

\displaystyle  \leq e^{\frac{\varepsilon}{1-\varepsilon} \lambda} {\bf P}( \mathbf{Poisson}(\frac{\lambda}{1-\varepsilon}) = k).

As such, for {\varepsilon} small, one can efficiently control the tail probabilities of {S} in terms of the tail probability of a Poisson random variable of mean close to {\lambda}; this is of course very closely related to the well known fact that the Poisson distribution emerges as the limit of sums of many independent boolean variables, each of which is non-zero with small probability. See this paper of Bentkus and this paper of Pinelis for some further useful (and less obvious) comparison inequalities of this type.
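The displayed chain of inequalities can also be verified numerically; the following sketch (my own illustration, with arbitrarily chosen small success probabilities) computes the exact law of {S} by dynamic programming and checks the final comparison with the Poisson distribution:

```python
import math

def bernoulli_sum_pmf(ps):
    # Exact law of S = X_1 + ... + X_n for independent Bernoulli(p_i) variables,
    # computed by dynamic programming over the partial sums.
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, prob in enumerate(pmf):
            new[k] += prob * (1 - p)
            new[k + 1] += prob * p
        pmf = new
    return pmf

def poisson_pmf(mu, k):
    return math.exp(-mu + k * math.log(mu) - math.lgamma(k + 1))

ps = [0.05, 0.1, 0.02, 0.08, 0.05, 0.1]   # arbitrary small success probabilities
lam, eps = sum(ps), max(ps)
mu = lam / (1 - eps)
for k, pk in enumerate(bernoulli_sum_pmf(ps)):
    # Final inequality of the chain: P(S=k) <= e^{eps*lam/(1-eps)} P(Poisson(mu)=k).
    assert pk <= math.exp(eps * mu) * poisson_pmf(mu, k) + 1e-12
print("comparison with Poisson(", mu, ") verified for all k")
```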

In this note I wanted to record the observation that one can improve the Bennett bound by a small polynomial factor once one leaves the Gaussian regime {u = O(1/\sqrt{\lambda})}, in particular gaining a factor of {1/\sqrt{\lambda}} when {u \sim 1}. This observation is not difficult and is implicitly in the literature (one can extract it for instance from the much more general results of this paper of Talagrand, and the basic idea already appears in this paper of Glynn), but I was not able to find a clean version of this statement in the literature, so I am placing it here on my blog. (But if a reader knows of a reference that basically contains the bound below, I would be happy to know of it.)

Proposition 3 (Improved Bennett’s inequality) One has

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \geq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

for {u \geq 0} and

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) \leq \lambda(1+u)) \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

for {-1 < u \leq 0}.

Proof: We begin with the first inequality. We may assume that {u \geq 1/\sqrt{\lambda}}, since otherwise the claim follows from the usual Bennett inequality. We expand out the left-hand side as

\displaystyle  e^{-\lambda} \sum_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

Observe that for {k \geq \lambda(1+u)} that

\displaystyle  \frac{\lambda^{k+1}}{(k+1)!} \leq \frac{1}{1+u} \frac{\lambda^{k}}{k!} .

Thus the sum is dominated by the first term times a geometric series {\sum_{j=0}^\infty \frac{1}{(1+u)^j} = 1 + \frac{1}{u}}. We can thus bound the left-hand side by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{\lambda^k}{k!}.

By the Stirling approximation, this is

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \sup_{k \geq \lambda(1+u)} \frac{1}{\sqrt{k}} \frac{(e\lambda)^k}{k^k}.

The expression inside the supremum is decreasing in {k} for {k > \lambda}, thus we can bound it by

\displaystyle  \ll e^{-\lambda} (1 + \frac{1}{u}) \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda \min(u, u^2)}}

after a routine calculation.

Now we turn to the second inequality. As before we may assume that {u \leq -1/\sqrt{\lambda}}. We first dispose of a degenerate case in which {\lambda(1+u) < 1}. Here the left-hand side is just

\displaystyle  {\bf P}( {\bf Poisson}(\lambda) = 0 ) = e^{-\lambda}

and the right-hand side is comparable to

\displaystyle  e^{-\lambda} \exp( - \lambda (1+u) \log (1+u) + \lambda(1+u) ) / \sqrt{\lambda(1+u)}.

Since {-\lambda(1+u) \log(1+u)} is negative and {0 < \lambda(1+u) < 1}, we see that the right-hand side is {\gg e^{-\lambda}}, and the estimate holds in this case.

It remains to consider the regime where {u \leq -1/\sqrt{\lambda}} and {\lambda(1+u) \geq 1}. The left-hand side expands as

\displaystyle  e^{-\lambda} \sum_{k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

The sum is dominated by its largest term (at the maximal {k}) times a geometric series {\sum_{j=-\infty}^0 \frac{1}{(1+u)^j} = \frac{1}{|u|}}. This maximal {k} is comparable to {\lambda(1+u)}, so we can bound the left-hand side by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \sup_{k \leq \lambda(1+u)} \frac{\lambda^k}{k!}.

Using the Stirling approximation as before we can bound this by

\displaystyle  \ll e^{-\lambda} \frac{1}{|u|} \frac{1}{\sqrt{\lambda(1+u)}} \frac{(e\lambda)^{\lambda(1+u)}}{(\lambda(1+u))^{\lambda(1+u)}},

which simplifies to

\displaystyle  \ll \frac{\exp(-\lambda h(u))}{\sqrt{1 + \lambda u^2 (1+u)}}

after a routine calculation. \Box

The same analysis can be reversed to show that the bounds given above are basically sharp up to constants, at least when {\lambda} (and {\lambda(1+u)}) are large.
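This sharpness can be illustrated numerically (again an illustration of mine rather than content from the note): tabulating the ratio of the exact upper tail to the improved bound, one observes that it stays comparable to {1} across a range of {u}:

```python
import math

def h(u):
    return (1 + u) * math.log(1 + u) - u

def upper_tail(lam, u, terms=500):
    # P(Poisson(lam) >= lam*(1+u)) by (truncated) direct summation of the pmf.
    k0 = math.ceil(lam * (1 + u))
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(k0, k0 + terms))

def improved_bound(lam, u):
    # Right-hand side of the improved upper-tail inequality of Proposition 3.
    return math.exp(-lam * h(u)) / math.sqrt(1 + lam * min(u, u * u))

lam = 100.0
for u in (0.2, 0.5, 1.0, 2.0):
    ratio = upper_tail(lam, u) / improved_bound(lam, u)
    assert 0.02 < ratio < 50      # ratio stays comparable to 1
    print(f"u={u}: tail/bound = {ratio:.3f}")
```

By contrast, the ratio of the tail to the original Bennett bound {\exp(-\lambda h(u))} decays like {1/\sqrt{1+\lambda \min(u,u^2)}} in this regime.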

[The following information was provided to me by Geordie Williamson, who is Director of the Sydney Mathematics Research Institute – T.]

We are currently advertising two positions in math and AI:

Both positions are for three years and are based at the Sydney Mathematical Research Institute. The positions are research only, but teaching at the University of Sydney is possible if desired. The successful candidate will have considerable time and flexibility to pursue their own research program.

We are after either:

  1. excellent mathematicians with some interest in programming and modern AI;
  2. excellent computer scientists with some interest and background in mathematics, as well as an interest in using AI to attack tough problems in mathematics.

Rachel Greenfeld and I have just uploaded to the arXiv our paper “A counterexample to the periodic tiling conjecture“. This is the full version of the result I announced on this blog a few months ago, in which we disprove the periodic tiling conjecture of Grünbaum-Shephard and Lagarias-Wang. The paper took a little longer than expected to finish, due to a technical issue that we did not realize at the time of the announcement that required a workaround.

In more detail: the original strategy, as described in the announcement, was to build a “tiling language” that was capable of encoding a certain “{p}-adic Sudoku puzzle”, and then show that the latter type of puzzle had only non-periodic solutions if {p} was a sufficiently large prime. As it turns out, the second half of this strategy worked out, but there was an issue in the first part: our tiling language was able (using {2}-group-valued functions) to encode arbitrary boolean relationships between boolean functions, and was also able (using {{\bf Z}/p{\bf Z}}-valued functions) to encode “clock” functions such as {n \mapsto n \hbox{ mod } p} that were part of our {p}-adic Sudoku puzzle, but we were not able to make these two types of functions “talk” to each other in the way that was needed to encode the {p}-adic Sudoku puzzle (the basic problem being that if {H} is a finite abelian {2}-group then there are no non-trivial subgroups of {H \times {\bf Z}/p{\bf Z}} that are not contained in {H} or trivial in the {{\bf Z}/p{\bf Z}} direction). As a consequence, we had to replace our “{p}-adic Sudoku puzzle” by a “{2}-adic Sudoku puzzle” which basically amounts to replacing the prime {p} by a sufficiently large power of {2} (we believe {2^{10}} will suffice). This solved the encoding issue, but the analysis of the {2}-adic Sudoku puzzles was a little bit more complicated than the {p}-adic case, for the following reason. The following is a nice exercise in analysis:

Theorem 1 (Linearity in three directions implies full linearity) Let {F: {\bf R}^2 \rightarrow {\bf R}} be a smooth function which is affine-linear on every horizontal line, diagonal (line of slope {1}), and anti-diagonal (line of slope {-1}). In other words, for any {c \in {\bf R}}, the functions {x \mapsto F(x,c)}, {x \mapsto F(x,c+x)}, and {x \mapsto F(x,c-x)} are each affine functions on {{\bf R}}. Then {F} is an affine function on {{\bf R}^2}.

Indeed, the property of being affine in three directions shows that the quadratic form associated to the Hessian {\nabla^2 F(x,y)} at any given point vanishes at {(1,0)}, {(1,1)}, and {(1,-1)}, and thus must vanish everywhere. In fact the smoothness hypothesis is not necessary; we leave this as an exercise to the interested reader. The same statement turns out to be true if one replaces {{\bf R}} with the cyclic group {{\bf Z}/p{\bf Z}} as long as {p} is odd; this is the key for us to showing that our {p}-adic Sudoku puzzles have an (approximate) two-dimensional affine structure, which on further analysis can then be used to show that it is in fact non-periodic. However, it turns out that the corresponding claim for cyclic groups {{\bf Z}/q{\bf Z}} can fail when {q} is a sufficiently large power of {2}! In fact the general form of functions {F: ({\bf Z}/q{\bf Z})^2 \rightarrow {\bf Z}/q{\bf Z}} that are affine on every horizontal line, diagonal, and anti-diagonal takes the form

\displaystyle  F(x,y) = Ax + By + C + D \frac{q}{4} y(x-y)

for some integer coefficients {A,B,C,D}. This additional “pseudo-affine” term {D \frac{q}{4} y(x-y)} causes some additional technical complications but ultimately turns out to be manageable.
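One can confirm by direct computation (a small sketch of mine; the coefficient choices are arbitrary) that such a pseudo-affine function is indeed affine on every horizontal line, diagonal, and anti-diagonal of {({\bf Z}/q{\bf Z})^2}, while failing to be affine globally:

```python
# Numerical check that F(x,y) = Ax + By + C + D*(q/4)*y*(x-y) is affine along
# the three families of lines mod q, yet not globally affine, for q a power of 2.
q = 16                    # a power of two divisible by 4
A, B, C, D = 3, 5, 7, 1   # arbitrary integer coefficients

def F(x, y):
    return (A * x + B * y + C + D * (q // 4) * y * (x - y)) % q

def is_affine_on_line(line):
    # A function on Z/qZ is affine iff all of its second differences vanish.
    return all((line(t + 2) - 2 * line(t + 1) + line(t)) % q == 0
               for t in range(q))

for c in range(q):
    assert is_affine_on_line(lambda t, c=c: F(t, c))        # horizontal y = c
    assert is_affine_on_line(lambda t, c=c: F(t, c + t))    # diagonal y = c + x
    assert is_affine_on_line(lambda t, c=c: F(t, c - t))    # anti-diagonal y = c - x

# ...yet F is not globally affine: the horizontal increment depends on y.
assert (F(1, 0) - F(0, 0)) % q != (F(1, 1) - F(0, 1)) % q
print("affine on all three line families, but not globally affine")
```

The key point in the anti-diagonal case is that the second difference of the term {D \frac{q}{4} y(x-y)} is a multiple of {q}, hence vanishes mod {q}; this is exactly where the power-of-two structure enters.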

During the writing process we also discovered that the encoding part of the proof becomes more modular and conceptual once one introduces two new definitions, that of an “expressible property” and a “weakly expressible property”. These concepts are somewhat analogous to that of {\Pi^0_0} sentences and {\Sigma^0_1} sentences in the arithmetic hierarchy, or to algebraic sets and semi-algebraic sets in real algebraic geometry. Roughly speaking, an expressible property is a property of a tuple of functions {f_w: G \rightarrow H_w}, {w \in {\mathcal W}} from an abelian group {G} to finite abelian groups {H_w}, such that the property can be expressed in terms of one or more tiling equations on the graph

\displaystyle  A := \{ (x, (f_w(x))_{w \in {\mathcal W}}): x \in G \} \subset G \times \prod_{w \in {\mathcal W}} H_w.

For instance, the property that two functions {f,g: {\bf Z} \rightarrow H} differ by a constant can be expressed in terms of the tiling equation

\displaystyle  A \oplus (\{0\} \times H^2) = {\bf Z} \times H^2

(the vertical line test), as well as

\displaystyle  A \oplus (\{0\} \times \Delta \cup \{1\} \times (H^2 \backslash \Delta)) = {\bf Z} \times H^2,

where {\Delta = \{ (h,h): h \in H \}} is the diagonal subgroup of {H^2}. A weakly expressible property {P} is an existential quantification of some expressible property {P^*}, so that a tuple of functions {(f_w)_{w \in {\mathcal W}}} obeys the property {P} if and only if there exists an extension of this tuple by some additional functions that obey the property {P^*}. It turns out that weakly expressible properties are closed under a number of useful operations, and allow us to easily construct quite complicated weakly expressible properties out of a “library” of simple weakly expressible properties, much as a complex computer program can be constructed out of simple library routines. In particular we will be able to “program” our Sudoku puzzle as a weakly expressible property.

It’s been a while since I’ve actively participated in social media outside of this blog – I was active in Google Buzz/Google+ for a while, until that service closed – but I’ve decided to try out Mathstodon, one of the servers of the open source social media software platform Mastodon. As I understand it, Mastodon functions in many ways similar to the significantly more well-known platform Twitter, but is decentralized into a federation of servers that share content with each other but can have their own moderation rules and add-ons. For instance, the Mathstodon server has the additional feature of supporting LaTeX in its posts. Another consequence of this decentralization is that if one for some reason ends up disagreeing with the administration of the server one is in, one has the option of transferring one’s account to a different server while staying on the same platform.

I just created an account at Mathstodon and it currently has very little content, but I hope to add some soon (though I will probably not be as prolific as some other mathematicians already on that site, such as John Baez or Nalini Joshi).
