This fall (starting Monday, September 26), I will be teaching a graduate topics course which I have entitled “Hilbert’s fifth problem and related topics.” The course is going to focus on three related topics:
- Hilbert’s fifth problem on the topological description of Lie groups, as well as the closely related (local) classification of locally compact groups (the Gleason-Yamabe theorem).
- Approximate groups in nonabelian groups, and their classification via the Gleason-Yamabe theorem (this is very recent work of Emmanuel Breuillard, Ben Green, Tom Sanders, and myself, building upon earlier work of Hrushovski);
- Gromov’s theorem on groups of polynomial growth, as proven via the classification of approximate groups (as well as some consequences to fundamental groups of Riemannian manifolds).
I have already blogged about these topics repeatedly in the past (particularly with regard to Hilbert’s fifth problem), and I intend to recycle some of that material in the lecture notes for this course.
The above three families of results exemplify two broad principles (part of what I like to call “the dichotomy between structure and randomness“):
- (Rigidity) If a group-like object exhibits a weak amount of regularity, then it (or a large portion thereof) often automatically exhibits a strong amount of regularity as well;
- (Structure) This strong regularity manifests itself either as Lie type structure (in continuous settings) or nilpotent type structure (in discrete settings). (In some cases, “nilpotent” should be replaced by sister properties such as “abelian“, “solvable“, or “polycyclic“.)
Let me illustrate what I mean by these two principles with two simple examples, one in the continuous setting and one in the discrete setting. We begin with a continuous example. Given an $n \times n$ complex matrix $A$, define the matrix exponential $\exp(A)$ of $A$ by the formula
$$\exp(A) := \sum_{k=0}^\infty \frac{A^k}{k!} = 1 + A + \frac{1}{2!} A^2 + \frac{1}{3!} A^3 + \cdots$$
which can easily be verified to be an absolutely convergent series.
Exercise 1 Show that the map $A \mapsto \exp(A)$ is a real analytic (and even complex analytic) map from the space of $n \times n$ complex matrices to itself, and obeys the restricted homomorphism property
$$\exp(sA) \exp(tA) = \exp((s+t)A) \ \ \ \ \ (1)$$
for all $n \times n$ complex matrices $A$ and all $s, t \in \mathbb{C}$.
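As a quick sanity check (not part of the original exercise, and assuming a standard Python/scipy environment), one can verify the restricted homomorphism property (1) numerically for a random complex matrix:

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of the restricted homomorphism property (1):
# exp(sA) exp(tA) = exp((s+t)A) for a random complex matrix A.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
s, t = 0.7, -1.3

lhs = expm(s * A) @ expm(t * A)
rhs = expm((s + t) * A)
print(np.linalg.norm(lhs - rhs))  # tiny (floating-point error only)
```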
Proposition 1 (Rigidity and structure of matrix homomorphisms) Let $n$ be a natural number. Let $GL_n(\mathbb{C})$ be the group of invertible $n \times n$ complex matrices. Let $\Phi: \mathbb{R} \to GL_n(\mathbb{C})$ be a map obeying two properties:
- (Group-like object) $\Phi$ is a homomorphism, thus $\Phi(s) \Phi(t) = \Phi(s+t)$ for all $s, t \in \mathbb{R}$.
- (Weak regularity) The map $t \mapsto \Phi(t)$ is continuous.

Then:
- (Strong regularity) The map $t \mapsto \Phi(t)$ is smooth (i.e. infinitely differentiable). In fact it is even real analytic.
- (Lie-type structure) There exists a (unique) complex $n \times n$ matrix $A$ such that $\Phi(t) = \exp(tA)$ for all $t \in \mathbb{R}$.
Proof: Let $\Phi$ be as above. Let $\varepsilon > 0$ be a small number (depending only on $n$). By the homomorphism property, $\Phi(0) = 1$ (where we use $1$ here to denote the identity element of $GL_n(\mathbb{C})$), and so by continuity we may find a small $t_0 > 0$ such that $\Phi(t) = 1 + O(\varepsilon)$ for all $t \in [-t_0, t_0]$ (we use some arbitrary norm here on the space of $n \times n$ matrices, and allow implied constants in the $O()$ notation to depend on $n$).
The map $A \mapsto \exp(A)$ is real analytic and (by the inverse function theorem) is a diffeomorphism near $0$. Thus, by the inverse function theorem, we can (if $\varepsilon$ is small enough) find a matrix $A$ of size $A = O(\varepsilon)$ such that $\Phi(t_0) = \exp(A)$. By the homomorphism property and (1), we thus have
$$\Phi(t_0/2)^2 = \Phi(t_0) = \exp(A) = \exp(A/2)^2.$$
On the other hand, by another application of the inverse function theorem we see that the squaring map $B \mapsto B^2$ is a diffeomorphism near $1$ in $GL_n(\mathbb{C})$, and thus (if $\varepsilon$ is small enough)
$$\Phi(t_0/2) = \exp(A/2).$$
We may iterate this argument (for a fixed, but small, value of $\varepsilon$) and conclude that
$$\Phi(t_0/2^k) = \exp(A/2^k)$$
for all $k = 0, 1, 2, \ldots$. By the homomorphism property and (1) we thus have
$$\Phi(q t_0) = \exp(qA)$$
whenever $q$ is a dyadic rational, i.e. a rational of the form $a/2^k$ for some integer $a$ and natural number $k$. By continuity we thus have
$$\Phi(s t_0) = \exp(sA)$$
for all real $s$. Setting $B := A/t_0$ we conclude that
$$\Phi(t) = \exp(tB)$$
for all real $t$, which gives existence of the representation and also real analyticity and smoothness. Finally, uniqueness of the representation $\Phi(t) = \exp(tB)$ follows from the identity
$$B = \frac{d}{dt} \exp(tB) \Big|_{t=0}. \qquad \Box$$
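As an aside, the uniqueness identity above can be illustrated numerically; the sketch below (an illustration of the statement, not of the proof) recovers the generator of a one-parameter matrix group from a finite-difference approximation of the derivative at zero:

```python
import numpy as np
from scipy.linalg import expm

# Recover the generator B of a one-parameter matrix group Phi(t) = exp(tB)
# from the identity B = d/dt Phi(t) at t = 0 (central difference approximation).
rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Phi = lambda t: expm(t * B)

h = 1e-6
B_recovered = (Phi(h) - Phi(-h)) / (2 * h)
print(np.linalg.norm(B_recovered - B))  # small (limited by truncation and rounding error)
```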
Exercise 2 Generalise Proposition 1 by replacing the hypothesis that $\Phi$ is continuous with the hypothesis that $\Phi$ is Lebesgue measurable. (Hint: use the Steinhaus theorem.) Show that the proposition fails (assuming the axiom of choice) if this hypothesis is omitted entirely.
Note how one needs both the group-like structure and the weak regularity in combination in order to ensure the strong regularity; neither is sufficient on its own. We will see variants of the above basic argument throughout the course. Here, the task of obtaining smooth (or real analytic) structure was relatively easy, because we could borrow the smooth (or real analytic) structure of the domain $\mathbb{R}$ and range $GL_n(\mathbb{C})$; but, somewhat remarkably, we shall see that one can still build such smooth or analytic structures even when none of the original objects have any such structure to begin with.
Now we turn to a second illustration of the above principles, namely Jordan’s theorem, which uses a discreteness hypothesis to upgrade Lie type structure to nilpotent (and in this case, abelian) structure. We shall formulate Jordan’s theorem in a slightly stilted fashion in order to emphasise the adherence to the above-mentioned principles.
Theorem 2 (Jordan’s theorem) Let
be an object with the following properties:
- (Group-like object)
is a group.
- (Discreteness)
is finite.
- (Lie-type structure)
is contained in
(the group of unitary
matrices) for some
.
Then there is a subgroup
of
such that
- (
is close to
) The index
of
in
is
(i.e. bounded by
for some quantity
depending only on
).
- (Nilpotent-type structure)
is abelian.
A key observation in the proof of Jordan's theorem is that if two unitary elements $g, h$ are close to the identity, then their commutator $[g,h] := g h g^{-1} h^{-1}$ is even closer to the identity (in, say, the operator norm $\| \cdot \|_{op}$). Indeed, since multiplication on the left or right by unitary elements does not affect the operator norm, we have
$$\| [g,h] - 1 \|_{op} = \| gh - hg \|_{op} = \| (g-1)(h-1) - (h-1)(g-1) \|_{op}$$
and so by the triangle inequality
$$\| [g,h] - 1 \|_{op} \leq 2 \| g - 1 \|_{op} \| h - 1 \|_{op}. \ \ \ \ \ (2)$$
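For the sceptical reader, here is a quick numerical check of the commutator estimate (2) for randomly generated unitary matrices near the identity (an illustration only; the random generator and parameters are ad hoc):

```python
import numpy as np
from scipy.linalg import expm

# Check ||[g,h] - 1||_op <= 2 ||g - 1||_op ||h - 1||_op for unitary g, h.
def random_unitary_near_identity(n, eps, rng):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (X - X.conj().T) / 2          # skew-Hermitian generator
    return expm(eps * H)              # unitary, within O(eps) of the identity

op = lambda M: np.linalg.norm(M, 2)   # operator norm (largest singular value)
rng = np.random.default_rng(2)
n, I = 4, np.eye(4)
g = random_unitary_near_identity(n, 0.05, rng)
h = random_unitary_near_identity(n, 0.05, rng)
comm = g @ h @ np.linalg.inv(g) @ np.linalg.inv(h)
print(op(comm - I), 2 * op(g - I) * op(h - I))  # the first value is the smaller one
```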
Now we can prove Jordan’s theorem.
Proof: We induct on $n$, the case $n = 1$ being trivial. Suppose first that $G$ contains a central element $g$ which is not a multiple of the identity. Then, by definition, $G$ is contained in the centraliser $Z(g)$ of $g$, which by the spectral theorem is isomorphic to a product $U_{n_1}(\mathbb{C}) \times \cdots \times U_{n_k}(\mathbb{C})$ of smaller unitary groups. Projecting $G$ to each of these factor groups and applying the induction hypothesis, we obtain the claim.
Thus we may assume that $G$ contains no central elements other than multiples of the identity. Now pick a small $\varepsilon > 0$ (depending only on $n$) and consider the subgroup $G'$ of $G$ generated by those elements of $G$ that are within $\varepsilon$ of the identity (in the operator norm). By considering a maximal $\varepsilon$-net of $G$ we see that $G'$ has index at most $O_{n,\varepsilon}(1)$ in $G$. By arguing as before, we may assume that $G'$ has no central elements other than multiples of the identity.
If $G'$ consists only of multiples of the identity, then we are done. If not, take an element $g$ of $G'$ that is not a multiple of the identity, and which is as close as possible to the identity (here is where we crucially use that $G$ is finite). By (2), we see that if $\varepsilon$ is sufficiently small depending on $n$, and if $h$ is one of the generators of $G'$, then $[g,h]$ lies in $G'$ and is closer to the identity than $g$, and is thus a multiple of the identity. On the other hand, $[g,h]$ has determinant $1$. Given that it is so close to the identity, it must therefore be the identity (if $\varepsilon$ is small enough). In other words, $g$ is central in $G'$. But $g$ is not a multiple of the identity, contradicting the hypothesis that $G'$ has no central elements other than multiples of the identity, and we are done. $\Box$
Commutator estimates such as (2) will play a fundamental role in many of the arguments we will see in this course; as we saw above, such estimates combine very well with a discreteness hypothesis, but will also be very useful in the continuous setting.
Exercise 3 Generalise Jordan’s theorem to the case when
is a finite subgroup of
rather than of
. (Hint: The elements of
are not necessarily unitary, and thus do not necessarily preserve the standard Hilbert inner product of
. However, if one averages that inner product by the finite group
, one obtains a new inner product on
that is preserved by
, which allows one to conjugate
to a subgroup of
. This averaging trick is (a small) part of Weyl’s unitary trick in representation theory.)
Exercise 4 (Inability to discretise nonabelian Lie groups) Show that if $n \geq 3$, then the orthogonal group $O_n(\mathbb{R})$ cannot contain arbitrarily dense finite subgroups, in the sense that there exists an $\varepsilon = \varepsilon_n > 0$ depending only on $n$ such that for every finite subgroup $G$ of $O_n(\mathbb{R})$, there exists a ball of radius $\varepsilon$ in $O_n(\mathbb{R})$ (with, say, the operator norm metric) that is disjoint from $G$. What happens in the $n \leq 2$ case?
Remark 1 More precise classifications of the finite subgroups of $U_n(\mathbb{C})$ are known, particularly in low dimensions. For instance, one can show that the only finite subgroups of $SO_3(\mathbb{R})$ (which $SU_2(\mathbb{C})$ is a double cover of) are isomorphic to either a cyclic group, a dihedral group, or the symmetry group of one of the Platonic solids.
One of the most notorious problems in elementary mathematics that remains unsolved is the Collatz conjecture, concerning the function $f: \mathbb{N} \to \mathbb{N}$ defined by setting $f(n) := 3n+1$ when $n$ is odd, and $f(n) := n/2$ when $n$ is even. (Here, $\mathbb{N}$ is understood to be the positive natural numbers $\{1, 2, 3, \ldots\}$.)
Conjecture 1 (Collatz conjecture) For any given natural number $n$, the orbit $n, f(n), f^2(n), f^3(n), \ldots$ passes through $1$ (i.e. $f^k(n) = 1$ for some $k$).
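For concreteness, here is a minimal script (not from the original post) that computes Collatz orbits and confirms the conjecture for small starting values; of course the loop is only guaranteed to terminate because the conjecture is known to hold in this range:

```python
def collatz_orbit(n):
    """Return the orbit of n under f(n) = 3n+1 (n odd), n/2 (n even), stopping at 1."""
    orbit = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        orbit.append(n)
    return orbit

# Empirical check of Conjecture 1 for a small range of starting values.
assert all(collatz_orbit(n)[-1] == 1 for n in range(1, 10**4))
print(len(collatz_orbit(27)) - 1)  # 111: the famously long orbit of 27
```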
Open questions with this level of notoriety can lead to what Richard Lipton calls “mathematical diseases” (and what I termed an unhealthy amount of obsession on a single famous problem). (See also this xkcd comic regarding the Collatz conjecture.) As such, most practicing mathematicians tend to spend the majority of their time on more productive research areas that are only just beyond the range of current techniques. Nevertheless, it can still be diverting to spend a day or two each year on these sorts of questions, before returning to other matters; so I recently had a go at the problem. Needless to say, I didn’t solve the problem, but I have a better appreciation of why the conjecture is (a) plausible, and (b) unlikely to be proven by current technology, and I thought I would share what I had found out here on this blog.
Let me begin with some very well known facts. If $n$ is odd, then $f(n) = 3n+1$ is even, and so $f^2(n) = \frac{3n+1}{2}$. Because of this, one could replace $f$ by the function $\tilde f: \mathbb{N} \to \mathbb{N}$, defined by $\tilde f(n) := \frac{3n+1}{2}$ when $n$ is odd, and $\tilde f(n) := \frac{n}{2}$ when $n$ is even, and obtain an equivalent conjecture. Now we see that if one chooses $n$ “at random”, in the sense that it is odd with probability $1/2$ and even with probability $1/2$, then $\tilde f$ increases $n$ by a factor of roughly $\frac{3}{2}$ half the time, and decreases it by a factor of $2$ half the time. Furthermore, if $n$ is uniformly distributed modulo $4$, one easily verifies that $\tilde f(n)$ is uniformly distributed modulo $2$, and so $\tilde f^2(n)$ should be roughly $\frac{3}{2}$ times as large as $\tilde f(n)$ half the time, and roughly $\frac{1}{2}$ times as large as $\tilde f(n)$ the other half of the time. Continuing this at a heuristic level, we expect generically that $\tilde f^{k+1}(n) \approx \frac{3}{2} \tilde f^k(n)$ half the time, and $\tilde f^{k+1}(n) \approx \frac{1}{2} \tilde f^k(n)$ the other half of the time. The logarithm $\log \tilde f^k(n)$ of this orbit can then be modeled heuristically by a random walk with steps $\log \frac{3}{2}$ and $\log \frac{1}{2}$ occurring with equal probability. The expectation
$$\frac{1}{2} \log \frac{3}{2} + \frac{1}{2} \log \frac{1}{2} = \frac{1}{2} \log \frac{3}{4}$$
is negative, and so (by the classic gambler's ruin) we expect the orbit to decrease over the long term. This can be viewed as heuristic justification of the Collatz conjecture, at least in the “average case” scenario in which $n$ is chosen uniformly at random (e.g. in some large interval $[1, N]$). (It also suggests that if one modifies the problem, e.g. by replacing $3n+1$ with $5n+1$, then one can obtain orbits that tend to increase over time, and indeed numerically for this variant one sees orbits that appear to escape to infinity.) Unfortunately, one can only rigorously keep the orbit uniformly distributed modulo $2$ for time about $O(\log n)$ or so; after that, the system is too complicated for naive methods to control at anything other than a heuristic level.
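One can simulate the random walk model just described; the sketch below (an illustration of the heuristic, not a rigorous statement about Collatz orbits) compares the empirical drift of the walk with the negative expectation $\frac{1}{2}\log\frac{3}{4}$:

```python
import math
import random

# Random walk on log(n) with steps log(3/2) and log(1/2), each with probability 1/2.
random.seed(0)
steps = [math.log(3 / 2), math.log(1 / 2)]
trials, length = 2000, 1000
drift = sum(sum(random.choice(steps) for _ in range(length))
            for _ in range(trials)) / (trials * length)
print(drift, 0.5 * math.log(3 / 4))  # both close to -0.1438...
```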
Remark 1 One can obtain a rigorous analogue of the above arguments by extending $\tilde f$ from the integers $\mathbb{Z}$ to the $2$-adics $\mathbb{Z}_2$. This compact abelian group comes with a Haar probability measure, and one can verify that this measure is invariant with respect to $\tilde f$; with a bit more effort one can verify that it is ergodic. This suggests the introduction of ergodic theory methods. For instance, using the pointwise ergodic theorem, we see that if $n$ is a random $2$-adic integer, then almost surely the orbit $n, \tilde f(n), \tilde f^2(n), \ldots$ will be even half the time and odd half the time asymptotically, thus supporting the above heuristics. Unfortunately, this does not directly tell us much about the dynamics on $\mathbb{N}$, as this is a measure zero subset of $\mathbb{Z}_2$. More generally, unless a dynamical system is somehow “polynomial”, “nilpotent”, or “unipotent” in nature, the current state of ergodic theory is usually only able to say something meaningful about generic orbits, but not about all orbits. For instance, the very simple system $x \mapsto 10 x \pmod 1$ on the unit circle $\mathbb{R}/\mathbb{Z}$ is well understood from ergodic theory (in particular, almost all orbits will be uniformly distributed), but the orbit of a specific point, e.g. $\pi \pmod 1$, is still nearly impossible to understand (this particular problem being equivalent to the notorious unsolved question of whether the digits of $\pi$ are uniformly distributed).
The above heuristic argument only suggests decreasing orbits for almost all $n$ (though even this remains unproven; the state of the art is that the number of $n$ in $\{1, \ldots, x\}$ that eventually go to $1$ is $\gg x^{0.84}$, a result of Krasikov and Lagarias). It leaves open the possibility of some very rare exceptional $n$ for which the orbit goes to infinity, or gets trapped in a periodic loop. Since the only loop that $1$ lies in is $1 \to 4 \to 2 \to 1$ (for $f$) or $1 \to 2 \to 1$ (for $\tilde f$), we thus may isolate a weaker consequence of the Collatz conjecture:
Conjecture 2 (Weak Collatz conjecture) Suppose that $n$ is a natural number such that $f^k(n) = n$ for some $k \geq 1$. Then $n$ is equal to $1$, $2$, or $4$.
Of course, we may replace $f$ with $\tilde f$ (and delete “$4$”) and obtain an equivalent conjecture.
This weaker version of the Collatz conjecture is also unproven. However, it was observed by Bohm and Sontacchi that this weak conjecture is equivalent to a divisibility problem involving powers of $2$ and $3$:

Conjecture 3 (Reformulated weak Collatz conjecture) There does not exist $k \geq 1$ and integers
$$1 \leq a_1 < a_2 < \cdots < a_k$$
such that $2^{a_k} - 3^k$ is a positive integer that is a proper divisor of
$$3^{k-1} + 3^{k-2} 2^{a_1} + \cdots + 3 \cdot 2^{a_{k-2}} + 2^{a_{k-1}},$$
i.e.
$$n (2^{a_k} - 3^k) = 3^{k-1} + 3^{k-2} 2^{a_1} + \cdots + 2^{a_{k-1}} \ \ \ \ \ (1)$$
for some natural number $n > 1$.

Proposition 4 Conjecture 2 and Conjecture 3 are equivalent.
Proof: To see this, it is convenient to reformulate Conjecture 2 slightly. Define an equivalence relation on
by declaring
if
for some integer
, thus giving rise to the quotient space
of equivalence classes
(which can be placed, if one wishes, in one-to-one correspondence with the odd natural numbers). We can then define a function
by declaring
for any , where
is the largest power of
that divides
. It is easy to see that
is well-defined (it is essentially the Syracuse function, after identifying
with the odd natural numbers), and that periodic orbits of
correspond to periodic orbits of
or
. Thus, Conjecture 2 is equivalent to the conjecture that
is the only periodic orbit of
.
Now suppose that Conjecture 2 failed, thus there exists such that
for some
. Without loss of generality we may take
to be odd, then
. It is easy to see that
is the only fixed point of
, and so
. An easy induction using (2) shows that
where, for each ,
is the largest power of
that divides
In particular, as is odd,
. Using the recursion
we see from induction that divides
, and thus
:
Since , we have
for some integer . Since
is divisible by
, and
is odd, we conclude
; if we rearrange the above equation as (1), then we obtain a counterexample to Conjecture 3.
Conversely, suppose that Conjecture 3 failed. Then we have , integers
and a natural number such that (1) holds. As
, we see that the right-hand side of (1) is odd, so
is odd also. If we then introduce the natural numbers
by the formula (3), then an easy induction using (4) shows that
with the periodic convention for
. As the
are increasing in
(even for
), we see that
is the largest power of
that divides the right-hand side of (5); as
is odd, we conclude that
is also the largest power of
that divides
. We conclude that
and thus is a periodic orbit of
. Since
is an odd number larger than
, this contradicts Conjecture 2. $\Box$
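For readers who wish to experiment, here is a sketch of the Syracuse function mentioned in the proof above (in its standard form on the odd natural numbers), together with a purely empirical search for short nontrivial cycles:

```python
# Syracuse function: strip all factors of 2 from 3n+1 (n odd).
def syracuse(n):
    assert n % 2 == 1
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

# Conjecture 2 in this language: the only periodic orbit of Syr is the fixed
# point 1.  A quick (purely empirical) check for short cycles through small n.
for n in range(3, 10**4, 2):
    x = n
    for _ in range(100):
        x = syracuse(x)
        assert x != n
print("no nontrivial Syr-cycles of length <= 100 found through odd n < 10^4")
```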
Call a counterexample a tuple $(k, a_1, \ldots, a_k)$ that contradicts Conjecture 3, i.e. an integer $k \geq 1$ and an increasing set of integers $1 \leq a_1 < \cdots < a_k$ such that (1) holds for some $n > 1$. We record a simple bound on such counterexamples, due to Terras and to Garner:
Lemma 5 (Exponent bounds) Let
, and suppose that the Collatz conjecture is true for all
. Let
be a counterexample. Then
Proof: The first bound is immediate from the positivity of . To prove the second bound, observe from the proof of Proposition 4 that the counterexample
will generate a counterexample to Conjecture 2, i.e. a non-trivial periodic orbit
. As the conjecture is true for all
, all terms in this orbit must be at least
. An inspection of the proof of Proposition 4 reveals that this orbit consists of
steps of the form
, and
steps of the form
. As all terms are at least
, the former steps can increase magnitude by a multiplicative factor of at most
. As the orbit returns to where it started, we conclude that
whence the claim.
The Collatz conjecture has already been verified for many values of (up to at least
, according to this web site). Inserting this into the above lemma, one can get lower bounds on
. For instance, by methods such as this, it is known that any non-trivial periodic orbit has length at least
, as shown in Garner’s paper (and this bound, which uses the much smaller value
that was available in 1981, can surely be improved using the most recent computational bounds).
Now we can perform a heuristic count on the number of counterexamples. If we fix and
, then
, and from basic combinatorics we see that there are
different ways to choose the remaining integers
to form a potential counterexample . As a crude heuristic, one expects that for a “random” such choice of integers, the expression (1) has a probability
of holding for some integer
. (Note that
is not divisible by
or
, and so one does not expect the special structure of the right-hand side of (1) with respect to those moduli to be relevant. There will be some choices of
where the right-hand side in (1) is too small to be divisible by
, but using the estimates in Lemma 5, one expects this to occur very infrequently.) Thus, the total expected number of solutions for this choice of
is
The heuristic number of solutions overall is then expected to be
where, in view of Lemma 5, one should restrict the double summation to the heuristic regime , with the approximation here accurate to many decimal places.
We need a lower bound on . Here, we will use Baker’s theorem (as discussed in this previous post), which among other things gives the lower bound
for some absolute constant . Meanwhile, Stirling’s formula (as discussed in this previous post) combined with the approximation
gives
where is the entropy function
A brief computation shows that
and so (ignoring all subexponential terms)
which makes the series (6) convergent. (Actually, one does not need the full strength of Lemma 5 here; anything that kept well away from
would suffice. In particular, one does not need an enormous value of
; even
(say) would be more than sufficient to obtain the heuristic that there are finitely many counterexamples.) Heuristically applying the Borel-Cantelli lemma, we thus expect that there are only a finite number of counterexamples to the weak Collatz conjecture (and inserting a bound such as
, one in fact expects it to be extremely likely that there are no counterexamples at all).
This, of course, is far short of any rigorous proof of Conjecture 2. In order to make rigorous progress on this conjecture, it seems that one would need to somehow exploit the structural properties of numbers of the form
In some very special cases, this can be done. For instance, suppose that one had with at most one exception (this is essentially what is called a
-cycle by Steiner). Then (8) simplifies via the geometric series formula to a combination of just a bounded number of powers of
and
, rather than an unbounded number. In that case, one can start using tools from transcendence theory such as Baker’s theorem to obtain good results; for instance, in the above-referenced paper of Steiner, it was shown that
-cycles cannot actually occur, and similar methods have been used to show that
-cycles (in which there are at most
exceptions to
) do not occur for any
, as was shown by Simons and de Weger. However, for general increasing tuples of integers
, there is no such representation by bounded numbers of powers, and it does not seem that methods from transcendence theory will be sufficient to control the expressions (8) to the extent that one can understand their divisibility properties by quantities such as
.
Amusingly, there is a slight connection to Littlewood-Offord theory in additive combinatorics – the study of the random sums
generated by some elements of an additive group
, or equivalently, the vertices of an
-dimensional parallelepiped inside
. Here, the relevant group is
. The point is that if one fixes
and
(and hence
), and lets
vary inside the simplex
then the set of all sums of the form (8) (viewed as an element of
) contains many large parallelepipeds. (Note, incidentally, that once one fixes
, all the sums of the form (8) are distinct; because given (8) and
, one can read off
as the largest power of
that divides (8), and then subtracting off
one can then read off
, and so forth.) This is because the simplex
contains many large cubes. Indeed, if one picks a typical element
of
, then one expects (thanks to Lemma 5) that there will be
indices
such that
for
, which allows one to adjust each of the
independently by
if desired and still remain inside
. This gives a cube in
of dimension
, which then induces a parallelepiped of the same dimension in
. A short computation shows that the generators of this parallelepiped consist of products of a power of
and a power of
, and in particular will be coprime to
.
If the weak Collatz conjecture is true, then the set must avoid the residue class
in
. Let us suppose temporarily that we did not know about Baker’s theorem (and the associated bound (7)), so that
could potentially be quite small. Then we would have a large parallelepiped inside a small cyclic group
that did not cover all of
, which would not be possible for
small enough. Indeed, an easy induction shows that a
-dimensional parallelepiped in
, with all generators coprime to
, has cardinality at least
. This argument already shows the lower bound
. In other words, we have
Proposition 6 Suppose the weak Collatz conjecture is true. Then for any natural numbers
with
, one has
.
This bound is very weak when compared against the unconditional bound (7). However, I know of no way to get a nontrivial separation property between powers of and powers of
other than via transcendence theory methods. Thus, this result strongly suggests that any proof of the Collatz conjecture must either use existing results in transcendence theory, or else must contribute a new method to give non-trivial results in transcendence theory. (This already rules out a lot of possible approaches to solve the Collatz conjecture.)
By using more sophisticated tools in additive combinatorics, one can improve the above proposition (though it is still well short of the transcendence theory bound (7)):
Proposition 7 Suppose the weak Collatz conjecture is true. Then for any natural numbers
with
, one has
for some absolute constant
.
Proof: (Informal sketch only) Suppose not, then we can find with
of size
. We form the set
as before, which contains parallelepipeds in
of large dimension
that avoid
. We can count the number of times
occurs in one of these parallelepipeds by a standard Fourier-analytic computation involving Riesz products (see Chapter 7 of my book with Van Vu, or this recent preprint of Maples). Using this Fourier representation, the fact that this parallelepiped avoids
(and the fact that
) forces the generators
to be concentrated in a Bohr set, in that one can find a non-zero frequency
such that
of the
generators lie in the set
. However, one can choose the generators to essentially have the structure of a (generalised) geometric progression (up to scaling, it resembles something like
for
ranging over a generalised arithmetic progression, and
a fixed irrational), and one can show that such progressions cannot be concentrated in Bohr sets (this is similar in spirit to the exponential sum estimates of Bourgain on approximate multiplicative subgroups of
, though one can use more elementary methods here due to the very strong nature of the Bohr set concentration (being of the “
concentration” variety rather than the “
concentration”).). This furnishes the required contradiction.
Thus we see that any proposed proof of the Collatz conjecture must either use transcendence theory, or introduce new techniques that are powerful enough to create exponential separation between powers of and powers of
.
Unfortunately, once one uses the transcendence theory bound (7), the size of the cyclic group
becomes larger than the volume of any cube in
, and Littlewood-Offord techniques are no longer of much use (they can be used to show that
is highly equidistributed in
, but this does not directly give any way to prevent
from containing
).
One possible toy model problem for the (weak) Collatz conjecture is a conjecture of Erdos asserting that for $n > 8$, the base $3$ representation of $2^n$ contains at least one $2$. (See this paper of Lagarias for some work on this conjecture and on related problems.) To put it another way, the conjecture asserts that there are no integer solutions to
$$2^n = 3^{a_1} + 3^{a_2} + \cdots + 3^{a_k}$$
with $n > 8$ and $0 \leq a_1 < a_2 < \cdots < a_k$. (When $n = 8$, of course, one has $2^8 = 3^0 + 3^1 + 3^2 + 3^5$.) In this form we see a resemblance to Conjecture 3, but it looks like a simpler problem to attack (though one which is still a fair distance beyond what one can do with current technology). Note that one has a similar heuristic support for this conjecture as one does for Conjecture 3; a number of magnitude $2^n$ has about $n \frac{\log 2}{\log 3}$ base $3$ digits, so the heuristic probability that none of these digits are equal to $2$ is $(2/3)^{n \log 2/\log 3}$, which is absolutely summable.
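This conjecture is easy to test numerically; the following sketch (an illustration, with an arbitrary cutoff) searches for exponents $n$ for which $2^n$ has no digit $2$ in base $3$:

```python
# Search for exceptions to the Erdos conjecture: 2^n with no base-3 digit equal to 2.
def base3_digits(m):
    digits = []
    while m:
        m, r = divmod(m, 3)
        digits.append(r)
    return digits

exceptions = [n for n in range(0, 2001) if 2 not in base3_digits(2 ** n)]
print(exceptions)  # [0, 2, 8] -- no further exceptions in this range
```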
I’ve been focusing my blog posts recently on the mathematics around Hilbert’s fifth problem (is every locally Euclidean group a Lie group?), but today, I will be discussing another of Hilbert’s problems, namely Hilbert’s seventh problem, on the transcendence of powers of two algebraic numbers
. (I am not randomly going through Hilbert’s list, by the way; I hope to explain my interest in the seventh problem in a later post.) This problem was famously solved by Gelfond and Schneider in the 1930s:
Theorem 1 (Gelfond-Schneider theorem) Let $\alpha, \beta$ be algebraic numbers, with $\alpha \neq 0, 1$ and $\beta$ irrational. Then (any of the values of the possibly multi-valued expression) $\alpha^\beta$ is transcendental.
For sake of simplifying the discussion, let us focus on just one specific consequence of this theorem:
Corollary 2 $\log_2 3$ is transcendental.

Proof: If not, one could obtain a contradiction to the Gelfond-Schneider theorem by setting $\alpha := 2$ and $\beta := \log_2 3$. (Note that $\log_2 3$ is clearly irrational, since $3^q \neq 2^p$ for any integers $p, q$ with $q$ positive.) $\Box$
In the 1960s, Alan Baker established a major generalisation of the Gelfond-Schneider theorem known as Baker’s theorem, as part of his work in transcendence theory that later earned him a Fields Medal. Among other things, this theorem provided explicit quantitative bounds on exactly how transcendental quantities such as $\log_2 3$ were. In particular, it gave a strong bound on how irrational such quantities were (i.e. how easily they were approximable by rationals). Here, in particular, is one special case of Baker’s theorem:
Proposition 3 (Special case of Baker’s theorem) For any integers
with
positive, one has
for some absolute (and effectively computable) constants
.
This theorem may be compared with (the easily proved) Liouville’s theorem on diophantine approximation, which asserts that if $\alpha$ is an irrational algebraic number of degree $d$, then
$$\left| \alpha - \frac{p}{q} \right| \geq \frac{c}{q^d}$$
for all $p, q$ with $q$ positive, and some effectively computable $c > 0$, and (the more significantly difficult) Thue-Siegel-Roth theorem, which under the same hypotheses gives the bound
$$\left| \alpha - \frac{p}{q} \right| \geq \frac{c_\varepsilon}{q^{2+\varepsilon}}$$
for all $\varepsilon > 0$, all $p, q$ with $q$ positive and an ineffective constant $c_\varepsilon > 0$. Finally, one should compare these results against Dirichlet’s theorem on Diophantine approximation, which asserts that for any real number $\alpha$ one has
$$\left| \alpha - \frac{p}{q} \right| < \frac{1}{q^2}$$
for infinitely many $p, q$ with $q$ positive. (The reason the Thue-Siegel-Roth theorem is ineffective is because it relies heavily on the dueling conspiracies argument, i.e. playing off multiple “conspiracies” $\alpha \approx \frac{p}{q}$, $\alpha \approx \frac{p'}{q'}$ against each other; the other results however only focus on one approximation at a time and thus avoid ineffectivity.)
Proposition 3 easily implies the following separation property between powers of $2$ and powers of $3$:

Corollary 4 (Separation between powers of $2$ and powers of $3$) For any positive integers $p, q$ one has
$$|2^p - 3^q| \geq \frac{c}{q^C} 2^p$$
for some effectively computable constants $c > 0$ and $C$ (which may be slightly different from those in Proposition 3).

Indeed, this follows quickly from Proposition 3, the identity
$$2^p - 3^q = 2^p \left( 1 - 2^{q \log_2 3 - p} \right) \ \ \ \ \ (1)$$
and some elementary estimates.
In particular, the gap $|2^p - 3^q|$ between powers of three and powers of two grows exponentially in the exponents $p, q$. I do not know of any other way to establish this fact other than essentially going through some version of Baker’s argument (which will be given below).
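To get a feel for Corollary 4, one can compute the relative gap between $3^q$ and the nearest power of $2$ for a few values of $q$ (a numerical illustration only; it proves nothing):

```python
import math

# For each q, take the nearest power of 2 to 3^q and record the relative gap
# |2^p - 3^q| / 2^p.  Baker-type bounds say this decays at most polynomially in
# the exponent, whereas the trivial bound only rules out exponentially fast decay.
for q in (5, 10, 20, 40, 80):
    p = round(q * math.log2(3))            # nearest exponent
    gap = abs(2 ** p - 3 ** q) / 2 ** p    # exact integers, then a float ratio
    print(q, p, f"{gap:.3e}")
```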
For comparison, by exploiting the trivial (yet fundamental) integrality gap – the obvious fact that if an integer $n$ is non-zero, then its magnitude is at least $1$ – we have the trivial bound
$$|2^p - 3^q| \geq 1$$
for all positive integers $p, q$ (since, from the fundamental theorem of arithmetic, $2^p - 3^q$ cannot vanish). Putting this into (1) we obtain a very weak version of Proposition 3, that only gives exponential bounds instead of polynomial ones:
Proposition 5 (Trivial bound) For any integers $p, q$ with $q$ positive, one has
$$\left| \log_2 3 - \frac{p}{q} \right| \geq \frac{c}{4^q}$$
for some absolute (and effectively computable) constant $c > 0$.
The proof of Baker’s theorem (or even of the simpler special case in Proposition 3) is largely elementary (except for some very basic complex analysis), but is quite intricate and lengthy, as a lot of careful book-keeping is necessary in order to get a bound as strong as that in Proposition 3. To illustrate the main ideas, I will prove a bound that is weaker than Proposition 3, but still significantly stronger than Proposition 5, and whose proof already captures many of the key ideas of Baker:
Proposition 6 (Weak special case of Baker’s theorem) For any integers $p, q$ with $q \geq 2$, one has
$$\left| \log_2 3 - \frac{p}{q} \right| \geq c \exp\left( - C \log^{C'} q \right)$$
for some absolute constants $c, C, C' > 0$.

Note that Proposition 3 is equivalent to the assertion that one can take $C' = 1$ (and $c, C$ effective) in the above proposition.

The proof of Proposition 6 can be made effective (for instance, it is not too difficult to make the $C'$ close to $2$); however, in order to simplify the exposition (and in particular, to use some nonstandard analysis terminology to reduce the epsilon management), I will establish Proposition 6 with ineffective constants $c, C, C'$.
Like many other results in transcendence theory, the proof of Baker’s theorem (and Proposition 6) rely on what we would nowadays call the polynomial method – to play off upper and lower bounds on the complexity of polynomials that vanish (or nearly vanish) to high order on a specified set of points. (I have discussed the polynomial method in relation to problems in incidence geometry in several previous blog posts.) In the specific case of Proposition 6, the points in question are of the form
for some large integer . On the one hand, the irrationality of
ensures that the curve
is not algebraic, and so it is difficult for a polynomial of controlled complexity to vanish (or nearly vanish) to high order at all the points of
; the trivial bound in Proposition 5 allows one to make this statement more precise. (Here, “complexity” of a polynomial is an informal term referring both to the degree of the polynomial, and the height of the coefficients, which in our application will essentially be integers up to some normalisation factors.) On the other hand, if Proposition 6 failed, then
is close to a rational, which by Taylor expansion makes
close to an algebraic curve over the rationals (up to some rescaling by factors such as
and
) at each point of
. This, together with a pigeonholing argument, allows one to find a polynomial
of reasonably controlled complexity to (nearly) vanish to high order at every point of
.
These observations, by themselves, are not sufficient to get beyond the trivial bound in Proposition 5. However, Baker’s key insight was to exploit the integrality gap to bootstrap the (near) vanishing of on a set
to imply near-vanishing of
on a larger set
with
. The point is that if a polynomial
of controlled degree and size (nearly) vanishes to higher order on a lot of points on an analytic curve such as
, then it will also be fairly small on many other points in
as well. (To quantify this statement efficiently, it is convenient to use the tools of complex analysis, which are particularly well suited to understand zeroes (or small values) of polynomials.) But then, thanks to the integrality gap (and the controlled complexity of
), we can amplify “fairly small” to “very small”.
Using this observation and an iteration argument, Baker was able to take a polynomial of controlled complexity that nearly vanished to high order on a relatively small set
, and bootstrap that to show near-vanishing on a much larger set
. This bootstrap allows one to dramatically bridge the gap between the upper and lower bounds on the complexity of polynomials that nearly vanish to a specified order on a given
, and eventually leads to Proposition 6 (and, with much more care and effort, to Proposition 3).
Below the fold, I give the details of this argument. My treatment here is inspired by this expose of Serre, and these lecture notes of Soundararajan (as transcribed by Ian Petrow).
One of the fundamental structures in modern mathematics is that of a group. Formally, a group is a set $G$ equipped with an identity element $1 = 1_G \in G$, a multiplication operation $\cdot: G \times G \to G$, and an inversion operation $()^{-1}: G \to G$ obeying the following axioms:
- (Closure) If $g, h \in G$, then $g \cdot h$ and $g^{-1}$ are well-defined and lie in $G$. (This axiom is redundant from the above description, but we include it for emphasis.)
- (Associativity) If $g, h, k \in G$, then $(g \cdot h) \cdot k = g \cdot (h \cdot k)$.
- (Identity) If $g \in G$, then $g \cdot 1 = 1 \cdot g = g$.
- (Inverse) If $g \in G$, then $g \cdot g^{-1} = g^{-1} \cdot g = 1$.
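As a toy illustration (not part of the original notes), one can check these axioms by brute force for a small finite group such as the integers mod $12$:

```python
# The integers mod n form a group; the four axioms can be checked exhaustively.
def make_cyclic_group(n):
    elements = list(range(n))
    identity = 0
    mul = lambda g, h: (g + h) % n      # "multiplication" written additively
    inv = lambda g: (-g) % n
    return elements, identity, mul, inv

G, e, mul, inv = make_cyclic_group(12)
assert all(mul(g, h) in G for g in G for h in G)                      # closure
assert all(mul(mul(g, h), k) == mul(g, mul(h, k))
           for g in G for h in G for k in G)                          # associativity
assert all(mul(g, e) == g == mul(e, g) for g in G)                    # identity
assert all(mul(g, inv(g)) == e == mul(inv(g), g) for g in G)          # inverse
print("all group axioms verified for Z/12Z")
```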
One can also consider additive groups $G = (G, 0, +, -)$ instead of multiplicative groups, with the obvious changes of notation. By convention, additive groups are always understood to be abelian, so it is convenient to use additive notation when one wishes to emphasise the abelian nature of the group structure. As usual, we often abbreviate $g \cdot h$ by $gh$ (and $1_G$ by $1$) when there is no chance of confusion.
If furthermore $G$ is equipped with a topology, and the group operations $\cdot, ()^{-1}$ are continuous in this topology, then $G$ is a topological group. Any group can be made into a topological group by imposing the discrete topology, but there are many more interesting examples of topological groups, such as Lie groups, in which $G$ is not just a topological space, but is in fact a smooth manifold (and the group operations are not merely continuous, but also smooth).
There are many naturally occurring group-like objects that obey some, but not all, of the axioms. For instance, monoids are required to obey the closure, associativity, and identity axioms, but not the inverse axiom. If we also drop the identity axiom, we end up with a semigroup. Groupoids do not necessarily obey the closure axiom, but obey (versions of) the associativity, identity, and inverse axioms. And so forth.
Another group-like concept is that of a local topological group (or local group, for short), which is essentially a topological group with the closure axiom omitted (though the remaining axioms are only imposed locally, so local groups do not obey quite the same set of axioms as groupoids); they arise primarily in the study of local properties of (global) topological groups, and also in the study of approximate groups in additive combinatorics. Formally, a local group $G = (G, \Omega, \Lambda, 1, \cdot, ()^{-1})$ is a topological space $G$ equipped with an identity element $1 \in G$, a partially defined but continuous multiplication operation $\cdot: \Omega \to G$ for some domain $\Omega \subset G \times G$, and a partially defined but continuous inversion operation $()^{-1}: \Lambda \to G$, where $\Lambda \subset G$, obeying the following axioms:
- (Local closure) $\Omega$ is an open neighbourhood of $G \times \{1\} \cup \{1\} \times G$, and $\Lambda$ is an open neighbourhood of $1$.
- (Local associativity) If $g, h, k \in G$ are such that $(g \cdot h) \cdot k$ and $g \cdot (h \cdot k)$ are both well-defined, then they are equal. (Note however that it may be possible for one of these products to be defined but not the other, in contrast for instance with groupoids.)
- (Identity) For all $g \in G$, $g \cdot 1 = 1 \cdot g = g$.
- (Local inverse) If $g \in G$ and $g^{-1}$ is well-defined, then $g \cdot g^{-1} = g^{-1} \cdot g = 1$. (In particular this, together with the other axioms, forces $1^{-1} = 1$.)
We will often refer to ordinary groups as global groups (and topological groups as global topological groups) to distinguish them from local groups. Every global topological group is a local group, but not conversely.
One can consider discrete local groups, in which the topology is the discrete topology; in this case, the openness and continuity axioms in the definition are automatic and can be omitted. At the other extreme, one can consider local Lie groups, in which the local group has the structure of a smooth manifold, and the group operations are smooth. We can also consider symmetric local groups, in which $\Lambda = G$ (i.e. inverses are always defined). Symmetric local groups have the advantage of local homogeneity: given any $g \in G$, the operation of left-multiplication $x \mapsto g \cdot x$ is locally inverted by $x \mapsto g^{-1} \cdot x$ near the identity, thus giving a homeomorphism between a neighbourhood of $g$ and a neighbourhood of the identity; in particular, we see that given any two group elements $g, h$ in a symmetric local group $G$, there is a homeomorphism between a neighbourhood of $g$ and a neighbourhood of $h$. (If the symmetric local group is also Lie, then these homeomorphisms are in fact diffeomorphisms.) This local homogeneity already simplifies a lot of the possible topology of symmetric local groups, as it basically means that the local topological structure of such groups is determined by the local structure at the origin. (For instance, all connected components of a local Lie group necessarily have the same dimension.) It is easy to see that any local group has at least one symmetric open neighbourhood of the identity, so in many situations we can restrict to the symmetric case without much loss of generality.
A prime example of a local group can be formed by restricting any global topological group $G$ to an open neighbourhood $U$ of the identity, with the domains
$$\Omega := \{ (g, h) \in U \times U : g \cdot h \in U \} \quad \text{and} \quad \Lambda := \{ g \in U : g^{-1} \in U \};$$
one easily verifies that this gives $U$ the structure of a local group (which we will sometimes call $G\downharpoonright_U$ to emphasise the original group $G$). If $U$ is symmetric (i.e. $U^{-1} = U$), then we in fact have a symmetric local group. One can also restrict local groups $G$ to open neighbourhoods $U$ to obtain a smaller local group $G\downharpoonright_U$ by the same procedure (adopting the convention that statements such as $g \cdot h \in U$ or $g^{-1} \in U$ are considered false if the left-hand side is undefined). (Note though that if one restricts to non-open neighbourhoods of the identity, then one usually does not get a local group; for instance the closed interval $[-1,1]$ in $\mathbb{R}$ is not a local group (why?).)
Finite subsets of (Hausdorff) groups containing the identity can be viewed as local groups. This point of view turns out to be particularly useful for studying approximate groups in additive combinatorics, a point which I hope to expound more on later. Thus, for instance, the discrete interval $\{-9, -8, \ldots, 9\} \subset \mathbb{Z}$ is an additive symmetric local group, which informally might model an adding machine that can only handle (signed) one-digit numbers. More generally, one can view a local group as an object that behaves like a group near the identity, but for which the group laws (and in particular, the closure axiom) can start breaking down once one moves far enough away from the identity.
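Here is a small sketch of the adding-machine example just mentioned, with the partial operation returning an "undefined" marker when the sum leaves the interval; the specific triple quoted in the next paragraph is checked at the end. (The encoding as Python functions is of course just one convenient choice.)

```python
# The discrete interval {-9, ..., 9} as an additive symmetric local group.
N = 9
G = range(-N, N + 1)

def add(g, h):
    """Partial group operation: defined only when the sum stays in the interval."""
    s = g + h
    return s if -N <= s <= N else None   # None marks "undefined"

neg = lambda g: -g                        # inversion is always defined (symmetric)

# Local associativity: whenever both ways of associating g+h+k are defined, they agree.
for g in G:
    for h in G:
        for k in G:
            gh, hk = add(g, h), add(h, k)
            left = add(gh, k) if gh is not None else None
            right = add(g, hk) if hk is not None else None
            assert left is None or right is None or left == right
print("(7+5)+(-6) defined:", add(add(7, 5), -6) is not None,
      "; 7+(5+(-6)) =", add(7, add(5, -6)))
```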
One can formalise this intuition as follows. Let us say that a word $g_1 g_2 \ldots g_n$ in a local group $G$ is well-defined in $G$ (or well-defined, for short) if every possible way of associating this word using parentheses is well-defined from applying the product operation. For instance, in order for $abcd$ to be well-defined, $((ab)c)d$, $(a(bc))d$, $(ab)(cd)$, $a((bc)d)$, and $a(b(cd))$ must all be well-defined. In the preceding example $\{-9, \ldots, 9\}$, the sum $7 + 5 + (-6)$ is not well-defined because one of the ways of associating this sum, namely $(7+5)+(-6)$, is not well-defined (even though $7+(5+(-6))$ is well-defined).
Exercise 1 (Iterating the associative law)
- Show that if a word $g_1 g_2 \ldots g_n$ in a local group is well-defined, then all ways of associating this word give the same answer, and so we can uniquely evaluate $g_1 g_2 \ldots g_n$ as an element in $G$.
- Give an example of a word $g_1 g_2 \ldots g_n$ in a local group which has two ways of being associated that are both well-defined, but give different answers. (Hint: the local associativity axiom prevents this from happening for $n \leq 3$, so try $n = 4$. A small discrete local group will already suffice to give a counterexample; verifying the local group axioms is easier if one makes the domain of definition of the group operations as small as one can get away with while still having the counterexample.)
Exercise 2 Show that the number of ways to associate a word $g_1 g_2 \ldots g_n$ is given by the Catalan number $C_{n-1}$.
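A brute-force count (an illustration for this exercise, not a proof) confirms the Catalan pattern for small word lengths:

```python
from functools import lru_cache
from math import comb

# Count the ways to fully parenthesise a word of n letters and compare with
# the Catalan number C_{n-1} = binom(2(n-1), n-1) / n.
@lru_cache(maxsize=None)
def associations(n):
    """Number of ways to bracket a word of n letters."""
    if n <= 2:
        return 1
    # split the outermost product into a left block of i letters and a right block
    return sum(associations(i) * associations(n - i) for i in range(1, n))

catalan = lambda m: comb(2 * m, m) // (m + 1)
for n in range(1, 10):
    assert associations(n) == catalan(n - 1)
print([associations(n) for n in range(1, 8)])  # 1, 1, 2, 5, 14, 42, 132
```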
Exercise 3 Let $G$ be a local group, and let $n \geq 1$ be an integer. Show that there exists a symmetric open neighbourhood $U_n$ of the identity such that every word of length $n$ in $U_n$ is well-defined in $G$ (or more succinctly, $U_n^n$ is well-defined). (Note though that these words will usually only take values in $G$, rather than in $U_n$, and also the sets $U_n$ tend to become smaller as $n$ increases.)
In many situations (such as when one is investigating the local structure of a global group) one is only interested in the local properties of a (local or global) group. We can formalise this by the following definition. Let us call two local groups $G$ and $G'$ locally identical if they have a common restriction, thus there exists a set $U \subset G \cap G'$ such that $G\downharpoonright_U = G'\downharpoonright_U$ (thus, $1_G = 1_{G'}$, and the topology and group operations of $G$ and $G'$ agree on $U$). This is easily seen to be an equivalence relation. We call an equivalence class $[G]$ of local groups a group germ.
Let $P$ be a property of a local group (e.g. abelianness, connectedness, compactness, etc.). We call a group germ locally $P$ if every local group in that germ has a restriction that obeys $P$; we call a local or global group $G$ locally $P$ if its germ is locally $P$ (or equivalently, every open neighbourhood of the identity in $G$ contains a further neighbourhood that obeys $P$). Thus, the study of local properties of (local or global) groups is subsumed by the study of group germs.
Exercise 4
- Show that the above general definition is consistent with the usual definitions of the properties “connected” and “locally connected” from point-set topology.
- Strictly speaking, the above definition is not consistent with the usual definitions of the properties “compact” and “local compact” from point-set topology because in the definition of local compactness, the compact neighbourhoods are certainly not required to be open. Show however that the point-set topology notion of “locally compact” is equivalent, using the above conventions, to the notion of “locally precompact inside of an ambient local group”. Of course, this is a much more clumsy terminology, and so we shall abuse notation slightly and continue to use the standard terminology “locally compact” even though it is, strictly speaking, not compatible with the above general convention.
- Show that a local group is discrete if and only if it is locally trivial.
- Show that a connected global group is abelian if and only if it is locally abelian. (Hint: in a connected global group, the only open subgroup is the whole group.)
- Show that a global topological group is first-countable if and only if it is locally first countable. (By the Birkhoff-Kakutani theorem, this implies that such groups are metrisable if and only if they are locally metrisable.)
- Let $p$ be a prime. Show that the solenoid group $(\mathbb{Z}_p \times \mathbb{R})/\mathbb{Z}^\Delta$, where $\mathbb{Z}_p$ is the $p$-adic integers and $\mathbb{Z}^\Delta := \{ (n, n) : n \in \mathbb{Z} \}$ is the diagonal embedding of $\mathbb{Z}$ inside $\mathbb{Z}_p \times \mathbb{R}$, is connected but not locally connected.
Remark 1 One can also study the local properties of groups using nonstandard analysis. Instead of group germs, one works (at least in the case when $G$ is first countable) with the monad of the identity element $1_G$ of $G$, defined as the collection of nonstandard group elements in ${}^*G$ that are infinitesimally close to the origin in the sense that they lie in every standard neighbourhood of the identity. The monad is closely related to the group germ $[G]$, but has the advantage of being a genuine (global) group, as opposed to an equivalence class of local groups. It is possible to recast most of the results here in this nonstandard formulation; see e.g. the classic text of Robinson. However, we will not adopt this perspective here.
A useful fact to know is that Lie structure is local. Call a (global or local) topological group Lie if it can be given the structure of a (global or local) Lie group.
Lemma 1 (Lie is a local property) A global topological group $G$ is Lie if and only if it is locally Lie. The same statement holds for local groups $G$ as long as they are symmetric.
We sketch a proof of this lemma below the fold. One direction is obvious, as the restriction of a global Lie group to an open neighbourhood of the origin is clearly a local Lie group; for instance, the continuous interval $(-1,1) \subset \mathbb{R}$ is a symmetric local Lie group. The converse direction is almost as easy, but (because we are not assuming $G$ to be connected) requires one non-trivial fact, namely that local homomorphisms between local Lie groups are automatically smooth; details are provided below the fold.
As with so many other basic classes of objects in mathematics, it is of fundamental importance to specify and study the morphisms between local groups (and group germs). Given two local groups $G, G'$, we can define the notion of a (continuous) homomorphism $\phi: G \to G'$ between them, defined as a continuous map with $\phi(1_G) = 1_{G'}$ such that whenever $g, h \in G$ are such that $gh$ is well-defined, then $\phi(g)\phi(h)$ is well-defined and equal to $\phi(gh)$; similarly, whenever $g \in G$ is such that $g^{-1}$ is well-defined, then $\phi(g)^{-1}$ is well-defined and equal to $\phi(g^{-1})$. (In abstract algebra, the continuity requirement is omitted from the definition of a homomorphism; we will call such maps discrete homomorphisms to distinguish them from the continuous ones which will be the ones studied here.)
It is often more convenient to work locally: define a local (continuous) homomorphism $\phi: U \to G'$ from $G$ to $G'$ to be a homomorphism from an open neighbourhood $U$ of the identity to $G'$. Given two local homomorphisms $\phi: U \to G'$, $\tilde\phi: \tilde U \to \tilde G'$ from one pair of locally identical groups $G, \tilde G$ to another pair $G', \tilde G'$, we say that $\phi, \tilde\phi$ are locally identical if they agree on some open neighbourhood of the identity in $U \cap \tilde U$ (note that it does not matter here whether we require openness in $G$, in $\tilde G$, or both). An equivalence class $[\phi]$ of local homomorphisms will be called a germ homomorphism (or morphism for short) from the group germ $[G]$ to the group germ $[G']$.
Exercise 5 Show that the class of group germs, equipped with the germ homomorphisms, becomes a category. (Strictly speaking, because group germs are themselves classes rather than sets, the collection of all group germs is a second-order class rather than a class, but this set-theoretic technicality can be resolved in a number of ways (e.g. by restricting all global and local groups under consideration to some fixed “universe”) and should be ignored for this exercise.)
As is usual in category theory, once we have a notion of a morphism, we have a notion of an isomorphism: two group germs $[G], [G']$ are isomorphic if there are germ homomorphisms $\phi: [G] \to [G']$, $\psi: [G'] \to [G]$ that invert each other. Lifting back to local groups, the associated notion is that of local isomorphism: two local groups $G, G'$ are locally isomorphic if there exist local homomorphisms $\phi$ and $\psi$ from $G$ to $G'$ and from $G'$ to $G$ respectively that locally invert each other, thus $\psi(\phi(g)) = g$ for $g$ sufficiently close to $1_G$, and $\phi(\psi(g')) = g'$ for $g'$ sufficiently close to $1_{G'}$. Note that all local properties of (global or local) groups that can be defined purely in terms of the group and topological structures will be preserved under local isomorphism. Thus, for instance, if $G, G'$ are locally isomorphic local groups, then $G$ is locally connected iff $G'$ is, $G$ is locally compact iff $G'$ is, and (by Lemma 1) $G$ is Lie iff $G'$ is.
Exercise 6
- Show that the additive global groups $\mathbb{R}$ and $\mathbb{R}/\mathbb{Z}$ are locally isomorphic.
- Show that every locally path-connected group is locally isomorphic to a path-connected, simply connected group.
— 1. Lie’s third theorem —
Lie’s fundamental theorems of Lie theory link the Lie group germs to Lie algebras. Observe that if is a locally Lie group germ, then the tangent space
at the identity of this germ is well-defined, and is a finite-dimensional vector space. If we choose
to be symmetric, then
can also be identified with the left-invariant (say) vector fields on
, which are first-order differential operators on
. The Lie bracket for vector fields then endows
with the structure of a Lie algebra. It is easy to check that every morphism
of locally Lie germs gives rise (via the derivative map at the identity) to a morphism
of the associated Lie algebras. From the Baker-Campbell-Hausdorff formula (which is valid for local Lie groups, as discussed in this previous post) we conversely see that
uniquely determines the germ homomorphism
. Thus the derivative map provides a covariant functor from the category of locally Lie group germs to the category of (finite-dimensional) Lie algebras. In fact, this functor is an isomorphism, which is part of a fact known as Lie’s third theorem:
Theorem 2 (Lie’s third theorem) For this theorem, all Lie algebras are understood to be finite dimensional (and over the reals).
- Every Lie algebra $\mathfrak{g}$ is the Lie algebra of a local Lie group germ $[G]$, which is unique up to germ isomorphism (fixing $\mathfrak{g}$).
- Every Lie algebra $\mathfrak{g}$ is the Lie algebra of some global connected, simply connected Lie group $G$, which is unique up to Lie group isomorphism (fixing $\mathfrak{g}$).
- Every homomorphism $\phi: \mathfrak{g} \to \mathfrak{h}$ between Lie algebras is the derivative of a unique germ homomorphism between the associated local Lie group germs.
- Every homomorphism $\phi: \mathfrak{g} \to \mathfrak{h}$ between Lie algebras is the derivative of a unique Lie group homomorphism between the associated global connected, simply connected, Lie groups.
- Every local Lie group germ is the germ of a global connected, simply connected Lie group $G$, which is unique up to Lie group isomorphism. In particular, every local Lie group is locally isomorphic to a global Lie group.
We record the (standard) proof of this theorem below the fold, which is ultimately based on Ado’s theorem and the Baker-Campbell-Hausdorff formula. Lie’s third theorem (which, actually, was proven in full generality by Cartan) demonstrates the equivalence of three categories: the category of finite-dimensional Lie algebras, the category of local Lie group germs, and the category of connected, simply connected Lie groups.
— 2. Globalising a local group —
Many properties of a local group improve after passing to a smaller neighbourhood of the identity. Here are some simple examples:
Exercise 7 Let $G$ be a local group.
- Give an example to show that $G$ does not necessarily obey the cancellation laws
$$gk = hk \implies g = h; \qquad kg = kh \implies g = h \ \ \ \ \ (1)$$
for $g, h, k \in G$ (with the convention that statements such as $gk = hk$ are false if either side is undefined). However, show that there exists an open neighbourhood $U$ of the identity within which the cancellation law holds.
- Repeat the previous part, but with the cancellation law (1) replaced by the inversion law
$$(gh)^{-1} = h^{-1} g^{-1} \ \ \ \ \ (2)$$
for any $g, h \in G$ for which both sides are well-defined.
- Repeat the previous part, but with the inversion law replaced by the involution law
$$(g^{-1})^{-1} = g \ \ \ \ \ (3)$$
for any $g \in G$ for which the left-hand side is well-defined.
Note that the counterexamples in the above exercise demonstrate that not every local group is the restriction of a global group, because global groups (and hence, their restrictions) always obey the cancellation law (1), the inversion law (2), and the involution law (3). Another way in which a local group can fail to come from a global group is if it contains relations which can interact in a “global” way to cause trouble, in a fashion which is invisible at the local level. For instance, consider the open unit cube $(-1,1)^3$, and consider four points $a_1, a_2, a_3, a_4$ in this cube that are close to the upper four corners $(\pm 1, \pm 1, 1)$ of this cube respectively. Define an equivalence relation $\sim$ on this cube by setting $x \sim y$ if either $x = y$, or if $x - y$ is equal to $2a_i$ or $-2a_i$ for some $1 \leq i \leq 4$. Note that this is indeed an equivalence relation if $a_1, a_2, a_3, a_4$ are close enough to the corners (as this forces all non-trivial combinations $\pm 2a_i \pm 2a_j$ to lie outside the doubled cube $(-2,2)^3$). The quotient space $(-1,1)^3/\sim$ (which is a cube with bits around opposite corners identified together) can then be seen to be a symmetric additive local Lie group, but will usually not come from a global group. Indeed, it is not hard to see that if $(-1,1)^3/\sim$ is the restriction of a global group $G$, then $G$ must be a Lie group with Lie algebra $\mathbb{R}^3$ (by Lemma 1), and so the connected component $G^\circ$ of $G$ containing the identity is isomorphic to $\mathbb{R}^3/\Gamma$ for some sublattice $\Gamma$ of $\mathbb{R}^3$ that contains $2a_1, 2a_2, 2a_3, 2a_4$; but for generic $a_1, a_2, a_3, a_4$, there is no such lattice, as the $2a_1, 2a_2, 2a_3, 2a_4$ will generate a dense subgroup of $\mathbb{R}^3$. (The situation here is somewhat analogous to a number of famous Escher prints, such as Ascending and Descending, in which the geometry is locally consistent but globally inconsistent.) We will give this sort of argument in more detail below the fold (see the proof of Proposition 7).
Nevertheless, the space $(-1,1)^3/\sim$ is still locally isomorphic to a global Lie group, namely $\mathbb{R}^3$; for instance, the open neighbourhood $(-\frac{1}{2},\frac{1}{2})^3$ is isomorphic to the restriction $\mathbb{R}^3\downharpoonright_{(-\frac{1}{2},\frac{1}{2})^3}$, which is an open neighbourhood of $\mathbb{R}^3$. More generally, Lie’s third theorem tells us that any local Lie group is locally isomorphic to a global Lie group.
Let us call a local group globalisable if it is locally isomorphic to a global group; thus Lie’s third theorem tells us that every local Lie group is globalisable. Thanks to Goldbring’s solution to the local version of Hilbert’s fifth problem, we also know that locally Euclidean local groups are globalisable. A modification of this argument by van den Dries and Goldbring shows in fact that every locally compact local group is globalisable.
In view of these results, it is tempting to conjecture that all local groups are globalisable; among other things, this would simplify the proof of Lie’s third theorem (and of the local version of Hilbert’s fifth problem). Unfortunately, this claim as stated is false:
Theorem 3 There exist local groups $G$ which are not globalisable.
The counterexamples used to establish Theorem 3 are remarkably delicate; the first example I know of is due to van Est and Korthagen. One reason for this, of course, is that the previous results prevent one from using any local Lie group, or even a locally compact local group, as a counterexample. We will present a (somewhat complicated) example below, based on the unit ball in an infinite-dimensional Banach space.
However, there are certainly many situations in which we can globalise a local group. For instance, this is the case if one has a locally faithful representation of that local group inside a global group:
Lemma 4 (Faithful representation implies globalisability) Let $G$ be a local group, and suppose there exists an injective local homomorphism $\phi$ from $G$ into a global topological group $H$ with $G$ symmetric. Then $G$ is isomorphic to the restriction of a global topological group to an open neighbourhood of the identity; in particular, $G$ is globalisable.
The material here is based in part on this paper of Olver and this paper of Goldbring.
The classical formulation of Hilbert’s fifth problem asks whether topological groups that have the topological structure of a manifold, are necessarily Lie groups. This is indeed, the case, thanks to following theorem of Gleason and Montgomery-Zippin:
Theorem 1 (Hilbert’s fifth problem) Let
be a topological group which is locally Euclidean. Then
is isomorphic to a Lie group.
We have discussed the proof of this result, and of related results, in previous posts. There is however a generalisation of Hilbert’s fifth problem which remains open, namely the Hilbert-Smith conjecture, in which it is a space acted on by the group which has the manifold structure, rather than the group itself:
Conjecture 2 (Hilbert-Smith conjecture) Let $G$ be a locally compact topological group which acts continuously and faithfully (or effectively) on a connected finite-dimensional manifold $X$. Then $G$ is isomorphic to a Lie group.
Note that Conjecture 2 easily implies Theorem 1, as one can pass to the connected component $G^\circ$ of a locally Euclidean group $G$ (which is clearly locally compact), and then look at the action of $G^\circ$ on itself by left-multiplication.
The hypothesis that the action is faithful (i.e. each non-identity group element $g \in G$ acts non-trivially on $X$) cannot be completely eliminated, as any group $G$ will have a trivial action on any space $X$. The requirement that $G$ be locally compact is similarly necessary: consider for instance the diffeomorphism group $\mathrm{Diff}(S^1)$ of, say, the unit circle $S^1$, which acts on $S^1$ but is infinite-dimensional and is not locally compact (with, say, the uniform topology). Finally, the connectedness of $X$ is also important: the infinite torus $(\mathbb{R}/\mathbb{Z})^{\mathbb{N}}$ (with the product topology) acts faithfully on the disconnected manifold $X := \mathbb{R}/\mathbb{Z} \times \mathbb{N}$ by the action
$$(g_n)_{n \in \mathbb{N}} \cdot (x, m) := (x + g_m, m).$$
The conjecture in full generality remains open. However, there are a number of partial results. For instance, it was observed by Montgomery and Zippin that the conjecture is true for transitive actions, by a modification of the argument used to establish Theorem 1. This special case of the Hilbert-Smith conjecture (or more precisely, a generalisation thereof in which “finite-dimensional manifold” was replaced by “locally connected locally compact finite-dimensional”) was used in Gromov’s proof of his famous theorem on groups of polynomial growth. I record the argument of Montgomery and Zippin below the fold.
Another partial result is the reduction of the Hilbert-Smith conjecture to the $p$-adic case. Indeed, it is known that Conjecture 2 is equivalent to
Conjecture 3 (Hilbert-Smith conjecture for $p$-adic actions) It is not possible for a $p$-adic group $\mathbb{Z}_p$ to act continuously and effectively on a connected finite-dimensional manifold $X$.
The reduction to the $p$-adic case follows from the structural theory of locally compact groups (specifically, the Gleason-Yamabe theorem discussed in previous posts) and some results of Newman that sharply restrict the ability of periodic actions on a manifold $X$ to be close to the identity. I record this argument (which appears for instance in this paper of Lee) below the fold also.
One of the best-known problems from ancient Greek mathematics was that of trisecting an angle by straightedge and compass; this was eventually proven impossible in 1837 by Pierre Wantzel, using methods from Galois theory.
Formally, one can set up the problem as follows. Define a configuration to be a finite collection $\mathcal{C}$ of points, lines, and circles in the Euclidean plane. Define a construction step to be one of the following operations to enlarge the collection $\mathcal{C}$:
- (Straightedge) Given two distinct points $A$, $B$ in $\mathcal{C}$, form the line $\overline{AB}$ that connects $A$ and $B$, and add it to $\mathcal{C}$.
- (Compass) Given two distinct points $A$, $B$ in $\mathcal{C}$, and given a third point $O$ in $\mathcal{C}$ (which may or may not equal $A$ or $B$), form the circle with centre $O$ and radius equal to the length $|AB|$ of the line segment joining $A$ and $B$, and add it to $\mathcal{C}$.
- (Intersection) Given two distinct curves $\gamma$, $\gamma'$ in $\mathcal{C}$ (thus $\gamma$ is either a line or a circle in $\mathcal{C}$, and similarly for $\gamma'$), select a point $P$ that is common to both $\gamma$ and $\gamma'$ (there are at most two such points), and add it to $\mathcal{C}$.
We say that a point, line, or circle is constructible by straightedge and compass from a configuration $\mathcal{C}$ if it can be obtained from $\mathcal{C}$ after applying a finite number of construction steps.
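(As an aside, these construction steps are easy to model on a computer. The following is a minimal computational sketch of the three primitives, with points represented as complex numbers and with naming conventions of my own choosing; it is an illustration rather than anything used in the argument below.)

```python
import cmath

# A configuration is a finite set of points, lines, and circles.  In this sketch a
# point is a complex number, a line is recorded by two distinct points on it, and a
# circle by a (centre, radius) pair.  (These representations are my own choices.)

def line_line(a1, b1, a2, b2):
    """Intersection of the line through a1, b1 with the line through a2, b2."""
    d1, d2 = b1 - a1, b2 - a2
    cross = (d1.conjugate() * d2).imag            # 2D cross product of d1 and d2
    if abs(cross) < 1e-12:
        return []                                 # parallel (or identical) lines
    t = ((a2 - a1).conjugate() * d2).imag / cross
    return [a1 + t * d1]

def line_circle(a, b, centre, r):
    """Intersection of the line through a, b with the circle |z - centre| = r."""
    d = b - a
    A = abs(d) ** 2
    B = 2 * ((a - centre).conjugate() * d).real
    C = abs(a - centre) ** 2 - r ** 2
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    return [a + ((-B + s * disc ** 0.5) / (2 * A)) * d for s in (1, -1)]

def circle_circle(c1, r1, c2, r2):
    """Intersection of two distinct circles (at most two points)."""
    d = abs(c2 - c1)
    if d < 1e-12 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (d * d + r1 * r1 - r2 * r2) / (2 * d)     # distance from c1 to the radical line
    h = max(r1 * r1 - a * a, 0.0) ** 0.5
    u = (c2 - c1) / d                             # unit vector from c1 towards c2
    base = c1 + a * u
    return [base + 1j * h * u, base - 1j * h * u]

# Tiny demo: construct the midpoint of the segment from 0 to 1 (two compass steps,
# two straightedge steps, and two intersection steps).
P, Q = circle_circle(0 + 0j, 1.0, 1 + 0j, 1.0)
print(line_line(0 + 0j, 1 + 0j, P, Q)[0])         # approximately 0.5
```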
Problem 1 (Angle trisection) Let $A$, $B$, $C$ be distinct points in the plane. Is it always possible to construct by straightedge and compass from $A$, $B$, $C$ a line $\ell$ through $B$ that trisects the angle $\angle ABC$, in the sense that the angle between $\ell$ and $BA$ is one third of the angle of $\angle ABC$?
Thanks to Wantzel’s result, the answer to this problem is known to be “no” in general; a generic angle cannot be trisected by straightedge and compass. (On the other hand, some special angles can certainly be trisected by straightedge and compass, such as a right angle. Also, one can certainly trisect generic angles using other methods than straightedge and compass; see the Wikipedia page on angle trisection for some examples of this.)
The impossibility of angle trisection stands in sharp contrast to the easy construction of angle bisection via straightedge and compass, which we briefly review as follows:
- Start with three points $A$, $B$, $C$.
- Form the circle $C_1$ with centre $B$ and radius $|BA|$, and intersect it with the line $BC$. Let $D$ be the point in this intersection that lies on the same side of $B$ as $C$. ($D$ may well be equal to $C$).
- Form the circle $C_2$ with centre $A$ and radius $|AB|$, and the circle $C_3$ with centre $D$ and radius $|DB|$. Let $E$ be the point of intersection of $C_2$ and $C_3$ that is not $B$.
- The line $BE$ will then bisect the angle $\angle ABC$.
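(For readers who like to see such things in coordinates: the following is a quick numerical sanity check of the bisection construction, a sketch of my own rather than part of the original argument. It uses the elementary fact that two circles of equal radius $r$ centred at $P$ and $Q$, both passing through $B$, meet again at the point $P + Q - B$.)

```python
import cmath, random

def bisector_direction(A, B, C):
    """Carry out the bisection construction above in complex-number coordinates and
    return the direction of the constructed line BE."""
    r = abs(A - B)
    D = B + r * (C - B) / abs(C - B)   # C1 (centre B, radius |BA|) meets the ray BC here
    E = A + D - B                      # second intersection of C2 (centre A) and C3 (centre D)
    return cmath.phase(E - B)

# Check on a randomly chosen non-degenerate angle ABC.
random.seed(0)
B, A = 0j, 1 + 0j
C = cmath.rect(random.uniform(0.5, 2.0), random.uniform(0.1, 3.0))
angle_ABC = cmath.phase((C - B) / (A - B))
assert abs(bisector_direction(A, B, C) - angle_ABC / 2) < 1e-9
```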
The key difference between angle trisection and angle bisection ultimately boils down to the following trivial number-theoretic fact:
Lemma 2 No power of $2$ is divisible by $3$.
Proof: Obvious by modular arithmetic, by induction, or by the fundamental theorem of arithmetic.
In contrast, there are of course plenty of powers of $2$ that are evenly divisible by $2$, and this is ultimately why angle bisection is easy while angle trisection is hard.
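(This is of course not something that needs a computer, but for completeness here is a two-line numerical sanity check of Lemma 2 and of the contrasting statement for the prime $2$; it is a check, not a proof.)

```python
# 2^n mod 3 alternates 2, 1, 2, 1, ..., so no power of 2 is divisible by 3,
# while every positive power of 2 is divisible by 2.
assert all(pow(2, n, 3) != 0 for n in range(1, 1000))
assert all(pow(2, n, 2) == 0 for n in range(1, 1000))
print([pow(2, n, 3) for n in range(1, 9)])   # [2, 1, 2, 1, 2, 1, 2, 1]
```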
The standard way in which Lemma 2 is used to demonstrate the impossibility of angle trisection is via Galois theory. The implication is quite short if one knows this theory, but quite opaque otherwise. We briefly sketch the proof of this implication here, though we will not need it in the rest of the discussion. Firstly, Lemma 2 implies the following fact about field extensions.
Corollary 3 Let $F$ be a field, and let $E$ be an extension of $F$ that can be constructed out of $F$ by a finite sequence of quadratic extensions. Then $E$ does not contain any cubic extensions $K$ of $F$.
Proof: If $E$ contained a cubic extension $K$ of $F$, then the dimension of $E$ over $F$ would be a multiple of three. On the other hand, if $E$ is obtained from $F$ by a tower of quadratic extensions, then the dimension of $E$ over $F$ is a power of two. The claim then follows from Lemma 2.
To conclude the proof, one then notes that any point, line, or circle that can be constructed from a configuration $\mathcal{C}$ is definable in a field obtained from the coefficients of all the objects in $\mathcal{C}$ after taking a finite number of quadratic extensions, whereas a trisection of an angle $\angle ABC$ will generically only be definable in a cubic extension of the field generated by the coordinates of $A$, $B$, $C$.
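(A concrete instance of this cubic-extension phenomenon, supplied here by me as an illustration rather than taken from the argument above: trisecting a $60^\circ$ angle amounts to constructing $\cos 20^\circ$, and the triple-angle formula $\cos 3\theta = 4\cos^3\theta - 3\cos\theta$ shows that $c = \cos 20^\circ$ satisfies the cubic $8c^3 - 6c - 1 = 0$, which has no rational roots and is therefore irreducible over the rationals; thus $\cos 20^\circ$ generates a cubic extension of $\mathbb{Q}$.)

```python
import math

# c = cos(20 degrees) satisfies 8c^3 - 6c - 1 = 0, by the triple-angle formula
# with cos(60 degrees) = 1/2.
c = math.cos(math.radians(20))
print(8 * c ** 3 - 6 * c - 1)            # approximately 0 (up to rounding error)

# By the rational root theorem, the only possible rational roots of 8x^3 - 6x - 1
# are +-1, +-1/2, +-1/4, +-1/8; none of them works, so the cubic is irreducible
# over the rationals.  (These dyadic values are represented exactly in floating point.)
for num in (1, -1):
    for den in (1, 2, 4, 8):
        x = num / den
        assert 8 * x ** 3 - 6 * x - 1 != 0
```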
The Galois theory method also allows one to obtain many other impossibility results of this type, most famously the Abel-Ruffini theorem on the insolvability of the quintic equation by radicals. For this reason (and also because of the many applications of Galois theory to number theory and other branches of mathematics), the Galois theory argument is the “right” way to prove the impossibility of angle trisection within the broader framework of modern mathematics. However, this argument has the drawback that it requires one to first understand Galois theory (or at least field theory), which is usually not presented until an advanced undergraduate algebra or number theory course, whilst the angle trisection problem requires only high-school level mathematics to formulate. Even if one is allowed to “cheat” and sweep several technicalities under the rug, one still needs to possess a fair amount of solid intuition about advanced algebra in order to appreciate the proof. (This was undoubtedly one reason why, even after Wantzel’s impossibility result was published, a large amount of effort was still expended by amateur mathematicians to try to trisect a general angle.)
In this post I would therefore like to present a different proof (or perhaps more accurately, a disguised version of the standard proof) of the impossibility of angle trisection by straightedge and compass, that avoids explicit mention of Galois theory (though it is never far beneath the surface). With “cheats”, the proof is actually quite simple and geometric (except for Lemma 2, which is still used at a crucial juncture), based on the basic geometric concept of monodromy; unfortunately, some technical work is needed to remove these cheats.
To describe the intuitive idea of the proof, let us return to the angle bisection construction, that takes a triple $A$, $B$, $C$ of points as input and returns a bisecting line $BE$ as output. We iterate the construction to create a quadrisecting line $BG$, via the following sequence of steps that extend the original bisection construction:
- Start with three points $A$, $B$, $C$.
- Form the circle $C_1$ with centre $B$ and radius $|BA|$, and intersect it with the line $BC$. Let $D$ be the point in this intersection that lies on the same side of $B$ as $C$. ($D$ may well be equal to $C$).
- Form the circle $C_2$ with centre $A$ and radius $|AB|$, and the circle $C_3$ with centre $D$ and radius $|DB|$. Let $E$ be the point of intersection of $C_2$ and $C_3$ that is not $B$.
- Let $F$ be the point on the line $BE$ which lies on $C_1$, and is on the same side of $B$ as $E$.
- Form the circle $C_4$ with centre $F$ and radius $|FB|$. Let $G$ be the point of intersection of $C_4$ and $C_2$ that is not $B$.
- The line $BG$ will then quadrisect the angle $\angle ABC$.
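(As with the bisection, one can sanity-check this numerically; the sketch below is mine, and again uses the fact that two equal-radius circles through a common point $B$ meet again at the reflection of $B$ across the segment joining their centres.)

```python
import cmath

def quadrisector_direction(A, B, C):
    """Carry out the quadrisection construction above in complex-number coordinates
    and return the direction of the constructed line BG."""
    r = abs(A - B)
    D = B + r * (C - B) / abs(C - B)       # on the ray BC, at distance r from B
    E = A + D - B                          # BE bisects the angle ABC
    F = B + r * (E - B) / abs(E - B)       # on the ray BE, at distance r from B
    G = A + F - B                          # BG bisects the angle ABE, hence quadrisects ABC
    return cmath.phase(G - B)

B, A = 0j, 1 + 0j
theta = 1.8                                # any angle in (0, pi)
C = cmath.rect(1.3, theta)
assert abs(quadrisector_direction(A, B, C) - theta / 4) < 1e-9
```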
Let us fix the points $A$ and $B$, but not $C$, and view $G$ (as well as intermediate objects such as $C_1$, $D$, $C_2$, $C_3$, $E$, $F$, $C_4$) as a function of $C$.
Let us now do the following: we begin rotating $C$ counterclockwise around $B$, which drags around the other objects $D$, $C_3$, $E$, $F$, $C_4$, $G$ that were constructed from $C$ accordingly. For instance, here is an early stage of this rotation process, when the angle $\angle ABC$ has become obtuse:
Now for the slightly tricky bit. We are going to keep rotating beyond a half-rotation of $C$, so that $\angle ABC$ now becomes a reflex angle. At this point, a singularity occurs; the point $E$ collides into $B$, and so there is an instant in which the line $BE$ is not well-defined. However, this turns out to be a removable singularity (and the easiest way to demonstrate this will be to tap the power of complex analysis, as complex numbers can easily route around such a singularity), and we can blast through it to the other side, giving a picture like this:
Note that we have now deviated from the original construction in that $E$ and $F$ are no longer on the same side of $B$; we are thus now working in a continuation of that construction rather than with the construction itself. Nevertheless, we can still work with this continuation (much as, say, one works with analytic continuations of infinite series such as $1 + z + z^2 + \ldots$ beyond their original domain of definition).
We now keep rotating $C$ around $B$. Here, the angle $\angle ABC$ is approaching a full rotation of $2\pi$:
When $\angle ABC$ reaches a full rotation, a different singularity occurs: $D$ and $A$ coincide. Nevertheless, this is also a removable singularity, and we blast through to beyond a full rotation:
And now $C$ is back where it started, as are $D$, $E$, $C_2$, and $C_3$… but the point $F$ has moved, from one intersection point of $C_1$ with the line $BE$ to the other. As a consequence, $C_4$, $G$, and the line $BG$ have also changed, with $BG$ being at right angles to where it was before. (In the jargon of modern mathematics, the quadrisection construction has a non-trivial monodromy.)
But nothing stops us from rotating some more. If we continue this procedure, we see that after two full rotations of $C$ around $B$, all points, lines, and circles constructed from $A$, $B$, $C$ have returned to their original positions. Because of this, we shall say that the quadrisection construction described above is periodic with period $2$.
Similarly, if one performs an octisection of the angle $\angle ABC$ by bisecting the quadrisection, one can verify that this octisection is periodic with period $4$; it takes four full rotations of $C$ around $B$ before the configuration returns to where it started. More generally, one can show
Proposition 4 Any construction of straightedge and compass from the points $A$, $B$, $C$ is periodic with period equal to a power of $2$.
The reason for this, ultimately, is that any two circles or lines will intersect each other in at most two points, and so at each step of a straightedge-and-compass construction there is an ambiguity of at most $2$. Each rotation of $C$ around $B$ can potentially flip one of these points to the other, but then if one rotates again, the point returns to its original position, and then one can analyse the next point in the construction in the same fashion until one obtains the proposition.
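(One can watch this monodromy happen numerically. The sketch below, my own illustration using the same complex-number shortcuts as before, rotates $C$ continuously around $B$, resolves the ambiguous choice of $F$ by continuity with the previous step, and reports how many full turns are needed before the construction returns to its starting position; for the quadrisection the answer is $2$, in agreement with the discussion above.)

```python
import cmath

def quadrisection_period(theta0=1.0, steps_per_turn=20000, max_turns=8):
    """Rotate C continuously around B and follow the quadrisection construction,
    resolving the ambiguous choice of F (one of the two points where the line BE
    meets the circle C1) by continuity with the previous step.  Returns the number
    of full turns of C needed before F, and hence the whole construction, first
    returns to its starting position."""
    A, B, r = 1 + 0j, 0j, 1.0
    dtheta = 2 * cmath.pi / steps_per_turn
    F = F0 = None
    for step in range(max_turns * steps_per_turn + 1):
        theta = theta0 + step * dtheta
        D = B + cmath.rect(r, theta)     # point of the ray BC at distance r from B
        E = A + D - B                    # second intersection of the circles C2 and C3
        if abs(E - B) > 1e-9:            # away from the removable singularity E = B
            u = (E - B) / abs(E - B)
            candidates = [B + r * u, B - r * u]   # the two points of C1 on the line BE
            F = candidates[0] if F is None else min(candidates, key=lambda z: abs(z - F))
        if F0 is None:
            F0 = F
        if step > 0 and step % steps_per_turn == 0 and abs(F - F0) < 1e-6:
            return step // steps_per_turn
    return None

print(quadrisection_period())            # prints 2: two full turns are needed
```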
But now consider a putative trisection operation that starts with an arbitrary angle $\angle ABC$ and somehow uses some sequence of straightedge and compass constructions to end up with a trisecting line $\ell$:
What is the period of this construction? If we continuously rotate $C$ around $B$, we observe that a full rotation of $C$ only causes the trisecting line $\ell$ to rotate by a third of a full rotation (i.e. by $120^\circ$):
Because of this, we see that the period of any construction that contains $\ell$ must be a multiple of $3$. But this contradicts Proposition 4 and Lemma 2.
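(The same kind of numerical experiment as above makes the contradiction visible, at least heuristically: whatever intermediate steps a putative trisection uses, the trisecting direction itself must track one of the three cube roots of $e^{i\theta}$, and following that root continuously, as in the sketch below (again my own illustration), takes three full turns to come back to where it started; and $3$ is not a power of $2$.)

```python
import cmath

def trisection_period(theta0=1.0, steps_per_turn=20000, max_turns=10):
    """Track a trisecting direction (a continuously chosen cube root of exp(i*theta))
    as theta makes full turns, and return the number of turns needed for the
    direction to return to its starting value."""
    w = w0 = cmath.rect(1.0, theta0 / 3)
    dtheta = 2 * cmath.pi / steps_per_turn
    for step in range(1, max_turns * steps_per_turn + 1):
        theta = theta0 + step * dtheta
        phi = cmath.phase(cmath.rect(1.0, theta))
        roots = [cmath.rect(1.0, (phi + 2 * cmath.pi * k) / 3) for k in range(3)]
        w = min(roots, key=lambda z: abs(z - w))   # continuity: pick the nearest cube root
        if step % steps_per_turn == 0 and abs(w - w0) < 1e-6:
            return step // steps_per_turn
    return None

print(trisection_period())   # prints 3, which is not a power of 2
```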
Below the fold, I will make the above proof rigorous. Unfortunately, in doing so, I had to again leave the world of high-school mathematics, as one needs a little bit of algebraic geometry and complex analysis to resolve the issues with singularities that we saw in the above sketch. Still, I feel that at an intuitive level at least, this argument is more geometric and accessible than the Galois-theoretic argument (though anyone familiar with Galois theory will note that there is really not that much difference between the proofs, ultimately, as one has simply replaced the Galois group with a closely related monodromy group instead).
A few days ago, I released a preprint entitled “Localisation and compactness properties of the Navier-Stokes global regularity problem“, discussed in this previous blog post. As it turns out, I was somewhat impatient to finalise the paper and move on to other things, and the original preprint was still somewhat rough in places (contradicting my own advice on this matter), with a number of typos of minor to moderate severity. But a bit more seriously, I discovered on a further proofreading that there was a subtle error in a component of the argument that I had believed to be routine – namely the persistence of higher regularity for mild solutions. As a consequence, some of the implications stated in the first version were not exactly correct as stated; but they can be repaired by replacing a “bad” notion of global regularity for a certain class of data with a “good” notion. I have completed (and proofread) an updated version of the ms, which should appear at the arXiv link of the paper in a day or two (and which I have also placed at this link). (In the meantime, it is probably best not to read the original ms too carefully, as this could lead to some confusion.) I’ve also added a new section that shows that, due to this technicality, one can exhibit smooth initial data to the Navier-Stokes equation for which there are no smooth solutions, which superficially sounds very close to a negative solution to the global regularity problem, but is actually nothing of the sort.
Let me now describe the issue in more detail (and also explain why I missed it previously). A standard principle in the theory of evolutionary partial differential equations is that regularity in space can be used to imply regularity in time. To illustrate this, consider a solution to the supercritical nonlinear wave equation
(1)
for some field $u$. Suppose one already knew that $u$ had some regularity in space, and in particular that the $C^0_t C^2_x$ norm of $u$ was bounded (thus $u$ and up to two spatial derivatives of $u$ were bounded). Then, by (1), we see that two time derivatives of $u$ were also bounded, and one then gets the additional regularity of $u \in C^2_t C^0_x$.
In a similar vein, suppose one initially knew that had the regularity
. Then (1) soon tells us that
also has the regularity
; then, if one differentiates (1) in time to obtain
one can conclude that also has the regularity of
. One can continue this process indefinitely; in particular, if one knew that
, then these sorts of manipulations show that
is infinitely smooth in both space and time.
The issue that caught me by surprise is that for the Navier-Stokes equations
$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p, \qquad \nabla \cdot u = 0 \qquad (2)$$
(setting the forcing term $f$ equal to zero for simplicity), infinite regularity in space does not automatically imply infinite regularity in time, even if one assumes the initial data lies in a standard function space such as a Sobolev space. The problem lies with the pressure term $p$, which is recovered from the velocity via the elliptic equation
$$-\Delta p = \partial_i \partial_j (u_i u_j) \qquad (3)$$
that can be obtained by taking the divergence of (2). This equation is solved by a non-local integral operator:
$$p = -\Delta^{-1} \partial_i \partial_j (u_i u_j).$$
If, say, $u$ has enough Sobolev regularity in space, then there is no difficulty establishing a bound on $p$ in terms of $u$ (for instance, one can use singular integral theory and Sobolev embedding). However, one runs into difficulty when trying to compute time derivatives of $p$. Differentiating (3) once in time, one gets
$$\partial_t p = -2 \Delta^{-1} \partial_i \partial_j (u_i\, \partial_t u_j).$$
At the regularity of , one can still (barely) control this quantity by using (2) to expand out
and using some integration by parts. But when one wishes to compute a second time derivative of the pressure, one obtains (after integration by parts) an expansion of the form
and now there is not enough regularity on available to get any control on
, even if one assumes that
is smooth. Indeed, following this observation, I was able to show that given generic smooth
data, the pressure
will instantaneously fail to be
in time, and thence (by (2)) the velocity will instantaneously fail to be
in time. (Switching to the vorticity formulation buys one further degree of time differentiability, but does not fully eliminate the problem; the vorticity
will fail to be
in time. Switching to material coordinates seems to make things very slightly better, but I believe there is still a breakdown of time regularity in these coordinates also.)
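(To make the non-local character of the pressure recovery concrete, here is a minimal spectral sketch of the solution operator for (3), written by me for a periodic box simply because that setting is easy to code; on $\mathbb{R}^3$ the corresponding operator is a singular integral. The grid, normalisations, and function name are my own, and the sign convention assumes the momentum equation in the form $\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p$.)

```python
import numpy as np

def pressure_from_velocity(u, L=2 * np.pi):
    """Recover the pressure from a (divergence-free) velocity field sampled on the
    periodic box [0, L)^3, by solving  Delta p = - d_i d_j (u_i u_j)  spectrally.
    `u` has shape (3, N, N, N); the zero mode of p is normalised to zero."""
    N = u.shape[1]
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)          # angular wavenumbers
    KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
    K = np.stack([KX, KY, KZ])
    K2 = (K ** 2).sum(axis=0)
    K2[0, 0, 0] = 1.0                                   # avoid division by zero
    # Accumulate k_i k_j * FT(u_i u_j), which equals the Fourier transform of Delta p.
    rhs_hat = np.zeros((N, N, N), dtype=complex)
    for i in range(3):
        for j in range(3):
            rhs_hat += K[i] * K[j] * np.fft.fftn(u[i] * u[j])
    p_hat = -rhs_hat / K2
    p_hat[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(p_hat))
```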
For later times t>0 (and assuming homogeneous data f=0 for simplicity), this issue no longer arises, because of the instantaneous smoothing effect of the Navier-Stokes flow, which for instance will upgrade regularity to
regularity instantaneously. It is only the initial time at which some time irregularity can occur.
This breakdown of regularity does not actually impact the original formulation of the Clay Millennium Prize problem, though, because in that problem the initial velocity is required to be Schwartz class (so all derivatives are rapidly decreasing). In this class, the regularity theory works as expected; if one has a solution which already has some reasonable regularity (e.g. a mild solution) and the data is Schwartz, then the solution will be smooth in spacetime. (Another class where things work as expected is when the vorticity is Schwartz; in such cases, the solution remains smooth in both space and time (for short times, at least), and the Schwartz nature of the vorticity is preserved (because the vorticity is subject to fewer non-local effects than the velocity, as it is not directly affected by the pressure).)
This issue means that one of the implications in the original paper (roughly speaking, that global regularity for Schwartz data implies global regularity for smooth data) is not correct as stated. But this can be fixed by weakening the notion of global regularity in the latter setting, by limiting the amount of time differentiability available at the initial time. More precisely, call a solution $u$, $p$ almost smooth if
- $u$ and $p$ are smooth on the half-open slab $(0,T) \times \mathbb{R}^3$; and
- For every $k \geq 0$, the spatial derivatives $\nabla_x^k u$, $\nabla_x^k p$, and the first time derivative $\partial_t \nabla_x^k u$ exist and are continuous on the full slab $[0,T) \times \mathbb{R}^3$.
Thus, an almost smooth solution is the same concept as a smooth solution, except that at time zero, the velocity field $u$ is only $C^1_t C^\infty_x$, and the pressure field $p$ is only $C^0_t C^\infty_x$. This is still enough regularity to interpret the Navier-Stokes equation (2) in a classical manner, but falls slightly short of full smoothness.
(I had already introduced this notion of almost smoothness in the more general setting of smooth finite energy solutions in the first draft of this paper, but had failed to realise that it was also necessary in the smooth setting.)
One can now “fix” the global regularity conjectures for Navier-Stokes in the smooth or smooth finite energy setting by requiring the solutions to merely be almost smooth instead of smooth. Once one does so, the results in my paper then work as before: roughly speaking, if one knows that Schwartz data produces smooth solutions, one can conclude that smooth $H^1$ or smooth finite energy data produces almost smooth solutions (and the paper now contains counterexamples to show that one does not always have smooth solutions in this category).
The diagram of implications between conjectures has been adjusted to reflect this issue, and now reads as follows:
I’ve just uploaded to the arXiv my paper “Localisation and compactness properties of the Navier-Stokes global regularity problem“, submitted to Analysis and PDE. This paper concerns the global regularity problem for the Navier-Stokes system of equations
$$\partial_t u + (u \cdot \nabla) u = \Delta u - \nabla p + f \qquad (1)$$
$$\nabla \cdot u = 0 \qquad (2)$$
$$u(0,\cdot) = u_0 \qquad (3)$$
in three dimensions. Thus, we specify initial data $(u_0, f, T)$, where $T > 0$ is a time, $u_0$ is the initial velocity field (which, in order to be compatible with (2), (3), is required to be divergence-free), $f$ is the forcing term, and then seek to extend this initial data to a solution $(u, p)$ with this data, where the velocity field $u$ and pressure term $p$ are the unknown fields.
Roughly speaking, the global regularity problem asserts that given every smooth set of initial data $(u_0, f, T)$, there exists a smooth solution $(u, p)$ to the Navier-Stokes equation with this data. However, this is not a good formulation of the problem because it does not exclude the possibility that one or more of the fields $u_0$, $f$, $u$, $p$ grows too fast at spatial infinity. This problem is evident even for the much simpler heat equation
$$\partial_t u = \Delta u, \qquad u(0, \cdot) = u_0.$$
As long as one has some mild conditions at infinity on the smooth initial data $u_0$ (e.g. polynomial growth at spatial infinity), then one can solve this equation using the fundamental solution of the heat equation:
$$u(t, x) = \frac{1}{(4\pi t)^{3/2}} \int_{\mathbb{R}^3} u_0(y)\, e^{-|x-y|^2/4t}\, dy.$$
If furthermore $u_0$ is a tempered distribution, one can use Fourier-analytic methods to show that this is the unique solution to the heat equation with this data. But once one allows sufficiently rapid growth at spatial infinity, existence and uniqueness can break down. Consider for instance the backwards heat kernel
$$u(t, x) := \frac{1}{(T-t)^{3/2}} e^{|x|^2 / 4(T-t)}$$
for some $T > 0$, which is smooth (albeit exponentially growing) at time zero, and is a smooth solution to the heat equation for $0 \leq t < T$, but develops a dramatic singularity at time $t = T$. A famous example of Tychonoff from 1935, based on a power series construction, shows that uniqueness for the heat equation can also fail once growth conditions are removed. An explicit example of non-uniqueness for the heat equation is given by the contour integral
where $\Gamma$ is a contour consisting of the positive real axis and the upper imaginary axis, with the fractional powers in the integrand being interpreted with the standard branch (with cut on the negative axis). One can show by contour integration that this function solves the heat equation and is smooth (but rapidly growing at infinity), and vanishes for $t \leq 0$, but is not identically zero for $t > 0$.
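(For data with only mild growth, the convolution formula above is also easy to evaluate numerically; the following one-dimensional quadrature sketch is my own illustration, with a truncation radius and a polynomially growing test datum chosen purely for the example.)

```python
import numpy as np

def heat_solution_1d(u0, x, t, R=50.0, n=2_000_001):
    """Evaluate the solution of u_t = u_xx on the line at the point (t, x) via
    convolution with the fundamental solution,
        u(t, x) = (4*pi*t)**(-1/2) * integral of u0(y) * exp(-(x - y)**2 / (4*t)) dy,
    using a simple Riemann sum over [-R, R] (adequate when u0 grows slowly enough
    that the truncated Gaussian tail is negligible)."""
    y, dy = np.linspace(-R, R, n, retstep=True)
    kernel = np.exp(-(x - y) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)
    return np.sum(u0(y) * kernel) * dy

# Example: the polynomially growing datum u0(y) = y^2 evolves to x^2 + 2t,
# as one can also check directly from the equation.
print(heat_solution_1d(lambda y: y ** 2, x=1.0, t=0.5))   # approximately 2.0
```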
Thus, in order to obtain a meaningful (and physically realistic) problem, one needs to impose some decay (or at least limited growth) hypotheses on the data $u_0$, $f$ and solution $u$, $p$ in addition to smoothness. For the data, one can impose a variety of such hypotheses, including the following:
- (Finite energy data) One has
and
.
- ($H^1$ data) One has
and
.
- (Schwartz data) One has
and
for all
.
- (Periodic data) There is some
such that
and
for all
and
.
- (Homogeneous data)
.
Note that smoothness alone does not necessarily imply finite energy, $H^1$, or the Schwartz property. For instance, one can find (scalar) functions that are smooth and finite energy, but not in $H^1$ or Schwartz. Periodicity is of course incompatible with finite energy, $H^1$, or the Schwartz property, except in the trivial case when the data is identically zero.
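(One concrete function with the properties just mentioned, chosen by me for illustration and not necessarily the example intended in the original text, is $f(x) = \sin(|x|^2)/(1+|x|^2)$ on $\mathbb{R}^3$: it is smooth and square-integrable, but it decays too slowly to be Schwartz, and its gradient oscillates too strongly to be square-integrable. A crude radial quadrature makes the dichotomy visible:)

```python
import numpy as np

# f(x) = sin(|x|^2) / (1 + |x|^2) on R^3, written radially; my illustrative example.
def f(r):
    return np.sin(r ** 2) / (1 + r ** 2)

def fprime(r):
    return 2 * r * np.cos(r ** 2) / (1 + r ** 2) - 2 * r * np.sin(r ** 2) / (1 + r ** 2) ** 2

for R in (10.0, 30.0, 100.0):
    r, dr = np.linspace(0.0, R, 2_000_001, retstep=True)
    energy    = 4 * np.pi * np.sum(f(r) ** 2 * r ** 2) * dr       # stabilises as R grows
    gradient2 = 4 * np.pi * np.sum(fprime(r) ** 2 * r ** 2) * dr  # keeps growing with R
    print(R, energy, gradient2)
```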
Similarly, one can impose conditions at spatial infinity on the solution, such as the following:
- (Finite energy solution) One has
.
- ($H^1$ solution) One has
and
.
- (Partially periodic solution) There is some
such that
for all
and
.
- (Fully periodic solution) There is some
such that
and
for all
and
.
(The component of the
solution is for technical reasons, and should not be paid too much attention for this discussion.) Note that we do not consider the notion of a Schwartz solution; as we shall see shortly, this is too restrictive a concept of solution to the Navier-Stokes equation.
Finally, one can downgrade the regularity of the solution from smoothness. There are many ways to do so; two such examples include
- ($H^1$ mild solutions) The solution is not smooth, but is $H^1$ (in the preceding sense) and solves the equation (1) in the sense that the Duhamel formula holds.
- (Leray-Hopf weak solution) The solution $u$ is not smooth, but lies in the energy space $L^\infty_t L^2_x \cap L^2_t \dot H^1_x$, solves (1) in the sense of distributions (after rewriting the system in divergence form), and obeys an energy inequality.
Finally, one can ask for two types of global regularity results on the Navier-Stokes problem: a qualitative regularity result, in which one merely provides existence of a smooth solution without any explicit bounds on that solution, and a quantitative regularity result, which provides bounds on the solution in terms of the initial data, e.g. a bound of the form
for some function $F$. One can make a further distinction between local quantitative results, in which $F$ is allowed to depend on the time $T$, and global quantitative results, in which there is no dependence on $T$ (the latter is only reasonable though in the homogeneous case, or if $f$ has some decay in time).
By combining these various hypotheses and conclusions, we see that one can write down quite a large number of slightly different variants of the global regularity problem. In the official formulation of the regularity problem for the Clay Millennium prize, a positive correct solution to either of the following two problems would be accepted for the prize:
- Conjecture 1.4 (Qualitative regularity for homogeneous periodic data) If $(u_0, f, T)$ is periodic, smooth, and homogeneous, then there exists a smooth partially periodic solution $(u, p)$ with this data.
- Conjecture 1.3 (Qualitative regularity for homogeneous Schwartz data) If $(u_0, f, T)$ is Schwartz and homogeneous, then there exists a smooth finite energy solution $(u, p)$ with this data.
(The numbering here corresponds to the numbering in the paper.)
Furthermore, a negative correct solution to either of the following two problems would also be accepted for the prize:
- Conjecture 1.6 (Qualitative regularity for periodic data) If $(u_0, f, T)$ is periodic and smooth, then there exists a smooth partially periodic solution $(u, p)$ with this data.
- Conjecture 1.5 (Qualitative regularity for Schwartz data) If $(u_0, f, T)$ is Schwartz, then there exists a smooth finite energy solution $(u, p)$ with this data.
I am not announcing any major progress on these conjectures here. What my paper does study, though, is the question of whether the answer to these conjectures is somehow sensitive to the choice of formulation. For instance:
- Note in the periodic formulations of the Clay prize problem that the solution is only required to be partially periodic, rather than fully periodic; thus the pressure has no periodicity hypothesis. One can ask the extent to which the above problems change if one also requires pressure periodicity.
- In another direction, one can ask the extent to which quantitative formulations of the Navier-Stokes problem are stronger than their qualitative counterparts; in particular, whether it is possible that each choice of initial data in a certain class leads to a smooth solution, but with no uniform bound on that solution in terms of various natural norms of the data.
- Finally, one can ask the extent to which the conjecture depends on the category of data. For instance, could it be that global regularity is true for smooth periodic data but false for Schwartz data? True for Schwartz data but false for smooth $H^1$ data? And so forth.
One motivation for the final question (which was posed to me by my colleague, Andrea Bertozzi) is that the Schwartz property on the initial data tends to be instantly destroyed by the Navier-Stokes flow. This can be seen by introducing the vorticity $\omega := \nabla \times u$. If $u_0$ is Schwartz, then from Stokes’ theorem we necessarily have vanishing of certain moments of the vorticity at the initial time. On the other hand, some integration by parts using (1) reveals that such moments are usually not preserved by the flow, and one can easily concoct examples for which the time derivative of such a moment is non-zero at time zero. This suggests that the Schwartz class may be unnecessarily restrictive for Conjecture 1.3 or Conjecture 1.5.
My paper arose out of an attempt to address these three questions, and ended up obtaining partial results in all three directions. Roughly speaking, the results that address these three questions are as follows:
- (Homogenisation) If one only assumes partial periodicity instead of full periodicity, then the forcing term $f$ becomes irrelevant. In particular, Conjecture 1.4 and Conjecture 1.6 are equivalent.
- (Concentration compactness) In the $H^1$ category (both periodic and nonperiodic, homogeneous or nonhomogeneous), the qualitative and quantitative formulations of the Navier-Stokes global regularity problem are essentially equivalent.
- (Localisation) The (inhomogeneous) Navier-Stokes problems in the Schwartz, smooth $H^1$, and finite energy categories are essentially equivalent to each other, and are also implied by the (fully) periodic version of these problems.
The first two of these families of results are relatively routine, drawing on existing methods in the literature; the localisation results though are somewhat more novel, and introduce some new local energy and local enstrophy estimates which may be of independent interest.
Broadly speaking, the moral to draw from these results is that the precise formulation of the Navier-Stokes equation global regularity problem is only of secondary importance; modulo a number of caveats and technicalities, the various formulations are close to being equivalent, and a breakthrough on any one of the formulations is likely to lead (either directly or indirectly) to a comparable breakthrough on any of the others.
This is only a caricature of the actual implications, though. Below is the diagram from the paper indicating the various formulations of the Navier-Stokes equations, and the known implications between them:
The above three streams of results are discussed in more detail below the fold.