Christian Elsholtz and I have recently finished our joint paper “Counting the number of solutions to the Erdös-Straus equation on unit fractions”, submitted to the Journal of the Australian Mathematical Society. This supersedes my previous paper on the subject, by obtaining stronger and more general results. (The paper is currently in the process of being resubmitted to the arXiv, and should appear at this link within a few days.)

As with the previous paper, the main object of study is the number {f(n)} of solutions to the Diophantine equation

\displaystyle  \frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \ \ \ \ \ (1)

with {x,y,z} positive integers. The Erdös-Straus conjecture asserts that {f(n)>0} for all {n>1}. Since {f(nm) \geq f(n)} for all positive integers {n,m}, it suffices to show that {f(p)>0} for all primes {p}.

We single out two special types of solutions: Type I solutions, in which {x} is divisible by {n} and {y,z} are coprime to {n}, and Type II solutions, in which {x} is coprime to {n} and {y,z} are divisible by {n}. Let {f_I(n), f_{II}(n)} denote the number of Type I and Type II solutions respectively. For any {n}, one has

\displaystyle  f(n) \geq 3 f_I(n) + 3 f_{II}(n),

with equality when {n} is an odd prime {p}. Thus, to prove the Erdös-Straus conjecture, it suffices to show that at least one of {f_I(p)}, {f_{II}(p)} is positive whenever {p} is an odd prime.
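
For illustration, the relation {f(p) = 3 f_I(p) + 3 f_{II}(p)} for odd primes is easy to check numerically by brute force (a minimal Python sketch, not from the paper; it enumerates solutions with {x \leq y \leq z} and then counts all orderings):

```python
from fractions import Fraction
from itertools import permutations

def solutions(n):
    """All ordered triples (x, y, z) of positive integers with 4/n = 1/x + 1/y + 1/z."""
    sols = set()
    target = Fraction(4, n)
    # With x <= y <= z one has 1/x < 4/n <= 3/x, i.e. n/4 < x <= 3n/4.
    for x in range(n // 4 + 1, 3 * n // 4 + 1):
        r = target - Fraction(1, x)
        # With y <= z one has 1/y < r <= 2/y, i.e. 1/r < y <= 2/r.
        for y in range(max(x, int(1 / r) + 1), int(2 / r) + 1):
            s = r - Fraction(1, y)
            if s > 0 and s.numerator == 1:  # s = 1/z for a positive integer z
                sols.update(permutations((x, y, s.denominator)))
    return sols

for p in (5, 7, 11, 13):
    f = fI = fII = 0
    for (x, y, z) in solutions(p):
        f += 1
        fI += x % p == 0 and y % p != 0 and z % p != 0   # x divisible by p, y,z coprime
        fII += x % p != 0 and y % p == 0 and z % p == 0  # x coprime to p, y,z divisible
    assert f == 3 * (fI + fII)
    print(p, f, fI, fII)
```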

Our first main results are the asymptotics

\displaystyle  N \log^3 N \ll \sum_{n \leq N} f_I(n) \ll N \log^3 N

\displaystyle  N \log^3 N \ll \sum_{n \leq N} f_{II}(n) \ll N \log^3 N

\displaystyle  N \log^2 N \ll \sum_{p \leq N} f_I(p) \ll N \log^2 N \log\log N

\displaystyle  N \log^2 N \ll \sum_{p \leq N} f_{II}(p) \ll N \log^2 N.

This improves upon the results in the previous paper, which only established

\displaystyle  N \log^2 N \ll \sum_{p \leq N} f_I(p) \ll N \exp(O( \frac{\log N}{\log\log N} ))

and

\displaystyle  N \log^2 N \ll \sum_{p \leq N} f_{II}(p) \ll N \log^2 N \log \log N.

The double logarithmic factor in the upper bound for {\sum_{p \leq N} f_I(p)} is artificial (arising from the inefficiency in the Brun-Titchmarsh inequality on very short progressions) but we do not know how to remove it.

The methods are similar to those in the previous paper (which were also independently discovered in unpublished work of Elsholtz and Heath-Brown), but with the additional input of the Erdös divisor bound on expressions of the form {\sum_{n \leq N} \tau(P(n))} for polynomials {P}, discussed in this recent blog post. (Actually, we need to tighten Erdös’ bound somewhat, to obtain some uniformity in the bounds even as the coefficients of {P} become large, but this turns out to be achievable by going through the original arguments of Erdös more carefully.)

We also note an observation of Heath-Brown, that in our notation gives the lower bound

\displaystyle  N \log^6 N \ll \sum_{n \leq N} f(n);

thus we see that, for typical {n}, most solutions to the Erdös-Straus equation are not of Type I or Type II, in contrast to the case when {n} is prime.

We also have a number of other new results. We find a way to systematically unify all the previously known parameterisations of solutions to the Erdös-Straus equation, by lifting the Cayley-type surface {\{ (x,y,z): \frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \}} to a certain three-dimensional variety in six-dimensional affine space, in such a way that integer points in the former arise from integer points in the latter. Each of the previously known characterisations of solutions then corresponds to a different choice of coordinates on this variety. (This point of view was also adopted in a paper of Heath-Brown, who interprets this lifted variety as the universal torsor of the Cayley surface.) By optimising between these parameterisations and exploiting the divisor bound, we obtain some bounds on the worst-case behaviour of {f_I(n)} and {f_{II}(n)}, namely

\displaystyle  f_I(n) \ll n^{3/5 + O(1/\log \log n)}

and

\displaystyle  f_{II}(n) \ll n^{2/5 + O(1/\log \log n)},

which should be compared to a recent previous bound {f(n) \ll n^{2/3 + O(1/\log \log n)}} of Browning and Elsholtz. In the other direction, we show that {f(n) \gg n^{(3+o(1))/\log\log n}} for infinitely many {n}, and {f(p) \gg \log^{\frac{\log 3}{2}-o(1)} p} for almost all primes {p}. Here, the main tools are some bounds for the representation of a rational as a sum of two unit fractions in the above-mentioned work of Browning and Elsholtz, and also the Turán-Kubilius inequality.

We also completely classify all the congruence classes that can be solved by polynomials, completing the partial list discussed in the previous post. Specifically, the Erdös-Straus conjecture is true for {n} whenever one of the following congruence-type conditions is satisfied:

  1. {n = -f \mod 4ad}, where {a,d,f \in {\bf N}} are such that {f|4a^2 d+1}.
  2. {n = -f \mod 4ac} and {n = -\frac{c}{a} \mod f}, where {a,c,f \in {\bf N}} are such that {(4ac,f)=1}.
  3. {n = -f \mod 4cd} and {n^2 = -4c^2d \mod f}, where {c,d,f \in {\bf N}} are such that {(4cd,f)=1}.
  4. {n = -\frac{1}{e} \mod 4ab} or {n = -e \mod 4ab}, where {a,b,e \in {\bf N}} are such that {e|a+b} and {(e,4ab)=1}.
  5. {n = -4a^2d \mod f}, where {a,d,f \in {\bf N}} are such that {4ad|f+1}.
  6. {n = -4a^2d-e \mod 4ade}, where {a,d,e \in {\bf N}} are such that {(4ad,e)=1}.

In principle, this suggests a way to extend the existing verification of the Erdös-Straus conjecture beyond the current range of {10^{14}} by collecting all congruences to small moduli (e.g. up to {10^6}), and then using this to sieve out the primes up to a given size.
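As a crude illustration of this sieving strategy (a hypothetical sketch, not the actual verification software; for simplicity it uses only the single family of classes {p = -e, -e^{-1} \hbox{ mod } 4ab} with {e} an odd divisor of {a+b}, a special case of condition 4 above):

```python
from math import gcd
from sympy import primerange  # used here only to generate primes

def covered_classes(max_mod):
    """Pairs (q, r) such that the conjecture holds for every prime p = r (mod q),
    using only the family p = -e, -e^{-1} (mod 4ab) with gcd(a,b) = 1 and e an
    odd divisor of a+b; the full list of six families would cover more classes."""
    classes = set()
    for a in range(1, max_mod // 4 + 1):
        for b in range(a, max_mod // (4 * a) + 1):
            if gcd(a, b) != 1:
                continue
            q = 4 * a * b
            for e in range(1, a + b + 1, 2):
                if (a + b) % e == 0:  # e odd and e | a+b forces gcd(e, 4ab) = 1
                    classes.add((q, -e % q))
                    classes.add((q, pow(-e, -1, q)))
    return classes

classes = covered_classes(200)
# primes not covered by any collected class still need individual verification
survivors = [p for p in primerange(3, 10**4)
             if not any(p % q == r for (q, r) in classes)]
print(len(survivors), survivors[:8])
```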

Finally, we begin a study of the more general equation

\displaystyle  \frac{m}{n} = \frac{1}{n_1}+\ldots+\frac{1}{n_k} \ \ \ \ \ (2)

where {m > k \geq 3} are fixed. We can obtain a partial analogue of our main bounds for the {m=4,k=3} case, namely that

\displaystyle  \sum_{n \leq N} f_{m,k,II}(n) \gg N \log^{2^{k-1}-1} N

and

\displaystyle  \sum_{p \leq N} f_{m,k,II}(p) \gg N \log^{2^{k-1}-2} N / \log\log N

where {f_{m,k,II}(n)} denotes the number of solutions to (2) which are of “Type II” in the sense that {n_2,\ldots,n_k} are all divisible by {n}. However, we do not believe our bounds to be sharp in the large {k} regime, though they do show that the expected number of solutions to (2) should grow rapidly in {k}.

One of the basic problems in analytic number theory is to obtain bounds and asymptotics for sums of the form

\displaystyle \sum_{n \leq x} f(n)

in the limit {x \rightarrow \infty}, where {n} ranges over natural numbers less than {x}, and {f: {\bf N} \rightarrow {\bf C}} is some arithmetic function of number-theoretic interest. (It is also often convenient to replace this sharply truncated sum with a smoother sum such as {\sum_n f(n) \psi(n/x)}, but we will not discuss this technicality here.) For instance, the prime number theorem is equivalent to the assertion

\displaystyle \sum_{n \leq x} \Lambda(n) = x + o(x)

where {\Lambda} is the von Mangoldt function, while the Riemann hypothesis is equivalent to the stronger assertion

\displaystyle \sum_{n \leq x} \Lambda(n) = x + O(x^{1/2+o(1)}).

It is thus of interest to develop techniques to estimate such sums {\sum_{n \leq x} f(n)}. Of course, the difficulty of this task depends on how “nice” the function {f} is. The functions {f} that come up in number theory lie on a broad spectrum of “niceness”, with some particularly nice functions being quite easy to sum, and some being insanely difficult.

At the easiest end of the spectrum are those functions {f} that exhibit some sort of regularity or “smoothness”. Examples of smoothness include “Archimedean” smoothness, in which {f(n)} is the restriction of some smooth function {f: {\bf R} \rightarrow {\bf C}} from the reals to the natural numbers, and the derivatives of {f} are well controlled. A typical example is

\displaystyle \sum_{n \leq x} \log n.

One can already get quite good bounds on this quantity by comparison with the integral {\int_1^x \log t\ dt}, namely

\displaystyle \sum_{n \leq x} \log n = x \log x - x + O(\log x),

with sharper bounds available by using tools such as the Euler-Maclaurin formula (see this blog post). Exponentiating such asymptotics, incidentally, leads to one of the standard proofs of Stirling’s formula (as discussed in this blog post).
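(As a quick numerical sanity check of this asymptotic: for {x = 10^6}, the sum exceeds {x \log x - x} by about {\frac{1}{2} \log(2\pi x) \approx 7.8}, consistent with the {O(\log x)} error term and with Stirling's formula.)

```python
import math

x = 10**6
s = sum(math.log(n) for n in range(1, x + 1))
print(s - (x * math.log(x) - x))  # ~ 0.5*log(2*pi*x) ~ 7.8, i.e. O(log x)
```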

One can also consider “non-Archimedean” notions of smoothness, such as periodicity relative to a small period {q}. Indeed, if {f} is periodic with period {q} (and is thus essentially a function on the cyclic group {{\bf Z}/q{\bf Z}}), then one has the easy bound

\displaystyle \sum_{n \leq x} f(n) = \frac{x}{q} \sum_{n \in {\bf Z}/q{\bf Z}} f(n) + O( \sum_{n \in {\bf Z}/q{\bf Z}} |f(n)| ).

In particular, we have the fundamental estimate

\displaystyle \sum_{n \leq x: q|n} 1 = \frac{x}{q} + O(1). \ \ \ \ \ (1)

This is a good estimate when {q} is much smaller than {x}, but as {q} approaches {x} in magnitude, the error term {O(1)} begins to overwhelm the main term {\frac{x}{q}}, and one needs much more delicate information on the fractional part of {\frac{x}{q}} in order to obtain good estimates at this point.

One can also consider functions {f} which combine “Archimedean” and “non-Archimedean” smoothness into an “adelic” smoothness. We will not define this term precisely here (though the concept of a Schwartz-Bruhat function is one way to capture this sort of concept), but a typical example might be

\displaystyle \sum_{n \leq x} \chi(n) \log n

where {\chi} is periodic with some small period {q}. By using techniques such as summation by parts, one can estimate such sums using the techniques used to estimate sums of periodic functions or functions with (Archimedean) smoothness.

Another class of functions that is reasonably well controlled are the multiplicative functions, in which {f(nm) = f(n) f(m)} whenever {n,m} are coprime. Here, one can use the powerful techniques of multiplicative number theory, for instance by working with the Dirichlet series

\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s}

which are clearly related to the partial sums {\sum_{n \leq x} f(n)} (essentially via the Mellin transform, a cousin of the Fourier and Laplace transforms); for this post we ignore the (important) issue of how to make sense of this series when it is not absolutely convergent (but see this previous blog post for more discussion). A primary reason that this technique is effective is that the Dirichlet series of a multiplicative function factorises as an Euler product

\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s} = \prod_p (\sum_{j=0}^\infty \frac{f(p^j)}{p^{js}}).

One also obtains similar types of representations for functions that are not quite multiplicative, but are closely related to multiplicative functions, such as the von Mangoldt function {\Lambda} (whose Dirichlet series {\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}} is not given by an Euler product, but instead by the logarithmic derivative of an Euler product).
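For a concrete numerical instance of the Euler product (a small sketch; sympy is used only to generate primes), one can take {f = \tau}, the divisor function, and {s = 2}, where both sides converge to {\zeta(2)^2}:

```python
from sympy import primerange

s, N = 2.0, 20000
tau = [0] * (N + 1)
for d in range(1, N + 1):        # sieve: tau(n) = number of divisors of n
    for n in range(d, N + 1, d):
        tau[n] += 1
series = sum(tau[n] / n**s for n in range(1, N + 1))
euler = 1.0
for p in primerange(2, N):
    euler *= (1.0 - p**-s) ** -2  # local factor: sum_j (j+1) p^{-js} = (1-p^{-s})^{-2}
print(series, euler)              # both approach zeta(2)^2 = (pi^2/6)^2 = 2.7058...
```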

Moving another notch along the spectrum between well-controlled and ill-controlled functions, one can consider functions {f} that are divisor sums such as

\displaystyle f(n) = \sum_{d \leq R; d|n} g(d) = \sum_{d \leq R} 1_{d|n} g(d)

for some other arithmetic function {g}, and some level {R}. This is a linear combination of periodic functions {1_{d|n} g(d)} and is thus technically periodic in {n} (with period equal to the least common multiple of all the numbers from {1} to {R}), but in practice this period is far too large to be useful (except for extremely small levels {R}, e.g. {R = O(\log x)}). Nevertheless, we can still control the sum {\sum_{n \leq x} f(n)} simply by rearranging the summation:

\displaystyle \sum_{n \leq x} f(n) = \sum_{d \leq R} g(d) \sum_{n \leq x: d|n} 1

and thus by (1) one can bound this by the sum of a main term {x \sum_{d \leq R} \frac{g(d)}{d}} and an error term {O( \sum_{d \leq R} |g(d)| )}. As long as the level {R} is significantly less than {x}, one may expect the main term to dominate, and one can often estimate this term by a variety of techniques (for instance, if {g} is multiplicative, then multiplicative number theory techniques are quite effective, as mentioned previously). Similarly for other slight variants of divisor sums, such as expressions of the form

\displaystyle \sum_{d \leq R; d | n} g(d) \log \frac{n}{d}

or expressions of the form

\displaystyle \sum_{d \leq R} F_d(n)

where each {F_d} is periodic with period {d}.
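The rearrangement trick is easy to see in action (a minimal sketch, with an arbitrary choice of {g} and level {R}):

```python
def g(d):
    return (-1) ** d  # an arbitrary arithmetic function, used at levels d <= R

x, R = 10**4, 100

# direct evaluation of sum_{n <= x} f(n), where f(n) = sum_{d <= R, d | n} g(d)
direct = sum(sum(g(d) for d in range(1, R + 1) if n % d == 0)
             for n in range(1, x + 1))

# rearranged: sum_{d <= R} g(d) #{n <= x : d | n} = sum_{d <= R} g(d) floor(x/d)
rearranged = sum(g(d) * (x // d) for d in range(1, R + 1))
assert direct == rearranged

main_term = x * sum(g(d) / d for d in range(1, R + 1))
print(rearranged, main_term)  # they differ by O(sum_{d <= R} |g(d)|) = O(R)
```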

One of the simplest examples of this comes when estimating the divisor function

\displaystyle \tau(n) := \sum_{d|n} 1,

which counts the number of divisors up to {n}. This is a multiplicative function, and is therefore most efficiently estimated using the techniques of multiplicative number theory; but for reasons that will become clearer later, let us “forget” the multiplicative structure and estimate the above sum by more elementary methods. By applying the preceding method, we see that

\displaystyle \sum_{n \leq x} \tau(n) = \sum_{d \leq x} \sum_{n \leq x:d|n} 1

\displaystyle = \sum_{d \leq x} (\frac{x}{d} + O(1))

\displaystyle = x \log x + O(x). \ \ \ \ \ (2)

Here, we are (barely) able to keep the error term smaller than the main term; this is right at the edge of the divisor sum method, because the level {R} in this case is equal to {x}. Unfortunately, at this high choice of level, it is not always possible to keep the error term under control like this. For instance, if one wishes to use the standard divisor sum representation

\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d},

where {\mu} is the Möbius function, to compute {\sum_{n \leq x} \Lambda(n)}, then one ends up looking at

\displaystyle \sum_{n \leq x} \Lambda(n) = \sum_{d \leq x} \mu(d) \sum_{n \leq x:d|n} \log \frac{n}{d}

\displaystyle = \sum_{d \leq x} \mu(d) ( \frac{x}{d} \log \frac{x}{d} - \frac{x}{d} + O(\log \frac{x}{d}) ).

From Dirichlet series methods, it is not difficult to establish the identities

\displaystyle \lim_{s\rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n)}{n^s} = 0

and

\displaystyle \lim_{s \rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n) \log n}{n^s} = -1.

This suggests (but does not quite prove) that one has

\displaystyle \sum_{n=1}^\infty \frac{\mu(n)}{n} = 0 \ \ \ \ \ (3)

and

\displaystyle \sum_{n=1}^\infty \frac{\mu(n)\log n}{n} = -1 \ \ \ \ \ (4)

in the sense of conditionally convergent series. Assuming one can justify this (which, ultimately, requires one to exclude zeroes of the Riemann zeta function on the line {\hbox{Re}(s)=1}, as discussed in this previous post), one is eventually left with the estimate {x + O(x)}, which is useless as a lower bound (and recovers only the classical Chebyshev estimate {\sum_{n \leq x} \Lambda(n) \ll x} as the upper bound). The inefficiency here when compared to the situation with the divisor function {\tau} can be attributed to the signed nature of the Möbius function {\mu(n)}, which causes some cancellation in the divisor sum expansion that needs to be compensated for with improved estimates.

However, there are a number of tricks available to reduce the level of divisor sums. The simplest comes from exploiting the change of variables {d \mapsto \frac{n}{d}}, which can in principle reduce the level by a square root. For instance, when computing the divisor function {\tau(n) = \sum_{d|n} 1}, one can observe using this change of variables that every divisor of {n} above {\sqrt{n}} is paired with one below {\sqrt{n}}, and so we have

\displaystyle \tau(n) = 2 \sum_{d \leq \sqrt{n}: d|n} 1 \ \ \ \ \ (5)

except when {n} is a perfect square, in which case one must subtract one from the right-hand side. Using this reduced-level divisor sum representation, one can obtain an improvement to (2), namely

\displaystyle \sum_{n \leq x} \tau(n) = x \log x + (2\gamma-1) x + O(\sqrt{x}).

This type of argument is also known as the Dirichlet hyperbola method. A variant of this argument can also deduce the prime number theorem from (3), (4) (and with some additional effort, one can even drop the use of (4)); this is discussed at this previous blog post.
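In practice the hyperbola method also gives a fast exact algorithm: writing {\sum_{n \leq x} \tau(n) = \sum_{ab \leq x} 1 = 2 \sum_{d \leq \sqrt{x}} \lfloor x/d \rfloor - \lfloor \sqrt{x} \rfloor^2}, one can evaluate the sum in {O(\sqrt{x})} steps and compare with the asymptotic above (a quick sketch):

```python
from math import isqrt, log

def divisor_summatory(x):
    """Exact sum_{n <= x} tau(n) via the Dirichlet hyperbola method, in O(sqrt(x)) steps."""
    r = isqrt(x)
    return 2 * sum(x // d for d in range(1, r + 1)) - r * r

gamma = 0.5772156649015329  # Euler-Mascheroni constant
x = 10**8
exact = divisor_summatory(x)
print(exact, exact - (x * log(x) + (2 * gamma - 1) * x))  # discrepancy is O(sqrt(x))
```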

Using this square root trick, one can now also control divisor sums such as

\displaystyle \sum_{n \leq x} \tau(n^2+1).

(Note that {\tau(n^2+1)} has no multiplicativity properties in {n}, and so multiplicative number theory techniques cannot be directly applied here.) The level of the divisor sum here is initially of order {x^2}, which is too large to be useful; but using the square root trick, we can expand this expression as

\displaystyle 2 \sum_{n \leq x} \sum_{d \leq n: d | n^2+1} 1

which one can rewrite as

\displaystyle 2 \sum_{d \leq x} \sum_{d \leq n \leq x: n^2+1 = 0 \hbox{ mod } d} 1.

The constraint {n^2+1=0 \hbox{ mod } d} is periodic in {n} with period {d}, so we can write this as

\displaystyle 2 \sum_{d \leq x} ( \frac{x}{d} \rho(d) + O(\rho(d)) )

where {\rho(d)} is the number of solutions in {{\bf Z}/d{\bf Z}} to the equation {n^2+1 = 0 \hbox{ mod } d}, and so

\displaystyle \sum_{n \leq x} \tau(n^2+1) = 2x \sum_{d \leq x} \frac{\rho(d)}{d} + O(\sum_{d \leq x} \rho(d)).

The function {\rho} is multiplicative, and can be easily computed at primes {p} and prime powers {p^j} using tools such as quadratic reciprocity and Hensel’s lemma. For instance, by Fermat’s two-square theorem, {\rho(p)} is equal to {2} for {p=1 \hbox{ mod } 4} and {0} for {p=3 \hbox{ mod } 4}. From this and standard multiplicative number theory methods (e.g. by obtaining asymptotics on the Dirichlet series {\sum_d \frac{\rho(d)}{d^s}}), one eventually obtains the asymptotic

\displaystyle \sum_{d \leq x} \frac{\rho(d)}{d} = \frac{3}{2\pi} \log x + O(1)

and also

\displaystyle \sum_{d \leq x} \rho(d) = O(x)

and thus

\displaystyle \sum_{n \leq x} \tau(n^2+1) = \frac{3}{\pi} x \log x + O(x).
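
This prediction is already visible numerically at modest sizes (a brute-force sketch; the divisor counts here are computed by trial division, so only small {x} is feasible, and the {O(x)} term is still comparable to the main term at this scale):

```python
from math import isqrt, log, pi

def tau(m):
    """Number of divisors of m by trial division (divisors pair up around sqrt(m))."""
    r = isqrt(m)
    cnt = 2 * sum(1 for d in range(1, r + 1) if m % d == 0)
    return cnt - (r * r == m)

x = 3000
total = sum(tau(n * n + 1) for n in range(1, x + 1))
print(total, 3 / pi * x * log(x))  # main term (3/pi) x log x, plus an O(x) error
```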

Similar arguments give asymptotics for {\tau} on other quadratic polynomials; see for instance this paper of Hooley and these papers by McKee. Note that the irreducibility of the polynomial will be important. If one considers instead a sum involving a reducible polynomial, such as {\sum_{n \leq x} \tau(n^2-1)}, then the analogous quantity {\rho(n)} becomes significantly larger, leading to a larger growth rate (of order {x \log^2 x} rather than {x\log x}) for the sum.

However, the square root trick is insufficient by itself to deal with higher order sums involving the divisor function, such as

\displaystyle \sum_{n \leq x} \tau(n^3+1);

the level here is initially of order {x^3}, and the square root trick only lowers this to about {x^{3/2}}, creating an error term that overwhelms the main term. And indeed, the asymptotic for this sum has not yet been rigorously established (although if one heuristically drops error terms, one can arrive at a reasonable conjecture for this asymptotic); some results are known if one averages over additional parameters (see e.g. this paper of Greaves, or this paper of Matthiesen).

Nevertheless, there is an ingenious argument of Erdös that allows one to obtain good upper and lower bounds for these sorts of sums, in particular establishing the asymptotic

\displaystyle x \log x \ll \sum_{n \leq x} \tau(P(n)) \ll x \log x \ \ \ \ \ (6)

for any fixed irreducible non-constant polynomial {P} that maps {{\bf N}} to {{\bf N}} (with the implied constants depending of course on the choice of {P}). There is also the related moment bound

\displaystyle \sum_{n \leq x} \tau^m(P(n)) \ll x \log^{O(1)} x \ \ \ \ \ (7)

for any fixed {P} (not necessarily irreducible) and any fixed {m \geq 1}, due to van der Corput; this bound is in fact used to dispose of some error terms in the proof of (6). These should be compared with what one can obtain from the divisor bound {\tau(n) \ll n^{O(1/\log \log n)}} and the trivial bound {\tau(n) \geq 1}, giving the bounds

\displaystyle x \ll \sum_{n \leq x} \tau^m(P(n)) \ll x^{1 + O(\frac{1}{\log \log x})}

for any fixed {m \geq 1}.

The lower bound in (6) is easy, since one can simply lower the level in (5) to obtain the lower bound

\displaystyle \tau(n) \geq \sum_{d \leq n^\theta: d|n} 1

for any {\theta>0}, and the preceding methods then easily allow one to obtain the lower bound by taking {\theta} small enough (more precisely, if {P} has degree {d}, one should take {\theta} equal to {1/d} or less). The upper bounds in (6) and (7) are more difficult. Ideally, if we could obtain upper bounds of the form

\displaystyle \tau(n) \ll \sum_{d \leq n^\theta: d|n} 1 \ \ \ \ \ (8)

for any fixed {\theta > 0}, then the preceding methods would easily establish both results. Unfortunately, this bound can fail, as illustrated by the following example. Suppose that {n} is the product of {k} distinct primes {p_1 \ldots p_k}, each of which is close to {n^{1/k}}. Then {n} has {2^k} divisors, with {\binom{k}{j}} of them close to {n^{j/k}} for each {0 \leq j \leq k}. One can think of (the logarithms of) these divisors as being distributed according to what is essentially a binomial distribution, thus a randomly selected divisor of {n} has magnitude about {n^{j/k}}, where {j} is a random variable which has the same distribution as the number of heads in {k} independently tossed fair coins. By the law of large numbers, {j} should concentrate near {k/2} when {k} is large, which implies that the majority of the divisors of {n} will be close to {n^{1/2}}. Sending {k \rightarrow \infty}, one can show that the bound (8) fails whenever {\theta < 1/2}.
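
This concentration is easy to observe numerically (a small sketch; sympy is used only to supply primes of comparable size):

```python
from itertools import combinations
from math import log
from sympy import primerange

primes = list(primerange(1000, 1250))  # primes of comparable size
for k in (8, 12, 16):
    ps = primes[:k]
    logn = sum(log(p) for p in ps)
    # log of each of the 2^k divisors of n = p_1 ... p_k
    divisor_logs = [sum(log(p) for p in sub)
                    for j in range(k + 1) for sub in combinations(ps, j)]
    frac = sum(t <= 0.25 * logn for t in divisor_logs) / len(divisor_logs)
    print(k, frac)  # proportion of divisors of size at most n^{1/4} decays in k
```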

This however can be fixed in a number of ways. First of all, even when {\theta<1/2}, one can show weaker substitutes for (8). For instance, for any fixed {\theta > 0} and {m \geq 1} one can show a bound of the form

\displaystyle \tau(n)^m \ll \sum_{d \leq n^\theta: d|n} \tau(d)^C \ \ \ \ \ (9)

for some {C} depending only on {m,\theta}. This nice elementary inequality (first observed by Landreau) already gives a quite short proof of van der Corput’s bound (7).

For Erdös’s upper bound (6), though, one cannot afford to lose these additional factors of {\tau(d)}, and one must argue more carefully. Here, the key observation is that the counterexample discussed earlier – when the natural number {n} is the product of a large number of fairly small primes – is quite atypical; most numbers have at least one large prime factor. For instance, the number of natural numbers less than {x} that contain a prime factor between {x^{1/2}} and {x} is equal to

\displaystyle \sum_{x^{1/2} \leq p \leq x} (\frac{x}{p} + O(1)),

which, thanks to Mertens’ theorem

\displaystyle \sum_{p \leq x} \frac{1}{p} = \log\log x + M+o(1)

for some absolute constant {M}, is comparable to {x}. In a similar spirit, one can show by similarly elementary means that the number of natural numbers less than {x} that are {x^{1/m}}-smooth, in the sense that all prime factors are at most {x^{1/m}}, is only about {m^{-cm} x} or so. Because of this, one can hope that the bound (8), while not true in full generality, will still be true for most natural numbers {n}, with some slightly weaker substitute available (such as (7)) for the exceptional numbers {n}. This turns out to be the case by an elementary but careful argument.

The Erdös argument is quite robust; for instance, the more general inequality

\displaystyle x \log^{2^m-1} x \ll \sum_{n \leq x} \tau(P(n))^m \ll x \log^{2^m-1} x

for fixed irreducible {P} and {m \geq 1}, which improves van der Corput’s inequality (7), was shown by Delmer using the same methods. (A slight error in the original paper of Erdös was also corrected in this latter paper.) In a forthcoming revision to my paper on the Erdös-Straus conjecture, Christian Elsholtz and I have also applied this method to obtain bounds such as

\displaystyle \sum_{a \leq A} \sum_{b \leq B} \tau(a^2 b + 1) \ll AB \log(A+B),

which turn out to be enough to obtain the right asymptotics for the number of solutions to the equation {\frac{4}{p}= \frac{1}{x}+\frac{1}{y}+\frac{1}{z}}.

Below the fold I will provide some more details of the arguments of Landreau and of Erdös.


I’ve just opened the research thread for the mini-polymath3 project over at the polymath blog.  I decided to use Q2 of this year’s IMO, in part to see how the polymath format copes with a geometric problem.  (The full list of questions for this year is available here.)

This post will serve as the discussion thread of the project, intended to focus all the non-research aspects of the project such as organisational matters or commentary on the progress of the project.    The third component of the project is the wiki page, which is intended to summarise the progress made so far on the problem.

As with mini-polymath1 and mini-polymath2, I myself will be serving primarily as a moderator, and hope other participants will take the lead in the research and in keeping the wiki up-to-date.

Just a reminder that the mini-polymath3 project begins in 24 hours, on July 19, 8pm UTC.

An algebraic (affine) plane curve of degree {d} over some field {k} is a curve {\gamma} of the form

\displaystyle  \gamma = \{ (x,y) \in k^2: P(x,y) = 0 \}

where {P} is some non-constant polynomial of degree {d}. Examples of low-degree plane curves include

  • Degree {1} (linear) curves {\{ (x,y) \in k^2: ax+by=c\}}, which are simply the lines;
  • Degree {2} (quadric) curves {\{ (x,y) \in k^2: ax^2+bxy+cy^2+dx+ey+f=0\}}, which (when {k={\bf R}}) include the classical conic sections (i.e. ellipses, hyperbolae, and parabolae), but also include the reducible example of the union of two lines; and
  • Degree {3} (cubic) curves {\{ (x,y) \in k^2: ax^3+bx^2y+cxy^2+dy^3+ex^2+fxy+gy^2+hx+iy+j=0\}}, which include the elliptic curves {\{ (x,y) \in k^2: y^2=x^3+ax+b\}} (with non-zero discriminant {\Delta := -16(4a^3+27b^2)}, so that the curve is smooth) as examples (ignoring some technicalities when {k} has characteristic two or three), but also include the reducible examples of the union of a line and a conic section, or the union of three lines.
  • etc.

Algebraic affine plane curves can also be extended to the projective plane {{\Bbb P} k^2 = \{ [x,y,z]: (x,y,z) \in k^3 \backslash 0 \}} by homogenising the polynomial. For instance, the affine quadric curve {\{ (x,y) \in k^2: ax^2+bxy+cy^2+dx+ey+f=0\}} would become {\{ [x,y,z] \in {\Bbb P} k^2: ax^2+bxy+cy^2+dxz+eyz+fz^2=0\}}.

One of the fundamental theorems about algebraic plane curves is Bézout’s theorem, which asserts that if a degree {d} curve {\gamma} and a degree {d'} curve {\gamma'} have no common component, then they intersect in at most {dd'} points (and if the underlying field {k} is algebraically closed, one works projectively, and one counts intersections with multiplicity, they intersect in exactly {dd'} points). Thus, for instance, two distinct lines intersect in at most one point; a line and a conic section intersect in at most two points; two distinct conic sections intersect in at most four points; a line and an elliptic curve intersect in at most three points; two distinct elliptic curves intersect in at most nine points; and so forth. Bézout’s theorem is discussed in this previous post.

From linear algebra we also have the fundamental fact that one can build algebraic curves through various specified points. For instance, for any two points {A_1,A_2} one can find a line {\{ (x,y): ax+by=c\}} passing through the points {A_1,A_2}, because this imposes two linear constraints on three unknowns {a,b,c} and is thus guaranteed to have at least one solution. Similarly, given any five points {A_1,\ldots,A_5}, one can find a quadric curve passing through these five points (though note that if three of these points are collinear, then this curve cannot be a conic thanks to Bézout’s theorem, and is thus necessarily reducible to the union of two lines); given any nine points {A_1,\ldots,A_9}, one can find a cubic curve going through these nine points; and so forth. This simple observation is one of the foundational building blocks of the polynomial method in combinatorial incidence geometry, discussed in these blog posts.
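For illustration, here is the linear algebra in the quadric case as a small numerical sketch (the five points are an arbitrary choice with no three collinear; the conic's coefficient vector is read off from the nullspace of a {5 \times 6} linear system):

```python
import numpy as np

# a conic a x^2 + b xy + c y^2 + d x + e y + f = 0 through five given points
pts = [(0, 0), (1, 0), (0, 1), (2, 3), (3, 1)]
M = np.array([[x * x, x * y, y * y, x, y, 1] for (x, y) in pts], dtype=float)
coeffs = np.linalg.svd(M)[2][-1]  # a vector spanning the nullspace of M
for (x, y) in pts:
    assert abs(coeffs @ [x * x, x * y, y * y, x, y, 1]) < 1e-9
print(coeffs)
```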

In the degree {1} case, it is always true that two distinct points {A, B} determine exactly one line {\overleftrightarrow{AB}}. In higher degree, the situation is a bit more complicated. For instance, five collinear points determine more than one quadric curve, as one can simply take the union of the line containing those five points, together with an arbitrary additional line. Similarly, eight points on a conic section plus one additional point determine more than one cubic curve, as one can take that conic section plus an arbitrary line going through the additional point. However, if one places some “general position” hypotheses on these points, then one can recover uniqueness. For instance, given five points, no three of which are collinear, there can be at most one quadric curve that passes through these points (because these five points cannot lie on the union of two lines, and by Bézout’s theorem they cannot simultaneously lie on two distinct conic sections).

For cubic curves, the situation is more complicated still. Consider for instance two distinct cubic curves {\gamma_0 = \{ P_0(x,y)=0\}} and {\gamma_\infty = \{P_\infty(x,y)=0\}} that intersect in precisely nine points {A_1,\ldots,A_9} (note from Bézout’s theorem that this is an entirely typical situation). Then there is in fact an entire one-parameter family of cubic curves that pass through these points, namely the curves {\gamma_t = \{ P_0(x,y) + t P_\infty(x,y) = 0\}} for any {t \in k \cup \{\infty\}} (with the convention that the constraint {P_0+tP_\infty=0} is interpreted as {P_\infty=0} when {t=\infty}).

In fact, these are the only cubics that pass through these nine points, or even through eight of the nine points. More precisely, we have the following useful fact, known as the Cayley-Bacharach theorem:

Proposition 1 (Cayley-Bacharach theorem) Let {\gamma_0 = \{ P_0(x,y)=0\}} and {\gamma_\infty = \{P_\infty(x,y)=0\}} be two cubic curves that intersect (over some algebraically closed field {k}) in precisely nine distinct points {A_1,\ldots,A_9 \in k^2}. Let {P} be a cubic polynomial that vanishes on eight of these points (say {A_1,\ldots,A_8}). Then {P} is a linear combination of {P_0,P_\infty}, and in particular vanishes on the ninth point {A_9}.

Proof: (This proof is based on a text of Husemöller.) We assume for contradiction that there is a cubic polynomial {P} that vanishes on {A_1,\ldots,A_8}, but is not a linear combination of {P_0} and {P_\infty}.

We first make some observations on the points {A_1,\ldots,A_9}. No four of these points can be collinear, because then by Bézout’s theorem, {P_0} and {P_\infty} would both have to vanish on this line, contradicting the fact that {\gamma_0, \gamma_\infty} meet in at most nine points. For similar reasons, no seven of these points can lie on a quadric curve.

One consequence of this is that any five of the {A_1,\ldots,A_9} determine a unique quadric curve {\sigma}. The existence of the curve follows from linear algebra as discussed previously. If five of the points were to lie on two distinct quadric curves {\sigma,\sigma'}, then by Bézout’s theorem {\sigma} and {\sigma'} must share a common line; but this line can contain at most three of the five points, and the remaining two points uniquely determine the line that forms the other component of both {\sigma} and {\sigma'}, so that {\sigma} and {\sigma'} coincide, and the claim follows.

Now suppose that three of the first eight points, say {A_1,A_2,A_3}, are collinear, lying on a line {\ell}. The remaining five points {A_4,\ldots,A_8} do not lie on {\ell}, and determine a unique quadric curve {\sigma} by the previous discussion. Let {B} be another point on {\ell}, and let {C} be a point that does not lie on either {\ell} or {\sigma}. By linear algebra, one can find a non-trivial linear combination {Q = aP + bP_0 + cP_\infty} of {P,P_0,P_\infty} that vanishes at both {B} and {C}. Then {Q} is a cubic polynomial that vanishes on the four collinear points {A_1,A_2,A_3,B} and thus vanishes on {\ell}, thus the cubic curve defined by {Q} consists of {\ell} and a quadric curve. This curve passes through {A_4,\ldots,A_8} and thus equals {\sigma}. But then {C} does not lie on either {\ell} or {\sigma} despite being a vanishing point of {Q}, a contradiction. Thus, no three points from {A_1,\ldots,A_8} are collinear.

In a similar vein, suppose next that six of the first eight points, say {A_1,\ldots,A_6}, lie on a quadric curve {\sigma}; as no three points are collinear, this quadric curve cannot be the union of two lines, and is thus a conic section. The remaining two points {A_7, A_8} determine a unique line {\ell = \overleftrightarrow{A_7A_8}}. Let {B} be another point on {\sigma}, and let {C} be another point that does not lie on either {\ell} or {\sigma}. As before, we can find a non-trivial cubic {Q = aP + bP_0+cP_\infty} that vanishes at both {B, C}. As {Q} vanishes at seven points of a conic section {\sigma}, it must vanish on all of {\sigma}, and so the cubic curve defined by {Q} is the union of {\sigma} and a line that passes through {A_7} and {A_8}, which must necessarily be {\ell}. But then this curve does not pass through {C}, a contradiction. Thus no six points in {A_1,\ldots,A_8} lie on a quadric curve.

Finally, let {\ell} be the line through the two points {A_1,A_2}, and {\sigma} the quadric curve through the five points {A_3,\ldots,A_7}; as before, {\sigma} must be a conic section, and by the preceding paragraphs we see that {A_8} does not lie on either {\sigma} or {\ell}. We pick two more points {B, C} lying on {\ell} but not on {\sigma}. As before, we can find a non-trivial cubic {Q = aP + bP_0+cP_\infty} that vanishes on {B, C}; it vanishes on four points on {\ell} and thus {Q} defines a cubic curve that consists of {\ell} and a quadric curve. The quadric curve passes through {A_3,\ldots,A_7} and is thus {\sigma}; but then the curve does not pass through {A_8}, a contradiction. This contradiction finishes the proof of the proposition. \Box

I recently learned of this proposition and its role in unifying many incidence geometry facts concerning lines, quadric curves, and cubic curves. For instance, we can recover the proof of the classical theorem of Pappus:

Theorem 2 (Pappus’ theorem) Let {\ell, \ell'} be two distinct lines, let {A_1,A_2,A_3} be distinct points on {\ell} that do not lie on {\ell'}, and let {B_1,B_2,B_3} be distinct points on {\ell'} that do not lie on {\ell}. Suppose that for {ij=12,23,31}, the lines {\overleftrightarrow{A_i B_j}} and {\overleftrightarrow{A_j B_i}} meet at a point {C_{ij}}. Then the points {C_{12}, C_{23}, C_{31}} are collinear.

Proof: We may assume that {C_{12}, C_{23}} are distinct, since the claim is trivial otherwise.

Let {\gamma_0} be the union of the three lines {\overleftrightarrow{A_1 B_2}}, {\overleftrightarrow{A_2 B_3}}, and {\overleftrightarrow{A_3 B_1}} (the purple lines in the first figure), let {\gamma_1} be the union of the three lines {\overleftrightarrow{A_2 B_1}}, {\overleftrightarrow{A_3 B_2}}, and {\overleftrightarrow{A_1 B_3}} (the dark blue lines), and let {\gamma} be the union of the three lines {\ell}, {\ell'}, and {\overleftrightarrow{C_{12} C_{23}}} (the other three lines). By construction, {\gamma_0} and {\gamma_1} are cubic curves with no common component that meet at the nine points {A_1,A_2,A_3,B_1,B_2,B_3,C_{12},C_{23},C_{31}}. Also, {\gamma} is a cubic curve that passes through the first eight of these points, and thus also passes through the ninth point {C_{31}}, by the Cayley-Bacharach theorem. The claim follows (note that {C_{31}} cannot lie on {\ell} or {\ell'}). \Box

The same argument gives the closely related theorem of Pascal:

Theorem 3 (Pascal’s theorem) Let {A_1,A_2,A_3,B_1,B_2,B_3} be distinct points on a conic section {\sigma}. Suppose that for {ij=12,23,31}, the lines {\overleftrightarrow{A_i B_j}} and {\overleftrightarrow{A_j B_i}} meet at a point {C_{ij}}. Then the points {C_{12}, C_{23}, C_{31}} are collinear.

Proof: Repeat the proof of Pappus’ theorem, with {\sigma} taking the place of {\ell \cup \ell'}. (Note that as any line meets {\sigma} in at most two points, the {C_{ij}} cannot lie on {\sigma}.) \Box

One can view Pappus’s theorem as the degenerate case of Pascal’s theorem, when the conic section degenerates to the union of two lines.

Finally, Proposition 1 gives the associativity of the elliptic curve group law:

Theorem 4 (Associativity of the elliptic curve law) Let {\gamma := \{ (x,y) \in k^2: y^2 = x^3+ax+b \} \cup \{O\}} be a (projective) elliptic curve, where {O := [0,1,0]} is the point at infinity on the {y}-axis, and the discriminant {\Delta := -16(4a^3+27b^2)} is non-zero. Define an addition law {+} on {\gamma} by defining {A+B} to equal {-C}, where {C} is the unique point on {\gamma} collinear with {A} and {B} (if {A,B} are distinct) or lying on the tangent line to {\gamma} at {A} (if {A=B}), and {-C} is the reflection of {C} through the {x}-axis (thus {C, -C, O} are collinear), with the convention {-O=O}. Then {+} gives {\gamma} the structure of an abelian group with identity {O} and inverse {-}.

Proof: It is clear that {O} is the identity for {+}, {-} is an inverse, and {+} is abelian. The only non-trivial assertion is associativity: {(A+B)+C = A+(B+C)}. By a perturbation (or Zariski closure) argument, we may assume that we are in the generic case when {O,A,B,C,A+B,B+C,-(A+B), -(B+C)} are all distinct from each other and from {-((A+B)+C), -(A+(B+C))}. (Here we are implicitly using the smoothness of the elliptic curve, which is guaranteed by the hypothesis that the discriminant is non-zero.)

Let {\gamma'} be the union of the three lines {\overleftrightarrow{AB}}, {\overleftrightarrow{C(A+B)}}, and {\overleftrightarrow{O(B+C)}} (the purple lines), and let {\gamma''} be the union of the three lines {\overleftrightarrow{O(A+B)}}, {\overleftrightarrow{BC}}, and {\overleftrightarrow{A(B+C)}} (the green lines). Observe that {\gamma'} and {\gamma} are cubic curves with no common component that meet at the nine distinct points {O, A, B, C, A+B, -(A+B), B+C, -(B+C), -((A+B)+C)}. The cubic curve {\gamma''} goes through the first eight of these points, and thus (by Proposition 1) also goes through the ninth point {-((A+B)+C)}. This implies that the line through {A} and {B+C} meets {\gamma} in both {-(A+(B+C))} and {-((A+B)+C)}, and so these two points must be equal, and so {(A+B)+C=A+(B+C)} as required. \Box
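
(Associativity is also easy to test numerically for a specific curve: the following sketch implements the chord-and-tangent formulae over a small finite field {{\bf F}_p}, with an arbitrary choice of curve, and checks {(A+B)+C = A+(B+C)} exhaustively on a few points.)

```python
p, a, b = 101, 2, 3
assert (4 * a**3 + 27 * b**2) % p != 0  # non-zero discriminant, so the curve is smooth

O = None  # the point at infinity, serving as the group identity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                    # P + (-P) = O
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p)  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p)         # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

points = [O] + [(x, y) for x in range(p) for y in range(p)
                if (y * y - x**3 - a * x - b) % p == 0]
for P in points[:10]:
    for Q in points[:10]:
        for R in points[:10]:
            assert add(add(P, Q), R) == add(P, add(Q, R))
```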

One can view Pappus’s theorem and Pascal’s theorem as a degeneration of the associativity of the elliptic curve law, when the elliptic curve degenerates to three lines (in the case of Pappus) or the union of one line and one conic section (in the case of Pascal’s theorem).

This is another installment of my series of posts on Hilbert’s fifth problem. One formulation of this problem is answered by the following theorem of Gleason and Montgomery-Zippin:

Theorem 1 (Hilbert’s fifth problem) Let {G} be a topological group which is locally Euclidean. Then {G} is isomorphic to a Lie group.

Theorem 1 is a deep and difficult result, but the discussion in the previous posts has reduced the proof of this theorem to that of establishing two simpler results, involving the concept of a group with no small subgroups (NSS), and that of a Gleason metric. We briefly recall the relevant definitions:

Definition 2 (NSS) A topological group {G} is said to have no small subgroups, or is NSS for short, if there is an open neighbourhood {U} of the identity in {G} that contains no subgroups of {G} other than the trivial subgroup {\{ \hbox{id}\}}.

Definition 3 (Gleason metric) Let {G} be a topological group. A Gleason metric on {G} is a left-invariant metric {d: G \times G \rightarrow {\bf R}^+} which generates the topology on {G} and obeys the following properties for some constant {C>0}, writing {\|g\|} for {d(g,\hbox{id})}:

  • (Escape property) If {g \in G} and {n \geq 1} is such that {n \|g\| \leq \frac{1}{C}}, then

    \displaystyle  \|g^n\| \geq \frac{1}{C} n \|g\|. \ \ \ \ \ (1)

  • (Commutator estimate) If {g, h \in G} are such that {\|g\|, \|h\| \leq \frac{1}{C}}, then

    \displaystyle  \|[g,h]\| \leq C \|g\| \|h\|, \ \ \ \ \ (2)

    where {[g,h] := g^{-1}h^{-1}gh} is the commutator of {g} and {h}.

The remaining steps in the resolution of Hilbert’s fifth problem are then as follows:

Theorem 4 (Reduction to the NSS case) Let {G} be a locally compact group, and let {U} be an open neighbourhood of the identity in {G}. Then there exists an open subgroup {G'} of {G}, and a compact subgroup {N} of {G'} contained in {U}, such that {G'/N} is NSS and locally compact.

Theorem 5 (Gleason’s lemma) Let {G} be a locally compact NSS group. Then {G} has a Gleason metric.

The purpose of this post is to establish these two results, using arguments that are originally due to Gleason. We will split this task into several subtasks, each of which improves the structure on the group {G} by some amount:

Proposition 6 (From locally compact to metrisable) Let {G} be a locally compact group, and let {U} be an open neighbourhood of the identity in {G}. Then there exists an open subgroup {G'} of {G}, and a compact subgroup {N} of {G'} contained in {U}, such that {G'/N} is locally compact and metrisable.

For any open neighbourhood {U} of the identity in {G}, let {Q(U)} be the union of all the subgroups of {G} that are contained in {U}. (Thus, for instance, {G} is NSS if and only if {Q(U)} is trivial for all sufficiently small {U}.)

Proposition 7 (From metrisable to subgroup trapping) Let {G} be a locally compact metrisable group. Then {G} has the subgroup trapping property: for every open neighbourhood {U} of the identity, there exists another open neighbourhood {V} of the identity such that {Q(V)} generates a subgroup {\langle Q(V) \rangle} contained in {U}.

Proposition 8 (From subgroup trapping to NSS) Let {G} be a locally compact group with the subgroup trapping property, and let {U} be an open neighbourhood of the identity in {G}. Then there exists an open subgroup {G'} of {G}, and a compact subgroup {N} of {G'} contained in {U}, such that {G'/N} is locally compact and NSS.

Proposition 9 (From NSS to the escape property) Let {G} be a locally compact NSS group. Then there exists a left-invariant metric {d} on {G} generating the topology on {G} which obeys the escape property (1) for some constant {C}.

Proposition 10 (From escape to the commutator estimate) Let {G} be a locally compact group with a left-invariant metric {d} that obeys the escape property (1). Then {d} also obeys the commutator property (2).

It is clear that Propositions 6, 7, and 8 combine to give Theorem 4, and Propositions 9, 10 combine to give Theorem 5.

Propositions 6-10 are all proven separately, but their proofs share some common strategies and ideas. The first main idea is to construct metrics on a locally compact group {G} by starting with a suitable “bump function” {\phi \in C_c(G)} (i.e. a continuous, compactly supported function from {G} to {{\bf R}}) and pulling back the metric structure on {C_c(G)} by using the translation action {\tau_g \phi(x) := \phi(g^{-1} x)}, thus creating a (semi-)metric

\displaystyle  d_\phi( g, h ) := \| \tau_g \phi - \tau_h \phi \|_{C_c(G)} := \sup_{x \in G} |\phi(g^{-1} x) - \phi(h^{-1} x)|. \ \ \ \ \ (3)

One easily verifies that this is indeed a (semi-)metric (in that it is non-negative, symmetric, and obeys the triangle inequality); it is also left-invariant, and so we have {d_\phi(g,h) = \|g^{-1} h \|_\phi = \| h^{-1} g \|_\phi}, where

\displaystyle  \| g \|_\phi = d_\phi(g,\hbox{id}) = \| \partial_g \phi \|_{C_c(G)}

where {\partial_g} is the difference operator {\partial_g = 1 - \tau_g},

\displaystyle  \partial_g \phi(x) = \phi(x) - \phi(g^{-1} x).

This construction was already seen in the proof of the Birkhoff-Kakutani theorem, which is the main tool used to establish Proposition 6. For the other propositions, the idea is to choose a bump function {\phi} that is “smooth” enough that it creates a metric with good properties such as the commutator estimate (2). Roughly speaking, to get a bound of the form (2), one needs {\phi} to have “{C^{1,1}} regularity” with respect to the “right” smooth structure on {G}. By {C^{1,1}} regularity, we mean here something like a bound of the form

\displaystyle  \| \partial_g \partial_h \phi \|_{C_c(G)} \ll \|g\|_\phi \|h\|_\phi \ \ \ \ \ (4)

for all {g,h \in G}. Here we use the usual asymptotic notation, writing {X \ll Y} or {X=O(Y)} if {X \leq CY} for some constant {C} (which can vary from line to line).

The following lemma illustrates how {C^{1,1}} regularity can be used to build Gleason metrics.

Lemma 11 Suppose that {\phi \in C_c(G)} obeys (4). Then the (semi-)metric {d_\phi} (and associated (semi-)norm {\| \cdot \|_\phi}) obey the escape property (1) and the commutator property (2).

Proof: We begin with the commutator property (2). Observe the identity

\displaystyle  \tau_{[g,h]} = \tau_{hg}^{-1} \tau_{gh}

whence

\displaystyle  \partial_{[g,h]} = \tau_{hg}^{-1} ( \tau_{hg} - \tau_{gh} )

\displaystyle  = \tau_{hg}^{-1} ( \partial_h \partial_g - \partial_g \partial_h ).

From the triangle inequality (and translation-invariance of the {C_c(G)} norm) we thus see that (2) follows from (4). Similarly, to obtain the escape property (1), observe the telescoping identity

\displaystyle  \partial_{g^n} = n \partial_g - \sum_{i=0}^{n-1} \partial_g \partial_{g^i}

for any {g \in G} and natural number {n}, and thus by the triangle inequality

\displaystyle  \| g^n \|_\phi = n \| g \|_\phi + O( \sum_{i=0}^{n-1} \| \partial_g \partial_{g^i} \phi \|_{C_c(G)} ). \ \ \ \ \ (5)

But from (4) (and the triangle inequality) we have

\displaystyle  \| \partial_g \partial_{g^i} \phi \|_{C_c(G)} \ll \|g\|_\phi \|g^i \|_\phi \ll i \|g\|_\phi^2

and thus we have the “Taylor expansion”

\displaystyle  \|g^n\|_\phi = n \|g\|_\phi + O( n^2 \|g\|_\phi^2 )

which gives (1). \Box

It remains to obtain {\phi} that have the desired {C^{1,1}} regularity property. In order to get such regular bump functions, we will use the trick of convolving together two lower regularity bump functions (such as two functions with “{C^{0,1}} regularity” in some sense to be determined later). In order to perform this convolution, we will use the fundamental tool of (left-invariant) Haar measure {\mu} on the locally compact group {G}. Here we exploit the basic fact that the convolution

\displaystyle  f_1 * f_2(x) := \int_G f_1(y) f_2(y^{-1} x)\ d\mu(y) \ \ \ \ \ (6)

of two functions {f_1,f_2 \in C_c(G)} tends to be smoother than either of the two factors {f_1,f_2}. This is easiest to see in the abelian case, since in this case we can distribute derivatives according to the law

\displaystyle  \partial_g (f_1 * f_2) = (\partial_g f_1) * f_2 = f_1 * (\partial_g f_2),

which suggests that the order of “differentiability” of {f_1*f_2} should be the sum of the orders of {f_1} and {f_2} separately.
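This smoothing effect is easy to observe numerically on {G = {\bf R}} (a sketch; the maximal second difference quotient serves as a proxy for {\| \phi'' \|_\infty}, and each convolution with a box function gains one order of regularity):

```python
import numpy as np

dx = 0.01
x = np.arange(-2, 2, dx)
box = np.where(np.abs(x) <= 0.5, 1.0, 0.0)        # bounded, but discontinuous
hat = np.convolve(box, box, mode="same") * dx     # Lipschitz ("C^{0,1}")
smooth = np.convolve(hat, box, mode="same") * dx  # "C^{1,1}": bounded second derivative
for name, f in (("box", box), ("hat", hat), ("smooth", smooth)):
    print(name, np.abs(np.diff(f, 2)).max() / dx**2)  # ~1e4, ~1e2, ~1 respectively
```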

These ideas are already sufficient to establish Proposition 10 directly, and also Proposition 9 when combined with an additional bootstrap argument. The proofs of Proposition 7 and Proposition 8 use similar techniques, but are more difficult due to the potential presence of small subgroups, which require an application of the Peter-Weyl theorem to properly control. Both of these results will be proven below the fold, thus (when combined with the preceding posts) completing the proof of Theorem 1.

The presentation here is based on some unpublished notes of van den Dries and Goldbring on Hilbert’s fifth problem. I am indebted to Emmanuel Breuillard, Ben Green, and Tom Sanders for many discussions related to these arguments.


I have just uploaded to the arXiv my paper “On the number of solutions to {\frac{4}{p}=\frac{1}{n_1}+\frac{1}{n_2}+\frac{1}{n_3}}“, submitted to the Journal of the Australian Mathematical Society.

For any positive integer {n}, let {f(n)} denote the number of solutions to the Diophantine equation

\displaystyle  \frac{4}{n} = \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3}

where {n_1,n_2,n_3} are positive integers (we allow repetitions, and do not require the {n_i} to be increasing). The Erdös-Straus conjecture asserts that {f(n)>0} for all {n \geq 2}. By dividing through by any positive integer {m} we see that {f(nm) \geq f(n)}, so it suffices to verify this conjecture for primes {n=p}, i.e. to solve the Diophantine equation

\displaystyle  \frac{4}{p} = \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3} \ \ \ \ \ (1)

for each prime {p}. As the case {p=2} is easily solved, we may of course restrict attention to odd primes.

This conjecture remains open, although there is a reasonable amount of evidence towards its truth. For instance, it was shown by Vaughan that for any large {x}, the number of exceptions {f(n)=0} to the Erdös-Straus conjecture with {n<x} is at most {x \exp( - c \log^{2/3} x)} for some absolute constant {c>0}. The Erdös-Straus conjecture is also verified in several congruence classes of primes; for instance, from the identity

\displaystyle  \frac{4}{p} = \frac{1}{\frac{p+1}{4}} + \frac{1}{\frac{p+1}{2}p} + \frac{1}{\frac{p+1}{2}p}

we see that the conjecture holds when {p = 3 \hbox{ mod } 4}. Further identities of this type can be used to resolve the conjecture unless {p} is a quadratic residue mod {840}, which leaves only six residue classes in that modulus to check (namely, {1, 121, 169, 289, 361}, and {529}). However, there is a significant obstruction to eliminating the quadratic residue classes, as we will discuss later.
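
(For illustration, these six classes are precisely the squares of the units modulo {840}, as a one-line computation confirms:)

```python
from math import gcd

print(sorted({r * r % 840 for r in range(840) if gcd(r, 840) == 1}))
# [1, 121, 169, 289, 361, 529]
```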

By combining these reductions with extensive numerical calculations, the Erdös-Straus conjecture was verified for all {n \leq 10^{14}} by Swett.

One approach to solving Diophantine equations such as (1) is to use methods of analytic number theory, such as the circle method, to obtain asymptotics (or at least lower bounds) for the number {f(p)} of solutions (or some proxy for this number); if one obtains a lower bound which is nontrivial for every {p}, one has solved the problem. (One can alternatively view such methods as a variant of the probabilistic method; in this interpretation, one chooses the unknowns {n_1,n_2,n_3} according to some suitable probability distribution and then tries to show that the probability of solving the equation is positive.) Such techniques can be effective for instance for certain instances of the Waring problem. However, as a general rule, these methods only work when there are a lot of solutions, and specifically when the number {f(p)} of solutions grows at a polynomial rate with the parameter {p}.

The first main result of my paper is that the number of solutions {f(p)} is essentially logarithmic in nature, thus providing a serious obstruction to solution by analytic methods, as there is almost no margin of error available. More precisely, I show that for almost all primes {p} (i.e. in a subset of primes of relative density {1}), one has {f(p) \leq p^{o(1)}}, or more precisely that

\displaystyle  f(p) \leq \exp( O( \frac{\log p}{\log\log p} ) ).

Readers familiar with analytic number theory will recognise the right-hand side from the divisor bound, which indeed plays a role in the proof.

Actually, there are more precise estimates which suggest (though do not quite prove) that the mean value of {f(p)} is comparable to {\log^3 p}. To state these results properly, I need some more notation. It is not difficult to show that if {p} is an odd prime and {n_1,n_2,n_3} obey (1), then at least one, but not all, of the {n_1,n_2,n_3} must be a multiple of {p}. Thus we can split {f(p) = f_1(p) + f_2(p)}, where {f_i(p)} is the number of solutions to (1) where exactly {i} of the {n_1,n_2,n_3} are divisible by {p}. The sharpest estimates I can prove are then

Theorem 1 For all sufficiently large {x}, one has

\displaystyle  x \log^2 x \ll \sum_{p < x} f_1(p) \ll x \exp( O( \frac{\log x}{\log \log x} ) )

and

\displaystyle  x \log^2 x \ll \sum_{p < x} f_2(p) \ll x \log^2 x \log\log x.

Since the number of primes less than {x} is comparable to {x/\log x} by the prime number theorem, this shows that the mean value of {f_2(p)} in this range is between {\log^3 x} and {\log^3 x \log\log x}, and the mean value of {f_1(p)} is between {\log^3 x} and {\exp( O( \frac{\log x}{\log \log x}) )}, which gives the previous result as a corollary (thanks to a Markov inequality argument). A naive Poisson process heuristic then suggests that each prime {p} has a “probability” {1 - O( \exp( - c \log^3 p ) )} of having a solution to (1), which by the Borel-Cantelli lemma heuristically suggests that there are only finitely many {p} for which (1) fails. Of course, this is far from a rigorous proof (though the result of Vaughan mentioned earlier, which is based on the large sieve, can be viewed as a partial formalisation of the argument).

The first step in obtaining these results is an elementary description of {f_1(p), f_2(p)} in terms of some divisibility constraints. More precisely, we have

Lemma 2 Let {p} be an odd prime. Then {f_1(p)} is equal to three times the number of triples {(a,b,c)} of positive integers, with {a, b} coprime, {c} dividing {a+b}, and {4ab} dividing {cp+1}. Similarly, {f_2(p)} is equal to three times the number of triples {(a,b,c)} of positive integers, with {a, b} coprime, {c} dividing {a+b}, and {4ab} dividing {p+c}.

One direction of this lemma is very easy: if {c} divides {a+b} and {4ab} divides {cp+1} then the identity

\displaystyle  \frac{4}{p} = \frac{1}{\frac{pc+1}{4}p} + \frac{1}{\frac{pc+1}{4a} \frac{a+b}{c}} + \frac{1}{\frac{pc+1}{4b} \frac{a+b}{c}}

and its cyclic permutations give three contributions to {f_1(p)}, and similarly if {4ab} divides {p+c} instead then the identity

\displaystyle  \frac{4}{p} = \frac{1}{\frac{p+c}{4}} + \frac{1}{\frac{p+c}{4a} \frac{a+b}{c} p} + \frac{1}{\frac{p+c}{4b} \frac{a+b}{c} p}

and its cyclic permutations give three contributions to {f_2(p)}. Conversely, some elementary number theory can be used to show that all solutions contributing to either {f_1(p)} or {f_2(p)} are of these forms. (Similar criteria have been used in prior literature, for instance in the previously mentioned paper of Vaughan.)

This lemma, incidentally, provides a way to quickly generate a large number of congruences of primes {p} for which the Erdös-Straus conjecture can be verified. Indeed, from the above lemma we see that if {a, b} are coprime and {c} is any odd factor of {a+b} (and hence coprime to {4ab}), then the Erdös-Straus conjecture is true whenever {p=-c \hbox{ mod } 4ab} or {p = -c^{-1} \hbox{ mod } 4ab}. For instance,

  • Taking {(a,b,c)=(1,1,1)}, we see that the conjecture holds whenever {p=3 \hbox{ mod } 4} (leaving only those primes {p=1 \hbox{ mod } 4});
  • Taking {(a,b,c)=(1,2,3)} we see that the conjecture holds whenever {p=5 \hbox{ mod } 8} (leaving only those primes {p=1 \hbox{ mod } 8});
  • Taking {(a,b,c)=(3,4,7)}, we see that the conjecture holds whenever {p=5 \hbox{ mod } 12} (leaving only those primes {p=1 \hbox{ mod } 24});
  • Taking {(a,b,c)=(1,5,3)}, we see that the conjecture holds whenever {p=17, 13 \hbox{ mod } 20} (leaving only those primes {p=1,49 \hbox{ mod } 120});
  • Taking {(a,b,c)=(1,14,15)}, we see that the conjecture holds whenever {p= 41 \hbox{ mod } 56}; taking instead {(a,b,c)=(2,21,23)}, we see that the conjecture holds whenever {p = 73, 145 \hbox{ mod } 168}. (This leaves only the primes {p} that are equal to one of the six quadratic residues {1, 121, 169, 289, 361, 529 \hbox{ mod } 840}, an observation first made by Mordell.)
  • etc.

If we let {g_1(n)} (resp. {g_2(n)}) be the number of triples {(a,b,c)} with {a,b} coprime, {c} dividing {a+b}, and {n} congruent to {-c^{-1}} (resp. {-c}) mod {4ab}, the above lemma tells us that {f_i(p) = 3g_i(p)} for all odd primes {p}, so it suffices to show that {g_1(p), g_2(p)} are non-zero for all {p}. One might hope that there are enough congruence relations provided by the previous observation to obtain a covering system of congruences which would resolve the conjecture. Unfortunately, as was also observed by Mordell, an application of the quadratic reciprocity law shows that these congruence relations cannot eliminate quadratic residues, only quadratic non-residues. More precisely, one can show that if {a,b,c,p} are as above (restricting to the case {p=1 \hbox{ mod } 8}, which contains all the unsolved cases of the conjecture) and {q} is the largest odd factor of {4ab}, then the Jacobi symbols {\left(\frac{-c}{q}\right)} and {\left(\frac{-c^{-1}}{q}\right)} are always equal to {-1}. One consequence of this is that {g_1(n)=g_2(n)=0} whenever {n} is an odd perfect square. This makes it quite hard to resolve the conjecture for odd primes {p} which resemble a perfect square in that they lie in quadratic residue classes to small moduli (and, from Dirichlet’s theorem on primes in arithmetic progressions, one can find many primes of this type). The same argument also shows that for an odd prime {p}, there are no solutions to

\displaystyle  \frac{4}{p^2} = \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3}

in which one or two of the {n_1,n_2,n_3} are divisible by {p^2}, with the other denominators being coprime to {p}. (Of course, one can still get representations of {\frac{4}{p^2}} by starting with a representation of {\frac{4}{p}} and dividing by {p}, but such representations are not of the above form.) This shows that any attempt to establish the Erdös-Straus conjecture by manually constructing {n_1,n_2,n_3} as a function of {p} must involve a technique which breaks down if {p} is replaced by {p^2} (for instance, this rules out any approach based on using polynomial combinations of {p} and dividing into cases based on residue classes of {p} in small moduli). Part of the problem here is that we do not have good bounds that prevent a prime {p} from “spoofing” a perfect square to all small moduli (say, to all moduli less than a small power of {p}); this problem is closely related (via quadratic reciprocity) to Vinogradov’s conjecture on the least quadratic nonresidue, discussed in this previous blog post.
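
Both the Jacobi symbol claim and the non-existence claim for {\frac{4}{p^2}} are concrete enough to test by brute force. Here is an illustrative sketch (not from the paper); the Jacobi symbol is implemented directly, and the final loop checks the claim for the single prime {p=5}:

```python
from fractions import Fraction

def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, by the standard binary algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# Mordell's obstruction: for the triples above whose covered class is
# compatible with p = 1 mod 8, both Jacobi symbols come out to -1.
for a, b, c in [(1, 6, 7), (1, 5, 3), (1, 14, 15), (2, 21, 23)]:
    q = 4 * a * b
    while q % 2 == 0:
        q //= 2                  # q = largest odd factor of 4ab
    cinv = pow(c, -1, q)
    assert jacobi(-c % q, q) == -1 and jacobi(-cinv % q, q) == -1

def unit_fraction_solutions(q):
    """All n1 <= n2 <= n3 with 1/n1 + 1/n2 + 1/n3 = q, by brute force."""
    sols, n1 = [], int(1 / q) + 1
    while Fraction(3, n1) >= q:
        r1 = q - Fraction(1, n1)
        n2 = max(n1, int(1 / r1) + 1)
        while Fraction(2, n2) >= r1:
            r2 = r1 - Fraction(1, n2)
            if r2 > 0 and r2.numerator == 1:
                sols.append((n1, n2, r2.denominator))
            n2 += 1
        n1 += 1
    return sols

# no solution to 4/p^2 has exactly one or two denominators divisible by
# p^2 with the remaining denominators coprime to p (checked for p = 5)
p = 5
for sol in unit_fraction_solutions(Fraction(4, p * p)):
    k = sum(1 for n in sol if n % (p * p) == 0)
    clean = all(n % (p * p) == 0 or n % p != 0 for n in sol)
    assert not (k in (1, 2) and clean)
```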

It remains to control the sums {\sum_{p<x} g_i(p)} for {i=1,2}. The lower bounds {\sum_{p<x} g_i(p) \gg x \log^2 x} are relatively routine to establish, arising from counting the contribution of those {a,b,c} that are somewhat small (e.g. less than {x^{0.2}}), and using only standard asymptotics of arithmetic functions together with the Bombieri-Vinogradov inequality (to handle the restriction of the summation to primes). To obtain (nearly) matching upper bounds, one needs to prevent {a,b,c} from getting too large. In the case of {g_2(p)}, the fact that {c} divides {a+b} and {4ab} divides {p+c} soon leads one to the bound {ab \ll x}, at which point one can count primes in residue classes modulo {4ab} with reasonable efficiency via the Brun-Titchmarsh inequality. This inequality unfortunately introduces a factor of {\frac{4ab}{\phi(4ab)}}, which we were only able to handle by bounding it crudely by {O(\log \log x)}, leading to the double logarithmic loss in the {f_2} sum.
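
To get a sense of the size of this loss, recall that {n/\phi(n)} is largest when {n} is a product of the first few primes, in which case it grows like {e^\gamma \log\log n} by Mertens’ theorem; so {O(\log\log x)} is indeed the right order of magnitude for the worst case of {\frac{4ab}{\phi(4ab)}}. A quick numerical illustration (a sketch using sympy, not from the paper):

```python
import math
from sympy import primorial, totient

# n/phi(n) is maximised at primorials and grows like e^gamma * log log n
for k in range(2, 9):
    n = int(primorial(k))       # product of the first k primes
    print(n, float(n / totient(n)),
          math.exp(0.57722) * math.log(math.log(n)))
```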

For {g_1(p)}, these arguments do not give good bounds, because it is possible for {ab} to be much larger than {x}, and one can no longer easily count solutions to {p = -c^{-1} \hbox{ mod } 4ab}. However, an elementary change of variables resolves this problem. It turns out that if one lets {l,m} be the positive integers

\displaystyle  l := \frac{pc+1}{4ab}

\displaystyle  m := \frac{4la^2+1}{c}

then one can convert the triple {(a,b,c)} to a triple {(a,l,m)} obeying the constraints

\displaystyle  al \leq p

\displaystyle  m | 4la^2+1

\displaystyle  p = -m \hbox{ mod } 4al.

Conversely, every such triple {(a,l,m)} determines {(a,b,c)} by the formulae

\displaystyle  c = \frac{4la^2+1}{m}; b = c \frac{p+m}{4al} - a = \frac{4la^2 p + m + p}{4alm}. \ \ \ \ \ (2)
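
The change of variables is concrete enough to test directly; the following sketch (illustrative, not from the paper) round-trips a sample Type I datum through the maps {(a,b,c) \mapsto (a,l,m)} and back, checking the three constraints along the way:

```python
def forward(a, b, c, p):
    """(a,b,c) -> (a,l,m); assumes 4ab divides pc+1 and c divides a+b."""
    l, r = divmod(p * c + 1, 4 * a * b)
    assert r == 0
    m, r = divmod(4 * l * a * a + 1, c)
    assert r == 0
    return a, l, m

def backward(a, l, m, p):
    """(a,l,m) -> (a,b,c), via the formulae (2)."""
    c, r = divmod(4 * l * a * a + 1, m)
    assert r == 0
    b, r = divmod(4 * l * a * a * p + m + p, 4 * a * l * m)
    assert r == 0
    return a, b, c

# round trip on a sample datum: (a,b,c) = (1,6,7) with p = 17, which
# satisfies p = -c^{-1} mod 4ab
a, l, m = forward(1, 6, 7, 17)
assert a * l <= 17                      # al <= p
assert (4 * l * a * a + 1) % m == 0     # m | 4la^2 + 1
assert (17 + m) % (4 * a * l) == 0      # p = -m mod 4al
assert backward(a, l, m, 17) == (1, 6, 7)
```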

The point is that this transformation now places {p} in a residue class modulo {4al} rather than {4ab}, and with {al = O(x)}, this allows one to count the number of solutions reasonably efficiently. The main loss now comes from counting the number of divisors {m} of {4la^2+1}; in particular, one becomes interested in estimating sums such as

\displaystyle  \sum_{a \sim A} \sum_{l \sim L} d( 4la^2+1 )

where {A, L} are less than {x} and {d(n)} is the number of divisors of {n}. In principle, because {d(n)} behaves like {\log n} on the average (as can be seen from the Dirichlet hyperbola method), we expect this sum to be of order {AL \log x}, but I was unable to show this, instead using the far cruder divisor bound

\displaystyle  d(n) \ll \exp( O(\frac{\log n}{\log\log n}) )

to obtain an upper bound of {AL \exp( O(\frac{\log x}{\log\log x}) )}. Any improvement upon this bound would lead to a corresponding improvement in the upper bound for {\sum_{p<x} f_1(p)}. While improvement is possible in some ranges (particularly when {L} is large) using various bounds on Kloosterman-type exponential sums, I was not able to adequately control these sums when {L} was fairly small (e.g. of polylogarithmic size in {x}), as one could no longer extract much of an averaging effect from the {l} summation. Part of the difficulty is that one must somehow exploit the fact that {4la^2+1} is irreducible as a polynomial in {a} for any fixed {l}; without irreducibility, there would be too many divisors.
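
For what it is worth, the expected logarithmic behaviour is easy to observe numerically in small ranges; the following sketch (illustrative, not from the paper) compares the empirical average of {d(4la^2+1)} against the logarithm of the typical size of the argument:

```python
import math

def d(n):
    """Divisor-counting function, by trial division (fine for small n)."""
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 2 if i * i < n else 1
        i += 1
    return count

# average of d(4*l*a^2 + 1) over a ~ A, l ~ L, against log of the size;
# heuristically the ratio should stay bounded
A = L = 50
vals = [d(4 * l * a * a + 1) for a in range(A, 2 * A)
                             for l in range(L, 2 * L)]
size = 4 * (2 * L) * (2 * A) ** 2 + 1
print(sum(vals) / len(vals), math.log(size))
```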


I have blogged several times in the past about nonstandard analysis, which among other things is useful in allowing one to import tools from infinitary (or qualitative) mathematics in order to establish results in finitary (or quantitative) mathematics. One drawback, though, to using nonstandard analysis methods is that the bounds one obtains by such methods are usually ineffective: in particular, the conclusions of a nonstandard analysis argument may involve an unspecified constant {C} that is known to be finite but for which no explicit bound is obviously available. (In many cases, a bound can eventually be worked out by performing proof mining on the argument, and in particular by carefully unpacking the proofs of all the various results from infinitary mathematics that were used in the argument, as opposed to simply using them as “black boxes”, but this is a time-consuming task and the bounds that one eventually obtains tend to be quite poor (e.g. tower exponential or Ackermann type bounds are not uncommon).)

Because of this fact, it would seem that quantitative bounds, such as polynomial type bounds {X \leq C Y^C} that show that one quantity {X} is controlled in a polynomial fashion by another quantity {Y}, are not easily obtainable through the ineffective methods of nonstandard analysis. Actually, this is not the case; as I will demonstrate by an example below, nonstandard analysis can certainly yield polynomial type bounds. The catch is that the exponent {C} in such bounds will be ineffective; but nevertheless such bounds are still good enough for many applications.

Let us now illustrate this by reproving a lemma from this paper of Mei-Chu Chang (Lemma 2.14, to be precise), which was recently pointed out to me by Van Vu. Chang’s paper is focused primarily on the sum-product problem, but she uses a quantitative lemma from algebraic geometry which is of independent interest. To motivate the lemma, let us first establish a qualitative version:

Lemma 1 (Qualitative solvability) Let {P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}} be a finite number of polynomials in several variables with rational coefficients. If there is a complex solution {z = (z_1,\ldots,z_d) \in {\bf C}^d} to the simultaneous system of equations

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

then there also exists a solution {z \in \overline{{\bf Q}}^d} whose coefficients are algebraic numbers (i.e. they lie in the algebraic closure {\overline{{\bf Q}}} of the rationals).

Proof: Suppose there were no solution to {P_1(z)=\ldots=P_r(z)=0} over {\overline{{\bf Q}}}. Applying Hilbert’s nullstellensatz (which is available as {\overline{{\bf Q}}} is algebraically closed), we conclude the existence of some polynomials {Q_1,\ldots,Q_r} (with coefficients in {\overline{{\bf Q}}}) such that

\displaystyle  P_1 Q_1 + \ldots + P_r Q_r = 1

as polynomials. In particular, we have

\displaystyle  P_1(z) Q_1(z) + \ldots + P_r(z) Q_r(z) = 1

for all {z \in {\bf C}^d}. This shows that there is no solution to {P_1(z)=\ldots=P_r(z)=0} over {{\bf C}}, as required. \Box
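
Both halves of this argument can be seen concretely in a computer algebra system: for an unsolvable system, a Gröbner basis computation collapses to {\{1\}}, which is exactly the certificate {P_1 Q_1 + \ldots + P_r Q_r = 1} from the proof; for a solvable rational system, the solutions are algebraic, as the lemma predicts. A small sympy sketch (illustrative example systems, not tied to the text):

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# Unsolvable system: the Groebner basis collapses to [1], certifying that
# 1 lies in the ideal generated by x**2 and x - 1.
print(groebner([x**2, x - 1], x).exprs)   # -> [1]

# Solvable rational system: the solutions are algebraic numbers,
# here x = y = +/- sqrt(2), consistent with Lemma 1.
print(solve([x**2 - 2, y - x], [x, y]))
```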

Remark 1 Observe that in the above argument, one could replace {{\bf Q}} and {{\bf C}} by any other pair of fields, with the latter containing the algebraic closure of the former, and still obtain the same result.

The above lemma asserts that if a system of rational equations is solvable at all, then it is solvable with some algebraic solution. But it gives no bound on the complexity of that solution in terms of the complexity of the original equation. Chang’s lemma provides such a bound. If {H \geq 1} is an integer, let us say that an algebraic number has height at most {H} if the coefficients of its minimal polynomial (after clearing denominators) are integers of magnitude at most {H}.
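
For instance (a quick sympy illustration of the definition, not from Chang’s paper), {\sqrt{2}} has minimal polynomial {x^2-2} and hence height {2}, while {\frac{1}{2}+\sqrt{3}} has minimal polynomial {4x^2-4x-11} and hence height {11}:

```python
from sympy import sqrt, Rational, minimal_polynomial, symbols

x = symbols('x')

# minimal polynomial of sqrt(2): x**2 - 2, so height 2
print(minimal_polynomial(sqrt(2), x))

# minimal polynomial of 1/2 + sqrt(3): 4*x**2 - 4*x - 11, so height 11
print(minimal_polynomial(Rational(1, 2) + sqrt(3), x))
```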

Lemma 2 (Quantitative solvability) Let {P_1,\ldots,P_r: {\bf C}^d \rightarrow {\bf C}} be a finite number of polynomials of degree at most {D} with rational coefficients, each of height at most {H}. If there is a complex solution {z = (z_1,\ldots,z_d) \in {\bf C}^d} to the simultaneous system of equations

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

then there also exists a solution {z \in \overline{{\bf Q}}^d} whose coefficients are algebraic numbers of degree at most {C} and height at most {CH^C}, where {C = C_{D, d,r}} depends only on {D}, {d} and {r}.

Chang proves this lemma by essentially establishing a quantitative version of the nullstellensatz, via elementary elimination theory (somewhat similar, actually, to the approach I took to the nullstellensatz in my own blog post). She also notes that one could alternatively establish the result through the machinery of Gröbner bases. In each of these arguments, it was not possible to use Lemma 1 (or the closely related nullstellensatz) as a black box; one actually had to unpack one of the proofs of that lemma or nullstellensatz to get the polynomial bound. However, using nonstandard analysis, it is possible to get such polynomial bounds (albeit with an ineffective value of the constant {C}) directly from Lemma 1 (or more precisely, the generalisation in Remark 1) without having to inspect the proof, and instead simply using it as a black box, thus providing a “soft” proof of Lemma 2 that is an alternative to the “hard” proofs mentioned above.

Here’s how the proof works. Informally, the idea is that Lemma 2 should follow from Lemma 1 after replacing the field of rationals {{\bf Q}} with “the field of rationals of polynomially bounded height”. Unfortunately, the latter object does not really make sense as a field in standard analysis; nevertheless, it is a perfectly sensible object in nonstandard analysis, and this allows the above informal argument to be made rigorous.

We turn to the details. As is common whenever one uses nonstandard analysis to prove finitary results, we use a “compactness and contradiction” argument (or more precisely, an “ultralimit and contradiction” argument). Suppose for contradiction that Lemma 2 failed. Carefully negating the quantifiers (and using the axiom of choice), we conclude that there exists {D, d, r} such that for each natural number {n}, there is a positive integer {H^{(n)}} and a family {P_1^{(n)}, \ldots, P_r^{(n)}: {\bf C}^d \rightarrow {\bf C}} of polynomials of degree at most {D} with rational coefficients of height at most {H^{(n)}}, such that there exists at least one complex solution {z^{(n)} \in {\bf C}^d} to

\displaystyle  P_1^{(n)}(z^{(n)}) = \ldots = P_r^{(n)}(z^{(n)}) = 0, \ \ \ \ \ (1)

but such that there does not exist any such solution whose coefficients are algebraic numbers of degree at most {n} and height at most {n (H^{(n)})^n}.

Now we take ultralimits (see e.g. this previous blog post for a quick review of ultralimit analysis, which we will assume knowledge of in the argument that follows). Let {p \in \beta {\bf N} \backslash {\bf N}} be a non-principal ultrafilter. For each {i=1,\ldots,r}, the ultralimit

\displaystyle  P_i := \lim_{n \rightarrow p} P_i^{(n)}

of the (standard) polynomials {P_i^{(n)}} is a nonstandard polynomial {P_i: {}^* {\bf C}^d \rightarrow {}^* {\bf C}} of degree at most {D}, whose coefficients now lie in the nonstandard rationals {{}^* {\bf Q}}. Actually, due to the height restriction, we can say more. Let {H := \lim_{n \rightarrow p} H^{(n)} \in {}^* {\bf N}} be the ultralimit of the {H^{(n)}}; this is a nonstandard natural number (which will almost certainly be unbounded, but we will not need to use this). Let us say that a nonstandard integer {a} is of polynomial size if we have {|a| \leq C H^C} for some standard natural number {C}, and say that a nonstandard rational number {a/b} is of polynomial height if {a}, {b} are of polynomial size. Let {{\bf Q}_{poly(H)}} be the collection of all nonstandard rationals of polynomial height. (In the language of nonstandard analysis, {{\bf Q}_{poly(H)}} is an external set rather than an internal one, because it is not itself an ultraproduct of standard sets; but this will not be relevant for the argument that follows.) It is easy to see that {{\bf Q}_{poly(H)}} is a field, basically because the sum or product of two integers of polynomial size remains of polynomial size. By construction, it is clear that the coefficients of {P_i} are nonstandard rationals of polynomial height, and thus {P_1,\ldots,P_r} are defined over {{\bf Q}_{poly(H)}}.
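For instance, closure under addition can be checked by hand: if {a_1/b_1} and {a_2/b_2} are of polynomial height, with {|a_i|, |b_i| \leq C_i H^{C_i}} for standard {C_1, C_2}, then

\displaystyle  \frac{a_1}{b_1} + \frac{a_2}{b_2} = \frac{a_1 b_2 + a_2 b_1}{b_1 b_2}

has numerator and denominator of magnitude at most {2 C_1 C_2 H^{C_1+C_2}}, which is again of polynomial size; products behave similarly, and inversion simply interchanges numerator and denominator.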

Meanwhile, if we let {z := \lim_{n \rightarrow p} z^{(n)} \in {}^* {\bf C}^d} be the ultralimit of the solutions {z^{(n)}} in (1), we have

\displaystyle  P_1(z) = \ldots = P_r(z) = 0,

thus {P_1,\ldots,P_r} are solvable in {{}^* {\bf C}}. Applying Lemma 1 (or more precisely, the generalisation in Remark 1), we see that {P_1,\ldots,P_r} are also solvable in {\overline{{\bf Q}_{poly(H)}}}. (Note that as {{\bf C}} is algebraically closed, {{}^*{\bf C}} is also (by Los’s theorem), and so {{}^* {\bf C}} contains {\overline{{\bf Q}_{poly(H)}}}.) Thus, there exists {w \in \overline{{\bf Q}_{poly(H)}}^d} with

\displaystyle  P_1(w) = \ldots = P_r(w) = 0.

As {\overline{{\bf Q}_{poly(H)}}^d} lies in {{}^* {\bf C}^d}, we can write {w} as an ultralimit {w = \lim_{n \rightarrow p} w^{(n)}} of standard complex vectors {w^{(n)} \in {\bf C}^d}. By construction, the coefficients of {w} each obey a non-trivial polynomial equation of degree at most {C} whose coefficients are nonstandard integers of magnitude at most {C H^C}, for some standard natural number {C}. Undoing the ultralimit, we conclude that for {n} sufficiently close to {p}, the coefficients of {w^{(n)}} obey a non-trivial polynomial equation of degree at most {C} whose coefficients are standard integers of magnitude at most {C (H^{(n)})^C}. In particular, these coefficients have height at most {C (H^{(n)})^C}. Also, we have

\displaystyle  P_1^{(n)}(w^{(n)}) = \ldots = P_r^{(n)}(w^{(n)}) = 0.

But for {n} larger than {C}, this contradicts the construction of the {P_i^{(n)}}, and the claim follows. (Note that as {p} is non-principal, any neighbourhood of {p} in {{\bf N}} will contain arbitrarily large natural numbers.)

Remark 2 The same argument actually gives a slightly stronger version of Lemma 2, namely that the integer coefficients used to define the algebraic solution {z} can be taken to be polynomials in the coefficients of {P_1,\ldots,P_r}, with degree and coefficients bounded by {C_{D,d,r}}.

I recently finished the first draft of the last of my books based on my 2010 blog posts (and also my Google buzzes), entitled “Compactness and contradiction“.  The PDF of this draft is available here.  This is a somewhat assorted (and lightly edited) collection of posts (and buzzes), concentrating on the areas of analysis (both standard and nonstandard), logic, and group theory.  As always, comments and corrections are welcome.
