
One of the basic problems in analytic number theory is to estimate sums of the form

$\displaystyle \sum_{p<x} f(p)$

as ${x \rightarrow \infty}$, where ${p}$ ranges over primes and ${f}$ is some explicit function of interest (e.g. a linear phase function ${f(p) = e^{2\pi i \alpha p}}$ for some real number ${\alpha}$). This is essentially the same task as obtaining estimates on the sum

$\displaystyle \sum_{n<x} \Lambda(n) f(n)$

where ${\Lambda}$ is the von Mangoldt function. If ${f}$ is bounded, ${f(n)=O(1)}$, then from the prime number theorem one has the trivial bound

$\displaystyle \sum_{n<x} \Lambda(n) f(n) = O(x),$

but often (when ${f}$ is somehow “oscillatory” in nature) one is seeking the refinement

$\displaystyle \sum_{n<x} \Lambda(n) f(n) = o(x), \ \ \ \ \ (1)$

or equivalently

$\displaystyle \sum_{p<x} f(p) = o(\frac{x}{\log x}). \ \ \ \ \ (2)$

Thanks to identities such as

$\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log(\frac{n}{d}), \ \ \ \ \ (3)$

where ${\mu}$ is the Möbius function, refinements such as (1) are similar in spirit to estimates of the form

$\displaystyle \sum_{n<x} \mu(n) f(n) = o(x). \ \ \ \ \ (4)$

Unfortunately, the connection between (1) and (4) is not particularly tight; roughly speaking, one needs to improve the bounds in (4) (and variants thereof) by about two factors of ${\log x}$ before one can use identities such as (3) to recover (1). Still, one generally thinks of (1) and (4) as being “morally” equivalent, even if they are not formally equivalent.
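As a quick sanity check, identity (3) can be verified numerically. Here is a short self-contained Python sketch (the helper names are ad hoc, and the trial-division implementations are only meant for small ${n}$):

```python
import math

def mobius(n):
    # mu(n): 0 if a square divides n, otherwise (-1)^(number of prime factors)
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def von_mangoldt(n):
    # Lambda(n) = log p if n is a power of a prime p, and 0 otherwise
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

# verify Lambda(n) = sum_{d|n} mu(d) log(n/d), identity (3), for small n
for n in range(1, 300):
    rhs = sum(mobius(d) * math.log(n / d) for d in range(1, n + 1) if n % d == 0)
    assert abs(von_mangoldt(n) - rhs) < 1e-9
```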

When ${f}$ is oscillating in a sufficiently “irrational” way, then one standard way to proceed is the method of Type I and Type II sums, which uses truncated versions of divisor identities such as (3) to expand out either (1) or (4) into linear (Type I) or bilinear sums (Type II) with which one can exploit the oscillation of ${f}$. For instance, Vaughan’s identity lets one rewrite the sum in (1) as the sum of the Type I sum

$\displaystyle \sum_{d \leq U} \mu(d) (\sum_{V/d \leq r \leq x/d} (\log r) f(rd)),$

the Type I sum

$\displaystyle -\sum_{d \leq UV} a(d) \sum_{V/d \leq r \leq x/d} f(rd),$

the Type II sum

$\displaystyle -\sum_{V \leq d \leq x/U} \sum_{U < m \leq x/d} \Lambda(d) b(m) f(dm),$

and the error term ${\sum_{n \leq V} \Lambda(n) f(n)}$, whenever ${1 \leq U, V \leq x}$ are parameters, and ${a, b}$ are the sequences

$\displaystyle a(d) := \sum_{e \leq U, f \leq V: ef = d} \Lambda(f) \mu(e)$

and

$\displaystyle b(m) := \sum_{d|m: d \leq U} \mu(d).$

Similarly one can express (4) as the Type I sum

$\displaystyle -\sum_{d \leq UV} c(d) \sum_{UV/d \leq r \leq x/d} f(rd),$

the Type II sum

$\displaystyle - \sum_{V < d \leq x/U} \sum_{U < m \leq x/d} \mu(m) b(d) f(dm)$

and the error term ${\sum_{n \leq UV} \mu(n) f(n)}$, whenever ${1 \leq U,V \leq x}$ with ${UV \leq x}$, and ${c}$ is the sequence

$\displaystyle c(d) := \sum_{e \leq U, f \leq V: ef = d} \mu(e) \mu(f).$

After eliminating troublesome sequences such as ${a(), b(), c()}$ via Cauchy-Schwarz or the triangle inequality, one is then faced with the task of estimating Type I sums such as

$\displaystyle \sum_{r \leq y} f(rd)$

or Type II sums such as

$\displaystyle \sum_{r \leq y} f(rd) \overline{f(rd')}$

for various ${y, d, d' \geq 1}$. Here, the trivial bound is ${O(y)}$, but due to a number of logarithmic inefficiencies in the above method, one has to obtain bounds that are more like ${O( \frac{y}{\log^C y})}$ for some constant ${C}$ (e.g. ${C=5}$) in order to end up with an asymptotic such as (1) or (4).
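For the model linear phase ${f(n) = e^{2\pi i \alpha n}}$, a Type I sum is just a geometric series, and one has the classical bound ${|\sum_{r \leq y} e^{2\pi i \beta r}| \leq \min(y, \frac{1}{2\|\beta\|})}$, where ${\|\beta\|}$ denotes the distance from ${\beta}$ to the nearest integer. The following Python sketch (an ad hoc illustration, not tied to any particular choice of parameters above) compares the sum against this bound:

```python
import cmath, math

def type_one_sum(alpha, d, y):
    # sum_{r <= y} f(rd) for the linear phase f(n) = e^{2 pi i alpha n}
    return sum(cmath.exp(2j * math.pi * alpha * r * d) for r in range(1, y + 1))

def geometric_series_bound(alpha, d, y):
    # |sum_{r <= y} e^{2 pi i beta r}| <= min(y, 1/(2||beta||)) with beta = alpha*d
    # and ||.|| the distance to the nearest integer
    dist = (alpha * d) % 1.0
    dist = min(dist, 1.0 - dist)
    return float(y) if dist == 0 else min(float(y), 1.0 / (2.0 * dist))

alpha = math.sqrt(2)
for d in (1, 3, 7, 10):
    s = abs(type_one_sum(alpha, d, 2000))
    bound = geometric_series_bound(alpha, d, 2000)
    assert s <= bound + 1e-6  # genuine cancellation: far below the trivial bound y
```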

However, in a recent paper of Bourgain, Sarnak, and Ziegler, it was observed that as long as one is only seeking the Möbius orthogonality (4) rather than the von Mangoldt orthogonality (1), one can avoid losing any logarithmic factors, and rely purely on qualitative equidistribution properties of ${f}$. A special case of their orthogonality criterion (which actually dates back to an earlier paper of Katai, as was pointed out to me by Nikos Frantzikinakis) is as follows:

Proposition 1 (Orthogonality criterion) Let ${f: {\bf N} \rightarrow {\bf C}}$ be a bounded function such that

$\displaystyle \sum_{n \leq x} f(pn) \overline{f(qn)} = o(x) \ \ \ \ \ (5)$

for any distinct primes ${p, q}$ (where the decay rate of the error term ${o(x)}$ may depend on ${p}$ and ${q}$). Then

$\displaystyle \sum_{n \leq x} \mu(n) f(n) =o(x). \ \ \ \ \ (6)$

Actually, the Bourgain-Sarnak-Ziegler paper establishes a more quantitative version of this proposition, in which ${\mu}$ can be replaced by an arbitrary bounded multiplicative function, but we will content ourselves with the above weaker special case. (See also these notes of Harper, which use the Katai argument to give a slightly weaker quantitative bound in the same spirit.) This criterion can be viewed as a multiplicative variant of the classical van der Corput lemma, which in our notation asserts that ${\sum_{n \leq x} f(n) = o(x)}$ if one has ${\sum_{n \leq x} f(n+h) \overline{f(n)} = o(x)}$ for each fixed non-zero ${h}$.

As a sample application, Proposition 1 easily gives a proof of the asymptotic

$\displaystyle \sum_{n \leq x} \mu(n) e^{2\pi i \alpha n} = o(x)$

for any irrational ${\alpha}$. (For rational ${\alpha}$, this is a little trickier, as it is basically equivalent to the prime number theorem in arithmetic progressions.) The paper of Bourgain, Sarnak, and Ziegler also applies this criterion to nilsequences (obtaining a quick proof of a qualitative version of a result of Ben Green and myself, see these notes of Ziegler for details) and to horocycle flows (for which no Möbius orthogonality result was previously known).
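One can see the asymptotic numerically at modest heights; the sketch below (with an elementary Möbius sieve, written just for illustration) computes the normalised sum at ${x = 20000}$ for ${\alpha = \sqrt{2}}$:

```python
import cmath, math

def mobius_sieve(N):
    # mu(n) for all 1 <= n <= N via an elementary sieve
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for m in range(2 * p, N + 1, p):
                is_prime[m] = False
            for m in range(p, N + 1, p):
                mu[m] = -mu[m]
            for m in range(p * p, N + 1, p * p):
                mu[m] = 0
    return mu

N = 20000
alpha = math.sqrt(2)  # an irrational frequency
mu = mobius_sieve(N)
S = sum(mu[n] * cmath.exp(2j * math.pi * alpha * n) for n in range(1, N + 1))
# the normalised sum is already quite small at this modest height
assert abs(S) / N < 0.05
```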

Informally, the connection between (5) and (6) comes from the multiplicative nature of the Möbius function. If (6) failed, then ${\mu(n)}$ exhibits strong correlation with ${f(n)}$; by change of variables, we then expect ${\mu(pn)}$ to correlate with ${f(pn)}$ and ${\mu(qn)}$ to correlate with ${f(qn)}$, for “typical” ${p,q}$ at least. On the other hand, since ${\mu}$ is multiplicative, ${\mu(pn)}$ exhibits strong correlation with ${\mu(qn)}$. Putting all this together (and pretending correlation is transitive), this would give the claim (in the contrapositive). Of course, correlation is not quite transitive, but it turns out that one can use the Cauchy-Schwarz inequality as a substitute for transitivity of correlation in this case.

I will give a proof of Proposition 1 below the fold (which is not quite based on the argument in the above mentioned paper, but on a variant of that argument communicated to me by Tamar Ziegler, and also independently discovered by Adam Harper). The main idea is to exploit the following observation: if ${P}$ is a “large” but finite set of primes (in the sense that the sum ${A := \sum_{p \in P} \frac{1}{p}}$ is large), then for a typical large number ${n}$ (much larger than the elements of ${P}$), the number of primes in ${P}$ that divide ${n}$ is pretty close to ${A = \sum_{p \in P} \frac{1}{p}}$:

$\displaystyle \sum_{p \in P: p|n} 1 \approx A. \ \ \ \ \ (7)$

A more precise formalisation of this heuristic is provided by the Turan-Kubilius inequality, which is proven by a simple application of the second moment method.
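The heuristic (7) is easy to test empirically; in the following Python sketch (parameters chosen arbitrarily for illustration), ${P}$ consists of the primes up to ${1000}$, and the average of ${\sum_{p \in P: p|n} 1}$ over random large ${n}$ comes out close to ${A}$:

```python
import random

def primes_up_to(N):
    # simple sieve of Eratosthenes
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, N + 1, p):
                sieve[m] = False
    return [p for p in range(2, N + 1) if sieve[p]]

P = primes_up_to(1000)
A = sum(1.0 / p for p in P)  # A = sum_{p in P} 1/p, about 2.2 for this P

random.seed(0)
samples = [random.randrange(10**7, 10**8) for _ in range(2000)]
avg = sum(sum(1 for p in P if n % p == 0) for n in samples) / len(samples)
# a typical large n has about A prime factors from P, as in (7)
assert abs(avg - A) < 0.25
```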

In particular, one can sum (7) against ${\mu(n) f(n)}$ and obtain an approximation

$\displaystyle \sum_{n \leq x} \mu(n) f(n) \approx \frac{1}{A} \sum_{p \in P} \sum_{n \leq x: p|n} \mu(n) f(n)$

that approximates a sum of ${\mu(n) f(n)}$ by a bunch of sparser sums of ${\mu(n) f(n)}$. Since

$\displaystyle x = \frac{1}{A} \sum_{p \in P} \frac{x}{p},$

we see (heuristically, at least) that in order to establish (4), it would suffice to establish the sparser estimates

$\displaystyle \sum_{n \leq x: p|n} \mu(n) f(n) = o(\frac{x}{p})$

for all ${p \in P}$ (or at least for “most” ${p \in P}$).

Now we make the change of variables ${n = pm}$. As the Möbius function is multiplicative, we usually have ${\mu(n) = \mu(p) \mu(m) = - \mu(m)}$. (There is an exception when ${n}$ is divisible by ${p^2}$, but this will be a rare event and we will be able to ignore it.) So it should suffice to show that

$\displaystyle \sum_{m \leq x/p} \mu(m) f(pm) = o( x/p )$

for most ${p \in P}$. However, by the hypothesis (5), the sequences ${m \mapsto f(pm)}$ are asymptotically orthogonal as ${p}$ varies, and this claim will then follow from a Cauchy-Schwarz argument.

One of the most notorious problems in elementary mathematics that remains unsolved is the Collatz conjecture, concerning the function ${f_0: {\bf N} \rightarrow {\bf N}}$ defined by setting ${f_0(n) := 3n+1}$ when ${n}$ is odd, and ${f_0(n) := n/2}$ when ${n}$ is even. (Here, ${{\bf N}}$ is understood to be the positive natural numbers ${\{1,2,3,\ldots\}}$.)

Conjecture 1 (Collatz conjecture) For any given natural number ${n}$, the orbit ${n, f_0(n), f^2_0(n), f^3_0(n), \ldots}$ passes through ${1}$ (i.e. ${f^k_0(n)=1}$ for some ${k}$).

Open questions with this level of notoriety can lead to what Richard Lipton calls “mathematical diseases” (and what I termed an unhealthy amount of obsession on a single famous problem). (See also this xkcd comic regarding the Collatz conjecture.) As such, most practicing mathematicians tend to spend the majority of their time on more productive research areas that are only just beyond the range of current techniques. Nevertheless, it can still be diverting to spend a day or two each year on these sorts of questions, before returning to other matters; so I recently had a go at the problem. Needless to say, I didn’t solve the problem, but I have a better appreciation of why the conjecture is (a) plausible, and (b) unlikely to be proven by current technology, and I thought I would share what I had found out here on this blog.

Let me begin with some very well known facts. If ${n}$ is odd, then ${f_0(n) = 3n+1}$ is even, and so ${f_0^2(n) = \frac{3n+1}{2}}$. Because of this, one could replace ${f_0}$ by the function ${f_1: {\bf N} \rightarrow {\bf N}}$, defined by ${f_1(n) = \frac{3n+1}{2}}$ when ${n}$ is odd, and ${f_1(n)=n/2}$ when ${n}$ is even, and obtain an equivalent conjecture. Now we see that if one chooses ${n}$ “at random”, in the sense that it is odd with probability ${1/2}$ and even with probability ${1/2}$, then ${f_1}$ increases ${n}$ by a factor of roughly ${3/2}$ half the time, and decreases it by a factor of ${1/2}$ half the time. Furthermore, if ${n}$ is uniformly distributed modulo ${4}$, one easily verifies that ${f_1(n)}$ is uniformly distributed modulo ${2}$, and so ${f_1^2(n)}$ should be roughly ${3/2}$ times as large as ${f_1(n)}$ half the time, and roughly ${1/2}$ times as large as ${f_1(n)}$ the other half of the time. Continuing this at a heuristic level, we expect generically that ${f_1^{k+1}(n) \approx \frac{3}{2} f_1^k(n)}$ half the time, and ${f_1^{k+1}(n) \approx \frac{1}{2} f_1^k(n)}$ the other half of the time. The logarithm ${\log f_1^k(n)}$ of this orbit can then be modeled heuristically by a random walk with steps ${\log \frac{3}{2}}$ and ${\log \frac{1}{2}}$ occurring with equal probability. The expectation

$\displaystyle \frac{1}{2} \log \frac{3}{2} + \frac{1}{2} \log \frac{1}{2} = \frac{1}{2} \log \frac{3}{4}$

is negative, and so (by the classic gambler’s ruin) we expect the orbit to decrease over the long term. This can be viewed as heuristic justification of the Collatz conjecture, at least in the “average case” scenario in which ${n}$ is chosen uniformly at random (e.g. in some large interval ${\{1,\ldots,N\}}$). (It also suggests that if one modifies the problem, e.g. by replacing ${3n+1}$ with ${5n+1}$, then one can obtain orbits that tend to increase over time, and indeed numerically for this variant one sees orbits that appear to escape to infinity.) Unfortunately, one can only rigorously keep the orbit uniformly distributed modulo ${2}$ for time about ${O(\log N)}$ or so; after that, the system is too complicated for naive methods to control at anything other than a heuristic level.
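This random walk heuristic is easy to simulate. The sketch below (sample sizes are arbitrary) iterates ${f_1}$ from random thirteen-digit starting points and compares the empirical decay rate of ${\log f_1^k(n)}$ with the predicted drift ${\frac{1}{2} \log \frac{3}{4}}$:

```python
import math, random

def f1(n):
    # the map f_1: (3n+1)/2 for odd n, n/2 for even n
    return (3 * n + 1) // 2 if n % 2 == 1 else n // 2

# expected step of the heuristic random walk on log f_1^k(n)
drift = 0.5 * math.log(3 / 2) + 0.5 * math.log(1 / 2)  # = (1/2) log(3/4) < 0

random.seed(1)
trials, steps, total = 300, 50, 0.0
for _ in range(trials):
    n = random.randrange(10**12, 10**13)  # large enough to stay well above 1
    start = n
    for _ in range(steps):
        n = f1(n)
    total += (math.log(n) - math.log(start)) / steps
observed = total / trials
assert drift < 0
assert abs(observed - drift) < 0.03  # empirical decay rate matches the heuristic
```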

Remark 1 One can obtain a rigorous analogue of the above arguments by extending ${f_1}$ from the integers ${{\bf Z}}$ to the ${2}$-adics ${{\bf Z}_2}$. This compact abelian group comes with a Haar probability measure, and one can verify that this measure is invariant with respect to ${f_1}$; with a bit more effort one can verify that it is ergodic. This suggests the introduction of ergodic theory methods. For instance, using the pointwise ergodic theorem, we see that if ${n}$ is a random ${2}$-adic integer, then almost surely the orbit ${n, f_1(n), f_1^2(n), \ldots}$ will be even half the time and odd half the time asymptotically, thus supporting the above heuristics. Unfortunately, this does not directly tell us much about the dynamics on ${{\bf Z}}$, as this is a measure zero subset of ${{\bf Z}_2}$. More generally, unless a dynamical system is somehow “polynomial”, “nilpotent”, or “unipotent” in nature, the current state of ergodic theory is usually only able to say something meaningful about generic orbits, but not about all orbits. For instance, the very simple system ${x \rightarrow 10x}$ on the unit circle ${{\bf R}/{\bf Z}}$ is well understood from ergodic theory (in particular, almost all orbits will be uniformly distributed), but the orbit of a specific point, e.g. ${\pi\hbox{ mod } 1}$, is still nearly impossible to understand (this particular problem being equivalent to the notorious unsolved question of whether the digits of ${\pi}$ are uniformly distributed).

The above heuristic argument only suggests decreasing orbits for almost all ${n}$ (though even this remains unproven, the state of the art is that the number of ${n}$ in ${\{1,\ldots,N\}}$ that eventually go to ${1}$ is ${\gg N^{0.84}}$, a result of Krasikov and Lagarias). It leaves open the possibility of some very rare exceptional ${n}$ for which the orbit goes to infinity, or gets trapped in a periodic loop. Since the only loop that ${1}$ lies in is ${1,4,2}$ (for ${f_0}$) or ${1,2}$ (for ${f_1}$), we thus may isolate a weaker consequence of the Collatz conjecture:

Conjecture 2 (Weak Collatz conjecture) Suppose that ${n}$ is a natural number such that ${f^k_0(n)=n}$ for some ${k \geq 1}$. Then ${n}$ is equal to ${1}$, ${2}$, or ${4}$.

Of course, we may replace ${f_0}$ with ${f_1}$ (and delete “${4}$“) and obtain an equivalent conjecture.

This weaker version of the Collatz conjecture is also unproven. However, it was observed by Bohm and Sontacchi that this weak conjecture is equivalent to a divisibility problem involving powers of ${2}$ and ${3}$:

Conjecture 3 (Reformulated weak Collatz conjecture) There does not exist ${k \geq 1}$ and integers

$\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}$

such that ${2^{a_{k+1}}-3^k}$ is a positive integer that is a proper divisor of

$\displaystyle 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k},$

i.e.

$\displaystyle (2^{a_{k+1}} - 3^k) n = 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k} \ \ \ \ \ (1)$

for some natural number ${n > 1}$.

Proposition 4 Conjecture 2 and Conjecture 3 are equivalent.

Proof: To see this, it is convenient to reformulate Conjecture 2 slightly. Define an equivalence relation ${\sim}$ on ${{\bf N}}$ by declaring ${a \sim b}$ if ${a/b = 2^m}$ for some integer ${m}$, thus giving rise to the quotient space ${{\bf N}/\sim}$ of equivalence classes ${[n]}$ (which can be placed, if one wishes, in one-to-one correspondence with the odd natural numbers). We can then define a function ${f_2: {\bf N}/\sim \rightarrow {\bf N}/\sim}$ by declaring

$\displaystyle f_2( [n] ) := [3n + 2^a] \ \ \ \ \ (2)$

for any ${n \in {\bf N}}$, where ${2^a}$ is the largest power of ${2}$ that divides ${n}$. It is easy to see that ${f_2}$ is well-defined (it is essentially the Syracuse function, after identifying ${{\bf N}/\sim}$ with the odd natural numbers), and that periodic orbits of ${f_2}$ correspond to periodic orbits of ${f_1}$ or ${f_0}$. Thus, Conjecture 2 is equivalent to the conjecture that ${[1]}$ is the only periodic orbit of ${f_2}$.

Now suppose that Conjecture 2 failed, thus there exists ${[n] \neq [1]}$ such that ${f_2^k([n])=[n]}$ for some ${k \geq 1}$. Without loss of generality we may take ${n}$ to be odd, then ${n>1}$. It is easy to see that ${[1]}$ is the only fixed point of ${f_2}$, and so ${k>1}$. An easy induction using (2) shows that

$\displaystyle f_2^k([n]) = [3^k n + 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k}]$

where, for each ${1 \leq i \leq k}$, ${2^{a_i}}$ is the largest power of ${2}$ that divides

$\displaystyle n_i := 3^{i-1} n + 3^{i-2} 2^{a_1} + \ldots + 2^{a_{i-1}}. \ \ \ \ \ (3)$

In particular, as ${n_1 = n}$ is odd, ${a_1=0}$. Using the recursion

$\displaystyle n_{i+1} = 3n_i + 2^{a_i}, \ \ \ \ \ (4)$

we see from induction that ${2^{a_i+1}}$ divides ${n_{i+1}}$, and thus ${a_{i+1}>a_i}$:

$\displaystyle 0 = a_1 < a_2 < \ldots < a_k.$

Since ${f_2^k([n]) = [n]}$, we have

$\displaystyle 2^{a_{k+1}} n = 3^k n + 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k} = 3 n_k + 2^{a_k}$

for some integer ${a_{k+1}}$. Since ${3 n_k + 2^{a_k}}$ is divisible by ${2^{a_k+1}}$, and ${n}$ is odd, we conclude ${a_{k+1} > a_k}$; if we rearrange the above equation as (1), then we obtain a counterexample to Conjecture 3.

Conversely, suppose that Conjecture 3 failed. Then we have ${k \geq 1}$, integers

$\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}$

and a natural number ${n > 1}$ such that (1) holds. As ${a_1=0}$, we see that the right-hand side of (1) is odd, so ${n}$ is odd also. If we then introduce the natural numbers ${n_i}$ by the formula (3), then an easy induction using (4) shows that

$\displaystyle (2^{a_{k+1}} - 3^k) n_i = 3^{k-1} 2^{a_i} + 3^{k-2} 2^{a_{i+1}} + \ldots + 2^{a_{i+k-1}} \ \ \ \ \ (5)$

with the periodic convention ${a_{k+j} := a_j + a_{k+1}}$ for ${j>1}$. As the ${a_i}$ are increasing in ${i}$ (even for ${i \geq k+1}$), we see that ${2^{a_i}}$ is the largest power of ${2}$ that divides the right-hand side of (5); as ${2^{a_{k+1}}-3^k}$ is odd, we conclude that ${2^{a_i}}$ is also the largest power of ${2}$ that divides ${n_i}$. We conclude that

$\displaystyle f_2([n_i]) = [3n_i + 2^{a_i}] = [n_{i+1}]$

and thus ${[n]}$ is a periodic orbit of ${f_2}$. Since ${n}$ is an odd number larger than ${1}$, this contradicts Conjecture 2. $\Box$
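In concrete terms, ${f_2}$ acts on odd representatives as the Syracuse map ${n \mapsto}$ (odd part of ${3n+1}$). The following quick Python sketch checks that ${[1]}$ is a fixed point, and that no non-trivial periodic orbit of ${f_2}$ shows up among small odd starting values (a tiny brute-force check, of course, not a proof):

```python
def syracuse(n):
    # f_2 on odd representatives: send odd n to the odd part of 3n+1
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

assert syracuse(1) == 1  # [1] is the unique fixed point used in the proof

# every odd n below 10^5 reaches 1 without meeting a non-trivial cycle
for n in range(1, 10**5, 2):
    m, seen = n, set()
    while m != 1:
        assert m not in seen  # a repeat here would be a non-trivial periodic orbit
        seen.add(m)
        m = syracuse(m)
```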

Call a counterexample a tuple ${(k,a_1,\ldots,a_{k+1})}$ that contradicts Conjecture 3, i.e. an integer ${k \geq 1}$ and an increasing set of integers

$\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}$

such that (1) holds for some ${n > 1}$. We record a simple bound on such counterexamples, due to Terras and to Garner:

Lemma 5 (Exponent bounds) Let ${N \geq 1}$, and suppose that the Collatz conjecture is true for all ${n < N}$. Let ${(k,a_1,\ldots,a_{k+1})}$ be a counterexample. Then

$\displaystyle \frac{\log 3}{\log 2} k < a_{k+1} < \frac{\log(3+\frac{1}{N})}{\log 2} k.$

Proof: The first bound is immediate from the positivity of ${2^{a_{k+1}}-3^k}$. To prove the second bound, observe from the proof of Proposition 4 that the counterexample ${(k,a_1,\ldots,a_{k+1})}$ will generate a counterexample to Conjecture 2, i.e. a non-trivial periodic orbit ${n, f_0(n), \ldots, f_0^K(n) = n}$. As the conjecture is true for all ${n < N}$, all terms in this orbit must be at least ${N}$. An inspection of the proof of Proposition 4 reveals that this orbit consists of ${k}$ steps of the form ${x \mapsto 3x+1}$, and ${a_{k+1}}$ steps of the form ${x \mapsto x/2}$. As all terms are at least ${N}$, the former steps can increase magnitude by a multiplicative factor of at most ${3+\frac{1}{N}}$. As the orbit returns to where it started, we conclude that

$\displaystyle 1 \leq (3+\frac{1}{N})^k (\frac{1}{2})^{a_{k+1}}$

whence the claim. $\Box$

The Collatz conjecture has already been verified for many values of ${n}$ (up to at least ${N = 5 \times 10^{18}}$, according to this web site). Inserting this into the above lemma, one can get lower bounds on ${k}$. For instance, by methods such as this, it is known that any non-trivial periodic orbit has length at least ${105{,}000}$, as shown in Garner’s paper (and this bound, which uses the much smaller value ${N = 2 \times 10^9}$ that was available in 1981, can surely be improved using the most recent computational bounds).
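One can also search exhaustively for counterexample tuples with very small ${k}$ and ${a_{k+1}}$; the following brute-force Python sketch (ranges arbitrary, and of course well inside the numerically verified regime) unsurprisingly finds none:

```python
from itertools import combinations

def find_counterexamples(k_max, a_max):
    # exhaustive search for tuples (k, 0 = a_1 < ... < a_{k+1} <= a_max) with
    # (2^{a_{k+1}} - 3^k) n = 3^{k-1} 2^{a_1} + ... + 2^{a_k} for some n > 1
    found = []
    for k in range(1, k_max + 1):
        for a_top in range(1, a_max + 1):
            q = 2**a_top - 3**k
            if q <= 0:
                continue
            for middle in combinations(range(1, a_top), k - 1):
                a = (0,) + middle
                s = sum(3**(k - 1 - i) * 2**a[i] for i in range(k))
                if s % q == 0 and s // q > 1:
                    found.append((k, a + (a_top,)))
    return found

# no counterexample with k <= 6 and a_{k+1} <= 18, consistent with Conjecture 3
assert find_counterexamples(6, 18) == []
```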

Now we can perform a heuristic count on the number of counterexamples. If we fix ${k}$ and ${a := a_{k+1}}$, then ${2^a > 3^k}$, and from basic combinatorics we see that there are ${\binom{a-1}{k-1}}$ different ways to choose the remaining integers

$\displaystyle 0 = a_1 < a_2 < \ldots < a_{k+1}$

to form a potential counterexample ${(k,a_1,\ldots,a_{k+1})}$. As a crude heuristic, one expects that for a “random” such choice of integers, the expression (1) has a probability ${1/q}$ of holding for some integer ${n}$, where ${q := 2^a - 3^k}$. (Note that ${q}$ is not divisible by ${2}$ or ${3}$, and so one does not expect the special structure of the right-hand side of (1) with respect to those moduli to be relevant. There will be some choices of ${a_1,\ldots,a_k}$ where the right-hand side in (1) is too small to be divisible by ${q}$, but using the estimates in Lemma 5, one expects this to occur very infrequently.) Thus, the total expected number of solutions for this choice of ${a, k}$ is

$\displaystyle \frac{1}{q} \binom{a-1}{k-1}.$

The heuristic number of solutions overall is then expected to be

$\displaystyle \sum_{a,k} \frac{1}{q} \binom{a-1}{k-1}, \ \ \ \ \ (6)$

where, in view of Lemma 5, one should restrict the double summation to the heuristic regime ${a \approx \frac{\log 3}{\log 2} k}$, with the approximation here accurate to many decimal places.

We need a lower bound on ${q}$. Here, we will use Baker’s theorem (as discussed in this previous post), which among other things gives the lower bound

$\displaystyle q = 2^a - 3^k \gg 2^a / a^C \ \ \ \ \ (7)$

for some absolute constant ${C}$. Meanwhile, Stirling’s formula (as discussed in this previous post) combined with the approximation ${k \approx \frac{\log 2}{\log 3} a}$ gives

$\displaystyle \binom{a-1}{k-1} \approx \exp(h(\frac{\log 2}{\log 3}))^a$

where ${h}$ is the entropy function

$\displaystyle h(x) := - x \log x - (1-x) \log (1-x).$

A brief computation shows that

$\displaystyle \exp(h(\frac{\log 2}{\log 3})) \approx 1.9318 \ldots$

and so (ignoring all subexponential terms)

$\displaystyle \frac{1}{q} \binom{a-1}{k-1} \approx (0.9659\ldots)^a$

which makes the series (6) convergent. (Actually, one does not need the full strength of Lemma 5 here; anything that kept ${k}$ well away from ${a/2}$ would suffice. In particular, one does not need an enormous value of ${N}$; even ${N=5}$ (say) would be more than sufficient to obtain the heuristic that there are finitely many counterexamples.) Heuristically applying the Borel-Cantelli lemma, we thus expect that there are only a finite number of counterexamples to the weak Collatz conjecture (and inserting a bound such as ${k \geq 105{,}000}$, one in fact expects it to be extremely likely that there are no counterexamples at all).
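The two numerical constants above are quick to confirm; the following sketch evaluates the entropy function at ${\frac{\log 2}{\log 3}}$ and checks that the resulting growth rate, divided by the ${2^a}$ coming from ${q}$, is indeed the ${0.9659\ldots < 1}$ that makes (6) converge:

```python
import math

x = math.log(2) / math.log(3)  # the limiting ratio k/a forced by Lemma 5

def entropy(t):
    # h(t) = -t log t - (1-t) log(1-t)
    return -t * math.log(t) - (1 - t) * math.log(1 - t)

growth = math.exp(entropy(x))  # binomial growth rate: binom(a-1, k-1) ~ growth^a
assert abs(growth - 1.9318) < 1e-3

# dividing by q ~ 2^a (up to polynomial factors, by Baker's bound (7)) gives
ratio = growth / 2
assert abs(ratio - 0.9659) < 1e-3
assert ratio < 1  # so the heuristic series (6) converges geometrically
```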

This, of course, is far short of any rigorous proof of Conjecture 2. In order to make rigorous progress on this conjecture, it seems that one would need to somehow exploit the structural properties of numbers of the form

$\displaystyle 3^{k-1} 2^{a_1} + 3^{k-2} 2^{a_2} + \ldots + 2^{a_k}. \ \ \ \ \ (8)$

In some very special cases, this can be done. For instance, suppose that one had ${a_{i+1}=a_i+1}$ with at most one exception (this is essentially what is called a ${1}$-cycle by Steiner). Then (8) simplifies via the geometric series formula to a combination of just a bounded number of powers of ${2}$ and ${3}$, rather than an unbounded number. In that case, one can start using tools from transcendence theory such as Baker’s theorem to obtain good results; for instance, in the above-referenced paper of Steiner, it was shown that ${1}$-cycles cannot actually occur, and similar methods have been used to show that ${m}$-cycles (in which there are at most ${m}$ exceptions to ${a_{i+1}=a_i+1}$) do not occur for any ${m \leq 63}$, as was shown by Simons and de Weger. However, for general increasing tuples of integers ${a_1,\ldots,a_k}$, there is no such representation by bounded numbers of powers, and it does not seem that methods from transcendence theory will be sufficient to control the expressions (8) to the extent that one can understand their divisibility properties by quantities such as ${2^a-3^k}$.

Amusingly, there is a slight connection to Littlewood-Offord theory in additive combinatorics – the study of the ${2^n}$ random sums

$\displaystyle \pm v_1 \pm v_2 \pm \ldots \pm v_n$

generated by some elements ${v_1,\ldots,v_n}$ of an additive group ${G}$, or equivalently, the vertices of an ${n}$-dimensional parallelepiped inside ${G}$. Here, the relevant group is ${{\bf Z}/q{\bf Z}}$. The point is that if one fixes ${k}$ and ${a_{k+1}}$ (and hence ${q}$), and lets ${a_1,\ldots,a_k}$ vary inside the simplex

$\displaystyle \Delta := \{ (a_1,\ldots,a_k) \in {\bf N}^k: 0 = a_1 < \ldots < a_k < a_{k+1}\}$

then the set ${S}$ of all sums of the form (8) (viewed as an element of ${{\bf Z}/q{\bf Z}}$) contains many large parallelepipeds. (Note, incidentally, that once one fixes ${k}$, all the sums of the form (8) are distinct: given (8) and ${k}$, one can read off ${2^{a_1}}$ as the largest power of ${2}$ that divides (8), and then subtracting off ${3^{k-1} 2^{a_1}}$ one can then read off ${2^{a_2}}$, and so forth.) This is because the simplex ${\Delta}$ contains many large cubes. Indeed, if one picks a typical element ${(a_1,\ldots,a_k)}$ of ${\Delta}$, then one expects (thanks to Lemma 5) that there will be ${\gg k}$ indices ${1 \leq i_1 < \ldots < i_m \leq k}$ such that ${a_{i_j+1} > a_{i_j}+1}$ for ${j=1,\ldots,m}$, which allows one to adjust each of the ${a_{i_j}}$ independently by ${1}$ if desired and still remain inside ${\Delta}$. This gives a cube in ${\Delta}$ of dimension ${\gg k}$, which then induces a parallelepiped of the same dimension in ${S}$. A short computation shows that the generators of this parallelepiped consist of products of a power of ${2}$ and a power of ${3}$, and in particular will be coprime to ${q}$.

If the weak Collatz conjecture is true, then the set ${S}$ must avoid the residue class ${0}$ in ${{\bf Z}/q{\bf Z}}$. Let us suppose temporarily that we did not know about Baker’s theorem (and the associated bound (7)), so that ${q}$ could potentially be quite small. Then we would have a large parallelepiped inside a small cyclic group ${{\bf Z}/q{\bf Z}}$ that did not cover all of ${{\bf Z}/q{\bf Z}}$, which would not be possible for ${q}$ small enough. Indeed, an easy induction shows that a ${d}$-dimensional parallelepiped in ${{\bf Z}/q{\bf Z}}$, with all generators coprime to ${q}$, has cardinality at least ${\min(q, d+1)}$. This argument already shows the lower bound ${q \gg k}$. In other words, we have

Proposition 6 Suppose the weak Collatz conjecture is true. Then for any natural numbers ${a, k}$ with ${2^a > 3^k}$, one has ${2^a-3^k \gg k}$.

This bound is very weak when compared against the unconditional bound (7). However, I know of no way to get a nontrivial separation property between powers of ${2}$ and powers of ${3}$ other than via transcendence theory methods. Thus, this result strongly suggests that any proof of the Collatz conjecture must either use existing results in transcendence theory, or else must contribute a new method to give non-trivial results in transcendence theory. (This already rules out a lot of possible approaches to solve the Collatz conjecture.)

By using more sophisticated tools in additive combinatorics, one can improve the above proposition (though it is still well short of the transcendence theory bound (7)):

Proposition 7 Suppose the weak Collatz conjecture is true. Then for any natural numbers ${a, k}$ with ${2^a > 3^k}$, one has ${2^a-3^k \gg (1+\epsilon)^k}$ for some absolute constant ${\epsilon > 0}$.

Proof: (Informal sketch only) Suppose not; then we can find ${a, k}$ with ${q := 2^a-3^k}$ of size ${(1+o(1))^k = \exp(o(k))}$. We form the set ${S}$ as before, which contains parallelepipeds in ${{\bf Z}/q{\bf Z}}$ of large dimension ${d \gg k}$ that avoid ${0}$. We can count the number of times ${0}$ occurs in one of these parallelepipeds by a standard Fourier-analytic computation involving Riesz products (see Chapter 7 of my book with Van Vu, or this recent preprint of Maples). Using this Fourier representation, the fact that this parallelepiped avoids ${0}$ (and the fact that ${q = \exp(o(k)) = \exp(o(d))}$) forces the generators ${v_1,\ldots,v_d}$ to be concentrated in a Bohr set, in that one can find a non-zero frequency ${\xi \in {\bf Z}/q{\bf Z}}$ such that ${(1-o(1))d}$ of the ${d}$ generators lie in the set ${\{ v: \xi v = o(q) \hbox{ mod } q \}}$. However, one can choose the generators to essentially have the structure of a (generalised) geometric progression (up to scaling, it resembles something like ${2^i 3^{\lfloor \alpha i \rfloor}}$ for ${i}$ ranging over a generalised arithmetic progression, and ${\alpha}$ a fixed irrational), and one can show that such progressions cannot be concentrated in Bohr sets (this is similar in spirit to the exponential sum estimates of Bourgain on approximate multiplicative subgroups of ${{\bf Z}/q{\bf Z}}$, though one can use more elementary methods here due to the very strong nature of the Bohr set concentration, being of the “${99\%}$ concentration” variety rather than the “${1\%}$ concentration”). This furnishes the required contradiction. $\Box$

Thus we see that any proposed proof of the Collatz conjecture must either use transcendence theory, or introduce new techniques that are powerful enough to create exponential separation between powers of ${2}$ and powers of ${3}$.

Unfortunately, once one uses the transcendence theory bound (7), the size ${q}$ of the cyclic group ${{\bf Z}/q{\bf Z}}$ becomes larger than the volume of any cube in ${S}$, and Littlewood-Offord techniques are no longer of much use (they can be used to show that ${S}$ is highly equidistributed in ${{\bf Z}/q{\bf Z}}$, but this does not directly give any way to prevent ${S}$ from containing ${0}$).

One possible toy model problem for the (weak) Collatz conjecture is a conjecture of Erdős asserting that for ${n>8}$, the base ${3}$ representation of ${2^n}$ contains at least one ${2}$. (See this paper of Lagarias for some work on this conjecture and on related problems.) To put it another way, the conjecture asserts that there are no integer solutions to

$\displaystyle 2^n = 3^{a_1} + 3^{a_2} + \ldots + 3^{a_k}$

with ${n > 8}$ and ${0 \leq a_1 < \ldots < a_k}$. (When ${n=8}$, of course, one has ${2^8 = 3^0 + 3^1 + 3^2 + 3^5}$.) In this form we see a resemblance to Conjecture 3, but it looks like a simpler problem to attack (though one which is still a fair distance beyond what one can do with current technology). Note that one has a similar heuristic support for this conjecture as one does for Conjecture 3; a number of magnitude ${2^n}$ has about ${n \frac{\log 2}{\log 3}}$ base ${3}$ digits, so the heuristic probability that none of these digits are equal to ${2}$ is ${3^{-n\frac{\log 2}{\log 3}} = 2^{-n}}$, which is absolutely summable.
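This conjecture of Erdős is trivial to test numerically for small ${n}$ (note that for odd ${n}$ one has ${2^n \equiv 2 \bmod 3}$, so the last base ${3}$ digit is already a ${2}$; the content is in the even exponents). A quick Python sketch:

```python
def has_base3_digit_two(n):
    # does the base 3 representation of n contain the digit 2?
    while n:
        if n % 3 == 2:
            return True
        n //= 3
    return False

assert not has_base3_digit_two(2**8)  # 2^8 = 3^0 + 3^1 + 3^2 + 3^5
# Erdos's conjecture predicts a digit 2 for every larger power; true up to n = 5000
assert all(has_base3_digit_two(2**n) for n in range(9, 5001))
```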

I’ve been focusing my blog posts recently on the mathematics around Hilbert’s fifth problem (is every locally Euclidean group a Lie group?), but today, I will be discussing another of Hilbert’s problems, namely Hilbert’s seventh problem, on the transcendence of powers ${a^b}$ of two algebraic numbers ${a,b}$. (I am not randomly going through Hilbert’s list, by the way; I hope to explain my interest in the seventh problem in a later post.) This problem was famously solved by Gelfond and Schneider in the 1930s:

Theorem 1 (Gelfond-Schneider theorem) Let ${a, b}$ be algebraic numbers, with ${a \neq 0,1}$ and ${b}$ irrational. Then (any of the values of the possibly multi-valued expression) ${a^b}$ is transcendental.

For sake of simplifying the discussion, let us focus on just one specific consequence of this theorem:

Corollary 2 ${\frac{\log 2}{\log 3}}$ is transcendental.

Proof: If not, one could obtain a contradiction to the Gelfond-Schneider theorem by setting ${a := 3}$ and ${b := \frac{\log 2}{\log 3}}$. (Note that ${\frac{\log 2}{\log 3}}$ is clearly irrational, since ${3^p \neq 2^q}$ for any integers ${p,q}$ with ${q}$ positive.) $\Box$

In the 1960s, Alan Baker established a major generalisation of the Gelfond-Schneider theorem known as Baker’s theorem, as part of his work in transcendence theory that later earned him a Fields Medal. Among other things, this theorem provided explicit quantitative bounds on exactly how transcendental quantities such as ${\frac{\log 2}{\log 3}}$ were. In particular, it gave a strong bound on how irrational such quantities were (i.e. how easily they were approximable by rationals). Here, in particular, is one special case of Baker’s theorem:

Proposition 3 (Special case of Baker’s theorem) For any integers ${p, q}$ with ${q}$ positive, one has

$\displaystyle |\frac{\log 2}{\log 3} - \frac{p}{q}| \geq c \frac{1}{q^C}$

for some absolute (and effectively computable) constants ${c, C > 0}$.

This theorem may be compared with (the easily proved) Liouville’s theorem on diophantine approximation, which asserts that if ${\alpha}$ is an irrational algebraic number of degree ${d}$, then

$\displaystyle |\alpha - \frac{p}{q}| \geq c \frac{1}{q^d}$

for all ${p,q}$ with ${q}$ positive, and some effectively computable ${c>0}$, and (the significantly more difficult) Thue-Siegel-Roth theorem, which under the same hypotheses gives the bound

$\displaystyle |\alpha - \frac{p}{q}| \geq c_\epsilon \frac{1}{q^{2+\epsilon}}$

for all ${\epsilon>0}$, all ${p,q}$ with ${q}$ positive and an ineffective constant ${c_\epsilon>0}$. Finally, one should compare these results against Dirichlet’s theorem on Diophantine approximation, which asserts that for any real number ${\alpha}$ one has

$\displaystyle |\alpha - \frac{p}{q}| < \frac{1}{q^2}$

for infinitely many ${p,q}$ with ${q}$ positive. (The reason the Thue-Siegel-Roth theorem is ineffective is because it relies heavily on the dueling conspiracies argument, i.e. playing off multiple “conspiracies” ${\alpha \approx \frac{p}{q}}$ against each other; the other results however only focus on one approximation at a time and thus avoid ineffectivity.)
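To get a concrete feel for these bounds, one can compute the continued fraction convergents of ${\frac{\log 2}{\log 3}}$, which realise the Dirichlet-quality approximations ${|\alpha - \frac{p}{q}| < \frac{1}{q^2}}$. The sketch below (my own illustration, not from the discussion above; it uses Python's decimal module for the high-precision arithmetic) prints the normalised approximation quality ${q^2 |\alpha - \frac{p}{q}|}$ along the convergents:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
alpha = Decimal(2).ln() / Decimal(3).ln()      # log 2 / log 3, to 60 digits

# Continued-fraction terms of alpha, then convergents p/q by the usual recurrence.
x, terms = alpha, []
for _ in range(12):
    a = int(x)
    terms.append(a)
    x = 1 / (x - a)

p, q, p_prev, q_prev = terms[0], 1, 1, 0
convergents, quality = [], []
for a in terms[1:]:
    p, p_prev, q, q_prev = a * p + p_prev, p, a * q + q_prev, q
    convergents.append((p, q))
    quality.append(float(q * q * abs(alpha - Decimal(p) / Decimal(q))))
    print(f"{p}/{q}:  q^2 |alpha - p/q| = {quality[-1]:.3f}")
```

The printed values all stay below ${1}$, as guaranteed for convergents. The convergents ${12/19}$ and ${53/84}$ that appear here correspond to the familiar near-coincidences ${2^{19} \approx 3^{12}}$ and ${2^{84} \approx 3^{53}}$.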

Proposition 3 easily implies the following separation property between powers of ${2}$ and powers of ${3}$:

Corollary 4 (Separation between powers of ${2}$ and powers of ${3}$) For any positive integers ${p, q}$ one has

$\displaystyle |3^p - 2^q| \geq \frac{c}{q^C} 3^p$

for some effectively computable constants ${c, C > 0}$ (which may be slightly different from those in Proposition 3).

Indeed, this follows quickly from Proposition 3, the identity

$\displaystyle 3^p - 2^q = 3^p ( 1 - 3^{q (\frac{\log 2}{\log 3} - \frac{p}{q})}) \ \ \ \ \ (1)$

and some elementary estimates.

In particular, the gap between powers of three ${3^p}$ and powers of two ${2^q}$ grows exponentially in the exponents ${p,q}$. I do not know of any way to establish this fact other than essentially going through some version of Baker's argument (which will be given below).
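This separation is easy to observe numerically. The following sketch (an illustration, with the exponent range chosen arbitrarily) measures, for each ${p \leq 200}$, the relative gap ${|3^p - 2^q|/3^p}$ at the nearest power of two ${2^q}$:

```python
import math

# For each p, the power of two nearest to 3^p is 2^q with q = round(p log 3 / log 2).
log_ratio = math.log(3) / math.log(2)
worst = min(
    (abs(3**p - 2**round(p * log_ratio)) / 3**p, p)
    for p in range(1, 201)
)
print(worst)    # the smallest relative gap in this range, and the p attaining it
```

The closest encounter in this range occurs at ${(p,q) = (53,84)}$, where the relative gap is still about ${0.002}$; nothing remotely resembling a collision occurs, in accordance with Corollary 4.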

For comparison, by exploiting the trivial (yet fundamental) integrality gap – the obvious fact that if an integer ${n}$ is non-zero, then its magnitude is at least ${1}$ – we have the trivial bound

$\displaystyle |3^p - 2^q| \geq 1$

for all positive integers ${p, q}$ (since, from the fundamental theorem of arithmetic, ${3^p-2^q}$ cannot vanish). Putting this into (1) we obtain a very weak version of Proposition 3, that only gives exponential bounds instead of polynomial ones:

Proposition 5 (Trivial bound) For any integers ${p, q}$ with ${q}$ positive, one has

$\displaystyle |\frac{\log 2}{\log 3} - \frac{p}{q}| \geq c \frac{1}{2^q}$

for some absolute (and effectively computable) constant ${c > 0}$.

The proof of Baker’s theorem (or even of the simpler special case in Proposition 3) is largely elementary (except for some very basic complex analysis), but is quite intricate and lengthy, as a lot of careful book-keeping is necessary in order to get a bound as strong as that in Proposition 3. To illustrate the main ideas, I will prove a bound that is weaker than Proposition 3, but still significantly stronger than Proposition 5, and whose proof already captures many of the key ideas of Baker:

Proposition 6 (Weak special case of Baker’s theorem) For any integers ${p, q}$ with ${q > 1}$, one has

$\displaystyle |\frac{\log 2}{\log 3} - \frac{p}{q}| \geq \exp( - C \log^{C'} q )$

for some absolute constants ${C, C' > 0}$.

Note that Proposition 3 is equivalent to the assertion that one can take ${C'=1}$ (and ${C}$ effective) in the above proposition.

The proof of Proposition 6 can be made effective (for instance, it is not too difficult to make the ${C'}$ close to ${2}$); however, in order to simplify the exposition (and in particular, to use some nonstandard analysis terminology to reduce the epsilon management), I will establish Proposition 6 with ineffective constants ${C, C'}$.

Like many other results in transcendence theory, the proof of Baker's theorem (and Proposition 6) relies on what we would nowadays call the polynomial method – to play off upper and lower bounds on the complexity of polynomials that vanish (or nearly vanish) to high order on a specified set of points. (I have discussed the polynomial method in relation to problems in incidence geometry in several previous blog posts.) In the specific case of Proposition 6, the points in question are of the form

$\displaystyle \Gamma_N := \{ (2^n, 3^n): n = 1,\ldots,N \} \subset {\bf R}^2$

for some large integer ${N}$. On the one hand, the irrationality of ${\frac{\log 2}{\log 3}}$ ensures that the curve

$\displaystyle \gamma := \{ (2^t, 3^t): t \in {\bf R} \}$

is not algebraic, and so it is difficult for a polynomial ${P}$ of controlled complexity to vanish (or nearly vanish) to high order at all the points of ${\Gamma_N}$; the trivial bound in Proposition 5 allows one to make this statement more precise. (Here, “complexity” of a polynomial is an informal term referring both to the degree of the polynomial, and the height of the coefficients, which in our application will essentially be integers up to some normalisation factors.) On the other hand, if Proposition 6 failed, then ${\frac{\log 2}{\log 3}}$ would be close to a rational, which by Taylor expansion would make ${\gamma}$ close to an algebraic curve over the rationals (up to some rescaling by factors such as ${\log 2}$ and ${\log 3}$) at each point of ${\Gamma_N}$. This, together with a pigeonholing argument, allows one to find a polynomial ${P}$ of reasonably controlled complexity to (nearly) vanish to high order at every point of ${\Gamma_N}$.

These observations, by themselves, are not sufficient to get beyond the trivial bound in Proposition 5. However, Baker’s key insight was to exploit the integrality gap to bootstrap the (near) vanishing of ${P}$ on a set ${\Gamma_N}$ to imply near-vanishing of ${P}$ on a larger set ${\Gamma_{N'}}$ with ${N'>N}$. The point is that if a polynomial ${P}$ of controlled degree and size (nearly) vanishes to higher order on a lot of points on an analytic curve such as ${\gamma}$, then it will also be fairly small on many other points in ${\gamma}$ as well. (To quantify this statement efficiently, it is convenient to use the tools of complex analysis, which are particularly well suited to understand zeroes (or small values) of polynomials.) But then, thanks to the integrality gap (and the controlled complexity of ${P}$), we can amplify “fairly small” to “very small”.

Using this observation and an iteration argument, Baker was able to take a polynomial of controlled complexity ${P}$ that nearly vanished to high order on a relatively small set ${\Gamma_{N_0}}$, and bootstrap that to show near-vanishing on a much larger set ${\Gamma_{N_k}}$. This bootstrap allows one to dramatically bridge the gap between the upper and lower bounds on the complexity of polynomials that nearly vanish to a specified order on a given ${\Gamma_N}$, and eventually leads to Proposition 6 (and, with much more care and effort, to Proposition 3).

Below the fold, I give the details of this argument. My treatment here is inspired by this exposé of Serre, and these lecture notes of Soundararajan (as transcribed by Ian Petrow).

One of the most well known problems from ancient Greek mathematics was that of trisecting an angle by straightedge and compass, which was eventually proven impossible in 1837 by Pierre Wantzel, using methods from Galois theory.

Formally, one can set up the problem as follows. Define a configuration to be a finite collection ${{\mathcal C}}$ of points, lines, and circles in the Euclidean plane. Define a construction step to be one of the following operations to enlarge the collection ${{\mathcal C}}$:

• (Straightedge) Given two distinct points ${A, B}$ in ${{\mathcal C}}$, form the line ${\overline{AB}}$ that connects ${A}$ and ${B}$, and add it to ${{\mathcal C}}$.
• (Compass) Given two distinct points ${A, B}$ in ${{\mathcal C}}$, and given a third point ${O}$ in ${{\mathcal C}}$ (which may or may not equal ${A}$ or ${B}$), form the circle with centre ${O}$ and radius equal to the length ${|AB|}$ of the line segment joining ${A}$ and ${B}$, and add it to ${{\mathcal C}}$.
• (Intersection) Given two distinct curves ${\gamma, \gamma'}$ in ${{\mathcal C}}$ (thus ${\gamma}$ is either a line or a circle in ${{\mathcal C}}$, and similarly for ${\gamma'}$), select a point ${P}$ that is common to both ${\gamma}$ and ${\gamma'}$ (there are at most two such points), and add it to ${{\mathcal C}}$.

We say that a point, line, or circle is constructible by straightedge and compass from a configuration ${{\mathcal C}}$ if it can be obtained from ${{\mathcal C}}$ after applying a finite number of construction steps.

Problem 1 (Angle trisection) Let ${A, B, C}$ be distinct points in the plane. Is it always possible to construct by straightedge and compass from ${A,B,C}$ a line ${\ell}$ through ${A}$ that trisects the angle ${\angle BAC}$, in the sense that the angle between ${\ell}$ and ${BA}$ is one third of the angle of ${\angle BAC}$?

Thanks to Wantzel’s result, the answer to this problem is known to be “no” in general; a generic angle ${\angle BAC}$ cannot be trisected by straightedge and compass. (On the other hand, some special angles can certainly be trisected by straightedge and compass, such as a right angle. Also, one can certainly trisect generic angles using other methods than straightedge and compass; see the Wikipedia page on angle trisection for some examples of this.)

The impossibility of angle trisection stands in sharp contrast to the easy construction of angle bisection via straightedge and compass, which we briefly review as follows:

1. Start with three points ${A, B, C}$.
2. Form the circle ${c_0}$ with centre ${A}$ and radius ${AB}$, and intersect it with the line ${\overline{AC}}$. Let ${D}$ be the point in this intersection that lies on the same side of ${A}$ as ${C}$. (${D}$ may well be equal to ${C}$).
3. Form the circle ${c_1}$ with centre ${B}$ and radius ${AB}$, and the circle ${c_2}$ with centre ${D}$ and radius ${AB}$. Let ${E}$ be the point of intersection of ${c_1}$ and ${c_2}$ that is not ${A}$.
4. The line ${\ell := \overline{AE}}$ will then bisect the angle ${\angle BAC}$.
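For the record, these four steps are easy to carry out in coordinates. The sketch below (my own illustration, treating points as complex numbers) uses the observation that the circles ${c_1}$ and ${c_2}$ both have radius ${|AB|}$ and both pass through ${A}$, so their second intersection point is simply ${B + D - A}$:

```python
import cmath

def bisect(A, B, C):
    """The four construction steps above, with points as complex numbers."""
    r = abs(B - A)
    D = A + r * (C - A) / abs(C - A)   # step 2: the ray AC meets the circle c0
    E = B + D - A                       # step 3: second intersection of c1 and c2
    return E                            # step 4: the bisector is the line through A and E

A, B, C = 0j, 1 + 0j, 2j               # a right angle at A
E = bisect(A, B, C)
print(cmath.phase(E - A))              # pi/4: the angle BAC has been halved
```

The algebraic reason this works is visible in the formula: ${E - A = (B-A) + (D-A)}$ is the sum of two vectors of equal length, hence points along the diagonal of a rhombus, which bisects the angle between them.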

The key difference between angle trisection and angle bisection ultimately boils down to the following trivial number-theoretic fact:

Lemma 2 There is no power of ${2}$ that is evenly divisible by ${3}$.

Proof: Obvious by modular arithmetic, by induction, or by the fundamental theorem of arithmetic. $\Box$

In contrast, there are of course plenty of powers of ${2}$ that are evenly divisible by ${2}$, and this is ultimately why angle bisection is easy while angle trisection is hard.

The standard way in which Lemma 2 is used to demonstrate the impossibility of angle trisection is via Galois theory. The implication is quite short if one knows this theory, but quite opaque otherwise. We briefly sketch the proof of this implication here, though we will not need it in the rest of the discussion. Firstly, Lemma 2 implies the following fact about field extensions.

Corollary 3 Let ${F}$ be a field, and let ${E}$ be an extension of ${F}$ that can be constructed out of ${F}$ by a finite sequence of quadratic extensions. Then ${E}$ does not contain any cubic extensions ${K}$ of ${F}$.

Proof: If $E$ contained a cubic extension $K$ of $F$, then the dimension of $E$ over $F$ would be a multiple of three. On the other hand, if $E$ is obtained from $F$ by a tower of quadratic extensions, then the dimension of $E$ over $F$ is a power of two. The claim then follows from Lemma 2. $\Box$

To conclude the proof, one then notes that any point, line, or circle that can be constructed from a configuration ${{\mathcal C}}$ is definable in a field obtained from the coefficients of all the objects in ${{\mathcal C}}$ after taking a finite number of quadratic extensions, whereas a trisection of an angle ${\angle ABC}$ will generically only be definable in a cubic extension of the field generated by the coordinates of ${A,B,C}$.

The Galois theory method also allows one to obtain many other impossibility results of this type, most famously the Abel-Ruffini theorem on the insolvability of the quintic equation by radicals. For this reason (and also because of the many applications of Galois theory to number theory and other branches of mathematics), the Galois theory argument is the “right” way to prove the impossibility of angle trisection within the broader framework of modern mathematics. However, this argument has the drawback that it requires one to first understand Galois theory (or at least field theory), which is usually not presented until an advanced undergraduate algebra or number theory course, whilst the angle trisection problem requires only high-school level mathematics to formulate. Even if one is allowed to “cheat” and sweep several technicalities under the rug, one still needs to possess a fair amount of solid intuition about advanced algebra in order to appreciate the proof. (This was undoubtedly one reason why, even after Wantzel’s impossibility result was published, a large amount of effort was still expended by amateur mathematicians to try to trisect a general angle.)

In this post I would therefore like to present a different proof (or perhaps more accurately, a disguised version of the standard proof) of the impossibility of angle trisection by straightedge and compass, that avoids explicit mention of Galois theory (though it is never far beneath the surface). With “cheats”, the proof is actually quite simple and geometric (except for Lemma 2, which is still used at a crucial juncture), based on the basic geometric concept of monodromy; unfortunately, some technical work is needed to remove these cheats.

To describe the intuitive idea of the proof, let us return to the angle bisection construction, that takes a triple ${A, B, C}$ of points as input and returns a bisecting line ${\ell}$ as output. We iterate the construction to create a quadrisecting line ${m}$, via the following sequence of steps that extend the original bisection construction:

1. Start with three points ${A, B, C}$.
2. Form the circle ${c_0}$ with centre ${A}$ and radius ${AB}$, and intersect it with the line ${\overline{AC}}$. Let ${D}$ be the point in this intersection that lies on the same side of ${A}$ as ${C}$. (${D}$ may well be equal to ${C}$).
3. Form the circle ${c_1}$ with centre ${B}$ and radius ${AB}$, and the circle ${c_2}$ with centre ${D}$ and radius ${AB}$. Let ${E}$ be the point of intersection of ${c_1}$ and ${c_2}$ that is not ${A}$.
4. Let ${F}$ be the point on the line ${\ell := \overline{AE}}$ which lies on ${c_0}$, and is on the same side of ${A}$ as ${E}$.
5. Form the circle ${c_3}$ with centre ${F}$ and radius ${AB}$. Let ${G}$ be the point of intersection of ${c_1}$ and ${c_3}$ that is not ${A}$.
6. The line ${m := \overline{AG}}$ will then quadrisect the angle ${\angle BAC}$.

Let us fix the points ${A}$ and ${B}$, but not ${C}$, and view ${m}$ (as well as intermediate objects such as ${D}$, ${c_2}$, ${E}$, ${\ell}$, ${F}$, ${c_3}$, ${G}$) as a function of ${C}$.

Let us now do the following: we begin rotating ${C}$ counterclockwise around ${A}$, which drags around the other objects ${D}$, ${c_2}$, ${E}$, ${\ell}$, ${F}$, ${c_3}$, ${G}$ that were constructed from ${C}$ accordingly. For instance, here is an early stage of this rotation process, when the angle ${\angle BAC}$ has become obtuse:

Now for the slightly tricky bit. We are going to keep rotating ${C}$ beyond a half-rotation of ${180^\circ}$, so that ${\angle BAC}$ now becomes a reflex angle. At this point, a singularity occurs; the point ${E}$ collides into ${A}$, and so there is an instant in which the line ${\ell = \overline{AE}}$ is not well-defined. However, this turns out to be a removable singularity (and the easiest way to demonstrate this will be to tap the power of complex analysis, as complex numbers can easily route around such a singularity), and we can blast through it to the other side, giving a picture like this:

Note that we have now deviated from the original construction in that ${F}$ and ${E}$ are no longer on the same side of ${A}$; we are thus now working in a continuation of that construction rather than with the construction itself. Nevertheless, we can still work with this continuation (much as, say, one works with analytic continuations of infinite series such as ${\sum_{n=1}^\infty \frac{1}{n^s}}$ beyond their original domain of definition).

We now keep rotating ${C}$ around ${A}$. Here, ${\angle BAC}$ is approaching a full rotation of ${360^\circ}$:

When ${\angle BAC}$ reaches a full rotation, a different singularity occurs: ${c_1}$ and ${c_2}$ coincide. Nevertheless, this is also a removable singularity, and we blast through to beyond a full rotation:

And now ${C}$ is back where it started, as are ${D}$, ${c_2}$, ${E}$, and ${\ell}$… but the point ${F}$ has moved, from one intersection point of ${\ell \cap c_0}$ to the other. As a consequence, ${c_3}$, ${G}$, and ${m}$ have also changed, with ${m}$ being at right angles to where it was before. (In the jargon of modern mathematics, the quadrisection construction has a non-trivial monodromy.)

But nothing stops us from rotating ${C}$ some more. If we continue this procedure, we see that after two full rotations of ${C}$ around ${A}$, all points, lines, and circles constructed from ${A, B, C}$ have returned to their original positions. Because of this, we shall say that the quadrisection construction described above is periodic with period ${2}$.
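One can watch this monodromy happen numerically. The sketch below is an illustration under simplifying assumptions (${A = 0}$, ${B = 1}$, and ${C = e^{i\theta}}$ kept on the circle ${c_0}$, none of which appear in the text above). The singular steps are continued through by the observation that two circles of radius ${|AB|}$ that both pass through ${A}$ meet again exactly at the sum of their centres; the only branch that needs continuous tracking is the choice of ${F}$ on ${\ell \cap c_0}$:

```python
import cmath, math

A, B = 0j, 1 + 0j
theta0, N = 0.5, 4000
dtheta = 4 * math.pi / N              # two full rotations of C, in N small steps

F = cmath.exp(1j * theta0 / 2)        # initial choice of F on the line AE and circle c0
F_start, G_start = F, B + F - A
F_half = G_half = None
for k in range(1, N + 1):
    theta = theta0 + k * dtheta
    D = cmath.exp(1j * theta)         # here D = C, since |AC| = |AB| = 1
    E = B + D - A                     # second intersection of c1 and c2 (both pass through A)
    u = E / abs(E)                    # the line AE meets c0 at +u and at -u ...
    F = u if abs(u - F) <= abs(u + F) else -u   # ... keep the branch nearest the old F
    G = B + F - A                     # second intersection of c1 and c3
    if k == N // 2:                   # C has completed exactly one full rotation
        F_half, G_half = F, G
print(abs(F_half + F_start))                  # ~0: F has flipped to the antipodal point
print((G_half * G_start.conjugate()).real)    # ~0: the line m has turned a right angle
print(abs(F - F_start), abs(G - G_start))     # ~0, ~0: period two, as claimed
```

After one full rotation ${F}$ has moved to the antipodal intersection point and ${m}$ has turned by a right angle; after two full rotations everything returns to its starting position, exhibiting the period ${2}$.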

Similarly, if one performs an octisection of the angle ${\angle BAC}$ by bisecting the quadrisection, one can verify that this octisection is periodic with period ${4}$; it takes four full rotations of ${C}$ around ${A}$ before the configuration returns to where it started. More generally, one can show

Proposition 4 Any construction of straightedge and compass from the points ${A,B,C}$ is periodic with period equal to a power of ${2}$.

The reason for this, ultimately, is because any two circles or lines will intersect each other in at most two points, and so at each step of a straightedge-and-compass construction there is an ambiguity of at most ${2! = 2}$. Each rotation of ${C}$ around ${A}$ can potentially flip one of these points to the other, but then if one rotates again, the point returns to its original position, and then one can analyse the next point in the construction in the same fashion until one obtains the proposition.

But now consider a putative trisection operation, that starts with an arbitrary angle ${\angle BAC}$ and somehow uses some sequence of straightedge and compass constructions to end up with a trisecting line ${\ell}$:

What is the period of this construction? If we continuously rotate ${C}$ around ${A}$, we observe that a full rotation of ${C}$ only causes the trisecting line ${\ell}$ to rotate by a third of a full rotation (i.e. by ${120^\circ}$):

Because of this, we see that the period of any construction that contains ${\ell}$ must be a multiple of ${3}$. But this contradicts Proposition 4 and Lemma 2.

Below the fold, I will make the above proof rigorous. Unfortunately, in doing so, I had to again leave the world of high-school mathematics, as one needs a little bit of algebraic geometry and complex analysis to resolve the issues with singularities that we saw in the above sketch. Still, I feel that at an intuitive level at least, this argument is more geometric and accessible than the Galois-theoretic argument (though anyone familiar with Galois theory will note that there is really not that much difference between the proofs, ultimately, as one has simply replaced the Galois group with a closely related monodromy group instead).

Christian Elsholtz and I have recently finished our joint paper “Counting the number of solutions to the Erdös-Straus equation on unit fractions“, submitted to the Journal of the Australian Mathematical Society. This supersedes my previous paper on the subject, by obtaining stronger and more general results. (The paper is currently in the process of being resubmitted to the arXiv, and should appear at this link within a few days.)

As with the previous paper, the main object of study is the number ${f(n)}$ of solutions to the Diophantine equation

$\displaystyle \frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \ \ \ \ \ (1)$

with ${x,y,z}$ positive integers. The Erdös-Straus conjecture asserts that ${f(n)>0}$ for all ${n>1}$. Since ${f(nm) \geq f(n)}$ for all positive integers ${n,m}$, it suffices to show that ${f(p)>0}$ for all primes ${p}$.

We single out two special types of solutions: Type I solutions, in which ${x}$ is divisible by ${n}$ and ${y,z}$ are coprime to ${n}$, and Type II solutions, in which ${x}$ is coprime to ${n}$ and ${y,z}$ are divisible by ${n}$. Let ${f_I(n), f_{II}(n)}$ denote the number of Type I and Type II solutions respectively. For any ${n}$, one has

$\displaystyle f(n) \geq 3 f_I(n) + 3 f_{II}(n),$

with equality when ${n}$ is an odd prime ${p}$. Thus, to prove the Erdös-Straus conjecture, it suffices to show that at least one of ${f_I(p)}$, ${f_{II}(p)}$ is positive whenever ${p}$ is an odd prime.
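For small ${n}$ these counts are easy to compute by brute force. The sketch below is illustrative only; note that it counts solutions with ${x \leq y \leq z}$, which differs from the normalisation above by the symmetry factors just mentioned. It verifies the conjecture up to ${100}$:

```python
from fractions import Fraction
import math

def f(n):
    """Count solutions of 4/n = 1/x + 1/y + 1/z in positive integers x <= y <= z."""
    target = Fraction(4, n)
    count = 0
    # x must satisfy target/3 <= 1/x < target, i.e. n/4 < x <= 3n/4.
    for x in range(math.floor(1 / target) + 1, math.floor(3 / target) + 1):
        rem = target - Fraction(1, x)            # what 1/y + 1/z must equal
        if rem <= 0:
            continue
        # y must satisfy rem/2 <= 1/y < rem, and y >= x.
        for y in range(max(x, math.floor(1 / rem) + 1), math.floor(2 / rem) + 1):
            tail = rem - Fraction(1, y)          # what 1/z must equal
            if tail > 0 and tail.numerator == 1:
                count += 1
    return count

print([f(n) for n in range(2, 12)])
```

For instance ${f(3) = 3}$ in this normalisation, coming from the decompositions ${(1,4,12)}$, ${(1,6,6)}$ and ${(2,2,3)}$.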

Our first main results are the asymptotics

$\displaystyle N \log^3 N \ll \sum_{n \leq N} f_I(n) \ll N \log^3 N$

$\displaystyle N \log^3 N \ll \sum_{n \leq N} f_{II}(n) \ll N \log^3 N$

$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_I(p) \ll N \log^2 N \log\log N$

$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_{II}(p) \ll N \log^2 N.$

This improves upon the results in the previous paper, which only established

$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_I(p) \ll N \exp(O( \frac{\log N}{\log\log N} ))$

and

$\displaystyle N \log^2 N \ll \sum_{p \leq N} f_{II}(p) \ll N \log^2 N \log \log N.$

The double logarithmic factor in the upper bound for ${\sum_{p \leq N} f_I(p)}$ is artificial (arising from the inefficiency in the Brun-Titchmarsh inequality on very short progressions) but we do not know how to remove it.

The methods are similar to those in the previous paper (which were also independently discovered in unpublished work of Elsholtz and Heath-Brown), but with the additional input of the Erdös divisor bound on expressions of the form ${\sum_{n \leq N} \tau(P(n))}$ for polynomials ${P}$, discussed in this recent blog post. (Actually, we need to tighten Erdös’ bound somewhat, to obtain some uniformity in the bounds even as the coefficients of ${P}$ become large, but this turns out to be achievable by going through the original arguments of Erdös more carefully.)

We also note an observation of Heath-Brown, that in our notation gives the lower bound

$\displaystyle N \log^6 N \ll \sum_{n \leq N} f(n);$

thus, we see that for typical ${n}$, most solutions to the Erdös-Straus equation are not of Type I or Type II, in contrast to the case when ${n}$ is prime.

We also have a number other new results. We find a way to systematically unify all the previously known parameterisations of solutions to the Erdös-Straus equation, by lifting the Cayley-type surface ${\{ (x,y,z): \frac{4}{n} = \frac{1}{x} + \frac{1}{y} + \frac{1}{z} \}}$ to a certain three-dimensional variety in six-dimensional affine space, in such a way that integer points in the former arise from integer points in the latter. Each of the previously known characterisations of solutions then corresponds to a different choice of coordinates on this variety. (This point of view was also adopted in a paper of Heath-Brown, who interprets this lifted variety as the universal torsor of the Cayley surface.) By optimising between these parameterisations and exploiting the divisor bound, we obtain some bounds on the worst-case behaviour of ${f_I(n)}$ and ${f_{II}(n)}$, namely

$\displaystyle f_I(n) \ll n^{3/5 + O(1/\log \log n)}$

and

$\displaystyle f_{II}(n) \ll n^{2/5 + O(1/\log \log n)},$

which should be compared to a recent previous bound ${f(n) \ll n^{2/3 + O(1/\log \log n)}}$ of Browning and Elsholtz. In the other direction, we show that ${f(n) \gg n^{(3+o(1))/\log\log n}}$ for infinitely many ${n}$, and ${f(p) \gg \log^{\frac{\log 3}{2}-o(1)} p}$ for almost all primes ${p}$. Here, the main tools are some bounds for the representation of a rational as a sum of two unit fractions in the above-mentioned work of Browning and Elsholtz, and also the Turán-Kubilius inequality.

We also completely classify all the congruence classes that can be solved by polynomials, completing the partial list discussed in the previous post. Specifically, the Erdös-Straus conjecture is true for ${n}$ whenever one of the following congruence-type conditions is satisfied:

1. ${n = -f \mod 4ad}$, where ${a,d,f \in {\bf N}}$ are such that ${f|4a^2 d+1}$.
2. ${n = -f \mod 4ac}$ and ${n = -\frac{c}{a} \mod f}$, where ${a,c,f \in {\bf N}}$ are such that ${(4ac,f)=1}$.
3. ${n = -f \mod 4cd}$ and ${n^2 = -4c^2d \mod f}$, where ${c,d,f \in {\bf N}}$ are such that ${(4cd,f)=1}$.
4. ${n = -\frac{1}{e} \mod 4ab}$ or ${n = -e \mod 4ab}$, where ${a,b,e \in {\bf N}}$ are such that ${e|a+b}$ and ${(e,4ab)=1}$.
5. ${n = -4a^2d \mod f}$, where ${a,d,f \in {\bf N}}$ are such that ${4ad|f+1}$.
6. ${n = -4a^2d-e \mod 4ade}$, where ${a,d,e \in {\bf N}}$ are such that ${(4ad,e)=1}$.

In principle, this suggests a way to extend the existing verification of the Erdös-Straus conjecture beyond the current range of ${10^{14}}$ by collecting all congruences to small moduli (e.g. up to ${10^6}$), and then using this to sieve out the primes up to a given size.
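As a hedged illustration of this sieving idea (using only the single classical congruence result of Mordell, that the conjecture is known to hold unless ${n}$ is congruent to one of ${1, 11^2, 13^2, 17^2, 19^2, 23^2}$ modulo ${840}$, rather than the much longer list of conditions above):

```python
def is_prime(n):
    """Trial-division primality test (fine at this tiny scale)."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Mordell's congruence conditions settle the conjecture unless n mod 840
# is one of these six squares; only such primes need further checking.
exceptional = {1, 121, 169, 289, 361, 529}   # 1, 11^2, 13^2, 17^2, 19^2, 23^2
remaining = [p for p in range(2, 2001) if is_prime(p) and p % 840 in exceptional]
print(remaining)    # the only primes below 2000 left unresolved by this one modulus
```

Already a single modulus eliminates all but a handful of primes; combining many moduli, as suggested above, would thin out the survivors much further before any direct search is needed.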

Finally, we begin a study of the more general equation

$\displaystyle \frac{m}{n} = \frac{1}{n_1}+\ldots+\frac{1}{n_k} \ \ \ \ \ (2)$

where ${m > k \geq 3}$ are fixed. We can obtain a partial analogue of our main bounds for the ${m=4,k=3}$ case, namely that

$\displaystyle \sum_{n \leq N} f_{m,k,II}(n) \gg N \log^{2^{k-1}-1} N$

and

$\displaystyle \sum_{p \leq N} f_{m,k,II}(p) \gg N \log^{2^{k-1}-2} N / \log\log N$

where ${f_{m,k,II}(n)}$ denotes the number of solutions to (2) which are of “Type II” in the sense that ${n_2,\ldots,n_k}$ are all divisible by ${n}$. However, we do not believe our bounds to be sharp in the large ${k}$ regime, though they do show that the expected number of solutions to (2) should grow rapidly in ${k}$.

One of the basic problems in analytic number theory is to obtain bounds and asymptotics for sums of the form

$\displaystyle \sum_{n \leq x} f(n)$

in the limit ${x \rightarrow \infty}$, where ${n}$ ranges over natural numbers less than ${x}$, and ${f: {\bf N} \rightarrow {\bf C}}$ is some arithmetic function of number-theoretic interest. (It is also often convenient to replace this sharply truncated sum with a smoother sum such as ${\sum_n f(n) \psi(n/x)}$, but we will not discuss this technicality here.) For instance, the prime number theorem is equivalent to the assertion

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + o(x)$

where ${\Lambda}$ is the von Mangoldt function, while the Riemann hypothesis is equivalent to the stronger assertion

$\displaystyle \sum_{n \leq x} \Lambda(n) = x + O(x^{1/2+o(1)}).$
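These equivalences are easy to probe numerically at small scales. The following direct computation of ${\sum_{n \leq x} \Lambda(n)}$ (an illustration only; the cutoffs are arbitrary, and trial division is used for simplicity) shows the ratio to ${x}$ drifting towards ${1}$:

```python
import math

def mangoldt_sum(x):
    """Compute psi(x) = sum of Lambda(n) over n <= x, straight from the definition."""
    total = 0.0
    for p in range(2, x + 1):
        if all(p % d for d in range(2, math.isqrt(p) + 1)):   # p is prime
            pk = p
            while pk <= x:            # Lambda(p^k) = log p for every prime power p^k <= x
                total += math.log(p)
                pk *= p
    return total

for x in (10**2, 10**3, 10**4):
    print(x, mangoldt_sum(x) / x)     # ratios approaching 1, as the PNT predicts
```

Of course, convergence at this rate says nothing about the much finer error term demanded by the Riemann hypothesis.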

It is thus of interest to develop techniques to estimate such sums ${\sum_{n \leq x} f(n)}$. Of course, the difficulty of this task depends on how “nice” the function ${f}$ is. The functions ${f}$ that come up in number theory lie on a broad spectrum of “niceness”, with some particularly nice functions being quite easy to sum, and some being insanely difficult.

At the easiest end of the spectrum are those functions ${f}$ that exhibit some sort of regularity or “smoothness”. Examples of smoothness include “Archimedean” smoothness, in which ${f(n)}$ is the restriction of some smooth function ${f: {\bf R} \rightarrow {\bf C}}$ from the reals to the natural numbers, and the derivatives of ${f}$ are well controlled. A typical example is

$\displaystyle \sum_{n \leq x} \log n.$

One can already get quite good bounds on this quantity by comparison with the integral ${\int_1^x \log t\ dt}$, namely

$\displaystyle \sum_{n \leq x} \log n = x \log x - x + O(\log x),$

with sharper bounds available by using tools such as the Euler-Maclaurin formula (see this blog post). Exponentiating such asymptotics, incidentally, leads to one of the standard proofs of Stirling’s formula (as discussed in this blog post).
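As a quick numerical sanity check (illustrative; the choice ${x = 10^6}$ is arbitrary), the discrepancy between ${\sum_{n \leq x} \log n}$ and ${x \log x - x}$ is indeed of size ${O(\log x)}$, and in fact hugs the next Euler-Maclaurin term ${\frac{1}{2} \log(2\pi x)}$:

```python
import math

x = 10**6
lhs = sum(math.log(n) for n in range(1, x + 1))   # sum of log n for n <= x
main = x * math.log(x) - x                        # the main term from the integral
print(lhs - main)       # about 7.83 here, i.e. roughly (1/2) log(2 pi x)
```

This is just Stirling's formula in disguise, as remarked above.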

One can also consider “non-Archimedean” notions of smoothness, such as periodicity relative to a small period ${q}$. Indeed, if ${f}$ is periodic with period ${q}$ (and is thus essentially a function on the cyclic group ${{\bf Z}/q{\bf Z}}$), then one has the easy bound

$\displaystyle \sum_{n \leq x} f(n) = \frac{x}{q} \sum_{n \in {\bf Z}/q{\bf Z}} f(n) + O( \sum_{n \in {\bf Z}/q{\bf Z}} |f(n)| ).$

In particular, we have the fundamental estimate

$\displaystyle \sum_{n \leq x: q|n} 1 = \frac{x}{q} + O(1). \ \ \ \ \ (1)$

This is a good estimate when ${q}$ is much smaller than ${x}$, but as ${q}$ approaches ${x}$ in magnitude, the error term ${O(1)}$ begins to overwhelm the main term ${\frac{x}{q}}$, and one needs much more delicate information on the fractional part of ${\frac{x}{q}}$ in order to obtain good estimates at this point.

One can also consider functions ${f}$ which combine “Archimedean” and “non-Archimedean” smoothness into an “adelic” smoothness. We will not define this term precisely here (though the concept of a Schwartz-Bruhat function is one way to capture this sort of concept), but a typical example might be

$\displaystyle \sum_{n \leq x} \chi(n) \log n$

where ${\chi}$ is periodic with some small period ${q}$. By using techniques such as summation by parts, one can estimate such sums using the techniques used to estimate sums of periodic functions or functions with (Archimedean) smoothness.

Another class of functions that is reasonably well controlled are the multiplicative functions, in which ${f(nm) = f(n) f(m)}$ whenever ${n,m}$ are coprime. Here, one can use the powerful techniques of multiplicative number theory, for instance by working with the Dirichlet series

$\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s}$

which are clearly related to the partial sums ${\sum_{n \leq x} f(n)}$ (essentially via the Mellin transform, a cousin of the Fourier and Laplace transforms); for this post we ignore the (important) issue of how to make sense of this series when it is not absolutely convergent (but see this previous blog post for more discussion). A primary reason that this technique is effective is that the Dirichlet series of a multiplicative function factorises as an Euler product

$\displaystyle \sum_{n=1}^\infty \frac{f(n)}{n^s} = \prod_p (\sum_{j=0}^\infty \frac{f(p^j)}{p^{js}}).$

One also obtains similar types of representations for functions that are not quite multiplicative, but are closely related to multiplicative functions, such as the von Mangoldt function ${\Lambda}$ (whose Dirichlet series ${\sum_{n=1}^\infty \frac{\Lambda(n)}{n^s} = -\frac{\zeta'(s)}{\zeta(s)}}$ is not given by an Euler product, but instead by the logarithmic derivative of an Euler product).

Moving another notch along the spectrum between well-controlled and ill-controlled functions, one can consider functions ${f}$ that are divisor sums such as

$\displaystyle f(n) = \sum_{d \leq R; d|n} g(d) = \sum_{d \leq R} 1_{d|n} g(d)$

for some other arithmetic function ${g}$, and some level ${R}$. This is a linear combination of periodic functions ${1_{d|n} g(d)}$ and is thus technically periodic in ${n}$ (with period equal to the least common multiple of all the numbers from ${1}$ to ${R}$), but in practice this period is far too large to be useful (except for extremely small levels ${R}$, e.g. ${R = O(\log x)}$). Nevertheless, we can still control the sum ${\sum_{n \leq x} f(n)}$ simply by rearranging the summation:

$\displaystyle \sum_{n \leq x} f(n) = \sum_{d \leq R} g(d) \sum_{n \leq x: d|n} 1$

and thus by (1) one can bound this by the sum of a main term ${x \sum_{d \leq R} \frac{g(d)}{d}}$ and an error term ${O( \sum_{d \leq R} |g(d)| )}$. As long as the level ${R}$ is significantly less than ${x}$, one may expect the main term to dominate, and one can often estimate this term by a variety of techniques (for instance, if ${g}$ is multiplicative, then multiplicative number theory techniques are quite effective, as mentioned previously). Similarly for other slight variants of divisor sums, such as expressions of the form

$\displaystyle \sum_{d \leq R; d | n} g(d) \log \frac{n}{d}$

or expressions of the form

$\displaystyle \sum_{d \leq R} F_d(n)$

where each ${F_d}$ is periodic with period ${d}$.

One of the simplest examples of this comes when estimating the divisor function

$\displaystyle \tau(n) := \sum_{d|n} 1,$

which counts the number of divisors of ${n}$. This is a multiplicative function, and is therefore most efficiently estimated using the techniques of multiplicative number theory; but for reasons that will become clearer later, let us “forget” the multiplicative structure and estimate the above sum by more elementary methods. By applying the preceding method, we see that

$\displaystyle \sum_{n \leq x} \tau(n) = \sum_{d \leq x} \sum_{n \leq x:d|n} 1$

$\displaystyle = \sum_{d \leq x} (\frac{x}{d} + O(1))$

$\displaystyle = x \log x + O(x). \ \ \ \ \ (2)$
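The estimate (2) is easy to confirm numerically via the rearranged summation (a hypothetical Python sketch; the function name is ours):

```python
import math

def divisor_summatory(x: int) -> int:
    """sum_{n <= x} tau(n), computed by the rearranged summation:
    each d <= x divides exactly floor(x/d) of the integers n <= x."""
    return sum(x // d for d in range(1, x + 1))

# The error in (2) is O(x); empirically it is about 0.154*x
# (the constant 2*gamma - 1 from the refined asymptotic below).
for x in [1_000, 10_000, 100_000]:
    assert abs(divisor_summatory(x) - x * math.log(x)) < x
```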

Here, we are (barely) able to keep the error term smaller than the main term; this is right at the edge of the divisor sum method, because the level ${R}$ in this case is equal to ${x}$. Unfortunately, at this high choice of level, it is not always possible to keep the error term under control like this. For instance, if one wishes to use the standard divisor sum representation

$\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d},$

where ${\mu}$ is the Möbius function, to compute ${\sum_{n \leq x} \Lambda(n)}$, then one ends up looking at

$\displaystyle \sum_{n \leq x} \Lambda(n) = \sum_{d \leq x} \mu(d) \sum_{n \leq x:d|n} \log \frac{n}{d}$

$\displaystyle = \sum_{d \leq x} \mu(d) ( \frac{x}{d} \log \frac{x}{d} - \frac{x}{d} + O(\log \frac{x}{d}) ).$

From Dirichlet series methods, it is not difficult to establish the identities

$\displaystyle \lim_{s\rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n)}{n^s} = 0$

and

$\displaystyle \lim_{s \rightarrow 1^+} \sum_{n=1}^\infty \frac{\mu(n) \log n}{n^s} = -1.$

This suggests (but does not quite prove) that one has

$\displaystyle \sum_{n=1}^\infty \frac{\mu(n)}{n} = 0 \ \ \ \ \ (3)$

and

$\displaystyle \sum_{n=1}^\infty \frac{\mu(n)\log n}{n} = -1 \ \ \ \ \ (4)$

in the sense of conditionally convergent series. Assuming one can justify this (which, ultimately, requires one to exclude zeroes of the Riemann zeta function on the line ${\hbox{Re}(s)=1}$, as discussed in this previous post), one is eventually left with the estimate ${x + O(x)}$, which is useless as a lower bound (and recovers only the classical Chebyshev estimate ${\sum_{n \leq x} \Lambda(n) \ll x}$ as the upper bound). The inefficiency here when compared to the situation with the divisor function ${\tau}$ can be attributed to the signed nature of the Möbius function ${\mu(n)}$, which causes some cancellation in the divisor sum expansion that needs to be compensated for with improved estimates.

However, there are a number of tricks available to reduce the level of divisor sums. The simplest comes from exploiting the change of variables ${d \mapsto \frac{n}{d}}$, which can in principle reduce the level by a square root. For instance, when computing the divisor function ${\tau(n) = \sum_{d|n} 1}$, one can observe using this change of variables that every divisor of ${n}$ above ${\sqrt{n}}$ is paired with one below ${\sqrt{n}}$, and so we have

$\displaystyle \tau(n) = 2 \sum_{d \leq \sqrt{n}: d|n} 1 \ \ \ \ \ (5)$

except when ${n}$ is a perfect square, in which case one must subtract one from the right-hand side. Using this reduced-level divisor sum representation, one can obtain an improvement to (2), namely

$\displaystyle \sum_{n \leq x} \tau(n) = x \log x + (2\gamma-1) x + O(\sqrt{x}).$

This type of argument is also known as the Dirichlet hyperbola method. A variant of this argument can also deduce the prime number theorem from (3), (4) (and with some additional effort, one can even drop the use of (4)); this is discussed at this previous blog post.
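The refined asymptotic from the hyperbola method can also be checked numerically (a hypothetical Python sketch; ${\gamma}$ is the Euler-Mascheroni constant):

```python
import math

def divisor_summatory_hyperbola(x: int) -> int:
    """Dirichlet hyperbola method: sum_{n <= x} tau(n) using only
    O(sqrt(x)) floor evaluations."""
    s = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, s + 1)) - s * s

gamma = 0.5772156649015329  # Euler-Mascheroni constant
x = 1_000_000
main = x * math.log(x) + (2 * gamma - 1) * x
# The refined asymptotic predicts an error of only O(sqrt(x)).
assert abs(divisor_summatory_hyperbola(x) - main) < 10 * math.sqrt(x)
```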

Using this square root trick, one can now also control divisor sums such as

$\displaystyle \sum_{n \leq x} \tau(n^2+1).$

(Note that ${\tau(n^2+1)}$ has no multiplicativity properties in ${n}$, and so multiplicative number theory techniques cannot be directly applied here.) The level of the divisor sum here is initially of order ${x^2}$, which is too large to be useful; but using the square root trick, we can expand this expression as

$\displaystyle 2 \sum_{n \leq x} \sum_{d \leq n: d | n^2+1} 1$

which one can rewrite as

$\displaystyle 2 \sum_{d \leq x} \sum_{d \leq n \leq x: n^2+1 = 0 \hbox{ mod } d} 1.$

The constraint ${n^2+1=0 \hbox{ mod } d}$ is periodic in ${n}$ with period ${d}$, so we can write this as

$\displaystyle 2 \sum_{d \leq x} ( \frac{x}{d} \rho(d) + O(\rho(d)) )$

where ${\rho(d)}$ is the number of solutions in ${{\bf Z}/d{\bf Z}}$ to the equation ${n^2+1 = 0 \hbox{ mod } d}$, and so

$\displaystyle \sum_{n \leq x} \tau(n^2+1) = 2x \sum_{d \leq x} \frac{\rho(d)}{d} + O(\sum_{d \leq x} \rho(d)).$

The function ${\rho}$ is multiplicative, and can be easily computed at primes ${p}$ and prime powers ${p^j}$ using tools such as quadratic reciprocity and Hensel’s lemma. For instance, by Fermat’s two-square theorem, ${\rho(p)}$ is equal to ${2}$ for ${p=1 \hbox{ mod } 4}$ and ${0}$ for ${p=3 \hbox{ mod } 4}$. From this and standard multiplicative number theory methods (e.g. by obtaining asymptotics on the Dirichlet series ${\sum_d \frac{\rho(d)}{d^s}}$), one eventually obtains the asymptotic

$\displaystyle \sum_{d \leq x} \frac{\rho(d)}{d} = \frac{3}{2\pi} \log x + O(1)$

and also

$\displaystyle \sum_{d \leq x} \rho(d) = O(x)$

and thus

$\displaystyle \sum_{n \leq x} \tau(n^2+1) = \frac{3}{\pi} x \log x + O(x).$
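The key arithmetic input here, the behaviour of ${\rho}$ at primes and its multiplicativity, is easy to verify by brute force (a hypothetical Python sketch; the function name is ours):

```python
def rho(d: int) -> int:
    """Number of solutions n in Z/dZ to n^2 + 1 = 0 mod d (brute force)."""
    return sum(1 for n in range(d) if (n * n + 1) % d == 0)

# For odd primes p: rho(p) = 2 when p = 1 mod 4, and 0 when p = 3 mod 4.
assert [rho(p) for p in (5, 13, 17, 29)] == [2, 2, 2, 2]
assert [rho(p) for p in (3, 7, 11, 19)] == [0, 0, 0, 0]
# rho is multiplicative: 65 = 5 * 13 has 2 * 2 = 4 solutions.
assert rho(65) == rho(5) * rho(13) == 4
```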

Similar arguments give asymptotics for ${\tau}$ on other quadratic polynomials; see for instance this paper of Hooley and these papers by McKee. Note that the irreducibility of the polynomial will be important. If one considers instead a sum involving a reducible polynomial, such as ${\sum_{n \leq x} \tau(n^2-1)}$, then the analogous quantity ${\rho(d)}$ becomes significantly larger, leading to a larger growth rate (of order ${x \log^2 x}$ rather than ${x\log x}$) for the sum.

However, the square root trick is insufficient by itself to deal with higher order sums involving the divisor function, such as

$\displaystyle \sum_{n \leq x} \tau(n^3+1);$

the level here is initially of order ${x^3}$, and the square root trick only lowers this to about ${x^{3/2}}$, creating an error term that overwhelms the main term. And indeed, the asymptotic for this sum has not yet been rigorously established (although if one heuristically drops error terms, one can arrive at a reasonable conjecture for this asymptotic); some results are known if one averages over additional parameters (see e.g. this paper of Greaves, or this paper of Matthiesen).

Nevertheless, there is an ingenious argument of Erdös that allows one to obtain good upper and lower bounds for these sorts of sums, in particular establishing the order-of-magnitude bounds

$\displaystyle x \log x \ll \sum_{n \leq x} \tau(P(n)) \ll x \log x \ \ \ \ \ (6)$

for any fixed irreducible non-constant polynomial ${P}$ that maps ${{\bf N}}$ to ${{\bf N}}$ (with the implied constants depending of course on the choice of ${P}$). There is also the related moment bound

$\displaystyle \sum_{n \leq x} \tau^m(P(n)) \ll x \log^{O(1)} x \ \ \ \ \ (7)$

for any fixed ${P}$ (not necessarily irreducible) and any fixed ${m \geq 1}$, due to van der Corput; this bound is in fact used to dispose of some error terms in the proof of (6). These should be compared with what one can obtain from the divisor bound ${\tau(n) \ll n^{O(1/\log \log n)}}$ and the trivial bound ${\tau(n) \geq 1}$, giving the bounds

$\displaystyle x \ll \sum_{n \leq x} \tau^m(P(n)) \ll x^{1 + O(\frac{1}{\log \log x})}$

for any fixed ${m \geq 1}$.

The lower bound in (6) is easy, since one can simply lower the level in (5) to obtain the lower bound

$\displaystyle \tau(n) \geq \sum_{d \leq n^\theta: d|n} 1$

for any ${\theta>0}$, and the preceding methods then easily allow one to obtain the lower bound by taking ${\theta}$ small enough (more precisely, if ${P}$ has degree ${d}$, one should take ${\theta}$ equal to ${1/d}$ or less). The upper bounds in (6) and (7) are more difficult. Ideally, if we could obtain upper bounds of the form

$\displaystyle \tau(n) \ll \sum_{d \leq n^\theta: d|n} 1 \ \ \ \ \ (8)$

for any fixed ${\theta > 0}$, then the preceding methods would easily establish both results. Unfortunately, this bound can fail, as illustrated by the following example. Suppose that ${n}$ is the product of ${k}$ distinct primes ${p_1 \ldots p_k}$, each of which is close to ${n^{1/k}}$. Then ${n}$ has ${2^k}$ divisors, with ${\binom{k}{j}}$ of them close to ${n^{j/k}}$ for each ${0 \leq j \leq k}$. One can think of (the logarithms of) these divisors as being distributed according to what is essentially a binomial distribution, thus a randomly selected divisor of ${n}$ has magnitude about ${n^{j/k}}$, where ${j}$ is a random variable which has the same distribution as the number of heads in ${k}$ independently tossed fair coins. By the law of large numbers, ${j}$ should concentrate near ${k/2}$ when ${k}$ is large, which implies that the majority of the divisors of ${n}$ will be close to ${n^{1/2}}$. Sending ${k \rightarrow \infty}$, one can show that the bound (8) fails whenever ${\theta < 1/2}$.
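This concentration phenomenon is already visible for modest ${k}$. Here is a toy illustration (hypothetical Python, with an arbitrary choice of eight comparable primes):

```python
import itertools
import math

# n = product of k comparable primes: log d of a random divisor d is a
# sum over a random subset of the k prime logarithms, i.e. essentially
# binomial, so divisors concentrate near sqrt(n).
primes = [101, 103, 107, 109, 113, 127, 131, 137]  # k = 8
n = math.prod(primes)
divisors = [math.prod(s) for r in range(len(primes) + 1)
            for s in itertools.combinations(primes, r)]
assert len(divisors) == 2 ** len(primes)  # 2^k divisors

# Most divisors lie between n^{1/4} and n^{3/4} ...
middle = sum(1 for d in divisors if n ** 0.25 <= d <= n ** 0.75)
assert middle / len(divisors) > 0.5
# ... so a low-level sum as in (8) with theta = 1/4 sees only a small
# fraction of the tau(n) divisors.
small = sum(1 for d in divisors if d <= n ** 0.25)
assert small / len(divisors) < 0.2
```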

This however can be fixed in a number of ways. First of all, even when ${\theta<1/2}$, one can show weaker substitutes for (8). For instance, for any fixed ${\theta > 0}$ and ${m \geq 1}$ one can show a bound of the form

$\displaystyle \tau(n)^m \ll \sum_{d \leq n^\theta: d|n} \tau(d)^C \ \ \ \ \ (9)$

for some ${C}$ depending only on ${m,\theta}$. This nice elementary inequality (first observed by Landreau) already gives a quite short proof of van der Corput’s bound (7).

For Erdös’s upper bound (6), though, one cannot afford to lose these additional factors of ${\tau(d)}$, and one must argue more carefully. Here, the key observation is that the counterexample discussed earlier – when the natural number ${n}$ is the product of a large number of fairly small primes – is quite atypical; most numbers have at least one large prime factor. For instance, the number of natural numbers less than ${x}$ that contain a prime factor between ${x^{1/2}}$ and ${x}$ is equal to

$\displaystyle \sum_{x^{1/2} \leq p \leq x} (\frac{x}{p} + O(1)),$

which, thanks to Mertens’ theorem

$\displaystyle \sum_{p \leq x} \frac{1}{p} = \log\log x + M+o(1)$

for some absolute constant ${M}$, is comparable to ${x}$. In a similar spirit, one can show by similarly elementary means that, for any fixed ${m}$, the number of natural numbers less than ${x}$ that are ${x^{1/m}}$-smooth, in the sense that all prime factors are at most ${x^{1/m}}$, is only about ${m^{-cm} x}$ or so. Because of this, one can hope that the bound (8), while not true in full generality, will still be true for most natural numbers ${n}$, with some slightly weaker substitute available (such as (7)) for the exceptional numbers ${n}$. This turns out to be the case by an elementary but careful argument.
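Mertens' theorem itself is easy to check numerically (a hypothetical Python sketch using a simple sieve; the numerical value of ${M \approx 0.2615}$ is Mertens' constant):

```python
import math

def primes_up_to(x: int):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, x + 1, p)))
    return [p for p in range(2, x + 1) if sieve[p]]

# Mertens: sum_{p <= x} 1/p = log log x + M + o(1), M ~ 0.2615.
x = 100_000
s = sum(1 / p for p in primes_up_to(x))
assert abs(s - (math.log(math.log(x)) + 0.2615)) < 0.01
```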

The Erdös argument is quite robust; for instance, the more general inequality

$\displaystyle x \log^{2^m-1} x \ll \sum_{n \leq x} \tau(P(n))^m \ll x \log^{2^m-1} x$

for fixed irreducible ${P}$ and ${m \geq 1}$, which improves upon van der Corput's inequality (7), was shown by Delmer using the same methods. (A slight error in the original paper of Erdös was also corrected in this latter paper.) In a forthcoming revision to my paper on the Erdös-Straus conjecture, Christian Elsholtz and I have also applied this method to obtain bounds such as

$\displaystyle \sum_{a \leq A} \sum_{b \leq B} \tau(a^2 b + 1) \ll AB \log(A+B),$

which turn out to be enough to obtain the right asymptotics for the number of solutions to the equation ${\frac{4}{p}= \frac{1}{x}+\frac{1}{y}+\frac{1}{z}}$.

Below the fold I will provide some more details of the arguments of Landreau and of Erdös.

I have just uploaded to the arXiv my paper “On the number of solutions to ${\frac{4}{p}=\frac{1}{n_1}+\frac{1}{n_2}+\frac{1}{n_3}}$“, submitted to the Journal of the Australian Mathematical Society.

For any positive integer ${n}$, let ${f(n)}$ denote the number of solutions to the Diophantine equation

$\displaystyle \frac{4}{n} = \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3}$

where ${n_1,n_2,n_3}$ are positive integers (we allow repetitions, and do not require the ${n_i}$ to be increasing). The Erdös-Straus conjecture asserts that ${f(n)>0}$ for all ${n \geq 2}$. By dividing through by any positive integer ${m}$ we see that ${f(nm) \geq f(n)}$, so it suffices to verify this conjecture for primes ${n=p}$, i.e. to solve the Diophantine equation

$\displaystyle \frac{4}{p} = \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3} \ \ \ \ \ (1)$

for each prime ${p}$. As the case ${p=2}$ is easily solved, we may of course restrict attention to odd primes.

This conjecture remains open, although there is a reasonable amount of evidence towards its truth. For instance, it was shown by Vaughan that for any large ${x}$, the number of exceptions ${f(n)=0}$ to the Erdös-Straus conjecture with ${n < x}$ is at most ${x \exp( - c \log^{2/3} x)}$ for some absolute constant ${c>0}$. The Erdös-Straus conjecture is also verified in several congruence classes of primes; for instance, from the identity

$\displaystyle \frac{4}{p} = \frac{1}{\frac{p+1}{4}} + \frac{1}{\frac{p+1}{2}p} + \frac{1}{\frac{p+1}{2}p}$

we see that the conjecture holds when ${p = 3 \hbox{ mod } 4}$. Further identities of this type can be used to resolve the conjecture unless ${p}$ is a quadratic residue mod ${840}$, which leaves only six residue classes in that modulus to check (namely, ${1, 121, 169, 289, 361}$, and ${529}$). However, there is a significant obstruction to eliminating the quadratic residue classes, as we will discuss later.

By combining these reductions with extensive numerical calculations, the Erdös-Straus conjecture was verified for all ${n \leq 10^{14}}$ by Swett.

One approach to solving Diophantine equations such as (1) is to use methods of analytic number theory, such as the circle method, to obtain asymptotics (or at least lower bounds) for the number ${f(p)}$ of solutions (or some proxy for this number); if one obtains a lower bound which is nontrivial for every ${p}$, one has solved the problem. (One can alternatively view such methods as a variant of the probabilistic method; in this interpretation, one chooses the unknowns ${n_1,n_2,n_3}$ according to some suitable probability distribution and then tries to show that the probability of solving the equation is positive.) Such techniques can be effective for instance for certain instances of the Waring problem. However, as a general rule, these methods only work when there are a lot of solutions, and specifically when the number ${f(p)}$ of solutions grows at a polynomial rate with the parameter ${p}$.

The first main result of my paper is that the number of solutions ${f(p)}$ is essentially logarithmic in nature, thus providing a serious obstruction to solution by analytic methods, as there is almost no margin of error available. More precisely, I show that for almost all primes ${p}$ (i.e. in a subset of primes of relative density ${1}$), one has ${f(p) \leq p^{o(1)}}$, or more precisely that

$\displaystyle f(p) \leq \exp( O( \frac{\log p}{\log\log p} ) ).$

Readers familiar with analytic number theory will recognise the right-hand side from the divisor bound, which indeed plays a role in the proof.

Actually, there are more precise estimates which suggest (though do not quite prove) that the mean value of ${f(p)}$ is comparable to ${\log^3 p}$. To state these results properly, I need some more notation. It is not difficult to show that if ${p}$ is an odd prime and ${n_1,n_2,n_3}$ obey (1), then at least one, but not all, of the ${n_1,n_2,n_3}$ must be a multiple of ${p}$. Thus we can split ${f(p) = f_1(p) + f_2(p)}$, where ${f_i(p)}$ is the number of solutions to (1) where exactly ${i}$ of the ${n_1,n_2,n_3}$ are divisible by ${p}$. The sharpest estimates I can prove are then

Theorem 1 For all sufficiently large ${x}$, one has

$\displaystyle x \log^2 x \ll \sum_{p < x} f_1(p) \ll x \exp( O( \frac{\log x}{\log \log x} ) )$

and

$\displaystyle x \log^2 x \ll \sum_{p < x} f_2(p) \ll x \log^2 x \log\log x.$

Since the number of primes less than ${x}$ is comparable to ${x/\log x}$ by the prime number theorem, this shows that the mean value of ${f_2(p)}$ in this range is between ${\log^3 x}$ and ${\log^3 x \log\log x}$, and the mean value of ${f_1(p)}$ is between ${\log^3 x}$ and ${\exp( O( \frac{\log x}{\log \log x}) )}$, which gives the previous result as a corollary (thanks to a Markov inequality argument). A naive Poisson process heuristic then suggests that each prime ${p}$ has a “probability” ${1 - O( \exp( - c \log^3 p ) )}$ of having a solution to (1), which by the Borel-Cantelli lemma heuristically suggests that there are only finitely many ${p}$ for which (1) fails. Of course, this is far from a rigorous proof (though the result of Vaughan mentioned earlier, which is based on the large sieve, can be viewed as a partial formalisation of the argument).

The first step in obtaining these results is an elementary description of ${f_1(p), f_2(p)}$ in terms of some divisibility constraints. More precisely, we have

Lemma 2 Let ${p}$ be an odd prime. Then ${f_1(p)}$ is equal to three times the number of triples ${(a,b,c)}$ of positive integers, with ${a, b}$ coprime, ${c}$ dividing ${a+b}$, and ${4ab}$ dividing ${cp+1}$. Similarly, ${f_2(p)}$ is equal to three times the number of triples ${(a,b,c)}$ of positive integers, with ${a, b}$ coprime, ${c}$ dividing ${a+b}$, and ${4ab}$ dividing ${p+c}$.

One direction of this lemma is very easy: if ${c}$ divides ${a+b}$ and ${4ab}$ divides ${cp+1}$ then the identity

$\displaystyle \frac{4}{p} = \frac{1}{\frac{pc+1}{4}p} + \frac{1}{\frac{pc+1}{4a} \frac{a+b}{c}} + \frac{1}{\frac{pc+1}{4b} \frac{a+b}{c}}$

and its cyclic permutations give three contributions to ${f_1(p)}$, and similarly if ${4ab}$ divides ${p+c}$ instead then the identity

$\displaystyle \frac{4}{p} = \frac{1}{\frac{p+c}{4}} + \frac{1}{\frac{p+c}{4a} \frac{a+b}{c} p} + \frac{1}{\frac{p+c}{4b} \frac{a+b}{c} p}$

and its cyclic permutations give three contributions to ${f_2(p)}$. Conversely, some elementary number theory can be used to show that all solutions contributing to either ${f_1(p)}$ or ${f_2(p)}$ are of these forms. (Similar criteria have been used in prior literature, for instance in the previously mentioned paper of Vaughan.)
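The easy direction of the lemma can be verified symbolically with exact rational arithmetic (a hypothetical Python sketch; the function name and the sample triples are ours):

```python
from fractions import Fraction
from math import gcd

def f1_triple_to_solution(p: int, a: int, b: int, c: int):
    """Given the constraints of Lemma 2 (a, b coprime, c | a+b,
    4ab | cp+1), return the three denominators from the displayed
    identity contributing to f_1(p)."""
    assert gcd(a, b) == 1 and (a + b) % c == 0
    assert (c * p + 1) % (4 * a * b) == 0
    n1 = (p * c + 1) // 4 * p
    n2 = (p * c + 1) // (4 * a) * ((a + b) // c)
    n3 = (p * c + 1) // (4 * b) * ((a + b) // c)
    return n1, n2, n3

# Check the identity 4/p = 1/n1 + 1/n2 + 1/n3 on some valid triples.
for p, a, b, c in [(7, 1, 1, 1), (5, 1, 2, 3), (11, 1, 1, 1)]:
    n1, n2, n3 = f1_triple_to_solution(p, a, b, c)
    assert Fraction(1, n1) + Fraction(1, n2) + Fraction(1, n3) == Fraction(4, p)
```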

This lemma, incidentally, provides a way to quickly generate a large number of congruence classes of primes ${p}$ for which the Erdös-Straus conjecture can be verified. Indeed, from the above lemma we see that if ${a, b}$ are coprime and ${c}$ is any odd factor of ${a+b}$ (and hence coprime to ${4ab}$), then the Erdös-Straus conjecture is true whenever ${p=-c \hbox{ mod } 4ab}$ or ${p = -c^{-1} \hbox{ mod } 4ab}$. For instance,

• Taking ${(a,b,c)=(1,1,1)}$, we see that the conjecture holds whenever ${p=3 \hbox{ mod } 4}$ (leaving only those primes ${p=1 \hbox{ mod } 4}$);
• Taking ${(a,b,c)=(1,2,3)}$ we see that the conjecture holds whenever ${p=5 \hbox{ mod } 8}$ (leaving only those primes ${p=1 \hbox{ mod } 8}$);
• Taking ${(a,b,c)=(3,4,7)}$, we see that the conjecture holds whenever ${p=5 \hbox{ mod } 12}$ (leaving only those primes ${p=1 \hbox{ mod } 24}$);
• Taking ${(a,b,c)=(1,5,3)}$, we see that the conjecture holds whenever ${p=17, 13 \hbox{ mod } 20}$ (leaving only those primes ${p=1,49 \hbox{ mod } 120}$);
• Taking ${(a,b,c)=(1,14,15)}$, we see that the conjecture holds whenever ${p= 41 \hbox{ mod } 56}$; taking instead ${(a,b,c)=(2,21,23)}$, we see that the conjecture holds whenever ${p = 73, 145 \hbox{ mod } 168}$. (This leaves only the primes ${p}$ that are equal to one of the six quadratic residues ${1, 121, 169, 289, 361, 529 \hbox{ mod } 840}$, an observation first made by Mordell.)
• etc.
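The recipe behind these bullet points is mechanical, and can be sketched in a few lines of hypothetical Python (the function name is ours; `pow(c, -1, q)` computes the modular inverse):

```python
from math import gcd

def solvable_classes(a: int, b: int, c: int):
    """Residue classes mod 4ab in which the lemma verifies the
    Erdos-Straus conjecture: p = -c or -c^{-1} mod 4ab, where c is
    an odd divisor of a+b and gcd(a,b) = 1 (so gcd(c, 4ab) = 1)."""
    q = 4 * a * b
    assert gcd(a, b) == 1 and (a + b) % c == 0 and c % 2 == 1
    return sorted({(-c) % q, (-pow(c, -1, q)) % q})

assert solvable_classes(1, 1, 1) == [3]       # p = 3 mod 4
assert solvable_classes(1, 2, 3) == [5]       # p = 5 mod 8
assert solvable_classes(1, 5, 3) == [13, 17]  # p = 13, 17 mod 20
```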

If we let ${g_1(n)}$ (resp. ${g_2(n)}$) be the number of triples ${(a,b,c)}$ with ${a,b}$ coprime, ${c}$ dividing ${a+b}$, and ${n}$ congruent to ${-c^{-1}}$ (resp. ${-c}$) mod ${4ab}$, the above lemma tells us that ${f_i(p) = 3g_i(p)}$ for all odd primes ${p}$, so it suffices to show that ${g_1(p), g_2(p)}$ are non-zero for all ${p}$. One might hope that there are enough congruence relations provided by the previous observation to obtain a covering system of congruences which would resolve the conjecture. Unfortunately, as was also observed by Mordell, an application of the quadratic reciprocity law shows that these congruence relations cannot eliminate quadratic residues, only quadratic non-residues. More precisely, one can show that if ${a,b,c,p}$ are as above (restricting to the case ${p=1 \hbox{ mod } 8}$, which contains all the unsolved cases of the conjecture) and ${q}$ is the largest odd factor of ${4ab}$, then the Jacobi symbols ${\left(\frac{-c}{q}\right)}$ and ${\left(\frac{-c^{-1}}{q}\right)}$ are always equal to ${-1}$. One consequence of this is that ${g_1(n)=g_2(n)=0}$ whenever ${n}$ is an odd perfect square. This makes it quite hard to resolve the conjecture for odd primes ${p}$ which resemble a perfect square in that they lie in quadratic residue classes to small moduli (and, from Dirichlet’s theorem on primes in arithmetic progressions, one can find many primes of this type). The same argument also shows that for an odd prime ${p}$, there are no solutions to

$\displaystyle \frac{4}{p^2} = \frac{1}{n_1} + \frac{1}{n_2} + \frac{1}{n_3}$

in which one or two of the ${n_1,n_2,n_3}$ are divisible by ${p^2}$, with the other denominators being coprime to ${p}$. (Of course, one can still get representations of ${\frac{4}{p^2}}$ by starting with a representation of ${\frac{4}{p}}$ and dividing by ${p}$, but such representations are not of the above form.) This shows that any attempt to establish the Erdös-Straus conjecture by manually constructing ${n_1,n_2,n_3}$ as a function of ${p}$ must involve a technique which breaks down if ${p}$ is replaced by ${p^2}$ (for instance, this rules out any approach based on using polynomial combinations of ${p}$ and dividing into cases based on residue classes of ${p}$ in small moduli). Part of the problem here is that we do not have good bounds that prevent a prime ${p}$ from “spoofing” a perfect square to all small moduli (say, to all moduli less than a small power of ${p}$); this problem is closely related (via quadratic reciprocity) to Vinogradov’s conjecture on the least quadratic nonresidue, discussed in this previous blog post.

It remains to control the sums ${\sum_{p < x} f_i(p)}$ for ${i=1,2}$. The lower bounds ${\sum_{p < x} f_i(p) \gg x \log^2 x}$ are relatively routine to establish, arising from counting the contribution of those ${a,b,c}$ that are somewhat small (e.g. less than ${x^{0.2}}$), and use only standard asymptotics of arithmetic functions and the Bombieri-Vinogradov inequality (to handle the restriction of the summation to primes). To obtain (nearly) matching upper bounds, one needs to prevent ${a,b,c}$ from getting too large. In the case of ${g_2(p)}$, the fact that ${c}$ divides ${a+b}$ and ${4ab}$ divides ${c+p}$ soon leads one to the bounds ${ab \ll x}$, at which point one can count primes in residue classes modulo ${4ab}$ with reasonable efficiency via the Brun-Titchmarsh inequality. This inequality unfortunately introduces a factor of ${\frac{4ab}{\phi(4ab)}}$, which we were only able to handle by bounding it crudely by ${O(\log \log x)}$, leading to the double logarithmic loss in the ${f_2}$ sum.

For ${g_1(p)}$, these arguments do not give good bounds, because it is possible for ${ab}$ to be much larger than ${x}$, and one can no longer easily count solutions to ${p = -c^{-1} \hbox{ mod } 4ab}$. However, an elementary change of variables resolves this problem. It turns out that if one lets ${l,m}$ be the positive integers

$\displaystyle l := \frac{pc+1}{4ab}$

$\displaystyle m := \frac{4la^2+1}{c}$

then one can convert the triple ${(a,b,c)}$ to a triple ${(a,l,m)}$ obeying the constraints

$\displaystyle al \leq p$

$\displaystyle m | 4la^2+1$

$\displaystyle p = -m \hbox{ mod } 4al.$

Conversely, every such triple ${(a,l,m)}$ determines ${(a,b,c)}$ by the formulae

$\displaystyle c = \frac{4la^2+1}{m}; b = c \frac{p+m}{4al} - a = \frac{4la^2 p + m + p}{4alm}. \ \ \ \ \ (2)$

The point is that this transformation now places ${p}$ in a residue class modulo ${4al}$ rather than ${4ab}$, and with ${al = O(x)}$, this allows one to count the number of solutions reasonably efficiently. The main loss now comes from counting the number of divisors ${m}$ of ${4la^2+1}$; in particular, one becomes interested in estimating sums such as

$\displaystyle \sum_{a \sim A} \sum_{l \sim L} d( 4la^2+1 )$

where ${A, L}$ are less than ${x}$ and ${d(n)}$ is the number of divisors of ${n}$. In principle, because ${d(n)}$ behaves like ${\log n}$ on the average (as can be seen from the Dirichlet hyperbola method), we expect this sum to be of order ${AL \log x}$, but I was unable to show this, instead using the far cruder divisor bound

$\displaystyle d(n) \ll \exp( O(\frac{\log n}{\log\log n}) )$

to obtain an upper bound of ${AL \exp( O(\frac{\log x}{\log\log x}) )}$. Any improvement upon this bound would lead to a corresponding improvement in the upper bound for ${\sum_{p < x} f_1(p)}$. While improvement is possible in some ranges (particularly when ${L}$ is large) using various bounds on Kloosterman-type exponential sums, I was not able to adequately control these sums when ${L}$ was fairly small (e.g. polylogarithmic size in ${x}$), as one could no longer extract much of an averaging effect from the ${l}$ summation in that case. Part of the difficulty is that in that case one must somehow exploit the fact that ${4la^2+1}$ is irreducible as a polynomial in ${a}$ for any fixed ${l}$, otherwise there will be too many divisors.

I’ve uploaded to the arXiv the polymath research paper “Deterministic methods to find primes“, which is the outcome of the Polymath4 collaborative mathematics project, and has been submitted to Mathematics of Computation.

The objective of this paper was to find fast deterministic algorithms to solve the following problem:

Given a (large) integer x, find a prime p larger than x.

Thanks to the AKS algorithm, a number of size O(x) can be deterministically tested for primality in time $O( \log^{O(1)} x ) = O( x^{o(1)} )$.   By Bertrand’s postulate, there is always at least one prime between x and 2x; by testing each one of these integers in turn for primality, one can thus obtain a deterministic algorithm to find primes in time $O( x^{1+o(1)})$.

But one should be able to do much better.  For comparison, if probabilistic algorithms are allowed, then by randomly selecting integers between x and 2x to test for primality, it is easy to see from the prime number theorem that one will succeed in obtaining a prime with high probability in time $O( \log^{O(1)} x ) = O( x^{o(1)} )$.  However, after some effort we were not able to “derandomise” this algorithm to create any reasonable deterministic counterpart.  Nevertheless, we conjecture that a deterministic algorithm with run time $O( x^{o(1)})$ exists.  Such algorithms can be easily obtained if one assumes some standard conjectures regarding the primes (e.g. Cramer’s conjecture on prime gaps), but we do not know of any deterministic algorithms which can be unconditionally proved to run in time $O( x^{o(1)})$.

Currently, the best known deterministic algorithm is due to Lagarias and Odlyzko, and has a run time of $O( x^{1/2+o(1)})$.  Roughly speaking, it is based on the ability to compute the prime counting function $\pi(x)$ in time $O( x^{1/2+o(1)} )$; once one has this function, one can detect which intervals contain primes or not, and then starting from Bertrand’s postulate and performing a binary search one can then locate a prime.  The Lagarias-Odlyzko argument is based on approximating $\pi(x)$ by a certain integral of the Riemann zeta function, which one then approximates in turn by quadrature.

We conjecture that one should be able to compute $\pi(x)$ in faster time, and in particular in time $O(x^{1/2-c+o(1)})$ for some $c>0$.  Unfortunately, we were not able to achieve this; however, we do have a non-trivial method to compute the parity $\pi(x) \hbox{ mod } 2$ of $\pi(x)$ in such a time; a bit more generally (and oversimplifying a little bit), we can compute various projections $\sum_{p \leq x} t^p \hbox{ mod } 2, g(t)$ of the prime polynomial $\sum_{p \leq x} t^p$ modulo some small polynomials g.  This seems close to being able to achieve the goal of detecting whether primes exist in a given interval, but unfortunately some additional work seems to be needed in that area.  Still, the methods used to get the parity seem to be interesting (involving the Dirichlet hyperbola method, a piecewise approximation of the hyperbola, and the Strassen fast multiplication algorithm), and these techniques might be useful for some other problems.

Roughly speaking, the idea to compute the parity of $\pi(x) \hbox{ mod } 2$ is as follows.  The first observation is that, for square-free n > 1, the number $\tau(n)$ of divisors of n is equal to 2 when n is a prime, and a multiple of 4 otherwise.  So to compute the parity of $\pi(x)$, it suffices to compute $\sum_{n \leq x} \tau(n)$ modulo 4 (or more precisely, the restriction of this sum to squarefree n).
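This observation is easy to test on small cases (a hypothetical Python sketch with brute-force helpers of our own naming, not the paper's algorithm):

```python
def tau(n: int) -> int:
    """Number of divisors of n (brute force)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_squarefree(n: int) -> bool:
    return all(n % (d * d) for d in range(2, int(n ** 0.5) + 1))

def is_prime(n: int) -> bool:
    return n > 1 and all(n % d for d in range(2, n))

# For squarefree n > 1, tau(n) = 2^(number of prime factors):
# equal to 2 for primes, a multiple of 4 otherwise.
for n in range(2, 300):
    if is_squarefree(n):
        assert (tau(n) == 2) if is_prime(n) else (tau(n) % 4 == 0)

# Hence the tau-sum over squarefree 2 <= n <= x gives pi(x) mod 2.
x = 300
s = sum(tau(n) for n in range(2, x + 1) if is_squarefree(n))
pi = sum(1 for n in range(2, x + 1) if is_prime(n))
assert s % 4 == (2 * pi) % 4
```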

The restriction to squarefree numbers turns out to be relatively easy to handle, so we ignore it.  Since $\tau(n) = \sum_{a,b: ab=n} 1$, we can rewrite

$\sum_{n \leq x} \tau(n) = \sum_{a,b: ab \leq x} 1$.

So it suffices to find an efficient way to count the number of lattice points below the hyperbola $\{ (a,b): ab = x \}$.

The classic way to do this is via the Dirichlet hyperbola method.  This lets one rewrite the above expression as

$2 \sum_{a \leq \sqrt{x}} \lfloor \frac{x}{a} \rfloor - \lfloor \sqrt{x}\rfloor^2$

(assuming $x$ is not a perfect square; here $\lfloor x \rfloor$ denotes the greatest integer function).  One can then compute this quantity in time $O(x^{1/2+o(1)})$ without difficulty.
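Concretely, the hyperbola method replaces a sum of $x$ terms by one of $O(\sqrt{x})$ terms.  A minimal sketch (the function name is my own):

```python
from math import isqrt

def lattice_count(x):
    """Number of pairs (a, b) with a, b >= 1 and a*b <= x, i.e.
    sum_{n <= x} tau(n), via the Dirichlet hyperbola method:
    count pairs with a <= sqrt(x), double by symmetry, and subtract
    the square a, b <= sqrt(x) which was counted twice."""
    r = isqrt(x)
    return 2 * sum(x // a for a in range(1, r + 1)) - r * r

# agrees with the direct O(x)-term count:
assert lattice_count(1000) == sum(1000 // a for a in range(1, 1001))
```

Only $\lfloor \sqrt{x} \rfloor$ values of $\lfloor x/a \rfloor$ are summed, which is the source of the $O(x^{1/2+o(1)})$ run time.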

To improve this to $O(x^{1/2-c+o(1)})$, we used some ideas of Vinogradov, based on using Diophantine approximation to find moderately long arithmetic progressions on which the map $a \mapsto \lfloor \frac{x}{a} \rfloor$ is linear, and thus easily summable.
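One can see these linear stretches directly in small examples.  The following sketch (a hypothetical helper, not the actual algorithm, which locates the progressions via Diophantine approximation rather than by scanning) measures how long the map $a \mapsto \lfloor x/a \rfloor$ agrees with its linear extrapolation:

```python
def linear_run(x, a0):
    """Length of the longest run a0, a0+1, ... on which a -> x // a
    agrees with the linear extrapolation from its first two values.
    Assumes a0 <= x, so the run is finite."""
    f0 = x // a0
    d = x // (a0 + 1) - f0  # first difference, extrapolated slope
    k = 2
    while x // (a0 + k) == f0 + k * d:
        k += 1
    return k

# near a0 = sqrt(x) the map is linear on runs of length about x^{1/4}:
print(linear_run(10**12, 10**6))  # -> 1001
```

Indeed, writing $x = (a_0+k)(a_0-k) + k^2$ with $a_0 = \sqrt{x}$, the floor $\lfloor x/(a_0+k) \rfloor$ equals $a_0 - k$ exactly until $k^2$ exceeds $a_0 + k$; summing each such progression in $O(1)$ time is what yields the savings over the term-by-term evaluation.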

The same method extends to the polynomial setting, though now, instead of summing an arithmetic progression, one has to find an efficient way to sum quadratic sums such as $\sum_{n=1}^N t^{an^2+bn+c}$.  This turns out to be possible by expressing this sum as a matrix product and then using fast matrix multiplication algorithms.
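The matrix reformulation can be verified in a toy example.  In the sketch below (all names my own), polynomials over ${\bf F}_2$ are encoded as integer bitmasks and reduced modulo a small $g(t)$; writing $u_n = t^{an^2+bn+c}$ and $r_n = u_{n+1}/u_n = t^{2an+a+b}$, the pair $(S_n, u_n)$ evolves by the matrix $\begin{pmatrix} 1 & r_n \\ 0 & r_n\end{pmatrix}$, so the whole sum is a product of explicit $2 \times 2$ matrices.  The actual speedup comes from grouping these matrices and applying fast multiplication; here we only check the reformulation against the direct sum.

```python
def pmulmod(a, b, g, d):
    """Product of two GF(2) polynomials (bitmasks), reduced mod g of degree d."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> d) & 1:
            a ^= g
    return r

def tpow(e, g, d):
    """t^e in GF(2)[t]/(g), by square-and-multiply."""
    r, base = 1, 2  # the bitmask 2 encodes the polynomial t
    while e:
        if e & 1:
            r = pmulmod(r, base, g, d)
        base = pmulmod(base, base, g, d)
        e >>= 1
    return r

def quad_sum_direct(N, a, b, c, g, d):
    """sum_{n=1}^N t^{a n^2 + b n + c} mod (2, g(t)), term by term."""
    s = 0
    for n in range(1, N + 1):
        s ^= tpow(a * n * n + b * n + c, g, d)
    return s

def quad_sum_matrix(N, a, b, c, g, d):
    """Same sum as a product of 2x2 matrices over GF(2)[t]/(g):
    (S_{n+1}, u_{n+1}) = [[1, r_n], [0, r_n]] (S_n, u_n)."""
    def matmul(A, B):
        return [[pmulmod(A[i][0], B[0][j], g, d) ^
                 pmulmod(A[i][1], B[1][j], g, d)
                 for j in range(2)] for i in range(2)]
    P = [[1, 0], [0, 1]]
    for n in range(1, N):
        r = tpow(2 * a * n + a + b, g, d)
        P = matmul([[1, r], [0, r]], P)  # prepend the step n -> n+1
    u1 = tpow(a + b + c, g, d)  # S_1 = u_1
    return pmulmod(P[0][0], u1, g, d) ^ pmulmod(P[0][1], u1, g, d)

g, d = 0b1011, 3  # g(t) = t^3 + t + 1
assert quad_sum_direct(20, 1, 1, 0, g, d) == quad_sum_matrix(20, 1, 1, 0, g, d)
```

In the fast version one does not multiply the matrices one at a time, but batches them so that Strassen-type fast multiplication applies.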

There is still some scope to improve the results and methods further; Ernie Croot is pursuing this with two undergraduate students, David Hollis and David Lowry.

As is now widely reported, the Fields medals for 2010 have been awarded to Elon Lindenstrauss, Ngo Bao Chau, Stas Smirnov, and Cedric Villani. Concurrently, the Nevanlinna prize (for outstanding contributions to mathematical aspects of information science) was awarded to Dan Spielman, the Gauss prize (for outstanding mathematical contributions that have found significant applications outside of mathematics) to Yves Meyer, and the Chern medal (for lifelong achievement in mathematics) to Louis Nirenberg. All of the recipients are of course exceptionally qualified and deserving for these awards; congratulations to all of them. (I should mention that I myself was only very tangentially involved in the awards selection process, and like everyone else, had to wait until the ceremony to find out the winners. I imagine that the work of the prize committees must have been extremely difficult.)

Today, I thought I would mention one result of each of the Fields medalists; by chance, three of the four medalists work in areas reasonably close to my own. (Ngo is rather more distant from my areas of expertise, but I will give it a shot anyway.) This will of course only be a tiny sample of each of their work, and I do not claim to be necessarily describing their “best” achievement, as I only know a portion of the research of each of them, and my selection choice may be somewhat idiosyncratic. (I may discuss the work of Spielman, Meyer, and Nirenberg in a later post.)

In this, the final lecture notes of this course, we discuss one of the motivating applications of the theory developed thus far, namely to count solutions to linear equations in primes ${{\mathcal P} = \{2,3,5,7,\ldots\}}$ (or in dense subsets ${A}$ of primes ${{\mathcal P}}$). Unfortunately, the most famous linear equations in primes, namely the twin prime equation ${p_2 - p_1 = 2}$ and the even Goldbach equation ${p_1+p_2=N}$, remain out of reach of this technology (because the relevant affine linear forms involved are commensurate, and thus have infinite complexity with respect to the Gowers norms), but most other systems of equations, in particular that of arithmetic progressions ${p_i = n+ir}$ for ${i=0,\ldots,k-1}$ (or equivalently, ${p_i + p_{i+2} = 2p_{i+1}}$ for ${i=0,\ldots,k-2}$), as well as the odd Goldbach equation ${p_1+p_2+p_3=N}$, are tractable.

To illustrate the main ideas, we will focus on the following result of Green:

Theorem 1 (Roth’s theorem in the primes) Let ${A \subset {\mathcal P}}$ be a subset of primes whose upper density ${\limsup_{N \rightarrow \infty} |A \cap [N]|/|{\mathcal P} \cap [N]|}$ is positive. Then ${A}$ contains infinitely many arithmetic progressions of length three.

This should be compared with Roth’s theorem in the integers (Notes 2), which is the same statement but with the primes ${{\mathcal P}}$ replaced by the integers ${{\bf Z}}$ (or natural numbers ${{\bf N}}$). Indeed, Roth’s theorem for the primes is proven by transferring Roth’s theorem for the integers to the prime setting; the latter theorem is used as a “black box”. The key difficulty here in performing this transference is that the primes have zero density inside the integers; indeed, from the prime number theorem we have ${|{\mathcal P} \cap [N]| = (1+o(1)) \frac{N}{\log N} = o(N)}$.

There are a number of generalisations of this transference technique. In a paper of Green and myself, we extended the above theorem to progressions of longer length (thus transferring Szemerédi’s theorem to the primes). In a series of papers (culminating in a paper to appear shortly) of Green, myself, and also Ziegler, related methods are also used to obtain an asymptotic for the number of solutions in the primes to any system of linear equations of bounded complexity. This latter result uses the full power of higher order Fourier analysis, in particular relying heavily on the inverse conjecture for the Gowers norms; in contrast, Roth’s theorem and Szemerédi’s theorem in the primes are “softer” results that do not need this conjecture.

To transfer results from the integers to the primes, there are three basic steps:

• A general transference principle, that transfers certain types of additive combinatorial results from dense subsets of the integers to dense subsets of a suitably “pseudorandom set” of integers (or more precisely, to the integers weighted by a suitably “pseudorandom measure”);
• An application of sieve theory to show that the primes (or more precisely, an affine modification of the primes) lie inside a suitably pseudorandom set of integers (or more precisely, have significant mass with respect to a suitably pseudorandom measure);
• If one is seeking asymptotics for patterns in the primes, and not simply lower bounds, one also needs to control correlations between the primes (or proxies for the primes, such as the Möbius function) with various objects that arise from higher order Fourier analysis, such as nilsequences.

The first step can be accomplished in a number of ways. For progressions of length three (and more generally, for controlling linear patterns of complexity at most one), transference can be accomplished by Fourier-analytic methods. For more complicated patterns, one can use techniques inspired by ergodic theory; more recently, simplified and more efficient methods based on duality (the Hahn-Banach theorem) have also been used. No number theory is used in this step. (In the case of transference to genuinely random sets, rather than pseudorandom sets, similar ideas appeared earlier in the graph theory setting; see this paper of Kohayakawa, Luczak, and Rodl.)

The second step is accomplished by fairly standard sieve theory methods (e.g. the Selberg sieve, or the slight variants of this sieve used by Goldston and Yildirim). Remarkably, very little of the formidable apparatus of modern analytic number theory is needed for this step; for instance, the only fact about the Riemann zeta function that is truly needed is that it has a simple pole at ${s=1}$, and no knowledge of L-functions is needed.

The third step does draw more significantly on analytic number theory techniques and results (most notably, the method of Vinogradov to compute oscillatory sums over the primes, and also the Siegel-Walfisz theorem that gives a good error term on the prime number theorem in arithmetic progressions). As these techniques are somewhat orthogonal to the main topic of this course, we shall only touch briefly on this aspect of the transference strategy.