The twin prime conjecture, still unsolved, asserts that there are infinitely many primes {p} such that {p+2} is also prime. A more precise form of this conjecture is (a special case of) the Hardy-Littlewood prime tuples conjecture, which asserts that

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (2\Pi_2+o(1)) x \ \ \ \ \ (1)

 

as {x \rightarrow \infty}, where {\Lambda} is the von Mangoldt function and {\Pi_2 = 0.6601\dots} is the twin prime constant

\displaystyle \prod_{p>2} (1 - \frac{1}{(p-1)^2}).

Because {\Lambda} is almost entirely supported on the primes, it is not difficult to see that (1) implies the twin prime conjecture.
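
For readers who like to see some numbers alongside the conjecture, here is a small Python sketch (purely illustrative, and of course proving nothing): it truncates the Euler product defining {\Pi_2} and compares {\sum_{n \leq x} \Lambda(n) \Lambda(n+2)} with the predicted {2\Pi_2 x} at one arbitrarily chosen value of {x}.

```python
# Illustrative numerical check of (1): truncate the product defining Pi_2 and compare
# sum_{n <= x} Lambda(n) Lambda(n+2) with 2 Pi_2 x at a modest x.  (This proves nothing.)
import math

def primes_up_to(N):
    """Sieve of Eratosthenes."""
    flags = bytearray([1]) * (N + 1)
    flags[0:2] = b"\x00\x00"
    for i in range(2, int(N**0.5) + 1):
        if flags[i]:
            flags[i*i::i] = bytearray(len(range(i*i, N + 1, i)))
    return [p for p in range(2, N + 1) if flags[p]]

x = 10**6
primes = primes_up_to(x + 2)

# von Mangoldt function Lambda(n) for n <= x+2.
lam = [0.0] * (x + 3)
for p in primes:
    q, logp = p, math.log(p)
    while q <= x + 2:
        lam[q] = logp
        q *= p

# Twin prime constant Pi_2 = prod_{p > 2} (1 - 1/(p-1)^2), truncated at 10^6.
Pi2 = 1.0
for p in primes:
    if p > 2:
        Pi2 *= 1.0 - 1.0 / (p - 1) ** 2

S = sum(lam[n] * lam[n + 2] for n in range(1, x + 1))
print("Pi_2 ~", Pi2)                             # ~ 0.66016
print("sum / (2 Pi_2 x) ~", S / (2 * Pi2 * x))   # should be reasonably close to 1
```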

One can give a heuristic justification of the asymptotic (1) (and hence the twin prime conjecture) via sieve theoretic methods. Recall that the von Mangoldt function can be decomposed as a Dirichlet convolution

\displaystyle \Lambda(n) = \sum_{d|n} \mu(d) \log \frac{n}{d}

where {\mu} is the Möbius function. Because of this, we can rewrite the left-hand side of (1) as

\displaystyle \sum_{d \leq x} \mu(d) \sum_{n \leq x: d|n} \log\frac{n}{d} \Lambda(n+2). \ \ \ \ \ (2)

 

To compute this double sum, it is thus natural to consider sums such as

\displaystyle \sum_{n \leq x: d|n} \log \frac{n}{d} \Lambda(n+2)

or (to simplify things by removing the logarithm)

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2).

The prime number theorem in arithmetic progressions suggests that one has an asymptotic of the form

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2) \approx \frac{g(d)}{d} x \ \ \ \ \ (3)

 

where {g} is the multiplicative function with {g(d)=0} for {d} even and

\displaystyle g(d) := \frac{d}{\phi(d)} = \prod_{p|d} (1-\frac{1}{p})^{-1}

for {d} odd. Summing by parts, one then expects

\displaystyle \sum_{n \leq x: d|n} \Lambda(n+2)\log \frac{n}{d}  \approx \frac{g(d)}{d} x \log \frac{x}{d}

and so we heuristically have

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) \approx x \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d}.

The Dirichlet series

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s}

has an Euler product factorisation

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s} = \prod_p (1 - \frac{g(p)}{p^s})

for {\hbox{Re} s > 1}; comparing this with the Euler product factorisation

\displaystyle \zeta(s) = \prod_p (1 - \frac{1}{p^s})^{-1}

for the Riemann zeta function, and recalling that {\zeta} has a simple pole of residue {1} at {s=1}, we see that

\displaystyle \sum_n \frac{\mu(n) g(n)}{n^s} = \frac{1}{\zeta(s)} \prod_p \frac{1-g(p)/p^s}{1-1/p^s}

has a simple zero at {s=1} with first derivative

\displaystyle \prod_p \frac{1 - g(p)/p}{1-1/p} = 2 \Pi_2.

From this and standard multiplicative number theory manipulations, one can calculate the asymptotic

\displaystyle \sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d} = 2 \Pi_2 + o(1)

which concludes the heuristic justification of (1).
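
The last asymptotic can also be tested numerically. The following sketch (again purely illustrative; the cutoff {x} is arbitrary and the convergence is slow) evaluates the truncated sum {\sum_{d \leq x} \frac{\mu(d) g(d)}{d} \log \frac{x}{d}} directly and compares it with {2\Pi_2 \approx 1.320}.

```python
import math

def mu_and_phi_tables(N):
    """Tables of the Moebius function and Euler phi up to N, via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))
    for i in range(2, int(N**0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, N + 1, i):
                if spf[j] == j:
                    spf[j] = i
    mu = [0] * (N + 1)
    phi = [0] * (N + 1)
    mu[1], phi[1] = 1, 1
    for n in range(2, N + 1):
        p = spf[n]
        m = n // p
        phi[n] = phi[m] * (p - 1 if m % p else p)
        mu[n] = 0 if m % p == 0 else -mu[m]
    return mu, phi

x = 10**6
mu, phi = mu_and_phi_tables(x)
total = 0.0
for d in range(1, x + 1, 2):          # g(d) = 0 for even d
    if mu[d] != 0:
        g = d / phi[d]                # g(d) = d/phi(d) for odd d
        total += mu[d] * g / d * math.log(x / d)
print("truncated sum ~", total)       # compare with 2*Pi_2 ~ 1.3203
```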

What prevents us from making the above heuristic argument rigorous, and thus proving (1) and the twin prime conjecture? Note that the variable {d} in (2) can be as large as {x}. On the other hand, the prime number theorem in arithmetic progressions (3) is not expected to hold for {d} anywhere that large (for instance, the left-hand side of (3) vanishes as soon as {d} exceeds {x}). The best unconditional result known of the type (3) is the Siegel-Walfisz theorem, which allows {d} to be as large as {\log^{O(1)} x}. Even the powerful generalised Riemann hypothesis (GRH) only lets one prove an estimate of the form (3) for {d} up to about {x^{1/2-o(1)}}.

However, because of the averaging effect of the summation in {d} in (2), we don’t need the asymptotic (3) to be true for all {d} in a particular range; having it true for almost all {d} in that range would suffice. Here the situation is much better; the celebrated Bombieri-Vinogradov theorem (sometimes known as “GRH on the average”) implies, roughly speaking, that the approximation (3) is valid for almost all {d \leq x^{1/2-\varepsilon}} for any fixed {\varepsilon>0}. While this is not enough to control (2) or (1), the Bombieri-Vinogradov theorem can at least be used to control variants of (1) such as

\displaystyle \sum_{n \leq x} (\sum_{d|n} \lambda_d) \Lambda(n+2)

for various sieve weights {\lambda_d} whose associated divisor function {\sum_{d|n} \lambda_d} is supposed to approximate the von Mangoldt function {\Lambda}, although that theorem only lets one do this when the weights {\lambda_d} are supported on the range {d \leq x^{1/2-\varepsilon}}. This is still enough to obtain some partial results towards (1); for instance, by selecting weights according to the Selberg sieve, one can use the Bombieri-Vinogradov theorem to establish the upper bound

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) \leq (4+o(1)) 2 \Pi_2 x, \ \ \ \ \ (4)

 

which is off from (1) by a factor of about {4}. See for instance this blog post for details.

It has been difficult to improve upon the Bombieri-Vinogradov theorem in its full generality, although there are various improvements to certain restricted versions of the Bombieri-Vinogradov theorem, for instance in the famous work of Zhang on bounded gaps between primes. Nevertheless, it is believed that the Elliott-Halberstam conjecture (EH) holds, which roughly speaking would mean that (3) now holds for almost all {d \leq x^{1-\varepsilon}} for any fixed {\varepsilon>0}. (Unfortunately, the {\varepsilon} factor cannot be removed, as investigated in a series of papers by Friedlander, Granville, and also Hildebrand and Maier.) This comes tantalisingly close to having enough distribution to control all of (1). Unfortunately, it still falls short. Using this conjecture in place of the Bombieri-Vinogradov theorem leads to various improvements to sieve theoretic bounds; for instance, the factor of {4+o(1)} in (4) can now be improved to {2+o(1)}.

In two papers from the 1970s (which can be found online here and here respectively, the latter starting on page 255 of the pdf), Bombieri developed what is now known as the Bombieri asymptotic sieve to clarify the situation more precisely. First, he showed that on the Elliott-Halberstam conjecture, while one still could not establish the asymptotic (1), one could prove the generalised asymptotic

\displaystyle \sum_{n \leq x} \Lambda_k(n) \Lambda(n+2) = (2\Pi_2+o(1)) k x \log^{k-1} x \ \ \ \ \ (5)

 

for all natural numbers {k \geq 2}, where the generalised von Mangoldt functions {\Lambda_k} are defined by the formula

\displaystyle \Lambda_k(n) := \sum_{d|n} \mu(d) \log^k \frac{n}{d}.

These functions behave like the von Mangoldt function, but are concentrated on {k}-almost primes (numbers with at most {k} prime factors) rather than primes. The right-hand side of (5) corresponds to what one would expect if one ran the same heuristics used to justify (1). Sadly, the {k=1} case of (5), which is just (1), is just barely excluded from Bombieri’s analysis.
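
To get a concrete feel for the generalised von Mangoldt functions, the following short Python sketch (my own illustration, with an arbitrary small cutoff) assembles {\Lambda_2} as the Dirichlet convolution {\mu * L^2}, checks numerically that it vanishes on numbers with three or more distinct prime factors, and compares its partial sums with {2 x \log x}, which is what the same heuristics predict when the weight {\Lambda(n+2)} in (5) is replaced by {1}.

```python
import math

N = 30000

# Moebius table and distinct-prime-factor count via a smallest-prime-factor sieve.
spf = list(range(N + 1))
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i
mu = [0] * (N + 1); mu[1] = 1
omega = [0] * (N + 1)
for n in range(2, N + 1):
    p, m = spf[n], n // spf[n]
    mu[n] = 0 if m % p == 0 else -mu[m]
    omega[n] = omega[m] + (0 if m % p == 0 else 1)

# Lambda_2(n) = sum_{d|n} mu(d) log^2(n/d), assembled as the Dirichlet convolution mu * L^2.
Lambda2 = [0.0] * (N + 1)
for d in range(1, N + 1):
    if mu[d]:
        for m in range(1, N // d + 1):
            Lambda2[d * m] += mu[d] * math.log(m) ** 2

# Lambda_2 is (numerically) zero on numbers with three or more distinct prime factors:
print(max(abs(Lambda2[n]) for n in range(2, N + 1) if omega[n] >= 3))   # ~ 0 up to rounding

# and its partial sums grow like 2 x log x:
S = sum(Lambda2[2:])
print(S / (2 * N * math.log(N)))    # approaches 1 slowly (relative error of size about 1/log x)
```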

More generally, on the assumption of EH, the Bombieri asymptotic sieve provides the asymptotic

\displaystyle \sum_{n \leq x} \Lambda_{(k_1,\dots,k_r)}(n) \Lambda(n+2) \ \ \ \ \ (6)

 

\displaystyle = (2\Pi_2+o(1)) \frac{\prod_{i=1}^r k_i!}{(k_1+\dots+k_r-1)!} x \log^{k_1+\dots+k_r-1} x

for any fixed {r \geq 1} and any tuple {(k_1,\dots,k_r)} of natural numbers other than {(1,\dots,1)}, where

\displaystyle \Lambda_{(k_1,\dots,k_r)} := \Lambda_{k_1} * \dots * \Lambda_{k_r}

is a further generalisation of the von Mangoldt function (now concentrated on {k_1+\dots+k_r}-almost primes). By combining these asymptotics with some elementary identities involving the {\Lambda_{(k_1,\dots,k_r)}}, together with the Weierstrass approximation theorem, Bombieri was able to control a wide family of sums including (1), except for one undetermined scalar {\delta_x \in [0,2]}. Namely, he was able to show (again on EH) that for any fixed {r \geq 1} and any continuous function {g_r} on the simplex {\Delta_r := \{ (t_1,\dots,t_r) \in {\bf R}^r: t_1+\dots+t_r = 1; 0 \leq t_1 \leq \dots \leq t_r\}} that had suitable vanishing at the boundary, the sum

\displaystyle \sum_{n \leq x: n=p_1 \dots p_r} g_r( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} ) \Lambda(n+2)

was equal to

\displaystyle (\delta_x+o(1)) \int_{\Delta_r} g_r \frac{x}{\log x} \ \ \ \ \ (7)

 

when {r} was odd and

\displaystyle (2-\delta_x+o(1)) \int_{\Delta_r} g_r \frac{x}{\log x} \ \ \ \ \ (8)

 

when {r} was even, where the integral on {\Delta_r} is with respect to the measure {\frac{dt_1 \dots dt_{r-1}}{t_1 \dots t_r}} (this is Dirac measure in the case {r=1}). In particular, we have

\displaystyle \sum_{n \leq x} \Lambda(n) \Lambda(n+2) = (\delta_x + o(1)) 2 \Pi_2 x

and the twin prime conjecture would be proved if one could show that {\delta_x} is bounded away from zero, while (1) is equivalent to the assertion that {\delta_x} is equal to {1+o(1)}. Unfortunately, no additional bound beyond the inequalities {0 \leq \delta_x \leq 2} provided by the Bombieri asymptotic sieve is known, even if one assumes all major conjectures in number theory other than the prime tuples conjecture and its variants (e.g. GRH, GEH, GUE, abc, Chowla, …).

To put it another way, the Bombieri asymptotic sieve is able (on EH) to compute asymptotics for sums

\displaystyle \sum_{n \leq x} f(n) \Lambda(n+2) \ \ \ \ \ (9)

 

without needing to know the unknown scalar {\delta_x}, when {f} is a function supported on almost primes of the form

\displaystyle f(p_1 \dots p_r) = g_r( \frac{\log p_1}{\log n}, \dots, \frac{\log p_r}{\log n} )

for {1 \leq r \leq r_*} and some fixed {r_*}, with {f} vanishing elsewhere and for some continuous (symmetric) functions {g_r: \Delta_r \rightarrow {\bf C}} obeying some vanishing at the boundary, so long as the parity condition

\displaystyle \sum_{r \hbox{ odd}} \int_{\Delta_r} g_r = \sum_{r \hbox{ even}} \int_{\Delta_r} g_r

is obeyed (informally: {f} gives the same weight to products of an odd number of primes as to products of an even number of primes, or to put it another way, {f} is asymptotically orthogonal to the Möbius function {\mu}). But when {f} violates the parity condition, the asymptotic involves the unknown {\delta_x}. This scalar {\delta_x} thus embodies the “parity problem” for the twin prime conjecture (discussed in these previous blog posts).

Because the obstruction to the parity problem is only one-dimensional (on EH), one can replace any parity-violating weight (such as {\Lambda}) with any other parity-violating weight and obtain a logically equivalent estimate. For instance, to prove the twin prime conjecture on EH, it would suffice to show that

\displaystyle \sum_{p_1 p_2 p_3 \leq x: p_1,p_2,p_3 \geq x^\alpha} \Lambda(p_1 p_2 p_3 + 2) \gg \frac{x}{\log x}

for some fixed {\alpha>0}, or equivalently that there are {\gg \frac{x}{\log^2 x}} solutions to the equation {p - p_1 p_2 p_3 = 2} in primes with {p \leq x} and {p_1,p_2,p_3 \geq x^\alpha}. (In some cases, this sort of reduction can also be made using other sieves than the Bombieri asymptotic sieve, as was observed by Ng.) As another example, the Bombieri asymptotic sieve can be used to show that the asymptotic (1) is equivalent to the asymptotic

\displaystyle \sum_{n \leq x} \mu(n) 1_R(n) \Lambda(n+2) = o( \frac{x}{\log x})

where {R} is the set of numbers that are rough in the sense that they have no prime factors less than {x^\alpha} for some fixed {\alpha>0} (the function {\mu 1_R} clearly correlates with {\mu} and so must violate the parity condition). One can replace {1_R} with similar sieve weights (e.g. a Selberg sieve) that concentrate on almost primes if desired.
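
Out of curiosity one can at least evaluate such parity-violating sums numerically for small {x}; the sketch below does this for the weight {\mu 1_R} with the arbitrary choices {x = 10^6} and {\alpha = 0.2}. Needless to say, nothing about the required {o(x/\log x)} cancellation can be inferred from such a small range; this is only meant to make the objects concrete.

```python
import math

x = 10**6
alpha = 0.2                       # an arbitrary choice of the fixed parameter alpha
z = x ** alpha                    # roughness threshold x^alpha (about 15.8 here)
N = x + 2

# Smallest-prime-factor sieve, from which mu(n), Lambda(n+2) and 1_R(n) are all cheap to read off.
spf = list(range(N + 1))
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i

mu = [0] * (N + 1)
mu[1] = 1
for n in range(2, N + 1):
    p, m = spf[n], n // spf[n]
    mu[n] = 0 if m % p == 0 else -mu[m]

def mangoldt(n):
    """Lambda(n): log p if n is a power of a prime p, and 0 otherwise."""
    if n < 2:
        return 0.0
    p = spf[n]
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

S = 0.0
for n in range(2, x + 1):
    if mu[n] != 0 and spf[n] > z:   # mu(n) 1_R(n) is nonzero only for squarefree n with no prime factor below x^alpha
        S += mu[n] * mangoldt(n + 2)
print("sum =", S, "   x/log x =", x / math.log(x))
```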

As it turns out, if one is willing to strengthen the assumption of the Elliott-Halberstam (EH) conjecture to the assumption of the generalised Elliott-Halberstam (GEH) conjecture (as formulated for instance in Claim 2.6 of the Polymath8b paper), one can also swap the {\Lambda(n+2)} factor in the above asymptotics with other parity-violating weights and obtain a logically equivalent estimate, as the Bombieri asymptotic sieve also applies to weights such as {\mu 1_R} under the assumption of GEH. For instance, on GEH one can use two such applications of the Bombieri asymptotic sieve to show that the twin prime conjecture would follow if one could show that there are {\gg \frac{x}{\log^2 x}} solutions to the equation

\displaystyle p_1 p_2 - p_3 p_4 = 2

in primes with {p_1,p_2,p_3,p_4 \geq x^\alpha} and {p_1 p_2 \leq x}, for some {\alpha > 0}. Similarly, on GEH the asymptotic (1) is equivalent to the asymptotic

\displaystyle \sum_{n \leq x} \mu(n) 1_R(n) \mu(n+2) 1_R(n+2) = o( \frac{x}{\log^2 x})

for some fixed {\alpha>0}, and similarly with {1_R} replaced by other sieves. This form of the quantitative twin primes conjecture is appealingly similar to the (special case)

\displaystyle \sum_{n \leq x} \mu(n) \mu(n+2) = o(x)

of the Chowla conjecture, for which there has been some recent progress (discussed for instance in these recent posts). Informally, the Bombieri asymptotic sieve lets us (on GEH) view the twin prime conjecture as a sort of Chowla conjecture restricted to almost primes. Unfortunately, the recent progress on the Chowla conjecture relies heavily on the multiplicativity of {\mu} at small primes, which is completely destroyed by inserting a weight such as {1_R}, so this does not yet yield a viable path towards the twin prime conjecture even assuming GEH. Still, the similarity is striking, and one can hope that further ways to attack the Chowla conjecture may emerge that could impact the twin prime conjecture. (Alternatively, if one assumes a sufficiently optimistic version of the GEH, one could perhaps relax the notion of “almost prime” to the extent that one could start usefully using multiplicativity at smallish primes, though this seems rather wishful at present, particularly since the most optimistic versions of GEH are known to be false.)

The Bombieri asymptotic sieve is already well explained in the original two papers of Bombieri; there is also a slightly different treatment of the sieve by Friedlander and Iwaniec, as well as a simplified version in the book of Friedlander and Iwaniec (in which the distribution hypothesis is strengthened in order to shorten the arguments). I’ve decided though to write up my own notes on the sieve below the fold; this is primarily for my own benefit, but may be useful to some readers also. I largely follow the treatment of Bombieri, with the one idiosyncratic twist of replacing the usual “elementary” Selberg sieve with the “analytic” Selberg sieve used in particular in many of the breakthrough works in small gaps between primes; I prefer working with the latter due to its Fourier-analytic flavour.

— 1. Controlling generalised von Mangoldt sums —

To prove (5), we shall first generalise it, by replacing the sequence {\Lambda(n+2)} by a more general sequence {a_n} obeying the following axioms:

  • (i) (Non-negativity) One has {a_n \geq 0} for all {n}.
  • (ii) (Crude size bound) One has {a_n \ll \tau(n)^{O(1)} \log^{O(1)} n} for all {n}, where {\tau} is the divisor function.
  • (iii) (Size) We have {\sum_{n \leq x} a_n = (C+o(1)) x} for some constant {C>0}.
  • (iv) (Elliott-Halberstam type conjecture) For any {\varepsilon,A>0}, one has

    \displaystyle \sum_{d \leq x^{1-\varepsilon}} |\sum_{n \leq x: d|n} a_n - C x \frac{g(d)}{d}| \ll_{\varepsilon,A} x \log^{-A} x

    where {g} is a multiplicative function with {g(p^j) = 1 + O(1/p)} for all primes {p} and {j \geq 1}.

These axioms are a little bit stronger than what is actually needed to make the Bombieri asymptotic sieve work, but we will not attempt to work with the weakest possible axioms here.

We introduce the function

\displaystyle G(s) := \prod_p \frac{1-g(p)/p^s}{1-1/p^s}

which is analytic for {\hbox{Re}(s) > 0}; in particular it can be evaluated at {s=1} to yield

\displaystyle G(1) = \prod_p \frac{1-g(p)/p}{1-1/p}.

There are two model examples of data {a_n, C, g} to keep in mind. The first, discussed in the introduction, is when {a_n =\Lambda(n+2)}, in which case {C = 1} and {g} is as in the introduction (so that {G(1) = 2\Pi_2}); one of course needs EH to justify axiom (iv) in this case. The other is when {a_n=1}, in which case {C=1} and {g(n)=1} for all {n} (so that {G(1)=1}). We will later take advantage of the second example to avoid doing some (routine, but messy) main term computations.
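
Here is a minimal numerical sketch of the two model examples (my own illustration): it evaluates the truncated product defining {G(1)} for the twin prime choice of {g}, recovering {2\Pi_2 \approx 1.320}, and notes that the choice {g \equiv 1} gives {G(1) = 1} exactly.

```python
import math

def primes_up_to(N):
    """Sieve of Eratosthenes."""
    flags = bytearray([1]) * (N + 1)
    flags[0:2] = b"\x00\x00"
    for i in range(2, int(N**0.5) + 1):
        if flags[i]:
            flags[i*i::i] = bytearray(len(range(i*i, N + 1, i)))
    return [p for p in range(2, N + 1) if flags[p]]

def G1(g, P):
    """Truncated product G(1) = prod_p (1 - g(p)/p) / (1 - 1/p) over the primes in P."""
    prod = 1.0
    for p in P:
        prod *= (1.0 - g(p) / p) / (1.0 - 1.0 / p)
    return prod

P = primes_up_to(10**6)

# Model example 1: a_n = Lambda(n+2), g(2) = 0 and g(p) = p/(p-1) for odd p; then G(1) = 2*Pi_2.
g_twin = lambda p: 0.0 if p == 2 else p / (p - 1)
print("G(1) for the twin prime model ~", G1(g_twin, P))   # ~ 1.3203 = 2*Pi_2

# Model example 2: a_n = 1, g identically 1; then G(1) = 1 exactly.
print("G(1) for the trivial model    =", G1(lambda p: 1.0, P))
```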

The main result of this section is then

Theorem 1 Let {a_n, g, C, G} be as above. Let {\vec k = (k_1,\dots,k_r)} be a tuple of natural numbers (independent of {x}) that is not equal to {(1,\dots,1)}. Then one has the asymptotic

\displaystyle \sum_{n \leq x} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C x \log^{|\vec k|-1} x

as {x \rightarrow \infty}, where {|\vec k| := k_1 + \dots + k_r}.

Note that this recovers (5) (on EH) as a special case.

We now begin the proof of this theorem. Henceforth we allow implied constants in the {O()} or {\ll} notation to depend on {r, \vec k} and {g,G}.

It will be convenient to replace the range {n \leq x} by a shorter range by the following standard localisation trick. Let {B} be a large quantity depending on {r, \vec k} to be chosen later, and let {I} denote the interval {\{ n: x - x \log^{-B} x \leq n \leq x \}}. We will show the estimate

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C |I| \log^{|\vec k|-1} x \ \ \ \ \ (10)

 

from which the original claim follows by a routine summation argument. Observe from axiom (iv) and the triangle inequality that

\displaystyle \sum_{d \leq x^{1-\varepsilon}: \mu^2(d)=1} |\sum_{n \in I: d|n} a_n - C |I| \frac{g(d)}{d}| \ll_{\varepsilon,A} x \log^{-A} x

for any {\varepsilon,A > 0}.

Write {L} for the logarithm function {L(n) := \log n}, thus {\Lambda_k = \mu * L^k} for any {k}. Without loss of generality we may assume that {k_r > 1}; we then factor {\Lambda_{\vec k} = \mu_{\vec k} * L^{k_r}}, where

\displaystyle \mu_{\vec k} := \Lambda_{k_1} * \dots * \Lambda_{k_{r-1}} * \mu.

This function is just {\mu} when {r=1}. When {r>1} the function is more complicated, but we at least have the following crude bound:

Lemma 2 One has the pointwise bound {|\mu_{\vec k}| \leq L^{|\vec k|-k_r}}.

Proof: We induct on {r}. The case {r=1} is obvious, so suppose {r>1} and the claim has already been proven for {r-1}. Since {\mu_{\vec k} = \Lambda_{k_1} * \mu_{(k_2,\dots,k_r)}}, we see from the induction hypothesis and the triangle inequality that

\displaystyle |\mu_{\vec k}| \leq \Lambda_{k_1} * L^{|\vec k| - k_r - k_1} \leq L^{|\vec k| - k_r - k_1} (\Lambda_{k_1} * 1).

Since {\Lambda_{k_1}*1 = L^{k_1}} by Möbius inversion, the claim follows. \Box
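
Lemma 2 is easy to test numerically for a specific tuple. The sketch below takes {\vec k = (1,2)} (so that {\mu_{\vec k} = \Lambda * \mu} and the claimed bound is {|\mu_{\vec k}(n)| \leq \log n}) and checks the bound up to an arbitrary small cutoff; equality is attained at primes and at products of two distinct primes.

```python
import math

N = 5000

# Moebius and von Mangoldt tables via a smallest-prime-factor sieve.
spf = list(range(N + 1))
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i
mu = [0] * (N + 1); mu[1] = 1
lam = [0.0] * (N + 1)
for n in range(2, N + 1):
    p, m = spf[n], n // spf[n]
    mu[n] = 0 if m % p == 0 else -mu[m]
    q = n
    while q % p == 0:
        q //= p
    if q == 1:
        lam[n] = math.log(p)

# mu_{(1,2)} = Lambda * mu as a Dirichlet convolution.
conv = [0.0] * (N + 1)
for a in range(1, N + 1):
    if lam[a] == 0.0:
        continue
    for b in range(1, N // a + 1):
        if mu[b]:
            conv[a * b] += lam[a] * mu[b]

# Lemma 2 with vec k = (1,2): |(Lambda * mu)(n)| <= log n.
worst = max(abs(conv[n]) / math.log(n) for n in range(2, N + 1))
print("max |Lambda*mu(n)| / log n over 2 <= n <= N:", worst)   # should be at most 1
```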

We can write

\displaystyle \Lambda_{\vec k}(n) = \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{n}{d}.

In the region {n \in I}, we have {\log^{k_r} \frac{n}{d} = \log^{k_r} \frac{x}{d} + O( \log^{-B+O(1)} x )}. Thus

\displaystyle \Lambda_{\vec k}(n) = \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} + O( \tau(n) \log^{-B+O(1)} x )

for {n \in I}. The contribution of the error term O( \tau(n) \log^{-B+O(1)} x ) to (10) is easily seen to be negligible if {B} is large enough, so we may freely replace {\Lambda_{\vec k}(n)} with {\sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d}} with little difficulty.

If we insert this replacement directly into the left-hand side of (10) and rearrange, we get

\displaystyle \sum_{d \leq x} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \sum_{n \in I: d|n} a_n.

We can’t quite control this using axiom (iv) because the range of {d} is a bit too big, as explained in the introduction. So let us introduce a truncated function

\displaystyle \Lambda_{\vec k,\varepsilon}(n) := \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon( \frac{\log d}{\log x} ) \ \ \ \ \ (11)

 

where {\varepsilon>0} is a small quantity to be chosen later, and {\eta_\varepsilon: {\bf R} \rightarrow [0,1]} is a smooth function that equals {1} on {(-\infty,1-4\varepsilon)} and equals {0} on {(1-3\varepsilon,+\infty)}. Suppose one could establish the following two estimates for any fixed {\varepsilon>0}:

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (12)

 

and

\displaystyle \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) a_n = C Q_{\varepsilon,x} G(1) + o( |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (13)

 

where {Q_{\varepsilon,x}} is a quantity that depends on {\varepsilon, \eta_\varepsilon, \vec k, B, x} but not on {C, g,G}. Then on combining the two estimates we would have

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = C Q_{\varepsilon,x} G(1) + (O(\varepsilon) + o(1)) C |I| \log^{|\vec k|-1} x. \ \ \ \ \ (14)

 

One could in principle compute {Q_{\varepsilon,x}} explicitly from the proof of (13), but one can avoid doing so by the following comparison trick. In the special case {a_n=1}, standard multiplicative number theory (noting that the Dirichlet series {\sum_n \frac{\Lambda_{\vec k}(n)}{n^s}} has a pole of order {|\vec k|} at {s=1}, with top Laurent coefficient {\prod_{j=1}^r k_j!}) gives the asymptotic

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = (\frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} + o(1)) |I| \log^{|\vec k|-1} x

which when compared with (14) for {a_n=1} (recalling that {G(1)=C=1} in this case) gives the formula

\displaystyle Q_{\varepsilon,x} = (\frac{\prod_{j=1}^r k_j!}{(|\vec k|-1)!} + O(\varepsilon) + o(1)) |I| \log^{|\vec k|-1} x.

Inserting this back into (14) and recalling that {\varepsilon>0} can be made arbitrarily small, we obtain (10).

As it turns out, the estimate (13) is easy to establish, but the estimate (12) is not, roughly speaking because the typical number {n} in {I} has too many divisors {d} in the range {[x^{1-4\varepsilon},x]}, each of which gives a contribution to the error term. (In the book of Friedlander and Iwaniec, the estimate (12) is established anyway, but only after assuming a stronger version of (iv), roughly speaking in which {d} is allowed to be as large as {x \exp( -\log^{1/4} x)}.) To resolve this issue, we will insert a preliminary sieve {\nu_\varepsilon} that will remove most of the potential divisors {d} in the range {[x^{1-4\varepsilon},x]} (leaving only about {O(1)} such divisors on the average for typical {n}), making the analogue of (12) easier to prove (at the cost of making the analogue of (13) more difficult). Namely, if one can find a function {\nu_\varepsilon: {\bf N} \rightarrow {\bf R}} for which one has the estimates

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) a_n = \sum_{n \in I} \Lambda_{\vec k}(n) \nu_\varepsilon(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ), \ \ \ \ \ (15)

 

\displaystyle \sum_{n \in I} \Lambda_{\vec k}(n) \nu_\varepsilon(n) a_n

\displaystyle = \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n) a_n + O( (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (16)

 

and

\displaystyle \sum_{n \in I} \Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n) a_n = C Q'_{\varepsilon,x} G(1) + o( |I| \log^{|\vec k|-1} x ) \ \ \ \ \ (17)

 

for some quantity {Q'_{\varepsilon,x}} that depends on {\varepsilon, \eta_\varepsilon, \vec k, B, x} but not on {C, g, G}, then by repeating the previous arguments we will again be able to establish (10).

The key estimate is (16). As we shall see, when comparing {\Lambda_{\vec k}(n) \nu_\varepsilon(n)} with {\Lambda_{\vec k,\varepsilon}(n) \nu_\varepsilon(n)}, the weight {\nu_\varepsilon} will cost us a factor of {1/\varepsilon}, but the {\log^{k_r} \frac{x}{d}} term in the definitions of {\Lambda_{\vec k}} and {\Lambda_{\vec k,\varepsilon}} will recover a factor of {\varepsilon^{k_r}}, which will give the desired bound since we are assuming {k_r > 1}.

One has some flexibility in how to select the weight {\nu_\varepsilon}: basically any standard sieve that uses divisors of size at most {x^{2\varepsilon}} to localise (at least approximately) to numbers that are rough in the sense that they have no (or at least very few) prime factors less than {x^\varepsilon}, will do. We will use the analytic Selberg sieve choice

\displaystyle \nu_\varepsilon(n) := (\sum_{d|n} \mu(d) \psi( \frac{\log d}{\varepsilon \log x} ))^2 \ \ \ \ \ (18)

 

where {\psi: {\bf R} \rightarrow [0,1]} is a smooth function supported on {[-1,1]} that equals {1} on {[-1/2,1/2]}.

It remains to establish the bounds (15), (16), (17). To warm up and introduce the various methods needed, we begin with the standard bound

\displaystyle \sum_{n \in I} \nu_\varepsilon(n) a_n = \frac{C|I|}{\varepsilon \log x} ( (\int_0^1 \psi'(u)^2\ du) G(1) + o(1) ), \ \ \ \ \ (19)

 

where {\psi'} denotes the derivative of {\psi}. Note the loss of {1/\varepsilon} that had previously been pointed out. In the arguments that follow I will be a little brief with the details, as they are standard (see e.g. this previous post).
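
Before giving the proof, here is a rough numerical illustration of (19) in the model case {a_n = 1} (so {C = G(1) = 1}), with the localisation to the short interval {I} ignored and with one arbitrary choice of smooth {\psi} and of {\varepsilon}. Since {\varepsilon \log x} is not large at this scale, only order-of-magnitude agreement should be expected.

```python
import math
import numpy as np

def smooth_psi(u):
    """A smooth cutoff equal to 1 on [-1/2,1/2] and 0 outside [-1,1] (one concrete choice)."""
    a = abs(u)
    if a <= 0.5:
        return 1.0
    if a >= 1.0:
        return 0.0
    g = lambda t: math.exp(-1.0 / t) if t > 0 else 0.0
    s = 2.0 * (1.0 - a)
    return g(s) / (g(s) + g(1.0 - s))

def moebius(n):
    """Moebius function by trial factorisation (only needed for small n here)."""
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

x, eps = 10**6, 0.25
D = int(x ** eps)                      # only divisors d <= x^eps contribute to (18)

# s[n] = sum_{d | n} mu(d) psi(log d / (eps log x)); then nu_eps(n) = s[n]**2.
s = np.zeros(x + 1)
for d in range(1, D + 1):
    c = moebius(d) * smooth_psi(math.log(d) / (eps * math.log(x)))
    if c:
        s[d::d] += c
lhs = float(np.sum(s[1:] ** 2))

# Prediction from (19) in the model case a_n = 1 (so C = G(1) = 1), with |I| replaced by x.
uu = np.linspace(0.0, 1.0, 20001)
psi_vals = np.array([smooth_psi(u) for u in uu])
integral = np.trapz(np.gradient(psi_vals, uu) ** 2, uu)
rhs = x / (eps * math.log(x)) * integral
print("sum of nu_eps(n) for n <= x :", lhs)
print("prediction from (19)        :", rhs)
```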

We now prove (19). The left-hand side can be expanded as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \sum_{n \in I: [d_1,d_2]|n} a_n

where {[d_1,d_2]} denotes the least common multiple of {d_1} and {d_2}. From the support of {\psi} we see that the summand is only non-vanishing when {[d_1,d_2] \leq x^{2\varepsilon}}. We now use axiom (iv) and split the left-hand side into a main term

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2])}{[d_1,d_2]} C |I|

and an error term that is at most

\displaystyle O_\varepsilon( \sum_{d \leq x^{2\varepsilon}} \tau(d)^{O(1)} | \sum_{n \in I: d|n} a_n - \frac{g(d)}{d} C |I|| ). \ \ \ \ \ (20)

 

From axiom (ii) and elementary multiplicative number theory, we have the bound

\displaystyle \sum_{d \leq x} \tau(d)^{O(1)} | \sum_{n \in I: d|n} a_n - \frac{g(d)}{d} C |I|| \ll C |I| \log^{O(1)} x

so from axiom (iv) and Cauchy-Schwarz we see that the error term (20) is acceptable. Thus it will suffice to establish the bound

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2])}{[d_1,d_2]}

\displaystyle = \frac{1}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1) + o(\frac{1}{\log x}). \ \ \ \ \ (21)

 

The summand here is almost, but not quite, multiplicative in {d_1,d_2}. To make it genuinely multiplicative, we perform a (shifted) Fourier expansion

\displaystyle \psi(u) = \int_{\bf R} e^{-(1+it)u} \Psi(t)\ dt \ \ \ \ \ (22)

 

for some rapidly decreasing function {\Psi} (essentially the Fourier transform of {e^u \psi(u)}). Thus

\displaystyle \psi( \frac{\log d}{\varepsilon \log x} ) = \int_{\bf R} \frac{1}{d^{\frac{1+it}{\varepsilon \log x}}} \Psi(t)\ dt,

and so the left-hand side of (21) can be rearranged using Fubini’s theorem as

\displaystyle \int_{\bf R} \int_{\bf R} E(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})\ \Psi(t_1) \Psi(t_2) dt_1 dt_2 \ \ \ \ \ (23)

 

where

\displaystyle E(s_1,s_2) := \sum_{d_1,d_2} \frac{\mu(d_1) \mu(d_2)}{d_1^{s_1}d_2^{s_2}} \frac{g([d_1,d_2])}{[d_1,d_2]}.

We can factorise {E(s_1,s_2)} as an Euler product:

\displaystyle E(s_1,s_2) = \prod_p (1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}}).

Taking absolute values and using Mertens’ theorem leads to the crude bound

\displaystyle E(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x}) \ll_\varepsilon \log^{O(1)} x

which when combined with the rapid decrease of {\Psi}, allows us to restrict the region of integration in (23) to the square {\{ |t_1|, |t_2| \leq \sqrt{\log x} \}} (say) with negligible error. Next, we use the Euler product

\displaystyle \zeta(s) = \prod_p (1-\frac{1}{p^s})^{-1}

for {\hbox{Re} s > 1} to factorise

\displaystyle E(s_1,s_2) = \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} \prod_p E_p(s_1,s_2)

where

\displaystyle E_p(s_1,s_2) := \frac{(1 - \frac{g(p)}{p^{1+s_1}} - \frac{g(p)}{p^{1+s_2}} + \frac{g(p)}{p^{1+s_1+s_2}})(1 - \frac{1}{p^{1+s_1+s_2}})}{(1-\frac{1}{p^{1+s_1}})(1-\frac{1}{p^{1+s_2}})}.

For {s_1,s_2=o(1)} with nonnegative real part, one has

\displaystyle E_p(s_1,s_2) = 1 + O(1/p^2)

and so by the Weierstrass {M}-test, {\prod_p E_p(s_1,s_2)} is continuous at {s_1=s_2=0}. Since

\displaystyle \prod_p E_p(0,0) = G(1)

we thus have

\displaystyle \prod_p E_p(s_1,s_2) = G(1) + o(1)

Also, since {\zeta} has a pole of order {1} at {s=1} with residue {1}, we have

\displaystyle \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} = (1+o(1)) \frac{s_1 s_2}{s_1+s_2}

and thus

\displaystyle E(s_1,s_2) = (G(1)+o(1)) \frac{s_1s_2}{s_1+s_2}.

The quantity (23) can thus be written, up to errors of {o(\frac{1}{\log x})}, as

\displaystyle \frac{G(1)}{\varepsilon \log x} \int_{|t_1|, |t_2| \leq \sqrt{\log x}} \frac{(1+it_1)(1+it_2)}{1+it_1+1+it_2} \Psi(t_1) \Psi(t_2)\ dt_1 dt_2.

Using the rapid decrease of {\Psi}, we may remove the restriction on {t_1,t_2}, and it will now suffice to prove the identity

\displaystyle \int_{\bf R} \int_{\bf R} \frac{(1+it_1)(1+it_2)}{1+it_1+1+it_2} \Psi(t_1) \Psi(t_2)\ dt_1 dt_2 = \int_0^1 \psi'(u)^2\ du.

But on differentiating and then squaring (22) we have

\displaystyle \psi'(u)^2 = \int_{\bf R} \int_{\bf R} (1+it_1)(1+it_2) e^{-(1+it_1+1+it_2)u}\Psi(t_1) \Psi(t_2)\ dt_1 dt_2

and the claim follows by integrating in {u} from zero to infinity (noting that {\psi'} vanishes for {u>1}).
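
This identity (and the expansion (22) that it rests on) can be checked numerically for a concrete choice of {\psi}. The sketch below uses one arbitrary exponential-bump cutoff, computes {\Psi} as the Fourier transform of {e^u \psi(u)} by quadrature, spot-checks (22), and compares the double integral with {\int_0^1 \psi'(u)^2\ du}; with the modest grids used here the two sides should agree to a few decimal places.

```python
import numpy as np

def smooth_step(s):
    """A C^infinity transition: 0 for s <= 0, 1 for s >= 1 (one standard construction)."""
    s = np.clip(np.asarray(s, dtype=float), 0.0, 1.0)
    g1 = np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-12)), 0.0)
    g2 = np.where(s < 1, np.exp(-1.0 / np.maximum(1.0 - s, 1e-12)), 0.0)
    return g1 / (g1 + g2)

def psi(u):
    """Smooth cutoff equal to 1 on [-1/2,1/2] and vanishing outside [-1,1]."""
    return smooth_step(2.0 * (1.0 - np.abs(np.asarray(u, dtype=float))))

# Psi(t) = (1/2 pi) int e^{itu} e^u psi(u) du, so that (22) holds: psi(u) = int e^{-(1+it)u} Psi(t) dt.
u = np.linspace(-1.0, 1.0, 2001)
t = np.linspace(-60.0, 60.0, 1201)
Psi = np.trapz(psi(u) * np.exp(u) * np.exp(1j * np.outer(t, u)), u, axis=1) / (2 * np.pi)

# Spot-check the expansion (22) at a few sample points.
for u0 in (0.0, 0.6, 0.9):
    recon = np.trapz(np.exp(-(1 + 1j * t) * u0) * Psi, t).real
    print(f"psi({u0}) = {float(psi(u0)):.4f}   reconstruction from (22) = {recon:.4f}")

# The double integral against Psi(t_1) Psi(t_2) versus int_0^1 psi'(u)^2 du.
A = (1 + 1j * t) * Psi
lhs = np.trapz(np.trapz(np.outer(A, A) / (2.0 + 1j * (t[:, None] + t[None, :])), t, axis=1), t).real
uu = np.linspace(0.0, 1.0, 40001)
rhs = np.trapz(np.gradient(psi(uu), uu) ** 2, uu)
print("double integral =", lhs, "   int_0^1 psi'(u)^2 du =", rhs)
```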

We have the following variant of (19):

Lemma 3 For any {d \leq x^{1-3\varepsilon}}, one has

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n \ll \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} + R_d \ \ \ \ \ (24)

 

where the {R_d} are such that

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} R_d \ll_A |I| \log^{-A} x \ \ \ \ \ (25)

 

for any {A>0}. We also have the variant

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n/d) a_n \ll \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O(1)}{d} + R_d. \ \ \ \ \ (26)

 

If in addition {d} has no prime factors less than {x^\delta} for some fixed {\delta>0}, one has

\displaystyle \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n

\displaystyle = \frac{1+o(1)}{d} \frac{C|I|}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1) + O(R_d). \ \ \ \ \ (27)

 

Roughly speaking, the above estimates assert that {\nu_\varepsilon} is concentrated on those numbers {n} with no prime factors much less than {x^\varepsilon}, but factors {d} without such small prime divisors occur with about the same relative density as they do in the integers.

Proof: The left-hand side of (24) can be expanded as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \sum_{n \in I: [d_1,d_2,d]|n} a_n.

If we define

\displaystyle R_d := \sum_{d' \leq x^{1-\varepsilon}: d|d'} \tau(d')^2 |\sum_{n \in I:d'|n} a_n - \frac{g(d')}{d'} C|I||

then the previous expression can be written as

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2,d])}{[d_1,d_2,d]} C|I| + O(R_d),

while one has

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} R_d \leq \sum_{d' \leq x^{1-\varepsilon}} \tau(d')^3 |\sum_{n \in I:d'|n} a_n - \frac{g(d')}{d'} C|I||

which gives (25) from axiom (iv) (combined with axiom (ii) and Cauchy-Schwarz, as before). To prove (24), it now suffices to show that

\displaystyle \sum_{d_1,d_2} \mu(d_1) \mu(d_2) \psi( \frac{\log d_1}{\varepsilon \log x} ) \psi( \frac{\log d_2}{\varepsilon \log x} ) \frac{g([d_1,d_2,d])}{[d_1,d_2,d]}

\displaystyle \ll \frac{1}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d}. \ \ \ \ \ (28)

 

Arguing as before, the left-hand side is

\displaystyle \int_{\bf R} \int_{\bf R} E^{(d)}(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})\ \Psi(t_1) \Psi(t_2) dt_1 dt_2

where

\displaystyle E^{(d)}(s_1,s_2) := \sum_{d_1,d_2} \frac{\mu(d_1) \mu(d_2)}{d_1^{s_1}d_2^{s_2}} \frac{g([d_1,d_2,d])}{[d_1,d_2,d]}.

From Mertens’ theorem we have

\displaystyle E^{(d)}(s_1,s_2) \ll_\varepsilon \frac{\prod_{p|d} O(1)}{d} \log^{O(1)} x

when {\hbox{Re} s_1, \hbox{Re} s_2 = \frac{1}{\varepsilon \log x}}, so the contribution of the terms where {|t_1|, |t_2| \geq \sqrt{\log x}} can be absorbed into the {R_d} error (after increasing that error slightly). For the remaining contributions, we see that

\displaystyle E^{(d)}(s_1,s_2) = \frac{\zeta(1+s_1+s_2)}{\zeta(1+s_1) \zeta(1+s_2)} \prod_p E^{(d)}_p(s_1,s_2)

where {E^{(d)}_p(s_1,s_2) = E_p(s_1,s_2)} if {p} does not divide {d}, and

\displaystyle E^{(d)}_p(s_1,s_2) = \frac{g(p^j)}{p^j} \frac{(1 - \frac{1}{p^{s_1}}) (1 - \frac{1}{p^{s_2}}) (1 - \frac{1}{p^{1+s_1+s_2}})}{(1-\frac{1}{p^{1+s_1}})(1-\frac{1}{p^{1+s_2}})}

if {p} divides {d} {j} times for some {j \geq 1}. In the latter case, Taylor expansion gives the bounds

\displaystyle |E^{(d)}_p(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x})| \lesssim (1+|t_1|+|t_2|)^{O(1)} \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^j}

and the claim (28) follows. When {p \geq x^\delta} and {|t_1|, |t_2| \leq \sqrt{\log x}} we have

\displaystyle E^{(d)}_p(\frac{1+it_1}{\varepsilon \log x},\frac{1+it_2}{\varepsilon \log x}) = \frac{1+o(1)}{p^j}

and (27) follows by repeating the previous calculations. Finally, (26) is proven similarly to (24) (using {d[d_1,d_2]} in place of {[d_1,d_2,d]}). \Box

Now we can prove (15), (16), (17). We begin with (15). Using the Leibniz rule {L(f*g) = (Lf)*g + f*(Lg)} applied to the identity {\mu = \mu * 1 * \mu} and using {\Lambda = \mu*L} and Möbius inversion (and the associativity and commutativity of Dirichlet convolution) we see that

\displaystyle L\mu = - \mu * \Lambda. \ \ \ \ \ (29)

 

Next, by applying the Leibniz rule to {\Lambda_k = \mu * L^k} for some {k \geq 1} and using (29) we see that

\displaystyle L \Lambda_k = L \mu * L^k + \mu * L^{k+1}

\displaystyle = - \mu * \Lambda * L^k + \Lambda_{k+1}

and hence we have the recursive identity

\displaystyle \Lambda_{k+1} = L \Lambda_k + \Lambda *\Lambda_k. \ \ \ \ \ (30)
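
Both (29) and the recursion (30) are simple enough to verify numerically as Dirichlet convolutions of tabulated arithmetic functions; here is a quick sketch (the cutoff is arbitrary).

```python
import math

N = 3000

# Moebius and von Mangoldt tables via a smallest-prime-factor sieve.
spf = list(range(N + 1))
for i in range(2, int(N**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, N + 1, i):
            if spf[j] == j:
                spf[j] = i
mu = [0] * (N + 1); mu[1] = 1
lam = [0.0] * (N + 1)
for n in range(2, N + 1):
    p, m = spf[n], n // spf[n]
    mu[n] = 0 if m % p == 0 else -mu[m]
    q = n
    while q % p == 0:
        q //= p
    if q == 1:
        lam[n] = math.log(p)

def dirichlet(f, g):
    """Dirichlet convolution of two tables indexed 0..N."""
    h = [0.0] * (N + 1)
    for a in range(1, N + 1):
        if f[a]:
            for b in range(1, N // a + 1):
                if g[b]:
                    h[a * b] += f[a] * g[b]
    return h

log_table = [0.0] + [math.log(n) for n in range(1, N + 1)]
Lambda2 = dirichlet(mu, [l**2 for l in log_table])      # Lambda_2 = mu * L^2

# (29): L mu = -(mu * Lambda)
lhs29 = [log_table[n] * mu[n] for n in range(N + 1)]
rhs29 = [-v for v in dirichlet(mu, lam)]
print(max(abs(lhs29[n] - rhs29[n]) for n in range(1, N + 1)))    # ~ 0 up to rounding

# (30) with k = 1: Lambda_2 = L Lambda + Lambda * Lambda
conv = dirichlet(lam, lam)
rhs30 = [log_table[n] * lam[n] + conv[n] for n in range(N + 1)]
print(max(abs(Lambda2[n] - rhs30[n]) for n in range(1, N + 1)))  # ~ 0 up to rounding
```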

 

In particular, from induction we see that {\Lambda_k} is supported on numbers with at most {k} distinct prime factors, and hence {\Lambda_{\vec k}} is supported on numbers with at most {|\vec k|} distinct prime factors. In particular, from (18) we see that {\nu_\varepsilon(n) = O(1)} on the support of {\Lambda_{\vec k}}. Thus it will suffice to show that

\displaystyle \sum_{n \in I: \nu_\varepsilon(n) \neq 1} \Lambda_{\vec k}(n) a_n \ll (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x.

If {\nu_\varepsilon(n) \neq 1} and {\Lambda_{\vec k}(n) \neq 0}, then {n} has at most {|\vec k|} distinct prime factors {p_1 < p_2 < \dots < p_r}, with {p_1 \leq x^\varepsilon}. If we factor {n = n_1 n_2}, where {n_1} is the contribution of those {p_i} with {p_i \leq x^{1/10|\vec k|}}, and {n_2} is the contribution of those {p_i} with {p_i > x^{1/10|\vec k|}}, then at least one of the following two statements hold:

  • (a) {n_1} (and hence {n}) is divisible by a square number of size at least {x^{1/10}}.
  • (b) {n_1 \leq x^{1/5}}.

The contribution of case (a) is easily seen to be acceptable by axiom (ii). For case (b), we observe from (30) and induction that

\displaystyle \Lambda_{\vec k}(n) \ll \log^{|\vec k|} x \prod_{j=1}^r \frac{\log p_j}{\log x}

and so it will suffice to show that

\displaystyle \sum_{n_1} (\prod_{p|n_1} \frac{\log p}{\log x}) \sum_{n \in I: n_1 | n} 1_R(n/n_1) a_n \ll (\varepsilon + o(1)) C |I| \log^{-1} x

where {n_1} ranges over numbers bounded by {x^{1/5}} with at most {|\vec k|} distinct prime factors, the smallest of which is at most {x^\varepsilon}, and {R} consists of those numbers with no prime factor less than or equal to {x^{1/10|\vec k|}}. Applying (26) (with {\varepsilon} replaced by {1/10|\vec k|}) gives the bound

\displaystyle \sum_{n \in I: n_1|n} 1_R(n/n_1) a_n \ll \frac{C|I|}{\log x} \frac{1}{n_1} + R_{n_1}

so by (25) it suffices to show that

\displaystyle \sum_{n_1} (\prod_{p|n_1} \frac{\log p}{\log x}) \frac{1}{n_1} \ll \varepsilon

subject to the same constraints on {n_1} as before. The contribution of those {n_1} with {r} distinct prime factors can be bounded by

\displaystyle O(\sum_{p_1 \leq x^\varepsilon} \frac{\log p_1}{p_1 \log x}) \times O(\sum_{p \leq x^{1/5}} \frac{\log p}{p\log x})^{r-1};

applying Mertens’ theorem and summing over {1 \leq r \leq |\vec k|}, one obtains the claim.

Now we show (16). As discussed previously in this section, we can replace {\Lambda_{\vec k}(n)} by {\sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d}} with negligible error. Comparing this with (16) and (11), we see that it suffices to show that

\displaystyle \sum_{n \in I} \sum_{d|n} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} (1 - \eta_\varepsilon(\frac{\log d}{\log x})) \nu_\varepsilon(n) a_n \ll (\varepsilon+o(1)) C |I| \log^{|\vec k|-1} x.

From the support of {\eta_\varepsilon}, the summand on the left-hand side is only non-zero when {d \geq x^{1-4\varepsilon}}, which makes {\log^{k_r} \frac{x}{d} \ll \varepsilon^{k_r} \log^{k_r} x \leq \varepsilon^2 \log^{k_r} x}, where we use the crucial hypothesis {k_r > 1} to gain enough powers of {\varepsilon} to make the argument here work. Applying Lemma 2, we reduce to showing that

\displaystyle \sum_{n \in I} \sum_{d|n: d \geq x^{1-4\varepsilon}} \nu_\varepsilon(n) a_n \ll \frac{1+o(1)}{\varepsilon \log x} C |I|.

We can make the change of variables {d \mapsto n/d} to flip the sum

\displaystyle \sum_{d|n: d \geq x^{1-4\varepsilon}} 1 \leq \sum_{d|n: d \leq x^{4\varepsilon}} 1

and then swap the sums to reduce to showing that

\displaystyle \sum_{d \leq x^{4\varepsilon}} \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n \ll \frac{1+o(1)}{\varepsilon \log x} C |I|.

By Lemma 3, it suffices to show that

\displaystyle \sum_{d \leq x^{4\varepsilon}} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} \ll 1.

To prove this, we use the Rankin trick, bounding the implied weight {1_{d \leq x^{4\varepsilon}}} by {O( \frac{1}{d^{1/\varepsilon \log x}} )}. We can then bound the left-hand side by the Euler product

\displaystyle \prod_p (1 + O( \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^{1+1/\varepsilon \log x}} ))

which can be bounded by

\displaystyle \exp( O( \sum_p \frac{\min( \frac{\log p}{\varepsilon \log x}, 1 )^2}{p^{1+1/\varepsilon \log x}} ) )

and the claim follows from Mertens’ theorem.
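
The quantity bounded here by the Rankin trick can also just be computed directly for moderate {x}. The sketch below does so for the arbitrary choice {\varepsilon = 0.2} and a few values of {x}; the values stay of comparable size as {x} grows, consistent with the claimed {O(1)} bound.

```python
import math

def rough_weight_sum(x, eps):
    """Directly evaluates sum_{d <= x^{4 eps}} prod_{p|d} min(log p/(eps log x), 1)^2 / d,
    the quantity bounded above via Rankin's trick."""
    D = int(x ** (4 * eps))
    spf = list(range(D + 1))
    for i in range(2, int(D**0.5) + 1):
        if spf[i] == i:
            for j in range(i * i, D + 1, i):
                if spf[j] == j:
                    spf[j] = i
    total = 0.0
    elx = eps * math.log(x)
    for d in range(1, D + 1):
        w, m = 1.0, d
        while m > 1:
            p = spf[m]
            w *= min(math.log(p) / elx, 1.0) ** 2
            while m % p == 0:
                m //= p
        total += w / d
    return total

for x in (10**5, 10**6, 10**7):
    print(x, rough_weight_sum(x, 0.2))   # stays of comparable size as x grows
```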

Finally, we show (17). By (11), the left-hand side expands as

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon(\frac{\log d}{\log x}) \sum_{n \in I: d|n} \nu_\varepsilon(n) a_n.

We let {\delta>0} be a small constant to be chosen later. We divide the outer sum into two ranges, depending on whether {d} only has prime factors greater than {x^\delta} or not. In the former case, we can apply (27) to write this contribution as

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \mu_{\vec k}(d) \log^{k_r} \frac{x}{d} \eta_\varepsilon(\frac{\log d}{\log x}) \frac{1+o(1)}{d} \frac{C|I|}{\varepsilon \log x} (\int_0^1 \psi'(u)^2\ du) G(1)

plus a negligible error, where the {d} is implicitly restricted to numbers with all prime factors greater than {x^\delta}. The main term is messy, but it is of the required form {C Q'_{\varepsilon,x} G(1)} up to an acceptable error, so there is no need to compute it any further. It remains to consider those {d} that have at least one prime factor less than {x^\delta}. Here we use (24) instead of (27), together with Lemma 2, to dominate this contribution by

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} O( \log^{|\vec k|} x \frac{C|I|}{\varepsilon \log x} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 )^2 )}{d} )

up to negligible errors, where {d} is now restricted to have at least one prime factor less than {x^\delta}. This forces at least one of the factors {\min( \frac{\log p}{\varepsilon \log x}, 1 )} to be at most {O_\varepsilon(\delta)}. A routine application of Rankin’s trick shows that

\displaystyle \sum_{d \leq x^{1-3\varepsilon}} \frac{\prod_{p|d} O( \min( \frac{\log p}{\varepsilon \log x}, 1 ) )}{d} \ll_\varepsilon 1

and so the total contribution of this case is {O_\varepsilon((\delta+o(1)) |I| \log^{|\vec k|-1} x)}. Since {\delta>0} can be made arbitrarily small, (17) follows.

— 2. Weierstrass approximation —

Having proved Theorem 1, we now take linear combinations of this theorem, combined with the Weierstrass approximation theorem, to give the asymptotics (7), (8) described in the introduction.

Let {a_n}, {g}, {C}, {G} be as in that theorem. It will be convenient to normalise the weights {\Lambda_{\vec k}} by {L^{1-|\vec k|}} to make their mean value comparable to {1}. From Theorem 1 and summation by parts we have

\displaystyle \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) a_n = (G(1)+o(1)) \frac{\prod_{i=1}^r k_i!}{(|\vec k|-1)!} C x \ \ \ \ \ (31)

 

whenever {\vec k} does not consist entirely of ones.

We now take a closer look at what happens when {\vec k} does consist entirely of ones. Let {1^r} denote the {r}-tuple {(1,\dots,1)}. Convolving the {k=1} case of (30) with {r-1} copies of {\Lambda} for some {r \geq 1} and using the Leibniz rule, we see that

\displaystyle \Lambda_{(1^{r-1}, 2)} = \frac{1}{r} L \Lambda_{1^r} + \Lambda_{1^{r+1}}

and hence

\displaystyle L^{-r} \Lambda_{1^{r+1}} = L^{-r} \Lambda_{(1^{r-1},2)} - \frac{1}{r} L^{1-r} \Lambda_{1^r}.

Multiplying by {a_n} and summing over {n \leq x}, and using (31) to control the {\Lambda_{(1^{r-1},2)}} term, one has

\displaystyle \sum_{n \leq x} L^{-r} \Lambda_{1^{r+1}}(n) a_n = (G(1)+o(1)) \frac{2}{r!} C x - \frac{1}{r} \sum_{n \leq x} L^{1-r} \Lambda_{1^{r}}(n) a_n.

If we define {\delta_x} (up to an error of {o(1)}) by the formula

\displaystyle \sum_{n \leq x} \Lambda(n) a_n = (\delta_x G(1) + o(1)) C x

then an induction then shows that

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \frac{1}{(r-1)!} (\delta_x G(1) + o(1)) C x

for odd {r}, and

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \frac{1}{(r-1)!} ((2-\delta_x) G(1) + o(1)) C x

for even {r}. In particular, after adjusting {\delta_x} by {o(1)} if necessary, we have {0 \leq \delta_x \leq 2} since the left-hand sides are non-negative.
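
These odd/even formulas can be illustrated numerically in the motivating model {a_n = \Lambda(n+2)} (where {C = 1} and {G(1) = 2\Pi_2}): the {r=1} sum, normalised by {2\Pi_2 x}, is a proxy for {\delta_x}, while the {r=2} sum with the same normalisation should then be close to {2 - \delta_x}. The sketch below (with arbitrary, and quite small, cutoffs; no conclusion about the true behaviour of {\delta_x} is being drawn) computes both.

```python
import math

x = 2 * 10**5
N = x + 2

# von Mangoldt table up to N (so that Lambda(n+2) is available for n <= x).
lam = [0.0] * (N + 1)
is_comp = bytearray(N + 1)
for p in range(2, N + 1):
    if not is_comp[p]:
        for q in range(p * p, N + 1, p):
            is_comp[q] = 1
        logp, pk = math.log(p), p
        while pk <= N:
            lam[pk] = logp
            pk *= p

# Truncated twin prime constant (primes up to N give several correct digits).
Pi2 = 1.0
for p in range(3, N + 1):
    if not is_comp[p]:
        Pi2 *= 1.0 - 1.0 / (p - 1) ** 2

# Lambda_{1,1} = Lambda * Lambda, truncated to n <= x.
lam2 = [0.0] * (x + 1)
for a in range(2, x // 2 + 1):
    if lam[a]:
        for b in range(2, x // a + 1):
            if lam[b]:
                lam2[a * b] += lam[a] * lam[b]

delta1 = sum(lam[n] * lam[n + 2] for n in range(1, x + 1)) / (2 * Pi2 * x)
val2 = sum(lam2[n] / math.log(n) * lam[n + 2] for n in range(4, x + 1)) / (2 * Pi2 * x)
print("r = 1 sum / (2 Pi_2 x)  ~", delta1)         # a proxy for delta_x
print("r = 2 sum / (2 Pi_2 x)  ~", val2)           # should be roughly 2 - delta_x
print("their sum               ~", delta1 + val2)  # should be roughly 2
```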

If we now define the comparison sequence {b_n := C G(1) (1 + (1-\delta_x) \mu(n))}, standard multiplicative number theory shows that the above estimates also hold when {a_n} is replaced by {b_n}; thus

\displaystyle \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) a_n = \sum_{n \leq x} L^{1-r} \Lambda_{1^r}(n) b_n + o( x )

for both odd and even {r}. The bound (31) also holds for {b_n} when {\vec k} does not consist entirely of ones, and hence

\displaystyle \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) a_n = \sum_{n \leq x} L^{1-|\vec k|} \Lambda_{\vec k}(n) b_n + o( x )

for any fixed {\vec k} (which may or may not consist entirely of ones).

Next, from induction (on {j_1+\dots+j_r}), the Leibniz rule, and (30), we see that for any {r \geq 1} and {j_1,\dots,j_r \geq 0}, {k_1,\dots,k_r}, the function

\displaystyle L^{1-j_1-\dots-j_r-|\vec k|} ((L^{j_1} \Lambda_{k_1}) * \dots * (L^{j_r} \Lambda_{k_r})) \ \ \ \ \ (32)

 

is a finite linear combination of functions of the form {L^{1-|\vec k'|} \Lambda_{\vec k'}} for tuples {\vec k'} that may possibly consist entirely of ones. We thus have

\displaystyle \sum_{n \leq x} f(n) a_n = \sum_{n \leq x}f(n) b_n + o( x )

whenever {f} is one of these functions (32). Specialising to the case {k_1=\dots=k_r=1}, we thus have

\displaystyle \sum_{n_1 \dots n_r \leq x} a_{n} \log^{1-r} n \prod_{i=1}^r (\log n_i/\log n)^{j_i} \Lambda(n_i)

\displaystyle = \sum_{n_1 \dots n_r \leq x} b_{n} \log^{1-r} n \prod_{i=1}^r (\log n_i/\log n)^{j_i} \Lambda(n_i) + o(x )

where {n := n_1 \dots n_r}. The contribution of those {n_i} that are powers of primes can be easily seen to be negligible, leading to

\displaystyle \sum_{p_1 \dots p_r \leq x} a_{n} \log n \prod_{i=1}^r (\log p_i/\log n)^{j_i+1}

\displaystyle = \sum_{p_1 \dots p_r \leq x} b_{n} \log n \prod_{i=1}^r (\log p_i/\log n)^{j_i+1} + o(x)

where now {n := p_1 \dots p_r}. The contribution of the case where two of the primes {p_i} agree can also be seen to be negligible, as can the error when replacing {\log n} with {\log x}, and then by symmetry

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} \prod_{i=1}^r (\log p_i/\log n)^{j_i+1}

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} \prod_{i=1}^r (\log p_i/\log n)^{j_i+1} + o(x / \log x).

By linearity, this implies that

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} P( \log p_1/\log n, \dots, \log p_r/\log n)

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} P( \log p_1/\log n, \dots, \log p_r/\log n) + o(x / \log x)

for any polynomial {P(t_1,\dots,t_r)} that vanishes on the coordinate hyperplanes {t_i=0}. The right-hand side can also be evaluated by Mertens’ theorem as

\displaystyle CG(1) \delta_x \int_{\Delta_r} P \frac{x}{\log x} + o(\frac{x}{\log x})

when {r} is odd and

\displaystyle CG(1) (2-\delta_x) \int_{\Delta_r} P \frac{x}{\log x} + o(\frac{x}{\log x})

when {r} is even. Using the Weierstrass approximation theorem, we then have

\displaystyle \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} a_{n} g_r( \log p_1/\log n, \dots, \log p_r/\log n)

\displaystyle = \sum_{p_1 \dots p_r \leq x: p_1 < \dots < p_r} b_{n} g_r( \log p_1/\log n, \dots, \log p_r/\log n) + o(x / \log x)

for any continuous function {g_r} that is compactly supported in the interior of {\Delta_r}. Computing the right-hand side using Mertens’ theorem as before, we obtain the claimed asymptotics (7), (8).

Remark 4 The Bombieri asymptotic sieve has to use the full power of EH (or GEH); there are constructions due to Ford that show that if one only has a distributional hypothesis up to {x^{1-c}} for some fixed constant {c>0}, then the asymptotics of sums such as (5), or more generally (9), are not determined by a single scalar parameter {\delta_x}, but can also vary in other ways as well. Thus the Bombieri asymptotic sieve really is asymptotic; in order to get {o(1)} type error terms one needs the level {1-\varepsilon} of distribution to be asymptotically equal to {1} as {x \rightarrow \infty}. Related to this, the quantitative decay of the {o(1)} error terms in the Bombieri asymptotic sieve is extremely poor; in particular, it depends on the dependence of the implied constant in axiom (iv) on the parameters {\varepsilon,A}, for which there is no consensus on what one should conjecturally expect.

I’ve just posted to the arXiv my paper “Finite time blowup for Lagrangian modifications of the three-dimensional Euler equation“. This paper is loosely in the spirit of other recent papers of mine in which I explore how close one can get to supercritical PDE of physical interest (such as the Euler and Navier-Stokes equations), while still being able to rigorously demonstrate finite time blowup for at least some choices of initial data. Here, the PDE we are trying to get close to is the incompressible inviscid Euler equations

\displaystyle \partial_t u + (u \cdot \nabla) u = - \nabla p

\displaystyle \nabla \cdot u = 0

in three spatial dimensions, where {u} is the velocity vector field and {p} is the pressure field. In vorticity form, and viewing the vorticity {\omega} as a {2}-form (rather than a vector), we can rewrite this system using the language of differential geometry as

\displaystyle \partial_t \omega + {\mathcal L}_u \omega = 0

\displaystyle u = \delta \tilde \eta^{-1} \Delta^{-1} \omega

where {{\mathcal L}_u} is the Lie derivative along {u}, {\delta} is the codifferential (the adjoint of the differential {d}, or equivalently the negative of the divergence operator) that sends {k+1}-vector fields to {k}-vector fields, {\Delta} is the Hodge Laplacian, and {\tilde \eta} is the identification of {k}-vector fields with {k}-forms induced by the Euclidean metric {\tilde \eta}. The equation {u = \delta \tilde \eta^{-1} \Delta^{-1} \omega} can be viewed as the Biot-Savart law recovering velocity from vorticity, expressed in the language of differential geometry.

One can then generalise this system by replacing the operator {\tilde \eta^{-1} \Delta^{-1}} by a more general operator {A} from {2}-forms to {2}-vector fields, giving rise to what I call the generalised Euler equations

\displaystyle \partial_t \omega + {\mathcal L}_u \omega = 0

\displaystyle u = \delta A \omega.

For example, the surface quasi-geostrophic (SQG) equations can be written in this form, as discussed in this previous post. One can view {A \omega} (up to Hodge duality) as a vector potential for the velocity {u}, so it is natural to refer to {A} as a vector potential operator.

The generalised Euler equations carry much of the same geometric structure as the true Euler equations. For instance, the transport equation {\partial_t \omega + {\mathcal L}_u \omega = 0} is equivalent to the Kelvin circulation theorem, which in three dimensions also implies the transport of vortex streamlines and the conservation of helicity. If {A} is self-adjoint and positive definite, then the famous Euler-Poincaré interpretation of the true Euler equations as geodesic flow on an infinite dimensional Riemannian manifold of volume preserving diffeomorphisms (as discussed in this previous post) extends to the generalised Euler equations (with the operator {A} determining the new Riemannian metric to place on this manifold). In particular, the generalised Euler equations have a Lagrangian formulation, and so by Noether’s theorem we expect any continuous symmetry of the Lagrangian to lead to conserved quantities. Indeed, we have a conserved Hamiltonian {\frac{1}{2} \int \langle \omega, A \omega \rangle}, and any spatial symmetry of {A} leads to a conserved impulse (e.g. translation invariance leads to a conserved momentum, and rotation invariance leads to a conserved angular momentum). If {A} behaves like a pseudodifferential operator of order {-2} (as is the case with the true vector potential operator {\tilde \eta^{-1} \Delta^{-1}}), then it turns out that one can use energy methods to recover the same sort of classical local existence theory as for the true Euler equations (up to and including the famous Beale-Kato-Majda criterion for blowup).

The true Euler equations are suspected of admitting smooth localised solutions which blow up in finite time; there is now substantial numerical evidence for this blowup, but it has not been proven rigorously. The main purpose of this paper is to show that such finite time blowup can at least be established for certain generalised Euler equations that are somewhat close to the true Euler equations. This is similar in spirit to my previous paper on finite time blowup on averaged Navier-Stokes equations, with the main new feature here being that the modified equation continues to have a Lagrangian structure and a vorticity formulation, which was not the case with the averaged Navier-Stokes equation. On the other hand, the arguments here are not able to handle the presence of viscosity (basically because they rely crucially on the Kelvin circulation theorem, which is not available in the viscous case).

In fact, three different blowup constructions are presented (for three different choices of vector potential operator {A}). The first is a variant of one discussed previously on this blog, in which a “neck pinch” singularity for a vortex tube is created by using a non-self-adjoint vector potential operator, in which the velocity at the neck of the vortex tube is determined by the circulation of the vorticity somewhat further away from that neck, which when combined with conservation of circulation is enough to guarantee finite time blowup. This is a relatively easy construction of finite time blowup, and has the advantage of being rather stable (any initial data flowing through a narrow tube with a large positive circulation will blow up in finite time). On the other hand, it is not so surprising in the non-self-adjoint case that finite time blowup can occur, as there is no conserved energy.

The second blowup construction is based on a connection between the two-dimensional SQG equation and the three-dimensional generalised Euler equations, discussed in this previous post. Namely, any solution to the former can be lifted to a “two and a half-dimensional” solution to the latter, in which the velocity and vorticity are translation-invariant in the vertical direction (but the velocity is still allowed to contain vertical components, so the flow is not completely horizontal). The same embedding also works to lift solutions to generalised SQG equations in two dimensions to solutions to generalised Euler equations in three dimensions. Conveniently, even if the vector potential operator for the generalised SQG equation fails to be self-adjoint, one can ensure that the three-dimensional vector potential operator is self-adjoint. Using this trick, together with a two-dimensional version of the first blowup construction, one can then construct a generalised Euler equation in three dimensions with a vector potential that is both self-adjoint and positive definite, and still admits solutions that blow up in finite time, though now the blowup is that of a vortex sheet creasing along a line, rather than a vortex tube pinching at a point.

This eliminates the main defect of the first blowup construction, but introduces two others. Firstly, the blowup is less stable, as it relies crucially on the initial data being translation-invariant in the vertical direction. Secondly, the solution is not spatially localised in the vertical direction (though it can be viewed as a compactly supported solution on the manifold {{\bf R}^2 \times {\bf R}/{\bf Z}}, rather than {{\bf R}^3}). The third and final blowup construction of the paper addresses the final defect, by replacing vertical translation symmetry with axial rotation symmetry around the vertical axis (basically, replacing Cartesian coordinates with cylindrical coordinates). It turns out that there is a more complicated way to embed two-dimensional generalised SQG equations into three-dimensional generalised Euler equations in which the solutions to the latter are now axially symmetric (but are allowed to “swirl” in the sense that the velocity field can have a non-zero angular component), while still keeping the vector potential operator self-adjoint and positive definite; the blowup is now that of a vortex ring creasing on a circle.

As with the previous papers in this series, these blowup constructions do not directly imply finite time blowup for the true Euler equations, but they do at least provide a barrier to establishing global regularity for these latter equations, in that one is forced to use some property of the true Euler equations that is not shared by these generalisations. They also suggest some possible blowup mechanisms for the true Euler equations (although unfortunately these mechanisms do not seem compatible with the addition of viscosity, so they do not seem to suggest a viable Navier-Stokes blowup mechanism).

In logic, there is a subtle but important distinction between the concept of mutual knowledge – information that everyone (or almost everyone) knows – and common knowledge, which is not only knowledge that (almost) everyone knows, but something that (almost) everyone knows that everyone else knows (and that everyone knows that everyone else knows that everyone else knows, and so forth).  A classic example arises from Hans Christian Andersen’s fable of the Emperor’s New Clothes: the fact that the emperor in fact has no clothes is mutual knowledge, but not common knowledge, because everyone (save, eventually, for a small child) is refusing to acknowledge the emperor’s nakedness, thus perpetuating the charade that the emperor is actually wearing some incredibly expensive and special clothing that is only visible to a select few.  My own personal favourite example of the distinction comes from the blue-eyed islander puzzle, discussed previously here, here and here on the blog.  (By the way, I would ask that any commentary about that puzzle be directed to those blog posts, rather than to the current one.)

I believe that there is now a real-life instance of this situation in the US presidential election, regarding the following

Proposition 1.  The presumptive nominee of the Republican Party, Donald Trump, is not even remotely qualified to carry out the duties of the presidency of the United States of America.

Proposition 1 is a statement which I think is approaching the level of mutual knowledge amongst the US population (and probably a large proportion of people following US politics overseas): even many of Trump’s nominal supporters secretly suspect that this proposition is true, even if they are hesitant to say it out loud.  And there have been many prominent people, from both major parties, that have made the case for Proposition 1: for instance Mitt Romney, the Republican presidential nominee in 2012, did so back in March, and just a few days ago Hillary Clinton, the likely Democratic presidential nominee this year, did so in this speech:

I highly recommend watching the entirety of the (35 mins or so) speech, followed by the entirety of Trump’s rebuttal.

However, even if Proposition 1 is approaching the status of “mutual knowledge”, it does not yet seem to be close to the status of “common knowledge”: one may secretly believe that Trump cannot be considered a serious candidate for the US presidency, but must continue to entertain this possibility, because others (whether in one’s immediate circles, or in politics and the media) appear to be doing so.  Reconciling these views can require taking on some implausible hypotheses that are not otherwise supported by any evidence, such as the hypothesis that Trump’s displays of policy ignorance, pettiness, and other clearly unpresidential behaviour are merely “for show”, and that behind this facade there is actually a competent and qualified presidential candidate; much like the emperor’s new clothes, this alleged competence is supposedly only visible to a select few.  And so the charade continues.

I feel that it is time for the charade to end: Trump is unfit to be president, and everybody knows it.  But more people need to say so, openly.

Important note: I anticipate there will be any number of “tu quoque” responses, asserting for instance that Hillary Clinton is also unfit to be the US president.  I personally do not believe that to be the case (and certainly not to the extent that Trump exhibits), but in any event such an assertion has no logical bearing on the qualification of Trump for the presidency.  As such, any comments that are purely of this “tu quoque” nature, and which do not directly address the validity or epistemological status of Proposition 1, will be deleted as off-topic.  However, there is a legitimate case to be made that there is a fundamental weakness in the current mechanics of the US presidential election, particularly with the “first-past-the-post” voting system, in that (once the presidential primaries are concluded) a voter in the presidential election is effectively limited to choosing between just two viable choices, one from each of the two major parties, or else refusing to vote or making a largely symbolic protest vote. This weakness is particularly evident when at least one of these two major choices is demonstrably unfit for office, as per Proposition 1.  I think there is a serious case for debating the possibility of major electoral reform in the US (I am particularly partial to the Instant Runoff Voting system, used for instance in my home country of Australia, which allows for meaningful votes to third parties), and I would consider such a debate to be on-topic for this post.  But this is very much a longer term issue, as there is absolutely no chance that any such reform would be implemented by the time of the US elections in November (particularly given that any significant reform would almost certainly require, at minimum, a constitutional amendment).

 

Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.

Suppose one assigns {N} true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number {k} as the final grade of the examination. More generally, one could assign some score of {A} points to each correct answer and some score (possibly negative) of {B} points to each incorrect answer, giving a total grade of {A k + B(N-k)} points. As long as {A > B}, this grade is simply an affine rescaling of the simple grading scheme {k} and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.

In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student {S} and a given question {n}, the student {S} may think that the answer to question {n} is true with probability {p_{S,n}} and false with probability {1-p_{S,n}}, where {0 \leq p_{S,n} \leq 1} is some quantity that can be viewed as a measure of confidence {S} has in the answer (with {S} being confident that the answer is true if {p_{S,n}} is close to {1}, and confident that the answer is false if {p_{S,n}} is close to {0}); for simplicity let us assume that in {S}‘s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student {S} wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for {S} to take is to answer question {n} true if {p_{S,n} > 1/2} and false if {p_{S,n} < 1/2}. (If {p_{S,n}=1/2}, the student {S} can answer arbitrarily.)

[Important note: here we are not using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]

This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information; in particular, we do not get to directly see the student’s subjective probabilities {p_{S,n}} for each question. If for instance {S} answered {7} out of {10} questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.

But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question {n}, the student was allowed to give answers such as “{60\%} confident that the answer is true” (and hence {40\%} confident that the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities {p_{S,n}}.

But now it becomes less clear what the right grading scheme to pick is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given with {100\%} confidence is awarded one point. How many points should one award a correct answer given with {60\%} confidence? How about an incorrect answer given with {60\%} confidence (or equivalently, a correct answer given with {40\%} confidence)?

Mathematically, one could design a grading scheme by selecting some grading function {f: [0,1] \rightarrow {\bf R}} and then awarding a student {f(p)} points whenever they indicate the correct answer with a confidence of {p}. For instance, if the student was {60\%} confident that the answer was “true” (and hence {40\%} confident that the answer was “false”), then this grading scheme would award the student {f(0.6)} points if the correct answer actually was “true”, and {f(0.4)} points if the correct answer actually was “false”. One can then ask which functions {f} would be “best” for this scheme.

Intuitively, one would expect that {f} should be monotone increasing – one should be rewarded more for being correct with high confidence, than correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use a linear grading function {f(p) = p} – thus for instance a correct answer given with {60\%} confidence might be worth {0.6} points. But is this the “best” option?

To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence {q_{S,n}} for a student {S} to report on a question may differ from that student’s subjective confidence {p_{S,n}}. So one could ask to design a scheme so that {q_{S,n}} is always equal to {p_{S,n}}, so that the incentive is for the student to honestly report his or her confidence level in the answer.

This turns out to give a precise constraint on the grading function {f}. If a student {S} thinks that the answer to a question {n} is true with probability {p_{S,n}} and false with probability {1-p_{S,n}}, and enters an answer of “true” with confidence {q_{S,n}} (and thus “false” with confidence {1-q_{S,n}}), then the student would expect a grade of

\displaystyle p_{S,n} f( q_{S,n} ) + (1-p_{S,n}) f(1 - q_{S,n})

on average for this question. To maximise this expected grade (assuming differentiability of {f}, which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual maneuvre of differentiating in the independent variable {q_{S,n}} and setting the result to zero, thus obtaining

\displaystyle p_{S,n} f'( q_{S,n} ) - (1-p_{S,n}) f'(1 - q_{S,n}) = 0.

In order to avoid perverse incentives, the maximum should occur at {q_{S,n} = p_{S,n}}, thus we should have

\displaystyle p f'(p) - (1-p) f'(1-p) = 0

for all {0 \leq p \leq 1}. This suggests that the function {p \mapsto p f'(p)} should be constant. (Strictly speaking, it only gives the weaker constraint that {p \mapsto p f'(p)} is symmetric around {p=1/2}; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force {p f'(p)} to be constant in {p}; we leave this computation to the interested reader.) In other words, {f(p)} should be of the form {A \log p + B} for some {A,B}; by monotonicity we expect {A} to be positive. If we make the normalisation {f(1/2)=0} (so that no points are awarded for a {50-50} split in confidence between true and false) and {f(1)=1}, one arrives at the grading scheme

\displaystyle f(p) := \log_2(2p).
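
For completeness, here is the short computation behind this normalisation (it merely spells out the passage from {f(p) = A \log p + B} to the displayed formula). The condition {f(1)=1} forces {B=1}, and the condition {f(1/2)=0} then gives

\displaystyle -A \log 2 + 1 = 0, \hbox{ i.e. } A = \frac{1}{\log 2},

so that

\displaystyle f(p) = \frac{\log p}{\log 2} + 1 = \log_2 p + \log_2 2 = \log_2(2p).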

Thus, if a student believes that an answer is “true” with confidence {p} and “false” with confidence {1-p}, he or she will be awarded {\log_2(2p)} points when the correct answer is “true”, and {\log_2(2(1-p))} points if the correct answer is “false”. The following table gives some illustrative values for this scheme:

Confidence that answer is “true” | Points awarded if answer is “true” | Points awarded if answer is “false”
{0\%} | {-\infty} | {1.000}
{1\%} | {-5.644} | {0.9855}
{2\%} | {-4.644} | {0.9709}
{5\%} | {-3.322} | {0.9260}
{10\%} | {-2.322} | {0.8480}
{20\%} | {-1.322} | {0.6781}
{30\%} | {-0.737} | {0.4854}
{40\%} | {-0.322} | {0.2630}
{50\%} | {0.000} | {0.000}
{60\%} | {0.2630} | {-0.322}
{70\%} | {0.4854} | {-0.737}
{80\%} | {0.6781} | {-1.322}
{90\%} | {0.8480} | {-2.322}
{95\%} | {0.9260} | {-3.322}
{98\%} | {0.9709} | {-4.644}
{99\%} | {0.9855} | {-5.644}
{100\%} | {1.000} | {-\infty}

Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of {100\%} confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
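
As a quick illustration (this code is mine and not part of the original discussion; the function name grade is arbitrary), here is a short Python sketch of this scoring rule, which reproduces a few rows of the table above:

```
import math

def grade(p, answer_is_true):
    """Points awarded when a student reports confidence p that the answer
    is "true", under the rule f(p) = log_2(2p); reporting certainty in the
    wrong answer earns -infinity."""
    q = p if answer_is_true else 1 - p
    return -math.inf if q == 0 else math.log2(2 * q)

# A few rows of the table: confidence that the answer is "true", points if
# the answer really is "true", points if it is in fact "false".
for p in (0.1, 0.4, 0.6, 0.9, 0.99):
    print(f"{p:.2f}  {grade(p, True):+.4f}  {grade(p, False):+.4f}")
# 0.10  -2.3219  +0.8480
# 0.40  -0.3219  +0.2630
# 0.60  +0.2630  -0.3219
# 0.90  +0.8480  -2.3219
# 0.99  +0.9855  -5.6439
```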

The total grade given under such a scheme to a student {S} who answers each question {n} to be “true” with confidence {p_{S,n}}, and “false” with confidence {1-p_{S,n}}, is

\displaystyle \sum_{n: \hbox{ ans is true}} \log_2(2 p_{S,n} ) + \sum_{n: \hbox{ ans is false}} \log_2(2(1-p_{S,n})).

This grade can also be written as

\displaystyle N + \frac{1}{\log 2} \log {\mathcal L}

where

\displaystyle {\mathcal L} := \prod_{n: \hbox{ ans is true}} p_{S,n} \times \prod_{n: \hbox{ ans is false}} (1-p_{S,n})

is the likelihood of the student {S}‘s subjective probability model, given the outcome of the correct answers. Thus the grade system here has another natural interpretation, as being an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.

One could propose using the above grading scheme to evaluate predictions of binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of {100\%} to any prediction made without any such qualification would result in an automatic grade of {-\infty} if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach confidence levels to his or her predictions, one can assign some default level {p} of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of {p} that maximises this predictor’s grade. This level can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
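
Here is a small Python sketch of this fitting procedure (my own illustration, with hypothetical data). Differentiating {k \log_2(2p) + (m-k) \log_2(2(1-p))} in {p}, where {k} is the number of correct calls out of {m}, shows that the grade-maximising default confidence is just the empirical accuracy {k/m}, and the crude numerical search below is consistent with this.

```
import math

def total_grade(p, outcomes):
    """Total grade of a predictor who attached the same default confidence p
    to every prediction; outcomes[i] is True when prediction i was correct."""
    f = lambda q: -math.inf if q == 0 else math.log2(2 * q)
    return sum(f(p) if correct else f(1 - p) for correct in outcomes)

# Hypothetical training data: 14 correct predictions out of 20.
outcomes = [True] * 14 + [False] * 6

# Grid search over default confidence levels strictly between 0 and 1.
best_p = max((i / 1000 for i in range(1, 1000)),
             key=lambda p: total_grade(p, outcomes))
print(best_p, sum(outcomes) / len(outcomes))   # both are 0.7
```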

The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with uncertainty, in which the student does not know enough about a question to venture even a probability of being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between {50\%} and {70\%} confident that the answer is “true””). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.

A capset in the vector space {{\bf F}_3^n} over the finite field {{\bf F}_3} of three elements is a subset {A} of {{\bf F}_3^n} that does not contain any lines {\{ x,x+r,x+2r\}}, where {x,r \in {\bf F}_3^n} and {r \neq 0}. A basic problem in additive combinatorics (discussed in one of the very first posts on this blog) is to obtain good upper and lower bounds for the maximal size of a capset in {{\bf F}_3^n}.

Trivially, one has {|A| \leq 3^n}. Using Fourier methods (and the density increment argument of Roth), the bound of {|A| \leq O( 3^n / n )} was obtained by Meshulam, and improved only as late as 2012 to {O( 3^n /n^{1+c})} for some absolute constant {c>0} by Bateman and Katz. But in a very recent breakthrough, Ellenberg (and independently Gijswijt) obtained the exponentially superior bound {|A| \leq O( 2.756^n )}, using a version of the polynomial method recently introduced by Croot, Lev, and Pach. (In the converse direction, a construction of Edel gives capsets as large as {(2.2174)^n}.) Given the success of the polynomial method in superficially similar problems such as the finite field Kakeya problem (discussed in this previous post), it was natural to wonder that this method could be applicable to the cap set problem (see for instance this MathOverflow comment of mine on this from 2010), but it took a surprisingly long time before Croot, Lev, and Pach were able to identify the precise variant of the polynomial method that would actually work here.

The proof of the capset bound is very short (Ellenberg’s and Gijswijt’s preprints are both 3 pages long, and Croot-Lev-Pach is 6 pages), but I thought I would present a slight reformulation of the argument which treats the three points on a line in {{\bf F}_3^n} symmetrically (as opposed to treating the third point differently from the first two, as is done in the Ellenberg and Gijswijt papers; Croot-Lev-Pach also treat the middle point of a three-term arithmetic progression differently from the two endpoints, although this is a very natural thing to do in their context of {({\bf Z}/4{\bf Z})^n}). The basic starting point is this: if {A} is a capset, then one has the identity

\displaystyle \delta_{0^n}( x+y+z ) = \sum_{a \in A} \delta_a(x) \delta_a(y) \delta_a(z) \ \ \ \ \ (1)

 

for all {(x,y,z) \in A^3}, where {\delta_a(x) := 1_{a=x}} is the Kronecker delta function, which we view as taking values in {{\bf F}_3}. Indeed, (1) reflects the fact that the equation {x+y+z=0} has solutions precisely when {x,y,z} are either all equal, or form a line, and the latter is ruled out precisely when {A} is a capset.
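
To make (1) concrete, here is a short Python check (my own illustration; the four-point set {A \subset {\bf F}_3^2} is just a convenient small example) that verifies both the capset property and the identity (1), with all arithmetic performed modulo {3}:

```
from itertools import product

n = 2
A = [(0, 0), (0, 1), (1, 0), (1, 2)]   # a four-point capset in F_3^2 (verified below)
zero = (0,) * n

def add(x, y):
    return tuple((a + b) % 3 for a, b in zip(x, y))

def delta(a, x):
    return 1 if a == x else 0   # Kronecker delta, viewed as an element of F_3

# Check the capset property: A contains no line {x, x+r, x+2r} with r nonzero.
for x, r in product(product(range(3), repeat=n), repeat=2):
    if r != zero:
        line = {x, add(x, r), add(x, add(r, r))}
        assert not line.issubset(A)

# Verify identity (1) over A^3, as an identity in F_3:
# delta_{0^n}(x+y+z) = sum_{a in A} delta_a(x) delta_a(y) delta_a(z).
for x, y, z in product(A, repeat=3):
    lhs = delta(zero, add(add(x, y), z))
    rhs = sum(delta(a, x) * delta(a, y) * delta(a, z) for a in A) % 3
    assert lhs == rhs

print("capset property and identity (1) verified")
```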

To exploit (1), we will show that the left-hand side of (1) is “low rank” in some sense, while the right-hand side is “high rank”. Recall that a function {F: A \times A \rightarrow {\bf F}} taking values in a field {{\bf F}} is of rank one if it is non-zero and of the form {(x,y) \mapsto f(x) g(y)} for some {f,g: A \rightarrow {\bf F}}, and that the rank of a general function {F: A \times A \rightarrow {\bf F}} is the least number of rank one functions needed to express {F} as a linear combination. More generally, if {k \geq 2}, we define the rank of a function {F: A^k \rightarrow {\bf F}} to be the least number of “rank one” functions of the form

\displaystyle (x_1,\dots,x_k) \mapsto f(x_i) g(x_1,\dots,x_{i-1},x_{i+1},\dots,x_k)

for some {i=1,\dots,k} and some functions {f: A \rightarrow {\bf F}}, {g: A^{k-1} \rightarrow {\bf F}}, that are needed to generate {F} as a linear combination. For instance, when {k=3}, the rank one functions take the form {(x,y,z) \mapsto f(x) g(y,z)}, {(x,y,z) \mapsto f(y) g(x,z)}, {(x,y,z) \mapsto f(z) g(x,y)}, and linear combinations of {r} such rank one functions will give a function of rank at most {r}.

It is a standard fact in linear algebra that the rank of a diagonal matrix is equal to the number of non-zero entries. This phenomenon extends to higher dimensions:

Lemma 1 (Rank of diagonal hypermatrices) Let {k \geq 2}, let {A} be a finite set, let {{\bf F}} be a field, and for each {a \in A}, let {c_a \in {\bf F}} be a coefficient. Then the rank of the function

\displaystyle (x_1,\dots,x_k) \mapsto \sum_{a \in A} c_a \delta_a(x_1) \dots \delta_a(x_k) \ \ \ \ \ (2)

 

is equal to the number of non-zero coefficients {c_a}.

Proof: We induct on {k}. As mentioned above, the case {k=2} follows from standard linear algebra, so suppose now that {k>2} and the claim has already been proven for {k-1}.

It is clear that the function (2) has rank at most equal to the number of non-zero {c_a} (since the summands on the right-hand side are rank one functions), so it suffices to establish the lower bound. By deleting from {A} those elements {a \in A} with {c_a=0} (which cannot increase the rank), we may assume without loss of generality that all the {c_a} are non-zero. Now suppose for contradiction that (2) has rank at most {|A|-1}, then we obtain a representation

\displaystyle \sum_{a \in A} c_a \delta_a(x_1) \dots \delta_a(x_k)

\displaystyle = \sum_{i=1}^k \sum_{\alpha \in I_i} f_{i,\alpha}(x_i) g_{i,\alpha}( x_1,\dots,x_{i-1},x_{i+1},\dots,x_k) \ \ \ \ \ (3)

 

for some sets {I_1,\dots,I_k} of cardinalities adding up to at most {|A|-1}, and some functions {f_{i,\alpha}: A \rightarrow {\bf F}} and {g_{i,\alpha}: A^{k-1} \rightarrow {\bf F}}.

Consider the space of functions {h: A \rightarrow {\bf F}} that are orthogonal to all the {f_{k,\alpha}}, {\alpha \in I_k} in the sense that

\displaystyle \sum_{x \in A} f_{k,\alpha}(x) h(x) = 0

for all {\alpha \in I_k}. This space is a vector space whose dimension {d} is at least {|A| - |I_k|}. A basis of this space generates a {d \times |A|} coordinate matrix of full rank, which implies that there is at least one non-singular {d \times d} minor. This implies that there exists a function {h: A \rightarrow {\bf F}} in this space which is nowhere vanishing on some subset {A'} of {A} of cardinality at least {|A|-|I_k|}.

If we multiply (3) by {h(x_k)} and sum in {x_k}, we conclude that

\displaystyle \sum_{a \in A} c_a h(a) \delta_a(x_1) \dots \delta_a(x_{k-1})

\displaystyle = \sum_{i=1}^{k-1} \sum_{\alpha \in I_i} f_{i,\alpha}(x_i)\tilde g_{i,\alpha}( x_1,\dots,x_{i-1},x_{i+1},\dots,x_{k-1})

where

\displaystyle \tilde g_{i,\alpha}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_{k-1})

\displaystyle := \sum_{x_k \in A} g_{i,\alpha}(x_1,\dots,x_{i-1},x_{i+1},\dots,x_k) h(x_k).

The right-hand side has rank at most {|A|-1-|I_k|}, since the summands are rank one functions. On the other hand, from the induction hypothesis the left-hand side has rank at least {|A|-|I_k|}, giving the required contradiction. \Box

On the other hand, we have the following (symmetrised version of a) beautifully simple observation of Croot, Lev, and Pach:

Lemma 2 On {({\bf F}_3^n)^3}, the rank of the function {(x,y,z) \mapsto \delta_{0^n}(x+y+z)} is at most {3N}, where

\displaystyle N := \sum_{a,b,c \geq 0: a+b+c=n, b+2c \leq 2n/3} \frac{n!}{a!b!c!}.

Proof: Using the identity {\delta_0(x) = 1 - x^2} for {x \in {\bf F}_3}, we have

\displaystyle \delta_{0^n}(x+y+z) = \prod_{i=1}^n (1 - (x_i+y_i+z_i)^2).

The right-hand side is clearly a polynomial of degree {2n} in {x,y,z}, which is then a linear combination of monomials

\displaystyle x_1^{i_1} \dots x_n^{i_n} y_1^{j_1} \dots y_n^{j_n} z_1^{k_1} \dots z_n^{k_n}

with {i_1,\dots,i_n,j_1,\dots,j_n,k_1,\dots,k_n \in \{0,1,2\}} with

\displaystyle i_1 + \dots + i_n + j_1 + \dots + j_n + k_1 + \dots + k_n \leq 2n.

In particular, from the pigeonhole principle, at least one of {i_1 + \dots + i_n, j_1 + \dots + j_n, k_1 + \dots + k_n} is at most {2n/3}.

Consider the contribution of the monomials for which {i_1 + \dots + i_n \leq 2n/3}. We can regroup this contribution as

\displaystyle \sum_\alpha f_\alpha(x) g_\alpha(y,z)

where {\alpha} ranges over those {(i_1,\dots,i_n) \in \{0,1,2\}^n} with {i_1 + \dots + i_n \leq 2n/3}, {f_\alpha} is the monomial

\displaystyle f_\alpha(x_1,\dots,x_n) := x_1^{i_1} \dots x_n^{i_n}

and {g_\alpha: {\bf F}_3^n \times {\bf F}_3^n \rightarrow {\bf F}_3} is some explicitly computable function whose exact form will not be of relevance to our argument. The number of such {\alpha} is equal to {N}, so this contribution has rank at most {N}. The remaining contributions arising from the cases {j_1 + \dots + j_n \leq 2n/3} and {k_1 + \dots + k_n \leq 2n/3} similarly have rank at most {N} (grouping the monomials so that each monomial is only counted once), so the claim follows.

Upon restricting from {({\bf F}_3^n)^3} to {A^3}, the rank of {(x,y,z) \mapsto \delta_{0^n}(x+y+z)} is still at most {3N}. The two lemmas then combine to give the Ellenberg-Gijswijt bound

\displaystyle |A| \leq 3N.

All that remains is to compute the asymptotic behaviour of {N}. This can be done using the general tool of Cramer’s theorem, but can also be derived from Stirling’s formula (discussed in this previous post). Indeed, if {a = (\alpha+o(1)) n}, {b = (\beta+o(1)) n}, {c = (\gamma+o(1)) n} for some {\alpha,\beta,\gamma \geq 0} summing to {1}, Stirling’s formula gives

\displaystyle \frac{n!}{a!b!c!} = \exp( n (h(\alpha,\beta,\gamma) + o(1)) )

where {h} is the entropy function

\displaystyle h(\alpha,\beta,\gamma) = \alpha \log \frac{1}{\alpha} + \beta \log \frac{1}{\beta} + \gamma \log \frac{1}{\gamma}.

We then have

\displaystyle N = \exp( n (X + o(1)) )

where {X} is the maximum entropy {h(\alpha,\beta,\gamma)} subject to the constraints

\displaystyle \alpha,\beta,\gamma \geq 0; \alpha+\beta+\gamma=1; \beta+2\gamma \leq 2/3.

A routine Lagrange multiplier computation shows that the maximum occurs when

\displaystyle \alpha = \frac{32}{3(15 + \sqrt{33})}

\displaystyle \beta = \frac{4(\sqrt{33}-1)}{3(15+\sqrt{33})}

\displaystyle \gamma = \frac{(\sqrt{33}-1)^2}{6(15+\sqrt{33})}

and {h(\alpha,\beta,\gamma)} is approximately {1.013455}, giving rise to the claimed bound of {O( 2.756^n )}.
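
These numbers are easy to confirm numerically. The following Python sketch (mine, purely as a sanity check) verifies that the claimed {(\alpha,\beta,\gamma)} satisfies the constraints and yields {h \approx 1.013455}, and also computes {N} exactly from its definition for a few values of {n} to watch {N^{1/n}} approach {e^h \approx 2.755}:

```
from math import factorial, log, sqrt, exp

# Entropy at the claimed optimum of the Lagrange multiplier computation.
s = sqrt(33)
alpha = 32 / (3 * (15 + s))
beta = 4 * (s - 1) / (3 * (15 + s))
gamma = (s - 1) ** 2 / (6 * (15 + s))
h = -(alpha * log(alpha) + beta * log(beta) + gamma * log(gamma))
print(alpha + beta + gamma, beta + 2 * gamma)   # 1.0 and 2/3: constraints hold
print(h, exp(h))                                # ~1.013455 and ~2.755

# Exact value of N from Lemma 2: sum of n!/(a!b!c!) over a+b+c=n, b+2c <= 2n/3.
def N(n):
    total = 0
    for c in range(n + 1):
        for b in range(n - c + 1):
            a = n - b - c
            if 3 * (b + 2 * c) <= 2 * n:
                total += factorial(n) // (factorial(a) * factorial(b) * factorial(c))
    return total

for n in (30, 90, 300):
    print(n, N(n) ** (1.0 / n))   # tends towards exp(h) ~ 2.755 as n grows
```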

Remark 3 As noted in the Ellenberg and Gijswijt papers, the above argument extends readily to other fields than {{\bf F}_3} to control the maximal size of subset of {{\bf F}^n} that has no non-trivial solutions to the equation {ax+by+cz=0}, where {a,b,c \in {\bf F}} are non-zero constants that sum to zero. Of course one replaces the function {(x,y,z) \mapsto \delta_{0^n}(x+y+z)} in Lemma 2 by {(x,y,z) \mapsto \delta_{0^n}(ax+by+cz)} in this case.

Remark 4 This symmetrised formulation suggests that one possible way to improve slightly on the numerical quantity {2.756} would be to find a more efficient way to decompose {\delta_{0^n}(x+y+z)} into rank one functions; however, I was not able to do so (though such improvements are reminiscent of the Strassen type algorithms for fast matrix multiplication).

Remark 5 It is tempting to see if this method can get non-trivial upper bounds for sets {A} with no length {4} progressions, in (say) {{\bf F}_5^n}. One can run the above arguments, replacing the function

\displaystyle (x,y,z) \mapsto \delta_{0^n}(x+y+z)

with

\displaystyle (x,y,z,w) \mapsto \delta_{0^n}(x-2y+z) \delta_{0^n}(y-2z+w);

this leads to the bound {|A| \leq 4N} where

\displaystyle N := \sum_{a,b,c,d,e \geq 0: a+b+c+d+e=n, b+2c+3d+4e \leq 2n} \frac{n!}{a!b!c!d!e!}.

Unfortunately, {N} is asymptotic to {\frac{1}{2} 5^n} and so this bound is in fact slightly worse than the trivial bound {|A| \leq 5^n}! However, there is a slim chance that there is a more efficient way to decompose {\delta_{0^n}(x-2y+z) \delta_{0^n}(y-2z+w)} into rank one functions that would give a non-trivial bound on {A}. I experimented with a few possible such decompositions but unfortunately without success.
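
For what it is worth, the asymptotic {N \sim \frac{1}{2} 5^n} is also easy to check numerically; the short Python computation below (my own check) evaluates the ratio {N/5^n} for a few values of {n}:

```
from math import factorial

def N(n):
    """Sum of n!/(a!b!c!d!e!) over a+b+c+d+e = n with b+2c+3d+4e <= 2n."""
    total = 0
    for e in range(n + 1):
        for d in range(n + 1 - e):
            for c in range(n + 1 - e - d):
                for b in range(n + 1 - e - d - c):
                    a = n - b - c - d - e
                    if b + 2 * c + 3 * d + 4 * e <= 2 * n:
                        total += factorial(n) // (
                            factorial(a) * factorial(b) * factorial(c)
                            * factorial(d) * factorial(e))
    return total

for n in (5, 15, 40):
    print(n, N(n) / 5 ** n)   # the ratio approaches 1/2 from above as n grows
```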

Remark 6 Return now to the capset problem. Since Lemma 1 is valid for any field {{\bf F}}, one could perhaps hope to get better bounds by viewing the Kronecker delta function {\delta} as taking values in another field than {{\bf F}_3}, such as the complex numbers {{\bf C}}. However, as soon as one works in a field of characteristic other than {3}, one can adjoin a cube root {\omega} of unity, and one now has the Fourier decomposition

\displaystyle \delta_{0^n}(x+y+z) = \frac{1}{3^n} \sum_{\xi \in {\bf F}_3^n} \omega^{\xi \cdot x} \omega^{\xi \cdot y} \omega^{\xi \cdot z}.

Moving to the Fourier basis, we conclude from Lemma 1 that the function {(x,y,z) \mapsto \delta_{0^n}(x+y+z)} on {({\bf F}_3^n)^3} now has rank exactly {3^n}, and so one cannot improve upon the trivial bound of {|A| \leq 3^n} by this method using fields of characteristic other than three as the range field. So it seems one has to stick with {{\bf F}_3} (or the algebraic closure thereof).

Thanks to Jordan Ellenberg and Ben Green for helpful discussions.

I’ve just uploaded to the arXiv my paper “Equivalence of the logarithmically averaged Chowla and Sarnak conjectures“, submitted to the Festschrift “Number Theory – Diophantine problems, uniform distribution and applications” in honour of Robert F. Tichy. This paper is a spinoff of my previous paper establishing a logarithmically averaged version of the Chowla (and Elliott) conjectures in the two-point case. In that paper, the estimate

\displaystyle  \sum_{n \leq x} \frac{\lambda(n) \lambda(n+h)}{n} = o( \log x )

as {x \rightarrow \infty} was demonstrated, where {h} was any positive integer and {\lambda} denoted the Liouville function. The proof proceeded using a method I call the “entropy decrement argument”, which ultimately reduced matters to establishing a bound of the form

\displaystyle  \sum_{n \leq x} \frac{|\sum_{h \leq H} \lambda(n+h) e( \alpha h)|}{n} = o( H \log x )

whenever {H} was a slowly growing function of {x}. This was in turn established in a previous paper of Matomaki, Radziwill, and myself, using the recent breakthrough of Matomaki and Radziwill.

It is natural to ask to what extent the arguments can be adapted to attack the higher-point cases of the logarithmically averaged Chowla conjecture (ignoring for this post the more general Elliott conjecture for other bounded multiplicative functions than the Liouville function). That is to say, one would like to prove that

\displaystyle  \sum_{n \leq x} \frac{\lambda(n+h_1) \dots \lambda(n+h_k)}{n} = o( \log x )

as {x \rightarrow \infty} for any fixed distinct integers {h_1,\dots,h_k}. As it turns out (and as is detailed in the current paper), the entropy decrement argument extends to this setting (after using some known facts about linear equations in primes), and allows one to reduce the above estimate to an estimate of the form

\displaystyle  \sum_{n \leq x} \frac{1}{n} \| \lambda \|_{U^d[n, n+H]} = o( \log x )

for {H} a slowly growing function of {x} and some fixed {d} (in fact we can take {d=k-1} for {k \geq 3}), where {U^d} is the (normalised) local Gowers uniformity norm. (In the case {k=3}, {d=2}, this becomes the Fourier-uniformity conjecture discussed in this previous post.) If one then applied the (now proven) inverse conjecture for the Gowers norms, this estimate is in turn equivalent to the more complicated looking assertion

\displaystyle  \sum_{n \leq x} \frac{1}{n} \sup |\sum_{h \leq H} \lambda(n+h) F( g^h x )| = o( \log x ) \ \ \ \ \ (1)

where the supremum is over all possible choices of nilsequences {h \mapsto F(g^h x)} of controlled step and complexity (see the paper for definitions of these terms).

The main novelty in the paper (elaborating upon a previous comment I had made on this blog) is to observe that this latter estimate in turn follows from the logarithmically averaged form of Sarnak’s conjecture (discussed in this previous post), namely that

\displaystyle  \sum_{n \leq x} \frac{1}{n} \lambda(n) F( T^n x )= o( \log x )

whenever {n \mapsto F(T^n x)} is a zero entropy (i.e. deterministic) sequence. Morally speaking, this follows from the well-known fact that nilsequences have zero entropy, but the presence of the supremum in (1) means that we need a little bit more; roughly speaking, we need the class of nilsequences of a given step and complexity to have “uniformly zero entropy” in some sense.

On the other hand, it was already known (see previous post) that the Chowla conjecture implied the Sarnak conjecture, and similarly for the logarithmically averaged form of the two conjectures. Putting all these implications together, we obtain the pleasant fact that the logarithmically averaged Sarnak and Chowla conjectures are equivalent, which is the main result of the current paper. There have been a large number of special cases of the Sarnak conjecture worked out (when the deterministic sequence involved came from a special dynamical system), so these results can now also be viewed as partial progress towards the Chowla conjecture (at least with logarithmic averaging). However, my feeling is that the full resolution of these conjectures will not come from these sorts of special cases; instead, conjectures like the Fourier-uniformity conjecture in this previous post look more promising to attack.

It would also be nice to get rid of the pesky logarithmic averaging, but this seems to be an inherent requirement of the entropy decrement argument method, so one would probably have to find a way to avoid that argument if one were to remove the log averaging.

When teaching mathematics, the traditional method of lecturing in front of a blackboard is still hard to improve upon, despite all the advances in modern technology.  However, there are some nice things one can do in an electronic medium, such as this blog.  Here, I would like to experiment with the ability to animate images, which I think can convey some mathematical concepts in ways that cannot be easily replicated by traditional static text and images. Given that many readers may find these animations annoying, I am placing the rest of the post below the fold.

Read the rest of this entry »

Throughout this post we shall always work in the smooth category, thus all manifolds, maps, coordinate charts, and functions are assumed to be smooth unless explicitly stated otherwise.

A (real) manifold {M} can be defined in at least two ways. On one hand, one can define the manifold extrinsically, as a subset of some standard space such as a Euclidean space {{\bf R}^d}. On the other hand, one can define the manifold intrinsically, as a topological space equipped with an atlas of coordinate charts. The fundamental embedding theorems show that, under reasonable assumptions, the intrinsic and extrinsic approaches give the same classes of manifolds (up to isomorphism in various categories). For instance, we have the following (special case of) the Whitney embedding theorem:

Theorem 1 (Whitney embedding theorem) Let {M} be a compact manifold. Then there exists an embedding {u: M \rightarrow {\bf R}^d} from {M} to a Euclidean space {{\bf R}^d}.

In fact, if {M} is {n}-dimensional, one can take {d} to equal {2n}, which is often best possible (easy examples include the circle {{\bf R}/{\bf Z}} which embeds into {{\bf R}^2} but not {{\bf R}^1}, or the Klein bottle that embeds into {{\bf R}^4} but not {{\bf R}^3}). One can also relax the compactness hypothesis on {M} to second countability, but we will not pursue this extension here. We give a “cheap” proof of this theorem below the fold which allows one to take {d} equal to {2n+1}.

A significant strengthening of the Whitney embedding theorem is (a special case of) the Nash embedding theorem:

Theorem 2 (Nash embedding theorem) Let {(M,g)} be a compact Riemannian manifold. Then there exists an isometric embedding {u: M \rightarrow {\bf R}^d} from {M} to a Euclidean space {{\bf R}^d}.

In order to obtain the isometric embedding, the dimension {d} has to be a bit larger than what is needed for the Whitney embedding theorem; in this article of Gunther the bound

\displaystyle  d = \max( 	n(n+5)/2, n(n+3)/2 + 5) \ \ \ \ \ (1)

is attained, which I believe is still the record for large {n}. (In the converse direction, one cannot do better than {d = \frac{n(n+1)}{2}}, basically because this is the number of degrees of freedom in the Riemannian metric {g}.) Nash’s original proof of this theorem used what is now known as the Nash-Moser inverse function theorem, but a subsequent simplification of Gunther allowed one to proceed using just the ordinary inverse function theorem (in Banach spaces).

I recently had the need to invoke the Nash embedding theorem to establish a blowup result for a nonlinear wave equation, which motivated me to go through the proof of the theorem more carefully. Below the fold I give a proof of the theorem that does not attempt to give an optimal value of {d}, but which hopefully isolates the main ideas of the argument (as simplified by Gunther). One advantage of not optimising in {d} is that it allows one to freely exploit the very useful tool of pairing together two maps {u_1: M \rightarrow {\bf R}^{d_1}}, {u_2: M \rightarrow {\bf R}^{d_2}} to form a combined map {(u_1,u_2): M \rightarrow {\bf R}^{d_1+d_2}} that can be closer to an embedding or an isometric embedding than the original maps {u_1,u_2}. This lets one perform a “divide and conquer” strategy in which one first starts with the simpler problem of constructing some “partial” embeddings of {M} and then pairs them together to form a “better” embedding.

In preparing these notes, I found the articles of Deane Yang and of Siyuan Lu to be helpful.

Read the rest of this entry »

Over the last few years, a large group of mathematicians have been developing an online database to systematically collect the known facts, numerical data, and algorithms concerning some of the most central types of objects in modern number theory, namely the L-functions associated to various number fields, curves, and modular forms, as well as further data about these modular forms.  This of course includes the most famous examples of L-functions and modular forms respectively, namely the Riemann zeta function \zeta(s) and the discriminant modular form \Delta(q), but there are countless other examples of both. The connections between these classes of objects lie at the heart of the Langlands programme.

As of today, the “L-functions and modular forms database” is now out of beta, and open to the public; at present the database is mostly geared towards specialists in computational number theory, but will hopefully develop into a more broadly useful resource over time.  An article by John Cremona summarising the purpose of the database can be found here.

(Thanks to Andrew Sutherland and Kiran Kedlaya for the information.)

The International Mathematical Union (with the assistance of the Friends of the International Mathematical Union and The World Academy of Sciences, and supported by Ian Agol, Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Richard Taylor, and myself) has just launched the Graduate Breakout Fellowships, which will offer highly qualified students from developing countries a full scholarship to study for a PhD in mathematics at an institution that is also located in a developing country.  Nominations for this fellowship (which should be from a sponsoring mathematician, preferably a mentor of the nominee) have just opened (with an application deadline of June 22); details on the nomination process and eligibility requirements can be found at this page.
