The Chowla conjecture asserts, among other things, that one has the asymptotic

$\displaystyle \frac{1}{X} \sum_{n \leq X} \lambda(n+h_1) \dots \lambda(n+h_k) = o(1)$

as ${X \rightarrow \infty}$ for any distinct integers ${h_1,\dots,h_k}$, where ${\lambda}$ is the Liouville function. (The usual formulation of the conjecture also allows one to consider more general linear forms ${a_i n + b_i}$ than the shifts ${n+h_i}$, but for sake of discussion let us focus on the shift case.) This conjecture remains open for ${k \geq 2}$, though there are now some partial results when one averages either in ${x}$ or in the ${h_1,\dots,h_k}$, as discussed in this recent post.

A natural generalisation of the Chowla conjecture is the Elliott conjecture. Its original formulation was basically as follows: one had

$\displaystyle \frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k) = o(1) \ \ \ \ \ (1)$

whenever ${g_1,\dots,g_k}$ were bounded completely multiplicative functions and ${h_1,\dots,h_k}$ were distinct integers, and one of the ${g_i}$ was “non-pretentious” in the sense that

$\displaystyle \sum_p \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} = +\infty \ \ \ \ \ (2)$

for all Dirichlet characters ${\chi}$ and real numbers ${t}$. It is easy to see that some condition like (2) is necessary; for instance if ${g(n) := \chi(n) n^{it}}$ and ${\chi}$ has period ${q}$ then ${\frac{1}{X} \sum_{n \leq X} g(n+q) \overline{g(n)}}$ can be verified to be bounded away from zero as ${X \rightarrow \infty}$.
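For a concrete numerical feel for this necessity, here is a quick sketch (the choices below are illustrative only: ${\chi}$ is taken to be the non-principal character mod ${3}$, so ${q=3}$, and ${t=1}$):

```python
import numpy as np

# Sketch: g(n) = chi(n) n^{it} with chi the non-principal character mod 3
# and the illustrative value t = 1; the shifted correlation
# (1/X) sum_{n <= X} g(n+3) conj(g(n)) then stays near phi(3)/3 = 2/3.
X = 10**6
t = 1.0
n = np.arange(1, X + 1)
chi = np.array([0, 1, -1])  # chi(n) depends only on n mod 3

g = chi[n % 3] * np.exp(1j * t * np.log(n))
g3 = chi[(n + 3) % 3] * np.exp(1j * t * np.log(n + 3))
print(np.abs(np.mean(g3 * np.conj(g))))  # ~ 2/3, bounded away from zero
```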

In a previous paper with Matomaki and Radziwill, we provided a counterexample to the original formulation of the Elliott conjecture, and proposed that (2) be replaced with the stronger condition

$\displaystyle \inf_{|t| \leq X} \sum_{p \leq X} \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} \rightarrow +\infty \ \ \ \ \ (3)$

as ${X \rightarrow \infty}$ for any Dirichlet character ${\chi}$. To support this conjecture, we proved an averaged and non-asymptotic version of this conjecture which roughly speaking showed a bound of the form

$\displaystyle \frac{1}{H^k} \sum_{h_1,\dots,h_k \leq H} |\frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k)| \leq \varepsilon$

whenever ${H}$ was an arbitrarily slowly growing function of ${X}$, ${X}$ was sufficiently large (depending on ${\varepsilon,k}$ and the rate at which ${H}$ grows), and one of the ${g_i}$ obeyed the condition

$\displaystyle \inf_{|t| \leq AX} \sum_{p \leq X} \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} \geq A \ \ \ \ \ (4)$

for some ${A}$ that was sufficiently large depending on ${k,\varepsilon}$, and all Dirichlet characters ${\chi}$ of period at most ${A}$. As further support of this conjecture, I recently established the bound

$\displaystyle \frac{1}{\log \omega} |\sum_{X/\omega \leq n \leq X} \frac{g_1(n+h_1) g_2(n+h_2)}{n}| \leq \varepsilon$

under the same hypotheses, where ${\omega}$ is an arbitrarily slowly growing function of ${X}$.

In view of these results, it is tempting to conjecture that the condition (4) for one of the ${g_i}$ should be sufficient to obtain the bound

$\displaystyle |\frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k)| \leq \varepsilon$

when ${A}$ is large enough depending on ${k,\varepsilon}$. This may well be the case for ${k=2}$. However, the purpose of this blog post is to record a simple counterexample for ${k>2}$. Let’s take ${k=3}$ for simplicity. Let ${t_0}$ be a quantity much larger than ${X}$ but much smaller than ${X^2}$ (e.g. ${t_0 = X^{3/2}}$), and set

$\displaystyle g_1(n) := n^{it_0}; \quad g_2(n) := n^{-2it_0}; \quad g_3(n) := n^{it_0}.$

For ${X/2 \leq n \leq X}$, Taylor expansion gives

$\displaystyle (n+1)^{it_0} = n^{it_0} \exp( i t_0 / n ) + o(1)$

and

$\displaystyle (n+2)^{it_0} = n^{it_0} \exp( 2 i t_0 / n ) + o(1)$

and hence

$\displaystyle g_1(n) g_2(n+1) g_3(n+2) = 1 + o(1)$

and thus

$\displaystyle |\frac{1}{X} \sum_{X/2 \leq n \leq X} g_1(n) g_2(n+1) g_3(n+2)| \gg 1.$

On the other hand one can easily verify that all of the ${g_1,g_2,g_3}$ obey (4) (the restriction ${|t| \leq AX}$ there prevents ${t}$ from getting anywhere close to ${t_0}$). So it seems the correct non-asymptotic version of the Elliott conjecture is the following:

Conjecture 1 (Non-asymptotic Elliott conjecture) Let ${k}$ be a natural number, and let ${h_1,\dots,h_k}$ be integers. Let ${\varepsilon > 0}$, let ${A}$ be sufficiently large depending on ${k,\varepsilon,h_1,\dots,h_k}$, and let ${X}$ be sufficiently large depending on ${k,\varepsilon,h_1,\dots,h_k,A}$. Let ${g_1,\dots,g_k}$ be bounded multiplicative functions such that for some ${1 \leq i \leq k}$, one has

$\displaystyle \inf_{|t| \leq AX^{k-1}} \sum_{p \leq X} \frac{1 - \hbox{Re}( g_i(p) \overline{\chi(p)} p^{-it})}{p} \geq A$

for all Dirichlet characters ${\chi}$ of conductor at most ${A}$. Then

$\displaystyle |\frac{1}{X} \sum_{n \leq X} g_1(n+h_1) \dots g_k(n+h_k)| \leq \varepsilon.$

The ${k=1}$ case of this conjecture follows from the work of Halasz; in my recent paper a logarithmically averaged version of the ${k=2}$ case of this conjecture is established. The requirement to take ${t}$ to be as large as ${A X^{k-1}}$ does not emerge in the averaged Elliott conjecture in my previous paper with Matomaki and Radziwill; it thus seems that this averaging has concealed some of the subtler features of the Elliott conjecture. (However, this subtlety does not seem to affect the asymptotic version of the conjecture formulated in that paper, in which the hypothesis is of the form (3), and the conclusion is of the form (1).)

A similar subtlety arises when trying to control the maximal integral

$\displaystyle \frac{1}{X} \int_X^{2X} \sup_\alpha \frac{1}{H} |\sum_{x \leq n \leq x+H} g(n) e(\alpha n)|\ dx. \ \ \ \ \ (5)$

In my previous paper with Matomaki and Radziwill, we could show that the easier expression

$\displaystyle \frac{1}{X} \sup_\alpha \int_X^{2X} \frac{1}{H} |\sum_{x \leq n \leq x+H} g(n) e(\alpha n)|\ dx \ \ \ \ \ (6)$

was small (for ${H}$ a slowly growing function of ${X}$) if ${g}$ was bounded and completely multiplicative, and one had a condition of the form

$\displaystyle \inf_{|t| \leq AX} \sum_{p \leq X} \frac{1 - \hbox{Re}( g(p) \overline{\chi(p)} p^{-it})}{p} \geq A \ \ \ \ \ (7)$

for some large ${A}$. However, to obtain an analogous bound for (5) it now appears that one needs to strengthen the above condition to

$\displaystyle \inf_{|t| \leq AX^2} \sum_{p \leq X} \frac{1 - \hbox{Re}( g(p) \overline{\chi(p)} p^{-it})}{p} \geq A$

in order to address the counterexample in which ${g(n) = n^{it_0}}$ for some ${t_0}$ between ${X}$ and ${X^2}$. This seems to suggest that proving (5) (which is closely related to the ${k=3}$ case of the Chowla conjecture) could in fact be rather difficult; the estimation of (6) relied primarily on prior work of Matomaki and Radziwill which used the hypothesis (7), but as this hypothesis is not sufficient to conclude (5), some additional input must also be used.

One of the major activities in probability theory is studying the various statistics that can be produced from a complex system with many components. One of the simplest possible systems one can consider is a finite sequence ${X_1,\dots,X_n}$ or an infinite sequence ${X_1,X_2,\dots}$ of jointly independent scalar random variables, with the case when the ${X_i}$ are also identically distributed (i.e. the ${X_i}$ are iid) being a model case of particular interest. (In some cases one may consider a triangular array ${(X_{n,i})_{1 \leq i \leq n}}$ of scalar random variables, rather than a finite or infinite sequence.) There are many statistics of such sequences that one can study, but among the most basic such statistics are the partial sums

$\displaystyle S_n := X_1 + \dots + X_n.$

The first fundamental result about these sums is the law of large numbers (or LLN for short), which comes in two formulations, weak (WLLN) and strong (SLLN). To state these laws, we first must define the notion of convergence in probability.

Definition 1 Let ${X_n}$ be a sequence of random variables taking values in a separable metric space ${R = (R,d)}$ (e.g. the ${X_n}$ could be scalar random variables, taking values in ${{\bf R}}$ or ${{\bf C}}$), and let ${X}$ be another random variable taking values in ${R}$. We say that ${X_n}$ converges in probability to ${X}$ if, for every radius ${\varepsilon > 0}$, one has ${{\bf P}( d(X_n,X) > \varepsilon ) \rightarrow 0}$ as ${n \rightarrow \infty}$. Thus, if ${X_n, X}$ are scalar, we have ${X_n}$ converging to ${X}$ in probability if ${{\bf P}( |X_n-X| > \varepsilon ) \rightarrow 0}$ as ${n \rightarrow \infty}$ for any given ${\varepsilon > 0}$.

The measure-theoretic analogue of convergence in probability is convergence in measure.

It is instructive to compare the notion of convergence in probability with almost sure convergence. It is easy to see that ${X_n}$ converges almost surely to ${X}$ if and only if, for every radius ${\varepsilon > 0}$, one has ${{\bf P}( \bigvee_{n \geq N} (d(X_n,X)>\varepsilon) ) \rightarrow 0}$ as ${N \rightarrow \infty}$; thus, roughly speaking, convergence in probability is good for controlling how close a single random variable ${X_n}$ is to its putative limiting value ${X}$, while almost sure convergence is good for controlling how close the entire tail ${(X_n)_{n \geq N}}$ of a sequence of random variables is to its putative limit ${X}$.
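For instance, the standard example separating the two notions is a sequence of jointly independent indicator variables ${X_n}$ with ${{\bf P}(X_n = 1) = 1/n}$ and ${{\bf P}(X_n = 0) = 1 - 1/n}$: this sequence converges to zero in probability, but (by the second Borel-Cantelli lemma, as ${\sum_n 1/n}$ diverges) it almost surely equals ${1}$ infinitely often, so it does not converge almost surely. A quick simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**5
n = np.arange(1, N + 1)

# Jointly independent X_n with P(X_n = 1) = 1/n.
X = (rng.random(N) < 1.0 / n).astype(int)

# In probability: P(X_n = 1) = 1/n -> 0.  Almost surely: the sample path
# keeps returning to 1; the largest such index is typically of order N,
# and grows without bound as N is increased.
print("largest n <= N with X_n = 1:", n[X == 1].max())
```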

We have the following easy relationships between convergence in probability and almost sure convergence:

Exercise 2 Let ${X_n}$ be a sequence of scalar random variables, and let ${X}$ be another scalar random variable.

• (i) If ${X_n \rightarrow X}$ almost surely, show that ${X_n \rightarrow X}$ in probability. Give a counterexample to show that the converse does not necessarily hold.
• (ii) Suppose that ${\sum_n {\bf P}( |X_n-X| > \varepsilon ) < \infty}$ for all ${\varepsilon > 0}$. Show that ${X_n \rightarrow X}$ almost surely. Give a counterexample to show that the converse does not necessarily hold.
• (iii) If ${X_n \rightarrow X}$ in probability, show that there is a subsequence ${X_{n_j}}$ of the ${X_n}$ such that ${X_{n_j} \rightarrow X}$ almost surely.
• (iv) If ${X_n,X}$ are absolutely integrable and ${{\bf E} |X_n-X| \rightarrow 0}$ as ${n \rightarrow \infty}$, show that ${X_n \rightarrow X}$ in probability. Give a counterexample to show that the converse does not necessarily hold.
• (v) (Urysohn subsequence principle) Suppose that every subsequence ${X_{n_j}}$ of ${X_n}$ has a further subsequence ${X_{n_{j_k}}}$ that converges to ${X}$ in probability. Show that ${X_n}$ also converges to ${X}$ in probability.
• (vi) Does the Urysohn subsequence principle still hold if “in probability” is replaced with “almost surely” throughout?
• (vii) If ${X_n}$ converges in probability to ${X}$, and ${F: {\bf R} \rightarrow {\bf R}}$ or ${F: {\bf C} \rightarrow {\bf C}}$ is continuous, show that ${F(X_n)}$ converges in probability to ${F(X)}$. More generally, if for each ${i=1,\dots,k}$, ${X^{(i)}_n}$ is a sequence of scalar random variables that converge in probability to ${X^{(i)}}$, and ${F: {\bf R}^k \rightarrow {\bf R}}$ or ${F: {\bf C}^k \rightarrow {\bf C}}$ is continuous, show that ${F(X^{(1)}_n,\dots,X^{(k)}_n)}$ converges in probability to ${F(X^{(1)},\dots,X^{(k)})}$. (Thus, for instance, if ${X_n}$ and ${Y_n}$ converge in probability to ${X}$ and ${Y}$ respectively, then ${X_n + Y_n}$ and ${X_n Y_n}$ converge in probability to ${X+Y}$ and ${XY}$ respectively.)
• (viii) (Fatou’s lemma for convergence in probability) If ${X_n}$ are non-negative and converge in probability to ${X}$, show that ${{\bf E} X \leq \liminf_{n \rightarrow \infty} {\bf E} X_n}$.
• (ix) (Dominated convergence in probability) If ${X_n}$ converge in probability to ${X}$, and one almost surely has ${|X_n| \leq Y}$ for all ${n}$ and some absolutely integrable ${Y}$, show that ${{\bf E} X_n}$ converges to ${{\bf E} X}$.

Exercise 3 Let ${X_1,X_2,\dots}$ be a sequence of scalar random variables converging in probability to another random variable ${X}$.

• (i) Suppose that there is a random variable ${Y}$ which is independent of ${X_i}$ for each individual ${i}$. Show that ${Y}$ is also independent of ${X}$.
• (ii) Suppose that the ${X_1,X_2,\dots}$ are jointly independent. Show that ${X}$ is almost surely constant (i.e. there is a deterministic scalar ${c}$ such that ${X=c}$ almost surely).

We can now state the weak and strong law of large numbers, in the model case of iid random variables.

Theorem 4 (Law of large numbers, model case) Let ${X_1, X_2, \dots}$ be an iid sequence of copies of an absolutely integrable random variable ${X}$ (thus the ${X_i}$ are independent and all have the same distribution as ${X}$). Write ${\mu := {\bf E} X}$, and for each natural number ${n}$, let ${S_n}$ denote the random variable ${S_n := X_1 + \dots + X_n}$.

• (i) (Weak law of large numbers) The random variables ${S_n/n}$ converge in probability to ${\mu}$.
• (ii) (Strong law of large numbers) The random variables ${S_n/n}$ converge almost surely to ${\mu}$.

Informally: if ${X_1,\dots,X_n}$ are iid with mean ${\mu}$, then ${X_1 + \dots + X_n \approx \mu n}$ for ${n}$ large. Clearly the strong law of large numbers implies the weak law, but the weak law is easier to prove (and has somewhat better quantitative estimates). There are several variants of the law of large numbers, for instance when one drops the hypothesis of identical distribution, or when the random variable ${X}$ is not absolutely integrable, or if one seeks more quantitative bounds on the rate of convergence; we will discuss some of these variants below the fold.
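Here is a quick simulation sketch of both laws, using for illustration iid ${\hbox{Exponential}(1)}$ samples (so that ${\mu = 1}$):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10**6

# iid Exponential(1) samples, so mu = E X = 1.
X = rng.exponential(1.0, size=N)
running_mean = np.cumsum(X) / np.arange(1, N + 1)

# Weak law: S_n/n is close to mu at any single large n.
for n in (10**2, 10**4, 10**6):
    print(n, running_mean[n - 1])

# Strong law (numerical analogue): the entire tail stays close to mu.
print("max |S_n/n - 1| over n >= 10^5:",
      np.abs(running_mean[10**5 - 1:] - 1.0).max())
```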

It is instructive to compare the law of large numbers with what one can obtain from the Kolmogorov zero-one law, discussed in Notes 2. Observe that if the ${X_n}$ are real-valued, then the limit superior ${\limsup_{n \rightarrow \infty} S_n/n}$ and limit inferior ${\liminf_{n \rightarrow \infty} S_n/n}$ are tail random variables in the sense that they are not affected if one changes finitely many of the ${X_n}$; in particular, events such as ${\limsup_{n \rightarrow \infty} S_n/n > t}$ are tail events for any ${t \in {\bf R}}$. From this and the zero-one law we see that there must exist deterministic quantities ${-\infty \leq \mu_- \leq \mu_+ \leq +\infty}$ such that ${\limsup_{n \rightarrow \infty} S_n/n = \mu_+}$ and ${\liminf_{n \rightarrow \infty} S_n/n = \mu_-}$ almost surely. The strong law of large numbers can then be viewed as the assertion that ${\mu_- = \mu_+ = \mu}$ when ${X}$ is absolutely integrable. On the other hand, the zero-one law argument does not require absolute integrability (and one can replace the denominator ${n}$ by other functions of ${n}$ that go to infinity as ${n \rightarrow \infty}$).

The law of large numbers asserts, roughly speaking, that the theoretical expectation ${\mu}$ of a random variable ${X}$ can be approximated by taking a large number of independent samples ${X_1,\dots,X_n}$ of ${X}$ and then forming the empirical mean ${S_n/n = \frac{X_1+\dots+X_n}{n}}$. This ability to approximate the theoretical statistics of a probability distribution through empirical data is one of the basic starting points for mathematical statistics, though this is not the focus of the course here. The tendency of statistics such as ${S_n/n}$ to cluster closely around their mean value ${\mu}$ is the simplest instance of the concentration of measure phenomenon, which is of tremendous significance not only within probability, but also in applications of probability to disciplines such as statistics, theoretical computer science, combinatorics, random matrix theory and high dimensional geometry. We will not discuss these topics much in this course, but see this previous blog post for some further discussion.

There are several ways to prove the law of large numbers (in both forms). One basic strategy is to use the moment method – controlling statistics such as ${S_n/n}$ by computing moments such as the mean ${{\bf E} S_n/n}$, variance ${{\bf E} |S_n/n - {\bf E} S_n/n|^2}$, or higher moments such as ${{\bf E} |S_n/n - {\bf E} S_n/n|^k}$ for ${k = 4, 6, \dots}$. The joint independence of the ${X_i}$ makes such moments fairly easy to compute, requiring only some elementary combinatorics. A direct application of the moment method typically requires one to make a finite moment assumption such as ${{\bf E} |X|^k < \infty}$, but as we shall see, one can reduce fairly easily to this case by a truncation argument.
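For instance, here is the simplest instance of the moment method (a sketch, under the stronger assumption that ${X}$ has finite variance ${\sigma^2}$): by joint independence the cross terms in ${{\bf E}|S_n - n\mu|^2}$ vanish, so that

$\displaystyle {\bf E} |S_n/n - \mu|^2 = \frac{1}{n^2} \sum_{i=1}^n {\bf E} |X_i - \mu|^2 = \frac{\sigma^2}{n},$

and Chebyshev’s inequality then gives ${{\bf P}( |S_n/n - \mu| \geq \varepsilon ) \leq \frac{\sigma^2}{n \varepsilon^2} \rightarrow 0}$ for each fixed ${\varepsilon > 0}$, which is the weak law of large numbers under a second moment hypothesis.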

For the strong law of large numbers, one can also use methods relating to the theory of martingales, such as stopping time arguments and maximal inequalities; we present some classical arguments of Kolmogorov in this regard.

In the previous set of notes, we constructed the measure-theoretic notion of the Lebesgue integral, and used this to set up the probabilistic notion of expectation on a rigorous footing. In this set of notes, we will similarly construct the measure-theoretic concept of a product measure (restricting to the case of probability measures to avoid unnecessary technicalities), and use this to set up the probabilistic notion of independence on a rigorous footing. (To quote Durrett: “measure theory ends and probability theory begins with the definition of independence.”) We will be able to take virtually any collection of random variables (or probability distributions) and couple them together to be independent via the product measure construction, though for infinite products there is the slight technicality (a requirement of the Kolmogorov extension theorem) that the random variables need to range in standard Borel spaces. This is not the only way to couple together such random variables, but it is the simplest and the easiest to compute with in practice, as we shall see in the next few sets of notes.

I recently learned about a curious operation on square matrices known as sweeping, which is used in numerical linear algebra (particularly in applications to statistics), as a useful and more robust variant of the usual Gaussian elimination operations seen in undergraduate linear algebra courses. Given an ${n \times n}$ matrix ${A := (a_{ij})_{1 \leq i,j \leq n}}$ (with, say, complex entries) and an index ${1 \leq k \leq n}$, with the entry ${a_{kk}}$ non-zero, the sweep ${\hbox{Sweep}_k[A] = (\hat a_{ij})_{1 \leq i,j \leq n}}$ of ${A}$ at ${k}$ is the matrix given by the formulae

$\displaystyle \hat a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}$

$\displaystyle \hat a_{ik} := \frac{a_{ik}}{a_{kk}}$

$\displaystyle \hat a_{kj} := \frac{a_{kj}}{a_{kk}}$

$\displaystyle \hat a_{kk} := \frac{-1}{a_{kk}}$

for all ${i,j \in \{1,\dots,n\} \backslash \{k\}}$. Thus for instance if ${k=1}$, and ${A}$ is written in block form as

$\displaystyle A = \begin{pmatrix} a_{11} & X \\ Y & B \end{pmatrix} \ \ \ \ \ (1)$

for some ${1 \times n-1}$ row vector ${X}$, ${n-1 \times 1}$ column vector ${Y}$, and ${n-1 \times n-1}$ minor ${B}$, one has

$\displaystyle \hbox{Sweep}_1[A] = \begin{pmatrix} -1/a_{11} & X / a_{11} \\ Y/a_{11} & B - a_{11}^{-1} YX \end{pmatrix}. \ \ \ \ \ (2)$

The inverse sweep operation ${\hbox{Sweep}_k^{-1}[A] = (\check a_{ij})_{1 \leq i,j \leq n}}$ is given by a nearly identical set of formulae:

$\displaystyle \check a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}$

$\displaystyle \check a_{ik} := -\frac{a_{ik}}{a_{kk}}$

$\displaystyle \check a_{kj} := -\frac{a_{kj}}{a_{kk}}$

$\displaystyle \check a_{kk} := \frac{-1}{a_{kk}}$

for all ${i,j \in \{1,\dots,n\} \backslash \{k\}}$. One can check that these operations invert each other. Actually, each sweep turns out to have order ${4}$, so that ${\hbox{Sweep}_k^{-1} = \hbox{Sweep}_k^3}$: an inverse sweep performs the same operation as three forward sweeps. Sweeps also preserve the space of symmetric matrices (allowing one to cut down computational run time in that case by a factor of two), and behave well with respect to principal minors; a sweep of a principal minor is a principal minor of a sweep, after adjusting indices appropriately.
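Here is a minimal sketch of these operations in code (the function names are ad hoc rather than from any standard library, and no attention is paid to pivots that are close to zero), together with a check of the order ${4}$ property:

```python
import numpy as np

def sweep(A, k):
    """Sweep A at index k (0-based), assuming the pivot A[k, k] is non-zero."""
    A = np.asarray(A, dtype=float)
    B = A - np.outer(A[:, k], A[k, :]) / A[k, k]  # a_ij - a_ik a_kj / a_kk
    B[:, k] = A[:, k] / A[k, k]                   # a_ik / a_kk
    B[k, :] = A[k, :] / A[k, k]                   # a_kj / a_kk
    B[k, k] = -1.0 / A[k, k]                      # -1 / a_kk
    return B

def inv_sweep(A, k):
    """Inverse sweep: identical except for the sign of row and column k."""
    A = np.asarray(A, dtype=float)
    B = A - np.outer(A[:, k], A[k, :]) / A[k, k]
    B[:, k] = -A[:, k] / A[k, k]
    B[k, :] = -A[k, :] / A[k, k]
    B[k, k] = -1.0 / A[k, k]
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

assert np.allclose(inv_sweep(sweep(A, 2), 2), A)  # the operations invert each other
S = A
for _ in range(4):
    S = sweep(S, 2)
assert np.allclose(S, A)                          # each sweep has order 4
assert np.allclose(sweep(sweep(sweep(A, 2), 2), 2), inv_sweep(A, 2))  # Sweep^3 = Sweep^{-1}
```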

Remarkably, the sweep operators all commute with each other: ${\hbox{Sweep}_k \hbox{Sweep}_l = \hbox{Sweep}_l \hbox{Sweep}_k}$. If ${1 \leq k \leq n}$ and we perform the first ${k}$ sweeps (in any order) to a matrix

$\displaystyle A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$

with ${A_{11}}$ a ${k \times k}$ minor, ${A_{12}}$ a ${k \times n-k}$ matrix, ${A_{21}}$ a ${n-k \times k}$ matrix, and ${A_{22}}$ a ${n-k \times n-k}$ matrix, one obtains the new matrix

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_k[A] = \begin{pmatrix} -A_{11}^{-1} & A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.$

Note the appearance of the Schur complement in the bottom right block. Thus, for instance, one can essentially invert a matrix ${A}$ by performing all ${n}$ sweeps:

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}.$

If a matrix has the form

$\displaystyle A = \begin{pmatrix} B & X \\ Y & a \end{pmatrix}$

for a ${n-1 \times n-1}$ minor ${B}$, ${n-1 \times 1}$ column vector ${X}$, ${1 \times n-1}$ row vector ${Y}$, and scalar ${a}$, then performing the first ${n-1}$ sweeps gives

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_{n-1}[A] = \begin{pmatrix} -B^{-1} & B^{-1} X \\ Y B^{-1} & a - Y B^{-1} X \end{pmatrix}$

and all the components of this matrix are usable for various numerical linear algebra applications in statistics (e.g. in least squares regression). Given that sweeps behave well with inverses, it is perhaps not surprising that sweeps also behave well under determinants: the determinant of ${A}$ can be factored as the product of the entry ${a_{kk}}$ and the determinant of the ${n-1 \times n-1}$ matrix formed from ${\hbox{Sweep}_k[A]}$ by removing the ${k^{th}}$ row and column. As a consequence, one can compute the determinant of ${A}$ fairly efficiently (so long as the sweep operations don’t come close to dividing by zero) by sweeping the matrix at ${k=1,\dots,n}$ in turn, and multiplying together the ${kk^{th}}$ entry of the matrix just before the ${k^{th}}$ sweep.
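Both the inversion identity and this determinant factorisation are easy to check numerically; a sketch (on an arbitrary random test matrix):

```python
import numpy as np

def sweep(A, k):
    # Same sweep operation as in the sketch above (0-based index k).
    B = A - np.outer(A[:, k], A[k, :]) / A[k, k]
    B[:, k] = A[:, k] / A[k, k]
    B[k, :] = A[k, :] / A[k, k]
    B[k, k] = -1.0 / A[k, k]
    return B

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))

S, det = A.copy(), 1.0
for k in range(n):
    det *= S[k, k]   # the kk entry just before the k-th sweep
    S = sweep(S, k)

assert np.allclose(S, -np.linalg.inv(A))   # all n sweeps give -A^{-1}
assert np.isclose(det, np.linalg.det(A))   # the pivots multiply to det(A)
```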

It turns out that there is a simple geometric explanation for these seemingly magical properties of the sweep operation. Any ${n \times n}$ matrix ${A}$ creates a graph ${\hbox{Graph}[A] := \{ (X, AX): X \in {\bf R}^n \}}$ (where we think of ${{\bf R}^n}$ as the space of column vectors). This graph is an ${n}$-dimensional subspace of ${{\bf R}^n \times {\bf R}^n}$. Conversely, most subspaces of ${{\bf R}^n \times {\bf R}^n}$ arise as graphs; there are some that fail the vertical line test, but these form a positive codimension set of counterexamples.

We use ${e_1,\dots,e_n,f_1,\dots,f_n}$ to denote the standard basis of ${{\bf R}^n \times {\bf R}^n}$, with ${e_1,\dots,e_n}$ the standard basis for the first factor of ${{\bf R}^n}$ and ${f_1,\dots,f_n}$ the standard basis for the second factor. The operation of sweeping the ${k^{th}}$ entry then corresponds to a ninety degree rotation ${\hbox{Rot}_k: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n \times {\bf R}^n}$ in the ${e_k,f_k}$ plane, that sends ${f_k}$ to ${e_k}$ (and ${e_k}$ to ${-f_k}$), keeping all other basis vectors fixed: thus we have

$\displaystyle \hbox{Graph}[ \hbox{Sweep}_k[A] ] = \hbox{Rot}_k \hbox{Graph}[A]$

for generic ${n \times n}$ ${A}$ (more precisely, those ${A}$ with non-vanishing entry ${a_{kk}}$). For instance, if ${k=1}$ and ${A}$ is of the form (1), then ${\hbox{Graph}[A]}$ is the set of tuples ${(r,R,s,S) \in {\bf R} \times {\bf R}^{n-1} \times {\bf R} \times {\bf R}^{n-1}}$ obeying the equations

$\displaystyle a_{11} r + X R = s$

$\displaystyle Y r + B R = S.$

The image of ${(r,R,s,S)}$ under ${\hbox{Rot}_1}$ is ${(s, R, -r, S)}$. Since we can write the above system of equations (for ${a_{11} \neq 0}$) as

$\displaystyle \frac{-1}{a_{11}} s + \frac{X}{a_{11}} R = -r$

$\displaystyle \frac{Y}{a_{11}} s + (B - a_{11}^{-1} YX) R = S$

we see from (2) that ${\hbox{Rot}_1 \hbox{Graph}[A]}$ is the graph of ${\hbox{Sweep}_1[A]}$. Thus the sweep operation is a multidimensional generalisation of the high school geometry fact that the line ${y = mx}$ in the plane becomes ${y = \frac{-1}{m} x}$ after applying a ninety degree rotation.

It is then an instructive exercise to use this geometric interpretation of the sweep operator to recover all the remarkable properties about these operations listed above. It is also useful to compare the geometric interpretation of sweeping as rotation of the graph to that of Gaussian elimination, which instead shears and reflects the graph by various elementary transformations (this is what is going on geometrically when one performs Gaussian elimination on an augmented matrix). Rotations are less distorting than shears, so one can see geometrically why sweeping can produce fewer numerical artefacts than Gaussian elimination.

In Notes 0, we introduced the notion of a measure space ${\Omega = (\Omega, {\mathcal F}, \mu)}$, which includes as a special case the notion of a probability space. By selecting one such probability space ${(\Omega,{\mathcal F},\mu)}$ as a sample space, one obtains a model for random events and random variables, with random events ${E}$ being modeled by measurable sets ${E_\Omega}$ in ${{\mathcal F}}$, and random variables ${X}$ taking values in a measurable space ${R}$ being modeled by measurable functions ${X_\Omega: \Omega \rightarrow R}$. We then defined some basic operations on these random events and variables:

• Given events ${E,F}$, we defined the conjunction ${E \wedge F}$, the disjunction ${E \vee F}$, and the complement ${\overline{E}}$. For countable families ${E_1,E_2,\dots}$ of events, we similarly defined ${\bigwedge_{n=1}^\infty E_n}$ and ${\bigvee_{n=1}^\infty E_n}$. We also defined the empty event ${\emptyset}$ and the sure event ${\overline{\emptyset}}$, and what it meant for two events to be equal.
• Given random variables ${X_1,\dots,X_n}$ in ranges ${R_1,\dots,R_n}$ respectively, and a measurable function ${F: R_1 \times \dots \times R_n \rightarrow S}$, we defined the random variable ${F(X_1,\dots,X_n)}$ in range ${S}$. (As the special case ${n=0}$ of this, every deterministic element ${s}$ of ${S}$ was also a random variable taking values in ${S}$.) Given a relation ${P: R_1 \times \dots \times R_n \rightarrow \{\hbox{true}, \hbox{false}\}}$, we similarly defined the event ${P(X_1,\dots,X_n)}$. Conversely, given an event ${E}$, we defined the indicator random variable ${1_E}$. Finally, we defined what it meant for two random variables to be equal.
• Given an event ${E}$, we defined its probability ${{\bf P}(E)}$.

These operations obey various axioms; for instance, the boolean operations on events obey the axioms of a Boolean algebra, and the probability function ${E \mapsto {\bf P}(E)}$ obeys the Kolmogorov axioms. However, we will not focus on the axiomatic approach to probability theory here, instead basing the foundations of probability theory on the sample space models as discussed in Notes 0. (But see this previous post for a treatment of one such axiomatic approach.)

It turns out that almost all of the other operations on random events and variables we need can be constructed in terms of the above basic operations. In particular, this allows one to safely extend the sample space in probability theory whenever needed, provided one uses an extension that respects the above basic operations; this is an important operation when one needs to add new sources of randomness to an existing system of events and random variables, or to couple together two separate such systems into a joint system that extends both of the original systems. We gave a simple example of such an extension in the previous notes, but now we give a more formal definition:

Definition 1 Suppose that we are using a probability space ${\Omega = (\Omega, {\mathcal F}, \mu)}$ as the model for a collection of events and random variables. An extension of this probability space is a probability space ${\Omega' = (\Omega', {\mathcal F}', \mu')}$, together with a measurable map ${\pi: \Omega' \rightarrow \Omega}$ (sometimes called the factor map) which is probability-preserving in the sense that

$\displaystyle \mu'( \pi^{-1}(E) ) = \mu(E) \ \ \ \ \ (1)$

for all ${E \in {\mathcal F}}$. (Caution: this does not imply that ${\mu(\pi(F)) = \mu'(F)}$ for all ${F \in {\mathcal F}'}$ – why not?)

An event ${E}$ which is modeled by a measurable subset ${E_\Omega}$ in the sample space ${\Omega}$, will be modeled by the measurable set ${E_{\Omega'} := \pi^{-1}(E_\Omega)}$ in the extended sample space ${\Omega'}$. Similarly, a random variable ${X}$ taking values in some range ${R}$ that is modeled by a measurable function ${X_\Omega: \Omega \rightarrow R}$ in ${\Omega}$, will be modeled instead by the measurable function ${X_{\Omega'} := X_\Omega \circ \pi}$ in ${\Omega'}$. We also allow the extension ${\Omega'}$ to model additional events and random variables that were not modeled by the original sample space ${\Omega}$ (indeed, this is one of the main reasons why we perform extensions in probability in the first place).

Thus, for instance, the sample space ${\Omega'}$ in Example 3 of the previous post is an extension of the sample space ${\Omega}$ in that example, with the factor map ${\pi: \Omega' \rightarrow \Omega}$ given by the first coordinate projection ${\pi(i,j) := i}$. One can verify that all of the basic operations on events and random variables listed above are unaffected by the above extension (with one caveat, see remark below). For instance, the conjunction ${E \wedge F}$ of two events can be defined via the original model ${\Omega}$ by the formula

$\displaystyle (E \wedge F)_\Omega := E_\Omega \cap F_\Omega$

or via the extension ${\Omega'}$ via the formula

$\displaystyle (E \wedge F)_{\Omega'} := E_{\Omega'} \cap F_{\Omega'}.$

The two definitions are consistent with each other, thanks to the obvious set-theoretic identity

$\displaystyle \pi^{-1}( E_\Omega \cap F_\Omega ) = \pi^{-1}(E_\Omega) \cap \pi^{-1}(F_\Omega).$

Similarly, the assumption (1) is precisely what is needed to ensure that the probability ${\mathop{\bf P}(E)}$ of an event remains unchanged when one replaces a sample space model with an extension. We leave the verification of preservation of the other basic operations described above under extension as exercises to the reader.

Remark 2 There is one minor exception to this general rule if we do not impose the additional requirement that the factor map ${\pi}$ is surjective. Namely, for non-surjective ${\pi}$, it can become possible that two events ${E, F}$ are unequal in the original sample space model, but become equal in the extension (and similarly for random variables), although the converse never happens (events that are equal in the original sample space always remain equal in the extension). For instance, let ${\Omega}$ be the discrete probability space ${\{a,b\}}$ with ${p_a=1}$ and ${p_b=0}$, and let ${\Omega'}$ be the discrete probability space ${\{ a'\}}$ with ${p'_{a'}=1}$, and non-surjective factor map ${\pi: \Omega' \rightarrow \Omega}$ defined by ${\pi(a') := a}$. Then the event modeled by ${\{b\}}$ in ${\Omega}$ is distinct from the empty event when viewed in ${\Omega}$, but becomes equal to that event when viewed in ${\Omega'}$. Thus we see that extending the sample space by a non-surjective factor map can identify previously distinct events together (though of course, being probability preserving, this can only happen if those two events were already almost surely equal anyway). This turns out to be fairly harmless though; while it is nice to know if two given events are equal, or if they differ by a non-null event, it is almost never useful to know that two events are unequal if they are already almost surely equal. Alternatively, one can add the additional requirement of surjectivity in the definition of an extension, which is also a fairly harmless constraint to impose (this is what I chose to do in this previous set of notes).

Roughly speaking, one can define probability theory as the study of those properties of random events and random variables that are model-independent in the sense that they are preserved by extensions. For instance, the cardinality ${|E_\Omega|}$ of the model ${E_\Omega}$ of an event ${E}$ is not a concept within the scope of probability theory, as it is not preserved by extensions: continuing Example 3 from Notes 0, the event ${E}$ that a die roll ${X}$ is even is modeled by a set ${E_\Omega = \{2,4,6\}}$ of cardinality ${3}$ in the original sample space model ${\Omega}$, but by a set ${E_{\Omega'} = \{2,4,6\} \times \{1,2,3,4,5,6\}}$ of cardinality ${18}$ in the extension. Thus it does not make sense in the context of probability theory to refer to the “cardinality of an event ${E}$“.

On the other hand, the supremum ${\sup_n X_n}$ of a collection of random variables ${X_n}$ in the extended real line ${[-\infty,+\infty]}$ is a valid probabilistic concept. This can be seen by manually verifying that this operation is preserved under extension of the sample space, but one can also see this by defining the supremum in terms of existing basic operations. Indeed, note from Exercise 24 of Notes 0 that a random variable ${X}$ in the extended real line is completely specified by the threshold events ${(X \leq t)}$ for ${t \in {\bf R}}$; in particular, two such random variables ${X,Y}$ are equal if and only if the events ${(X \leq t)}$ and ${(Y \leq t)}$ are surely equal for all ${t}$. From the identity

$\displaystyle (\sup_n X_n \leq t) = \bigwedge_{n=1}^\infty (X_n \leq t)$

we thus see that one can completely specify ${\sup_n X_n}$ in terms of ${X_n}$ using only the basic operations provided in the above list (and in particular using the countable conjunction ${\bigwedge_{n=1}^\infty}$). Of course, the same considerations hold if one replaces the supremum by the infimum, limit superior, limit inferior, or (if it exists) the limit.

In this set of notes, we will define some further important operations on scalar random variables, in particular the expectation of these variables. In the sample space models, expectation corresponds to the notion of integration on a measure space. As we will need to use both expectation and integration in this course, we will thus begin by quickly reviewing the basics of integration on a measure space, although we will then translate the key results of this theory into probabilistic language.

As the finer details of the Lebesgue integral construction are not the core focus of this probability course, some of the details of this construction will be left to exercises. See also Chapter 1 of Durrett, or these previous blog notes, for a more detailed treatment.

Starting this week, I will be teaching an introductory graduate course (Math 275A) on probability theory here at UCLA. While I find myself using probabilistic methods routinely nowadays in my research (for instance, the probabilistic concept of Shannon entropy played a crucial role in my recent paper on the Chowla and Elliott conjectures, and random multiplicative functions similarly played a central role in the paper on the Erdos discrepancy problem), this will actually be the first time I will be teaching a course on probability itself (although I did give a course on random matrix theory some years ago that presumed familiarity with graduate-level probability theory). As such, I will be relying primarily on an existing textbook, in this case Durrett’s Probability: Theory and Examples. I still need to prepare lecture notes, though, and so I thought I would continue my practice of putting my notes online, although in this particular case they will be less detailed or complete than with other courses, as they will mostly be focusing on those topics that are not already comprehensively covered in the text of Durrett. Below the fold are my first such set of notes, concerning the classical measure-theoretic foundations of probability. (I wrote on these foundations also in this previous blog post, but in that post I already assumed that the reader was familiar with measure theory and basic probability, whereas in this course not every student will have a strong background in these areas.)

Note: as this set of notes is primarily concerned with foundational issues, it will contain a large number of pedantic (and nearly trivial) formalities and philosophical points. We dwell on these technicalities in this set of notes primarily so that they are out of the way in later notes, when we work with the actual mathematics of probability, rather than on the supporting foundations of that mathematics. In particular, the excessively formal and philosophical language in this set of notes will not be replicated in later notes.

Let ${X}$ and ${Y}$ be two random variables taking values in the same (discrete) range ${R}$, and let ${E}$ be some subset of ${R}$, which we think of as the set of “bad” outcomes for either ${X}$ or ${Y}$. If ${X}$ and ${Y}$ have the same probability distribution, then clearly

$\displaystyle {\bf P}( X \in E ) = {\bf P}( Y \in E ).$

In particular, if it is rare for ${Y}$ to lie in ${E}$, then it is also rare for ${X}$ to lie in ${E}$.

If ${X}$ and ${Y}$ do not have exactly the same probability distribution, but their probability distributions are close to each other in some sense, then we can expect to have an approximate version of the above statement. For instance, from the definition of the total variation distance ${\delta(X,Y)}$ between two random variables (or more precisely, the total variation distance between the probability distributions of two random variables), we see that

$\displaystyle {\bf P}(Y \in E) - \delta(X,Y) \leq {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \delta(X,Y) \ \ \ \ \ (1)$

for any ${E \subset R}$. In particular, if it is rare for ${Y}$ to lie in ${E}$, and ${X,Y}$ are close in total variation, then it is also rare for ${X}$ to lie in ${E}$.

A basic inequality in information theory is Pinsker’s inequality

$\displaystyle \delta(X,Y) \leq \sqrt{\frac{1}{2} D_{KL}(X||Y)}$

where the Kullback-Leibler divergence ${D_{KL}(X||Y)}$ is defined by the formula

$\displaystyle D_{KL}(X||Y) = \sum_{x \in R} {\bf P}( X=x ) \log \frac{{\bf P}(X=x)}{{\bf P}(Y=x)}.$

(See this previous blog post for a proof of this inequality.) A standard application of Jensen’s inequality reveals that ${D_{KL}(X||Y)}$ is non-negative (Gibbs’ inequality), and vanishes if and only if ${X}$, ${Y}$ have the same distribution; thus one can think of ${D_{KL}(X||Y)}$ as a measure of how close the distributions of ${X}$ and ${Y}$ are to each other, although one should caution that this is not a symmetric notion of distance, as ${D_{KL}(X||Y) \neq D_{KL}(Y||X)}$ in general. Inserting Pinsker’s inequality into (1), we see for instance that

$\displaystyle {\bf P}(X \in E) \leq {\bf P}(Y \in E) + \sqrt{\frac{1}{2} D_{KL}(X||Y)}.$

Thus, if ${X}$ is close to ${Y}$ in the Kullback-Leibler sense, and it is rare for ${Y}$ to lie in ${E}$, then it is rare for ${X}$ to lie in ${E}$ as well.

We can specialise this inequality to the case when ${Y}$ is a uniform random variable ${U}$ on a finite range ${R}$ of some cardinality ${N}$, in which case the Kullback-Leibler divergence ${D_{KL}(X||U)}$ simplifies to

$\displaystyle D_{KL}(X||U) = \log N - {\bf H}(X)$

where

$\displaystyle {\bf H}(X) := \sum_{x \in R} {\bf P}(X=x) \log \frac{1}{{\bf P}(X=x)}$

is the Shannon entropy of ${X}$. Again, a routine application of Jensen’s inequality shows that ${{\bf H}(X) \leq \log N}$, with equality if and only if ${X}$ is uniformly distributed on ${R}$. The above inequality then becomes

$\displaystyle {\bf P}(X \in E) \leq {\bf P}(U \in E) + \sqrt{\frac{1}{2}(\log N - {\bf H}(X))}. \ \ \ \ \ (2)$

Thus, if ${E}$ is a small fraction of ${R}$ (so that it is rare for ${U}$ to lie in ${E}$), and the entropy of ${X}$ is very close to the maximum possible value of ${\log N}$, then it is rare for ${X}$ to lie in ${E}$ also.

The inequality (2) is only useful when the entropy ${{\bf H}(X)}$ is close to ${\log N}$ in the sense that ${{\bf H}(X) = \log N - O(1)}$, otherwise the bound is worse than the trivial bound of ${{\bf P}(X \in E) \leq 1}$. In my recent paper on the Chowla and Elliott conjectures, I ended up using a variant of (2) which was still non-trivial when the entropy ${{\bf H}(X)}$ was allowed to be smaller than ${\log N - O(1)}$. More precisely, I used the following simple inequality, which is implicit in the arguments of that paper but which I would like to make more explicit in this post:

Lemma 1 (Pinsker-type inequality) Let ${X}$ be a random variable taking values in a finite range ${R}$ of cardinality ${N}$, let ${U}$ be a uniformly distributed random variable in ${R}$, and let ${E}$ be a subset of ${R}$. Then

$\displaystyle {\bf P}(X \in E) \leq \frac{(\log N - {\bf H}(X)) + \log 2}{\log 1/{\bf P}(U \in E)}.$

Proof: Consider the conditional entropy ${{\bf H}(X | 1_{X \in E} )}$. On the one hand, we have

$\displaystyle {\bf H}(X | 1_{X \in E} ) = {\bf H}(X, 1_{X \in E}) - {\bf H}(1_{X \in E} )$

$\displaystyle = {\bf H}(X) - {\bf H}(1_{X \in E})$

$\displaystyle \geq {\bf H}(X) - \log 2$

by Jensen’s inequality. On the other hand, one has

$\displaystyle {\bf H}(X | 1_{X \in E} ) = {\bf P}(X \in E) {\bf H}(X | X \in E )$

$\displaystyle + (1-{\bf P}(X \in E)) {\bf H}(X | X \not \in E)$

$\displaystyle \leq {\bf P}(X \in E) \log |E| + (1-{\bf P}(X \in E)) \log N$

$\displaystyle = \log N - {\bf P}(X \in E) \log \frac{N}{|E|}$

$\displaystyle = \log N - {\bf P}(X \in E) \log \frac{1}{{\bf P}(U \in E)},$

where we have again used Jensen’s inequality. Putting the two inequalities together, we obtain the claim. $\Box$
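As a quick numerical sanity check of Lemma 1, here is a sketch (the range size, the set ${E}$, and the random distribution below are arbitrary illustrative choices, and all logarithms are natural):

```python
import numpy as np

rng = np.random.default_rng(0)
N, E_size = 1000, 10

# A random distribution on a range of size N; E is the first E_size points.
p = rng.random(N)
p /= p.sum()

H_X = -np.sum(p * np.log(p))     # Shannon entropy H(X)
P_X_in_E = p[:E_size].sum()
P_U_in_E = E_size / N

bound = ((np.log(N) - H_X) + np.log(2)) / np.log(1.0 / P_U_in_E)
print(P_X_in_E, "<=", bound)
assert P_X_in_E <= bound
```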

Remark 2 As noted in comments, this inequality can be viewed as a special case of the more general inequality

$\displaystyle {\bf P}(X \in E) \leq \frac{D(X||Y) + \log 2}{\log 1/{\bf P}(Y \in E)}$

for arbitrary random variables ${X,Y}$ taking values in the same discrete range ${R}$, which follows from the data processing inequality

$\displaystyle D( f(X)||f(Y)) \leq D(X|| Y)$

for arbitrary functions ${f}$, applied to the indicator function ${f = 1_E}$. Indeed one has

$\displaystyle D( 1_E(X) || 1_E(Y) ) = {\bf P}(X \in E) \log \frac{{\bf P}(X \in E)}{{\bf P}(Y \in E)}$

$\displaystyle + {\bf P}(X \not \in E) \log \frac{{\bf P}(X \not \in E)}{{\bf P}(Y \not \in E)}$

$\displaystyle \geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - h( {\bf P}(X \in E) )$

$\displaystyle \geq {\bf P}(X \in E) \log \frac{1}{{\bf P}(Y \in E)} - \log 2$

where ${h(u) := u \log \frac{1}{u} + (1-u) \log \frac{1}{1-u}}$ is the entropy function.

Thus, for instance, if one has

$\displaystyle {\bf H}(X) \geq \log N - o(K)$

and

$\displaystyle {\bf P}(U \in E) \leq \exp( - K )$

for some ${K}$ much larger than ${1}$ (so that ${1/K = o(1)}$), then

$\displaystyle {\bf P}(X \in E) = o(1).$

More informally: if the entropy of ${X}$ is somewhat close to the maximum possible value of ${\log N}$, and it is exponentially rare for a uniform variable to lie in ${E}$, then it is still somewhat rare for ${X}$ to lie in ${E}$. The estimate given is close to sharp in this regime, as can be seen by calculating the entropy of a random variable ${X}$ which is uniformly distributed inside a small set ${E}$ with some probability ${p}$ and uniformly distributed outside of ${E}$ with probability ${1-p}$, for some parameter ${0 \leq p \leq 1}$.
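Indeed, a quick sketch of that computation: such an ${X}$ has

$\displaystyle {\bf H}(X) = h(p) + p \log |E| + (1-p) \log(N - |E|)$

with ${h}$ the entropy function from Remark 2, so if ${{\bf P}(U \in E) = |E|/N = e^{-K}}$ then ${\log N - {\bf H}(X) = pK - h(p) + O(e^{-K})}$, and the bound of Lemma 1 becomes ${p + O(1/K)}$, matching the true value ${{\bf P}(X \in E) = p}$ up to the ${O(1/K)}$ error.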

It turns out that the above lemma combines well with concentration of measure estimates; in my paper, I used one of the simplest such estimates, namely Hoeffding’s inequality, but there are of course many other estimates of this type (see e.g. this previous blog post for some others). Roughly speaking, concentration of measure inequalities allow one to make approximations such as

$\displaystyle F(U) \approx {\bf E} F(U)$

with exponentially high probability, where ${U}$ is a uniform distribution and ${F}$ is some reasonable function of ${U}$. Combining this with the above lemma, we can then obtain approximations of the form

$\displaystyle F(X) \approx {\bf E} F(U) \ \ \ \ \ (3)$

with somewhat high probability, if the entropy of ${X}$ is somewhat close to maximum. This observation, combined with an “entropy decrement argument” that allowed one to arrive at a situation in which the relevant random variable ${X}$ did have a near-maximum entropy, is the key new idea in my recent paper; for instance, one can use the approximation (3) to obtain an approximation of the form

$\displaystyle \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+j+p) 1_{p|n+j}$

$\displaystyle \approx \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+j+p)}{p}$

for “most” choices of ${n}$ and a suitable choice of ${H}$ (with the latter being provided by the entropy decrement argument). The left-hand side is tied to Chowla-type sums such as ${\sum_{n \leq x} \frac{\lambda(n)\lambda(n+1)}{n}}$ through the multiplicativity of ${\lambda}$, while the right-hand side, being a linear correlation involving two parameters ${j,p}$ rather than just one, has “finite complexity” and can be treated by existing techniques such as the Hardy-Littlewood circle method. One could hope that one could similarly use approximations such as (3) in other problems in analytic number theory or combinatorics.

I’ve just uploaded two related papers to the arXiv: the first on the logarithmically averaged Chowla and Elliott conjectures for two-point correlations, and the second on the Erdos discrepancy problem.

This pair of papers is an outgrowth of these two recent blog posts and the ensuing discussion. In the first paper, we establish the following logarithmically averaged version of the Chowla conjecture (in the case ${k=2}$ of two-point correlations (or “pair correlations”)):

Theorem 1 (Logarithmically averaged Chowla conjecture) Let ${a_1,a_2}$ be natural numbers, and let ${b_1,b_2}$ be integers such that ${a_1 b_2 - a_2 b_1 \neq 0}$. Let ${1 \leq \omega(x) \leq x}$ be a quantity depending on ${x}$ that goes to infinity as ${x \rightarrow \infty}$. Let ${\lambda}$ denote the Liouville function. Then one has

$\displaystyle \sum_{x/\omega(x) < n \leq x} \frac{\lambda(a_1 n + b_1) \lambda(a_2 n+b_2)}{n} = o( \log \omega(x) ) \ \ \ \ \ (1)$

as ${x \rightarrow \infty}$.

Thus for instance one has

$\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x). \ \ \ \ \ (2)$

For comparison, the non-averaged Chowla conjecture would imply that

$\displaystyle \sum_{n \leq x} \lambda(n) \lambda(n+1) = o(x) \ \ \ \ \ (3)$

which is a strictly stronger estimate than (2), and remains open.
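For a rough numerical feel for (2), here is a sketch (an illustration only, as any convergence here is far too slow to be numerically visible as evidence):

```python
import numpy as np

def liouville(N):
    """lambda(m) = (-1)^Omega(m) for 0 <= m <= N: flip the sign of lam[m]
    once for every prime power p^k with p^k | m."""
    lam = np.ones(N + 1, dtype=np.int8)
    is_composite = np.zeros(N + 1, dtype=bool)
    for p in range(2, N + 1):
        if not is_composite[p]:
            is_composite[2 * p::p] = True
            pk = p
            while pk <= N:
                lam[pk::pk] *= -1
                pk *= p
    return lam

x = 10**6
lam = liouville(x + 1)
n = np.arange(1, x + 1)
s = np.sum(lam[n] * lam[n + 1] / n)
print(s, "vs log x =", np.log(x))  # the sum is much smaller than log x
```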

The arguments also extend to completely multiplicative functions other than the Liouville function. In particular, one obtains a slightly averaged version of the non-asymptotic Elliott conjecture that was shown in the previous blog post to imply a positive solution to the Erdos discrepancy problem. The averaged version of the conjecture established in this paper is slightly weaker than the one assumed in the previous blog post, but it turns out that the arguments there can be modified without much difficulty to accept this averaged Elliott conjecture as input. In particular, we obtain an unconditional solution to the Erdos discrepancy problem as a consequence; this is detailed in the second paper listed above. In fact we can also handle the vector-valued version of the Erdos discrepancy problem, in which the sequence ${f(1), f(2), \dots}$ takes values in the unit sphere of an arbitrary Hilbert space, rather than in ${\{-1,+1\}}$.

Estimates such as (2) or (3) are known to be subject to the “parity problem” (discussed numerous times previously on this blog), which roughly speaking means that they cannot be proven solely using “linear” estimates on functions such as the von Mangoldt function. However, it is known that the parity problem can be circumvented using “bilinear” estimates, and this is basically what is done here.

We now describe in informal terms the proof of Theorem 1, focusing on the model case (2) for simplicity. Suppose for contradiction that the left-hand side of (2) was large and (say) positive. Using the multiplicativity ${\lambda(pn) = -\lambda(n)}$, we conclude that

$\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+p) 1_{p|n}}{n}$

is also large and positive for all primes ${p}$ that are not too large; note here how the logarithmic averaging allows us to leave the constraint ${n \leq x}$ unchanged. Summing in ${p}$, we conclude that

$\displaystyle \sum_{n \leq x} \frac{ \sum_{p \in {\mathcal P}} \lambda(n) \lambda(n+p) 1_{p|n}}{n}$

is large and positive for any given set ${{\mathcal P}}$ of medium-sized primes. By a standard averaging argument, this implies that

$\displaystyle \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \lambda(n+j) \lambda(n+p+j) 1_{p|n+j} \ \ \ \ \ (4)$

is large for many choices of ${n}$, where ${H}$ is a medium-sized parameter at our disposal to choose, and we take ${{\mathcal P}}$ to be some set of primes that are somewhat smaller than ${H}$. (A similar approach was taken in this recent paper of Matomaki, Radziwill, and myself to study sign patterns of the Möbius function.) To obtain the required contradiction, one thus wants to demonstrate significant cancellation in the expression (4). As in that paper, we view ${n}$ as a random variable, in which case (4) is essentially a bilinear sum of the random sequence ${(\lambda(n+1),\dots,\lambda(n+H))}$ along a random graph ${G_{n,H}}$ on ${\{1,\dots,H\}}$, in which two vertices ${j, j+p}$ are connected if they differ by a prime ${p}$ in ${{\mathcal P}}$ that divides ${n+j}$. A key difficulty in controlling this sum is that for randomly chosen ${n}$, the sequence ${(\lambda(n+1),\dots,\lambda(n+H))}$ and the graph ${G_{n,H}}$ need not be independent. To get around this obstacle we introduce a new argument which we call the “entropy decrement argument” (in analogy with the “density increment argument” and “energy increment argument” that appear in the literature surrounding Szemerédi’s theorem on arithmetic progressions, and also reminiscent of the “entropy compression argument” of Moser and Tardos, discussed in this previous post). This argument, which is a simple consequence of the Shannon entropy inequalities, can be viewed as a quantitative version of the standard subadditivity argument that establishes the existence of Kolmogorov-Sinai entropy in topological dynamical systems; it allows one to select a scale parameter ${H}$ (in some suitable range ${[H_-,H_+]}$) for which the sequence ${(\lambda(n+1),\dots,\lambda(n+H))}$ and the graph ${G_{n,H}}$ exhibit some weak independence properties (or more precisely, the mutual information between the two random variables is small).

Informally, the entropy decrement argument goes like this: if the sequence ${(\lambda(n+1),\dots,\lambda(n+H))}$ has significant mutual information with ${G_{n,H}}$, then the entropy of the sequence ${(\lambda(n+1),\dots,\lambda(n+H'))}$ for ${H' > H}$ will grow a little slower than linearly, due to the fact that the graph ${G_{n,H}}$ has zero entropy (knowledge of ${G_{n,H}}$ more or less completely determines the shifts ${G_{n+kH,H}}$ of the graph); this can be formalised using the classical Shannon inequalities for entropy (and specifically, the non-negativity of conditional mutual information). But the entropy cannot drop below zero, so by increasing ${H}$ as necessary, at some point one must reach a metastable region (cf. the finite convergence principle discussed in this previous blog post), within which very little mutual information can be shared between the sequence ${(\lambda(n+1),\dots,\lambda(n+H))}$ and the graph ${G_{n,H}}$. Curiously, for the application it is not enough to have a purely qualitative version of this argument; one needs a quantitative bound (which gains a factor of a bit more than ${\log H}$ on the trivial bound for mutual information), and this is surprisingly delicate (it ultimately comes down to the fact that the series ${\sum_{j \geq 2} \frac{1}{j \log j \log\log j}}$ diverges, which is only barely true).

Once one locates a scale ${H}$ with the low mutual information property, one can use standard concentration of measure results such as the Hoeffding inequality to approximate (4) by the significantly simpler expression

$\displaystyle \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j)}{p}. \ \ \ \ \ (5)$

The important thing here is that Hoeffding’s inequality gives exponentially strong bounds on the failure probability, which is needed to counteract the logarithms that are inevitably present whenever trying to use entropy inequalities. The expression (5) can then be controlled in turn by an application of the Hardy-Littlewood circle method and a non-trivial estimate

$\displaystyle \sup_\alpha \frac{1}{X} \int_X^{2X} |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (6)$

for averaged short sums of a modulated Liouville function established in another recent paper by Matomäki, Radziwill and myself.

When one uses this method to study more general sums such as

$\displaystyle \sum_{n \leq x} \frac{g_1(n) g_2(n+1)}{n},$

one ends up having to consider expressions such as

$\displaystyle \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} c_p \frac{g_1(n+j) g_2(n+p+j)}{p},$

where ${c_p}$ is the coefficient ${c_p := \overline{g_1}(p) \overline{g_2}(p)}$. When attacking this sum with the circle method, one soon finds oneself in the situation of wanting to locate the large Fourier coefficients of the exponential sum

$\displaystyle S(\alpha) := \sum_{p \in {\mathcal P}} \frac{c_p}{p} e^{2\pi i \alpha p}.$

In many cases (such as in the application to the Erdös discrepancy problem), the coefficient ${c_p}$ is identically ${1}$, and one can understand this sum satisfactorily using the classical results of Vinogradov: basically, ${S(\alpha)}$ is large when ${\alpha}$ lies in a “major arc” and is small when it lies in a “minor arc”. For more general functions ${g_1,g_2}$, the coefficients ${c_p}$ are more or less arbitrary; the large values of ${S(\alpha)}$ are no longer confined to the major arc case. Fortunately, even in this general situation one can use a restriction theorem for the primes established some time ago by Ben Green and myself to show that there are still only a bounded number of possible locations ${\alpha}$ (up to the uncertainty mandated by the Heisenberg uncertainty principle) where ${S(\alpha)}$ is large, and we can still conclude by using (6). (Actually, as recently pointed out to me by Ben, one does not need the full strength of our result; one only needs the ${L^4}$ restriction theorem for the primes, which can be proven fairly directly using Plancherel’s theorem and some sieve theory.)

It is tempting to also use the method to attack higher order cases of the (logarithmically) averaged Chowla conjecture, for instance one could try to prove the estimate

$\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x).$

The above arguments reduce matters to obtaining some non-trivial cancellation for sums of the form

$\displaystyle \frac{1}{H} \sum_{j=1}^H \sum_{p \in {\mathcal P}} \frac{\lambda(n+j) \lambda(n+p+j) \lambda(n+2p+j)}{p}.$

A little bit of “higher order Fourier analysis” (as was done for very similar sums in the ergodic theory context by Frantzikinakis-Host-Kra and Wooley-Ziegler) lets one control this sort of sum if one can establish a bound of the form

$\displaystyle \frac{1}{X} \int_X^{2X} \sup_\alpha |\frac{1}{H} \sum_{x \leq n \leq x+H} \lambda(n) e(\alpha n)|\ dx = o(1) \ \ \ \ \ (7)$

where ${X}$ goes to infinity and ${H}$ is a very slowly growing function of ${X}$. This looks very similar to (6), but the fact that the supremum is now inside the integral makes the problem much more difficult. However it looks worth attacking (7) further, as this estimate looks like it should have many nice applications (beyond just the ${k=3}$ case of the logarithmically averaged Chowla or Elliott conjectures, which is already interesting).

For ${k}$ higher than ${3}$, the same line of analysis requires one to replace the linear phase ${e(\alpha n)}$ by more complicated phases, such as quadratic phases ${e(\alpha n^2 + \beta n)}$ or even ${k-2}$-step nilsequences. Given that (7) is already beyond the reach of current literature, these even more complicated expressions are also unavailable at present, but one can imagine that they will eventually become tractable, in which case we would obtain an averaged form of the Chowla conjecture for all ${k}$, which would have a number of consequences (such as a logarithmically averaged version of Sarnak’s conjecture, as per this blog post).

It would of course be very nice to remove the logarithmic averaging, and be able to establish bounds such as (3). I did attempt to do so, but I do not see a way to use the entropy decrement argument in a manner that does not require some sort of averaging of logarithmic type: the argument requires one to pick a scale ${H}$ that cannot be specified in advance, which is not a problem for logarithmic averages (which are quite stable with respect to dilations) but is problematic for ordinary averages. But perhaps the problem can be circumvented by some clever modification of the argument. One possible approach would be to start exploiting multiplicativity at products of primes, and not just individual primes, to try to keep the scale fixed; but this makes the concentration of measure part of the argument much more complicated, as one loses some independence properties (coming from the Chinese remainder theorem) which allowed one to conclude using just the Hoeffding inequality.

The Chowla conjecture asserts that all non-trivial correlations of the Liouville function are asymptotically negligible; for instance, it asserts that

$\displaystyle \sum_{n \leq X} \lambda(n) \lambda(n+h) = o(X)$

as ${X \rightarrow \infty}$ for any fixed natural number ${h}$. This conjecture remains open, though there are a number of partial results (e.g. these two previous results of Matomaki, Radziwill, and myself).

A natural generalisation of Chowla’s conjecture was proposed by Elliott. For simplicity we will only consider Elliott’s conjecture for the pair correlations

$\displaystyle \sum_{n \leq X} g(n) \overline{g}(n+h).$

For such correlations, the conjecture was that one had

$\displaystyle \sum_{n \leq X} g(n) \overline{g}(n+h) = o(X) \ \ \ \ \ (1)$

as ${X \rightarrow \infty}$ for any natural number ${h}$, as long as ${g}$ was a completely multiplicative function with magnitude bounded by ${1}$, and such that

$\displaystyle \sum_p \hbox{Re} \frac{1 - g(p) \overline{\chi(p)} p^{-it}}{p} = +\infty \ \ \ \ \ (2)$

for any Dirichlet character ${\chi}$ and any real number ${t}$. In the language of “pretentious number theory”, as developed by Granville and Soundararajan, the hypothesis (2) asserts that the completely multiplicative function ${g}$ does not “pretend” to be like the completely multiplicative function ${n \mapsto \chi(n) n^{it}}$ for any character ${\chi}$ and real number ${t}$. A condition of this form is necessary; for instance, if ${g(n)}$ is precisely equal to ${\chi(n) n^{it}}$ and ${\chi}$ has period ${q}$, then ${g(n) \overline{g}(n+q)}$ is equal to ${1_{(n,q)=1} + o(1)}$ as ${n \rightarrow \infty}$ and (1) clearly fails. The prime number theorem in arithmetic progressions implies that the Liouville function obeys (2), and so the Elliott conjecture contains the Chowla conjecture as a special case.
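To make (2) concrete in the simplest case: for ${g = \lambda}$, ${\chi}$ principal, and ${t = 0}$, each summand is ${(1 - \hbox{Re}(-1))/p = 2/p}$, so the sum diverges like ${2 \log\log X}$ by Mertens’ theorem. A quick numerical check (illustrative only; assumes the sympy library is available):

```python
# Partial sums of 2/p over primes p <= X, versus 2 log log X (Mertens).
import math
from sympy import primerange

for X in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    s = sum(2.0 / p for p in primerange(2, X + 1))
    print(f"X = {X:>8}: sum = {s:.3f}, 2 log log X = {2 * math.log(math.log(X)):.3f}")
```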

As it turns out, Elliott’s conjecture is false as stated, with the counterexample ${g}$ having the property that ${g}$ “pretends” locally to be the function ${n \mapsto n^{it_j}}$ for ${n}$ in various intervals ${[1, X_j]}$, where ${X_j}$ and ${t_j}$ go to infinity in a certain prescribed sense. See this paper of Matomaki, Radziwill, and myself for details. However, we view this as a technicality, and continue to believe that certain “repaired” versions of Elliott’s conjecture still hold. For instance, our counterexample does not apply when ${g}$ is restricted to be real-valued rather than complex, and we believe that Elliott’s conjecture is valid in this setting. Returning to the complex-valued case, we still expect the asymptotic (1) provided that the condition (2) is replaced by the stronger condition

$\displaystyle \inf_{|t| \leq X} \sum_{p \leq X} \hbox{Re} \frac{1 - g(p) \overline{\chi(p)} p^{-it}}{p} \rightarrow +\infty$

as ${X \rightarrow +\infty}$ for all fixed Dirichlet characters ${\chi}$. In our paper we supported this claim by establishing a certain “averaged” version of this conjecture; see that paper for further details. (See also this recent paper of Frantzikinakis and Host which establishes a different averaged version of this conjecture.)

One can make a stronger “non-asymptotic” version of this corrected Elliott conjecture, in which the ${X}$ parameter does not go to infinity, or equivalently that the function ${g}$ is permitted to depend on ${X}$:

Conjecture 1 (Non-asymptotic Elliott conjecture) Let ${\varepsilon > 0}$, let ${A \geq 1}$ be sufficiently large depending on ${\varepsilon}$, and let ${X}$ be sufficiently large depending on ${A,\varepsilon}$. Suppose that ${g}$ is a completely multiplicative function with magnitude bounded by ${1}$, such that

$\displaystyle \inf_{|t| \leq AX} \sum_{p \leq X} \hbox{Re} \frac{1 - g(p) \overline{\chi(p)} p^{-it}}{p} \geq A$

for all Dirichlet characters ${\chi}$ of period at most ${A}$. Then one has

$\displaystyle |\sum_{n \leq X} g(n) \overline{g(n+h)}| \leq \varepsilon X$

for all natural numbers ${1 \leq h \leq 1/\varepsilon}$.

The ${\varepsilon}$-dependent factor ${A}$ in the constraint ${|t| \leq AX}$ is necessary, as can be seen by considering the completely multiplicative function ${g(n) := n^{2iX}}$ (for instance). Again, the results in my previous paper with Matomaki and Radziwill can be viewed as establishing an averaged version of this conjecture.
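One can also see this numerically. The following quick Python check (my own illustration, with an arbitrary choice of ${X}$; a sketch rather than a proof) computes the normalised pair correlation of ${g(n) = n^{2iX}}$ at shift ${h = 1}$, which stays away from zero because the phase of ${g(n) \overline{g(n+1)} = e^{-2iX \log(1 + 1/n)}}$ varies slowly for ${n}$ comparable to ${X}$.

```python
# For g(n) = n^{2iX}, the normalised correlation (1/X)|sum g(n) conj(g(n+1))|
# does not tend to zero: the summand e^{-2iX log(1+1/n)} is slowly varying
# for n of size comparable to X.
import cmath
import math

X = 10 ** 5
total = sum(cmath.exp(-2j * X * math.log1p(1 / n)) for n in range(1, X + 1))
print(abs(total) / X)  # noticeably larger than 0, and stays so as X grows
```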

Meanwhile, we have the following conjecture that is the focus of the Polymath5 project:

Conjecture 2 (Erdös discrepancy conjecture) For any function ${f: {\bf N} \rightarrow \{-1,+1\}}$, the discrepancy

$\displaystyle \sup_{n,d \in {\bf N}} |\sum_{j=1}^n f(jd)|$

is infinite.
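For readers who wish to experiment, here is a small Python utility (purely illustrative) that computes the discrepancy of a sign sequence truncated at ${N}$; the Liouville function makes a natural test case, as conjecturally its discrepancy grows very slowly.

```python
# Discrepancy of a +-1 sequence f truncated at N: the maximum of
# |f(d) + f(2d) + ... + f(nd)| over all n, d with nd <= N.
from sympy import factorint  # sympy assumed available

def discrepancy(f, N):
    best = 0
    for d in range(1, N + 1):
        partial = 0
        for j in range(d, N + 1, d):  # j runs over d, 2d, 3d, ...
            partial += f(j)
            best = max(best, abs(partial))
    return best

def liouville(n):
    return (-1) ** sum(factorint(n).values())

print(discrepancy(liouville, 2000))
```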

It is instructive to compute some near-counterexamples to Conjecture 2 that illustrate the difficulty of the Erdös discrepancy problem. The first near-counterexample is that of a non-principal Dirichlet character ${f(n) = \chi(n)}$ that takes values in ${\{-1,0,+1\}}$ rather than ${\{-1,+1\}}$. For this function, one has from the complete multiplicativity of ${\chi}$ that

$\displaystyle |\sum_{j=1}^n f(jd)| = |\sum_{j=1}^n \chi(j) \chi(d)|$

$\displaystyle \leq |\sum_{j=1}^n \chi(j)|.$

If ${q}$ denotes the period of ${\chi}$, then ${\chi}$ has mean zero on every interval of length ${q}$, and thus

$\displaystyle |\sum_{j=1}^n f(jd)| \leq |\sum_{j=1}^n \chi(j)| \leq q.$

Thus ${\chi}$ has bounded discrepancy.

Of course, this is not a true counterexample to Conjecture 2 because ${\chi}$ can take the value ${0}$. Let us now consider the following variant example, which is the simplest member of a family of examples studied by Borwein, Choi, and Coons. Let ${\chi = \chi_3}$ be the non-principal Dirichlet character of period ${3}$ (thus ${\chi(n)}$ equals ${+1}$ when ${n=1 \hbox{ mod } 3}$, ${-1}$ when ${n = 2 \hbox{ mod } 3}$, and ${0}$ when ${n = 0 \hbox{ mod } 3}$), and define the completely multiplicative function ${f = \tilde \chi: {\bf N} \rightarrow \{-1,+1\}}$ by setting ${\tilde \chi(p) := \chi(p)}$ when ${p \neq 3}$ and ${\tilde \chi(3) = +1}$. This is about the simplest modification one can make to the previous near-counterexample to eliminate the zeroes. Now consider the sum

$\displaystyle \sum_{j=1}^n \tilde \chi(j)$

with ${n := 1 + 3 + 3^2 + \dots + 3^k}$ for some large ${k}$. Writing ${j = 3^a m}$ with ${m}$ coprime to ${3}$ and ${a}$ at most ${k}$, we can write this sum as

$\displaystyle \sum_{a=0}^k \sum_{1 \leq m \leq n/3^a: (m,3)=1} \tilde \chi(3^a m).$

Now observe that ${\tilde \chi(3^a m) = \tilde \chi(3)^a \tilde \chi(m) = \chi(m)}$ for ${m}$ coprime to ${3}$; moreover, the constraint ${(m,3)=1}$ may be dropped from the inner sum, since ${\chi}$ vanishes at multiples of ${3}$. The function ${\chi}$ has mean zero on every interval of length three, and ${\lfloor n/3^a\rfloor}$ is equal to ${1}$ mod ${3}$, and thus

$\displaystyle \sum_{1 \leq m \leq n/3^a: (m,3)=1} \tilde \chi(3^a m) = 1$

for every ${a=0,\dots,k}$, and thus

$\displaystyle \sum_{j=1}^n \tilde \chi(j) = k+1 \gg \log n.$

Thus ${\tilde \chi}$ also has unbounded discrepancy, but only barely so (it grows logarithmically in ${n}$). These examples suggest that the main “enemy” to proving Conjecture 2 comes from completely multiplicative functions ${f}$ that somehow “pretend” to be like a Dirichlet character but do not vanish at the zeroes of that character. (Indeed, the special case of Conjecture 2 when ${f}$ is completely multiplicative is already open, and appears to be an important subcase.)
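The computation above is easily confirmed numerically; the following short Python check (illustrative only) verifies that the partial sum of ${\tilde \chi}$ at ${n = 1 + 3 + \dots + 3^k}$ is exactly ${k+1}$.

```python
# tilde-chi is completely multiplicative with tilde-chi(3) = +1, so its value
# at n is chi_3 evaluated at n with all factors of 3 stripped out.
def tilde_chi(n):
    while n % 3 == 0:
        n //= 3
    return 1 if n % 3 == 1 else -1

for k in range(1, 8):
    n = (3 ** (k + 1) - 1) // 2  # n = 1 + 3 + ... + 3^k
    print(k, sum(tilde_chi(j) for j in range(1, n + 1)))  # second entry is k+1
```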

All of these conjectures remain open. However, I would like to record in this blog post the following striking connection, illustrating the power of the Elliott conjecture (particularly in its non-asymptotic formulation):

Theorem 3 (Elliott conjecture implies unbounded discrepancy) Conjecture 1 implies Conjecture 2.

The argument relies heavily on two observations that were previously made in connection with the Polymath5 project. The first is a Fourier-analytic reduction that replaces the Erdös discrepancy problem with an averaged version for completely multiplicative functions ${g}$. An application of Cauchy-Schwarz then shows that any counterexample to that version will violate the conclusion of Conjecture 1, so if one assumes that conjecture then ${g}$ must pretend to be like a function of the form ${n \mapsto \chi(n) n^{it}}$. One then uses (a generalisation of) a second argument from Polymath5 to rule out this case, basically by reducing matters to a more complicated version of the Borwein-Choi-Coons analysis. Details are provided below the fold.

There is some hope that the Chowla and Elliott conjectures can be attacked, as the parity barrier, which is so impervious to attack for the twin prime conjecture, seems to be more permeable in this setting. (For instance, in my previous post I raised a possible approach, based on establishing expander properties of a certain random graph, which seems to get around the parity problem, in principle at least.)

(Update, Sep 25: fixed some treatment of error terms, following a suggestion of Andrew Granville.)

The twin prime conjecture is one of the oldest unsolved problems in analytic number theory. There are several reasons why this conjecture remains out of reach of current techniques, but the most important obstacle is the parity problem which prevents purely sieve-theoretic methods (or many other popular methods in analytic number theory, such as the circle method) from detecting pairs of prime twins in a way that can distinguish them from other twins of almost primes. The parity problem is discussed in these previous blog posts; this obstruction is ultimately powered by the Möbius pseudorandomness principle that asserts that the Möbius function ${\mu}$ is asymptotically orthogonal to all “structured” functions (and in particular, to the weight functions constructed from sieve theory methods).

However, there is an intriguing “alternate universe” in which the Möbius function is strongly correlated with some structured functions, and specifically with some Dirichlet characters, leading to the existence of the infamous “Siegel zero”. In this scenario, the parity problem obstruction disappears, and it becomes possible, in principle, to attack problems such as the twin prime conjecture. In particular, we have the following result of Heath-Brown:

Theorem 1 At least one of the following two statements is true:

• (Twin prime conjecture) There are infinitely many primes ${p}$ such that ${p+2}$ is also prime.
• (No Siegel zeroes) There exists a constant ${c>0}$ such that for every real Dirichlet character ${\chi}$ of conductor ${q > 1}$, the associated Dirichlet ${L}$-function ${s \mapsto L(s,\chi)}$ has no zeroes in the interval ${[1-\frac{c}{\log q}, 1]}$.

Informally, this result asserts that if one had an infinite sequence of Siegel zeroes, one could use this to generate infinitely many twin primes. See this survey of Friedlander and Iwaniec for more on this “illusory” or “ghostly” parallel universe in analytic number theory that should not actually exist, but is surprisingly self-consistent and has so far proven impossible to banish from the realm of possibility.

The strategy of Heath-Brown’s proof is fairly straightforward to describe. The usual starting point is to try to lower bound

$\displaystyle \sum_{x \leq n \leq 2x} \Lambda(n) \Lambda(n+2) \ \ \ \ \ (1)$

for some large value of ${x}$, where ${\Lambda}$ is the von Mangoldt function. Actually, in this post we will work with the slight variant

$\displaystyle \sum_{x \leq n \leq 2x} \Lambda_2(n(n+2)) \nu(n(n+2))$

where

$\displaystyle \Lambda_2(n) = (\mu * L^2)(n) = \sum_{d|n} \mu(d) \log^2 \frac{n}{d}$

is the second von Mangoldt function (here ${L(n) := \log n}$ denotes the logarithm function), ${*}$ denotes Dirichlet convolution, and ${\nu}$ is an (unsquared) Selberg sieve that damps out small prime factors. This sum also detects twin primes, but will lead to slightly simpler computations. For technical reasons we will also smooth out the interval ${x \leq n \leq 2x}$ and remove very small primes from ${n}$, but we will skip over these steps for the purpose of this informal discussion. (In Heath-Brown’s original paper, the Selberg sieve ${\nu}$ is essentially replaced by the more combinatorial restriction ${1_{(n(n+2),q^{1/C}\#)=1}}$ for some large ${C}$, where ${q^{1/C}\#}$ is the primorial of ${q^{1/C}}$, but I found the computations to be slightly easier if one works with a Selberg sieve, particularly if the sieve is not squared to make it nonnegative.)
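As a sanity check on the definition (illustrative only; assumes sympy), one can verify numerically that ${\Lambda_2 = \mu * L^2}$ obeys the standard identity ${\Lambda_2(n) = \Lambda(n) \log n + (\Lambda * \Lambda)(n)}$, which makes visible the fact that ${\Lambda_2}$ is supported on integers with at most two distinct prime factors.

```python
# Check: Lambda_2 = mu * L^2 equals Lambda(n) log n + (Lambda*Lambda)(n),
# and vanishes when n has more than two distinct prime factors.
import math
from sympy import divisors, factorint
from sympy.ntheory import mobius

def von_mangoldt(n):
    f = factorint(n)
    return math.log(next(iter(f))) if len(f) == 1 else 0.0  # log p if n = p^k

def lambda2(n):
    return sum(mobius(d) * math.log(n // d) ** 2 for d in divisors(n))

for n in range(2, 40):
    conv = sum(von_mangoldt(d) * von_mangoldt(n // d) for d in divisors(n))
    assert abs(lambda2(n) - (von_mangoldt(n) * math.log(n) + conv)) < 1e-9
    if len(factorint(n)) > 2:  # e.g. n = 30
        assert abs(lambda2(n)) < 1e-9
print("identity verified for n < 40")
```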

If there is a Siegel zero ${L(\beta,\chi)=0}$ with ${\beta}$ close to ${1}$ and ${\chi}$ a Dirichlet character of conductor ${q}$, then multiplicative number theory methods can be used to show that the Möbius function ${\mu}$ “pretends” to be like the character ${\chi}$ in the sense that ${\mu(p) \approx \chi(p)}$ for “most” primes ${p}$ near ${q}$ (e.g. in the range ${q^\varepsilon \leq p \leq q^C}$ for some small ${\varepsilon>0}$ and large ${C>0}$). Traditionally, one uses complex-analytic methods to demonstrate this, but one can also use elementary multiplicative number theory methods to establish these results (qualitatively at least), as will be shown below the fold.

The fact that ${\mu}$ pretends to be like ${\chi}$ can be used to construct a tractable approximation (after inserting the sieve weight ${\nu}$) in the range ${[x,2x]}$ (where ${x = q^C}$ for some large ${C}$) for the second von Mangoldt function ${\Lambda_2}$, namely the function

$\displaystyle \tilde \Lambda_2(n) := (\chi * L^2)(n) = \sum_{d|n} \chi(d) \log^2 \frac{n}{d}.$

Roughly speaking, we think of the periodic function ${\chi}$ and the slowly varying function ${\log^2}$ as being of about the same “complexity” as the constant function ${1}$, so that ${\tilde \Lambda_2}$ is roughly of the same “complexity” as the divisor function

$\displaystyle \tau(n) := (1*1)(n) = \sum_{d|n} 1,$

which is considerably simpler to obtain asymptotics for than the von Mangoldt function, as the Möbius function is no longer present. (For instance, note from the Dirichlet hyperbola method that one can estimate ${\sum_{x \leq n \leq 2x} \tau(n)}$ to accuracy ${O(\sqrt{x})}$ with little difficulty, whereas to obtain a comparable level of accuracy for ${\sum_{x \leq n \leq 2x} \Lambda(n)}$ or ${\sum_{x \leq n \leq 2x} \Lambda_2(n)}$ is essentially the Riemann hypothesis.)
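To make the parenthetical concrete, here is a quick numerical check (illustrative; it sums over ${n \leq x}$ rather than over ${x \leq n \leq 2x}$, which makes no essential difference) of the hyperbola method estimate ${\sum_{n \leq x} \tau(n) = x \log x + (2\gamma - 1) x + O(\sqrt{x})}$.

```python
# Dirichlet hyperbola method: sum_{n <= x} tau(n) in O(sqrt(x)) operations,
# compared against the main term x log x + (2*gamma - 1) x.
import math

def tau_sum(x):
    s = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, s + 1)) - s * s

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
for x in (10 ** 4, 10 ** 6, 10 ** 8):
    err = tau_sum(x) - (x * math.log(x) + (2 * GAMMA - 1) * x)
    print(f"x = {x:>9}: error = {err:10.1f}, sqrt(x) = {math.isqrt(x)}")
```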

One expects ${\tilde \Lambda_2(n)}$ to be a good approximant to ${\Lambda_2(n)}$ if ${n}$ is of size ${O(x)}$ and has no prime factors less than ${q^{1/C}}$ for some large constant ${C}$. The Selberg sieve ${\nu}$ will be mostly supported on numbers with no prime factor less than ${q^{1/C}}$. As such, one can hope to approximate (1) by the expression

$\displaystyle \sum_{x \leq n \leq 2x} \tilde \Lambda_2(n(n+2)) \nu(n(n+2)); \ \ \ \ \ (2)$

as it turns out, the error between this expression and (1) is easily controlled by sieve-theoretic techniques. Let us ignore the Selberg sieve for now and focus on the slightly simpler sum

$\displaystyle \sum_{x \leq n \leq 2x} \tilde \Lambda_2(n(n+2)).$

As discussed above, this sum should be thought of as a slightly more complicated version of the sum

$\displaystyle \sum_{x \leq n \leq 2x} \tau(n(n+2)). \ \ \ \ \ (3)$

Accordingly, let us look (somewhat informally) at the task of estimating the model sum (3). One can think of this problem as basically that of counting solutions to the equation ${ab+2=cd}$ with ${a,b,c,d}$ in various ranges; this is clearly related to understanding the equidistribution of the hyperbola ${\{ (a,b) \in {\bf Z}/d{\bf Z}: ab + 2 = 0 \hbox{ mod } d \}}$ in ${({\bf Z}/d{\bf Z})^2}$. Taking Fourier transforms, the latter problem is closely related to estimation of the Kloosterman sums

$\displaystyle \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{a_1 m + a_2 \overline{m}}{r} )$

where ${\overline{m}}$ denotes the inverse of ${m}$ in ${({\bf Z}/r{\bf Z})^\times}$. One can then use the Weil bound

$\displaystyle \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} ) \ll r^{1/2 + o(1)} (a,b,r)^{1/2} \ \ \ \ \ (4)$

where ${(a,b,r)}$ is the greatest common divisor of ${a,b,r}$ (with the convention that this is equal to ${r}$ if ${a,b}$ vanish), and the ${o(1)}$ decays to zero as ${r \rightarrow \infty}$. The Weil bound yields good enough control on error terms to estimate (3), and as it turns out the same method also works to estimate (2) (provided that ${x=q^C}$ with ${C}$ large enough).
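The Weil bound is easy to test numerically for prime modulus ${r = p}$, where for ${a,b}$ coprime to ${p}$ it takes the sharp form ${|\sum_m e(\frac{am+b\overline{m}}{p})| \leq 2\sqrt{p}}$. A brute-force Python check (illustrative only; small primes, ${b = 1}$):

```python
# Kloosterman sums K(a,1;p) for prime p, checked against the Weil bound 2 sqrt(p).
import cmath

for p in (101, 499, 997):
    inv = {m: pow(m, -1, p) for m in range(1, p)}  # modular inverses (Python 3.8+)
    worst = max(
        abs(sum(cmath.exp(2j * cmath.pi * ((a * m + inv[m]) % p) / p)
                for m in range(1, p)))
        for a in range(1, p))
    print(f"p = {p}: max_a |K(a,1;p)| = {worst:.3f}, 2 sqrt(p) = {2 * p ** 0.5:.3f}")
```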

Actually one does not need the full strength of the Weil bound here; any power savings over the trivial bound of ${r}$ will do. In particular, it will suffice to use the weaker, but easier to prove, bounds of Kloosterman:

Lemma 2 (Kloosterman bound) One has

$\displaystyle \sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} ) \ll r^{3/4 + o(1)} (a,b,r)^{1/4} \ \ \ \ \ (5)$

whenever ${r \geq 1}$ and ${a,b \in {\bf Z}/r{\bf Z}}$, where the ${o(1)}$ is with respect to the limit ${r \rightarrow \infty}$ (and is uniform in ${a,b}$).

Proof: Observe from change of variables that the Kloosterman sum ${\sum_{m \in ({\bf Z}/r{\bf Z})^\times} e( \frac{am+b\overline{m}}{r} )}$ is unchanged if one replaces ${(a,b)}$ with ${(\lambda a, \lambda^{-1} b)}$ for ${\lambda \in ({\bf Z}/r{\bf Z})^\times}$. For fixed ${a,b}$, the number of such pairs ${(\lambda a, \lambda^{-1} b)}$ is at least ${r^{1-o(1)} / (a,b,r)}$, thanks to the divisor bound. Thus it will suffice to establish the fourth moment bound

$\displaystyle \sum_{a,b \in {\bf Z}/r{\bf Z}} |\sum_{m \in ({\bf Z}/r{\bf Z})^\times} e\left( \frac{am+b\overline{m}}{r} \right)|^4 \ll r^{4+o(1)}.$

The left-hand side can be rearranged as

$\displaystyle \sum_{m_1,m_2,m_3,m_4 \in ({\bf Z}/r{\bf Z})^\times} \sum_{a,b \in {\bf Z}/r{\bf Z}}$

$\displaystyle e\left( \frac{a(m_1+m_2-m_3-m_4) + b(\overline{m_1}+\overline{m_2}-\overline{m_3}-\overline{m_4})}{r} \right)$

which by Fourier summation is equal to

$\displaystyle r^2 \# \{ (m_1,m_2,m_3,m_4) \in (({\bf Z}/r{\bf Z})^\times)^4:$

$\displaystyle m_1+m_2-m_3-m_4 = \frac{1}{m_1} + \frac{1}{m_2} - \frac{1}{m_3} - \frac{1}{m_4} = 0 \hbox{ mod } r \}.$

Observe from the quadratic formula and the divisor bound that each pair ${(x,y)\in ({\bf Z}/r{\bf Z})^2}$ has at most ${O(r^{o(1)})}$ solutions ${(m_1,m_2)}$ to the system of equations ${m_1+m_2=x; \frac{1}{m_1} + \frac{1}{m_2} = y}$. Hence the number of quadruples ${(m_1,m_2,m_3,m_4)}$ of the desired form is ${r^{2+o(1)}}$, and the claim follows. $\Box$

We will also need another easy case of the Weil bound to handle some other portions of (2):

Lemma 3 (Easy Weil bound) Let ${\chi}$ be a primitive real Dirichlet character of conductor ${q}$, and let ${a,b,c,d \in{\bf Z}/q{\bf Z}}$. Then

$\displaystyle \sum_{n \in {\bf Z}/q{\bf Z}} \chi(an+b) \chi(cn+d) \ll q^{o(1)} (ad-bc, q).$

Proof: As ${q}$ is the conductor of a primitive real Dirichlet character, ${q}$ is equal to ${2^j}$ times a squarefree odd number for some ${j \leq 3}$. By the Chinese remainder theorem, it thus suffices to establish the claim when ${q}$ is an odd prime ${p}$. We may assume that ${ad-bc}$ is not divisible by ${p}$, as the claim is trivial otherwise. If ${a}$ vanishes then ${c}$ does not vanish, and the claim follows from the mean zero nature of ${\chi}$; similarly if ${c}$ vanishes. Hence we may assume that ${a,c}$ do not vanish, and then we can normalise them to equal ${1}$. By completing the square it now suffices to show that

$\displaystyle \sum_{n \in {\bf Z}/p{\bf Z}} \chi( n^2 - b ) \ll 1$

whenever ${b \neq 0 \hbox{ mod } p}$. As ${\chi}$ is ${+1}$ on the quadratic residues and ${-1}$ on the non-residues, it now suffices to show that

$\displaystyle \# \{ (m,n) \in ({\bf Z}/p{\bf Z})^2: n^2 - b = m^2 \} = p + O(1).$

But by making the change of variables ${(x,y) = (n+m,n-m)}$, the left-hand side becomes ${\# \{ (x,y) \in ({\bf Z}/p{\bf Z})^2: xy=b\}}$, and the claim follows. $\Box$
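In fact, in the odd prime case the last sum can be evaluated exactly: the solution count is precisely ${p-1}$, so ${\sum_{n} \chi(n^2 - b) = -1}$ for every ${b \neq 0 \hbox{ mod } p}$. A short Python check (illustrative; assumes sympy's legendre_symbol):

```python
# For an odd prime p and b != 0 mod p, sum_{n mod p} chi(n^2 - b) = -1,
# where chi is the Legendre symbol mod p.
from sympy import legendre_symbol

for p in (7, 31, 101):
    values = {sum(legendre_symbol((n * n - b) % p, p) for n in range(p))
              for b in range(1, p)}
    print(p, values)  # prints {-1} for each p
```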

While the basic strategy of Heath-Brown’s argument is relatively straightforward, implementing it requires a large amount of computation to control both main terms and error terms. I experimented for a while with rearranging the argument to try to reduce the amount of computation; I did not fully succeed in arriving at a satisfactorily minimal amount of superfluous calculation, but I was able to at least reduce this amount a bit, mostly by replacing a combinatorial sieve with a Selberg-type sieve (which did not need to be positive, so I dispensed with the squaring step of the Selberg sieve to simplify the calculations a little further; also, for minor reasons it was convenient to retain a tiny portion of the combinatorial sieve to eliminate extremely small primes). Some modest reductions in complexity can also be obtained by using the second von Mangoldt function ${\Lambda_2(n(n+2))}$ in place of ${\Lambda(n) \Lambda(n+2)}$. These exercises were primarily for my own benefit, but I am placing them here in case they are of interest to some other readers.