You are currently browsing the category archive for the ‘math.PR’ category.

Asgar Jamneshan and I have just uploaded to the arXiv our paper “Foundational aspects of uncountable measure theory: Gelfand duality, Riesz representation, canonical models, and canonical disintegration“. This paper arose from our longer-term project to systematically develop “uncountable” ergodic theory – ergodic theory in which the groups acting are not required to be countable, the probability spaces one acts on are not required to be standard Borel or Polish, and the compact groups that arise in the structural theory (e.g., the theory of group extensions) are not required to be separable. One motivation for doing this is to allow ergodic theory results to be applied to ultraproducts of finite dynamical systems, which can then hopefully be transferred to establish combinatorial results with good uniformity properties. An instance of this is the uncountable Mackey-Zimmer theorem, discussed in this companion blog post.

In the course of this project, we ran into the obstacle that many foundational results, such as the Riesz representation theorem, often require one or more of these countability hypotheses when encountered in textbooks. Other technical issues also arise in the uncountable setting, such as the need to distinguish the Borel $\sigma$-algebra from the (two different types of) Baire $\sigma$-algebra. As such we needed to spend some time reviewing and synthesizing the known literature on some foundational results of “uncountable” measure theory, which led to this paper. Accordingly, most of the results of this paper are already in the literature, either explicitly or implicitly, in one form or another (with perhaps the exception of the canonical disintegration, which we discuss below); we view the main contribution of this paper as presenting the results in a coherent and unified fashion. In particular we found that the language of category theory was invaluable in clarifying and organizing all the different results. In subsequent work we (and some other authors) will use the results in this paper for various applications in uncountable ergodic theory.

The foundational results covered in this paper can be divided into a number of subtopics (Gelfand duality, Baire $\sigma$-algebras and Riesz representation, canonical models, and canonical disintegration), which we discuss further below the fold.

Dimitri Shlyakhtenko and I have uploaded to the arXiv our paper Fractional free convolution powers. For me, this project (which we started during the 2018 IPAM program on quantitative linear algebra) was motivated by a desire to understand the behavior of the *minor process* applied to a large random Hermitian $N \times N$ matrix $A_N$, in which one takes the successive upper left $n \times n$ minors $A_n$ of $A_N$ and computes their eigenvalues $\lambda_1(A_n) \leq \dots \leq \lambda_n(A_n)$ in non-decreasing order. These eigenvalues are related to each other by the Cauchy interlacing inequalities

$\displaystyle \lambda_i(A_{n+1}) \leq \lambda_i(A_n) \leq \lambda_{i+1}(A_{n+1})$

for $1 \leq i \leq n < N$, and together form a triangular array known as a *Gelfand-Tsetlin pattern*, as discussed in these previous blog posts.

When $N$ is large and the matrix $A_N$ is a random matrix with empirical spectral distribution converging to some compactly supported probability measure $\mu$ on the real line, then under suitable hypotheses (e.g., unitary conjugation invariance of the random matrix ensemble $A_N$), a “concentration of measure” effect occurs, with the spectral distribution of the minors $A_n$ with $n \approx N/k$ for any fixed $k \geq 1$ converging to a specific measure that depends only on $\mu$ and $k$. When $k$ is a natural number, there is a surprising description of this measure: it is the free convolution $\mu^{\boxplus k}$ of $k$ copies of $\mu$, pushed forward by the dilation map $x \mapsto x/k$. For instance, if $\mu$ is the Wigner semicircular measure, then this construction produces another (rescaled) semicircular measure. At the random matrix level, this reflects the fact that the minor of a GUE matrix is again a GUE matrix (up to a renormalising constant).
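As a quick numerical illustration of the interlacing phenomenon (my own sketch, not taken from the paper; the matrix size and normalisation are illustrative choices), one can sample a GUE matrix with numpy and verify that the eigenvalues of its top-left minor interlace those of the full matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def gue(n, rng):
    """Sample an n x n GUE-type matrix, normalised so the spectrum stays bounded."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / (2 * np.sqrt(n))

N = 200
A = gue(N, rng)
full = np.linalg.eigvalsh(A)             # eigenvalues of A, ascending
minor = np.linalg.eigvalsh(A[:-1, :-1])  # eigenvalues of the (N-1) x (N-1) minor

# Cauchy interlacing: lambda_i(A) <= lambda_i(minor) <= lambda_{i+1}(A).
assert np.all(full[:-1] <= minor + 1e-12)
assert np.all(minor <= full[1:] + 1e-12)
print("interlacing holds for all", N - 1, "minor eigenvalues")
```

Interlacing is a deterministic fact about Hermitian matrices, so the check passes for every sample, not just on average.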

As first observed by Bercovici and Voiculescu and developed further by Nica and Speicher, among other authors, the notion of a free convolution power $\mu^{\boxplus k}$ of $\mu$ can be extended to non-integer $k \geq 1$, thus giving the notion of a “fractional free convolution power”. This notion can be defined in several different ways. One of them proceeds via the Cauchy transform

$\displaystyle G_\mu(z) := \int_{\bf R} \frac{d\mu(x)}{z - x}$

of the measure $\mu$, and $\mu^{\boxplus k}$ can be defined by solving a Burgers-type equation in the $k$ variable with initial condition given by $\mu$ at $k=1$ (see this previous blog post for a derivation). This equation can be solved explicitly using the *$R$-transform* of $\mu$, defined by solving the equation $G_\mu( R_\mu(z) + \frac{1}{z} ) = z$ for $z$ in a suitable domain, in which case one can show that $R_{\mu^{\boxplus k}}(z) = k R_\mu(z)$. (In the case of the semicircular measure, the $R$-transform is simply the identity: $R(z) = z$.)
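For the standard semicircular measure on $[-2,2]$ these transforms can be written in closed form, which gives a cheap numerical sanity check (an illustrative sketch of my own, not from the paper): the Cauchy transform is $G(w) = (w - \sqrt{w^2-4})/2$ for real $w > 2$, its functional inverse is $K(z) = z + 1/z$, and the $R$-transform $R(z) = K(z) - 1/z$ is the identity.

```python
import math

def cauchy_semicircle(w):
    """Cauchy transform of the standard semicircular law, for real w > 2."""
    return (w - math.sqrt(w * w - 4.0)) / 2.0

for z in [0.1, 0.3, 0.5, 0.9]:
    K = z + 1.0 / z                                  # claimed inverse of G
    assert abs(cauchy_semicircle(K) - z) < 1e-12     # G(K(z)) = z
    R = K - 1.0 / z                                  # the R-transform
    assert abs(R - z) < 1e-12                        # R is the identity
print("R-transform of the semicircle is the identity at the sampled points")
```

Algebraically this is exact: for $0 < z < 1$ one has $\sqrt{K(z)^2 - 4} = 1/z - z$, so $G(K(z)) = z$ on the nose, and the assertions only absorb floating-point error.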

Nica and Speicher also gave a free probability interpretation of the fractional free convolution power: if $x$ is a noncommutative random variable in a noncommutative probability space $({\mathcal A}, \tau)$ with distribution $\mu$, and $p$ is a real projection operator free of $x$ with trace $1/k$, then the “minor” $pxp$ of $x$ (viewed as an element of a new noncommutative probability space $p {\mathcal A} p$ whose elements are minors $pyp$ of elements $y$ of ${\mathcal A}$, with trace $pyp \mapsto \tau(pyp)/\tau(p)$) has the law of the fractional free convolution power $\mu^{\boxplus k}$, dilated by $1/k$ (we give a self-contained proof of this in an appendix to our paper). This suggests that the minor process (or fractional free convolution) can be studied within the framework of free probability theory.

One of the known facts about integer free convolution powers $\mu^{\boxplus k}$ is monotonicity of the *free entropy* $\chi$ and *free Fisher information* $\Phi$, which were introduced by Voiculescu as free probability analogues of the classical probability concepts of differential entropy and classical Fisher information. (Here we correct a small typo in the normalization constant of Fisher entropy as presented in Voiculescu’s paper.) Namely, it was shown by Shlyakhtenko that the free entropy of the renormalised free convolution power $\frac{1}{\sqrt{k}}_* \mu^{\boxplus k}$ is monotone non-decreasing for integer $k$, and the free Fisher information of the same measure is monotone non-increasing for integer $k$. This is the free probability analogue of the corresponding monotonicities for differential entropy and classical Fisher information that were established by Artstein, Ball, Barthe, and Naor, answering a question of Shannon.

Our first main result is to extend the monotonicity results of Shlyakhtenko to fractional $k$. We give two proofs of this fact, one using free probability machinery, and a more self-contained (but less motivated) proof using integration by parts and contour integration. The free probability proof relies on the concept of the *free score* of a noncommutative random variable, which is the analogue of the classical score. The free score, also introduced by Voiculescu, can be defined by duality as measuring the perturbation with respect to semicircular noise, or more precisely

The free score interacts very well with the free minor process $x \mapsto pxp$; in particular, by standard calculations one can establish the identity

whenever $x$ is a noncommutative random variable, ${\mathcal B}$ is an algebra of noncommutative random variables, and $p$ is a real projection of trace $1/k$ that is free of both $x$ and ${\mathcal B}$. The monotonicity of free Fisher information then follows from an application of Pythagoras’s theorem (which implies in particular that conditional expectation operators are contractions on $L^2$). The monotonicity of free entropy then follows from an integral representation of free entropy as an integral of free Fisher information along the free Ornstein-Uhlenbeck process (or equivalently, free Fisher information is essentially the rate of change of free entropy with respect to perturbation by semicircular noise). The argument also shows when equality holds in the monotonicity inequalities; this occurs precisely when $\mu$ is a semicircular measure up to affine rescaling.

After an extensive amount of calculation of all the quantities that were implicit in the above free probability argument (in particular computing the various terms involved in the application of Pythagoras’ theorem), we were able to extract a self-contained proof of monotonicity that relied on differentiating the quantities in $k$ and using the differential equation (1). It turns out that if $\mu$ has a sufficiently regular density, then there is an identity

where $K$ is an explicit kernel (given in the paper). It is not difficult to show that $K$ is a positive semi-definite kernel, which gives the required monotonicity. It would be interesting to obtain some more insightful interpretation of the kernel $K$ and the identity (2).

These monotonicity properties hint at the minor process being associated to some sort of “gradient flow” in the $k$ parameter. We were not able to formalize this intuition; indeed, it is not clear what a gradient flow on a varying noncommutative probability space even means. However, after substantial further calculation we were able to formally describe the minor process as the Euler-Lagrange equation for an intriguing Lagrangian functional that we conjecture to have a random matrix interpretation. We first work in “Lagrangian coordinates”, defining a quantity $\lambda$ on the “Gelfand-Tsetlin pyramid”

by a formula which is well defined if the density of $\mu$ is sufficiently well behaved. The random matrix interpretation of $\lambda$ is that it describes the asymptotic locations of the eigenvalues of the upper left minors of a random matrix $A_N$ with asymptotic empirical spectral distribution $\mu$ and with unitarily invariant distribution; thus $\lambda$ is in some sense a continuum limit of Gelfand-Tsetlin patterns. In this asymptotic limit regime, the Cauchy interlacing laws become monotonicity properties of $\lambda$. After a lengthy calculation (involving extensive use of the chain rule and product rule), the equation (1) turns out to be equivalent to the Euler-Lagrange equation for an explicit Lagrangian density, so that the minor process is formally a critical point of the associated integral. One factor appearing in this Lagrangian density measures the mean eigenvalue spacing at a given location of the Gelfand-Tsetlin pyramid, and another measures the mean eigenvalue drift in the minor process. This suggests that this Lagrangian density is some sort of measure of entropy of the asymptotic microscale point process emerging from the minor process at this spacing and drift. There is work of Metcalfe demonstrating that this point process is given by the Boutillier bead model, so we conjecture that this Lagrangian density somehow measures the entropy density of this process.

Asgar Jamneshan and I have just uploaded to the arXiv our paper “An uncountable Moore-Schmidt theorem“. This paper revisits a classical theorem of Moore and Schmidt in measurable cohomology of measure-preserving systems. To state the theorem, let $X = (X, {\mathcal X}, \mu)$ be a probability space, and let $\mathrm{Aut}(X, {\mathcal X}, \mu)$ be the group of measure-preserving automorphisms of this space, that is to say the invertible bimeasurable maps $T: X \rightarrow X$ that preserve the measure $\mu$: $T_* \mu = \mu$. To avoid some ambiguity later in this post when we introduce abstract analogues of measure theory, we will refer to measurable maps as *concrete measurable maps*, and measurable spaces as *concrete measurable spaces*.
(One could also call $X = (X, {\mathcal X}, \mu)$ a concrete probability space, but we will not need to do so here as we will not be working explicitly with abstract probability spaces.)

Let $\Gamma$ be a discrete group. A *(concrete) measure-preserving action* of $\Gamma$ on $X$ is a group homomorphism $\gamma \mapsto T^\gamma$ from $\Gamma$ to $\mathrm{Aut}(X, {\mathcal X}, \mu)$, thus $T^1$ is the identity map and $T^{\gamma_1} \circ T^{\gamma_2} = T^{\gamma_1 \gamma_2}$ for all $\gamma_1, \gamma_2 \in \Gamma$. A large portion of ergodic theory is concerned with the study of such measure-preserving actions, especially in the classical case when $\Gamma$ is the integers ${\bf Z}$ (with the additive group law).

Let $K$ be a compact Hausdorff abelian group, which we can endow with the Borel $\sigma$-algebra ${\mathcal B}(K)$. A *(concrete measurable) $K$-cocycle* is a collection $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ of concrete measurable maps $\rho_\gamma: X \rightarrow K$ obeying the *cocycle equation*

$\displaystyle \rho_{\gamma_1 \gamma_2}(x) = \rho_{\gamma_1}(T^{\gamma_2} x) + \rho_{\gamma_2}(x)$

for $\mu$-almost every $x \in X$. (Here we are glossing over a measure-theoretic subtlety that we will return to later in this post – see if you can spot it before then!) Cocycles arise naturally in the theory of group extensions of dynamical systems; in particular (and ignoring the aforementioned subtlety), each cocycle induces a measure-preserving action on $X \times K$ (which we endow with the product of $\mu$ with the Haar probability measure on $K$), defined by

$\displaystyle S^\gamma(x, k) := (T^\gamma x, k + \rho_\gamma(x)).$
This connection with group extensions was the original motivation for our study of measurable cohomology, but is not the focus of the current paper.

A special case of a $K$-valued cocycle is a *(concrete measurable) $K$-valued coboundary*, in which $\rho_\gamma$ for each $\gamma \in \Gamma$ takes the special form

$\displaystyle \rho_\gamma(x) = F(T^\gamma x) - F(x)$

for $\mu$-almost every $x \in X$, where $F: X \rightarrow K$ is some measurable function; note that (ignoring the aforementioned subtlety), every function of this form is automatically a concrete measurable $K$-valued cocycle. One of the first basic questions in measurable cohomology is to try to characterize which $K$-valued cocycles are in fact $K$-valued coboundaries. This is a difficult question in general. However, there is a general result of Moore and Schmidt that at least allows one to reduce to the model case when $K$ is the unit circle ${\bf T} = {\bf R}/{\bf Z}$, by taking advantage of the Pontryagin dual group $\hat K$ of characters $\hat k: K \rightarrow {\bf T}$, that is to say the collection of continuous homomorphisms to the unit circle. More precisely, we have
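As a toy model of these definitions (my own illustrative script, not from the paper; all the specific choices here, the rotation angle and the transfer function, are hypothetical), take $\Gamma = {\bf Z}$ acting on $X = {\bf R}/{\bf Z}$ by an irrational rotation and $K = {\bf R}/{\bf Z}$ written additively. One can then verify numerically that a coboundary automatically satisfies the cocycle equation:

```python
import math

alpha = math.sqrt(2) % 1.0           # irrational rotation angle (illustrative)

def T(x, n=1):
    """The rotation action of n in Z on X = R/Z."""
    return (x + n * alpha) % 1.0

def F(x):
    """An arbitrary measurable 'transfer function' X -> K (illustrative)."""
    return (0.25 * math.sin(2 * math.pi * x) + 0.1 * x * x) % 1.0

def rho(n, x):
    """The coboundary rho_n(x) = F(T^n x) - F(x), valued in R/Z."""
    return (F(T(x, n)) - F(x)) % 1.0

def circle_dist(a, b):
    """Distance in R/Z."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# Check the cocycle equation rho_{n+m}(x) = rho_n(T^m x) + rho_m(x).
for x in [0.0, 0.123, 0.5, 0.777]:
    for n in [1, 2, 5]:
        for m in [1, 3, 4]:
            lhs = rho(n + m, x)
            rhs = (rho(n, T(x, m)) + rho(m, x)) % 1.0
            assert circle_dist(lhs, rhs) < 1e-9
print("cocycle equation verified at the sampled points")
```

The check is just the telescoping identity $F(T^{n+m}x) - F(x) = (F(T^n T^m x) - F(T^m x)) + (F(T^m x) - F(x))$, so it holds exactly up to rounding; the hard direction of the Moore-Schmidt theorem is the converse problem of recognising coboundaries among cocycles.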

Theorem 1 (Countable Moore-Schmidt theorem) Let $\Gamma$ be a discrete group acting in a concrete measure-preserving fashion on a probability space $(X, \mu)$. Let $K$ be a compact Hausdorff abelian group. Assume the following additional hypotheses:

- (i) $\Gamma$ is at most countable.
- (ii) $X$ is a standard Borel space.
- (iii) $K$ is metrisable.
Then a $K$-valued concrete measurable cocycle $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ is a concrete coboundary if and only if for each character $\hat k \in \hat K$, the ${\bf T}$-valued cocycles $\hat k \circ \rho = (\hat k \circ \rho_\gamma)_{\gamma \in \Gamma}$ are concrete coboundaries.

The hypotheses (i), (ii), (iii) are saying in some sense that the data $\Gamma$, $X$, $K$ are not too “large”; in all three cases they are saying in some sense that the data are only “countably complicated”. For instance, (iii) is equivalent to $K$ being second countable, and (ii) is equivalent to $X$ being modeled by a complete separable metric space. It is because of this restriction that we refer to this result as a “countable” Moore-Schmidt theorem. This theorem is a useful tool in several other applications, such as the Host-Kra structure theorem for ergodic systems; I hope to return to these subsequent applications in a future post.

Let us very briefly sketch the main ideas of the proof of Theorem 1. Ignore for now issues of measurability, and pretend that something that holds almost everywhere in fact holds everywhere. The hard direction is to show that if each $\hat k \circ \rho$ is a coboundary, then so is $\rho$. By hypothesis, we then have an equation of the form

$\displaystyle \hat k( \rho_\gamma(x) ) = \alpha_{\hat k}(T^\gamma x) - \alpha_{\hat k}(x) \ \ \ \ \ (2)$

for all $\hat k \in \hat K$, $\gamma \in \Gamma$, $x \in X$ and some functions $\alpha_{\hat k}: X \rightarrow {\bf T}$, and our task is then to produce a function $F: X \rightarrow K$ for which

$\displaystyle \rho_\gamma(x) = F(T^\gamma x) - F(x)$

for all $\gamma \in \Gamma$, $x \in X$.

Comparing the two equations, the task would be easy if we could find an $F: X \rightarrow K$ for which

$\displaystyle \hat k( F(x) ) = \alpha_{\hat k}(x) \ \ \ \ \ (3)$

for all $\hat k \in \hat K$, $x \in X$. However there is an obstruction to this: the left-hand side of (3) is additive in $\hat k$, so the right-hand side would have to be also in order to obtain such a representation. In other words, for this strategy to work, one would have to first establish the identity

$\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = 0 \ \ \ \ \ (4)$

for all $\hat k_1, \hat k_2 \in \hat K$, $x \in X$. On the other hand, the good news is that if we somehow manage to obtain the equation, then we can obtain a function $F$ obeying (3), thanks to Pontryagin duality, which gives a one-to-one correspondence between $K$ and the homomorphisms of the (discrete) group $\hat K$ to ${\bf T}$.

Now, it turns out that one cannot derive the equation (4) directly from the given information (2). However, the left-hand side of (2) is additive in $\hat k$, so the right-hand side must be also. Manipulating this fact, we eventually arrive at the conclusion that the left-hand side of (4) is unchanged when one replaces $x$ by $T^\gamma x$. In other words, we don’t get to show that the left-hand side of (4) vanishes, but we do at least get to show that it is $\Gamma$-invariant. Now let us assume for sake of argument that the action of $\Gamma$ is ergodic, which (ignoring issues about sets of measure zero) basically asserts that the only $\Gamma$-invariant functions are constant. So now we get a weaker version of (4), namely

$\displaystyle \alpha_{\hat k_1 + \hat k_2}(x) - \alpha_{\hat k_1}(x) - \alpha_{\hat k_2}(x) = c_{\hat k_1, \hat k_2} \ \ \ \ \ (5)$

for some constants $c_{\hat k_1, \hat k_2} \in {\bf T}$.

Now we need to eliminate the constants. This can be done by the following group-theoretic projection. Let $L^0(X \rightarrow {\bf T})$ denote the space of concrete measurable maps from $X$ to ${\bf T}$, up to almost everywhere equivalence; this is an abelian group where the various terms in (5) naturally live. Inside this group we have the subgroup ${\bf T}$ of constant functions (up to almost everywhere equivalence); this is where the right-hand side of (5) lives. Because ${\bf T}$ is a divisible group, there is an application of Zorn’s lemma (a good exercise for those who are not acquainted with these things) to show that there exists a retraction $w: L^0(X \rightarrow {\bf T}) \rightarrow {\bf T}$, that is to say a group homomorphism that is the identity on the subgroup ${\bf T}$. We can use this retraction, or more precisely the complement $\alpha \mapsto \alpha - w(\alpha)$, to eliminate the constant in (5). Indeed, if we set

$\displaystyle \tilde \alpha_{\hat k}(x) := \alpha_{\hat k}(x) - w(\alpha_{\hat k})$

then from (5) we see that

$\displaystyle \tilde \alpha_{\hat k_1 + \hat k_2}(x) - \tilde \alpha_{\hat k_1}(x) - \tilde \alpha_{\hat k_2}(x) = 0$

while from (2) one has

$\displaystyle \hat k( \rho_\gamma(x) ) = \tilde \alpha_{\hat k}(T^\gamma x) - \tilde \alpha_{\hat k}(x)$

and now the previous strategy works with $\alpha_{\hat k}$ replaced by $\tilde \alpha_{\hat k}$. This concludes the sketch of proof of Theorem 1.

In making the above argument rigorous, the hypotheses (i)-(iii) are used in several places. For instance, to reduce to the ergodic case one relies on the ergodic decomposition, which requires the hypothesis (ii). Also, most of the above equations only hold outside of a set of measure zero, and one needs the hypothesis (i) and the hypothesis (iii) (the latter being equivalent to $\hat K$ being at most countable) to avoid the problem that an uncountable union of sets of measure zero could have positive measure (or fail to be measurable at all).

My co-author Asgar Jamneshan and I are working on a long-term project to extend many results in ergodic theory (such as the aforementioned Host-Kra structure theorem) to “uncountable” settings in which hypotheses analogous to (i)-(iii) are omitted; thus we wish to consider actions of uncountable groups, on spaces that are not standard Borel, and cocycles taking values in groups that are not metrisable. Such uncountable contexts naturally arise when trying to apply ergodic theory techniques to combinatorial problems (such as the inverse conjecture for the Gowers norms), as one often relies on the ultraproduct construction (or something similar) to generate an ergodic theory translation of these problems, and these constructions usually give “uncountable” objects rather than “countable” ones. (For instance, the ultraproduct of finite groups is a hyperfinite group, which is usually uncountable.) This paper marks the first step in this project by extending the Moore-Schmidt theorem to the uncountable setting.

If one simply drops the hypotheses (i)-(iii) and tries to prove the Moore-Schmidt theorem, several serious difficulties arise. We have already mentioned the loss of the ergodic decomposition and the possibility that one has to control an uncountable union of null sets. But there is in fact a more basic problem when one deletes (iii): the addition operation $+: K \times K \rightarrow K$, while still continuous, can fail to be measurable as a map from $(K \times K, {\mathcal B}(K) \otimes {\mathcal B}(K))$ to $(K, {\mathcal B}(K))$! Thus for instance the sum of two measurable functions $F, G: X \rightarrow K$ need not remain measurable, which makes even the very definition of a measurable cocycle or measurable coboundary problematic (or at least unnatural). This phenomenon is known as the *Nedoma pathology*. A standard example arises when $K$ is the uncountable torus $({\bf R}/{\bf Z})^{{\bf R}/{\bf Z}}$, endowed with the product topology. Crucially, the Borel $\sigma$-algebra ${\mathcal B}(K)$ generated by this uncountable product is *not* the product of the factor Borel $\sigma$-algebras (the discrepancy ultimately arises from the fact that topologies permit uncountable unions, but $\sigma$-algebras do not); relating to this, the product $\sigma$-algebra ${\mathcal B}(K) \otimes {\mathcal B}(K)$ is *not* the same as the Borel $\sigma$-algebra ${\mathcal B}(K \times K)$, but is instead a strict sub-algebra. If the group operations on $K$ were measurable, then the diagonal set

$\displaystyle K^\Delta := \{ (k, k') \in K \times K: k = k' \}$

would be measurable in ${\mathcal B}(K) \otimes {\mathcal B}(K)$. But it is an easy exercise in manipulation of $\sigma$-algebras to show that if $(X, {\mathcal X})$ and $(Y, {\mathcal Y})$ are any two measurable spaces and $E$ is measurable in ${\mathcal X} \otimes {\mathcal Y}$, then the fibres $E_x := \{ y \in Y: (x, y) \in E \}$ of $E$ are contained in some countably generated subalgebra of ${\mathcal Y}$. Thus if the diagonal were ${\mathcal B}(K) \otimes {\mathcal B}(K)$-measurable, then all the singletons $\{k\}$, $k \in K$, would lie in a single countably generated $\sigma$-algebra. But the cardinality of such an algebra is at most $2^{\aleph_0}$ while the cardinality of $K$ is $2^{2^{\aleph_0}}$, and Cantor’s theorem then gives a contradiction.

To resolve this problem, we give $K$ a coarser $\sigma$-algebra than the Borel $\sigma$-algebra, namely the *Baire $\sigma$-algebra*, thus coarsening the measurable space structure on $K$ to a new measurable space. In the case of compact Hausdorff abelian groups, the Baire $\sigma$-algebra can be defined as the $\sigma$-algebra generated by the characters $\hat k: K \rightarrow {\bf T}$; for more general compact abelian groups, one can define it as the $\sigma$-algebra generated by all continuous maps into metric spaces. This $\sigma$-algebra is equal to ${\mathcal B}(K)$ when $K$ is metrisable but can be smaller for other $K$. With this measurable structure, $K$ becomes a measurable group; it seems that once one leaves the metrisable world, the Baire $\sigma$-algebra is a superior (or at least equally good) $\sigma$-algebra to work with for analysis than the Borel $\sigma$-algebra, as it avoids the Nedoma pathology. (For instance, from Plancherel’s theorem, we see that if $\mu_K$ is the Haar probability measure on $K$, then every ${\mathcal B}(K)$-measurable set is equivalent modulo $\mu_K$-null sets to a Baire-measurable set, so there is no damage to Plancherel caused by passing to the Baire $\sigma$-algebra.)

Passing to the Baire $\sigma$-algebra fixes the most severe problems with an uncountable Moore-Schmidt theorem, but one is still faced with an issue of having to potentially take an uncountable union of null sets. To avoid this sort of problem, we pass to the framework of *abstract measure theory*, in which we remove explicit mention of “points” and can easily delete all null sets at a very early stage of the formalism. In this setup, the category of concrete measurable spaces is replaced with the larger category of *abstract measurable spaces*, which we formally define as the opposite category of the category of (abstract) $\sigma$-algebras (with Boolean algebra homomorphisms). Thus, we define an *abstract measurable space* to be an object of the form ${\mathcal X}^{\mathrm{op}}$, where ${\mathcal X}$ is an (abstract) $\sigma$-algebra and $\mathrm{op}$ is a formal placeholder symbol that signifies use of the opposite category, and an *abstract measurable map* $T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}$ is an object of the form $(T^*)^{\mathrm{op}}$, where $T^*: {\mathcal Y} \rightarrow {\mathcal X}$ is a Boolean algebra homomorphism and $\mathrm{op}$ is again used as a formal placeholder; we call $T^*$ the *pullback map* associated to $T$. [UPDATE: It turns out that this definition of a measurable map led to technical issues. In a forthcoming revision of the paper we also impose the requirement that the abstract measurable map be $\sigma$-complete (i.e., it respects countable joins).] The composition of two abstract measurable maps $T: {\mathcal X}^{\mathrm{op}} \rightarrow {\mathcal Y}^{\mathrm{op}}$, $S: {\mathcal Y}^{\mathrm{op}} \rightarrow {\mathcal Z}^{\mathrm{op}}$ is defined by the formula $S \circ T := (T^* \circ S^*)^{\mathrm{op}}$, or equivalently $(S \circ T)^* = T^* \circ S^*$.

Every concrete measurable space $(X, {\mathcal X})$ can be identified with an abstract counterpart ${\mathcal X}^{\mathrm{op}}$, and similarly every concrete measurable map $T: X \rightarrow Y$ can be identified with an abstract counterpart $(T^*)^{\mathrm{op}}$, where $T^*$ is the pullback map $T^* E := T^{-1}(E)$. Thus the category of concrete measurable spaces can be viewed as a subcategory of the category of abstract measurable spaces. The advantage of working in the abstract setting is that it gives us access to more spaces that could not be directly defined in the concrete setting. Most importantly for us, we have a new abstract space, the *opposite measure algebra* $X_\mu$ of $X$, defined as $({\mathcal X}/{\mathcal N})^{\mathrm{op}}$ where ${\mathcal N}$ is the ideal of null sets in ${\mathcal X}$. Informally, $X_\mu$ is the space $X$ with all the null sets removed; there is a canonical abstract embedding map $\iota: X_\mu \rightarrow X$, which allows one to convert any concrete measurable map $f: X \rightarrow Y$ into an abstract one $f \circ \iota: X_\mu \rightarrow Y$. One can then define the notion of an abstract action, abstract cocycle, and abstract coboundary by replacing every occurrence of the category of concrete measurable spaces with their abstract counterparts, and replacing $X$ with the opposite measure algebra $X_\mu$; see the paper for details. Our main theorem is then

Theorem 2 (Uncountable Moore-Schmidt theorem) Let $\Gamma$ be a discrete group acting abstractly on a $\sigma$-finite measure space $(X, \mu)$. Let $K$ be a compact Hausdorff abelian group. Then a $K$-valued abstract measurable cocycle $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ is an abstract coboundary if and only if for each character $\hat k \in \hat K$, the ${\bf T}$-valued cocycles $\hat k \circ \rho$ are abstract coboundaries.

With the abstract formalism, the proof of the uncountable Moore-Schmidt theorem is almost identical to the countable one (in fact we were able to make some simplifications, such as avoiding the use of the ergodic decomposition). A key tool is what we call a “conditional Pontryagin duality” theorem, which asserts that if one has an abstract measurable map $f_{\hat k}: X_\mu \rightarrow {\bf T}$ for each $\hat k \in \hat K$ obeying the identity $f_{\hat k_1 + \hat k_2} = f_{\hat k_1} + f_{\hat k_2}$ for all $\hat k_1, \hat k_2 \in \hat K$, then there is an abstract measurable map $f: X_\mu \rightarrow K$ such that $\hat k \circ f = f_{\hat k}$ for all $\hat k \in \hat K$. This is derived from the usual Pontryagin duality and some other tools, most notably the completeness of the measure algebra of $\mu$, and the Sikorski extension theorem.

We feel that it is natural to stay within the abstract measure theory formalism whenever dealing with uncountable situations. However, it is still an interesting question as to when one can guarantee that the abstract objects constructed in this formalism are representable by concrete analogues. The basic questions in this regard are:

- (i) Suppose one has an abstract measurable map $f: X_\mu \rightarrow Y$ into a concrete measurable space $Y$. Does there exist a representation of $f$ by a concrete measurable map $\tilde f: X \rightarrow Y$? Is it unique up to almost everywhere equivalence?
- (ii) Suppose one has a concrete cocycle that is an abstract coboundary. When can it be represented by a concrete coboundary?

For (i) the answer is somewhat interesting (as I learned after posing this MathOverflow question):

- If $Y$ does not separate points, or is not compact metrisable or Polish, there can be counterexamples to uniqueness. If $Y$ is not compact or Polish, there can be counterexamples to existence.
- If $Y$ is a compact metric space or a Polish space, then one always has existence and uniqueness.
- If $Y$ is a compact Hausdorff abelian group, one always has existence.
- If $X$ is a complete measure space, then one always has existence (from a theorem of Maharam).
- If $X$ is the unit interval with the Borel $\sigma$-algebra and Lebesgue measure, then one has existence for all compact Hausdorff $Y$ assuming the continuum hypothesis (from a theorem of von Neumann) but existence can fail under other extensions of ZFC (from a theorem of Shelah, using the method of forcing).
- For more general $X$, existence for all compact Hausdorff $Y$ is equivalent to the existence of a lifting from the measure algebra ${\mathcal X}/{\mathcal N}$ to the $\sigma$-algebra ${\mathcal X}$ (or, in the language of abstract measurable spaces, the existence of an abstract retraction from $X$ to $X_\mu$).
- It is a long-standing open question (posed for instance by Fremlin) whether it is relatively consistent with ZFC that existence holds whenever $Y$ is compact Hausdorff.

Our understanding of (ii) is much less complete:

- If $K$ is metrisable, the answer is “always” (which among other things establishes the countable Moore-Schmidt theorem as a corollary of the uncountable one).
- If $\Gamma$ is at most countable and $X$ is a complete measure space, then the answer is again “always”.

In view of the answers to (i), I would not be surprised if the full answer to (ii) was also sensitive to axioms of set theory. However, such set theoretic issues seem to be almost completely avoided if one sticks with the abstract formalism throughout; they only arise when trying to pass back and forth between the abstract and concrete categories.

I’ve just uploaded to the arXiv my paper “Almost all Collatz orbits attain almost bounded values“, submitted to the proceedings of the Forum of Mathematics, Pi. In this paper I returned to the topic of the notorious Collatz conjecture (also known as the $3x+1$ conjecture), which I previously discussed in this blog post. This conjecture can be phrased as follows. Let ${\bf N}+1 = \{1, 2, 3, \dots\}$ denote the positive integers (with ${\bf N} = \{0, 1, 2, \dots\}$ the natural numbers), and let $\mathrm{Col}: {\bf N}+1 \rightarrow {\bf N}+1$ be the map defined by setting $\mathrm{Col}(N)$ equal to $3N+1$ when $N$ is odd and $N/2$ when $N$ is even. Let $\mathrm{Col}_{\min}(N) := \inf_{n \in {\bf N}} \mathrm{Col}^n(N)$ be the minimal element of the Collatz orbit $N, \mathrm{Col}(N), \mathrm{Col}^2(N), \dots$. Then we have

Conjecture 1 (Collatz conjecture) One has $\mathrm{Col}_{\min}(N) = 1$ for all $N \in {\bf N}+1$.
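The quantities above are easy to compute directly; the following short script (my own illustration, not from the paper) implements the Collatz map and $\mathrm{Col}_{\min}$, and verifies the conjecture for small starting values:

```python
def col(n):
    """One step of the Collatz map on the positive integers."""
    return 3 * n + 1 if n % 2 else n // 2

def col_min(n):
    """Minimal element of the Collatz orbit n, Col(n), Col^2(n), ...

    Assumes (as the Collatz conjecture predicts) that the orbit eventually
    reaches 1; otherwise this loop would not terminate.
    """
    best = n
    while n != 1:
        n = col(n)
        best = min(best, n)
    return best

assert all(col_min(n) == 1 for n in range(1, 10_000))
print("Col_min(N) = 1 verified for all N < 10000")
```

Of course, no amount of such finite verification proves the conjecture; the point of the paper is to control $\mathrm{Col}_{\min}(N)$ for *almost all* $N$ rather than all $N$.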

Establishing the conjecture for all $N$ remains out of reach of current techniques (for instance, as discussed in the previous blog post, it is basically at least as difficult as Baker’s theorem, all known proofs of which are quite difficult). However, the situation is more promising if one is willing to settle for results which only hold for “most” $N$ in some sense. For instance, it is a result of Krasikov and Lagarias that

$\displaystyle \# \{ N \leq x: \mathrm{Col}_{\min}(N) = 1 \} \gg x^{0.84}$

for all sufficiently large $x$. In another direction, it was shown by Terras that for almost all $N$ (in the sense of natural density), one has $\mathrm{Col}_{\min}(N) < N$. This was then improved by Allouche to $\mathrm{Col}_{\min}(N) < N^\theta$ for any fixed $\theta > 0.869$, and extended later by Korec to cover all $\theta > \frac{\ln 3}{\ln 4} \approx 0.7924$. In this paper we obtain the following further improvement (at the cost of weakening natural density to logarithmic density):

Theorem 2 Let $f: {\bf N}+1 \rightarrow {\bf R}$ be any function with $\lim_{N \rightarrow \infty} f(N) = +\infty$. Then we have $\mathrm{Col}_{\min}(N) < f(N)$ for almost all $N \in {\bf N}+1$ (in the sense of logarithmic density).

Thus for instance one has $\mathrm{Col}_{\min}(N) < \log \log \log \log N$ for almost all $N$ (in the sense of logarithmic density).

The difficulty here is one usually only expects to establish “local-in-time” results that control the evolution $\mathrm{Col}^n(N)$ for times $n$ that only get as large as a small multiple of $\log N$; the aforementioned results of Terras, Allouche, and Korec, for instance, are of this type. However, to get all the way down to $f(N)$ one needs something more like an “(almost) global-in-time” result, where the evolution remains under control for so long that the orbit has nearly reached the bounded state $O(1)$.

However, as observed by Bourgain in the context of nonlinear Schrödinger equations, one can iterate “almost sure local wellposedness” type results (which give local control for almost all initial data from a given distribution) into “almost sure (almost) global wellposedness” type results if one is fortunate enough to draw one’s data from an *invariant measure* for the dynamics. To illustrate the idea, let us take Korec’s aforementioned result that if one picks at random an integer from a large interval, then in most cases, the orbit will eventually move into a much shorter interval. Similarly, if one picks an integer at random from that shorter interval, then in most cases its orbit will eventually move into a still shorter one. It is then tempting to concatenate the two statements and conclude that for most starting points in the original interval, the orbit will eventually reach the third, still shorter interval. Unfortunately, this argument does not quite work, because by the time the orbit from a randomly drawn starting point reaches the shorter interval, the distribution of the arrival value is unlikely to be close to uniformly distributed on that interval, and in particular could potentially concentrate almost entirely in the exceptional set of values that do not make it further. The point here is that the uniform measure is not transported by the Collatz dynamics to anything resembling the uniform measure on a shorter interval.

So, one now needs to locate a measure which has better invariance properties under the Collatz dynamics. It turns out to be technically convenient to work with a standard acceleration of the Collatz map known as the *Syracuse map* $\mathrm{Syr}: 2{\bf N}+1 \rightarrow 2{\bf N}+1$, defined on the odd numbers by setting $\mathrm{Syr}(N) := (3N+1) / 2^{\nu_2(3N+1)}$, where $2^{\nu_2(3N+1)}$ is the largest power of $2$ that divides $3N+1$. (The advantage of using the Syracuse map over the Collatz map is that it performs precisely one multiplication by $3$ at each iteration step, which makes the map better behaved when performing “$3$-adic” analysis.)
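The Syracuse map described above is a one-liner to implement (my own illustration; the particular sample values below are just the first few odd numbers):

```python
def syracuse(n):
    """Syracuse map on odd positive integers: divide 3n+1 by the largest
    power of 2 it contains, landing back on an odd number."""
    assert n % 2 == 1, "Syracuse map is only defined on odd integers"
    m = 3 * n + 1
    while m % 2 == 0:
        m //= 2
    return m

# A few values: Syr(1) = 1, Syr(3) = 5, Syr(5) = 1, Syr(7) = 11, Syr(9) = 7.
assert syracuse(1) == 1
assert syracuse(3) == 5
assert syracuse(5) == 1
assert syracuse(7) == 11
assert syracuse(9) == 7
```

One Syracuse step compresses the one odd step and the subsequent run of even steps of the Collatz map into a single iteration, which is exactly the "one multiplication by 3 per step" property mentioned above.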

When viewed $3$-adically, we soon see that iterations of the Syracuse map become somewhat irregular. Most obviously, $\mathrm{Syr}(N)$ is never divisible by $3$. A little less obviously, $\mathrm{Syr}(N)$ is twice as likely to equal $2$ mod $3$ as it is to equal $1$ mod $3$. This is because for a randomly chosen odd $N$, the number $\nu_2(3N+1)$ of times that $2$ divides $3N+1$ can be seen to have a geometric distribution of mean $2$ – it equals any given value $k \in {\bf N}+1$ with probability $2^{-k}$. Such a geometric random variable is twice as likely to be odd as to be even, which is what gives the above irregularity. There are similar irregularities modulo higher powers of $3$. For instance, one can compute that for large random odd $N$, the second iterate $\mathrm{Syr}^2(N)$ will take each of the residue classes $1, 2, 4, 5, 7, 8 \hbox{ mod } 9$ with explicitly computable (and unequal) probabilities. More generally, for any $n$, $\mathrm{Syr}^n(N) \hbox{ mod } 3^n$ will be distributed according to the law of a random variable on ${\bf Z}/3^n{\bf Z}$ that we call a *Syracuse random variable* $\mathrm{Syrac}({\bf Z}/3^n{\bf Z})$, and can be described explicitly as

$\displaystyle \mathrm{Syrac}({\bf Z}/3^n{\bf Z}) = 2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n} \hbox{ mod } 3^n, \ \ \ \ \ (1)$

where $a_1, a_2, \dots, a_n$ are iid copies of a geometric random variable of mean $2$.
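The geometric heuristic for $\nu_2(3N+1)$ and the resulting mod $3$ bias are easy to check empirically (an illustrative script of my own, not from the paper; the range of $N$ sampled is arbitrary):

```python
def nu2(m):
    """2-adic valuation: the number of times 2 divides m."""
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

# Exhaust all odd N below a cutoff and tabulate nu2(3N+1).
odds = range(1, 2 * 10**5, 2)
vals = [nu2(3 * n + 1) for n in odds]
total = len(vals)

# P(nu2(3N+1) = k) should be close to 2^{-k} for small k.
for k in (1, 2, 3):
    frac = sum(1 for v in vals if v == k) / total
    assert abs(frac - 2.0 ** (-k)) < 0.01

# Syr(N) = (3N+1)/2^a is congruent to 2^{-a} mod 3, which is 2 mod 3 when a
# is odd and 1 mod 3 when a is even; P(a odd) = 1/2 + 1/8 + ... = 2/3.
frac_two_mod_three = sum(1 for v in vals if v % 2 == 1) / total
assert abs(frac_two_mod_three - 2 / 3) < 0.01
print("valuation statistics match the geometric heuristic")
```

Since $3N+1 \equiv 1$ mod $3$ and $2^2 \equiv 1$ mod $3$, the residue of $\mathrm{Syr}(N)$ mod $3$ is determined by the parity of the valuation, which is where the $2:1$ bias towards $2$ mod $3$ comes from.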

In view of this, any proposed “invariant” (or approximately invariant) measure (or family of measures) for the Syracuse dynamics should take this $3$-adic irregularity of distribution into account. It turns out that one can use the Syracuse random variables $\mathrm{Syrac}({\bf Z}/3^n{\bf Z})$ to construct such a measure, but only if these random variables stabilise in the limit $n \rightarrow \infty$ in a certain total variation sense. More precisely, in the paper we establish an estimate of the shape

$\displaystyle \sum_{Y \in {\bf Z}/3^m{\bf Z}} \left| \mathbb{P}\left( \mathrm{Syrac}({\bf Z}/3^n{\bf Z}) = Y \hbox{ mod } 3^m \right) - 3^{-m} \right| \ll_A m^{-A} \ \ \ \ \ (2)$

for any $1 \leq m \leq n$ and any $A > 0$. This type of stabilisation is plausible from entropy heuristics – the tuple $(a_1, \dots, a_n)$ of geometric random variables that generates $\mathrm{Syrac}({\bf Z}/3^n{\bf Z})$ has Shannon entropy $2n \log 2$, which is significantly larger than the total entropy $n \log 3$ of the uniform distribution on ${\bf Z}/3^n{\bf Z}$, so we expect a lot of “mixing” and “collision” to occur when converting the tuple $(a_1, \dots, a_n)$ to $\mathrm{Syrac}({\bf Z}/3^n{\bf Z})$; these heuristics can be supported by numerics (which I was able to work out for moderately large $n$ before running into memory and CPU issues), but it turns out to be surprisingly delicate to make this precise.

A first hint of how to proceed comes from the elementary number theory observation (easily proven by induction) that the rational numbers
$$2^{-a_1} + 3^1 2^{-a_1-a_2} + \dots + 3^{n-1} 2^{-a_1-\dots-a_n}$$

are all distinct as $(a_1,\dots,a_n)$ vary over tuples of positive integers. Unfortunately, the process of reducing mod $3^n$ creates a lot of collisions (as must happen from the pigeonhole principle); however, by a simple "Lefschetz principle" type argument one can at least show that the reductions

are mostly distinct for "typical" (as drawn using the geometric distribution) as long as is a bit smaller than (basically because the rational number appearing in (3) then typically takes a form like with an integer between and ). This analysis of the component (3) of (1) is already enough to get quite a bit of spreading on (roughly speaking, when the argument is optimised, it shows that this random variable cannot concentrate in any subset of of density less than for some large absolute constant ). To get from this to a stabilisation property (2) we have to exploit the mixing effects of the remaining portion of (1) that does not come from (3). After some standard Fourier-analytic manipulations, matters then boil down to obtaining non-trivial decay of the characteristic function of , and more precisely to showing that

for any and any that is not divisible by .

If the random variable (1) were the sum of independent terms, one could express this characteristic function as something like a Riesz product, which would be straightforward to estimate well. Unfortunately, the terms in (1) are loosely coupled together, and so the characteristic function does not immediately factor into a Riesz product. However, if one groups adjacent terms in (1) together, one can rewrite it (assuming is even for sake of discussion) as

where . The point here is that after conditioning on the to be fixed, the random variables remain independent (though the distribution of each depends on the value that we conditioned to), and so the above expression is a *conditional* sum of independent random variables. This lets one express the characteristic function of (1) as an *averaged* Riesz product. One can use this to establish the bound (4) as long as one can show that the expression

is not close to an integer for a moderately large number (, to be precise) of indices . (Actually, for technical reasons we have to also restrict to those for which , but let us ignore this detail here.) To put it another way, if we let denote the set of pairs for which

we have to show that (with overwhelming probability) the random walk

(which we view as a two-dimensional renewal process) contains at least a few points lying outside of .

A little bit of elementary number theory and combinatorics allows one to describe the set as the union of "triangles" with a certain non-zero separation between them. If the triangles were all fairly small, then one expects the renewal process to visit at least one point outside of after passing through any given such triangle, and it then becomes relatively easy to show that the renewal process usually has the required number of points outside of . The most difficult case is when the renewal process passes through a particularly large triangle in . However, it turns out that large triangles enjoy particularly good separation properties, and in particular after passing through a large triangle one is likely to encounter nothing but small triangles for a while. After making these heuristics more precise, one is finally able to get enough points on the renewal process outside of that one can finish the proof of (4), and thus Theorem 2.

William Banks, Kevin Ford, and I have just uploaded to the arXiv our paper "Large prime gaps and probabilistic models". In this paper we introduce a random model to help understand the connection between two well known conjectures regarding the primes, namely the Cramér conjecture and the Hardy-Littlewood conjecture:

Conjecture 1 (Cramér conjecture) If $x$ is a large number, then the largest prime gap $G(x)$ in $[1,x]$ is of size $\asymp \log^2 x$. (Granville refines this conjecture to $G(x) \gtrsim \xi \log^2 x$, where $\xi := 2e^{-\gamma} = 1.1229\dots$. Here we use the asymptotic notation $X \asymp Y$ for $X \ll Y \ll X$, $X \sim Y$ for $X = (1+o(1))Y$, $X \gtrsim Y$ for $X \geq (1-o(1))Y$, and $X \lesssim Y$ for $X \leq (1+o(1))Y$.)

Conjecture 2 (Hardy-Littlewood conjecture) If $h_1,\dots,h_k$ are fixed distinct integers, then the number of numbers $n \in [1,x]$ with $n+h_1,\dots,n+h_k$ all prime is $({\mathfrak S}+o(1)) \frac{x}{\log^k x}$ as $x \rightarrow \infty$, where the singular series ${\mathfrak S} = {\mathfrak S}(h_1,\dots,h_k)$ is defined by the formula
$${\mathfrak S} := \prod_p \left(1 - \frac{|\{h_1 \hbox{ mod } p, \dots, h_k \hbox{ mod } p\}|}{p}\right)\left(1-\frac{1}{p}\right)^{-k}.$$

(One can view these conjectures as modern versions of two of the classical Landau problems, namely Legendre’s conjecture and the twin prime conjecture respectively.)
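As a hedged numerical aside (not from the paper), one can compare Conjecture 1 with data: sieving the primes up to $10^6$ and measuring the largest gap against $\log^2 x$ takes a few lines of Python.

```python
# Compare the largest prime gap up to x = 10^6 with log^2 x (Cramér's prediction).
import math

x = 10**6
sieve = bytearray([1]) * (x + 1)       # sieve of Eratosthenes
sieve[0:2] = b"\x00\x00"
for i in range(2, int(x**0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
primes = [i for i in range(2, x + 1) if sieve[i]]

G = max(q - p for p, q in zip(primes, primes[1:]))
print(G, G / math.log(x) ** 2)   # the largest gap below 10^6 is 114
```

The ratio $G(x)/\log^2 x$ is well below $1$ at this height, which is consistent with the remark later in this post that existing numerics do not yet probe the sparse regions where the conjectured extremal gaps should occur.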

A well known connection between the Hardy-Littlewood conjecture and prime gaps was made by Gallagher. Among other things, Gallagher showed that if the Hardy-Littlewood conjecture were true, then the prime gaps $p_{n+1}-p_n$ with $p_n \leq x$ would be asymptotically distributed according to an exponential distribution of mean $\log x$, in the sense that
$$|\{ n: p_n \leq x, p_{n+1}-p_n \geq \lambda \log x \}| = (e^{-\lambda}+o(1)) \frac{x}{\log x} \ \ \ \ \ (1)$$

as $x \rightarrow \infty$ for any fixed $\lambda \geq 0$. Roughly speaking, the way this is established is by using the Hardy-Littlewood conjecture to control the mean values of $\binom{N}{k}$ for fixed $k$, where $N$ is the number of primes in an interval of length $\lambda \log x$ following a prime $p_n$, as $p_n$ ranges over the primes in $[1,x]$. The relevance of these quantities arises from the Bonferroni inequalities (or "Brun pure sieve"), which can be formulated as the assertion that
$${\bf 1}_{N=0} \leq \sum_{j=0}^{J} (-1)^j \binom{N}{j}$$

when $J$ is even and
$${\bf 1}_{N=0} \geq \sum_{j=0}^{J} (-1)^j \binom{N}{j}$$

when $J$ is odd, for any natural number $N$; setting $N$ to be the number of primes in a given interval and taking means, one then gets upper and lower bounds for the probability that the interval is free of primes. The most difficult step is to control the mean values of the singular series ${\mathfrak S}(h_1,\dots,h_k)$ as $(h_1,\dots,h_k)$ ranges over $k$-tuples in a fixed interval such as $[0, \lambda \log x]$.
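The alternating nature of the Bonferroni inequalities is easy to see numerically. The following Python sketch (an illustration with random sets, not the sieve used in the paper) truncates the inclusion-exclusion expansion of the count of points avoiding every one of a family of "events", and checks that even truncations overshoot the true count while odd truncations undershoot it.

```python
# Bonferroni inequalities: truncating inclusion-exclusion after an even number
# of correction terms gives an upper bound, after an odd number a lower bound.
import random
from itertools import combinations

random.seed(0)
space = range(1000)
events = [set(random.sample(space, 200)) for _ in range(6)]

true_none = sum(1 for x in space if not any(x in e for e in events))

def truncated(k):
    """Inclusion-exclusion count of points avoiding every event, cut at order k."""
    s = len(space)
    for j in range(1, k + 1):
        s += (-1) ** j * sum(
            len(set.intersection(*c)) for c in combinations(events, j)
        )
    return s

for k in range(7):
    side = "upper" if k % 2 == 0 else "lower"
    print(k, truncated(k), side, "bound on", true_none)
```

At full order the truncation recovers the exact count, just as the untruncated sieve identity does.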

Heuristically, if one extrapolates the asymptotic (1) to the regime , one is then led to Cramér’s conjecture, since the right-hand side of (1) falls below when is significantly larger than . However, this is not a rigorous derivation of Cramér’s conjecture from the Hardy-Littlewood conjecture, since Gallagher’s computations only establish (1) for *fixed* choices of , which is only enough to establish the far weaker bound , which was already known (see this previous paper for a discussion of the best known unconditional lower bounds on ). An inspection of the argument shows that if one wished to extend (1) to parameter choices that were allowed to grow with , then one would need as input a stronger version of the Hardy-Littlewood conjecture in which the length of the tuple , as well as the magnitudes of the shifts , were also allowed to grow with . Our initial objective in this project was then to quantify exactly what strengthening of the Hardy-Littlewood conjecture would be needed to rigorously imply Cramér’s conjecture. The precise results are technical, but roughly we show results of the following form:

Theorem 3 (Large gaps from Hardy-Littlewood, rough statement)

- If the Hardy-Littlewood conjecture is uniformly true for -tuples of length , and with shifts of size , with a power savings in the error term, then .
- If the Hardy-Littlewood conjecture is “true on average” for -tuples of length and shifts of size for all , with a power savings in the error term, then .

In particular, we can recover Cramér’s conjecture given a sufficiently powerful version of the Hardy-Littlewood conjecture “on the average”.

Our proof of this theorem proceeds more or less along the same lines as Gallagher’s calculation, but now with allowed to grow slowly with . Again, the main difficulty is to accurately estimate average values of the singular series . Here we found it useful to switch to a probabilistic interpretation of this series. For technical reasons it is convenient to work with a truncated, unnormalised version

of the singular series, for a suitable cutoff ; it turns out that when studying prime tuples of size , the most convenient cutoff is the “Pólya magic cutoff“, defined as the largest prime for which

(this is well defined for ); by Mertens’ theorem, we have . One can interpret probabilistically as

where is the randomly sifted set of integers formed by removing one residue class uniformly at random for each prime . The Hardy-Littlewood conjecture can be viewed as an assertion that the primes behave in some approximate statistical sense like the random sifted set , and one can prove the above theorem by using the Bonferroni inequalities both for the primes and for the random sifted set, and comparing the two (using an even for the sifted set and an odd for the primes in order to be able to combine the two together to get a useful bound).

The proof of Theorem 3 ended up not using any properties of the set of primes other than that this set obeyed some form of the Hardy-Littlewood conjectures; the theorem remains true (with suitable notational changes) if this set were replaced by any other set. In order to convince ourselves that our theorem was not vacuous due to our version of the Hardy-Littlewood conjecture being too strong to be true, we then started exploring the question of coming up with random models of which obeyed various versions of the Hardy-Littlewood and Cramér conjectures.

This line of inquiry was started by Cramér, who introduced what we now call the *Cramér random model* of the primes, in which each natural number $n \geq 3$ is selected for membership in the random model with an independent probability of $1/\log n$. This model matches the primes well in some respects; for instance, it almost surely obeys the “Riemann hypothesis”

and Cramér also showed that the largest gap was almost surely . On the other hand, it does not obey the Hardy-Littlewood conjecture; more precisely, it obeys a simplified variant of that conjecture in which the singular series is absent.
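For illustration, the Cramér random model is a one-liner to simulate (a hedged sketch; the seed and cutoff below are arbitrary choices): select each $n$ with probability $1/\log n$, then measure the largest gap.

```python
# Simulate the Cramér random model: n joins the set independently with
# probability 1/log n, and the largest gap should be of size about log^2 x.
import math, random

random.seed(1)
x = 200_000
members = [n for n in range(3, x) if random.random() < 1 / math.log(n)]

largest_gap = max(b - a for a, b in zip(members, members[1:]))
print(largest_gap, math.log(x) ** 2)   # largest gap vs. log^2 x
```

In repeated runs the largest gap hovers around $\log^2 x$, in line with Cramér's computation for his model.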

Granville proposed a refinement to Cramér’s random model in which one first sieves out (in each dyadic interval ) all residue classes $0 \hbox{ mod } p$ for $p$ up to a certain threshold, and then places each surviving natural number in the model with an independent probability . One can verify that this model obeys the Hardy-Littlewood conjectures, and Granville showed that the largest gap in this model was almost surely , leading to his conjecture that this bound also was true for the primes. (Interestingly, this conjecture is not yet borne out by numerics; calculations of prime gaps up to , for instance, have shown that never exceeds in this range. This is not necessarily a conflict, however; Granville’s analysis relies on inspecting gaps in an extremely sparse region of natural numbers that are more devoid of primes than average, and this region is not well explored by existing numerics. See this previous blog post for more discussion of Granville’s argument.)

However, Granville’s model does not produce a power savings in the error term of the Hardy-Littlewood conjectures, mostly due to the need to truncate the singular series at the logarithmic cutoff . After some experimentation, we were able to produce a tractable random model for the primes which obeyed the Hardy-Littlewood conjectures with power savings, and which reproduced Granville’s gap prediction of (we also get an upper bound of for both models, though we expect the lower bound to be closer to the truth); to us, this strengthens the case for Granville’s version of Cramér’s conjecture. The model can be described as follows. We select one residue class uniformly at random for each prime , and as before we let be the sifted set of integers formed by deleting the residue classes with . We then set

with Pólya’s magic cutoff (this is the cutoff that gives a density consistent with the prime number theorem or the Riemann hypothesis). As stated above, we are able to show that almost surely one has

and that the Hardy-Littlewood conjectures hold with power savings for up to for any fixed and for shifts of size . This is unfortunately a tiny bit weaker than what Theorem 3 requires (which more or less corresponds to the endpoint ), although there is a variant of Theorem 3 that can use this input to produce a lower bound on gaps in the model (but it is weaker than the one in (3)). In fact we prove a more precise almost sure asymptotic formula for that involves the optimal bounds for the *linear sieve* (or *interval sieve*), in which one deletes one residue class modulo from an interval for all primes up to a given threshold. The lower bound in (3) relates to the case of deleting the residue classes from ; the upper bound comes from the delicate analysis of the linear sieve by Iwaniec. Improving on either of the two bounds looks to be quite a difficult problem.
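The basic sieving step in these models is also easy to simulate. The following Python sketch (illustrative only; the threshold $T = 100$ is an arbitrary choice, not the Pólya magic cutoff) deletes one uniformly random residue class mod $p$ for each prime $p \leq T$ from an initial segment of the integers, and compares the surviving density with the Mertens prediction $\prod_{p \leq T}(1 - 1/p) \approx e^{-\gamma}/\log T$.

```python
# Random sieve: delete one random residue class mod p for each prime p <= T,
# and compare the surviving density with e^{-gamma}/log T (Mertens' theorem).
import math, random

def primes_upto(t):
    s = bytearray([1]) * (t + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(t**0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(s[i * i :: i]))
    return [i for i in range(2, t + 1) if s[i]]

random.seed(2)
T, N = 100, 200_000
survivors = set(range(N))
for p in primes_upto(T):
    a = random.randrange(p)                 # one random residue class mod p
    survivors -= set(range(a, N, p))

density = len(survivors) / N
mertens = math.exp(-0.5772156649015329) / math.log(T)   # e^{-gamma}/log T
print(density, mertens)
```

Since each $n$ survives with probability exactly $\prod_{p \leq T}(1-1/p)$, the empirical density lands close to the Mertens asymptotic even at this small threshold.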

The probabilistic analysis of is somewhat more complicated than that of or , as there is now non-trivial coupling between the events as varies, although moment methods such as the second moment method are still viable and allow one to verify the Hardy-Littlewood conjectures by a lengthy but fairly straightforward calculation. To analyse large gaps, one has to understand the statistical behaviour of a random linear sieve in which one starts with an interval and randomly deletes a residue class for each prime up to a given threshold. For very small this is handled by the deterministic theory of the linear sieve as discussed above. For medium sized , it turns out that there is good concentration of measure thanks to tools such as Bennett’s inequality or Azuma’s inequality, as one can view the sieving process as a martingale or (approximately) as a sum of independent random variables. For larger primes , in which only a small number of survivors are expected to be sieved out by each residue class, a direct combinatorial calculation of all possible outcomes (involving the random graph that connects interval elements to primes if falls in the random residue class ) turns out to give the best results.

In a recent post I discussed how the Riemann zeta function can be locally approximated by a polynomial, in the sense that for randomly chosen one has an approximation

where grows slowly with , and is a polynomial of degree . Assuming the Riemann hypothesis (as we will throughout this post), the zeroes of should all lie on the unit circle, and one should then be able to write as a scalar multiple of the characteristic polynomial of (the inverse of) a unitary matrix , which we normalise as

Here is some quantity depending on . We view as a random element of ; in the limit , the GUE hypothesis is equivalent to becoming equidistributed with respect to Haar measure on (also known as the Circular Unitary Ensemble, CUE; it is to the unit circle what the Gaussian Unitary Ensemble (GUE) is to the real line). One can also view as analogous to the “geometric Frobenius” operator in the function field setting, though unfortunately it is difficult at present to make this analogy any more precise (due, among other things, to the lack of a sufficiently satisfactory theory of the “field of one element“).

Taking logarithmic derivatives of (2), we have

and hence on taking logarithmic derivatives of (1) in the variable we (heuristically) have

Morally speaking, we have

so on comparing coefficients we expect to interpret the moments of as a finite Dirichlet series:

To understand the distribution of in the unitary group , it suffices to understand the distribution of the moments

where denotes averaging over , and . The GUE hypothesis asserts that in the limit , these moments converge to their CUE counterparts

where is now drawn uniformly in with respect to the CUE ensemble, and denotes expectation with respect to that measure.

The moment (6) vanishes unless one has the homogeneity condition

This follows from the fact that for any phase , has the same distribution as , where we use the number theory notation .

In the case when the degree is low, we can use representation theory to establish the following simple formula for the moment (6), as evaluated by Diaconis and Shahshahani:

Proposition 1 (Low moments in CUE model) If
$$\sum_{j=1}^\infty j a_j, \sum_{j=1}^\infty j b_j \leq N \ \ \ \ \ (8)$$
then the moment (6) vanishes unless $a_j = b_j$ for all $j$, in which case it is equal to
$$\prod_{j=1}^\infty j^{a_j} a_j!.$$

Another way of viewing this proposition is that for distributed according to CUE, the random variables are distributed like independent complex gaussian random variables of mean zero and variance $j$, as long as one only considers moments obeying (8). This identity definitely breaks down for larger values of , so one only obtains central limit theorems in certain limiting regimes, notably when one only considers a fixed number of ‘s and lets go to infinity. (The paper of Diaconis and Shahshahani writes in place of , but I believe this to be a typo.)
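One can check the first couple of these CUE moment identities by direct simulation. The sketch below (illustrative, stdlib-only) samples the two eigenvalue phases of a CUE matrix with $N = 2$ from the Weyl density, proportional to $1 - \cos(\theta_1 - \theta_2)$, by rejection sampling, and verifies that $\mathop{\bf E} |\mathrm{tr}(U)|^2 \approx 1$ and $\mathop{\bf E} |\mathrm{tr}(U^2)|^2 \approx 2$, i.e. $\min(j, N)$ for $j = 1, 2$.

```python
# Monte Carlo check of E|tr U^j|^2 = min(j, N) for CUE with N = 2, using the
# Weyl eigenphase density, which is proportional to 1 - cos(t1 - t2).
import cmath, math, random

random.seed(3)

def cue2_phases():
    """Rejection-sample (t1, t2) from the N = 2 CUE eigenphase density."""
    while True:
        t1 = random.uniform(0, 2 * math.pi)
        t2 = random.uniform(0, 2 * math.pi)
        if random.uniform(0, 2) < 1 - math.cos(t1 - t2):   # density bounded by 2
            return t1, t2

M = 20_000
m1 = m2 = 0.0
for _ in range(M):
    t1, t2 = cue2_phases()
    m1 += abs(cmath.exp(1j * t1) + cmath.exp(1j * t2)) ** 2    # |tr U|^2
    m2 += abs(cmath.exp(2j * t1) + cmath.exp(2j * t2)) ** 2    # |tr U^2|^2
print(m1 / M, m2 / M)   # should be close to min(1,2) = 1 and min(2,2) = 2
```

The repulsion factor $1 - \cos(\theta_1 - \theta_2)$ is exactly what cancels the naive value $\mathop{\bf E}|\mathrm{tr}(U)|^2 = 2$ down to $1$.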

*Proof:* Let $n := \sum_j j a_j$ be the left-hand side of (8). We may assume that (7) holds since we are done otherwise, hence $\sum_j j b_j = n$ as well.

Our starting point is Schur-Weyl duality. Namely, we consider the -dimensional complex vector space

This space has an action of the product group : the symmetric group acts by permutation on the tensor factors, while the general linear group acts diagonally on the factors, and the two actions commute with each other. Schur-Weyl duality gives a decomposition

where ranges over Young tableaux of size with at most rows, is the -irreducible unitary representation corresponding to (which can be constructed for instance using Specht modules), and is the -irreducible polynomial representation corresponding with highest weight .

Let be a permutation consisting of cycles of length (this is uniquely determined up to conjugation), and let . The pair then acts on , with the action on basis elements given by

The trace of this action can then be computed as

where is the matrix coefficient of . Breaking up into cycles and summing, this is just

But we can also compute this trace using the Schur-Weyl decomposition (10), yielding the identity

where is the character on associated to , and is the character on associated to . As is well known, is just the Schur polynomial of weight applied to the (algebraic, generalised) eigenvalues of . We can specialise to unitary matrices to conclude that

and similarly

where consists of cycles of length for each . On the other hand, the characters are an orthonormal system on with the CUE measure. Thus we can write the expectation (6) as

Now recall that ranges over all the Young tableaux of size with at most rows. But by (8) we have , and so the condition of having rows is redundant. Hence now ranges over *all* Young tableaux of size , which as is well known enumerates all the irreducible representations of . One can then use the standard orthogonality properties of characters to show that the sum (12) vanishes if , are not conjugate, and is equal to divided by the size of the conjugacy class of (or equivalently, to the size of the centraliser of ) otherwise. But the latter expression is easily computed to be , giving the claim.

Example 2 We illustrate the identity (11) when , . The Schur polynomials are given as
where are the (generalised) eigenvalues of , and the formula (11) in this case becomes

The functions are orthonormal on , so the three functions are also, and their norms are , , and respectively, reflecting the size in of the centralisers of the permutations , , and respectively. If is instead set to say , then the terms now disappear (the Young tableau here has too many rows), and the three quantities here now have some non-trivial covariance.

Example 3 Consider the moment . For , the above proposition shows us that this moment is equal to . What happens for ? The formula (12) computes this moment as
where is a cycle of length in , and ranges over all Young tableaux with size and at most rows. The Murnaghan-Nakayama rule tells us that vanishes unless is a hook (all but one of the non-zero rows consisting of just a single box; this also can be interpreted as an exterior power representation on the space of vectors in whose coordinates sum to zero), in which case it is equal to (depending on the parity of the number of non-zero rows). As such we see that this moment is equal to . Thus in general we have

Now we discuss what is known for the analogous moments (5). Here we shall be rather non-rigorous, in particular ignoring an annoying “Archimedean” issue that the product of the ranges and is not quite the range but instead leaks into the adjacent range . This issue can be addressed by working in a “weak” sense in which parameters such as are averaged over fairly long scales, or by passing to a function field analogue of these questions, but we shall simply ignore the issue completely and work at a heuristic level only. For similar reasons we will ignore some technical issues arising from the sharp cutoff of to the range (it would be slightly better technically to use a smooth cutoff).

One can morally expand out (5) using (4) as

where , , and the integers are in the ranges

for and , and

for and . Morally, the expectation here is negligible unless

in which case the expectation oscillates with magnitude one. In particular, if (7) fails (with some room to spare) then the moment (5) should be negligible, which is consistent with the analogous behaviour for the moments (6). Now suppose that (8) holds (with some room to spare). Then is significantly less than , so the multiplicative error in (15) becomes an additive error of . On the other hand, because of the fundamental *integrality gap* – that the integers are always separated from each other by a distance of at least – this forces the integers , to in fact be equal:

The von Mangoldt factors effectively restrict to be prime (the effect of prime powers is negligible). By the fundamental theorem of arithmetic, the constraint (16) then forces , and to be a permutation of , which then forces for all . For a given , the number of possible is then , and the expectation in (14) is equal to . Thus this expectation is morally

and using Mertens’ theorem this soon simplifies asymptotically to the same quantity in Proposition 1. Thus we see that (morally at least) the moments (5) associated to the zeta function asymptotically match the moments (6) coming from the CUE model in the low degree case (8), thus lending support to the GUE hypothesis. (These observations are basically due to Rudnick and Sarnak, with the degree case of pair correlations due to Montgomery, and the degree case due to Hejhal.)

With some rare exceptions (such as those estimates coming from “Kloostermania”), the moment estimates of Rudnick and Sarnak basically represent the state of the art for what is known for the moments (5). For instance, Montgomery’s pair correlation conjecture, in our language, is basically the analogue of (13) for , thus

for all . Montgomery showed this for (essentially) the range (as remarked above, this is a special case of the Rudnick-Sarnak result), but no further cases of this conjecture are known.

These estimates can be used to give some non-trivial information on the largest and smallest spacings between zeroes of the zeta function, which in our notation corresponds to spacing between eigenvalues of . One such method still in use today is due to Montgomery and Odlyzko, and was greatly simplified by Conrey, Ghosh, and Gonek. The basic idea, translated to our random matrix notation, is as follows. Suppose is some random polynomial depending on of degree at most . Let denote the eigenvalues of , and let be a parameter. Observe from the pigeonhole principle that if the quantity

exceeds that of (19), then the arcs cannot all be disjoint, and hence there exists a pair of eigenvalues making an angle of less than ( times the mean angle separation). Similarly, if the quantity (18) falls below that of (19), then these arcs cannot cover the unit circle, and hence there exists a pair of eigenvalues making an angle of greater than times the mean angle separation. By judiciously choosing the coefficients of as functions of the moments , one can ensure that both quantities (18), (19) can be computed by the Rudnick-Sarnak estimates (or estimates of equivalent strength); indeed, from the residue theorem one can write (18) as

for sufficiently small , and this can be computed (in principle, at least) using (3) if the coefficients of are in an appropriate form. Using this sort of technology (translated back to the Riemann zeta function setting), one can show that gaps between consecutive zeroes of zeta are less than times the mean spacing and greater than times the mean spacing infinitely often for certain ; the current records are (due to Goldston and Turnage-Butterbaugh) and (due to Bui and Milinovich, who input some additional estimates beyond the Rudnick-Sarnak set, namely the twisted fourth moment estimates of Bettin, Bui, Li, and Radziwill, and using a technique based on Hall’s method rather than the Montgomery-Odlyzko method).

It would be of great interest if one could push the upper bound for the smallest gap below . The reason for this is that this would then exclude the Alternative Hypothesis that the spacings between zeroes are asymptotically always (or almost always) a non-zero half-integer multiple of the mean spacing, or in our language that the gaps between the phases of the eigenvalues of are asymptotically always non-zero integer multiples of . The significance of this hypothesis is that it is implied by the existence of a Siegel zero (of conductor a small power of ); see this paper of Conrey and Iwaniec. (In our language, what is going on is that if there is a Siegel zero in which is very close to zero, then behaves like the Kronecker delta, and hence (by the Riemann-Siegel formula) the combined -function will have a polynomial approximation which in our language looks like a scalar multiple of , where and is a phase. The zeroes of this approximation lie on a coset of the roots of unity; the polynomial is a factor of this approximation and hence will also lie in this coset, implying in particular that all eigenvalue spacings are multiples of . Taking then gives the claim.)

Unfortunately, the known methods do not seem to break this barrier without some significant new input; already the original paper of Montgomery and Odlyzko observed this limitation for their particular technique (which in fact falls very slightly short, as observed in unpublished work of Goldston and of Milinovich). In this post I would like to record another way to see this, by providing an “alternative” probability distribution to the CUE distribution (which one might dub the *Alternative Circular Unitary Ensemble* (ACUE)) which is indistinguishable in low moments in the sense that the expectation for this model also obeys Proposition 1, but for which the phase spacings are always a multiple of . This shows that if one is to rule out the Alternative Hypothesis (and thus in particular rule out Siegel zeroes), one needs to input some additional moment information beyond Proposition 1. It would be interesting to see if any of the other known moment estimates that go beyond this proposition are consistent with this alternative distribution. (UPDATE: it looks like they are, see Remark 7 below.)

To describe this alternative distribution, let us first recall the Weyl description of the CUE measure on the unitary group in terms of the distribution of the phases of the eigenvalues, randomly permuted in any order. This distribution is given by the probability measure

where is the Vandermonde determinant; see for instance this previous blog post for the derivation of a very similar formula for the GUE distribution, which can be adapted to CUE without much difficulty. To see that this is a probability measure, first observe the Vandermonde determinant identity

where , denotes the dot product, and is the “long word”, which implies that (20) is a trigonometric series with constant term ; it is also clearly non-negative, so it is a probability measure. One can thus generate a random CUE matrix by first drawing using the probability measure (20), and then generating to be a random unitary matrix with eigenvalues .

For the alternative distribution, we first draw on the discrete torus (thus each is a root of unity) with probability density function

shift by a phase drawn uniformly at random, and then select to be a random unitary matrix with eigenvalues . Let us first verify that (21) is a probability density function. Clearly it is non-negative. It is a linear combination of exponentials of the form for . The diagonal contribution gives the constant function , which has total mass one. All of the other exponentials have a frequency that is not a multiple of , and hence will have mean zero on . The claim follows.
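For small $N$ one can tabulate the ACUE eigenphase distribution exactly. The Python sketch below (a sanity check, taking the density proportional to the squared Vandermonde on the $2N$ grid points, and ignoring the uniform phase shift, which does not affect spacings) does this for $N = 2$ and confirms that equal phases never occur, so every spacing is a non-zero multiple of $\pi/N$.

```python
# Tabulate the N = 2 ACUE eigenphase distribution on the 2N = 4 grid points
# {0, pi/2, pi, 3pi/2}, with probability proportional to |e^{i t1} - e^{i t2}|^2.
import cmath, math
from itertools import product

N = 2
grid = [2 * math.pi * k / (2 * N) for k in range(2 * N)]
weights = {}
for t1, t2 in product(grid, repeat=2):
    weights[(t1, t2)] = abs(cmath.exp(1j * t1) - cmath.exp(1j * t2)) ** 2

total = sum(weights.values())
probs = {k: w / total for k, w in weights.items()}

spacing_dist = {}
for (t1, t2), p in probs.items():
    if p == 0:
        continue                             # equal phases carry zero mass
    d = round(abs(t1 - t2) / (math.pi / N))  # spacing in units of pi/N
    d = min(d, 2 * N - d)                    # wrap around the circle
    spacing_dist[d] = spacing_dist.get(d, 0.0) + p
print(spacing_dist)   # spacings pi/2 and pi each occur with probability 1/2
```

The squared Vandermonde weight kills all configurations with a repeated phase, which is exactly the repulsion that forces every spacing to be a non-zero multiple of $\pi/N$.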

By construction, it is clear that the matrix drawn from this alternative distribution will have all eigenvalue phase spacings be a non-zero multiple of . Now we verify that the alternative distribution also obeys Proposition 1. The alternative distribution remains invariant under rotation by phases, so the claim is again clear when (8) fails. Inspecting the proof of that proposition, we see that it suffices to show that the Schur polynomials with of size at most and of equal size remain orthonormal with respect to the alternative measure. That is to say,

when have size equal to each other and at most . In this case the phase in the definition of is irrelevant. In terms of eigenvalue measures, we are then reduced to showing that

By Fourier decomposition, it then suffices to show that the trigonometric polynomial does not contain any components of the form for some non-zero lattice vector . But we have already observed that is a linear combination of plane waves of the form for . Also, as is well known, is a linear combination of plane waves where is majorised by , and similarly is a linear combination of plane waves where is majorised by . So the product is a linear combination of plane waves of the form . But every coefficient of the vector lies between and , and so cannot be of the form for any non-zero lattice vector , giving the claim.

Example 4 If , then the distribution (21) assigns a probability of to any pair that is a permuted rotation of , and a probability of to any pair that is a permuted rotation of . Thus, a matrix drawn from the alternative distribution will be conjugate to a phase rotation of with probability , and to with probability . A similar computation when gives conjugate to a phase rotation of with probability , to a phase rotation of or its adjoint with probability of each, and a phase rotation of with probability .

Remark 5 For large it does not seem that this specific alternative distribution is the only distribution consistent with Proposition 1 and which has all phase spacings a non-zero multiple of ; in particular, it may not be the only distribution consistent with a Siegel zero. Still, it is a very explicit distribution that might serve as a test case for the limitations of various arguments for controlling quantities such as the largest or smallest spacing between zeroes of zeta. The ACUE is in some sense the distribution that maximally resembles CUE (in the sense that it has the greatest number of Fourier coefficients agreeing) while still also being consistent with the Alternative Hypothesis, and so should be the most difficult enemy to eliminate if one wishes to disprove that hypothesis.

In some cases, even just a tiny improvement in known results would be able to exclude the alternative hypothesis. For instance, if the alternative hypothesis held, then is periodic in with period , so from Proposition 1 for the alternative distribution one has

which differs from (13) for any . (This fact was implicitly observed recently by Baluyot, in the original context of the zeta function.) Thus a verification of the pair correlation conjecture (17) for even a single with would rule out the alternative hypothesis. Unfortunately, such a verification appears to be of comparable difficulty to (an averaged version of) the Hardy-Littlewood conjecture, with power saving error term. (This is consistent with the fact that Siegel zeroes can cause distortions in the Hardy-Littlewood conjecture, as (implicitly) discussed in this previous blog post.)

Remark 6 One can view the CUE as normalised Lebesgue measure on (viewed as a smooth submanifold of ). One can similarly view ACUE as normalised Lebesgue measure on the (disconnected) smooth submanifold of consisting of those unitary matrices whose phase spacings are non-zero integer multiples of ; informally, ACUE is CUE restricted to this lower dimensional submanifold. As is well known, the phases of CUE eigenvalues form a determinantal point process with kernel (or one can equivalently take ); in a similar spirit, the phases of ACUE eigenvalues, once they are rotated to be roots of unity, become a discrete determinantal point process on those roots of unity with exactly the same kernel (except for a normalising factor of ). In particular, the -point correlation functions of ACUE (after this rotation) are precisely the restriction of the -point correlation functions of CUE after normalisation, that is to say they are proportional to .

Remark 7 One family of estimates that goes beyond the Rudnick-Sarnak family is that of twisted moment estimates for the zeta function, such as ones that give asymptotics for for some small even exponent (almost always or ) and some short Dirichlet polynomial ; see for instance this paper of Bettin, Bui, Li, and Radziwill for some examples of such estimates. The analogous unitary matrix average would be something like

where is now some random medium degree polynomial that depends on the unitary matrix associated to (and in applications will typically also contain some negative power of to cancel the corresponding powers of in ). Unfortunately, such averages are generally unable to distinguish the CUE from the ACUE. For instance, if all the coefficients of involve products of traces of total order less than , then in terms of the eigenvalue phases , is a linear combination of plane waves where the frequencies have coefficients of magnitude less than . On the other hand, as each coefficient of is an elementary symmetric function of the eigenvalues, is a linear combination of plane waves where the frequencies have coefficients of magnitude at most . Thus is a linear combination of plane waves where the frequencies have coefficients of magnitude less than , and thus is orthogonal to the difference between the CUE and ACUE measures on the phase torus by the previous arguments. In other words, has the same expectation with respect to ACUE as it does with respect to CUE. Thus one can only start distinguishing CUE from ACUE if the mollifier has degree close to or exceeding , which corresponds to Dirichlet polynomials of length close to or exceeding , which is far beyond current technology for such moment estimates.

Remark 8 The GUE hypothesis for the zeta function asserts that the average for any and any test function , where is the Dyson sine kernel and are the ordinates of zeroes of the zeta function. This corresponds to the CUE distribution for . The ACUE distribution then corresponds to an “alternative Gaussian unitary ensemble (AGUE)” hypothesis, in which the average (22) is instead predicted to equal a Riemann sum version of the integral (23):

This is a stronger version of the alternative hypothesis that the spacing between adjacent zeroes is almost always approximately a half-integer multiple of the mean spacing. I do not know of any known moment estimates for Dirichlet series that are able to eliminate this AGUE hypothesis (even assuming GRH). (UPDATE: These facts have also been independently observed in forthcoming work of Lagarias and Rodgers.)

**Important note:** As this is not a course in probability, we will try to avoid developing the general theory of stochastic calculus (which includes such concepts as filtrations, martingales, and Ito calculus). This will unfortunately limit what we can actually prove rigorously, and so at some places the arguments will be somewhat informal in nature. A rigorous treatment of many of the topics here can be found for instance in Lawler’s Conformally Invariant Processes in the Plane, from which much of the material here is drawn.

In these notes, random variables will be denoted in boldface.

Definition 1 A real random variable is said to be normally distributed with mean and variance if one has for all test functions . Similarly, a complex random variable is said to be normally distributed with mean and variance if one has

for all test functions , where is the area element on .
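A quick numerical sanity check of these conventions (my own illustration; the usual convention, which I assume here, is that the variance of a complex normal is the expected squared modulus of the deviation, split evenly between real and imaginary parts):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, var = 2.0, 3.0
N = 200_000
# Real normal with mean mu and variance var.
x = rng.normal(mu, np.sqrt(var), N)
# Complex normal with mean mu and variance var, in the convention that the
# variance is E|z - mu|^2: independent real and imaginary parts,
# each of variance var / 2.
z = mu + np.sqrt(var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
real_mean, real_var = x.mean(), x.var()
cplx_var = np.mean(np.abs(z - mu) ** 2)
```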

A real Brownian motion with base point is a random, almost surely continuous function (using the locally uniform topology on continuous functions) with the property that (almost surely) , and for any sequence of times , the increments for are independent real random variables that are normally distributed with mean zero and variance . Similarly, a complex Brownian motion with base point is a random, almost surely continuous function with the property that and for any sequence of times , the increments for are independent complex random variables that are normally distributed with mean zero and variance .

Remark 2 Thanks to the central limit theorem, the hypothesis that the increments be normally distributed can be dropped from the definition of a Brownian motion, so long as one retains the independence and the normalisation of the mean and variance (technically one also needs some uniform integrability on the increments beyond the second moment, but we will not detail this here). A similar statement is also true for the complex Brownian motion (where now we need to normalise the variances and covariances of the real and imaginary parts of the increments).

Real and complex Brownian motions exist from any base point or ; see e.g. this previous blog post for a construction. We have the following simple invariances:
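The standard discrete construction referenced above (cumulative sums of independent Gaussian increments) can be sketched numerically as follows; this is my own illustration, with arbitrary discretisation parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
paths, steps, T = 20_000, 1_000, 1.0
dt = T / steps
# Brownian paths from base point 0: cumulative sums of independent
# N(0, dt) increments over a fine time grid.
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)
# B(1) should be N(0, 1), and increments over disjoint time intervals
# should be uncorrelated (in fact independent).
var_at_T = B[:, -1].var()
incr1 = B[:, 499] - B[:, 249]   # increment over (0.25, 0.5]
incr2 = B[:, 999] - B[:, 749]   # increment over (0.75, 1.0]
cross = np.mean(incr1 * incr2)
```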

Exercise 3

- (i) (Translation invariance) If is a real Brownian motion with base point , and , show that is a real Brownian motion with base point . Similarly, if is a complex Brownian motion with base point , and , show that is a complex Brownian motion with base point .
- (ii) (Dilation invariance) If is a real Brownian motion with base point , and is non-zero, show that is also a real Brownian motion with base point . Similarly, if is a complex Brownian motion with base point , and is non-zero, show that is also a complex Brownian motion with base point .
- (iii) (Real and imaginary parts) If is a complex Brownian motion with base point , show that and are independent real Brownian motions with base point . Conversely, if are independent real Brownian motions with base point , show that is a complex Brownian motion with base point .
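Parts (ii) and (iii) of the exercise can be checked empirically; the sketch below (my own, with parameters chosen arbitrarily) verifies that the dilation-rescaled motion has the right variance at time 1, and that combining two half-rate independent real motions gives a complex motion with expected squared modulus equal to the elapsed time:

```python
import numpy as np

rng = np.random.default_rng(3)
paths, steps = 20_000, 2_000
lam = 2.0
T = lam ** 2          # simulate the real motion up to time lam^2 * 1
dt = T / steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)
# Dilation invariance: t -> B(lam^2 t) / lam is again a Brownian motion,
# so its value at t = 1 should have variance 1.
var_scaled = (B[:, -1] / lam).var()

# Real/imaginary parts: two independent real motions, each run at half the
# variance rate, combine to a complex Brownian motion; then E|Z(1)|^2 = 1.
X = np.cumsum(rng.normal(0.0, np.sqrt(0.5 / steps), size=(paths, steps)), axis=1)
Y = np.cumsum(rng.normal(0.0, np.sqrt(0.5 / steps), size=(paths, steps)), axis=1)
Z = X + 1j * Y
mean_mod2 = np.mean(np.abs(Z[:, -1]) ** 2)
```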

The next lemma is a special case of the optional stopping theorem.

Lemma 4 (Optional stopping identities)

- (i) (Real case) Let be a real Brownian motion with base point . Let be a bounded stopping time – a bounded random variable with the property that for any time , the event that is determined by the values of the trajectory for times up to (or more precisely, this event is measurable with respect to the algebra generated by this portion of the trajectory). Then
and

and

- (ii) (Complex case) Let be a complex Brownian motion with base point . Let be a bounded stopping time – a bounded random variable with the property that for any time , the event that is determined by the values of the trajectory for times up to . Then

*Proof:* (Slightly informal) We just prove (i) and leave (ii) as an exercise. By translation invariance we can take . Let be an upper bound for . Since is a real normally distributed variable with mean zero and variance , we have

and

and

By the law of total expectation, we thus have

and

and

where the inner conditional expectations are with respect to the event that attains a particular point in . However, from the independent increment nature of Brownian motion, once one conditions to a fixed point , the random variable becomes a real normally distributed variable with mean and variance . Thus we have

and

and

which give the first two claims, and (after some algebra) the identity

which then also gives the third claim.

Exercise 5 Prove the second part of Lemma 4.
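A Monte Carlo check of the first two optional stopping identities of Lemma 4 (my own illustration; the stopping time chosen, the first time the path's absolute value reaches 1, capped at a fixed horizon, is bounded as the lemma requires, and discretisation introduces a small overshoot bias):

```python
import numpy as np

rng = np.random.default_rng(4)
paths, steps, T = 20_000, 2_000, 5.0
dt = T / steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)
# Bounded stopping time: first time |B| reaches 1, capped at T = 5.
hit = np.abs(B) >= 1.0
has_hit = hit.any(axis=1)
first = np.where(has_hit, hit.argmax(axis=1), steps - 1)
tau = (first + 1) * dt
B_tau = B[np.arange(paths), first]
mean_B_tau = B_tau.mean()                              # identity: E[B_tau] = 0
second_moment_gap = (B_tau ** 2).mean() - tau.mean()   # identity: E[B_tau^2] = E[tau]
```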

In this post we assume the Riemann hypothesis and the simplicity of zeroes, thus the zeroes of in the critical strip take the form for some real number ordinates . From the Riemann-von Mangoldt formula, one has the asymptotic

as ; in particular, the spacing should behave like on the average. However, it can happen that some gaps are unusually small compared to other nearby gaps. For the sake of concreteness, let us define a Lehmer pair to be a pair of adjacent ordinates such that

The specific value of the constant is not particularly important here; anything larger than would suffice. An example of such a pair would be the classical pair

discovered by Lehmer. It follows easily from the main results of Csordas, Smith, and Varga that if an infinite number of Lehmer pairs (in the above sense) existed, then the de Bruijn-Newman constant is non-negative. This implication is now redundant in view of the unconditional results of this recent paper of Rodgers and myself; however, the question of whether an infinite number of Lehmer pairs exist remains open.

In this post, I sketch an argument that Brad and I came up with (as initially suggested by Odlyzko) that the GUE hypothesis implies the existence of infinitely many Lehmer pairs. We argue probabilistically: pick a sufficiently large number , pick at random from to (so that the average gap size is close to ), and prove that the Lehmer pair condition (1) occurs with positive probability.

Introduce the renormalised ordinates for , and let be a small absolute constant (independent of ). It will then suffice to show that

(say) with probability , since the contribution of those outside of can be absorbed by the factor with probability .

As one consequence of the GUE hypothesis, we have with probability . Thus, if , then has density . Applying the Hardy-Littlewood maximal inequality, we see that with probability , we have

which implies in particular that

for all . This implies in particular that

and so it will suffice to show that

(say) with probability .

By the GUE hypothesis (and the fact that is independent of ), it suffices to show that a Dyson sine process , normalised so that is the first positive point in the process, obeys the inequality

with probability . However, if we let be a moderately large constant (and assume small depending on ), one can show using -point correlation functions for the Dyson sine process (and the fact that the Dyson kernel equals to second order at the origin) that

for any natural number , where denotes the number of elements of the process in . For instance, the expression can be written in terms of the three-point correlation function as

which can easily be estimated to be (since in this region), and similarly for the other estimates claimed above.

Since for natural numbers , the quantity is only positive when , we see from the first three estimates that the event that occurs with probability . In particular, by Markov’s inequality we have the conditional probabilities

and thus, if is large enough, and small enough, it will be true with probability that

and

and simultaneously that

for all natural numbers . This implies in particular that

and

for all , which gives (2) for small enough.

Remark 1 The above argument needed the GUE hypothesis for correlations up to fourth order (in order to establish (3)). It might be possible to reduce the number of correlations needed, but I do not see how to obtain the claim using pair correlations alone.

Let be the Liouville function, thus is defined to equal when is the product of an even number of primes, and when is the product of an odd number of primes. The Chowla conjecture asserts that has the statistics of a random sign pattern, in the sense that

for all and all distinct natural numbers , where we use the averaging notation

For , this conjecture is equivalent to the prime number theorem (as discussed in this previous blog post), but the conjecture remains open for any .
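The Liouville function and its shifted correlations are easy to experiment with; the following sketch (my own illustration, not part of the post) sieves smallest prime factors, builds the Liouville function, and computes the empirical analogue of the two-point Chowla average at shift 1 up to a modest cutoff. Conjecturally this average tends to zero; at this range it is empirically small, though of course no finite computation proves the conjecture.

```python
N = 200_000
# Smallest-prime-factor sieve up to N+1.
spf = list(range(N + 2))
for p in range(2, int(N ** 0.5) + 2):
    if spf[p] == p:  # p is prime
        for m in range(p * p, N + 2, p):
            if spf[m] == m:
                spf[m] = p
# Liouville function lam(n) = (-1)^Omega(n), Omega counted with multiplicity,
# built recursively by stripping one smallest prime factor at a time.
lam = [1] * (N + 2)
for n in range(2, N + 2):
    lam[n] = -lam[n // spf[n]]
# Empirical two-point Chowla average at shift 1:
corr = sum(lam[n] * lam[n + 1] for n in range(1, N + 1)) / N
```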

In recent years, it has been realised that one can make more progress on this conjecture if one works instead with the logarithmically averaged version

of the conjecture, where we use the logarithmic averaging notation

Using the summation by parts (or telescoping series) identity

it is not difficult to show that the Chowla conjecture (1) for a given implies the logarithmically averaged conjecture (2). However, the converse implication is not at all clear. For instance, for , we have already mentioned that the Chowla conjecture

is equivalent to the prime number theorem; but the logarithmically averaged analogue

is significantly easier to show (a proof with the Liouville function replaced by the closely related Möbius function is given in this previous blog post). And indeed, significantly more is now known for the logarithmically averaged Chowla conjecture; in this paper of mine I had proven (2) for , and in this recent paper with Joni Teravainen, we proved the conjecture for all odd (with a different proof also given here).

In view of this emerging consensus that the logarithmically averaged Chowla conjecture was easier than the ordinary Chowla conjecture, it was thus somewhat of a surprise for me to read a recent paper of Gomilko, Kwietniak, and Lemanczyk, who (among other things) established the following statement:

Theorem 1 Assume that the logarithmically averaged Chowla conjecture (2) is true for all . Then there exists a sequence going to infinity such that the Chowla conjecture (1) is true for all along that sequence, that is to say for all and all distinct .

This implication does not use any special properties of the Liouville function (other than that it is bounded), and in fact proceeds by ergodic theoretic methods, focusing in particular on the ergodic decomposition of invariant measures of a shift into ergodic measures. Ergodic methods have proven remarkably fruitful in understanding these sorts of number theoretic and combinatorial problems, as could already be seen by the ergodic theoretic proof of Szemerédi’s theorem by Furstenberg, and more recently by the work of Frantzikinakis and Host on Sarnak’s conjecture. (My first paper with Teravainen also uses ergodic theory tools.) Indeed, many other results in the subject were first discovered using ergodic theory methods.

On the other hand, many results in this subject that were first proven ergodic theoretically have since been reproven by more combinatorial means; my second paper with Teravainen is an instance of this. As it turns out, one can also prove Theorem 1 by a standard combinatorial (or probabilistic) technique known as the second moment method. In fact, one can prove slightly more:

Theorem 2 Let be a natural number. Assume that the logarithmically averaged Chowla conjecture (2) is true for . Then there exists a set of natural numbers of logarithmic density (that is, ) such that for any distinct .

It is not difficult to deduce Theorem 1 from Theorem 2 using a diagonalisation argument. Unfortunately, the known cases of the logarithmically averaged Chowla conjecture ( and odd ) are currently insufficient to use Theorem 2 for any purpose other than to reprove what is already known to be true from the prime number theorem. (Indeed, the even cases of Chowla, in either logarithmically averaged or non-logarithmically averaged forms, seem to be far more powerful than the odd cases; see Remark 1.7 of this paper of myself and Teravainen for a related observation in this direction.)

We now sketch the proof of Theorem 2. For any distinct , we take a large number and consider the limiting second moment

We can expand this as

If all the are distinct, the hypothesis (2) tells us that the inner averages go to zero as . The remaining averages are , and there are of these averages. We conclude that

By Markov’s inequality (and (3)), we conclude that for any fixed , there exists a set of upper logarithmic density at least , thus

such that

By deleting at most finitely many elements, we may assume that consists only of elements of size at least (say).

For any , if we let be the union of for , then has logarithmic density . By a diagonalisation argument (using the fact that the set of tuples is countable), we can then find a set of natural numbers of logarithmic density , such that for every , every sufficiently large element of lies in . Thus for every sufficiently large in , one has

for some with . By Cauchy-Schwarz, this implies that

interchanging the sums and using and , this implies that

We conclude on taking to infinity that

as required.
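The Markov/Chebyshev step in the proof above can be illustrated with a toy model in which the Liouville function is replaced by i.i.d. random signs (my own sketch; parameters and names are mine). The key point is that the logarithmic density of "bad" scales where the Cesaro average exceeds a threshold is bounded by the logarithmically weighted second moment divided by the square of the threshold, and this bound is a deterministic inequality:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
eps = rng.choice([-1, 1], size=N + 2)          # i.i.d. signs, toy model for lam
prods = eps[1:N + 1] * eps[2:N + 2]            # the "correlation" summands
# Cesaro averages A(X) = (1/X) sum_{n <= X} eps(n) eps(n+1):
X = np.arange(1, N + 1)
A = np.cumsum(prods) / X
# Logarithmically weighted second moment of the averages:
second_moment = np.sum(A ** 2 / X) / np.log(N)
# Chebyshev/Markov step: scales with |A(X)| > delta have logarithmic density
# at most second_moment / delta^2 (pointwise: 1{|A|>delta} <= A^2/delta^2).
delta = 0.05
bad_logdens = np.sum((np.abs(A) > delta) / X) / np.log(N)
```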

Suppose we have an matrix that is expressed in block-matrix form as

where is an matrix, is an matrix, is an matrix, and is a matrix for some . If is invertible, we can use the technique of Schur complementation to express the inverse of (if it exists) in terms of the inverse of , and the other components of course. Indeed, to solve the equation

where are column vectors and are column vectors, we can expand this out as a system

Using the invertibility of , we can write the first equation as

and substituting this into the second equation yields

and thus (assuming that is invertible)

and then inserting this back into (1) gives

Comparing this with

we have managed to express the inverse of as
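The block-inverse formula derived above is easy to verify numerically; the following sketch (my own, with an arbitrary random matrix and block sizes) assembles the inverse of the full matrix from the inverse of the top-left block and the inverse of its Schur complement:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 8, 5
M = rng.standard_normal((n, n))
A, B = M[:k, :k], M[:k, k:]
C, D = M[k:, :k], M[k:, k:]
Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                 # the Schur complement of A in M
Sinv = np.linalg.inv(S)
# Block formula for M^{-1} in terms of A^{-1} and S^{-1}:
Minv = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv            ],
])
err = np.max(np.abs(Minv @ M - np.eye(n)))
```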

One can consider the inverse problem: given the inverse of , does one have a nice formula for the inverse of the minor ? Trying to recover this directly from (2) looks somewhat messy. However, one can proceed as follows. Let denote the matrix

(with the identity matrix), and let be its transpose:

Then for any scalar (which we identify with times the identity matrix), one has

and hence by (2)

noting that the inverses here will exist for large enough. Taking limits as , we conclude that

On the other hand, by the Woodbury matrix identity (discussed in this previous blog post), we have

and hence on taking limits and comparing with the preceding identity, one has

This achieves the aim of expressing the inverse of the minor in terms of the inverse of the full matrix. Taking traces and rearranging, we conclude in particular that

In the case, this can be simplified to

where is the basis column vector.
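The identity just described (inverting a principal minor using only entries of the full inverse) can be checked numerically in the case of deleting a single row and column; the sketch below is my own reconstruction of that case, in which one subtracts from the corresponding block of the inverse a rank-one correction built from its last column and row:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
M = rng.standard_normal((n, n))
G = np.linalg.inv(M)
# Deleting the last row and column: the inverse of the top-left (n-1) x (n-1)
# minor of M can be written using only entries of G = M^{-1}, by subtracting
# the rank-one correction (last column of G)(last row of G) / G[n-1, n-1].
minor_inv = (G[:n - 1, :n - 1]
             - np.outer(G[:n - 1, n - 1], G[n - 1, :n - 1]) / G[n - 1, n - 1])
err = np.max(np.abs(minor_inv @ M[:n - 1, :n - 1] - np.eye(n - 1)))
```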

We can apply this identity to understand how the spectrum of an random matrix relates to that of its top left minor . Subtracting any complex multiple of the identity from (and hence from ), we can relate the Stieltjes transform of with the Stieltjes transform of :

At this point we begin to proceed informally. Assume for sake of argument that the random matrix is Hermitian, with distribution that is invariant under conjugation by the unitary group ; for instance, could be drawn from the Gaussian Unitary Ensemble (GUE), or alternatively could be of the form for some real diagonal matrix and a unitary matrix drawn randomly from using Haar measure. To fix normalisations we will assume that the eigenvalues of are typically of size . Then is also Hermitian and -invariant. Furthermore, the law of will be the same as the law of , where is now drawn uniformly from the unit sphere (independently of ). Diagonalising into eigenvalues and eigenvectors , we have

One can think of as a random (complex) Gaussian vector, divided by the magnitude of that vector (which, by the Chernoff inequality, will concentrate to ). Thus the coefficients with respect to the orthonormal basis can be thought of as independent (complex) Gaussian vectors, divided by that magnitude. Using this and the Chernoff inequality again, we see (for distance away from the real axis at least) that one has the concentration of measure

and thus

(that is to say, the diagonal entries of are roughly constant). Similarly we have

Inserting this into (5) and discarding terms of size , we thus conclude the approximate relationship

This can be viewed as a difference equation for the Stieltjes transform of top left minors of . Iterating this equation, and formally replacing the difference equation by a differential equation in the large limit, we see that when is large and for some , one expects the top left minor of to have Stieltjes transform

where solves the Burgers-type equation

Example 1 If is a constant multiple of the identity, then . One checks that is a steady state solution to (7), which is unsurprising given that all minors of are also times the identity.

Example 2 If is GUE normalised so that each entry has variance , then by the semi-circular law (see previous notes) one has (using an appropriate branch of the square root). One can then verify the self-similar solution to (7), which is consistent with the fact that a top minor of also has the law of GUE, with each entry having variance when .
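The self-similarity in Example 2 (a top minor of a Gaussian ensemble is again Gaussian, with the semicircle support shrinking like the square root of the proportion kept) can be seen numerically. In the sketch below (my own, using a real symmetric Gaussian Wigner matrix as a stand-in for GUE, which obeys the same semicircle law) the full matrix has spectral second moment about 1, while the half minor has second moment about 1/2, matching a semicircle of half the variance:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400
# Real symmetric Gaussian (Wigner) matrix, normalised so entries have
# variance ~ 1/n; its spectral measure is then semicircular on [-2, 2].
X = rng.standard_normal((n, n))
H = (X + X.T) / np.sqrt(2 * n)
t = 0.5
m = int(t * n)
full_m2 = np.mean(np.linalg.eigvalsh(H) ** 2)            # ~ 1 (semicircle variance)
minor_ev = np.linalg.eigvalsh(H[:m, :m])
minor_m2 = np.mean(minor_ev ** 2)                        # ~ t = 0.5
```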

One can justify the approximation (6) given a sufficiently good well-posedness theory for the equation (7). We will not do so here, but will note that (as with the classical inviscid Burgers equation) the equation can be solved exactly (formally, at least) by the method of characteristics. For any initial position , we consider the characteristic flow formed by solving the ODE

with initial data , ignoring for this discussion the problems of existence and uniqueness. Then from the chain rule, the equation (7) implies that

and thus . Inserting this back into (8) we see that

and thus (7) may be solved implicitly via the equation

Remark 3 In practice, the equation (9) may stop working when crosses the real axis, as (7) does not necessarily hold in this region. It is a cute exercise (ultimately coming from the Cauchy-Schwarz inequality) to show that this crossing always happens, for instance if has positive imaginary part then necessarily has negative or zero imaginary part.

Example 4 Suppose we have as in Example 1. Then (9) becomes for any , which after making the change of variables becomes

as in Example 1.

Example 5 Suppose we have as in Example 2. Then (9) becomes

If we write

one can calculate that

and hence

One can recover the spectral measure from the Stieltjes transform as the weak limit of as ; we write this informally as

In this informal notation, we have for instance that

which can be interpreted as the fact that the Cauchy distributions converge weakly to the Dirac mass at as . Similarly, the spectral measure associated to (10) is the semicircular measure .

If we let be the spectral measure associated to , then the curve from to the space of measures is the high-dimensional limit of a Gelfand-Tsetlin pattern (discussed in this previous post), if the pattern is randomly generated amongst all matrices with spectrum asymptotic to as . For instance, if , then the curve is , corresponding to a pattern that is entirely filled with ‘s. If instead is a semicircular distribution, then the pattern is

thus at height from the top, the pattern is semicircular on the interval . The interlacing property of Gelfand-Tsetlin patterns translates to the claim that (resp. ) is non-decreasing (resp. non-increasing) in for any fixed . In principle one should be able to establish these monotonicity claims directly from the PDE (7) or from the implicit solution (9), but it was not clear to me how to do so.

An interesting example of such a limiting Gelfand-Tsetlin pattern occurs when , which corresponds to being , where is an orthogonal projection to a random -dimensional subspace of . Here we have

and so (9) in this case becomes

A tedious calculation then gives the solution

For , there are simple poles at , and the associated measure is

This reflects the interlacing property, which forces of the eigenvalues of the minor to be equal to (resp. ). For , the poles disappear and one just has

For , one has an inverse semicircle distribution

There is presumably a direct geometric explanation of this fact (basically describing the singular values of the product of two random orthogonal projections to half-dimensional subspaces of ), but I do not know of one off-hand.

The evolution of can also be understood using the *-transform* and *-transform* from free probability. Formally, let be the inverse of , thus

for all , and then define the -transform

The equation (9) may be rewritten as

and hence

See these previous notes for a discussion of free probability topics such as the -transform.

Example 6 If then the transform is .

Example 7 If is given by (10), then the transform is

Example 8 If is given by (11), then the transform is

This simple relationship (12) is essentially due to Nica and Speicher (thanks to Dima Shylakhtenko for this reference). It has the remarkable consequence that when is the reciprocal of a natural number , then is the free arithmetic mean of copies of , that is to say is the free convolution of copies of , pushed forward by the map . In terms of random matrices, this is asserting that the top minor of a random matrix has spectral measure approximately equal to that of an arithmetic mean of independent copies of , so that the process of taking top left minors is in some sense a continuous analogue of the process of taking freely independent arithmetic means. There ought to be a geometric proof of this assertion, but I do not know of one. In the limit (or ), the -transform becomes linear and the spectral measure becomes semicircular, which is of course consistent with the free central limit theorem.

In a similar vein, if one defines the function

and inverts it to obtain a function with

for all , then the *-transform* is defined by

Writing

for any , , we have

and so (9) becomes

which simplifies to

replacing by we obtain

and thus

and hence

One can compute to be the -transform of the measure ; from the link between -transforms and free products (see e.g. these notes of Guionnet), we conclude that is the free product of and . This is consistent with the random matrix theory interpretation, since is also the spectral measure of , where is the orthogonal projection to the span of the first basis elements, so in particular has spectral measure . If is unitarily invariant then (by a fundamental result of Voiculescu) it is asymptotically freely independent of , so the spectral measure of is asymptotically the free product of that of and of .
