
This is another sequel to a recent post in which I showed the Riemann zeta function can be locally approximated by a polynomial, in the sense that for randomly chosen one has an approximation

where grows slowly with , and is a polynomial of degree . It turns out that in the function field setting there is an exact version of this approximation which captures many of the known features of the Riemann zeta function, namely Dirichlet -functions for a random character of given modulus over a function field. This model was (essentially) studied in a fairly recent paper by Andrade, Miller, Pratt, and Trinh; I am not sure if there is any further literature on this model beyond this paper (though the number field analogue of low-lying zeroes of Dirichlet -functions is certainly well studied). In this model it is possible to set fixed and let go to infinity, thus providing a simple finite-dimensional model problem for problems involving the statistics of zeroes of the zeta function.

In this post I would like to record this analogue precisely. We will need a finite field of some order and a natural number , and set

We will primarily think of as being large and as being either fixed or growing very slowly with , though it is possible to also consider other asymptotic regimes (such as holding fixed and letting go to infinity). Let be the ring of polynomials of one variable with coefficients in , and let be the multiplicative semigroup of monic polynomials in ; one should view and as the function field analogue of the integers and natural numbers respectively. We use the valuation for polynomials (with ); this is the analogue of the usual absolute value on the integers. We select an irreducible polynomial of size (i.e., has degree ). The multiplicative group can be shown to be cyclic of order . A Dirichlet character of modulus is a completely multiplicative function of modulus , that is periodic of period and vanishes on those not coprime to . From Fourier analysis we see that there are exactly Dirichlet characters of modulus . A Dirichlet character is said to be *odd* if it is not identically one on the group of non-zero constants; there are only non-odd characters (including the principal character), so in the limit most Dirichlet characters are odd. We will work primarily with odd characters in order to be able to ignore the effect of the place at infinity.

Let be an odd Dirichlet character of modulus . The Dirichlet -function is then defined (for of sufficiently large real part, at least) as

Note that for , the set is invariant under shifts whenever ; since this covers a full set of residue classes of , and the odd character has mean zero on this set of residue classes, we conclude that the sum vanishes for . In particular, the -function is entire, and for any real number and complex number , we can write the -function as a polynomial

where and the coefficients are given by the formula

Note that can easily be normalised to zero by the relation

In particular, the dependence on is periodic with period (so by abuse of notation one could also take to be an element of ).
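To make the model concrete, here is a minimal numerical sketch of it. All specific choices below (the field F_3, the irreducible modulus T^3 + 2T + 1, the particular character) are illustrative assumptions, not taken from the text; the code builds an odd Dirichlet character modulo an irreducible cubic via discrete logarithms, computes the coefficients of the associated L-function, and checks both that the coefficient sums vanish from degree deg Q onwards (so the L-function really is a polynomial) and that its zeroes have the absolute value predicted by the Riemann hypothesis for function fields (Theorem 2 below).

```python
import numpy as np
from itertools import product

# Illustrative (hypothetical) choices: the field F_3 and the irreducible
# modulus Q(T) = T^3 + 2T + 1 (it has no roots in F_3, hence irreducible).
q, m = 3, 3
one = [1, 0, 0]   # elements of F_q[T]/(Q) are coefficient lists, low degree first

def mulmod(a, b):
    """Multiply in F_q[T]/(Q), reducing with T^3 = T + 2 (= -2T - 1 mod 3)."""
    p = [0] * (2 * m - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            p[i + j] = (p[i + j] + x * y) % q
    for i in range(2 * m - 2, m - 1, -1):
        c, p[i] = p[i], 0
        p[i - 2] = (p[i - 2] + c) % q      # c * T^i -> c * T^{i-2} ...
        p[i - 3] = (p[i - 3] + 2 * c) % q  # ... plus 2c * T^{i-3}
    return p[:m]

# The unit group is cyclic of order q^m - 1 = 26; find a generator by search.
units = [list(u) for u in product(range(q), repeat=m) if any(u)]
for g in units:
    x, order = g, 1
    while x != one:
        x, order = mulmod(x, g), order + 1
    if order == q**m - 1:
        break

dlog, x = {}, one                      # discrete logarithm table to base g
for k in range(q**m - 1):
    dlog[tuple(x)] = k
    x = mulmod(x, g)

def chi(f):                            # a generating (hence faithful) character
    return np.exp(2j * np.pi * dlog[tuple(f)] / (q**m - 1))

assert abs(chi([2, 0, 0]) + 1) < 1e-9  # chi(2) = -1: nontrivial on constants, so odd

# a_n = sum of chi(f) over monic f of degree n; the L-function is then the
# polynomial a_0 + a_1 t + a_2 t^2 in the variable t = q^{-s}.
a = [sum(chi(list(c) + [1] + [0] * (m - 1 - n))
         for c in product(range(q), repeat=n)) for n in range(m)]

# The coefficient sum vanishes once the degree reaches deg Q = 3 ...
a3 = 0.0
for c in product(range(q), repeat=3):              # monic cubics, reduced mod Q
    rep = ((c[0] + 2) % q, (c[1] + 1) % q, c[2])   # reduce using T^3 = T + 2
    if any(rep):                                   # skip f = Q itself
        a3 += chi(rep)
print(abs(a3))                                     # ~ 0: L is a polynomial of degree 2

# ... and the Riemann hypothesis (Theorem 2 below): both zeroes have |t| = q^{-1/2}.
print(np.abs(np.roots(a[::-1])) * np.sqrt(q))      # both entries ~ 1.0
```

(The discrete logarithm construction also makes it easy to enumerate all q^m - 1 characters of the given modulus, of which all but a handful are odd.)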

Fourier inversion yields a functional equation for the polynomial :

Proposition 1 (Functional equation) Let be an odd Dirichlet character of modulus , and . There exists a phase (depending on ) such that for all , or equivalently that

where .

*Proof:* We can normalise . Let be the finite field . We can write

where denotes the subgroup of consisting of (residue classes of) polynomials of degree less than . Let be a non-trivial character of whose kernel lies in the space (this is easily achieved by pulling back a non-trivial character from the quotient ). We can use the Fourier inversion formula to write

where

From change of variables we see that is a scalar multiple of ; from Plancherel we conclude that

for some phase . We conclude that

The inner sum equals if , and vanishes otherwise, thus

For in , the contribution of the sum vanishes as is odd. Thus we may restrict to , so that

By the multiplicativity of , this factorises as

From the one-dimensional version of (3) (and the fact that is odd) we have

for some phase . The claim follows.

As one corollary of the functional equation, is a phase rotation of and thus is non-zero, so has degree exactly . The functional equation is then equivalent to the zeroes of being symmetric across the unit circle. In fact we have the stronger

Theorem 2 (Riemann hypothesis for Dirichlet -functions over function fields) Let be an odd Dirichlet character of modulus , and . Then all the zeroes of lie on the unit circle.

We derive this result from the Riemann hypothesis for curves over function fields below the fold.

In view of this theorem (and the fact that ), we may write

for some unitary matrix . It is possible to interpret as the action of the geometric Frobenius map on a certain cohomology group, but we will not do so here. The situation here is simpler than in the number field case because the factor arising from very small primes is now absent (in the function field setting there are no primes of size between and ).

We now let vary uniformly at random over all odd characters of modulus , and uniformly over , independently of ; we also make the distribution of the random variable conjugation invariant in . We use to denote the expectation with respect to this randomness. One can then ask what the limiting distribution of is in various regimes; we will focus in this post on the regime where is fixed and is being sent to infinity. In the spirit of the Sato-Tate conjecture, one should expect to converge in distribution to the circular unitary ensemble (CUE), that is to say Haar probability measure on . This may well be provable from Deligne’s “Weil II” machinery (in the spirit of this monograph of Katz and Sarnak), though I do not know how feasible this is or whether it has already been done in the literature; here we shall avoid using this machinery and study what partial results towards this CUE hypothesis one can make without it.

If one lets be the eigenvalues of (ordered arbitrarily), then we now have

and hence the are essentially elementary symmetric polynomials of the eigenvalues:

One can take log derivatives to conclude

On the other hand, as in the number field case one has the Dirichlet series expansion

where has sufficiently large real part, , and the von Mangoldt function is defined as when is the power of an irreducible and otherwise. We conclude the “explicit formula”

Similarly on inverting we have

Since we also have

for sufficiently large real part, where the Möbius function is equal to when is the product of distinct irreducibles, and otherwise, we conclude that the Möbius coefficients

are just the complete homogeneous symmetric polynomials of the eigenvalues:

One can then derive various algebraic relationships between the coefficients from various identities involving symmetric polynomials, but we will not do so here.
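These relationships are easy to verify numerically. The following sketch (the dimension is an arbitrary choice) samples a Haar-random unitary matrix, extracts the elementary and complete homogeneous symmetric polynomials of its eigenvalues from the characteristic polynomial and its inverse power series, and checks Newton's identity, which is the polynomial avatar of the explicit formula above.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample from CUE: QR of a complex Ginibre matrix, with phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / 2**0.5
    Q, R = np.linalg.qr(z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(0)
n = 5                                   # arbitrary dimension
U = haar_unitary(n, rng)
lam = np.linalg.eigvals(U)

# Coefficients of det(1 - tU) = prod_i (1 - lam_i t); p[k] = (-1)^k e_k(lam).
p = np.zeros(n + 1, dtype=complex); p[0] = 1
for l in lam:
    p[1:] = p[1:] - l * p[:-1]

# Taylor coefficients of 1/det(1 - tU) are the complete homogeneous h_k(lam),
# via the recursion sum_{j=0}^{k} p[j] h[k-j] = 0 for k >= 1.
K = 8
h = np.zeros(K + 1, dtype=complex); h[0] = 1
for k in range(1, K + 1):
    h[k] = -sum(p[j] * h[k - j] for j in range(1, min(k, n) + 1))
print(abs(h[1] - np.trace(U)))          # h_1 = e_1 = tr U, so ~ 0

# Newton's identity k p[k] = -sum_{j=1}^{k} tr(U^j) p[k-j]: the log-derivative
# (power sum) side determines the coefficients, as in the explicit formula.
for k in range(1, n + 1):
    rhs = -sum(np.trace(np.linalg.matrix_power(U, j)) * p[k - j]
               for j in range(1, k + 1))
    print(k, abs(k * p[k] - rhs))       # all ~ 0
```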

What do we know about the distribution of ? By construction, it is conjugation-invariant; from (2) it is also invariant with respect to the rotations for any phase . We also have the function field analogue of the Rudnick-Sarnak asymptotics:

Proposition 3 (Rudnick-Sarnak asymptotics) Let be nonnegative integers. If is equal to in the limit (holding fixed) unless for all , in which case it is equal to

Comparing this with Proposition 1 from this previous post, we thus see that all the low moments of are consistent with the CUE hypothesis (and also with the ACUE hypothesis, again by the previous post). The case of this proposition was essentially established by Andrade, Miller, Pratt, and Trinh.

*Proof:* We may assume the homogeneity relationship

since otherwise the claim follows from the invariance under phase rotation . By (6), the expression (9) is equal to

where

and consists of copies of for each , and similarly consists of copies of for each .

The polynomials and are monic of degree , which by hypothesis is less than the degree of , and thus they can only be scalar multiples of each other in if they are identical (in ). As such, we see that the average

vanishes unless , in which case this average is equal to . Thus the expression (9) simplifies to

There are at most choices for the product , and each one contributes to the above sum. All but of these choices are square-free, so by accepting an error of , we may restrict attention to square-free . This forces to all be irreducible (as opposed to powers of irreducibles); as is a unique factorisation domain, this forces and to be a permutation of . By the size restrictions, this then forces for all (if the above expression is to be anything other than ), and each is associated to possible choices of . Writing and then reinstating the non-squarefree possibilities for , we can thus write the above expression as

Using the prime number theorem , we obtain the claim.
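The prime number theorem for polynomials invoked here, which counts about q^n/n monic irreducibles of degree n with an error of order q^{n/2}/n, can be checked by brute force in a small case; the field size below is an illustrative choice.

```python
from itertools import product

q = 3   # illustrative field size

def polymod(a, b):
    """Remainder of a mod b over F_q (coefficient lists, low degree first; b monic)."""
    a = a[:]
    while a and a[-1] == 0:
        a.pop()
    while len(a) >= len(b):
        c, s = a[-1], len(a) - len(b)
        for i, bc in enumerate(b):
            a[s + i] = (a[s + i] - c * bc) % q
        while a and a[-1] == 0:
            a.pop()
    return a

def monics(n):
    return [list(c) + [1] for c in product(range(q), repeat=n)]

def irreducible(f):
    """Trial division by all monic polynomials of degree up to deg(f)/2."""
    return all(polymod(f, g) for d in range(1, (len(f) - 1) // 2 + 1)
               for g in monics(d))

for n in range(1, 6):
    count = sum(irreducible(f) for f in monics(n))
    print(n, count, q**n / n)   # e.g. degree 5: 48 irreducibles vs 48.6
```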

Comparing this with Proposition 1 from this previous post, we thus see that all the low moments of are consistent with the CUE and ACUE hypotheses:

Corollary 4 (CUE statistics at low frequencies) Let be the eigenvalues of , permuted uniformly at random. Let be a linear combination of monomials where are integers with either or . Then

The analogue of the GUE hypothesis in this setting would be the CUE hypothesis, which asserts that the threshold here can be replaced by an arbitrarily large quantity. As far as I know this is not known even for (though, as mentioned previously, in principle one may be able to resolve such cases using Deligne’s proof of the Riemann hypothesis for function fields). Among other things, this would allow one to distinguish CUE from ACUE, since as discussed in the previous post, these two distributions agree when tested against monomials up to threshold , though not to .

*Proof:* By permutation symmetry we can take to be symmetric, and by linearity we may then take to be the symmetrisation of a single monomial . If then both expectations vanish due to the phase rotation symmetry, so we may assume that and . We can write this symmetric polynomial as a constant multiple of plus other monomials with a smaller value of . Since , the claim now follows by induction from Proposition 3 and Proposition 1 from the previous post.

Thus, for instance, for , the moment

is equal to

because all the monomials in are of the required form when . The latter expectation can be computed exactly (for any natural number ) using a formula

of Baker-Forrester and Keating-Snaith, thus for instance

and more generally

when , where are the integers

and more generally

(OEIS A039622). Thus we have

for if and is sufficiently slowly growing depending on . The CUE hypothesis would imply that this formula also holds for higher . (The situation here is cleaner than in the number field case, in which the GUE hypothesis only suggests the correct lower bound for the moments rather than an asymptotic, due to the absence of the wildly fluctuating additional factor that is present in the Riemann zeta function model.)
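Moment formulas of this shape are easy to test by Monte Carlo. The sketch below compares empirical CUE averages of |det(1-U)|^{2k} against the Baker-Forrester/Keating-Snaith product, assumed here in the closed form prod_{j=0}^{N-1} j!(j+2k)!/((j+k)!)^2 (for k=1 this reduces to N+1); the matrix size and sample count are arbitrary choices.

```python
import numpy as np
from math import factorial

def haar_unitary(n, rng):               # CUE sampler, as in the earlier sketch
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / 2**0.5
    Q, R = np.linalg.qr(z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def cue_det_moment(N, k):
    """Closed form prod_{j=0}^{N-1} j!(j+2k)!/((j+k)!)^2; equals N+1 for k = 1."""
    out = 1.0
    for j in range(N):
        out *= factorial(j) * factorial(j + 2 * k) / factorial(j + k)**2
    return out

rng = np.random.default_rng(1)
N, samples = 5, 50000                   # arbitrary size and sample count
d = np.array([abs(np.linalg.det(np.eye(N) - haar_unitary(N, rng)))
              for _ in range(samples)])
for k in (1, 2):
    print(k, np.mean(d**(2 * k)), cue_det_moment(N, k))   # ~6 and ~196 for N = 5
```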

Now we can recover the analogue of Montgomery’s work on the pair correlation conjecture. Consider the statistic

where

is some finite linear combination of monomials independent of . We can expand the above sum as

Assuming the CUE hypothesis, we would conclude from Example 3 of the previous post that

This is the analogue of Montgomery’s pair correlation conjecture. Proposition 3 implies that this claim is true whenever is supported on . If instead we assume the ACUE hypothesis (or the weaker Alternative Hypothesis that the phase gaps are non-zero multiples of ), one should instead have

for arbitrary ; this is the function field analogue of a recent result of Baluyot. In any event, since is non-negative, we unconditionally have the lower bound

By applying (12) for various choices of test functions we can obtain various bounds on the behaviour of eigenvalues. For instance suppose we take the Fejér kernel

Then (12) applies unconditionally and we conclude that

The right-hand side evaluates to . On the other hand, is non-negative, and equal to when . Thus

The sum is at least , and is at least if is not a simple eigenvalue. Thus

and thus the expected number of simple eigenvalues is at least ; in particular, at least two thirds of the eigenvalues are simple asymptotically on average. If we had (12) without any restriction on the support of , the same arguments would allow one to show that the expected proportion of simple eigenvalues is .

Suppose that the phase gaps in are all greater than almost surely. Let be non-negative and non-positive for outside of the arc . Then from (13) one has

so by taking contrapositives one can force the existence of a gap less than asymptotically if one can find with non-negative, non-positive for outside of the arc , and for which one has the inequality

By a suitable choice of (based on a minorant of Selberg) one can ensure this for for large; see Section 5 of these notes of Goldston. This is not the smallest value of currently obtainable in the literature for the number field case (which is currently , due to Goldston and Turnage-Butterbaugh, by a somewhat different method), but is still significantly less than the trivial value of . On the other hand, due to the compatibility of the ACUE distribution with Proposition 3, it is not possible to lower below purely through the use of Proposition 3.

In some cases it is possible to go beyond Proposition 3. Consider the mollified moment

where

for some coefficients . We can compute this moment in the CUE case:

Proposition 5 We have

*Proof:* From (5) one has

hence

where we suppress the dependence on the eigenvalues . Now observe the Pieri formula

where are the hook Schur polynomials

and we adopt the convention that vanishes for , or when and . Then also vanishes for . We conclude that

As the Schur polynomials are orthonormal on the unitary group, the claim follows.

The CUE hypothesis would then imply the corresponding mollified moment conjecture

(See this paper of Conrey, and this paper of Radziwill, for some discussion of the analogous conjecture for the zeta function, which is essentially due to Farmer.)

From Proposition 3 one sees that this conjecture holds in the range . It is likely that the function field analogue of the calculations of Conrey (based ultimately on deep exponential sum estimates of Deshouillers and Iwaniec) can extend this range to for any , if is sufficiently large depending on ; these bounds thus go beyond what is available from Proposition 3. On the other hand, as discussed in Remark 7 of the previous post, ACUE would also predict (14) for as large as , so the available mollified moment estimates are not strong enough to rule out ACUE. It would be interesting to see if there is some other estimate in the function field setting that can be used to exclude the ACUE hypothesis (possibly one that exploits the fact that GRH is available in the function field case?).

In a recent post I discussed how the Riemann zeta function can be locally approximated by a polynomial, in the sense that for randomly chosen one has an approximation

where grows slowly with , and is a polynomial of degree . Assuming the Riemann hypothesis (as we will throughout this post), the zeroes of should all lie on the unit circle, and one should then be able to write as a scalar multiple of the characteristic polynomial of (the inverse of) a unitary matrix , which we normalise as

Here is some quantity depending on . We view as a random element of ; in the limit , the GUE hypothesis is equivalent to becoming equidistributed with respect to Haar measure on (also known as the Circular Unitary Ensemble, CUE; it is to the unit circle what the Gaussian Unitary Ensemble (GUE) is to the real line). One can also view as analogous to the “geometric Frobenius” operator in the function field setting, though unfortunately it is difficult at present to make this analogy any more precise (due, among other things, to the lack of a sufficiently satisfactory theory of the “field of one element”).

Taking logarithmic derivatives of (2), we have

and hence on taking logarithmic derivatives of (1) in the variable we (heuristically) have

Morally speaking, we have

so on comparing coefficients we expect to interpret the moments of as a finite Dirichlet series:

To understand the distribution of in the unitary group , it suffices to understand the distribution of the moments

where denotes averaging over , and . The GUE hypothesis asserts that in the limit , these moments converge to their CUE counterparts

where is now drawn uniformly in with respect to the CUE ensemble, and denotes expectation with respect to that measure.

The moment (6) vanishes unless one has the homogeneity condition

This follows from the fact that for any phase , has the same distribution as , where we use the number theory notation .

In the case when the degree is low, we can use representation theory to establish the following simple formula for the moment (6), as evaluated by Diaconis and Shahshahani:

Proposition 1 (Low moments in CUE model) If then the moment (6) vanishes unless for all , in which case it is equal to

Another way of viewing this proposition is that for distributed according to CUE, the random variables are distributed like independent complex random variables of mean zero and variance , as long as one only considers moments obeying (8). This identity definitely breaks down for larger values of , so one only obtains central limit theorems in certain limiting regimes, notably when one only considers a fixed number of ‘s and lets go to infinity. (The paper of Diaconis and Shahshahani writes in place of , but I believe this to be a typo.)

*Proof:* Let be the left-hand side of (8). We may assume that (7) holds since we are done otherwise, hence

Our starting point is Schur-Weyl duality. Namely, we consider the -dimensional complex vector space

This space has an action of the product group : the symmetric group acts by permutation on the tensor factors, while the general linear group acts diagonally on the factors, and the two actions commute with each other. Schur-Weyl duality gives a decomposition

where ranges over Young tableaux of size with at most rows, is the -irreducible unitary representation corresponding to (which can be constructed for instance using Specht modules), and is the -irreducible polynomial representation corresponding with highest weight .

Let be a permutation consisting of cycles of length (this is uniquely determined up to conjugation), and let . The pair then acts on , with the action on basis elements given by

The trace of this action can then be computed as

where is the matrix coefficient of . Breaking up into cycles and summing, this is just

But we can also compute this trace using the Schur-Weyl decomposition (10), yielding the identity

where is the character on associated to , and is the character on associated to . As is well known, is just the Schur polynomial of weight applied to the (algebraic, generalised) eigenvalues of . We can specialise to unitary matrices to conclude that

and similarly

where consists of cycles of length for each . On the other hand, the characters are an orthonormal system on with the CUE measure. Thus we can write the expectation (6) as

Now recall that ranges over all the Young tableaux of size with at most rows. But by (8) we have , and so the condition of having rows is redundant. Hence now ranges over *all* Young tableaux of size , which as is well known enumerates all the irreducible representations of . One can then use the standard orthogonality properties of characters to show that the sum (12) vanishes if , are not conjugate, and is equal to divided by the size of the conjugacy class of (or equivalently, to the size of the centraliser of ) otherwise. But the latter expression is easily computed to be , giving the claim.

Example 2 We illustrate the identity (11) when , . The Schur polynomials are given as where are the (generalised) eigenvalues of , and the formula (11) in this case becomes

The functions are orthonormal on , so the three functions are also, and their norms are , , and respectively, reflecting the size in of the centralisers of the permutations , , and respectively. If is instead set to say , then the terms now disappear (the Young tableau here has too many rows), and the three quantities here now have some non-trivial covariance.

Example 3 Consider the moment . For , the above proposition shows us that this moment is equal to . What happens for ? The formula (12) computes this moment as where is a cycle of length in , and ranges over all Young tableaux with size and at most rows. The Murnaghan-Nakayama rule tells us that vanishes unless is a hook (all but one of the non-zero rows consisting of just a single box; this also can be interpreted as an exterior power representation on the space of vectors in whose coordinates sum to zero), in which case it is equal to (depending on the parity of the number of non-zero rows). As such we see that this moment is equal to . Thus in general we have

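This prediction, that the moment equals the frequency until it saturates at the matrix dimension, is easy to confirm by simulation; here is a minimal sketch (dimension and sample count are arbitrary choices):

```python
import numpy as np

def haar_unitary(n, rng):               # CUE sampler, as in the earlier sketches
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / 2**0.5
    Q, R = np.linalg.qr(z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(2)
N, samples = 4, 20000
us = [haar_unitary(N, rng) for _ in range(samples)]
for n in range(1, 2 * N + 1):
    moment = np.mean([abs(np.trace(np.linalg.matrix_power(u, n)))**2 for u in us])
    print(n, round(moment, 2), min(n, N))   # empirical moment vs min(n, N)
```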
Now we discuss what is known for the analogous moments (5). Here we shall be rather non-rigorous, in particular ignoring an annoying “Archimedean” issue that the product of the ranges and is not quite the range but instead leaks into the adjacent range . This issue can be addressed by working in a “weak” sense in which parameters such as are averaged over fairly long scales, or by passing to a function field analogue of these questions, but we shall simply ignore the issue completely and work at a heuristic level only. For similar reasons we will ignore some technical issues arising from the sharp cutoff of to the range (it would be slightly better technically to use a smooth cutoff).

One can morally expand out (5) using (4) as

where , , and the integers are in the ranges

for and , and

for and . Morally, the expectation here is negligible unless

in which case the expectation oscillates with magnitude one. In particular, if (7) fails (with some room to spare) then the moment (5) should be negligible, which is consistent with the analogous behaviour for the moments (6). Now suppose that (8) holds (with some room to spare). Then is significantly less than , so the multiplicative error in (15) becomes an additive error of . On the other hand, because of the fundamental *integrality gap* – that the integers are always separated from each other by a distance of at least – this forces the integers , to in fact be equal:

The von Mangoldt factors effectively restrict to be prime (the effect of prime powers is negligible). By the fundamental theorem of arithmetic, the constraint (16) then forces , and to be a permutation of , which then forces for all . For a given , the number of possible is then , and the expectation in (14) is equal to . Thus this expectation is morally

and using Mertens’ theorem this soon simplifies asymptotically to the same quantity as in Proposition 1. Thus we see that (morally at least) the moments (5) associated to the zeta function asymptotically match the moments (6) coming from the CUE model in the low degree case (8), thus lending support to the GUE hypothesis. (These observations are basically due to Rudnick and Sarnak, with the degree case of pair correlations due to Montgomery, and the degree case due to Hejhal.)

With some rare exceptions (such as those estimates coming from “Kloostermania”), the moment estimates of Rudnick and Sarnak basically represent the state of the art for what is known for the moments (5). For instance, Montgomery’s pair correlation conjecture, in our language, is basically the analogue of (13) for , thus

for all . Montgomery showed this for (essentially) the range (as remarked above, this is a special case of the Rudnick-Sarnak result), but no further cases of this conjecture are known.

These estimates can be used to give some non-trivial information on the largest and smallest spacings between zeroes of the zeta function, which in our notation corresponds to spacing between eigenvalues of . One such method used today for this is due to Montgomery and Odlyzko and was greatly simplified by Conrey, Ghosh, and Gonek. The basic idea, translated to our random matrix notation, is as follows. Suppose is some random polynomial depending on of degree at most . Let denote the eigenvalues of , and let be a parameter. Observe from the pigeonhole principle that if the quantity

then the arcs cannot all be disjoint, and hence there exists a pair of eigenvalues making an angle of less than ( times the mean angle separation). Similarly, if the quantity (18) falls below that of (19), then these arcs cannot cover the unit circle, and hence there exists a pair of eigenvalues making an angle of greater than times the mean angle separation. By judiciously choosing the coefficients of as functions of the moments , one can ensure that both quantities (18), (19) can be computed by the Rudnick-Sarnak estimates (or estimates of equivalent strength); indeed, from the residue theorem one can write (18) as

for sufficiently small , and this can be computed (in principle, at least) using (3) if the coefficients of are in an appropriate form. Using this sort of technology (translated back to the Riemann zeta function setting), one can show that gaps between consecutive zeroes of zeta are less than times the mean spacing and greater than times the mean spacing infinitely often for certain ; the current records are (due to Goldston and Turnage-Butterbaugh) and (due to Bui and Milinovich, who input some additional estimates beyond the Rudnick-Sarnak set, namely the twisted fourth moment estimates of Bettin, Bui, Li, and Radziwill, and using a technique based on Hall’s method rather than the Montgomery-Odlyzko method).

It would be of great interest if one could push the upper bound for the smallest gap below . The reason for this is that this would then exclude the Alternative Hypothesis that the spacing between zeroes is asymptotically always (or almost always) a non-zero half-integer multiple of the mean spacing, or in our language that the gaps between the phases of the eigenvalues of are asymptotically always non-zero integer multiples of . The significance of this hypothesis is that it is implied by the existence of a Siegel zero (of conductor a small power of ); see this paper of Conrey and Iwaniec. (In our language, what is going on is that if there is a Siegel zero in which is very close to zero, then behaves like the Kronecker delta, and hence (by the Riemann-Siegel formula) the combined -function will have a polynomial approximation which in our language looks like a scalar multiple of , where and is a phase. The zeroes of this approximation lie on a coset of the roots of unity; the polynomial is a factor of this approximation and hence will also lie in this coset, implying in particular that all eigenvalue spacings are multiples of . Taking then gives the claim.)

Unfortunately, the known methods do not seem to break this barrier without some significant new input; already the original paper of Montgomery and Odlyzko observed this limitation for their particular technique (and in fact fall very slightly short, as observed in unpublished work of Goldston and of Milinovich). In this post I would like to record another way to see this, by providing an “alternative” probability distribution to the CUE distribution (which one might dub the *Alternative Circular Unitary Ensemble* (ACUE)), which is indistinguishable in low moments in the sense that the expectation for this model also obeys Proposition 1, but for which the phase spacings are always a multiple of . This shows that if one is to rule out the Alternative Hypothesis (and thus in particular rule out Siegel zeroes), one needs to input some additional moment information beyond Proposition 1. It would be interesting to see if any of the other known moment estimates that go beyond this proposition are consistent with this alternative distribution. (UPDATE: it looks like they are, see Remark 7 below.)

To describe this alternative distribution, let us first recall the Weyl description of the CUE measure on the unitary group in terms of the distribution of the phases of the eigenvalues, randomly permuted in any order. This distribution is given by the probability measure

where is the Vandermonde determinant; see for instance this previous blog post for the derivation of a very similar formula for the GUE distribution, which can be adapted to CUE without much difficulty. To see that this is a probability measure, first observe the Vandermonde determinant identity

where , denotes the dot product, and is the “long word”, which implies that (20) is a trigonometric series with constant term ; it is also clearly non-negative, so it is a probability measure. One can thus generate a random CUE matrix by first drawing using the probability measure (20), and then generating to be a random unitary matrix with eigenvalues .

For the alternative distribution, we first draw on the discrete torus (thus each is a root of unity) with probability density function

shift by a phase drawn uniformly at random, and then select to be a random unitary matrix with eigenvalues . Let us first verify that (21) is a probability density function. Clearly it is non-negative. It is a linear combination of exponentials of the form for . The diagonal contribution gives the constant function , which has total mass one. All of the other exponentials have a frequency that is not a multiple of , and hence will have mean zero on . The claim follows.

From construction it is clear that the matrix drawn from this alternative distribution will have all eigenvalue phase spacings be a non-zero multiple of . Now we verify that the alternative distribution also obeys Proposition 1. The alternative distribution remains invariant under rotation by phases, so the claim is again clear when (8) fails. Inspecting the proof of that proposition, we see that it suffices to show that the Schur polynomials with of size at most and of equal size remain orthonormal with respect to the alternative measure. That is to say,

when have size equal to each other and at most . In this case the phase in the definition of is irrelevant. In terms of eigenvalue measures, we are then reduced to showing that

By Fourier decomposition, it then suffices to show that the trigonometric polynomial does not contain any components of the form for some non-zero lattice vector . But we have already observed that is a linear combination of plane waves of the form for . Also, as is well known, is a linear combination of plane waves where is majorised by , and similarly is a linear combination of plane waves where is majorised by . So the product is a linear combination of plane waves of the form . But every coefficient of the vector lies between and , and so cannot be of the form for any non-zero lattice vector , giving the claim.

Example 4 If , then the distribution (21) assigns a probability of to any pair that is a permuted rotation of , and a probability of to any pair that is a permuted rotation of . Thus, a matrix drawn from the alternative distribution will be conjugate to a phase rotation of with probability , and to with probability . A similar computation when gives conjugate to a phase rotation of with probability , to a phase rotation of or its adjoint with probability of each, and a phase rotation of with probability .
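Since the alternative distribution lives on a finite set of phase configurations, its moments can be computed exactly by enumeration. The sketch below (with the illustrative choice of three eigenvalues) draws the phases from the 2N-th roots of unity with squared-Vandermonde weights as in (21); the uniform phase shift does not affect the moments |tr(U^n)|^2 and is omitted. The output reproduces the CUE values min(n, N) for n up to N, and shows the departure beyond that threshold, most dramatically at n = 2N, where the ACUE value is N^2 rather than N.

```python
import numpy as np
from itertools import combinations

N = 3                                               # illustrative size
roots = np.exp(1j * np.pi * np.arange(2 * N) / N)   # the 2N-th roots of unity
subsets = list(combinations(range(2 * N), N))

# Weights proportional to the squared Vandermonde determinant, as in (21).
w = np.array([np.prod([abs(roots[a] - roots[b])**2 for a, b in combinations(s, 2)])
              for s in subsets])
w /= w.sum()

for n in range(1, 2 * N + 1):
    acue = float(sum(wi * abs(sum(roots[k]**n for k in s))**2
                     for wi, s in zip(w, subsets)))
    print(n, round(acue, 4), min(n, N))   # agree for n <= N; n = 2N gives N^2 = 9
```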

Remark 5 For large it does not seem that this specific alternative distribution is the only distribution consistent with Proposition 1 and which has all phase spacings a non-zero multiple of ; in particular, it may not be the only distribution consistent with a Siegel zero. Still, it is a very explicit distribution that might serve as a test case for the limitations of various arguments for controlling quantities such as the largest or smallest spacing between zeroes of zeta. The ACUE is in some sense the distribution that maximally resembles CUE (in the sense that it has the greatest number of Fourier coefficients agreeing) while still also being consistent with the Alternative Hypothesis, and so should be the most difficult enemy to eliminate if one wishes to disprove that hypothesis.

In some cases, even just a tiny improvement in known results would be able to exclude the alternative hypothesis. For instance, if the alternative hypothesis held, then is periodic in with period , so from Proposition 1 for the alternative distribution one has

which differs from (13) for any . (This fact was implicitly observed recently by Baluyot, in the original context of the zeta function.) Thus a verification of the pair correlation conjecture (17) for even a single with would rule out the alternative hypothesis. Unfortunately, such a verification appears to be of comparable difficulty to (an averaged version of) the Hardy-Littlewood conjecture, with power saving error term. (This is consistent with the fact that Siegel zeroes can cause distortions in the Hardy-Littlewood conjecture, as (implicitly) discussed in this previous blog post.)

Remark 6 One can view the CUE as normalised Lebesgue measure on (viewed as a smooth submanifold of ). One can similarly view ACUE as normalised Lebesgue measure on the (disconnected) smooth submanifold of consisting of those unitary matrices whose phase spacings are non-zero integer multiples of ; informally, ACUE is CUE restricted to this lower dimensional submanifold. As is well known, the phases of CUE eigenvalues form a determinantal point process with kernel (or one can equivalently take ); in a similar spirit, the phases of ACUE eigenvalues, once they are rotated to be roots of unity, become a discrete determinantal point process on those roots of unity with exactly the same kernel (except for a normalising factor of ). In particular, the -point correlation functions of ACUE (after this rotation) are precisely the restriction of the -point correlation functions of CUE after normalisation, that is to say they are proportional to .

Remark 7 One family of estimates that go beyond the Rudnick-Sarnak family of estimates are twisted moment estimates for the zeta function, such as ones that give asymptotics for for some small even exponent (almost always or ) and some short Dirichlet polynomial ; see for instance this paper of Bettin, Bui, Li, and Radziwill for some examples of such estimates. The analogous unitary matrix average would be something like

where is now some random medium degree polynomial that depends on the unitary matrix associated to (and in applications will typically also contain some negative power of to cancel the corresponding powers of in ). Unfortunately such averages generally are unable to distinguish the CUE from the ACUE. For instance, if all the coefficients of involve products of traces of total order less than , then in terms of the eigenvalue phases , is a linear combination of plane waves where the frequencies have coefficients of magnitude less than . On the other hand, as each coefficient of is an elementary symmetric function of the eigenvalues, is a linear combination of plane waves where the frequencies have coefficients of magnitude at most . Thus is a linear combination of plane waves where the frequencies have coefficients of magnitude less than , and thus is orthogonal to the difference between the CUE and ACUE measures on the phase torus by the previous arguments. In other words, has the same expectation with respect to ACUE as it does with respect to CUE. Thus one can only start distinguishing CUE from ACUE if the mollifier has degree close to or exceeding , which corresponds to Dirichlet polynomials of length close to or exceeding , which is far beyond current technology for such moment estimates.

Remark 8 The GUE hypothesis for the zeta function asserts that the average for any and any test function , where is the Dyson sine kernel and are the ordinates of zeroes of the zeta function. This corresponds to the CUE distribution for . The ACUE distribution then corresponds to an “alternative gaussian unitary ensemble (AGUE)” hypothesis, in which the average (22) is instead predicted to equal a Riemann sum version of the integral (23):

This is a stronger version of the alternative hypothesis that the spacing between adjacent zeroes is almost always approximately a half-integer multiple of the mean spacing. I do not know of any known moment estimates for Dirichlet series that are able to eliminate this AGUE hypothesis (even assuming GRH). (UPDATE: These facts have also been independently observed in forthcoming work of Lagarias and Rodgers.)

Just a short note to point out that submissions to the 2019 Breakthrough Junior Challenge are now open until June 15. Students ages 13 to 18 from countries across the globe are invited to create and submit original videos (3:00 minutes in length maximum) that bring to life a concept or theory in the life sciences, physics or mathematics. The submissions are judged on the student’s ability to communicate complex scientific ideas in engaging, illuminating, and imaginative ways. The Challenge is organized by the Breakthrough Prize Foundation, in partnership with Khan Academy, National Geographic, and Cold Spring Harbor Laboratory. The winner of the challenge receives a $250K college scholarship, with an additional $50K prize to the winner’s maths or science teacher, and a $100K lab for the student’s school. (This year I will be on the selection committee for this challenge.)

A useful rule of thumb in complex analysis is that holomorphic functions behave like large degree polynomials . This can be evidenced for instance at a “local” level by the Taylor series expansion for a complex analytic function in the disk, or at a “global” level by factorisation theorems such as the Weierstrass factorisation theorem (or the closely related Hadamard factorisation theorem). One can truncate these theorems in a variety of ways (e.g., Taylor’s theorem with remainder) to be able to approximate a holomorphic function by a polynomial on various domains.

In some cases it can be convenient instead to work with polynomials of another variable such as (or more generally for a scaling parameter ). In the case of the Riemann zeta function, defined by meromorphic continuation of the formula

one ends up having the following heuristic approximation in the neighbourhood of a point on the critical line:

Heuristic 1 (Polynomial approximation) Let be a height, let be a “typical” element of , and let be an integer. Let be the linear change of variables

The requirement is necessary since the right-hand side is periodic with period in the variable (or period in the variable), whereas the zeta function is not expected to have any such periodicity, even approximately.

Let us give two non-rigorous justifications of this heuristic. Firstly, it is standard that inside the critical strip (with ) we have an approximate form

of (11). If we group the integers from to into bins depending on what powers of they lie between, we thus have

For with and we heuristically have

and so

where are the partial Dirichlet series

This gives the desired polynomial approximation.
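The truncated Dirichlet series that this first justification starts from can be tested directly. The following sketch (all parameters are illustrative, and mpmath is used for a reference value of zeta) adds the first Euler-Maclaurin correction terms to the truncated series, so that the quality of the approximation on the critical line is visible:

```python
import mpmath as mp

mp.mp.dps = 20
s = mp.mpc(0.5, 50)                  # an illustrative point on the critical line
for N in (100, 500, 2000):
    partial = sum(k**(-s) for k in range(1, N + 1))
    # first Euler-Maclaurin corrections to the truncated Dirichlet series
    approx = partial + N**(1 - s) / (s - 1) - N**(-s) / 2
    print(N, mp.nstr(abs(mp.zeta(s) - approx), 3))   # error shrinks like N^{-3/2}
```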

A second non-rigorous justification is as follows. From factorisation theorems such as the Hadamard factorisation theorem we expect to have

where runs over the non-trivial zeroes of , and there are some additional factors arising from the trivial zeroes and poles of which we will ignore here; we will also completely ignore the issue of how to renormalise the product to make it converge properly. In the region , the dominant contribution to this product (besides multiplicative constants) should arise from zeroes that are also in this region. The Riemann-von Mangoldt formula suggests that for “typical” one should have about such zeroes. If one lets be any enumeration of zeroes closest to , and then repeats this set of zeroes periodically by period , one then expects to have an approximation of the form

again ignoring all issues of convergence. If one writes and , then Euler’s famous product formula for sine basically gives

(here we are glossing over some technical issues regarding renormalisation of the infinite products, which can be dealt with by studying the asymptotics as ) and hence we expect

This again gives the desired polynomial approximation.

Below the fold we give a rigorous version of the second argument suitable for “microscale” analysis. More precisely, we will show

Theorem 2 Let be an integer going sufficiently slowly to infinity. Let go to zero sufficiently slowly depending on . Let be drawn uniformly at random from . Then with probability (in the limit ), and possibly after adjusting by , there exists a polynomial of degree and obeying the functional equation (9) below, such that

It should be possible to refine the arguments to extend this theorem to the mesoscale setting by letting be anything growing like , and anything growing like ; also we should be able to delete the need to adjust by . We have not attempted these optimisations here.

Many conjectures and arguments involving the Riemann zeta function can be heuristically translated into arguments involving the polynomials , which one can view as random degree polynomials if is interpreted as a random variable drawn uniformly at random from . These can be viewed as providing a “toy model” for the theory of the Riemann zeta function, in which the complex analysis is simplified to the study of the zeroes and coefficients of this random polynomial (for instance, the role of the gamma function is now played by a monomial in ). This model also makes the zeta function theory more closely resemble the function field analogues of this theory (in which the analogue of the zeta function is also a polynomial (or a rational function) in some variable , as per the Weil conjectures). The parameter is at our disposal to choose, and reflects the scale at which one wishes to study the zeta function. For “macroscopic” questions, at which one wishes to understand the zeta function at unit scales, it is natural to take (or very slightly larger), while for “microscopic” questions one would take close to and only growing very slowly with . For the intermediate “mesoscopic” scales one would take somewhere between and . Unfortunately, the statistical properties of are only understood well at a conjectural level at present; even if one assumes the Riemann hypothesis, our understanding of is largely restricted to the computation of low moments (e.g., the second or fourth moments) of various linear statistics of and related functions (e.g., , , or ).

Let’s now heuristically explore the polynomial analogues of this theory in a bit more detail. The Riemann hypothesis basically corresponds to the assertion that all the zeroes of the polynomial lie on the unit circle (which, after the change of variables , corresponds to being real); in a similar vein, the GUE hypothesis corresponds to having the asymptotic law of a random scalar times the characteristic polynomial of a random unitary matrix. Next, we consider what happens to the functional equation

A routine calculation involving Stirling’s formula reveals that

with ; one also has the closely related approximation

when . Since , applying (5) with and using the approximation (2) suggests a functional equation for :

where is the polynomial with all the coefficients replaced by their complex conjugates. Thus if we write

then the functional equation can be written as

We remark that if we use the heuristic (3) (interpreting the cutoffs in the summation in a suitably vague fashion) then this equation can be viewed as an instance of the Poisson summation formula.

Another consequence of the functional equation is that the zeroes of are symmetric with respect to inversion across the unit circle. This is of course consistent with the Riemann hypothesis, but does not obviously imply it. The phase is of little consequence in this functional equation; one could easily conceal it by working with the phase rotation of instead.

One consequence of the functional equation is that is real for any ; the same is then true for the derivative . Among other things, this implies that cannot vanish unless does also; thus the zeroes of will not lie on the unit circle except where has repeated zeroes. The analogous statement is true for ; the zeroes of will not lie on the critical line except where has repeated zeroes.

Relating to this fact, it is a classical result of Speiser that the Riemann hypothesis is true if and only if all the zeroes of the derivative of the zeta function in the critical strip lie on or to the *right* of the critical line. The analogous result for polynomials is

Proposition 3 We have (where all zeroes are counted with multiplicity). In particular, the zeroes of all lie on the unit circle if and only if the zeroes of lie in the closed unit disk.

*Proof:* From the functional equation we have

Thus it will suffice to show that and have the same number of zeroes outside the closed unit disk.

Set , then is a rational function that does not have a zero or pole at infinity. For not a zero of , we have already seen that and are real, so on dividing we see that is always real, that is to say

(This can also be seen by writing , where runs over the zeroes of , and using the fact that these zeroes are symmetric with respect to reflection across the unit circle.) When is a zero of , has a simple pole at with residue a positive multiple of , and so stays on the right half-plane if one traverses a semicircular arc around outside the unit disk. From this and continuity we see that stays on the right-half plane in a circle slightly larger than the unit circle, and hence by the argument principle it has the same number of zeroes and poles outside of this circle, giving the claim.

From the functional equation and the chain rule, is a zero of if and only if is a zero of . We can thus write the above proposition in the equivalent form

One can use this identity to get a lower bound on the number of zeroes of by the method of mollifiers. Namely, for any other polynomial , we clearly have

By Jensen’s formula, we have for any that

We therefore have

As the logarithm function is concave, we can apply Jensen’s inequality to conclude

where the expectation is over the parameter. It turns out that by choosing the mollifier carefully in order to make behave like the function (while keeping the degree small enough that one can compute the second moment here), and then optimising in , one can use this inequality to get a positive fraction of zeroes of on the unit circle on average. This is the polynomial analogue of a classical argument of Levinson, who used this to show that at least one third of the zeroes of the Riemann zeta function are on the critical line; all later improvements on this fraction have been based on some version of Levinson’s method, mainly focusing on more advanced choices for the mollifier and of the differential operator that implicitly appears in the above approach. (The most recent lower bound I know of is , due to Pratt and Robles. In principle (as observed by Farmer) this bound can get arbitrarily close to if one is allowed to use arbitrarily long mollifiers, but establishing this seems of comparable difficulty to unsolved problems such as the pair correlation conjecture; see this paper of Radziwill for more discussion.) A variant of these techniques can also establish “zero density estimates” of the following form: for any , the number of zeroes of that lie further than from the unit circle is of order on average for some absolute constant . Thus, roughly speaking, most zeroes of lie within of the unit circle. (Analogues of these results for the Riemann zeta function were worked out by Selberg, by Jutila, and by Conrey, with increasingly strong values of .)
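The Jensen's formula step in this argument is also easy to sanity-check numerically, say for a polynomial with zeroes placed on both sides of the unit circle (all parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
inner = rng.uniform(0.2, 0.7, 4) * np.exp(2j * np.pi * rng.uniform(size=4))
outer = rng.uniform(1.4, 2.5, 4) * np.exp(2j * np.pi * rng.uniform(size=4))
P = np.poly(np.concatenate([inner, outer]))   # zeroes on both sides of the circle

# Jensen: (1/2pi) int log|P(e^{it})| dt = log|P(0)| + sum_{|z|<1} log(1/|z|).
t = np.linspace(0, 2 * np.pi, 40000, endpoint=False)
lhs = np.mean(np.log(np.abs(np.polyval(P, np.exp(1j * t)))))
rhs = np.log(abs(np.polyval(P, 0))) + np.sum(np.log(1 / np.abs(inner)))
print(lhs, rhs)                               # the two sides agree closely
```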

The zeroes of tend to live somewhat closer to the origin than the zeroes of . Suppose for instance that we write

where are the zeroes of , then by evaluating at zero we see that

and the right-hand side is of unit magnitude by the functional equation. However, if we differentiate

where are the zeroes of , then by evaluating at zero we now see that

The right-hand side would now be typically expected to be of size , and so on average we expect the to have magnitude like , that is to say pushed inwards from the unit circle by a distance roughly . The analogous result for the Riemann zeta function is that the zeroes of at height lie at a distance roughly to the right of the critical line on the average; see this paper of Levinson and Montgomery for a precise statement.

The Polymath15 paper “Effective approximation of heat flow evolution of the Riemann function, and a new upper bound for the de Bruijn-Newman constant“, submitted to Research in the Mathematical Sciences, has just been uploaded to the arXiv. This paper records the mix of theoretical and computational work needed to improve the upper bound on the de Bruijn-Newman constant . This constant can be defined as follows. The function

where is the Riemann function

has a Fourier representation

where is the super-exponentially decaying function

The Riemann hypothesis is equivalent to the claim that all the zeroes of are real. De Bruijn introduced (in different notation) the deformations

of ; one can view this as the solution to the backwards heat equation starting at . From the work of de Bruijn and of Newman, it is known that there exists a real number – the de Bruijn-Newman constant – such that has all zeroes real for and has at least one non-real zero for . In particular, the Riemann hypothesis is equivalent to the assertion . Prior to this paper, the best known bounds for this constant were

with the lower bound due to Rodgers and myself, and the upper bound due to Ki, Kim, and Lee. One of the main results of the paper is to improve the upper bound to

At a purely numerical level this gets “closer” to proving the Riemann hypothesis, but the methods of proof take as input a finite numerical verification of the Riemann hypothesis up to some given height (in our paper we take ) and convert this (and some other numerical verification) to an upper bound on that is of order . As discussed in the final section of the paper, further improvement of the numerical verification of RH would thus lead to modest improvements in the upper bound on , although it does not seem likely that our methods could for instance improve the bound to below without an infeasible amount of computation.
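For concreteness, here is a small sketch of the objects involved, assuming the standard formulas Phi(u) = sum_{n >= 1} (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u}) and H_t(x) = int_0^infty e^{t u^2} Phi(u) cos(xu) du, with H_0(x) = xi(1/2 + ix/2)/8; the truncation point and quadrature parameters are ad hoc choices. It brackets the first zero of H_0, which should sit at twice the ordinate 14.1347... of the first zero of the zeta function.

```python
import numpy as np
from scipy.integrate import quad

def Phi(u, terms=30):
    """Super-exponentially decaying integrand:
    sum_n (2 pi^2 n^4 e^{9u} - 3 pi n^2 e^{5u}) exp(-pi n^2 e^{4u})."""
    n = np.arange(1, terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, x):
    """H_t(x) = int_0^infty e^{t u^2} Phi(u) cos(x u) du, truncated at u = 6."""
    return quad(lambda u: np.exp(t * u * u) * Phi(u) * np.cos(x * u),
                0, 6, limit=200)[0]

# H_0(x) = xi(1/2 + ix/2)/8, so its first zero is at x = 2 * 14.1347... = 28.269...
print(H(0.0, 28.0), H(0.0, 28.5))   # opposite signs: a real zero is bracketed
```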

We now discuss the methods of proof. An existing result of de Bruijn shows that if all the zeroes of lie in the strip , then ; we will verify this hypothesis with , thus giving (1). Using the symmetries and the known zero-free regions, it suffices to show that

whenever and .

For large (specifically, ), we use effective numerical approximation to to establish (2), as discussed in a bit more detail below. For smaller values of , the existing numerical verification of the Riemann hypothesis (we use the results of Platt) shows that

for and . The problem though is that this result only controls at time rather than the desired time . To bridge the gap we need to erect a “barrier” that, roughly speaking, verifies that

for , , and ; with a little bit of work this barrier shows that zeroes cannot sneak in from the right of the barrier to the left in order to produce counterexamples to (2) for small .

To enforce this barrier, and to verify (2) for large , we need to approximate for positive . Our starting point is the Riemann-Siegel formula, which roughly speaking is of the shape

where , is an explicit “gamma factor” that decays exponentially in , and is a ratio of gamma functions that is roughly of size . Deforming this by the heat flow gives rise to an approximation roughly of the form

where and are variants of and , , and is an exponent which is roughly . In particular, for positive values of , increases (logarithmically) as increases, and the two sums in the Riemann-Siegel formula become increasingly convergent (even in the face of the slowly increasing coefficients ). For very large values of (in the range for a large absolute constant ), the terms of both sums dominate, and begins to behave in a sinusoidal fashion, with the zeroes “freezing” into an approximate arithmetic progression on the real line much like the zeroes of the sine or cosine functions (we give some asymptotic theorems that formalise this “freezing” effect). This lets one verify (2) for extremely large values of (e.g., ). For slightly less large values of , we first multiply the Riemann-Siegel formula by an “Euler product mollifier” to reduce some of the oscillation in the sum and make the series converge better; we also use a technical variant of the triangle inequality to improve the bounds slightly. These are sufficient to establish (2) for moderately large (say ) with only a modest amount of computational effort (a few seconds after all the optimisations; on my own laptop with very crude code I was able to verify all the computations in a matter of minutes).

The most difficult computational task is the verification of the barrier (3), particularly when is close to zero where the series in (4) converge quite slowly. We first use an Euler product heuristic approximation to to decide where to place the barrier in order to make our numerical approximation to as large in magnitude as possible (so that we can afford to work with a sparser set of mesh points for the numerical verification). In order to efficiently evaluate the sums in (4) for many different values of , we perform a Taylor expansion of the coefficients to factor the sums as combinations of other sums that do not actually depend on and and so can be re-used for multiple choices of after a one-time computation. At the scales we work in, this computation is still quite feasible (a handful of minutes after software and hardware optimisations); if one assumes larger numerical verifications of RH and lowers and to optimise the value of accordingly, one could get down to an upper bound of assuming an enormous numerical verification of RH (up to height about ) and a very large distributed computing project to perform the other numerical verifications.

This post can serve as the (presumably final) thread for the Polymath15 project (continuing this post), to handle any remaining discussion topics for that project.

Just a brief announcement that the AMS is now accepting (until June 30) nominations for the 2020 Joseph L. Doob Prize, which recognizes a single, relatively recent, outstanding research book that makes a seminal contribution to the research literature, reflects the highest standards of research exposition, and promises to have a deep and long-term impact in its area. The book must have been published within the six calendar years preceding the year in which it is nominated. Books may be nominated by members of the Society, by members of the selection committee, by members of AMS editorial committees, or by publishers. (I am currently on the committee for this prize.) A list of previous winners may be found here. The nomination procedure may be found at the bottom of this page.

Joni Teräväinen and I have just uploaded to the arXiv our paper “Value patterns of multiplicative functions and related sequences”, submitted to Forum of Mathematics, Sigma. This paper explores how to use recent technology on correlations of multiplicative (or nearly multiplicative) functions, such as the “entropy decrement method”, in conjunction with techniques from additive combinatorics, to establish new results on the sign patterns of functions such as the Liouville function . For instance, with regards to length 5 sign patterns

of the Liouville function, we can now show that at least of the possible sign patterns in occur with positive upper density. (Conjecturally, all of them do so, and this is known for all shorter sign patterns, but unfortunately seems to be the limitation of our methods.)
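The occurrence (though of course not the positive density) of sign patterns is easy to explore empirically. The sketch below sieves the Liouville function and counts which length 5 patterns show up in an initial segment; the cutoff is an arbitrary choice.

```python
def liouville_up_to(N):
    """lambda(n) = (-1)^Omega(n) via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:                       # p is prime
            for k in range(p * p, N + 1, p):
                if spf[k] == k:
                    spf[k] = p
    lam = [0] * (N + 1); lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]            # removing one prime factor flips the sign
    return lam

N = 10**6
lam = liouville_up_to(N)
patterns = {tuple(lam[n:n + 5]) for n in range(1, N - 4)}
print(len(patterns), "of the 32 possible length-5 sign patterns occur up to", N)
```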

The Liouville function can be written as , where is the number of prime factors of (counting multiplicity). One can also consider the variant , which is a completely multiplicative function taking values in the cube roots of unity . Here we are able to show that all sign patterns in occur with positive lower density as sign patterns of this function. The analogous result for was already known (see this paper of Matomäki, Radziwiłł, and myself), and in that case it is even known that all sign patterns occur with equal logarithmic density (from this paper of myself and Teräväinen), but these techniques barely fail to handle the case by itself (largely because the “parity” arguments used in the case of the Liouville function no longer control three-point correlations in the case) and an additional additive combinatorial tool is needed. After applying existing technology (such as entropy decrement methods), the problem roughly speaking reduces to locating patterns for a certain partition of a compact abelian group (think for instance of the unit circle , although the general case is a bit more complicated, in particular if is disconnected then there is a certain “coprimality” constraint on , also we can allow the to be replaced by any with divisible by ), with each of the having measure . An inequality of Kneser just barely fails to guarantee the existence of such patterns, but by using an inverse theorem for Kneser’s inequality in this previous paper of mine we are able to identify precisely the obstruction for this method to work, and rule it out by an *ad hoc* method.

The same techniques turn out to also make progress on some conjectures of Erdös-Pomerance and Hildebrand regarding patterns of the largest prime factor $P(n)$ of a natural number $n$. For instance, we improve results of Erdös-Pomerance and of Balog demonstrating that the inequalities

$$ P(n) < P(n+1) < P(n+2) $$

and

$$ P(n) > P(n+1) > P(n+2) $$

each hold for infinitely many $n$, by demonstrating the stronger claims that the inequalities

$$ P(n) < P(n+1) < P(n+2) $$

and

$$ P(n) > P(n+1) > P(n+2) $$

each hold for a set of $n$ of positive lower density. As a variant, we also show that we can find a positive density set of $n$ for which

for any fixed choice of exponent (this improves on a previous result of Hildebrand, which had a weaker exponent). A number of other results of this type are also obtained in this paper.
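Similarly, here is a quick empirical estimate (again my own illustrative code, not a proof) of the density of the increasing chain $P(n) < P(n+1) < P(n+2)$:

```python
# Estimate the density of n with P(n) < P(n+1) < P(n+2), where P(n) is the
# largest prime factor of n (sieved below so that lpf[n] ends at P(n)).
N = 10**5

lpf = [1] * (N + 3)
for p in range(2, N + 3):
    if lpf[p] == 1:                 # p is prime: no smaller prime divides it
        for m in range(p, N + 3, p):
            lpf[m] = p              # larger primes overwrite earlier ones

inc = sum(1 for n in range(2, N) if lpf[n] < lpf[n + 1] < lpf[n + 2])
print("empirical density of P(n) < P(n+1) < P(n+2):", inc / N)
```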

In order to obtain these sorts of results, one needs to extend the entropy decrement technology from the setting of multiplicative functions to that of what we call “weakly stable sets” – sets $A$ of natural numbers which have some multiplicative structure, in the sense that (roughly speaking) there is a set $B$ such that for all small primes $p$, the statements $n \in A$ and $pn \in B$ are roughly equivalent to each other. For instance, if $A$ is a level set $\{ n: \lambda(n) = +1 \}$, one would take $B = \{ n: \lambda(n) = -1 \}$; if instead $A$ is a set of the form $\{ n: P(n) \geq n^\gamma \}$, then one can take $B = A$. When one has such a situation, then very roughly speaking, the entropy decrement argument allows one to estimate a one-parameter correlation such as

$$ {\bf E}_n 1_A(n) 1_A(n+1) $$

with a two-parameter correlation such as

$$ {\bf E}_n {\bf E}_p 1_B(n) 1_B(n+p) $$

(where we will be deliberately vague as to how we are averaging over $n$ and $p$), and then the use of the “linear equations in primes” technology of Ben Green, Tamar Ziegler, and myself allows one to replace this average in turn by something like

$$ {\bf E}_n {\bf E}_r 1_B(n) 1_B(n+r) $$

where $r$ is constrained to be not divisible by small primes but is otherwise quite arbitrary. This latter average can then be attacked by tools from additive combinatorics, such as translation to a continuous group model (using for instance the Furstenberg correspondence principle) followed by tools such as Kneser’s inequality (or inverse theorems to that inequality).

(This post is mostly intended for my own reference, as I found myself repeatedly looking up several conversions between polynomial bases on various occasions.)

Let $V_n$ denote the vector space of polynomials of one variable $x$ with real coefficients of degree at most $n$. This is a vector space of dimension $n+1$, and the sequence of these spaces forms a filtration:

$$ V_0 \subset V_1 \subset V_2 \subset \dots $$
A standard basis for these vector spaces is given by the monomials $x^0, x^1, x^2, \dots$: every polynomial in $V_n$ can be expressed uniquely as a linear combination of the first $n+1$ monomials $x^0, x^1, \dots, x^n$. More generally, if one has any sequence $P_0(x), P_1(x), P_2(x), \dots$ of polynomials, with each $P_n$ of degree exactly $n$, then an easy induction shows that $P_0(x), P_1(x), \dots, P_n(x)$ forms a basis for $V_n$.

In particular, if we have *two* such sequences $P_0(x), P_1(x), P_2(x), \dots$ and $Q_0(x), Q_1(x), Q_2(x), \dots$ of polynomials, with each $P_n$ of degree exactly $n$ and each $Q_k$ of degree exactly $k$, then each $P_n$ must be expressible uniquely as a linear combination of the polynomials $Q_0(x), \dots, Q_n(x)$, thus we have an identity of the form

$$ P_n(x) = \sum_{k=0}^n c_{nk} Q_k(x) $$

for some *change of basis coefficients* $c_{nk}$. These coefficients describe how to convert a polynomial expressed in the $P_n$ basis into a polynomial expressed in the $Q_k$ basis.
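Since the $Q_k$ have strictly increasing degrees, the coefficients $c_{nk}$ can be computed by repeatedly stripping off the top-degree term; here is a small sympy sketch of this (the function name is my own):

```python
# Compute the change of basis coefficients c[n][k] with P_n = sum_k c[n][k] Q_k,
# assuming deg P_n = deg Q_n = n, by matching top-degree coefficients.
from sympy import symbols, Poly, expand

x = symbols('x')

def change_of_basis(P, Q):
    c = []
    for n, Pn in enumerate(P):
        rem, row = Pn, [0] * (n + 1)
        for k in range(n, -1, -1):
            # Only Q_k can contribute to the x^k coefficient of what remains.
            row[k] = Poly(rem, x).coeff_monomial(x**k) / Poly(Q[k], x).coeff_monomial(x**k)
            rem = expand(rem - row[k] * Q[k])
        assert rem == 0
        c.append(row)
    return c

# Example: expanding falling factorials in monomials recovers the Stirling
# numbers of the first kind discussed below.
P = [1, x, x*(x - 1), x*(x - 1)*(x - 2)]
Q = [x**k for k in range(4)]
print(change_of_basis(P, Q))   # last row [0, 2, -3, 1] = s(3,k)
```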

Many standard combinatorial quantities $c_{nk}$ involving two natural numbers $n, k$ can be interpreted as such change of basis coefficients. The most familiar example is the binomial coefficients $\binom{n}{k}$, which measure the conversion from the shifted monomial basis $(x+1)^n$ to the monomial basis $x^k$, thanks to (a special case of) the binomial formula:

$$ (x+1)^n = \sum_{k=0}^n \binom{n}{k} x^k, $$

thus for instance

$$ (x+1)^3 = 1 + 3x + 3x^2 + x^3. $$

More generally, for any shift $h$, the conversion from $(x+h)^n$ to $x^k$ is measured by the coefficients $\binom{n}{k} h^{n-k}$, thanks to the general case of the binomial formula.

But there are other bases of interest too. For instance if one uses the falling factorial basis

$$ (x)_n := x (x-1) \dots (x-n+1), $$

then the conversion from falling factorials to monomials is given by the Stirling numbers of the first kind $s(n,k)$:

$$ (x)_n = \sum_{k=0}^n s(n,k) x^k, $$

thus for instance

$$ (x)_3 = x(x-1)(x-2) = 2x - 3x^2 + x^3, $$

and the conversion back is given by the Stirling numbers of the second kind $S(n,k)$:

$$ x^n = \sum_{k=0}^n S(n,k) (x)_k, $$

thus for instance

$$ x^3 = (x)_1 + 3 (x)_2 + (x)_3. $$
If one uses the binomial functions $\binom{x}{n} = \frac{(x)_n}{n!}$ as a basis instead of the falling factorials, one of course can rewrite these conversions as

$$ \binom{x}{n} = \sum_{k=0}^n \frac{s(n,k)}{n!} x^k $$

and

$$ x^n = \sum_{k=0}^n S(n,k)\, k!\, \binom{x}{k}, $$

thus for instance

$$ \binom{x}{3} = \frac{x}{3} - \frac{x^2}{2} + \frac{x^3}{6} $$

and

$$ x^3 = \binom{x}{1} + 6 \binom{x}{2} + 6 \binom{x}{3}. $$
As a slight variant, if one instead uses rising factorials

$$ x^{(n)} := x (x+1) \dots (x+n-1), $$

then the conversion to monomials yields the unsigned Stirling numbers of the first kind $|s(n,k)|$:

$$ x^{(n)} = \sum_{k=0}^n |s(n,k)| x^k, $$

thus for instance

$$ x^{(3)} = x(x+1)(x+2) = 2x + 3x^2 + x^3. $$
One final basis comes from the polylogarithm functions

$$ \mathrm{Li}_{-n}(x) = \sum_{m=1}^\infty m^n x^m. $$

For instance one has

$$ \mathrm{Li}_{-1}(x) = \frac{x}{(1-x)^2}, $$

and more generally one has

$$ \mathrm{Li}_{-n}(x) = \frac{x A_n(x)}{(1-x)^{n+1}} $$

for all natural numbers $n$ and some polynomial $A_n(x)$ of degree $n-1$ (the *Eulerian polynomials*), which when converted to the monomial basis yield the (shifted) Eulerian numbers $\left\langle {n \atop k} \right\rangle$:

$$ A_n(x) = \sum_{k=0}^{n-1} \left\langle {n \atop k} \right\rangle x^k. $$

For instance

$$ \mathrm{Li}_{-3}(x) = \frac{x (1 + 4x + x^2)}{(1-x)^4}, \quad \hbox{so that} \quad A_3(x) = 1 + 4x + x^2. $$
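These expansions are easy to sanity-check with a computer algebra system; the following sympy sketch (illustrative only) verifies the $n=3$ identity by comparing Taylor coefficients with $\sum_m m^3 x^m$:

```python
# Check that x (1 + 4x + x^2) / (1-x)^4 has m-th Taylor coefficient m^3,
# i.e. that it equals Li_{-3}(x).
from sympy import symbols, series

x = symbols('x')
M = 10

rational_form = x * (1 + 4*x + x**2) / (1 - x)**4
expansion = series(rational_form, x, 0, M).removeO()
coeffs = [expansion.coeff(x, m) for m in range(M)]
assert coeffs == [m**3 for m in range(M)]
print(coeffs)   # [0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
```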
These particular coefficients also have useful combinatorial interpretations. For instance:

- The binomial coefficient $\binom{n}{k}$ is of course the number of $k$-element subsets of $\{1,\dots,n\}$.
- The unsigned Stirling numbers $|s(n,k)|$ of the first kind are the number of permutations of $\{1,\dots,n\}$ with exactly $k$ cycles. The signed Stirling numbers are then given by the formula $s(n,k) = (-1)^{n-k} |s(n,k)|$.
- The Stirling numbers $S(n,k)$ of the second kind are the number of ways to partition $\{1,\dots,n\}$ into $k$ non-empty subsets.
- The Eulerian numbers $\left\langle {n \atop k} \right\rangle$ are the number of permutations of $\{1,\dots,n\}$ with exactly $k$ ascents. (These interpretations are easy to check numerically; see the brute-force sketch after this list.)
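Here is the brute-force check promised above (illustrative code only): counting permutations of a four-element set by cycles and by ascents reproduces $|s(4,k)|$ and $\left\langle {4 \atop k} \right\rangle$:

```python
# Count permutations of {0,1,2,3} by number of cycles (unsigned Stirling,
# first kind) and by number of ascents (Eulerian numbers).
from itertools import permutations

n = 4

def num_cycles(perm):
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def num_ascents(perm):
    return sum(1 for i in range(len(perm) - 1) if perm[i] < perm[i + 1])

cyc, asc = [0] * (n + 1), [0] * n
for p in permutations(range(n)):
    cyc[num_cycles(p)] += 1
    asc[num_ascents(p)] += 1

print("|s(4,k)| for k=1..4:", cyc[1:])   # [6, 11, 6, 1]
print("<4,k> for k=0..3:  ", asc)        # [1, 11, 11, 1]
```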

These coefficients behave similarly to each other in several ways. For instance, the binomial coefficients $\binom{n}{k}$ obey the well known Pascal identity

$$ \binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1} $$

(with the convention that $\binom{n}{k}$ vanishes outside of the range $0 \leq k \leq n$). In a similar spirit, the unsigned Stirling numbers $|s(n,k)|$ of the first kind obey the identity

$$ |s(n+1,k)| = n |s(n,k)| + |s(n,k-1)| $$

and the signed counterparts obey the identity

$$ s(n+1,k) = -n\, s(n,k) + s(n,k-1). $$

The Stirling numbers of the second kind obey the identity

$$ S(n+1,k) = k\, S(n,k) + S(n,k-1) $$

and the Eulerian numbers obey the identity

$$ \left\langle {n+1 \atop k} \right\rangle = (k+1) \left\langle {n \atop k} \right\rangle + (n+1-k) \left\langle {n \atop k-1} \right\rangle. $$
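As a final consistency check (again an illustrative sympy sketch), one can rebuild the signed Stirling numbers from the recurrence above and confirm the falling factorial expansion:

```python
# Build s(n,k) from the recurrence s(n+1,k) = -n s(n,k) + s(n,k-1) and
# check that (x)_n = sum_k s(n,k) x^k for small n.
from sympy import symbols, expand

x = symbols('x')
N = 6

s = {(0, 0): 1}
for n in range(N):
    for k in range(n + 2):
        s[(n + 1, k)] = -n * s.get((n, k), 0) + s.get((n, k - 1), 0)

for n in range(N + 1):
    falling = 1
    for i in range(n):
        falling *= (x - i)               # (x)_n = x (x-1) ... (x-n+1)
    assert expand(falling - sum(s.get((n, k), 0) * x**k for k in range(n + 1))) == 0

print("recurrence consistent with the expansions up to n =", N)
```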
I was pleased to learn this week that the 2019 Abel Prize was awarded to Karen Uhlenbeck. Uhlenbeck laid much of the foundations of modern geometric PDE. One of the few papers I have in this area is in fact a joint paper with Gang Tian extending a famous singularity removal theorem of Uhlenbeck for four-dimensional Yang-Mills connections to higher dimensions. In both these papers, it is crucial to be able to construct “Coulomb gauges” for various connections, and there is a clever trick of Uhlenbeck for doing so, introduced in another important paper of hers, which is absolutely critical in my own paper with Tian. Nowadays it would be considered a standard technique, but it was definitely not so at the time that Uhlenbeck introduced it.

Suppose one has a smooth connection $A$ on the (closed) unit ball $B(0,1)$ in ${\bf R}^d$ for some $d$, taking values in some Lie algebra ${\mathfrak g}$ associated to a compact Lie group $G$. This connection then has a curvature $F = F(A)$, defined in coordinates by the usual formula

$$ F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha + [A_\alpha, A_\beta]. \qquad (1) $$
It is natural to place the curvature $F$ in a scale-invariant space such as $L^{d/2}$, and then the natural space for the connection $A$ would be the Sobolev space $W^{1,d/2}$. It is easy to see from (1) and Sobolev embedding that if $A$ is bounded in $W^{1,d/2}$, then $F$ will be bounded in $L^{d/2}$. One can then ask the converse question: if $F$ is bounded in $L^{d/2}$, is $A$ bounded in $W^{1,d/2}$? This can be viewed as asking whether the curvature equation (1) enjoys “elliptic regularity”.

There is a basic obstruction provided by gauge invariance. For any smooth gauge $U: B(0,1) \to G$ taking values in the Lie group, one can gauge transform $A$ to

$$ A^U_\alpha := U A_\alpha U^{-1} - (\partial_\alpha U) U^{-1}, $$

and then a brief calculation shows that the curvature is conjugated to

$$ F^U_{\alpha\beta} = U F_{\alpha\beta} U^{-1}. $$
This gauge symmetry does not affect the $L^{d/2}$ norm of the curvature tensor $F$, but can make the connection $A^U$ extremely large in $W^{1,d/2}$, since there is no control on how wildly $U$ can oscillate in space.

However, one can hope to overcome this problem by *gauge fixing*: perhaps if $F$ is bounded in $L^{d/2}$, then one can make $A$ bounded in $W^{1,d/2}$ *after* applying a gauge transformation. The basic and useful result of Uhlenbeck is that this can be done if the $L^{d/2}$ norm of $F$ is sufficiently small (and then the conclusion is that $A$ is small in $W^{1,d/2}$). (For large connections there is a serious issue related to the Gribov ambiguity.) In my (much) later paper with Tian, we adapted this argument, replacing Lebesgue spaces by Morrey space counterparts. (This result was also independently obtained at about the same time by Meyer and Rivière.)

To make the problem elliptic, one can try to impose the *Coulomb gauge condition*

$$ \partial^\alpha A_\alpha = 0 \qquad (2) $$

(also known as the *Lorenz gauge* or *Hodge gauge* in various papers), together with a natural boundary condition on $A$ that will not be discussed further here. This turns (1), (2) into a divergence-curl system that is elliptic at the linear level at least. Indeed if one takes the divergence of (1) using (2) one sees that

$$ \Delta A_\beta = \partial^\alpha F_{\alpha\beta} - \partial^\alpha [A_\alpha, A_\beta], \qquad (3) $$

and if one could somehow ignore the nonlinear term $\partial^\alpha [A_\alpha, A_\beta]$ then we would get the required regularity on $A$ by standard elliptic regularity estimates.

The problem is then how to handle the nonlinear term. If we already knew that $A$ was small in the right norm, then one could use Sobolev embedding, Hölder’s inequality, and elliptic regularity to show that the second term in (3) is small compared to the first term, and so one could then hope to eliminate it by perturbative analysis. However, proving that $A$ is small in this norm is exactly what we are trying to prove! So this approach seems circular.

Uhlenbeck’s clever way out of this circularity is a textbook example of what is now known as a “continuity” argument. Instead of trying to work just with the original connection $A$, one works with the rescaled connections $A^{(t)}(x) := t A(tx)$ for $0 \leq t \leq 1$, with associated rescaled curvatures $F^{(t)} = F(A^{(t)})$. If the original curvature $F$ is small in $L^{d/2}$ norm (e.g. bounded by some small $\epsilon$), then so are all the rescaled curvatures $F^{(t)}$. We want to obtain a Coulomb gauge at time $t=1$; this is difficult to do directly, but it is trivial to obtain a Coulomb gauge at time $t=0$, because the connection $A^{(0)}$ vanishes at this time. On the other hand, once one has successfully obtained a Coulomb gauge at some time $t$ with $A^{(t)}$ small in the natural norm (say bounded by $C\epsilon$ for some constant $C$ which is large in absolute terms, but not so large compared with say $1/\epsilon$), the perturbative argument mentioned earlier (combined with the qualitative hypothesis that $A$ is smooth) actually works to show that a Coulomb gauge can also be constructed and be small for all times $t'$ sufficiently close to $t$; furthermore, the perturbative analysis actually shows that the nearby gauges enjoy a slightly better bound on the norm, say $C\epsilon/2$ rather than $C\epsilon$. As a consequence of this, the set of times $t$ for which one has a good Coulomb gauge obeying the claimed estimates is both open and closed in $[0,1]$, and also contains $0$. Since the unit interval $[0,1]$ is connected, it must then also contain $1$. This concludes the proof.
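In schematic form (a sketch in the notation above, not a verbatim rendering of Uhlenbeck's argument), the continuity scheme runs as follows:

```latex
% Sketch of the continuity argument (notation as in the preceding paragraph).
\[
  \Omega := \bigl\{\, t \in [0,1] :\ A^{(t)} \hbox{ admits a Coulomb gauge with }
  \|A^{(t)}\|_{W^{1,d/2}} \leq C\epsilon \,\bigr\}
\]
% (i)   $0 \in \Omega$: the connection $A^{(0)} = 0$ is already in Coulomb gauge.
% (ii)  $\Omega$ is closed: the estimates persist in the limit, using the
%       qualitative smoothness of $A$.
% (iii) $\Omega$ is open: at each $t \in \Omega$ the perturbative analysis gives
%       gauges at all nearby times with the improved bound $C\epsilon/2$.
% Hence $\Omega$ is a nonempty open and closed subset of the connected interval
% $[0,1]$, so $\Omega = [0,1]$; in particular $t = 1 \in \Omega$.
```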

One of the lessons I drew from this example is to not be deterred (especially in PDE) by an argument seeming to be circular; if the argument is still sufficiently “nontrivial” in nature, it can often be modified into a usefully non-circular argument that achieves what one wants (possibly under an additional qualitative hypothesis, such as a continuity or smoothness hypothesis).

Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on “Prismatic Cohomology”, which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two note-takers in the audience on that web page.) My understanding of this (speaking as someone who is rather far removed from this area) is that it is progress towards the “motivic” dream of being able to define cohomology for varieties (or similar objects) defined over arbitrary commutative rings $R$, and with coefficients in another arbitrary commutative ring $S$. Currently, we have various flavours of cohomology that only work for certain types of domain rings $R$ and coefficient rings $S$:

- Singular cohomology, which roughly speaking works when the domain ring $R$ is a characteristic zero field such as ${\bf Q}$ or ${\bf C}$, but can allow for arbitrary coefficients $S$;
- de Rham cohomology, which roughly speaking works as long as the coefficient ring $S$ is the same as the domain ring $R$ (or a homomorphic image thereof), as one can only talk about $R$-valued differential forms if the underlying space is also defined over $R$;
- $\ell$-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring $S$ is localised around a prime $\ell$ that is different from the characteristic $p$ of the domain ring $R$; and
- Crystalline cohomology, in which the domain ring $R$ is a field $k$ of some finite characteristic $p$, but the coefficient ring $S$ can be a slight deformation of $k$, such as the ring $W(k)$ of Witt vectors of $k$.

There are various relationships between the cohomology theories, for instance de Rham cohomology coincides with singular cohomology for smooth varieties in the characteristic zero case (e.g. $R = S = {\bf C}$). The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:

The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the “neighbourhood” of the characteristic $p$ point in the above diagram, in which the domain ring $R$ and the coefficient ring $S$ are both thought of as being “close to characteristic $p$” in some sense, so that the dilates $pR, pS$ of these rings are either zero, or “small”. For instance, the $p$-adic ring ${\bf Z}_p$ is technically of characteristic $0$, but $p{\bf Z}_p$ is a “small” ideal of ${\bf Z}_p$ (it consists of those elements of ${\bf Z}_p$ of $p$-adic norm at most $1/p$), so one can think of ${\bf Z}_p$ as being “close to characteristic $p$” in some sense. Scholze drew a “zoomed in” version of the previous diagram to informally describe the types of rings for which prismatic cohomology is effective:

To define prismatic cohomology of rings one needs a “prism”: a ring homomorphism from a ring $A$ to a ring $\overline{A}$, together with a “Frobenius-like” endomorphism $\phi$ on $A$ obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology “prismatic”, and are depicted in this further diagram of Scholze:

(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)

There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a “$q$-deformation” of de Rham cohomology. Whereas in de Rham cohomology one worked with derivative operators $\frac{d}{dx}$ that for instance applied to monomials $x^n$ by the usual formula

$$ \frac{d}{dx} x^n = n x^{n-1}, $$

prismatic cohomology in coordinates can be computed using a “$q$-derivative” operator $\nabla_q$ that for instance applies to monomials by the formula

$$ \nabla_q x^n = [n]_q x^{n-1}, $$

where

$$ [n]_q := 1 + q + q^2 + \dots + q^{n-1} = \frac{q^n - 1}{q - 1} $$

is the “$q$-analogue” of $n$ (a polynomial in $q$ that equals $n$ in the limit $q=1$). (The $q$-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a non-trivial theorem.
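For concreteness, the operator written $\nabla_q$ above can be realised on polynomials by the classical Jackson $q$-derivative $(\nabla_q f)(x) = \frac{f(qx)-f(x)}{(q-1)x}$; here is a small sympy check (an illustration of the $q$-analogue only, not of the prismatic formalism) that it sends $x^n$ to $[n]_q x^{n-1}$:

```python
# Check that the Jackson q-derivative (f(qx) - f(x)) / ((q-1) x) sends x^n
# to [n]_q x^{n-1}, and degenerates to the usual derivative as q -> 1.
from sympy import symbols, simplify, limit

x, q = symbols('x q')
n = 5

def q_derivative(f):
    return (f.subs(x, q * x) - f) / ((q - 1) * x)

q_int = sum(q**j for j in range(n))          # [n]_q = 1 + q + ... + q^{n-1}
assert simplify(q_derivative(x**n) - q_int * x**(n - 1)) == 0
assert limit(q_derivative(x**n), q, 1) == n * x**(n - 1)
print("q-derivative of x^n is [n]_q x^(n-1); at q = 1 this is n x^(n-1)")
```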
