
About six years ago on this blog, I started thinking about trying to make a web-based game based around high-school algebra, and ended up using Scratch to write a short but playable puzzle game in which one solves linear equations for an unknown using a restricted set of moves. (At almost the same time, there were a number of more professionally made games released along similar lines, most notably Dragonbox.)

Since then, I have thought a couple of times about whether there were other parts of mathematics which could be gamified in a similar fashion. Shortly after my first blog posts on this topic, I experimented with a similar gamification of Lewis Carroll’s classic list of logic puzzles, but the results were quite clunky, and I was never satisfied with them.

Over the last few weeks I returned to this topic though, thinking in particular about how to gamify the rules of inference of propositional logic, in a manner that at least vaguely resembles how mathematicians actually go about making logical arguments (e.g., splitting into cases, arguing by contradiction, using previous results as lemmas to help with subsequent ones, and so forth). The rules of inference are a list of a dozen or so deductive rules concerning propositional sentences (things like “($A$ AND $B$) OR (NOT $C$)”, where $A, B, C$ are some formulas). A typical such rule is Modus Ponens: if the sentence $A$ is known to be true, and the implication “$A$ IMPLIES $B$” is also known to be true, then one can deduce that $B$ is also true. Furthermore, in this deductive calculus it is possible to temporarily introduce some unproven statements as an assumption, only to discharge them later. In particular, we have the deduction theorem: if, after making an assumption $A$, one is able to derive the statement $B$, then one can conclude that the implication “$A$ IMPLIES $B$” is true without any further assumption.
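As an illustration of how a rule like Modus Ponens can be applied mechanically, here is a small Python sketch; this is my own toy model (the sentence encoding and the name `modus_ponens` are invented for illustration), and not the actual QED code.

```python
# Toy model: sentences are strings or nested tuples such as
# ("IMPLIES", "A", "B"); the set `known` holds sentences known to be true.

def modus_ponens(known):
    """Return the new sentences deducible by one round of Modus Ponens."""
    new = set()
    for s in known:
        if isinstance(s, tuple) and s[0] == "IMPLIES":
            _, premise, conclusion = s
            if premise in known and conclusion not in known:
                new.add(conclusion)
    return new

known = {"A", ("IMPLIES", "A", "B"), ("IMPLIES", "B", "C")}
known |= modus_ponens(known)   # deduces "B" from "A" and "A IMPLIES B"
known |= modus_ponens(known)   # deduces "C" from "B" and "B IMPLIES C"
print(sorted(s for s in known if isinstance(s, str)))   # ['A', 'B', 'C']
```

A real interface would of course also need pattern-matching against rule schemas and a notion of assumption scopes, but the basic deduction loop is of this shape.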

It took a while for me to come up with a workable game-like graphical interface for all of this, but I finally managed to set one up, now using Javascript instead of Scratch (which would be hopelessly inadequate for this task); indeed, part of the motivation of this project was to finally learn how to program in Javascript, which turned out to be not as formidable as I had feared (certainly having experience with other C-like languages like C++, Java, or lua, as well as some prior knowledge of HTML, was very helpful). The main code for this project is available here. Using this code, I have created an interactive textbook in the style of a computer game, which I have titled “QED”. This text contains thirty-odd exercises arranged in twelve sections that function as game “levels”, in which one has to use a given set of rules of inference, together with a given set of hypotheses, to reach a desired conclusion. The set of available rules increases as one advances through the text; in particular, each new section gives one or more rules, and additionally each exercise one solves automatically becomes a new deduction rule one can exploit in later levels, much as lemmas and propositions are used in actual mathematics to prove more difficult theorems. The text automatically tries to match available deduction rules to the sentences one clicks on or drags, to try to minimise the amount of manual input one needs to actually make a deduction.

Most of one’s proof activity takes place in a “root environment” of statements that are known to be true (under the given hypothesis), but for more advanced exercises one has to also work in sub-environments in which additional assumptions are made. I found the graphical metaphor of nested boxes to be useful to depict this tree of sub-environments, and it seems to combine well with the drag-and-drop interface.

The text also logs one’s moves in a more traditional proof format, which shows how the mechanics of the game correspond to a traditional mathematical argument. My hope is that this will give students a way to understand the underlying concept of forming a proof in a manner that is more difficult to achieve using traditional, non-interactive textbooks.

I have tried to organise the exercises in a game-like progression, in which one first works with easy levels that train the player on a small number of moves, and then introduces more advanced moves one at a time. As such, the order in which the rules of inference are introduced is a little idiosyncratic. The most powerful rule (the law of the excluded middle, which is what separates classical logic from intuitionistic logic) is saved for the final section of the text.

Anyway, I am now satisfied enough with the state of the code and the interactive text that I am willing to make both available (and open source; I selected a CC-BY licence for both), and would be happy to receive feedback on any aspect of either. In principle one could extend the game mechanics to mathematical topics other than the propositional calculus – the rules of inference for first-order logic being an obvious next candidate – but it seems to make sense to focus just on propositional logic for now.

Let $X = (X, \mu, T)$ be a measure-preserving system – a probability space $(X,\mu)$ equipped with a measure-preserving translation $T$ (which for simplicity of discussion we shall assume to be invertible). We will informally think of two points $x, y$ in this space as being “close” if $y = T^n x$ for some $n$ that is not too large; this allows one to distinguish between “local” structure at a point $x$ (in which one only looks at nearby points $T^n x$ for moderately large $n$) and “global” structure (in which one looks at the entire space $X$). The local/global distinction is also known as the time-averaged/space-averaged distinction in ergodic theory.

A measure-preserving system is said to be ergodic if all the invariant sets are either zero measure or full measure. An equivalent form of this statement is that any measurable function $f: X \rightarrow {\bf R}$ which is *locally essentially constant* in the sense that $f(Tx) = f(x)$ for $\mu$-almost every $x$, is necessarily *globally essentially constant* in the sense that there is a constant $c$ such that $f(x) = c$ for $\mu$-almost every $x$. A basic consequence of ergodicity is the mean ergodic theorem: if $f \in L^2(X,\mu)$, then the averages $\frac{1}{N} \sum_{n=1}^N f(T^n x)$ converge in $L^2(X,\mu)$ norm to the mean $\int_X f\ d\mu$. (The mean ergodic theorem also applies to other $L^p$ spaces with $1 < p < \infty$, though it is usually proven first in the Hilbert space $L^2(X,\mu)$.) Informally: in ergodic systems, time averages are asymptotically equal to space averages. Specialising to the case of indicator functions, this implies in particular that $\frac{1}{N} \sum_{n=1}^N 1_E(T^n x)$ converges to $\mu(E)$ for any measurable set $E$.
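As a quick numerical illustration (my own, not from the post) of time averages matching space averages, one can iterate an irrational circle rotation, a standard example of an ergodic system:

```python
# For the circle rotation T(x) = x + alpha mod 1 with alpha irrational,
# the time average of f along an orbit approaches the space average of f.
# Here f is the indicator of [0, 1/2), whose space average is 1/2.
import math

alpha = math.sqrt(2) - 1       # an irrational rotation number
x, hits, N = 0.123, 0, 100_000
for _ in range(N):
    x = (x + alpha) % 1.0      # apply the measure-preserving translation T
    hits += (x < 0.5)          # evaluate f(T^n x) for f = 1_[0,1/2)

time_average = hits / N
print(time_average)            # close to the space average 0.5
```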

In this short note I would like to use the mean ergodic theorem to show that ergodic systems also have the property that “somewhat locally constant” functions are necessarily “somewhat globally constant”; this is not a deep observation, and probably already in the literature, but I found it a cute statement that I had not previously seen. More precisely:

Corollary 1. Let be an ergodic measure-preserving system, and let be measurable. Suppose that for some . Then there exists a constant such that for in a set of measure at least .

Informally: if is locally constant on pairs at least of the time, then is globally constant at least of the time. Of course the claim fails if the ergodicity hypothesis is dropped, as one can simply take to be an invariant function that is not essentially constant, such as the indicator function of an invariant set of intermediate measure. This corollary can be viewed as a manifestation of the general principle that ergodic systems have the same “global” (or “space-averaged”) behaviour as “local” (or “time-averaged”) behaviour, in contrast to non-ergodic systems in which local properties do not automatically transfer over to their global counterparts.

*Proof:* By composing with (say) the arctangent function, we may assume without loss of generality that is bounded. Let , and partition as , where is the level set

For each , only finitely many of the are non-empty. By (1), one has

Using the ergodic theorem, we conclude that

On the other hand, . Thus there exists such that , thus

By the Bolzano-Weierstrass theorem, we may pass to a subsequence along which converges to a limit ; then we have

for infinitely many , and hence

The claim follows.

Let $G = (G,+)$, $H = (H,+)$ be additive groups (i.e., groups with an abelian addition group law). A map $f: G \rightarrow H$ is a homomorphism if one has

$\displaystyle f(x+y) = f(x) + f(y)$

for all $x, y \in G$. A map $f: G \rightarrow H$ is an *affine* homomorphism if one has

$\displaystyle f(x_1) - f(x_2) = f(x_3) - f(x_4)$

for all *additive quadruples* $(x_1,x_2,x_3,x_4)$ in $G$, by which we mean that $x_1, x_2, x_3, x_4 \in G$ and $x_1 - x_2 = x_3 - x_4$. The two notions are closely related; it is easy to verify that $f$ is an affine homomorphism if and only if $f$ is the sum of a homomorphism and a constant.
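One can verify this equivalence on small finite groups; the following brute-force sketch (my own toy example, with $G = {\bf Z}/12$ and $H = {\bf Z}/8$) checks the additive-quadruple condition directly:

```python
# Brute-force check over G = Z/12, H = Z/8: a map respects all additive
# quadruples (x1 - x2 = x3 - x4 forces f(x1) - f(x2) = f(x3) - f(x4))
# exactly when it is a homomorphism plus a constant.
m, k = 12, 8   # orders of G and H

def is_affine_hom(f):
    return all((f[x1] - f[x2]) % k == (f[x3] - f[x4]) % k
               for x1 in range(m) for x2 in range(m)
               for x3 in range(m) for x4 in range(m)
               if (x1 - x2) % m == (x3 - x4) % m)

affine = [(2 * x + 3) % k for x in range(m)]   # homomorphism x -> 2x, plus the constant 3
square = [(x * x) % k for x in range(m)]       # not of this form
print(is_affine_hom(affine), is_affine_hom(square))   # True False
```

(Note that $x \mapsto 2x$ is a genuine homomorphism from ${\bf Z}/12$ to ${\bf Z}/8$ since $2 \cdot 12$ is divisible by $8$.)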

Now suppose that $H$ also has a translation-invariant metric $d$. A map $f: G \rightarrow H$ is said to be a quasimorphism if one has

$\displaystyle f(x+y) = f(x) + f(y) + O(1)$

for all $x, y \in G$, where $O(1)$ denotes a quantity at a bounded distance from the origin. Similarly, $f$ is an *affine quasimorphism* if

$\displaystyle f(x_1) - f(x_2) = f(x_3) - f(x_4) + O(1)$

for all additive quadruples $(x_1,x_2,x_3,x_4)$ in $G$. Again, one can check that $f$ is an affine quasimorphism if and only if it is the sum of a quasimorphism and a constant (with the implied constant of the quasimorphism controlled by the implied constant of the affine quasimorphism). (Since every constant is itself a quasimorphism, it is in fact the case that affine quasimorphisms are quasimorphisms, but now the implied constant in the latter is not controlled by the implied constant of the former.)

“Trivial” examples of quasimorphisms include the sum of a homomorphism and a bounded function. Are there others? In some cases, the answer is no. For instance, suppose we have a quasimorphism $f: {\bf Z} \rightarrow {\bf R}$. Iterating (2), we see that $f(kn) = k f(n) + O(k)$ for any integer $n$ and natural number $k$, which we can rewrite as $\frac{f(kn)}{kn} = \frac{f(n)}{n} + O(\frac{1}{n})$ for non-zero $n$. Also, $f$ is Lipschitz. Sending $k \rightarrow \infty$, we can verify that $\frac{f(n)}{n}$ is a Cauchy sequence as $n \rightarrow \infty$ and thus tends to some limit $\alpha$; we have $\alpha = \frac{f(n)}{n} + O(\frac{1}{n})$ for $n \geq 1$, hence $f(n) = \alpha n + O(1)$ for positive $n$, and then one can use (2) one last time to obtain $f(n) = \alpha n + O(1)$ for all $n$. Thus $f$ is the sum of the homomorphism $n \mapsto \alpha n$ and a bounded sequence.
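This decomposition can be checked numerically; in this sketch (my own example) I take the quasimorphism $f(n) = \lfloor \alpha n \rfloor$ with $\alpha = \pi$, whose quasimorphism defect lies in $\{0,1\}$:

```python
# f(n) = floor(alpha*n) obeys f(m+n) = f(m) + f(n) + O(1); the slope alpha
# is recovered as the limit of f(n)/n, and f(n) - alpha*n stays bounded,
# exhibiting f as a homomorphism n -> alpha*n plus a bounded function.
import math

alpha = math.pi
f = lambda n: math.floor(alpha * n)

# the defect f(m+n) - f(m) - f(n) is bounded (here it lies in {0, 1}):
defect = {f(m + n) - f(m) - f(n) for m in range(-50, 50) for n in range(-50, 50)}

slope = f(10**6) / 10**6            # approximates alpha
deviation = max(abs(f(n) - alpha * n) for n in range(-1000, 1000))
print(sorted(defect), slope, deviation)
```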

In general, one can phrase this problem in the language of group cohomology (discussed in this previous post). Call a map a *-cocycle*. A *-cocycle* is a map obeying the identity

for all . Given a -cocycle , one can form its *derivative* by the formula

Such functions are called *-coboundaries*. It is easy to see that the abelian group of -coboundaries is a subgroup of the abelian group of -cocycles. The quotient of these two groups is the first group cohomology of with coefficients in , and is denoted .

If a -cocycle is bounded then its derivative is a bounded -coboundary. The quotient of the group of bounded -cocycles by the derivatives of bounded -cocycles is called the *bounded first group cohomology* of with coefficients in , and is denoted . There is an obvious homomorphism from to , formed by taking a coset of the space of derivatives of bounded -cocycles, and enlarging it to a coset of the space of -coboundaries. By chasing all the definitions, we see that all quasimorphisms from to are the sum of a homomorphism and a bounded function if and only if this homomorphism is injective; in fact the quotient of the space of quasimorphisms by the sum of homomorphisms and bounded functions is isomorphic to the kernel of .

In additive combinatorics, one is often working with functions which only have additive structure a fraction of the time, thus for instance (1) or (3) might only hold “ of the time”. This makes it somewhat difficult to directly interpret the situation in terms of group cohomology. However, thanks to tools such as the Balog-Szemerédi-Gowers lemma, one can upgrade this sort of -structure to -structure – at the cost of restricting the domain to a smaller set. Here I record one such instance of this phenomenon, thus giving a tentative link between additive combinatorics and group cohomology. (I thank Yuval Wigderson for suggesting the problem of locating such a link.)

Theorem 1. Let , be additive groups with , let be a subset of , let , and let be a function such that for additive quadruples in . Then there exists a subset of containing with , a subset of with , and a function such that

for all (thus, the derivative takes values in on ), and such that for each , one has

Presumably the constants and can be improved further, but we have not attempted to optimise these constants. We chose as the domain on which one has a bounded derivative, as one can use the Bogolyubov lemma (see e.g., Proposition 4.39 of my book with Van Vu) to find a large Bohr set inside . In applications, the set need not have bounded size, or even bounded doubling; for instance, in the inverse theory over small finite fields , one would be interested in the situation where is the group of matrices with coefficients in (for some large ), and is the subset consisting of those matrices of rank bounded by some bound .

*Proof:* By hypothesis, there are triples such that and

Thus, there is a set with such that for all , one has (6) for pairs with ; in particular, there exists such that (6) holds for values of . Setting , we conclude that for each , one has

Consider the bipartite graph whose vertex sets are two copies of , and and connected by a (directed) edge if and (7) holds. Then this graph has edges. Applying (a slight modification of) the Balog-Szemerédi-Gowers theorem (for instance by modifying the proof of Corollary 5.19 of my book with Van Vu), we can then find a subset of with with the property that for any , there exist triples such that the edges all lie in this bipartite graph. This implies that, for all , there exist septuples obeying the constraints

and for . These constraints imply in particular that

Also observe that

Thus, if and are such that , we see that

for octuples in the hyperplane

By the pigeonhole principle, this implies that for any fixed , there can be at most sets of the form with , that are pairwise disjoint. Using a greedy algorithm, we conclude that there is a set of cardinality , such that each set with , intersects for some , or in other words that

This implies that there exists a subset of with , and an element for each , such that

for all . Note we may assume without loss of generality that and .

By construction of , and permuting labels, we can find 16-tuples such that

and

for . We sum this to obtain

and hence by (8)

where . Since

we see that there are only possible values of . By the pigeonhole principle, we conclude that at most of the sets can be disjoint. Arguing as before, we conclude that there exists a set of cardinality such that

whenever (10) holds.

For any , write arbitrarily as for some (with if , and if ) and then set

Then from (11) we have (4). For we have , and (5) then follows from (9).

This is a sequel to this previous blog post, in which we discussed the effect of the heat flow evolution

on the zeroes of a time-dependent family of polynomials , with a particular focus on the case when the polynomials had real zeroes. Here (inspired by some discussions I had during a recent conference on the Riemann hypothesis in Bristol) we record the analogous theory in which the polynomials instead have zeroes on a circle , with the heat flow slightly adjusted to compensate for this. As we shall discuss shortly, a key example of this situation arises when is the numerator of the zeta function of a curve.

More precisely, let be a natural number. We will say that a polynomial

of degree (so that ) obeys the *functional equation* if the are all real and

for all , thus

and

for all non-zero . This means that the zeroes of (counting multiplicity) lie in and are symmetric with respect to complex conjugation and inversion across the circle . We say that this polynomial *obeys the Riemann hypothesis* if all of its zeroes actually lie on the circle . For instance, in the case, the polynomial obeys the Riemann hypothesis if and only if .

Such polynomials arise in number theory as follows: if is a projective curve of genus over a finite field , then, as famously proven by Weil, the associated local zeta function (as defined for instance in this previous blog post) is known to take the form

where is a degree polynomial obeying both the functional equation and the Riemann hypothesis. In the case that is an elliptic curve, then and takes the form , where is the number of -points of minus . The Riemann hypothesis in this case is a famous result of Hasse.

Another key example of such polynomials arise from rescaled characteristic polynomials

of matrices in the compact symplectic group . These polynomials obey both the functional equation and the Riemann hypothesis. The Sato-Tate conjecture (in higher genus) asserts, roughly speaking, that “typical” polynomials arising from the number theoretic situation above are distributed like the rescaled characteristic polynomials (1), where is drawn uniformly from with Haar measure.

Given a polynomial of degree with coefficients

we can evolve it in time by the formula

thus for . Informally, as one increases , this evolution accentuates the effect of the extreme monomials, particularly, and at the expense of the intermediate monomials such as , and conversely as one decreases . This family of polynomials obeys the heat-type equation

In view of the results of Marcus, Spielman, and Srivastava, it is also very likely that one can interpret this flow in terms of expected characteristic polynomials involving conjugation over the compact symplectic group , and should also be tied to some sort of “” version of Brownian motion on this group, but we have not attempted to work this connection out in detail.
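To get a feel for the flow just described, here is a small numerical sketch. I am assuming the specific normalisation $a_j(t) = e^{t(j - n/2)^2} a_j(0)$ for the coefficient flow (the exact normalisation in the displayed formula may differ), but the qualitative conclusion – the extreme monomials are amplified, so that the zeroes approach $n$ equally spaced points as $t \rightarrow +\infty$ – is insensitive to this choice:

```python
# Evolve the coefficients by a_j(t) = exp(t*(j - n/2)^2) * a_j(0) (assumed
# normalisation); for large t the polynomial is dominated by its extreme
# monomials a_0 + a_n z^n, whose zeroes are n equally spaced points.
import numpy as np

rng = np.random.default_rng(0)
n = 8
a = rng.standard_normal(n + 1)           # real coefficients a_0, ..., a_n

def zeros_at_time(t):
    at = [np.exp(t * (j - n / 2) ** 2) * a[j] for j in range(n + 1)]
    return np.roots(at[::-1])            # np.roots wants the highest degree first

angles = np.sort(np.angle(zeros_at_time(8.0)))
gaps = np.diff(angles)
print(gaps)                              # the 7 consecutive gaps approach 2*pi/n = pi/4
```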

It is clear that if obeys the functional equation, then so does for any other time . Now we investigate the evolution of the zeroes. Suppose that at some time the zeroes of are distinct; then

From the inverse function theorem we see that for times sufficiently close to , the zeroes of continue to be distinct (and vary smoothly in ), with

Differentiating this at any not equal to any of the , we obtain

and

and

Inserting these formulae into (2) (expanding as ) and canceling some terms, we conclude that

for sufficiently close to , and not equal to . Extracting the residue at , we conclude that

which we can rearrange as

If we make the change of variables (noting that one can make depend smoothly on for sufficiently close to ), this becomes

Intuitively, this equation asserts that the phases repel each other if they are real (and attract each other if their difference is imaginary). If obeys the Riemann hypothesis, then the are all real at time , and the Picard uniqueness theorem (applied to and its complex conjugate) then shows that the are also real for sufficiently close to . If we then define the entropy functional

then the above equation becomes a gradient flow

which implies in particular that is non-increasing in time. This shows that as one evolves time forward from , there is a uniform lower bound on the separation between the phases , and hence the equation can be solved indefinitely; in particular, obeys the Riemann hypothesis for all if it does so at time . Our argument here assumed that the zeroes of were simple, but this assumption can be removed by the usual limiting argument.

For any polynomial obeying the functional equation, the rescaled polynomials converge locally uniformly to as . By Rouché’s theorem, we conclude that the zeroes of converge to the equally spaced points on the circle . Together with the symmetry properties of the zeroes, this implies in particular that obeys the Riemann hypothesis for all sufficiently large positive . In the opposite direction, when , the polynomials converge locally uniformly to , so if , of the zeroes converge to the origin and the others converge to infinity. In particular, fails the Riemann hypothesis for sufficiently large negative . Thus (if ), there must exist a real number , which we call the *de Bruijn-Newman constant* of the original polynomial , such that obeys the Riemann hypothesis for and fails the Riemann hypothesis for . The situation is a bit more complicated if vanishes; if is the first natural number such that (or equivalently, ) does not vanish, then by the above arguments one finds in the limit that of the zeroes go to the origin, go to infinity, and the remaining zeroes converge to the equally spaced points . In this case the de Bruijn-Newman constant remains finite except in the degenerate case , in which case .

For instance, consider the case when and for some real with . Then the quadratic polynomial

has zeroes

and one easily checks that these zeroes lie on the circle when , and are on the real axis otherwise. Thus in this case we have (with if ). Note how as increases to , the zeroes repel each other and eventually converge to , while as decreases to , the zeroes collide and then separate on the real axis, with one zero going to the origin and the other to infinity.

The arguments in my paper with Brad Rodgers (discussed in this previous post) indicate that for a “typical” polynomial of degree that obeys the Riemann hypothesis, the expected time to relaxation to equilibrium (in which the zeroes are equally spaced) should be comparable to , basically because the average spacing is and hence by (3) the typical velocity of the zeroes should be comparable to , and the diameter of the unit circle is comparable to , thus requiring time comparable to to reach equilibrium. Taking contrapositives, this suggests that the de Bruijn-Newman constant should typically take on values comparable to (since typically one would not expect the initial configuration of zeroes to be close to evenly spaced). I have not attempted to formalise or prove this claim, but presumably one could do some numerics (perhaps using some of the examples of given previously) to explore this further.

In this post we assume the Riemann hypothesis and the simplicity of zeroes, thus the zeroes of in the critical strip take the form for some real ordinates . From the Riemann-von Mangoldt formula, one has the asymptotic

as ; in particular, the spacing should behave like on the average. However, it can happen that some gaps are unusually small compared to other nearby gaps. For the sake of concreteness, let us define a Lehmer pair to be a pair of adjacent ordinates such that

The specific value of the constant is not particularly important here; anything larger than would suffice. An example of such a pair would be the classical pair

discovered by Lehmer. It follows easily from the main results of Csordas, Smith, and Varga that if an infinite number of Lehmer pairs (in the above sense) existed, then the de Bruijn-Newman constant is non-negative. This implication is now redundant in view of the unconditional results of this recent paper of Rodgers and myself; however, the question of whether an infinite number of Lehmer pairs exist remains open.

In this post, I sketch an argument that Brad and I came up with (as initially suggested by Odlyzko) that the GUE hypothesis implies the existence of infinitely many Lehmer pairs. We argue probabilistically: pick a sufficiently large number , pick at random from to (so that the average gap size is close to ), and prove that the Lehmer pair condition (1) occurs with positive probability.

Introduce the renormalised ordinates for , and let be a small absolute constant (independent of ). It will then suffice to show that

(say) with probability , since the contribution of those outside of can be absorbed by the factor with probability .

As one consequence of the GUE hypothesis, we have with probability . Thus, if , then has density . Applying the Hardy-Littlewood maximal inequality, we see that with probability , we have

which implies in particular that

for all . This implies in particular that

and so it will suffice to show that

(say) with probability .

By the GUE hypothesis (and the fact that is independent of ), it suffices to show that a Dyson sine process , normalised so that is the first positive point in the process, obeys the inequality

with probability . However, if we let be a moderately large constant (and assume small depending on ), one can show using -point correlation functions for the Dyson sine process (and the fact that the Dyson kernel equals to second order at the origin) that

for any natural number , where denotes the number of elements of the process in . For instance, the expression can be written in terms of the three-point correlation function as

which can easily be estimated to be (since in this region), and similarly for the other estimates claimed above.

Since for natural numbers , the quantity is only positive when , we see from the first three estimates that the event that occurs with probability . In particular, by Markov’s inequality we have the conditional probabilities

and thus, if is large enough, and small enough, it will be true with probability that

and

and simultaneously that

for all natural numbers . This implies in particular that

and

for all , which gives (2) for small enough.

Remark 1. The above argument needed the GUE hypothesis for correlations up to fourth order (in order to establish (3)). It might be possible to reduce the number of correlations needed, but I do not see how to obtain the claim using pair correlations only.

The Boussinesq equations for inviscid, incompressible two-dimensional fluid flow in the presence of gravity are given by

where is the velocity field, is the pressure field, and is the density field (or, in some physical interpretations, the temperature field). In this post we shall restrict ourselves to formal manipulations, assuming implicitly that all fields are regular enough (or sufficiently decaying at spatial infinity) that the manipulations are justified. Using the material derivative , one can abbreviate these equations as

One can eliminate the role of the pressure by working with the *vorticity* . A standard calculation then leads us to the equivalent “vorticity-stream” formulation

of the Boussinesq equations. The latter two equations can be used to recover the velocity field from the vorticity by the Biot-Savart law

It has long been observed (see e.g. Section 5.4.1 of Bertozzi-Majda) that the Boussinesq equations are very similar, though not quite identical, to the three-dimensional inviscid incompressible Euler equations under the hypothesis of axial symmetry (with swirl). The Euler equations are

where now the velocity field and pressure field are over the three-dimensional domain . If one expresses in polar coordinates then one can write the velocity vector field in these coordinates as

If we make the axial symmetry assumption that these components, as well as , do not depend on the variable, thus

then after some calculation (which we give below the fold) one can eventually reduce the Euler equations to the system

where is the modified material derivative, and is the field . This is almost identical with the Boussinesq equations except for some additional powers of ; thus, the intuition is that the Boussinesq equations are a simplified model for axially symmetric Euler flows when one stays away from the axis and also does not wander off to .

However, this heuristic is not rigorous; the above calculations do not actually give an embedding of the Boussinesq equations into Euler. (The equations do match on the cylinder , but this is a measure zero subset of the domain, and so is not enough to give an embedding on any non-trivial region of space.) Recently, while playing around with trying to embed other equations into the Euler equations, I discovered that it is possible to make such an embedding into a *four*-dimensional Euler equation, albeit on a slightly curved manifold rather than in Euclidean space. More precisely, we use the Ebin-Marsden generalisation

of the Euler equations to an arbitrary Riemannian manifold (ignoring any issues of boundary conditions for this discussion), where is a time-dependent vector field, is a time-dependent scalar field, and is the covariant derivative along using the Levi-Civita connection . In Penrose abstract index notation (using the Levi-Civita connection , and raising and lowering indices using the metric ), the equations of motion become

in coordinates, this becomes

where the Christoffel symbols $\Gamma^{\alpha}_{\beta \gamma}$ are given by the formula

$\displaystyle \Gamma^{\alpha}_{\beta \gamma} = \frac{1}{2} g^{\alpha \delta} \left( \partial_{\beta} g_{\delta \gamma} + \partial_{\gamma} g_{\delta \beta} - \partial_{\delta} g_{\beta \gamma} \right),$

where $g^{\alpha \delta}$ is the inverse to the metric tensor $g_{\alpha \delta}$. If the coordinates are chosen so that the volume form is the Euclidean volume form , thus , then on differentiating we have , and hence , and so the divergence-free equation (10) simplifies in this case to . The Ebin-Marsden Euler equations are the natural generalisation of the Euler equations to arbitrary manifolds; for instance, they (formally) conserve the kinetic energy

and can be viewed as the formal geodesic flow equation on the infinite-dimensional manifold of volume-preserving diffeomorphisms on (see this previous post for a discussion of this in the flat space case).
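As a concrete sanity check on the Christoffel formula $\Gamma^{\alpha}_{\beta\gamma} = \frac{1}{2} g^{\alpha\delta}(\partial_\beta g_{\delta\gamma} + \partial_\gamma g_{\delta\beta} - \partial_\delta g_{\beta\gamma})$, here is a finite-difference computation (my own example, not from the post) for the round 2-sphere metric $\mathrm{diag}(1, \sin^2\theta)$:

```python
# Compute Christoffel symbols numerically for the 2-sphere metric
# diag(1, sin(theta)^2) at a sample point; the exact values are
# Gamma^theta_{phi phi} = -sin*cos and Gamma^phi_{theta phi} = cot(theta).
import math

def metric(x):                      # x = (theta, phi)
    return [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]

def inv2(g):                        # inverse of a 2x2 matrix
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / det, -g[0][1] / det], [-g[1][0] / det, g[0][0] / det]]

def dg(x, k, h=1e-6):               # central-difference derivative of the metric
    xp, xm = list(x), list(x)
    xp[k] += h; xm[k] -= h
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(x, a, b, c):
    ginv, d = inv2(metric(x)), [dg(x, 0), dg(x, 1)]
    return 0.5 * sum(ginv[a][e] * (d[b][e][c] + d[c][e][b] - d[e][b][c])
                     for e in range(2))

x = (0.7, 0.3)
print(christoffel(x, 0, 1, 1))      # -sin(0.7)*cos(0.7), about -0.4927
print(christoffel(x, 1, 0, 1))      # cot(0.7), about 1.187
```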

The specific four-dimensional manifold in question is the space with metric

and solutions to the Boussinesq equation on can be transformed into solutions to the Euler equations on this manifold. This is part of a more general family of embeddings into the Euler equations in which passive scalar fields (such as the field appearing in the Boussinesq equations) can be incorporated into the dynamics via fluctuations in the Riemannian metric. I am writing the details below the fold (partly for my own benefit).

Basic objects of study in multiplicative number theory are the arithmetic functions: functions from the natural numbers to the complex numbers. Some fundamental examples of such functions include

- The constant function $1: n \mapsto 1$;
- The Kronecker delta function $\delta: n \mapsto 1_{n=1}$;
- The natural logarithm function $L: n \mapsto \log n$;
- The divisor function $d: n \mapsto \sum_{d|n} 1$;
- The von Mangoldt function $\Lambda$, with $\Lambda(n)$ defined to equal $\log p$ when $n$ is a power $p^j$ of a prime $p$ for some $j \geq 1$, and defined to equal zero otherwise; and
- The Möbius function $\mu$, with $\mu(n)$ defined to equal $(-1)^j$ when $n$ is the product of $j$ distinct primes, and defined to equal zero otherwise.

Given an arithmetic function , we are often interested in statistics such as the summatory function

the logarithmically (or harmonically) weighted summatory function

or the Dirichlet series

In the latter case, one typically has to first restrict to those complex numbers whose real part is large enough in order to ensure the series on the right converges; but in many important cases, one can then extend the Dirichlet series to almost all of the complex plane by analytic continuation. One is also interested in correlations involving additive shifts, such as , but these are significantly more difficult to study and cannot be easily estimated by the methods of classical multiplicative number theory.

A key operation on arithmetic functions is that of Dirichlet convolution, which when given two arithmetic functions , forms a new arithmetic function , defined by the formula

Thus for instance , , , and for any arithmetic function . Dirichlet convolution and Dirichlet series are related by the fundamental formula

at least when the real part of is large enough that all sums involved become absolutely convergent (but in practice one can use analytic continuation to extend this identity to most of the complex plane). There is also the identity

at least when the real part of is large enough to justify interchange of differentiation and summation. As a consequence, many Dirichlet series can be expressed in terms of the Riemann zeta function , thus for instance
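The discrete convolution is easy to experiment with directly; the following sketch of mine verifies two classical identities, $1 * 1 = d$ (the divisor function) and $\mu * 1 = \delta$ (a form of Möbius inversion), for small $n$:

```python
# Dirichlet convolution: (f*g)(n) = sum over d | n of f(d) g(n/d).

def dirichlet(f, g, n):
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

one = lambda n: 1
delta = lambda n: 1 if n == 1 else 0

def mobius(n):
    k, m, p = 0, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # n has a square factor
            k += 1
        else:
            p += 1
    if m > 1:
        k += 1
    return (-1) ** k

d = lambda n: dirichlet(one, one, n)      # number of divisors of n
divisors = [d(n) for n in range(1, 13)]
print(divisors)                           # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6]
assert all(dirichlet(mobius, one, n) == delta(n) for n in range(1, 200))
```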

Much of the difficulty of multiplicative number theory can be traced back to the discrete nature of the natural numbers ${\bf N} = \{1, 2, 3, \dots\}$, which form a rather complicated abelian semigroup with respect to multiplication (in particular, the set of generators is the set of prime numbers). One can obtain a simpler analogue of the subject by working instead with the half-infinite interval $[1, +\infty)$, which is a much simpler abelian semigroup under multiplication (being a one-dimensional Lie semigroup). (I will think of this as a sort of “completion” of ${\bf N}$ at the infinite place $\infty$, hence the terminology.) Accordingly, let us define a *continuous arithmetic function* to be a locally integrable function $f: [1,+\infty) \rightarrow {\bf C}$. The analogue of the summatory function (1) is then an integral

$\displaystyle \int_1^x f(t)\ dt,$

and similarly the analogue of (2) is

$\displaystyle \int_1^x \frac{f(t)}{t}\ dt.$

The analogue of the Dirichlet series is the Mellin-type transform

$\displaystyle {\mathcal D}f(s) := \int_1^\infty \frac{f(t)}{t^s}\ dt,$

which will be well-defined at least if the real part of $s$ is large enough and if the continuous arithmetic function $f$ does not grow too quickly, and hopefully will also be defined elsewhere in the complex plane by analytic continuation.

For instance, the continuous analogue of the discrete constant function would be the constant function , which maps any to , and which we will denote by in order to keep it distinct from . The two functions and have approximately similar statistics; for instance one has

and

where $H_n := \sum_{m \leq n} \frac{1}{m}$ is the harmonic number, and we are deliberately vague as to what the symbol $\approx$ means. Continuing this analogy, we would expect

$\displaystyle \zeta(s) \approx {\mathcal D}1(s) = \frac{1}{s-1},$

which reflects the fact that $\zeta$ has a simple pole at $s=1$ with residue $1$, and no other poles. Note that the identity ${\mathcal D}1(s) = \frac{1}{s-1}$ is initially only valid in the region $\hbox{Re}(s) > 1$, but clearly the right-hand side can be continued analytically to the entire complex plane except for the pole at $s=1$, and so one can define ${\mathcal D}1$ in this region also.

In a similar vein, the logarithm function $L: n \mapsto \log n$ is approximately similar to the continuous logarithm function $L: t \mapsto \log t$, giving for instance the crude form

$\displaystyle \sum_{n \leq x} \log n \approx \int_1^x \log t\ dt = x \log x - x + 1$

of Stirling’s formula, or the Dirichlet series approximation

$\displaystyle -\zeta'(s) \approx \frac{1}{(s-1)^2}.$
The continuous analogue of Dirichlet convolution is multiplicative convolution using the multiplicative Haar measure $\frac{dt}{t}$: given two continuous arithmetic functions $f, g$, one can define their convolution $f*g$ by the formula

$\displaystyle f*g(t) := \int_1^t f(u) g\left(\frac{t}{u}\right)\ \frac{du}{u}.$

Thus for instance $1*1 = L$. A short computation using Fubini’s theorem shows the analogue

$\displaystyle {\mathcal D}(f*g)(s) = {\mathcal D}f(s)\, {\mathcal D}g(s) \ \ \ \ \ (5)$

of (3) whenever the real part of $s$ is large enough that Fubini’s theorem can be justified; similarly, differentiation under the integral sign shows that

$\displaystyle {\mathcal D}(Lf)(s) = -\frac{d}{ds} {\mathcal D}f(s), \ \ \ \ \ (6)$

again assuming that the real part of $s$ is large enough that differentiation under the integral sign (or some other tool like this, such as the Cauchy integral formula for derivatives) can be justified.

Direct calculation shows that for any complex number $a$, one has

$\displaystyle {\mathcal D}(t^a)(s) = \frac{1}{s-a-1}$

(at least for the real part of $s$ large enough), and hence by several applications of (5)

$\displaystyle {\mathcal D}\left( t^a \frac{(\log t)^{m-1}}{(m-1)!} \right)(s) = \frac{1}{(s-a-1)^m}$

for any natural number $m$. This can lead to the following heuristic: if a Dirichlet series ${\mathcal D}f(s)$ behaves like a linear combination of poles, in that

$\displaystyle {\mathcal D}f(s) \approx \sum_\rho \frac{c_\rho}{(s-\rho)^{m_\rho}}$

for some set of poles $\rho$ and some coefficients $c_\rho$ and natural numbers $m_\rho$ (where we again are vague as to what $\approx$ means, and how to interpret the sum if the set of poles is infinite), then one should expect the arithmetic function $f$ to behave like the continuous arithmetic function

$\displaystyle t \mapsto \sum_\rho c_\rho\, t^{\rho-1} \frac{(\log t)^{m_\rho - 1}}{(m_\rho - 1)!}.$

In particular, if we only have simple poles,

$\displaystyle {\mathcal D}f(s) \approx \sum_\rho \frac{c_\rho}{s-\rho},$

then we expect $f$ to behave like the continuous arithmetic function

$\displaystyle t \mapsto \sum_\rho c_\rho\, t^{\rho-1}.$
Integrating this from $1$ to $x$, this heuristically suggests an approximation

$\displaystyle \sum_{n \leq x} f(n) \approx \sum_\rho c_\rho \frac{x^\rho}{\rho}$

for the summatory function, and similarly

$\displaystyle \sum_{n \leq x} \frac{f(n)}{n} \approx \sum_\rho c_\rho \frac{x^{\rho-1}}{\rho-1},$

with the convention that $\frac{x^\rho}{\rho}$ is $\log x$ when $\rho = 0$, and similarly $\frac{x^{\rho-1}}{\rho-1}$ is $\log x$ when $\rho = 1$. One can make these sorts of approximations more rigorous by means of Perron’s formula (or one of its variants) combined with the residue theorem, provided that one has good enough control on the relevant Dirichlet series, but we will not pursue these rigorous calculations here. (But see for instance this previous blog post for some examples.)

For instance, using the more refined approximation

$\displaystyle \zeta(s) \approx \frac{1}{s-1} + \gamma$

to the zeta function near $s=1$, we have

$\displaystyle {\mathcal D}\tau(s) = \zeta(s)^2 \approx \frac{1}{(s-1)^2} + \frac{2\gamma}{s-1},$

and so we would expect that

$\displaystyle \tau(n) \approx \log n + 2\gamma,$

and thus for instance

$\displaystyle \sum_{n \leq x} \tau(n) \approx x \log x + (2\gamma - 1) x,$

which matches what one actually gets from the Dirichlet hyperbola method (see e.g. equation (44) of this previous post).

Or, noting that $\zeta(s)$ has a simple pole at $s=1$ and assuming simple zeroes elsewhere, the log derivative $-\zeta'(s)/\zeta(s)$ will have a simple pole of residue $+1$ at $s=1$ and simple poles of residue $-1$ at all the zeroes $\rho$, leading to the heuristic

$\displaystyle {\mathcal D}\Lambda(s) = -\frac{\zeta'(s)}{\zeta(s)} \approx \frac{1}{s-1} - \sum_\rho \frac{1}{s-\rho},$

suggesting that $\Lambda$ should behave like the continuous arithmetic function

$\displaystyle t \mapsto 1 - \sum_\rho t^{\rho-1},$

leading for instance to the summatory approximation

$\displaystyle \sum_{n \leq x} \Lambda(n) \approx x - \sum_\rho \frac{x^\rho}{\rho},$

which is a heuristic form of the Riemann-von Mangoldt explicit formula (see Exercise 45 of these notes for a rigorous version of this formula).
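As a quick numerical illustration (my own, not from the post), the dominant term $x$ in this approximation — the contribution of the pole of $\zeta$ at $s=1$ — is already visible for modest $x$:

```python
import math

def von_mangoldt(n):
    """Lambda(n): log p if n = p^k for a prime p, else 0."""
    p = 2
    while p * p <= n:
        if n % p == 0:            # p is the least prime factor of n
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)            # n itself is prime (or n = 1, giving 0)

x = 30000
psi = sum(von_mangoldt(n) for n in range(2, x + 1))
print(psi / x)  # close to 1, with fluctuations controlled by the zeroes
```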

Exercise 1  Go through some of the other explicit formulae listed at this Wikipedia page and give heuristic justifications for them (up to some lower order terms) by similar calculations to those given above.

Given the “adelic” perspective on number theory, I wonder if there are also $p$-adic analogues of arithmetic functions to which a similar set of heuristics can be applied, perhaps to study sums such as . A key problem here is that there does not seem to be any good interpretation of the expression $n^s$ when $s$ is complex and $n$ is a $p$-adic number, so it is not clear that one can analyse a Dirichlet series $p$-adically. For similar reasons, we don’t have a canonical way to define $\chi(n)$ at $p$-adic $n$ for a Dirichlet character $\chi$ (unless its conductor happens to be a power of $p$), so there doesn’t seem to be much to say in the $q$-aspect either.

Let $\lambda$ be the Liouville function, thus $\lambda(n)$ is defined to equal $+1$ when $n$ is the product of an even number of primes, and $-1$ when $n$ is the product of an odd number of primes. The Chowla conjecture asserts that $\lambda$ has the statistics of a random sign pattern, in the sense that

$\displaystyle \lim_{x \rightarrow \infty} {\bf E}_{n \leq x} \lambda(n+h_1) \cdots \lambda(n+h_k) = 0 \ \ \ \ \ (1)$

for all $k$ and all distinct natural numbers $h_1, \dots, h_k$, where we use the averaging notation

$\displaystyle {\bf E}_{n \leq x} f(n) := \frac{1}{x} \sum_{n \leq x} f(n).$

For $k=1$, this conjecture is equivalent to the prime number theorem (as discussed in this previous blog post), but the conjecture remains open for any $k \geq 2$.

In recent years, it has been realised that one can make more progress on this conjecture if one works instead with the logarithmically averaged version

$\displaystyle \lim_{x \rightarrow \infty} {\bf E}^{\log}_{n \leq x} \lambda(n+h_1) \cdots \lambda(n+h_k) = 0 \ \ \ \ \ (2)$

of the conjecture, where we use the logarithmic averaging notation

$\displaystyle {\bf E}^{\log}_{n \leq x} f(n) := \frac{1}{\log x} \sum_{n \leq x} \frac{f(n)}{n}.$
Using the summation by parts (or telescoping series) identity

$\displaystyle \sum_{n \leq x} \frac{f(n)}{n} = \frac{1}{x} \sum_{n \leq x} f(n) + \int_1^x \left( \sum_{n \leq t} f(n) \right) \frac{dt}{t^2},$

it is not difficult to show that the Chowla conjecture (1) for a given choice of $k$ and $h_1,\dots,h_k$ implies the logarithmically averaged conjecture (2). However, the converse implication is not at all clear. For instance, for $k=1$, we have already mentioned that the Chowla conjecture

$\displaystyle \lim_{x \rightarrow \infty} {\bf E}_{n \leq x} \lambda(n) = 0$

is equivalent to the prime number theorem; but the logarithmically averaged analogue

$\displaystyle \lim_{x \rightarrow \infty} {\bf E}^{\log}_{n \leq x} \lambda(n) = 0$

is significantly easier to show (a proof with the Liouville function replaced by the closely related Möbius function is given in this previous blog post). And indeed, significantly more is now known for the logarithmically averaged Chowla conjecture; in this paper of mine I had proven (2) for $k=2$, and in this recent paper with Joni Teravainen, we proved the conjecture for all odd $k$ (with a different proof also given here).
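As an empirical aside (my own illustration, not from the papers cited), the $k=2$ correlations are indeed small at moderate ranges, in both averaging schemes:

```python
import math

def liouville(n):
    """lambda(n) = (-1)^Omega(n), with Omega counting prime factors with multiplicity."""
    v, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            n //= p
            v = -v
        p += 1
    if n > 1:
        v = -v
    return v

x = 20000
lam = [0] * (x + 2)
for n in range(1, x + 2):
    lam[n] = liouville(n)

ordinary = sum(lam[n] * lam[n + 1] for n in range(1, x + 1)) / x
log_avg = sum(lam[n] * lam[n + 1] / n for n in range(1, x + 1)) / math.log(x)
print(ordinary, log_avg)  # both small compared to 1
```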

In view of this emerging consensus that the logarithmically averaged Chowla conjecture was easier than the ordinary Chowla conjecture, it was thus somewhat of a surprise for me to read a recent paper of Gomilko, Kwietniak, and Lemanczyk who (among other things) established the following statement:

Theorem 1  Assume that the logarithmically averaged Chowla conjecture (2) is true for all $k$. Then there exists a sequence $x_m$ going to infinity such that the Chowla conjecture (1) is true for all $k$ along that sequence, that is to say

$\displaystyle \lim_{m \rightarrow \infty} {\bf E}_{n \leq x_m} \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$

for all $k$ and all distinct $h_1, \dots, h_k$.

This implication does not use any special properties of the Liouville function (other than that it is bounded), and in fact proceeds by ergodic theoretic methods, focusing in particular on the ergodic decomposition of invariant measures of a shift into ergodic measures. Ergodic methods have proven remarkably fruitful in understanding these sorts of number theoretic and combinatorial problems, as could already be seen by the ergodic theoretic proof of Szemerédi’s theorem by Furstenberg, and more recently by the work of Frantzikinakis and Host on Sarnak’s conjecture. (My first paper with Teravainen also uses ergodic theory tools.) Indeed, many other results in the subject were first discovered using ergodic theory methods.

On the other hand, many results in this subject that were first proven ergodic theoretically have since been reproven by more combinatorial means; my second paper with Teravainen is an instance of this. As it turns out, one can also prove Theorem 1 by a standard combinatorial (or probabilistic) technique known as the second moment method. In fact, one can prove slightly more:

Theorem 2  Let $k$ be a natural number. Assume that the logarithmically averaged Chowla conjecture (2) is true for $k$. Then there exists a set $A$ of natural numbers of logarithmic density $1$ (that is, $\lim_{x \rightarrow \infty} {\bf E}^{\log}_{n \leq x} 1_A(n) = 1$) such that

$\displaystyle \lim_{x \rightarrow \infty; x \in A} {\bf E}_{n \leq x} \lambda(n+h_1) \cdots \lambda(n+h_k) = 0$

for any distinct $h_1, \dots, h_k$.

It is not difficult to deduce Theorem 1 from Theorem 2 using a diagonalisation argument. Unfortunately, the known cases of the logarithmically averaged Chowla conjecture ($k \leq 2$ and odd $k$) are currently insufficient to use Theorem 2 for any purpose other than to reprove what is already known to be true from the prime number theorem. (Indeed, the even cases of Chowla, in either logarithmically averaged or non-logarithmically averaged forms, seem to be far more powerful than the odd cases; see Remark 1.7 of this paper of myself and Teravainen for a related observation in this direction.)

We now sketch the proof of Theorem 2. For any distinct $h_1, \dots, h_k$, we take a large number $H$ and consider the limiting second moment

We can expand this as

If all the shifts involved are distinct, the hypothesis (2) tells us that the inner averages go to zero as $x \rightarrow \infty$. The remaining averages are $O(1)$, and there are $O(H)$ of these averages. We conclude that

By Markov’s inequality (and (3)), we conclude that for any fixed , there exists a set of upper logarithmic density at least , thus

such that

By deleting at most finitely many elements, we may assume that consists only of elements of size at least (say).

For any , if we let be the union of for , then has logarithmic density . By a diagonalisation argument (using the fact that the set of tuples is countable), we can then find a set of natural numbers of logarithmic density , such that for every , every sufficiently large element of lies in . Thus for every sufficiently large in , one has

for some with . By Cauchy-Schwarz, this implies that

interchanging the sums and using and , this implies that

We conclude on taking to infinity that

as required.

Let $P(z) = z^n + a_{n-1} z^{n-1} + \dots + a_1 z + a_0$ be a monic polynomial of degree $n$ with complex coefficients. Then by the fundamental theorem of algebra, we can factor $P$ as

$\displaystyle P(z) = (z - z_1) \cdots (z - z_n) \ \ \ \ \ (1)$

for some complex zeroes $z_1, \dots, z_n$ (possibly with repetition).

Now suppose we evolve $P$ with respect to time by heat flow, creating a function $P(t,z)$ of two variables with given initial data $P(0,z) = P(z)$ for which

$\displaystyle \partial_t P(t,z) = \partial_{zz} P(t,z). \ \ \ \ \ (2)$

On the space of polynomials of degree at most $n$, the operator $\partial_{zz}$ is nilpotent, and one can solve this equation explicitly both forwards and backwards in time by the Taylor series

$\displaystyle P(t,z) = \sum_{j=0}^\infty \frac{t^j}{j!} \partial_{zz}^j P(0,z).$

For instance, if one starts with a quadratic $P(0,z) = z^2 + bz + c$, then the polynomial evolves by the formula

$\displaystyle P(t,z) = z^2 + bz + (c + 2t).$
As the polynomial evolves in time, the zeroes evolve also. Assuming for sake of discussion that the zeroes are simple, the inverse function theorem tells us that the zeroes will (locally, at least) evolve smoothly in time. What are the dynamics of this evolution?

For instance, in the quadratic case, the quadratic formula tells us that the zeroes are

$\displaystyle z_1(t) = \frac{-b + \sqrt{b^2 - 4(c+2t)}}{2}$

and

$\displaystyle z_2(t) = \frac{-b - \sqrt{b^2 - 4(c+2t)}}{2},$

after arbitrarily choosing a branch of the square root. If $b, c$ are real and the discriminant $b^2 - 4c$ is initially positive, we see that we start with two real zeroes centred around $-b/2$, which then approach each other until time $t = \frac{b^2-4c}{8}$, at which point the roots collide and then move off from each other in an imaginary direction.

In the general case, we can obtain the equations of motion by implicitly differentiating the defining equation

$\displaystyle P(t, z_j(t)) = 0$

in time using (2) to obtain

$\displaystyle \partial_{zz} P(t, z_j(t)) + \dot{z}_j(t)\, \partial_z P(t, z_j(t)) = 0.$

To simplify notation we drop the explicit dependence on time, thus

$\displaystyle \partial_{zz} P(z_j) + \dot{z}_j\, \partial_z P(z_j) = 0.$

From (1) and the product rule, we see that

$\displaystyle \partial_z P(z_j) = \prod_{k \neq j} (z_j - z_k)$

and

$\displaystyle \partial_{zz} P(z_j) = 2 \sum_{k \neq j} \prod_{l \neq j,k} (z_j - z_l)$

(where all indices are understood to range over $1, \dots, n$) leading to the equations of motion

$\displaystyle \dot{z}_j = -2 \sum_{k \neq j} \frac{1}{z_j - z_k}, \ \ \ \ \ (3)$

at least when one avoids those times in which there is a repeated zero. In the case when the zeroes are real, each term $-\frac{2}{z_j - z_k}$ represents a (first-order) attraction in the dynamics between $z_j$ and $z_k$, but the dynamics are more complicated for complex zeroes (e.g. purely imaginary zeroes will experience repulsion rather than attraction, as one already sees in the quadratic example). Curiously, this system resembles that of Dyson Brownian motion (except with the Brownian motion part removed, and time reversed). I learned of the connection between the ODE (3) and the heat equation from this paper of Csordas, Smith, and Varga, but perhaps it has been mentioned in earlier literature as well.
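As a quick numerical sanity check (my own, not from the source), one can integrate these equations of motion for the quadratic $P(0,z) = z^2 - 1$ and compare against the exact heat-flow roots $\pm\sqrt{1-2t}$ of $P(t,z) = z^2 - 1 + 2t$:

```python
import math

# Roots of P(0,z) = z^2 - 1; under heat flow the exact roots are +-sqrt(1 - 2t).
z = [1.0, -1.0]
steps, T = 30000, 0.3
dt = T / steps
for _ in range(steps):
    # Euler step for dz_j/dt = -2 sum_{k != j} 1/(z_j - z_k)
    dz = [-2 * sum(1 / (z[j] - z[k]) for k in range(2) if k != j)
          for j in range(2)]
    z = [zj + dt * dzj for zj, dzj in zip(z, dz)]

exact = math.sqrt(1 - 2 * T)
print(z[0], exact)  # both approximately 0.632
```

The attraction is visible: the two roots have moved towards each other, and would collide at $t = 1/2$.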

One interesting consequence of these equations is that if the zeroes are real at some time, then they will stay real as long as the zeroes do not collide. Let us now restrict attention to the case of real simple zeroes, in which case we will rename the zeroes as $x_j$ instead of $z_j$, and order them as $x_1 > x_2 > \dots > x_n$. The evolution

$\displaystyle \dot{x}_j = -2 \sum_{k \neq j} \frac{1}{x_j - x_k}$

can now be thought of as reverse gradient flow for the “entropy”

$\displaystyle H := -\sum_{j \neq k} \log |x_j - x_k|$

(which is also essentially the logarithm of the discriminant of the polynomial) since we have

$\displaystyle \dot{x}_j = \frac{\partial H}{\partial x_j}.$

In particular, we have the monotonicity formula

$\displaystyle \partial_t H = \sum_j \dot{x}_j^2 = 4E,$

where $E$ is the “energy”

$\displaystyle E := \sum_{j \neq k} \frac{1}{(x_j - x_k)^2},$

where in the last line we use the antisymmetrisation identity

$\displaystyle \frac{1}{(x_i - x_j)(x_i - x_k)} + \frac{1}{(x_j - x_i)(x_j - x_k)} + \frac{1}{(x_k - x_i)(x_k - x_j)} = 0.$

Among other things, this shows that as one goes backwards in time, the entropy decreases, and so no collisions can occur to the past, only in the future, which is of course consistent with the attractive nature of the dynamics. As $H$ is a convex function of the positions $x_1, \dots, x_n$, one expects $H$ to also evolve in a convex manner in time, that is to say the energy $E$ should be increasing. This is indeed the case:

Exercise 1  Show that

Symmetric polynomials of the zeroes are polynomial functions of the coefficients and should thus evolve in a polynomial fashion. One can compute this explicitly in simple cases. For instance, the center of mass is an invariant:

$\displaystyle \partial_t \frac{1}{n} \sum_j x_j = 0.$

The variance decreases linearly:

$\displaystyle \partial_t \sum_j \left( x_j - \frac{1}{n} \sum_k x_k \right)^2 = -2n(n-1).$
Exercise 2  Establish the virial identity

As the variance (which is proportional to $\sum_{j,k} (x_j - x_k)^2$) cannot become negative, this identity shows that “finite time blowup” must occur – that the zeroes must collide at or before the time $\frac{1}{2n(n-1)} \sum_j \left( x_j(0) - \frac{1}{n} \sum_k x_k(0) \right)^2$.

Exercise 3  Show that the Stieltjes transform

$\displaystyle s(t,z) := \sum_{j=1}^n \frac{1}{z - z_j(t)}$

solves the viscous Burgers equation

$\displaystyle \partial_t s = \partial_{zz} s + 2 s \partial_z s,$

either by using the original heat equation (2) and the identity $s = \partial_z P / P$, or else by using the equations of motion (3). This relation between the Burgers equation and the heat equation is known as the Cole-Hopf transformation.
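For reference, the computation linking the heat equation to the Burgers equation is the standard Cole-Hopf calculation; a sketch, writing $s = \partial_z \log P$:

```latex
% If \partial_t P = \partial_{zz} P and s := \partial_z \log P = P_z / P, then
\begin{align*}
\partial_t \log P &= \frac{\partial_t P}{P} = \frac{\partial_{zz} P}{P}
  = \partial_{zz} \log P + (\partial_z \log P)^2 \\
\partial_t s &= \partial_z \partial_t \log P
  = \partial_z \big( \partial_z s + s^2 \big)
  = \partial_{zz} s + 2 s \partial_z s.
\end{align*}
```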

The paper of Csordas, Smith, and Varga mentioned previously gives some other bounds on the lifespan of the dynamics; roughly speaking, they show that if there is one pair of zeroes that are much closer to each other than to the other zeroes then they must collide in a short amount of time (unless there is a collision occurring even earlier at some other location). Their argument extends also to situations where there are an infinite number of zeroes, which they apply to get new results on Newman’s conjecture in analytic number theory. I would be curious to know of further places in the literature where this dynamics has been studied.

Suppose we have an $n \times n$ matrix $M$ that is expressed in block-matrix form as

$\displaystyle M = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$

where $A$ is an $(n-k) \times (n-k)$ matrix, $B$ is an $(n-k) \times k$ matrix, $C$ is a $k \times (n-k)$ matrix, and $D$ is a $k \times k$ matrix for some $1 \leq k < n$. If $A$ is invertible, we can use the technique of Schur complementation to express the inverse of $M$ (if it exists) in terms of the inverse of $A$, and the other components $B, C, D$ of course. Indeed, to solve the equation

$\displaystyle M \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix},$

where $x, f$ are $(n-k)$-dimensional column vectors and $y, g$ are $k$-dimensional column vectors, we can expand this out as a system

$\displaystyle A x + B y = f$

$\displaystyle C x + D y = g.$

Using the invertibility of $A$, we can write the first equation as

$\displaystyle x = A^{-1} f - A^{-1} B y \ \ \ \ \ (1)$

and substituting this into the second equation yields

$\displaystyle (D - C A^{-1} B) y = g - C A^{-1} f,$

and thus (assuming that the Schur complement $D - C A^{-1} B$ is invertible)

$\displaystyle y = -(D - C A^{-1} B)^{-1} C A^{-1} f + (D - C A^{-1} B)^{-1} g,$

and then inserting this back into (1) gives

$\displaystyle x = \left( A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} \right) f - A^{-1} B (D - C A^{-1} B)^{-1} g.$
Comparing this with

$\displaystyle \begin{pmatrix} x \\ y \end{pmatrix} = M^{-1} \begin{pmatrix} f \\ g \end{pmatrix},$

we have managed to express the inverse of $M$ as

$\displaystyle M^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B (D - C A^{-1} B)^{-1} C A^{-1} & -A^{-1} B (D - C A^{-1} B)^{-1} \\ -(D - C A^{-1} B)^{-1} C A^{-1} & (D - C A^{-1} B)^{-1} \end{pmatrix}. \ \ \ \ \ (2)$
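As a minimal sanity check (with hypothetical values of my own), when all four blocks are scalars ($n = 2$, $k = 1$) the Schur-complement expression reduces to the familiar formula for the inverse of a $2 \times 2$ matrix:

```python
# Scalar-block sanity check: n = 2, k = 1.
a, b, c, d = 3.0, 1.0, 2.0, 5.0
s = d - c * (1.0 / a) * b                       # Schur complement D - C A^{-1} B
inv = [[1/a + (1/a) * b * (1/s) * c * (1/a), -(1/a) * b * (1/s)],
       [-(1/s) * c * (1/a),                   1/s]]
det = a * d - b * c                             # compare with the usual 2x2 inverse
expected = [[d / det, -b / det], [-c / det, a / det]]
for i in range(2):
    for j in range(2):
        assert abs(inv[i][j] - expected[i][j]) < 1e-12
print("Schur block-inverse formula agrees with the 2x2 inverse")
```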
One can consider the inverse problem: given the inverse $M^{-1}$ of $M$, does one have a nice formula for the inverse $A^{-1}$ of the minor $A$? Trying to recover this directly from (2) looks somewhat messy. However, one can proceed as follows. Let $U$ denote the $n \times k$ matrix

$\displaystyle U := \begin{pmatrix} 0 \\ I_k \end{pmatrix}$

(with $I_k$ the $k \times k$ identity matrix), and let $U^T$ be its transpose:

$\displaystyle U^T = \begin{pmatrix} 0 & I_k \end{pmatrix}.$

Then for any scalar $t$ (which we identify with $t$ times the $k \times k$ identity matrix), one has

$\displaystyle M + t U U^T = \begin{pmatrix} A & B \\ C & D + t \end{pmatrix},$

and hence by (2)

$\displaystyle (M + t U U^T)^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B (D + t - C A^{-1} B)^{-1} C A^{-1} & -A^{-1} B (D + t - C A^{-1} B)^{-1} \\ -(D + t - C A^{-1} B)^{-1} C A^{-1} & (D + t - C A^{-1} B)^{-1} \end{pmatrix},$

noting that the inverses here will exist for $t$ large enough. Taking limits as $t \rightarrow \infty$, we conclude that

$\displaystyle \lim_{t \rightarrow \infty} (M + t U U^T)^{-1} = \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix}.$

On the other hand, by the Woodbury matrix identity (discussed in this previous blog post), we have

$\displaystyle (M + t U U^T)^{-1} = M^{-1} - t M^{-1} U (1 + t U^T M^{-1} U)^{-1} U^T M^{-1},$

and hence on taking limits and comparing with the preceding identity, one has

$\displaystyle \begin{pmatrix} A^{-1} & 0 \\ 0 & 0 \end{pmatrix} = M^{-1} - M^{-1} U (U^T M^{-1} U)^{-1} U^T M^{-1}. \ \ \ \ \ (3)$

This achieves the aim of expressing the inverse $A^{-1}$ of the minor in terms of the inverse $M^{-1}$ of the full matrix. Taking traces and rearranging, we conclude in particular that

$\displaystyle \hbox{tr}(A^{-1}) = \hbox{tr}(M^{-1}) - \hbox{tr}\left( (U^T M^{-1} U)^{-1} U^T M^{-2} U \right).$

In the $k=1$ case, this can be simplified to

$\displaystyle \hbox{tr}(A^{-1}) = \hbox{tr}(M^{-1}) - \frac{e_n^T M^{-2} e_n}{e_n^T M^{-1} e_n}, \ \ \ \ \ (4)$

where $e_n$ is the $n^{th}$ basis column vector.
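The $k=1$ identity can be checked numerically. Writing $M^{-1} = \begin{pmatrix} E & F \\ G & h \end{pmatrix}$ in block form with $h$ scalar (this block labelling is my own, not from the post), the identity says that the inverse of the minor is $E - F h^{-1} G$; the following Python sketch verifies this on a hypothetical $3 \times 3$ example:

```python
def inverse(M):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

M = [[4.0, 1.0, 2.0],
     [1.0, 3.0, 0.0],
     [2.0, 0.0, 5.0]]
Minv = inverse(M)
E = [row[:2] for row in Minv[:2]]           # top-left 2x2 block of M^{-1}
F = [row[2] for row in Minv[:2]]            # top-right column of M^{-1}
G = Minv[2][:2]                             # bottom-left row of M^{-1}
h = Minv[2][2]                              # bottom-right scalar of M^{-1}
Ainv = inverse([row[:2] for row in M[:2]])  # direct inverse of the minor A
for i in range(2):
    for j in range(2):
        assert abs(Ainv[i][j] - (E[i][j] - F[i] * G[j] / h)) < 1e-9
print("minor-inverse identity verified")
```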

We can apply this identity to understand how the spectrum of an $n \times n$ random matrix $M_n$ relates to that of its top left $(n-1) \times (n-1)$ minor $M_{n-1}$. Subtracting any complex multiple $z$ of the identity from $M_n$ (and hence from $M_{n-1}$), we can relate the Stieltjes transform $s_n(z) := \frac{1}{n} \hbox{tr}(M_n - z)^{-1}$ of $M_n$ with the Stieltjes transform $s_{n-1}(z) := \frac{1}{n-1} \hbox{tr}(M_{n-1} - z)^{-1}$ of $M_{n-1}$:

$\displaystyle s_{n-1}(z) = \frac{n}{n-1} s_n(z) - \frac{1}{n-1} \frac{e_n^T (M_n - z)^{-2} e_n}{e_n^T (M_n - z)^{-1} e_n}. \ \ \ \ \ (5)$

At this point we begin to proceed informally. Assume for sake of argument that the random matrix $M_n$ is Hermitian, with distribution that is invariant under conjugation by the unitary group $U(n)$; for instance, $M_n$ could be drawn from the Gaussian Unitary Ensemble (GUE), or alternatively $M_n$ could be of the form $M_n = U D U^*$ for some real diagonal matrix $D$ and a unitary matrix $U$ drawn randomly from $U(n)$ using Haar measure. To fix normalisations we will assume that the eigenvalues of $M_n$ are typically of size $O(1)$. Then $M_{n-1}$ is also Hermitian and $U(n-1)$-invariant. Furthermore, the law of $e_n^T (M_n - z)^{-1} e_n$ will be the same as the law of $u^* (M_n - z)^{-1} u$, where $u$ is now drawn uniformly from the unit sphere (independently of $M_n$). Diagonalising $M_n$ into eigenvalues $\lambda_1, \dots, \lambda_n$ and unit eigenvectors $v_1, \dots, v_n$, we have

$\displaystyle u^* (M_n - z)^{-1} u = \sum_{j=1}^n \frac{|u^* v_j|^2}{\lambda_j - z}.$

One can think of $u$ as a random (complex) Gaussian vector, divided by the magnitude of that vector (which, by the Chernoff inequality, will concentrate to $\sqrt{n}$). Thus the coefficients $u^* v_j$ of $u$ with respect to the orthonormal basis $v_1, \dots, v_n$ can be thought of as independent (complex) Gaussians, divided by that magnitude. Using this and the Chernoff inequality again, we see (for $z$ at a bounded distance away from the real axis, at least) that one has the concentration of measure

$\displaystyle u^* (M_n - z)^{-1} u \approx \frac{1}{n} \sum_{j=1}^n \frac{1}{\lambda_j - z},$

and thus

$\displaystyle e_n^T (M_n - z)^{-1} e_n \approx s_n(z)$

(that is to say, the diagonal entries of $(M_n - z)^{-1}$ are roughly constant). Similarly we have

$\displaystyle e_n^T (M_n - z)^{-2} e_n \approx \frac{1}{n} \hbox{tr}(M_n - z)^{-2} = \frac{d}{dz} s_n(z).$

Inserting this into (5) and discarding lower order terms, we thus conclude the approximate relationship

$\displaystyle s_{n-1}(z) \approx s_n(z) + \frac{1}{n} \left( s_n(z) - \frac{\frac{d}{dz} s_n(z)}{s_n(z)} \right). \ \ \ \ \ (6)$

This can be viewed as a difference equation for the Stieltjes transform of top left minors of $M_n$. Iterating this equation, and formally replacing the difference equation by a differential equation in the large $n$ limit, we see that when $n$ is large and $m = e^{-t} n$ for some $t \geq 0$, one expects the top left $m \times m$ minor $M_m$ of $M_n$ to have Stieltjes transform

$\displaystyle s_m(z) \approx s(t, z),$

where $s(t,z)$ solves the Burgers-type equation

$\displaystyle \partial_t s = s - \frac{\partial_z s}{s}. \ \ \ \ \ (7)$
Example 1  If $M_n$ is a constant multiple $a I_n$ of the identity, then $s_n(z) = \frac{1}{a-z}$. One checks that $s(t,z) = \frac{1}{a-z}$ is a steady state solution to (7), which is unsurprising given that all minors of $a I_n$ are also $a$ times the identity.

Example 2  If $M_n$ is GUE normalised so that each entry has variance $\frac{1}{n}$, then by the semi-circular law (see previous notes) one has $s_n(z) \approx \frac{-z + \sqrt{z^2 - 4}}{2}$ (using an appropriate branch of the square root). One can then verify the self-similar solution

$\displaystyle s(t,z) = e^t \frac{-z + \sqrt{z^2 - 4 e^{-t}}}{2}$

to (7), which is consistent with the fact that a top $m \times m$ minor of $M_n$ also has the law of GUE, with each entry having variance $\frac{e^{-t}}{m}$ when $m = e^{-t} n$.

One can justify the approximation (6) given a sufficiently good well-posedness theory for the equation (7). We will not do so here, but will note that (as with the classical inviscid Burgers equation) the equation can be solved exactly (formally, at least) by the method of characteristics. For any initial position $z_0$, we consider the characteristic flow $t \mapsto z(t)$ formed by solving the ODE

$\displaystyle \frac{dz}{dt} = \frac{1}{s(t, z(t))} \ \ \ \ \ (8)$

with initial data $z(0) = z_0$, ignoring for this discussion the problems of existence and uniqueness. Then from the chain rule, the equation (7) implies that

$\displaystyle \frac{d}{dt} s(t, z(t)) = s(t, z(t)),$

and thus $s(t, z(t)) = e^t s(0, z_0)$. Inserting this back into (8) we see that

$\displaystyle z(t) = z_0 + \frac{1 - e^{-t}}{s(0, z_0)},$

and thus (7) may be solved implicitly via the equation

$\displaystyle s\left( t, z_0 + \frac{1 - e^{-t}}{s(0, z_0)} \right) = e^t s(0, z_0) \ \ \ \ \ (9)$

for all $t$ and $z_0$.
Remark 3  In practice, the equation (9) may stop working when $z(t)$ crosses the real axis, as (7) does not necessarily hold in this region. It is a cute exercise (ultimately coming from the Cauchy-Schwarz inequality) to show that this crossing always happens; for instance, if $z$ has positive imaginary part, then $\frac{1}{s(0,z)}$ necessarily has negative or zero imaginary part.

Example 4  Suppose we have $s(0,z) = \frac{1}{a-z}$ as in Example 1. Then (9) becomes

$\displaystyle s\left( t, z_0 + (1 - e^{-t})(a - z_0) \right) = \frac{e^t}{a - z_0}$

for any $t, z_0$, which after making the change of variables $z := z_0 + (1 - e^{-t})(a - z_0) = a - e^{-t}(a - z_0)$ becomes

$\displaystyle s(t,z) = \frac{1}{a-z},$

as in Example 1.

Example 5  Suppose we have

$\displaystyle s(0,z) = \frac{-z + \sqrt{z^2 - 4}}{2}$

as in Example 2. Then (9) becomes

If we write

one can calculate that

and hence

One can recover the spectral measure $\mu$ from the Stieltjes transform $s(z)$ as the weak limit of the measures

$\displaystyle \frac{1}{\pi} \hbox{Im}\ s(x + i\varepsilon)\ dx$

as $\varepsilon \rightarrow 0$; we write this informally as

$\displaystyle d\mu = \frac{1}{\pi} \hbox{Im}\ s(x + i 0^+)\ dx.$

In this informal notation, we have for instance that

$\displaystyle \delta_a = \frac{1}{\pi} \hbox{Im}\ \frac{1}{a - x - i 0^+}\ dx,$

which can be interpreted as the fact that the Cauchy distributions $\frac{1}{\pi} \frac{\varepsilon}{(x-a)^2 + \varepsilon^2}\ dx$ converge weakly to the Dirac mass $\delta_a$ as $\varepsilon \rightarrow 0$. Similarly, the spectral measure associated to (10) is the semicircular measure $\frac{1}{2\pi} (4 - x^2)_+^{1/2}\ dx$.
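As a small numerical illustration (mine, not from the post), the Cauchy densities $\frac{1}{\pi} \frac{\varepsilon}{x^2 + \varepsilon^2}$ do concentrate their mass near the origin as $\varepsilon \rightarrow 0$:

```python
import math

def cauchy_mass(eps, delta, steps=200000):
    """Mass assigned to [-delta, delta] by the density (1/pi) eps/(x^2 + eps^2),
    computed by the midpoint rule; exact value is (2/pi) arctan(delta/eps)."""
    h = 2 * delta / steps
    return sum(h * (eps / math.pi) / ((-delta + (i + 0.5) * h) ** 2 + eps ** 2)
               for i in range(steps))

masses = [cauchy_mass(eps, 0.5) for eps in (1.0, 0.1, 0.01)]
print(masses)  # increases towards 1 as eps -> 0
```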

If we let $\mu_t$ be the spectral measure associated to $s(t, \cdot)$, then the curve $t \mapsto \mu_t$ from $[0, +\infty)$ to the space of measures is the high-dimensional limit of a Gelfand-Tsetlin pattern (discussed in this previous post), if the pattern is randomly generated amongst all matrices with spectrum asymptotic to $\mu_0$ as $n \rightarrow \infty$. For instance, if $\mu_0 = \delta_a$, then the curve is constant at $\delta_a$, corresponding to a pattern that is entirely filled with $a$’s. If instead $\mu_0$ is a semicircular distribution, then the pattern is

thus at height $e^{-t} n$ from the top, the pattern is semicircular on the interval $[-2 e^{-t/2}, 2 e^{-t/2}]$. The interlacing property of Gelfand-Tsetlin patterns translates to the claim that (resp. ) is non-decreasing (resp. non-increasing) in for any fixed . In principle one should be able to establish these monotonicity claims directly from the PDE (7) or from the implicit solution (9), but it was not clear to me how to do so.

An interesting example of such a limiting Gelfand-Tsetlin pattern occurs when $\mu_0 = \frac{1}{2} \delta_{-1} + \frac{1}{2} \delta_1$, which corresponds to $M_n$ being $2P - I$, where $P$ is an orthogonal projection to a random $n/2$-dimensional subspace of ${\bf C}^n$. Here we have

$\displaystyle s(0,z) = \frac{1}{2} \frac{1}{-1-z} + \frac{1}{2} \frac{1}{1-z} = \frac{z}{1 - z^2},$

and so (9) in this case becomes

A tedious calculation then gives the solution

For , there are simple poles at , and the associated measure is

This reflects the interlacing property, which forces of the eigenvalues of the minor to be equal to (resp. ). For , the poles disappear and one just has

For , one has an inverse semicircle distribution

There is presumably a direct geometric explanation of this fact (basically describing the singular values of the product of two random orthogonal projections to half-dimensional subspaces of ), but I do not know of one off-hand.

The evolution of $\mu_t$ can also be understood using the $R$-transform and $S$-transform from free probability. Formally, let $z(s)$ denote the inverse of the function $z \mapsto s(0,z)$, thus

$\displaystyle s(0, z(s)) = s$

for all $s$, and then define the $R$-transform

$\displaystyle R_0(s) := z(-s) - \frac{1}{s}.$

The equation (9) may be rewritten as

$\displaystyle z_t(s) = z(e^{-t} s) + \frac{e^t - 1}{s},$

where $z_t$ denotes the inverse of $z \mapsto s(t,z)$, and hence

$\displaystyle R_t(s) := z_t(-s) - \frac{1}{s} = R_0(e^{-t} s). \ \ \ \ \ (12)$

See these previous notes for a discussion of free probability topics such as the $R$-transform.

Example 6  If $s(0,z) = \frac{1}{a-z}$ (as in Example 1), then the $R$-transform is $R_0(s) = a$.

Example 7  If $s(0,z)$ is given by (10), then the $R$-transform is

Example 8  If $s(0,z)$ is given by (11), then the $R$-transform is

This simple relationship (12) is essentially due to Nica and Speicher (thanks to Dima Shlyakhtenko for this reference). It has the remarkable consequence that when $e^{-t}$ is the reciprocal of a natural number $m$, then $\mu_t$ is the free arithmetic mean of $m$ copies of $\mu_0$, that is to say $\mu_t$ is the free convolution of $m$ copies of $\mu_0$, pushed forward by the map $\lambda \mapsto \lambda / m$. In terms of random matrices, this is asserting that the top $\frac{n}{m} \times \frac{n}{m}$ minor of a random matrix $M_n$ has spectral measure approximately equal to that of an arithmetic mean of $m$ independent copies of $M_n$, so that the process of taking top left minors is in some sense a continuous analogue of the process of taking freely independent arithmetic means. There ought to be a geometric proof of this assertion, but I do not know of one. In the limit $t \rightarrow \infty$ (or $m \rightarrow \infty$), the $R$-transform becomes linear and the spectral measure becomes semicircular, which is of course consistent with the free central limit theorem.

In a similar vein, if one defines the function

and inverts it to obtain a function with

for all $s$, then the $S$-transform is defined by

Writing

for any , , we have

and so (9) becomes

which simplifies to

replacing by we obtain

and thus

and hence

One can compute this to be the $S$-transform of the measure $\mu_t$; from the link between $S$-transforms and free products (see e.g. these notes of Guionnet), we conclude that $\mu_t$ is the free product of $\mu_0$ and the spectral measure of the projection $\pi$. This is consistent with the random matrix theory interpretation, since $\mu_t$ is also the spectral measure of $\pi M_n \pi$, where $\pi$ is the orthogonal projection to the span of the first $e^{-t} n$ basis elements, so in particular $\pi$ has spectral measure $e^{-t} \delta_1 + (1 - e^{-t}) \delta_0$. If $M_n$ is unitarily invariant then (by a fundamental result of Voiculescu) it is asymptotically freely independent of $\pi$, so the spectral measure of $\pi M_n \pi$ is asymptotically the free product of that of $M_n$ and of $\pi$.
