
In July I will be spending a week at Park City, being one of the mini-course lecturers in the Graduate Summer School component of the Park City Summer Session on random matrices. I have chosen to give some lectures on least singular values of random matrices, the circular law, and the Lindeberg exchange method in random matrix theory; this is a slightly different set of topics than I had initially advertised (which was instead about the Lindeberg exchange method and the local relaxation flow method), but after consulting with the other mini-course lecturers I felt that this would be a more complementary set of topics. I have uploaded a draft of my lecture notes (some portion of which is derived from my monograph on the subject); as always, comments and corrections are welcome.

<I>[Update, June 23: notes revised and reformatted to PCMI format. -T.]</I>

Suppose $f: X \rightarrow Y$ is a continuous (but nonlinear) map from one normed vector space $X$ to another $Y$. The continuity means, roughly speaking, that if $x, y \in X$ are such that $\|x-y\|_X$ is small, then $\|f(x)-f(y)\|_Y$ is also small (though the precise notion of “smallness” may depend on $x$ or $y$, particularly if $f$ is not known to be uniformly continuous). If $f$ is known to be differentiable (in, say, the Fréchet sense), then we in fact have a linear bound of the form

$\displaystyle \|f(x)-f(y)\|_Y \leq C(x) \|x-y\|_X$

for some $C(x)$ depending on $x$, if $\|x-y\|_X$ is small enough; one can of course make $C$ independent of $x$ (and drop the smallness condition) if $f$ is known instead to be Lipschitz continuous.

In many applications in analysis, one would like more explicit and quantitative bounds that estimate quantities like $\|f(x)-f(y)\|_Y$ in terms of quantities like $\|x-y\|_X$. There are a number of ways to do this. First of all, there is of course the trivial estimate arising from the triangle inequality:

$\displaystyle \|f(x)-f(y)\|_Y \leq \|f(x)\|_Y + \|f(y)\|_Y. \ \ \ \ \ (1)$

This estimate is usually not very good when $x$ and $y$ are close together. However, when $x$ and $y$ are far apart, this estimate can be more or less sharp. For instance, if the magnitude of $f$ varies so much from $x$ to $y$ that $\|f(x)\|_Y$ is more than (say) twice that of $\|f(y)\|_Y$, or vice versa, then (1) is sharp up to a multiplicative constant. Also, if $f$ is oscillatory in nature, and the distance between $x$ and $y$ exceeds the “wavelength” of the oscillation of $f$ at $x$ (or at $y$), then one also typically expects (1) to be close to sharp. Conversely, if $f$ does not vary much in magnitude from $x$ to $y$, and the distance between $x$ and $y$ is less than the wavelength of any oscillation present in $f$, one expects to be able to improve upon (1).

When $f$ is relatively simple in form, one can sometimes proceed simply by substituting $x = y + h$. For instance, if $f(x) = x^2$ is the squaring function in a commutative ring $R$, one has

$\displaystyle f(y+h) = (y+h)^2 = y^2 + 2yh + h^2$

and thus

$\displaystyle f(y+h) - f(y) = 2yh + h^2,$

or in terms of the original variables $x, y$ one has

$\displaystyle f(x) - f(y) = 2y(x-y) + (x-y)^2.$

If the ring $R$ is not commutative, one has to modify this to

$\displaystyle f(x) - f(y) = y(x-y) + (x-y)y + (x-y)^2.$

Thus, for instance, if $A, B$ are $n \times n$ matrices and $\| \cdot \|_{op}$ denotes the operator norm, one sees from the triangle inequality and the sub-multiplicativity of the operator norm that

$\displaystyle \| A^2 - B^2 \|_{op} \leq 2 \|B\|_{op} \|A-B\|_{op} + \|A-B\|_{op}^2. \ \ \ \ \ (2)$

If $f(x)$ involves $x$ (or various components of $x$) in several places, one can sometimes get a good estimate by “swapping” $x$ with $y$ at each of the places in turn, using a telescoping series. For instance, if we again use the squaring function $f(x) = x \cdot x$ in a non-commutative ring, we have

$\displaystyle f(x) - f(y) = (x \cdot x - x \cdot y) + (x \cdot y - y \cdot y) = x(x-y) + (x-y)y,$

which for instance leads to a slight improvement of (2):

$\displaystyle \| A^2 - B^2 \|_{op} \leq ( \|A\|_{op} + \|B\|_{op} ) \|A-B\|_{op}.$

More generally, for any natural number $n$, one has the identity

$\displaystyle x^n - y^n = (x-y)(x^{n-1} + x^{n-2} y + \dots + y^{n-1})$

in a commutative ring, while in a non-commutative ring one must modify this to

$\displaystyle x^n - y^n = \sum_{i=0}^{n-1} x^i (x-y) y^{n-1-i}, \ \ \ \ \ (3)$

and for matrices $A, B$ one has

$\displaystyle \| A^n - B^n \|_{op} \leq n \max( \|A\|_{op}, \|B\|_{op} )^{n-1} \|A-B\|_{op}.$
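
The non-commutative telescoping identity $x^n - y^n = \sum_{i=0}^{n-1} x^i (x-y) y^{n-1-i}$ discussed above can be checked mechanically; here is a quick Python sanity check in exact rational arithmetic (the $2 \times 2$ matrices are arbitrary non-commuting choices for illustration, not taken from the text):

```python
from fractions import Fraction as F

# 2x2 matrix helpers over the rationals (exact arithmetic, no rounding)
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def mat_pow(X, n):
    R = [[F(1), F(0)], [F(0), F(1)]]  # identity matrix
    for _ in range(n):
        R = mat_mul(R, X)
    return R

A = [[F(1), F(2)], [F(3), F(5)]]
B = [[F(2), F(-1)], [F(1), F(4)]]   # A and B do not commute

n = 5
# x^n - y^n = sum_{i=0}^{n-1} x^i (x-y) y^{n-1-i} in a non-commutative ring
lhs = mat_sub(mat_pow(A, n), mat_pow(B, n))
rhs = [[F(0), F(0)], [F(0), F(0)]]
for i in range(n):
    term = mat_mul(mat_pow(A, i),
                   mat_mul(mat_sub(A, B), mat_pow(B, n - 1 - i)))
    rhs = mat_add(rhs, term)

assert lhs == rhs
```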

Exercise 1 If $A$ and $B$ are unitary $n \times n$ matrices, show that the commutator $[A,B] := AB - BA$ obeys the inequality

$\displaystyle \| [A,B] \|_{op} \leq 2 \| A - 1 \|_{op} \| B - 1 \|_{op}.$

(Hint: first control $\| [A-1, B-1] \|_{op}$.)

Now suppose (for simplicity) that $f: {\bf R}^d \rightarrow {\bf R}^{d'}$ is a map between Euclidean spaces. If $f$ is continuously differentiable, then one can use the fundamental theorem of calculus to write

$\displaystyle f(x) - f(y) = \int_0^1 \frac{d}{dt} f(\gamma(t))\, dt$

where $\gamma: [0,1] \rightarrow {\bf R}^d$ is any continuously differentiable path from $y$ to $x$. For instance, if one uses the straight line path $\gamma(t) = (1-t) y + t x$, one has

$\displaystyle f(x) - f(y) = \int_0^1 ((x-y) \cdot \nabla f)( (1-t) y + t x )\, dt.$

In the one-dimensional case $d = d' = 1$, this simplifies to

$\displaystyle f(x) - f(y) = (x-y) \int_0^1 f'( (1-t) y + t x )\, dt. \ \ \ \ \ (4)$

Among other things, this immediately implies the factor theorem for $C^k$ functions: if $f$ is a $C^k$ function for some $k \geq 1$ that vanishes at some point $a$, then $f(x)$ factors as the product of $x-a$ and some $C^{k-1}$ function $g$. Another basic consequence is that if $f'$ is uniformly bounded in magnitude by some constant $C$, then $f$ is Lipschitz continuous with the same constant $C$.

Applying (4) to the power function $f(x) = x^n$, we obtain the identity

$\displaystyle x^n - y^n = n (x-y) \int_0^1 ( (1-t) y + t x )^{n-1}\, dt \ \ \ \ \ (5)$

which can be compared with (3). Indeed, for $x$ and $y$ close to $1$, one can use logarithms and Taylor expansion to arrive at the approximation $x^i y^{n-1-i} \approx ( (1 - \frac{i}{n-1}) y + \frac{i}{n-1} x )^{n-1}$, so (3) behaves a little like a Riemann sum approximation to (5).

Exercise 2 For each $1 \leq i \leq n$, let $X_i$ and $Y_i$ be random variables taking values in a measurable space $R_i$, and let $F: R_1 \times \dots \times R_n \rightarrow {\bf C}$ be a bounded measurable function.

- (i) (Lindeberg exchange identity) Show that

$\displaystyle \mathop{\bf E} F(X_1,\dots,X_n) - \mathop{\bf E} F(Y_1,\dots,Y_n)$
$\displaystyle = \sum_{i=1}^n \left( \mathop{\bf E} F(X_1,\dots,X_{i-1},X_i,Y_{i+1},\dots,Y_n) - \mathop{\bf E} F(X_1,\dots,X_{i-1},Y_i,Y_{i+1},\dots,Y_n) \right).$

- (ii) (Knowles-Yin exchange identity) Show that the analogous identity holds with the intermediate variables $Z_i$, where each $Z_i$ is a mixture of $X_i$ and $Y_i$, with the mixing choices uniformly drawn independently of each other and of the $X_i, Y_i$.

- (iii) Discuss the relationship between the identities in parts (i), (ii) with the identities (3), (5).

(The identity in (i) is the starting point for the Lindeberg exchange method in probability theory, discussed for instance in this previous post. The identity in (ii) can also be used in the Lindeberg exchange method; the terms in the right-hand side are slightly more symmetric in the indices $1, \dots, n$, which can be a technical advantage in some applications; see this paper of Knowles and Yin for an instance of this.)
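
Since the Lindeberg exchange identity in part (i) of the exercise is a pure telescoping statement (linearity of expectation makes consecutive terms cancel), it can be verified by exact enumeration. The following Python sketch does this for three independent Bernoulli variables; the distributions and the test function `Ffun` are arbitrary illustrative choices, not taken from the text:

```python
from itertools import product
from fractions import Fraction as F

# P(X_i = 1) and P(Y_i = 1) for i = 1..3; all variables independent
pX = [F(1, 3), F(1, 2), F(2, 5)]
pY = [F(3, 4), F(1, 5), F(1, 2)]
n = 3

def Ffun(v):
    # an arbitrary bounded test function of three bits
    return v[0] * v[1] + 2 * v[2] - v[0] * v[2]

def expectation(ps):
    # E F(Z_1,...,Z_n) where Z_i ~ Bernoulli(ps[i]), independent,
    # computed exactly by enumerating the product measure
    total = F(0)
    for bits in product([0, 1], repeat=n):
        prob = F(1)
        for b, p in zip(bits, ps):
            prob *= p if b == 1 else 1 - p
        total += prob * Ffun(bits)
    return total

lhs = expectation(pX) - expectation(pY)

# telescoping sum: swap Y_i for X_i one index at a time
rhs = F(0)
for i in range(n):
    hi = pX[:i + 1] + pY[i + 1:]   # (X_1..X_i, Y_{i+1}..Y_n)
    lo = pX[:i] + pY[i:]           # (X_1..X_{i-1}, Y_i..Y_n)
    rhs += expectation(hi) - expectation(lo)

assert lhs == rhs
```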

Exercise 3 If $f: {\bf R} \rightarrow {\bf R}$ is $k$ times continuously differentiable, establish Taylor's theorem with remainder

$\displaystyle f(x) = \sum_{j=0}^{k-1} \frac{f^{(j)}(y)}{j!} (x-y)^j + \left( \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} f^{(k)}( (1-t) y + t x )\, dt \right) (x-y)^k.$

If $f^{(k)}$ is bounded in magnitude by some constant $C$, conclude that

$\displaystyle \left| f(x) - \sum_{j=0}^{k-1} \frac{f^{(j)}(y)}{j!} (x-y)^j \right| \leq \frac{C}{k!} |x-y|^k.$

For real scalar functions $f: {\bf R} \rightarrow {\bf R}$, the average value of the continuous real-valued function $t \mapsto f'( (1-t) y + t x )$ must be attained at some point $t$ in the interval $[0,1]$. We thus conclude the mean-value theorem

$\displaystyle f(x) - f(y) = f'( (1-t) y + t x ) (x-y)$

for some $t \in [0,1]$ (that can depend on $x$, $y$, and $f$). This can for instance give a second proof of the fact that continuously differentiable functions with bounded derivative are Lipschitz continuous. However it is worth stressing that the mean-value theorem is only available for *real scalar* functions; it is false for instance for complex scalar functions. A basic counterexample is given by the function $e(x) := e^{2\pi i x}$; there is no $t \in [0,1]$ for which $e(1) - e(0) = e'(t)$. On the other hand, as $e'$ has magnitude $2\pi$, we still know from (4) that $e$ is Lipschitz of constant $2\pi$, and when combined with (1) we obtain the basic bounds

$\displaystyle |e(x) - e(y)| \leq \min( 2, 2\pi |x-y| )$

which are already very useful for many applications.
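
These bounds are easy to test numerically. The snippet below samples many pairs of points and checks $|e(x) - e(y)| \leq \min(2, 2\pi|x-y|)$ for $e(x) = e^{2\pi i x}$; the sample points are arbitrary, and the tiny additive tolerance only guards against floating-point rounding:

```python
import cmath
import math

def e(x):
    # e(x) = exp(2*pi*i*x), the complex exponential of the text
    return cmath.exp(2j * math.pi * x)

# check |e(x) - e(y)| <= min(2, 2*pi*|x - y|) on many sample pairs
for k in range(200):
    x = 0.137 * k
    y = math.sin(float(k))      # arbitrary second sample point
    gap = abs(e(x) - e(y))
    assert gap <= min(2.0, 2.0 * math.pi * abs(x - y)) + 1e-12
```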

Exercise 4 Let $A, B$ be $n \times n$ matrices, and let $t$ be a non-negative real.

- (i) Establish the Duhamel formula

$\displaystyle e^{t(A+B)} = e^{tA} + \int_0^t e^{(t-s)A} B e^{s(A+B)}\, ds$

where $e^{tA}$ denotes the matrix exponential of $tA$. (Hint: differentiate $e^{-sA} e^{s(A+B)}$ or $e^{(t-s)A} e^{s(A+B)}$ in $s$.)

- (ii) Establish the iterated Duhamel formula

$\displaystyle e^{t(A+B)} = \sum_{j=0}^{k} \int_{0 \leq t_1 \leq \dots \leq t_j \leq t} e^{(t-t_j)A} B e^{(t_j - t_{j-1})A} B \cdots B e^{t_1 A}\, dt_1 \dots dt_j$
$\displaystyle + \int_{0 \leq t_1 \leq \dots \leq t_{k+1} \leq t} e^{(t-t_{k+1})A} B e^{(t_{k+1} - t_k)A} B \cdots B e^{t_1 (A+B)}\, dt_1 \dots dt_{k+1}$

for any $k \geq 0$.

- (iii) Establish the infinitely iterated Duhamel formula

$\displaystyle e^{t(A+B)} = \sum_{j=0}^{\infty} \int_{0 \leq t_1 \leq \dots \leq t_j \leq t} e^{(t-t_j)A} B e^{(t_j - t_{j-1})A} B \cdots B e^{t_1 A}\, dt_1 \dots dt_j.$

- (iv) If $t \mapsto A(t)$ is an $n \times n$ matrix depending in a continuously differentiable fashion on $t$, establish the variation formula

$\displaystyle \frac{d}{dt} e^{A(t)} = \left( F( \mathrm{ad}(A(t)) ) \frac{d}{dt} A(t) \right) e^{A(t)}$

where $\mathrm{ad}(A)$ is the adjoint representation $\mathrm{ad}(A) B := AB - BA$ applied to $A$, and $F$ is the function

$\displaystyle F(z) := \int_0^1 e^{sz}\, ds$

(thus $F(z) = \frac{e^z-1}{z}$ for non-zero $z$), with $F(\mathrm{ad}(A(t)))$ defined using functional calculus.

We remark that further manipulation of (iv) of the above exercise using the fundamental theorem of calculus eventually leads to the Baker-Campbell-Hausdorff-Dynkin formula, as discussed in this previous blog post.

Exercise 5 Let $A, B$ be positive definite $n \times n$ matrices, and let $Y$ be an arbitrary $n \times n$ matrix. Show that there is a unique solution $X$ to the Sylvester equation

$\displaystyle A X + X B = Y$

which is given by the formula

$\displaystyle X = \int_0^\infty e^{-tA} Y e^{-tB}\, dt.$

In the above examples we had applied the fundamental theorem of calculus along linear curves $\gamma(t) = (1-t) y + t x$. However, it is sometimes better to use other curves. For instance, the circular arc $\gamma(t) = (\cos t) y + (\sin t) x$, $0 \leq t \leq \pi/2$, can be useful, particularly if $x$ and $y$ are “orthogonal” or “independent” in some sense; a good example of this is the proof by Maurey and Pisier of the gaussian concentration inequality, given in Theorem 8 of this previous blog post. In a similar vein, if one wishes to compare a scalar random variable $X$ of mean zero and variance one with a Gaussian random variable $G$ of mean zero and variance one, it can be useful to introduce the intermediate random variables $e^{-t} X + (1 - e^{-2t})^{1/2} G$ (where $X$ and $G$ are independent); note that these variables have mean zero and variance one, and after coupling them together appropriately they evolve by the Ornstein-Uhlenbeck process, which has many useful properties. For instance, one can use these ideas to establish monotonicity formulae for entropy; see e.g. this paper of Courtade for an example of this and further references. More generally, one can exploit curves that flow according to some geometrically natural ODE or PDE; several examples of this occur famously in Perelman’s proof of the Poincaré conjecture via Ricci flow, discussed for instance in this previous set of lecture notes.

In some cases, it is difficult to compute $f(x)-f(y)$ or the derivative $f'$ directly, but one can instead proceed by implicit differentiation, or some variant thereof. Consider for instance the matrix inversion map $f(A) := A^{-1}$ (defined on the open dense subset of $n \times n$ matrices consisting of invertible matrices). If one wants to compute $B^{-1} - A^{-1}$ for $B$ close to $A$, one can temporarily write this difference as $X$, thus

$\displaystyle B^{-1} = A^{-1} + X.$

Multiplying both sides on the left by $B$ to eliminate the $B^{-1}$ term, and on the right by $A$ to eliminate the $A^{-1}$ term, one obtains

$\displaystyle A = B + B X A$

and thus on reversing these steps we arrive at the basic identity

$\displaystyle B^{-1} - A^{-1} = B^{-1} (A - B) A^{-1}. \ \ \ \ \ (6)$
For instance, if $A, B$ are $n \times n$ matrices, and we consider the resolvents

$\displaystyle R_A(z) := (A - z)^{-1}; \quad R_B(z) := (B - z)^{-1},$

then we have the *resolvent identity*

$\displaystyle R_A(z) - R_B(z) = R_A(z) (B - A) R_B(z) \ \ \ \ \ (7)$

as long as $z$ does not lie in the spectrum of $A$ or $B$ (for instance, if $A$, $B$ are self-adjoint then one can take $z$ to be any strictly complex number). One can iterate this identity to obtain

$\displaystyle R_A(z) = \sum_{j=0}^{k-1} R_B(z) ( (B-A) R_B(z) )^j + R_A(z) ( (B-A) R_B(z) )^k$

for any natural number $k$; in particular, if $(B-A) R_B(z)$ has operator norm less than one, one has the Neumann series

$\displaystyle R_A(z) = \sum_{j=0}^{\infty} R_B(z) ( (B-A) R_B(z) )^j.$
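
As a sanity check, the resolvent identity $R_A(z) - R_B(z) = R_A(z)(B-A)R_B(z)$ can be verified in exact rational arithmetic; the following Python sketch does so for two arbitrary $2 \times 2$ matrices and a value of $z$ outside both spectra (the specific matrices are illustrative choices):

```python
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def scal(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def inv2(M):
    # exact inverse of a 2x2 matrix via the adjugate formula
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

I2 = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(1), F(2)], [F(0), F(3)]]   # eigenvalues 1, 3
B = [[F(2), F(1)], [F(1), F(1)]]   # eigenvalues (3 +- sqrt(5))/2
z = F(7)                           # outside both spectra

RA = inv2(sub(A, scal(z, I2)))     # (A - z)^{-1}
RB = inv2(sub(B, scal(z, I2)))     # (B - z)^{-1}

# resolvent identity: R_A(z) - R_B(z) = R_A(z) (B - A) R_B(z)
assert sub(RA, RB) == mul(RA, mul(sub(B, A), RB))
```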
Similarly, if $t \mapsto A(t)$ is a family of invertible matrices that depends in a continuously differentiable fashion on a time variable $t$, then by implicitly differentiating the identity

$\displaystyle A(t) A(t)^{-1} = 1$

in $t$ using the product rule, we obtain

$\displaystyle \left( \frac{d}{dt} A(t) \right) A(t)^{-1} + A(t) \frac{d}{dt} A(t)^{-1} = 0$

and hence

$\displaystyle \frac{d}{dt} A(t)^{-1} = - A(t)^{-1} \left( \frac{d}{dt} A(t) \right) A(t)^{-1}$

(this identity may also be easily derived from (6)). One can then use the fundamental theorem of calculus to obtain variants of (6), for instance by using the curve $\gamma(t) := (1-t) A + t B$ we arrive at

$\displaystyle B^{-1} - A^{-1} = - \int_0^1 ( (1-t) A + t B )^{-1} (B - A) ( (1-t) A + t B )^{-1}\, dt,$

assuming that the curve stays entirely within the set of invertible matrices. While this identity may seem more complicated than (6), it is more symmetric, which conveys some advantages. For instance, using this identity it is easy to see that if $A, B$ are positive definite with $A > B$ in the sense of positive definite matrices (that is, $A - B$ is positive definite), then $A^{-1} < B^{-1}$. (Try to prove this using (6) instead!)
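
The differentiation identity for the inverse, $\frac{d}{dt} A(t)^{-1} = -A(t)^{-1} (\frac{d}{dt} A(t)) A(t)^{-1}$, is also easy to test numerically by comparing against a central finite difference; in the sketch below the smooth family $A(t)$ is an arbitrary illustrative choice:

```python
import math

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

def A(t):
    # an arbitrary smooth family of invertible 2x2 matrices
    return [[2 + math.sin(t), t], [t * t, 3 + math.cos(t)]]

def Adot(t):
    # its entrywise derivative
    return [[math.cos(t), 1.0], [2 * t, -math.sin(t)]]

t, h = 0.7, 1e-6
# central finite difference of t -> A(t)^{-1}
num = [[(inv2(A(t + h))[i][j] - inv2(A(t - h))[i][j]) / (2 * h)
        for j in range(2)] for i in range(2)]
# the closed form -A^{-1} A' A^{-1}
Ai = inv2(A(t))
P = mul(Ai, mul(Adot(t), Ai))
exact = [[-P[i][j] for j in range(2)] for i in range(2)]

for i in range(2):
    for j in range(2):
        assert abs(num[i][j] - exact[i][j]) < 1e-5
```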

Exercise 6 If $A$ is an invertible $n \times n$ matrix and $u, v$ are $n$-dimensional vectors, establish the Sherman-Morrison formula

$\displaystyle ( A + t u v^T )^{-1} = A^{-1} - \frac{t A^{-1} u v^T A^{-1}}{1 + t v^T A^{-1} u}$

whenever $t$ is a scalar such that $1 + t v^T A^{-1} u$ is non-zero. (See also this previous blog post for more discussion of these sorts of identities.)
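
A quick exact-arithmetic check of the Sherman-Morrison formula $(A + t u v^T)^{-1} = A^{-1} - t A^{-1} u v^T A^{-1} / (1 + t v^T A^{-1} u)$ for a rank-one perturbation of a $2 \times 2$ matrix; the matrix, vectors, and scalar here are arbitrary illustrative choices:

```python
from fractions import Fraction as F

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[F(2), F(1)], [F(1), F(3)]]
u = [F(1), F(2)]          # column vector
v = [F(3), F(-1)]         # we use v^T on the right
t = F(1, 2)

Ai = inv2(A)
Aiu = [Ai[i][0] * u[0] + Ai[i][1] * u[1] for i in range(2)]   # A^{-1} u
vAi = [v[0] * Ai[0][j] + v[1] * Ai[1][j] for j in range(2)]   # v^T A^{-1}
denom = 1 + t * (vAi[0] * u[0] + vAi[1] * u[1])               # 1 + t v^T A^{-1} u

# Sherman-Morrison: (A + t u v^T)^{-1} = A^{-1} - t (A^{-1}u)(v^T A^{-1}) / denom
sm = [[Ai[i][j] - t * Aiu[i] * vAi[j] / denom for j in range(2)]
      for i in range(2)]
perturbed = [[A[i][j] + t * u[i] * v[j] for j in range(2)] for i in range(2)]

assert inv2(perturbed) == sm
```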

One can use the Cauchy integral formula to extend these identities to other functions of matrices. For instance, if $f: {\bf C} \rightarrow {\bf C}$ is an entire function, and $\gamma$ is a counterclockwise contour that goes around the spectrum of both $A$ and $B$, then we have

$\displaystyle f(A) = \frac{-1}{2\pi i} \oint_\gamma f(z) (A - z)^{-1}\, dz$

and similarly

$\displaystyle f(B) = \frac{-1}{2\pi i} \oint_\gamma f(z) (B - z)^{-1}\, dz$

and hence by (7) one has

$\displaystyle f(A) - f(B) = \frac{-1}{2\pi i} \oint_\gamma f(z) (A - z)^{-1} (B - A) (B - z)^{-1}\, dz;$

similarly, if $A(t)$ depends on $t$ in a continuously differentiable fashion, then

$\displaystyle \frac{d}{dt} f(A(t)) = \frac{1}{2\pi i} \oint_\gamma f(z) (A(t) - z)^{-1} \left( \frac{d}{dt} A(t) \right) (A(t) - z)^{-1}\, dz$

as long as $\gamma$ goes around the spectrum of $A(t)$.

Exercise 7 If $t \mapsto A(t)$ is an $n \times n$ matrix depending continuously differentiably on $t$, and $f$ is an entire function, establish the tracial chain rule

$\displaystyle \frac{d}{dt} \hbox{tr} f(A(t)) = \hbox{tr}\left( f'(A(t)) \frac{d}{dt} A(t) \right).$

In a similar vein, given that the logarithm function $\log x$ is the antiderivative of the reciprocal $\frac{1}{x}$, one can express the matrix logarithm $\log A$ of a positive definite matrix $A$ by the fundamental theorem of calculus identity

$\displaystyle \log A = \int_0^\infty \left( \frac{1}{1+s} - (A + s)^{-1} \right)\, ds$

(with the constant term $\frac{1}{1+s}$ needed to prevent a logarithmic divergence in the integral). Differentiating, we see that if $t \mapsto A(t)$ is a family of positive definite matrices depending continuously differentiably on $t$, that

$\displaystyle \frac{d}{dt} \log A(t) = \int_0^\infty (A(t) + s)^{-1} \left( \frac{d}{dt} A(t) \right) (A(t) + s)^{-1}\, ds.$

This can be used for instance to show that $\log$ is a monotone increasing function, in the sense that $\log A \geq \log B$ whenever $A \geq B$ in the sense of positive definite matrices. One can of course integrate this formula to obtain some formulae for the difference $\log A - \log B$ of the logarithm of two positive definite matrices $A, B$.
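
In the scalar case the integral representation of the logarithm reads $\log a = \int_0^\infty (\frac{1}{1+s} - \frac{1}{a+s})\, ds$, which can be checked by direct numerical quadrature; a minimal sketch (the substitution $s = u/(1-u)$ compactifies the half-line):

```python
import math

def log_via_integral(a, n=20000):
    # log(a) = \int_0^infty ( 1/(1+s) - 1/(a+s) ) ds; the substitution
    # s = u/(1-u) maps [0,1) onto [0,infty) and gives a bounded integrand.
    h = 1.0 / n
    total = 0.0
    for k in range(n + 1):
        u = k * h
        if u < 1.0:
            s = u / (1.0 - u)
            g = (1.0 / (1.0 + s) - 1.0 / (a + s)) / (1.0 - u) ** 2
        else:
            g = a - 1.0          # limiting value of the integrand at u = 1
        w = 0.5 if k in (0, n) else 1.0   # trapezoid weights
        total += w * g * h
    return total

for a in (0.5, 2.0, 10.0):
    assert abs(log_via_integral(a) - math.log(a)) < 1e-5
```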

To compare the square roots $A^{1/2}, B^{1/2}$ of two positive definite matrices $A, B$ is trickier; there are multiple ways to proceed. One approach is to use contour integration as before (but one has to take some care to avoid branch cuts of the square root). Another is to express the square root in terms of exponentials via the formula

$\displaystyle A^{1/2} = \frac{1}{\Gamma(-1/2)} \int_0^\infty ( e^{-tA} - 1 ) t^{-1/2}\, \frac{dt}{t}$

where $\Gamma$ is the gamma function; this formula can be verified by first diagonalising $A$ to reduce to the scalar case and using the definition of the Gamma function. Then one has

$\displaystyle A^{1/2} - B^{1/2} = \frac{1}{\Gamma(-1/2)} \int_0^\infty ( e^{-tA} - e^{-tB} ) t^{-1/2}\, \frac{dt}{t}$

and one can use some of the previous identities to control $e^{-tA} - e^{-tB}$. This is pretty messy though. A third way to proceed is via implicit differentiation. If for instance $t \mapsto A(t)$ is a family of positive definite matrices depending continuously differentiably on $t$, we can differentiate the identity

$\displaystyle A(t)^{1/2} A(t)^{1/2} = A(t)$

to obtain

$\displaystyle \left( \frac{d}{dt} A(t)^{1/2} \right) A(t)^{1/2} + A(t)^{1/2} \frac{d}{dt} A(t)^{1/2} = \frac{d}{dt} A(t).$

This can for instance be solved using Exercise 5 to obtain

$\displaystyle \frac{d}{dt} A(t)^{1/2} = \int_0^\infty e^{-s A(t)^{1/2}} \left( \frac{d}{dt} A(t) \right) e^{-s A(t)^{1/2}}\, ds$

and this can in turn be integrated to obtain a formula for the difference $A^{1/2} - B^{1/2}$. This is again a rather messy formula, but it does at least demonstrate that the square root is a monotone increasing function on positive definite matrices: $A \geq B$ implies $A^{1/2} \geq B^{1/2}$.
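
The Gamma-function representation of the square root used above can likewise be checked numerically in the scalar case, where it reads $\sqrt{a} = \frac{1}{\Gamma(-1/2)} \int_0^\infty (e^{-ta} - 1) t^{-3/2}\, dt$ with $\Gamma(-1/2) = -2\sqrt{\pi}$; a rough quadrature sketch:

```python
import math

def sqrt_via_gamma(a, n=200000, U=1000.0):
    # sqrt(a) = (1/Gamma(-1/2)) \int_0^infty (e^{-ta} - 1) t^{-3/2} dt,
    # with Gamma(-1/2) = -2 sqrt(pi).  Substituting t = u^2 turns the
    # integral into 2 \int_0^infty (e^{-a u^2} - 1)/u^2 du, whose
    # integrand is bounded (it tends to -2a as u -> 0).
    h = U / n
    total = -2.0 * a * (h / 2.0)       # trapezoid contribution at u = 0
    for k in range(1, n + 1):
        u = k * h
        total += 2.0 * (math.exp(-a * u * u) - 1.0) / (u * u) * h
    total += -2.0 / U                  # analytic tail beyond u = U
    return total / (-2.0 * math.sqrt(math.pi))

for a in (1.0, 2.5, 4.0):
    assert abs(sqrt_via_gamma(a) - math.sqrt(a)) < 1e-3
```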

Several of the above identities for matrices can be (carefully) extended to operators on Hilbert spaces provided that they are sufficiently well behaved (in particular, if they have a good functional calculus, and if various spectral hypotheses are obeyed). We will not attempt to do so here, however.

I just learned (from Emmanuel Kowalski’s blog) that the AMS has just started a repository of open-access mathematics lecture notes. There are only a few such sets of notes there at present, but hopefully it will grow in the future; I just submitted some old lecture notes of mine from an undergraduate linear algebra course I taught in 2002 (with some updating of format and fixing of various typos).

[Update, Dec 22: my own notes are now on the repository.]

By an odd coincidence, I stumbled upon a second question in as many weeks about power series, and once again the only way I know how to prove the result is by complex methods; once again, I am leaving it here as a challenge to any interested readers, and I would be particularly interested in knowing of a proof that was not based on complex analysis (or thinly disguised versions thereof), or for a reference to previous literature where something like this identity has occurred. (I suspect for instance that something like this may have shown up before in free probability, based on the answer to part (ii) of the problem.)

Here is a purely algebraic form of the problem:

Problem 1 Let $f$ be a formal function of one variable $x$. Suppose that $g$ is the formal function defined by

$\displaystyle g := \sum_{n=1}^\infty \frac{1}{n!} \left( \frac{d}{dx} \right)^{n-1} f^n,$

where we use $(\frac{d}{dx})^{n-1} F$ to denote the $(n-1)$-fold derivative of $F$ with respect to the variable $x$.

- (i) Show that $f$ can be formally recovered from $g$ by the formula

$\displaystyle f = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n!} \left( \frac{d}{dx} \right)^{n-1} g^n.$

- (ii) There is a remarkable further formal identity relating $f$ with $g$ that does not explicitly involve any infinite summation. What is this identity?

To rigorously formulate part (i) of this problem, one could work in the commutative differential ring of formal infinite series generated by polynomial combinations of $f$ and its derivatives (with no constant term). Part (ii) is a bit trickier to formulate in this abstract ring; the identity in question is easier to state if $f, g$ are formal power series, or (even better) convergent power series, as it involves operations such as composition or inversion that can be more easily defined in those latter settings.

To illustrate Problem 1(i), let us compute up to third order in , using to denote any quantity involving four or more factors of and its derivatives, and similarly for other exponents than . Then we have

and hence

multiplying, we have

and

and hence after a lot of canceling

Thus Problem 1(i) holds up to errors of at least. In principle one can continue verifying Problem 1(i) to increasingly high order in , but the computations rapidly become quite lengthy, and I do not know of a direct way to ensure that one always obtains the required cancellation at the end of the computation.

Problem 1(i) can also be posed in formal power series: if

is a formal power series with no constant term with complex coefficients with , then one can verify that the series

makes sense as a formal power series with no constant term, thus

For instance it is not difficult to show that . If one further has , then it turns out that

as formal power series. Currently the only way I know how to show this is by first proving the claim for power series with a positive radius of convergence using the Cauchy integral formula, but even this is a bit tricky unless one has managed to guess the identity in (ii) first. (In fact, the way I discovered this problem was by first trying to solve (a variant of) the identity in (ii) by Taylor expansion in the course of attacking another problem, and obtaining the transform in Problem 1 as a consequence.)

The transform that takes $f$ to $g$ resembles both the exponential function

$\displaystyle \exp(f) = \sum_{n=0}^\infty \frac{1}{n!} f^n$

and Taylor's formula

$\displaystyle f(x+h) = \sum_{n=0}^\infty \frac{h^n}{n!} \left( \frac{d}{dx} \right)^{n} f(x),$

but does not seem to be directly connected to either (this is more apparent once one knows the identity in (ii)).

Kronecker is famously reported to have said, “God created the natural numbers; all else is the work of man”. The truth of this statement (literal or otherwise) is debatable; but one can certainly view the other standard number systems as (iterated) completions of the natural numbers in various senses. For instance:

- The integers ${\bf Z}$ are the additive completion of the natural numbers ${\bf N}$ (the minimal additive group that contains a copy of ${\bf N}$).
- The rationals ${\bf Q}$ are the multiplicative completion of the integers ${\bf Z}$ (the minimal field that contains a copy of ${\bf Z}$).
- The reals ${\bf R}$ are the metric completion of the rationals ${\bf Q}$ (the minimal complete metric space that contains a copy of ${\bf Q}$).
- The complex numbers ${\bf C}$ are the algebraic completion of the reals ${\bf R}$ (the minimal algebraically closed field that contains a copy of ${\bf R}$).

These descriptions of the standard number systems are elegant and conceptual, but not entirely suitable for *constructing* the number systems in a non-circular manner from more primitive foundations. For instance, one cannot quite define the reals ${\bf R}$ from scratch as the metric completion of the rationals ${\bf Q}$, because the definition of a metric space itself requires the notion of the reals! (One can of course construct ${\bf R}$ by other means, for instance by using Dedekind cuts or by using uniform spaces in place of metric spaces.) The definition of the complex numbers ${\bf C}$ as the algebraic completion of the reals does not suffer from such a non-circularity issue, but a certain amount of field theory is required to work with this definition initially. For the purposes of quickly constructing the complex numbers, it is thus more traditional to first define ${\bf C}$ as a quadratic extension of the reals ${\bf R}$, and more precisely as the extension ${\bf C} = {\bf R}[i]$ formed by adjoining a square root $i$ of $-1$ to the reals, that is to say a solution to the equation $i^2 + 1 = 0$. It is not immediately obvious that this extension is in fact algebraically closed; this is the content of the famous fundamental theorem of algebra, which we will prove later in this course.
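
The quadratic-extension definition is concrete enough to implement directly: model a complex number as a pair $(a,b)$ of reals standing for $a + bi$, with the multiplication law forced by $i^2 = -1$. A minimal Python sketch:

```python
# pairs (a, b) model a + b*i; the multiplication rule is forced by i^2 = -1
def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
assert mul(i, i) == (-1.0, 0.0)     # i^2 = -1

# the pair model agrees with Python's built-in complex type
z, w = (1.5, -2.0), (0.5, 3.0)
assert complex(*mul(z, w)) == complex(*z) * complex(*w)
assert complex(*add(z, w)) == complex(*z) + complex(*w)
```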

The two equivalent definitions of ${\bf C}$ – as the algebraic closure, and as a quadratic extension, of the reals respectively – each reveal important features of the complex numbers in applications. Because ${\bf C}$ is algebraically closed, all polynomials over the complex numbers split completely, which leads to a good spectral theory for both finite-dimensional matrices and infinite-dimensional operators; in particular, one expects to be able to diagonalise most matrices and operators. Applying this theory to constant coefficient ordinary differential equations leads to a unified theory of such solutions, in which real-variable ODE behaviour such as exponential growth or decay, polynomial growth, and sinusoidal oscillation all become aspects of a single object, the complex exponential $t \mapsto e^{zt}$ (or more generally, the matrix exponential $t \mapsto e^{At}$). Applying this theory more generally to diagonalise arbitrary translation-invariant operators over some locally compact abelian group, one arrives at Fourier analysis, which is thus most naturally phrased in terms of complex-valued functions rather than real-valued ones. If one drops the assumption that the underlying group is abelian, one instead discovers the representation theory of unitary representations, which is simpler to study than the real-valued counterpart of orthogonal representations. For closely related reasons, the theory of complex Lie groups is simpler than that of real Lie groups.

Meanwhile, the fact that the complex numbers are a quadratic extension of the reals lets one view the complex numbers geometrically as a two-dimensional plane over the reals (the Argand plane). Whereas a point singularity in the real line disconnects that line, a point singularity in the Argand plane leaves the rest of the plane connected (although, importantly, the punctured plane is no longer simply connected). As we shall see, this fact causes singularities in complex analytic functions to be better behaved than singularities of real analytic functions, ultimately leading to the powerful residue calculus for computing complex integrals. Remarkably, this calculus, when combined with the quintessentially complex-variable technique of *contour shifting*, can also be used to compute some (though certainly not all) definite integrals of *real*-valued functions that would be much more difficult to compute by purely real-variable methods; this is a prime example of Hadamard’s famous dictum that “the shortest path between two truths in the real domain passes through the complex domain”.

Another important geometric feature of the Argand plane is the angle between two tangent vectors to a point in the plane. As it turns out, the operation of multiplication by a complex scalar preserves the magnitude and orientation of such angles; the same fact is true for any non-degenerate complex analytic mapping, as can be seen by performing a Taylor expansion to first order. This fact ties the study of complex mappings closely to that of the conformal geometry of the plane (and more generally, of two-dimensional surfaces and domains). In particular, one can use complex analytic maps to conformally transform one two-dimensional domain to another, leading among other things to the famous Riemann mapping theorem, and to the classification of Riemann surfaces.

If one Taylor expands complex analytic maps to second order rather than first order, one discovers a further important property of these maps, namely that they are harmonic. This fact makes the class of complex analytic maps extremely rigid and well behaved analytically; indeed, the entire theory of elliptic PDE now comes into play, giving useful properties such as elliptic regularity and the maximum principle. In fact, due to the magic of residue calculus and contour shifting, we already obtain these properties for maps that are merely complex differentiable rather than complex analytic, which leads to the striking fact that complex differentiable functions are automatically analytic (in contrast to the real-variable case, in which real differentiable functions can be very far from being analytic).

The geometric structure of the complex numbers (and more generally of complex manifolds and complex varieties), when combined with the algebraic closure of the complex numbers, leads to the beautiful subject of *complex algebraic geometry*, which motivates the much more general theory developed in modern algebraic geometry. However, we will not develop the algebraic geometry aspects of complex analysis here.

Last, but not least, because of the good behaviour of Taylor series in the complex plane, complex analysis is an excellent setting in which to manipulate various generating functions, particularly Fourier series $\sum_n a_n e^{2\pi i n \theta}$ (which can be viewed as boundary values of power (or Laurent) series $\sum_n a_n z^n$), as well as Dirichlet series $\sum_n \frac{a_n}{n^s}$. The theory of contour integration provides a very useful dictionary between the asymptotic behaviour of the sequence $a_n$, and the complex analytic behaviour of the Dirichlet or Fourier series, particularly with regard to its poles and other singularities. This turns out to be a particularly handy dictionary in analytic number theory, for instance relating the distribution of the primes to the Riemann zeta function. Nowadays, many of the analytic number theory results first obtained through complex analysis (such as the prime number theorem) can also be obtained by more “real-variable” methods; however the complex-analytic viewpoint is still extremely valuable and illuminating.

We will frequently touch upon many of these connections to other fields of mathematics in these lecture notes. However, these are mostly side remarks intended to provide context, and it is certainly possible to skip most of these tangents and focus purely on the complex analysis material in these notes if desired.

Note: complex analysis is a very visual subject, and one should draw plenty of pictures while learning it. I am however not planning to put too many pictures in these notes, partly as it is somewhat inconvenient to do so on this blog from a technical perspective, but also because pictures that one draws on one’s own are likely to be far more useful to you than pictures that were supplied by someone else.

*[This blog post was written jointly by Terry Tao and Will Sawin.]*

In the previous blog post, one of us (Terry) implicitly introduced a notion of rank for tensors which is a little different from the usual notion of tensor rank, and which (following BCCGNSU) we will call “slice rank”. This notion of rank could then be used to encode the Croot-Lev-Pach-Ellenberg-Gijswijt argument that uses the polynomial method to control capsets.

Afterwards, several papers have applied the slice rank method to further problems – to control tri-colored sum-free sets in abelian groups (BCCGNSU, KSS) and from there to the triangle removal lemma in vector spaces over finite fields (FL), to control sunflowers (NS), and to bound progression-free sets in $p$-groups (P).

In this post we investigate the notion of slice rank more systematically. In particular, we show how to give lower bounds for the slice rank. In many cases, we can show that the upper bounds on slice rank given in the aforementioned papers are sharp to within a subexponential factor. This still leaves open the possibility of getting a better bound for the original combinatorial problem using the slice rank of some other tensor, but for very long arithmetic progressions (at least eight terms), we show that the slice rank method cannot improve over the trivial bound using any tensor.

It will be convenient to work in a “basis independent” formalism, namely working in the category of abstract finite-dimensional vector spaces over a fixed field ${\bf F}$. (In the applications to the capset problem one takes ${\bf F}$ to be the finite field ${\bf F}_3$ of three elements, but most of the discussion here applies to arbitrary fields.) Given $k$ such vector spaces $V_1, \dots, V_k$, we can form the tensor product $\bigotimes_{i=1}^k V_i$, generated by the tensor products $v_1 \otimes \dots \otimes v_k$ with $v_i \in V_i$ for $i = 1, \dots, k$, subject to the constraint that the tensor product operation is multilinear. For each $1 \leq j \leq k$, we have the smaller tensor products $\bigotimes_{1 \leq i \leq k: i \neq j} V_i$, as well as the tensor product

$\displaystyle V_j \otimes \bigotimes_{1 \leq i \leq k: i \neq j} V_i$

defined in the obvious fashion. Elements of $\bigotimes_{i=1}^k V_i$ of the form $v_j \otimes w$ for some $v_j \in V_j$ and $w \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i$ will be called *rank one functions*, and the *slice rank* (or *rank* for short) $\hbox{rank}(v)$ of an element $v$ of $\bigotimes_{i=1}^k V_i$ is defined to be the least nonnegative integer $r$ such that $v$ is a linear combination of $r$ rank one functions. If $V_1, \dots, V_k$ are finite-dimensional, then the rank is always well defined as a non-negative integer (in fact it cannot exceed $\min(\dim V_1, \dots, \dim V_k)$). It is also clearly subadditive:

$\displaystyle \hbox{rank}(v + w) \leq \hbox{rank}(v) + \hbox{rank}(w). \ \ \ \ \ (1)$

For $k=1$, $\hbox{rank}(v)$ is $0$ when $v$ is zero, and $1$ otherwise. For $k=2$, $\hbox{rank}(v)$ is the usual rank of the $2$-tensor $v \in V_1 \otimes V_2$ (which can for instance be identified with a linear map from the dual space $V_1^*$ to $V_2$). The usual notion of tensor rank for higher order tensors uses complete tensor products $v_1 \otimes \dots \otimes v_k$, $v_i \in V_i$ as the rank one objects, rather than $v_j \otimes w$, giving a rank that is greater than or equal to the slice rank studied here.

From basic linear algebra we have the following equivalences:

Lemma 1 Let $V_1, \dots, V_k$ be finite-dimensional vector spaces over a field ${\bf F}$, let $v$ be an element of $V_1 \otimes \dots \otimes V_k$, and let $r$ be a non-negative integer. Then the following are equivalent:

- (i) One has $\hbox{rank}(v) \leq r$.
- (ii) One has a representation of the form

$\displaystyle v = \sum_{j=1}^k \sum_{s \in S_j} v_{j,s} \otimes w_{j,s}$

where $S_1, \dots, S_k$ are finite sets of total cardinality $|S_1| + \dots + |S_k|$ at most $r$, and for each $1 \leq j \leq k$ and $s \in S_j$, $v_{j,s} \in V_j$ and $w_{j,s} \in \bigotimes_{1 \leq i \leq k: i \neq j} V_i$.

- (iii) One has

$\displaystyle v \in \sum_{j=1}^k U_j \otimes \bigotimes_{1 \leq i \leq k: i \neq j} V_i$

where for each $1 \leq j \leq k$, $U_j$ is a subspace of $V_j$, with the $U_j$ of total dimension $\dim U_1 + \dots + \dim U_k$ at most $r$, and we view $U_j \otimes \bigotimes_{1 \leq i \leq k: i \neq j} V_i$ as a subspace of $\bigotimes_{i=1}^k V_i$ in the obvious fashion.

- (iv) (Dual formulation) There exist subspaces $W_j$ of the dual space $V_j^*$ for $1 \leq j \leq k$, of total dimension at least $\dim V_1 + \dots + \dim V_k - r$, such that $v$ is orthogonal to $W_1 \otimes \dots \otimes W_k$, in the sense that one has the vanishing

$\displaystyle \langle v, w_1 \otimes \dots \otimes w_k \rangle = 0$

for all $w_j \in W_j$, where $\langle \cdot, \cdot \rangle$ is the obvious pairing.

*Proof:* The equivalence of (i) and (ii) is clear from the definition. To get from (ii) to (iii) one simply takes $U_j$ to be the span of the $v_{j,s}$, and conversely to get from (iii) to (ii) one takes the $v_{j,s}$ to be a basis of the $U_j$ and computes the $w_{j,s}$ by using a basis for the tensor product consisting entirely of functions of the form $v_{j,s} \otimes w$ for various $w$. To pass from (iii) to (iv) one takes $W_j$ to be the annihilator of $U_j$, and conversely to pass from (iv) to (iii).

One corollary of the formulation (iv) is that the set of tensors of slice rank at most $r$ is Zariski closed (if the field ${\bf F}$ is algebraically closed), and so the slice rank itself is a lower semi-continuous function. This is in contrast to the usual tensor rank, which is not necessarily semicontinuous.

Corollary 2 Let $V_1, \dots, V_k$ be finite-dimensional vector spaces over an algebraically closed field ${\bf F}$. Let $r$ be a nonnegative integer. The set of elements of $V_1 \otimes \dots \otimes V_k$ of slice rank at most $r$ is closed in the Zariski topology.

*Proof:* In view of Lemma 1(i and iv), this set is the union over tuples of integers with of the projection from of the set of tuples with orthogonal to , where is the Grassmanian parameterizing -dimensional subspaces of .

One can check directly that the set of tuples with orthogonal to is Zariski closed in using a set of equations of the form locally on . Hence because the Grassmanian is a complete variety, the projection of this set to is also Zariski closed. So the finite union over tuples of these projections is also Zariski closed.

We also have good behaviour with respect to linear transformations:

Lemma 3 Let $V_1, \dots, V_k$ be finite-dimensional vector spaces over a field ${\bf F}$, let $v$ be an element of $V_1 \otimes \dots \otimes V_k$, and for each $1 \leq j \leq k$, let $\phi_j: V_j \rightarrow W_j$ be a linear transformation, with $\phi := \phi_1 \otimes \dots \otimes \phi_k: V_1 \otimes \dots \otimes V_k \rightarrow W_1 \otimes \dots \otimes W_k$ the tensor product of these maps. Then

$\displaystyle \hbox{rank}( \phi(v) ) \leq \hbox{rank}(v). \ \ \ \ \ (2)$

Furthermore, if the $\phi_j$ are all injective, then one has equality in (2).

Thus, for instance, the rank of a tensor is intrinsic in the sense that it is unaffected by any enlargements of the spaces $V_1, \dots, V_k$.

*Proof:* The bound (2) is clear from the formulation (ii) of rank in Lemma 1. For equality, apply (2) to the injective maps $\phi_j$, as well as to some arbitrarily chosen left inverses of the $\phi_j$.

Computing the rank of a tensor is difficult in general; however, the problem becomes a combinatorial one if one has a suitably sparse representation of that tensor in some basis, where we will measure sparsity by the property of being an antichain.

Proposition 4 Let $V_1, \dots, V_k$ be finite-dimensional vector spaces over a field ${\bf F}$. For each $1 \leq j \leq k$, let $(v_{j,s})_{s \in S_j}$ be a linearly independent set in $V_j$ indexed by some finite set $S_j$. Let $\Gamma$ be a subset of $S_1 \times \dots \times S_k$. Let $v \in V_1 \otimes \dots \otimes V_k$ be a tensor of the form

$\displaystyle v = \sum_{(s_1,\dots,s_k) \in \Gamma} c_{s_1,\dots,s_k} v_{1,s_1} \otimes \dots \otimes v_{k,s_k} \ \ \ \ \ (3)$

where for each $(s_1,\dots,s_k) \in \Gamma$, $c_{s_1,\dots,s_k}$ is a coefficient in ${\bf F}$. Then one has

$\displaystyle \hbox{rank}(v) \leq \min_{\Gamma_1 \cup \dots \cup \Gamma_k = \Gamma} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)| \ \ \ \ \ (4)$

where the minimum ranges over all coverings of $\Gamma$ by sets $\Gamma_1, \dots, \Gamma_k$, and $\pi_j: S_1 \times \dots \times S_k \rightarrow S_j$ for $j = 1, \dots, k$ are the projection maps.

Now suppose that the coefficients $c_{s_1,\dots,s_k}$ are all non-zero, that each of the $S_j$ are equipped with a total ordering $\leq_j$, and $\Gamma'$ is the set of maximal elements of $\Gamma$, thus there do not exist distinct $(s_1,\dots,s_k) \in \Gamma'$, $(t_1,\dots,t_k) \in \Gamma$ such that $s_j \leq_j t_j$ for all $j = 1, \dots, k$. Then one has

$\displaystyle \hbox{rank}(v) \geq \min_{\Gamma_1 \cup \dots \cup \Gamma_k = \Gamma'} |\pi_1(\Gamma_1)| + \dots + |\pi_k(\Gamma_k)|. \ \ \ \ \ (5)$

In particular, if $\Gamma$ is an antichain (i.e. every element is maximal), then equality holds in (4).

*Proof:* By Lemma 3 (or by enlarging the bases ), we may assume without loss of generality that each of the is spanned by the . By relabeling, we can also assume that each is of the form

with the usual ordering, and by Lemma 3 we may take each to be , with the standard basis.

Let denote the rank of . To show (4), it suffices to show the inequality

for any covering of by . By removing repeated elements we may assume that the are disjoint. For each , the tensor

can (after collecting terms) be written as

for some . Summing and using (1), we conclude the inequality (6).

Now assume that the are all non-zero and that is the set of maximal elements of . To conclude the proposition, it suffices to show that the reverse inequality

holds for some covering . By Lemma 1(iv), there exist subspaces of whose dimension sums to

Let . Using Gaussian elimination, one can find a basis of whose representation in the standard dual basis of is in row-echelon form. That is to say, there exist natural numbers

such that for all , is a linear combination of the dual vectors , with the coefficient equal to one.

We now claim that is disjoint from . Suppose for contradiction that this were not the case, thus there exists for each such that

As is the set of maximal elements of , this implies that

for any tuple other than . On the other hand, we know that is a linear combination of , with the coefficient one. We conclude that the tensor product is equal to

plus a linear combination of other tensor products with not in . Taking inner products with (3), we conclude that , contradicting the fact that is orthogonal to . Thus we have disjoint from .

For each , let denote the set of tuples in with not of the form . From the previous discussion we see that the cover , and we clearly have , and hence from (8) we have (7) as claimed.

As an instance of this proposition, we recover the computation of diagonal rank from the previous blog post:

Example 5. Let be finite-dimensional vector spaces over a field for some . Let be a natural number, and for , let be a linearly independent set in . Let be non-zero coefficients in . Then has rank . Indeed, one applies the proposition with all equal to , with the diagonal in ; this is an antichain if we give one of the the standard ordering, and another of the the opposite ordering (and ordering the remaining arbitrarily). In this case, the are all bijective, and so it is clear that the minimum in (4) is simply .

The combinatorial minimisation problem in the above proposition can be solved asymptotically when working with tensor powers, using the notion of the Shannon entropy of a discrete random variable .
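For concreteness, the Shannon entropy (in nats) of a discrete random variable is determined by its distribution alone; a minimal Python sketch of this standard definition:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(X) = -sum_i p_i log p_i of a discrete
    distribution, in nats; zero-probability outcomes contribute nothing."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# The uniform distribution on k outcomes maximises entropy, with value log k.
print(shannon_entropy([0.25] * 4))   # log 4 ≈ 1.386
print(shannon_entropy([1.0]))        # a deterministic variable has entropy 0
```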

Proposition 6. Let be finite-dimensional vector spaces over a field . For each , let be a linearly independent set in indexed by some finite set . Let be a non-empty subset of . Let be a tensor of the form (3) for some coefficients . For each natural number , let be the tensor power of copies of , viewed as an element of . Then

and range over the random variables taking values in .

Now suppose that the coefficients are all non-zero and that each of the are equipped with a total ordering . Let be the set of maximal elements of in the product ordering, and let where range over random variables taking values in . Then

as . In particular, if the maximizer in (10) is supported on the maximal elements of (which always holds if is an antichain in the product ordering), then equality holds in (9).

*Proof:*

as , where is the projection map. Then the same thing will apply to and . Then applying Proposition 4, using the lexicographical ordering on and noting that, if are the maximal elements of , then are the maximal elements of , we obtain both (9) and (11).

We first prove the lower bound. By compactness (and the continuity properties of entropy), we can find a random variable taking values in such that

Let be a small positive quantity that goes to zero sufficiently slowly with . Let denote the set of all tuples in that are within of being distributed according to the law of , in the sense that for all , one has

By the asymptotic equipartition property, the cardinality of can be computed to be

if goes to zero slowly enough. Similarly one has

Now let be an arbitrary covering of . By the pigeonhole principle, there exists such that

which by (13) implies that

(noting that the factor can be absorbed into the error). This gives the lower bound in (12).

Now we prove the upper bound. We can cover by sets of the form for various choices of random variables taking values in . For each such random variable , we can find such that ; we then place all of in . It is then clear that the cover and that

for all , giving the required upper bound.
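The asymptotic equipartition computation used in this proof can be illustrated numerically: the number of length-n strings whose empirical distribution lies within a tolerance of a fixed law has logarithm roughly n times the entropy of that law. The following sketch (the ternary distribution, n, and tolerance are arbitrary illustrative choices) counts such strings exactly via multinomial coefficients:

```python
import math
from math import comb

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

def typical_count(n, p, delta):
    """Count length-n ternary strings whose empirical frequencies are all
    within delta of the target distribution p, summing multinomial coefficients
    over the admissible type classes."""
    total = 0
    for k0 in range(n + 1):
        for k1 in range(n + 1 - k0):
            k2 = n - k0 - k1
            freqs = (k0 / n, k1 / n, k2 / n)
            if all(abs(f - q) <= delta for f, q in zip(freqs, p)):
                # multinomial coefficient n! / (k0! k1! k2!)
                total += comb(n, k0) * comb(n - k0, k1)
    return total

p = (0.5, 0.3, 0.2)
n, delta = 60, 0.05
rate = math.log(typical_count(n, p, delta)) / n
print(rate, entropy(p))   # the growth rate is close to H(p) ≈ 1.03
```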

It is of interest to compute the quantity in (10). We have the following criterion for when a maximiser occurs:

Proposition 7. Let be finite sets, and be non-empty. Let be the quantity in (10). Let be a random variable taking values in , and let denote the essential range of , that is to say the set of tuples such that is non-zero. Then the following are equivalent:

- (i) attains the maximum in (10).
- (ii) There exist weights and a finite quantity , such that whenever , and such that
for all , with equality if . (In particular, must vanish if there exists a with .)

Furthermore, when (i) and (ii) hold, one has

*Proof:* We first show that (i) implies (ii). The function is concave on . As a consequence, if we define to be the set of tuples such that there exists a random variable taking values in with , then is convex. On the other hand, by (10), is disjoint from the orthant . Thus, by the hyperplane separation theorem, we conclude that there exists a half-space

where are reals that are not all zero, and is another real, which contains on its boundary and in its interior, such that avoids the interior of the half-space. Since is also on the boundary of , we see that the are non-negative, and that whenever .

By construction, the quantity

is maximised when . At this point we could use the method of Lagrange multipliers to obtain the required constraints, but because we have some boundary conditions on the (namely, that the probability that they attain a given element of has to be non-negative) we will work things out by hand. Let be an element of , and an element of . For small enough, we can form a random variable taking values in , whose probability distribution is the same as that for except that the probability of attaining is increased by , and the probability of attaining is decreased by . If there is any for which and , then one can check that

for sufficiently small , contradicting the maximality of ; thus we have whenever . Taylor expansion then gives

for small , where

and similarly for . We conclude that for all and , thus there exists a quantity such that for all , and for all . By construction must be nonnegative. Sampling using the distribution of , one has

almost surely; taking expectations we conclude that

The inner sum is , which equals when is non-zero, giving (17).

Now we show conversely that (ii) implies (i). As noted previously, the function is concave on , with derivative . This gives the inequality

for any (note the right-hand side may be infinite when and ). Let be any random variable taking values in ; then applying the above inequality with and , multiplying by , and summing over and gives

By construction, one has

and

so to prove that (which would give (i)), it suffices to show that

or equivalently that the quantity

is maximised when . Since

it suffices to show this claim for the quantity

One can view this quantity as

By (ii), this quantity is bounded by , with equality if is equal to (and is in particular ranging in ), giving the claim.

The second half of the proof of Proposition 7 only uses the marginal distributions and the equation (16), not the actual distribution of , so it can also be used to prove an upper bound on when the exact maximizing distribution is not known, given suitable probability distributions in each variable. The logarithm of the probability distribution here plays the role that the weight functions do in BCCGNSU.

Remark 8. Suppose one is in the situation of (i) and (ii) above; assume the nondegeneracy condition that is positive (or equivalently that is positive). We can assign a “degree” to each element by the formula

then every tuple in has total degree at most , and those tuples in have degree exactly . In particular, every tuple in has degree at most , and hence by (17), each such tuple has a -component of degree less than or equal to for some with . On the other hand, we can compute from (19) and the fact that for that . Thus, by asymptotic equipartition, and assuming , the number of “monomials” in of total degree at most is at most ; one can in fact use (19) and (18) to show that this is an equality. This gives a direct way to cover by sets with , which is in the spirit of the Croot-Lev-Pach-Ellenberg-Gijswijt arguments from the previous post.

We can now show that the rank computation for the capset problem is sharp:

Proposition 9. Let denote the space of functions from to . Then the function from to , viewed as an element of , has rank as , where is given by the formula

*Proof:* In , we have

Thus, if we let be the space of functions from to (with domain variables denoted respectively), and define the basis functions

of indexed by (with the usual ordering), respectively, and set to be the set

then is a linear combination of the with , and all coefficients non-zero. Then we have . We will show that the quantity of (10) agrees with the quantity of (20), and that the optimizing distribution is supported on , so that by Proposition 6 the rank of is .

To compute the quantity at (10), we use the criterion in Proposition 7. We take to be the random variable taking values in that attains each of the values with a probability of , and each of with a probability of ; then each of the attains the values of with probabilities respectively, so in particular is equal to the quantity in (20). If we now set and

we can verify the condition (16) with equality for all , which from (17) gives as desired.

This statement already follows from the result of Kleinberg-Sawin-Speyer, which gives a “tri-colored sum-free set” in of size , as the slice rank of this tensor is an upper bound for the size of a tri-colored sum-free set. If one were to go over the proofs more carefully to evaluate the subexponential factors, this argument would give a stronger lower bound than KSS, as it does not deal with the substantial loss that comes from Behrend’s construction. However, because it actually constructs a set, the KSS result rules out more possible approaches to give an exponential improvement of the upper bound for capsets. The lower bound on slice rank shows that the bound cannot be improved using only the slice rank of this particular tensor, whereas KSS shows that the bound cannot be improved using any method that does not take advantage of the “single-colored” nature of the problem.

We can also show that the slice rank upper bound in a result of Naslund-Sawin is similarly sharp:

Proposition 10. Let denote the space of functions from to . Then the function from , viewed as an element of , has slice rank

*Proof:* Let and be a basis for the space of functions on , itself indexed by . Choose similar bases for and , with and .

Set . Then is a linear combination of the with , and all coefficients non-zero. Order the usual way so that is an antichain. We will show that the quantity of (10) is , so that applying the last statement of Proposition 6, we conclude that the rank of is .

Let be the random variable taking values in that attains each of the values with a probability of . Then each of the attains the value with probability and with probability , so

Setting and , we can verify the condition (16) with equality for all , which from (17) gives as desired.

We used a slightly different method in each of the last two results. In the first one, we use the most natural bases for all three vector spaces, and distinguish from its set of maximal elements . In the second one we modify one basis element slightly, with instead of the more obvious choice , which allows us to work with instead of . Because is an antichain, we do not need to distinguish and . Both methods in fact work with either problem, and they are both about equally difficult, but we include both as either might turn out to be substantially more convenient in future work.

Proposition 11. Let be a natural number and let be a finite abelian group. Let be any field. Let denote the space of functions from to . Let be any -valued function on that is nonzero only when the elements of form a -term arithmetic progression, and is nonzero on every -term constant progression.

Then the slice rank of is .

*Proof:* We apply Proposition 4, using the standard bases of . Let be the support of . Suppose that we have orderings on such that the constant progressions are maximal elements of and thus all constant progressions lie in . Then for any partition of , can contain at most constant progressions, and as all constant progressions must lie in one of the , we must have . By Proposition 4, this implies that the slice rank of is at least . Since is a tensor, the slice rank is at most , hence exactly .

So it is sufficient to find orderings on such that the constant progressions are maximal elements of . We make several simplifying reductions: We may as well assume that consists of all the -term arithmetic progressions, because if the constant progressions are maximal among the set of all progressions then they are maximal among its subset . So we are looking for an ordering in which the constant progressions are maximal among all -term arithmetic progressions. We may as well assume that is cyclic, because if for each cyclic group we have an ordering where constant progressions are maximal, then on an arbitrary finite abelian group the lexicographic product of these orderings is an ordering for which the constant progressions are maximal. We may assume , as if we have an -tuple of orderings where constant progressions are maximal, we may add arbitrary orderings and the constant progressions will remain maximal.

So it is sufficient to find orderings on the cyclic group such that the constant progressions are maximal elements of the set of -term progressions in in the -fold product ordering. To do that, let the first, second, third, and fifth orderings be the usual order on and let the fourth, sixth, seventh, and eighth orderings be the reverse of the usual order on .

Then let be a constant progression and for contradiction assume that is a progression greater than in this ordering. We may assume that , because otherwise we may reverse the order of the progression, which has the effect of reversing all eight orderings, and then apply the transformation , which again reverses the eight orderings, bringing us back to the original problem but with .

Take a representative of the residue class in the interval . We will abuse notation and call this . Observe that , and are all contained in the interval modulo . Take a representative of the residue class in the interval . Then is in the interval for some . The distance between any distinct pair of intervals of this type is greater than , but the distance between and is at most , so is in the interval . By the same reasoning, is in the interval . Therefore . But then the distance between and is at most , so by the same reasoning is in the interval . Because is between and , it also lies in the interval . Because is in the interval , and by assumption it is congruent mod to a number in the set greater than or equal to , it must be exactly . Then, remembering that and lie in , we have and , so , hence , thus , which contradicts the assumption that .

In fact, given a -term progression mod and a constant, we can form a -term binary sequence with a for each step of the progression that is greater than the constant and a for each step that is less. Because a rotation map, viewed as a dynamical system, has zero topological entropy, the number of -term binary sequences that appear grows subexponentially in . Hence there must be, for large enough , at least one sequence that does not appear. In this proof we exploit a sequence that does not appear for .
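The subexponential word count for rotation codings can be checked empirically. The sketch below codes the rotation x → x + α on the unit circle by a threshold and counts the distinct length-k binary words that occur; a standard cutting-sequence argument shows that a two-interval coding of a rotation admits at most 2k words of length k, far fewer than the 2^k a priori possibilities. (The angle, threshold, and sampling resolution here are arbitrary illustrative choices.)

```python
import math

def coding_words(alpha, beta, k, samples=20000):
    """Distinct length-k binary words obtained by coding the circle rotation
    x -> x + alpha (mod 1): word[j] = 1 if {x + j*alpha} >= beta else 0,
    observed over many starting points x."""
    words = set()
    for i in range(samples):
        x = i / samples
        word = tuple(1 if (x + j * alpha) % 1.0 >= beta else 0
                     for j in range(k))
        words.add(word)
    return words

k = 16
words = coding_words(math.sqrt(2) - 1, 0.5, k)
# Linear growth in k (at most 2k distinct words), versus 2**k possible ones.
print(len(words), 2 * k, 2 ** k)
```

The word seen from a starting point x is constant on the arcs cut out by the k preimages of the two partition endpoints, which is what caps the count at 2k.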

A *capset* in the vector space over the finite field of three elements is a subset of that does not contain any lines , where and . A basic problem in additive combinatorics (discussed in one of the very first posts on this blog) is to obtain good upper and lower bounds for the maximal size of a capset in .

Trivially, one has . Using Fourier methods (and the density increment argument of Roth), the bound of was obtained by Meshulam, and improved only as late as 2012 to for some absolute constant by Bateman and Katz. But in a very recent breakthrough, Ellenberg (and independently Gijswijt) obtained the exponentially superior bound , using a version of the polynomial method recently introduced by Croot, Lev, and Pach. (In the converse direction, a construction of Edel gives capsets as large as .) Given the success of the polynomial method in superficially similar problems such as the finite field Kakeya problem (discussed in this previous post), it was natural to wonder whether this method could be applicable to the cap set problem (see for instance this MathOverflow comment of mine on this from 2010), but it took a surprisingly long time before Croot, Lev, and Pach were able to identify the precise variant of the polynomial method that would actually work here.

The proof of the capset bound is very short (Ellenberg’s and Gijswijt’s preprints are both 3 pages long, and Croot-Lev-Pach is 6 pages), but I thought I would present a slight reformulation of the argument which treats the three points on a line in symmetrically (as opposed to treating the third point differently from the first two, as is done in the Ellenberg and Gijswijt papers; Croot-Lev-Pach also treat the middle point of a three-term arithmetic progression differently from the two endpoints, although this is a very natural thing to do in their context of ). The basic starting point is this: if is a capset, then one has the identity

for all , where is the Kronecker delta function, which we view as taking values in . Indeed, (1) reflects the fact that the equation has solutions precisely when are either all equal, or form a line, and the latter is ruled out precisely when is a capset.

To exploit (1), we will show that the left-hand side of (1) is “low rank” in some sense, while the right-hand side is “high rank”. Recall that a function taking values in a field is of *rank one* if it is non-zero and of the form for some , and that the rank of a general function is the least number of rank one functions needed to express as a linear combination. More generally, if , we define the *rank* of a function to be the least number of “rank one” functions of the form

for some and some functions , , that are needed to generate as a linear combination. For instance, when , the rank one functions take the form , , , and linear combinations of such rank one functions will give a function of rank at most .

It is a standard fact in linear algebra that the rank of a diagonal matrix is equal to the number of non-zero entries. This phenomenon extends to higher dimensions:
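Both of these facts, in the two-variable (matrix) case, are easy to verify numerically: the rank of a function of two variables is just the rank of its matrix of values. A quick sanity check (numpy assumed, with arbitrarily chosen values):

```python
import numpy as np

# A rank one function of two variables F(x, y) = f(x) g(y) is the outer
# product of the value vectors of f and g.
f = np.array([1.0, 2.0, 3.0])
g = np.array([1.0, -1.0, 2.0])
F = np.outer(f, g)
print(np.linalg.matrix_rank(F))   # 1

# A diagonal matrix has rank equal to its number of non-zero entries.
D = np.diag([5.0, 0.0, -2.0, 0.0])
print(np.linalg.matrix_rank(D))   # 2
```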

Lemma 1 (Rank of diagonal hypermatrices). Let , let be a finite set, let be a field, and for each , let be a coefficient. Then the rank of the function

*Proof:* We induct on . As mentioned above, the case follows from standard linear algebra, so suppose now that and the claim has already been proven for .

It is clear that the function (2) has rank at most equal to the number of non-zero (since the summands on the right-hand side are rank one functions), so it suffices to establish the lower bound. By deleting from those elements with (which cannot increase the rank), we may assume without loss of generality that all the are non-zero. Now suppose for contradiction that (2) has rank at most , then we obtain a representation

for some sets of cardinalities adding up to at most , and some functions and .

Consider the space of functions that are orthogonal to all the , in the sense that

for all . This space is a vector space whose dimension is at least . A basis of this space generates a coordinate matrix of full rank, which implies that there is at least one non-singular minor. This implies that there exists a function in this space which is nowhere vanishing on some subset of of cardinality at least .

If we multiply (3) by and sum in , we conclude that

where

The right-hand side has rank at most , since the summands are rank one functions. On the other hand, from the induction hypothesis the left-hand side has rank at least , giving the required contradiction.

On the other hand, we have the following (symmetrised version of a) beautifully simple observation of Croot, Lev, and Pach:

*Proof:* Using the identity for , we have

The right-hand side is clearly a polynomial of degree in , which is then a linear combination of monomials

with with

In particular, from the pigeonhole principle, at least one of is at most .

Consider the contribution of the monomials for which . We can regroup this contribution as

where ranges over those with , is the monomial

and is some explicitly computable function whose exact form will not be of relevance to our argument. The number of such is equal to , so this contribution has rank at most . The remaining contributions arising from the cases and similarly have rank at most (grouping the monomials so that each monomial is only counted once), so the claim follows.

Upon restricting from to , the rank of is still at most . The two lemmas then combine to give the Ellenberg-Gijswijt bound

All that remains is to compute the asymptotic behaviour of . This can be done using the general tool of Cramér's theorem, but can also be derived from Stirling's formula (discussed in this previous post). Indeed, if , , for some summing to , Stirling's formula gives

where is the entropy function

We then have

where is the maximum entropy subject to the constraints

A routine Lagrange multiplier computation shows that the maximum occurs when

and is approximately , giving rise to the claimed bound of .
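This Lagrange multiplier computation can be reproduced numerically. In the standard presentation of the Ellenberg-Gijswijt computation, one maximises the entropy of a probability vector (a, b, c) (the limiting proportions of the exponents 0, 1, 2) subject to the mean-degree constraint b + 2c = 2/3; substituting a = 1/3 + c and b = 2/3 - 2c reduces this to a one-parameter search, and exp(H) at the maximum recovers the constant ≈ 2.755 in the bound. A rough grid-search sketch:

```python
import math

def entropy(p):
    """Shannon entropy in nats, ignoring zero entries."""
    return -sum(q * math.log(q) for q in p if q > 0)

# Maximise H(a, b, c) over probability vectors with b + 2c = 2/3,
# i.e. a = 1/3 + c and b = 2/3 - 2c, via a grid search in c in (0, 1/3).
best = max(
    (entropy((1/3 + c, 2/3 - 2*c, c)), c)
    for c in (i / 100000 for i in range(1, 33334))
)
H_star, c_star = best
print(math.exp(H_star), c_star)   # ≈ 2.755 at c ≈ 0.181
```

The exact optimum satisfies b^2 = ac, which is what the Lagrange multiplier condition reduces to here.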

Remark 3. As noted in the Ellenberg and Gijswijt papers, the above argument extends readily to other fields than to control the maximal size of a subset of that has no non-trivial solutions to the equation , where are non-zero constants that sum to zero. Of course one replaces the function in Lemma 2 by in this case.

Remark 4. This symmetrised formulation suggests one possible way to improve slightly on the numerical quantity , namely by finding a more efficient way to decompose into rank one functions; however, I was not able to do so (though such improvements are reminiscent of the Strassen type algorithms for fast matrix multiplication).

Remark 5. It is tempting to see if this method can get non-trivial upper bounds for sets with no length progressions, in (say) . One can run the above arguments, replacing the function with

this leads to the bound where

Unfortunately, is asymptotic to and so this bound is in fact slightly worse than the trivial bound ! However, there is a slim chance that there is a more efficient way to decompose into rank one functions that would give a non-trivial bound on . I experimented with a few possible such decompositions but unfortunately without success.

Remark 6. Return now to the capset problem. Since Lemma 1 is valid for any field , one could perhaps hope to get better bounds by viewing the Kronecker delta function as taking values in a field other than , such as the complex numbers . However, as soon as one works in a field of characteristic other than , one can adjoin a cube root of unity, and one now has the Fourier decomposition. Moving to the Fourier basis, we conclude from Lemma 1 that the function on now has rank exactly , and so one cannot improve upon the trivial bound of by this method using fields of characteristic other than three as the range field. So it seems one has to stick with (or the algebraic completion thereof).

Thanks to Jordan Ellenberg and Ben Green for helpful discussions.

Because of Euler’s identity , the complex exponential is not injective: for any complex and integer . As such, the complex logarithm is not well-defined as a single-valued function from to . However, after making a branch cut, one can create a branch of the logarithm which is single-valued. For instance, after removing the negative real axis , one has the *standard branch* of the logarithm, with defined as the unique choice of the complex logarithm of whose imaginary part has magnitude strictly less than . This particular branch has a number of useful additional properties:

- The standard branch is holomorphic on its domain .
- One has for all in the domain . In particular, if is real, then is real.
- One has for all in the domain .

One can then also use the standard branch of the logarithm to create standard branches of other multi-valued functions, for instance creating a standard branch of the square root function. We caution however that the identity can fail for the standard branch (or indeed for any branch of the logarithm).
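The failure of the identity for products can be exhibited concretely in Python, whose `cmath.log` implements exactly this standard branch (imaginary part in the principal range):

```python
import cmath
import math

# z has argument 3*pi/4; squaring pushes the argument past pi, so it wraps.
z = cmath.exp(0.75j * math.pi)
print(cmath.log(z * z).imag)   # ≈ -pi/2: the principal argument of z**2
print(2 * cmath.log(z).imag)   # ≈ 3*pi/2: twice the principal argument of z
```

The two values differ by 2π, which is precisely the ambiguity that a branch cut resolves only locally.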

One can extend this standard branch of the logarithm to complex matrices, or (equivalently) to linear transformations on an -dimensional complex vector space , provided that the spectrum of that matrix or transformation avoids the branch cut . Indeed, from the spectral theorem one can decompose any such as the direct sum of operators on the non-trivial generalised eigenspaces of , where ranges in the spectrum of . For each component of , we define

where is the Taylor expansion of at ; as is nilpotent, only finitely many terms in this Taylor expansion are required. The logarithm is then defined as the direct sum of the .

The matrix standard branch of the logarithm has many pleasant and easily verified properties (often inherited from their scalar counterparts), whenever has no spectrum in :

- (i) We have .
- (ii) If and have no spectrum in , then .
- (iii) If has spectrum in a closed disk in , then , where is the Taylor series of around (which is absolutely convergent in ).
- (iv) depends holomorphically on . (Easily established from (ii), (iii), after covering the spectrum of by disjoint disks; alternatively, one can use the Cauchy integral representation for a contour in the domain enclosing the spectrum of .) In particular, the standard branch of the matrix logarithm is smooth.
- (v) If is any invertible linear or antilinear map, then . In particular, the standard branch of the logarithm commutes with matrix conjugations; and if is real with respect to a complex conjugation operation on (that is to say, an antilinear involution), then is real also.
- (vi) If denotes the transpose of (with the complex dual of ), then . Similarly, if denotes the adjoint of (with the complex conjugate of , i.e. with the conjugated multiplication map ), then .
- (vii) One has .
- (viii) If denotes the spectrum of , then .
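Several of these properties can be spot-checked with `scipy.linalg.logm`, which computes this principal matrix logarithm for matrices with no eigenvalues on the closed negative real axis (a sketch assuming scipy is available; the example matrix is an arbitrary choice whose spectrum avoids the branch cut):

```python
import numpy as np
from scipy.linalg import expm, logm

# Rotation by 90 degrees: spectrum {i, -i}, which avoids the negative real axis.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
L = logm(A)
print(L)                                  # ≈ [[0, -pi/2], [pi/2, 0]]

# The exponential inverts the logarithm: exp(log A) = A.
print(np.allclose(expm(L), A))            # True

# Spectral mapping: the eigenvalues of log A are the standard logarithms
# of the eigenvalues of A, here ±(pi/2) i.
print(np.linalg.eigvals(L))
```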

As a quick application of the standard branch of the matrix logarithm, we have

Proposition 1. Let be one of the following matrix groups: , , , , , or , where is a non-degenerate real quadratic form (so is isomorphic to a (possibly indefinite) orthogonal group for some ). Then any element of whose spectrum avoids is exponential, that is to say for some in the Lie algebra of .

*Proof:* We just prove this for , as the other cases are similar (or a bit simpler). If , then (viewing as a complex-linear map on , and using the complex bilinear form associated to to identify with its complex dual ) is real and . By the properties (v), (vi), (vii) of the standard branch of the matrix logarithm, we conclude that is real and , and so lies in the Lie algebra , and the claim now follows from (i).

Exercise 2. Show that is not exponential in if . Thus we see that the branch cut in the above proposition is largely necessary. See this paper of Djokovic for a more complete description of the image of the exponential map in classical groups, as well as this previous blog post for some more discussion of the surjectivity (or lack thereof) of the exponential map in Lie groups.

For a slightly less quick application of the standard branch, we have the following result (recently worked out in the answers to this MathOverflow question):

Proposition 3. Let be an element of the split orthogonal group which lies in the connected component of the identity. Then .

The requirement that lie in the identity component is necessary, as the counterexample for shows.

*Proof:* We think of as a (real) linear transformation on , and write for the quadratic form associated to , so that . We can split , where is the sum of all the generalised eigenspaces corresponding to eigenvalues in , and is the sum of all the remaining eigenspaces. Since and are real, are real (i.e. complex-conjugation invariant) also. For , the restriction of to then lies in , where is the restriction of to , and

The spectrum of consists of positive reals, as well as complex pairs (with equal multiplicity), so . From the preceding proposition we have for some ; this will be important later.

It remains to show that . If has spectrum at then we are done, so we may assume that has spectrum only at (being invertible, has no spectrum at ). We split , where correspond to the portions of the spectrum in , ; these are real, -invariant spaces. We observe that if are generalised eigenspaces of with , then are orthogonal with respect to the (complex-bilinear) inner product associated with ; this is easiest to see first for the actual eigenspaces (since for all ), and the extension to generalised eigenvectors then follows from a routine induction. From this we see that is orthogonal to , and and are null spaces, which by the non-degeneracy of (and hence of the restriction of to ) forces to have the same dimension as , indeed now gives an identification of with . If we let be the restrictions of to , we thus identify with , since lies in ; in particular is invertible. Thus

and so it suffices to show that .

At this point we need to use the hypothesis that lies in the identity component of . This implies (by a continuity argument) that the restriction of to any maximal-dimensional positive subspace has positive determinant (since such a restriction cannot be singular, as this would mean that a positive norm vector would map to a non-positive norm vector). Now, as have equal dimension, has a balanced signature, so does also. Since , already lies in the identity component of , and so has positive determinant on any maximal-dimensional positive subspace of . We conclude that has positive determinant on any maximal-dimensional positive subspace of .

We choose a complex basis of , to identify with , which has already been identified with . (In coordinates, are now both of the form , and for .) Then becomes a maximal positive subspace of , and the restriction of to this subspace is conjugate to , so that

But since and is positive definite, so as required.

Analytic number theory is only one of many different approaches to number theory. Another important branch of the subject is algebraic number theory, which studies algebraic structures (e.g. groups, rings, and fields) of number-theoretic interest. With this perspective, the classical field of rationals , and the classical ring of integers , are placed inside the much larger field of algebraic numbers, and the much larger ring of algebraic integers, respectively. Recall that an algebraic number is a root of a polynomial with integer coefficients, and an algebraic integer is a root of a monic polynomial with integer coefficients; thus for instance is an algebraic integer (a root of ), while is merely an algebraic number (a root of ). For the purposes of this post, we will adopt the concrete (but somewhat artificial) perspective of viewing algebraic numbers and integers as lying inside the complex numbers , thus . (From a modern algebraic perspective, it is better to think of as existing as an abstract field separate from , but which has a number of embeddings into (as well as into other fields, such as the completed p-adics ), no one of which should be considered favoured over any other; cf. this mathOverflow post. But for the rudimentary algebraic number theory in this post, we will not need to work at this level of abstraction.) In particular, we identify the algebraic integer with the complex number for any natural number .

Exercise 1. Show that the field of algebraic numbers is indeed a field, and that the ring of algebraic integers is indeed a ring, and is in fact an integral domain. Also, show that , that is to say the ordinary integers are precisely the algebraic integers that are also rational. Because of this, we will sometimes refer to elements of as rational integers.

In practice, the field $\overline{{\bf Q}}$ is too big to conveniently work with directly, having infinite dimension (as a vector space) over ${\bf Q}$. Thus, algebraic number theory generally restricts attention to intermediate fields ${\bf Q} \subset k \subset \overline{{\bf Q}}$, which are of finite dimension over ${\bf Q}$; that is to say, finite degree extensions of ${\bf Q}$. Such fields are known as algebraic number fields, or *number fields* for short. Apart from ${\bf Q}$ itself, the simplest examples of such number fields are the quadratic fields, which have dimension exactly two over ${\bf Q}$.

Exercise 2 Show that if $d$ is a rational number that is not a perfect square, then the field ${\bf Q}(\sqrt{d})$ generated by ${\bf Q}$ and either of the square roots of $d$ is a quadratic field. Conversely, show that all quadratic fields arise in this fashion. (Hint: show that every element of a quadratic field is a root of a quadratic polynomial over the rationals.)

The ring $\overline{{\bf Z}}$ of algebraic integers is similarly too large to conveniently work with directly, so in algebraic number theory one usually works with the rings ${\mathcal O}_k := \overline{{\bf Z}} \cap k$ of algebraic integers inside a given number field $k$. One can (and does) study this situation in great generality, but for the purposes of this post we shall restrict attention to a simple but illustrative special case, namely the quadratic fields ${\bf Q}(\sqrt{-d})$ with a certain type of negative discriminant. (The positive discriminant case will be briefly discussed in Remark 42 below.)

Exercise 3 Let $d$ be a square-free natural number with $d = 1 \bmod 4$ or $d = 2 \bmod 4$. Show that the ring ${\mathcal O}$ of algebraic integers in ${\bf Q}(\sqrt{-d})$ is given by

$\displaystyle {\mathcal O} = {\bf Z}[\sqrt{-d}] = \{ a + b \sqrt{-d}: a, b \in {\bf Z} \}.$

If instead $d$ is square-free with $d = 3 \bmod 4$, show that the ring ${\mathcal O}$ is instead given by

$\displaystyle {\mathcal O} = {\bf Z}[\tfrac{1+\sqrt{-d}}{2}] = \{ a + b \tfrac{1+\sqrt{-d}}{2}: a, b \in {\bf Z} \}.$

What happens if $d$ is not square-free, or negative?
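The dichotomy in Exercise 3 can be sanity-checked numerically (this snippet and its helper name are mine, not the post's): the element $(1+\sqrt{-d})/2$ is a root of $x^2 - x + \frac{1+d}{4}$, which has integer coefficients exactly when $d = 3 \bmod 4$.

```python
# (1 + sqrt(-d))/2 is a root of x^2 - x + (1+d)/4; it is an algebraic integer
# exactly when (1+d)/4 is an integer, i.e. when d = 3 mod 4.  This is why the
# ring of integers enlarges beyond Z[sqrt(-d)] in that case.
from fractions import Fraction

def half_integer_is_integral(d):
    constant_term = Fraction(1 + d, 4)     # the norm of (1 + sqrt(-d))/2
    return constant_term.denominator == 1  # the trace is 1, always integral

for d in [1, 2, 3, 5, 6, 7, 11]:
    print(d, d % 4, half_integer_is_integral(d))
```

Only the rows with $d \bmod 4 = 3$ (here $d = 3, 7, 11$) report `True`.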

Remark 4 In the case $d = 3 \bmod 4$, it may naively appear more natural to work with the ring ${\bf Z}[\sqrt{-d}]$, which is an index two subring of ${\mathcal O}$. However, because this ring only captures some of the algebraic integers in ${\bf Q}(\sqrt{-d})$ rather than all of them, the algebraic properties of these rings are somewhat worse than those of ${\mathcal O}$ (in particular, they generally fail to be Dedekind domains) and so are not convenient to work with in algebraic number theory.

We refer to fields of the form ${\bf Q}(\sqrt{-d})$ for natural square-free numbers $d$ as *quadratic fields of negative discriminant*, and similarly refer to the associated ring ${\mathcal O}$ as a ring of quadratic integers of negative discriminant. Quadratic fields and quadratic integers of positive discriminant are just as important to analytic number theory as their negative discriminant counterparts, but we will restrict attention to the latter here for simplicity of discussion.

Thus, for instance, when $d = 1$, the ring of integers in ${\bf Q}(\sqrt{-1})$ is the ring of Gaussian integers

$\displaystyle {\bf Z}[i] = \{ a + bi: a, b \in {\bf Z} \}$

and when $d = 3$, the ring of integers in ${\bf Q}(\sqrt{-3})$ is the ring of Eisenstein integers

$\displaystyle {\bf Z}[\omega] = \{ a + b\omega: a, b \in {\bf Z} \}$

where $\omega := e^{2\pi i/3}$ is a cube root of unity.

As these examples illustrate, the additive structure of a ring ${\mathcal O}$ of quadratic integers is that of a two-dimensional lattice in ${\bf C}$, which is isomorphic as an additive group to ${\bf Z}^2$. Thus, from an additive viewpoint, one can view quadratic integers as "two-dimensional" analogues of rational integers. From a *multiplicative* viewpoint, however, the quadratic integers (and more generally, integers in a number field) behave very similarly to the rational integers (as opposed to being some sort of "higher-dimensional" version of such integers). Indeed, a large part of basic algebraic number theory is devoted to treating the multiplicative theory of integers in number fields in a unified fashion, that naturally generalises the classical multiplicative theory of the rational integers.

For instance, every rational integer $n$ has an absolute value $|n|$, with the multiplicativity property $|nm| = |n| |m|$ for $n, m \in {\bf Z}$, and the positivity property $|n| > 0$ for all non-zero $n$. Among other things, the absolute value detects units: $|n| = 1$ if and only if $n$ is a unit in ${\bf Z}$ (that is to say, it is multiplicatively invertible in ${\bf Z}$). Similarly, in any ring ${\mathcal O}$ of quadratic integers with negative discriminant, we can assign a norm $N(z)$ to any quadratic integer $z \in {\mathcal O}$ by the formula

$\displaystyle N(z) := z \overline{z}$

where $\overline{z}$ is the complex conjugate of $z$. (When working with other number fields than quadratic fields of negative discriminant, one instead defines $N(z)$ to be the product of all the Galois conjugates of $z$.) Thus for instance, when $d = 1, 2 \bmod 4$ one has

$\displaystyle N(a + b\sqrt{-d}) = a^2 + d b^2 \ \ \ \ \ (1)$

and when $d = 3 \bmod 4$ one has

$\displaystyle N(a + b\tfrac{1+\sqrt{-d}}{2}) = a^2 + ab + \tfrac{d+1}{4} b^2. \ \ \ \ \ (2)$

Analogously to the rational integers, we have the multiplicativity property $N(zw) = N(z) N(w)$ for $z, w \in {\mathcal O}$ and the positivity property $N(z) > 0$ for non-zero $z \in {\mathcal O}$, and the units in ${\mathcal O}$ are precisely the elements of norm one.

Exercise 5 Establish the three claims of the previous paragraph. Conclude that the units (invertible elements) of ${\mathcal O}$ consist of the four elements $\pm 1, \pm i$ if $d = 1$, the six elements $\pm 1, \pm \omega, \pm \omega^2$ if $d = 3$, and the two elements $\pm 1$ if $d \neq 1, 3$.
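The multiplicativity of the norm is easy to check numerically; here is a small sketch (my own illustration, with made-up helper names) in ${\bf Z}[\sqrt{-5}]$, where the norm form is $N(a + b\sqrt{-d}) = a^2 + d b^2$:

```python
# Multiplicativity of the norm N(a + b*sqrt(-d)) = a^2 + d*b^2 on Z[sqrt(-d)]:
# (a1 + b1*sqrt(-d))(a2 + b2*sqrt(-d))
#     = (a1*a2 - d*b1*b2) + (a1*b2 + a2*b1)*sqrt(-d).

def norm(a, b, d):
    return a * a + d * b * b

def multiply(z1, z2, d):
    (a1, b1), (a2, b2) = z1, z2
    return (a1 * a2 - d * b1 * b2, a1 * b2 + a2 * b1)

d = 5
z, w = (2, 3), (-1, 4)       # 2 + 3*sqrt(-5) and -1 + 4*sqrt(-5)
zw = multiply(z, w, d)
assert norm(*zw, d) == norm(*z, d) * norm(*w, d)
print(norm(*z, d), norm(*w, d), norm(*zw, d))   # 49 81 3969
```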

For the rational integers, we of course have the fundamental theorem of arithmetic, which asserts that every non-zero rational integer can be uniquely factored (up to permutation and units) as the product of irreducible integers, that is to say non-zero, non-unit integers that cannot be factored into the product of integers of strictly smaller norm. As it turns out, the same claim is true for a few additional rings of quadratic integers, such as the Gaussian integers and Eisenstein integers, but fails in general; for instance, in the ring ${\bf Z}[\sqrt{-5}]$, we have the famous counterexample

$\displaystyle 6 = 2 \times 3 = (1+\sqrt{-5})(1-\sqrt{-5})$

that decomposes non-uniquely into the product of irreducibles in ${\bf Z}[\sqrt{-5}]$. Nevertheless, it is an important fact that the fundamental theorem of arithmetic can be salvaged if one uses an "idealised" notion of a number in a ring ${\mathcal O}$ of integers, now known in modern language as an ideal of that ring. For instance, in ${\bf Z}[\sqrt{-5}]$, the principal ideal $(6)$ turns out to uniquely factor into the product of (non-principal) ideals $(2, 1+\sqrt{-5})^2 (3, 1+\sqrt{-5}) (3, 1-\sqrt{-5})$; see Exercise 27. We will review the basic theory of ideals in number fields (focusing primarily on quadratic fields of negative discriminant) below the fold.
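One can certify this counterexample with a short norm computation (a sketch of mine, not from the post): the factors $2, 3, 1 \pm \sqrt{-5}$ have norms $4, 9, 6$, so a nontrivial factorisation of any of them would require an element of ${\bf Z}[\sqrt{-5}]$ of norm $2$ or $3$, and the form $a^2 + 5b^2$ visibly never takes those values.

```python
# In Z[sqrt(-5)], N(a + b*sqrt(-5)) = a^2 + 5*b^2.  The factors 2, 3 and
# 1 +- sqrt(-5) have norms 4, 9 and 6; since no element has norm 2 or 3,
# all four factors are irreducible, certifying the non-unique factorisation.

def norms_represented(bound):
    values = set()
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            values.add(a * a + 5 * b * b)
    return values

reachable = norms_represented(10)
print(2 in reachable, 3 in reachable)   # False False
# Norm check on the two factorisations of 6:
print((1 + 5) * (1 + 5))                # N(1+sqrt(-5)) * N(1-sqrt(-5)) = 36
print(4 * 9)                            # N(2) * N(3) = 36
```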

The norm forms (1), (2) can be viewed as examples of positive definite quadratic forms over the integers, by which we mean a polynomial of the form

$\displaystyle Q(x,y) = ax^2 + bxy + cy^2$

for some integer coefficients $a, b, c$. One can declare two quadratic forms $Q, Q'$ to be *equivalent* if one can transform one to the other by an invertible linear transformation $T \in GL_2({\bf Z})$, so that $Q'(x,y) = Q(T(x,y))$. For example, the quadratic forms $x^2 + y^2$ and $x^2 + 2xy + 2y^2$ are equivalent, as can be seen by using the invertible linear transformation $(x,y) \mapsto (x+y, y)$. Such equivalences correspond to the different choices of basis available when expressing a ring such as ${\mathcal O}$ (or an ideal thereof) additively as a copy of ${\bf Z}^2$.

There is an important and classical invariant of a quadratic form $ax^2 + bxy + cy^2$, namely the discriminant $\Delta := b^2 - 4ac$, which will of course be familiar to most readers via the quadratic formula, which among other things tells us that a quadratic form (with $a$ positive) will be positive definite precisely when its discriminant is negative. It is not difficult (particularly if one exploits the multiplicativity of the determinant of $2 \times 2$ matrices) to show that two equivalent quadratic forms have the same discriminant. Thus for instance any quadratic form equivalent to (1) has discriminant $-4d$, while any quadratic form equivalent to (2) has discriminant $-d$. Thus we see that each ring ${\mathcal O}$ of quadratic integers is associated with a certain negative discriminant $D$, defined to equal $-4d$ when $d = 1, 2 \bmod 4$ and $-d$ when $d = 3 \bmod 4$.
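The invariance of the discriminant can be verified symbolically or, as in this little sketch of mine, by direct substitution: applying $(x,y) \mapsto (px+qy, rx+sy)$ to $ax^2+bxy+cy^2$ multiplies $b^2-4ac$ by $(ps-qr)^2$, which is $1$ for any $T \in GL_2({\bf Z})$.

```python
# Apply the substitution (x, y) -> (p*x + q*y, r*x + s*y) to the form
# a*x^2 + b*x*y + c*y^2; the discriminant b^2 - 4ac picks up a factor
# (p*s - q*r)^2, hence is invariant under GL_2(Z) changes of basis.

def transform(form, T):
    a, b, c = form
    p, q, r, s = T
    return (a*p*p + b*p*r + c*r*r,
            2*a*p*q + b*(p*s + q*r) + 2*c*r*s,
            a*q*q + b*q*s + c*s*s)

def disc(form):
    a, b, c = form
    return b * b - 4 * a * c

# x^2 + y^2 under (x, y) -> (x + y, y) becomes x^2 + 2xy + 2y^2:
print(transform((1, 0, 1), (1, 1, 0, 1)))   # (1, 2, 2)
print(disc((1, 0, 1)), disc((1, 2, 2)))     # -4 -4
```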

Exercise 6 (Geometric interpretation of discriminant) Let $Q$ be a quadratic form of negative discriminant $D$, and extend it to a real form $Q: {\bf R}^2 \rightarrow {\bf R}$ in the obvious fashion. Show that for any $X > 0$, the set $\{ (x,y) \in {\bf R}^2: Q(x,y) \leq X \}$ is an ellipse of area $2\pi X / \sqrt{|D|}$.

It is natural to ask the converse question: if two quadratic forms have the same discriminant, are they necessarily equivalent? For certain choices of discriminant, this is the case:

Exercise 7 Show that any quadratic form of discriminant $-4$ is equivalent to the form $x^2 + y^2$, and any quadratic form of discriminant $-3$ is equivalent to $x^2 + xy + y^2$. (Hint: use elementary transformations to try to make the middle coefficient $b$ as small as possible, to the point where one only has to check a finite number of cases; this argument is due to Legendre.) More generally, show that for any negative discriminant $D$, there are only finitely many quadratic forms of that discriminant up to equivalence (a result first established by Gauss).
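The reduction argument in the hint lends itself to a short computation (my own sketch, using the classical notion of a *reduced* form $|b| \le a \le c$, with $b \ge 0$ when $|b| = a$ or $a = c$): enumerating reduced positive definite forms of a given discriminant counts the equivalence classes.

```python
# Enumerate reduced positive definite forms a*x^2 + b*x*y + c*y^2 of
# discriminant D < 0: |b| <= a <= c, with b >= 0 when |b| = a or a = c.
# Every positive definite form of discriminant D is equivalent to exactly
# one reduced form, so the list length is the number of classes.

def reduced_forms(D):
    assert D < 0 and D % 4 in (0, 1)    # discriminants are 0 or 1 mod 4
    forms = []
    a = 1
    while 3 * a * a <= -D:              # reduction forces 3a^2 <= |D|
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (b < 0 and (a == c or a == abs(b))):
                    forms.append((a, b, c))
        a += 1
    return forms

print(reduced_forms(-4))    # [(1, 0, 1)]          -- only x^2 + y^2
print(reduced_forms(-3))    # [(1, 1, 1)]          -- only x^2 + xy + y^2
print(reduced_forms(-20))   # [(1, 0, 5), (2, 2, 3)] -- two classes
```

The discriminants $-4$ and $-3$ each give a single class, matching the exercise, while $-20$ already gives two, foreshadowing the next paragraph.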

Unfortunately, for most choices of discriminant, the converse question fails; for instance, the quadratic forms $x^2 + 5y^2$ and $2x^2 + 2xy + 3y^2$ both have discriminant $-20$, but are not equivalent (Exercise 38). This particular failure of equivalence turns out to be intimately related to the failure of unique factorisation in the ring ${\bf Z}[\sqrt{-5}]$.

It turns out that there is a fundamental connection between quadratic fields, equivalence classes of quadratic forms of a given discriminant, and real Dirichlet characters, thus connecting the material discussed above with the last section of the previous set of notes. Here is a typical instance of this connection:

Proposition 8 Let $\chi_4$ be the real non-principal Dirichlet character of modulus $4$, or more explicitly $\chi_4(n)$ is equal to $+1$ when $n = 1 \bmod 4$, $-1$ when $n = 3 \bmod 4$, and $0$ when $n = 0, 2 \bmod 4$.

- (i) For any natural number $n$, the number of Gaussian integers $m \in {\bf Z}[i]$ with norm $N(m) = n$ is equal to $4(1 * \chi_4)(n)$. Equivalently, the number of solutions to the equation $n = a^2 + b^2$ with $a, b \in {\bf Z}$ is $4 (1 * \chi_4)(n)$. (Here, as in the previous post, the symbol $*$ denotes Dirichlet convolution.)
- (ii) For any natural number $n$, the number of Gaussian integers $m \in {\bf Z}[i]$ that divide $n$ (thus $n = mw$ for some $w \in {\bf Z}[i]$) is .

We will prove this proposition later in these notes. We observe that as a special case of part (i) of this proposition, we recover the Fermat two-square theorem: an odd prime $p$ is expressible as the sum of two squares if and only if $p = 1 \bmod 4$. This proposition should also be compared with the fact, used crucially in the previous post to prove Dirichlet's theorem, that $(1 * \chi)(n)$ is non-negative for any $n$, and at least one when $n$ is a square, for any quadratic character $\chi$.
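Part (i) is easy to test empirically; the following brute-force check (my own, not part of the notes) compares the lattice-point count $r_2(n) = \#\{(a,b) \in {\bf Z}^2: a^2+b^2=n\}$ against $4 \sum_{d | n} \chi_4(d)$:

```python
# Check r_2(n) = #{(a,b) in Z^2 : a^2 + b^2 = n} against 4*sum_{d|n} chi_4(d),
# where chi_4 is the non-principal Dirichlet character mod 4.

def chi4(n):
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

def r2(n):
    bound = int(n ** 0.5) + 1
    return sum(1 for a in range(-bound, bound + 1)
                 for b in range(-bound, bound + 1)
                 if a * a + b * b == n)

def divisor_sum(n):
    return 4 * sum(chi4(d) for d in range(1, n + 1) if n % d == 0)

for n in range(1, 50):
    assert r2(n) == divisor_sum(n)
print(r2(5), divisor_sum(5))    # 8 8  (5 = 1 mod 4 is a sum of two squares)
print(r2(21), divisor_sum(21))  # 0 0  (3 and 7 are both 3 mod 4)
```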

As an illustration of the relevance of such connections to analytic number theory, let us now explicitly compute $L(1, \chi_4)$.

Corollary 9 One has

$\displaystyle L(1, \chi_4) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \dots = \frac{\pi}{4}.$

This particular identity is also known as the Leibniz formula.

*Proof:* For a large number $x$, consider the quantity

$\displaystyle S(x) := \# \{ m \in {\bf Z}[i]: N(m) \leq x \}$

of all the Gaussian integers of norm at most $x$. On the one hand, this is the same as the number of lattice points of ${\bf Z}^2$ in the disk of radius $\sqrt{x}$. Placing a unit square centred at each such lattice point, we obtain a region which differs from the disk by a region contained in an annulus of area $O(\sqrt{x})$. As the area of the disk is $\pi x$, we conclude the Gauss bound

$\displaystyle S(x) = \pi x + O(\sqrt{x}).$

On the other hand, by Proposition 8(i) (and removing the $m = 0$ contribution), we see that

$\displaystyle S(x) = 1 + 4 \sum_{n \leq x} (1 * \chi_4)(n).$

Now we use the Dirichlet hyperbola method to expand the right-hand side sum, first expressing

$\displaystyle \sum_{n \leq x} (1 * \chi_4)(n) = \sum_{d \leq \sqrt{x}} \chi_4(d) \sum_{m \leq x/d} 1 + \sum_{m \leq \sqrt{x}} \sum_{\sqrt{x} < d \leq x/m} \chi_4(d)$

and then using the bounds $\sum_{m \leq y} 1 = y + O(1)$, $\sum_{d \leq y} \chi_4(d) = O(1)$, $\sum_{d \leq y} \frac{\chi_4(d)}{d} = L(1,\chi_4) + O(1/y)$ from the previous set of notes to conclude that

$\displaystyle \sum_{n \leq x} (1 * \chi_4)(n) = L(1, \chi_4) x + O(\sqrt{x}).$

Comparing the two formulae for $S(x)$ and sending $x \rightarrow \infty$, we obtain the claim. $\Box$
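The three quantities appearing in this proof can be compared numerically; the following check (mine, for illustration) confirms that the lattice-point count in the disk of radius $\sqrt{x}$ tracks $\pi x$, and that the partial sums of the Leibniz series converge to $\pi/4$:

```python
# Compare the lattice-point count in the disk of radius sqrt(x) with the
# area pi*x, and the partial sums of 4*(1 - 1/3 + 1/5 - ...) with pi.
import math

def lattice_points_in_disk(x):
    R = math.isqrt(x)
    return sum(1 for a in range(-R, R + 1) for b in range(-R, R + 1)
               if a * a + b * b <= x)

x = 10_000
count = lattice_points_in_disk(x)
print(count / x)                  # close to pi = 3.14159...
leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(10_000))
print(leibniz)                    # also close to pi
assert abs(count / x - math.pi) < 0.05
assert abs(leibniz - math.pi) < 1e-3
```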

Exercise 10 Give an alternate proof of Corollary 9 that relies on obtaining asymptotics for the Dirichlet series $\sum_{n=1}^\infty \frac{(1 * \chi_4)(n)}{n^s}$ as $s \rightarrow 1^+$, rather than using the Dirichlet hyperbola method.

Exercise 11 Give a direct proof of Corollary 9 that does not use Proposition 8, instead using Taylor expansion of the complex logarithm $\log(1+z)$. (One can also use Taylor expansions of some other functions related to the complex logarithm here, such as the arctangent function.)

More generally, one can relate $L(1, \chi)$ for a real Dirichlet character $\chi$ with the number of inequivalent quadratic forms of a certain discriminant, via the famous class number formula; we will give a special case of this formula below the fold.

The material here is only a very rudimentary introduction to algebraic number theory, and is not essential to the rest of the course. A slightly expanded version of the material here, from the perspective of analytic number theory, may be found in Sections 5 and 6 of Davenport’s book. A more in-depth treatment of algebraic number theory may be found in a number of texts, e.g. Fröhlich and Taylor.

As laid out in the foundational work of Kolmogorov, a *classical probability space* (or probability space for short) is a triplet $(X, {\mathcal X}, \mu)$, where $X$ is a set, ${\mathcal X}$ is a $\sigma$-algebra of subsets of $X$, and $\mu: {\mathcal X} \rightarrow [0,1]$ is a countably additive probability measure on ${\mathcal X}$. Given such a space, one can form a number of interesting function spaces, including

- the (real) Hilbert space $L^2(X, {\mathcal X}, \mu)$ of square-integrable functions $f: X \rightarrow {\bf R}$, modulo $\mu$-almost everywhere equivalence, and with the positive definite inner product $\langle f, g \rangle := \int_X f g\ d\mu$; and
- the unital commutative Banach algebra $L^\infty(X, {\mathcal X}, \mu)$ of essentially bounded functions $f: X \rightarrow {\bf R}$, modulo $\mu$-almost everywhere equivalence, with $\|f\|_{L^\infty}$ defined as the essential supremum of $|f|$.

There is also a trace $\tau = \tau_\mu: L^\infty(X, {\mathcal X}, \mu) \rightarrow {\bf R}$ on $L^\infty$ defined by integration: $\tau(f) := \int_X f\ d\mu$.
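On a finite probability space all of this collapses to elementary linear algebra, which makes for a convenient toy model; the sketch below (my own illustration, with invented names) realises the trace and the $L^2$ inner product for functions on a three-point space:

```python
# A toy classical probability space X = {0, 1, 2} with measure mu.  Every
# function is bounded; the trace is tau(f) = sum_x f(x)*mu(x) and the L^2
# inner product is <f, g> = tau(f*g).
from fractions import Fraction

mu = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]  # a probability measure

def trace(f):
    return sum(fx * m for fx, m in zip(f, mu))

def inner(f, g):
    return trace([fx * gx for fx, gx in zip(f, g)])

one = [1, 1, 1]
f = [2, -1, 3]
print(trace(one))        # 1: the trace is unital
print(trace(f))          # 2*(1/2) - 1*(1/4) + 3*(1/4) = 3/2
print(inner(f, f) >= 0)  # positivity of the inner product
```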

One can form the category of classical probability spaces, by defining a morphism $\phi: (X, {\mathcal X}, \mu) \rightarrow (Y, {\mathcal Y}, \nu)$ between probability spaces to be a function $\phi: X \rightarrow Y$ which is measurable (thus $\phi^{-1}(E) \in {\mathcal X}$ for all $E \in {\mathcal Y}$) and measure-preserving (thus $\mu(\phi^{-1}(E)) = \nu(E)$ for all $E \in {\mathcal Y}$).

Let us now abstract the algebraic features of these spaces as follows; for want of a better name, I will refer to this abstraction as an *algebraic probability space*; it is very similar to the non-commutative probability spaces studied in this previous post, except that these spaces are now commutative (and real).

Definition 1 An *algebraic probability space* is a pair $({\mathcal A}, \tau)$ where

- ${\mathcal A}$ is a unital commutative real algebra;
- $\tau: {\mathcal A} \rightarrow {\bf R}$ is a linear functional such that $\tau(1) = 1$ and $\tau(a^2) \geq 0$ for all $a \in {\mathcal A}$;
- every element $a$ of ${\mathcal A}$ is *bounded* in the sense that $\sup_{k \geq 1} \tau(a^{2k})^{1/2k} < \infty$. (Technically, this isn't an algebraic property, but I need it for technical reasons.)

A morphism $\phi: ({\mathcal A}_1, \tau_1) \rightarrow ({\mathcal A}_2, \tau_2)$ between algebraic probability spaces is a homomorphism $\phi^*: {\mathcal A}_2 \rightarrow {\mathcal A}_1$ which is trace-preserving, in the sense that $\tau_1(\phi^*(a)) = \tau_2(a)$ for all $a \in {\mathcal A}_2$.

For want of a better name, I’ll denote the category of algebraic probability spaces as . One can view this category as the opposite category to that of (a subcategory of) the category of tracial commutative real algebras. One could emphasise this opposite nature by denoting the algebraic probability space as rather than ; another suggestive (but slightly inaccurate) notation, inspired by the language of schemes, would be rather than . However, we will not adopt these conventions here, and refer to algebraic probability spaces just by the pair .

By the previous discussion, we have a covariant functor that takes a classical probability space $(X, {\mathcal X}, \mu)$ to its algebraic counterpart $(L^\infty(X, {\mathcal X}, \mu), \tau_\mu)$, with a morphism $\phi: (X, {\mathcal X}, \mu) \rightarrow (Y, {\mathcal Y}, \nu)$ of classical probability spaces mapping to a morphism of the corresponding algebraic probability spaces by the pullback formula

$\displaystyle \phi^* f := f \circ \phi$

for $f \in L^\infty(Y, {\mathcal Y}, \nu)$. One easily verifies that this is a functor.

In this post I would like to describe a functor which partially inverts this one (up to natural isomorphism), that is to say a recipe for starting with an algebraic probability space $({\mathcal A}, \tau)$ and producing a classical probability space $(X, {\mathcal X}, \mu)$. This recipe is not new – it is basically the (commutative) Gelfand-Naimark-Segal construction (discussed in this previous post) combined with the Loomis-Sikorski theorem (discussed in this previous post). However, I wanted to put the construction in a single location for sake of reference. I also wanted to make the point that these two functors are not complete inverses; there is a bit of information in the algebraic probability space (e.g. topological information) which is lost when passing back to the classical probability space. In some future posts, I would like to develop some ergodic theory using the algebraic foundations of probability theory rather than the classical foundations; this turns out to be convenient in the ergodic theory arising from nonstandard analysis (such as that described in this previous post), in which the groups involved are uncountable and the underlying spaces are not standard Borel spaces.

Let us describe how to construct this inverse functor, with details postponed to below the fold.

- Starting with an algebraic probability space $({\mathcal A}, \tau)$, form an inner product on ${\mathcal A}$ by the formula $\langle a, b \rangle := \tau(ab)$, and also form the spectral radius $\rho(a) := \lim_{k \rightarrow \infty} \tau(a^{2k})^{1/2k}$.
- The inner product is clearly positive semi-definite. Quotienting out the null vectors and taking completions, we arrive at a real Hilbert space $L^2$, to which the trace $\tau$ may be extended.
- Somewhat less obviously, the spectral radius is well-defined and gives a norm on ${\mathcal A}$. Taking limits of sequences in ${\mathcal A}$ of bounded spectral radius gives us a subspace $L^\infty$ of $L^2$ that has the structure of a real commutative Banach algebra.
- The idempotents $1_E$ of the Banach algebra $L^\infty$ may be indexed by elements $E$ of an abstract $\sigma$-algebra ${\mathcal B}$.
- The Boolean algebra homomorphisms $\lambda: {\mathcal B} \rightarrow \{0,1\}$ (or equivalently, the real algebra homomorphisms $\lambda: L^\infty \rightarrow {\bf R}$) may be indexed by elements $x$ of a space $X$.
- Let ${\mathcal X}$ denote the $\sigma$-algebra on $X$ generated by the basic sets $\{ x \in X: x(E) = 1 \}$ for every $E \in {\mathcal B}$.
- Let ${\mathcal N}$ be the $\sigma$-ideal of ${\mathcal X}$ generated by the sets , where is a sequence with .
- One verifies that ${\mathcal X}/{\mathcal N}$ is isomorphic to ${\mathcal B}$. Using this isomorphism, the trace $\tau$ on $L^\infty$ can be used to construct a countably additive measure $\mu$ on ${\mathcal X}$. The classical probability space is then $(X, {\mathcal X}, \mu)$, and the abstract spaces $L^2, L^\infty$ may now be identified with their concrete counterparts $L^2(X, {\mathcal X}, \mu)$, $L^\infty(X, {\mathcal X}, \mu)$.
- Every algebraic probability space morphism generates a classical probability morphism via the formula
using a pullback operation on the abstract $\sigma$-algebras that can be defined by density.
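The first step of the construction above can be made concrete in the finite toy model: when ${\mathcal A}$ is the algebra of functions on a finite set with $\tau$ the mean, the quantity $\tau(a^{2k})^{1/2k}$ converges to the sup norm as $k \rightarrow \infty$, which is how the construction recovers $L^\infty$ data purely from the trace. A sketch of mine (names invented):

```python
# In the finite model A = functions on {0,...,n-1} with trace tau(f) = mean,
# the "spectral radius" lim_k tau(f^(2k))^(1/(2k)) recovers the sup norm
# sup |f|, using only trace data.

def trace(f):
    return sum(f) / len(f)

def spectral_radius(f, k):
    return trace([x ** (2 * k) for x in f]) ** (1 / (2 * k))

f = [0.5, -1.3, 1.0, 0.9]
for k in [1, 10, 100]:
    print(spectral_radius(f, k))   # increases toward sup |f| = 1.3
assert abs(spectral_radius(f, 500) - 1.3) < 0.01
```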

Remark 1 The classical probability space $(X, {\mathcal X}, \mu)$ constructed by this functor has some additional structure; namely $X$ is a $\sigma$-Stone space (a Stone space with the property that the closure of any countable union of clopen sets is clopen), ${\mathcal X}$ is the Baire $\sigma$-algebra (generated by the clopen sets), and the null sets are the meager sets. However, we will not use this additional structure here.

The partial inversion relationship between these two functors is given by the following assertion:

- There is a natural transformation from the composition of the two functors to the identity functor.

More informally: if one starts with an algebraic probability space $({\mathcal A}, \tau)$ and converts it back into a classical probability space $(X, {\mathcal X}, \mu)$, then there is a trace-preserving algebra homomorphism of ${\mathcal A}$ to $L^\infty(X, {\mathcal X}, \mu)$, which respects morphisms of the algebraic probability space. While this relationship is far weaker than an equivalence of categories (which would require the natural transformations in both directions to be natural isomorphisms), it is still good enough to allow many ergodic theory problems formulated using classical probability spaces to be reformulated instead as an equivalent problem in algebraic probability spaces.

Remark 2 The opposite composition is a little odd: it takes an arbitrary probability space $(X, {\mathcal X}, \mu)$ and returns a more complicated probability space $(X', {\mathcal X}', \mu')$, with $X'$ being the space of homomorphisms $\lambda: L^\infty(X, {\mathcal X}, \mu) \rightarrow {\bf R}$. While there is "morally" an embedding of $X$ into $X'$ using the evaluation map, this map does not exist in general because points in $X$ may well have zero measure. However, if one takes a "pointless" approach and focuses just on the measure algebras $({\mathcal X}, \mu)$, $({\mathcal X}', \mu')$, then these algebras become naturally isomorphic after quotienting out by null sets.

Remark 3 An algebraic probability space $({\mathcal A}, \tau)$ captures a bit more structure than a classical probability space, because ${\mathcal A}$ may be identified with a proper subset of $L^\infty$ that describes the "regular" functions (or random variables) of the space. For instance, starting with the unit circle ${\bf R}/{\bf Z}$ (with the usual Haar measure $\mu$ and the usual trace $\tau(f) = \int_{{\bf R}/{\bf Z}} f\ d\mu$), any unital subalgebra ${\mathcal A}$ of $L^\infty({\bf R}/{\bf Z})$ that is dense in $L^2({\bf R}/{\bf Z})$ will generate the same classical probability space on applying the inverse functor, namely the space of homomorphisms from ${\mathcal A}$ to ${\bf R}$ (with the measure induced from $\tau$). Thus for instance ${\mathcal A}$ could be the continuous functions $C({\bf R}/{\bf Z})$, the Wiener algebra $A({\bf R}/{\bf Z})$ or the full space $L^\infty({\bf R}/{\bf Z})$, but the classical space will be unable to distinguish these spaces from each other. In particular, the inverse functor loses information (roughly speaking, this functor takes an algebraic probability space and completes it to a von Neumann algebra, but then forgets exactly what algebra was initially used to create this completion). In ergodic theory, this sort of "extra structure" is traditionally encoded in topological terms, by assuming that the underlying probability space has a nice topological structure (e.g. a standard Borel space); however, with the algebraic perspective one has the freedom to have non-topological notions of extra structure, by choosing ${\mathcal A}$ to be something other than an algebra of continuous functions on a topological space. I hope to discuss one such example of extra structure (coming from the Gowers-Host-Kra theory of uniformity seminorms) in a later blog post (this generalises the example of the Wiener algebra given previously, which is encoding "Fourier structure").

A small example of how one could use these functors is as follows. Suppose one has a classical probability space $(X, {\mathcal X}, \mu)$ with a measure-preserving action $(T^g)_{g \in G}$ of an uncountable group $G$, which is only defined (and an action) up to almost everywhere equivalence; thus for instance for any measurable set $E$ and any $g, h \in G$, $T^{gh} E$ and $T^g T^h E$ might not be exactly equal, but only equal up to a null set. For similar reasons, an element $E$ of the invariant factor might not be exactly invariant with respect to $G$, but instead one only has $T^g E$ and $E$ equal up to null sets for each $g \in G$. One might like to "clean up" the action of $G$ to make it defined everywhere, and a genuine action everywhere, but this is not immediately achievable if $G$ is uncountable, since the union of all the null sets where something bad occurs may cease to be a null set. However, by applying the functor from classical to algebraic probability spaces, each shift $T^g$ defines a morphism on the associated algebraic probability space (i.e. the Koopman operator), and then applying the inverse functor, we obtain a shift $\tilde T^g$ on a new classical probability space $(\tilde X, \tilde {\mathcal X}, \tilde \mu)$ which now gives a genuine measure-preserving action of $G$, and which is equivalent to the original action from a measure algebra standpoint. The invariant factor now consists of those sets in $\tilde {\mathcal X}$ which are genuinely $G$-invariant, not just up to null sets. (Basically, the classical probability space $(\tilde X, \tilde {\mathcal X}, \tilde \mu)$ contains a Boolean algebra with the property that every measurable set is equivalent up to null sets to precisely one set in this algebra, allowing for a canonical "retraction" onto it that eliminates all null set issues.)

More indirectly, these functors suggest that one should be able to develop a "pointless" form of ergodic theory, in which the underlying probability spaces are given algebraically rather than classically. I hope to give some more specific examples of this in later posts.
