You are currently browsing the category archive for the ‘Mathematics’ category.

Given two unit vectors $u, v$ in a real inner product space, one can define the *correlation* between these vectors to be their inner product $\langle u, v \rangle$, or in more geometric terms, the cosine of the angle $\angle(u,v)$ subtended by $u$ and $v$. By the Cauchy-Schwarz inequality, this is a quantity between $-1$ and $+1$, with the extreme positive correlation $+1$ occurring when $u, v$ are identical, the extreme negative correlation $-1$ occurring when $u, v$ are diametrically opposite, and the zero correlation occurring when $u, v$ are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables $X, Y$, which is the same as the correlation between two unit vectors $\tilde X, \tilde Y$ lying in the Hilbert space of square-integrable random variables, with $\tilde X$ being the normalisation of $X$ defined by subtracting off the mean and then dividing by the standard deviation of $X$, and similarly for $\tilde Y$ and $Y$.

One can also define correlation for complex (Hermitian) inner product spaces by taking the real part of the complex inner product to recover a real inner product.

While reading the (highly recommended) recent popular maths book “How not to be wrong”, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if $X$ correlates with $Y$, and $Y$ correlates with $Z$, then this does not imply that $X$ correlates with $Z$. A simple geometric example is provided by the three unit vectors

$u := (1,0), \quad v := \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right), \quad w := (0,1)$

in the Euclidean plane $\mathbf{R}^2$: $u$ and $v$ have a positive correlation of $\frac{1}{\sqrt{2}}$, as do $v$ and $w$, but $u$ and $w$ are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and memorising from flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.
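A small numerical sketch makes the failure of transitivity concrete. The three vectors below are a standard illustrative choice (two vectors at $45°$ to a common intermediary), not necessarily the exact example displayed above:

```python
import math

# Three unit vectors in the Euclidean plane R^2 (illustrative choice):
# v sits at 45 degrees between u and w.
u = (1.0, 0.0)
v = (1.0 / math.sqrt(2), 1.0 / math.sqrt(2))
w = (0.0, 1.0)

def corr(a, b):
    """Correlation of two unit vectors = their inner product."""
    return sum(x * y for x, y in zip(a, b))

print(corr(u, v))  # ~0.707: u and v positively correlated
print(corr(v, w))  # ~0.707: v and w positively correlated
print(corr(u, w))  # 0.0:    yet u and w are completely uncorrelated
```

The inner product only controls angles pairwise, so two large pairwise correlations through a middleman need not force any correlation between the endpoints.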

However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are *very* close to $1$: if $u, v, w$ are unit vectors such that $u$ is *very* highly correlated with $v$, and $v$ is *very* highly correlated with $w$, then this *does* imply that $u$ is very highly correlated with $w$. Indeed, from the identity

$\|u - v\| = \sqrt{2} \, (1 - \langle u, v \rangle)^{1/2}$

(and similarly for $v, w$ and $u, w$) and the triangle inequality $\|u - w\| \leq \|u - v\| + \|v - w\|$, we have

$(1 - \langle u, w \rangle)^{1/2} \leq (1 - \langle u, v \rangle)^{1/2} + (1 - \langle v, w \rangle)^{1/2}. \quad (1)$

Thus, for instance, if $\langle u, v \rangle \geq 1 - \varepsilon$ and $\langle v, w \rangle \geq 1 - \varepsilon$, then $\langle u, w \rangle \geq 1 - 4\varepsilon$. This is of course closely related to (though slightly weaker than) the triangle inequality for angles:

$\angle(u, w) \leq \angle(u, v) + \angle(v, w).$
Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors $u, v, w$ with $\|u\|, \|v\|, \|w\| \leq 1$. This comes by extending $u, v, w$ in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space $H$ if necessary. More concretely, one can apply (1) to the unit vectors $(u, \sqrt{1-\|u\|^2}, 0, 0)$, $(v, 0, \sqrt{1-\|v\|^2}, 0)$, $(w, 0, 0, \sqrt{1-\|w\|^2})$ in $H \times \mathbf{R}^3$.
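A quick numerical sanity check: for unit vectors, $\|u - v\|^2 = 2(1 - \langle u, v\rangle)$, so the triangle inequality for distances gives $(1 - \langle u, w\rangle)^{1/2} \leq (1 - \langle u, v\rangle)^{1/2} + (1 - \langle v, w\rangle)^{1/2}$. The sketch below tests this on random unit vectors:

```python
import math
import random

random.seed(0)

def unit(d):
    """A random unit vector in R^d (Gaussian direction, then normalised)."""
    v = [random.gauss(0, 1) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ip(a, b):
    return sum(x * y for x, y in zip(a, b))

# Check sqrt(1 - <u,w>) <= sqrt(1 - <u,v>) + sqrt(1 - <v,w>) on random triples.
violations = 0
for _ in range(1000):
    u, v, w = unit(5), unit(5), unit(5)
    lhs = math.sqrt(max(0.0, 1 - ip(u, w)))
    rhs = math.sqrt(max(0.0, 1 - ip(u, v))) + math.sqrt(max(0.0, 1 - ip(v, w)))
    if lhs > rhs + 1e-9:
        violations += 1
print(violations)  # 0: the inequality holds in every trial
```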

But even in the “0%” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the *van der Corput lemma*, which basically asserts that if a unit vector $v$ is correlated with *many* unit vectors $u_1, \dots, u_n$, then many of the pairs $u_i, u_j$ will then be correlated with each other. Indeed, from the Cauchy-Schwarz inequality one has

$\left( \sum_{i=1}^n \langle v, u_i \rangle \right)^2 \leq \sum_{i=1}^n \sum_{j=1}^n \langle u_i, u_j \rangle. \quad (2)$

Thus, for instance, if $\langle v, u_i \rangle \geq \delta$ for at least $\delta n$ values of $i$ (and $\langle v, u_i \rangle \geq 0$ for the rest), then $\sum_{i,j} \langle u_i, u_j \rangle$ must be at least $\delta^4 n^2$, which implies that $\langle u_i, u_j \rangle \geq \delta^4/2$ for at least $\delta^4 n^2/2$ pairs $(i,j)$. Or as another example: if a random variable $X$ exhibits at least $\delta$ positive correlation with $n$ other random variables $Y_1, \dots, Y_n$, then if $n > 1/\delta^2$, at least two distinct $Y_i, Y_j$ must have positive correlation with each other (although this argument does not tell you *which* pair $Y_i, Y_j$ are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
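The underlying Cauchy-Schwarz step, namely $(\sum_i \langle v, u_i\rangle)^2 \leq \|\sum_i u_i\|^2 = \sum_{i,j} \langle u_i, u_j\rangle$ for a unit vector $v$, can be checked numerically; the sketch below does so on random vectors:

```python
import math
import random

random.seed(1)

def unit(d):
    v = [random.gauss(0, 1) for _ in range(d)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def ip(a, b):
    return sum(x * y for x, y in zip(a, b))

d, n = 8, 20
v = unit(d)
us = [unit(d) for _ in range(n)]

# (sum_i <v, u_i>)^2, controlled by Cauchy-Schwarz applied to sum_i u_i:
lhs = sum(ip(v, u) for u in us) ** 2
# sum_{i,j} <u_i, u_j> = ||sum_i u_i||^2
rhs = sum(ip(ui, uj) for ui in us for uj in us)
print(lhs <= rhs + 1e-9)  # True
```

If $v$ correlates with many of the $u_i$, the left-hand side is large, forcing many of the pairwise inner products on the right to be large as well.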

A similar argument (multiplying each $u_i$ by an appropriate sign $\pm 1$) shows the related van der Corput inequality

$\sum_{i=1}^n |\langle v, u_i \rangle| \leq \left( \sum_{i=1}^n \sum_{j=1}^n |\langle u_i, u_j \rangle| \right)^{1/2}, \quad (3)$

and this inequality is also true for complex inner product spaces. (Also, the $u_i$ do not need to be unit vectors for this inequality to hold.)

Geometrically, the picture is this: if $v$ positively correlates with all of the $u_1, \dots, u_n$, then the $u_i$ are all squashed into a somewhat narrow cone centred at $v$. The cone is still wide enough to allow a few pairs $u_i, u_j$ to be orthogonal (or even negatively correlated) with each other, but (when $n$ is large enough) it is not wide enough to allow *all* of the $u_i$ to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)

A particularly common special case of the van der Corput inequality arises when $v$ is a unit vector fixed by some unitary operator $T$, and the $u_i$ are shifts $u_i = T^i u$ of a single unit vector $u$. In this case, the inner products $\langle v, T^i u \rangle$ are all equal to $\langle v, u \rangle$, and we arrive at the useful van der Corput inequality

$|\langle v, u \rangle| \leq \frac{1}{n} \left( \sum_{i=1}^n \sum_{j=1}^n |\langle T^i u, T^j u \rangle| \right)^{1/2}. \quad (4)$

(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (3).) Thus, to show that $v$ has negligible correlation with $u$, it suffices to show that the shifts of $u$ have negligible correlation with each other.

Here is a basic application of the van der Corput inequality:

Proposition 1 (Weyl equidistribution estimate) Let $P: \mathbf{Z} \to \mathbf{R}/\mathbf{Z}$ be a polynomial with at least one non-constant coefficient irrational. Then one has

$\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N e(P(n)) = 0,$

where $e(x) := e^{2\pi i x}$.

Note that this assertion implies the more general assertion

$\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N e(k P(n)) = 0$

for any non-zero integer $k$ (simply by replacing $P$ by $kP$), which by the Weyl equidistribution criterion is equivalent to the sequence $P(1), P(2), P(3), \dots$ being asymptotically equidistributed in $\mathbf{R}/\mathbf{Z}$.
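For readers who want to see the estimate in action, here is a quick numerical experiment; the choice of polynomial $\sqrt{2}\, n^2$ is mine, purely for illustration:

```python
import cmath
import math

def weyl_sum(alpha, N):
    """Average of e(alpha * n^2) over n = 1..N, where e(x) = exp(2*pi*i*x)."""
    return sum(cmath.exp(2j * math.pi * alpha * n * n)
               for n in range(1, N + 1)) / N

# With the irrational alpha = sqrt(2), the averages tend to zero,
# consistent with equidistribution of sqrt(2) n^2 mod 1.
for N in (100, 1000, 10000):
    print(N, abs(weyl_sum(math.sqrt(2), N)))
```

The magnitudes shrink as $N$ grows; by contrast, a rational choice of coefficient would make the averages cluster along finitely many residue classes instead of decaying.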

*Proof:* We induct on the degree $d$ of the polynomial $P$, which must be at least one. If $d$ is equal to one, the claim is easily established from the geometric series formula, so suppose that $d \geq 2$ and that the claim has already been proven for degree $d-1$. If the top coefficient $a_d$ of $P(n) = a_d n^d + \dots + a_0$ is rational, say $a_d = \frac{a}{q}$, then by partitioning the natural numbers into residue classes modulo $q$, we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient $a_d$ is irrational.

In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter $p$ (see e.g. this previous blog post for basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter $p$ defines an inner product $\langle \cdot, \cdot \rangle_p$ on bounded complex sequences $(z_1, z_2, z_3, \dots)$ by setting

$\langle (z_n), (w_n) \rangle_p := \lim_{N \to p} \frac{1}{N} \sum_{n=1}^N z_n \overline{w_n}.$

Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that

$\lim_{N \to p} \frac{1}{N} \sum_{n=1}^N e(P(n)) = 0$

for every non-principal ultrafilter $p$.

Note that the space of bounded sequences (modulo null vectors) admits a shift $T$, defined by

$T (z_n)_{n=1}^\infty := (z_{n+1})_{n=1}^\infty.$

This shift becomes unitary once we quotient out by null vectors, and the constant sequence $1$ is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality, we have

$|\langle (e(P(n))), 1 \rangle_p| \leq \frac{1}{H} \left( \sum_{h=1}^H \sum_{h'=1}^H |\langle T^h (e(P(n))), T^{h'} (e(P(n))) \rangle_p| \right)^{1/2}$

for any $H \geq 1$. But we may rewrite

$\langle T^h (e(P(n))), T^{h'} (e(P(n))) \rangle_p = \lim_{N \to p} \frac{1}{N} \sum_{n=1}^N e(P(n+h) - P(n+h')).$

Then observe that if $h \neq h'$, $P(n+h) - P(n+h')$ is a polynomial of degree $d-1$ whose top coefficient $d a_d (h - h')$ is irrational, so by induction hypothesis we have $\langle T^h (e(P(n))), T^{h'} (e(P(n))) \rangle_p = 0$ for $h \neq h'$. For $h = h'$ we of course have $\langle T^h (e(P(n))), T^h (e(P(n))) \rangle_p = 1$, and so

$|\langle (e(P(n))), 1 \rangle_p| \leq \frac{1}{H} H^{1/2} = H^{-1/2}$

for any $H \geq 1$. Letting $H \to \infty$, we obtain the claim. $\Box$

Many fluid equations are expected to exhibit turbulence in their solutions, in which a significant portion of their energy ends up in high frequency modes. A typical example arises from the three-dimensional periodic Navier-Stokes equations

$\partial_t u + (u \cdot \nabla) u = \nu \Delta u - \nabla p + f, \qquad \nabla \cdot u = 0,$

where $u$ is the velocity field, $f$ is a forcing term, $p$ is a pressure field, and $\nu > 0$ is the viscosity. To study the dynamics of energy for this system, we first pass to the Fourier transform

so that the system becomes

We may normalise $u$ (and $f$) to have mean zero, so that $\hat u(t,0) = \hat f(t,0) = 0$. Then we introduce the dyadic energies

$E_N(t) := \sum_{|k| \sim N} |\hat u(t,k)|^2$

where $N$ ranges over the powers of two, and $|k| \sim N$ is shorthand for $N \leq |k| < 2N$. Taking the inner product of (1) with $\hat u$, we obtain the energy flow equation

where $N_1, N_2$ range over powers of two, $\Pi_{N,N_1,N_2}$ is the energy flow rate

$D_N$ is the energy dissipation rate

and $F_N$ is the energy injection rate

The Navier-Stokes equations are notoriously difficult to solve in general. Despite this, Kolmogorov in 1941 was able to give a convincing heuristic argument for what the distribution of the dyadic energies should become over long times, assuming that some sort of distributional steady state is reached. It is common to present this argument in the form of dimensional analysis, but one can also give a more “first principles” form of Kolmogorov’s argument, which I will do here. Heuristically, one can divide the frequency scales into three regimes:

- The *injection regime* in which the energy injection rate dominates the right-hand side of (2);
- The *energy flow regime* in which the flow rates dominate the right-hand side of (2); and
- The *dissipation regime* in which the dissipation dominates the right-hand side of (2).

If we assume a fairly steady and smooth forcing term $f$, then $\hat f$ will be supported on the low frequency modes $N \sim 1$, and so we heuristically expect the injection regime to consist of the low scales $N \sim 1$. Conversely, if we take the viscosity $\nu$ to be small, we expect the dissipation regime to only occur for very large frequencies $N$, with the energy flow regime occupying the intermediate frequencies.

We can heuristically predict the dividing line between the energy flow and dissipation regimes. Of all the flow rates, it turns out in practice that the terms in which $N, N_1, N_2$ are comparable (i.e., interactions between comparable scales, rather than widely separated scales) will dominate the other flow rates, so we will focus just on these terms. It is convenient to return back to physical space, decomposing the velocity field into Littlewood-Paley components

$u_N(t,x) := \sum_{|k| \sim N} \hat u(t,k) e^{2\pi i k \cdot x}$

of the velocity field $u$ at frequency $N$. By Plancherel’s theorem, this field will have an $L^2$ norm of $E_N(t)^{1/2}$, and as a naive model of turbulence we expect this field to be spread out more or less uniformly on the torus, so we have the heuristic

and a similar heuristic applied to gives

(One can consider modifications of the Kolmogorov model in which $u_N$ is concentrated on a lower-dimensional subset of the three-dimensional torus, leading to some changes in the numerology below, but we will not consider such variants here.) Since

we thus arrive at the heuristic

Of course, there is the possibility that due to significant cancellation, the energy flow is significantly less than , but we will assume that cancellation effects are not that significant, so that we typically have

or (assuming that does not oscillate too much in , and are close to )

On the other hand, we clearly have

We thus expect $N$ to be in the dissipation regime when

and in the energy flow regime when

Now we study the energy flow regime further. We assume a “statistically scale-invariant” dynamics in this regime, in particular assuming a power law

$E_N \sim A N^{-\alpha} \quad (6)$

for some exponent $\alpha$ and amplitude $A > 0$. From (3), we then expect an average asymptotic of the form

for some structure constants that depend on the exact nature of the turbulence; here we have replaced the factor by the comparable term to make things more symmetric. In order to attain a steady state in the energy flow regime, we thus need a cancellation in the structure constants:

On the other hand, if one is assuming statistical scale invariance, we expect the structure constants to be scale-invariant (in the energy flow regime), in that

for dyadic . Also, since the Euler equations conserve energy, the energy flows symmetrise to zero,

which from (7) suggests a similar cancellation among the structure constants

Combining this with the scale-invariance (9), we see that for fixed , we may organise the structure constants for dyadic into sextuples which sum to zero (including some degenerate tuples of order less than six). This will *automatically* guarantee the cancellation (8) required for a steady state energy distribution, provided that

or in other words

$\alpha = \frac{2}{3};$

for any other value of $\alpha$, there is no particular reason to expect this cancellation (8) to hold. Thus we are led to the heuristic conclusion that the most stable power law distribution for the energies is the law

or in terms of shell energies, we have the famous Kolmogorov 5/3 law

Given that frequency interactions tend to cascade from low frequencies to high (if only because there are so many more high frequencies than low ones), the above analysis predicts a stabilising effect around this power law: scales at which a law (6) holds for some $\alpha < 2/3$ are likely to lose energy in the near-term, while scales at which a law (6) holds for some $\alpha > 2/3$ are conversely expected to gain energy, thus nudging the exponent of the power law towards $2/3$.
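One can verify the exponent arithmetic behind the $2/3$ (equivalently, $5/3$) law mechanically. The sketch below assumes, as in the heuristics above, that the energy flux through a dyadic scale $N$ with energy $E_N$ scales like $N E_N^{3/2}$; that scaling is the input here, not a derived fact:

```python
from fractions import Fraction

# Heuristic: with E_N ~ N**(-alpha), the energy flux through scale N
# scales like N * E_N**(3/2) = N**(1 - 3*alpha/2).
def flux_exponent(alpha):
    """Exponent of N in the heuristic flux N * (N**-alpha)**(3/2)."""
    return Fraction(1) - Fraction(3, 2) * alpha

# A steady cascade needs the flux to be scale-independent, i.e. exponent 0,
# which happens exactly at alpha = 2/3:
print(flux_exponent(Fraction(2, 3)))  # 0

# Dyadic energy E_N ~ N**(-2/3) corresponds to the shell spectrum
# E(k) ~ k**(-5/3): integrating k**(-5/3) over a dyadic block k ~ N
# raises the exponent by one.
print(Fraction(-5, 3) + 1)  # -2/3
```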

We can solve for $A$ in terms of energy dissipation as follows. If we let $N_*$ be the frequency scale demarcating the transition from the energy flow regime (5) to the dissipation regime (4), we have

and hence by (10)

On the other hand, if we let $\epsilon$ be the energy dissipation at this scale (which we expect to be the dominant scale of energy dissipation), we have

Some simple algebra then lets us solve for $A$ and $N_*$ as

and

Thus, we have the Kolmogorov prediction

$E_N \sim \epsilon^{2/3} N^{-2/3}$

for

$1 \lesssim N \lesssim \epsilon^{1/4} \nu^{-3/4},$

with energy dissipation occurring at the high end $N \sim \epsilon^{1/4} \nu^{-3/4}$ of this scale, which is counterbalanced by the energy injection at the low end $N \sim 1$ of the scale.
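The transition scale can be checked by exponent bookkeeping. The scalings used below are taken from the heuristics above (dissipation rate $\nu N^2$, nonlinear transfer rate $N E_N^{1/2}$, and the $2/3$-law $E_N \sim \epsilon^{2/3} N^{-2/3}$), and are assumptions of the Kolmogorov picture rather than derived facts:

```python
from fractions import Fraction as F

# Candidate transition scale N_* ~ eps**(1/4) * nu**(-3/4).
a_eps, a_nu = F(1, 4), F(-3, 4)

# Balance nu * N_***2 against the transfer rate
#   N_* * E_{N_*}**(1/2) ~ eps**(1/3) * N_***(2/3)
# by comparing the (eps, nu)-exponents of both sides.
lhs = (a_eps * 2, 1 + a_nu * 2)                     # exponents in nu * N_***2
rhs = (F(1, 3) + a_eps * F(2, 3), a_nu * F(2, 3))   # in eps**(1/3) * N_***(2/3)
print(lhs, rhs, lhs == rhs)  # both (1/2, -1/2): the exponents balance
```

So the dissipation scale $\epsilon^{1/4}\nu^{-3/4}$ is exactly where viscous damping catches up with the nonlinear cascade; it is the (frequency-space) Kolmogorov scale.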

Let $X$ be a quasiprojective variety defined over a finite field $\mathbf{F}_q$, thus for instance $X$ could be an affine variety

$X = \{ x \in \mathbf{A}^d : P_1(x) = \dots = P_m(x) = 0 \} \quad (1)$

where $\mathbf{A}^d$ is $d$-dimensional affine space and $P_1, \dots, P_m$ are a finite collection of polynomials with coefficients in $\mathbf{F}_q$. Then one can define the set $X[\mathbf{F}_q]$ of $\mathbf{F}_q$-rational points, and more generally the set $X[\mathbf{F}_{q^n}]$ of $\mathbf{F}_{q^n}$-rational points for any $n \geq 1$, since $\mathbf{F}_{q^n}$ can be viewed as a field extension of $\mathbf{F}_q$. Thus for instance in the affine case (1) we have

$X[\mathbf{F}_{q^n}] = \{ x \in \mathbf{F}_{q^n}^d : P_1(x) = \dots = P_m(x) = 0 \}.$

The Weil conjectures are concerned with understanding the number

$S_n := |X[\mathbf{F}_{q^n}]| \quad (2)$

of $\mathbf{F}_{q^n}$-rational points of a variety $X$. The first of these conjectures was proven by Dwork, and can be phrased as follows.

Theorem 1 (Rationality of the zeta function) Let $X$ be a quasiprojective variety defined over a finite field $\mathbf{F}_q$, and let $S_n$ be given by (2). Then there exist a finite number of algebraic integers $\alpha_1, \dots, \alpha_k, \beta_1, \dots, \beta_{k'}$ (known as *characteristic values* of $X$), such that

$S_n = \alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n$

for all $n$.

After cancelling, we may of course assume that $\alpha_i \neq \beta_j$ for any $i$ and $j$, and then it is easy to see (as we will see below) that the $\alpha_i, \beta_j$ become uniquely determined up to permutations of the $\alpha_1, \dots, \alpha_k$ and $\beta_1, \dots, \beta_{k'}$. These values are known as the *characteristic values* of $X$. Since $S_n$ is a rational integer (i.e. an element of $\mathbf{Z}$) rather than merely an algebraic integer (i.e. an element of the ring of integers of the algebraic closure $\overline{\mathbf{Q}}$ of $\mathbf{Q}$), we conclude from the above-mentioned uniqueness that the set of characteristic values are invariant with respect to the Galois group $\mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$. To emphasise this Galois invariance, we will not fix a specific embedding $\iota: \overline{\mathbf{Q}} \to \mathbf{C}$ of the algebraic numbers into the complex field $\mathbf{C}$, but work with all such embeddings simultaneously. (Thus, for instance, $\overline{\mathbf{Q}}$ contains three cube roots of $2$, but which of these is assigned to the complex numbers $2^{1/3}$, $e^{2\pi i/3} 2^{1/3}$, $e^{4\pi i/3} 2^{1/3}$ will depend on the choice of embedding $\iota$.)

An equivalent way of phrasing Dwork’s theorem is that the ($T$-form of the) zeta function

$\zeta_X(T) := \exp\left( \sum_{n=1}^\infty \frac{S_n}{n} T^n \right)$

associated to $X$ (which is well defined as a formal power series in $T$, at least) is equal to a rational function of $T$ (with the reciprocals $1/\alpha_i$ and $1/\beta_j$ of the characteristic values being the poles and zeroes of $\zeta_X$ respectively). Here, we use the formal exponential

$\exp(F) := 1 + F + \frac{F^2}{2!} + \frac{F^3}{3!} + \dots.$

Equivalently, the ($s$-form of the) zeta-function $s \mapsto \zeta_X(q^{-s})$ is a meromorphic function on the complex numbers $\mathbf{C}$ which is also periodic with period $\frac{2\pi i}{\log q}$, and which has only finitely many poles and zeroes up to this periodicity.
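As a concrete (if trivial) illustration of rationality, one can compute the power series $\exp(\sum_{n} \frac{S_n}{n} T^n)$ directly from point counts and compare it with a rational function. The example below takes $X$ to be the affine line over $\mathbf{F}_p$, where $S_n = p^n$ and the zeta function should be $1/(1 - pT)$; the recursion used is just term-by-term differentiation of the logarithm of the series:

```python
from fractions import Fraction

def zeta_coeffs(S, M):
    """Coefficients c_0..c_M of exp(sum_{n>=1} S[n-1] * T^n / n),
    via the recursion m*c_m = sum_{n=1}^m S_n * c_{m-n}
    (obtained by differentiating the log of the series)."""
    c = [Fraction(1)]
    for m in range(1, M + 1):
        c.append(sum(Fraction(S[n - 1]) * c[m - n]
                     for n in range(1, m + 1)) / m)
    return c

p, M = 5, 8
# Affine line A^1 over F_p: S_n = |F_{p^n}| = p**n, one characteristic
# value alpha = p and no beta's.
S = [p ** n for n in range(1, M + 1)]
coeffs = zeta_coeffs(S, M)
# Rationality: zeta should equal 1/(1 - p*T), whose T^m coefficient is p**m.
print(coeffs == [Fraction(p ** m) for m in range(M + 1)])  # True
```

The same recursion applied to the point counts of any variety produces the power series whose rationality is the content of Theorem 1.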

Dwork’s argument relies primarily on $p$-adic analysis – an analogue of complex analysis, but over an algebraically complete (and metrically complete) extension $\mathbf{C}_p$ of the $p$-adic field $\mathbf{Q}_p$, rather than over the Archimedean complex numbers $\mathbf{C}$. The argument is quite effective, and in particular gives explicit upper bounds for the number $k + k'$ of characteristic values in terms of the complexity of the variety $X$; for instance, in the affine case (1) with $P_1, \dots, P_m$ of degree at most $d$, Bombieri used Dwork’s methods (in combination with Deligne’s theorem below) to obtain the bound , and a subsequent paper of Hooley established the slightly weaker bound purely from Dwork’s methods (a similar bound had also been pointed out in unpublished work of Dwork). In particular, one has bounds that are uniform in the field $\mathbf{F}_q$, which is an important fact for many analytic number theory applications.

These $p$-adic arguments stand in contrast with Deligne’s resolution of the last (and deepest) of the Weil conjectures:

Theorem 2 (Riemann hypothesis) Let $X$ be a quasiprojective variety defined over a finite field $\mathbf{F}_q$, and let $\alpha$ be a characteristic value of $X$. Then there exists a natural number $h$ such that $|\iota(\alpha)| = q^{h/2}$ for every embedding $\iota: \overline{\mathbf{Q}} \to \mathbf{C}$, where $|z|$ denotes the usual absolute value on the complex numbers $\mathbf{C}$. (Informally: $\alpha$ and all of its Galois conjugates have complex magnitude $q^{h/2}$.)

To put it another way that closely resembles the classical Riemann hypothesis, all the zeroes and poles of the $s$-form $\zeta_X(q^{-s})$ lie on the critical lines $\{ s \in \mathbf{C} : \mathrm{Re}(s) = \frac{h}{2} \}$ for natural numbers $h$. (See this previous blog post for further comparison of various instantiations of the Riemann hypothesis.) Whereas Dwork uses $p$-adic analysis, Deligne uses the essentially orthogonal technique of $\ell$-adic cohomology to establish his theorem. However, $\ell$-adic methods can be used (via the Grothendieck-Lefschetz trace formula) to establish rationality, and conversely, in this paper of Kedlaya $p$-adic methods are used to establish the Riemann hypothesis. As pointed out by Kedlaya, the $\ell$-adic methods are tied to the intrinsic geometry of $X$ (such as the structure of sheaves and covers over $X$), while the $p$-adic methods are more tied to the *extrinsic* geometry of $X$ (how $X$ sits inside its ambient affine or projective space).

In this post, I would like to record my notes on Dwork’s proof of Theorem 1, drawing heavily on the expositions of Serre, Hooley, Koblitz, and others.

The basic strategy is to control the rational integers $S_n$ both in an “Archimedean” sense (embedding the rational integers inside the complex numbers $\mathbf{C}$ with the usual norm $|z|$) as well as in the “$p$-adic” sense, with $p$ the characteristic of $\mathbf{F}_q$ (embedding the integers now in the “complexification” $\mathbf{C}_p$ of the $p$-adic numbers $\mathbf{Q}_p$, which is equipped with a norm $|z|_p$ that we will recall later). (This is in contrast to the methods of $\ell$-adic cohomology, in which one primarily works over an $\ell$-adic field $\mathbf{Q}_\ell$ with $\ell \neq p$.) The Archimedean control is trivial:

Proposition 3 (Archimedean control of $S_n$) With $S_n$ as above, and any embedding $\iota: \overline{\mathbf{Q}} \to \mathbf{C}$, we have

$|\iota(S_n)| \leq C^n$

for all $n$ and some $C$ independent of $n$.

*Proof:* Since $S_n$ is a rational integer, $|\iota(S_n)|$ is just $|S_n|$. By decomposing $X$ into affine pieces, we may assume that $X$ is of the affine form (1), then we trivially have $|S_n| \leq q^{nd}$, and the claim follows. $\Box$

Another way of thinking about this Archimedean control is that it guarantees that the zeta function $\zeta_X(T)$ can be defined holomorphically on the open disk in $\mathbf{C}$ of radius $1/C$ centred at the origin.

The $p$-adic control is significantly more difficult, and is the main component of Dwork’s argument:

Proposition 4 ($p$-adic control of $S_n$) With $S_n$ as above, and using an embedding $\iota: \overline{\mathbf{Q}} \to \mathbf{C}_p$ (defined later) with $p$ the characteristic of $\mathbf{F}_q$, we can find for any real $A > 0$ a finite number of elements $\alpha_1, \dots, \alpha_k, \beta_1, \dots, \beta_{k'} \in \mathbf{C}_p$ such that

$|S_n - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq A^{-n}$

for all $n$.

Another way of thinking about this $p$-adic control is that it guarantees that the zeta function $\zeta_X(T)$ can be defined *meromorphically* on the entire $p$-adic complex field $\mathbf{C}_p$.

Proposition 4 is ostensibly much weaker than Theorem 1 because of (a) the error term of $p$-adic magnitude at most $A^{-n}$; (b) the fact that the number $k+k'$ of potential characteristic values here may go to infinity as $A \to \infty$; and (c) the potential characteristic values only exist inside the complexified $p$-adics $\mathbf{C}_p$, rather than in the algebraic integers. However, it turns out that by combining the $p$-adic control on $S_n$ in Proposition 4 with the trivial control on $S_n$ in Proposition 3, one can obtain Theorem 1 by an elementary argument that does not use any further properties of $S_n$ (other than the obvious fact that the $S_n$ are rational integers), with the $A$ in Proposition 4 chosen to exceed the $C$ in Proposition 3. We give this argument (essentially due to Borel) below the fold.

The proof of Proposition 4 can be split into two pieces. The first piece, which can be viewed as the number-theoretic component of the proof, uses external descriptions of $X$ such as (1) to obtain the following decomposition of $S_n$:

Proposition 5 (Decomposition of $S_n$) With $S_n$ and $p$ as above, we can decompose $S_n$ as a finite linear combination (over the integers) of sequences $S'_n$, such that for each such sequence $S'_n$, the zeta functions

$\exp\left( \sum_{n=1}^\infty \frac{S'_n}{n} T^n \right) = \sum_{m=0}^\infty c_m T^m$

are entire in $\mathbf{C}_p$, by which we mean that the coefficients $c_m$ obey

$|c_m|_p^{1/m} \to 0$

as $m \to \infty$.

This proposition will ultimately be a consequence of the properties of the Teichmüller lifting.

The second piece, which can be viewed as the “$p$-adic complex analytic” component of the proof, relates the $p$-adic entire nature of a zeta function with control on the associated sequence $S'_n$, and can be interpreted (after some manipulation) as a $p$-adic version of the Weierstrass preparation theorem:

Proposition 6 ($p$-adic Weierstrass preparation theorem) Let $S'_n$ be a sequence in $\mathbf{Z}$, such that the zeta function

$\exp\left( \sum_{n=1}^\infty \frac{S'_n}{n} T^n \right)$

is entire in $\mathbf{C}_p$. Then for any real $A > 0$, there exist a finite number of elements $\alpha_1, \dots, \alpha_k, \beta_1, \dots, \beta_{k'} \in \mathbf{C}_p$ such that

$|S'_n - (\alpha_1^n + \dots + \alpha_k^n - \beta_1^n - \dots - \beta_{k'}^n)|_p \leq C A^{-n}$

for all $n$ and some $C$ independent of $n$.

Clearly, the combination of Proposition 5 and Proposition 6 (and the non-Archimedean nature of the $p$-adic norm) imply Proposition 4.

This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.

Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.

One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:

Conjecture 1 (Kakeya conjecture) Let $E$ be a subset of $\mathbf{R}^3$ that contains a unit line segment in every direction. Then $\mathrm{dim}(E) = 3$.

This conjecture is not precisely formulated here, because we have not specified exactly what type of set $E$ is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):

Conjecture 2 (Kakeya conjecture, again) Let $\mathcal{L}$ be a family of lines in $\mathbf{R}^3$ that meet the unit ball $B(0,1)$ and contain a line in each direction. Let $E$ be the union of the restriction to $B(0,2)$ of every line in $\mathcal{L}$. Then $\mathrm{dim}(E) = 3$.

As the space of all directions in $\mathbf{R}^3$ is two-dimensional, we thus see that $\mathcal{L}$ is an (at least) two-dimensional subset of the four-dimensional space of lines in $\mathbf{R}^3$ (actually, it lies in a compact subset of this space, since we have constrained the lines to meet the unit ball). One could then ask if this is the only property of $\mathcal{L}$ that is needed to establish the Kakeya conjecture, that is to say if any subset of $\mathbf{R}^3$ which contains a two-dimensional family of lines (restricted to a bounded region, and meeting the unit ball) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in $\mathbf{R}^3$ (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:

Conjecture 3 (Strong Kakeya conjecture) Let $\mathcal{L}$ be a two-dimensional family of lines in $\mathbf{R}^3$ that meet the unit ball, and assume the *Wolff axiom* that no (affine) plane contains more than a one-dimensional family of lines in $\mathcal{L}$. Let $E$ be the union of the restriction of every line in $\mathcal{L}$. Then $\mathrm{dim}(E) = 3$.

Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie *close* to a plane, rather than exactly *on* the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that lie in different directions will obey the Wolff axiom, but the converse is not true in general.

In 1995, Wolff established the important lower bound $\mathrm{dim}(E) \geq 5/2$ (for various notions of dimension, e.g. Hausdorff dimension) for sets $E$ in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the $5/2$ barrier, coming from the possible existence of *half-dimensional (approximate) subfields* of the reals $\mathbf{R}$. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:

Conjecture 4 (Strong Kakeya conjecture over $\mathbf{C}$) Let $\mathcal{L}$ be a four (real) dimensional family of complex lines in $\mathbf{C}^3$ that meet the unit ball in $\mathbf{C}^3$, and assume the *Wolff axiom* that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in $\mathcal{L}$. Let $E$ be the union of the restriction of every complex line in $\mathcal{L}$. Then $E$ has real dimension $6$.

The argument of Wolff can be adapted to the complex case to show that all sets $E$ occurring in Conjecture 4 have real dimension at least $5$. Unfortunately, this is sharp, due to the following fundamental counterexample:

Proposition 5 (Heisenberg group counterexample) Let $H \subset \mathbf{C}^3$ be the Heisenberg group

$H := \{ (z_1, z_2, z_3) \in \mathbf{C}^3 : \mathrm{Im}(z_3) = \mathrm{Im}(z_1 \overline{z_2}) \}$

and let $\mathcal{L}$ be the family of complex lines

with and . Then $H$ is a five (real) dimensional subset of $\mathbf{C}^3$ that contains every line in the four (real) dimensional set $\mathcal{L}$; however each four (real) dimensional (affine) subspace contains at most a two (real) dimensional set of lines in $\mathcal{L}$. In particular, the strong Kakeya conjecture over the complex numbers is false.

This proposition is proven by a routine computation, which we omit here. The group structure on is given by the group law

giving $H$ the structure of a $2$-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over $\mathbf{R}^2$. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines

with ; multiplying this family of lines on the right by a group element in $H$ gives other families of parallel lines, which in fact sweep out all of $\mathcal{L}$.

The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield $\mathbf{R}$ of $\mathbf{C}$, which induces an involution $z \mapsto \overline{z}$ which can then be used to define the Heisenberg group through the formula

Analogous Heisenberg counterexamples can also be constructed if one works over finite fields $\mathbf{F}_{q^2}$ that contain a “half-dimensional” subfield $\mathbf{F}_q$; we leave the details to the interested reader. Morally speaking, if $\mathbf{R}$ in turn contained a subfield of dimension $1/2$ (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdős and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
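In the finite-field analogue, the half-dimensional subfield and its involution are easy to exhibit explicitly. The sketch below builds $\mathbf{F}_9 = \mathbf{F}_3[i]$ (a small example of my choosing; any $\mathbf{F}_{q^2}$ would do) and checks that the Frobenius $x \mapsto x^3$ is an involution whose fixed field is the subfield $\mathbf{F}_3$, playing the role of conjugation $z \mapsto \overline{z}$ on $\mathbf{C}$ over $\mathbf{R}$:

```python
# F_9 = F_3[i] with i^2 = -1 (x^2 + 1 is irreducible mod 3, since -1 is
# not a square mod 3).  Elements are pairs (x, y) meaning x + y*i.
p = 3

def mul(a, b):
    (x1, y1), (x2, y2) = a, b
    return ((x1 * x2 - y1 * y2) % p, (x1 * y2 + x2 * y1) % p)

def frob(a):
    """The Frobenius map a -> a**p, computed by repeated multiplication."""
    r = (1, 0)
    for _ in range(p):
        r = mul(r, a)
    return r

field = [(x, y) for x in range(p) for y in range(p)]
fixed = [a for a in field if frob(a) == a]
involution = all(frob(frob(a)) == a for a in field)
print(involution, sorted(fixed))  # True [(0, 0), (1, 0), (2, 0)]
```

The fixed points are exactly the three elements of $\mathbf{F}_3$, i.e. the "half-dimensional" subfield; this involution is what powers the finite-field Heisenberg construction.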

We thus see that to go beyond the $5/2$ dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:

- (a) Exploit the distinct directions of the lines in $\mathcal{L}$ in a way that goes beyond the Wolff axiom; or
- (b) Exploit the fact that $\mathbf{R}$ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).

(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)

Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of $5/2$ for Kakeya sets very slightly to $5/2 + 10^{-10}$ (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of $\mathbf{F}_p$, and then pursued route (b) to obtain a corresponding improvement to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.

Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:

- Assume that the (strong) Kakeya conjecture fails, so that there are sets $E$ of the form in Conjecture 3 of dimension $3 - \sigma$ for some $\sigma > 0$. Assume that $E$ is “optimal”, in the sense that $\sigma$ is as large as possible.
- Use the optimality of $E$ (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets $E$, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties constrain $E$ to “behave like” a putative Heisenberg group counterexample.
- By playing all these structural properties off of each other, show that $E$ can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.

Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set $E$, for a large number of reasons (one of which is that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets nor I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.

Let $\mathbf{F}_q$ be a finite field of order $q$, and let $C$ be an absolutely irreducible smooth projective curve defined over $\mathbf{F}_q$ (and hence over the algebraic closure $\overline{\mathbf{F}_q}$ of that field). For instance, $C$ could be the projective elliptic curve

$C = \{ [x, y, z] : y^2 z = x^3 + a x z^2 + b z^3 \}$

in the projective plane $\mathbf{P}^2$, where $a, b \in \mathbf{F}_q$ are coefficients whose discriminant $-16(4a^3 + 27b^2)$ is non-vanishing, which is the projective version of the affine elliptic curve

$\{ (x, y) : y^2 = x^3 + a x + b \}.$
To each such curve one can associate a genus $g$, which we will define later; for instance, elliptic curves have genus $1$. We can also count the cardinality $|C(\mathbf{F}_q)|$ of the set of $\mathbf{F}_q$-points of $C$. The *Hasse-Weil bound* relates the two:

$\left| |C(\mathbf{F}_q)| - q - 1 \right| \leq 2 g \sqrt{q}.$
The usual proofs of this bound proceed by first establishing a trace formula of the form

$|C(\mathbf{F}_{q^n})| = q^n + 1 - \sum_{i=1}^{2g} \alpha_i^n \quad (1)$

for some complex numbers $\alpha_1, \dots, \alpha_{2g}$ independent of $n$; this is in fact a special case of the Lefschetz-Grothendieck trace formula, and can be interpreted as an assertion that the zeta function associated to the curve $C$ is rational. The task is then to establish a bound $|\alpha_i| \leq \sqrt{q}$ for all $i$; this (or more precisely, the slightly stronger assertion $|\alpha_i| = \sqrt{q}$) is the Riemann hypothesis for such curves. This can be done either by passing to the Jacobian variety of $C$ and using a certain duality available on the cohomology of such varieties, known as Rosati involution; alternatively, one can pass to the product surface $C \times C$ and apply the Riemann-Roch theorem for that surface.

In 1969, Stepanov introduced an elementary method (a version of what is now known as the polynomial method) to count (or at least to upper bound) the quantity . The method was initially restricted to hyperelliptic curves, but was soon extended to general curves. In particular, Bombieri used this method to give a short proof of the following weaker version of the Hasse-Weil bound:

Theorem 2 (Weak Hasse-Weil bound) If is a perfect square, and , then .

In fact, the bound on can be sharpened a little bit further, as we will soon see.
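To make the genus-one case of the Hasse-Weil bound concrete, here is a quick brute-force point count on a sample elliptic curve over a small prime field (the curve $y^2 = x^3 + 2x + 3$ and the prime $101$ are our own illustrative choices, not taken from the text):

```python
# Brute-force count of F_p-points on the projective elliptic curve
# y^2 = x^3 + a*x + b (affine solutions plus one point at infinity),
# followed by a check of the genus-one Hasse-Weil bound
# |#C(F_p) - (p + 1)| <= 2*sqrt(p).
import math

def count_points(a, b, p):
    # number of y with y^2 = t (mod p), tabulated in advance
    sq_count = {}
    for y in range(p):
        t = y * y % p
        sq_count[t] = sq_count.get(t, 0) + 1
    total = 1  # the point at infinity
    for x in range(p):
        total += sq_count.get((x * x * x + a * x + b) % p, 0)
    return total

p, a, b = 101, 2, 3   # discriminant -16(4a^3 + 27b^2) is nonzero mod p
N = count_points(a, b, p)
assert abs(N - (p + 1)) <= 2 * math.sqrt(p)
```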

Theorem 2 is only an upper bound on , but there is a Galois-theoretic trick to convert (a slight generalisation of) this upper bound to a matching lower bound, and if one then uses the trace formula (1) (and the “tensor power trick” of sending to infinity to control the weights ) one can then recover the full Hasse-Weil bound. We discuss these steps below the fold.

I’ve discussed Bombieri’s proof of Theorem 2 in this previous post (in the special case of hyperelliptic curves), but now wish to present the full proof, with some minor simplifications from Bombieri’s original presentation; it is mostly elementary, with the deepest fact from algebraic geometry needed being Riemann’s inequality (a weak form of the Riemann-Roch theorem).

The first step is to reinterpret as the number of points of intersection between two curves in the surface . Indeed, if we define the Frobenius endomorphism on any projective space by

then this map preserves the curve , and the fixed points of this map are precisely the points of :

Thus one can interpret as the number of points of intersection between the diagonal curve

and the Frobenius graph

which are copies of inside . But we can use the additional hypothesis that is a perfect square to write this more symmetrically, by taking advantage of the fact that the Frobenius map has a square root

with also preserving . One can then also interpret as the number of points of intersection between the curve

Let be the field of rational functions on (with coefficients in ), and define , , and analogously (although is likely to be disconnected, so will just be a ring rather than a field). We then (morally) have the commuting square

if we ignore the issue that a rational function on, say, , might blow up on all of and thus not have a well-defined restriction to . We use and to denote the restriction maps. Furthermore, we have obvious isomorphisms , coming from composing with the graphing maps and .

The idea now is to find a rational function on the surface of controlled degree which vanishes when restricted to , but is non-vanishing (and not blowing up) when restricted to . On , we thus get a non-zero rational function of controlled degree which vanishes on – which then lets us bound the cardinality of in terms of the degree of . (In Bombieri’s original argument, one required vanishing to high order on the side, but in our presentation, we have factored out a term which removes this high order vanishing condition.)

To find this , we will use linear algebra. Namely, we will locate a finite-dimensional subspace of (consisting of certain “controlled degree” rational functions) which projects injectively to , but whose projection to has strictly smaller dimension than itself. The rank-nullity theorem then forces the existence of a non-zero element of whose projection to vanishes, but whose projection to is non-zero.

Now we build . Pick a point of , which we will think of as being a point at infinity. (For the purposes of proving Theorem 2, we may clearly assume that is non-empty.) Thus is fixed by . To simplify the exposition, we will also assume that is fixed by the square root of ; in the opposite case when has order two when acting on , the argument is essentially the same, but all references to in the second factor of need to be replaced by (we leave the details to the interested reader).

For any natural number , define to be the set of rational functions which are allowed to have a pole of order up to at , but have no other poles on ; note that as we are assuming to be smooth, it is unambiguous what a pole is (and what order it will have). (In the fancier language of divisors and Cech cohomology, we have .) The space is clearly a vector space over ; one can view intuitively as the space of “polynomials” on of “degree” at most . When , consists just of the constant functions. Indeed, if , then the image of avoids and so lies in the affine line ; but as is projective, the image needs to be compact (hence closed) in , and must therefore be a point, giving the claim.

For higher , we have the easy relations

The former inequality just comes from the trivial inclusion . For the latter, observe that if two functions lie in , so that they each have a pole of order at most at , then some linear combination of these functions must have a pole of order at most at ; thus has codimension at most one in , giving the claim.

From (3) and induction we see that each of the are finite dimensional, with the trivial upper bound

*Riemann’s inequality* complements this with the lower bound

thus one has for all but at most exceptions (in fact, exactly exceptions as it turns out). This is a consequence of the Riemann-Roch theorem; it can be proven from abstract nonsense (the snake lemma) if one defines the genus in a non-standard fashion (as the dimension of the first Cech cohomology of the structure sheaf of ), but to obtain this inequality with a standard definition of (e.g. as the dimension of the zeroth Cech cohomology of the line bundle of differentials) requires the more non-trivial tool of Serre duality.
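In symbols (writing $L(n \cdot \infty)$ for the space just defined, and sketching the standard form of these bounds rather than quoting the post's own display), the upper and lower bounds read:

```latex
\[
  \dim_{F} L(n \cdot \infty) \;\le\; n + 1
  \qquad\text{and}\qquad
  \dim_{F} L(n \cdot \infty) \;\ge\; n + 1 - g,
\]
% so equality \dim L(n \cdot \infty) = n + 1 - g holds for all but at
% most g values of n (exactly g exceptions, by Riemann-Roch).
```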

At any rate, now that we have these vector spaces , we will define to be a tensor product space

for some natural numbers which we will optimise in later. That is to say, is spanned by functions of the form with and . This is clearly a linear subspace of of dimension , and hence by Riemann’s inequality we have

Observe that maps a tensor product to a function . If and , then we see that the function has a pole of order at most at . We conclude that

and in particular by (4)

We will choose to be a bit bigger than , to make the image of smaller than that of . From (6), (10) we see that if we have the inequality

(together with (7)) then cannot be injective.

On the other hand, we have the following basic fact:

*Proof:* From (3), we can find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). Similarly, we may find a linear basis of such that each of the has a distinct order of pole at (somewhere between and inclusive). The functions then span , and the order of pole at is . But since , these orders are all distinct, and so these functions must be linearly independent. The claim follows.

This gives us the following bound:

Proposition 4 Let be natural numbers such that (7), (11), (12) hold. Then .

*Proof:* As is not injective, we can find with vanishing. By the above lemma, the function is then non-zero, but it must also vanish on , which has cardinality . On the other hand, by (8), has a pole of order at most at and no other poles. Since the number of zeroes of a non-zero rational function on a projective curve, counted with multiplicity, must equal the number of poles, the claim follows.

If , we may make the explicit choice

and a brief calculation then gives Theorem 2. In some cases one can optimise things a bit further. For instance, in the genus zero case (e.g. if is just the projective line ) one may take and conclude the absolutely sharp bound in this case; in the case of the projective line , the function is in fact the very concrete function .

Remark 1 When is not a perfect square, one can try to run the above argument using the factorisation instead of . This gives a weaker version of the above bound, of the shape . In the hyperelliptic case at least, one can erase this loss by working with a variant of the argument in which one requires to vanish to high order at , rather than just to first order; see this survey article of mine for details.

Roth’s theorem on arithmetic progressions asserts that every subset of the integers of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:

Theorem 1 (Roth’s theorem) Let be a compact abelian group, with Haar probability measure , which is -divisible (i.e. the map is surjective), and let be a measurable subset of with for some . Then we have where denotes the bound for some depending only on .

This theorem is usually formulated in the case that is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of -divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant on , but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the -divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the shift in that case.

We can deduce Theorem 1 from the following more general Khintchine-type statement. Let denote the Pontryagin dual of a compact abelian group , that is to say the set of all continuous homomorphisms from to the (additive) unit circle . Thus is a discrete abelian group, and functions have a Fourier transform defined by

If is -divisible, then is -torsion-free in the sense that the map is injective. For any finite set and any radius , define the *Bohr set*

where denotes the distance of to the nearest integer. We refer to the cardinality of as the *rank* of the Bohr set. We record a simple volume bound on Bohr sets:

Lemma 2 (Volume packing bound) Let be a compact abelian group with Haar probability measure . For any Bohr set , we have

*Proof:* We can cover the torus by translates of the cube . Then the sets form an cover of . But all of these sets lie in a translate of , and the claim then follows from the translation invariance of .
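The volume bound can be sanity-checked numerically in the model case of a cyclic group $\mathbf{Z}/N$ (our own toy instantiation; we test the standard form of the bound, in which the measure of a rank-$|S|$ Bohr set of radius $\rho$ is at least $\rho^{|S|}$):

```python
# Empirical check of the Bohr-set volume packing bound in Z/N: with
# Bohr(S, rho) = { x in Z/N : ||xi * x / N|| <= rho for all xi in S },
# where ||t|| denotes the distance from t to the nearest integer, the
# standard form of Lemma 2 gives mu(Bohr(S, rho)) >= rho^{|S|}.
# The modulus and frequency set below are illustrative choices.
def bohr_set(N, S, rho):
    def dist_to_int(t):
        return abs(t - round(t))
    return [x for x in range(N)
            if all(dist_to_int(xi * x / N) <= rho for xi in S)]

N, S, rho = 1009, [1, 34, 211], 0.15
B = bohr_set(N, S, rho)
assert 0 in B                        # a Bohr set always contains the origin
assert len(B) / N >= rho ** len(S)   # volume packing bound
```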

Given any Bohr set , we define a normalised “Lipschitz” cutoff function by the formula

where is the constant such that

thus

The function should be viewed as an -normalised “tent function” cutoff to . Note from Lemma 2 that

We then have the following sharper version of Theorem 1:

Theorem 3 (Roth-Khintchine theorem) Let be a -divisible compact abelian group, with Haar probability measure , and let . Then for any measurable function , there exists a Bohr set with and such that where denotes the convolution operation

A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with and equal to a small multiple of to conclude that there is a Bohr set with and such that

But from (2) we have the pointwise bound , and Theorem 1 follows.

Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set to capture all the “large Fourier coefficients” of , but such that a certain “dilate” of does not capture much more Fourier energy of than itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the considered above to achieve a similar effect.

Let be an irreducible polynomial in three variables. As is not algebraically closed, the zero set can split into various components of dimension between and . For instance, if , the zero set is a line; more interestingly, if , then is the union of a line and a surface (or the product of an acnodal cubic curve with a line). We will assume that the -dimensional component is non-empty, thus defining a real surface in . In particular, this hypothesis implies that is not just irreducible over , but is in fact absolutely irreducible (i.e. irreducible over ), since otherwise one could use the complex factorisation of to contain inside the intersection of the complex zero locus of complex polynomial and its complex conjugate, with having no common factor, forcing to be at most one-dimensional. (For instance, in the case , one can take .) Among other things, this makes a Zariski-dense subset of , thus any polynomial identity which holds true at every point of , also holds true on all of . This allows us to easily use tools from algebraic geometry in this real setting, even though the reals are not quite algebraically closed.

The surface is said to be ruled if, for a Zariski open dense set of points , there exists a line through for some non-zero which is completely contained in , thus

for all . Also, a point is said to be a flecnode if there exists a line through for some non-zero which is tangent to to third order, in the sense that

for . Clearly, if is a ruled surface, then a Zariski open dense set of points on are a flecnode. We then have the remarkable theorem of Cayley and Salmon asserting the converse:

Theorem 1 (Cayley-Salmon theorem) Let be an irreducible polynomial with non-empty. Suppose that a Zariski dense set of points in are flecnodes. Then is a ruled surface.

Among other things, this theorem was used in the celebrated result of Guth and Katz that almost solved the Erdos distance problem in two dimensions, as discussed in this previous blog post. Vanishing to third order is necessary: observe that in a surface of negative curvature, such as the saddle , every point on the surface is tangent to second order to a line (the line in the direction for which the second fundamental form vanishes).
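As a tiny concrete illustration of the "ruled" condition, one can verify numerically that the saddle surface $z = xy$ (our own choice of example, not one discussed in the text) is ruled: the line through $(a, b, ab)$ in the direction $(1, 0, b)$ stays on the surface.

```python
# Check that the saddle surface P(x, y, z) = z - x*y is ruled: for every
# point (a, b, a*b) on it, the whole line t -> (a + t, b, a*b + t*b) lies
# in the zero set of P.  (Algebraically P(a+t, b, ab+tb) = 0 identically;
# here we simply confirm this at random parameter values.)
import random

def P(x, y, z):
    return z - x * y

random.seed(1)
for _ in range(100):
    a = random.uniform(-5, 5)
    b = random.uniform(-5, 5)
    t = random.uniform(-5, 5)
    assert abs(P(a + t, b, a * b + t * b)) < 1e-9
```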

The original proof of the Cayley-Salmon theorem, dating back to at least 1915, is not easily accessible and not written in modern language. A modern proof of this theorem (together with substantial generalisations, for instance to higher dimensions) is given by Landsberg; the proof uses the machinery of modern algebraic geometry. The purpose of this post is to record an alternate proof of the Cayley-Salmon theorem based on classical differential geometry (in particular, the notion of torsion of a curve) and basic ODE methods (in particular, Gronwall’s inequality and the Picard existence theorem). The idea is to “integrate” the lines indicated by the flecnode to produce smooth curves on the surface ; one then uses the vanishing (1) and some basic calculus to conclude that these curves have zero torsion and are thus planar curves. Some further manipulation using (1) (now just to second order instead of third) then shows that these curves are in fact straight lines, giving the ruling on the surface.

Update: Janos Kollar has informed me that the above theorem was essentially known to Monge in 1809; see his recent arXiv note for more details.

I thank Larry Guth and Micha Sharir for conversations leading to this post.

A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of *additive combinatorics*) are the elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets

and difference sets

as well as iterated sumsets such as , , and so forth. Here, are finite non-empty subsets of some additive group (classically one took or , but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:

Lemma 1 (Ruzsa covering lemma) Let be finite non-empty subsets of . Then may be covered by at most translates of .

*Proof:* Consider a maximal set of disjoint translates of by elements . These translates have cardinality , are disjoint, and lie in , so there are at most of them. By maximality, for any , must intersect at least one of the selected , thus , and the claim follows.
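Here is a small numerical run of the greedy argument in the proof, on integer sets of our own choosing:

```python
# Greedy selection from the Ruzsa covering lemma: keep b in B whose
# translate A + b is disjoint from all previously kept translates.  The
# proof above shows at most |A + B| / |A| points are kept, and that B is
# then covered by the sets b0 + (A - A) over the kept points b0.
def sumset(X, Y):
    return {x + y for x in X for y in Y}

A = {0, 1, 2, 3}
B = {0, 2, 4, 6, 8}
kept, used = [], set()
for b in sorted(B):
    T = {a + b for a in A}
    if T.isdisjoint(used):
        kept.append(b)
        used |= T

assert len(kept) <= len(sumset(A, B)) // len(A)
AmA = {a1 - a2 for a1 in A for a2 in A}
assert all(any(b - b0 in AmA for b0 in kept) for b in B)
```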

Lemma 2 (Ruzsa triangle inequality) Let be finite non-empty subsets of . Then .

*Proof:* Consider the addition map from to . Every element of has a preimage of this map of cardinality at least , thanks to the obvious identity for each . Since has cardinality , the claim follows.
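The triangle inequality is easy to test numerically on random sets (a sanity check of ours, not part of the original argument):

```python
# Randomised check of the Ruzsa triangle inequality
# |A| * |B - C| <= |A - B| * |A - C| over small random integer sets.
import random

def diff(X, Y):
    return {x - y for x in X for y in Y}

random.seed(0)
for _ in range(200):
    A = set(random.sample(range(40), 6))
    B = set(random.sample(range(40), 6))
    C = set(random.sample(range(40), 6))
    assert len(A) * len(diff(B, C)) <= len(diff(A, B)) * len(diff(A, C))
```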

Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets of small doubling, in the sense that for some bounded . (There are deeper theorems, most notably Freiman’s theorem, which give more control than what elementary Ruzsa calculus does, however the known bounds in the latter theorem are worse than polynomial in (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in .)

However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. or a torus ) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group of orthogonal matrices, and let be the matrices obtained by starting with all of the orthogonal matrices in and rounding each coefficient of each matrix in this set to the nearest multiple of , for some small . This forms a finite set (whose cardinality grows as like a certain negative power of ). In the limit , the set is not a set of small doubling in the discrete sense. However, is still close to in a metric sense, being contained in the -neighbourhood of . Another key example comes from graphs of maps from a subset of one additive group to another . If is “approximately additive” in the sense that for all , is close to in some metric, then might not have small doubling in the discrete sense (because could take a large number of values), but could be considered a set of small doubling in a discretised sense.
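A toy version of this discretisation phenomenon (using the rotation group SO(2) rather than O(3), purely for illustration) can be computed directly:

```python
# Round each entry of the rotation matrices in SO(2) to the nearest
# multiple of delta.  The resulting finite set has cardinality growing
# like a negative power of delta, while every element stays within
# O(delta) of the original group -- "small doubling" only in the
# discretised, metric sense.
import math

def discretise(delta, samples=200000):
    pts = set()
    for i in range(samples):
        t = 2 * math.pi * i / samples
        # a rotation matrix is determined by its first column (cos t, sin t)
        pts.add((round(math.cos(t) / delta), round(math.sin(t) / delta)))
    return pts

n_coarse = len(discretise(0.1))
n_fine = len(discretise(0.01))
assert n_coarse < n_fine   # cardinality grows as delta -> 0
# each rounded point is within delta/sqrt(2) of the unit circle
for (u, v) in discretise(0.1):
    r = math.hypot(u * 0.1, v * 0.1)
    assert abs(r - 1.0) <= 0.1
```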

One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric on the underlying group (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:

Definition 3 Let be a metric space, let be a subset of , and let be a radius.

- The *packing number* is the largest number of points one can pack inside such that the balls are disjoint.
- The *internal covering number* is the smallest number of points such that the balls cover .
- The *external covering number* is the smallest number of points such that the balls cover .
- The *metric entropy* is the largest number of points one can find in that are -separated, thus for all .

It is an easy exercise to verify the inequalities

for any , and that is non-increasing in and non-decreasing in for the three choices (but monotonicity in can fail for !). It turns out that the external covering number is slightly more convenient than the other notions of metric entropy, so we will abbreviate . The cardinality can be viewed as the limit of the entropies as .
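Part of this chain of comparisons can be checked by brute force on a tiny subset of the real line (our own example; we verify that a maximal set separated at scale 2r is no larger than the internal covering number at scale r, which dominates the external one):

```python
# Brute-force comparison of entropy notions from Definition 3 for a small
# E in R: two points more than 2r apart cannot share one closed r-ball,
# so any r-cover needs at least as many balls as a 2r-separated subset
# has points; the internal covering number in turn dominates the external
# one, since its centres are constrained to lie in E.
from itertools import combinations

E = [0.0, 0.3, 1.0, 2.5, 2.6, 4.0]
r = 0.45

def covers(centres, pts):
    return all(any(abs(p - c) <= r for c in centres) for p in pts)

def internal_cover_number():
    for k in range(1, len(E) + 1):
        if any(covers(c, E) for c in combinations(E, k)):
            return k

def max_separated(sep):
    best = 1
    for k in range(1, len(E) + 1):
        for S in combinations(E, k):
            if all(abs(a - b) > sep for a, b in combinations(S, 2)):
                best = k
    return best

assert max_separated(2 * r) <= internal_cover_number()
```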

If we have the bounded doubling property that is covered by translates of for each , and one has a Haar measure on which assigns a positive finite mass to each ball, then any of the above entropies is comparable to , as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter by some absolute constant. Such losses can be acceptable in applications in which the underlying sets are largely “transverse” to the balls , so that the -entropy of is largely independent of ; this is a situation which arises in particular in the case of graphs discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)

Henceforth we work in an additive group equipped with a translation-invariant metric . (One can also generalise things slightly by allowing the metric to attain the values or , without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set will have finite entropy for any . We now have analogues of the two basic Ruzsa lemmas above:

Lemma 4 (Ruzsa covering lemma) Let be precompact non-empty subsets of , and let . Then may be covered by at most translates of .

*Proof:* Let be a maximal set of points such that the sets are all disjoint. Then the sets are disjoint in and have entropy , and furthermore any ball of radius can intersect at most one of the . We conclude that , so . If , then must intersect one of the , so , and the claim follows.

Lemma 5 (Ruzsa triangle inequality) Let be precompact non-empty subsets of , and let . Then .

*Proof:* Consider the addition map from to . The domain may be covered by product balls . Every element of has a preimage of this map which projects to a translate of , and thus must meet at least of these product balls. However, if two elements of are separated by a distance of at least , then no product ball can intersect both preimages. We thus see that , and the claim follows.

Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete arguments are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)

It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.

As in the previous post, all computations here are at the formal level only.

In the previous blog post, the Euler equations for inviscid incompressible fluid flow were interpreted in a Lagrangian fashion, and then Noether’s theorem invoked to derive the known conservation laws for these equations. In a bit more detail: starting with *Lagrangian space* and *Eulerian space* , we let be the space of volume-preserving, orientation-preserving maps from Lagrangian space to Eulerian space. Given a curve , we can define the *Lagrangian velocity field* as the time derivative of , and the *Eulerian velocity field* . The volume-preserving nature of ensures that is a divergence-free vector field:

If we formally define the functional

then one can show that the critical points of this functional (with appropriate boundary conditions) obey the Euler equations

for some pressure field . As discussed in the previous post, the time translation symmetry of this functional yields conservation of the Hamiltonian

the rigid motion symmetries of Eulerian space give conservation of the total momentum

and total angular momentum

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

for any closed loop in , or equivalently pointwise conservation of the Lagrangian vorticity , where is the -form associated with the vector field using the Euclidean metric on , with denoting pullback by .

It turns out that one can generalise the above calculations. Given any self-adjoint operator on divergence-free vector fields , we can define the functional

as we shall see below the fold, critical points of this functional (with appropriate boundary conditions) obey the generalised Euler equations

for some pressure field , where in coordinates is with the usual summation conventions. (When , , and this term can be absorbed into the pressure , and we recover the usual Euler equations.) Time translation symmetry then gives conservation of the Hamiltonian

If the operator commutes with rigid motions on , then we have conservation of total momentum

and total angular momentum

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

or pointwise conservation of the Lagrangian vorticity . These applications of Noether’s theorem proceed exactly as the previous post; we leave the details to the interested reader.

One particular special case of interest arises in two dimensions , when is the inverse derivative . The vorticity is a -form, which in the two-dimensional setting may be identified with a scalar. In coordinates, if we write , then

Since is also divergence-free, we may therefore write

where the stream function is given by the formula

If we take the curl of the generalised Euler equation (2), we obtain (after some computation) the surface quasi-geostrophic equation

This equation has strong analogies with the three-dimensional incompressible Euler equations, and can be viewed as a simplified model for that system; see this paper of Constantin, Majda, and Tabak for details.
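The relation between the scalar, the stream function, and the velocity can be illustrated with a short spectral computation (a sketch of ours on the periodic box, assuming the normalisation in which the stream function is $|\nabla|^{-1}$ applied to the scalar and the velocity is its perpendicular gradient):

```python
# Minimal spectral sketch of the SQG velocity law on [0, 2pi)^2:
# psi = |nabla|^{-1} theta, u = (-psi_y, psi_x).  Whatever the sign
# convention, the resulting velocity field is automatically
# divergence-free, as required in the generalised Euler setting.
import numpy as np

n = 64
k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
kx, ky = np.meshgrid(k, k, indexing='ij')
mag = np.sqrt(kx**2 + ky**2)
mag[0, 0] = 1.0                             # avoid dividing by zero at k = 0

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
theta = np.sin(3 * X) * np.cos(2 * Y)       # sample mean-zero scalar

psi_hat = np.fft.fft2(theta) / mag          # psi = |nabla|^{-1} theta
ux = np.real(np.fft.ifft2(-1j * ky * psi_hat))   # u = (-psi_y, psi_x)
uy = np.real(np.fft.ifft2(1j * kx * psi_hat))
div = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(ux)
                           + 1j * ky * np.fft.fft2(uy)))
assert np.max(np.abs(div)) < 1e-10          # u is divergence-free
```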

Now we can specialise the general conservation laws derived previously to this setting. The conserved Hamiltonian is

(a law previously observed for this equation in the abovementioned paper of Constantin, Majda, and Tabak). As commutes with rigid motions, we also have (formally, at least) conservation of momentum

(which up to trivial transformations is also expressible in impulse form as , after integration by parts), and conservation of angular momentum

(which up to trivial transformations is ). Finally, diffeomorphism invariance gives pointwise conservation of Lagrangian vorticity , thus is transported by the flow (which is also evident from (3)). In particular, all integrals of the form for a fixed function are conserved by the flow.

Throughout this post, we will work only at the *formal* level of analysis, ignoring issues of convergence of integrals, justifying differentiation under the integral sign, and so forth. (Rigorous justification of the conservation laws and other identities arising from the formal manipulations below can usually be established in an *a posteriori* fashion once the identities are in hand, without the need to rigorously justify the manipulations used to come up with these identities).

It is a remarkable fact in the theory of differential equations that many of the ordinary and partial differential equations that are of interest (particularly in geometric PDE, or PDE arising from mathematical physics) admit a variational formulation; thus, a collection of one or more fields on a domain taking values in a space will solve the differential equation of interest if and only if is a critical point of the functional

involving the fields and their first derivatives , where the Lagrangian is a function on the vector bundle over consisting of triples with , , and a linear transformation; we also usually keep the boundary data of fixed in case has a non-trivial boundary, although we will ignore these issues here. (We also ignore the possibility of having additional constraints imposed on and , which require the machinery of Lagrange multipliers to deal with, but which will only serve as a distraction for the current discussion.) It is common to use local coordinates to parameterise as and as , in which case can be viewed locally as a function on .

Example 1 (Geodesic flow) Take and to be a Riemannian manifold, which we will write locally in coordinates as with metric for . A geodesic is then a critical point (keeping fixed) of the energy functional or in coordinates (ignoring coordinate patch issues, and using the usual summation conventions)

As discussed in this previous post, both the Euler equations for rigid body motion, and the Euler equations for incompressible inviscid flow, can be interpreted as geodesic flow (though in the latter case, one has to work *really* formally, as the manifold is now infinite dimensional).

More generally, if is itself a Riemannian manifold, which we write locally in coordinates as with metric for , then a harmonic map is a critical point of the energy functional

or in coordinates (again ignoring coordinate patch issues)

If we replace the Riemannian manifold by a Lorentzian manifold, such as Minkowski space , then the notion of a harmonic map is replaced by that of a wave map, which generalises the scalar wave equation (which corresponds to the case ).

Example 2 (-particle interactions) Take and ; then a function can be interpreted as a collection of trajectories in space, which we give a physical interpretation as the trajectories of particles. If we assign each particle a positive mass , and also introduce a potential energy function , then it turns out that Newton’s laws of motion in this context (with the force on the particle being given by the conservative force ) are equivalent to the trajectories being a critical point of the action functional

Formally, if is a critical point of a functional , this means that

whenever is a (smooth) deformation with (and with respecting whatever boundary conditions are appropriate). Interchanging the derivative and integral, we (formally, at least) arrive at

Write for the infinitesimal deformation of . By the chain rule, can be expressed in terms of . In coordinates, we have

where we parameterise by , and we use subscripts on to denote partial derivatives in the various coefficients. (One can of course work in a coordinate-free manner here if one really wants to, but the notation becomes a little cumbersome due to the need to carefully split up the tangent space of , and we will not do so here.) Thus we can view (2) as an integral identity that asserts the vanishing of a certain integral, whose integrand involves , where vanishes at the boundary but is otherwise unconstrained.

A general rule of thumb in PDE and calculus of variations is that whenever one has an integral identity of the form $\int_\Omega F(x)\ dx = 0$ for some class of functions $F$ that vanish on the boundary, then there must be an associated differential identity $F = \mathrm{div}\, G$ that justifies this integral identity through Stokes' theorem. This rule of thumb helps explain why integration by parts is used so frequently in PDE to justify integral identities. The rule of thumb can fail when one is dealing with "global" or "cohomologically non-trivial" integral identities of a topological nature, such as the Gauss-Bonnet or Kazhdan-Warner identities, but is quite reliable for "local" or "cohomologically trivial" identities, such as those arising from calculus of variations.
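A small numerical sanity check of the integration-by-parts mechanism behind this rule of thumb (a toy example of my own, not from the post): if $f$ vanishes on the boundary of $[0,1]$, then $\int_0^1 f g'\ dx = -\int_0^1 f' g\ dx$, because the integrand $f g' + f' g = (fg)'$ is a total derivative whose integral sees only the (vanishing) boundary data.

```python
import math

def trapezoid(values, dx):
    """Composite trapezoid rule on a uniform grid."""
    return dx * (sum(values) - 0.5 * (values[0] + values[-1]))

n = 2000
dx = 1.0 / n
xs = [i * dx for i in range(n + 1)]

f = lambda x: math.sin(math.pi * x) ** 2   # vanishes at x = 0 and x = 1
fp = lambda x: 2 * math.pi * math.sin(math.pi * x) * math.cos(math.pi * x)
g = lambda x: math.exp(x)
gp = lambda x: math.exp(x)

lhs = trapezoid([f(x) * gp(x) for x in xs], dx)
rhs = -trapezoid([fp(x) * g(x) for x in xs], dx)
# The boundary term (f g)(1) - (f g)(0) vanishes, so the two integrals agree
# up to quadrature error.
print(abs(lhs - rhs))
```

Replacing `f` by a function that does not vanish on the boundary breaks the identity by exactly the boundary term $(fg)(1) - (fg)(0)$, which is the discrete shadow of Stokes' theorem here.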

In any case, if we apply this rule to (2), we expect that the integrand should be expressible as a spatial divergence. This is indeed the case:

Proposition 1 (Formal). Let $\phi$ be a critical point of the functional $J[\phi]$ defined in (1). Then for any deformation $s \mapsto \phi^{(s)}$ with $\phi^{(0)} = \phi$, we have

$\displaystyle \frac{d}{ds} L(x, \phi^{(s)}(x), D\phi^{(s)}(x))\Big|_{s=0} = \mathrm{div}\, X(x), \ \ \ \ \ (4)$

where $X$ is the vector field that is expressible in coordinates as

$\displaystyle X^j(x) := \delta\phi^i(x)\, L_{\partial_{x^j} q^i}(x, \phi(x), D\phi(x)). \ \ \ \ \ (5)$
*Proof:* Comparing (4) with (3), we see that the claim is equivalent to the Euler-Lagrange equation

$\displaystyle L_{q^i}(x, \phi(x), D\phi(x)) - \partial_{x^j} L_{\partial_{x^j} q^i}(x, \phi(x), D\phi(x)) = 0. \ \ \ \ \ (6)$

The same computation, together with an integration by parts, shows that (2) may be rewritten as

$\displaystyle \int_\Omega \delta\phi^i(x) \left( L_{q^i}(x, \phi(x), D\phi(x)) - \partial_{x^j} L_{\partial_{x^j} q^i}(x, \phi(x), D\phi(x)) \right)\ dx = 0.$

Since $\delta\phi$ is unconstrained on the interior of $\Omega$, the claim (6) follows (at a formal level, at least). $\Box$

Many variational problems also enjoy one-parameter continuous *symmetries*: given any field $\phi$ (not necessarily a critical point), one can place that field in a one-parameter family $s \mapsto \phi^{(s)}$ with $\phi^{(0)} = \phi$, such that

$\displaystyle J[\phi^{(s)}] = J[\phi]$

for all $s$; in particular,

$\displaystyle \frac{d}{ds} J[\phi^{(s)}]\Big|_{s=0} = 0,$
which can be written as (2) as before. Applying the previous rule of thumb, we thus expect another divergence identity

$\displaystyle \frac{d}{ds} L(x, \phi^{(s)}(x), D\phi^{(s)}(x))\Big|_{s=0} = \mathrm{div}\, Y(x) \ \ \ \ \ (7)$

whenever $s \mapsto \phi^{(s)}$ arises from a continuous one-parameter symmetry. This expectation is indeed the case in many examples. For instance, if the spatial domain $\Omega$ is the Euclidean space $\mathbf{R}^d$, and the Lagrangian (when expressed in coordinates) has no direct dependence on the spatial variable $x$, thus

$\displaystyle L(x, q, \partial q) = L(q, \partial q), \ \ \ \ \ (8)$
then we obtain translation symmetries

$\displaystyle \phi^{(s)}(x) := \phi(x + s e_j)$

for $j = 1, \dots, d$, where $e_1, \dots, e_d$ is the standard basis for $\mathbf{R}^d$. For a fixed $j$, the left-hand side of (7) then becomes

$\displaystyle \partial_{x^j} \left( L(\phi(x), D\phi(x)) \right) = \mathrm{div}\, Y(x),$

where $Y(x) := L(\phi(x), D\phi(x))\, e_j$. Another common type of symmetry is a *pointwise* symmetry, in which

$\displaystyle L(x, \phi^{(s)}(x), D\phi^{(s)}(x)) = L(x, \phi(x), D\phi(x)) \ \ \ \ \ (9)$

for all $s$, in which case (7) clearly holds with $Y = 0$.

If we subtract (4) from (7), we obtain the celebrated theorem of Noether linking symmetries with conservation laws:

Theorem 2 (Noether's theorem). Suppose that $\phi$ is a critical point of the functional (1), and let $s \mapsto \phi^{(s)}$ be a one-parameter continuous symmetry with $\phi^{(0)} = \phi$. Let $X$ be the vector field in (5), and let $Y$ be the vector field in (7). Then we have the pointwise conservation law

$\displaystyle \mathrm{div}(X - Y) = 0. \ \ \ \ \ (10)$

In particular, for one-dimensional variational problems, in which $\Omega \subset \mathbf{R}$, we have the conservation law $X(t) - Y(t) = X(0) - Y(0)$ for all $t \in \Omega$ (assuming of course that $\Omega$ is connected and contains $0$).

Noether's theorem gives a systematic way to locate conservation laws for solutions to variational problems. For instance, if $\Omega \subset \mathbf{R}$ is parameterised by a time variable $t$ and the Lagrangian $L(t, q, \dot q)$ has no explicit time dependence, thus

$\displaystyle L(t, q, \dot q) = L(q, \dot q),$

then by using the time translation symmetry $\phi^{(s)}(t) := \phi(t + s)$, we have

$\displaystyle Y(t) = L(\phi(t), \dot\phi(t))$

as discussed previously, whereas we have $\delta\phi = \dot\phi$, and hence by (5)

$\displaystyle X(t) = \dot\phi^i(t)\, L_{\dot q^i}(\phi(t), \dot\phi(t)),$

and so Noether's theorem gives conservation of the *Hamiltonian*

$\displaystyle H(t) := \dot\phi^i(t)\, L_{\dot q^i}(\phi(t), \dot\phi(t)) - L(\phi(t), \dot\phi(t)).$

For instance, for geodesic flow, the Hamiltonian works out to be

$\displaystyle H(t) = \frac{1}{2} g_{ij}(\phi(t))\, \dot\phi^i(t)\, \dot\phi^j(t),$

so we see that the speed of the geodesic is conserved over time.
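One can see this conservation numerically (the sphere example, initial data, and step sizes below are my own choices, not from the post): integrate the geodesic equations on the unit 2-sphere, with metric $ds^2 = d\theta^2 + \sin^2\theta\, d\varphi^2$, and check that the squared speed $\dot\theta^2 + \sin^2\theta\, \dot\varphi^2$ stays constant along the flow.

```python
import math

def deriv(state):
    """Geodesic equations on the unit sphere in (theta, phi) coordinates:
    theta'' = sin(theta) cos(theta) phi'^2,  phi'' = -2 cot(theta) theta' phi'."""
    th, ph, dth, dph = state
    return (dth,
            dph,
            math.sin(th) * math.cos(th) * dph * dph,
            -2.0 * (math.cos(th) / math.sin(th)) * dth * dph)

def rk4_step(state, dt):
    def add(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def speed_sq(state):
    th, _, dth, dph = state
    return dth * dth + math.sin(th) ** 2 * dph * dph

state = (1.0, 0.0, 0.2, 0.5)   # start away from the coordinate poles
initial = speed_sq(state)
for _ in range(1000):
    state = rk4_step(state, 1e-3)
drift = abs(speed_sq(state) - initial)
print(drift < 1e-9)  # → True
```

The conservation here is exact for the true flow; the tiny residual drift is purely the RK4 discretisation error.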

For pointwise symmetries (9), $Y$ vanishes, and so Noether's theorem simplifies to $\mathrm{div}\, X = 0$; in the one-dimensional case $\Omega \subset \mathbf{R}$, we thus see from (5) that the quantity

$\displaystyle \delta\phi^i(t)\, L_{\dot q^i}(t, \phi(t), \dot\phi(t)) \ \ \ \ \ (11)$

is conserved in time. For instance, for the $N$-particle system in Example 2, if we have the translation invariance

$\displaystyle V(q_1 + h, \dots, q_N + h) = V(q_1, \dots, q_N)$

for all $q_1, \dots, q_N, h \in \mathbf{R}^3$, then we have the pointwise translation symmetry

$\displaystyle \phi^{(s)}(t) := (q_1(t) + sh, \dots, q_N(t) + sh)$

for all $s \in \mathbf{R}$ and some $h \in \mathbf{R}^3$, in which case $\delta\phi = (h, \dots, h)$, and the conserved quantity (11) becomes

$\displaystyle \sum_{k=1}^N m_k \dot q_k(t) \cdot h;$

as $h$ was arbitrary, this establishes conservation of the *total momentum*

$\displaystyle \sum_{k=1}^N m_k \dot q_k(t).$

Similarly, if we have the rotation invariance

$\displaystyle V(R q_1, \dots, R q_N) = V(q_1, \dots, q_N)$

for any $q_1, \dots, q_N \in \mathbf{R}^3$ and any rotation $R \in SO(3)$, then we have the pointwise rotation symmetry

$\displaystyle \phi^{(s)}(t) := (e^{sA} q_1(t), \dots, e^{sA} q_N(t))$

for any skew-symmetric real $3 \times 3$ matrix $A$, in which case $\delta\phi = (A q_1, \dots, A q_N)$, and the conserved quantity (11) becomes

$\displaystyle \sum_{k=1}^N m_k \dot q_k(t) \cdot A q_k(t);$

since $A$ is an arbitrary skew-symmetric matrix, this establishes conservation of the *total angular momentum*

$\displaystyle \sum_{k=1}^N m_k\, q_k(t) \wedge \dot q_k(t).$
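Both conservation laws are easy to observe numerically (the two-particle system, pair potential, and integrator below are my own toy choices in the spirit of Example 2, not from the post): the potential $V = \frac{1}{2}|q_1 - q_2|^2$ is invariant under simultaneous translations and rotations of both particles, so total momentum and total angular momentum should be conserved; velocity-Verlet integration happens to respect both essentially to machine precision.

```python
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(c, a): return [c * x for x in a]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

m = [1.0, 2.0]  # particle masses

def forces(q):
    d = sub(q[0], q[1])              # from V = (1/2)|q1 - q2|^2:
    return [scale(-1.0, d), d]       # equal and opposite pair forces

def verlet_step(q, v, dt):
    f = forces(q)
    v = [add(vk, scale(dt / (2 * mk), fk)) for vk, fk, mk in zip(v, f, m)]
    q = [add(qk, scale(dt, vk)) for qk, vk in zip(q, v)]
    f = forces(q)
    v = [add(vk, scale(dt / (2 * mk), fk)) for vk, fk, mk in zip(v, f, m)]
    return q, v

def momentum(v):
    return [sum(mk * vk[i] for mk, vk in zip(m, v)) for i in range(3)]

def ang_momentum(q, v):
    tot = [0.0, 0.0, 0.0]
    for mk, qk, vk in zip(m, q, v):
        tot = add(tot, scale(mk, cross(qk, vk)))
    return tot

q = [[1.0, 0.0, 0.0], [-1.0, 0.5, 0.0]]
v = [[0.0, 0.3, 0.1], [0.2, 0.0, -0.1]]
p0, l0 = momentum(v), ang_momentum(q, v)
for _ in range(1000):
    q, v = verlet_step(q, v, dt=0.01)
p1, l1 = momentum(v), ang_momentum(q, v)
p_drift = max(abs(a - b) for a, b in zip(p0, p1))
l_drift = max(abs(a - b) for a, b in zip(l0, l1))
print(p_drift < 1e-12, l_drift < 1e-9)  # → True True
```

The exact conservation by the discrete scheme is itself a discrete Noether phenomenon: the discretised action inherits the translation and rotation symmetries of $V$, so the corresponding discrete momenta are conserved step by step, not merely approximately.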

Below the fold, I will describe how Noether’s theorem can be used to locate all of the conserved quantities for the Euler equations of inviscid fluid flow, discussed in this previous post, by interpreting that flow as geodesic flow in an infinite dimensional manifold.
