
I’ve just uploaded to the arXiv my paper Finite time blowup for high dimensional nonlinear wave systems with bounded smooth nonlinearity, submitted to Comm. PDE. This paper is in the same spirit as (though not directly related to) my previous paper on finite time blowup of supercritical NLW systems, and was inspired by a question posed to me some time ago by Jeffrey Rauch. Here, instead of looking at supercritical equations, we look at an extremely subcritical equation, namely a system of the form

$\displaystyle \Box u = f(u) \ \ \ \ \ (1)$

where $u: {\bf R}^{1+d} \rightarrow {\bf R}^m$ is the unknown field, and $f: {\bf R}^m \rightarrow {\bf R}^m$ is the nonlinearity, which we assume to have all derivatives bounded. A typical example of such an equation is the higher-dimensional sine-Gordon equation

$\displaystyle \Box u = \sin u$

for a scalar field $u: {\bf R}^{1+d} \rightarrow {\bf R}$. Here $\Box = -\partial_{tt} + \Delta$ is the d’Alembertian operator. We restrict attention here to classical (i.e. smooth) solutions to (1).

We do not assume any Hamiltonian structure, so we do not require $f$ to be a gradient $\nabla F$ of a potential $F: {\bf R}^m \rightarrow {\bf R}$. But even without such Hamiltonian structure, the equation (1) is very well behaved, with many *a priori* bounds available. For instance, if the initial position $u(0,\cdot)$ and initial velocity $\partial_t u(0,\cdot)$ are smooth and compactly supported, then from finite speed of propagation $u$ has uniformly bounded compact support for all $t$ in a bounded interval. As the nonlinearity $f$ is bounded, this immediately places $f(u)$ in $L^\infty_t L^2_x$ in any bounded time interval, which by the energy inequality gives an a priori $H^1_x$ bound on $u$ in this time interval. Next, from the chain rule we have

$\displaystyle \nabla( f(u) ) = (\nabla f)(u) \nabla u$

which (from the assumption that $\nabla f$ is bounded) shows that $\nabla(f(u))$ is in $L^\infty_t L^2_x$, which by the energy inequality again now gives an a priori $H^2_x$ bound on $u$.

One might expect that one could keep iterating this and obtain *a priori* bounds on $u$ in arbitrarily smooth norms. In low dimensions such as $d \leq 3$, this is a fairly easy task, since the above estimates and Sobolev embedding already place one in $L^\infty_t L^\infty_x$, and the nonlinear map $u \mapsto f(u)$ is easily verified to preserve the space $L^\infty_t H^k_x \cap L^\infty_t L^\infty_x$ for any natural number $k$, from which one obtains a priori bounds in any Sobolev space; from this and standard energy methods, one can then establish global regularity for this equation (that is to say, any smooth choice of initial data generates a global smooth solution). However, one starts running into trouble in higher dimensions, in which no $L^\infty_x$ bound on $u$ is available. The main problem is that even a really nice nonlinearity such as $u \mapsto \sin u$ is unbounded in higher Sobolev norms. The estimates

$\displaystyle |\sin u| \leq |u|$

and

$\displaystyle \nabla( \sin u ) = (\cos u) \nabla u$

ensure that the map $u \mapsto \sin u$ is bounded in low regularity spaces like $L^2_x$ or $H^1_x$, but one already runs into trouble with the second derivative

$\displaystyle \nabla^2( \sin u ) = (\cos u) \nabla^2 u - (\sin u) \nabla u \otimes \nabla u,$

where there is a troublesome lower order term of size $O(|\nabla u|^2)$ which becomes difficult to control in higher dimensions, preventing the map $u \mapsto \sin u$ from being bounded in $H^2_x$. Ultimately, the issue here is that when $u$ is not controlled in $L^\infty_x$, the function $\sin u$ can oscillate at a much higher frequency than $u$; for instance, if $u$ is the one-dimensional wave $u(t,x) = A \sin(\xi(x-t))$ for some amplitude $A > 1$ and frequency $\xi > 1$, then $u$ oscillates at frequency $\xi$, but the function $\sin u$ more or less oscillates at the larger frequency $A\xi$.
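This frequency-multiplication phenomenon is easy to see numerically. The following quick check (with arbitrary illustrative values of the amplitude and frequency, not taken from the paper) computes the dominant Fourier mode of $u$ and of $\sin u$ on a periodic grid:

```python
import numpy as np

# If u(x) = A sin(xi x) has amplitude A and frequency xi, then sin(u)
# should oscillate at frequency roughly A * xi.  (A and xi are arbitrary
# illustrative values, not taken from the paper.)
N, A, xi = 4096, 8, 16
x = np.arange(N) * 2 * np.pi / N
u = A * np.sin(xi * x)

def peak_frequency(f):
    # dominant Fourier mode on the periodic grid, ignoring the mean
    return int(np.argmax(np.abs(np.fft.rfft(f))[1:])) + 1

print(peak_frequency(u))          # 16, i.e. the frequency xi of u itself
print(peak_frequency(np.sin(u)))  # much larger, close to A * xi = 128
```

The dominant mode of $\sin u$ sits near $A\xi$ (at the odd harmonic of $\xi$ whose Bessel coefficient is largest), well above the frequency of $u$ itself.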

In medium dimensions, it is possible to use dispersive estimates for the wave equation (such as the famous Strichartz estimates) to overcome these problems. This line of inquiry was pursued (albeit for slightly different classes of nonlinearity than those considered here) by Heinz-von Wahl, Pecher (in a series of papers), Brenner, and Brenner-von Wahl; to cut a long story short, one of the conclusions of these papers was that one had global regularity for equations such as (1) in dimensions $d \leq 9$. (I reprove this result using modern Strichartz estimates and Littlewood-Paley techniques in an appendix to my paper. The references given also allow for some growth in the nonlinearity $f$, but we will not detail the precise hypotheses used in these papers here.)

In my paper, I complement these positive results with an almost matching negative result:

Theorem 1. If $d \geq 11$ and $m \geq 2$, then there exists a nonlinearity $f: {\bf R}^m \rightarrow {\bf R}^m$ with all derivatives bounded, and a solution $u$ to (1) that is smooth at time zero, but develops a singularity in finite time.

The construction crucially relies on the ability to choose the nonlinearity $f$, and also needs some injectivity properties on the solution $u$ (after making a symmetry reduction using an assumption of spherical symmetry to view $u$ as a function of the $1+1$ variables $(t,r)$ rather than $1+d$ variables) which restricts our counterexample to the vector-valued case $m \geq 2$. Thus the model case of the higher-dimensional sine-Gordon equation is not covered by our arguments. Nevertheless (as with previous finite-time blowup results discussed on this blog), one can view this result as a *barrier* to trying to prove regularity for equations such as sine-Gordon in eleven and higher dimensions, as any such argument must somehow use a property of that equation that is not applicable to the more general system (1).

Let us first give some back-of-the-envelope calculations suggesting why there could be finite time blowup in eleven and higher dimensions. For sake of this discussion let us restrict attention to the sine-Gordon equation $\Box u = \sin u$. The blowup ansatz we will use is as follows: for each frequency $N_n$ in a sequence of large quantities going to infinity, there will be a spacetime “cube” $Q_n$ on which the solution oscillates with “amplitude” $M_n$ and “frequency” $N_n$, where the amplitude $M_n$ is a power of $N_n$ involving an exponent $\theta$ to be chosen later; this ansatz is of course compatible with the uncertainty principle. Since $N_n \rightarrow \infty$ as $n \rightarrow \infty$, this will create a singularity at the spacetime origin $(0,0)$. To make this ansatz plausible, we wish to make the oscillation of $u$ on $Q_n$ driven primarily by the forcing term $\sin u$ at the preceding cube $Q_{n-1}$. Thus, by Duhamel’s formula, we expect a relation roughly of the form

on $Q_n$, where $\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}$ is the usual free wave propagator, and $1_{Q_{n-1}}$ is the indicator function of $Q_{n-1}$.

On $Q_{n-1}$, $u$ oscillates with amplitude $M_{n-1}$ and frequency $N_{n-1}$, so we expect the derivative $\nabla u$ to be of size about $M_{n-1} N_{n-1}$, and so from the principle of stationary phase we expect $\sin u$ to oscillate at frequency about $M_{n-1} N_{n-1}$. Since the wave propagator preserves frequencies, and $u$ is supposed to be of frequency $N_n$ on $Q_n$, we are thus led to the requirement

$\displaystyle N_n \sim M_{n-1} N_{n-1}. \ \ \ \ \ (2)$

Next, when restricted to frequencies of order $N$, the propagator $\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}$ “behaves like” a multiple of the spherical averaging operator $A_t$, where

$\displaystyle A_t f(x) := \frac{1}{|S^{d-1}|} \int_{S^{d-1}} f(x + t\omega)\ d\sigma(\omega),$

$d\sigma$ is surface measure on the unit sphere $S^{d-1}$, and $|S^{d-1}|$ is the volume of that sphere. In our setting, the relevant times $t$ are comparable to the sidelength of the cube $Q_{n-1}$, and so we have the informal approximation

on $Q_n$.

Since $\sin u$ is bounded, $A_t(\sin u)$ is bounded as well. This gives a (non-rigorous) upper bound

which when combined with our ansatz that $u$ has amplitude about $M_n$ on $Q_n$, gives the constraint

which on applying (2) gives the further constraint

which can be rearranged as

It is now clear that the optimal choice of the exponent $\theta$ is given by

and this blowup ansatz is only self-consistent when

or equivalently if $d \geq 11$.

To turn this ansatz into an actual blowup example, we will construct $u$ as the sum of various functions that solve the wave equation with forcing term in $Q_{n-1}$, and which concentrate in $Q_n$ with the amplitude and frequency indicated by the above heuristic analysis. The remaining task is to show that the forcing term can be written in the form $f(u)$ for some nonlinearity $f$ with all derivatives bounded. For this one needs some injectivity properties of $u$ (after imposing spherical symmetry to impose a dimensional reduction on the domain of $u$ from $1+d$ dimensions to $1+1$). This requires one to construct some solutions to the free wave equation that have some unusual restrictions on the range (for instance, we will need a solution taking values in the plane ${\bf R}^2$ that avoids one quadrant of that plane). In order to do this we take advantage of the very explicit nature of the fundamental solution to the wave equation in odd dimensions (such as $d=11$), particularly under the assumption of spherical symmetry. Specifically, one can show that in odd dimension $d$, any spherically symmetric function of the form

$\displaystyle u(t,r) = \left( \frac{1}{r} \partial_r \right)^{\frac{d-1}{2}} \phi(t-r)$

for an arbitrary smooth function $\phi$, will solve the free wave equation; this is ultimately due to iterating the “ladder operator” identity

$\displaystyle \left( -\partial_{tt} + \partial_{rr} + \frac{d+1}{r} \partial_r \right) \frac{1}{r} \partial_r u = \frac{1}{r} \partial_r \left( -\partial_{tt} + \partial_{rr} + \frac{d-1}{r} \partial_r \right) u.$

This precise and relatively simple formula for $u$ allows one to create “bespoke” solutions that obey various unusual properties, without too much difficulty.
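The ladder can be checked symbolically. Here is a small sympy verification (a sanity check of the general principle, not code from the paper) that applying $\frac{1}{r}\partial_r$ to a travelling wave $\phi(t-r)$ produces spherically symmetric solutions of the free wave equation in dimensions $3$ and $5$:

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
phi = sp.Function('phi')

def box_radial(u, d):
    # radial d'Alembertian in d spatial dimensions: -u_tt + u_rr + (d-1)/r u_r
    return -sp.diff(u, t, 2) + sp.diff(u, r, 2) + (d - 1) / r * sp.diff(u, r)

def ladder(u):
    # the "ladder operator" (1/r) d/dr, which raises the dimension by two
    return sp.diff(u, r) / r

u1 = phi(t - r)   # a travelling wave: solves the free wave equation for d = 1
u3 = ladder(u1)   # should solve the radial free wave equation for d = 3
u5 = ladder(u3)   # ... and this one for d = 5

assert sp.simplify(box_radial(u1, 1)) == 0
assert sp.simplify(box_radial(u3, 3)) == 0
assert sp.simplify(box_radial(u5, 5)) == 0
print("ladder solutions verified for d = 1, 3, 5")
```

Iterating `ladder` further produces the analogous solutions in every odd dimension.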

It is not clear to me what to conjecture for the remaining case $d=10$. The blowup ansatz given above is a little inefficient, in that the component of the solution at each new frequency is only generated from a portion of the component at the preceding frequency, namely the portion close to a certain light cone. In particular, the solution does not saturate the Strichartz estimates that are used to establish the positive results for $d \leq 9$, which helps explain the slight gap between the positive and negative results. It may be that a more complicated ansatz could work to give a negative result in ten dimensions; conversely, it is also possible that one could use more advanced estimates than the Strichartz estimate (that somehow capture the “thinness” of the fundamental solution, and not just its dispersive properties) to stretch the positive results to ten dimensions. Which side the $d=10$ case falls on will come down to some rather delicate numerology.

…

I’ve been meaning to return to fluids for some time now, in order to build upon my construction two years ago of a solution to an averaged Navier-Stokes equation that exhibited finite time blowup. (I recently spoke on this work at a conference in Princeton in honour of Sergiu Klainerman; my slides for that talk are here.)

One of the biggest deficiencies with my previous result is the fact that the averaged Navier-Stokes equation does not enjoy any good equation for the vorticity $\omega = \nabla \times u$, in contrast to the true Navier-Stokes equations which, when written in vorticity-stream formulation, become

$\displaystyle \partial_t \omega + (u \cdot \nabla) \omega = (\omega \cdot \nabla) u + \Delta \omega$

$\displaystyle u = (-\Delta)^{-1} (\nabla \times \omega).$

(Throughout this post we will be working in three spatial dimensions ${\bf R}^3$.) So one of my main near-term goals in this area is to exhibit an equation resembling Navier-Stokes as much as possible which enjoys a vorticity equation, and for which there is finite time blowup.

Heuristically, this task should be easier for the Euler equations (i.e. the zero viscosity case of Navier-Stokes) than the viscous Navier-Stokes equation, as one expects the viscosity to only make it easier for the solution to stay regular. Indeed, morally speaking, the assertion that finite time blowup solutions of Navier-Stokes exist should be roughly equivalent to the assertion that finite time blowup solutions of Euler exist which are “Type I” in the sense that all Navier-Stokes-critical and Navier-Stokes-subcritical norms of this solution go to infinity (which, as explained in the above slides, heuristically means that the effects of viscosity are negligible when compared against the nonlinear components of the equation). In vorticity-stream formulation, the Euler equations can be written as

$\displaystyle \partial_t \omega + (u \cdot \nabla) \omega = (\omega \cdot \nabla) u$

$\displaystyle u = (-\Delta)^{-1} (\nabla \times \omega).$

As discussed in this previous blog post, a natural generalisation of this system of equations is the system

$\displaystyle \partial_t \omega + (u \cdot \nabla) \omega = (\omega \cdot \nabla) u \ \ \ \ \ (1)$

$\displaystyle u = T (-\Delta)^{-1} (\nabla \times \omega)$

where $T$ is a linear operator on divergence-free vector fields that is “zeroth order” in some sense; ideally it should also be invertible, self-adjoint, and positive definite (in order to have a Hamiltonian that is comparable to the kinetic energy $\frac{1}{2} \int_{{\bf R}^3} |u|^2$). (In the previous blog post, it was observed that the surface quasi-geostrophic (SQG) equation could be embedded in a system of the form (1).) The system (1) has many features in common with the Euler equations; for instance vortex lines are transported by the velocity field $u$, and Kelvin’s circulation theorem is still valid.

So far, I have not been able to fully achieve this goal. However, I have the following partial result, stated somewhat informally:

Theorem 1. There is a “zeroth order” linear operator $T$ (which, unfortunately, is not invertible, self-adjoint, or positive definite) for which the system (1) exhibits smooth solutions that blow up in finite time.

The operator $T$ constructed is not quite a zeroth-order pseudodifferential operator; it is instead merely in the “forbidden” symbol class $S^0_{1,1}$, and more precisely it takes the form

for some compactly supported divergence-free $\eta$ of mean zero with

being rescalings of $\eta$. This operator $T$ is still bounded on suitable function spaces, and so is arguably still a zeroth order operator, though not as convincingly as I would like. Another, less significant, issue with the result is that the solution constructed does not have good spatial decay properties, but this is mostly for convenience and it is likely that the construction can be localised to give solutions that have reasonable decay in space. But the biggest drawback of this theorem is the fact that $T$ is not invertible, self-adjoint, or positive definite, so in particular there is no non-negative Hamiltonian for this equation. It may be that some modification of the arguments below can fix these issues, but I have so far been unable to do so. Still, the construction does show that the circulation theorem is insufficient by itself to prevent blowup.

We sketch the proof of the above theorem as follows. We use the barrier method, introducing the time-varying hyperboloid domains

for (expressed in cylindrical coordinates ). We will select initial data to be for some non-negative even bump function supported on , normalised so that

in particular is divergence-free supported in , with vortex lines connecting to . Suppose for contradiction that we have a smooth solution to (1) with this initial data; to simplify the discussion we assume that the solution behaves well at spatial infinity (this can be justified with the choice (2) of vorticity-stream operator, but we will not do so here). Since the domains disconnect from at time , there must exist a time which is the first time where the support of touches the boundary of , with supported in .

From (1) we see that the support of is transported by the velocity field . Thus, at the point of contact of the support of with the boundary of , the inward component of the velocity field cannot exceed the inward velocity of . We will construct the functions so that this is not the case, leading to the desired contradiction. (Geometrically, what is going on here is that the operator is pinching the flow to pass through the narrow cylinder , leading to a singularity by time at the latest.)

First we observe from conservation of circulation, and from the fact that is supported in , that the integrals

are constant in both space and time for . From the choice of initial data we thus have

for all and all . On the other hand, if is of the form (2) with for some bump function that only has -components, then is divergence-free with mean zero, and

where . We choose to be supported in the slab for some large constant , and to equal a function depending only on on the cylinder , normalised so that . If , then passes through this cylinder, and we conclude that

Inserting this into (2), (1) we conclude that

for some coefficients . We will not be able to control these coefficients , but fortunately we only need to understand on the boundary , for which . So, if happens to be supported on an annulus , then vanishes on if is large enough. We then have

on the boundary of .

Let be a function of the form

where is a bump function supported on that equals on . We can perform a dyadic decomposition where

where is a bump function supported on with . If we then set

then one can check that for a function that is divergence-free and mean zero, and supported on the annulus , and

so on (where ) we have

One can manually check that the inward velocity of this vector on exceeds the inward velocity of if is large enough, and the claim follows.

Remark 2. The type of blowup suggested by this construction, where a unit amount of circulation is squeezed into a narrow cylinder, is of “Type II” with respect to the Navier-Stokes scaling, because Navier-Stokes-critical norms such as $L^3_x$ (or at least the weak norm $L^{3,\infty}_x$) look like they stay bounded during this squeezing procedure (the velocity field is of size about $1/r$ in cylinders of radius and length about $r$). So even if the various issues with $T$ are repaired, it does not seem likely that this construction can be directly adapted to obtain a corresponding blowup for a Navier-Stokes type equation. To get a “Type I” blowup that is consistent with Kelvin’s circulation theorem, it seems that one needs to coil the vortex lines around a loop multiple times in order to get increased circulation in a small space. This seems to me to be possible to pull off – there don’t appear to be any unavoidable obstructions coming from topology, scaling, or conservation laws – but would require a more complicated construction than the one given above.
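To see the criticality claim in this remark concretely, here is the back-of-the-envelope dimensional count (my own, using the standard scaling exponents): for a velocity field of size about $1/r$ on a cylinder $C_r$ of radius and length about $r$,

```latex
\int_{C_r} |u|^3\, dx \;\sim\; \underbrace{r \cdot r^2}_{\mathrm{vol}(C_r)} \cdot \underbrace{r^{-3}}_{|u|^3} \;\sim\; 1,
```

so the critical $L^3_x$ norm of the velocity remains bounded as the cylinder radius $r$ shrinks to zero, which is the hallmark of Type II (rather than Type I) behaviour with respect to the Navier-Stokes scaling.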

The Poincaré upper half-plane ${\mathbf H} := \{ z: \hbox{Im}(z) > 0 \}$ (with a boundary consisting of the real line ${\bf R}$ together with the point at infinity $\infty$) carries an action of the projective special linear group

$\displaystyle PSL_2({\bf R}) := SL_2({\bf R}) / \{ \pm I \}$

via fractional linear transformations:

$\displaystyle \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot z := \frac{az+b}{cz+d}. \ \ \ \ \ (1)$

Here and in the rest of the post we will abuse notation by identifying elements $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ of the special linear group $SL_2({\bf R})$ with their equivalence class in $PSL_2({\bf R})$; this will occasionally create or remove a factor of two in our formulae, but otherwise has very little effect, though one has to check that various definitions and expressions (such as (1)) are unaffected if one replaces a matrix $\gamma$ by its negation $-\gamma$. In particular, we recommend that the reader ignore the $\pm$ signs that appear from time to time in the discussion below.

As the action of $PSL_2({\bf R})$ on ${\mathbf H}$ is transitive, and any given point in ${\mathbf H}$ (e.g. the point $i$) has a stabiliser isomorphic to the projective rotation group $PSO_2({\bf R})$, we can view the Poincaré upper half-plane ${\mathbf H}$ as a homogeneous space for $PSL_2({\bf R})$, and more specifically the quotient space of $PSL_2({\bf R})$ by a maximal compact subgroup $PSO_2({\bf R})$. In fact, we can make the half-plane a symmetric space for $PSL_2({\bf R})$, by endowing ${\mathbf H}$ with the Riemannian metric

$\displaystyle dg^2 := \frac{dx^2 + dy^2}{y^2}$

(using Cartesian coordinates $z = x+iy$), which is invariant with respect to the $PSL_2({\bf R})$ action. Like any other Riemannian metric, the metric on ${\mathbf H}$ generates a number of other important geometric objects on ${\mathbf H}$, such as the distance function $d(z,w)$ which can be computed to be given by the formula

$\displaystyle \cosh( d(z,w) ) = 1 + \frac{|z-w|^2}{2 \hbox{Im}(z) \hbox{Im}(w)}, \ \ \ \ \ (2)$

the volume measure $\mu = \mu_{\mathbf H}$, which can be computed to be

$\displaystyle d\mu = \frac{dx\, dy}{y^2},$

and the Laplace-Beltrami operator, which can be computed to be $\Delta = y^2 (\partial_{xx} + \partial_{yy})$ (here we use the negative definite sign convention for $\Delta$). As the metric was $PSL_2({\bf R})$-invariant, all of these quantities arising from the metric are similarly $PSL_2({\bf R})$-invariant in the appropriate sense.

The Gauss curvature of the Poincaré half-plane can be computed to be the constant $-1$, thus ${\mathbf H}$ is a model for two-dimensional hyperbolic geometry, in much the same way that the unit sphere $S^2$ in ${\bf R}^3$ is a model for two-dimensional spherical geometry (or the plane ${\bf R}^2$ is a model for two-dimensional Euclidean geometry). (Indeed, ${\mathbf H}$ is isomorphic (via projection to a null hyperplane) to the upper unit hyperboloid in the Minkowski spacetime ${\bf R}^{1+2}$, which is the direct analogue of the unit sphere in Euclidean spacetime ${\bf R}^3$ or the plane in Galilean spacetime ${\bf R} \times {\bf R}^2$.)
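As a quick numerical illustration of the invariance of this geometry (my own check, using the standard distance formula $\cosh d(z,w) = 1 + |z-w|^2 / (2\, \hbox{Im}(z) \hbox{Im}(w))$ for the upper half-plane):

```python
# Check that the hyperbolic distance on the upper half-plane is invariant
# under the fractional linear action z -> (az+b)/(cz+d) of SL_2(R).
def cosh_dist(z, w):
    return 1 + abs(z - w)**2 / (2 * z.imag * w.imag)

def act(g, z):
    a, b, c, d = g
    return (a * z + b) / (c * z + d)

z, w = 1.3 + 0.7j, -0.4 + 2.2j
# a few determinant-one matrices (a, b, c, d), chosen arbitrarily
for g in [(1, 1, 0, 1), (0, -1, 1, 0), (2, 0, 0, 0.5), (3, 1, 5, 2)]:
    assert abs(g[0] * g[3] - g[1] * g[2] - 1) < 1e-12   # det = 1
    assert abs(cosh_dist(act(g, z), act(g, w)) - cosh_dist(z, w)) < 1e-9
print("hyperbolic distance is PSL_2(R)-invariant on these samples")
```

The same check works for any base points in the upper half-plane.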

One can inject arithmetic into this geometric structure by passing from the Lie group $PSL_2({\bf R})$ to the full modular group

$\displaystyle PSL_2({\bf Z}) := SL_2({\bf Z}) / \{ \pm I \},$

or congruence subgroups such as

$\displaystyle \Gamma_0(q) := \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in PSL_2({\bf Z}): c \equiv 0 \pmod{q} \right\}$

for a natural number $q$, or to the discrete stabiliser of the point at infinity:

$\displaystyle \Gamma_\infty := \left\{ \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}: n \in {\bf Z} \right\}.$

These are discrete subgroups of $PSL_2({\bf R})$, nested by the subgroup inclusions

$\displaystyle \Gamma_\infty \leq \Gamma_0(q) \leq PSL_2({\bf Z}) \leq PSL_2({\bf R}).$

There are many further discrete subgroups of $PSL_2({\bf R})$ (known collectively as Fuchsian groups) that one could consider, but we will focus attention on these three groups in this post.

Any discrete subgroup $\Gamma$ of $PSL_2({\bf R})$ generates a quotient space $\Gamma \backslash {\mathbf H}$, which in general will be a non-compact two-dimensional orbifold. One can understand such a quotient space by working with a fundamental domain – a set $F \subset {\mathbf H}$ consisting of a single representative of each of the orbits $\Gamma z$ of $\Gamma$ in ${\mathbf H}$. This fundamental domain is by no means uniquely defined, but if the fundamental domain is chosen with some reasonable amount of regularity, one can view $\Gamma \backslash {\mathbf H}$ as the fundamental domain with the boundaries glued together in an appropriate sense. Among other things, fundamental domains can be used to induce a volume measure $\mu_{\Gamma \backslash {\mathbf H}}$ on $\Gamma \backslash {\mathbf H}$ from the volume measure $\mu$ on ${\mathbf H}$ (restricted to a fundamental domain). By abuse of notation we will refer to both measures simply as $\mu$ when there is no chance of confusion.

For instance, a fundamental domain for $\Gamma_\infty$ is given (up to null sets) by the strip $\{ z \in {\mathbf H}: 0 \leq \hbox{Re}(z) < 1 \}$, with $\Gamma_\infty \backslash {\mathbf H}$ identifiable with the cylinder formed by gluing together the two sides of the strip. A fundamental domain for $PSL_2({\bf Z})$ is famously given (again up to null sets) by an upper portion $\{ z: |z| \geq 1; -\frac{1}{2} \leq \hbox{Re}(z) < \frac{1}{2} \}$ of such a strip, with the left and right sides again glued to each other, and the left and right halves of the circular boundary glued to itself. A fundamental domain for $\Gamma_0(q)$ can be formed by gluing together

$\displaystyle [PSL_2({\bf Z}) : \Gamma_0(q)] = q \prod_{p | q} \left( 1 + \frac{1}{p} \right)$

copies of a fundamental domain for $PSL_2({\bf Z})$ in a rather complicated but interesting fashion.

While fundamental domains can be a convenient choice of coordinates to work with for some computations (as well as for drawing appropriate pictures), it is geometrically more natural to avoid working explicitly on such domains, and instead work directly on the quotient spaces $\Gamma \backslash {\mathbf H}$. In order to analyse functions on such orbifolds, it is convenient to lift such functions back up to ${\mathbf H}$ and identify them with functions $f: {\mathbf H} \rightarrow {\bf C}$ which are *$\Gamma$-automorphic* in the sense that $f(\gamma z) = f(z)$ for all $z \in {\mathbf H}$ and $\gamma \in \Gamma$. Such functions will be referred to as $\Gamma$-automorphic forms, or *automorphic forms* for short (we always implicitly assume all such functions to be measurable). (Strictly speaking, these are the automorphic forms with trivial factor of automorphy; one can certainly consider other factors of automorphy, particularly when working with holomorphic modular forms, which corresponds to sections of a more non-trivial line bundle over $\Gamma \backslash {\mathbf H}$ than the trivial bundle that is implicitly present when analysing scalar functions $f: {\mathbf H} \rightarrow {\bf C}$. However, we will not discuss this (important) more general situation here.)

An important way to create a $\Gamma$-automorphic form is to start with a non-automorphic function $\phi: {\mathbf H} \rightarrow {\bf C}$ obeying suitable decay conditions (e.g. bounded with compact support will suffice) and form the Poincaré series $P_\Gamma[\phi]$ defined by

$\displaystyle P_\Gamma[\phi](z) := \sum_{\gamma \in \Gamma} \phi(\gamma z),$

which is clearly $\Gamma$-automorphic. (One could equivalently write $\phi(\gamma^{-1} z)$ in place of $\phi(\gamma z)$ here; there are good arguments for both conventions, but I have ultimately decided to use the $\phi(\gamma z)$ convention, which makes explicit computations a little neater at the cost of making the group actions work in the opposite order.) Thus we naturally see sums over $\Gamma$ associated with $\Gamma$-automorphic forms. A little more generally, given a subgroup $\Gamma_1$ of $\Gamma$ and a $\Gamma_1$-automorphic function $\phi$ of suitable decay, we can form a relative Poincaré series $P_{\Gamma_1 \backslash \Gamma}[\phi]$ by

$\displaystyle P_{\Gamma_1 \backslash \Gamma}[\phi](z) := \sum_{\gamma \in F} \phi(\gamma z),$

where $F$ is any fundamental domain for $\Gamma_1 \backslash \Gamma$, that is to say a subset of $\Gamma$ consisting of exactly one representative of each right coset $\Gamma_1 \gamma$ of $\Gamma_1$ in $\Gamma$. As $\phi$ is $\Gamma_1$-automorphic, we see (if $\phi$ has suitable decay) that $P_{\Gamma_1 \backslash \Gamma}[\phi]$ does not depend on the precise choice of fundamental domain, and is $\Gamma$-automorphic. These operations are all compatible with each other, for instance $P_\Gamma = P_{\Gamma_1 \backslash \Gamma} \circ P_{\Gamma_1}$. A key example of Poincaré series are the Eisenstein series, although there are of course many other Poincaré series one can consider by varying the test function $\phi$.
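Since coset spaces such as $\Gamma_\infty \backslash PSL_2({\bf Z})$ will recur repeatedly below, here is a small numerical confirmation (not from the post) of the standard parametrisation of the right cosets of $\Gamma_\infty$ by the bottom row of the matrix:

```python
from math import gcd
from itertools import product

# Matrices (a b; c d) with det 1 and entries bounded by B, modulo the
# sign ambiguity of PSL_2; B is an arbitrary small cutoff for illustration.
B = 5
mats = [m for m in product(range(-B, B + 1), repeat=4)
        if m[0] * m[3] - m[1] * m[2] == 1]

def bottom_row(m):
    a, b, c, d = m
    # left multiplication by (1 n; 0 1) changes only the top row, so the
    # coset Gamma_infty.g is labelled by the bottom row, up to overall sign
    return max((c, d), (-c, -d))

labels = {bottom_row(m) for m in mats}

# every label is a coprime pair, and every coprime pair in range occurs
assert all(gcd(c, d) == 1 for c, d in labels)
assert labels == {max((c, d), (-c, -d))
                  for c in range(-B, B + 1) for d in range(-B, B + 1)
                  if gcd(c, d) == 1}
print("cosets of Gamma_infty <-> coprime bottom rows, verified for |entries| <= 5")
```

This is the parametrisation that makes Eisenstein-type series into sums over coprime pairs.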

For future reference we record the basic but fundamental *unfolding identities*

$\displaystyle \int_{\Gamma \backslash {\mathbf H}} P_\Gamma[\phi] \psi\ d\mu = \int_{\mathbf H} \phi \psi\ d\mu \ \ \ \ \ (5)$

for any function $\phi: {\mathbf H} \rightarrow {\bf C}$ with sufficient decay, and any $\Gamma$-automorphic function $\psi$ of reasonable growth (e.g. $\phi$ bounded and compactly supported, and $\psi$ bounded, will suffice). Note that $\psi$ is viewed as a function on $\Gamma \backslash {\mathbf H}$ on the left-hand side, and as a $\Gamma$-automorphic function on ${\mathbf H}$ on the right-hand side. More generally, one has

$\displaystyle \int_{\Gamma \backslash {\mathbf H}} P_{\Gamma_1 \backslash \Gamma}[\phi] \psi\ d\mu = \int_{\Gamma_1 \backslash {\mathbf H}} \phi \psi\ d\mu \ \ \ \ \ (6)$

whenever $\Gamma_1 \leq \Gamma$ are discrete subgroups of $PSL_2({\bf R})$, $\phi$ is a $\Gamma_1$-automorphic function with sufficient decay on $\Gamma_1 \backslash {\mathbf H}$, and $\psi$ is a $\Gamma$-automorphic (and thus also $\Gamma_1$-automorphic) function of reasonable growth. These identities will allow us to move fairly freely between the three domains ${\mathbf H}$, $\Gamma_\infty \backslash {\mathbf H}$, and $PSL_2({\bf Z}) \backslash {\mathbf H}$ in our analysis.

When computing various statistics of a Poincaré series $P_\Gamma[\phi]$, such as its values $P_\Gamma[\phi](z)$ at special points $z$, or the $L^2$ quantity $\int_{\Gamma \backslash {\mathbf H}} |P_\Gamma[\phi]|^2\ d\mu$, expressions of interest to analytic number theory naturally emerge. We list three basic examples of this below, discussed somewhat informally in order to highlight the main ideas rather than the technical details.

The first example we will give concerns the problem of estimating the sum

$\displaystyle \sum_{n \leq x} \tau(n) \tau(n+1) \ \ \ \ \ (7)$

where $\tau(n) := \sum_{d|n} 1$ is the divisor function. This can be rewritten (by factoring $n+1$ and $n$ as $ab$ and $cd$ respectively) as

$\displaystyle \sum_{a,b,c,d \in {\bf N}: ab - cd = 1; cd \leq x} 1 \ \ \ \ \ (8)$

which is basically a sum over the full modular group $SL_2({\bf Z})$. At this point we will “cheat” a little by moving to the related, but different, sum

$\displaystyle \sum_{a,b,c,d \in {\bf Z}: ad - bc = 1; a^2+b^2+c^2+d^2 \leq x} 1. \ \ \ \ \ (9)$

This sum is not exactly the same as (8), but will be a little easier to handle, and it is plausible that the methods used to handle this sum can be modified to handle (8). Observe from (2) and some calculation that the distance between $i$ and $\gamma i$, where $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, is given by the formula

$\displaystyle \cosh( d(i, \gamma i) ) = \frac{a^2+b^2+c^2+d^2}{2},$

and so one can express the above sum as

$\displaystyle 2 \sum_{\gamma \in PSL_2({\bf Z})} 1_{d(i, \gamma i) \leq R}$

(the factor of $2$ coming from the quotient by $\pm 1$ in the projective special linear group); one can express this as $2 P_{PSL_2({\bf Z})}[\phi](i)$, where $R := \cosh^{-1}(x/2)$ and $\phi = 1_{B(i,R)}$ is the indicator function of the ball $B(i,R)$. Thus we see that expressions such as (7) are related to evaluations of Poincaré series. (In practice, it is much better to use smoothed out versions of indicator functions in order to obtain good control on sums such as (7) or (9), but we gloss over this technical detail here.)
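The distance computation here can be double-checked numerically; the following snippet (my own check) verifies the identity $\cosh d(i, \gamma i) = \frac{a^2+b^2+c^2+d^2}{2}$ on a few elements of $SL_2({\bf Z})$, using the standard distance formula on the upper half-plane:

```python
# cosh d(z, w) = 1 + |z-w|^2 / (2 Im z Im w) on the upper half-plane
def cosh_dist(z, w):
    return 1 + abs(z - w)**2 / (2 * z.imag * w.imag)

i = 1j
# a few integer matrices (a b; c d) with ad - bc = 1
for a, b, c, d in [(1, 0, 0, 1), (1, 1, 0, 1), (0, -1, 1, 0),
                   (2, 1, 1, 1), (5, 2, 2, 1), (7, 3, 2, 1)]:
    assert a * d - b * c == 1
    gi = (a * i + b) / (c * i + d)           # the point gamma i
    lhs = cosh_dist(i, gi)
    rhs = (a * a + b * b + c * c + d * d) / 2
    assert abs(lhs - rhs) < 1e-9
print("cosh d(i, gamma i) = (a^2+b^2+c^2+d^2)/2 verified")
```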

The second example concerns the relative

$\displaystyle \sum_{n \leq x} \tau(n^2 + 1) \ \ \ \ \ (10)$

of the sum (7). Note from multiplicativity that (7) can be written as $\sum_{n \leq x} \tau(n^2+n)$, which is superficially very similar to (10), but with the key difference that the polynomial $n^2+1$ is irreducible over the integers.

As with (7), we may expand (10) as

At first glance this does not look like a sum over a modular group, but one can manipulate this expression into such a form in one of two (closely related) ways. First, observe that any factorisation of $n+i$ into Gaussian integers $n+i = (a+bi)(c+di)$ gives rise (upon taking norms) to an identity of the form $n^2+1 = (a^2+b^2)(c^2+d^2)$, where $n = ac - bd$ and $1 = ad + bc$. Conversely, by using the unique factorisation of the Gaussian integers, every identity of the form $n^2+1 = (a^2+b^2)(c^2+d^2)$ gives rise to a factorisation of the form $n + i = (a+bi)(c+di)$, essentially uniquely up to units. Now note that $(a+bi)(c+di)$ is of the form $n+i$ if and only if $ad+bc = 1$, in which case $n = ac - bd$. Thus we can essentially write the above sum as something like

$\displaystyle \sum_{a,b,c,d \in {\bf Z}: ad + bc = 1; 0 \leq ac - bd \leq x} 1 \ \ \ \ \ (11)$

and now the modular group is manifest. An equivalent way to see these manipulations is as follows. A triple $(D_1, D_2, n)$ of natural numbers with $D_1 D_2 = n^2+1$ gives rise to a positive definite quadratic form $D_1 t^2 + 2 n t s + D_2 s^2$ of normalised discriminant $n^2 - D_1 D_2$ equal to $-1$, with integer coefficients (it is natural here to allow $n$ to take integer values rather than just natural number values by essentially doubling the sum). The group $SL_2({\bf Z})$ acts on the space of such quadratic forms in a natural fashion (by composing the quadratic form with the inverse of an element of $SL_2({\bf Z})$). Because the discriminant $-1$ has class number one (this fact is equivalent to the unique factorisation of the Gaussian integers, as discussed in this previous post), every form in this space is equivalent (under the action of some element of $SL_2({\bf Z})$) to the standard quadratic form $t^2 + s^2$. In other words, one has

which (up to a harmless sign) is exactly the representation $n = ac-bd$, $1 = ad+bc$ introduced earlier, and leads to the same reformulation of the sum (10) in terms of expressions like (11). Similar considerations also apply if the quadratic polynomial $n^2+1$ is replaced by another quadratic, although one has to account for the fact that the class number may now exceed one (so that unique factorisation in the associated quadratic ring of integers breaks down), and in the positive discriminant case the fact that the group of units might be infinite presents another significant technical problem.
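The class-number-one phenomenon used here has a classical numerical shadow, Jacobi's two-square theorem; the following quick check (illustrative, not from the post) verifies it for small $n$:

```python
# r2(n): number of representations n = a^2 + b^2 with (a, b) in Z^2
def r2(n):
    s = int(n ** 0.5) + 1
    return sum(1 for a in range(-s, s + 1) for b in range(-s, s + 1)
               if a * a + b * b == n)

# Jacobi's formula (a consequence of unique factorisation in Z[i]):
# r2(n) = 4 * (d_1(n) - d_3(n)), where d_j(n) counts divisors of n
# that are congruent to j mod 4
def jacobi(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 4 * (sum(1 for d in divs if d % 4 == 1)
                - sum(1 for d in divs if d % 4 == 3))

assert all(r2(n) == jacobi(n) for n in range(1, 200))
print("Jacobi two-square formula verified for n < 200")
```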

Note that $\gamma i$ has real part $\frac{ac+bd}{c^2+d^2}$ and imaginary part $\frac{1}{c^2+d^2}$. Thus (11) is (up to a factor of two) the Poincaré series $P_{PSL_2({\bf Z})}[\phi](i)$ as in the preceding example, except that $\phi$ is now the indicator of the sector $\{ z: 0 \leq \hbox{Re}(z) \leq x\, \hbox{Im}(z) \}$.

Sums involving subgroups of the full modular group, such as $\Gamma_0(q)$, often arise when imposing congruence conditions on sums such as (10), for instance when trying to estimate such expressions when the modulus $q$ and the parameter $x$ are large. As before, one then soon arrives at the problem of evaluating a Poincaré series at one or more special points, where the series is now over $\Gamma_0(q)$ rather than $PSL_2({\bf Z})$.

The third and final example concerns averages of Kloosterman sums

$\displaystyle S(m,n;c) := \sum_{x \in ({\bf Z}/c{\bf Z})^\times} e\left( \frac{m x + n \overline{x}}{c} \right) \ \ \ \ \ (12)$

where $e(\theta) := e^{2\pi i \theta}$, and $\overline{x}$ is the inverse of $x$ in the multiplicative group $({\bf Z}/c{\bf Z})^\times$. It turns out that the norms of Poincaré series such as $P_{\Gamma_\infty \backslash PSL_2({\bf Z})}[\phi]$ or $P_{\Gamma_\infty \backslash \Gamma_0(q)}[\phi]$ are closely tied to such averages. Consider for instance the quantity

where $q$ is a natural number and $\phi$ is a $\Gamma_\infty$-automorphic form that is of the form

for some integer $m$ and some test function $\psi$, which for sake of discussion we will take to be smooth and compactly supported. Using the unfolding formula (6), we may rewrite (13) as

To compute this, we use the double coset decomposition

$\displaystyle \Gamma_0(q) = \Gamma_\infty \cup \bigcup_{c \geq 1: q | c}\ \bigcup_{1 \leq d \leq c: (d,c) = 1} \Gamma_\infty \begin{pmatrix} a & b \\ c & d \end{pmatrix} \Gamma_\infty,$

where for each such $c$ and $d$, $a, b$ are arbitrarily chosen integers such that $ad - bc = 1$. To see this decomposition, observe that every element in $\Gamma_0(q)$ outside of $\Gamma_\infty$ can be assumed to have $c > 0$ by applying a sign $-1$, and then using the row and column operations coming from left and right multiplication by $\Gamma_\infty$ (that is, shifting the top row by an integer multiple of the bottom row, and shifting the right column by an integer multiple of the left column) one can place $d$ in the interval $[1,c]$ and $(a,b)$ to be any specified integer pair with $ad - bc = 1$. From this we see that

and so from further use of the unfolding formula (5) we may expand (13) as

The first integral is just . The second expression is more interesting. We have

so we can write

as

which on shifting by simplifies a little to

and then on scaling by simplifies a little further to

Note that as , we have modulo . Comparing the above calculations with (12), we can thus write (13) as

is a certain integral involving $\psi$ and a parameter, but which does not depend explicitly on the other parameters. Thus we have indeed expressed the expression (13) in terms of Kloosterman sums. It is possible to invert this analysis and express various weighted sums of Kloosterman sums in terms of expressions (possibly involving inner products instead of norms) of Poincaré series, but we will not do so here; see Chapter 16 of Iwaniec and Kowalski for further details.
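For concreteness, here is a direct computation of some Kloosterman sums together with a check of the Weil bound (an illustrative snippet of my own; the parameters $m=1$, $n=2$ are arbitrary choices):

```python
import cmath
from math import gcd

# Kloosterman sum S(m, n; c) = sum over invertible x mod c of
# e((m x + n x*)/c), where x* is the inverse of x mod c and e(t) = exp(2 pi i t)
def kloosterman(m, n, c):
    return sum(cmath.exp(2j * cmath.pi * (m * x + n * pow(x, -1, c)) / c)
               for x in range(1, c) if gcd(x, c) == 1)

# S(m, n; c) is real, since x -> -x pairs each term with its conjugate;
# for prime c not dividing m*n, the Weil bound gives |S(m, n; c)| <= 2 sqrt(c)
for c in (5, 7, 11, 101):
    s = kloosterman(1, 2, c)
    assert abs(s.imag) < 1e-9
    assert abs(s) <= 2 * c ** 0.5 + 1e-9
print("Weil bound checked for c = 5, 7, 11, 101")
```

(The `pow(x, -1, c)` modular inverse requires Python 3.8 or later.)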

Traditionally, automorphic forms have been analysed using the spectral theory of the Laplace-Beltrami operator $\Delta$ on spaces such as $PSL_2({\bf Z}) \backslash {\mathbf H}$ or $\Gamma_0(q) \backslash {\mathbf H}$, so that a Poincaré series might be expanded out using inner products of $P_\Gamma[\phi]$ (or, by the unfolding identities, $\phi$) with various generalised eigenfunctions of $\Delta$ (such as cuspidal eigenforms, or Eisenstein series). With this approach, special functions, and specifically the modified Bessel functions $K_{s-1/2}$ of the second kind, play a prominent role, basically because the $\Gamma_\infty$-automorphic functions

$\displaystyle x + iy \mapsto y^{1/2} K_{s-1/2}(2\pi |n| y) e(nx)$

for $s \in {\bf C}$ and non-zero integers $n$ are generalised eigenfunctions of $\Delta$ (with eigenvalue $s(s-1)$), and are almost square-integrable on $\Gamma_\infty \backslash {\mathbf H}$ (the $L^2$ norm diverges only logarithmically at one end $y \rightarrow 0$ of the cylinder $\Gamma_\infty \backslash {\mathbf H}$, while decaying exponentially fast at the other end $y \rightarrow \infty$).

However, as discussed in this previous post, the spectral theory of an essentially self-adjoint operator such as $\Delta$ is basically equivalent to the theory of various solution operators associated to partial differential equations involving that operator, such as the Helmholtz equation $(\Delta - \lambda) u = f$, the heat equation $\partial_t u = \Delta u$, the Schrödinger equation $i \partial_t u + \Delta u = 0$, or the wave equation $\partial_{tt} u = \Delta u$. Thus, one can hope to rephrase many arguments that involve spectral data of $\Delta$ into arguments that instead involve resolvents $(\Delta - \lambda)^{-1}$, heat kernels $e^{t\Delta}$, Schrödinger propagators $e^{it\Delta}$, or wave propagators $e^{\pm it\sqrt{-\Delta}}$, or involve the PDE more directly (e.g. applying integration by parts and energy methods to solutions of such PDE). This is certainly done to some extent in the existing literature; resolvents and heat kernels, for instance, are often utilised. In this post, I would like to explore the possibility of reformulating spectral arguments instead using the inhomogeneous wave equation

$\displaystyle \partial_{tt} u - \Delta u = F.$

Actually it will be a bit more convenient to normalise the Laplacian by $\frac{1}{4}$, and look instead at the *automorphic wave equation*

$\displaystyle \partial_{tt} u - \left( \Delta + \frac{1}{4} \right) u = F. \ \ \ \ \ (15)$

This equation somewhat resembles a “Klein-Gordon” type equation, except that the mass is imaginary! This would lead to pathological behaviour were it not for the negative curvature, which in principle creates a spectral gap of $\frac{1}{4}$ that cancels out this factor.

The point is that the wave equation approach gives access to some nice PDE techniques, such as energy methods, Sobolev inequalities and finite speed of propagation, which are somewhat submerged in the spectral framework. The wave equation also interacts well with Poincaré series; if for instance $u$ and $F$ are automorphic solutions to (15) obeying suitable decay conditions, then their Poincaré series will be automorphic solutions to the same equation (15), basically because the Laplace-Beltrami operator commutes with translations. Because of these facts, it is possible to replicate several standard spectral theory arguments in the wave equation framework, without having to deal directly with things like the asymptotics of modified Bessel functions. The wave equation approach to automorphic theory was introduced by Faddeev and Pavlov (using the Lax-Phillips scattering theory), and developed further by Lax and Phillips, to recover many spectral facts about the Laplacian on modular curves, such as the Weyl law and the Selberg trace formula. Here, I will illustrate this by deriving three basic applications of automorphic methods in a wave equation framework, namely

- Using the Weil bound on Kloosterman sums to derive Selberg’s 3/16 theorem on the least non-trivial eigenvalue for on (discussed previously here);
- Conversely, showing that Selberg’s eigenvalue conjecture (improving Selberg’s bound to the optimal ) implies an optimal bound on (smoothed) sums of Kloosterman sums; and
- Using the same bound to obtain pointwise bounds on Poincaré series similar to the ones discussed above. (Actually, the argument here does not use the wave equation; instead it just uses the Sobolev inequality.)
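As a concrete anchor for the first bullet point, one can compute Kloosterman sums directly and spot-check the Weil bound numerically; here is a minimal Python sketch (the function name is mine, and I take the bound in its usual form |S(m,n;p)| ≤ 2√p for primes p not dividing mn):

```python
import cmath, math

def kloosterman(m, n, c):
    """Kloosterman sum S(m, n; c): sum over x coprime to c of e((m x + n x^{-1})/c)."""
    total = 0
    for x in range(1, c):
        if math.gcd(x, c) == 1:
            xinv = pow(x, -1, c)  # modular inverse of x mod c
            total += cmath.exp(2j * math.pi * (m * x + n * xinv) / c)
    return total

# Weil bound check for some primes p not dividing mn
for p in [7, 11, 101, 1009]:
    S = kloosterman(1, 1, p)
    assert abs(S) <= 2 * math.sqrt(p) + 1e-9
    assert abs(S.imag) < 1e-9  # pairing x with -x shows S(m, n; c) is real
```

(The brute-force loop is only feasible for small moduli, but it is enough to see the square-root cancellation that the Weil bound encodes.)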

This post originated from an attempt to finally learn this part of analytic number theory properly, and to see if I could use a PDE-based perspective to understand it better. Ultimately, this is not that dramatic a departure from the standard approach to this subject, but I found it useful to think of things in this fashion, probably due to my existing background in PDE.

I thank Bill Duke and Ben Green for helpful discussions. My primary reference for this theory was Chapters 15, 16, and 21 of Iwaniec and Kowalski.

The Euler equations for three-dimensional incompressible inviscid fluid flow are

where is the velocity field, and is the pressure field. For the purposes of this post, we will ignore all issues of decay or regularity of the fields in question, assuming that they are as smooth and rapidly decreasing as needed to justify all the formal calculations here; in particular, we will apply inverse operators such as or formally, assuming that these inverses are well defined on the functions they are applied to.

Meanwhile, the surface quasi-geostrophic (SQG) equation is given by

where is the active scalar, and is the velocity field. The SQG equations are often used as a toy model for the 3D Euler equations, as they share many of the same features (e.g. vortex stretching); see this paper of Constantin, Majda, and Tabak for more discussion (or this previous blog post).

I recently found a more direct way to connect the two equations. We first recall that the Euler equations can be placed in *vorticity-stream* form by focusing on the vorticity . Indeed, taking the curl of (1), we obtain the vorticity equation

while the velocity can be recovered from the vorticity via the Biot-Savart law

The system (4), (5) has some features in common with the system (2), (3); in (2) it is a scalar field that is being transported by a divergence-free vector field , which is a linear function of the scalar field as per (3), whereas in (4) it is a vector field that is being transported (in the Lie derivative sense) by a divergence-free vector field , which is a linear function of the vector field as per (5). However, the system (4), (5) is in three dimensions whilst (2), (3) is in two spatial dimensions, the dynamical field is a scalar field for SQG and a vector field for Euler, and the relationship between the velocity field and the dynamical field is given by a zeroth order Fourier multiplier in (3) and a order operator in (5).

However, we can make the two equations more closely resemble each other as follows. We first consider the generalisation

where is an invertible, self-adjoint, positive-definite zeroth order Fourier multiplier that maps divergence-free vector fields to divergence-free vector fields. The Euler equations then correspond to the case when is the identity operator. As discussed in this previous blog post (which used to denote the inverse of the operator denoted here as ), this generalised Euler system has many of the same features as the original Euler equation, such as a conserved Hamiltonian

the Kelvin circulation theorem, and conservation of helicity

Also, if we require to be divergence-free at time zero, it remains divergence-free at all later times.

Let us consider “two-and-a-half-dimensional” solutions to the system (6), (7), in which do not depend on the vertical coordinate , thus

and

but we allow the vertical components to be non-zero. For this to be consistent, we also require to commute with translations in the direction. As all derivatives in the direction now vanish, we can simplify (6) to

where is the two-dimensional material derivative

Also, the divergence-free nature of then becomes

In particular, we may (formally, at least) write

for some scalar field , so that (7) becomes

The first two components of (8) become

which rearranges using (9) to

Formally, we may integrate this system to obtain the transport equation

Finally, the last component of (8) is

At this point, we make the following choice for :

where is a real constant and is the Leray projection onto divergence-free vector fields. One can verify that for large enough , is a self-adjoint positive-definite zeroth order Fourier multiplier from divergence-free vector fields to divergence-free vector fields. With this choice, we see from (10) that

so that (12) simplifies to

This implies (formally at least) that if vanishes at time zero, then it vanishes for all time. Setting , we then have from (10) that

and from (11) we then recover the SQG system (2), (3). To put it another way, if and solve the SQG system, then by setting

then solve the modified Euler system (6), (7) with given by (13).

We have , so the Hamiltonian for the modified Euler system in this case is formally a scalar multiple of the conserved quantity . The momentum for the modified Euler system is formally a scalar multiple of the conserved quantity , while the vortex stream lines that are preserved by the modified Euler flow become the level sets of the active scalar that are preserved by the SQG flow. On the other hand, the helicity vanishes, and other conserved quantities for SQG (such as the Hamiltonian ) do not seem to correspond to conserved quantities of the modified Euler system. This is not terribly surprising; a low-dimensional flow may well have a richer family of conservation laws than the higher-dimensional system that it is embedded in.

The wave equation is usually expressed in the form

where is a function of both time and space , with being the Laplacian operator. One can generalise this equation in a number of ways, for instance by replacing the spatial domain with some other manifold and replacing the Laplacian with the Laplace-Beltrami operator or adding lower order terms (such as a potential, or a coupling with a magnetic field). But for sake of discussion let us work with the classical wave equation on . We will work formally in this post, being unconcerned with issues of convergence, justifying interchange of integrals, derivatives, or limits, etc.. One then has a conserved energy

which we can rewrite using integration by parts and the inner product on as

A key feature of the wave equation is *finite speed of propagation*: if, at time (say), the initial position and initial velocity are both supported in a ball , then at any later time , the position and velocity are supported in the larger ball . This can be seen for instance (formally, at least) by inspecting the exterior energy

and observing (after some integration by parts and differentiation under the integral sign) that it is non-increasing in time, non-negative, and vanishing at time .

The wave equation is second order in time, but one can turn it into a first order system by working with the pair rather than just the single field , where is the velocity field. The system is then

and the conserved energy is now

Finite speed of propagation then tells us that if are both supported on , then are supported on for all . One also has time reversal symmetry: if is a solution, then is a solution also, thus for instance one can establish an analogue of finite speed of propagation for negative times using this symmetry.

If one has an eigenfunction

of the Laplacian, then we have the explicit solutions

of the wave equation, which formally can be used to construct all other solutions via the principle of superposition.

When one has vanishing initial velocity , the solution is given via functional calculus by

and the propagator can be expressed as the average of half-wave operators:

One can view as a minor of the full wave propagator

which is unitary with respect to the energy form (1), and is the fundamental solution to the wave equation in the sense that

Viewing the contraction as a minor of a unitary operator is an instance of the “dilation trick”.
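These formulas are easy to test in a discretised setting. The following sketch (a hypothetical spectral discretisation on the circle, not a statement about the continuum problem) evolves data by the half-wave propagators and checks both the explicit eigenfunction solutions and conservation of the energy form:

```python
import numpy as np

N = 64
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)   # integer wavenumbers on the circle
omega = np.abs(k)

def evolve(u0, v0, t):
    """u(t) = cos(t sqrt(-Delta)) u0 + (sin(t sqrt(-Delta))/sqrt(-Delta)) v0, spectrally."""
    u_hat, v_hat = np.fft.fft(u0), np.fft.fft(v0)
    safe = np.where(omega > 0, omega, 1.0)
    sinc = np.where(omega > 0, np.sin(omega * t) / safe, t)  # sin(t w)/w, equal to t at w = 0
    u_t = np.cos(omega * t) * u_hat + sinc * v_hat
    v_t = -omega * np.sin(omega * t) * u_hat + np.cos(omega * t) * v_hat
    return np.real(np.fft.ifft(u_t)), np.real(np.fft.ifft(v_t))

def energy(u, v):
    """Energy density (1/2) average of v^2 + |u_x|^2."""
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return 0.5 * np.mean(v**2 + ux**2)

u0, v0 = np.sin(3 * x), np.zeros(N)
u, v = evolve(u0, v0, 0.7)
assert np.allclose(u, np.cos(2.1) * np.sin(3 * x))   # eigenfunction solution cos(3t) sin(3x)
for t in [0.0, 0.7, 2.0]:
    u, v = evolve(u0, v0, t)
    assert abs(energy(u, v) - energy(u0, v0)) < 1e-12  # energy form is conserved
```

(Finite speed of propagation is not visible in this periodic spectral toy, but the unitarity of the propagator with respect to the energy form is.)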

It turns out (as I learned from Yuval Peres) that there is a useful discrete analogue of the wave equation (and of all of the above facts), in which the time variable now lives on the integers rather than on , and the spatial domain can be replaced by discrete domains also (such as graphs). Formally, the system is now of the form

where is now an integer, take values in some Hilbert space (e.g. functions on a graph ), and is some operator on that Hilbert space (which in applications will usually be a self-adjoint contraction). To connect this with the classical wave equation, let us first consider a rescaling of this system

where is a small parameter (representing the discretised time step), now takes values in the integer multiples of , and is the wave propagator operator or the heat propagator (the two operators are different, but agree to fourth order in ). One can then formally verify that the wave equation emerges from this rescaled system in the limit . (Thus, is not exactly the direct analogue of the Laplacian , but can be viewed as something like in the case of small , or if we are not rescaling to the small case. The operator is sometimes known as the *diffusion operator*.)

Assuming is self-adjoint, solutions to the system (3) formally conserve the energy

This energy is positive semi-definite if is a contraction. We have the same time reversal symmetry as before: if solves the system (3), then so does . If one has an eigenfunction

to the operator , then one has an explicit solution

to (3), and (in principle at least) this generates all other solutions via the principle of superposition.

Finite speed of propagation is a lot easier in the discrete setting, though one has to offset the support of the “velocity” field by one unit. Suppose we know that has unit speed in the sense that whenever is supported in a ball , then is supported in the ball . Then an easy induction shows that if are supported in respectively, then are supported in .

The fundamental solution to the discretised wave equation (3), in the sense of (2), is given by the formula

where and are the Chebyshev polynomials of the first and second kind, thus

and

In particular, is now a minor of , and can also be viewed as an average of with its inverse :

As before, is unitary with respect to the energy form (4), so this is another instance of the dilation trick in action. The powers and are discrete analogues of the heat propagators and wave propagators respectively.
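The Chebyshev identities invoked here can be verified directly from the defining recurrence; a small Python sketch (function names are mine):

```python
import math

def cheb_T(n, x):
    """Chebyshev polynomials of the first kind via T_{n+1} = 2x T_n - T_{n-1}, T_0 = 1, T_1 = x."""
    t0, t1 = 1.0, x
    for _ in range(n):
        t0, t1 = t1, 2 * x * t1 - t0
    return t0

def cheb_U(n, x):
    """Second kind: same recurrence, with U_0 = 1, U_1 = 2x."""
    u0, u1 = 1.0, 2 * x
    for _ in range(n):
        u0, u1 = u1, 2 * x * u1 - u0
    return u0

# the trigonometric identities T_n(cos a) = cos(n a), U_n(cos a) = sin((n+1) a)/sin(a)
theta = 0.9
for n in range(8):
    assert abs(cheb_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-12
    assert abs(cheb_U(n, math.cos(theta)) - math.sin((n + 1) * theta) / math.sin(theta)) < 1e-12
```

In particular, on an eigenvector of with eigenvalue cos(a), the fundamental solution oscillates like cos(n a), the exact discrete analogue of the half-wave propagators above.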

One nice application of all this formalism, which I learned from Yuval Peres, is the Varopoulos-Carne inequality:

Theorem 1 (Varopoulos-Carne inequality). Let be a (possibly infinite) regular graph, let , and let be vertices in . Then the probability that the simple random walk at lands at at time is at most , where is the graph distance.

This general inequality is quite sharp, as one can see using the standard Cayley graph on the integers . Very roughly speaking, it asserts that on a regular graph of reasonably controlled growth (e.g. polynomial growth), random walks of length concentrate on the ball of radius or so centred at the origin of the random walk.

*Proof:* Let be the graph Laplacian, thus

for any , where is the degree of the regular graph and the sum is over the vertices that are adjacent to . This is a contraction of unit speed, and the probability that the random walk at lands at at time is

where are the Dirac deltas at . Using (5), we can rewrite this as

where we are now using the energy form (4). We can write

where is the simple random walk of length on the integers, that is to say where are independent uniform Bernoulli signs. Thus we wish to show that

By finite speed of propagation, the inner product here vanishes if . For we can use Cauchy-Schwarz and the unitary nature of to bound the inner product by . Thus the left-hand side may be upper bounded by

and the claim now follows from the Chernoff inequality.

This inequality has many applications, particularly with regards to relating the entropy, mixing time, and concentration of random walks with volume growth of balls; see this text of Lyons and Peres for some examples.
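On the standard Cayley graph of the integers mentioned above, the theorem can be tested exactly, since the walk distribution is binomial; the following sketch takes the bound in its usual form 2 exp(-d^2/2n):

```python
import math

def walk_prob(n, d):
    """P(simple random walk on Z is at position d after n steps, started at 0)."""
    if (n + d) % 2 or d > n:
        return 0.0
    return math.comb(n, (n + d) // 2) / 2**n

# Varopoulos-Carne bound: the point probability is at most 2 exp(-d^2 / 2n)
for n in [10, 50, 200]:
    for d in range(n + 1):
        assert walk_prob(n, d) <= 2 * math.exp(-d * d / (2 * n)) + 1e-15
```

(On the integers this reduces to a Hoeffding-type estimate for sums of Bernoulli signs, which matches the Chernoff step at the end of the proof above.)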

For sake of comparison, here is a continuous counterpart to the Varopoulos-Carne inequality:

Theorem 2 (Continuous Varopoulos-Carne inequality). Let , and let be supported on compact sets respectively. Then

where is the Euclidean distance between and .

*Proof:* By Fourier inversion one has

for any real , and thus

By finite speed of propagation, the inner product vanishes when ; otherwise, we can use Cauchy-Schwarz and the contractive nature of to bound this inner product by . Thus

Bounding by , we obtain the claim.

Observe that the argument is quite general and can be applied for instance to other Riemannian manifolds than .

Many fluid equations are expected to exhibit turbulence in their solutions, in which a significant portion of their energy ends up in high frequency modes. A typical example arises from the three-dimensional periodic Navier-Stokes equations

where is the velocity field, is a forcing term, is a pressure field, and is the viscosity. To study the dynamics of energy for this system, we first pass to the Fourier transform

so that the system becomes

We may normalise (and ) to have mean zero, so that . Then we introduce the dyadic energies

where ranges over the powers of two, and is shorthand for . Taking the inner product of (1) with , we obtain the energy flow equation

where range over powers of two, is the energy flow rate

is the energy dissipation rate

and is the energy injection rate

The Navier-Stokes equations are notoriously difficult to solve in general. Despite this, Kolmogorov in 1941 was able to give a convincing heuristic argument for what the distribution of the dyadic energies should become over long times, assuming that some sort of distributional steady state is reached. It is common to present this argument in the form of dimensional analysis, but one can also give a more “first principles” form of Kolmogorov’s argument, which I will do here. Heuristically, one can divide the frequency scales into three regimes:

- The *injection regime*, in which the energy injection rate dominates the right-hand side of (2);
- The *energy flow regime*, in which the flow rates dominate the right-hand side of (2); and
- The *dissipation regime*, in which the dissipation dominates the right-hand side of (2).

If we assume a fairly steady and smooth forcing term , then will be supported on the low frequency modes , and so we heuristically expect the injection regime to consist of the low scales . Conversely, if we take the viscosity to be small, we expect the dissipation regime to only occur for very large frequencies , with the energy flow regime occupying the intermediate frequencies.

We can heuristically predict the dividing line between the energy flow regime and the dissipation regime. Of all the flow rates , it turns out in practice that the terms in which (i.e., interactions between comparable scales, rather than widely separated scales) will dominate the other flow rates, so we will focus just on these terms. It is convenient to return to physical space, decomposing the velocity field into Littlewood-Paley components

of the velocity field at frequency . By Plancherel’s theorem, this field will have an norm of , and as a naive model of turbulence we expect this field to be spread out more or less uniformly on the torus, so we have the heuristic

and a similar heuristic applied to gives

(One can consider modifications of the Kolmogorov model in which is concentrated on a lower-dimensional subset of the three-dimensional torus, leading to some changes in the numerology below, but we will not consider such variants here.) Since

we thus arrive at the heuristic

Of course, there is the possibility that due to significant cancellation, the energy flow is significantly less than , but we will assume that cancellation effects are not that significant, so that we typically have

or (assuming that does not oscillate too much in , and are close to )

On the other hand, we clearly have

We thus expect to be in the dissipation regime when

and in the energy flow regime when

Now we study the energy flow regime further. We assume a “statistically scale-invariant” dynamics in this regime, in particular assuming a power law

for some . From (3), we then expect an average asymptotic of the form

for some structure constants that depend on the exact nature of the turbulence; here we have replaced the factor by the comparable term to make things more symmetric. In order to attain a steady state in the energy flow regime, we thus need a cancellation in the structure constants:

On the other hand, if one is assuming statistical scale invariance, we expect the structure constants to be scale-invariant (in the energy flow regime), in that

for dyadic . Also, since the Euler equations conserve energy, the energy flows symmetrise to zero,

which from (7) suggests a similar cancellation among the structure constants

Combining this with the scale-invariance (9), we see that for fixed , we may organise the structure constants for dyadic into sextuples which sum to zero (including some degenerate tuples of order less than six). This will *automatically* guarantee the cancellation (8) required for a steady state energy distribution, provided that

or in other words

for any other value of , there is no particular reason to expect this cancellation (8) to hold. Thus we are led to the heuristic conclusion that the most stable power law distribution for the energies is the law

or in terms of shell energies, we have the famous Kolmogorov 5/3 law

Given that frequency interactions tend to cascade from low frequencies to high (if only because there are so many more high frequencies than low ones), the above analysis predicts a stabilising effect around this power law: scales at which a law (6) holds for some are likely to lose energy in the near-term, while scales at which a law (6) holds for some are conversely expected to gain energy, thus nudging the exponent of the power law towards .
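One can track the exponent bookkeeping in this heuristic symbolically. In the sketch below the notation is my own: write the dyadic velocity norm as N^alpha, and assume (as in the flux heuristic above) that the energy flow N (N^alpha)^3 through scale N is scale-independent in the energy flow regime; the familiar exponents -1/3, -2/3 and -5/3 then drop out:

```python
import sympy as sp

alpha = sp.symbols('alpha')

# flux through scale N scales like N * (N**alpha)**3 = N**(1 + 3*alpha);
# scale-independence of the flux forces this exponent to vanish
a = sp.solve(sp.Eq(1 + 3 * alpha, 0), alpha)[0]

E_N = 2 * a          # dyadic energy exponent: E_N ~ (N**alpha)**2
spectral = E_N - 1   # spectral density exponent: E(k) ~ E_N / N at k ~ N

assert a == sp.Rational(-1, 3)
assert E_N == sp.Rational(-2, 3)
assert spectral == sp.Rational(-5, 3)   # the Kolmogorov 5/3 law
```

This is of course only the dimensional skeleton of the argument; the sextuple-cancellation mechanism above is what singles this exponent out dynamically.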

We can solve for in terms of energy dissipation as follows. If we let be the frequency scale demarcating the transition from the energy flow regime (5) to the dissipation regime (4), we have

and hence by (10)

On the other hand, if we let be the energy dissipation at this scale (which we expect to be the dominant scale of energy dissipation), we have

Some simple algebra then lets us solve for and as

and

Thus, we have the Kolmogorov prediction

for

with energy dissipation occurring at the high end of this scale, which is counterbalanced by the energy injection at the low end of the scale.

As in the previous post, all computations here are at the formal level only.

In the previous blog post, the Euler equations for inviscid incompressible fluid flow were interpreted in a Lagrangian fashion, and then Noether’s theorem invoked to derive the known conservation laws for these equations. In a bit more detail: starting with *Lagrangian space* and *Eulerian space* , we let be the space of volume-preserving, orientation-preserving maps from Lagrangian space to Eulerian space. Given a curve , we can define the *Lagrangian velocity field* as the time derivative of , and the *Eulerian velocity field* . The volume-preserving nature of ensures that is a divergence-free vector field:

If we formally define the functional

then one can show that the critical points of this functional (with appropriate boundary conditions) obey the Euler equations

for some pressure field . As discussed in the previous post, the time translation symmetry of this functional yields conservation of the Hamiltonian

the rigid motion symmetries of Eulerian space give conservation of the total momentum

and total angular momentum

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

for any closed loop in , or equivalently pointwise conservation of the Lagrangian vorticity , where is the -form associated with the vector field using the Euclidean metric on , with denoting pullback by .

It turns out that one can generalise the above calculations. Given any self-adjoint operator on divergence-free vector fields , we can define the functional

as we shall see below the fold, critical points of this functional (with appropriate boundary conditions) obey the generalised Euler equations

for some pressure field , where in coordinates is with the usual summation conventions. (When , , and this term can be absorbed into the pressure , and we recover the usual Euler equations.) Time translation symmetry then gives conservation of the Hamiltonian

If the operator commutes with rigid motions on , then we have conservation of total momentum

and total angular momentum

and the diffeomorphism symmetries of Lagrangian space give conservation of circulation

or pointwise conservation of the Lagrangian vorticity . These applications of Noether’s theorem proceed exactly as in the previous post; we leave the details to the interested reader.

One particular special case of interest arises in two dimensions , when is the inverse derivative . The vorticity is a -form, which in the two-dimensional setting may be identified with a scalar. In coordinates, if we write , then

Since is also divergence-free, we may therefore write

where the stream function is given by the formula

If we take the curl of the generalised Euler equation (2), we obtain (after some computation) the surface quasi-geostrophic equation

This equation has strong analogies with the three-dimensional incompressible Euler equations, and can be viewed as a simplified model for that system; see this paper of Constantin, Majda, and Tabak for details.

Now we can specialise the general conservation laws derived previously to this setting. The conserved Hamiltonian is

(a law previously observed for this equation in the abovementioned paper of Constantin, Majda, and Tabak). As commutes with rigid motions, we also have (formally, at least) conservation of momentum

(which up to trivial transformations is also expressible in impulse form as , after integration by parts), and conservation of angular momentum

(which up to trivial transformations is ). Finally, diffeomorphism invariance gives pointwise conservation of Lagrangian vorticity , thus is transported by the flow (which is also evident from (3)). In particular, all integrals of the form for a fixed function are conserved by the flow.

Throughout this post, we will work only at the *formal* level of analysis, ignoring issues of convergence of integrals, justifying differentiation under the integral sign, and so forth. (Rigorous justification of the conservation laws and other identities arising from the formal manipulations below can usually be established in an *a posteriori* fashion once the identities are in hand, without the need to rigorously justify the manipulations used to come up with these identities).

It is a remarkable fact in the theory of differential equations that many of the ordinary and partial differential equations that are of interest (particularly in geometric PDE, or PDE arising from mathematical physics) admit a variational formulation; thus, a collection of one or more fields on a domain taking values in a space will solve the differential equation of interest if and only if is a critical point to the functional

involving the fields and their first derivatives , where the Lagrangian is a function on the vector bundle over consisting of triples with , , and a linear transformation; we also usually keep the boundary data of fixed in case has a non-trivial boundary, although we will ignore these issues here. (We also ignore the possibility of having additional constraints imposed on and , which require the machinery of Lagrange multipliers to deal with, but which will only serve as a distraction for the current discussion.) It is common to use local coordinates to parameterise as and as , in which case can be viewed locally as a function on .

Example 1 (Geodesic flow). Take and to be a Riemannian manifold, which we will write locally in coordinates as with metric for . A geodesic is then a critical point (keeping fixed) of the energy functional

or in coordinates (ignoring coordinate patch issues, and using the usual summation conventions)

As discussed in this previous post, both the Euler equations for rigid body motion, and the Euler equations for incompressible inviscid flow, can be interpreted as geodesic flow (though in the latter case, one has to work *really* formally, as the manifold is now infinite dimensional).

More generally, if is itself a Riemannian manifold, which we write locally in coordinates as with metric for , then a harmonic map is a critical point of the energy functional

or in coordinates (again ignoring coordinate patch issues)

If we replace the Riemannian manifold by a Lorentzian manifold, such as Minkowski space , then the notion of a harmonic map is replaced by that of a wave map, which generalises the scalar wave equation (which corresponds to the case ).

Example 2 (-particle interactions). Take and ; then a function can be interpreted as a collection of trajectories in space, which we give a physical interpretation as the trajectories of particles. If we assign each particle a positive mass , and also introduce a potential energy function , then it turns out that Newton’s laws of motion in this context (with the force on the particle being given by the conservative force ) are equivalent to the trajectories being a critical point of the action functional

Formally, if is a critical point of a functional , this means that

whenever is a (smooth) deformation with (and with respecting whatever boundary conditions are appropriate). Interchanging the derivative and integral, we (formally, at least) arrive at

Write for the infinitesimal deformation of . By the chain rule, can be expressed in terms of . In coordinates, we have

where we parameterise by , and we use subscripts on to denote partial derivatives in the various coefficients. (One can of course work in a coordinate-free manner here if one really wants to, but the notation becomes a little cumbersome due to the need to carefully split up the tangent space of , and we will not do so here.) Thus we can view (2) as an integral identity that asserts the vanishing of a certain integral, whose integrand involves , where vanishes at the boundary but is otherwise unconstrained.

A general rule of thumb in PDE and calculus of variations is that whenever one has an integral identity of the form for some class of functions that vanishes on the boundary, then there must be an associated differential identity that justifies this integral identity through Stokes’ theorem. This rule of thumb helps explain why integration by parts is used so frequently in PDE to justify integral identities. The rule of thumb can fail when one is dealing with “global” or “cohomologically non-trivial” integral identities of a topological nature, such as the Gauss-Bonnet or Kazhdan-Warner identities, but is quite reliable for “local” or “cohomologically trivial” identities, such as those arising from calculus of variations.

In any case, if we apply this rule to (2), we expect that the integrand should be expressible as a spatial divergence. This is indeed the case:

Proposition 1 (Formal). Let be a critical point of the functional defined in (1). Then for any deformation with , we have

where is the vector field that is expressible in coordinates as

*Proof:* Comparing (4) with (3), we see that the claim is equivalent to the Euler-Lagrange equation

The same computation, together with an integration by parts, shows that (2) may be rewritten as

Since is unconstrained on the interior of , the claim (6) follows (at a formal level, at least).
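The Euler-Lagrange equation (6) can be checked symbolically in the simplest one-dimensional case; here is a sympy sketch for a harmonic oscillator Lagrangian (my choice of example, not one from the text):

```python
import sympy as sp

t, m, k = sp.symbols('t m k', positive=True)
q = sp.Function('q')(t)

# harmonic oscillator Lagrangian L = m q'^2 / 2 - k q^2 / 2
L = m * q.diff(t)**2 / 2 - k * q**2 / 2

# Euler-Lagrange expression d/dt (dL/dq') - dL/dq
el = sp.diff(L.diff(q.diff(t)), t) - L.diff(q)
assert sp.simplify(el - (m * q.diff(t, 2) + k * q)) == 0

# the explicit solution q = cos(sqrt(k/m) t) satisfies the equation
w = sp.sqrt(k / m)
assert sp.simplify(el.subs(q, sp.cos(w * t)).doit()) == 0
```

The same two-line computation of `el` is exactly the coordinate form (6), specialised to one field on a one-dimensional domain.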

Many variational problems also enjoy one-parameter continuous *symmetries*: given any field (not necessarily a critical point), one can place that field in a one-parameter family with , such that

for all ; in particular,

which can be written as (2) as before. Applying the previous rule of thumb, we thus expect another divergence identity

whenever arises from a continuous one-parameter symmetry. This expectation is indeed the case in many examples. For instance, if the spatial domain is the Euclidean space , and the Lagrangian (when expressed in coordinates) has no direct dependence on the spatial variable , thus

then we obtain translation symmetries

for , where is the standard basis for . For a fixed , the left-hand side of (7) then becomes

where . Another common type of symmetry is a *pointwise* symmetry, in which

for all , in which case (7) clearly holds with .

If we subtract (4) from (7), we obtain the celebrated theorem of Noether linking symmetries with conservation laws:

Theorem 2 (Noether’s theorem). Suppose that is a critical point of the functional (1), and let be a one-parameter continuous symmetry with . Let be the vector field in (5), and let be the vector field in (7). Then we have the pointwise conservation law

In particular, for one-dimensional variational problems, in which , we have the conservation law for all (assuming of course that is connected and contains ).

Noether’s theorem gives a systematic way to locate conservation laws for solutions to variational problems. For instance, if and the Lagrangian has no explicit time dependence, thus

then by using the time translation symmetry , we have

as discussed previously, whereas we have , and hence by (5)

and so Noether’s theorem gives conservation of the *Hamiltonian*

For instance, for geodesic flow, the Hamiltonian works out to be

so we see that the speed of the geodesic is conserved over time.

For pointwise symmetries (9), vanishes, and so Noether’s theorem simplifies to ; in the one-dimensional case , we thus see from (5) that the quantity

is conserved in time. For instance, for the -particle system in Example 2, if we have the translation invariance

for all , then we have the pointwise translation symmetry

for all , and some , in which case , and the conserved quantity (11) becomes

as was arbitrary, this establishes conservation of the *total momentum*
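For a concrete instance of this calculation, here is a sympy sketch (with a hypothetical spring potential, chosen only because it is translation-invariant) verifying that the Euler-Lagrange equations conserve the total momentum:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m1, m2, k = sp.symbols('m1 m2 k', positive=True)
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)

# translation-invariant Lagrangian: two particles coupled by a spring
L = m1 * q1.diff(t)**2 / 2 + m2 * q2.diff(t)**2 / 2 - k * (q1 - q2)**2 / 2

# Euler-Lagrange equations (Newton's laws for this system)
eqs = euler_equations(L, [q1, q2], t)
acc = sp.solve(eqs, [q1.diff(t, 2), q2.diff(t, 2)])

# translation symmetry q_i -> q_i + s  =>  total momentum m1 q1' + m2 q2' is conserved
dp_dt = sp.simplify(m1 * acc[q1.diff(t, 2)] + m2 * acc[q2.diff(t, 2)])
assert dp_dt == 0
```

The same computation with a non-translation-invariant potential (say, one depending on q1 alone) would give a nonzero dp_dt, illustrating that the conservation law really does come from the symmetry.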

Similarly, if we have the rotation invariance

for any and , then we have the pointwise rotation symmetry

for any skew-symmetric real matrix , in which case , and the conserved quantity (11) becomes

since is an arbitrary skew-symmetric matrix, this establishes conservation of the *total angular momentum*

Below the fold, I will describe how Noether’s theorem can be used to locate all of the conserved quantities for the Euler equations of inviscid fluid flow, discussed in this previous post, by interpreting that flow as geodesic flow in an infinite dimensional manifold.

The Euler equations for incompressible inviscid fluids may be written as

where is the velocity field, and is the pressure field. To avoid technicalities we will assume that both fields are smooth, and that is bounded. We will take the dimension to be at least two, with the three-dimensional case being of course especially interesting.

The Euler equations are the inviscid limit of the Navier-Stokes equations; as discussed in my previous post, one potential route to establishing finite time blowup for the latter equations when is to be able to construct “computers” solving the Euler equations, which generate smaller replicas of themselves in a noise-tolerant manner (as the viscosity term in the Navier-Stokes equation is to be viewed as perturbative noise).

Perhaps the most prominent obstacles to this route are the *conservation laws* for the Euler equations, which limit the types of final states that a putative computer could reach from a given initial state. Most famously, we have the conservation of energy

$\displaystyle \frac{1}{2} \int_{{\bf R}^d} |u(t,x)|^2\, dx$
(assuming sufficient decay of the velocity field at infinity); thus for instance it would not be possible for a computer to generate a replica of itself which had greater total energy than the initial computer. This by itself is not a fatal obstruction (in this paper of mine, I constructed such a “computer” for an averaged Euler equation that still obeyed energy conservation). However, there are other conservation laws also, for instance in three dimensions one also has conservation of helicity

$\displaystyle \int_{{\bf R}^3} u \cdot \omega\, dx,$

where $\omega := \nabla \times u$ is the vorticity,
and (formally, at least) one has conservation of momentum

$\displaystyle \int_{{\bf R}^3} u\, dx$
and angular momentum

$\displaystyle \int_{{\bf R}^3} x \times u\, dx$
(although, as we shall discuss below, due to the slow decay of $u$ at infinity, these integrals have to either be interpreted in a principal value sense, or else replaced with their vorticity-based formulations, namely impulse and moment of impulse). Total vorticity

$\displaystyle \int_{{\bf R}^3} \omega\, dx$
is also conserved, although it turns out in three dimensions that this quantity vanishes when one assumes sufficient decay at infinity. Then there are the pointwise conservation laws: the vorticity and the volume form are both transported by the fluid flow, while the velocity field (when viewed as a covector) is transported up to a gradient; among other things, this gives the transport of vortex lines as well as Kelvin’s circulation theorem, and can also be used to deduce the helicity conservation law mentioned above. In my opinion, none of these laws actually prohibits a self-replicating computer from existing within the laws of ideal fluid flow, but they do significantly complicate the task of actually designing such a computer, or of the basic “gates” that such a computer would consist of.
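As a small symbolic sanity check of the structure described above (not something from this post), one can verify with SymPy that the classical Arnold–Beltrami–Childress flow is a steady solution of the Euler equations: it is a Beltrami flow, $\omega = u$, so the nonlinearity $(u \cdot \nabla) u$ is a pure gradient, matched by the pressure $p = -|u|^2/2$; its helicity density $u \cdot \omega$ is then simply $|u|^2$. The parameters $A, B, C = 1, 2, 3$ are arbitrary sample values.

```python
import sympy as sp

# ABC (Arnold-Beltrami-Childress) flow with sample parameters A, B, C.
x, y, z = sp.symbols('x y z')
A, B, C = 1, 2, 3
u = sp.Matrix([A * sp.sin(z) + C * sp.cos(y),
               B * sp.sin(x) + A * sp.cos(z),
               C * sp.sin(y) + B * sp.cos(x)])
X = (x, y, z)

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

omega = curl(u)
div_u = sum(sp.diff(u[i], X[i]) for i in range(3))   # incompressibility
beltrami = (omega - u).applyfunc(sp.expand)          # Beltrami property: omega = u

# Steady Euler: (u . grad) u + grad p = 0 with candidate pressure p = -|u|^2 / 2
adv = sp.Matrix([sum(u[j] * sp.diff(u[i], X[j]) for j in range(3))
                 for i in range(3)])
p = -u.dot(u) / 2
grad_p = sp.Matrix([sp.diff(p, v) for v in X])
residual = (adv + grad_p).applyfunc(sp.expand)

print(div_u, beltrami.T, residual.T)  # all zero: a steady Euler solution
```

In particular the helicity integrand $u \cdot \omega = |u|^2$ is pointwise nonnegative for this flow, so its (torus-normalized) helicity is nonzero.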

Below the fold I would like to record and derive all the conservation laws mentioned above, which to my knowledge essentially form the complete set of known conserved quantities for the Euler equations. The material here (although not the notation) is drawn from this text of Majda and Bertozzi.
