You are currently browsing the monthly archive for December 2009.

Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting. The short answer is “yes for small averages, but not for long ones”.

The Furstenberg multiple recurrence theorem can be phrased as follows: if {X = (X, {\mathcal X}, \mu)} is a probability space with a measure-preserving shift {T:X \rightarrow X} (which naturally induces an isomorphism {\alpha: L^\infty(X) \rightarrow L^\infty(X)} by setting {\alpha a := a \circ T^{-1}}), {a \in L^\infty(X)} is non-negative with positive trace {\tau(a) := \int_X a\ d\mu}, and {k \geq 1} is an integer, then one has

\displaystyle  \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0.

In particular, {\tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0} for all {n} in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.

The Host-Kra multiple convergence theorem makes the related assertion that if {a_0,\ldots,a_{k-1} \in L^\infty(X)}, then the scalar averages

\displaystyle  \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )

converge to a limit as {N \rightarrow \infty}; a fortiori, the function averages

\displaystyle  \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})

converge in (say) {L^2(X)} norm.

The space {L^\infty(X)} is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space {H} which is closed under the weak operator topology, and under taking adjoints. Indeed, one can take {H} to be {L^2(X)}, and identify each element {m} of {L^\infty(X)} with the multiplier operator {a \mapsto ma}. The operation {\tau: a \mapsto \int_X a\ d\mu} is then a finite trace for this algebra, i.e. a linear map from the algebra to the scalars {{\mathbb C}} such that {\tau(ab)=\tau(ba)}, {\tau(a^*) = \overline{\tau(a)}}, and {\tau(a^* a) \geq 0}, with equality iff {a=0}. The shift {\alpha: L^\infty(X) \rightarrow L^\infty(X)} is then an automorphism of this algebra (preserving the trace {\tau} and conjugation).

We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system {(M, \tau, \alpha)} to be a von Neumann algebra {M} with a finite trace {\tau} and an automorphism {\alpha: M \rightarrow M}. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:

  • (Matrices) {M = M_n({\mathbb C})} is the algebra of {n \times n} complex matrices, with trace {\tau(a) = \frac{1}{n} \hbox{tr}(a)} and shift {\alpha(a) := UaU^{-1}}, where {U} is a fixed unitary {n \times n} matrix.
  • (Group algebras) {M = \overline{{\mathbb C} G}} is the closure of the group algebra {{\mathbb C} G} of a discrete group {G} (i.e. the algebra of finite formal complex combinations of group elements), which acts on the Hilbert space {\ell^2(G)} by convolution (identifying each group element with its Kronecker delta function). A trace is given by {\tau(a) = \langle a \delta_0, \delta_0 \rangle_{\ell^2(G)}}, where {\delta_0 \in \ell^2(G)} is the Kronecker delta at the identity. Any automorphism {T: G \rightarrow G} of the group induces a shift {\alpha: M \rightarrow M}.
  • (Noncommutative torus) {M} is the von Neumann algebra acting on {L^2(({\mathbb R}/{\mathbb Z})^2)} generated by the multiplier operator {f(x,y) \mapsto e^{2\pi i x} f(x,y)} and the shifted multiplier operator {f(x,y) \mapsto e^{2\pi i y} f(x+\alpha,y)}, where {\alpha \in {\mathbb R}/{\mathbb Z}} is fixed. A trace is given by {\tau(a) = \langle 1, a1\rangle_{L^2(({\mathbb R}/{\mathbb Z})^2)}}, where {1 \in L^2(({\mathbb R}/{\mathbb Z})^2)} is the constant function.
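
As a concrete illustration of the matrix example (a toy computation of my own, not taken from the paper), one can check numerically that the normalised trace is invariant under the shift {\alpha(a) = UaU^{-1}}, and that a {k=2} recurrence average is positive. The choice of a diagonal unitary {U}, the dimension, and the sample size {N} are all arbitrary here:

```python
import numpy as np

# Toy model of the matrix example: M = M_n(C), tau(a) = tr(a)/n,
# alpha(a) = U a U^{-1} for a fixed unitary U.
n = 4
rng = np.random.default_rng(0)
U = np.diag(np.exp(2j * np.pi * rng.random(n)))  # a fixed (diagonal) unitary

def alpha(a):
    """The shift a -> U a U^{-1}."""
    return U @ a @ U.conj().T

def tau(a):
    """The normalised trace tr(a)/n."""
    return np.trace(a) / n

# a non-negative element (positive semidefinite) with positive trace
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = b @ b.conj().T

assert abs(tau(alpha(a)) - tau(a)) < 1e-10  # tau is alpha-invariant
assert tau(a).real > 0

# the k = 2 average (1/N) sum_{n=1}^N tau(a (alpha^n a)); each term
# tr(a U^n a U^{-n})/n is nonnegative since both factors are positive
# semidefinite, so the average stays positive
N = 500
an, total = a, 0.0
for _ in range(N):
    an = alpha(an)
    total += tau(a @ an).real
assert total / N > 0
```

This only illustrates the (easy) {k=2} regime covered by the von Neumann ergodic theorem; the content of the paper concerns what survives for {k \geq 3}.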

Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed {k \geq 1} and for a fixed von Neumann dynamical system {(M,\tau,\alpha)}:

  • (Recurrence on average) Whenever {a \in M} is non-negative with positive trace, is it true that

\displaystyle  \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0?

  • (Recurrence on a dense set) Whenever {a \in M} is non-negative with positive trace, is it true that

\displaystyle  \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0

for all {n} in a set of positive upper density?
  • (Weak convergence) With {a_0,\ldots,a_{k-1} \in M}, is it true that

\displaystyle  \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )

converges?
  • (Strong convergence) With {a_1,\ldots,a_{k-1} \in M}, is it true that

\displaystyle  \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})

converges in the Hilbert-Schmidt norm {\|a\|_{L^2(M)} := \tau(a^* a)^{1/2}}?

Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.

For {k=1}, all four questions can trivially be answered “yes”. For {k=2}, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators. For {k=3}, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” questions assuming that {M} is ergodic. For general {k}, we have a positive answer to all four questions under the assumption that {M} is asymptotically abelian, which roughly speaking means that the commutators {[a,\alpha^n b]} converge to zero (in an appropriate weak sense) as {n \rightarrow \infty}. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido, Duvenhage, and Beyers-Duvenhage-Stroh. For the {k=3} result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the “generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.

In most other situations we have counterexamples to all of these questions. In particular:

  • For {k=3}, recurrence on average can fail on an ergodic system; indeed, one can even make the average negative. This example is ultimately based on a Behrend example construction and a von Neumann algebra construction known as the crossed product.
  • For {k=3}, recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
  • For {k=4}, weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
  • For {k=5}, recurrence on a dense set fails even with the ergodicity hypothesis. This uses a fancier version of the Behrend example due to Ruzsa in this paper of Bergelson, Host, and Kra. This example only applies for {k \geq 5}; we do not know for {k=4} whether recurrence on a dense set holds for ergodic systems.

Read the rest of this entry »

This will be a more frivolous post than usual, in part due to the holiday season.

I recently happened across the following video, which exploits a simple rhetorical trick that I had not seen before:

If nothing else, it’s a convincing (albeit unsubtle) demonstration that the English language is non-commutative (or perhaps non-associative); a linguistic analogue of the swindle, if you will.

Of course, the trick relies heavily on sentence fragments that negate or compare; I wonder if it is possible to achieve a comparable effect without using such fragments.

A related trick which I have seen (though I cannot recall any explicit examples right now; perhaps some readers know of some?) is to set up the verses of a song so that the last verse is identical to the first, but now has a completely distinct meaning (e.g. an ironic interpretation rather than a literal one) due to the context of the preceding verses.  The ultimate challenge would be to set up a Möbius song, in which each iteration of the song completely reverses the meaning of the next iterate (cf. this xkcd strip), but this may be beyond the capability of the English language.

On a related note: when I was a graduate student in Princeton, I recall John Conway (and another author whose name I forget) producing another light-hearted demonstration that the English language was highly non-commutative, by showing that if one takes the free group with 26 generators a,b,\ldots,z and quotients out by all relations given by anagrams (e.g. cat=act) then the resulting group was commutative.    Unfortunately I was not able to locate this recreational mathematics paper of Conway (which also treated the French language, if I recall correctly); perhaps one of the readers knows of it?

In a multiplicative group {G}, the commutator of two group elements {g, h} is defined as {[g,h] := g^{-1}h^{-1}gh} (other conventions are also in use, though they are largely equivalent for the purposes of this discussion). A group is said to be nilpotent of step {s} (or more precisely, step {\leq s}), if all iterated commutators of order {s+1} or higher necessarily vanish. For instance, a group is nilpotent of step {1} if and only if it is abelian, and it is nilpotent of step {2} if and only if {[[g_1,g_2],g_3]=id} for all {g_1,g_2,g_3} (i.e. all commutator elements {[g_1,g_2]} are central), and so forth. A good example of an {s}-step nilpotent group is the group of {(s+1) \times (s+1)} upper-triangular unipotent matrices (i.e. matrices with {1}s on the diagonal and zeroes below the diagonal), with entries in some ring (e.g. the reals, integers, or complex numbers).
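
As a quick sanity check on this example (my own toy computation, not from the post), one can verify numerically that an iterated commutator of order {s+1} of upper-triangular unipotent matrices vanishes, while a single commutator generally does not:

```python
import numpy as np

rng = np.random.default_rng(1)
s = 3          # the step; we work with (s+1) x (s+1) unipotent matrices
n = s + 1

def unipotent():
    """A random upper-triangular unipotent integer matrix (1s on diagonal)."""
    return np.eye(n) + np.triu(rng.integers(-3, 4, size=(n, n)), k=1)

def comm(g, h):
    """The commutator [g,h] = g^{-1} h^{-1} g h."""
    return np.linalg.inv(g) @ np.linalg.inv(h) @ g @ h

# an iterated commutator [[[g1,g2],g3],g4] of order s+1 = 4 ...
g = [unipotent() for _ in range(s + 1)]
c = g[0]
for h in g[1:]:
    c = comm(c, h)
assert np.allclose(c, np.eye(n))  # ... vanishes: the group is s-step nilpotent

# while a single commutator of two elementary unipotents is not the identity
a12 = np.eye(n); a12[0, 1] = 1
a23 = np.eye(n); a23[1, 2] = 1
assert not np.allclose(comm(a12, a23), np.eye(n))  # the group is not abelian
```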

Other important examples of nilpotent groups arise from operations on polynomials. For instance, if {V_{\leq s}} is the vector space of real polynomials of one variable of degree at most {s}, then there are two natural affine actions on {V_{\leq s}}. Firstly, every polynomial {Q} in {V_{\leq s}} gives rise to a “vertical” shift {P \mapsto P+Q}. Secondly, every {h \in {\bf R}} gives rise to a “horizontal” shift {P \mapsto P(\cdot+h)}. The group generated by these two families of shifts is a nilpotent group of step {\leq s+1}; this reflects the well-known fact that a polynomial of degree {\leq s} vanishes once one differentiates more than {s} times. Because of this link between nilpotency and polynomials, one can view nilpotent algebra as a generalisation of polynomial algebra.

Suppose one has a finite number {g_1,\ldots,g_n} of generators. Using abstract algebra, one can then construct the free nilpotent group {{\mathcal F}_{\leq s}(g_1,\ldots,g_n)} of step {\leq s}, defined as the group generated by the {g_1,\ldots,g_n} subject to the relations that all commutators of order {s+1} involving the generators are trivial. This is the universal object in the category of nilpotent groups of step {\leq s} with {n} marked elements {g_1,\ldots,g_n}. In other words, given any other {\leq s}-step nilpotent group {G'} with {n} marked elements {g'_1,\ldots,g'_n}, there is a unique homomorphism from the free nilpotent group to {G'} that maps each {g_j} to {g'_j} for {1 \leq j \leq n}. In particular, the free nilpotent group is well-defined up to isomorphism in this category.

In many applications, one wants to have a more concrete description of the free nilpotent group, so that one can perform computations more easily (and in particular, be able to tell when two words in the group are equal or not). This is easy for small values of {s}. For instance, when {s=1}, {{\mathcal F}_{\leq 1}(g_1,\ldots,g_n)} is simply the free abelian group generated by {g_1,\ldots,g_n}, and so every element {g} of {{\mathcal F}_{\leq 1}(g_1,\ldots,g_n)} can be described uniquely as

\displaystyle  g = \prod_{j=1}^n g_j^{m_j} := g_1^{m_1} \ldots g_n^{m_n} \ \ \ \ \ (1)

for some integers {m_1,\ldots,m_n}, with the obvious group law. Indeed, to obtain existence of this representation, one starts with any representation of {g} in terms of the generators {g_1,\ldots,g_n}, and then uses the abelian property to push the {g_1} factors to the far left, followed by the {g_2} factors, and so forth. To show uniqueness, we observe that the group {G} of formal abelian products {\{ g_1^{m_1} \ldots g_n^{m_n}: m_1,\ldots,m_n \in {\bf Z} \} \equiv {\bf Z}^n} is already a {\leq 1}-step nilpotent group with marked elements {g_1,\ldots,g_n}, and so there must be a homomorphism from the free nilpotent group to {G}. Since {G} distinguishes all the products {g_1^{m_1} \ldots g_n^{m_n}} from each other, the free nilpotent group must also.

It is only slightly more tricky to describe the free nilpotent group {{\mathcal F}_{\leq 2}(g_1,\ldots,g_n)} of step {\leq 2}. Using the identities

\displaystyle  gh = hg [g,h]; \quad gh^{-1} = ([g,h]^{-1})^{g^{-1}} h^{-1} g; \quad g^{-1} h = h [g,h]^{-1} g^{-1}; \quad g^{-1} h^{-1} = [g,h] h^{-1} g^{-1}

(where {g^h := h^{-1} g h} is the conjugate of {g} by {h}) we see that whenever {1 \leq i < j \leq n}, one can push a positive or negative power of {g_i} past a positive or negative power of {g_j}, at the cost of creating a positive or negative power of {[g_i,g_j]}, or one of its conjugates. Meanwhile, in a {\leq 2}-step nilpotent group, all the commutators are central, and one can pull all the commutators out of a word and collect them as in the abelian case. Doing all this, we see that every element {g} of {{\mathcal F}_{\leq 2}(g_1,\ldots,g_n)} has a representation of the form

\displaystyle  g = (\prod_{j=1}^n g_j^{m_j}) (\prod_{1 \leq i < j \leq n} [g_i,g_j]^{m_{[i,j]}}) \ \ \ \ \ (2)

for some integers {m_j} for {1 \leq j \leq n} and {m_{[i,j]}} for {1 \leq i < j \leq n}. Note that we don’t need to consider commutators {[g_i,g_j]} for {i \geq j}, since

\displaystyle  [g_i,g_i] = id

and

\displaystyle  [g_i,g_j] = [g_j,g_i]^{-1}.

It is possible to show also that this representation is unique, by repeating the previous argument, i.e. by showing that the set of formal products

\displaystyle  G := \{ (\prod_{j=1}^n g_j^{m_j}) (\prod_{1 \leq i < j \leq n} [g_i,g_j]^{m_{[i,j]}}): m_j, m_{[i,j]} \in {\bf Z} \}

forms a {\leq 2}-step nilpotent group, after using the above rules to define the group operations. This can be done, but verifying the group axioms (particularly the associative law) for {G} is unpleasantly tedious.
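
One can see the normal form (2) in action for {n=2} generators, where the free {\leq 2}-step nilpotent group is realised concretely as the discrete Heisenberg group of {3 \times 3} upper-triangular unipotent integer matrices. The following sketch (the matrix realisation is standard, but the code and names are my own) collects a random word into the form {g_1^{m_1} g_2^{m_2} [g_1,g_2]^{m_{[1,2]}}} by reading the exponents off matrix entries:

```python
import numpy as np

E12 = np.array([[0,1,0],[0,0,0],[0,0,0]], dtype=np.int64)
E23 = np.array([[0,0,0],[0,0,1],[0,0,0]], dtype=np.int64)
E13 = np.array([[0,0,1],[0,0,0],[0,0,0]], dtype=np.int64)
I = np.eye(3, dtype=np.int64)
g1, g2 = I + E12, I + E23

def inv(g):
    """Exact inverse of a unipotent integer matrix."""
    return np.rint(np.linalg.inv(g)).astype(np.int64)

def comm(g, h):
    """[g,h] = g^{-1} h^{-1} g h."""
    return inv(g) @ inv(h) @ g @ h

c = comm(g1, g2)  # here c = I + E13
assert (comm(c, g1) == I).all() and (comm(c, g2) == I).all()  # c is central

# build a random word in g1^{+-1}, g2^{+-1}
rng = np.random.default_rng(2)
gens = [g1, inv(g1), g2, inv(g2)]
w = I
for _ in range(30):
    w = w @ gens[rng.integers(4)]

# read the normal-form exponents off the matrix entries, using
# g1^a g2^b c^m = I + a E12 + b E23 + (ab + m) E13
m1, m2 = int(w[0, 1]), int(w[1, 2])
m12 = int(w[0, 2]) - m1 * m2
normal_form = (I + m1 * E12) @ (I + m2 * E23) @ (I + m12 * E13)
assert (normal_form == w).all()  # the representation (2) holds exactly
```

The last assertion is exactly existence of (2) in this model, and uniqueness is visible too: the exponents {m_1, m_2, m_{[1,2]}} are determined by the three strictly upper-triangular entries.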

Once one sees this, one rapidly loses an appetite for trying to obtain a similar explicit description for free nilpotent groups for higher step, especially once one starts seeing that higher commutators obey some non-obvious identities such as the Hall-Witt identity

\displaystyle  [[g, h^{-1}], k]^h\cdot[[h, k^{-1}], g]^k\cdot[[k, g^{-1}], h]^g = 1 \ \ \ \ \ (3)

(a nonlinear version of the Jacobi identity in the theory of Lie algebras), which make one less certain as to the existence or uniqueness of various proposed generalisations of the representations (1) or (2). For instance, in the free {\leq 3}-step nilpotent group, it turns out that for representations of the form

\displaystyle  g = (\prod_{j=1}^n g_j^{m_j}) (\prod_{1 \leq i < j \leq n} [g_i,g_j]^{m_{[i,j]}}) (\prod_{1 \leq i < j < k \leq n} [[g_i,g_j],g_k]^{n_{[[i,j],k]}})

one has uniqueness but not existence (e.g. even in the simplest case {n=3}, there is no place in this representation for, say, {[[g_1,g_3],g_2]} or {[[g_1,g_2],g_2]}), but if one tries to insert more triple commutators into the representation to make up for this, one has to be careful not to lose uniqueness due to identities such as (3). One can paste these in by ad hoc means in the {s=3} case, but the {s=4} case looks more fearsome still, especially now that the quadruple commutators split into several distinct-looking species such as {[[g_i,g_j],[g_k,g_l]]} and {[[[g_i,g_j],g_k],g_l]} which are nevertheless still related to each other by identities such as (3). While one can eventually disentangle this mess for any fixed {n} and {s} by a finite amount of combinatorial computation, it is not immediately obvious how to give an explicit description of {{\mathcal F}_{\leq s}(g_1,\ldots,g_n)} uniformly in {n} and {s}.

Nevertheless, it turns out that one can give a reasonably tractable description of this group if one takes a polycyclic perspective rather than a nilpotent one – i.e. one views the free nilpotent group as a tower of group extensions of the trivial group by the cyclic group {{\bf Z}}. This seems to be a fairly standard observation in group theory – I found it in this book of Magnus, Karrass, and Solitar, via this paper of Leibman – but seems not to be so widely known outside of that field, so I wanted to record it here.

Read the rest of this entry »

This is a technical post inspired by separate conversations with Jim Colliander and with Soonsik Kwon on the relationship between two techniques used to control non-radiating solutions to dispersive nonlinear equations, namely the “double Duhamel trick” and the “in/out decomposition”. See for instance these lecture notes of Killip and Visan for a survey of these two techniques and other related methods in the subject. (I should caution that this post is likely to be unintelligible to anyone not already working in this area.)

For sake of discussion we shall focus on solutions to a nonlinear Schrödinger equation

\displaystyle  iu_t + \Delta u = F(u)

and we will not concern ourselves with the specific regularity of the solution {u}, or the specific properties of the nonlinearity {F} here. We will also not address the issue of how to justify the formal computations being performed here.

Solutions to this equation enjoy the forward Duhamel formula

\displaystyle  u(t) = e^{i(t-t_0)\Delta} u(t_0) - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt'

for times {t} to the future of {t_0} in the lifespan of the solution, as well as the backward Duhamel formula

\displaystyle  u(t) = e^{i(t-t_1)\Delta} u(t_1) + i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt'

for all times {t} to the past of {t_1} in the lifespan of the solution. The first formula asserts that the solution at a given time is determined by the initial state and by the immediate past, while the second formula is the time reversal of the first, asserting that the solution at a given time is determined by the final state and the immediate future. These basic causal formulae are the foundation of the local theory of these equations, and in particular play an instrumental role in establishing local well-posedness for these equations. In this local theory, the main philosophy is to treat the homogeneous (or linear) term {e^{i(t-t_0)\Delta} u(t_0)} or {e^{i(t-t_1)\Delta} u(t_1)} as the main term, and the inhomogeneous (or nonlinear, or forcing) integral term as an error term.

The situation is reversed when one turns to the global theory, and looks at the asymptotic behaviour of a solution as one approaches a limiting time {T} (which can be infinite if one has global existence, or finite if one has finite time blowup). After a suitable rescaling, the linear portion of the solution often disappears from view, leaving one with an asymptotic blowup profile solution which is non-radiating in the sense that the linear components of the Duhamel formulae vanish, thus

\displaystyle  u(t) = - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (1)

and

\displaystyle  u(t) = i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (2)

where {t_0, t_1} are the endpoint times of existence. (This type of situation comes up for instance in the Kenig-Merle approach to critical regularity problems, by reducing to a minimal blowup solution which is almost periodic modulo symmetries, and hence non-radiating.) These types of non-radiating solutions are propelled solely by their own nonlinear self-interactions from the immediate past or immediate future; they are generalisations of “nonlinear bound states” such as solitons.

A key task is then to somehow combine the forward representation (1) and the backward representation (2) to obtain new information on {u(t)} itself, that cannot be obtained from either representation alone; it seems that the immediate past and immediate future can collectively exert more control on the present than they each do separately. This type of problem can be abstracted as follows. Let {\|u(t)\|_{Y_+}} be the infimal value of {\|F_+\|_N} over all forward representations of {u(t)} of the form

\displaystyle  u(t) = \int_{t_0}^t e^{i(t-t')\Delta} F_+(t') \ dt' \ \ \ \ \ (3)

where {N} is some suitable spacetime norm (e.g. a Strichartz-type norm), and similarly let {\|u(t)\|_{Y_-}} be the infimal value of {\|F_-\|_N} over all backward representations of {u(t)} of the form

\displaystyle  u(t) = \int_{t}^{t_1} e^{i(t-t')\Delta} F_-(t') \ dt'. \ \ \ \ \ (4)

Typically, one already has (or is willing to assume as a bootstrap hypothesis) control on {F(u)} in the norm {N}, which gives control of {u(t)} in the norms {Y_+, Y_-}. The task is then to use the control of both the {Y_+} and {Y_-} norm of {u(t)} to gain control of {u(t)} in a more conventional Hilbert space norm {X}, which is typically a Sobolev space such as {H^s} or {L^2}.

One can use some classical functional analysis to clarify this situation. By the closed graph theorem, the above task is (morally, at least) equivalent to establishing an a priori bound of the form

\displaystyle  \| u \|_X \lesssim \|u\|_{Y_+} + \|u\|_{Y_-} \ \ \ \ \ (5)

for all reasonable {u} (e.g. test functions). The double Duhamel trick accomplishes this by establishing the stronger estimate

\displaystyle  |\langle u, v \rangle_X| \lesssim \|u\|_{Y_+} \|v\|_{Y_-} \ \ \ \ \ (6)

for all reasonable {u, v}; note that setting {u=v} and applying the arithmetic-geometric inequality then gives (5). The point is that if {u} has a forward representation (3) and {v} has a backward representation (4), then the inner product {\langle u, v \rangle_X} can (formally, at least) be expanded as a double integral

\displaystyle  \int_{t_0}^t \int_{t}^{t_1} \langle e^{i(t''-t')\Delta} F_+(t'), F_-(t'') \rangle_X\ dt'' dt'.

The dispersive nature of the linear Schrödinger equation often causes {\langle e^{i(t''-t')\Delta} F_+(t'), F_-(t'') \rangle_X} to decay as {|t'-t''| \rightarrow \infty}, especially in high dimensions. In high enough dimension (typically one needs five or higher dimensions, unless one already has some spacetime control on the solution), the decay is stronger than {1/|t'-t''|^2}, so that the integrand becomes absolutely integrable and one recovers (6).

Unfortunately it appears that estimates of the form (6) fail in low dimensions (for the type of norms {N} that actually show up in applications); there is just too much interaction between past and future to hope for any reasonable control of this inner product. But one can try to obtain (5) by other means. By the Hahn-Banach theorem (and ignoring various issues related to reflexivity), (5) is equivalent to the assertion that every {u \in X} can be decomposed as {u = u_+ + u_-}, where {\|u_+\|_{Y_+^*} \lesssim \|u\|_X} and {\|u_-\|_{Y_-^*} \lesssim \|u\|_X}. Indeed once one has such a decomposition, one obtains (5) by computing the inner product of {u} with {u=u_++u_-} in {X} in two different ways. One can also (morally at least) write {\|u_+\|_{Y_+^*}} as {\| e^{i(\cdot-t)\Delta} u_+\|_{N^*([t_0,t])}}, and similarly write {\|u_-\|_{Y_-^*}} as {\| e^{i(\cdot-t)\Delta} u_-\|_{N^*([t,t_1])}}.

So one can dualise the task of proving (5) as that of obtaining a decomposition of an arbitrary initial state {u} into two components {u_+} and {u_-}, where the former disperses into the past and the latter disperses into the future under the linear evolution. We do not know how to achieve this type of task efficiently in general – and doing so would likely lead to a significant advance in the subject (perhaps one of the main areas in this topic where serious harmonic analysis is likely to play a major role). But in the model case of spherically symmetric data {u}, one can perform such a decomposition quite easily: one uses microlocal projections to set {u_+} to be the “inward” pointing component of {u}, which propagates towards the origin in the future and away from the origin in the past, and {u_-} to similarly be the “outward” component of {u}. As spherical symmetry significantly dilutes the amplitude of the solution (and hence the strength of the nonlinearity) away from the origin, this decomposition tends to work quite well for applications, and is one of the main reasons (though not the only one) why we have a global theory for low-dimensional nonlinear Schrödinger equations in the radial case, but not in general.

The in/out decomposition is a linear one, but the Hahn-Banach argument gives no reason why the decomposition needs to be linear. (Note that other well-known decompositions in analysis, such as the Fefferman-Stein decomposition of BMO, are necessarily nonlinear, a fact which is ultimately equivalent to the non-complemented nature of a certain subspace of a Banach space; see these lecture notes of mine and this old blog post for some discussion.) So one could imagine a sophisticated nonlinear decomposition as a general substitute for the in/out decomposition. See for instance this paper of Bourgain and Brezis for some of the subtleties of decomposition even in very classical function spaces such as {H^{1/2}(R)}. Alternatively, there may well be a third way to obtain estimates of the form (5) that do not require either decomposition or the double Duhamel trick; such a method may well clarify the relative relationship between past, present, and future for critical nonlinear dispersive equations, which seems to be a key aspect of the theory that is still only partially understood. (In particular, it seems that one needs a fairly strong decoupling of the present from both the past and the future to get the sort of elliptic-like regularity results that allow us to make further progress with such equations.)

One of the most basic theorems in linear algebra is that every finite-dimensional vector space has a finite basis. Let us give a statement of this theorem in the case when the underlying field is the rationals:

Theorem 1 (Finite generation implies finite basis, infinitary version) Let {V} be a vector space over the rationals {{\mathbb Q}}, and let {v_1,\ldots,v_n} be a finite collection of vectors in {V}. Then there exists a collection {w_1,\ldots,w_k} of vectors in {V}, with {1 \leq k \leq n}, such that

  • ({w} generates {v}) Every {v_j} can be expressed as a rational linear combination of the {w_1,\ldots,w_k}.
  • ({w} independent) There is no non-trivial linear relation {a_1 w_1 + \ldots + a_k w_k = 0}, {a_1,\ldots,a_k \in {\mathbb Q}} among the {w_1,\ldots,w_k} (where non-trivial means that the {a_i} are not all zero).

In fact, one can take {w_1,\ldots,w_k} to be a subset of the {v_1,\ldots,v_n}.

Proof: We perform the following “rank reduction argument”. Start with {w_1,\ldots,w_k} initialised to {v_1,\ldots,v_n} (so initially we have {k=n}). Clearly {w} generates {v}. If the {w_i} are linearly independent then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the {w_i}, say {w_k}, is a rational linear combination of the {w_1,\ldots,w_{k-1}}. In such a case, {w_k} becomes redundant, and we may delete it (reducing the rank {k} by one). We repeat this procedure; it can only run for at most {n} steps and so terminates with {w_1,\ldots,w_k} obeying both of the desired properties. \Box
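
The rank reduction argument is short enough to run directly in code. Here is a self-contained sketch of mine (exact rational arithmetic via the standard library's fractions module; the sample vectors are arbitrary), which deletes redundant vectors one at a time, exactly as in the proof:

```python
from fractions import Fraction

def rank(vectors):
    """Exact rank over Q of a list of vectors, by Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# rank reduction: delete any w that is a rational combination of the others,
# until the remaining vectors are independent (at most n deletion steps)
vs = [(1, 2, 3), (2, 4, 6), (0, 1, 1), (1, 3, 4)]  # v2 = 2 v1, v4 = v1 + v3
ws = list(vs)
i = 0
while i < len(ws):
    rest = ws[:i] + ws[i+1:]
    if rank(rest) == rank(ws):  # ws[i] lies in the span of the others
        ws = rest               # delete it; the rank k drops by one
        i = 0
    else:
        i += 1

assert rank(ws) == len(ws)      # the w's are linearly independent
assert rank(ws) == rank(vs)     # and they still generate all the v's
assert set(ws) <= set(vs)       # indeed a subset of the original vectors
```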

In additive combinatorics, one often wants to use results like this in finitary settings, such as that of a cyclic group {{\mathbb Z}/p{\mathbb Z}} where {p} is a large prime. Now, technically speaking, {{\mathbb Z}/p{\mathbb Z}} is not a vector space over {{\mathbb Q}}, because one can only multiply an element of {{\mathbb Z}/p{\mathbb Z}} by a rational number if the denominator of that rational is not divisible by {p}. But for {p} very large, {{\mathbb Z}/p{\mathbb Z}} “behaves” like a vector space over {{\mathbb Q}}, at least if one restricts attention to the rationals of “bounded height” – where the numerator and denominator of the rationals are bounded. Thus we shall refer to elements of {{\mathbb Z}/p{\mathbb Z}} as “vectors” over {{\mathbb Q}}, even though strictly speaking this is not quite the case.

On the other hand, saying that one element of {{\mathbb Z}/p{\mathbb Z}} is a rational linear combination of another set of elements is not a very interesting statement: any non-zero element of {{\mathbb Z}/p{\mathbb Z}} already generates the entire space! However, if one again restricts attention to rational linear combinations of bounded height, then things become interesting again. For instance, the vector {1} can generate elements such as {37} or {\frac{p-1}{2}} using rational linear combinations of bounded height, but will not be able to generate such elements of {{\mathbb Z}/p{\mathbb Z}} as {\lfloor\sqrt{p}\rfloor} without using rational numbers of unbounded height.

For similar reasons, the notion of linear independence over the rationals doesn’t initially look very interesting over {{\mathbb Z}/p{\mathbb Z}}: any two non-zero elements of {{\mathbb Z}/p{\mathbb Z}} are of course rationally dependent. But again, if one restricts attention to rational numbers of bounded height, then independence begins to emerge: for instance, {1} and {\lfloor\sqrt{p}\rfloor} are independent in this sense.
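
Both observations are easy to test by brute force. The following toy computation of mine (the prime {p} and the function min_height are my own choices, not from the post) finds the smallest height {H} for which a residue {t} can be written as {a b^{-1}} mod {p} with {|a|, b \leq H}:

```python
p = 10007  # a prime; floor(sqrt(p)) = 100

def min_height(t, p):
    """Smallest H with t = a * b^{-1} (mod p), |a| <= H and 1 <= b <= H."""
    for H in range(1, p):
        for b in range(1, H + 1):
            a = (t * b) % p
            # |a| for the representative of a in (-p/2, p/2]
            if min(a, p - a) <= H:
                return H

assert min_height(37, p) == 37            # 37 = 37/1: height bounded by 37
assert min_height((p - 1) // 2, p) == 2   # (p-1)/2 = -1/2: height 2
assert min_height(100, p) == 100          # floor(sqrt(p)) needs height ~ sqrt(p)
```

Here {37} and {-1/2 = (p-1)/2} have height bounded independently of {p}, while {\lfloor\sqrt{p}\rfloor = 100} requires height comparable to {\sqrt{p}}, illustrating the "unbounded height" phenomenon as {p} grows.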

Thus, it becomes natural to ask whether there is a “quantitative” analogue of Theorem 1, with non-trivial content in the case of “vector spaces over the bounded height rationals” such as {{\mathbb Z}/p{\mathbb Z}}, which asserts that given any bounded collection {v_1,\ldots,v_n} of elements, one can find another set {w_1,\ldots,w_k} which is linearly independent “over the rationals up to some height”, such that the {v_1,\ldots,v_n} can be generated by the {w_1,\ldots,w_k} “over the rationals up to some height”. Of course to make this rigorous, one needs to quantify the two heights here, the one giving the independence, and the one giving the generation. In order to be useful for applications, it turns out that one often needs the former height to be much larger than the latter; exponentially larger, for instance, is not an uncommon request. Fortunately, one can accomplish this, at the cost of making the height somewhat large:

Theorem 2 (Finite generation implies finite basis, finitary version) Let {n \geq 1} be an integer, and let {F: {\mathbb N} \rightarrow {\mathbb N}} be a function. Let {V} be an abelian group which admits a well-defined division operation by any natural number of size at most {C(F,n)} for some constant {C(F,n)} depending only on {F,n}; for instance one can take {V = {\mathbb Z}/p{\mathbb Z}} for {p} a prime larger than {C(F,n)}. Let {v_1,\ldots,v_n} be a finite collection of “vectors” in {V}. Then there exists a collection {w_1,\ldots,w_k} of vectors in {V}, with {1 \leq k \leq n}, as well as an integer {M \geq 1}, such that

  • (Complexity bound) {M \leq C(F,n)} for some {C(F,n)} depending only on {F, n}.
  • ({w} generates {v}) Every {v_j} can be expressed as a rational linear combination of the {w_1,\ldots,w_k} of height at most {M} (i.e. the numerator and denominator of the coefficients are at most {M}).
  • ({w} independent) There is no non-trivial linear relation {a_1 w_1 + \ldots + a_k w_k = 0} among the {w_1,\ldots,w_k} in which the {a_1,\ldots,a_k} are rational numbers of height at most {F(M)}.

In fact, one can take {w_1,\ldots,w_k} to be a subset of the {v_1,\ldots,v_n}.

Proof: We perform the same “rank reduction argument” as before, but translated to the finitary setting. Start with {w_1,\ldots,w_k} initialised to {v_1,\ldots,v_n} (so initially we have {k=n}), and initialise {M=1}. Clearly {w} generates {v} at this height. If the {w_i} are linearly independent up to rationals of height {F(M)} then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the {w_i}, say {w_k}, is a rational linear combination of the {w_1,\ldots,w_{k-1}}, whose height is bounded by some function depending on {F(M)} and {k}. In such a case, {w_k} becomes redundant, and we may delete it (reducing the rank {k} by one); but note that in order for the remaining {w_1,\ldots,w_{k-1}} to generate {v_1,\ldots,v_n} we need to raise the height upper bound for the rationals involved from {M} to some quantity {M'} depending on {M, F(M), k}. We then replace {M} by {M'} and repeat the process. Since each step reduces the rank {k} by one, the procedure can only run for at most {n} steps, and so terminates with {w_1,\ldots,w_k} and {M} obeying all of the desired properties. (Note that the bound on {M} is quite poor, being essentially an {n}-fold iteration of {F}! Thus, for instance, if {F} is exponential, then the bound on {M} is tower-exponential in nature.) \Box
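As a toy illustration of this finitary loop (my own sketch, not from the post), here is the rank reduction argument over {{\mathbb Z}/p{\mathbb Z}}, with integer coefficients standing in for bounded-height rationals, and deliberately crude bookkeeping for the height {M} in place of the careful bound {M'} in the proof:

```python
from itertools import product

def find_relation(ws, p, H):
    """Brute-force search for a non-trivial relation a_1 w_1 + ... + a_k w_k = 0
    (mod p) with all |a_i| <= H; returns the coefficients, or None."""
    for coeffs in product(range(-H, H + 1), repeat=len(ws)):
        if any(coeffs) and sum(c * w for c, w in zip(coeffs, ws)) % p == 0:
            return coeffs
    return None

def rank_reduce(vs, p, F):
    """Rank reduction loop: while a relation of height F(M) exists, delete one
    vector involved in it and raise the generation height bound M.  The height
    update is crude bookkeeping, not the precise M' of the theorem."""
    ws, M = list(vs), 1
    while True:
        rel = find_relation(ws, p, F(M))
        if rel is None:
            return ws, M          # ws is independent up to height F(M)
        i = max(j for j, c in enumerate(rel) if c != 0)
        del ws[i]                 # this vector is generated by the others
        M = M * F(M)              # generating the deleted vector costs height

# 3 + 5 - 8 = 0 is a low-height relation, so 8 gets deleted; 3 and 5 then
# admit no relation of height F(2) = 4, and the loop stops
ws, M = rank_reduce([3, 5, 8], 1009, lambda M: 2 * M)
print(ws, M)   # [3, 5] 2
```

The loop terminates after at most {n} deletions, mirroring the proof; the exponential blow-up of {M} under iteration of {F} is visible in the multiplicative update.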

(A variant of this type of approximate basis lemma was used in my paper with Van Vu on the singularity probability of random Bernoulli matrices.)

Looking at the statements and proofs of these two theorems it is clear that the two results are in some sense the “same” result, except that the latter has been made sufficiently quantitative that it is meaningful in such finitary settings as {{\mathbb Z}/p{\mathbb Z}}. In this note I will show how this equivalence can be made formal using the language of non-standard analysis. This is not a particularly deep (or new) observation, but it is perhaps the simplest example I know of that illustrates how nonstandard analysis can be used to transfer a quantifier-heavy finitary statement, such as Theorem 2, into a quantifier-light infinitary statement, such as Theorem 1, thus lessening the need to perform “epsilon management” duties, such as keeping track of unspecified growth functions such as {F}. This type of transference is discussed at length in this previous blog post of mine.

In this particular case, the amount of effort needed to set up the nonstandard machinery in order to deduce Theorem 2 from Theorem 1 is too great for this transference to be particularly worthwhile, especially given that Theorem 2 has such a short proof. However, when performing a particularly intricate argument in additive combinatorics, in which one is performing a number of “rank reduction arguments”, “energy increment arguments”, “regularity lemmas”, “structure theorems”, and so forth, the purely finitary approach can become bogged down with all the epsilon management one needs to do to organise all the parameters that are flying around. The nonstandard approach can efficiently hide a large number of these parameters from view, and it can then become worthwhile to invest in the nonstandard framework in order to clean up the rest of a lengthy argument. Furthermore, an advantage of moving up to the infinitary setting is that one can then deploy all the firepower of an existing well-developed infinitary theory of mathematics (in this particular case, this would be the theory of linear algebra) out of the box, whereas in the finitary setting one would have to painstakingly finitise each aspect of such a theory that one wished to use (imagine for instance trying to finitise the rank-nullity theorem for rationals of bounded height).

The nonstandard approach is very closely related to the use of compactness arguments, or of the technique of taking ultralimits and ultraproducts; indeed we will use an ultrafilter in order to create the nonstandard model in the first place.

I will also discuss two variants of both Theorem 1 and Theorem 2 which have actually shown up in my research. The first is that of the regularity lemma for polynomials over finite fields, which came up when studying the equidistribution of such polynomials (in this paper with Ben Green). The second comes up when one is dealing not with a single finite collection {v_1,\ldots,v_n} of vectors, but rather with a family {(v_{h,1},\ldots,v_{h,n})_{h \in H}} of such vectors, where {H} ranges over a large set; this gives rise to what we call the sunflower lemma, and came up in this recent paper of myself, Ben Green, and Tamar Ziegler.

This post is mostly concerned with nonstandard translations of the “rank reduction argument”. Nonstandard translations of the “energy increment argument” and “density increment argument” were briefly discussed in this recent post; I may return to this topic in more detail in a future post.

Read the rest of this entry »

Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem to the edge of the spectrum.)

In this paper, we establish the analogous result for the singular values of rectangular iid matrices {M = M_{n,p}}, or (equivalently) the eigenvalues of the associated covariance matrix {\frac{1}{n} M M^*}. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and that of covariance matrices; for instance, just as the former has asymptotic spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that the singular values of a rectangular matrix {M} are essentially the same thing as the eigenvalues of the augmented matrix

\displaystyle  \begin{pmatrix} 0 & M \\ M^* & 0\end{pmatrix}

after eliminating sign ambiguities and degeneracies. So one can view singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two diagonal blocks of that matrix have been zeroed out.

The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly the deterministic components of the theory, such as eigenvalue variation formulae) carries through without much difficulty. The one place where one has to spend a bit of time to check details is to ensure that the Erdos-Schlein-Yau delocalisation result (which asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in {\ell^\infty} norm as one could hope to get) is also true in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform).

As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distribution which matches one of these distributions up to four moments, which covers virtually all complex distributions with independent real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble.

Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now covering the complex Bernoulli ensemble.

In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This is done by rearranging the order of steps of the original argument carefully.

Assaf Naor and I have just uploaded to the arXiv our paper “Random Martingales and localization of maximal inequalities“, to be submitted shortly. This paper investigates the best constant {C_n} in generalisations of the classical Hardy-Littlewood maximal inequality

\displaystyle  \left| \left\{ x \in {\mathbb R}^n: \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\ dy > \lambda \right\} \right| \leq \frac{C_n}{\lambda} \|f\|_{L^1({\mathbb R}^n)}

for any {\lambda > 0} and any absolutely integrable {f: {\mathbb R}^n \rightarrow {\mathbb R}}, where {B(x,r)} is the Euclidean ball of radius {r} centred at {x}, and {|E|} denotes the Lebesgue measure of a subset {E} of {{\mathbb R}^n}. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to Calderón-Zygmund theory. A similar inequality in fact holds with the Euclidean norm replaced by any other convex norm on {{\mathbb R}^n}.
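As a toy illustration of the weak-type {(1,1)} inequality under discussion (my own one-dimensional discrete sketch, with a deliberately safe and non-sharp constant, not an estimate from the paper):

```python
import numpy as np

def maximal(f):
    """Discrete centred maximal function on a 1D grid (unit spacing):
    sup over radii of the average of |f| on the window around each point,
    with windows clipped at the boundary."""
    f, n = np.abs(f), len(f)
    Mf = np.zeros(n)
    for i in range(n):
        for r in range(n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            Mf[i] = max(Mf[i], f[lo:hi].mean())
    return Mf

rng = np.random.default_rng(1)
f = rng.exponential(size=200)
Mf = maximal(f)
l1 = f.sum()   # discrete L^1 norm

# weak (1,1): |{Mf > lambda}| <= (C/lambda) ||f||_1  (C = 3 is safe, not sharp)
for lam in [0.5, 1.0, 2.0]:
    assert (Mf > lam).sum() <= 3 * l1 / lam
```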

The exact value of the constant {C_n} is only known for {n=1}, with a remarkable result of Melas establishing that {C_1 = \frac{11+\sqrt{61}}{12}}. Classical covering lemma arguments give the exponential upper bound {C_n \leq 2^n} when properly optimised (a direct application of the Vitali covering lemma gives {C_n \leq 5^n}, but one can reduce {5} to {2} by being careful). In an important paper of Stein and Strömberg, the improved bound {C_n = O( n \log n )} was obtained for any convex norm by a more intricate covering lemma argument, and the slight improvement {C_n = O(n)} was obtained in the Euclidean case by another argument, more adapted to the Euclidean setting, that relied on heat kernels. In the other direction, a recent result of Aldaz shows that {C_n \rightarrow \infty} in the case of the {\ell^\infty} norm, and in fact in an even more recent preprint of Aubrun, the lower bound {C_n \gg_\epsilon \log^{1-\epsilon} n} for any {\epsilon > 0} has been obtained in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that {C_n} is in fact uniformly bounded in this case.

Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound {C_n = O(n \log n)} is extremely general, applying to a wide class of metric measure spaces obeying a certain “microdoubling condition at dimension {n}“; and conversely, at this level of generality, it is essentially the best estimate possible, even with additional metric measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general localisation principle, which roughly speaking asserts that in order to prove a maximal inequality over all scales {r \in (0,+\infty)}, it suffices to prove such an inequality in a smaller range {r \in [R, nR]} uniformly in {R>0}. It is this localisation which ultimately explains the significance of the {n \log n} growth in the Stein-Strömberg result (there are {O(n \log n)} essentially distinct scales in any range {[R,nR]}). It also shows that if one restricts the radii {r} to a lacunary range (such as powers of {2}), the best constant improves to {O(\log n)}; if one restricts the radii to an even sparser range such as powers of {n}, the best constant becomes {O(1)}.
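The scale count behind the {n \log n} growth is a one-line computation (my own check, not from the paper): the number of scales in {[R, nR]} separated by ratios of {1+1/n} is {\log n / \log(1+1/n)}, which is {\sim n \log n} for large {n}:

```python
import math

# scales (1+1/n)^j covering [R, nR]: j runs up to log(n)/log(1+1/n) ~ n log n
for n in [10, 100, 1000]:
    scales = math.log(n) / math.log(1 + 1 / n)
    assert abs(scales - n * math.log(n)) <= 0.1 * n * math.log(n)
    print(n, round(scales))
```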

Read the rest of this entry »

This is an adaptation of a talk I gave recently for a program at IPAM. In this talk, I gave a (very informal and non-rigorous) overview of Hrushovski’s use of model-theoretic techniques to establish new Freiman-type theorems in non-commutative groups, and some recent work in progress of Ben Green, Tom Sanders and myself to establish combinatorial proofs of some of Hrushovski’s results.

Read the rest of this entry »

This is the last reading seminar of this quarter for the Hrushovski paper. Anush Tserunyan continued working through her notes on stable theories. We introduced the key notion of non-forking extensions (in the context of stable theories, at least) of types when constants are added; these are extensions which are “as generic as possible” with respect to the constants being added. The existence of non-forking extensions can be used for instance to generate Morley sequences – sequences of indiscernibles which are “in general position” in some sense.

Read the rest of this entry »

Starting in the winter quarter (Monday Jan 4, to be precise), I will be giving a graduate course on random matrices, with lecture notes to be posted on this blog. The topics I have in mind are somewhat fluid, but my initial plan is to cover a large fraction of the following:

  • Central limit theorem, random walks, concentration of measure
  • The semicircular and Marcenko-Pastur laws for bulk distribution
  • A little bit on the connections with free probability
  • The spectral distribution of GUE and gaussian random matrices; theory of determinantal processes
  • A little bit on the connections with orthogonal polynomials and Riemann-Hilbert problems
  • Singularity probability and the least singular value; connections with the Littlewood-Offord problem
  • The circular law
  • Universality for eigenvalue spacing; Erdos-Schlein-Yau delocalisation of eigenvectors and applications

If time permits, I may also cover

  • The Tracy-Widom law
  • Connections with Dyson Brownian motion and the Ornstein-Uhlenbeck process; the Erdos-Schlein-Yau approach to eigenvalue spacing universality
  • Conjectural connections with zeroes of the Riemann zeta function

Depending on how the course progresses, I may also continue it into the spring quarter (or else have a spring graduate course on a different topic – one potential topic I have in mind is dynamics on nilmanifolds and applications to combinatorics).
