You are currently browsing the monthly archive for December 2009.
Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting. The short answer is “yes for small averages, but not for long ones”.
The Furstenberg multiple recurrence theorem can be phrased as follows: if $(X, {\mathcal X}, \mu)$ is a probability space with a measure-preserving shift $T: X \to X$ (which naturally induces an isomorphism $T^*: L^\infty(X) \to L^\infty(X)$ by setting $T^* a := a \circ T^{-1}$), $a \in L^\infty(X)$ is non-negative with positive trace $\int_X a\ d\mu > 0$, and $k \geq 1$ is an integer, then one has
$$\liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^N \int_X a\, (T^n)^* a \cdots (T^{(k-1)n})^* a\ d\mu > 0.$$
In particular, $\int_X a\, (T^n)^* a \cdots (T^{(k-1)n})^* a\ d\mu > 0$ for all $n$ in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.
The Host-Kra multiple convergence theorem makes the related assertion that if $a_1, \ldots, a_{k-1} \in L^\infty(X)$, then the scalar averages
$$\frac{1}{N} \sum_{n=1}^N \int_X (T^n)^* a_1 \cdots (T^{(k-1)n})^* a_{k-1}\ d\mu$$
converge to a limit as $N \to \infty$; a fortiori, the function averages
$$\frac{1}{N} \sum_{n=1}^N (T^n)^* a_1 \cdots (T^{(k-1)n})^* a_{k-1}$$
converge in (say) $L^2(X)$ norm.
The space $L^\infty(X)$ is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space $H$ which is closed in the weak operator topology, and under taking adjoints. Indeed, one can take $H$ to be $L^2(X)$, and identify each element $m$ of $L^\infty(X)$ with the multiplier operator $a \mapsto ma$. The operation $\tau(a) := \int_X a\ d\mu$ is then a finite trace for this algebra, i.e. a linear map from the algebra to the scalars ${\mathbb C}$ such that $\tau(ab) = \tau(ba)$, $\tau(1) = 1$, and $\tau(a^* a) \geq 0$, with equality iff $a = 0$. The shift $T^*$ is then an automorphism of this algebra (preserving the trace $\tau$ and conjugation).
We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system $({\mathcal M}, \tau, T)$ to be a von Neumann algebra ${\mathcal M}$ with a finite trace $\tau$ and an automorphism $T: {\mathcal M} \to {\mathcal M}$. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:
- (Matrices) ${\mathcal M} = M_n({\mathbb C})$ is the algebra of $n \times n$ complex matrices, with trace $\tau(a) = \frac{1}{n} \hbox{tr}(a)$ and shift $Ta := UaU^{-1}$, where $U$ is a fixed unitary $n \times n$ matrix.
- (Group algebras) ${\mathcal M}$ is the closure of the group algebra ${\mathbb C} G$ of a discrete group $G$ (i.e. the algebra of finite formal complex combinations of group elements), which acts on the Hilbert space $\ell^2(G)$ by convolution (identifying each group element with its Kronecker delta function). A trace is given by $\tau(a) := \langle a \delta_e, \delta_e \rangle_{\ell^2(G)}$, where $\delta_e$ is the Kronecker delta at the identity. Any automorphism of the group induces a shift of the algebra.
- (Noncommutative torus) ${\mathcal M}$ is the von Neumann algebra acting on $L^2({\mathbb R}/{\mathbb Z})$ generated by the multiplier operator $f(x) \mapsto e^{2\pi i x} f(x)$ and the shifted multiplier operator $f(x) \mapsto f(x - \alpha)$, where $\alpha \in {\mathbb R}$ is fixed. A trace is given by $\tau(a) := \langle a 1, 1 \rangle_{L^2({\mathbb R}/{\mathbb Z})}$, where $1$ is the constant function.
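As a quick sanity check on the matrix example, the following Python snippet (my own toy illustration, with a $2 \times 2$ unitary that is not taken from the paper) verifies the trace axioms and the fact that conjugation by a unitary preserves the trace:

```python
def matmul(a, b):
    # product of two 2x2 complex matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def adj(a):
    # adjoint (conjugate transpose)
    return [[complex(a[j][i]).conjugate() for j in range(2)] for i in range(2)]

def tau(a):
    # normalised trace tau(a) = tr(a)/n with n = 2
    return (a[0][0] + a[1][1]) / 2

U = [[0, -1], [1, 0]]      # a fixed unitary (rotation by 90 degrees)
Uinv = [[0, 1], [-1, 0]]   # its inverse

def shift(a):
    # the automorphism T a := U a U^{-1}
    return matmul(matmul(U, a), Uinv)

a = [[1 + 1j, 2], [0, 3 - 1j]]
b = [[0, 1], [5, 2]]
```

Here `tau(matmul(a, b)) == tau(matmul(b, a))` (the trace property), `tau` of the identity is $1$, `tau(adj(a) @ a)` is non-negative, and `tau(shift(a)) == tau(a)` (the shift preserves the trace).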
Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed $k \geq 1$ and for a fixed von Neumann dynamical system $({\mathcal M}, \tau, T)$:
- (Recurrence on average) Whenever $a \in {\mathcal M}$ is non-negative with positive trace, is it true that
$$\liminf_{N \to \infty} \frac{1}{N} \sum_{n=1}^N \tau( a\, (T^n a) \cdots (T^{(k-1)n} a) ) > 0?$$
- (Recurrence on a dense set) Whenever $a \in {\mathcal M}$ is non-negative with positive trace, is it true that
$$\tau( a\, (T^n a) \cdots (T^{(k-1)n} a) ) > 0$$
for all $n$ in a set of positive upper density?
- (Weak convergence) With $a \in {\mathcal M}$, is it true that
$$\frac{1}{N} \sum_{n=1}^N (T^n a) \cdots (T^{(k-1)n} a)$$
converges weakly?
- (Strong convergence) With $a \in {\mathcal M}$, is it true that
$$\frac{1}{N} \sum_{n=1}^N (T^n a) \cdots (T^{(k-1)n} a)$$
converges in $L^2({\mathcal M}, \tau)$ using the Hilbert-Schmidt norm $\|a\|_{L^2} := \tau(a^* a)^{1/2}$?
Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.
For $k = 1$, all four questions can trivially be answered “yes”. For $k = 2$, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators. For $k = 3$, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” results assuming that the system is ergodic. For general $k$, we have a positive answer to all four questions under the assumption that the system is asymptotically abelian, which roughly speaking means that the commutators $[a, T^n b]$ converge to zero (in an appropriate weak sense) as $n \to \infty$. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido, Duvenhage, and Beyers-Duvenhage-Stroh. For the $k = 3$ result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the “generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.
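For reference, here is one commonly used Hilbert-space formulation of the van der Corput lemma (a standard statement, included for the reader's orientation rather than quoted from the paper): if $(x_n)_{n \geq 1}$ is a bounded sequence in a Hilbert space with

```latex
\lim_{H \to \infty} \frac{1}{H} \sum_{h=1}^{H}
  \limsup_{N \to \infty}
  \left| \frac{1}{N} \sum_{n=1}^{N} \langle x_{n+h}, x_n \rangle \right| = 0,
```

then the averages $\frac{1}{N} \sum_{n=1}^N x_n$ converge to zero in norm. Note that the hypothesis and conclusion only involve inner products of the $x_n$, which is why no commutativity is needed at this level.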
In most other situations we have counterexamples to all of these questions. In particular:
- For $k = 3$, recurrence on average can fail on an ergodic system; indeed, one can even make the average negative. This example is ultimately based on a Behrend example construction and a von Neumann algebra construction known as the crossed product.
- For $k = 3$, recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
- For $k = 4$, weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
- For $k \geq 5$, recurrence on a dense set fails even with the ergodicity hypothesis. This uses a fancier version of the Behrend example due to Ruzsa in this paper of Bergelson, Host, and Kra. This example only applies for $k \geq 5$; we do not know for $k = 4$ whether recurrence on a dense set holds for ergodic systems.
This will be a more frivolous post than usual, in part due to the holiday season.
I recently happened across the following video, which exploits a simple rhetorical trick that I had not seen before:
If nothing else, it’s a convincing (albeit unsubtle) demonstration that the English language is non-commutative (or perhaps non-associative); a linguistic analogue of the swindle, if you will.
Of course, the trick relies heavily on sentence fragments that negate or compare; I wonder if it is possible to achieve a comparable effect without using such fragments.
A related trick which I have seen (though I cannot recall any explicit examples right now; perhaps some readers know of some?) is to set up the verses of a song so that the last verse is identical to the first, but now has a completely distinct meaning (e.g. an ironic interpretation rather than a literal one) due to the context of the preceding verses. The ultimate challenge would be to set up a Möbius song, in which each iteration of the song completely reverses the meaning of the next iterate (cf. this xkcd strip), but this may be beyond the capability of the English language.
On a related note: when I was a graduate student in Princeton, I recall John Conway (and another author whose name I forget) producing another light-hearted demonstration that the English language was highly non-commutative, by showing that if one takes the free group with 26 generators and quotients out by all relations given by anagrams (e.g. identifying the words “stop” and “pots”), then the resulting group was commutative. Unfortunately I was not able to locate this recreational mathematics paper of Conway (which also treated the French language, if I recall correctly); perhaps one of the readers knows of it?
In a multiplicative group $G$, the commutator of two group elements $g, h$ is defined as $[g,h] := g^{-1} h^{-1} g h$ (other conventions are also in use, though they are largely equivalent for the purposes of this discussion). A group is said to be nilpotent of step $s$ (or more precisely, step $\leq s$), if all iterated commutators of order $s+1$ or higher necessarily vanish. For instance, a group is nilpotent of step $1$ if and only if it is abelian, and it is nilpotent of step $2$ if and only if $[[g,h],k] = \hbox{id}$ for all $g, h, k$ (i.e. all commutator elements $[g,h]$ are central), and so forth. A good example of an $s$-step nilpotent group is the group of $(s+1) \times (s+1)$ upper-triangular unipotent matrices (i.e. matrices with $1$s on the diagonal and zero below the diagonal), taking values in some ring (e.g. reals, integers, complex numbers, etc.).
Another important example of a nilpotent group arises from operations on polynomials. For instance, if $V_{\leq d}$ is the vector space of real polynomials of one variable of degree at most $d$, then there are two natural affine actions on $V_{\leq d}$. Firstly, every polynomial $q$ in $V_{\leq d}$ gives rise to a “vertical” shift $p \mapsto p + q$. Secondly, every $s \in {\mathbb R}$ gives rise to a “horizontal” shift $p \mapsto p(\cdot + s)$. The group generated by these two types of shifts is a nilpotent group of step $d+1$; this reflects the well-known fact that a polynomial of degree $d$ vanishes once one differentiates more than $d$ times. Because of this link between nilpotency and polynomials, one can view nilpotent algebra as a generalisation of polynomial algebra.
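To make the polynomial example concrete, here is a short Python sketch (my own illustration, not from the post) for degree $d = 2$: each commutator with the horizontal shift lowers the degree of the vertical shift by one, a commutator of order $d+1$ is a nonzero constant shift, and commutators of order $d+2$ vanish, consistent with the group being nilpotent of step $d+1$.

```python
from math import comb

# group elements are pairs (s, q) acting on polynomials by p(x) -> p(x+s) + q(x);
# polynomials are integer coefficient lists [a0, a1, a2] (degree at most d = 2)

def shift_poly(q, s):
    # q(x) -> q(x+s), via binomial expansion
    out = [0] * len(q)
    for k, a in enumerate(q):
        for j in range(k + 1):
            out[j] += a * comb(k, j) * s ** (k - j)
    return out

def add(q, r): return [a + b for a, b in zip(q, r)]
def neg(q): return [-a for a in q]

def compose(g1, g2):  # g2 acts first, then g1
    return (g1[0] + g2[0], add(shift_poly(g2[1], g1[0]), g1[1]))

def inv(g):
    return (-g[0], neg(shift_poly(g[1], -g[0])))

def comm(g, h):  # [g,h] = g^{-1} h^{-1} g h
    return (compose(inv(g), compose(inv(h), compose(g, h))))

H = (1, [0, 0, 0])   # horizontal shift p(x) -> p(x+1)
V = (0, [0, 0, 1])   # vertical shift by x^2

c2 = comm(V, H)      # order-2 commutator: vertical shift of degree 1
c3 = comm(c2, H)     # order-3 commutator: a nonzero constant shift
```

Further commutators of `c3` with either generator give the identity `(0, [0, 0, 0])`.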
Suppose one has a finite number $g_1, \ldots, g_r$ of generators. Using abstract algebra, one can then construct the free nilpotent group of step $s$, defined as the group generated by the $g_1, \ldots, g_r$ subject to the relations that all commutators of order $s+1$ involving the generators are trivial. This is the universal object in the category of nilpotent groups of step $s$ with $r$ marked elements $g_1, \ldots, g_r$. In other words, given any other $s$-step nilpotent group $G$ with $r$ marked elements $h_1, \ldots, h_r$, there is a unique homomorphism from the free nilpotent group to $G$ that maps each $g_i$ to $h_i$ for $1 \leq i \leq r$. In particular, the free nilpotent group is well-defined up to isomorphism in this category.
In many applications, one wants to have a more concrete description of the free nilpotent group, so that one can perform computations more easily (and in particular, be able to tell when two words in the group are equal or not). This is easy for small values of $s$. For instance, when $s = 1$, the free nilpotent group is simply the free abelian group generated by $g_1, \ldots, g_r$, and so every element $g$ of this group can be described uniquely as
$$g = g_1^{n_1} \cdots g_r^{n_r} \ \ \ \ \ (1)$$
for some integers $n_1, \ldots, n_r$, with the obvious group law. Indeed, to obtain existence of this representation, one starts with any representation of $g$ in terms of the generators $g_1, \ldots, g_r$, and then uses the abelian property to push the $g_1$ factors to the far left, followed by the $g_2$ factors, and so forth. To show uniqueness, we observe that the group ${\mathbb Z}^r$ of formal abelian products $g_1^{n_1} \cdots g_r^{n_r}$ is already a $1$-step nilpotent group with marked elements $g_1, \ldots, g_r$, and so there must be a homomorphism from the free group to ${\mathbb Z}^r$. Since ${\mathbb Z}^r$ distinguishes all the products $g_1^{n_1} \cdots g_r^{n_r}$ from each other, the free group must also.
It is only slightly more tricky to describe the free nilpotent group of step $2$. Using the identities
$$gh = hg [g,h], \qquad g h^{-1} = h^{-1} g ([g,h]^{-1})^{h^{-1}}$$
(where $g^h := h^{-1} g h$ is the conjugate of $g$ by $h$) we see that whenever $1 \leq i < j \leq r$, one can push a positive or negative power of $g_i$ past a positive or negative power of $g_j$, at the cost of creating a positive or negative power of the commutator $[g_i, g_j]$, or one of its conjugates. Meanwhile, in a $2$-step nilpotent group, all the commutators are central, and one can pull all the commutators out of a word and collect them as in the abelian case. Doing all this, we see that every element $g$ of the free $2$-step nilpotent group has a representation of the form
$$g = g_1^{n_1} \cdots g_r^{n_r} \prod_{1 \leq i < j \leq r} [g_i, g_j]^{n_{ij}} \ \ \ \ \ (2)$$
for some integers $n_i$ for $1 \leq i \leq r$ and $n_{ij}$ for $1 \leq i < j \leq r$. Note that we don’t need to consider commutators $[g_i, g_j]$ for $i = j$ or $i > j$, since
$$[g_i, g_i] = \hbox{id}$$
and
$$[g_j, g_i] = [g_i, g_j]^{-1}.$$
It is possible to show also that this representation is unique, by repeating the previous argument, i.e. by showing that the set of formal products of the form (2) forms a $2$-step nilpotent group, after using the above rules to define the group operations. This can be done, but verifying the group axioms (particularly the associative law) for (2) is unpleasantly tedious.
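The step-$2$ normal form can be seen very concretely in the $3 \times 3$ unipotent matrix group (a quick Python check of my own): the commutator of the two generators is central, and a reversed product such as $g_2 g_1$ can be rewritten in the form (2).

```python
def mat(*rows): return [list(r) for r in rows]

def mul(A, B):
    # product of two 3x3 integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

x  = mat((1, 1, 0), (0, 1, 0), (0, 0, 1))   # g_1 = I + E12
y  = mat((1, 0, 0), (0, 1, 1), (0, 0, 1))   # g_2 = I + E23
xi = mat((1, -1, 0), (0, 1, 0), (0, 0, 1))  # g_1^{-1}
yi = mat((1, 0, 0), (0, 1, -1), (0, 0, 1))  # g_2^{-1}

z  = mul(mul(mul(xi, yi), x), y)            # [g_1, g_2] = g_1^{-1} g_2^{-1} g_1 g_2
zi = mat((1, 0, -1), (0, 1, 0), (0, 0, 1))  # its inverse, I - E13
```

One can then check that `z` equals $I + E_{13}$, commutes with both generators, and that `mul(y, x) == mul(mul(x, y), zi)`, i.e. $g_2 g_1 = g_1 g_2 [g_1,g_2]^{-1}$, which is the pushing step used above.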
Once one sees this, one rapidly loses an appetite for trying to obtain a similar explicit description for free nilpotent groups of higher step, especially once one starts seeing that higher commutators obey some non-obvious identities such as the Hall-Witt identity
$$[[g, h^{-1}], k]^h \cdot [[h, k^{-1}], g]^k \cdot [[k, g^{-1}], h]^g = \hbox{id} \ \ \ \ \ (3)$$
(a nonlinear version of the Jacobi identity in the theory of Lie algebras), which make one less certain as to the existence or uniqueness of various proposed generalisations of the representations (1) or (2). For instance, in the free $3$-step nilpotent group, it turns out that for representations of the form
$$g = g_1^{n_1} \cdots g_r^{n_r} \prod_{i < j} [g_i, g_j]^{n_{ij}} \prod_{i < j < k} [[g_i, g_j], g_k]^{n_{ijk}}$$
one has uniqueness but not existence (e.g. even in the simplest case $r = 2$, there is no place in this representation for, say, $[[g_1, g_2], g_1]$ or $[[g_1, g_2], g_2]$), but if one tries to insert more triple commutators into the representation to make up for this, one has to be careful not to lose uniqueness due to identities such as (3). One can paste these in by ad hoc means in the step $3$ case, but the step $4$ case looks more fearsome still, especially now that the quadruple commutators split into several distinct-looking species such as
$$[[[g_i, g_j], g_k], g_l] \hbox{ and } [[g_i, g_j], [g_k, g_l]]$$
which are nevertheless still related to each other by identities such as (3). While one can eventually disentangle this mess for any fixed $r$ and $s$ by a finite amount of combinatorial computation, it is not immediately obvious how to give an explicit description of the free nilpotent group uniformly in $r$ and $s$.
Nevertheless, it turns out that one can give a reasonably tractable description of this group if one takes a polycyclic perspective rather than a nilpotent one – i.e. one views the free nilpotent group as a tower of group extensions of the trivial group by the cyclic group ${\mathbb Z}$. This seems to be a fairly standard observation in group theory – I found it in this book of Magnus, Karrass, and Solitar, via this paper of Leibman – but it seems not to be so widely known outside of that field, so I wanted to record it here.
This is a technical post inspired by separate conversations with Jim Colliander and with Soonsik Kwon on the relationship between two techniques used to control non-radiating solutions to dispersive nonlinear equations, namely the “double Duhamel trick” and the “in/out decomposition”. See for instance these lecture notes of Killip and Visan for a survey of these two techniques and other related methods in the subject. (I should caution that this post is likely to be unintelligible to anyone not already working in this area.)
For sake of discussion we shall focus on solutions to a nonlinear Schrödinger equation
$$i u_t + \Delta u = F(u),$$
and we will not concern ourselves with the specific regularity of the solution $u$, or the specific properties of the nonlinearity $F$ here. We will also not address the issue of how to justify the formal computations being performed here.
Solutions to this equation enjoy the forward Duhamel formula
$$u(t) = e^{i(t-t_0)\Delta} u(t_0) - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (1)$$
for times $t$ to the future of $t_0$ in the lifespan of the solution, as well as the backward Duhamel formula
$$u(t) = e^{i(t-t_1)\Delta} u(t_1) + i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (2)$$
for all times $t$ to the past of $t_1$ in the lifespan of the solution. The first formula asserts that the solution at a given time is determined by the initial state and by the immediate past, while the second formula is the time reversal of the first, asserting that the solution at a given time is determined by the final state and the immediate future. These basic causal formulae are the foundation of the local theory of these equations, and in particular play an instrumental role in establishing local well-posedness for these equations. In this local theory, the main philosophy is to treat the homogeneous (or linear) term $e^{i(t-t_0)\Delta} u(t_0)$ or $e^{i(t-t_1)\Delta} u(t_1)$ as the main term, and the inhomogeneous (or nonlinear, or forcing) integral term as an error term.
The situation is reversed when one turns to the global theory, and looks at the asymptotic behaviour of a solution as one approaches a limiting time (which can be infinite if one has global existence, or finite if one has finite time blowup). After a suitable rescaling, the linear portion of the solution often disappears from view, leaving one with an asymptotic blowup profile solution which is non-radiating in the sense that the linear components of the Duhamel formulae vanish, thus
$$u(t) = - i \lim_{t_0 \to T_-} \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt'$$
and
$$u(t) = i \lim_{t_1 \to T_+} \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt',$$
where $T_-, T_+$ are the endpoint times of existence. (This type of situation comes up for instance in the Kenig-Merle approach to critical regularity problems, by reducing to a minimal blowup solution which is almost periodic modulo symmetries, and hence non-radiating.) These types of non-radiating solutions are propelled solely by their own nonlinear self-interactions from the immediate past or immediate future; they are generalisations of “nonlinear bound states” such as solitons.
A key task is then to somehow combine the forward representation (1) and the backward representation (2) to obtain new information on $u$ itself, that cannot be obtained from either representation alone; it seems that the immediate past and immediate future can collectively exert more control on the present than they each do separately. This type of problem can be abstracted as follows. Let $\|f\|_{X_+}$ be the infimal value of $\|F_+\|_N$ over all forward representations of $f$ of the form
$$f = \int_{-\infty}^0 e^{-it'\Delta} F_+(t')\ dt' \ \ \ \ \ (3)$$
where $N$ is some suitable spacetime norm (e.g. a Strichartz-type norm), and similarly let $\|f\|_{X_-}$ be the infimal value of $\|F_-\|_N$ over all backward representations of $f$ of the form
$$f = \int_0^{+\infty} e^{-it'\Delta} F_-(t')\ dt'. \ \ \ \ \ (4)$$
Typically, one already has (or is willing to assume as a bootstrap hypothesis) control on the nonlinearity in the norm $N$, which gives control of $u(t)$ in the norms $X_+$ and $X_-$. The task is then to use the control of both the $X_+$ and $X_-$ norm of $u(t)$ to gain control of $u(t)$ in a more conventional Hilbert space norm $H$, which is typically a Sobolev space such as $H^s$ or $L^2$.
One can use some classical functional analysis to clarify this situation. By the closed graph theorem, the above task is (morally, at least) equivalent to establishing an a priori bound of the form
$$\|f\|_H \lesssim \|f\|_{X_+} + \|f\|_{X_-} \ \ \ \ \ (5)$$
for all reasonable $f$ (e.g. test functions). The double Duhamel trick accomplishes this by establishing the stronger estimate
$$|\langle f, g \rangle_H| \lesssim \|f\|_{X_+} \|g\|_{X_-} \ \ \ \ \ (6)$$
for all reasonable $f, g$; note that setting $f = g$ and applying the arithmetic-geometric inequality then gives (5). The point is that if $f$ has a forward representation (3) and $g$ has a backward representation (4), then the inner product $\langle f, g \rangle_H$ can (formally, at least) be expanded as a double integral
$$\int_{-\infty}^0 \int_0^{+\infty} \langle e^{-it'\Delta} F_+(t'), e^{-it''\Delta} F_-(t'') \rangle_H\ dt''\ dt'.$$
The dispersive nature of the linear Schrödinger equation often causes the inner product $\langle e^{-it'\Delta} F_+(t'), e^{-it''\Delta} F_-(t'') \rangle_H$ to decay as $|t' - t''| \to \infty$, especially in high dimensions. In high enough dimension (typically one needs five or higher dimensions, unless one already has some spacetime control on the solution), the decay is stronger than $1/|t'-t''|^2$, so that the integrand becomes absolutely integrable and one recovers (6).
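The numerology here comes from the standard dispersive estimate for the free propagator (a textbook fact, recorded here as a heuristic with the normalisation of $F_\pm$ suppressed):

```latex
\| e^{it\Delta} \|_{L^1(\mathbb{R}^d) \to L^\infty(\mathbb{R}^d)} \lesssim |t|^{-d/2},
\qquad
\int_{-\infty}^0 \int_0^{+\infty} \frac{dt''\, dt'}{|t' - t''|^{d/2}} < \infty
\iff \frac{d}{2} > 2 \iff d \geq 5.
```

Thus a decay rate of $|t'-t''|^{-d/2}$ for the inner product is absolutely integrable over the past-future quadrant precisely when $d \geq 5$, which is the source of the "five or higher dimensions" restriction mentioned above.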
Unfortunately it appears that estimates of the form (6) fail in low dimensions (for the type of norms $N$ that actually show up in applications); there is just too much interaction between past and future to hope for any reasonable control of this inner product. But one can try to obtain (5) by other means. By the Hahn-Banach theorem (and ignoring various issues related to reflexivity), (5) is equivalent to the assertion that every $f \in H$ can be decomposed as $f = f_+ + f_-$, where $\|f_+\|_{X_+^*} \lesssim \|f\|_H$ and $\|f_-\|_{X_-^*} \lesssim \|f\|_H$. Indeed once one has such a decomposition, one obtains (5) by computing the inner product of $f$ with $f = f_+ + f_-$ in $H$ in two different ways. One can also (morally at least) write $\|f_+\|_{X_+^*}$ as the $N^*$ norm of the linear evolution $e^{it'\Delta} f_+$ on the past time interval $(-\infty, 0]$, and similarly write $\|f_-\|_{X_-^*}$ as the $N^*$ norm of $e^{it'\Delta} f_-$ on the future time interval $[0, +\infty)$.
So one can dualise the task of proving (5) as that of obtaining a decomposition of an arbitrary initial state $f$ into two components $f_+$ and $f_-$, where the former disperses into the past and the latter disperses into the future under the linear evolution. We do not know how to achieve this type of task efficiently in general – and doing so would likely lead to a significant advance in the subject (perhaps one of the main areas in this topic where serious harmonic analysis is likely to play a major role). But in the model case of spherically symmetric data $f$, one can perform such a decomposition quite easily: one uses microlocal projections to set $f_-$ to be the “inward” pointing component of $f$, which propagates towards the origin in the future and away from the origin in the past, and $f_+$ to similarly be the “outward” component of $f$. As spherical symmetry significantly dilutes the amplitude of the solution (and hence the strength of the nonlinearity) away from the origin, this decomposition tends to work quite well for applications, and is one of the main reasons (though not the only one) why we have a global theory for low-dimensional nonlinear Schrödinger equations in the radial case, but not in general.
The in/out decomposition is a linear one, but the Hahn-Banach argument gives no reason why the decomposition needs to be linear. (Note that other well-known decompositions in analysis, such as the Fefferman-Stein decomposition of BMO, are necessarily nonlinear, a fact which is ultimately equivalent to the non-complemented nature of a certain subspace of a Banach space; see these lecture notes of mine and this old blog post for some discussion.) So one could imagine a sophisticated nonlinear decomposition as a general substitute for the in/out decomposition. See for instance this paper of Bourgain and Brezis for some of the subtleties of decomposition even in very classical function spaces. Alternatively, there may well be a third way to obtain estimates of the form (5) that do not require either decomposition or the double Duhamel trick; such a method may well clarify the relative relationship between past, present, and future for critical nonlinear dispersive equations, which seems to be a key aspect of the theory that is still only partially understood. (In particular, it seems that one needs a fairly strong decoupling of the present from both the past and the future to get the sort of elliptic-like regularity results that allow us to make further progress with such equations.)
One of the most basic theorems in linear algebra is that every finite-dimensional vector space has a finite basis. Let us give a statement of this theorem in the case when the underlying field is the rationals:
Theorem 1 (Finite generation implies finite basis, infinitary version) Let $V$ be a vector space over the rationals ${\mathbb Q}$, and let $v_1, \ldots, v_d$ be a finite collection of vectors in $V$. Then there exists a collection $w_1, \ldots, w_{d'}$ of vectors in $V$, with $d' \leq d$, such that
- ($w$ generates $v$) Every $v_j$ can be expressed as a rational linear combination of the $w_1, \ldots, w_{d'}$.
- ($w$ independent) There is no non-trivial linear relation
$$a_1 w_1 + \ldots + a_{d'} w_{d'} = 0, \qquad a_1, \ldots, a_{d'} \in {\mathbb Q}$$
among the $w_1, \ldots, w_{d'}$ (where non-trivial means that the $a_i$ are not all zero).
In fact, one can take $w_1, \ldots, w_{d'}$ to be a subset of the $v_1, \ldots, v_d$.
Proof: We perform the following “rank reduction argument”. Start with $w_1, \ldots, w_{d'}$ initialised to $v_1, \ldots, v_d$ (so initially we have $d' = d$). Clearly $w$ generates $v$. If the $w_i$ are linearly independent then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the $w_i$, say $w_{d'}$, is a rational linear combination of the $w_1, \ldots, w_{d'-1}$. In such a case, $w_{d'}$ becomes redundant, and we may delete it (reducing the rank $d'$ by one). We repeat this procedure; it can only run for at most $d$ steps and so terminates with $w_1, \ldots, w_{d'}$ obeying both of the desired properties. $\Box$
In additive combinatorics, one often wants to use results like this in finitary settings, such as that of a cyclic group ${\mathbb Z}/p{\mathbb Z}$ where $p$ is a large prime. Now, technically speaking, ${\mathbb Z}/p{\mathbb Z}$ is not a vector space over ${\mathbb Q}$, because one can only multiply an element of ${\mathbb Z}/p{\mathbb Z}$ by a rational number if the denominator of that rational is not divisible by $p$. But for $p$ very large, ${\mathbb Z}/p{\mathbb Z}$ “behaves” like a vector space over ${\mathbb Q}$, at least if one restricts attention to the rationals of “bounded height” – where the numerator and denominator of the rationals are bounded. Thus we shall refer to elements of ${\mathbb Z}/p{\mathbb Z}$ as “vectors” over ${\mathbb Q}$, even though strictly speaking this is not quite the case.
On the other hand, saying that one element of ${\mathbb Z}/p{\mathbb Z}$ is a rational linear combination of another set of elements is not a very interesting statement: any non-zero element of ${\mathbb Z}/p{\mathbb Z}$ already generates the entire space! However, if one again restricts attention to rational linear combinations of bounded height, then things become interesting again. For instance, the vector $1$ can generate elements such as $2$ or $\frac{1}{2}$ (i.e. the residue $\frac{p+1}{2}$) using rational linear combinations of bounded height, but will not be able to generate a generic element of ${\mathbb Z}/p{\mathbb Z}$ without using rational numbers of unbounded height.
For similar reasons, the notion of linear independence over the rationals doesn’t initially look very interesting over ${\mathbb Z}/p{\mathbb Z}$: any two non-zero elements of ${\mathbb Z}/p{\mathbb Z}$ are of course rationally dependent. But again, if one restricts attention to rational numbers of bounded height, then independence begins to emerge: for instance, $1$ and a generic element of ${\mathbb Z}/p{\mathbb Z}$ are independent in this sense.
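A tiny numerical illustration (my own) of how a bounded-height rational combination can look like a “large” element of ${\mathbb Z}/p{\mathbb Z}$: with $p = 101$, the combination $\frac{1}{2} \cdot 1$, whose coefficient has height only $2$, is represented by the residue $51$:

```python
p = 101

# the bounded-height rational 1/2, interpreted in Z/pZ as the inverse of 2 mod p
half = pow(2, -1, p)   # three-argument pow computes modular inverses (Python 3.8+)

# sanity checks: 2 * (1/2) = 1 in Z/pZ, while the residue itself is "large"
print(half, (2 * half) % p)
```

So an element of small "height" as a rational combination of $1$ can nevertheless be a residue of size comparable to $p$, which is why height, rather than residue size, is the right notion of complexity here.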
Thus, it becomes natural to ask whether there is a “quantitative” analogue of Theorem 1, with non-trivial content in the case of “vector spaces over the bounded height rationals” such as ${\mathbb Z}/p{\mathbb Z}$, which asserts that given any bounded collection $v_1, \ldots, v_d$ of elements, one can find another set $w_1, \ldots, w_{d'}$ which is linearly independent “over the rationals up to some height”, such that the $v_1, \ldots, v_d$ can be generated by the $w_1, \ldots, w_{d'}$ “over the rationals up to some height”. Of course to make this rigorous, one needs to quantify the two heights here, the one giving the independence, and the one giving the generation. In order to be useful for applications, it turns out that one often needs the former height to be much larger than the latter; exponentially larger, for instance, is not an uncommon request. Fortunately, one can accomplish this, at the cost of making the height somewhat large:
Theorem 2 (Finite generation implies finite basis, finitary version) Let $d \geq 1$ be an integer, and let $F: {\mathbb N} \to {\mathbb N}$ be a function. Let $V$ be an abelian group which admits a well-defined division operation by any natural number of size at most $C(F,d)$ for some constant $C(F,d)$ depending only on $F, d$; for instance one can take $V = {\mathbb Z}/p{\mathbb Z}$ for $p$ a prime larger than $C(F,d)$. Let $v_1, \ldots, v_d$ be a finite collection of “vectors” in $V$. Then there exists a collection $w_1, \ldots, w_{d'}$ of vectors in $V$, with $d' \leq d$, as well an integer $M \geq 1$, such that
- (Complexity bound) $M \leq C(F,d)$ for some $C(F,d)$ depending only on $F, d$.
- ($w$ generates $v$) Every $v_j$ can be expressed as a rational linear combination of the $w_1, \ldots, w_{d'}$ of height at most $M$ (i.e. the numerator and denominator of the coefficients are at most $M$).
- ($w$ independent) There is no non-trivial linear relation
$$a_1 w_1 + \ldots + a_{d'} w_{d'} = 0$$
among the $w_1, \ldots, w_{d'}$ in which the $a_1, \ldots, a_{d'}$ are rational numbers of height at most $F(M)$.
In fact, one can take $w_1, \ldots, w_{d'}$ to be a subset of the $v_1, \ldots, v_d$.
Proof: We perform the same “rank reduction argument” as before, but translated to the finitary setting. Start with $w_1, \ldots, w_{d'}$ initialised to $v_1, \ldots, v_d$ (so initially we have $d' = d$), and initialise $M = 1$. Clearly $w$ generates $v$ at this height. If the $w_i$ are linearly independent up to rationals of height $F(M)$ then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the $w_i$, say $w_{d'}$, is a rational linear combination of the $w_1, \ldots, w_{d'-1}$, whose height is bounded by some function depending on $F(M)$ and $d'$. In such a case, $w_{d'}$ becomes redundant, and we may delete it (reducing the rank $d'$ by one), but note that in order for the remaining $w_1, \ldots, w_{d'-1}$ to generate the $v_1, \ldots, v_d$ we need to raise the height upper bound for the rationals involved from $M$ to some quantity $M'$ depending on $M$, $F(M)$, and $d'$. We then replace $M$ by $M'$ and continue the process. We repeat this procedure; it can only run for at most $d$ steps and so terminates with $w_1, \ldots, w_{d'}$ and $M$ obeying all of the desired properties. (Note that the bound on $M$ is quite poor, being essentially a $d$-fold iteration of $F$! Thus, for instance, if $F$ is exponential, then the bound on $M$ is tower-exponential in nature.) $\Box$
(A variant of this type of approximate basis lemma was used in my paper with Van Vu on the singularity probability of random Bernoulli matrices.)
Looking at the statements and proofs of these two theorems it is clear that the two results are in some sense the “same” result, except that the latter has been made sufficiently quantitative that it is meaningful in such finitary settings as ${\mathbb Z}/p{\mathbb Z}$. In this note I will show how this equivalence can be made formal using the language of non-standard analysis. This is not a particularly deep (or new) observation, but it is perhaps the simplest example I know of that illustrates how nonstandard analysis can be used to transfer a quantifier-heavy finitary statement, such as Theorem 2, into a quantifier-light infinitary statement, such as Theorem 1, thus lessening the need to perform “epsilon management” duties, such as keeping track of unspecified growth functions such as $F$. This type of transference is discussed at length in this previous blog post of mine.
In this particular case, the amount of effort needed to set up the nonstandard machinery in order to deduce Theorem 2 from Theorem 1 is too great for this transference to be particularly worthwhile, especially given that Theorem 2 has such a short proof. However, when performing a particularly intricate argument in additive combinatorics, in which one is performing a number of “rank reduction arguments”, “energy increment arguments”, “regularity lemmas”, “structure theorems”, and so forth, the purely finitary approach can become bogged down with all the epsilon management one needs to do to organise all the parameters that are flying around. The nonstandard approach can efficiently hide a large number of these parameters from view, and it can then become worthwhile to invest in the nonstandard framework in order to clean up the rest of a lengthy argument. Furthermore, an advantage of moving up to the infinitary setting is that one can then deploy all the firepower of an existing well-developed infinitary theory of mathematics (in this particular case, this would be the theory of linear algebra) out of the box, whereas in the finitary setting one would have to painstakingly finitise each aspect of such a theory that one wished to use (imagine for instance trying to finitise the rank-nullity theorem for rationals of bounded height).
The nonstandard approach is very closely related to use of compactness arguments, or of the technique of taking ultralimits and ultraproducts; indeed we will use an ultrafilter in order to create the nonstandard model in the first place.
I will also discuss two variants of both Theorem 1 and Theorem 2 which have actually shown up in my research. The first is that of the regularity lemma for polynomials over finite fields, which came up when studying the equidistribution of such polynomials (in this paper with Ben Green). The second comes up when one is dealing not with a single finite collection of vectors, but rather with a family of such collections, parametrised by a variable ranging over a large set; this gives rise to what we call the sunflower lemma, and came up in this recent paper of myself, Ben Green, and Tamar Ziegler.
This post is mostly concerned with nonstandard translations of the “rank reduction argument”. Nonstandard translations of the “energy increment argument” and “density increment argument” were briefly discussed in this recent post; I may return to this topic in more detail in a future post.
Van Vu and I have just uploaded to the arXiv our paper “Random covariance matrices: Universality of local statistics of eigenvalues“, to be submitted shortly. This paper draws heavily on the technology of our previous paper, in which we established a Four Moment Theorem for the local spacing statistics of eigenvalues of Wigner matrices. This theorem says, roughly speaking, that these statistics are completely determined by the first four moments of the coefficients of such matrices, at least in the bulk of the spectrum. (In a subsequent paper we extended the Four Moment Theorem to the edge of the spectrum.)
In this paper, we establish the analogous result for the singular values of rectangular iid matrices $M$, or (equivalently) the eigenvalues of the associated covariance matrix $M M^*$. As is well-known, there is a parallel theory between the spectral theory of random Wigner matrices and those of covariance matrices; for instance, just as the former has asymptotic spectral distribution governed by the semi-circular law, the latter has asymptotic spectral distribution governed by the Marcenko-Pastur law. One reason for the connection can be seen by noting that the singular values of a rectangular matrix $M$ are essentially the same thing as the eigenvalues of the augmented matrix
$$\begin{pmatrix} 0 & M \\ M^* & 0 \end{pmatrix}$$
after eliminating sign ambiguities and degeneracies. So one can view the singular values of a rectangular iid matrix as the eigenvalues of a matrix which resembles a Wigner matrix, except that two diagonal blocks of that matrix have been zeroed out.
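As a toy illustration (my own) of the augmented-matrix identity: the $1 \times 2$ matrix $M = (3\ 4)$ has the single singular value $5$ (since $M M^T = (25)$), and $\pm 5$ together with $0$ are precisely the eigenvalues of the $3 \times 3$ augmented matrix, as a hand-rolled determinant check confirms:

```python
def det3(A):
    # 3x3 determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# augmented matrix [[0, M], [M^T, 0]] for M = [3, 4]
A = [[0, 0, 3],
     [0, 0, 4],
     [3, 4, 0]]

def char_poly_at(lam):
    # determinant of A - lam * I
    return det3([[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)])
```

The characteristic polynomial vanishes exactly at $\lambda = 5, -5, 0$: the nonzero eigenvalues come in $\pm$ pairs (the "sign ambiguities"), and the extra zero eigenvalue is the "degeneracy" coming from the rectangular shape.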
The zeroing out of these elements prevents one from applying the entire Wigner universality theory directly to the covariance matrix setting (in particular, the crucial Talagrand concentration inequality for the magnitude of a projection of a random vector to a subspace does not work perfectly once there are many zero coefficients). Nevertheless, a large part of the theory (particularly the deterministic components of the theory, such as eigenvalue variation formulae) carries through without much difficulty. The one place where one has to spend a bit of time to check details is to ensure that the Erdos-Schlein-Yau delocalisation result (which asserts, roughly speaking, that the eigenvectors of a Wigner matrix are about as small in norm as one could hope to get) is also true in the covariance matrix setting, but this is a straightforward (though somewhat tedious) adaptation of the method (which is based on the Stieltjes transform).
As an application, we extend the sine kernel distribution of local covariance matrix statistics, first established in the case of Wishart ensembles (when the underlying variables are gaussian) by Nagao and Wadati, and later extended to gaussian-divisible matrices by Ben Arous and Peche, to any distribution which matches one of these distributions up to four moments; this covers virtually all complex distributions with iid real and imaginary parts, with basically the lone exception of the complex Bernoulli ensemble.
Recently, Erdos, Schlein, Yau, and Yin generalised their local relaxation flow method to also obtain similar universality results for distributions which have a large amount of smoothness, but without any matching moment conditions. By combining their techniques with ours as in our joint paper, one should probably be able to remove both smoothness and moment conditions, in particular now covering the complex Bernoulli ensemble.
In this paper we also record a new observation that the exponential decay hypothesis in our earlier paper can be relaxed to a finite moment condition, for a sufficiently high (but fixed) moment. This is done by rearranging the order of steps of the original argument carefully.
Assaf Naor and I have just uploaded to the arXiv our paper “Random Martingales and localization of maximal inequalities”, to be submitted shortly. This paper investigates the best constant $C_d$ in generalisations of the classical Hardy-Littlewood maximal inequality
$$ \left|\left\{ x \in {\bf R}^d : Mf(x) \ge \lambda \right\}\right| \le \frac{C_d}{\lambda} \|f\|_{L^1({\bf R}^d)} \quad \hbox{for all } \lambda > 0$$
for any absolutely integrable $f: {\bf R}^d \to {\bf C}$, where
$$ Mf(x) := \sup_{r > 0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\, dy$$
is the Hardy-Littlewood maximal function, $B(x,r)$ is the Euclidean ball of radius $r$ centred at $x$, and $|E|$ denotes the Lebesgue measure of a subset $E$ of ${\bf R}^d$. This inequality is fundamental to a large part of real-variable harmonic analysis, and in particular to Calderón-Zygmund theory. A similar inequality in fact holds with the Euclidean norm replaced by any other convex norm on ${\bf R}^d$.
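For intuition, here is a crude numerical sketch of the inequality in dimension $d = 1$ (entirely my own illustration; the grid, test function, and tolerances are arbitrary choices, and the value of $C_1$ used is the sharp constant $(11+\sqrt{61})/12$ due to Melas):

```python
import numpy as np

# Discretised centred maximal function on a grid over [0, 1], and an
# empirical check of the weak (1,1) bound |{Mf >= lam}| <= (C_1/lam) ||f||_1.
h = 1e-3                              # grid spacing (arbitrary)
x = np.arange(0.0, 1.0, h)
f = np.abs(np.sin(8 * np.pi * x))     # arbitrary nonnegative test function

def maximal(f, h):
    """Mf at grid points: sup over radii r = k*h of the average of |f|
    on the window [x - r, x + r] (clipped to the grid)."""
    n = len(f)
    Mf = np.abs(f).copy()             # r -> 0 limit at a grid point
    csum = np.concatenate([[0.0], np.cumsum(np.abs(f))]) * h
    for k in range(1, n):             # radius r = k * h
        lo = np.maximum(np.arange(n) - k, 0)
        hi = np.minimum(np.arange(n) + k + 1, n)
        avg = (csum[hi] - csum[lo]) / ((hi - lo) * h)
        Mf = np.maximum(Mf, avg)
    return Mf

Mf = maximal(f, h)
C1 = (11 + np.sqrt(61)) / 12          # Melas's sharp constant in d = 1
l1 = np.sum(np.abs(f)) * h            # discrete ||f||_1
for lam in (0.5, 0.8, 1.0):
    measure = np.sum(Mf >= lam) * h   # discrete |{ Mf >= lam }|
    assert measure <= C1 / lam * l1 + 0.05   # small slack for discretisation
```

Of course a discrete check proves nothing, but it makes the shape of the inequality (superlevel-set measure against $\|f\|_{L^1}/\lambda$) tangible.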
The exact value of the constant $C_d$ is only known in $d = 1$, with a remarkable result of Melas establishing that $C_1 = \frac{11+\sqrt{61}}{12}$. Classical covering lemma arguments give the exponential upper bound $C_d \le 2^d$ when properly optimised (a direct application of the Vitali covering lemma gives $C_d \le 5^d$, but one can reduce $5^d$ to $3^d$ by being careful). In an important paper of Stein and Strömberg, the improved bound $C_d = O(d \log d)$ was obtained for any convex norm by a more intricate covering lemma argument, and the slight improvement $C_d = O(d)$ obtained in the Euclidean case by another argument more adapted to the Euclidean setting that relied on heat kernels. In the other direction, a recent result of Aldaz shows that $C_d \to \infty$ as $d \to \infty$ in the case of the $\ell^\infty$ norm, and in fact in an even more recent preprint of Aubrun, the lower bound $C_d \gg_\varepsilon \log^{1-\varepsilon} d$ for any $\varepsilon > 0$ has been obtained in this case. However, these lower bounds do not apply in the Euclidean case, and one may still conjecture that $C_d$ is in fact uniformly bounded in this case.
Unfortunately, we do not make direct progress on these problems here. However, we do show that the Stein-Strömberg bound $O(n \log n)$ is extremely general, applying to a wide class of metric measure spaces obeying a certain “microdoubling condition at dimension $n$” (with $n = d$ in the Euclidean case); and conversely, at this level of generality it is essentially the best estimate possible, even with additional metric measure hypotheses on the space. Thus, if one wants to improve this bound for a specific maximal inequality, one has to use specific properties of the geometry (such as the connections between Euclidean balls and heat kernels). Furthermore, in the general setting of metric measure spaces, one has a general localisation principle, which roughly speaking asserts that in order to prove a maximal inequality over all scales $0 < r < \infty$, it suffices to prove such an inequality in a smaller range $R \le r \le n^{O(1)} R$ uniformly in $R > 0$. It is this localisation which ultimately explains the significance of the $n \log n$ growth in the Stein-Strömberg result (there are $\sim n \log n$ essentially distinct scales in any range $R \le r \le n^{O(1)} R$). It also shows that if one restricts the radii $r$ to a lacunary range (such as powers of $2$), the best constant improves to $O(\log n)$; if one restricts the radii to an even sparser range such as powers of $n$, the best constant becomes $O(1)$.
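The scale count behind that last heuristic is easy to verify numerically. In the sketch below (my own back-of-the-envelope computation, not from the paper), microdoubling at dimension $n$ is taken to mean that radii $r$ and $(1 + 1/n) r$ give comparable balls, so the essentially distinct scales in a range $[R, n^C R]$ are roughly the powers of $1 + 1/n$ in that range; their number is indeed of order $n \log n$.

```python
import math

# Count the powers of (1 + 1/n) between R and n^C * R: these model the
# "essentially distinct" scales under microdoubling at dimension n.
def distinct_scales(n, C=2):
    return math.log(n ** C) / math.log(1 + 1 / n)

# The count converges to C * n * log(n), the claimed ~ n log n growth.
for n in (10, 100, 1000):
    exact = distinct_scales(n)
    approx = 2 * n * math.log(n)   # C = 2 here, an arbitrary choice
    print(n, exact, approx, exact / approx)
```

By contrast, restricting to radii that are powers of $2$ leaves only $\log_2(n^C) = O(\log n)$ scales in the same range, matching the improved lacunary constant.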
This is an adaptation of a talk I gave recently for a program at IPAM. In this talk, I gave a (very informal and non-rigorous) overview of Hrushovski’s use of model-theoretic techniques to establish new Freiman-type theorems in non-commutative groups, and some recent work in progress of Ben Green, Tom Sanders and myself to establish combinatorial proofs of some of Hrushovski’s results.
This is the last reading seminar of this quarter for the Hrushovski paper. Anush Tserunyan continued working through her notes on stable theories. We introduced the key notion of non-forking extensions (in the context of stable theories, at least) of types when constants are added; these are extensions which are “as generic as possible” with respect to the constants being added. The existence of non-forking extensions can be used for instance to generate Morley sequences – sequences of indiscernibles which are “in general position” in some sense.
Starting in the winter quarter (Monday Jan 4, to be precise), I will be giving a graduate course on random matrices, with lecture notes to be posted on this blog. The topics I have in mind are somewhat fluid, but my initial plan is to cover a large fraction of the following:
- Central limit theorem, random walks, concentration of measure
- The semicircular and Marcenko-Pastur laws for bulk distribution
- A little bit on the connections with free probability
- The spectral distribution of GUE and gaussian random matrices; theory of determinantal processes
- A little bit on the connections with orthogonal polynomials and Riemann-Hilbert problems
- Singularity probability and the least singular value; connections with the Littlewood-Offord problem
- The circular law
- Universality for eigenvalue spacing; Erdos-Schlein-Yau delocalisation of eigenvectors and applications
If time permits, I may also cover
- The Tracy-Widom law
- Connections with Dyson Brownian motion and the Ornstein-Uhlenbeck process; the Erdos-Schlein-Yau approach to eigenvalue spacing universality
- Conjectural connections with zeroes of the Riemann zeta function
Depending on how the course progresses, I may also continue it into the spring quarter (or else have a spring graduate course on a different topic – one potential topic I have in mind is dynamics on nilmanifolds and applications to combinatorics).
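As a tiny preview of the bulk-distribution topic in the syllabus above, here is a quick numerical sketch of the semicircular law (my own illustration, with normalisation conventions chosen arbitrarily rather than taken from the lecture notes):

```python
import numpy as np

# Empirical semicircular law: a symmetric n x n matrix with entries of
# variance ~ 1/n has eigenvalues filling out the semicircle on [-2, 2].
rng = np.random.default_rng(1)
n = 400
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2 * n)   # Wigner-type matrix, off-diagonal variance 1/n
eig = np.linalg.eigvalsh(H)

# The semicircle density (1/(2*pi)) * sqrt(4 - x^2) on [-2, 2] has second
# moment 1; the empirical spectrum should concentrate accordingly.
assert np.abs(eig).max() < 2.5
assert abs(np.mean(eig ** 2) - 1.0) < 0.1
print("second moment of empirical spectrum:", np.mean(eig ** 2))
```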