
Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting. The short answer is “yes for small averages, but not for long ones”.

The Furstenberg multiple recurrence theorem can be phrased as follows: if ${X = (X, {\mathcal X}, \mu)}$ is a probability space with a measure-preserving shift ${T:X \rightarrow X}$ (which naturally induces an isomorphism ${\alpha: L^\infty(X) \rightarrow L^\infty(X)}$ by setting ${\alpha a := a \circ T^{-1}}$), ${a \in L^\infty(X)}$ is non-negative with positive trace ${\tau(a) := \int_X a\ d\mu}$, and ${k \geq 1}$ is an integer, then one has

$\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0.$

In particular, ${\tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0}$ for all ${n}$ in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.
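For concreteness, one can watch this average numerically in a toy commutative system. The following is a minimal Python sketch (not from the paper; the choice of ${p}$, the interval, and the cutoff ${N}$ are all illustrative): the space is the cyclic group ${{\mathbb Z}/p{\mathbb Z}}$ with the shift ${x \mapsto x+1}$, and ${a}$ is the indicator of an interval, so the ${k=3}$ average counts three-term progressions inside that interval.

```python
import numpy as np

p = 101                          # toy cyclic system Z/p (illustrative choice)
a = np.zeros(p)
a[:20] = 1.0                     # indicator of an interval: non-negative, positive trace

def tau(f):
    # trace = average over the space, tau(f) = (1/p) sum_x f(x)
    return f.mean()

def alpha(f, n=1):
    # shift action: (alpha^n f)(x) = f(x - n mod p)
    return np.roll(f, n)

# the k=3 Furstenberg average (1/N) sum_n tau( a (alpha^n a) (alpha^{2n} a) )
N = 500
avg = np.mean([tau(a * alpha(a, n) * alpha(a, 2 * n)) for n in range(1, N + 1)])
print(avg > 0)
```

The average is strictly positive here because many shifts ${n}$ (e.g. multiples of ${p}$) realign the interval with itself, in line with the recurrence theorem.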

The Host-Kra multiple convergence theorem makes the related assertion that if ${a_0,\ldots,a_{k-1} \in L^\infty(X)}$, then the scalar averages

$\displaystyle \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )$

converge to a limit as ${N \rightarrow \infty}$; a fortiori, the function averages

$\displaystyle \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})$

converge in (say) ${L^2(X)}$ norm.

The space ${L^\infty(X)}$ is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space ${H}$ which is closed under the weak operator topology, and under taking adjoints. Indeed, one can take ${H}$ to be ${L^2(X)}$, and identify each element ${m}$ of ${L^\infty(X)}$ with the multiplier operator ${a \mapsto ma}$. The operation ${\tau: a \mapsto \int_X a\ d\mu}$ is then a finite trace for this algebra, i.e. a linear map from the algebra to the scalars ${{\mathbb C}}$ such that ${\tau(ab)=\tau(ba)}$, ${\tau(a^*) = \overline{\tau(a)}}$, and ${\tau(a^* a) \geq 0}$, with equality iff ${a=0}$. The shift ${\alpha: L^\infty(X) \rightarrow L^\infty(X)}$ is then an automorphism of this algebra (preserving the trace and the adjoint operation).

We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system ${(M, \tau, \alpha)}$ to be a von Neumann algebra ${M}$ with a finite trace ${\tau}$ and an automorphism ${\alpha: M \rightarrow M}$. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:

• (Matrices) ${M = M_n({\mathbb C})}$ is the algebra of ${n \times n}$ complex matrices, with trace ${\tau(a) = \frac{1}{n} \hbox{tr}(a)}$ and shift ${\alpha(a) := UaU^{-1}}$, where ${U}$ is a fixed unitary ${n \times n}$ matrix.
• (Group algebras) ${M = \overline{{\mathbb C} G}}$ is the closure of the group algebra ${{\mathbb C} G}$ of a discrete group ${G}$ (i.e. the algebra of finite formal complex combinations of group elements), which acts on the Hilbert space ${\ell^2(G)}$ by convolution (identifying each group element with its Kronecker delta function). A trace is given by ${\tau(a) = \langle a \delta_0, \delta_0 \rangle_{\ell^2(G)}}$, where ${\delta_0 \in \ell^2(G)}$ is the Kronecker delta at the identity. Any automorphism ${T: G \rightarrow G}$ of the group induces a shift ${\alpha: M \rightarrow M}$.
• (Noncommutative torus) ${M}$ is the von Neumann algebra acting on ${L^2(({\mathbb R}/{\mathbb Z})^2)}$ generated by the multiplier operator ${f(x,y) \mapsto e^{2\pi i x} f(x,y)}$ and the shifted multiplier operator ${f(x,y) \mapsto e^{2\pi i y} f(x+\theta,y)}$, where ${\theta \in {\mathbb R}/{\mathbb Z}}$ is fixed. A trace is given by ${\tau(a) = \langle 1, a1\rangle_{L^2(({\mathbb R}/{\mathbb Z})^2)}}$, where ${1 \in L^2(({\mathbb R}/{\mathbb Z})^2)}$ is the constant function.
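The first (matrix) example is concrete enough to verify numerically. Here is a minimal sketch (the dimension, seed, and the QR construction of the unitary ${U}$ are illustrative choices, not from the paper) checking that ${\alpha(a) = UaU^{-1}}$ preserves the normalised trace, that ${\tau}$ is tracial, and that ${\alpha}$ commutes with adjoints:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# a fixed n x n unitary U, built here from a QR factorisation of a random complex matrix
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def tau(a):
    # normalised trace tau(a) = tr(a)/n
    return np.trace(a) / n

def alpha(a):
    # the shift a -> U a U^{-1}  (U^{-1} = U* since U is unitary)
    return U @ a @ U.conj().T

a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
b = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

print(np.isclose(tau(alpha(a)), tau(a)))                   # alpha preserves the trace
print(np.isclose(tau(a @ b), tau(b @ a)))                  # tau(ab) = tau(ba)
print(np.allclose(alpha(a).conj().T, alpha(a.conj().T)))   # alpha(a)* = alpha(a*)
```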

Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed ${k \geq 1}$ and for a fixed von Neumann dynamical system ${(M,\tau,\alpha)}$:

• (Recurrence on average) Whenever ${a \in M}$ is non-negative with positive trace, is it true that $\displaystyle \liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0?$
• (Recurrence on a dense set) Whenever ${a \in M}$ is non-negative with positive trace, is it true that $\displaystyle \tau( a (\alpha^n a) \ldots (\alpha^{(k-1)n} a) ) > 0$ for all ${n}$ in a set of positive upper density?
• (Weak convergence) With ${a_0,\ldots,a_{k-1} \in M}$, is it true that the scalar averages $\displaystyle \frac{1}{N} \sum_{n=1}^N \tau( a_0 (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1}) )$ converge?
• (Strong convergence) With ${a_1,\ldots,a_{k-1} \in M}$, is it true that the averages $\displaystyle \frac{1}{N} \sum_{n=1}^N (\alpha^n a_1) \ldots (\alpha^{(k-1)n} a_{k-1})$ converge in the Hilbert-Schmidt norm ${\|a\|_{L^2(M)} := \tau(a^* a)^{1/2}}$?

Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.

For ${k=1}$, all four questions can trivially be answered “yes”. For ${k=2}$, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators. For ${k=3}$, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” questions assuming that ${M}$ is ergodic. For general ${k}$, we have a positive answer to all four questions under the assumption that ${M}$ is asymptotically abelian, which roughly speaking means that the commutators ${[a,\alpha^n b]}$ converge to zero (in an appropriate weak sense) as ${n \rightarrow \infty}$. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido, Duvenhage, and Beyers-Duvenhage-Stroh. For the ${k=3}$ result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the “generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.

In most other situations we have counterexamples to all of these questions. In particular:

• For ${k=3}$, recurrence on average can fail on an ergodic system; indeed, one can even make the average negative. This example is ultimately based on a Behrend example construction and a von Neumann algebra construction known as the crossed product.
• For ${k=3}$, recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
• For ${k=4}$, weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
• For ${k=5}$, recurrence on a dense set fails even with the ergodicity hypothesis. This uses a fancier version of the Behrend example due to Ruzsa in this paper of Bergelson, Host, and Kra. This example only applies for ${k \geq 5}$; we do not know for ${k=4}$ whether recurrence on a dense set holds for ergodic systems.

In these notes we lay out the basic theory of the Fourier transform, which is of course the most fundamental tool in harmonic analysis and also of major importance in related fields (functional analysis, complex analysis, PDE, number theory, additive combinatorics, representation theory, signal processing, etc.). The Fourier transform, in conjunction with the Fourier inversion formula, allows one to take essentially arbitrary (complex-valued) functions on a group ${G}$ (or more generally, a space ${X}$ that ${G}$ acts on, e.g. a homogeneous space ${G/H}$), and decompose them as a (discrete or continuous) superposition of much more symmetric functions on the domain, such as characters ${\chi: G \rightarrow S^1}$; the precise superposition is given by Fourier coefficients ${\hat f(\xi)}$, which take values in some dual object such as the Pontryagin dual ${\hat G}$ of ${G}$. Characters behave in a very simple manner with respect to translation (indeed, they are eigenfunctions of the translation action), and so the Fourier transform tends to simplify any mathematical problem which enjoys a translation invariance symmetry (or an approximation to such a symmetry), and is somehow “linear” (i.e. it interacts nicely with superpositions). In particular, Fourier analytic methods are particularly useful for studying operations such as convolution ${f, g \mapsto f*g}$ and set-theoretic addition ${A, B \mapsto A+B}$, or the closely related problem of counting solutions to additive problems such as ${x = a_1 + a_2 + a_3}$ or ${x = a_1 - a_2}$, where ${a_1, a_2, a_3}$ are constrained to lie in specific sets ${A_1, A_2, A_3}$. The Fourier transform is also a particularly powerful tool for solving constant-coefficient linear ODE and PDE (because of the translation invariance), and can also approximately solve some variable-coefficient (or slightly non-linear) equations if the coefficients vary smoothly enough and the nonlinear terms are sufficiently tame.

The Fourier transform ${\hat f(\xi)}$ also provides an important new way of looking at a function ${f(x)}$, as it highlights the distribution of ${f}$ in frequency space (the domain of the frequency variable ${\xi}$) rather than physical space (the domain of the physical variable ${x}$). A given property of ${f}$ in the physical domain may be transformed to a rather different-looking property of ${\hat f}$ in the frequency domain. For instance:

• Smoothness of ${f}$ in the physical domain corresponds to decay of ${\hat f}$ in the Fourier domain, and conversely. (More generally, fine scale properties of ${f}$ tend to manifest themselves as coarse scale properties of ${\hat f}$, and conversely.)
• Convolution in the physical domain corresponds to pointwise multiplication in the Fourier domain, and conversely.
• Constant coefficient differential operators such as ${d/dx}$ in the physical domain correspond to multiplication by polynomials such as ${2\pi i \xi}$ in the Fourier domain, and conversely.
• More generally, translation invariant operators in the physical domain correspond to multiplication by symbols in the Fourier domain, and conversely.
• Rescaling in the physical domain by an invertible linear transformation corresponds to an inverse (adjoint) rescaling in the Fourier domain.
• Restriction to a subspace (or subgroup) in the physical domain corresponds to projection to the dual quotient space (or quotient group) in the Fourier domain, and conversely.
• Frequency modulation in the physical domain corresponds to translation in the frequency domain, and conversely.

(We will make these statements more precise below.)
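For instance, the convolution correspondence can be checked directly on the finite cyclic group ${{\mathbb Z}/N{\mathbb Z}}$, where the Fourier transform is the discrete Fourier transform. A minimal sketch (the size ${N}$ and the random test functions are arbitrary illustrative choices):

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
f = rng.normal(size=N)
g = rng.normal(size=N)

# circular convolution on Z/N: (f*g)(x) = sum_m f(m) g(x - m mod N)
conv = np.array([sum(f[m] * g[(x - m) % N] for m in range(N)) for x in range(N)])

# convolution in the physical domain = pointwise multiplication in the Fourier domain
lhs = np.fft.fft(conv)
rhs = np.fft.fft(f) * np.fft.fft(g)
print(np.allclose(lhs, rhs))   # True
```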

On the other hand, some operations in the physical domain remain essentially unchanged in the Fourier domain. Most importantly, the ${L^2}$ norm (or energy) of a function ${f}$ is the same as that of its Fourier transform, and more generally the inner product ${\langle f, g \rangle}$ of two functions ${f, g}$ is the same as that of their Fourier transforms. Indeed, the Fourier transform is a unitary operator on ${L^2}$ (a fact which is variously known as the Plancherel theorem or the Parseval identity). This makes it easier to pass back and forth between the physical domain and frequency domain, so that one can combine techniques that are easy to execute in the physical domain with other techniques that are easy to execute in the frequency domain. (In fact, one can combine the physical and frequency domains together into a product domain known as phase space, and there are entire fields of mathematics (e.g. microlocal analysis, geometric quantisation, time-frequency analysis) devoted to performing analysis on these sorts of spaces directly, but this is beyond the scope of this course.)
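The Plancherel/Parseval identities can likewise be seen numerically: with the unitary normalisation of the discrete Fourier transform (the `norm="ortho"` option in NumPy), ${L^2}$ norms and inner products are preserved exactly. A sketch with arbitrary random test functions:

```python
import numpy as np

N = 128
rng = np.random.default_rng(2)
f = rng.normal(size=N) + 1j * rng.normal(size=N)
g = rng.normal(size=N) + 1j * rng.normal(size=N)

# the unitarily normalised DFT (forward and inverse both carry a 1/sqrt(N) factor)
F = np.fft.fft(f, norm="ortho")
G = np.fft.fft(g, norm="ortho")

print(np.isclose(np.linalg.norm(f), np.linalg.norm(F)))   # Plancherel: L^2 norms agree
print(np.isclose(np.vdot(g, f), np.vdot(G, F)))           # Parseval: inner products agree
```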

In these notes, we briefly discuss the general theory of the Fourier transform, but will mainly focus on the two classical domains for Fourier analysis: the torus ${{\Bbb T}^d := ({\bf R}/{\bf Z})^d}$, and the Euclidean space ${{\bf R}^d}$. For these domains one has the advantage of being able to perform very explicit algebraic calculations, involving concrete functions such as plane waves ${x \mapsto e^{2\pi i x \cdot \xi}}$ or Gaussians ${x \mapsto A^{d/2} e^{-\pi A |x|^2}}$.

One way to study a general class of mathematical objects is to embed them into a more structured class of mathematical objects; for instance, one could study manifolds by embedding them into Euclidean spaces. In these (optional) notes we study two (related) embedding theorems for topological spaces:

A key theme in real analysis is that of studying general functions ${f: X \rightarrow {\bf R}}$ or ${f: X \rightarrow {\bf C}}$ by first approximating them by “simpler” or “nicer” functions. But the precise class of “simple” or “nice” functions may vary from context to context. In measure theory, for instance, it is common to approximate measurable functions by indicator functions or simple functions. But in other parts of analysis, it is often more convenient to approximate rough functions by continuous or smooth functions (perhaps with compact support, or some other decay condition), or by functions in some algebraic class, such as the class of polynomials or trigonometric polynomials.

In order to approximate rough functions by more continuous ones, one of course needs tools that can generate continuous functions with some specified behaviour. The two basic tools for this are Urysohn’s lemma, which approximates indicator functions by continuous functions, and the Tietze extension theorem, which extends continuous functions on a subdomain to continuous functions on a larger domain. An important consequence of these theorems is the Riesz representation theorem for linear functionals on the space of compactly supported continuous functions, which describes such functionals in terms of Radon measures.

Sometimes, approximation by continuous functions is not enough; one must approximate continuous functions in turn by an even smoother class of functions. A useful tool in this regard is the Stone-Weierstrass theorem, that generalises the classical Weierstrass approximation theorem to more general algebras of functions.

As an application of this theory (and of many of the results accumulated in previous lecture notes), we will present (in an optional section) the commutative Gelfand-Neimark theorem classifying all commutative unital ${C^*}$-algebras.

When studying a mathematical space X (e.g. a vector space, a topological space, a manifold, a group, an algebraic variety etc.), there are two basic ways to try to understand the space:

1. By looking at subobjects in X, or more generally maps $f: Y \to X$ from some other space Y into X.  For instance, a point in a space X can be viewed as a map from $pt$ to X; a curve in a space X could be thought of as a map from $[0,1]$ to X; a group G can be studied via its subgroups; and so forth.
2. By looking at objects on X, or more precisely maps $f: X \to Y$ from X into some other space Y.  For instance, one can study a topological space X via the real- or complex-valued continuous functions $f \in C(X)$ on X; one can study a group G via its quotient groups $\pi: G \to G/H$; one can study an algebraic variety V by studying the polynomials on V (and in particular, the ideal of polynomials that vanish identically on V); and so forth.

(There are also more sophisticated ways to study an object via its maps, e.g. by studying extensions, joinings, splittings, universal lifts, etc.  The general study of objects via the maps between them is formalised abstractly in modern mathematics as category theory, and is also closely related to homological algebra.)

A remarkable phenomenon in many areas of mathematics is that of (contravariant) duality: that the maps into and out of one type of mathematical object X can be naturally associated to the maps out of and into a dual object $X^*$ (note the reversal of arrows here!).  In some cases, the dual object $X^*$ looks quite different from the original object X.  (For instance, in Stone duality, discussed in Notes 4, X would be a Boolean algebra (or some other partially ordered set) and $X^*$ would be a compact totally disconnected Hausdorff space (or some other topological space).)   In other cases, most notably with Hilbert spaces as discussed in Notes 5, the dual object $X^*$ is essentially identical to X itself.

In these notes we discuss a third important case of duality, namely duality of normed vector spaces, which is of an intermediate nature to the previous two examples: the dual $X^*$ of a normed vector space turns out to be another normed vector space, but generally one which is not equivalent to X itself (except in the important special case when X is a Hilbert space, as mentioned above).  On the other hand, the double dual $(X^*)^*$ turns out to be closely related to X, and in several (but not all) important cases, is essentially identical to X.  One of the most important uses of dual spaces in functional analysis is that it allows one to define the transpose $T^*: Y^* \to X^*$ of a continuous linear operator $T: X \to Y$.

A fundamental tool in understanding duality of normed vector spaces will be the Hahn-Banach theorem, which is an indispensable tool for exploring the dual of a vector space.  (Indeed, without this theorem, it is not clear at all that the dual of a non-trivial normed vector space is non-trivial!)  Thus, we shall study this theorem in detail in these notes concurrently with our discussion of duality.

In the third of the Distinguished Lecture Series given by Eli Stein here at UCLA, Eli presented a slightly different topic, which is work in preparation with Alex Nagel, Fulvio Ricci, and Steve Wainger, on algebras of singular integral operators which are sensitive to multiple different geometries in a nilpotent Lie group.

Having studied compact extensions in the previous lecture, we now consider the opposite type of extension, namely that of a weakly mixing extension. Just as compact extensions are “relative” versions of compact systems, weakly mixing extensions are “relative” versions of weakly mixing systems, in which the underlying algebra of scalars ${\Bbb C}$ is replaced by $L^\infty(Y)$. As in the case of unconditionally weakly mixing systems, we will be able to use the van der Corput lemma to neglect “conditionally weakly mixing” functions, thus allowing us to lift the uniform multiple recurrence property (UMR) from a system to any weakly mixing extension of that system.

To finish the proof of the Furstenberg recurrence theorem requires two more steps. One is a relative version of the dichotomy between mixing and compactness: if a system is not weakly mixing relative to some factor, then that factor has a non-trivial compact extension. This will be accomplished using the theory of conditional Hilbert-Schmidt operators in this lecture. Finally, we need the (easy) result that the UMR property is preserved under limits of chains; this will be accomplished in the next lecture.

In Lecture 11, we studied compact measure-preserving systems – those systems $(X, {\mathcal X}, \mu, T)$ in which every function $f \in L^2(X, {\mathcal X}, \mu)$ was almost periodic, which meant that their orbit $\{ T^n f: n \in {\Bbb Z}\}$ was precompact in the $L^2(X, {\mathcal X}, \mu)$ topology. Among other things, we were able to easily establish the Furstenberg recurrence theorem (Theorem 1 from Lecture 11) for such systems.

In this lecture, we generalise these results to a “relative” or “conditional” setting, in which we study systems which are compact relative to some factor $(Y, {\mathcal Y}, \nu, S)$ of $(X, {\mathcal X}, \mu, T)$. Such systems are to compact systems as isometric extensions are to isometric systems in topological dynamics. The main result we establish here is that the Furstenberg recurrence theorem holds for such compact extensions whenever the theorem holds for the base. The proof is essentially the same as in the compact case; the main new trick is not to work in the Hilbert spaces $L^2(X,{\mathcal X},\mu)$ over the complex numbers, but rather in the Hilbert module $L^2(X,{\mathcal X},\mu|Y, {\mathcal Y}, \nu)$ over the (commutative) von Neumann algebra $L^\infty(Y,{\mathcal Y},\nu)$. (Modules are to rings as vector spaces are to fields.) Because of the compact nature of the extension, it turns out that results from topological dynamics (and in particular, van der Waerden’s theorem) can be exploited to good effect in this argument.

[Note: this operator-algebraic approach is not the only way to understand these extensions; one can also proceed by disintegrating $\mu$ into fibre measures $\mu_y$ for almost every $y \in Y$ and working fibre by fibre. We will discuss the connection between the two approaches below.]
