You are currently browsing the tag archive for the ‘Tim Austin’ tag.
Tim Austin, Tanja Eisner, and I have just uploaded to the arXiv our joint paper Nonconventional ergodic averages and multiple recurrence for von Neumann dynamical systems, submitted to Pacific Journal of Mathematics. This project started with the observation that the multiple recurrence theorem of Furstenberg (and the related multiple convergence theorem of Host and Kra) could be interpreted in the language of dynamical systems of commutative finite von Neumann algebras, which naturally raised the question of the extent to which the results hold in the noncommutative setting. The short answer is “yes for small averages, but not for long ones”.
The Furstenberg multiple recurrence theorem can be phrased as follows: if $(X,\mu)$ is a probability space with a measure-preserving shift $T: X \rightarrow X$ (which naturally induces an isomorphism $T: L^\infty(X) \rightarrow L^\infty(X)$ by setting $Ta := a \circ T^{-1}$), $a \in L^\infty(X)$ is non-negative with positive trace $\int_X a\ d\mu > 0$, and $k \geq 1$ is an integer, then one has

$$\liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \int_X a (T^n a) \cdots (T^{(k-1)n} a)\ d\mu > 0.$$

In particular, $\int_X a (T^n a) \cdots (T^{(k-1)n} a)\ d\mu > 0$ for all $n$ in a set of positive upper density. This result is famously equivalent to Szemerédi’s theorem on arithmetic progressions.
The Host-Kra multiple convergence theorem makes the related assertion that if $a_1, \ldots, a_{k-1} \in L^\infty(X)$, then the scalar averages

$$\frac{1}{N} \sum_{n=1}^N \int_X (T^n a_1) \cdots (T^{(k-1)n} a_{k-1})\ d\mu$$

converge to a limit as $N \rightarrow \infty$; a fortiori, the function averages

$$\frac{1}{N} \sum_{n=1}^N (T^n a_1) \cdots (T^{(k-1)n} a_{k-1})$$

converge in (say) $L^2(X)$ norm.
The space $L^\infty(X)$ is a commutative example of a von Neumann algebra: an algebra of bounded linear operators on a complex Hilbert space $H$ which is closed under the weak operator topology, and under taking adjoints. Indeed, one can take $H$ to be $L^2(X)$, and identify each element $m$ of $L^\infty(X)$ with the multiplier operator $f \mapsto mf$. The operation $\tau(a) := \int_X a\ d\mu$ is then a finite trace for this algebra, i.e. a linear map from the algebra to the scalars $\mathbf{C}$ such that $\tau(ab) = \tau(ba)$, $\tau(1) = 1$, and $\tau(a a^*) \geq 0$, with equality iff $a = 0$. The shift $T$ is then an automorphism of this algebra (preserving the trace and the adjoint operation).
We can generalise this situation to the noncommutative setting. Define a von Neumann dynamical system $(M, \tau, \alpha)$ to be a von Neumann algebra $M$ with a finite trace $\tau$ and an automorphism $\alpha: M \rightarrow M$. In addition to the commutative examples generated by measure-preserving systems, we give three other examples here:
- (Matrices) $M = M_n(\mathbf{C})$ is the algebra of $n \times n$ complex matrices, with trace $\tau(a) := \frac{1}{n} \mathrm{tr}(a)$ and shift $\alpha(a) := U a U^{-1}$, where $U$ is a fixed unitary $n \times n$ matrix.
- (Group algebras) $M$ is the weak operator closure of the group algebra $\mathbf{C} G$ of a discrete group $G$ (i.e. the algebra of finite formal complex combinations of group elements), which acts on the Hilbert space $\ell^2(G)$ by convolution (identifying each group element with its Kronecker delta function). A trace is given by $\tau(a) := \langle a \delta_1, \delta_1 \rangle_{\ell^2(G)}$, where $\delta_1$ is the Kronecker delta at the identity. Any automorphism of the group $G$ induces a shift $\alpha$.
- (Noncommutative torus) $M$ is the von Neumann algebra acting on $L^2(\mathbf{R}/\mathbf{Z})$ generated by the multiplier operator $f(x) \mapsto e^{2\pi i x} f(x)$ and the shifted multiplier operator $f(x) \mapsto f(x + \theta)$, where $\theta$ is fixed. A trace is given by $\tau(a) := \langle a 1, 1 \rangle_{L^2(\mathbf{R}/\mathbf{Z})}$, where $1 \in L^2(\mathbf{R}/\mathbf{Z})$ is the constant function.
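To make the matrix example concrete, here is a small pure-Python sketch (my own illustration, not taken from the paper; all helper names are invented) that builds a $2 \times 2$ instance of that system and checks the trace axioms numerically:

```python
# Sketch of the "matrices" von Neumann dynamical system:
# M = n x n complex matrices, tau(a) = (1/n) tr(a), alpha(a) = U a U^{-1}.
import cmath

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def adjoint(a):
    n = len(a)
    return [[a[j][i].conjugate() for j in range(n)] for i in range(n)]

def tau(a):
    # normalised trace, so that tau(1) = 1
    n = len(a)
    return sum(a[i][i] for i in range(n)) / n

# a fixed diagonal unitary U; then U^{-1} = U^* (its adjoint)
theta = 0.7
U = [[cmath.exp(1j * theta), 0], [0, cmath.exp(-1j * theta)]]
Ustar = adjoint(U)

def alpha(a):
    # the shift alpha(a) := U a U^{-1}
    return matmul(matmul(U, a), Ustar)

a = [[1, 2 + 1j], [0, 3]]
b = [[0, 1], [1j, 2]]

# tau is tracial, alpha preserves it, and tau(a a^*) >= 0:
assert abs(tau(matmul(a, b)) - tau(matmul(b, a))) < 1e-12
assert abs(tau(alpha(a)) - tau(a)) < 1e-12
assert tau(matmul(a, adjoint(a))).real > 0
```

The same skeleton works for any fixed unitary $U$, since conjugation by a unitary always preserves the normalised trace.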
Inspired by noncommutative generalisations of other results in commutative analysis, one can then ask the following questions, for a fixed $k \geq 1$ and for a fixed von Neumann dynamical system $(M, \tau, \alpha)$:
- (Recurrence on average) Whenever $a \in M$ is non-negative with positive trace, is it true that
$$\liminf_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N \tau(a\, \alpha^n(a) \cdots \alpha^{(k-1)n}(a)) > 0?$$
- (Recurrence on a dense set) Whenever $a \in M$ is non-negative with positive trace, is it true that $\tau(a\, \alpha^n(a) \cdots \alpha^{(k-1)n}(a)) > 0$ for all $n$ in a set of positive upper density?
- (Weak convergence) With $a_1, \ldots, a_{k-1} \in M$, is it true that $\frac{1}{N} \sum_{n=1}^N \tau(\alpha^n(a_1) \cdots \alpha^{(k-1)n}(a_{k-1}))$ converges?
- (Strong convergence) With $a_1, \ldots, a_{k-1} \in M$, is it true that $\frac{1}{N} \sum_{n=1}^N \alpha^n(a_1) \cdots \alpha^{(k-1)n}(a_{k-1})$ converges in $L^2(M, \tau)$ using the Hilbert-Schmidt norm $\|a\|_{L^2(M,\tau)} := \tau(a a^*)^{1/2}$?
Note that strong convergence automatically implies weak convergence, and recurrence on average automatically implies recurrence on a dense set.
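As a quick numerical sanity check (my own toy computation, not from the paper; the helper names are invented), one can evaluate the $k=3$ scalar averages $\frac{1}{N} \sum_{n=1}^N \tau(a\, \alpha^n(a)\, \alpha^{2n}(a))$ for a $2 \times 2$ instance of the matrix example and watch them settle as $N$ grows:

```python
# Triple averages for the matrix system M = 2x2 matrices,
# tau = normalised trace, alpha = conjugation by a fixed unitary U.
import cmath

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tau(a):
    return sum(a[i][i] for i in range(len(a))) / len(a)

theta = 0.7  # a fixed diagonal unitary U and its inverse U^*
U = [[cmath.exp(1j * theta), 0], [0, cmath.exp(-1j * theta)]]
Ustar = [[cmath.exp(-1j * theta), 0], [0, cmath.exp(1j * theta)]]

def alpha(a):
    return matmul(matmul(U, a), Ustar)

def triple_average(a, N):
    # (1/N) * sum_{n=1}^N tau(a alpha^n(a) alpha^{2n}(a))
    s = 0
    b1, b2 = alpha(a), alpha(alpha(a))  # alpha^n(a), alpha^{2n}(a) at n = 1
    for _ in range(N):
        s += tau(matmul(matmul(a, b1), b2))
        b1 = alpha(b1)         # advance to alpha^{n+1}(a)
        b2 = alpha(alpha(b2))  # advance to alpha^{2(n+1)}(a)
    return s / N

a = [[1, 0.5], [0.5, 2]]  # a non-negative (positive definite) element
# successive averages are close, consistent with convergence:
assert abs(triple_average(a, 1000) - triple_average(a, 2000)) < 0.01
```

Of course a single finite-dimensional example proves nothing; it merely illustrates what the four questions above are asking.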
For $k=1$, all four questions can trivially be answered “yes”. For $k=2$, the answer to the above four questions is also “yes”, thanks to the von Neumann ergodic theorem for unitary operators. For $k=3$, we were able to establish a positive answer to the “recurrence on a dense set”, “weak convergence”, and “strong convergence” results assuming that the system is ergodic. For general $k$, we have a positive answer to all four questions under the assumption that the system is asymptotically abelian, which roughly speaking means that the commutators $[a, \alpha^n(b)]$ converge to zero (in an appropriate weak sense) as $n \rightarrow \infty$. Both of these proofs adapt the usual ergodic theory arguments; the latter result generalises some earlier work of Niculescu-Stroh-Zsido, Duvenhage, and Beyers-Duvenhage-Stroh. For the $k=3$ result, a key observation is that the van der Corput lemma can be used to control triple averages without requiring any commutativity; the “generalised von Neumann” trick of using multiple applications of the van der Corput trick to control higher averages, however, relies much more strongly on commutativity.
In most other situations we have counterexamples to all of these questions. In particular:
- For $k=3$, recurrence on average can fail on an ergodic system; indeed, one can even make the averages negative. This example is ultimately based on a Behrend-type example and a von Neumann algebra construction known as the crossed product.
- For $k=3$, recurrence on a dense set can also fail if the ergodicity hypothesis is dropped. This also uses the Behrend example and the crossed product construction.
- For $k=4$, weak and strong convergence can fail even assuming ergodicity. This uses a group theoretic construction, which amusingly was inspired by Grothendieck’s interpretation of a group as a sheaf of flat connections, which I blogged about recently, and which I will discuss below the fold.
- For $k \geq 5$, recurrence on a dense set fails even with the ergodicity hypothesis. This uses a fancier version of the Behrend example due to Ruzsa in this paper of Bergelson, Host, and Kra. This example only applies for $k \geq 5$; we do not know whether recurrence on a dense set holds for ergodic systems when $k=4$.
I’ve just uploaded to the arXiv my joint paper with Tim Austin, “On the testability and repair of hereditary hypergraph properties“, which has been submitted to Random Structures and Algorithms. In this paper we prove some positive and negative results for the testability (and the local repairability) of various properties of directed or undirected graphs and hypergraphs, which can be either monochromatic or multicoloured.
The negative results have already been discussed in a previous posting of mine, so today I will focus on the positive results. The property testing results here are finitary results, but it turns out to be rather convenient to use a certain correspondence principle (the hypergraph version of the Furstenberg correspondence principle) to convert the question into one about exchangeable probability measures on spaces of hypergraphs (i.e. on random hypergraphs whose probability distribution is invariant under exchange of vertices). Such objects are also closely related to the “graphons” and “hypergraphons” that emerge as graph limits, as studied by Lovasz-Szegedy, Elek-Szegedy, and others. Somewhat amusingly, once one does so, it then becomes convenient to keep track of objects indexed by vertex sets and how they are exchanged via the language of category theory, and in particular using the concept of a natural transformation to describe such objects as exchangeable measures, graph colourings, and local modification rules. I will try to sketch out some of these connections, after describing the main positive results.
This month I have been at the Institute for Advanced Study, participating in their semester program on additive combinatorics. Today I gave a talk on my forthcoming paper with Tim Austin on the property testing of graphs and hypergraphs (I hope to make a preprint available here soon). There has been an immense amount of progress on these topics recently, based in large part on the graph and hypergraph regularity lemmas; but we have discovered some surprising subtleties regarding these results, namely a distinction between undirected and directed graphs, between graphs and hypergraphs, between partite hypergraphs and non-partite hypergraphs, and between monotone hypergraph properties and hereditary ones.
For simplicity let us first work with (uncoloured, undirected, loop-free) graphs $G = (V,E)$. In the subject of graph property testing, one is given a property ${\mathcal P}$ which any given graph $G$ may or may not have. For example, ${\mathcal P}$ could be one of the following properties:
- G is planar.
- G is four-colourable.
- G has a number of edges equal to a power of two.
- G contains no triangles.
- G is bipartite.
- G is empty.
- G is a complete bipartite graph.
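For concreteness, two of these properties can be written down as simple predicates on $G = (V,E)$. This is a brute-force illustrative sketch of my own (the function names are invented), with edges stored as unordered pairs:

```python
# Two of the listed graph properties as predicates on G = (V, E).
from itertools import combinations

def is_empty(V, E):
    # property 6: G has no edges at all
    return len(E) == 0

def is_triangle_free(V, E):
    # property 4: check all vertex triples for a triangle
    # (exponentially better methods exist; fine for small graphs)
    for u, v, w in combinations(V, 3):
        if (frozenset((u, v)) in E and frozenset((v, w)) in E
                and frozenset((u, w)) in E):
            return False
    return True

V = [0, 1, 2, 3]
E = {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}
assert not is_triangle_free(V, E)                       # contains 0-1-2
assert is_triangle_free(V, E - {frozenset((0, 1))})     # triangle broken
assert is_empty(V, set())
```

Note that both predicates depend only on the isomorphism class of $G$, in line with the invariance assumption made next.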
We assume that the labeling of the graph is irrelevant. More precisely, we assume that whenever two graphs $G, G'$ are isomorphic, $G$ satisfies ${\mathcal P}$ if and only if $G'$ satisfies ${\mathcal P}$. For instance, all seven of the graph properties listed above are invariant under graph isomorphism.
We shall think of $G$ as being very large (so $|V|$ is large) and dense (so $|E|$ is comparable to $|V|^2$). We are interested in obtaining some sort of test that can answer the question “does $G$ satisfy ${\mathcal P}$?” with reasonable speed and reasonable accuracy. By “reasonable speed”, we mean that we will only make a bounded number of queries about the graph, i.e. we only look at a bounded number $k$ of distinct vertices in $V$ (selected at random) and base our test purely on how these vertices are connected to each other in $E$. (We will always assume that the number of vertices in $V$ is at least $k$.) By “reasonable accuracy”, we will mean that we specify in advance some error tolerance $\varepsilon > 0$ and require the following:
- (No false negatives) If $G$ indeed satisfies ${\mathcal P}$, then our test will always (correctly) accept $G$.
- (Few false positives in the $\varepsilon$-far case) If $G$ fails to satisfy ${\mathcal P}$, and is furthermore $\varepsilon$-far from satisfying ${\mathcal P}$ in the sense that one needs to add or remove at least $\varepsilon |V|^2$ edges in $G$ before ${\mathcal P}$ can be satisfied, then our test will (correctly) reject $G$ with probability at least $1/2$.
When a test with the above properties exists for each given $\varepsilon > 0$ (with the number of queried vertices $k$ being allowed to depend on $\varepsilon$), we say that the graph property ${\mathcal P}$ is testable with one-sided error. (The general notion of property testing was introduced by Rubinfeld and Sudan, and first studied for graph properties by Goldreich, Goldwasser, and Ron; see this web page of Goldreich for further references and discussion.) The exact rejection probability $1/2$ is not very important in this definition, since if one wants to improve the success rate of the algorithm one can simply run independent trials of that algorithm (selecting fresh random vertices each time) in order to increase the chance that $G$ is correctly rejected. However, it is intuitively clear that one must allow some probability of failure, since one is only inspecting a small portion of the graph and so cannot say with complete certainty whether the entire graph has the property or not. For similar reasons, one cannot reasonably demand to have a low false positive rate for all graphs that fail to obey ${\mathcal P}$, since if the graph is only one edge modification away from obeying ${\mathcal P}$, this modification is extremely unlikely to be detected by only querying a small portion of the graph. This explains why we need to restrict attention to graphs that are $\varepsilon$-far from obeying ${\mathcal P}$.
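The amplification step via independent trials is elementary: if a single run rejects an $\varepsilon$-far graph with probability at least $p$, then $t$ independent runs (rejecting as soon as any run rejects) succeed with probability at least $1-(1-p)^t$. A tiny sketch of this standard calculation (the function names are my own):

```python
# Amplifying a one-sided tester by independent repetition.
import math

def amplified_rejection(p, t):
    # probability that at least one of t independent runs rejects
    return 1 - (1 - p) ** t

def trials_needed(p, delta):
    # smallest t with failure probability (1-p)^t <= delta
    return math.ceil(math.log(delta) / math.log(1 - p))

assert amplified_rejection(0.5, 10) > 0.999
assert trials_needed(0.5, 1e-6) == 20
```

One-sided error is preserved by this repetition: a graph that satisfies the property is still never rejected.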
An example should illustrate this definition. Consider for instance property 6 above (the property that $G$ is empty). To test whether a graph is empty, one can perform the following obvious algorithm: take $k$ vertices in $G$ at random and check whether they have any edges at all between them. If they do, then the test of course rejects $G$ as being non-empty, while if they don’t, the test accepts $G$ as being empty. Clearly there are no false negatives in this test, and if $k$ is large enough depending on $\varepsilon$ one can easily see (from the law of large numbers) that we will have few false positives if $G$ is $\varepsilon$-far from being empty (i.e. if it has at least $\varepsilon |V|^2$ edges). So the property of being empty is testable with one-sided error.
On the other hand, it is intuitively obvious that property 3 (having a number of edges equal to a power of 2) is not testable with one-sided error.
So it is reasonable to ask: what types of graph properties are testable with one-sided error, and which ones are not?