In this lecture – the final one on general measure-preserving dynamics – we put together the results from the past few lectures to establish the Furstenberg-Zimmer structure theorem for measure-preserving systems, and then use this to finish the proof of the Furstenberg recurrence theorem.
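For reference, the theorem being assembled can be stated as follows (a standard formulation, with notation of my own choosing rather than that of the lecture): every measure-preserving system $(X, {\mathcal X}, \mu, T)$ admits a tower of factors

$$ \hbox{pt} = Y_0 \leq Y_1 \leq \ldots \leq Y_\alpha \leq \ldots \leq Y_{\alpha_{\max}}, $$

indexed by ordinals $\alpha \leq \alpha_{\max}$, in which each successor factor $Y_{\alpha+1}$ is a compact extension of $Y_\alpha$, each limit factor is the limit of the chain below it, and the full system $X$ is a weakly mixing extension of the top factor $Y_{\alpha_{\max}}$.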
Having studied compact extensions in the previous lecture, we now consider the opposite type of extension, namely that of a weakly mixing extension. Just as compact extensions are “relative” versions of compact systems, weakly mixing extensions are “relative” versions of weakly mixing systems, in which the underlying algebra of scalars ${\Bbb C}$ is replaced by $L^\infty(Y)$, where $Y$ is the underlying factor. As in the case of unconditionally weakly mixing systems, we will be able to use the van der Corput lemma to neglect “conditionally weakly mixing” functions, thus allowing us to lift the uniform multiple recurrence property (UMR) from a system to any weakly mixing extension of that system.
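One standard way to make “conditionally weakly mixing” precise (the notation here is mine, and the lecture’s conventions may differ in minor ways) is through the conditional expectation ${\Bbb E}(\cdot|Y)$ onto the factor $Y$: a function $f \in L^\infty(X)$ is conditionally weakly mixing relative to $Y$ if

$$ \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N \| {\Bbb E}( T^n f \cdot \overline{f} \,|\, Y ) \|_{L^2(Y)}^2 = 0, $$

i.e. if the correlations of $f$ with its own shifts vanish on average even after conditioning on $Y$.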
To finish the proof of the Furstenberg recurrence theorem requires two more steps. One is a relative version of the dichotomy between mixing and compactness: if a system is not weakly mixing relative to some factor, then that factor has a non-trivial compact extension. This will be accomplished using the theory of conditional Hilbert-Schmidt operators in this lecture. Finally, we need the (easy) result that the UMR property is preserved under limits of chains; this will be accomplished in the next lecture.
In the previous lecture, we studied the recurrence properties of compact systems, which are systems in which all measurable functions exhibit almost periodicity – they almost return completely to themselves after repeated shifting. Now, we consider the opposite extreme of mixing systems – those in which all measurable functions (of mean zero) exhibit mixing – they become orthogonal to themselves after repeated shifting. (Actually, there are two different types of mixing, strong mixing and weak mixing, depending on whether the orthogonality occurs individually or on the average; it is the latter concept which is of more importance to the task of establishing the Furstenberg recurrence theorem.)
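For concreteness, the two notions can be phrased as follows (a standard formulation; notation mine): for a measure-preserving system $(X, {\mathcal X}, \mu, T)$ and a mean-zero $f \in L^2(X)$, strong mixing asserts that

$$ \lim_{n \to \infty} \langle T^n f, f \rangle = 0, $$

while weak mixing only asserts the averaged statement

$$ \lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N |\langle T^n f, f \rangle| = 0; $$

thus strong mixing demands that every individual correlation decay, while weak mixing tolerates rare exceptional times.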
We shall see that for weakly mixing systems, averages such as $\frac{1}{N} \sum_{n=1}^N \int_X f\, T^n f\, T^{2n} f\ d\mu$ can be computed very explicitly (in fact, this average converges to the constant $(\int_X f\ d\mu)^3$). More generally, we shall see that weakly mixing components of a system tend to average themselves out and thus become irrelevant when studying many types of ergodic averages. Our main tool here will be the humble Cauchy-Schwarz inequality, and in particular a certain consequence of it, known as the van der Corput lemma.
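For reference, here is one standard Hilbert-space formulation of that lemma (the lecture’s exact statement may differ in normalisation): if $v_1, v_2, v_3, \ldots$ is a bounded sequence of vectors in a Hilbert space obeying

$$ \lim_{H \to \infty} \limsup_{N \to \infty} \frac{1}{H} \sum_{h=1}^H \left| \frac{1}{N} \sum_{n=1}^N \langle v_{n+h}, v_n \rangle \right| = 0, $$

then $\frac{1}{N} \sum_{n=1}^N v_n$ converges to zero in norm. In words: if the shifts of a sequence are asymptotically orthogonal to the original on average, then the averages of the sequence are negligible; this is precisely the mechanism by which weakly mixing components average themselves out.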
[This post is authored by Luca Trevisan. – T.]
Notions of “quasirandomness” for graphs and hypergraphs have many applications in combinatorics and computer science. Several past posts by Terry have addressed the role of quasirandom structures in additive combinatorics. A recurring theme is that if an object (a graph, a hypergraph, a subset of integers, …) is quasirandom, then several useful properties can be immediately deduced, but, also, if an object is not quasirandom then it possesses some “structure” that can be usefully exploited in an inductive argument. The contrapositive statements of such randomness-structure dichotomies are that certain simple properties imply quasirandomness. One can view such results as algorithms that can be used to “certify” quasirandomness, and the existence (or, in some cases, conjectural non-existence) of such algorithms has implications in computer science, as I shall discuss in this post.
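A concrete instance of this last point, in the graph setting (this particular formulation is the classical one of Chung, Graham, and Wilson, rather than anything lifted verbatim from the post): a graph $G$ on $n$ vertices with edge density $p$ is quasirandom when every pair of vertex sets spans roughly the expected number of edges,

$$ e(A,B) = p|A||B| + o(n^2) \quad \hbox{for all } A, B \subseteq V(G), $$

and this already follows from the single statistic

$$ \#\{ \hbox{labelled 4-cycles in } G \} = (p^4 + o(1))\, n^4. $$

Since 4-cycles can be counted in polynomial time, this count is exactly the sort of efficiently checkable “certificate” of quasirandomness alluded to above.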
This Thursday I was at the University of Sydney, Australia, giving a public lecture on a favourite topic of mine, “Structure and randomness in the prime numbers“. My slides here are a merge between my slides for a Royal Society meeting and the slides I gave for the UCLA Science Colloquium; now that I have figured out how to use PowerPoint a little better, I was able to make the latter a bit more colourful (and the former less abridged).
The Distinguished Lecture Series at UCLA for this winter quarter is given by Avi Wigderson, who is lecturing on “some topics in computational complexity“. In his first lecture on Wednesday, Avi gave a wonderful talk (in his inimitably entertaining style) on “The power and weakness of randomness in computation“. The talk was based on these slides. He also gave a sort of “encore” on zero-knowledge proofs in more informal discussions after the main talk.
As always, any errors here are due to my transcription and interpretation.
This week I am visiting the University of Washington in Seattle, giving the Milliman Lecture Series for 2007-2008. My chosen theme here is “Recent developments in arithmetic combinatorics“. In my first lecture, I will speak (once again) on how methods in additive combinatorics have allowed us to detect additive patterns in the prime numbers, in particular discussing my joint work with Ben Green. In the second lecture I will discuss how additive combinatorics has made it possible to study the invertibility and spectral behaviour of random discrete matrices, in particular discussing my joint work with Van Vu; and in the third lecture I will discuss how sum-product estimates have recently led to progress in the theory of expanders relating to Lie groups, as well as to sieving over orbits of such groups, in particular presenting work of Jean Bourgain and his coauthors.
I’ve just come back from the 48th Annual IEEE Symposium on the Foundations of Computer Science, better known as FOCS; this year it was held in Providence, near Brown University. (This conference is also being officially reported on by the blog posts of Nicole Immorlica, Luca Trevisan, and Scott Aaronson.) I was there to give a tutorial on some of the tools used these days in additive combinatorics and graph theory to distinguish structure and randomness. In a previous blog post, I had already mentioned that my lecture notes for this were available on the arXiv; now the slides for my tutorial are available too (it covers much the same ground as the lecture notes, and also incorporates some material from my ICM slides, but in a slightly different format).
In the slides, I am tentatively announcing some very recent (and not yet fully written up) work of Ben Green and myself establishing the Gowers inverse conjecture in finite fields in the special case when the function f is a bounded degree polynomial (this is a case which already has some theoretical computer science applications). I hope to expand upon this in a future post. But I will describe here a neat trick I learned at the conference (from the FOCS submission of Bogdanov and Viola) which uses majority voting to enhance a large number of small independent correlations into a much stronger single correlation. This application of majority voting is widespread in computer science (and, of course, in real-world democracies), but I had not previously been aware of its utility to the type of structure/randomness problems I am interested in (in particular, it seems to significantly simplify some of the arguments in the proof of my result with Ben mentioned above); thanks to this conference, I now know to add majority voting to my “toolbox”.
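To see the majority-voting amplification numerically, here is a minimal Python sketch (entirely my own illustration, not code from the Bogdanov-Viola submission; the function name and parameters are invented for the example). Each of many independent “voters” guesses a hidden bit correctly with probability only slightly better than a coin flip, yet their majority is almost always right:

import random

def majority_vote_accuracy(eps=0.05, n_voters=501, trials=2000):
    """Fraction of trials in which the majority of n_voters independent
    weak predictors recovers a hidden bit, when each predictor is
    correct with probability only 1/2 + eps."""
    correct = 0
    for _ in range(trials):
        hidden = random.randint(0, 1)
        # Each voter independently guesses right with probability 1/2 + eps.
        ones = sum(hidden if random.random() < 0.5 + eps else 1 - hidden
                   for _ in range(n_voters))
        majority = 1 if 2 * ones > n_voters else 0  # n_voters odd, so no ties
        correct += (majority == hidden)
    return correct / trials

print(majority_vote_accuracy(n_voters=1))    # a single 55%-accurate voter: ~0.55
print(majority_vote_accuracy(n_voters=501))  # 501 such voters: ~0.99

By the Chernoff bound, the majority’s error probability decays exponentially in $\varepsilon^2 n$, which is the quantitative content of the amplification.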
I’ve just uploaded to the arXiv my lecture notes “Structure and randomness in combinatorics” for my tutorial at the upcoming FOCS 2007 conference in October. This tutorial covers similar ground to my ICM paper (or slides), or my first two Simons lectures, but focuses more on the “nuts-and-bolts” of how structure theorems actually work to separate objects into structured pieces and pseudorandom pieces, for various definitions of “structured” and “pseudorandom”. Given that the target audience consists of computer scientists, I have focused exclusively here on the combinatorial aspects of this dichotomy (applied for instance to functions on the Hamming cube) rather than, say, the ergodic theory aspects (which are covered in Bryna Kra‘s lecture notes from Montreal, or my notes from Montreal for that matter). While most of the known applications of these decompositions are number-theoretic (e.g. my theorem with Ben Green), the number theory aspects are not covered in detail in these notes. (For that, you can read Bernard Host’s Bourbaki article, Ben Green‘s Gauss-Dirichlet article or ICM article, or my Coates article.)
This week I was in London, attending the New Fellows Seminar at the Royal Society. This was a fairly low-key event preceding the formal admissions ceremony; for instance, it is not publicised on their web site. The format was very interesting: they had each of the new Fellows of the Society give a brief (15 minute) presentation of their work in quick succession, in a manner which would be accessible to a diverse audience in the physical and life sciences. The result was a wonderful two-day seminar on the state of the art in many areas of physics, chemistry, engineering, biology, medicine, and mathematics. For instance, I learnt
- How the solar neutrino problem was resolved by the discovery that the neutrino had mass, which did not commute with flavour and hence caused neutrino oscillations, which have since been detected experimentally;
- Why modern aircraft (such as the Dreamliner and A380) are now assembled using (incredibly tough and waterproofed) adhesives instead of bolts or welds, and how adhesion has been enhanced by nanoparticles;
- How the bacterium Helicobacter pylori was recently demonstrated (by two Aussies :-) ) to be a major cause of peptic ulcers (though the exact mechanism is not fully understood), but has also been proposed (somewhat paradoxically) to have a preventative effect against esophageal cancer (cf. the hygiene hypothesis);
- How recent advances in machine learning and image segmentation (including graph cut methods!) now allow computers to identify and track many general classes of objects (e.g. people, cars, animals) simultaneously in real-world images and video, though not quite in real time yet;
- How large-scale structure maps of the universe (such as the 2dF Galaxy Redshift Survey) combine with measurements of the cosmic background radiation (e.g. from WMAP) to demonstrate the existence of both dark matter and dark energy (they have different impacts on the evolution of the curvature of the universe and on the current distribution of visible matter);
- … and 42 other topics like this. (One strongly recurrent theme in the life science talks was just how much recent genomic technologies, such as the genome projects of various key species, have accelerated (by several orders of magnitude!) the ability to identify the genes, proteins, and mechanisms that underlie any given biological function or disease. To paraphrase one speaker, a modern genomics lab could now produce the equivalent of one 1970s PhD thesis in the subject every minute.)