
This Thursday I was at the University of Sydney, Australia, giving a public lecture on a favourite topic of mine, “Structure and randomness in the prime numbers“. My slides here merge my slides for a Royal Society meeting with the slides I gave for the UCLA Science Colloquium; now that I have figured out how to use PowerPoint a little better, I was able to make the latter a bit more colourful (and the former less abridged).

In our final lecture on topological dynamics, we discuss a remarkable theorem of Furstenberg that classifies a major type of topological dynamical system – distal systems – in terms of highly structured (from an algebraic point of view) systems, namely towers of isometric extensions. This theorem is also a model for an important analogous result in ergodic theory, the Furstenberg-Zimmer structure theorem, which we will turn to in a few lectures. We will not be able to prove Furstenberg’s structure theorem for distal systems here in full, but we hope to illustrate some of the key points and ideas.

In this lecture, we move away from recurrence, and instead focus on the structure of topological dynamical systems. One remarkable feature of this subject is that starting from fairly “soft” notions of structure, such as topological structure, one can extract much more “hard” or “rigid” notions of structure, such as geometric or algebraic structure. The key concept needed to capture this structure is that of an isometric system, or more generally an isometric extension, which we shall discuss in this lecture. As an application of this theory we characterise the distribution of polynomial sequences in tori (a baby case of a variant of Ratner’s theorem due to (Leon) Green, which we will cover later in this course).

This week I am visiting the University of Washington in Seattle, giving the Milliman Lecture Series for 2007-2008. My chosen theme here is “Recent developments in arithmetic combinatorics“. In my first lecture, I will speak (once again) on how methods in additive combinatorics have allowed us to detect additive patterns in the prime numbers, in particular discussing my joint work with Ben Green. In the second lecture I will discuss how additive combinatorics has made it possible to study the invertibility and spectral behaviour of random discrete matrices, in particular discussing my joint work with Van Vu; and in the third lecture I will discuss how sum-product estimates have recently led to progress in the theory of expanders relating to Lie groups, as well as to sieving over orbits of such groups, in particular presenting work of Jean Bourgain and his coauthors.

I’ve just come back from the 48th Annual IEEE Symposium on the Foundations of Computer Science, better known as FOCS; this year it was held in Providence, near Brown University. (This conference is also being officially reported on by the blog posts of Nicole Immorlica, Luca Trevisan, and Scott Aaronson.) I was there to give a tutorial on some of the tools used these days in additive combinatorics and graph theory to distinguish structure and randomness. In a previous blog post, I had already mentioned that my lecture notes for this were available on the arXiv; now the slides for my tutorial are available too (they cover much the same ground as the lecture notes, and also incorporate some material from my ICM slides, but in a slightly different format).

In the slides, I am tentatively announcing some very recent (and not yet fully written up) work of Ben Green and myself establishing the Gowers inverse conjecture in finite fields in the special case when the function f is a bounded degree polynomial (this is a case which already has some theoretical computer science applications). I hope to expand upon this in a future post. But I will describe here a neat trick I learned at the conference (from the FOCS submission of Bogdanov and Viola) which uses majority voting to enhance a large number of small independent correlations into a much stronger single correlation. This application of majority voting is widespread in computer science (and, of course, in real-world democracies), but I had not previously been aware of its utility to the type of structure/randomness problems I am interested in (in particular, it seems to significantly simplify some of the arguments in the proof of my result with Ben mentioned above); thanks to this conference, I now know to add majority voting to my “toolbox”.
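To illustrate the flavour of this trick, here is a toy simulation of my own (not the Bogdanov–Viola argument itself, and all names here are my invention): a hidden bit is guessed by many independent weak predictors, each correct only slightly more than half the time, and a majority vote over them recovers the bit with near-certainty, as the Chernoff bound predicts.

```python
import random

def majority_amplify(eps, k, trials=1000, seed=0):
    """Fraction of trials in which the majority of k independent weak
    guesses (each correct with probability 1/2 + eps) recovers a hidden bit."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        hidden = rng.randint(0, 1)
        # Each weak guess independently agrees with the hidden bit
        # with probability 1/2 + eps.
        guesses = [hidden if rng.random() < 0.5 + eps else 1 - hidden
                   for _ in range(k)]
        vote = 1 if 2 * sum(guesses) > k else 0
        correct += (vote == hidden)
    return correct / trials

weak = majority_amplify(0.05, 1)      # a single weak guess: about 55% correct
strong = majority_amplify(0.05, 1001) # majority of 1001 guesses: near-certain
```

The point is that a large number of small independent correlations of size $\epsilon$ can be converted into a single correlation exponentially close to 1, at the price of polling about $1/\epsilon^2$ voters.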

Ben Green and I have just uploaded our paper “The quantitative behaviour of polynomial orbits on nilmanifolds” to the arXiv (it is shortly to be submitted to a journal, once a companion paper is finished). This paper grew out of our efforts to prove the Möbius and Nilsequences conjecture MN(s) from our earlier paper, which has applications to counting various linear patterns in primes (Dickson’s conjecture). These efforts were successful – as the companion paper will reveal – but it turned out that in order to establish this number-theoretic conjecture, we had to first establish a purely dynamical quantitative result about polynomial sequences in nilmanifolds, very much in the spirit of the celebrated theorems of Marina Ratner on unipotent flows; I plan to discuss her theorems in more detail in a followup post to this one. In this post I will not discuss the number-theoretic applications or the connections with Ratner’s theorem, and instead describe our result from a slightly different viewpoint, starting from some very simple examples and gradually moving to the general situation considered in our paper.

To begin with, consider an infinite linear sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ in the unit circle ${\Bbb R}/{\Bbb Z}$, where $\alpha, \beta \in {\Bbb R}/{\Bbb Z}$. (One can think of this sequence as the orbit of $\beta$ under the action of the shift operator $T: x \mapsto x +\alpha$ on the unit circle.) This sequence can do one of two things:

1. If $\alpha$ is rational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is periodic and thus only takes on finitely many values.
2. If $\alpha$ is irrational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is dense in ${\Bbb R}/{\Bbb Z}$. In fact, it is not just dense but equidistributed, which means that $\displaystyle\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N F( n \alpha + \beta ) = \int_{{\Bbb R}/{\Bbb Z}} F$

for all continuous functions $F: {\Bbb R}/{\Bbb Z} \to {\Bbb C}$. This statement is known as the equidistribution theorem.
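One can see the dichotomy numerically with a few lines of toy code (my own sketch; the function names are mine). I take a continuous test function $F$ of mean zero: for irrational $\alpha$ the time averages shrink toward the space average $0$, while for a rational $\alpha$ the periodic orbit can see only finitely many values of $F$ and the average need not vanish.

```python
import math

def time_average(alpha, beta, N, F):
    """(1/N) * sum_{n=1}^{N} F(n*alpha + beta mod 1)."""
    return sum(F((n * alpha + beta) % 1.0) for n in range(1, N + 1)) / N

# F(x) = cos(6*pi*x) is continuous with integral 0 over R/Z.
F = lambda x: math.cos(6 * math.pi * x)

# Irrational alpha: equidistribution forces the average toward 0.
avg_irr = time_average(math.sqrt(2), 0.0, 100_000, F)

# Rational alpha = 1/3: the orbit is the periodic set {1/3, 2/3, 0},
# on which F is identically 1, so the average stays at 1.
avg_rat = time_average(1.0 / 3.0, 0.0, 100_000, F)
```

(Floating-point arithmetic replaces $\sqrt{2}$ by a nearby rational of astronomically large denominator, which is harmless at this range of $N$.)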

We thus see that infinite linear sequences exhibit a sharp dichotomy in behaviour between periodicity and equidistribution; intermediate scenarios, such as concentration on a fractal set (such as a Cantor set), do not occur with linear sequences. This dichotomy between structure and randomness is in stark contrast to exponential sequences such as $( 2^n \alpha)_{n \in {\Bbb N}}$, which can exhibit an extremely wide spectrum of behaviours. For instance, the question of whether $(10^n \pi)_{n \in {\Bbb N}}$ is equidistributed mod 1 is an old unsolved problem, equivalent to asking whether $\pi$ is normal in base 10.

Intermediate between linear sequences and exponential sequences are polynomial sequences $(P(n))_{n \in {\Bbb N}}$, where P is a polynomial with coefficients in ${\Bbb R}/{\Bbb Z}$. A famous theorem of Weyl asserts that infinite polynomial sequences enjoy the same dichotomy as their linear counterparts, namely that they are either periodic (which occurs when all non-constant coefficients are rational) or equidistributed (which occurs when at least one non-constant coefficient is irrational). Thus for instance the fractional parts $\{ \sqrt{2}n^2\}$ of $\sqrt{2} n^2$ are equidistributed modulo 1. This theorem is proven by Fourier analysis combined with non-trivial bounds on Weyl sums.
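One can again test Weyl's theorem empirically with a small sketch of my own (the names here are mine, not from the paper): averaging a mean-zero continuous function along the fractional parts of $\sqrt{2} n^2$ gives something close to zero, as equidistribution predicts.

```python
import math

def poly_average(P, N, F):
    """(1/N) * sum_{n=1}^{N} F(P(n) mod 1)."""
    return sum(F(P(n) % 1.0) for n in range(1, N + 1)) / N

# P(n) = sqrt(2) * n^2 has an irrational non-constant coefficient,
# so Weyl's theorem predicts equidistribution of its fractional parts.
P = lambda n: math.sqrt(2) * n * n
F = lambda x: math.cos(2 * math.pi * x)  # continuous, integral 0 over R/Z

avg = poly_average(P, 100_000, F)  # small, consistent with equidistribution
```

In fact the underlying Weyl sum $\sum_{n \le N} e(\sqrt{2} n^2)$ is typically of size about $\sqrt{N}$, so the average above should be of size roughly $N^{-1/2}$.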

For our applications, we are interested in strengthening these results in two directions. Firstly, we wish to generalise from polynomial sequences in the circle ${\Bbb R}/{\Bbb Z}$ to polynomial sequences $(g(n)\Gamma)_{n \in {\Bbb N}}$ in other homogeneous spaces, in particular nilmanifolds. Secondly, we need quantitative equidistribution results for finite orbits $(g(n)\Gamma)_{1 \leq n \leq N}$ rather than qualitative equidistribution for infinite orbits $(g(n)\Gamma)_{n \in {\Bbb N}}$.

I’ve just uploaded to the arXiv my lecture notes “Structure and randomness in combinatorics” for my tutorial at the upcoming FOCS 2007 conference in October. This tutorial covers similar ground as my ICM paper (or slides), or my first two Simons lectures, but focuses more on the “nuts-and-bolts” of how structure theorems actually work to separate objects into structured pieces and pseudorandom pieces, for various definitions of “structured” and “pseudorandom”. Given that the target audience consists of computer scientists, I have focused exclusively here on the combinatorial aspects of this dichotomy (applied for instance to functions on the Hamming cube) rather than, say, the ergodic theory aspects (which are covered in Bryna Kra‘s lecture notes from Montreal, or my notes from Montreal for that matter). While most of the known applications of these decompositions are number-theoretic (e.g. my theorem with Ben Green), the number theory aspects are not covered in detail in these notes. (For that, you can read Bernard Host’s Bourbaki article, Ben Green‘s Gauss-Dirichlet article or ICM article, or my Coates article.)

This week I was in London, attending the New Fellows Seminar at the Royal Society. This was a fairly low-key event preceding the formal admissions ceremony; for instance, it is not publicised on their web site. The format was very interesting: they had each of the new Fellows of the Society give a brief (15 minute) presentation of their work in quick succession, in a manner which would be accessible to a diverse audience in the physical and life sciences. The result was a wonderful two-day seminar on the state of the art in many areas of physics, chemistry, engineering, biology, medicine, and mathematics. For instance, I learnt

• How the solar neutrino problem was resolved by the discovery that the neutrino had mass, which did not commute with flavour and hence caused neutrino oscillations, which have since been detected experimentally;
• Why modern aircraft (such as the Dreamliner and A380) are now assembled using (incredibly tough and waterproofed) adhesives instead of bolts or welds, and how adhesion has been enhanced by nanoparticles;
• How the bacterium Helicobacter pylori was recently demonstrated (by two Aussies :-) ) to be a major cause of peptic ulcers (though the exact mechanism is not fully understood), but has also been proposed (somewhat paradoxically) to have a preventative effect against esophageal cancer (cf. the hygiene hypothesis);
• How recent advances in machine learning and image segmentation (including graph cut methods!) now allow computers to identify and track many general classes of objects (e.g. people, cars, animals) simultaneously in real-world images and video, though not quite in real-time yet;
• How large-scale structure maps of the universe (such as the 2dF Galaxy Redshift Survey) combine with measurements of the cosmic background radiation (e.g. from WMAP) to demonstrate the existence of both dark matter and dark energy (they have different impacts on the evolution of the curvature of the universe and on the current distribution of visible matter);
• … and 42 other topics like this. (One strongly recurrent theme in the life science talks was just how much recent genomic technologies, such as the genome projects of various key species, have accelerated (by several orders of magnitude!) the ability to identify the genes, proteins, and mechanisms that underlie any given biological function or disease. To paraphrase one speaker, a modern genomics lab could now produce the equivalent of one 1970s PhD thesis in the subject every minute.)

This post is a sequel of sorts to my earlier post on hard and soft analysis, and the finite convergence principle. Here, I want to discuss a well-known theorem in infinitary soft analysis – the Lebesgue differentiation theorem – and whether there is any meaningful finitary version of this result. Along the way, it turns out that we will uncover a simple analogue of the Szemerédi regularity lemma, for subsets of the interval rather than for graphs. (Actually, regularity lemmas seem to appear in just about any context in which fine-scaled objects can be approximated by coarse-scaled ones.) The connection between regularity lemmas and results such as the Lebesgue differentiation theorem was recently highlighted by Elek and Szegedy, while the connection between the finite convergence principle and results such as the pointwise ergodic theorem (which is a close cousin of the Lebesgue differentiation theorem) was recently detailed by Avigad, Gerhardy, and Towsner.

The Lebesgue differentiation theorem has many formulations, but we will avoid the strongest versions and just stick to the following model case for simplicity:

Lebesgue differentiation theorem. If $f: [0,1] \to [0,1]$ is Lebesgue measurable, then for almost every $x \in [0,1]$ we have $f(x) = \lim_{r \to 0} \frac{1}{r} \int_x^{x+r} f(y)\ dy$. Equivalently, the fundamental theorem of calculus $f(x) = \frac{d}{dy} \int_0^y f(z) dz|_{y=x}$ is true for almost every x in [0,1].

Here we use the oriented definite integral, thus $\int_x^y = - \int_y^x$. Specialising to the case where $f = 1_A$ is an indicator function, we obtain the Lebesgue density theorem as a corollary:

Lebesgue density theorem. Let $A \subset [0,1]$ be Lebesgue measurable. Then for almost every $x \in A$, we have $\frac{|A \cap [x-r,x+r]|}{2r} \to 1$ as $r \to 0^+$, where |A| denotes the Lebesgue measure of A.

In other words, almost all the points x of A are points of density of A, which roughly speaking means that as one passes to finer and finer scales, the immediate vicinity of x becomes increasingly saturated with A. (Points of density are like robust versions of interior points, thus the Lebesgue density theorem is an assertion that measurable sets are almost like open sets. This is Littlewood’s first principle.) One can also deduce the Lebesgue differentiation theorem back from the Lebesgue density theorem by approximating f by a finite linear combination of indicator functions; we leave this as an exercise.
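For a finite union of intervals one can compute these density ratios exactly; the following toy computation (my own illustration, not part of the proof) shows the three behaviours at an interior point, an exterior point, and a boundary point. The boundary points, where the density sticks at $1/2$, form a null set, consistent with the "almost every" in the theorem.

```python
def intersect_len(intervals, lo, hi):
    """Length of the intersection of a union of disjoint intervals with [lo, hi]."""
    return sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in intervals)

A = [(0.2, 0.4), (0.6, 0.7)]  # a simple measurable set with |A| = 0.3

def density(x, r):
    """|A intersect [x-r, x+r]| / (2r)."""
    return intersect_len(A, x - r, x + r) / (2 * r)

# x = 0.3 is an interior point of A: the density tends to 1 as r -> 0+.
# x = 0.5 lies outside the closure of A: the density tends to 0.
# x = 0.2 is a boundary point of A: the density stays at 1/2.
for r in (0.1, 0.01, 0.001):
    print(density(0.3, r), density(0.5, r), density(0.2, r))
```

Of course, for a genuinely rough measurable set (such as a fat Cantor set) the convergence of these ratios is far less obvious, which is what gives the density theorem its content.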

[This lecture is also doubling as this week’s “open problem of the week”, as it (eventually) discusses the soliton resolution conjecture.]

In this third lecture, I will talk about how the dichotomy between structure and randomness pervades the study of two different types of partial differential equations (PDEs):

• Parabolic PDE, such as the heat equation $u_t = \Delta u$, which turn out to play an important role in the modern study of geometric topology; and
• Hamiltonian PDE, such as the Schrödinger equation $u_t = i \Delta u$, which are heuristically related (via Liouville’s theorem) to measure-preserving actions of the real line (or time axis) ${\Bbb R}$, somewhat in analogy to how combinatorial number theory and graph theory were related to measure-preserving actions of ${\Bbb Z}$ and $S_\infty$ respectively, as discussed in the previous lecture.

(In physics, one would also insert some physical constants, such as Planck’s constant $\hbar$, but for the discussion here it is convenient to normalise away all of these constants.)
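As a quick illustration of the smoothing behaviour of parabolic flow, here is a minimal explicit finite-difference sketch of the heat equation on a periodic grid (my own toy code, not taken from the lectures): high-frequency components of the initial data decay far faster than low-frequency ones, which is one concrete sense in which the heat flow destroys "randomness" and retains "structure".

```python
import math

def heat_step(u, dt, dx):
    """One explicit Euler step of u_t = u_xx on a periodic grid."""
    n = len(u)
    return [u[i] + dt / dx**2 * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
            for i in range(n)]

# Initial data: a smooth low-frequency wave plus a rough high-frequency one.
n = 100
dx = 1.0 / n
dt = 0.4 * dx * dx  # explicit scheme is stable only for dt <= dx^2 / 2
u = [math.sin(2 * math.pi * i * dx) + 0.5 * math.sin(40 * math.pi * i * dx)
     for i in range(n)]

for _ in range(500):
    u = heat_step(u, dt, dx)
# The frequency-20 component has been damped to essentially zero, while
# the frequency-1 component has only decayed by a factor of about 2.
```

(The Fourier mode $\sin(2\pi k x)$ decays under the discrete flow by a factor $1 - 4 \frac{dt}{dx^2} \sin^2(\pi k \, dx)$ per step, so the decay rate grows rapidly with the frequency $k$.)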
