You are currently browsing the tag archive for the ‘structure’ tag.

This week I am visiting the University of Washington in Seattle, giving the Milliman Lecture Series for 2007-2008. My chosen theme here is “Recent developments in arithmetic combinatorics”. In my first lecture, I will speak (once again) on how methods in additive combinatorics have allowed us to detect additive patterns in the prime numbers, in particular discussing my joint work with Ben Green. In the second lecture I will discuss how additive combinatorics has made it possible to study the invertibility and spectral behaviour of random discrete matrices, in particular discussing my joint work with Van Vu; and in the third lecture I will discuss how sum-product estimates have recently led to progress in the theory of expanders relating to Lie groups, as well as to sieving over orbits of such groups, in particular presenting work of Jean Bourgain and his coauthors.

Read the rest of this entry »

I’ve just come back from the 48th Annual IEEE Symposium on the Foundations of Computer Science, better known as FOCS; this year it was held in Providence, near Brown University. (This conference is also being officially reported on by the blog posts of Nicole Immorlica, Luca Trevisan, and Scott Aaronson.) I was there to give a tutorial on some of the tools used these days in additive combinatorics and graph theory to distinguish structure and randomness. In a previous blog post, I had already mentioned that my lecture notes for this were available on the arXiv; now the slides for my tutorial are available too (it covers much the same ground as the lecture notes, and also incorporates some material from my ICM slides, but in a slightly different format).

In the slides, I am tentatively announcing some very recent (and not yet fully written up) work of Ben Green and myself establishing the Gowers inverse conjecture in finite fields in the special case when the function f is a bounded degree polynomial (this is a case which already has some theoretical computer science applications). I hope to expand upon this in a future post. But I will describe here a neat trick I learned at the conference (from the FOCS submission of Bogdanov and Viola) which uses majority voting to enhance a large number of small independent correlations into a much stronger single correlation. This application of majority voting is widespread in computer science (and, of course, in real-world democracies), but I had not previously been aware of its utility to the type of structure/randomness problems I am interested in (in particular, it seems to significantly simplify some of the arguments in the proof of my result with Ben mentioned above); thanks to this conference, I now know to add majority voting to my “toolbox”.
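To make the mechanism concrete, here is a toy simulation in Python (this illustrates only the general majority-voting phenomenon, not the specific argument of Bogdanov and Viola; the advantage eps and the number of voters k are arbitrary choices of mine). Each individual “voter” guesses a random sign correctly only slightly more than half the time, but by Chernoff’s inequality the majority vote of many independent voters is correct with probability close to 1:

```python
import random

def correlation(predict, trials=20_000, seed=1):
    # Empirical correlation E[predict(target) * target] over uniform random signs.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        target = rng.choice((-1, 1))
        total += predict(target, rng) * target
    return total / trials

eps, k = 0.05, 401   # per-voter advantage and (odd) number of voters: arbitrary choices

def weak(target, rng):
    # A single weak voter: agrees with the target with probability only 1/2 + eps.
    return target if rng.random() < 0.5 + eps else -target

def majority(target, rng):
    # Majority vote of k independent weak votes on the same target.
    return 1 if sum(weak(target, rng) for _ in range(k)) > 0 else -1

print(f"one weak voter:  correlation ~ {correlation(weak):+.3f}")      # about 2*eps = +0.10
print(f"majority of {k}: correlation ~ {correlation(majority):+.3f}")  # close to +1
```

Thus a large number of small independent correlations of size 2\epsilon have been amplified into a single correlation close to 1.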

Read the rest of this entry »

Ben Green and I have just uploaded our paper “The quantitative behaviour of polynomial orbits on nilmanifolds” to the arXiv (and shortly to be submitted to a journal, once a companion paper is finished). This paper grew out of our efforts to prove the Möbius and Nilsequences conjecture MN(s) from our earlier paper, which has applications to counting various linear patterns in primes (Dickson’s conjecture). These efforts were successful – as the companion paper will reveal – but it turned out that in order to establish this number-theoretic conjecture, we had to first establish a purely dynamical quantitative result about polynomial sequences in nilmanifolds, very much in the spirit of the celebrated theorems of Marina Ratner on unipotent flows; I plan to discuss her theorems in more detail in a followup post to this one.

In this post I will not discuss the number-theoretic applications or the connections with Ratner’s theorem, and instead describe our result from a slightly different viewpoint, starting from some very simple examples and gradually moving to the general situation considered in our paper.

To begin with, consider an infinite linear sequence (n \alpha + \beta)_{n \in {\Bbb N}} in the unit circle {\Bbb R}/{\Bbb Z}, where \alpha, \beta \in {\Bbb R}/{\Bbb Z}. (One can think of this sequence as the orbit of \beta under the action of the shift operator T: x \mapsto x +\alpha on the unit circle.) This sequence can do one of two things:

  1. If \alpha is rational, then the sequence (n \alpha + \beta)_{n \in {\Bbb N}} is periodic and thus only takes on finitely many values.
  2. If \alpha is irrational, then the sequence (n \alpha + \beta)_{n \in {\Bbb N}} is dense in {\Bbb R}/{\Bbb Z}. In fact, it is not just dense but equidistributed, which is to say that

    \displaystyle\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N F( n \alpha + \beta ) = \int_{{\Bbb R}/{\Bbb Z}} F

    for all continuous functions F: {\Bbb R}/{\Bbb Z} \to {\Bbb C}. This statement is known as the equidistribution theorem. (A quick numerical check of this convergence is sketched just after this list.)
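
As a quick numerical sanity check of this limit, one can take (say) \alpha = \sqrt{2}, \beta = 0.3, and the test function F(x) = \cos(2\pi x), whose integral over {\Bbb R}/{\Bbb Z} vanishes; these choices are of course arbitrary:

```python
import math

alpha, beta = math.sqrt(2), 0.3          # alpha irrational, so case 2 applies
F = lambda x: math.cos(2 * math.pi * x)  # continuous test function; its integral over R/Z is 0

for N in (10**2, 10**4, 10**6):
    avg = sum(F((n * alpha + beta) % 1.0) for n in range(1, N + 1)) / N
    print(f"N = {N:>7}: average = {avg:+.6f}")   # tends to the integral, i.e. to 0
```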

We thus see that infinite linear sequences exhibit a sharp dichotomy in behaviour between periodicity and equidistribution; intermediate scenarios, such as concentration on a fractal set (such as a Cantor set), do not occur with linear sequences. This dichotomy between structure and randomness is in stark contrast to exponential sequences such as ( 2^n \alpha)_{n \in {\Bbb N}}, which can exhibit an extremely wide spectrum of behaviours. For instance, the question of whether (10^n \pi)_{n \in {\Bbb N}} is equidistributed mod 1 is an old unsolved problem, equivalent to asking whether \pi is normal in base 10.

Intermediate between linear sequences and exponential sequences are polynomial sequences (P(n))_{n \in {\Bbb N}}, where P is a polynomial with coefficients in {\Bbb R}/{\Bbb Z}. A famous theorem of Weyl asserts that infinite polynomial sequences enjoy the same dichotomy as their linear counterparts, namely that they are either periodic (which occurs when all non-constant coefficients are rational) or equidistributed (which occurs when at least one non-constant coefficient is irrational). Thus for instance the fractional parts \{ \sqrt{2}n^2\} of \sqrt{2} n^2 are equidistributed modulo 1. This theorem is proven by Fourier analysis combined with non-trivial bounds on Weyl sums.
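Weyl’s theorem can be tested numerically via the Weyl criterion: a sequence (x_n) is equidistributed modulo 1 if and only if the exponential sums \frac{1}{N} \sum_{n=1}^N e^{2\pi i k x_n} tend to zero for every non-zero integer k. Here is a rough check of this decay for x_n = \sqrt{2} n^2, at the single frequency k=1 (double-precision rounding limits how far N can be pushed in this naive computation):

```python
import cmath, math

def weyl_sum(alpha, N, k=1):
    # Normalised exponential sum (1/N) * sum_{n <= N} e^{2 pi i k alpha n^2}.
    return sum(cmath.exp(2j * math.pi * k * alpha * n * n) for n in range(1, N + 1)) / N

for N in (10**2, 10**3, 10**4, 10**5):
    print(f"N = {N:>6}: |S_N| = {abs(weyl_sum(math.sqrt(2), N)):.4f}")  # decays towards 0
```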

For our applications, we are interested in strengthening these results in two directions. Firstly, we wish to generalise from polynomial sequences in the circle {\Bbb R}/{\Bbb Z} to polynomial sequences (g(n)\Gamma)_{n \in {\Bbb N}} in other homogeneous spaces, in particular nilmanifolds. Secondly, we need quantitative equidistribution results for finite orbits (g(n)\Gamma)_{1 \leq n \leq N} rather than qualitative equidistribution for infinite orbits (g(n)\Gamma)_{n \in {\Bbb N}}.
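To convey the flavour of the second point in the simplest setting of the circle (our paper, of course, works on nilmanifolds rather than the circle, which this toy computation does not touch), one can quantify how far the finite orbit (n\alpha \hbox{ mod } 1)_{1 \leq n \leq N} is from uniform by computing its discrepancy, which for a badly approximable number such as \alpha = \sqrt{2} decays at the explicit rate O(\log N / N):

```python
import math

def star_discrepancy(points):
    # Exact star discrepancy of a finite set in [0,1), via the sorted-points formula.
    xs, N = sorted(points), len(points)
    return max(max((i + 1) / N - x, x - i / N) for i, x in enumerate(xs))

alpha = math.sqrt(2)
for N in (10**2, 10**3, 10**4):
    orbit = [(n * alpha) % 1.0 for n in range(1, N + 1)]
    print(f"N = {N:>5}: discrepancy = {star_discrepancy(orbit):.5f}")  # roughly (log N)/N
```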

Read the rest of this entry »

I’ve just uploaded to the arXiv my lecture notes “Structure and randomness in combinatorics” for my tutorial at the upcoming FOCS 2007 conference in October. This tutorial covers similar ground to my ICM paper (or slides), or my first two Simons lectures, but focuses more on the “nuts-and-bolts” of how structure theorems actually work to separate objects into structured pieces and pseudorandom pieces, for various definitions of “structured” and “pseudorandom”. Given that the target audience consists of computer scientists, I have focused exclusively here on the combinatorial aspects of this dichotomy (applied for instance to functions on the Hamming cube) rather than, say, the ergodic theory aspects (which are covered in Bryna Kra‘s lecture notes from Montreal, or my notes from Montreal for that matter). While most of the known applications of these decompositions are number-theoretic (e.g. my theorem with Ben Green), the number theory aspects are not covered in detail in these notes. (For that, you can read Bernard Host’s Bourbaki article, Ben Green‘s Gauss-Dirichlet article or ICM article, or my Coates article.)

This week I was in London, attending the New Fellows Seminar at the Royal Society. This was a fairly low-key event preceding the formal admissions ceremony; for instance, it is not publicised on their web site. The format was very interesting: they had each of the new Fellows of the Society give a brief (15 minute) presentation of their work in quick succession, in a manner which would be accessible to a diverse audience in the physical and life sciences. The result was a wonderful two-day seminar on the state of the art in many areas of physics, chemistry, engineering, biology, medicine, and mathematics. For instance, I learnt

  • How the solar neutrino problem was resolved by the discovery that the neutrino had mass, which did not commute with flavour and hence caused neutrino oscillations, which have since been detected experimentally;
  • Why modern aircraft (such as the Dreamliner and A380) are now assembled using (incredibly tough and waterproofed) adhesives instead of bolts or welds, and how adhesion has been enhanced by nanoparticles;
  • How the bacterium Helicobacter pylori was recently demonstrated (by two Aussies :-) ) to be a major cause of peptic ulcers (though the exact mechanism is not fully understood), but has also been proposed (somewhat paradoxically) to have a preventative effect against esophageal cancer (cf. the hygiene hypothesis);
  • How recent advances in machine learning and image segmentation (including graph cut methods!) now allow computers to identify and track many general classes of objects (e.g. people, cars, animals) simultaneously in real-world images and video, though not quite in real-time yet;
  • How large-scale structure maps of the universe (such as the 2dF Galaxy Redshift Survey) combine with measurements of the cosmic background radiation (e.g. from WMAP) to demonstrate the existence of both dark matter and dark energy (they have different impacts on the evolution of the curvature of the universe and on the current distribution of visible matter);
  • … and 42 other topics like this. (One strongly recurrent theme in the life science talks was just how much recent genomic technologies, such as the genome projects of various key species, have accelerated (by several orders of magnitude!) the ability to identify the genes, proteins, and mechanisms that underlie any given biological function or disease. To paraphrase one speaker, a modern genomics lab could now produce the equivalent of one 1970s PhD thesis in the subject every minute.)

Read the rest of this entry »

This post is a sequel of sorts to my earlier post on hard and soft analysis, and the finite convergence principle. Here, I want to discuss a well-known theorem in infinitary soft analysis – the Lebesgue differentiation theorem – and whether there is any meaningful finitary version of this result. Along the way, it turns out that we will uncover a simple analogue of the Szemerédi regularity lemma, for subsets of the interval rather than for graphs. (Actually, regularity lemmas seem to appear in just about any context in which fine-scaled objects can be approximated by coarse-scaled ones.) The connection between regularity lemmas and results such as the Lebesgue differentiation theorem was recently highlighted by Elek and Szegedy, while the connection between the finite convergence principle and results such as the pointwise ergodic theorem (which is a close cousin of the Lebesgue differentiation theorem) was recently detailed by Avigad, Gerhardy, and Towsner.

The Lebesgue differentiation theorem has many formulations, but we will avoid the strongest versions and just stick to the following model case for simplicity:

Lebesgue differentiation theorem. If f: [0,1] \to [0,1] is Lebesgue measurable, then for almost every x \in [0,1] we have f(x) = \lim_{r \to 0} \frac{1}{r} \int_x^{x+r} f(y)\ dy. Equivalently, the fundamental theorem of calculus f(x) = \frac{d}{dy} \int_0^y f(z) dz|_{y=x} is true for almost every x in [0,1].

Here we use the oriented definite integral, thus \int_x^y = - \int_y^x. Specialising to the case where f = 1_A is an indicator function, we obtain the Lebesgue density theorem as a corollary:

Lebesgue density theorem. Let A \subset [0,1] be Lebesgue measurable. Then for almost every x \in A, we have \frac{|A \cap [x-r,x+r]|}{2r} \to 1 as r \to 0^+, where |A| denotes the Lebesgue measure of A.

In other words, almost all the points x of A are points of density of A, which roughly speaking means that as one passes to finer and finer scales, the immediate vicinity of x becomes increasingly saturated with A. (Points of density are like robust versions of interior points, thus the Lebesgue density theorem is an assertion that measurable sets are almost like open sets. This is Littlewood’s first principle.) One can also deduce the Lebesgue differentiation theorem back from the Lebesgue density theorem by approximating f by a finite linear combination of indicator functions; we leave this as an exercise.
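Here is a trivial but concrete numerical illustration of the density theorem, using the model set A = [0,1/2] (for which all the relevant measures can be computed exactly): the density at the interior point x = 0.25 is already 1 at every small scale, while at the boundary point x = 0.5, which lies in the measure-zero exceptional set, the density stays at 1/2:

```python
def measure_A(a, b):
    # Lebesgue measure of A ∩ [a, b] for the model set A = [0, 1/2].
    return max(min(b, 0.5) - max(a, 0.0), 0.0)

def density(x, r):
    # |A ∩ [x - r, x + r]| / (2r)
    return measure_A(x - r, x + r) / (2 * r)

for r in (0.1, 0.01, 0.001):
    print(f"r = {r:>5}: density at 0.25 = {density(0.25, r):.3f}, "
          f"at 0.5 = {density(0.5, r):.3f}")
```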

Read the rest of this entry »

[This lecture is also doubling as this week’s “open problem of the week”, as it (eventually) discusses the soliton resolution conjecture.]

In this third lecture, I will talk about how the dichotomy between structure and randomness pervades the study of two different types of partial differential equations (PDEs):

  • Parabolic PDE, such as the heat equation u_t = \Delta u, which turn out to play an important role in the modern study of geometric topology; and
  • Hamiltonian PDE, such as the Schrödinger equation u_t = i \Delta u, which are heuristically related (via Liouville’s theorem) to measure-preserving actions of the real line (or time axis) {\Bbb R}, somewhat in analogy to how combinatorial number theory and graph theory were related to measure-preserving actions of {\Bbb Z} and S_\infty respectively, as discussed in the previous lecture.

(In physics, one would also insert some physical constants, such as Planck’s constant \hbar, but for the discussion here it is convenient to normalise away all of these constants.)
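The contrast between the two classes of equations can already be seen numerically in one dimension on the torus, where both flows can be solved exactly mode-by-mode in Fourier space. In the sketch below (with an arbitrary Gaussian bump as initial data), the heat flow dissipates the L^2 norm by damping high frequencies, while the Schrödinger flow preserves it exactly, reflecting the unitary (measure-preserving) character of Hamiltonian evolution:

```python
import numpy as np

N, t = 256, 0.05
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)   # integer Fourier frequencies

u0 = np.exp(-10 * (x - np.pi) ** 2).astype(complex)  # an arbitrary smooth initial bump
u0_hat = np.fft.fft(u0)

# Exact evolution mode by mode: the heat flow u_t = Delta u damps the k-th mode
# by e^{-k^2 t}, while the Schrodinger flow u_t = i Delta u only rotates its phase.
heat = np.fft.ifft(np.exp(-k**2 * t) * u0_hat)
schrodinger = np.fft.ifft(np.exp(-1j * k**2 * t) * u0_hat)

L2 = lambda u: np.sqrt(np.mean(np.abs(u) ** 2))
print(f"initial L2 norm:    {L2(u0):.6f}")
print(f"heat flow, t={t}:   {L2(heat):.6f}")         # strictly smaller: dissipation
print(f"Schrodinger, t={t}: {L2(schrodinger):.6f}")  # unchanged: unitary evolution
```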

Read the rest of this entry »

In this second lecture, I wish to talk about the dichotomy between structure and randomness as it manifests itself in four closely related areas of mathematics:

  • Combinatorial number theory, which seeks to find patterns in unstructured dense sets (or colourings) of integers;
  • Ergodic theory (or more specifically, multiple recurrence theory), which seeks to find patterns in positive-measure sets under the action of a discrete dynamical system on probability spaces (or more specifically, measure-preserving actions of the integers {\Bbb Z});
  • Graph theory, or more specifically the portion of this theory concerned with finding patterns in large unstructured dense graphs; and
  • Ergodic graph theory, a very new and undeveloped subject which, roughly speaking, seems to be concerned with the patterns within a measure-preserving action of the infinite permutation group S_\infty, which is one of several models we have available to study infinite “limits” of graphs.

The two “discrete” (or “finitary”, or “quantitative”) fields of combinatorial number theory and graph theory happen to be related to each other, basically by using the Cayley graph construction; I will give an example of this shortly. The two “continuous” (or “infinitary”, or “qualitative”) fields of ergodic theory and ergodic graph theory are at present only related on the level of analogy and informal intuition, but hopefully some more systematic connections between them will appear soon.
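Here is one standard instance of that connection, verified by brute force in Python (with arbitrary small choices of N and A; I make no claim that this is the precise example given in the lecture). Given A \subset {\Bbb Z}/N{\Bbb Z} with N odd, form a tripartite graph on three copies of {\Bbb Z}/N{\Bbb Z}, joining x to y when y - x \in A, y to z when z - y \in A, and x to z when (z-x)/2 \in A; triangles in this graph then correspond exactly to (possibly degenerate) three-term arithmetic progressions in A:

```python
from itertools import product

N = 31                       # odd, so that 2 is invertible mod N
A = {1, 4, 6, 10, 17, 25}    # an arbitrary small subset of Z/NZ
inv2 = pow(2, -1, N)         # inverse of 2 mod N (Python 3.8+)

# Triangles (x, y, z) of the tripartite graph: x ~ y iff y - x in A,
# y ~ z iff z - y in A, and x ~ z iff (z - x)/2 in A.
tri = sum(1 for x, y, z in product(range(N), repeat=3)
          if (y - x) % N in A and (z - y) % N in A and ((z - x) * inv2) % N in A)

# Triples (a, b, c) in A^3 with a + b = 2c, i.e. 3-term progressions a, c, b in A
# (degenerate progressions with a = b = c are included in the count).
ap3 = sum(1 for a, b, c in product(A, repeat=3) if (a + b) % N == (2 * c) % N)

print(tri, "==", N * ap3)    # each progression yields exactly one triangle per base point x
```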

On the other hand, we have some very rigorous connections between combinatorial number theory and ergodic theory, and also (more recently) between graph theory and ergodic graph theory, basically by the procedure of viewing the infinitary continuous setting as a limit of the finitary discrete setting. These two connections go by the names of the Furstenberg correspondence principle and the graph correspondence principle respectively. These principles allow one to tap the power of the infinitary world (for instance, the ability to take limits and perform completions or closures of objects) in order to establish results in the finitary world, or at least to take the intuition gained in the infinitary world and transfer it to a finitary setting. Conversely, the finitary world provides an excellent model setting to refine one’s understanding of infinitary objects, for instance by establishing quantitative analogues of “soft” results obtained in an infinitary manner. I will remark here that this best-of-both-worlds approach, borrowing from both the finitary and infinitary traditions of mathematics, was absolutely necessary for Ben Green and me in order to establish our result on long arithmetic progressions in the primes. In particular, the infinitary setting is excellent for being able to rigorously define and study concepts (such as structure or randomness) which are much “fuzzier” and harder to pin down exactly in the finitary world.

Read the rest of this entry »

This week I am in Boston, giving this year’s Simons lectures at MIT together with David Donoho. (These lectures, incidentally, are endowed by Jim Simons, who was mentioned in some earlier discussion here.) While preparing these lectures, it occurred to me that I may as well post my lecture notes on this blog, since this medium is essentially just an asynchronous version of a traditional lecture series, and the hypertext capability is in some ways more convenient and informal than, say, \LaTeX slides.

I am giving three lectures, each expounding on some aspects of the theme “the dichotomy between structure and randomness”, which I also spoke about (and wrote about) for the ICM last August. This theme seems to pervade many of the areas of mathematics that I work in, and my lectures aim to explore how this theme manifests itself in several of these. In this, the first lecture, I describe the dichotomy as it appears in Fourier analysis and in number theory. (In the second, I discuss the dichotomy in ergodic theory and graph theory, while in the third, I discuss PDE.)

Read the rest of this entry »
