
This week I am in Seville, Spain, for a conference in harmonic analysis and related topics.  My talk is titled “The uniform uncertainty principle and compressed sensing”.  The content of this talk overlaps substantially with my Ostrowski lecture on the same topic; the slides I prepared for the Seville lecture can be found here.

[Update, Dec 6: Some people have asked about my other lecture given in Seville, on structure and randomness in the prime numbers.  This lecture is largely equivalent to the one posted here.]

In this final lecture in the Marker lecture series, I discuss the recent work of Bourgain, Gamburd, and Sarnak on how arithmetic combinatorics and expander graphs were used to sieve for almost primes in various algebraic sets.


In the third Marker lecture, I would like to discuss the recent progress, particularly by Goldston, Pintz, and Yıldırım, on finding small gaps p_{n+1}-p_n between consecutive primes.  (See also the surveys by Goldston-Pintz-Yıldırım, by Green, and by Soundararajan on the subject; the material here is based to some extent on these prior surveys.)


This week I am at Penn State University, giving this year’s Marker lectures.  My chosen theme for my four lectures here is “recent developments in additive prime number theory”.  My first lecture, “Long arithmetic progressions in primes”, is similar to my AMS lecture on the same topic and so I am not reposting it here.  The second lecture, the notes for which begin after the fold, is on “Linear equations in primes”.  These two lectures focus primarily on the work of Ben Green and myself.  The third and fourth lectures, entitled “Small gaps between primes” and “Sieving for almost primes and expander graphs”, will instead focus on the work of Goldston-Pintz-Yıldırım and Bourgain-Gamburd-Sarnak respectively.

This Thursday I was at the University of Sydney, Australia, giving a public lecture on a favourite topic of mine, “Structure and randomness in the prime numbers“. My slides here are a merge between my slides for a Royal Society meeting and the slides I gave for the UCLA Science Colloquium; now that I have figured out how to use PowerPoint a little better, I was able to make the latter a bit more colourful (and the former less abridged).

This week I am in Australia, attending the ANZIAM annual meeting in Katoomba, New South Wales (in the picturesque Blue Mountains). I gave an overview talk on some recent developments in compressed sensing, particularly with regard to the basis pursuit approach to recovering sparse (or compressible) signals from incomplete measurements. The slides for my talk can be found here, with some accompanying pictures here. (There are of course by now many other presentations of compressed sensing on-line; see for instance this page at Rice.)
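
(For readers who have not seen it, let me recall the standard formulation of basis pursuit; this is textbook material rather than anything specific to the talk. One observes linear measurements y = \Phi x of an unknown sparse signal x \in \mathbb{R}^n, with the number of measurements much smaller than n, and one recovers the signal by solving the convex program

\displaystyle \min_{x' \in \mathbb{R}^n} \|x'\|_{\ell^1} \hbox{ subject to } \Phi x' = y.

Under suitable hypotheses on the measurement matrix \Phi, such as the restricted isometry property mentioned below, this program recovers a sparse x exactly.)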

There was an interesting discussion after the talk. Some members of the audience asked the very good question of whether any a priori information about a signal (e.g. some restriction on the support) could be incorporated to improve the performance of compressed sensing; a related question was whether one could perform an adaptive sequence of measurements to similarly improve performance. I don’t have good answers to these questions. Another audience member pointed out that the restricted isometry property was like a local “well-conditioning” property for the matrix, which only applied when one viewed a few columns at a time.
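
(To make this observation precise: a matrix \Phi is said to obey the restricted isometry property of order s with constant \delta_s if

\displaystyle (1-\delta_s) \|x\|_{\ell^2}^2 \leq \|\Phi x\|_{\ell^2}^2 \leq (1+\delta_s) \|x\|_{\ell^2}^2

for all s-sparse vectors x. Equivalently, every s columns of \Phi form a nearly orthonormal system, which is exactly the local well-conditioning phenomenon alluded to above.)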

This month I have been at the Institute for Advanced Study, participating in their semester program on additive combinatorics. Today I gave a talk on my forthcoming paper with Tim Austin on the property testing of graphs and hypergraphs (I hope to make a preprint available here soon). There has been an immense amount of progress on these topics recently, based in large part on the graph and hypergraph regularity lemmas; but we have discovered some surprising subtleties regarding these results, namely a distinction between undirected and directed graphs, between graphs and hypergraphs, between partite hypergraphs and non-partite hypergraphs, and between monotone hypergraph properties and hereditary ones.

For simplicity let us first work with (uncoloured, undirected, loop-free) graphs G = (V,E). In the subject of graph property testing, one is given a property {\mathcal P} which any given graph G may or may not have. For example, {\mathcal P} could be one of the following properties:

  1. G is planar.
  2. G is four-colourable.
  3. G has a number of edges equal to a power of two.
  4. G contains no triangles.
  5. G is bipartite.
  6. G is empty.
  7. G is a complete bipartite graph.

We assume that the labeling of the graph is irrelevant. More precisely, we assume that whenever two graphs G, G’ are isomorphic, G satisfies {\mathcal P} if and only if G’ does. For instance, all seven of the graph properties listed above are invariant under graph isomorphism.

We shall think of G as being very large (so |V| is large) and dense (so |E| \sim |V|^2). We are interested in obtaining some sort of test that can answer the question “does G satisfy {\mathcal P}?” with reasonable speed and reasonable accuracy. By “reasonable speed”, we mean that we will only make a bounded number of queries about the graph, i.e. we only look at a bounded number k of distinct vertices in V (selected at random) and base our test purely on how these vertices are connected to each other in E. (We will always assume that the number of vertices in V is at least k.) By “reasonable accuracy”, we will mean that we specify in advance some error tolerance \varepsilon > 0 and require the following:

  1. (No false negatives) If G indeed satisfies {\mathcal P}, then our test will always (correctly) accept G.
  2. (Few false positives in the \varepsilon-far case) If G fails to satisfy {\mathcal P}, and is furthermore \varepsilon-far from satisfying {\mathcal P} in the sense that one needs to add or remove at least \varepsilon |V|^2 edges in G before {\mathcal P} can be satisfied, then our test will (correctly) reject G with probability at least \varepsilon.

When a test with the above properties exists for each given \varepsilon > 0 (with the number of queried vertices k being allowed to depend on \varepsilon), we say that the graph property {\mathcal P} is testable with one-sided error. (The general notion of property testing was introduced by Rubinfeld and Sudan, and first studied for graph properties by Goldreich, Goldwasser, and Ron; see this web page of Goldreich for further references and discussion.) The rejection probability \varepsilon is not very important in this definition, since if one wants to improve the success rate of the algorithm one can simply run independent trials of that algorithm (selecting fresh random vertices each time) in order to increase the chance that G is correctly rejected. However, it is intuitively clear that one must allow some probability of failure, since one is only inspecting a small portion of the graph and so cannot say with complete certainty whether the entire graph has the property {\mathcal P} or not. For similar reasons, one cannot reasonably demand to have a low false positive rate for all graphs that fail to obey {\mathcal P}, since if the graph is only one edge modification away from obeying {\mathcal P}, this modification is extremely unlikely to be detected by only querying a small portion of the graph. This explains why we need to restrict attention to graphs that are \varepsilon-far from obeying {\mathcal P}.
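
(To quantify this amplification: a single trial rejects an \varepsilon-far graph with probability at least \varepsilon, so t independent trials, each with fresh random vertices, reject it with probability at least

\displaystyle 1 - (1-\varepsilon)^t,

which can be made as close to 1 as desired by taking t sufficiently large.)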

An example should illustrate this definition. Consider for instance property 6 above (the property that G is empty). To test whether a graph is empty, one can perform the following obvious algorithm: take k vertices in G at random and check whether they have any edges at all between them. If they do, then the test of course rejects G as being non-empty, while if they don’t, the test accepts G as being empty. Clearly there are no false negatives in this test, and if k is large enough depending on \varepsilon one can easily see (from the law of large numbers) that we will have few false positives if G is \varepsilon-far from being empty (i.e. if it has at least \varepsilon |V|^2 edges). So the property of being empty is testable with one-sided error.
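
(In code the tester is almost trivial. Here is a minimal sketch in Python; the edge oracle has_edge is a hypothetical interface of my own, standing in for whatever edge queries into the graph are actually available.)

```python
import random

def test_empty(vertices, has_edge, k):
    """One-sided tester for emptiness: sample k random vertices and
    accept iff no edge appears among them.  has_edge(u, v) is a
    hypothetical oracle answering edge queries about the graph."""
    sample = random.sample(vertices, k)
    for i in range(len(sample)):
        for j in range(i + 1, len(sample)):
            if has_edge(sample[i], sample[j]):
                return False  # an edge was found: G is certainly non-empty
    return True  # no edge among the sample: accept G as empty
```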

On the other hand, it is intuitively obvious that property 3 (having a number of edges equal to a power of 2) is not testable with one-sided error; the exact number of edges is a global quantity which is essentially invisible to any bounded number of local queries.

So it is reasonable to ask: what types of graph properties are testable with one-sided error, and which ones are not?


I’ve just come back from the 48th Annual IEEE Symposium on the Foundations of Computer Science, better known as FOCS; this year it was held in Providence, near Brown University. (This conference is also being officially reported on in the blog posts of Nicole Immorlica, Luca Trevisan, and Scott Aaronson.) I was there to give a tutorial on some of the tools used these days in additive combinatorics and graph theory to distinguish structure and randomness. In a previous blog post, I had already mentioned that my lecture notes for this were available on the arXiv; now the slides for my tutorial are available too (they cover much the same ground as the lecture notes, and also incorporate some material from my ICM slides, but in a slightly different format).

In the slides, I am tentatively announcing some very recent (and not yet fully written up) work of Ben Green and myself establishing the Gowers inverse conjecture in finite fields in the special case when the function f is a bounded degree polynomial (this is a case which already has some theoretical computer science applications). I hope to expand upon this in a future post. But I will describe here a neat trick I learned at the conference (from the FOCS submission of Bogdanov and Viola) which uses majority voting to enhance a large number of small independent correlations into a much stronger single correlation. This application of majority voting is widespread in computer science (and, of course, in real-world democracies), but I had not previously been aware of its utility to the type of structure/randomness problems I am interested in (in particular, it seems to significantly simplify some of the arguments in the proof of my result with Ben mentioned above); thanks to this conference, I now know to add majority voting to my “toolbox”.
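
(To illustrate the trick in the simplest toy setting, which is my own illustration rather than the actual Bogdanov-Viola argument: suppose each of a large number of independent predictors guesses an unknown bit correctly only with probability 1/2 + \delta. Then the majority vote is correct with probability exponentially close to 1, thanks to the Chernoff bound. A quick Python simulation:)

```python
import random

def noisy_vote(truth, delta):
    """A single weak predictor: guesses the bit `truth` correctly
    with probability only 1/2 + delta."""
    return truth if random.random() < 0.5 + delta else 1 - truth

def majority_vote(truth, delta, n):
    """Majority vote over n independent weak predictors."""
    ones = sum(noisy_vote(truth, delta) for _ in range(n))
    return 1 if ones > n / 2 else 0

# A single predictor with delta = 0.05 is right only 55% of the time,
# but the majority of n = 1001 of them errs with probability at most
# exp(-2 * delta**2 * n) ~ 0.7% (Hoeffding), as the simulation confirms.
trials = 1000
correct = sum(majority_vote(1, 0.05, 1001) for _ in range(trials))
print(f"empirical accuracy of the majority: {correct / trials:.3f}")
```

(Of course, in the actual applications the independence and the construction of the predictors require real work; the toy model only shows why a majority of many weak signals is such a strong signal.)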


Almost a year ago today, I was in Madrid attending the 2006 International Congress of Mathematicians (ICM). Among the many highlights of an ICM meeting are the plenary talks, which offer an excellent opportunity to hear about current developments in mathematics from leaders in various fields, aimed at a general mathematical audience. All the speakers sweat quite a lot over preparing these high-profile talks; for instance, I rewrote the slides for my own talk from scratch after the first version produced bemused reactions from the friends I had shown them to.

I didn’t write about these talks at the time, since my blog had not started then (and also, things were rather hectic for me in Madrid). During the congress, these talks were webcast live, but the video for these talks no longer seems to be available on-line.

A couple of weeks ago, though, I received the first volume of the ICM proceedings, which among other things contains the articles contributed by the plenary speakers (the other two volumes were available at the congress itself). On reading through this volume, I discovered a pleasant surprise: the publishers had included a CD-ROM at the back which had all the video and slides of the plenary talks, as well as the opening and closing ceremonies! This was a very nice bonus, and I hope that the proceedings of future congresses also include something like this.

Of course, I won’t be able to put the data on that CD-ROM on-line, for both technical and legal reasons; but I thought I would discuss a particularly beautiful plenary lecture given by Étienne Ghys on “Knots and dynamics”. His talk was not only very clear and fascinating, but he also made superb use of the computer, in particular using well-timed videos and images (developed in collaboration with Jos Leys) to illustrate key ideas and concepts very effectively. (The video on the CD-ROM unfortunately does not fully capture this, as it only has stills from his computer presentation rather than animations.) To give you some idea of how good the talk was, Étienne ended up running over time by about fifteen minutes or so; and yet, in an audience of over a thousand, only a handful of people actually left before the end.

The slides for Étienne’s talk can be found here, although, being in PDF format, they only have stills rather than full animations. Some of the animations though can be found on this page. (Étienne’s article for the proceedings can be found here, though like the contributions of most other plenary speakers, the print article is more detailed and technical than the talk.) I of course cannot replicate Étienne’s remarkable lecture style, but I can at least present the beautiful mathematics he discussed.

This week I was in London, attending the New Fellows Seminar at the Royal Society. This was a fairly low-key event preceding the formal admissions ceremony; for instance, it is not publicised on their web site. The format was very interesting: they had each of the new Fellows of the Society give a brief (15 minute) presentation of their work in quick succession, in a manner which would be accessible to a diverse audience in the physical and life sciences. The result was a wonderful two-day seminar on the state of the art in many areas of physics, chemistry, engineering, biology, medicine, and mathematics. For instance, I learnt

  • How the solar neutrino problem was resolved by the discovery that the neutrino had mass, which did not commute with flavour and hence caused neutrino oscillations, which have since been detected experimentally;
  • Why modern aircraft (such as the Dreamliner and A380) are now assembled using (incredibly tough and waterproofed) adhesives instead of bolts or welds, and how adhesion has been enhanced by nanoparticles;
  • How the bacterium Helicobacter pylori was recently demonstrated (by two Aussies :-) ) to be a major cause of peptic ulcers (though the exact mechanism is not fully understood), but has also been proposed (somewhat paradoxically) to also have a preventative effect against esophageal cancer (cf. the hygiene hypothesis);
  • How recent advances in machine learning and image segmentation (including graph cut methods!) now allow computers to identify and track many general classes of objects (e.g. people, cars, animals) simultaneously in real-world images and video, though not quite in real-time yet;
  • How large-scale structure maps of the universe (such as the 2dF Galaxy Redshift Survey) combine with measurements of the cosmic background radiation (e.g. from WMAP) to demonstrate the existence of both dark matter and dark energy (they have different impacts on the evolution of the curvature of the universe and on the current distribution of visible matter);
  • … and 42 other topics like this. (One strongly recurrent theme in the life science talks was just how much recent genomic technologies, such as the genome projects of various key species, have accelerated (by several orders of magnitude!) the ability to identify the genes, proteins, and mechanisms that underlie any given biological function or disease. To paraphrase one speaker, a modern genomics lab could now produce the equivalent of one 1970s PhD thesis in the subject every minute.)

