I gave a non-technical talk today to the local chapter of the Pi Mu Epsilon society here at UCLA. I chose to talk on the cosmic distance ladder – the hierarchy of rather clever (yet surprisingly elementary) mathematical methods that astronomers use to indirectly measure very large distances, such as the distance to planets, nearby stars, or distant stars. This ladder was really started by the ancient Greeks, who used it to measure the size and relative locations of the Earth, Sun and Moon to reasonable accuracy; it was then continued by Copernicus, Brahe and Kepler, who measured distances to the planets, and extended in the modern era to stars, galaxies, and (very recently) to the scale of the universe itself. It’s a great testament to the power of indirect measurement, and to the use of mathematics to cleverly augment observation.
For this (rather graphics-intensive) talk, I used PowerPoint for the first time; the slides (which are rather large – 3 megabytes) can be downloaded here. [I gave an earlier version of this talk in Australia last year in a plainer PDF format, and had to get someone to convert it for me.]
[Update, May 31: In case the powerpoint file is too large or unreadable, I also have my older PDF version of the talk, which omits all the graphics.]
[Update, July 1 2008: John Hutchinson has made some computations to accompany these slides, which can be found at this page.]
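As a small illustration of the sort of elementary geometry these methods rest on (this example is my own addition, and is not taken from the slides), the parallax rung of the ladder amounts to little more than the small-angle approximation

$\displaystyle d \approx \frac{b}{\theta},$

where $b$ is a known baseline, $\theta$ is the angle that the baseline subtends as seen from the distant object (equivalently, the apparent angular shift of the object against the very distant background as the observer moves across the baseline), measured in radians, and $d$ is the inferred distance. For instance, a star showing a parallax of one arcsecond ($\theta \approx 4.85 \times 10^{-6}$ radians) over a baseline of one astronomical unit lies at a distance of roughly $2 \times 10^5$ AU, which is the definition of a parsec.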
The Skolem-Mahler-Lech theorem in algebraic number theory is a significant generalisation of the obvious statement that a polynomial either has finitely many zeroes (in particular, the set of zeroes is bounded), or it vanishes identically. It appeals to me (despite not really being within my areas of expertise) because it is one of the simplest (non-artificial) results I know of which (currently) comes with an ineffective bound – a bound which is provably finite, but which cannot be computed! It appears that to obtain an effective result, one may need a rather different proof. (I thank Kousha Etessami and Tom Lenagan for drawing this problem to my attention.)
Ineffective bounds seem to arise particularly often in number theory. I am aware of at least three ways in which they come in:
- By using methods from soft (infinitary) analysis.
- By using the fact that any finite set in a metric space is bounded (i.e. is contained in a ball of finite radius centred at a designated origin).
- By using the fact that any set of finite diameter in a metric space is bounded.
Regarding #1, there are often ways to make these arguments quantitative and effective, as discussed in my previous post. But #2 and #3 seem to be irreducibly ineffective: if you know that a set A has finite cardinality or finite diameter, you know it has finite distance to the origin, but an upper bound on the cardinality or diameter does not translate to an effective bound on the radius of the ball centred at the origin needed to contain the set. [In the spirit of the preceding post, one can conclude an effective “meta-bound” on such a set, establishing a large annulus in which the set has no presence, but this is not particularly satisfactory.] The problem with the Skolem-Mahler-Lech theorem is that all the known proofs use #2 at some point.
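To make the shape of the Skolem-Mahler-Lech conclusion concrete, here is a small computational sketch in Python (my own toy example, not taken from the post). The theorem asserts that the zero set $\{ n : a_n = 0 \}$ of a linear recurrence sequence over a field of characteristic zero is the union of a finite set and finitely many arithmetic progressions; the ineffectivity is that the known proofs give no computable bound on where the finitely many “sporadic” zeroes can occur. The sequence below is engineered to have exactly one sporadic zero together with one arithmetic progression of zeroes.

```python
# Toy illustration of the zero sets described by the Skolem-Mahler-Lech theorem:
# for a linear recurrence sequence over a field of characteristic zero, the set
# {n : a_n = 0} is a finite set together with finitely many arithmetic
# progressions.  The bound on the finite ("sporadic") part is ineffective.

def zero_indices(initial, coeffs, n_max):
    """Return all n <= n_max with a_n = 0, where a_{n+k} = sum_i coeffs[i]*a_{n+i},
    k = len(coeffs), and initial = [a_0, ..., a_{k-1}]."""
    a = list(initial)
    zeros = [n for n, x in enumerate(a) if x == 0 and n <= n_max]
    for n in range(len(a), n_max + 1):
        nxt = sum(c * a[n - len(coeffs) + i] for i, c in enumerate(coeffs))
        a.append(nxt)
        if nxt == 0:
            zeros.append(n)
    return zeros

# a_n = (2^n - 4)(1 + (-1)^n) = 2^n + (-2)^n - 4 - 4(-1)^n satisfies
# a_{n+4} = 5 a_{n+2} - 4 a_n   (characteristic polynomial (x^2 - 1)(x^2 - 4)).
initial = [-6, 0, 0, 0]   # a_0, a_1, a_2, a_3
coeffs = [-4, 0, 5, 0]    # a_{n+4} = -4*a_n + 0*a_{n+1} + 5*a_{n+2} + 0*a_{n+3}
print(zero_indices(initial, coeffs, 40))
```

Running this prints the zeroes up to $n = 40$: the odd integers (an arithmetic progression) together with the single sporadic zero $n = 2$. Of course, for a sequence handed to us blind, no amount of finite tabulation by itself certifies that all the sporadic zeroes have been found; this is precisely the effectivity issue.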
In the field of analysis, it is common to make a distinction between “hard”, “quantitative”, or “finitary” analysis on one hand, and “soft”, “qualitative”, or “infinitary” analysis on the other. “Hard analysis” is mostly concerned with finite quantities (e.g. the cardinality of finite sets, the measure of bounded sets, the value of convergent integrals, the norm of finite-dimensional vectors, etc.) and their quantitative properties (in particular, upper and lower bounds). “Soft analysis”, on the other hand, tends to deal with more infinitary objects (e.g. sequences, measurable sets and functions, $\sigma$-algebras, Banach spaces, etc.) and their qualitative properties (convergence, boundedness, integrability, completeness, compactness, etc.). To put it more symbolically, hard analysis is the mathematics of $\epsilon$, $N$, $O()$, and $\leq$; soft analysis is the mathematics of $0$, $\infty$, $\to$, and $<$.
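As a simple illustration of this dictionary (an example of my own, in the spirit of the distinction rather than a quotation from the post): the soft statement that a sequence of reals converges to zero,

$\displaystyle \lim_{n \to \infty} x_n = 0,$

has as its hard counterpart the assertion

$\displaystyle \forall \epsilon > 0 \ \exists N \ \forall n \geq N: \ |x_n| \leq \epsilon.$

The former is phrased in terms of $0$, $\infty$, and $\to$; the latter is phrased entirely in terms of $\epsilon$, $N$, and $\leq$, and invites quantitative refinements (how large must $N$ be as a function of $\epsilon$?) that the soft formulation suppresses.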
At first glance, the two types of analysis look very different; they deal with different types of objects, ask different types of questions, and seem to use different techniques in their proofs. They even use different axioms of mathematics; the axiom of infinity, the axiom of choice, and the Dedekind completeness axiom for the real numbers are often invoked in soft analysis, but rarely in hard analysis. (As a consequence, there are occasionally some finitary results that can be proven easily by soft analysis but are in fact impossible to prove via hard analysis methods; the Paris-Harrington theorem gives a famous example.) Because of all these differences, it is common for analysts to specialise in only one of the two types of analysis. For instance, as a general rule (and with notable exceptions), discrete mathematicians, computer scientists, real-variable harmonic analysts, and analytic number theorists tend to rely on “hard analysis” tools, whereas functional analysts, operator algebraists, abstract harmonic analysts, and ergodic theorists tend to rely on “soft analysis” tools. (PDE is an interesting intermediate case in which both types of analysis are popular and useful, though many practitioners of PDE still prefer to primarily use just one of the two types. Another interesting transition occurs on the interface between point-set topology, which largely uses soft analysis, and metric geometry, which largely uses hard analysis. Also, the ineffective bounds which crop up from time to time in analytic number theory are a sort of hybrid of hard and soft analysis. Finally, there are examples of evolution of a field from soft analysis to hard (e.g. Banach space geometry) or vice versa (e.g. recent developments in extremal combinatorics, particularly in relation to the regularity lemma).)
On Friday, Yau concluded his lecture series by discussing the PDE approach to constructing geometric structures, particularly Einstein metrics, and their applications to many questions in low-dimensional topology (yes, this includes the Poincaré conjecture). Yau also discussed the situation in high-dimensional topology, which appears to be completely different (and much less well understood).
Yau’s slides for this talk are available here.
On Thursday, Yau continued his lecture series on geometric structures, focusing a bit more on the tools and philosophy that go into actually building these structures. Much of the philosophy, in its full generality, is still rather vague and not properly formalised, but is nevertheless supported by a large number of rigorously worked out examples and results in special cases. A dominant theme in this talk was the interaction between geometry and physics, in particular general relativity and string theory.
As usual, there are likely to be some inaccuracies in my presentation of Yau’s talk (I am not really an expert in this subject), and corrections are welcome. Yau’s slides for this talk are available here.
The final Distinguished Lecture Series for this academic year at UCLA was started on Tuesday by Shing-Tung Yau. (We’ve had a remarkably high-quality array of visitors this year; for instance, in addition to those already mentioned in this blog, mathematicians such as Peter Lax and Michael Freedman have come here and given lectures earlier this year.) Yau’s chosen topic is “Geometric Structures on Manifolds”, and the first talk, titled “What is a Geometric Structure”, was an introduction to and overview of his later two talks. Once again, I found this a great opportunity to learn about a field adjacent to my own areas of expertise, in this case geometric analysis (which is adjacent to nonlinear PDE).
As usual, all inaccuracies in these notes are due to myself and not to Yau, and I welcome corrections or comments. Yau’s slides for the talk are available here.
This is a well-known problem in multilinear harmonic analysis; it is fascinating to me because it lies barely beyond the reach of the best technology we have for these problems (namely, multiscale time-frequency analysis), and because the most recent developments in quadratic Fourier analysis seem likely to shed some light on this problem.
Recall that the Hilbert transform is defined on test functions $f$ (up to irrelevant constants) as

$\displaystyle Hf(x) := \hbox{p.v.} \int_{\mathbb{R}} f(x+t)\ \frac{dt}{t},$

where the integral is evaluated in the principal value sense (removing the region $|t| \leq \epsilon$ to ensure integrability, and then taking the limit as $\epsilon \to 0$).
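(As an aside which is not part of the excerpt above, but is completely standard: up to yet another irrelevant constant, the Hilbert transform acts on the frequency side simply as multiplication by the sign of the frequency,

$\displaystyle \widehat{Hf}(\xi) = c\, \hbox{sgn}(\xi)\, \hat f(\xi)$

for an absolute constant $c$ depending on one’s normalisation of the Fourier transform; in particular, Plancherel’s theorem immediately gives boundedness of $H$ on $L^2(\mathbb{R})$. No such one-line Fourier argument is available for the multilinear variants alluded to above, which is one reason time-frequency methods are needed.)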
On Thursday Shou-wu Zhang concluded his lecture series by talking about the higher genus case $g \geq 2$, and in particular focusing on some recent work of his which is related to the effective Mordell conjecture and the abc conjecture. The higher genus case is substantially more difficult than the genus 0 or genus 1 cases, and one often needs to use techniques from many different areas of mathematics (together with one or two unproven conjectures) to get somewhere.
This is perhaps the most technical of all the talks, but also the closest to recent developments, in particular the modern attacks on the abc conjecture. (Shou-wu made the point that one sometimes needs to move away from naive formulations of problems to obtain deeper formulations which are more difficult to understand, but can be easier to prove due to the availability of tools, structures, and intuition that were difficult to access in a naive setting, as well as the ability to precisely formulate and quantify what would otherwise be very fuzzy analogies.)
On Wednesday, Shou-wu Zhang continued his lecture series. Whereas the first lecture was a general overview of the rational points on curves problem, the second talk focused entirely on the genus 1 case – i.e. the problem of finding rational points on elliptic curves. This is already a very deep and important problem in number theory – for instance, this theory is decisive in Wiles’ proof of Fermat’s last theorem. It was also somewhat more technical than the previous talk, and I had more difficulty following all the details, but in any case here is my attempt to reconstruct the talk from my notes. Once again, the inevitable inaccuracies here are my fault and not Shou-wu’s, and corrections or comments are greatly appreciated.
NB: the talk here seems to be loosely based in part on Shou-wu’s “Current developments in Mathematics” article from 2001.
[This lecture is also doubling as this week’s “open problem of the week”, as it discusses the Birch and Swinnerton-Dyer conjecture and the effective Mordell conjecture.]
Like many other maths departments, UCLA has a distinguished lecture series for eminent mathematicians to present recent developments in a field of mathematics, both to a broad audience and to specialists. Unlike most departments, though, our lecture series goes by the descriptive (but unimaginative) name of “Distinguished Lecture Series”, supported by the Gill Foundation. This week the lecture series is given by Shou-wu Zhang from Columbia, and revolves around the topic of rational points on curves, a key subject of interest in arithmetic geometry and number theory. The first of three talks, which was on Tuesday, was a very accessible and enjoyable overview talk, which I am reproducing here (to use this opportunity to learn this stuff myself, and also to continue the diversification of subject matter here on this blog). As before, I do not vouch for 100% accuracy, and all errors are my responsibility rather than Shou-wu’s.