Atle Selberg, who made immense and fundamental contributions to analytic number theory and related areas of mathematics, died last Monday, aged 90.

Selberg’s early work was focused on the study of the Riemann zeta function $\zeta(s)$. In 1942, Selberg showed that a positive fraction of the zeroes of this function lie on the critical line $\hbox{Re}(s)=1/2$. Apart from improvements in the fraction (the best value currently being a little over 40%, a result of Conrey), this is still one of the strongest partial results we have towards the Riemann hypothesis. (I discuss Selberg’s result, and the method of mollifiers he introduced there, in a little more detail after the jump.)

In working on the zeta function, Selberg developed two powerful tools which are still used routinely in analytic number theory today. The first is the method of mollifiers, which smooths out the magnitude oscillations of the zeta function, making the (more interesting) phase oscillation more visible. The second is the Selberg $\Lambda^2$ sieve, a particularly elegant choice of sieve which allows one to count patterns in almost primes (and hence to upper bound patterns in primes) quite accurately. Variants of the Selberg sieve were a crucial ingredient in, for instance, the recent work of Goldston-Yıldırım-Pintz on prime gaps, as well as the work of Ben Green and myself on arithmetic progressions in primes. (I discuss the Selberg sieve, as well as the Selberg symmetry formula below, in my post on the parity problem. Incidentally, Selberg was the first to formalise this problem as a significant obstruction in sieve theory.)
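The mechanism of the $\Lambda^2$ sieve can be sketched in a few lines. The formulation below is the standard modern one, with a sifting range $R$ and sieve weights $\lambda_d$ that are my notation rather than anything in this post:

```latex
% Selberg's Lambda^2 (upper bound) sieve, schematically.
% Let P be a product of the primes to be sifted out, and let (lambda_d) be
% any real numbers supported on d <= R with lambda_1 = 1.  Since a square is
% nonnegative, and collapses to lambda_1^2 = 1 when n shares no factor with P,
%     1_{(n,P)=1}  <=  ( sum_{d | (n,P), d <= R} lambda_d )^2 .
% Summing this pointwise bound over n <= x gives
\[
  \#\{ n \le x : (n,P) = 1 \}
  \;\le\; \sum_{n \le x} \Bigl( \sum_{\substack{d \mid (n,P) \\ d \le R}} \lambda_d \Bigr)^{2},
\]
% and the right-hand side is a quadratic form in the lambda_d which can be
% minimised exactly, yielding the Selberg sieve upper bound.
```

The freedom to choose the $\lambda_d$, rather than being forced into the inclusion-exclusion weights of combinatorial sieves, is what makes the method so flexible.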

For all of these achievements, Selberg was awarded the Fields Medal in 1950. Around that time, Selberg and Erdős also produced the first elementary proof of the prime number theorem. A key ingredient here was the Selberg symmetry formula, which is an elementary analogue of the prime number theorem for almost primes.
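In modern notation, using the von Mangoldt function $\Lambda$ (which is the standard way of writing it, rather than necessarily Selberg's original formulation), the symmetry formula reads:

```latex
% Selberg's symmetry formula: primes and products of two primes, weighted
% by logarithms, are counted with equal strength up to an O(x) error.
\[
  \sum_{n \le x} \Lambda(n) \log n \;+\; \sum_{mn \le x} \Lambda(m) \Lambda(n)
  \;=\; 2 x \log x + O(x).
\]
% The prime number theorem is equivalent to sum_{n <= x} Lambda(n) = x + o(x);
% the symmetry formula gives the analogous statement for almost primes
% (roughly, numbers with at most two prime factors) by entirely elementary
% means, which is what makes an elementary proof possible at all.
```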

But perhaps Selberg’s greatest contribution to mathematics was his discovery of the Selberg trace formula, a non-abelian generalisation of the Poisson summation formula which led to many further deep connections between representation theory and number theory; in particular, it was one of the main inspirations for the Langlands program, which in turn has had an impact on many different parts of mathematics (for instance, it plays a role in Wiles’ proof of Fermat’s last theorem). For an introduction to the trace formula, its history, and its impact, I recommend the survey article of Arthur.

Other major contributions of Selberg include the Rankin-Selberg theory connecting Artin L-functions from representation theory to the integrals of automorphic forms (very much in the spirit of the Langlands program), and the Chowla-Selberg formula relating the Gamma function at rational values to the periods of elliptic curves with complex multiplication. He also made an influential conjecture on the spectral gap of the Laplacian on quotients of $SL(2,{\Bbb R})$ by congruence groups, which is still open today (Selberg had the first non-trivial partial result). As an example of this conjecture’s impact, Selberg’s eigenvalue conjecture has inspired some recent work of Sarnak-Xue, Gamburd, and Bourgain-Gamburd on new constructions of expander graphs, and has revealed some further connections between number theory and arithmetic combinatorics (such as sum-product theorems); see this announcement of Bourgain-Gamburd-Sarnak for the most recent developments (this work, incidentally, also employs the Selberg sieve). As observed by Satake, Selberg’s eigenvalue conjecture and the more classical Ramanujan-Petersson conjecture can be unified into a single conjecture, now known as the Ramanujan-Selberg conjecture; the eigenvalue conjecture is then essentially an archimedean (or “non-p-adic”) special case of the general Ramanujan-Selberg conjecture. (The original (p-adic) Ramanujan-Petersson conjecture was finally proved by Deligne-Serre, after many important contributions by other authors, but the archimedean version remains open.)

— Selberg’s critical line theorem —

I am not qualified to present all of Selberg’s work, but I can discuss one of his earlier results in a bit more detail, namely his critical line theorem establishing that a positive proportion of the zeroes of the zeta function lie on the critical line.

In Riemann’s famous 1859 memoir on the zeta function, he asserted that the number N(T) of zeroes of the zeta function in the rectangle $\{ \sigma+it: 0 \leq \sigma \leq 1; 0 \leq t \leq T \}$ is $\frac{T}{2\pi} \log \frac{T}{2\pi e} + O(\log T)$ as $T \to \infty$. The argument was made fully rigorous by von Mangoldt in 1895. The basic idea is to use the argument principle, reducing matters to understanding the integral of the logarithmic derivative $\zeta'(s)/\zeta(s)$ of the zeta function on the boundary of the rectangle. The horizontal sides of this rectangle can be shown to give a negligible contribution (after averaging a bit in T), and the left vertical side can be related to the right one by the functional equation for the zeta function, so the main task is to control the integral on the right-hand side; this can be handled by fairly crude estimates on the zeta function to the right of the critical strip.
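As a quick numerical sanity check on this count (a sketch that is not part of the argument above: it assumes P. Borwein's alternating-series algorithm for $\zeta(s)$ and the standard asymptotic expansion of the Riemann-Siegel theta function, neither of which appears in this post), one can count the sign changes of the real-valued function $Z(t) = e^{i\theta(t)}\zeta(1/2+it)$ on a grid and compare against the main term $\frac{T}{2\pi}\log\frac{T}{2\pi e}$:

```python
import cmath
import math

def zeta_half(t, n=100):
    """zeta(1/2 + i*t), via P. Borwein's alternating-series algorithm."""
    s = complex(0.5, t)
    # d_k = n * sum_{i<=k} (n+i-1)! 4^i / ((n-i)! (2i)!), built by a term recurrence
    d = [1.0]
    term = 1.0
    for i in range(n):
        term *= 4.0 * (n + i) * (n - i) / ((2 * i + 1) * (2 * i + 2))
        d.append(d[-1] + term)
    total = 0j
    for k in range(n):
        total += (-1) ** k * (d[k] - d[n]) * (k + 1) ** (-s)
    return -total / (d[n] * (1 - 2 ** (1 - s)))

def theta(t):
    """Riemann-Siegel theta function (asymptotic expansion, accurate for t >= 10)."""
    return (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
        + 1 / (48 * t) + 7 / (5760 * t ** 3)

def Z(t):
    """Hardy's Z-function: real-valued, with the same zeros as zeta on the line."""
    return (cmath.exp(1j * theta(t)) * zeta_half(t)).real

# Count sign changes of Z over [10, 40]; each one is a zeta zero on the line.
lo, hi, step = 10.0, 40.0, 0.05
vals = [Z(lo + step * i) for i in range(601)]
changes = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# Riemann-von Mangoldt main term N(T) ~ (T/2pi) log(T/(2 pi e))
def main_term(T):
    return (T / (2 * math.pi)) * math.log(T / (2 * math.pi * math.e))

print(changes, round(main_term(hi) - main_term(lo), 2))
```

The grid step of 0.05 is far finer than the spacing of the zeroes in this range, so no sign change is missed; the count of 6 sign changes matches the main-term prediction of about 6.3.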

The Riemann hypothesis of course asserts that all N(T) of these zeroes lie on the critical line $\hbox{Re}(s)=1/2$, so in particular the number of zeroes on the interval $\{ 1/2+it: 0 \leq t \leq T \}$ should be asymptotically $\frac{1}{2\pi} T \log T$. Selberg showed unconditionally that there are at least $c T \log T$ such zeroes for some absolute constant $c > 0$. (An earlier result of Hardy and Littlewood in 1921 gave the weaker bound of $cT$.) To show this, Selberg first replaced the zeta function by the closely related xi function $\xi(t)$, which is real-valued and has a zero at $t$ whenever $1/2+it$ is a zero of the zeta function.
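Concretely, in the classical notation (where what is written $\xi(t)$ above is often denoted $\Xi(t)$; this normalisation is standard, but the display is my addition rather than the post's):

```latex
% The completed zeta function and the real function studied by Hardy and Selberg.
\[
  \xi(s) = \tfrac{1}{2}\, s(s-1)\, \pi^{-s/2}\, \Gamma\!\bigl(\tfrac{s}{2}\bigr)\, \zeta(s),
  \qquad
  \Xi(t) = \xi\bigl(\tfrac{1}{2} + i t\bigr).
\]
% The functional equation xi(s) = xi(1-s), together with xi(conj s) = conj xi(s),
% forces Xi(t) to be real for real t.  Since the factor s(s-1) pi^{-s/2} Gamma(s/2)
% has no zeros in the critical strip, Xi(t) = 0 exactly when zeta(1/2 + it) = 0,
% so counting real zeros of Xi counts zeros of zeta on the critical line.
```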

To get a lower bound on the number of zeroes on the critical line, it thus suffices to get a lower bound on the number of times $\xi(t)$ changes sign. One way to do this is to show that there are many intervals $[t, t+h]$ in which $\int_t^{t+h} \xi(t')\ dt'$ is smaller in magnitude than $\int_t^{t+h} |\xi(t')|\ dt'$, since this forces a sign change in $\xi$ within $[t,t+h]$. This is basically the Hardy-Littlewood approach that gives the lower bound of $cT$, but it does not get as far as $c T \log T$, mainly because of occasional spikes in the magnitude of the xi-function. However, Selberg had the idea of mollifying the xi-function by a non-negative weight (basically $|\phi(1/2 + it)|^2$, where $\phi$ is a smooth truncation of the Dirichlet expansion of $\zeta(s)^{-1/2}$). Intuitively, this weight attempts to eliminate the spikes in the magnitude of the xi-function, leaving only the oscillations which are directly related to the sign changes. With this additional trick, the above approach is strong enough to give Selberg’s critical line theorem, and the use of mollifiers (or similar devices, such as “zero-detecting polynomials”) is now standard in all subsequent work on controlling the distribution of zeroes in the critical strip.
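Schematically, writing $w(t') = |\phi(1/2+it')|^2$ for the non-negative mollifying weight described above (this display is a paraphrase of the strategy, not a formula from Selberg's paper):

```latex
% The sign-change criterion survives multiplication by a nonnegative weight:
\[
  \Bigl| \int_t^{t+h} w(t')\, \xi(t')\, dt' \Bigr|
  \;<\;
  \int_t^{t+h} w(t')\, \bigl| \xi(t') \bigr|\, dt'
  \quad \Longrightarrow \quad
  \xi \text{ changes sign in } [t, t+h].
\]
% The point of the mollifier is that w damps the occasional large spikes of xi,
% so the moment with cancellation (left) and the absolute moment (right) can
% both be estimated accurately enough to verify this inequality on >> T log T
% disjoint intervals of suitably short length h.
```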

[Via Luca Trevisan.]

[Update, Aug 13: Relationship between Selberg's eigenvalue conjecture and the Ramanujan-Petersson conjecture corrected.]