
Next week, I will be teaching Math 246A, the first course in the three-quarter graduate complex analysis sequence.  This first course covers much of the same ground as an honours undergraduate complex analysis course, in particular focusing on the basic properties of holomorphic functions such as the Cauchy and residue theorems, the classification of singularities, and the maximum principle, but there will be more of an emphasis on rigour, generalisation and abstraction, and connections with other parts of mathematics.  If time permits I may also cover topics such as factorisation theorems, harmonic functions, conformal mapping, and/or applications to analytic number theory.  The main text I will be using for this course is Stein-Shakarchi (with Ahlfors as a secondary text), but as usual I will also be writing notes for the course on this blog.

In logic, there is a subtle but important distinction between the concept of mutual knowledge – information that everyone (or almost everyone) knows – and common knowledge, which is not only knowledge that (almost) everyone knows, but something that (almost) everyone knows that everyone else knows (and that everyone knows that everyone else knows that everyone else knows, and so forth).  A classic example arises from Hans Christian Andersen’s fable of the Emperor’s New Clothes: the fact that the emperor has no clothes is mutual knowledge, but not common knowledge, because everyone (save, eventually, for a small child) is refusing to acknowledge the emperor’s nakedness, thus perpetuating the charade that the emperor is actually wearing some incredibly expensive and special clothing that is only visible to a select few.  My own personal favourite example of the distinction comes from the blue-eyed islander puzzle, discussed previously here, here and here on the blog.  (By the way, I would ask that any commentary about that puzzle be directed to those blog posts, rather than to the current one.)

I believe that there is now a real-life instance of this situation in the US presidential election, regarding the following

Proposition 1.  The presumptive nominee of the Republican Party, Donald Trump, is not even remotely qualified to carry out the duties of the presidency of the United States of America.

Proposition 1 is a statement which I think is approaching the level of mutual knowledge amongst the US population (and probably a large proportion of people following US politics overseas): even many of Trump’s nominal supporters secretly suspect that this proposition is true, even if they are hesitant to say it out loud.  And there have been many prominent people, from both major parties, who have made the case for Proposition 1: for instance Mitt Romney, the Republican presidential nominee in 2012, did so back in March, and just a few days ago Hillary Clinton, the likely Democratic presidential nominee this year, did so in a speech of her own.

I highly recommend watching the entirety of that speech (roughly 35 minutes), followed by the entirety of Trump’s rebuttal.

However, even if Proposition 1 is approaching the status of “mutual knowledge”, it does not yet seem to be close to the status of “common knowledge”: one may secretly believe that Trump cannot be considered a serious candidate for the US presidency, but must continue to entertain the possibility that he is, because others around them, or in politics or the media, appear to be doing so.  To reconcile these views can require taking on some implausible hypotheses that are not otherwise supported by any evidence, such as the hypothesis that Trump’s displays of policy ignorance, pettiness, and other clearly unpresidential behaviour are merely “for show”, and that behind this facade there is actually a competent and qualified presidential candidate; much like the emperor’s new clothes, this alleged competence is supposedly only visible to a select few.  And so the charade continues.

I feel that it is time for the charade to end: Trump is unfit to be president, and everybody knows it.  But more people need to say so, openly.

Important note: I anticipate there will be any number of “tu quoque” responses, asserting for instance that Hillary Clinton is also unfit to be the US president.  I personally do not believe that to be the case (and certainly not to the extent that Trump exhibits), but in any event such an assertion has no logical bearing on the qualification of Trump for the presidency.  As such, any comments that are purely of this “tu quoque” nature, and which do not directly address the validity or epistemological status of Proposition 1, will be deleted as off-topic.

However, there is a legitimate case to be made that there is a fundamental weakness in the current mechanics of the US presidential election, particularly with the “first-past-the-post” voting system, in that (once the presidential primaries are concluded) a voter in the presidential election is effectively limited to choosing between just two viable choices, one from each of the two major parties, or else refusing to vote or making a largely symbolic protest vote. This weakness is particularly evident when at least one of these two major choices is demonstrably unfit for office, as per Proposition 1.  I think there is a serious case for debating the possibility of major electoral reform in the US (I am particularly partial to the Instant Runoff Voting system, used for instance in my home country of Australia, which allows for meaningful votes to third parties), and I would consider such a debate to be on-topic for this post.  But this is very much a longer-term issue, as there is absolutely no chance that any such reform would be implemented by the time of the US elections in November (particularly given that any significant reform would almost certainly require, at minimum, a constitutional amendment).


Over the last few years, a large group of mathematicians have been developing an online database to systematically collect the known facts, numerical data, and algorithms concerning some of the most central types of objects in modern number theory, namely the L-functions associated to various number fields, curves, and modular forms, as well as further data about these modular forms.  This of course includes the most famous examples of L-functions and modular forms respectively, namely the Riemann zeta function \zeta(s) and the discriminant modular form \Delta(q), but there are countless other examples of both. The connections between these classes of objects lie at the heart of the Langlands programme.

As of today, the “L-functions and modular forms database” is now out of beta, and open to the public; at present the database is mostly geared towards specialists in computational number theory, but will hopefully develop into a more broadly useful resource over time.  An article by John Cremona summarising the purpose of the database can be found here.

(Thanks to Andrew Sutherland and Kiran Kedlaya for the information.)

The International Mathematical Union (with the assistance of the Friends of the International Mathematical Union and The World Academy of Sciences, and supported by Ian Agol, Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Richard Taylor, and myself) has just launched the Graduate Breakout Fellowships, which will offer highly qualified students from developing countries a full scholarship to study for a PhD in mathematics at an institution that is also located in a developing country.  Nominations for this fellowship (which should be from a sponsoring mathematician, preferably a mentor of the nominee) have just opened (with an application deadline of June 22); details on the nomination process and eligibility requirements can be found at this page.

Nominations for the 2017 Breakthrough Prize in mathematics and the New Horizons Prizes in mathematics are now open.  In 2016, the Breakthrough Prize was awarded to Ian Agol.  The New Horizons prizes recognise breakthroughs by junior mathematicians, usually restricted to within 10 years of the PhD; the 2016 prizes were awarded to Andre Neves, Larry Guth, and Peter Scholze (declined).

The rules for the prizes are listed on this page, and nominations can be made at this page.  (No self-nominations are allowed, for the obvious reasons; in addition, a third-party letter of recommendation is required.)

Just a quick post to note that the arXiv overlay journal Discrete Analysis, managed by Timothy Gowers, has now gone live with its permanent (and quite modern-looking) web site, which is run using the Scholastica platform, as well as the first half-dozen or so accepted papers (including one of my own).  See Tim’s announcement for more details.  I am one of the editors of this journal (and am already handling a few submissions). Needless to say, we are happy to take in more submissions (though they will have to be peer reviewed if they are to be accepted, of course).

The Institute for Pure and Applied Mathematics (IPAM) here at UCLA is seeking applications for its new director in 2017 or 2018, to replace Russ Caflisch, who is nearing the end of his five-year term as IPAM director.  The previous directors of IPAM (Tony Chan, Mark Green, and Russ Caflisch) were also from the mathematics department here at UCLA, but the position is open to all qualified applicants with extensive scientific and administrative experience in mathematics, computer science, or statistics.  Applications will be reviewed beginning June 1, 2016 (though the application process will remain open through to Dec 1, 2016).

Over on the polymath blog, I’ve posted (on behalf of Dinesh Thakur) a new polymath proposal, which is to explain some numerically observed identities involving the irreducible polynomials P in the polynomial ring {\bf F}_2[t] over the field of two elements, the simplest of which is

\displaystyle \sum_P \frac{1}{1+P} = 0

(with each summand expanded as a Taylor series in u = 1/t).  Comments on the problem should be placed in the polymath blog post; if there is enough interest, we can start a formal polymath project on it.
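For readers who would like to see what the expansion in u = 1/t looks like in practice, here is a minimal Python sketch (my own illustration, not part of the proposal; the cutoff N is an arbitrary choice) that computes the first few u-coefficients of the left-hand side exactly.  The point is that a summand 1/(1+P) with P of degree d only begins at the u^d term, so the coefficients of u^0, ..., u^N of the full sum are already determined by the irreducibles of degree at most N; the numerically observed identity predicts that they all vanish.

    # Polynomials in F_2[t] are encoded as Python ints, with bit i = coefficient of t^i.

    def pmod(a, m):
        """Remainder of a modulo m in F_2[t] (both encoded as bitmasks)."""
        dm = m.bit_length() - 1
        while a.bit_length() - 1 >= dm:
            a ^= m << (a.bit_length() - 1 - dm)
        return a

    def irreducibles(max_deg):
        """All monic irreducible polynomials in F_2[t] of degree 1..max_deg, by trial division."""
        irr = []
        for d in range(1, max_deg + 1):
            for low in range(1 << d):
                p = (1 << d) | low
                if all(pmod(p, q) for q in irr if q.bit_length() - 1 <= d // 2):
                    irr.append(p)
        return irr

    def term_series(p, n):
        """Coefficients of u^0,...,u^n of 1/(1+P), expanded as a power series in u = 1/t."""
        d = p.bit_length() - 1
        q = [0] * (n + 1)                  # write 1+P(t) = t^d * Q(u), with Q(u) = u^d + sum_i p_i u^(d-i)
        for i in range(d + 1):
            if (p >> i) & 1 and d - i <= n:
                q[d - i] ^= 1
        if d <= n:
            q[d] ^= 1                      # the extra "+1" in 1+P sits at u^d
        inv = [1] + [0] * n                # invert Q(u) (constant term 1) as a power series mod u^(n+1)
        for k in range(1, n + 1):
            inv[k] = sum(q[j] & inv[k - j] for j in range(1, k + 1)) % 2
        return [inv[k - d] if k >= d else 0 for k in range(n + 1)]   # multiply by u^d

    N = 12                                 # check the coefficients of u^0, ..., u^12
    total = [0] * (N + 1)
    for p in irreducibles(N):
        for k, c in enumerate(term_series(p, N)):
            total[k] ^= c
    print(total)                           # the observed identity predicts a list of zeros

Encoding polynomials as bitmasks keeps all of the arithmetic down to a few shifts and XORs, so the check runs essentially instantly even for somewhat larger cutoffs.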

Klaus Roth, who made fundamental contributions to analytic number theory, died this Tuesday, aged 90.

I never met or communicated with Roth personally, but was certainly influenced by his work; he wrote relatively few papers, but they tended to have outsized impact. For instance, he was one of the key people (together with Bombieri) to work on simplifying and generalising the large sieve, taking it from the technically formidable original formulation of Linnik and Rényi to the clean and general almost orthogonality principle that we have today (discussed for instance in these lecture notes of mine). The paper of Roth that had the most impact on my own personal work was his three-page paper proving what is now known as Roth’s theorem on arithmetic progressions:

Theorem 1 (Roth’s theorem on arithmetic progressions) Let {A} be a set of natural numbers of positive upper density (thus {\limsup_{N \rightarrow\infty} |A \cap \{1,\dots,N\}|/N > 0}). Then {A} contains infinitely many arithmetic progressions {a,a+r,a+2r} of length three (with {r} non-zero of course).

At the heart of Roth’s elegant argument was the following (surprising at the time) dichotomy: if {A} had some moderately large density within some arithmetic progression {P}, either one could use Fourier-analytic methods to detect the presence of an arithmetic progression of length three inside {A \cap P}, or else one could locate a long subprogression {P'} of {P} on which {A} had increased density. Iterating this dichotomy by an argument now known as the density increment argument, one eventually obtains Roth’s theorem, no matter which side of the dichotomy actually holds. This argument (and the many descendants of it), based on various “dichotomies between structure and randomness”, became essential in many other results of this type, most famously perhaps in Szemerédi’s proof of his celebrated theorem on arithmetic progressions that generalised Roth’s theorem to progressions of arbitrary length. More recently, my work on the Chowla and Elliott conjectures, which was a crucial component of the solution of the Erdös discrepancy problem, relies on an entropy decrement argument that was directly inspired by the density increment argument of Roth.
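Returning to the density increment argument for a moment, one can give a rough sense of why the iteration terminates (this is only a schematic gloss, with all of the quantitative details suppressed): if {A} has density {\delta} on {P} and the Fourier-analytic step fails to produce a progression of length three, the second alternative of the dichotomy supplies a subprogression {P'} on which the density increases to at least

\displaystyle  \delta' \geq \delta + c \delta^2

for some absolute constant {c>0}.  Since a density can never exceed {1}, such an increment can only occur {O(1/\delta)} times, and so after boundedly many steps the first alternative of the dichotomy must hold and a progression is found.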

The Erdös discrepancy problem is also connected with another well-known theorem of Roth:

Theorem 2 (Roth’s discrepancy theorem for arithmetic progressions) Let {f(1),\dots,f(n)} be a sequence in {\{-1,+1\}}. Then there exists an arithmetic progression {a+r, a+2r, \dots, a+kr} in {\{1,\dots,n\}} with {r} positive such that

\displaystyle  |\sum_{j=1}^k f(a+jr)| \geq c n^{1/4}

for an absolute constant {c>0}.

In fact, Roth proved a stronger estimate regarding mean square discrepancy, which I am not writing down here; as with Roth’s theorem on arithmetic progressions, his proof was short and Fourier-analytic in nature (although non-Fourier-analytic proofs have since been found, for instance the semidefinite programming proof of Lovasz). The exponent {1/4} is known to be sharp (a result of Matousek and Spencer).

As a particular corollary of the above theorem, for an infinite sequence {f(1), f(2), \dots} of signs, the sums {|\sum_{j=1}^k f(a+jr)|} are unbounded in {a,r,k}. The Erdös discrepancy problem asks whether the same statement holds when {a} is restricted to be zero. (Roth also established discrepancy theorems for other sets, such as rectangles, which will not be discussed here.)
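To see the quantity in Theorem 2 in concrete terms, here is a small brute-force Python sketch (again my own illustration; the length {n} and the random choice of signs are arbitrary) that computes the maximal discrepancy {\max_{a,r,k} |\sum_{j=1}^k f(a+jr)|} of a given sign sequence over all arithmetic progressions inside {\{1,\dots,n\}}.  A random sequence will typically already have discrepancy of the much larger order {n^{1/2}}; the content of Roth’s theorem is that no choice of signs, however cleverly designed, can push this quantity below {c n^{1/4}}.

    import random

    def ap_discrepancy(f):
        """Max of |f(a+r)+f(a+2r)+...+f(a+kr)| over progressions inside {1,...,n}; f is 1-indexed via f[i-1]."""
        n = len(f)
        best = 0
        for r in range(1, n + 1):                 # common difference
            for a in range(1 - r, n - r + 1):     # offset; the first term a+r must lie in {1,...,n}
                s, j = 0, 1
                while a + j * r <= n:             # the running sums pick out every length k at once
                    s += f[a + j * r - 1]
                    best = max(best, abs(s))
                    j += 1
        return best

    random.seed(0)
    n = 256                                       # arbitrary small length for the demonstration
    f = [random.choice([-1, 1]) for _ in range(n)]
    print(ap_discrepancy(f), n ** 0.25)           # Roth: the first value is at least c * n^(1/4) for any signs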

Finally, one has to mention Roth’s most famous result, cited for instance in his Fields medal citation:

Theorem 3 (Roth’s theorem on Diophantine approximation) Let {\alpha} be an irrational algebraic number. Then for any {\varepsilon > 0} there is a quantity {c_{\alpha,\varepsilon} > 0} such that

\displaystyle  |\alpha - \frac{a}{q}| > \frac{c_{\alpha,\varepsilon}}{q^{2+\varepsilon}}

for all rational numbers {a/q} (with {q} positive).

From the Dirichlet approximation theorem (or from the theory of continued fractions) we know that the exponent {2+\varepsilon} in the denominator cannot be reduced to {2} or below. A classical and easy theorem of Liouville gives the claim with the exponent {2+\varepsilon} replaced by the degree of the algebraic number {\alpha}; work of Thue and Siegel reduced this exponent, but Roth was the one who obtained the near-optimal result. An important point is that the constant {c_{\alpha,\varepsilon}} is ineffective – it is a major open problem in Diophantine approximation to produce any bound significantly stronger than Liouville’s theorem with effective constants. This is because the proof of Roth’s theorem does not exclude any single rational {a/q} from being close to {\alpha}, but instead very ingeniously shows that one cannot have two different rationals {a/q}, {a'/q'} that are unusually close to {\alpha}, even when the denominators {q,q'} are very different in size. (I refer to this sort of argument as a “dueling conspiracies” argument; they are strangely prevalent throughout analytic number theory.)
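As a small numerical illustration of the remark that the exponent cannot be brought down to {2} (again a sketch of my own, not part of the original discussion): the continued fraction convergents {p/q} of {\sqrt{2}} already satisfy {|\sqrt{2} - p/q| \asymp q^{-2}}, as the bounded values of {q^2 |\sqrt{2} - p/q|} computed below indicate.

    import math

    alpha = math.sqrt(2)                          # an irrational algebraic number of degree 2

    # Convergents p/q of the continued fraction sqrt(2) = [1; 2, 2, 2, ...],
    # generated by the recurrence x_k = 2*x_{k-1} + x_{k-2}.
    p0, q0, p1, q1 = 1, 1, 3, 2                   # the first two convergents, 1/1 and 3/2
    convergents = [(p0, q0), (p1, q1)]
    for _ in range(12):
        p0, q0, p1, q1 = p1, q1, 2 * p1 + p0, 2 * q1 + q0
        convergents.append((p1, q1))

    for p, q in convergents:
        print(f"p/q = {p}/{q}   q^2 * |alpha - p/q| = {q * q * abs(alpha - p / q):.4f}")
    # The products stay bounded (they approach 1/(2*sqrt(2)), about 0.3536), so the exponent 2 is
    # genuinely attained infinitely often; Roth's theorem says that for algebraic numbers one can
    # do essentially no better than this.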

Chantal David, Andrew Granville, Emmanuel Kowalski, Philippe Michel, Kannan Soundararajan, and I are running a program at MSRI in the Spring of 2017 (more precisely, from Jan 17, 2017 to May 26, 2017) in the area of analytic number theory, with the intention of bringing together many of the leading experts in all aspects of the subject and presenting recent work on the many active areas of the subject (e.g. the distribution of the prime numbers, refinements of the circle method, a deeper understanding of the asymptotics of bounded multiplicative functions (and applications to Erdös discrepancy type problems!) and of the “pretentious” approach to analytic number theory, more “analysis-friendly” formulations of the theorems of Deligne and others involving trace functions over finite fields, and new subconvexity theorems for automorphic forms, to name a few).  As with any other semester MSRI program, there will be a number of workshops, seminars, and similar activities taking place while the members are in residence.  I’m personally looking forward to the program, which should be occurring in the midst of a particularly productive time for the subject.  Needless to say, I (and the rest of the organising committee) plan to be present for most of the program.

Applications for Postdoctoral Fellowships and Research Memberships for this program (and for other MSRI programs in this time period, namely the companion program in Harmonic Analysis and the Fall program in Geometric Group Theory, as well as the complementary program in all other areas of mathematics) remain open until Dec 1.  Applications are open to everyone, but require supporting documentation, such as a CV, statement of purpose, and letters of recommendation from other mathematicians; see the application page for more details.

