
The most fundamental unsolved problem in complexity theory is undoubtedly the P=NP problem, which asks (roughly speaking) whether every problem that can be solved by a non-deterministic polynomial-time (NP) algorithm can also be solved by a deterministic polynomial-time (P) algorithm. The general belief is that {P \neq NP}, i.e. there exist problems which can be solved by non-deterministic polynomial-time algorithms but not by deterministic polynomial-time algorithms.

One reason why the {P \neq NP} question is so difficult to resolve is that a certain generalisation of this question has an affirmative answer in some cases, and a negative answer in other cases. More precisely, if we give all the algorithms access to an oracle, then for one choice {A} of this oracle, all the problems that are solvable by non-deterministic polynomial-time algorithms that call {A} ({NP^A}) can also be solved by a deterministic polynomial-time algorithm that calls {A} ({P^A}), thus {P^A = NP^A}; but for another choice {B} of this oracle, there exist problems solvable by non-deterministic polynomial-time algorithms that call {B}, which cannot be solved by a deterministic polynomial-time algorithm that calls {B}, thus {P^B \neq NP^B}. One particular consequence of this result (which is due to Baker, Gill, and Solovay) is that there cannot be any relativisable proof of either {P=NP} or {P \neq NP}, where “relativisable” means that the proof would also work without any changes in the presence of an oracle.

The Baker-Gill-Solovay result was quite surprising, but the idea of the proof turns out to be rather simple. To get an oracle {A} such that {P^A=NP^A}, one basically sets {A} to be a powerful simulator that can simulate non-deterministic machines (and, furthermore, can also simulate itself); it turns out that any PSPACE-complete oracle would suffice for this task. To get an oracle {B} for which {P^B \neq NP^B}, one has to be a bit sneakier, setting {B} to be a query device for a sparse set of random (or high-complexity) strings, which are too complex to be guessed at by any deterministic polynomial-time algorithm.

Unfortunately, the simple idea of the proof can be obscured by various technical details (e.g. using Turing machines to define {P} and {NP} precisely), which require a certain amount of time to properly absorb. To help myself try to understand this result better, I have decided to give a sort of “allegory” of the proof, based around a (rather contrived) story about various students trying to pass a multiple choice test, which avoids all the technical details but still conveys the basic ideas of the argument. This allegory was primarily for my own benefit, but I thought it might also be of interest to some readers here (and also has some tangential relation to the proto-polymath project of deterministically finding primes), so I reproduce it below.

Read the rest of this entry »

This week I am in Bremen, where the 50th International Mathematical Olympiad is being held.  A number of former Olympians (Béla Bollobás, Tim Gowers, Laci Lovasz, Stas Smirnov, Jean-Christophe Yoccoz, and myself) were invited to give a short talk (20 minutes in length) at the celebratory event for this anniversary.  I chose to talk on a topic I have spoken about several times before, on “Structure and randomness in the prime numbers“.  Given the time constraints, there was a limit as to how much substance I could put into the talk; but I try to describe, in very general terms, what we know about the primes, and what we suspect to be true, but cannot yet establish.  As I have mentioned in previous talks, the key problem is that we suspect the distribution of the primes to obey no significant patterns (other than “local” structure, such as having a strong tendency to be odd (which is local information at the 2 place), or obeying the prime number theorem (which is local information at the infinity place)), but we still do not have fully satisfactory tools for establishing the absence of a pattern. (This is in contrast with many types of Olympiad problems, where the key to solving a problem often lies in discovering the right pattern or structure in the problem to exploit.)

The PDF of the talk is here; I decided to try out the Beamer LaTeX package for a change.

A remarkable phenomenon in probability theory is that of universality – that many seemingly unrelated probability distributions, which ostensibly involve large numbers of unknown parameters, can end up converging to a universal law that may only depend on a small handful of parameters. One of the most famous examples of the universality phenomenon is the central limit theorem; another rich source of examples comes from random matrix theory, which is one of the areas of my own research.

Analogous universality phenomena also show up in empirical distributions – the distributions of a statistic {X} from a large population of “real-world” objects. Examples include Benford’s law, Zipf’s law, and the Pareto distribution (of which the Pareto principle or 80-20 law is a special case). These laws govern the asymptotic distribution of many statistics {X} which

  • (i) take values as positive numbers;
  • (ii) range over many different orders of magnitude;
  • (iii) arise from a complicated combination of largely independent factors (with different samples of {X} arising from different independent factors); and
  • (iv) have not been artificially rounded, truncated, or otherwise constrained in size.

Examples here include the population of countries or cities, the frequency of occurrence of words in a language, the mass of astronomical objects, or the net worth of individuals or corporations. The laws are then as follows:

  • Benford’s law: For {k=1,\ldots,9}, the proportion of {X} whose first digit is {k} is approximately {\log_{10} \frac{k+1}{k}}. Thus, for instance, {X} should have a first digit of {1} about {30\%} of the time, but a first digit of {9} only about {5\%} of the time.
  • Zipf’s law: The {n^{th}} largest value of {X} should obey an approximate power law, i.e. it should be approximately {C n^{-\alpha}} for the first few {n=1,2,3,\ldots} and some parameters {C, \alpha > 0}. In many cases, {\alpha} is close to {1}.
  • Pareto distribution: The proportion of {X} with at least {m} digits (before the decimal point), where {m} is above the median number of digits, should obey an approximate exponential law, i.e. be approximately of the form {c 10^{-m/\alpha}} for some {c, \alpha > 0}. Again, in many cases {\alpha} is close to {1}.

Benford’s law and the Pareto distribution are stated here for base {10}, which is what we are most familiar with, but the laws hold for any base (after replacing all the occurrences of {10} in the above laws with the new base, of course). The laws tend to break down if the hypotheses (i)-(iv) are dropped. For instance, if the statistic {X} concentrates around its mean (as opposed to being spread over many orders of magnitude), then the normal distribution tends to be a much better model (as indicated by such results as the central limit theorem). If instead the various samples of the statistic are highly correlated with each other, then other laws can arise (for instance, the eigenvalues of a random matrix, as well as those of many empirically observed matrices, are correlated to each other, with the behaviour of the largest eigenvalues being governed by laws such as the Tracy-Widom law rather than Zipf’s law, and the bulk distribution being governed by laws such as the semicircular law rather than the normal or Pareto distributions).
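To make the first of these laws concrete, here is a minimal Python sketch (my own illustration, not part of the original discussion) which computes the Benford predictions {\log_{10} \frac{k+1}{k}} and compares them against the leading-digit frequencies of any list of positive numbers; the sample data in the usage example is invented purely to exercise the code.

```python
import math
from collections import Counter

def benford_prediction(base=10):
    """Predicted proportion of values with leading digit k, for k = 1, ..., base-1."""
    return {k: math.log(1 + 1 / k, base) for k in range(1, base)}

def leading_digit(x, base=10):
    """Leading digit of a positive number x in the given base."""
    while x >= base:
        x /= base
    while x < 1:
        x *= base
    return int(x)

def leading_digit_frequencies(values, base=10):
    """Empirical proportion of each leading digit among the positive values."""
    digits = [leading_digit(x, base) for x in values if x > 0]
    counts = Counter(digits)
    return {k: counts.get(k, 0) / len(digits) for k in range(1, base)}

if __name__ == "__main__":
    # hypothetical data spread over several orders of magnitude
    sample = [998, 2.3e6, 41_000, 7.7e8, 130, 5_400_000, 92, 1.1e4, 3.6e5]
    pred = benford_prediction()
    emp = leading_digit_frequencies(sample)
    for k in range(1, 10):
        print(f"{k}: empirical {emp[k]:.3f}   Benford {pred[k]:.3f}")
```

For a data set satisfying the hypotheses (i)-(iv) above, such as the country populations used below, the two columns should roughly agree.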

To illustrate these laws, let us take as a data set the populations of 235 countries and regions of the world in 2007 (using the CIA world factbook); I have put the raw data here. This is a relatively small sample (cf. my previous post), but is already enough to discern these laws in action. For instance, here is how the data set tracks with Benford’s law (rounded to three significant figures):

{k} Countries Number Benford prediction
1 Angola, Anguilla, Aruba, Bangladesh, Belgium, Botswana, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Chile, China, Christmas Island, Cook Islands, Cuba, Czech Republic, Ecuador, Estonia, Gabon, (The) Gambia, Greece, Guam, Guatemala, Guinea-Bissau, India, Japan, Kazakhstan, Kiribati, Malawi, Mali, Mauritius, Mexico, (Federated States of) Micronesia, Nauru, Netherlands, Niger, Nigeria, Niue, Pakistan, Portugal, Russia, Rwanda, Saint Lucia, Saint Vincent and the Grenadines, Senegal, Serbia, Swaziland, Syria, Timor-Leste (East-Timor), Tokelau, Tonga, Trinidad and Tobago, Tunisia, Tuvalu, (U.S.) Virgin Islands, Wallis and Futuna, Zambia, Zimbabwe 59 ({25.1\%}) 71 ({30.1\%})
2 Armenia, Australia, Barbados, British Virgin Islands, Cote d’Ivoire, French Polynesia, Ghana, Gibraltar, Indonesia, Iraq, Jamaica, (North) Korea, Kosovo, Kuwait, Latvia, Lesotho, Macedonia, Madagascar, Malaysia, Mayotte, Mongolia, Mozambique, Namibia, Nepal, Netherlands Antilles, New Caledonia, Norfolk Island, Palau, Peru, Romania, Saint Martin, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Slovenia, Sri Lanka, Svalbard, Taiwan, Turks and Caicos Islands, Uzbekistan, Vanuatu, Venezuela, Yemen 44 ({18.7\%}) 41 ({17.6\%})
3 Afghanistan, Albania, Algeria, (The) Bahamas, Belize, Brunei, Canada, (Rep. of the) Congo, Falkland Islands (Islas Malvinas), Iceland, Kenya, Lebanon, Liberia, Liechtenstein, Lithuania, Maldives, Mauritania, Monaco, Morocco, Oman, (Occupied) Palestinian Territory, Panama, Poland, Puerto Rico, Saint Kitts and Nevis, Uganda, United States of America, Uruguay, Western Sahara 29 ({12.3\%}) 29 ({12.5\%})
4 Argentina, Bosnia and Herzegovina, Burma (Myanmar), Cape Verde, Cayman Islands, Central African Republic, Colombia, Costa Rica, Croatia, Faroe Islands, Georgia, Ireland, (South) Korea, Luxembourg, Malta, Moldova, New Zealand, Norway, Pitcairn Islands, Singapore, South Africa, Spain, Sudan, Suriname, Tanzania, Ukraine, United Arab Emirates 27 ({11.4\%}) 22 ({9.7\%})
5 (Macao SAR) China, Cocos Islands, Denmark, Djibouti, Eritrea, Finland, Greenland, Italy, Kyrgyzstan, Montserrat, Nicaragua, Papua New Guinea, Slovakia, Solomon Islands, Togo, Turkmenistan 16 ({6.8\%}) 19 ({7.9\%})
6 American Samoa, Bermuda, Bhutan, (Dem. Rep. of the) Congo, Equatorial Guinea, France, Guernsey, Iran, Jordan, Laos, Libya, Marshall Islands, Montenegro, Paraguay, Sierra Leone, Thailand, United Kingdom 17 ({7.2\%}) 16 ({6.7\%})
7 Bahrain, Bulgaria, (Hong Kong SAR) China, Comoros, Cyprus, Dominica, El Salvador, Guyana, Honduras, Israel, (Isle of) Man, Saint Barthelemy, Saint Helena, Saint Pierre and Miquelon, Switzerland, Tajikistan, Turkey 17 ({7.2\%}) 14 ({5.8\%})
8 Andorra, Antigua and Barbuda, Austria, Azerbaijan, Benin, Burundi, Egypt, Ethiopia, Germany, Haiti, Holy See (Vatican City), Northern Mariana Islands, Qatar, Seychelles, Vietnam 15 ({6.4\%}) 12 ({5.1\%})
9 Belarus, Bolivia, Dominican Republic, Fiji, Grenada, Guinea, Hungary, Jersey, Philippines, Somalia, Sweden 11 ({4.5\%}) 11 ({4.6\%})

Here is how the same data tracks Zipf’s law for the first twenty values of {n}, with the parameters {C \approx 1.28 \times 10^9} and {\alpha \approx 1.03} (selected by log-linear regression), again rounding to three significant figures:

{n} Country Population Zipf prediction Deviation from prediction
1 China 1,330,000,000 1,280,000,000 {+4.1\%}
2 India 1,150,000,000 626,000,000 {+83.5\%}
3 USA 304,000,000 412,000,000 {-26.3\%}
4 Indonesia 238,000,000 307,000,000 {-22.5\%}
5 Brazil 196,000,000 244,000,000 {-19.4\%}
6 Pakistan 173,000,000 202,000,000 {-14.4\%}
7 Bangladesh 154,000,000 172,000,000 {-10.9\%}
8 Nigeria 146,000,000 150,000,000 {-2.6\%}
9 Russia 141,000,000 133,000,000 {+5.8\%}
10 Japan 128,000,000 120,000,000 {+6.7\%}
11 Mexico 110,000,000 108,000,000 {+1.7\%}
12 Philippines 96,100,000 98,900,000 {-2.9\%}
13 Vietnam 86,100,000 91,100,000 {-5.4\%}
14 Ethiopia 82,600,000 84,400,000 {-2.1\%}
15 Germany 82,400,000 78,600,000 {+4.8\%}
16 Egypt 81,700,000 73,500,000 {+11.1\%}
17 Turkey 71,900,000 69,100,000 {+4.1\%}
18 Congo 66,500,000 65,100,000 {+2.2\%}
19 Iran 65,900,000 61,600,000 {+6.9\%}
20 Thailand 65,500,000 58,400,000 {+12.1\%}

As one sees, Zipf’s law is not particularly precise at the extreme edge of the statistics (when {n} is very small), but becomes reasonably accurate (given the small sample size, and given that we are fitting twenty data points using only two parameters) for moderate sizes of {n}.
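Incidentally, the log-linear regression used to select {C} and {\alpha} above is easy to carry out numerically. The following Python sketch (mine, not from the post) fits the {n^{th}} largest value to {C n^{-\alpha}} by least squares in log-log coordinates; the data in the usage example is synthetic.

```python
import numpy as np

def fit_zipf(values):
    """Fit the n-th largest value to C * n**(-alpha) by least squares in
    log-log coordinates: log x_n = log C - alpha * log n."""
    x = np.sort(np.asarray(values, dtype=float))[::-1]  # sort in descending order
    n = np.arange(1, len(x) + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(x), 1)
    return np.exp(intercept), -slope  # (C, alpha)

if __name__ == "__main__":
    # synthetic data: a rough power law with multiplicative noise
    rng = np.random.default_rng(0)
    data = 1e9 * np.arange(1, 21, dtype=float) ** -1.0 * rng.lognormal(0, 0.1, 20)
    C, alpha = fit_zipf(data)
    print(f"C ~ {C:.3g}, alpha ~ {alpha:.3f}")
```

Applying the same fit to the twenty populations listed above should return values close to the quoted {C \approx 1.28 \times 10^9} and {\alpha \approx 1.03}.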

This data set has too few scales in base {10} to illustrate the Pareto distribution effectively – over half of the country populations have either seven or eight digits in that base. But if we instead work in base {2}, then country populations range over a decent number of scales (the majority of countries have population between {2^{23}} and {2^{32}}), and we begin to see the law emerge; here {m} is the number of binary digits, and the best-fit parameters are {\alpha \approx 1.18} and {c \approx 1.7 \times 2^{26} / 235}:

{m} Countries with {\geq m} binary digit populations Number Pareto prediction
31 China, India 2 1
30 " 2 2
29 “, United States of America 3 5
28 “, Indonesia, Brazil, Pakistan, Bangladesh, Nigeria, Russia 9 8
27 “, Japan, Mexico, Philippines, Vietnam, Ethiopia, Germany, Egypt, Turkey 17 15
26 “, (Dem. Rep. of the) Congo, Iran, Thailand, France, United Kingdom, Italy, South Africa, (South) Korea, Burma (Myanmar), Ukraine, Colombia, Spain, Argentina, Sudan, Tanzania, Poland, Kenya, Morocco, Algeria 36 27
25 “, Canada, Afghanistan, Uganda, Nepal, Peru, Iraq, Saudi Arabia, Uzbekistan, Venezuela, Malaysia, (North) Korea, Ghana, Yemen, Taiwan, Romania, Mozambique, Sri Lanka, Australia, Cote d’Ivoire, Madagascar, Syria, Cameroon 58 49
24 “, Netherlands, Chile, Kazakhstan, Burkina Faso, Cambodia, Malawi, Ecuador, Niger, Guatemala, Senegal, Angola, Mali, Zambia, Cuba, Zimbabwe, Greece, Portugal, Belgium, Tunisia, Czech Republic, Rwanda, Serbia, Chad, Hungary, Guinea, Belarus, Somalia, Dominican Republic, Bolivia, Sweden, Haiti, Burundi, Benin 91 88
23 “, Austria, Azerbaijan, Honduras, Switzerland, Bulgaria, Tajikistan, Israel, El Salvador, (Hong Kong SAR) China, Paraguay, Laos, Sierra Leone, Jordan, Libya, Papua New Guinea, Togo, Nicaragua, Eritrea, Denmark, Slovakia, Kyrgyzstan, Finland, Turkmenistan, Norway, Georgia, United Arab Emirates, Singapore, Bosnia and Herzegovina, Croatia, Central African Republic, Moldova, Costa Rica 123 159

Thus, with each new scale, the number of countries introduced increases by a factor of a little less than {2}, on the average. This approximate doubling of countries with each new scale begins to falter at about the {2^{23}} population scale (i.e. at populations of roughly {4} to {8} million), for the simple reason that one has begun to run out of countries. (Note that the median-population country in this set, Singapore, has a population with {23} binary digits.)
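For readers who wish to redo this bookkeeping from the raw data file linked above, here is a rough Python sketch (my own; the variable `populations` is a placeholder for that data) which tabulates, for each digit count {m}, the number of values with at least {m} digits alongside the Pareto prediction.

```python
def pareto_table(populations, alpha, c, base=2):
    """For each digit count m, compare the number of values having at least m
    base-`base` digits with the Pareto prediction N * c * base**(-m/alpha)."""
    n_total = len(populations)
    if base == 2:
        digit_counts = [int(p).bit_length() for p in populations]   # exact for integers
    else:
        digit_counts = [len(str(int(p))) for p in populations]      # digits before the decimal point
    rows = []
    for m in range(max(digit_counts), min(digit_counts) - 1, -1):
        observed = sum(1 for d in digit_counts if d >= m)
        predicted = n_total * c * base ** (-m / alpha)
        rows.append((m, observed, predicted))
    return rows

# With the full 235-country data set and the best-fit parameters quoted above,
#   pareto_table(populations, alpha=1.18, c=1.7 * 2**26 / 235, base=2)
# should roughly reproduce the table of observed and predicted counts.
```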

These laws are not merely interesting statistical curiosities; for instance, Benford’s law is often used to help detect fraudulent statistics (such as those arising from accounting fraud), as many such statistics are invented by choosing digits at random, and will therefore deviate significantly from Benford’s law. (This is nicely discussed in Robert Matthews’ New Scientist article “The power of one“; this article can also be found on the web at a number of other places.) In a somewhat analogous spirit, Zipf’s law and the Pareto distribution can be used to mathematically test various models of real-world systems (e.g. formation of astronomical objects, accumulation of wealth, population growth of countries, etc.), without necessarily having to fit all the parameters of that model with the actual data.

Being empirically observed phenomena rather than abstract mathematical facts, Benford’s law, Zipf’s law, and the Pareto distribution cannot be “proved” the same way a mathematical theorem can be proved. However, one can still support these laws mathematically in a number of ways, for instance showing how these laws are compatible with each other, and with other plausible hypotheses on the source of the data. In this post I would like to describe a number of ways (both technical and non-technical) in which one can do this; these arguments do not fully explain these laws (in particular, the empirical fact that the exponent {\alpha} in Zipf’s law or the Pareto distribution is often close to {1} is still quite a mysterious phenomenon), and do not always have the same universal range of applicability as these laws seem to have, but I hope that they do demonstrate that these laws are not completely arbitrary, and ought to have a satisfactory basis of mathematical support. Read the rest of this entry »

One further paper in this stream: László Erdős, José Ramírez, Benjamin Schlein, Van Vu, Horng-Tzer Yau, and myself have just uploaded to the arXiv the paper “Bulk universality for Wigner hermitian matrices with subexponential decay“, submitted to Mathematical Research Letters.  (Incidentally, this is the first six-author paper I have been involved in, not counting the polymath projects of course, though I have had a number of five-author papers.)

This short paper (9 pages) combines the machinery from two recent papers on the universality conjecture for the eigenvalue spacings in the bulk for Wigner random matrices (see my earlier blog post for more discussion).  On the one hand, the paper of Erdős-Ramírez-Schlein-Yau established this conjecture under the additional hypothesis that the distribution of the individual entries obeyed some smoothness and exponential decay conditions.  Meanwhile, the paper of Van Vu and myself (which I discussed in my earlier blog post) established the conjecture under a somewhat different set of hypotheses, namely that the distribution of the individual entries obeyed some moment conditions (in particular, the third moment had to vanish), a support condition (the entries had to have real part supported in at least three points), and an exponential decay condition.

After comparing our results, the six of us realised that our methods could in fact be combined rather easily to obtain a stronger result, establishing the universality conjecture assuming only an exponential decay (or more precisely, sub-exponential decay) bound {\Bbb P}(|x_{\ell k}| > t ) \ll \exp( - t^c ) on the coefficients; thus all regularity, moment, and support conditions have been eliminated.  (There is one catch, namely that we can no longer control the spacing \lambda_{i+1}-\lambda_i for a single fixed i, but must now average over all 1 \leq i \leq n before recovering the universality.  This is an annoying technical issue but it may be resolvable in the future with further refinements to the method.)

I can describe the main idea behind the unified approach here.  One can arrange the Wigner matrices in a hierarchy, from most structured to least structured:

  • The most structured (or special) ensemble is the Gaussian Unitary Ensemble (GUE), in which the coefficients are gaussian. Here, one has very explicit and tractable formulae for the eigenvalue distributions, gap spacing, etc.
  • The next most structured ensemble of Wigner matrices consists of the Gaussian-divisible or Johansson matrices, which are matrices H of the form H = e^{-t/2} \hat H + (1-e^{-t})^{1/2} V, where \hat H is another Wigner matrix, V is a GUE matrix independent of \hat H, and 0 < t < 1 is a fixed parameter independent of n.  Here, one still has quite explicit (though not quite as tractable) formulae for the joint eigenvalue distribution and related statistics.  Note that in the limit t \to \infty this construction recovers GUE.
  • After this, one has the Ornstein-Uhlenbeck-evolved matrices, which are also of the form H = e^{-t/2} \hat H + (1-e^{-t})^{1/2} V, but now t = n^{-1+\delta} decays at a power rate with n, rather than being comparable to 1.  Explicit formulae still exist for these matrices, but extracting universality out of this is hard work (and occupies the bulk of the paper of Erdős-Ramírez-Schlein-Yau).
  • Finally, one has arbitrary Wigner matrices, which can be viewed as the t=0 limit of the above Ornstein-Uhlenbeck process.

The arguments in the paper of Erdős-Ramírez-Schlein-Yau can be summarised as follows (I assume subexponential decay throughout this discussion):

  1. (Structured case) The universality conjecture is true for Ornstein-Uhlenbeck-evolved matrices with t = n^{-1+\delta} for any 0 < \delta \leq 1.  (The case 1/4 < \delta \leq 1 was treated in an earlier paper of Erdős-Ramírez-Schlein-Yau, while the case where t is comparable to 1 was treated by Johansson.)
  2. (Matching) Every Wigner matrix with suitable smoothness conditions can be “matched” with an Ornstein-Uhlenbeck-evolved matrix, in the sense that the eigenvalue statistics for the two matrices are asymptotically identical.  (This is relatively easy due to the fact that \delta can be taken arbitrarily close to zero.)
  3. Combining 1. and 2. one obtains universality for all Wigner matrices obeying suitable smoothness conditions.

The arguments in the paper of Van and myself can be summarised as follows:

  1. (Structured case) The universality conjecture is true for Johansson matrices, by the paper of Johansson.
  2. (Matching) Every Wigner matrix with some moment and support conditions can be “matched” with a Johansson matrix, in the sense that the first four moments of the entries agree, and hence (by the Lindeberg strategy in our paper) have asymptotically identical statistics.
  3. Combining 1. and 2. one obtains universality for all Wigner matrices obeying suitable moment and support conditions.

What we realised is that by combining the hard part 1. of the paper of Erdős-Ramírez-Schlein-Yau with the hard part 2. of the paper of Van and myself, we can remove all regularity, moment, and support conditions.  Roughly speaking, the unified argument proceeds as follows:

  1. (Structured case) By the arguments of Erdős-Ramírez-Schlein-Yau, the universality conjecture is true for Ornstein-Uhlenbeck-evolved matrices with t = n^{-1+\delta} for any 0 < \delta \leq 1.
  2. (Matching) Every Wigner matrix H can be “matched” with an Ornstein-Uhlenbeck-evolved matrix e^{-t/2} H + (1-e^{-t})^{1/2} V for t= n^{-1+0.01} (say), in the sense that the first four moments of the entries almost agree, which is enough (by the arguments of Van and myself) to show that these two matrices have asymptotically identical statistics on the average.
  3. Combining 1. and 2. one obtains universality for the averaged statistics for all Wigner matrices.

The averaging should be removable, but this would require better convergence results to the semicircular law than are currently known (except with additional hypotheses, such as vanishing third moment).  The subexponential decay should also be relaxed to a condition of finiteness for some fixed moment {\Bbb E} |x|^C, but we did not pursue this direction in order to keep the paper short.
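As a purely illustrative aside (none of this appears in the paper), one can watch bulk universality emerge numerically even at modest sizes. The sketch below, under various simplifying shortcuts, samples GUE matrices and Hermitian Wigner matrices whose entries are built from random signs, and compares the averaged nearest-neighbour eigenvalue spacings from the middle of the spectrum; the mean-normalisation of the gaps is a crude stand-in for the rescaling by the semicircular density.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue(n):
    """A GUE-type matrix: Hermitian part of a matrix with iid complex Gaussian entries."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def sign_wigner(n):
    """A Hermitian Wigner matrix with off-diagonal entries (+-1 +- i)/sqrt(2)
    and diagonal entries +-1 (so the entry variances match the GUE model above)."""
    z = (rng.choice([-1.0, 1.0], size=(n, n)) + 1j * rng.choice([-1.0, 1.0], size=(n, n))) / np.sqrt(2)
    h = np.triu(z, 1)
    return h + h.conj().T + np.diag(rng.choice([-1.0, 1.0], size=n))

def bulk_spacings(h):
    """Consecutive eigenvalue gaps from the middle half of the spectrum, normalised to mean 1."""
    eig = np.linalg.eigvalsh(h)
    gaps = np.diff(eig[len(eig) // 4 : 3 * len(eig) // 4])
    return gaps / gaps.mean()

n, trials = 400, 20
s_gue = np.concatenate([bulk_spacings(gue(n)) for _ in range(trials)])
s_sgn = np.concatenate([bulk_spacings(sign_wigner(n)) for _ in range(trials)])

# Universality predicts that these two (averaged) spacing distributions agree
# asymptotically; compare a few quantiles as a crude sanity check.
for q in (0.25, 0.5, 0.75):
    print(q, np.quantile(s_gue, q).round(3), np.quantile(s_sgn, q).round(3))
```

If I am reading the conditions correctly, the real part of the off-diagonal entries in this toy example is supported on only two points, so it falls outside the support condition of the paper of Van and myself but is covered by the combined result.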

The AMS has just notified me that the book version of the first year of my blog, now retitled “Structure and Randomness: pages from year one of a mathematical blog“, is now available.  An official web page for this book has also been set up here, though it is fairly empty at present.  A (2MB) high-resolution PDF file of the cover can be found here.

I plan to start on converting this year’s blog posts to book form in January, and hopefully the process should be a little faster this time.  Given that my lecture notes on ergodic theory and on the Poincaré conjecture will form the bulk of that book, I have chosen the working title for that book to be “Poincaré’s legacies: pages from year two of a mathematical blog“.

One of the most important topological concepts in analysis is that of compactness (as discussed for instance in my Companion article on this topic).  There are various flavours of this concept, but let us focus on sequential compactness: a subset E of a topological space X is sequentially compact if every sequence in E has a convergent subsequence whose limit is also in E.  This property allows one to do many things with the set E.  For instance, it allows one to maximise a functional on E:

Proposition 1. (Existence of extremisers)  Let E be a non-empty sequentially compact subset of a topological space X, and let F: E \to {\Bbb R} be a continuous function.  Then the supremum \sup_{x \in E} F(x) is attained at at least one point x_* \in E, thus F(x) \leq F(x_*) for all x \in E.  (In particular, this supremum is finite.)  Similarly for the infimum.

Proof. Let -\infty < L \leq +\infty be the supremum L := \sup_{x \in E} F(x).  By the definition of supremum (and the axiom of (countable) choice), one can find a sequence x^{(n)} in E such that F(x^{(n)}) \to L.  By compactness, we can refine this sequence to a subsequence (which, by abuse of notation, we shall continue to call x^{(n)}) such that x^{(n)} converges to a limit x in E.  Since we still have F(x^{(n)}) \to L, and F is continuous at x, we conclude that F(x)=L, and the claim for the supremum follows.  The claim for the infimum is similar.  \Box

Remark 1. An inspection of the argument shows that one can relax the continuity hypothesis on F somewhat: to attain the supremum, it suffices that F be upper semicontinuous, and to attain the infimum, it suffices that F be lower semicontinuous. \diamond

We thus see that sequential compactness is useful, among other things, for ensuring the existence of extremisers.  In finite-dimensional normed vector spaces, compact sets are plentiful; indeed, the Heine-Borel theorem asserts that every closed and bounded set in such a space is compact.  However, once one moves to infinite-dimensional spaces, such as function spaces, then the Heine-Borel theorem fails quite dramatically; most of the closed and bounded sets one encounters in a topological vector space are non-compact, if one insists on using a reasonably “strong” topology.  (For instance, the closed unit ball of l^2 is closed and bounded, but the standard basis vectors, which lie at distance \sqrt{2} from each other, form a sequence in this ball with no convergent subsequence.)  This causes a difficulty in (among other things) the calculus of variations, which is often concerned with finding extremisers to a functional F: E \to {\Bbb R} on a subset E of an infinite-dimensional function space X.

In recent decades, mathematicians have found a number of ways to get around this difficulty.  One of them is to weaken the topology to recover compactness, taking advantage of such results as the Banach-Alaoglu theorem (or its sequential counterpart).  Of course, there is a tradeoff: weakening the topology makes compactness easier to attain, but makes the continuity of F harder to establish.  Nevertheless, if F enjoys enough “smoothing” or “cancellation” properties, one can hope to obtain continuity in the weak topology, allowing one to do things such as locate extremisers.  (The phenomenon that cancellation can lead to continuity in the weak topology is sometimes referred to as compensated compactness.)

Another option is to abandon trying to make all sequences have convergent subsequences, and settle just for extremising sequences to have convergent subsequences, as this would still be enough to retain Proposition 1.  Pursuing this line of thought leads to the Palais-Smale condition, which is a substitute for compactness in some calculus of variations situations.

But in many situations, one cannot weaken the topology to the point where the domain E becomes compact, without destroying the continuity (or semi-continuity) of F, though one can often at least find an intermediate topology (or metric) in which F is continuous, but for which E is still not quite compact.  Thus one can find sequences x^{(n)} in E which do not have any subsequences that converge to a constant element x \in E, even in this intermediate metric.  (As we shall see shortly, one major cause of this failure of compactness is the existence of a non-trivial action of a non-compact group G on E; such a group action can cause compensated compactness or the Palais-Smale condition to fail also.)  Because of this, it is a priori conceivable that a continuous function F need not attain its supremum or infimum.

Nevertheless, even though a sequence x^{(n)} does not have any subsequences that converge to a constant x, it may have a subsequence (which we also call x^{(n)}) which converges to some non-constant sequence y^{(n)} (in the sense that the distance d(x^{(n)},y^{(n)}) between the subsequence and the new sequence goes to zero in this intermediate metric), where the approximating sequence y^{(n)} is of a very structured form (e.g. “concentrating” to a point, or “travelling” off to infinity, or a superposition y^{(n)} = \sum_j y^{(n)}_j of several concentrating or travelling profiles of this form).  This weaker form of compactness, in which superpositions of a certain type of profile completely describe all the failures (or defects) of compactness, is known as concentration compactness, and the decomposition x^{(n)} \approx \sum_j y^{(n)}_j of the subsequence is known as the profile decomposition.  In many applications, it is a sufficiently good substitute for compactness that one can still do things like locate extremisers for functionals F – though one often has to make some additional assumptions on F to compensate for the more complicated nature of the compactness.  This phenomenon was systematically studied by P.L. Lions in the 80s, and found great application in calculus of variations and nonlinear elliptic PDE.  More recently, concentration compactness has been a crucial and powerful tool in the non-perturbative analysis of nonlinear dispersive PDE, in particular being used to locate “minimal energy blowup solutions” or “minimal mass blowup solutions” for such a PDE (analogously to how one can use calculus of variations to find minimal energy solutions to a nonlinear elliptic equation); see for instance this recent survey by Killip and Visan.

In typical applications, the concentration compactness phenomenon is exploited in moderately sophisticated function spaces (such as Sobolev spaces or Strichartz spaces), with the failure of traditional compactness being connected to a moderately complicated group G of symmetries (e.g. the group generated by translations and dilations).  Because of this, concentration compactness can appear to be a rather complicated and technical concept when it is first encountered.  In this note, I would like to illustrate concentration compactness in a simple toy setting, namely in the space X = l^1({\Bbb Z}) of absolutely summable sequences, with the uniform (l^\infty) metric playing the role of the intermediate metric, and the translation group {\Bbb Z} playing the role of the symmetry group G.  This toy setting is significantly simpler than any model that one would actually use in practice [for instance, in most applications X is a Hilbert space], but hopefully it serves to illuminate this useful concept in a less technical fashion.
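As a quick illustration of the kind of behaviour this toy setting is designed to capture (my example here, not necessarily the one used in the rest of the post): consider the sequence x^{(n)} := \delta_0 + \delta_n in l^1({\Bbb Z}), where \delta_k denotes the Kronecker delta at k. This sequence is bounded in l^1 norm, but no subsequence converges in the uniform metric to a single element of l^1({\Bbb Z}), since x^{(n)}(n) = 1 while any fixed y \in l^1({\Bbb Z}) must have y(n) \to 0. Nevertheless it admits an exact profile decomposition x^{(n)} = y^{(n)}_1 + y^{(n)}_2, with the stationary profile y^{(n)}_1 := \delta_0 and the travelling profile y^{(n)}_2 := \delta_n, the latter being the translate of the fixed profile \delta_0 by the group element n \in {\Bbb Z}.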

Read the rest of this entry »

The U.S. presidential election is now only a few weeks away.  The politics of this election are of course interesting and important, but I do not want to discuss these topics here (there is not exactly a shortage of other venues for such a discussion), and would request that readers refrain from doing so in the comments to this post.  However, I thought it would be apropos to talk about some of the basic mathematics underlying electoral polling, and specifically to explain the fact, which can be highly unintuitive to those not well versed in statistics, that polls can be accurate even when sampling only a tiny fraction of the entire population.

Take for instance a nationwide poll of U.S. voters on which presidential candidate they intend to vote for.  A typical poll will ask a number n of randomly selected voters for their opinion; a typical value here is n = 1000.  In contrast, the total voting-eligible population of the U.S. – let’s call this set X – is about 200 million.  (The actual turnout in the election is likely to be closer to 100 million, but let’s ignore this fact for the sake of discussion.)  Thus, such a poll would sample about 0.0005% of the total population X – an incredibly tiny fraction.  Nevertheless, the margin of error (at the 95% confidence level) for such a poll, if conducted under idealised conditions (see below), is about 3%.  In other words, if we let p denote the proportion of the entire population X that will vote for a given candidate A, and let \overline{p} denote the proportion of the polled voters that will vote for A, then the event \overline{p}-0.03 \leq p \leq \overline{p}+0.03 will occur with probability at least 0.95.  Thus, for instance (and oversimplifying a little – see below), if the poll reports that 55% of respondents would vote for A, then the true percentage of the electorate that would vote for A has at least a 95% chance of lying between 52% and 58%.  Larger polls will of course give a smaller margin of error; for instance the margin of error for an (idealised) poll of 2,000 voters is about 2%.
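To see where figures such as the 3% come from under the idealised model, here is a small Python sketch (my illustration, not from the post). If each sampled voter independently supports A with probability p, then the sample proportion \overline{p} has standard deviation \sqrt{p(1-p)/n} \leq 1/(2\sqrt{n}), and the normal approximation gives a 95% margin of error of roughly 1.96/(2\sqrt{n}); the code also runs a quick Monte Carlo check of the coverage.

```python
import numpy as np

def margin_of_error(n, z=1.96):
    """Worst-case (p = 1/2) margin of error at the 95% level, by the normal approximation."""
    return z * 0.5 / np.sqrt(n)

def simulate_coverage(n, p=0.5, margin=0.03, trials=100_000, seed=0):
    """Fraction of simulated idealised polls of size n whose sample proportion
    lands within `margin` of the true proportion p."""
    rng = np.random.default_rng(seed)
    sample_proportions = rng.binomial(n, p, size=trials) / n
    return np.mean(np.abs(sample_proportions - p) <= margin)

if __name__ == "__main__":
    print(margin_of_error(1000))    # about 0.031, i.e. roughly the quoted 3%
    print(margin_of_error(2000))    # about 0.022, i.e. roughly 2%
    print(simulate_coverage(1000))  # close to 0.95
```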

I’ll give a rigorous proof of a weaker version of the above statement (giving a margin of error of about 7%, rather than 3%) in an appendix at the end of this post.  But the main point of my post here is a little different, namely to address the common misconception that the accuracy of a poll is a function of the relative sample size rather than the absolute sample size, which would suggest that a poll involving only 0.0005% of the population could not possibly have a margin of error as low as 3%.  I also want to point out some limitations of the mathematical analysis; depending on the methodology and the context, some polls involving 1000 respondents may have a much higher margin of error than the idealised rate of 3%.

Read the rest of this entry »

In this lecture – the final one on general measure-preserving dynamics – we put together the results from the past few lectures to establish the Furstenberg-Zimmer structure theorem for measure-preserving systems, and then use this to finish the proof of the Furstenberg recurrence theorem.

Read the rest of this entry »

Having studied compact extensions in the previous lecture, we now consider the opposite type of extension, namely that of a weakly mixing extension. Just as compact extensions are “relative” versions of compact systems, weakly mixing extensions are “relative” versions of weakly mixing systems, in which the underlying algebra of scalars {\Bbb C} is replaced by L^\infty(Y). As in the case of unconditionally weakly mixing systems, we will be able to use the van der Corput lemma to neglect “conditionally weakly mixing” functions, thus allowing us to lift the uniform multiple recurrence property (UMR) from a system to any weakly mixing extension of that system.

To finish the proof of the Furstenberg recurrence theorem requires two more steps. One is a relative version of the dichotomy between mixing and compactness: if a system is not weakly mixing relative to some factor, then that factor has a non-trivial compact extension. This will be accomplished using the theory of conditional Hilbert-Schmidt operators in this lecture. Finally, we need the (easy) result that the UMR property is preserved under limits of chains; this will be accomplished in the next lecture.

Read the rest of this entry »

In the previous lecture, we studied the recurrence properties of compact systems, which are systems in which all measurable functions exhibit almost periodicity – they almost return completely to themselves after repeated shifting. Now, we consider the opposite extreme of mixing systems – those in which all measurable functions (of mean zero) exhibit mixing – they become orthogonal to themselves after repeated shifting. (Actually, there are two different types of mixing, strong mixing and weak mixing, depending on whether the orthogonality occurs individually or on the average; it is the latter concept which is of more importance to the task of establishing the Furstenberg recurrence theorem.)

We shall see that for weakly mixing systems, averages such as \frac{1}{N} \sum_{n=0}^{N-1} T^n f \ldots T^{(k-1)n} f can be computed very explicitly (in fact, this average converges to the constant (\int_X f\ d\mu)^{k-1}). More generally, we shall see that weakly mixing components of a system tend to average themselves out and thus become irrelevant when studying many types of ergodic averages. Our main tool here will be the humble Cauchy-Schwarz inequality, and in particular a certain consequence of it, known as the van der Corput lemma.

As one application of this theory, we will be able to establish Roth’s theorem (the k=3 case of Szemerédi’s theorem).

Read the rest of this entry »
