Given two unit vectors {v,w} in a real inner product space, one can define the correlation between these vectors to be their inner product {\langle v, w \rangle}, or in more geometric terms, the cosine of the angle {\angle(v,w)} subtended by {v} and {w}. By the Cauchy-Schwarz inequality, this is a quantity between {-1} and {+1}, with the extreme positive correlation {+1} occurring when {v,w} are identical, the extreme negative correlation {-1} occurring when {v,w} are diametrically opposite, and the zero correlation {0} occurring when {v,w} are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables {X,Y}, which is the same as the correlation between two unit vectors {v,w} lying in the Hilbert space {L^2(\Omega)} of square-integrable random variables, with {v} being the normalisation of {X} defined by subtracting off the mean {\mathbf{E} X} and then dividing by the standard deviation of {X}, and similarly for {w} and {Y}.

One can also define correlation for complex (Hermitian) inner product spaces by taking the real part {\hbox{Re} \langle \cdot, \cdot \rangle} of the complex inner product to recover a real inner product.

While reading the (highly recommended) recent popular maths book “How not to be wrong”, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if {X} correlates with {Y}, and {Y} correlates with {Z}, then this does not imply that {X} correlates with {Z}. A simple geometric example is provided by the three unit vectors

\displaystyle  u := (1,0); v := (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}); w := (0,1)

in the Euclidean plane {{\bf R}^2}: {u} and {v} have a positive correlation of {\frac{1}{\sqrt{2}}}, as do {v} and {w}, but {u} and {w} are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and that memorising from flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.
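
As a quick numerical sanity check, here is a short Python sketch (using numpy; the snippet is just an illustration of the example above) computing the three inner products directly:

```python
# Numerical check: u correlates with v, v correlates with w, but u and w do not.
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0]) / np.sqrt(2)
w = np.array([0.0, 1.0])

print(u @ v)  # 0.7071... = 1/sqrt(2)
print(v @ w)  # 0.7071... = 1/sqrt(2)
print(u @ w)  # 0.0: no correlation between u and w
```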

However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are very close to {1}: if {u,v,w} are unit vectors such that {u} is very highly correlated with {v}, and {v} is very highly correlated with {w}, then this does imply that {u} is very highly correlated with {w}. Indeed, from the identity

\displaystyle  \| u-v \| = 2^{1/2} (1 - \langle u,v\rangle)^{1/2}

(and similarly for {v-w} and {u-w}) and the triangle inequality

\displaystyle  \|u-w\| \leq \|u-v\| + \|v-w\|,

we see that

\displaystyle  (1 - \langle u,w \rangle)^{1/2} \leq (1 - \langle u,v\rangle)^{1/2} + (1 - \langle v,w\rangle)^{1/2}. \ \ \ \ \ (1)

Thus, for instance, if {\langle u, v \rangle \geq 1-\epsilon} and {\langle v,w \rangle \geq 1-\epsilon}, then {\langle u,w \rangle \geq 1-4\epsilon}. This is of course closely related to (though slightly weaker than) the triangle inequality for angles:

\displaystyle  \angle(u,w) \leq \angle(u,v) + \angle(v,w).
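
One can also spot-check (1) numerically; the following sketch (the dimension {5}, the number of trials, and the floating-point tolerance are all arbitrary choices) draws random triples of unit vectors and verifies the inequality:

```python
# Spot check of inequality (1) for random triples of unit vectors.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    # three independent random unit vectors in R^5
    u, v, w = (x / np.linalg.norm(x) for x in rng.standard_normal((3, 5)))
    lhs = (1 - u @ w) ** 0.5
    rhs = (1 - u @ v) ** 0.5 + (1 - v @ w) ** 0.5
    assert lhs <= rhs + 1e-9  # allow a little floating-point slack
```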

Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors {u,v,w} with {\|u\|, \|v\|, \|w\| \leq 1}. This comes by extending {u,v,w} in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space {H} if necessary. More concretely, one can apply (1) to the unit vectors

\displaystyle  (u, \sqrt{1-\|u\|^2}, 0, 0), (v, 0, \sqrt{1-\|v\|^2}, 0), (w, 0, 0, \sqrt{1-\|w\|^2})

in {H \times {\bf R}^3}.

But even in the “{1\%}” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the van der Corput lemma, which basically asserts that if a unit vector {v} is correlated with many unit vectors {u_1,\dots,u_n}, then many of the pairs {u_i,u_j} will be correlated with each other. Indeed, from the Cauchy-Schwarz inequality

\displaystyle  |\langle v, \sum_{i=1}^n u_i \rangle|^2 \leq \|v\|^2 \| \sum_{i=1}^n u_i \|^2

we see that

\displaystyle  (\sum_{i=1}^n \langle v, u_i \rangle)^2 \leq \sum_{1 \leq i,j \leq n} \langle u_i, u_j \rangle. \ \ \ \ \ (2)

Thus, for instance, if {\langle v, u_i \rangle \geq \epsilon} for at least {\epsilon n} values of {i=1,\dots,n}, then (applying (2) to these values of {i} alone) the sum {\sum_{i,j} \langle u_i, u_j \rangle} over these indices must be at least {\epsilon^4 n^2}, which implies that {\langle u_i, u_j \rangle \geq \epsilon^4/2} for at least {\epsilon^4 n^2/2} pairs {(i,j)}. Or as another example: if a random variable {X} exhibits at least {1\%} positive correlation with {n} other random variables {Y_1,\dots,Y_n}, then if {n > 10,000}, at least two distinct {Y_i,Y_j} must have positive correlation with each other (although this argument does not tell you which pair {Y_i,Y_j} are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
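
Here is a small numerical illustration of this pigeonhole phenomenon (a sketch only; the dimension, the number of vectors, and the target correlation {0.3} are arbitrary choices). Each {u_i} is built to have correlation exactly {0.3} with {v}, and inequality (2) then forces a positive fraction of the pairs to correlate with each other:

```python
# Many unit vectors correlating with v forces many pairs u_i, u_j to correlate.
import numpy as np

rng = np.random.default_rng(1)
d, n, eps = 50, 200, 0.3
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

U = []
for _ in range(n):
    g = rng.standard_normal(d)
    g -= (g @ v) * v                  # project away the v-component
    g /= np.linalg.norm(g)
    U.append(eps * v + np.sqrt(1 - eps**2) * g)  # unit vector with <v, u_i> = eps
U = np.array(U)

gram = U @ U.T                        # all inner products <u_i, u_j>
print((U @ v).sum() ** 2 <= gram.sum())   # inequality (2): True
print((gram >= eps**2 / 2).mean())        # a positive fraction of correlated pairs
```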

A similar argument (multiplying each {u_i} by an appropriate sign {\pm 1}) shows the related van der Corput inequality

\displaystyle  (\sum_{i=1}^n |\langle v, u_i \rangle|)^2 \leq \sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle|, \ \ \ \ \ (3)

and this inequality is also true for complex inner product spaces. (Also, the {u_i} do not need to be unit vectors for this inequality to hold.)

Geometrically, the picture is this: if {v} positively correlates with all of the {u_1,\dots,u_n}, then the {u_1,\dots,u_n} are all squashed into a somewhat narrow cone centred at {v}. The cone is still wide enough to allow a few pairs {u_i, u_j} to be orthogonal (or even negatively correlated) with each other, but (when {n} is large enough) it is not wide enough to allow all of the {u_i,u_j} to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)

A particularly common special case of the van der Corput inequality arises when {v} is a unit vector fixed by some unitary operator {T}, and the {u_i} are shifts {u_i = T^i u} of a single unit vector {u}. In this case, the inner products {\langle v, u_i \rangle} are all equal, and we arrive at the useful van der Corput inequality

\displaystyle  |\langle v, u \rangle|^2 \leq \frac{1}{n^2} \sum_{1 \leq i,j \leq n} |\langle T^i u, T^j u \rangle|. \ \ \ \ \ (4)

(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (3).) Thus, to show that {v} has negligible correlation with {u}, it suffices to show that the shifts of {u} have negligible correlation with each other.

Here is a basic application of the van der Corput inequality:

Proposition 1 (Weyl equidistribution estimate) Let {P: {\bf Z} \rightarrow {\bf R}/{\bf Z}} be a polynomial with at least one non-constant coefficient irrational. Then one has

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e( P(n) ) = 0,

where {e(x) := e^{2\pi i x}}.

Note that this assertion implies the more general assertion

\displaystyle  \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e( kP(n) ) = 0

for any non-zero integer {k} (simply by replacing {P} by {kP}), which by the Weyl equidistribution criterion is equivalent to the sequence {P(1), P(2),\dots} being asymptotically equidistributed in {{\bf R}/{\bf Z}}.
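
Before giving the proof, here is a quick numerical illustration of the proposition for the polynomial {P(n) = \sqrt{2} n^2} (a sketch; the choice of polynomial and the values of {N} are arbitrary, and floating-point error limits how large one can usefully take {N}):

```python
# The averages (1/N) sum_{n=1}^N e(sqrt(2) n^2) should decay towards zero.
import numpy as np

def weyl_average(N):
    n = np.arange(1, N + 1, dtype=float)
    return np.exp(2j * np.pi * np.sqrt(2) * n**2).mean()

for N in (10**3, 10**4, 10**5):
    print(N, abs(weyl_average(N)))  # magnitudes shrink as N grows
```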

Proof: We induct on the degree {d} of the polynomial {P}, which must be at least one. If {d} is equal to one, the claim is easily established from the geometric series formula, so suppose that {d>1} and that the claim has already been proven for {d-1}. If the top coefficient {a_d} of {P(n) = a_d n^d + \dots + a_0} is rational, say {a_d = \frac{r}{q}} for integers {r,q}, then by partitioning the natural numbers into residue classes modulo {q}, we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient {a_d} is irrational.

In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter {p} (see e.g. this previous blog post for the basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter {p} defines an inner product {\langle \cdot, \cdot \rangle_p} on bounded complex sequences {z = (z_1,z_2,z_3,\dots)} by setting

\displaystyle  \langle z, w \rangle_p := \hbox{st} \lim_{N \rightarrow p} \frac{1}{N} \sum_{n=1}^N z_n \overline{w_n}.

Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that

\displaystyle  \langle 1, e(P) \rangle_p = 0

for every non-principal ultrafilter {p}.

Note that the space of bounded sequences (modulo null vectors) admits a shift {T}, defined by

\displaystyle  T (z_1,z_2,\dots) := (z_2,z_3,\dots).

This shift becomes unitary once we quotient out by null vectors, and the constant sequence {1} is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality, we have

\displaystyle  |\langle 1, e(P) \rangle_p|^2 \leq \frac{1}{n^2} \sum_{1 \leq i,j \leq n} |\langle T^i e(P), T^j e(P) \rangle_p|

for any {n \geq 1}. But we may rewrite {\langle T^i e(P), T^j e(P) \rangle_p = \langle 1, e(T^j P - T^i P) \rangle_p}. Then observe that if {i \neq j}, {T^j P - T^i P} is a polynomial of degree {d-1} whose degree {d-1} coefficient {d a_d (j-i)} is irrational, so by the induction hypothesis we have {\langle T^i e(P), T^j e(P) \rangle_p = 0} for {i \neq j}. For {i=j} we of course have {\langle T^i e(P), T^j e(P) \rangle_p = 1}, and so

\displaystyle  |\langle 1, e(P) \rangle_p|^2 \leq \frac{1}{n^2} \times n = \frac{1}{n}

for any {n}. Letting {n \rightarrow \infty}, we obtain the claim. \Box

A remarkable phenomenon in probability theory is that of universality – that many seemingly unrelated probability distributions, which ostensibly involve large numbers of unknown parameters, can end up converging to a universal law that may only depend on a small handful of parameters. One of the most famous examples of the universality phenomenon is the central limit theorem; another rich source of examples comes from random matrix theory, which is one of the areas of my own research.

Analogous universality phenomena also show up in empirical distributions – the distributions of a statistic {X} from a large population of “real-world” objects. Examples include Benford’s law, Zipf’s law, and the Pareto distribution (of which the Pareto principle or 80-20 law is a special case). These laws govern the asymptotic distribution of many statistics {X} which

  • (i) take values as positive numbers;
  • (ii) range over many different orders of magnitude;
  • (iii) arise from a complicated combination of largely independent factors (with different samples of {X} arising from different independent factors); and
  • (iv) have not been artificially rounded, truncated, or otherwise constrained in size.

Examples here include the population of countries or cities, the frequency of occurrence of words in a language, the mass of astronomical objects, or the net worth of individuals or corporations. The laws are then as follows:

  • Benford’s law: For {k=1,\ldots,9}, the proportion of {X} whose first digit is {k} is approximately {\log_{10} \frac{k+1}{k}}. Thus, for instance, {X} should have a first digit of {1} about {30\%} of the time, but a first digit of {9} only about {5\%} of the time.
  • Zipf’s law: The {n^{th}} largest value of {X} should obey an approximate power law, i.e. it should be approximately {C n^{-\alpha}} for the first few {n=1,2,3,\ldots} and some parameters {C, \alpha > 0}. In many cases, {\alpha} is close to {1}.
  • Pareto distribution: The proportion of {X} with at least {m} digits (before the decimal point), where {m} is above the median number of digits, should obey an approximate exponential law, i.e. be approximately of the form {c 10^{-m/\alpha}} for some {c, \alpha > 0}. Again, in many cases {\alpha} is close to {1}.

Benford’s law and the Pareto distribution are stated here for base {10}, which is what we are most familiar with, but the laws hold for any base (after replacing all the occurrences of {10} in the above laws with the new base, of course). The laws tend to break down if the hypotheses (i)-(iv) are dropped. For instance, if the statistic {X} concentrates around its mean (as opposed to being spread over many orders of magnitude), then the normal distribution tends to be a much better model (as indicated by such results as the central limit theorem). If instead the various samples of the statistic are highly correlated with each other, then other laws can arise (for instance, the eigenvalues of a random matrix, as well as many empirically observed matrices, are correlated to each other, with the behaviour of the largest eigenvalues being governed by laws such as the Tracy-Widom law rather than Zipf’s law, and the bulk distribution being governed by laws such as the semicircular law rather than the normal or Pareto distributions).
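
For the reader who wishes to experiment, here is a sketch of how one might compare an arbitrary data set of positive statistics against the Benford prediction (the function names and the placeholder `data` are of course not canonical; any list of positive numbers can be substituted):

```python
# Compare empirical leading-digit frequencies with the Benford prediction.
import math
from collections import Counter

def leading_digit(x):
    # first decimal digit of a positive number x
    return int(x / 10 ** math.floor(math.log10(x)))

def benford_comparison(data):
    counts = Counter(leading_digit(x) for x in data if x > 0)
    total = sum(counts.values())
    for k in range(1, 10):
        observed = counts[k] / total
        predicted = math.log10((k + 1) / k)
        print(k, f"observed {observed:.3f}", f"Benford {predicted:.3f}")

# e.g. benford_comparison(populations) for a list of country populations
```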

To illustrate these laws, let us take as a data set the populations of 235 countries and regions of the world in 2007 (using the CIA world factbook); I have put the raw data here. This is a relatively small sample (cf. my previous post), but is already enough to discern these laws in action. For instance, here is how the data set tracks with Benford’s law (rounded to three significant figures):

{k} | Countries | Number | Benford prediction
1 | Angola, Anguilla, Aruba, Bangladesh, Belgium, Botswana, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Chile, China, Christmas Island, Cook Islands, Cuba, Czech Republic, Ecuador, Estonia, Gabon, (The) Gambia, Greece, Guam, Guatemala, Guinea-Bissau, India, Japan, Kazakhstan, Kiribati, Malawi, Mali, Mauritius, Mexico, (Federated States of) Micronesia, Nauru, Netherlands, Niger, Nigeria, Niue, Pakistan, Portugal, Russia, Rwanda, Saint Lucia, Saint Vincent and the Grenadines, Senegal, Serbia, Swaziland, Syria, Timor-Leste (East-Timor), Tokelau, Tonga, Trinidad and Tobago, Tunisia, Tuvalu, (U.S.) Virgin Islands, Wallis and Futuna, Zambia, Zimbabwe | 59 ({25.1\%}) | 71 ({30.1\%})
2 | Armenia, Australia, Barbados, British Virgin Islands, Cote d’Ivoire, French Polynesia, Ghana, Gibraltar, Indonesia, Iraq, Jamaica, (North) Korea, Kosovo, Kuwait, Latvia, Lesotho, Macedonia, Madagascar, Malaysia, Mayotte, Mongolia, Mozambique, Namibia, Nepal, Netherlands Antilles, New Caledonia, Norfolk Island, Palau, Peru, Romania, Saint Martin, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Slovenia, Sri Lanka, Svalbard, Taiwan, Turks and Caicos Islands, Uzbekistan, Vanuatu, Venezuela, Yemen | 44 ({18.7\%}) | 41 ({17.6\%})
3 | Afghanistan, Albania, Algeria, (The) Bahamas, Belize, Brunei, Canada, (Rep. of the) Congo, Falkland Islands (Islas Malvinas), Iceland, Kenya, Lebanon, Liberia, Liechtenstein, Lithuania, Maldives, Mauritania, Monaco, Morocco, Oman, (Occupied) Palestinian Territory, Panama, Poland, Puerto Rico, Saint Kitts and Nevis, Uganda, United States of America, Uruguay, Western Sahara | 29 ({12.3\%}) | 29 ({12.5\%})
4 | Argentina, Bosnia and Herzegovina, Burma (Myanmar), Cape Verde, Cayman Islands, Central African Republic, Colombia, Costa Rica, Croatia, Faroe Islands, Georgia, Ireland, (South) Korea, Luxembourg, Malta, Moldova, New Zealand, Norway, Pitcairn Islands, Singapore, South Africa, Spain, Sudan, Suriname, Tanzania, Ukraine, United Arab Emirates | 27 ({11.4\%}) | 22 ({9.7\%})
5 | (Macao SAR) China, Cocos Islands, Denmark, Djibouti, Eritrea, Finland, Greenland, Italy, Kyrgyzstan, Montserrat, Nicaragua, Papua New Guinea, Slovakia, Solomon Islands, Togo, Turkmenistan | 16 ({6.8\%}) | 19 ({7.9\%})
6 | American Samoa, Bermuda, Bhutan, (Dem. Rep. of the) Congo, Equatorial Guinea, France, Guernsey, Iran, Jordan, Laos, Libya, Marshall Islands, Montenegro, Paraguay, Sierra Leone, Thailand, United Kingdom | 17 ({7.2\%}) | 16 ({6.7\%})
7 | Bahrain, Bulgaria, (Hong Kong SAR) China, Comoros, Cyprus, Dominica, El Salvador, Guyana, Honduras, Israel, (Isle of) Man, Saint Barthelemy, Saint Helena, Saint Pierre and Miquelon, Switzerland, Tajikistan, Turkey | 17 ({7.2\%}) | 14 ({5.8\%})
8 | Andorra, Antigua and Barbuda, Austria, Azerbaijan, Benin, Burundi, Egypt, Ethiopia, Germany, Haiti, Holy See (Vatican City), Northern Mariana Islands, Qatar, Seychelles, Vietnam | 15 ({6.4\%}) | 12 ({5.1\%})
9 | Belarus, Bolivia, Dominican Republic, Fiji, Grenada, Guinea, Hungary, Jersey, Philippines, Somalia, Sweden | 11 ({4.5\%}) | 11 ({4.6\%})

Here is how the same data tracks Zipf’s law for the first twenty values of {n}, with the parameters {C \approx 1.28 \times 10^9} and {\alpha \approx 1.03} (selected by log-linear regression), again rounding to three significant figures:

{n} | Country | Population | Zipf prediction | Deviation from prediction
1 | China | 1,330,000,000 | 1,280,000,000 | {+4.1\%}
2 | India | 1,150,000,000 | 626,000,000 | {+83.5\%}
3 | USA | 304,000,000 | 412,000,000 | {-26.3\%}
4 | Indonesia | 238,000,000 | 307,000,000 | {-22.5\%}
5 | Brazil | 196,000,000 | 244,000,000 | {-19.4\%}
6 | Pakistan | 173,000,000 | 202,000,000 | {-14.4\%}
7 | Bangladesh | 154,000,000 | 172,000,000 | {-10.9\%}
8 | Nigeria | 146,000,000 | 150,000,000 | {-2.6\%}
9 | Russia | 141,000,000 | 133,000,000 | {+5.8\%}
10 | Japan | 128,000,000 | 120,000,000 | {+6.7\%}
11 | Mexico | 110,000,000 | 108,000,000 | {+1.7\%}
12 | Philippines | 96,100,000 | 98,900,000 | {-2.9\%}
13 | Vietnam | 86,100,000 | 91,100,000 | {-5.4\%}
14 | Ethiopia | 82,600,000 | 84,400,000 | {-2.1\%}
15 | Germany | 82,400,000 | 78,600,000 | {+4.8\%}
16 | Egypt | 81,700,000 | 73,500,000 | {+11.1\%}
17 | Turkey | 71,900,000 | 69,100,000 | {+4.1\%}
18 | Congo | 66,500,000 | 65,100,000 | {+2.2\%}
19 | Iran | 65,900,000 | 61,600,000 | {+6.9\%}
20 | Thailand | 65,500,000 | 58,400,000 | {+12.1\%}

As one sees, Zipf’s law is not particularly precise at the extreme edge of the statistics (when {n} is very small), but becomes reasonably accurate (given the small sample size, and given that we are fitting twenty data points using only two parameters) for moderate sizes of {n}.
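
For concreteness, the log-linear regression used to select {C} and {\alpha} can be sketched as follows (with `pops` standing in for the list of country populations; this is an illustration, not the exact fitting procedure used above):

```python
# Fit the top-k populations to the power law C * n^(-alpha) by least squares
# in log-log coordinates: log(pop_n) ~ log C - alpha * log n.
import numpy as np

def fit_zipf(pops, k=20):
    top = np.sort(np.asarray(pops, dtype=float))[::-1][:k]  # k largest values
    n = np.arange(1, k + 1)
    slope, intercept = np.polyfit(np.log(n), np.log(top), 1)
    return np.exp(intercept), -slope  # (C, alpha)
```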

This data set has too few scales in base {10} to illustrate the Pareto distribution effectively – over half of the country populations are either seven or eight digits in that base. But if we instead work in base {2}, then the country populations range over a decent number of scales (the majority of countries have population between {2^{23}} and {2^{32}}), and we begin to see the law emerge. Here {m} is now the number of binary digits, and the best-fit parameters are {\alpha \approx 1.18} and {c \approx 1.7 \times 2^{26} / 235}:

{m} | Countries with {\geq m} binary digit populations | Number | Pareto prediction
31 | China, India | 2 | 1
30 | ” | 2 | 2
29 | ”, United States of America | 3 | 5
28 | ”, Indonesia, Brazil, Pakistan, Bangladesh, Nigeria, Russia | 9 | 8
27 | ”, Japan, Mexico, Philippines, Vietnam, Ethiopia, Germany, Egypt, Turkey | 17 | 15
26 | ”, (Dem. Rep. of the) Congo, Iran, Thailand, France, United Kingdom, Italy, South Africa, (South) Korea, Burma (Myanmar), Ukraine, Colombia, Spain, Argentina, Sudan, Tanzania, Poland, Kenya, Morocco, Algeria | 36 | 27
25 | ”, Canada, Afghanistan, Uganda, Nepal, Peru, Iraq, Saudi Arabia, Uzbekistan, Venezuela, Malaysia, (North) Korea, Ghana, Yemen, Taiwan, Romania, Mozambique, Sri Lanka, Australia, Cote d’Ivoire, Madagascar, Syria, Cameroon | 58 | 49
24 | ”, Netherlands, Chile, Kazakhstan, Burkina Faso, Cambodia, Malawi, Ecuador, Niger, Guatemala, Senegal, Angola, Mali, Zambia, Cuba, Zimbabwe, Greece, Portugal, Belgium, Tunisia, Czech Republic, Rwanda, Serbia, Chad, Hungary, Guinea, Belarus, Somalia, Dominican Republic, Bolivia, Sweden, Haiti, Burundi, Benin | 91 | 88
23 | ”, Austria, Azerbaijan, Honduras, Switzerland, Bulgaria, Tajikistan, Israel, El Salvador, (Hong Kong SAR) China, Paraguay, Laos, Sierra Leone, Jordan, Libya, Papua New Guinea, Togo, Nicaragua, Eritrea, Denmark, Slovakia, Kyrgyzstan, Finland, Turkmenistan, Norway, Georgia, United Arab Emirates, Singapore, Bosnia and Herzegovina, Croatia, Central African Republic, Moldova, Costa Rica | 123 | 159

Thus, with each new scale, the number of countries introduced increases by a factor of a little less than {2}, on the average. This approximate doubling of countries with each new scale begins to falter at about the population {2^{23}} (i.e. at around {4} million), for the simple reason that one has begun to run out of countries. (Note that the median-population country in this set, Singapore, has a population with {23} binary digits.)
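
The binary-digit counts in the table above can be reproduced by a short sketch along the following lines (again with `pops` standing in for the population data):

```python
# Count how many populations have at least m binary digits, i.e. are >= 2^(m-1).
def pareto_counts(pops, m_range=range(31, 22, -1)):
    return {m: sum(1 for x in pops if int(x).bit_length() >= m) for m in m_range}
```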

These laws are not merely interesting statistical curiosities; for instance, Benford’s law is often used to help detect fraudulent statistics (such as those arising from accounting fraud), as many such statistics are invented by choosing digits at random, and will therefore deviate significantly from Benford’s law. (This is nicely discussed in Robert Matthews’ New Scientist article “The power of one”; this article can also be found on the web at a number of other places.) In a somewhat analogous spirit, Zipf’s law and the Pareto distribution can be used to mathematically test various models of real-world systems (e.g. formation of astronomical objects, accumulation of wealth, population growth of countries, etc.), without necessarily having to fit all the parameters of that model with the actual data.

Being empirically observed phenomena rather than abstract mathematical facts, Benford’s law, Zipf’s law, and the Pareto distribution cannot be “proved” the same way a mathematical theorem can be proved. However, one can still support these laws mathematically in a number of ways, for instance showing how these laws are compatible with each other, and with other plausible hypotheses on the source of the data. In this post I would like to describe a number of ways (both technical and non-technical) in which one can do this; these arguments do not fully explain these laws (in particular, the empirical fact that the exponent {\alpha} in Zipf’s law or the Pareto distribution is often close to {1} is still quite a mysterious phenomenon), and do not always have the same universal range of applicability as these laws seem to have, but I hope that they do demonstrate that these laws are not completely arbitrary, and ought to have a satisfactory basis of mathematical support. Read the rest of this entry »

The U.S. presidential election is now only a few weeks away.  The politics of this election are of course interesting and important, but I do not want to discuss these topics here (there is not exactly a shortage of other venues for such a discussion), and would request that readers refrain from doing so in the comments to this post.  However, I thought it would be apropos to talk about some of the basic mathematics underlying electoral polling, and specifically to explain the fact, which can be highly unintuitive to those not well versed in statistics, that polls can be accurate even when sampling only a tiny fraction of the entire population.

Take for instance a nationwide poll of U.S. voters on which presidential candidate they intend to vote for.  A typical poll will ask a number n of randomly selected voters for their opinion; a typical value here is n = 1000.  In contrast, the total voting-eligible population of the U.S. – let’s call this set X – is about 200 million.  (The actual turnout in the election is likely to be closer to 100 million, but let’s ignore this fact for the sake of discussion.)  Thus, such a poll would sample about 0.0005% of the total population X – an incredibly tiny fraction.  Nevertheless, the margin of error (at the 95% confidence level) for such a poll, if conducted under idealised conditions (see below), is about 3%.  In other words, if we let p denote the proportion of the entire population X that will vote for a given candidate A, and let \overline{p} denote the proportion of the polled voters that will vote for A, then the event \overline{p}-0.03 \leq p \leq \overline{p}+0.03 will occur with probability at least 0.95.  Thus, for instance (and oversimplifying a little – see below), if the poll reports that 55% of respondents would vote for A, then the true percentage of the electorate that would vote for A has at least a 95% chance of lying between 52% and 58%.  Larger polls will of course give a smaller margin of error; for instance the margin of error for an (idealised) poll of 2,000 voters is about 2%.
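
One can see this {3\%} figure emerge in simulation; the following sketch (with an arbitrarily chosen true proportion {p = 0.55}) draws many idealised polls of {n=1000} voters each and records how often the sample proportion lands within three percentage points of the truth:

```python
# Simulated idealised polls: the spread depends on n, not on the population size.
import numpy as np

rng = np.random.default_rng(2)
p, n, trials = 0.55, 1000, 100_000
poll_results = rng.binomial(n, p, size=trials) / n  # sample proportions
print(np.mean(np.abs(poll_results - p) <= 0.03))    # roughly 0.95
```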

I’ll give a rigorous proof of a weaker version of the above statement (giving a margin of error of about 7%, rather than 3%) in an appendix at the end of this post.  But the main point of my post here is a little different, namely to address the common misconception that the accuracy of a poll is a function of the relative sample size rather than the absolute sample size, which would suggest that a poll involving only 0.0005% of the population could not possibly have a margin of error as low as 3%.  I also want to point out some limitations of the mathematical analysis; depending on the methodology and the context, some polls involving 1000 respondents may have a much higher margin of error than the idealised rate of 3%.

Read the rest of this entry »

Over two years ago, Emmanuel Candès and I submitted the paper “The Dantzig selector: Statistical estimation when p is much larger than n” to the Annals of Statistics. This paper, which appeared last year, proposed a new type of selector (which we called the Dantzig selector, due to its reliance on the linear programming methods to which George Dantzig, who had died as we were finishing our paper, had contributed so much) for statistical estimation, in the case when the number p of unknown parameters is much larger than the number n of observations. More precisely, we considered the problem of obtaining a reasonable estimate \beta^* for an unknown vector \beta \in {\Bbb R}^p of parameters given a vector y = X \beta + z \in {\Bbb R}^n of measurements, where X is a known n \times p predictor matrix and z is a (Gaussian) noise error with some variance \sigma^2. We assumed that the predictor matrix X obeyed the restricted isometry property (RIP, also known as UUP), which roughly speaking asserts that X\beta has norm comparable to the norm of \beta whenever the vector \beta is sparse. This RIP property is known to hold for various ensembles of random matrices of interest; see my earlier blog post on this topic.

Our selection algorithm, inspired by our previous work on compressed sensing, chooses the estimated parameters \beta^* to have minimal l^1 norm amongst all vectors which are consistent with the data in the sense that the residual vector r := y - X \beta^* obeys the condition

\| X^* r \|_\infty \leq \lambda, where \lambda := C \sqrt{\log p} \sigma (1)

(one can check that such a condition is obeyed with high probability in the case that \beta^* = \beta, thus the true vector of parameters is feasible for this selection algorithm). This selector is similar, though not identical, to the more well-studied lasso selector in the literature, which minimises the l^1 norm of \beta^* penalised by the l^2 norm of the residual.

A simple model case arises when n=p and X is the identity matrix, thus the observations are given by a simple additive noise model y_i = \beta_i + z_i. In this case, the Dantzig selector \beta^* is given by the soft thresholding formula

\beta^*_i = \max(|y_i| - \lambda, 0 )  \hbox{sgn}(y_i).
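
In code, this thresholding rule is the one-liner below (a sketch; `lam` plays the role of \lambda, and the function name is ours):

```python
# Soft thresholding: shrink each observation towards zero by lam, clipping at zero.
import numpy as np

def soft_threshold(y, lam):
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```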

The mean square error {\Bbb E}( \| \beta - \beta^* \|^2 ) for this selector can be computed to be roughly

\lambda^2 + \sum_{i=1}^n  \min( |\beta_i|^2, \lambda^2) (2)

and one can show that this is basically best possible (except for constants and logarithmic factors) amongst all selectors in this model. More generally, the main result of our paper was that under the assumption that the predictor matrix obeys the RIP, the mean square error of the Dantzig selector is essentially equal to (2) and thus close to best possible.

After accepting our paper, the Annals of Statistics took the (somewhat uncommon) step of soliciting responses to the paper from various experts in the field, and then soliciting a rejoinder to these responses from Emmanuel and me. Recently, the Annals posted these responses and rejoinder on the arXiv:

Read the rest of this entry »
