Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.
Suppose one assigns true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number $k$ as the final grade of the examination. More generally, one could assign some score of $A$ points to each correct answer and some score (possibly negative) of $B$ points to each incorrect answer, giving a total grade of $Ak + B(N-k)$ points if there are $N$ questions and $k$ correctly answered ones. As long as $A > B$, this grade is simply an affine rescaling of the simple grading scheme $k$ and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.
In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student $S$ and a given question $n$, the student $S$ may think that the answer to question $n$ is true with probability $p_n$ and false with probability $1 - p_n$, where $0 \leq p_n \leq 1$ is some quantity that can be viewed as a measure of confidence $S$ has in the answer (with $S$ being confident that the answer is true if $p_n$ is close to $1$, and confident that the answer is false if $p_n$ is close to $0$); for simplicity let us assume that in $S$’s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student $S$ wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for $S$ to take is to answer question $n$ true if $p_n > 1/2$ and false if $p_n < 1/2$. (If $p_n = 1/2$, the student $S$ can answer arbitrarily.)
[Important note: here we are not using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]
This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information; in particular, we do not get to directly see the student’s subjective probabilities $p_n$ for each question. If for instance $S$ answered $7$ out of $10$ questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.
But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question $n$, the student was allowed to give answers such as “$60\%$ confident that the answer is true” (and hence $40\%$ confident that the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities $p_n$.
But now it becomes less clear what the right grading scheme to pick is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given with $100\%$ confidence is awarded one point. How many points should one award a correct answer given with $60\%$ confidence? How about an incorrect answer given with $60\%$ confidence (or equivalently, a correct answer given with $40\%$ confidence)?
Mathematically, one could design a grading scheme by selecting some grading function $f: [0,1] \rightarrow {\bf R}$ and then awarding a student $f(p)$ points whenever they indicate the correct answer with a confidence of $p$. For instance, if the student was $60\%$ confident that the answer was “true” (and hence $40\%$ confident that the answer was “false”), then this grading scheme would award the student $f(0.6)$ points if the correct answer actually was “true”, and $f(0.4)$ points if the correct answer actually was “false”. One can then ask which functions $f$ would be “best” for this scheme.
Intuitively, one would expect that $f$ should be monotone increasing – one should be rewarded more for being correct with high confidence, than correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use the linear grading function $f(p) := p$ – thus for instance a correct answer given with $60\%$ confidence might be worth $0.6$ points. But is this the “best” option?
To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence $q_n$ for a student to report on a question $n$ may differ from that student’s subjective confidence $p_n$. So one could ask to design a scheme so that $q_n$ is always equal to $p_n$, so that the incentive is for the student to honestly report his or her confidence level in the answer.
This turns out to give a precise constraint on the grading function $f$. If a student $S$ thinks that the answer to a question $n$ is true with probability $p$ and false with probability $1 - p$, and enters in an answer of “true” with confidence $q$ (and thus “false” with confidence $1-q$), then the student would expect a grade of

$\displaystyle p f(q) + (1-p) f(1-q)$

on average for this question. To maximise this expected grade (assuming differentiability of $f$, which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual manoeuvre of differentiating in the independent variable $q$ and setting the result to zero, thus obtaining

$\displaystyle p f'(q) - (1-p) f'(1-q) = 0.$

In order to avoid perverse incentives, the maximum should occur at $q = p$, thus we should have

$\displaystyle p f'(p) = (1-p) f'(1-p)$

for all $0 < p < 1$. This suggests that the function $p f'(p)$ should be constant. (Strictly speaking, it only gives the weaker constraint that $p f'(p)$ is symmetric around $p = 1/2$; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force $p f'(p)$ to be constant in $p$; we leave this computation to the interested reader.) In other words, $f$ should be of the form

$\displaystyle f(p) = A \log p + B$

for some constants $A, B$; by monotonicity we expect $A$ to be positive. If we make the normalisation $f(1/2) = 0$ (so that no points are awarded for a $50$-$50$ split in confidence between true and false) and $f(1) = 1$, one arrives at the grading scheme

$\displaystyle f(p) = \log_2(2p).$

Thus, if a student believes that an answer is “true” with confidence $p$ and “false” with confidence $1-p$, he or she will be awarded $\log_2(2p)$ points when the correct answer is “true”, and $\log_2(2(1-p))$ points if the correct answer is “false”. The following table gives some illustrative values for this scheme:
| Confidence that answer is “true” | Points awarded if answer is “true” | Points awarded if answer is “false” |
| --- | --- | --- |
| $0\%$ | $-\infty$ | $1.00$ |
| $1\%$ | $-5.64$ | $0.99$ |
| $2\%$ | $-4.64$ | $0.97$ |
| $5\%$ | $-3.32$ | $0.93$ |
| $10\%$ | $-2.32$ | $0.85$ |
| $20\%$ | $-1.32$ | $0.68$ |
| $30\%$ | $-0.74$ | $0.49$ |
| $40\%$ | $-0.32$ | $0.26$ |
| $50\%$ | $0.00$ | $0.00$ |
| $60\%$ | $0.26$ | $-0.32$ |
| $70\%$ | $0.49$ | $-0.74$ |
| $80\%$ | $0.68$ | $-1.32$ |
| $90\%$ | $0.85$ | $-2.32$ |
| $95\%$ | $0.93$ | $-3.32$ |
| $98\%$ | $0.97$ | $-4.64$ |
| $99\%$ | $0.99$ | $-5.64$ |
| $100\%$ | $1.00$ | $-\infty$ |
Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of $100\%$ confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
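One way to get a feel for this scheme is to simulate it. The following sketch (in Python with numpy; the helper names are my own invention, not part of any standard library) evaluates the expected grade $p f(q) + (1-p) f(1-q)$ over a grid of reported confidences $q$, and confirms numerically that the maximum is attained at the honest report $q = p$:

```python
import numpy as np

def score(q):
    """Points for assigning confidence q to the answer that turns out
    to be correct: f(q) = log_2(2q)."""
    return np.log2(2 * q)

def expected_grade(p, q):
    """Expected grade when the subjective probability of "true" is p
    and the reported confidence in "true" is q."""
    return p * score(q) + (1 - p) * score(1 - q)

qs = np.linspace(0.01, 0.99, 9801)
for p in [0.3, 0.5, 0.8, 0.95]:
    q_best = qs[np.argmax(expected_grade(p, qs))]
    print(f"subjective p = {p:.2f} -> optimal reported q = {q_best:.2f}")
# Each optimal report matches the honest p, up to the grid resolution.
```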
The total grade given under such a scheme to a student who answers each question $n$ to be “true” with confidence $p_n$, and “false” with confidence $1 - p_n$, is

$\displaystyle \sum_{n: \hbox{ answer is true}} \log_2(2 p_n) + \sum_{n: \hbox{ answer is false}} \log_2(2(1-p_n)).$

This grade can also be written as

$\displaystyle N + \frac{1}{\log 2} \log {\mathcal L}$

where $N$ is the number of questions and

$\displaystyle {\mathcal L} := \prod_{n: \hbox{ answer is true}} p_n \times \prod_{n: \hbox{ answer is false}} (1 - p_n)$

is the likelihood of the student’s subjective probability model, given the outcome of the correct answers. Thus the grade system here has another natural interpretation, as being an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.
One could propose using the above grading scheme to evaluate predictions of binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of $100\%$ to any prediction made without any such qualification would result in an automatic grade of $-\infty$ if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach a confidence level to his or her predictions, one can assign some default level $p$ of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of $p$ that maximises this predictor’s grade. This level can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
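For what it is worth, this fitting procedure is easy to carry out numerically. The sketch below assumes (hypothetically) that the training data is simply a record of which unqualified predictions turned out to be correct; in that case the grade-maximising default confidence is just the empirical success rate:

```python
import numpy as np

def total_grade(p, outcomes):
    """Total grade if every prediction is assigned default confidence p;
    outcomes[i] is True when prediction i turned out to be correct."""
    outcomes = np.asarray(outcomes)
    return (np.log2(2 * p) * outcomes.sum()
            + np.log2(2 * (1 - p)) * (~outcomes).sum())

# Hypothetical training record: 14 correct predictions out of 20.
outcomes = np.array([True] * 14 + [False] * 6)
ps = np.linspace(0.01, 0.99, 9801)
p_best = ps[np.argmax([total_grade(p, outcomes) for p in ps])]
print(p_best)  # ~0.70, i.e. the empirical success rate 14/20
```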
The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with uncertainty, in which the student does not know enough about a question to venture even a probability of being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between $50\%$ and $70\%$ confident that the answer is “true””). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.
I recently learned about a curious operation on square matrices known as sweeping, which is used in numerical linear algebra (particularly in applications to statistics), as a useful and more robust variant of the usual Gaussian elimination operations seen in undergraduate linear algebra courses. Given an $n \times n$ matrix $A = (a_{ij})_{1 \leq i,j \leq n}$ (with, say, complex entries) and an index $1 \leq k \leq n$, with the entry $a_{kk}$ non-zero, the sweep $\hbox{Sweep}_k[A] = (\hat a_{ij})_{1 \leq i,j \leq n}$ of $A$ at $k$ is the matrix given by the formulae

$\displaystyle \hat a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}$

$\displaystyle \hat a_{ik} := \frac{a_{ik}}{a_{kk}}$

$\displaystyle \hat a_{kj} := \frac{a_{kj}}{a_{kk}}$

$\displaystyle \hat a_{kk} := \frac{-1}{a_{kk}}$

for all $i, j \in \{1,\dots,n\} \backslash \{k\}$. Thus for instance if $k = 1$, and $A$ is written in block form as

$\displaystyle A = \begin{pmatrix} a_{11} & X \\ Y & B \end{pmatrix} \ \ \ \ \ (1)$

for some $1 \times (n-1)$ row vector $X$, $(n-1) \times 1$ column vector $Y$, and $(n-1) \times (n-1)$ minor $B$, one has

$\displaystyle \hbox{Sweep}_1[A] = \begin{pmatrix} -1/a_{11} & X/a_{11} \\ Y/a_{11} & B - a_{11}^{-1} Y X \end{pmatrix}. \ \ \ \ \ (2)$
The inverse sweep operation $\hbox{Sweep}_k^{-1}[A] = (\check a_{ij})_{1 \leq i,j \leq n}$ is given by a nearly identical set of formulae:

$\displaystyle \check a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}$

$\displaystyle \check a_{ik} := -\frac{a_{ik}}{a_{kk}}$

$\displaystyle \check a_{kj} := -\frac{a_{kj}}{a_{kk}}$

$\displaystyle \check a_{kk} := \frac{-1}{a_{kk}}$

for all $i, j \in \{1,\dots,n\} \backslash \{k\}$. One can check that these operations invert each other. Actually, each sweep turns out to have order $4$, so that $\hbox{Sweep}_k^{-1} = \hbox{Sweep}_k^3$: an inverse sweep performs the same operation as three forward sweeps. Sweeps also preserve the space of symmetric matrices (allowing one to cut down computational run time in that case by a factor of two), and behave well with respect to principal minors; a sweep of a principal minor is a principal minor of a sweep, after adjusting indices appropriately.
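For readers who wish to experiment with these operations, here is a minimal numpy sketch of the sweep and inverse sweep (the function names are my own), together with numerical checks of the inversion, order-four, and commutation properties just mentioned:

```python
import numpy as np

def sweep(A, k):
    """Sweep of the square matrix A at index k (0-based), per the formulae above."""
    A = A.astype(complex)
    B = A - np.outer(A[:, k], A[k, :]) / A[k, k]
    B[:, k] = A[:, k] / A[k, k]
    B[k, :] = A[k, :] / A[k, k]
    B[k, k] = -1 / A[k, k]
    return B

def inverse_sweep(A, k):
    """Inverse sweep: identical except for the signs of row and column k."""
    A = A.astype(complex)
    B = A - np.outer(A[:, k], A[k, :]) / A[k, k]
    B[:, k] = -A[:, k] / A[k, k]
    B[k, :] = -A[k, :] / A[k, k]
    B[k, k] = -1 / A[k, k]
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
# the inverse sweep undoes a sweep, and equals three forward sweeps:
assert np.allclose(inverse_sweep(sweep(A, 1), 1), A)
assert np.allclose(sweep(sweep(sweep(sweep(A, 1), 1), 1), 1), A)
# sweeps at distinct indices commute:
assert np.allclose(sweep(sweep(A, 0), 2), sweep(sweep(A, 2), 0))
```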
Remarkably, the sweep operators all commute with each other: $\hbox{Sweep}_j \hbox{Sweep}_k = \hbox{Sweep}_k \hbox{Sweep}_j$. If $1 \leq k \leq n$ and we perform the first $k$ sweeps (in any order) to a matrix

$\displaystyle A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$

with $A_{11}$ a $k \times k$ minor, $A_{12}$ a $k \times (n-k)$ matrix, $A_{21}$ an $(n-k) \times k$ matrix, and $A_{22}$ an $(n-k) \times (n-k)$ matrix, one obtains the new matrix

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_k[A] = \begin{pmatrix} -A_{11}^{-1} & A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.$

Note the appearance of the Schur complement $A_{22} - A_{21} A_{11}^{-1} A_{12}$ in the bottom right block. Thus, for instance, one can essentially invert a matrix $A$ by performing all $n$ sweeps:

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}.$
If a matrix $A$ has the block form

$\displaystyle A = \begin{pmatrix} B & Y \\ X & a \end{pmatrix}$

for an $(n-1) \times (n-1)$ minor $B$, an $(n-1) \times 1$ column vector $Y$, a $1 \times (n-1)$ row vector $X$, and a scalar $a$, then performing the first $n-1$ sweeps gives

$\displaystyle \hbox{Sweep}_1 \dots \hbox{Sweep}_{n-1}[A] = \begin{pmatrix} -B^{-1} & B^{-1} Y \\ X B^{-1} & a - X B^{-1} Y \end{pmatrix}$

and all the components of this matrix are usable for various numerical linear algebra applications in statistics (e.g. in least squares regression). Given that sweeps behave well with inverses, it is perhaps not surprising that sweeps also behave well under determinants: the determinant of $A$ can be factored as the product of the entry $a_{kk}$ and the determinant of the $(n-1) \times (n-1)$ matrix formed from $\hbox{Sweep}_k[A]$ by removing the $k^{\hbox{th}}$ row and column. As a consequence, one can compute the determinant of $A$ fairly efficiently (so long as the sweep operations don’t come close to dividing by zero) by sweeping the matrix at $k = 1, \dots, n$ in turn, and multiplying together the $(k,k)$ entry of the matrix just before the $k^{\hbox{th}}$ sweep for each $k$ to obtain the determinant.
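Here is a sketch of that determinant algorithm (reusing the `sweep` routine from the previous snippet), together with a check of the inversion identity $\hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}$:

```python
import numpy as np  # assumes `sweep` from the previous snippet is in scope

def det_by_sweeping(A):
    """det(A) as the product of the pivots a_kk seen just before each sweep,
    valid so long as no pivot vanishes along the way."""
    A = A.astype(complex)
    det = 1.0 + 0j
    for k in range(A.shape[0]):
        det *= A[k, k]      # pivot just before the k-th sweep
        A = sweep(A, k)
    return det

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
print(det_by_sweeping(A).real, np.linalg.det(A))  # the two values agree
# sweeping at every index yields minus the inverse:
S = A.copy()
for k in range(5):
    S = sweep(S, k)
assert np.allclose(S, -np.linalg.inv(A))
```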
It turns out that there is a simple geometric explanation for these seemingly magical properties of the sweep operation. Any $n \times n$ matrix $A$ creates a graph

$\displaystyle \hbox{Graph}[A] := \{ (X, AX): X \in {\bf R}^n \}$

(where we think of ${\bf R}^n$ as the space of column vectors). This graph is an $n$-dimensional subspace of ${\bf R}^n \times {\bf R}^n$. Conversely, most $n$-dimensional subspaces of ${\bf R}^n \times {\bf R}^n$ arise as graphs; there are some that fail the vertical line test, but these form a positive codimension set of counterexamples.
We use $e_1, \dots, e_n, f_1, \dots, f_n$ to denote the standard basis of ${\bf R}^n \times {\bf R}^n$, with $e_1, \dots, e_n$ the standard basis for the first factor of ${\bf R}^n$ and $f_1, \dots, f_n$ the standard basis for the second factor. The operation of sweeping the $k^{\hbox{th}}$ entry then corresponds to a ninety degree rotation $R_k$ in the $e_k, f_k$ plane, that sends $f_k$ to $e_k$ (and $e_k$ to $-f_k$), keeping all other basis vectors fixed: thus we have

$\displaystyle \hbox{Graph}[\hbox{Sweep}_k[A]] = R_k \hbox{Graph}[A]$

for generic $A$ (more precisely, those $A$ with non-vanishing entry $a_{kk}$). For instance, if $k = 1$ and $A$ is of the form (1), then $\hbox{Graph}[A]$ is the set of tuples $(r, x, s, y)$ with $r, s \in {\bf R}$ and $x, y \in {\bf R}^{n-1}$ obeying the equations

$\displaystyle s = a_{11} r + X x$

$\displaystyle y = Y r + B x.$

The image of $\hbox{Graph}[A]$ under $R_1$ is the set of tuples $(s, x, -r, y)$. Since we can write the above system of equations (for $a_{11} \neq 0$) as

$\displaystyle -r = -\frac{1}{a_{11}} s + \frac{X}{a_{11}} x$

$\displaystyle y = \frac{Y}{a_{11}} s + (B - a_{11}^{-1} Y X) x$

we see from (2) that $R_1 \hbox{Graph}[A]$ is the graph of $\hbox{Sweep}_1[A]$. Thus the sweep operation is a multidimensional generalisation of the high school geometry fact that the line $y = mx$ in the plane becomes $y = -\frac{1}{m} x$ after applying a ninety degree rotation.
It is then an instructive exercise to use this geometric interpretation of the sweep operator to recover all the remarkable properties about these operations listed above. It is also useful to compare the geometric interpretation of sweeping as rotation of the graph to that of Gaussian elimination, which instead shears and reflects the graph by various elementary transformations (this is what is going on geometrically when one performs Gaussian elimination on an augmented matrix). Rotations are less distorting than shears, so one can see geometrically why sweeping can produce fewer numerical artefacts than Gaussian elimination.
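One can also test the rotation picture numerically. The sketch below (again reusing `sweep` from above; the other helper name is hypothetical) builds a spanning set for $\hbox{Graph}[A]$, applies the rotation $R_1$, and reads off the matrix whose graph is the rotated subspace:

```python
import numpy as np  # assumes `sweep` from the earlier snippet is in scope

def graph_basis(A):
    """Columns spanning Graph[A] = {(x, Ax)} inside R^n x R^n."""
    n = A.shape[0]
    return np.vstack([np.eye(n), A])

n = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))

# R_1: ninety degree rotation in the (e_1, f_1) plane, sending f_1 to e_1
# and e_1 to -f_1, fixing the other 2n-2 basis vectors.
R = np.eye(2 * n)
R[0, 0] = 0.0; R[n, n] = 0.0
R[0, n] = 1.0; R[n, 0] = -1.0

V = R @ graph_basis(A)                  # basis of the rotated subspace
top, bottom = V[:n, :], V[n:, :]
swept = bottom @ np.linalg.inv(top)     # matrix whose graph this is
assert np.allclose(swept, sweep(A, 0))  # agrees with the sweep at k = 1
```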
Given two unit vectors $v, w$ in a real inner product space, one can define the correlation between these vectors to be their inner product $\langle v, w \rangle$, or in more geometric terms, the cosine of the angle $\angle(v, w)$ subtended by $v$ and $w$. By the Cauchy-Schwarz inequality, this is a quantity between $-1$ and $1$, with the extreme positive correlation $+1$ occurring when $v, w$ are identical, the extreme negative correlation $-1$ occurring when $v, w$ are diametrically opposite, and the zero correlation $0$ occurring when $v, w$ are orthogonal. This notion is closely related to the notion of correlation between two non-constant square-integrable real-valued random variables $X, Y$, which is the same as the correlation between two unit vectors $v, w$ lying in the Hilbert space $L^2(\Omega)$ of square-integrable random variables, with $v$ being the normalisation of $X$ defined by subtracting off the mean ${\bf E} X$ and then dividing by the standard deviation of $X$, and similarly for $w$ and $Y$.
One can also define correlation for complex (Hermitian) inner product spaces by taking the real part of the complex inner product to recover a real inner product.
While reading the (highly recommended) recent popular maths book “How not to be wrong“, by my friend and co-author Jordan Ellenberg, I came across the (important) point that correlation is not necessarily transitive: if $X$ correlates with $Y$, and $Y$ correlates with $Z$, then this does not imply that $X$ correlates with $Z$. A simple geometric example is provided by the three unit vectors

$\displaystyle u := (1,0); \quad v := (\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}); \quad w := (0,1)$

in the Euclidean plane ${\bf R}^2$: $u$ and $v$ have a positive correlation of $\frac{1}{\sqrt{2}}$, as do $v$ and $w$, but $u$ and $w$ are not correlated with each other. Or: for a typical undergraduate course, it is generally true that good exam scores are correlated with a deep understanding of the course material, and memorising from flash cards is correlated with good exam scores, but this does not imply that memorising flash cards is correlated with deep understanding of the course material.
However, there are at least two situations in which some partial version of transitivity of correlation can be recovered. The first is in the “99%” regime in which the correlations are very close to $1$: if $u, v, w$ are unit vectors such that $u$ is very highly correlated with $v$, and $v$ is very highly correlated with $w$, then this does imply that $u$ is very highly correlated with $w$. Indeed, from the identity

$\displaystyle \| u - v \| = 2^{1/2} (1 - \langle u, v \rangle)^{1/2}$

(and similarly for $v, w$ and $u, w$) and the triangle inequality

$\displaystyle \| u - w \| \leq \| u - v \| + \| v - w \|$

we see that

$\displaystyle (1 - \langle u, w \rangle)^{1/2} \leq (1 - \langle u, v \rangle)^{1/2} + (1 - \langle v, w \rangle)^{1/2}. \ \ \ \ \ (1)$

Thus, for instance, if $\langle u, v \rangle \geq 1 - \varepsilon$ and $\langle v, w \rangle \geq 1 - \varepsilon$, then $\langle u, w \rangle \geq 1 - 4\varepsilon$. This is of course closely related to (though slightly weaker than) the triangle inequality for angles:

$\displaystyle \angle(u, w) \leq \angle(u, v) + \angle(v, w).$
Remark 1 (Thanks to Andrew Granville for conversations leading to this observation.) The inequality (1) also holds for sub-unit vectors, i.e. vectors $u, v, w$ with $\|u\|, \|v\|, \|w\| \leq 1$. This comes by extending $u, v, w$ in directions orthogonal to all three original vectors and to each other in order to make them unit vectors, enlarging the ambient Hilbert space $H$ if necessary. More concretely, one can apply (1) to the unit vectors

$\displaystyle (u, \sqrt{1 - \|u\|^2}, 0, 0), \quad (v, 0, \sqrt{1 - \|v\|^2}, 0), \quad (w, 0, 0, \sqrt{1 - \|w\|^2})$

in $H \times {\bf R}^3$.
But even in the “$1\%$” regime in which correlations are very weak, there is still a version of transitivity of correlation, known as the van der Corput lemma, which basically asserts that if a unit vector $v$ is correlated with many unit vectors $u_1, \dots, u_n$, then many of the pairs $u_i, u_j$ will then be correlated with each other. Indeed, from the Cauchy-Schwarz inequality

$\displaystyle |\langle v, u_1 + \dots + u_n \rangle|^2 \leq \|v\|^2 \| u_1 + \dots + u_n \|^2$

we see that

$\displaystyle (\sum_{i=1}^n \langle v, u_i \rangle)^2 \leq \sum_{1 \leq i,j \leq n} \langle u_i, u_j \rangle. \ \ \ \ \ (2)$

Thus, for instance, if $\langle v, u_i \rangle \geq \varepsilon$ for at least $\varepsilon n$ values of $i$, then (after removing those indices $i$ for which $\langle v, u_i \rangle < \varepsilon$) $\sum_{i,j} \langle u_i, u_j \rangle$ must be at least $\varepsilon^4 n^2$, which implies (since each inner product is at most $1$) that $\langle u_i, u_j \rangle \geq \varepsilon^4/2$ for at least $\varepsilon^4 n^2/2$ pairs $(i,j)$. Or as another example: if a random variable $X$ exhibits at least $1\%$ positive correlation with $n$ other random variables $Y_1, \dots, Y_n$, then if $n > 10^4$, at least two distinct $Y_i, Y_j$ must have positive correlation with each other (although this argument does not tell you which pair $Y_i, Y_j$ are so correlated). Thus one can view this inequality as a sort of “pigeonhole principle” for correlation.
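Here is a small numerical illustration of inequality (2) and this pigeonhole phenomenon (a sketch; the particular construction of the correlated family $u_1, \dots, u_n$ is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, eps = 50, 2000, 0.1

# u_i := eps v + sqrt(1-eps^2) w_i with w_i a random unit vector
# orthogonal to v, so that <v, u_i> = eps exactly for every i.
v = np.zeros(d); v[0] = 1.0
W = rng.standard_normal((n, d)); W[:, 0] = 0.0
W /= np.linalg.norm(W, axis=1, keepdims=True)
U = eps * v + np.sqrt(1 - eps**2) * W

G = U @ U.T                                    # Gram matrix <u_i, u_j>
assert (U @ v).sum() ** 2 <= G.sum() + 1e-6    # inequality (2)
# the average pair correlation is forced to be at least eps^2 = 0.01,
# so a sizeable fraction of pairs must be noticeably correlated:
print(G.sum() / n**2, (G >= eps**2).mean())
```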
A similar argument (multiplying each $u_i$ by an appropriate sign $\pm 1$) shows the related van der Corput inequality

$\displaystyle \sum_{i=1}^n |\langle v, u_i \rangle| \leq (\sum_{1 \leq i,j \leq n} |\langle u_i, u_j \rangle|)^{1/2}, \ \ \ \ \ (3)$

and this inequality is also true for complex inner product spaces. (Also, the $u_i$ do not need to be unit vectors for this inequality to hold.)
Geometrically, the picture is this: if $v$ positively correlates with all of the $u_1, \dots, u_n$, then the $u_i$ are all squashed into a somewhat narrow cone centred at $v$. The cone is still wide enough to allow a few pairs $u_i, u_j$ to be orthogonal (or even negatively correlated) with each other, but (when $n$ is large enough) it is not wide enough to allow all of the $u_i$
to be so widely separated. Remarkably, the bound here does not depend on the dimension of the ambient inner product space; while increasing the number of dimensions should in principle add more “room” to the cone, this effect is counteracted by the fact that in high dimensions, almost all pairs of vectors are close to orthogonal, and the exceptional pairs that are even weakly correlated to each other become exponentially rare. (See this previous blog post for some related discussion; in particular, Lemma 2 from that post is closely related to the van der Corput inequality presented here.)
A particularly common special case of the van der Corput inequality arises when $v$ is a unit vector fixed by some unitary operator $T$, and the $u_i$ are shifts $u_i = T^i u$ of a single unit vector $u$. In this case, the inner products $\langle v, u_i \rangle = \langle v, u \rangle$ are all equal, and we arrive at the useful van der Corput inequality

$\displaystyle |\langle v, u \rangle| \leq \frac{1}{n} (\sum_{1 \leq i,j \leq n} |\langle T^i u, T^j u \rangle|)^{1/2}. \ \ \ \ \ (4)$

(In fact, one can even remove the absolute values from the right-hand side, by using (2) instead of (3).) Thus, to show that $v$ has negligible correlation with $u$, it suffices to show that the shifts of $u$ have negligible correlation with each other.
Here is a basic application of the van der Corput inequality:
Proposition 2 (Weyl equidistribution estimate) Let $P: {\bf Z} \rightarrow {\bf R}/{\bf Z}$ be a polynomial with at least one non-constant coefficient irrational. Then one has

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e(P(n)) = 0,$

where $e(x) := e^{2\pi i x}$.

Note that this assertion implies the more general assertion

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N e(k P(n)) = 0$

for any non-zero integer $k$ (simply by replacing $P$ by $kP$), which by the Weyl equidistribution criterion is equivalent to the sequence $P(1), P(2), P(3), \dots$ being asymptotically equidistributed in ${\bf R}/{\bf Z}$.
Proof: We induct on the degree $d$ of the polynomial $P$, which must be at least one. If $d$ is equal to one, the claim is easily established from the geometric series formula, so suppose that $d \geq 2$ and that the claim has already been proven for degree $d-1$. If the top coefficient $a_d$ of $P(n) = a_d n^d + \dots + a_0$ is rational, say $a_d = \frac{a}{q}$, then by partitioning the natural numbers into residue classes modulo $q$, we see that the claim follows from the induction hypothesis; so we may assume that the top coefficient $a_d$ is irrational.
In order to use the van der Corput inequality as stated above (i.e. in the formalism of inner product spaces) we will need a non-principal ultrafilter $p$ (see e.g. this previous blog post for basic theory of ultrafilters); we leave it as an exercise to the reader to figure out how to present the argument below without the use of ultrafilters (or similar devices, such as Banach limits). The ultrafilter $p$ defines an inner product $\langle \cdot, \cdot \rangle_p$ on bounded complex sequences $(z_1, z_2, z_3, \dots)$ by setting

$\displaystyle \langle (z_n)_{n=1}^\infty, (w_n)_{n=1}^\infty \rangle_p := \lim_{N \rightarrow p} \frac{1}{N} \sum_{n=1}^N z_n \overline{w_n}.$

Strictly speaking, this inner product is only positive semi-definite rather than positive definite, but one can quotient out by the null vectors to obtain a positive-definite inner product. To establish the claim, it will suffice to show that

$\displaystyle \lim_{N \rightarrow p} \frac{1}{N} \sum_{n=1}^N e(P(n)) = 0$

for every non-principal ultrafilter $p$.
Note that the space of bounded sequences (modulo null vectors) admits a shift $T$, defined by

$\displaystyle T (z_n)_{n=1}^\infty := (z_{n+1})_{n=1}^\infty.$

This shift becomes unitary once we quotient out by null vectors, and the constant sequence $1 := (1)_{n=1}^\infty$ is clearly a unit vector that is invariant with respect to the shift. So by the van der Corput inequality (4), applied with $v := 1$ and the unit vector $u := (e(P(n)))_{n=1}^\infty$ (so that $\langle u, 1 \rangle_p$ is exactly the limit we wish to bound), we have

$\displaystyle |\langle u, 1 \rangle_p| \leq \frac{1}{M} (\sum_{1 \leq i,j \leq M} |\langle T^i u, T^j u \rangle_p|)^{1/2}$

for any $M \geq 1$. But we may rewrite

$\displaystyle \langle T^i u, T^j u \rangle_p = \lim_{N \rightarrow p} \frac{1}{N} \sum_{n=1}^N e( P(n+i) - P(n+j) ).$

Then observe that if $i \neq j$, $P(n+i) - P(n+j)$ is a polynomial of degree $d-1$ whose top coefficient $d a_d (i - j)$ is irrational, so by the induction hypothesis we have $\langle T^i u, T^j u \rangle_p = 0$ for $i \neq j$. For $i = j$ we of course have $\langle T^i u, T^j u \rangle_p = 1$, and so

$\displaystyle |\langle u, 1 \rangle_p| \leq \frac{M^{1/2}}{M} = \frac{1}{M^{1/2}}$

for any $M$. Letting $M \rightarrow \infty$, we obtain the claim. $\Box$
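Although the proof above is qualitative, the conclusion of Proposition 2 is easy to test numerically; for instance, taking the (arbitrarily chosen) polynomial $P(n) = \sqrt{2} n^2$, the normalised exponential sums visibly decay:

```python
import numpy as np

def weyl_sum(alpha, N, d=2):
    """(1/N) sum_{n=1}^N e(alpha n^d), where e(x) := exp(2 pi i x)."""
    n = np.arange(1, N + 1)
    return np.exp(2j * np.pi * alpha * n**d).mean()

for N in [10**3, 10**4, 10**5]:
    print(N, abs(weyl_sum(np.sqrt(2), N)))
# the absolute values tend to zero, consistent with equidistribution of
# sqrt(2) n^2 mod 1; a rational top coefficient such as 1/3 shows no decay.
```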
A remarkable phenomenon in probability theory is that of universality – that many seemingly unrelated probability distributions, which ostensibly involve large numbers of unknown parameters, can end up converging to a universal law that may only depend on a small handful of parameters. One of the most famous examples of the universality phenomenon is the central limit theorem; another rich source of examples comes from random matrix theory, which is one of the areas of my own research.
Analogous universality phenomena also show up in empirical distributions – the distributions of a statistic $X$ from a large population of “real-world” objects. Examples include Benford’s law, Zipf’s law, and the Pareto distribution (of which the Pareto principle or 80-20 law is a special case). These laws govern the asymptotic distribution of many statistics $X$ which

- (i) take values as positive numbers;
- (ii) range over many different orders of magnitude;
- (iii) arise from a complicated combination of largely independent factors (with different samples of $X$ arising from different independent factors); and
- (iv) have not been artificially rounded, truncated, or otherwise constrained in size.
Examples here include the population of countries or cities, the frequency of occurrence of words in a language, the mass of astronomical objects, or the net worth of individuals or corporations. The laws are then as follows:
- Benford’s law: For $k = 1, \dots, 9$, the proportion of $X$ whose first digit is $k$ is approximately $\log_{10} \frac{k+1}{k}$. Thus, for instance, $X$ should have a first digit of $1$ about $30\%$ of the time, but a first digit of $9$ only about $5\%$ of the time.
- Zipf’s law: The $n^{\hbox{th}}$ largest value of $X$ should obey an approximate power law, i.e. it should be approximately $C n^{-\alpha}$ for the first few $n = 1, 2, 3, \dots$ and some parameters $C, \alpha > 0$. In many cases, $\alpha$ is close to $1$.
- Pareto distribution: The proportion of $X$ with at least $k$ digits (before the decimal point), where $k$ is above the median number of digits, should obey an approximate exponential law, i.e. be approximately of the form $c \cdot 10^{-k/\alpha}$ for some parameters $c, \alpha > 0$. Again, in many cases $\alpha$ is close to $1$.
Benford’s law and the Pareto distribution are stated here for base $10$, which is what we are most familiar with, but the laws hold for any base (after replacing all the occurrences of $10$ in the above laws with the new base, of course). The laws tend to break down if the hypotheses (i)-(iv) are dropped. For instance, if the statistic $X$ concentrates around its mean (as opposed to being spread over many orders of magnitude), then the normal distribution tends to be a much better model (as indicated by such results as the central limit theorem). If instead the various samples of the statistic $X$ are highly correlated with each other, then other laws can arise (for instance, the eigenvalues of a random matrix, as well as many empirically observed matrices, are correlated to each other, with the behaviour of the largest eigenvalues being governed by laws such as the Tracy-Widom law rather than Zipf’s law, and the bulk distribution being governed by laws such as the semicircular law rather than the normal or Pareto distributions).
To illustrate these laws, let us take as a data set the populations of 235 countries and regions of the world in 2007 (using the CIA world factbook); I have put the raw data here. This is a relatively small sample (cf. my previous post), but is already enough to discern these laws in action. For instance, here is how the data set tracks with Benford’s law (rounded to three significant figures):
| First digit | Countries | Number | Benford prediction |
| --- | --- | --- | --- |
| 1 | Angola, Anguilla, Aruba, Bangladesh, Belgium, Botswana, Brazil, Burkina Faso, Cambodia, Cameroon, Chad, Chile, China, Christmas Island, Cook Islands, Cuba, Czech Republic, Ecuador, Estonia, Gabon, (The) Gambia, Greece, Guam, Guatemala, Guinea-Bissau, India, Japan, Kazakhstan, Kiribati, Malawi, Mali, Mauritius, Mexico, (Federated States of) Micronesia, Nauru, Netherlands, Niger, Nigeria, Niue, Pakistan, Portugal, Russia, Rwanda, Saint Lucia, Saint Vincent and the Grenadines, Senegal, Serbia, Swaziland, Syria, Timor-Leste (East-Timor), Tokelau, Tonga, Trinidad and Tobago, Tunisia, Tuvalu, (U.S.) Virgin Islands, Wallis and Futuna, Zambia, Zimbabwe | 59 (25.1%) | 71 (30.1%) |
| 2 | Armenia, Australia, Barbados, British Virgin Islands, Cote d’Ivoire, French Polynesia, Ghana, Gibraltar, Indonesia, Iraq, Jamaica, (North) Korea, Kosovo, Kuwait, Latvia, Lesotho, Macedonia, Madagascar, Malaysia, Mayotte, Mongolia, Mozambique, Namibia, Nepal, Netherlands Antilles, New Caledonia, Norfolk Island, Palau, Peru, Romania, Saint Martin, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Slovenia, Sri Lanka, Svalbard, Taiwan, Turks and Caicos Islands, Uzbekistan, Vanuatu, Venezuela, Yemen | 44 (18.7%) | 41 (17.6%) |
| 3 | Afghanistan, Albania, Algeria, (The) Bahamas, Belize, Brunei, Canada, (Rep. of the) Congo, Falkland Islands (Islas Malvinas), Iceland, Kenya, Lebanon, Liberia, Liechtenstein, Lithuania, Maldives, Mauritania, Monaco, Morocco, Oman, (Occupied) Palestinian Territory, Panama, Poland, Puerto Rico, Saint Kitts and Nevis, Uganda, United States of America, Uruguay, Western Sahara | 29 (12.3%) | 29 (12.5%) |
| 4 | Argentina, Bosnia and Herzegovina, Burma (Myanmar), Cape Verde, Cayman Islands, Central African Republic, Colombia, Costa Rica, Croatia, Faroe Islands, Georgia, Ireland, (South) Korea, Luxembourg, Malta, Moldova, New Zealand, Norway, Pitcairn Islands, Singapore, South Africa, Spain, Sudan, Suriname, Tanzania, Ukraine, United Arab Emirates | 27 (11.5%) | 22 (9.7%) |
| 5 | (Macao SAR) China, Cocos Islands, Denmark, Djibouti, Eritrea, Finland, Greenland, Italy, Kyrgyzstan, Montserrat, Nicaragua, Papua New Guinea, Slovakia, Solomon Islands, Togo, Turkmenistan | 16 (6.8%) | 19 (7.9%) |
| 6 | American Samoa, Bermuda, Bhutan, (Dem. Rep. of the) Congo, Equatorial Guinea, France, Guernsey, Iran, Jordan, Laos, Libya, Marshall Islands, Montenegro, Paraguay, Sierra Leone, Thailand, United Kingdom | 17 (7.2%) | 16 (6.7%) |
| 7 | Bahrain, Bulgaria, (Hong Kong SAR) China, Comoros, Cyprus, Dominica, El Salvador, Guyana, Honduras, Israel, (Isle of) Man, Saint Barthelemy, Saint Helena, Saint Pierre and Miquelon, Switzerland, Tajikistan, Turkey | 17 (7.2%) | 14 (5.8%) |
| 8 | Andorra, Antigua and Barbuda, Austria, Azerbaijan, Benin, Burundi, Egypt, Ethiopia, Germany, Haiti, Holy See (Vatican City), Northern Mariana Islands, Qatar, Seychelles, Vietnam | 15 (6.4%) | 12 (5.1%) |
| 9 | Belarus, Bolivia, Dominican Republic, Fiji, Grenada, Guinea, Hungary, Jersey, Philippines, Somalia, Sweden | 11 (4.7%) | 11 (4.6%) |
Here is how the same data tracks Zipf’s law for the first twenty values of $n$, with the parameters $C \approx 1.28 \times 10^9$ and $\alpha \approx 1.03$ (selected by log-linear regression), again rounding to three significant figures:
| Rank | Country | Population | Zipf prediction | Deviation from prediction |
| --- | --- | --- | --- | --- |
| 1 | China | 1,330,000,000 | 1,280,000,000 | +4% |
| 2 | India | 1,150,000,000 | 626,000,000 | +84% |
| 3 | USA | 304,000,000 | 412,000,000 | -26% |
| 4 | Indonesia | 238,000,000 | 307,000,000 | -22% |
| 5 | Brazil | 196,000,000 | 244,000,000 | -20% |
| 6 | Pakistan | 173,000,000 | 202,000,000 | -14% |
| 7 | Bangladesh | 154,000,000 | 172,000,000 | -10% |
| 8 | Nigeria | 146,000,000 | 150,000,000 | -3% |
| 9 | Russia | 141,000,000 | 133,000,000 | +6% |
| 10 | Japan | 128,000,000 | 120,000,000 | +7% |
| 11 | Mexico | 110,000,000 | 108,000,000 | +2% |
| 12 | Philippines | 96,100,000 | 98,900,000 | -3% |
| 13 | Vietnam | 86,100,000 | 91,100,000 | -5% |
| 14 | Ethiopia | 82,600,000 | 84,400,000 | -2% |
| 15 | Germany | 82,400,000 | 78,600,000 | +5% |
| 16 | Egypt | 81,700,000 | 73,500,000 | +11% |
| 17 | Turkey | 71,900,000 | 69,100,000 | +4% |
| 18 | Congo | 66,500,000 | 65,100,000 | +2% |
| 19 | Iran | 65,900,000 | 61,600,000 | +7% |
| 20 | Thailand | 65,500,000 | 58,400,000 | +12% |
As one sees, Zipf’s law is not particularly precise at the extreme edge of the statistics (when $n$ is very small), but becomes reasonably accurate (given the small sample size, and given that we are fitting twenty data points using only two parameters) for moderate sizes of $n$.
This data set has too few scales in base $10$ to illustrate the Pareto distribution effectively – over half of the country populations are either seven or eight digits in that base. But if we instead work in base $2$, then country populations range in a decent number of scales (the majority of countries have population between $2^{22}$ and $2^{31}$), and we begin to see the law emerge, where $k$ is now the number of digits in binary; the best-fit parameters from log-linear regression are roughly $\alpha \approx 1.2$ (so that the count of countries with at least $k$ binary digits shrinks by a factor of about $2^{1/\alpha} \approx 1.8$ with each additional digit), with the constant $c$ fitted accordingly:
| $k$ | Countries with at least $k$ binary digits in their population | Number | Pareto prediction |
| --- | --- | --- | --- |
| 31 | China, India | 2 | 1 |
| 30 | ” | 2 | 2 |
| 29 | ”, United States of America | 3 | 5 |
| 28 | ”, Indonesia, Brazil, Pakistan, Bangladesh, Nigeria, Russia | 9 | 8 |
| 27 | ”, Japan, Mexico, Philippines, Vietnam, Ethiopia, Germany, Egypt, Turkey | 17 | 15 |
| 26 | ”, (Dem. Rep. of the) Congo, Iran, Thailand, France, United Kingdom, Italy, South Africa, (South) Korea, Burma (Myanmar), Ukraine, Colombia, Spain, Argentina, Sudan, Tanzania, Poland, Kenya, Morocco, Algeria | 36 | 27 |
| 25 | ”, Canada, Afghanistan, Uganda, Nepal, Peru, Iraq, Saudi Arabia, Uzbekistan, Venezuela, Malaysia, (North) Korea, Ghana, Yemen, Taiwan, Romania, Mozambique, Sri Lanka, Australia, Cote d’Ivoire, Madagascar, Syria, Cameroon | 58 | 49 |
| 24 | ”, Netherlands, Chile, Kazakhstan, Burkina Faso, Cambodia, Malawi, Ecuador, Niger, Guatemala, Senegal, Angola, Mali, Zambia, Cuba, Zimbabwe, Greece, Portugal, Belgium, Tunisia, Czech Republic, Rwanda, Serbia, Chad, Hungary, Guinea, Belarus, Somalia, Dominican Republic, Bolivia, Sweden, Haiti, Burundi, Benin | 91 | 88 |
| 23 | ”, Austria, Azerbaijan, Honduras, Switzerland, Bulgaria, Tajikistan, Israel, El Salvador, (Hong Kong SAR) China, Paraguay, Laos, Sierra Leone, Jordan, Libya, Papua New Guinea, Togo, Nicaragua, Eritrea, Denmark, Slovakia, Kyrgyzstan, Finland, Turkmenistan, Norway, Georgia, United Arab Emirates, Singapore, Bosnia and Herzegovina, Croatia, Central African Republic, Moldova, Costa Rica | 123 | 159 |

(Here the ditto mark ” abbreviates all the countries already listed in the rows above.)
Thus, with each new scale, the number of countries introduced increases by a factor of a little less than $2$, on the average. This approximate doubling of countries with each new scale begins to falter at about the population $2^{23}$ (i.e. at around $8$ million), for the simple reason that one has begun to run out of countries. (Note that the median-population country in this set, Singapore, has a population with $23$ binary digits.)
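One can also watch hypotheses (i)-(iii) generate Benford’s law in a synthetic experiment: the sketch below (the choice of fifty lognormal factors is arbitrary) builds a statistic as a product of many independent positive factors, so that it is spread over many orders of magnitude, and compares its first-digit distribution with the Benford prediction:

```python
import numpy as np

rng = np.random.default_rng(3)
# each sample is a product of 50 independent positive factors, hence
# positive and spread over many orders of magnitude:
X = np.exp(rng.standard_normal((100_000, 50)).sum(axis=1))

first_digit = (X / 10.0 ** np.floor(np.log10(X))).astype(int)
for k in range(1, 10):
    observed = (first_digit == k).mean()
    benford = np.log10((k + 1) / k)
    print(k, round(observed, 3), round(benford, 3))
# the observed proportions track log10((k+1)/k) closely.
```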
These laws are not merely interesting statistical curiosities; for instance, Benford’s law is often used to help detect fraudulent statistics (such as those arising from accounting fraud), as many such statistics are invented by choosing digits at random, and will therefore deviate significantly from Benford’s law. (This is nicely discussed in Robert Matthews’ New Scientist article “The power of one“; this article can also be found on the web at a number of other places.) In a somewhat analogous spirit, Zipf’s law and the Pareto distribution can be used to mathematically test various models of real-world systems (e.g. formation of astronomical objects, accumulation of wealth, population growth of countries, etc.), without necessarily having to fit all the parameters of that model with the actual data.
Being empirically observed phenomena rather than abstract mathematical facts, Benford’s law, Zipf’s law, and the Pareto distribution cannot be “proved” the same way a mathematical theorem can be proved. However, one can still support these laws mathematically in a number of ways, for instance showing how these laws are compatible with each other, and with other plausible hypotheses on the source of the data. In this post I would like to describe a number of ways (both technical and non-technical) in which one can do this; these arguments do not fully explain these laws (in particular, the empirical fact that the exponent $\alpha$ in Zipf’s law or the Pareto distribution is often close to $1$ is still quite a mysterious phenomenon), and do not always have the same universal range of applicability as these laws seem to have, but I hope that they do demonstrate that these laws are not completely arbitrary, and ought to have a satisfactory basis of mathematical support.
The U.S. presidential election is now only a few weeks away. The politics of this election are of course interesting and important, but I do not want to discuss these topics here (there is not exactly a shortage of other venues for such a discussion), and would request that readers refrain from doing so in the comments to this post. However, I thought it would be apropos to talk about some of the basic mathematics underlying electoral polling, and specifically to explain the fact, which can be highly unintuitive to those not well versed in statistics, that polls can be accurate even when sampling only a tiny fraction of the entire population.
Take for instance a nationwide poll of U.S. voters on which presidential candidate they intend to vote for. A typical poll will ask $n$ randomly selected voters for their opinion; a typical value here is $n = 1000$. In contrast, the total voting-eligible population of the U.S. – let’s call this set $P$ – is about 200 million. (The actual turnout in the election is likely to be closer to 100 million, but let’s ignore this fact for the sake of discussion.) Thus, such a poll would sample about 0.0005% of the total population $P$ – an incredibly tiny fraction. Nevertheless, the margin of error (at the 95% confidence level) for such a poll, if conducted under idealised conditions (see below), is about 3%. In other words, if we let $p$ denote the proportion of the entire population $P$ that will vote for a given candidate $A$, and let $\overline{p}$ denote the proportion of the polled voters that will vote for $A$, then the event

$\displaystyle |\overline{p} - p| \leq 0.03$

will occur with probability at least 0.95. Thus, for instance (and oversimplifying a little – see below), if the poll reports that 55% of respondents would vote for $A$, then the true percentage of the electorate that would vote for $A$ has at least a 95% chance of lying between 52% and 58%. Larger polls will of course give a smaller margin of error; for instance the margin of error for an (idealised) poll of 2,000 voters is about 2%.
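The 3% figure can be reproduced from the usual normal approximation to the binomial distribution: under the idealised assumptions, $\overline{p}$ has standard deviation $\sqrt{p(1-p)/n} \leq \frac{1}{2\sqrt{n}}$, and the 95% margin of error is about $1.96$ standard deviations. A quick sketch of the computation (note that the total population size never enters):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an idealised poll of n respondents;
    the worst case is p = 1/2."""
    return z * math.sqrt(p * (1 - p) / n)

print(margin_of_error(1000))  # ~0.031, i.e. about 3%
print(margin_of_error(2000))  # ~0.022, i.e. about 2%
```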
I’ll give a rigorous proof of a weaker version of the above statement (giving a margin of error of about 7%, rather than 3%) in an appendix at the end of this post. But the main point of my post here is a little different, namely to address the common misconception that the accuracy of a poll is a function of the relative sample size rather than the absolute sample size, which would suggest that a poll involving only 0.0005% of the population could not possibly have a margin of error as low as 3%. I also want to point out some limitations of the mathematical analysis; depending on the methodology and the context, some polls involving 1000 respondents may have a much higher margin of error than the idealised rate of 3%.
Over two years ago, Emmanuel Candès and I submitted the paper “The Dantzig selector: Statistical estimation when $p$ is much larger than $n$” to the Annals of Statistics. This paper, which appeared last year, proposed a new type of selector (which we called the Dantzig selector, due to its reliance on the linear programming methods to which George Dantzig, who died as we were finishing the paper, contributed so much) for statistical estimation, in the case when the number $p$ of unknown parameters is much larger than the number $n$ of observations. More precisely, we considered the problem of obtaining a reasonable estimate $\beta^*$ for an unknown vector $\beta \in {\bf R}^p$ of parameters given a vector $y = X \beta + z \in {\bf R}^n$ of measurements, where $X$ is a known $n \times p$ predictor matrix and $z$ is a (Gaussian) noise error with some variance $\sigma^2$. We assumed that the predictor matrix $X$ obeyed the restricted isometry property (RIP, also known as UUP), which roughly speaking asserts that $X \beta$ has norm comparable to $\|\beta\|$ whenever the vector $\beta$ is sparse. This RIP property is known to hold for various ensembles of random matrices of interest; see my earlier blog post on this topic.
Our selection algorithm, inspired by our previous work on compressed sensing, chooses the estimated parameters $\beta^*$ to have minimal $\ell^1$ norm amongst all vectors which are consistent with the data in the sense that the residual vector $r := y - X \beta^*$ obeys the condition

$\displaystyle \| X^* r \|_{\ell^\infty} \leq \lambda, \hbox{ where } \lambda := C \sigma \sqrt{\log p} \ \ \ \ \ (1)$

for a suitable constant $C$ (one can check that such a condition is obeyed with high probability in the case that $\beta^* = \beta$, thus the true vector of parameters is feasible for this selection algorithm). This selector is similar, though not identical, to the more well-studied lasso selector in the literature, which minimises the $\ell^1$ norm of $\beta^*$ penalised by the $\ell^2$ norm of the residual.
A simple model case arises when $n = p$ and $X$ is the identity matrix, thus the observations are given by a simple additive noise model $y_i = \beta_i + z_i$. In this case, the Dantzig selector $\beta^*$ is given by the soft thresholding formula

$\displaystyle \beta_i^* = \max( |y_i| - \lambda, 0 ) \, \hbox{sgn}(y_i).$
(2)
and one can show that this is basically best possible (except for constants and logarithmic factors) amongst all selectors in this model. More generally, the main result of our paper was that under the assumption that the predictor matrix obeys the RIP, the mean square error of the Dantzig selector is essentially equal to (2) and thus close to best possible.
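In the identity-design model case, one can simulate the soft-thresholding selector and compare its empirical error with the benchmark (2); here is a sketch (with the constant $C$ in the definition of $\lambda$ set to $1$ purely for illustration, and an arbitrary sparse choice of $\beta$):

```python
import numpy as np

rng = np.random.default_rng(4)
p, sigma = 10_000, 1.0
beta = np.zeros(p); beta[:30] = 5.0        # an arbitrary sparse parameter vector
y = beta + sigma * rng.standard_normal(p)  # observations y_i = beta_i + z_i

lam = sigma * np.sqrt(np.log(p))           # lambda = C sigma sqrt(log p), C = 1
beta_star = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)  # soft thresholding

mse = np.sum((beta_star - beta) ** 2)
benchmark = np.sum(np.minimum(beta**2, sigma**2 * np.log(p)))
print(mse, benchmark)   # the two are of comparable size
```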
After accepting our paper, the Annals of Statistics took the (somewhat uncommon) step of soliciting responses to the paper from various experts in the field, and then soliciting a rejoinder to these responses from Emmanuel and me. Recently, the Annals posted these responses and rejoinder on the arXiv: