Last year on this blog, I sketched out a non-rigorous probabilistic argument justifying the following well-known theorem:

Theorem 1. (Non-measurable sets exist) There exists a subset E of the unit interval [0,1] which is not Lebesgue measurable.

The idea was to let E be a “random” subset of [0,1].  If one (non-rigorously) applies the law of large numbers, one expects E to have “density” 1/2 with respect to every subinterval of [0,1], which would contradict the Lebesgue differentiation theorem.
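The law-of-large-numbers heuristic can be illustrated by a finite simulation: replace [0,1] by a grid of 2^{20} points and include each point independently with probability 1/2.  (This is only a finite caricature of the uncountable coin-flipping discussed below; the grid size and the sample intervals are arbitrary illustrative choices.)

```python
import random

random.seed(0)

# Finite caricature of a "random subset" of [0,1]: a grid of 2**20 points,
# each included independently with probability 1/2.
M = 2 ** 20
coin = [random.random() < 0.5 for _ in range(M)]

# By the law of large numbers, the empirical density in any fixed
# subinterval should be close to 1/2.
for lo, hi in [(0.0, 1.0), (0.25, 0.5), (0.625, 0.75)]:
    i, j = int(lo * M), int(hi * M)
    print(f"density in [{lo}, {hi}]: {sum(coin[i:j]) / (j - i):.4f}")
```

Of course, this finite model says nothing about measurability; it merely illustrates why one expects a random set to have density 1/2 everywhere.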

I was recently asked whether I could in fact make the above argument rigorous. This turned out to be more difficult than I had anticipated, due to some technicalities in trying to make the concept of a random subset of [0,1] (which requires an uncountable number of “coin flips” to generate) both rigorous and useful.  However, there is a simpler variant of the above argument which can be made rigorous.  Instead of letting E be a “random” subset of [0,1], one takes E to be an “alternating” set that contains “every other” real number in [0,1]; this again should have density 1/2 in every subinterval and thus again contradict the Lebesgue differentiation theorem.

Of course, in the standard model of the real numbers, it makes no sense to talk about “every other” or “every second” real number, as the real numbers are not discrete.  If however one employs the language of non-standard analysis, then it is possible to make the above argument rigorous, and this is the purpose of my post today. I will assume some basic familiarity with non-standard analysis, for instance as discussed in this earlier post of mine.

We begin by selecting a non-principal ultrafilter p \in \beta {\Bbb N} \backslash {\Bbb N} and use it to construct non-standard models *{\Bbb N}, *[0,1] of the natural numbers {\Bbb N} and the unit interval [0,1] by the usual ultrapower construction.  We then let N \in *{\Bbb N} be an unlimited non-standard number, i.e. a non-standard natural number larger than any standard natural number.  (For instance, one could take N to be the equivalence class in *{\Bbb N} of the sequence 1, 2, 3, \ldots.)
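For concreteness, here is the identification underlying the ultrapower construction (this is just the standard definition, recalled for the reader's convenience):

```latex
(a_1, a_2, a_3, \ldots) \sim (b_1, b_2, b_3, \ldots)
\quad \iff \quad
\{ n \in {\Bbb N} : a_n = b_n \} \in p,
\qquad
*{\Bbb N} := {\Bbb N}^{\Bbb N} / \sim.
```

Arithmetic operations and relations on *{\Bbb N} are defined coordinatewise, with two sequences deemed to satisfy a relation when they satisfy it on a set of coordinates lying in p; the construction of *[0,1] is identical.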

We can partition the non-standard unit interval *[0,1] into 2^N (non-standard) intervals J_j := [j/2^N, (j+1)/2^N] for j=0,\ldots,2^N-1 (the overlap of these intervals will have a negligible impact in our analysis).  We then define the non-standard set *E \subset *[0,1] to be the union of those J_j with j odd; this is the formalisation of the idea of “every other real number” in the introduction.  The key property of *E that we need here is the following symmetry property: if I = [a/2^n, (a+1)/2^n] is any standard dyadic interval, then the (non-standard counterpart to the) interval I can be partitioned (modulo (non-standard) rationals) into the set *E \cap I and its reflection (2a+1)/2^n - (*E \cap I).  This demonstrates in particular that *E has (non-standard) density 1/2 inside I, but we will need to use the symmetry property directly, rather than the non-standard density property, because the former is much easier to transfer to the standard setting than the latter.
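A finite toy version of this construction may help clarify the symmetry property.  Here N is a (standard!) number, and the values N = 12, n = 3, a = 5 are arbitrary illustrative choices; the same parity check works for any N > n.

```python
# Toy version of the alternating set: partition [0,1] into 2**N intervals
# J_j = [j/2**N, (j+1)/2**N] and take E* to be the union of the odd-indexed ones.
N = 12        # stands in for the unlimited non-standard N
n, a = 3, 5   # a standard dyadic interval I = [a/2**n, (a+1)/2**n]

width = 2 ** (N - n)                   # number of intervals J_j contained in I
js = range(a * width, (a + 1) * width)

# Density of the alternating set inside I is exactly 1/2.
odd = [j for j in js if j % 2 == 1]
print(len(odd) / width)  # → 0.5

# The reflection x -> (2a+1)/2**n - x sends J_j to J_{j'} with
# j' = (2a+1)*2**(N-n) - j - 1, which always flips the parity of j
# (since (2a+1)*2**(N-n) is even), so it swaps E* ∩ I with its complement in I.
for j in js:
    jp = (2 * a + 1) * width - j - 1
    assert (j % 2) != (jp % 2)
```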

We now return to the standard world, and introduce the standard set E \subset [0,1], defined as the collection of all standard x \in [0,1] whose non-standard representative *x lies in *E.  This is certainly a standard subset of [0,1].  We claim that it is not Lebesgue measurable, thus establishing Theorem 1.  To see this, recall that for any standard dyadic interval I = [a/2^n,(a+1)/2^n], every non-standard irrational element of I lies in exactly one of *E or (2a+1)/2^n - *E.  Applying the transfer principle, we conclude that every standard irrational element of I lies in exactly one of E or (2a+1)/2^n - E.  Thus, if E were measurable, the density of E in the dyadic interval I would have to be exactly 1/2.  But this contradicts the Lebesgue differentiation theorem, and we are done.
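To spell out the density computation in the penultimate sentence: the reflection x \mapsto (2a+1)/2^n - x preserves Lebesgue measure, and E \cap I together with its reflection covers I up to the null set of rationals, so measurability of E would force

```latex
|I| = m(E \cap I) + m\left( \frac{2a+1}{2^n} - (E \cap I) \right) = 2\, m(E \cap I),
```

i.e. E would have density exactly 1/2 in every standard dyadic interval, whereas the Lebesgue differentiation theorem asserts that almost every point of a measurable set is a point of density 1 (and almost every point of its complement a point of density 0).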

Remark 1. One can eliminate the non-standard analysis from this argument and rely directly on the non-principal ultrafilter p.  Indeed, if one inspects the ultrapower construction carefully, one sees that (outside of the terminating binary rationals, which do not have a unique binary expansion), E consists of those numbers in [0,1] for which the set of positions of the 1s in the binary expansion lies in p.  The symmetry property of E then reflects the non-principal ultrafilter nature of p, in particular the fact that membership of a set A \subset {\Bbb N} in p is insensitive to any finite modification of A, and is reversed by replacing A with its complement.  On the other hand, one cannot eliminate the non-principal ultrafilter entirely; once one drops the axiom of choice, there exist models of the real line in which every set is Lebesgue measurable, and so some step in the proof of Theorem 1 must involve this axiom.  In the above argument, choice is used to find the non-principal ultrafilter p. \diamond
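In symbols, writing x = \sum_{n=1}^\infty x_n 2^{-n} with x_n \in \{0,1\} for the binary expansion of a non-terminating x \in [0,1], the description of E in this remark reads

```latex
x \in E \quad \iff \quad \{ n \in {\Bbb N} : x_n = 1 \} \in p.
```

The symmetry property then follows because the reflection x \mapsto (2a+1)/2^n - x flips every binary digit of x beyond position n while leaving the first n digits unchanged, so it replaces the set of positions of 1s by a finite modification of its complement.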

Remark 2. It is tempting to also make the original argument, based on randomness, work, but I was unable to push it through completely.  Certainly, if one sets *E to be a (non-standardly) random collection of the J_j, then with probability infinitesimally close to 1, the (non-standard) density of *E in any standard dyadic interval (or indeed, any standard interval) is infinitesimally close to 1/2, thanks to the law of large numbers.  However, I was not able to transfer this fact to tell me anything about E; indeed, I could not even show that E was non-empty.  (Question: does there exist a non-standard subset of *[0,1] of positive (non-infinitesimal) measure which avoids all standard real numbers?  I don’t know the answer to this.) \diamond

[Update, Oct 14: some LaTeX quirks fixed.]