I’ve just uploaded to the arXiv my preprint “Perfectly packing a square by squares of nearly harmonic sidelength”. This paper concerns a variant of an old problem of Meir and Moser, who asked whether it is possible to perfectly pack squares of sidelength {1/n} for {n \geq 2} into a single square or rectangle of area {\sum_{n=2}^\infty \frac{1}{n^2} = \frac{\pi^2}{6} - 1}. (The following variant problem, also posed by Meir and Moser and discussed for instance in this MathOverflow post, is perhaps even better known: is it possible to perfectly pack rectangles of dimensions {1/n \times 1/(n+1)} for {n \geq 1} into a single square of area {\sum_{n=1}^\infty \frac{1}{n(n+1)} = 1}?) For the purposes of this paper, rectangles and squares are understood to have sides parallel to the axes, and a packing is perfect if it partitions the region being packed up to sets of measure zero. As one partial result towards these problems, it was shown by Paulhus that squares of sidelength {1/n} for {n \geq 2} can be packed (not quite perfectly) into a single rectangle of area {\frac{\pi^2}{6} - 1 + \frac{1}{1244918662}}, and rectangles of dimensions {1/n \times 1/(n+1)} for {n \geq 1} can be packed (again not quite perfectly) into a single square of area {1 + \frac{1}{10^9+1}}. (Paulhus’s paper had some gaps in it, but these were subsequently repaired by Grzegorek and Januszewski.)

Another direction in which partial progress has been made is to consider instead the problem of packing squares of sidelength {n^{-t}}, {n \geq 1} perfectly into a square or rectangle of total area {\sum_{n=1}^\infty \frac{1}{n^{2t}}}, for some fixed constant {t > 1/2} (this lower bound is needed to make the total area {\sum_{n=1}^\infty \frac{1}{n^{2t}}} finite), with the aim being to get {t} as close to {1} as possible. Prior to this paper, the most recent advance in this direction was by Januszewski and Zielonka last year, who achieved such a packing in the range {1/2 < t \leq 2/3}.

In this paper we are able to get {t} arbitrarily close to {1} (which turns out to be a “critical” value of this parameter), but at the expense of deleting the first few tiles:

Theorem 1 If {1/2 < t < 1}, and {n_0} is sufficiently large depending on {t}, then one can pack squares of sidelength {n^{-t}}, {n \geq n_0} perfectly into a square of area {\sum_{n=n_0}^\infty \frac{1}{n^{2t}}}.

As in previous works, the general strategy is to execute a greedy algorithm, which can be described somewhat incompletely as follows.

  • Step 1: Suppose that one has already managed to perfectly pack a square {S} of area {\sum_{n=n_0}^\infty \frac{1}{n^{2t}}} by squares of sidelength {n^{-t}} for {n_0 \leq n < n_1}, together with a further finite collection {{\mathcal R}} of rectangles with disjoint interiors. (Initially, we would have {n_1=n_0} and {{\mathcal R} = \{S\}}, but these parameters will change over the course of the algorithm.)
  • Step 2: Amongst all the rectangles in {{\mathcal R}}, locate the rectangle {R} of the largest width (defined as the shorter of the two sidelengths of {R}).
  • Step 3: Pack (as efficiently as one can) squares of sidelength {n^{-t}} for {n_1 \leq n < n_2} into {R} for some {n_2>n_1}, and decompose the portion of {R} not covered by this packing into rectangles {{\mathcal R}'}.
  • Step 4: Replace {n_1} by {n_2}, replace {{\mathcal R}} by {({\mathcal R} \backslash \{R\}) \cup {\mathcal R}'}, and return to Step 1.
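The bookkeeping in this loop can be sketched in code. The following is a deliberately crude toy model of my own devising (in particular the single-shelf `pack_rectangle` routine is an illustrative stand-in, far simpler than the clustered packing that the paper actually performs in Step 3):

```python
# Toy model of the greedy loop in Steps 1-4; NOT the paper's actual
# packing routine.  Rectangles are (width, height) pairs normalized so
# that width <= height; the squares have sidelength n^{-t}.

def side(n, t):
    return n ** (-t)

def pack_rectangle(R, n1, t):
    """Crude stand-in for Step 3: pack a single shelf of squares
    n1, n1+1, ... along the longer side of R, then return the new
    starting index n2 and the leftover rectangles R'."""
    w, h = R            # w <= h by convention
    x, n = 0.0, n1
    while x + side(n, t) <= h and side(n, t) <= w:
        x += side(n, t)
        n += 1
    s0 = side(n1, t)    # shelf height = sidelength of the largest square
    leftover = []
    if h - x > 0:       # strip beyond the end of the shelf
        leftover.append(tuple(sorted((h - x, w))))
    if w - s0 > 0 and x > 0:   # strip alongside the shelf
        leftover.append(tuple(sorted((x, w - s0))))
    return n, leftover

def greedy(S, n0, t, steps):
    """Run the greedy loop for a fixed number of rounds."""
    n1, rects = n0, [tuple(sorted(S))]
    for _ in range(steps):
        if not rects:
            break
        # Step 2: rectangle of largest width (shorter sidelength).
        R = max(rects, key=lambda r: r[0])
        if R[0] < side(n1, t):
            break       # stuck: no rectangle can hold the next square
        rects.remove(R)
        # Steps 3-4: pack squares into R, replace R by the leftovers.
        n1, new = pack_rectangle(R, n1, t)
        rects.extend(new)
    return n1, rects
```

Running `greedy((1.0, 1.0), 5, 0.75, 10)` packs several shelves of squares before the remaining rectangles become too narrow; the point of the discussion below is to quantify exactly when this kind of loop gets stuck.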

The main innovation of this paper is to perform Step 3 somewhat more efficiently than in previous papers.

The above algorithm can get stuck if one reaches a point where one has already packed squares of sidelength {n^{-t}} for {n_0 \leq n < n_1}, but all remaining rectangles {R} in {{\mathcal R}} have width less than {n_1^{-t}}, in which case there is no obvious way to fit in the next square. If we let {w(R)} and {h(R)} denote the width and height of these rectangles {R} (so that {w(R) \leq h(R)}), then the total area of the rectangles must be

\displaystyle  \sum_{R \in {\mathcal R}} w(R) h(R) = \sum_{n=n_0}^\infty \frac{1}{n^{2t}} - \sum_{n=n_0}^{n_1-1} \frac{1}{n^{2t}} \asymp n_1^{1-2t}

and the total perimeter {\mathrm{perim}({\mathcal R})} of these rectangles is

\displaystyle  \mathrm{perim}({\mathcal R}) = \sum_{R \in {\mathcal R}} 2(w(R)+h(R)) \asymp \sum_{R \in {\mathcal R}} h(R).

Since each {w(R)} is at most {\sup_{R' \in {\mathcal R}} w(R')}, we thus have

\displaystyle  n_1^{1-2t} \ll \mathrm{perim}({\mathcal R}) \sup_{R \in {\mathcal R}} w(R)

and so to ensure that there is at least one rectangle {R} with {w(R) \geq n_1^{-t}} it would be enough to have the perimeter bound

\displaystyle  \mathrm{perim}({\mathcal R}) \leq c n_1^{1-t}

for a sufficiently small constant {c>0}. It is here that we now see the critical nature of the exponent {t=1}: for {t<1}, the amount of perimeter we are permitted to have in the remaining rectangles increases as one progresses with the packing, but for {t=1} the amount of perimeter one is “budgeted” for stays constant (and for {t>1} the situation is even worse, in that the remaining rectangles {{\mathcal R}} should steadily decrease in total perimeter).

In comparison, the perimeter of the squares that one has already packed is equal to

\displaystyle  \sum_{n=n_0}^{n_1-1} 4 n^{-t}

which is comparable to {n_1^{1-t}} for {n_1} large (with the constants blowing up as {t} approaches the critical value of {1}). In previous algorithms, the total perimeter of the remainder rectangles {{\mathcal R}} was basically comparable to the perimeter of the squares already packed, and this is the main reason why the results only worked when {t} was sufficiently far away from {1}. In my paper, I am able to make the perimeter of {{\mathcal R}} significantly smaller than the perimeter of the squares already packed. This is done by grouping those squares into lattice-like clusters of about {M^2} squares arranged in an {M \times M} pattern, and sliding the squares in each cluster together to almost entirely eliminate the wasted space between them. This leaves only the space around the boundary of the cluster as the main source of residual perimeter, which is comparable to {M n_1^{-t}} per cluster, as compared to the total perimeter of the squares in the cluster, which is comparable to {M^2 n_1^{-t}}. This strategy is perhaps easiest to illustrate with a picture, in which a {3 \times 4} array of squares {S_{i,j}} of slowly decreasing sidelength are packed together with relatively little wasted space:
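The arithmetic behind this gain can also be checked with a toy computation (the specific values of {M}, {n} and {t} below are my own illustrative choices): an {M \times M} cluster of squares of consecutive sidelengths close to {n^{-t}}, slid together, fits in a bounding box of perimeter comparable to {M n^{-t}}, while the squares inside contribute total perimeter comparable to {M^2 n^{-t}}, so the residual perimeter is smaller by a factor of about {M}.

```python
# Toy illustration of the cluster bookkeeping (parameters are my own
# choices, not the paper's): an M x M cluster of squares with
# consecutive sidelengths (n+k)^{-t}.

def cluster_perimeters(M, n, t):
    sides = [(n + k) ** (-t) for k in range(M * M)]
    total_square_perim = 4 * sum(sides)   # comparable to M^2 * n^{-t}
    # Slid together into an M x M grid, the cluster occupies a box of
    # sidelength roughly M * n^{-t}:
    box_perim = 4 * M * n ** (-t)         # comparable to M * n^{-t}
    return total_square_perim, box_perim

for M in (10, 100):
    sq, box = cluster_perimeters(M, 10**6, 0.9)
    print(M, box / sq)   # residual-to-packed perimeter ratio, about 1/M
```

Doubling {M} thus halves the fraction of perimeter "wasted" on the boundary of the cluster, which is what allows the budget {c n_1^{1-t}} to be respected even as {t} approaches {1}.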

By choosing the parameter {M} suitably large (and taking {n_0} sufficiently large depending on {M}), one can then prove the theorem. (In order to do some technical bookkeeping, and to allow one to close an induction in the verification of the algorithm’s correctness, it is convenient to replace the perimeter {\sum_{R \in {\mathcal R}} 2(w(R)+h(R))} by a slightly weighted variant {\sum_{R \in {\mathcal R}} w(R)^\delta h(R)} for a small exponent {\delta}, but this is an artificial device that somewhat obscures the main ideas.)