You are currently browsing the category archive for the ‘paper’ category.

Apoorva Khare and I have just uploaded to the arXiv our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“. This paper explores the relationship between positive definiteness of Hermitian matrices, and entrywise operations on these matrices. The starting point for this theory is the Schur product theorem, which asserts that if ${A = (a_{ij})_{1 \leq i,j \leq N}}$ and ${B = (b_{ij})_{1 \leq i,j \leq N}}$ are two ${N \times N}$ Hermitian matrices that are positive semi-definite, then their Hadamard product

$\displaystyle A \circ B := (a_{ij} b_{ij})_{1 \leq i,j \leq N}$

is also positive semi-definite. (One should caution that the Hadamard product is not the same as the usual matrix product.) To prove this theorem, first observe that the claim is easy when ${A = {\bf u} {\bf u}^*}$ and ${B = {\bf v} {\bf v}^*}$ are rank one positive semi-definite matrices, since in this case ${A \circ B = ({\bf u} \circ {\bf v}) ({\bf u} \circ {\bf v})^*}$ is also a rank one positive semi-definite matrix. The general case then follows by noting from the spectral theorem that a general positive semi-definite matrix can be expressed as a non-negative linear combination of rank one positive semi-definite matrices, and using the bilinearity of the Hadamard product and the fact that the set of positive semi-definite matrices form a convex cone. A modification of this argument also lets one replace “positive semi-definite” by “positive definite” in the statement of the Schur product theorem.
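The rank one computation above is easy to check numerically. Here is a minimal numpy sketch (with randomly generated test matrices, chosen purely for illustration) verifying both the rank one identity and the general statement:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

def random_psd(n):
    # C C^* is always Hermitian positive semi-definite
    C = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return C @ C.conj().T

# rank one case: (u u^*) o (v v^*) = (u o v)(u o v)^*
u = rng.normal(size=N) + 1j * rng.normal(size=N)
v = rng.normal(size=N) + 1j * rng.normal(size=N)
lhs = np.outer(u, u.conj()) * np.outer(v, v.conj())   # * is the entrywise product
rhs = np.outer(u * v, (u * v).conj())
assert np.allclose(lhs, rhs)

# general case: the Hadamard product of PSD matrices has non-negative spectrum
A, B = random_psd(N), random_psd(N)
assert np.linalg.eigvalsh(A * B).min() >= -1e-9
```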

One corollary of the Schur product theorem is that any polynomial ${P(z) = c_0 + c_1 z + \dots + c_d z^d}$ with non-negative coefficients ${c_n \geq 0}$ is entrywise positivity preserving on the space ${{\mathbb P}_N({\bf C})}$ of ${N \times N}$ positive semi-definite Hermitian matrices, in the sense that for any matrix ${A = (a_{ij})_{1 \leq i,j \leq N}}$ in ${{\mathbb P}_N({\bf C})}$, the entrywise application

$\displaystyle P[A] := (P(a_{ij}))_{1 \leq i,j \leq N}$

of ${P}$ to ${A}$ is also positive semi-definite. (As before, one should caution that ${P[A]}$ is not the application ${P(A)}$ of ${P}$ to ${A}$ by the usual functional calculus.) Indeed, one can expand

$\displaystyle P[A] = c_0 A^{\circ 0} + c_1 A^{\circ 1} + \dots + c_d A^{\circ d},$

where ${A^{\circ i}}$ is the Hadamard product of ${i}$ copies of ${A}$, and the claim now follows from the Schur product theorem and the fact that ${{\mathbb P}_N({\bf C})}$ is a convex cone.
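As an illustration of this corollary, the following sketch (with an arbitrary choice of non-negative coefficients, not taken from the paper) applies a polynomial entrywise to a random positive semi-definite matrix and confirms the output stays positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
C = rng.normal(size=(N, N))
A = C @ C.T                    # a real positive semi-definite test matrix

# P(z) = 1 + 2z + 3z^2 has non-negative coefficients c_0, c_1, c_2
coeffs = [1.0, 2.0, 3.0]

# entrywise application P[A]; note A**k is the Hadamard power A^{o k},
# not the matrix power, and A**0 is the all-ones matrix
PA = sum(c * A**k for k, c in enumerate(coeffs))

assert np.linalg.eigvalsh(PA).min() >= -1e-9   # P[A] is again PSD
```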

A slight variant of this argument, already observed by Pólya and Szegö in 1925, shows that if ${I}$ is any subset of ${{\bf C}}$ and

$\displaystyle f(z) = \sum_{n=0}^\infty c_n z^n \ \ \ \ \ (1)$

is a power series with non-negative coefficients ${c_n \geq 0}$ that is absolutely and uniformly convergent on ${I}$, then ${f}$ will be entrywise positivity preserving on the set ${{\mathbb P}_N(I)}$ of positive definite matrices with entries in ${I}$. (In the case that ${I}$ is of the form ${I = [0,\rho]}$, such functions are precisely the absolutely monotonic functions on ${I}$.)

In the work of Schoenberg and of Rudin, we have a converse: if ${f: (-1,1) \rightarrow {\bf C}}$ is a function that is entrywise positivity preserving on ${{\mathbb P}_N((-1,1))}$ for all ${N}$, then it must be of the form (1) with ${c_n \geq 0}$. Variants of this result, with ${(-1,1)}$ replaced by other domains, appear in the work of Horn, Vasudeva, and Guillot-Khare-Rajaratnam.

This gives a satisfactory classification of functions ${f}$ that are entrywise positivity preservers in all dimensions ${N}$ simultaneously. However, the question remains as to what happens if one fixes the dimension ${N}$, in which case one may have a larger class of entrywise positivity preservers. For instance, in the trivial case ${N=1}$, a function ${f}$ is entrywise positivity preserving on ${{\mathbb P}_1((0,\rho))}$ if and only if it is non-negative on ${(0,\rho)}$. For higher ${N}$, there is a necessary condition of Horn (refined slightly by Guillot-Khare-Rajaratnam) which asserts (at least in the case of smooth ${f}$) that all derivatives of ${f}$ at zero up to ${(N-1)^{th}}$ order must be non-negative in order for ${f}$ to be entrywise positivity preserving on ${{\mathbb P}_N((0,\rho))}$ for some ${0 < \rho < \infty}$. In particular, if ${f}$ is of the form (1), then ${c_0,\dots,c_{N-1}}$ must be non-negative. In fact, a stronger assertion can be made, namely that the first ${N}$ non-zero coefficients in (1) (if they exist) must be positive, or equivalently any negative term in (1) must be preceded (though not necessarily immediately) by at least ${N}$ positive terms. If ${f}$ of the form (1) is entrywise positivity preserving on the larger set ${{\mathbb P}_N((0,+\infty))}$, one can furthermore show that any negative term in (1) must also be followed (though not necessarily immediately) by at least ${N}$ positive terms.

The main result of this paper is that these sign conditions are the only constraints for entrywise positivity preserving power series. More precisely:

Theorem 1 For each ${n}$, let ${\epsilon_n \in \{-1,0,+1\}}$ be a sign.

• Suppose that any negative sign ${\epsilon_M = -1}$ is preceded by at least ${N}$ positive signs (thus there exist ${0 \leq n_0 < \dots < n_{N-1}< M}$ with ${\epsilon_{n_0} = \dots = \epsilon_{n_{N-1}} = +1}$). Then, for any ${0 < \rho < \infty}$, there exists a convergent power series (1) on ${(0,\rho)}$, with each ${c_n}$ having the sign of ${\epsilon_n}$, which is entrywise positivity preserving on ${{\mathbb P}_N((0,\rho))}$.
• Suppose in addition that any negative sign ${\epsilon_M = -1}$ is followed by at least ${N}$ positive signs (thus there exist ${M < n_N < \dots < n_{2N-1}}$ with ${\epsilon_{n_N} = \dots = \epsilon_{n_{2N-1}} = +1}$). Then there exists a convergent power series (1) on ${(0,+\infty)}$, with each ${c_n}$ having the sign of ${\epsilon_n}$, which is entrywise positivity preserving on ${{\mathbb P}_N((0,+\infty))}$.

One can ask the same question with ${(0,\rho)}$ or ${(0,+\infty)}$ replaced by other domains such as ${(-\rho,\rho)}$, or the complex disk ${D(0,\rho)}$, but it turns out that there are far fewer entrywise positivity preserving functions in those cases, basically because of the non-trivial zeroes of Schur polynomials in these ranges; see the paper for further discussion. We also have some quantitative bounds on how negative some of the coefficients can be compared to the positive coefficients, but they are a bit technical to state here.

The heart of the proofs of these results is an analysis of the determinants ${\mathrm{det}\, P[ {\bf u} {\bf u}^* ]}$ of polynomials ${P}$ applied entrywise to rank one matrices ${{\bf u} {\bf u}^*}$; the positivity of these determinants can be used (together with a continuity argument) to establish the positive definiteness of ${P[{\bf u} {\bf u}^*]}$ for various ranges of ${P}$ and ${{\bf u}}$. Using the Cauchy-Binet formula, one can rewrite such determinants as linear combinations of squares of magnitudes of generalised Vandermonde determinants

$\displaystyle \mathrm{det}( u_i^{n_j} )_{1 \leq i,j \leq N},$

where ${{\bf u} = (u_1,\dots,u_N)}$ and the signs of the coefficients in the linear combination are determined by the signs of the coefficients of ${P}$. The task is then to find upper and lower bounds for the magnitudes of such generalised Vandermonde determinants. These determinants oscillate in sign, which makes the problem look difficult; however, an algebraic miracle intervenes, namely the factorisation

$\displaystyle \mathrm{det}( u_i^{n_j} )_{1 \leq i,j \leq N} = \pm V({\bf u}) s_\lambda({\bf u})$

of the generalised Vandermonde determinant into the ordinary Vandermonde determinant

$\displaystyle V({\bf u}) = \prod_{1 \leq i < j \leq N} (u_j - u_i)$

and a Schur polynomial ${s_\lambda}$ applied to ${{\bf u}}$, where the weight ${\lambda}$ of the Schur polynomial is determined by ${n_1,\dots,n_N}$ in a simple fashion. The problem then boils down to obtaining upper and lower bounds for these Schur polynomials. Because we are restricting attention to matrices taking values in ${(0,\rho)}$ or ${(0,+\infty)}$, the entries of ${{\bf u}}$ can be taken to be non-negative. One can then take advantage of the total positivity of the Schur polynomials to compare these polynomials with a monomial, at which point one can obtain good criteria for ${P[A]}$ to be positive definite when ${A}$ is a rank one positive definite matrix ${A = {\bf u} {\bf u}^*}$.
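This factorisation is easy to test in a small case. The sketch below (with the arbitrary choices ${N=3}$ and exponents ${(0,2,5)}$, so that ${\lambda = (3,1)}$) computes the Schur polynomial independently via the Jacobi-Trudi identity ${s_{(3,1)} = h_3 h_1 - h_4 h_0}$ in complete homogeneous symmetric polynomials, and checks it against the generalised Vandermonde determinant:

```python
import numpy as np
from itertools import combinations_with_replacement
from math import prod

u = np.array([1.0, 2.0, 3.0])
n = [0, 2, 5]        # strictly increasing exponents n_1 < n_2 < n_3

M = np.array([[x ** e for e in n] for x in u])
V = (u[1] - u[0]) * (u[2] - u[0]) * (u[2] - u[1])   # ordinary Vandermonde determinant

def h(k, xs):
    # complete homogeneous symmetric polynomial h_k: sum over degree-k monomials
    if k < 0:
        return 0.0
    return float(sum(prod(c) for c in combinations_with_replacement(tuple(xs), k)))

# here lambda = (3, 1); Jacobi-Trudi gives s_{(3,1)} = h_3*h_1 - h_4*h_0
s = h(3, u) * h(1, u) - h(4, u) * h(0, u)

# |generalised Vandermonde determinant| = |V(u)| * s_lambda(u)
assert abs(abs(np.linalg.det(M)) - abs(V) * s) < 1e-6
```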

If we allow the exponents ${n_1,\dots,n_N}$ to be real numbers rather than integers (thus replacing polynomials or power series by Puiseux series or Hahn series), then we lose the above algebraic miracle, but we can replace it with a geometric miracle, namely the Harish-Chandra-Itzykson-Zuber identity, which I discussed in this previous blog post. This factors the above generalised Vandermonde determinant as the product of the ordinary Vandermonde determinant and an integral of a positive quantity over the orthogonal group, which one can again compare with a monomial after some fairly elementary estimates.

It remains to understand what happens for more general positive semi-definite matrices ${A}$. Here we use a trick of FitzGerald and Horn to amplify the rank one case to the general case, by expressing a general positive semi-definite matrix ${A}$ as a linear combination of a rank one matrix ${{\bf u} {\bf u}^*}$ and another positive semi-definite matrix ${B}$ that vanishes on the last row and column (and is thus effectively a positive semi-definite ${(N-1) \times (N-1)}$ matrix). Using the fundamental theorem of calculus to continuously deform the rank one matrix ${{\bf u} {\bf u}^*}$ to ${A}$ in the direction ${B}$, one can then obtain positivity results for ${P[A]}$ from positivity results for ${P[{\bf u} {\bf u}^*]}$ combined with an induction hypothesis on ${N}$.
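The decomposition step in this trick can be made concrete: if ${A}$ is positive semi-definite with ${a_{NN} > 0}$, taking ${{\bf u}}$ to be the last column of ${A}$ divided by ${\sqrt{a_{NN}}}$ makes ${B = A - {\bf u}{\bf u}^*}$ positive semi-definite with vanishing last row and column (a Schur complement computation). A quick numerical check of this, with a random test matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
C = rng.normal(size=(N, N))
A = C @ C.T                         # positive semi-definite; generically a_NN > 0

u = A[:, -1] / np.sqrt(A[-1, -1])   # rescaled last column of A
B = A - np.outer(u, u)

# B vanishes on the last row and column...
assert np.allclose(B[-1, :], 0) and np.allclose(B[:, -1], 0)
# ...and is still positive semi-definite
assert np.linalg.eigvalsh(B).min() >= -1e-9
```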

Joni Teräväinen and I have just uploaded to the arXiv our paper “The structure of logarithmically averaged correlations of multiplicative functions, with applications to the Chowla and Elliott conjectures“, submitted to Duke Mathematical Journal. This paper builds upon my previous paper in which I introduced an “entropy decrement method” to prove the two-point (logarithmically averaged) cases of the Chowla and Elliott conjectures. A bit more specifically, I showed that

$\displaystyle \lim_{m \rightarrow \infty} \frac{1}{\log \omega_m} \sum_{x_m/\omega_m \leq n \leq x_m} \frac{g_0(n+h_0) g_1(n+h_1)}{n} = 0$

whenever ${1 \leq \omega_m \leq x_m}$ were sequences going to infinity, ${h_0,h_1}$ were distinct integers, and ${g_0,g_1: {\bf N} \rightarrow {\bf C}}$ were ${1}$-bounded multiplicative functions which were non-pretentious in the sense that

$\displaystyle \liminf_{X \rightarrow \infty} \inf_{|t_j| \leq X} \sum_{p \leq X} \frac{1-\mathrm{Re}( g_j(p) \overline{\chi_j}(p) p^{it_j})}{p} = \infty \ \ \ \ \ (1)$

for all Dirichlet characters ${\chi_j}$ and for ${j=0,1}$. Thus, for instance, one had the logarithmically averaged two-point Chowla conjecture

$\displaystyle \sum_{n \leq x} \frac{\lambda(n) \lambda(n+h)}{n} = o(\log x)$

for any fixed non-zero ${h}$, where ${\lambda}$ was the Liouville function.
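One can get a feel for this estimate numerically. The following sketch (cutoff ${x = 10^5}$ and shift ${h=1}$, both chosen arbitrarily) computes the Liouville function by a smallest-prime-factor sieve and evaluates the normalised logarithmic correlation, which the theorem asserts tends to zero as ${x \rightarrow \infty}$:

```python
import math

X = 10**5
# smallest-prime-factor sieve, then lambda(n) = -lambda(n / spf(n))
spf = list(range(X + 2))
for p in range(2, int(math.isqrt(X + 1)) + 1):
    if spf[p] == p:
        for m in range(p * p, X + 2, p):
            if spf[m] == m:
                spf[m] = p
lam = [0, 1] + [0] * X
for n in range(2, X + 2):
    lam[n] = -lam[n // spf[n]]

h = 1
corr = sum(lam[n] * lam[n + h] / n for n in range(1, X + 1))
print(abs(corr) / math.log(X))
```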

One would certainly like to extend these results to higher order correlations than the two-point correlations. This looks to be difficult (though perhaps not completely impossible if one allows for logarithmic averaging): in a previous paper I showed that achieving this in the context of the Liouville function would be equivalent to resolving the logarithmically averaged Sarnak conjecture, as well as establishing logarithmically averaged local Gowers uniformity of the Liouville function. However, in this paper we are able to avoid having to resolve these difficult conjectures to obtain partial results towards the (logarithmically averaged) Chowla and Elliott conjectures. For the Chowla conjecture, we can obtain all odd order correlations, in that

$\displaystyle \sum_{n \leq x} \frac{\lambda(n+h_1) \dots \lambda(n+h_k)}{n} = o(\log x) \ \ \ \ \ (2)$

for all odd ${k}$ and all integers ${h_1,\dots,h_k}$ (which, in the odd order case, are no longer required to be distinct). (Superficially, this looks like we have resolved “half of the cases” of the logarithmically averaged Chowla conjecture; but it seems the odd order correlations are significantly easier than the even order ones. For instance, because of the Katai-Bourgain-Sarnak-Ziegler criterion, one can basically deduce the odd order cases of (2) from the even order cases, after allowing for some dilations in the argument ${n}$.)

For the more general Elliott conjecture, we can show that

$\displaystyle \lim_{m \rightarrow \infty} \frac{1}{\log \omega_m} \sum_{x_m/\omega_m \leq n \leq x_m} \frac{g_1(n+h_1) \dots g_k(n+h_k)}{n} = 0$

for any ${k}$, any integers ${h_1,\dots,h_k}$ and any bounded multiplicative functions ${g_1,\dots,g_k}$, unless the product ${g_1 \dots g_k}$ weakly pretends to be a Dirichlet character ${\chi}$ in the sense that

$\displaystyle \sum_{p \leq X} \frac{1 - \mathrm{Re}( g_1 \dots g_k(p) \overline{\chi}(p))}{p} = o(\log\log X).$

This can be seen to imply (2) as a special case. Even when the product ${g_1 \dots g_k}$ does weakly pretend to be a Dirichlet character ${\chi}$, we can still say something: if the limits

$\displaystyle f(a) := \lim_{m \rightarrow \infty} \frac{1}{\log \omega_m} \sum_{x_m/\omega_m \leq n \leq x_m} \frac{g_1(n+ah_1) \dots g_k(n+ah_k)}{n}$

exist for each ${a \in {\bf Z}}$ (which can be guaranteed if we pass to a suitable subsequence), then ${f}$ is the uniform limit of periodic functions ${f_i}$, each of which is ${\chi}$-isotypic in the sense that ${f_i(ab) = f_i(a) \chi(b)}$ whenever ${a,b}$ are integers with ${b}$ coprime to the periods of ${\chi}$ and ${f_i}$. This does not pin down the value of any single correlation ${f(a)}$, but does put significant constraints on how these correlations may vary with ${a}$.

Among other things, this allows us to show that all ${16}$ possible length four sign patterns ${(\lambda(n+1),\dots,\lambda(n+4)) \in \{-1,+1\}^4}$ of the Liouville function occur with positive density, and all ${65}$ possible length four sign patterns ${(\mu(n+1),\dots,\mu(n+4)) \in \{-1,0,+1\}^4 \backslash \{-1,+1\}^4}$ occur with the conjectured logarithmic density. (In a previous paper with Matomaki and Radziwill, we obtained comparable results for length three patterns of Liouville and length two patterns of Möbius.)
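The sign pattern count is easy to verify empirically for small ranges; this sketch (with the arbitrary range ${10^5}$) tabulates the length four Liouville patterns and finds that all ${16}$ already occur:

```python
import math
from collections import Counter

X = 10**5
# smallest-prime-factor sieve for the Liouville function
spf = list(range(X + 6))
for p in range(2, int(math.isqrt(X + 5)) + 1):
    if spf[p] == p:
        for m in range(p * p, X + 6, p):
            if spf[m] == m:
                spf[m] = p
lam = [0, 1] + [0] * (X + 4)
for n in range(2, X + 6):
    lam[n] = -lam[n // spf[n]]

# tabulate consecutive length-four sign patterns of lambda
patterns = Counter(tuple(lam[n + j] for j in range(4)) for n in range(1, X + 1))
assert len(patterns) == 16            # every pattern in {-1,+1}^4 occurs
print(min(patterns.values()) / X)     # empirical density of the rarest pattern
```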

To describe the argument, let us focus for simplicity on the case of the Liouville correlations

$\displaystyle f(a) := \lim_{X \rightarrow \infty} \frac{1}{\log X} \sum_{n \leq X} \frac{\lambda(n) \lambda(n+a) \dots \lambda(n+(k-1)a)}{n}, \ \ \ \ \ (3)$

assuming for the sake of discussion that all limits exist. (In the paper, we instead use the device of generalised limits, as discussed in this previous post.) The idea is to combine two rather different ways to control this function ${f}$. The first proceeds by the entropy decrement method mentioned earlier, which roughly speaking works as follows. Firstly, we pick a prime ${p}$ and observe that ${\lambda(pn)=-\lambda(n)}$ for any ${n}$, which allows us to rewrite (3) as

$\displaystyle (-1)^k f(a) = \lim_{X \rightarrow \infty} \frac{1}{\log X}$

$\displaystyle \sum_{n \leq X} \frac{\lambda(pn) \lambda(pn+ap) \dots \lambda(pn+(k-1)ap)}{n}.$

Making the change of variables ${n' = pn}$, we obtain

$\displaystyle (-1)^k f(a) = \lim_{X \rightarrow \infty} \frac{1}{\log X}$

$\displaystyle \sum_{n' \leq pX} \frac{\lambda(n') \lambda(n'+ap) \dots \lambda(n'+(k-1)ap)}{n'} p 1_{p|n'}.$

The difference between ${n' \leq pX}$ and ${n' \leq X}$ is negligible in the limit (here is where we crucially rely on the log-averaging), hence

$\displaystyle (-1)^k f(a) = \lim_{X \rightarrow \infty} \frac{1}{\log X} \sum_{n \leq X} \frac{\lambda(n) \lambda(n+ap) \dots \lambda(n+(k-1)ap)}{n} p 1_{p|n}$

and thus by (3) we have

$\displaystyle (-1)^k f(a) = f(ap) + \lim_{X \rightarrow \infty} \frac{1}{\log X}$

$\displaystyle \sum_{n \leq X} \frac{\lambda(n) \lambda(n+ap) \dots \lambda(n+(k-1)ap)}{n} (p 1_{p|n}-1).$

The entropy decrement argument can be used to show that the latter limit is small for most ${p}$ (roughly speaking, this is because the factors ${p 1_{p|n}-1}$ behave like independent random variables as ${p}$ varies, so that concentration of measure results such as Hoeffding’s inequality can apply, after using entropy inequalities to somewhat decouple these random variables from the ${\lambda}$ factors). We thus obtain the approximate isotypy property

$\displaystyle (-1)^k f(a) \approx f(ap) \ \ \ \ \ (4)$

for most ${a}$ and ${p}$.

On the other hand, by the Furstenberg correspondence principle (as discussed in these previous posts), it is possible to express ${f(a)}$ as a multiple correlation

$\displaystyle f(a) = \int_X g(x) g(T^a x) \dots g(T^{(k-1)a} x)\ d\mu(x)$

for some probability space ${(X,\mu)}$ equipped with a measure-preserving invertible map ${T: X \rightarrow X}$. Using results of Bergelson-Host-Kra, Leibman, and Le, this allows us to obtain a decomposition of the form

$\displaystyle f(a) = f_1(a) + f_2(a) \ \ \ \ \ (5)$

where ${f_1}$ is a nilsequence, and ${f_2}$ goes to zero in density (even along the primes, or constant multiples of the primes). The original work of Bergelson-Host-Kra required ergodicity on ${X}$, which is very definitely a hypothesis that is not available here; however, the later work of Leibman removed this hypothesis, and the work of Le refined the control on ${f_1}$ so that one still has good control when restricting to primes, or constant multiples of primes.

Ignoring the small error ${f_2(a)}$, we can now combine (4) and (5) to conclude that

$\displaystyle f(a) \approx (-1)^k f_1(ap).$

Using the equidistribution theory of nilsequences (as developed in this previous paper of Ben Green and myself), one can break up ${f_1}$ further into a periodic piece ${f_0}$ and an “irrational” or “minor arc” piece ${f_3}$. The contribution of the minor arc piece ${f_3}$ can be shown to mostly cancel itself out after dilating by primes ${p}$ and averaging, thanks to Vinogradov-type bilinear sum estimates (transferred to the primes). So we end up with

$\displaystyle f(a) \approx (-1)^k f_0(ap),$

which already shows (heuristically, at least) the claim that ${f}$ can be approximated by periodic functions ${f_0}$ which are isotypic in the sense that

$\displaystyle f_0(a) \approx (-1)^k f_0(ap).$

But if ${k}$ is odd, one can use Dirichlet’s theorem on primes in arithmetic progressions to restrict to primes ${p}$ that are ${1}$ modulo the period of ${f_0}$, and conclude now that ${f_0}$ vanishes identically, which (heuristically, at least) gives (2).

The same sort of argument works to give the more general bounds on correlations of bounded multiplicative functions. But for the specific task of proving (2), we initially used a slightly different argument that avoids using the ergodic theory machinery of Bergelson-Host-Kra, Leibman, and Le, but replaces it instead with the Gowers uniformity norm theory used to count linear equations in primes. Basically, by averaging (4) in ${p}$ using the “${W}$-trick”, as well as known facts about the Gowers uniformity of the von Mangoldt function, one can obtain an approximation of the form

$\displaystyle (-1)^k f(a) \approx {\bf E}_{b: (b,W)=1} f(ab)$

where ${b}$ ranges over a large range of integers coprime to some primorial ${W = \prod_{p \leq w} p}$. On the other hand, by iterating (4) we have

$\displaystyle f(a) \approx f(apq)$

for most semiprimes ${pq}$, and by again averaging over semiprimes one can obtain an approximation of the form

$\displaystyle f(a) \approx {\bf E}_{b: (b,W)=1} f(ab).$

For ${k}$ odd, one can combine the two approximations to conclude that ${f(a)=0}$. (This argument is not given in the current paper, but we plan to detail it in a subsequent one.)

I’ve just uploaded to the arXiv my paper “On the universality of the incompressible Euler equation on compact manifolds“, submitted to Discrete and Continuous Dynamical Systems. This is a variant of my recent paper on the universality of potential well dynamics, but instead of trying to embed dynamical systems into a potential well ${\partial_{tt} u = -\nabla V(u)}$, here we try to embed dynamical systems into the incompressible Euler equations

$\displaystyle \partial_t u + \nabla_u u = - \mathrm{grad}_g p \ \ \ \ \ (1)$

$\displaystyle \mathrm{div}_g u = 0$

on a Riemannian manifold ${(M,g)}$. (One is particularly interested in the case of flat manifolds ${M}$, such as ${{\bf R}^3}$ or ${({\bf R}/{\bf Z})^3}$, but for the main result of this paper it is essential that one is permitted to consider curved manifolds.) This system, first studied by Ebin and Marsden, is the natural generalisation of the usual incompressible Euler equations to curved space; it can be viewed as the formal geodesic flow equation on the infinite-dimensional manifold of volume-preserving diffeomorphisms on ${M}$ (see this previous post for a discussion of this in the flat space case).

The Euler equations can be viewed as a nonlinear equation in which the nonlinearity is a quadratic function of the velocity field ${u}$. It is thus natural to compare the Euler equations with quadratic ODEs of the form

$\displaystyle \partial_t y = B(y,y) \ \ \ \ \ (2)$

where ${y: {\bf R} \rightarrow {\bf R}^n}$ is the unknown solution, and ${B: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}$ is a bilinear map, which we may assume without loss of generality to be symmetric. One can ask whether such an ODE may be linearly embedded into the Euler equations on some Riemannian manifold ${(M,g)}$, which means that there is an injective linear map ${U: {\bf R}^n \rightarrow \Gamma(TM)}$ from ${{\bf R}^n}$ to smooth vector fields on ${M}$, as well as a bilinear map ${P: {\bf R}^n \times {\bf R}^n \rightarrow C^\infty(M)}$ to smooth scalar fields on ${M}$, such that the map ${y \mapsto (U(y), P(y,y))}$ takes solutions to (2) to solutions to (1), or equivalently that

$\displaystyle U(B(y,y)) + \nabla_{U(y)} U(y) = - \mathrm{grad}_g P(y,y)$

$\displaystyle \mathrm{div}_g U(y) = 0$

for all ${y \in {\bf R}^n}$.

For simplicity let us restrict ${M}$ to be compact. There is an obvious necessary condition for this embeddability to occur, which comes from the energy conservation law for the Euler equations; unpacking everything, this implies that the bilinear form ${B}$ in (2) has to obey a cancellation condition

$\displaystyle \langle B(y,y), y \rangle = 0 \ \ \ \ \ (3)$

for some positive definite inner product ${\langle \cdot, \cdot \rangle: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}}$ on ${{\bf R}^n}$. The main result of the paper is the converse to this statement: if ${B}$ is a symmetric bilinear form obeying a cancellation condition (3), then it is possible to embed the equations (2) into the Euler equations (1) on some Riemannian manifold ${(M,g)}$; the catch is that this manifold will depend on the form ${B}$ and on the dimension ${n}$ (in fact in the construction I have, ${M}$ is given explicitly as ${SO(n) \times ({\bf R}/{\bf Z})^{n+1}}$, with a funny metric on it that depends on ${B}$).
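The role of the cancellation condition (3) can be seen in a toy numerical experiment: for any ${B}$ obeying ${\langle B(y,y), y\rangle = 0}$, the flow (2) conserves ${|y|^2}$. The sketch below uses a rigid-body-type bilinear form (an arbitrary example obeying (3), not the construction from the paper) and a standard Runge-Kutta integrator:

```python
import numpy as np

def B(y):
    # a quadratic nonlinearity with <B(y,y), y> = 0 (the coefficients 1, -2, 1
    # sum to zero), loosely modelled on the Euler rigid body equations
    return np.array([y[1] * y[2], -2.0 * y[0] * y[2], y[0] * y[1]])

def rk4_step(y, dt):
    # one classical fourth-order Runge-Kutta step for y' = B(y, y)
    k1 = B(y)
    k2 = B(y + 0.5 * dt * k1)
    k3 = B(y + 0.5 * dt * k2)
    k4 = B(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.3, 0.4, 0.5])
E0 = np.dot(y, y)
for _ in range(10_000):
    y = rk4_step(y, 1e-3)

# |y|^2 is conserved by the flow (up to integration error), mirroring
# energy conservation for the Euler equations
assert abs(np.dot(y, y) - E0) < 1e-6
```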

As a consequence, any finite dimensional portion of the usual “dyadic shell models” used as simplified toy models of the Euler equation can actually be embedded into a genuine Euler equation, albeit on a high-dimensional and curved manifold. This includes portions of the self-similar “machine” I used in a previous paper to establish finite time blowup for an averaged version of the Navier-Stokes (or Euler) equations. Unfortunately, the result in this paper does not apply to infinite-dimensional ODE, so I cannot yet establish finite time blowup for the Euler equations on a (well-chosen) manifold. It does not seem so far beyond the realm of possibility, though, that this could be done in the relatively near future. In particular, the result here suggests that one could construct something resembling a universal Turing machine within an Euler flow on a manifold, which was one ingredient I would need to engineer such a finite time blowup.

The proof of the main theorem proceeds by an “elimination of variables” strategy that was used in some of my previous papers in this area, though in this particular case the Nash embedding theorem (or variants thereof) is not required. The first step is to lessen the dependence on the metric ${g}$ by partially reformulating the Euler equations (1) in terms of the covelocity ${g \cdot u}$ (which is a ${1}$-form) instead of the velocity ${u}$. Using the freedom to modify the dimension of the underlying manifold ${M}$, one can also decouple the metric ${g}$ from the volume form that is used to obtain the divergence-free condition. At this point the metric can be eliminated, with a certain positive definiteness condition between the velocity and covelocity taking its place. After a substantial amount of trial and error (motivated by some “two-and-a-half-dimensional” reductions of the three-dimensional Euler equations, and also by playing around with a number of variants of the classic “separation of variables” strategy), I eventually found an ansatz for the velocity and covelocity that automatically solved most of the components of the Euler equations (as well as most of the positive definiteness requirements), as long as one could find a number of scalar fields that obeyed a certain nonlinear system of transport equations, and also obeyed a positive definiteness condition. Here I was stuck for a bit because the system I ended up with was overdetermined: more equations than unknowns. After trying a number of special cases I eventually found a solution to the transport system on the sphere, except that the scalar functions sometimes degenerated and so the positive definiteness property I wanted was only obeyed with positive semi-definiteness. I tried for some time to perturb this example into a strictly positive definite solution before eventually working out that this was not possible.
Finally I had the brainwave to lift the solution from the sphere to an even more symmetric space, and this quickly led to the final solution of the problem, using the special orthogonal group rather than the sphere as the underlying domain. The solution ended up being rather simple in form, but it is still somewhat miraculous to me that it exists at all; in retrospect, given the overdetermined nature of the problem, relying on a large amount of symmetry to cut down the number of equations was basically the only hope.

I’ve just uploaded to the arXiv my paper “On the universality of potential well dynamics“, submitted to Dynamics of PDE. This is a spinoff from my previous paper on blowup of nonlinear wave equations, inspired by some conversations with Sungjin Oh. Here we focus mainly on the zero-dimensional case of such equations, namely the potential well equation

$\displaystyle \partial_{tt} u = - (\nabla F)(u) \ \ \ \ \ (1)$

for a particle ${u: {\bf R} \rightarrow {\bf R}^m}$ trapped in a potential well with potential ${F: {\bf R}^m \rightarrow {\bf R}}$, with ${F(z) \rightarrow +\infty}$ as ${z \rightarrow \infty}$. This ODE always admits global solutions from arbitrary initial positions ${u(0)}$ and initial velocities ${\partial_t u(0)}$, thanks to conservation of the Hamiltonian ${\frac{1}{2} |\partial_t u|^2 + F(u)}$. As this Hamiltonian is coercive (in that its level sets are compact), solutions to this equation are always almost periodic. On the other hand, as can already be seen using the harmonic oscillator ${\partial_{tt} u = - k^2 u}$ (and direct sums of this system), this equation can generate periodic solutions, as well as quasiperiodic solutions.
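A quick numerical illustration of this conservation-based global existence (with the arbitrary coercive potential ${F(u) = |u|^4/4}$ and a standard Stormer-Verlet integrator, chosen purely for demonstration):

```python
import numpy as np

def F(u):
    # coercive potential F(u) = |u|^4 / 4
    return np.dot(u, u) ** 2 / 4.0

def gradF(u):
    # gradient of F: grad F(u) = |u|^2 u
    return np.dot(u, u) * u

# leapfrog (Stormer-Verlet) integration of u'' = -grad F(u)
dt = 1e-3
u = np.array([1.0, 0.0])
v = np.array([0.0, 0.5])              # velocity u_t
H0 = 0.5 * np.dot(v, v) + F(u)        # Hamiltonian at time zero

for _ in range(20_000):
    v_half = v - 0.5 * dt * gradF(u)
    u = u + dt * v_half
    v = v_half - 0.5 * dt * gradF(u)

# the Hamiltonian (1/2)|u_t|^2 + F(u) is conserved (up to integration error),
# trapping the orbit in a compact level set: the global existence mechanism
assert abs(0.5 * np.dot(v, v) + F(u) - H0) < 1e-4
```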

All quasiperiodic motions are almost periodic. However, there are many examples of dynamical systems that admit solutions that are almost periodic but not quasiperiodic. So one can pose the question: are the dynamics of potential wells universal in the sense that they can capture all almost periodic solutions?

A precise question can be phrased as follows. Let ${M}$ be a compact manifold, and let ${X}$ be a smooth vector field on ${M}$; to avoid degeneracies, let us take ${X}$ to be non-singular in the sense that it is everywhere non-vanishing. Then the trajectories of the first-order ODE

$\displaystyle \partial_t u = X(u) \ \ \ \ \ (2)$

for ${u: {\bf R} \rightarrow M}$ are always global and almost periodic. Can we then find a (coercive) potential ${F: {\bf R}^m \rightarrow {\bf R}}$ for some ${m}$, as well as a smooth embedding ${\phi: M \rightarrow {\bf R}^m}$, such that every solution ${u}$ to (2) pushes forward under ${\phi}$ to a solution to (1)? (Actually, for technical reasons it is preferable to map into the phase space ${{\bf R}^m \times {\bf R}^m}$, rather than position space ${{\bf R}^m}$, but let us ignore this detail for this discussion.)

It turns out that the answer is no; there is a very specific obstruction. Given a pair ${(M,X)}$ as above, define a strongly adapted ${1}$-form to be a ${1}$-form ${\phi}$ on ${M}$ such that ${\phi(X)}$ is pointwise positive, and the Lie derivative ${{\mathcal L}_X \phi}$ is an exact ${1}$-form. We then have

Theorem 1 A smooth compact non-singular dynamics ${(M,X)}$ can be embedded smoothly in a potential well system if and only if it admits a strongly adapted ${1}$-form.

For the “only if” direction, the key point is that potential wells (viewed as a Hamiltonian flow on the phase space ${{\bf R}^m \times {\bf R}^m}$) admit a strongly adapted ${1}$-form, namely the canonical ${1}$-form ${p dq}$, whose Lie derivative is the derivative ${dL}$ of the Lagrangian ${L := \frac{1}{2} |\partial_t u|^2 - F(u)}$ and is thus exact. The converse “if” direction is mainly a consequence of the Nash embedding theorem, and follows the arguments used in my previous paper.
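To spell out the computation behind the “only if” direction: along the potential well flow one has ${\dot q = p}$ and ${\dot p = -\nabla F(q)}$, so the Lie derivative of the canonical ${1}$-form can be computed directly (a quick sketch of the standard calculation):

$\displaystyle {\mathcal L}_X (p\, dq) = \dot p\, dq + p\, d\dot q = -\nabla F(q) \cdot dq + p \cdot dp = d\Big( \tfrac{1}{2} |p|^2 - F(q) \Big) = dL.$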

Interestingly, the same obstruction also works for potential wells in a more general Riemannian manifold than ${{\bf R}^m}$, or for nonlinear wave equations with a potential; combining the two, the obstruction is also present for wave maps with a potential.

It is then natural to ask whether this obstruction is non-trivial, in the sense that there are at least some examples of dynamics ${(M,X)}$ that do not support strongly adapted ${1}$-forms (and hence cannot be modeled smoothly by the dynamics of a potential well, nonlinear wave equation, or wave maps). I posed this question on MathOverflow, and Robert Bryant provided a very nice construction, showing that the vector field ${(\sin(2\pi x), \cos(2\pi x))}$ on the ${2}$-torus ${({\bf R}/{\bf Z})^2}$ had no strongly adapted ${1}$-forms, and hence the dynamics of this vector field cannot be smoothly reproduced by a potential well, nonlinear wave equation, or wave map.

On the other hand, the suspension of any diffeomorphism does support a strongly adapted ${1}$-form (the derivative ${dt}$ of the time coordinate), and using this and the previous theorem I was able to embed a universal Turing machine into a potential well. In particular, there are flows for an explicitly describable potential well whose trajectories have behavior that is undecidable using the usual ZFC axioms of set theory! So potential well dynamics are “effectively” universal, despite the presence of the aforementioned obstruction.

In my previous work on blowup for Navier-Stokes like equations, I speculated that if one could somehow replicate a universal Turing machine within the Euler equations, one could use this machine to create a “von Neumann machine” that replicated smaller versions of itself, which on iteration would lead to a finite time blowup. Now that such a mechanism is present in nonlinear wave equations, it is tempting to try to make this scheme work in that setting. Of course, in my previous paper I had already demonstrated finite time blowup, at least in a three-dimensional setting, but that was a relatively simple discretely self-similar blowup in which no computation occurred. This more complicated blowup scheme would be significantly more effort to set up, but would be proof-of-concept that the same scheme would in principle be possible for the Navier-Stokes equations, assuming somehow that one can embed a universal Turing machine into the Euler equations. (But I’m still hopelessly stuck on how to accomplish this latter task…)

Kaisa Matomäki, Maksym Radziwiłł, and I have uploaded to the arXiv our paper “Correlations of the von Mangoldt and higher divisor functions I. Long shift ranges“, submitted to Proceedings of the London Mathematical Society. This paper is concerned with the estimation of correlations such as

$\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) \ \ \ \ \ (1)$

for medium-sized ${h}$ and large ${X}$, where ${\Lambda}$ is the von Mangoldt function; we also consider variants of this sum in which one of the von Mangoldt functions is replaced with a (higher order) divisor function, but for sake of discussion let us focus just on the sum (1). Understanding this sum is very closely related to the problem of finding pairs of primes that differ by ${h}$; for instance, if one could establish a lower bound

$\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+2) \gg X$

then this would easily imply the twin prime conjecture.

The (first) Hardy-Littlewood conjecture asserts an asymptotic

$\displaystyle \sum_{n \leq X} \Lambda(n) \Lambda(n+h) = {\mathfrak S}(h) X + o(X) \ \ \ \ \ (2)$

as ${X \rightarrow \infty}$ for any fixed positive ${h}$, where the singular series ${{\mathfrak S}(h)}$ is an arithmetic factor arising from the irregularity of distribution of ${\Lambda}$ at small moduli, defined explicitly by

$\displaystyle {\mathfrak S}(h) := 2 \Pi_2 \prod_{p|h; p>2} \frac{p-1}{p-2}$

when ${h}$ is even, and ${{\mathfrak S}(h)=0}$ when ${h}$ is odd, where

$\displaystyle \Pi_2 := \prod_{p>2} (1-\frac{1}{(p-1)^2}) = 0.66016\dots$

is (half of) the twin prime constant. See for instance this previous blog post for a heuristic explanation of this conjecture. From the previous discussion we see that (2) for ${h=2}$ would imply the twin prime conjecture. Sieve theoretic methods are only able to provide an upper bound of the form ${ \sum_{n \leq X} \Lambda(n) \Lambda(n+h) \ll {\mathfrak S}(h) X}$.
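As a numerical aside (purely illustrative, and not part of the paper), ${\Pi_2}$ and the singular series are easy to approximate by truncating the Euler products; for instance one can confirm that ${{\mathfrak S}(6) = 2\,{\mathfrak S}(2)}$, so that (conjecturally) shifts divisible by ${3}$ should be twice as likely to be realized by prime pairs:

```python
# Illustrative computation (not from the paper): approximate the twin prime
# constant Pi_2 by truncating the product over odd primes, then evaluate the
# singular series S(h) = 2 * Pi_2 * prod_{p | h, p > 2} (p-1)/(p-2).
import math

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, b in enumerate(sieve) if b]

PRIMES = primes_up_to(10 ** 5)
Pi2 = math.prod(1 - 1 / (p - 1) ** 2 for p in PRIMES if p > 2)

def singular_series(h):
    if h % 2 == 1:
        return 0.0
    return 2 * Pi2 * math.prod((p - 1) / (p - 2)
                               for p in PRIMES if p > 2 and h % p == 0)

print(Pi2)                 # ~ 0.66016, matching the stated value
print(singular_series(6))  # twice singular_series(2)
```

The truncation at ${10^5}$ already gives ${\Pi_2}$ to about six decimal places, since the tail of the product is ${1 + O(1/p^2)}$.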

Needless to say, apart from the trivial case of odd ${h}$, there are no values of ${h}$ for which the Hardy-Littlewood conjecture is known. However, there are some results that say that this conjecture holds “on the average”: in particular, if ${H}$ is a quantity depending on ${X}$ that is somewhat large, there are results that show that (2) holds for most (i.e. for a proportion ${1-o(1)}$ of) the ${h}$ between ${0}$ and ${H}$. Ideally one would like to take ${H}$ as small as possible; in particular, one can view the full Hardy-Littlewood conjecture as the endpoint case in which ${H}$ is bounded.

The first results in this direction were by van der Corput and by Lavrik, who established such a result with ${H = X}$ (with a subsequent refinement by Balog); Wolke lowered ${H}$ to ${X^{5/8+\varepsilon}}$, and Mikawa lowered ${H}$ further to ${X^{1/3+\varepsilon}}$. The main result of this paper is a further lowering of ${H}$ to ${X^{8/33+\varepsilon}}$. In fact (as in the preceding works) we get a better error term than ${o(X)}$, namely an error of the shape ${O_A( X \log^{-A} X)}$ for any ${A}$.

Our arguments initially proceed along standard lines. One can use the Hardy-Littlewood circle method to express the correlation in (2) as an integral involving exponential sums ${S(\alpha) := \sum_{n \leq X} \Lambda(n) e(\alpha n)}$. The contribution of “major arc” ${\alpha}$ is known by a standard computation to recover the main term ${{\mathfrak S}(h) X}$ plus acceptable errors, so it is a matter of controlling the “minor arcs”. After averaging in ${h}$ and using the Plancherel identity, one is basically faced with establishing a bound of the form

$\displaystyle \int_{\beta-1/H}^{\beta+1/H} |S(\alpha)|^2\ d\alpha \ll_A X \log^{-A} X$

for any “minor arc” ${\beta}$. If ${\beta}$ is somewhat close to a low height rational ${a/q}$ (specifically, if it is within ${X^{-1/6-\varepsilon}}$ of such a rational with ${q = O(\log^{O(1)} X)}$), then this type of estimate is roughly of comparable strength (by another application of Plancherel) to the best available prime number theorem in short intervals on the average, namely that the prime number theorem holds for most intervals of the form ${[x, x + x^{1/6+\varepsilon}]}$, and we can handle this case using standard mean value theorems for Dirichlet series. So we can restrict attention to the “strongly minor arc” case where ${\beta}$ is far from such rationals.

The next step (following some ideas we found in a paper of Zhan) is to rewrite this estimate not in terms of the exponential sums ${S(\alpha) := \sum_{n \leq X} \Lambda(n) e(\alpha n)}$, but rather in terms of the Dirichlet polynomial ${F(s) := \sum_{n \sim X} \frac{\Lambda(n)}{n^s}}$. After a certain amount of computation (including some oscillatory integral estimates arising from stationary phase), one is eventually reduced to the task of establishing an estimate of the form

$\displaystyle \int_{t \sim \lambda X} (\int_{t-\lambda H}^{t+\lambda H} |F(\frac{1}{2}+it')|\ dt')^2\ dt \ll_A \lambda^2 H^2 X \log^{-A} X$

for any ${X^{-1/6-\varepsilon} \ll \lambda \ll \log^{-B} X}$ (with ${B}$ sufficiently large depending on ${A}$).

The next step, which is again standard, is the use of the Heath-Brown identity (as discussed for instance in this previous blog post) to split up ${\Lambda}$ into a number of components that have a Dirichlet convolution structure. Because the exponent ${8/33}$ we are shooting for is less than ${1/4}$, we end up with five types of components that arise, which we call “Type ${d_1}$“, “Type ${d_2}$“, “Type ${d_3}$“, “Type ${d_4}$“, and “Type II”. The “Type II” sums are Dirichlet convolutions involving a factor supported on a range ${[X^\varepsilon, X^{-\varepsilon} H]}$ and are quite easy to deal with; the “Type ${d_j}$” terms are Dirichlet convolutions that resemble (non-degenerate portions of) the ${j^{th}}$ divisor function, formed from convolving together ${j}$ portions of ${1}$. The “Type ${d_1}$” and “Type ${d_2}$” terms can be estimated satisfactorily by standard moment estimates for Dirichlet polynomials; this already recovers the result of Mikawa (and our argument is in fact slightly more elementary in that no Kloosterman sum estimates are required). It is the treatment of the “Type ${d_3}$” and “Type ${d_4}$” sums that requires some new analysis, with the Type ${d_3}$ terms turning out to be the most delicate. After using an existing moment estimate of Jutila for Dirichlet L-functions, matters reduce to obtaining a family of estimates, a typical one of which (relating to the more difficult Type ${d_3}$ sums) is of the form

$\displaystyle \int_{t - H}^{t+H} |M( \frac{1}{2} + it')|^2\ dt' \ll X^{\varepsilon^2} H \ \ \ \ \ (3)$

for “typical” ordinates ${t}$ of size ${X}$, where ${M}$ is the Dirichlet polynomial ${M(s) := \sum_{n \sim X^{1/3}} \frac{1}{n^s}}$ (a fragment of the Riemann zeta function). The precise definition of “typical” is a little technical (because of the complicated nature of Jutila’s estimate) and will not be detailed here. Such a claim would follow easily from the Lindelöf hypothesis (which would imply that ${M(1/2 + it) \ll X^{o(1)}}$) but of course we would like to have an unconditional result.

At this point, having exhausted all the Dirichlet polynomial estimates that are usefully available, we return to “physical space”. Using some further Fourier-analytic and oscillatory integral computations, we can estimate the left-hand side of (3) by an expression that is roughly of the shape

$\displaystyle \frac{H}{X^{1/3}} \sum_{\ell \sim X^{1/3}/H} |\sum_{m \sim X^{1/3}} e( \frac{t}{2\pi} \log \frac{m+\ell}{m-\ell} )|.$

The phase ${\frac{t}{2\pi} \log \frac{m+\ell}{m-\ell}}$ can be Taylor expanded as the sum of the main term ${\frac{t \ell}{\pi m}}$ and a lower order term ${\frac{t \ell^3}{3\pi m^3}}$, plus negligible errors. If we could discard the lower order term then we would get quite a good bound using the exponential sum estimates of Robert and Sargos, which control averages of exponential sums with purely monomial phases, with the averaging allowing us to exploit the hypothesis that ${t}$ is “typical”. Figuring out how to get rid of this lower order term caused some inefficiency in our arguments; the best we could do (after much experimentation) was to use Fourier analysis to shorten the sums, estimate a one-parameter average of exponential sums with a binomial phase by a two-parameter average with a monomial phase, and then use the van der Corput ${B}$ process followed by the estimates of Robert and Sargos. This rather complicated procedure works up to ${H = X^{8/33+\varepsilon}}$; it may be possible that some alternate way to proceed here could improve the exponent somewhat.
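As a quick sanity check on this Taylor expansion (with illustrative values of ${t, m, \ell}$, not the ranges arising in the paper), note that ${\log \frac{m+\ell}{m-\ell} = 2(\frac{\ell}{m} + \frac{\ell^3}{3m^3} + \frac{\ell^5}{5m^5} + \dots)}$, so after multiplying by ${\frac{t}{2\pi}}$ the error after two terms should be of size about ${\frac{t\ell^5}{5\pi m^5}}$:

```python
# Sanity check (illustrative values, not from the paper) of the expansion
# (t/2pi) log((m+l)/(m-l)) = t*l/(pi*m) + t*l^3/(3*pi*m^3) + O(t*l^5/m^5).
import math

t, m, l = 3.0, 1000, 10
exact  = t / (2 * math.pi) * math.log((m + l) / (m - l))
approx = t * l / (math.pi * m) + t * l ** 3 / (3 * math.pi * m ** 3)
next_term = t * l ** 5 / (5 * math.pi * m ** 5)  # first neglected term

print(exact - approx)  # comparable in size to next_term
print(next_term)
```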

In a sequel to this paper, we will use a somewhat different method to reduce ${H}$ to a much smaller value of ${\log^{O(1)} X}$, but only if we replace the correlations ${\sum_{n \leq X} \Lambda(n) \Lambda(n+h)}$ by either ${\sum_{n \leq X} \Lambda(n) d_k(n+h)}$ or ${\sum_{n \leq X} d_k(n) d_l(n+h)}$, and also we now only save a ${o(1)}$ in the error term rather than ${O_A(\log^{-A} X)}$.

In July I will be spending a week at Park City, being one of the mini-course lecturers in the Graduate Summer School component of the Park City Summer Session on random matrices.  I have chosen to give some lectures on least singular values of random matrices, the circular law, and the Lindeberg exchange method in random matrix theory; this is a slightly different set of topics than I had initially advertised (which was instead about the Lindeberg exchange method and the local relaxation flow method), but after consulting with the other mini-course lecturers I felt that this would be a more complementary set of topics.  I have uploaded a draft of my lecture notes (some portion of which is derived from my monograph on the subject); as always, comments and corrections are welcome.

<I>[Update, June 23: notes revised and reformatted to PCMI format. -T.]</I>

Ben Green and I have (finally!) uploaded to the arXiv our paper “New bounds for Szemerédi’s theorem, III: A polylogarithmic bound for ${r_4(N)}$“, submitted to Mathematika. This is the sequel to two previous papers (and an erratum to the former paper), concerning quantitative versions of Szemerédi’s theorem in the case of length four progressions. This sequel has been delayed for over a decade for a number of reasons, but we have finally managed to write the arguments up to our satisfaction and submit it (to a special issue of Mathematika honouring the work of Klaus Roth).

For any natural number ${N}$, define ${r_4(N)}$ to be the largest cardinality of a subset ${A}$ of ${[N] = \{1,\dots,N\}}$ which does not contain any non-trivial arithmetic progressions ${a, a+r, a+2r, a+3r}$ of length four (where “non-trivial” means that ${r}$ is non-zero). Trivially we have ${r_4(N) \leq N}$. In 1969, Szemerédi showed that ${r_4(N) = o(N)}$. However, the decay rate that could be theoretically extracted from this argument (and from several subsequent proofs of this bound, including one by Roth) was quite poor. The first significant quantitative bound on this quantity was by Gowers, who showed that ${r_4(N) \ll N (\log \log N)^{-c}}$ for some absolute constant ${c>0}$. In the second paper in the above-mentioned series, we managed to improve this bound to ${r_4(N) \ll N \exp( - c \sqrt{\log \log N})}$. In this paper, we improve the bound further to ${r_4(N) \ll N (\log N)^{-c}}$, which seems to be the limit of the methods. (We remark that if we could take ${c}$ to be larger than one, this would imply the length four case of a well known conjecture of Erdős that any set of natural numbers whose sum of reciprocals diverges would contain arbitrarily long arithmetic progressions. Thanks to the work of Sanders and of Bloom, the corresponding case of the conjecture for progressions of length three is nearly settled, as it is known that for the analogous bound on ${r_3(N)}$ one can take any ${c}$ less than one.)

Most of the previous work on bounding ${r_4(N)}$ relied in some form or another on the density increment argument introduced by Roth back in 1953; roughly speaking, the idea is to show that if a dense subset ${A}$ of ${[N]}$ fails to contain arithmetic progressions of length four, then one can locate a long subprogression of ${[N]}$ on which ${A}$ has increased density. This was the basic method underlying, for instance, our previous bound ${r_4(N) \ll N \exp( - c \sqrt{\log \log N})}$, as well as a finite field analogue of the bound ${r_4(N) \ll N (\log N)^{-c}}$; however we encountered significant technical difficulties for several years in extending this argument to obtain the result of the current paper. Our method is instead based on “energy increment arguments”, and more specifically on establishing a quantitative version of a Khintchine-type recurrence theorem, similar to the qualitative recurrence theorems established (in the ergodic theory context) by Bergelson-Host-Kra, and (in the current combinatorial context) by Ben Green and myself.

One way to phrase the latter recurrence theorem is as follows. Suppose that ${A \subset [N]}$ has density ${\delta}$. Then one would expect a “randomly” selected arithmetic progression ${{\bf a}, {\bf a}+{\bf r}, {\bf a}+2{\bf r}, {\bf a}+3{\bf r}}$ in ${[N]}$ (using the convention that random variables will be in boldface) to be contained in ${A}$ with probability about ${\delta^4}$. This is not true in general, however it was shown by Ben and myself that for any ${\eta>0}$, there was a set of shifts ${r \in [-N,N]}$ of cardinality ${\gg_{\delta,\eta} N}$, such that for any such ${r}$ one had

$\displaystyle {\bf P}( {\bf a}, {\bf a}+r, {\bf a}+2r, {\bf a}+3r \in A ) \geq \delta^4 - \eta$

if ${{\bf a}}$ was chosen uniformly at random from ${[N]}$. This easily implies that ${r_4(N) = o(N)}$, but does not give a particularly good bound on the decay rate, because the implied constant in the cardinality lower bound ${\gg_{\delta,\eta} N}$ is quite poor (in fact of tower-exponential type, due to the use of regularity lemmas!), and so one has to take ${N}$ to be extremely large compared to ${\delta,\eta}$ to avoid the possibility that the set of shifts in the above theorem consists only of the trivial shift ${r=0}$.
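As a toy illustration (not from the paper) of the ${\delta^4}$ heuristic, one can check numerically that for a pseudorandomly chosen set ${A}$ of density ${\delta}$, the proportion of progressions ${a, a+r, a+2r, a+3r}$ lying entirely in ${A}$ is indeed close to ${\delta^4}$; the content of the recurrence theorem, of course, is that a lower bound of this shape survives for many shifts ${r}$ even for arbitrary (highly structured) sets ${A}$:

```python
# Toy illustration (not from the paper): for a pseudorandom set A of density
# delta, the proportion of four-term progressions a, a+r, a+2r, a+3r lying
# entirely inside A is close to delta**4.  Fixed seed for reproducibility.
import random

random.seed(0)
N, delta = 3000, 0.5
A = {n for n in range(1, N + 1) if random.random() < delta}

hits = trials = 0
for a in range(1, N + 1):
    for r in range(1, (N - a) // 3 + 1):  # ensure a + 3r <= N
        trials += 1
        hits += (a in A and a + r in A and a + 2 * r in A and a + 3 * r in A)

print(hits / trials)  # close to delta**4 = 0.0625
```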

We do not know how to improve the lower bound on the set of shifts to the point where it can give bounds that are competitive with those in this paper. However, we can obtain better quantitative results if we permit ourselves to couple together the two parameters ${{\bf a}}$ and ${{\bf r}}$ of the length four progression. Namely, with ${A}$, ${\delta}$, ${\eta}$ as above, we are able to show that there exist random variables ${{\bf a}, {\bf r}}$, not necessarily independent, such that

$\displaystyle {\bf P}( {\bf a}, {\bf a}+{\bf r}, {\bf a}+2{\bf r}, {\bf a}+3{\bf r} \in A ) \geq \delta^4 - \eta \ \ \ \ \ (1)$

and such that we have the non-degeneracy bound

$\displaystyle {\bf P}( {\bf r} = 0 ) \ll \exp( - \eta^{-O(1)} ) / N.$

This then easily implies the main theorem.

The energy increment method is then deployed to locate a good pair ${({\bf a}, {\bf r})}$ of random variables that will obey the above bounds. One can get some intuition on how to proceed here by considering some model cases. Firstly one can consider a “globally quadratically structured” case in which the indicator function ${1_A}$ “behaves like” a globally quadratic function such as ${F( \alpha n^2 )}$, for some irrational ${\alpha}$ and some smooth periodic function ${F: {\bf R}/{\bf Z} \rightarrow {\bf R}}$ of mean ${\delta}$. If one then takes ${{\bf a}, {\bf r}}$ to be uniformly distributed in ${[N]}$ and ${[-\varepsilon N, \varepsilon N]}$ respectively for some small ${\varepsilon>0}$, with no coupling between the two variables, then the left-hand side of (1) is approximately of the form

$\displaystyle \int_{(x,y,z,w) \in ({\bf R}/{\bf Z})^4: x-3y+3z-w = 0} F(x) F(y) F(z) F(w) \ \ \ \ \ (2)$

where the integral is with respect to the probability Haar measure, and the constraint ${x-3y+3z-w=0}$ ultimately arises from the algebraic constraint

$\displaystyle \alpha {\bf a}^2 - 3 \alpha ({\bf a}+{\bf r})^2 + 3 \alpha ({\bf a}+2{\bf r})^2 - \alpha ({\bf a}+3{\bf r})^2 = 0.$

However, an application of the Cauchy-Schwarz inequality and Fubini’s theorem shows that the integral in (2) is at least ${(\int_{{\bf R}/{\bf Z}} F)^4}$, which (morally at least) gives (1) in this case.
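One way to see this lower bound (a Fourier-side computation, slightly different in presentation from the Cauchy-Schwarz route indicated above): writing ${F(x) = \sum_k a_k e(kx)}$, a character ${e(k_1 x + k_2 y + k_3 z + k_4 w)}$ has mean zero on the subtorus ${\{x-3y+3z-w=0\}}$ unless ${(k_1,k_2,k_3,k_4)}$ is an integer multiple of ${(1,-3,3,-1)}$, so (2) collapses (for real ${F}$) to ${\sum_\lambda |a_\lambda|^2 |a_{3\lambda}|^2}$, whose ${\lambda=0}$ term alone is ${|a_0|^4 = (\int F)^4}$. A toy check with some hypothetical Fourier coefficients:

```python
# Toy check (hypothetical Fourier coefficients, not from the paper): the
# integral (2) over the subtorus {x - 3y + 3z - w = 0} equals, for real F,
# sum_lambda |a_lambda|^2 |a_{3*lambda}|^2, whose lambda = 0 term alone
# already gives the lower bound (int F)^4 = |a_0|^4.
a = {0: 0.3, 1: 0.1, -1: 0.1, 3: 0.05, -3: 0.05}  # a_0 = mean of F = delta

integral = sum(abs(a.get(k, 0)) ** 2 * abs(a.get(3 * k, 0)) ** 2 for k in a)
mean_F_fourth = abs(a[0]) ** 4

print(integral)       # the value of the constrained integral (2)
print(mean_F_fourth)  # the lower bound (int F)^4; integral >= this
```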

Due to the nature of the energy increment argument, it also becomes necessary to consider “locally quadratically structured” cases, in which ${[N]}$ is partitioned into some number of structured pieces ${B_c}$ (think of these as arithmetic progressions, or as “Bohr sets”), and on each piece ${B_c}$, ${1_A}$ behaves like a locally quadratic function such as ${F_c( \alpha_c n^2 )}$, where ${\alpha_c}$ now varies with ${c}$, and the mean of ${F_c}$ will be approximately ${\delta}$ on the average after averaging in ${c}$ (weighted by the size of the pieces ${B_c}$). Now one should select ${{\bf a}}$ and ${{\bf r}}$ in the following coupled manner: first one chooses ${{\bf a}}$ uniformly from ${[N]}$, then one defines ${{\bf c}}$ to be the label ${c}$ such that ${{\bf a} \in B_c}$, and then selects ${{\bf r}}$ uniformly from a set ${B_{c,\varepsilon}}$ which is related to ${B_c}$ in much the same way that ${[-\varepsilon N, \varepsilon N]}$ is related to ${[N]}$. If one does this correctly, the analogue of (2) becomes

$\displaystyle {\bf E} \int_{(x,y,z,w) \in ({\bf R}/{\bf Z})^4: x-3y+3z-w = 0} F_{\mathbf c}(x) F_{\mathbf c}(y) F_{\mathbf c}(z) F_{\mathbf c}(w),$

and one can again use Cauchy-Schwarz and Fubini’s theorem to conclude.

The general case proceeds, very roughly, by an iterative argument. At each stage of the iteration, one has some sort of quadratic model of ${1_A}$ which involves a decomposition of ${[N]}$ into structured pieces ${B_c}$, and a quadratic approximation to ${1_A}$ on each piece. If this approximation is accurate enough (or more precisely, if a certain (averaged) local Gowers uniformity norm ${U^3}$ of the error is small enough) to model the count in (1) (for random variables ${{\bf a}, {\bf r}}$ determined by the above partition of ${[N]}$ into pieces ${B_c}$), and if the frequencies (such as ${\alpha_c}$) involved in the quadratic approximation are “high rank” or “linearly independent over the rationals” in a suitably quantitative sense, then some version of the above arguments can be made to work. If there are some unwanted linear dependencies in the frequencies, we can do some linear algebra to eliminate one of the frequencies (using some geometry of numbers to keep the quantitative bounds under control) and continue the iteration. If instead the approximation is too inaccurate, then the error will be large in a certain averaged local Gowers uniformity norm ${U^3}$. A significant fraction of the paper is then devoted to establishing a quantitative inverse theorem for that norm that concludes (with good bounds) that the error must then locally correlate with locally quadratic phases, which can be used to refine the quadratic approximation to ${1_A}$ in a manner that significantly increases its “energy” (basically an ${L^2}$ norm). Such energy increments cannot continue indefinitely, and when they terminate we obtain the desired claim.

There are existing inverse theorems for ${U^3}$ type norms in the literature, going back to the pioneering work of Gowers mentioned previously, and relying on arithmetic combinatorics tools such as Freiman’s theorem and the Balog-Szemerédi-Gowers lemma, which are good for analysing the “${1\%}$-structured homomorphisms” that arise in Gowers’ argument. However, when we applied these methods to the local Gowers norms we obtained inferior quantitative results that were not strong enough for our application. Instead, we use arguments from a different paper of Gowers in which he tackled Szemerédi’s theorem for arbitrary length progressions. This method produces “${99\%}$-structured homomorphisms” associated to any function with large Gowers uniformity norm; however the catch is that such homomorphisms are initially supported only on a sparse unstructured set, rather than a structured set such as a Bohr set. To proceed further, one first has to locate inside the sparse unstructured set a sparse pseudorandom subset of a Bohr set, and then use “error-correction” type methods (such as “majority-vote” based algorithms) to locally upgrade this ${99\%}$-structured homomorphism on pseudorandom subsets of Bohr sets to a ${100\%}$-structured homomorphism on the entirety of a Bohr set. It is then possible to use some “approximate cohomology” tools to “integrate” these homomorphisms (and discern a key “local symmetry” property of these homomorphisms) to locate the desired local quadratic structure (in much the same fashion that a ${1}$-form on ${{\bf R}^n}$ that varies linearly with the coordinates can be integrated to be the derivative of a quadratic function if we know that the ${1}$-form is closed). These portions of the paper are unfortunately rather technical, but broadly follow the methods already used in previous literature.

Daniel Kane and I have just uploaded to the arXiv our paper “A bound on partitioning clusters“, submitted to the Electronic Journal of Combinatorics. In this short and elementary paper, we consider a question that arose from biomathematical applications: given a finite family ${X}$ of sets (or “clusters”), how many ways can there be of partitioning a set ${A \in X}$ in this family as the disjoint union ${A = A_1 \uplus A_2}$ of two other sets ${A_1, A_2}$ in this family? That is to say, what is the best upper bound one can place on the quantity

$\displaystyle | \{ (A,A_1,A_2) \in X^3: A = A_1 \uplus A_2 \}|$

in terms of the cardinality ${|X|}$ of ${X}$? A trivial upper bound would be ${|X|^2}$, since this is the number of possible pairs ${(A_1,A_2)}$, and ${A_1,A_2}$ clearly determine ${A}$. In our paper, we establish the improved bound

$\displaystyle | \{ (A,A_1,A_2) \in X^3: A = A_1 \uplus A_2 \}| \leq |X|^{3/p}$

where ${p}$ is the somewhat strange exponent

$\displaystyle p := \log_3 \frac{27}{4} = 1.73814\dots, \ \ \ \ \ (1)$

so that ${3/p = 1.72598\dots}$. Furthermore, this exponent is best possible!
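Both the value of the exponent and its sharpness are easy to explore numerically (an illustrative check, not part of the paper): the extremal family described below, consisting of all subsets of ${\{1,\dots,n\}}$ of cardinality ${n/3}$ or ${2n/3}$, attains the exponent ${3/p}$ in the limit, as a Stirling-type computation shows:

```python
# Illustrative check (not from the paper): p = log_3(27/4) gives
# 3/p = 1.72598..., and the family of all subsets of {1,...,n} of size n/3
# or 2n/3 attains this exponent asymptotically.
import math

p = math.log(27 / 4, 3)
print(p, 3 / p)  # 1.73814..., 1.72598...

def log_binom(n, k):  # log of C(n, k) via lgamma, avoiding huge integers
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

n = 3000  # a multiple of 3
# triples (A, A1, A2): choose A of size 2n/3, then split it into two halves
triples = log_binom(n, 2 * n // 3) + log_binom(2 * n // 3, n // 3)
family  = math.log(2) + log_binom(n, n // 3)  # log |X|
print(triples / family)  # tends to 3/p as n -> infinity
```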

Actually, the latter claim is quite easy to show: one takes ${X}$ to be all the subsets of ${\{1,\dots,n\}}$ of cardinality either ${n/3}$ or ${2n/3}$, for ${n}$ a multiple of ${3}$, and the claim follows readily from Stirling’s formula. So it is perhaps the former claim that is more interesting (since many combinatorial proof techniques, such as those based on the Cauchy-Schwarz inequality, tend to produce exponents that are rational or at least algebraic). We follow the common, though unintuitive, trick of generalising a problem to make it simpler. Firstly, one generalises the bound to the “trilinear” bound

$\displaystyle | \{ (A_1,A_2,A_3) \in X_1 \times X_2 \times X_3: A_3 = A_1 \uplus A_2 \}|$

$\displaystyle \leq |X_1|^{1/p} |X_2|^{1/p} |X_3|^{1/p}$

for arbitrary finite collections ${X_1,X_2,X_3}$ of sets. One can place all the sets in ${X_1,X_2,X_3}$ inside a single finite set such as ${\{1,\dots,n\}}$, and then by replacing every set ${A_3}$ in ${X_3}$ by its complement in ${\{1,\dots,n\}}$, one can phrase the inequality in the equivalent form

$\displaystyle | \{ (A_1,A_2,A_3) \in X_1 \times X_2 \times X_3: \{1,\dots,n\} =A_1 \uplus A_2 \uplus A_3 \}|$

$\displaystyle \leq |X_1|^{1/p} |X_2|^{1/p} |X_3|^{1/p}$

for arbitrary collections ${X_1,X_2,X_3}$ of subsets of ${\{1,\dots,n\}}$. We generalise further by turning sets into functions, replacing the estimate with the slightly stronger convolution estimate

$\displaystyle f_1 * f_2 * f_3 (1,\dots,1) \leq \|f_1\|_{\ell^p(\{0,1\}^n)} \|f_2\|_{\ell^p(\{0,1\}^n)} \|f_3\|_{\ell^p(\{0,1\}^n)}$

for arbitrary functions ${f_1,f_2,f_3}$ on the Hamming cube ${\{0,1\}^n}$, where the convolution is on the integer lattice ${\bf Z}^n$ rather than on the finite field vector space ${\bf F}_2^n$. The advantage of working in this general setting is that it becomes very easy to apply induction on the dimension ${n}$; indeed, to prove this estimate for arbitrary ${n}$ it suffices to do so for ${n=1}$. This reduces matters to establishing the elementary inequality

$\displaystyle (ab(1-c))^{1/p} + (bc(1-a))^{1/p} + (ca(1-b))^{1/p} \leq 1$

for all ${0 \leq a,b,c \leq 1}$, which can be done by a combination of undergraduate multivariable calculus and a little bit of numerical computation. (The left-hand side turns out to have local maxima at ${(1,1,0), (1,0,1), (0,1,1), (2/3,2/3,2/3)}$, with the latter being the cause of the numerology (1).)
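While no substitute for the rigorous verification in the paper, it is easy to spot-check this inequality numerically on a grid, and to confirm the equality cases at the listed local maxima (at ${(2/3,2/3,2/3)}$, each term equals ${(4/27)^{1/p} = 1/3}$, since ${3^p = 27/4}$ by the definition of ${p}$):

```python
# Numerical spot check (not a proof): the elementary inequality
# (ab(1-c))^{1/p} + (bc(1-a))^{1/p} + (ca(1-b))^{1/p} <= 1 on [0,1]^3,
# with equality at the local maxima (1,1,0) and (2/3,2/3,2/3).
import math

p = math.log(27 / 4, 3)

def lhs(a, b, c):
    return ((a * b * (1 - c)) ** (1 / p) + (b * c * (1 - a)) ** (1 / p)
            + (c * a * (1 - b)) ** (1 / p))

grid = [i / 50 for i in range(51)]
best = max(lhs(a, b, c) for a in grid for b in grid for c in grid)
print(best)                 # never exceeds 1 (up to rounding)
print(lhs(2/3, 2/3, 2/3))   # equality: each term is (4/27)^{1/p} = 1/3
```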

The same sort of argument also gives an energy bound

$\displaystyle E(A,A) \leq |A|^{\log_2 6}$

for any subset ${A \subset \{0,1\}^n}$ of the Hamming cube, where

$\displaystyle E(A,A) := |\{(a_1,a_2,a_3,a_4) \in A^4: a_1+a_2 = a_3 + a_4 \}|$

is the additive energy of ${A}$. The example ${A = \{0,1\}^n}$ shows that the exponent ${\log_2 6}$ cannot be improved.
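For small ${n}$ this can be confirmed by brute force (a quick illustrative check, not part of the paper); note that the additive energy factors over the ${n}$ coordinates, with each coordinate contributing a factor of ${6}$:

```python
# Brute-force check (small n) that the full Hamming cube A = {0,1}^n has
# additive energy exactly 6^n = |A|^{log_2 6}, matching the stated exponent.
import itertools
import math

n = 3
A = list(itertools.product((0, 1), repeat=n))

def add(x, y):
    return tuple(a + b for a, b in zip(x, y))  # addition in Z^n, not F_2^n

energy = sum(1 for a1 in A for a2 in A for a3 in A for a4 in A
             if add(a1, a2) == add(a3, a4))
print(energy)  # 6**3 = 216 = |A|^{log_2 6} with |A| = 8
```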

I’ve just uploaded to the arXiv my paper “Finite time blowup for a supercritical defocusing nonlinear Schrödinger system“, submitted to Analysis and PDE. This paper is an analogue of a recent paper of mine in which I constructed a supercritical defocusing nonlinear wave (NLW) system ${-\partial_{tt} u + \Delta u = (\nabla F)(u)}$ which exhibited smooth solutions that developed singularities in finite time. Here, we achieve essentially the same conclusion for the (inhomogeneous) supercritical defocusing nonlinear Schrödinger (NLS) equation

$\displaystyle i \partial_t u + \Delta u = (\nabla F)(u) + G \ \ \ \ \ (1)$

where ${u: {\bf R} \times {\bf R}^d \rightarrow {\bf C}^m}$ is now a system of scalar fields, ${F: {\bf C}^m \rightarrow {\bf R}}$ is a potential which is strictly positive and homogeneous of degree ${p+1}$ (and invariant under phase rotations ${u \mapsto e^{i\theta} u}$), and ${G: {\bf R} \times {\bf R}^d \rightarrow {\bf C}^m}$ is a smooth compactly supported forcing term, needed for technical reasons.

To oversimplify somewhat, the equation (1) is known to be globally regular in the energy-subcritical case when ${d \leq 2}$, or when ${d \geq 3}$ and ${p < 1+\frac{4}{d-2}}$; global regularity is also known (but is significantly more difficult to establish) in the energy-critical case when ${d \geq 3}$ and ${p = 1 +\frac{4}{d-2}}$. (This is an oversimplification for a number of reasons, in particular in higher dimensions one only knows global well-posedness instead of global regularity. See this previous post for some exploration of this issue in the context of nonlinear wave equations.) The main result of this paper is to show that global regularity can break down in the remaining energy-supercritical case when ${d \geq 3}$ and ${p > 1 + \frac{4}{d-2}}$, at least when the target dimension ${m}$ is allowed to be sufficiently large depending on the spatial dimension ${d}$ (I did not try to achieve the optimal value of ${m}$ here, but the argument gives a value of ${m}$ that grows quadratically in ${d}$). Unfortunately, this result does not directly impact the most interesting case of the defocusing scalar NLS equation

$\displaystyle i \partial_t u + \Delta u = |u|^{p-1} u \ \ \ \ \ (2)$

in which ${m=1}$; however it does establish a rigorous barrier to any attempt to prove global regularity for the scalar NLS equation, in that such an attempt needs to crucially use some property of the scalar NLS that is not shared by the more general systems in (1). For instance, any approach that is primarily based on the conservation laws of mass, momentum, and energy (which are common to both (1) and (2)) will not be sufficient to establish global regularity of supercritical defocusing scalar NLS.

The method of proof in this paper is broadly similar to that in the previous paper for NLW, but with a number of additional technical complications. Both proofs begin by reducing matters to constructing a discretely self-similar solution. In the case of NLW, this solution lived on a forward light cone ${\{ (t,x): |x| \leq t \}}$ and obeyed a self-similarity

$\displaystyle u(2t, 2x) = 2^{-\frac{2}{p-1}} u(t,x).$

The ability to restrict to a light cone arose from the finite speed of propagation properties of NLW. For NLS, the solution will instead live on the domain

$\displaystyle H_d := ([0,+\infty) \times {\bf R}^d) \backslash \{(0,0)\}$

and obey a parabolic self-similarity

$\displaystyle u(4t, 2x) = 2^{-\frac{2}{p-1}} u(t,x)$

and solve the homogeneous version ${G=0}$ of (1). (The inhomogeneity ${G}$ emerges when one truncates the self-similar solution so that the initial data is compactly supported in space.) A key technical point is that ${u}$ has to be smooth everywhere in ${H_d}$, including the boundary component ${\{ (0,x): x \in {\bf R}^d \backslash \{0\}\}}$. This unfortunately rules out many of the existing constructions of self-similar solutions, which typically will have some sort of singularity at the spatial origin.
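As a quick illustration of this parabolic scaling (using a hypothetical explicit profile, which is not the solution constructed in the paper), any function of the form ${u(t,x) = (t+|x|^2)^{-1/(p-1)}}$ obeys the required self-similarity, since replacing ${(t,x)}$ by ${(4t,2x)}$ multiplies ${t + |x|^2}$ by ${4}$:

```python
# Scaling illustration (a hypothetical profile, not the paper's solution):
# u(t, x) = (t + |x|^2)^(-1/(p-1)) obeys the parabolic self-similarity
# u(4t, 2x) = 2^(-2/(p-1)) u(t, x).
p = 7.0  # a supercritical exponent for d = 3, where 1 + 4/(d-2) = 5

def u(t, x):
    return (t + sum(xi * xi for xi in x)) ** (-1 / (p - 1))

t, x = 0.7, (0.3, -1.2, 0.5)
left  = u(4 * t, tuple(2 * xi for xi in x))
right = 2 ** (-2 / (p - 1)) * u(t, x)
print(left, right)  # equal up to rounding
```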

The remaining steps of the argument can broadly be described as quantifier elimination: one systematically eliminates each of the degrees of freedom of the problem in turn by locating the necessary and sufficient conditions required of the remaining degrees of freedom in order for the constraints of a particular degree of freedom to be satisfiable. The first such degree of freedom to eliminate is the potential function ${F}$. The task here is to determine what constraints must exist on a putative solution ${u}$ in order for there to exist a (positive, homogeneous, smooth away from origin) potential ${F}$ obeying the homogeneous NLS equation

$\displaystyle i \partial_t u + \Delta u = (\nabla F)(u).$

Firstly, the requirement that ${F}$ be homogeneous implies the Euler identity

$\displaystyle \langle (\nabla F)(u), u \rangle = (p+1) F(u)$

(where ${\langle,\rangle}$ denotes the standard real inner product on ${{\bf C}^m}$), while the requirement that ${F}$ be phase invariant similarly yields the variant identity

$\displaystyle \langle (\nabla F)(u), iu \rangle = 0,$

so if one defines the potential energy field to be ${V = F(u)}$, we obtain from the chain rule the equations

$\displaystyle \langle i \partial_t u + \Delta u, u \rangle = (p+1) V$

$\displaystyle \langle i \partial_t u + \Delta u, iu \rangle = 0$

$\displaystyle \langle i \partial_t u + \Delta u, \partial_t u \rangle = \partial_t V$

$\displaystyle \langle i \partial_t u + \Delta u, \partial_{x_j} u \rangle = \partial_{x_j} V.$

Conversely, it turns out (roughly speaking) that if one can locate fields ${u}$ and ${V}$ obeying the above equations (as well as some other technical regularity and non-degeneracy conditions), then one can find an ${F}$ with all the required properties. The first of these equations can be thought of as a definition of the potential energy field ${V}$, and the other three equations are basically disguised versions of the conservation laws of mass, energy, and momentum respectively. The construction of ${F}$ relies on a classical extension theorem of Seeley that is a relative of the Whitney extension theorem.
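One can verify the Euler and phase invariance identities numerically for a hypothetical potential such as ${F(u) = |u|^{p+1}}$ (which is homogeneous of degree ${p+1}$ and phase invariant, though of course not the potential constructed in the paper); a minimal sketch using finite-difference gradients:

```python
# Numerical check with a hypothetical potential F(u) = |u|^(p+1) on C^m:
# homogeneity gives the Euler identity <grad F(u), u> = (p+1) F(u), and
# phase invariance gives <grad F(u), i u> = 0, where <.,.> is the real
# inner product Re sum conj(a_j) b_j on C^m.
m, p = 2, 3
h = 1e-6  # finite-difference step

def F(u):
    return sum(abs(z) ** 2 for z in u) ** ((p + 1) / 2)

def ip(a, b):  # the standard real inner product on C^m
    return sum((za.conjugate() * zb).real for za, zb in zip(a, b))

def grad_F(u):  # central finite differences in real and imaginary directions
    g = []
    for j in range(m):
        for direction in (1, 1j):
            e = [direction * h if k == j else 0 for k in range(m)]
            g.append((F([a + b for a, b in zip(u, e)])
                      - F([a - b for a, b in zip(u, e)])) / (2 * h))
    # reassemble as a vector in C^m: grad_j = dF/d(Re u_j) + i dF/d(Im u_j)
    return [g[2 * j] + 1j * g[2 * j + 1] for j in range(m)]

u = [0.4 + 0.9j, -0.7 + 0.2j]
g = grad_F(u)
print(ip(g, u), (p + 1) * F(u))    # Euler identity: these agree
print(ip(g, [1j * z for z in u]))  # phase invariance: this vanishes
```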

Now that the potential ${F}$ is eliminated, the next degree of freedom to eliminate is the solution field ${u}$. One can observe that the above equations involving ${u}$ and ${V}$ can be expressed instead in terms of ${V}$ and the Gram-type matrix ${G[u,u]}$ of ${u}$, which is a ${(2d+4) \times (2d+4)}$ matrix consisting of the inner products ${\langle D_1 u, D_2 u \rangle}$ where ${D_1,D_2}$ range amongst the ${2d+4}$ differential operators

$\displaystyle D_1,D_2 \in \{ 1, i, \partial_t, i\partial_t, \partial_{x_1},\dots,\partial_{x_d}, i\partial_{x_1}, \dots, i\partial_{x_d}\}.$

To eliminate ${u}$, one thus needs to answer the question of what properties are required of a ${(2d+4) \times (2d+4)}$ matrix ${G}$ for it to be the Gram-type matrix ${G = G[u,u]}$ of a field ${u}$. Amongst some obvious necessary conditions are that ${G}$ needs to be symmetric and positive semi-definite; there are also additional constraints coming from identities such as

$\displaystyle \partial_t \langle u, u \rangle = 2 \langle u, \partial_t u \rangle$

$\displaystyle \langle i u, \partial_t u \rangle = - \langle u, i \partial_t u \rangle$

and

$\displaystyle \partial_{x_j} \langle iu, \partial_{x_k} u \rangle - \partial_{x_k} \langle iu, \partial_{x_j} u \rangle = 2 \langle i \partial_{x_j} u, \partial_{x_k} u \rangle.$
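
These identities are purely pointwise consequences of viewing ${\bf C}$ as ${\bf R}^2$ with the real inner product ${\langle a, b \rangle = \mathrm{Re}(a \overline{b})}$. As a quick sanity check (my own, not from the paper), one can verify all three symbolically in sympy by working with the real and imaginary parts of ${u}$:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2', real=True)
# real and imaginary parts of the complex field u
u1 = sp.Function('u1')(t, x1, x2)
u2 = sp.Function('u2')(t, x1, x2)
u = (u1, u2)

def ip(a, b):
    # pointwise real inner product <a,b> = Re(a conj(b)) on C viewed as R^2
    return a[0]*b[0] + a[1]*b[1]

def times_i(a):
    # multiplication by i rotates (Re, Im) to (-Im, Re)
    return (-a[1], a[0])

def d(a, v):
    # componentwise partial derivative
    return (sp.diff(a[0], v), sp.diff(a[1], v))

residuals = [
    # d/dt <u,u> - 2 <u, d_t u>
    sp.diff(ip(u, u), t) - 2*ip(u, d(u, t)),
    # <iu, d_t u> + <u, i d_t u>
    ip(times_i(u), d(u, t)) + ip(u, times_i(d(u, t))),
    # d_{x1} <iu, d_{x2} u> - d_{x2} <iu, d_{x1} u> - 2 <i d_{x1} u, d_{x2} u>
    sp.diff(ip(times_i(u), d(u, x2)), x1)
    - sp.diff(ip(times_i(u), d(u, x1)), x2)
    - 2*ip(times_i(d(u, x1)), d(u, x2)),
]
residuals = [sp.expand(r) for r in residuals]
print(residuals)  # [0, 0, 0]
```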

Ideally one would like a theorem that asserts (for ${m}$ large enough) that as long as ${G}$ obeys all of the “obvious” constraints, then there exists a suitably non-degenerate map ${u}$ such that ${G = G[u,u]}$. In the case of NLW, the analogous claim was basically a consequence of the Nash embedding theorem (which can be viewed as a theorem about the solvability of the system of equations ${\langle \partial_{x_j} u, \partial_{x_k} u \rangle = g_{jk}}$ for a given positive definite symmetric set of fields ${g_{jk}}$). However, the presence of the complex structure in the NLS case poses some significant technical challenges (note for instance that the naive complex version of the Nash embedding theorem is false, due to obstructions such as Liouville’s theorem that prevent a compact complex manifold from being embeddable holomorphically in ${{\bf C}^m}$). Nevertheless, by adapting the proof of the Nash embedding theorem (in particular, the simplified proof of Gunther that avoids the need to use the Nash-Moser iteration scheme) we were able to obtain a partial complex analogue of the Nash embedding theorem that sufficed for our application; it required an artificial additional “curl-free” hypothesis on the Gram-type matrix ${G[u,u]}$, but fortunately this hypothesis ends up being automatic in our construction. Also, this version of the Nash embedding theorem is unable to prescribe the component ${\langle \partial_t u, \partial_t u \rangle}$ of the Gram-type matrix ${G[u,u]}$, but fortunately this component is not used in any of the conservation laws and so the loss of this component does not cause any difficulty.

After applying the above-mentioned Nash embedding theorem, the task is now to locate a matrix ${G}$ obeying all the hypotheses of that theorem, as well as the conservation laws for mass, momentum, and energy (after defining the potential energy field ${V}$ in terms of ${G}$). This is quite a lot of fields and constraints, but one can cut down significantly on the degrees of freedom by requiring that ${G}$ is spherically symmetric (in a tensorial sense) and also continuously self-similar (not just discretely self-similar). Note that this hypothesis is weaker than the assertion that the original field ${u}$ is spherically symmetric and continuously self-similar; indeed we do not know if non-trivial solutions of this type actually exist. These symmetry hypotheses reduce the number of independent components of the ${(2d+4) \times (2d+4)}$ matrix ${G}$ to just six: ${g_{1,1}, g_{1,i\partial_t}, g_{1,i\partial_r}, g_{\partial_r, \partial_r}, g_{\partial_\omega, \partial_\omega}, g_{\partial_r, \partial_t}}$, which now take as their domain the ${1+1}$-dimensional space

$\displaystyle H_1 := ([0,+\infty) \times {\bf R}) \backslash \{(0,0)\}.$

One now has to construct these six fields, together with a potential energy field ${v}$, so that they obey a number of constraints, notably some positive definiteness constraints as well as the aforementioned conservation laws for mass, momentum, and energy.

The field ${g_{1,i\partial_t}}$ only arises in the equation for the potential ${v}$ (coming from Euler’s identity) and can easily be eliminated. Similarly, the field ${g_{\partial_r,\partial_t}}$ only makes an appearance in the current of the energy conservation law; in the energy-supercritical case the total energy is infinite, so energy conservation imposes no effective constraint, and this field can also be easily eliminated from the problem. This leaves us with the task of constructing just five fields ${g_{1,1}, g_{1,i\partial_r}, g_{\partial_r,\partial_r}, g_{\partial_\omega,\partial_\omega}, v}$ obeying a number of positivity conditions, symmetry conditions, regularity conditions, and conservation laws for mass and momentum.

The potential field ${v}$ can effectively be absorbed into the angular stress field ${g_{\partial_\omega,\partial_\omega}}$ (after placing an appropriate counterbalancing term in the radial stress field ${g_{\partial_r, \partial_r}}$ so as not to disrupt the conservation laws), so we can also eliminate this field. The angular stress field ${g_{\partial_\omega, \partial_\omega}}$ is then only constrained through the momentum conservation law and a requirement of positivity; one can then eliminate this field by converting the momentum conservation law from an equality to an inequality. Finally, the radial stress field ${g_{\partial_r, \partial_r}}$ is also only constrained through a positive definiteness constraint and the momentum conservation inequality, so it can also be eliminated from the problem after some further modification of the momentum conservation inequality.

The task then reduces to locating just two fields ${g_{1,1}, g_{1,i\partial_r}}$ that obey a mass conservation law

$\displaystyle \partial_t g_{1,1} = 2 \left(\partial_r + \frac{d-1}{r} \right) g_{1,i\partial_r}$

together with an additional inequality that is the remnant of the momentum conservation law. One can solve the mass conservation law in terms of a single scalar field ${W}$ using the ansatz

$\displaystyle g_{1,1} = 2 r^{1-d} \partial_r (r^d W)$

$\displaystyle g_{1,i\partial_r} = r^{1-d} \partial_t (r^d W)$

so the problem has finally been simplified to the task of locating a single scalar field ${W}$ with some scaling and homogeneity properties that obeys a certain differential inequality relating to momentum conservation. This turns out to be possible by explicitly writing down a specific scalar field ${W}$ using some asymptotic parameters and cutoff functions.
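
As a quick symbolic check (my own, not part of the paper's argument), one can confirm in sympy that this ansatz satisfies the mass conservation law identically:

```python
import sympy as sp

t, r, d = sp.symbols('t r d', positive=True)
W = sp.Function('W')(t, r)

# the ansatz for the two remaining fields in terms of the scalar field W
g_11 = 2 * r**(1 - d) * sp.diff(r**d * W, r)
g_1ir = r**(1 - d) * sp.diff(r**d * W, t)

# mass conservation law: d_t g_{1,1} = 2 (d_r + (d-1)/r) g_{1, i d_r}
residual = sp.diff(g_11, t) - 2 * (sp.diff(g_1ir, r) + (d - 1) / r * g_1ir)
print(sp.simplify(residual))  # 0
```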

I’ve just uploaded to the arXiv my paper “An integration approach to the Toeplitz square peg problem“, submitted to Forum of Mathematics, Sigma. This paper resulted from my attempts recently to solve the Toeplitz square peg problem (also known as the inscribed square problem):

Conjecture 1 (Toeplitz square peg problem) Let ${\gamma}$ be a simple closed curve in the plane. Is it necessarily the case that ${\gamma}$ contains four vertices of a square?

See this recent survey of Matschke in the Notices of the AMS for the latest results on this problem.

The route I took to the results in this paper was somewhat convoluted. I was motivated to look at this problem after lecturing recently on the Jordan curve theorem in my class. The problem is superficially similar to the Jordan curve theorem in that the result is known (and rather easy to prove) if ${\gamma}$ is sufficiently regular (e.g. if it is a polygonal path), but seems to be significantly more difficult when the curve is merely assumed to be continuous. Roughly speaking, all the known positive results on the problem have proceeded using (in some form or another) tools from homology: note for instance that one can view the conjecture as asking whether the four-dimensional subset ${\gamma^4}$ of the eight-dimensional space ${({\bf R}^2)^4}$ necessarily intersects the four-dimensional space ${\mathtt{Squares} \subset ({\bf R}^2)^4}$ consisting of the quadruples ${(v_1,v_2,v_3,v_4)}$ traversing a square in (say) anti-clockwise order; this space is a four-dimensional linear subspace of ${({\bf R}^2)^4}$, with a two-dimensional subspace of “degenerate” squares ${(v,v,v,v)}$ removed. If one ignores this degenerate subspace, one can use intersection theory to conclude (under reasonable “transversality” hypotheses) that ${\gamma^4}$ intersects ${\mathtt{Squares}}$ an odd number of times (up to the cyclic symmetries of the square), which is basically how Conjecture 1 is proven in the regular case. Unfortunately, if one then takes a limit and considers what happens when ${\gamma}$ is just a continuous curve, the odd number of squares created by these homological arguments could conceivably all degenerate to points, thus blocking one from proving the conjecture in the general case.

Inspired by my previous work on finite time blowup for various PDEs, I first tried looking for a counterexample in the category of (locally) self-similar curves that are smooth (or piecewise linear) away from a single origin where it can oscillate infinitely often; this is basically the smoothest type of curve that was not already covered by previous results. By a rescaling and compactness argument, it is not difficult to see that such a counterexample would exist if there was a counterexample to the following periodic version of the conjecture:

Conjecture 2 (Periodic square peg problem) Let ${\gamma_1, \gamma_2}$ be two disjoint simple closed piecewise linear curves in the cylinder ${({\bf R}/{\bf Z}) \times {\bf R}}$ which have a winding number of one, that is to say they are homologous to the loop ${x \mapsto (x,0)}$ from ${{\bf R}/{\bf Z}}$ to ${({\bf R}/{\bf Z}) \times {\bf R}}$. Then the union of ${\gamma_1}$ and ${\gamma_2}$ contains the four vertices of a square.

In contrast to Conjecture 1, which is known for polygonal paths, Conjecture 2 is still open even under the hypothesis of polygonal paths; the homological arguments alluded to previously now show that the number of inscribed squares in the periodic setting is even rather than odd, which is not enough to conclude the conjecture. (This flipping of parity from odd to even due to an infinite amount of oscillation is reminiscent of the “Eilenberg-Mazur swindle“, discussed in this previous post.)

I therefore tried to construct counterexamples to Conjecture 2. I began perturbatively, looking at curves ${\gamma_1, \gamma_2}$ that were small perturbations of constant functions. After some initial Taylor expansion, I was blocked from forming such a counterexample because an inspection of the leading Taylor coefficients required one to construct a continuous periodic function of mean zero that never vanished, which of course was impossible by the intermediate value theorem. I kept expanding to higher and higher order to try to evade this obstruction (this, incidentally, was when I discovered this cute application of Lagrange reversion) but no matter how high an accuracy I went (I think I ended up expanding to sixth order in a perturbative parameter ${\varepsilon}$ before figuring out what was going on!), this obstruction kept resurfacing again and again. I eventually figured out that this obstruction was being caused by a “conserved integral of motion” for both Conjecture 2 and Conjecture 1, which can in fact be used to largely rule out perturbative constructions. This yielded a new positive result for both conjectures:

Theorem 3

• (i) Conjecture 1 holds when ${\gamma}$ is the union ${\{ (t,f(t)): t \in [t_0,t_1]\} \cup \{ (t,g(t)): t \in [t_0,t_1]\}}$ of the graphs of two Lipschitz functions ${f,g: [t_0,t_1] \rightarrow {\bf R}}$ of Lipschitz constant less than one that agree at the endpoints.
• (ii) Conjecture 2 holds when ${\gamma_1, \gamma_2}$ are graphs of Lipschitz functions ${f: {\bf R}/{\bf Z} \rightarrow {\bf R}, g: {\bf R}/{\bf Z} \rightarrow {\bf R}}$ of Lipschitz constant less than one.

We sketch the proof of Theorem 3(i) as follows (the proof of Theorem 3(ii) is very similar). Let ${\gamma_1: [t_0, t_1] \rightarrow {\bf R}^2}$ be the curve ${\gamma_1(t) := (t,f(t))}$, thus ${\gamma_1}$ traverses one of the two graphs that comprise ${\gamma}$. For each time ${t \in [t_0,t_1]}$, there is a unique square with first vertex ${\gamma_1(t)}$ (and the other three vertices, traversed in anticlockwise order, denoted ${\gamma_2(t), \gamma_3(t), \gamma_4(t)}$) such that ${\gamma_2(t)}$ also lies in the graph of ${f}$ and ${\gamma_4(t)}$ also lies in the graph of ${g}$ (actually for technical reasons we have to extend ${f,g}$ by constants to all of ${{\bf R}}$ in order for this claim to be true). To see this, we simply rotate the graph of ${g}$ clockwise by ${\frac{\pi}{2}}$ around ${\gamma_1(t)}$; by the Lipschitz hypotheses, the rotated graph must hit the graph of ${f}$ in a unique point, which is ${\gamma_2(t)}$, and this in turn determines the other two vertices ${\gamma_3(t), \gamma_4(t)}$ of the square. The curve ${t \mapsto \gamma_3(t)}$ has the same starting and ending points as the graphs of ${f}$ and ${g}$; using the Lipschitz hypothesis one can show that this curve is simple. If this curve ever hits the graph of ${g}$ other than at the endpoints, we have created an inscribed square, so we may assume for contradiction that ${\gamma_3}$ avoids the graph of ${g}$, and hence by the Jordan curve theorem the two curves enclose some non-empty bounded open region ${\Omega}$.
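
To illustrate the rotation construction concretely, here is a small numeric sketch (with made-up sample functions ${f,g}$ of Lipschitz constant ${0.3}$; the helper names are my own, not from the paper). Rotating the graph of ${g}$ clockwise by ${\pi/2}$ about ${\gamma_1(t)}$ and intersecting with the graph of ${f}$ reduces to a one-dimensional fixed-point equation, which is strictly monotone when both Lipschitz constants are less than one and so can be solved by bisection:

```python
import math

# Hypothetical sample graphs of Lipschitz constant 0.3 < 1 (my choice,
# not from the paper); the graph of g is drawn below the graph of f.
f = lambda x: 0.3 * math.sin(x)
g = lambda x: 0.3 * math.cos(x) - 2.0

def square_at(t, lo=-10.0, hi=10.0, iters=80):
    """Return the unique square with first vertex (t, f(t)), second vertex
    on the graph of f, and fourth vertex on the graph of g."""
    # Rotating the graph of g clockwise by pi/2 about (t, f(t)) and
    # intersecting with the graph of f yields the fixed-point equation
    #     u = t + g(t + f(t) - f(u)) - f(t).
    # h below is strictly decreasing when both Lipschitz constants are
    # less than one, so a sign-change bisection finds the unique root.
    h = lambda u: t + g(t + f(t) - f(u)) - f(t) - u
    a, b = lo, hi
    for _ in range(iters):
        m = 0.5 * (a + b)
        if h(m) > 0:
            a = m
        else:
            b = m
    u = 0.5 * (a + b)
    v1 = (t, f(t))
    v2 = (u, f(u))
    da, db = v2[0] - v1[0], v2[1] - v1[1]    # side vector v2 - v1
    v3 = (v1[0] + da - db, v1[1] + da + db)  # v2 shifted by the rotated side
    v4 = (v1[0] - db, v1[1] + da)            # v1 shifted by the rotated side
    return v1, v2, v3, v4

v1, v2, v3, v4 = square_at(0.5)
assert abs(v2[1] - f(v2[0])) < 1e-9  # second vertex lies on the graph of f
assert abs(v4[1] - g(v4[0])) < 1e-9  # fourth vertex lies on the graph of g
```

Sweeping ${t}$ over ${[t_0,t_1]}$ with such a routine then traces out the curve ${\gamma_3(t)}$ used in the argument.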

Now for the conserved integral of motion. If we integrate the ${1}$-form ${y\ dx}$ on each of the four curves ${\gamma_1, \gamma_2, \gamma_3, \gamma_4}$, we obtain the identity

$\displaystyle \int_{\gamma_1} y\ dx - \int_{\gamma_2} y\ dx + \int_{\gamma_3} y\ dx - \int_{\gamma_4} y\ dx = 0.$

This identity can be established by the following calculation: one can parameterise

$\displaystyle \gamma_1(t) = (x(t), y(t))$

$\displaystyle \gamma_2(t) = (x(t)+a(t), y(t)+b(t))$

$\displaystyle \gamma_3(t) = (x(t)+a(t)-b(t), y(t)+a(t)+b(t))$

$\displaystyle \gamma_4(t) = (x(t)-b(t), y(t)+a(t))$

for some Lipschitz functions ${x,y,a,b: [t_0,t_1] \rightarrow {\bf R}}$; thus for instance ${\int_{\gamma_1} y\ dx = \int_{t_0}^{t_1} y(t)\ dx(t)}$. Inserting these parameterisations and doing some canceling, one can write the above integral as

$\displaystyle \int_{t_0}^{t_1} d \frac{a(t)^2-b(t)^2}{2}$

which vanishes because ${a(t), b(t)}$ (the components of the side ${\gamma_2(t) - \gamma_1(t)}$ of the square with vertices ${\gamma_1(t), \gamma_2(t), \gamma_3(t), \gamma_4(t)}$) vanish at the endpoints ${t=t_0,t_1}$, where the square degenerates to a point because ${f}$ and ${g}$ agree there.
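
This cancellation can be checked mechanically. The following sympy snippet (my own sanity check, not from the paper) expands the alternating sum of the integrands of ${y\ dx}$ along the four parameterised curves and confirms that it is the exact derivative of ${\frac{a^2-b^2}{2}}$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
x, y, a, b = (sp.Function(n)(t) for n in ('x', 'y', 'a', 'b'))

# the four vertices of the square, as parameterised in the text
g1 = (x, y)
g2 = (x + a, y + b)
g3 = (x + a - b, y + a + b)
g4 = (x - b, y + a)

def integrand(g):
    # integrand of \int y dx pulled back to the parameter interval
    return g[1] * sp.diff(g[0], t)

alt_sum = integrand(g1) - integrand(g2) + integrand(g3) - integrand(g4)
target = sp.diff((a**2 - b**2) / 2, t)
print(sp.expand(alt_sum - target))  # 0
```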

Using this conserved integral of motion, one can show that

$\displaystyle \int_{\gamma_3} y\ dx = \int_{t_0}^{t_1} g(t)\ dt$

which by Stokes’ theorem then implies that the bounded open region ${\Omega}$ mentioned previously has zero area, which is absurd.

This argument hinged on the curve ${\gamma_3}$ being simple, so that the Jordan curve theorem could apply. Once one left the perturbative regime of curves of small Lipschitz constant, it became possible for ${\gamma_3}$ to be self-crossing, but nevertheless there still seemed to be some sort of integral obstruction. I eventually isolated the problem in the form of a strengthened version of Conjecture 2:

Conjecture 4 (Area formulation of square peg problem) Let ${\gamma_1, \gamma_2, \gamma_3, \gamma_4: {\bf R}/{\bf Z} \rightarrow ({\bf R}/{\bf Z}) \times {\bf R}}$ be simple closed piecewise linear curves of winding number ${1}$ obeying the area identity

$\displaystyle \int_{\gamma_1} y\ dx - \int_{\gamma_2} y\ dx + \int_{\gamma_3} y\ dx - \int_{\gamma_4} y\ dx = 0$

(note the ${1}$-form ${y\ dx}$ is still well defined on the cylinder ${({\bf R}/{\bf Z}) \times {\bf R}}$; note also that the curves ${\gamma_1,\gamma_2,\gamma_3,\gamma_4}$ are allowed to cross each other.) Then there exists a (possibly degenerate) square with vertices (traversed in anticlockwise order) lying on ${\gamma_1, \gamma_2, \gamma_3, \gamma_4}$ respectively.

It is not difficult to see that Conjecture 4 implies Conjecture 2. Actually I believe that the converse implication is at least morally true, in that any counterexample to Conjecture 4 can be eventually transformed to a counterexample to Conjecture 2 and Conjecture 1. The conserved integral of motion argument can establish Conjecture 4 in many cases, for instance if ${\gamma_2,\gamma_4}$ are graphs of functions of Lipschitz constant less than one.

Conjecture 4 has a model special case, when one of the ${\gamma_i}$ is assumed to just be a horizontal loop. In this case, the problem collapses to that of producing an intersection between two three-dimensional subsets of a six-dimensional space, rather than between four-dimensional subsets of an eight-dimensional space. More precisely, some elementary transformations reveal that this special case of Conjecture 4 can be formulated in the following fashion, in which the geometric notion of a square is replaced by the additive notion of a triple of real numbers summing to zero:

Conjecture 5 (Special case of area formulation) Let ${\gamma_1, \gamma_2, \gamma_3: {\bf R}/{\bf Z} \rightarrow ({\bf R}/{\bf Z}) \times {\bf R}}$ be simple closed piecewise linear curves of winding number ${1}$ obeying the area identity

$\displaystyle \int_{\gamma_1} y\ dx + \int_{\gamma_2} y\ dx + \int_{\gamma_3} y\ dx = 0.$

Then there exist ${x \in {\bf R}/{\bf Z}}$ and ${y_1,y_2,y_3 \in {\bf R}}$ with ${y_1+y_2+y_3=0}$ such that ${(x,y_i) \in \gamma_i}$ for ${i=1,2,3}$.

This conjecture is easy to establish if one of the curves, say ${\gamma_3}$, is the graph ${\{ (t,f(t)): t \in {\bf R}/{\bf Z}\}}$ of some piecewise linear function ${f: {\bf R}/{\bf Z} \rightarrow {\bf R}}$, since in that case the curve ${\gamma_1}$ and the curve ${\tilde \gamma_2 := \{ (x, -y-f(x)): (x,y) \in \gamma_2 \}}$ enclose the same area in the sense that ${\int_{\gamma_1} y\ dx = \int_{\tilde \gamma_2} y\ dx}$, and hence must intersect by the Jordan curve theorem (otherwise they would enclose a non-zero amount of area between them), giving the claim. But when none of the ${\gamma_1,\gamma_2,\gamma_3}$ are graphs, the situation becomes combinatorially more complicated.

Using some elementary homological arguments (e.g. breaking up closed ${1}$-cycles into closed paths) and working with a generic horizontal slice of the curves, I was able to show that Conjecture 5 was equivalent to a one-dimensional problem that was largely combinatorial in nature, revolving around the sign patterns of various triple sums ${y_{1,a} + y_{2,b} + y_{3,c}}$ with ${y_{1,a}, y_{2,b}, y_{3,c}}$ drawn from various finite sets of reals.

Conjecture 6 (Combinatorial form) Let ${k_1,k_2,k_3}$ be odd natural numbers, and for each ${i=1,2,3}$, let ${y_{i,1},\dots,y_{i,k_i}}$ be distinct real numbers; we adopt the convention that ${y_{i,0}=y_{i,k_i+1}=-\infty}$. Assume the following axioms:

• (i) For any ${1 \leq p \leq k_1, 1 \leq q \leq k_2, 1 \leq r \leq k_3}$, the sums ${y_{1,p} + y_{2,q} + y_{3,r}}$ are non-zero.
• (ii) (Non-crossing) For any ${i=1,2,3}$ and ${0 \leq p < q \leq k_i}$ with the same parity, the pairs ${\{ y_{i,p}, y_{i,p+1}\}}$ and ${\{y_{i,q}, y_{i,q+1}\}}$ are non-crossing in the sense that

$\displaystyle \sum_{a \in \{p,p+1\}} \sum_{b \in \{q,q+1\}} (-1)^{a+b} \mathrm{sgn}( y_{i,a} - y_{i,b} ) = 0.$

• (iii) (Non-crossing sums) For any ${0 \leq p \leq k_1}$, ${0 \leq q \leq k_2}$, ${0 \leq r \leq k_3}$ of the same parity, one has

$\displaystyle \sum_{a \in \{p,p+1\}} \sum_{b \in \{q,q+1\}} \sum_{c \in \{r,r+1\}} (-1)^{a+b+c} \mathrm{sgn}( y_{1,a} + y_{2,b} + y_{3,c} ) = 0.$

Then one has

$\displaystyle \sum_{i=1}^3 \sum_{p=1}^{k_i} (-1)^{p-1} y_{i,p} < 0.$

Roughly speaking, Conjecture 6 and Conjecture 5 are connected by constructing curves ${\gamma_i}$ to connect ${(0, y_{i,p})}$ to ${(0,y_{i,p+1})}$ for ${0 \leq p \leq k_i}$ by various paths, which either lie to the right of the ${y}$ axis (when ${p}$ is odd) or to the left of the ${y}$ axis (when ${p}$ is even). Axiom (ii) asserts that the numbers ${-\infty, y_{i,1},\dots,y_{i,k_i}}$ are ordered according to the permutation of a meander (formed by gluing together two non-crossing perfect matchings).
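
As a sanity check on this formulation (my own experiment, not from the paper), one can test Conjecture 6 numerically on random instances; in the base case ${k_1=k_2=k_3=1}$, axiom (iii) already forces the concluding inequality, and a brute-force search finds no violations:

```python
import random

def sgn(x):
    return (x > 0) - (x < 0)

def axioms_and_conclusion(ys):
    """ys = [y1, y2, y3]: lists of distinct reals of odd lengths k_i.
    Returns (axioms hold, conclusion holds), using the convention
    y_{i,0} = y_{i,k_i+1} = -infinity from Conjecture 6."""
    ext = [[float('-inf')] + list(yi) + [float('-inf')] for yi in ys]
    ks = [len(yi) for yi in ys]

    # axiom (i): all triple sums are nonzero
    ax = all(s1 + s2 + s3 != 0 for s1 in ys[0] for s2 in ys[1] for s3 in ys[2])

    # axiom (ii): non-crossing pairs within each list
    for i in range(3):
        for p in range(ks[i] + 1):
            for q in range(p + 2, ks[i] + 1, 2):
                ax &= sum((-1)**(aa + bb) * sgn(ext[i][aa] - ext[i][bb])
                          for aa in (p, p + 1) for bb in (q, q + 1)) == 0

    # axiom (iii): non-crossing sums across the three lists
    for par in (0, 1):
        for p in range(par, ks[0] + 1, 2):
            for q in range(par, ks[1] + 1, 2):
                for r in range(par, ks[2] + 1, 2):
                    ax &= sum((-1)**(aa + bb + cc)
                              * sgn(ext[0][aa] + ext[1][bb] + ext[2][cc])
                              for aa in (p, p + 1) for bb in (q, q + 1)
                              for cc in (r, r + 1)) == 0

    # conclusion: alternating sum is negative
    concl = sum((-1)**(p - 1) * yi[p - 1]
                for yi in ys for p in range(1, len(yi) + 1)) < 0
    return ax, concl

random.seed(0)
violations = 0
for _ in range(2000):
    ys = [[random.uniform(-1, 1)] for _ in range(3)]  # k1 = k2 = k3 = 1
    ax, concl = axioms_and_conclusion(ys)
    if ax and not concl:
        violations += 1
print(violations)  # 0
```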

Using various ad hoc arguments involving “winding numbers”, it is possible to prove this conjecture in many cases (e.g. if one of the ${k_i}$ is at most ${3}$), to the extent that I have now become confident that this conjecture is true (and have now come full circle from trying to disprove Conjecture 1 to now believing that this conjecture holds also). But it seems that there is some non-trivial combinatorial argument to be made if one is to prove this conjecture; purely homological arguments seem to partially resolve the problem, but are not sufficient by themselves.

While I was not able to resolve the square peg problem, I think these results do provide a roadmap to attacking it, first by focusing on the combinatorial conjecture in Conjecture 6 (or its equivalent form in Conjecture 5), then after that is resolved moving on to Conjecture 4, and then finally to Conjecture 1.