
The following result is due independently to Furstenberg and to Sarkozy:

Theorem 1 (Furstenberg-Sarkozy theorem) Let ${\delta > 0}$, and suppose that ${N}$ is sufficiently large depending on ${\delta}$. Then every subset ${A}$ of ${[N] := \{1,\ldots,N\}}$ of density ${|A|/N}$ at least ${\delta}$ contains a pair ${n, n+r^2}$ for some natural numbers ${n, r}$ with ${r \neq 0}$.

This theorem is of course similar in spirit to results such as Roth’s theorem or Szemerédi’s theorem, in which the pattern ${n,n+r^2}$ is replaced by ${n,n+r,n+2r}$ or ${n,n+r,\ldots,n+(k-1)r}$ for some fixed ${k}$ respectively. There are by now many proofs of this theorem (see this recent paper of Lyall for a survey), but most proofs involve some form of Fourier analysis (or spectral theory). This may be compared with the standard proof of Roth’s theorem, which combines some Fourier analysis with what is now known as the density increment argument.
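The content of the theorem can be illustrated numerically. The following Python sketch (purely illustrative, and not part of the argument; the greedy construction is an arbitrary choice) builds a square-difference-free subset of ${[N]}$ greedily and observes that its density decays with ${N}$, consistent with Theorem 1.

```python
def has_square_difference(A):
    """Check whether the set A contains a pair n, n + r^2 with r >= 1."""
    s, m = set(A), max(A)
    return any(n + r * r in s for n in A for r in range(1, int((m - n) ** 0.5) + 1))

def greedy_sdf(N):
    """Greedily build a subset of [N] = {1, ..., N} with no square differences."""
    chosen, s = [], set()
    for n in range(1, N + 1):
        # admit n unless n - r^2 was already chosen for some r >= 1
        if all(n - r * r not in s for r in range(1, int(n ** 0.5) + 1)):
            chosen.append(n)
            s.add(n)
    return chosen

for N in (100, 1000, 10000):
    A = greedy_sdf(N)
    assert not has_square_difference(A)
    print(N, len(A) / N)  # the density of the square-difference-free set decays
```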

A few years ago, Ben Green, Tamar Ziegler, and I observed that it is possible to prove the Furstenberg-Sarkozy theorem by just using the Cauchy-Schwarz inequality (or van der Corput lemma) and the density increment argument, removing all invocations of Fourier analysis, and instead relying on Cauchy-Schwarz to linearise the quadratic shift ${r^2}$. As such, this theorem can be considered even more elementary than Roth’s theorem (and its proof can be viewed as a toy model for the proof of Roth’s theorem). We ended up not doing too much with this observation, and so decided to share it here.

The first step is to use the density increment argument that goes back to Roth. For any ${\delta > 0}$, let ${P(\delta)}$ denote the assertion that for ${N}$ sufficiently large, all sets ${A \subset [N]}$ of density at least ${\delta}$ contain a pair ${n,n+r^2}$ with ${r}$ non-zero. Note that ${P(\delta)}$ is vacuously true for ${\delta > 1}$. We will show that for any ${0 < \delta_0 \leq 1}$, one has the implication

$\displaystyle P(\delta_0 + c \delta_0^3) \implies P(\delta_0) \ \ \ \ \ (1)$

for some absolute constant ${c>0}$. This implies that ${P(\delta)}$ is true for any ${\delta>0}$ (as can be seen by considering the infimum of all ${\delta>0}$ for which ${P(\delta)}$ holds), which gives Theorem 1.
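To see quantitatively why the chain of implications closes off, note that the iteration ${\delta \mapsto \delta + c\delta^3}$ exceeds ${1}$ after roughly ${O(1/(c\delta_0^2))}$ steps. A short Python illustration (the value of ${c}$ here is hypothetical, chosen only for the demonstration):

```python
def steps_to_exceed_one(delta0, c=0.01):
    """Iterate the density increment delta -> delta + c * delta**3 from (1),
    counting the steps until the density exceeds 1, at which point P(delta)
    holds vacuously and the implications unwind back to P(delta0)."""
    delta, steps = delta0, 0
    while delta <= 1.0:
        delta += c * delta ** 3
        steps += 1
    return steps

for d0 in (0.5, 0.1, 0.02):
    print(d0, steps_to_exceed_one(d0))  # step count grows like 1/(c * d0**2)
```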

It remains to establish the implication (1). Suppose for sake of contradiction that we can find ${0 < \delta_0 \leq 1}$ for which ${P(\delta_0+c\delta^3_0)}$ holds (for some sufficiently small absolute constant ${c>0}$), but ${P(\delta_0)}$ fails. Thus, we can find arbitrarily large ${N}$, and subsets ${A}$ of ${[N]}$ of density at least ${\delta_0}$, such that ${A}$ contains no patterns of the form ${n,n+r^2}$ with ${r}$ non-zero. In particular, we have

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} 1_A(n) 1_A(n+(r+h)^2) = 0.$

(The exact ranges of ${r}$ and ${h}$ are not too important here, and could be replaced by various other small powers of ${N}$ if desired.)

Let ${\delta := |A|/N}$ be the density of ${A}$, so that ${\delta_0 \leq \delta \leq 1}$. Observe that

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} 1_A(n) \delta 1_{[N]}(n+(r+h)^2) = \delta^2 + O(N^{-1/3})$

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} \delta 1_{[N]}(n) \delta 1_{[N]}(n+(r+h)^2) = \delta^2 + O(N^{-1/3})$

and

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} \delta 1_{[N]}(n) 1_A(n+(r+h)^2) = \delta^2 + O( N^{-1/3} ).$

If we thus set ${f := 1_A - \delta 1_{[N]}}$, then

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} f(n) f(n+(r+h)^2) = -\delta^2 + O( N^{-1/3} ).$
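Indeed, expanding ${f = 1_A - \delta 1_{[N]}}$ bilinearly, the average of ${1_A(n) 1_A(n+(r+h)^2)}$ vanishes by hypothesis, the two cross terms each contribute ${-\delta^2 + O(N^{-1/3})}$, and the final term contributes ${+\delta^2 + O(N^{-1/3})}$, so that

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h \in [N^{1/100}]} f(n) f(n+(r+h)^2) = 0 - \delta^2 - \delta^2 + \delta^2 + O(N^{-1/3}).$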

In particular, for ${N}$ large enough,

$\displaystyle \mathop{\bf E}_{n \in [N]} |f(n)| \mathop{\bf E}_{r \in [N^{1/3}]} |\mathop{\bf E}_{h \in [N^{1/100}]} f(n+(r+h)^2)| \gg \delta^2.$

On the other hand, one easily sees that

$\displaystyle \mathop{\bf E}_{n \in [N]} |f(n)|^2 = O(\delta)$

and hence by the Cauchy-Schwarz inequality

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} |\mathop{\bf E}_{h \in [N^{1/100}]} f(n+(r+h)^2)|^2 \gg \delta^3$

which we can rearrange as

$\displaystyle |\mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h,h' \in [N^{1/100}]} \mathop{\bf E}_{n \in [N]} f(n+(r+h)^2) f(n+(r+h')^2)| \gg \delta^3.$

Shifting ${n}$ by ${(r+h)^2}$ we obtain (again for ${N}$ large enough)

$\displaystyle |\mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{h,h' \in [N^{1/100}]} \mathop{\bf E}_{n \in [N]} f(n) f(n+(h'-h)(2r+h'+h))| \gg \delta^3.$
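The gain here comes from the elementary identity

$\displaystyle (n+(r+h')^2) - (n+(r+h)^2) = (h'-h)(2r+h+h'),$

which shows how the Cauchy-Schwarz step has linearised the quadratic shift in the variable ${r}$, at the cost of introducing the auxiliary shifts ${h,h'}$.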

In particular, by the pigeonhole principle (and deleting the diagonal case ${h=h'}$, which we can do for ${N}$ large enough) we can find distinct ${h,h' \in [N^{1/100}]}$ such that

$\displaystyle |\mathop{\bf E}_{r \in [N^{1/3}]} \mathop{\bf E}_{n \in [N]} f(n) f(n+(h'-h)(2r+h'+h))| \gg \delta^3,$

so in particular

$\displaystyle \mathop{\bf E}_{n \in [N]} |\mathop{\bf E}_{r \in [N^{1/3}]} f(n+(h'-h)(2r+h'+h))| \gg \delta^3.$

If we set ${d := 2(h'-h)}$ and shift ${n}$ by ${(h'-h) (h'+h)}$, we can simplify this (again for ${N}$ large enough) as

$\displaystyle \mathop{\bf E}_{n \in [N]} |\mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr)| \gg \delta^3. \ \ \ \ \ (2)$

On the other hand, since

$\displaystyle \mathop{\bf E}_{n \in [N]} f(n) = 0$

we have

$\displaystyle \mathop{\bf E}_{n \in [N]} f(n+dr) = O( N^{-2/3+1/100})$

for any ${r \in [N^{1/3}]}$, and thus

$\displaystyle \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr) = O( N^{-2/3+1/100}).$

Averaging this with (2) we conclude that

$\displaystyle \mathop{\bf E}_{n \in [N]} \max( \mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr), 0 ) \gg \delta^3.$

In particular, by the pigeonhole principle we can find ${n \in [N]}$ such that

$\displaystyle \mathop{\bf E}_{r \in [N^{1/3}]} f(n+dr) \gg \delta^3,$

or equivalently ${A}$ has density at least ${\delta+c'\delta^3}$ on the arithmetic progression ${\{ n+dr: r \in [N^{1/3}]\}}$, which has length ${\lfloor N^{1/3}\rfloor }$ and spacing ${d}$, for some absolute constant ${c'>0}$. By partitioning this progression into subprogressions of spacing ${d^2}$ and length ${\lfloor N^{1/4}\rfloor}$ (plus an error set of size ${O(N^{1/4})}$), we see from the pigeonhole principle that we can find a progression ${\{ n' + d^2 r': r' \in [N^{1/4}]\}}$ of length ${\lfloor N^{1/4}\rfloor}$ and spacing ${d^2}$ on which ${A}$ has density at least ${\delta + c\delta^3}$ (and hence at least ${\delta_0+c\delta_0^3}$) for some absolute constant ${c>0}$. If we then apply the induction hypothesis to the set

$\displaystyle A' := \{ r' \in [N^{1/4}]: n' + d^2 r' \in A \}$

we conclude (for ${N}$ large enough) that ${A'}$ contains a pair ${m, m+s^2}$ for some natural numbers ${m,s}$ with ${s}$ non-zero. This implies that ${(n'+d^2 m), (n'+d^2 m) + (|d|s)^2}$ lie in ${A}$, a contradiction, establishing the implication (1).

A more careful analysis of the above argument reveals a more quantitative version of Theorem 1: for ${N \geq 100}$ (say), any subset of ${[N]}$ of density at least ${C/(\log\log N)^{1/2}}$ for some sufficiently large absolute constant ${C}$ contains a pair ${n,n+r^2}$ with ${r}$ non-zero. This is not the best bound known; a (difficult) result of Pintz, Steiger, and Szemerédi allows the density to be as low as ${C / (\log N)^{\frac{1}{4} \log\log\log\log N}}$. On the other hand, this already improves on the (simpler) Fourier-analytic argument of Green that works for densities at least ${C/(\log\log N)^{1/11}}$ (although the original argument of Sarkozy, which is a little more intricate, works up to ${C (\log\log N)^{2/3}/(\log N)^{1/3}}$). In the other direction, a construction of Ruzsa gives a set of density ${\frac{1}{65} N^{-0.267}}$ without any pairs ${n,n+r^2}$.

Remark 1 A similar argument also applies with ${n,n+r^2}$ replaced by ${n,n+r^k}$ for fixed ${k}$, because this sort of pattern is preserved by affine dilations ${r' \mapsto n'+d^k r'}$ into arithmetic progressions whose spacing ${d^k}$ is a ${k^{th}}$ power. By re-introducing Fourier analysis, one can also perform an argument of this type for ${n,n+d,n+2d}$ where ${d}$ is the sum of two squares; see the above-mentioned paper of Green for details. However there seems to be some technical difficulty in extending it to patterns of the form ${n,n+P(r)}$ for polynomials ${P}$ that consist of more than a single monomial (and with the normalisation ${P(0)=0}$, to avoid local obstructions), because one no longer has this preservation property.

The fundamental notions of calculus, namely differentiation and integration, are often viewed as being the quintessential concepts in mathematical analysis, as their standard definitions involve the concept of a limit. However, it is possible to capture most of the essence of these notions by purely algebraic means (almost completely avoiding the use of limits, Riemann sums, and similar devices), which turns out to be useful when trying to generalise these concepts to more abstract situations in which it becomes convenient to permit the underlying number systems involved to be something other than the real or complex numbers, even if this makes many standard analysis constructions unavailable. For instance, the algebraic notion of a derivation often serves as a substitute for the analytic notion of a derivative in such cases, by abstracting out the key algebraic properties of differentiation, namely linearity and the Leibniz rule (also known as the product rule).

Abstract algebraic analogues of integration are less well known, but can still be developed. To motivate such an abstraction, consider the integration functional ${I: {\mathcal S}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}$ from the space ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ of complex-valued Schwartz functions ${f: {\bf R} \rightarrow {\bf C}}$ to the complex numbers, defined by

$\displaystyle I(f) := \int_{\bf R} f(x)\ dx$

where the integration on the right is the usual Lebesgue integral (or improper Riemann integral) from analysis. This functional obeys two obvious algebraic properties. Firstly, it is linear over ${{\bf C}}$, thus

$\displaystyle I(cf) = c I(f) \ \ \ \ \ (1)$

and

$\displaystyle I(f+g) = I(f) + I(g) \ \ \ \ \ (2)$

for all ${f,g \in {\mathcal S}({\bf R} \rightarrow {\bf C})}$ and ${c \in {\bf C}}$. Secondly, it is translation invariant, thus

$\displaystyle I(\tau_h f) = I(f) \ \ \ \ \ (3)$

for all ${h \in {\bf R}}$, where ${\tau_h f(x) := f(x-h)}$ is the translation of ${f}$ by ${h}$. Motivated by the uniqueness theory of Haar measure, one might expect that these two axioms already uniquely determine ${I}$ after one sets a normalisation, for instance by requiring that

$\displaystyle I( x \mapsto e^{-\pi x^2} ) = 1. \ \ \ \ \ (4)$

This is not quite true as stated (one can modify the proof of the Hahn-Banach theorem, after first applying a Fourier transform, to create pathological translation-invariant linear functionals on ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ that are not scalar multiples of the usual integration functional), but if one adds a mild analytical axiom, such as continuity of ${I}$ (using the usual Schwartz topology on ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$), then the above axioms are enough to uniquely pin down the notion of integration. Indeed, if ${I: {\mathcal S}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}$ is a continuous linear functional that is translation invariant, then from the linearity and translation invariance axioms one has

$\displaystyle I( \frac{\tau_h f - f}{h} ) = 0$

for all ${f \in {\mathcal S}({\bf R} \rightarrow {\bf C})}$ and non-zero reals ${h}$. If ${f}$ is Schwartz, then as ${h \rightarrow 0}$, one can verify that the Newton quotients ${\frac{\tau_h f - f}{h}}$ converge in the Schwartz topology to the derivative ${f'}$ of ${f}$, so by the continuity axiom one has

$\displaystyle I(f') = 0.$

Next, note that any Schwartz function of integral zero has an antiderivative which is also Schwartz, and so ${I}$ annihilates all zero-integral Schwartz functions, and thus must be a scalar multiple of the usual integration functional. Using the normalisation (4), we see that ${I}$ must therefore be the usual integration functional, giving the claimed uniqueness.
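As a quick numerical sanity check on the normalisation (4) and the translation invariance (3), one can approximate the integral by the trapezoidal rule; the truncation window and step count in this Python sketch are arbitrary choices, not anything dictated by the theory.

```python
import math

def integrate(f, a=-15.0, b=15.0, n=100000):
    """Trapezoidal approximation to the integral of f over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

g = lambda x: math.exp(-math.pi * x * x)
print(integrate(g))                      # the normalisation (4): approximately 1
print(integrate(lambda x: g(x - 3.0)))   # translation invariance (3): same value
```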

Motivated by the above discussion, we can define the notion of an abstract integration functional ${I: X \rightarrow R}$ taking values in some vector space ${R}$, and applied to inputs ${f}$ in some other vector space ${X}$ that enjoys a linear action ${h \mapsto \tau_h}$ (the “translation action”) of some group ${V}$, as being a functional which is both linear and translation invariant, thus one has the axioms (1), (2), (3) for all ${f,g \in X}$, scalars ${c}$, and ${h \in V}$. The previous discussion then considered the special case when ${R = {\bf C}}$, ${X = {\mathcal S}({\bf R} \rightarrow {\bf C})}$, ${V = {\bf R}}$, and ${\tau}$ was the usual translation action.

Once we have performed this abstraction, we can now present analogues of classical integration which bear very little analytic resemblance to the classical concept, but which still have much of the algebraic structure of integration. Consider for instance the situation in which we keep the complex range ${R = {\bf C}}$, the translation group ${V = {\bf R}}$, and the usual translation action ${h \mapsto \tau_h}$, but we replace the space ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ of Schwartz functions by the space ${Poly_{\leq d}({\bf R} \rightarrow {\bf C})}$ of polynomials ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$ of degree at most ${d}$ with complex coefficients, where ${d}$ is a fixed natural number; note that this space is translation invariant, so it makes sense to talk about an abstract integration functional ${I: Poly_{\leq d}({\bf R} \rightarrow {\bf C}) \rightarrow {\bf C}}$. Of course, one cannot apply traditional integration concepts to non-zero polynomials, as they are not absolutely integrable. But one can repeat the previous arguments to show that any abstract integration functional must annihilate derivatives of polynomials of degree at most ${d}$:

$\displaystyle I(f') = 0 \hbox{ for all } f \in Poly_{\leq d}({\bf R} \rightarrow {\bf C}). \ \ \ \ \ (5)$

Clearly, every polynomial of degree at most ${d-1}$ is thus annihilated by ${I}$, which makes ${I}$ a scalar multiple of the functional that extracts the top coefficient ${a_d}$ of a polynomial, thus if one sets a normalisation

$\displaystyle I( x \mapsto x^d ) = c$

for some constant ${c}$, then one has

$\displaystyle I( x \mapsto a_0 + a_1 x + \ldots + a_d x^d ) = c a_d \ \ \ \ \ (6)$

for any polynomial ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$. So we see that up to a normalising constant, the operation of extracting the top order coefficient of a polynomial of fixed degree serves as the analogue of integration. In particular, despite the fact that integration is supposed to be the “opposite” of differentiation (as indicated for instance by (5)), we see in this case that integration is basically (${d}$-fold) differentiation; indeed, compare (6) with the identity

$\displaystyle (\frac{d}{dx})^d ( a_0 + a_1 x + \ldots + a_d x^d ) = d! a_d.$

In particular, we see that, in contrast to the usual Lebesgue integral, the integration functional (6) can be localised to an arbitrary location: one only needs to know the germ of the polynomial ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$ at a single point ${x_0}$ in order to determine the value of the functional (6). This localisation property may initially seem at odds with the translation invariance, but the two can be reconciled thanks to the extremely rigid nature of the class ${Poly_{\leq d}({\bf R} \rightarrow {\bf C})}$. In contrast, the Schwartz class ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ admits bump functions, and so can generate local phenomena that can only be detected in small regions of the underlying spatial domain; this forces any translation-invariant integration functional on such function classes to measure the function at every single point in space.

The reversal of the relationship between integration and differentiation is also reflected in the fact that the abstract integration operation on polynomials interacts with the scaling operation ${\delta_\lambda f(x) := f(x/\lambda)}$ in essentially the opposite way from the classical integration operation. Indeed, for classical integration on ${{\bf R}^d}$, one has

$\displaystyle \int_{{\bf R}^d} f(x/\lambda)\ dx = \lambda^d \int f(x)\ dx$

for Schwartz functions ${f \in {\mathcal S}({\bf R}^d \rightarrow {\bf C})}$, and so in this case the integration functional ${I(f) := \int_{{\bf R}^d} f(x)\ dx}$ obeys the scaling law

$\displaystyle I( \delta_\lambda f ) = \lambda^d I(f).$

In contrast, the abstract integration operation defined in (6) obeys the opposite scaling law

$\displaystyle I( \delta_\lambda f ) = \lambda^{-d} I(f). \ \ \ \ \ (7)$
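These algebraic laws are easy to verify directly on coefficient vectors. Here is a short Python sketch (my own illustration, with hypothetical helper names) of the functional (6) in the normalisation ${c=1}$, checking the translation invariance and the scaling law (7):

```python
from math import comb

def top_coeff(coeffs):
    """The abstract integration functional (6) on Poly_{<=d}, normalised so c = 1:
    extract the top-degree coefficient a_d (coeffs[k] is the coefficient of x^k)."""
    return coeffs[-1]

def translate(coeffs, h):
    """Coefficients of x -> p(x - h), expanded via the binomial theorem."""
    d = len(coeffs) - 1
    out = [0.0] * (d + 1)
    for k, a in enumerate(coeffs):
        for j in range(k + 1):
            out[j] += a * comb(k, j) * (-h) ** (k - j)
    return out

def dilate(coeffs, lam):
    """Coefficients of x -> p(x / lam)."""
    return [a / lam ** k for k, a in enumerate(coeffs)]

p = [2.0, -1.0, 5.0, 3.0]  # p(x) = 2 - x + 5x^2 + 3x^3, so d = 3 and a_d = 3
assert top_coeff(translate(p, 7.0)) == top_coeff(p)           # translation invariance
assert top_coeff(dilate(p, 2.0)) == 2.0 ** -3 * top_coeff(p)  # the scaling law (7)
```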

Remark 1 One way to interpret what is going on is to view the integration operation (6) as a renormalised version of integration. A polynomial ${x \mapsto a_0 + a_1 x + \ldots + a_d x^d}$ is, in general, not absolutely integrable, and the partial integrals

$\displaystyle \int_0^N a_0 + a_1 x + \ldots + a_d x^d\ dx$

diverge as ${N \rightarrow \infty}$. But if one renormalises these integrals by the factor ${\frac{1}{N^{d+1}}}$, then one recovers convergence,

$\displaystyle \lim_{N \rightarrow \infty} \frac{1}{N^{d+1}} \int_0^N a_0 + a_1 x + \ldots + a_d x^d\ dx = \frac{1}{d+1} a_d$

thus giving an interpretation of (6) as a renormalised classical integral, with the renormalisation being responsible for the unusual scaling relationship in (7). However, this interpretation is a little artificial, and it seems that it is best to view functionals such as (6) from an abstract algebraic perspective, rather than to try to force an analytic interpretation on them.

We now return to the classical integration functional

$\displaystyle I(f) := \int_{\bf R} f(x)\ dx. \ \ \ \ \ (8)$

As noted earlier, this integration functional has a translation invariance associated to translations along the real line ${{\bf R}}$, as well as a dilation invariance by real dilation parameters ${\lambda>0}$. However, if we refine the class ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ of functions somewhat, we can obtain a stronger family of invariances, in which we allow complex translations and dilations. More precisely, let ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$ denote the space of all functions ${f: {\bf C} \rightarrow {\bf C}}$ which are entire (or equivalently, are given by a Taylor series with an infinite radius of convergence around the origin) and also admit rapid decay in a sectorial neighbourhood of the real line, or more precisely there exists an ${\epsilon>0}$ such that for every ${A > 0}$ there exists ${C_A > 0}$ such that one has the bound

$\displaystyle |f(z)| \leq C_A (1+|z|)^{-A}$

whenever ${|\hbox{Im}(z)| \leq A + \epsilon |\hbox{Re}(z)|}$. For want of a better name, we shall call elements of this space Schwartz entire functions. This is clearly a complex vector space. A typical example of a Schwartz entire function is the complex gaussian

$\displaystyle f(z) := e^{-\pi (az^2 + 2bz + c)}$

where ${a,b,c}$ are complex numbers with ${\hbox{Re}(a) > 0}$. From the Cauchy integral formula (and its derivatives) we see that if ${f}$ lies in ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$, then the restriction of ${f}$ to the real line lies in ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$; conversely, from analytic continuation we see that every function in ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$ has at most one extension in ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$. Thus one can identify ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$ with a subspace of ${{\mathcal S}({\bf R} \rightarrow {\bf C})}$, and in particular the integration functional (8) is inherited by ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$, and by abuse of notation we denote the resulting functional ${I: \mathcal{SE}({\bf C} \rightarrow {\bf C}) \rightarrow {\bf C}}$ as ${I}$ also. Note, in analogy with the situation with polynomials, that this abstract integration functional is somewhat localised; one only needs to evaluate the function ${f}$ on the real line, rather than the entire complex plane, in order to compute ${I(f)}$. This is consistent with the rigid nature of Schwartz entire functions, as one can uniquely recover the entire function from its values on the real line by analytic continuation.

Of course, the functional ${I: \mathcal{SE}({\bf C} \rightarrow {\bf C}) \rightarrow {\bf C}}$ remains translation invariant with respect to real translation:

$\displaystyle I(\tau_h f) = I(f) \hbox{ for all } h \in {\bf R}.$

However, thanks to contour shifting, we now also have translation invariance with respect to complex translation:

$\displaystyle I(\tau_h f) = I(f) \hbox{ for all } h \in {\bf C},$

where of course we continue to define the translation operator ${\tau_h}$ for complex ${h}$ by the usual formula ${\tau_h f(x) := f(x-h)}$. In a similar vein, we also have the scaling law

$\displaystyle I(\delta_\lambda f) = \lambda I(f)$

for any ${f \in \mathcal{SE}({\bf C} \rightarrow {\bf C})}$, if ${\lambda}$ is a complex number sufficiently close to ${1}$ (where “sufficiently close” depends on ${f}$, and more precisely depends on the sectoral aperture parameter ${\epsilon}$ associated to ${f}$); again, one can verify that ${\delta_\lambda f}$ lies in ${\mathcal{SE}({\bf C} \rightarrow {\bf C})}$ for ${\lambda}$ sufficiently close to ${1}$. These invariances (which relocalise the integration functional ${I}$ onto other contours than the real line ${{\bf R}}$) are very useful for computing integrals, and in particular for computing gaussian integrals. For instance, the complex translation invariance tells us (after shifting by ${b/a}$) that

$\displaystyle I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = e^{-\pi (c-b^2/a)} I( z \mapsto e^{-\pi a z^2} )$

when ${a,b,c \in {\bf C}}$ with ${\hbox{Re}(a) > 0}$, and then an application of the complex scaling law (and a continuity argument, observing that there is a compact path connecting ${a}$ to ${1}$ in the right half plane) gives

$\displaystyle I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = a^{-1/2} e^{-\pi (c-b^2/a)} I( z \mapsto e^{-\pi z^2} )$

using the branch of ${a^{-1/2}}$ on the right half-plane for which ${1^{-1/2} = 1}$. Using the normalisation (4) we thus have

$\displaystyle I( z \mapsto e^{-\pi (az^2 + 2bz + c) } ) = a^{-1/2} e^{-\pi (c-b^2/a)}$

giving the usual gaussian integral formula

$\displaystyle \int_{\bf R} e^{-\pi (ax^2 + 2bx + c)}\ dx = a^{-1/2} e^{-\pi (c-b^2/a)}. \ \ \ \ \ (9)$

This is a basic illustration of the power that a large symmetry group (in this case, the complex homothety group) can bring to bear on the task of computing integrals.
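Formula (9) can be spot-checked numerically for particular complex parameters. In the Python sketch below (the truncation window and step count are arbitrary choices of mine), the principal branch computed by `a ** -0.5` agrees on the right half-plane with the branch normalised by ${1^{-1/2} = 1}$:

```python
import cmath, math

def gaussian_lhs(a, b, c, L=12.0, n=60000):
    """Trapezoidal approximation to the left side of (9), truncated to [-L, L]."""
    h = 2 * L / n
    total = 0j
    for i in range(n + 1):
        x = -L + i * h
        w = h if 0 < i < n else h / 2
        total += w * cmath.exp(-math.pi * (a * x * x + 2 * b * x + c))
    return total

def gaussian_rhs(a, b, c):
    """Right side of (9): a^{-1/2} e^{-pi (c - b^2/a)} on the principal branch."""
    return a ** -0.5 * cmath.exp(-math.pi * (c - b * b / a))

a, b, c = 1.5 + 0.5j, 0.3 - 0.2j, 0.1j   # any parameters with Re(a) > 0 will do
print(gaussian_lhs(a, b, c), gaussian_rhs(a, b, c))  # the two values agree closely
```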

One can extend this sort of analysis to higher dimensions. For any natural number ${n \geq 1}$, let ${\mathcal{SE}({\bf C}^n \rightarrow {\bf C})}$ denote the space of all functions ${f: {\bf C}^n \rightarrow {\bf C}}$ which are jointly entire in the sense that ${f(z_1,\ldots,z_n)}$ can be expressed as a Taylor series in ${z_1,\ldots,z_n}$ which is absolutely convergent for all choices of ${z_1,\ldots,z_n}$, and such that there exists an ${\epsilon > 0}$ such that for any ${A>0}$ there is ${C_A>0}$ for which one has the bound

$\displaystyle |f(z)| \leq C_A (1+|z|)^{-A}$

whenever ${|\hbox{Im}(z_j)| \leq A + \epsilon |\hbox{Re}(z_j)|}$ for all ${1 \leq j \leq n}$, where ${z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}}$ and ${|z| := (|z_1|^2+\ldots+|z_n|^2)^{1/2}}$. Again, we call such functions Schwartz entire functions; a typical example is the function

$\displaystyle f(z) := e^{-\pi (z^T A z + 2b^T z + c)}$

where ${A}$ is an ${n \times n}$ complex symmetric matrix with positive definite real part, ${b}$ is a vector in ${{\bf C}^n}$, and ${c}$ is a complex number. We can then define an abstract integration functional ${I: \mathcal{SE}({\bf C}^n \rightarrow {\bf C}) \rightarrow {\bf C}}$ by integration on the real slice ${{\bf R}^n}$:

$\displaystyle I(f) := \int_{{\bf R}^n} f(x)\ dx$

where ${dx}$ is the usual Lebesgue measure on ${{\bf R}^n}$. By contour shifting in each of the ${n}$ variables ${z_1,\ldots,z_n}$ separately, we see that ${I}$ is invariant with respect to complex translations of each of the ${z_j}$ variables, and is thus invariant under translating the joint variable ${z}$ by ${{\bf C}^n}$. One can also verify the scaling law

$\displaystyle I(\delta_A f) = \hbox{det}(A) I(f)$

for ${n \times n}$ complex matrices ${A}$ sufficiently close to the identity, where ${\delta_A f(z) := f(A^{-1} z)}$. This can be seen for shear transformations ${A}$ by Fubini’s theorem and the aforementioned translation invariance, while for diagonal transformations near the identity this can be seen from ${n}$ applications of the one-dimensional scaling law, and the general case then follows by composition. Among other things, these laws then easily lead to the higher-dimensional generalisation

$\displaystyle \int_{{\bf R}^n} e^{-\pi (x^T A x + 2 b^T x + c)}\ dx = \hbox{det}(A)^{-1/2} e^{-\pi (c-b^T A^{-1} b)} \ \ \ \ \ (10)$

whenever ${A}$ is a complex symmetric matrix with positive definite real part, ${b}$ is a vector in ${{\bf C}^n}$, and ${c}$ is a complex number, basically by repeating the one-dimensional argument sketched earlier. Here, we choose the branch of ${\hbox{det}(A)^{-1/2}}$ for all matrices ${A}$ in the indicated class for which ${\hbox{det}(1)^{-1/2} = 1}$.
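As in the one-dimensional case, the key algebraic step behind (10) is completing the square: since ${A}$ is symmetric, one has

$\displaystyle x^T A x + 2 b^T x + c = (x + A^{-1} b)^T A (x + A^{-1} b) + (c - b^T A^{-1} b),$

so that the complex translation invariance (shifting ${x}$ by ${-A^{-1} b}$) reduces matters to the case ${b=0}$, which is then handled by the scaling law and the normalisation (4).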

Now we turn to an integration functional suitable for computing complex gaussian integrals such as

$\displaystyle \int_{{\bf C}^n} e^{-2\pi (z^\dagger A z + b^\dagger z + z^\dagger \tilde b + c)}\ dz d\overline{z}, \ \ \ \ \ (11)$

where ${z}$ is now a complex variable

$\displaystyle z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix},$

${z^\dagger}$ is the adjoint

$\displaystyle z^\dagger := (\overline{z_1},\ldots, \overline{z_n}),$

${A}$ is a complex ${n \times n}$ matrix with positive definite Hermitian part, ${b, \tilde b}$ are column vectors in ${{\bf C}^n}$, ${c}$ is a complex number, and ${dz d\overline{z} = \prod_{j=1}^n 2 d\hbox{Re}(z_j) d\hbox{Im}(z_j)}$ is ${2^n}$ times Lebesgue measure on ${{\bf C}^n}$. (The factors of two here turn out to be a natural normalisation, but they can be ignored on a first reading.) As we shall see later, such integrals are relevant when performing computations on the Gaussian Unitary Ensemble (GUE) in random matrix theory. Note that the integrand here is not complex analytic due to the presence of the complex conjugates. However, this can be dealt with by the trick of replacing the complex conjugate ${\overline{z}}$ by a variable ${z^*}$ which is formally conjugate to ${z}$, but which is allowed to vary independently of ${z}$. More precisely, let ${\mathcal{SA}({\bf C}^n \times {\bf C}^n \rightarrow {\bf C})}$ be the space of all functions ${f: (z,z^*) \mapsto f(z,z^*)}$ of two independent ${n}$-tuples

$\displaystyle z = \begin{pmatrix} z_1 \\ \vdots \\ z_n \end{pmatrix}, z^* = \begin{pmatrix} z_1^* \\ \vdots \\ z_n^* \end{pmatrix}$

of complex variables, which are jointly entire in all ${2n}$ variables (in the sense defined previously, i.e. there is a joint Taylor series that is absolutely convergent for all independent choices of ${z, z^* \in {\bf C}^n}$), and such that there is an ${\epsilon>0}$ such that for every ${A>0}$ there is ${C_A>0}$ such that one has the bound

$\displaystyle |f(z,z^*)| \leq C_A (1 + |z|)^{-A}$

whenever ${|z^* - \overline{z}| \leq A + \epsilon |z|}$. We will call such functions Schwartz analytic. Note that the integrand in (11) is Schwartz analytic when ${A}$ has positive definite Hermitian part, if we reinterpret ${z^\dagger}$ as the transpose of ${z^*}$ rather than as the adjoint of ${z}$ in order to make the integrand entire in ${z}$ and ${z^*}$. We can then define an abstract integration functional ${I: \mathcal{SA}({\bf C}^n \times {\bf C}^n \rightarrow {\bf C}) \rightarrow {\bf C}}$ by the formula

$\displaystyle I(f) := \int_{{\bf C}^n} f(z,\overline{z})\ dz d\overline{z}, \ \ \ \ \ (12)$

thus ${I}$ can be localised to the slice ${\{ (z,\overline{z}): z \in {\bf C}^n\}}$ of ${{\bf C}^n \times {\bf C}^n}$ (though, as with previous functionals, one can use contour shifting to relocalise ${I}$ to other slices also.) One can also write this integral as

$\displaystyle I(f) = 2^n \int_{{\bf R}^n \times {\bf R}^n} f(x+iy, x-iy)\ dx dy$

and note that the integrand here is a Schwartz entire function on ${{\bf C}^n \times {\bf C}^n}$, thus linking the Schwartz analytic integral with the Schwartz entire integral. Using this connection, one can verify that this functional ${I}$ is invariant with respect to translating ${z}$ and ${z^*}$ by independent shifts in ${{\bf C}^n}$ (thus giving a ${{\bf C}^n \times {\bf C}^n}$ translation symmetry), and one also has the independent dilation symmetry

$\displaystyle I(\delta_{A,B} f) = \hbox{det}(A) \hbox{det}(B) I(f)$

for ${n \times n}$ complex matrices ${A,B}$ that are sufficiently close to the identity, where ${\delta_{A,B} f(z,z^*) := f(A^{-1} z, B^{-1} z^*)}$. Arguing as before, we can then compute (11) as

$\displaystyle \int_{{\bf C}^n} e^{-2\pi (z^\dagger A z + b^\dagger z + z^\dagger \tilde b + c)}\ dz d\overline{z} = \hbox{det}(A)^{-1} e^{-2\pi (c - b^\dagger A^{-1} \tilde b)}. \ \ \ \ \ (13)$

In particular, this gives an integral representation for the determinant-reciprocal ${\hbox{det}(A)^{-1}}$ of a complex ${n \times n}$ matrix with positive definite Hermitian part, in terms of gaussian expressions in which ${A}$ only appears linearly in the exponential:

$\displaystyle \hbox{det}(A)^{-1} = \int_{{\bf C}^n} e^{-2\pi z^\dagger A z}\ dz d\overline{z}.$
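For ${n=1}$, where ${A}$ is a scalar ${a}$ with positive real part, this identity can be checked numerically: the integrand ${e^{-2\pi a |z|^2}}$ is radial, so the plane integral reduces to a one-dimensional integral in ${r = |z|}$. A Python sketch (the truncation parameters are arbitrary choices of mine):

```python
import cmath, math

def inv_det_1x1(a, R=8.0, n=100000):
    """For n = 1 the formula reads 1/a = integral over C of e^{-2 pi a |z|^2} dz dzbar;
    in polar coordinates this becomes 2 * integral over r > 0 of e^{-2 pi a r^2} 2 pi r dr,
    the leading factor 2 coming from the dz dzbar normalisation."""
    h = R / n
    total = 0j
    for i in range(n + 1):
        r = i * h
        w = h if 0 < i < n else h / 2
        total += w * 4 * math.pi * r * cmath.exp(-2 * math.pi * a * r * r)
    return total

a = 0.7 + 1.1j   # any a with Re(a) > 0
print(inv_det_1x1(a), 1 / a)   # the two values agree closely
```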

This formula is then convenient for computing statistics such as

$\displaystyle \mathop{\bf E} \hbox{det}(W_n-E-i\eta)^{-1}$

for random matrices ${W_n}$ drawn from the Gaussian Unitary Ensemble (GUE), and some choice of spectral parameter ${E+i\eta}$ with ${\eta>0}$; we review this computation later in this post. By the trick of matrix differentiation of the determinant (as reviewed in this recent blog post), one can also use this method to compute matrix-valued statistics such as

$\displaystyle \mathop{\bf E} \hbox{det}(W_n-E-i\eta)^{-1} (W_n-E-i\eta)^{-1}.$

However, if one restricts attention to classical integrals over real or complex (and in particular, commuting or bosonic) variables, it does not seem possible to easily eliminate these reciprocal determinant factors in such calculations, which is unfortunate because many statistics of interest in random matrix theory, such as the expected Stieltjes transform

$\displaystyle \mathop{\bf E} \frac{1}{n} \hbox{tr} (W_n-E-i\eta)^{-1},$

which is the Stieltjes transform of the density of states, involve such factors. However, it turns out (as I learned recently from Peter Sarnak and Tom Spencer) that it is possible to cancel out these reciprocal determinant factors by balancing the bosonic gaussian integrals with an equal number of fermionic gaussian integrals, in which one integrates over a family of anticommuting variables. These fermionic integrals are closer in spirit to the polynomial integral (6) than to Lebesgue-type integrals, and in particular obey a scaling law which is inverse to the Lebesgue scaling (in particular, a linear change of fermionic variables ${\zeta \mapsto A \zeta}$ ends up transforming a fermionic integral by ${\hbox{det}(A)}$ rather than ${\hbox{det}(A)^{-1}}$), which conveniently cancels out the reciprocal determinants in the previous calculations. Furthermore, one can combine the bosonic and fermionic integrals into a unified integration concept, known as the Berezin integral (or Grassmann integral), in which one integrates functions of supervectors (vectors with both bosonic and fermionic components); this concept is of particular importance in the theory of supersymmetry in physics. (The prefix “super” in physics means, roughly speaking, that the object or concept that the prefix is attached to contains both bosonic and fermionic aspects.) When one applies this unified integration concept to gaussians, this can lead to quite compact and efficient calculations (provided that one is willing to work with “super”-analogues of various concepts in classical linear algebra, such as the supertrace or superdeterminant).
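As a toy illustration of this inverse scaling behaviour, one can model a single anticommuting variable ${\xi}$ with ${\xi^2 = 0}$ in Python (the class and conventions below are purely my own illustration, not any standard library). The Berezin integral then extracts the coefficient of ${\xi}$, in the spirit of the top-coefficient functional (6), and a substitution ${\xi \mapsto \lambda \xi}$ multiplies the integral by ${\lambda}$ rather than ${\lambda^{-1}}$:

```python
class Grassmann1:
    """Elements a + b*xi of the algebra generated by one fermionic variable xi,
    subject to xi*xi = 0 (a toy model with one anticommuting generator)."""
    def __init__(self, a=0.0, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        return Grassmann1(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*xi)(c + d*xi) = ac + (ad + bc)*xi, since xi*xi = 0
        return Grassmann1(self.a * other.a, self.a * other.b + self.b * other.a)

def berezin(f):
    """Berezin integral of f = a + b*xi: extract the coefficient of xi."""
    return f.b

xi = Grassmann1(0.0, 1.0)
f = Grassmann1(3.0) + Grassmann1(5.0) * xi                         # f = 3 + 5*xi
scaled = Grassmann1(3.0) + Grassmann1(5.0) * Grassmann1(0.0, 2.0)  # f with xi -> 2*xi
print(berezin(f), berezin(scaled))  # the substitution xi -> 2*xi doubles the integral
```

With several anticommuting variables one must also track signs from anticommutation, and the scaling factor for a linear change of variables becomes the determinant, as described above.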

Abstract integrals of the flavour of (6) arose in quantum field theory, when physicists sought to formally compute integrals of the form

$\displaystyle \int F( x_1, \ldots, x_n, \xi_1, \ldots, \xi_m )\ dx_1 \ldots dx_n d\xi_1 \ldots d\xi_m \ \ \ \ \ (14)$

where ${x_1,\ldots,x_n}$ are familiar commuting (or bosonic) variables (which, in particular, can often be localised to be scalar variables taking values in ${{\bf R}}$ or ${{\bf C}}$), while ${\xi_1,\ldots,\xi_m}$ are more exotic anticommuting (or fermionic) variables, taking values in some vector space of fermions. (As we shall see shortly, one can formalise these concepts by working in a supercommutative algebra.) The integrand ${F(x_1,\ldots,x_n,\xi_1,\ldots,\xi_m)}$ is a formally analytic function of ${x_1,\ldots,x_n,\xi_1,\ldots,\xi_m}$, in that it can be expanded as a (formal, noncommutative) power series in the variables ${x_1,\ldots,x_n,\xi_1,\ldots,\xi_m}$. For functions ${F(x_1,\ldots,x_n)}$ that depend only on bosonic variables, it is certainly possible for such analytic functions to be in the Schwartz class and thus fall under the scope of the classical integral, as discussed previously. However, functions ${F(\xi_1,\ldots,\xi_m)}$ that depend on fermionic variables ${\xi_1,\ldots,\xi_m}$ behave rather differently. Indeed, a fermionic variable ${\xi}$ must anticommute with itself, so that ${\xi^2 = 0}$. In particular, any power series in ${\xi}$ terminates after the linear term in ${\xi}$, so that a function ${F(\xi)}$ can only be analytic in ${\xi}$ if it is a polynomial of degree at most ${1}$ in ${\xi}$; more generally, an analytic function ${F(\xi_1,\ldots,\xi_m)}$ of ${m}$ fermionic variables ${\xi_1,\ldots,\xi_m}$ must be a polynomial of degree at most ${m}$, and an analytic function ${F(x_1,\ldots,x_n,\xi_1,\ldots,\xi_m)}$ of ${n}$ bosonic and ${m}$ fermionic variables can be Schwartz in the bosonic variables but will be polynomial in the fermionic variables.
As such, to interpret the integral (14), one can use classical (Lebesgue) integration (or the variants discussed above for integrating Schwartz entire or Schwartz analytic functions) for the bosonic variables, but must use abstract integrals such as (6) for the fermionic variables, leading to the concept of Berezin integration mentioned earlier.

In this post I would like to set out some of the basic algebraic formalism of Berezin integration, particularly with regards to integration of gaussian-type expressions, and then show how this formalism can be used to perform computations involving GUE (for instance, one can compute the density of states of GUE by this machinery without recourse to the theory of orthogonal polynomials). The use of supersymmetric gaussian integrals to analyse ensembles such as GUE appears in the work of Efetov (and was also proposed in the slightly earlier works of Parisi-Sourlas and McKane, with a related approach also appearing in the work of Wegner); the material here is adapted from this survey of Mirlin, as well as the later papers of Disertori-Pinson-Spencer and of Disertori.

Consider the free Schrödinger equation in ${d}$ spatial dimensions, which I will normalise as

$\displaystyle i u_t + \frac{1}{2} \Delta_{{\bf R}^d} u = 0 \ \ \ \ \ (1)$

where ${u: {\bf R} \times {\bf R}^d \rightarrow {\bf C}}$ is the unknown field and ${\Delta_{{\bf R}^d} = \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}}$ is the spatial Laplacian. To avoid irrelevant technical issues I will restrict attention to smooth (classical) solutions to this equation, and will work locally in spacetime avoiding issues of decay at infinity (or at other singularities); I will also avoid issues involving branch cuts of functions such as ${t^{d/2}}$ (if one wishes, one can restrict ${d}$ to be even in order to safely ignore all branch cut issues). The space of solutions to (1) enjoys a number of symmetries. A particularly non-obvious symmetry is the pseudoconformal symmetry: if ${u}$ solves (1), then the pseudoconformal solution ${pc(u): {\bf R} \times {\bf R}^d \rightarrow {\bf C}}$ defined by

$\displaystyle pc(u)(t,x) := \frac{1}{(it)^{d/2}} \overline{u(\frac{1}{t}, \frac{x}{t})} e^{i|x|^2/2t} \ \ \ \ \ (2)$

for ${t \neq 0}$ can be seen after some computation to also solve (1). (If ${u}$ has suitable decay at spatial infinity and one chooses a suitable branch cut for ${(it)^{d/2}}$, one can extend ${pc(u)}$ continuously to the ${t=0}$ spatial slice, whereupon it becomes essentially the spatial Fourier transform of ${u(0,\cdot)}$, but we will not need this fact for the current discussion.)
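For readers who want to see the computation behind (2), here is a quick symbolic check (assuming sympy), taking ${d=2}$ so that ${(it)^{d/2} = it}$ and branch cut issues disappear: we apply the pseudoconformal transform to a plane wave solution and verify that the result again solves (1).

```python
import sympy as sp

t, x1, x2, k1, k2 = sp.symbols('t x1 x2 k1 k2', real=True)
T, X1, X2 = sp.symbols('T X1 X2', real=True)   # placeholders for substitution

def schrod(w):   # the Schrodinger operator of (1), with d = 2
    return sp.I*sp.diff(w, t) + (sp.diff(w, x1, 2) + sp.diff(w, x2, 2))/2

# a plane-wave solution of (1)
u = sp.exp(sp.I*(k1*x1 + k2*x2 - (k1**2 + k2**2)*t/2))
assert sp.simplify(schrod(u)) == 0

# the pseudoconformal transform (2), with d = 2 so (it)^{d/2} = it
ubar = sp.conjugate(u).subs({t: T, x1: X1, x2: X2})
pcu = ubar.subs({T: 1/t, X1: x1/t, X2: x2/t}) \
      * sp.exp(sp.I*(x1**2 + x2**2)/(2*t)) / (sp.I*t)
assert sp.simplify(schrod(pcu) / pcu) == 0
```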

An analogous symmetry exists for the free wave equation in ${d+1}$ spatial dimensions, which I will write as

$\displaystyle u_{tt} - \Delta_{{\bf R}^{d+1}} u = 0 \ \ \ \ \ (3)$

where ${u: {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ is the unknown field. In analogy to pseudoconformal symmetry, we have conformal symmetry: if ${u: {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ solves (3), then the function ${conf(u): {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$, defined in the interior ${\{ (t,x): |x| < |t| \}}$ of the light cone by the formula

$\displaystyle conf(u)(t,x) := (t^2-|x|^2)^{-d/2} u( \frac{t}{t^2-|x|^2}, \frac{x}{t^2-|x|^2} ), \ \ \ \ \ (4)$

also solves (3).
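This too can be verified symbolically in a small case. The sketch below (assuming sympy) takes ${d = 2}$, so that the conformal weight ${(t^2-|x|^2)^{-d/2}}$ is just ${(t^2-|x|^2)^{-1}}$, and checks (4) on a few simple polynomial solutions of (3):

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
q = t**2 - (x1**2 + x2**2 + x3**2)   # t^2 - |x|^2, with d = 2 (three spatial dimensions)

def box(w):   # the wave operator of (3)
    return sp.diff(w, t, 2) - sp.diff(w, x1, 2) - sp.diff(w, x2, 2) - sp.diff(w, x3, 2)

def conf(u):  # the conformal transform (4); weight (t^2-|x|^2)^{-d/2} = 1/q for d = 2
    return (1/q) * u(t/q, x1/q, x2/q, x3/q)

# a few simple solutions of (3): a constant and the coordinate functions
for u in [lambda s, y1, y2, y3: sp.Integer(1),
          lambda s, y1, y2, y3: s,
          lambda s, y1, y2, y3: y1]:
    assert sp.simplify(box(u(t, x1, x2, x3))) == 0   # u solves (3)...
    assert sp.simplify(box(conf(u))) == 0            # ...and so does conf(u)
```

(These identities are formal, valid away from the boundary ${|x| = |t|}$ of the light cone where ${q}$ vanishes.)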

There are also some direct links between the Schrödinger equation in ${d}$ dimensions and the wave equation in ${d+1}$ dimensions. This can be easily seen on the spacetime Fourier side: solutions to (1) have spacetime Fourier transform (formally) supported on a ${d}$-dimensional hyperboloid, while solutions to (3) have spacetime Fourier transform formally supported on a ${d+1}$-dimensional cone. To link the two, one then observes that the ${d}$-dimensional hyperboloid can be viewed as a conic section (i.e. hyperplane slice) of the ${d+1}$-dimensional cone. In physical space, this link is manifested as follows: if ${u: {\bf R} \times {\bf R}^d \rightarrow {\bf C}}$ solves (1), then the function ${\iota_{1}(u): {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ defined by

$\displaystyle \iota_{1}(u)(t,x_1,\ldots,x_{d+1}) := e^{-i(t+x_{d+1})} u( \frac{t-x_{d+1}}{2}, x_1,\ldots,x_d)$

solves (3). More generally, for any non-zero scaling parameter ${\lambda}$, the function ${\iota_{\lambda}(u): {\bf R} \times {\bf R}^{d+1} \rightarrow {\bf C}}$ defined by

$\displaystyle \iota_{\lambda}(u)(t,x_1,\ldots,x_{d+1}) :=$

$\displaystyle \lambda^{d/2} e^{-i\lambda(t+x_{d+1})} u( \lambda \frac{t-x_{d+1}}{2}, \lambda x_1,\ldots,\lambda x_d) \ \ \ \ \ (5)$

solves (3).
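Here is a quick symbolic confirmation of the ${\iota_1}$ embedding (assuming sympy), taking ${d = 1}$ and a plane wave solution of (1):

```python
import sympy as sp

t, x1, x2, k = sp.symbols('t x1 x2 k', real=True)
s = sp.Symbol('s', real=True)

# a plane-wave solution of the d = 1 Schrodinger equation i u_s + (1/2) u_{x1 x1} = 0
u = sp.exp(sp.I*(k*x1 - k**2*s/2))
assert sp.simplify(sp.I*sp.diff(u, s) + sp.diff(u, x1, 2)/2) == 0

# the embedding iota_1 into the wave equation with d + 1 = 2 spatial dimensions
v = sp.exp(-sp.I*(t + x2)) * u.subs(s, (t - x2)/2)
assert sp.simplify(sp.diff(v, t, 2) - sp.diff(v, x1, 2) - sp.diff(v, x2, 2)) == 0
```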

As an “extra challenge” posed in an exercise in one of my books (Exercise 2.28, to be precise), I asked the reader to use the embeddings ${\iota_1}$ (or more generally ${\iota_\lambda}$) to explicitly connect together the pseudoconformal transformation ${pc}$ and the conformal transformation ${conf}$. It turns out that this connection is a little bit unusual, with the “obvious” guess (namely, that the embeddings ${\iota_\lambda}$ intertwine ${pc}$ and ${conf}$) being incorrect, and as such this particular task was perhaps too difficult even for a challenge question. I’ve been asked a couple times to provide the connection more explicitly, so I will do so below the fold.

Let ${A, B}$ be ${n \times n}$ Hermitian matrices, with eigenvalues ${\lambda_1(A) \leq \ldots \leq \lambda_n(A)}$ and ${\lambda_1(B) \leq \ldots \leq \lambda_n(B)}$. The Harish-Chandra-Itzykson-Zuber integral formula exactly computes the integral

$\displaystyle \int_{U(n)} \exp( t \hbox{tr}( A U B U^* ) )\ dU$

where ${U}$ is integrated over the Haar probability measure of the unitary group ${U(n)}$ and ${t}$ is a non-zero complex parameter, as the expression

$\displaystyle c_n \frac{ \det( \exp( t \lambda_i(A) \lambda_j(B) ) )_{1 \leq i,j \leq n} }{t^{(n^2-n)/2} \Delta(\lambda(A)) \Delta(\lambda(B))}$

when the eigenvalues of ${A,B}$ are simple, where ${\Delta}$ denotes the Vandermonde determinant

$\displaystyle \Delta(\lambda(A)) := \prod_{1 \leq i < j \leq n} (\lambda_j(A) - \lambda_i(A))$

and ${c_n}$ is the constant

$\displaystyle c_n := \prod_{i=1}^{n-1} i!.$
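As a sanity check on the normalisations, one can compare the formula against a Monte Carlo average over the Haar measure in the first non-trivial case ${n = 2}$ (where ${c_2 = 1! = 1}$). The sketch below (assuming numpy; the sample size and tolerance are arbitrary choices of mine) samples Haar-distributed unitaries by the standard QR-with-phase-correction recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # Haar-distributed element of U(n): QR of a complex gaussian matrix,
    # with the phases of the R diagonal absorbed into Q
    z = (rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

# n = 2 with A = diag(1,3), B = diag(0,2), t = 1/2
a, b, tpar = np.array([1.0, 3.0]), np.array([0.0, 2.0]), 0.5
A, B = np.diag(a), np.diag(b)

samples = [np.exp(tpar * np.trace(A @ U @ B @ U.conj().T).real)
           for U in (haar_unitary(2) for _ in range(50000))]
mc = np.mean(samples)

# HCIZ prediction: c_2 det(exp(t a_i b_j)) / (t^{(n^2-n)/2} Delta(a) Delta(b))
num = np.linalg.det(np.exp(tpar * np.outer(a, b)))
exact = num / (tpar * (a[1] - a[0]) * (b[1] - b[0]))

assert abs(mc - exact) / exact < 0.05   # Monte Carlo agrees to within sampling error
```

(For ${n=2}$ the prediction simplifies to ${(e^{t(a_1b_1+a_2b_2)} - e^{t(a_1b_2+a_2b_1)})/(t(a_2-a_1)(b_2-b_1))}$, which for these parameters is ${(e^3-e)/2 \approx 8.68}$.)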

There are at least two standard ways to prove this formula in the literature. One way is by applying the Duistermaat-Heckman theorem to the pushforward of Liouville measure on the coadjoint orbit ${{\mathcal O}_B := \{ UBU^*: U \in U(n) \}}$ (or more precisely, a rotation of such an orbit by ${i}$) under the moment map ${M \mapsto \hbox{diag}(M)}$, and then using a stationary phase expansion. Another way, which I only learned about recently, is to use the formulae for evolution of eigenvalues under Dyson Brownian motion (as well as the closely related formulae for the GUE ensemble), which were derived in this previous blog post. Both of these approaches can be found in several places in the literature (the former being observed in the original paper of Duistermaat and Heckman, and the latter observed in the paper of Itzykson and Zuber as well as in this later paper of Johansson), but I thought I would record both of these here for my own benefit.

The Harish-Chandra-Itzykson-Zuber formula can be extended to compact Lie groups other than ${U(n)}$. At first glance, this might suggest that these formulae could be of use in the study of the GOE ensemble, but unfortunately the Lie algebra associated to ${O(n)}$ corresponds to real anti-symmetric matrices rather than real symmetric matrices. This issue also occurs in the ${U(n)}$ case, but there one can simply multiply by ${i}$ to rotate a complex skew-Hermitian matrix into a complex Hermitian matrix. This is consistent, though, with the fact that the (somewhat rarely studied) anti-symmetric GOE ensemble has cleaner formulae (in particular, having a determinantal structure similar to GUE) than the (much more commonly studied) symmetric GOE ensemble.

[These are notes intended mostly for myself, as these topics are useful in random matrix theory, but may be of interest to some readers also. -T.]

One of the most fundamental partial differential equations in mathematics is the heat equation

$\displaystyle \partial_t f = L f \ \ \ \ \ (1)$

where ${f: [0,+\infty) \times {\bf R}^n \rightarrow {\bf R}}$ is a scalar function ${(t,x) \mapsto f(t,x)}$ of both time and space, and ${L}$ is the Laplacian ${L := \frac{1}{2} \Delta = \frac{1}{2} \sum_{i=1}^n \frac{\partial^2}{\partial x_i^2}}$. For the purposes of this post, we will ignore all technical issues of regularity and decay, and always assume that the solutions to equations such as (1) have all the regularity and decay needed to justify all formal operations such as the chain rule, integration by parts, or differentiation under the integral sign. The factor of ${\frac{1}{2}}$ in the definition of the heat propagator ${L}$ is of course an arbitrary normalisation, chosen for some minor technical reasons; one can certainly continue the discussion below with other choices of normalisations if desired.

In probability theory, this equation takes on particular significance when ${f}$ is restricted to be non-negative, and furthermore to be a probability measure at each time, in the sense that

$\displaystyle \int_{{\bf R}^n} f(t,x)\ dx = 1$

for all ${t}$. (Actually, it suffices to verify this constraint at time ${t=0}$, as the heat equation (1) will then preserve this constraint.) Indeed, in this case, one can interpret ${f(t,x)\ dx}$ as the probability distribution of a Brownian motion

$\displaystyle dx = dB(t) \ \ \ \ \ (2)$

where ${x = x(t) \in {\bf R}^n}$ is a stochastic process with initial probability distribution ${f(0,x)\ dx}$; see for instance this previous blog post for more discussion.

A model example of a solution to the heat equation to keep in mind is that of the fundamental solution

$\displaystyle G(t,x) = \frac{1}{(2\pi t)^{n/2}} e^{-|x|^2/2t} \ \ \ \ \ (3)$

defined for any ${t>0}$, which represents the distribution of Brownian motion of a particle starting at the origin ${x=0}$ at time ${t=0}$. At time ${t}$, ${G(t,\cdot)}$ is the probability density of an ${{\bf R}^n}$-valued random variable, each coordinate of which is an independent gaussian of mean zero and variance ${t}$. (As ${t \rightarrow 0^+}$, ${G(t)}$ converges in the sense of distributions to a Dirac mass at the origin.)
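All of these assertions about (3) are easy to confirm symbolically in one dimension (assuming sympy):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
x = sp.Symbol('x', real=True)

# fundamental solution (3) with n = 1
G = (2*sp.pi*t)**sp.Rational(-1, 2) * sp.exp(-x**2/(2*t))

# G solves the heat equation f_t = L f, with L = (1/2) d^2/dx^2
assert sp.simplify(sp.diff(G, t) - sp.diff(G, x, 2)/2) == 0

# G is a probability density with mean zero and variance t
assert sp.simplify(sp.integrate(G, (x, -sp.oo, sp.oo)) - 1) == 0
assert sp.simplify(sp.integrate(x*G, (x, -sp.oo, sp.oo))) == 0
assert sp.simplify(sp.integrate(x**2 * G, (x, -sp.oo, sp.oo)) - t) == 0
```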

The heat equation can also be viewed as the gradient flow for the Dirichlet form

$\displaystyle D(f,g) := \frac{1}{2} \int_{{\bf R}^n} \nabla f \cdot \nabla g\ dx \ \ \ \ \ (4)$

since one has the integration by parts identity

$\displaystyle \int_{{\bf R}^n} Lf(x) g(x)\ dx = \int_{{\bf R}^n} f(x) Lg(x)\ dx = - D(f,g) \ \ \ \ \ (5)$

for all smooth, rapidly decreasing ${f,g}$, which formally implies that ${L f}$ is (half of) the negative gradient of the Dirichlet energy ${D(f,f) = \frac{1}{2} \int_{{\bf R}^n} |\nabla f|^2\ dx}$ with respect to the ${L^2({\bf R}^n,dx)}$ inner product. Among other things, this implies that the Dirichlet energy decreases in time:

$\displaystyle \partial_t D(f,f) = - 2 \int_{{\bf R}^n} |Lf|^2\ dx. \ \ \ \ \ (6)$

For instance, for the fundamental solution (3), one can verify for any time ${t>0}$ that

$\displaystyle D(G,G) = \frac{n}{2^{n+2} \pi^{n/2}} \frac{1}{t^{(n+2)/2}} \ \ \ \ \ (7)$

(assuming I have not made a mistake in the calculation). In a similar spirit we have

$\displaystyle \partial_t \int_{{\bf R}^n} |f|^2\ dx = - 2 D(f,f). \ \ \ \ \ (8)$

Since ${D(f,f)}$ is non-negative, the formula (6) implies that ${\int_{{\bf R}^n} |Lf|^2\ dx}$ is integrable in time, and in particular we see that ${Lf}$ converges to zero as ${t \rightarrow \infty}$, in some averaged ${L^2}$ sense at least; similarly, (8) suggests that ${D(f,f)}$ also converges to zero. This suggests that ${f}$ converges to a constant function; but as ${f}$ is also supposed to decay to zero at spatial infinity, we thus expect solutions to the heat equation in ${{\bf R}^n}$ to decay to zero in some sense as ${t \rightarrow \infty}$. However, the decay is only expected to be polynomial in nature rather than exponential; for instance, the solution (3) decays in the ${L^\infty}$ norm like ${O(t^{-n/2})}$.
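For the fundamental solution (3), the identities (6) and (8) can be confirmed symbolically; here is a quick check in one dimension (assuming sympy):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
x = sp.Symbol('x', real=True)
G = (2*sp.pi*t)**sp.Rational(-1, 2) * sp.exp(-x**2/(2*t))   # fundamental solution, n = 1

def L(w):      # L = (1/2) * Laplacian in one dimension
    return sp.diff(w, x, 2)/2

def D(u, v):   # Dirichlet form (4) in one dimension
    return sp.integrate(sp.diff(u, x)*sp.diff(v, x), (x, -sp.oo, sp.oo))/2

# identity (6): d/dt D(f,f) = -2 int |Lf|^2 dx, for f = G
assert sp.simplify(sp.diff(D(G, G), t)
                   + 2*sp.integrate(L(G)**2, (x, -sp.oo, sp.oo))) == 0

# identity (8): d/dt int f^2 dx = -2 D(f,f), for f = G
assert sp.simplify(sp.diff(sp.integrate(G**2, (x, -sp.oo, sp.oo)), t)
                   + 2*D(G, G)) == 0
```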

Since ${L1=0}$, we also observe the basic cancellation property

$\displaystyle \int_{{\bf R}^n} Lf(x) \ dx = 0 \ \ \ \ \ (9)$

for any function ${f}$.

There are other quantities relating to ${f}$ that also decrease in time under heat flow, particularly in the important case when ${f}$ is a probability measure. In this case, it is natural to introduce the entropy

$\displaystyle S(f) := \int_{{\bf R}^n} f(x) \log f(x)\ dx.$

Thus, for instance, if ${f(x)\ dx}$ is the uniform distribution on some measurable subset ${E}$ of ${{\bf R}^n}$ of finite measure ${|E|}$, the entropy would be ${-\log |E|}$. Intuitively, as the entropy decreases, the probability distribution gets wider and flatter. For instance, in the case of the fundamental solution (3), one has ${S(G) = -\frac{n}{2} \log( 2 \pi e t )}$ for any ${t>0}$, reflecting the fact that ${G(t)}$ is approximately uniformly distributed on a ball of radius ${O(\sqrt{t})}$ (and thus of measure ${O(t^{n/2})}$).

A short formal computation using (9) and (5) shows (if one assumes for simplicity that ${f}$ is strictly positive, which is not an unreasonable hypothesis, particularly in view of the strong maximum principle) that

$\displaystyle \partial_t S(f) = \int_{{\bf R}^n} (Lf) \log f + f \frac{Lf}{f}\ dx$

$\displaystyle = \int_{{\bf R}^n} (Lf) \log f\ dx$

$\displaystyle = - D( f, \log f )$

$\displaystyle = - \frac{1}{2} \int_{{\bf R}^n} \frac{|\nabla f|^2}{f}\ dx$

$\displaystyle = - 4D( g, g )$

where ${g := \sqrt{f}}$ is the square root of ${f}$. For instance, if ${f}$ is the fundamental solution (3), one can check that ${D(g,g) = \frac{n}{8t}}$ (note that this is a significantly cleaner formula than (7)!).
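These formulae for the fundamental solution can be checked symbolically; the sketch below (assuming sympy) verifies the entropy formula, the Dirichlet energy (7), and the cleaner formula ${D(g,g) = n/(8t)}$, all in the case ${n=1}$:

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
x = sp.Symbol('x', real=True)
G = (2*sp.pi*t)**sp.Rational(-1, 2) * sp.exp(-x**2/(2*t))   # fundamental solution, n = 1
logG = -sp.log(2*sp.pi*t)/2 - x**2/(2*t)                    # log G, written out by hand

# entropy: S(G) = -(n/2) log(2 pi e t), here with n = 1
S = sp.integrate(sp.expand(G*logG), (x, -sp.oo, sp.oo))
target = -sp.expand_log(sp.log(2*sp.pi*sp.E*t), force=True)/2
assert sp.simplify(sp.expand_log(S - target, force=True)) == 0

# Dirichlet energy (7): for n = 1, D(G,G) = 1/(8 sqrt(pi)) * t^{-3/2}
DGG = sp.integrate(sp.diff(G, x)**2, (x, -sp.oo, sp.oo))/2
assert sp.simplify(DGG - t**sp.Rational(-3, 2)/(8*sp.sqrt(sp.pi))) == 0

# and the cleaner formula D(g,g) = n/(8t) for g = sqrt(G)
g = (2*sp.pi*t)**sp.Rational(-1, 4) * sp.exp(-x**2/(4*t))   # = sqrt(G)
Dgg = sp.integrate(sp.diff(g, x)**2, (x, -sp.oo, sp.oo))/2
assert sp.simplify(Dgg - 1/(8*t)) == 0
```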

In particular, the entropy is decreasing, which corresponds well to one’s intuition that the heat equation (or Brownian motion) should serve to spread out a probability distribution over time.

Actually, one can say more: the rate of decrease ${4D(g,g)}$ of the entropy is itself decreasing, or in other words the entropy is convex. I do not have a satisfactorily intuitive reason for this phenomenon, but it can be proved by straightforward application of basic several variable calculus tools (such as the chain rule, product rule, quotient rule, and integration by parts), and completing the square. Namely, by using the chain rule we have

$\displaystyle L \phi(f) = \phi'(f) Lf + \frac{1}{2} \phi''(f) |\nabla f|^2, \ \ \ \ \ (10)$

valid for any smooth function ${\phi: {\bf R} \rightarrow {\bf R}}$; applying this with ${\phi(x) := x^2}$ (so that ${\phi(g) = f}$) and using (1), we see that

$\displaystyle 2 g \partial_t g = 2 g L g + |\nabla g|^2$

and thus (again assuming that ${f}$, and hence ${g}$, is strictly positive to avoid technicalities)

$\displaystyle \partial_t g = Lg + \frac{|\nabla g|^2}{2g}.$

We thus have

$\displaystyle \partial_t D(g,g) = 2 D(g,Lg) + D(g, \frac{|\nabla g|^2}{g} ).$

It is now convenient to compute using the Einstein summation convention to hide the summation over indices ${i,j = 1,\ldots,n}$. We have

$\displaystyle 2 D(g,Lg) = \frac{1}{2} \int_{{\bf R}^n} (\partial_i g) (\partial_i \partial_j \partial_j g)\ dx$

and

$\displaystyle D(g, \frac{|\nabla g|^2}{g} ) = \frac{1}{2} \int_{{\bf R}^n} (\partial_i g) \partial_i \frac{\partial_j g \partial_j g}{g}\ dx.$

By integration by parts and interchanging partial derivatives, we may write the first integral as

$\displaystyle 2 D(g,Lg) = - \frac{1}{2} \int_{{\bf R}^n} (\partial_i \partial_j g) (\partial_i \partial_j g)\ dx,$

and from the quotient and product rules, we may write the second integral as

$\displaystyle D(g, \frac{|\nabla g|^2}{g} ) = \int_{{\bf R}^n} \frac{(\partial_i g) (\partial_j g) (\partial_i \partial_j g)}{g} - \frac{(\partial_i g) (\partial_j g) (\partial_i g) (\partial_j g)}{2g^2}\ dx.$

Gathering terms, completing the square, and making the summations explicit again, we see that

$\displaystyle \partial_t D(g,g) =- \frac{1}{2} \int_{{\bf R}^n} \frac{\sum_{i=1}^n \sum_{j=1}^n |g \partial_i \partial_j g - (\partial_i g) (\partial_j g)|^2}{g^2}\ dx$

and so in particular ${D(g,g)}$ is always decreasing.

The above identity can also be written as

$\displaystyle \partial_t D(g,g) = - \frac{1}{2} \int_{{\bf R}^n} |\nabla^2 \log g|^2 g^2\ dx.$
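One can test this identity on the fundamental solution (3), for which ${\log g}$ is quadratic in ${x}$ and the Hessian term is particularly simple; here is a symbolic check in one dimension (assuming sympy):

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
x = sp.Symbol('x', real=True)
# g = sqrt(G) for the fundamental solution (3) with n = 1, written out explicitly
g = (2*sp.pi*t)**sp.Rational(-1, 4) * sp.exp(-x**2/(4*t))

# left side: d/dt D(g,g)
lhs = sp.diff(sp.integrate(sp.diff(g, x)**2, (x, -sp.oo, sp.oo))/2, t)

# right side: -(1/2) int |(log g)''|^2 g^2 dx   (the Hessian is 1x1 here)
integrand = sp.simplify(sp.diff(sp.log(g), x, 2)**2 * g**2)
rhs = -sp.integrate(integrand, (x, -sp.oo, sp.oo))/2

assert sp.simplify(lhs - rhs) == 0   # both sides equal -1/(8 t^2)
```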

Exercise 1 Give an alternate proof of the above identity by writing ${f = e^{2u}}$, ${g = e^u}$ and deriving the equation ${\partial_t u = Lu + |\nabla u|^2}$ for ${u}$.

It was observed in a well known paper of Bakry and Emery that the above monotonicity properties hold for a much larger class of heat flow-type equations, and lead to a number of important relations between energy and entropy, such as the log-Sobolev inequality of Gross and of Federbush, and the hypercontractivity inequality of Nelson; we will discuss one such family of generalisations (or more precisely, variants) below the fold.

Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv our survey “Small doubling in groups“, for the proceedings of the upcoming Erdos Centennial. This is a short survey of the known results on classifying finite subsets $A$ of an (abelian) additive group $G = (G,+)$ or a (not necessarily abelian) multiplicative group $G = (G,\cdot)$ that have small doubling in the sense that the sum set $A+A$ or product set $A \cdot A$ is small. Such sets behave approximately like finite subgroups of $G$ (and there is a closely related notion of an approximate group in which the analogy is even tighter), and so this subject can be viewed as a sort of approximate version of finite group theory. (Unfortunately, thus far the theory does not have much new to say about the classification of actual finite groups; progress has been largely made instead on classifying the (highly restricted) number of ways in which approximate groups can differ from a genuine group.)

In the classical case when $G$ is the integers ${\mathbb Z}$, these sets were classified (in a qualitative sense, at least) by a celebrated theorem of Freiman, which roughly speaking says that such sets $A$ are necessarily “commensurate” in some sense with a (generalised) arithmetic progression $P$ of bounded rank.   There are a number of essentially equivalent ways to define what “commensurate” means here; for instance, in the original formulation of the theorem, one asks that $A$ be a dense subset of $P$, but in modern formulations it is often more convenient to require instead that $A$ be of comparable size to $P$ and be covered by a bounded number of translates of $P$, or that $A$ and $P$ have an intersection that is of comparable size to both $A$ and $P$ (cf. the notion of commensurability in group theory).

Freiman’s original theorem was extended to more general abelian groups in a sequence of papers culminating in the paper of Green and Ruzsa that handled arbitrary abelian groups.   As such groups now contain non-trivial finite subgroups, the conclusion of the theorem must be  modified by allowing for “coset progressions” $P+H$, which can be viewed as “extensions”  of generalized arithmetic progressions $P$ by genuine finite groups $H$.

The proof methods in these abelian results were Fourier-analytic in nature (except in the cases of sets of very small doubling, in which more combinatorial approaches can be applied, and there were also some geometric or combinatorial methods that gave some weaker structural results). As such, it was a challenge to extend these results to nonabelian groups, although for various important special types of ambient group $G$ (such as a linear group over a finite or infinite field) it turns out that one can use tools exploiting the special structure of those groups (e.g. for linear groups one would use tools from Lie theory and algebraic geometry) to obtain quite satisfactory results; see e.g. this survey of Pyber and Szabo for the linear case. When the ambient group $G$ is completely arbitrary, it turns out that the problem is closely related to the classical Hilbert’s fifth problem of determining the minimal requirements of a topological group in order for such groups to have Lie structure; this connection was first observed and exploited by Hrushovski, and then used by Breuillard, Green, and myself to obtain the analogue of Freiman’s theorem for an arbitrary nonabelian group.

This survey is too short to discuss in much detail the proof techniques used in these results (although the abelian case is discussed in this book of mine with Vu, and the nonabelian case discussed in this more recent book of mine), but instead focuses on the statements of the various known results, as well as some remaining open questions in the subject (in particular, there is substantial work left to be done in making the estimates more quantitative, particularly in the nonabelian setting).