
Apoorva Khare and I have updated our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“, announced at this post from last month. The quantitative results are now sharpened using a new monotonicity property of ratios {s_{\lambda}(u)/s_{\mu}(u)} of Schur polynomials, namely that such ratios are monotone non-decreasing in each coordinate of {u} if {u} is in the positive orthant and the partition {\lambda} is larger than the partition {\mu}. (This monotonicity was also independently observed by Rachid Ait-Haddou, using the theory of blossoms.) In the revised version of the paper we give two proofs of this monotonicity. The first relies on a deep positivity result of Lam, Postnikov, and Pylyavskyy, which uses a representation-theoretic positivity result of Haiman to show that the polynomial combination

\displaystyle s_{(\lambda \wedge \nu) / (\mu \wedge \rho)} s_{(\lambda \vee \nu) / (\mu \vee \rho)} - s_{\lambda/\mu} s_{\nu/\rho} \ \ \ \ \ (1)

of skew-Schur polynomials is Schur-positive for any partitions {\lambda,\mu,\nu,\rho} (using the convention that the skew-Schur polynomial {s_{\lambda/\mu}} vanishes if {\mu} is not contained in {\lambda}, and where {\lambda \wedge \nu} and {\lambda \vee \nu} denote the pointwise min and max of {\lambda} and {\nu} respectively). It is fairly easy to derive the monotonicity of {s_\lambda(u)/s_\mu(u)} from this, by using the expansion

\displaystyle s_\lambda(u_1,\dots, u_n) = \sum_k u_1^k s_{\lambda/(k)}(u_2,\dots,u_n)

of Schur polynomials into skew-Schur polynomials (as was done in this previous post).
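
As a quick numerical sanity check of this monotonicity (purely illustrative, and not taken from the paper), one can compute Schur polynomials in SymPy via the bialternant formula and test the partial derivatives of the ratio at random points of the positive orthant; the partitions {(3,1,0)} and {(2,1,0)} below are arbitrary choices, with the former dominating the latter in each coordinate:

```python
# Illustrative sanity check; partitions and sample points are arbitrary choices.
import random

import sympy as sp

def schur(lam, u):
    # Schur polynomial s_lam(u) via the bialternant formula:
    # s_lam = det(u_i^(lam_j + n - j)) / det(u_i^(n - j))
    n = len(u)
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition with zeros
    num = sp.Matrix(n, n, lambda i, j: u[i] ** (lam[j] + n - 1 - j))
    den = sp.Matrix(n, n, lambda i, j: u[i] ** (n - 1 - j))
    return sp.cancel(num.det() / den.det())

u = sp.symbols('u1 u2 u3', positive=True)
lam, mu = (3, 1, 0), (2, 1, 0)  # lam >= mu in each coordinate
ratio = sp.cancel(schur(lam, u) / schur(mu, u))

random.seed(0)
for _ in range(20):
    point = {ui: random.uniform(0.1, 5.0) for ui in u}
    # all partial derivatives of the ratio should be >= 0 on the positive orthant
    assert all(sp.diff(ratio, ui).subs(point) >= 0 for ui in u)
print("monotonicity verified at 20 random points")
```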

The second proof of monotonicity avoids representation theory by a more elementary argument establishing the weaker claim that the above expression (1) is non-negative on the positive orthant. In fact we prove a more general determinantal log-supermodularity claim which may be of independent interest:

Theorem 1 Let {A} be any {n \times n} totally positive matrix (thus, every minor of {A} is non-negative). Then for any {k}-tuples {I_1,I_2,J_1,J_2} of increasing elements of {\{1,\dots,n\}}, one has

\displaystyle \det( A_{I_1 \wedge I_2, J_1 \wedge J_2} ) \det( A_{I_1 \vee I_2, J_1 \vee J_2} ) - \det(A_{I_1,J_1}) \det(A_{I_2,J_2}) \geq 0

where {A_{I,J}} denotes the {k \times k} submatrix of {A} formed from the rows in {I} and the columns in {J}.

For instance, if {A} is the matrix

\displaystyle A = \begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{pmatrix}

for some real numbers {a,\dots,p}, one has

\displaystyle ah - de \geq 0

(corresponding to the case {k=1}, {I_1 = (1), I_2 = (2), J_1 = (4), J_2 = (1)}), or

\displaystyle \det \begin{pmatrix} a & c \\ i & k \end{pmatrix} \det \begin{pmatrix} f & h \\ n & p \end{pmatrix} - \det \begin{pmatrix} e & h \\ i & l \end{pmatrix} \det \begin{pmatrix} b & c \\ n & o \end{pmatrix} \geq 0

(corresponding to the case {k=2}, {I_1 = (2,3)}, {I_2 = (1,4)}, {J_1 = (1,4)}, {J_2 = (2,3)}). It turns out that this claim can be proven relatively easily by an induction argument, relying on the Dodgson and Karlin identities from this previous post; the difficulties are largely notational in nature. Combining this result with the Jacobi-Trudi identity for skew-Schur polynomials (discussed in this previous post) gives the non-negativity of (1); it can also be used to directly establish the monotonicity of ratios {s_\lambda(u)/s_\mu(u)} by applying the theorem to a generalised Vandermonde matrix.
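
As an illustrative numerical check of Theorem 1 (assuming NumPy), one can take a generalised Vandermonde matrix {(x_i^j)} with increasing positive nodes, a standard example of a totally positive matrix, and test the inequality on random index tuples; the dimensions and sample counts below are arbitrary choices:

```python
# Illustrative sanity check of Theorem 1; sizes and sampling are arbitrary choices.
import itertools

import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
x = np.sort(rng.uniform(0.5, 3.0, n))      # nodes 0 < x_1 < ... < x_n
A = x[:, None] ** np.arange(n)[None, :]    # A[i,j] = x_i^j, totally positive

def minor(I, J):
    # determinant of the submatrix with rows I and columns J
    return np.linalg.det(A[np.ix_(I, J)])

tuples = [np.array(c) for c in itertools.combinations(range(n), k)]
for _ in range(1000):
    I1, I2 = tuples[rng.integers(len(tuples))], tuples[rng.integers(len(tuples))]
    J1, J2 = tuples[rng.integers(len(tuples))], tuples[rng.integers(len(tuples))]
    lhs = (minor(np.minimum(I1, I2), np.minimum(J1, J2))
           * minor(np.maximum(I1, I2), np.maximum(J1, J2)))
    assert lhs - minor(I1, J1) * minor(I2, J2) >= -1e-9
print("log-supermodularity verified on 1000 random samples")
```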

(Log-supermodularity also arises as the natural hypothesis for the FKG inequality, though I do not know of any interesting application of the FKG inequality in this current setting.)

In one of the earliest posts on this blog, I talked about the ability to “arbitrage” a disparity of symmetry in an inequality, and in particular to “amplify” such an inequality into a stronger one. (The principle can apply to other mathematical statements than inequalities, with the “hypothesis” and “conclusion” of that statement generally playing the role of the “right-hand side” and “left-hand side” of an inequality, but for sake of discussion I will restrict attention here to inequalities.) One can formalise this principle as follows. Many inequalities in analysis can be expressed in the form

\displaystyle A(f) \leq B(f) \ \ \ \ \ (1)

for all {f} in some space {X} (in many cases {X} will be a function space, and {f} a function in that space), where {A(f)} and {B(f)} are some functionals of {f} (that is to say, real-valued functions of {f}). For instance, {B(f)} might be some function space norm of {f} (e.g. an {L^p} norm), and {A(f)} might be some function space norm of some transform of {f}. In addition, we assume we have some group {G} of symmetries {T: X \rightarrow X} acting on the underlying space. For instance, if {X} is a space of functions on some spatial domain, the group might consist of translations (e.g. {Tf(x) = f(x-h)} for some shift {h}), or perhaps dilations with some normalisation (e.g. {Tf(x) = \frac{1}{\lambda^\alpha} f(\frac{x}{\lambda})} for some dilation factor {\lambda > 0} and some normalisation exponent {\alpha \in {\bf R}}, which can be thought of as the dimensionality of length one is assigning to {f}). If we have

\displaystyle A(Tf) = A(f)

for all symmetries {T \in G} and all {f \in X}, we say that {A} is invariant with respect to the symmetries in {G} (or {G}-invariant for short).

Suppose we know that the inequality (1) holds for all {f \in X}, but that there is an imbalance of symmetry: either {A} is {G}-invariant and {B} is not, or vice versa. Suppose first that {A} is {G}-invariant and {B} is not. Substituting {f} by {Tf} in (1) and taking infima, we can then amplify (1) to the stronger inequality

\displaystyle A(f) \leq \inf_{T \in G} B(Tf).

In particular, it is often the case that there is a way to send {T} off to infinity in such a way that the functional {B(Tf)} has a limit {B_\infty(f)}, in which case we obtain the amplification

\displaystyle A(f) \leq B_\infty(f) \ \ \ \ \ (2)

of (1). Note that these amplified inequalities will now be {G}-invariant on both sides (assuming that the way in which we take limits as {T \rightarrow \infty} is itself {G}-invariant, which it often is in practice). Similarly, if {B} is {G}-invariant but {A} is not, we may instead amplify (1) to

\displaystyle \sup_{T \in G} A(Tf) \leq B(f)

and in particular (if {A(Tf)} has a limit {A_\infty(f)} as {T \rightarrow \infty})

\displaystyle A_\infty(f) \leq B(f). \ \ \ \ \ (3)

If neither {A(f)} nor {B(f)} has a {G}-symmetry, one can still use the {G}-symmetry by replacing {f} by {Tf} and taking a limit to conclude that

\displaystyle A_\infty(f) \leq B_\infty(f),

though now this inequality is not obviously stronger than the original inequality (1) (for instance it could well be trivial). In some cases one can also average over {G} instead of taking a limit as {T \rightarrow \infty}, thus averaging a non-invariant inequality into an invariant one.

As discussed in the previous post, this use of amplification gives rise to a general principle about inequalities: the most efficient inequalities are those in which the left-hand side and right-hand side enjoy the same symmetries. It is certainly possible to have true inequalities that have an imbalance of symmetry, but as shown above, such inequalities can always be amplified to more efficient and more symmetric inequalities. In the case when limits such as {A_\infty} and {B_\infty} exist, the limiting functionals {A_\infty(f)} and {B_\infty(f)} are often simpler in form, or more tractable analytically, than their non-limiting counterparts {A(f)} and {B(f)} (this is one of the main reasons why we take limits at infinity in the first place!), and so in many applications there is really no reason to use the weaker and more complicated inequality (1), when stronger, simpler, and more symmetric inequalities such as (2), (3) are available. Among other things, this explains why many of the most useful and natural inequalities one sees in analysis are dimensionally consistent.

One often tries to prove inequalities (1) by directly chaining together simpler inequalities. For instance, one might attempt to prove (1) by first bounding {A(f)} by some auxiliary quantity {C(f)}, and then bounding {C(f)} by {B(f)}, thus obtaining (1) by chaining together two inequalities

\displaystyle A(f) \leq C(f) \leq B(f). \ \ \ \ \ (4)

A variant of the above principle then asserts that when proving inequalities by such direct methods, one should, whenever possible, try to maintain the symmetries that are present in both sides of the inequality. Why? Well, suppose that we ignored this principle and tried to prove (1) by establishing (4) for some {C} that is not {G}-invariant. Assuming for sake of argument that (4) were actually true, we could amplify the first half {A(f) \leq C(f)} of this inequality to conclude that

\displaystyle A(f) \leq \inf_{T \in G} C(Tf)

and also amplify the second half {C(f) \leq B(f)} of the inequality to conclude that

\displaystyle \sup_{T \in G} C(Tf) \leq B(f)

and hence (4) amplifies to

\displaystyle A(f) \leq \inf_{T \in G} C(Tf) \leq \sup_{T \in G} C(Tf) \leq B(f). \ \ \ \ \ (5)

Let’s say for sake of argument that all the quantities involved here are positive numbers (which is often the case in analysis). Then we see in particular that

\displaystyle \frac{\sup_{T \in G} C(Tf)}{\inf_{T \in G} C(Tf)} \leq \frac{B(f)}{A(f)}. \ \ \ \ \ (6)

Informally, (6) asserts that in order for the strategy (4) used to prove (1) to work, the extent to which {C} fails to be {G}-invariant cannot exceed the amount of “room” present in (1). In particular, when dealing with those “extremal” {f} for which the left and right-hand sides of (1) are comparable to each other, one can only have a bounded amount of non-{G}-invariance in the functional {C}. If {C} fails so badly to be {G}-invariant that one does not expect the left-hand side of (6) to be at all bounded in such extremal situations, then the strategy of proving (1) using the intermediate quantity {C} is doomed to failure – even if one has already produced some clever proof of one of the two inequalities {A(f) \leq C(f)} or {C(f) \leq B(f)} needed to make this strategy work. And even if it did work, one could amplify (4) to a simpler inequality

\displaystyle A(f) \leq C_\infty(f) \leq B(f) \ \ \ \ \ (7)

(assuming that the appropriate limit {C_\infty(f) = \lim_{T \rightarrow \infty} C(Tf)} existed) which would likely also be easier to prove (one can take whatever proofs one had in mind of the inequalities in (4), conjugate them by {T}, and take a limit as {T \rightarrow \infty} to extract a proof of (7)).

Here are some simple (but somewhat contrived) examples to illustrate these points. Suppose one wishes to prove the inequality

\displaystyle xy \leq x^2 + y^2 \ \ \ \ \ (8)

for all {x,y>0}. Both sides of this inequality are invariant with respect to interchanging {x} with {y}, so the principle suggests that when proving this inequality directly, one should only use sub-inequalities that are also invariant with respect to this interchange. However, in this particular case there is enough “room” in the inequality that it is possible (though somewhat unnatural) to violate this principle. For instance, one could decide (for whatever reason) to start with the inequality

\displaystyle 0 \leq (x - y/2)^2 = x^2 - xy + y^2/4

to conclude that

\displaystyle xy \leq x^2 + y^2/4

and then use the obvious inequality {x^2 + y^2/4 \leq x^2+y^2} to conclude the proof. Here, the intermediate quantity {x^2 + y^2/4} is not invariant with respect to interchange of {x} and {y}, but the failure is fairly mild (interchanging {x} and {y} only modifies the quantity {x^2 + y^2/4} by a multiplicative factor of {4} at most), and disappears completely in the most extremal case {x=y}, which helps explain why one could get away with using this quantity in the proof here. But it would be significantly harder (though still not impossible) to use non-symmetric intermediaries to prove the sharp version

\displaystyle xy \leq \frac{x^2 + y^2}{2}

of (8) (that is to say, the arithmetic mean-geometric mean inequality). Try it!

Similarly, consider the task of proving the triangle inequality

\displaystyle |z+w| \leq |z| + |w| \ \ \ \ \ (9)

for complex numbers {z, w}. One could try to leverage the triangle inequality {|x+y| \leq |x| + |y|} for real numbers by using the crude estimate

\displaystyle |z+w| \leq |\hbox{Re}(z+w)| + |\hbox{Im}(z+w)|

and then use the real triangle inequality to obtain

\displaystyle |\hbox{Re}(z+w)| \leq |\hbox{Re}(z)| + |\hbox{Re}(w)|

and

\displaystyle |\hbox{Im}(z+w)| \leq |\hbox{Im}(z)| + |\hbox{Im}(w)|

and then finally use the inequalities

\displaystyle |\hbox{Re}(z)|, |\hbox{Im}(z)| \leq |z| \ \ \ \ \ (10)

and

\displaystyle |\hbox{Re}(w)|, |\hbox{Im}(w)| \leq |w| \ \ \ \ \ (11)

but when one puts this all together at the end of the day, one loses a factor of two:

\displaystyle |z+w| \leq 2(|z| + |w|).

One can “blame” this loss on the fact that while the original inequality (9) was invariant with respect to phase rotation {(z,w) \mapsto (e^{i\theta} z, e^{i\theta} w)}, the intermediate expressions we tried to use when proving it were not, leading to inefficient estimates. One can try to be smarter than this by using Pythagoras’ theorem {|z|^2 = |\hbox{Re}(z)|^2 + |\hbox{Im}(z)|^2}; this reduces the loss from {2} to {\sqrt{2}} but does not eliminate it completely, which is to be expected as one is still using non-invariant estimates in the proof. But one can remove the loss completely by using amplification; see the previous blog post for details (we also give a reformulation of this amplification below).

Here is a slight variant of the above example. Suppose that you had just learned in class to prove the triangle inequality

\displaystyle (\sum_{n=1}^\infty |a_n+b_n|^2)^{1/2} \leq (\sum_{n=1}^\infty |a_n|^2)^{1/2} + (\sum_{n=1}^\infty |b_n|^2)^{1/2} \ \ \ \ \ (12)

for (say) real square-summable sequences {(a_n)_{n=1}^\infty}, {(b_n)_{n=1}^\infty}, and was tasked to conclude the corresponding inequality

\displaystyle (\sum_{n \in {\bf Z}} |a_n+b_n|^2)^{1/2} \leq (\sum_{n \in {\bf Z}} |a_n|^2)^{1/2} + (\sum_{n \in {\bf Z}} |b_n|^2)^{1/2} \ \ \ \ \ (13)

for doubly infinite square-summable sequences {(a_n)_{n \in {\bf Z}}, (b_n)_{n \in {\bf Z}}}. The quickest way to do this is of course to exploit a bijection between the natural numbers {1,2,\dots} and the integers, but let us say for sake of argument that one was unaware of such a bijection. One could then proceed instead by splitting the integers into the positive integers and the non-positive integers, and use (12) on each component separately; this is very similar to the strategy of proving (9) by splitting a complex number into real and imaginary parts, and will similarly lose a factor of {2} or {\sqrt{2}}. In this case, one can “blame” this loss on the abandonment of translation invariance: both sides of the inequality (13) are invariant with respect to shifting the sequences {(a_n)_{n \in {\bf Z}}}, {(b_n)_{n \in {\bf Z}}} by some shift {h} to arrive at {(a_{n-h})_{n \in {\bf Z}}, (b_{n-h})_{n \in {\bf Z}}}, but the intermediate quantities caused by splitting the integers into two subsets are not invariant. Another way of thinking about this is that the splitting of the integers gives a privileged role to the origin {n=0}, whereas the inequality (13) treats all values of {n} equally thanks to the translation invariance, and so using such a splitting is unnatural and not likely to lead to optimal estimates. On the other hand, one can deduce (13) from (12) by sending this symmetry to infinity; indeed, after applying a shift to (12) we see that

\displaystyle (\sum_{n=-N}^\infty |a_n+b_n|^2)^{1/2} \leq (\sum_{n=-N}^\infty |a_n|^2)^{1/2} + (\sum_{n=-N}^\infty |b_n|^2)^{1/2}

for any {N}, and on sending {N \rightarrow \infty} we obtain (13) (one could invoke the monotone convergence theorem here to justify the limit, though in this case it is simple enough that one can just use first principles).

Note that the principle of preserving symmetry only applies to direct approaches to proving inequalities such as (1). There is a complementary approach, discussed for instance in this previous post, which is to spend the symmetry to place the variable {f} “without loss of generality” in a “normal form”, “convenient coordinate system”, or a “good gauge”. Abstractly: suppose that there is some subset {Y} of {X} with the property that every {f \in X} can be expressed in the form {f = Tg} for some {T \in G} and {g \in Y} (that is to say, {X = GY}). Then, if one wishes to prove an inequality (1) for all {f \in X}, and one knows that both sides {A(f), B(f)} of this inequality are {G}-invariant, then it suffices to check (1) just for those {f} in {Y}, as this together with the {G}-invariance will imply the same inequality (1) for all {f} in {GY=X}. By restricting to those {f} in {Y}, one has given up (or spent) the {G}-invariance, as the set {Y} will typically not be preserved by the group action {G}. But by the same token, by eliminating the invariance, one also eliminates the prohibition on using non-invariant proof techniques, and one is now free to use a wider range of inequalities in order to try to establish (1). Of course, such inequalities should make crucial use of the restriction {f \in Y}, for if they did not, then the arguments would work in the more general setting {f \in X}, and then the previous principle would again kick in and warn us that the use of non-invariant inequalities would be inefficient. Thus one should “spend” the symmetry wisely to “buy” a restriction {f \in Y} that will be of maximal utility in calculations (for instance by setting as many annoying factors and terms in one’s analysis to be {0} or {1} as possible).

As a simple example of this, let us revisit the complex triangle inequality (9). As already noted, both sides of this inequality are invariant with respect to the phase rotation symmetry {(z,w) \mapsto (e^{i\theta} z, e^{i\theta} w)}. This seems to limit one to using phase-rotation-invariant techniques to establish the inequality, in particular ruling out the use of real and imaginary parts as discussed previously. However, we can instead spend the phase rotation symmetry to restrict to a special class of {z} and {w}. It turns out that the most efficient way to spend the symmetry is to achieve the normalisation of {z+w} being a nonnegative real; this is of course possible since any complex number {z+w} can be turned into a nonnegative real by multiplying by an appropriate phase {e^{i\theta}}. Once {z+w} is a nonnegative real, the imaginary part disappears and we have

\displaystyle |z+w| = \hbox{Re}(z+w) = \hbox{Re}(z) + \hbox{Re}(w),

and the triangle inequality (9) is now an immediate consequence of (10), (11). (But note that if one had unwisely spent the symmetry to normalise, say, {z} to be a non-negative real, then one is no closer to establishing (9) than before one had spent the symmetry.)
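
Here is a small numerical illustration (not from the original discussion, and assuming NumPy) of spending the phase rotation symmetry in this fashion; the sample count is an arbitrary choice:

```python
# Illustrative sanity check of the "spend the symmetry" proof of (9).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    z = complex(*rng.standard_normal(2))
    w = complex(*rng.standard_normal(2))
    theta = -np.angle(z + w)                 # spend the phase symmetry...
    zr, wr = z * np.exp(1j * theta), w * np.exp(1j * theta)
    assert abs((zr + wr).imag) < 1e-12       # ...so z + w is a nonnegative real
    # |z+w| = Re(zr) + Re(wr), and each real part is at most the modulus
    assert abs(z + w) <= zr.real + wr.real + 1e-12
    assert zr.real <= abs(z) + 1e-12 and wr.real <= abs(w) + 1e-12
```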

Apoorva Khare and I have just uploaded to the arXiv our paper “On the sign patterns of entrywise positivity preservers in fixed dimension“. This paper explores the relationship between positive definiteness of Hermitian matrices, and entrywise operations on these matrices. The starting point for this theory is the Schur product theorem, which asserts that if {A = (a_{ij})_{1 \leq i,j \leq N}} and {B = (b_{ij})_{1 \leq i,j \leq N}} are two {N \times N} Hermitian matrices that are positive semi-definite, then their Hadamard product

\displaystyle  A \circ B := (a_{ij} b_{ij})_{1 \leq i,j \leq N}

is also positive semi-definite. (One should caution that the Hadamard product is not the same as the usual matrix product.) To prove this theorem, first observe that the claim is easy when {A = {\bf u} {\bf u}^*} and {B = {\bf v} {\bf v}^*} are rank one positive semi-definite matrices, since in this case {A \circ B = ({\bf u} \circ {\bf v}) ({\bf u} \circ {\bf v})^*} is also a rank one positive semi-definite matrix. The general case then follows by noting from the spectral theorem that a general positive semi-definite matrix can be expressed as a non-negative linear combination of rank one positive semi-definite matrices, and using the bilinearity of the Hadamard product and the fact that the set of positive semi-definite matrices forms a convex cone. A modification of this argument also lets one replace “positive semi-definite” by “positive definite” in the statement of the Schur product theorem.
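
As a quick illustration of the Schur product theorem (purely a sanity check, assuming NumPy; the matrix size is an arbitrary choice), one can verify numerically that the Hadamard product of two random positive semi-definite matrices has non-negative spectrum:

```python
# Illustrative sanity check of the Schur product theorem; sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N = 5
X, Y = rng.standard_normal((N, N)), rng.standard_normal((N, N))
A, B = X @ X.T, Y @ Y.T             # random positive semi-definite matrices
H = A * B                           # entrywise (Hadamard) product, not A @ B
print(np.linalg.eigvalsh(H).min())  # should be >= 0 up to rounding error
```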

One corollary of the Schur product theorem is that any polynomial {P(z) = c_0 + c_1 z + \dots + c_d z^d} with non-negative coefficients {c_n \geq 0} is entrywise positivity preserving on the space {{\mathbb P}_N({\bf C})} of {N \times N} positive semi-definite Hermitian matrices, in the sense that for any matrix {A = (a_{ij})_{1 \leq i,j \leq N}} in {{\mathbb P}_N({\bf C})}, the entrywise application

\displaystyle  P[A] := (P(a_{ij}))_{1 \leq i,j \leq N}

of {P} to {A} is also positive semi-definite. (As before, one should caution that {P[A]} is not the application {P(A)} of {P} to {A} by the usual functional calculus.) Indeed, one can expand

\displaystyle  P[A] = c_0 A^{\circ 0} + c_1 A^{\circ 1} + \dots + c_d A^{\circ d},

where {A^{\circ i}} is the Hadamard product of {i} copies of {A}, and the claim now follows from the Schur product theorem and the fact that {{\mathbb P}_N({\bf C})} is a convex cone.

A slight variant of this argument, already observed by Pólya and Szegö in 1925, shows that if {I} is any subset of {{\bf C}} and

\displaystyle  f(z) = \sum_{n=0}^\infty c_n z^n \ \ \ \ \ (1)

is a power series with non-negative coefficients {c_n \geq 0} that is absolutely and uniformly convergent on {I}, then {f} will be entrywise positivity preserving on the set {{\mathbb P}_N(I)} of positive definite matrices with entries in {I}. (In the case that {I} is of the form {I = [0,\rho]}, such functions are precisely the absolutely monotonic functions on {I}.)

In the work of Schoenberg and of Rudin, we have a converse: if {f: (-1,1) \rightarrow {\bf C}} is a function that is entrywise positivity preserving on {{\mathbb P}_N((-1,1))} for all {N}, then it must be of the form (1) with {c_n \geq 0}. Variants of this result, with {(-1,1)} replaced by other domains, appear in the work of Horn, Vasudeva, and Guillot-Khare-Rajaratnam.

This gives a satisfactory classification of functions {f} that are entrywise positivity preservers in all dimensions {N} simultaneously. However, the question remains as to what happens if one fixes the dimension {N}, in which case one may have a larger class of entrywise positivity preservers. For instance, in the trivial case {N=1}, a function {f} is entrywise positivity preserving on {{\mathbb P}_1((0,\rho))} if and only if {f} is non-negative on {(0,\rho)}. For higher {N}, there is a necessary condition of Horn (refined slightly by Guillot-Khare-Rajaratnam) which asserts (at least in the case of smooth {f}) that all derivatives of {f} at zero up to {(N-1)^{th}} order must be non-negative in order for {f} to be entrywise positivity preserving on {{\mathbb P}_N((0,\rho))} for some {0 < \rho < \infty}. In particular, if {f} is of the form (1), then {c_0,\dots,c_{N-1}} must be non-negative. In fact, a stronger assertion can be made, namely that the first {N} non-zero coefficients in (1) (if they exist) must be positive, or equivalently any negative term in (1) must be preceded (though not necessarily immediately) by at least {N} positive terms. If {f} of the form (1) is entrywise positivity preserving on the larger set {{\mathbb P}_N((0,+\infty))}, one can furthermore show that any negative term in (1) must also be followed (though not necessarily immediately) by at least {N} positive terms.

The main result of this paper is that these sign conditions are the only constraints for entrywise positivity preserving power series. More precisely:

Theorem 1 For each {n}, let {\epsilon_n \in \{-1,0,+1\}} be a sign.

  • Suppose that any negative sign {\epsilon_M = -1} is preceded by at least {N} positive signs (thus there exists {0 \leq n_0 < \dots < n_{N-1} < M} with {\epsilon_{n_0} = \dots = \epsilon_{n_{N-1}} = +1}). Then, for any {0 < \rho < \infty}, there exists a convergent power series (1) on {(0,\rho)}, with each {c_n} having the sign of {\epsilon_n}, which is entrywise positivity preserving on {{\mathbb P}_N((0,\rho))}.
  • Suppose in addition that any negative sign {\epsilon_M = -1} is followed by at least {N} positive signs (thus there exists {M < n_N < \dots < n_{2N-1}} with {\epsilon_{n_N} = \dots = \epsilon_{n_{2N-1}} = +1}). Then there exists a convergent power series (1) on {(0,+\infty)}, with each {c_n} having the sign of {\epsilon_n}, which is entrywise positivity preserving on {{\mathbb P}_N((0,+\infty))}.

One can ask the same question with {(0,\rho)} or {(0,+\infty)} replaced by other domains such as {(-\rho,\rho)}, or the complex disk {D(0,\rho)}, but it turns out that there are far fewer entrywise positivity preserving functions in those cases basically because of the non-trivial zeroes of Schur polynomials in these ranges; see the paper for further discussion. We also have some quantitative bounds on how negative some of the coefficients can be compared to the positive coefficients, but they are a bit technical to state here.

The heart of the proofs of these results is an analysis of the determinants {\mathrm{det}\, P[ {\bf u} {\bf u}^* ]} of polynomials {P} applied entrywise to rank one matrices {{\bf u} {\bf u}^*}; the positivity of these determinants can be used (together with a continuity argument) to establish the positive definiteness of {P[{\bf u} {\bf u}^*]} for various ranges of {P} and {{\bf u}}. Using the Cauchy-Binet formula, one can rewrite such determinants as linear combinations of squares of magnitudes of generalised Vandermonde determinants

\displaystyle  \mathrm{det}( u_i^{n_j} )_{1 \leq i,j \leq N},

where {{\bf u} = (u_1,\dots,u_N)} and the signs of the coefficients in the linear combination are determined by the signs of the coefficients of {P}. The task is then to find upper and lower bounds for the magnitudes of such generalised Vandermonde determinants. These determinants oscillate in sign, which makes the problem look difficult; however, an algebraic miracle intervenes, namely the factorisation

\displaystyle  \mathrm{det}( u_i^{n_j} )_{1 \leq i,j \leq N} = \pm V({\bf u}) s_\lambda({\bf u})

of the generalised Vandermonde determinant into the ordinary Vandermonde determinant

\displaystyle  V({\bf u}) = \prod_{1 \leq i < j \leq N} (u_j - u_i)

and a Schur polynomial {s_\lambda} applied to {{\bf u}}, where the weight {\lambda} of the Schur polynomial is determined by {n_1,\dots,n_N} in a simple fashion. The problem then boils down to obtaining upper and lower bounds for these Schur polynomials. Because we are restricting attention to matrices taking values in {(0,\rho)} or {(0,+\infty)}, the entries of {{\bf u}} can be taken to be non-negative. One can then take advantage of the total positivity of the Schur polynomials to compare these polynomials with a monomial, at which point one can obtain good criteria for {P[A]} to be positive definite when {A} is a rank one positive definite matrix {A = {\bf u} {\bf u}^*}.
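
Here is an illustrative symbolic spot check of this factorisation in SymPy; the exponents {(0,2,5)}, which correspond to the partition {\lambda = (3,1,0)} when {N=3}, are an arbitrary choice:

```python
# Illustrative spot check of det(u_i^{n_j}) = +/- V(u) s_lambda(u); the
# exponents (0, 2, 5) are an arbitrary choice.
import sympy as sp

u = sp.symbols('u1 u2 u3', positive=True)
N, exps = 3, (0, 2, 5)
G = sp.Matrix(N, N, lambda i, j: u[i] ** exps[j])        # generalised Vandermonde
V = sp.prod(u[j] - u[i] for i in range(N) for j in range(i + 1, N))
lam = [exps[N - 1 - j] - (N - 1 - j) for j in range(N)]  # here lam = [3, 1, 0]
# Schur polynomial via the bialternant formula
num = sp.Matrix(N, N, lambda i, j: u[i] ** (lam[j] + N - 1 - j))
den = sp.Matrix(N, N, lambda i, j: u[i] ** (N - 1 - j))
s_lam = sp.cancel(num.det() / den.det())
# for increasing exponents the sign convention works out to +1
assert sp.expand(G.det() - V * s_lam) == 0
```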

If we allow the exponents {n_1,\dots,n_N} to be real numbers rather than integers (thus replacing polynomials or power series by Puiseux series or Hahn series), then we lose the above algebraic miracle, but we can replace it with a geometric miracle, namely the Harish-Chandra-Itzykson-Zuber identity, which I discussed in this previous blog post. This factors the above generalised Vandermonde determinant as the product of the ordinary Vandermonde determinant and an integral of a positive quantity over the orthogonal group, which one can again compare with a monomial after some fairly elementary estimates.

It remains to understand what happens for more general positive semi-definite matrices {A}. Here we use a trick of FitzGerald and Horn to amplify the rank one case to the general case, by expressing a general positive semi-definite matrix {A} as a linear combination of a rank one matrix {{\bf u} {\bf u}^*} and another positive semi-definite matrix {B} that vanishes on the last row and column (and is thus effectively a positive semi-definite {(N-1) \times (N-1)} matrix). Using the fundamental theorem of calculus to continuously deform the rank one matrix {{\bf u} {\bf u}^*} to {A} in the direction {B}, one can then obtain positivity results for {P[A]} from positivity results for {P[{\bf u} {\bf u}^*]} combined with an induction hypothesis on {N}.

The complete homogeneous symmetric polynomial {h_d(x_1,\dots,x_n)} of {n} variables {x_1,\dots,x_n} and degree {d} can be defined as

\displaystyle h_d(x_1,\dots,x_n) := \sum_{1 \leq i_1 \leq \dots \leq i_d \leq n} x_{i_1} \dots x_{i_d},

thus for instance

\displaystyle h_0(x_1,\dots,x_n) = 1,

\displaystyle h_1(x_1,\dots,x_n) = x_1 + \dots + x_n,

and

\displaystyle h_2(x_1,\dots,x_n) = x_1^2 + \dots + x_n^2 + \sum_{1 \leq i < j \leq n} x_i x_j.

One can also define all the complete homogeneous symmetric polynomials of {n} variables simultaneously by means of the generating function

\displaystyle \sum_{d=0}^\infty h_d(x_1,\dots,x_n) t^d = \frac{1}{(1-t x_1) \dots (1-tx_n)}. \ \ \ \ \ (1)

We will think of the variables {x_1,\dots,x_n} as taking values in the real numbers. When one does so, one might observe that the degree two polynomial {h_2} is a positive definite quadratic form, since it has the sum of squares representation

\displaystyle h_2(x_1,\dots,x_n) = \frac{1}{2} \sum_{i=1}^n x_i^2 + \frac{1}{2} (x_1+\dots+x_n)^2.

In particular, {h_2(x_1,\dots,x_n) > 0} unless {x_1=\dots=x_n=0}. This can be compared against the superficially similar quadratic form

\displaystyle x_1^2 + \dots + x_n^2 + \sum_{1 \leq i < j \leq n} \epsilon_{ij} x_i x_j

where {\epsilon_{ij} = \pm 1} are independent randomly chosen signs. The Wigner semicircle law says that for large {n}, the eigenvalues of this form will be mostly distributed in the interval {[-\sqrt{n}, \sqrt{n}]} according to the semicircle distribution, so in particular the form is quite far from being positive definite despite the presence of the first {n} positive terms. Thus the positive definiteness is coming from the finer algebraic structure of {h_2}, and not just from the magnitudes of its coefficients.

One could ask whether the same positivity holds for other degrees {d} than two. For odd degrees, the answer is clearly no, since {h_d(-x_1,\dots,-x_n) = -h_d(x_1,\dots,x_n)} in that case. But one could hope for instance that

\displaystyle h_4(x_1,\dots,x_n) = \sum_{1 \leq i \leq j \leq k \leq l \leq n} x_i x_j x_k x_l

also has a sum of squares representation that demonstrates positive definiteness. This turns out to be true, but is remarkably tedious to establish directly. Nevertheless, we have a nice result of Hunter that gives positive definiteness for all even degrees {d}. In fact, a modification of his argument gives a little bit more:

Theorem 1 Let {n \geq 1}, let {d \geq 0} be even, and let {x_1,\dots,x_n} be reals.

  • (i) (Positive definiteness) One has {h_d(x_1,\dots,x_n) \geq 0}, with strict inequality unless {x_1=\dots=x_n=0}.
  • (ii) (Schur convexity) One has {h_d(x_1,\dots,x_n) \geq h_d(y_1,\dots,y_n)} whenever {(x_1,\dots,x_n)} majorises {(y_1,\dots,y_n)}, with equality if and only if {(y_1,\dots,y_n)} is a permutation of {(x_1,\dots,x_n)}.
  • (iii) (Schur-Ostrowski criterion for Schur convexity) For any {1 \leq i < j \leq n}, one has {(x_i - x_j) (\frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j}) h_d(x_1,\dots,x_n) \geq 0}, with strict inequality unless {x_i=x_j}.

Proof: We induct on {d} (allowing {n} to be arbitrary). The claim is trivially true for {d=0}, and easily verified for {d=2}, so suppose that {d \geq 4} and the claims (i), (ii), (iii) have already been proven for {d-2} (and for arbitrary {n}).

If we apply the differential operator {(x_i - x_j) (\frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j})} to {\frac{1}{(1-t x_1) \dots (1-tx_n)}} using the product rule, we obtain after a brief calculation

\displaystyle \frac{(x_i-x_j)^2 t^2}{(1-t x_1) \dots (1-tx_n) (1-t x_i) (1-t x_j)}.

Using (1) and extracting the {t^d} coefficient, we obtain the identity

\displaystyle (x_i - x_j) (\frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j}) h_d(x_1,\dots,x_n)

\displaystyle = (x_i-x_j)^2 h_{d-2}( x_1,\dots,x_n,x_i,x_j). \ \ \ \ \ (2)

The claim (iii) then follows from (i) and the induction hypothesis.

To obtain (ii), we use the more general statement (known as the Schur-Ostrowski criterion) that (ii) is implied by (iii) if we replace {h_d} by an arbitrary symmetric, continuously differentiable function. To establish this criterion, we induct on {n} (this argument can be made independently of the existing induction on {d}). If {(y_1,\dots,y_n)} is majorised by {(x_1,\dots,x_n)}, it lies in the permutahedron of {(x_1,\dots,x_n)}. If {(y_1,\dots,y_n)} lies on a face of this permutahedron, then after permuting both the {x_i} and {y_j} we may assume that {(y_1,\dots,y_m)} is majorised by {(x_1,\dots,x_m)}, and {(y_{m+1},\dots,y_n)} is majorised by {(x_{m+1},\dots,x_n)} for some {1 \leq m < n}, and the claim then follows from two applications of the induction hypothesis. If instead {(y_1,\dots,y_n)} lies in the interior of the permutahedron, one can follow it to the boundary by using one of the vector fields {(x_i - x_j) (\frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j})}, and the claim follows from the boundary case.

Finally, to obtain (i), we observe that {(x_1,\dots,x_n)} majorises {(x,\dots,x)}, where {x} is the arithmetic mean of {x_1,\dots,x_n}. But {h_d(x,\dots,x)} is clearly a positive multiple of {x^d}, and the claim now follows from (ii). \Box

If the variables {x_1,\dots,x_n} are restricted to be nonnegative, the same argument gives Schur convexity for odd degrees also.
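
As an illustrative check, the key identity (2) can be verified symbolically for small parameters (the choices {d=4}, {n=3}, and the indices {i,j} below are arbitrary, and SymPy is assumed):

```python
# Illustrative symbolic check of identity (2); d, n, i, j are arbitrary choices.
import itertools

import sympy as sp

def h(d, xs):
    # complete homogeneous symmetric polynomial of degree d in the list xs
    # (repeated entries in xs are allowed, and counted with multiplicity)
    return sp.Add(*[sp.Mul(*c) for c in
                    itertools.combinations_with_replacement(xs, d)])

xs = sp.symbols('x1 x2 x3')
d, i, j = 4, 0, 1
lhs = (xs[i] - xs[j]) * (sp.diff(h(d, xs), xs[i]) - sp.diff(h(d, xs), xs[j]))
rhs = (xs[i] - xs[j]) ** 2 * h(d - 2, list(xs) + [xs[i], xs[j]])
assert sp.expand(lhs - rhs) == 0   # identity (2) holds for these parameters
```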

Hunter's proof of positive definiteness is arranged a little differently from the one above, but still relies ultimately on the identity (2). I wonder if there is a genuinely different way to establish positive definiteness that does not go through this identity.

Suppose {F: X \rightarrow Y} is a continuous (but nonlinear) map from one normed vector space {X} to another {Y}. The continuity means, roughly speaking, that if {x_0, x \in X} are such that {\|x-x_0\|_X} is small, then {\|F(x)-F(x_0)\|_Y} is also small (though the precise notion of “smallness” may depend on {x} or {x_0}, particularly if {F} is not known to be uniformly continuous). If {F} is known to be differentiable (in, say, the Fréchet sense), then we in fact have a linear bound of the form

\displaystyle  \|F(x)-F(x_0)\|_Y \leq C(x_0) \|x-x_0\|_X

for some {C(x_0)} depending on {x_0}, if {\|x-x_0\|_X} is small enough; one can of course make {C(x_0)} independent of {x_0} (and drop the smallness condition) if {F} is known instead to be Lipschitz continuous.

In many applications in analysis, one would like more explicit and quantitative bounds that estimate quantities like {\|F(x)-F(x_0)\|_Y} in terms of quantities like {\|x-x_0\|_X}. There are a number of ways to do this. First of all, there is of course the trivial estimate arising from the triangle inequality:

\displaystyle  \|F(x)-F(x_0)\|_Y \leq \|F(x)\|_Y + \|F(x_0)\|_Y. \ \ \ \ \ (1)

This estimate is usually not very good when {x} and {x_0} are close together. However, when {x} and {x_0} are far apart, this estimate can be more or less sharp. For instance, if the magnitude of {F} varies so much from {x_0} to {x} that {\|F(x)\|_Y} is more than (say) twice that of {\|F(x_0)\|_Y}, or vice versa, then (1) is sharp up to a multiplicative constant. Also, if {F} is oscillatory in nature, and the distance between {x} and {x_0} exceeds the “wavelength” of the oscillation of {F} at {x_0} (or at {x}), then one also typically expects (1) to be close to sharp. Conversely, if {F} does not vary much in magnitude from {x_0} to {x}, and the distance between {x} and {x_0} is less than the wavelength of any oscillation present in {F}, one expects to be able to improve upon (1).

When {F} is relatively simple in form, one can sometimes proceed simply by substituting {x = x_0 + h}. For instance, if {F: R \rightarrow R} is the squaring function {F(x) = x^2} in a commutative ring {R}, one has

\displaystyle  F(x_0+h) = (x_0+h)^2 = x_0^2 + 2x_0 h+ h^2

and thus

\displaystyle  F(x_0+h) - F(x_0) = 2x_0 h + h^2

or in terms of the original variables {x, x_0} one has

\displaystyle  F(x) - F(x_0) = 2 x_0 (x-x_0) + (x-x_0)^2.

If the ring {R} is not commutative, one has to modify this to

\displaystyle  F(x) - F(x_0) = x_0 (x-x_0) + (x-x_0) x_0 + (x-x_0)^2.

Thus, for instance, if {A, B} are {n \times n} matrices and {\| \cdot \|_{op}} denotes the operator norm, one sees from the triangle inequality and the sub-multiplicativity {\| AB\|_{op} \leq \| A \|_{op} \|B\|_{op}} of operator norm that

\displaystyle  \| A^2 - B^2 \|_{op} \leq \| A - B \|_{op} ( 2 \|B\|_{op} + \|A - B \|_{op} ). \ \ \ \ \ (2)

If {F(x)} involves {x} (or various components of {x}) in several places, one can sometimes get a good estimate by “swapping” {x} with {x_0} at each of the places in turn, using a telescoping series. For instance, if we again use the squaring function {F(x) = x^2 = x x} in a non-commutative ring, we have

\displaystyle  F(x) - F(x_0) = x x - x_0 x_0

\displaystyle  = (x x - x_0 x) + (x_0 x - x_0 x_0)

\displaystyle  = (x-x_0) x + x_0 (x-x_0)

which for instance leads to a slight improvement of (2):

\displaystyle  \| A^2 - B^2 \|_{op} \leq \| A - B \|_{op} ( \| A\|_{op} + \|B\|_{op} ).

More generally, for any natural number {n}, one has the identity

\displaystyle  x^n - x_0^n = (x-x_0) (x^{n-1} + x^{n-2} x_0 + \dots + x x_0^{n-2} + x_0^{n-1}) \ \ \ \ \ (3)

in a commutative ring, while in a non-commutative ring one must modify this to

\displaystyle  x^n - x_0^n = \sum_{i=0}^{n-1} x_0^i (x-x_0) x^{n-1-i},

and for matrices one has

\displaystyle  \| A^n - B^n \|_{op} \leq \| A-B\|_{op} ( \|A\|_{op}^{n-1} + \| A\|_{op}^{n-2} \| B\|_{op} + \dots + \|B\|_{op}^{n-1} ).
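
Here is a small numerical check (illustrative only, assuming NumPy) of the non-commutative telescoping identity and the operator norm bound that it yields; the matrix size and the exponent are arbitrary choices:

```python
# Illustrative sanity check; matrix size and exponent are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 5                                   # matrix size and exponent
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
power = np.linalg.matrix_power
op = lambda M: np.linalg.norm(M, 2)           # operator norm

# telescoping identity: A^N - B^N = sum_i B^i (A - B) A^(N-1-i)
telescoped = sum(power(B, i) @ (A - B) @ power(A, N - 1 - i) for i in range(N))
assert np.allclose(telescoped, power(A, N) - power(B, N))

# resulting operator norm bound
bound = op(A - B) * sum(op(A) ** (N - 1 - i) * op(B) ** i for i in range(N))
assert op(power(A, N) - power(B, N)) <= bound
```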

Exercise 1 If {U} and {V} are unitary {n \times n} matrices, show that the commutator {[U,V] := U V U^{-1} V^{-1}} obeys the inequality

\displaystyle  \| [U,V] - I \|_{op} \leq 2 \| U - I \|_{op} \| V - I \|_{op}.

(Hint: first control {\| UV - VU \|_{op}}.)

Now suppose (for simplicity) that {F: {\bf R}^d \rightarrow {\bf R}^{d'}} is a map between Euclidean spaces. If {F} is continuously differentiable, then one can use the fundamental theorem of calculus to write

\displaystyle  F(x) - F(x_0) = \int_0^1 \frac{d}{dt} F( \gamma(t) )\ dt

where {\gamma: [0,1] \rightarrow Y} is any continuously differentiable path from {x_0} to {x}. For instance, if one uses the straight line path {\gamma(t) := (1-t) x_0 + tx}, one has

\displaystyle  F(x) - F(x_0) = \int_0^1 ((x-x_0) \cdot \nabla F)( (1-t) x_0 + t x )\ dt.

In the one-dimensional case {d=1}, this simplifies to

\displaystyle  F(x) - F(x_0) = (x-x_0) \int_0^1 F'( (1-t) x_0 + t x )\ dt. \ \ \ \ \ (4)

Among other things, this immediately implies the factor theorem for {C^k} functions: if {F} is a {C^k({\bf R})} function for some {k \geq 1} that vanishes at some point {x_0}, then {F(x)} factors as the product of {x-x_0} and some {C^{k-1}} function {G}. Another basic consequence is that if {\nabla F} is uniformly bounded in magnitude by some constant {C}, then {F} is Lipschitz continuous with the same constant {C}.

Applying (4) to the power function {x \mapsto x^n}, we obtain the identity

\displaystyle  x^n - x_0^n = n (x-x_0) \int_0^1 ((1-t) x_0 + t x)^{n-1}\ dt \ \ \ \ \ (5)

which can be compared with (3). Indeed, for {x_0} and {x} close to {1}, one can use logarithms and Taylor expansion to arrive at the approximation {((1-t) x_0 + t x)^{n-1} \approx x_0^{(1-t) (n-1)} x^{t(n-1)}}, so (3) behaves a little like a Riemann sum approximation to (5).

Exercise 2 For each {i=1,\dots,n}, let {X^{(1)}_i} and {X^{(0)}_i} be random variables taking values in a measurable space {R_i}, and let {F: R_1 \times \dots \times R_n \rightarrow {\bf R}^m} be a bounded measurable function.

  • (i) (Lindeberg exchange identity) Show that

    \displaystyle  \mathop{\bf E} F(X^{(1)}_1,\dots,X^{(1)}_n) - \mathop{\bf E} F(X^{(0)}_1,\dots,X^{(0)}_n)

    \displaystyle = \sum_{i=1}^n \mathop{\bf E} F( X^{(1)}_1,\dots, X^{(1)}_{i-1}, X^{(1)}_i, X^{(0)}_{i+1}, \dots, X^{(0)}_n)

    \displaystyle - \mathop{\bf E} F( X^{(1)}_1,\dots, X^{(1)}_{i-1}, X^{(0)}_i, X^{(0)}_{i+1}, \dots, X^{(0)}_n).

  • (ii) (Knowles-Yin exchange identity) Show that

    \displaystyle  \mathop{\bf E} F(X^{(1)}_1,\dots,X^{(1)}_n) - \mathop{\bf E} F(X^{(0)}_1,\dots,X^{(0)}_n)

    \displaystyle = \int_0^1 \sum_{i=1}^n \mathop{\bf E} F( X^{(t)}_1,\dots, X^{(t)}_{i-1}, X^{(1)}_i, X^{(t)}_{i+1}, \dots, X^{(t)}_n)

    \displaystyle - \mathop{\bf E} F( X^{(t)}_1,\dots, X^{(t)}_{i-1}, X^{(0)}_i, X^{(t)}_{i+1}, \dots, X^{(t)}_n)\ dt,

    where {X^{(t)}_i = 1_{I_i \leq t} X^{(0)}_i + 1_{I_i > t} X^{(1)}_i} is a mixture of {X^{(0)}_i} and {X^{(1)}_i}, with {I_1,\dots,I_n} uniformly drawn from {[0,1]} independently of each other and of the {X^{(0)}_1,\dots,X^{(0)}_n, X^{(1)}_1,\dots,X^{(1)}_n}.

  • (iii) Discuss the relationship between the identities in parts (i), (ii) with the identities (3), (5).

(The identity in (i) is the starting point for the Lindeberg exchange method in probability theory, discussed for instance in this previous post. The identity in (ii) can also be used in the Lindeberg exchange method; the terms in the right-hand side are slightly more symmetric in the indices {1,\dots,n}, which can be a technical advantage in some applications; see this paper of Knowles and Yin for an instance of this.)

Exercise 3 If {F: {\bf R}^d \rightarrow {\bf R}^{d'}} is continuously {k} times differentiable, establish Taylor’s theorem with remainder

\displaystyle  F(x) = \sum_{j=0}^{k-1} \frac{1}{j!} (((x-x_0) \cdot \nabla)^j F)( x_0 )

\displaystyle + \int_0^1 \frac{(1-t)^{k-1}}{(k-1)!} (((x-x_0) \cdot \nabla)^k F)((1-t) x_0 + t x)\ dt.

If {\nabla^k F} is bounded, conclude that

\displaystyle  |F(x) - \sum_{j=0}^{k-1} \frac{1}{j!} (((x-x_0) \cdot \nabla)^j F)( x_0 )|

\displaystyle \leq \frac{|x-x_0|^k}{k!} \sup_{y \in {\bf R}^d} |\nabla^k F(y)|.

For real scalar functions {F: {\bf R}^d \rightarrow {\bf R}}, the average value of the continuous real-valued function {(x - x_0) \cdot \nabla F((1-t) x_0 + t x)} must be attained at some point {t} in the interval {[0,1]}. We thus conclude the mean-value theorem

\displaystyle  F(x) - F(x_0) = ((x - x_0) \cdot \nabla F)((1-t) x_0 + t x)

for some {t \in [0,1]} (that can depend on {x}, {x_0}, and {F}). This can for instance give a second proof of the fact that continuously differentiable functions {F} with bounded derivative are Lipschitz continuous. However, it is worth stressing that the mean-value theorem is only available for real scalar functions; it is false for instance for complex scalar functions. A basic counterexample is given by the function {e(x) := e^{2\pi i x}}; there is no {t \in [0,1]} for which {e(1) - e(0) = e'(t)}. On the other hand, as {e'} has magnitude {2\pi}, we still know from (4) that {e} is Lipschitz of constant {2\pi}, and when combined with (1) we obtain the basic bounds

\displaystyle  |e(x) - e(y)| \leq \min( 2, 2\pi |x-y| )

which are already very useful for many applications.

Exercise 4 Let {H_0, V} be {n \times n} matrices, and let {t} be a non-negative real.

  • (i) Establish the Duhamel formula

    \displaystyle  e^{t(H_0+V)} = e^{tH_0} + \int_0^t e^{(t-s) H_0} V e^{s (H_0+V)}\ ds

    \displaystyle  = e^{tH_0} + \int_0^t e^{(t-s) (H_0+V)} V e^{s H_0}\ ds

    where {e^A} denotes the matrix exponential of {A}. (Hint: Differentiate {e^{(t-s) H_0} e^{s (H_0+V)}} or {e^{(t-s) (H_0+V)} e^{s H_0}} in {s}.)

  • (ii) Establish the iterated Duhamel formula

    \displaystyle  e^{t(H_0+V)} = e^{tH_0} + \sum_{j=1}^k \int_{0 \leq t_1 \leq \dots \leq t_j \leq t}

    \displaystyle e^{(t-t_j) H_0} V e^{(t_j-t_{j-1}) H_0} V \dots e^{(t_2-t_1) H_0} V e^{t_1 H_0}\ dt_1 \dots dt_j

    \displaystyle  + \int_{0 \leq t_1 \leq \dots \leq t_{k+1} \leq t}

    \displaystyle  e^{(t-t_{k+1}) (H_0+V)} V e^{(t_{k+1}-t_k) H_0} V \dots e^{(t_2-t_1) H_0} V e^{t_1 H_0}\ dt_1 \dots dt_{k+1}

    for any {k \geq 0}.

  • (iii) Establish the infinitely iterated Duhamel formula

    \displaystyle  e^{t(H_0+V)} = e^{tH_0} + \sum_{j=1}^\infty \int_{0 \leq t_1 \leq \dots \leq t_j \leq t}

    \displaystyle e^{(t-t_j) H_0} V e^{(t_j-t_{j-1}) H_0} V \dots e^{(t_2-t_1) H_0} V e^{t_1 H_0}\ dt_1 \dots dt_j.

  • (iv) If {H(t)} is an {n \times n} matrix depending in a continuously differentiable fashion on {t}, establish the variation formula

    \displaystyle  \frac{d}{dt} e^{H(t)} = (F(\mathrm{ad}(H(t))) H'(t)) e^{H(t)}

    where {\mathrm{ad}(H)} is the adjoint representation {\mathrm{ad}(H)(V) = HV - VH} applied to {H}, and {F} is the function

    \displaystyle  F(z) := \int_0^1 e^{sz}\ ds

    (thus {F(z) = \frac{e^z-1}{z}} for non-zero {z}), with {F(\mathrm{ad}(H(t)))} defined using functional calculus.

We remark that further manipulation of (iv) of the above exercise using the fundamental theorem of calculus eventually leads to the Baker-Campbell-Hausdorff-Dynkin formula, as discussed in this previous blog post.
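
As a numerical spot check of the first Duhamel formula from Exercise 4(i) (illustrative only, assuming SciPy; the matrix size, perturbation scale, and time are arbitrary choices):

```python
# Illustrative sanity check of the Duhamel formula; sizes are arbitrary choices.
import numpy as np
from scipy.integrate import quad_vec
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, t = 4, 1.0
H0 = rng.standard_normal((n, n))
V = 0.5 * rng.standard_normal((n, n))
# Duhamel: e^{t(H0+V)} = e^{tH0} + int_0^t e^{(t-s)H0} V e^{s(H0+V)} ds
integral, _ = quad_vec(lambda s: expm((t - s) * H0) @ V @ expm(s * (H0 + V)), 0, t)
assert np.allclose(expm(t * H0) + integral, expm(t * (H0 + V)))
```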

Exercise 5 Let {A, B} be positive definite {n \times n} matrices, and let {Y} be an {n \times n} matrix. Show that there is a unique solution {X} to the Sylvester equation

\displaystyle  AX + X B = Y

which is given by the formula

\displaystyle  X = \int_0^\infty e^{-tA} Y e^{-tB}\ dt.
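
Here is an illustrative numerical check of this formula (assuming SciPy, whose solve_sylvester routine solves {AX + XB = Y} directly; the matrix sizes are arbitrary choices):

```python
# Illustrative sanity check of Exercise 5; sizes are arbitrary choices.
import numpy as np
from scipy.integrate import quad_vec
from scipy.linalg import expm, solve_sylvester

rng = np.random.default_rng(0)
n = 4
P, Q = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = P @ P.T + np.eye(n), Q @ Q.T + np.eye(n)   # positive definite
Y = rng.standard_normal((n, n))
# X = int_0^infty e^{-tA} Y e^{-tB} dt, evaluated by adaptive quadrature
X, _ = quad_vec(lambda t: expm(-t * A) @ Y @ expm(-t * B), 0, np.inf)
assert np.allclose(A @ X + X @ B, Y)              # X solves the Sylvester equation
assert np.allclose(X, solve_sylvester(A, B, Y))   # and matches scipy's solver
```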

In the above examples we had applied the fundamental theorem of calculus along linear curves {\gamma(t) = (1-t) x_0 + t x}. However, it is sometimes better to use other curves. For instance, the circular arc {\gamma(t) = \cos(\pi t/2) x_0 + \sin(\pi t/2) x} can be useful, particularly if {x_0} and {x} are “orthogonal” or “independent” in some sense; a good example of this is the proof by Maurey and Pisier of the gaussian concentration inequality, given in Theorem 8 of this previous blog post. In a similar vein, if one wishes to compare a scalar random variable {X} of mean zero and variance one with a Gaussian random variable {G} of mean zero and variance one, it can be useful to introduce the intermediate random variables {\gamma(t) := (1-t)^{1/2} X + t^{1/2} G} (where {X} and {G} are independent); note that these variables have mean zero and variance one, and after coupling them together appropriately they evolve by the Ornstein-Uhlenbeck process, which has many useful properties. For instance, one can use these ideas to establish monotonicity formulae for entropy; see e.g. this paper of Courtade for an example of this and further references. More generally, one can exploit curves {\gamma} that flow according to some geometrically natural ODE or PDE; several examples of this occur famously in Perelman’s proof of the Poincaré conjecture via Ricci flow, discussed for instance in this previous set of lecture notes.

In some cases, it is difficult to compute {F(x)-F(x_0)} or the derivative {\nabla F} directly, but one can instead proceed by implicit differentiation, or some variant thereof. Consider for instance the matrix inversion map {F(A) := A^{-1}} (defined on the open dense subset of {n \times n} matrices consisting of invertible matrices). If one wants to compute {F(B)-F(A)} for {B} close to {A}, one can temporarily write {F(B) - F(A) = E}, thus

\displaystyle  B^{-1} - A^{-1} = E.

Multiplying both sides on the left by {B} to eliminate the {B^{-1}} term, and on the right by {A} to eliminate the {A^{-1}} term, one obtains

\displaystyle  A - B = B E A

and thus on reversing these steps we arrive at the basic identity

\displaystyle  B^{-1} - A^{-1} = B^{-1} (A - B) A^{-1}. \ \ \ \ \ (6)

For instance, if {H_0, V} are {n \times n} matrices, and we consider the resolvents

\displaystyle  R_0(z) := (H_0 - z I)^{-1}; \quad R_V(z) := (H_0 + V - zI)^{-1}

then we have the resolvent identity

\displaystyle  R_V(z) - R_0(z) = - R_V(z) V R_0(z) \ \ \ \ \ (7)

as long as {z} does not lie in the spectrum of {H_0} or {H_0+V} (for instance, if {H_0}, {V} are self-adjoint then one can take {z} to be any strictly complex number). One can iterate this identity to obtain

\displaystyle  R_V(z) = \sum_{j=0}^k (-R_0(z) V)^j R_0(z) + (-R_V(z) V) (-R_0(z) V)^k R_0(z)

for any natural number {k}; in particular, if {R_0(z) V} has operator norm less than one, one has the Neumann series

\displaystyle  R_V(z) = \sum_{j=0}^\infty (-R_0(z) V)^j R_0(z).
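
As an illustrative numerical check of the resolvent identity (7) and the Neumann series (assuming NumPy; the matrix size, the perturbation scale, and the choice {z = 2i} are arbitrary, chosen so that {\|R_0(z) V\|_{op} < 1}):

```python
# Illustrative sanity check; sizes, scales and z are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n)); H0 = (M + M.T) / 2      # self-adjoint
W = rng.standard_normal((n, n)); V = 0.1 * (W + W.T)     # small perturbation
z = 2j                                                   # strictly complex
R0 = np.linalg.inv(H0 - z * np.eye(n))
RV = np.linalg.inv(H0 + V - z * np.eye(n))
assert np.allclose(RV - R0, -RV @ V @ R0)                # resolvent identity (7)
# Neumann series, valid here since ||R0 V|| < 1
series = sum(np.linalg.matrix_power(-R0 @ V, j) @ R0 for j in range(60))
assert np.allclose(series, RV)
```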

Similarly, if {A(t)} is a family of invertible matrices that depends in a continuously differentiable fashion on a time variable {t}, then by implicitly differentiating the identity

\displaystyle  A(t) A(t)^{-1} = I

in {t} using the product rule, we obtain

\displaystyle  (\frac{d}{dt} A(t)) A(t)^{-1} + A(t) \frac{d}{dt} A(t)^{-1} = 0

and hence

\displaystyle  \frac{d}{dt} A(t)^{-1} = - A(t)^{-1} (\frac{d}{dt} A(t)) A(t)^{-1}

(this identity may also be easily derived from (6)). One can then use the fundamental theorem of calculus to obtain variants of (6), for instance by using the curve {\gamma(t) = (1-t) A + tB} we arrive at

\displaystyle  B^{-1} - A^{-1} = \int_0^1 ((1-t)A + tB)^{-1} (A-B) ((1-t)A + tB)^{-1}\ dt

assuming that the curve stays entirely within the set of invertible matrices. While this identity may seem more complicated than (6), it is more symmetric, which conveys some advantages. For instance, using this identity it is easy to see that if {A, B} are positive definite with {A>B} in the sense of positive definite matrices (that is, {A-B} is positive definite), then {B^{-1} > A^{-1}}. (Try to prove this using (6) instead!)

Exercise 6 If {A} is an invertible {n \times n} matrix and {u, v} are {n \times 1} vectors, establish the Sherman-Morrison formula

\displaystyle  (A + t uv^T)^{-1} = A^{-1} - \frac{t}{1 + t v^T A^{-1} u} A^{-1} uv^T A^{-1}

whenever {t} is a scalar such that {1 + t v^T A^{-1} u} is non-zero. (See also this previous blog post for more discussion of these sorts of identities.)
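
Here is a quick numerical check of the Sherman-Morrison formula (illustrative only, assuming NumPy; the size and the scalar {t} are arbitrary choices):

```python
# Illustrative sanity check of Sherman-Morrison; size and t are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, t = 4, 0.7
A = rng.standard_normal((n, n)) + n * np.eye(n)   # comfortably invertible
u, v = rng.standard_normal((n, 1)), rng.standard_normal((n, 1))
Ainv = np.linalg.inv(A)
denom = (1 + t * (v.T @ Ainv @ u)).item()         # assumed non-zero here
lhs = np.linalg.inv(A + t * u @ v.T)
rhs = Ainv - (t / denom) * (Ainv @ u @ v.T @ Ainv)
assert np.allclose(lhs, rhs)
```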

One can use the Cauchy integral formula to extend these identities to other functions of matrices. For instance, if {F: {\bf C} \rightarrow {\bf C}} is an entire function, and {\gamma} is a counterclockwise contour that goes around the spectrum of both {H_0} and {H_0+V}, then we have

\displaystyle  F(H_0+V) = \frac{-1}{2\pi i} \int_\gamma F(z) R_V(z)\ dz

and similarly

\displaystyle  F(H_0) = \frac{-1}{2\pi i} \int_\gamma F(z) R_0(z)\ dz

and hence by (7) one has

\displaystyle  F(H_0+V) - F(H_0) = \frac{1}{2\pi i} \int_\gamma F(z) R_V(z) V R_0(z)\ dz;

similarly, if {H(t)} depends on {t} in a continuously differentiable fashion, then

\displaystyle  \frac{d}{dt} F(H(t)) = \frac{1}{2\pi i} \int_\gamma F(z) (H(t) - zI)^{-1} H'(t) (H(t)-zI)^{-1}\ dz

as long as {\gamma} goes around the spectrum of {H(t)}.

Exercise 7 If {H(t)} is an {n \times n} matrix depending continuously differentiably on {t}, and {F: {\bf C} \rightarrow {\bf C}} is an entire function, establish the tracial chain rule

\displaystyle  \frac{d}{dt} \hbox{tr} F(H(t)) = \hbox{tr}(F'(H(t)) H'(t)).

In a similar vein, given that the logarithm function is the antiderivative of the reciprocal, one can express the matrix logarithm {\log A} of a positive definite matrix by the fundamental theorem of calculus identity

\displaystyle  \log A = \int_0^\infty (I + sI)^{-1} - (A + sI)^{-1}\ ds

(with the first term {(I+sI)^{-1}} needed to prevent a logarithmic divergence in the integral). Differentiating, we see that if {A(t)} is a family of positive definite matrices depending continuously differentiably on {t}, then

\displaystyle  \frac{d}{dt} \log A(t) = \int_0^\infty (A(t) + sI)^{-1} A'(t) (A(t)+sI)^{-1}\ ds.

This can be used for instance to show that {\log} is a monotone increasing function, in the sense that {\log A> \log B} whenever {A > B > 0} in the sense of positive definite matrices. One can of course integrate this formula to obtain some formulae for the difference {\log A - \log B} of the logarithm of two positive definite matrices {A,B}.
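
As an illustrative numerical check of the above integral formula for the matrix logarithm (assuming SciPy; the matrix size is an arbitrary choice):

```python
# Illustrative sanity check of the integral formula for log A; size arbitrary.
import numpy as np
from scipy.integrate import quad_vec
from scipy.linalg import logm

rng = np.random.default_rng(0)
n = 3
P = rng.standard_normal((n, n))
A = P @ P.T + np.eye(n)                  # positive definite
I = np.eye(n)
integrand = lambda s: np.linalg.inv((1 + s) * I) - np.linalg.inv(A + s * I)
L, _ = quad_vec(integrand, 0, np.inf)
assert np.allclose(L, logm(A))           # matches scipy's matrix logarithm
```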

Comparing the square roots {A^{1/2}, B^{1/2}} of two positive definite matrices {A,B} is trickier; there are multiple ways to proceed. One approach is to use contour integration as before (but one has to take some care to avoid branch cuts of the square root). Another is to express the square root in terms of exponentials via the formula

\displaystyle  A^{1/2} = \frac{1}{\Gamma(-1/2)} \int_0^\infty (e^{-tA} - I) t^{-1/2} \frac{dt}{t}

where {\Gamma} is the gamma function; this formula can be verified by first diagonalising {A} to reduce to the scalar case and using the definition of the Gamma function. Then one has

\displaystyle  A^{1/2} - B^{1/2} = \frac{1}{\Gamma(-1/2)} \int_0^\infty (e^{-tA} - e^{-tB}) t^{-1/2} \frac{dt}{t}

and one can use some of the previous identities to control {e^{-tA} - e^{-tB}}. This is pretty messy though. A third way to proceed is via implicit differentiation. If for instance {A(t)} is a family of positive definite matrices depending continuously differentiably on {t}, we can differentiate the identity

\displaystyle  A(t)^{1/2} A(t)^{1/2} = A(t)

to obtain

\displaystyle  A(t)^{1/2} \frac{d}{dt} A(t)^{1/2} + (\frac{d}{dt} A(t)^{1/2}) A(t)^{1/2} = \frac{d}{dt} A(t).

This can for instance be solved using Exercise 5 to obtain

\displaystyle  \frac{d}{dt} A(t)^{1/2} = \int_0^\infty e^{-sA(t)^{1/2}} A'(t) e^{-sA(t)^{1/2}}\ ds

and this can in turn be integrated to obtain a formula for {A^{1/2} - B^{1/2}}. This is again a rather messy formula, but it does at least demonstrate that the square root is a monotone increasing function on positive definite matrices: {A > B > 0} implies {A^{1/2} > B^{1/2} > 0}.
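
Here is an illustrative numerical test of this monotonicity (assuming SciPy; the construction of a pair {A > B > 0} below is an arbitrary choice):

```python
# Illustrative test of monotonicity of the matrix square root; construction
# of the pair A > B > 0 is an arbitrary choice.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 4
Q1, Q2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
B = Q1 @ Q1.T + np.eye(n)             # B > 0
A = B + Q2 @ Q2.T + 0.1 * np.eye(n)   # A - B > 0, hence A > B > 0
diff = sqrtm(A) - sqrtm(B)
print(np.linalg.eigvalsh(diff.real).min())  # smallest eigenvalue, expect > 0
```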

Several of the above identities for matrices can be (carefully) extended to operators on Hilbert spaces provided that they are sufficiently well behaved (in particular, if they have a good functional calculus, and if various spectral hypotheses are obeyed). We will not attempt to do so here, however.

Suppose one has a bounded sequence {(a_n)_{n=1}^\infty = (a_1, a_2, \dots)} of real numbers. What kinds of limits can one form from this sequence?

Of course, we have the usual notion of limit {\lim_{n \rightarrow \infty} a_n}, which in this post I will refer to as the classical limit to distinguish from the other limits discussed in this post. The classical limit, if it exists, is the unique real number {L} such that for every {\varepsilon>0}, one has {|a_n-L| \leq \varepsilon} for all sufficiently large {n}. We say that a sequence is (classically) convergent if its classical limit exists. The classical limit obeys many useful limit laws when applied to classically convergent sequences. Firstly, it is linear: if {(a_n)_{n=1}^\infty} and {(b_n)_{n=1}^\infty} are classically convergent sequences, then {(a_n+b_n)_{n=1}^\infty} is also classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} (a_n + b_n) = (\lim_{n \rightarrow \infty} a_n) + (\lim_{n \rightarrow \infty} b_n) \ \ \ \ \ (1)

and similarly for any scalar {c}, {(ca_n)_{n=1}^\infty} is classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} (ca_n) = c \lim_{n \rightarrow \infty} a_n. \ \ \ \ \ (2)

It is also an algebra homomorphism: {(a_n b_n)_{n=1}^\infty} is also classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} (a_n b_n) = (\lim_{n \rightarrow \infty} a_n) (\lim_{n \rightarrow \infty} b_n). \ \ \ \ \ (3)

We also have shift invariance: if {(a_n)_{n=1}^\infty} is classically convergent, then so is {(a_{n+1})_{n=1}^\infty} with

\displaystyle  \lim_{n \rightarrow \infty} a_{n+1} = \lim_{n \rightarrow \infty} a_n \ \ \ \ \ (4)

and more generally in fact for any injection {\phi: {\bf N} \rightarrow {\bf N}}, {(a_{\phi(n)})_{n=1}^\infty} is classically convergent with

\displaystyle  \lim_{n \rightarrow \infty} a_{\phi(n)} = \lim_{n \rightarrow \infty} a_n. \ \ \ \ \ (5)

The classical limit of a sequence is unchanged if one modifies any finite number of elements of the sequence. Finally, we have boundedness: for any classically convergent sequence {(a_n)_{n=1}^\infty}, one has

\displaystyle  \inf_n a_n \leq \lim_{n \rightarrow \infty} a_n \leq \sup_n a_n. \ \ \ \ \ (6)

One can in fact show without much difficulty that these laws uniquely determine the classical limit functional on convergent sequences.

One would like to extend the classical limit notion to more general bounded sequences; however, when doing so one must give up one or more of the desirable limit laws that were listed above. Consider for instance the sequence {a_n = (-1)^n}. On the one hand, one has {a_n^2 = 1} for all {n}, so if one wishes to retain the homomorphism property (3), any “limit” of this sequence {a_n} would have to necessarily square to {1}, that is to say it must equal {+1} or {-1}. On the other hand, if one wished to retain the shift invariance property (4) as well as the homogeneity property (2), any “limit” of this sequence would have to equal its own negation and thus be zero.

Nevertheless there are a number of useful generalisations and variants of the classical limit concept for non-convergent sequences that obey a significant portion of the above limit laws. For instance, we have the limit superior

\displaystyle  \limsup_{n \rightarrow \infty} a_n := \inf_N \sup_{n \geq N} a_n

and limit inferior

\displaystyle  \liminf_{n \rightarrow \infty} a_n := \sup_N \inf_{n \geq N} a_n

which are well-defined real numbers for any bounded sequence {(a_n)_{n=1}^\infty}; they agree with the classical limit when the sequence is convergent, but disagree otherwise. They enjoy the shift-invariance property (4), and the boundedness property (6), but do not in general obey the homomorphism property (3) or the linearity property (1); indeed, we only have the subadditivity property

\displaystyle  \limsup_{n \rightarrow \infty} (a_n + b_n) \leq (\limsup_{n \rightarrow \infty} a_n) + (\limsup_{n \rightarrow \infty} b_n)

for the limit superior, and the superadditivity property

\displaystyle  \liminf_{n \rightarrow \infty} (a_n + b_n) \geq (\liminf_{n \rightarrow \infty} a_n) + (\liminf_{n \rightarrow \infty} b_n)

for the limit inferior. The homogeneity property (2) is only obeyed by the limits superior and inferior for non-negative {c}; for negative {c}, one must have the limit inferior on one side of (2) and the limit superior on the other, thus for instance

\displaystyle  \limsup_{n \rightarrow \infty} (-a_n) = - \liminf_{n \rightarrow \infty} a_n.
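
For instance, the subadditivity inequality can be strict: taking {a_n = (-1)^n} and {b_n = -(-1)^n}, the left-hand side is {0} while the right-hand side is {2}. A two-line numerical check of this (a sketch, assuming numpy):

```python
import numpy as np

n = np.arange(1, 10**5 + 1)
a, b = (-1.0) ** n, -((-1.0) ** n)
# for these periodic sequences the suprema over all n coincide with the limits superior
print((a + b).max(), a.max() + b.max())  # prints 0.0 and 2.0
```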

The limit superior and limit inferior are examples of limit points of the sequence, which can for instance be defined as points that are limits of at least one subsequence of the original sequence. Indeed, the limit superior is always the largest limit point of the sequence, and the limit inferior is always the smallest limit point. However, limit points can be highly non-unique (indeed they are unique if and only if the sequence is classically convergent), and so it is difficult to sensibly interpret most of the usual limit laws in this setting, with the exception of the homogeneity property (2) and the boundedness property (6) that are easy to state for limit points.

Another notion of limit is the Cesàro limit

\displaystyle  \mathrm{C}\!-\!\lim_{n \rightarrow \infty} a_n := \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N a_n;

if this limit exists, we say that the sequence is Cesàro convergent. If the sequence {(a_n)_{n=1}^\infty} already has a classical limit, then it also has a Cesàro limit that agrees with the classical limit; but there are additional sequences that have a Cesàro limit but not a classical one. For instance, the non-classically convergent sequence {a_n= (-1)^n} discussed above is Cesàro convergent, with a Cesàro limit of {0}. However, there are still bounded sequences that do not have a Cesàro limit, such as {a_n := \sin( \log n )} (exercise!). The Cesàro limit is linear, bounded, and shift invariant, but not an algebra homomorphism and also does not obey the rearrangement property (5).
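
A quick numerical illustration of both behaviours (a sketch, assuming numpy): the Cesàro averages of {(-1)^n} settle down to {0}, while those of {\sin(\log n)} keep oscillating no matter how far out one samples them:

```python
import numpy as np

N = 10**6
n = np.arange(1, N + 1)
for name, a in [("(-1)^n", (-1.0) ** n), ("sin(log n)", np.sin(np.log(n)))]:
    cesaro = np.cumsum(a) / n  # running Cesaro averages (1/N) sum_{n <= N} a_n
    print(name, [round(float(cesaro[10**k - 1]), 3) for k in range(2, 7)])
```

The second list of sampled averages drifts roughly like {(\sin \log N - \cos \log N)/2}, which has no limit.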

Using the Hahn-Banach theorem, one can extend the classical limit functional to generalised limit functionals {\mathop{\widetilde \lim}_{n \rightarrow \infty} a_n}, defined to be bounded linear functionals from the space {\ell^\infty({\bf N})} of bounded real sequences to the real numbers {{\bf R}} that extend the classical limit functional (defined on the space {c_0({\bf N}) + {\bf R}} of convergent sequences) without any increase in the operator norm. (In some of my past writings I made the slight error of referring to these generalised limit functionals as Banach limits, though as discussed below, the latter actually refers to a subclass of generalised limit functionals.) It is not difficult to see that such generalised limit functionals will range between the limit inferior and limit superior. In fact, for any specific sequence {(a_n)_{n=1}^\infty} and any number {L} lying in the closed interval {[\liminf_{n \rightarrow \infty} a_n, \limsup_{n \rightarrow \infty} a_n]}, there exists at least one generalised limit functional {\mathop{\widetilde \lim}_{n \rightarrow \infty}} that takes the value {L} when applied to {a_n}; for instance, for any number {\theta} in {[-1,1]}, there exists a generalised limit functional that assigns that number {\theta} as the “limit” of the sequence {a_n = (-1)^n}. This claim can be seen by first designing such a limit functional on the vector space spanned by the convergent sequences and by {(a_n)_{n=1}^\infty}, and then appealing to the Hahn-Banach theorem to extend to all sequences. This observation also gives a necessary and sufficient criterion for a bounded sequence {(a_n)_{n=1}^\infty} to classically converge to a limit {L}, namely that all generalised limits of this sequence must equal {L}.

Because of the reliance on the Hahn-Banach theorem, the existence of generalised limits requires the axiom of choice (or some weakened version thereof); there are presumably models of set theory without the axiom of choice in which no generalised limits exist, but I do not know of an explicit reference for this.

Generalised limits can obey the shift-invariance property (4) or the algebra homomorphism property (3), but as the above analysis of the sequence {a_n = (-1)^n} shows, they cannot do both. Generalised limits that obey the shift-invariance property (4) are known as Banach limits; one can for instance construct them by applying the Hahn-Banach theorem to the Cesàro limit functional; alternatively, if {\mathop{\widetilde \lim}} is any generalised limit, then the Cesàro-type functional {(a_n)_{n=1}^\infty \mapsto \mathop{\widetilde \lim}_{N \rightarrow \infty} \frac{1}{N} \sum_{n=1}^N a_n} will be a Banach limit. The existence of Banach limits can be viewed as a demonstration of the amenability of the natural numbers (or integers); see this previous blog post for further discussion.

Generalised limits that obey the algebra homomorphism property (3) are known as ultrafilter limits. If one is given a generalised limit functional {p\!-\!\lim_{n \rightarrow \infty}} that obeys (3), then for any subset {A} of the natural numbers {{\bf N}}, the generalised limit {p\!-\!\lim_{n \rightarrow \infty} 1_A(n)} must equal its own square (since {1_A(n)^2 = 1_A(n)}) and is thus either {0} or {1}. If one defines {p \subset 2^{\bf N}} to be the collection of all subsets {A} of {{\bf N}} for which {p\!-\!\lim_{n \rightarrow \infty} 1_A(n) = 1}, one can verify that {p} obeys the axioms of a non-principal ultrafilter. Conversely, if {p} is a non-principal ultrafilter, one can define the associated generalised limit {p\!-\!\lim_{n \rightarrow \infty} a_n} of any bounded sequence {(a_n)_{n=1}^\infty} to be the unique real number {L} such that the sets {\{ n \in {\bf N}: |a_n - L| \leq \varepsilon \}} lie in {p} for all {\varepsilon>0}; one can check that this does indeed give a well-defined generalised limit that obeys (3). Non-principal ultrafilters can be constructed using Zorn’s lemma. In fact, they do not quite need the full strength of the axiom of choice; see the Wikipedia article on the ultrafilter lemma for examples.

We have previously noted that generalised limits of a sequence can converge to any point between the limit inferior and limit superior. The same is not true if one restricts to Banach limits or ultrafilter limits. For instance, by the arguments already given, the only possible Banach limit for the sequence {a_n = (-1)^n} is zero. Meanwhile, an ultrafilter limit must converge to a limit point of the original sequence, but conversely every limit point can be attained by at least one ultrafilter limit; we leave these assertions as an exercise to the interested reader. In particular, a bounded sequence converges classically to a limit {L} if and only if all ultrafilter limits converge to {L}.

There is no generalisation of the classical limit functional to any space that includes non-classically convergent sequences that obeys the subsequence property (5), since any non-classically-convergent sequence will have one subsequence that converges to the limit superior, and another subsequence that converges to the limit inferior, and one of these will have to violate (5) since the limit superior and limit inferior are distinct. So the above limit notions come close to the best generalisations of limit that one can use in practice.

We summarise the above discussion in the following table:

Limit       | Always defined | Linear | Shift-invariant | Homomorphism | Constructive
Classical   | No             | Yes    | Yes             | Yes          | Yes
Superior    | Yes            | No     | Yes             | No           | Yes
Inferior    | Yes            | No     | Yes             | No           | Yes
Cesàro      | No             | Yes    | Yes             | No           | Yes
Generalised | Yes            | Yes    | Depends         | Depends      | No
Banach      | Yes            | Yes    | Yes             | No           | No
Ultrafilter | Yes            | Yes    | No              | Yes          | No

A sequence {a: {\bf Z} \rightarrow {\bf C}} of complex numbers is said to be quasiperiodic if it is of the form

\displaystyle a(n) = F( \alpha_1 n \hbox{ mod } 1, \dots, \alpha_k n \hbox{ mod } 1)

for some real numbers {\alpha_1,\dots,\alpha_k} and continuous function {F: ({\bf R}/{\bf Z})^k \rightarrow {\bf C}}. For instance, linear phases such as {n \mapsto e(\alpha n + \beta)} (where {e(\theta) := e^{2\pi i \theta}}) are examples of quasiperiodic sequences; the top order coefficient {\alpha} (modulo {1}) can be viewed as a “frequency” of the integers, and an element of the Pontryagin dual {\hat {\bf Z} \equiv {\bf R}/{\bf Z}} of the integers. Any periodic sequence is also quasiperiodic (taking {k=1} and {\alpha_1} to be the reciprocal of the period). A sequence is said to be almost periodic if it is the uniform limit of quasiperiodic sequences. For instance any Fourier series of the form

\displaystyle a(n) = \sum_{j=1}^\infty c_j e(\alpha_j n)

with {\alpha_1,\alpha_2,\dots} real numbers and {c_1,c_2,\dots} an absolutely summable sequence of complex coefficients, will be almost periodic.
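
As a small numerical illustration of the uniform approximation (a sketch, assuming numpy; the frequencies and coefficients below are arbitrary choices for the demonstration, and the series is truncated at ten terms):

```python
import numpy as np

e = lambda theta: np.exp(2j * np.pi * theta)
alphas = np.sqrt(np.array([2, 3, 5, 7, 11, 13, 17, 19, 23, 29], dtype=float))  # frequencies
cs = 1.0 / np.arange(1, 11, dtype=float) ** 2                                  # absolutely summable
n = np.arange(1, 10**5 + 1)

a = sum(c * e(al * n) for c, al in zip(cs, alphas))            # (truncated) almost periodic sequence
a_J = sum(c * e(al * n) for c, al in zip(cs[:3], alphas[:3]))  # quasiperiodic approximant, 3 terms
print(np.abs(a - a_J).max() <= cs[3:].sum())                   # uniform error <= tail sum: True
```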

These sequences arise in various “complexity one” problems in arithmetic combinatorics and ergodic theory. For instance, if {(X, \mu, T)} is a measure-preserving system (a probability space {(X,\mu)} equipped with a measure-preserving shift {T: X \rightarrow X}), and {f_1,f_2 \in L^\infty(X)} are bounded measurable functions, then the correlation sequence

\displaystyle a(n) := \int_X f_1(x) f_2(T^n x)\ d\mu(x)

can be shown to be an almost periodic sequence, plus an error term {b_n} which is “null” in the sense that it has vanishing uniform density:

\displaystyle \sup_N \frac{1}{M} \sum_{n=N+1}^{N+M} |b_n| \rightarrow 0 \hbox{ as } M \rightarrow \infty.

This can be established in a number of ways, for instance by writing {a(n)} as the Fourier coefficients of the spectral measure of the shift {T} with respect to the functions {f_1,f_2}, and then decomposing that measure into pure point and continuous components.
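
In the simplest case of a circle rotation {Tx = x + \alpha} on {X = {\bf R}/{\bf Z}} with trigonometric polynomials {f_1, f_2}, the correlation sequence is exactly quasiperiodic, with no null error term at all; this can be checked numerically (a sketch, assuming numpy, with a Riemann sum standing in for the integral):

```python
import numpy as np

e = lambda t: np.exp(2j * np.pi * t)
alpha = np.sqrt(2)
x = np.arange(10**4) / 10**4                  # uniform grid on R/Z; exact for trig polynomials
f1 = lambda x: e(x) + 0.5 * e(2 * x)
f2 = lambda x: e(-x) + 0.25 * e(-2 * x)

for m in range(5):
    a_m = np.mean(f1(x) * f2(x + m * alpha))               # int f1(x) f2(T^m x) dmu(x)
    predicted = e(-m * alpha) + 0.125 * e(-2 * m * alpha)  # the quasiperiodic formula
    assert np.isclose(a_m, predicted, atol=1e-9)
```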

In the last two decades or so, it has become clear that there are natural higher order versions of these concepts, in which linear polynomials such as {\alpha n + \beta} are replaced with higher degree counterparts. The most obvious candidates for these counterparts would be the polynomials {\alpha_d n^d + \dots + \alpha_0}, but these turn out not to form a complete set of the higher degree objects needed for the theory. Instead, the higher order versions of quasiperiodic and almost periodic sequences are now known as basic nilsequences and nilsequences respectively, while the higher order version of a linear phase is a nilcharacter; each nilcharacter then has a symbol that is a higher order generalisation of the concept of a frequency (and the collection of all symbols forms a group that can be viewed as a higher order version of the Pontryagin dual of {{\bf Z}}). The theory of these objects is spread out in the literature across a number of papers; in particular, the theory of nilcharacters is mostly developed in Appendix E of this 116-page paper of Ben Green, Tamar Ziegler, and myself, and is furthermore written using nonstandard analysis and treating the more general setting of higher dimensional sequences. I therefore decided to rewrite some of that material in this blog post, in the simpler context of the qualitative asymptotic theory of one-dimensional nilsequences and nilcharacters rather than the quantitative single-scale theory that is needed for combinatorial applications (and which necessitated the use of nonstandard analysis in the previous paper).

For technical reasons (having to do with the non-trivial topological structure on nilmanifolds), it will be convenient to work with vector-valued sequences, that take values in a finite-dimensional complex vector space {{\bf C}^m} rather than in {{\bf C}}. By doing so, the space of sequences is now, technically, no longer a ring, as the operations of addition and multiplication on vector-valued sequences become ill-defined. However, we can still take complex conjugates of a sequence, and add sequences taking values in the same vector space {{\bf C}^m}, and for sequences taking values in different vector spaces {{\bf C}^m}, {{\bf C}^{m'}}, we may utilise the tensor product {\otimes: {\bf C}^m \times {\bf C}^{m'} \rightarrow {\bf C}^{mm'}}, which we will normalise by defining

\displaystyle (z_1,\dots,z_m) \otimes (w_1,\dots,w_{m'}) = (z_1 w_1, \dots, z_1 w_{m'}, \dots, z_m w_1, \dots, z_m w_{m'} ).

This product is associative and bilinear, and also commutative up to permutation of the indices. It also interacts well with the Hermitian norm

\displaystyle \| (z_1,\dots,z_m) \| := \sqrt{|z_1|^2 + \dots + |z_m|^2}

since we have {\|z \otimes w\| = \|z\| \|w\|}.
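
The normalised tensor product above is exactly the Kronecker product of coordinate vectors, so the multiplicativity of the norm can be checked in a couple of lines (a sketch, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
zw = np.kron(z, w)  # (z_1 w_1, ..., z_1 w_4, ..., z_3 w_4), matching the normalisation above
assert np.isclose(np.linalg.norm(zw), np.linalg.norm(z) * np.linalg.norm(w))
```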

The traditional definition of a basic nilsequence (as defined for instance by Bergelson, Host, and Kra) is as follows:

Definition 1 (Basic nilsequence, first definition) A nilmanifold of step at most {d} is a quotient {G/\Gamma}, where {G} is a connected, simply connected nilpotent Lie group of step at most {d} (thus, all {d+1}-fold commutators vanish) and {\Gamma} is a discrete cocompact lattice in {G}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g^n g_0 \Gamma)}, where {g_0 \Gamma \in G/\Gamma}, {g \in G}, and {F: G/\Gamma \rightarrow {\bf C}^m} is a continuous function.

For instance, it is not difficult using this definition to show that a sequence is a basic nilsequence of degree at most {1} if and only if it is quasiperiodic. The requirement that {G} be simply connected can be easily removed if desired by passing to a universal cover, but it is technically convenient to assume it (among other things, it allows for a well-defined logarithm map that obeys the Baker-Campbell-Hausdorff formula). When one wishes to perform a more quantitative analysis of nilsequences (particularly when working on a “single scale”, such as on a single long interval {\{ N+1, \dots, N+M\}}), it is common to impose additional regularity conditions on the function {F}, such as Lipschitz continuity or smoothness, but ordinary continuity will suffice for the qualitative discussion in this blog post.

Nowadays, and particularly when one needs to understand the “single-scale” equidistribution properties of nilsequences, it is more convenient (as is for instance done in this ICM paper of Green) to use an alternate definition of a nilsequence as follows.

Definition 2 Let {d \geq 0}. A filtered group of degree at most {d} is a group {G} together with a sequence {G_\bullet = (G_0,G_1,G_2,\dots)} of subgroups {G \geq G_0 \geq G_1 \geq \dots} with {G_{d+1}=\{\hbox{id}\}} and {[G_i,G_j] \subset G_{i+j}} for {i,j \geq 0}. A polynomial sequence {g: {\bf Z} \rightarrow G} into a filtered group is a function such that {\partial_{h_i} \dots \partial_{h_1} g(n) \in G_i} for all {i \geq 0} and {n,h_1,\dots,h_i \in{\bf Z}}, where {\partial_h g(n) := g(n+h) g(n)^{-1}} is the difference operator. A filtered nilmanifold of degree at most {d} is a quotient {G/\Gamma}, where {G} is a filtered group of degree at most {d} such that {G} and all of the subgroups {G_i} are connected, simply connected nilpotent Lie groups, and {\Gamma} is a discrete cocompact subgroup of {G} such that {\Gamma_i := \Gamma \cap G_i} is a discrete cocompact subgroup of {G_i}. A basic nilsequence of degree at most {d} is a sequence of the form {n \mapsto F(g(n))}, where {g: {\bf Z} \rightarrow G} is a polynomial sequence, {G/\Gamma} is a filtered nilmanifold of degree at most {d}, and {F: G \rightarrow {\bf C}^m} is a continuous function which is {\Gamma}-automorphic, in the sense that {F(g \gamma) = F(g)} for all {g \in G} and {\gamma \in \Gamma}.

One can easily identify a {\Gamma}-automorphic function on {G} with a function on {G/\Gamma}, but there are some (very minor) advantages to working on the group {G} instead of the quotient {G/\Gamma}, as it becomes slightly easier to modify the automorphy group {\Gamma} when needed. (But because the action of {\Gamma} on {G} is free, one can pass from {\Gamma}-automorphic functions on {G} to functions on {G/\Gamma} with very little difficulty.) The main reason to work with polynomial sequences {n \mapsto g(n)} rather than geometric progressions {n \mapsto g^n g_0 \Gamma} is that they form a group, a fact essentially established by Lazard and Leibman; see Corollary B.4 of this paper of Green, Ziegler, and myself for a proof in the filtered group setting.

It is easy to see that any sequence that is a basic nilsequence of degree at most {d} in the sense of the first definition, is also a basic nilsequence of degree at most {d} in the second definition, since a nilmanifold of step at most {d} can be filtered using the lower central series, and any linear sequence {n \mapsto g^n g_0} will be a polynomial sequence with respect to that filtration. The converse implication is a little trickier, but still not too hard to show: see Appendix C of this paper of Ben Green, Tamar Ziegler, and myself. There are two key examples of basic nilsequences to keep in mind. The first are the polynomially quasiperiodic sequences

\displaystyle a(n) = F( P_1(n), \dots, P_k(n) ),

where {P_1,\dots,P_k: {\bf Z} \rightarrow {\bf R}} are polynomials of degree at most {d}, and {F: {\bf R}^k \rightarrow {\bf C}^m} is a {{\bf Z}^k}-automorphic (i.e., {{\bf Z}^k}-periodic) continuous function. The map {P: {\bf Z} \rightarrow {\bf R}^k} defined by {P(n) := (P_1(n),\dots,P_k(n))} is a polynomial map of degree at most {d}, if one filters {{\bf R}^k} by defining {({\bf R}^k)_i} to equal {{\bf R}^k} when {i \leq d}, and {\{0\}} for {i > d}. The torus {{\bf R}^k/{\bf Z}^k} then becomes a filtered nilmanifold of degree at most {d}, and {a(n)} is thus a basic nilsequence of degree at most {d} as per the second definition. It is also possible to explicitly describe {a(n)} as a basic nilsequence of degree at most {d} as per the first definition, for instance (in the {k=1} case) by taking {G} to be the space of upper triangular unipotent {(d+1) \times (d+1)} real matrices, and {\Gamma} the subgroup with integer coefficients; we leave the details to the interested reader.

The other key example is a sequence of the form

\displaystyle a(n) = F( \alpha n, \{ \alpha n \} \beta n )

where {\alpha,\beta} are real numbers, {\{ \alpha n \} = \alpha n - \lfloor \alpha n \rfloor} denotes the fractional part of {\alpha n}, and {F: {\bf R}^2 \rightarrow {\bf C}^m} is a {{\bf Z}^2}-automorphic continuous function that vanishes in a neighbourhood of {{\bf Z} \times {\bf R}}. To describe this as a nilsequence, we use the connected, simply connected, degree {2} nilpotent Heisenberg group

\displaystyle G := \begin{pmatrix} 1 & {\bf R} & {\bf R} \\ 0 & 1 & {\bf R} \\ 0 & 0 & 1 \end{pmatrix}

with the lower central series filtration {G_0=G_1=G}, {G_2= [G,G] = \begin{pmatrix} 1 &0 & {\bf R} \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}, and {G_i = \{ \mathrm{id} \}} for {i > 2}, {\Gamma} to be the discrete cocompact subgroup

\displaystyle \Gamma := \begin{pmatrix} 1 & {\bf Z} & {\bf Z} \\ 0 & 1 & {\bf Z} \\ 0 & 0 & 1 \end{pmatrix},

{g: {\bf Z} \rightarrow G} to be the polynomial sequence

\displaystyle g(n) := \begin{pmatrix} 1 & \beta n & \alpha \beta n^2 \\ 0 & 1 & \alpha n \\ 0 & 0 & 1 \end{pmatrix}

and {\tilde F: G \rightarrow {\bf C}^m} to be the {\Gamma}-automorphic function

\displaystyle \tilde F( \begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} ) = F( \{ z \}, y - \lfloor z \rfloor x );

one easily verifies that this function is indeed {\Gamma}-automorphic, and it is continuous thanks to the vanishing properties of {F}. Also we have {a(n) = \tilde F(g(n))}, so {a} is a basic nilsequence of degree at most {2}. One can concoct similar examples with {\{ \alpha n \} \beta n} replaced by other “bracket polynomials” of {n}; for instance

\displaystyle a(n) = F( \alpha n, \{ \alpha n - \frac{1}{2} \} \beta n )

will be a basic nilsequence if {F} now vanishes in a neighbourhood of {(\frac{1}{2}+{\bf Z}) \times {\bf R}} rather than {{\bf Z} \times {\bf R}}. See this paper of Bergelson and Leibman for more discussion of bracket polynomials (also known as generalised polynomials) and their relationship to nilsequences.
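
One can also verify the Heisenberg example numerically: unwinding the matrix entries of {g(n)}, the identity {a(n) = \tilde F(g(n))} reduces to the algebraic fact that {\alpha \beta n^2 - \lfloor \alpha n \rfloor \beta n = \{\alpha n\} \beta n}. Here is a sketch (assuming numpy; the toy choice of {F} below vanishes only at {{\bf Z} \times {\bf R}} rather than on a neighbourhood of it, so it illustrates the algebra rather than the continuity requirement):

```python
import numpy as np

alpha, beta = np.sqrt(2), np.sqrt(3)
frac = lambda t: t - np.floor(t)
# toy Z^2-automorphic F(u, v); vanishes at integer u only, not on a neighbourhood
F = lambda u, v: np.sin(np.pi * u) ** 2 * np.exp(2j * np.pi * v)

for n in range(1, 30):
    x, z, y = beta * n, alpha * n, alpha * beta * n ** 2   # the entries of g(n)
    lhs = F(frac(z), y - np.floor(z) * x)                  # \tilde F(g(n))
    rhs = F(frac(alpha * n), frac(alpha * n) * beta * n)   # F(alpha n, {alpha n} beta n)
    assert np.isclose(lhs, rhs)
```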

A nilsequence of degree at most {d} is defined to be a sequence that is the uniform limit of basic nilsequences of degree at most {d}. Thus for instance a sequence is a nilsequence of degree at most {1} if and only if it is almost periodic, while a sequence is a nilsequence of degree at most {0} if and only if it is constant. Such objects arise in higher order recurrence: for instance, if {h_0,\dots,h_d} are integers, {(X,\mu,T)} is a measure-preserving system, and {f_0,\dots,f_d \in L^\infty(X)}, then it was shown by Leibman that the sequence

\displaystyle n \mapsto \int_X f_0(T^{h_0 n} x) \dots f_d(T^{h_d n} x)\ d\mu(x)

is equal to a nilsequence of degree at most {d}, plus a null sequence. (The special case when the measure-preserving system was ergodic and {h_i = i} for {i=0,\dots,d} was previously established by Bergelson, Host, and Kra.) Nilsequences also arise in the inverse theory of the Gowers uniformity norms, as discussed for instance in this previous post.

It is easy to see that a sequence {a: {\bf Z} \rightarrow {\bf C}^m} is a basic nilsequence of degree at most {d} if and only if each of its {m} components are. The scalar basic nilsequences {a: {\bf Z} \rightarrow {\bf C}} of degree {d} are easily seen to form a {*}-algebra (that is to say, they are a complex vector space closed under pointwise multiplication and complex conjugation), which implies similarly that vector-valued basic nilsequences {a: {\bf Z} \rightarrow {\bf C}^m} of degree at most {d} form a complex vector space closed under complex conjugation for each {m}, and that the tensor product of any two basic nilsequences of degree at most {d} is another basic nilsequence of degree at most {d}. Similarly with “basic nilsequence” replaced by “nilsequence” throughout.

Now we turn to the notion of a nilcharacter, as defined in this paper of Ben Green, Tamar Ziegler, and myself:

Definition 3 (Nilcharacters) Let {d \geq 1}. A sub-nilcharacter of degree {d} is a basic nilsequence {\chi: n \mapsto F(g(n))} of degree at most {d}, such that {F} obeys the additional modulation property

\displaystyle F( g_d g ) = e( \xi \cdot g_d ) F(g) \ \ \ \ \ (1)

for all {g \in G} and {g_d \in G_d}, where {\xi: G_d \rightarrow {\bf R}} is a continuous homomorphism {g_d \mapsto \xi \cdot g_d}. (Note from (1) and {\Gamma}-automorphy that unless {F} vanishes identically, {\xi} must map {\Gamma_d} to {{\bf Z}}, thus without loss of generality one can view {\xi} as an element of the Pontryagin dual of the torus {G_d / \Gamma_d}.) If in addition one has {\|F(g)\|=1} for all {g \in G}, we call {\chi} a nilcharacter of degree {d}.

In the degree one case {d=1}, the only sub-nilcharacters are of the form {\chi(n) = c e(\alpha n)} for some vector {c \in {\bf C}^m} and {\alpha \in {\bf R}}, and this is a nilcharacter if {c} is a unit vector. Similarly, in higher degree, any sequence of the form {\chi(n) = c e(P(n))}, where {c \in {\bf C}^m} is a vector and {P: {\bf Z} \rightarrow {\bf R}} is a polynomial of degree at most {d}, is a sub-nilcharacter of degree {d}, and a nilcharacter if {c} is a unit vector. A nilsequence of degree at most {d-1} is automatically a sub-nilcharacter of degree {d}, and a nilcharacter if it is of magnitude {1}. A further example of a nilcharacter is provided by the two-dimensional sequence {\chi: {\bf Z} \rightarrow {\bf C}^2} defined by

\displaystyle \chi(n) := ( F_0( \alpha n ) e( \{ \alpha n \} \beta n ), F_{1/2}( \alpha n ) e( \{ \alpha n - \frac{1}{2} \} \beta n ) ) \ \ \ \ \ (2)

where {F_0, F_{1/2}: {\bf R} \rightarrow {\bf C}} are continuous, {{\bf Z}}-automorphic functions that vanish on a neighbourhood of {{\bf Z}} and {\frac{1}{2}+{\bf Z}} respectively, and which form a partition of unity in the sense that

\displaystyle |F_0(x)|^2 + |F_{1/2}(x)|^2 = 1

for all {x \in {\bf R}}. Note that one needs both {F_0} and {F_{1/2}} to be not identically zero in order for all these conditions to be satisfied; it turns out (for topological reasons) that there is no scalar nilcharacter that is “equivalent” to this nilcharacter in a sense to be defined shortly. In some literature, one works exclusively with sub-nilcharacters rather than nilcharacters; however, the former space contains zero-divisors, which is a little annoying technically. Nevertheless, both nilcharacters and sub-nilcharacters generate the same set of “symbols”, as we shall see later.
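
One concrete realisation of the example (2) (a sketch, assuming numpy; the clipped distance functions below are one arbitrary way to build the required partition of unity, and are not taken from the text):

```python
import numpy as np

alpha, beta = np.sqrt(2), np.sqrt(3)
frac = lambda t: t - np.floor(t)
e = lambda t: np.exp(2j * np.pi * t)

dist = lambda t, c: abs(frac(t - c + 0.5) - 0.5)               # distance from t to c + Z
w0 = lambda t: np.clip((dist(t, 0.0) - 0.1) / 0.1, 0.0, 1.0)   # vanishes near Z
w12 = lambda t: np.clip((dist(t, 0.5) - 0.1) / 0.1, 0.0, 1.0)  # vanishes near 1/2 + Z
r = lambda t: np.hypot(w0(t), w12(t))  # never zero: dist(t, Z) + dist(t, 1/2 + Z) = 1/2
F0 = lambda t: w0(t) / r(t)
F12 = lambda t: w12(t) / r(t)          # now |F0|^2 + |F12|^2 = 1 everywhere

for n in range(1, 50):
    chi = np.array([F0(alpha * n) * e(frac(alpha * n) * beta * n),
                    F12(alpha * n) * e(frac(alpha * n - 0.5) * beta * n)])
    assert np.isclose(np.linalg.norm(chi), 1.0)   # chi(n) is always a unit vector
```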

We claim that every degree {d} sub-nilcharacter {f: {\bf Z} \rightarrow {\bf C}^m} can be expressed in the form {f = c \chi}, where {\chi: {\bf Z} \rightarrow {\bf C}^{m'}} is a degree {d} nilcharacter, and {c: {\bf C}^{m'} \rightarrow {\bf C}^m} is a linear transformation. Indeed, by scaling we may assume {f(n) = F(g(n))} where {|F| < 1} uniformly. Using partitions of unity, one can find further functions {F_1,\dots,F_m} also obeying (1) for the same character {\xi} such that {|F_1|^2 + \dots + |F_m|^2} is everywhere non-zero; by dividing out the {F_1,\dots,F_m} by the square root of this quantity, and then multiplying by {\sqrt{1-|F|^2}}, we may assume that

\displaystyle |F|^2 + |F_1|^2 + \dots + |F_m|^2 = 1,

and then

\displaystyle \chi(n) := (F(g(n)), F_1(g(n)), \dots, F_m(g(n)))

becomes a degree {d} nilcharacter that contains {f(n)} amongst its components, giving the claim.

As we shall show below, nilsequences can be approximated uniformly by linear combinations of nilcharacters, in much the same way that quasiperiodic or almost periodic sequences can be approximated uniformly by linear combinations of linear phases. In particular, nilcharacters can be used as “obstructions to uniformity” in the sense of the inverse theory of the Gowers uniformity norms.

The space of degree {d} nilcharacters forms a semigroup under tensor product, with the constant sequence {1} as the identity. One can upgrade this semigroup to an abelian group by quotienting nilcharacters out by equivalence:

Definition 4 Let {d \geq 1}. We say that two degree {d} nilcharacters {\chi: {\bf Z} \rightarrow {\bf C}^m}, {\chi': {\bf Z} \rightarrow {\bf C}^{m'}} are equivalent if {\chi \otimes \overline{\chi'}: {\bf Z} \rightarrow {\bf C}^{mm'}} is equal (as a sequence) to a basic nilsequence of degree at most {d-1}. (We will later show that this is indeed an equivalence relation.) The equivalence class {[\chi]_{\mathrm{Symb}^d({\bf Z})}} of such a nilcharacter will be called the symbol of that nilcharacter (in analogy to the symbol of a differential or pseudodifferential operator), and the collection of such symbols will be denoted {\mathrm{Symb}^d({\bf Z})}.

As we shall see below the fold, {\mathrm{Symb}^d({\bf Z})} has the structure of an abelian group, and enjoys some nice “symbol calculus” properties; also, one can view symbols as precisely describing the obstruction to equidistribution for nilsequences. For {d=1}, the group is isomorphic to the Pontryagin dual {\hat {\bf Z} = {\bf R}/{\bf Z}} of the integers, and {\mathrm{Symb}^d({\bf Z})} for {d > 1} should be viewed as higher order generalisations of this Pontryagin dual. In principle, this group can be explicitly described for all {d}, but the theory rapidly gets complicated as {d} increases (much as the classification of nilpotent Lie groups or Lie algebras of step {d} rapidly gets complicated even for medium-sized {d} such as {d=3} or {d=4}). We will give an explicit description of the {d=2} case here. There is however one nice (and non-trivial) feature of {\mathrm{Symb}^d({\bf Z})} for {d \geq 2} – it is not just an abelian group, but is in fact a vector space over the rationals {{\bf Q}}!

Just a short post to note that the Norwegian Academy of Science and Letters has just announced that the 2017 Abel prize has been awarded to Yves Meyer, “for his pivotal role in the development of the mathematical theory of wavelets”.  The actual prize ceremony will be in Oslo in May.

I am actually in Oslo myself currently, having just presented Meyer’s work at the announcement ceremony (and also having written a brief description of some of his work).  The Abel prize has a somewhat unintuitive (and occasionally misunderstood) arrangement in which the presenter of the laureate’s work is selected independently of the winner of the prize (I think in part so that the choice of presenter gives no clues as to the identity of the laureate).  In particular, like other presenters before me (which in recent years have included Timothy Gowers, Jordan Ellenberg, and Alex Bellos), I agreed to present the laureate’s work before knowing who the laureate was!  But in this case the task was very easy, because Meyer’s areas of (both pure and applied) harmonic analysis and PDE fell rather squarely within my own area of expertise.  (I had previously written about some other work of Meyer in this blog post.)  Indeed I had learned about Meyer’s wavelet constructions as a graduate student while taking a course from Ingrid Daubechies.   Daubechies also made extremely important contributions to the theory of wavelets, but due to a conflict of interest (as per the guidelines for the prize committee) arising from Daubechies’ presidency of the International Mathematical Union (which nominates the majority of the members of the Abel prize committee, who then serve for two years) from 2011 to 2014 (and her continuing service ex officio on the IMU executive committee from 2015 to 2018), she will not be eligible for the prize until 2021 at the earliest, and so I do not think this prize should be necessarily construed as a judgement on the relative contributions of Meyer and Daubechies to this field.  (In any case I fully agree with the Abel prize committee’s citation of Meyer’s pivotal role in the development of the theory of wavelets.)

[Update, Mar 28: link to prize committee guidelines and clarification of the extent of Daubechies’ conflict of interest added. -T]

In the previous set of notes we saw that functions {f: U \rightarrow {\bf C}} that were holomorphic on an open set {U} enjoyed a large number of useful properties, particularly if the domain {U} was simply connected. In many situations, though, we need to consider functions {f} that are only holomorphic (or even well-defined) on most of a domain {U}, thus they are actually functions {f: U \backslash S \rightarrow {\bf C}} outside of some small singular set {S} inside {U}. (In this set of notes we only consider interior singularities; one can also discuss singular behaviour at the boundary of {U}, but this is a whole separate topic and will not be pursued here.) Since we have only defined the notion of holomorphicity on open sets, we will require the singular sets {S} to be closed, so that the domain {U \backslash S} on which {f} remains holomorphic is still open. A typical class of examples are the functions of the form {\frac{f(z)}{z-z_0}} that were already encountered in the Cauchy integral formula; if {f: U \rightarrow {\bf C}} is holomorphic and {z_0 \in U}, such a function would be holomorphic save for a singularity at {z_0}. Another basic class of examples are the rational functions {P(z)/Q(z)}, which are holomorphic outside of the zeroes of the denominator {Q}.

Singularities come in varying levels of “badness” in complex analysis. The least harmful type of singularity is the removable singularity – a point {z_0} which is an isolated singularity (i.e., an isolated point of the singular set {S}) where the function {f} is undefined, but for which one can extend the function across the singularity in such a fashion that the function becomes holomorphic in a neighbourhood of the singularity. A typical example is that of the complex sinc function {\frac{\sin(z)}{z}}, which has a removable singularity at the origin {0}; this singularity can be removed by declaring the sinc function to equal {1} at {0}. The detection of isolated removable singularities can be accomplished by Riemann’s theorem on removable singularities (Exercise 35 from Notes 3): if a holomorphic function {f: U \backslash S \rightarrow {\bf C}} is bounded near an isolated singularity {z_0 \in S}, then the singularity at {z_0} may be removed.
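
One can see the removability numerically as well (a quick sketch, assuming numpy): approaching {0} along a shrinking family of circles, the values of {\sin(z)/z} converge to {1} at the rate {O(|z|^2)} predicted by the Taylor expansion {\sin(z)/z = 1 - z^2/6 + \dots}:

```python
import numpy as np

angles = np.linspace(0, 2 * np.pi, 7)[:-1]      # six directions of approach
for k in range(1, 8):
    z = 10.0 ** (-k) * np.exp(1j * angles)      # a small circle of radius 10^{-k} around 0
    print(k, np.abs(np.sin(z) / z - 1.0).max()) # decays like |z|^2 / 6
```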

After removable singularities, the mildest form of singularity one can encounter is that of a pole – an isolated singularity {z_0} such that {f(z)} can be factored as {f(z) = \frac{g(z)}{(z-z_0)^m}} for some {m \geq 1} (known as the order of the pole), where {g} has a removable singularity at {z_0} (and is non-zero at {z_0} once the singularity is removed). Such functions have already made a frequent appearance in previous notes, particularly the case of simple poles when {m=1}. The behaviour near {z_0} of a function {f} with a pole of order {m} is well understood: for instance, {|f(z)|} goes to infinity as {z} approaches {z_0} (at a rate comparable to {|z-z_0|^{-m}}). These singularities are not, strictly speaking, removable; but if one compactifies the range {{\bf C}} of the holomorphic function {f: U \backslash S \rightarrow {\bf C}} to a slightly larger space {{\bf C} \cup \{\infty\}} known as the Riemann sphere, then the singularity can be removed. In particular, functions {f: U \backslash S \rightarrow {\bf C}} which only have isolated singularities that are either poles or removable can be extended to holomorphic functions {f: U \rightarrow {\bf C} \cup \{\infty\}} to the Riemann sphere. Such functions are known as meromorphic functions, and are nearly as well-behaved as holomorphic functions in many ways. In fact, in one key respect, the family of meromorphic functions is better: the meromorphic functions on {U} turn out to form a field, in particular the quotient of two meromorphic functions is again meromorphic (if the denominator is not identically zero).

Unfortunately, there are isolated singularities that are neither removable singularities nor poles; these are known as essential singularities. A typical example is the function {f(z) = e^{1/z}}, which turns out to have an essential singularity at {z=0}. The behaviour of such essential singularities is quite wild; we will show here the Casorati-Weierstrass theorem, which shows that the image of {f} near the essential singularity is dense in the complex plane, as well as the more difficult great Picard theorem which asserts that in fact the image can omit at most one point in the complex plane. Nevertheless, around any isolated singularity (even the essential ones) {z_0}, it is possible to expand {f} as a variant of a Taylor series known as a Laurent series {\sum_{n=-\infty}^\infty a_n (z-z_0)^n}. The {\frac{1}{z-z_0}} coefficient {a_{-1}} of this series is particularly important for contour integration purposes, and is known as the residue of {f} at the isolated singularity {z_0}. These residues play a central role in a common generalisation of Cauchy’s theorem and the Cauchy integral formula known as the residue theorem, which is a particularly useful tool for computing (or at least transforming) contour integrals of meromorphic functions, and has proven to be a particularly popular technique to use in analytic number theory. Within complex analysis, one important consequence of the residue theorem is the argument principle, which gives a topological (and analytical) way to control the zeroes and poles of a meromorphic function.
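
For the model example {f(z) = e^{1/z}}, one can make the Picard phenomenon completely explicit: for any non-zero target {w}, the points {z_k := 1/(\log w + 2\pi i k)} tend to {0} as {k \rightarrow \infty} while satisfying {e^{1/z_k} = w} exactly, so every value except {0} is attained infinitely often near the singularity. A numerical check (a sketch, assuming numpy):

```python
import numpy as np

w = -2.5 + 1.7j                                  # an arbitrary non-zero target value
for k in [10, 100, 1000, 10000]:
    z_k = 1.0 / (np.log(w) + 2j * np.pi * k)     # principal branch of log
    print(abs(z_k), np.exp(1.0 / z_k))           # |z_k| -> 0 while e^{1/z_k} stays equal to w
```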

Finally, there are the non-isolated singularities. Little can be said about these singularities in general (for instance, the residue theorem does not directly apply in the presence of such singularities), but certain types of non-isolated singularities are still relatively easy to understand. One particularly common example of such a non-isolated singularity arises when trying to invert a non-injective function, such as the complex exponential {z \mapsto \exp(z)} or a power function {z \mapsto z^n}, leading to branches of multivalued functions such as the complex logarithm {z \mapsto \log(z)} or the {n^{th}} root function {z \mapsto z^{1/n}} respectively. Such branches will typically have a non-isolated singularity along a branch cut; this branch cut can be moved around the complex domain by switching from one branch to another, but usually cannot be eliminated entirely, unless one is willing to lift up the domain {U} to a more general type of domain known as a Riemann surface. As such, one can view branch cuts as being an “artificial” form of singularity, being an artefact of a choice of local coordinates of a Riemann surface, rather than reflecting any intrinsic singularity of the function itself. The further study of Riemann surfaces is an important topic in complex analysis (as well as the related fields of complex geometry and algebraic geometry), but unfortunately this topic will probably be postponed to the next course in this sequence (which I will not be teaching).

At the core of almost any undergraduate real analysis course are the concepts of differentiation and integration, with these two basic operations being tied together by the fundamental theorem of calculus (and its higher dimensional generalisations, such as Stokes’ theorem). Similarly, the notion of the complex derivative and the complex line integral (that is to say, the contour integral) lie at the core of any introductory complex analysis course. Once again, they are tied to each other by the fundamental theorem of calculus; but in the complex case there is a further variant of the fundamental theorem, namely Cauchy’s theorem, which endows complex differentiable functions with many important and surprising properties that are often not shared by their real differentiable counterparts. We will give complex differentiable functions another name to emphasise this extra structure, by referring to such functions as holomorphic functions. (This term is also useful to distinguish these functions from the slightly less well-behaved meromorphic functions, which we will discuss in later notes.)

In this set of notes we will focus solely on the concept of complex differentiation, deferring the discussion of contour integration to the next set of notes. To begin with, the theory of complex differentiation will greatly resemble the theory of real differentiation; the definitions look almost identical, and well known laws of differential calculus such as the product rule, quotient rule, and chain rule carry over verbatim to the complex setting, and the theory of complex power series is similarly almost identical to the theory of real power series. However, when one compares the “one-dimensional” differentiation theory of the complex numbers with the “two-dimensional” differentiation theory of two real variables, we find that the dimensional discrepancy forces complex differentiable functions to obey a real-variable constraint, namely the Cauchy-Riemann equations. These equations make complex differentiable functions substantially more “rigid” than their real-variable counterparts; they imply for instance that the imaginary part of a complex differentiable function is essentially determined (up to constants) by the real part, and vice versa. Furthermore, even when considered separately, the real and imaginary components of complex differentiable functions are forced to obey the strong constraint of being harmonic. In later notes we will see these constraints manifest themselves in integral form, particularly through Cauchy’s theorem and the closely related Cauchy integral formula.
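
As a quick illustration of the rigidity imposed by the Cauchy-Riemann equations {u_x = v_y}, {u_y = -v_x} (writing {f = u+iv}), one can test them by finite differences: they hold up to discretisation error for the holomorphic function {\exp(z)}, and fail decisively for the non-holomorphic conjugation map {z \mapsto \bar{z}}. A sketch (assuming numpy):

```python
import numpy as np

h = 1e-6
def cr_residual(f, z):
    # finite-difference partial derivatives of f = u + iv at z, in the x and y directions
    fx = (f(z + h) - f(z - h)) / (2 * h)
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return abs(fx.real - fy.imag) + abs(fx.imag + fy.real)  # |u_x - v_y| + |v_x + u_y|

z0 = 0.3 + 0.7j
print(cr_residual(np.exp, z0))        # ~1e-10: Cauchy-Riemann holds for exp
print(cr_residual(np.conjugate, z0))  # = 2: fails for z -> conj(z)
```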

Despite all the constraints that holomorphic functions have to obey, a surprisingly large number of the functions of a complex variable that one actually encounters in applications turn out to be holomorphic. For instance, any polynomial {z \mapsto P(z)} with complex coefficients will be holomorphic, as will the complex exponential {z \mapsto \exp(z)}. From this and the laws of differential calculus one can then generate many further holomorphic functions. Also, as we will show presently, complex power series will automatically be holomorphic inside their disk of convergence. On the other hand, there are certainly basic complex functions of interest that are not holomorphic, such as the complex conjugation function {z \mapsto \overline{z}}, the absolute value function {z \mapsto |z|}, or the real and imaginary part functions {z \mapsto \mathrm{Re}(z), z \mapsto \mathrm{Im}(z)}. We will also encounter functions that are only holomorphic at some portions of the complex plane, but not on others; for instance, rational functions will be holomorphic except at those few points where the denominator vanishes, and are prime examples of the meromorphic functions mentioned previously. Later on we will also consider functions such as branches of the logarithm or square root, which will be holomorphic outside of a branch cut corresponding to the choice of branch. It is a basic but important skill in complex analysis to be able to quickly recognise which functions are holomorphic and which ones are not, as many of the useful theorems available to the former (such as Cauchy’s theorem) break down spectacularly for the latter. Indeed, in my experience, one of the most common “rookie errors” that beginning complex analysis students make is the error of attempting to apply a theorem about holomorphic functions to a function that is not at all holomorphic. This stands in contrast to the situation in real analysis, in which one can often obtain correct conclusions by formally applying the laws of differential or integral calculus to functions that might not actually be differentiable or integrable in a classical sense. (This latter phenomenon, by the way, can be largely explained using the theory of distributions, as covered for instance in this previous post, but this is beyond the scope of the current course.)

Remark 1 In this set of notes it will be convenient to impose some unnecessarily generous regularity hypotheses (e.g. continuous second differentiability) on the holomorphic functions one is studying in order to make the proofs simpler. In later notes, we will discover that these hypotheses are in fact redundant, due to the phenomenon of elliptic regularity that ensures that holomorphic functions are automatically smooth.
