You are currently browsing the monthly archive for May 2018.

I have just uploaded to the arXiv my paper “Commutators close to the identity”, submitted to the Journal of Operator Theory. This paper resulted from some progress I made on the problem discussed in this previous post. Recall from that post the following result of Popa: if ${D,X \in B(H)}$ are bounded operators on a Hilbert space ${H}$ whose commutator ${[D,X] := DX-XD}$ is close to the identity in the sense that

$\displaystyle \| [D,X] - I \|_{op} \leq \varepsilon \ \ \ \ \ (1)$

for some ${\varepsilon > 0}$, then one has the lower bound

$\displaystyle \| X \|_{op} \|D \|_{op} \geq \frac{1}{2} \log \frac{1}{\varepsilon}. \ \ \ \ \ (2)$

In the other direction, for any ${0 < \varepsilon < 1}$, there are examples of operators ${D,X \in B(H)}$ obeying (1) such that

$\displaystyle \| X \|_{op} \|D \|_{op} \ll \varepsilon^{-2}. \ \ \ \ \ (3)$

In this paper we improve the upper bound to come closer to the lower bound:

Theorem 1 For any ${0 < \varepsilon < 1/2}$, and any infinite-dimensional ${H}$, there exist operators ${D,X \in B(H)}$ obeying (1) such that

$\displaystyle \| X \|_{op} \|D \|_{op} \ll \log^{16} \frac{1}{\varepsilon}. \ \ \ \ \ (4)$

One can probably improve the exponent ${16}$ somewhat by a modification of the methods, though it does not seem likely that one can lower it all the way to ${1}$ without a substantially new idea. Nevertheless I believe it plausible that the lower bound (2) is close to optimal.

We now sketch the methods of proof. The construction giving (3) proceeded by first identifying ${B(H)}$ with the algebra ${M_2(B(H))}$ of ${2 \times 2}$ matrices that have entries in ${B(H)}$. It is then possible to find two matrices ${D, X \in M_2(B(H))}$ whose commutator takes the form

$\displaystyle [D,X] = \begin{pmatrix} I & u \\ 0 & I \end{pmatrix}$

for some bounded operator ${u \in B(H)}$ (for instance one can take ${u}$ to be an isometry). If one then conjugates ${D, X}$ by the diagonal operator ${\mathrm{diag}(\varepsilon,1)}$, one can ensure that (1) and (3) both hold.
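
The effect of this conjugation can be illustrated numerically in finite dimensions (a toy sketch only: the identity ${[D,X] = \begin{pmatrix} I & u \\ 0 & I \end{pmatrix}}$ cannot hold exactly for finite matrices for trace reasons, so below we conjugate the target matrix directly rather than realising it as a commutator):

```python
import numpy as np

# Toy illustration of the conjugation step.  The matrix M below plays the role
# of the commutator [D,X] = [[I, u], [0, I]]; conjugating by E = diag(eps*I, I)
# replaces the off-diagonal block u by eps*u, so the result is within
# eps*||u|| of the identity in operator norm.  (The price, not shown here, is
# that the norms of the conjugated D and X each grow by up to a factor 1/eps,
# giving the eps^{-2} bound in (3).)

N = 50
eps = 0.1
u = np.eye(N)  # stand-in for the isometry u; any operator of norm 1 works

M = np.block([[np.eye(N), u],
              [np.zeros((N, N)), np.eye(N)]])
E = np.block([[eps * np.eye(N), np.zeros((N, N))],
              [np.zeros((N, N)), np.eye(N)]])
M_conj = E @ M @ np.linalg.inv(E)

dist = np.linalg.norm(M_conj - np.eye(2 * N), 2)  # spectral (operator) norm
print(dist)  # eps * ||u|| = 0.1
```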

It is natural to adapt this strategy to ${n \times n}$ matrices ${D,X \in M_n(B(H))}$ rather than ${2 \times 2}$ matrices, where ${n}$ is a parameter at one’s disposal. If one can find matrices ${D,X \in M_n(B(H))}$ that are almost upper triangular (in that only the entries on or above the lower diagonal are non-zero), whose commutator ${[D,X]}$ only differs from the identity in the top right corner, thus

$\displaystyle [D, X] = \begin{pmatrix} I & 0 & 0 & \dots & 0 & S \\ 0 & I & 0 & \dots & 0 & 0 \\ 0 & 0 & I & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & I & 0 \\ 0 & 0 & 0 & \dots & 0 & I \end{pmatrix}$

for some ${S}$, then by conjugating by a diagonal matrix such as ${\mathrm{diag}( \mu^{n-1}, \mu^{n-2}, \dots, 1)}$ for some ${\mu}$ and optimising in ${\mu}$, one can improve the bound ${\varepsilon^{-2}}$ in (3) to ${O_n( \varepsilon^{-\frac{2}{n-1}} )}$; if the implied constant in the ${O_n(1)}$ notation depends only polynomially on ${n}$, one can then optimise in ${n}$ to obtain a bound of the form (4) (perhaps with the exponent ${16}$ replaced by a different constant).
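
Here is a small numerical sketch of this bookkeeping (a hypothetical toy model only, with the defect ${S}$ replaced by a scalar of norm ${1}$ in the corner):

```python
import numpy as np

# If [D,X] - I is supported in the top right corner, conjugating by
# diag(mu^{n-1}, ..., mu, 1) multiplies that corner entry by mu^{n-1}.
# Choosing mu^{n-1} = eps makes the commutator eps-close to the identity,
# while each of D, X picks up at most a factor mu^{-1} = eps^{-1/(n-1)} in
# norm, for a product of eps^{-2/(n-1)} (up to n-dependent losses).

def scaled_corner_defect(n, eps):
    mu = eps ** (1.0 / (n - 1))
    M = np.eye(n)
    M[0, n - 1] = 1.0  # the defect S, modelled here by a scalar of norm 1
    E = np.diag([mu ** (n - 1 - j) for j in range(n)])
    M_conj = E @ M @ np.linalg.inv(E)
    return np.linalg.norm(M_conj - np.eye(n), 2)

print(scaled_corner_defect(10, 1e-6))  # the corner defect is now eps = 1e-6

# If the n-dependent loss is polynomial in n, taking n comparable to
# log(1/eps) makes eps^{-2/(n-1)} = O(1), leaving only a polylog(1/eps) bound.
```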

The task is then to find almost upper triangular matrices ${D, X}$ whose commutator takes the required form. The lower diagonals of ${D,X}$ must then commute; it took me a while to realise that one could (usually) conjugate one of the matrices, say ${X}$, by a suitable diagonal matrix, so that its lower diagonal consisted entirely of the identity operator, which would make the lower diagonal of the other matrix consist of a single operator, say ${u}$. After a lot of further lengthy experimentation, I eventually realised that one could conjugate ${X}$ further by unipotent upper triangular matrices so that all remaining entries other than those on the far right column vanished. Thus, without too much loss of generality, one can assume that ${X}$ takes the normal form

$\displaystyle X := \begin{pmatrix} 0 & 0 & 0 & \dots & 0 & b_1 \\ I & 0 & 0 & \dots & 0 & b_2 \\ 0 & I & 0 & \dots & 0 & b_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 0 & b_{n-1} \\ 0 & 0 & 0 & \dots & I & b_n \end{pmatrix},$

and that ${D}$ takes the form

$\displaystyle D := \begin{pmatrix} v & I & 0 & \dots & 0 & b_1 u \\ u & v & 2 I & \dots & 0 & b_2 u \\ 0 & u & v & \dots & 0 & b_3 u \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & v & (n-1) I + b_{n-1} u \\ 0 & 0 & 0 & \dots & u & v + b_n u \end{pmatrix}$

for some ${u,v \in B(H)}$, solving the system of equations

$\displaystyle [v, b_i] + [u, b_{i-1}] + i b_{i+1} + b_i [u, b_n] = 0 \ \ \ \ \ (5)$

for ${i=2,\dots,n-1}$, and also

$\displaystyle [v, b_n] + [u, b_{n-1}] + b_n [u, b_n] = n \cdot 1_{B(H)}. \ \ \ \ \ (6)$

It turns out to be possible to solve this system of equations by a contraction mapping argument if one takes ${u,v}$ to be a “Hilbert’s hotel” pair of isometries as in the previous post, though the contraction is very slight, leading to polynomial losses in ${n}$ in the implied constant.
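
(A concrete model of such a pair on ${\ell^2}$ is ${u e_k = e_{2k}}$, ${v e_k = e_{2k+1}}$. The following quick numerical check, using a rectangular truncation that preserves the identities exactly, verifies the relations ${u^* u = v^* v = I}$, ${u^* v = 0}$, ${uu^* + vv^* = I}$.)

```python
import numpy as np

# A finite slice of the "Hilbert's hotel" pair of isometries on l^2:
# u sends e_k to e_{2k}, and v sends e_k to e_{2k+1} (0-indexed), so their
# ranges (the even and odd coordinates) partition the whole space.

N = 20
u = np.zeros((2 * N, N))
v = np.zeros((2 * N, N))
for k in range(N):
    u[2 * k, k] = 1.0      # u e_k = e_{2k}
    v[2 * k + 1, k] = 1.0  # v e_k = e_{2k+1}

assert np.allclose(u.T @ u, np.eye(N))                # u* u = I
assert np.allclose(v.T @ v, np.eye(N))                # v* v = I
assert np.allclose(u.T @ v, np.zeros((N, N)))         # u* v = 0
assert np.allclose(u @ u.T + v @ v.T, np.eye(2 * N))  # uu* + vv* = I
```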

There is a further question raised in Popa’s paper which I was unable to resolve. As a special case of one of the main theorems (Theorem 2.1) of that paper, the following result was shown: if ${A \in B(H)}$ obeys the bounds

$\displaystyle \|A \| = O(1)$

and

$\displaystyle \| A \| = O( \mathrm{dist}( A, {\bf C} + K(H) )^{2/3} ) \ \ \ \ \ (7)$

(where ${{\bf C} + K(H)}$ denotes the space of all operators of the form ${\lambda I + T}$ with ${\lambda \in {\bf C}}$ and ${T}$ compact), then there exist operators ${D,X \in B(H)}$ with ${\|D\|, \|X\| = O(1)}$ such that ${A = [D,X]}$. (In fact, Popa’s result covers a more general situation in which one is working in a properly infinite ${W^*}$ algebra with non-trivial centre.) We sketch a proof of this result as follows. Suppose that ${\mathrm{dist}(A, {\bf C} + K(H)) = \varepsilon}$ and ${\|A\| = O( \varepsilon^{2/3})}$ for some ${0 < \varepsilon \ll 1}$. A standard greedy algorithm argument (see this paper of Brown and Pearcy) allows one to find orthonormal vectors ${e_n, f_n, g_n}$ for ${n=1,2,\dots}$ such that for each ${n}$, one has ${A e_n = \varepsilon_n f_n + v_n}$ for some ${\varepsilon_n}$ comparable to ${\varepsilon}$, and some ${v_n}$ orthogonal to all of the ${e_n,f_n,g_n}$. After some conjugation (and a suitable identification of ${B(H)}$ with ${M_2(B(H))}$), one can thus place ${A}$ in a normal form

$\displaystyle A = \begin{pmatrix} \varepsilon^{2/3} x & \varepsilon v^* \\ \varepsilon^{2/3} y & \varepsilon^{2/3} z \end{pmatrix}$

where ${v \in B(H)}$ is an isometry with infinite deficiency, and ${x,y,z \in B(H)}$ have norm ${O(1)}$. Setting ${\varepsilon' := \varepsilon^{1/3}}$, it then suffices to solve the commutator equation

$\displaystyle [D,X] = \begin{pmatrix} x & \varepsilon' v^* \\ y & z \end{pmatrix}$

with ${\|D\|_{op} \|X\|_{op} \ll (\varepsilon')^{-2}}$; note the similarity with (3).

By the usual Hilbert’s hotel construction, one can complement ${v}$ with another isometry ${u}$ obeying the “Hilbert’s hotel” identity

$\displaystyle uu^* + vv^* = I$

and also ${u^* u = v^* v = I}$, ${u^* v = v^* u = 0}$. Proceeding as in the previous post, we can try the ansatz

$\displaystyle D = \begin{pmatrix} \frac{1}{2} u^* & 0 \\ a & \frac{1}{2} u^* - v^* \end{pmatrix}, X = \begin{pmatrix} b & \varepsilon' I \\ c & d \end{pmatrix}$

for some operators ${a,b,c,d \in B(H)}$, leading to the system of equations

$\displaystyle [\frac{1}{2} u^*, b] + [\frac{1}{2} u^* - v^*, c] = x+z$

$\displaystyle \varepsilon' a = [\frac{1}{2} u^*, b] - x$

$\displaystyle \frac{1}{2} u^* c + c (\frac{1}{2} u^* - v^*) + ab-da = y.$

Using the first equation to solve for ${b,c}$, the second to then solve for ${a}$, and the third to then solve for ${d}$, one can obtain matrices ${D,X}$ with the required properties.

Thus far, my attempts to extend this construction to larger matrices with good bounds on ${D,X}$ have been unsuccessful. A model problem would be to express

$\displaystyle \begin{pmatrix} I & 0 & \varepsilon v^* \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}$

as a commutator ${[D,X]}$ with ${\|D\| \|X\|}$ significantly smaller than ${O(\varepsilon^{-2})}$. The construction in my paper achieves something like this, but with ${v^*}$ replaced by a more complicated operator. One would also need variants of this result in which one is allowed to perturb the above operator by an arbitrary finite rank operator of bounded operator norm.

Important note: As this is not a course in probability, we will try to avoid developing the general theory of stochastic calculus (which includes such concepts as filtrations, martingales, and Ito calculus). This will unfortunately limit what we can actually prove rigorously, and so at some places the arguments will be somewhat informal in nature. A rigorous treatment of many of the topics here can be found for instance in Lawler’s Conformally Invariant Processes in the Plane, from which much of the material here is drawn.

In these notes, random variables will be denoted in boldface.

Definition 1 A real random variable ${\mathbf{X}}$ is said to be normally distributed with mean ${x_0 \in {\bf R}}$ and variance ${\sigma^2 > 0}$ if one has

$\displaystyle \mathop{\bf E} F(\mathbf{X}) = \frac{1}{\sqrt{2\pi} \sigma} \int_{\bf R} e^{-(x-x_0)^2/2\sigma^2} F(x)\ dx$

for all test functions ${F \in C_c({\bf R})}$. Similarly, a complex random variable ${\mathbf{Z}}$ is said to be normally distributed with mean ${z_0 \in {\bf C}}$ and variance ${\sigma^2>0}$ if one has

$\displaystyle \mathop{\bf E} F(\mathbf{Z}) = \frac{1}{\pi \sigma^2} \int_{\bf C} e^{-|z-z_0|^2/\sigma^2} F(z)\ dx dy$

for all test functions ${F \in C_c({\bf C})}$, where ${dx dy}$ is the area element on ${{\bf C}}$.

A real Brownian motion with base point ${x_0 \in {\bf R}}$ is a random, almost surely continuous function ${\mathbf{B}^{x_0}: [0,+\infty) \rightarrow {\bf R}}$ (using the locally uniform topology on continuous functions) with the property that (almost surely) ${\mathbf{B}^{x_0}(0) = x_0}$, and for any sequence of times ${0 \leq t_0 < t_1 < t_2 < \dots < t_n}$, the increments ${\mathbf{B}^{x_0}(t_i) - \mathbf{B}^{x_0}(t_{i-1})}$ for ${i=1,\dots,n}$ are independent real random variables that are normally distributed with mean zero and variance ${t_i - t_{i-1}}$. Similarly, a complex Brownian motion with base point ${z_0 \in {\bf C}}$ is a random, almost surely continuous function ${\mathbf{B}^{z_0}: [0,+\infty) \rightarrow {\bf C}}$ with the property that ${\mathbf{B}^{z_0}(0) = z_0}$ and for any sequence of times ${0 \leq t_0 < t_1 < t_2 < \dots < t_n}$, the increments ${\mathbf{B}^{z_0}(t_i) - \mathbf{B}^{z_0}(t_{i-1})}$ for ${i=1,\dots,n}$ are independent complex random variables that are normally distributed with mean zero and variance ${t_i - t_{i-1}}$.

Remark 2 Thanks to the central limit theorem, the hypothesis that the increments ${\mathbf{B}^{x_0}(t_i) - \mathbf{B}^{x_0}(t_{i-1})}$ be normally distributed can be dropped from the definition of a Brownian motion, so long as one retains the independence and the normalisation of the mean and variance (technically one also needs some uniform integrability on the increments beyond the second moment, but we will not detail this here). A similar statement is also true for the complex Brownian motion (where now we need to normalise the variances and covariances of the real and imaginary parts of the increments).

Real and complex Brownian motions exist from any base point ${x_0}$ or ${z_0}$; see e.g. this previous blog post for a construction. We have the following simple invariances:

Exercise 3

• (i) (Translation invariance) If ${\mathbf{B}^{x_0}}$ is a real Brownian motion with base point ${x_0 \in {\bf R}}$, and ${h \in {\bf R}}$, show that ${\mathbf{B}^{x_0}+h}$ is a real Brownian motion with base point ${x_0+h}$. Similarly, if ${\mathbf{B}^{z_0}}$ is a complex Brownian motion with base point ${z_0 \in {\bf C}}$, and ${h \in {\bf C}}$, show that ${\mathbf{B}^{z_0}+h}$ is a complex Brownian motion with base point ${z_0+h}$.
• (ii) (Dilation invariance) If ${\mathbf{B}^{0}}$ is a real Brownian motion with base point ${0}$, and ${\lambda \in {\bf R}}$ is non-zero, show that ${t \mapsto \lambda \mathbf{B}^0(t / |\lambda|^2)}$ is also a real Brownian motion with base point ${0}$. Similarly, if ${\mathbf{B}^0}$ is a complex Brownian motion with base point ${0}$, and ${\lambda \in {\bf C}}$ is non-zero, show that ${t \mapsto \lambda \mathbf{B}^0(t / |\lambda|^2)}$ is also a complex Brownian motion with base point ${0}$.
• (iii) (Real and imaginary parts) If ${\mathbf{B}^0}$ is a complex Brownian motion with base point ${0}$, show that ${\sqrt{2} \mathrm{Re} \mathbf{B}^0}$ and ${\sqrt{2} \mathrm{Im} \mathbf{B}^0}$ are independent real Brownian motions with base point ${0}$. Conversely, if ${\mathbf{B}^0_1, \mathbf{B}^0_2}$ are independent real Brownian motions of base point ${0}$, show that ${\frac{1}{\sqrt{2}} (\mathbf{B}^0_1 + i \mathbf{B}^0_2)}$ is a complex Brownian motion with base point ${0}$.
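
The dilation invariance can be checked numerically: the variance of ${\lambda \mathbf{B}^0(t/|\lambda|^2)}$ at time ${t}$ is ${|\lambda|^2 \cdot t/|\lambda|^2 = t}$, as a Monte Carlo simulation confirms (the sample sizes below are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of Brownian dilation invariance: for real lam != 0, the
# process t -> lam * B(t / lam^2) should again be a Brownian motion from 0;
# in particular its value at time T has variance T.

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 50, 1.0
lam = 3.0

dt = (T / lam**2) / n_steps  # simulate B up to the rescaled time T / lam^2
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B_at_rescaled_time = increments.sum(axis=1)  # samples of B(T / lam^2)
X_T = lam * B_at_rescaled_time               # samples of lam * B(T / lam^2)

print(X_T.var())  # should be close to T = 1.0
```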

The next lemma is a special case of the optional stopping theorem.

Lemma 4 (Optional stopping identities)

• (i) (Real case) Let ${\mathbf{B}^{x_0}}$ be a real Brownian motion with base point ${x_0 \in {\bf R}}$. Let ${\mathbf{t}}$ be a bounded stopping time – a bounded random variable with the property that for any time ${t \geq 0}$, the event that ${\mathbf{t} \leq t}$ is determined by the values of the trajectory ${\mathbf{B}^{x_0}}$ for times up to ${t}$ (or more precisely, this event is measurable with respect to the ${\sigma}$-algebra generated by this portion of the trajectory). Then

$\displaystyle \mathop{\bf E} \mathbf{B}^{x_0}(\mathbf{t}) = x_0$

and

$\displaystyle \mathop{\bf E} \left( (\mathbf{B}^{x_0}(\mathbf{t})-x_0)^2 - \mathbf{t} \right) = 0$

and

$\displaystyle \mathop{\bf E} (\mathbf{B}^{x_0}(\mathbf{t})-x_0)^4 = O( \mathop{\bf E} \mathbf{t}^2 ).$

• (ii) (Complex case) Let ${\mathbf{B}^{z_0}}$ be a complex Brownian motion with base point ${z_0 \in {\bf C}}$. Let ${\mathbf{t}}$ be a bounded stopping time – a bounded random variable with the property that for any time ${t \geq 0}$, the event that ${\mathbf{t} \leq t}$ is determined by the values of the trajectory ${\mathbf{B}^{z_0}}$ for times up to ${t}$. Then

$\displaystyle \mathop{\bf E} \mathbf{B}^{z_0}(\mathbf{t}) = z_0$

$\displaystyle \mathop{\bf E} \left( (\mathrm{Re}(\mathbf{B}^{z_0}(\mathbf{t})-z_0))^2 - \frac{1}{2} \mathbf{t} \right) = 0$

$\displaystyle \mathop{\bf E} \left( (\mathrm{Im}(\mathbf{B}^{z_0}(\mathbf{t})-z_0))^2 - \frac{1}{2} \mathbf{t} \right) = 0$

$\displaystyle \mathop{\bf E} \mathrm{Re}(\mathbf{B}^{z_0}(\mathbf{t})-z_0) \mathrm{Im}(\mathbf{B}^{z_0}(\mathbf{t})-z_0) = 0$

$\displaystyle \mathop{\bf E} |\mathbf{B}^{z_0}(\mathbf{t})-z_0|^4 = O( \mathop{\bf E} \mathbf{t}^2 ).$

Proof: (Slightly informal) We just prove (i) and leave (ii) as an exercise. By translation invariance we can take ${x_0=0}$. Let ${T}$ be an upper bound for ${\mathbf{t}}$. Since ${\mathbf{B}^0(T)}$ is a real normally distributed variable with mean zero and variance ${T}$, we have

$\displaystyle \mathop{\bf E} \mathbf{B}^0( T ) = 0$

and

$\displaystyle \mathop{\bf E} \mathbf{B}^0( T )^2 = T$

and

$\displaystyle \mathop{\bf E} \mathbf{B}^0( T )^4 = 3T^2.$

By the law of total expectation, we thus have

$\displaystyle \mathop{\bf E} \mathop{\bf E}(\mathbf{B}^0( T ) | \mathbf{t}, \mathbf{B}^{0}(\mathbf{t}) ) = 0$

and

$\displaystyle \mathop{\bf E} \mathop{\bf E}((\mathbf{B}^0( T ))^2 | \mathbf{t}, \mathbf{B}^{0}(\mathbf{t}) ) = T$

and

$\displaystyle \mathop{\bf E} \mathop{\bf E}((\mathbf{B}^0( T ))^4 | \mathbf{t}, \mathbf{B}^{0}(\mathbf{t}) ) = 3T^2$

where the inner conditional expectations are with respect to the event that ${(\mathbf{t}, \mathbf{B}^{0}(\mathbf{t}))}$ attains a particular value. However, from the independent increment nature of Brownian motion, once one conditions ${(\mathbf{t}, \mathbf{B}^{0}(\mathbf{t}))}$ to a fixed point ${(t, x)}$, the random variable ${\mathbf{B}^0(T)}$ becomes a real normally distributed variable with mean ${x}$ and variance ${T-t}$. Thus we have

$\displaystyle \mathop{\bf E}(\mathbf{B}^0( T ) | \mathbf{t}, \mathbf{B}^{0}(\mathbf{t}) ) = \mathbf{B}^{0}(\mathbf{t})$

and

$\displaystyle \mathop{\bf E}( (\mathbf{B}^0( T ))^2 | \mathbf{t}, \mathbf{B}^{0}(\mathbf{t}) ) = \mathbf{B}^{0}(\mathbf{t})^2 + T - \mathbf{t}$

and

$\displaystyle \mathop{\bf E}( (\mathbf{B}^0( T ))^4 | \mathbf{t}, \mathbf{B}^{0}(\mathbf{t}) ) = \mathbf{B}^{0}(\mathbf{t})^4 + 6(T - \mathbf{t}) \mathbf{B}^{0}(\mathbf{t})^2 + 3(T - \mathbf{t})^2$

which give the first two claims, and (after some algebra) the identity

$\displaystyle \mathop{\bf E} \left( \mathbf{B}^{0}(\mathbf{t})^4 - 6 \mathbf{t} \mathbf{B}^{0}(\mathbf{t})^2 + 3 \mathbf{t}^2 \right) = 0$

which then also gives the third claim. $\Box$
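
The first two identities of Lemma 4(i) can also be tested by Monte Carlo, using a Gaussian random-walk discretisation of Brownian motion and the bounded stopping time ${\mathbf{t} = \min(\hbox{first hitting time of } \{|x| \geq 1\}, T)}$ (the discretisation and sample sizes below are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of E B(t) = x_0 and E (B(t) - x_0)^2 = E t from Lemma 4(i),
# with x_0 = 0 and the bounded stopping time t = min(first time |B| >= 1, T).
# The Brownian motion is approximated by a Gaussian random walk.

rng = np.random.default_rng(1)
n_paths, n_steps, T = 50_000, 200, 2.0
dt = T / n_steps

steps = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)  # paths[:, i] is B((i+1) * dt)

hit = np.abs(paths) >= 1.0
first = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps - 1)
stop_time = (first + 1) * dt
B_stop = paths[np.arange(n_paths), first]

print(B_stop.mean())                   # ~ 0, matching E B(t) = x_0
print((B_stop**2 - stop_time).mean())  # ~ 0, matching E (B(t) - x_0)^2 = E t
```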

Exercise 5 Prove the second part of Lemma 4.

This is the ninth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant ${\Lambda}$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have now tentatively improved the upper bound of the de Bruijn-Newman constant to ${\Lambda \leq 0.22}$. Among the technical improvements in our approach, we are now able to use Taylor expansions to efficiently compute the approximation ${A+B}$ to ${H_t(x+iy)}$ for many values of ${x,y}$ in a given region, thus speeding up the computations in the barrier considerably. Also, by using the heuristic that ${H_t(x+iy)}$ behaves somewhat like the partial Euler product ${\prod_p (1 - \frac{1}{p^{\frac{1+y-ix}{2}}})^{-1}}$, we were able to find a good location to place the barrier in which ${H_t(x+iy)}$ is larger than average, hence easier to keep away from zero.
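
For illustration, here is a toy evaluation of the Euler product heuristic (the prime cutoff and the sampled range of ${x}$ below are purely illustrative choices; the actual barrier computations take place at much larger heights):

```python
import numpy as np

# Scan |prod_p (1 - p^{-(1+y-ix)/2})^{-1}| over a window of x values, to locate
# spots where the heuristic predicts H_t(x+iy) to be larger than average, and
# hence a favourable place for a barrier.

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def partial_euler_product(x, y):
    s = (1 + y - 1j * x) / 2
    return np.prod([1.0 / (1.0 - p ** (-s)) for p in primes])

y = 0.2
xs = np.linspace(1000.0, 1100.0, 1001)
vals = np.abs([partial_euler_product(x, y) for x in xs])
print(xs[vals.argmax()], vals.max())  # a locally favourable barrier location
```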

The main remaining bottleneck is that of computing the Euler mollifier bounds that keep ${A+B}$ bounded away from zero for larger values of ${x}$ beyond the barrier. In going below ${0.22}$ we are beginning to need quite complicated mollifiers with somewhat poor tail behavior; we may be reaching the point where none of our bounds will succeed in keeping ${A+B}$ bounded away from zero, so we may be close to the natural limits of our methods.

Participants are also welcome to add any further summaries of the situation in the comments below.

We now approach conformal maps from yet another perspective. Given an open subset ${U}$ of the complex numbers ${{\bf C}}$, define a univalent function on ${U}$ to be a holomorphic function ${f: U \rightarrow {\bf C}}$ that is also injective. We will primarily be studying this concept in the case when ${U}$ is the unit disk ${D(0,1) := \{ z \in {\bf C}: |z| < 1 \}}$.

Clearly, a univalent function ${f: D(0,1) \rightarrow {\bf C}}$ on the unit disk is a conformal map from ${D(0,1)}$ to the image ${f(D(0,1))}$; in particular, ${f(D(0,1))}$ is simply connected, and not all of ${{\bf C}}$ (since otherwise the inverse map ${f^{-1}: {\bf C} \rightarrow D(0,1)}$ would violate Liouville’s theorem). In the converse direction, the Riemann mapping theorem tells us that every open simply connected proper subset ${V \subsetneq {\bf C}}$ of the complex numbers is the image of a univalent function on ${D(0,1)}$. Furthermore, if ${V}$ contains the origin, then the univalent function ${f: D(0,1) \rightarrow {\bf C}}$ with this image becomes unique once we normalise ${f(0) = 0}$ and ${f'(0) > 0}$. Thus the Riemann mapping theorem provides a one-to-one correspondence between open simply connected proper subsets of the complex plane containing the origin, and univalent functions ${f: D(0,1) \rightarrow {\bf C}}$ with ${f(0)=0}$ and ${f'(0)>0}$. We will focus particular attention on the univalent functions ${f: D(0,1) \rightarrow {\bf C}}$ with the normalisation ${f(0)=0}$ and ${f'(0)=1}$; such functions will be called schlicht functions.

One basic example of a univalent function on ${D(0,1)}$ is the Cayley transform ${z \mapsto \frac{1+z}{1-z}}$, which is a Möbius transformation from ${D(0,1)}$ to the right half-plane ${\{ \mathrm{Re}(z) > 0 \}}$. (The slight variant ${z \mapsto \frac{1-z}{1+z}}$ is also referred to as the Cayley transform, as is the closely related map ${z \mapsto \frac{z-i}{z+i}}$, which maps ${D(0,1)}$ to the upper half-plane.) One can square this map to obtain a further univalent function ${z \mapsto \left( \frac{1+z}{1-z} \right)^2}$, which now maps ${D(0,1)}$ to the complex numbers with the negative real axis ${(-\infty,0]}$ removed. One can normalise this function to be schlicht to obtain the Koebe function

$\displaystyle f(z) := \frac{1}{4}\left( \left( \frac{1+z}{1-z} \right)^2 - 1\right) = \frac{z}{(1-z)^2}, \ \ \ \ \ (1)$

which now maps ${D(0,1)}$ to the complex numbers with the half-line ${(-\infty,-1/4]}$ removed. A little more generally, for any ${\theta \in {\bf R}}$ we have the rotated Koebe function

$\displaystyle f(z) := \frac{z}{(1 - e^{i\theta} z)^2} \ \ \ \ \ (2)$

that is a schlicht function that maps ${D(0,1)}$ to the complex numbers with the half-line ${\{ -re^{-i\theta}: r \geq 1/4\}}$ removed.

Every schlicht function ${f: D(0,1) \rightarrow {\bf C}}$ has a convergent Taylor expansion

$\displaystyle f(z) = a_1 z + a_2 z^2 + a_3 z^3 + \dots$

for some complex coefficients ${a_1,a_2,\dots}$ with ${a_1=1}$. For instance, the Koebe function has the expansion

$\displaystyle f(z) = z + 2 z^2 + 3 z^3 + \dots = \sum_{n=1}^\infty n z^n$

and similarly the rotated Koebe function has the expansion

$\displaystyle f(z) = z + 2 e^{i\theta} z^2 + 3 e^{2i\theta} z^3 + \dots = \sum_{n=1}^\infty n e^{i(n-1)\theta} z^n.$

Intuitively, the Koebe function and its rotations should be the “largest” schlicht functions available. This is formalised by the famous Bieberbach conjecture, which asserts that for any schlicht function, the coefficients ${a_n}$ should obey the bound ${|a_n| \leq n}$ for all ${n}$. After a large number of partial results, this conjecture was eventually solved by de Branges; see for instance this survey of Korevaar or this survey of Koepf for a history.
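
As a quick consistency check, the Taylor coefficients of the Koebe function can be computed symbolically, confirming that it attains the Bieberbach bound ${|a_n| \leq n}$ with equality:

```python
from sympy import symbols, series

# The Koebe function z/(1-z)^2 has Taylor coefficients a_n = n, so it
# saturates the Bieberbach bound |a_n| <= n.

z = symbols('z')
f = z / (1 - z)**2
coeffs = series(f, z, 0, 10).removeO().as_poly(z).all_coeffs()[::-1]
print(coeffs)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```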

It turns out that to resolve these sorts of questions, it is convenient to restrict attention to schlicht functions ${g: D(0,1) \rightarrow {\bf C}}$ that are odd, thus ${g(-z)=-g(z)}$ for all ${z}$, and the Taylor expansion now reads

$\displaystyle g(z) = b_1 z + b_3 z^3 + b_5 z^5 + \dots$

for some complex coefficients ${b_1,b_3,\dots}$ with ${b_1=1}$. One can transform a general schlicht function ${f: D(0,1) \rightarrow {\bf C}}$ to an odd schlicht function ${g: D(0,1) \rightarrow {\bf C}}$ by observing that the function ${f(z^2)/z^2: D(0,1) \rightarrow {\bf C}}$, after removing the singularity at zero, is a non-zero function that equals ${1}$ at the origin, and thus (as ${D(0,1)}$ is simply connected) has a unique holomorphic square root ${(f(z^2)/z^2)^{1/2}}$ that also equals ${1}$ at the origin. If one then sets

$\displaystyle g(z) := z (f(z^2)/z^2)^{1/2} \ \ \ \ \ (3)$

it is not difficult to verify that ${g}$ is an odd schlicht function which additionally obeys the equation

$\displaystyle f(z^2) = g(z)^2. \ \ \ \ \ (4)$

Conversely, given an odd schlicht function ${g}$, the formula (4) uniquely determines a schlicht function ${f}$.

For instance, if ${f}$ is the Koebe function (1), ${g}$ becomes

$\displaystyle g(z) = \frac{z}{1-z^2} = z + z^3 + z^5 + \dots, \ \ \ \ \ (5)$

which maps ${D(0,1)}$ to the complex numbers with two slits ${\{ \pm iy: y > 1/2 \}}$ removed, and if ${f}$ is the rotated Koebe function (2), ${g}$ becomes

$\displaystyle g(z) = \frac{z}{1- e^{i\theta} z^2} = z + e^{i\theta} z^3 + e^{2i\theta} z^5 + \dots. \ \ \ \ \ (6)$
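
These formulae can be verified by a short symbolic computation, expanding the square root in (3) for the Koebe function and checking (4) and (5) to the first few orders:

```python
from sympy import symbols, sqrt, series

# For the Koebe function f(z) = z/(1-z)^2 one has f(z^2)/z^2 = 1/(1-z^2)^2,
# and the holomorphic square root equal to 1 at the origin gives
# g(z) = z/(1-z^2) = z + z^3 + z^5 + ..., as in (5).

z = symbols('z')
f = lambda w: w / (1 - w)**2

h = series(f(z**2) / z**2, z, 0, 8).removeO()   # 1 + 2z^2 + 3z^4 + 4z^6
g_ser = series(z * sqrt(h), z, 0, 8).removeO()  # series of g from formula (3)

assert g_ser == z + z**3 + z**5 + z**7                     # matches (5)
assert series(g_ser**2 - f(z**2), z, 0, 8).removeO() == 0  # checks (4)
```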

De Branges established the Bieberbach conjecture by first proving an analogous conjecture for odd schlicht functions known as Robertson’s conjecture. More precisely, we have

Theorem 1 (de Branges’ theorem) Let ${n \geq 1}$ be a natural number.

• (i) (Robertson conjecture) If ${g(z) = b_1 z + b_3 z^3 + b_5 z^5 + \dots}$ is an odd schlicht function, then

$\displaystyle \sum_{k=1}^n |b_{2k-1}|^2 \leq n.$

• (ii) (Bieberbach conjecture) If ${f(z) = a_1 z + a_2 z^2 + a_3 z^3 + \dots}$ is a schlicht function, then

$\displaystyle |a_n| \leq n.$

It is easy to see that the Robertson conjecture for a given value of ${n}$ implies the Bieberbach conjecture for the same value of ${n}$. Indeed, if ${f(z) = a_1 z + a_2 z^2 + a_3 z^3 + \dots}$ is schlicht, and ${g(z) = b_1 z + b_3 z^3 + b_5 z^5 + \dots}$ is the odd schlicht function given by (3), then from extracting the ${z^{2n}}$ coefficient of (4) we obtain a formula

$\displaystyle a_n = \sum_{j=1}^n b_{2j-1} b_{2(n+1-j)-1}$

for the coefficients of ${f}$ in terms of the coefficients of ${g}$. Applying the Cauchy-Schwarz inequality, we derive the Bieberbach conjecture for this value of ${n}$ from the Robertson conjecture for the same value of ${n}$. We remark that Littlewood and Paley had conjectured a stronger form ${|b_{2k-1}| \leq 1}$ of Robertson’s conjecture, but this was disproved for ${k=3}$ by Fekete and Szegö.

To prove the Robertson and Bieberbach conjectures, one first takes a logarithm and deduces both conjectures from a similar conjecture about the Taylor coefficients of ${\log \frac{f(z)}{z}}$, known as the Milin conjecture. Next, one continuously enlarges the image ${f(D(0,1))}$ of the schlicht function to cover all of ${{\bf C}}$; done properly, this places the schlicht function ${f}$ as the initial function ${f = f_0}$ in a sequence ${(f_t)_{t \geq 0}}$ of univalent maps ${f_t: D(0,1) \rightarrow {\bf C}}$ known as a Loewner chain. The functions ${f_t}$ obey a useful differential equation known as the Loewner equation, which involves an unspecified forcing term ${\mu_t}$ (or ${\theta(t)}$, in the case that the image is a slit domain) coming from the boundary; this in turn gives useful differential equations for the Taylor coefficients of ${f(z)}$, ${g(z)}$, or ${\log \frac{f(z)}{z}}$. After some elementary calculus manipulations to “integrate” these equations, the Bieberbach, Robertson, and Milin conjectures are then reduced to establishing the non-negativity of a certain explicit hypergeometric function, which is non-trivial to prove (and will not be done here, except for small values of ${n}$) but for which several proofs exist in the literature.

The theory of Loewner chains subsequently became fundamental to a more recent topic in complex analysis, that of the Schramm-Loewner equation (SLE), which is the focus of the next and final set of notes.
