At the core of almost any undergraduate real analysis course are the concepts of differentiation and integration, with these two basic operations being tied together by the fundamental theorem of calculus (and its higher dimensional generalisations, such as Stokes’ theorem). Similarly, the notion of the complex derivative and the complex line integral (that is to say, the contour integral) lie at the core of any introductory complex analysis course. Once again, they are tied to each other by the fundamental theorem of calculus; but in the complex case there is a further variant of the fundamental theorem, namely Cauchy’s theorem, which endows complex differentiable functions with many important and surprising properties that are often not shared by their real differentiable counterparts. We will give complex differentiable functions another name to emphasise this extra structure, by referring to such functions as holomorphic functions. (This term is also useful to distinguish these functions from the slightly less well-behaved meromorphic functions, which we will discuss in later notes.)

In this set of notes we will focus solely on the concept of complex differentiation, deferring the discussion of contour integration to the next set of notes. To begin with, the theory of complex differentiation will greatly resemble the theory of real differentiation; the definitions look almost identical, and well known laws of differential calculus such as the product rule, quotient rule, and chain rule carry over verbatim to the complex setting, and the theory of complex power series is similarly almost identical to the theory of real power series. However, when one compares the “one-dimensional” differentiation theory of the complex numbers with the “two-dimensional” differentiation theory of two real variables, we find that the dimensional discrepancy forces complex differentiable functions to obey a real-variable constraint, namely the Cauchy-Riemann equations. These equations make complex differentiable functions substantially more “rigid” than their real-variable counterparts; they imply for instance that the imaginary part of a complex differentiable function is essentially determined (up to constants) by the real part, and vice versa. Furthermore, even when considered separately, the real and imaginary components of complex differentiable functions are forced to obey the strong constraint of being harmonic. In later notes we will see these constraints manifest themselves in integral form, particularly through Cauchy’s theorem and the closely related Cauchy integral formula.

Despite all the constraints that holomorphic functions have to obey, a surprisingly large number of the functions of a complex variable that one actually encounters in applications turn out to be holomorphic. For instance, any polynomial ${z \mapsto P(z)}$ with complex coefficients will be holomorphic, as will the complex exponential ${z \mapsto \exp(z)}$. From this and the laws of differential calculus one can then generate many further holomorphic functions. Also, as we will show presently, complex power series will automatically be holomorphic inside their disk of convergence. On the other hand, there are certainly basic complex functions of interest that are not holomorphic, such as the complex conjugation function ${z \mapsto \overline{z}}$, the absolute value function ${z \mapsto |z|}$, or the real and imaginary part functions ${z \mapsto \mathrm{Re}(z), z \mapsto \mathrm{Im}(z)}$. We will also encounter functions that are holomorphic on some portions of the complex plane, but not on others; for instance, rational functions will be holomorphic except at those few points where the denominator vanishes, and are prime examples of the meromorphic functions mentioned previously. Later on we will also consider functions such as branches of the logarithm or square root, which will be holomorphic outside of a branch cut corresponding to the choice of branch. It is a basic but important skill in complex analysis to be able to quickly recognise which functions are holomorphic and which ones are not, as many of the useful theorems available to the former (such as Cauchy’s theorem) break down spectacularly for the latter. Indeed, in my experience, one of the most common “rookie errors” that beginning complex analysis students make is the error of attempting to apply a theorem about holomorphic functions to a function that is not at all holomorphic.
This stands in contrast to the situation in real analysis, in which one can often obtain correct conclusions by formally applying the laws of differential or integral calculus to functions that might not actually be differentiable or integrable in a classical sense. (This latter phenomenon, by the way, can be largely explained using the theory of distributions, as covered for instance in this previous post, but this is beyond the scope of the current course.)

Remark 1 In this set of notes it will be convenient to impose some unnecessarily generous regularity hypotheses (e.g. continuous second differentiability) on the holomorphic functions one is studying in order to make the proofs simpler. In later notes, we will discover that these hypotheses are in fact redundant, due to the phenomenon of elliptic regularity that ensures that holomorphic functions are automatically smooth.

— 1. Complex differentiation and power series —

Recall in real analysis that if ${f: U \rightarrow {\bf R}}$ is a function defined on some subset ${U}$ of the real line ${{\bf R}}$, and ${x_0}$ is an interior point of ${U}$ (that is to say, ${U}$ contains an interval of the form ${(x_0-\varepsilon,x_0+\varepsilon)}$ for some ${\varepsilon>0}$), then we say that ${f}$ is differentiable at ${x_0}$ if the limit

$\displaystyle \lim_{x \rightarrow x_0; x \in U \backslash \{x_0\}} \frac{f(x)-f(x_0)}{x-x_0}$

exists (note we have to exclude ${x_0}$ from the possible values of ${x}$ to avoid division by zero). If ${f}$ is differentiable at ${x_0}$, we denote the above limit as ${f'(x_0)}$ or ${\frac{df}{dx}(x_0)}$, and refer to this as the derivative of ${f}$ at ${x_0}$. If ${U}$ is open (that is to say, every element of ${U}$ is an interior point), and ${f}$ is differentiable at every point of ${U}$, then we say that ${f}$ is differentiable on ${U}$, and call ${f': U \rightarrow {\bf R}}$ the derivative of ${f}$. (One can also define differentiability at non-interior points if they are not isolated, but for simplicity we will restrict attention to interior derivatives only.)

We can adapt this definition to the complex setting without any difficulty:

Definition 2 (Complex differentiability) Let ${U}$ be a subset of the complex numbers ${{\bf C}}$, and let ${f: U \rightarrow {\bf C}}$ be a function. If ${z_0}$ is an interior point of ${U}$ (that is to say, ${U}$ contains a disk ${D(z_0,\varepsilon) := \{ z \in {\bf C}: |z-z_0| < \varepsilon\}}$ for some ${\varepsilon>0}$), we say that ${f}$ is complex differentiable at ${z_0}$ if the limit

$\displaystyle \lim_{z \rightarrow z_0; z \in U \backslash \{z_0\}} \frac{f(z)-f(z_0)}{z-z_0}$

exists, in which case we denote this limit as ${f'(z_0)}$, ${\frac{df}{dz}(z_0)}$, or ${\frac{d}{dz} f(z_0)}$, and refer to this as the complex derivative of ${f}$ at ${z_0}$. If ${U}$ is open (that is to say, every point in ${U}$ is an interior point), and ${f}$ is complex differentiable at every point of ${U}$, we say that ${f}$ is complex differentiable on ${U}$, or holomorphic on ${U}$.

In terms of epsilons and deltas: ${f}$ is complex differentiable at ${z_0}$ with derivative ${f'(z_0)}$ if and only if, for every ${\varepsilon>0}$, there exists ${\delta>0}$ such that ${|\frac{f(z)-f(z_0)}{z-z_0} - f'(z_0)| < \varepsilon}$ whenever ${z \in U}$ is such that ${0 < |z-z_0| < \delta}$. Another way of writing this is that we have an approximate linearisation

$\displaystyle f(z) = f(z_0) + f'(z_0) (z-z_0) + o( |z-z_0| ) \ \ \ \ \ (1)$

as ${z}$ approaches ${z_0}$, where ${o(|z-z_0|)}$ denotes a quantity of the form ${|z-z_0| c(z)}$ for ${z}$ in a neighbourhood of ${z_0}$, where ${c(z)}$ goes to zero as ${z}$ goes to ${z_0}$. Making the change of variables ${z = z_0 + h}$, one can also write the derivative ${f'(z_0)}$ in the familiar form

$\displaystyle f'(z_0) = \lim_{h \rightarrow 0; h \in (U - z_0) \backslash \{0\}} \frac{f(z_0+h) - f(z_0)}{h}$

where ${U-z_0 := \{ z - z_0: z \in U \}}$ is the translation of ${U}$ by ${-z_0}$.

If ${f}$ is complex differentiable at ${z_0}$, then from the limit laws we see that

$\displaystyle \lim_{z \rightarrow z_0: z \in U \backslash \{z_0\}} (f(z) - f(z_0)) = f'(z_0) \lim_{z \rightarrow z_0}(z-z_0) = 0$

and hence

$\displaystyle \lim_{z \rightarrow z_0: z \in U} f(z) = f(z_0),$

that is to say that ${f}$ is continuous at ${z_0}$. In particular, holomorphic functions are automatically continuous. (Later on we will see that they are in fact far more regular than this, being smooth and even analytic.)

It is usually quite tedious to verify complex differentiability of a function, and to compute its derivative, from first principles. We will give just one example of this:

Proposition 3 Let ${n}$ be a non-negative integer. Then the function ${z \mapsto z^n}$ is holomorphic on the entire complex plane ${{\bf C}}$, with derivative ${z \mapsto n z^{n-1}}$ (with the convention that ${n z^{n-1}}$ is zero when ${n=0}$).

Proof: This is clear for ${n=0}$, so suppose ${n \geq 1}$. We need to show that, for any complex number ${z_0}$,

$\displaystyle \lim_{z \rightarrow z_0; z \neq z_0} \frac{z^n - z_0^n}{z - z_0} = n z_0^{n-1}.$

But we have the geometric series identity

$\displaystyle \frac{z^n - z_0^n}{z - z_0} = z^{n-1} + z^{n-2} z_0 + \dots + z z_0^{n-2} + z_0^{n-1},$

which is valid (in any field) whenever ${z \neq z_0}$, as can be seen either by induction or by multiplying both sides by ${z-z_0}$ and cancelling the telescoping series on the right-hand side. The claim then follows from the usual limit laws. $\Box$
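For readers who like to experiment, here is a quick numerical sanity check of Proposition 3 in Python (an illustration of my own, not part of the proof; the exponent, sample point, and increments are arbitrary choices). Note that the difference quotient gives approximately the same answer along several different complex directions of approach, as the definition of the complex derivative demands:

```python
# Numerical sanity check (illustration only, not a proof): the difference
# quotient of f(z) = z**n at a sample point z0 tends to n * z0**(n-1),
# and the limit is the same along several complex directions.
def diff_quotient(f, z0, h):
    return (f(z0 + h) - f(z0)) / h

n = 5
z0 = 1.3 + 0.7j
exact = n * z0 ** (n - 1)
for h in (1e-6, 1e-6j, (1e-6 + 1e-6j) / 2 ** 0.5):
    approx = diff_quotient(lambda z: z ** n, z0, h)
    assert abs(approx - exact) < 1e-4
```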

Fortunately, we have the familiar laws of differential calculus, which allow us to more quickly establish the differentiability of functions that arise as various combinations of functions already known to be differentiable, and to compute their derivatives:

Exercise 4 (Laws of differentiation) Let ${U}$ be an open subset of ${{\bf C}}$, let ${z_0}$ be a point in ${U}$, and let ${f, g: U \rightarrow {\bf C}}$ be functions that are complex differentiable at ${z_0}$.

• (i) (Linearity) Show that ${f+g}$ is complex differentiable at ${z_0}$, with derivative ${f'(z_0)+g'(z_0)}$. For any constant ${c \in {\bf C}}$, show that ${cf}$ is differentiable at ${z_0}$, with derivative ${cf'(z_0)}$.
• (ii) (Product rule) Show that ${fg}$ is complex differentiable at ${z_0}$, with derivative ${f'(z_0) g(z_0) + f(z_0) g'(z_0)}$.
• (iii) (Quotient rule) If ${g(z_0)}$ is non-zero, show that ${f/g}$ (which is defined in a neighbourhood of ${z_0}$, by continuity) is complex differentiable at ${z_0}$, with derivative ${\frac{f'(z_0) g(z_0) - f(z_0) g'(z_0)}{g(z_0)^2}}$.
• (iv) (Chain rule) If ${V}$ is a neighbourhood of ${f(z_0)}$, and ${g: V \rightarrow {\bf C}}$ is a function that is complex differentiable at ${f(z_0)}$, show that the composition ${g \circ f}$ (which is defined in a neighbourhood of ${z_0}$) is complex differentiable at ${z_0}$, with derivative

$\displaystyle (g \circ f)'(z_0) = g'(f(z_0)) f'(z_0).$

(Hint: take your favourite proof of the real-variable version of these facts and adapt them to the complex setting.)

One could also state and prove a complex-variable form of the inverse function theorem here, but the proof of that statement is a bit more complicated than the ones in the above exercise, so we defer it until later in the course when it becomes needed.

If a function ${f: {\bf C} \rightarrow {\bf C}}$ is holomorphic on the entire complex plane, we call it an entire function; clearly such functions remain holomorphic when restricted to any open subset ${U}$ of the complex plane. Thus for instance Proposition 3 tells us that the functions ${z \mapsto z^n}$ are entire, and from linearity we then see that any complex polynomial

$\displaystyle P(z) = a_n z^n + \dots + a_1 z + a_0$

will be an entire function, with derivative given by the familiar formula

$\displaystyle P'(z) = n a_n z^{n-1} + \dots + a_1. \ \ \ \ \ (2)$

A function of the form ${P(z)/Q(z)}$, where ${P,Q}$ are polynomials with ${Q}$ not identically zero, is called a rational function, being to polynomials as rational numbers are to integers. Such a rational function is well defined at any point ${z}$ where ${Q(z)}$ is non-zero. From the factor theorem (which works over any field, and in particular over the complex numbers) we know that the number of zeroes of ${Q}$ is finite, being bounded by the degree of ${Q}$ (of course we will be able to say something stronger once we have the fundamental theorem of algebra). Because of these singularities, rational functions are rarely entire; but from the quotient rule we do at least see that ${P(z)/Q(z)}$ is complex differentiable wherever the denominator is non-zero. Such functions are prime examples of meromorphic functions, which we will discuss later in the course.

Exercise 5 (Gauss-Lucas theorem) Let ${P(z)}$ be a complex polynomial that is factored as

$\displaystyle P(z) = c (z-z_1) \dots (z-z_n)$

for some non-zero constant ${c \in {\bf C}}$ and roots ${z_1,\dots,z_n \in {\bf C}}$ (not necessarily distinct) with ${n \geq 1}$.

• (i) Suppose that ${z_1,\dots,z_n}$ all lie in the upper half-plane ${\{ z \in {\bf C}: \mathrm{Im}(z) \geq 0 \}}$. Show that any root of the derivative ${P'(z)}$ also lies in the upper half-plane. (Hint: use the product rule to decompose the log-derivative ${\frac{P'(z)}{P(z)}}$ into partial fractions, and then investigate the sign of the imaginary part of this log-derivative for ${z}$ outside the upper half-plane.)
• (ii) Show that all the roots of ${P'}$ lie in the convex hull of the set ${z_1,\dots,z_n}$ of roots of ${P}$, that is to say the smallest convex polygon that contains ${z_1,\dots,z_n}$.
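Part (i) of this exercise can be illustrated numerically (this is an illustration, not a proof; the sample roots and tolerance below are arbitrary choices of mine):

```python
import numpy as np

# Numerical illustration of Exercise 5(i): the sample roots below all
# have non-negative imaginary part, so the roots of P' should also lie
# in the closed upper half-plane.
roots = np.array([1 + 2j, -3 + 0.5j, 2j, 4 + 0j])
P = np.poly(roots)      # coefficients of the monic polynomial with these roots
dP = np.polyder(P)      # coefficients of the derivative P'
crit = np.roots(dP)     # roots of P'
assert all(z.imag >= -1e-6 for z in crit)  # non-negative up to round-off
```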

Now we discuss power series, which are infinite degree variants of polynomials, and which turn out to inherit many of the algebraic and analytic properties of such polynomials, at least if one stays within the disk of convergence.

Definition 6 (Power series) Let ${z_0}$ be a complex number. A formal power series with complex coefficients around the point ${z_0}$ is a formal series of the form

$\displaystyle \sum_{n=0}^\infty a_n (\mathrm{z}-z_0)^n$

for some complex numbers ${a_0, a_1, \dots}$, with ${\mathrm{z}}$ an indeterminate.

One can attempt to evaluate a formal power series ${\sum_{n=0}^\infty a_n (\mathrm{z}-z_0)^n}$ at a given complex number ${z}$ by replacing the formal indeterminate ${\mathrm{z}}$ with the complex number ${z}$. This may or may not produce a convergent (or absolutely convergent) series, depending on where ${z}$ is; for instance, the power series ${\sum_{n=0}^\infty a_n (\mathrm{z}-z_0)^n}$ is always absolutely convergent at ${z=z_0}$, but the geometric power series ${\sum_{n=0}^\infty z^n}$ fails to be even conditionally convergent whenever ${|z| \geq 1}$ (since the summands do not go to zero). As it turns out, the region of convergence is always essentially a disk, the size of which depends on how rapidly the coefficients ${a_n}$ decay (or how slowly they grow):

Proposition 7 (Convergence of power series) Let ${\sum_{n=0}^\infty a_n (\mathrm{z}-z_0)^n}$ be a formal power series, and define the radius of convergence ${R \in [0,+\infty]}$ of the series to be the quantity

$\displaystyle R := \liminf_{n \rightarrow \infty} |a_n|^{-1/n} \ \ \ \ \ (3)$

with the convention that ${|a_n|^{-1/n}}$ is infinite if ${a_n=0}$. (Note that ${R}$ is allowed to be zero or infinite.) Then the formal power series is absolutely convergent for any ${z}$ in the disk ${D(z_0,R) := \{ z: |z-z_0| < R \}}$ (known as the disk of convergence), and is divergent (i.e., not convergent) for any ${z}$ in the exterior region ${\{ z: |z-z_0| > R \}}$.

Proof: The proof is nearly identical to the analogous result for real power series. First suppose that ${z}$ is a complex number with ${|z-z_0| > R}$ (this of course implies that ${R}$ is finite). Then by (3), we have ${|a_n|^{-1/n} < |z-z_0|}$ for infinitely many ${n}$, which after some rearranging implies that ${| a_n (z-z_0)^n | > 1}$ for infinitely many ${n}$. In particular, the sequence ${a_n (z-z_0)^n}$ does not go to zero as ${n \rightarrow \infty}$, which implies that ${\sum_{n=0}^\infty a_n (z-z_0)^n}$ is divergent.

Now suppose that ${z}$ is a complex number with ${|z-z_0| < R}$ (this of course implies that ${R}$ is non-zero). Choose a real number ${r}$ with ${|z-z_0| < r < R}$, then by (3), we have ${|a_n|^{-1/n} > r}$ for all sufficiently large ${n}$, which after some rearranging implies that

$\displaystyle |a_n (z-z_0)^n | < (\frac{|z-z_0|}{r})^n$

for all sufficiently large ${n}$. Since the geometric series ${\sum_{n=0}^\infty (\frac{|z-z_0|}{r})^n}$ is absolutely convergent, this implies that ${\sum_{n=0}^\infty a_n (z-z_0)^n}$ is absolutely convergent also, as required. $\Box$

Remark 8 Note that this proposition does not say what happens on the boundary ${\{ z: |z-z_0| = R \}}$ of this disk (assuming for sake of discussion that the radius of convergence ${R}$ is finite and non-zero). The behaviour of power series on and near the boundary of the disk of convergence is in fact remarkably subtle; see for instance Example 11 below.

The above proposition gives a “root test” formula for the radius of convergence. The following “ratio test” variant gives convenient lower and upper bounds for the radius of convergence which suffice in many applications:

Exercise 9 (Ratio test) If ${\sum_{n=0}^\infty a_n (\mathrm{z}-z_0)^n}$ is a formal power series with the ${a_n}$ non-zero for all sufficiently large ${n}$, show that the radius of convergence ${R}$ of the series obeys the bounds

$\displaystyle \limsup_{n \rightarrow \infty} \frac{|a_n|}{|a_{n+1}|} \geq R \geq \liminf_{n \rightarrow \infty} \frac{|a_n|}{|a_{n+1}|}. \ \ \ \ \ (4)$

In particular, if the limit ${\lim_{n \rightarrow \infty} \frac{|a_n|}{|a_{n+1}|}}$ exists, then it is equal to ${R}$. Give examples to show that strict inequality can hold in both bounds in (4).
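Both tests are easy to experiment with numerically; the following sketch (with a sample series of my own choosing) compares the root test (3) and the ratio test for ${a_n = 2^n/(n+1)}$, for which both predict ${R = 1/2}$:

```python
# Sanity check on a sample series (my own example): for a_n = 2**n / (n+1),
# both the root test (3) and the ratio test (4) predict radius R = 1/2.
def a(n):
    return 2.0 ** n / (n + 1)

n = 500
root_est = a(n) ** (-1.0 / n)   # |a_n|^{-1/n}, tends to R = 1/2
ratio_est = a(n) / a(n + 1)     # |a_n| / |a_{n+1}|, also tends to 1/2
assert abs(root_est - 0.5) < 0.01
assert abs(ratio_est - 0.5) < 0.01
```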

If a formal power series ${\sum_{n=0}^\infty a_n (\mathrm{z}-z_0)^n}$ has a positive radius of convergence, then it defines a function ${F: D(z_0,R) \rightarrow {\bf C}}$ in the disk of convergence by setting

$\displaystyle F(z) := \sum_{n=0}^\infty a_n (z-z_0)^n.$

We refer to such a function as a power series, and refer to ${R}$ as the radius of convergence of that power series. (Strictly speaking, a formal power series and a power series are different concepts, but there is little actual harm in conflating them together in practice, because of the uniqueness property established in Exercise 17 below.)

Example 10 The formal power series ${\sum_{n=0}^\infty n! \mathrm{z}^n}$ has a zero radius of convergence, thanks to the ratio test, and so only converges at ${z=0}$. At the other extreme, the exponential formal power series ${\sum_{n=0}^\infty \frac{\mathrm{z}^n}{n!}}$ has an infinite radius of convergence (again thanks to the ratio test), and converges of course to ${\exp(z)}$ when evaluated at any complex number ${z}$.

Example 11 (Geometric series) The formal power series ${\sum_{n=0}^\infty \mathrm{z}^n}$ has radius of convergence ${1}$. If ${z}$ lies in the disk of convergence ${D(0,1) = \{ z \in {\bf C}: |z| < 1 \}}$, then we have

$\displaystyle z \sum_{n=0}^\infty z^n = \sum_{n=0}^\infty z^{n+1}$

$\displaystyle = \sum_{n=1}^\infty z^n$

$\displaystyle = \sum_{n=0}^\infty z^n - 1$

and thus after some algebra we obtain the geometric series formula

$\displaystyle \sum_{n=0}^\infty z^n = \frac{1}{1-z} \ \ \ \ \ (5)$

as long as ${z}$ is inside the disk ${D(0,1)}$. The function ${z \mapsto \frac{1}{1-z}}$ does not extend continuously to the boundary point ${z=1}$ of the disk, but does extend continuously (and even smoothly) to the rest of the boundary, and is in fact holomorphic on the remainder ${{\bf C} \backslash \{1\}}$ of the complex plane. However, the geometric series ${\sum_{n=0}^\infty z^n}$ diverges at every single point of this boundary (when ${|z|=1}$, the terms ${z^n}$ of the series do not converge to zero), and of course definitely diverges outside of the disk as well. Thus we see that the function that a power series converges to can extend well beyond the disk of convergence, which thus may only capture a portion of the domain of definition of that function. For instance, if one formally applies (5) with, say, ${z=2}$, one ends up with the apparent identity

$\displaystyle 1 + 2 + 2^2 + 2^3 + \dots = -1.$

This identity does not make sense if one interprets infinite series in the classical fashion, as the series ${1 + 2 + 2^2 + \dots}$ is definitely divergent. However, by formally extending identities such as (5) beyond their disk of convergence, we can generalise the notion of summation of infinite series to assign meaningful values to such series even if they do not converge in the classical sense. This leads to generalised summation methods such as zeta function regularisation, which are discussed in this previous blog post. However, we will not use such generalised interpretations of summation very much in this course.
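The dichotomy between classical convergence inside the disk and divergence outside it is easy to see numerically; here is a small sketch (sample points are my own choices):

```python
# Numerical check of the geometric series formula (5) at a sample point
# inside D(0,1), together with the divergence outside the disk.
def geom_partial(z, N):
    return sum(z ** k for k in range(N))

z = 0.3 + 0.4j                           # |z| = 0.5, inside the disk
assert abs(geom_partial(z, 200) - 1 / (1 - z)) < 1e-12
assert abs(geom_partial(2, 50)) > 1e10   # partial sums blow up at z = 2
```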

Exercise 12 For any complex numbers ${a, r, z_0}$, show that the formal power series ${\sum_{n=0}^\infty a r^n (\mathrm{z}-z_0)^n}$ has radius of convergence ${1/|r|}$ (with the convention that this is infinite for ${r=0}$), and is equal to the function ${z \mapsto \frac{a}{1 - r(z-z_0)}}$ inside the disk of convergence.

Exercise 13 For any positive integer ${m}$, show that the formal power series

$\displaystyle \sum_{n=0}^\infty \binom{n+m}{m} \mathrm{z}^n$

has radius of convergence ${1}$, and converges to the function ${z \mapsto \frac{1}{(1-z)^{m+1}}}$ in the disk ${D(0,1)}$. Here of course ${\binom{n+m}{m} := \frac{(n+m)!}{n! m!}}$ is the usual binomial coefficient.
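One can gain confidence in this identity by truncating the series at a sample point inside the disk (a numerical illustration, not a solution to the exercise; the point ${z}$ and truncation length are arbitrary):

```python
from math import comb

# Numerical check of Exercise 13 at a sample point inside D(0,1):
# sum binom(n+m, m) z**n should agree with 1/(1-z)**(m+1).
m = 3
z = 0.2 + 0.3j
series = sum(comb(n + m, m) * z ** n for n in range(200))
assert abs(series - 1 / (1 - z) ** (m + 1)) < 1e-10
```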

We have seen above that power series can be well behaved as one approaches the boundary of the disk of convergence, while being divergent at the boundary. However, the converse scenario, in which the power series converges at the boundary but does not behave well as one approaches the boundary, does not occur:

Exercise 14

• (i) (Summation by parts formula) Let ${a_0,a_1,a_2,\dots,a_N}$ be a finite sequence of complex numbers, and let ${A_n := a_0 + \dots + a_n}$ be the partial sums for ${n=0,\dots,N}$. Show that, for any complex numbers ${b_0,\dots,b_N}$,

$\displaystyle \sum_{n=0}^N a_n b_n = \sum_{n=0}^{N-1} A_n (b_n - b_{n+1}) + b_N A_N.$

• (ii) Let ${a_0,a_1,\dots}$ be a sequence of complex numbers such that ${\sum_{n=0}^\infty a_n}$ is convergent (not necessarily absolutely) to zero. Show that for any ${0 < r < 1}$, the series ${\sum_{n=0}^\infty a_n r^n}$ is absolutely convergent, and

$\displaystyle \lim_{r \rightarrow 1^-} \sum_{n=0}^\infty a_n r^n = 0.$

(Hint: use summation by parts and a limiting argument to express ${\sum_{n=0}^\infty a_n r^n}$ in terms of the partial sums ${A_n = a_0 + \dots + a_n}$.)

• (iii) (Abel’s theorem) Let ${F(z) = \sum_{n=0}^\infty a_n (z-z_0)^n}$ be a power series with a finite positive radius of convergence ${R}$, and let ${z_1 := z_0+Re^{i\theta}}$ be a point on the boundary of the disk of convergence at which the series ${\sum_{n=0}^\infty a_n (z_1 - z_0)^n}$ converges (not necessarily absolutely). Show that ${\lim_{r \rightarrow R^-} F(z_0 + r e^{i\theta}) = F(z_1)}$. (Hint: use various translations and rotations to reduce to the case considered in (ii).)
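Abel’s theorem can be illustrated with a classical example (my own choice, not part of the exercise): the series ${\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} z^n}$ converges to ${\log(1+z)}$ on ${D(0,1)}$, converges conditionally at the boundary point ${z=1}$ to the alternating harmonic series ${\log 2}$, and the radial limit indeed recovers this boundary value:

```python
import math

# Illustration of Abel's theorem with the (assumed) example
# F(z) = sum_{n>=1} (-1)**(n+1) z**n / n = log(1+z) on D(0,1);
# the radial limit as r -> 1- should approach log 2, the conditionally
# convergent value of the series at the boundary point z = 1.
def F(r, N=100000):
    return sum((-1) ** (k + 1) * r ** k / k for k in range(1, N))

for r in (0.9, 0.99, 0.999):
    assert abs(F(r) - math.log(1 + r)) < 1e-9
assert abs(F(0.999) - math.log(2)) < 1e-3   # close to the boundary value
```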

As a general rule of thumb, as long as one is inside the disk of convergence, power series behave very similarly to polynomials. In particular, we can generalise the differentiation formula (2) to such power series:

Theorem 15 Let ${F(z) = \sum_{n=0}^\infty a_n (z-z_0)^n}$ be a power series with a positive radius of convergence ${R}$. Then ${F}$ is holomorphic on the disk of convergence ${D(z_0,R)}$, and the derivative ${F'}$ is given by the power series

$\displaystyle F'(z) = \sum_{n=1}^\infty n a_n (z-z_0)^{n-1} = \sum_{n=0}^\infty (n+1) a_{n+1} (z-z_0)^n$

that has the same radius of convergence ${R}$ as ${F}$.

Proof: From (3), the standard limit ${\lim_{n \rightarrow \infty} n^{1/n} = 1}$ and the usual limit laws, it is easy to see that the power series ${\sum_{n=0}^\infty (n+1) a_{n+1} (z-z_0)^n}$ has the same radius of convergence ${R}$ as ${\sum_{n=0}^\infty a_n (z-z_0)^n}$. To show that this series is actually the derivative of ${F}$, we use first principles. If ${z_1}$ lies in the disk of convergence, we consider the Newton quotient

$\displaystyle \frac{F(z)-F(z_1)}{z-z_1}$

for ${z \in D(z_0,R) \backslash \{z_1\}}$. Expanding out the absolutely convergent series ${F(z)}$ and ${F(z_1)}$, we can write

$\displaystyle \frac{F(z)-F(z_1)}{z-z_1} = \sum_{n=0}^\infty a_n \frac{(z-z_0)^n - (z_1-z_0)^n}{z-z_1}.$

The ratio ${\frac{(z-z_0)^n - (z_1-z_0)^n}{z-z_1}}$ vanishes for ${n=0}$, and for ${n \geq 1}$ it is equal to ${(z-z_0)^{n-1} + (z-z_0)^{n-2} (z_1-z_0) + \dots + (z_1-z_0)^{n-1}}$ as in the proof of Proposition 3 (noting that ${z - z_1 = (z-z_0) - (z_1-z_0)}$). Thus

$\displaystyle \frac{F(z)-F(z_1)}{z-z_1} = \sum_{n=1}^\infty a_n ((z-z_0)^{n-1} + (z-z_0)^{n-2} (z_1-z_0) + \dots + (z_1-z_0)^{n-1}).$

As ${z}$ approaches ${z_1}$, each summand ${a_n ((z-z_0)^{n-1} + (z-z_0)^{n-2} (z_1-z_0) + \dots + (z_1-z_0)^{n-1})}$ converges to ${n a_n (z_1-z_0)^{n-1}}$. This almost proves the desired limiting formula

$\displaystyle \lim_{z \rightarrow z_1: z \in D(z_0,R) \backslash \{z_1\}} \frac{F(z)-F(z_1)}{z-z_1} = \sum_{n=1}^\infty n a_n (z_1-z_0)^{n-1},$

but we need to justify the interchange of a sum and limit. Fortunately we have a standard tool for this, namely the Weierstrass ${M}$-test (which works for complex-valued functions exactly as it does for real-valued functions; one could also use the dominated convergence theorem here). It will be convenient to select two real numbers ${r_1,r_2}$ with ${|z_1-z_0| < r_1 < r_2 < R}$. Clearly, for ${z}$ close enough to ${z_1}$, we have ${|z-z_0| < r_1}$. By the triangle inequality we then have

$\displaystyle |a_n ((z-z_0)^{n-1} + (z-z_0)^{n-2} (z_1-z_0) + \dots + (z_1-z_0)^{n-1})| \leq n |a_n| r_1^{n-1}.$

On the other hand, from (3) we know that ${|a_n|^{-1/n} \geq r_2}$ for sufficiently large ${n}$, hence ${|a_n| \leq r_2^{-n}}$ for sufficiently large ${n}$. From the ratio test we know that the series ${\sum_{n=1}^\infty n r_2^{-n} r_1^{n-1}}$ is absolutely convergent, hence the series ${\sum_{n=1}^\infty n |a_n| r_1^{n-1}}$ is also. Thus, for ${z}$ sufficiently close to ${z_1}$, the summands ${|a_n ((z-z_0)^{n-1} + (z-z_0)^{n-2} (z_1-z_0) + \dots + (z_1-z_0)^{n-1})|}$ are uniformly dominated by an absolutely summable sequence of numbers ${n |a_n| r_1^{n-1}}$. Applying the Weierstrass ${M}$-test (or dominated convergence theorem), we obtain the claim. $\Box$

Exercise 16 Prove the above theorem directly using epsilon and delta type arguments, rather than invoking the ${M}$-test or the dominated convergence theorem.

We remark that the above theorem is a little easier to prove once we have the complex version of the fundamental theorem of calculus, but this will have to wait until the next set of notes, where we will also prove a remarkable converse to the above theorem, in that any holomorphic function can be expanded as a power series around any point in its domain.
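Theorem 15 is also easy to test numerically; for instance, term-by-term differentiation of the geometric series should recover the derivative of ${\frac{1}{1-z}}$ (a sketch with a sample point of my own choosing):

```python
# Numerical check of Theorem 15 for the geometric series: term-by-term
# differentiation of sum z**n gives sum n z**(n-1), which should agree
# with d/dz 1/(1-z) = 1/(1-z)**2 inside D(0,1).
z = 0.4 - 0.2j
termwise = sum(k * z ** (k - 1) for k in range(1, 300))
assert abs(termwise - 1 / (1 - z) ** 2) < 1e-10
```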

A convenient feature of power series is the ability to equate coefficients: if two power series around the same point ${z_0}$ agree, then their coefficients must also agree. More precisely, we have:

Exercise 17 (Taylor expansion and uniqueness of power series) Let ${F(z) = \sum_{n=0}^\infty a_n (z-z_0)^n}$ be a power series with a positive radius of convergence. Show that ${a_n = \frac{1}{n!} F^{(n)}(z_0)}$, where ${F^{(n)}}$ denotes the ${n^{\mathrm{th}}}$ complex derivative of ${F}$. In particular, if ${G(z) = \sum_{n=0}^\infty b_n (z-z_0)^n}$ is another power series around ${z_0}$ with a positive radius of convergence which agrees with ${F}$ on some neighbourhood ${U}$ of ${z_0}$ (thus, ${F(z)=G(z)}$ for all ${z \in U}$), show that the coefficients of ${F}$ and ${G}$ are identical, that is to say that ${a_n = b_n}$ for all ${n \geq 0}$.

Of course, one can no longer compare coefficients so easily if the power series are based around two different points. For instance, from Example 11 and Exercise 12 we see that the geometric series ${\sum_{n=0}^\infty z^n}$ and the recentred series ${\sum_{n=0}^\infty \frac{1}{2^{n+1}} (z+1)^n}$ both converge to the same function ${\frac{1}{1-z}}$ on the unit disk ${D(0,1)}$ (the latter in fact converges on the larger disk ${D(-1,2)}$), but have differing coefficients. The precise relation between the coefficients of power series of the same function is given as follows:

Exercise 18 (Changing the origin of a power series) Let ${F(z) = \sum_{n=0}^\infty a_n (z-z_0)^n}$ be a power series with a positive radius of convergence ${R}$. Let ${z_1}$ be an element of the disk of convergence ${D(z_0,R)}$. Show that the formal power series ${\sum_{n=0}^\infty b_n (z-z_1)^n}$, where

$\displaystyle b_n := \sum_{m=n}^\infty \binom{m}{n} a_m (z_1-z_0)^{m-n},$

has radius of convergence at least ${R - |z_1-z_0|}$, and converges to ${F(z)}$ on the disk ${D(z_1, R - |z_1-z_0|)}$. Here of course ${\binom{m}{n} = \frac{m!}{n!(m-n)!}}$ is the usual binomial coefficient.
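The recentring formula can be checked on the geometric series (an illustration, not a solution; the recentring point is an arbitrary choice): taking ${a_m = 1}$, ${z_0 = 0}$ and ${z_1 = 1/2}$, the formula should reproduce the expansion ${\frac{1}{1-z} = \sum_{n=0}^\infty 2^{n+1} (z - \frac{1}{2})^n}$, of radius ${\frac{1}{2} = R - |z_1 - z_0|}$:

```python
from math import comb

# Numerical check of Exercise 18 for the geometric series (a_m = 1,
# z0 = 0, R = 1) recentred at z1 = 0.5: the coefficient formula should
# give b_n = 2**(n+1).
z1 = 0.5
def b(n, M=200):
    return sum(comb(m, n) * z1 ** (m - n) for m in range(n, M))

for n in range(6):
    assert abs(b(n) - 2 ** (n + 1)) < 1e-9
```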

Theorem 15 gives us a rich supply of complex differentiable functions, particularly when combined with Exercise 4. For instance, the complex exponential function

$\displaystyle e^z = \sum_{n=0}^\infty \frac{z^n}{n!}$

has an infinite radius of convergence, and so is entire, and is its own derivative:

$\displaystyle \frac{d}{dz} e^z = \sum_{n=1}^\infty n \frac{z^{n-1}}{n!} = \sum_{n=0}^\infty \frac{z^n}{n!} = e^z.$

This makes the complex trigonometric functions

$\displaystyle \sin(z) := \frac{e^{iz} - e^{-iz}}{2i}, \quad \cos(z) := \frac{e^{iz} + e^{-iz}}{2}$

entire as well, and from the chain rule we recover the familiar formulae

$\displaystyle \frac{d}{dz} \sin(z) = \cos(z); \quad \frac{d}{dz} \cos(z) = - \sin(z).$

Of course, one can combine these functions together in many ways to create countless other complex differentiable functions with explicitly computable derivatives, e.g. ${\sin(z^2)}$ is an entire function with derivative ${2z \cos(z^2)}$, the tangent function ${\tan(z) := \sin(z) / \cos(z)}$ is holomorphic outside of the discrete set ${\{ (2k+1) \pi/2: k \in {\bf Z}\}}$ with derivative ${\mathrm{sec}(z)^2 = 1/\cos(z)^2}$, and so forth.
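As with Proposition 3, these derivative formulae can be sanity-checked at a complex sample point with a difference quotient (an illustration of my own, not a proof):

```python
import cmath

# Numerical check (not a proof) of d/dz sin(z) = cos(z) at a complex
# sample point, via the difference quotient.
z0 = 0.7 + 1.1j
h = 1e-6
quot = (cmath.sin(z0 + h) - cmath.sin(z0)) / h
assert abs(quot - cmath.cos(z0)) < 1e-5
```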

Exercise 19 (Multiplication of power series) Let ${F(z) = \sum_{n=0}^\infty a_n(z-z_0)^n}$ and ${G(z) = \sum_{n=0}^\infty b_n (z-z_0)^n}$ be power series that both have radius of convergence at least ${R}$. Show that on the disk ${D(z_0,R)}$, we have

$\displaystyle F(z) G(z) = \sum_{n=0}^\infty c_n (z-z_0)^n$

where the right-hand side is another power series of radius of convergence at least ${R}$, with coefficients ${c_n}$ given as the convolution

$\displaystyle c_n := \sum_{m=0}^n a_m b_{n-m}$

of the sequences ${a_n}$ and ${b_n}$.
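A convenient test case for this convolution formula (my own choice, not part of the exercise) is ${F = G = \exp}$, with ${a_n = b_n = \frac{1}{n!}}$: the product is ${\exp(2z)}$, so the convolution should give ${c_n = \frac{2^n}{n!}}$, which also follows from the binomial theorem:

```python
from math import factorial

# Numerical check of Exercise 19 with F = G = exp (a_n = b_n = 1/n!):
# the convolution c_n = sum_m a_m b_{n-m} should equal 2**n / n!,
# the n-th coefficient of exp(2z).
def c(n):
    return sum(1 / factorial(m) * 1 / factorial(n - m) for m in range(n + 1))

for n in range(10):
    assert abs(c(n) - 2 ** n / factorial(n)) < 1e-12
```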

— 2. The Cauchy-Riemann equations —

Thus far, the theory of complex differentiation closely resembles the analogous theory of real differentiation that one sees in an introductory real analysis class. But now we take advantage of the Argand plane representation of ${{\bf C}}$ to view a function ${z \mapsto f(z)}$ of one complex variable as a function ${(x,y) \mapsto f(x+iy)}$ of two real variables. This gives rise to some further notions of differentiation. Indeed, if ${f: U \rightarrow {\bf C}}$ is a function defined on an open subset of ${{\bf C}}$, and ${z_0 = x_0 + i y_0}$ is a point in ${U}$, then in addition to the complex derivative

$\displaystyle f'(z_0) := \lim_{z \rightarrow z_0: z \in U \backslash \{z_0\}} \frac{f(z)-f(z_0)}{z-z_0} \ \ \ \ \ (6)$

already discussed, we can also define (if they exist) the partial derivatives

$\displaystyle \frac{\partial f}{\partial x}(z_0) := \lim_{h \rightarrow 0: h \in {\bf R} \backslash \{0\}} \frac{f((x_0+h) + i y_0) - f(x_0+iy_0)}{h}$

and

$\displaystyle \frac{\partial f}{\partial y}(z_0) := \lim_{h \rightarrow 0: h \in {\bf R} \backslash \{0\}} \frac{f(x_0 + i (y_0+h)) - f(x_0+iy_0)}{h};$

these will be complex numbers if the limits on the right-hand side exist. There is also (if it exists) the gradient (or Fréchet derivative) ${Df(z_0) \in {\bf C}^2}$, defined as the vector ${(D_1 f(z_0), D_2 f(z_0)) \in {\bf C}^2}$ with the property that

$\displaystyle \lim_{(h_1,h_2) \rightarrow 0: (h_1,h_2) \in {\bf R}^2 \backslash \{0\}} \frac{|f((x_0+h_1)+i(y_0+h_2)) - f(x_0+iy_0) - h_1 D_1f(z_0) - h_2 D_2f(z_0)|}{|(h_1,h_2)|} = 0 \ \ \ \ \ (7)$

where ${|(h_1,h_2)| := \sqrt{h_1^2+h_2^2}}$ is the Euclidean norm of ${(h_1,h_2)}$.

These notions of derivative are of course closely related to each other. If a function ${f: U \rightarrow {\bf C}}$ is Fréchet differentiable at ${z_0}$, in the sense that the gradient ${Df(z_0)}$ exists, then on specialising the limit in (7) to vectors ${(h_1,h_2)}$ of the form ${(h,0)}$ or ${(0,h)}$ we see that

$\displaystyle \frac{\partial f}{\partial x}(z_0) = D_1 f(z_0)$

and

$\displaystyle \frac{\partial f}{\partial y}(z_0) = D_2 f(z_0)$

leading to the familiar formula

$\displaystyle Df(z_0) = ( \frac{\partial f}{\partial x}(z_0), \frac{\partial f}{\partial y}(z_0)) \ \ \ \ \ (8)$

for the gradient ${Df(z_0)}$ of a function ${f}$ that is Fréchet differentiable at ${z_0}$. We caution however that it is possible for the partial derivatives ${ \frac{\partial f}{\partial x}(z_0), \frac{\partial f}{\partial y}(z_0)}$ of a function to exist without the function being Fréchet differentiable, in which case the formula (8) is of course not valid. (A typical example is the function ${f: {\bf C} \rightarrow {\bf C}}$ defined by setting ${f(x+iy) := \frac{xy}{x^2+y^2}}$ for ${x+iy \neq 0}$, with ${f(0) := 0}$; this function has both partial derivatives ${\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}}$ existing (and vanishing) at ${z_0=0}$, but ${f}$ is not Fréchet differentiable there, since it equals ${1/2}$ on the diagonal ${x=y}$ and hence is not even continuous at the origin.) On the other hand, if the partial derivatives ${ \frac{\partial f}{\partial x}(z_0), \frac{\partial f}{\partial y}(z_0)}$ exist everywhere on ${U}$ and are additionally known to be continuous, then the fundamental theorem of calculus gives the identity

$\displaystyle f((x_0+h_1) + i(y_0+h_2)) = f(x_0+iy_0) + \int_0^{h_1} \frac{\partial f}{\partial x}(x_0+t+iy_0)\ dt$

$\displaystyle + \int_0^{h_2} \frac{\partial f}{\partial y}(x_0 + h_1 + i(y_0+t))\ dt$

for ${x_0+iy_0 \in U}$ and ${h_1,h_2}$ sufficiently small (with the convention that ${\int_a^b = -\int_b^a}$ if ${a>b}$), and from this it is not difficult to see that ${f}$ is then Fréchet differentiable everywhere on ${U}$.
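The counterexample mentioned above is easy to probe numerically. The following sketch (with arbitrary step sizes) confirms that both partial derivatives of ${f(x+iy) = xy/(x^2+y^2)}$ vanish at the origin, while the function itself is stuck at ${1/2}$ along the diagonal ${x=y}$, so it is not even continuous there, let alone Fréchet differentiable:

```python
# The standard counterexample: both partials exist at 0, but f is not
# continuous there, hence not Frechet differentiable.
def f(x, y):
    return 0.0 if x == y == 0 else x * y / (x**2 + y**2)

h = 1e-8
dfdx = (f(h, 0) - f(0, 0)) / h   # f vanishes on the real axis, so this is 0
dfdy = (f(0, h) - f(0, 0)) / h   # f vanishes on the imaginary axis too
print(dfdx, dfdy)                # 0.0 0.0

# Along the diagonal x = y the function is identically 1/2:
print(f(1e-12, 1e-12))           # 0.5 even arbitrarily close to the origin
```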

Similarly, if ${f}$ is complex differentiable at ${z_0}$, then by specialising the limit (6) to variables ${z}$ of the form ${z = z_0 + h}$ or ${z = z_0 + ih}$ for some non-zero real ${h}$ near zero, we see that

$\displaystyle \frac{\partial f}{\partial x}(z_0) = f'(z_0)$

and

$\displaystyle \frac{\partial f}{\partial y}(z_0) = i f'(z_0)$

leading in particular to the Cauchy-Riemann equations

$\displaystyle \frac{\partial f}{\partial x}(z_0) = \frac{1}{i} \frac{\partial f}{\partial y}(z_0) \ \ \ \ \ (9)$

that must be satisfied in order for ${f}$ to be complex differentiable. More generally, from (6) we see that if ${f}$ is complex differentiable at ${z_0}$, then

$\displaystyle \lim_{h \rightarrow 0: h \neq 0} \frac{|f(z_0+h) - f(z_0) - f'(z_0) h|}{|h|} = 0,$

which on comparison with (7) shows that ${f}$ is also Fréchet differentiable with

$\displaystyle Df(z_0) = (f'(z_0), i f'(z_0)).$

Finally, if ${f}$ is Fréchet differentiable at ${z_0}$ and one has the Cauchy-Riemann equations (9), then from (7) we have

$\displaystyle \lim_{(h_1,h_2) \rightarrow 0: (h_1,h_2) \in {\bf R}^2 \backslash \{0\}} \frac{|f((x_0+h_1)+i(y_0+h_2)) - f(x_0+iy_0) - (h_1+ih_2) \frac{\partial f}{\partial x}(z_0)|}{|(h_1,h_2)|} = 0$

which after making the substitution ${z := (x_0+h_1)+i(y_0+h_2)}$ gives

$\displaystyle \lim_{z \rightarrow z_0: z \in U \backslash \{z_0\}} \frac{|f(z) - f(z_0) - (z-z_0) \frac{\partial f}{\partial x}(z_0)|}{|z-z_0|} = 0$

which on comparison with (6) shows that ${f}$ is complex differentiable with

$\displaystyle f'(z_0) =\frac{\partial f}{\partial x}(z_0).$

We summarise the above discussion as follows:

Proposition 20 (Differentiability and the Cauchy-Riemann equations) Let ${U}$ be an open subset of ${{\bf C}}$, let ${f: U \rightarrow {\bf C}}$ be a function, and let ${z_0}$ be an element of ${U}$.

• (i) If ${f}$ is complex differentiable at ${z_0}$, then it is also Fréchet differentiable at ${z_0}$, with

$\displaystyle f'(z_0) = \frac{\partial f}{\partial x}(z_0) = \frac{1}{i} \frac{\partial f}{\partial y}(z_0).$

In particular, the Cauchy-Riemann equations (9) hold at ${z_0}$.

• (ii) Conversely, if ${f}$ is Fréchet differentiable at ${z_0}$ and obeys the Cauchy-Riemann equations at ${z_0}$, then ${f}$ is complex differentiable at ${z_0}$.
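Proposition 20's criterion lends itself to a quick numerical experiment (an illustration only; the test point and step size are arbitrary choices). Below, the entire function ${\exp}$ satisfies the Cauchy-Riemann equation ${\frac{\partial f}{\partial x} = \frac{1}{i} \frac{\partial f}{\partial y}}$ up to discretisation error, while the conjugation map misses it by exactly ${2}$:

```python
import cmath

def partials(f, z0, h=1e-6):
    # Finite-difference partial derivatives in the x and y directions.
    fx = (f(z0 + h) - f(z0 - h)) / (2 * h)
    fy = (f(z0 + 1j * h) - f(z0 - 1j * h)) / (2 * h)
    return fx, fy

z0 = 0.4 - 1.1j
fx, fy = partials(cmath.exp, z0)
print(abs(fx - fy / 1j))   # essentially zero: Cauchy-Riemann holds for exp

gx, gy = partials(lambda z: z.conjugate(), z0)
print(abs(gx - gy / 1j))   # equals 2: Cauchy-Riemann fails for conjugation
```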

Remark 21 From part (ii) of the above proposition we see that if ${f}$ is Fréchet differentiable on ${U}$ and obeys the Cauchy-Riemann equations on ${U}$, then it is holomorphic on ${U}$. One can ask whether the requirement of Fréchet differentiability can be weakened. It cannot be omitted entirely; one can show, for instance, that the function ${f: {\bf C} \rightarrow {\bf C}}$ defined by ${f(z) := e^{-1/z^4}}$ for non-zero ${z}$ and ${f(0) := 0}$ obeys the Cauchy-Riemann equations at every point ${z_0 \in {\bf C}}$, but is not complex differentiable (or even continuous) at the origin. But there is a somewhat difficult theorem of Looman and Menchoff that asserts that if ${f}$ is continuous on ${U}$ and obeys the Cauchy-Riemann equations on ${U}$, then it is holomorphic. We will not prove or use this theorem in this course; generally in modern applications, when one wants to weaken the regularity hypotheses of a theorem involving classical differentiation, the best way to do so is to replace the notion of a classical derivative with that of a weak derivative, rather than insist on computing derivatives in the classical pointwise sense. See this blog post for more discussion.

Combining part (i) of the above proposition with Theorem 15, we also conclude as a corollary that any power series is smooth inside its disk of convergence, in the sense that all partial derivatives of all orders of this power series exist.

Remark 22 From the geometric perspective, one can interpret complex differentiability at a point ${z_0}$ as a requirement that a map ${f}$ is conformal and orientation-preserving at ${z_0}$, at least in the non-degenerate case when ${f'(z_0)}$ is non-zero. In more detail: suppose that ${f: U \rightarrow {\bf C}}$ is a map that is complex differentiable at some point ${z_0 \in U}$ with ${f'(z_0) \neq 0}$. Let ${\gamma: (-\varepsilon,\varepsilon) \rightarrow U}$ be a differentiable curve with ${\gamma(0)=z_0}$; we view this as the trajectory of some particle which passes through ${z_0}$ at time ${t=0}$. The derivative ${\gamma'(0)}$ (defined in the usual manner by limits of Newton quotients) can then be viewed as the velocity of the particle as it passes through ${z_0}$. The map ${f}$ takes this particle to a new particle parameterised by the curve ${f \circ \gamma: (-\varepsilon,\varepsilon) \rightarrow {\bf C}}$; at time ${t=0}$, this new particle passes through ${f(z_0)}$, and by the chain rule we see that the velocity of the new particle at this time is given by

$\displaystyle (f \circ \gamma)'(0) = f'(z_0) \gamma'(0).$

Thus, if we write ${f'(z_0)}$ in polar coordinates as ${f'(z_0) = r e^{i\theta}}$, the map ${f}$ transforms the velocity of the particle by multiplying the speed by a factor of ${r}$ and rotating the direction of travel counter-clockwise by ${\theta}$. In particular, if we consider two differentiable trajectories ${\gamma_1, \gamma_2}$ both passing through ${z_0}$ at time ${t=0}$ (with non-zero speeds), then the map ${f}$ preserves the angle between the two velocity vectors ${\gamma'_1(0), \gamma'_2(0)}$, as well as their orientation (e.g. if ${\gamma'_2(0)}$ is counterclockwise to ${\gamma'_1(0)}$, then ${(f \circ \gamma_2)'(0)}$ is counterclockwise to ${(f \circ \gamma_1)'(0)}$). This is in contrast to, for instance, shear transformations such as ${f(x+iy) := x + i(x+y)}$, which preserve orientation but not angle, or the complex conjugation map ${f(x+iy) := x-iy}$, which preserves angle but not orientation. The same preservation of angle is present for real differentiable functions ${f: I \rightarrow {\bf R}}$ on an interval ${I}$, but is much less impressive in that case since the only angles possible between two vectors on the real line are ${0}$ and ${\pi}$; it is the geometric two-dimensionality of the complex plane that makes conformality a much stronger and more “rigid” property for complex differentiable functions.
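This angle preservation can be observed numerically (a sketch with arbitrary test data): for ${f(z) = z^2}$ and two curve velocities through a point ${z_0}$ with ${f'(z_0) \neq 0}$, the angle between the image velocities, computed by difference quotients, agrees with the original angle:

```python
import cmath

def image_velocity(f, z0, v, h=1e-6):
    # Velocity at t = 0 of the image curve t -> f(z0 + t*v), via a
    # symmetric difference quotient.
    return (f(z0 + h * v) - f(z0 - h * v)) / (2 * h)

f = lambda z: z * z          # f'(z0) = 2*z0, non-zero at our test point
z0 = 1.0 + 2.0j
v1 = cmath.exp(0.3j)         # unit-speed velocity of a first curve through z0
v2 = cmath.exp(1.1j)         # unit-speed velocity of a second curve

w1 = image_velocity(f, z0, v1)
w2 = image_velocity(f, z0, v2)

angle_before = cmath.phase(v2 / v1)  # angle from v1 to v2 (0.8 radians)
angle_after = cmath.phase(w2 / w1)   # same angle after applying f
print(angle_before, angle_after)
```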

One consequence of the first component ${f' = \frac{\partial f}{\partial x}}$ of Proposition 20(i) is that the complex notion of differentiation and the real notion of differentiation are compatible with each other. More precisely, if ${f: U \rightarrow {\bf C}}$ is holomorphic and ${g: U \cap {\bf R} \rightarrow {\bf C}}$ is the restriction of ${f}$ to the real line, then the real derivative ${\frac{dg}{dx}: U \cap {\bf R} \rightarrow {\bf C}}$ of ${g}$ exists and is equal to the restriction of the complex derivative ${\frac{df}{dz}: U \rightarrow {\bf C}}$ to the real line. For instance, since ${\frac{d}{dz} e^{iz} = i e^{iz}}$ for a complex variable ${z}$, we also have ${\frac{d}{dt} e^{it} = i e^{it}}$ for a real variable ${t}$. For similar reasons, it will be a safe “abuse of notation” to use the notation ${f'}$ both to refer to the complex derivative of a function ${f}$, and also the real derivative of the restriction of that function to the real line.

If one breaks up a complex function ${f: U \rightarrow {\bf C}}$ into real and imaginary parts ${f = u+iv}$ for some ${u,v: U \rightarrow {\bf R}}$, then on taking real and imaginary parts one can express the Cauchy-Riemann equations as a system

$\displaystyle \frac{\partial u}{\partial x}(z_0) = \frac{\partial v}{\partial y}(z_0) \ \ \ \ \ (10)$

$\displaystyle \frac{\partial v}{\partial x}(z_0) = -\frac{\partial u}{\partial y}(z_0) \ \ \ \ \ (11)$

of two partial differential equations for two functions ${u,v}$. This gives a quick way to test if various functions are differentiable. Consider for instance the conjugation function ${f: z \mapsto \overline{z}}$. In this case, ${u(x+iy) = x}$ and ${v(x+iy) = -y}$. These functions, being polynomial in ${x,y}$, are certainly Fréchet differentiable everywhere; the equation (11) is always satisfied, but the equation (10) is never satisfied. As such, the conjugation function is nowhere complex differentiable. Similarly for the real part function ${z \mapsto \mathrm{Re}(z)}$, the imaginary part function ${z \mapsto \mathrm{Im}(z)}$, and the absolute value function ${z \mapsto |z|}$. The function ${z \mapsto |z|^2}$ has real part ${u(x+iy) = x^2+y^2}$ and imaginary part ${v(x+iy)=0}$; one easily checks that the system (10), (11) is only satisfied when ${x=y=0}$, so this function is only complex differentiable at the origin. In particular, it is not holomorphic on any non-empty open set.

The general rule of thumb that one should take away from these examples is that complex functions that are constructed purely out of “good” functions such as polynomials, the complex exponential, complex trigonometric functions, or other convergent power series are likely to be holomorphic, whereas functions that involve “bad” functions such as complex conjugation, the real and imaginary part, or the absolute value, are unlikely to be holomorphic.

Exercise 23 (Wirtinger derivatives) Let ${U}$ be an open subset of ${{\bf C}}$, and let ${f: U \rightarrow {\bf C}}$ be a Fréchet differentiable function. Define the Wirtinger derivatives ${\frac{\partial f}{\partial z}: U \rightarrow {\bf C}}$, ${\frac{\partial f}{\partial \overline{z}}: U \rightarrow {\bf C}}$ by the formulae

$\displaystyle \frac{\partial f}{\partial z} := \frac{1}{2}( \frac{\partial f}{\partial x} + \frac{1}{i} \frac{\partial f}{\partial y} )$

$\displaystyle \frac{\partial f}{\partial \overline{z}} := \frac{1}{2}( \frac{\partial f}{\partial x} - \frac{1}{i} \frac{\partial f}{\partial y} ).$

• (i) Show that ${f}$ is holomorphic on ${U}$ if and only if the Wirtinger derivative ${\frac{\partial f}{\partial \overline{z}}}$ vanishes identically on ${U}$.
• (ii) If ${f}$ is given by a polynomial

$\displaystyle f(z) = \sum_{n,m \geq 0: n+m \leq d} c_{n,m} z^n \overline{z}^m \ \ \ \ \ (12)$

in both ${z}$ and ${\overline{z}}$ for some complex coefficients ${c_{n,m}}$ and some natural number ${d}$, show that

$\displaystyle \frac{\partial f}{\partial z}(z) = \sum_{n,m \geq 0: n+m \leq d} c_{n,m} (n z^{n-1}) \overline{z}^m$

and

$\displaystyle \frac{\partial f}{\partial \overline{z}}(z) = \sum_{n,m \geq 0: n+m \leq d} c_{n,m} z^{n} (m \overline{z}^{m-1}).$

(Hint: first establish a Leibniz rule for Wirtinger derivatives.) Conclude in particular that ${f}$ is holomorphic if and only if ${c_{n,m}}$ vanishes whenever ${m \geq 1}$ (i.e. ${f}$ does not contain any terms that involve ${\overline{z}}$).

• (iii) If ${z_0}$ is a point in ${U}$, show that one has the Taylor expansion

$\displaystyle f(z) = f(z_0) + \frac{\partial f}{\partial z}(z_0) (z-z_0) + \frac{\partial f}{\partial \overline{z}}(z_0) \overline{(z-z_0)} + o(|z-z_0|)$

as ${z \rightarrow z_0}$, where ${o(|z-z_0|)}$ denotes a quantity of the form ${|z-z_0| c(z)}$, where ${c(z)}$ goes to zero as ${z}$ goes to ${z_0}$ (compare with (1)). Conversely, show that this property determines the numbers ${\frac{\partial f}{\partial z}(z_0)}$ and ${\frac{\partial f}{\partial \overline{z}}(z_0)}$ uniquely (and thus can be used as an alternate definition of the Wirtinger derivatives).
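For readers who like to experiment, the Wirtinger derivatives can be approximated by finite differences (the step size and test point below are arbitrary choices). For ${f(z) = |z|^2 = z \overline{z}}$, part (ii) predicts ${\frac{\partial f}{\partial z} = \overline{z}}$ and ${\frac{\partial f}{\partial \overline{z}} = z}$:

```python
def wirtinger(f, z0, h=1e-6):
    # Approximate the Wirtinger derivatives via finite-difference partials.
    fx = (f(z0 + h) - f(z0 - h)) / (2 * h)
    fy = (f(z0 + 1j * h) - f(z0 - 1j * h)) / (2 * h)
    d_z = 0.5 * (fx + fy / 1j)
    d_zbar = 0.5 * (fx - fy / 1j)
    return d_z, d_zbar

f = lambda z: abs(z) ** 2    # f(z) = z * conj(z)
z0 = 1.5 - 0.5j
d_z, d_zbar = wirtinger(f, z0)
print(d_z, d_zbar)           # approximately conj(z0) and z0 respectively
```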

Remark 24 Any polynomial

$\displaystyle f(x+iy) = \sum_{n,m \geq 0: n+m \leq d} a_{n,m} x^n y^m$

in the real and imaginary parts ${x,y}$ of ${x+iy}$ can be rewritten as a polynomial in ${z}$ and ${\overline{z}}$ as per (12), using the usual identities

$\displaystyle x = \frac{z+\overline{z}}{2}, y = \frac{z - \overline{z}}{2i}$

for ${z = x+iy}$. Thus such a non-holomorphic polynomial of one complex variable ${z=x+iy}$ can be viewed as the restriction of a holomorphic polynomial

$\displaystyle P(z_1,z_2) := \sum_{n,m \geq 0: n+m \leq d} c_{n,m} z_1^n z_2^m$

of two complex variables ${z_1,z_2 \in {\bf C}}$ to the anti-diagonal ${\{ (z_1,z_2) \in {\bf C}^2: z_2 = \overline{z_1}\}}$, and the Wirtinger derivatives can then be interpreted as genuine (complex) partial derivatives in these two complex variables. More generally, Wirtinger derivatives are convenient tools in the subject of several complex variables, which we will not cover in this course.

The Cauchy-Riemann equations couple the real and imaginary parts ${u,v: U \rightarrow {\bf R}}$ of a holomorphic function to each other. But it is also possible to eliminate one of these components from the equations and obtain a constraint on just the real part ${u}$, or just the imaginary part ${v}$. Suppose for the moment that ${f: U \rightarrow {\bf C}}$ is a holomorphic function which is twice continuously differentiable (thus the second partial derivatives ${\frac{\partial^2 f}{\partial x^2}}$, ${\frac{\partial^2 f}{\partial y\partial x}}$, ${\frac{\partial^2 f}{\partial x \partial y}}$, ${\frac{\partial^2 f}{\partial y^2}}$ all exist and are continuous on ${U}$); we will show in the next set of notes that this extra hypothesis is in fact redundant. Assuming continuous second differentiability for now, we have Clairaut’s theorem

$\displaystyle \frac{\partial^2 f}{\partial x \partial y} = \frac{\partial^2 f}{\partial y \partial x}$

everywhere on ${U}$. Similarly for the real and imaginary parts ${u,v}$. If we then differentiate (10) in the ${x}$ direction, (11) in the ${y}$ direction, and then sum, the derivatives of ${v}$ cancel thanks to Clairaut’s theorem, and we obtain Laplace’s equation

$\displaystyle \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \ \ \ \ \ (13)$

which is often written more compactly as

$\displaystyle \Delta u = 0$

where ${\Delta}$ is the Laplacian operator

$\displaystyle \Delta := \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}.$

A similar argument gives ${\Delta v = 0}$; by linearity we then also have ${\Delta f=0}$.

Functions ${u}$ that are continuously twice differentiable and obey ${\Delta u = 0}$ are known as harmonic functions: thus we have shown that (continuously twice differentiable) holomorphic functions are automatically harmonic, as are the real and imaginary parts of such functions. The converse is not true: not every harmonic function ${f: U \rightarrow {\bf C}}$ is holomorphic. For instance, the conjugation function ${z \mapsto \overline{z}}$ is clearly harmonic on ${{\bf C}}$, but not holomorphic. We will return to the precise relationship between harmonic and holomorphic functions shortly.

Harmonic functions have many remarkable properties. Since the second derivative in a given direction is a local measure of “convexity” of a function, we see from (13) that any convex behaviour of a harmonic function in one direction has to be balanced by an equal and opposite amount of concave behaviour in the orthogonal direction. A good example of a harmonic function to keep in mind is the function

$\displaystyle u(x+iy) := x^2 - y^2$

which exhibits convex behavior in ${x}$ and concave behavior in ${y}$ in exactly opposite amounts. This function is the real part of the holomorphic function ${f(z) :=z^2}$, which is of course consistent with the previous observation that the real parts of holomorphic functions are harmonic.
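One can verify the harmonicity of this example with a five-point finite-difference Laplacian (a numerical illustration; the grid step and sample point are arbitrary choices). For comparison, the non-harmonic function ${x^3}$ is included, whose Laplacian is ${6x}$:

```python
def laplacian(u, x, y, h=1e-4):
    # Standard five-point finite-difference approximation to u_xx + u_yy.
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h**2

u = lambda x, y: x**2 - y**2   # harmonic: the real part of z^2
g = lambda x, y: x**3          # not harmonic: Laplacian is 6x

print(laplacian(u, 0.7, -0.4))  # essentially 0
print(laplacian(g, 0.7, -0.4))  # approximately 6 * 0.7 = 4.2
```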

We will discuss harmonic functions more in later notes. For now, we record just one important property of these functions, namely the maximum principle:

Theorem 25 (Maximum principle) Let ${U}$ be an open subset of ${{\bf C}}$, and let ${u: U \rightarrow {\bf R}}$ be a harmonic function. Let ${K}$ be a compact subset of ${U}$, and let ${\partial K}$ be the boundary of ${K}$. Then

$\displaystyle \sup_{z \in K} u(z) = \sup_{z \in \partial K} u(z) \ \ \ \ \ (14)$

and similarly

$\displaystyle \inf_{z \in K} u(z) = \inf_{z \in \partial K} u(z) \ \ \ \ \ (15)$

Informally, the maximum principle asserts that the maximum of a real-valued harmonic function on a compact set is always attained on the boundary, and similarly for the minimum. In particular, any bound on the harmonic function that one can obtain on the boundary is automatically inherited by the interior. Compare this with a non-harmonic function such as ${u(x+iy) := 1 - x^2 - y^2}$, which is bounded by ${0}$ on the boundary of the compact unit disk ${\overline{D(0,1)} := \{ z \in {\bf C}: |z| \leq 1 \}}$, but is not bounded by ${0}$ on the interior of this disk.
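Here is a small numerical illustration of this contrast (the grid and boundary resolutions are arbitrary choices, and sampling on a grid is of course no substitute for a proof): for the harmonic ${u(x+iy) = x^2-y^2}$, the supremum over the closed unit disk matches the supremum over the boundary circle, while the non-harmonic ${1-x^2-y^2}$ attains the interior value ${1}$ despite vanishing on the boundary:

```python
import math

def grid_points(n=200):
    # Sample points of the closed unit disk on a regular grid.
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = -1 + 2 * i / n, -1 + 2 * j / n
            if x * x + y * y <= 1:
                yield x, y

u = lambda x, y: x * x - y * y      # harmonic
w = lambda x, y: 1 - x * x - y * y  # not harmonic

boundary = [(math.cos(t), math.sin(t))
            for t in (2 * math.pi * k / 1000 for k in range(1000))]

sup_u_disk = max(u(x, y) for x, y in grid_points())
sup_u_bdry = max(u(x, y) for x, y in boundary)
print(sup_u_disk, sup_u_bdry)  # both 1: the maximum sits on the boundary

sup_w_bdry = max(w(x, y) for x, y in boundary)
print(w(0, 0), sup_w_bdry)     # 1 versus 0: no maximum principle for w
```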

Proof: We begin with an “almost proof” of this principle, and then repair this attempted proof so that it is an actual proof.

We will just prove (14), as (15) is proven similarly (or one can just observe that if ${u}$ is harmonic then so is ${-u}$). Clearly we have

$\displaystyle \sup_{z \in K} u(z) \geq \sup_{z \in \partial K} u(z)$

so the only scenario that needs to be excluded is when

$\displaystyle \sup_{z \in K} u(z) > \sup_{z \in \partial K} u(z). \ \ \ \ \ (16)$

Suppose that this is the case. As ${u}$ is continuous and ${K}$ is compact, ${u}$ must attain its maximum at some point ${z_0}$ in ${K}$; from (16) we see that ${z_0}$ must be an interior point. Since ${z_0}$ is a local maximum of ${u}$, and ${u}$ is twice differentiable, we must have

$\displaystyle \frac{\partial^2 u}{\partial x^2}(z_0) \leq 0$

and similarly

$\displaystyle \frac{\partial^2 u}{\partial y^2}(z_0) \leq 0.$

This almost, but does not quite, contradict the harmonicity of ${u}$, since it is still possible that both of these partial derivatives vanish. To get around this problem we use the trick of creating an epsilon of room, adding a tiny bit of convexity to ${u}$. Let ${\varepsilon>0}$ be a small number to be chosen later, and let ${u_\varepsilon: U \rightarrow {\bf R}}$ be the modified function

$\displaystyle u_\varepsilon(x+iy) := u(x+iy) + \varepsilon(x^2+y^2).$

Since ${K}$ is compact, the function ${x^2+y^2}$ is bounded on ${K}$. Thus, from (16), we see that if ${\varepsilon>0}$ is small enough we have

$\displaystyle \sup_{z \in K} u_\varepsilon(z) > \sup_{z \in \partial K} u_\varepsilon(z).$

Arguing as before, ${u_\varepsilon}$ must attain its maximum at some interior point ${z_\varepsilon}$ of ${K}$, and so we again have

$\displaystyle \frac{\partial^2 u_\varepsilon}{\partial x^2}(z_\varepsilon) \leq 0$

and similarly

$\displaystyle \frac{\partial^2 u_\varepsilon}{\partial y^2}(z_\varepsilon) \leq 0.$

On the other hand, since ${u}$ is harmonic, we have

$\displaystyle \frac{\partial^2 u_\varepsilon}{\partial x^2} + \frac{\partial^2 u_\varepsilon}{\partial y^2} = \frac{\partial^2 u}{\partial x^2} + 2 \varepsilon + \frac{\partial^2 u}{\partial y^2} + 2\varepsilon = 4 \varepsilon > 0$

on ${U}$. These facts contradict each other, and we are done. $\Box$

Exercise 26 (Maximum principle for holomorphic functions) If ${f: U \rightarrow {\bf C}}$ is a continuously twice differentiable holomorphic function on an open set ${U}$, and ${K}$ is a compact subset of ${U}$, show that

$\displaystyle \sup_{z \in K} |f(z)| = \sup_{z \in \partial K} |f(z)|.$

(Hint: use Theorem 25 and the fact that ${|w| = \sup_{\theta \in {\bf R}} \mathrm{Re} w e^{i\theta}}$ for any complex number ${w}$.) What happens if we replace the suprema on both sides by infima?

Exercise 27 Recall the Wirtinger derivatives defined in Exercise 23(i).

• (i) If ${f: U \rightarrow {\bf C}}$ is twice continuously differentiable on an open subset ${U}$ of ${{\bf C}}$, show that

$\displaystyle \Delta f = 4 \frac{\partial}{\partial z} \frac{\partial f}{\partial \overline{z}} = 4 \frac{\partial}{\partial \overline{z}} \frac{\partial f}{\partial z}.$

Use this to give an alternate proof that holomorphic functions are harmonic.

• (ii) If ${f}$ is given by a polynomial

$\displaystyle f(z) = \sum_{n,m \geq 0: n+m \leq d} c_{n,m} z^n \overline{z}^m \ \ \ \ \ (17)$

in both ${z}$ and ${\overline{z}}$ for some complex coefficients ${c_{n,m}}$ and some natural number ${d}$, show that ${f}$ is harmonic on ${{\bf C}}$ if and only if ${c_{n,m}}$ vanishes whenever ${n}$ and ${m}$ are both positive (i.e. ${f}$ only contains terms ${c_{n,0} z^n}$ or ${c_{0,m} \overline{z}^m}$ that only involve one of ${z}$ or ${\overline{z}}$).

• (iii) If ${u: U \rightarrow {\bf R}}$ is a real polynomial

$\displaystyle u(x+iy) = \sum_{n,m \geq 0: n+m \leq d} a_{n,m} x^n y^m$

in ${x}$ and ${y}$ for some real coefficients ${a_{n,m}}$ and some natural number ${d}$, show that ${u}$ is harmonic if and only if it is the real part of a polynomial ${f(z)}$ of one complex variable ${z}$.

We have seen that the real and imaginary parts ${u,v: U \rightarrow {\bf R}}$ of any holomorphic function ${f: U \rightarrow {\bf C}}$ are harmonic functions. Conversely, let us call a harmonic function ${v:U \rightarrow {\bf R}}$ a harmonic conjugate of another harmonic function ${u: U \rightarrow {\bf R}}$ if ${u+iv}$ is holomorphic on ${U}$; this is equivalent by Proposition 20 to ${u,v}$ satisfying the Cauchy-Riemann equations (10), (11). Here is a short table giving some examples of harmonic conjugates:

$\displaystyle \begin{array}{ccc} u & v & u+iv \\ \hline x & y & z \\ x & y+1 & z+i \\ y & -x & -iz \\ x^2-y^2 & 2xy & z^2 \\ e^x \cos y & e^x \sin y & e^z \\ \frac{x}{x^2+y^2} & \frac{-y}{x^2+y^2} & \frac{1}{z} \end{array}$

(for the last example one of course has to exclude the origin from the domain ${U}$).
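Each row of this table can be spot-checked numerically at a test point (the point below is an arbitrary choice); in every case ${u+iv}$ should agree with the stated holomorphic function:

```python
import cmath
import math

z = 0.6 - 0.9j               # arbitrary non-zero test point
x, y = z.real, z.imag

# Each triple is (u, v, expected u + i*v), one per row of the table.
pairs = [
    (x, y, z),
    (x, y + 1, z + 1j),
    (y, -x, -1j * z),
    (x**2 - y**2, 2 * x * y, z**2),
    (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y), cmath.exp(z)),
    (x / (x**2 + y**2), -y / (x**2 + y**2), 1 / z),
]
for u, v, f in pairs:
    print(abs(complex(u, v) - f))  # all essentially zero
```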

From Exercise 27(iii) we know that every harmonic polynomial has at least one harmonic conjugate; it is natural to ask whether the same fact is true for more general harmonic functions than polynomials. In the case that the domain ${U}$ is the entire complex plane, the answer is affirmative:

Proposition 28 Let ${u: {\bf C} \rightarrow {\bf R}}$ be a harmonic function. Then there exists a harmonic conjugate ${v: {\bf C} \rightarrow {\bf R}}$ of ${u}$. Furthermore, this harmonic conjugate is unique up to constants: if ${v,v'}$ are two harmonic conjugates of ${u}$, then ${v'-v}$ is a constant function.

Proof: We first prove uniqueness. If ${v}$ is a harmonic conjugate of ${u}$, then from the fundamental theorem of calculus, we have

$\displaystyle v(x+iy) = v(0) + \int_0^x \frac{\partial v}{\partial x}(t)\ dt + \int_0^y \frac{\partial v}{\partial y}(x+it)\ dt$

and hence by the Cauchy-Riemann equations (10), (11) we have

$\displaystyle v(x+iy) = v(0) - \int_0^x \frac{\partial u}{\partial y}(t)\ dt + \int_0^y \frac{\partial u}{\partial x}(x+it)\ dt.$

Similarly for any other harmonic conjugate ${v'}$ of ${u}$. It is now clear that ${v}$ and ${v'}$ differ by a constant.

Now we prove existence. Inspired by the above calculations, we define ${v: {\bf C} \rightarrow {\bf R}}$ explicitly by the formula

$\displaystyle v(x+iy) := -\int_0^x \frac{\partial u}{\partial y}(t)\ dt + \int_0^y \frac{\partial u}{\partial x}(x+it)\ dt. \ \ \ \ \ (18)$

From the fundamental theorem of calculus, we see that ${v}$ is differentiable in the ${y}$ direction with

$\displaystyle \frac{\partial v}{\partial y}(x+iy) = \frac{\partial u}{\partial x}(x+iy).$

This is one of the two Cauchy-Riemann equations needed. To obtain the other one, we differentiate (18) in the ${x}$ variable. The fact that ${u}$ is continuously twice differentiable allows one to differentiate under the integral sign (exercise!) and conclude that

$\displaystyle \frac{\partial v}{\partial x}(x+iy) = - \frac{\partial u}{\partial y}(x) + \int_0^y \frac{\partial^2 u}{\partial x^2}(x+it)\ dt.$

As ${u}$ is harmonic, we have ${\frac{\partial^2 u}{\partial x^2} = - \frac{\partial^2 u}{\partial y^2}}$, so by the fundamental theorem of calculus we conclude that

$\displaystyle \frac{\partial v}{\partial x}(x+iy) = -\frac{\partial u}{\partial y}(x+iy).$

Thus we now have both of the Cauchy-Riemann equations (10), (11) in ${{\bf C}}$. Differentiating these equations again, we conclude that ${v}$ is twice continuously differentiable, and hence by Proposition 20 we have ${u+iv}$ holomorphic on ${{\bf C}}$, giving the claim. $\Box$
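The construction in this proof is concrete enough to run numerically. The sketch below (using a simple midpoint rule and finite-difference partial derivatives, both arbitrary implementation choices) applies formula (18) to ${u(x+iy) = x^2-y^2}$ and recovers the harmonic conjugate ${2xy}$ up to small numerical error:

```python
# Finite-difference partials of the harmonic u(x, y) = x^2 - y^2.
def u_x(x, y, h=1e-6):
    return ((x + h)**2 - y**2 - ((x - h)**2 - y**2)) / (2 * h)

def u_y(x, y, h=1e-6):
    return (x**2 - (y + h)**2 - (x**2 - (y - h)**2)) / (2 * h)

def integrate(g, a, b, n=2000):
    # Simple midpoint rule on [a, b]; handles b < a with the usual sign
    # convention int_a^b = -int_b^a.
    dt = (b - a) / n
    return sum(g(a + (k + 0.5) * dt) for k in range(n)) * dt

def conjugate_of_u(x, y):
    # v(x+iy) := -int_0^x u_y(t) dt + int_0^y u_x(x+it) dt, as in (18).
    return (-integrate(lambda t: u_y(t, 0.0), 0.0, x)
            + integrate(lambda t: u_x(x, t), 0.0, y))

x, y = 1.3, -0.8
print(conjugate_of_u(x, y), 2 * x * y)  # both approximately -2.08
```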

The same argument would also work for some other domains than ${{\bf C}}$, such as rectangles ${\{ z: a < \mathrm{Re}(z) < b; c < \mathrm{Im}(z) < d \}}$. To handle the general case, though, it becomes convenient to introduce the notion of contour integration, which we will do in the next set of notes. In some cases (specifically, when the underlying domain ${U}$ fails to be simply connected), it will turn out that some harmonic functions do not have conjugates!

Exercise 29 Show that an entire function ${f: {\bf C} \rightarrow {\bf C}}$ can be real-valued on ${{\bf C}}$ only if it is constant.

Exercise 30 Let ${c}$ be a complex number. Show that if ${f: {\bf C} \rightarrow {\bf C}}$ is an entire function such that ${\frac{df}{dz}(z) = cf(z)}$ for all ${z \in {\bf C}}$, then ${f(z) = f(0) \exp( cz)}$ for all ${z \in {\bf C}}$.