\(
\newcommand{\BE}{\begin{equation}}
\newcommand{\EE}{\end{equation}}
\newcommand{\BA}{\begin{eqnarray}}
\newcommand{\EA}{\end{eqnarray}}
\newcommand\CC{\mathbb{C}}
\newcommand\FF{\mathbb{F}}
\newcommand\NN{\mathbb{N}}
\newcommand\QQ{\mathbb{Q}}
\newcommand\RR{\mathbb{R}}
\newcommand\ZZ{\mathbb{Z}}
\newcommand{\va}{\hat{\mathbf{a}}}
\newcommand{\vb}{\hat{\mathbf{b}}}
\newcommand{\vn}{\hat{\mathbf{n}}}
\newcommand{\vt}{\hat{\mathbf{t}}}
\newcommand{\bx}{\mathbf{x}}
\newcommand{\bv}{\mathbf{v}}
\newcommand{\bg}{\mathbf{g}}
\newcommand{\bn}{\mathbf{n}}
\newcommand{\by}{\mathbf{y}}
\)
Complex Analysis Primer
Introduction
Complex analysis is the theory of functions of a complex variable. The emphasis of this brief review is on motivating the existence of complex analysis from a mathematical perspective and on its usefulness for physics applications. We briefly review complex numbers and use insights from linear algebra and multivariable calculus to build intuition for why the requirement of a complex derivative develops such a rich structure of consequences. We summarize selected key theorems of complex analysis which we will need in some of our courses, but will not necessarily derive them in detail. In fact, we do not want to clutter the line of thought with proofs, though we regret that this precludes this article from being a stand-alone resource for deep understanding. The intent of this article is to remind readers already familiar with complex analysis and to give an overview to those seeing it for the first time. For a proper understanding of this branch of mathematics, we encourage the reader to consult a detailed mathematical textbook on complex analysis with all the derivations and proofs.
We will use complex analysis in some of our more advanced courses when discussing 2D aerodynamics, because it lends itself excellently to the study of incompressible, irrotational, inviscid flows, due to the fact that the velocity vector field of such a flow satisfies the Cauchy-Riemann differential equations. This will allow us to define a complex velocity as a function of one complex variable, which turns out to be a complex differentiable (holomorphic) function to which many of the theorems of complex analysis can be applied.
Notation
- Numbers (often referred to as scalars) will be denoted by lowercase letters in normal font, either different ones like \(a, b, c, d,\) etc., or by lowercase letters with indices, e.g.: \(a_{11}, a_{12}, a_{21}, a_{22},\) etc. For complex numbers the usage of letter \(z\) is popular, but other letters can be used as well as needed (in some cases we will use \(a\), not to be confused with the \(a\) we occasionally use for the real part of a complex number).
- Matrices will be denoted by uppercase letters, e.g.: \(A, B, C, D,\) etc., and/or will be sometimes enclosed in brackets […], for easier recognition.
The elements (entries) of matrices are numbers and are denoted with lowercase letters like any other number discussed above. If indices are used to distinguish the matrix elements, the first index counts through the (horizontal) rows, while the second index counts through the vertical columns. - Vectors will be denoted with lowercase boldface letters, oftentimes \(\mathbf{v}\) and \(\mathbf{w}\), but sometimes also \(\mathbf{a}\), \(\mathbf{b}\) and others. An optional hat on top of the vector signifies that the vector has unit length (length = 1).
The elements (entries) of vectors (which are numbers) will again be lowercase letters in normal font, because they are numbers.
Preliminaries and Motivation
Complex Numbers: Definition and Algebraic Structure
Definition
An indispensable component of complex analysis is the complex numbers. The idea of complex numbers is that we wish to have a number which, when squared, equals \(-1\). We shall denote such a number by the symbol \(i\) and note that \(i=\sqrt{-1}\), so \(i\) has the property that \(i^2=-1\). (Consequently also \(i^3=-i\), \(i^4=1\), etc.)
We see immediately that this number \(i\) does not correspond to any real number, since for any real number \(x\) we always get \(x^2\ge0\), with equality if and only if \(x=0\). There is no way to get a negative number like \(-1\) as the result of squaring a real number. We shall therefore call \(i\) an imaginary number.
We can then construct a so-called complex number \(z\), which shall have a real and an imaginary part, by “adding” the real and imaginary parts
\begin{equation}
z = a + i b.
\end{equation}
Here \(a,b\in\RR\) are two real numbers, called the real and the imaginary part of the complex number, respectively. The plus is just a symbolic way to combine the two parts, which turns out to be a convenient notation.
We also define the complex conjugate of a complex number \(z=x+iy\) as \(\overline{z}=x-iy\) and denote it by a line over the letter of the variable.
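Readers who wish to experiment can reproduce these definitions with Python's built-in complex type; this is a minimal sketch (Python writes the imaginary unit as 1j):

```python
# A quick check of the basic definitions using Python's built-in complex type.
i = 1j
assert i**2 == -1                # defining property of i
assert i**3 == -i and i**4 == 1  # the consequences noted above

z = 3 + 4j                       # z = a + ib with a = 3, b = 4
assert z.real == 3 and z.imag == 4
assert z.conjugate() == 3 - 4j   # conjugation flips the sign of the imaginary part
```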
As an aside, it is possible to extend the complex numbers further by defining additional numbers similar to \(i\). In doing so, as William Rowan Hamilton first did, one arrives at quaternions, which are used in kinematics for attitude description as a coordinate-singularity-free alternative to Euler angles, but that is an entirely different topic for some other time.
Complex Numbers as a Field
Similarly to real numbers, complex numbers form a field in the sense of abstract algebra (see our linear algebra primer) with the operations of addition and multiplication defined, respectively, as
\begin{eqnarray}
z_1 \hat{+} z_2 &=& (a_1+ib_1) \hat{+} (a_2+ib_2) = (a_1+a_2) + i (b_1+b_2)\\
z_1 \hat\cdot z_2 &=& (a_1+ib_1) \hat\cdot (a_2+ib_2) = (a_1a_2-b_1b_2) + i (a_1b_2+a_2b_1)
\end{eqnarray}
where we have artificially put hats on top of the operations defined by the above equations in order to highlight them. (Note that there are three different pluses in the first equation above: 1) the hatted operations are the ones being defined, 2) there are pluses between real numbers (in the parentheses of the last expression), and 3) pluses between the real and imaginary parts of complex numbers.) We shall dispense with the hats henceforth.
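The component formulas above can be checked against Python's complex arithmetic; a minimal sketch:

```python
# Verify that the defining component formulas match built-in complex arithmetic.
def add(a1, b1, a2, b2):
    """(a1 + i b1) + (a2 + i b2) via the component formula."""
    return (a1 + a2, b1 + b2)

def mul(a1, b1, a2, b2):
    """(a1 + i b1) * (a2 + i b2) via the component formula."""
    return (a1 * a2 - b1 * b2, a1 * b2 + a2 * b1)

z1, z2 = 2 + 3j, -1 + 4j
assert add(2, 3, -1, 4) == ((z1 + z2).real, (z1 + z2).imag)
assert mul(2, 3, -1, 4) == ((z1 * z2).real, (z1 * z2).imag)
```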
The field of complex numbers is denoted by \(\CC\). It has certain properties that the field of real numbers \(\RR\) does not have. For instance, \(\CC\) is algebraically closed (meaning every nonconstant polynomial in \(\CC[x]\), i.e. with coefficients in \(\CC\), has a root in \(\CC\)), which is obviously not true for polynomials over \(\RR\), since for instance \(x^2+4=0\) does not have a real-number solution. But we shall not get into this further.
Complex Numbers as a Vector Space
As we know from our linear algebra primer, a vector space is an additively written Abelian group over a field, which satisfies the vector space axioms that stipulate, among other things, how elements of the field are to interact with the elements of the Abelian group to form new elements of the group.
2-Dimensional Vector Space over \(\RR\)
Without Multiplication
If we ignore the multiplication operation defined above, and keep only addition between complex numbers, complex numbers can be viewed as a two-dimensional vector space over \(\RR\). In order to see this, write the complex number \(z=a+ib\) as a two-dimensional column vector
\begin{equation}
\mathbf{z} =
\begin{pmatrix}
a\\
b
\end{pmatrix}
\end{equation}
This 2-tuple of real numbers has been taken with respect to the basis \(\{1, i\}\) of \(\CC\) thought of as a 2D vector space over the real numbers \(\RR\). The first of these two basis vectors, \(1\), points along the horizontal axis, which is therefore called the real axis. The second basis vector, \(i\), points along the vertical axis, which is called the imaginary axis. Together, they span the complex plane, a 2D plane in which every complex number can be represented by a 2D vector.
If we perform normal component-wise vector addition on this 2-tuple, it is the same as the addition of two complex numbers we defined earlier. Since all finite-dimensional vector spaces of equal dimension over the same field are isomorphic to each other, we can view the complex numbers as elements of \(\RR^2\) and imagine \(\CC\) as the complex plane, isomorphic to \(\RR^2\).
In the complex plane, when we view a complex number as a vector, we can introduce polar coordinates \(r\) and \(\varphi\), where \(r=\sqrt{a^2+b^2}\) and \(\varphi=\operatorname{atan2}(b,a)\) is the angle with the real axis (counterclockwise). We can then write
\begin{equation}
z = a + ib = r (\cos \varphi + i \sin \varphi) = r e^{i\varphi}
\end{equation}
(The last equality is due to Euler’s formula of complex analysis and can be seen by appropriately grouping the terms of the Taylor series (power series) of \(e^{i\varphi}\) with respect to \(\varphi\).) Sometimes one also writes \(\mathrm{cis}\ \varphi\) for \(\cos \varphi + i \sin \varphi\).
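The polar form and Euler's formula can be checked numerically with the standard-library cmath module; a minimal sketch:

```python
import cmath
import math

# Polar coordinates of z = a + ib and reconstruction via Euler's formula.
z = 3 + 4j
r, phi = cmath.polar(z)                 # r = |z|, phi = atan2(b, a)
assert math.isclose(r, 5.0)
assert math.isclose(phi, math.atan2(4, 3))

# r * e^{i phi} reconstructs z (Euler's formula)
assert cmath.isclose(r * cmath.exp(1j * phi), z)
assert cmath.isclose(cmath.rect(r, phi), z)
```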
Remember from our linear algebra primer that \(\RR^2\) is not a field. So there is no room in this picture for the multiplication operation on complex numbers defined above. Nonetheless, this visualization of the complex plane is enormously useful, and we shall use it for physics applications, too.
With Multiplication
Despite the previous caveat, it is nonetheless possible to incorporate complex number multiplication in this picture over \(\RR\), if we are a bit clever. Instead of writing the complex number \(z\) as a column vector as above, we shall write it as a \(2\times 2\) matrix:
\begin{equation}
[z]=
\begin{pmatrix}
a & -b\\
b & a
\end{pmatrix}
\end{equation}
The addition of complex numbers is now performed as regular matrix addition. And with this addition, it still is a two-dimensional vector space over \(\RR\). What we have gained with this construction is that we can use regular matrix multiplication to perform the multiplication of two complex numbers, and it will be equivalent to how we defined complex number multiplication above.
This (additive) subgroup of \(2\times2\) matrices is more difficult to visualize as a vector space than the previous column vector notation. We shall therefore use it rarely, but will resort to it on occasion, especially when multiplication is needed.
It is easy to verify that the matrices of the above form constitute a field, even though general \(2\times 2\) matrices only form a ring with unity (see again our linear algebra primer). Since the determinant of the above matrix is \(a^2+b^2\), it is positive unless both \(a\) and \(b\) are zero. So every matrix of this form, except for the zero matrix, is invertible. Furthermore, it is easy to verify that matrix multiplication for matrices of the above form commutes. This resolves the two objections for which general matrices did not form a field, but only a ring with unity.
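The matrix representation and its claimed properties can be verified numerically; this sketch uses NumPy:

```python
import numpy as np

# The 2x2 matrix representation [z] of a complex number z = a + ib, with a
# check that matrix multiplication reproduces complex multiplication.
def mat(z):
    a, b = z.real, z.imag
    return np.array([[a, -b],
                     [b,  a]])

z1, z2 = 2 + 3j, -1 + 4j
# matrix product equals the matrix of the complex product
assert np.allclose(mat(z1) @ mat(z2), mat(z1 * z2))
# commutativity holds for matrices of this special form
assert np.allclose(mat(z1) @ mat(z2), mat(z2) @ mat(z1))
# determinant is a^2 + b^2 = |z|^2
assert np.isclose(np.linalg.det(mat(z1)), abs(z1) ** 2)
```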
1-Dimensional Vector Space over \(\CC\)
Since complex numbers form a field, however, we can also view complex numbers as a 1-dimensional vector space over \(\CC\) (just like real numbers are a one-dimensional vector space over \(\RR\)), instead of viewing them as a two-dimensional vector space over \(\RR\). In this viewpoint, complex numbers are only a one-dimensional vector space, not a two-dimensional one, and the underlying field has changed from \(\RR\) to \(\CC\). We shall use this viewpoint, too, as we will see shortly.
Real and Complex Derivatives and Differentials
Differential of a Real Vector-Valued Function
So far we have only discussed the algebraic properties of complex numbers. Complex analysis will involve computing derivatives and integrals of functions of a complex variable. We shall now therefore take a look at differentiation of functions.
As we have seen in the black section of our multivariable calculus primer (the section dealing with the total differential of a vector-valued function and the Jacobian matrix), a differentiable vector-valued function
\begin{eqnarray}
\mathbf{F}:\RR^2&\rightarrow&\RR^2\\
(x,y)&\mapsto&\mathbf{F}(x,y)=(f_1(x,y), f_2(x,y))
\end{eqnarray}
composed of scalar functions \(f_1:\RR^2\rightarrow\RR\) and \(f_2:\RR^2\rightarrow\RR\) has a total differential (sometimes called the total derivative) \(d\mathbf{F}\) that can be written in terms of the Jacobian matrix
\begin{equation}
[J] =
\begin{pmatrix}
\frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y}\\
\frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y}
\end{pmatrix}
\end{equation}
The entries are the partial derivatives, i.e. the derivatives with respect to the indicated variable while all other variables are held fixed. For a real, differentiable, vector-valued function, each entry in \([J]\) can take a different value, depending on the values of the partial derivatives at the point \((x,y)\) at which they are evaluated. \(\mathbf{F}\) is differentiable at \((x,y)\) as long as all four partial derivatives exist and are continuous there. There is no constraint on the relation of these four values.
The reader shall be reminded at this point that a total differential is a linear map approximating the function in a region around a point. This linear map goes from one vector space to another, and it can be expressed, with respect to a choice of bases in these vector spaces, as multiplication by a matrix, which in this case is the Jacobian matrix. (The coordinates in the domain and codomain of \(f\) give rise to the basis vectors.)
We shall also note that, analogously, for a scalar function of one variable, \(f:\RR\rightarrow\RR, x\mapsto f(x)\) the Jacobian matrix \([J]=\frac{\partial f}{\partial x}=\frac{d f}{d x}\) is a \(1 \times 1\) matrix (or simply a real number – but it is important to view this number here still as representing a linear map by means of (matrix) multiplication). Also, it is not necessary to write a partial derivative \(\partial\) here and one can use the total derivative \(d\) symbol instead, because there is only one variable.
Differential of a Complex Scalar Function of a Complex Variable
Next we introduce the idea of the derivative of a complex scalar function \(f:\CC\rightarrow\CC, z\mapsto f(z)\), with respect to its single complex variable \(z\), i.e. \(df/dz\), using the above picture. (We will do it more formally later, by requiring that the derivative be a complex number that is the same no matter from which direction one approaches the point in the complex plane; for now, let us build on what we have learned so far to further our intuition and come to the same conclusion.)
We have seen that \(\CC\) as a vector space can be identified with \(\RR^2\). In that sense, viewing \(f\) actually as \(f:\RR^2\rightarrow\RR^2\), the total derivative of \(f\) is the Jacobian matrix \([J]\), a \(2 \times 2\) matrix with real entries. However, we have also seen that \(\CC\) can be viewed as a 1-dimensional vector space over \(\CC\). In that sense, the total derivative of \(f:\CC\rightarrow\CC\) is a Jacobian matrix \([J]\) as well, but this time only a \(1 \times 1\) matrix with complex entries, actually just one entry, a single complex number when the derivative is evaluated at a particular \(z\).
The two viewpoints seem to be incongruous, one offering four real degrees of freedom (the four entries of the \(2\times 2\) Jacobian matrix of the total differential \(df\)), while the other viewpoint only offers two (the real and imaginary part of a complex number). But then we remember that we have found a way to write complex numbers as \(2\times 2\) matrices of a particular form
\begin{equation}
[z]=
\begin{pmatrix}
a & -b\\
b & a
\end{pmatrix}
\end{equation}
where \(a\) is the real part and \(b\) the imaginary part of the complex number \(z\). If we want both viewpoints to be true simultaneously, i.e. that \(df\) is an endomorphism of \(\CC\) and that \(\CC\) can be viewed as a 2-dimensional vector space over \(\RR\) and also as a 1-dimensional vector space over \(\CC\), we realize that we have to restrict the general Jacobian matrix \([J]\) of the (real vector-valued) function \(f\) from four free entries
\begin{equation}
[J] =
\begin{pmatrix}
\frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y}\\
\frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y}
\end{pmatrix}
\end{equation}
to just two free entries \(a\) and \(b\) as defined in the expression for \([z]\). In other words, by comparing the above expressions for \([z]\) and \([J]\), we arrive at the requirement that (from \(a\) and \(b\), respectively):
\begin{eqnarray}
\frac{\partial f_1}{\partial x} &=& \frac{\partial f_2}{\partial y}\\
\frac{\partial f_2}{\partial x} &=& -\frac{\partial f_1}{\partial y}
\end{eqnarray}
So for a real vector-valued function \(f:\RR^2\rightarrow\RR^2\) to be interpretable as the real and imaginary parts of a complex scalar function that is complex differentiable, the four real partial derivatives of the function \(f\) are not all free, but must be connected in a specific way, satisfying the above two equations. These are the so-called Cauchy-Riemann differential equations of complex analysis for the holomorphic function (i.e. complex differentiable complex function) \(f(z)=f_1(x,y)+if_2(x,y)\), with \(z=x+iy\). (It is customary in the mathematics literature to use the letters \(u\) and \(v\) for \(f_1\) and \(f_2\) in this context.) The Cauchy-Riemann equations provide constraints on \(f\) that give rise to a wealth of theorems and a whole branch of mathematics; they are what makes complex analysis different from real analysis.
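As a concrete sketch, the Cauchy-Riemann equations can be checked by finite differences for the holomorphic function \(f(z)=z^2\), whose components are \(f_1(x,y)=x^2-y^2\) and \(f_2(x,y)=2xy\):

```python
# Finite-difference check of the Cauchy-Riemann equations for f(z) = z^2,
# i.e. f1(x, y) = x^2 - y^2 and f2(x, y) = 2 x y.
def f1(x, y): return x**2 - y**2
def f2(x, y): return 2 * x * y

def d(g, x, y, h=1e-6, wrt='x'):
    """Central finite-difference partial derivative of g at (x, y)."""
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x, y = 1.3, -0.7
# first Cauchy-Riemann equation:  df1/dx = df2/dy
assert abs(d(f1, x, y, wrt='x') - d(f2, x, y, wrt='y')) < 1e-6
# second Cauchy-Riemann equation: df2/dx = -df1/dy
assert abs(d(f2, x, y, wrt='x') + d(f1, x, y, wrt='y')) < 1e-6
```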
Relevance of Complex Analysis for Physics
A Solution of Laplace's Equation
One of the things that makes complex analysis, and in particular holomorphic functions, so attractive in physics is that their real components \(f_1\) and \(f_2\) satisfy Laplace’s equation (which is ubiquitous in physics)
\begin{eqnarray}
\Delta f_1 &=& 0\\
\Delta f_2 &=& 0
\end{eqnarray}
where \(\Delta\) is the Laplace operator defined as \(\Delta = \nabla^2\), and \(\nabla = (\frac{\partial}{\partial x}, \frac{\partial}{\partial y})^T\) is the nabla operator, which we have already encountered in our multivariable calculus primer. This is a direct consequence of \((f_1, f_2)\) satisfying the Cauchy-Riemann differential equations, which we shall see soon. From this we can also start to guess that knowing a holomorphic function on the boundary of a region is sufficient to know its values in the middle of the region, which will eventually give rise to Cauchy’s integral formula for holomorphic functions.
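This harmonicity can be checked numerically; a sketch for the real part of the holomorphic function \(f(z)=z^3\), namely \(u(x,y)=x^3-3xy^2\) (our own example choice):

```python
# Numerical check that u(x, y) = x^3 - 3 x y^2, the real part of the
# holomorphic function f(z) = z^3, satisfies Laplace's equation.
def u(x, y): return x**3 - 3 * x * y**2

def laplacian(g, x, y, h=1e-3):
    """Five-point finite-difference approximation of the Laplace operator."""
    return (g(x + h, y) + g(x - h, y) +
            g(x, y + h) + g(x, y - h) - 4 * g(x, y)) / h**2

assert abs(laplacian(u, 0.8, 1.2)) < 1e-6
```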
Physics Application Example: Incompressible, Irrotational Flow in 2D
To motivate physics applications of complex analysis, let us give an example of a function in fluid dynamics which satisfies the Cauchy-Riemann differential equations for complex differentiable (i.e. holomorphic) functions (when written in complex form). One such function is the velocity vector field of an incompressible, irrotational flow in two dimensions. (Because viscosity introduces vorticity, this flow typically has to be inviscid as well.) That the components of this two-dimensional velocity vector field satisfy the Cauchy-Riemann equations can be seen as follows.
The first Cauchy-Riemann differential equation for the velocity vector is obtained from the incompressibility requirement \(\rho=\mathrm{const.}\), which results in \(\frac{\partial\rho}{\partial t} = 0\) and \(\nabla\rho=0\). With the continuity equation
\begin{equation}
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\mathbf{V}) = 0,
\end{equation}
this translates to \(\nabla \cdot \mathbf{V}= 0\) (since \(\rho\) is constant, one can simply pull it in front of the derivative), which written in components is
\begin{equation}
\nabla \cdot \mathbf{V} = \frac{\partial V_1}{\partial x} + \frac{\partial V_2}{\partial y} + \underbrace{\frac{\partial V_3}{\partial z}}_{=0}=0
\end{equation}
The last term is always zero for a 2-dimensional flow in the \(xy\)-plane, because it is the derivative with respect to the third coordinate \(z\), and the vector field is constant in this third dimension, since we are only looking at a 2-dimensional flow. To make it look the same as the first Cauchy-Riemann equation obtained previously for \(f_1\) and \(f_2\), we need to make the assignment \(f_1=V_1\) and \(f_2=-V_2\) (notice the minus sign for the second component).
For an irrotational flow we have by definition \(\nabla \times \mathbf{V} = \mathbf{0}\) (see our multivariable calculus primer), which, written in components, leads to
\begin{equation}
\nabla \times \mathbf{V} =
\begin{pmatrix}
\frac{\partial}{\partial x}\\
\frac{\partial}{\partial y}\\
\frac{\partial}{\partial z}
\end{pmatrix}
\times
\begin{pmatrix}
V_1 \\
V_2 \\
V_3
\end{pmatrix}
=
\begin{pmatrix}
\frac{\partial V_3}{\partial y} - \frac{\partial V_2}{\partial z}\\
\frac{\partial V_1}{\partial z} - \frac{\partial V_3}{\partial x}\\
\frac{\partial V_2}{\partial x} - \frac{\partial V_1}{\partial y}
\end{pmatrix}
= \mathbf{0}
\end{equation}
The first two components are irrelevant for a 2-dimensional flow in the \(xy\)-plane, because derivatives with respect to the coordinate in the third dimension, \(z\), are zero. We are thus left with
\begin{equation}
\frac{\partial V_2}{\partial x} - \frac{\partial V_1}{\partial y} = 0
\end{equation}
which is exactly the second Cauchy-Riemann differential equation we obtained previously, if we assign again \(f_1=V_1\) and \(f_2=-V_2\).
We can therefore define the complex velocity \(W(z)=V_1(x,y)-iV_2(x,y)\) (notice the minus sign), with \(z=x+iy\), which is a complex differentiable (i.e. holomorphic) function, since its real and imaginary parts satisfy the Cauchy-Riemann equations. Then we can unleash the full machinery of complex analysis onto the complex velocity \(W(z)\) (including the Cauchy integral formula, residue theorem, Laurent series, etc.) and do fluid dynamic calculations this way. We will do this, for instance, during the formal proof of the Kutta-Joukowski theorem in two-dimensional aerodynamics, which relates the lift generated by an airfoil to the incoming airflow velocity and the circulation around the airfoil.
In a similar way, one can also define the complex potential \(F(z)=\phi(x,y) + i \psi(x,y)\) from the real velocity potential \(\phi(x,y)\) and the real stream function \(\psi(x,y)\) (\(\psi\) is constant along streamlines, which are lines tangent to the velocity vector field) of the incompressible, irrotational flow, and show that the real and imaginary parts of \(F(z)\) obey the Cauchy-Riemann equations. Taking the complex derivative (which exists, because holomorphic functions are infinitely differentiable), we obtain \(dF/dz = V_1 - iV_2 = W\) for the relation between the complex potential \(F(z)\) and the complex velocity \(W(z)\).
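As a hedged illustration, we can sketch this relation numerically for one specific flow of our own choosing (not derived in the text): a uniform stream of speed \(U\) plus a point vortex of circulation \(\Gamma\) at the origin, with complex potential \(F(z)=Uz - i\Gamma/(2\pi)\log z\):

```python
import cmath

# Our own example flow: uniform stream plus point vortex at the origin.
#   F(z) = U z - i Gamma/(2 pi) log z
#   W(z) = dF/dz = U - i Gamma/(2 pi z) = V1 - i V2
U, Gamma = 10.0, 4.0

def F(z):
    return U * z - 1j * Gamma / (2 * cmath.pi) * cmath.log(z)

def W(z):
    return U - 1j * Gamma / (2 * cmath.pi * z)

z = 1 + 1j
# finite-difference check that W really is dF/dz
h = 1e-6
assert abs((F(z + h) - F(z - h)) / (2 * h) - W(z)) < 1e-6

V1, V2 = W(z).real, -W(z).imag   # note the minus sign in W = V1 - i V2
```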
Examples of Holomorphic Functions and Their Complex Derivatives
Holomorphic (i.e. complex differentiable) functions include polynomials, the exponential function \(e^z\), and trigonometric functions like \(\sin z\) and \(\cos z\).
Since we will expand holomorphic functions in power series and Laurent series, we shall often encounter terms of the form \(z^n\), where \(n\) can take positive integer values (monomials of polynomials, power series) or negative integer values (in which case the monomial appears in the denominator of a fraction). Regardless of whether \(n\) is positive or negative, if \(f(z)=z^n\), then the complex derivative of \(f(z)\) is
\begin{equation}
f'(z)=n z^{n-1}.
\end{equation}
Note that for \(n=0\) we have \(f(z)=z^0=\mathrm{const.}\), which leads to \(f'(z)=0\): the factor \(n=0\) in the formula above kills the \(z^{-1}\) term.
In the above, \(f(z)\) can be viewed as the antiderivative (primitive function) of \(f'(z)\). Let us now write \(f\) for \(f'\) and \(F\) for \(f\). Then for \(f(z)=z^n\)
\begin{equation}
F(z)=\frac{1}{n+1}z^{n+1}
\end{equation}
except for \(n=-1\), i.e. for \(f(z)=1/z\). In fact, as we shall see, \(f(z)=1/z\) does not have an antiderivative in the complex plane, because the complex logarithm, \(\log(z)\), is multivalued and cannot be made continuous on any domain containing a full circle around the origin. This will lead to the line integral of \(1/z\) on a closed circle around \(z=0\) giving \(2\pi i\) instead of zero, which in turn gives rise to residues, etc. All other \(z^n\) have antiderivatives, regardless of \(n\) being positive or negative, as long as \(n\neq-1\). We shall explore this later. What we want the reader to remember for now is the above formula for the derivative and that all functions of the form \(f(z)=z^n\), except for \(n=-1\), have an antiderivative.
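The special role of \(n=-1\) can be previewed numerically by approximating the line integral of \(z^n\) around the unit circle with a simple midpoint sum; a minimal sketch:

```python
import cmath

# Numerical line integral of z^n around the unit circle |z| = 1,
# traversed counterclockwise. Only n = -1 yields 2*pi*i; the other integer
# powers integrate to zero, reflecting which z^n possess an antiderivative.
def circle_integral(n, steps=20000):
    total = 0.0
    for k in range(steps):
        t0 = 2 * cmath.pi * k / steps
        t1 = 2 * cmath.pi * (k + 1) / steps
        zm = cmath.exp(1j * (t0 + t1) / 2)            # midpoint on the circle
        dz = cmath.exp(1j * t1) - cmath.exp(1j * t0)  # chord as dz
        total += zm ** n * dz
    return total

assert abs(circle_integral(-1) - 2j * cmath.pi) < 1e-6  # the exception
assert abs(circle_integral(2)) < 1e-6                   # has antiderivative
assert abs(circle_integral(-2)) < 1e-6                  # has antiderivative
```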
Overview of Basic Definitions and Theorems of Complex Analysis
After this intuitive introduction to complex analysis, let us now start to develop complex analysis a bit more formally, though we shall proceed rather quickly and will just highlight certain definitions and theorems, without giving the proofs for most of them.
Holomorphic Functions and the Cauchy-Riemann Differential Equations
Complex analysis deals with complex functions of one complex variable. Let us define differentiation with respect to this complex variable. This requires a little bit more care than with respect to a real variable, because the point of differentiation can be approached from different directions in the complex plane.
Definition (Complex Differentiable, Holomorphic Function): Let \(U\subset \CC\) be open, \(f:U\rightarrow\CC\), and \(z_0\in U\). \(f\) is complex differentiable at \(z_0\) if
\begin{equation}
f'(z_0) = \lim_{z\rightarrow z_0}\ \frac{f(z)-f(z_0)}{z-z_0}
\end{equation}
exists. \(f\) is said to be holomorphic in \(U\) if \(f\) is complex differentiable at every \(z\in U\).
The derivative \(f'(z)=\frac{df}{dz}(z)\) can then be written as a complex number \(f'(z)=a+ib\), with real and imaginary parts \(a\) and \(b\), respectively. For holomorphic functions the typical rules of differential calculus are valid, including chain rule, summation rule, product rule and quotient rule.
The above requirement is more stringent than it may seem at first. It implies that the derivative has to have the same value no matter from which direction in the complex plane one approaches \(z_0\). As we shall see, this will reduce the Jacobian matrix from four free parameters to just two, even though over real numbers, the Jacobian will remain a \(2\times2\) matrix.
If we were to identify the complex plane \(\CC\) with \(\RR^2\) using the notation
\begin{eqnarray}
z &=& x + iy\\
f(z) &=& u(x,y) + iv(x,y)
\end{eqnarray}
with \(x,y\in\RR\) and \(u\) and \(v\) functions \(\RR^2\rightarrow\RR\) we can express this constraint as the Cauchy-Riemann differential equations
\begin{eqnarray}
\frac{\partial u}{\partial x}&=&\frac{\partial v}{\partial y}\\
\frac{\partial u}{\partial y}&=&-\frac{\partial v}{\partial x}
\end{eqnarray}
The real and imaginary components \(u\) and \(v\) of every holomorphic function must satisfy these equations in the region where the function is holomorphic.
It turns out that \(\frac{df}{dz}\) is again a complex number, and we can write it from the above as either
\begin{equation}
f'(z)=\frac{df}{dz} = a+ib = \frac{\partial u}{\partial x} - i\frac{\partial u}{\partial y} = \frac{\partial v}{\partial y} + i \frac{\partial v}{\partial x}
\end{equation}
We will use these expressions for the complex derivative of a function in several of our calculations, so it is best to remember them.
Taking the viewpoint of \(f\) as a real function, \(\RR^2\rightarrow\RR^2\), the total differential \(df\) (Jacobian matrix) takes the form
\begin{equation}
[J] :=
\begin{pmatrix}
\frac{\partial u}{\partial x} & \frac{\partial u}{\partial y}\\
\frac{\partial v}{\partial x} & \frac{\partial v}{\partial y}
\end{pmatrix}
=
\begin{pmatrix}
a & -b\\
b & a
\end{pmatrix}
\end{equation}
for some values \(a, b\in\RR\). This is exactly the form matrices have to take if we want to express complex numbers as \(2\times2\) matrices and then use simple matrix multiplication instead of regular complex number multiplication (the reader is encouraged to verify this).
In general, of course, the Jacobian matrix \([J]\) would have four independent entries corresponding to the partial derivatives of \(u\) and \(v\) with respect to \(x\) and \(y\). So from the above we see again how restrictive the requirement is that a function be complex differentiable.
Holomorphic functions, viewed as real mappings, are conformal (they preserve angles) wherever \(f'(z)\neq 0\).
Laplace's Equation
Corollary: If \(f\) is a holomorphic function in \(U\), then its real and imaginary parts \(u\) and \(v\) are harmonic functions in \(U\), i.e. they each satisfy the Laplace equation \(\Delta u = 0\) and \(\Delta v = 0\). Here we have used the Laplace operator notation \(\Delta := \nabla^2\).
Proof:
\begin{equation}
\Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \frac{\partial}{\partial x}\frac{\partial u}{\partial x} + \frac{\partial}{\partial y}\frac{\partial u}{\partial y} = \frac{\partial^2 v}{\partial y\partial x} - \frac{\partial^2 v}{\partial x\partial y} = 0
\end{equation}
where in the last step (before the zero) we have used the Cauchy-Riemann differential equations (and the zero results from the partial derivatives commuting, i.e. their order can be interchanged for a twice differentiable function).
Cauchy's Integral Theorem and Direct Consequences
Complex Line Integrals
Complex line integrals are ubiquitous in complex analysis.
Definition (Complex Line Integral):
Let \(U\subset\CC\), \(f: U\rightarrow \CC\) continuous, and let \(\gamma:[t_0,t_1]\rightarrow U\) be a piecewise continuously differentiable curve. Then
\begin{equation}
\int_\gamma f(z)\,dz = \int_{t_0}^{t_1} f(\gamma(t))\cdot\dot\gamma(t)\,dt
\end{equation}
is the complex line integral of \(f\) along the curve \(\gamma\). The dot in the last expression denotes complex multiplication.
The complex line integral above is defined very similarly to the two-dimensional version of the real line integral we have encountered in our multivariable calculus primer, but the second dimension here is written as the imaginary part of a complex number rather than the second dimension of a vector, and the dot product becomes complex multiplication (so the mixing of the components between the \(f\) “vector” and the tangent “vector” of the curve, \(\dot \gamma\), becomes a bit different than with the standard dot product; in particular, both second entries become a real entry with a minus sign).
We can write the above complex line integral as real-valued integrals of \(f(x+iy)=u(x,y)+iv(x,y)\):
\begin{equation}
\int_\gamma f(z)\,dz = \int_\gamma (u\,dx-v\,dy) + i\int_\gamma (v\, dx + u\,dy)
\end{equation}
The reader can easily verify this from the complex line integral definition, using \(\gamma(t)=x(t)+iy(t)\) and \(\dot\gamma=\frac{dx}{dt}+i\frac{dy}{dt}\).
Lemma: If \(f\) has an antiderivative \(F\), i.e. if a function \(F\) exists such that \(F'=f\), then
\begin{equation}
\int_\gamma f(z)\,dz = F(\gamma(t_1))-F(\gamma(t_0))
\end{equation}
Furthermore, the magnitude of the complex line integral is bounded by the maximum of \(|f(z)|\) along the curve times the length of the curve \(\gamma\) (the so-called ML inequality).
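The antiderivative lemma can be checked numerically; this sketch integrates \(f(z)=z^2\), with \(F(z)=z^3/3\), along an arbitrarily chosen path:

```python
# Numerical check of the antiderivative lemma for f(z) = z^2, F(z) = z^3 / 3,
# along the (arbitrarily chosen) path gamma(t) = t + i t^2, t in [0, 1].
def gamma(t):  return t + 1j * t**2
def dgamma(t): return 1 + 2j * t          # derivative of gamma

N = 20000
integral = sum(gamma((k + 0.5) / N) ** 2 * dgamma((k + 0.5) / N) / N
               for k in range(N))          # midpoint rule on [0, 1]

F = lambda z: z**3 / 3
assert abs(integral - (F(gamma(1)) - F(gamma(0)))) < 1e-6
```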
Cauchy's Integral Theorem
Theorem (Cauchy’s Integral Theorem): Let \(U\subset\CC\) be an open subset and \(f:U\rightarrow\CC\) holomorphic. Let \(K\subset U\) be a compact region with piecewise smooth boundary. Then
\begin{equation}
\int_{\partial K} f(z)dz = 0
\end{equation}
The boundary of \(K\), denoted by \(\partial K\) is typically followed counterclockwise in the integral. \(K\) can have holes in it, which are excluded from \(K\). The boundaries of these holes are part of \(\partial K\) and are to be followed clockwise.
The proof of Cauchy’s integral theorem uses Green’s theorem, which we have encountered in our multivariable calculus primer. Green’s theorem converts the line integral into an area integral over the enclosed region. The integrand of that area integral then turns out to be zero if the Cauchy-Riemann equations hold. Writing Green’s theorem as \(\oint_{\partial K} (P\,dx + Q\,dy) = \iint_K \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right)dA\), use \(P=u\), \(Q=-v\) to prove the real part of Cauchy’s integral theorem and \(P=v\), \(Q=u\) to prove the imaginary part, where \(u\) and \(v\) are the real and imaginary parts of the function \(f\).
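Cauchy's integral theorem can also be verified numerically; this sketch integrates the holomorphic function \(e^z\) around a closed triangular contour of our own choosing:

```python
import cmath

# Numerical check of Cauchy's integral theorem for f(z) = exp(z) around a
# closed triangular contour with corners 0, 2, and 1 + i (our own choice).
corners = [0, 2, 1 + 1j, 0]

def edge_integral(f, za, zb, steps=20000):
    """Midpoint-rule approximation of the line integral of f from za to zb."""
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        z = za + t * (zb - za)
        total += f(z) * (zb - za) / steps
    return total

loop = sum(edge_integral(cmath.exp, corners[i], corners[i + 1])
           for i in range(3))
assert abs(loop) < 1e-6   # the closed-contour integral vanishes
```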
Cauchy's Integral Formula
A consequence of Cauchy’s integral theorem is Cauchy’s integral formula (to be distinguished from the integral theorem above):
Theorem (Cauchy’s Integral Formula): Let \(z_0\in\CC\), \(r>0\), and let \(f\) be holomorphic on a vicinity of \(\overline{B}_r(z_0)=\{z\in\CC\, |\, |z-z_0|\le r\}\). If \(|a-z_0|<r\), then
\begin{equation}
f(a) = \frac{1}{2\pi i} \int_{|z-z_0|=r}\frac{f(z)}{z-a}\,dz
\end{equation}
The proof uses Cauchy’s integral theorem. The formula holds not just for a circle but for any closed path \(\gamma\) in a simply connected region that encircles \(a\) once counterclockwise. Note that unlike the region \(K\) in Cauchy’s integral theorem, the disk \(\overline{B}_r(z_0)\) here must not have any holes.
Cauchy’s integral formula essentially states that the value of function \(f\) at a point \(a\) can be computed from the values of \(f\) on a closed curve \(\gamma\), which can be interpreted as the boundary of the region within which the point \(a\) is located. This is closely related to holomorphic functions being solutions to Laplace’s equation (a second-order partial differential equation we have mentioned earlier) and to the solution of Laplace’s equation being determined by the boundary conditions.
The above is how Cauchy’s integral formula is often presented in the mathematical literature, but for better notational clarity let us make a purely notational change: write \(z'\) for the integration variable \(z\), which is internal and only appears on the righthand side, and write \(z\) for the variable \(a\), which also appears as the argument of the function on the lefthand side:
\begin{equation}
f(z) = \frac{1}{2\pi i} \int_{|z'-z_0|=r}\frac{f(z')}{z'-z}\,dz'
\end{equation}
This is exactly the same equation, just written in a notation that matches our usual habit of thinking of complex functions as functions of \(z\).
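Numerically, Cauchy’s integral formula does reconstruct interior values from boundary data. The sketch below (our addition, assuming NumPy; the choice of \(\cos\) and of the sample point is ours) recovers \(\cos(a)\) at an interior point \(a\) from the values of \(\cos\) on the circle \(|z'|=2\):

```python
import numpy as np

def circle_integral(f, z0, r, n=4096):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

a = 0.5 + 0.3j                          # interior point, |a| < 2
recovered = circle_integral(lambda z: np.cos(z) / (z - a), 0.0, 2.0) / (2j * np.pi)
print(abs(recovered - np.cos(a)))       # ~0: boundary values determine f(a)
```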
Power Series (Taylor Series) and Laurent Series of a Complex Function
Another direct consequence of Cauchy’s integral formula and therefore of Cauchy’s integral theorem is the fact that all holomorphic functions are analytic, i.e. they can be written locally around a point \(z_0\) as a power series. This also implies that holomorphic functions are infinitely differentiable. (The term analytic function used here is not to be confused with the term analytic expression, also used in mathematics, but with a different meaning.)
Power Series (Taylor Series) Expansion and Cauchy's Differentiation Formula
Using Cauchy’s integral formula, it can be shown that a power series expansion of any holomorphic function exists in a vicinity of a point \(z_0\), with coefficients of the form shown on the righthand side of the equation below. Because of Cauchy’s integral formula (and assuming \(z_0=0\) without loss of generality), the existence of a power series in the variable \(z\) for the function \(1/(z'-z)\) is sufficient to generate a power series for all holomorphic functions, since on the righthand side of Cauchy’s integral formula (after the notational change we performed above) only \(1/(z'-z)\) depends on \(z\). The factor \(f(z')\) in the integrand depends only on the integration variable \(z'\), which is internal and does not appear on the lefthand side of the equation.
Dropping the assumption that \(z_0=0\), one obtains the power series
\begin{equation}
f(z) = \sum_{n=0}^\infty a_n(z-z_0)^n
\end{equation}
where the coefficients are
\begin{equation}
a_n = \frac{1}{2\pi i} \oint_\gamma \frac{f(z')}{(z'-z_0)^{n+1}}\,dz'
\end{equation}
Because the power series expansion is unique, one can compare the coefficients \(a_n\) of the power series above with the standard form of the Taylor series to see that \(a_n=\frac{1}{n!}f^{(n)}(z_0)\) and therefore obtain an expression for the \(n\)-th derivative of the holomorphic function \(f\):
Theorem (Cauchy’s Differentiation Formula):
\begin{equation}
f^{(n)}(z)=\frac{n!}{2\pi i}\oint_\gamma\frac{f(z')}{(z'-z)^{n+1}}\,dz'
\end{equation}
As a side note, notice that the index \(n\) in a power series is an integer \(n\ge0\); negative values of \(n\), which would effectively put the \((z-z_0)^n\) term in the denominator of a fraction, are not allowed (this is what distinguishes a power series from the Laurent series, which we will define shortly). This makes the power series, when truncated at some \(n=n_0\), a polynomial of order \(n_0\).
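Cauchy’s differentiation formula lends itself to a direct numerical check (our addition, assuming NumPy; the test function \(e^z\) and the sample point are our choices): recover the third derivative of \(e^z\) at an interior point from a contour integral and compare it with the known value, which is again \(e^z\):

```python
import numpy as np
from math import factorial

def circle_integral(f, z0, r, n=4096):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

z = 0.2 + 0.1j                         # point inside the contour |z'| = 2
order = 3                              # which derivative to compute
val = factorial(order) / (2j * np.pi) * circle_integral(
    lambda zp: np.exp(zp) / (zp - z)**(order + 1), 0.0, 2.0)
print(abs(val - np.exp(z)))            # ~0: every derivative of exp is exp
```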
Region of Validity of Power Series
Let us briefly focus on the region where the above power series construction is valid. We pick a point \(z_0\) and the expansion is valid on some simply connected local vicinity around \(z_0\) (for simplicity the reader can imagine a region with a circular boundary, i.e. a disk, and indeed, because of path independence, the integral of the coefficient can be evaluated on a circle). There shall be no singularities of \(f\) inside the region bounded by curve \(\gamma\), and, importantly, \(z_0\) shall not be a singularity itself.
We shall now introduce a different series, the Laurent series, which is defined on an annulus around \(z_0\), not on a disk around \(z_0\). This will allow \(z_0\) or any points close to it to be excluded, which in turn will allow singularities to reside inside the annulus. This will be especially helpful when we apply complex analysis in physics, because we will be able to place an airfoil inside the annulus, such that the annulus only covers the region where the complex airflow velocity field has no singularities and represents an incompressible, irrotational flow to good accuracy. A Taylor series would be unable to do so due to being defined on a disk.
Laurent Series
Definition (Laurent Series): A Laurent series is a series of the form
\begin{equation}
\sum_{n=-\infty}^{\infty} c_n(z-z_0)^n
\end{equation}
or a couple of series, consisting of a principal part (or singular part)
\begin{equation}
\sum_{n=1}^{\infty} c_{-n}\frac{1}{(z-z_0)^n}
\end{equation}
and the regular part (or analytic part)
\begin{equation}
\sum_{n=0}^{\infty} c_{n}(z-z_0)^n
\end{equation}
The Laurent series converges if both the principal part and the regular part converge.
Lemma: A Laurent series converges on an annulus
\begin{equation}
A_{r,R}(z_0):=\{z\ |\ r<|z-z_0|<R\}
\end{equation}
where \(1/r\) is the convergence radius of the principal part (viewed as a power series in the variable \(1/(z-z_0)\)) and \(R\) is the convergence radius of the regular part.
If function \(f\) has no singularity inside the inner circle \(|z-z_0|=r\) either, the interior radius \(r\) can be taken to zero and the Laurent series reduces to a Taylor series.
Some useful general remarks about Laurent series expansions are in order. Let
\begin{equation}
f(z) = \sum_{n=-\infty}^{\infty} c_n(z-z_0)^n
\end{equation}
be a convergent Laurent series on the annulus \(A_{r,R}(z_0)\). Then \(f\) is holomorphic and differentiable on \(A_{r,R}\), with
\begin{equation}
f'(z)=\sum_{\begin{array}{c}n=-\infty\\ n\not=0\end{array}}^\infty n c_n (z-z_0)^{n-1}.
\end{equation}
If \(c_{-1}\not=0\), then \(f(z)\) has no antiderivative (primitive function) on the annulus; any candidate antiderivative would be multivalued, because
\begin{equation}
\oint_{|z-z_0|=\rho} \frac{dz}{z-z_0} = 2\pi i \not= 0
\end{equation}
for any small radius \(\rho>0\). (We will see later, when we discuss the residue theorem, that \(c_{-1}\) will be related to the value of residues; and in physics applications, where \(f(z)\) will be the complex velocity \(W(z)\), \(c_{-1}\) will be related to the value of circulation around an airfoil).
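The value \(2\pi i\) and its independence of the radius \(\rho\) are easy to confirm numerically (a sketch of ours, assuming NumPy; the center \(z_0\) is an arbitrary choice):

```python
import numpy as np

def circle_integral(f, z0, r, n=4096):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

z0 = 1.0 + 2.0j
vals = [circle_integral(lambda z: 1 / (z - z0), z0, rho) for rho in (0.1, 1.0, 3.0)]
print(vals)                             # each ~2*pi*i, independent of the radius
```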
If \(c_{-1}=0\), then the antiderivative is
\begin{equation}
F(z)=\sum_{\begin{array}{c}n=-\infty\\ n\not=-1\end{array}}^{\infty} \frac{c_n}{n+1}(z-z_0)^{n+1}.
\end{equation}
The coefficients in the Laurent series of a function can be determined from the expression in the following theorem:
Theorem (Laurent Series Expansion): If \(f\) is holomorphic on \(A_{r,R}\), then \(f(z) = \sum_{n=-\infty}^{\infty} c_n(z-z_0)^n\) with coefficients given, for any \(\rho\) with \(r<\rho<R\), by
\begin{equation}
c_n=\frac{1}{2\pi i} \int_{|z-z_0|=\rho}\frac{f(z)}{(z-z_0)^{n+1}}\, dz
\end{equation}
The proof uses Cauchy’s integral formula and can be found in any textbook on complex analysis.
We have thus created a series expansion that can be used on an annulus, in an aerodynamics application excluding the airfoil (and any singularities of the complex velocity function \(f(z)=W(z)\)) in the center. We will use the Laurent series expansion in the proof of the Kutta-Joukowski theorem.
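To see the coefficient formula at work, here is a small numerical sketch (our addition, assuming NumPy; the example function is our choice). Take \(f(z)=\frac{1}{z(2-z)}\) on the annulus \(0<|z|<2\) around \(z_0=0\). Partial fractions give \(f(z)=\frac{1}{2z}+\frac{1}{4}\sum_{n\ge0}(z/2)^n\), so \(c_{-1}=\frac12\) and \(c_n=2^{-(n+2)}\) for \(n\ge0\); the coefficient integral reproduces these values:

```python
import numpy as np

def laurent_coeff(f, z0, rho, n, m=4096):
    """c_n = 1/(2 pi i) * integral of f(z)/(z - z0)^(n+1) over |z - z0| = rho."""
    t = np.arange(m) * 2 * np.pi / m
    z = z0 + rho * np.exp(1j * t)
    dz = 1j * rho * np.exp(1j * t)
    return np.sum(f(z) / (z - z0)**(n + 1) * dz) * (2 * np.pi / m) / (2j * np.pi)

f = lambda z: 1 / (z * (2 - z))        # poles at 0 and 2; expand on 0 < |z| < 2
coeffs = [laurent_coeff(f, 0.0, 1.0, n) for n in (-1, 0, 1, 2)]
print(coeffs)                          # ~[0.5, 0.25, 0.125, 0.0625]
```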
Isolated Singularities and Meromorphic Functions
The principal part of the Laurent series, which we have just introduced, blows up to infinity at \(z=z_0\) (\(z_0\) is the center of the annulus, the expansion point of the Laurent series). If we ever want to use the principal part of the Laurent series, we must therefore learn to deal with singularities, i.e. with a function which is holomorphic throughout the annulus but not on the full disk. (If function \(f\) is holomorphic inside the interior radius of the annulus as well, then the principal part of the Laurent series is zero and the Laurent series reduces to the Taylor series, valid on the whole disk, i.e. the annulus together with its interior.)
Singularities
Definition (Isolated Singularity): If \(f:U\rightarrow\CC\) is a holomorphic function, then a point \(z_0\notin U\) is called an isolated singularity of \(f\) if some punctured vicinity \(\{z\,|\,0<|z-z_0|<\epsilon\}\) lies in \(U\).
Definition (Removable Singularity): An isolated singularity \(z_0\) is called removable if a holomorphic continuation of \(f\) on \(z_0\) exists.
For example, the function \(f(z)=\frac{z^2+1}{z-i}\) has a removable singularity at \(z=i\), which is removed by defining \(f(i)=2i\), since \(z^2+1=(z-i)(z+i)\).
Definition (Pole): An isolated singularity \(z_0\) is called a pole of function \(f\) if \((z-z_0)^n f(z)\) has a removable singularity at \(z_0\) for some \(n\in\NN\). The smallest value of \(n\) which makes the singularity removable is called the order of the pole.
Definition (Essential Singularity): An isolated singularity is called an essential singularity if it is not removable and not a pole.
Meromorphic Functions
Definition (Meromorphic Function): A function that is holomorphic except at isolated points where it has poles is called a meromorphic function.
The quotient of two holomorphic functions is a meromorphic function. Conversely, a meromorphic function can be written locally as the quotient of two holomorphic functions.
The Laurent series can be used to describe a function on an annulus around a singularity. The principal part of the series will then be nonzero.
Application of the Laurent Series Expansion Theorem to Isolated Singularities
Let \(f\) be a holomorphic function with a singularity at \(z_0\). On a punctured disk with sufficiently small radius \(R\) and centered at \(z_0\), we can write \(f\) as a Laurent series \(f(z)=\sum_{n=-\infty}^\infty c_n (z-z_0)^n\). Let us remember that the principal part (or singular part) of the Laurent series is the part with \(n\le-1\).
If the singularity at \(z_0\) is removable, then all coefficients of the principal part are zero. (The same statement holds if the function does not have a singularity at \(z_0\), as we have already noted.) If the singularity is a pole, the principal part has only finitely many terms with nonzero coefficients \(c_n\). If the singularity is essential, then the principal part has infinitely many nonzero terms.
Residue Theorem
Definition (Residue): Let \(z_0\) be an isolated singularity of a holomorphic function \(f\). The residue of \(f\) in \(z_0\) is defined as
\begin{equation}
\mathrm{Res}_{z_0}(f)=\frac{1}{2\pi i} \int_{|z-z_0|=\epsilon} f(z) dz
\end{equation}
with \(\epsilon>0\) small. (According to Cauchy’s integral theorem, the integral does not depend on \(\epsilon\) for small \(\epsilon\).)
If \(\sum_{n=-\infty}^{\infty} c_n(z-z_0)^n\) is a Laurent series of \(f\) in \(z_0\), we have
\begin{equation}
\frac{1}{2\pi i}\int_{|z-z_0|=\epsilon} f(z)\,dz = \frac{1}{2\pi i} \sum_{n=-\infty}^\infty c_n \underbrace{\int_{|z-z_0|=\epsilon} (z-z_0)^n\, dz}_{=\left\{\begin{array}{cc}
0&n\not=-1\\
2\pi i & n=-1
\end{array}\right.} = c_{-1}
\end{equation}
and thus \(\mathrm{Res}_{z_0} (f) = c_{-1}\). The value of the residue of \(f\) at the point \(z_0\) is equal to the coefficient \(c_{-1}\) of the Laurent series of \(f\) around \(z_0\).
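For instance (our addition, assuming NumPy; the example function is our choice), \(e^z/z^2 = z^{-2} + z^{-1} + \frac12 + \dots\) has \(c_{-1}=1\), and the small-circle integral defining the residue indeed returns this value:

```python
import numpy as np

def circle_integral(f, z0, r, n=4096):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

res = circle_integral(lambda z: np.exp(z) / z**2, 0.0, 0.5) / (2j * np.pi)
print(res)                              # ~1, the c_{-1} coefficient of exp(z)/z^2
```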
Theorem (Residue Theorem): Let \(f:U\rightarrow\CC\) be holomorphic except for isolated singularities. Let \(K\subset U\) be a compact region with piecewise smooth boundary \(\partial K\), and let \(f\) not have any singularities on \(\partial K\). Let \(S\) denote the set of singularities of \(f\) which lie in \(K\). Then
\begin{equation}
\int_{\partial K} f(z)\, dz = 2\pi i \sum_{a\in S} \mathrm{Res}_a (f)
\end{equation}
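A small numerical illustration of the residue theorem (our addition, assuming NumPy; the example function is our choice): \(f(z)=\frac{1}{z(z-3)}\) has simple poles at \(0\) and \(3\) with residues \(-\frac13\) and \(+\frac13\). A contour of radius 1 encloses only the first pole, giving \(2\pi i\cdot(-\frac13)\); a contour of radius 5 encloses both poles, whose residues cancel:

```python
import numpy as np

def circle_integral(f, z0, r, n=8192):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

f = lambda z: 1 / (z * (z - 3))
inner = circle_integral(f, 0.0, 1.0)    # encloses only z = 0: 2*pi*i * (-1/3)
outer = circle_integral(f, 0.0, 5.0)    # encloses both poles: residues sum to 0
print(inner, abs(outer))
```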
The residue theorem can be used to compute real-valued integrals where the integrand has a polynomial in the denominator:
Proposition (Computation of Integral over Real Axis): Let \(F(z)\) be a rational function with a zero at infinity of order 2 or larger (i.e. the degree of the denominator exceeds the degree of the numerator by at least 2). Let this function also have no pole on the real axis. Then
\begin{equation}
\int_{-\infty}^{\infty} F(x)\, dx = 2\pi i \sum_{a\ \mathrm{with}\ \Im(a)>0} \mathrm{Res}_a(F).
\end{equation}
The last proposition lets us conveniently compute integrals of real functions \(F(x)\) over the whole real axis (from \(-\infty\) to \(\infty\)), if we can find an analytic continuation into the complex plane. Oftentimes in practice one can simply replace the real variable \(x\) in the expression with a complex one, \(z\). Good candidates for such functions are, for instance, rational functions whose denominator polynomial is of an order at least two higher than the numerator polynomial. (The zero-at-infinity requirement ensures that the integral over the completion arc in the complex plane, which closes the curve, goes to zero as its radius is made larger.)
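As a classic worked example (our addition, assuming NumPy), \(\int_{-\infty}^{\infty}\frac{dx}{1+x^2}=\pi\): the continuation \(F(z)=\frac{1}{1+z^2}\) has one pole in the upper half plane, at \(z=i\), with residue \(\frac{1}{2i}\), so the proposition gives \(2\pi i\cdot\frac{1}{2i}=\pi\). The sketch computes the residue numerically and compares with direct quadrature over a large but finite interval:

```python
import numpy as np

def circle_integral(f, z0, r, n=4096):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

F = lambda z: 1 / (1 + z**2)
res_i = circle_integral(F, 1j, 0.5) / (2j * np.pi)   # residue at z = i: 1/(2i)
via_residues = (2j * np.pi * res_i).real             # ~pi

x = np.linspace(-1000.0, 1000.0, 2_000_001)
y = 1.0 / (1.0 + x**2)
direct = np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2   # trapezoid, truncated tails
print(via_residues, direct)            # both close to pi (direct misses the tails)
```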
Relevance for Aerodynamics
While all the above concepts of complex analysis are still fresh in the reader’s mind, let us briefly comment on how they will appear in some of our aerodynamics problems.
We have already commented on the fact that we will be able to write the velocity vector field of a 2-dimensional incompressible, irrotational flow of a fluid as a complex velocity function \(W(z)\). The fact that this function satisfies the Cauchy-Riemann equations for holomorphic functions elegantly captures the incompressibility and irrotationality conditions of the flow.
If the function \(W(z)\) were holomorphic everywhere and did not have a singularity somewhere within the airfoil, the circulation (defined as a line integral along a closed curve \(\gamma\) around the airfoil) would be zero because of Cauchy’s integral theorem. Such an airfoil would not produce any lift, because the Kutta-Joukowski theorem (the proof of which we review elsewhere) relates the lift force of an airfoil to the value of the circulation of the airflow along a curve surrounding the airfoil.
According to the residue theorem, the circulation around the airfoil (which will be proportional to lift) will equal \(2\pi i\) times the sum of the residues of all singularities of function \(W(z)\) contained within the airfoil. And the value of each residue is equal to the \(c_{-1}\) coefficient of the Laurent series expansion of \(W(z)\) around that singularity.
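To make this concrete with the simplest possible flow (our addition, assuming NumPy; the vortex model is the standard idealized point vortex, and sign conventions vary between texts): a point vortex of circulation \(\Gamma\) at the origin has complex velocity \(W(z)=\frac{\Gamma}{2\pi i z}\), whose only singularity is a simple pole with residue \(\frac{\Gamma}{2\pi i}\). The residue theorem then gives \(\oint W\,dz=\Gamma\) on any circle enclosing the vortex, regardless of radius:

```python
import numpy as np

def circle_integral(f, z0, r, n=4096):
    """Integral of f over the circle |z - z0| = r, counterclockwise."""
    t = np.arange(n) * 2 * np.pi / n
    z = z0 + r * np.exp(1j * t)
    return np.sum(f(z) * 1j * r * np.exp(1j * t)) * (2 * np.pi / n)

Gamma = 2.5                              # hypothetical circulation value
W = lambda z: Gamma / (2j * np.pi * z)   # idealized point vortex at the origin
circ_small = circle_integral(W, 0.0, 0.3)
circ_large = circle_integral(W, 0.0, 10.0)
print(circ_small, circ_large)            # both ~Gamma, independent of the contour
```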
So in one form or another, we will use almost everything we have discussed so far. Complex analysis reaches much further than this though, and the reader is encouraged to dive deeper with specialized mathematical literature. In this article, we have only quickly reviewed the bare minimum of essentials that we will need.
As a great starting point for further reading for the purposes of an aerospace engineer, a very good concise introduction to complex analysis with detailed applications in aerodynamics can be found in:
Krishnamurty Karamcheti, “Principles of Ideal Fluid Aerodynamics,” John Wiley & Sons, Inc., New York, 1966.