Data Science and Computing with Python for Pilots and Flight Test Engineers
Inverse Laplace Transform
The inverse Laplace transform takes the Laplace transform \(F(s)\) of a function \(f(t)\) and turns it back into the function \(f(t)\). As its name says, it is the inverse operation of the Laplace transform.
In practice, we will decompose any Laplace transform of a function into known transforms of functions and look these then up in a table to get the inverse transform. For such a decomposition, we will need to learn several techniques, such as partial fraction decomposition and completing the square.
Before we do so, let us look at the formal definition of the inverse Laplace transform first.
Definition of the Inverse Laplace Transform
Even though we will never use it explicitly in this course, let us introduce the formal definition of the inverse Laplace transform. It is a bit more complicated than the Laplace transform, because the Laplace transform took us from the one-dimensional time domain, characterized by the parameter \(t\), to the 2-(real)-dimensional \(s\)-plane of the Laplace domain, with \(s=\sigma+i\omega\) being a complex number (the complex numbers form a 2-dimensional plane over the real numbers, with \(\sigma\) and \(\omega\) denoting the two real dimensions).
The Laplace transform simply integrated over \(t\) in its definition. For the inverse Laplace transform, we must think carefully how to perform the complex integral (a line integral in the complex plane, characterized by one parameter and forming a one-dimensional curve in the plane).
The integral formula for the inverse Laplace transform, also called Mellin's inverse formula, the Bromwich integral, or the Fourier-Mellin integral, is given by
\begin{equation} f(t) = \mathcal{L}^{-1}[F(s)](t) = \frac{1}{2\pi i} \lim_{T\rightarrow\infty} \int_{\gamma-iT}^{\gamma+iT} e^{st} F(s) \, ds \end{equation}
where the integration is to be performed along a vertical line with real part \(\Re(s)=\gamma\) in the complex s-plane. \(\gamma\) must be chosen such that it is greater than the real part of all singularities (poles) of \(F(s)\) and \(F(s)\) must be bounded on the line.
If all poles of \(F(s)\) are in the left-half of the \(s\)-plane, then we can set \(\gamma=0\) and the inverse Laplace transform becomes identical to the inverse Fourier transform (we are integrating over the imaginary axis of the \(s\)-plane in this case, which is equivalent to the Fourier domain). The same is true if \(F(s)\) is an entire function, i.e. a complex function that is holomorphic on the whole complex plane.
The complex integral can be computed in practice with the residue theorem. For a brief, informal introduction to complex analysis, including some of the concepts just mentioned, see our complex analysis primer.
We will not pursue this technical line here further, and will rather turn our attention to decomposing functions into more basic functions for which the Laplace transform (and its inverse) have been tabulated, and we will perform the inverse transform by looking it up in a table, just like we have done already for the forward transform.
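Incidentally, such table lookups can be cross-checked symbolically in Python. The following is a minimal sketch using the SymPy library (assumed to be installed); the transform \(F(s)=\frac{1}{s^2+2s+5}\) is merely an illustrative example, not one taken from a particular exercise.

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)  # restricting to t >= 0 suppresses the Heaviside step factor

# Illustrative example: F(s) = 1/(s^2 + 2s + 5) = 1/((s+1)^2 + 4)
F = 1 / (s**2 + 2*s + 5)

f = sp.inverse_laplace_transform(F, s, t)
print(f)  # expected: exp(-t)*sin(2*t)/2, i.e. a damped sine
```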
Decomposition Techniques
The reason we need to learn decomposition techniques only now is that the forward Laplace transforms we used were always the same, namely those of \(x(t)\), \(\dot x(t)\), and \(\ddot x(t)\), which we also simply looked up in a table, without ever computing them from first principles. The only function that might have caused some difficulty was the input function \(f(t)\), but for it we usually chose something fairly simple, whose Laplace transform could easily be looked up.
While converting the differential equation of a system into an algebraic equation in this way always followed the same pattern, the solutions we obtain in the Laplace domain differ from problem to problem. Therefore, we have to be able to deal with a much greater variety of functions when starting the inverse Laplace transform than we had to worry about for the forward transform.
In general, our solution of one of our differential equations in the Laplace domain will look something like this
\begin{equation} X(s)=\frac{as^2+bs+c}{ds^3+es^2+fs+g}F(s),\end{equation}
where \(F(s)\), itself being a Laplace transform, is oftentimes also a fraction with polynomials in the numerator and denominator (see e.g. the Laplace transforms of sine and cosine). We therefore want to decompose this expression, which consists of long polynomials in the numerator and denominator, into individual (additive) terms that each look like one of the functions in the right column of the Laplace transform table provided in the lesson on the Laplace transform, such that we can then use this table to find the corresponding functions in the time domain. Because of the linearity of the Laplace transform, we can do this for each additive term individually. In the expression above, however, everything is lumped into one big, complicated fraction, which is not yet in a usable form.
We will use partial fraction decomposition first to create the individual additive terms, and then completing the square to shape the individual terms to look exactly like the terms in the table (oftentimes those for the damped sine and cosine, or the exponential solution).
Partial Fraction Decomposition
Partial fraction decomposition is a technique that aims at decomposing an expression like
$$ \frac{as^2+bs+c}{ds^3+es^2+fs+g} $$
and writing it as a sum of fractions whose denominators contain only polynomials that cannot be factored further over the real numbers, i.e. polynomials with either exactly one real root (linear factors) or no real roots (irreducible quadratic factors). There are several cases, each of which is treated below with an explicit example.
In all of the examples below, it is assumed that the degree of the polynomial in the numerator is smaller than the degree of the polynomial in the denominator. If this is not the case, we must first perform polynomial long division of the numerator by the denominator.
Case 1: Denominator Polynomial with Different Real Roots (Linear Factors)
$$\frac{as^2+bs+c}{(s+d)(s+e)(s+f)} = \frac{A}{s+d}+\frac{B}{s+e}+\frac{C}{s+f}$$
In the above, \(s\) is the variable of the polynomial, other lowercase letters are constants which are known (they can be positive, negative, or zero), and the uppercase constants on the right-hand side are to be determined, after the decomposition is written as above. We will explain further at the bottom, how to determine the uppercase constants, once one has managed to set up the above equation.
The above situation is just an example. The polynomial in the denominator could, of course, be of higher order (containing more factors), and one would have to add more terms accordingly, following the above pattern. Likewise, the polynomial in the numerator could be of higher order. Then – if the polynomial in the numerator is of equal or higher order than the denominator – one needs to remember to perform polynomial long division first, before decomposing with an equation like the one above.
Furthermore, the polynomial in the denominator may initially be given in expanded form and needs to be factored into its roots first, before the decomposition can begin. During that process, one learns whether we are really in this Case 1 or in one of the other three cases explained below.
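To illustrate Case 1 with concrete numbers (a made-up example), and as a way of checking hand calculations, SymPy's `apart` function performs exactly this kind of decomposition:

```python
import sympy as sp

s = sp.symbols('s')

# Case 1: two distinct real roots in the denominator (made-up example)
F = (s + 3) / ((s + 1) * (s + 2))

print(sp.apart(F, s))  # 2/(s + 1) - 1/(s + 2)
```

We will verify this particular decomposition by hand in the section on solving for the uppercase constants below.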
Case 2: Denominator Polynomial with Repeated Real Roots (Linear Factors)
Some polynomials have multiple roots with the same value. This occurs when the same factor appears in the polynomial more than once. In that case, one must set up the right-hand side such that the repeated factor appears in the denominators of the fractions to the first power, the second power, etc. (up to the highest power with which it appears), as is illustrated in the example below.
$$\frac{as^2+bs+c}{(s+d)(s+e)^2} = \frac{A}{s+d}+\frac{B}{s+e}+\frac{C}{(s+e)^2}$$
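With concrete numbers (again a made-up example), a repeated real root leads, for instance, to
$$\frac{s}{(s+1)(s+2)^2} = \frac{-1}{s+1}+\frac{1}{s+2}+\frac{2}{(s+2)^2},$$
where the uppercase constants \(A=-1\), \(B=1\), \(C=2\) are found with the procedure described further below.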
Case 3: Denominator Polynomial with Distinct Complex Roots (Quadratic Factors)
If a polynomial possesses a factor that has complex roots, i.e. one that does not have real roots and that cannot be decomposed further into linear factors with real coefficients (i.e. one that remains a quadratic factor), then one proceeds as follows:
$$\frac{as^2+bs+c}{(s+d)(s^2+es+f)} = \frac{A}{s+d}+\frac{Bs+C}{s^2+es+f}$$
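As a concrete, made-up illustration of this case, SymPy's `apart` keeps the irreducible quadratic factor intact:

```python
import sympy as sp

s = sp.symbols('s')

# Case 3: one real root and one irreducible quadratic factor (made-up example)
F = 1 / ((s + 1) * (s**2 + 2*s + 5))

# The result is equivalent to 1/(4(s+1)) - (s+1)/(4(s^2+2s+5));
# the exact printed form may differ slightly between SymPy versions.
print(sp.apart(F, s))
```

The quadratic term will later be brought into the form of the damped sine and cosine transforms by completing the square.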
Case 4: Denominator Polynomial with Repeated Complex Roots (Quadratic Factors)
If the factor with complex roots appears multiple times, one proceeds analogously to the occurrence of multiple linear factors, as is illustrated below:
$$\frac{as^2+bs+c}{(s+d)(s^2+es+f)^2} = \frac{A}{s+d}+\frac{Bs+C}{s^2+es+f}+\frac{Ds+E}{(s^2+es+f)^2}$$
Solving for the Uppercase Constants
Now that we know how to set up the decomposition equation, according to the examples above, depending on the factors appearing in the polynomial in the denominator, we need to learn how to determine the newly introduced constants denoted by uppercase letters on the right-hand side of the equation.
The way to determine these uppercase constants is by bringing all the terms on the right-hand side over a common denominator, i.e. multiplying each term at the top and bottom by the appropriate factors, such that the denominators of all the terms on the right-hand side become the same as the denominator on the left-hand side.
Then one can simply equate the numerator on the right-hand side to the numerator on the left-hand side. Since this equality must hold for all values of \(s\), we can substitute enough values of \(s\) to obtain enough equations to determine all the uppercase constants. (As many such equations are needed as there are uppercase constants to be determined.)
When choosing values of \(s\) to substitute, it is advantageous to use the roots of the denominator polynomial, because this makes many terms on the right-hand side cancel and the values of the uppercase constants are faster to determine.
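As a concrete illustration (using the made-up Case 1 example from above), consider
$$\frac{s+3}{(s+1)(s+2)} = \frac{A}{s+1}+\frac{B}{s+2}.$$
Bringing the right-hand side over the common denominator \((s+1)(s+2)\) and equating numerators gives \(s+3 = A(s+2)+B(s+1)\). Substituting the root \(s=-1\) yields \(2=A\), and substituting \(s=-2\) yields \(1=-B\), i.e. \(B=-1\), so that
$$\frac{s+3}{(s+1)(s+2)} = \frac{2}{s+1}-\frac{1}{s+2}.$$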
Worked examples of the above procedure can be found towards the bottom of the Wikipedia article on partial fraction decomposition and in the textbook of the USAF Test Pilot School, Flying Qualities Textbook, Volume II, Part 1, USAF-TPS-CUR-86-02, Edwards AFB, California, 1986, on Pages 3.64-3.67.
Completing the Square
Completing the square is a technique that aims to turn a second-order polynomial in \(s\)
$$ as^2 + bs + c $$
into something of the form
$$ a(s-h)^2+k $$
for some values \(h\) and \(k\) yet to be determined.
The procedure is quite simple:
- Set \(h=-\frac{b}{2a}\) and insert it for \(h\) into the second expression above.
- Choose (or compute) \(k\) such that \(a\left(s+\frac{b}{2a}\right)^2+k = as^2+bs+c\); expanding the square shows that \(k = c-\frac{b^2}{4a}\). An example is given below.
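For example, for the quadratic factor used in the Case 3 illustration above (here \(a=1\), \(b=2\), \(c=5\)),
$$ s^2+2s+5 = (s+1)^2+4, $$
which has exactly the form \((s+a)^2+\omega^2\) (with \(a=1\) and \(\omega=2\)) appearing in the denominators of the tabulated transforms of the damped sine and cosine.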
The Wikipedia article on completing the square contains several worked examples, which the reader is encouraged to consult for illustration of the technique.