Data Science and Computing with Python for Pilots and Flight Test Engineers
Linear Algebra
Basic Linear Algebra: Working With Matrices (2-Dimensional Arrays of Numbers)
In order to do matrix algebra, we need to know how to write matrices in Python, and we need to define the following three operations:
- Matrix Addition (addition of two matrices)
- Matrix Multiplication (multiplication of two matrices)
- Scalar Multiplication (multiplication of a scalar (a simple number) with a matrix)
We are also interested in obtaining:
- Additive inverse matrix (referred to as the “negative matrix”)
- Multiplicative inverse matrix (commonly just referred to as “inverse matrix” for short)
Note: If you are unfamiliar with matrix calculations in linear algebra, you may want to review the blue section of our linear algebra primer article briefly, before continuing to read below.
import numpy as np
Representing a Matrix in Python Code
The matrix
$$ A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} $$
is written in Python as
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
Without np.array (i.e. without defining the object as a NumPy array), the computer would simply interpret this as a list of lists (see earlier, where we discussed Python data types), which is not what we want here. We want the computer to understand that it is a matrix, i.e. an algebraic object in the mathematical sense, so that we can calculate with it using the standard matrix operations defined in linear algebra, which are implemented in the NumPy module.
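To see concretely why the np.array wrapper matters, here is a short sketch (not part of the original lesson) contrasting a plain list of lists with a NumPy array; note how the + operator behaves differently on each:

```python
import numpy as np

# A plain list of lists: "+" concatenates, it does not add element-wise.
rows = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(rows + rows)  # a longer list of 6 rows, not a matrix sum

# The same data as a NumPy array behaves like a matrix under "+".
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A + A)        # element-wise sum, as expected in linear algebra
print(A.shape)      # (3, 3)
```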
Matrix Operations
The standard matrix operations mentioned above in the introductory paragraph are implemented in Python’s NumPy module as:
# Let another matrix be:
B = np.array([[5, 2, 7], [4, 8, 0], [0, 8, 10]])
# and let c be a scalar (regular number):
c = 3.5
# Then:
# Matrix Addition A + B:
C = A + B
# Matrix Multiplication A * B:
D = np.matmul(A, B)
# Note that the following line performs element-wise multiplication of matrices;
# this is NOT matrix multiplication and generally has very little use in mathematics:
E = A * B # THIS IS NOT WHAT YOU WANT FOR MATRIX MULTIPLICATION!!!
# Scalar Multiplication:
F = c * A
# Additive inverse matrix (aka "negative matrix"):
G = -A
# Multiplicative inverse matrix (aka "inverse matrix"):
H = np.linalg.inv(B)
print(C)
print("")
print(D)
print("")
print(E)
print("")
print(F)
print("")
print(G)
print("")
print(H)
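As a quick sanity check (not in the original lesson), we can verify that the computed inverse really behaves like an inverse: multiplying a matrix by its multiplicative inverse should give the identity matrix, up to small floating-point round-off.

```python
import numpy as np

B = np.array([[5, 2, 7], [4, 8, 0], [0, 8, 10]])
H = np.linalg.inv(B)

# H @ B should be the identity matrix, up to tiny round-off errors.
I = np.matmul(H, B)
print(np.allclose(I, np.eye(3)))  # True
```

np.allclose is used rather than an exact comparison because the entries of the product differ from 0 and 1 by amounts on the order of machine precision.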
It should be noted that the element-wise multiplication of two matrices (with the asterisk symbol), which we warned about above, is never used in linear algebra, but it can have its uses in computer programming. If your matrix (i.e. array of numbers) consists of unrelated numbers (think of it as a two-dimensional list rather than an algebraic object), and you need to multiply each of these numbers with the corresponding entry of another similarly organized array, then the asterisk operation on NumPy arrays may come in handy (it will not work on regular Python lists).
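A hypothetical example of such a use case (the data here is purely illustrative): two arrays holding per-cell quantities and per-cell unit prices, where we want the cost of each cell individually rather than any linear-algebra product.

```python
import numpy as np

# Hypothetical data: quantities and unit prices stored as 2-D arrays
# of unrelated numbers (a two-dimensional list, not an algebraic matrix).
quantities = np.array([[2, 0], [1, 3]])
unit_price = np.array([[1.5, 4.0], [2.0, 0.5]])

# Element-wise product: the cost of each cell individually.
cost = quantities * unit_price
print(cost)  # [[3.  0. ] [2.  1.5]]
```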
Issues during Matrix Inversion of Almost-Singular Matrices due to Numerical Precision
Note that matrix \(A\) above is singular, because it has linearly dependent rows and therefore determinant zero. If we try to invert it, we should receive an error message, because it has no multiplicative inverse (not all matrices have an inverse, see our linear algebra primer).
# This line may produce an error, if Python (NumPy) recognizes A as a singular matrix:
J = np.linalg.inv(A)
print(J)
Due to numerical errors (coming from the finite precision with which the computer calculates), however, the matrix may appear to be only almost singular, and we may get a result anyway, which turns out to be incorrect. In that case, the entries of the computed inverse may end up being unnaturally large, of the order of \(10^{15}\).
A similar thing can happen when trying to invert a truly almost-singular matrix on a computer (i.e. one with a very small, but non-zero determinant). One must use special care and techniques when doing so, e.g. computing a singular value decomposition and discarding the singular values closest to zero. The result will not be completely accurate, but it will be much better than if one were to keep those tiny singular values. Discussing this in detail is beyond the scope of this introduction; however, such advanced linear algebra techniques may be appended to this linear algebra lesson in the future.
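As a brief sketch of the idea (assuming the singular matrix \(A\) from above): np.linalg.svd exposes the singular values directly, and NumPy's pseudoinverse np.linalg.pinv implements exactly the truncation strategy just described, discarding singular values below a relative cutoff given by its rcond parameter.

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)

# The SVD reveals that the smallest singular value is (numerically) zero,
# exposing the singularity that a naive inversion attempt would hide.
U, s, Vt = np.linalg.svd(A)
print(s)  # smallest singular value is near machine precision

# np.linalg.pinv discards singular values smaller than rcond * max(s)
# and returns the Moore-Penrose pseudoinverse instead of blowing up.
A_pinv = np.linalg.pinv(A, rcond=1e-10)
print(A_pinv)  # entries of moderate size, unlike a naive inverse attempt
```

The pseudoinverse is not a true inverse (none exists here), but it satisfies the defining property \(A A^{+} A = A\) and is the standard well-behaved substitute in least-squares problems.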