Eigen-decomposition of a Matrix
In many applications, we need to find the eigenvalues and eigenvectors of a matrix. This can be done using NumPy's linear algebra module, numpy.linalg. For example, let us consider a random matrix \(\mathbf{A}\).
import numpy as np
np.set_printoptions(suppress=True)
A = np.random.randn(3, 3)
print(A)
[[ 0.33222557 -0.63034335 0.07890825]
[ 1.35934868 -0.3405165 -0.49860337]
[-0.19721574 0.19602863 -0.83804111]]
To find its eigenvalues and eigenvectors, use the function numpy.linalg.eig as follows.
w, v = np.linalg.eig(A) # find eigenvalues and eigenvectors
print(f'eigenvalues = {w}') # each entry is an eigenvalue
print(f'eigenvectors = \n{v}') # each column is the corresponding eigenvector
eigenvalues = [-0.03188706+0.90169563j -0.03188706-0.90169563j -0.78255793+0.j ]
eigenvectors =
[[-0.19177332-0.50350317j -0.19177332+0.50350317j 0.18268415+0.j ]
[-0.83200937+0.j -0.83200937-0.j 0.43354694+0.j ]
[-0.00783031+0.13193422j -0.00783031-0.13193422j 0.88241915+0.j ]]
Each entry of w is an eigenvalue, and each column v[:,i] is the eigenvector corresponding to the eigenvalue w[i]. Note that even for a real matrix, the eigenvalues and eigenvectors can be complex.
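The correspondence between w[i] and v[:,i] can be checked directly: each pair should satisfy \(\mathbf{A} \mathbf{v}_i = \lambda_i \mathbf{v}_i\). A minimal sketch of this check, using a fixed 2×2 matrix (chosen here, not taken from the text, so the result is reproducible):

```python
import numpy as np

# Fixed example matrix (an assumption for illustration, not from the text above)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, v = np.linalg.eig(A)

# For each eigenpair i, A @ v[:, i] should equal w[i] * v[:, i]
for i in range(len(w)):
    assert np.allclose(A @ v[:, i], w[i] * v[:, i])
print('all eigenpairs satisfy A v = lambda v')
```

The same loop works unchanged for the random 3×3 matrix above; the allclose tolerance absorbs floating-point round-off.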
Let \(\mathbf{D}\) be a diagonal matrix formed by the eigenvalues, and \(\mathbf{R}\) be a matrix whose columns are the corresponding eigenvectors. We can check that \(\mathbf{A}\) can be decomposed as \(\mathbf{A} = \mathbf{R} \cdot \mathbf{D} \cdot \mathbf{R}^{-1}\).
D = np.diag(w) # convert a vector to a diagonal matrix
R = v
A_decomp = np.dot(R, np.dot(D, np.linalg.inv(R))) # decompose A into eigenmodes
print(A_decomp)
[[ 0.33222557+0.j -0.63034335+0.j 0.07890825-0.j]
[ 1.35934868-0.j -0.3405165 -0.j -0.49860337-0.j]
[-0.19721574-0.j 0.19602863-0.j -0.83804111+0.j]]
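Rather than comparing the printed entries by eye, the reconstruction can be verified numerically with np.allclose. A short sketch, repeating the decomposition on a fresh random matrix:

```python
import numpy as np

A = np.random.randn(3, 3)
w, v = np.linalg.eig(A)

# Reconstruct A from its eigen-decomposition: A = R @ D @ R^{-1}
D = np.diag(w)
A_decomp = np.dot(v, np.dot(D, np.linalg.inv(v)))

# For a real input matrix the imaginary parts cancel up to round-off,
# so the (possibly complex) reconstruction matches A numerically.
assert np.allclose(A_decomp, A)
print('A = R D R^{-1} verified')
```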
In the special (but common) case where a matrix \(\mathbf{A}\) is Hermitian (or symmetric if \(\mathbf{A}\) is real), the eigenvalues are all real. In this case, it is convenient to use the function numpy.linalg.eigh instead, which returns the eigenvalues in ascending order.
As = (A + A.T) / 2 # construct a real symmetric matrix
print(As)
[[ 0.33222557 0.36450267 -0.05915374]
[ 0.36450267 -0.3405165 -0.15128737]
[-0.05915374 -0.15128737 -0.83804111]]
w, v = np.linalg.eigh(As) # find eigenvalues and eigenvectors of a Hermitian matrix
print(f'eigenvalues = {w}') # eigenvalues in ascending order
print(f'eigenvectors = \n{v}') # each column is the corresponding eigenvector
eigenvalues = [-0.88217369 -0.46595103 0.50179268]
eigenvectors =
[[-0.04201506 -0.41556531 0.90859243]
[ 0.29489901 0.86372198 0.40867949]
[ 0.95460427 -0.2851137 -0.08626047]]
Since \(\mathbf{A}\) is real symmetric, the eigenvectors are also real, and we have \(\mathbf{R}^{-1} = \mathbf{R}^\top\), i.e., \(\mathbf{R}\) is orthogonal. Therefore, we should have \(\mathbf{A} = \mathbf{R} \cdot \mathbf{D} \cdot \mathbf{R}^\top\).
D = np.diag(w)
R = v
A_decomp = np.dot(R, np.dot(D, R.T))
print(A_decomp)
[[ 0.33222557 0.36450267 -0.05915374]
[ 0.36450267 -0.3405165 -0.15128737]
[-0.05915374 -0.15128737 -0.83804111]]
Notice that here we used the transpose R.T instead of the inverse np.linalg.inv(R).
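That substitution works because the eigenvector matrix returned by eigh is orthogonal, i.e. \(\mathbf{R}^\top \mathbf{R} = \mathbf{I}\). A minimal sketch verifying both the orthogonality and the resulting reconstruction:

```python
import numpy as np

A = np.random.randn(3, 3)
As = (A + A.T) / 2          # real symmetric matrix, as in the text
w, v = np.linalg.eigh(As)

# The columns of v are orthonormal, so v.T @ v is the identity ...
assert np.allclose(v.T @ v, np.eye(3))

# ... which is why v.T can replace np.linalg.inv(v) in the reconstruction.
assert np.allclose(v @ np.diag(w) @ v.T, As)
print('R is orthogonal and A = R D R^T holds')
```

For a general (non-symmetric) matrix the eigenvectors are not orthogonal, and the full inverse is required.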