Matrices package a linear map into a rectangular array that transforms basis coordinates. A matrix $A$ acts on a vector $x$ by taking a weighted combination of its columns, $Ax = \sum_i x_i a_i$, where $a_i$ is the $i$-th column. The column space captures all reachable combinations, and its dimension is the rank $r$, which indicates how many independent directions are preserved. Rank deficiency points to constraints or redundant actuators, themes that show up when forming defects in Pontryagin's Minimum Principle Notes or assembling observers in Kalman Filter Essentials.
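As a minimal sketch of the rank idea (the matrix below is an illustrative example, not drawn from the linked notes), NumPy can report how many independent directions a map preserves:

import numpy as np

# Rank-deficient example: the second column is twice the first,
# so only one independent direction survives.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print("Rank:", np.linalg.matrix_rank(B))  # prints 1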
Key invariants such as the determinant measure volume scaling: $\det(A)$ tells us how hypercubes warp under $A$. If $\det(A) = 0$, the transformation collapses volume, signaling loss of invertibility. Decompositions like QR or the singular value decomposition (SVD) reveal orthogonal bases and stretch factors that we later use to precondition Line Search Methods or to regularize Hessians in Nonlinear MPC Overview. The spectral theorem gives us diagonalization for symmetric matrices, unlocking tools like modal analysis in Kinematic Bicycle Model.
import numpy as np

# Eigendecomposition of a small example matrix.
# (For symmetric matrices, np.linalg.eigh is the better choice.)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
values, vectors = np.linalg.eig(A)
print("Eigenvalues:", values)
print("Eigenvectors:\n", vectors)
Conditioning matters whenever we invert matrices or solve linear systems. The 2-norm condition number $\kappa_2(A) = \sigma_{\max}(A)/\sigma_{\min}(A)$ bounds how relative errors in the right-hand side translate into relative solution errors. When $\kappa_2(A)$ is large, small measurement noise can corrupt state estimates, motivating the covariance inflation heuristics discussed in EKF and UKF Overview. For control design, recognizing structure (bandedness, sparsity, symmetry) enables tailored solvers exploited in Direct Transcription.
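As a rough illustration of that sensitivity (the nearly singular matrix below is contrived for the example), a tiny perturbation of the right-hand side can shift the solution by order one:

import numpy as np

# Columns are almost parallel, so kappa_2 is large.
C = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])          # exact solution is [1, 1]
print("kappa_2(C):", np.linalg.cond(C, 2))

x = np.linalg.solve(C, b)
x_noisy = np.linalg.solve(C, b + np.array([0.0, 1e-4]))  # 1e-4 noise on b
print("Solution shift:", x_noisy - x)  # roughly [-1, 1]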