Vectors give us a language for describing direction and magnitude in spaces ranging from robot joint velocities to gradients of cost functions. A vector can be represented in a basis as $\mathbf{v} = v_1 \mathbf{e}_1 + \cdots + v_n \mathbf{e}_n$, where the coefficients $v_i$ capture how far the vector leans along each axis. Thinking geometrically, addition obeys the parallelogram rule and scalar multiplication stretches or flips a vector along its line of action; this picture is fundamental to the intuition we use later in Matrix Fundamentals and Calculus — Differentiation.
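As a minimal sketch of these operations, the snippet below treats plain Python lists as coordinate vectors; the add and scale helpers are illustrative stand-ins defined here, not part of the vector_ops module used later.

def add(a, b):
    # Componentwise sum: the algebraic form of the parallelogram rule.
    return [ai + bi for ai, bi in zip(a, b)]

def scale(a, c):
    # Scalar multiplication stretches (or flips) a vector along its line of action.
    return [c * ai for ai in a]

v = [2.0, -1.0]   # coefficients of v in the basis {e1, e2}
w = [1.0, 3.0]
print(add(v, w))      # [3.0, 2.0]
print(scale(v, 0.5))  # [1.0, -0.5]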
The norm $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}$ measures length, and the dot product encodes angles via $\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta$. These tools let us decompose motion along useful directions, assess alignment between gradients and search directions, and project information between coordinate frames. The projection of $\mathbf{v}$ onto a non-zero vector $\mathbf{u}$, given by $\operatorname{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\mathbf{v} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\,\mathbf{u}$, frequently appears in controllers and estimators that seek components aligned with constraints.
from vector_ops import dot, scale

# Vector to project and the direction to project onto.
v = [2.0, -1.0, 0.5]
u = [0.5, 0.5, 0.5]

# proj_u(v) = (v . u / u . u) u, composed from a dot product and a scaling.
projection = scale(u, dot(v, u) / dot(u, u))
print(projection)
The small utility module in ./code/vector-basics/vector_ops.py illustrates how dot products and scaling compose into projections. Experimenting with it highlights numerical sensitivity when $\mathbf{u} \cdot \mathbf{u}$ is nearly zero or when vectors become highly collinear. These pathologies foreshadow conditioning issues we treat in Line Search Methods and Kalman Filter Essentials.
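To make that sensitivity concrete, the sketch below reuses dot and scale from vector_ops; safe_projection, angle, and the eps tolerance are hypothetical additions shown only to illustrate one way of guarding the division and the near-collinear angle computation.

import math
from vector_ops import dot, scale

def safe_projection(v, u, eps=1e-12):
    # Guard against dividing by a nearly zero squared norm of u.
    denom = dot(u, u)
    if denom < eps:
        raise ValueError("u is too close to the zero vector to project onto")
    return scale(u, dot(v, u) / denom)

def angle(v, u):
    # cos(theta) = (v . u) / (|v| |u|); clamp to [-1, 1] before acos to
    # absorb tiny floating-point overshoot for nearly collinear vectors.
    c = dot(v, u) / math.sqrt(dot(v, v) * dot(u, u))
    return math.acos(max(-1.0, min(1.0, c)))

print(angle([1.0, 0.0], [1.0, 1e-8]))   # nearly collinear: angle close to zero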

Understanding vectors also primes interpretation of constraint gradients in Karush-Kuhn-Tucker Conditions and value-function derivatives in Dynamic Programming and Bellman Equations. Pay careful attention to how norms induce dual spaces, because this underlies the choice of trust-region radii and observer gains later in the stack.
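As a brief reminder of what that duality means (standard definitions, not specific to this module), the dual norm of a norm $\|\cdot\|$ is
\[
\|\mathbf{y}\|_* \;=\; \sup_{\|\mathbf{x}\| \le 1} \langle \mathbf{y}, \mathbf{x} \rangle ,
\]
so the Euclidean norm is its own dual, while $\|\cdot\|_1$ and $\|\cdot\|_\infty$ are dual to each other. A trust-region radius measured in one norm therefore bounds the change of a linear model through the gradient's dual norm, which is why the choice of norm matters later in the stack.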