Vector Basics

Build geometric and algebraic intuition for vectors, norms, and projections in finite-dimensional spaces.

Vectors give us a language for describing direction and magnitude in spaces ranging from robot joint velocities to gradients of cost functions. A vector $\mathbf{v} \in \mathbb{R}^n$ can be represented in a basis as $\mathbf{v} = \sum_{i=1}^n v_i \mathbf{e}_i$, where the coefficients $v_i$ capture how far the vector leans along each axis. Thinking geometrically, operations such as addition and scalar multiplication obey the parallelogram rule, which is fundamental to the intuition we use later in Matrix Fundamentals and Calculus — Differentiation.
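
To make the basis picture concrete, here is a minimal, self-contained sketch in plain Python (the add and scale helpers below are defined locally for illustration, not imported from the course's vector_ops module): it rebuilds $\mathbf{v}$ from its coefficients along the standard basis and shows that addition and scalar multiplication act componentwise.

def add(a, b):
    # Componentwise vector addition.
    return [x + y for x, y in zip(a, b)]

def scale(a, c):
    # Multiply every component of a by the scalar c.
    return [c * x for x in a]

# Standard basis e_1, e_2, e_3 of R^3 and the coefficients of v.
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs = [2.0, -1.0, 0.5]

# v = sum_i v_i e_i
v = [0.0, 0.0, 0.0]
for c, e in zip(coeffs, basis):
    v = add(v, scale(e, c))

print(v)  # [2.0, -1.0, 0.5]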

The norm $\lVert \mathbf{v} \rVert_2 = \sqrt{\sum_{i=1}^n v_i^2}$ measures length, and the dot product $\mathbf{u}^\top \mathbf{v} = \sum_i u_i v_i$ encodes angles via $\mathbf{u}^\top \mathbf{v} = \lVert \mathbf{u} \rVert_2 \lVert \mathbf{v} \rVert_2 \cos \theta$. These tools let us decompose motion along useful directions, assess alignment between gradients and search directions, and project information between coordinate frames. The projection of $\mathbf{v}$ onto a non-zero vector $\mathbf{u}$, given by $\mathrm{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\mathbf{u}^\top \mathbf{v}}{\mathbf{u}^\top \mathbf{u}}\mathbf{u}$, frequently appears in controllers and estimators that seek components aligned with constraints.
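
As a quick numerical check of these definitions, the following self-contained sketch (standard library only; dot and norm are defined locally rather than taken from vector_ops) computes a length and the angle between two vectors:

import math

def dot(a, b):
    # Dot product u^T v.
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    # Euclidean norm ||a||_2.
    return math.sqrt(dot(a, a))

u = [1.0, 0.0]
v = [1.0, 1.0]

cos_theta = dot(u, v) / (norm(u) * norm(v))
print(norm(v))                             # about 1.414 (sqrt(2))
print(math.degrees(math.acos(cos_theta)))  # about 45.0 degrees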

# Project v onto u using the course's vector_ops helpers.
from vector_ops import dot, scale

v = [2.0, -1.0, 0.5]
u = [0.5, 0.5, 0.5]
# proj_u(v) = (u^T v / u^T u) u
projection = scale(u, dot(v, u) / dot(u, u))
print(projection)  # [0.5, 0.5, 0.5]

The small utility module in ./code/vector-basics/vector_ops.py illustrates how dot products and scaling compose into projections. Experimenting with it highlights numerical sensitivity when $\mathbf{u}$ is nearly zero or when vectors become highly collinear. These pathologies foreshadow conditioning issues we treat in Line Search Methods and Kalman Filter Essentials.
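
One way to guard against those pathologies is to refuse to divide by a vanishing $\mathbf{u}^\top \mathbf{u}$. The sketch below is illustrative only and not part of vector_ops.py; the tolerance 1e-12 is an arbitrary example choice.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(a, c):
    return [c * x for x in a]

def project(v, u, eps=1e-12):
    # Projection of v onto u, with a guard for a numerically negligible u.
    denom = dot(u, u)
    if denom < eps:
        # The projection direction is undefined; fall back to the zero vector.
        return [0.0] * len(v)
    return scale(u, dot(v, u) / denom)

print(project([2.0, -1.0, 0.5], [0.5, 0.5, 0.5]))   # [0.5, 0.5, 0.5]
print(project([2.0, -1.0, 0.5], [1e-9, 0.0, 0.0]))  # [0.0, 0.0, 0.0]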

Figure: vector projection geometry.

Understanding vectors also primes interpretation of constraint gradients in Karush-Kuhn-Tucker Conditions and value-function derivatives in Dynamic Programming and Bellman Equations. Pay careful attention to how norms induce dual spaces, because this underlies the choice of trust-region radii and observer gains later in the stack.
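
To hint at what "norms induce dual spaces" means computationally, here is a small illustrative sketch (not part of the course code): the dual norm of $\mathbf{y}$ with respect to the $\ell_\infty$ norm, $\sup\{\mathbf{y}^\top \mathbf{x} : \lVert \mathbf{x} \rVert_\infty \le 1\}$, is attained at a vertex of the unit cube and equals the $\ell_1$ norm of $\mathbf{y}$.

import itertools

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

y = [2.0, -1.0, 0.5]

# Maximise y^T x over the unit infinity-norm ball; a linear objective over a
# cube attains its maximum at one of the vertices x_i = +/-1.
best = max(dot(y, x) for x in itertools.product([-1.0, 1.0], repeat=len(y)))

print(best)                    # 3.5
print(sum(abs(c) for c in y))  # 3.5, i.e. the l1 norm of y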

See also