Eigenvalues and Eigenvectors
🚀 Eigenvalues & Eigenvectors: Finding the Invariants
Eigenvalues and eigenvectors are properties of square matrices that reveal the “skeleton” of a linear transformation. They describe the special directions along which the transformation acts as pure scaling, with no rotation or shear.
🟢 Level 1: Core Definitions
1. The Eigenvalue Equation
For a square matrix $A$, a non-zero vector $\mathbf{v}$ is an eigenvector if transforming $\mathbf{v}$ by $A$ results in a scalar multiple of $\mathbf{v}$:

$$A\mathbf{v} = \lambda\mathbf{v}$$

Where:
- $\mathbf{v}$ is the eigenvector.
- $\lambda$ (lambda) is the eigenvalue.
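To make the definition concrete, here is a minimal numpy check. The 2x2 matrix, the eigenvector, and the eigenvalue are illustrative choices (you can verify by hand that they satisfy the equation):

```python
import numpy as np

A = np.array([[4.0, -2.0], [1.0, 1.0]])
v = np.array([2.0, 1.0])  # an eigenvector of this A
lam = 3.0                 # its eigenvalue

# A only scales v: A @ v == lam * v
print(np.allclose(A @ v, lam * v))
```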
2. Characteristic Equation
To find the eigenvalues of $A$, we solve the characteristic equation:

$$\det(A - \lambda I) = 0$$

This is a polynomial in $\lambda$ of degree $n$ (where $n$ is the dimension of $A$).
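For a 2x2 matrix, the characteristic polynomial expands to $\lambda^2 - \operatorname{tr}(A)\lambda + \det(A) = 0$, so its roots can be found numerically. A quick sketch, using the same 2x2 matrix as the example below:

```python
import numpy as np

A = np.array([[4.0, -2.0], [1.0, 1.0]])

# det(A - lambda*I) = lambda^2 - tr(A)*lambda + det(A) for a 2x2 matrix
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]

# The roots of the characteristic polynomial are the eigenvalues
roots = np.roots(coeffs)
print(np.sort(roots))  # eigenvalues 2 and 3
```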
```python
import numpy as np

# Define a 2x2 matrix
A = np.array([[4, -2], [1, 1]])

# Calculate eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print(f"Eigenvalues: {eigenvalues}")
print(f"Eigenvectors (columns):\n{eigenvectors}")
```

🟡 Level 2: Diagonalization and Eigendecomposition
3. Eigendecomposition
If an $n \times n$ matrix $A$ has $n$ linearly independent eigenvectors, it can be factored into:

$$A = V \Lambda V^{-1}$$

Where:
- $V$ is a matrix whose columns are the eigenvectors of $A$.
- $\Lambda$ (Lambda) is a diagonal matrix containing the corresponding eigenvalues.
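The factorization can be verified directly in numpy by rebuilding $A$ from its eigenvector matrix and eigenvalues. A sketch, reusing the 2x2 matrix from the earlier example:

```python
import numpy as np

A = np.array([[4.0, -2.0], [1.0, 1.0]])

# V holds the eigenvectors as columns; Lambda is diagonal
eigenvalues, V = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

# Reconstruct A = V @ Lambda @ V^-1
A_reconstructed = V @ Lambda @ np.linalg.inv(V)
print(np.allclose(A, A_reconstructed))
```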
4. Geometric Interpretation
Eigenvectors represent the “principal axes” of a transformation. In a transformation that stretches an image, the eigenvectors point in the directions of the stretch, and the eigenvalues indicate the magnitude of that stretch.
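A quick illustration of this idea, using a hypothetical diagonal "stretch" matrix: the coordinate axes are its eigenvectors, and applying the matrix only scales them by the corresponding eigenvalue.

```python
import numpy as np

S = np.array([[2.0, 0.0], [0.0, 3.0]])  # stretch x by 2, y by 3
x_axis = np.array([1.0, 0.0])
y_axis = np.array([0.0, 1.0])

# Each axis keeps its direction and is scaled by its eigenvalue
print(S @ x_axis)  # [2. 0.]
print(S @ y_axis)  # [0. 3.]
```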
🔴 Level 3: Principal Component Analysis (PCA)
5. Variance and Eigenvectors
PCA is a technique for dimensionality reduction that identifies the directions of maximum variance in a dataset. These directions are the eigenvectors of the covariance matrix of the data.
- Center the data: Subtract the mean from each feature.
- Compute Covariance: $C = \frac{1}{n-1} X^\top X$, where $X$ is the centered data matrix with $n$ samples as rows.
- Eigendecomposition: Find eigenvalues and eigenvectors of .
- Project: Choose the eigenvectors with the largest eigenvalues to represent the data in a lower-dimensional space.
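The four steps above can be sketched directly in numpy before reaching for a library. The random data and the mixing matrix here are illustrative assumptions, not from the text:

```python
import numpy as np

# Generate correlated 2D data (illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

# 1. Center the data
Xc = X - X.mean(axis=0)

# 2. Compute the covariance matrix C = Xc^T Xc / (n - 1)
C = Xc.T @ Xc / (Xc.shape[0] - 1)

# 3. Eigendecomposition (eigh, since C is symmetric)
eigvals, eigvecs = np.linalg.eigh(C)

# 4. Project onto the eigenvector with the largest eigenvalue
top = eigvecs[:, np.argmax(eigvals)]
X_reduced = Xc @ top
print(X_reduced.shape)  # one value per sample
```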
```python
# Simple PCA workflow concept
from sklearn.decomposition import PCA
import numpy as np

X = np.array([[1, 2], [3, 4], [5, 6], [7, 8]])

# Reduce to 1 dimension
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)
print(f"Original shape: {X.shape}")
print(f"Reduced shape: {X_reduced.shape}")
# These points lie exactly on a line, so one component explains all variance
print(f"Explained variance ratio: {pca.explained_variance_ratio_}")
```