# Math for AI - Example Code

This directory contains Python examples demonstrating mathematical concepts used in AI/ML.

## Files

### 01_vector_matrix_ops.py (283 lines)
**Linear Algebra Fundamentals**

Demonstrates:
- Vector space operations: basis, span, linear independence
- Matrix operations: multiplication, transpose, inverse
- Rank computation and properties
- ML application: feature matrices and weight matrices
- Visualization of vector addition and linear transformations

**Key concepts:**
- Standard basis vectors in R^n
- Linear independence checking via rank
- Matrix multiplication and properties
- Feature vectors as matrix rows in ML

**Output:** `vector_ops.png` - Vector operations visualization

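A minimal sketch of the rank-based independence check (the function name here is illustrative, not taken from the script):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Stack the vectors as rows; they are independent iff the rank equals the count."""
    M = np.vstack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(is_linearly_independent([e1, e2]))      # True: the standard basis of R^2
print(is_linearly_independent([e1, 2 * e1]))  # False: parallel vectors are dependent
```
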
---

### 02_svd_pca.py (302 lines)
**SVD and Principal Component Analysis**

Demonstrates:
- Singular Value Decomposition (SVD) with NumPy
- Low-rank matrix approximation
- PCA implementation from scratch (centering → covariance → eigendecomposition)
- Comparison with sklearn PCA
- Application to the Iris dataset
- Explained variance visualization

**Key concepts:**
- SVD: A = U @ S @ V^T
- Relationship between SVD and PCA
- Principal components as eigenvectors of the covariance matrix
- Dimensionality reduction in practice

**Output:** `pca_visualization.png` - PCA results on Iris dataset

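The factorization above maps directly onto NumPy; note that `np.linalg.svd` returns the singular values as a vector and `V^T` rather than `V`. A minimal sketch of reconstruction and low-rank approximation:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(6, 4))
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Verify the factorization, then keep only the top-k singular triplets.
assert np.allclose(A, U @ np.diag(S) @ Vt)
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Frobenius error equals sqrt of the sum of squared dropped singular values.
print(np.linalg.norm(A - A_k), np.sqrt(np.sum(S[k:] ** 2)))
```
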
---

### 03_matrix_calculus_autograd.py (408 lines)
**Matrix Calculus and Automatic Differentiation**

Demonstrates:
- Manual gradient computation for scalar and vector functions
- Jacobian matrix computation (vector → vector)
- Hessian matrix computation (second derivatives)
- PyTorch autograd comparison
- Numerical gradient checking
- MSE loss gradients for linear regression

**Key concepts:**
- Partial derivatives and gradient vectors
- Jacobian for vector-valued functions
- Hessian for convexity analysis
- Automatic differentiation with PyTorch
- Gradient descent optimization

**Output:** `gradient_descent.png` - Gradient descent trajectory visualization

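A minimal sketch of gradient checking with PyTorch autograd, using f(x) = Σ x_i² as an assumed test function (the script's own examples may differ); float64 keeps the central-difference check tight:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64, requires_grad=True)
f = (x ** 2).sum()   # f(x) = Σ x_i², so ∇f = 2x
f.backward()

# Central-difference numerical check of the same gradient.
eps = 1e-5
num_grad = torch.zeros(3, dtype=torch.float64)
for i in range(3):
    xp = x.detach().clone()
    xm = x.detach().clone()
    xp[i] += eps
    xm[i] -= eps
    num_grad[i] = ((xp ** 2).sum() - (xm ** 2).sum()) / (2 * eps)

print(x.grad)                                       # tensor([2., 4., 6.])
print(torch.allclose(x.grad, num_grad, atol=1e-6))  # True
```
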
---

### 04_norms_regularization.py (400 lines)
**Norms, Distances, and Regularization**

Demonstrates:
- Lp norms (L1, L2, L∞) and their properties
- Distance metrics: Euclidean, Manhattan, Cosine, Mahalanobis
- L1 vs L2 regularization in linear regression
- Sparsity-inducing property of L1
- Unit ball visualization for different norms

**Key concepts:**
- Norm properties: non-negativity, homogeneity, triangle inequality
- L1 (Lasso) produces sparse solutions
- L2 (Ridge) shrinks coefficients smoothly
- Feature selection vs feature shrinkage
- Regularization paths

**Output:**
- `unit_balls.png` - Unit ball shapes for L1, L2, L∞
- `regularization_comparison.png` - L1 vs L2 effects

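The norms in this file correspond to `numpy.linalg.norm` with different `ord` values; a quick sketch:

```python
import numpy as np

x = np.array([3.0, -4.0])
print(np.linalg.norm(x, ord=1))       # L1: |3| + |-4| = 7
print(np.linalg.norm(x, ord=2))       # L2: sqrt(9 + 16) = 5
print(np.linalg.norm(x, ord=np.inf))  # L∞: max(|3|, |-4|) = 4

# Homogeneity, one of the three norm properties: ||a·x|| = |a|·||x||
assert np.isclose(np.linalg.norm(2 * x), 2 * np.linalg.norm(x))
```
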
---

### 05_gradient_descent.py (271 lines)
**Gradient Descent Optimization Algorithms**

Demonstrates:
- Basic Gradient Descent (GD) optimizer
- Stochastic Gradient Descent (SGD)
- Momentum-based optimization
- Adam optimizer from scratch
- Optimization of the Rosenbrock function (non-convex, with a narrow valley)

**Key concepts:**
- Vanilla gradient descent update: θ ← θ - lr · ∇_θ L(θ)
- Momentum accumulation for acceleration
- Adaptive learning rates with Adam
- Comparison of convergence behavior across optimizers

**Output:** `gradient_optimizers.png` - Optimization trajectories for different algorithms

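A minimal sketch of the vanilla update rule above on a simple quadratic, L(θ) = ||θ||²/2 (the script itself uses the Rosenbrock function):

```python
import numpy as np

def grad(theta):
    """Gradient of L(θ) = ||θ||²/2, which is minimized at the origin."""
    return theta

theta = np.array([5.0, -3.0])
lr = 0.1
for _ in range(100):
    theta = theta - lr * grad(theta)   # θ ← θ - lr · ∇L(θ)

print(theta)  # ≈ [0, 0]
```
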
---

### 06_optimization_constrained.py (320 lines)
**Constrained and Unconstrained Optimization**

Demonstrates:
- Unconstrained optimization with scipy.optimize methods (BFGS, CG, Newton-CG, L-BFGS-B)
- Lagrange multipliers for equality constraints
- KKT conditions for inequality constraints
- Convex vs non-convex optimization comparison

**Key concepts:**
- Analytical vs numerical optimization
- Lagrangian formulation for constraints
- First-order (gradient) and second-order (Hessian) methods
- Constraint handling in optimization problems

**Output:** `constrained_optimization.png` - Constraint visualization and optimization paths

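A minimal equality-constrained sketch with `scipy.optimize.minimize`; the problem here (minimize x² + y² subject to x + y = 1) is an assumed illustration, not necessarily the one in the script:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize x² + y² subject to x + y = 1; Lagrange multipliers give (0.5, 0.5).
objective = lambda v: v[0] ** 2 + v[1] ** 2
constraint = {"type": "eq", "fun": lambda v: v[0] + v[1] - 1}

result = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=[constraint])
print(result.x)  # ≈ [0.5, 0.5]
```
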
---

### 07_probability_distributions.py (367 lines)
**Probability Distributions and Statistical Inference**

Demonstrates:
- Common probability distributions: Gaussian, Bernoulli, Poisson, Exponential
- Maximum Likelihood Estimation (MLE) for Gaussian parameters
- Maximum A Posteriori (MAP) estimation with a Gaussian prior
- Bayesian update visualization

**Key concepts:**
- PDF/PMF properties for different distributions
- MLE: argmax_θ P(data | θ)
- MAP: argmax_θ P(θ | data) = argmax_θ P(data | θ) P(θ)
- Bayes' rule: Posterior ∝ Likelihood × Prior

**Output:** `probability_distributions.png` - Distribution PDFs and Bayesian inference

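For a Gaussian, the MLE has a closed form: the sample mean and the biased (ddof=0) sample standard deviation. A quick check on simulated data:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=2.0, scale=1.5, size=10_000)

# Gaussian MLE: μ̂ = sample mean, σ̂² = (1/N) Σ (x - μ̂)²  (biased, hence ddof=0)
mu_hat = data.mean()
sigma_hat = data.std(ddof=0)
print(mu_hat, sigma_hat)  # ≈ 2.0, 1.5
```
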
---

### 08_information_theory.py (462 lines)
**Information Theory for Machine Learning**

Demonstrates:
- Entropy as a measure of uncertainty: H(X) = -Σ p(x) log p(x)
- Cross-entropy and KL divergence
- Mutual information between variables
- Connection to ML loss functions (cross-entropy loss)
- ELBO (Evidence Lower Bound) visualization for VAEs

**Key concepts:**
- Entropy is maximized by the uniform distribution
- KL divergence as a measure of the difference between two distributions
- Cross-entropy = Entropy + KL divergence
- ELBO = log p(x) - KL(q||p), used in variational inference

**Output:** `information_theory.png` - Entropy, KL divergence, and ELBO visualization

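A minimal sketch verifying the identity H(p, q) = H(p) + KL(p‖q) on two small discrete distributions:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

entropy = -np.sum(p * np.log(p))        # H(p)
cross_entropy = -np.sum(p * np.log(q))  # H(p, q)
kl = np.sum(p * np.log(p / q))          # KL(p || q)

print(np.isclose(cross_entropy, entropy + kl))  # True
```
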
---

### 09_mcmc_sampling.py (308 lines)
**MCMC Sampling and Advanced Sampling Techniques**

Demonstrates:
- Rejection sampling from a target distribution
- Importance sampling for expectation estimation
- The Metropolis-Hastings MCMC algorithm
- The reparameterization trick (VAE-style)

**Key concepts:**
- Rejection sampling: accept/reject based on a proposal distribution
- Importance sampling: weighted samples for expectation estimates
- MCMC: a Markov chain converging to the target distribution
- Reparameterization: z = μ + σ * ε with ε ~ N(0, 1), so gradients flow through μ and σ in VAEs

**Output:** `mcmc_sampling.png` - Sampling method comparisons and MCMC convergence

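A minimal Metropolis-Hastings sketch targeting a standard Gaussian through its unnormalized log-density, with a random-walk proposal (step size and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x ** 2   # unnormalized log-density of N(0, 1)

x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + rng.normal(scale=1.0)   # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)).
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

print(np.mean(samples), np.std(samples))  # ≈ 0 and ≈ 1
```
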
---

### 10_tensor_ops_einsum.py (298 lines)
**Tensor Operations and Einstein Summation**

Demonstrates:
- Tensor creation and manipulation in NumPy and PyTorch
- Einstein summation notation (einsum) for efficient operations
- Broadcasting rules and examples
- Numerical stability techniques (log-sum-exp, softmax)

**Key concepts:**
- einsum notation: implicit summation over repeated indices
- Common operations: matrix multiply, batch operations, trace, transpose
- Broadcasting: automatic shape alignment for element-wise ops
- Numerical stability: avoid overflow/underflow in exp/log

**Output:** Console output demonstrating einsum equivalences and timing comparisons

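A few einsum equivalences of the kind the script demonstrates:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

assert np.allclose(np.einsum("ij,jk->ik", A, B), A @ B)            # matrix multiply
assert np.isclose(np.einsum("ii->", A @ A.T), np.trace(A @ A.T))   # trace
assert np.allclose(np.einsum("ij->ji", A), A.T)                    # transpose

# Batch matrix multiply: repeated index j is summed, batch index b is kept.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2, 3))
Y = rng.normal(size=(5, 3, 4))
assert np.allclose(np.einsum("bij,bjk->bik", X, Y), X @ Y)
```
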
---

### 11_graph_spectral.py (372 lines)
**Graph Theory and Spectral Graph Theory**

Demonstrates:
- Graph matrix construction: adjacency, degree, Laplacian
- Spectral decomposition of the graph Laplacian
- Spectral clustering algorithm
- Simple Graph Neural Network (GNN) message passing
- PageRank computation

**Key concepts:**
- Laplacian eigenvalues/eigenvectors encode graph structure
- Normalized Laplacian: L_norm = I - D^(-1/2) A D^(-1/2)
- Spectral clustering uses eigenvectors for community detection
- GNN message passing aggregates neighbor features

**Output:** `graph_spectral.png` - Graph visualization, spectral clustering, and PageRank

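A minimal sketch of the Laplacian construction above on a 4-node path graph; the multiplicity of the zero eigenvalue equals the number of connected components:

```python
import numpy as np

# Path graph on 4 nodes: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                          # combinatorial Laplacian

D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
L_norm = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt   # L_norm = I - D^(-1/2) A D^(-1/2)

print(np.linalg.eigvalsh(L))       # smallest eigenvalue ≈ 0: one connected component
print(np.linalg.eigvalsh(L_norm))  # normalized Laplacian eigenvalues lie in [0, 2]
```
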
---

### 12_attention_math.py (419 lines)
**Attention Mechanism Mathematics**

Demonstrates:
- Scaled dot-product attention from scratch
- Multi-head attention implementation
- Positional encoding (sinusoidal)
- Attention weight visualization
- Comparison with PyTorch nn.MultiheadAttention

**Key concepts:**
- Attention formula: softmax(Q @ K^T / sqrt(d_k)) @ V
- Scaling factor sqrt(d_k) prevents softmax saturation
- Multi-head attention: parallel attention with different projections
- Positional encoding adds sequence order information to embeddings

**Output:** `attention_weights.png` - Heatmap visualization of attention weights

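The attention formula above in a few lines of NumPy (a from-scratch sketch, not the script's exact implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarity logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # numerically stable row softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.sum(axis=-1))  # (4, 8); each row of weights sums to 1
```
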
---

## Running the Examples

Each file is standalone and can be run independently:

```bash
python 01_vector_matrix_ops.py
python 02_svd_pca.py
# ... through 12_attention_math.py
```

## Dependencies

All examples require:
- numpy
- matplotlib
- torch (PyTorch)
- sklearn (scikit-learn)
- scipy

Install with:
```bash
pip install numpy matplotlib torch scikit-learn scipy
```

## Learning Path

**Recommended order:**
1. `01_vector_matrix_ops.py` - Linear algebra fundamentals
2. `02_svd_pca.py` - Matrix factorization and dimensionality reduction
3. `03_matrix_calculus_autograd.py` - Gradients and automatic differentiation
4. `04_norms_regularization.py` - Norms and regularization
5. `05_gradient_descent.py` - Optimization algorithms
6. `06_optimization_constrained.py` - Constrained optimization
7. `07_probability_distributions.py` - Probability and inference
8. `08_information_theory.py` - Information theory for ML
9. `09_mcmc_sampling.py` - Sampling techniques
10. `10_tensor_ops_einsum.py` - Tensor operations and einsum
11. `11_graph_spectral.py` - Graph theory and spectral methods
12. `12_attention_math.py` - Attention mechanism mathematics

## Output Files

Running the scripts will generate PNG visualizations:
- `vector_ops.png` - Vector addition and transformations
- `pca_visualization.png` - PCA projection and explained variance
- `gradient_descent.png` - Optimization trajectory
- `unit_balls.png` - Norm visualizations
- `regularization_comparison.png` - Regularization effects
- `gradient_optimizers.png` - Optimizer comparison
- `constrained_optimization.png` - Constrained optimization
- `probability_distributions.png` - Distribution PDFs
- `information_theory.png` - Entropy and KL divergence
- `mcmc_sampling.png` - Sampling methods
- `graph_spectral.png` - Graph spectral analysis
- `attention_weights.png` - Attention weight heatmaps

## Notes

- All examples include extensive comments explaining mathematical concepts
- Print statements show intermediate results for learning
- Visualizations are automatically saved to the current directory
- Each file has an `if __name__ == "__main__":` block, so it can be run directly or imported as a module