lectures.alex.balgavy.eu

Lecture notes from university.
git clone git://git.alex.balgavy.eu/lectures.alex.balgavy.eu.git

commit 53950e129904c31cb495eada32dd855d46f30003
parent 6c6eb05b4b5b8495d796de85bc7cfcfd98959c42
Author: Alex Balgavy <alex@balgavy.eu>
Date:   Mon,  2 Nov 2020 22:21:20 +0100

Linear algebra notes

Diffstat:
M content/_index.md | 4 ++--
A content/lin-algebra-notes/_index.md | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/addjs.sh | 10 ----------
D content/lin-algebra-notes/applications-to-computer-graphics.html | 67 -------------------------------------------------------------------
R content/lin-algebra-notes/img/graphics-coordinate-matrix.png -> content/lin-algebra-notes/applications-to-computer-graphics/graphics-coordinate-matrix.png | 0
A content/lin-algebra-notes/applications-to-computer-graphics/index.md | 36 ++++++++++++++++++++++++++++++++++++
R content/lin-algebra-notes/img/perspective-projection-diagram.png -> content/lin-algebra-notes/applications-to-computer-graphics/perspective-projection-diagram.png | 0
R content/lin-algebra-notes/img/typical-transformations.png -> content/lin-algebra-notes/applications-to-computer-graphics/typical-transformations.png | 0
R content/lin-algebra-notes/img/vector-letter-n.png -> content/lin-algebra-notes/applications-to-computer-graphics/vector-letter-n.png | 0
D content/lin-algebra-notes/eigenvectors-eigenvalues.html | 145 -------------------------------------------------------------------------------
R content/lin-algebra-notes/img/determinant-geometric-diagram.png -> content/lin-algebra-notes/eigenvectors-eigenvalues/determinant-geometric-diagram.png | 0
A content/lin-algebra-notes/eigenvectors-eigenvalues/index.md | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/index.html | 188 -------------------------------------------------------------------------------
D content/lin-algebra-notes/introduction.html | 266 -------------------------------------------------------------------------------
R content/lin-algebra-notes/img/graph-example.png -> content/lin-algebra-notes/introduction/graph-example.png | 0
A content/lin-algebra-notes/introduction/index.md | 128 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/linear-transformations.html | 126 -------------------------------------------------------------------------------
R content/lin-algebra-notes/img/geo-contract-shears.png -> content/lin-algebra-notes/linear-transformations/geo-contract-shears.png | 0
R content/lin-algebra-notes/img/geo-projections.png -> content/lin-algebra-notes/linear-transformations/geo-projections.png | 0
R content/lin-algebra-notes/img/geo-reflections.png -> content/lin-algebra-notes/linear-transformations/geo-reflections.png | 0
A content/lin-algebra-notes/linear-transformations/index.md | 47 +++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/matrix-operations.html | 183 -------------------------------------------------------------------------------
A content/lin-algebra-notes/matrix-operations.md | 85 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/orthogonality-least-squares.html | 199 -------------------------------------------------------------------------------
A content/lin-algebra-notes/orthogonality-least-squares.md | 104 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/solution-sets-of-linear-systems.html | 127 -------------------------------------------------------------------------------
A content/lin-algebra-notes/solution-sets-of-linear-systems.md | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/src/applications-to-computer-graphics.wiki | 31 -------------------------------
D content/lin-algebra-notes/src/eigenvectors-eigenvalues.wiki | 59 -----------------------------------------------------------
D content/lin-algebra-notes/src/img | 2 --
D content/lin-algebra-notes/src/index.wiki | 50 --------------------------------------------------
D content/lin-algebra-notes/src/introduction.wiki | 116 -------------------------------------------------------------------------------
D content/lin-algebra-notes/src/linear-transformations.wiki | 42 ------------------------------------------
D content/lin-algebra-notes/src/matrix-operations.wiki | 81 -------------------------------------------------------------------------------
D content/lin-algebra-notes/src/orthogonality-least-squares.wiki | 97 -------------------------------------------------------------------------------
D content/lin-algebra-notes/src/solution-sets-of-linear-systems.wiki | 43 -------------------------------------------
D content/lin-algebra-notes/src/symmetric-matrices.wiki | 27 ---------------------------
D content/lin-algebra-notes/src/vector-spaces.wiki | 42 ------------------------------------------
D content/lin-algebra-notes/style.css | 38 --------------------------------------
D content/lin-algebra-notes/symmetric-matrices.html | 82 -------------------------------------------------------------------------------
A content/lin-algebra-notes/symmetric-matrices.md | 32 ++++++++++++++++++++++++++++++++
D content/lin-algebra-notes/vector-spaces.html | 96 -------------------------------------------------------------------------------
A content/lin-algebra-notes/vector-spaces.md | 47 +++++++++++++++++++++++++++++++++++++++++++++++
M templates/index.html | 4 ++++
44 files changed, 651 insertions(+), 2119 deletions(-)

diff --git a/content/_index.md b/content/_index.md @@ -2,7 +2,7 @@ title = "Alex's university course notes" +++ -# MSc Computer Systems Security (VU Amsterdam) +# MSc Computer Systems Security (VU Amsterdam & UVA) --- ## Subject notes: Year 1 @@ -25,7 +25,7 @@ title = "Alex's university course notes" * [Statistical Methods](https://thezeroalpha.github.io/stats-notes) * [Operating Systems](https://thezeroalpha.github.io/os-notes) * [Intelligent Systems](is-notes/) -* [Linear Algebra](https://thezeroalpha.github.io/lin-algebra-notes) +* [Linear Algebra](lin-algebra-notes/) * [Software Design](https://thezeroalpha.github.io/softdesign-notes) * [Logic & Modelling](https://thezeroalpha.github.io/logic-modelling-notes) * [Databases](databases-notes) diff --git a/content/lin-algebra-notes/_index.md b/content/lin-algebra-notes/_index.md @@ -0,0 +1,54 @@ ++++ +title = 'Linear Algebra' ++++ + +# Linear Algebra +If you need help with any of the topics, check out [PatrickJMT on Youtube](https://www.youtube.com/user/patrickJMT). He has some of the best math videos on the internet. + +- [Introduction](introduction) + - [Linear Equations](introduction#linear-equations) + - [Matrix notation](introduction#matrix-notation) + - [Reducing a matrix](introduction#reducing-a-matrix) + - [Vectors](introduction#vectors) +- [Solution sets of linear systems](solution-sets-of-linear-systems) + - [Homogeneous linear systems](solution-sets-of-linear-systems#homogeneous-linear-systems) + - [Parametric vector form](solution-sets-of-linear-systems#parametric-vector-form) + - [Linear independence](solution-sets-of-linear-systems#linear-independence) +- [Linear transformations](linear-transformations) +- [Matrix operations](matrix-operations) + - [Sums and scalar multiples](matrix-operations#sums-and-scalar-multiples) + - [Powers of a matrix](matrix-operations#powers-of-a-matrix) + - [Transpose of a matrix](matrix-operations#transpose-of-a-matrix) + - [Inverse of a matrix](matrix-operations#inverse-of-a-matrix) + - [Elementary matrices](matrix-operations#elementary-matrices) +- [Applications to computer graphics](applications-to-computer-graphics) + - [Homogeneous coordinates](applications-to-computer-graphics#homogeneous-coordinates) + - [2D](applications-to-computer-graphics#2d) + - [3D](applications-to-computer-graphics#3d) + - [Composite transformations](applications-to-computer-graphics#composite-transformations) + - [Perspective projections](applications-to-computer-graphics#perspective-projections) +- [Vector spaces](vector-spaces) + - [Column space and null space of a matrix](vector-spaces#column-space-and-null-space-of-a-matrix) + - [Basis for a subspace](vector-spaces#basis-for-a-subspace) + - [Coordinates](vector-spaces#coordinates) + - [Dimension of a subspace](vector-spaces#dimension-of-a-subspace) +- [Eigenvectors & eigenvalues](eigenvectors-eigenvalues) + - [Determinant](eigenvectors-eigenvalues#determinant) + - [Similarity](eigenvectors-eigenvalues#similarity) + - [Diagonalization](eigenvectors-eigenvalues#diagonalization) +- [Orthogonality & least squares](orthogonality-least-squares) + - [Inner (dot) product & uses](orthogonality-least-squares#inner-dot-product-uses) + - [Length of a vector](orthogonality-least-squares#length-of-a-vector) + - [Distance between vectors](orthogonality-least-squares#distance-between-vectors) + - [Orthogonal complement](orthogonality-least-squares#orthogonal-complement) + - [Orthogonal sets](orthogonality-least-squares#orthogonal-sets) + - [Orthogonal 
projections](orthogonality-least-squares#orthogonal-projections) + - [Orthogonal decomposition](orthogonality-least-squares#orthogonal-decomposition) + - [Best approximation](orthogonality-least-squares#best-approximation) + - [When basis for W is an orthonormal set](orthogonality-least-squares#when-basis-for-w-is-an-orthonormal-set) + - [Gram-Schmidt process](orthogonality-least-squares#gram-schmidt-process) + - [QR factorization](orthogonality-least-squares#qr-factorization) + - [Least-squares problems](orthogonality-least-squares#least-squares-problems) +- [Symmetric matrices](symmetric-matrices) + - [Diagonalization of symmetric matrices](symmetric-matrices#diagonalization-of-symmetric-matrices) + - [Singular value decomposition](symmetric-matrices#singular-value-decomposition) diff --git a/content/lin-algebra-notes/addjs.sh b/content/lin-algebra-notes/addjs.sh @@ -1,10 +0,0 @@ -#!/usr/bin/env bash -if ! command -v vim; then - echo "Don't know how to do this without vim yet." - exit 1 -elif [ $# -ne 1 ]; then - echo "Need to pass the file as argument." - exit 1 -fi - -vim -c 'v/<head> <script/ s#<head>#<head> <script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>' -c 'wq' $1 diff --git a/content/lin-algebra-notes/applications-to-computer-graphics.html b/content/lin-algebra-notes/applications-to-computer-graphics.html @@ -1,67 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>applications-to-computer-graphics</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Applications to computer graphics"><h2 id="Applications to computer graphics">Applications to computer graphics</h2></div> -<p> -graphics are stored in a matrix, such as this: -</p> - -<p> -<img src="img/graphics-coordinate-matrix.png" alt="Graphics coordinate matrix" /> <img src="img/vector-letter-n.png" alt="Vector letter N" /> -</p> - -<div id="Applications to computer graphics-Homogeneous coordinates"><h3 id="Homogeneous coordinates">Homogeneous coordinates</h3></div> -<div id="Applications to computer graphics-Homogeneous coordinates-2D"><h4 id="2D">2D</h4></div> -<p> -each point (x, y) in 2D can be identified with point (x, y, 1) in 3D. so we say that (x, y) has homogeneous coordinates (x, y, 1). -</p> - -<p> -e.g. translation is not a linear transformation. 
but \((x, y) \mapsto (x+h, y+k)\) can be written in homogeneous coordinates as \((x, y, 1) \mapsto (x+h, y+k, 1)\), and can be computed using matrix multiplication: -</p> - -<p> -\(\begin{bmatrix} 1 &amp; 0 &amp; h\\ 0 &amp; 1 &amp; k\\ 0 &amp; 0 &amp; 1\end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} x+h \\ y+k \\ 1 \end{bmatrix}\) -</p> - -<div id="Applications to computer graphics-Homogeneous coordinates-3D"><h4 id="3D">3D</h4></div> -<p> -(X, Y, Z, H) are homogeneous coordinates for (x, y, z) if H ≠ 0 and -</p> - -<p> -\(x = \frac{X}{H}, \quad y = \frac{Y}{H}, \quad \text{and} \; z = \frac{Z}{H}\) -</p> - -<div id="Applications to computer graphics-Matrices for typical transformations"><h3 id="Matrices for typical transformations">Matrices for typical transformations</h3></div> -<p> -<img src="img/typical-transformations.png" alt="Typical transformations" /> -</p> - -<div id="Applications to computer graphics-Composite transformations"><h3 id="Composite transformations">Composite transformations</h3></div> -<p> -when you need two or more basic transformations, such a composite transformation is a matrix multiplication. -</p> - -<p> -matrices for new transformations are "prepended" in multiplication. so if you're rotating, then translating, the calculation is <code>[matrix for translation][matrix for rotation]</code>. -</p> - -<div id="Applications to computer graphics-Perspective projections"><h3 id="Perspective projections">Perspective projections</h3></div> -<p> -maps each point (x, y, z) onto an image point (x*, y*, 0) so that two points and eye position (center of projection) are on a line. -</p> - -<p> -<img src="img/perspective-projection-diagram.png" alt="Perspective projection diagram" /> -</p> - -</body> -</html> diff --git a/content/lin-algebra-notes/img/graphics-coordinate-matrix.png b/content/lin-algebra-notes/applications-to-computer-graphics/graphics-coordinate-matrix.png Binary files differ. diff --git a/content/lin-algebra-notes/applications-to-computer-graphics/index.md b/content/lin-algebra-notes/applications-to-computer-graphics/index.md @@ -0,0 +1,36 @@ ++++ +title = 'Applications to computer graphics' +template = 'page-math.html' ++++ + +# Applications to computer graphics +graphics are stored in a matrix, such as this: + +![Graphics coordinate matrix](graphics-coordinate-matrix.png) ![Vector letter N](vector-letter-n.png) + +## Homogeneous coordinates +### 2D +each point (x, y) in 2D can be identified with point (x, y, 1) in 3D. so we say that (x, y) has homogeneous coordinates (x, y, 1). + +e.g. translation is not a linear transformation. but $(x, y) \mapsto (x+h, y+k)$ can be written in homogeneous coordinates as $(x, y, 1) \mapsto (x+h, y+k, 1)$, and can be computed using matrix multiplication: + +$\begin{bmatrix} 1 & 0 & h\\\\ 0 & 1 & k\\\\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} x \\\\ y \\\\ 1 \end{bmatrix} = \begin{bmatrix} x+h \\\\ y+k \\\\ 1 \end{bmatrix}$ + +### 3D +(X, Y, Z, H) are homogeneous coordinates for (x, y, z) if H ≠ 0 and + +$x = \frac{X}{H}, \quad y = \frac{Y}{H}, \quad \text{and} \\; z = \frac{Z}{H}$ + +## Matrices for typical transformations +![Typical transformations](typical-transformations.png) + +## Composite transformations +when you need two or more basic transformations, such a composite transformation is a matrix multiplication. + +matrices for new transformations are "prepended" in multiplication. so if you're rotating, then translating, the calculation is `[matrix for translation][matrix for rotation]`. 
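+
+As a quick numerical check, here's a minimal numpy sketch of composing homogeneous-coordinate transformations (numpy, the angle, and the offsets h = 3, k = 5 are assumptions for illustration):
+
+```python
+import numpy as np
+
+theta = np.pi / 2                    # rotate 90 degrees
+rotate = np.array([[np.cos(theta), -np.sin(theta), 0],
+                   [np.sin(theta),  np.cos(theta), 0],
+                   [0,              0,             1]])
+translate = np.array([[1, 0, 3],     # h = 3
+                      [0, 1, 5],     # k = 5
+                      [0, 0, 1]])
+
+p = np.array([1, 0, 1])              # the point (1, 0) in homogeneous coordinates
+
+# rotating, then translating: the translation matrix is prepended (leftmost)
+print(translate @ rotate @ p)        # [3. 6. 1.], i.e. the point (3, 6)
+print(rotate @ translate @ p)        # [-5. 4. 1.]: a different point, so order matters
+```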
+ +## Perspective projections +maps each point (x, y, z) onto an image point (x*, y*, 0) so that two points and eye position (center of projection) are on a line. + +![Perspective projection diagram](perspective-projection-diagram.png) + diff --git a/content/lin-algebra-notes/img/perspective-projection-diagram.png b/content/lin-algebra-notes/applications-to-computer-graphics/perspective-projection-diagram.png Binary files differ. diff --git a/content/lin-algebra-notes/img/typical-transformations.png b/content/lin-algebra-notes/applications-to-computer-graphics/typical-transformations.png Binary files differ. diff --git a/content/lin-algebra-notes/img/vector-letter-n.png b/content/lin-algebra-notes/applications-to-computer-graphics/vector-letter-n.png Binary files differ. diff --git a/content/lin-algebra-notes/eigenvectors-eigenvalues.html b/content/lin-algebra-notes/eigenvectors-eigenvalues.html @@ -1,145 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>eigenvectors-eigenvalues</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Eigenvectors &amp; eigenvalues"><h2 id="Eigenvectors &amp; eigenvalues">Eigenvectors &amp; eigenvalues</h2></div> -<p> -let A be n × n, \(x \in \Re^n\) is an eigenvector of A if x ≠ 0 and \(\exists \lambda \in \Re\) such that \(Ax = \lambda x\) -</p> - -<p> -x is eigenvector with corresponding eigenvalue λ. -</p> - -<p> -Is a given vector \(u \in \Re^n\) an eigenvector of a given A (n × n)? -</p> -<ul> -<li> -Do \(Au\), check if result is a multiple of u. - -</ul> - -<p> -Is a given λ an eigenvalue of A? -</p> -<ul> -<li> -\(\exists x \ne 0\) such that \(Ax - \lambda x = 0 \leftrightarrow (A-\lambda I_n)x = 0\) with nontrivial solutions. - -</ul> - -<p> -The solution set of \((A-\lambda I_n)x = 0\) is the eigenspace corresponding to λ. -</p> - -<p> -How to find a basis for the eigenspace of a given λ? -</p> -<ol> -<li> -calculate matrix for \(A-\lambda I_n\) where n is the number of rows or columns of A - -<li> -reduce matrix to reduced echelon form - -<li> -express solutions in parametric form (basic variables in terms of free variables) - -<li> -basis for eigenspace is the set of the coefficients - -</ol> - -<p> -If λ = 0, then Ax = 0 has a nontrivial solution (and A is <em>not</em> invertible). -</p> - -<p> -Eigenvectors corresponding to distinct eigenvalues are linearly independent. -</p> - -<div id="Eigenvectors &amp; eigenvalues-Determinant"><h3 id="Determinant">Determinant</h3></div> -<p> -Geometric interpretation: let \(A = [a_1 \; a_2]\). then the determinant (absolute value) is the surface area (or volume in 3D): -</p> - -<p> -<img src="img/determinant-geometric-diagram.png" alt="Determinant geometric diagram" /> -</p> - -<p> -Let A (n × n). A ~ U without scaling and using <em>r</em> row interchanges. then \(\det A = (-1)^r u_{11} \times \dots \times u_{nn}\) -</p> - -<p> -A is invertible iff \(\det A \ne 0\) -</p> - -<p> -\(\det AB = (\det A)(\det B)\) -</p> - -<p> -λ is an eigenvalue of A iff \(\det (A-\lambda I) = 0\) (the characteristic equation of A) -</p> - -<p> -The eigenvalues of A (n × n) are the solutions for λ. Multiplicity is the number of solutions for λ. 
-</p> - -<div id="Eigenvectors &amp; eigenvalues-Similarity"><h3 id="Similarity">Similarity</h3></div> -<p> -given A and B (n × n), A is similar to B if ∃p s.t. \(A = PBP^{-1}\) -</p> - -<p> -If A and B are similar, then they have the same characteristic polynomials (and the same eigenvalues with the same multiplicities) -</p> - -<div id="Eigenvectors &amp; eigenvalues-Diagonalization"><h3 id="Diagonalization">Diagonalization</h3></div> -<p> -A is diagonalizable if A is similar to a diagonal matrix. -</p> - -<p> -Diagonalization Theorem: A (n × n) is diagonalizable iff A has n linearly independent eigenvectors (the eigenbasis for \(\Re^n\)) -</p> - -<p> -\(A = P D P^{-1} \leftrightarrow\) columns of P are linearly independent eigenvectors, and the diagonal values of D are the eigenvalues corresponding to the eigenvectors in P. -</p> - -<p> -How to diagonalize a matrix: -</p> -<ol> -<li> -Find eigenvalues of A - -<li> -Find n = λ linearly independent eigenvectors - -<li> -Construct \(P = \begin{bmatrix} p_1 &amp; p_2 &amp; \ldots &amp; p_n \end{bmatrix}\) - -<li> -Construct D from the corresponding eigenvalues on the diagonal. Order of eigenvalues must match the order for columns of P. - -<li> -Check \(A = p D p^{-1} \leftrightarrow Ap = pD\) (if p is invertible) - -</ol> - -<p> -If A (n × n) has n distinct eigenvalues, it is diagonalizable. -</p> - -</body> -</html> diff --git a/content/lin-algebra-notes/img/determinant-geometric-diagram.png b/content/lin-algebra-notes/eigenvectors-eigenvalues/determinant-geometric-diagram.png Binary files differ. diff --git a/content/lin-algebra-notes/eigenvectors-eigenvalues/index.md b/content/lin-algebra-notes/eigenvectors-eigenvalues/index.md @@ -0,0 +1,64 @@ ++++ +title = 'Eigenvectors & eigenvalues' +template = 'page-math.html' ++++ + +# Eigenvectors & eigenvalues +let A be n × n, $x \in \Re^n$ is an eigenvector of A if x ≠ 0 and $\exists \lambda \in \Re$ such that $Ax = \lambda x$ + +x is eigenvector with corresponding eigenvalue λ. + +Is a given vector $u \in \Re^n$ an eigenvector of a given A (n × n)? +* Do $Au$, check if result is a multiple of u. + +Is a given λ an eigenvalue of A? +* $\exists x \ne 0$ such that $Ax - \lambda x = 0 \leftrightarrow (A-\lambda I_n)x = 0$ with nontrivial solutions. + +The solution set of $(A-\lambda I_n)x = 0$ is the eigenspace corresponding to λ. + +How to find a basis for the eigenspace of a given λ? +1. calculate matrix for $A-\lambda I_n$ where n is the number of rows or columns of A +2. reduce matrix to reduced echelon form +3. express solutions in parametric form (basic variables in terms of free variables) +4. basis for eigenspace is the set of the coefficients + +If λ = 0, then Ax = 0 has a nontrivial solution (and A is _not_ invertible). + +Eigenvectors corresponding to distinct eigenvalues are linearly independent. + +## Determinant +Geometric interpretation: let $A = [a_1 \\; a_2]$. then the determinant (absolute value) is the surface area (or volume in 3D): + +![Determinant geometric diagram](determinant-geometric-diagram.png) + +Let A (n × n). A ~ U without scaling and using _r_ row interchanges. then $\det A = (-1)^r u_{11} \times \dots \times u_{nn}$ + +A is invertible iff $\det A \ne 0$ + +$\det AB = (\det A)(\det B)$ + +λ is an eigenvalue of A iff $\det (A-\lambda I) = 0$ (the characteristic equation of A) + +The eigenvalues of A (n × n) are the solutions for λ. Multiplicity is the number of solutions for λ. + +## Similarity +given A and B (n × n), A is similar to B if ∃p s.t. 
$A = PBP^{-1}$ + +If A and B are similar, then they have the same characteristic polynomials (and the same eigenvalues with the same multiplicities) + +## Diagonalization +A is diagonalizable if A is similar to a diagonal matrix. + +Diagonalization Theorem: A (n × n) is diagonalizable iff A has n linearly independent eigenvectors (the eigenbasis for $\Re^n$) + +$A = P D P^{-1} \leftrightarrow$ columns of P are linearly independent eigenvectors, and the diagonal values of D are the eigenvalues corresponding to the eigenvectors in P. + +How to diagonalize a matrix: +1. Find eigenvalues of A +2. Find n linearly independent eigenvectors +3. Construct $P = \begin{bmatrix} p_1 & p_2 & \ldots & p_n \end{bmatrix}$ +4. Construct D from the corresponding eigenvalues on the diagonal. Order of eigenvalues must match the order for columns of P. +5. Check $A = P D P^{-1} \leftrightarrow AP = PD$ (if P is invertible) + +If A (n × n) has n distinct eigenvalues, it is diagonalizable. + diff --git a/content/lin-algebra-notes/index.html b/content/lin-algebra-notes/index.html @@ -1,188 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>index</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Linear Algebra"> -<h1 id="Linear Algebra">Linear Algebra</h1> -<h3 class="name">Alex Balgavy</h3> -</div> -<p> -If you need help with any of the topics, check out <a href="https://www.youtube.com/user/patrickJMT">PatrickJMT on Youtube</a>. He has some of the best math videos on the internet. -</p> - -<ul> -<li> -<a href="introduction.html#Introduction">Introduction</a> - -<ul> -<li> -<a href="introduction.html#Introduction-Linear Equations">Linear Equations</a> - -<li> -<a href="introduction.html#Introduction-Matrix notation">Matrix notation</a> - -<li> -<a href="introduction.html#Introduction-Reducing a matrix">Reducing a matrix</a> - -<li> -<a href="introduction.html#Introduction-Vectors">Vectors</a> - -</ul> -<li> -<a href="solution-sets-of-linear-systems.html#Solution sets of linear systems">Solution sets of linear systems</a> - -<ul> -<li> -<a href="solution-sets-of-linear-systems.html#Solution sets of linear systems-Homogeneous linear systems">Homogeneous linear systems</a> - -<li> -<a href="solution-sets-of-linear-systems.html#Solution sets of linear systems-Parametric vector form">Parametric vector form</a> - -<li> -<a href="solution-sets-of-linear-systems.html#Solution sets of linear systems-Linear independence">Linear independence</a> - -</ul> -<li> -<a href="linear-transformations.html#Linear transformations">Linear transformations</a> - -<li> -<a href="matrix-operations.html#Matrix operations">Matrix operations</a> - -<ul> -<li> -<a href="matrix-operations.html#Matrix operations-Sums and scalar multiples">Sums and scalar multiples</a> - -<li> -<a href="matrix-operations.html#Matrix operations-Powers of a matrix">Powers of a matrix</a> - -<li> -<a href="matrix-operations.html#Matrix operations-Transpose of a matrix">Transpose of a matrix</a> - -<li> -<a href="matrix-operations.html#Matrix operations-Inverse of a matrix">Inverse of a matrix</a> - -<li> -<a href="matrix-operations.html#Matrix operations-Elementary matrices">Elementary matrices</a> - -</ul> -<li> -<a href="applications-to-computer-graphics.html#Applications to computer graphics">Applications 
to computer graphics</a> - -<ul> -<li> -<a href="applications-to-computer-graphics.html#Applications to computer graphics-Homogeneous coordinates">Homogeneous coordinates</a> - -<ul> -<li> -<a href="applications-to-computer-graphics.html#Applications to computer graphics-Homogeneous coordinates-2D">2D</a> - -<li> -<a href="applications-to-computer-graphics.html#Applications to computer graphics-Homogeneous coordinates-3D">3D</a> - -</ul> -<li> -<a href="applications-to-computer-graphics.html#Applications to computer graphics-Composite transformations">Composite transformations</a> - -<li> -<a href="applications-to-computer-graphics.html#Applications to computer graphics-Perspective projections">Perspective projections</a> - -</ul> -<li> -<a href="vector-spaces.html#Vector spaces">Vector spaces</a> - -<ul> -<li> -<a href="vector-spaces.html#Vector spaces-Column space and null space of a matrix">Column space and null space of a matrix</a> - -<li> -<a href="vector-spaces.html#Vector spaces-Basis for a subspace">Basis for a subspace</a> - -<li> -<a href="vector-spaces.html#Vector spaces-Coordinates">Coordinates</a> - -<li> -<a href="vector-spaces.html#Vector spaces-Dimension of a subspace">Dimension of a subspace</a> - -</ul> -<li> -<a href="eigenvectors-eigenvalues.html#Eigenvectors &amp; eigenvalues">Eigenvectors &amp; eigenvalues</a> - -<ul> -<li> -<a href="eigenvectors-eigenvalues.html#Eigenvectors &amp; eigenvalues-Determinant">Determinant</a> - -<li> -<a href="eigenvectors-eigenvalues.html#Eigenvectors &amp; eigenvalues-Similarity">Similarity</a> - -<li> -<a href="eigenvectors-eigenvalues.html#Eigenvectors &amp; eigenvalues-Diagonalization">Diagonalization</a> - -</ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares">Orthogonality &amp; least squares</a> - -<ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Inner (dot) product &amp; uses">Inner (dot) product &amp; uses</a> - -<ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Inner (dot) product &amp; uses-Length of a vector">Length of a vector</a> - -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Inner (dot) product &amp; uses-Distance between vectors">Distance between vectors</a> - -</ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Orthogonal complement">Orthogonal complement</a> - -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Orthogonal sets">Orthogonal sets</a> - -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Orthogonal projections">Orthogonal projections</a> - -<ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Orthogonal projections-Orthogonal decomposition">Orthogonal decomposition</a> - -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Orthogonal projections-Best approximation">Best approximation</a> - -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Orthogonal projections-When basis for W is an orthonormal set">When basis for W is an orthonormal set</a> - -</ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Gram-Schmidt process">Gram-Schmidt process</a> - -<ul> -<li> -<a href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Gram-Schmidt process-QR factorization">QR factorization</a> - -</ul> -<li> -<a 
href="orthogonality-least-squares.html#Orthogonality &amp; least squares-Least-squares problems">Least-squares problems</a> - -</ul> -<li> -<a href="symmetric-matrices.html#Symmetric matrices">Symmetric matrices</a> - -<ul> -<li> -<a href="symmetric-matrices.html#Symmetric matrices-Diagonalization of symmetric matrices">Diagonalization of symmetric matrices</a> - -<li> -<a href="symmetric-matrices.html#Symmetric matrices-Singular value decomposition">Singular value decomposition</a> - -</ul> -</ul> - -</body> -</html> diff --git a/content/lin-algebra-notes/introduction.html b/content/lin-algebra-notes/introduction.html @@ -1,266 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>introduction</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Introduction"><h2 id="Introduction">Introduction</h2></div> -<div id="Introduction-Linear Equations"><h3 id="Linear Equations">Linear Equations</h3></div> -<p> -"the study of linear equations" -</p> - -<p> -a linear equation in the variables \(x_1, \dots, x_n\) has the form \(a_1 x_1+\dots+a_n x_n = b\), with \(a_1, \dots, a_n\) being the <em>coefficients</em> -</p> - -<p> -geometric interpretation: -</p> - -\[ -\begin{alignat*}{3} -&amp;n=1\qquad &amp;&amp;a_1 x_1 = b \longrightarrow x_1 = \frac{b}{a_1}\qquad &amp;&amp;\text{(point on a line in $\Re$)}\\ -&amp;n=2\qquad &amp;&amp;a_1 x_1 + a_2 x_2 = b \longrightarrow x_2 = \frac{b}{a_2} - \frac{a_1}{a_2}\qquad &amp;&amp;\text{(line in a plane in $\Re^2$)}\\ -&amp;n=3\qquad &amp;&amp;a_1 x_1 + a_2 x_2 + a_3 x_3 = b\qquad &amp;&amp;\text{(planes in 3D space, in $\Re^3$)} -\end{alignat*} -\] - -<p> -in general, \(n-1\)-dimensional planes in n-dimensional space -</p> - -<p> -system of linear equations \(x_1, \dots, x_n\) is a collection of linear equations in these variables. -</p> - -<p> -\(x_1 - 2x_2 = -1\) -</p> - -<p> -\(-x_1 + 3x_2 = 3\) -</p> - -<p> -If you graph them, you get this: -</p> - -<p> -<img src="img/graph-example.png" alt="System of equations graph" /> -</p> - -<p> -the solution is the intersection. -</p> - -<p> -a system of linear equations has: -</p> -<ol> -<li> -no solutions (inconsistent) -- e.g. parallel lines - -<li> -exactly 1 solution (consistent) - -<li> -infinitely many solutions (consistent) - e.g. the same line twice - -</ol> - -<p> -two linear systems are "equivalent" if they have the same solutions. -</p> - -<div id="Introduction-Matrix notation"><h3 id="Matrix notation">Matrix notation</h3></div> -<table> -<tr> -<th> -Equation -</th> -<th> -(Augmented) coefficient matrix notation -</th> -</tr> -<tr> -<td> -\(\begin{alignat*}{6} &amp;x_1 &amp;&amp;-&amp;&amp;2x_2 &amp;&amp;+&amp;&amp;x_3 &amp;&amp;= 0\\ &amp; &amp;&amp; &amp;&amp;2x_2 &amp;&amp;-&amp;&amp;8x_3 &amp;&amp;= 8\\ &amp;5x_1 &amp;&amp; &amp;&amp; &amp;&amp;-&amp;&amp;5x_3 &amp;&amp;= 10\end{alignat*}\) -</td> -<td> -\(\begin{bmatrix} 1 &amp; -2 &amp; 1 &amp; 0\\ 0 &amp; 2 &amp; -8 &amp; 8\\ 5 &amp; 0 &amp; -5 &amp; 10 \end{bmatrix}\) -</td> -</tr> -</table> - -<p> -the strategy to solve is to replace the system with an equivalent system that is easier to solve. 
-</p> - -<p> -elementary row operations: -</p> -<ol> -<li> -replacement: add rows - -<li> -scaling: multiply by constant (non-zero scalar) - -<li> -interchange: swap two rows - -</ol> - -<p> -all of these are reversible &amp; don't change the solution set. -</p> - -<p> -Matrices A and B are equivalent (\(A \sim B\)) if there is a sequence of elementary operations to transform A to B. -</p> - -<p> -If augmented matrices of two systems are row-equivalent, then the systems are equivalent. -</p> - -<p> -Matrix A is in echelon form if: -</p> -<ol> -<li> -zero rows are below non-zero rows - -<li> -the leading entry of a row is contained in a column that is to the left of the leading entry of the row below it. - -</ol> - -<p> -A is in reduced echelon form if: -</p> -<ol> -<li> -A is in echelon form - -<li> -all leading entries are 1 - -<li> -the leading entry is the only non-zero entry in that column - -</ol> - -<div id="Introduction-Reducing a matrix"><h3 id="Reducing a matrix">Reducing a matrix</h3></div> -<p> -The reduced echelon form of a matrix is unique. -</p> - -<p> -every matrix is row-equivalent to a unique reduced echelon matrix. -</p> - -<p> -the positions of the leading entries in an echelon matrix are unique -</p> - -<p> -\(\begin{bmatrix} \textbf{1} &amp; * &amp; * &amp; *\\ 0 &amp; 0 &amp; \textbf{1} &amp; *\\ 0 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 0 &amp; 0 &amp; 0 \end{bmatrix}\) -</p> - -<p> -the values in bold are pivot positions. the columns containing those values are pivot columns. -</p> - -<p> -Row reduction algorithm: -</p> -<ol> -<li> -Start with leftmost non-zero column (pivot column) - -<li> -Select a non-zero entry as pivot and move it to the pivot position. - -<li> -Create zeros below the pivot position. - -<li> -Ignore the row containing the pivot position &amp; repeat steps 1-3 until solved. The matrix will be in echelon form. - -<li> -Make pivot positions equal to 1, create zeros in all pivot columns. Start with the rightmost column. The matrix will be in reduced echelon form. - -</ol> - -<p> -Side note: a computer chooses as pivot the entry that's smallest in absolute value to minimize the round-off error. -</p> - -<p> -Basic variables correspond to pivot columns. Free variables correspond to non-pivot columns. You solve the equation by expressing basic variables in terms of free variables. -</p> - -<p> -The matrix can be written in parametric form, example with \(x_3\) being a free variable: -</p> - -<p> -\(\binom{x_1}{x_2} = \big\{ \binom{1}{4} + \binom{5}{-1} x_3 \;\rvert\; x_3 \in \Re \big\}\) -</p> - -<p> -if there are any free variables, there are infinite solutions. -</p> - -<div id="Introduction-Vectors"><h3 id="Vectors">Vectors</h3></div> -<p> -A vector is a line. If you have a vector in the form \(\begin{bmatrix} a\\b\end{bmatrix}\), you can draw it as an arrow from the origin ending at the point \((a,b)\). -</p> - -<p> -To add vectors, add the individual cells together. -</p> - -<p> -A vector equation \(a_1 x_1 + a_2 x_2 + \dots + a_n x_n = b\) has the same solution set as \(\begin{bmatrix} a_1 &amp; a_2 &amp; \dots &amp; a_n &amp; b \end{bmatrix}\). -</p> - -<p> -When asked whether \(b\) is in \(\text{Span} \{v_1, \dots, v_p\}\), you have to check whether the augmented matrix \(\begin{bmatrix} v_1 &amp; \dots &amp; v_p &amp; b \end{bmatrix}\) has a solution. -</p> - -<p> -\(b\) is a linear combination of \(A\) if \(Ax = b\) has a solution. -</p> - -<p> -The span is the set of all linear combinations of the vectors. 
-</p> - -<p> -To calculate \(Ax\), if the number of columns in A is the same as the number of rows in x, you can follow the definition: -</p> - -\[ -Ax = \begin{bmatrix} a_1 &amp; a_2 &amp; \dots &amp; a_n \end{bmatrix} \begin{bmatrix} x_1 \\ \dots \\ x_n \end{bmatrix} = x_1 a_1 + x_2 a_2 + \dots + x_n a_n -\] - -<p> -You also have the rules (matrix A, vectors u and v, scalar c): -</p> -<ul> -<li> -\(A(u+v) = Au + Av\) - -<li> -\(A(cu) = c(Au)\) - -</ul> - -</body> -</html> diff --git a/content/lin-algebra-notes/img/graph-example.png b/content/lin-algebra-notes/introduction/graph-example.png Binary files differ. diff --git a/content/lin-algebra-notes/introduction/index.md b/content/lin-algebra-notes/introduction/index.md @@ -0,0 +1,128 @@ ++++ +title = 'Introduction' +template = 'page-math.html' ++++ + +# Introduction +## Linear Equations +"the study of linear equations" + +a linear equation in the variables $x_1, \dots, x_n$ has the form $a_1 x_1+\dots+a_n x_n = b$, with $a_1, \dots, a_n$ being the _coefficients_ + +geometric interpretation: + +$ +\begin{alignedat}{3} +&n=1\qquad &&a_1 x_1 = b \longrightarrow x_1 = \frac{b}{a_1}\qquad &&\text{(point on a line in $\Re$)}\\\\\\\\ +&n=2\qquad &&a_1 x_1 + a_2 x_2 = b \longrightarrow x_2 = \frac{b}{a_2} - \frac{a_1}{a_2} x_1\qquad &&\text{(line in a plane in $\Re^2$)}\\\\\\\\ +&n=3\qquad &&a_1 x_1 + a_2 x_2 + a_3 x_3 = b\qquad &&\text{(planes in 3D space, in $\Re^3$)} +\end{alignedat} +$ + +in general, $n-1$-dimensional planes in n-dimensional space + +a system of linear equations in the variables $x_1, \dots, x_n$ is a collection of linear equations in these variables. + +$x_1 - 2x_2 = -1$ + +$-x_1 + 3x_2 = 3$ + +If you graph them, you get this: + +![System of equations graph](graph-example.png) + +the solution is the intersection. + +a system of linear equations has: +1. no solutions (inconsistent) -- e.g. parallel lines +2. exactly 1 solution (consistent) +3. infinitely many solutions (consistent) - e.g. the same line twice + +two linear systems are "equivalent" if they have the same solutions. + +## Matrix notation +<table> +<tr> +<td>Equation</td> +<td>(Augmented) coefficient matrix notation</td> +</tr> +<tr> +<td>$\begin{alignedat}{6} &x_1 &&-&&2x_2 &&+&&x_3 &&= 0\\\\ & && &&2x_2 &&-&&8x_3 &&= 8\\\\ &5x_1 && && &&-&&5x_3 &&= 10\end{alignedat}$</td> +<td>$\begin{bmatrix} 1 & -2 & 1 & 0\\\\ 0 & 2 & -8 & 8\\\\ 5 & 0 & -5 & 10 \end{bmatrix}$</td> +</tr> +</table> + +the strategy to solve is to replace the system with an equivalent system that is easier to solve. + +elementary row operations: +1. replacement: add a multiple of one row to another row +2. scaling: multiply a row by a non-zero scalar +3. interchange: swap two rows + +all of these are reversible & don't change the solution set. + +Matrices A and B are equivalent ($A \sim B$) if there is a sequence of elementary operations to transform A to B. + +If augmented matrices of two systems are row-equivalent, then the systems are equivalent. + +Matrix A is in echelon form if: +a) zero rows are below non-zero rows +b) the leading entry of a row is contained in a column that is to the left of the leading entry of the row below it. + +A is in reduced echelon form if: +a) A is in echelon form +b) all leading entries are 1 +c) the leading entry is the only non-zero entry in that column + +## Reducing a matrix +The reduced echelon form of a matrix is unique. + +every matrix is row-equivalent to a unique reduced echelon matrix. 
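+
+To make the elementary row operations concrete, here is a small numpy sketch (numpy is assumed purely for illustration) that reduces the augmented matrix of the example system from the Linear Equations section using only replacement operations:
+
+```python
+import numpy as np
+
+# augmented matrix for: x1 - 2*x2 = -1, -x1 + 3*x2 = 3
+M = np.array([[ 1., -2., -1.],
+              [-1.,  3.,  3.]])
+
+M[1] += M[0]        # replacement: R2 <- R2 + R1, eliminates x1 from row 2
+M[0] += 2 * M[1]    # replacement: R1 <- R1 + 2*R2, clears the entry above the pivot
+print(M)            # [[1. 0. 3.], [0. 1. 2.]] -- reduced echelon form
+# so x1 = 3 and x2 = 2: the intersection point of the two lines
+```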
+ +the positions of the leading entries in an echelon matrix are unique + +$\begin{bmatrix} \textbf{1} & * & * & *\\\\ 0 & 0 & \textbf{1} & *\\\\ 0 & 0 & 0 & 0\\\\ 0 & 0 & 0 & 0 \end{bmatrix}$ + +the values in bold are pivot positions. the columns containing those values are pivot columns. + +Row reduction algorithm: +1. Start with leftmost non-zero column (pivot column) +2. Select a non-zero entry as pivot and move it to the pivot position. +3. Create zeros below the pivot position. +4. Ignore the row containing the pivot position & repeat steps 1-3 until solved. The matrix will be in echelon form. +5. Make pivot positions equal to 1, create zeros in all pivot columns. Start with the rightmost column. The matrix will be in reduced echelon form. + +Side note: a computer chooses as pivot the entry that's largest in absolute value to minimize the round-off error. + +Basic variables correspond to pivot columns. Free variables correspond to non-pivot columns. You solve the equation by expressing basic variables in terms of free variables. + +The matrix can be written in parametric form, example with $x_3$ being a free variable: + +$\binom{x_1}{x_2} = \big \\{ \binom{1}{4} + \binom{5}{-1} x_3 \\;\rvert\; x_3 \in \Re \big \\}$ + +if there are any free variables, there are infinitely many solutions. + +## Vectors +A vector is a matrix with a single column. If you have a vector in the form $\begin{bmatrix} a\\\\b\end{bmatrix}$, you can draw it as an arrow from the origin ending at the point $(a,b)$. + +To add vectors, add the individual cells together. + +A vector equation $a_1 x_1 + a_2 x_2 + \dots + a_n x_n = b$ has the same solution set as the linear system whose augmented matrix is $\begin{bmatrix} a_1 & a_2 & \dots & a_n & b \end{bmatrix}$. + +When asked whether $b$ is in $\text{Span} \{v_1, \dots, v_p\}$, you have to check whether the system with augmented matrix $\begin{bmatrix} v_1 & \dots & v_p & b \end{bmatrix}$ has a solution. + +$b$ is a linear combination of the columns of $A$ if $Ax = b$ has a solution. + +The span is the set of all linear combinations of the vectors. + +To calculate $Ax$, if the number of columns in A is the same as the number of rows in x, you can follow the definition: + +$ +Ax = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix} \begin{bmatrix} x_1 \\\\ \dots \\\\ x_n \end{bmatrix} = x_1 a_1 + x_2 a_2 + \dots + x_n a_n +$ + +You also have the rules (matrix A, vectors u and v, scalar c): +* $A(u+v) = Au + Av$ +* $A(cu) = c(Au)$ + + diff --git a/content/lin-algebra-notes/linear-transformations.html b/content/lin-algebra-notes/linear-transformations.html @@ -1,126 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>linear-transformations</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Linear transformations"><h2 id="Linear transformations">Linear transformations</h2></div> -<p> -definitions: -</p> -<ul> -<li> -transformation, function, mapping: rule assigning to each vector in \(\Re^n\) a vector \(T(x)\) in \(\Re^m\) - -<li> -domain: set \(\Re^n\) - -<li> -codomain: set \(\Re^m\) - -<li> -image: vector T(x) - -<li> -range: set of all images T(x) - -</ul> - -<p> -a projection transformation happens if you go to a lower dimension (e.g. \(x_3\) becomes 0). a shear transformation happens if a 2D square is tilted sideways into a parallelogram. 
-</p> - -<p> -a transformation T is linear if: -</p> -<ol> -<li> -\(T(u + v) = T(u) + T(v)\) for all \(u,v \in \text{Domain}(T)\) - -<li> -\(T(cu) = cT(u)\) for all scalars c and all \(u \in \text{Domain}(T)\) - -</ol> - -<p> -linear transformations preserve operations of vector addition and scalar multiplication. -</p> - -<p> -if T is a linear transformation, then: -</p> -<ul> -<li> -\(T(0) = 0)\) - -<li> -\(T(cu + dv) = cT(u) + dT(v)\) - -<li> -\(T(c_1 v_2 + \dots + c_p v_p) = c_1 T(v_1) + \dots + c_p T(v_p)\) (superposition principle) - -</ul> - -<p> -given scalar r, and \(T: \Re^2 \rightarrow \Re^2\) by \(T(x) = rx\) -</p> -<ul> -<li> -contraction: when \(0 \leq r \leq 1\) - -<li> -dilation: when \(r &gt; 1\) - -</ul> - -<p> -every linear transformation \(\Re^n \rightarrow \Re^m\) is a matrix transformation \(x \mapsto Ax\). -</p> - -<p> -\(A = [[T(e_1) \dots T(e_n)]\), where \(e_j\) is the jth column of the identity matrix in \(\Re^n\) -</p> - -<p> -geometric linear transformations of \(\Re^2\): -</p> - -<p> -<img src="img/geo-reflections.png" alt="Reflections" /> <img src="img/geo-contract-shears.png" alt="Contractions/expansions and shears" /> <img src="img/geo-projections.png" alt="Projections" /> -</p> - -<p> -types of mappings: -</p> -<ul> -<li> -\(T: \Re^n \rightarrow \Re^m\) is 'onto' \(\Re^m\) if <em>each</em> b in \(\Re^m\) is the image of <em>at least one</em> x in \(\Re^n\). - -<li> -\(T: \Re^n \rightarrow \Re^m\) is one-to-one if <em>each</em> b in \(\Re^m\) is the image of <em>max one</em> x in \(\Re^n\). - -<ul> -<li> -so if \(T(x) = 0\) only has the trivial solution - -</ul> -</ul> - -<p> -for mapping \(T: \Re^n \rightarrow \Re^m\) and standard matrix \(A\): -</p> -<ul> -<li> -T maps \(\Re^n\) onto \(\Re^m\) iff columns of matrix span \(\Re^m\) - -<li> -T is one-to-one iff columns of matrix are linearly independent. - -</ul> - -</body> -</html> diff --git a/content/lin-algebra-notes/img/geo-contract-shears.png b/content/lin-algebra-notes/linear-transformations/geo-contract-shears.png Binary files differ. diff --git a/content/lin-algebra-notes/img/geo-projections.png b/content/lin-algebra-notes/linear-transformations/geo-projections.png Binary files differ. diff --git a/content/lin-algebra-notes/img/geo-reflections.png b/content/lin-algebra-notes/linear-transformations/geo-reflections.png Binary files differ. diff --git a/content/lin-algebra-notes/linear-transformations/index.md b/content/lin-algebra-notes/linear-transformations/index.md @@ -0,0 +1,47 @@ ++++ +title = 'Linear transformations' +template = 'page-math.html' ++++ + +# Linear transformations +definitions: +* transformation, function, mapping: rule assigning to each vector in $\Re^n$ a vector $T(x)$ in $\Re^m$ +* domain: set $\Re^n$ +* codomain: set $\Re^m$ +* image: vector T(x) +* range: set of all images T(x) + +a projection transformation happens if you go to a lower dimension (e.g. $x_3$ becomes 0). a shear transformation happens if a 2D square is tilted sideways into a parallelogram. + +a transformation T is linear if: +i) $T(u + v) = T(u) + T(v)$ for all $u,v \in \text{Domain}(T)$ +ii) $T(cu) = cT(u)$ for all scalars c and all $u \in \text{Domain}(T)$ + +linear transformations preserve operations of vector addition and scalar multiplication. 
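+
+A minimal numpy sketch of the two linearity conditions for a matrix transformation $T(x) = Ax$ (the shear matrix and the test vectors are arbitrary illustrative choices):
+
+```python
+import numpy as np
+
+A = np.array([[1., 2.],     # a shear of the plane
+              [0., 1.]])
+u = np.array([1., 3.])
+v = np.array([-2., 5.])
+c = 4.0
+
+print(np.allclose(A @ (u + v), A @ u + A @ v))   # True: T(u + v) = T(u) + T(v)
+print(np.allclose(A @ (c * u), c * (A @ u)))     # True: T(cu) = cT(u)
+```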
+ +if T is a linear transformation, then: +* $T(0) = 0$ +* $T(cu + dv) = cT(u) + dT(v)$ +* $T(c_1 v_1 + \dots + c_p v_p) = c_1 T(v_1) + \dots + c_p T(v_p)$ (superposition principle) + +given scalar r, and $T: \Re^2 \rightarrow \Re^2$ by $T(x) = rx$ +* contraction: when $0 \leq r \leq 1$ +* dilation: when $r > 1$ + +every linear transformation $\Re^n \rightarrow \Re^m$ is a matrix transformation $x \mapsto Ax$. + +$A = [T(e_1) \dots T(e_n)]$, where $e_j$ is the jth column of the identity matrix in $\Re^n$ + +geometric linear transformations of $\Re^2$: + +![Reflections](geo-reflections.png) ![Contractions/expansions and shears](geo-contract-shears.png) ![Projections](geo-projections.png) + +types of mappings: +* $T: \Re^n \rightarrow \Re^m$ is 'onto' $\Re^m$ if _each_ b in $\Re^m$ is the image of _at least one_ x in $\Re^n$. +* $T: \Re^n \rightarrow \Re^m$ is one-to-one if _each_ b in $\Re^m$ is the image of _max one_ x in $\Re^n$. + * so if $T(x) = 0$ only has the trivial solution + +for mapping $T: \Re^n \rightarrow \Re^m$ and standard matrix $A$: +* T maps $\Re^n$ onto $\Re^m$ iff columns of matrix span $\Re^m$ +* T is one-to-one iff columns of matrix are linearly independent. + diff --git a/content/lin-algebra-notes/matrix-operations.html b/content/lin-algebra-notes/matrix-operations.html @@ -1,183 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>matrix-operations</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Matrix operations"><h2 id="Matrix operations">Matrix operations</h2></div> -<p> -\(a_{ij}\) is the entry in the ith row and jth column of A -</p> - -<p> -diagonal entries are \(a_{11}\), \(a_{22}\), etc. and form the main diagonal. if non-diagonal entries are zero, then it's a diagonal matrix. -</p> - -<p> -equal matrices have same size <em>and</em> their corresponding entries are equal. -</p> - -<div id="Matrix operations-Sums and scalar multiples"><h3 id="Sums and scalar multiples">Sums and scalar multiples</h3></div> -<p> -sum A+B: sum corresponding entries in A and B. -</p> - -<p> -scalar multiple \(rA\) is matrix whose columns are r times the corresponding columns in A (with r scalar). -</p> - -<p> -the usual rules of algebra apply to sums and scalar multiples of matrices. -</p> - -<p> -when matrix B multiplies vector x, it transforms x into vector \(Bx\). if \(Bx\) is multiplied by A, the result is \(A(Bx)\). \(A(Bx)\) is produced from x by composition of mappings, which can be represented as multiplication by a single matrix AB. -</p> - -<p> -\(A(Bx) = (AB)x\) -</p> - -<p> -\(AB = A \begin{bmatrix} b_1 &amp; b_2 &amp; \dots &amp; b_p \end{bmatrix} = \begin{bmatrix} Ab_1 &amp; Ab_2 &amp; \dots &amp; Ab_p \end{bmatrix}\) -</p> - -<p> -A is matrix, B is matrix with columns \(b_1 \dots b_p\). -</p> - -<p> -each column of AB is a linear combination of columns of A using weights from corresponding column of B. AB has the same number of rows as A and same number of columns as B. if the number of columns of A does not match number of rows of B, the product is undefined. in general, AB ≠ BA. 
-</p> - -<p> -if product AB is defined, then: -</p> - - -<p> -\((AB)_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \dots + a_{in} b_{nj}\) -</p> - -<p> -\(row_i (AB) = row_i (A) \times B\) -</p> - -<p> -a nifty trick: if multiplying matrix m×n by matrix n×o, the product will be a matrix m×o. -</p> - -<div id="Matrix operations-Powers of a matrix"><h3 id="Powers of a matrix">Powers of a matrix</h3></div> -<p> -\(A^k = \underbrace{A \dots A}_{k}\) -</p> - -<p> -with \(A\) an n × n matrix and k a positive integer. -</p> - -<div id="Matrix operations-Transpose of a matrix"><h3 id="Transpose of a matrix">Transpose of a matrix</h3></div> -<p> -a matrix \(A'\) whose columns are made up of the corresponding rows of \(A\) -</p> - -<p> -properties: -</p> -<ul> -<li> -\((A^T)^T = A\) - -<li> -\((A+B)^T = A^T + B^T\) - -<li> -\((rA)^T = rA^T\) with r a scalar - -<li> -\((AB)^T = B^T A^T\)$ - -</ul> - -<p> -the transpose of a product of matrices == product of their transposes in reverse order -</p> - -<div id="Matrix operations-Inverse of a matrix"><h3 id="Inverse of a matrix">Inverse of a matrix</h3></div> -<p> -invertible (singular) if there is same size matrix C such that \(CA = I\) and \(AC = I\) where I is the n × n identity matrix. -</p> - -<p> -identity matrix: a matrix where the diagonals are all 1. -</p> - -<p> -C is uniquely determined by A, so: \(A^{-1} A = I\). -</p> - -<p> -let \(A = \begin{bmatrix} a &amp; b \\ c &amp; d \end{bmatrix}.\) if \(ad - bc \ne 0\) then \(A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d &amp; -b \\ -c &amp; a \end{bmatrix}\) -</p> - -<p> -determinant: \(\det A = ad - bc\) -</p> - -<p> -if A is invertible (determinant is not 0), then for each \(b \in \Re^n\) the solution of \(Ax = b\) is \(A^{-1} b\). -</p> - -<p> -properties of inverse: -</p> -<ul> -<li> -\((A^{-1})^{-1} = A\) - -<li> -\((AB)^{-1} = B^{-1} A^{-1}\) (watch out for order!) - -<li> -\((A^T)^{-1} = (A^{-1})^T\) - -</ul> - -<p> -finding \(A^{-1}\): -</p> -<ul> -<li> -Row reduce augmented matrix \(\begin{bmatrix} A &amp; I \end{bmatrix}\). - -<li> -if A is row equivalent to I, then \(\begin{bmatrix} A &amp; I \end{bmatrix}\) is row equivalent to \(\begin{bmatrix} I &amp; A^{-1} \end{bmatrix}\) - -<li> -otherwise, A doesn't have an inverse. - -</ul> - -<div id="Matrix operations-Elementary matrices"><h3 id="Elementary matrices">Elementary matrices</h3></div> -<p> -elementary matrix: obtained by performing single elementary row operation on identity matrix -</p> - -<p> -if elementary row operation is performed on m × n matrix A, result is EA, with E an m × m matrix obtained by performing same row operation on \(I_m\) -</p> - -<p> -inverse of any elementary matrix E is of same type that transforms E back into I. -</p> - -<p> -an n² matrix A is only invertible if A is row equivalent to \(I_n\). any sequence of elementary operations reducing A to \(I_n\) also transforms \(I_n\) into \(A^{-1}\). -</p> - -</body> -</html> diff --git a/content/lin-algebra-notes/matrix-operations.md b/content/lin-algebra-notes/matrix-operations.md @@ -0,0 +1,85 @@ ++++ +title = 'Matrix operations' +template = 'page-math.html' ++++ + +# Matrix operations +$a_{ij}$ is the entry in the ith row and jth column of A + +diagonal entries are $a_{11}$, $a_{22}$, etc. and form the main diagonal. if non-diagonal entries are zero, then it's a diagonal matrix. + +equal matrices have same size _and_ their corresponding entries are equal. + +## Sums and scalar multiples +sum A+B: sum corresponding entries in A and B. 
+ +scalar multiple $rA$ is matrix whose columns are r times the corresponding columns in A (with r scalar). + +the usual rules of algebra apply to sums and scalar multiples of matrices. + +when matrix B multiplies vector x, it transforms x into vector $Bx$. if $Bx$ is multiplied by A, the result is $A(Bx)$. $A(Bx)$ is produced from x by composition of mappings, which can be represented as multiplication by a single matrix AB. + +$A(Bx) = (AB)x$ + +$AB = A \begin{bmatrix} b_1 & b_2 & \dots & b_p \end{bmatrix} = \begin{bmatrix} Ab_1 & Ab_2 & \dots & Ab_p \end{bmatrix}$ + +A is a matrix, B is a matrix with columns $b_1 \dots b_p$. + +each column of AB is a linear combination of columns of A using weights from corresponding column of B. AB has the same number of rows as A and same number of columns as B. if the number of columns of A does not match number of rows of B, the product is undefined. in general, AB ≠ BA. + +if product AB is defined, then: + +$(AB)\_{ij} = a\_{i1} b_{1j} + a_{i2} b_{2j} + \dots + a_{in} b_{nj}$ + +$row_i (AB) = row_i (A) \times B$ + +a nifty trick: if multiplying matrix m×n by matrix n×o, the product will be a matrix m×o. + +## Powers of a matrix +$A^k = \underbrace{A \dots A}_{k}$ + +with $A$ an n × n matrix and k a positive integer. + +## Transpose of a matrix +the matrix $A^T$ whose columns are made up of the corresponding rows of $A$ + +properties: +* $(A^T)^T = A$ +* $(A+B)^T = A^T + B^T$ +* $(rA)^T = rA^T$ with r a scalar +* $(AB)^T = B^T A^T$ + +the transpose of a product of matrices == product of their transposes in reverse order + +## Inverse of a matrix +invertible (non-singular) if there is same size matrix C such that $CA = I$ and $AC = I$ where I is the n × n identity matrix. + +identity matrix: a matrix where the diagonal entries are all 1 and all other entries are 0. + +C is uniquely determined by A, so: $A^{-1} A = I$. + +let $A = \begin{bmatrix} a & b \\\\ c & d \end{bmatrix}.$ if $ad - bc \ne 0$ then $A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\\\ -c & a \end{bmatrix}$ + +determinant: $\det A = ad - bc$ + +if A is invertible (determinant is not 0), then for each $b \in \Re^n$ the solution of $Ax = b$ is $A^{-1} b$. + +properties of inverse: +* $(A^{-1})^{-1} = A$ +* $(AB)^{-1} = B^{-1} A^{-1}$ (watch out for order!) +* $(A^T)^{-1} = (A^{-1})^T$ + +finding $A^{-1}$: +* Row reduce augmented matrix $\begin{bmatrix} A & I \end{bmatrix}$. +* if A is row equivalent to I, then $\begin{bmatrix} A & I \end{bmatrix}$ is row equivalent to $\begin{bmatrix} I & A^{-1} \end{bmatrix}$ +* otherwise, A doesn't have an inverse. + +## Elementary matrices +elementary matrix: obtained by performing single elementary row operation on identity matrix + +if elementary row operation is performed on m × n matrix A, result is EA, with E an m × m matrix obtained by performing same row operation on $I_m$ + +inverse of any elementary matrix E is of same type that transforms E back into I. + +an n × n matrix A is only invertible if A is row equivalent to $I_n$. any sequence of elementary operations reducing A to $I_n$ also transforms $I_n$ into $A^{-1}$. 
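+
+The 2 × 2 inverse formula can be checked numerically; a short numpy sketch (the matrix is an arbitrary invertible example, numpy assumed for illustration):
+
+```python
+import numpy as np
+
+A = np.array([[4., 7.],
+              [2., 6.]])
+det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]    # ad - bc = 10, nonzero => invertible
+A_inv = (1 / det) * np.array([[ A[1, 1], -A[0, 1]],
+                              [-A[1, 0],  A[0, 0]]])
+
+print(np.allclose(A_inv, np.linalg.inv(A)))    # True: matches numpy's inverse
+b = np.array([1., 0.])
+print(A_inv @ b)                               # [ 0.6 -0.2], the unique solution of Ax = b
+```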
+ diff --git a/content/lin-algebra-notes/orthogonality-least-squares.html b/content/lin-algebra-notes/orthogonality-least-squares.html @@ -1,199 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>orthogonality-least-squares</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Orthogonality &amp; least squares"><h2 id="Orthogonality &amp; least squares">Orthogonality &amp; least squares</h2></div> -<p> -let \(u,v \in \Re^n\). orthogonal iff: -</p> -<ul> -<li> -\(u \cdot v = 0\) - -<li> -or \(\|u\|^2 + \|v\|^2 = \|u+v\|^2\) - -</ul> - -<div id="Orthogonality &amp; least squares-Inner (dot) product &amp; uses"><h3 id="Inner (dot) product &amp; uses">Inner (dot) product &amp; uses</h3></div> -<p> -let \(u,v \in \Re^n\). then, \(u \cdot v = u^T v \in \Re\). -</p> - -<p> -in English, to calculate you just multiply the vectors row-wise, and sum all the results. -</p> - -<p> -Regular algebraic rules apply. -</p> - -<p> -\(u \cdot u \geq 0\), only 0 iff u = 0. -</p> - -<div id="Orthogonality &amp; least squares-Inner (dot) product &amp; uses-Length of a vector"><h4 id="Length of a vector">Length of a vector</h4></div> -<p> -Let \(v \in \Re^n\), then the norm (length) of v is \(\|v\| = \sqrt{v \cdot v}\). -</p> - -<p> -Does the norm coincide with length of line segments? Yes: -</p> - -<p> -\(x = \begin{bmatrix}a\\b\end{bmatrix}, \quad \|v\| = \sqrt{v \cdot v} = \sqrt{a^2 + b^2} = \text{Pythagoras}\) -</p> - -<div id="Orthogonality &amp; least squares-Inner (dot) product &amp; uses-Distance between vectors"><h4 id="Distance between vectors">Distance between vectors</h4></div> -<p> -Let \(u,v \in \Re^n\). then, \(\text{dist}(u,v) = \|u-v\|\). -</p> - -<div id="Orthogonality &amp; least squares-Orthogonal complement"><h3 id="Orthogonal complement">Orthogonal complement</h3></div> -<p> -Let \(W \subset \Re^n\) a subspace, then orthogonal complement of W is \(W^\perp = \{x \in \Re^n | x \cdot v = 0 \forall u \in W \}\) -</p> - -<p> -properties: -</p> -<ul> -<li> -\((colA)^\perp = (NulA)^T\) - -<li> -\((NulA)^\perp = (colA)^T\) - -</ul> - -<div id="Orthogonality &amp; least squares-Orthogonal sets"><h3 id="Orthogonal sets">Orthogonal sets</h3></div> -<p> -a set \(\{v_1 \dots v_p\}\) is orthogonal if \(v_i \cdot v_j = 0 \forall i,j\). then \(\{v_1 \dots v_p\}\) is a basis for \(\text{Span}\{v_1 \dots v_p\}\) -</p> - -<p> -An orthogonal basis is a basis that is also an orthogonal set -</p> - -<p> -Why orthogonal basis? Let \(W \in \Re^n\) be subspace with orthogonal basis \(\{u_1 \dots u_p\}\), then \(W \ni y = c_1 u_1 + \ldots + c_p u_p\), with \(c_i = \frac{y \cdot u_i}{u_i \cdot u_i}\) for i = 1...p. -</p> - -<p> -An orthonormal set/basis is an orthogonal set/basis consisting of unit vectors (like \(\{e_1, \ldots, e_n\}\text{ for }\Re^n\)). -</p> - -<p> -An m × matrix A has orthonormal columns iff \(A^T A = I_n\) -</p> -<ul> -<li> -\((Ax) \cdot (Ay) = x \cdot y\) - -<li> -\(\| Ax \| = \| x \|\) - -</ul> - -<div id="Orthogonality &amp; least squares-Orthogonal projections"><h3 id="Orthogonal projections">Orthogonal projections</h3></div> -<div id="Orthogonality &amp; least squares-Orthogonal projections-Orthogonal decomposition"><h4 id="Orthogonal decomposition">Orthogonal decomposition</h4></div> -<p> -Let W be a subspace of \(\Re^n\). 
Each y in \(R^n\) can be written uniquely in \(y = \hat{y}+z\) (\(\hat{y} \in W,\; z \in W^\perp\)) -</p> - -<p> -If \(\{u_1, \ldots, u_p\}\) in orthogonal basis of W, then \(\hat{y} = \frac{y \cdot u_1}{u_1 \cdot u_1} u_1 + \ldots + \frac{y \cdot u_p}{u_p \cdot u_p}u_p\) -</p> - -<p> -ŷ is an orthogonal projection of y onto W (\(proj_w y\)) -</p> - -<div id="Orthogonality &amp; least squares-Orthogonal projections-Best approximation"><h4 id="Best approximation">Best approximation</h4></div> -<p> -Let W be subspace of \(\Re^n\), y a vector in \(\Re^n\), ŷ an orthogonal projection of y onto W. -</p> - -<p> -Then \(\|y-\hat{y}\| &lt; \|y-v\|\) -</p> - -<div id="Orthogonality &amp; least squares-Orthogonal projections-When basis for W is an orthonormal set"><h4 id="When basis for W is an orthonormal set">When basis for W is an orthonormal set</h4></div> -<p> -If \(\{u_1 \ldots u_p\}\) is orthonormal basis for subspace W of \(\Re^n\), then \(\text{proj}_w y = (y \cdot u_1)u_1 + \dots + (y \cdot u_p) u_p\) -</p> - -<p> -If U = \(\begin{bmatrix} u_1 &amp; u_2 &amp; \dots &amp; u_p \end{bmatrix}\), then \(\text{proj}_w y = UU^T y \quad \forall y \in \Re^n\) -</p> - -<div id="Orthogonality &amp; least squares-Gram-Schmidt process"><h3 id="Gram-Schmidt process">Gram-Schmidt process</h3></div> -<p> -An algorithm for producing orthogonal or orthonormal basis for any nonzero subspace of \(\Re^n\). -</p> - -<p> -Given basis \(\{ x_1 \dots x_p \}\) for nonzero subspace W of \(\Re^n\), define: -</p> - -<p> -{{\(%align*\) -v_1 &amp;= x_1\\ -v_2 &amp;= x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1\\ -v_3 &amp;= x_3 - \frac{x_3 \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_3 \cdot v_2}{v_2 \cdot v_2} v_2\\ -\vdots \\ -v_p &amp;= x_p - \frac{x_p \cdot v_1}{v_1 \cdot v_1} v_1 - \dots - \frac{x_p \cdot v_{p-1}}{v_{p-1} \cdot v_{p-1} v_{p-1}} -}}$ -</p> - -<p> -Then \(\{v_1 \dots v_p\}\) is an orthogonal basis for W. -</p> - -<p> -\(\text{Span}\{v_1 \dots v_k\} = \text{Span}\{x_1 \dots x+k\}\) for 1 ≤ k ≤ p. -</p> - -<div id="Orthogonality &amp; least squares-Gram-Schmidt process-QR factorization"><h4 id="QR factorization">QR factorization</h4></div> -<p> -If A is an m × n matrix, with linearly independent columns, then A can be factored as \(A = QR\), where Q is he m×n matrix whose columns form an orthonormal basis for Col A, and R is n×n upper triangular invertible matrix with diagonal positive entries. -</p> - -<div id="Orthogonality &amp; least squares-Least-squares problems"><h3 id="Least-squares problems">Least-squares problems</h3></div> -<p> -If a solution for \(Ax = b\) does not exist and one is needed, try to find the best approximation x for \(Ax = b\). -</p> - -<p> -General least-squares problem is to find x that makes \(\| b - Ax\|\) as small as possible. -</p> - -<p> -If A is m×n and \(b \in \Re^m\), a least-squares solution of \(Ax = b\) is \(\hat{x} \in \Re^n\) such that \(\| b - A\hat{x} \| \leq \| b - Ax \|, \qquad \forall x \in \Re^n\). -</p> - -<p> -Least-square solution set of \(Ax = b\) is the same as the solution set for \(A^T Ax = A^T b\). -</p> - -<p> -Therefore, \(\hat{x} = (A^T A)^{-1} A^T b\). -</p> - -<p> -Given an m×n matrix A with linearly independent columns, let \(A = QR\) be a QR factorization of A. 
Then, for each \(b \in \Re^m\), \(Ax = b\) has unique least-squares solution:
-</p>
-
-<p>
-\(\hat{x} = R^{-1} Q^T b\)
-</p>
-
-</body>
-</html>
diff --git a/content/lin-algebra-notes/orthogonality-least-squares.md b/content/lin-algebra-notes/orthogonality-least-squares.md
@@ -0,0 +1,104 @@
++++
+title = 'Orthogonality & least squares'
+template = 'page-math.html'
++++
+
+# Orthogonality & least squares
+let $u,v \in \Re^n$. u and v are orthogonal iff:
+* $u \cdot v = 0$
+* or, equivalently, $\|u\|^2 + \|v\|^2 = \|u+v\|^2$
+
+## Inner (dot) product & uses
+let $u,v \in \Re^n$. then, $u \cdot v = u^T v \in \Re$.
+
+in English: to calculate it, multiply the vectors entry-wise and sum all the results.
+
+The regular algebraic rules apply.
+
+$u \cdot u \geq 0$, with $u \cdot u = 0$ iff u = 0.
+
+### Length of a vector
+Let $v \in \Re^n$, then the norm (length) of v is $\|v\| = \sqrt{v \cdot v}$.
+
+Does the norm coincide with the length of line segments? Yes:
+
+$v = \begin{bmatrix}a\\\\b\end{bmatrix}, \quad \|v\| = \sqrt{v \cdot v} = \sqrt{a^2 + b^2} = \text{Pythagoras}$
+
+### Distance between vectors
+Let $u,v \in \Re^n$. then, $\text{dist}(u,v) = \|u-v\|$.
+
+## Orthogonal complement
+Let $W \subset \Re^n$ be a subspace; then the orthogonal complement of W is $W^\perp = \{x \in \Re^n \mid x \cdot v = 0 \; \forall v \in W \}$
+
+properties:
+* $(\text{Col} A)^\perp = \text{Nul} A^T$
+* $(\text{Nul} A)^\perp = \text{Row} A = \text{Col} A^T$
+
+## Orthogonal sets
+a set $\{v_1 \dots v_p\}$ is orthogonal if $v_i \cdot v_j = 0 \; \forall i \ne j$. if the vectors are also nonzero, then $\{v_1 \dots v_p\}$ is linearly independent, and hence a basis for $\text{Span}\{v_1 \dots v_p\}$
+
+An orthogonal basis is a basis that is also an orthogonal set
+
+Why an orthogonal basis? Because the weights are easy to compute: let $W \subseteq \Re^n$ be a subspace with orthogonal basis $\{u_1 \dots u_p\}$, then each $y \in W$ can be written as $y = c_1 u_1 + \ldots + c_p u_p$, with $c_i = \frac{y \cdot u_i}{u_i \cdot u_i}$ for i = 1...p.
+
+An orthonormal set/basis is an orthogonal set/basis consisting of unit vectors (like $\{e_1, \ldots, e_n\}\text{ for }\Re^n$).
+
+An m × n matrix A has orthonormal columns iff $A^T A = I_n$. In that case:
+* $(Ax) \cdot (Ay) = x \cdot y$
+* $\| Ax \| = \| x \|$
+
+## Orthogonal projections
+### Orthogonal decomposition
+Let W be a subspace of $\Re^n$. Each y in $\Re^n$ can be written uniquely as $y = \hat{y}+z$ ($\hat{y} \in W,\\; z \in W^\perp$)
+
+If $\{u_1, \ldots, u_p\}$ is an orthogonal basis of W, then $\hat{y} = \frac{y \cdot u_1}{u_1 \cdot u_1} u_1 + \ldots + \frac{y \cdot u_p}{u_p \cdot u_p}u_p$
+
+ŷ is the orthogonal projection of y onto W ($\text{proj}_W y$)
+
+### Best approximation
+Let W be a subspace of $\Re^n$, y a vector in $\Re^n$, ŷ the orthogonal projection of y onto W.
+
+Then $\|y-\hat{y}\| < \|y-v\|$ for all v in W with v ≠ ŷ, i.e. ŷ is the point of W closest to y.
+
+### When the basis for W is an orthonormal set
+If $\{u_1 \ldots u_p\}$ is an orthonormal basis for a subspace W of $\Re^n$, then $\text{proj}_W y = (y \cdot u_1)u_1 + \dots + (y \cdot u_p) u_p$
+
+If U = $\begin{bmatrix} u_1 & u_2 & \dots & u_p \end{bmatrix}$, then $\text{proj}_W y = UU^T y \quad \forall y \in \Re^n$
+
+## Gram-Schmidt process
+An algorithm for producing an orthogonal or orthonormal basis for any nonzero subspace of $\Re^n$.
+
+Given a basis $\{ x_1 \dots x_p \}$ for a nonzero subspace W of $\Re^n$, define:
+
+$
+\begin{aligned}
+v_1 &= x_1\\\\
+v_2 &= x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1\\\\
+v_3 &= x_3 - \frac{x_3 \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_3 \cdot v_2}{v_2 \cdot v_2} v_2\\\\
+\vdots \\\\
+v_p &= x_p - \frac{x_p \cdot v_1}{v_1 \cdot v_1} v_1 - \dots - \frac{x_p \cdot v_{p-1}}{v_{p-1} \cdot v_{p-1}} v_{p-1}
+\end{aligned}
+$
+
+Then $\{v_1 \dots v_p\}$ is an orthogonal basis for W.
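+
+A minimal numpy sketch of the process (added as an illustration, not from the book; the input vectors are arbitrary and assumed linearly independent):
+
+```python
+import numpy as np
+
+def gram_schmidt(X):
+    """Orthogonalize the columns of X, left to right."""
+    V = []
+    for j in range(X.shape[1]):
+        v = X[:, j].copy()
+        for u in V:
+            v -= (v @ u) / (u @ u) * u  # subtract the component along u
+        V.append(v)                     # v is now orthogonal to all previous v's
+    return np.column_stack(V)
+
+X = np.array([[1.0, 1.0],
+              [1.0, 0.0],
+              [0.0, 1.0]])
+V = gram_schmidt(X)
+print(np.isclose(V[:, 0] @ V[:, 1], 0.0))  # True: the columns are orthogonal
+```
+
+(Subtracting from the running v rather than from the original $x_j$ is the "modified" Gram-Schmidt variant; it produces the same basis but is numerically stabler.)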
+
+$\text{Span}\{v_1 \dots v_k\} = \text{Span}\{x_1 \dots x_k\}$ for 1 ≤ k ≤ p.
+
+### QR factorization
+If A is an m × n matrix with linearly independent columns, then A can be factored as $A = QR$, where Q is the m×n matrix whose columns form an orthonormal basis for Col A, and R is an n×n upper triangular invertible matrix with positive diagonal entries.
+
+## Least-squares problems
+If a solution for $Ax = b$ does not exist and one is needed, try to find the best approximation x for $Ax = b$.
+
+The general least-squares problem is to find an x that makes $\| b - Ax\|$ as small as possible.
+
+If A is m×n and $b \in \Re^m$, a least-squares solution of $Ax = b$ is $\hat{x} \in \Re^n$ such that $\| b - A\hat{x} \| \leq \| b - Ax \|, \qquad \forall x \in \Re^n$.
+
+The least-squares solution set of $Ax = b$ is the same as the solution set of the normal equations $A^T Ax = A^T b$.
+
+Therefore, if the columns of A are linearly independent (so that $A^T A$ is invertible), $\hat{x} = (A^T A)^{-1} A^T b$.
+
+Given an m×n matrix A with linearly independent columns, let $A = QR$ be a QR factorization of A. Then, for each $b \in \Re^m$, $Ax = b$ has the unique least-squares solution:
+
+$\hat{x} = R^{-1} Q^T b$
+
diff --git a/content/lin-algebra-notes/solution-sets-of-linear-systems.html b/content/lin-algebra-notes/solution-sets-of-linear-systems.html
@@ -1,127 +0,0 @@
-<!DOCTYPE html>
-<html>
-<head>
-<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
-<link rel="Stylesheet" type="text/css" href="style.css">
-<title>solution-sets-of-linear-systems</title>
-<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
-</head>
-<body>
-
-<div id="Solution sets of linear systems"><h2 id="Solution sets of linear systems">Solution sets of linear systems</h2></div>
-<div id="Solution sets of linear systems-Homogeneous linear systems"><h3 id="Homogeneous linear systems">Homogeneous linear systems</h3></div>
-<p>
-homogeneous: if you can write it in \(Ax = 0\) where A is an \(m \times n\) matrix and 0 is the zero vector in \(\Re^m\)
-</p>
-<ul>
-<li>
-always has at least one solution (the trivial solution, \(x = 0\)).
-
-<li>
-has a nontrivial solution iff there is a free variable
-
-<ul>
-<li>
-if the equation has only one free variable, the solution is a line through the origin
-
-<li>
-when there are two or more free variables, it's a line through the origin
-
-</ul>
-<li>
-solution set is \(\text{Span} \{v_1, \ldots, v_p\}\) for suitable vectors
-
-</ul>
-
-<div id="Solution sets of linear systems-Parametric vector form"><h3 id="Parametric vector form">Parametric vector form</h3></div>
-<p>
-implicit description:
-</p>
-<ul>
-<li>
-a simple equation
-
-<li>
-e.g. \(10x_1 - 3x_2 - 2x_3 = 0\)
-
-</ul>
-
-<p>
-explicit description (parametric vector form):
-</p>
-<ul>
-<li>
-the solution to the equation as a set spanned by u and v
-
-<li>
-of the form \(x = su + tv\), with \(s,t \in \Re\)
-
-</ul>
-
-<p>
-the solution set of \(Ax = 0\) is \(x = tv\) with \(t \in \Re\).
-</p>
-
-<p>
-if \(Ax = b\) has a solution, then you get the solution set by translating the solution set of \(Ax = 0\) using any particular solution p of \(Ax = b\). The set is then \(x = p + tv\)
-</p>
-
-<p>
-Writing a solution set in parametric vector form:
-</p>
-<ol>
-<li>
-Row reduce augmented matrix to echelon form
-
-<li>
-Express each basic variable in terms of any free variables.
-
-<li>
-Write a typical solution x as a vector, with entries depending on the (potential) free variables.
-
-<li>
-Decompose x into a linear combination of vectors using free vars as parameters.
-
-</ol>
-
-<div id="Solution sets of linear systems-Linear independence"><h3 id="Linear independence">Linear independence</h3></div>
-
-<p>
-linearly independent:
-</p>
-<ul>
-<li>
-set of vector equations: iff the vector equation has only the trivial solution (\(x_1 = x_2 = x_3 = 0\))
-
-<li>
-columns of matrix: iff \(Ax = 0\) has <em>only</em> the trivial solution
-
-<li>
-one vector: iff v is not the zero vector
-
-<li>
-two vectors: if neither of the vectors is a multiple of the other
-
-</ul>
-
-<p>
-linearly dependent:
-</p>
-<ul>
-<li>
-iff at least one of the vectors is a linear combination of the others
-
-<li>
-if there are more vectors than entries in each vector
-
-<li>
-if the set contains the zero vector
-
-</ul>
-
-<p>
-a set is linearly dependent iff it's not linearly independent.
-</p>
-
-</body>
-</html>
diff --git a/content/lin-algebra-notes/solution-sets-of-linear-systems.md b/content/lin-algebra-notes/solution-sets-of-linear-systems.md
@@ -0,0 +1,48 @@
++++
+title = 'Solution sets of linear systems'
+template = 'page-math.html'
++++
+
+# Solution sets of linear systems
+## Homogeneous linear systems
+homogeneous: a system you can write as $Ax = 0$, where A is an $m \times n$ matrix and 0 is the zero vector in $\Re^m$
+* always has at least one solution (the trivial solution, $x = 0$).
+* has a nontrivial solution iff there is a free variable
+  * if the equation has only one free variable, the solution set is a line through the origin
+  * when there are two or more free variables, it's a plane (or higher-dimensional subspace) through the origin
+* the solution set is $\text{Span} \{v_1, \ldots, v_p\}$ for suitable vectors
+
+## Parametric vector form
+implicit description:
+* a simple equation
+* e.g. $10x_1 - 3x_2 - 2x_3 = 0$
+
+explicit description (parametric vector form):
+* the solution to the equation as a set spanned by u and v
+* of the form $x = su + tv$, with $s,t \in \Re$
+
+e.g. with one free variable, the solution set of $Ax = 0$ is $x = tv$ with $t \in \Re$.
+
+if $Ax = b$ has a solution, then you get the solution set by translating the solution set of $Ax = 0$ using any particular solution p of $Ax = b$. The set is then $x = p + tv$ (see the numerical check at the end of this page).
+
+Writing a solution set in parametric vector form:
+1. Row reduce the augmented matrix to echelon form.
+2. Express each basic variable in terms of any free variables.
+3. Write a typical solution x as a vector, with entries depending on the (potential) free variables.
+4. Decompose x into a linear combination of vectors, using the free variables as parameters.
+
+## Linear independence
+
+linearly independent:
+* a set of vectors: iff the vector equation $x_1 v_1 + \dots + x_p v_p = 0$ has only the trivial solution ($x_1 = \dots = x_p = 0$)
+* columns of a matrix: iff $Ax = 0$ has _only_ the trivial solution
+* one vector: iff v is not the zero vector
+* two vectors: iff neither of the vectors is a multiple of the other
+
+linearly dependent:
+* iff at least one of the vectors is a linear combination of the others
+* if there are more vectors than entries in each vector
+* if the set contains the zero vector
+
+a set is linearly dependent iff it's not linearly independent.
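+
+A small numpy check of the translation property above (added as an illustration, not from the book; the matrix and vectors are arbitrary):
+
+```python
+import numpy as np
+
+A = np.array([[1.0, -2.0],
+              [2.0, -4.0]])   # rank 1, so Ax = 0 has a free variable
+v = np.array([2.0, 1.0])      # spans Nul A: A @ v == 0
+p = np.array([3.0, 1.0])      # take this as the particular solution...
+b = A @ p                     # ...of Ax = b for this right-hand side
+
+# every translate p + t*v is again a solution of Ax = b
+for t in (0.0, 1.0, -2.5):
+    assert np.allclose(A @ (p + t * v), b)
+print("p + t*v solves Ax = b for every t")
+```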
+ diff --git a/content/lin-algebra-notes/src/applications-to-computer-graphics.wiki b/content/lin-algebra-notes/src/applications-to-computer-graphics.wiki @@ -1,31 +0,0 @@ -== Applications to computer graphics == -graphics are stored in a matrix, such as this: - -{{file:img/graphics-coordinate-matrix.png|Graphics coordinate matrix}} {{file:img/vector-letter-n.png|Vector letter N}} - -=== Homogeneous coordinates === -==== 2D ==== -each point (x, y) in 2D can be identified with point (x, y, 1) in 3D. so we say that (x, y) has homogeneous coordinates (x, y, 1). - -e.g. translation is not a linear transformation. but $(x, y) \mapsto (x+h, y+k)$ can be written in homogeneous coordinates as $(x, y, 1) \mapsto (x+h, y+k, 1)$, and can be computed using matrix multiplication: - -$\begin{bmatrix} 1 & 0 & h\\ 0 & 1 & k\\ 0 & 0 & 1\end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} x+h \\ y+k \\ 1 \end{bmatrix}$ - -==== 3D ==== -(X, Y, Z, H) are homogeneous coordinates for (x, y, z) if H ≠ 0 and - -$x = \frac{X}{H}, \quad y = \frac{Y}{H}, \quad \text{and} \; z = \frac{Z}{H}$ - -=== Matrices for typical transformations === -{{file:img/typical-transformations.png|Typical transformations}} - -=== Composite transformations === -when you need two or more basic transformations, such a composite transformation is a matrix multiplication. - -matrices for new transformations are "prepended" in multiplication. so if you're rotating, then translating, the calculation is `[matrix for translation][matrix for rotation]`. - -=== Perspective projections === -maps each point (x, y, z) onto an image point (x*, y*, 0) so that two points and eye position (center of projection) are on a line. - -{{file:img/perspective-projection-diagram.png|Perspective projection diagram}} - diff --git a/content/lin-algebra-notes/src/eigenvectors-eigenvalues.wiki b/content/lin-algebra-notes/src/eigenvectors-eigenvalues.wiki @@ -1,59 +0,0 @@ -== Eigenvectors & eigenvalues == -let A be n × n, $x \in \Re^n$ is an eigenvector of A if x ≠ 0 and $\exists \lambda \in \Re$ such that $Ax = \lambda x$ - -x is eigenvector with corresponding eigenvalue λ. - -Is a given vector $u \in \Re^n$ an eigenvector of a given A (n × n)? - * Do $Au$, check if result is a multiple of u. - -Is a given λ an eigenvalue of A? - * $\exists x \ne 0$ such that $Ax - \lambda x = 0 \leftrightarrow (A-\lambda I_n)x = 0$ with nontrivial solutions. - -The solution set of $(A-\lambda I_n)x = 0$ is the eigenspace corresponding to λ. - -How to find a basis for the eigenspace of a given λ? - 1. calculate matrix for $A-\lambda I_n$ where n is the number of rows or columns of A - 2. reduce matrix to reduced echelon form - 3. express solutions in parametric form (basic variables in terms of free variables) - 4. basis for eigenspace is the set of the coefficients - -If λ = 0, then Ax = 0 has a nontrivial solution (and A is _not_ invertible). - -Eigenvectors corresponding to distinct eigenvalues are linearly independent. - -=== Determinant === -Geometric interpretation: let $A = [a_1 \; a_2]$. then the determinant (absolute value) is the surface area (or volume in 3D): - -{{file:img/determinant-geometric-diagram.png|Determinant geometric diagram}} - -Let A (n × n). A ~ U without scaling and using _r_ row interchanges. 
then $\det A = (-1)^r u_{11} \times \dots \times u_{nn}$ - -A is invertible iff $\det A \ne 0$ - -$\det AB = (\det A)(\det B)$ - -λ is an eigenvalue of A iff $\det (A-\lambda I) = 0$ (the characteristic equation of A) - -The eigenvalues of A (n × n) are the solutions for λ. Multiplicity is the number of solutions for λ. - -=== Similarity === -given A and B (n × n), A is similar to B if ∃p s.t. $A = PBP^{-1}$ - -If A and B are similar, then they have the same characteristic polynomials (and the same eigenvalues with the same multiplicities) - -=== Diagonalization === -A is diagonalizable if A is similar to a diagonal matrix. - -Diagonalization Theorem: A (n × n) is diagonalizable iff A has n linearly independent eigenvectors (the eigenbasis for $\Re^n$) - -$A = P D P^{-1} \leftrightarrow$ columns of P are linearly independent eigenvectors, and the diagonal values of D are the eigenvalues corresponding to the eigenvectors in P. - -How to diagonalize a matrix: - 1. Find eigenvalues of A - 2. Find n = λ linearly independent eigenvectors - 3. Construct $P = \begin{bmatrix} p_1 & p_2 & \ldots & p_n \end{bmatrix}$ - 4. Construct D from the corresponding eigenvalues on the diagonal. Order of eigenvalues must match the order for columns of P. - 5. Check $A = p D p^{-1} \leftrightarrow Ap = pD$ (if p is invertible) - -If A (n × n) has n distinct eigenvalues, it is diagonalizable. - diff --git a/content/lin-algebra-notes/src/img b/content/lin-algebra-notes/src/img @@ -1 +0,0 @@ -../img- \ No newline at end of file diff --git a/content/lin-algebra-notes/src/index.wiki b/content/lin-algebra-notes/src/index.wiki @@ -1,50 +0,0 @@ -= Linear Algebra = -If you need help with any of the topics, check out [[https://www.youtube.com/user/patrickJMT|PatrickJMT on Youtube]]. He has some of the best math videos on the internet. 
- -- [[/introduction#Introduction|Introduction]] - - [[/introduction#Introduction#Linear Equations|Linear Equations]] - - [[/introduction#Introduction#Matrix notation|Matrix notation]] - - [[/introduction#Introduction#Reducing a matrix|Reducing a matrix]] - - [[/introduction#Introduction#Vectors|Vectors]] -- [[/solution-sets-of-linear-systems#Solution sets of linear systems|Solution sets of linear systems]] - - [[/solution-sets-of-linear-systems#Solution sets of linear systems#Homogeneous linear systems|Homogeneous linear systems]] - - [[/solution-sets-of-linear-systems#Solution sets of linear systems#Parametric vector form|Parametric vector form]] - - [[/solution-sets-of-linear-systems#Solution sets of linear systems#Linear independence|Linear independence]] -- [[/linear-transformations#Linear transformations|Linear transformations]] -- [[/matrix-operations#Matrix operations|Matrix operations]] - - [[/matrix-operations#Matrix operations#Sums and scalar multiples|Sums and scalar multiples]] - - [[/matrix-operations#Matrix operations#Powers of a matrix|Powers of a matrix]] - - [[/matrix-operations#Matrix operations#Transpose of a matrix|Transpose of a matrix]] - - [[/matrix-operations#Matrix operations#Inverse of a matrix|Inverse of a matrix]] - - [[/matrix-operations#Matrix operations#Elementary matrices|Elementary matrices]] -- [[/applications-to-computer-graphics#Applications to computer graphics|Applications to computer graphics]] - - [[/applications-to-computer-graphics#Applications to computer graphics#Homogeneous coordinates|Homogeneous coordinates]] - - [[/applications-to-computer-graphics#Applications to computer graphics#Homogeneous coordinates#2D|2D]] - - [[/applications-to-computer-graphics#Applications to computer graphics#Homogeneous coordinates#3D|3D]] - - [[/applications-to-computer-graphics#Applications to computer graphics#Composite transformations|Composite transformations]] - - [[/applications-to-computer-graphics#Applications to computer graphics#Perspective projections|Perspective projections]] -- [[/vector-spaces#Vector spaces|Vector spaces]] - - [[/vector-spaces#Vector spaces#Column space and null space of a matrix|Column space and null space of a matrix]] - - [[/vector-spaces#Vector spaces#Basis for a subspace|Basis for a subspace]] - - [[/vector-spaces#Vector spaces#Coordinates|Coordinates]] - - [[/vector-spaces#Vector spaces#Dimension of a subspace|Dimension of a subspace]] -- [[/eigenvectors-eigenvalues#Eigenvectors & eigenvalues|Eigenvectors & eigenvalues]] - - [[/eigenvectors-eigenvalues#Eigenvectors & eigenvalues#Determinant|Determinant]] - - [[/eigenvectors-eigenvalues#Eigenvectors & eigenvalues#Similarity|Similarity]] - - [[/eigenvectors-eigenvalues#Eigenvectors & eigenvalues#Diagonalization|Diagonalization]] -- [[/orthogonality-least-squares#Orthogonality & least squares|Orthogonality & least squares]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Inner (dot) product & uses|Inner (dot) product & uses]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Inner (dot) product & uses#Length of a vector|Length of a vector]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Inner (dot) product & uses#Distance between vectors|Distance between vectors]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Orthogonal complement|Orthogonal complement]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Orthogonal sets|Orthogonal sets]] - - [[/orthogonality-least-squares#Orthogonality & least 
squares#Orthogonal projections|Orthogonal projections]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Orthogonal projections#Orthogonal decomposition|Orthogonal decomposition]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Orthogonal projections#Best approximation|Best approximation]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Orthogonal projections#When basis for W is an orthonormal set|When basis for W is an orthonormal set]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Gram-Schmidt process|Gram-Schmidt process]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Gram-Schmidt process#QR factorization|QR factorization]] - - [[/orthogonality-least-squares#Orthogonality & least squares#Least-squares problems|Least-squares problems]] -- [[/symmetric-matrices#Symmetric matrices|Symmetric matrices]] - - [[/symmetric-matrices#Symmetric matrices#Diagonalization of symmetric matrices|Diagonalization of symmetric matrices]] - - [[/symmetric-matrices#Symmetric matrices#Singular value decomposition|Singular value decomposition]] diff --git a/content/lin-algebra-notes/src/introduction.wiki b/content/lin-algebra-notes/src/introduction.wiki @@ -1,116 +0,0 @@ -== Introduction == -=== Linear Equations === -"the study of linear equations" - -a linear equation in the variables $x_1, \dots, x_n$ has the form $a_1 x_1+\dots+a_n x_n = b$, with $a_1, \dots, a_n$ being the _coefficients_ - -geometric interpretation: - -{{$ -\begin{alignat*}{3} -&n=1\qquad &&a_1 x_1 = b \longrightarrow x_1 = \frac{b}{a_1}\qquad &&\text{(point on a line in $\Re$)}\\ -&n=2\qquad &&a_1 x_1 + a_2 x_2 = b \longrightarrow x_2 = \frac{b}{a_2} - \frac{a_1}{a_2}\qquad &&\text{(line in a plane in $\Re^2$)}\\ -&n=3\qquad &&a_1 x_1 + a_2 x_2 + a_3 x_3 = b\qquad &&\text{(planes in 3D space, in $\Re^3$)} -\end{alignat*} -}}$ - -in general, $n-1$-dimensional planes in n-dimensional space - -system of linear equations $x_1, \dots, x_n$ is a collection of linear equations in these variables. - -$x_1 - 2x_2 = -1$ - -$-x_1 + 3x_2 = 3$ - -If you graph them, you get this: - -{{file:img/graph-example.png|System of equations graph}} - -the solution is the intersection. - -a system of linear equations has: - 1. no solutions (inconsistent) -- e.g. parallel lines - 2. exactly 1 solution (consistent) - 3. infinitely many solutions (consistent) - e.g. the same line twice - -two linear systems are "equivalent" if they have the same solutions. - -=== Matrix notation === -| Equation | (Augmented) coefficient matrix notation | -|-------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------| -| $\begin{alignat*}{6} &x_1 &&-&&2x_2 &&+&&x_3 &&= 0\\ & && &&2x_2 &&-&&8x_3 &&= 8\\ &5x_1 && && &&-&&5x_3 &&= 10\end{alignat*}$ | $\begin{bmatrix} 1 & -2 & 1 & 0\\ 0 & 2 & -8 & 8\\ 5 & 0 & -5 & 10 \end{bmatrix}$ | - -the strategy to solve is to replace the system with an equivalent system that is easier to solve. - -elementary row operations: - 1. replacement: add rows - 2. scaling: multiply by constant (non-zero scalar) - 3. interchange: swap two rows - -all of these are reversible & don't change the solution set. - -Matrices A and B are equivalent ($A \sim B$) if there is a sequence of elementary operations to transform A to B. - -If augmented matrices of two systems are row-equivalent, then the systems are equivalent. 
- -Matrix A is in echelon form if: - a) zero rows are below non-zero rows - b) the leading entry of a row is contained in a column that is to the left of the leading entry of the row below it. - -A is in reduced echelon form if: - a) A is in echelon form - b) all leading entries are 1 - c) the leading entry is the only non-zero entry in that column - -=== Reducing a matrix === -The reduced echelon form of a matrix is unique. - -every matrix is row-equivalent to a unique reduced echelon matrix. - -the positions of the leading entries in an echelon matrix are unique - -$\begin{bmatrix} \textbf{1} & * & * & *\\ 0 & 0 & \textbf{1} & *\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix}$ - -the values in bold are pivot positions. the columns containing those values are pivot columns. - -Row reduction algorithm: - 1. Start with leftmost non-zero column (pivot column) - 2. Select a non-zero entry as pivot and move it to the pivot position. - 3. Create zeros below the pivot position. - 4. Ignore the row containing the pivot position & repeat steps 1-3 until solved. The matrix will be in echelon form. - 5. Make pivot positions equal to 1, create zeros in all pivot columns. Start with the rightmost column. The matrix will be in reduced echelon form. - -Side note: a computer chooses as pivot the entry that's smallest in absolute value to minimize the round-off error. - -Basic variables correspond to pivot columns. Free variables correspond to non-pivot columns. You solve the equation by expressing basic variables in terms of free variables. - -The matrix can be written in parametric form, example with $x_3$ being a free variable: - -$\binom{x_1}{x_2} = \big\{ \binom{1}{4} + \binom{5}{-1} x_3 \;\rvert\; x_3 \in \Re \big\}$ - -if there are any free variables, there are infinite solutions. - -=== Vectors === -A vector is a line. If you have a vector in the form $\begin{bmatrix} a\\b\end{bmatrix}$, you can draw it as an arrow from the origin ending at the point $(a,b)$. - -To add vectors, add the individual cells together. - -A vector equation $a_1 x_1 + a_2 x_2 + \dots + a_n x_n = b$ has the same solution set as $\begin{bmatrix} a_1 & a_2 & \dots & a_n & b \end{bmatrix}$. - -When asked whether $b$ is in $\text{Span} \{v_1, \dots, v_p\}$, you have to check whether the augmented matrix $\begin{bmatrix} v_1 & \dots & v_p & b \end{bmatrix}$ has a solution. - -$b$ is a linear combination of $A$ if $Ax = b$ has a solution. - -The span is the set of all linear combinations of the vectors. - -To calculate $Ax$, if the number of columns in A is the same as the number of rows in x, you can follow the definition: - -{{$ -Ax = \begin{bmatrix} a_1 & a_2 & \dots & a_n \end{bmatrix} \begin{bmatrix} x_1 \\ \dots \\ x_n \end{bmatrix} = x_1 a_1 + x_2 a_2 + \dots + x_n a_n -}}$ - -You also have the rules (matrix A, vectors u and v, scalar c): - * $A(u+v) = Au + Av$ - * $A(cu) = c(Au)$ - - diff --git a/content/lin-algebra-notes/src/linear-transformations.wiki b/content/lin-algebra-notes/src/linear-transformations.wiki @@ -1,42 +0,0 @@ -== Linear transformations == -definitions: - * transformation, function, mapping: rule assigning to each vector in $\Re^n$ a vector $T(x)$ in $\Re^m$ - * domain: set $\Re^n$ - * codomain: set $\Re^m$ - * image: vector T(x) - * range: set of all images T(x) - -a projection transformation happens if you go to a lower dimension (e.g. $x_3$ becomes 0). a shear transformation happens if a 2D square is tilted sideways into a parallelogram. 
- -a transformation T is linear if: - i) $T(u + v) = T(u) + T(v)$ for all $u,v \in \text{Domain}(T)$ - ii) $T(cu) = cT(u)$ for all scalars c and all $u \in \text{Domain}(T)$ - -linear transformations preserve operations of vector addition and scalar multiplication. - -if T is a linear transformation, then: - * $T(0) = 0)$ - * $T(cu + dv) = cT(u) + dT(v)$ - * $T(c_1 v_2 + \dots + c_p v_p) = c_1 T(v_1) + \dots + c_p T(v_p)$ (superposition principle) - -given scalar r, and $T: \Re^2 \rightarrow \Re^2$ by $T(x) = rx$ - * contraction: when $0 \leq r \leq 1$ - * dilation: when $r > 1$ - -every linear transformation $\Re^n \rightarrow \Re^m$ is a matrix transformation $x \mapsto Ax$. - -$A = [[T(e_1) \dots T(e_n)]$, where $e_j$ is the jth column of the identity matrix in $\Re^n$ - -geometric linear transformations of $\Re^2$: - -{{file:img/geo-reflections.png|Reflections}} {{file:img/geo-contract-shears.png|Contractions/expansions and shears}} {{file:img/geo-projections.png|Projections}} - -types of mappings: - * $T: \Re^n \rightarrow \Re^m$ is 'onto' $\Re^m$ if _each_ b in $\Re^m$ is the image of _at least one_ x in $\Re^n$. - * $T: \Re^n \rightarrow \Re^m$ is one-to-one if _each_ b in $\Re^m$ is the image of _max one_ x in $\Re^n$. - * so if $T(x) = 0$ only has the trivial solution - -for mapping $T: \Re^n \rightarrow \Re^m$ and standard matrix $A$: - * T maps $\Re^n$ onto $\Re^m$ iff columns of matrix span $\Re^m$ - * T is one-to-one iff columns of matrix are linearly independent. - diff --git a/content/lin-algebra-notes/src/matrix-operations.wiki b/content/lin-algebra-notes/src/matrix-operations.wiki @@ -1,81 +0,0 @@ -== Matrix operations == -$a_{ij}$ is the entry in the ith row and jth column of A - -diagonal entries are $a_{11}$, $a_{22}$, etc. and form the main diagonal. if non-diagonal entries are zero, then it's a diagonal matrix. - -equal matrices have same size _and_ their corresponding entries are equal. - -=== Sums and scalar multiples === -sum A+B: sum corresponding entries in A and B. - -scalar multiple $rA$ is matrix whose columns are r times the corresponding columns in A (with r scalar). - -the usual rules of algebra apply to sums and scalar multiples of matrices. - -when matrix B multiplies vector x, it transforms x into vector $Bx$. if $Bx$ is multiplied by A, the result is $A(Bx)$. $A(Bx)$ is produced from x by composition of mappings, which can be represented as multiplication by a single matrix AB. - -$A(Bx) = (AB)x$ - -$AB = A \begin{bmatrix} b_1 & b_2 & \dots & b_p \end{bmatrix} = \begin{bmatrix} Ab_1 & Ab_2 & \dots & Ab_p \end{bmatrix}$ - -A is matrix, B is matrix with columns $b_1 \dots b_p$. - -each column of AB is a linear combination of columns of A using weights from corresponding column of B. AB has the same number of rows as A and same number of columns as B. if the number of columns of A does not match number of rows of B, the product is undefined. in general, AB ≠ BA. - -if product AB is defined, then: - - -$(AB)_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + \dots + a_{in} b_{nj}$ - -$row_i (AB) = row_i (A) \times B$ - -a nifty trick: if multiplying matrix m×n by matrix n×o, the product will be a matrix m×o. - -=== Powers of a matrix === -$A^k = \underbrace{A \dots A}_{k}$ - -with $A$ an n × n matrix and k a positive integer. 
- -=== Transpose of a matrix === -a matrix $A'$ whose columns are made up of the corresponding rows of $A$ - -properties: - * $(A^T)^T = A$ - * $(A+B)^T = A^T + B^T$ - * $(rA)^T = rA^T$ with r a scalar - * $(AB)^T = B^T A^T$$ - -the transpose of a product of matrices == product of their transposes in reverse order - -=== Inverse of a matrix === -invertible (singular) if there is same size matrix C such that $CA = I$ and $AC = I$ where I is the n × n identity matrix. - -identity matrix: a matrix where the diagonals are all 1. - -C is uniquely determined by A, so: $A^{-1} A = I$. - -let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$ if $ad - bc \ne 0$ then $A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$ - -determinant: $\det A = ad - bc$ - -if A is invertible (determinant is not 0), then for each $b \in \Re^n$ the solution of $Ax = b$ is $A^{-1} b$. - -properties of inverse: - * $(A^{-1})^{-1} = A$ - * $(AB)^{-1} = B^{-1} A^{-1}$ (watch out for order!) - * $(A^T)^{-1} = (A^{-1})^T$ - -finding $A^{-1}$: - * Row reduce augmented matrix $\begin{bmatrix} A & I \end{bmatrix}$. - * if A is row equivalent to I, then $\begin{bmatrix} A & I \end{bmatrix}$ is row equivalent to $\begin{bmatrix} I & A^{-1} \end{bmatrix}$ - * otherwise, A doesn't have an inverse. - -=== Elementary matrices === -elementary matrix: obtained by performing single elementary row operation on identity matrix - -if elementary row operation is performed on m × n matrix A, result is EA, with E an m × m matrix obtained by performing same row operation on $I_m$ - -inverse of any elementary matrix E is of same type that transforms E back into I. - -an n² matrix A is only invertible if A is row equivalent to $I_n$. any sequence of elementary operations reducing A to $I_n$ also transforms $I_n$ into $A^{-1}$. - diff --git a/content/lin-algebra-notes/src/orthogonality-least-squares.wiki b/content/lin-algebra-notes/src/orthogonality-least-squares.wiki @@ -1,97 +0,0 @@ -== Orthogonality & least squares == -let $u,v \in \Re^n$. orthogonal iff: - * $u \cdot v = 0$ - * or $\|u\|^2 + \|v\|^2 = \|u+v\|^2$ - -=== Inner (dot) product & uses === -let $u,v \in \Re^n$. then, $u \cdot v = u^T v \in \Re$. - -in English, to calculate you just multiply the vectors row-wise, and sum all the results. - -Regular algebraic rules apply. - -$u \cdot u \geq 0$, only 0 iff u = 0. - -==== Length of a vector ==== -Let $v \in \Re^n$, then the norm (length) of v is $\|v\| = \sqrt{v \cdot v}$. - -Does the norm coincide with length of line segments? Yes: - -$x = \begin{bmatrix}a\\b\end{bmatrix}, \quad \|v\| = \sqrt{v \cdot v} = \sqrt{a^2 + b^2} = \text{Pythagoras}$ - -==== Distance between vectors ==== -Let $u,v \in \Re^n$. then, $\text{dist}(u,v) = \|u-v\|$. - -=== Orthogonal complement === -Let $W \subset \Re^n$ a subspace, then orthogonal complement of W is $W^\perp = \{x \in \Re^n | x \cdot v = 0 \forall u \in W \}$ - -properties: - * $(colA)^\perp = (NulA)^T$ - * $(NulA)^\perp = (colA)^T$ - -=== Orthogonal sets === -a set $\{v_1 \dots v_p\}$ is orthogonal if $v_i \cdot v_j = 0 \forall i,j$. then $\{v_1 \dots v_p\}$ is a basis for $\text{Span}\{v_1 \dots v_p\}$ - -An orthogonal basis is a basis that is also an orthogonal set - -Why orthogonal basis? Let $W \in \Re^n$ be subspace with orthogonal basis $\{u_1 \dots u_p\}$, then $W \ni y = c_1 u_1 + \ldots + c_p u_p$, with $c_i = \frac{y \cdot u_i}{u_i \cdot u_i}$ for i = 1...p. 
- -An orthonormal set/basis is an orthogonal set/basis consisting of unit vectors (like $\{e_1, \ldots, e_n\}\text{ for }\Re^n$). - -An m × matrix A has orthonormal columns iff $A^T A = I_n$ - * $(Ax) \cdot (Ay) = x \cdot y$ - * $\| Ax \| = \| x \|$ - -=== Orthogonal projections === -==== Orthogonal decomposition ==== -Let W be a subspace of $\Re^n$. Each y in $R^n$ can be written uniquely in $y = \hat{y}+z$ ($\hat{y} \in W,\; z \in W^\perp$) - -If $\{u_1, \ldots, u_p\}$ in orthogonal basis of W, then $\hat{y} = \frac{y \cdot u_1}{u_1 \cdot u_1} u_1 + \ldots + \frac{y \cdot u_p}{u_p \cdot u_p}u_p$ - -ŷ is an orthogonal projection of y onto W ($proj_w y$) - -==== Best approximation ==== -Let W be subspace of $\Re^n$, y a vector in $\Re^n$, ŷ an orthogonal projection of y onto W. - -Then $\|y-\hat{y}\| < \|y-v\|$ - -==== When basis for W is an orthonormal set ==== -If $\{u_1 \ldots u_p\}$ is orthonormal basis for subspace W of $\Re^n$, then $\text{proj}_w y = (y \cdot u_1)u_1 + \dots + (y \cdot u_p) u_p$ - -If U = $\begin{bmatrix} u_1 & u_2 & \dots & u_p \end{bmatrix}$, then $\text{proj}_w y = UU^T y \quad \forall y \in \Re^n$ - -=== Gram-Schmidt process === -An algorithm for producing orthogonal or orthonormal basis for any nonzero subspace of $\Re^n$. - -Given basis $\{ x_1 \dots x_p \}$ for nonzero subspace W of $\Re^n$, define: - -{{$%align*$ -v_1 &= x_1\\ -v_2 &= x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1\\ -v_3 &= x_3 - \frac{x_3 \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_3 \cdot v_2}{v_2 \cdot v_2} v_2\\ -\vdots \\ -v_p &= x_p - \frac{x_p \cdot v_1}{v_1 \cdot v_1} v_1 - \dots - \frac{x_p \cdot v_{p-1}}{v_{p-1} \cdot v_{p-1} v_{p-1}} -}}$ - -Then $\{v_1 \dots v_p\}$ is an orthogonal basis for W. - -$\text{Span}\{v_1 \dots v_k\} = \text{Span}\{x_1 \dots x+k\}$ for 1 ≤ k ≤ p. - -==== QR factorization ==== -If A is an m × n matrix, with linearly independent columns, then A can be factored as $A = QR$, where Q is he m×n matrix whose columns form an orthonormal basis for Col A, and R is n×n upper triangular invertible matrix with diagonal positive entries. - -=== Least-squares problems === -If a solution for $Ax = b$ does not exist and one is needed, try to find the best approximation x for $Ax = b$. - -General least-squares problem is to find x that makes $\| b - Ax\|$ as small as possible. - -If A is m×n and $b \in \Re^m$, a least-squares solution of $Ax = b$ is $\hat{x} \in \Re^n$ such that $\| b - A\hat{x} \| \leq \| b - Ax \|, \qquad \forall x \in \Re^n$. - -Least-square solution set of $Ax = b$ is the same as the solution set for $A^T Ax = A^T b$. - -Therefore, $\hat{x} = (A^T A)^{-1} A^T b$. - -Given an m×n matrix A with linearly independent columns, let $A = QR$ be a QR factorization of A. Then, for each $b \in \Re^m$, $Ax = b$ has unique least-squares solution: - -$\hat{x} = R^{-1} Q^T b$ - diff --git a/content/lin-algebra-notes/src/solution-sets-of-linear-systems.wiki b/content/lin-algebra-notes/src/solution-sets-of-linear-systems.wiki @@ -1,43 +0,0 @@ -== Solution sets of linear systems == -=== Homogeneous linear systems === -homogeneous: if you can write it in $Ax = 0$ where A is an $m \times n$ matrix and 0 is the zero vector in $\Re^m$ - * always has at least one solution (the trivial solution, $x = 0$). 
- * has a nontrivial solution iff there is a free variable - * if the equation has only one free variable, the solution is a line through the origin - * when there are two or more free variables, it's a line through the origin - * solution set is $\text{Span} \{v_1, \ldots, v_p\}$ for suitable vectors - -=== Parametric vector form === -implicit description: - * a simple equation - * e.g. $10x_1 - 3x_2 - 2x_3 = 0$ - -explicit description (parametric vector form): - * the solution to the equation as a set spanned by u and v - * of the form $x = su + tv$, with $s,t \in \Re$ - -the solution set of $Ax = 0$ is $x = tv$ with $t \in \Re$. - -if $Ax = b$ has a solution, then you get the solution set by translating the solution set of $Ax = 0$ using any particular solution p of $Ax = b$. The set is then $x = p + tv$ - -Writing a solution set in parametric vector form: - 1. Row reduce augmented matrix to echelon form - 2. Express each basic variable in terms of any free variables. - 3. Write a typical solution x as a vector, with entries depending on the (potential) free variables. - 4. Decompose x into a linear combination of vectors using free vars as parameters. - -=== Linear independence === - -linearly independent: - * set of vector equations: iff the vector equation has only the trivial solution ($x_1 = x_2 = x_3 = 0$) - * columns of matrix: iff $Ax = 0$ has _only_ the trivial solution - * one vector: iff v is not the zero vector - * two vectors: if neither of the vectors is a multiple of the other - -linearly dependent: - * iff at least one of the vectors is a linear combination of the others - * if there are more vectors than entries in each vector - * if the set contains the zero vector - -a set is linearly dependent iff it's not linearly independent. - diff --git a/content/lin-algebra-notes/src/symmetric-matrices.wiki b/content/lin-algebra-notes/src/symmetric-matrices.wiki @@ -1,27 +0,0 @@ -== Symmetric matrices == -symmetric if $A^T = A$ (also has to be square) - -=== Diagonalization of symmetric matrices === -If A is symmetric, any two eigenvectors from two different eigenspaces are orthogonal. - -An n×n matrix is orthogonally diagonalizable iff A is symmetric. - -An n×n matrix A: - * has n real eigenvalues, including multiplicities - * is orthogonally diagonalizable - * dimension of eigenspace for each eigenvalue λ == multiplicity of λ as root of the characteristic equation ($\det (A-\lambda I) = 0$) - * eigenspaces are mutually orthogonal (i.e. eigenvectors corresponding to different eigenvalues are orthogonal) - -=== Singular value decomposition === -singular values: square roots of eigenvalues of $A^T A$, denoted by $\sigma_1, \dots, \sigma_n$ in ascending order. They are also the lengths of vectors $Av_1, \dots, Av_n$. - -Suppose $\{v_1, \dots, v_n\}$ is an orthonormal basis for $\Re^n$ consisting of eigenvectors of $A^T A$ in ascending order, and suppose A has r nonzero singular values. - * Then $\{Av_1, \dots, Av_n\}$ is orthogonal basis for Col A, and rank == r. - * Then there exists Σ matrix m×n for which diagonal entries are first r singular values of A, and there exist matrices U (orthogonal, m²) and V (orthogonal, n²) such that $A = U \Sigma V T$. - * Col U are "left singular vectors" of A, Col V are "right singular vectors" of A. 
- -Let A be n², then the fact that "A is invertible" means that: - * $(\text{Col} A)^\perp = \{ 0 \}$ - * $(\text{Nul} A)^\perp = \Re^n$ - * $\text{Row} A = \Re^n$ - * A has n nonzero singular values diff --git a/content/lin-algebra-notes/src/vector-spaces.wiki b/content/lin-algebra-notes/src/vector-spaces.wiki @@ -1,42 +0,0 @@ -== Vector spaces == -subspace of $\Re^n$ is any set H in $\Re^n$ that has properties: - a) The zero vector is in H - b) For each u and v in H, the sum $u + v$ is in H - c) For each u in H and each scalar c, the vector $cu$ is in H - -the zero subspace is the set that only contains zero vector in $\Re^n$ - -=== Column space and null space of a matrix === -column space: set of all linear combinations of the columns of a matrix. it's the _span_ of the columns of the matrix. - -column space of m × n matrix is subspace of $\Re^m$ - -null space: set of all solutions of equation $Ax = 0$. - -null space of an m × n matrix is subspace of $\Re^n$. - -to determine if p is in Nul A, check if Ap = 0. if so, p is in Nul A. - -=== Basis for a subspace === -basis for subspace H of $\Re^n$ is linearly independent set in H spanning H - -the pivot columns of a matrix form the basis for its column space. - -=== Coordinates === -let $H \in \Re^n$ be subspace with $B = \{ b_1, \dots, b_p\}$. then for all x ∈ H, there are unique $c_1, \dots, c_p$ such that $x = c_1 b_2 + \dots + c_p b_p$. (to prove this theorem, use a contradiction on uniqueness) - -the coordinates of x w.r.t. B are $c_1, \dots, c_p$. - -the coordinate system of x w.r.t. B is $[x]_B = \begin{bmatrix}c_1\\ \dots\\ c_p\end{bmatrix}$ - -=== Dimension of a subspace === -let $H \in \Re^n$ be a subspace with basis $B=\{ b_1, \dots, b_p \}$. then every basis for H comprises p vectors. - -the dimension of H is the number of basis vectors in _any_ basis for H. 
- -dim Col A = #pivot columns (rank A) - -dim Nul A = #free variables in Ax = 0 - -Rank theorem: dim Col A + dim Nul A = #columns - diff --git a/content/lin-algebra-notes/style.css b/content/lin-algebra-notes/style.css @@ -1,38 +0,0 @@ -@charset 'UTF-8'; -@font-face{font-family:'FontAwesome';src:url('font/fontawesome-webfont.eot?v=4.0.1');src:url('font/fontawesome-webfont.eot?#iefix&v=4.0.1') format('embedded-opentype'),url('font/fontawesome-webfont.woff?v=4.0.1') format('woff'),url('font/fontawesome-webfont.ttf?v=4.0.1') format('truetype'),url('font/fontawesome-webfont.svg?v=4.0.1#fontawesomeregular') format('svg');font-weight:normal;font-style:normal} - -body { - margin: 0px; - padding: 1em; - background: #f3f2ed; - font-family: 'Lato', sans-serif; - font-size: 12pt; - font-weight: 300; - color: #8A8A8A; - padding-left: 50px; -} -h1 { - margin: 0px; - padding: 0px; - font-weight: 300; - text-align: center; -} -ul.toc li { - margin: 8px 0; -} -h3.name { - font-style: italic; - text-align: center; - font-weight: 300; - font-size: 20px; -} -a { - color: #D1551F; - } -a:hover { - color: #AF440F; -} - strong { - font-weight: 700; - color: #2A2A2A; - } diff --git a/content/lin-algebra-notes/symmetric-matrices.html b/content/lin-algebra-notes/symmetric-matrices.html @@ -1,82 +0,0 @@ -<!DOCTYPE html> -<html> -<head> -<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script> -<link rel="Stylesheet" type="text/css" href="style.css"> -<title>symmetric-matrices</title> -<meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -</head> -<body> - -<div id="Symmetric matrices"><h2 id="Symmetric matrices">Symmetric matrices</h2></div> -<p> -symmetric if \(A^T = A\) (also has to be square) -</p> - -<div id="Symmetric matrices-Diagonalization of symmetric matrices"><h3 id="Diagonalization of symmetric matrices">Diagonalization of symmetric matrices</h3></div> -<p> -If A is symmetric, any two eigenvectors from two different eigenspaces are orthogonal. -</p> - -<p> -An n×n matrix is orthogonally diagonalizable iff A is symmetric. -</p> - -<p> -An n×n matrix A: -</p> -<ul> -<li> -has n real eigenvalues, including multiplicities - -<li> -is orthogonally diagonalizable - -<li> -dimension of eigenspace for each eigenvalue λ == multiplicity of λ as root of the characteristic equation (\(\det (A-\lambda I) = 0\)) - -<li> -eigenspaces are mutually orthogonal (i.e. eigenvectors corresponding to different eigenvalues are orthogonal) - -</ul> - -<div id="Symmetric matrices-Singular value decomposition"><h3 id="Singular value decomposition">Singular value decomposition</h3></div> -<p> -singular values: square roots of eigenvalues of \(A^T A\), denoted by \(\sigma_1, \dots, \sigma_n\) in ascending order. They are also the lengths of vectors \(Av_1, \dots, Av_n\). -</p> - -<p> -Suppose \(\{v_1, \dots, v_n\}\) is an orthonormal basis for \(\Re^n\) consisting of eigenvectors of \(A^T A\) in ascending order, and suppose A has r nonzero singular values. -</p> -<ul> -<li> -Then \(\{Av_1, \dots, Av_n\}\) is orthogonal basis for Col A, and rank == r. - -<li> -Then there exists Σ matrix m×n for which diagonal entries are first r singular values of A, and there exist matrices U (orthogonal, m²) and V (orthogonal, n²) such that \(A = U \Sigma V T\). - -<li> -Col U are "left singular vectors" of A, Col V are "right singular vectors" of A. 
-
-</ul>
-
-<p>
-Let A be n², then the fact that "A is invertible" means that:
-</p>
-<ul>
-<li>
-\((\text{Col} A)^\perp = \{ 0 \}\)
-
-<li>
-\((\text{Nul} A)^\perp = \Re^n\)
-
-<li>
-\(\text{Row} A = \Re^n\)
-
-<li>
-A has n nonzero singular values
-
-</ul>
-
-</body>
-</html>
diff --git a/content/lin-algebra-notes/symmetric-matrices.md b/content/lin-algebra-notes/symmetric-matrices.md
@@ -0,0 +1,32 @@
++++
+title = 'Symmetric matrices'
+template = 'page-math.html'
++++
+
+# Symmetric matrices
+A is symmetric if $A^T = A$ (so A also has to be square)
+
+## Diagonalization of symmetric matrices
+If A is symmetric, any two eigenvectors from two different eigenspaces are orthogonal.
+
+An n×n matrix A is orthogonally diagonalizable iff A is symmetric.
+
+A symmetric n×n matrix A:
+* has n real eigenvalues, counting multiplicities
+* is orthogonally diagonalizable
+* the dimension of the eigenspace for each eigenvalue λ equals the multiplicity of λ as a root of the characteristic equation ($\det (A-\lambda I) = 0$)
+* eigenspaces are mutually orthogonal (i.e. eigenvectors corresponding to different eigenvalues are orthogonal)
+
+## Singular value decomposition
+singular values: the square roots of the eigenvalues of $A^T A$, denoted by $\sigma_1, \dots, \sigma_n$ in decreasing order. They are also the lengths of the vectors $Av_1, \dots, Av_n$.
+
+Suppose $\{v_1, \dots, v_n\}$ is an orthonormal basis for $\Re^n$ consisting of eigenvectors of $A^T A$, ordered by decreasing eigenvalue, and suppose A has r nonzero singular values.
+* Then $\{Av_1, \dots, Av_r\}$ is an orthogonal basis for Col A, and rank A = r.
+* Then there exists an m×n matrix Σ whose first r diagonal entries are the first r singular values of A (and which is zero elsewhere), and there exist orthogonal matrices U (m×m) and V (n×n) such that $A = U \Sigma V^T$.
+* The columns of U are the "left singular vectors" of A, the columns of V are the "right singular vectors" of A.
+
+Let A be n×n; then "A is invertible" is equivalent to:
+* $(\text{Col} A)^\perp = \{ 0 \}$
+* $(\text{Nul} A)^\perp = \Re^n$
+* $\text{Row} A = \Re^n$
+* A has n nonzero singular values
diff --git a/content/lin-algebra-notes/vector-spaces.html b/content/lin-algebra-notes/vector-spaces.html
@@ -1,96 +0,0 @@
-<!DOCTYPE html>
-<html>
-<head>
-<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
-<link rel="Stylesheet" type="text/css" href="style.css">
-<title>vector-spaces</title>
-<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
-</head>
-<body>
-
-<div id="Vector spaces"><h2 id="Vector spaces">Vector spaces</h2></div>
-<p>
-subspace of \(\Re^n\) is any set H in \(\Re^n\) that has properties:
-</p>
-<ol>
-<li>
-The zero vector is in H
-
-<li>
-For each u and v in H, the sum \(u + v\) is in H
-
-<li>
-For each u in H and each scalar c, the vector \(cu\) is in H
-
-</ol>
-
-<p>
-the zero subspace is the set that only contains zero vector in \(\Re^n\)
-</p>
-
-<div id="Vector spaces-Column space and null space of a matrix"><h3 id="Column space and null space of a matrix">Column space and null space of a matrix</h3></div>
-<p>
-column space: set of all linear combinations of the columns of a matrix. it's the <em>span</em> of the columns of the matrix.
-</p>
-
-<p>
-column space of m × n matrix is subspace of \(\Re^m\)
-</p>
-
-<p>
-null space: set of all solutions of equation \(Ax = 0\).
-</p>
-
-<p>
-null space of an m × n matrix is subspace of \(\Re^n\).
-</p>
-
-<p>
-to determine if p is in Nul A, check if Ap = 0. if so, p is in Nul A.
-</p>
-
-<div id="Vector spaces-Basis for a subspace"><h3 id="Basis for a subspace">Basis for a subspace</h3></div>
-<p>
-basis for subspace H of \(\Re^n\) is linearly independent set in H spanning H
-</p>
-
-<p>
-the pivot columns of a matrix form the basis for its column space.
-</p>
-
-<div id="Vector spaces-Coordinates"><h3 id="Coordinates">Coordinates</h3></div>
-<p>
-let \(H \in \Re^n\) be subspace with \(B = \{ b_1, \dots, b_p\}\). then for all x ∈ H, there are unique \(c_1, \dots, c_p\) such that \(x = c_1 b_2 + \dots + c_p b_p\). (to prove this theorem, use a contradiction on uniqueness)
-</p>
-
-<p>
-the coordinates of x w.r.t. B are \(c_1, \dots, c_p\).
-</p>
-
-<p>
-the coordinate system of x w.r.t. B is \([x]_B = \begin{bmatrix}c_1\\ \dots\\ c_p\end{bmatrix}\)
-</p>
-
-<div id="Vector spaces-Dimension of a subspace"><h3 id="Dimension of a subspace">Dimension of a subspace</h3></div>
-<p>
-let \(H \in \Re^n\) be a subspace with basis \(B=\{ b_1, \dots, b_p \}\). then every basis for H comprises p vectors.
-</p>
-
-<p>
-the dimension of H is the number of basis vectors in <em>any</em> basis for H.
-</p>
-
-<p>
-dim Col A = #pivot columns (rank A)
-</p>
-
-<p>
-dim Nul A = #free variables in Ax = 0
-</p>
-
-<p>
-Rank theorem: dim Col A + dim Nul A = #columns
-</p>
-
-</body>
-</html>
diff --git a/content/lin-algebra-notes/vector-spaces.md b/content/lin-algebra-notes/vector-spaces.md
@@ -0,0 +1,47 @@
++++
+title = 'Vector spaces'
+template = 'page-math.html'
++++
+
+# Vector spaces
+a subspace of $\Re^n$ is any set H in $\Re^n$ that has these properties:
+a) The zero vector is in H
+b) For each u and v in H, the sum $u + v$ is in H
+c) For each u in H and each scalar c, the vector $cu$ is in H
+
+the zero subspace is the set that contains only the zero vector in $\Re^n$
+
+## Column space and null space of a matrix
+column space: the set of all linear combinations of the columns of a matrix. it's the _span_ of the columns of the matrix.
+
+the column space of an m × n matrix is a subspace of $\Re^m$
+
+null space: the set of all solutions of the equation $Ax = 0$.
+
+the null space of an m × n matrix is a subspace of $\Re^n$.
+
+to determine if p is in Nul A, check if Ap = 0. if so, p is in Nul A.
+
+## Basis for a subspace
+a basis for a subspace H of $\Re^n$ is a linearly independent set in H that spans H
+
+the pivot columns of a matrix form a basis for its column space.
+
+## Coordinates
+let $H \subseteq \Re^n$ be a subspace with basis $B = \{ b_1, \dots, b_p\}$. then for each x ∈ H, there are unique $c_1, \dots, c_p$ such that $x = c_1 b_1 + \dots + c_p b_p$. (to prove this theorem, use a contradiction on uniqueness)
+
+the coordinates of x w.r.t. B are $c_1, \dots, c_p$.
+
+the coordinate vector of x w.r.t. B is $[x]_B = \begin{bmatrix}c_1\\\\ \vdots\\\\ c_p\end{bmatrix}$
+
+## Dimension of a subspace
+let $H \subseteq \Re^n$ be a subspace with basis $B=\{ b_1, \dots, b_p \}$. then every basis for H comprises p vectors.
+
+the dimension of H is the number of basis vectors in _any_ basis for H.
+
+dim Col A = #pivot columns (rank A)
+
+dim Nul A = #free variables in Ax = 0
+
+Rank theorem: dim Col A + dim Nul A = #columns
+
diff --git a/templates/index.html b/templates/index.html
@@ -6,6 +6,10 @@
   <h1><a href="{{ config.base_url }}">{{ config.title }}</a></h1>
   <p>{{ config.description }}</p>
 
+  <p>
+    <a href="https://github.com/thezeroalpha/lectures.alex.balgavy.eu">Everything is on GitHub</a>, so if you want to fix/change something, feel free to open a pull request.
+ You can also contact me via the methods on my homepage.</p> + <p>My <a href="http://alex.balgavy.eu">homepage,</a> my <a href="http://blog.alex.balgavy.eu">blog,</a> and my <a href="http://github.com/thezeroalpha/dotfiles">dotfiles.</a>