lectures.alex.balgavy.eu

Lecture notes from university.

orthogonality-least-squares.html (6604B)


<!DOCTYPE html>
<html>
<head>
<script type="text/javascript" async src="https://cdn.jsdelivr.net/gh/mathjax/MathJax@2.7.5/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<link rel="Stylesheet" type="text/css" href="style.css">
<title>orthogonality-least-squares</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>

<div id="Orthogonality &amp; least squares"><h2 id="Orthogonality &amp; least squares">Orthogonality &amp; least squares</h2></div>
<p>
Let \(u,v \in \Re^n\). They are orthogonal iff:
</p>
<ul>
<li>
\(u \cdot v = 0\)

<li>
or, equivalently, \(\|u\|^2 + \|v\|^2 = \|u+v\|^2\)

</ul>

<div id="Orthogonality &amp; least squares-Inner (dot) product &amp; uses"><h3 id="Inner (dot) product &amp; uses">Inner (dot) product &amp; uses</h3></div>
<p>
Let \(u,v \in \Re^n\). Then \(u \cdot v = u^T v \in \Re\).
</p>

<p>
In words: multiply the vectors componentwise, and sum all the results.
</p>
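
<p>
A minimal numeric sketch of this in NumPy (the example vectors are made up for illustration):
</p>
<pre>
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -5.0, 2.0])

# Multiply componentwise and sum the results...
print(np.sum(u * v))   # 0.0
# ...which is exactly what the dot product computes.
print(np.dot(u, v))    # 0.0, so u and v are in fact orthogonal
</pre>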

<p>
The usual algebraic rules apply: the dot product is commutative, distributes over addition, and scalars can be factored out.
</p>

<p>
\(u \cdot u \geq 0\), with \(u \cdot u = 0\) iff \(u = 0\).
</p>

<div id="Orthogonality &amp; least squares-Inner (dot) product &amp; uses-Length of a vector"><h4 id="Length of a vector">Length of a vector</h4></div>
<p>
Let \(v \in \Re^n\), then the norm (length) of v is \(\|v\| = \sqrt{v \cdot v}\).
</p>

<p>
Does the norm coincide with the length of line segments? Yes:
</p>

<p>
\(v = \begin{bmatrix}a\\b\end{bmatrix}, \quad \|v\| = \sqrt{v \cdot v} = \sqrt{a^2 + b^2} = \text{Pythagoras}\)
</p>

<div id="Orthogonality &amp; least squares-Inner (dot) product &amp; uses-Distance between vectors"><h4 id="Distance between vectors">Distance between vectors</h4></div>
<p>
Let \(u,v \in \Re^n\). Then \(\text{dist}(u,v) = \|u-v\|\).
</p>
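
<p>
Both definitions are easy to check numerically (a NumPy sketch; the vectors are arbitrary examples):
</p>
<pre>
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([0.0, 0.0])

# Norm: sqrt(u . u), matching Pythagoras in 2D
print(np.sqrt(np.dot(u, u)))   # 5.0
print(np.linalg.norm(u))       # 5.0, NumPy's built-in norm

# Distance: the norm of the difference
print(np.linalg.norm(u - v))   # 5.0
</pre>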

<div id="Orthogonality &amp; least squares-Orthogonal complement"><h3 id="Orthogonal complement">Orthogonal complement</h3></div>
<p>
Let \(W \subset \Re^n\) be a subspace. The orthogonal complement of W is \(W^\perp = \{x \in \Re^n \mid x \cdot u = 0 \;\forall u \in W \}\)
</p>

<p>
Properties (checked numerically below):
</p>
<ul>
<li>
\((\text{Col}\,A)^\perp = \text{Nul}\,A^T\)

<li>
\((\text{Nul}\,A)^\perp = \text{Col}\,A^T\)

</ul>
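
<p>
A quick numeric illustration of the first property (a sketch; the matrix and the vector z are made-up examples):
</p>
<pre>
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# z satisfies A^T z = 0, i.e. z is in Nul A^T
z = np.array([1.0, 1.0, -1.0])
print(A.T @ z)   # [0. 0.]

# ...and indeed z is orthogonal to every column of A,
# i.e. z lies in (Col A)^perp
print(np.dot(z, A[:, 0]), np.dot(z, A[:, 1]))   # 0.0 0.0
</pre>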

<div id="Orthogonality &amp; least squares-Orthogonal sets"><h3 id="Orthogonal sets">Orthogonal sets</h3></div>
<p>
A set \(\{v_1 \dots v_p\}\) of nonzero vectors is orthogonal if \(v_i \cdot v_j = 0\) for all \(i \neq j\). Then \(\{v_1 \dots v_p\}\) is a basis for \(\text{Span}\{v_1 \dots v_p\}\)
</p>

<p>
An orthogonal basis is a basis that is also an orthogonal set.
</p>

<p>
Why orthogonal bases? Let \(W \subset \Re^n\) be a subspace with orthogonal basis \(\{u_1 \dots u_p\}\). Then every \(y \in W\) can be written as \(y = c_1 u_1 + \ldots + c_p u_p\), with \(c_i = \frac{y \cdot u_i}{u_i \cdot u_i}\) for \(i = 1 \dots p\): the weights come directly from dot products, with no linear system to solve.
</p>
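
<p>
A sketch of these coordinate formulas in NumPy (the orthogonal basis and the vector y are made-up examples):
</p>
<pre>
import numpy as np

# An orthogonal (not orthonormal) basis of a plane W in R^3
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])

y = np.array([3.0, 5.0, 0.0])   # some y in W

# Weights come directly from dot products
c1 = np.dot(y, u1) / np.dot(u1, u1)   # 4.0
c2 = np.dot(y, u2) / np.dot(u2, u2)   # -1.0
print(np.allclose(c1 * u1 + c2 * u2, y))   # True
</pre>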

<p>
An orthonormal set/basis is an orthogonal set/basis consisting of unit vectors (like \(\{e_1, \ldots, e_n\}\text{ for }\Re^n\)).
</p>

<p>
An m × n matrix A has orthonormal columns iff \(A^T A = I_n\). In that case (verified numerically below):
</p>
<ul>
<li>
\((Ax) \cdot (Ay) = x \cdot y\)

<li>
\(\| Ax \| = \| x \|\)

</ul>
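
<p>
A minimal numeric check of these claims (a sketch; the matrix is a made-up example whose columns are orthonormal by construction):
</p>
<pre>
import numpy as np

s = 1 / np.sqrt(2)
A = np.array([[s,  s],
              [s, -s],
              [0.0, 0.0]])   # 3x2, orthonormal columns

print(np.allclose(A.T @ A, np.eye(2)))   # True: A^T A = I_n

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(np.dot(A @ x, A @ y), np.dot(x, y))        # both 1.0
print(np.linalg.norm(A @ x), np.linalg.norm(x))  # both sqrt(5)
</pre>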

<div id="Orthogonality &amp; least squares-Orthogonal projections"><h3 id="Orthogonal projections">Orthogonal projections</h3></div>
<div id="Orthogonality &amp; least squares-Orthogonal projections-Orthogonal decomposition"><h4 id="Orthogonal decomposition">Orthogonal decomposition</h4></div>
<p>
Let W be a subspace of \(\Re^n\). Each y in \(\Re^n\) can be written uniquely as \(y = \hat{y}+z\) (\(\hat{y} \in W,\; z \in W^\perp\))
</p>

<p>
If \(\{u_1, \ldots, u_p\}\) is an orthogonal basis of W, then \(\hat{y} = \frac{y \cdot u_1}{u_1 \cdot u_1} u_1 + \ldots + \frac{y \cdot u_p}{u_p \cdot u_p}u_p\)
</p>

<p>
ŷ is the orthogonal projection of y onto W (\(\text{proj}_W y\))
</p>

<div id="Orthogonality &amp; least squares-Orthogonal projections-Best approximation"><h4 id="Best approximation">Best approximation</h4></div>
<p>
Let W be a subspace of \(\Re^n\), y a vector in \(\Re^n\), ŷ the orthogonal projection of y onto W.
</p>

<p>
Then \(\|y-\hat{y}\| &lt; \|y-v\|\) for all \(v \in W\) with \(v \neq \hat{y}\), i.e. ŷ is the point of W closest to y.
</p>

<div id="Orthogonality &amp; least squares-Orthogonal projections-When basis for W is an orthonormal set"><h4 id="When basis for W is an orthonormal set">When basis for W is an orthonormal set</h4></div>
<p>
If \(\{u_1 \ldots u_p\}\) is an orthonormal basis for a subspace W of \(\Re^n\), then \(\text{proj}_W y = (y \cdot u_1)u_1 + \dots + (y \cdot u_p) u_p\)
</p>

<p>
If \(U = \begin{bmatrix} u_1 &amp; u_2 &amp; \dots &amp; u_p \end{bmatrix}\), then \(\text{proj}_W y = UU^T y \quad \forall y \in \Re^n\)
</p>
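
<p>
The sum formula and the matrix form can be compared numerically (a sketch; the orthonormal basis and the vector y are made-up examples):
</p>
<pre>
import numpy as np

# Orthonormal basis of a plane W in R^3
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
y = np.array([2.0, 3.0, 7.0])

# Sum formula: proj_W y = (y.u1)u1 + (y.u2)u2
proj = np.dot(y, u1) * u1 + np.dot(y, u2) * u2

# Matrix form: U U^T y
U = np.column_stack([u1, u2])
print(np.allclose(U @ U.T @ y, proj))   # True; both give [2. 3. 0.]

# The residual z = y - proj lies in W^perp, as the
# orthogonal decomposition y = proj + z requires
z = y - proj
print(np.dot(z, u1), np.dot(z, u2))     # 0.0 0.0
</pre>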

<div id="Orthogonality &amp; least squares-Gram-Schmidt process"><h3 id="Gram-Schmidt process">Gram-Schmidt process</h3></div>
<p>
An algorithm for producing an orthogonal or orthonormal basis for any nonzero subspace of \(\Re^n\).
</p>

<p>
Given a basis \(\{ x_1 \dots x_p \}\) for a nonzero subspace W of \(\Re^n\), define:
</p>

<p>
\begin{align*}
v_1 &amp;= x_1\\
v_2 &amp;= x_2 - \frac{x_2 \cdot v_1}{v_1 \cdot v_1} v_1\\
v_3 &amp;= x_3 - \frac{x_3 \cdot v_1}{v_1 \cdot v_1} v_1 - \frac{x_3 \cdot v_2}{v_2 \cdot v_2} v_2\\
\vdots \\
v_p &amp;= x_p - \frac{x_p \cdot v_1}{v_1 \cdot v_1} v_1 - \dots - \frac{x_p \cdot v_{p-1}}{v_{p-1} \cdot v_{p-1}} v_{p-1}
\end{align*}
</p>

<p>
Then \(\{v_1 \dots v_p\}\) is an orthogonal basis for W.
</p>

<p>
\(\text{Span}\{v_1 \dots v_k\} = \text{Span}\{x_1 \dots x_k\}\) for 1 ≤ k ≤ p.
</p>
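
<p>
The process translates directly into code (a minimal NumPy sketch of the formulas above; the function name and the example basis are made up, and there is no normalization step or handling of nearly dependent inputs):
</p>
<pre>
import numpy as np

def gram_schmidt(X):
    """Turn the rows of X (a basis x_1..x_p) into an orthogonal basis v_1..v_p."""
    vs = []
    for x in X:
        v = x.copy()
        # Subtract the projection of x onto each earlier v
        for w in vs:
            v -= (np.dot(x, w) / np.dot(w, w)) * w
        vs.append(v)
    return np.array(vs)

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
V = gram_schmidt(X)
print(V)                    # rows: [1, 1, 0] and [0.5, -0.5, 1]
print(np.dot(V[0], V[1]))   # 0.0: the new basis is orthogonal
</pre>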

<div id="Orthogonality &amp; least squares-Gram-Schmidt process-QR factorization"><h4 id="QR factorization">QR factorization</h4></div>
<p>
If A is an m × n matrix with linearly independent columns, then A can be factored as \(A = QR\), where Q is the m×n matrix whose columns form an orthonormal basis for Col A, and R is an n×n upper triangular invertible matrix with positive diagonal entries.
</p>
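
<p>
NumPy computes this factorization directly (a sketch with a made-up matrix; note that np.linalg.qr may return an R with negative diagonal entries, in which case the signs of the corresponding rows of R and columns of Q can be flipped):
</p>
<pre>
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)   # "reduced" QR: Q is m x n, R is n x n
print(np.allclose(Q @ R, A))             # True
print(np.allclose(Q.T @ Q, np.eye(2)))   # True: orthonormal columns
print(R)                                 # upper triangular
</pre>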

<div id="Orthogonality &amp; least squares-Least-squares problems"><h3 id="Least-squares problems">Least-squares problems</h3></div>
<p>
If a solution for \(Ax = b\) does not exist and one is needed, try to find the best approximation x for \(Ax = b\).
</p>

<p>
The general least-squares problem is to find an x that makes \(\| b - Ax\|\) as small as possible.
</p>

<p>
If A is m×n and \(b \in \Re^m\), a least-squares solution of \(Ax = b\) is \(\hat{x} \in \Re^n\) such that \(\| b - A\hat{x} \| \leq \| b - Ax \|, \qquad \forall x \in \Re^n\).
</p>

<p>
The least-squares solution set of \(Ax = b\) is the same as the solution set of the normal equations \(A^T Ax = A^T b\).
</p>

<p>
Therefore, if \(A^T A\) is invertible (equivalently, if the columns of A are linearly independent), \(\hat{x} = (A^T A)^{-1} A^T b\).
</p>
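
<p>
A sketch of solving the normal equations for an overdetermined system (the data is made up; NumPy's np.linalg.lstsq does the same job more robustly):
</p>
<pre>
import numpy as np

# 3 equations, 2 unknowns: no exact solution in general
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

# Solve A^T A x = A^T b rather than inverting A^T A explicitly
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)                                 # [5. -3.]
print(np.linalg.lstsq(A, b, rcond=None)[0])  # same answer
</pre>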

<p>
Given an m×n matrix A with linearly independent columns, let \(A = QR\) be a QR factorization of A. Then, for each \(b \in \Re^m\), \(Ax = b\) has the unique least-squares solution:
</p>

<p>
\(\hat{x} = R^{-1} Q^T b\)
</p>
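
<p>
In code, this is one QR factorization plus a solve with the triangular matrix R (a sketch reusing the made-up example above; np.linalg.solve works here, though a dedicated triangular solver would exploit R's structure):
</p>
<pre>
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)
# Solve R x = Q^T b instead of forming R^{-1} explicitly
x_hat = np.linalg.solve(R, Q.T @ b)
print(x_hat)   # [5. -3.], matching the normal-equations solution
</pre>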

</body>
</html>