lectures.alex.balgavy.eu

Lecture notes from university.
git clone git://git.alex.balgavy.eu/lectures.alex.balgavy.eu.git

lecture-2.md (3494B)


+++
title = "Lecture 2"
template = "page-math.html"
+++

## Error-detecting codes
If codeword v ∈ C is sent and word w ∈ Kⁿ is received, the error pattern is u = v + w.

The error is detected if u is not a codeword.
The error patterns that cannot be detected are exactly the (nonzero) sums of codewords.

Distance of a code: the smallest d(v, w) over all pairs of distinct codewords v ≠ w.

A code of distance d detects all error patterns of weight at most d − 1, and there is at least one pattern of weight d it does not detect.

A code is t-error-detecting if it detects all patterns of weight at most t and fails to detect at least one pattern of weight t + 1.
So a code of distance d is a (d − 1)-error-detecting code.

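Over K = {0, 1} these definitions can be checked by brute force. A minimal sketch, assuming the toy repetition code C = {000, 111} (an example not taken from the notes):

```python
# Assumed toy binary repetition code
C = [(0, 0, 0), (1, 1, 1)]

def weight(u):
    # number of nonzero positions
    return sum(u)

def add(v, w):
    # componentwise sum over K = GF(2)
    return tuple((a + b) % 2 for a, b in zip(v, w))

# distance of the code: smallest d(v, w) over distinct codewords,
# which over GF(2) equals the weight of v + w
d = min(weight(add(v, w)) for v in C for w in C if v != w)
print(d)  # 3

# an error pattern u goes undetected iff v + u lands on a codeword,
# i.e. iff u is a nonzero sum of two codewords
undetected = {add(v, w) for v in C for w in C} - {(0,) * 3}
print(undetected)  # {(1, 1, 1)}: only the weight-3 pattern escapes detection
```

Consistent with the statement above: d = 3, so every pattern of weight ≤ 2 is detected, and at least one pattern of weight 3 is not.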
## Error-correcting codes
A code C corrects error pattern u if, for every v ∈ C, v + u is closer to v than to any other codeword.

A code of distance d corrects all error patterns of weight $\leq \lfloor\frac{d-1}{2}\rfloor$, and at least one pattern of weight $1+\lfloor\frac{d-1}{2}\rfloor$ is not corrected.

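The correction criterion can be tested directly. A sketch, again assuming the toy code C = {000, 111} of distance 3, which by the bound above should correct weight ≤ 1 and fail on some pattern of weight 2:

```python
# Assumed toy code of distance 3
C = [(0, 0, 0), (1, 1, 1)]

def dist(v, w):
    # Hamming distance: number of positions where v and w differ
    return sum(a != b for a, b in zip(v, w))

def add(v, w):
    return tuple((a + b) % 2 for a, b in zip(v, w))

def corrects(C, u):
    # C corrects u iff for every codeword v, the received word v + u
    # is strictly closer to v than to any other codeword
    return all(
        dist(add(v, u), v) < min(dist(add(v, u), w) for w in C if w != v)
        for v in C
    )

print(corrects(C, (1, 0, 0)))  # True: weight 1 = floor((3-1)/2)
print(corrects(C, (1, 1, 0)))  # False: this weight-2 pattern is not corrected
```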
## Linear codes
A code C is linear if v + w is a word in C whenever v and w are in C.

The distance of a linear code is the minimum weight of any nonzero codeword.

A vector w is a linear combination of vectors $v_{1}, \dots, v_{k}$ if there are scalars $a_1, \dots, a_k$ such that $w = a_{1} v_{1} + \dots + a_{k} v_{k}$.

The linear span \<S\> is the set of all linear combinations of vectors in S.

For a subset S of Kⁿ, the code C = \<S\> consists of: the zero word, all words in S, and all sums of words in S.

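Over GF(2) the span can be enumerated as all subset sums of S. A sketch with an assumed example set S:

```python
from itertools import combinations

def span(S, n):
    # <S>: every sum of a subset of S over GF(2);
    # the empty sum contributes the zero word
    words = {(0,) * n}
    for r in range(1, len(S) + 1):
        for subset in combinations(S, r):
            s = (0,) * n
            for v in subset:
                s = tuple((a + b) % 2 for a, b in zip(s, v))
            words.add(s)
    return words

S = [(1, 0, 1), (0, 1, 1)]  # assumed example
C = span(S, 3)
print(sorted(C))  # the zero word, both words of S, and their sum (1, 1, 0)
```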
## Scalar/dot product
$\begin{aligned}
v &= (a_{1}, \dots, a_{n}) \\\\
w &= (b_{1}, \dots, b_{n}) \\\\
v \cdot w &= a_{1} b_{1} + \dots + a_{n} b_{n}
\end{aligned}$

- orthogonal: v ⋅ w = 0
- v is orthogonal to a set S if ∀w ∈ S, v ⋅ w = 0
- $S^{\perp}$, the orthogonal complement: the set of all vectors orthogonal to S

For any subset S of a vector space V, $S^{\perp}$ is a subspace of V.

If C = \<S\>, then $C^{\perp} = S^{\perp}$, and $C^{\perp}$ is the _dual code_ of C.

To find $C^{\perp}$, compute the words whose dot product with every element of S is 0.

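The last line translates into a brute-force search over Kⁿ. A sketch, with an assumed example set S:

```python
from itertools import product

def dual(S, n):
    # C⊥ = S⊥: all words of K^n whose dot product with
    # every word of S is 0 (mod 2)
    return {
        v for v in product((0, 1), repeat=n)
        if all(sum(a * b for a, b in zip(v, w)) % 2 == 0 for w in S)
    }

S = [(1, 0, 1), (0, 1, 1)]  # assumed example; <S> has dimension 2
Cperp = dual(S, 3)
print(Cperp)  # {(0, 0, 0), (1, 1, 1)}: dimension 1, and 2 + 1 = n = 3
```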
## Linear independence
A set $S = \{v_{1}, \dots, v_{k}\}$ is linearly dependent if there are scalars $a_1, \dots, a_k$, not all zero, such that $a_{1} v_{1} + \dots + a_{k} v_{k} = 0$.

If all the scalars have to be zero ⇒ linearly independent.

To find a largest linearly independent subset: iteratively eliminate words that are linear combinations of the others.

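Over GF(2) the scalars are just 0 and 1, so independence can be tested by trying every nonzero combination. A sketch (the example vectors are assumptions):

```python
from itertools import product

def independent(S):
    # S is linearly dependent iff some nonzero 0/1 combination
    # of its vectors sums to the zero word
    n = len(S[0])
    for coeffs in product((0, 1), repeat=len(S)):
        if any(coeffs):
            total = (0,) * n
            for c, v in zip(coeffs, S):
                if c:
                    total = tuple((a + b) % 2 for a, b in zip(total, v))
            if not any(total):
                return False
    return True

print(independent([(1, 0, 1), (0, 1, 1)]))             # True
print(independent([(1, 0, 1), (0, 1, 1), (1, 1, 0)]))  # False: third word = sum of the first two
```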
## Basis
Any linearly independent set B is a basis for \<B\>.

A nonempty subset B of vectors from a space V is a basis for V if:
1. B spans V
2. B is linearly independent

The dimension of a space is the number of elements in any basis for the space.

A linear code of dimension k contains $2^{k}$ codewords.

$\dim C + \dim C^{\perp} = n$

If $\{v_{1}, \dots, v_{k}\}$ is a basis for V, then any vector in V is a linear combination of $v_{1}, \dots, v_{k}$.

Basis for C = \<S\>:
1. make matrix A whose rows are the words in S
2. find a REF of A by row operations
3. the nonzero rows are a basis

Basis for C consisting of words from S:
1. make matrix A whose columns are the words in S
2. find a REF of A
3. locate the leading columns
4. the original columns corresponding to the leading columns are a basis

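The first procedure can be sketched in Python: row-reduce over GF(2) and keep the nonzero rows. The example set S is an assumption:

```python
def rref(rows):
    # reduced row echelon form over GF(2), by row operations
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        # find a pivot in column c, at or below row r
        piv = next((i for i in range(r, m) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        # clear column c in every other row
        for i in range(m):
            if i != r and A[i][c]:
                A[i] = [(a + b) % 2 for a, b in zip(A[i], A[r])]
        r += 1
    return A

S = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]  # assumed example; the third word is redundant
basis = [tuple(row) for row in rref(S) if any(row)]
print(basis)  # [(1, 0, 1), (0, 1, 1)], so dim <S> = 2
```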
Basis for $C^{\perp}$ ("Algorithm 2.5.7"):
1. make matrix A whose rows are the words in S
2. find the RREF of A
3. let G = the nonzero rows of the RREF, and let X = G with its leading columns deleted; build H so that:
   - the rows of H corresponding to the leading columns of G are the rows of X
   - the remaining rows of H are the rows of an identity matrix
4. the columns of H are a basis for $C^{\perp}$

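A sketch of the whole procedure in Python (the RREF helper and the example S are assumptions), with a sanity check at the end that every basis word of $C^{\perp}$ is orthogonal to S:

```python
def rref(rows):
    # reduced row echelon form over GF(2)
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(m):
            if i != r and A[i][c]:
                A[i] = [(a + b) % 2 for a, b in zip(A[i], A[r])]
        r += 1
    return A

def dual_basis(S):
    # Algorithm 2.5.7: a basis for C⊥ from the RREF of A
    G = [row for row in rref(S) if any(row)]       # nonzero rows of the RREF
    n, k = len(G[0]), len(G)
    lead = [row.index(1) for row in G]             # leading columns of G
    rest = [c for c in range(n) if c not in lead]
    X = [[row[c] for c in rest] for row in G]      # G without its leading columns
    H = [None] * n
    for i, c in enumerate(lead):
        H[c] = X[i]                                # rows of X at the leading positions
    for i, c in enumerate(rest):
        H[c] = [int(j == i) for j in range(n - k)]  # identity rows at the rest
    # the columns of H form the basis
    return [tuple(H[r][j] for r in range(n)) for j in range(n - k)]

S = [(1, 0, 1), (0, 1, 1)]  # assumed example
B = dual_basis(S)
print(B)  # [(1, 1, 1)]
# sanity check: every basis word is orthogonal to every word of S
assert all(sum(a * b for a, b in zip(v, w)) % 2 == 0 for v in B for w in S)
```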
## Matrices
The product of A (m × n) and B (n × p) is C (m × p), where the entry in row i, column j is the dot product (row i of A) ⋅ (column j of B).

leading column: a column containing the leading 1 of some row

row echelon form: all zero rows at the bottom, and each leading 1 strictly to the right of the leading 1 in the row above.
- reduced REF: each leading column contains exactly one 1 (the leading 1), with zeros elsewhere
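The product definition translates directly; over GF(2) each dot product is taken mod 2. A minimal sketch with assumed example matrices:

```python
def matmul2(A, B):
    # entry (i, j) of the product is (row i of A) . (column j of B), mod 2
    return [
        [sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)]
        for row in A
    ]

A = [[1, 1], [0, 1]]  # assumed 2x2 examples
B = [[1, 0], [1, 1]]
print(matmul2(A, B))  # [[0, 1], [1, 1]]; the (0, 0) entry is 1 + 1 = 0 mod 2
```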