
+++
title = "Linear codes"
template = "page-math.html"
+++

# Linear codes
A code C is linear if v + w is in C whenever v and w are in C.

distance of a linear code = minimum weight of any nonzero codeword.

vector w is a linear combination of vectors $v_{1} \dots v_{k}$ if there exist scalars $a_1 \dots a_k$ st. $w = a_{1} v_{1} + \dots + a_{k} v_{k}$

linear span \<S\> is the set of all linear combinations of vectors in S.

For a subset S of Kⁿ, the code C = \<S\> consists of: the zero word, all words in S, and all sums of words in S.

## Scalar/dot product
$\begin{aligned}
v &= (a_{1} \dots a_{n}) \\\\
w &= (b_{1} \dots b_{n}) \\\\
v \cdot w &= a_{1} b_{1} + \dots + a_{n} b_{n}
\end{aligned}$

- orthogonal: v ⋅ w = 0
- v orthogonal to set S if ∀w ∈ S, v ⋅ w = 0
- $S^{\perp}$ orthogonal complement: set of all vectors orthogonal to S

For a subset S of a vector space V, $S^{\perp}$ is a subspace of V.

if C = \<S\>, then $C^{\perp} = S^{\perp}$, and $C^{\perp}$ is the _dual code_ of C.

To find $C^{\perp}$, compute the words whose dot product with every element of S is 0.
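
For small n this can be done by brute force. A minimal sketch over K = GF(2), assuming words are represented as tuples of 0s and 1s (the function name and the set S below are just illustrations):

```python
from itertools import product

def dual_code(S, n):
    """Keep the words of K^n orthogonal to every word in S (K = GF(2))."""
    return [v for v in product([0, 1], repeat=n)
            if all(sum(vi * wi for vi, wi in zip(v, w)) % 2 == 0 for w in S)]

S = [(0, 1, 0, 1), (1, 0, 1, 0)]
print(dual_code(S, 4))  # [(0,0,0,0), (0,1,0,1), (1,0,1,0), (1,1,1,1)]
```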

## Linear independence
$S = \{v_{1} \dots v_{k}\}$ is linearly dependent if there exist scalars $a_1 \dots a_k$, not all zero, st. $a_{1} v_{1} + \dots + a_{k} v_{k} = 0$.

If all scalars have to be zero ⇒ linearly independent.

Largest linearly independent subset: iteratively eliminate words that are linear combinations of the others.

## Basis
Any linearly independent set B is a basis for \<B\>.

Nonempty subset B of vectors from a space V is a basis for V if:
1. B spans V
2. B is linearly independent

dimension of a space is the number of elements in any basis for the space.

a linear code of dimension k contains $2^{k}$ codewords.

$\dim C + \dim C^{\perp} = n$

if $\{v_{1}, \dots, v_{k}\}$ is a basis for V, any vector in V is a linear combination of $\{v_{1}, \dots, v_{k}\}$.

Basis for C = \<S\> (see the sketch below):
1. make matrix A where rows are the words in S
2. find REF of A by row operations
3. read off the nonzero rows
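
A minimal sketch of this procedure over K = GF(2), using XOR of rows as the row operation (the set S here is illustrative):

```python
def ref_gf2(rows):
    """Row echelon form over GF(2): swap rows and add (XOR) rows to clear below each pivot."""
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if A[i][c] == 1), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, m):
            if A[i][c] == 1:
                A[i] = [(x + y) % 2 for x, y in zip(A[i], A[r])]
        r += 1
    return A

S = [(1, 1, 0, 1), (0, 1, 1, 0), (1, 0, 1, 1)]
basis = [row for row in ref_gf2(S) if any(row)]  # nonzero rows of the REF
print(basis)  # [[1, 1, 0, 1], [0, 1, 1, 0]] -- a basis for <S>
```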

Basis for C = \<S\> consisting of words of S:
1. make matrix A where columns are the words in S
2. find REF of A
3. locate the leading columns
4. the words of S corresponding to the leading columns form a basis

Basis for $C^{\perp}$ ("Algorithm 2.5.7"; see the sketch below):
1. make matrix A where rows are the words in S
2. find RREF of A
3. $\begin{aligned}
    G &= \text{nonzero rows of the RREF} \\\\
    X &= \text{G without its leading columns}
    \end{aligned}$
4. form H: in the rows corresponding to the leading columns of G, place the rows of X; in the remaining rows, place the rows of the identity matrix
5. the columns of H form a basis for $C^{\perp}$
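
A sketch of Algorithm 2.5.7 over K = GF(2); the row reduction is the reduced form of the procedure above, and the set S is illustrative:

```python
def rref_gf2(rows):
    """Reduced row echelon form over GF(2)."""
    A = [list(r) for r in rows]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, m) if A[i][c] == 1), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(m):                # clear column c in every other row (reduced form)
            if i != r and A[i][c] == 1:
                A[i] = [(x + y) % 2 for x, y in zip(A[i], A[r])]
        r += 1
    return A

def dual_basis(S):
    """Algorithm 2.5.7: returns H; its columns form a basis for the dual code."""
    G = [row for row in rref_gf2(S) if any(row)]
    n = len(S[0])
    lead = [row.index(1) for row in G]               # leading column of each row of G
    rest = [c for c in range(n) if c not in lead]    # non-leading columns
    X = [[row[c] for c in rest] for row in G]        # G without its leading columns
    H = [[0] * len(rest) for _ in range(n)]
    for i, c in enumerate(lead):                     # rows of X go into the leading positions
        H[c] = X[i]
    for i, c in enumerate(rest):                     # identity rows go into the remaining positions
        H[c][i] = 1
    return H                                         # n x (n-k)

S = [(1, 1, 0, 1), (0, 1, 1, 0), (1, 0, 1, 1)]
print(dual_basis(S))  # [[1, 1], [1, 0], [1, 0], [0, 1]]; columns 1110 and 1001 span C-perp
```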

## Matrices
product of A (m × n) and B (n × p) is C (m × p), where the entry in row i, col j is the dot product (row i of A) ⋅ (col j of B).

leading column: contains a leading 1

row echelon form: all zero rows at the bottom, each leading 1 further right than the leading 1s in the rows above.
- reduced REF: each leading column contains exactly one 1 (the leading 1)

## Generating matrices & encoding
- rank of a matrix over K: number of nonzero rows in any REF
- dim k of code C: dim of C as a subspace of Kⁿ
    - if C has length n and distance d, C is an (n, k, d) linear code
- if C is a linear code of length n and dim k, any matrix whose rows are a basis for C is a generator matrix for C (must have k rows, n cols, rank k)
- G generator matrix ⇔ rows of G linearly independent
- to find a generator matrix, put the codewords in a matrix and reduce, then take the nonzero rows

if G is a generator matrix for a linear code C of length n and dimension k, then v = u G ranges over all $2^{k}$ words in C as u ranges over all $2^{k}$ words of length k (see the encoding sketch below)
- → C = { words u G, u in $K^{k}$ }
- → u₁ G = u₂ G ⇔ u₁ = u₂
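
A small encoding sketch, assuming binary words as tuples; the generator matrix G below is only an example:

```python
from itertools import product

def encode(u, G):
    """v = uG over GF(2): multiply a message word u (length k) by the generator matrix G (k x n)."""
    n = len(G[0])
    return tuple(sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n))

G = [(1, 0, 1, 1),
     (0, 1, 1, 0)]          # rows are a basis for an example (4, 2) code

# As u ranges over all 2^k words of length k, v = uG ranges over all 2^k codewords.
C = {encode(u, G) for u in product([0, 1], repeat=len(G))}
print(sorted(C))            # [(0,0,0,0), (0,1,1,0), (1,0,1,1), (1,1,0,1)]
```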

info rate of an (n, k, d) code: $\frac{\log_{2} (2^{k})}{n} = \frac{k}{n}$

For C an (n, k, d) code:
- $\dim C^{\perp} = n - k$
- $|C| = 2^{k}$
- $H_{C}$ (its parity check matrix) has n rows, n - k cols, rank n - k (nonzero rows in REF)

## Parity check matrices
H is a parity check matrix for a linear code C if its columns form a basis for the dual code $C^{\perp}$.
- if C has length n and dimension k, a parity check matrix has n rows, n - k columns, rank n - k
- H parity check ⇔ columns of H linearly independent
- if H is a parity check matrix for C of length n, then C = {words v ∈ Kⁿ | v H = 0}

Matrix G is generating and H is a parity check matrix (for the same code) iff:
1. rows of G linearly independent
2. columns of H linearly independent
3. rows(G) + columns(H) = columns(G) = rows(H)
4. G H = 0

H parity check for C ⇔ $H^{T}$ generator for $C^{\perp}$

$H^{T} G^{T} = (G H)^{T} = 0$

![Conversion between generator and parity check matrices](conversion.svg)
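
When G is already in standard form $[I_{k} | X]$, one such conversion over K = GF(2) can be sketched as below: put X on top of an identity block (illustrative, and only for the standard-form, binary case):

```python
def standard_form_parity_check(G):
    """For a standard-form generator G = [I_k | X] over GF(2),
    build H with X on top of the (n-k) x (n-k) identity, so that G H = 0."""
    k, n = len(G), len(G[0])
    X = [list(row[k:]) for row in G]                 # the non-identity part of G
    I = [[1 if i == j else 0 for j in range(n - k)] for i in range(n - k)]
    return X + I                                     # n rows, n-k columns

def mat_mul_gf2(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*B)] for row in A]

G = [(1, 0, 1, 1),
     (0, 1, 1, 0)]                                   # standard form: [I_2 | X]
H = standard_form_parity_check(G)
print(H)                                             # [[1, 1], [1, 0], [1, 0], [0, 1]]
print(mat_mul_gf2(G, H))                             # [[0, 0], [0, 0]]  (G H = 0)
```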

## Equivalent codes
- if $G = [I_{k}, X]$ is a generator matrix in standard form for a linear code of length n and dimension k, then the first k digits of the codeword v = u G form the word u in $K^{k}$ ("information digits")
- permuting the digit positions of every word in C (the same permutation for each word) gives an equivalent code
- any linear code C is equivalent to a linear code C' having a generator matrix in standard form

## Distance of linear code
Let H be a parity check matrix for a linear code C.

C has distance d ⇔ every set of d-1 rows of H is linearly independent & at least one set of d rows of H is linearly dependent

## Cosets
for C a linear code of length n and u any word of length n: the coset of C determined by u is C + u = {v + u | v ∈ C}

for C a linear code of length n, u and v words of length n (see the sketch after this list):
- u ∈ C + v → C + u = C + v
- u ∈ C + u
- u + v ∈ C → u and v are in the same coset
- u + v ∉ C → u and v are in different cosets
- either C + u = C + v, or the two have no common words
- |C + u| = |C|
- C of dimension k → there exist $2^{n-k}$ different cosets of C, each with $2^{k}$ words
- C is itself one of the cosets
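
A quick check of these properties on a small binary code (C below is just an example (4, 2) code):

```python
from itertools import product

C = {(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1)}   # example (4, 2) code

def coset(C, u):
    """C + u = {v + u | v in C}, addition componentwise mod 2."""
    return frozenset(tuple((vi + ui) % 2 for vi, ui in zip(v, u)) for v in C)

cosets = {coset(C, u) for u in product([0, 1], repeat=4)}
print(len(cosets))                             # 2^(n-k) = 4 distinct cosets
print(all(len(c) == len(C) for c in cosets))   # each coset has |C| = 2^k words
print(frozenset(C) in cosets)                  # C itself is one of the cosets
```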

## MLD for linear codes
when a word w is received, choose the word u of least weight in the coset C + w; conclude that v = w + u was sent.

for C a linear code of length n, H a parity check matrix for C, and w and u words in Kⁿ:
- w H = 0 ⇔ w is a codeword in C
- w H = u H ⇔ w and u are in the same coset of C
- if u is the error pattern in a received word w, then u H is the sum of the rows of H corresponding to the positions where errors occurred in transmission
- syndrome of a word w in Kⁿ: w H, a word in $K^{n-k}$

coset leader: word of least weight in a coset.

standard decoding array (SDA) matches each coset leader u to its syndrome u H.

constructing the SDA:
1. list all cosets for the code, elect leaders
2. find a parity check matrix H for the code
3. calculate the syndromes u H

decoding a received word w (see the sketch below):
1. calculate the syndrome w H
2. find the coset leader u next to the syndrome w H = u H in the SDA
3. conclude v = w + u was sent
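
A sketch of SDA construction and syndrome decoding for the same example (4, 2) code as above, electing leaders by brute force (illustrative only):

```python
from itertools import product

C = [(0, 0, 0, 0), (0, 1, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1)]   # example (4, 2) code
H = [(1, 1), (1, 0), (1, 0), (0, 1)]                            # a parity check matrix: n rows, n-k cols

def syndrome(w, H):
    """w H over GF(2): a word of length n-k."""
    return tuple(sum(wi * hi for wi, hi in zip(w, col)) % 2 for col in zip(*H))

# SDA: map each syndrome to a coset leader (a word of least weight with that syndrome).
sda = {}
for w in product([0, 1], repeat=4):
    s = syndrome(w, H)
    if s not in sda or sum(w) < sum(sda[s]):
        sda[s] = w

def decode(w):
    """MLD: find the leader u with the same syndrome as w, conclude v = w + u was sent."""
    u = sda[syndrome(w, H)]
    return tuple((wi + ui) % 2 for wi, ui in zip(w, u))

print(decode((1, 1, 1, 0)))   # (0, 1, 1, 0): the single-bit error pattern (1, 0, 0, 0) is corrected
```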

IMLD:
- number of words closest to w = number of least-weight error patterns in C + w
- cosets with more than one word of least weight are omitted.