commit fc8e3a9c542b3950bc94d465c0412c36ffc278b3
parent 599fafc521374365e5bdff7b3a94ba4efa12c59c
Author: Alex Balgavy <alex@balgavy.eu>
Date: Mon, 8 Feb 2021 10:33:22 +0100
Update new lecture notes
Diffstat:
5 files changed, 223 insertions(+), 0 deletions(-)
diff --git a/content/_index.md b/content/_index.md
@@ -10,6 +10,7 @@ title = "Alex's university course notes"
* [Logical Verification](logical-verification/)
* [Software Architecture](software-architecture/)
* [Programming Multi-Core and Many-Core Systems](programming-multi-core-and-many-core-systems)
+* [Coding and Cryptography](coding-and-cryptography)
# BSc Computer Science (VU Amsterdam)
---
diff --git a/content/coding-and-cryptography/_index.md b/content/coding-and-cryptography/_index.md
@@ -0,0 +1,8 @@
++++
+title = "Coding and Cryptography"
++++
+
+# Table of Contents
+1. [Lecture 1](lecture-1)
+2. [Lecture 2](lecture-2)
+
diff --git a/content/coding-and-cryptography/lecture-1.md b/content/coding-and-cryptography/lecture-1.md
@@ -0,0 +1,55 @@
++++
+title = "Lecture 1"
+template = "page-math.html"
++++
+
+Some definitions:
+- digit: 0 or 1
+- word: sequence of digits
+- length: number of digits in a word, written |word|
+- binary code: a set C of words
+
+assumptions about transmission channel:
+- length of sent == length of received
+- easy to find beginning of first word
+- noise scattered randomly (not in bursts)
+
+properties of binary channel:
+- symmetric if 0 and 1 transmitted with same accuracy
+- reliability: probability that digit sent == digit received
+ - we assume ½ ≤ p < 1
+
+information rate of code (length n) = $\frac{1}{n} \log_{2} |C|$
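+
+e.g. for $C = \lbrace 000, 111 \rbrace$ (so $n = 3$, $|C| = 2$), the information rate is $\frac{1}{3} \log_{2} 2 = \frac{1}{3}$.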
+
+## Most likely codeword
+Let $\phi_{p} (v,w)$ be the probability that if word v is sent over a BSC (binary symmetric channel) with reliability p, word w is received.
+
+$\phi_{p} (v, w) = p^{n-d} (1-p)^d$ if v and w disagree in d positions.
+
+if v₁ and w disagree in d₁, and v₂ and w in d₂, then $\phi_{p} (v_{1}, w) \leq \phi_{p} (v_{2}, w)$ iff d₁ ≥ d₂.
+- in plain English: the most likely sent word is the one that disagrees with the received word in the fewest digits
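+
+e.g. if $v = 0000$ is sent and $w = 0110$ is received ($n = 4$, disagreeing in $d = 2$ positions) with $p = 0.9$: $\phi_{p}(v, w) = 0.9^{2} \cdot 0.1^{2} = 0.0081$.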
+
+## Weight and distance
+
+K = {0, 1}, $K^{n}$ = set of all binary words of length n
+
+(Hamming) weight: number of ones
+
+(Hamming) distance: number of differing digits between words
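+
+A quick sketch of both in Python (not from the lecture), with words as strings or tuples of digits:
+
+```python
+def weight(v):
+    """Hamming weight: number of ones in the word."""
+    return sum(int(d) for d in v)
+
+def distance(v, w):
+    """Hamming distance: number of positions where v and w differ."""
+    assert len(v) == len(w)
+    return sum(a != b for a, b in zip(v, w))
+
+print(weight("0110"))            # 2
+print(distance("0110", "0000"))  # 2
+```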
+
+## Max likelihood decoding (MLD)
+Complete:
+- if one word min distance, decode to that
+- else, arbitrarily select one of closest
+
+Incomplete:
+- if one word min distance, decode to that
+- else, ask for retransmission
+  - look for the smallest weight among the error patterns v + w for the codewords v ∈ C (e.g. 0 + w and 1 + w)
+  - ask for retransmission if two patterns tie for the smallest weight (see the sketch below)
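+
+A minimal IMLD sketch in Python (not from the lecture; the codes and received words are just assumed examples):
+
+```python
+def distance(v, w):
+    return sum(a != b for a, b in zip(v, w))
+
+def imld(C, w):
+    """Decode w to the unique closest codeword, or return None (retransmit) on a tie."""
+    best = min(distance(v, w) for v in C)
+    closest = [v for v in C if distance(v, w) == best]
+    return closest[0] if len(closest) == 1 else None
+
+print(imld(["000", "111"], "010"))  # '000': unique closest codeword
+print(imld(["00", "11"], "01"))     # None: tie, ask for retransmission
+```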
+
+Reliability: probability that if v is sent over a BSC of reliability p, then IMLD concludes that v was sent
+
+$\theta_{p} (C, v) = \sum_{w \in L(v)} \phi_{p} (v, w)$ where $L(v) = \lbrace w \in K^{n} \mid \text{IMLD correctly concludes } v \text{ sent when } w \text{ received} \rbrace$
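+
+e.g. for $C = \lbrace 000, 111 \rbrace$ and $v = 000$: $L(v) = \lbrace 000, 100, 010, 001 \rbrace$ (the words strictly closer to 000 than to 111), so $\theta_{p}(C, 000) = p^{3} + 3p^{2}(1-p)$; with $p = 0.9$ that is $0.972$.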
+
+
diff --git a/content/coding-and-cryptography/lecture-2.md b/content/coding-and-cryptography/lecture-2.md
@@ -0,0 +1,104 @@
++++
+title = "Lecture 2"
+template = "page-math.html"
++++
+
+## Error-detecting codes
+error pattern is u = v + w if v ∈ C sent and w ∈ Kⁿ received.
+
+error pattern u is detected if v + u is not a codeword, for every v ∈ C.
+the error patterns that can't be detected are exactly the sums of pairs of codewords
+
+distance of a code: smallest d(v, w) over all pairs of distinct codewords v, w ∈ C.
+
+A code of distance d detects all error patterns of weight at most d-1, and there is at least one error pattern of weight d that it does not detect.
+
+C is a t-error-detecting code if it detects all error patterns of weight at most t and does not detect at least one error pattern of weight t+1.
+so a code with distance d is a "(d-1)-error-detecting code"
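+
+A brute-force check of these definitions in Python (a sketch, not from the lecture; the code C is just an assumed example):
+
+```python
+from itertools import combinations, product
+
+def distance(v, w):
+    return sum(a != b for a, b in zip(v, w))
+
+def code_distance(C):
+    return min(distance(v, w) for v, w in combinations(C, 2))
+
+def detects(C, u):
+    # C detects error pattern u iff v + u is not a codeword, for every v in C
+    return all(tuple((a + b) % 2 for a, b in zip(v, u)) not in C for v in C)
+
+C = {(0, 0, 0), (1, 1, 1)}                   # distance 3
+print(code_distance(C))                      # 3
+patterns = [u for u in product((0, 1), repeat=3) if 1 <= sum(u) <= 2]
+print(all(detects(C, u) for u in patterns))  # True: every pattern of weight <= d-1 = 2 is detected
+print(detects(C, (1, 1, 1)))                 # False: a weight-d pattern that is not detected
+```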
+
+## Error-correcting codes
+Code C corrects error pattern u if, for all v ∈ C, v + u is closer to v than to any other codeword.
+
+A code of distance d corrects all error patterns of weight $\leq \lfloor \frac{d-1}{2} \rfloor$, and at least one error pattern of weight $1 + \lfloor \frac{d-1}{2} \rfloor$ is not corrected.
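+
+e.g. $C = \lbrace 000, 111 \rbrace$ has distance 3, so it corrects every error pattern of weight $\leq \lfloor \frac{3-1}{2} \rfloor = 1$, but not all patterns of weight 2.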
+
+## Linear codes
+A code C is linear if v + w is in C whenever v and w are in C.
+
+dist of linear code = min weight of any nonzero codeword.
+
+vector w is a linear combination of vectors $v_{1}, \dots, v_{k}$ if there are scalars $a_1, \dots, a_k$ s.t. $w = a_{1} v_{1} + \dots + a_{k} v_{k}$
+
+linear span \<S\> is set of all linear comb of vectors in S.
+
+For a subset S of Kⁿ, the code C = \<S\> consists of: the zero word, all words in S, and all sums of two or more words in S.
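+
+A small brute-force sketch of computing \<S\> in Python (not from the lecture), closing S under componentwise addition mod 2 starting from the zero word:
+
+```python
+def span(S):
+    n = len(next(iter(S)))
+    C = {tuple([0] * n)}          # the zero word
+    changed = True
+    while changed:                # keep adding sums until nothing new appears
+        changed = False
+        for v in list(C):
+            for s in S:
+                w = tuple((a + b) % 2 for a, b in zip(v, s))
+                if w not in C:
+                    C.add(w)
+                    changed = True
+    return C
+
+print(sorted(span({(1, 1, 0), (0, 1, 1)})))
+# [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]
+```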
+
+## Scalar/dot product
+$\begin{aligned}
+v &= (a_{1} \dots a_{n}) \\\\
+w &= (b_{1} \dots b_{n}) \\\\
+v \cdot w &= a_{1} b_{1} + \dots + a_{n} b_{n}
+\end{aligned}$
+
+- orthogonal: v ⋅ w = 0
+- v orthogonal to set S if ∀w ∈ S, v ⋅ w = 0
+- $S^{\perp}$ orthogonal complement: set of vectors orthogonal to S
+
+For subset S of vector space V, $S^{\perp}$ subspace of V.
+
+if C = \<S\>, $C^{\perp} = S^{\perp}$ and $C^{\perp}$ is _dual code_ of C.
+
+To find $C^{\perp}$, compute the words whose dot product with every element of S is 0.
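+
+A brute-force sketch of that in Python (not from the lecture; S is just an assumed example):
+
+```python
+from itertools import product
+
+def orthogonal(v, w):
+    return sum(a * b for a, b in zip(v, w)) % 2 == 0
+
+def dual(S, n):
+    # all words of K^n whose dot product with every element of S is 0
+    return {w for w in product((0, 1), repeat=n) if all(orthogonal(w, s) for s in S)}
+
+print(sorted(dual({(1, 1, 1, 0), (0, 1, 0, 1)}, 4)))
+# [(0,0,0,0), (0,1,1,1), (1,0,1,0), (1,1,0,1)]
+```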
+
+## Linear independence
+linearly dependent $S = \lbrace v_{1}, \dots, v_{k} \rbrace$ if there are scalars $a_1, \dots, a_k$, not all zero, s.t. $a_{1} v_{1} + \dots + a_{k} v_{k} = 0$.
+
+If all scalars have to be zero ⇒ linearly independent.
+
+Largest linearly independent subset: iteratively eliminate words that are linear combinations of the others.
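+
+e.g. $\lbrace 101, 011, 110 \rbrace$ is linearly dependent, since $101 + 011 + 110 = 000$ (each word is the sum of the other two).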
+
+## Basis
+Any linearly independent set B is basis for \<B\>
+
+Nonempty subset B of vectors from space V is basis for V if:
+1. B spans V
+2. B is linearly independent
+
+dimension of space is number of elements in any basis for the space.
+
+a linear code of dimension k contains $2^{k}$ codewords.
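+
+e.g. $\lbrace 110, 011 \rbrace$ is linearly independent, so $C = \lbrace 000, 110, 011, 101 \rbrace$ has dimension 2 and $2^{2} = 4$ codewords.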
+
+$\dim C + \dim C^{\perp} = n$
+
+if $\lbrace v_{1}, \dots, v_{k} \rbrace$ is a basis for V, any vector in V is a linear combination of $v_{1}, \dots, v_{k}$.
+
+Basis for C = \<S\>:
+1. make matrix A where rows are words in S
+2. find REF of A by row operations
+3. read nonzero rows
+
+Basis for C = \<S\> consisting of words from S:
+1. make matrix A where the columns are the words in S
+2. find REF of A
+3. locate leading cols
+4. the words of S corresponding to the leading cols are a basis
+
+Basis for $C^{\perp}$:
+1. make matrix A where rows are words in S
+2. Find RREF
+3. $\begin{aligned}
+ G &= \text{nonzero rows of RREF} \\\\
+ X &= \text{G without leading cols} \\\\
+ H &= \begin{cases}
+ \text{rows corresponding to leading cols of G} &\text{are rows of X} \\\\
+ \text{remaining rows} &\text{are rows of identity matrix}
+ \end{cases}
+ \end{aligned}$
+4. Cols of H are basis for $C^{\perp}$
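+
+A rough Python sketch of this machinery (not from the lecture): `rref_gf2` does the row reduction over K, its nonzero rows are a basis for C, and `dual_basis` builds H as described above.
+
+```python
+def rref_gf2(rows):
+    """Reduced row echelon form over K = {0,1}; returns (nonzero rows, leading column indices)."""
+    rows = [r[:] for r in rows]
+    n = len(rows[0])
+    lead = []
+    r = 0
+    for c in range(n):
+        pivot = next((i for i in range(r, len(rows)) if rows[i][c] == 1), None)
+        if pivot is None:
+            continue
+        rows[r], rows[pivot] = rows[pivot], rows[r]
+        for i in range(len(rows)):          # clear column c everywhere else (makes it *reduced*)
+            if i != r and rows[i][c] == 1:
+                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
+        lead.append(c)
+        r += 1
+    return rows[:r], lead
+
+def dual_basis(S):
+    """Columns of H, which form a basis for the dual code."""
+    G, lead = rref_gf2(S)
+    n, k = len(S[0]), len(G)
+    free = [c for c in range(n) if c not in lead]
+    X = [[row[c] for c in free] for row in G]      # G without its leading columns
+    H = [[0] * (n - k) for _ in range(n)]
+    for i, c in enumerate(lead):                   # rows of H at leading positions = rows of X
+        H[c] = X[i]
+    for j, c in enumerate(free):                   # remaining rows = rows of the identity matrix
+        H[c][j] = 1
+    return [[H[r][j] for r in range(n)] for j in range(n - k)]   # the columns of H
+
+S = [[1, 1, 1, 0], [0, 1, 0, 1]]
+print(rref_gf2(S)[0])    # [[1,0,1,1], [0,1,0,1]]: a basis for C = <S>
+print(dual_basis(S))     # [[1,0,1,0], [1,1,0,1]]: a basis for the dual code
+```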
+
+## Matrices
+product of A (m × n) and B (n × p) is C (m × p), where the entry in row i, col j is the dot product (row i of A) ⋅ (col j of B).
+
+leading column: contains leading 1
+
+row echelon form: all zero rows at the bottom, and each leading 1 strictly to the right of the leading 1 in the row above.
+- reduced REF: each leading column contains exactly one 1 (its leading 1)
diff --git a/content/programming-multi-core-and-many-core-systems/lecture-2.md b/content/programming-multi-core-and-many-core-systems/lecture-2.md
@@ -0,0 +1,55 @@
++++
+title = "Lecture 2"
++++
+
+Multi vs many core
+
+CPU levels of parallelism
+- Multi-core parallelism: task/data parallelism
+ - 4-12 powerful cores, hardware hyperthreading
+ - local caches
+ - symmetrical/asymmetrical threading model
+ - implemented by programmer
+- Instruction-level parallelism (ILP)
+- SIMD (vector instructions)
+
+Cores/threads:
+- hardware multi-threading
+  - core manages thread context
+  - interleaved: temporal multi-threading
+  - simultaneous (SMT): co-located execution
+
+GPU levels of parallelism:
+- data parallelism
+ - write 1 thread, instantiate a lot of them
+ - SIMT (Single Instruction Multiple Thread) execution
+ - many threads run concurrently: same instruction, different data elements
+- task parallelism is 'emulated'
+ - hardware mechanisms exist
+ - specific programming constructs to run multiple tasks
+
+usually connect GPU with host over PCI Express 3, theoretical speed 8 GT/s (gigatransfers per second) per lane.
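+
+(rough arithmetic for scale: 8 GT/s per lane with 128b/130b encoding is ≈ 0.985 GB/s per lane, so a ×16 link gives roughly 15.8 GB/s per direction)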
+
+Why different design in CPU vs GPU?
+- CPU must be good at everything, GPUs focus on massive parallelism
+- CPU minimize latency experienced by 1 thread
+- GPU maximize throughput of all threads
+
+Locality: programs tend to reuse data and instructions they used recently (temporal locality), and to use data and instructions at addresses near those used recently (spatial locality)
+
+CPU caches:
+- small fast SRAM-based memories
+- hold frequently accessed blocks of main memory
+
+Hardware performance metrics:
+- clock frequency (GHz) - absolute hardware speed
+- operational speed (GFLOP/s) - floating-point operations per second, single and double precision (see example below)
+- memory bandwidth (GB/s) - bytes transferred to/from memory per second
+- power (Watt) - rate of consumption of energy
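+
+e.g. (back-of-the-envelope numbers, not from the slides): a 4-core CPU at 3 GHz that can do 8 single-precision FLOPs per cycle per core peaks at 4 × 3 × 8 = 96 GFLOP/s.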
+
+Main constraint for optimising compilers: do not cause any change in program behavior. So when in doubt, compiler must be conservative.
+
+
+In-core parallelism:
+- ILP: multiple instructions executed at the same time, enabled by hardware (which the CPU must provision)
+- SIMD: single instruction on multiple data