lectures.alex.balgavy.eu

Lecture notes from university.
git clone git://git.alex.balgavy.eu/lectures.alex.balgavy.eu.git

commit 8aa8bc8be3feb7990812815ba2bd4d06812f5fb4
parent a277de2fa111a6f43999ff3361f6f3bbcb053917
Author: Alex Balgavy <alex@balgavy.eu>
Date:   Wed, 28 Jul 2021 16:00:25 +0200

Fix warnings

Diffstat:
 M content/automata-complexity-notes/lecture-2.md |  4 ++--
 M content/automata-complexity-notes/lecture-5.md |  2 +-
 M content/ml-notes/Linear models/index.md        | 10 +++++-----
3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/content/automata-complexity-notes/lecture-2.md b/content/automata-complexity-notes/lecture-2.md
@@ -57,7 +57,7 @@ every u,v ∈ (V ∪ T)<sup>\*</sup>.
 Find grammar G such that L(G) = {a,b}<sup>\*</sup> {c} {b,c}<sup>\*</sup>
-``` {.example}
+```
 S → AcB
 A → aA         B → bB
 A → bA         B → cB
@@ -71,7 +71,7 @@ multiple rules with an or.
 So like:
-``` {.example}
+```
 S → a | b | λ
diff --git a/content/automata-complexity-notes/lecture-5.md b/content/automata-complexity-notes/lecture-5.md
@@ -44,7 +44,7 @@ An example of ambiguity is the dangling else problem.
 Joerg gave an example in ALGOL, but it\'s the same in C (and is solved by always making else bind tightly, i.e. to the nearest if):
-``` {.c}
+```c
 if (condition)
   if (condition2) printf("Whatever");
   else printf("Something else");
diff --git a/content/ml-notes/Linear models/index.md b/content/ml-notes/Linear models/index.md
@@ -53,7 +53,7 @@ things
 Start with random point p in model space.
-``` {.example}
+```
 loop:
   pick random point p' close to p
   if loss(p') < loss(p):
@@ -76,7 +76,7 @@ structure), you need to figure out a transition function.
 'Improved' random search.
-``` {.example}
+```
 pick random point p' close to p
 loop:
   pick random point p' close to p
@@ -91,7 +91,7 @@ some communication between searches.
 Population methods, eg. evolutionary algorithms:
-``` {.example}
+```
 start with population of k models
 loop:
   rank population by loss
@@ -104,7 +104,7 @@ loop:
 Coming closer to gradient descent:
-``` {.example}
+```
 pick random point p in model spce
 loop:
   pick k random points {p_i} close to p
@@ -141,7 +141,7 @@ $
 The angle is maximised when cos(α) is 1, so α is 0. So the gradient is the direction of steepest ascent
-``` {.example}
+```
 pick a random point p in model space
 loop:
   p <- p - \eta \nabla loss(p)
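
As context for the pseudocode in the ml-notes hunks above, here is a minimal Python sketch of the two loops they describe: the random-search loop ("pick random point p' close to p, keep it if the loss drops") and the gradient-descent update ("p <- p - η ∇loss(p)"). The function names, the quadratic toy loss, the step sizes, and the iteration counts are illustrative assumptions, not part of the notes.

```python
import numpy as np

def random_search(loss, p, steps=1000, scale=0.1, seed=0):
    """Random search as in the notes: propose a nearby point, keep it only if it lowers the loss."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        p_new = p + rng.normal(scale=scale, size=p.shape)  # random point p' close to p
        if loss(p_new) < loss(p):
            p = p_new
    return p

def gradient_descent(grad, p, eta=0.1, steps=100):
    """Gradient descent as in the last hunk: p <- p - eta * grad(loss)(p)."""
    for _ in range(steps):
        p = p - eta * grad(p)
    return p

if __name__ == "__main__":
    # Illustrative quadratic loss with minimum at (1, -2); not taken from the notes.
    target = np.array([1.0, -2.0])
    loss = lambda p: np.sum((p - target) ** 2)
    grad = lambda p: 2 * (p - target)
    print(random_search(loss, np.zeros(2)))
    print(gradient_descent(grad, np.zeros(2)))
```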