+++
title = 'Deployment updates'
+++
# Deployment updates
## Rolling updates
If you change a Deployment's pod template (`.spec.template`), a deployment rollout is triggered.
To observe the rollout, you can use e.g. `kubectl rollout status <deployment>`.
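
For instance, here is a minimal Deployment sketch (the `hello` name and image are hypothetical); editing anything under `spec.template`, such as the image tag, triggers a new rollout:

```yaml
# Minimal Deployment sketch (hypothetical names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 4
  selector:
    matchLabels:
      app: hello
  template:                        # changes under here trigger a rollout
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example/hello:1.0   # bump to 1.1 to roll out a new version
```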

Each pod and ReplicaSet created by the Deployment controller gets the same `pod-template-hash` label.
It's generated by hashing the PodTemplate of the ReplicaSet.
Its purpose is to ensure that the ReplicaSets created from a Deployment don't overlap.
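
As an illustration, a ReplicaSet generated for the sketch above might look roughly like this (name and hash value are made up):

```yaml
# Illustrative excerpt of a Deployment-generated ReplicaSet.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-5d7f9c6b8
  labels:
    app: hello
    pod-template-hash: 5d7f9c6b8
spec:
  selector:
    matchLabels:
      app: hello
      pod-template-hash: 5d7f9c6b8
```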

So that the application remains available, the Deployment ensures that (with the default strategy, sketched below):
- 25% max unavailable: at most 25% of the desired number of Pods are down
- 25% max surge: at most 25% more Pods than the desired number are up
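
Both thresholds come from the Deployment's rolling update strategy; this sketch sets them explicitly to the same values as the defaults:

```yaml
# Explicitly setting the default rolling-update thresholds (sketch).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of desired Pods may be down
      maxSurge: 25%         # at most 25% extra Pods may be created above desired
```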

You can check the revision history with `kubectl rollout history <deployment>`. To save a change cause, use the `--record` flag.
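
The recorded change cause is stored in the `kubernetes.io/change-cause` annotation on the Deployment, which is what `kubectl rollout history` displays; you can also set it yourself, e.g. (illustrative value):

```yaml
# Setting the change cause directly via an annotation (illustrative value);
# `kubectl rollout history` shows it in the CHANGE-CAUSE column.
metadata:
  annotations:
    kubernetes.io/change-cause: "update image to example/hello:1.1"
```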

To roll back, use `kubectl rollout undo <deployment> --to-revision=<n>`, where `<n>` is a revision number from the rollout history.

## Canary Deployments
The problem with rolling updates is that, while one is happening, you have no way of testing that the new version is working fine.

Canary Deployments are used to test a new release with a subset of users before propagating it to all users.

This involves using at least one Service to direct traffic to pods that run the old code or pods that run the new code.
You add a label to the pods indicating whether they are the original type or the canary.
If the Service does not discriminate based on that label, both types of pods get traffic directed to them.
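
A sketch of a simple canary setup with hypothetical names: both Deployments give their Pods the `app: hello` label plus a `track` label, and the Service selects only on `app`, so it sends traffic to stable and canary Pods alike (split roughly by replica count):

```yaml
# Canary setup sketch (hypothetical names).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: hello
      track: stable
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: example/hello:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
      track: canary
  template:
    metadata:
      labels:
        app: hello
        track: canary
    spec:
      containers:
      - name: hello
        image: example/hello:1.1
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello          # no track label: both stable and canary Pods receive traffic
  ports:
  - port: 80
    targetPort: 8080
```

Adding `track: stable` to the Service's selector would instead exclude the canary Pods, so the same label scheme also lets you shift all traffic back.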