# Application Lifecycle Management

## Rolling Updates & Rollbacks

A new rollout creates a new deployment revision (Revision 1). When we upgrade the application, a new rollout is triggered and a new revision is created (Revision 2). This allows us to roll back to a previous revision if anything breaks.

```bash
kubectl rollout status deployment <deployment-name>  # Current rollout
kubectl rollout history deployment <deployment-name> # History of rollouts
```

### Deployment Strategies

#### Recreate strategy

We bring all pods of the application down, then spin up the updated ones afterwards. This causes downtime, which is usually unacceptable.

#### Rolling Update (Default)

We take pods down one by one: whenever an older-version pod goes down, a newer-version pod spins up in its place, so the application stays available throughout the update.

<figure><img src="/files/nr4PohF2hZ3Yz0W2wf4a" alt=""><figcaption></figcaption></figure>

To update a deployment, you could edit the deployment definition (e.g. with vim) and change the image version. Under the hood, the deployment creates a new ReplicaSet (replicaset-2) with the new image version and starts removing pods from the original ReplicaSet (replicaset-1).

{% code overflow="wrap" %}

```bash
kubectl apply -f deployment-def.yml # After updating using vim
kubectl set image deployment <deployment-name> nginx=nginx:1.9.1 # This won't change version in the yaml definition
```

{% endcode %}

<figure><img src="/files/BPIVvKqLGgKhNcUJ9ncT" alt=""><figcaption></figcaption></figure>

#### If something goes wrong after updating your deployment, you can roll back:

```bash
kubectl rollout undo deployment <deployment-name>
```

<figure><img src="/files/JEOiVFXG1mdWMkDoCkAS" alt=""><figcaption></figcaption></figure>

### Lab:

A small mistake was identified: when changing a deployment's strategy from RollingUpdate to Recreate, there is additional configuration specified for RollingUpdate, so make sure to also remove it and keep only `type: Recreate`.
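
As a sketch, the strategy section of the deployment should end up looking like this (the `maxSurge`/`maxUnavailable` values are the defaults, shown for illustration):

```yaml
# deployment-def.yml (fragment) - switching to the Recreate strategy
spec:
  strategy:
    type: Recreate
    # While type was RollingUpdate, a block like the following existed
    # and must be deleted, otherwise the API server rejects the update:
    # rollingUpdate:
    #   maxSurge: 25%
    #   maxUnavailable: 25%
```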

## Commands & Arguments

<figure><img src="/files/dTRwftPgxD9pR5OppvCc" alt=""><figcaption></figcaption></figure>

#### We can override a Docker image's command and arguments in the pod definition:

<figure><img src="/files/DrsX7dH6yioLHiIWeQZN" alt=""><figcaption></figcaption></figure>

### Lab (Review):

I had issues with the pod definition of command and arguments, and I had misunderstood some concepts. Here are the correct ones:

* `command` in a pod definition is the same as `ENTRYPOINT` in a Dockerfile
* `args` in a pod definition is the same as `CMD` in a Dockerfile
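
A minimal sketch of the mapping (pod name, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep"]   # overrides the image's ENTRYPOINT
      args: ["5000"]       # overrides the image's CMD
```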

Instead of deleting the pod and re-applying the /tmp/ pod definition, we can run `kubectl replace --force -f /tmp/...`

## Configuring Environment Variables

#### To set environment variables in a pod definition:

<figure><img src="/files/0HvzvqDtBM5TeHyfiFrG" alt=""><figcaption></figcaption></figure>

## Configmaps

Instead of defining key-value environment variables directly in the pod definition, we can create configmaps.

When a pod is created, we inject the configmap into it.

* Create configmap
* Inject configmap to pod

### Create configmap

#### Creating configmap using imperative commands:

{% code overflow="wrap" %}

```bash
kubectl create configmap <config-name> --from-literal=APP_COLOR=blue --from-literal=APP_MODE=prod

# Or

kubectl create configmap <config-name> --from-file=.env
```

{% endcode %}

#### Creating configmap using declarative approach:

<figure><img src="/files/XG2xZ9OHoWjYncryteMZ" alt=""><figcaption></figcaption></figure>
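
In text form, the declarative definition is roughly as follows (the name and keys mirror the imperative example above):

```yaml
# config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
```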

```bash
kubectl create -f config-map.yaml
```

```bash
kubectl get configmaps
kubectl describe configmaps
```

### Inject a configmap into a pod

We add a property under the container called `envFrom`. Note that this is a pod YAML we still need to create/apply (it is not just for reference).

<figure><img src="/files/0wClPTN6MNd1swm564bD" alt="" width="455"><figcaption></figcaption></figure>
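
In text form, the snippet above is roughly as follows (pod and configmap names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
    - name: simple-webapp-color
      image: simple-webapp-color
      envFrom:
        - configMapRef:
            name: app-config   # injects every key in the configmap as an env var
```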

#### Ways to inject a configmap:

* ENV
* SINGLE ENV
* VOLUME

<figure><img src="/files/YzvJHGcKBoMivm9FpF9C" alt="" width="506"><figcaption></figcaption></figure>
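
Sketches of the SINGLE ENV and VOLUME styles, assuming the same `app-config` configmap (fragments of a pod spec):

```yaml
# SINGLE ENV: pick one key out of the configmap
env:
  - name: APP_COLOR
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: APP_COLOR

# VOLUME: each key becomes a file under the volume's mount path
volumes:
  - name: app-config-volume
    configMap:
      name: app-config
```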

### Lab (Review):

A big issue I encountered was the syntax of configmaps; I was confused about what to use and had to look at the solution.

The main issue is that I didn't search the k8s docs properly. I must check the links found at the bottom of pages if I don't find anything useful on the configmap docs page.

<figure><img src="/files/zqbk2Jr6pCeS5Zq3QUTa" alt=""><figcaption></figcaption></figure>

<figure><img src="/files/IVjCENXG1uR2RiVf9wnb" alt="" width="563"><figcaption></figcaption></figure>

Also, I didn't utilize the `--force` flag in the `kubectl replace` command.

## Configure Secrets

#### Two steps are required to work with secrets:

* Create Secret
* Inject Secret

### Create Secret

#### Using imperative command:

```bash
kubectl create secret generic <secret-name> --from-literal=<key>=<value>
kubectl create secret generic <secret-name> --from-file=<secret-file>
```

#### Using Declarative approach:

<figure><img src="/files/QXmvLvVYDrns0g5hfgpm" alt=""><figcaption></figcaption></figure>

```bash
echo -n "mysqlpassword" | base64 # DB_Password encoded string
```
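
To make sure the value round-trips, decode it back (the encoded string below is the output of the command above):

```shell
# Encode the plaintext (-n avoids encoding a trailing newline)
encoded=$(echo -n "mysqlpassword" | base64)
echo "$encoded"                       # bXlzcWxwYXNzd29yZA==

# Decode it back, e.g. when inspecting a secret pulled from the cluster
echo -n "$encoded" | base64 --decode  # mysqlpassword
```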

```bash
kubectl create -f secret-data.yaml
kubectl get secrets
```

### Inject Secret

<figure><img src="/files/3Qqz1GArJit6NDHYuOlD" alt=""><figcaption></figcaption></figure>

#### Other ways to inject secrets:

* ENV
* SINGLE ENV
* VOLUME

<figure><img src="/files/kJIrLd7tMe2BGmsjNSLg" alt="" width="522"><figcaption></figcaption></figure>
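
Sketches of the three styles, assuming a secret named `app-secret` (fragments of a pod spec):

```yaml
# ENV: inject every key in the secret as an env var
envFrom:
  - secretRef:
      name: app-secret

# SINGLE ENV: pick one key out of the secret
env:
  - name: DB_Password
    valueFrom:
      secretKeyRef:
        name: app-secret
        key: DB_Password

# VOLUME: each key becomes a file under the volume's mount path
volumes:
  - name: app-secret-volume
    secret:
      secretName: app-secret
```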

{% hint style="danger" %}
Secrets are not encrypted, only encoded, and they aren't encrypted in etcd by default. Make sure not to push them to git. Anyone able to create pods/deployments in the same namespace can access that namespace's secrets.
{% endhint %}

{% hint style="success" %}
[Enable encryption at rest for etcd](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/). Configure least-privilege access to secrets (RBAC). Consider third-party secret store providers such as AWS Secrets Manager, HashiCorp Vault, etc.
{% endhint %}

#### Kubernetes also takes some precautions when handling secrets:

* A secret is only sent to a node if a pod on that node requires it.
* Kubelet stores the secret into a tmpfs so that the secret is not written to disk storage.
* Once the Pod that depends on the secret is deleted, kubelet will delete its local copy of the secret data as well.

Read about the [protections](https://kubernetes.io/docs/concepts/configuration/secret/#protections) and [risks](https://kubernetes.io/docs/concepts/configuration/secret/#risks) of using secrets.

Having said that, there are better ways of handling sensitive data like passwords in Kubernetes, such as Helm Secrets or [HashiCorp Vault](https://www.vaultproject.io/).

### Lab (Review):

I didn't include the `generic` option when creating the secret (`kubectl create secret generic ...`); I found it by using `kubectl create secret -h`.

<figure><img src="/files/J3VQ8okR95Y9J9uDLBES" alt=""><figcaption></figcaption></figure>

A lot of mistakes were identified. I made the same mistake of not checking the docs well. I was asked to load secrets as environment variables, so I should have gone with:

<figure><img src="/files/pl7dsdpBdpa5vbN5mYhF" alt=""><figcaption></figcaption></figure>

And I should have used `envFrom`, which loads all of a secret's keys as environment variables:

<figure><img src="/files/pHawuyAZoAQ18j9cE1Hd" alt=""><figcaption></figcaption></figure>

{% hint style="danger" %}
Also, regarding vim: to paste correctly, I must copy the spaces too from the docs, and paste from the beginning of the line so the indentation comes out correctly.
{% endhint %}

## Encrypting Secret Data At Rest

#### Follow this docs page for additional information:

{% embed url="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data" %}

<figure><img src="/files/RT5caWhjvcxd91yN349W" alt="" width="563"><figcaption></figcaption></figure>
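
A minimal sketch of the `EncryptionConfiguration` file passed to the API server via `--encryption-provider-config` (the key is a placeholder; generate your own, e.g. with `head -c 32 /dev/urandom | base64`):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}   # fallback provider, reads existing unencrypted secrets
```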

## Multi Container Pods

When building web applications, you might want to have a logging agent as a container separate from your app container. In that case you can create a multi-container pod, which can contain more than one container sharing the same storage and network, so the containers can refer to each other as `localhost`.

#### To create a multi-container pod, define it as an array under containers section:

<figure><img src="/files/Osi5UqK8LtTvXRZx72y8" alt=""><figcaption></figcaption></figure>
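
A sketch of a two-container pod (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
spec:
  containers:
    - name: simple-webapp
      image: simple-webapp
      ports:
        - containerPort: 8080
    - name: log-agent      # sidecar sharing the pod's network and storage
      image: log-agent
```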

### Lab:

Everything went well, except I had to pay attention to the correct namespace when editing the pod.

{% hint style="danger" %}
I was editing the app pod in the default namespace instead of the elastic-stack namespace.
{% endhint %}

#### I tried to exec into the pod but it wasn't working because I didn't specify the `-n` and `-it` flags:

{% code overflow="wrap" %}

```bash
kubectl -n elastic-stack exec -it app -- /bin/bash # Or instead of /bin/bash run the command you want (cat /log/app.log)
```

{% endcode %}

#### Docs used:

{% embed url="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" %}

## InitContainers

In Kubernetes, multi-container pods use `initContainers` for tasks that must complete before the main containers start. These are specified in the pod configuration.

#### Here's a shortened example:

{% code overflow="wrap" %}

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
```

{% endcode %}

In this configuration, `initContainers` run tasks sequentially before the main container starts. If an `initContainer` fails, Kubernetes restarts the pod until it succeeds.

#### Read more about InitContainers here:

<https://kubernetes.io/docs/concepts/workloads/pods/init-containers/>

### Lab:

#### It's, as usual, a `kubectl edit pod` YAML issue; the `- name` key always comes first:

<figure><img src="/files/Z1kMDscqUDwltcSVNmj8" alt=""><figcaption></figcaption></figure>

Also, I should have checked the pod logs before attempting the fix.

## Slides

{% file src="/files/gysqTekVPrWh7yTcLkEP" %}

