Application Lifecycle Management
A new rollout creates a new deployment revision (Revision 1). When we upgrade the application, a new rollout is triggered and a new revision is created (Revision 2). This allows us to roll back to a previous revision if anything breaks.
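As a sketch, assuming a deployment named myapp-deployment with a container named nginx-container (both hypothetical names):

```sh
# Trigger a rollout by changing the image
kubectl set image deployment/myapp-deployment nginx-container=nginx:1.25

# Watch the rollout progress
kubectl rollout status deployment/myapp-deployment

# List the stored revisions
kubectl rollout history deployment/myapp-deployment

# Roll back to the previous revision if the upgrade broke something
kubectl rollout undo deployment/myapp-deployment
```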
With the Recreate strategy, we bring all pods of our application down, then spin up the updated ones afterwards. This causes downtime, which is bad.
With the RollingUpdate strategy (the default), we bring the pods down one by one: whenever an older-version pod goes down, a newer-version pod spins up in its place.
To update a deployment, you could edit the deployment in vim and change the image version. What happens under the hood is that the deployment creates a new ReplicaSet (replicaset-2) with the new image version and starts removing pods from the original ReplicaSet (replicaset-1).
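For example, with the same hypothetical deployment name:

```sh
# Edit the deployment in place (opens vim by default)
kubectl edit deployment myapp-deployment

# During the rollout both ReplicaSets are visible:
# the old one scales down while the new one scales up
kubectl get replicasets
```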
A small mistake was identified: when changing a deployment from RollingUpdate to Recreate, there is additional configuration specified under rollingUpdate (maxSurge and maxUnavailable), so make sure to also remove that block and keep only type: Recreate, as in the sketch below.
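A minimal sketch of the strategy section (25% is the default for both rollingUpdate fields):

```yaml
# Before: RollingUpdate carries its own sub-block
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%

# After: remove the rollingUpdate block entirely, keep only the type
strategy:
  type: Recreate
```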
I had issues with the command and args fields of the pod definition, and I had misunderstood some concepts. Here are the correct ones:
command in a pod definition is the same as ENTRYPOINT in a Dockerfile.
args in a pod definition is the same as CMD in a Dockerfile.
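A minimal sketch showing both overrides (the image name and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
spec:
  containers:
    - name: ubuntu-sleeper
      image: ubuntu-sleeper      # hypothetical image
      command: ["sleep2.0"]      # overrides the Dockerfile ENTRYPOINT
      args: ["10"]               # overrides the Dockerfile CMD
```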
Instead of deleting the pod and re-applying the edited definition saved under /tmp/, we can run kubectl replace --force -f /tmp/...
Instead of defining key-value environment variables directly in the pod definition, we can create ConfigMaps.
When a pod is created, we inject the ConfigMap into the pod.
Create configmap
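Both the imperative and the declarative approach work; the names and values below are just examples:

```sh
# Imperative: from literals
kubectl create configmap app-config --from-literal=APP_COLOR=blue --from-literal=APP_MODE=prod

# Imperative: from a file
kubectl create configmap app-config --from-file=app_config.properties
```

```yaml
# Declarative equivalent
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
```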
Inject configmap to pod
We add a property under the container spec called envFrom. Note that this is a pod YAML we still haven't created/applied; it's shown for illustration only.
There are three ways to inject a ConfigMap into a pod: all keys as environment variables (ENV), a single key as one environment variable (SINGLE ENV), or the keys as files in a volume (VOLUME). Sketches of each follow.
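Minimal sketches of the three injection styles, assuming the app-config ConfigMap from above; these fragments go under the pod's container spec (the volume name is a hypothetical choice):

```yaml
# ENV: load every key of the ConfigMap as an environment variable
envFrom:
  - configMapRef:
      name: app-config

# SINGLE ENV: pick one key out of the ConfigMap
env:
  - name: APP_COLOR
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: APP_COLOR

# VOLUME: mount the ConfigMap as files, one file per key
volumes:
  - name: app-config-volume
    configMap:
      name: app-config
```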
A huge issue I encountered was the syntax of ConfigMaps; I was so confused about what to use that I had to look at the solution. The main issue is that I didn't look through the k8s docs correctly; I must check the links found at the bottom of a ConfigMap docs page if I don't find anything useful on the page itself. I also didn't utilize the --force flag of the kubectl replace command.
Create Secret
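Again, both approaches work; the secret name and keys are hypothetical:

```sh
# Imperative: note the 'generic' subcommand
kubectl create secret generic app-secret \
  --from-literal=DB_Host=mysql \
  --from-literal=DB_Password=passwd
```

```yaml
# Declarative equivalent: values must be base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Host: bXlzcWw=        # base64 of "mysql"
  DB_Password: cGFzc3dk    # base64 of "passwd"
```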
Inject Secret
A Secret can be injected the same three ways: all keys as environment variables (ENV), a single key as one environment variable (SINGLE ENV), or the keys as files in a volume (VOLUME). Sketches of each follow.
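Minimal sketches, assuming the app-secret Secret from above (the volume name is a hypothetical choice):

```yaml
# ENV: load every key of the Secret as an environment variable
envFrom:
  - secretRef:
      name: app-secret

# SINGLE ENV: pick one key out of the Secret
env:
  - name: DB_Password
    valueFrom:
      secretKeyRef:
        name: app-secret
        key: DB_Password

# VOLUME: mount the Secret as files, one file per key
volumes:
  - name: app-secret-volume
    secret:
      secretName: app-secret
```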
Secrets are not encrypted, only encoded, and they aren't encrypted in etcd either. Make sure not to push them to a git repository. Anyone able to create pods/deployments in the same namespace can access the Secrets.
Enable encryption at rest for etcd. Configure least-privilege access to Secrets (RBAC). Consider third-party secret store providers such as AWS Secrets Manager, HashiCorp Vault, etc.
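As a rough sketch of encryption at rest, an EncryptionConfiguration file like the one below is passed to kube-apiserver via --encryption-provider-config; the key here is only a placeholder:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}                                  # fallback: reads unencrypted data
```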
A secret is only sent to a node if a pod on that node requires it.
Kubelet stores the secret into a tmpfs so that the secret is not written to disk storage.
Once the Pod that depends on the secret is deleted, kubelet will delete its local copy of the secret data as well.
Read about the protections and risks of using Secrets here: https://kubernetes.io/docs/concepts/configuration/secret/
Having said that, there are other, better ways of handling sensitive data like passwords in Kubernetes, such as tools like Helm Secrets or HashiCorp Vault.
I didn't include the generic option when creating the secret (kubectl create secret generic ...); I found it by using kubectl create secret -h.
A lot of mistakes were identified here. I made the same mistake of not checking the docs well. I was asked to load Secrets as environment variables, so I should have gone to the docs and used envFrom with a secretRef, which injects the Secret's keys as environment variables into the container (see the ENV sketch above).
Also, regarding vim: to paste correctly, I must copy the leading spaces too from the docs, and paste starting from the beginning of the line so the snippet ends up indented correctly.
When building web applications, you might want a logging agent running as a container separate from your app container. In that case you can use a multi-container pod: a pod that contains more than one container, where the containers share the same storage and network and can refer to each other as localhost. A sketch follows.
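A minimal sketch of a two-container pod; the image names and the shared volume are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
    - name: app
      image: my-webapp           # hypothetical app image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/webapp
    - name: log-agent
      image: my-log-agent        # hypothetical sidecar image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/webapp
  volumes:
    - name: shared-logs
      emptyDir: {}               # scratch volume shared by both containers
```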
Everything went well, except I had to pay attention to the correct namespace when editing the pod: I was editing the app pod in the default namespace instead of the elastic-stack namespace.
I had to remember the -n (namespace) and -it (interactive terminal) flags. In Kubernetes, multi-container pods use initContainers for tasks that must complete before the main containers start. These are specified in the pod configuration, for example:
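A sketch adapted from the init-containers docs page linked below; the service name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
    - name: init-myservice
      image: busybox
      # Block until the 'myservice' Service resolves in DNS
      command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  containers:
    - name: myapp-container
      image: busybox
      command: ['sh', '-c', 'echo The app is running! && sleep 3600']
```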
In this configuration, the initContainers run sequentially before the main container starts. If an initContainer fails, Kubernetes restarts the pod until it succeeds.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
While using kubectl edit pod I hit a YAML issue: the - name key always comes first in each container list entry. Also, I should have checked the pod logs before trying to fix the issue.