Supply Chain Security
We must add a securityContext at both the pod level and the container level, and specify a custom serviceAccountName.
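A minimal sketch of such a hardened definition; the pod name, image, and ServiceAccount name here are placeholders, not from the original:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod              # hypothetical name
spec:
  serviceAccountName: custom-sa   # assumed pre-created ServiceAccount
  securityContext:                # pod-level settings
    runAsUser: 10001
    runAsNonRoot: true
  containers:
    - name: app
      image: nginx:1.25           # placeholder image
      securityContext:            # container-level settings
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```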
The curl command passes a hard-coded token, which is bad practice, so we must pass it as a $TOKEN environment variable instead.
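A hedged sketch of that fix, assuming the token lives in a Secret; the Secret name, key, image, and URL are all illustrative:

```yaml
containers:
  - name: app
    image: curlimages/curl:8.7.1    # placeholder image
    env:
      - name: TOKEN                 # injected instead of hard-coded
        valueFrom:
          secretKeyRef:
            name: app-token         # assumed Secret
            key: token
    command: ["sh", "-c"]
    args: ["curl -H \"Authorization: Bearer $TOKEN\" https://example.internal/api"]  # hypothetical URL
```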
To run static analysis with kubesec, run docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml and replace kubesec-test.yaml with your own pod definition file.
You can save the results in a JSON file and then edit and harden your pod definition step by step based on kubesec's scoring and the advice it gives.
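For example, a small pipeline along those lines; pod.yaml is a placeholder, and the jq queries assume kubesec's JSON output with its score and scoring.advise fields:

```bash
# save the kubesec report to a file
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < pod.yaml > report.json
# inspect the score, then the advice entries to harden step by step
jq '.[].score' report.json
jq '.[].scoring.advise' report.json
```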
Conftest is similar to kubesec, but it is used with OPA policies.
docker run --rm -v $(pwd):/project openpolicyagent/conftest test deploy.yaml
This mounts the current directory as /project, which holds the OPA policy definitions, runs the openpolicyagent/conftest image, invokes its test subcommand, and specifies the YAML definition to scan.
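A minimal sketch of such a policy; conftest reads Rego files from the policy/ directory under the mounted /project by default, and this particular rule is my own illustration:

```bash
mkdir -p policy
cat > policy/deny_root.rego <<'EOF'
package main

# deny Deployments that do not force a non-root user (illustrative rule)
deny[msg] {
  input.kind == "Deployment"
  not input.spec.template.spec.securityContext.runAsNonRoot
  msg := "containers must not run as root"
}
EOF

docker run --rm -v $(pwd):/project openpolicyagent/conftest test deploy.yaml
```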
Web servers or other apps can contain vulnerabilities, and the targets could be:
- Remotely accessible applications in containers
- Local applications inside containers
And the damage could be:
- Privilege escalation
- Information leaks
- Denial of service
- etc.
Vulnerabilities can be discovered in our own image and its dependencies, so we must check both during the build and during the runtime of our image.
Trivy is a simple vulnerability scanner for containers and other artifacts, suitable for CI.
We will now use trivy to check some public images and to scan our kube-apiserver image.
To scan an image using trivy, simply run docker run aquasec/trivy image <image-name>, and you can pipe the output through | grep CRITICAL to check for critical vulnerabilities.
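For example, with an arbitrary public image:

```bash
docker run aquasec/trivy image nginx:1.19 | grep CRITICAL
```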
To scan the kube-apiserver image, you can run kubectl -n kube-system describe po kube-apiserver | grep -i image and then pass the image to trivy.
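Putting the two steps together (the -controlplane pod suffix is an assumption; adjust it to your node name):

```bash
# read the exact image of the kube-apiserver static pod, then scan it
IMAGE=$(kubectl -n kube-system get po kube-apiserver-controlplane \
  -o jsonpath='{.spec.containers[0].image}')
docker run aquasec/trivy image "$IMAGE" | grep CRITICAL
```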
Supply chain security is all about ensuring that the container created in our build is the same container running in our Kubernetes cluster, and about securing and restricting that container.
We can list the YAML of the kube-apiserver by running kubectl -n kube-system get po kube-apiserver -o yaml and then look for containerStatuses.imageID, which holds the image digest; we can copy it and replace the image section by editing the kube-apiserver YAML definition.
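A sketch of reading the digest directly (pod name again assumed):

```bash
kubectl -n kube-system get po kube-apiserver-controlplane \
  -o jsonpath='{.status.containerStatuses[0].imageID}'
# output shape: <registry>/kube-apiserver@sha256:<digest>
# copy that value into the image: field of the static pod manifest
```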
We create a ConstraintTemplate for OPA that lists the image registries to be restricted, which here are all Docker registry and k8s registry images. Then we create a constraint that applies to pods, so no pod will be created if it uses a Docker or k8s registry image.
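A hedged sketch of what the template and constraint could look like; the kind name and Rego are reconstructions from the description above, and only the constraint name pod-trusted-images is taken from the error below:

```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8strustedimages            # assumed name
spec:
  crd:
    spec:
      names:
        kind: K8sTrustedImages
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8strustedimages
        # restrict images pulled from the Docker and k8s registries
        violation[{"msg": msg}] {
          image := input.review.object.spec.containers[_].image
          startswith(image, "docker.io/")
          msg := "not trusted image!"
        }
        violation[{"msg": msg}] {
          image := input.review.object.spec.containers[_].image
          startswith(image, "k8s.gcr.io/")
          msg := "not trusted image!"
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sTrustedImages
metadata:
  name: pod-trusted-images          # matches the error message below
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```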
Trying to create such a pod now fails with:
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [pod-trusted-images] not trusted image!
We will investigate the ImagePolicyWebhook and use it up to the point where it calls an external service.
We must edit /etc/kubernetes/manifests/kube-apiserver.yaml and add ImagePolicyWebhook to the --enable-admission-plugins line.
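The relevant line would then look like this (NodeRestriction being the usual kubeadm default):

```yaml
# in /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
```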
If the kube-apiserver isn't coming back online, we can cd /var/log/pods and tail -f kube-system_kube-apiserver.../kube-apiserver/0.log to find that it gives a "no config specified" error.
Now, we have an admission_config.yaml which was pre-created, and we must add its path to the kube-apiserver definition as --admission-control-config-file. This config holds the path to a kubeconfig, and the kubeconfig file has the certificates and URL of the external service that will decide on admission instead of the kube-apiserver. Also note that defaultAllow is set to false, which means requests are denied whenever the external service cannot decide, and as a result the kube-apiserver won't appear as a pod in the background.
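The config could look like this; the file paths and TTL values are assumptions, while defaultAllow: false comes from the original:

```yaml
# /etc/kubernetes/admission/admission_config.yaml (assumed path)
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: /etc/kubernetes/admission/kubeconfig  # certs + external service URL
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: false   # deny when the external service cannot decide
```

The kube-apiserver flag would then be - --admission-control-config-file=/etc/kubernetes/admission/admission_config.yaml.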
Don't forget to add the volume and volumeMounts for the new file, as sketched below.
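For example, assuming the config lives under /etc/kubernetes/admission on the host:

```yaml
# in the kube-apiserver container spec
volumeMounts:
  - name: admission-config
    mountPath: /etc/kubernetes/admission
    readOnly: true
# at the pod spec level
volumes:
  - name: admission-config
    hostPath:
      path: /etc/kubernetes/admission
      type: DirectoryOrCreate
```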
After we did that, kubectl commands still work and return results, but the kube-apiserver is not showing as a running pod. If you're wondering how we can even get pods or run commands and get responses: the kube-apiserver itself keeps running as a static pod, and we are still using the same connection to it; however, since the kubeconfig has no external host URL specified and defaultAllow is false, pod admission is denied, which is why its mirror pod doesn't show up.
If we want to remove the external host, we can set defaultAllow: true so that requests are allowed when the external service cannot be reached.