Cluster Hardening
RBAC
Role-Based Access Control (RBAC) in Kubernetes governs access to cluster resources based on roles and permissions, with two types:
Role for namespace-specific access
ClusterRole for cluster-wide access.
Combinations for RBAC in Kubernetes:
**Role -> RoleBinding:** Assigns permissions in a namespace.
**ClusterRole -> ClusterRoleBinding:** Assigns permissions cluster-wide.
**ClusterRole -> RoleBinding:** Assigns the permissions defined in a ClusterRole, but only within the RoleBinding's namespace.
Wrong combination:
Role -> ClusterRoleBinding
A ClusterRoleBinding's roleRef can only reference a ClusterRole; a namespaced Role cannot be granted cluster-wide scope, so the API server rejects this combination. The reverse, ClusterRole -> RoleBinding, is valid and is a common way to reuse one set of permissions per namespace, as the sketch below shows.
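For illustration, a minimal sketch of the valid ClusterRole -> RoleBinding combination (the secret-reader role, dev namespace, and user jane are hypothetical):

```yaml
# ClusterRole: defined once, not namespaced
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader            # hypothetical name
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
# RoleBinding: limits the ClusterRole's permissions to the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: dev                 # permissions apply here only
subjects:
- kind: User
  name: jane                     # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```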
Note: Be cautious when using ClusterRoles as they grant permissions across the entire cluster, potentially leading to unintended access if not applied carefully.
Simple Scenario
Another Scenario
Accounts
Demo:
Everything regarding creating a user is found here: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/#normal-user
Service Accounts
When creating a pod and attaching a service account to it, you can view details by:
Make sure to create a token for the service account using
kubectl create token <sa-name>
and decode it (it is a JWT). Then exec into the pod:
kubectl exec -it pod-name -- bash
Move to the
/run/secrets/kubernetes.io/serviceaccount/
directory to view details of the service account.
You can curl the Kubernetes API by finding its IP using
env | grep KUBE
By default you will get an SSL certificate problem, so pass the
-k
flag to the curl command.
You will notice that you are getting a forbidden error and the user is anonymous, because we are not passing the authorization token as a Bearer header in the request.
You can pass the token using the
-H
flag:
curl https://10.96.0.1 -k -H "Authorization: Bearer <token>"
We still get forbidden, but the user is now identified as our service account name. This is due to RBAC rules; we can create an RBAC rule to allow our service account to get, list, etc.
To disable automounting of the service account token, set
automountServiceAccountToken: false
under spec: in the pod definition.
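A minimal sketch (pod and service account names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-pod                         # hypothetical name
spec:
  serviceAccountName: my-sa            # hypothetical service account
  automountServiceAccountToken: false  # no token mounted into the container
  containers:
  - name: app
    image: nginx
```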
You can check whether it is mounted by exec-ing into the pod and running mount | grep ser
Restrict API Access
Restrictions:
Hands-on [IMPORTANT]
Anonymous authentication is enabled by default in the kube-apiserver definition
/etc/kubernetes/manifests/kube-apiserver.yaml
We need to set
--anonymous-auth=false
and if the flag is not there, add this line.
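A minimal excerpt of where the flag sits in the static pod manifest (all other flags omitted):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --anonymous-auth=false   # reject unauthenticated (anonymous) requests
```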
Enable the
insecure-port=8080
access/port and use it (it's set to 0 by default, which means it's disabled).
Manual API query using kubeconfig certs:
kubectl config view --raw
Get the certificate-authority-data (ca), base64 decoded
Get the client-certificate-data (crt), base64 decoded
Get the client-key-data (key), base64 decoded
curl https://k8s:6443 --cacert ca --cert crt --key key
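A sketch of extracting the three files with jq (assuming jq is installed and that the first cluster/user entry in the kubeconfig is the one you need):

```bash
# CA certificate of the cluster
kubectl config view --raw -o json \
  | jq -r '.clusters[0].cluster."certificate-authority-data"' | base64 -d > ca
# client certificate of the kubeconfig user
kubectl config view --raw -o json \
  | jq -r '.users[0].user."client-certificate-data"' | base64 -d > crt
# client private key of the kubeconfig user
kubectl config view --raw -o json \
  | jq -r '.users[0].user."client-key-data"' | base64 -d > key
```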
Make Kubernetes API reachable from the outside
Edit the kubernetes service and change it from ClusterIP to NodePort
Curl with
https://<external-IP>:<NodePort>
If you want to access the kube-apiserver using kubectl, add
https://<external-IP>:<NodePort>
to the kubeconfig file and add the entry in
/etc/hosts
as
<external-ip> kubernetes
NodeRestriction
NodeRestriction is an admission plugin that limits the node labels a kubelet can modify, ensuring secure workload isolation via labels.
Add
--enable-admission-plugins=NodeRestriction
in
/etc/kubernetes/manifests/kube-apiserver.yaml
Go to the worker node and run
export KUBECONFIG=/etc/kubernetes/kubelet.conf
Test that the node restriction is working: using the kubelet's credentials, modifying cks-master or adding restricted labels will be denied, as the sketch below shows.
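A sketch of the test from the worker node (node names follow the course's cks-master/cks-worker setup; the label key cks/test is arbitrary):

```bash
export KUBECONFIG=/etc/kubernetes/kubelet.conf
# Allowed: a kubelet may label its own Node object
kubectl label node cks-worker cks/test=yes
# Denied by NodeRestriction: a kubelet may not modify other nodes
kubectl label node cks-master cks/test=yes
# Denied: the node-restriction.kubernetes.io/ label prefix is reserved
kubectl label node cks-worker node-restriction.kubernetes.io/test=yes
```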
Update Kubernetes
This is known from the CKA content.
Microservice Vulnerabilities
Manage Kubernetes Secrets
Simple Secret Scenario
Create secret1 using
kubectl create secret generic secret1 --from-literal user=admin
Create secret2 using
kubectl create secret generic secret2 --from-literal pass=123456
Create the pod definition with a client-side dry run, then edit it and add both the env secret and the filesystem-mounted secret, as the sketch below shows.
Pay attention to the env secret: the environment variable name here is uppercase (PASSWORD), the secretKeyRef.name must be the same as the secret name (secret2), and the key is the one we passed to --from-literal, which is pass.
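A sketch of the resulting pod definition (pod and volume names are hypothetical; the mount path matches the containerd section below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod                 # hypothetical name
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: PASSWORD               # env var name used in the scenario
      valueFrom:
        secretKeyRef:
          name: secret2            # must match the secret name
          key: pass                # the key passed to --from-literal
    volumeMounts:
    - name: secret1-vol
      mountPath: /etc/secret1      # path referenced in the containerd section
      readOnly: true
  volumes:
  - name: secret1-vol
    secret:
      secretName: secret1
```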
Hacking Secrets and Containerd
You can view more information about the secret-pod using crictl
Go to the node that is running the pod (cks-worker)
crictl ps | grep secret-pod
crictl inspect <secret-pod-id>
For looking at the raw password of the env secret:
crictl inspect <secret-pod-id> | grep env -A 45
For looking at the information of the mounted secret:
crictl inspect <secret-pod-id> | vim -
Search for the PID; it will be 4 digits. Take it and run
ps aux | grep <PID>
Move to the
/proc/<PID>/root/etc/secret1
directory to view the mounted secret.
Hacking Secrets and Etcd
Instead of specifying the etcd version with every command, we can run
export ETCDCTL_API=3
We must send the certificates with every
etcdctl
command. We can run
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
to get the certificate paths for etcd. Then we can run a command like this:
etcdctl --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key endpoint health
We can read a secret by running the same command with
get /registry/secrets/default/secret2
in place of endpoint health
We can see the secrets as raw data, so we need to encrypt etcd at rest.
Encrypt Secrets in ETCD at rest Scenario
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
head -c 32 /dev/urandom | base64 > key
Create the
enc.yaml
file:
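A minimal enc.yaml sketch following the linked documentation, using the aescbc provider with the key generated above (identity as the last entry keeps still-unencrypted data readable):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-key-from-the-head-command>
      - identity: {}           # fallback for reading still-unencrypted data
```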
Now, add the
--encryption-provider-config=<path-to-enc.yaml>
flag to the kube-apiserver definition.
Add the directory that
enc.yaml
is inside to the volumes and volumeMounts at the end of the kube-apiserver definition. Run
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
to make sure previously created secrets get encrypted
To disable encryption at rest, place the identity provider as the first entry in your encryption configuration file, then force the kube-apiserver to restart by moving its manifest out of the manifests directory and back again. After that, run
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
to force-decrypt all secrets.
Best practice for encrypting etcd is to use KMS, an external secret manager; providers include AWS KMS and HashiCorp Vault.
Container Runtime Sandboxes
Containers aren't contained: they can run syscalls that reach the kernel and hardware, and an attacker can potentially escape the container through them.
So we add an extra security layer: a sandbox sits between every container and the kernel, so the container can't run system calls directly and malicious calls can be restricted.
Contact Linux Kernel From Inside a Container
Running
uname -r
in a pod performs a syscall and returns the kernel version. We can run
strace uname -r
to check the system calls made and
ltrace uname -r
to check the library calls used.
Open Container Initiative (OCI)
OCI is a Linux Foundation project to design open standards for container virtualization.
OCI creates specifications (runtime, image and distribution)
Then OCI provides a reference container runtime binary (runc) that implements these specifications
**Container Runtime Interface (CRI)** allows the kubelet to contact more than one container runtime.
gVisor
Kernel exploits run in a container sandboxed with gVisor won't reach the affected host kernel, as gVisor filters the syscalls.
Create and use RuntimeClasses for runsc (gVisor)
https://kubernetes.io/docs/concepts/containers/runtime-class/
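A sketch of the RuntimeClass and a pod using it, following the linked documentation (the class name gvisor is a convention; the handler must match the runsc runtime configured in containerd):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor                # conventional name
handler: runsc                # must match the runtime configured in containerd
---
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-pod            # hypothetical name
spec:
  runtimeClassName: gvisor    # run this pod inside the runsc sandbox
  containers:
  - name: nginx
    image: nginx
```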
At first the pod is not running, because gVisor is not installed on the worker node.
Installation of gvisor/runsc with containerd
Now the pod is running; if we exec in and run uname -r, we will see a different kernel version, as gVisor is an additional layer and the system call is not executed on Linux directly.
OS Level Security Domains
Security Contexts
We can add a pod-level security context so all containers run as a given user and group; as a further precaution, we can set runAsNonRoot
to true on the container itself, so the container won't be created if it would run as root.
Privileged Containers
Also, there is
allowPrivilegeEscalation
which controls whether a process can gain more privileges than its parent process. It's enabled by default, so change it to false; see the sketch below.
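A minimal sketch combining these settings (pod name, UID and GID are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod                     # hypothetical name
spec:
  securityContext:                     # pod-level: applies to all containers
    runAsUser: 1000
    runAsGroup: 3000
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 1d"]
    securityContext:                   # container-level
      runAsNonRoot: true               # refuse to start if running as root
      allowPrivilegeEscalation: false  # block gaining more privileges than the parent
```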
mTLS
mTLS between pods
By default, every pod can communicate with every pod without encryption.
So mTLS's main goal is to let pods encrypt/decrypt traffic for communication.
To create mTLS between 2 pods, we need certificates so pods can identify each other:
Client certificate
Server certificate
Service Mesh/Proxy
Create proxy sidecar
Create a proxy sidecar with NET_ADMIN capability so we can simulate re-routing rules in a pod.
Create a pod with bash image and make it ping google
Add to the pod definition another container named proxy with an ubuntu image that runs
apt-get update && apt-get install iptables -y && iptables -L && sleep 1d
which installs iptables and lists the rules, then sleeps for 1 day.
To check logs for a container in a pod, run
kubectl logs pod -c <container-name>
Give the container suitable permissions (NET_ADMIN)
Run
kubectl logs app -c proxy
to check the iptables rules (a full pod sketch follows below)
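A sketch of the full pod under these assumptions (the pod and container names match the kubectl logs app -c proxy command above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: bash
    command: ["bash", "-c", "ping google.com"]   # generates traffic to observe
  - name: proxy
    image: ubuntu
    command: ["sh", "-c", "apt-get update && apt-get install -y iptables && iptables -L && sleep 1d"]
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]                       # needed to read/modify iptables rules
```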
Open Policy Agent
Open Policy Agent is an extension we can add to Kubernetes which allows us to customize policies.
Easy implementation of policies (Rego Language)
Works with JSON/YAML
It uses kubernetes admission controllers
Doesn't know concepts like pods and deployments
Introduction to OPA and Gatekeeper
In OPA Gatekeeper, a constraint template defines the policy logic, and child constraints created from it select the resources (for example by kind or label) on which to enforce it.
Installing OPA Gatekeeper
https://open-policy-agent.github.io/gatekeeper/website/docs/install/
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
Run
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.16.3/deploy/gatekeeper.yaml
OPA Gatekeeper creates a webhook; Kubernetes offers 2 types of webhooks:
Validating admission webhook, which allows or denies requests (e.g., pod creation) and is the one used by OPA Gatekeeper
Mutating admission webhook, which for instance adds labels to a pod if a condition is met, modifies a deployment, etc.
OPA Gatekeeper CRDs
ConstraintTemplates are a type of Custom Resource Definition (CRD) in the Open Policy Agent (OPA) Gatekeeper framework.
These templates allow you to define reusable policies that can then be enforced across your Kubernetes clusters.
Once a ConstraintTemplate is defined, you can create Constraint CRDs based on that template to enforce specific rules.
Here's a breakdown:
ConstraintTemplate: This defines the structure and logic of a policy. It uses Rego, OPA's policy language, to specify the policy logic.
Constraint: This applies the policy defined in a ConstraintTemplate to specific resources in the Kubernetes cluster.
Creating Templates and constraints
OPA Deny All
Create the deny all template and check for it using
kubectl get constrainttemplates
Create the deny all constraint and check for it using
kubectl get k8salwaysdeny
To approve all pods again, we can change the always-true statement in the
template.yaml
to false; see the sketch below.
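A sketch of the deny-all pair (the 1 > 0 condition is the always-true statement mentioned above; the deny message is an arbitrary example):

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8salwaysdeny
spec:
  crd:
    spec:
      names:
        kind: K8sAlwaysDeny
    validation:
      openAPIV3Schema:
        type: object
        properties:
          message:
            type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8salwaysdeny
        violation[{"msg": msg}] {
          1 > 0                          # always true, so every request violates
          msg := input.parameters.message
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAlwaysDeny
metadata:
  name: pod-always-deny
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]                   # enforce only on pods
  parameters:
    message: "ACCESS DENIED!"            # arbitrary example message
```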
Overall, a template must be created first, then constraints accordingly.