Security
Users: Admins & Developers
Bots: Service accounts, i.e. other processes, services, and apps that require access to the cluster
File with user details
Certificates
3rd-party identity service like LDAP
So you cannot create or list users in a Kubernetes cluster
Kubernetes can manage service accounts natively through the kube-apiserver
Static Password File: List of username & password in a file
Static Token File: Usernames & Token in a file
Certificates: Authenticate using certificates
Identity Services: Connect to 3rd party authentication protocols such as LDAP, Kerberos etc...
Static Password File
Static Token File
We can create a list of users with their passwords as a CSV file:
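For example, a minimal user-details.csv (hypothetical usernames and passwords), in the format password,username,user-id:

```csv
password123,user1,u0001
password456,user2,u0002
password789,user3,u0003
```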
Then pass the CSV file name as an option to the kube-apiserver and restart it manually
Or, in a kubeadm setup, add the option in the /etc/kubernetes/manifests/kube-apiserver.yaml file; the kube-apiserver static pod will then be restarted automatically
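A sketch of how this might look in the kube-apiserver options (the file path is an assumption, and note that this basic-auth mechanism is deprecated and was removed in recent Kubernetes releases):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --basic-auth-file=/tmp/users/user-details.csv      # static password file
    # or, for a static token file:
    # - --token-auth-file=/tmp/users/user-token-details.csv
```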
To authenticate a user with basic credentials while accessing the kube-apiserver, specify the username and password in the request
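For example (hypothetical host and credentials):

```bash
curl -v -k https://master-node-ip:6443/api/v1/pods -u "user1:password123"
```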
We can have a 4th column to specify a group
Also, if we are using tokens instead of passwords, we must add them to the CSV file and send the token as a bearer token in the request header
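For example, passing the token as a bearer token (hypothetical host, placeholder token):

```bash
curl -v -k https://master-node-ip:6443/api/v1/pods \
  --header "Authorization: Bearer <token>"
```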
This is not the recommended approach for authentication as it stores credentials in plain text
Consider the volume mount while providing the auth file in a kubeadm setup
If user tries to access a web server, TLS certificates ensure the traffic between user and web server is encrypted and the web server is who it says it is.
Hackers can sniff a symmetric key sent by the user to the server and decrypt the messages
To solve this issue, we use asymmetric encryption to generate public and private keys using OpenSSL
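A minimal sketch of generating such a key pair with OpenSSL (file names are placeholders):

```bash
# generate a private key
openssl genrsa -out my-bank.key 2048
# extract the corresponding public key
openssl rsa -in my-bank.key -pubout -out my-bank.pem
```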
User requests the web server on HTTPS
The web server sends a public key to the user
The user's browser encrypts the symmetric key using the public key sent by the web server
The user then sends the encrypted symmetric key to the server; subsequent messages are encrypted with that symmetric key
The server receives the encrypted symmetric key and decrypts it using its private key, then uses the symmetric key to decrypt the messages
The hacker only has the public key and encrypted data, which he can't do anything with.
Hackers now know our trick: they can create their own server, generate their own private and public keys for a secure connection, and somehow route our request to their server
This is where certificates play a crucial role: when we request the server over HTTPS, we receive its public key as part of a certificate, which contains:
Public key of server
Location of server
Who is the certificate for
DNS Information etc...
Most important part to verify a certificate is to check who signed and issued the certificate
When you generate a certificate yourself, it is a self-signed certificate, which looks suspicious; browsers validate certificates and warn you if a certificate is self-signed or otherwise untrusted
Symantec
Comodo
GlobalSign
digicert
Generate a certificate signing request (CSR) using the key we generated earlier and the name of our website (see the sketch after this list)
Authority validates the information you sent
Certificate is signed and sent back to you
You now have a certificate signed by a CA that the browsers trust
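As referenced in the list above, a sketch of generating the CSR (subject fields and file names are examples):

```bash
openssl req -new -key my-bank.key -out my-bank.csr \
  -subj "/C=US/ST=CA/O=MyOrg/CN=my-bank.com"
```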
CAs use different techniques to make sure you're the actual owner of that domain
All CAs have their own key pairs; their public keys are built into browsers
The browser uses the CA's public key to verify that the certificate was indeed signed by that CA
CAs also offer private hosting of their servers: we can deploy a CA server internally, install the public key of our internal CA on all employees' browsers, and establish secure connectivity within the organization.
Admin generates a key pair for securing SSH
Web server generates a key pair for securing the website with HTTPS
CA generates its own key pairs to sign certificates
End users generate a single symmetric key to encrypt their credentials and, after establishing trust with the web server, use it to send their credentials for authentication
This whole setup of generating and maintaining these keys and certificates is called Public Key Infrastructure (PKI)
You can encrypt data with either the public or the private key, but data encrypted with one can only be decrypted with the other; that's why traffic to the server is encrypted with the public key, so only the private key can decrypt it
Communication between the master and worker nodes must be secure; likewise, when an admin uses the kubectl utility to communicate with the kube-apiserver, it must be secure. Overall, all communication between Kubernetes components must be secure.
Server certificates for servers
Client certificates for clients
Let's start with the server components, which use TLS certificates.
Server private and public key naming convention may differ depending on who and how the cluster was setup
Let's start with the kube-apiserver, which exposes an HTTPS service that other components and external users use to manage the Kubernetes cluster.
Clients are admins or components who need access to the kube-apiserver through kubectl or the REST API.
Let's start with the admin, who must have a key pair to talk to the kube-apiserver and authenticate
The scheduler needs to communicate with the kube-apiserver to get pods that need scheduling, so it must also have a key pair to authenticate
CA for server
CA for client
For now, we'll just stick to one CA for our cluster.
We will use the OpenSSL utility to generate our certificates
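For instance, a sketch of creating the cluster CA (names follow common kubeadm-style conventions, which may differ in your setup):

```bash
# CA private key
openssl genrsa -out ca.key 2048
# CSR for the CA itself
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
# self-sign the CA certificate
openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt
```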
To generate client certificates starting with admin
To distinguish admins from normal users, we must put the admin user in a group and mention it in the Certificate Signing Request subject, e.g. /O=system:masters
For client system components such as the scheduler, controller manager, and kube-proxy, we must include the system: prefix in their certificate name (CN), like CN=system:kube-scheduler
CN=kube-admin doesn't have to be kube-admin; you can name it as you like, but remember this is the name the kubectl client authenticates with, and it can be found in logs and elsewhere
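A sketch of generating the admin client certificate signed by that CA (the group and names are examples):

```bash
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key \
  -subj "/CN=kube-admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out admin.crt
```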
These certificates are then used in a kubeconfig yaml to authenticate with the cluster.
Let's take the ETCD server as an example: ETCD can be deployed as a cluster across multiple servers in a high-availability environment, so we need to generate peer certificates to secure communication between the different ETCD members in the cluster.
The ETCD config requires the CA root certificate to verify that clients connecting to ETCD are valid
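A sketch of the relevant TLS options in the etcd static pod (paths follow kubeadm defaults and may differ in your setup):

```yaml
# /etc/kubernetes/manifests/etcd.yaml (excerpt)
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --client-cert-auth=true
# peer certificates for communication between etcd members
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --peer-client-cert-auth=true
```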
kubernetes
kubernetes.default
kubernetes.default.svc
kubernetes.default.svc.cluster.local
By its IP address
Those referring to the kube-apiserver by these names can establish a valid connection
To generate a certificate for the kube-apiserver
To cover all the alternative names and offer flexibility in establishing connections to the kube-apiserver, create an openssl.cnf file
Then pass it as a config when generating the certificate signing request
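A sketch of such an openssl.cnf and the matching CSR command (the IP addresses are examples):

```bash
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 172.17.0.87
EOF

openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" \
  -out apiserver.csr -config openssl.cnf
```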
Specify these keys in the kube-apiserver service configuration
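The kube-apiserver options involved look roughly like this (kubeadm-style paths, shown as an illustration):

```yaml
# kube-apiserver flags (excerpt)
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
# client certs used by the apiserver when talking to etcd
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
# client certs used by the apiserver when talking to kubelets
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
```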
Certificates created for kubelet are named based on the node they are on.
We also mentioned a set of client certificates that will be used by the kubelet to authenticate to the kube-apiserver. The kube-apiserver needs to know which node is authenticating, so these certificates use the name format system:node:<node-name> and the group system:nodes to grant the right set of permissions.
Once certificates are generated, they go into a kubeconfig file.
View the kube-apiserver static pod definition found in /etc/kubernetes/manifests/kube-apiserver.yaml
Then, to inspect a certain certificate, use the openssl x509 command
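For example:

```bash
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
```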
Certificate requirements are found in Kubernetes docs in details
I was asked to give the CN name of the kube-apiserver certificate; I chose the Issuer CN instead of the Subject CN, which is wrong.
Check that the certificate paths and names are correct, then check the static pod definition to fix the issue.
Utilize crictl to troubleshoot if kubectl is not working because the kube-apiserver or etcd server is down.
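For example (the container ID is a placeholder):

```bash
crictl ps -a                 # list all containers, including exited ones
crictl logs <container-id>   # inspect logs of the failing component
```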
It turned out to be a wrong path for --etcd-cafile, which must be /etc/kubernetes/pki/etcd/ca.crt
The CA is whichever server holds the CA key pair that we sign certificates with; in our case it's the master node.
Whenever a certificate expires, we need to repeat the same steps to re-generate it. This becomes inefficient as we scale, so Kubernetes helps us by providing a certificates API.
Create CertificateSigningRequest object
Review requests
Approve requests
Share certificates to users
A user generates a key and then creates a certificate signing request using that key with their name on it
Then the user sends this request to the admin
The admin takes the CSR and creates a CertificateSigningRequest object using a yaml definition
The CSR must be base64-encoded before adding it to the request field in the definition yaml, using cat jane.csr | base64
Once the object is created, admin can see it using kubectl get csr
Admin can then approve the csr using kubectl certificate approve jane
The admin then runs kubectl get csr jane -o yaml, decodes the certificate using echo "<encoded-cert>" | base64 -d, and shares it with the end user
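A sketch of the CertificateSigningRequest object and the surrounding commands (the user name jane and the expiration are examples):

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth
  request: <base64-encoded output of: cat jane.csr | base64 -w 0>
```

```bash
kubectl get csr
kubectl certificate approve jane
kubectl get csr jane -o yaml   # then base64 -d the certificate field
```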
All of these tasks are done by the csr-approving and csr-signing controllers in the controller manager component
The main issue here was building the CertificateSigningRequest object, especially since the docs show it as a cat command.
Make sure to utilize dd to delete an entire line and u to undo in vim; also copy the whole base64 output, including the trailing = padding.
Kubeconfig is located by default at $HOME/.kube/config, and kubectl uses it without you having to specify the config path with every command you enter.
Regarding certificates, you must specify the full path; alternatively, you can use certificate-authority-data and paste the base64-encoded certificate instead.
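A minimal kubeconfig sketch (cluster, user names, and paths are examples):

```yaml
apiVersion: v1
kind: Config
current-context: kubernetes-admin@kubernetes
clusters:
- name: kubernetes
  cluster:
    certificate-authority: /etc/kubernetes/pki/ca.crt
    server: https://controlplane:6443
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
users:
- name: kubernetes-admin
  user:
    client-certificate: /etc/kubernetes/pki/users/admin.crt
    client-key: /etc/kubernetes/pki/users/admin.key
```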
You won't be allowed to curl the kube-apiserver like that; you need to pass your credentials as parameters to authenticate to the API
As an alternative, you can start a kubectl proxy, which uses your credentials from the kubeconfig, and then curl localhost:8001
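For example (the host name and file paths are assumptions):

```bash
# authenticate to the API with client certificates
curl https://controlplane:6443/api/v1/pods \
  --cacert ca.crt --cert admin.crt --key admin.key

# or let kubectl proxy handle authentication from the kubeconfig
kubectl proxy &
curl http://localhost:8001/api/v1/pods
```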
All resources in Kubernetes are grouped into different API groups
We can use all of these API groups to allow and deny actions for authorization, especially in RBAC yaml definitions under apiGroups: []
Node Authorization
Attribute Based Authorization (ABAC)
Role Based Authorization (RBAC)
Webhook
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
In Kubernetes Node Authorization, a "user" refers to a system component, such as a kubelet, rather than a human operator.
The kubelet acts on behalf of a node, making requests to the KubeAPI server. The Node Authorizer processes these requests, determining access based on the node's identity and the resources it needs.
Kubelets are registered as "users" within the system:nodes group.
Each kubelet's username is prefixed with system:node:, distinguishing its identity for authorization purposes.
This framework ensures that kubelet requests are appropriately authorized by the node authorizer.
ABAC is meant for external access to the kube-apiserver, for instance by admins and developers, where each user or group is associated with a set of permissions in a policy file.
Every time you need to add or change policies, you must edit this policy file manually and restart the kube-apiserver, which is inefficient and difficult to manage.
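For reference, an ABAC policy file is a set of one-JSON-object-per-line entries like this (the user and resource are examples):

```json
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "dev-user", "namespace": "*", "resource": "pods", "apiGroup": "*"}}
```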
Instead of directly assigning a policy to a user or group of users, we define a role (for developers in our case) and then assign all developers to this role.
So we only need to modify the role instead of modifying every single policy file.
For outsourcing the authorization mechanism, we could use 3rd party tool like Open Policy Agent.
Kubernetes makes an API call to Open Policy Agent with information about the user and the access being requested, and lets Open Policy Agent decide whether it is permitted.
Based on the Open Policy Agent response, the user is granted or denied access.
The authorization mode is a configuration parameter option specified in the kube-apiserver configuration.
We didn't mention 2 authorization mechanisms: AlwaysAllow and AlwaysDeny
By default, the authorization mode is set to AlwaysAllow
When we specify more than one mechanism, they will get executed by order.
If the Node authorizer denies the request, it moves on to RBAC; if RBAC grants the permission, the result is returned to the user and the Webhook authorizer is not consulted.
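For example, enabling the chain described above in the kube-apiserver options:

```yaml
# kube-apiserver flag (excerpt)
- --authorization-mode=Node,RBAC,Webhook
```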
You can add the namespace under metadata; by default it's the default namespace
When adding additional rules (especially for different API groups such as apps, storage, etc.), each rule entry specifies its own apiGroups, resources, and verbs (see the sketch below)
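A sketch of a Role with rules across two API groups, plus the RoleBinding that assigns it to a user (the names are examples):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: default
rules:
- apiGroups: [""]              # core API group (pods, configmaps, ...)
  resources: ["pods"]
  verbs: ["list", "get", "create", "delete"]
- apiGroups: ["apps"]          # deployments live in the apps group
  resources: ["deployments"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-user-binding
  namespace: default
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```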
Namespaced: Within a namespace, usually kube-system for system components and default for normal resources such as pods, deployments.
Cluster Scoped: Within the whole cluster and they are not in a namespace.
They are regular roles but only for cluster scoped resources.
In Kubernetes docs, you can view how to create cluster roles and regular roles using kubectl
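For example (role and user names are placeholders):

```bash
kubectl create role developer --verb=list,create,delete --resource=pods
kubectl create rolebinding dev-user-binding --role=developer --user=dev-user
kubectl create clusterrole node-reader --verb=get,list,watch --resource=nodes
kubectl create clusterrolebinding node-reader-binding --clusterrole=node-reader --user=michelle
# verify what a user is allowed to do
kubectl auth can-i create pods --as dev-user
```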
Everything went well, the most useful thing I wasn't utilizing is:
Also, make sure to check the APIVERSION: if it's v1, then in the definition yaml it must be written as apiGroups: [""], and if it's a deployment (apps), then apiGroups: ["apps"]
User accounts: used by humans, e.g. administrators for management or developers to deploy an application
Service accounts: used by bots, for example Prometheus querying the kube-apiserver (which requires authentication) to gather metrics for alerts and visualize them on a dashboard
When creating a service account, a token used to be generated automatically by default, which an external application could use as a bearer token when authenticating to the kube-apiserver
A later update changed how service accounts work: the TokenRequest API was introduced, which offers JWT expiration and other security enhancements. The old service account tokens had no expiration date and were created automatically, which is no longer the case.
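A quick sketch of the current workflow (the service account name is an example; kubectl create token requires Kubernetes v1.24+):

```bash
kubectl create serviceaccount dashboard-sa
# request a time-bound JWT via the TokenRequest API
kubectl create token dashboard-sa
```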
image: nginx
When you specify image: nginx, it is in fact docker.io/library/nginx behind the scenes (registry/user-account/image-repository).
You may have your own set of images stored in your private repository, whether it's Docker Hub, ECR, or another registry.
We create a secret object with our registry URL, username and password
Then we specify the private image URL and the secret so kubelet on worker nodes can use this secret to pull your private image
The most critical mistake I made was thinking that imagePullSecrets goes under containers:, which is not correct; it must be under spec:
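A sketch of the secret and the correct placement of imagePullSecrets (the registry and credentials are placeholders):

```bash
kubectl create secret docker-registry regcred \
  --docker-server=private-registry.io \
  --docker-username=registry-user \
  --docker-password=registry-password \
  --docker-email=registry-user@org.com
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: internal-app
spec:
  containers:
  - name: internal-app
    image: private-registry.io/apps/internal-app
  imagePullSecrets:            # sibling of containers, under spec
  - name: regcred
```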
Containers aren't fully isolated like virtual machines; they share the same kernel with the host.
Containers are in their own namespaces and they can't see anything outside this namespace.
Or you can specify the user ID within the docker image itself
Docker limits the abilities of the root user within the container, so it is not the same as the root user on a Linux host machine
You can configure all of these at the pod level in an object definition yaml within Kubernetes using securityContext
Capabilities are only supported at the container level and not at the pod level
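A sketch showing both levels (the user IDs and the capability are examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
spec:
  securityContext:             # pod level: applies to all containers
    runAsUser: 1000
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:           # container level: overrides the pod level
      runAsUser: 0
      capabilities:            # capabilities only work at container level
        add: ["SYS_TIME"]
```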
There was a syntax issue: I edited the pod and made sure I added the securityContext, but whenever I recreated it, it came up without the security context, so I had to delete it and create a new pod with the requirements.
kubectl apply caused problems, especially when dealing with pods, so switch to kubectl create if problems come up with apply
Also, it's important to note the difference between pod level and container level: pod level is under spec: and above containers:, while container level is under containers:
A container-level securityContext overrides the pod-level securityContext
Web Frontend Pod
Backend Pod
DB Pod
And we are required to create a network policy for the DB Pod to allow only Ingress traffic from the Backend pod at port 3306.
Now, the DB pod only allows ingress traffic from the backend pod on port 3306 and blocks all other ingress traffic.
Create a label on the DB pod and a matching selector in the policy
Create the policy rule
Create the policy object definition and pass in the pod selector and the policy rule (sketched below)
In policyTypes we defined Ingress only, so egress is not affected and all egress traffic is allowed by default. If needed, add an egress rule too.
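A sketch of the resulting policy (the label names are examples):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db                 # selects the DB pod
  policyTypes:
  - Ingress                    # only ingress is restricted; egress stays open
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: api-pod        # only the backend/api pod may connect
    ports:
    - protocol: TCP
      port: 3306
```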
namespaceSelector:
If we add a separate - namespaceSelector entry under from:, it becomes 2 rules that are not connected (api-pod is allowed from anywhere, and all pods in the prod namespace are allowed); in the example above, combining both selectors in one from: entry means the pod must be api-pod and in the prod namespace
This will allow all pods named api-pod in the prod namespace to access the DB on port 3306; if we removed the api-pod selector, it would allow all pods within the prod namespace.
ipBlock:
This will allow all traffic on port 80 originating from the DB pod to the backup server at the given CIDR block addresses.
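A sketch of such an egress rule (the CIDR is an example):

```yaml
egress:
- to:
  - ipBlock:
      cidr: 192.168.5.10/32    # backup server
  ports:
  - protocol: TCP
    port: 80
```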
matchLabels: role: db must match the label on the pod that I want to write the policy for.
Kubectx
With this tool, you don't have to use lengthy "kubectl config" commands to switch between contexts. This tool is particularly useful for switching context between clusters in a multi-cluster environment.
Installation:
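One common manual install, assuming the upstream ahmetb/kubectx repository:

```bash
sudo git clone https://github.com/ahmetb/kubectx /opt/kubectx
sudo ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
```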
Syntax:
To list all contexts:
kubectx
To switch to a new context:
kubectx <context_name>
To switch back to previous context:
kubectx -
To see current context:
kubectx -c
This tool allows users to switch between namespaces quickly with a simple command.
Installation:
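Assuming the same /opt/kubectx clone as above:

```bash
sudo ln -s /opt/kubectx/kubens /usr/local/bin/kubens
```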
Syntax:
To switch to a new namespace:
kubens <new_namespace>
To switch back to previous namespace:
kubens -