k8s — Pod Security Policy

Varun Tomar
4 min read · Mar 24, 2020


k8s PSP

In the last few years, Kubernetes has become the de facto standard for deploying containerized applications. In this post, I am going to highlight how we are implementing PodSecurityPolicies (PSP) on k8s. To begin with, we are running k8s 1.16.6 with RBAC enabled.

What is PodSecurityPolicy — PSP is a cluster-level resource for controlling security-sensitive aspects of a pod specification.

There are multiple ways to grant access to a PodSecurityPolicy. The usual one is Role-Based Access Control (RBAC): Roles and RoleBindings. For a PodSecurityPolicy (PSP) to take effect, the cluster user or ServiceAccount that launches the workload must have the use permission on the desired PodSecurityPolicy (granted via a Role or ClusterRole).

What is the need for PSP?

Actually, we had no plans to implement PSP. While rolling out a monitoring tool across our environment, we figured out that the monitoring agent would not work with SecurityContextDeny enabled. I did some research to find a way to bypass SecurityContextDeny, but couldn’t find anything: it is a very limited admission plugin, which is why PodSecurityPolicy was added as a far more flexible version of the same idea.

Our API server config (more information on admission controllers is in the Kubernetes documentation):

grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=SecurityContextDeny,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

Monitoring agent DaemonSet needs SecurityContext:

securityContext:
  runAsUser: 0

Where to start? How will PSP impact current deployments?

Verify if cluster supports PSP:

kubectl get psp -A
No resources found

If you get the above message, the cluster supports PSP (the resource type exists, there just aren’t any policies yet). Any other result, such as an error about an unknown resource type, means the cluster does not support PSP; it may be running an older version of k8s.

Verify if PSP is enabled:

grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=SecurityContextDeny,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

As you can see, PodSecurityPolicy does not show up in the output.

Enable PSP:

I removed SecurityContextDeny and added PodSecurityPolicy in /etc/kubernetes/manifests/kube-apiserver.yaml. Since this is a static pod manifest, the kubelet restarts the API server automatically. If you grep the enabled admission plugins again, the change shows up:

grep -- --enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,PodSecurityPolicy

Create PSP:

My first attempt at a PSP. This policy is very permissive, but it is sufficient for the monitoring agent to work.

# PSP is a cluster-scoped resource, so metadata has no namespace
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: monitoring-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: true
  allowedCapabilities:
  - '*'
  volumes:
  - '*'
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'

Create ServiceAccounts/Roles/RoleBindings:

Note: Roles and RoleBindings apply only within a namespace, while ClusterRoles and ClusterRoleBindings apply to the whole cluster.
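For illustration only (we did not need this for the monitoring rollout), a cluster-wide grant would use a ClusterRole plus a ClusterRoleBinding instead. One common sketch binds the policy to the built-in system:serviceaccounts group, i.e. every ServiceAccount in the cluster; the names here are hypothetical:

```
# Hypothetical cluster-wide variant, not part of our actual setup
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-global
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - monitoring-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-global
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-global
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts   # every ServiceAccount in the cluster
```

We stick with a namespaced Role below because we only want the policy to apply to the monitoring workload.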

Create Namespace:

kubectl create -f- <<EOF
# Create namespace
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
EOF
# or simply:
kubectl create namespace monitoring

Create ServiceAccount:

kubectl create -f- <<EOF
# Create a service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-user
  namespace: monitoring
EOF

Create Role:

kubectl create -f- <<EOF
# Create a role granting "use" on the PSP
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitoring-role
  namespace: monitoring
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - monitoring-psp
EOF

Create RoleBinding:

kubectl create -f- <<EOF
# Create a role binding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-user
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-role
subjects:
- kind: ServiceAccount
  name: monitoring-user
  namespace: monitoring
EOF

Update the monitoring agent (DaemonSet):

Note: If no ServiceAccount is specified in a pod definition, the namespace’s default ServiceAccount is applied. So, we need to update our monitoring tool’s deployment files to set “serviceAccountName” inside the pod spec:
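As a sketch of what that change looks like (the DaemonSet name, labels, and image below are placeholders, not the actual monitoring tool’s manifest):

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent            # placeholder name
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      serviceAccountName: monitoring-user   # the SA bound to monitoring-psp
      containers:
      - name: agent
        image: registry.example.com/monitoring-agent:latest  # placeholder
        securityContext:
          runAsUser: 0              # allowed by the RunAsAny rule in the PSP
```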

That’s it. If everything goes as planned, resources will deploy without any issue in the monitoring namespace, provided they use the right serviceAccountName.

Restricted PodSecurityPolicy:

Now that we have tested the deployment and everything worked as planned, we can tighten the PSP further. This is the final policy that I deployed:

# PSP is cluster-scoped, so metadata has no namespace; the seccomp
# annotation for a PSP's default profile is defaultProfileName
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: monitoring-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
spec:
  privileged: true
  allowedCapabilities:
  - SYS_ADMIN
  - SETUID
  - SETGID
  - SETPCAP
  - SYS_PTRACE
  - KILL
  - DAC_OVERRIDE
  - IPC_LOCK
  - FOWNER
  - CHOWN
  volumes:
  - '*'
  hostPID: true
  runAsUser:
    rule: 'RunAsAny'
  seLinux:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
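A quick way to sanity-check the RBAC wiring (assuming the objects above have been created; kubectl auth can-i supports the use verb on PSPs) is to impersonate the ServiceAccount:

```
# Should answer "yes" for the bound ServiceAccount
kubectl auth can-i use podsecuritypolicy/monitoring-psp \
  --as=system:serviceaccount:monitoring:monitoring-user -n monitoring

# Should answer "no" for the namespace's default ServiceAccount
kubectl auth can-i use podsecuritypolicy/monitoring-psp \
  --as=system:serviceaccount:monitoring:default -n monitoring
```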

Final thoughts:

Since we are only enforcing the policy in the monitoring namespace, migrating to PSP should be easy, as it does not impact existing pods. PSP provides a handy way to enforce security settings across a cluster. I have been playing with PSP for a while now, and I will try to incorporate it into new deployments going forward.
