Kubernetes: create an RBAC user for kubectl

Suppose we have a Kubernetes cluster with RBAC enabled and want to create a dedicated "user" for kubectl

Note: inside Kubernetes there is no such thing as a "user" object, and for end users it is better to use some kind of OpenID Connect provider instead; in this demo we use certificate-based authentication, where the certificate subject is used as the "username"

The process outline:

  • create an RSA private key
  • create a certificate signing request (CSR) for this private key
  • submit the CSR to Kubernetes and have it signed by the cluster CA
  • retrieve the signed certificate
  • create a kubeconfig file for kubectl
  • check that we can talk to Kubernetes but do not have privileges
  • grant privileges

The script:

USERNAME=demo

# create private key (RSA 2048, PEM)
openssl genrsa -out $USERNAME.pem 2048

# create signing request (will be used in next step)
openssl req -new -key $USERNAME.pem -out $USERNAME.csr -subj "/CN=$USERNAME"
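# openssl req -in $USERNAME.csr -noout -subject # optional sanity check: verify the subject is CN=demo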

# pass signing request into kubernetes
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $USERNAME
spec:
  request: $(cat $USERNAME.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 600
  usages:
  - client auth
EOF

# kubectl get csr $USERNAME # condition: Pending

# approve request (the cluster CA then signs and issues the certificate)
kubectl certificate approve $USERNAME

# kubectl get csr $USERNAME # condition -> Approved,Issued

# retrieve signed certificate
kubectl get csr $USERNAME -o jsonpath='{.status.certificate}' | base64 -d > $USERNAME.crt
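# openssl x509 -in $USERNAME.crt -noout -subject -dates # optional: verify CN and validity window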

# create conf file; TODO: retrieve kubernetes api endpoint and CA bundle (a possible approach is sketched below)
kubectl --kubeconfig ./$USERNAME.conf config set-cluster local --insecure-skip-tls-verify=true --server=https://kubernetes.docker.internal:6443
kubectl --kubeconfig ./$USERNAME.conf config set-credentials $USERNAME --client-certificate=$USERNAME.crt --client-key=$USERNAME.pem --embed-certs=true
kubectl --kubeconfig ./$USERNAME.conf config set-context default --cluster=local --user=$USERNAME
kubectl --kubeconfig ./$USERNAME.conf config use-context default

kubectl --kubeconfig ./$USERNAME.conf config view
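
# possible way to fill the TODO above: take the API endpoint and CA bundle from our current
# (admin) kubeconfig and embed the CA instead of skipping TLS verification:
# SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt
# kubectl --kubeconfig ./$USERNAME.conf config set-cluster local --server=$SERVER --certificate-authority=ca.crt --embed-certs=true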

# cleanup: we no longer need the CSR object, and the temporary files were embedded into the config
kubectl delete csr $USERNAME
rm $USERNAME.pem $USERNAME.crt $USERNAME.csr

# check that we can talk to Kubernetes but have no privileges yet; it should fail with
# an error like: User "demo" cannot list resource "namespaces" in API group "" at the cluster scope
kubectl --kubeconfig ./$USERNAME.conf get ns
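# kubectl --kubeconfig ./$USERNAME.conf auth can-i list namespaces # alternative check, prints "no" at this point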

# grant the cluster-admin cluster role to the user
kubectl create clusterrolebinding $USERNAME --clusterrole=cluster-admin --user=$USERNAME

# check
kubectl --kubeconfig ./$USERNAME.conf get ns
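
# note: instead of the all-powerful cluster-admin we could bind a narrower, namespaced role,
# e.g. read-only access to pods in the "default" namespace:
# kubectl create role pod-reader --verb=get,list,watch --resource=pods -n default
# kubectl create rolebinding $USERNAME-pod-reader --role=pod-reader --user=$USERNAME -n default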

# cleanup
rm $USERNAME.conf
kubectl delete clusterrolebinding $USERNAME

With that in place we can send the config file to the end user so they can use it to talk to the Kubernetes cluster

Also note expirationSeconds: in the demo it is set to 600 seconds (10 minutes, the minimum Kubernetes allows), and we can raise it to control how long the certificate stays valid

When creating the config file we ask kubectl to embed the files (--embed-certs=true), so the config contains:

  • client-key-data - base64-encoded contents of our RSA private key file
  • client-certificate-data - base64-encoded contents of the signed certificate we retrieved from Kubernetes
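
For example, before the final cleanup we could decode the embedded certificate straight from the config to double-check its subject and expiry (a quick sketch, assuming a config with a single user entry):

kubectl --kubeconfig ./$USERNAME.conf config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -subject -dates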

Note that there is no way to revoke a signed certificate: it stays valid until it expires. But we can always remove the role binding, as in the very last cleanup command, so the user can still talk to Kubernetes but has no privileges to do anything

Also note that anyone holding the private key can create new CSRs and, once they are approved, obtain as many certificates as they want; because the role binding only references the username, all of them will work
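
To illustrate the last point: even after the local key file is deleted, the key embedded in the config is enough to produce a fresh CSR under the same name (a sketch reusing the commands from above, assuming the config file still exists):

kubectl --kubeconfig ./$USERNAME.conf config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > $USERNAME.pem
openssl req -new -key $USERNAME.pem -out $USERNAME.csr -subj "/CN=$USERNAME"
# ...then submit and approve it exactly as before; the new certificate authenticates as the same "demo" user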