Local Kubernetes with Ingress and HTTPS

From time to time I need to run some services locally with TLS/HTTPS enabled.

There are scattered notes here and there, as well as publicly available certificates, but each time it is a pain, so I decided to write a note for my future self.

Prerequisites: docker and kind are expected to be already installed

Kubernetes

cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
kubectl wait --for=condition=Ready node --all --timeout=90s

Note:

  • instead of the usual kind create cluster we pass a config so the cluster is ready to serve web traffic on ports 80 and 443
  • after creation we wait until the node is ready
  • the kubectl context should switch automatically, but if not, run kubectl config use-context kind-kind (or kubectx kind-kind)
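Before moving on, a quick sanity check can confirm the cluster and context (assuming the default cluster name kind):

```shell
# verify the API server is reachable via the kind-kind context
kubectl cluster-info --context kind-kind

# the control-plane node should be Ready and carry the ingress-ready=true label
kubectl get nodes --show-labels
```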

Ingress

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl -n ingress-nginx rollout status deployment ingress-nginx-controller --timeout=10m

Notes:

  • to keep things simple, we are using the default ingress-nginx setup for kind
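To confirm the controller is actually listening on the host, you can hit it before creating any Ingress resources; a 404 from the default backend is the expected answer at this point:

```shell
# no Ingress rules yet, so ingress-nginx's default backend replies with 404
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost/
```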

localhost.direct

localhost.direct is a great project that provides an SSL certificate for *.localhost.direct, a domain that resolves everything to localhost

So instead of struggling with cert-manager and Let's Encrypt, we are going to use it

wget https://aka.re/localhost
unzip -P IWillNotPutKeyFileInPublicAccessiblePlace localhost
rm localhost
mv 'localhost.direct;*.localhost.direct.cert' localhost.direct.crt
mv 'localhost.direct;*.localhost.direct.key' localhost.direct.key
kubectl create secret tls tls --cert=localhost.direct.crt --key=localhost.direct.key
rm localhost.direct.crt localhost.direct.key

Notes:

  • the actual password is published in the project's GitHub readme
  • DO NOT publish the key file - otherwise the whole certificate may be revoked for everyone
  • we create a Kubernetes TLS secret from these files, then clean them up
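If you want to make sure the secret was created correctly, you can decode the certificate back out of it and inspect its subject and validity dates (a quick check, assuming openssl is available):

```shell
# extract the cert from the secret and print who it is for and when it expires
kubectl get secret tls -o jsonpath='{.data.tls\.crt}' \
  | base64 -d \
  | openssl x509 -noout -subject -dates
```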

Demo application

Having all that in place, let's deploy our demo app

demo.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: nginx
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
    - host: demo.localhost.direct
      http:
        paths:
          - backend:
              service:
                name: demo
                port:
                  number: 80
            path: /
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - demo.localhost.direct
      secretName: tls

as you can see, there is nothing special here at all

kubectl apply -f demo.yml
kubectl rollout status deployment demo --timeout=1m

and now, finally, both commands should work as expected

curl https://demo.localhost.direct
open https://demo.localhost.direct
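To double-check that the ingress is serving the localhost.direct certificate (and not the controller's self-signed fallback), you can inspect the TLS handshake, assuming openssl is installed:

```shell
# print the subject and issuer of the certificate presented for demo.localhost.direct
openssl s_client -connect localhost:443 -servername demo.localhost.direct </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```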

Optional: CoreDNS wildcard subdomain

There is one more optional step you may want to take

Let's configure CoreDNS so that all services inside Kubernetes resolve this domain to the ingress

That way you can curl https://demo.localhost.direct both from your host machine and from any pod inside the cluster

to do it, edit the CoreDNS config map:

kubectl -n kube-system edit cm coredns

here is the whole Corefile, with a comment marking what has been added

.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }

    # POI: coredns, wildcard subdomain, resolve to ingress service
    template IN A localhost.direct {
        match .*\.localhost\.direct\.
        answer "{{.Name}} 60 IN CNAME ingress-nginx-controller.ingress-nginx.svc.cluster.local"
    }

    prometheus :9153
    forward . /etc/resolv.conf {
        max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}

Note: instead of a CNAME we may use an A record, like so: answer "{{.Name}} 60 IN A 10.96.186.124"
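The IP in the A record variant is the ClusterIP of the ingress-nginx-controller service, which differs per cluster; you can look yours up with:

```shell
# print the ClusterIP to use in the "answer" line of the template plugin
kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.clusterIP}{"\n"}'
```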

After editing the config map, CoreDNS should reload automatically, but just in case you may want to run

kubectl rollout restart deployment coredns -n kube-system
kubectl rollout status deployment coredns -n kube-system --timeout=1m

as well as check the logs

kubectl -n kube-system logs -l k8s-app=kube-dns -f

and finally perform a test

kubectl run -it --rm --image=ubuntu mactemp -- bash

and once you are inside the container

apt -qq update && apt install -y curl netcat-traditional dnsutils
curl https://demo.localhost.direct
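Inside that container you can also query DNS directly to see the CoreDNS template in action:

```shell
# should resolve via the CNAME to the ingress-nginx-controller service
dig +short demo.localhost.direct
```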

Cleanup

Once you are done, just run

kind delete cluster

Links