CoreDNS custom wildcard domain for internal ingress

Suppose we have public and private ingresses running side by side. We want internal traffic to go through the ingress rather than straight to services, because the ingress gives us basic Prometheus metrics plus things like compression and, potentially, TLS.

For this to work we need some kind of *.local.contoso.com wildcard domain pointing to this internal ingress.
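
To make the goal concrete, here is a sketch of an Ingress such a domain would serve (the host, ingress class, and service names are made up for illustration):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-internal # hypothetical name
spec:
  ingressClassName: nginx-internal # assumed class of the private ingress controller
  rules:
    - host: myapp.local.contoso.com # covered by the *.local.contoso.com wildcard
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp # hypothetical backend service
                port:
                  number: 80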

Thankfully, there is already a CoreDNS instance inside the cluster which is responsible for resolving service names and may be used for this as well.

Thanks to this article I was able to wire everything up together.

Here is the short version of it:

Corefile

.:53 {
    forward . 8.8.8.8 1.1.1.1
    log
    errors
}

example.com:53 {
    file /root/db.example
    log
    errors
}

Here we configure 8.8.8.8 and 1.1.1.1 as upstream (fallback) name servers and a single example.com zone served from a file; such zone blocks may be repeated as many times as we want.

db.example

$ORIGIN example.com.  ; designates the start of this zone file in the namespace
$TTL 1h               ; default expiration time of all resource records without their own TTL value

; =============================== Resource Records ==============================
@                 IN  SOA     ns.example.com. rtiger.example.com. (
                                  2020010510     ; Serial
                                  1d             ; Refresh
                                  2h             ; Retry
                                  4w             ; Expire
                                  1h)            ; Minimum TTL
@                 IN  A       192.168.1.20       ; Local IPv4 address for example.com.
@                 IN  NS      ns.example.com.    ; Name server for example.com.
ns                IN  CNAME   @                  ; Alias for name server (points to example.com.)
webblog           IN  CNAME   @                  ; Alias for webblog.example.com
netprint          IN  CNAME   @                  ; Alias for netprint.example.com
*                 IN  CNAME   @                  ; Wildcard: any other subdomain points to example.com.

And here we describe our zone. The simplest way to see what can be done here is to export a real zone from Cloudflare and study it.

Note: for a wildcard subdomain you use * IN CNAME @, as in the last record above.

To try all of this locally, we can run CoreDNS in Docker:

docker run -it --rm \
  --name=coredns \
  -v ${PWD}/Corefile:/root/Corefile \
  -v ${PWD}/db.example:/root/db.example \
  -p 53:53/udp \
  coredns/coredns -conf /root/Corefile

And after that we have our own DNS server which we may query:

dig @127.0.0.1 example.com

nslookup example.com 127.0.0.1
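
Thanks to the wildcard record, any subdomain should resolve to the same address (anything here is just a made-up name):

dig @127.0.0.1 anything.example.com +short
# example.com.
# 192.168.1.20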

Off topic: to switch macOS DNS we may do the following:

# list interfaces
networksetup -listnetworkserviceorder
networksetup -listallnetworkservices

# change dns of Wi-Fi interface
networksetup -setdnsservers Wi-Fi 192.168.1.20

# reset name servers used to the default
networksetup -setdnsservers Wi-Fi empty

# get current dns servers
networksetup -getdnsservers Wi-Fi

Having that in place, the only thing left is to do the same in Kubernetes.

Each pod has a resolv.conf configured:

kubectl exec prometheus-0 -- cat /etc/resolv.conf
# search dev.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.0.0.10

pointing to the corresponding CoreDNS service:

kubectl -n kube-system get services kube-dns
# NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
# kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   78d

That's why name resolution inside Kubernetes happens via CoreDNS:

kubectl exec prometheus-0 -- nc -vz myapp 80
# myapp (10.0.232.235:80) open

and allows us to make calls to http://myapp/, where myapp is the service name (kubectl get service myapp).
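
This works because of the search suffixes in resolv.conf: the short name myapp is expanded to its fully qualified in-cluster form. The same lookup with the full name (exact output varies between images):

kubectl exec prometheus-0 -- nslookup myapp.dev.svc.cluster.local
# Name:    myapp.dev.svc.cluster.local
# Address: 10.0.232.235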

In our setup direct changes to the CoreDNS config are disallowed, but there is a dedicated ConfigMap to add modifications:

kubectl -n kube-system get cm coredns-custom -o yaml

which is empty by default

So we are going to back it up first:

kubectl -n kube-system get cm coredns-custom -o yaml > backup.yml

Adding custom hosts:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  local.override: | # you may select any name here, but it must end with the .override file extension
    hosts { 
      10.0.0.1 example1.org
      10.0.0.2 example2.org
      10.0.0.3 example3.org
      fallthrough
    }

For the changes to apply we need to restart the DNS pods:

kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pod -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns

After that we may give it a try:

kubectl exec prometheus-0 -- nc -vz -w 2 example3.org
# nc: example3.org (10.0.0.3:0): Connection timed out

The connection times out since nothing is listening at 10.0.0.3, but the name resolves: as a result we have dedicated custom host names.
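
To double-check that the DNS part works, we may resolve the name directly (assuming nslookup is available in the container image):

kubectl exec prometheus-0 -- nslookup example3.org
# Name:    example3.org
# Address: 10.0.0.3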

The trick with the hosts file is easy to accomplish but has no wildcard option, and that's why we go the hard way with the file plugin instead:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  local.server: | # you may select any name here, but it must end with the .server file extension
    example.com:53 {
      file /etc/coredns/custom/example.db
      log
      errors
    }
  example.db: |
    $ORIGIN example.com.  ; designates the start of this zone file in the namespace
    $TTL 1h               ; default expiration time of all resource records without their own TTL value

    ; =============================== Resource Records ==============================
    @                 IN  SOA     ns.example.com. rtiger.example.com. (
                                      2020010510     ; Serial
                                      1d             ; Refresh
                                      2h             ; Retry
                                      4w             ; Expire
                                      1h)            ; Minimum TTL
    @                 IN  A       192.168.1.20       ; Local IPv4 address for example.com.
    @                 IN  NS      ns.example.com.    ; Name server for example.com.
    ns                IN  CNAME   @                  ; Alias for name server (points to example.com.)
    webblog           IN  CNAME   @                  ; Alias for webblog.example.com
    netprint          IN  CNAME   @                  ; Alias for netprint.example.com
    *                 IN  CNAME   @                  ; Wildcard: any other subdomain points to example.com.

Notes:

  • In my case, by mistake, I had db.example in local.server but example.db in the ConfigMap; the logs contained errors complaining that [ERROR] plugin/file: Failed to open zone "example.com." in "custom/example.db": open custom/example.db: no such file or directory
  • To figure out the correct path you may want to look at kubectl -n kube-system get deployment coredns -o yaml to see where and how the ConfigMap is mounted into the container; in our case it is mountPath: /etc/coredns/custom, and that's why we have file /etc/coredns/custom/example.db (see the commands after this list)
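
A couple of ways to inspect those mounts (the exact output shape may differ between distributions):

# show just the volume mounts of the coredns container
kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].volumeMounts}'

# or eyeball the full spec
kubectl -n kube-system get deployment coredns -o yaml | grep -A 3 volumeMounts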

Once again, for the changes to take effect, do not forget to "restart" the deployment and check its logs:

kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pod -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns

And the final check:

kubectl run mactestdeleteme --rm -it --image=ubuntu --overrides='{"spec": { "nodeSelector": {"kubernetes.io/os": "linux"}}}' -- bash

apt -qq update && apt -qq install -y dnsutils

nslookup example.com
nslookup webblog.example.com
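
And since the zone contains the wildcard record, any made-up subdomain resolves as well:

nslookup whatever.example.com
# resolves via the wildcard CNAME to 192.168.1.20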

The cool thing with such a setup is that we may point not only to the internal ingress but to other internal services as well. Even more, with the trick of giving the ingress a custom internal IP, we may configure everything in such a way that the ingress and CoreDNS become the entry points for everything.

To roll back the changes you may apply an empty ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  # creationTimestamp: "2022-09-08T07:06:33Z"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
  name: coredns-custom
  namespace: kube-system
  # resourceVersion: "318"
  # uid: a782b320-d90b-4399-8d43-3ebe3d48cc0f

and once again restart the pods.
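
For example (assuming the manifest above is saved as empty.yml and earlier changes were made with kubectl apply):

kubectl apply -f empty.yml
kubectl -n kube-system delete pod -l k8s-app=kube-dns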

CoreDNS Ingress

And there is one more option.

CoreDNS has a rewrite plugin that allows you to simply add aliases for any domain, pointing it to any service.

The ingress itself has a service which is used for incoming traffic.

In my case it is something like:

kubectl -n ingress get svc ingress

# NAME      TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
# ingress   LoadBalancer   10.0.125.37   20.10.170.60   80:31932/TCP,443:30431/TCP   82d

What you should note is that it has both external and internal IP addresses.

And if you apply something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  rewrite.override: |
    rewrite name foo.mac-blog.org.ua ingress.ingress.svc.cluster.local

Suddenly, inside your cluster, all requests to foo.mac-blog.org.ua will be resolved to 10.0.125.37, the internal ingress service IP.
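
Which is easy to verify from any pod:

kubectl exec prometheus-0 -- nslookup foo.mac-blog.org.ua
# Address: 10.0.125.37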

Also note that it does not matter in which namespaces the services live; everything just works.

Rewrite has an option to use regex, and to bring some outside services back in we may always use the hosts plugin, so I ended up with something like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  hosts.override: |
    hosts {
      10.50.10.4 another-service.mac-blog.org.ua
      fallthrough
    }
  rewrite.override: |
    rewrite name regex (.+)\.mac-blog\.org\.ua ingress.ingress.svc.cluster.local
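
A quick check that the regex rule catches arbitrary subdomains (bar is a made-up name):

kubectl exec prometheus-0 -- nslookup bar.mac-blog.org.ua
# Address: 10.0.125.37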

And from now on any subdomain will point to the internal IP address of the ingress, which means we are free to create as many ingresses as we want and everything will just work. Thanks to cert-manager we may even have TLS for them, and that will work too, because ACME requests come from outside the cluster.

With such a setup there is no need for separate internal and external ingresses; everything may be covered with a single one.

What is also cool is that it is transparent for applications, and if we want some service to be internal only, we may always use an annotation like nginx.ingress.kubernetes.io/whitelist-source-range: 10.50.0.0/16 on a concrete ingress to disallow external access to it.
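
For example, a minimal sketch of such an internal-only ingress (host and service names are made up):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-only
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.50.0.0/16 # requests from outside this range get 403
spec:
  rules:
    - host: internal.mac-blog.org.ua
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: another-app # hypothetical backend service
                port:
                  number: 80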