Kubernetes includes a DNS server, Kube-DNS, for use in service discovery. This DNS server uses libraries from SkyDNS to serve DNS requests for Kubernetes pods and services. Miek Gieben, the author of SkyDNS2, has a new DNS server, CoreDNS, built on a more modular, extensible framework. Infoblox has been working with Miek to adapt this DNS server as an alternative to Kube-DNS.
CoreDNS utilizes a server framework developed as part of the web server Caddy. That framework has a very flexible, extensible model for passing requests through various middleware components. These middleware components perform different operations on a request, for example logging, redirecting, modifying, or servicing it. Although it started as a web server, Caddy is not bound specifically to the HTTP protocol, which makes it an ideal framework on which to base CoreDNS.
Adding support for Kubernetes within this flexible model amounts to creating a Kubernetes middleware. That middleware uses the Kubernetes API to fulfill DNS requests for specific Kubernetes pods or services. And because Kube-DNS runs as just another service in Kubernetes, there is no tight binding between kubelet and Kube-DNS. You just need to pass the DNS service IP address and domain into kubelet, and Kubernetes doesn't really care what is actually servicing the requests at that IP.
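In concrete terms, kubelet renders those two parameters into each pod's /etc/resolv.conf. With the values used later in this post (a DNS service IP of 10.3.0.10 and a cluster domain of cluster.local), a pod in the default namespace would see roughly this (a sketch of kubelet's standard output, not copied from a live pod):

nameserver 10.3.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5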
Today, the CoreDNS kubernetes middleware supports serving A records with the cluster IP of the service. In the near future, we will add the following features of Kube-DNS (example query shapes follow the list):
- Serve pod IPs for A records of services without a cluster IP
- Serve SRV records for named ports that are part of a service (with or without a cluster IP)
- Serve PTR records (reverse DNS lookups)
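For reference, these features correspond to query shapes like the following (the service and port names here are illustrative, not from a real cluster):

nginx.default.svc.cluster.local.            ; A record; for a headless service this returns the pod IPs
_http._tcp.nginx.default.svc.cluster.local. ; SRV record for the named port "http" over TCP
10.0.3.10.in-addr.arpa.                     ; PTR record for the reverse lookup of 10.3.0.10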
Additionally, there are a few features unique to the CoreDNS integration:
- Flexible record templates
- Label-based filtering of responses
And of course, you have access to all of the various other middleware in CoreDNS.
To configure CoreDNS to provide Kubernetes service discovery, you need to set up your Corefile, which is the CoreDNS configuration file. Here is an example that runs CoreDNS on port 53 and serves the cluster.local domain out of Kubernetes.
.:53 {
    log stdout
    kubernetes cluster.local
}
Figure 1: Simple Corefile for Kubernetes Service Discovery
This Corefile enables two middlewares: log and kubernetes. The log middleware is configured to write logs to STDOUT; if you do not include this statement, you won't see any logging unless there are errors. Any requests for cluster.local will be fulfilled using data from the Kubernetes API, which is automatically accessed via a service account. By default, services can be looked up in the same format used by Kube-DNS; for example, if you have a service called nginx running in the default namespace, you would look up its cluster IP with an A record request for nginx.default.svc.cluster.local.
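For example, from a host that can reach the DNS service (assuming the service cluster IP of 10.3.0.10 used later in this post), you could verify this with dig; the answer shown is illustrative:

$ dig +short nginx.default.svc.cluster.local @10.3.0.10
10.3.0.201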
This basic Corefile will handle the base requirements, but it is missing a few things we would have in a standard Kube-DNS install:
- There is no way to do health checking of the CoreDNS instance.
- There is no caching (Kube-DNS handles this through a separate dnsmasq instance).
- Requests for domains other than cluster.local are not handled.
Fortunately, CoreDNS has middleware to enable all of these functions and a lot more. For health checking, we have the health middleware. This provides an HTTP endpoint on a specified port (8080 by default) that will return “OK” if the instance is healthy. For caching, we have the cache middleware. This allows the caching of both positive (i.e., the query returns a result) and negative (the query returns “no such domain”) responses, with separate cache sizes and TTLs. Finally, for handling other domains we have the proxy middleware. This can be configured with several upstream nameservers, and can also be used to defer lookups to the nameservers defined in /etc/resolv.conf. Configuring these, we end up with a Corefile like this:
.:53 {
    log stdout
    health
    kubernetes cluster.local
    proxy . /etc/resolv.conf
    cache 30
}
Figure 2: Complete Corefile for Kubernetes Service Discovery
This Corefile serves Kubernetes services out of cluster.local and sends all other queries to the nameservers defined in /etc/resolv.conf, caching all responses (both Kubernetes and proxied) for up to thirty seconds.
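Once this is running, the health middleware can be checked directly. Assuming the default port 8080 and a reachable pod IP of 10.2.45.7 (a made-up address for illustration):

$ curl http://10.2.45.7:8080/health
OK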
There are many additional middleware components available in CoreDNS. These can enable a lot more functionality than can be achieved with Kube-DNS. For example, while CoreDNS does not currently offer dynamic service registration directly, you can achieve a similar result by enabling the etcd middleware for another domain. We will take a closer look at that in our next post.
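As a rough sketch of what that could look like (the domain, etcd path, and endpoint below are assumptions for illustration, not part of this cluster), the etcd middleware gets its own stanza alongside kubernetes:

.:53 {
    kubernetes cluster.local
    etcd skydns.local {
        path /skydns
        endpoint http://10.3.0.15:2379
    }
    proxy . /etc/resolv.conf
    cache 30
}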
So, how do you get CoreDNS up and running as the cluster DNS? The details will vary a bit depending on how you have configured your Kubernetes cluster, but the same two steps apply:
- Run CoreDNS as a service in your cluster
- Update kubelet parameters to include the IP of CoreDNS and the cluster domain
If you are already running Kube-DNS, then the kubelet parameters will already define the cluster domain and IP; you simply need to replace your Kube-DNS service with CoreDNS, and use the same cluster IP.
We have a Kubernetes cluster set up using CoreOS, as described on the CoreOS site. In this configuration, the specifications for system-level services are kept in /srv/kubernetes/manifests. So, to update our cluster to run CoreDNS, we replace the existing kube-dns-rc.yaml and kube-dns-svc.yaml with the coredns-de.yaml and coredns-svc.yaml below. This deployment uses a CoreDNS container image built especially for this blog (infoblox/coredns:k8sblog), but once CoreDNS v003 is released, the standard CoreDNS image may be used.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log stdout
        health
        kubernetes cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: coredns
        image: infoblox/coredns:k8sblog
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
Figure 3: coredns-de.yaml
Notice that because we need to configure CoreDNS with a file, we also create a ConfigMap to hold that file. Other than that, this is quite similar to the previous Kube-DNS versions of the files (though we are using a Deployment instead of a ReplicationController).
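If you later need to adjust the Corefile, you can inspect or edit the ConfigMap in place; note that the CoreDNS pods will need to be re-created to pick up the change:

$ kubectl get --namespace=kube-system configmap coredns -o yaml
$ kubectl edit --namespace=kube-system configmap coredns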
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 10.3.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
Figure 4: coredns-svc.yaml
Manifests in /srv/kubernetes/manifests are not automatically reloaded, so we need to manually delete the Kube-DNS entries and create the CoreDNS entries (this is not the way to do this in production!):
$ kubectl delete --namespace=kube-system rc kube-dns-v20
replicationcontroller "kube-dns-v20" deleted
$ kubectl delete --namespace=kube-system svc kube-dns
service "kube-dns" deleted
$ kubectl create -f coredns-de.yaml
configmap "coredns" created
deployment "coredns" created
$ kubectl create -f coredns-svc.yaml
service "coredns" created
$
Shortly you should see CoreDNS running via kubectl:
$ kubectl get --namespace=kube-system deployments
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns           1         1         1            1           2m
heapster-v1.2.0   1         1         1            1           5d
$ kubectl get --namespace=kube-system pods
NAME                                    READY     STATUS    RESTARTS   AGE
coredns-4204825988-ll0xw                1/1       Running   0          2m
heapster-v1.2.0-4088228293-a8gkc        2/2       Running   0          4d
kube-apiserver-10.222.243.77            1/1       Running   2          5d
kube-controller-manager-10.222.243.77   1/1       Running   2          5d
kube-proxy-10.222.243.76                1/1       Running   0          4d
kube-proxy-10.222.243.77                1/1       Running   2          5d
kube-proxy-10.222.243.78                1/1       Running   0          4d
kube-scheduler-10.222.243.77            1/1       Running   2          5d
kubernetes-dashboard-v1.4.1-7wdrh       1/1       Running   1          5d
$ kubectl get --namespace=kube-system svc
NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
coredns                10.3.0.10    <none>        53/UDP,53/TCP   2m
heapster               10.3.0.95    <none>        80/TCP          5d
kubernetes-dashboard   10.3.0.66    <none>        80/TCP          5d
$
If you were not already running Kube-DNS, or you used a different ClusterIP for CoreDNS, then you’ll need to update the kubelet configuration to set the cluster_dns and cluster_domain appropriately. On our cluster, this is done by modifying the /etc/systemd/system/kubelet.service file, then restarting the kubelet service.
[Service]
Environment=KUBELET_VERSION=v1.4.1_coreos.0
Environment=KUBELET_ACI=quay.io/coreos/hyperkube
Environment="RKT_OPTS=--volume dns,kind=host,source=/etc/resolv.conf --mount volume=dns,target=/etc/resolv.conf --volume rkt,kind=host,source=/opt/bin/host-rkt --mount volume=rkt,target=/usr/bin/rkt --volume var-lib-rkt,kind=host,source=/var/lib/rkt --mount volume=var-lib-rkt,target=/var/lib/rkt --volume stage,kind=host,source=/tmp --mount volume=stage,target=/tmp --volume var-log,kind=host,source=/var/log --mount volume=var-log,target=/var/log"
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStart=/usr/lib/coreos/kubelet-wrapper --api-servers=http://127.0.0.1:8080 --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=cni --container-runtime=docker --rkt-path=/usr/bin/rkt --rkt-stage1-image=coreos.com/rkt/stage1-coreos --allow-privileged=true --config=/etc/kubernetes/manifests --hostname-override=10.222.243.77 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
Restart=always
RestartSec=10
Figure 5: Kubelet Configuration
The --cluster_dns=10.3.0.10 and --cluster_domain=cluster.local options at the end of the ExecStart line in Figure 5 are the additions that set the cluster DNS. The same changes should be made on each node.
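After editing the unit file on each node, reload systemd and restart the kubelet so the new flags take effect:

$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet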
Once all of that is configured, we can test out CoreDNS by running a pod and doing a lookup. First, let's start an nginx service with the spec shown in Figure 6.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
Figure 6: nginx.yaml
Create this service:
$ kubectl create -f nginx.yaml
pod "nginx" created
service "nginx" created
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   5d
nginx        10.3.0.201   <none>        80/TCP    19s
Now let’s see if we can access it:
$ kubectl run -i -t --image=infoblox/dnstools:k8sblog --restart=Never dnstest
Waiting for pod default/dnstest to be running, status is Pending, pod ready: false
If you don’t see a command prompt, try pressing enter.
/ # curl nginx.default.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #
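While still in the dnstools pod, we can also query CoreDNS directly (assuming the image includes dig, as its name suggests); the answer matches the nginx cluster IP we saw earlier:

/ # dig +short nginx.default.svc.cluster.local
10.3.0.201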
So, there we have it! CoreDNS is now providing the service discovery function for this Kubernetes cluster.
In our next post, we’ll show how to enable additional features in CoreDNS so that you can also use it to access arbitrary DNS entries that can be dynamically created.