This guide presents an example approach to deploying linkerd as a sidecar in Kubernetes, using the multi-container pod support that Kubernetes provides. For more information on various deployment models, see the Deployment page.
In this example we’ll be deploying a basic “hello world” app with linkerd to the default namespace in our Kubernetes cluster.
To begin, we need to write a linkerd configuration file. This file will be
mounted by the linkerd container when it starts up. We provide this file to
Kubernetes using a ConfigMap
object, in the file config.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: linkerd-config
data:
  config.yaml: |-
    admin:
      port: 9990
    namers:
    - kind: io.l5d.k8s
      experimental: true
      host: 127.0.0.1
      port: 8001
    routers:
    - protocol: http
      servers:
      - port: 8080
        ip: 0.0.0.0
      dtab: |
        /iface => /#/io.l5d.k8s/default;
        /svc => /iface/http;
Create the config map with:
$ kubectl create -f config.yml
configmap "linkerd-config" created
The linkerd config that we’ve created specifies a single http router running on
port 8080. The router routes requests by looking up the value of their HTTP Host
header in service discovery. To do this, it uses the experimental k8s namer,
which will lookup services in the Kubernetes cluster API. If a service exists in
the default namespace with a name matching the Host header value, it will load
balance the requests over all of the pods that are managed by that service. The
k8s namer expects a
kubectl proxy process to be running locally to allow
linkerd to talk to the cluster API. We’ll set that process up in the next step.
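To make the dtab concrete, here is a minimal sketch, in Python, of the prefix rewriting it describes. This is an illustration only, not linkerd’s actual resolution algorithm (real dtab resolution also supports wildcards, alternation, and negation); the `resolve` helper is a hypothetical name introduced for this example:

```python
# The two dtab entries from the config above, in order of definition.
# In a dtab, later entries take precedence over earlier ones.
DTAB = [
    ("/iface", "/#/io.l5d.k8s/default"),
    ("/svc", "/iface/http"),
]

def resolve(path, dtab=DTAB):
    """Rewrite a logical path until it names a concrete namer (starts with /#/)."""
    while not path.startswith("/#/"):
        for prefix, target in reversed(dtab):  # later entries win
            if path.startswith(prefix + "/"):
                path = target + path[len(prefix):]
                break
        else:
            raise ValueError("unroutable path: " + path)
    return path

# A request with Host: hello becomes the logical name /svc/hello, which
# rewrites to /iface/http/hello, and then to the k8s namer path below
# (namespace "default", port name "http", service "hello").
print(resolve("/svc/hello"))  # /#/io.l5d.k8s/default/http/hello
```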
We can mount the ConfigMap from above into a linkerd container that we’ve
defined as part of a pod spec. We’ll create a Replication Controller object to manage
the pod, in the file rc.yml:
kind: ReplicationController
apiVersion: v1
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      dnsPolicy: ClusterFirst
      volumes:
      - name: linkerd-config
        configMap:
          name: "linkerd-config"
      containers:
      - name: hello
        image: dockercloud/hello-world:latest
        ports:
        - name: http
          containerPort: 80
      - name: linkerd
        image: buoyantio/linkerd:latest
        args:
        - "/io.buoyant/linkerd/config/config.yaml"
        ports:
        - name: ext
          containerPort: 8080
        - name: admin
          containerPort: 9990
        volumeMounts:
        - name: "linkerd-config"
          mountPath: "/io.buoyant/linkerd/config"
          readOnly: true
      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - "proxy"
        - "-p"
        - "8001"
Create the replication controller (and corresponding pod) with:
$ kubectl create -f rc.yml
replicationcontroller "hello" created
This replication controller includes a pod spec that starts three containers. The “hello” container runs a simple hello world web service on port 80, which responds to http requests with a 200. The “linkerd” container runs linkerd, and exposes ports matching the config file that it is mounting. The “kubectl” container runs a local kubectl proxy, which linkerd’s k8s namer uses to perform service discovery via the Kubernetes cluster API.
Once the replication controller is created, we’ll need to create a service that
allows access to the pods that are managed by the replication controller. We’ll
add a Service object, in the file svc.yml:
kind: Service
apiVersion: v1
metadata:
  name: hello
spec:
  selector:
    app: hello
  type: LoadBalancer
  ports:
  - name: ext
    port: 80
    targetPort: 8080
  - name: http
    port: 8081
    targetPort: 80
  - name: admin
    port: 9990
Create the service with:
$ kubectl create -f svc.yml
service "hello" created
This service forwards traffic on port 80 to the linkerd router listening on port 8080, and traffic on port 8081 directly to the hello world service on port 80. With this configuration in place, a request that arrives on port 80 with a “hello” Host header is handled by the linkerd router, which looks up the hello service in service discovery and forwards the request to one of the hello world instances. This effectively puts linkerd in front of all of the hello world instances as a load balancer. The service config also exposes linkerd’s admin service on port 9990.
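The port mappings defined by this example’s service and pod specs can be summarized as:

```
client -> hello service :80   -> linkerd :8080 -> (route by Host header) -> hello pod :80
client -> hello service :8081 -> hello pod :80   (direct, bypassing linkerd)
client -> hello service :9990 -> linkerd admin :9990
```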
Making sure it works
Once all of your objects have been created, you can test it out by making requests to the external IP of the hello service that you created. Find the external IP with:
$ kubectl get svc hello
NAME      CLUSTER-IP    EXTERNAL-IP   PORT(S)                    AGE
hello     10.10.10.10   <public-ip>   80/TCP,8081/TCP,9990/TCP   1m
Next, make a request to the external IP, specifying the “hello” Host header:
$ curl -sI -H 'Host: hello' <public-ip> | head -n1
HTTP/1.1 200 OK
The request was successfully routed. Now, to verify that Host header routing is working properly, try making a request with an unknown Host header:
$ curl -sI -H 'Host: missing' <public-ip> | head -n1
HTTP/1.1 502 Bad Gateway
As expected, linkerd does not route this request, since there is no “missing” service available in service discovery.
Finally, to reach the admin service, make a request on port 9990:
$ curl <public-ip>:9990/admin/ping
pong
Success! For more information about linkerd’s admin capabilities, see the Administration page.