Running in Kubernetes with DaemonSets

The simplest way to run linkerd in Kubernetes is as a sidecar, with one linkerd instance in each Kubernetes pod. Unfortunately, the cost of running many linkerd processes on the same host can be high. Since version 1.2, Kubernetes has provided an additional deployment model called a DaemonSet: a special kind of controller that runs exactly one pod on every host. By deploying linkerd as a DaemonSet, we can reduce the number of linkerd containers to one per host instead of one per pod. This guide describes how to deploy linkerd in this way and how to configure your application to use it.
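For reference, a minimal DaemonSet manifest might look like the sketch below. The image tag, labels, and API version here are illustrative assumptions, not the exact config installed in the next section; the key idea is that `hostPort` exposes linkerd's proxy port on each node so that pods can reach it at the node's address.

```yaml
# Hedged sketch of a DaemonSet running one linkerd pod per node.
# Image tag, labels, and API version are illustrative assumptions.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l5d
spec:
  selector:
    matchLabels:
      app: l5d
  template:
    metadata:
      labels:
        app: l5d
    spec:
      containers:
      - name: l5d
        image: buoyantio/linkerd:latest
        ports:
        - containerPort: 4140
          hostPort: 4140    # expose the proxy port on every node
        - containerPort: 9990    # admin UI
```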

Deploy linkerd

Install linkerd using this Kubernetes config. This will install linkerd as a DaemonSet (i.e., one instance per host) running in the default Kubernetes namespace:

curl -s | kubectl apply -f -

You can confirm that installation was successful by viewing linkerd’s admin page:

open http://$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):9990 # on OSX

Note: Load Balancers may not be available in all deployments of Kubernetes. For more information, have a look at Non-GKE setups.

For more information about linkerd’s admin capabilities, see the Administration page.

Configure your app

Your application needs to be configured to use linkerd as an HTTP proxy. The easiest way to accomplish this is to use the http_proxy environment variable. You can set this in the pod where your application is running using information from the Kubernetes Downward API. To set http_proxy, add these environment variables to your pod config:

        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: http_proxy
          value: $(NODE_NAME):4140

Note that using the Downward API in this way requires Kubernetes 1.4 or later. If you’re using an older version of Kubernetes, see this legacy config and this script for an example of how to set http_proxy in legacy environments.
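As a quick sanity check, the value Kubernetes substitutes into http_proxy is just the node’s name followed by linkerd’s proxy port. A small illustration of that substitution (the node name here is a made-up example value):

```shell
# Simulate the substitution Kubernetes performs for $(NODE_NAME):4140.
# The node name below is a made-up example value.
NODE_NAME="node-1.example.com"
http_proxy="${NODE_NAME}:4140"
echo "$http_proxy"
```

Standard HTTP clients such as curl honor the http_proxy environment variable automatically, so application requests are routed through the linkerd instance on the local node without code changes.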

You will also need to run a kubectl proxy container in your pod, to allow access to the API:

      - name: kubectl
        image: buoyantio/kubectl:1.2.3
        args:
        - proxy
        - "-p"
        - "8001"
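Putting these pieces together, the containers section of a pod spec might look like the sketch below. The application name and image are hypothetical; the env and kubectl entries mirror the snippets above.

```yaml
containers:
- name: hello                       # hypothetical application container
  image: buoyantio/helloworld:0.1   # illustrative image tag
  env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: http_proxy
    value: $(NODE_NAME):4140        # route the app's HTTP traffic via linkerd
- name: kubectl                     # sidecar exposing the Kubernetes API on localhost:8001
  image: buoyantio/kubectl:1.2.3
  args:
  - proxy
  - "-p"
  - "8001"
```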

Finally, your application’s service object will need to define a port named http where your application is serving.

  ports:
  - name: http
    port: 7777
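For context, a complete Service object for a hypothetical hello application might look like this sketch (the selector label is an assumption; the port name http is what matters for routing):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello        # assumed label on the application pods
  ports:
  - name: http        # the named port linkerd expects
    port: 7777
```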

Once configured, any requests your application makes to http://hello will be automatically forwarded by linkerd to the Kubernetes service named hello.

For an example of a complete application configured this way, see the hello world sample config.

Making sure it works

Once all of your objects have been created, you can test it out by making requests to linkerd’s external IP.

http_proxy=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}"):4140 curl -s http://hello

Non-GKE setups

The DaemonSet documentation and examples in this guide assume GKE with default networking. For examples of configuring linkerd to run in other configurations, such as CNI, Calico, Weave, or Minikube, have a look at Flavors of Kubernetes.