Welcome to Linkerd! 🎈
In this guide, we’ll walk you through how to install Linkerd into your Kubernetes cluster. Then we’ll deploy a sample application to show off what Linkerd can do for your services.
Installing Linkerd is easy. First, you install the CLI (command-line interface) onto your local machine. Using this CLI, you’ll then install the Linkerd control plane into your Kubernetes cluster. Finally, you’ll “mesh” one or more services by adding the data plane proxies. (See the Architecture page for details.)
We’ll walk you through this process step by step.
Step 0: Setup
Before we can do anything, we need to ensure you have access to a Kubernetes cluster running 1.9 or later, and a functioning kubectl command on your local machine.
When ready, make sure you’re running a recent version of Kubernetes with:
kubectl version --short
Additionally, if you are using GKE with RBAC enabled, you will want to grant a cluster-admin role to your Google Cloud account first. This will provide your current user all the permissions required to install the control plane. To bind this ClusterRole to your user, you can run:
kubectl create clusterrolebinding cluster-admin-binding-$USER \
  --clusterrole=cluster-admin --user=$(gcloud config get-value account)
In the next step, we will install the CLI and validate that your cluster is ready to install the control plane.
Step 1: Install the CLI
If this is your first time running Linkerd, you’ll need to download the command-line interface (CLI) onto your local machine. You’ll use this CLI to interact with Linkerd, including installing the control plane onto your Kubernetes cluster.
To install the CLI, run:
curl -sL https://run.linkerd.io/install | sh
Alternatively, you can download the CLI directly via the Linkerd releases page.
Next, add linkerd to your path with:
export PATH=$PATH:$HOME/.linkerd2/bin
Verify the CLI is installed and running correctly with:
linkerd version
You should see the CLI version, and also “Server version: unavailable”. This is because we haven’t installed the control plane. We’ll do that soon.
Step 2: Validate your Kubernetes cluster
Kubernetes clusters can be configured in many different ways. To ensure that the control plane will install correctly, the Linkerd CLI can check and validate that everything is configured correctly.
To check that your cluster is configured correctly and ready to install the control plane, you can run:
linkerd check --pre
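Because the pre-check exits non-zero when any check fails, it scripts well. As a rough sketch (the `install_if_ready` wrapper below is our own, not part of Linkerd):

```shell
# Hypothetical wrapper (not part of Linkerd): install the control plane
# only if the pre-flight checks pass. `linkerd check --pre` exits
# non-zero when a check fails.
install_if_ready() {
  if linkerd check --pre; then
    linkerd install | kubectl apply -f -
  else
    echo "cluster not ready for Linkerd; fix the failing checks first" >&2
    return 1
  fi
}
```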
Step 3: Install Linkerd onto the cluster
Now that you have the CLI running locally and a cluster that is ready to go, it’s time to install the lightweight control plane into its own namespace (linkerd). If you would like to install it into a different namespace, check out the help for linkerd install. To do this, run:
linkerd install | kubectl apply -f -
linkerd install generates a list of Kubernetes resources. Run it standalone if you would like to understand what is going on. By piping the output into kubectl, the Linkerd control plane resources will be added to your cluster and start running immediately.
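Since the install output is just Kubernetes YAML, you can also render it to a file and review it before anything touches the cluster. A small sketch (the `render_and_apply` helper and the file name are our own, not Linkerd conventions):

```shell
# Hypothetical helper (our own sketch): render the control-plane manifest
# to a file so it can be reviewed (or checked into git) before applying.
render_and_apply() {
  manifest="$1"                 # e.g. linkerd-manifest.yml
  linkerd install > "$manifest"
  # ...review "$manifest" here with less, git diff, policy tooling, etc...
  kubectl apply -f "$manifest"
}
```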
Depending on the speed of your internet connection, it may take a minute or two for your Kubernetes cluster to pull the Linkerd images. While that’s happening, we can validate that everything’s happening correctly by running:
linkerd check
This command will patiently wait until Linkerd has been installed and is running. If you’re interested in what components were installed, you can run:
kubectl -n linkerd get deploy
Check out the architecture documentation for an in-depth explanation of what these components are and what they do.
Step 4: Explore Linkerd
With the control plane installed and running, you can now view the Linkerd dashboard by running:
linkerd dashboard
The control plane components all have the proxy installed in their pods and are part of the data plane itself. This provides the ability to dig into these components and see what is going on behind the scenes. In fact, you can run:
linkerd -n linkerd top deploy/linkerd-web
This is the traffic you’re generating by looking at the dashboard itself!
Step 5: Install the demo app
To get a feel for how Linkerd would work for one of your services, you can install the demo application. It provides an excellent place to look at all the functionality that Linkerd provides. To install it on your own cluster, in its own namespace (emojivoto), run:
curl -sL https://run.linkerd.io/emojivoto.yml \
  | kubectl apply -f -
You can take a look at this by forwarding the web pod to localhost and looking at the app in your browser. To forward web locally to port 8080, you can run:
kubectl -n emojivoto port-forward \
  $(kubectl -n emojivoto get po -l app=web-svc -oname | cut -d/ -f 2) \
  8080:80
You might notice that some parts of the application are broken! If you were to inspect your handy local Kubernetes dashboard, you wouldn’t see very much of interest — as far as Kubernetes is concerned, the app is running just fine. This is a very common situation! Kubernetes understands whether your pods are running, but not whether they are responding properly. Check out the debugging example if you’re interested in how to figure out exactly what is wrong.
To get some added visibility into what is going on and see some of the functionality of Linkerd, let’s add Linkerd to emojivoto by running:
kubectl get -n emojivoto deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
This command retrieves all of the deployments running in the emojivoto namespace, runs the set of Kubernetes resources through linkerd inject, and finally reapplies them to the cluster. The inject command augments the resources to include the data plane’s proxies. As with install, inject is a pure text operation, meaning that you can inspect the input and output before you use it. You can even run it through diff to see exactly what is changing.
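That diff workflow can be sketched as follows (the `inject_diff` helper and the file names are our own, purely illustrative):

```shell
# Hypothetical helper: capture the YAML before and after injection and
# show exactly what `linkerd inject` changes. Note that `diff` exits
# non-zero when the files differ, which is expected here.
inject_diff() {
  ns="$1"
  kubectl get -n "$ns" deploy -o yaml > original.yml
  linkerd inject - < original.yml > injected.yml
  diff -u original.yml injected.yml
}
```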
Once piped into
kubectl apply, Kubernetes will execute a rolling deploy and
update each pod with the data plane’s proxies, all without any downtime.
You’ve added Linkerd to existing services without touching the original YAML!
Because inject augments YAML, it would also be possible to take emojivoto.yml itself and do the same thing (cat emojivoto.yml | linkerd inject -).
This is a great way to get Linkerd integrated into your CI/CD
pipeline. You can choose which services use Linkerd one at a time and
incrementally add them to the data plane.
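Meshing one deployment at a time could look something like this (`mesh_deploy` is a hypothetical helper of our own, not a Linkerd command):

```shell
# Hypothetical helper (not a Linkerd command): mesh a single deployment
# by piping just that one resource through `linkerd inject`.
mesh_deploy() {
  ns="$1"; name="$2"
  kubectl get -n "$ns" "deploy/$name" -o yaml \
    | linkerd inject - \
    | kubectl apply -f -
}
# e.g. mesh_deploy emojivoto web
```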
Just like with the control plane, it is possible to verify that everything worked the way it should with the data plane. To do this check, run:
linkerd -n emojivoto check --proxy
Step 6: Watch it run!
Glance at the Linkerd dashboard and you’ll see all the HTTP/2 (gRPC) and HTTP/1 (web frontend) services in the demo app show up in the list of resources running in the emojivoto namespace. Since the demo app comes with a load generator, you can check out some of the Linkerd functionality right away.
To see some high level stats about the app, you can run:
linkerd -n emojivoto stat deploy
This will show the “golden” metrics for each deployment:
- Success rates
- Request rates
- Latency distribution percentiles
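The success-rate column is simply successful responses over total requests. A toy sketch of that arithmetic (our own illustration, not Linkerd’s implementation):

```shell
# Toy sketch of the arithmetic behind the success-rate column
# (not Linkerd's implementation): successes divided by total requests.
success_rate() {
  ok="$1"; total="$2"
  awk -v ok="$ok" -v total="$total" 'BEGIN { printf "%.2f%%\n", ok / total * 100 }'
}
success_rate 980 1000
```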
To dig in a little further, it is possible to top the running services in real time and get an idea of what is happening on a per-path basis. To see this, you can run:
linkerd -n emojivoto top deploy
If you’re interested in going even deeper,
tap shows the stream of requests
across a single pod, deployment, or even everything in the emojivoto namespace.
To see this stream for the
web deployment, all you need to do is run:
linkerd -n emojivoto tap deploy/web
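tap can also narrow the stream with filter flags. The sketch below assumes a `--method` flag; verify against `linkerd tap --help` for your version:

```shell
# Hypothetical wrapper; the --method flag is an assumption here,
# check `linkerd tap --help` for the flags your version supports.
tap_web_gets() {
  linkerd -n emojivoto tap deploy/web --method GET
}
```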
All of this is also available with the dashboard, if you would like to use your browser instead.
These are all great for seeing real time data, but what about things that happened in the past? Linkerd includes Grafana to visualize all the great metrics collected by Prometheus and ships with some extremely valuable dashboards. You can get to these by clicking the Grafana icon in the overview page.
That’s it! 👏
For more things you can do, check out the rest of the Linkerd documentation.