Welcome to Linkerd! 🎈
In this guide, we’ll walk you through how to install Linkerd into your Kubernetes cluster. Then we’ll deploy a sample application to show off what Linkerd can do for your services.
Installing Linkerd is easy. First, you'll install the CLI (command-line interface) onto your local machine. Using this CLI, you'll install the Linkerd control plane into your Kubernetes cluster. Finally, you'll "mesh" one or more services by adding the data plane proxies. (See the Architecture page for details.)
We’ll walk you through this process step by step.
Before we can do anything, we need to ensure you have access to a Kubernetes cluster running 1.10.0 or later, and a functioning kubectl command on your local machine.
When you're ready, make sure you're running a recent version of Kubernetes with:
kubectl version --short
Additionally, if you are using GKE with RBAC enabled, you will want to grant a cluster-admin role to your Google Cloud account first. This will provide your current user all the permissions required to install the control plane. To bind this ClusterRole to your user, you can run:
kubectl create clusterrolebinding cluster-admin-binding-$USER \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)
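To confirm the binding took effect, one quick check (an illustrative sketch, not part of the official steps) is to ask the API server whether your account is now allowed to do everything:

```shell
# Ask Kubernetes whether the current user may perform any action on any
# resource in any namespace. Prints "yes" once the cluster-admin binding
# is active.
kubectl auth can-i '*' '*' --all-namespaces
```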
In the next step, we will install the CLI and validate that your cluster is ready to install the control plane.
If this is your first time running Linkerd, you’ll need to download the command-line interface (CLI) onto your local machine. You’ll use this CLI to interact with Linkerd, including installing the control plane onto your Kubernetes cluster.
To install the CLI, run:
curl -sL https://run.linkerd.io/install | sh
Alternatively, you can download the CLI directly via the Linkerd releases page.
Next, add linkerd to your path with:
export PATH=$PATH:$HOME/.linkerd2/bin
Verify the CLI is installed and running correctly with:
linkerd version
You should see the CLI version, and also “Server version: unavailable”. This is because we haven’t installed the control plane. We’ll do that soon.
Kubernetes clusters can be configured in many different ways. To ensure that the control plane will install correctly, the Linkerd CLI can check and validate that everything is configured correctly.
To check that your cluster is configured correctly and ready to install the control plane, you can run:
linkerd check --pre
Now that you have the CLI running locally and a cluster that is ready to go, it's time to install the lightweight control plane into its own namespace (linkerd). If you would like to install it into a different namespace, check out the help for linkerd install. To do this, run:
linkerd install | kubectl apply -f -
linkerd install generates a list of Kubernetes resources. Run it standalone if you would like to understand what is going on. By piping the output of linkerd install into kubectl, the Linkerd control plane resources will be added to your cluster and start running immediately.
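If you'd rather review the manifest before anything touches the cluster, one approach (a sketch with arbitrary file names) is to render it to a file first:

```shell
# Render the control-plane manifest to a file so it can be inspected
# (or checked into version control) before applying.
linkerd install > linkerd.yml

# Apply it once you're satisfied with the contents.
kubectl apply -f linkerd.yml
```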
Depending on the speed of your internet connection, it may take a minute or two for your Kubernetes cluster to pull the Linkerd images. While that's happening, we can validate that everything is installing correctly by running:
linkerd check
This command will patiently wait until Linkerd has been installed and is running. If you’re interested in what components were installed, you can run:
kubectl -n linkerd get deploy
Check out the architecture documentation for an in depth explanation of what these components are and what they do.
With the control plane installed and running, you can now view the Linkerd dashboard by running:
linkerd dashboard &
The control plane components all have the proxy installed in their pods and are part of the data plane itself. This provides the ability to dig into these components and see what is going on behind the scenes. In fact, you can run:
linkerd -n linkerd top deploy/linkerd-web
This is the traffic you’re generating by looking at the dashboard itself!
To get a feel for how Linkerd would work for one of your services, you can install the demo application. It provides an excellent place to look at all the functionality that Linkerd provides. To install it on your own cluster, in its own namespace (emojivoto), run:
curl -sL https://run.linkerd.io/emojivoto.yml \
  | kubectl apply -f -
Before we mesh it, let's take a look at the app. If you're using Docker for Desktop, at this point you can visit http://localhost directly. If you're not using Docker for Desktop, we'll need to forward the web-svc service. To forward web-svc locally to port 8080, you can run:
kubectl -n emojivoto port-forward svc/web-svc 8080:80
Now visit http://localhost:8080. Voila! The emojivoto app in all its glory.
Clicking around, you might notice that some parts of the application are broken! For example, if you click on a doughnut emoji, you’ll get a 404 page. Don’t worry, these errors are intentional. (And we can use Linkerd to identify the problem. Check out the debugging guide if you’re interested in how to figure out exactly what is wrong.)
Next, let’s add Linkerd to the Emojivoto app, by running:
kubectl get -n emojivoto deploy -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
This command retrieves all of the deployments running in the emojivoto namespace, runs the set of Kubernetes resources through linkerd inject, and finally reapplies them to the cluster. The inject command augments the resources to include the data plane's proxies. As with install, inject is a pure text operation, meaning that you can inspect the input and output before you use it.
You can even run it through
diff to see exactly what is changing.
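For example, one way to see exactly what inject changes (a sketch; the file names here are arbitrary) is to diff the manifests before and after injection:

```shell
# Capture the current manifests, inject the proxies, and compare.
kubectl get -n emojivoto deploy -o yaml > original.yml
linkerd inject original.yml > injected.yml
diff original.yml injected.yml
```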
Once piped into
kubectl apply, Kubernetes will execute a rolling deploy and
update each pod with the data plane’s proxies, all without any downtime.
You’ve added Linkerd to existing services without touching the original YAML!
Because inject augments YAML, it would also be possible to take emojivoto.yml itself and do the same thing (cat emojivoto.yml | linkerd inject - | kubectl apply -f -).
This is a great way to get Linkerd integrated into your CI/CD
pipeline. You can choose which services use Linkerd one at a time and
incrementally add them to the data plane.
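Meshing one service at a time can be sketched by injecting a single deployment; for instance, to mesh only the web deployment and leave the rest untouched:

```shell
# Inject the data plane proxy into just the web deployment.
kubectl get -n emojivoto deploy/web -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```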
Just like with the control plane, it is possible to verify that everything worked the way it should with the data plane. To do this check, run:
linkerd -n emojivoto check --proxy
Note: In this step, we meshed the emojivoto app when the app was already running. While this ad hoc approach works fine in many cases, Linkerd also supports automated proxy injection, which is typically more suitable for applications using automated deployment patterns.
Glance at the Linkerd dashboard and you'll see all of the HTTP/2 (gRPC) and HTTP/1 (web frontend) services in the demo app show up in the list of resources running in the emojivoto namespace. Since the demo app comes with a load generator, you can check out some of Linkerd's functionality right away.
To see some high level stats about the app, you can run:
linkerd -n emojivoto stat deploy
This will show the “golden” metrics for each deployment:
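As an illustration, the output is a table along these lines: success rate, request volume, and latency percentiles per deployment (the numbers below are made up; yours will differ):

```shell
NAME       MESHED   SUCCESS      RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
emoji         1/1   100.00%   2.0rps           1ms           2ms           3ms
voting        1/1    85.00%   0.9rps           1ms           2ms           4ms
web           1/1    93.00%   2.0rps           5ms          10ms          18ms
```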
To dig in a little further, it is possible to top the running services in real time and get an idea of what is happening on a per-path basis. To see this, you can run:
linkerd -n emojivoto top deploy
If you’re interested in going even deeper,
tap shows the stream of requests
across a single pod, deployment, or even everything in the emojivoto namespace.
To see this stream for the
web deployment, all you need to do is run:
linkerd -n emojivoto tap deploy/web
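tap also supports filtering. For example, assuming the --path flag is available in your CLI version (check linkerd tap --help), you could watch only the requests hitting a particular route:

```shell
# Stream only requests whose path starts with /api/vote.
linkerd -n emojivoto tap deploy/web --path /api/vote
```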
All of this is also available with the dashboard, if you would like to use your browser instead. The dashboard views look like:
These are all great for seeing real time data, but what about things that happened in the past? Linkerd includes Grafana to visualize all the great metrics collected by Prometheus and ships with some extremely valuable dashboards. You can get to these by clicking the Grafana icon in the overview page.
For more things you can do: