Getting started with Multicluster

This guide will walk you through installing and configuring Linkerd so that two clusters can talk to services hosted on both. There are a lot of moving parts and concepts here, so it is valuable to read through our introduction, which explains how this works under the hood. By the end of this guide, you will understand how to split traffic between services that live on different clusters.

At a high level, you will:

  1. Install Linkerd on two clusters with a shared trust anchor.
  2. Prepare the clusters.
  3. Link the clusters.
  4. Install the demo.
  5. Export the demo services to control visibility.
  6. Verify the security of your clusters.
  7. Split traffic from pods on the source cluster (west) to the target cluster (east).

Prerequisites

  • Two clusters. We will refer to them as east and west in this guide. Follow along with the blog post as you walk through this guide! The easiest way to do this for development is running a kind or k3d cluster locally on your laptop and one remotely on a cloud provider, such as AKS.
  • Each of these clusters should be configured as kubectl contexts. We'd recommend you use the names east and west so that you can follow along with this guide. It is easy to rename contexts with kubectl, so don't feel like you need to keep it all named this way forever.
  • Elevated privileges on both clusters. We'll be creating service accounts and granting extended privileges, so you'll need to be able to do that on your test clusters.
  • Support for services of type LoadBalancer in the east cluster. Check out the documentation for your cluster provider or take a look at inlets. This is what the west cluster will use to communicate with east via the gateway.

Install Linkerd

Two Clusters

Linkerd requires a shared trust anchor to exist between the installations in all clusters that communicate with each other. This is used to encrypt the traffic between clusters and authorize requests that reach the gateway so that your cluster is not open to the public internet. Instead of letting Linkerd generate everything, we'll need to generate the credentials ourselves and use them as configuration for the install command.

We like to use the step CLI to generate these certificates. If you prefer openssl instead, feel free to use that! To generate the trust anchor with step, you can run:

step certificate create identity.linkerd.cluster.local root.crt root.key \
  --profile root-ca --no-password --insecure --san identity.linkerd.cluster.local

This certificate will form the common base of trust between all your clusters. Each proxy will get a copy of this certificate and use it to validate the certificates that it receives from peers as part of the mTLS handshake. With a common base of trust, we now need to generate a certificate that can be used in each cluster to issue certificates to the proxies. If you'd like to get a deeper picture into how this all works, check out the deep dive.

The trust anchor that we've generated is a self-signed certificate which can be used to create new certificates (a certificate authority). To generate the issuer credentials using the trust anchor, run:

step certificate create identity.linkerd.cluster.local issuer.crt issuer.key \
  --profile intermediate-ca --not-after 8760h --no-password --insecure \
  --ca root.crt --ca-key root.key --san identity.linkerd.cluster.local

An identity service in your cluster will use the certificate and key that you generated here to generate the certificates that each individual proxy uses. While we will be using the same issuer credentials on each cluster for this guide, it is a good idea to have separate ones for each cluster. Read through the certificate documentation for more details.

With a valid trust anchor and issuer credentials, you can now install Linkerd on your west and east clusters.

linkerd install \
  --identity-trust-anchors-file root.crt \
  --identity-issuer-certificate-file issuer.crt \
  --identity-issuer-key-file issuer.key \
  | tee \
    >(kubectl --context=west apply -f -) \
    >(kubectl --context=east apply -f -)

The output from install gets applied to each cluster, and the control plane comes up. You can verify that everything started successfully with check.
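The `tee` with process substitution used above fans one generated manifest out to both `kubectl apply` commands at once. Here is a minimal, self-contained illustration of the pattern (the file names are purely illustrative):

```shell
# tee duplicates its stdin to each >(...) process and to stdout.
# Here one echo feeds two separate consumers.
echo "hello" | tee \
  >(tr 'a-z' 'A-Z' > upper.txt) \
  >(wc -c > count.txt) \
  > /dev/null

# Process substitutions run asynchronously; give them a moment to finish.
sleep 1

cat upper.txt   # HELLO
cat count.txt   # 6 (five letters plus the trailing newline)
```

This requires bash (or another shell with process substitution); in plain POSIX sh you would save the manifest to a file and apply it once per context instead.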

for ctx in west east; do
  echo "Checking cluster: ${ctx} ........."
  linkerd --context=${ctx} check || break
  echo "-------------"
done

Preparing your cluster


There are two components required to discover services and route traffic between clusters. The service mirror is responsible for continuously monitoring services on a set of target clusters and copying these services from the target cluster to the local one running the service mirror. Instead of creating a new, special way to address and interact with services, Linkerd leverages Kubernetes services so that your application code does not need to change and there is nothing new to learn.

In addition, there needs to be a component that can be reached from outside the cluster. The gateway component routes incoming requests to the correct internal service. The gateway will be exposed to the public internet via a Service of type LoadBalancer. Only requests verified through Linkerd's mTLS (with a shared trust anchor) will be allowed through this gateway. If you're interested, we go into more detail as to why this is important in architecting for multicluster Kubernetes.

To install these components on both west and east, you can run:

for ctx in west east; do
  echo "Installing on cluster: ${ctx} ........."
  linkerd --context=${ctx} multicluster install | \
    kubectl --context=${ctx} apply -f - || break
  echo "-------------"
done

Installed into the linkerd-multicluster namespace, the gateway is a simple NGINX proxy which has been injected with the Linkerd proxy. On the inbound side, Linkerd takes care of validating that the connection uses a TLS certificate that is part of the trust anchor. NGINX takes the request and forwards it to the Linkerd proxy's outbound side. At this point, the Linkerd proxy is operating like any other in the data plane and forwards the requests to the correct service. Make sure the gateway comes up successfully by running:

for ctx in west east; do
  echo "Checking gateway on cluster: ${ctx} ........."
  kubectl --context=${ctx} -n linkerd-multicluster \
    rollout status deploy/linkerd-gateway || break
  echo "-------------"
done

Double check that the load balancer was able to allocate a public IP address by running:

for ctx in west east; do
  printf "Checking cluster: ${ctx} ........."
  while [ "$(kubectl --context=${ctx} -n linkerd-multicluster get service \
    -o 'custom-columns=:.status.loadBalancer.ingress[0].ip' \
    --no-headers)" = "<none>" ]; do
      printf '.'
      sleep 1
  done
  printf "\n"
done

Service Mirror

The service mirror is a Kubernetes controller that connects to a target cluster (east in this example). One of its jobs is to watch the services running on the target cluster and locally mirror any service that has been exported. In addition, the mirror watches the gateway's service on the target cluster and adds the public IP address to each mirrored service's endpoints. It should be up and running by now, but you can verify by running:

for ctx in west east; do
  echo "Checking cluster: ${ctx} ........."
  kubectl --context=${ctx} -n linkerd-multicluster \
    rollout status deploy/linkerd-service-mirror || break
done

Finally, let's do a pass with check and make sure all the components are healthy and ready to go.

for ctx in west east; do
  echo "Checking cluster: ${ctx} ........."
  linkerd --context=${ctx} check --multicluster || break
  echo "-------------"
done

If you're interested, you can take a peek at everything that's now running in both clusters with kubectl.

kubectl --context=west -n linkerd-multicluster get all

Every cluster is now running the multicluster control plane and ready to start mirroring services. We'll want to link the clusters together now!

Linking the clusters


For west to mirror services from east, the west cluster needs credentials so that it can watch for services in east to be exported. You wouldn't want anyone to be able to introspect what's running on your cluster, after all! The credentials consist of a service account to authenticate the service mirror as well as a ClusterRole and ClusterRoleBinding to allow watching services. In short, the service mirror component uses these credentials to watch services on east (the target cluster) and add/remove them from itself (west). A default set is added as part of linkerd multicluster install, but if you would like separate credentials for every cluster you can run linkerd multicluster allow.
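As a rough sketch, the read-only access the service mirror needs on the target cluster looks something like the following ClusterRole. This is illustrative only; the resource name is hypothetical, and `linkerd multicluster allow` generates the real resources for you.

```yaml
# Illustrative sketch: read/watch access to services and endpoints,
# which is what the service mirror needs to discover exported services.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: service-mirror-remote-access   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "list", "watch"]
```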

The next step is to link west to east. This will create a kubeconfig which contains the target (east) cluster's service account token and connection details. The kubeconfig will be applied to the source (west) cluster as a secret that can be read by the service mirror in west. To do this, you'll want to run link against the east context as you're fetching the details required to connect to that cluster. When applying it, you'll want to use the west context as that is what needs the details. To link the west cluster to the east one, run:

linkerd --context=east multicluster link --cluster-name east |
  kubectl --context=west apply -f -

Linkerd will look at your current east context and extract the cluster configuration, which contains the server location as well as the CA bundle. It will then fetch the ServiceAccount token and merge these pieces of configuration into a kubeconfig stored in a secret.

The service mirror watches for secrets in the linkerd-multicluster namespace that contain a kubeconfig for remote clusters. Now that we have created the credentials the service mirror needs, it will connect to east and mirror any services that have been exported to west. We've not explicitly exported any services yet; that happens in the next step.

Running check again will make sure that the service mirror has discovered this secret and can reach east.

linkerd --context=west check --multicluster

Additionally, the east gateway should now show up in the list:

linkerd --context=west multicluster gateways

Installing the test services


It is time to test this all out! The first step is to add some services that we can mirror. To add these to both clusters, you can run:

for ctx in west east; do
  echo "Adding test services on cluster: ${ctx} ........."
  kubectl --context=${ctx} apply \
    -k "${ctx}/"
  kubectl --context=${ctx} -n test \
    rollout status deploy/podinfo || break
  echo "-------------"
done

You'll now have a test namespace running two deployments in each cluster: frontend and podinfo. podinfo has been configured slightly differently in each cluster, with a different name and color, so that we can tell where requests are going.

To see what it looks like from the west cluster right now, you can run:

kubectl --context=west -n test port-forward svc/frontend 8080

West Podinfo

With the podinfo landing page available at http://localhost:8080, you can see how it looks in the west cluster right now. Alternatively, running curl http://localhost:8080 will return a JSON response that looks something like:

  "hostname": "podinfo-5c8cf55777-zbfls",
  "version": "4.0.2",
  "revision": "b4138fdb4dce7b34b6fc46069f70bb295aa8963c",
  "color": "#6c757d",
  "logo": "",
  "message": "greetings from west",
  "goos": "linux",
  "goarch": "amd64",
  "runtime": "go1.14.3",
  "num_goroutine": "8",
  "num_cpu": "4"

Notice that the message references the west cluster name.

Exporting the services

To make sure sensitive services are not mirrored and cluster performance is not impacted by the creation or deletion of services, we require that services be explicitly exported. For the purposes of this guide, we will be exporting the podinfo service from the east cluster to the west cluster. To do this, we must first export the podinfo service in the east cluster. You can do this by running:

kubectl --context=east get svc -n test podinfo -o yaml | \
  linkerd multicluster export-service - | \
  kubectl --context=east apply -f -

The linkerd multicluster export-service command simply adds a couple of annotations to the service. There's no reason you have to use the command; feel free to add them yourself! The added annotations are mirror.linkerd.io/gateway-name (set to linkerd-gateway) and mirror.linkerd.io/gateway-ns (set to linkerd-multicluster).

Make sure to configure the values based on how you have configured the installation. The gateway's service name and namespace are required.
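For reference, here is a trimmed sketch of what the exported service might look like with the annotations in place, assuming the default gateway name and namespace from linkerd multicluster install:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: test
  annotations:
    # Point the service mirror at the gateway to route through.
    mirror.linkerd.io/gateway-name: linkerd-gateway
    mirror.linkerd.io/gateway-ns: linkerd-multicluster
spec:
  ports:
  - port: 9898
```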

These annotations are picked up by the service mirror component in the west cluster. A podinfo-east service is then created in the test namespace. The cluster name of a mirrored service is automatically added to the service name so that there are no local collisions and it is explicit where the traffic is being sent. Check out the service that was just created by the service mirror controller!

kubectl --context=west -n test get svc podinfo-east

From the architecture, you'll remember that the service mirror component is doing more than just moving services over. It is also managing the endpoints on the mirrored service. To verify that it is set up correctly, you can check the endpoints in west and verify that they match the gateway's public IP address in east.

kubectl --context=west -n test get endpoints podinfo-east \
  -o 'custom-columns=ENDPOINT_IP:.subsets[*].addresses[*].ip'
kubectl --context=east -n linkerd-multicluster get svc linkerd-gateway \
  -o "custom-columns=GATEWAY_IP:.status.loadBalancer.ingress[*].ip"

At this point, we can hit the podinfo service in east from the west cluster. This requires the client to be meshed, so let's run curl from within the frontend pod:

kubectl --context=west -n test exec -c nginx -it \
  $(kubectl --context=west -n test get po -l app=frontend \
    --no-headers -o custom-columns=:.metadata.name) \
  -- /bin/sh -c "apk add curl && curl http://podinfo-east:9898"

You'll see the greeting from east message! Requests from the frontend pod running in west are being transparently forwarded to east. Assuming that you're still port forwarding from the previous step, you can also reach this from your browser at http://localhost:8080/east. Refresh a couple times and you'll be able to get metrics from linkerd stat as well.

linkerd --context=west -n test stat --from deploy/frontend svc

We also provide a Grafana dashboard to get a feel for what's going on here. You can get to it by running linkerd --context=west dashboard and going to http://localhost:50750/grafana/.


Security

By default, requests will be going across the public internet. Linkerd extends its automatic mTLS across clusters to make sure that the communication going across the public internet is encrypted. If you'd like to have a deep dive on how to validate this, check out the docs. To quickly check, however, you can run:

linkerd --context=west -n test tap deploy/frontend | \
  grep "$(kubectl --context=east -n linkerd-multicluster get svc linkerd-gateway \
    -o "custom-columns=GATEWAY_IP:.status.loadBalancer.ingress[*].ip")"

tls=true tells you that the requests are being encrypted!

In addition to making sure all your requests are encrypted, it is important to block arbitrary requests coming into your cluster. We do this by validating that requests are coming from clients in the mesh. To do this validation, we rely on a shared trust anchor between clusters. To see what happens when a client is outside the mesh, you can run:

kubectl --context=west -n test run -it --rm --image=alpine:3 test -- \
  /bin/sh -c "apk add curl && curl -vv http://podinfo-east:9898"

Traffic Splitting

Traffic Split

It is pretty useful to have services automatically show up in clusters and be able to explicitly address them, however that only covers one use case for operating multiple clusters. Another scenario for multicluster is failover. In a failover scenario, you don't have time to update the configuration. Instead, you need to be able to leave the application alone and just change the routing. If this sounds a lot like how we do canary deployments, you'd be correct!

TrafficSplit allows us to define weights between multiple services and split traffic between them. In a failover scenario, you want to do this slowly to make sure you don't overload the other cluster or trip any SLOs because of the added latency. To get this all working with our scenario, let's split traffic between the podinfo services in west and east. To configure this, you'll run:

cat <<EOF | kubectl --context=west apply -f -
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  service: podinfo
  backends:
  - service: podinfo
    weight: 50
  - service: podinfo-east
    weight: 50
EOF

Any requests to podinfo will now be forwarded to the podinfo-east service 50% of the time and to the local podinfo service the other 50%. Requests sent to podinfo-east end up in the east cluster, so we've now effectively failed over 50% of the traffic from west to east.
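To push the failover further, you could shift all traffic to east by adjusting the weights on the same TrafficSplit. A sketch of the fully failed-over split (assuming the SMI v1alpha1 API used here):

```yaml
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  service: podinfo
  backends:
  - service: podinfo        # local backend receives no traffic
    weight: 0
  - service: podinfo-east   # all traffic fails over to east
    weight: 100
```

Applying this with kubectl --context=west apply -f - shifts routing without touching the application itself.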

If you're still running port-forward, you can send your browser to http://localhost:8080. Refreshing the page should show responses from both clusters. Alternatively, for the command line approach, curl localhost:8080 will return greetings from both west and east.

Cross Cluster Podinfo

You can also watch what's happening with metrics. To see the source side of things (west), you can run:

linkerd --context=west -n test stat trafficsplit

It is also possible to watch this from the target (east) side by running:

linkerd --context=east -n test stat \
  --from deploy/linkerd-gateway \
  --from-namespace linkerd-multicluster \
  deploy/podinfo

There's even a dashboard! Run linkerd dashboard and send your browser to localhost:50750.

Cross Cluster Podinfo

Cleanup

To cleanup the multicluster control plane, you can run:

for ctx in west east; do
  kubectl --context=${ctx} delete ns linkerd-multicluster
done

If you'd also like to remove your Linkerd installation, run:

for ctx in west east; do
  linkerd --context=${ctx} uninstall | kubectl --context=${ctx} delete -f -
done