Adding non-Kubernetes workloads to your mesh

In this guide, we’ll walk you through an example of mesh expansion: setting up and configuring an example non-Kubernetes workload and adding it to your Linkerd mesh.

Overall flow

In this guide, we’ll take you through how to:

  1. Install the Linkerd proxy onto a virtual or physical machine outside the Kubernetes cluster.
  2. Configure network rules so traffic is routed through the proxy.
  3. Register the external workload in the mesh.
  4. Exercise traffic patterns and apply authorization policies that affect the external workload.

We’ll be using SPIRE as our identity mechanism to generate a workload identity.

Prerequisites

You will need:

  • A functioning Linkerd installation and its trust anchor.
  • A Kubernetes cluster on which you have elevated privileges. For local development, you can use kind or k3d.
  • A physical or virtual machine.
  • Root (or CAP_NET_ADMIN) privileges on the machine, so iptables rules can be modified.
  • IP connectivity from the machine to every pod in the mesh.
  • A working DNS setup such that the machine is able to resolve DNS names for in-cluster Kubernetes workloads.
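The last two prerequisites are easy to get wrong, so it is worth spot-checking them from the machine before you start. A minimal sketch (`check_dns` is our own helper, not part of Linkerd; substitute any in-cluster name you expect the machine to resolve):

```shell
# Preflight: verify the machine can resolve in-cluster DNS names.
# Returns 0 if the name resolves, 1 otherwise.
check_dns() {
    name=$1
    if getent hosts "$name" >/dev/null 2>&1; then
        echo "ok: $name resolves"
        return 0
    else
        echo "FAIL: $name does not resolve" >&2
        return 1
    fi
}

check_dns localhost  # sanity check against a name that should always resolve
# Then try an in-cluster name, e.g.:
# check_dns linkerd-dst-headless.linkerd.svc.cluster.local
```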

Getting the current trust anchor and key

To use mutual TLS across cluster boundaries, the off-cluster machine and the cluster must share a trust anchor. For the purposes of this tutorial, we will assume that you have access to the trust anchor certificate and secret key for your Linkerd deployment and have placed them in files called ca.key and ca.crt.

Install SPIRE on your machine

Linkerd’s proxies normally obtain TLS certificates from the identity component of Linkerd’s control plane. In order to attest their identity, they use the Kubernetes Service Account token that is provided to each Pod.

Since our external workload lives outside of Kubernetes, Service Account tokens do not exist for it. Instead, we turn to the SPIFFE framework and its SPIRE implementation to create identities for off-cluster resources. Thus, for mesh expansion, we configure the Linkerd proxy to obtain its certificates directly from SPIRE instead of Linkerd's identity service. The magic of SPIFFE is that these certificates are compatible with those generated by Linkerd on the cluster.

In production, you may already have your own identity infrastructure built on top of SPIFFE that can be used by the proxies on external machines. For this tutorial, however, we'll take you through installing and setting up a minimal SPIRE environment on your machine. To begin, download SPIRE from the SPIRE GitHub releases page. For example:

wget https://github.com/spiffe/spire/releases/download/v1.8.2/spire-1.8.2-linux-amd64-musl.tar.gz
tar zvxf spire-1.8.2-linux-amd64-musl.tar.gz
cp -r spire-1.8.2/. /opt/SPIRE/
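It is prudent to verify the download before extracting it. A hedged sketch, assuming you have the SHA-256 digest published alongside the release (`verify_sha256` is a local helper, not a SPIRE tool):

```shell
# Verify a downloaded archive against a known SHA-256 digest before
# extracting it. Substitute the checksum published with the release
# you downloaded.
verify_sha256() {
    file=$1
    expected=$2
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "ok: checksum matches"
    else
        echo "FAIL: checksum mismatch for $file" >&2
        return 1
    fi
}

# verify_sha256 spire-1.8.2-linux-amd64-musl.tar.gz <published digest>
```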

Then you need to configure the SPIRE server on your machine:

cat >/opt/SPIRE/server.cfg <<EOL
server {
    bind_address = "127.0.0.1"
    bind_port = "8081"
    trust_domain = "root.linkerd.cluster.local"
    data_dir = "/opt/SPIRE/data/server"
    log_level = "DEBUG"
    ca_ttl = "168h"
    default_x509_svid_ttl = "48h"
}

plugins {
    DataStore "sql" {
        plugin_data {
            database_type = "sqlite3"
            connection_string = "/opt/SPIRE/data/server/datastore.sqlite3"
        }
    }

    KeyManager "disk" {
        plugin_data {
            keys_path = "/opt/SPIRE/data/server/keys.json"
        }
    }

    NodeAttestor "join_token" {
        plugin_data {}
    }

    UpstreamAuthority "disk" {
        plugin_data {
            cert_file_path = "/opt/SPIRE/certs/ca.crt"
            key_file_path = "/opt/SPIRE/certs/ca.key"
        }
    }
}
EOL

This file configures the SPIRE server. It assumes that the root certificate and key you installed Linkerd with are placed in the /opt/SPIRE/certs directory.

Additionally, you will need to configure the SPIRE agent:

cat >/opt/SPIRE/agent.cfg <<EOL
agent {
    data_dir = "/opt/SPIRE/data/agent"
    log_level = "DEBUG"
    trust_domain = "root.linkerd.cluster.local"
    server_address = "localhost"
    server_port = 8081

    # Insecure bootstrap is NOT appropriate for production use but is ok for
    # simple testing/evaluation purposes.
    insecure_bootstrap = true
}

plugins {
    KeyManager "disk" {
        plugin_data {
            directory = "/opt/SPIRE/data/agent"
        }
    }

    NodeAttestor "join_token" {
        plugin_data {}
    }

    WorkloadAttestor "unix" {
        plugin_data {}
    }
}
EOL

Now you need to start the server, the component that issues certificates. Start it and verify that it is healthy:

spire-server run -config ./server.cfg &
spire-server healthcheck
Server is healthy.
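The server can take a moment to come up, so an immediate healthcheck may fail. A small retry helper (our own convenience function, not part of SPIRE) smooths this over:

```shell
# Retry a command until it succeeds or the attempt budget is exhausted.
# Useful because the healthcheck can fail for the first second or two
# while the server is still starting.
wait_until_healthy() {
    attempts=$1; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@" >/dev/null 2>&1; then
            echo "healthy after $((i + 1)) attempt(s)"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "still unhealthy after $attempts attempts" >&2
    return 1
}

# wait_until_healthy 10 spire-server healthcheck
```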

Now you need to register the agent and run it. The agent queries the SPIRE server to attest (authenticate) workloads.

AGENT_TOKEN=$(spire-server token generate -spiffeID spiffe://root.linkerd.cluster.local/agent -output json | jq -r '.value')
spire-agent run -config ./agent.cfg -joinToken "$AGENT_TOKEN" &
spire-agent healthcheck
Agent is healthy.

After both the server and agent are running, you need to provide a registration policy for your workload. For simplicity, we'll use a registration policy that hands out a predefined SPIFFE identity to any process that runs under the root UID.

spire-server entry create -parentID spiffe://root.linkerd.cluster.local/agent \
    -spiffeID spiffe://root.linkerd.cluster.local/external-workload \
    -selector unix:uid:$(id -u root)
Entry ID         : ac5e2354-596a-4059-85f7-5b76e3bb53b3
SPIFFE ID        : spiffe://root.linkerd.cluster.local/external-workload
Parent ID        : spiffe://root.linkerd.cluster.local/agent
TTL              : 3600
Selector         : unix:uid:0

Registering the external workload with the mesh

For Linkerd to know about the external workload and be able to route traffic to it, we need to supply some information. This is done via an ExternalWorkload custom resource in the cluster. Create one now:

machine_IP=<the ip address of your machine>
kubectl apply -f - <<EOF
apiVersion: workload.linkerd.io/v1alpha1
kind: ExternalWorkload
metadata:
  name: external-workload
  namespace: mixed-env
  labels:
    location: vm
    app: legacy-app
    workload_name: external-workload
spec:
  meshTls:
    identity: "spiffe://root.linkerd.cluster.local/external-workload"
    serverName: "external-workload.cluster.local"
  workloadIPs:
  - ip: $machine_IP
  ports:
  - port: 80
    name: http
status:
  conditions:
  - type: Ready
    status: "True"
    lastTransitionTime: "2024-01-24T11:53:43Z"
EOF

This will create an ExternalWorkload resource that will be used to discover workloads that live outside of Kubernetes. A Service object can select over these resources the same way it selects over Pods, but more on that later.
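Before applying the manifest, it can help to validate that machine_IP actually holds an IPv4 address; a malformed workloadIPs entry is rejected by the API server with a less obvious error. A sketch (`is_ipv4` is a hypothetical local helper):

```shell
# Sanity-check that a value looks like a dotted-quad IPv4 address
# before substituting it into the ExternalWorkload manifest.
is_ipv4() {
    # Must be four dot-separated groups of 1-3 digits...
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
    # ...and every octet must be 0-255.
    for octet in $(echo "$1" | tr '.' ' '); do
        [ "$octet" -le 255 ] || return 1
    done
}

# is_ipv4 "$machine_IP" || echo "machine_IP is not a valid IPv4 address" >&2
```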

Installing the Linkerd proxy on the machine

We need to install and run the Linkerd proxy on the machine. Typically the proxy runs as a container in Kubernetes. The container itself has some additional machinery that is specific to bootstrapping identity in Kubernetes environments. When in a foreign environment, we do not need this functionality, so we can simply get the proxy binary:

LINKERD_VERSION=enterprise-2.15.0
mkdir /opt/linkerd-proxy && cd /opt/linkerd-proxy
id=$(docker create cr.l5d.io/linkerd/proxy:$LINKERD_VERSION)
docker cp $id:/usr/lib/linkerd/linkerd2-proxy ./linkerd-proxy
docker rm -v $id

Configuring and running the proxy

The machine's network configuration needs to be set up so that traffic is steered through the proxy. This can be done by adding the following iptables rules:

PROXY_INBOUND_PORT=4143
PROXY_OUTBOUND_PORT=4140
PROXY_USER_UID=$(id -u root)

# default inbound and outbound ports to ignore
INBOUND_PORTS_TO_IGNORE="4190,4191,4567,4568"
OUTBOUND_PORTS_TO_IGNORE="4567,4568"

iptables -t nat -N PROXY_INIT_REDIRECT
# ignore inbound ports
iptables -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports $INBOUND_PORTS_TO_IGNORE -j RETURN
# redirect all incoming traffic to the proxy's inbound port
iptables -t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port $PROXY_INBOUND_PORT
iptables -t nat -A PREROUTING -j PROXY_INIT_REDIRECT

# outbound rules
iptables -t nat -N PROXY_INIT_OUTPUT
# ignore traffic from the proxy user itself
iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner $PROXY_USER_UID -j RETURN
# ignore loopback
iptables -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN
# ignore outbound ports
iptables -t nat -A PROXY_INIT_OUTPUT -p tcp --match multiport --dports $OUTBOUND_PORTS_TO_IGNORE -j RETURN
# redirect all outgoing traffic to the proxy's outbound port
iptables -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port $PROXY_OUTBOUND_PORT
iptables -t nat -A OUTPUT -j PROXY_INIT_OUTPUT

iptables-save -t nat

These rules ensure that traffic is correctly routed through the proxy. Now that this is done, we need to run the proxy with the correct environment variables set up:
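If you prefer to review the redirect logic before mutating live netfilter state, you can render the rules as text first. A sketch of the inbound half (`render_inbound_rules` is our own helper, not a Linkerd tool; the port numbers mirror the defaults above):

```shell
# Render the inbound-redirect rules as text so they can be reviewed
# before being applied with iptables.
render_inbound_rules() {
    ignore_ports=$1    # comma-separated list, e.g. "4190,4191"
    inbound_port=$2    # the proxy's inbound port
    cat <<EOF
-t nat -N PROXY_INIT_REDIRECT
-t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports $ignore_ports -j RETURN
-t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port $inbound_port
-t nat -A PREROUTING -j PROXY_INIT_REDIRECT
EOF
}

# render_inbound_rules "4190,4191,4567,4568" 4143
```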

export LINKERD2_PROXY_IDENTITY_SERVER_ID="spiffe://root.linkerd.cluster.local/external-workload"
export LINKERD2_PROXY_IDENTITY_SERVER_NAME="external-workload.cluster.local"
export LINKERD2_PROXY_POLICY_WORKLOAD="{\"ns\":\"mixed-env\", \"external_workload\":\"external-workload\"}"
export LINKERD2_PROXY_DESTINATION_CONTEXT="{\"ns\":\"mixed-env\", \"nodeName\":\"my-vm\", \"external_workload\":\"external-workload\"}"
export LINKERD2_PROXY_DESTINATION_SVC_ADDR="linkerd-dst-headless.linkerd.svc.cluster.local.:8086"
export LINKERD2_PROXY_DESTINATION_SVC_NAME="linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local"
export LINKERD2_PROXY_POLICY_SVC_NAME="linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local"
export LINKERD2_PROXY_POLICY_SVC_ADDR="linkerd-policy.linkerd.svc.cluster.local.:8090"
export LINKERD2_PROXY_IDENTITY_SPIRE_SOCKET="unix:///tmp/spire-agent/public/api.sock"
export LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS=$(cat /opt/SPIRE/certs/ca.crt)
./linkerd-proxy
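Because linkerd2-proxy takes its entire configuration from the environment, a missing or misspelled export surfaces only as a confusing startup failure. A small preflight check (`require_env` is a local convenience, not part of Linkerd) can fail fast instead:

```shell
# Fail fast if any of the named environment variables is unset or empty,
# printing each missing name to stderr.
require_env() {
    missing=0
    for var in "$@"; do
        eval "val=\${$var:-}"
        if [ -z "$val" ]; then
            echo "missing: $var" >&2
            missing=1
        fi
    done
    return $missing
}

# require_env LINKERD2_PROXY_IDENTITY_SERVER_ID \
#             LINKERD2_PROXY_POLICY_WORKLOAD \
#             LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS && ./linkerd-proxy
```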

Start an application workload on the machine

Now that the proxy is running on the machine, you can start another workload on it that will be reachable from within the cluster. Make sure that you run this application under a user account different from the one the proxy is using. Let's use the bb utility to mimic a workload:

docker run -p 80:80 buoyantio/bb:v0.0.5 terminus \
    --h1-server-port 80 \
    --response-text hello-from-external-workload

Send encrypted traffic from and to the machine

Now that everything is running, you can send traffic from an in-cluster workload to the machine. Let’s start by creating our client as a workload in the cluster:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: client
  namespace: mixed-env
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: mixed-env
  annotations:
    linkerd.io/inject: enabled
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: client
    image: cr.l5d.io/linkerd/client:current
    command:
    - "sh"
    - "-c"
    - >
      while true; do sleep 3600; done
  serviceAccountName: client
EOF

You can also create a service that selects over both the machine as well as an in-cluster workload:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: legacy-app
  namespace: mixed-env
spec:
  type: ClusterIP
  selector:
    app: legacy-app
  ports:
  - port: 80
    protocol: TCP
    name: one
---
apiVersion: v1
kind: Service
metadata:
  name: legacy-app-cluster
  namespace: mixed-env
spec:
  type: ClusterIP
  selector:
    app: legacy-app
    location: cluster
  ports:
  - port: 80
    protocol: TCP
    name: one
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: mixed-env
  name: legacy-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
        location: cluster
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
      - name: legacy-app
        image: buoyantio/bb:v0.0.5
        command: ["sh", "-c"]
        args:
        - "/out/bb terminus --h1-server-port 80 --response-text hello-from-\$POD_NAME --fire-and-forget"
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
EOF

Now you can exec into the client pod and observe traffic being load-balanced between the in-cluster workload and the machine:

kubectl exec -c client --stdin --tty client -n mixed-env -- bash
while sleep 1; do curl -s http://legacy-app.mixed-env.svc.cluster.local:80/who-am-i | jq .; done
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-571813026",
  "payload": "hello-from-external-workload"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-599832807",
  "payload": "hello-from-legacy-app-d4446455b-2fgcr"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-634437030",
  "payload": "hello-from-external-workload"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-667578518",
  "payload": "hello-from-external-workload"
}
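To see the load-balancing split at a glance, you can tally the payload fields instead of eyeballing the stream. A sketch that works on output like the above (`tally_payloads` is our own helper; it assumes only standard POSIX tools):

```shell
# Count how often each backend answered, reading jq-formatted
# responses on stdin. With one in-cluster pod and one external
# workload behind the Service, both payloads should appear.
tally_payloads() {
    grep -o '"payload": "[^"]*"' | sort | uniq -c | sort -rn
}

# Example: pipe a capture of the curl loop's output through it, e.g.
#   ... /who-am-i loop ... | tally_payloads
```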

Similarly, you can send traffic from the machine to the cluster:

while sleep 1; do curl -s http://legacy-app-cluster.mixed-env.svc.cluster.local:80/who-am-i | jq .; done
# You should start seeing responses from the in-cluster workload.
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-824112662",
  "payload": "hello-from-legacy-app-6bb4854789-x4wbw"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-858574572",
  "payload": "hello-from-legacy-app-6bb4854789-x4wbw"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-895218927",
  "payload": "hello-from-legacy-app-6bb4854789-x4wbw"
}

Use authorization policies with machines

Although the identity of the proxy running on the machine is not tied to a Kubernetes service account, there is still an attested identity that can be used to define authorization policies. Let’s limit the kind of traffic that can reach our in-cluster workload. Create a Server resource now:

kubectl apply -f - <<EOF
apiVersion: policy.linkerd.io/v1beta2
kind: Server
metadata:
  name: in-cluster-endpoint
  namespace: mixed-env
  annotations:
    config.linkerd.io/default-inbound-policy: "deny"
spec:
  podSelector:
    matchLabels:
      app: legacy-app
  port: http-port
  proxyProtocol: HTTP/1
EOF

You can observe that we no longer get responses when we target the in-cluster workload from the machine, because the default policy is deny. We can fix that by explicitly allowing traffic from the machine with a policy that allows its SPIFFE ID:

kubectl apply -f - <<EOF
apiVersion: policy.linkerd.io/v1beta2
kind: Server
metadata:
  name: in-cluster-endpoint
  namespace: mixed-env
  annotations:
    config.linkerd.io/default-inbound-policy: "deny"
spec:
  podSelector:
    matchLabels:
      app: legacy-app
  port: http-port
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: in-cluster-endpoint-authn
  namespace: mixed-env
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: in-cluster-endpoint
  requiredAuthenticationRefs:
  - name: in-cluster-endpoint-mtls
    kind: MeshTLSAuthentication
    group: policy.linkerd.io
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: in-cluster-endpoint-mtls
  namespace: mixed-env
spec:
  identities:
  - "spiffe://root.linkerd.cluster.local/external-workload"
EOF

When this policy is applied, you can observe that traffic is allowed from the machine to the in-cluster workload. Similarly, you can attach policies to an external workload object by using the externalWorkloadSelector field of the Server object.

That’s it

Congrats! You have successfully meshed a non-Kubernetes workload with Linkerd and demonstrated secure, reliable communication between it and the meshed pods on your cluster.