How to use it

linkerd runs as a standalone proxy, which frees applications from any language or library requirements. Applications typically use linkerd by running instances in known locations and proxying calls through these instances—i.e., rather than connecting to destinations directly, services connect to their corresponding linkerd instances and treat these instances as if they were the destination services.
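As a minimal sketch of that pattern, the snippet below sends a request to a local linkerd instance and names the logical destination in the Host header, rather than resolving the destination itself. The proxy port (4140) and the service name (`hello`) are illustrative assumptions; they depend on how linkerd is configured.

```python
import requests

# Send the request to the local linkerd instance rather than to the
# destination service itself. linkerd resolves the logical name in the
# Host header via service discovery and balances across instances.
LINKERD_ADDR = "http://localhost:4140"  # assumed local linkerd HTTP port

resp = requests.get(
    f"{LINKERD_ADDR}/greeting",
    headers={"Host": "hello"},  # logical service name, not a physical address
    timeout=2.0,
)
print(resp.status_code, resp.text)
```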

Under the hood, linkerd applies routing rules, communicates with existing service discovery mechanisms, and load-balances over destination instances—all while instrumenting the communication and reporting metrics. A typical linkerd setup is illustrated below:
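To give a concrete sense of where routing rules and service discovery plug in, here is roughly the shape of a minimal linkerd configuration, modeled on the getting-started examples: an HTTP router listening on port 4140 that resolves logical names through a file-based discovery namer. The namer, ports, and dtab here are illustrative; the linkerd configuration reference has the authoritative keys.

```yaml
# Approximate shape of a minimal linkerd config (illustrative values):
# an HTTP router on port 4140 that resolves logical service names
# through a file-based service discovery namer.
admin:
  port: 9990              # admin/metrics endpoint

namers:
- kind: io.l5d.fs         # file-based service discovery (one file per service)
  rootDir: disco

routers:
- protocol: http
  dtab: |
    /svc => /#/io.l5d.fs;  # route logical names through the fs namer
  servers:
  - ip: 0.0.0.0
    port: 4140
```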

Figure: How linkerd integrates into existing applications.

By delegating the mechanics of the call to linkerd (contrasted in the sketch after this list), application code is decoupled from:

  1. knowledge of the production topology;
  2. knowledge of the service discovery mechanism; and
  3. load balancing and connection management logic.
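The following hypothetical sketch illustrates the decoupling: the first function carries the discovery and balancing logic an application would otherwise own, while the second simply hands the call to its local linkerd instance. The discovery client, endpoint format, and proxy port are assumptions made for illustration.

```python
import random
import requests

def call_hello_without_linkerd(discovery_client):
    # The application itself must know the discovery mechanism,
    # fetch the current topology, and pick an instance to call.
    endpoints = discovery_client.lookup("hello")   # hypothetical discovery API
    host, port = random.choice(endpoints)          # ad-hoc client-side balancing
    return requests.get(f"http://{host}:{port}/greeting", timeout=2.0)

def call_hello_via_linkerd():
    # The application only knows its local proxy; topology, discovery,
    # and load balancing are linkerd's concern.
    return requests.get(
        "http://localhost:4140/greeting",          # assumed local linkerd port
        headers={"Host": "hello"},                 # logical destination name
        timeout=2.0,
    )
```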

Applications also benefit from a consistent, global traffic control mechanism. This is particularly important for polyglot applications, for which it is very difficult to attain this sort of consistency via libraries.

linkerd instances can be deployed as sidecars (i.e., one linkerd instance per application service instance) or per-host (one linkerd instance per machine, shared by every service instance on that machine). Since linkerd instances are stateless and independent, they fit easily into existing deployment topologies and can run alongside application code in a variety of configurations with a minimum of coordination.
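As one concrete example of the per-host model, a linkerd instance can be scheduled onto every Kubernetes node with a DaemonSet, roughly as sketched below. The image tag, config path, ConfigMap name, and ports are assumptions; linkerd's Kubernetes examples document the exact wiring.

```yaml
# Rough per-host deployment sketch for Kubernetes (illustrative values):
# one linkerd pod per node, reachable by local applications on port 4140.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: l5d
spec:
  selector:
    matchLabels:
      app: l5d
  template:
    metadata:
      labels:
        app: l5d
    spec:
      containers:
      - name: l5d
        image: buoyantio/linkerd:1.7.5                      # assumed image/tag
        args: ["/io.buoyant/linkerd/config/config.yaml"]    # assumed config path
        ports:
        - name: http-outgoing
          containerPort: 4140
          hostPort: 4140            # apps on the node call localhost:4140
        volumeMounts:
        - name: l5d-config
          mountPath: /io.buoyant/linkerd/config
      volumes:
      - name: l5d-config
        configMap:
          name: l5d-config          # assumed ConfigMap holding config.yaml
```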