linkerd runs as a separate, standalone proxy, which frees applications from any particular language or library requirements. Applications typically use linkerd by running instances in known locations and proxying calls through those instances: rather than connecting to a destination directly, a service connects to its corresponding linkerd instance and treats it as if it were the destination service.
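As a minimal sketch of this pattern, the snippet below builds a request aimed at a local linkerd instance rather than at the destination service. It assumes linkerd is listening as an HTTP proxy on `localhost:4140` (a common but configurable choice) and routing on the `Host` header; the function name is hypothetical, not part of any linkerd client library.

```python
# Illustrative only: the application addresses the local linkerd
# instance, and names the logical service in the Host header.
# Port 4140 and Host-header routing are assumptions of this sketch.
from urllib.request import Request

LINKERD_PROXY = "http://localhost:4140"  # assumed local linkerd address


def request_via_linkerd(service: str, path: str) -> Request:
    """Build a request that targets the local linkerd proxy.

    The logical service name goes in the Host header; linkerd uses
    it to route the call to an actual instance of that service.
    """
    req = Request(f"{LINKERD_PROXY}{path}")
    req.add_header("Host", service)
    return req


# The application never learns where `users` actually runs:
req = request_via_linkerd("users", "/profiles/42")
```

The application code carries no knowledge of where instances of `users` live; linkerd resolves and balances behind the fixed local address.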
Under the hood, linkerd applies routing rules, communicates with existing service discovery mechanisms, and load-balances over destination instances—all while instrumenting the communication and reporting metrics. A typical linkerd setup is illustrated below:
Because linkerd handles the mechanics of making the call, application code is decoupled from:
- knowledge of the production topology;
- knowledge of the service discovery mechanism; and
- load balancing and connection management logic.
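To make the last point concrete, here is a sketch of the kind of client-side balancing logic that linkerd absorbs. linkerd's default balancer is based on a power-of-two-choices strategy; the simplified version below (hypothetical names, load measured as a plain in-flight request count rather than linkerd's latency-aware metric) shows the choice rule applications no longer need to carry.

```python
# Illustrative only: the selection logic an application would
# otherwise embed, which linkerd performs on its behalf.
import random


def pick_endpoint(endpoints: dict[str, int]) -> str:
    """Power-of-two-choices: sample two endpoints at random and
    take the one with fewer outstanding requests.

    `endpoints` maps address -> current in-flight request count
    and must contain at least two entries.
    """
    a, b = random.sample(list(endpoints), 2)
    return a if endpoints[a] <= endpoints[b] else b


# Endpoint set and loads would normally come from a service-discovery
# system -- another concern linkerd takes over from the application.
loads = {"10.0.0.1:8080": 3, "10.0.0.2:8080": 0, "10.0.0.3:8080": 7}
choice = pick_endpoint(loads)
```

Multiplied across every service and language in a system, keeping logic like this correct and consistent is exactly the burden the proxy model removes.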
Applications also benefit from a consistent, global traffic control mechanism. This is particularly important for polyglot applications, for which it is very difficult to attain this sort of consistency via libraries.
linkerd instances can be deployed as sidecars (i.e. one instance per application service instance) or per-host. Since linkerd instances are stateless and independent, they can fit easily into existing deployment topologies. They can be deployed alongside application code in a variety of configurations and with a minimum of coordination.