Configuring Proxy Concurrency
Linkerd data plane proxies allocate a fixed number of worker threads at startup, and this thread count directly determines the proxy's maximum CPU consumption. In Kubernetes, where proxies run as sidecars alongside application containers in the same pod and share node resources with other pods, this static allocation means that choosing too many threads can lead to CPU oversubscription. Operators must therefore balance the proxy's fixed thread count against the pod's CPU limits and resource quotas so that both the proxy and the application containers perform well without degrading the node as a whole.
Default Behavior
Linkerd’s default Helm configuration runs sidecar proxies with a single runtime worker. No requests or limits are configured for the proxy.
proxy:
  resources:
    cpu:
      request:
      limit:
  runtime:
    workers:
      minimum: 1
This document describes how to run proxies with additional runtime workers.
Configuring Proxy CPU Requests and Limits
Kubernetes allows you to set CPU requests and limits for any container, and these settings also control the CPU usage of the Linkerd proxy. However, the effect of these settings depends on how the kubelet enforces CPU limits.
The kubelet enforces pod CPU limits using one of two approaches, determined by its --cpu-manager-policy flag:
Default CPU Manager Policy
When using the default none policy, the kubelet relies on Completely Fair Scheduler (CFS) quotas. In this mode, the Linux kernel limits the percentage of CPU time that processes (including the Linkerd proxy) can use. For example, a proxy with a CPU limit of 2000m may consume up to two cores' worth of CPU time per scheduling period, even though its threads may run on any of the node's cores.
Static CPU Manager Policy
When the kubelet is configured with the static CPU manager policy, it assigns whole CPU cores to containers by leveraging Linux cgroup cpusets. To use this mechanism, the following conditions must be met (a sketch follows the list):

- The kubelet must run with the static CPU manager policy.
- The pod must belong to the Guaranteed QoS class. This requires that every container in the pod has matching CPU (and memory) requests and limits.
- The CPU request and CPU limit for the proxy must be specified as whole numbers (integers) and must be at least 1.
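For illustration, here is a minimal sketch of one way these conditions could be satisfied. The kubelet settings, workload name, container name, image, and resource values are assumptions chosen for the example, not recommendations.

# Kubelet configuration (sketch): enables the static CPU manager policy.
# The static policy also requires the kubelet to reserve some CPU,
# e.g. via kubeReserved or reservedSystemCPUs (not shown here).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static

With the kubelet configured this way, a Deployment along the following lines could place its pods in the Guaranteed QoS class, with whole-number CPU requests and limits for both the application container and the injected proxy:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: example                        # hypothetical workload
spec:
  template:
    metadata:
      annotations:
        # Pin the proxy to one whole CPU and give it matching memory
        # requests and limits so the pod can qualify for Guaranteed QoS.
        config.linkerd.io/proxy-cpu-request: "1"
        config.linkerd.io/proxy-cpu-limit: "1"
        config.linkerd.io/proxy-memory-request: "256Mi"
        config.linkerd.io/proxy-memory-limit: "256Mi"
    spec:
      containers:
        - name: app                    # hypothetical application container
          image: example/app:latest
          resources:
            requests:
              cpu: "2"
              memory: 1Gi
            limits:
              cpu: "2"
              memory: 1Gi
      # ...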
Configuring Default Proxy CPU Requests and Limits Using Helm
A global default CPU request can be configured in the control-plane helm chart to influence the scheduler:
proxy:
  resources:
    cpu:
      request: 100m
When only a request is specified, its value is also used to configure the proxy's runtime, rounded up to the next whole number of workers (for example, a request of 100m yields one worker).
Alternatively, a global default CPU limit can be configured in the control-plane helm chart:
proxy:
  resources:
    cpu:
      limit: 2000m
Similarly, this value controls the proxy's runtime configuration, rounded up to the next whole number of workers (a limit of 2000m yields two workers).
When both values are specified, the request is used to influence the scheduler and the limit is used to configure the proxy’s runtime:
proxy:
  resources:
    cpu:
      request: 100m
      limit: 2000m
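For reference, the injected linkerd-proxy sidecar then carries these values in its resources stanza. The following is a minimal sketch of the relevant part of the injected container, with most fields omitted:

# Sketch: resources of the injected linkerd-proxy container,
# assuming the Helm values above.
containers:
  - name: linkerd-proxy
    resources:
      requests:
        cpu: 100m        # used by the Kubernetes scheduler
      limits:
        cpu: 2000m       # rounded up to configure 2 runtime workers
    # ...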
Overriding Proxy CPU Requests and Limits Using Annotations
The config.linkerd.io/proxy-cpu-request and config.linkerd.io/proxy-cpu-limit annotations can be used to override the Helm configuration for a given namespace or workload:
kind: Deployment
apiVersion: apps/v1
metadata:
  # ...
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-cpu-request: 100m
        config.linkerd.io/proxy-cpu-limit: 2000m
    # ...
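Because these are standard config.linkerd.io annotations, they can also be set on a Namespace so that the override applies to every pod injected in that namespace (workload-level annotations still take precedence). A minimal sketch, with a hypothetical namespace name:

kind: Namespace
apiVersion: v1
metadata:
  name: my-app                 # hypothetical namespace
  annotations:
    config.linkerd.io/proxy-cpu-request: 100m
    config.linkerd.io/proxy-cpu-limit: 2000m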
Configuring Rational Proxy CPU Limits
In some environments, it might not be practical to use a fixed CPU limit for a workload (for example, when the workload does not specify CPU limits and runs on nodes of varying sizes). In this case, the proxy can be configured with a maximum ratio of the host’s total available CPUs.
A runtime.workers.maximumCPURatio value of 1.0 configures the proxy to allocate a worker for each CPU, while a value of 0.2 configures the proxy to allocate 1 proxy worker for every 5 available cores (rounded up or down as appropriate); for example, on a node with 10 available cores, a ratio of 0.2 yields 2 workers. The runtime.workers.minimum value sets a lower bound on the number of workers per proxy.
Configuring Rational Proxy CPU Limits Using Helm
Global defaults can be configured in the control-plane helm chart:
proxy:
  runtime:
    workers:
      maximumCPURatio: 0.2
      minimum: 1
Overriding Rational Proxy CPU Limits Using Annotations
To override the default maximum CPU ratio, use the config.linkerd.io/proxy-cpu-ratio-limit annotation:
kind: Deployment
apiVersion: apps/v1
metadata:
  # ...
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/proxy-cpu-ratio-limit: '0.3'
    # ...