BEL Resource Requests and Limits

This guide describes our recommendations for setting Kubernetes resource limits and requests for Buoyant Enterprise for Linkerd.

This is a general guide, intended to provide a safe starting point for systems that handle moderate load. Systems expected to handle high load (or small load) may wish to alter these values. Additionally, as you develop an operational history with your applications, you may wish to tune these values for more efficient allocation of resources and to achieve specific QoS goals.

See Real-world Examples below for concrete examples of Linkerd’s resource consumption in practice.

The effects of resource requests and limits can be nuanced in Kubernetes, and there are many factors to consider, including desired QoS levels and application behaviors. In this guide, we attempt to provide a safe framework that minimizes disruptions and allows Linkerd to scale in most cases. These values are a starting point. Once an operational history has been established with Linkerd, there are many ways you can improve upon them, including:

  • You can lower the requests and limits to match empirically observed resource consumption, allowing you to schedule Linkerd workloads more efficiently.

  • You can set CPU limits where they are left unspecified below, allowing your pods to achieve Guaranteed QoS rather than Burstable QoS.

  • You can use automated systems to continually tune these values.

In short: this guide aims to provide a safe basic setup, but there is plenty of opportunity for tailoring.

We recommend the following resource configuration for the Linkerd control plane as a safe starting point for a system designed for moderate load. These values are built from our empirical observations of the “moderate traffic” system described below, plus plenty of headroom.

| Component | CPU request | CPU limit | Memory request | Memory limit |
|-----------|-------------|-----------|----------------|--------------|
| destination | 100m | 100m | 500 MB | 500 MB |
| sp-validator | 10m | 10m | 100 MB | 100 MB |
| policy | 10m | 10m | 100 MB | 100 MB |
| identity | 10m | 10m | 100 MB | 100 MB |
| proxy-injector | 20m | 20m | 200 MB | 200 MB |
| license | 20m | 20m | 200 MB | 200 MB |
| autoregistration (if enabled) | 20m | 20m | 100 MB | 100 MB |
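
For reference, applying the destination row above ultimately corresponds to a standard Kubernetes container resources block along these lines. This is illustrative only: in practice these values are set through the Helm values described later in this guide, and Mi quantities are used here as the Kubernetes spelling of the approximate MB figures above.

```yaml
# Illustrative only: the container resources that the destination
# recommendation above corresponds to. 500Mi is the Kubernetes
# quantity closest to the 500 MB figure in the table.
resources:
  requests:
    cpu: 100m
    memory: 500Mi
  limits:
    cpu: 100m
    memory: 500Mi
```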

See Real-world Examples below for a demonstration of how memory and CPU consumption can grow at scale.

In contrast to the control plane, which is in many ways a standard Kubernetes application, each data plane proxy's resource consumption is a function of the application it sits in front of. Thus, resource usage in practice can vary wildly across proxies within the same cluster.

As a starting point, we recommend the following resource configuration. For new Linkerd deployments, we suggest starting with the Flexible profile until an operational history has been established. If you know your application will handle particularly small or large amounts of traffic, you may wish to start with the Lightweight or Heavyweight profiles instead.

| Profile | CPU request | CPU limit | Proxy CPU limit | Memory request | Memory limit |
|---------|-------------|-----------|-----------------|----------------|--------------|
| Flexible / generic | 100m | 1000m | unset | 20 MB | 250 MB |
| Lightweight | 10m | 10m | unset | 20 MB | 20 MB |
| Heavyweight | 1000m | 2000m | 2000m | 1000 MB | 1000 MB |

Important notes for further tuning:

  • Linkerd has its own proxy-cpu-limit configuration value. When unset (the default), the proxy will run a single worker thread and will be unable to scale beyond one core. Thus, when setting either a CPU request or limit above 1000m, you should also set the proxy-cpu-limit configuration to match the higher value so that the proxy is able to make use of multiple CPU cores (see the example below this list).
  • The Linkerd proxy-init container uses the same requests and limits as the proxy (because resources are shared) and thus does not need explicit configuration.
  • If you want to achieve Guaranteed QoS for application pods, you must set equal CPU and memory limits and requests for all containers in the pod, i.e. both proxy and application containers.

For very high-scale proxies that need to consume multiple cores, there is a relationship between worker threads, CPU cores, and the way that Kubernetes handles concurrency. We suggest reviewing Configuring Proxy Concurrency for complex cases.
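
For example, the per-workload annotations listed later in this guide can be used to apply the Heavyweight profile, including a proxy-cpu-limit above one core, to a single high-traffic workload. This is a sketch only: the workload name, image, and replica count are hypothetical and should be adjusted for your environment.

```yaml
# Sketch only: Heavyweight-style proxy resources for one high-traffic workload.
# Because the CPU limit is above 1000m, proxy-cpu-limit is set to the same
# value so the proxy can run more than one worker thread (see notes above).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-gateway                      # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ingress-gateway
  template:
    metadata:
      labels:
        app: ingress-gateway
      annotations:
        config.linkerd.io/proxy-cpu-request: "1000m"
        config.linkerd.io/proxy-cpu-limit: "2000m"
        config.linkerd.io/proxy-memory-request: "1000Mi"
        config.linkerd.io/proxy-memory-limit: "1000Mi"
    spec:
      containers:
        - name: app
          image: example.com/ingress-gateway:latest   # hypothetical image
```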

These limits can be configured in Linkerd by setting the corresponding values at control plane installation (or upgrade) time.

This includes the control plane configuration itself (all container-level resources):

  • destinationResources
  • spValidatorResources
  • policyController.resources
  • identityResources
  • proxyInjectorResources
  • heartbeatResources
  • licenseResources
  • autoregistrationResources (if enabled)
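
As a sketch of how this looks in practice, the following Helm values fragment applies the recommended starting values from the table above to the destination and policy containers. The value layout is an assumption based on the upstream Linkerd chart (cpu/memory with request/limit keys); check the configuration reference for the exact structure in your chart version. The other container-level resource keys listed above take the same shape.

```yaml
# Sketch only: container-level resources for two control plane components,
# using the recommended starting values from this guide.
destinationResources:
  cpu:
    request: 100m
    limit: 100m
  memory:
    request: 500Mi
    limit: 500Mi
policyController:
  resources:
    cpu:
      request: 10m
      limit: 10m
    memory:
      request: 100Mi
      limit: 100Mi
```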

It also includes configuration for the proxies that run alongside the control plane components (pod-level):

  • destinationProxyResources
  • identityProxyResources
  • proxyInjectorProxyResources
  • enterpriseProxyResources
  • autoregistrationProxyResources (if enabled)

There is also the global proxy configuration, which, for the proxies that run in the control plane specifically, is overridden by the values above:

  • proxy.resources
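
For instance, a values fragment that sets the Flexible profile as the global proxy default, and gives the destination controller's sidecar proxy a larger allocation, might look like the following. This is a sketch under the assumption that proxy.resources and the *ProxyResources keys follow the upstream chart's cpu/memory layout; the destination proxy values shown are examples, not recommendations from this guide.

```yaml
# Sketch only: global proxy defaults (the Flexible profile from this guide),
# plus a larger allocation for the proxy running in the destination pods.
# The control-plane-specific key overrides the global default.
proxy:
  resources:
    cpu:
      request: 100m
      limit: 1000m
    memory:
      request: 20Mi
      limit: 250Mi
destinationProxyResources:
  cpu:
    request: 200m        # example value only
    limit: 1000m
  memory:
    request: 128Mi       # example value only
    limit: 500Mi
```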

Finally, the global proxy configuration may be overridden by namespace- and pod-level annotations, allowing per-workload tuning of the data plane:

  • config.linkerd.io/proxy-cpu-limit
  • config.linkerd.io/proxy-cpu-request
  • config.linkerd.io/proxy-memory-limit
  • config.linkerd.io/proxy-memory-request
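
For example, to give every meshed workload in a single namespace a larger proxy memory allocation than the global default, you could annotate the namespace as sketched below. The namespace name and values are hypothetical, and pod-level annotations using the same keys take precedence over namespace-level ones.

```yaml
# Sketch only: namespace-level overrides of the global proxy resources.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                              # hypothetical namespace
  annotations:
    config.linkerd.io/proxy-cpu-request: "200m"
    config.linkerd.io/proxy-memory-request: "64Mi"
    config.linkerd.io/proxy-memory-limit: "500Mi"
```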

See the Linkerd control plane configuration reference for details on these variables and their CLI equivalents.

Below are some real-world examples of control plane usage in production with recent versions of Buoyant Enterprise for Linkerd, ordered from a lightly loaded environment to a heavily loaded one. The numbers in these tables represent the maximum measured CPU and memory usage for a single pod in that service.

Control plane:

| Deployment | Container name | CPU (millicores) | Memory |
|------------|----------------|------------------|--------|
| linkerd-destination | destination | 1 | 100 MB |
| linkerd-proxy-injector | proxy-injector | 1 | 72 MB |
| linkerd-identity | identity | 1 | 36 MB |
| linkerd-destination | sp-validator | 1 | 31 MB |
| linkerd-destination | policy | 1 | 28 MB |

Data plane: The most memory consumed by a single proxy in this environment was 50 MB and the most CPU consumed was 35 millicores, with the majority of proxies consuming significantly fewer resources.

Control plane:

| Deployment | Container name | CPU (millicores) | Memory |
|------------|----------------|------------------|--------|
| linkerd-destination | destination | 24 | 205 MB |
| linkerd-destination | policy | 4 | 23 MB |
| linkerd-destination | sp-validator | 2 | 41 MB |
| linkerd-proxy-injector | proxy-injector | 3 | 72 MB |
| linkerd-enterprise | license | 1 | 72 MB |
| linkerd-identity | identity | 1 | 42 MB |
| linkerd-autoregistration | autoregistration | 1 | 40 MB |

Data plane: The most memory consumed by a single proxy in this environment was 180 MB and the most CPU consumed was 250 millicores, with the majority of proxies consuming significantly fewer resources.

Control plane:

| Deployment | Container name | CPU (millicores) | Memory |
|------------|----------------|------------------|--------|
| linkerd-destination | destination | 250 | 1.6 GB |
| linkerd-destination | policy | 7 | 166 MB |
| linkerd-destination | sp-validator | 2 | 45 MB |
| linkerd-proxy-injector | proxy-injector | 15 | 291 MB |
| linkerd-identity | identity | 3 | 56 MB |

Data plane: The most memory consumed by a single proxy in this environment was 700 MB and the most CPU consumed was 650 millicores, with the majority of proxies consuming significantly fewer resources.