Migrating to BEL's lifecycle automation

In this guide, we’ll walk you through how to migrate your Linkerd installation to take advantage of BEL’s lifecycle automation capabilities. To complete this migration, you will need:

  1. A functioning Kubernetes cluster with Linkerd installed via Helm or the linkerd CLI
  2. Helm installed on your local machine
  3. The base64 CLI, if you need to decode TLS certificates when moving your control plane to a Helm install

BEL requires a valid license key to run, which is available through the Buoyant portal. Following the instructions there, you should end up with an environment variable like this:

export BUOYANT_LICENSE=[LICENSE]

The commands below assume that you have this environment variable set.
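
If you want to fail fast when the variable is missing, a quick sanity check (a minimal sketch; any POSIX shell will do) is:

# Exits with an error message if BUOYANT_LICENSE is unset or empty
: "${BUOYANT_LICENSE:?BUOYANT_LICENSE is not set}"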

The first step is to download and install the BEL CLI:

curl --proto '=https' --tlsv1.2 -sSfL https://enterprise.buoyant.io/install | sh

Follow the instructions to add the linkerd CLI to your system path.
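
For example, if the installer placed the CLI in its default location (typically ~/.linkerd2/bin; the install script prints the exact path), you can add it to your PATH like this:

export PATH=$HOME/.linkerd2/bin:$PATH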

Verify that the CLI is installed and running the expected version with:

linkerd version --client

You should see:

Client version: enterprise-2.15.2

If your Linkerd installation was installed via the linkerd CLI, you’ll need to move it to a Helm install of the latest BEL version before the lifecycle automation operator can manage it. If it was installed via Helm, you can skip ahead to installing the lifecycle automation operator below.
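
If you’re not sure how Linkerd was installed, one quick check is to list Helm releases in the linkerd namespace; if nothing is returned, the installation is most likely CLI-managed:

helm list -n linkerd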

The Helm charts used in this step are hosted in the linkerd-buoyant Helm repo, which can be added/updated as follows:

helm repo add linkerd-buoyant https://helm.buoyant.cloud
helm repo update linkerd-buoyant

Start by adding the required labels and annotations to Linkerd’s CRDs:

kubectl label crds -l linkerd.io/control-plane-ns=linkerd app.kubernetes.io/managed-by=Helm
kubectl annotate crds -l linkerd.io/control-plane-ns=linkerd \
  meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd
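
You can spot-check that the labels landed before proceeding (a quick sanity check; --show-labels prints each CRD’s labels):

kubectl get crds -l linkerd.io/control-plane-ns=linkerd --show-labels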

Then use the helm command to move these resources to be managed by BEL’s linkerd-enterprise-crds chart:

helm install linkerd-crds -n linkerd linkerd-buoyant/linkerd-enterprise-crds

Next add the required labels and annotations to all control plane resources:

kubectl label clusterrole,clusterrolebinding,configmap,cronjob,deployment,mutatingwebhookconfiguration,namespace,role,rolebinding,secret,service,serviceaccount,validatingwebhookconfiguration \
  -A -l linkerd.io/control-plane-ns=linkerd \
  app.kubernetes.io/managed-by=Helm
kubectl annotate clusterrole,clusterrolebinding,configmap,cronjob,deployment,mutatingwebhookconfiguration,namespace,role,rolebinding,secret,service,serviceaccount,validatingwebhookconfiguration \
  -A -l linkerd.io/control-plane-ns=linkerd \
  meta.helm.sh/release-name=linkerd-control-plane meta.helm.sh/release-namespace=linkerd
kubectl -n linkerd label role/ext-namespace-metadata-linkerd-config \
  app.kubernetes.io/managed-by=Helm
kubectl -n linkerd annotate role/ext-namespace-metadata-linkerd-config \
  linkerd.io/control-plane-ns=linkerd meta.helm.sh/release-name=linkerd-control-plane meta.helm.sh/release-namespace=linkerd
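
As a sanity check, you can inspect one of the control plane resources, for example the linkerd-destination deployment, to confirm the Helm metadata is in place:

kubectl -n linkerd get deploy/linkerd-destination -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'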

These resources will then be adopted by BEL’s linkerd-enterprise-control-plane chart when you run the Helm install below.

To complete this step, you will need the certificate files that were used to install your control plane. If you don’t have them on hand (for example, if they were generated for you at install time), you can retrieve them from the cluster using kubectl:

kubectl -n linkerd get cm/linkerd-identity-trust-roots -ojsonpath='{.data.ca-bundle\.crt}' > ca.crt
kubectl -n linkerd get secret/linkerd-identity-issuer -ojsonpath='{.data.crt\.pem}' | base64 --decode > issuer.crt
kubectl -n linkerd get secret/linkerd-identity-issuer -ojsonpath='{.data.key\.pem}' | base64 --decode > issuer.key
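
Before running the install, it can be worth confirming that the retrieved certificates look sane, for example with openssl (assuming it is available locally):

openssl x509 -in ca.crt -noout -subject -enddate
openssl x509 -in issuer.crt -noout -subject -enddate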

Then run the Helm install:

helm install linkerd-control-plane \
  -n linkerd \
  --set license=$BUOYANT_LICENSE \
  --set-file linkerd-control-plane.identityTrustAnchorsPEM=ca.crt \
  --set-file linkerd-control-plane.identity.issuer.tls.crtPEM=issuer.crt \
  --set-file linkerd-control-plane.identity.issuer.tls.keyPEM=issuer.key \
  linkerd-buoyant/linkerd-enterprise-control-plane

Your Linkerd installation on this cluster is now managed via Helm, which you can verify by running:

helm list -n linkerd

Which should output something like:

NAME                 	NAMESPACE	REVISION	...	APP VERSION
linkerd-control-plane	linkerd  	1       	...	enterprise-2.15.2
linkerd-crds         	linkerd  	1       	...	enterprise-2.15.2

We’re now ready to install BEL’s lifecycle automation components, which will automate installation and upgrades of BEL.

If you didn’t do this as part of the previous step, add/update the linkerd-buoyant Helm repo:

helm repo add linkerd-buoyant https://helm.buoyant.cloud
helm repo update

Then install the BEL lifecycle automation operator itself:

helm install linkerd-buoyant \
  --create-namespace \
  --namespace linkerd-buoyant \
  --set buoyantCloudEnabled=false \
  --set license=$BUOYANT_LICENSE \
  linkerd-buoyant/linkerd-buoyant

Validate that the operator was installed successfully by running:

linkerd buoyant check
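
You can also confirm the operator’s pods are up directly with kubectl:

kubectl -n linkerd-buoyant get pods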

We’re now ready to create the ControlPlane resource that’s used to manage the Linkerd installation on your cluster.

To create this resource automatically, use the import-helm-config subcommand:

linkerd buoyant controlplane import-helm-config | sh
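
If you’d prefer to review what this will do before it runs, drop the pipe to sh and inspect the generated commands first:

linkerd buoyant controlplane import-helm-config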

Validate that it worked by viewing the status of the resource that you just created:

kubectl get controlplane/cp-update

Which should output something like:

NAME        STATUS     DESIRED             CURRENT             AGE
cp-update   UpToDate   enterprise-2.15.2   enterprise-2.15.2   40s
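
If the status doesn’t reach UpToDate, kubectl describe will show the resource’s conditions and events, which usually point at the problem:

kubectl describe controlplane/cp-update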

You can also use lifecycle automation to manage the data plane on your cluster, separate from the control plane. This requires creating instances of the DataPlane resource in each namespace where Linkerd proxies are running on your cluster.

For instance, to opt in proxies in the default namespace:

kubectl apply -f - <<EOF
apiVersion: linkerd.buoyant.io/v1alpha1
kind: DataPlane
metadata:
  name: dp-update
  namespace: default
spec:
  workloadSelector:
    matchLabels: {}
EOF
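
The empty matchLabels selector above opts in every workload in the namespace. To limit lifecycle automation to specific workloads, you can narrow the selector; for example (the app: my-app label here is purely illustrative):

spec:
  workloadSelector:
    matchLabels:
      app: my-app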

Validate that it worked by viewing the status of the resource that you just created:

kubectl -n default get dataplane/dp-update

Which should output something like:

NAME        STATUS     DESIRED   CURRENT   AGE
dp-update   UpToDate   3         3         100s
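
To confirm that the proxies themselves are running the expected version, one option is to read the linkerd.io/proxy-version annotation that the proxy injector adds to each meshed pod:

kubectl -n default get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.linkerd\.io/proxy-version}{"\n"}{end}'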

Congratulations! You are now successfully running the BEL lifecycle automation components on your cluster. For more information about using these components, see the Lifecycle automation reference page.