Migrating to BEL's lifecycle automation

In this guide, we’ll walk you through how to migrate your Linkerd installation to take advantage of BEL’s lifecycle automation capabilities. To complete this migration, you will need:

  • A Kubernetes cluster with Linkerd installed via Helm or the linkerd CLI
  • Helm installed on your local machine
  • The BUOYANT_LICENSE environment variable set and a functioning BEL CLI (see the sketch after this list)
  • The base64 CLI, for decoding TLS certificates in Step 1
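
If you haven't yet set the license or installed the BEL CLI, a minimal setup sketch looks like the following (the install script URL and PATH location follow Buoyant's standard onboarding flow; substitute your own license key):

# Set your BEL license key (replace with your actual key)
export BUOYANT_LICENSE=<your-license-key>

# Install the BEL CLI and add it to your PATH
curl --proto '=https' --tlsv1.2 -sSfL https://enterprise.buoyant.io/install | sh
export PATH=$HOME/.linkerd2/bin:$PATH

# Confirm the CLI works
linkerd version --client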

Step 1: Migrating a CLI install to Helm

If your Linkerd installation was installed via the linkerd CLI, you’ll need to move it to a Helm install of the latest BEL version before managing it with the lifecycle automation operator. If your Linkerd installation was installed via Helm, you can skip to Step 2.

The Helm charts used in this step are hosted in the linkerd-buoyant Helm repo, which can be added or updated as follows:

helm repo add linkerd-buoyant https://helm.buoyant.cloud
helm repo update linkerd-buoyant

Start by adding the required labels and annotations to Linkerd’s CRDs:

kubectl label crds -l linkerd.io/control-plane-ns=linkerd app.kubernetes.io/managed-by=Helm
kubectl annotate crds -l linkerd.io/control-plane-ns=linkerd \
  meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd
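
You can spot-check that this metadata landed by inspecting one of Linkerd's CRDs; for example, serviceprofiles.linkerd.io:

# The CRD should now carry the app.kubernetes.io/managed-by=Helm label
kubectl get crd serviceprofiles.linkerd.io --show-labels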

Then use the helm command to move these resources to be managed by BEL’s linkerd-enterprise-crds chart:

helm install linkerd-crds -n linkerd linkerd-buoyant/linkerd-enterprise-crds

Next, add the required labels and annotations to all control plane resources:

kubectl label clusterrole,clusterrolebinding,configmap,cronjob,deployment,mutatingwebhookconfiguration,namespace,role,rolebinding,secret,service,serviceaccount,validatingwebhookconfiguration \
  -A -l linkerd.io/control-plane-ns=linkerd \
  app.kubernetes.io/managed-by=Helm
kubectl annotate clusterrole,clusterrolebinding,configmap,cronjob,deployment,mutatingwebhookconfiguration,namespace,role,rolebinding,secret,service,serviceaccount,validatingwebhookconfiguration \
  -A -l linkerd.io/control-plane-ns=linkerd \
  meta.helm.sh/release-name=linkerd-control-plane meta.helm.sh/release-namespace=linkerd
kubectl -n linkerd label role/ext-namespace-metadata-linkerd-config \
  app.kubernetes.io/managed-by=Helm
kubectl -n linkerd annotate role/ext-namespace-metadata-linkerd-config \
  linkerd.io/control-plane-ns=linkerd meta.helm.sh/release-name=linkerd-control-plane meta.helm.sh/release-namespace=linkerd
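
As a quick sanity check, confirm that a core control plane resource now carries the expected metadata; for example, on the linkerd-destination deployment:

# The output should include meta.helm.sh/release-name: linkerd-control-plane
kubectl -n linkerd get deploy/linkerd-destination -o yaml | grep meta.helm.sh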

Then use the helm command to move these resources to be managed by BEL’s linkerd-enterprise-control-plane chart.

To complete this step, you will need the certificate files that you used to install your control plane. If you didn’t provide them when installing the control plane, you can retrieve them using kubectl (on older versions of macOS, use base64 -D in place of base64 -d):

kubectl -n linkerd get cm/linkerd-identity-trust-roots -ojsonpath='{.data.ca-bundle\.crt}' > ca.crt
kubectl -n linkerd get secret/linkerd-identity-issuer -ojsonpath='{.data.crt\.pem}' | base64 -d > issuer.crt
kubectl -n linkerd get secret/linkerd-identity-issuer -ojsonpath='{.data.key\.pem}' | base64 -d > issuer.key
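
Before reinstalling, it’s worth verifying that the retrieved files form a valid chain. Assuming the issuer certificate was signed directly by the trust anchor, a quick openssl check looks like:

# The issuer certificate should validate against the trust anchor
openssl verify -CAfile ca.crt issuer.crt

# Confirm the issuer certificate hasn't expired
openssl x509 -in issuer.crt -noout -enddate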

Then run the Helm install:

helm install linkerd-control-plane \
  -n linkerd \
  --set license=$BUOYANT_LICENSE \
  --set-file linkerd-control-plane.identityTrustAnchorsPEM=ca.crt \
  --set-file linkerd-control-plane.identity.issuer.tls.crtPEM=issuer.crt \
  --set-file linkerd-control-plane.identity.issuer.tls.keyPEM=issuer.key \
  linkerd-buoyant/linkerd-enterprise-control-plane

Your Linkerd installation on this cluster is now managed via Helm, which you can verify by running:

helm list -n linkerd

Which should output something like:

NAME                 	NAMESPACE	REVISION	...	APP VERSION
linkerd-control-plane	linkerd  	1       	...	enterprise-2.15.6
linkerd-crds         	linkerd  	1       	...	enterprise-2.15.6
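
You can also confirm that the control plane itself is healthy:

linkerd check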

Step 2: Install lifecycle automation components

We’re now ready to install BEL’s lifecycle automation components, which will automate installation and upgrades of BEL.

If you didn’t do this as part of the previous step, add/update the linkerd-buoyant Helm repo:

helm repo add linkerd-buoyant https://helm.buoyant.cloud
helm repo update

Then install the BEL lifecycle automation operator itself:

helm install linkerd-buoyant \
  --create-namespace \
  --namespace linkerd-buoyant \
  --set buoyantCloudEnabled=false \
  --set license=$BUOYANT_LICENSE \
  linkerd-buoyant/linkerd-buoyant
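
You can watch the operator pods come up before running the check below (pod names vary by chart version):

kubectl -n linkerd-buoyant get pods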

Validate that the operator was installed successfully by running:

linkerd buoyant check

Step 3: Create the ControlPlane resource

We’re now ready to create the ControlPlane resource that’s used to manage the Linkerd installation on your cluster.

To create this resource automatically, use the import-helm-config subcommand:

linkerd buoyant controlplane import-helm-config | sh
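
The subcommand emits commands that are executed by sh; if you’d like to review them before they run, invoke it without the pipe first:

# Inspect the generated output before executing it
linkerd buoyant controlplane import-helm-config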

Validate that it worked by viewing the status of the resource that you just created:

kubectl get controlplane/linkerd-control-plane

Which should output something like:

NAME                    STATUS     DESIRED             CURRENT             AGE
linkerd-control-plane   UpToDate   enterprise-2.15.6   enterprise-2.15.6   40s

Step 4: Create DataPlane resources

You can also use lifecycle automation to manage the data plane on your cluster, separate from the control plane. This requires creating instances of the DataPlane resource in each namespace where Linkerd proxies are running on your cluster.

For instance, to opt in proxies in the default namespace:

kubectl apply -f - <<EOF
apiVersion: linkerd.buoyant.io/v1alpha1
kind: DataPlane
metadata:
  name: dp-update
  namespace: default
spec:
  workloadSelector:
    matchLabels: {}
EOF
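
An empty matchLabels selector opts in every meshed workload in the namespace. To manage only a subset, you can give the selector concrete labels instead; for example (the app: web label here is hypothetical, standing in for a label your workloads actually carry):

kubectl apply -f - <<EOF
apiVersion: linkerd.buoyant.io/v1alpha1
kind: DataPlane
metadata:
  name: dp-update-web
  namespace: default
spec:
  workloadSelector:
    matchLabels:
      app: web
EOF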

Validate that it worked by viewing the status of the resource that you just created:

kubectl -n default get dataplane/dp-update

Which should output something like:

NAME        STATUS     DESIRED   CURRENT   AGE
dp-update   UpToDate   3         3         100s
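
To see which proxy version each pod in the namespace is actually running, one option is to list the proxy container images directly:

# Print each pod's linkerd-proxy image, if it has one
kubectl -n default get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[?(@.name=="linkerd-proxy")].image}{"\n"}{end}'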

Congratulations! You are now successfully running the BEL lifecycle automation components on your cluster. For more information about using these components, see the Lifecycle automation reference page.