Managing external workloads
Buoyant Enterprise for Linkerd (BEL) provides opt-in automatic management capabilities for ExternalWorkload resources. This lets you bring VM workloads into the mesh with minimal effort, giving them the full security and reliability benefits of Linkerd.
Prerequisites
- A Kubernetes cluster with Buoyant Enterprise for Linkerd installed
- BEL installed with the manageExternalWorkloads value set (a minimal install sketch follows this list)
- Direct network connectivity between the VM and the cluster
- DNS on the VM configured to resolve in-cluster DNS names
- A SPIRE agent present on the VM, serving certificates rooted in the same trust bundle as Linkerd
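If external workload management is not yet enabled, the value can be set at install or upgrade time. The following is a minimal sketch, assuming BEL is installed via its control plane Helm chart; the chart and release names here are illustrative, and your other values (such as your BEL license) will differ:

# enable external workload management at install/upgrade time
# (chart and release names are illustrative; include your usual BEL values)
helm upgrade --install linkerd-control-plane \
  linkerd-buoyant/linkerd-enterprise-control-plane \
  --namespace linkerd \
  --set manageExternalWorkloads=true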
Step 1: Install the proxy harness on the VM
For the virtual machine to join the mesh, a daemon called linkerd-proxy-harness needs to be installed. First, download the package for your architecture and distribution from the Releases page, then run the following command:
apt-get -y install ./package-name
For example, to install the AMD64/Debian package, use the following command:
apt-get -y install ./linkerd-proxy-harness-enterprise-2.17.0-amd64.deb
This provisions a systemd service named linkerd-proxy-harness.
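Before moving on, you can verify that the unit was installed; these are standard systemd commands:

# confirm the unit exists (it is started later, in Step 5)
systemctl status linkerd-proxy-harness
# optionally, start the harness automatically at boot
systemctl enable linkerd-proxy-harness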
Step 2: Provision the workload group
Next, provision the workload group on the cluster by running the following command:
cat <<EOF > external-group.yaml
apiVersion: workload.buoyant.io/v1alpha1
kind: ExternalGroup
metadata:
  name: legacy-app-vm
  namespace: mixed-env
spec:
  probes:
    - failureThreshold: 1
      httpGet:
        path: /ready
        port: 80
        scheme: HTTP
        host: 127.0.0.1
      initialDelaySeconds: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
  template:
    metadata:
      labels:
        app: legacy-app
        location: vm
    ports:
      - port: 80
EOF
kubectl apply -f external-group.yaml
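To confirm the group was created, you can inspect it with kubectl. This assumes the default plural name for the ExternalGroup CRD:

kubectl get externalgroups -n mixed-env
kubectl describe externalgroup legacy-app-vm -n mixed-env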
Step 3: Configure the harness
For the harness to register itself and send readiness data, it needs to know how to connect to the control plane. Configure this with the following command (note that your control plane addresses might differ):
harnessctl set-config \
--workload-group-name=legacy-app-vm \
--workload-group-namespace=mixed-env \
--control-plane-address=linkerd-autoregistration.linkerd.svc.cluster.local.:8081 \
--control-plane-identity=linkerd-autoregistration.linkerd.serviceaccount.identity.linkerd.cluster.local
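Before continuing, it is worth checking that the VM can actually resolve and reach the autoregistration endpoint. A quick sanity check, assuming nslookup and nc are available on the VM:

# verify in-cluster DNS resolution from the VM
nslookup linkerd-autoregistration.linkerd.svc.cluster.local
# verify TCP connectivity to the autoregistration port
nc -vz linkerd-autoregistration.linkerd.svc.cluster.local 8081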
Step 4: Deploy test resources
To test harness connectivity, deploy a test workload:
cat <<EOF > test-workload.yaml
apiVersion: v1
kind: Service
metadata:
  name: test-server
spec:
  type: ClusterIP
  selector:
    app: test-server
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-server
  template:
    metadata:
      labels:
        app: test-server
      annotations:
        linkerd.io/inject: enabled
    spec:
      containers:
        - name: test-server
          image: buoyantio/bb:v0.0.5
          command: ["sh", "-c"]
          args:
            - "/out/bb terminus --h1-server-port 80 --response-text hello-from-\$POD_NAME --fire-and-forget"
          ports:
            - name: http-port
              containerPort: 80
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
EOF
kubectl apply -f test-workload.yaml
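Wait for the test workload to become ready before querying it from the VM:

kubectl rollout status deploy/test-server
kubectl get pods -l app=test-server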
Step 5: Test harness connectivity
Test that the harness can reach the service on the cluster:
systemctl start linkerd-proxy-harness
while sleep 1; do curl -s http://test-server:80/who-am-i | jq .; done
You should see output like the following:
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-824112662",
  "payload": "hello-from-test-server-6bb4854789-x4wbw"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-858574572",
  "payload": "hello-from-test-server-6bb4854789-x4wbw"
}
{
  "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-895218927",
  "payload": "hello-from-test-server-6bb4854789-x4wbw"
}
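If the requests fail, the harness logs are the first place to look; since the harness runs as a systemd service, standard journalctl tooling applies. Once the harness has registered and its readiness probe passes, an ExternalWorkload resource for the VM should also appear on the cluster (assuming the default plural name for the CRD):

# on the VM: follow the harness logs
journalctl -u linkerd-proxy-harness -f
# on the cluster: check that the VM registered as an ExternalWorkload
kubectl get externalworkloads -n mixed-env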