Kubernetes on Windows Subsystem for Linux (WSL) 2 using kind and Docker Engine
This article shows how to run a Kubernetes cluster on Windows Subsystem for Linux (WSL) 2 using kind and Docker Engine.
1. Prerequisites
- Helm CLI. See Installing Helm.
2. Install kubectl
sudo apt-get update
sudo apt-get install -y ca-certificates
sudo curl -fsSLo \
/usr/share/keyrings/kubernetes-archive-keyring.gpg \
https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
3. Install kind
Install the kind executable.
curl -fsSLo ./kind https://kind.sigs.k8s.io/dl/v0.16.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
Test the kind executable.
$ kind version
kind v0.16.0 go1.19.1 linux/amd64
4. Obtain organization’s CA certificates
If your organization uses a forward proxy that intercepts SSL traffic, the proxy may present your organization’s SSL certificates to kind when kind downloads container images from public registries. You may also want to configure kind to download container images from your organization’s secure private registries. In both cases, kind needs to be configured to trust your organization’s CA certificates. Before configuring kind, download the CA certificates.
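If you want to rehearse the steps in this section without your organization’s real CA, you can generate a throwaway self-signed CA in DER encoding. This is only a sketch for practice; the subject name and paths below are placeholders, not a real organization CA.

```shell
# Create a short-lived self-signed test CA in DER encoding.
# The subject name and paths are placeholders, not a real organization CA.
mkdir --parents /tmp/cacerts/test
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Test Root Certification Authority" \
  -keyout /tmp/cacerts/test/ca.key \
  -outform DER -out /tmp/cacerts/test/ca.cer
```

The resulting ca.cer can then be fed through the same inspection and PEM-conversion commands shown below.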
4.1. Obtain the certificate
Download the certificate.
mkdir --parents "/tmp/cacerts/org"
curl -sSfL \
"http://example.com/ca.crt" \
--output "/tmp/cacerts/org/ca.cer"
Remember to replace the URL in the above example with the URL of your organization’s CA certificate.
Verify that the correct certificate was downloaded by checking the subject name.
$ openssl x509 \
-in "/tmp/cacerts/org/ca.cer" \
-inform DER \
-subject \
-noout
subject=CN = Example Root Certification Authority (1)
1 | Verify that this name matches the name of your organization’s CA. |
4.2. Convert certificate to PEM format
openssl x509 \
-inform DER \
-in "/tmp/cacerts/org/ca.cer" \
-outform PEM \
-out "/tmp/cacerts/org/ca.pem"
Verify that the conversion succeeded.
$ openssl x509 \
-in "/tmp/cacerts/org/ca.pem" \
-inform PEM \
-subject \
-noout
subject=CN = Example Root Certification Authority (1)
1 | Verify that this name matches the name of your organization’s CA. |
5. Create a kind cluster
Create a Kubernetes cluster.
$ kind create cluster --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: c1
nodes:
- role: control-plane
image: kindest/node:v1.25.2@sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace
extraPortMappings: (1)
# Ports for ingress controller
- containerPort: 80 (2)
hostPort: 31080 (2)
listenAddress: "127.0.0.1"
protocol: TCP
- containerPort: 443
hostPort: 31443
listenAddress: "127.0.0.1"
protocol: TCP
# Ports for NodePort services
- containerPort: 30009 (3)
hostPort: 30009 (3)
listenAddress: "127.0.0.1"
protocol: TCP
extraMounts:
- hostPath: /tmp/cacerts/org/ca.pem (4)
containerPath: /etc/containerd/certs.d/org/ca.pem (4)
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
node-labels: "ingress-ready=true"
containerdConfigPatches:
- |-
[plugins."io.containerd.grpc.v1.cri".registry.configs."registry-1.docker.io".tls]
ca_file = "/etc/containerd/certs.d/org/ca.pem"(5)
- |-
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry-1.example.com:5000"]
endpoint = ["http://registry-1.example.com:5000"](6)
EOF
1 | Extra port mappings are used to forward ports from the host to kind nodes. |
2 | Traffic at port 31080 of the host machine will be forwarded to port 80 of the kind node. Typically, an ingress controller would be listening at port 80 of the kind node. |
3 | Traffic at port 30009 of the host machine will be forwarded to port 30009 of the kind node. A NodePort service may be listening at port 30009 of the kind node. |
4 | Use extraMounts to pass through organization’s CA certificate on the host to the kind node. |
5 | Configure kind to trust the SSL certificates presented by your organization’s forward proxy when the proxy intercepts requests to public image registries. |
6 | If you are using an insecure (i.e. not using SSL) image registry, configure kind to treat it as an insecure registry. |
Creating cluster "c1" ...
✓ Ensuring node image (kindest/node:v1.25.2) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-c1"
You can now use your cluster with:
kubectl cluster-info --context kind-c1
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Test the cluster.
kubectl create deployment nginx --image=nginx --port=80
kubectl create service nodeport nginx --tcp=80:80 --node-port=30009
curl http://localhost:30009
<html>
...
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
</body>
</html>
Cleanup.
kubectl delete \
service/nginx \
deployment/nginx
6. Ingress
Install NGINX ingress controller.
Go to github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/kind, select the appropriate tag, click on deploy.yaml, click on Raw, and copy the URL from the browser. Use this URL in kubectl apply.
kubectl apply \
-f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/kind/deploy.yaml
Test ingress.
kubectl apply -f - <<EOF
kind: Pod
apiVersion: v1
metadata:
name: foo-app
labels:
app: foo
spec:
containers:
- name: foo-app
image: hashicorp/http-echo:0.2.3
args: ["-text=foo"]
---
kind: Service
apiVersion: v1
metadata:
name: foo-service
spec:
selector:
app: foo
ports:
- port: 5678 # Default port used by the image
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
rules:
- http:
paths:
- path: /foo
pathType: Prefix
backend:
service:
name: foo-service
port:
number: 5678
EOF
$ curl http://localhost:31080/foo
foo (1)
1 | Expected output |
kubectl delete \
ingress/example-ingress \
service/foo-service \
pod/foo-app
7. Load balancer
Install MetalLB, a network load balancer, to assign external IPs to Kubernetes Services of type LoadBalancer.
7.1. Install MetalLB
Install MetalLB.
helm repo add \
metallb \
https://metallb.github.io/metallb
helm repo update
helm install \
metallb \
metallb/metallb --version 0.13.9 \
--namespace metallb-system \
--create-namespace \
--wait --timeout 90s \
--values <(echo '
controller:
logLevel: info
speaker:
logLevel: info
')
Verify MetalLB installation.
$ helm status metallb -n metallb-system
. . .
STATUS: deployed
. . .
NOTES:
MetalLB is now running in the cluster.
Now you can configure it via its CRs. Please refer to the metallb official docs
on how to use the CRs.
7.2. Configure MetalLB
Configure the address pool to be used by MetalLB.
The range of IP addresses provided to MetalLB must belong to the IPv4 subnet of the Docker network named kind.
Determine the subnets of the Docker network named kind.
$ docker network inspect \
kind \
-f '{{range .IPAM.Config}}{{printf "%s\n" .Subnet}}{{end}}'
172.18.0.0/16 (1)
fc00:f853:ccd:e793::/64
1 | IPv4 subnet |
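The hard-coded range used below can be derived from this subnet. A minimal sketch, assuming the usual /16 kind subnet (the first two octets vary per host, so substitute the subnet printed by docker network inspect):

```shell
# Derive a MetalLB address range from the IPv4 subnet of the Docker
# network named kind. Assumes a /16 subnet such as 172.18.0.0/16
# (the usual kind default).
SUBNET="172.18.0.0/16"
PREFIX="$(echo "${SUBNET}" | cut -d. -f1-2)"   # first two octets, e.g. 172.18
echo "${PREFIX}.255.200-${PREFIX}.255.250"     # prints 172.18.255.200-172.18.255.250
```

Picking addresses near the top of the subnet keeps the pool clear of the addresses Docker assigns to kind node containers.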
Configure the address pool to be used by MetalLB.
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool
namespace: metallb-system
spec:
addresses:
- 172.18.255.200-172.18.255.250 (1)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: empty
namespace: metallb-system
EOF
1 | These IP addresses must belong to the IPv4 subnet of the Docker network named kind . |
7.3. Test MetalLB
Create Pods and Service.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: foo-app
labels:
app: http-echo
spec:
containers:
- name: foo-app
image: hashicorp/http-echo:0.2.3
args:
- "-text=foo"
---
apiVersion: v1
kind: Pod
metadata:
name: bar-app
labels:
app: http-echo
spec:
containers:
- name: bar-app
image: hashicorp/http-echo:0.2.3
args:
- "-text=bar"
---
kind: Service
apiVersion: v1
metadata:
name: foo-bar-service
spec:
type: LoadBalancer
selector:
app: http-echo
ports:
- port: 5678 # Default port used by the image
EOF
Verify that the Service was assigned an external IP.
$ kubectl get service foo-bar-service
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)
foo-bar-service   LoadBalancer   10.96.31.92   172.18.255.200   5678:30638/TCP
Send HTTP request to the external IP of the Service.
LB_IP=$(\
kubectl get service/foo-bar-service \
-o=jsonpath='{.status.loadBalancer.ingress[0].ip}' \
)
for _ in {1..10}; do
curl ${LB_IP}:5678
done
foo
foo
foo
bar
foo
bar
bar
foo
bar
foo
Cleanup - delete the Pods and Services created for testing.
kubectl delete \
service/foo-bar-service \
pod/bar-app \
pod/foo-app
8. Dynamic storage provisioning
kind provides an out-of-the-box dynamic storage provisioner.
#118 added Rancher’s Local Path Provisioner to kind.
8.1. Test the storage provisioner
Create a PersistentVolumeClaim.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: local-path-pvc
namespace: default
spec:
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
requests:
storage: 1Mi
EOF
Create a Pod that uses the PersistentVolumeClaim.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: volume-test
namespace: default
spec:
containers:
- name: volume-test
image: nginx:stable-alpine
imagePullPolicy: IfNotPresent
volumeMounts:
- name: volv
mountPath: /data
ports:
- containerPort: 80
volumes:
- name: volv
persistentVolumeClaim:
claimName: local-path-pvc
EOF
Verify that the persistent volume is created.
$ kubectl get pv
NAME RECLAIM POLICY STATUS CLAIM STORAGECLASS
pvc-ff55783c-e0a2-48ec-827a-4d812ff91651 Delete Bound default/local-path-pvc standard
Verify that the persistent volume claim has been bound.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY STORAGECLASS
local-path-pvc Bound pvc-ff55783c-e0a2-48ec-827a-4d812ff91651 1Mi standard
Write something into the pod.
kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"
Check the volume content.
$ kubectl exec volume-test -- cat /data/test
local-path-test
Cleanup - delete the Pod and PersistentVolumeClaim created for testing.
kubectl -n default delete \
pod/volume-test \
pvc/local-path-pvc
9. Metrics Server
Install Kubernetes Metrics Server.
kubectl apply \
-f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
Test the Metrics Server.
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
c1-control-plane 214m 1% 19Mi 0%
If the Metrics Server pod does not become ready, check its log for errors. As a workaround, patch the deployment as suggested in the issue.
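On kind clusters the usual failure is that the kubelet’s serving certificate cannot be verified, and the workaround documented in the Metrics Server FAQ is to pass the --kubelet-insecure-tls flag to the metrics-server container. A sketch of such a JSON patch, assuming the default deployment name and namespace from components.yaml:

```
[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/args/-",
    "value": "--kubelet-insecure-tls"
  }
]
```

Save it to a file (for example, metrics-server-patch.json) and apply it with kubectl patch deployment metrics-server --namespace kube-system --type json --patch-file metrics-server-patch.json. Note that this disables kubelet certificate verification and is intended for local test clusters only.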