Using Kyverno to attach ConfigMaps to Kubernetes Pods
This article shows how to use Kyverno for attaching ConfigMaps to Kubernetes Pods.
Suppose you want to provide configuration for an application that is developed, deployed, and managed by another team. The application runs on Kubernetes and is deployed using a Helm chart. The Helm chart has no provision for mounting additional ConfigMaps into the application’s Pods, yet you need to supply extra configuration to the application. How do you do that? Kyverno can help.
1. What is Kyverno?
Kyverno is a policy engine designed for Kubernetes.
Policies are managed as Kubernetes resources and no new language is required to write policies.
Kyverno policies can validate, mutate, generate, and cleanup Kubernetes resources plus ensure OCI image supply chain security.
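Policies themselves are ordinary Kubernetes resources written in YAML. As a minimal sketch (illustrative only, not used in the rest of this article), a validate policy that requires a team label on Pods looks like this:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label        # illustrative name
spec:
  validationFailureAction: Enforce  # reject non-conforming resources
  rules:
    - name: check-team-label
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "The label team is required."
        pattern:
          metadata:
            labels:
              team: "?*"          # any non-empty value
```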
2. Prerequisites
- Docker Engine.
- A Kubernetes cluster.
- Helm CLI. See Installing Helm.
- kind CLI, if using a kind Kubernetes cluster.
I was using the following at the time of writing this article:
3. Install Kyverno
I used Kyverno 1.10.0-beta.1 for this article because it adds support for preconditions in mutate existing rules.
Install Kyverno using Helm.
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm upgrade --install \
kyverno \
kyverno/kyverno --version 3.0.0-beta.1 \
--namespace kyverno \
--create-namespace \
--set replicaCount=1 \ (1)
--set backgroundController.logging.verbosity=2 (2)
(1) One replica is sufficient for our experiments.
(2) Increase the log level (up to 6) for troubleshooting.
Verify that Kyverno Pods are running.
$ kubectl get pods -n kyverno
NAME READY STATUS
kyverno-admission-controller-7dcb877f95-nv562 1/1 Running
kyverno-background-controller-f5bcd6ff6-fjhd7 1/1 Running
kyverno-cleanup-controller-87dcb6b9-ccklq 1/1 Running
kyverno-reports-controller-86c75fd786-tg2bd 1/1 Running
4. Build and deploy the demo application
Suppose there is an application that ships the parts needed to assemble different types of vehicles. The application itself does not specialize in vehicles; all it does is take a vehicle kind as input and ship the parts needed to assemble a vehicle of that kind. The application expects the different kinds of vehicles and their parts to be provided to it as configuration. Out of the box, it does not know any vehicle kinds or their parts.
A system configurer is expected to provide the application with the different kinds of vehicles and their parts through configuration files. Since the application runs on Kubernetes, the configuration files are provided in the form of Kubernetes ConfigMaps.
However, the application’s Helm chart has no provision for mounting ConfigMaps into the application’s Pods. Instead, Kyverno is used to attach ConfigMaps to the application’s Pods.
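The convention used throughout the rest of this article: a ConfigMap opts into attachment with an attach annotation, names the target application with a targetApp annotation, and carries its configuration under a data key named after the ConfigMap itself. A sketch of the contract:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: car                # the data key below must be <name>.yaml
  annotations:
    attach: "true"         # opt into attachment by the Kyverno policies
    targetApp: shipping    # Deployments labeled app=shipping are targeted
data:
  car.yaml: |-
    car:
      engine: 1
```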
4.1. Download application source code
- Download kyverno-demo-app.tar.gz containing the source code of the demo application.
- Extract kyverno-demo-app.tar.gz:

mkdir kyverno-demo-app && \
tar -xzf kyverno-demo-app.tar.gz --directory kyverno-demo-app && \
rm kyverno-demo-app.tar.gz
4.2. Build application container image
cd kyverno-demo-app
docker build \
--file Dockerfile \
--tag shipping \
.
5. Create Kyverno policies and roles
5.1. Roles
Give Kyverno’s background controller the privilege to update Kubernetes Deployments. The labels below allow this ClusterRole to aggregate into the background controller’s role.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kyverno:background-controller:additional
labels:
app.kubernetes.io/component: background-controller
app.kubernetes.io/instance: kyverno
app.kubernetes.io/part-of: kyverno
rules:
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["update"]
EOF
5.2. Kyverno policy to attach ConfigMaps
Create a Kyverno policy to mutate the Kubernetes Deployment when a ConfigMap is created. The mutation is to mount the ConfigMap on the Pod as a volume.
When the Kubernetes Deployment is mutated, its Pods are restarted, which causes the application to load the configuration from the ConfigMap.
kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: Policy
metadata:
name: attach-configmaps
spec:
rules:
- name: attach-configmap
match:
any:
- resources:
kinds:
- ConfigMap
preconditions:
all:
- key: '{{request.object.metadata.annotations."attach"}}'
operator: "Equals"
value: "true"
- key: "{{request.operation || 'BACKGROUND'}}"
operator: "AnyIn"
value:
- CREATE
mutate:
targets:
- apiVersion: apps/v1
kind: Deployment
preconditions:
all:
- key: "{{ target.metadata.labels.app || '' }}"
operator: Equals
value: "{{ request.object.metadata.annotations.targetApp }}"
- key: "{{ target.spec.template.spec.volumes[].name || '' | contains(@, 'all-configs') }}"
operator: Equals
value: false
patchStrategicMerge:
spec:
template:
spec:
volumes:
- name: all-configs
projected:
sources:
- configMap:
name: "{{request.object.metadata.name}}"
items:
- key: "{{request.object.metadata.name}}.yaml"
path: "{{request.object.metadata.name}}.yaml"
containers:
- name: "main"
volumeMounts:
- name: "all-configs"
mountPath: "/run/config"
EOF
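Assuming the target Deployment is labeled app: shipping (matching the ConfigMap’s targetApp annotation) and its container is named main, creating an annotated ConfigMap named car should leave the Deployment’s Pod template looking roughly like this:

```yaml
spec:
  template:
    spec:
      volumes:
        - name: all-configs
          projected:
            sources:
              - configMap:
                  name: car
                  items:
                    - key: car.yaml
                      path: car.yaml
      containers:
        - name: main
          volumeMounts:
            - name: all-configs
              mountPath: /run/config  # car.yaml appears as /run/config/car.yaml
```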
5.3. Kyverno policy to reload modified ConfigMaps
Create a Kyverno policy to mutate the Kubernetes Deployment when the attached ConfigMap is modified. The mutation is to modify the Pod spec, which causes the Pod to be restarted. Upon restart, the Pod loads the modified configuration from the ConfigMap.
kubectl apply -f - <<EOF
apiVersion: kyverno.io/v1
kind: Policy
metadata:
name: reload-configmaps
spec:
rules:
- name: restart-pods
match:
any:
- resources:
kinds:
- ConfigMap
preconditions:
all:
- key: '{{request.object.metadata.annotations."attach"}}'
operator: "Equals"
value: "true"
- key: "{{request.operation || 'BACKGROUND'}}"
operator: "AnyIn"
value:
- UPDATE
mutate:
targets:
- apiVersion: apps/v1
kind: Deployment
preconditions:
all:
- key: "{{ target.metadata.labels.app || '' }}"
operator: Equals
value: "{{ request.object.metadata.annotations.targetApp }}"
- key: "{{ target.spec.template.spec.volumes[].name | contains(@, 'all-configs') }}"
operator: Equals
value: true
patchStrategicMerge:
spec:
template:
metadata:
annotations:
pod-restart-random: "{{ random('[0-9a-z]{8}') }}" (1)
EOF
(1) When this annotation’s value changes, the Pod spec changes, which causes the Pod to be restarted.
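After an attached ConfigMap is updated, the target Deployment’s Pod template should carry the annotation with a fresh value, which is what triggers the rollout. Roughly (the value shown is an illustrative random string):

```yaml
spec:
  template:
    metadata:
      annotations:
        pod-restart-random: "k3x9p2qa"  # regenerated on every ConfigMap update
```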
5.4. Kyverno policy to detach ConfigMaps
Create a Kyverno policy to mutate the Kubernetes Deployment when the attached ConfigMap is deleted. The mutation is to delete the volume corresponding to the ConfigMap.
kubectl apply -f - <<-"EOF"
apiVersion: kyverno.io/v1
kind: Policy
metadata:
name: detach-configmaps
spec:
schemaValidation: false (1)
rules:
- name: detach-configmap
match:
any:
- resources:
kinds:
- ConfigMap
preconditions:
all:
- key: "{{request.operation || 'BACKGROUND'}}"
operator: "AnyIn"
value:
- DELETE
- key: '{{request.object.metadata.annotations."attach"}}'
operator: "Equals"
value: "true"
mutate:
targets:
- apiVersion: apps/v1
kind: Deployment
preconditions:
all:
- key: "{{ target.metadata.labels.app || '' }}"
operator: Equals
value: "{{ request.object.metadata.annotations.targetApp }}"
- key: "{{ target.spec.template.spec.volumes[].name | contains(@, 'all-configs') }}"
operator: Equals
value: true
patchStrategicMerge:
spec:
template:
spec:
volumes:
- name: all-configs
$patch: delete
containers:
- name: main
volumeMounts:
- name: all-configs
$patch: delete
EOF
(1) Avoids the following error when applying the policy:
admission webhook "validate-policy.kyverno.svc" denied the request: mutate result violates resource schema: ValidationError(io.k8s.api.apps.v1.Deployment.spec.template.spec.volumes[0]): unknown field "$patch" in io.k8s.api.core.v1.Volume
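$patch: delete is a strategic-merge-patch directive: instead of merging the list entry, Kubernetes removes the existing entry whose merge key (name, for volumes and volumeMounts) matches. A sketch of its effect on the volumes list (other-volume is a hypothetical pre-existing volume):

```yaml
# Before the patch:
volumes:
  - name: all-configs   # added earlier by the attach-configmaps policy
    projected:
      sources: [...]
  - name: other-volume  # untouched by the patch
# After the patch:
volumes:
  - name: other-volume
```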
6. Test
6.1. Zero configuration
Verify that the application knows no configuration for vehicle kind car, when there is no ConfigMap that provides such a configuration.
vehicle_kind="car"
ingress_port="31080"
curl \
"http://shipping:${ingress_port}//ship-parts/${vehicle_kind}" \
--resolve "shipping:${ingress_port}:127.0.0.1" \
--location
No parts found for vehicle kind car
6.2. ConfigMap created
Create a ConfigMap that provides configuration for vehicle kind car.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: car
annotations:
attach: "true"
targetApp: shipping
data:
car.yaml: |-
car:
engine: 1
EOF
Verify that the application has received the configuration for vehicle kind car.
vehicle_kind="car"
ingress_port="31080"
curl \
"http://shipping:${ingress_port}/ship-parts/${vehicle_kind}" \
--resolve "shipping:${ingress_port}:127.0.0.1" \
--location
engine: 1
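If the attachment worked, the projected volume makes the ConfigMap’s car.yaml key visible inside the main container; given the policy’s mountPath of /run/config, the container should see a file like this:

```yaml
# /run/config/car.yaml inside the "main" container:
car:
  engine: 1
```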
6.3. ConfigMap modification
Modify the ConfigMap that provides configuration for vehicle kind car.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: car
annotations:
attach: "true"
targetApp: shipping
data:
car.yaml: |-
car:
engine: 1
wheels: 4 (1)
EOF
(1) This is the modification.
Verify that the application has received the modified configuration for vehicle kind car.
vehicle_kind="car"
ingress_port="31080"
curl \
"http://shipping:${ingress_port}/ship-parts/${vehicle_kind}" \
--resolve "shipping:${ingress_port}:127.0.0.1" \
--location
engine: 1
wheels: 4 (1)
(1) This is the modification.
Modify the ConfigMap again.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: car
annotations:
attach: "true"
targetApp: shipping
data:
car.yaml: |-
car:
engine: 1
wheels: 4
doors: 4 (1)
EOF
(1) This is the modification.
Verify that the application has received the modified configuration for vehicle kind car.
vehicle_kind="car"
ingress_port="31080"
curl \
"http://shipping:${ingress_port}/ship-parts/${vehicle_kind}" \
--resolve "shipping:${ingress_port}:127.0.0.1" \
--location
doors: 4 (1)
engine: 1
wheels: 4
(1) This is the modification.
6.4. ConfigMap deletion
Delete the ConfigMap that provided configuration for vehicle kind car.
kubectl delete configmap car
Verify that the application has lost the configuration for vehicle kind car.
vehicle_kind="car"
ingress_port="31080"
curl \
"http://shipping:${ingress_port}/ship-parts/${vehicle_kind}" \
--resolve "shipping:${ingress_port}:127.0.0.1" \
--location
No parts found for vehicle kind car
7. Cleanup
7.1. Demo application
helm uninstall shipping
kubectl delete configmap/car
docker exec \
-it \
c1-control-plane \ (1)
crictl rmi docker.io/library/shipping:latest
docker rmi shipping:latest
rm -Rf kyverno-demo-app
(1) Replace c1-control-plane with the name of the node in your kind cluster.
Appendix A: Installing Kyverno CLI
Install Kyverno CLI.
kyverno_version="1.10.0-beta.1"
kernel_name="$(uname --kernel-name | tr '[:upper:]' '[:lower:]')"
machine_hardware="$(uname --machine)"
archive_name="kyverno-cli_v${kyverno_version}_${kernel_name}_${machine_hardware}.tar.gz"
curl -L -O \
https://github.com/kyverno/kyverno/releases/download/v${kyverno_version}/${archive_name}
curl -L \
https://github.com/kyverno/kyverno/releases/download/v${kyverno_version}/checksums.txt \
-o kyverno-checksums.txt
sha256sum --check --ignore-missing kyverno-checksums.txt && \
tar -xvf "${archive_name}" && \
sudo mv kyverno /usr/local/bin/
rm "${archive_name}" kyverno-checksums.txt
Test Kyverno CLI.
$ kyverno version
Version: 1.10.0-beta.1
Time: 2023-05-11T09:24:40Z
Git commit ID: 8a350ab5cb1f2af024510ae2c2ee723efb8b964c