Kyverno (Greek for “govern”) is a policy engine explicitly designed for Kubernetes.
It was created by Nirmata and is currently a CNCF sandbox project.
Kyverno can be used to validate, mutate, and generate Kubernetes configurations.
When Kyverno is deployed in a cluster, it creates validating and mutating webhooks and runs as an admission controller inside the cluster.
This means that any request issued to the Kubernetes API is first intercepted and run through Kyverno, which can check, mutate, or validate it.
The great thing about Kyverno is that it defines policies as Kubernetes custom resources, so every policy can simply be written in a YAML file, without learning a complex language or query syntax.
Defining policies in Kyverno is easy, and you will find that it is very similar to any other Kubernetes object definition.
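To give you a feel for the shape before we dive in, here is a minimal sketch of a Kyverno policy (the names are placeholders; a complete rule would declare exactly one of validate, mutate, or generate):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy        # cluster-wide; use kind: Policy to scope it to a single namespace
metadata:
  name: example-policy     # placeholder name
spec:
  rules:
    - name: example-rule   # placeholder name
      match:
        resources:
          kinds:
            - Pod          # the resource kinds this rule applies to
      # a complete rule declares exactly one of: validate, mutate, or generate
```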
Unlike OPA Gatekeeper, Kyverno is much simpler to use and get started with; however, it’s worth mentioning that OPA Gatekeeper can give you much finer control if that’s what you are after.
This article will cover all the policy types Kyverno offers and how to apply them to your cluster. But first, let’s install Kyverno.
Installing Kyverno can be done directly by applying the manifests or through the official Helm chart.
This article will explain the installation steps using Helm.
Before installing, it’s a good idea to assess and evaluate how you want the security and policy enforcement to be run inside your cluster.
The two suggested settings to consider before installing are the podSecurityStandard and validationFailureAction.
The podSecurityStandard setting controls the Pods’ security profile, based on the Kubernetes Pod Security Standards (the profiles that replace the deprecated Pod Security Policies).
You can choose from the three available profiles: privileged, baseline, and restricted.
By default, if no option is specified when installing Kyverno with Helm, the Baseline profile will be selected.
The validationFailureAction setting has two options: enforce or audit.
When set to enforce, a request that violates the configured pod security policies will be blocked.
In audit mode, the resource request will be allowed, and the policy violation will be logged. Audit is the default mode if not specified otherwise.
Add the Helm repository and run an update:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/3d9f527a33450e58614ee0b27fa8cda1.js</p>
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/8bff9dff9f2b245262418b453270502b.js</p>
It’s best to install Kyverno in its own namespace. As for the pod security options, set them based on the level of security you want to achieve.
From the Helm chart, we will go with the defaults for the policy settings.
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/660539b1b09ccee842faadf875d6069a.js</p>
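If you instead wanted, say, the restricted profile in enforce mode, the install could look something like this (assuming your chart version exposes podSecurityStandard and validationFailureAction as top-level values):

```shell
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace \
  --set podSecurityStandard=restricted \
  --set validationFailureAction=enforce
```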
If the install is successful, you will be greeted by the install notes and an output from the policy setup.
“We have installed the "baseline" profile of Pod Security Standards and set them in audit mode.”
You can check on the Kyverno pod with:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/309f9e85626e6acbc7c932154cf6afd0.js</p>
And you can list the installed baseline profile policies, which Kyverno exposes as ClusterPolicy resources:
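```shell
# Kyverno policies are ClusterPolicy resources (short name: cpol)
kubectl get clusterpolicies
```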
In Kubernetes, there are two types of admission webhooks: mutating and validating.
When Kyverno is installed as an admission controller, however, it offers one additional option alongside the validating and mutating webhooks: resource Generation, which can be used to generate configuration based on some event.
We will explain all three types so you can better understand what they do, when they are used, and why.
When a request is sent, a validation policy checks it in the admission controller and, based on the binary result, a response is sent back: accepted or rejected.
For example, you can create a configuration validation policy so that images in your cluster are pulled only from allowed registries. In audit mode the violation is logged; in enforce mode the request is blocked if it breaks the policy rule.
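A minimal sketch of such a registry policy could look like this (the policy name and registry URL are placeholders):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries   # placeholder name
spec:
  validationFailureAction: enforce
  rules:
    - name: allowed-registries
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Images must come from the approved registry."
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"   # placeholder registry
```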
When a request is sent to the Kubernetes API and the object doesn’t conform to a Kyverno mutation policy, it is mutated and adjusted before being applied and written to the etcd database.
Mutating webhooks can also act as validating ones: a request that doesn’t follow the policy rule isn’t rejected, but modified to fit the policy. That way, the effective “validation” passes, and the object is stored in the database.
With the mutation webhook, there are three ways to modify a request: a JSON patch, the strategic merge patch that Kubernetes uses, or a conditional describing the desired end state.
For example, you may want every Deployment to carry a specific label. Even if you forget to add it before applying the configuration, the mutating webhook will add it for you before the resource is created.
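As an illustration of the strategic merge approach, a sketch like the following would add a team label only when it is absent (the label key and value are hypothetical; +(…) is Kyverno’s add-if-not-present anchor):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-team-label   # hypothetical policy name
spec:
  rules:
    - name: add-team-label
      match:
        resources:
          kinds:
            - Deployment
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              +(team): platform   # added only if a team label is missing
```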
As mentioned before, Kyverno can generate new configuration based on some event in the cluster.
This type of policy is helpful if you’d like some default configuration to be set whenever other resources are created or updated.
Later, an example will show how this type of policy can generate a LimitRange for each created namespace.
Enough with the theory. Let’s have a look at each policy type and test them out.
Consider the following validation policy:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/4c46484c0b4ad7a8524ecd617dfef3b1.js</p>
You’ve probably already guessed what it does and what the options are, but let’s explain the fields and their values.
The object kind, ClusterPolicy, makes this a cluster-scoped policy, meaning every Deployment in every namespace will be affected.
For validationFailureAction, the options are audit or enforce. Here it is set to enforce: you want the request blocked if the Deployment doesn’t have the proper labeling.
Under the rules, you define what to match the policy against and what you want to validate.
In this case, you will be matching a resource of kind Deployment, and the validation will be checking for the label company=squadcast.
If this label isn’t set, the request will be blocked, and the admission controller will display a message to inform you of the reason.
Now save the file and apply it to the cluster:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/8835e1a93cf3f2dc8925b19f9cde09f1.js</p>
Once the policy is applied, try to create a Deployment without specifying a label.
You can quickly test this using the imperative approach:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/3ef509a055f67bb7263a8c8c919460be.js</p>
Success! The request didn’t pass, since the required company label wasn’t specified.
Try again, but this time add the company=squadcast label:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/a44fbb3f5951ddf2862588edfd682bd2.js</p>
Save it and apply it:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/8eb48c780a05148e5c3d3f51b1b9902b.js</p>
This time the Deployment passed. Excellent!
You can take a look at other validating policy examples here.
You can take the Validating policy example above and change it so that it mutates the request, adding the required label automatically if it’s missing.
If you want to append a label to Deployments while applying or updating an object, you can do so with the following policy.
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/79af35b2f4ff4d1d3b9f518b961f804d.js</p>
Notice something missing?
The validationFailureAction field isn’t needed anymore, because if the required label is missing, the webhook will simply mutate the request and admit it. You’ll also notice that no message is displayed.
Under rules, instead of validate, we now have a mutate field that describes the patch applied when the Deployment doesn’t contain the required label. This type of patching uses the JSONPatch method.
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/4855e708e49bc9e5982c39b09b78daa6.js</p>
Run the previous command once again to verify that the label is added automatically.
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/35c9abf93a2af19a7ba9959eba5773aa.js</p>
And verify:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/600259e25e00968083f0a855aa0b0e4a.js</p>
You can also check the Kyverno logs to see which action was taken.
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/5be1021dfba7d44c7d9f9a4318d8272c.js</p>
There it is: a successful mutate action on the Deployment mutate-test, under the policy add-deployment-label.
Finally, let’s check out the third option that Kyverno offers: generation policies.
With the Generating policy, you can generate configurations based on specific object creation or updates.
Examples would be setting up default limits or requests on namespace creation, adding labels, resource quotas, network policies, etc.
In this example, you will create a policy that automatically generates a LimitRange on namespace creation.
In short, a LimitRange defines specific requests and limits, tied to a namespace and based on CPU, memory, or storage. The LimitRanger admission controller enforces the limits specified in it.
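For reference, a standalone LimitRange object looks something like this (the values are purely illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
    - type: Container
      default:            # limits applied when a container sets none
        cpu: 500m
        memory: 256Mi
      defaultRequest:     # requests applied when a container sets none
        cpu: 200m
        memory: 128Mi
```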
The Generating policy in Kyverno for Limit Ranges looks something like this:
It looks a bit lengthy, but there isn’t anything complex about it.
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/2604b35dee5942e4fb3b7acfdedb410d.js</p>
Some of the fields are already familiar to you, so let’s explain the new ones.
The rule type is now generate, and the policy applies to Namespaces.
In the exclude section, you define the namespaces you don’t want affected by the policy.
Under generate, you define your LimitRange object.
The synchronize field set to true means that the generated object is protected from direct changes and will be restored to its original content as long as the Kyverno policy is in place.
Another nice thing about the synchronization option is that any change to the policy is automatically propagated to every namespace where the LimitRange object exists.
There is no need to copy it everywhere; Kyverno does this for you automatically.
Deleting the Kyverno policy will also delete the LimitRange object.
Finally, under generate.data.spec, you define the actual LimitRange limits and requests that will apply per container.
Apply the policy with:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/4a20d077478c110fdc7e3be3f8348024.js</p>
And create a new namespace:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/0f13f14cb05954ed181e0344cb847c6f.js</p>
Now check if the limit range is automatically created along with the namespace:
<p>CODE: https://gist.github.com/ShubhanjanMedhi-dev/de76e9d18084ad5b9557641f821727a9.js</p>
One more time, verify in the logs; assuming the default install, a command like this will show them:
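```shell
# Assumes Kyverno was installed as a release named kyverno in the kyverno namespace;
# adjust the deployment name if yours differs.
kubectl logs -n kyverno deploy/kyverno
```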
Done!
With this, you have successfully deployed and tested all three policy types in Kyverno.