Concepts
- 1: Image Scanning
- 1.1: Automating Scanning
- 1.2: Scanning in CICD Pipelines
- 1.3: Enforcing Compliance
- 1.3.1: Setting Up Policies
- 1.3.2: Installing Webhook
- 1.3.3: Enabling Enforcement
- 1.3.4: Exceptions
- 2: kube-bench
- 3: kube-hunter
- 4: KubeSec
- 5: Gatekeeper
- 5.1: Getting Started
- 5.2: Managing Constraints
- 5.3: Reviewing Exceptions
- 5.4: Reviewing Pod Compliance
- 6: Falco
1 - Image Scanning
Trivy is one of the best tools for scanning Kubernetes images, and m9sweeper uses it to coordinate scanning of images deployed to your cluster, rescan those images on a schedule, and block images from deploying if they do not meet your minimum criteria for compliance.
M9sweeper also allows you to create exceptions, or to let your employees request exceptions for approval when they do not have time to fix an issue right away but still need their applications to deploy.
For a full list of trawler configuration options, see the trawler reference guide.
1.1 - Automating Scanning
By default, m9sweeper will scan all images that it sees deployed in your cluster that have not been scanned.
Also, you can configure the Image Rescan Period (Days) when setting up policies to automatically rescan images. This will then rescan any images currently running in your cluster that have not been scanned within that number of days.

1.2 - Scanning in CICD Pipelines
You can automatically scan images using trawler in your automated CICD pipelines. The easiest way to do this is by running trawler from the command line using the container image. It will look something like this.
docker run \
  --env "M9SWEEPER_URL=XXX" \
  --env "M9SWEEPER_API_KEY=XXX" \
  --env "CLUSTER_NAME=XXX" \
  --env "DOCKER_IMAGE_URL=XXX" \
  -it m9sweeper/trawler trawler scan
Note that you will need to provide an API key as well as the name of the cluster you are scanning for so that trawler can authenticate with m9sweeper. You will have to run a scan for each cluster you plan to deploy the image to, because each cluster might have different policies set up.
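As an illustration, the command above can be dropped straight into a CI job. The following GitLab CI sketch is hypothetical (the job name, stage, and variable names are placeholders, and it assumes trawler reports non-compliance through its exit code so that the pipeline fails on a bad image); any CI system that can run containers works the same way:

```yaml
# Hypothetical .gitlab-ci.yml fragment; set the M9SWEEPER_* values as CI/CD variables.
scan-image:
  stage: test
  image: docker:24
  script:
    - docker run
      --env "M9SWEEPER_URL=$M9SWEEPER_URL"
      --env "M9SWEEPER_API_KEY=$M9SWEEPER_API_KEY"
      --env "CLUSTER_NAME=$CLUSTER_NAME"
      --env "DOCKER_IMAGE_URL=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
      m9sweeper/trawler trawler scan
```

The interactive flags (-it) are dropped here because CI runners are non-interactive.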
1.3 - Enforcing Compliance
1.3.1 - Setting Up Policies
Policy Settings
In the organization settings, you can click on policies in the left navigation and configure one or more policies for your cluster. These policies define what criteria an image must meet to be considered compliant in the cluster.
It looks something like this.

Only policies and scanners that are active and required will be used in determining whether an image is compliant. Also, when evaluating an image for a cluster, only policies that are configured for that cluster will be applied.
Configuring Trivy Requirements
When configuring the trivy scanner, you can define the maximum number of vulnerabilities for each category. The pre-installed defaults will block any image with a fixable major or critical vulnerability.

1.3.2 - Installing Webhook
In order to have m9sweeper enforce image scanning compliance in your cluster, you need to install a validating webhook in your cluster. This should be done automatically by m9sweeper during the setup process, but if for some reason it was not, you can click “Update Kubeconfig” on your cluster’s settings page and run through the setup wizard again to have it install the webhook for you.

1.3.3 - Enabling Enforcement
To enable enforcement, you need to make your way to the Cluster settings for your cluster and check the box that enables webhook enforcement.

Once checked, anything that is not compliant with the policies you have set up will be prevented from deploying. Note that this only works if you have installed the webhook during the setup process.
1.3.4 - Exceptions
Sometimes, for practical reasons, you may need to allow something with a known security issue to continue to be deployed in an environment. You can do this using exceptions.
Creating Exceptions
Your team can create exceptions when the need arises.

Temporary Exceptions
When a new issue is discovered, such as through a nightly image rescan, you may want to automatically give teams a certain amount of time (let's say a week) before it blocks their deployments. This can be done through the use of a temporary exception.
To enable this feature, you need to edit the policy that is set up for your cluster(s), check the box (see below), and set how many days temporary exceptions should remain active.

When a new temporary exception is created, m9sweeper will email all of your admins to review it and decide what to do. They should notify your software development teams if the issue should be resolved right away, and/or change the end date on the exception.
Exception Statuses
Active: Active exceptions are the only exceptions that will be used when validating image compliance, and only if the current date is within the exception’s start and end date.
In Review: When an exception is submitted for review, it will be in this status. It will not be used when validating an image’s compliance, but someone should review to decide whether it is a risk your organization is willing to take.
Inactive: The exception will be ignored when validating image compliance.
Requesting Exceptions
When viewing an image, if a team member who is NOT an admin believes an exception is required, they can request an exception. This exception falls into the In Review status and will not be active, but it does provide a forum for your team to request exceptions and for someone else (such as your security/ops team) to review and approve the exception. They would approve the exception by changing its status to Active.

2 - kube-bench
kube-bench will run a scan of your cluster to compare its configuration against the Center for Internet Security (CIS) benchmarks. This is a good way to check for obvious configuration issues, such as allowing anonymous users. It deploys as an application in your cluster and then accesses the Kubernetes APIs to see how your cluster is configured.
We recommend setting up kube-bench to run as a nightly cron job so that you can see the effect of any changes you make to your cluster.
First, you need to install kube-bench and set it up to upload its results to m9sweeper. To do this, go to kube-bench for your cluster and click on Run Audit in the top right.

Then, you can use the wizard to generate a CLI command that will install kube-bench using our helm chart as a cron job or one-time job in your cluster. It will upload its results back to the API (and you should see an API key in the url).
Note that this will only work IF you have enabled ingress or otherwise allowed kube-bench to send its results back to the m9sweeper dash app.
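For reference, the cron-job mode boils down to a Kubernetes CronJob wrapping kube-bench. Below is a simplified, hand-written sketch, not the wizard's actual output; the image tag, schedule, and hostPID settings are assumptions, and the generated chart additionally wires in the API key and upload URL so the results reach m9sweeper:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench
spec:
  schedule: "0 2 * * *"        # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          hostPID: true        # kube-bench needs host process visibility
          restartPolicy: Never
          containers:
            - name: kube-bench
              image: docker.io/aquasec/kube-bench:latest
              command: ["kube-bench"]
```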

And then this will display a summary report, like this:

You can click on any line to expand it and see directions for remediation.

3 - kube-hunter
kube-hunter will run a non-invasive (or invasive, if you want) penetration test of your cluster. It deploys as an application in your cluster and then attempts to explore your cluster and see what it is able to do. It reports back on any concerns you should be aware of.
We recommend setting up kube-hunter to run as a nightly cron job so that you can see the effect of any changes you make to your cluster.
First, you need to install kube-hunter and set it up to upload its results to m9sweeper. To do this, go to kube-hunter for your cluster and click on Run Audit in the top right.

Then, you can use the wizard to generate a CLI command that will install kube-hunter using our helm chart as a cron job or one-time job in your cluster. It will upload its results back to the API (and you should see an API key in the url).
Note that this will only work IF you have enabled ingress or otherwise allowed kube-hunter to send its results back to the m9sweeper dash app.

And then this will display a summary report, like this:

4 - KubeSec
KubeSec coaches you on how to make your deployments more secure. You can find it in the left navigation after selecting a cluster.

To get started, select a pod you want to evaluate and click Run KubeSec.

It will then display a report of the pod's compliance and any improvements that could be made.

5 - Gatekeeper
5.1 - Getting Started
Gatekeeper is a great tool for creating rules for your Kubernetes cluster. You configure rules using constraint templates and constraints.
Constraint templates, such as this one, allow you to define rules using a language called Rego.
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
Constraints are then created to apply these rules to a particular set of kubernetes entities. These constraints can also contain configuration parameters, such as which labels are required (in this example).
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
M9sweeper makes managing your constraint templates and constraints convenient through an easy-to-use interface.
To get started, you need to install Gatekeeper.
5.2 - Managing Constraints
Managing Constraint Templates
After installing Gatekeeper, your next step is to install constraint templates. You can do this using a CICD pipeline, or if you are new to this you can use the m9sweeper graphical user interface to install Constraint Templates from our library of templates.
First, open the Gatekeeper page for your cluster and click on “+Add More” in the top right.

Then, check the boxes on the constraint templates you want to install and click save changes.

After doing this, you will see the list of constraint templates has been installed.
Managing Constraints
Installing constraint templates alone does not do anything: you also have to apply these constraint templates to specific workloads/namespaces. This is done through the use of constraints.
If you click on one of your constraint templates, you will be taken to a page that lists all of the constraints created for this constraint template. After doing so, click “+Add More” to create a constraint for this template.

This user interface lets you set which namespaces and entity types the constraint applies to. Reasonable defaults are typically filled in if you used one of our templates.

If the template has configurable properties, m9sweeper will automatically generate a user interface for configuring them. Fill in the required properties.

Do not forget to select whether it is to be in enforcement mode or audit mode. Only enforcement mode is actually enforced - audit mode is purely used for evaluation purposes.

Click save changes and now you should have your constraint created!
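In Gatekeeper itself, the mode you pick is expressed through the constraint's standard enforcementAction field. A sketch reusing the earlier label example (the constraint name is carried over from that example):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  enforcementAction: dryrun   # "deny" enforces; "dryrun" only records audit violations
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
```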

5.3 - Reviewing Exceptions
While gatekeeper constraints can be scoped to specific namespaces or entity types, sometimes you want to create temporary exceptions for a particular namespace that end at a specific date, or sometimes you want to just target a specific workload.
In those cases, you can use our exceptions feature, which automatically generates Rego code nightly, re-evaluating the exceptions every day and taking into account each exception's status, start date, and end date. These exceptions work just like image compliance exceptions, except they target gatekeeper constraint templates rather than image scanning rules.
To use them, be sure to pick Gatekeeper as the exception type:

5.4 - Reviewing Pod Compliance
To view a pod’s compliance, navigate to your cluster’s list of workloads. Then, click on the namespace you want to review.

Next, it will list all pods in the namespace. Note that it re-populates this list hourly.

If you click on a pod, it will list all images in the pod and those images' own compliance.

If you click on the Gatekeeper icon in the top right, it will tell you whether any violations exist and, if so, what they are.

6 - Falco
What is Falco?
The Falco Project is an open source runtime security tool originally built by Sysdig, Inc.
What does Falco do?
Falco uses system calls to secure and monitor a system, by:
- Parsing the Linux system calls from the kernel at runtime
- Asserting the stream against a powerful rules engine
- Alerting when a rule is violated
For more information, see the Falco Documentation.
Setup and configuration
M9sweeper consumes HTTP requests from Falco in JSON format to present readable information in our UI.
To accomplish this, FalcoSideKick is deployed to give us more control over Falco’s output.
Deploy FalcoSideKick
The following commands add the FalcoSideKick chart and then install it with the given configuration. This deploys into the “falco” namespace, which will be created if it doesn’t exist.
Add the Helm Chart Repository then install FalcoSideKick:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install -n falco falcosidekick \
  --create-namespace \
  --set-string config.webhook.address="https://m9sweeper.domain.com/api/falco/CLUSTER_ID/create" \
  --set-string config.webhook.minimumpriority="error" \
  --set config.webhook.checkcert=true \
  falcosecurity/falcosidekick
Configuration Notes:
- Set the config.webhook.address value to your instance of M9sweeper.
- Depending on how you are deploying M9sweeper, you might need to set config.webhook.checkcert=false.
- We recommend setting the minimum priority to “error”. This filters the noise from Falco’s warnings. However, you can change this as needed. The priority order is as follows:
  emergency|alert|critical|error|warning|notice|informational|debug
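If you prefer a values file over a string of --set flags, the same settings can live in one (the filename below is arbitrary; the keys mirror the flags shown above):

```yaml
# values-falcosidekick.yaml -- mirrors the --set flags shown above
config:
  webhook:
    address: "https://m9sweeper.domain.com/api/falco/CLUSTER_ID/create"
    minimumpriority: "error"
    checkcert: true
```

Then install with: helm install -n falco falcosidekick --create-namespace -f values-falcosidekick.yaml falcosecurity/falcosidekick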
Deploy Falco
Add the Helm Chart Repo then install Falco:
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco \
  --create-namespace \
  --namespace falco \
  --set falco.driver.enabled=true \
  --set-string falco.driver.kind=ebpf \
  --set falco.tty=true \
  --set falco.json_output=true \
  --set falco.json_include_output_property=true \
  --set falco.http_output.enabled=true \
  --set-string falco.http_output.url=http://falcosidekick:2801/
Notes:
- Make sure to change the webhook URL value to point to your M9sweeper instance, and enter the CLUSTER_ID of whichever cluster the results should be saved to.
- We recommend using the eBPF driver; however, if you have issues, please refer to the installation page, or try the kernel driver by setting “falco.driver.kind=module” above.