```bash
helm repo add kubescape https://kubescape.github.io/helm-charts/
helm repo update
```
If you have previously run the Kubescape CLI and submitted the results, you can get your account ID from the local cache:

```bash
kubescape config view | grep -i accountID
```

Otherwise, get the account ID from the Kubescape SaaS.
Run the install command:

```bash
helm upgrade --install kubescape kubescape/kubescape-cloud-operator \
  -n kubescape --create-namespace \
  --set account=<my_account_ID> \
  --set cluster-name=`kubectl config current-context`
```
If you have generated an auth key, add:

```bash
--set clientID=<generated client id> --set secretKey=<generated secret key>
```
To install the Prometheus ServiceMonitor, add:

```bash
--set kubescape.serviceMonitor.enabled=true
```

Read more about the Prometheus integration.
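Putting it all together, a full install command with both optional flags applied might look like this (the angle-bracketed values are placeholders for your own credentials):

```bash
helm upgrade --install kubescape kubescape/kubescape-cloud-operator \
  -n kubescape --create-namespace \
  --set account=<my_account_ID> \
  --set cluster-name=`kubectl config current-context` \
  --set clientID=<generated client id> \
  --set secretKey=<generated secret key> \
  --set kubescape.serviceMonitor.enabled=true
```

After the release is installed, `kubectl get pods -n kubescape` should show the chart's components starting up.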
By default, Kubescape is configured for small- to medium-sized clusters. If you have a larger cluster and you experience slowdowns or see Kubernetes evicting components, please revise the amount of resources allocated for the troubled component.
Taking Kubescape for example, we found that our defaults of 500 MiB of memory and 500m CPU work well for clusters up to 1250 total resources. If you have more total resources or experience resource pressure already, first check out how many resources are in your cluster by running the following command:
```bash
kubectl get all -A --no-headers | wc -l
```
The command prints an approximate count of the resources in your cluster. Based on that number, allocate 0.4 MiB of memory per resource (100 MiB for every 250 resources), but no less than 128 MiB in total; the default limit of 500 MiB corresponds to 1250 resources. The formula for memory is as follows:

```
MemoryLimit := max(128, 0.4 * YOUR_AMOUNT_OF_RESOURCES)
```
For example, if your cluster has 500 resources, a sensible memory limit would be:

```yaml
kubescape:
  resources:
    limits:
      memory: 200Mi  # max(128, 0.4 * 500) == 200
```
If your cluster has 50 resources, we still recommend allocating at least 128 MiB of memory.
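The same calculation can be scripted. A minimal sketch, assuming `kubectl` is pointed at the target cluster and using integer arithmetic:

```bash
# Count resources, then apply MemoryLimit := max(128, 0.4 * resources).
RESOURCES=$(kubectl get all -A --no-headers | wc -l)
LIMIT=$((RESOURCES * 2 / 5))                   # 0.4 MiB per resource
if [ "$LIMIT" -lt 128 ]; then LIMIT=128; fi    # enforce the 128 MiB floor
echo "Recommended memory limit: ${LIMIT}Mi"
```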
When it comes to CPU, the more you allocate, the faster Kubescape will scan your cluster. This is especially true for clusters with a large number of resources. However, we recommend giving Kubescape no less than 500m CPU no matter the size of your cluster, so it can scan a relatively large number of resources fast ;)
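For instance, for a cluster of around 2500 resources, the overrides might look like the sketch below (the memory value follows the formula above; the CPU value is an illustrative choice, not a chart default):

```yaml
kubescape:
  resources:
    limits:
      memory: 1000Mi  # max(128, 0.4 * 2500) == 1000
      cpu: 1000m      # illustrative: more CPU speeds up scans on large clusters
```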
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kollector.affinity | object | `{}` | Assign custom affinity rules to the StatefulSet |
| kollector.enabled | bool | `true` | Enable/disable the Kollector |
| kollector.env[0] | object | `{"name":"PRINT_REPORT","value":"false"}` | Print in verbose mode (print all reported data) |
| kollector.image.repository | string | `"quay.io/kubescape/kollector"` | Source code |
| kollector.nodeSelector | object | `{}` | Node selector |
| kollector.volumes | object | `[]` | Additional volumes for the Kollector |
| kollector.volumeMounts | object | `[]` | Additional volumeMounts for the Kollector |
| kubescape.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| kubescape.downloadArtifacts | bool | `true` | Download policies on every scan. We recommend keeping this `true`; set it to `false` when running in an air-gapped environment or when scanning at high frequency (e.g., with Prometheus) |
| kubescape.enableHostScan | bool | `true` | Enable the host scanner feature |
| kubescape.enabled | bool | `true` | Enable/disable Kubescape scanning |
| kubescape.image.repository | string | `"quay.io/kubescape/kubescape"` | Source code (public repo) |
| kubescape.nodeSelector | object | `{}` | Node selector |
| kubescape.serviceMonitor.enabled | bool | `false` | Enable/disable the ServiceMonitor for the Prometheus (Operator) integration |
| kubescape.skipUpdateCheck | bool | `false` | Skip the check for a newer version |
| kubescape.submit | bool | `true` | Submit results to the Kubescape SaaS: https://cloud.armosec.io/ |
| kubescape.volumes | object | `[]` | Additional volumes for Kubescape |
| kubescape.volumeMounts | object | `[]` | Additional volumeMounts for Kubescape |
| kubescapeScheduler.enabled | bool | `true` | Enable/disable a scheduled Kubescape scan using a CronJob |
| kubescapeScheduler.image.repository | string | `"quay.io/kubescape/http_request"` | Source code (public repo) |
| kubescapeScheduler.scanSchedule | string | `"0 0 * * *"` | Scan schedule frequency |
| kubescapeScheduler.volumes | object | `[]` | Additional volumes for the scan scheduler |
| kubescapeScheduler.volumeMounts | object | `[]` | Additional volumeMounts for the scan scheduler |
| gateway.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| gateway.enabled | bool | `true` | Enable/disable passing notifications from the Kubescape SaaS to the Operator microservice. The notifications are the on-demand scanning and scanning-schedule settings |
| gateway.image.repository | string | `"quay.io/kubescape/gateway"` | Source code |
| gateway.nodeSelector | object | `{}` | Node selector |
| gateway.volumes | object | `[]` | Additional volumes for the notification service |
| gateway.volumeMounts | object | `[]` | Additional volumeMounts for the notification service |
| kubevuln.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| kubevuln.enabled | bool | `true` | Enable/disable image vulnerability scanning |
| kubevuln.image.repository | string | `"quay.io/kubescape/kubevuln"` | Source code |
| kubevuln.nodeSelector | object | `{}` | Node selector |
| kubevuln.volumes | object | `[]` | Additional volumes for image vulnerability scanning |
| kubevuln.volumeMounts | object | `[]` | Additional volumeMounts for image vulnerability scanning |
| kubevulnScheduler.enabled | bool | `true` | Enable/disable a scheduled image vulnerability scan using a CronJob |
| kubevulnScheduler.image.repository | string | `"quay.io/kubescape/http_request"` | Source code (public repo) |
| kubevulnScheduler.scanSchedule | string | `"0 0 * * *"` | Scan schedule frequency |
| kubevulnScheduler.volumes | object | `[]` | Additional volumes for the scan scheduler |
| kubevulnScheduler.volumeMounts | object | `[]` | Additional volumeMounts for the scan scheduler |
| operator.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| operator.enabled | bool | `true` | Enable/disable Kubescape and image vulnerability scanning |
| operator.image.repository | string | `"quay.io/kubescape/operator"` | Source code |
| operator.nodeSelector | object | `{}` | Node selector |
| operator.volumes | object | `[]` | Additional volumes for the web socket |
| operator.volumeMounts | object | `[]` | Additional volumeMounts for the web socket |
| kubescapeHostScanner.volumes | object | `[]` | Additional volumes for the host scanner |
| kubescapeHostScanner.volumeMounts | object | `[]` | Additional volumeMounts for the host scanner |
| awsIamRoleArn | string | `nil` | AWS IAM role ARN |
| clientID | string | `""` | Client ID, read more |
| addRevisionLabel | bool | `true` | Add a revision label to the components. This ensures the components restart when the Helm release is upgraded |
| cloudRegion | string | `nil` | Cloud region |
| cloudProviderEngine | string | `nil` | Cloud provider engine |
| gkeProject | string | `nil` | GKE project |
| gkeServiceAccount | string | `nil` | GKE service account |
| secretKey | string | `""` | Secret key, read more |
| triggerNewImageScan | bool | `false` | Enable/disable triggering an image scan for new images |
| volumes | object | `[]` | Additional volumes for all containers |
| volumeMounts | object | `[]` | Additional volumeMounts for all containers |
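Rather than repeating `--set` flags, any of these keys can be collected into a values file. A sketch (the file name and the overrides shown are arbitrary examples, not recommendations):

```yaml
# my-values.yaml (hypothetical file name)
account: <my_account_ID>
kubescape:
  serviceMonitor:
    enabled: true
kubescapeScheduler:
  scanSchedule: "0 8 * * *"   # daily at 08:00 instead of midnight
triggerNewImageScan: true
```

Then pass it to Helm with `-f my-values.yaml` in the install command shown above.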
An overview of each in-cluster component that is part of the Kubescape platform Helm chart. Follow the repository link for in-depth information on a specific component.
```mermaid
graph TB
  client([client]) -.-> dashboard
  masterGw -.- gw
  subgraph Cluster
    gw(Gateway)
    operator(Operator)
    k8sApi(Kubernetes API);
    kubevuln(Kubevuln)
    ks(Kubescape)
    gw --- operator
    operator -->|scan cluster| ks
    operator -->|scan images| kubevuln
    operator --> k8sApi
    ks --> k8sApi
  end;
  subgraph Backend
    er(EventReceiver)
    dashboard(Dashboard) --> masterGw("Master Gateway")
    ks --> er
    kubevuln --> er
  end;
  classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
  classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000;
  class k8sApi k8s
  class ks,operator,gw,masterGw,kollector,kubevuln,er,dashboard plain
```
Gateway (Deployment)
Responsibility: broadcasts each message it receives to its registered clients. When a client registers itself with a Gateway, it must provide a set of attributes which serve as its identification for message-routing purposes.
In our architecture, the Gateway acts as both a server and a client, depending on its running configuration: a Master Gateway communicates with multiple in-cluster Gateways, and is therefore able to communicate with multiple clusters.
The Kubescape, Kubevuln, and Operator components each run as a Deployment, while the Kollector runs as a StatefulSet.
A ConfigMap holds a list of communication URLs used by several of the in-cluster components.
Some in-cluster components communicate with the Kubernetes API server for different purposes:
Kollector
Watches for changes in namespaces, workloads, and nodes. Reports the information to the EventReceiver. Identifies image-related changes and triggers a scan of the new images accordingly (scanning new images is optional).
Operator
Creates/updates/deletes resources for recurring-scan purposes (CronJobs, ConfigMaps). Collects the information (namespaces, image names/tags) required for Kubevuln's image scanning.
Kubescape
Collects the namespaces, workloads, RBAC objects, etc. required for cluster scans.
The backend components run in Kubescape's SaaS offering.
Each component writes logs to standard output. Every action has a generated `jobId`, which is written to the log. An action that creates sub-actions creates each of them with a different `jobId`, along with a `parentId` that correlates to the parent action's `jobId`.
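Because everything goes to standard output, plain kubectl is enough to follow a component's logs. For example (assuming the chart's default namespace and the Operator Deployment's name, which may differ in your release):

```bash
kubectl logs -n kubescape deployment/operator --follow
```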
Three types of recurring scans are supported. When creating a recurring scan, the Operator component creates a ConfigMap and a CronJob from a recurring-template ConfigMap; each scan type comes with its own template.
The CronJob itself does not run the scan directly. When a CronJob is ready to run, it will send a REST API request to the Operator component, which will then trigger the relevant scan (similarly to a request coming from the Gateway).
The scan results are then sent by each relevant component to the EventReceiver.
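The schedules themselves are ordinary cron expressions set through the chart values. For example, to move the configuration scan to a weekly cadence while keeping daily image scans (the times are illustrative):

```yaml
kubescapeScheduler:
  enabled: true
  scanSchedule: "0 8 * * 1"   # configuration scan: Mondays at 08:00
kubevulnScheduler:
  enabled: true
  scanSchedule: "0 0 * * *"   # image vulnerability scan: daily at midnight (chart default)
```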