helm-charts

Kubescape Operator

Version: 1.8.1 Type: application AppVersion: v1.8.1

Docs

Installing Kubescape Operator in a Kubernetes cluster using Helm:

  1. Add the Kubescape Helm repo
    helm repo add kubescape https://kubescape.github.io/helm-charts/

  2. Update the Helm repo
    helm repo update

  3. Install the Helm chart, using your account ID, and give your cluster a name.

If you have run the Kubescape CLI tool and submitted results, you can get your account ID from the local cache:

kubescape config view | grep -i accountID

Otherwise, get the account ID from the Kubescape SaaS.

Run the install command:

helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace --set account=<my_account_ID> --set clusterName=`kubectl config current-context`

Add --set clientID=<generated client id> --set secretKey=<generated secret key> if you have generated an auth key.
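
If you prefer keeping credentials in a values file rather than on the command line, the same two chart values (clientID and secretKey, listed in the values table below) can be set there. A minimal sketch with placeholder values:

```yaml
# Placeholders only -- substitute the values generated for your account.
clientID: "<generated client id>"
secretKey: "<generated secret key>"
```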

Add --set kubescape.serviceMonitor.enabled=true to install the Prometheus ServiceMonitor; read more about the Prometheus integration.
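
The same toggle can live in a values file. A minimal sketch, using the kubescape.serviceMonitor.enabled key documented in the values table below (the file name is hypothetical; pass it with -f):

```yaml
# monitoring-values.yaml (hypothetical file name)
kubescape:
  serviceMonitor:
    enabled: true   # lets a Prometheus operator discover and scrape Kubescape
```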

Adjusting Resource Usage for Your Cluster

By default, Kubescape is configured for small- to medium-sized clusters. If you have a larger cluster and you notice slowdowns or see Kubernetes evicting components, please increase the resources allocated to the affected component.

Taking Kubescape as an example, we found that the defaults of 500 MiB of memory and 500m CPU work well for clusters of up to 1250 total resources. If you have more resources, or are already experiencing resource pressure, first check how many resources are in your cluster by running the following command:

kubectl get all -A --no-headers | wc -l

The command prints an approximate count of the resources in your cluster. Then allocate roughly 0.4 MiB of memory per resource (about 100 MiB for every 250 resources), but no less than 128 MiB in total. The formula for the memory limit is as follows:

MemoryLimit := max(128, 0.4 * YOUR_AMOUNT_OF_RESOURCES)

For example, if your cluster has 500 resources, a sensible memory limit would be:

kubescape:
  resources:
    limits:
      memory: 200Mi  # max(128, 0.4 * 500) == 200

If your cluster has 50 resources, we still recommend allocating at least 128 MiB of memory.

When it comes to CPU, the more you allocate, the faster Kubescape will scan your cluster. This is especially true for clusters with a large number of resources. However, we recommend giving Kubescape no less than 500m CPU regardless of cluster size, so it can scan a relatively large number of resources fast ;)
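
Putting the memory formula and the CPU advice together, a values override for a cluster of roughly 2000 resources might look like the sketch below. This assumes the chart exposes a standard Kubernetes resources block under kubescape.resources, as in the memory example above; the request values are illustrative, not chart defaults:

```yaml
kubescape:
  resources:
    requests:
      cpu: 500m        # illustrative request; keep at least the recommended 500m
      memory: 400Mi    # illustrative request
    limits:
      cpu: 1000m       # more CPU means faster scans on large clusters
      memory: 800Mi    # max(128, 0.4 * 2000) == 800
```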

Chart support

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| kollector.affinity | object | `{}` | Assign custom affinity rules to the StatefulSet |
| kollector.enabled | bool | `true` | enable/disable the kollector |
| kollector.env[0] | object | `{"name":"PRINT_REPORT","value":"false"}` | print in verbose mode (print all reported data) |
| kollector.image.repository | string | `"quay.io/kubescape/kollector"` | source code |
| kollector.nodeSelector | object | `{}` | Node selector |
| kollector.volumes | object | `[]` | Additional volumes for the collector |
| kollector.volumeMounts | object | `[]` | Additional volumeMounts for the collector |
| kubescape.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| kubescape.downloadArtifacts | bool | `true` | download policies on every scan; we recommend leaving this `true`. Change to `false` when running in an air-gapped environment or when scanning with high frequency (e.g. when running with Prometheus) |
| kubescape.enableHostScan | bool | `true` | enable the host scanner feature |
| kubescape.enabled | bool | `true` | enable/disable kubescape scanning |
| kubescape.image.repository | string | `"quay.io/kubescape/kubescape"` | source code (public repo) |
| kubescape.nodeSelector | object | `{}` | Node selector |
| kubescape.serviceMonitor.enabled | bool | `false` | enable/disable the ServiceMonitor for the Prometheus (operator) integration |
| kubescape.skipUpdateCheck | bool | `false` | skip the check for a newer version |
| kubescape.submit | bool | `true` | submit results to the Kubescape SaaS: https://cloud.armosec.io/ |
| kubescape.volumes | object | `[]` | Additional volumes for Kubescape |
| kubescape.volumeMounts | object | `[]` | Additional volumeMounts for Kubescape |
| kubescapeScheduler.enabled | bool | `true` | enable/disable a Kubescape scheduled scan using a CronJob |
| kubescapeScheduler.image.repository | string | `"quay.io/kubescape/http_request"` | source code (public repo) |
| kubescapeScheduler.scanSchedule | string | `"0 0 * * *"` | scan schedule frequency |
| kubescapeScheduler.volumes | object | `[]` | Additional volumes for the scan scheduler |
| kubescapeScheduler.volumeMounts | object | `[]` | Additional volumeMounts for the scan scheduler |
| gateway.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| gateway.enabled | bool | `true` | enable/disable passing notifications from the Kubescape SaaS to the Operator microservice. The notifications are the on-demand scanning and the scanning schedule settings |
| gateway.image.repository | string | `"quay.io/kubescape/gateway"` | source code |
| gateway.nodeSelector | object | `{}` | Node selector |
| gateway.volumes | object | `[]` | Additional volumes for the notification service |
| gateway.volumeMounts | object | `[]` | Additional volumeMounts for the notification service |
| kubevuln.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| kubevuln.enabled | bool | `true` | enable/disable image vulnerability scanning |
| kubevuln.image.repository | string | `"quay.io/kubescape/kubevuln"` | source code |
| kubevuln.nodeSelector | object | `{}` | Node selector |
| kubevuln.volumes | object | `[]` | Additional volumes for image vulnerability scanning |
| kubevuln.volumeMounts | object | `[]` | Additional volumeMounts for image vulnerability scanning |
| kubevulnScheduler.enabled | bool | `true` | enable/disable an image vulnerability scheduled scan using a CronJob |
| kubevulnScheduler.image.repository | string | `"quay.io/kubescape/http_request"` | source code (public repo) |
| kubevulnScheduler.scanSchedule | string | `"0 0 * * *"` | scan schedule frequency |
| kubevulnScheduler.volumes | object | `[]` | Additional volumes for the scan scheduler |
| kubevulnScheduler.volumeMounts | object | `[]` | Additional volumeMounts for the scan scheduler |
| operator.affinity | object | `{}` | Assign custom affinity rules to the deployment |
| operator.enabled | bool | `true` | enable/disable Kubescape and image vulnerability scanning |
| operator.image.repository | string | `"quay.io/kubescape/operator"` | source code |
| operator.nodeSelector | object | `{}` | Node selector |
| operator.volumes | object | `[]` | Additional volumes for the web socket |
| operator.volumeMounts | object | `[]` | Additional volumeMounts for the web socket |
| kubescapeHostScanner.volumes | object | `[]` | Additional volumes for the host scanner |
| kubescapeHostScanner.volumeMounts | object | `[]` | Additional volumeMounts for the host scanner |
| awsIamRoleArn | string | `nil` | AWS IAM role ARN |
| clientID | string | `""` | client ID, read more |
| addRevisionLabel | bool | `true` | Add a revision label to the components. This ensures the components restart when the Helm release is upgraded |
| cloudRegion | string | `nil` | cloud region |
| cloudProviderEngine | string | `nil` | cloud provider engine |
| gkeProject | string | `nil` | GKE project |
| gkeServiceAccount | string | `nil` | GKE service account |
| secretKey | string | `""` | secret key, read more |
| triggerNewImageScan | bool | `false` | enable/disable triggering an image scan for new images |
| volumes | object | `[]` | Additional volumes for all containers |
| volumeMounts | object | `[]` | Additional volumeMounts for all containers |
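
As a rough illustration of how several of these keys compose in a single override file (the values shown are examples built only from keys in the table above, not recommendations):

```yaml
# example-values.yaml (illustrative)
account: "<my_account_ID>"     # same value as --set account=... in the install command
clusterName: "my-cluster"      # same value as --set clusterName=...
triggerNewImageScan: true      # scan images as soon as they appear in the cluster
kubescape:
  downloadArtifacts: false     # e.g. for air-gapped clusters or high-frequency scanning
kubescapeScheduler:
  scanSchedule: "0 8 * * *"    # daily configuration scan at 08:00
kubevulnScheduler:
  scanSchedule: "0 0 * * 0"    # weekly image vulnerability scan
```

Pass such a file to the install command with -f example-values.yaml instead of repeating --set flags.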

In-cluster components overview

An overview of each in-cluster component that is part of the Kubescape platform Helm chart. Follow the repository link for in-depth information on a specific component.


High-level Architecture Diagram

```mermaid
graph TB

  client([client]) .-> dashboard
  masterGw  .- gw

  subgraph Cluster
    gw(Gateway)
    operator(Operator)
    k8sApi(Kubernetes API);
    kubevuln(Kubevuln)
    ks(Kubescape)
    gw --- operator
    operator -->|scan cluster| ks
    operator -->|scan images| kubevuln
    operator --> k8sApi
    ks --> k8sApi
  end;
  
subgraph Backend
    er(EventReceiver)
    dashboard(Dashboard) --> masterGw("Master Gateway") 
    ks --> er
    kubevuln --> er
  end;
  
  classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
  classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000;
  class k8sApi k8s
  class ks,operator,gw,masterGw,kollector,kubevuln,er,dashboard plain
```

Gateway

Component Diagram

```mermaid
graph TB

subgraph Backend
  dashboard(Dashboard)
  masterGw("Gateway (Master)")
end

subgraph Cluster N
  gw3("Gateway (In-cluster)")
  operator3(Operator)
end;

subgraph Cluster 2
  gw2("Gateway (In-cluster)")
  operator2(Operator)
end;

subgraph Cluster 1
  gw1("Gateway (In-cluster)")
  operator1(Operator)
end;

dashboard --> masterGw
masterGw .- gw2
masterGw .- gw3
gw1 .- operator1
gw2 .- operator2
gw3 .- operator3
masterGw .- gw1

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000;
class k8sApi k8s
class ks,operator1,dashboard,operator2,operator3 plain
```

Operator

Component Diagram

```mermaid
graph TB

subgraph Cluster
  gw(Gateway)
  operator(Operator)
  k8sApi(Kubernetes API);
  kubevuln(Kubevuln)
  ks(Kubescape)
  urlCm
  recurringTempCm
  recurringScanCj
end;

masterGw(Master Gateway) .- gw
gw ---> operator
recurringScanCj ---> operator
recurringScanCj --> recurringScanCj
operator -->|scan cluster| ks
operator -->|scan images| kubevuln
operator --> k8sApi
operator --- urlCm
operator --- recurringTempCm

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000;
class k8sApi k8s
class ks,gw,masterGw,kollector,urlCm,recurringScanCj,recurringTempCm,kubevuln,er,dashboard plain
```

Kubevuln

Component Diagram

```mermaid
graph TB

subgraph Cluster
  kubevuln(Kubevuln)
  k8sApi(Kubernetes API)
  operator(Operator)
  gateway(Gateway)
  urlCm
  recurringScanCj
  recurringScanCm
  recurringTempCm
end

masterGateway .- gateway
gateway .-|Scan Notification| operator
operator -->|Collect NS, Images|k8sApi
operator -->|Start Scan| kubevuln
operator --- urlCm
urlCm --- kubevuln
recurringTempCm --- operator
recurringScanCj -->|Periodic Run| recurringScanCj
recurringScanCj -->|Scan Notification| operator
recurringScanCm --- recurringScanCj

subgraph Backend
  er(EventReceiver)
  masterGateway("Master Gateway")
  kubevuln -->|Scan Results| er
end;

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000
class k8sApi k8s
class urlCm,recurringScanCm,operator,er,gateway,masterGateway,recurringScanCj,recurringTempCm plain
```

Kubescape

Component Diagram

```mermaid
graph TB

subgraph Cluster
  ks(Kubescape)
  k8sApi(Kubernetes API)
  operator(Operator)
  gateway(Gateway)
  ksCm
  recurringScanCj
  recurringScanCm
  recurringTempCm
end

masterGateway .- gateway
gateway .-|Scan Notification| operator
operator -->|Start Scan| ks
ks-->|Collect Cluster Info|k8sApi
ksCm --- ks
recurringTempCm --- operator
recurringScanCj -->|Periodic Run| recurringScanCj
recurringScanCj -->|Scan Notification| operator
recurringScanCm --- recurringScanCj

subgraph Backend
  er(EventReceiver)
  masterGateway("Master Gateway")
  ks -->|Scan Results| er
end;

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000
class k8sApi k8s
class ksCm,recurringScanCm,operator,er,gateway,masterGateway,recurringScanCj,recurringTempCm plain
```

Kollector

Component Diagram

```mermaid
graph TD

subgraph Backend
  er(EventReceiver)
  masterGw("Master Gateway")
end;

subgraph Cluster
  kollector(Kollector)
  k8sApi(Kubernetes API);
  gw(Gateway)
end;

kollector .->|Scan new image| gw
masterGw .- gw
kollector --> er
kollector --> k8sApi

classDef k8s fill:#326ce5,stroke:#fff,stroke-width:1px,color:#fff;
classDef plain fill:#ddd,stroke:#fff,stroke-width:1px,color:#000;
class k8sApi k8s
class er,gw,masterGw plain
```

URLs ConfigMap

Holds a list of communication URLs used by the in-cluster components.

Config Example (YAML)

```yaml
gatewayWebsocketURL: 127.0.0.1:8001    # component: in-cluster gateway
gatewayRestURL: 127.0.0.1:8002         # component: in-cluster gateway
kubevulnURL: 127.0.0.1:8081            # component: kubevuln
kubescapeURL: 127.0.0.1:8080           # component: kubescape
eventReceiverRestURL: https://report.cloud.com        # component: eventReceiver
eventReceiverWebsocketURL: wss://report.cloud.com     # component: eventReceiver
rootGatewayURL: wss://masterns.cloud.com/v1/waitfornotification   # component: master gateway
accountID: 1111-aaaaa-4444-555
clusterName: minikube
```

Kubernetes API

Some in-cluster components communicate with the Kubernetes API server for different purposes: for example, Kubescape collects the cluster objects it scans, and the Operator lists namespaces and images and creates the scan CronJobs.


Backend components

The backend components run as part of Kubescape's SaaS offering.

Dashboard

EventReceiver


Logging and troubleshooting

Each component writes logs to the standard output.

Every action has a generated jobId which is written to the log.

An action that creates sub-actions generates them with their own jobIds, each carrying a parentId that correlates to the parent action's jobId.


Recurring scans

Three types of recurring scans are supported:

  1. Cluster configuration scanning (Kubescape)
  2. Vulnerability scanning for container images (Kubevuln)
  3. Container registry scanning (Kubevuln)

When creating a recurring scan, the Operator component will create a ConfigMap and a CronJob from a recurring template ConfigMap. Each scan type comes with a template.

The CronJob itself does not run the scan directly. When a CronJob is ready to run, it will send a REST API request to the Operator component, which will then trigger the relevant scan (similarly to a request coming from the Gateway).

The scan results are then sent by each relevant component to the EventReceiver.
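
The sketch below illustrates the shape of such a generated CronJob, assembled only from the chart defaults listed above (scheduler image quay.io/kubescape/http_request and schedule "0 0 * * *"); the object name and the exact request the container sends are hypothetical, since the real CronJob is rendered from the recurring-template ConfigMap:

```yaml
# Illustrative sketch only -- not the template shipped with the chart.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kubescape-scheduler            # hypothetical name
  namespace: kubescape
spec:
  schedule: "0 0 * * *"                # kubescapeScheduler.scanSchedule default
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: kubescape-scheduler
              image: quay.io/kubescape/http_request   # kubescapeScheduler.image.repository
              # The container only sends a REST request asking the Operator to
              # trigger the scan; the scan itself runs in the Kubescape component.
```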

Main Flows Diagrams

Recurring Scan Creation

```mermaid
sequenceDiagram
  actor user
  participant dashboard as Backend<br><br>Dashboard
  participant masterGw as Backend<br><br>Master Gateway
  participant clusterGw as Cluster<br><br>In-Cluster Gateway
  participant operator as Cluster<br><br>Operator
  participant k8sApi as Cluster<br><br>Kubernetes API
  user->>dashboard: 1. create scan schedule
  dashboard->>masterGw: 2. build schedule notification
  masterGw->>clusterGw: 3. broadcast notification
  clusterGw->>operator: 4. create recurring scan
  operator->>k8sApi: 5. get namespaces, workloads
  k8sApi-->>operator: 
  operator->>k8sApi: 6. Create cronjob & ConfigMap
```
Recurring Image Scan

```mermaid
sequenceDiagram
  participant cronJob as Cluster<br><br>CronJob
  participant operator as Cluster<br><br>Operator
  participant k8sApi as Cluster<br><br>Kubernetes API
  participant kubeVuln as Cluster<br><br>Kubevuln
  participant er as Backend<br><br>EventReceiver
  loop
    cronJob->>operator: 1. run image scan
  end
  operator->>k8sApi: 2. list NS, container images
  k8sApi-->>operator: 
  operator->>kubeVuln: 3. scan images
  kubeVuln ->> er: 4. send scan results
```
Recurring Kubescape Scan

```mermaid
sequenceDiagram
  participant cronJob as Cluster<br><br>CronJob
  participant operator as Cluster<br><br>Operator
  participant ks as Cluster<br><br>Kubescape
  participant k8sApi as Cluster<br><br>Kubernetes API
  participant er as Backend<br><br>EventReceiver
  loop
    cronJob->>operator: 1. run configuration scan
  end
  operator->>ks: 2. kubescape scan
  ks->>k8sApi: 3. list NS, workloads, RBAC
  k8sApi->>ks: 
  ks ->> er: 4. send scan results
```