Deploy on a Kubernetes Cluster

This note explains how to deploy the Cisco Panoptica controller on a Kubernetes cluster. The controller is deployed as a single pod in the cluster and, from there, it can apply Panoptica workload management methods on the entire cluster.
As part of the deployment, an extended version of Istio is also deployed as a pod in the cluster.

As soon as the controller is deployed on the cluster, you will gain the following benefits:

  • Visibility into what workloads (microservices, containers, etc.) are running on the cluster, and into the communications between them and with the external world.
  • Control over which workloads run on the cluster, and with whom they can communicate, by defining a few simple Panoptica runtime policy rules.
  • Implicit, automatic scalability as you grow the cluster to production scale, without having to change the Panoptica controller or policies.
  • The ability to apply Panoptica runtime policies to workloads running on multiple clusters.

Prerequisites

  • You can deploy the Panoptica controller on any Kubernetes cluster, including managed cluster environments such as GKE.
  • The Kubernetes CLI (kubectl) must be installed on the machine or VM from which the deployment is run.
  • The machine or VM must have connectivity to the cluster (to run kubectl commands).
  • The controller installed in the Kubernetes cluster must be able to communicate with the Panoptica console over port 443.
  • You must have a Panoptica account.
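The prerequisites above can be checked with a short script before you start. This is an illustrative sketch, not part of the product: CONSOLE_HOST is a placeholder for your Panoptica console address, and each check only reports status.

```shell
#!/bin/sh
# Pre-flight sketch for the prerequisites above.
# CONSOLE_HOST is a placeholder -- substitute your Panoptica console address.
CONSOLE_HOST="${CONSOLE_HOST:-console.example.com}"

# 1. Is the Kubernetes CLI installed?
if command -v kubectl >/dev/null 2>&1; then
  echo "kubectl: found"
else
  echo "kubectl: not found - install the Kubernetes CLI"
fi

# 2. Does this machine have connectivity to the cluster?
if kubectl cluster-info >/dev/null 2>&1; then
  echo "cluster: reachable"
else
  echo "cluster: not reachable from this machine"
fi

# 3. Can the Panoptica console be reached on port 443? (requires curl)
if command -v curl >/dev/null 2>&1 && curl -s --max-time 5 "https://$CONSOLE_HOST/" >/dev/null 2>&1; then
  echo "console: reachable on port 443"
else
  echo "console: could not verify connectivity on port 443"
fi
```

Each check prints a status line either way, so the script is safe to run repeatedly while fixing the environment.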

High Availability

The Panoptica controller on the cluster provides High Availability to dynamically meet the needs of your workloads as they run on the cluster. The controller replicates automatically, creating additional controller instances to meet dynamic load from your workloads, and scales down as load declines.

Deploy the Panoptica controller

Follow the steps below to deploy the controller as a pod on the cluster. As part of the deployment, Istio will also be deployed.

Define a Kubernetes cluster on Panoptica

You create a new cluster in these steps, some of which are optional:

  • STEP 1 - define the cluster properties, including the name and type of orchestration
  • STEP 2 (optional) - select options for the controller that is deployed on the cluster; if not selected, default options are used
  • STEP 3 (optional) - select API Security options, if the Panoptica account is configured for API Security
  • STEP 4 (optional) - select Advanced settings for the controller; if not selected, default options are used.

To create a new cluster, start with these steps:

  1. Navigate to the Deployments page (use the Navigation menu on the left).
  2. Select the CLUSTERS tab.
  3. Click New Cluster.

STEP 1 - Cluster Properties

  1. Enter a name for the cluster (as it will be referred to in the Panoptica UI).
  2. Select the orchestration for the cluster, from the list (GKE, EKS, AKS, OpenShift, Rancher, etc.).
  3. Optionally, select the two additional options.
  1. Click NEXT to continue to the next step, or FINISH to complete the cluster creation, skipping the next steps.

STEP 2 - Network Security
This is an optional step, to set options for the Panoptica controller that is deployed on the cluster. If it is skipped, default settings will be used.

  1. Set the following options:
  • Add Network Security - enable tracking of connections to the workloads deployed on the cluster, and apply the Runtime Connection Policy to it.
    If selected ('Yes'), these additional options can be set:
  • Istio already installed - Istio will not be included in the Panoptica controller deployment (the YAML file) or deployed by it. Instead, the controller will use the existing Istio deployment in the cluster (the controller will not work if Istio is not present). When this option is selected, choose the version of the installed Istio. The version can be found in the image tag of the istiod deployment in the istio-system namespace, and can be retrieved using the following command:
kubectl get deployment istiod -o yaml -n istio-system | grep image: | cut -d ':' -f3 | head -1 | grep -oE "[0-9]+\.[0-9]+\.[0-9]*"
  • Note: if you select this option and use an existing Istio deployment, some capabilities may be missing or reduced.

Istio configuration options

  • Install Istio Ingress Gateway - install the Istio Ingress Gateway when Istio is deployed (only available if Istio already installed is set to No).
  • Enable namespaces isolation - service information is only synced to proxies within the same namespace (only available if Istio already installed is set to No).
  • Supports multi cluster communication - enables the controller to discover pods on other clusters in a multi-cluster environment. This option must be enabled in order to apply runtime policies on workloads across multiple clusters.
  • Inspect incoming cluster connections - Connection rules with external IP-based sources will be enforced, and external IP sources will be shown with their IP address. When switched off, these rules will not be enforced, and the external IP address will not be shown.
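As a sanity check, the version-extraction pipeline from the command above can be exercised offline against a sample line of istiod deployment YAML (the image tag 1.17.2 below is purely illustrative):

```shell
# Dry run of the Istio version-extraction pipeline, applied to a sample
# 'image:' line as it appears in the istiod deployment YAML.
# The tag 1.17.2 is illustrative, not a recommendation.
sample="        image: docker.io/istio/pilot:1.17.2"
version=$(echo "$sample" | grep image: | cut -d ':' -f3 | head -1 | grep -oE "[0-9]+\.[0-9]+\.[0-9]*")
echo "$version"   # prints 1.17.2
```

The third ':'-separated field of the line is the image tag, and the final grep keeps only the numeric version portion.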

Envoy configuration options

  • Hold Application till proxy is ready - if enabled, application workloads will not be started on the cluster until the Istio proxy (deployed as part of the Panoptica controller) is started.
  • Custom envoy settings - allow customization of memory and CPU resource allocation.
  • Enable TLS inspection - allow Panoptica to inspect HTTPS traffic from the cluster, by decrypting and then re-encrypting it. This is used to enforce Connection Policy rules and to identify Layer 7 attributes.
    • External CA integration - configure the controller to use the external CA configured for the cluster, instead of the default CA configured in Istio.
  1. Click NEXT to continue to the next step, or FINISH to complete the cluster creation, skipping the next steps.

STEP 3: API Security
This step appears only if API Security is enabled in your Panoptica account, and selected in STEP 1, above.


1. Select API Security trace sources:

  • Istio - trace API traffic into and out of the cluster, using Istio
  • External gateways - trace API traffic into and out of the cluster through an external API gateway
  1. Click NEXT to continue to the next step, or FINISH to complete the cluster creation, skipping the next step.

STEP 4: Advanced settings
These are additional settings for the cluster controller.

  1. Set advanced settings:

Pre-deployment settings

  • CI image hash validation - Panoptica will identify new pods by the image hash value of their containers, which must match the value generated by the Panoptica CI plugin, or the value entered manually in the UI. If the hash value does not match, the pod will be marked as 'unknown'.
  • CI image signature validation - Panoptica ensures that only signed images are deployed to pods.
  • CD Pod template - Panoptica will identify new Pod template workloads in the cluster from a CD tool (such as Helm) that has a Panoptica CD plugin installed, and will treat Pod templates running on the cluster but not identified by the CD plugin as 'unknown'. Pod templates deployed while this switch is 'Off', and that appear in the Workloads page, will be considered 'known'. Unknown workloads are subject to the default Runtime Deployment Policy Unknown Workload rule, which may block their deployment on the cluster.
  • Restrict Registries - Panoptica will mark workloads as 'unknown' if the images are pulled from registries not designated as trusted (in the Registries tab). If disabled, workloads from all registries will be designated as 'known'. Unknown workloads are subject to the default Runtime Deployment Policy Unknown Workload rule.
    • API token injection - allow Panoptica to securely manage tokens, and inject them into workloads as needed.

Controller deployment settings

  • Persistent storage - the controller will save the policy in persistent storage (disk), to be available after a pod restart, without having to copy it from the server. Requires 128MB of storage.
  • Minimal number of controller replicas - the minimum number of controller replicas to run, to provide High Availability.
  • Use Internal Registry - when enabled, the Panoptica deployment will use images stored in an internal registry, rather than Cisco's registry.
  • External HTTPs proxy - if your cluster has an HTTP proxy configured, enable this switch, and set the value to the address of the proxy.

Other settings

  • Enable fuzz test option on APIs - allow fuzz testing to be run on APIs
  • Panoptica policy CR requires deployer-
  • Auto-label new namespaces - When enabled, any new Kubernetes namespace will be labeled to allow Panoptica to protect it (will be shown as "protected")
  • Fail Close - when enabled, workloads and connections will be blocked if the Panoptica controller is not responding.
  1. Click FINISH. The new cluster will appear in the list of clusters.

Similarly, the K8 Controllers page will show an entry for the new cluster, with status Pending (indicating the controller has not yet been deployed).

  1. Hover over the cluster, and click the download arrow. This will show instructions to install the Panoptica controller on your cluster. There will also be a link to download and run the installer, as a YAML or script file. Save the file on the machine or VM from which you will deploy the controller (running the Kubernetes CLI).

Deploy the Panoptica controller to the cluster

Follow the steps shown onscreen (in the Installation Info box) to deploy the controller to the cluster.

You can deploy the controller on one or more namespaces in the cluster. Namespaces that have a controller deployed will be protected by Panoptica.

Typically, these are the steps to deploy the controller:

  1. Add a label to all the K8S cluster namespaces that will be controlled by the controller, using this command, for a single namespace (repeat this for additional, individual namespaces in the cluster):
kubectl label namespace <name> SecureApplication-protected=full  --overwrite

Or this, for all namespaces:

kubectl label namespace $(kubectl get namespaces | awk '{print$1}' | grep -v -e NAME -e kube-public -e kube-system -e istio-system -e portshift) SecureApplication-protected=full  --overwrite

πŸ“˜

Note

The label SecureApplication-protected=full configures the controller to apply both Connection and Deployment Policies on the namespace (see Runtime Policies). You can use these options to restrict the enforcement of Panoptica Policies on the namespace:

  • SecureApplication-protected=full - Controller will enforce both connection policy and deployment policy in the labeled namespace
  • SecureApplication-protected=connections-only - Controller will enforce only connection policy in the labeled namespace
  • SecureApplication-protected=deployment-only - Controller will enforce only deployment policy in the labeled namespace
  • SecureApplication-protected=disabled - Controller will not enforce connection policy or deployment policy in the labeled namespace
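The four label values above can be wrapped in a small guard before applying them, so that a typo does not silently leave a namespace unprotected. This is an illustrative sketch, not part of the product; my-namespace is a placeholder namespace name:

```shell
# Sketch: validate a SecureApplication-protected label value before applying it.
# The four valid values are those listed in the note above.
valid="full connections-only deployment-only disabled"
value="connections-only"   # the enforcement level to apply

case " $valid " in
  *" $value "*)
    # Value is valid -- build the labeling command (my-namespace is a placeholder).
    cmd="kubectl label namespace my-namespace SecureApplication-protected=$value --overwrite"
    echo "$cmd"
    ;;
  *)
    echo "invalid label value: $value" >&2
    exit 1
    ;;
esac
```

Echoing the command first lets you review it; drop the echo (or pipe to sh) to apply it for real.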
  1. Extract the installation scripts:
tar -xzvf newCluster.tar.gz
  1. Run this command to deploy the controller:
./install_bundle.sh

πŸ“˜

Note

The downloaded installation bundle is unique per cluster. Once the Panoptica controller is installed on the cluster, the same installation bundle cannot be used for another Kubernetes cluster.

After the controller is deployed, the K8 Controllers page on the Panoptica Console will show the status of the controller as 'Active'.


Deploy in Multi-cluster environments

You can deploy Panoptica in a multi-cluster environment. In this type of environment, you can define Panoptica environments that span multiple clusters, and apply Panoptica policies to workloads running on different clusters.

To apply Panoptica to multiple clusters, select the Supports multi cluster communication option for each cluster, when defining the cluster in Panoptica.

Chain of trust

Panoptica requires that a chain of trust be established between the clusters in a multi-cluster environment. This can be based on certificates that Panoptica generates when deploying the controller on each cluster, or on your existing certificates, if they exist.

When you run the script to deploy the controller, with the multi cluster communication option selected, you will be prompted to select the certificates to be used to establish the chain of trust.

The command line to run the script is this:

./install_bundle.sh -c <absolute path to certs folder>

If the path to the certs folder exists, and there are certificates in it, they will be used. Otherwise, Panoptica will create the folder and the certificates.
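A minimal sketch of that invocation, assuming a shared certificates folder (CERTS_DIR is a placeholder path); use the same certificate material on every cluster so they share one chain of trust:

```shell
# Sketch: multi-cluster install with a shared certs folder.
# CERTS_DIR is a placeholder path -- choose your own location.
CERTS_DIR="${CERTS_DIR:-$HOME/panoptica-certs}"

# Report whether existing certificates will be reused, per the behavior above.
if [ -d "$CERTS_DIR" ] && [ -n "$(ls -A "$CERTS_DIR" 2>/dev/null)" ]; then
  echo "using existing certificates in $CERTS_DIR"
else
  echo "no certificates found - the installer will create them in $CERTS_DIR"
fi

# The actual command to run, echoed here for review:
echo "./install_bundle.sh -c $CERTS_DIR"
```

Because the installer takes an absolute path, expanding from $HOME (or another absolute base) keeps the argument valid regardless of the working directory.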

Uninstall the Panoptica controller

  1. Navigate to the Deployments page (use the Navigation menu on the left).
  2. Select the CLUSTERS tab.
  3. Select the cluster you want to delete, and click on the trash icon.
  1. Follow the onscreen instructions to delete the cluster.

Uninstall the Panoptica controller from the cluster using the downloaded installer

If the downloaded installation bundle is present, run the following:

./install_bundle.sh --uninstall

If API token injection is enabled for the cluster, the uninstall script will not remove the Vault instance and the stored tokens by default. Run the following to remove the Vault instance and the stored tokens:

./install_bundle.sh --uninstall --force-remove-vault

Uninstall the Panoptica controller from the cluster without the downloaded installer

If the downloaded installer is not present, run the uninstall script stored in the cluster using the following command:

kubectl get cm -n portshift portshift-uninstaller -o jsonpath='{.data.config}' | bash

If API token injection is enabled for the cluster, the uninstall script will not remove the Vault instance and the stored tokens by default. To remove the Vault instance and the stored tokens, run the following:

kubectl get cm -n portshift portshift-uninstaller -o jsonpath='{.data.config}' | FORCE_REMOVE_VAULT="TRUE" bash
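A defensive variant of the command above first checks that the uninstaller ConfigMap exists before piping anything to bash. This is a sketch; it assumes kubectl is configured against the target cluster:

```shell
# Sketch: confirm the uninstaller ConfigMap exists before running it.
NS="portshift"
CM="portshift-uninstaller"

if kubectl get cm -n "$NS" "$CM" >/dev/null 2>&1; then
  # ConfigMap found -- run the stored uninstall script, as above.
  kubectl get cm -n "$NS" "$CM" -o jsonpath='{.data.config}' | bash
else
  echo "ConfigMap $CM not found in namespace $NS - is the controller installed?" >&2
fi
```

To also remove the Vault instance and stored tokens, prefix the bash invocation with FORCE_REMOVE_VAULT="TRUE", as shown earlier.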