PowerProtect Data Manager 19.9 Kubernetes User Guide

Recommendations and considerations when using a Kubernetes cluster

Review the following information that is related to the deployment, configuration, and use of the Kubernetes cluster as an asset source in PowerProtect Data Manager:

NodePort service requires port 30095

PowerProtect Data Manager creates a NodePort service on the Kubernetes cluster to download logs from the powerprotect-k8s-controller pod. The NodePort is opened on port 30095. Ensure that there are no firewalls blocking this port between the PowerProtect Data Manager appliance and the Kubernetes cluster. For a Kubernetes cluster in a public cloud, see the documentation of your cloud provider for instructions on opening this port.

By default, PowerProtect Data Manager connects to the node on which the powerprotect-k8s-controller pod is running to download the log.
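For example, to confirm that the NodePort service exists and that port 30095 is reachable from the PowerProtect Data Manager appliance, you can run commands similar to the following minimal sketch. The service and namespace names are those created by PowerProtect Data Manager; the node IP address is a placeholder, and the nc utility is assumed to be available:

    # kubectl get service powerprotect-controller -n powerprotect
    # nc -zv <worker-node-IP> 30095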

Add line to custom-ports file when not using port 443 or 6443 for Kubernetes API server

If a Kubernetes API server listens on a port other than 443 or 6443, an update is required to the PowerProtect Data Manager firewall to allow outgoing communication on the port being used. Before you add the Kubernetes cluster as an asset source, perform the following steps to ensure that the port is open:

  1. Log in to PowerProtect Data Manager, and change the user to root.
  2. Add a line to the file /etc/sysconfig/scripts/custom-ports that includes the port number that you want to open.
  3. Run the command service SuSEfirewall2 restart.

Repeat this procedure after a PowerProtect Data Manager update, restart, or server disaster recovery.
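For example, if the Kubernetes API server listens on port 8443 (an example value; substitute the port that your API server actually uses), the steps above might look like the following minimal sketch, run as the root user:

    # echo "8443" >> /etc/sysconfig/scripts/custom-ports
    # service SuSEfirewall2 restart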

Log locations for Kubernetes asset backup and restore operations and pod networking

All session logs for Kubernetes asset protection operations are pulled into the /logs/external-components/k8s folder on the PowerProtect Data Manager host. If you cannot locate the logs in this location, ensure that a firewall is not blocking the PowerProtect Data Manager NodePort service port 30095, and that this port is reachable from all Kubernetes worker and control plane nodes. If you use Calico pod networking, ensure that the cluster CIDR block matches the Calico CIDR block.
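For example, you can compare the two CIDR blocks by checking the --cluster-cidr setting of the kube-controller-manager and the Calico IP pool. The calicoctl command and the pool name default-ipv4-ippool are assumptions that depend on your Calico installation:

    # kubectl cluster-info dump | grep -m 1 -- --cluster-cidr
    # calicoctl get ippool default-ipv4-ippool -o wide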

Obtaining logs from a Tanzu Kubernetes guest cluster when using a private network

The Kubernetes log files are typically pulled automatically, whether you use a public or private network. When using a private network for a Tanzu Kubernetes guest cluster, if the logs are not pulled automatically, perform the following steps on each guest cluster to enable the CNDM service to pull the log files:

  1. Run the following command to modify the service running in the PowerProtect namespace:

    # kubectl edit service powerprotect-controller -n powerprotect

    • Change the NodePort service type to LoadBalancer.

    • Update the spec.ports.port to listen on port 30095.

  2. Run the following command to verify the service status:

    # kubectl get service powerprotect-controller -n powerprotect

    In the output, make note of the external IP address. For example:

    NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
    powerprotect-controller   LoadBalancer   198.xx.xx.xx   10.xx.xx.xx   30095:30095/TCP    22h
    
  3. In the PowerProtect namespace, create a config map that is named ppdm-custom-config, and use the external IP address of the service as the host and 30095 as the port. For example:

    # kubectl create configmap ppdm-custom-config -n powerprotect --from-literal=k8s.ppdm.service.host=10.xx.xx.xx --from-literal=k8s.ppdm.service.port=30095
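After step 1, the relevant part of the service specification might look similar to the following fragment. The targetPort and protocol values shown here are illustrative assumptions; keep the values that the service already uses:

    spec:
      type: LoadBalancer
      ports:
      - port: 30095
        targetPort: <existing target port>
        protocol: TCP

To confirm that the config map from step 3 was created with the expected values, you can run:

    # kubectl describe configmap ppdm-custom-config -n powerprotect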

PVC parallel backup and restore performance considerations

To limit the impact on system performance, PowerProtect Data Manager supports only five parallel namespace backups and two parallel namespace restores per Kubernetes cluster. PVCs within a namespace are backed up and restored sequentially.

You can queue up to 100 namespace backups across protection policies in PowerProtect Data Manager.

Overhead of PowerProtect Data Manager components on Kubernetes cluster

At any time during a backup, the typical footprint of the PowerProtect Data Manager components (Velero, PowerProtect Controller, cProxy) is less than 2 GB of memory and four CPU cores. This usage is not sustained and occurs only during the backup window.

The following resource limits are defined on the PowerProtect pods, which are part of the PowerProtect Data Manager stack:

  • Velero maximum resource usage: 1 CPU core, 256 MiB memory
  • PowerProtect Controller maximum resource usage: 1 CPU core, 256 MiB memory
  • PowerProtect cProxy pods (maximum of 5 per cluster): Each cProxy pod typically consumes less than 300 MB memory and less than 0.8 CPU cores. These pods are created and terminated within the backup job.
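To view the limits that are applied to a pod such as the PowerProtect Controller, you can, for example, list the pods in the powerprotect namespace and describe the controller pod. The exact pod name is generated at deployment time, so substitute the name from the first command:

    # kubectl get pods -n powerprotect
    # kubectl describe pod <powerprotect-k8s-controller-pod-name> -n powerprotect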

Only Persistent Volumes with VolumeMode Filesystem supported

Backup and recovery of Kubernetes cluster assets in PowerProtect Data Manager is only supported for Persistent Volumes with the VolumeMode Filesystem.
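For example, to list the volume mode of each persistent volume and identify any Block-mode volumes that are not eligible for protection, you can use a kubectl custom-columns query similar to the following:

    # kubectl get pv -o custom-columns=NAME:.metadata.name,VOLUMEMODE:.spec.volumeMode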

Objects using a PVC are scaled down before the restore starts

The following activities occur before a PVC restore to the original namespace or an existing namespace when PowerProtect Data Manager detects that the PVC is in use by a Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, or Replication Controller:

  • PowerProtect Data Manager scales down any objects using the PVC.
  • PowerProtect Data Manager deletes the DaemonSet and any pods that use the PVCs.

Upon completion of the PVC restore, any objects that were scaled down are scaled back up, and any objects that were deleted are re-created. Ensure that you shut down any Kubernetes jobs that actively use the PVC before running a restore.

NOTE If PowerProtect Data Manager is unable to reset the configuration changes due to a controller crash, it is recommended to delete the Pod, Deployment, StatefulSet, DaemonSet, ReplicaSet, or Replication Controller from the namespace, and then perform a Restore to Original again on the same namespace.
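To identify the pods that currently mount a given PVC before you run the restore, you can, for example, filter the pod list with a jq expression. The namespace and PVC names are placeholders, and jq must be available on the host where you run the command:

    # kubectl get pods -n <namespace> -o json | jq -r '.items[] | select(.spec.volumes[]?.persistentVolumeClaim.claimName == "<pvc-name>") | .metadata.name'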

Restore to a different namespace that already exists can result in a mismatch between the UID of the pod and the UID of persistent volume files

A PowerProtect Data Manager restore of files in persistent volumes restores the UID and GID along with the contents. When performing a restore to a different namespace that already exists, and the pod consuming the persistent volume is running with restricted Security Context Constraints (SCC) on OpenShift, the UID assigned to the pod upon restore might not match the UID of the files in the persistent volumes. This UID mismatch might result in a pod startup failure.

For namespaces with pods running with restricted SCC, Dell Technologies recommends one of the following restore options:

  • Restore to a new namespace where PowerProtect Data Manager restores the namespace resource as well.
  • Restore to the original namespace if this namespace still exists.
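On OpenShift, the UID that a pod running with restricted SCC receives is taken from the namespace annotation openshift.io/sa.scc.uid-range. For example, to review this range when diagnosing a UID mismatch, you can run the following command with the oc client (the namespace name is a placeholder):

    # oc describe namespace <namespace> | grep sa.scc.uid-range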
