How to protect a Tanzu Kubernetes guest cluster with PPDM 19.7
Before adding a Tanzu Kubernetes guest cluster in PowerProtect Data Manager for namespace and PVC protection, complete the prerequisites in Prerequisites to Tanzu Kubernetes guest cluster protection on page 55 of the PowerProtect Data Manager 19.7 Administration and User Guide.
https://dl.dell.com/content/docu102385_powerprotect-data-manager-19-7-administration-and-user-guide.pdf?language=en_US
Enable an asset source for both the vCenter Server and the Kubernetes cluster in the PPDM UI
Steps:
1. From the PowerProtect Data Manager UI, select Infrastructure > Asset Sources, and then click + to reveal the New Asset Source tab.
2. In the pane for each asset source that you want to add, click Enable Source. Do this for both the vCenter Server and the Kubernetes cluster.
The Asset Sources window updates to display a tab for each new asset source.
Add a VMware vCenter Server (Hosting TKG Clusters) as an asset source in the PowerProtect Data Manager UI.
Perform the steps to add a vCenter Server as an asset source on page 47 of the PowerProtect Data Manager 19.7 Administration and User Guide.
https://dl.dell.com/content/docu102385_powerprotect-data-manager-19-7-administration-and-user-guide.pdf?language=en_US
Add a Kubernetes cluster as an asset source in the PowerProtect Data Manager UI.
Perform the following steps to add a Kubernetes cluster as an asset source in the PowerProtect Data Manager UI. After the cluster is added, PowerProtect Data Manager automatically deploys resources on the cluster that enable the backup and recovery of namespaces.
Steps:
1. From the left navigation pane, select Infrastructure > Asset Sources.
2. In the Asset Sources window, select the Kubernetes cluster tab.
3. Click Add.
4. In the Add Kubernetes cluster dialog box, specify the source attributes:
a. Tanzu Cluster -> Move the slider to the right when adding a Kubernetes Tanzu guest cluster for protection of vSphere CSI-based persistent volumes.
b. Select vCenter -> From the list, select the vCenter Server that contains the guest cluster.
c. Name -> Specify the cluster name.
d. Address -> Specify the fully qualified domain name (FQDN) or the IP address of the Kubernetes API server.
e. Port -> Specify the port to use for communication when not using the default port, 443.
NOTE: The use of any port other than 443 or 6443 requires you to open the port on PowerProtect Data Manager first to enable outgoing communication. The procedure is described in Recommendations and considerations when using a Kubernetes cluster on page 193 of the PowerProtect Data Manager Administration and User Guide.
f. Under Host Credentials, click Add to add the service account token for the Kubernetes cluster, and then click Save.
NOTE: The service account must have the following privileges:
- Get/Create/Update/List CustomResourceDefinitions
- Get/Create/Update ClusterRoleBinding for 'cluster-admin' role
- Create/Update 'powerprotect' namespace
- Get/List/Create/Update/Delete all kinds of resources inside 'powerprotect' namespace
- Get/List/Watch all namespaces in the cluster as well as PV, PVC and pods in all these namespaces
The admin-user service account in the kube-system namespace has all these privileges. You can provide the token of this account or of an existing service account with similar privileges. Alternatively, create a service account that is bound to a cluster role that contains these privileges, and then provide the token of that service account (a minimal kubectl sketch follows these steps).
g. Click Verify to review the certificate and token information, and then click Accept.
Upon successful validation, the status for the new credentials updates to indicate Accepted.
h. Click Save. The Kubernetes cluster information that you entered now appears as an entry in the Asset Sources window, with a Discovery status of Unknown.
5. (Optional) If you want to initiate a manual discovery, select the Kubernetes cluster, and then click Discover.
6. Verify that the Discovery Status column indicates OK, and then go to the Assets window.
When the Kubernetes cluster is added as an asset source, a PowerProtect controller is installed on the cluster. This controller is also used to install Velero with the Data Domain object store plug-in and the vSphere plug-in.
The namespaces in the Kubernetes cluster will appear in the Kubernetes tab of the Assets window.
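As a quick check outside the PPDM UI, you can confirm with kubectl that these resources were created on the guest cluster. This is a minimal sketch, assuming an administrative kubeconfig for the guest cluster; the velero-ppdm namespace name shown below is an assumption and may differ in your deployment.

# List the namespaces that PowerProtect Data Manager creates on the cluster
kubectl get namespaces | grep -i -e powerprotect -e velero

# Confirm that the PowerProtect controller pod is running
kubectl get pods -n powerprotect

# Confirm the Velero deployment (the velero-ppdm namespace name is an assumption)
kubectl get pods -n velero-ppdm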
NOTE: If namespace assets are not discovered after adding a Kubernetes cluster asset source, ensure that the bearer token that is provided for the Kubernetes asset source belongs to a service account that has the privileges as specified in step 4.
NOTE: Discovery of a Kubernetes cluster finds namespaces that contain volumes from both container storage interface (CSI) and non-CSI based storage. However, backup and recovery are supported only for CSI-based storage. If you select a namespace that uses non-CSI storage, the backup fails.
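Both the note in step 4 and the note about the bearer token above call for a service account with the listed privileges. The following is a minimal kubectl sketch of creating such an account and retrieving its token; the ppdm-discovery name is illustrative, and the example binds to the built-in cluster-admin role rather than a narrower custom ClusterRole, so adjust it to your security requirements.

# Create a dedicated service account (the ppdm-discovery name is illustrative)
kubectl create serviceaccount ppdm-discovery -n kube-system

# Bind it to cluster-admin, which covers all of the privileges listed in step 4
kubectl create clusterrolebinding ppdm-discovery-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:ppdm-discovery

# Retrieve the bearer token to paste into the Host Credentials dialog.
# On Kubernetes 1.24 and later, token secrets are no longer created automatically;
# use "kubectl -n kube-system create token ppdm-discovery" instead.
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get serviceaccount ppdm-discovery -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode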
Add a VM Direct engine dedicated to Kubernetes workloads in the PPDM UI.
Because cProxy pods running in the Tanzu Kubernetes guest cluster do not have direct access to first class disks (FCDs), a vProxy is deployed in vCenter to protect the guest cluster. This protection requires an external VM Direct engine dedicated to Kubernetes workloads.
There should be a minimum of one VM Direct engine per Supervisor cluster.
Perform the steps to add a VM Direct engine dedicated to Kubernetes workloads on page 52 of the PowerProtect Data Manager 19.7 Administration and User Guide.
https://dl.dell.com/content/docu102385_powerprotect-data-manager-19-7-administration-and-user-guide.pdf?language=en_US
Make sure to specify under Supported Protection Type that the VM Direct engine is intended for Kubernetes Tanzu guest cluster asset protection.
NOTE: When adding a VM Direct engine for Kubernetes guest cluster protection, add a second network interface card (NIC) if the PowerProtect controller pod running in the guest cluster cannot reach the vProxy on the primary network. Provide information for the second NIC.
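One hedged way to test that reachability is to run a throwaway pod in the guest cluster and attempt a TCP connection to the vProxy address (or to another host on the network where the vProxy is, or will be, deployed). This is a sketch, not part of the documented procedure; the address and port below are placeholders for your own values.

# Run a temporary busybox pod and test TCP connectivity from inside the guest cluster
kubectl run nic-check --rm -it --restart=Never --image=busybox -- \
  nc -vz -w 5 <vproxy-address> <vproxy-port>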
Create Kubernetes protection policies to back up namespaces and PVCs.
Steps:
1. From the left navigation pane, select Protection > Protection Policies.
2. In the Protection Policies window, click Add. The Add Policy wizard appears.
3. On the Type page, specify the following fields, and then click Next:
- Name -> Type a descriptive name for the protection policy.
- Description -> Type a description for the policy.
- Type -> For the policy type, select Kubernetes.
4. On the Purpose page, select from the following options to indicate the purpose of the new protection policy group, and then click Next:
- Crash Consistent -> Select this type for point-in-time backup of namespaces.
- Exclusion -> Select this type if there are assets within the protection policy that you plan to exclude from data protection operations.
5. On the Assets page, select one or more unprotected namespaces that you want to back up as part of this protection policy.
If the namespace that you want to protect is not listed, perform one of the following:
a. Click Find More Assets to perform an updated discovery of the Kubernetes cluster.
b. Use the Search box to search by asset name.
6. (Optional) For the selected namespaces, click the link in the PVCs Excluded column, if available, to clear any PVCs that you want to exclude from the backup. By default, all PVCs are selected for inclusion.
7. Click Next. The Schedule page appears.
8. On the Schedule page, click + Backup to create a schedule.
9. On the Add Primary Backup page, specify the backup schedule fields, and then click OK:
- Recurrence -> Specify how often backups occur.
- Create Every -> Specify how often to create a synthetic full backup.
For persistent volumes on VMware first class disks (FCDs), a synthetic full backs up only the blocks changed since the last backup to create a new full backup. Also, namespace metadata is backed up in full with every backup.
- Keep For -> Specify the retention period for the backup.
- Start Time -> Specify the time of day to start initiating backups.
- End Time -> Specify the time of day to stop initiating backups.
10. Click Next. The Summary page appears.
11. Review the protection policy group configuration details, and then click Finish.
12. Click OK to exit the window, or click Go to Jobs to open the Jobs window.
From the Jobs window, you can monitor the progress of the new Kubernetes cluster protection policy backup and associated tasks. You can also cancel any in-progress or queued job or task.
When the new protection policy is created and assets are added to the protection policy, PowerProtect Data Manager performs backups according to the backup schedule.
Manual Backups of Protected Assets.
Once assets have been added to a protection policy, you can perform manual backups by using the Protect Now functionality in the PowerProtect Data Manager UI.
You can use a single manual backup from the Protection > Protection Policies window to back up multiple assets that are protected in the designated protection policy.
To perform this manual backup:
- From the PowerProtect Data Manager UI, select Protection > Protection Policies.
- Select the protection policy that contains the assets that you want to back up, and click Protect Now.
- On the Assets Selection page, select whether you want to back up all assets or choose individual assets that are defined in the protection policy, and then click Next.
- On the Configuration page, select Back up now, and then select from the available backup types.
- Edit the retention period if you want to change the default settings, and then click Next.
- On the Summary page, review the settings, and then click Protect Now.
You can also perform a manual backup from the Infrastructure > Assets window, but only for one asset at a time.
To perform this manual backup:
- From the PowerProtect Data Manager UI, select Infrastructure > Assets.
- Select the Kubernetes tab for the asset type you want to back up. A list of assets appears.
- Select an asset from the table that has an associated protection policy.
- Click Protect Now.
If the backup fails with the error "Failed to create Proxy Pods. Creating Pod exceeds safeguard limit of 10 minutes", verify that the CSI driver is functioning properly and can create snapshots and a PVC from a VolumeSnapshot data source. Also, ensure that you clean up any orphan VolumeSnapshot resources that remain in the namespace.
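A few kubectl checks along these lines can help narrow down the cause. This is a sketch; the CSI driver namespace varies by TKG release, and <namespace> and <snapshot-name> are placeholders for your own values.

# Confirm that the vSphere CSI driver pods are healthy
# (the namespace may be vmware-system-csi or kube-system, depending on the TKG release)
kubectl get pods -n vmware-system-csi

# Check that a VolumeSnapshotClass exists and list VolumeSnapshots in the affected namespace
kubectl get volumesnapshotclass
kubectl get volumesnapshot -n <namespace>

# Remove an orphan VolumeSnapshot that is no longer needed
kubectl delete volumesnapshot <snapshot-name> -n <namespace>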
For a demonstration, refer to this video: