PowerProtect Data Manager 19.9 SAP HANA Agent User Guide

Configure the SAP HANA parallel backup setting

The number of streams that the storage unit uses during an SAP HANA backup varies, depending on the number and type of parallel operations performed at a given time. You can set the SAP HANA parameter parallel_data_backup_backint_channels to specify the number of channels to use for the backup.

The SAP HANA agent requires one DD stream for each backed-up pipe. For example, if an SAP HANA scale-out system has 12 running services, then 12 streams are required to back up the data. Starting with SAP HANA SPS 09, each service can also back up multiple logs for each backup, as controlled by the database parameter max_log_backup_group_size.

For a multistream backup with SAP HANA SPS 11 or later, the SAP HANA agent can use multiple SAP HANA channels to write the backup data for each service. The SAP HANA agent uses a separate channel to write each stream of data to the DD system. To specify the number of channels to use for the backup, up to a maximum of 32 channels, you can set the SAP HANA parameter parallel_data_backup_backint_channels. SAP HANA opens the corresponding number of pipe files for the backup, and the agent saves each stream as a separate save set.

For example, if the parallel_data_backup_backint_channels parameter is set to 12 on the SAP HANA database server, then 12 streams are used for the backup, which produces 12 save sets.
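The parameter lives in the backup section of global.ini and can be changed with a SQL statement, for example through hdbsql. The following is a minimal sketch, not a definitive procedure: the host, port, and credentials are placeholders, and you should confirm the target layer (SYSTEM shown here) for your deployment.

```shell
# Sketch: set parallel_data_backup_backint_channels to 12 channels
# (maximum supported value is 32). Host, port, and credentials below
# are placeholders -- substitute values for your own system.
hdbsql -n hana-host:30013 -u SYSTEM -p '<password>' \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
   SET ('backup', 'parallel_data_backup_backint_channels') = '12'
   WITH RECONFIGURE"
```

With this setting in place, the next data backup opens 12 pipe files and the agent writes 12 save sets, as described above.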

A restore uses the same number of streams as the backup, and ignores the parallel_data_backup_backint_channels parameter setting.

During an SAP HANA backup or restore, the storage unit typically uses the following number of streams:

Number of services x max_log_backup_group_size
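As a quick check, the figure above can be computed directly. The values below are illustrative only, assuming a 12-service scale-out system with max_log_backup_group_size set to 4.

```shell
# Illustrative stream estimate: number of services x max_log_backup_group_size.
services=12                    # running services in the example scale-out system
max_log_backup_group_size=4    # assumed value of the SAP HANA parameter
streams=$(( services * max_log_backup_group_size ))
echo "Streams required: $streams"
```

In this example the storage unit would need 48 streams available to avoid hitting the streams limit described below.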

Due to the design of SAP HANA log backups, an SAP HANA system cannot wait until a stream is available because waiting can negatively affect the database performance.

If the DD system runs out of streams during a backup, the backup fails (although not immediately) with the following error message in the operational log:

153004:hdbbackint: Unable to write to a file because the streams limit was exceeded.
The error message is: [5519] [16805] [140261664245536] Tue May 10 06:45:23 2016
        ddp_write() failed Offset 0, BytesToWrite 317868, BytesWritten 0 Err: 5519-Exceeded streams limit

If the DD system runs out of streams during a restore, then the restore fails (although not immediately) with the following error message in the operational log:

163971 11/28/2016 06:55:59 AM  hdbbackint SYSTEM critical Unable to read from a file because the streams limit was exceeded.
The error message is: [5519] [60299] [140167084230432] Mon Nov 28 06:55:59 2016
        ddp_read() failed Offset 192, BytesToRead 262144, BytesRead 0 Err: 5519-nfs readext remote failed (nfs: Resource (quota) hard limit exceeded)
