December 14th, 2012 10:00

Data Domain Slow Down

We are new to using a Data Domain device for Oracle RMAN backups, and I was wondering if someone might provide some insight into a couple of questions that have come up while testing the device. The server is an IBM AIX 6.1 P7 frame running with a VIO front end. The current production backup is done using a VTL with an 8Gb fibre connection, and the server is a media master. The Data Domain connection also goes through the VIO and uses a 10Gb link. The database is approximately 1TB in size.

When an RMAN full backup is done to the VTL, the backup completes in about 35 to 40 minutes. However, when a backup of the same database is done to the Data Domain device, the length jumps to 1 hour 20 minutes. There is also another database which is 283GB in size, and it takes about 1 hour 10 minutes to back up to the Data Domain - the confusion is why backups of 283GB and 1TB would take such similar times. In addition, a couple of times we have found that a backup to the Data Domain really slows down and an unmount/remount of the device needs to be done to return the backup to at least a consistent speed.

While I was writing this discussion a backup of the 1TB database was running to the Data Domain, and at the same time I was running an iostat -VF /datadomain/OracleMMA 30 1000 command - suddenly there was a significant drop-off in speed, as seen below:

FS Name               % tm_act      Kbps    tps   Kb_read   Kb_wrtn
/datadomain/OracleMM         -   70327.4   68.7         0   2109440
/datadomain/OracleMM         -   17331.6   17.0         0    519992

From that point on the device has not regained its speed - earlier I was seeing rates like 2.9 to 3.0GB every 30 seconds. I have no idea what would cause such a drop - it should be noted that this backup is the only one being written to the Data Domain at this time.
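If it would help to see whether the slowdown lines up with NFS-level retransmissions or RPC timeouts, something like the following could be run on the AIX client alongside iostat. This is only a sketch; the sleep interval and the interface name (en0) are assumptions, not details from this thread.

# Snapshot the NFS client RPC statistics before and during the slow period;
# growing retrans/timeout counters would point at the network or NFS layer.
nfsstat -c > /tmp/nfsstat_before.txt
sleep 300
nfsstat -c > /tmp/nfsstat_during.txt
diff /tmp/nfsstat_before.txt /tmp/nfsstat_during.txt

# Check the 10Gb interface for errors or drops while the backup is running
# (replace en0 with the interface actually carrying the Data Domain traffic).
netstat -i
entstat -d en0 | grep -i -E "error|drop"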

We have separated the Oracle data files and the archive logs into separate directories. The RMAN command sets FILESPERSET=1, and encryption/compression are not set. What I'm trying to figure out is whether we have configured the device incorrectly, or whether there are some methods we can use to account for the differences in throughput. The mount options are as follows:

DD01 /data/col1/DBlogs /datadomain/DBlogs nfs3 Dec 14 11:09 cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600

DD01 /data/col1/OracleMMA /datadomain/OracleMMA nfs3 Dec 14 11:09 cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600
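For reference, a minimal sketch of the kind of RMAN run block described above (FILESPERSET 1, no RMAN compression or encryption) writing to the Data Domain NFS mounts. The channel count, channel names, and format strings are illustrative assumptions, not the site's actual script:

rman target / <<'EOF'
RUN {
  # Disk channels pointed at the Data Domain NFS mount for datafiles
  ALLOCATE CHANNEL dd1 DEVICE TYPE DISK FORMAT '/datadomain/OracleMMA/%d_%U';
  ALLOCATE CHANNEL dd2 DEVICE TYPE DISK FORMAT '/datadomain/OracleMMA/%d_%U';
  # One datafile per backup set, no RMAN compression or encryption,
  # so the Data Domain can deduplicate effectively
  BACKUP AS BACKUPSET FILESPERSET 1 DATABASE;
  # Archive logs go to the separate DBlogs mount
  BACKUP FILESPERSET 1 ARCHIVELOG ALL FORMAT '/datadomain/DBlogs/%d_arc_%U';
}
EOF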

Nick: Thank you - I will talk to the SA to confirm, but I believe the HBAs are connected to the DD and aren't shared with the SAN. Thanks for the document.

Nick - I spoke with the SA and he confirmed that a dedicated HBA is used to connect the LPAR to the Data Domain. The backup I was running when the iostat data was obtained took 3 hours and 42 minutes to finish.


94 Posts

December 14th, 2012 12:00

dmflinn

Please review page 7 of the guide to ensure DD VTL is tuned correctly for VIO.

"The VIO server can have multiple physical HBAs assigned to it. At this time, however, EMC Data Domain requires that physical HBAs be dedicated to the connection to the Data Domain system. See Figure 2 for supported NPIV ..."
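If it helps to confirm from the LPAR side which Fibre Channel adapters are involved, something along these lines could be run. This is a sketch only; the adapter name fcs0 is a placeholder, and the NPIV mapping check has to be run on the VIO server itself as padmin.

# On the client LPAR: list the Fibre Channel adapters and their details
lsdev -Cc adapter | grep fcs
lscfg -vl fcs0

# On the VIO server (padmin): show which virtual FC adapters map to which
# physical HBAs, to verify the Data Domain path uses a dedicated HBA
lsmap -all -npiv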

Nick

1 Attachment

9 Posts

December 14th, 2012 13:00

Nick:

Thanks for the information. I will see if that is how the device is configured - I'm not the SA but the DBA, so I will forward it to him. Thanks, Dennis

643 Posts

December 17th, 2012 23:00

Please refer to the white paper 'Oracle RMAN Design Best Practices with EMC Data Domain' at the link below in Everything Oracle at EMC:

https://community.emc.com/docs/DOC-18775

It includes:

- Best practices for Oracle RMAN with Data Domain deduplication storage (page 15)

- Best practice for configuring RMAN options (page 18)

You can also check 'tuning parameter best practices for Oracle RMAN' on page 14.

9 Posts

December 18th, 2012 11:00

Zhaos2: Thank you. When the backup was originally set up we followed that document, and I have checked that the recommendations are present in the backup.

9 Posts

December 18th, 2012 14:00

Sorry, I should have pointed out that the Oracle Note's guidance about not using the noac option applies to all operating systems.

9 Posts

December 18th, 2012 14:00

There seems to be a real conflict between what EMC states in the best practices document and what Oracle Support notes have to say about the mount options. In the EMC document, on page #13 in Table #1, the NFS mount options for Linux and other UNIX are sw,hard,rsize=32768,wsize=32768,nolock, and for RAC it states to add "noac" or "actimeo=0". However, if you look at Oracle Support Note # 359515.1, dated 11/18/2012, for a single-instance non-RAC database on AIX the mount options for Oracle datafiles are rw,bg,hard,rsize=32768,wsize=32768,vers=3,cio,intr,timeo=600,proto=tcp - nolock isn't an option on AIX - kind of a difference. Then, when talking about a RAC installation, the note states:

"For RMAN backup sets, image copies, and Data Pump dump files, the 'NOAC' mount option should not be specified - that is because RMAN and Data Pump do not check this option and specifying this can adversely affect performance." That is in direct conflict with the EMC statements.

256 Posts

December 18th, 2012 15:00

I'm not sure which EMC document you are referring to that you feel states the noac option should be set for backup NFS mount points in a RAC context. However, this is certainly not my understanding. noac is an NFS attribute which should be set in a RAC context for datafiles only. Since RMAN backups are not actively written to by more than one RAC node, it is not necessary to set this attribute for NFS mounts which contain only RMAN backup data. At least that has always been my understanding. Let me know if you have any differing information, please.

9 Posts

December 18th, 2012 19:00

Jeff:

The Oracle RMAN Design Best Practices with EMC Data Domain white paper, dated November 2010.

On page #14 of that document there is a Table #1, Summary of best practice settings. Under NFS Mount Options for an Oracle RAC environment, the setting states "Add the noac or actimeo=0 option to the appropriate settings above (see particular Oracle release notes)". I have attached the document, and the URL reference for it is: https://community.emc.com/message/698169#698169

Dennis


1 Attachment

53 Posts

December 19th, 2012 02:00

Hi DmFlinn,

Above you mention that two databases, one ~1TB in size and a second of 283GB, both take roughly the same time to back up, and you ask why the durations are so similar. In the past I saw something similar when backing up a 5TB database to VTL. The backup would run with lots of writes to the VTL for a while, then the writes would stop but the backup would continue for almost 50% as long again before RMAN finished. It turned out that the DBA had prepared to extend the database, and RMAN was scanning through the empty datafiles that had been created.
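One quick way to check whether that explanation fits here would be to compare allocated versus free space per tablespace. A rough sqlplus sketch (the query itself is mine, not from the thread):

# A large gap between allocated and free space would mean RMAN has a lot of
# never-used blocks and datafiles to walk through during the backup.
sqlplus -s / as sysdba <<'EOF'
SET PAGESIZE 100 LINESIZE 120
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes)/1024/1024/1024, 1) AS allocated_gb,
       ROUND(NVL(f.free_bytes, 0)/1024/1024/1024, 1) AS free_gb
FROM   dba_data_files d
LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS free_bytes
           FROM dba_free_space GROUP BY tablespace_name) f
       ON f.tablespace_name = d.tablespace_name
GROUP  BY d.tablespace_name, f.free_bytes
ORDER  BY allocated_gb DESC;
EXIT
EOF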

However, you also appear to have some other issues. Rather than discussing them on this forum any longer, I suggest you open a call with support - there you will get an SME dedicated to addressing your issues. Go to

https://support.emc.com

Click support by product

On the right hand panel click Data Domain

Then hit - Support Cases

Create New Case

256 Posts

December 19th, 2012 06:00

Obviously that document is not correct. You are correct in stating that noac or actimeo=0 (which are equivalent settings, actually) is not needed for backup data mount points. It is needed for datafiles, though, but only on RAC.

2 Intern

 • 

20.4K Posts

January 2nd, 2013 11:00

VTL functionality requires a license. Why use NFS? Because it allows my DBAs to send data directly to the DD, and it allowed us to stop paying IBM for the RMAN-TDPO integration license for TSM ... we are talking big bucks. Even our big TSM server is no longer connected to DD via FC; we are using NFS over 10G. The only reason we had to get VTL was to continue to do NDMP backups from Celerra/VNX.

28 Posts

January 2nd, 2013 11:00

I am wondering why you would back up over NFS when the HBA option is present. I back up all of our databases to Data Domain. For our larger databases, 1TB+, all backups are done via HBAs connected to a dedicated backup SAN infrastructure. We have no difficulty with consistent backup speeds of 4TB/hour, limited only by our desire not to overload the storage arrays and impact the performance of our production databases. Our NFS backups do suffer from throughput problems; this is frequently a network issue or simply the NFS protocol. I would suggest using DD Boost to reduce the overhead required by the network layer, or utilizing the available SAN.

Darryl Smith

Chief Database Architect

EMC IT

28 Posts

January 2nd, 2013 12:00

Good point, I am not familiar with TSM licensing.  Have you looked at Data Domain Boost?

2 Intern

 • 

20.4K Posts

January 2nd, 2013 13:00

We sure have - the DBAs love it. A full database backup (RMAN) of a 900G database took 4.5 hours using RMAN to DD via NFS. The same full backup using DD Boost for RMAN took 27 minutes. The DBAs thought something was broken; after doing multiple restores from those backups they were finally convinced.

I can't completely get rid of NFS because my DBAs like to run data pump from time to time and DD Boost does not work with expdp.
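For anyone following along, on the RMAN side the DD Boost path replaces the NFS disk channels with SBT channels backed by the Data Domain media management library. A rough sketch of the shape of it; the library path, storage unit, and hostname are placeholders, and the exact PARMS syntax should be taken from the DD Boost for RMAN documentation:

# Hypothetical RMAN run block using DD Boost (SBT) instead of NFS disk channels
rman target / <<'EOF'
RUN {
  ALLOCATE CHANNEL boost1 DEVICE TYPE SBT_TAPE
    PARMS 'SBT_LIBRARY=/path/to/libddobk.so, ENV=(STORAGE_UNIT=oracle_su, BACKUP_HOST=dd01.example.com)';
  BACKUP AS BACKUPSET FILESPERSET 1 DATABASE;
}
EOF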
