
5 Practitioner • 274.2K Posts

June 12th, 2012 21:00

How to avoid big archive logs when trying to delete millions of rows?


If I have a very big Oracle database, when I try to delete millions of rows it will generate a large , is there any way we can avoid this?




225 Posts

June 12th, 2012 21:00

You might consider using an Oracle partitioned table, and truncating the partition when you want to delete the rows.
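For example, a minimal sketch, assuming a hypothetical range-partitioned table SALES_HIST:

CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01')
);

-- Removing a whole partition is a near-instant DDL operation that
-- generates almost no redo/undo compared with a row-by-row DELETE:
ALTER TABLE sales_hist TRUNCATE PARTITION p_2011;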

643 Posts

June 12th, 2012 21:00

Did you use the NOLOGGING clause?
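For reference, a minimal sketch of how the attribute is set (SALES_HIST and SALES_STAGING are hypothetical names). Note the caveat raised later in this thread: NOLOGGING only affects direct-path operations, so a conventional DELETE is always fully logged.

ALTER TABLE sales_hist NOLOGGING;

-- NOLOGGING only helps direct-path operations such as this one;
-- ordinary DML (INSERT/UPDATE/DELETE) still generates full redo:
INSERT /*+ APPEND */ INTO sales_hist SELECT * FROM sales_staging;
COMMIT;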

46 Posts

June 13th, 2012 02:00

Great if you use Oracle Data Guard ;-)

225 Posts

June 13th, 2012 02:00

Data Guard? I never thought it could do this.

Bart, could you give me more details on this method?

46 Posts

June 13th, 2012 03:00

Actually my comment was a bit cynical...

If you use Data Guard then the source DB enforces logging, so NOLOGGING will not be honoured and the delete will still generate lots of archive log data - and therefore a performance impact.

Which shows why Data Guard is (IMO) not always an enterprise-class DR tool.

63 Posts

June 13th, 2012 03:00

Eddy

If NOLOGGING is used for a table or tablespace, changes to the database will not be propagated to the Data Guard instance.

So whilst using NOLOGGING may seem performant, it can mess up your recoverability, DR and backups.

63 Posts

June 13th, 2012 04:00

"Data Guard then the source DB enforces logging." 

Data Guard does not enforce logging. Enabling FORCE LOGGING is a best practice, and a sensible one when using Data Guard; here is a link to the Oracle documentation.

http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_dg.htm#CEGGJBBA

Also worth looking at is the My Oracle Support note: The Gains and Pains of Nologging Operations [ID 290161.1]

On to the original question:

"If I have a very big Oracle database, when I tried to delete million rows, it will generate a large , is there anyway we can avoid this ?"

A Large what?

Large UNDO? Use a ROWNUM delimiter and commit, to restrict the number of rows deleted at any one time and minimize undo/rollback size, e.g.:
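A minimal PL/SQL sketch of that batched delete, assuming a hypothetical table BIG_TABLE and a hypothetical cutoff on CREATED_DATE:

-- Delete in batches of 10,000 rows, committing between batches,
-- so the undo/rollback footprint stays small:
BEGIN
  LOOP
    DELETE FROM big_table
     WHERE created_date < DATE '2011-01-01'
       AND ROWNUM <= 10000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/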

Large REDO? To reduce redo generation for large deletes:

 

1. Design: if you know you need to delete millions of rows of data regularly (an archiving policy), then, as previously advised, make sure that the data is in its own table or partition and use TRUNCATE.

2. If you delete 1 million rows and keep 10,000, it may be more effective to recreate the table online and drop the original.

3. Double-check whether you need to insert/delete the data in a regular table in the first place. Is the data persistent? Can the work be done using external or global temporary tables? (See the sketch after this list.)
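For point 3, a minimal sketch of a global temporary table (STAGING_WORK is a hypothetical name). Its rows disappear at commit, so they never need a mass DELETE, and DML on temporary tables generates far less redo:

CREATE GLOBAL TEMPORARY TABLE staging_work (
  id      NUMBER,
  payload VARCHAR2(100)
) ON COMMIT DELETE ROWS;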

If you must make any NOLOGGING changes, it is advised to immediately take a backup of the affected tablespace(s). If RMAN is used, even an incremental level 1 backup will suffice instead of a full backup. The physical standby database can then be updated by restoring the affected file(s).
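For example, in RMAN (assuming the affected tablespace is the hypothetical USERS):

RMAN> BACKUP INCREMENTAL LEVEL 1 TABLESPACE users;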

Using NOLOGGING with Data Guard and other remote replication technologies, you run the risk of ORA-26040 errors (Data block was loaded using the NOLOGGING option) - see the My Oracle Support document: "ORA-1578 / ORA-26040 Corrupt blocks by NOLOGGING - Error explanation and solution [ID 794505.1]"

46 Posts

June 13th, 2012 05:00

Ah, I wasn't aware that you had to enable force logging manually. Good point. It proves even more that DG is tricky as an enterprise DR solution. Without force logging you risk having inconsistent data at the standby even if DG is enabled and replicating. With force logging you have serious overhead on some transactions...
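For reference, force logging is enabled manually with a single statement, at either the database or the tablespace level (USERS is a hypothetical tablespace name):

ALTER DATABASE FORCE LOGGING;

-- Per-tablespace variant:
ALTER TABLESPACE users FORCE LOGGING;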

5 Practitioner • 274.2K Posts

June 13th, 2012 19:00

NOLOGGING was set on the tables, but it still can't stop generating archive logs?

63 Posts

June 14th, 2012 04:00

Two things:

1. Even with NOLOGGING, some redo is still generated.

2. It is possible/probable that the tablespace or database is in FORCE LOGGING mode, so your attempts to use NOLOGGING at the table level are futile. (You can check as shown below.)
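A quick way to check both, using the standard dictionary views:

SELECT force_logging FROM v$database;

SELECT tablespace_name, logging, force_logging
  FROM dba_tablespaces;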

Is the delete causing the archive log destination/FRA to fill up? Is this causing a performance problem? Is it causing the backup to overrun, or some other problem?

What you are encountering here is a schema/application issue, which is best resolved through schema design.

643 Posts

June 19th, 2012 00:00

You may create a table with the records you want to retain,

truncate the source table, and

reinsert the retained records.
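A minimal sketch of that approach (BIG_TABLE and the retention predicate are hypothetical):

-- 1. Preserve the rows to keep:
CREATE TABLE big_table_keep AS
  SELECT * FROM big_table
   WHERE created_date >= DATE '2012-01-01';

-- 2. Truncate the source table (DDL, minimal redo/undo):
TRUNCATE TABLE big_table;

-- 3. Reinsert the retained rows; direct path keeps redo low
--    if the table is set to NOLOGGING:
INSERT /*+ APPEND */ INTO big_table
  SELECT * FROM big_table_keep;
COMMIT;

DROP TABLE big_table_keep;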

5 Practitioner • 274.2K Posts

June 19th, 2012 02:00

Allanr,

Thanks for sharing; it does cause a performance issue and the FRA to fill up.

Or I may create a table with the records I want to retain, truncate the source table, and reinsert the retained records.

Thanks
