ORA-01555 on data exporting
We have a BW system; the customer needs to archive their old data via export because of a storage capacity limit. During the export, ORA-01555 occurs.
Any suggestions?
WBGaynor
June 11th, 2012 07:00
Probably the undo_retention init.ora parameter is set too low relative to the time needed to complete the export, as the Oracle export engine is trying to create a read-consistent point-in-time export. The undo tablespace may need to be increased as well.
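As a rough sketch of that tuning (the parameter and views are standard Oracle, but the retention value, tablespace name, and datafile path below are illustrative assumptions, not values from this system):

```sql
-- Check the current retention window (seconds) and undo tablespace size
SHOW PARAMETER undo_retention;

SELECT tablespace_name, SUM(bytes)/1024/1024 AS size_mb
  FROM dba_data_files
 WHERE tablespace_name = 'UNDOTBS1'   -- assumed undo tablespace name
 GROUP BY tablespace_name;

-- Raise the retention so undo outlives the export run (14400s = 4h, example only)
ALTER SYSTEM SET undo_retention = 14400 SCOPE=BOTH;

-- Give the undo tablespace room to actually honor that retention
ALTER DATABASE DATAFILE '/u01/oradata/BW/undotbs01.dbf' RESIZE 8G;
```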
Have a look at metalink (my support) notes:
ORA-1555 When Error Exporting A Large Database [ID 174782.1]
Master Note for Export and Import [ID 1264691.1]
bg
reseach
June 12th, 2012 00:00
Thanks for your suggestion.
I do have a question after looking at note 174782.1: if we follow its suggestion and use CONSISTENT=N during the export, will it cause any data integrity issues?
reseach
June 13th, 2012 19:00
It is 10gR2, not as old as 8i, right?
But it is running on an old PA system; I wonder whether a hardware upgrade could resolve this.
dba_hba
June 14th, 2012 02:00
Eddy
If the issue is related to data volume then, as Bill says, try increasing undo_retention or giving more storage to the UNDO tablespace.
Also, are they using the QUERY parameter to limit the amount of data exported?
If they are using 10gR2 then they should consider using datapump expdp rather than exp.
The performance difference is considerable even without using the parallel capabilities.
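A minimal Data Pump run for this kind of archive job might look like the following parameter file (the schema, table, directory, and date predicate are placeholders for illustration; the QUERY line shows how to limit the rows exported, as mentioned above):

```sql
-- bw_archive.par : illustrative expdp parameter file
-- run as:  expdp system parfile=bw_archive.par
directory=DATA_PUMP_DIR
schemas=SAPBW
dumpfile=bw_archive_%U.dmp
logfile=bw_archive.log
parallel=4
-- Restrict each table to old rows only (placeholder table and column)
query=SAPBW.FACT_TABLE:"WHERE load_date < DATE '2011-01-01'"
```

Note that a shorter-running, parallel expdp job is also less likely to hit ORA-01555 in the first place, since each worker holds its read-consistent view for less time.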
Not sure how you arrive at "hardware upgrade could resolve this", unless you mean making more storage available for UNDO.
There are programmatic options, such as saving archive data to external tables:
SQL> create table system.logon_external
2 organization external
3 ( type oracle_datapump
4 default directory DATA_PUMP_DIR
5 location('SOE_LOGON_DATA01.dmp', 'SOE_LOGON_DATA02.dmp','SOE_LOGON_DATA03.dmp','SOE_LOGON_DATA04.dmp' )
6 ) parallel 4
7 as select /*+ PARALLEL(LOGON,4)*/ * from SYSTEM.LOSOE_LOGONIP
8 /
Table created.
Elapsed: 00:00:05.85
SQL> !ls -altr /u01/app/oracle/admin/vm02/dpdump/*dmp
-rw-r----- 1 oracle dba 197926912 Sep 10 11:26 /u01/app/oracle/admin/vm02/dpdump/SOE_LOGON_DATA04.dmp
-rw-r----- 1 oracle dba 195923968 Sep 10 11:26 /u01/app/oracle/admin/vm02/dpdump/SOE_LOGON_DATA03.dmp
-rw-r----- 1 oracle dba 198025216 Sep 10 11:26 /u01/app/oracle/admin/vm02/dpdump/SOE_LOGON_DATA02.dmp
-rw-r----- 1 oracle dba 197861376 Sep 10 11:26 /u01/app/oracle/admin/vm02/dpdump/SOE_LOGON_DATA01.dmp
SQL>
SQL> select count(*) from system.logon_external
2 /
COUNT(*)
----------
45821936
Elapsed: 00:00:00.46
SQL> truncate table SYSTEM.LOSOE_LOGONIP
2 /
Table truncated.
Elapsed: 00:00:00.15
SQL> INSERT /*+ PARALLEL(LOSOE_LOGONIP ,4) */ INTO SYSTEM.LOSOE_LOGONIP
select /*+ PARALLEL(LOGON,4)*/ * from SYSTEM.logon_external
2 3
SQL> /
45821936 rows created.
Elapsed: 00:00:22.70
SQL> commit;
Commit complete.
Elapsed: 00:00:00.03
SQL> @segs SYSTEM.LOSOE_LOGONIP % %
SEGMENT NAME TYPE TABLESPACE INITL NEXT %INC EXT/MAX EXT% SIZE
--------------------------------------------- ---------- -------------------- ----- ------ ---- --------------- ---- -------
Anonymous
June 14th, 2012 07:00
Have you considered using transportable tablespaces? Since you indicate you are archiving data, it is not going to change. There are several considerations, but the main thing you need to know is that the tablespace must be placed in read-only mode.
Simply put, you place the tablespace in read-only mode, copy (e.g., with dd) the files representing the tablespace to the target DB, and use Data Pump to carry over the table metadata and other DDL characteristics.
There are other considerations and requirements, but you get the idea. Ideally, a SNAP or BCV split copy would be your source.
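The steps above can be sketched roughly as follows (the tablespace name, dump file, and datafile path are assumptions for illustration):

```sql
-- 1. Make the archive tablespace read-only on the source
ALTER TABLESPACE archive_data READ ONLY;

-- 2. Export only the tablespace metadata (run from the shell):
--      expdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
--            transport_tablespaces=archive_data

-- 3. Copy the datafiles to the target host (dd, scp, or a SNAP/BCV split copy)

-- 4. Plug the tablespace into the target (run from the shell):
--      impdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
--            transport_datafiles='/u01/oradata/TGT/archive_data01.dbf'
```

Because only metadata goes through Data Pump and the data files are copied as-is, no long read-consistent snapshot is needed, which sidesteps ORA-01555 entirely.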