Monday, 3 February 2025

How to use EXPDP to export to multiple locations?

Efficiently backing up large Oracle databases is crucial for data protection and disaster recovery. While EXPDP (Data Pump Export) excels at single-location exports, writing a dump to multiple locations requires a slightly different approach. You might need to export a huge schema or object but not have enough storage on a single mount point. How do you deal with this situation? EXPDP gives you the flexibility to spread the dump across multiple directories at once, but you must use the PARALLEL option to achieve this.

Let me show you how this can be done. In the example below, I'll export 2 tables from the HR schema residing in pluggable database PDB2, which is attached to CDB1. The Oracle Database version is 19c.

1) Connected to CDB1, started the instance and opened PDB2.

[oracle@oravm19c ~]$ . oraenv
ORACLE_SID = [cdb1] ?
The Oracle base remains unchanged with value /u01/app/oracle

SQL> startup
ORACLE instance started.

Total System Global Area 1140850632 bytes
Fixed Size                  9134024 bytes
Variable Size             486539264 bytes
Database Buffers          637534208 bytes
Redo Buffers                7643136 bytes
Database mounted.
Database opened.
SQL>
SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         5 PDB2                           MOUNTED

SQL> alter pluggable database pdb2 open;

Pluggable database altered.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

2) Created 2 directories where the DUMP files will be placed. I am using SYS for all EXPDP/IMPDP operations, but if you are going to use a non-DBA schema, make sure to grant it the required privileges.
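For example, if a hypothetical non-DBA user HR were to run the export itself, the grants would look something like this sketch (directory names taken from the example below; DATAPUMP_EXP_FULL_DATABASE is only needed when exporting objects outside the user's own schema):

```sql
-- Allow the user to read/write dump and log files in both directories
GRANT READ, WRITE ON DIRECTORY DB_DUMP1 TO hr;
GRANT READ, WRITE ON DIRECTORY DB_DUMP2 TO hr;

-- Needed only when exporting objects owned by other schemas
GRANT DATAPUMP_EXP_FULL_DATABASE TO hr;
```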

[oracle@oravm19c ~]$ sqlplus sys@pdb2 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Mon Feb 3 10:30:15 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> create directory DB_DUMP1 as '/u01/app/expdp';

Directory created.

SQL> create directory DB_DUMP2 as '/u01/app/expdp2';

Directory created.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
[oracle@oravm19c ~]$

3) Run the EXPDP command with the options shown below. I am splitting the dump into 2 pieces/sets and placing one piece in each of the above directories, using a common file name.

Note: Keeping identical names can cause confusion. See point 6) below for a better way to handle this.

[oracle@oravm19c ~]$ expdp dumpfile=DB_DUMP1:HR.dmp,DB_DUMP2:HR.dmp logfile=HR.log directory=DB_DUMP1 parallel=2 tables=HR.EMPLOYEES,HR.DEPARTMENTS

Export: Release 19.0.0.0.0 - Production on Mon Feb 3 10:35:45 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Username: sys@pdb2 as sysdba
Password:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Starting "SYS"."SYS_EXPORT_TABLE_01":  sys/********@pdb2 AS SYSDBA dumpfile=DB_DUMP1:HR.dmp,DB_DUMP2:HR.dmp logfile=HR.log directory=DB_DUMP1 parallel=2 tables=HR.EMPLOYEES,HR.DEPARTMENTS
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "HR"."EMPLOYEES"                            17.08 KB     107 rows
. . exported "HR"."DEPARTMENTS"                          7.125 KB      27 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /u01/app/expdp/HR.dmp
  /u01/app/expdp2/HR.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Feb 3 10:36:11 2025 elapsed 0 00:00:12

4) You can use IMPDP to restore the tables, but you must provide all the dump file pieces that were produced during the export, or you'll get errors as shown.

[oracle@oravm19c ~]$ impdp dumpfile=HR.dmp logfile=HR-imp.log directory=DB_DUMP1 tables=HR.EMPLOYEES,HR.DEPARTMENTS table_exists_action=REPLACE

Import: Release 19.0.0.0.0 - Production on Mon Feb 3 10:43:01 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Username: sys@pdb2 as sysdba
Password:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
ORA-39002: invalid operation
ORA-39059: dump file set is incomplete

5) When all the required pieces are provided, IMPDP works as expected. So always supply the complete dump file set.

[oracle@oravm19c ~]$ impdp dumpfile=DB_DUMP1:HR.dmp,DB_DUMP2:HR.dmp logfile=HR-imp.log directory=DB_DUMP1 parallel=2 table_exists_action=REPLACE

Import: Release 19.0.0.0.0 - Production on Mon Feb 3 10:44:45 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Username: sys@pdb2 as sysdba
Password:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01":  sys/********@pdb2 AS SYSDBA dumpfile=DB_DUMP1:HR.dmp,DB_DUMP2:HR.dmp logfile=HR-imp.log directory=DB_DUMP1 parallel=2 table_exists_action=REPLACE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "HR"."EMPLOYEES"                            17.08 KB     107 rows
. . imported "HR"."DEPARTMENTS"                          7.125 KB      27 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Mon Feb 3 10:45:16 2025 elapsed 0 00:00:16


6) To manage the dump pieces more easily, use the %U substitution variable to generate unique file names. This is helpful when you have large database objects to export. Below is the example,

expdp dumpfile=DB_DUMP1:HR%U.dmp,DB_DUMP2:HR%U.dmp logfile=HR.log directory=DB_DUMP1 parallel=2 tables=HR.EMPLOYEES,HR.DEPARTMENTS
impdp dumpfile=DB_DUMP1:HR%U.dmp,DB_DUMP2:HR%U.dmp logfile=HR-imp.log directory=DB_DUMP1 parallel=2 table_exists_action=REPLACE
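Since the original motivation was limited space per mount point, you can also cap the size of each piece with the FILESIZE parameter. A sketch (the 5GB cap is an assumed value; size it to your mount points):

```shell
# Cap each piece at 5 GB; %U numbers the pieces (HR01.dmp, HR02.dmp, ...),
# so the job rolls over to a new file as each piece fills up.
expdp dumpfile=DB_DUMP1:HR%U.dmp,DB_DUMP2:HR%U.dmp logfile=HR.log \
      directory=DB_DUMP1 parallel=2 filesize=5GB \
      tables=HR.EMPLOYEES,HR.DEPARTMENTS
```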


Essential Considerations for Large Dumps:

Compression: Use compression=ALL (or compression=METADATA_ONLY) to reduce dump file sizes. Note that compressing table data with compression=ALL requires the Advanced Compression option license; METADATA_ONLY does not.

Parallelism: Adjust the number of parallel processes based on system resources. Avoid overloading the database server.

Storage: Ensure sufficient storage at all backup locations.

Network: Account for network bandwidth limitations.

Monitoring: Monitor EXPDP processes and check log files for errors.

Testing: Regularly test restores from all locations.
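Putting several of these considerations together, a large export might look like the following sketch (directory names reused from the example above; schemas=HR and parallel=4 are illustrative values, and compression=ALL assumes the Advanced Compression option is licensed):

```shell
# Schema-level export spread across two mount points:
# %U generates unique piece names, parallel workers fill the pieces,
# and compression keeps each piece smaller on disk.
expdp dumpfile=DB_DUMP1:HR%U.dmp,DB_DUMP2:HR%U.dmp logfile=HR.log \
      directory=DB_DUMP1 parallel=4 compression=all schemas=HR
```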


I hope this helps!!
