Saturday, April 7, 2012

Oracle Data Pump (impdp and expdp)

Use Ctrl+C to suspend an expdp job started from the command line.
To stop, kill, resume, attach to, or restart a Data Pump job, see:
http://blog.oracle48.nl/killing-and-resuming-datapump-expdp-and-impdp-jobs/
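As a sketch (the job name is a placeholder; look yours up in DBA_DATAPUMP_JOBS), a job can be managed from the interactive prompt after attaching:

```
-- find the job name and state
SELECT owner_name, job_name, state FROM dba_datapump_jobs;

-- attach to the job from the OS prompt, then use interactive commands
expdp system/pass attach=SYS_EXPORT_SCHEMA_01
Export> STOP_JOB=IMMEDIATE   -- stop now; the job can be restarted later
Export> START_JOB            -- resume a stopped job
Export> KILL_JOB             -- terminate the job and delete its master table
```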



To enhance performance for large data imports with impdp, consider two things:
One: Use parallelism:
Degree = 2 x number of CPUs
Degree <= number of dump files
The degree of parallelism can be adjusted on the fly.
Oracle uses worker processes and parallel execution (PX) processes. Worker processes are used for inter-segment parallelism with the DIRECT_PATH method, while PX processes are used for intra-segment parallelism with the EXTERNAL_TABLE method.
Oracle will automatically choose which method to use.
There are times when you can use a hidden parameter to force the DIRECT_PATH method only:
access_method=direct_path
Note: if direct path cannot be used for a segment and the above parameter is set, data for that segment will not be loaded.
Two equally sized segments will load faster with access_method=direct_path.
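For example (schema, directory, and file names are illustrative), export into multiple dump files with the %U substitution variable so the import can use a matching degree of parallelism:

```
expdp system/pass schemas=scott directory=dump_dir dumpfile=scott_%U.dmp parallel=4
impdp system/pass schemas=scott directory=dump_dir dumpfile=scott_%U.dmp parallel=4 access_method=direct_path
```

Here parallel=4 satisfies both rules of thumb above only if the host has at least 2 CPUs and the export produced at least 4 dump files.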

Indexes are created one at a time by a single worker process, which can use multiple parallel PX processes. Index creation can still take a long time.

Two: Set the following parameters before the load:
1. disk_asynch_io=true
2. NOARCHIVELOG mode (makes a big difference)
3. db_block_checking=false
4. db_block_checksum=false
5. disable block change tracking
6. size the data files properly before loading; do not rely on autoextend
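A minimal sketch of the settings above, run as SYSDBA (note that disk_asynch_io is a static parameter and NOARCHIVELOG requires the database to be mounted but not open):

```
-- static parameter: takes effect after restart
ALTER SYSTEM SET disk_asynch_io=TRUE SCOPE=SPFILE;

-- dynamic parameters
ALTER SYSTEM SET db_block_checking=FALSE;
ALTER SYSTEM SET db_block_checksum=FALSE;

-- requires: SHUTDOWN IMMEDIATE; STARTUP MOUNT; then:
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
```

Remember to revert these settings (especially archivelog mode and block checksums) after the load completes.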

Setting the EXCLUDE/INCLUDE parameter will also exclude/include dependent objects.
To check the object paths and their dependencies, query these views:
database_export_objects
schema_export_objects
table_export_objects
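For example, to see which object paths a schema-level EXCLUDE/INCLUDE can reference (the LIKE filter is just an illustration):

```
SELECT object_path, comments
  FROM schema_export_objects
 WHERE object_path LIKE '%GRANT%';
```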

To import data into a database in archivelog mode:
1. put the database in NOLOGGING mode, or
2. a) create the tables with content=metadata_only exclude=index,constraint,ref_constraint
b) put the tables in NOLOGGING mode
c) import with table_exists_action=append
d) create the indexes with NOLOGGING
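The steps of option 2 can be sketched as follows (directory, dump file, table, and index names are placeholders):

```
-- a) metadata only, without indexes or constraints
impdp system/pass directory=dump_dir dumpfile=exp.dmp content=metadata_only exclude=index,constraint,ref_constraint

-- b) switch each imported table to NOLOGGING
ALTER TABLE scott.emp NOLOGGING;

-- c) load the rows into the existing tables
impdp system/pass directory=dump_dir dumpfile=exp.dmp content=data_only table_exists_action=append

-- d) build indexes without redo
CREATE INDEX scott.emp_idx ON scott.emp(empno) NOLOGGING;
```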


TRACING:
TRACE=480300 for impdp and expdp
gives extended log information
without an excessive amount of trace output
(MOS Note 286496.1)
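For example (schema and file names are illustrative), tracing is enabled by adding the TRACE parameter to the command line:

```
expdp system/pass schemas=scott directory=dump_dir dumpfile=scott.dmp trace=480300
```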

DATAPUMP NETWORK MODE (without creating a dump file)

STEP 1: [IN REMOTE DB] grant the EXP_FULL_DATABASE role to the source schema
conn system/pass;

GRANT EXP_FULL_DATABASE to user_info;

STEP 2: [IN DESTINATION DB] Create destination user and grant necessary roles

conn system/pass;
CREATE USER user_info
IDENTIFIED BY pass
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

GRANT CONNECT, RESOURCE TO user_info;

STEP 3: [IN DESTINATION DB] grant read/write on dump directory

conn system/pass;

GRANT READ, WRITE ON DIRECTORY dump_dir TO user_info;

A network-mode import does not require any dump file; the directory is only required for writing the import log file.

STEP 4: [IN DESTINATION DB] create public DB Link

conn system/pass;
CREATE PUBLIC DATABASE LINK SATURN
connect to user_info identified by pass
using '(DESCRIPTION=(
ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=remote_db)
(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=orcl.oracle.com)
(server=DEDICATED)))';
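Before running the import, it is worth verifying that the link resolves and authenticates (a simple sanity check against the remote database):

```
SELECT * FROM dual@SATURN;
```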

STEP 5: [IN DESTINATION DB MACHINE] execute impdp

impdp user_info/pass directory=dump_dir network_link=SATURN logfile=net_import_proddev.log EXCLUDE=GRANT,STATISTICS,SYNONYM,DB_LINK REMAP_SCHEMA=USER_INFO:USER_INFO
