General Information |
Source |
{ORACLE_HOME}/rdbms/admin/dbmsdp.sql
{ORACLE_HOME}/rdbms/admin/dbmspump.sql |
First Available |
10.1 |
Constants |
Name |
Data Type |
Value |
KU$_DUMPFILE_TYPE_DISK |
BINARY_INTEGER |
0 |
KU$_DUMPFILE_TYPE_PIPE |
BINARY_INTEGER |
1 |
KU$_DUMPFILE_TYPE_TAPE |
BINARY_INTEGER |
2 |
KU$_DUMPFILE_TYPE_TEMPLATE |
BINARY_INTEGER |
3 |
KU$_FILE_TYPE_DUMP_FILE |
BINARY_INTEGER |
1 |
KU$_FILE_TYPE_BAD_FILE |
BINARY_INTEGER |
2 |
KU$_FILE_TYPE_LOG_FILE |
BINARY_INTEGER |
3 |
KU$_FILE_TYPE_SQL_FILE |
BINARY_INTEGER |
4 |
KU$_JOB_COMPLETE |
BINARY_INTEGER |
1 |
KU$_JOB_COMPLETE_ERRORS |
BINARY_INTEGER |
2 |
KU$_JOB_STOPPED |
BINARY_INTEGER |
3 |
KU$_JOB_ABORTED |
BINARY_INTEGER |
4 |
KU$_JOB_VIEW_ALL |
BINARY_INTEGER |
0 |
KU$_JOB_TTS_TABLESPACES |
BINARY_INTEGER |
1 |
KU$_STATUS_WIP |
BINARY_INTEGER |
1 |
KU$_STATUS_JOB_DESC |
BINARY_INTEGER |
2 |
KU$_STATUS_JOB_STATUS |
BINARY_INTEGER |
4 |
KU$_STATUS_JOB_ERROR |
BINARY_INTEGER |
8 |
KU$_STATUS_VERSION |
BINARY_INTEGER |
KU$_STATUS_VERSION_2 |
KU$_STATUS_VERSION_1 |
BINARY_INTEGER |
1 |
KU$_STATUS_VERSION_2 |
BINARY_INTEGER |
2 |
|
Dependencies |
SELECT name
FROM dba_dependencies
WHERE referenced_name = 'DBMS_DATAPUMP'
UNION
SELECT referenced_name
FROM dba_dependencies
WHERE name = 'DBMS_DATAPUMP'; |
Exceptions |
Error Code |
Name |
Reason |
-39001 |
INVALID_ARGVAL |
An invalid value was supplied for an input parameter |
-39002 |
INVALID_OPERATION |
Cannot be executed because of inconsistencies
between the API and the job |
-39004 |
INVALID_STATE |
The state of the job precludes the execution of the API |
-39005 |
INCONSISTENT_ARGS |
Inconsistent arguments |
-39006 |
INTERNAL_ERROR |
Internal Data Pump exception |
-31623 |
INVALID_HANDLE |
Incorrect handle specified for the job |
-31626 |
NO_SUCH_JOB |
An invalid reference to a job that is no longer executing |
-31627 |
SUCCESS_WITH_INFO |
User specified job parameters that yielded informational messages |
-31631 |
PRIVILEGE_ERROR |
The necessary privileges are not available for the operation |
-31634 |
JOB_EXISTS |
Job creation or restart failed due to a duplicate job name |
-39211 |
NO_DUMPFILE_INFO |
User specified an invalid or inaccessible file |
|
Bitmap Definitions used in DATA_OPTIONS parameter |
Name |
Data Type |
Value |
KU$_DATAOPT_SKIP_CONST_ERR |
NUMBER |
1 |
KU$_DATAOPT_XMLTYPE_CLOB |
NUMBER |
2 |
KU$_DATAOPT_NOTYPE_EVOL |
NUMBER |
4 |
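These flag values are powers of two, so multiple options can be combined by adding them and passing the sum as the numeric DATA_OPTIONS parameter via SET_PARAMETER. A minimal sketch (11g; the job name OPT_DEMO is a placeholder and no file handling is shown):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'OPT_DEMO');
  -- skip constraint errors AND suppress type evolution checks (1 + 4 = 5)
  dbms_datapump.set_parameter(h, 'DATA_OPTIONS',
    dbms_datapump.ku$_dataopt_skip_const_err +
    dbms_datapump.ku$_dataopt_notype_evol);
  dbms_datapump.detach(h);
END;
/
```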
|
Filters |
Name |
Object Type |
Meaning |
INCLUDE_NAME_EXPR and EXCLUDE_NAME_EXPR |
Named objects |
Defines which object names are included in (INCLUDE_NAME_EXPR), or excluded from (EXCLUDE_NAME_EXPR), the job. You use the object_type parameter to limit the filter to a particular object type.
For Table mode, identifies which tables are to be processed. |
SCHEMA_EXPR |
Schema objects |
Restricts the job to objects whose owning schema name is satisfied by the expression.
For Table mode, only a single SCHEMA_EXPR filter is supported. If specified, it must only specify a single schema (for example, 'IN (''SCOTT'')').
For Schema mode, identifies which users are to be processed. |
TABLESPACE_EXPR |
Table, Cluster, Index, Rollback
Segment |
Restricts the job to objects stored in a tablespace whose name is satisfied by the expression.
For Tablespace mode, identifies which tablespaces are to be processed. If a partition of an object is stored in the tablespace, the entire object is added to the job.
For Transportable mode, identifies which tablespaces are to be processed. If a table has a single partition in the tablespace set, all partitions must be in the tablespace set. An index is not included within the tablespace set unless all of its partitions are in the tablespace set. A domain index is not included in the tablespace set unless all of its secondary objects are included in the tablespace set. |
INCLUDE_PATH_EXPR and EXCLUDE_PATH_EXPR |
All |
Defines which object paths are included in, or excluded from, the job. You use these filters to select only certain object types from the database or dump file set. Objects of paths satisfying the condition are included (INCLUDE_PATH_EXPR) or excluded (EXCLUDE_PATH_EXPR) from the operation. The object_path parameter is not supported for these filters. |
|
Item Codes for entry in a dump file info table (of type ku$_dumpfile_info) |
Name |
Data Type |
Value |
KU$_COMPRESS_NONE |
NUMBER |
1 |
KU$_COMPRESS_METADATA |
NUMBER |
2 |
KU$_DFHDR_FILE_VERSION |
NUMBER |
1 |
KU$_DFHDR_MASTER_PRESENT |
NUMBER |
2 |
KU$_DFHDR_GUID |
NUMBER |
3 |
KU$_DFHDR_FILE_NUMBER |
NUMBER |
4 |
KU$_DFHDR_CHARSET_ID |
NUMBER |
5 |
KU$_DFHDR_CREATION_DATE |
NUMBER |
6 |
KU$_DFHDR_FLAGS |
NUMBER |
7 |
KU$_DFHDR_JOB_NAME |
NUMBER |
8 |
KU$_DFHDR_PLATFORM |
NUMBER |
9 |
KU$_DFHDR_INSTANCE |
NUMBER |
10 |
KU$_DFHDR_LANGUAGE |
NUMBER |
11 |
KU$_DFHDR_BLOCKSIZE |
NUMBER |
12 |
KU$_DFHDR_DIRPATH |
NUMBER |
13 |
KU$_DFHDR_METADATA_COMPRESSED |
NUMBER |
14 |
KU$_DFHDR_DB_VERSION |
NUMBER |
15 |
KU$_DFHDR_MASTER_PIECE_COUNT |
NUMBER |
16 |
KU$_DFHDR_MASTER_PIECE_NUMBER |
NUMBER |
17 |
KU$_DFHDR_DATA_COMPRESSED |
NUMBER |
18 |
KU$_DFHDR_METADATA_ENCRYPTED |
NUMBER |
19 |
KU$_DFHDR_DATA_ENCRYPTED |
NUMBER |
20 |
KU$_DFHDR_MAX_ITEM_CODE |
NUMBER |
20 |
|
Remaps |
Name |
Data
Type |
Object Type |
Meaning |
REMAP_DATAFILE |
Text |
Library,
Tablespace,
Directory |
Any datafile reference in the job that matches the object_type parameter and references the old_value datafile will be redefined to use the value datafile. |
REMAP_SCHEMA |
Text |
Schema Objects |
Any schema object in the job that matches the object_type parameter and was located in the old_value schema will be moved to the value schema. |
REMAP_TABLESPACE |
Text |
Table, Index, Rollback Segment,
Materialized View, Materialized View Log, Tablespace |
Any storage segment in the job that matches the object_type parameter and was located in the old_value tablespace will be relocated to the value tablespace. |
|
Transforms |
Name |
Data
Type |
Object Type |
Meaning |
SEGMENT_ATTRIBUTES |
Number |
Table, Index |
If nonzero (TRUE), emit storage segment parameters. Defaults to 1 |
STORAGE |
Number |
Table |
If nonzero (TRUE), emit storage clause. (Ignored if SEGMENT_ATTRIBUTES is zero.) Defaults to nonzero (TRUE) |
|
Related System Privileges |
CREATE DIRECTORY
EXP_FULL_DATABASE
IMP_FULL_DATABASE |
Security Model |
Execute is granted to PUBLIC. Runs as
AUTHID CURRENT_USER |
|
Defined Data
Types |
Job Description Types |
CREATE TYPE sys.ku$_ParamValue1010 AS OBJECT (
param_name VARCHAR2(30),
param_op VARCHAR2(30),
param_type VARCHAR2(30),
param_length NUMBER,
param_value_n NUMBER,
param_value_t VARCHAR2(2000));
/
CREATE TYPE sys.ku$_ParamValues1010 AS TABLE OF sys.ku$_ParamValue1010;
CREATE TYPE sys.ku$_JobDesc1010 AS OBJECT (
job_name VARCHAR2(30),
guid RAW(16),
operation VARCHAR2(30),
job_mode VARCHAR2(30),
remote_link VARCHAR2(4000),
owner VARCHAR2(30),
instance VARCHAR2(16),
db_version VARCHAR2(30),
creator_privs VARCHAR2(30),
start_time DATE,
max_degree NUMBER,
log_file VARCHAR2(4000),
sql_file VARCHAR2(4000),
params ku$_ParamValues1010);
/ |
Job Status Types |
CREATE TYPE sys.ku$_DumpFile1010 AS OBJECT (
file_name VARCHAR2(4000),
file_type NUMBER,
file_size NUMBER,
file_bytes_written NUMBER)
CREATE TYPE sys.ku$_DumpFileSet1010 AS TABLE OF sys.ku$_DumpFile1010;
CREATE TYPE sys.ku$_JobStatus1010 AS OBJECT (
job_name VARCHAR2(30),
operation VARCHAR2(30),
job_mode VARCHAR2(30),
bytes_processed NUMBER,
total_bytes NUMBER,
percent_done NUMBER,
degree NUMBER,
error_count NUMBER,
state VARCHAR2(30),
phase NUMBER,
restart_count NUMBER,
worker_status_list ku$_WorkerStatusList1010,
files ku$_DumpFileSet1010)
CREATE PUBLIC SYNONYM ku$_JobStatus1010
FOR sys.ku$_JobStatus1010; |
Log Entry & Error Types |
CREATE TYPE sys.ku$_LogLine1010 AS OBJECT (
logLineNumber NUMBER,
errorNumber NUMBER,
LogText VARCHAR2(2000) );
/
CREATE PUBLIC SYNONYM ku$_LogLine1010
FOR sys.ku$_LogLine1010;
CREATE TYPE sys.ku$_LogEntry1010 AS TABLE OF
sys.ku$_LogLine1010;
/
CREATE PUBLIC SYNONYM ku$_LogEntry1010
FOR sys.ku$_LogEntry1010; |
Status Types |
CREATE TYPE sys.ku$_Status1010 AS OBJECT
(
mask NUMBER,
wip ku$_LogEntry1010,
job_description ku$_JobDesc1010,
job_status ku$_JobStatus1010,
error ku$_LogEntry1010);
/ |
Worker Status Types |
CREATE TYPE sys.ku$_WorkerStatus1010 AS OBJECT (
worker_number NUMBER,
process_name VARCHAR2(30),
state VARCHAR2(30),
schema VARCHAR2(30),
name VARCHAR2(4000),
object_type VARCHAR2(200),
partition VARCHAR2(30),
completed_objects NUMBER,
total_objects NUMBER,
completed_rows NUMBER,
completed_bytes NUMBER,
percent_done NUMBER)
/
CREATE PUBLIC SYNONYM ku$_WorkerStatus1010
FOR sys.ku$_WorkerStatus1010;
CREATE TYPE sys.ku$_WorkerStatusList1010 AS TABLE OF sys.ku$_WorkerStatus1010
/
CREATE PUBLIC SYNONYM ku$_WorkerStatusList1010
FOR sys.ku$_WorkerStatusList1010; |
|
ADD_DEVICE |
Adds a sequential device to the dump file set for Export, Import, or Sql_file operations |
dbms_datapump.add_device(
handle IN NUMBER,        -- job handle
device_name IN VARCHAR2, -- name of device being added
volume_size IN VARCHAR2 DEFAULT NULL); -- device storage capacity |
TBD |
|
ADD_FILE (new 11g
parameter) |
Adds files to the dump file set for an Export, Import, or SQL_FILE operation or specifies the log file or the output file for a SQL_FILE operation |
dbms_datapump.add_file(
handle IN NUMBER,
filename IN VARCHAR2,
directory IN VARCHAR2 DEFAULT NULL,
filesize IN VARCHAR2 DEFAULT NULL,
filetype IN NUMBER DEFAULT
dbms_datapump.KU$_FILE_TYPE_DUMP_FILE,
reusefile IN NUMBER DEFAULT NULL); |
See Export Demo below |
|
ATTACH |
Used to gain access to a Data Pump job that is in the Defining, Executing, Idling, or Stopped state |
dbms_datapump.attach(
job_name IN VARCHAR2 DEFAULT NULL,
job_owner IN VARCHAR2 DEFAULT NULL)
RETURN NUMBER; |
See LOG_ENTRY demo |
|
CLIENT_LOB_APPEND
(new in 11g) |
Temporary home for clob helper routines |
dbms_datapump.client_lob_append(
value IN VARCHAR2,
position IN NUMBER ); |
TBD |
|
CLIENT_LOB_DELETE
(new in 11g) |
Undocumented |
dbms_datapump.client_lob_delete; |
TBD |
|
CLIENT_LOB_GET
(new in 11g) |
Undocumented |
dbms_datapump.client_lob_get RETURN CLOB; |
TBD |
|
CREATE_JOB_VIEW |
Creates a view into the master table for a job
Overload 1 |
dbms_datapump.create_job_view(
job_schema IN VARCHAR2,
job_name IN VARCHAR2,
view_name IN VARCHAR2,
view_type IN VARCHAR2 DEFAULT dbms_datapump.ku$_job_view_all); |
TBD |
Overload 2 |
dbms_datapump.create_job_view(
handle IN NUMBER,
view_name IN VARCHAR2,
view_type IN VARCHAR2 DEFAULT dbms_datapump.ku$_job_view_all); |
TBD |
|
DATAPUMP_JOB |
Reports whether the current session is running within a Data Pump job |
dbms_datapump.datapump_job RETURN BOOLEAN; |
set serveroutput on
BEGIN
IF dbms_datapump.datapump_job THEN
dbms_output.put_line('Yes');
ELSE
dbms_output.put_line('No');
END IF;
END;
/ |
|
DATA_FILTER (new
11g overload) |
Specifies restrictions on the rows that are to be retrieved
Overload 1 |
dbms_datapump.data_filter (
handle IN NUMBER,
name IN VARCHAR2,  -- filter name
value IN NUMBER,
table_name IN VARCHAR2 DEFAULT NULL,  -- NULL means all tables
schema_name IN VARCHAR2 DEFAULT NULL); |
TBD |
Overload 2 |
dbms_datapump.data_filter(
handle IN NUMBER,
name IN VARCHAR2,
value IN CLOB,
table_name IN VARCHAR2 DEFAULT NULL,
schema_name IN VARCHAR2 DEFAULT NULL); |
TBD |
Overload 3 |
dbms_datapump.data_filter(
handle IN NUMBER,
name IN VARCHAR2,
value IN VARCHAR2,
table_name IN VARCHAR2 DEFAULT NULL,
schema_name IN VARCHAR2 DEFAULT NULL); |
TBD |
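A common use of the text overload is the SUBQUERY filter, which restricts the rows unloaded for a table. A hedged sketch, reusing the EXPIMP directory from the demos below (the job name, table, and predicate are illustrative):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('EXPORT', 'TABLE', NULL, 'FILTER_DEMO');
  dbms_datapump.add_file(h, 'filter_demo.dmp', 'EXPIMP');
  dbms_datapump.metadata_filter(h, 'INCLUDE_NAME_EXPR', 'IN (''EMPLOYEES'')');
  -- export only rows for department 50
  dbms_datapump.data_filter(h, 'SUBQUERY',
                            'WHERE department_id = 50', 'EMPLOYEES', 'HR');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```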
|
DATA_REMAP (new
in 11g) |
Modify the values of data in user tables |
dbms_datapump.data_remap(
handle IN NUMBER,
name IN VARCHAR2,
table_name IN VARCHAR2,
column IN VARCHAR2,
function IN VARCHAR2,
schema IN VARCHAR2 DEFAULT NULL); |
TBD |
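DATA_REMAP pushes each column value through a named PL/SQL function as rows are processed, which is typically used for data masking. A sketch, assuming a user-defined function hr.mask_ssn exists (the function, job name, and column are illustrative):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('EXPORT', 'SCHEMA', NULL, 'REMAP_DATA_DEMO');
  dbms_datapump.add_file(h, 'remap_data_demo.dmp', 'EXPIMP');
  -- rewrite EMPLOYEES.SSN through hr.mask_ssn as rows are unloaded
  dbms_datapump.data_remap(h, 'REMAP_DATA', 'EMPLOYEES', 'SSN',
                           'hr.mask_ssn', 'HR');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```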
|
DETACH |
Specifies that the user has no further interest in using the handle |
dbms_datapump.detach(handle IN NUMBER); |
See Export Demo below |
|
ESTABLISH_REMOTE_CONTEXT |
Establish remote Data Pump job context |
dbms_datapump.establish_remote_context(remote_link
IN VARCHAR2); |
TBD |
|
GET_DUMPFILE_INFO |
Retrieves information about a specified dump file |
dbms_datapump.get_dumpfile_info(
file_name IN VARCHAR2,
directory IN VARCHAR2,
info_table OUT dbms_datapump.ku$_dumpfile_info,
filetype OUT NUMBER); |
TBD |
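The returned collection holds one (item_code, value) pair per header item, keyed by the KU$_DFHDR_* constants above. A sketch that reads the header of the dump file written by the export demo below (file and directory names are assumptions):

```sql
set serveroutput on
DECLARE
  info  ku$_dumpfile_info;
  ftype NUMBER;
BEGIN
  dbms_datapump.get_dumpfile_info('example5.dmp', 'EXPIMP', info, ftype);
  dbms_output.put_line('File type: ' || ftype);
  FOR i IN 1 .. info.COUNT LOOP
    dbms_output.put_line(info(i).item_code || ': ' || info(i).value);
  END LOOP;
END;
/
```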
|
GET_STATUS |
Monitors the status of a job, waits for the completion of a job, or retrieves more details about API errors
Overload 1 |
dbms_datapump.get_status(
handle IN NUMBER,
mask IN INTEGER,
timeout IN NUMBER DEFAULT NULL)
RETURN dbms_datapump.ku$_status; |
TBD |
Overload 2 |
dbms_datapump.get_status(
handle IN NUMBER,
mask IN BINARY_INTEGER,
timeout IN NUMBER DEFAULT NULL,
job_state OUT VARCHAR2,
status OUT ku$_status1010); |
TBD |
Overload 3 |
dbms_datapump.get_status(
handle IN NUMBER,
mask IN BINARY_INTEGER,
timeout IN NUMBER DEFAULT NULL,
job_state OUT VARCHAR2,
status OUT ku$_status1020); |
See Export Demo below |
|
GET_STATUS_VERSION |
Determine ku$_Status object version to use for network operations |
dbms_datapump.get_status_version(version IN NUMBER)
RETURN NUMBER; |
TBD |
|
HAS_PRIVS |
Reports whether the user has the privileges required for the specified operation |
dbms_datapump.has_privs(oper IN VARCHAR2) RETURN BOOLEAN; |
set serveroutput on
BEGIN
IF dbms_datapump.has_privs('EXPORT')
THEN
dbms_output.put_line('Yes');
ELSE
dbms_output.put_line('No');
END IF;
END;
/ |
|
LOG_ENTRY |
Inserts a message into the log file |
dbms_datapump.log_entry (
handle IN NUMBER,
message IN VARCHAR2,
log_file_only IN NUMBER DEFAULT 0); |
DECLARE
dph NUMBER;
BEGIN
dph := dbms_datapump.attach('EXAMPLE5', USER);
dbms_output.put_line(dph);
dbms_datapump.log_entry(dph, 'Log entry from DataPump API');
dbms_datapump.detach(dph);
END;
/ |
|
LOG_ERROR |
Undocumented |
dbms_datapump.log_error(
handle IN NUMBER,
message IN VARCHAR2,
error_number IN NUMBER DEFAULT 0,
fatal_error IN NUMBER DEFAULT 0,
log_file_only IN NUMBER DEFAULT 0); |
TBD |
|
METADATA_FILTER
(new overload in 11g) |
Creates a filter that restricts the items that are included in a job
Overload 1 |
dbms_datapump.metadata_filter (
handle IN NUMBER,
name IN VARCHAR2,
value IN VARCHAR2,
object_path IN VARCHAR2 DEFAULT NULL,
object_type IN VARCHAR2 DEFAULT NULL);
|
See Export Demo below |
Overload 2 |
dbms_datapump.metadata_filter (
handle IN NUMBER,
name IN VARCHAR2,
value IN CLOB,
object_path IN VARCHAR2 DEFAULT NULL,
object_type IN VARCHAR2 DEFAULT NULL);
|
TBD |
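Path filters (see the Filters table above) select whole object types rather than named objects. A sketch that excludes indexes and grants from a schema export (job and file names are illustrative):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('EXPORT', 'SCHEMA', NULL, 'PATH_DEMO');
  dbms_datapump.add_file(h, 'path_demo.dmp', 'EXPIMP');
  dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''HR'')');
  -- skip all INDEX and GRANT object paths
  dbms_datapump.metadata_filter(h, 'EXCLUDE_PATH_EXPR',
                                'IN (''INDEX'',''GRANT'')');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```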
|
METADATA_REMAP |
Specifies a remapping to be applied to objects as they are processed in the specified job |
dbms_datapump.metadata_remap (
handle IN NUMBER,
name IN VARCHAR2,
old_value IN VARCHAR2,
value IN VARCHAR2,
object_type IN VARCHAR2 DEFAULT NULL);
|
TBD |
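The remap names come from the Remaps table above. A sketch that imports HR's objects into a different schema and tablespace (the dump file, target schema, and tablespace names are assumptions):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'REMAP_META_DEMO');
  dbms_datapump.add_file(h, 'example5.dmp', 'EXPIMP');
  dbms_datapump.metadata_remap(h, 'REMAP_SCHEMA', 'HR', 'HR_COPY');
  dbms_datapump.metadata_remap(h, 'REMAP_TABLESPACE', 'EXAMPLE', 'USERS');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```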
|
METADATA_TRANSFORM |
Specifies transformations to be applied to objects as they are processed in the specified job
Overload 1 |
dbms_datapump.metadata_transform (
handle IN NUMBER,
name IN VARCHAR2,
value IN VARCHAR2,
object_type IN VARCHAR2 DEFAULT NULL);
|
TBD |
Overload 2 |
dbms_datapump.METADATA_TRANSFORM (
handle IN NUMBER,
name IN VARCHAR2,
value IN NUMBER,
object_type IN VARCHAR2 DEFAULT NULL);
|
TBD |
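The transform names come from the Transforms table above; the numeric overload is used for the TRUE/FALSE transforms. A sketch that strips segment attributes on import so objects pick up the target tablespace defaults (job and file names are illustrative):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'XFORM_DEMO');
  dbms_datapump.add_file(h, 'example5.dmp', 'EXPIMP');
  -- 0 (FALSE): do not emit storage segment parameters
  dbms_datapump.metadata_transform(h, 'SEGMENT_ATTRIBUTES', 0);
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```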
|
OPEN |
Declare a new job using the Data Pump API. The handle that is returned is used as a parameter for calls to all other procedures except ATTACH. |
dbms_datapump.OPEN (
operation IN VARCHAR2,
job_mode IN VARCHAR2,
remote_link IN VARCHAR2 DEFAULT NULL,
job_name IN VARCHAR2 DEFAULT NULL,
version IN VARCHAR2 DEFAULT 'COMPATIBLE',
compression IN NUMBER DEFAULT dbms_datapump.ku$_compress_metadata)
RETURN NUMBER;
Job Mode |
Description |
FULL |
Operates on the full database or full dump file set except for the SYS, XDB,ORDSYS, MDSYS, CTXSYS, ORDPLUGINS, and LBACSYS schemas. |
SCHEMA |
Operates on a set of selected schemas. Defaults to the schema of the current user. All objects in the selected schemas are processed. Users cannot specify SYS, XDB, ORDSYS, MDSYS, CTXSYS, ORDPLUGINS, or LBACSYS schemas for this mode. |
TABLE |
Operates on a set of selected tables. Defaults to all of the tables in the current user's schema. Only tables and their dependent objects are processed. |
TABLESPACE |
Operates on a set of selected tablespaces. No defaulting is performed. Tables that have storage in the specified tablespaces are processed in the same manner as in Table mode. |
TRANSPORTABLE |
Operates on metadata for tables (and their dependent objects) within a set of selected tablespaces to perform a transportable tablespace export/import. |
|
See Export Demo below |
|
SETUP_REMOTE_CONTEXT |
Sets up a remote Data Pump job context |
dbms_datapump.setup_remote_context(
user_name IN VARCHAR2,
job_name IN VARCHAR2,
version IN NUMBER,
status_xml IN VARCHAR2,
status_xml_len IN NUMBER,
more IN NUMBER);
|
TBD |
|
SET_DEBUG (new
11g overload) |
Sets the internal debug switch
Overload 1 |
dbms_datapump.set_debug(
on_off IN NUMBER,
ip_addr IN VARCHAR2 DEFAULT NULL);
|
TBD |
Overload 2 |
dbms_datapump.set_debug(
debug_flags IN BINARY_INTEGER,
version_flag IN BINARY_INTEGER);
|
TBD |
|
SET_PARALLEL |
Adjusts the degree of parallelism within a job |
dbms_datapump.set_parallel (
handle IN NUMBER,
degree IN NUMBER);
|
See Export Demo below |
|
SET_PARAMETER |
Specify job-processing options
Overload 1 |
dbms_datapump.set_parameter (
handle IN NUMBER,
name IN VARCHAR2,
value IN VARCHAR2);
|
TBD |
Overload 2 |
dbms_datapump.set_parameter (
handle IN NUMBER,
name IN VARCHAR2,
value IN NUMBER);
|
TBD |
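Overload 1 takes text values and overload 2 numeric values. A sketch showing one of each on an import job (the Import demo below uses TABLE_EXISTS_ACTION the same way; job and file names are illustrative):

```sql
DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open('IMPORT', 'SCHEMA', NULL, 'PARAM_DEMO');
  dbms_datapump.add_file(h, 'example5.dmp', 'EXPIMP');
  dbms_datapump.set_parameter(h, 'TABLE_EXISTS_ACTION', 'REPLACE'); -- overload 1
  dbms_datapump.set_parameter(h, 'SKIP_UNUSABLE_INDEXES', 1);       -- overload 2
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/
```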
|
START_JOB (new
11g parameters) |
Begins or resumes
job execution |
dbms_datapump.start_job (
handle IN NUMBER,
skip_current IN NUMBER DEFAULT 0,
abort_step IN NUMBER DEFAULT 0,
cluster_ok IN NUMBER DEFAULT 1,
service_name IN VARCHAR2 DEFAULT NULL);
|
See Export Demo below |
|
STOP_JOB |
Terminates a job but, optionally, preserves the state of the job |
dbms_datapump.stop_job (
handle IN NUMBER,
immediate IN NUMBER DEFAULT 0,
keep_master IN NUMBER DEFAULT NULL,
delay IN NUMBER DEFAULT 60);
|
See Export Demo below |
|
TEST_REMOTE_CONTEXT1010 |
Test remote Data Pump job context:
Is version 10.1? |
dbms_datapump.test_remote_context1010;
|
TBD |
|
TEST_REMOTE_CONTEXT1020 |
Test remote Data Pump job context:
Is version 10.2? |
dbms_datapump.test_remote_context1020;
|
TBD |
|
WAIT_FOR_JOB |
Runs a job until it either completes normally or stops for some other reason |
dbms_datapump.wait_for_job(
handle IN NUMBER,
job_state OUT VARCHAR2);
|
TBD |
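WAIT_FOR_JOB is a simpler alternative to the GET_STATUS polling loop in the Export demo below: it blocks until the job finishes and returns the final state. A sketch (job and file names are illustrative):

```sql
set serveroutput on
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h := dbms_datapump.open('EXPORT', 'SCHEMA', NULL, 'WAIT_DEMO');
  dbms_datapump.add_file(h, 'wait_demo.dmp', 'EXPIMP');
  dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''HR'')');
  dbms_datapump.start_job(h);
  -- block until the job completes or stops
  dbms_datapump.wait_for_job(h, job_state);
  dbms_output.put_line('Final state: ' || job_state);
END;
/
```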
|
Datapump Demos |
DataPump Export |
conn / as sysdba
set linesize 121
col owner format a20
col directory_name format a15
col directory_path format a60
SELECT *
FROM dba_directories;
HOST mkdir c:\expimp
HOST mkdir c:\expimp\logs
EXIT
CREATE OR REPLACE DIRECTORY expimp AS 'c:\expimp';
CREATE OR REPLACE DIRECTORY expimp_log AS 'c:\expimp\logs';
GRANT READ,WRITE ON DIRECTORY expimp TO system;
GRANT READ,WRITE ON DIRECTORY expimp_log TO system;
HOST del c:\expimp\*.dmp
HOST del c:\expimp\logs\*.log
set serveroutput on
DECLARE
ind NUMBER;              -- loop index
dph NUMBER;              -- job handle
pct_done NUMBER;         -- percentage complete
job_state VARCHAR2(30);  -- track job state
le ku$_LogEntry;         -- WIP and error messages
js ku$_JobStatus;        -- job status from get_status
jd ku$_JobDesc;          -- job description from get_status
sts ku$_Status;          -- status object returned by get_status
BEGIN
-- create job
dph := dbms_datapump.open('EXPORT','SCHEMA',NULL,'EXAMPLE5','LATEST');
-- specify dump file
dbms_datapump.add_file(dph,
'example5.dmp', 'EXPIMP',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
-- specify log file
dbms_datapump.add_file(dph,
'example5.log', 'EXPIMP_LOG',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
-- specify export schema
dbms_datapump.metadata_filter(dph, 'SCHEMA_EXPR',
'IN (''HR'')');
-- set parallelism
dbms_datapump.set_parallel(dph, 2);
-- start job
dbms_datapump.start_job(dph);
-- monitor job
pct_done := 0;
job_state := 'UNDEFINED';
WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED') LOOP
dbms_datapump.get_status(dph,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip, -1, job_state, sts);
js := sts.job_status;
-- If the percentage done changed, display the new value
IF js.percent_done != pct_done THEN
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
pct_done := js.percent_done;
END IF;
-- If any work-in-progress (WIP) or error messages
-- were received for the job, display them.
IF (BITAND(sts.mask,dbms_datapump.ku$_status_wip) != 0)
THEN
le := sts.wip;
ELSE
IF (BITAND(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
THEN
le := sts.error;
ELSE
le := NULL;
END IF;
END IF;
IF le IS NOT NULL THEN
ind := le.FIRST;
WHILE ind IS NOT NULL LOOP
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
END LOOP;
END IF;
END LOOP;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(dph);
EXCEPTION
WHEN OTHERS THEN
dbms_datapump.stop_job(dph);
END;
/
|
set serveroutput on size 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := dbms_datapump.open(operation => 'EXPORT',
job_mode => 'SCHEMA', remote_link => NULL, job_name =>
'EMP_EXPORT', version => 'LATEST');
dbms_datapump.add_file(handle => l_dp_handle, filename =>
'SCOTT.dmp', directory => 'TEST_DIR');
dbms_datapump.metadata_filter(handle => l_dp_handle,
name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
dbms_datapump.start_job(l_dp_handle);
dbms_datapump.detach(l_dp_handle);
END;
/
-- check job status
system@db10g> SELECT * FROM dba_datapump_jobs;
|
DataPump Import |
conn / as sysdba
set serveroutput on
DECLARE
dph NUMBER;
BEGIN
dph := dbms_datapump.open(operation => 'IMPORT', job_mode =>
'TABLE',job_name => 'EMP_IMPORT');
dbms_datapump.add_file(handle => dph, filename =>
'EXPIMP%U.DMP', directory => 'EXPIMP', filetype => 1);
dbms_datapump.add_file(handle
=> dph,filename =>
'EXPIMP.LOG',directory => 'EXPIMP_LOG',filetype=>3);
dbms_datapump.set_parameter(handle =>
dph,name =>
'TABLE_EXISTS_ACTION', value =>'REPLACE');
dbms_datapump.start_job(dph);
dbms_datapump.detach(dph);
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Error: ' || sqlerrm || ' on Job-ID: ' || dph);
END;
/ |