1. Overview of Oracle Data Pump
Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.
Data Pump Components
Oracle Data Pump is made up of three distinct parts:
- The command-line clients, expdp and impdp
- The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
- The DBMS_METADATA PL/SQL package (also known as the Metadata API)
The expdp and impdp clients invoke the Data Pump Export utility and the Data Pump Import utility, respectively. They provide a user interface that closely resembles the original Export (exp) and Import (imp) utilities.
Note:
Dump files generated by the Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.
In most cases, Oracle recommends that you use the Data Pump Export and Import utilities. They provide enhanced data movement performance in comparison to the original Export and Import utilities.

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.

When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and resubmission of dictionary metadata.

The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
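For example, a schema-mode export can be started directly through the DBMS_DATAPUMP API. The following is a minimal PL/SQL sketch; it assumes that a directory object named DPUMP_DIR1 exists and that the caller has the privileges needed to export the HR schema:

DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job (the job name is arbitrary)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'HR_EXPORT_API');
  -- Write the dump file to an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'hr_api.dmp', directory => 'DPUMP_DIR1');
  -- Restrict the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  -- Detach from the job; it continues to run on the server
  DBMS_DATAPUMP.DETACH(h);
END;
/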
Note:
All Data Pump Export and Import processing, including the reading and
writing of dump files, is done on the system (server) selected by the
specified database connect string. This means that,
for nonprivileged users, the database administrator (DBA) must create
directory objects for the Data Pump files that are read and written on
that server file system. For privileged users, a default directory object is available. See Default Locations for Dump, Log, and SQL Files for more information about directory objects.
Note:
Data Pump Export and Import are not supported on physical or logical standby databases except for initial table instantiation on a logical standby.

How Does Data Pump Move Data?
Data Pump uses four mechanisms for moving data in and out of databases. They are as follows, in order of decreasing speed:
- Data file copying
- Direct path
- External tables
- Network link import
Note:
Data Pump will not load tables with disabled unique indexes. If the data
needs to be loaded into the table, the indexes must be either dropped
or reenabled.
Note:
There are a few situations in which Data Pump will not be able to load data into a table using either direct path or external tables. This occurs when there are conflicting table attributes. For example, a conflict occurs if a table contains a column of datatype LONG (which requires the direct path access method) but also has a condition that prevents use of direct path access. In such cases, an ORA-39242 error message is generated. To work around this, prior to import, create the table with a LOB column instead of a LONG column. You can then perform the import and use the TABLE_EXISTS_ACTION parameter with a value of either APPEND or TRUNCATE.

Using Data File Copying to Move Data
The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file. This method is used in the following situations:
- The TRANSPORT_TABLESPACES parameter is used to specify a transportable mode export. Only metadata for the specified tablespaces is exported.
- The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter). Only metadata for the tables, partitions, and subpartitions specified on the TABLES parameter is exported. An example of each form follows this list.
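For example, the following commands request metadata-only exports that rely on data file copying (these are sketches; the tablespace name tbs_1, the table name, and the directory object dpump_dir1 are assumptions, and a transportable tablespace export typically requires the tablespace to be read-only):

> expdp system DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=emp_meta.dmp TABLES=employees TRANSPORTABLE=ALWAYS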
When data is moved by using data file copying, the character sets must be identical on both the source and target databases. Therefore, in addition to copying the data, you may need to prepare it by using the Recovery Manager (RMAN) CONVERT command to perform some data conversions. You can generally do this at either the source or target database.

Using Direct Path to Move Data
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it. Note that if the table has any columns of datatype LONG, then direct path must be used.

The following sections describe situations in which direct path cannot be used for loading and unloading.
If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to load the data for that table:
- A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
- A domain index exists for a LOB column.
- A table is in a cluster.
- There is an active trigger on a pre-existing table.
- Fine-grained access control is enabled in insert mode on a pre-existing table.
- A table contains BFILE columns or columns of opaque types.
- A referential integrity constraint is present on a pre-existing table.
- A table contains VARRAY columns with an embedded opaque type.
- The table has encrypted columns.
- The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
  - There is an active trigger
  - The table is partitioned
  - Fine-grained access control is in insert mode
  - A referential integrity constraint exists
  - A unique index exists
- Supplemental logging is enabled and the table has at least one LOB column.
- The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to unload the data for that table:
- Fine-grained access control for SELECT is enabled.
- The table is a queue table.
- The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
- The table contains encrypted columns.
- The table contains a column of an evolved type that needs upgrading.
- The table contains a column of type LONG or LONG RAW that is not last.
- The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
Using External Tables to Move Data
When data file copying is not selected and the data cannot be moved using direct path, the external table mechanism is used. The external table mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Therefore, Data Pump might use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Data Pump might use external tables for the export, but use direct path for the import.

In particular, Data Pump uses external tables in the following situations:
- Loading and unloading very large tables and partitions in situations where parallel SQL can be used to advantage
- Loading tables with global or domain indexes defined on them, including partitioned object tables
- Loading tables with active triggers or clustered tables
- Loading and unloading tables with encrypted columns
- Loading tables with fine-grained access control enabled for inserts
- Loading tables that are partitioned differently at load time and unload time
Note:
When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.

Using Network Link Import to Move Data
When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, SQL is directly used to move the data using an INSERT SELECT statement. The SELECT clause retrieves the data from the remote database over the network link. The INSERT clause uses SQL to insert the data into the target database. There are no dump files involved.

When you perform an export over a database link, the data from the source database instance is written to dump files on the connected database instance. The source database can be a read-only database.
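For example, a network-mode import of a single table over a database link might look like the following (a sketch; the database link source_db and the directory object dpump_dir1, used here only for the log file, are assumptions):

> impdp hr TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db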
Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
Because reading over a network is generally slower than reading from a disk, network link is the slowest of the four access methods used by Data Pump and may be undesirable for very large jobs.
The following types of database links are supported for use with Data Pump Export and Import:
- Public (both public and shared)
- Fixed-user
- Connected user
What Happens During Execution of a Data Pump Job?
Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.

Coordination of a Job
For every Data Pump Export job and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.

Tracking Progress Within a Job
While the data and metadata are being transferred, a master table is used to track the progress within a job. The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:
- For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.
- For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
Creation of the master table requires the CREATE TABLE system privilege and a sufficient tablespace quota. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.

For all operations, the information in the master table is used to restart a job.
The master table is either retained or dropped, depending on the circumstances, as follows:
- Upon successful job completion, the master table is dropped.
- If a job is stopped using the STOP_JOB interactive command, the master table is retained for use in restarting the job.
- If a job is killed using the KILL_JOB interactive command, the master table is dropped and the job cannot be restarted.
- If a job terminates unexpectedly, the master table is retained. You can delete it if you do not intend to restart the job.
- If a job stops before it starts running (that is, before any database objects have been copied), the master table is dropped.
Filtering Data and Metadata During a Job
Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as TABLE, INDEX, or DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are exported and imported. The objects can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
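For example, the following command (a sketch; the dump file name and directory object are assumptions) uses the EXCLUDE parameter to export the hr schema while skipping indexes and index statistics:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_noidx.dmp EXCLUDE=INDEX EXCLUDE=STATISTICS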
Transforming Metadata During a Job
When you are moving data from one database to another, it is often useful to perform transformations on the metadata, for remapping storage between tablespaces or redefining the owner of a particular set of objects. This is done using the following Data Pump Import parameters: REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.

Maximizing Job Performance
Data Pump can employ multiple worker processes, running in parallel, to increase job performance. Use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, the database administrator (DBA) might wish to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example, PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. In general, the degree of parallelism should be set to no more than twice the number of CPUs on an instance.
Note:
The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.

Loading and Unloading of Data
The worker processes are the ones that actually unload and load metadata and table data in parallel. Worker processes are created as needed until the number of worker processes is equal to the value supplied for the PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job.
Note:
The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database.

Monitoring Job Status
The Data Pump Export and Import utilities can be attached to a job in either interactive-command mode or logging mode. In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be exported, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).
A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered along the way, and records the completion status of the job.
You can also get information about Data Pump jobs by querying the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. See Oracle Database Reference for descriptions of these views.

Monitoring the Progress of Executing Jobs
Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.

Use of the COMPRESSION, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, ENCRYPTION_PASSWORD, QUERY, REMAP_DATA, and SAMPLE parameters will not be reflected in the determination of estimate values.

The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if it is exceeded by the actual transfer amount. The estimate value for import operations is exact.
The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:
- USERNAME - job owner
- OPNAME - job name
- TARGET_DESC - job operation
- SOFAR - megabytes (MB) transferred thus far during the job
- TOTALWORK - estimated number of megabytes (MB) in the job
- UNITS - 'MB'
- MESSAGE - a formatted status message of the form: 'job_name: operation_name : nnn out of mmm MB done'
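For example, a query such as the following (a sketch; the filter predicate is one reasonable choice) shows the progress of currently defined Data Pump jobs:

SQL> SELECT username, opname, target_desc, sofar, totalwork, message
  2  FROM v$session_longops
  3  WHERE opname IN (SELECT job_name FROM dba_datapump_jobs);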
File Allocation
Data Pump jobs manage the following types of files:
- Dump files, to contain the data and metadata that is being moved
- Log files, to record the messages associated with an operation
- SQL files, to record the output of a SQLFILE operation. A SQLFILE operation is invoked using the Data Pump Import SQLFILE parameter and results in all of the SQL DDL that Import will be executing, based on other parameters, being written to a SQL file.
- Files specified by the DATA_FILES parameter during a transportable import.
Specifying Files and Adding Additional Dump Files
For export operations, you can specify dump files at the time the job is defined, as well as at a later time during the operation. For example, if you discover that space is running low during an export operation, you can add additional dump files by using the Data Pump Export ADD_FILE command in interactive mode.

For import operations, all dump files must be specified at the time the job is defined.

Log files and SQL files will overwrite previously existing files. For dump files, you can use the Export REUSE_DUMPFILES parameter to specify whether or not to overwrite a preexisting dump file.

Default Locations for Dump, Log, and SQL Files
Because Data Pump is server-based, rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires you to specify directory paths as directory objects. A directory object maps a name to a directory path on the file system.

For example, the following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles:

SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';

The reason that a directory object is required is to ensure data security and integrity. For example:
- If you were allowed to specify a directory path location for an input file, you might be able to read data that the server has access to, but that you should not be able to access.
- If you were allowed to specify a directory path location for an output file, the server might overwrite a file that you might not normally have privileges to delete.
A default directory object, DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded. By default, it is available only to privileged users.

If you are not a privileged user, before you can run Data Pump Export or Data Pump Import, a directory object must be created by a database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.

After a directory is created, the user creating the directory object needs to grant READ or WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user hr in the directory named by dpump_dir1, the DBA must execute the following command:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;

Note that READ or WRITE permission to a directory object only means that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.

Data Pump Export and Import use the following order of precedence to determine a file's location:
- If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the filename by a colon.)
- If a directory object is not specified for a file, then the directory object named by the DIRECTORY parameter is used.
- If a directory object is not specified, and if no directory object was named by the DIRECTORY parameter, then the value of the environment variable DATA_PUMP_DIR is used. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is DUMP_FILES1, and it is located at '/usr/apps/dumpfiles1'.
SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';
Then, a user on a UNIX-based client system using csh can assign the value DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The DIRECTORY parameter can then be omitted from the command line. The dump file employees.dmp, as well as the log file export.log, will be written to '/usr/apps/dumpfiles1'.
%setenv DATA_PUMP_DIR DUMP_FILES1
%expdp hr TABLES=employees DUMPFILE=employees.dmp
- If none of the previous three conditions yields a directory object and you are a privileged user, then Data Pump attempts to use the value of the default server-based directory object, DATA_PUMP_DIR. This directory object is automatically created at database creation or when the database dictionary is upgraded. You can use the following SQL query to see the path definition for DATA_PUMP_DIR:
SQL> SELECT directory_name, directory_path FROM dba_directories
  2  WHERE directory_name='DATA_PUMP_DIR';
If you are not a privileged user, access to the DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.

Do not confuse the default DATA_PUMP_DIR directory object with the client-based environment variable of the same name.
Using Directory Objects When Automatic Storage Management Is Enabled
If you use Data Pump Export or Import with Automatic Storage Management (ASM) enabled, you must define the directory object used for the dump file so that the ASM disk-group name is used (instead of an operating system directory path). A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the ASM dump file as follows:

SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';

Then you would create a separate directory object for the log file:

SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';

To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example:

SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;

You would then use the following Data Pump Export command (you will be prompted for a password):
> expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log
Setting Parallelism
For export and import operations, the parallelism setting (specified with the PARALLEL parameter) should be less than or equal to the number of dump files in the dump file set. If there are not enough dump files, the performance will not be optimal because multiple threads of execution will be trying to access the same dump file.

The PARALLEL parameter is valid only in the Enterprise Edition of Oracle Database.
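For example, the following command (a sketch; the schema, directory object, and file names are assumptions) uses a substitution variable in the dump file name so that enough dump files can be created for four parallel workers:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp PARALLEL=4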
Using Substitution Variables
Instead of, or in addition to, listing specific filenames, you can use the DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable (%U) in the filename. This is called a dump file template. The new dump files are created as they are needed, beginning with 01 for %U, then using 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the FILESIZE parameter, it is closed, and a new dump file (with a new generated name) is created to take its place.

If multiple dump file templates are provided, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, the initial dump files created would be expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.
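A command line that produces that layout might look like the following (a sketch; the schema, directory object, and file size are assumptions):

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=expa%U.dmp,expb%U.dmp,expc%U.dmp FILESIZE=2G PARALLEL=6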
For import and SQLFILE operations, if dump file specifications expa%U, expb%U, and expc%U are specified, then the operation will begin by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp.
It is possible for the master table to span multiple dump files, so until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new filenames (for example, expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if expb01.dmp and expb02.dmp are found but expb03.dmp is not found, then no more files are searched for using the expb%U specification. Once the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.
Moving Data Between Different Database Versions
Because most Data Pump operations are performed on the server side, if you are using any version of the database other than COMPATIBLE, you must provide the server with specific version information. Otherwise, errors may occur. To specify version information, use the VERSION parameter.

Keep the following information in mind when you are using Data Pump Export and Import to move data between different database versions:
- If you specify a database version that is older than the current database version, certain features may be unavailable. For example, specifying VERSION=10.1 will cause an error if data compression is also specified for the job, because compression was not supported in 10.1.
- On a Data Pump export, if you specify a database version that is older than the current database version, then a dump file set is created that you can import into that older version of the database. However, the dump file set will not contain any objects that the older database version does not support. For example, if you export from a version 10.2 database to a version 10.1 database, comments on indextypes will not be exported into the dump file set.
- Data Pump Import can always read dump file sets created by older versions of the database.
- Data Pump Import cannot read dump file sets created by a database version that is newer than the current database version, unless those dump file sets were created with the VERSION parameter set to the version of the target database. Therefore, the best way to perform a downgrade is to perform your Data Pump export with the VERSION parameter set to the version of the target database (see the example after this list).
- When operating across a network link, Data Pump requires that the remote database version be either the same as the local database or one version older, at the most. For example, if the local database is version 11.1, the remote database must be either version 10.2 or 11.1.
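For example, the following export (a sketch; the table name and directory object are assumptions) creates a dump file set that a 10.2 database can import:

> expdp hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=emp_v102.dmp VERSION=10.2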
2. Data Pump Export
This chapter describes the Oracle Data Pump Export utility. The following topics are discussed:
- What Is Data Pump Export?
- Invoking Data Pump Export
- Filtering During Export Operations
- Parameters Available in Export's Command-Line Mode
- How Data Pump Export Parameters Map to Those of the Original Export Utility
- Commands Available in Export's Interactive-Command Mode
- Examples of Using Data Pump Export
- Syntax Diagrams for Data Pump Export
What Is Data Pump Export?
Data Pump Export is a utility for unloading data and metadata into a set of operating system files called a dump file set.

Note:
Although Data Pump Export (expdp) functionality is similar to that of the original Export utility (exp), they are completely separate utilities and their files are not compatible. See Chapter 20, "Original Export and Import" for a description of the original Export utility.

The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
Because the dump files are written by the server, rather than by the client, the database administrator (DBA) must create directory objects. See Default Locations for Dump, Log, and SQL Files for more information about directory objects.
Data Pump Export enables you to specify that a job should move a subset of the data and metadata, as determined by the export mode. This is done using data filters and metadata filters, which are specified through Export parameters. See Filtering During Export Operations.
To see some examples of the various ways in which you can use Data Pump Export, refer to Examples of Using Data Pump Export.
Invoking Data Pump Export
The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.
Note:
Do not invoke Export as SYSDBA, except at the request of Oracle technical support. SYSDBA is used internally and has specialized functions; its behavior is not the same as for general users.
Note:
It is not possible to start or restart Data Pump jobs on one instance in an Oracle Real Application Clusters (RAC) environment if there are Data Pump jobs currently running on other instances in the Oracle RAC environment.

Data Pump Export Interfaces
You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.
- Command-Line Interface: Enables you to specify most of the Export parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see Parameters Available in Export's Command-Line Mode.
- Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested. The use of parameter files is recommended if you are using parameters whose values require quotation marks. See Use of Quotation Marks On the Data Pump Command Line.
- Interactive-Command Interface: Stops logging to the terminal and displays the Export prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an export operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job.
For a complete description of the commands available in interactive-command mode, see Commands Available in Export's Interactive-Command Mode.
Data Pump Export Modes
Export provides different modes for unloading different portions of the database. The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:
Note:
A number of system schemas cannot be exported because they are not user schemas; they contain Oracle-managed data and metadata. Examples of system schemas that are not exported include SYS, ORDSYS, and MDSYS.
See Also:
Examples of Using Data Pump Export

Full Export Mode
A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. This mode requires that you have the EXP_FULL_DATABASE role.
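For example, a full export might be invoked as follows (a sketch; it assumes that the hr user has been granted the EXP_FULL_DATABASE role and that the directory object exists):

> expdp hr FULL=Y DIRECTORY=dpump_dir1 DUMPFILE=full.dmp LOGFILE=full.log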
Schema Mode
A schema export is specified using the SCHEMAS parameter. This is the default export mode. If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally include the schema definitions themselves, as well as system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.

The SYS schema cannot be used as a source schema for export jobs.

Cross-schema references are not exported unless the referenced schema is also specified in the list of schemas to be exported. For example, a trigger defined on a table within one of the specified schemas, but that resides in a schema not explicitly specified, is not exported. This is also true for external type definitions upon which tables in the specified schemas depend. In such a case, it is expected that the type definitions already exist in the target instance at import time.
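For example, the following command (a sketch; the directory object and dump file name are assumptions) performs a schema-mode export of the hr schema:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_schema.dmp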
Table Mode
A table mode export is specified using the TABLES parameter. In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded.

If you specify the TRANSPORTABLE=ALWAYS parameter in conjunction with the TABLES parameter, then only object metadata is unloaded. To move the actual data, you copy the data files to the target database. This results in quicker export times. If you are moving data files between versions or platforms, the data files may need to be processed by Oracle Recovery Manager (RMAN).

See Also:
Oracle Database Backup and Recovery User's Guide for more information on transporting data across platforms

You must have the EXP_FULL_DATABASE role to specify tables that are not in your own schema. All specified tables must reside in a single schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time. Also, as in schema exports, cross-schema references are not exported.
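For example, the following command (a sketch; the directory object and dump file name are assumptions) performs a table-mode export of two tables in the hr schema:

> expdp hr TABLES=employees,jobs DIRECTORY=dpump_dir1 DUMPFILE=hr_tables.dmp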
See Also:
- TABLES for a description of the Export TABLES parameter
- TRANSPORTABLE for a description of the Export TRANSPORTABLE parameter
Tablespace Mode
A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. Both object metadata and data are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Nonprivileged users get only the tables in their own schemas.
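For example, a tablespace-mode export might look like the following (a sketch; the tablespace name tbs_4, the directory object, and the dump file name are assumptions):

> expdp hr TABLESPACES=tbs_4 DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp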
See Also:
- TABLESPACES for a description of the Export TABLESPACES parameter
Transportable Tablespace Mode
A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is exported. The tablespace datafiles are copied in a separate operation. Then, a transportable tablespace import is performed to import the dump file containing the metadata and to specify the datafiles to use.

Transportable tablespace mode requires that the specified tables be completely self-contained. That is, all storage segments of all tables (and their indexes) defined within the tablespace set must also be contained within the set. If there are self-containment violations, Export identifies all of the problems without actually performing the export.
Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1.
Encrypted columns are not supported in transportable tablespace mode.
Note:
You cannot export transportable tablespaces and then import them into a
database at a lower release level. The target database must be at the
same or higher release level as the source database.
See Also:
- Oracle Database Administrator's Guide for more information about transportable tablespaces
Network Considerations
You can specify a connect identifier in the connect string when you invoke the Data Pump Export utility. This identifier can specify a database instance that is different from the current instance identified by the current Oracle System ID (SID). The connect identifier can be an Oracle*Net connect descriptor or a name that maps to a connect descriptor. This requires an active listener (to start the listener, enter lsnrctl start) that can be located using the connect descriptor. The following example invokes Export for user hr, using the connect descriptor named inst1:

expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TABLES=employees
Export: Release 11.1.0.6.0 - Production on Monday, 27 August, 2007 10:15:45
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password: password@inst1
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Data Mining and Real Application Testing options
The local Export client connects to the database instance identified by the connect descriptor inst1 (a simple net service name, usually defined in a tnsnames.ora file), to export the data on that instance.

Do not confuse invoking the Export utility using a connect identifier with an export operation specifying the Export NETWORK_LINK command-line parameter. When you perform an export and use the NETWORK_LINK parameter, the export is initiated over a database link: the local Export client connects to the database instance identified by the command-line connect string, retrieves the data to be exported from the database instance identified by the database link, and writes the data to a dump file set on the connected database instance. Whereas, when you start an export operation and specify a connect identifier, the local Export client simply connects to, and exports data from, the database instance identified by that connect identifier.

Filtering During Export Operations
Data Pump Export provides much greater data and metadata filtering capability than was provided by the original Export utility.

Data Filters
Data-specific filtering is implemented through the QUERY and SAMPLE parameters, which specify restrictions on the table rows that are to be exported.

Data filtering can also occur indirectly as a result of metadata filtering, which can include or exclude table objects along with any associated row data.
Each data filter can be specified once per table within a job. If different filters using the same name are applied to both a particular table and to the whole job, the filter parameter supplied for the specific table will take precedence.
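For example, the following command (a sketch; the directory object and dump file name are assumptions) uses the SAMPLE parameter to export approximately 30 percent of the rows in the employees table:

> expdp hr TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=emp_sample.dmp SAMPLE=30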
Metadata Filters
Metadata filtering is implemented through the EXCLUDE and INCLUDE parameters. The EXCLUDE and INCLUDE parameters are mutually exclusive.

Metadata filters identify a set of objects to be included or excluded from an Export or Import operation. For example, you could request a full export, but without Package Specifications or Package Bodies.
To use filters correctly and to get the results you expect, remember that dependent objects of an identified object are processed along with the identified object. For example, if a filter specifies that an index is to be included in an operation, then statistics from that index will also be included. Likewise, if a table is excluded by a filter, then indexes, constraints, grants, and triggers upon the table will also be excluded by the filter.
If multiple filters are specified for an object type, an implicit AND operation is applied to them. That is, objects pertaining to the job must pass all of the filters applied to their object types.

The same metadata filter name can be specified multiple times within a job.
To see which objects can be filtered, you can query the following views: DATABASE_EXPORT_OBJECTS for Full-mode exports, SCHEMA_EXPORT_OBJECTS for schema-mode exports, and TABLE_EXPORT_OBJECTS for table-mode and tablespace-mode exports. For example, you could perform the following query:

SQL> SELECT OBJECT_PATH, COMMENTS FROM SCHEMA_EXPORT_OBJECTS
  2  WHERE OBJECT_PATH LIKE '%GRANT' AND OBJECT_PATH NOT LIKE '%/%';

The output of this query looks similar to the following:

OBJECT_PATH
--------------------------------------------------------------------------------
COMMENTS
--------------------------------------------------------------------------------
GRANT
Object grants on the selected tables
OBJECT_GRANT
Object grants on the selected tables
PROCDEPOBJ_GRANT
Grants on instance procedural objects
PROCOBJ_GRANT
Schema procedural object grants in the selected schemas
ROLE_GRANT
Role grants to users associated with the selected schemas
SYSTEM_GRANT
System privileges granted to users associated with the selected schemas
Parameters Available in Export's Command-Line Mode
This section provides descriptions of the parameters available in the command-line mode of Data Pump Export. Many of the descriptions include an example of how to use the parameter.

If you try running the examples that are provided for each parameter, be aware of the following:
- After you enter the username and parameters as shown in the example, Export is started and you are prompted for a password before a database connection is made:

  Export: Release 11.1.0.6.0 - Production on Monday, 27 August, 2007 11:45:35
  Copyright (c) 2003, 2007, Oracle. All rights reserved.
  Password: password
  Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
  With the Partitioning, Data Mining and Real Application Testing options

- Most of the examples use the sample schemas of the seed database, which is installed by default when you install Oracle Database. In particular, the human resources (hr) schema is often used.
- The examples assume that the directory objects, dpump_dir1 and dpump_dir2, already exist and that READ and WRITE privileges have been granted to the hr schema for these directory objects. See Default Locations for Dump, Log, and SQL Files for information about creating directory objects and assigning privileges to them.
- Some of the examples require the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. The examples assume that the hr schema has been granted these roles.
Syntax diagrams of these parameters are provided in Syntax Diagrams for Data Pump Export.
Unless specifically noted, these parameters can also be specified in a parameter file.
Some operating systems require that quotation marks on the command line be preceded by an escape character, such as the backslash. If the backslashes were not present, the command-line parser that Export uses would not understand the quotation marks and would remove them, resulting in an error. In general, Oracle recommends that you place such statements in a parameter file because escape characters are not necessary in parameter files.
See Also:
- Default Locations for Dump, Log, and SQL Files for information about creating default directory objects
- Examples of Using Data Pump Export
- Oracle Database Sample Schemas
Note:
If you are accustomed to using the original Export utility (exp), you may be wondering which Data Pump parameters are used to perform the operations you used to perform with original Export. For a comparison, see How Data Pump Export Parameters Map to Those of the Original Export Utility.

ATTACH
Default: job currently in the user's schema, if there is only one

Attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the Export prompt.

ATTACH [=[schema_name.]job_name]
The schema_name is optional. To specify a schema other than your own, you must have the EXP_FULL_DATABASE role.

The job_name is optional if only one export job is associated with your schema and the job is active. To attach to a stopped job, you must supply the job name. To see a list of Data Pump job names, you can query the DBA_DATAPUMP_JOBS view or the USER_DATAPUMP_JOBS view.

When you are attached to the job, Export displays a description of the job and then displays the Export prompt.
Restrictions:
- When you specify the ATTACH parameter, the only other Data Pump parameter you can specify on the command line is ENCRYPTION_PASSWORD.
- If the job you are attaching to was initially started using an encryption password, then when you attach to the job you must again enter the ENCRYPTION_PASSWORD parameter on the command line to re-specify that password. The only exception to this is if the job was initially started with the ENCRYPTION=ENCRYPTED_COLUMNS_ONLY parameter. In that case, the encryption password is not needed when attaching to the job.
- You cannot attach to a job in another schema unless it is already running.
- If the dump file set or master table for the job has been deleted, the attach operation will fail.
- Altering the master table in any way will lead to unpredictable results.

The following is an example of using the ATTACH parameter. It assumes that the job, hr.export_job, already exists.

> expdp hr ATTACH=hr.export_job
COMPRESSION
Default: METADATA_ONLY
Specifies which data to compress before writing to the dump file set.
COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
- ALL enables compression for the entire export operation.
- DATA_ONLY results in all data being written to the dump file in compressed format.
- METADATA_ONLY results in all metadata being written to the dump file in compressed format. This is the default.
- NONE disables compression for the entire export operation.
Restrictions:
- To make full use of all these compression options, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- The METADATA_ONLY option can be used even if the COMPATIBLE initialization parameter is set to 10.2.

The following is an example of using the COMPRESSION parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY

This command will execute a schema-mode export that will compress all metadata before writing it out to the dump file, hr_comp.dmp. It defaults to a schema-mode export because no export mode is specified.

CONTENT
Default: ALL
Enables you to filter what Export unloads: data only, metadata only, or both.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
- ALL unloads both data and metadata. This is the default.
- DATA_ONLY unloads only table row data; no database object definitions are unloaded.
- METADATA_ONLY unloads only database object definitions; no table row data is unloaded.
Restrictions:
- The CONTENT=METADATA_ONLY parameter cannot be used in conjunction with the parameter TRANSPORT_TABLESPACES (transportable-tablespace mode).

The following is an example of using the CONTENT parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp CONTENT=METADATA_ONLY

This command will execute a schema-mode export that will unload only the metadata associated with the hr schema. It defaults to a schema-mode export of the hr schema because no export mode is specified.

DATA_OPTIONS
Default: There is no default. If this parameter is not used, then the special data handling options it provides simply do not take effect.

The DATA_OPTIONS parameter provides options for how to handle certain types of data during exports and imports. For export operations, the only valid option for the DATA_OPTIONS parameter is XML_CLOBS.

DATA_OPTIONS=XML_CLOBS

The XML_CLOBS option specifies that XMLType columns are to be exported in uncompressed CLOB format regardless of the XMLType storage format that was defined for them.

If a table has XMLType columns stored only as CLOBs, then it is not necessary to specify the XML_CLOBS option because Data Pump automatically exports them in CLOB format.

If a table has XMLType columns stored as any combination of object-relational (schema-based), binary, or CLOB formats, Data Pump exports them in compressed format, by default. This is the preferred method. However, if you need to export the data in uncompressed CLOB format, you can use the XML_CLOBS option to override the default.
See Also:
Oracle XML DB Developer's Guide for information specific to exporting and importing XMLType tables

Restrictions:
- Using the XML_CLOBS option requires that the same XML schema be used at both export and import time.
- The Export DATA_OPTIONS parameter requires the job version to be set at 11.0.0 or higher. See VERSION.

In the following example, any XMLType columns in the hr.xdb_tab1 table are exported in uncompressed CLOB format regardless of the XMLType storage format that was defined for them.

> expdp hr TABLES=hr.xdb_tab1 DIRECTORY=dpump_dir1 DUMPFILE=hr_xml.dmp VERSION=11.1 DATA_OPTIONS=xml_clobs
DIRECTORY
Default: DATA_PUMP_DIR
Specifies the default location to which Export can write the dump file set and the log file.
DIRECTORY=directory_object
The directory_object is the name of a database directory object (not the file path of an actual directory). Upon installation, privileged users have access to a default directory object named DATA_PUMP_DIR. Users with access to DATA_PUMP_DIR need not use the DIRECTORY parameter at all.

A directory object specified on the DUMPFILE or LOGFILE parameter overrides any directory object that you specify for the DIRECTORY parameter.

The following is an example of using the DIRECTORY parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY

The dump file, employees.dmp, will be written to the path that is associated with the directory object dpump_dir1.
See Also:
- Default Locations for Dump, Log, and SQL Files for more information about default directory objects
- Oracle Database SQL Language Reference for information about the CREATE DIRECTORY command
DUMPFILE
Default: expdat.dmp

Specifies the names, and optionally, the directory objects of dump files for an export job.

DUMPFILE=[directory_object:]file_name [, ...]

The directory_object is optional if one has already been established by the DIRECTORY parameter. If you supply a value here, it must be a directory object that already exists and that you have access to. A database directory object that is specified as part of the DUMPFILE parameter overrides a value specified by the DIRECTORY parameter or by the default directory object.

You can supply multiple file_name specifications as a comma-delimited list or in separate DUMPFILE parameter specifications. If no extension is given for the filename, then Export uses the default file extension of .dmp. The filenames can contain a substitution variable (%U), which implies that multiple files may be generated. The substitution variable is expanded in the resulting filenames into a 2-digit, fixed-width, incrementing integer starting at 01 and ending at 99. If a file specification contains two substitution variables, both are incremented at the same time. For example, exp%Uaa%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth.
If the FILESIZE parameter is specified, each dump file will have a maximum of that size in bytes and be nonextensible. If more space is required for the dump file set and a template with a substitution variable (%U) was supplied, a new dump file is automatically created of the size specified by FILESIZE, if there is room on the device.

As each file specification or file template containing a substitution variable is defined, it is instantiated into one fully qualified filename and Export attempts to create it. The file specifications are processed in the order in which they are specified. If the job needs extra files because the maximum file size is reached, or to keep parallel workers active, then additional files are created if file templates with substitution variables were specified.
Although it is possible to specify multiple files using the DUMPFILE parameter, the export job may only require a subset of those files to hold the exported data. The dump file set displayed at the end of the export job shows exactly which files were used. It is this list of files that is required in order to perform an import operation using this dump file set.

Restrictions:
- Any resulting dump file names that match preexisting dump file names will generate an error and the preexisting dump files will not be overwritten. You can override this behavior by specifying the Export parameter REUSE_DUMPFILES=Y.
The following is an example of using the DUMPFILE parameter:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp PARALLEL=3

The dump file, exp1.dmp, will be written to the path associated with the directory object dpump_dir2 because dpump_dir2 was specified as part of the dump file name, and therefore overrides the directory object specified with the DIRECTORY parameter. Because all three parallel processes will be given work to perform during this job, the exp201.dmp and exp202.dmp dump files will be created and they will be written to the path associated with the directory object, dpump_dir1, that was specified with the DIRECTORY parameter.
ENCRYPTION
Default: The default value depends upon the combination of encryption-related parameters that are used. To enable encryption, either the ENCRYPTION or ENCRYPTION_PASSWORD parameter, or both, must be specified. If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION parameter defaults to ALL. If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to NONE.

Specifies whether or not to encrypt data before writing it to the dump file set.
ENCRYPTION = {ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
- ALL enables encryption for all data and metadata in the export operation.
- DATA_ONLY specifies that only data is written to the dump file set in encrypted format.
- ENCRYPTED_COLUMNS_ONLY specifies that only encrypted columns are written to the dump file set in encrypted format.
- METADATA_ONLY specifies that only metadata is written to the dump file set in encrypted format.
- NONE specifies that no data is written to the dump file set in encrypted format.
Note:
If the data being exported includes SecureFiles that you want to be encrypted, then you must specify ENCRYPTION=ALL to encrypt the entire dump file set. Encryption of the entire dump file set is the only way to achieve encryption security for SecureFiles during a Data Pump export operation. For more information about SecureFiles, see Oracle Database SecureFiles and Large Objects Developer's Guide.
Restrictions:
- To specify the ALL, DATA_ONLY, or METADATA_ONLY options, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

The following is an example of using the ENCRYPTION parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp JOB_NAME=enc1 ENCRYPTION=data_only ENCRYPTION_PASSWORD=foobar
ENCRYPTION_ALGORITHM
Default: AES128
Specifies which cryptographic algorithm should be used to perform the encryption.
ENCRYPTION_ALGORITHM = { AES128 | AES192 | AES256 }

See Oracle Database Advanced Security Administrator's Guide for information about encryption algorithms.
Restrictions:
- To use this encryption feature, the COMPATIBLE initialization parameter must be set to at least 11.0.0.
- The ENCRYPTION_ALGORITHM parameter requires that you also specify either the ENCRYPTION or ENCRYPTION_PASSWORD parameter; otherwise an error is returned.
- This parameter is valid only in the Enterprise Edition of Oracle Database 11g.

The following is an example of using the ENCRYPTION_ALGORITHM parameter:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp ENCRYPTION_PASSWORD=foobar ENCRYPTION_ALGORITHM=AES128
ENCRYPTION_MODE
Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified, then the default mode is TRANSPARENT. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.

Specifies the type of security to use when encryption and decryption are performed.
ENCRYPTION_MODE = { DUAL | PASSWORD | TRANSPARENT }
DUAL mode creates a dump file set that can later be imported either transparently or by specifying a password that was used when the dual-mode encrypted dump file set was created. When you later import the dump file set created in DUAL mode, you can use either the Oracle Encryption Wallet or the password that was specified with the ENCRYPTION_PASSWORD parameter. DUAL mode is best suited for cases in which the dump file set will be imported onsite using the Oracle Encryption Wallet, but which may also need to be imported offsite where the Oracle Encryption Wallet is not available.

PASSWORD mode requires that you provide a password when creating encrypted dump file sets. You will need to provide the same password when you import the dump file set. PASSWORD mode requires that you also specify the ENCRYPTION_PASSWORD parameter. The PASSWORD mode is best suited for cases in which the dump file set will be imported into a different or remote database, but which must remain secure in transit.

TRANSPARENT mode allows an encrypted dump file set to be created without any intervention from a database administrator (DBA), provided the required Oracle Encryption Wallet is available. Therefore, the ENCRYPTION_PASSWORD parameter is not required, and will, in fact, cause an error if it is used in TRANSPARENT mode. This encryption mode is best suited for cases in which the dump file set will be imported into the same database from which it was exported.
To use
DUAL
orTRANSPARENT
mode, theCOMPATIBLE
initialization parameter must be set to at least 11.0.0.
-
When you use the
ENCRYPTION_MODE
parameter, you must also use either theENCRYPTION
orENCRYPTION_PASSWORD
parameter. Otherwise, an error is returned.
-
This parameter is valid only in the Enterprise Edition of Oracle Database 11g.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp ENCRYPTION=all ENCRYPTION_PASSWORD=secretwords ENCRYPTION_ALGORITHM=AES256 ENCRYPTION_MODE=dual
ENCRYPTION_PASSWORD
Default: There is no default; the value is user-provided.Specifies a password for encrypting encrypted column data, metadata, or table data in the export dumpfile. This prevents unauthorized access to an encrypted dump file set.
Note:
Data Pump encryption functionality has changed as of Oracle Database 11g release 1 (11.1). Prior to release 11.1, the ENCRYPTION_PASSWORD
parameter applied only to encrypted columns. However, as of release 11.1, the new ENCRYPTION
parameter provides options for encrypting other types of data. This means that if you now specify ENCRYPTION_PASSWORD
without also specifying ENCRYPTION
and a specific option, then all data written to the dump file will be encrypted (equivalent to specifying ENCRYPTION=ALL
). If you want to re-encrypt only encrypted columns, you must now specify ENCRYPTION=ENCRYPTED_COLUMNS_ONLY
in addition to ENCRYPTION_PASSWORD
.ENCRYPTION_PASSWORD = password
The password
value that is supplied specifies a key for
re-encrypting encrypted table columns, metadata, or table data so that
they are not written as clear text in the dump file set. If the export
operation involves encrypted table columns, but an encryption password
is not supplied, then the encrypted columns will be written to the dump
file set as clear text and a warning will be issued.For export operations, this parameter is required if the
ENCRYPTION_MODE
parameter is set to either PASSWORD
or DUAL
.
Note:
There is no connection or dependency between the key specified with the Data Pump ENCRYPTION_PASSWORD
parameter and the key specified with the ENCRYPT
keyword when the table with encrypted columns was initially created.
For example, suppose a table is created as follows, with an encrypted
column whose key is xyz
:
CREATE TABLE emp (col1 VARCHAR2(256) ENCRYPT IDENTIFIED BY "xyz");When you export the
emp
table, you can supply any arbitrary value for ENCRYPTION_PASSWORD
. It does not have to be xyz
.-
This parameter is valid only in the Enterprise Edition of Oracle Database 11g.
-
If
ENCRYPTION_PASSWORD
is specified butENCRYPTION_MODE
is not specified, then it is not necessary to have the Transparent Data Encryption option set up sinceENCRYPTION_MODE
will default toPASSWORD
.
-
The
ENCRYPTION_PASSWORD
parameter is not valid if the requested encryption mode isTRANSPARENT
.
-
To use the
ENCRYPTION_PASSWORD
parameter ifENCRYPTION_MODE
is set toDUAL
, you must have the Transparent Data Encryption option set up. See Oracle Database Advanced Security Administrator's Guide for more information about the Transparent Data Encryption option.
-
For network exports, the
ENCRYPTION_PASSWORD
parameter in conjunction withENCRYPTED_COLUMNS_ONLY
is not supported with user-defined external tables that have encrypted columns. The table will be skipped and an error message will be displayed, but the job will continue.
-
Encryption attributes for all columns must match between the exported
table definition and the target table. For example, suppose you have a
table,
EMP
, and one of its columns is namedEMPNO
. Both of the following situations would result in an error because the encryption attribute for theEMP
column in the source table would not match the encryption attribute for theEMP
column in the target table:
-
The
EMP
table is exported with theEMPNO
column being encrypted, but prior to importing the table you remove the encryption attribute from theEMPNO
column.
-
The
EMP
table is exported without theEMPNO
column being encrypted, but prior to importing the table you enable encryption on theEMPNO
column.
In the following example, an encryption password, 123456, is assigned to the dump file, dpcd2be1.dmp.

expdp hr TABLES=employee_s_encrypt DIRECTORY=dpump_dir DUMPFILE=dpcd2be1.dmp ENCRYPTION=ENCRYPTED_COLUMNS_ONLY ENCRYPTION_PASSWORD=123456

Encrypted columns in the employee_s_encrypt table will not be written as clear text in the dpcd2be1.dmp
dump file. Note that in order to subsequently import the dpcd2be1.dmp
file created by this example, you will need to supply the same encryption password. (See "ENCRYPTION_PASSWORD" for an example of an import operation using the ENCRYPTION_PASSWORD
parameter.)ESTIMATE
Default:BLOCKS
Specifies the method that Export will use to estimate how much disk space each table in the export job will consume (in bytes). The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for table row data only; it does not include metadata.
ESTIMATE={BLOCKS | STATISTICS}
-
BLOCKS
- The estimate is calculated by multiplying the number of database blocks used by the source objects, times the appropriate block sizes.
-
STATISTICS
- The estimate is calculated using statistics for each table. For this method to be as accurate as possible, all tables should have been analyzed recently.
-
If the Data Pump export job involves compressed tables, the default
size estimation given for the compressed table is inaccurate when
ESTIMATE=BLOCKS
is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, useESTIMATE=STATISTICS
.
-
The estimate may also be inaccurate if the
QUERY
,SAMPLE
, orREMAP_DATA
parameter is used.
The following example shows a use of the ESTIMATE
parameter in which the estimate is calculated using statistics for the employees
table:> expdp hr TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=estimate_stat.dmp
ESTIMATE_ONLY
Default:n
Instructs Export to estimate the space that a job would consume, without actually performing the export operation.
ESTIMATE_ONLY={y | n}If
ESTIMATE_ONLY
=y
, then Export estimates the space that would be consumed, but quits without actually performing the export operation.The following shows an example of using the
ESTIMATE_ONLY
parameter to determine how much space an export of the HR
schema will take.> expdp hr ESTIMATE_ONLY=y NOLOGFILE=y SCHEMAS=HR
EXCLUDE
Default: There is no defaultEnables you to filter the metadata that is exported by specifying objects and object types that you want excluded from the export operation.
EXCLUDE=object_type[:name_clause] [, ...]All object types for the given mode of export will be included except those specified in an
EXCLUDE
statement. If an object is excluded, all of its dependent objects are
also excluded. For example, excluding a table will also exclude all
indexes and triggers on the table.The
name_clause
is
optional. It allows selection of specific objects within an object
type. It is a SQL expression used as a filter on the type's object
names. It consists of a SQL operator and the values against which the
object names of the specified type are to be compared. The name clause
applies only to object types whose instances have names (for example, it
is applicable to TABLE
, but not to GRANT
).
The name clause must be separated from the object type with a colon and
enclosed in double quotation marks, because single-quotation marks are
required to delimit the name strings. For example, you could set EXCLUDE=INDEX:"LIKE 'EMP%'"
to exclude all indexes whose names start with EMP
.If no
name_clause
is provided, all objects of the specified type are excluded.More than one
EXCLUDE
statement can be specified.Oracle recommends that you place
EXCLUDE
clauses in a parameter file to avoid having to use escape characters on the command line.
See Also:
-
INCLUDE for an example of using a parameter file
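For illustration, a parameter file along the following lines keeps the quotation marks away from the shell. (The file name exclude.par and the dump file name are hypothetical; the EXCLUDE filters use only the documented syntax shown above.)

DIRECTORY=dpump_dir1
DUMPFILE=hr_noidx.dmp
SCHEMAS=HR
EXCLUDE=INDEX:"LIKE 'EMP%'"
EXCLUDE=VIEW

You could then run the export with a command such as:

> expdp hr PARFILE=exclude.par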
If the object_type
you specify is CONSTRAINT
, GRANT
, or USER
, you should be aware of the effects this will have, as described in the following paragraphs.The following constraints cannot be explicitly excluded:
-
NOT
NULL
constraints
-
Constraints needed for the table to be created and loaded
successfully; for example, primary key constraints for index-organized
tables, or
REF
SCOPE
andWITH
ROWID
constraints for tables withREF
columns
EXCLUDE
statements will be interpreted as follows:-
EXCLUDE=
CONSTRAINT
will exclude all (nonreferential) constraints, except forNOT
NULL
constraints and any constraints needed for successful table creation and loading.
-
EXCLUDE=
REF_CONSTRAINT
will exclude referential integrity (foreign key) constraints.
Specifying EXCLUDE=GRANT excludes object grants on all object types and system privilege grants.

Specifying
EXCLUDE
=USER
excludes only the definitions of users, not the objects contained within users' schemas.To exclude a specific user and all objects of that user, specify a filter such as the following (where
hr
is the schema name of the user you want to exclude):EXCLUDE=SCHEMA:"='HR'"If you try to exclude a user by using a statement such as
EXCLUDE=USER:"='HR'"
, then only the information used in CREATE USER hr
DDL statements will be excluded, and you may not get the results you expect.-
The
EXCLUDE
andINCLUDE
parameters are mutually exclusive.
The following is an example of using the EXCLUDE statement.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION

This will result in a schema-mode export in which all of the
hr
schema will be exported except its views, packages, and functions.
See Also:
-
Filtering During Export Operations for more information about the effects of using the
EXCLUDE
parameter
FILESIZE
Default:0
(unlimited)Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable.
FILESIZE=integer[B | K | M | G]
The integer
can be followed by B
, K
, M
, or G
(indicating bytes, kilobytes, megabytes, and gigabytes respectively).
Bytes is the default. The actual size of the resulting file may be
rounded down slightly to match the size of the internal blocks used in
dump files.-
The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes.
The following example sets the maximum size of each dump file to 3 megabytes:

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3M

If three megabytes had not been sufficient to hold all the exported data, then the following error would have been displayed and the job would have stopped:
ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytesThe actual number of bytes that could not be allocated may vary. Also, this number does not represent the amount of space needed to complete the entire export operation. It indicates only the size of the current object that was being exported when the job ran out of dump file space.This situation can be corrected by first attaching to the stopped job, adding one or more files using the
ADD_FILE
command, and then restarting the operation.FLASHBACK_SCN
Default: There is no defaultSpecifies the system change number (SCN) that Export will use to enable the Flashback Query utility.
FLASHBACK_SCN=scn_value
The export operation is performed with data that is consistent as of the specified SCN. If the NETWORK_LINK
parameter is specified, the SCN refers to the SCN of the source database.-
FLASHBACK_SCN
andFLASHBACK_TIME
are mutually exclusive.
-
The
FLASHBACK_SCN
parameter pertains only to the Flashback Query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.
The following example assumes that an existing SCN value of 384632 exists. It exports the hr
schema up to SCN 384632.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
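If you need a current SCN value to supply to FLASHBACK_SCN, one common approach (a sketch, not specific to Data Pump) is to query it from the source database just before starting the export:

SELECT current_scn FROM v$database;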
Note:
If you are on a logical standby system and using a network link to access the logical standby primary, the FLASHBACK_SCN
parameter is ignored because SCNs are selected by logical standby. See Oracle Data Guard Concepts and Administration for information about logical standby databases.FLASHBACK_TIME
Default: There is no defaultThe SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent as of this SCN.
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"Because the
TO_TIMESTAMP
value is enclosed in quotation
marks, it would be best to put this parameter in a parameter file.
Otherwise, you might need to use escape characters on the command line
in front of the quotation marks. See Use of Quotation Marks On the Data Pump Command Line.-
FLASHBACK_TIME
andFLASHBACK_SCN
are mutually exclusive.
-
The
FLASHBACK_TIME
parameter pertains only to the flashback query capability of Oracle Database. It is not applicable to Flashback Database, Flashback Drop, or Flashback Data Archive.
You can specify the time in any format that the DBMS_FLASHBACK.ENABLE_AT_TIME procedure accepts. For example, suppose you have a parameter file, flashback.par, with the following contents:

DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('25-08-2003 14:35:00', 'DD-MM-YYYY HH24:MI:SS')"

You could then issue the following command:
> expdp hr PARFILE=flashback.parThe export operation will be performed with data that is consistent with the SCN that most closely matches the specified time.
Note:
If you are on a logical standby system and using a network link to access the logical standby primary, the FLASHBACK_SCN
parameter is ignored because SCNs are selected by logical standby. FULL
Default:n
Specifies that you want to perform a full database mode export.
FULL={y | n}
FULL=y
indicates that all data and metadata are to be exported. Filtering can restrict what is exported using this export mode. See Filtering During Export Operations.To perform a full export, you must have the
EXP_FULL_DATABASE
role.
Note:
Be aware that when you later import a dump file that was created by a
full-mode export, the import operation attempts to copy the password for
the SYS
account from the source database. This sometimes
fails (for example, if the password is in a shared password file). If it
does fail, then after the import completes, you must set the password
for the SYS
account at the target database to a password of your choice.
-
A full export does not export system schemas that contain
Oracle-managed data and metadata. Examples of system schemas that are
not exported include
SYS
,ORDSYS
, andMDSYS
.
-
Grants on objects owned by the
SYS
schema are never exported.
-
If you are exporting data that is protected by a realm, you must have authorization for that realm.
The following is an example of using the FULL parameter. The dump file, expfull.dmp,
is written to the dpump_dir2
directory.> expdp hr DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=y NOLOGFILE=y
HELP
Default:N
Displays online help for the Export utility.
HELP = {y | n}If
HELP
=y
is specified, Export displays a summary of all Export command-line parameters and interactive commands.> expdp HELP = yThis example will display a brief description of all Export parameters and commands.
INCLUDE
Default: There is no defaultEnables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported. Grants on these objects are also exported.
INCLUDE = object_type[:name_clause] [, ...]Only object types explicitly specified in
INCLUDE
statements, and their dependent objects, are exported. No other object
types, including the schema definition information that is normally part
of a schema-mode export when you have the EXP_FULL_DATABASE
role, are exported.To see a list of valid paths for use with the
INCLUDE
parameter, you can query the following views: DATABASE_EXPORT_OBJECTS
for Full mode, SCHEMA_EXPORT_OBJECTS
for schema mode, and TABLE_EXPORT_OBJECTS
for table and tablespace mode.The
name_clause
is
optional. It allows fine-grained selection of specific objects within
an object type. It is a SQL expression used as a filter on the object
names of the type. It consists of a SQL operator and the values against
which the object names of the specified type are to be compared. The
name clause applies only to object types whose instances have names (for
example, it is applicable to TABLE
, but not to GRANT
).
The optional name clause must be separated from the object type with a
colon and enclosed in double quotation marks, because single-quotation
marks are required to delimit the name strings.Oracle recommends that
INCLUDE
statements be placed in a
parameter file; otherwise you might have to use operating
system-specific escape characters on the command line before quotation
marks.For example, suppose you have a parameter file named
hr.par
with the following content:

SCHEMAS=HR
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"

You could then use the
hr.par
file to start an export operation, without having to enter any other parameters on the command line:> expdp hr parfile=hr.parIf the
object_type
you specify is a CONSTRAINT
, you should be aware of the effects this will have.The following constraints cannot be explicitly included:
-
NOT
NULL
constraints
-
Constraints needed for the table to be created and loaded
successfully; for example, primary key constraints for index-organized
tables, or
REF
SCOPE
andWITH
ROWID
constraints for tables withREF
columns
INCLUDE
statements will be interpreted as follows:-
INCLUDE=
CONSTRAINT
will include all (nonreferential) constraints, except forNOT
NULL
constraints and any constraints needed for successful table creation and loading.
-
INCLUDE=
REF_CONSTRAINT
will include referential integrity (foreign key) constraints.
-
The
INCLUDE
andEXCLUDE
parameters are mutually exclusive. -
Grants on objects owned by the
SYS
schema are never exported.
The following example shows the use of the INCLUDE parameter to export only tables (and their dependent objects) from the hr
schema:> expdp hr INCLUDE=TABLE DUMPFILE=dpump_dir1:exp_inc.dmp NOLOGFILE=y
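A quick way to see which object paths the INCLUDE (and EXCLUDE) filters accept is to query the views mentioned above. The following is a sketch, assuming the usual OBJECT_PATH and COMMENTS columns in SCHEMA_EXPORT_OBJECTS:

SELECT object_path, comments
  FROM schema_export_objects
 WHERE object_path LIKE '%TABLE%';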
JOB_NAME
Default: system-generated name of the formSYS_EXPORT_<mode>_NN
Used to identify the export job in subsequent actions, such as when the
ATTACH
parameter is used to attach to a job, or to identify the job using the DBA_DATAPUMP_JOBS
or USER_DATAPUMP_JOBS
views. The job name becomes the name of the master table in the current
user's schema. The master table is used to control the export job.JOB_NAME=jobname_string
The jobname_string
specifies a name of up to 30 bytes for this export job.
The bytes must represent printable characters and spaces. If spaces are
included, the name must be enclosed in single quotation marks (for
example, 'Thursday Export'). The job name is implicitly qualified by the
schema of the user performing the export operation.The default job name is system-generated in the form
SYS_EXPORT_<mode>_NN
, where NN
expands to a 2-digit incrementing integer starting at 01. An example of a default name is 'SYS_EXPORT_TABLESPACE_02'
.The following example shows an export operation that is assigned a job name of
exp_job
:> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=exp_job.dmp JOB_NAME=exp_job NOLOGFILE=y
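Because the job was given an explicit name, you could later reattach to it from another session using the ATTACH parameter described earlier in this chapter; for example:

> expdp hr ATTACH=exp_job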
LOGFILE
Default:export
.log
Specifies the name, and optionally, a directory, for the log file of the export job.
LOGFILE=[directory_object:]file_nameYou can specify a database
directory_object
previously established by the DBA, assuming that you have access to it. This overrides the directory object specified with the DIRECTORY
parameter.The
file_name
specifies a name for the log file. The default behavior is to create a file named export
.log
in the directory referenced by the directory object specified in the DIRECTORY
parameter.All messages regarding work in progress, work completed, and errors encountered are written to the log file. (For a real-time status of the job, use the
STATUS
command in interactive mode.)A log file is always created for an export job unless the
NOLOGFILE
parameter is specified. As with the dump file set, the log file is relative to the server and not the client.An existing file matching the filename will be overwritten.
-
To perform a Data Pump Export using Automatic Storage Management (ASM), you must specify a
LOGFILE
parameter that includes a directory object that does not include the ASM + notation. That is, the log file must be written to a disk file, and not written into the ASM storage. Alternatively, you can specifyNOLOGFILE=Y
. However, this prevents the writing of the log file.
> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp LOGFILE=hr_export.log
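If a suitable directory object does not already exist, a DBA can create one and grant access to it. A minimal sketch follows; the file system path shown is hypothetical:

CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;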
Note:
Data Pump Export writes the log file using the database character set. If your client NLS_LANG
environment setting sets up a different client character set from the
database character set, then it is possible that table names may be
different in the log file than they are when displayed on the client
output screen.
See Also:
-
STATUS
-
Using Directory Objects When Automatic Storage Management Is Enabled for information about Automatic Storage Management and directory objects
NETWORK_LINK
Default: There is no defaultEnables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
NETWORK_LINK=source_database_link
The NETWORK_LINK
parameter initiates an export using a database link. This means that the system to which the expdp
client is connected contacts the source database referenced by the source_database_link
, retrieves data from it, and writes the data to a dump file set back on the connected system.The
source_database_link
provided must be the name of a database link to an available database.
If the database on that instance does not already have a database link,
you or your DBA must create one. For more information about the CREATE
DATABASE
LINK
statement, see Oracle Database SQL Language Reference.If the source database is read-only, then the user on the source database must have a locally managed tablespace assigned as the default temporary tablespace. Otherwise, the job will fail. For further details about this, see the information about creating locally managed temporary tablespaces in the Oracle Database Administrator's Guide.
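For illustration, a fixed-user database link could be created on the connected database along these lines; the link name, credentials, and service name shown are hypothetical:

CREATE DATABASE LINK source_database_link
  CONNECT TO hr IDENTIFIED BY hr_password
  USING 'source_db_service';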
Caution:
If an export operation is performed over an unencrypted network link,
then all data is exported as clear text even if it is encrypted in the
database. See Oracle Database Advanced Security Administrator's Guide for information about network security.-
When the
NETWORK_LINK
parameter is used in conjunction with theTABLES
parameter, only whole tables can be exported (not partitions of tables).
-
The only types of database links supported by Data Pump Export are:
public, fixed-user, and connected-user. Current-user database links are
not supported.
NETWORK_LINK
parameter. The source_database_link
would be replaced with the name of a valid database link that must already exist.> expdp hr DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
DUMPFILE=network_export.dmp LOGFILE=network_export.log
NOLOGFILE
Default:n
Specifies whether to suppress creation of a log file.
NOLOGFILE={y | n}Specify
NOLOGFILE
=y
to suppress
the default behavior of creating a log file. Progress and error
information is still written to the standard output device of any
attached clients, including the client that started the original export
operation. If there are no clients attached to a running job and you
specify NOLOGFILE=y
, you run the risk of losing important progress and error information.The following is an example of using the
NOLOGFILE
parameter:> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp NOLOGFILE=yThis command results in a schema-mode export in which no log file is written.
PARALLEL
Default:1
Specifies the maximum number of threads of active execution operating on behalf of the export job. This execution set consists of a combination of worker processes and parallel I/O server processes. The master control process and worker processes acting as query coordinators in parallel query operations do not count toward this total.
This parameter enables you to make trade-offs between resource consumption and elapsed time.
PARALLEL=integer
The value you specify for integer
should be less than, or equal to, the number of files in the dump file
set (or you should specify substitution variables in the dump file
specifications). Because each active worker process or I/O server
process writes exclusively to one file at a time, an insufficient number
of files can have adverse effects. Some of the worker processes will be
idle while waiting for files, thereby degrading the overall performance
of the job. More importantly, if any member of a cooperating group of
parallel I/O server processes cannot obtain a file for output, then the
export operation will be stopped with an ORA-39095
error. Both situations can be corrected by attaching to the job using
the Data Pump Export utility, adding more files using the ADD_FILE
command while in interactive mode, and in the case of a stopped job, restarting the job.To increase or decrease the value of
PARALLEL
during job
execution, use interactive-command mode. Decreasing parallelism does
not result in fewer worker processes associated with the job; it merely
decreases the number of worker processes that will be executing at any
given time. Also, any ongoing work must reach an orderly completion
point before the decrease takes effect. Therefore, it may take a while
to see any effect from decreasing the value. Idle workers are not
deleted until the job exits.Increasing the parallelism takes effect immediately if there is work that can be performed in parallel.
See Also:
Controlling Resource Consumption-
This parameter is valid only in the Enterprise Edition of Oracle Database 11g.
The following is an example of using the PARALLEL parameter:

> expdp hr DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4

This results in a schema-mode export of the
hr
schema in which up to four files could be created in the path pointed to by the directory object, dpump_dir1
.PARFILE
Default: There is no defaultSpecifies the name of an export parameter file.
PARFILE=[directory_path]file_nameUnlike dump and log files, which are created and written by the Oracle database, the parameter file is opened and read by the client running the
expdp
image. Therefore, a directory object name
is neither required nor appropriate. The directory path is an operating
system-specific directory specification. The default is the user's
current directory.The use of parameter files is highly recommended if you are using parameters whose values require the use of quotation marks.
-
The
PARFILE
parameter cannot be specified within a parameter file.
The contents of an example parameter file, hr.par, might be as follows:

SCHEMAS=HR
DUMPFILE=exp.dmp
DIRECTORY=dpump_dir1
LOGFILE=exp.log

You could then issue the following Export command to specify the parameter file:
> expdp hr parfile=hr.par
QUERY
Default: There is no defaultAllows you to specify a query clause that is used to filter the data that gets exported.
QUERY = [schema.][table_name:] query_clauseThe
query_clause
is typically a SQL WHERE
clause for fine-grained row selection, but could be any SQL clause. For example, an ORDER
BY
clause could be used to speed up a migration from a heap-organized
table to an index-organized table. If a schema and table name are not
supplied, the query is applied to (and must be valid for) all tables in
the export job. A table-specific query overrides a query applied to all
tables.When the query is to be applied to a specific table, a colon must separate the table name from the query clause. More than one table-specific query can be specified, but only one query can be specified per table.
The query must be enclosed in single or double quotation marks. Double quotation marks are recommended, because strings within the clause must be enclosed in single quotation marks. Oracle recommends that you place
QUERY
specifications in a parameter file to avoid having to use operating system-specific escape characters on the command line.To specify a schema other than your own in a table-specific query, you must be granted access to that specific table.
-
The
QUERY
parameter cannot be used in conjunction with the following parameters:
-
CONTENT=METADATA_ONLY
-
ESTIMATE_ONLY
-
TRANSPORT_TABLESPACES
-
-
When the
QUERY
parameter is specified for a table, Data Pump uses external tables to unload the target table. External tables uses a SQLCREATE TABLE AS SELECT
statement. The value of theQUERY
parameter is theWHERE
clause in theSELECT
portion of theCREATE TABLE
statement. If theQUERY
parameter includes references to another table with columns whose names match the table being unloaded, and if those columns are used in the query, then you will need to use a table alias to distinguish between columns in the table being unloaded and columns in theSELECT
statement with the same name. The table alias used by Data Pump for the table being unloaded isKU$
.
For example, suppose you want to export a subset of thesh.sales
table based on the credit limit for a customer in thesh.customers
table. In the following example,KU$
is used to qualify thecust_id
field in theQUERY
parameter for unloadingsh.sales
. As a result, Data Pump exports only rows for customers whose credit limit is greater than $10,000.
QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c WHERE cust_credit_limit > 10000 AND ku$.cust_id = c.cust_id)"'
If, as in the following query,KU$
is not used for a table alias, the result will be that all rows are unloaded:
QUERY='sales:"WHERE EXISTS (SELECT cust_id FROM customers c WHERE cust_credit_limit > 10000 AND cust_id = c.cust_id)"'
The following is an example of using the QUERY parameter:

> expdp hr parfile=emp_query.par

The contents of the emp_query.par file are as follows:

QUERY=employees:"WHERE department_id > 10 AND salary > 10000"
NOLOGFILE=y
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp

This example unloads all tables in the
hr
schema, but only the rows that fit the query expression. In this case, all rows in all tables (except employees
) in the hr
schema will be unloaded. For the employees
table, only rows that meet the query criteria are unloaded.REMAP_DATA
Default: There is no defaultThe
REMAP_DATA
parameter allows you to specify a remap
function that takes as a source the original value of the designated
column and returns a remapped value that will replace the original value
in the dump file. A common use for this option is to mask data when
moving from a production system to a test system. For example, a column
of sensitive customer data such as credit card numbers could be replaced
with numbers generated by a REMAP_DATA
function. This
would allow the data to retain its essential formatting and processing
characteristics without exposing private data to unauthorized personnel.The same function can be applied to multiple columns being dumped. This is useful when you want to guarantee consistency in remapping both the child and parent column in a referential constraint.
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.functionThe description of each syntax element, in the order in which they appear in the syntax, is as follows:
schema1 -- the schema containing the table to be remapped. By default, this is the schema of the user doing the export.
tablename -- the table whose column will be remapped.
column_name -- the column whose data is to be remapped.
schema2 -- the schema containing the PL/SQL package you have created that contains the remapping function. As a default, this is the schema of the user doing the export.
pkg -- the name of the PL/SQL package you have created that contains the remapping function.
function -- the name of the function within the PL/SQL package that will be called to remap the data in the designated column for each row of the specified table.
-
The data types of the source argument and the returned value should
both match the data type of the designated column in the table.
-
Remapping functions should not perform commits or rollbacks.
The following example assumes a package named remap
has been created that contains functions named minus10
and plusx
which change the values for employee_id
and first_name
in the employees
table.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=remap1.dmp TABLES=employees REMAP_DATA=hr.employees.employee_id:hr.remap.minus10 REMAP_DATA=hr.employees.first_name:hr.remap.plusx
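The remap package itself is not shown in this guide. A minimal sketch of what it might look like follows, assuming employee_id is a NUMBER column and first_name is a VARCHAR2 column; the function bodies are illustrative only:

CREATE OR REPLACE PACKAGE remap AS
  FUNCTION minus10 (n NUMBER) RETURN NUMBER;
  FUNCTION plusx (s VARCHAR2) RETURN VARCHAR2;
END remap;
/

CREATE OR REPLACE PACKAGE BODY remap AS
  -- Subtract 10 from each employee_id so the dumped keys differ from the originals.
  FUNCTION minus10 (n NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN n - 10;
  END minus10;
  -- Append a literal 'x' to each first_name to mask the stored value.
  FUNCTION plusx (s VARCHAR2) RETURN VARCHAR2 IS
  BEGIN
    RETURN s || 'x';
  END plusx;
END remap;
/

As required by the restriction above, each function's argument and return value use the same datatype as the column being remapped.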
REUSE_DUMPFILES
Default: NSpecifies whether or not to overwrite a preexisting dump file.
REUSE_DUMPFILES={Y | N}Normally, Data Pump Export will return an error if you specify a dump file name that already exists. The
REUSE_DUMPFILES
parameter allows you to override that behavior and reuse a dump file
name. For example, if you performed an export and specified DUMPFILE=hr.dmp
and REUSE_DUMPFILES=Y
, then hr.dmp
would be overwritten if it already existed. Its previous contents would
be lost and it would contain data for the current export instead.The following export operation creates a dump file named
enc1.dmp
, even if a dump file with that name already exists.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=enc1.dmp TABLES=employees REUSE_DUMPFILES=y
SAMPLE
Default: There is no defaultAllows you to specify a percentage of the data blocks to be sampled and unloaded from the source database.
SAMPLE=[[schema_name.]table_name:]sample_percentThis parameter allows you to export subsets of data by specifying the percentage of data to be sampled and exported. The
sample_percent
indicates the probability that a block of rows will be selected as part
of the sample. It does not mean that the database will retrieve exactly
that amount of rows from the table. The value you supply for sample_percent
can be anywhere from .000001 up to, but not including, 100.The
sample_percent
can be applied to specific tables. In the following example, 50% of the HR.EMPLOYEES
table will be exported:SAMPLE="HR"."EMPLOYEES":50If you specify a schema, you must also specify a table. However, you can specify a table without specifying a schema; the current user will be assumed. If no table is specified, then the
sample_percent
value applies to the entire export job.Note that you can use this parameter in conjunction with the Data Pump Import PCTSPACE transform, so that the size of storage allocations matches the sampled data subset. (See TRANSFORM.)
-
The
SAMPLE
parameter is not valid for network exports.
In the following example, the value 70 for SAMPLE is applied to the entire export job because no table name is specified.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
SCHEMAS
Default: current user's schemaSpecifies that you want to perform a schema-mode export. This is the default mode for Export.
SCHEMAS=schema_name [, ...]
If you have the EXP_FULL_DATABASE
role, then you can specify a single schema other than your own or a list of schema names. The EXP_FULL_DATABASE
role also allows you to export additional nonschema object information
for each specified schema so that the schemas can be re-created at
import time. This additional information includes the user definitions
themselves and all associated system and role grants, user password
history, and so on. Filtering can further restrict what is exported
using schema mode (see Filtering During Export Operations).-
If you do not have the
EXP_FULL_DATABASE
role, then you can specify only your own schema.
-
The
SYS
schema cannot be used as a source schema for export jobs.
The following is an example of using the SCHEMAS parameter. Note that user
is allowed to specify more than one schema because the EXP_FULL_DATABASE
role was previously assigned to it for the purpose of these examples.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=hr,sh,oeThis results in a schema-mode export in which the schemas,
hr
, sh,
and oe
will be written to the expdat.dmp
dump file located in the dpump_dir1
directory.STATUS
Default:0
Specifies the frequency at which the job status display is updated.
STATUS=[integer]
If you supply a value for integer
,
it specifies how frequently, in seconds, job status should be displayed
in logging mode. If no value is entered or if the default value of 0 is
used, no additional information is displayed beyond information about
the completion of each object type, table, or partition.This status information is written only to your standard output device, not to the log file (if one is in effect).
The following is an example of using the
STATUS
parameter.> expdp hr DIRECTORY=dpump_dir1 SCHEMAS=hr,sh STATUS=300This example will export the
hr
and sh
schemas and display the status of the export every 5 minutes (60 seconds x 5 = 300 seconds).TABLES
Default: There is no defaultSpecifies that you want to perform a table-mode export.
TABLES=[schema_name.]table_name[:partition_name] [, ...]Filtering can restrict what is exported using this mode (see Filtering During Export Operations). You can filter the data and metadata that is exported, by specifying a comma-delimited list of tables and partitions or subpartitions. If a partition name is specified, it must be the name of a partition or subpartition in the associated table. Only the specified set of tables, partitions, and their dependent objects are unloaded.
The table name that you specify can be preceded by a qualifying schema name. All table names specified must reside in the same schema. The schema defaults to that of the current user. To specify a schema other than your own, you must have the
EXP_FULL_DATABASE
role.The use of wildcards is supported for one table name per export operation. For example,
TABLES=emp%
would export all tables having names that start with 'EMP
'.To use the transportable option during a table-mode export, specify the
TRANSPORTABLE=ALWAYS
parameter in conjunction with the TABLES
parameter. Metadata for the specified tables, partitions, or
subpartitions is exported to the dump file. To move the actual data, you
copy the data files to the target database.When partitioned tables are exported using the transportable method, each partition and subpartition is promoted to its own table. During the subsequent import operation, the new table is automatically named by combining the table and partition name (that is, tablename_partitionname). You can override this automatic naming by using the Import
REMAP_TABLE
parameter.-
Cross-schema references are not exported. For example, a trigger
defined on a table within one of the specified schemas, but that resides
in a schema not explicitly specified, is not exported.
-
Types used by the table are not exported
in table mode. This means that if you subsequently import the dump file
and the TYPE does not already exist in the destination database, the
table creation will fail.
-
The use of synonyms as values for the
TABLES
parameter is not supported. For example, if theregions
table in thehr
schema had a synonym ofregn
, it would not be valid to useTABLES=regn
. An error would be returned.
-
The export of individual table partitions is not supported when the
NETWORK_LINK
parameter is used.
-
The export of tables that include wildcards in the table name is not supported if the table has partitions.
-
The length of the table name list specified for the
TABLES
parameter is limited to a maximum of 4MB, unless you are using theNETWORK_LINK
parameter to a 10.2.0.3 or earlier database or to a read-only database. In such cases, the limit is 4KB.
-
You can only specify partitions from one table if
TRANSPORTABLE=ALWAYS
is also set on the export.
The following example shows a simple use of the TABLES
parameter to export three tables found in the hr
schema: employees
, jobs
, and departments
. Because user hr
is exporting tables found in the hr
schema, the schema name is not needed before the table names.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departmentsThe following example assumes that user
hr
has the EXP_FULL_DATABASE
role. It shows the use of the TABLES
parameter to export partitions.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp TABLES=sh.sales:sales_Q1_2000,sh.sales:sales_Q2_2000This example exports the partitions,
sales_Q1_2000
and sales_Q2_2000
, from the table sales
in the schema sh
.TABLESPACES
Default: There is no defaultSpecifies a list of tablespace names to be exported in tablespace mode.
TABLESPACES=tablespace_name [, ...]
In tablespace mode, only the tables contained in a specified set of
tablespaces are unloaded. If a table is unloaded, its dependent objects
are also unloaded. Both object metadata and data are unloaded. If any
part of a table resides in the specified set, then that table and all of
its dependent objects are exported. Privileged users get all tables.
Nonprivileged users get only the tables in their own schemas.

Filtering can restrict what is exported using this mode (see Filtering During Export Operations).
-
The length of the tablespace name list specified for the
TABLESPACES
parameter is limited to a maximum of 4MB, unless you are using theNETWORK_LINK
parameter to a 10.2.0.3 or earlier database or to a read-only database. In such cases, the limit is 4KB.
The following is an example of using the TABLESPACES
parameter. The example assumes that tablespaces tbs_4
, tbs_5
, and tbs_6
already exist.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4, tbs_5, tbs_6This results in a tablespace export in which tables (and their dependent objects) from the specified tablespaces (
tbs_4
, tbs_5
, and tbs_6
) will be unloaded.TRANSPORT_FULL_CHECK
Default:n
Specifies whether or not to check for dependencies between those objects inside the transportable set and those outside the transportable set. This parameter is applicable only to a transportable-tablespace mode export.
TRANSPORT_FULL_CHECK={y | n}If
TRANSPORT_FULL_CHECK
=y
, then Export
verifies that there are no dependencies between those objects inside the
transportable set and those outside the transportable set. The check
addresses two-way dependencies. For example, if a table is inside the
transportable set but its index is not, a failure is returned and the
export operation is terminated. Similarly, a failure is also returned if
an index is in the transportable set but the table is not.If
TRANSPORT_FULL_CHECK
=n,
then Export
verifies only that there are no objects within the transportable set
that are dependent on objects outside the transportable set. This check
addresses a one-way dependency. For example, a table is not dependent on
an index, but an index is dependent on a
table, because an index without a table has no meaning. Therefore, if
the transportable set contains a table, but not its index, then this
check succeeds. However, if the transportable set contains an index, but
not the table, the export operation is terminated.There are other checks performed as well. For instance, export always verifies that all storage segments of all tables (and their indexes) defined within the tablespace set specified by
TRANSPORT_TABLESPACES
are actually contained within the tablespace set.The following is an example of using the
TRANSPORT_FULL_CHECK
parameter. It assumes that tablespace tbs_1
exists.> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
TRANSPORT_TABLESPACES
Default: There is no defaultSpecifies that you want to perform a transportable-tablespace-mode export.
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Use the TRANSPORT_TABLESPACES
parameter to specify a
list of tablespace names for which object metadata will be exported from
the source database into the target database.The log file for the export lists the datafiles that are used in the transportable set, the dump files, and any containment violations.
The
TRANSPORT_TABLESPACES
parameter exports metadata for all
objects within the specified tablespace. If you want to perform a
transportable export of only certain tables, partitions, or
subpartitions, you must use the TABLES
parameter in conjunction with the TRANSPORTABLE=ALWAYS
parameter.
Note:
You cannot export transportable tablespaces and then import them into a
database at a lower release level. The target database must be at the
same or higher release level as the source database.-
Transportable jobs are not restartable.
-
Transportable jobs are restricted to a degree of parallelism of 1.
-
Transportable tablespace mode requires that you have the
EXP_FULL_DATABASE
role.
-
Transportable mode does not support encrypted columns.
-
The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
-
The
SYS
andSYSAUX
tablespaces are not transportable.
-
All tablespaces in the transportable set must be set to read-only.
The following is an example of using the TRANSPORT_TABLESPACES parameter in a file-based job (rather than network-based). The tablespace tbs_1 is the tablespace being moved. This example assumes that tablespace tbs_1 exists and that it has been set to read-only. This example also assumes that the default tablespace was changed prior to this export command.

> expdp hr DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=y LOGFILE=tts.log
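Because every tablespace in the transportable set must be read-only before the export, and is typically returned to read/write afterward, the surrounding sequence usually looks like the following sketch:

ALTER TABLESPACE tbs_1 READ ONLY;
-- Run the expdp command shown above, then copy the datafiles of tbs_1 to the target system.
ALTER TABLESPACE tbs_1 READ WRITE;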
TRANSPORTABLE
Default:NEVER
Specifies whether or not the transportable option should be used during a table mode export (specified with the
TABLES
parameter) to export metadata for specific tables, partitions, and subpartitions.TRANSPORTABLE = {ALWAYS | NEVER}The definitions of the allowed values are as follows:
ALWAYS
- Instructs the export job to use the
transportable option. If transportable is not possible, the job will
fail. The transportable option exports only metadata for the
tables, partitions, or subpartitions specified by the TABLES
parameter. You must copy the actual data files to the target database. See Using Data File Copying to Move Data.NEVER
- Instructs the export job to use either the
direct path or external table method to unload data rather than the
transportable option. This is the default.
Note:
If you want to export an entire tablespace in transportable mode, use the TRANSPORT_TABLESPACES
parameter.-
The
TRANSPORTABLE
parameter is only valid in table mode exports.
-
The schema performing a transportable export requires the
EXP_FULL_DATABASE
privilege.
-
Tablespaces associated with tables, partitions, and subpartitions must be read-only.
-
Transportable mode does not export any data. Data is copied when the
tablespace data files are copied from the source system to the target
system. The tablespaces that must be copied are listed at the end of the
log file for the export operation.
-
To make use of the
TRANSPORTABLE
parameter, theCOMPATIBLE
initialization parameter must be set to at least 11.0.0.
-
The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
The following example assumes that the sh
schema has the EXP_FULL_DATABASE
privilege and that table sales2
is partitioned and contained within tablespace tbs2
. (The tbs2
tablespace must be set to read-only in the source database.)> expdp sh DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp TABLES=sh.sales2 TRANSPORTABLE=alwaysAfter the export completes successfully, you must copy the data files to the target database area. You could then perform an import operation using the
PARTITION_OPTIONS
and REMAP_SCHEMA
parameters to make each of the partitions in sales2
its own table.> impdp system PARTITION_OPTIONS=departition TRANSPORT_DATAFILES=oracle/dbs/tbs2 DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp REMAP_SCHEMA=sh:dp
VERSION
Default:COMPATIBLE
Specifies the version of database objects to be exported. This can be used to create a dump file set that is compatible with a previous release of Oracle Database. Note that this does not mean that Data Pump Export can be used with versions of Oracle Database prior to 10.1. Data Pump Export only works with Oracle Database 10g release 1 (10.1) or later. The
VERSION
parameter simply allows you to identify the version of the objects being exported.VERSION={COMPATIBLE | LATEST | version_string}
The legal values for the VERSION
parameter are as follows:-
COMPATIBLE
- This is the default value. The version of the metadata corresponds to the database compatibility level. Database compatibility must be set to 9.2 or higher.
-
LATEST
- The version of the metadata corresponds to the database version.
-
version_string
- A specific database version (for example, 11.1.0). In Oracle Database 11g, this value cannot be lower than 9.2.
The following example shows an export for which the version of the metadata will correspond to the database version:
> expdp hr TABLES=hr.employees VERSION=LATEST DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp NOLOGFILE=y
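If the dump file set must be importable by an earlier release, you could instead supply a specific version string; the following sketch uses 10.2 only as an example target:

> expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_v102.dmp VERSION=10.2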
How Data Pump Export Parameters Map to Those of the Original Export Utility
Table 2-1 maps, as closely as possible, Data Pump Export parameters to original Export parameters. In some cases, because of feature redesign, the original Export parameter is no longer needed, so there is no Data Pump parameter to compare it to. Also, as shown in the table, some of the parameter names may be the same, but the functionality is slightly different.

| Original Export Parameter | Comparable Data Pump Export Parameter |
|---|---|
| BUFFER | A parameter comparable to BUFFER is not needed. |
| COMPRESS | A parameter comparable to COMPRESS is not needed. |
| CONSISTENT | A parameter comparable to CONSISTENT is not needed. Use FLASHBACK_SCN and FLASHBACK_TIME for this functionality. |
| CONSTRAINTS | EXCLUDE=CONSTRAINT |
| DIRECT | A parameter comparable to DIRECT is not needed. Data Pump Export automatically chooses the best method (direct path mode or external tables mode). |
| FEEDBACK | STATUS |
| FILE | DUMPFILE |
| FILESIZE | FILESIZE |
| FLASHBACK_SCN | FLASHBACK_SCN |
| FLASHBACK_TIME | FLASHBACK_TIME |
| FULL | FULL |
| GRANTS | EXCLUDE=GRANT |
| HELP | HELP |
| INDEXES | EXCLUDE=INDEX |
| LOG | LOGFILE |
| OBJECT_CONSISTENT | A parameter comparable to OBJECT_CONSISTENT is not needed. |
| OWNER | SCHEMAS |
| PARFILE | PARFILE |
| QUERY | QUERY |
| RECORDLENGTH | A parameter comparable to RECORDLENGTH is not needed because sizing is done automatically. |
| RESUMABLE | A parameter comparable to RESUMABLE is not needed. This functionality is automatically provided for users who have been granted the EXP_FULL_DATABASE role. |
| RESUMABLE_NAME | A parameter comparable to RESUMABLE_NAME is not needed. This functionality is automatically provided for users who have been granted the EXP_FULL_DATABASE role. |
| RESUMABLE_TIMEOUT | A parameter comparable to RESUMABLE_TIMEOUT is not needed. This functionality is automatically provided for users who have been granted the EXP_FULL_DATABASE role. |
| ROWS=N | CONTENT=METADATA_ONLY |
| ROWS=Y | CONTENT=ALL |
| STATISTICS | A parameter comparable to STATISTICS is not needed. Statistics are always saved for tables. |
| TABLES | TABLES |
| TABLESPACES | TABLESPACES (Same parameter; slightly different behavior) |
| TRANSPORT_TABLESPACE | TRANSPORT_TABLESPACES (Same parameter; slightly different behavior) |
| TRIGGERS | EXCLUDE=TRIGGER |
| TTS_FULL_CHECK | TRANSPORT_FULL_CHECK |
| USERID | A parameter comparable to USERID is not needed. This information is supplied as the username and password when you invoke Export. |
| VOLSIZE | A parameter comparable to VOLSIZE is not needed. |
Commands Available in Export's Interactive-Command Mode
In interactive-command mode, the current job continues running, but logging to the terminal is suspended and the Export prompt (Export>
) is displayed.
Note:
Data Pump Export interactive-command mode is different from the
interactive mode for original Export, in which Export prompted you for
input. See Interactive Mode for information about interactive mode in original Export.

To start interactive-command mode, do one of the following:

-
From an attached client, press Ctrl+C.
-
From a terminal other than the one on which the job is running, specify the
ATTACH
parameter in anexpdp
command to attach to the job. This is a useful feature in situations in which you start a job at one location and need to check on it at a later time from a different location.
| Activity | Command Used |
|---|---|
| Add additional dump files. | ADD_FILE |
| Exit interactive mode and enter logging mode. | CONTINUE_CLIENT |
| Stop the export client session, but leave the job running. | EXIT_CLIENT |
| Redefine the default size to be used for any subsequent dump files. | FILESIZE |
| Display a summary of available commands. | HELP |
| Detach all currently attached client sessions and kill the current job. | KILL_JOB |
| Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 11g. | PARALLEL |
| Restart a stopped job to which you are attached. | START_JOB |
| Display detailed status for the current job and/or set status interval. | STATUS |
| Stop the current job for later restart. | STOP_JOB |
ADD_FILE
Adds additional files or substitution variables to the export dump file set.ADD_FILE=[directory_object:]file_name [,...]The
file_name
must not contain any directory path information. However, it can include a substitution variable, %U
, which indicates that multiple files may be generated using the specified filename as a template. It can also specify another directory_object
.The size of the file being added is determined by the setting of the
FILESIZE
parameter.
See Also:
File Allocation for information about the effects of using substitution variables

In the following example, a directory object is not specified for the dump file named hr2.dmp, so the default directory object for the job is assumed. A different directory object, dpump_dir2
, is specified for the dump file named hr3.dmp
.Export> ADD_FILE=hr2.dmp, dpump_dir2:hr3.dmp
CONTINUE_CLIENT
Changes the Export mode from interactive-command mode to logging mode.CONTINUE_CLIENTIn logging mode, status is continually output to the terminal. If the job is currently stopped, then
CONTINUE_CLIENT
will also cause the client to attempt to start the job.Export> CONTINUE_CLIENT
EXIT_CLIENT
Stops the export client session, exits Export, and discontinues logging to the terminal, but leaves the current job running.EXIT_CLIENTBecause
EXIT_CLIENT
leaves the job running, you can
attach to the job at a later time. To see the status of the job, you can
monitor the log file for the job or you can query the USER_DATAPUMP_JOBS
view or the V$SESSION_LONGOPS
view.Export> EXIT_CLIENT
FILESIZE
Redefines the default size to be used for any subsequent dump files.FILESIZE=number
The file size can be followed by B
, K
, M
, or G
to indicate that the size is expressed in bytes, kilobytes, megabytes, or gigabytes, respectively. The default is B
.A file size of 0 indicates that there will not be any size restrictions on new dump files. They will be extended as needed until the limits of the containing device are reached.
Export> FILESIZE=100M
HELP
Provides information about Data Pump Export commands available in interactive-command mode.HELPDisplays information about the commands available in interactive-command mode.
Export> HELP
KILL_JOB
Detaches all currently attached client sessions and then kills the current job. It exits Export and returns to the terminal prompt.KILL_JOBA job that is killed using
KILL_JOB
cannot be restarted. All attached clients, including the one issuing the KILL_JOB
command, receive a warning that the job is being killed by the current
user and are then detached. After all clients are detached, the job's
process structure is immediately run down and the master table and dump
files are deleted. Log files are not deleted.Export> KILL_JOB
PARALLEL
Enables you to increase or decrease the number of active processes (worker and parallel slaves) for the current job.PARALLEL=integer
PARALLEL
is available as both a command-line parameter and as an interactive-command mode
parameter. (It is only available in Enterprise Edition.) You set it to
the desired number of parallel processes (worker and parallel slaves).
An increase takes effect immediately if there are sufficient files and
resources. A decrease does not take effect until an existing process
finishes its current task. If the value is decreased, workers are idled
but not deleted until the job exits.
See Also:
PARALLEL for more information about parallelismExport> PARALLEL=10
START_JOB
Starts the current job to which you are attached.START_JOBThe
START_JOB
command restarts the current job to which
you are attached (the job cannot be currently executing). The job is
restarted with no data loss or corruption after an unexpected failure or
after you issued a STOP_JOB
command, provided the dump file set and master table have not been altered in any way.Transportable-tablespace-mode exports are not restartable.
Export> START_JOB
STATUS
Displays cumulative status of the job, along with a description of the current operation. An estimated completion percentage for the job is also returned. Also allows resetting the display interval for logging mode status.STATUS[=integer]
You have the option of specifying how frequently, in seconds, this
status should be displayed in logging mode. If no value is entered or if
the default value of 0 is used, the periodic status display is turned
off and status is displayed only once.

This status information is written only to your standard output device, not to the log file (even if one is in effect).
The following example will display the current job status and change the logging mode display interval to five minutes (300 seconds):
Export> STATUS=300
STOP_JOB
Stops the current job either immediately or after an orderly shutdown, and exits Export.

STOP_JOB[=IMMEDIATE]

If the master table and dump file set are not disturbed when or after the STOP_JOB command is issued, the job can be attached to and restarted at a later time with the START_JOB command.

To perform an orderly shutdown, use STOP_JOB (without any associated value). A warning requiring confirmation will be issued. An orderly shutdown stops the job after worker processes have finished their current tasks.

To perform an immediate shutdown, specify STOP_JOB=IMMEDIATE. A warning requiring confirmation will be issued. All attached clients, including the one issuing the STOP_JOB command, receive a warning that the job is being stopped by the current user and they will be detached. After all clients are detached, the process structure of the job is immediately run down. That is, the master process will not wait for the worker processes to finish their current tasks. There is no risk of corruption or data loss when you specify STOP_JOB=IMMEDIATE. However, some tasks that were incomplete at the time of shutdown may have to be redone at restart time.

Export> STOP_JOB=IMMEDIATE
Examples of Using Data Pump Export
This section provides the following examples of using Data Pump Export:

Performing a Table-Mode Export

Example 2-1 shows a table-mode export, specified using the TABLES parameter. Issue the following Data Pump export command to perform a table export of the tables employees and jobs from the human resources (hr) schema:

expdp hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y

Because user hr is exporting tables in his own schema, it is not necessary to specify the schema name for the tables. The NOLOGFILE=y parameter indicates that an Export log file of the operation will not be generated.

Data-Only Unload of Selected Tables and Rows
Example 2-2 shows the contents of a parameter file (exp.par) that you could use to perform a data-only unload of all tables in the human resources (hr) schema except for the tables countries and regions. Rows in the employees table are unloaded that have a department_id other than 50. The rows are ordered by employee_id.

DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"

You can then issue the following command to execute the exp.par parameter file:

> expdp hr PARFILE=exp.par

A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export to an unload of just the table data. The DBA previously created the directory object dpump_dir1, which points to the directory on the server where user hr is authorized to read and write export dump files. The dump file dataonly.dmp is created in dpump_dir1.

Estimating Disk Space Needed in a Table-Mode Export
Example 2-3 shows the use of the ESTIMATE_ONLY parameter to estimate the space that would be consumed in a table-mode export, without actually performing the export operation. Issue the following command to use the BLOCKS method to estimate the number of bytes required to export the data in the following three tables located in the human resources (hr) schema: employees, departments, and locations.

> expdp hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees, departments, locations LOGFILE=estimate.log
Performing a Schema-Mode Export
Example 2-4 shows a schema-mode export of the hr
schema. In a schema-mode export, only objects belonging to the
corresponding schemas are unloaded. Because schema mode is the default
mode, it is not necessary to specify the SCHEMAS
parameter on the command line, unless you are specifying more than one schema or a schema other than your own.
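The command line for this example is not reproduced in this copy; a minimal schema-mode invocation would look something like the following (the dump file and log file names are illustrative assumptions, and dpump_dir1 is the directory object created earlier):

> expdp hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log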
Performing a Parallel Full Database Export

Example 2-5 shows a full database Export that will have up to 3 parallel processes (worker or PQ slaves).

> expdp hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp, dpump_dir2:full2%U.dmp FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull

Because this is a full database export, all data and metadata in the database will be exported. Dump files
full101
.dmp
, full201
.dmp
, full102
.dmp
, and so on will be created in a round-robin fashion in the directories pointed to by the dpump_dir1
and dpump_dir2
directory objects. For best performance, these should be on separate
I/O channels. Each file will be up to 2 gigabytes in size, as necessary.
Initially, up to three files will be created. More files will be
created, if needed. The job and master table will have a name of expfull
. The log file will be written to expfull.log
in the dpump_dir1
directory.

Using Interactive Mode to Stop and Reattach to a Job
To start this example, reexecute the parallel full export in Example 2-5. While the export is running, press Ctrl+C. This will start the interactive-command interface of Data Pump Export. In the interactive interface, logging to the terminal stops and the Export prompt is displayed.
At the Export prompt, issue the following command to stop the job:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

The job is placed in a stopped state and exits the client.

Enter the following command to reattach to the job you just stopped:

> expdp hr ATTACH=EXPFULL

After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and restart the expfull job.

Export> CONTINUE_CLIENT

A message is displayed that the job has been reopened, and processing status is output to the client.
Syntax Diagrams for Data Pump Export
This section provides syntax diagrams for Data Pump Export. These diagrams use standard SQL syntax notation. For more information about SQL syntax notation, see Oracle Database SQL Language Reference.

[Syntax diagrams not reproduced in this copy: expinit, expstart, expmodes, expopts, expencrypt, expfilter, expremap, and expfileopts.]
The Data Pump utilities are designed especially for very large
databases. If your site has very large quantities of data versus
metadata, you should experience a dramatic increase in performance
compared to the original Export and Import utilities. This chapter
briefly discusses why the performance is better and also suggests
specific steps you can take to enhance performance of export and import
operations.
This chapter contains the following sections:
-
Data Performance Improvements for Data Pump Export and Import
-
Tuning Performance
-
Initialization Parameters That Affect Data Pump Performance
Data Performance Improvements for Data Pump Export and Import
The improved performance of the Data Pump Export and Import utilities is attributable to several factors, including the following:-
Multiple worker processes can perform intertable and interpartition
parallelism to load and unload tables in multiple, parallel, direct-path
streams.
-
For very large tables and partitions, single worker processes can
choose intrapartition parallelism through multiple parallel queries and
parallel DML I/O server processes when the external tables method is
used to access data.
-
Data Pump uses parallelism to build indexes and load package bodies.
-
Dump files are read and written directly by the server and, therefore, do not require any data movement to the client.
-
The dump file storage format is the internal stream format of the
direct path API. This format is very similar to the format stored in
Oracle database datafiles inside of tablespaces. Therefore, no
client-side conversion to
INSERT
statement bind variables is performed.
-
The supported data access methods, direct path and external tables,
are faster than conventional SQL. The direct path API provides the
fastest single-stream performance. The external tables feature makes
efficient use of the parallel queries and parallel DML capabilities of
the Oracle database.
-
Metadata and data extraction can be overlapped during export.
Tuning Performance
Data Pump technology fully uses all available resources to maximize throughput and minimize elapsed job time. For this to happen, a system must be well-balanced across CPU, memory, and I/O. In addition, standard performance tuning principles apply. For example, for maximum performance you should ensure that the files that are members of a dump file set reside on separate disks, because the dump files will be written and read in parallel. Also, the disks should not be the same ones on which the source or target tablespaces reside.

Any performance tuning activity involves making trade-offs between performance and resource consumption.
Controlling Resource Consumption
The Data Pump Export and Import utilities enable you to dynamically increase and decrease resource consumption for each job. This is done using the PARALLEL
parameter to specify a degree of parallelism for the job. (The PARALLEL
parameter is the only tuning parameter that is specific to Data Pump.) For maximum throughput, do not set PARALLEL
to much more than twice the number of CPUs (two workers for each CPU).
See Also:
To maximize parallelism, you must supply at least one file for each degree of parallelism. The simplest way of doing this is to use substitution variables in your file names (for example,
file%u.dmp
).
However, depending upon your disk set up (for example, simple,
non-striped disks), you might not want to put all dump files on one
device. In this case, it is best to specify multiple file names using
substitution variables, with each in a separate directory resolving to a
separate disk. Even with fast CPUs and fast disks, the path between the
CPU and the disk may be the constraining factor in the amount of
parallelism that can be sustained.

The PARALLEL parameter is valid only in the Enterprise Edition of Oracle Database 11g.
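As an illustration of this advice (a sketch only; the directory objects dpump_dir1 and dpump_dir2 are assumed to resolve to separate disks), an export with a degree of parallelism of 4 could spread its dump files across two devices by using substitution variables:

> expdp hr SCHEMAS=hr PARALLEL=4 DUMPFILE=dpump_dir1:hr1_%U.dmp, dpump_dir2:hr2_%U.dmp LOGFILE=dpump_dir1:hr_par.log

With two %U templates, Data Pump can create as many files as the four workers need, alternating between the two directories.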
Effects of Compression and Encryption on Performance

The use of Data Pump parameters related to compression and encryption can have a negative impact on the performance of export and import operations, because additional CPU resources are required to perform transformations on the raw data.

Initialization Parameters That Affect Data Pump Performance
The settings for certain initialization parameters can affect the performance of Data Pump Export and Import. In particular, you can try using the following settings to improve performance, although the effect may not be the same on all platforms.-
DISK_ASYNCH_IO=TRUE
-
DB_BLOCK_CHECKING=FALSE
-
DB_BLOCK_CHECKSUM=FALSE
-
PROCESSES
-
SESSIONS
-
PARALLEL_MAX_SERVERS
Additionally, the SHARED_POOL_SIZE and UNDO_TABLESPACE initialization parameters should be generously sized. The exact values will depend upon the size of your database.
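As an illustration only (not from the original manual), the first three settings in the list above could be applied from SQL*Plus as follows. DB_BLOCK_CHECKING and DB_BLOCK_CHECKSUM are dynamic, while DISK_ASYNCH_IO is static and takes effect only after an spfile change and an instance restart; disabling block checking and checksums reduces CPU overhead at the cost of corruption detection, so weigh the trade-off carefully:

SQL> ALTER SYSTEM SET db_block_checking = FALSE SCOPE = BOTH;
SQL> ALTER SYSTEM SET db_block_checksum = FALSE SCOPE = BOTH;
SQL> ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE = SPFILE;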
Setting the Size Of the Buffer Cache In a Streams Environment

Oracle Data Pump uses Streams functionality to communicate between processes. If the SGA_TARGET initialization parameter is set, then the STREAMS_POOL_SIZE initialization parameter is automatically set to a reasonable value.

If the SGA_TARGET initialization parameter is not set and the STREAMS_POOL_SIZE initialization parameter is not defined, then the size of the streams pool automatically defaults to 10% of the size of the shared pool.

When the streams pool is created, the required SGA memory is taken from memory allocated to the buffer cache, reducing the size of the cache to less than what was specified by the
DB_CACHE_SIZE
initialization parameter. This means that if the buffer cache was
configured with only the minimal required SGA, then Data Pump operations
may not work properly. A minimum size of 10M is recommended for STREAMS_POOL_SIZE
in order to ensure successful Data Pump operations.
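One way to honor that recommendation is to reserve the streams pool explicitly before starting the job, for example (the 64M value is an illustrative assumption; STREAMS_POOL_SIZE can be changed dynamically):

SQL> ALTER SYSTEM SET streams_pool_size = 64M SCOPE = BOTH;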