Oracle AWS RDS Import/Export with datapump
Introduction
The AWS documentation on using Oracle Data Pump is unclear and difficult for non-DBAs to follow. This article provides an easy-to-follow example of how to use Data Pump with AWS RDS Oracle:
- We need to export AWS RDS Oracle database data from one VPC to another VPC.
- There is no direct network connectivity between these two VPCs.
- We access both VPCs from a third network via SSH.
- AWS RDS Oracle databases support Data Pump as a means to export/import data.
- To accomplish the task, we do the following:
Tasks on the source VPC
Set up an Oracle instance on an EC2 instance in the source VPC
See the "oracle-12C-installation" article that I wrote.
Set up a directory object on the source EC2 Oracle instance
- SSH to the EC2 instance with Oracle installed.
- Create a directory:
mkdir /home/oracle/dumps
- Log in to the database as the sys user and create a directory object:
$ sqlplus / as sysdba
SQL> create directory mydumps as '/home/oracle/dumps';

Directory created.
- You can use the system user to perform the export.
- If you want a different user, such as backup_user, to perform the export, create it and grant it the required permissions:
SQL> grant read,write on directory mydumps to backup_user;
SQL> grant datapump_exp_full_database to backup_user;
Set up a database link between the Oracle DB on EC2 and the source Oracle DB on RDS
Using TNS
- Create a TNS entry for the foreign (RDS) database in your tnsnames.ora file:
$ORACLE_HOME/network/admin/tnsnames.ora
sourcedb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = yyyy.xxxxxx.us-east-1.rds.amazonaws.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = MYDB)
)
)
- Test connection with tnsping
$ tnsping sourcedb
- Test log in with TNS name
$ sqlplus <master_username>/<password>@sourcedb
- Create a database link to the foreign database by logging in to the EC2 Oracle DB as the system user
sqlplus system/oracle@ORA12C
- Create the database link:
create database link <db link name> connect to <rds login ID> identified by <rds login pwd> using '<rds tns name>';
example
SQL> create database link source_dblink connect to admin identified by admpwd using 'sourcedb';
- Verify the connectivity by counting the tables in the foreign DB over the link:
select count(*) from all_tables@<db link name>;
Example:
SQL> select count(*) from all_tables@source_dblink;
COUNT(*)
----------
1626
Drop the database link
You can drop the database link when it is no longer needed:
DROP DATABASE LINK <db_link_name>;
Export the schema to a dump file with Data Pump
Export the RDS Oracle DB with Data Pump
- Schema export
The following example exports a user's schema over the database link:
expdp system/oracle \
directory=<dump directory> \
dumpfile=<dump file name> \
logfile=<log file name> \
network_link=<db link name> \
schemas=<RDS user schema name>
For example:
expdp system/oracle directory=mydumps \
dumpfile=source_db.dmp \
logfile=source_db_dmp.log \
network_link=source_dblink \
schemas=admin
- You can perform a table export instead by replacing the schemas line with a TABLES line (the first entry below exports a single partition, the second a whole table):
...
...
TABLES=sales:sales_q3_2003,scott.sales2
More on expdp options - check https://www.morganslibrary.org/reference/datapump.html
Copy the dump file to the target environment
Use any file-transfer tool (scp, rsync, etc.) to move the dump file to the target EC2 host.
Tasks on the target VPC
Set up an Oracle database on an EC2 instance in the target VPC
Use the "Oracle-12C-installation" document.
Set up a directory object on the target EC2 Oracle instance
- SSH to the EC2 instance with Oracle installed.
- Create a directory:
mkdir /home/oracle/dumps
- Log in to the database as the sys user and create a directory object:
$ sqlplus / as sysdba
SQL> create directory mydumps as '/home/oracle/dumps';

Directory created.
The above example creates a Data Pump directory object called "mydumps" in Oracle that points to the local directory "/home/oracle/dumps".
Set up a database link between the Oracle DB on EC2 and the target Oracle DB on RDS
- Add the following to tnsnames.ora:
targetdb =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = yyyyyyyy.eu-central-1.rds.amazonaws.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = NEWDB)
)
)
- Log in to the local DB:
sqlplus system/oracle@ORA12C
- Run the following to create the DB link:
create database link <target db link name> connect to <user id> identified by <password> using '<tns name>';
- Test the link:
select count(*) from all_tables@<target db link name>;
Set up the target DB environment for import
- Create the user and grant any permissions on tablespaces in the target DB as required.
- If the user already exists, make sure you drop all objects under this user.
- Create a script called drop_obj.sql:
BEGIN
FOR cur_rec IN (SELECT object_name, object_type
FROM user_objects
WHERE object_type IN
('TABLE',
'VIEW',
'PACKAGE',
'PROCEDURE',
'FUNCTION',
'SEQUENCE'
))
LOOP
BEGIN
IF cur_rec.object_type = 'TABLE'
THEN
EXECUTE IMMEDIATE 'DROP '
|| cur_rec.object_type
|| ' "'
|| cur_rec.object_name
|| '" CASCADE CONSTRAINTS';
ELSE
EXECUTE IMMEDIATE 'DROP '
|| cur_rec.object_type
|| ' "'
|| cur_rec.object_name
|| '"';
END IF;
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line ( 'FAILED: DROP '
|| cur_rec.object_type
|| ' "'
|| cur_rec.object_name
|| '"'
);
END;
END LOOP;
END;
/
Run this script as the import user against the target database:
sqlplus <userid>/<password>@targetdb @drop_obj.sql
- Another way is dropping and re-creating the target user ID:
-- drop and re-create the target user
DROP USER TARGET_USER CASCADE;
CREATE USER TARGET_USER IDENTIFIED BY XXXYYYZZZ;
GRANT CONNECT, RESOURCE TO TARGET_USER;
GRANT CREATE SESSION,ALTER SESSION,CREATE TABLE,CREATE SYNONYM,CREATE VIEW,CREATE SEQUENCE,CREATE PROCEDURE,CREATE TRIGGER,CREATE TYPE,CREATE OPERATOR,CREATE INDEXTYPE TO TARGET_USER;
GRANT UNLIMITED TABLESPACE TO TARGET_USER;
quit;
Copy the dump file into the RDS instance
The dump file must reside in the RDS instance's DATA_PUMP_DIR before impdp can read it. Since we cannot log in to the host that runs the RDS instance, we move the file with SQL.
- The RDS Oracle database has a pre-defined directory object:
DATA_PUMP_DIR /rdsdbdata/datapump
- To move the dump file into the RDS DATA_PUMP_DIR, we can use DBMS_FILE_TRANSFER.
- Create a file called mv_file_to_rds.sql:
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'mydumps',
source_file_name => 'source_db.dmp',
destination_directory_object => 'DATA_PUMP_DIR',
destination_file_name => 'source_db.dmp',
destination_database => 'target_dblink' ); -- the DB link to the target RDS DB
END;
/
quit;
- Run it with the following:
sqlplus system/oracle@ORA12C @mv_file_to_rds.sql
Check if the file landed on the RDS instance
- Log in to the remote (RDS) database:
$ sqlplus <rds_master_user>/<rds_password>@<db name>
- Run the following sql command
select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
order by filename;
Import data into the RDS DB
- Import the dump into the RDS database by running the following command on the EC2 instance:
# import the dump into the target RDS database
# targetdb is the TNS name of the target RDS database
impdp <rds_master_user>/<rds_password>@targetdb \
DUMPFILE=source_db.dmp \
DIRECTORY=DATA_PUMP_DIR \
logfile=imp_dmp.log
Cleanup
The following command deletes a file from the DATA_PUMP_DIR after the import:
exec utl_file.fremove('DATA_PUMP_DIR','<file name>');
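If several dump files accumulate, they can be removed in one pass by iterating over the directory listing shown earlier. This is a sketch, untested against a live RDS instance, that assumes the rdsadmin.rds_file_util.listdir function used above:

```sql
-- Remove every regular file in DATA_PUMP_DIR (directories are skipped).
-- Run as the RDS master user; listdir's TYPE column is assumed to be 'file' for files.
BEGIN
  FOR f IN (SELECT filename
              FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
             WHERE type = 'file')
  LOOP
    UTL_FILE.FREMOVE('DATA_PUMP_DIR', f.filename);
  END LOOP;
END;
/
```

Re-run the LISTDIR query afterwards to confirm the directory is empty.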