In this Document
APPLIES TO:
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Cloud Exadata Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.
PURPOSE
To assist with migrating non-CDB databases, 12c container databases (CDBs) and 12c pluggable databases (PDBs) to a new container database when ASM is being used for storage.
SCOPE
NOTE: In the images and/or the document content below, the user information and data used represents fictitious data from the Oracle sample schema(s) or Public Documentation delivered with an Oracle database product. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.
This note will cover:
- Migrating a 12c non-CDB to a PDB whose container has a physical standby using 12c RMAN Active Database Duplication to copy the source files
This section should be used when you have a non-CDB that has either been created with or upgraded to Oracle Database 12c, and you wish to convert this 12c non-CDB into a pluggable database.
- Migrating a 12c PDB into another CDB whose container has a physical standby using 12c RMAN Active Database Duplication to copy the source files
This section should be used when you have a PDB that you wish to unplug from one container database and plug into another container database.
- Migrating an 11.2.0.3 non-CDB using the dbms_file_transfer scripts from Document 1389592.1 and performing a full transportable export/import.
This section should be used when you want to migrate a pre-12c non-CDB directly into a PDB.
Each of these methods uses incremental roll forward of the copied datafiles, keeping the downtime required for datafile movement to a minimum.
To use these steps, fixes 16347068, 16822629, 16874123 and 16888264 must be applied to the 12c Oracle Database homes. These fixes are included in 12.1.0.1 Patch Set Update 2 and later and are available for the 12.1.0.1 base release as merge patch 17759598. For environments running 12.1.0.2, patch 20464614 must be applied to the 12c Oracle Database homes.
All of the examples below consist of an environment of:
- Source environment
- Database is a Real Application Cluster database
- Database may be part of a Data Guard configuration; this has no impact on the process
- Database files are stored in ASM
- Destination environment
- Part of a Data Guard configuration with an Active Data Guard physical standby database in place
- Both the primary database and standby database are Real Application Cluster databases
- Database files are stored in ASM
This process assumes that files from the source database will need to be moved to a new location due to file management desires (e.g. wanting to keep the files under the directory structure for a particular database) or that the source and destination environments do not share storage. If the source and destination environments share storage (primary databases only), it is possible to perform plugin operations without needing to move files. The operations noted below in Handling of Standby Databases during plugin will still need to be considered. It is possible that you may not need to move files on the primary site but will need to do so in order to maintain a standby database. The steps described here can be performed on both the primary and standby as the examples show, or on the standby only, or on the primary only (if no standby exists).
It is recommended to migrate files even if the source and destination are on shared storage. The steps below add only a short time to the outage (needed to complete the final incremental roll forward) and, by leaving the original files intact, provide a quick fallback to the previous environment if an immediate issue arises. If you do not wish to migrate files and your source and destination are on shared storage, you can maintain the standby database as well. Please see the following notes:
For non-CDB migration: Reusing the Source Standby Database Files When Plugging a non-CDB as a PDB into the Primary Database of a Data Guard Configuration (Document 2273304.1)
For PDB migration: Reusing the Source Standby Database Files When Plugging a PDB into the Primary Database of a Data Guard Configuration (Document 2273829.1)
DETAILS
Goal
Provide step by step examples of migrating non-Container Database (non-CDB) and Pluggable Database (PDB) data into a different CDB on ASM.
Solution
These examples were created with 12.1.0.1 with additional patches. Prior to using any of the examples in this document, you must ensure merge patch 17759598 is applied. All of the required patches are included in 12.1.0.1 Patch Set Update 2 or later.
It is highly recommended that you upgrade to Oracle Database 12c and then perform the non-CDB to PDB plugin as described here. Upgrading first has the following benefits:
- Provides access to more tools to assist with the migration
- Simplifies the migration by allowing for the database to be treated as a complete entity for migration
- Simplifies the migration process with easier, less error-prone steps
- Generally shorter downtime than performing full transportable export/import
Also, prior to performing any plugin operation, it is highly recommended to execute the DBMS_PDB.CHECK_PLUG_COMPATIBILITY() function and resolve any issues that would prevent the plugin from succeeding, as sketched below.
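For example, after the XML manifest has been generated on the source with DBMS_PDB.DESCRIBE() (as shown later in this note), the check can be run in the destination CDB root. A minimal sketch, reusing the manifest path and PDB name from the examples in this note:

set serveroutput on
declare
  compatible boolean;
begin
  compatible := dbms_pdb.check_plug_compatibility(
                  pdb_descr_file => '/home/oracle/demo/noncdb_unplug.xml',
                  pdb_name       => 'MY_PDB');
  if compatible then
    dbms_output.put_line('Plugin compatibility check: PASSED');
  else
    dbms_output.put_line('Plugin compatibility check: FAILED');
  end if;
end;
/

If the check fails, query the PDB_PLUG_IN_VIOLATIONS view in the destination CDB for the reasons.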
Multitenant Plugin with OMF/ASM
Oracle Managed Files (OMF) is a feature where the RDBMS is given control of the naming and location of files. Files are named using system-generated names and are placed in the location specified in the DB_CREATE_FILE_DEST initialization parameter. When you use Oracle Automatic Storage Management (ASM) to manage and store your files, OMF is also used to simplify file management. If a Data Guard physical standby database is also in use, there are initialization parameters (*_FILE_NAME_CONVERT, STANDBY_FILE_MANAGEMENT) to assist with converting names of existing and newly created datafiles when OMF is in use.
Oracle Multitenant plugin and Data Pump import for full database operations use these same initialization parameters when the operation is performed. But because these operations are very specific about file naming (the file names in the multitenant manifest or in the full transportable database import command must match all files exactly for the operation to complete), there are some difficulties when OMF/ASM is used, files are copied, and a physical standby database is in place. Improvements have been made to the multitenant plugin operation on both the primary and standby environments; however, at this time additional work must still be done on the standby database when a full transportable database import operation is performed.
The ASM directory structure and naming convention is as follows. A traditional file name in ASM consists of three parts: <tablespace_name>.<ASM file number>.<incarnation>. You are not allowed to directly create files with this naming structure. The directory structure for datafiles traditionally consists of <DB_CREATE_FILE_DEST diskgroup_name>/<db_unique_name>/DATAFILE/<files>. With Oracle Multitenant, the directory structure changes for each pluggable database (PDB) that exists in a container database (CDB). Each database (non-CDB, CDB, PDB) as of 12c Release 1 has a globally unique identifier (GUID) associated with it. For pre-12c non-CDB databases, the GUID is created when the database is upgraded to 12c. This GUID will not change throughout the life of the container/non-CDB and is one of the main reasons a PDB should be cloned (cloning generates a new GUID) rather than copying the same PDB to multiple locations and plugging it into multiple CDBs. The GUID for a particular container/non-CDB can be found by querying V$CONTAINERS in any 12c database, as shown below. To assist with identifying which files belong to which PDB, an ASM directory structure of <DB_CREATE_FILE_DEST diskgroup_name>/<db_unique_name>/<GUID>/DATAFILE/<files> is used for PDBs.
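For example, the GUID can be retrieved with a simple query (run in the non-CDB, or in the CDB root to list every container):

SQL> select con_id, name, guid from v$containers;

The GUID returned is the hex string that appears in the ASM directory paths shown throughout this note (e.g. E57CA3F121DFD48AE043F646F40A61EA).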
RMAN has been enhanced so that, when copying files between databases, it recognizes the GUID and acts accordingly when writing the files.
- If the clone/auxiliary instance being connected to for clone operations is a CDB root, the GUID of the RMAN target database is used to determine the directory structure in which to write the datafiles. Connect to the CDB root as the RMAN clone/auxiliary instance when the source database is a 12c non-CDB or PDB that is going to be migrated and plugged into a remote CDB as a brand new PDB. This ensures that the files copied by RMAN are written to the GUID directory of the source database for the migration.
- If the clone/auxiliary instance being connected to for clone operations is a PDB, the GUID of the auxiliary PDB is used to determine the directory structure in which to write the datafiles. Connect to the destination PDB as the RMAN clone/auxiliary instance when the source database is a 12c non-CDB or PDB that requires a cross-platform full transportable database import and the data and files will be imported into an existing PDB. This ensures that the files copied by RMAN are written to the GUID directory of the destination PDB.
The enhancements for multitenant plugin operations with OMF simplify the process extensively. The manifest generated on the source non-CDB/PDB contains all of the filenames and the characteristics of each file. Normally, the plugin operation uses the filenames in the manifest and looks for those exact filenames, or for partially converted names (using the SOURCE_FILE_NAME_CONVERT clause on the CREATE PLUGGABLE DATABASE...USING... statement). Since all filenames will be different when the files are copied to a new location with OMF in use, you would need to specify a full directory and filename convert pair for EACH file being plugged in. By using the SOURCE_FILE_DIRECTORY clause on the CREATE PLUGGABLE DATABASE...USING... statement instead, the filename in the manifest is ignored and the plugin operation searches the SOURCE_FILE_DIRECTORY location for a file matching the other characteristics recorded about the file in the manifest.
SOURCE_FILE_NAME_CONVERT and SOURCE_FILE_DIRECTORY are mutually exclusive clauses. You can use one or the other in the plugin statement; attempting to specify both in the same CREATE PLUGGABLE DATABASE statement will generate an error. Sketches of both forms are shown below.
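To illustrate the difference, here are sketches of the two forms using directory and file names taken from the examples later in this note (only one of the two clauses may appear in a given statement):

-- Using SOURCE_FILE_DIRECTORY: one clause covers every file
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  source_file_directory='+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
  nocopy;

-- Using SOURCE_FILE_NAME_CONVERT: with OMF a convert pair is needed for EACH file
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  source_file_name_convert=('+DATAD1/NONCDB/DATAFILE/system.455.825151099',
                            '+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193')
  nocopy;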
Handling of Standby Databases during plugin
Standby databases do not use the filename conversion clauses specified on the plugin statement. Media recovery will use the DB_FILE_NAME_CONVERT initialization parameter specified in the standby database's spfile to try to convert the filename/directory from the redo record and find the file on the standby environment.
On standby databases, when a multitenant plugin is performed and OMF is in use, media recovery will search the default directory noted above (in the case of ASM, <DB_CREATE_FILE_DEST diskgroup_name>/<db_unique_name>/<GUID>/DATAFILE) and use the same file characteristics to find a file match. If a match is found, media recovery adds the file to the controlfile and continues to apply redo without requiring any additional intervention.
If a suitable file match is not found, media recovery stops and returns an error and manual steps must then be taken to resolve the error and restart media recovery.
Oracle Database 12c version 12.1.0.2 provides a new clause, STANDBYS=NONE, that can be used when creating or plugging a PDB into a container. This clause allows you to defer migration of datafiles to destination standby CDBs. The standby will create the basic structure of the PDB, but all files will be marked OFFLINE/RECOVER. At some future point, you can copy the files to the standby site and ENABLE RECOVERY of the PDB to add the files to the standby and begin providing Data Guard protection. See Document 1916648.1 for more information on usage of STANDBYS=NONE.

Oracle Database 12c version 12.2.0.1 provides additional options for the STANDBYS= clause, allowing you to specify subsets of standbys to include in or exclude from the full creation of PDBs. Please note that this option does not remove the requirement that files be pre-staged at the standbys during the plugin operation for standbys included in the subset.

Oracle Database 12c version 12.2.0.1 also offers the ability to perform hot clones and PDB relocation. At this time it is not possible to maintain any standbys at the destination directly while using either of these options. You should always create PDBs with the STANDBYS=NONE clause when using hot cloning or relocation, then ENABLE RECOVERY at some future point.
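A minimal sketch of the deferred approach, reusing names from the examples in this note (see Document 1916648.1 for the complete procedure):

-- On the primary: create the PDB without standby file protection
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  source_file_directory='+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
  nocopy standbys=none;

-- Later, on the standby, after the files have been copied into place:
alter pluggable database my_pdb enable recovery;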
Full Transportable and Export/Import implications
Data Pump full transportable or export/import requires exact filenames to be passed to Data Pump on import, so the files must be in the specified location for the import operation to work successfully on the primary. As noted above, filenames will not match between primary databases and standby databases when OMF is used (even after applying DB_FILE_NAME_CONVERT), and with ASM the directory structures will not match (DB_UNIQUE_NAME settings will be different). At this time with OMF, as the redo for each tablespace in the import is applied at the standby site, media recovery will fail because it cannot find the file(s). You must manually rename the file at the standby site to the actual filename and restart recovery. This must be done tablespace by tablespace until all of the tablespaces have been processed.
12c Migrate non-CDB and Convert to a PDB using 12c RMAN Active Database Duplication
At this time, conversion of a non-CDB to a PDB can only be done same-endian. If you need to perform a cross-endian migration, you must precreate a PDB in your destination CDB and use some form of export/import (Data Pump) or SQL-based copy (e.g. CTAS) to migrate data and/or files; files will need to be moved manually. These options include expdp/impdp using the TRANSPORTABLE=ALWAYS and FULL=Y parameters and schema-based expdp/impdp; a sample invocation is sketched below.
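For reference, a full transportable export on the source might be invoked as follows (a sketch; the directory object dp_dir and file names are hypothetical):

$ expdp system full=y transportable=always directory=dp_dir dumpfile=noncdb_full.dmp logfile=noncdb_full.log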
The RMAN Block Change Tracking feature can be used to improve the performance of the incremental roll forward steps.
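Block change tracking can be enabled on the source as follows; with OMF in use no file name needs to be specified, and the status can be verified in V$BLOCK_CHANGE_TRACKING:

SQL> alter database enable block change tracking;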
The following steps were executed using the bash shell.
This method takes advantage of a new feature of RMAN in 12c allowing you to clone database files and apply incrementals to those files from an active database without requiring staging areas. In addition, the source non-CDB application activity can continue while the cloning and incremental applies are being performed. When ready to perform the actual migration, the source non-CDB must be put into READ ONLY mode. At this time, application activity should stop and a final incremental apply performed.
To get the datafiles in the correct ASM directory for the destination PDB, you perform an RMAN BACKUP AS COPY from the source non-CDB while also connected to the destination CDB in the same RMAN session. When using OMF or ASM on the destination, this will ensure that the datafile copies are written to the correct location. The incremental apply is also performed across the network and applied to the datafiles in their destination location.
This operation requires a temporary instance to be used for the incremental apply. When the datafile copies are created, they aren't known to any databases on the new environment. By using the temporary instance, we create a copy of the source non-CDB controlfile on the destination environment and catalog the newly created datafile copies in the temporary instance. We then have an instance that is aware of the files and can perform recovery on them. The temporary instance is used only for this purpose and can be discarded once the migration process is complete. The temporary instance should only be started in nomount or mount mode; it should never be opened.
If you have an Active Data Guard physical standby in place on the destination, perform the RMAN BACKUP AS COPY and incremental apply process on both the primary and standby destination environments. This places the files in the correct location on the destination standby environment so that media recovery can continue through the plugin operation without stopping. You must run the process separately for each destination environment: e.g., you will run an RMAN process to copy the files to your destination primary site and another to copy the files to your destination standby site, you will create a temporary instance on your destination primary site and on your destination standby site, and so on. Once the final incremental apply is completed on both the destination primary and standby sites, you can plug in the new PDB on the primary with no further action needed on the standby site.
The example below documents the process with ASM in use for file storage and with an Active Data Guard physical standby database in place.
While performing this process, do NOT add datafiles to the source, as the incremental apply (RECOVER DATABASE NOREDO) commands will not restore the new datafiles; they will only recover the datafiles already restored/cataloged.
Ensure backup compression on the disk device type is not enabled on the primary database. Backup as copy is not allowed with compression enabled.
RMAN> show device type;
RMAN configuration parameters for database with db_unique_name NONCDB are: CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
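If the output instead shows BACKUP TYPE TO COMPRESSED BACKUPSET, switch it back to an uncompressed backup set before running the copy, for example:

RMAN> configure device type disk backup type to backupset;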
Run the backup command file on both the destination primary site and the destination standby site to copy the files from the source non-CDB to the destination environments (a sketch of the command file follows). Note the directory structure for the output files on both the primary and the standby environments (lines starting with "output file name"). These directories will be needed in a later step.
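The creation of backup.cmd is not shown in this excerpt; based on the transcripts below (where RMAN masks the connect strings as *), the command file would look similar to this sketch, with the TNS aliases being assumptions:

connect target sys@noncdb
connect auxiliary sys@cont3
backup as copy database auxiliary format '+data';

For the standby run, the auxiliary connection points to the destination standby (e.g. sys@cont3stby) and the format clause names the standby diskgroup ('+datad1').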
- Primary
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 12:23:17 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> @backup.cmd
RMAN> connect target *
target database Password:
connected to target database: NONCDB (DBID=1643045589)

RMAN> connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CONT3 (DBID=1783201232)

RMAN> backup as copy database auxiliary format '+data';
Starting backup at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=319 device type=DISK
<snip>
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=+DATAD1/NONCDB/DATAFILE/undotbs1.441.825151103
channel ORA_DISK_2: starting datafile copy
input datafile file number=00005 name=+DATAD1/NONCDB/DATAFILE/soets.438.825151965
channel ORA_DISK_3: starting datafile copy
input datafile file number=00006 name=+DATAD1/NONCDB/DATAFILE/soets2.436.825151971
channel ORA_DISK_4: starting datafile copy
input datafile file number=00007 name=+DATAD1/NONCDB/DATAFILE/soets3.451.825151981
<snip>
input datafile file number=00001 name=+DATAD1/NONCDB/DATAFILE/system.455.825151099
output file name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193 tag=TAG20130904T102613   # Save this directory name to be used later
channel ORA_DISK_8: datafile copy complete, elapsed time: 00:01:05
channel ORA_DISK_8: starting datafile copy
input datafile file number=00004 name=+DATAD1/NONCDB/DATAFILE/users.288.825151109
output file name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.386.825251261 tag=TAG20130904T102613
channel ORA_DISK_8: datafile copy complete, elapsed time: 00:00:05
<snip>
output file name=+DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.272.825251181 tag=TAG20130904T102613
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:08:30
Finished backup at 04-SEP-13
RMAN> **end-of-file**
- Standby
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 10:26:51 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> @backup.cmd
RMAN> connect target *
target database Password:
connected to target database: NONCDB (DBID=1643045589)

RMAN> connect auxiliary *
auxiliary database Password:
connected to auxiliary database: CONT3 (DBID=1783201232)

RMAN> backup as copy database auxiliary format '+datad1';
Starting backup at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=556 device type=DISK
<snip>
input datafile file number=00001 name=+DATAD1/NONCDB/DATAFILE/system.455.825151099
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.528.825244029 tag=TAG20130904T102706   # Save this directory name to be used later
channel ORA_DISK_7: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_7: starting datafile copy
input datafile file number=00004 name=+DATAD1/NONCDB/DATAFILE/users.288.825151109
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.517.825244029 tag=TAG20130904T102706
channel ORA_DISK_8: datafile copy complete, elapsed time: 00:00:16
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043 tag=TAG20130904T102706
channel ORA_DISK_7: datafile copy complete, elapsed time: 00:00:01
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.516.825244027 tag=TAG20130904T102706
<snip>
output file name=+DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.515.825244029 tag=TAG20130904T102706
channel ORA_DISK_6: datafile copy complete, elapsed time: 00:00:51
Finished backup at 04-SEP-13

RMAN> **end-of-file**
- Create a small pfile for the temporary instances. Although the pfiles are not required to differ between the destination primary and standby environments, for clarity they differ in this example, with each temporary instance getting a different DB_UNIQUE_NAME. Note the comments for each initialization parameter setting.
- Primary
db_name=noncdb                # MUST match db_name of source CDB/non-CDB
db_unique_name=junk           # ONLY specify if the source db is on the same system as the TEMP instance
control_files='/home/oracle/demo/control1.ctl','/home/oracle/demo/control2.ctl'   # Just use local files
compatible='12.1.0.0.0'       # required if compatible is not set to default value
# Since the source is a non-CDB, we do not use the enable_pluggable_database parameter
- Standby
db_name=noncdb                # MUST match db_name of source CDB/non-CDB
db_unique_name=junkstby       # ONLY specify if the source db is on the same system as the TEMP instance
control_files='/home/oracle/demo/control1.ctl','/home/oracle/demo/control2.ctl'   # Just use local files
compatible='12.1.0.0.0'       # required if compatible is not set to default value
# Since the source is a non-CDB, we do not use the enable_pluggable_database parameter
- Create a TNS alias for the destination temporary instances. This should be done on the environments where you will be running RMAN (in this example, the destination primary and standby environments). If you needed to specify DB_UNIQUE_NAME, the same value must also be specified for the SERVICE_NAME in the TNS address. Do not use the SCAN address for the host, as the temporary instances will not be registered with the SCAN listener. (UR = A) is required to allow you to connect to the service while the database is in mount/nomount mode.
- Primary
TEMP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <primhostdb01>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = junk)
      (UR = A)
    )
  )
- Standby
TEMPSTBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <stbyhostdb01>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = junkstby)
      (UR = A)
    )
  )
- Startup nomount the temporary instances on the destination primary and standby environments using the pfiles created above
- Primary
$ export ORACLE_SID=<temp>
$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 12:51:52 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/home/oracle/demo/pfile.ora'
ORACLE instance started.

Total System Global Area  384200704 bytes
Fixed Size                  2288536 bytes
Variable Size             322962536 bytes
Database Buffers           50331648 bytes
Redo Buffers                8617984 bytes
- Standby
$ export ORACLE_SID=<tempstby>
$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 10:53:06 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/home/oracle/demo/pfile.ora'
ORACLE instance started.

Total System Global Area  634732544 bytes
Fixed Size                  2291424 bytes
Variable Size             524290336 bytes
Database Buffers          100663296 bytes
Redo Buffers                7487488 bytes
- Restore the controlfile from the source non-CDB to both the destination primary and standby environment temporary instances. Connect to the temporary instances using TNS (e.g. rman target sys@temp).
The FROM SERVICE clause in the RESTORE command provides the connection to the source non-CDB. The clause in the example uses the TNS alias for the source non-CDB created above. This is the only connection used and required for the source non-CDB in these commands.
- Primary
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 13:32:03 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@<temp>
target database Password:
connected to target database: NONCDB (not mounted)
RMAN> restore controlfile from service noncdb;
Starting restore at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=790 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output file name=/home/oracle/demo/control1.ctl
output file name=/home/oracle/demo/control2.ctl
Finished restore at 04-SEP-13
- Standby
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 4 11:33:33 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@<tempstby>
target database Password:
connected to target database: NONCDB (not mounted)
RMAN> restore controlfile from service noncdb;
Starting restore at 04-SEP-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=514 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/home/oracle/demo/control1.ctl
output file name=/home/oracle/demo/control2.ctl
Finished restore at 04-SEP-13
- Mount the temporary instances on the destination primary and standby environments to allow for cataloging the cloned database files.
RMAN> alter database mount;
Statement processed
- Catalog the datafiles in the temporary instances on the destination primary and standby environments using "START WITH" to locate the datafile copies created above. The directory location can be found in the BACKUP AS COPY screen output in lines starting with "output file name".
- Primary
RMAN> catalog start with '+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE';
searching for all files that match the pattern +DATA/CONT3/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE
List of Files Unknown to the Database
=====================================
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.283.825173903
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.388.825182125
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.259.825200151
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.272.825251181
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.384.825251183
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.390.825251185
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.374.825251187
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.389.825251189
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.373.825251191
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.385.825251191
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.387.825251193
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.386.825251261

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.283.825173903
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.388.825182125
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.259.825200151
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.272.825251181
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.384.825251183
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.390.825251185
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.374.825251187
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.389.825251189
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.373.825251191
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.385.825251191
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.387.825251193
File Name: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.386.825251261
- Standby
RMAN> catalog start with '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE';
searching for all files that match the pattern +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE
List of Files Unknown to the Database
=====================================
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.516.825244027
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.511.825244027
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.513.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.529.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.514.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.528.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.515.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.517.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.510.825244043

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/UNDOTBS1.516.825244027
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.511.825244027
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.513.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.529.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.514.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.528.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.515.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.517.825244029
File Name: +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.510.825244043
- Switch the files in the temporary instances on the destination primary and standby environments to the datafile copies just cataloged
- Primary
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193"
datafile 2 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.385.825251191"
datafile 3 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.272.825251181"
datafile 4 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.386.825251261"
datafile 5 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.384.825251183"
datafile 6 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.390.825251185"
datafile 7 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.374.825251187"
datafile 8 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.389.825251189"
datafile 9 switched to datafile copy "+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.373.825251191"
- Standby
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.517.825244029"
datafile 2 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.528.825244029"
datafile 3 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.516.825244027"
datafile 4 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043"
datafile 5 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.511.825244027"
datafile 6 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.513.825244029"
datafile 7 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.529.825244029"
datafile 8 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.514.825244029"
datafile 9 switched to datafile copy "+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.515.825244029"
- Set RMAN parallelism in the temporary instances on both destination primary and standby environments
RMAN> configure device type disk parallelism 8;
old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored
- At periodic intervals, perform incremental applies from the source non-CDB to the temporary instances on the destination primary and standby environments to keep the destination datafiles reasonably current. This can be done as many times as necessary until you are ready to perform the migration. The incremental process compares the files in the temporary instance with the files in the source non-CDB and determines which files need recovery applied; thus the recovery work done can differ between the primary environment temporary instance and the standby temporary instance, depending on the states of their respective files. The source non-CDB can continue application activity while the incremental applies are occurring.
The FROM SERVICE clause in the RECOVER command provides the connection to the source non-CDB. The clause in the example uses the TNS alias for the source non-CDB created above. This is the only connection used and required for the source non-CDB in these commands.
- Primary
RMAN> connect target sys@<temp>
target database Password:
connected to target database: NONCDB (mounted)
RMAN> recover database noredo from service noncdb;
Starting recover at 04-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=136 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=529 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=660 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=791 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=922 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=6 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=137 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=268 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
destination for restore of datafile 00001: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.387.825251193
channel ORA_DISK_2: starting incremental datafile backup set restore
channel ORA_DISK_2: using network backup set from service noncdb
destination for restore of datafile 00002: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.385.825251191
channel ORA_DISK_3: starting incremental datafile backup set restore
channel ORA_DISK_3: using network backup set from service noncdb
destination for restore of datafile 00003: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/undotbs1.272.825251181
channel ORA_DISK_4: starting incremental datafile backup set restore
channel ORA_DISK_4: using network backup set from service noncdb
destination for restore of datafile 00004: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.386.825251261
channel ORA_DISK_5: starting incremental datafile backup set restore
channel ORA_DISK_5: using network backup set from service noncdb
destination for restore of datafile 00005: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.384.825251183
channel ORA_DISK_6: starting incremental datafile backup set restore
channel ORA_DISK_6: using network backup set from service noncdb
destination for restore of datafile 00006: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.390.825251185
channel ORA_DISK_7: starting incremental datafile backup set restore
channel ORA_DISK_7: using network backup set from service noncdb
destination for restore of datafile 00007: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.374.825251187
channel ORA_DISK_8: starting incremental datafile backup set restore
channel ORA_DISK_8: using network backup set from service noncdb
destination for restore of datafile 00008: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.389.825251189
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
destination for restore of datafile 00009: +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.373.825251191
channel ORA_DISK_4: restore complete, elapsed time: 00:00:08
channel ORA_DISK_6: restore complete, elapsed time: 00:00:15
channel ORA_DISK_7: restore complete, elapsed time: 00:00:15
channel ORA_DISK_8: restore complete, elapsed time: 00:00:14
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_2: restore complete, elapsed time: 00:00:33
channel ORA_DISK_5: restore complete, elapsed time: 00:01:43
channel ORA_DISK_3: restore complete, elapsed time: 00:02:23
Finished recover at 04-SEP-13
- Standby
RMAN> connect target sys@<tempstby>
target database Password:
connected to target database: NONCDB (mounted)
RMAN> recover database noredo from service noncdb;
Starting recover at 04-SEP-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=770 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=834 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=898 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=961 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1026 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=1090 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=1154 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=1218 device type=DISK
skipping datafile 1; already restored to SCN 7688784
skipping datafile 2; already restored to SCN 7688782
skipping datafile 3; already restored to SCN 7688777
skipping datafile 5; already restored to SCN 7688778
skipping datafile 6; already restored to SCN 7688779
skipping datafile 7; already restored to SCN 7688780
skipping datafile 8; already restored to SCN 7688781
skipping datafile 9; already restored to SCN 7688783
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service noncdb
destination for restore of datafile 00004: +DATAD1/CONT3STBY/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.510.825244043
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished recover at 04-SEP-13
- Build the plugin statement to be executed on the destination primary database. Use the SOURCE_FILE_DIRECTORY clause to point to the location the files were restored to on the destination CDB primary database environment. Note that the plugin process will search the SOURCE_FILE_DIRECTORY for files that match the information contained in the manifest XML file created by the DBMS_PDB.DESCRIBE execution. Use the NOCOPY clause, as the files are already in their intended location. This statement can be pre-created outside of the outage window.
create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  source_file_directory='+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
  nocopy;
- Shut down the source non-CDB and restart it read only. The shutdown must not be a SHUTDOWN ABORT. At this time, application activity should cease.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 3206836224 bytes
Fixed Size                  3079928 bytes
Variable Size             922747144 bytes
Database Buffers         2264924160 bytes
Redo Buffers               16084992 bytes
Database mounted.
SQL> alter database open read only;
Database altered.
- On the source non-CDB, using DBMS_PDB.DESCRIBE(), create the xml manifest for the source non-CDB
SQL> exec dbms_pdb.describe('/home/oracle/demo/noncdb_unplug.xml');
PL/SQL procedure successfully completed.
- Leave the source non-CDB running and perform a final incremental apply through the temporary instances on the destination primary and standby environments
- Shutdown the temporary instances
- Copy the xml manifest file to the destination primary host
- If your diskgroup names differ between your destination primary and standby, ensure that DB_FILE_NAME_CONVERT is set on the standby, specifying only the diskgroup names (an example is shown below).
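For example, with the diskgroup names used in this note (+DATA on the primary, +DATAD1 on the standby), the parameter could be set on the standby as follows; DB_FILE_NAME_CONVERT is not dynamic, so the standby must be restarted for it to take effect:

SQL> alter system set db_file_name_convert='+DATA','+DATAD1' scope=spfile sid='*';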
- Log in to the destination CDB primary and plug in the non-CDB as a PDB using the plugin statement created above. As long as the files already exist on the standby prior to the plugin, are in the correct directory (<dgname>/<dbuniqname>/<GUID>/DATAFILE for ASM), and are current on the destination standby environment, no additional steps are needed for the standby. Media recovery will find the files and continue processing.
SQL> set echo on
SQL> @plugin_mypdb
SQL> create pluggable database my_pdb using '/home/oracle/demo/noncdb_unplug.xml'
  2  source_file_directory= '+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE'
  3  nocopy;
Pluggable database created.
If you encounter ORA-00922 on the SOURCE_FILE_DIRECTORY clause, this indicates you have not applied the prerequisite patches. To use these steps, fixes 16347068, 16822629, 16874123 and 16888264 must be applied to the 12c Oracle Database homes. These fixes are included in 12.1.0.1 Patch Set Update 2 and later and are available for the 12.1.0.1 base release as merge patch 17759598.
Here is sample output from the alert log on the primary for finding the files:
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/system.455.825151099 with afn -1
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSTEM.384.825258561 for original file-+DATAD1/NONCDB/DATAFILE/system.455.825151099 with afn-1
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/sysaux.317.825151101 with afn -2
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SYSAUX.390.825258559 for original file-+DATAD1/NONCDB/DATAFILE/sysaux.317.825151101 with afn-2
Creating new file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/temp.444.825151103 for original file-+DATAD1/NONCDB/TEMPFILE/temp.444.825151103
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/users.288.825151109 with afn -4
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/USERS.272.825258627 for original file-+DATAD1/NONCDB/DATAFILE/users.288.825151109 with afn-4
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets.438.825151965 with afn -5
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS.387.825258551 for original file-+DATAD1/NONCDB/DATAFILE/soets.438.825151965 with afn-5
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets2.436.825151971 with afn -6
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS2.385.825258553 for original file-+DATAD1/NONCDB/DATAFILE/soets2.436.825151971 with afn-6
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets3.451.825151981 with afn -7
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS3.373.825258555 for original file-+DATAD1/NONCDB/DATAFILE/soets3.451.825151981 with afn-7
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets4.452.825151987 with afn -8
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS4.389.825258557 for original file-+DATAD1/NONCDB/DATAFILE/soets4.452.825151987 with afn-8
Scanning source datafile directory - +DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for file originally created as +DATAD1/NONCDB/DATAFILE/soets5.443.825151993 with afn -9
Using file-+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/SOETS5.374.825258557 for original file-+DATAD1/NONCDB/DATAFILE/soets5.443.825151993 with afn-9
Here is sample output from the alert log on the standby for finding the file:
Wed Sep 04 12:43:07 2013
Recovery scanning directory +DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE for any matching files
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/system.384.825258561
Successfully added datafile 37 to media recovery
Datafile #37: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.511.825251253'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/sysaux.390.825258559
Successfully added datafile 38 to media recovery
Datafile #38: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.528.825251253'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/users.272.825258627
Successfully added datafile 39 to media recovery
Datafile #39: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.516.825251265'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets.387.825258551
Successfully added datafile 40 to media recovery
Datafile #40: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.517.825251253'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets2.385.825258553
Successfully added datafile 41 to media recovery
Datafile #41: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.515.825251253'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets3.373.825258555
Successfully added datafile 42 to media recovery
Datafile #42: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.514.825251253'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets4.389.825258557
Successfully added datafile 43 to media recovery
Datafile #43: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.529.825251253'
Deleted Oracle managed file +DATAD1/<cont3>/e57ca3f121dfd48ae043f646f40a61ea/datafile/soets5.374.825258557
Successfully added datafile 44 to media recovery
Datafile #44: '+DATAD1/<CONT3STBY>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.513.825251253'
- In the same SQL*Plus session in the destination primary CDB root used for the plugin operation, complete the conversion of the non-CDB to a PDB using the noncdb_to_pdb.sql script. Connect to the newly created PDB; the PDB should be in mounted mode (not open), and the script should be executed while connected to the PDB.
SQL> alter session set container=my_pdb;
Session altered.
SQL> @?/rdbms/admin/noncdb_to_pdb
SQL> Rem
SQL> Rem $Header: rdbms/admin/noncdb_to_pdb.sql /st_rdbms_12.1.0.1/9 2013/04/14 20:05:50 talliu Exp $
SQL> Rem
SQL> Rem noncdb_to_pdb.sql
SQL> Rem
SQL> Rem Copyright (c) 2011, 2013, Oracle and/or its affiliates.
SQL> Rem All rights reserved.
SQL> Rem
SQL> Rem    NAME
SQL> Rem      noncdb_to_pdb.sql - Convert PDB
SQL> Rem
SQL> Rem    DESCRIPTION
SQL> Rem      Converts DB to PDB.
SQL> Rem
SQL> Rem    NOTES
SQL> Rem      Given a DB with proper obj$ common bits set, we convert it to a proper
SQL> Rem      PDB by deleting unnecessary metadata.
SQL> Rem
SQL> Rem    MODIFIED   (MM/DD/YY)
SQL> Rem    sankejai    04/11/13 - 16530655: do not update status in container$
SQL> Rem    pyam        04/03/13 - rename temp cdb$* views, to not interfere when
SQL> Rem                           this is run in multiple PDBs simultaneously
SQL> Rem    pyam        02/06/13 - error out for non-CDB
SQL> Rem    pyam        01/21/13 - stop exiting on sqlerror at end
<snip>
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2013-09-04 14:46:41
1 row selected.
SQL>
SQL> DOC
DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.
DOC>
DOC> Use the following queries to track recompilation progress:
<snip>
SQL> -- leave the PDB in the same state it was when we started
SQL> BEGIN
  2    execute immediate '&open_sql &restricted_state';
  3  EXCEPTION
  4    WHEN OTHERS THEN
  5    BEGIN
  6      IF (sqlcode <> -900) THEN
  7        RAISE;
  8      END IF;
  9    END;
 10  END;
 11  /
PL/SQL procedure successfully completed.
SQL>
SQL> WHENEVER SQLERROR CONTINUE;
SQL>
- Once the script completes successfully, open the PDB on the primary database
The PDB will have a STATUS of NEW on both the primary and standby until the PDB is opened for the first time in the primary at which time the status will change to NORMAL. This initial open must be performed on the primary. There is a small amount of redo generated by this initial open which will be transported to the standby database and applied via media recovery. Until this redo is applied on the standby to change the status from NEW to NORMAL, the PDB cannot be opened on the standby database.
SQL> alter session set container=cdb$root;
Session altered.
SQL> alter pluggable database my_pdb open instances=all;
Pluggable database altered.
- On the standby, ensure the new PDB has a NORMAL status, then open the PDB on the standby. If you did not specify the TEMPFILE REUSE clause on the CREATE PLUGGABLE DATABASE statement, add a file to the TEMP tablespace on the standby.
SQL> select pdb_name, status from cdb_pdbs;
PDB_NAME             STATUS
-------------------- -------------
PDB$SEED             NORMAL
STARK                NORMAL
MY_PDB               NORMAL
SQL> alter pluggable database my_pdb open instances=all;
Pluggable database altered.
SQL> alter session set container=my_pdb;
Session altered.
SQL> alter tablespace temp add tempfile '+DATAD1' size 10g;
Tablespace altered.
- Back up the PDB. It cannot be recovered in the new container database without a backup
RMAN> backup pluggable database my_pdb;
Starting backup at 06-SEP-13
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00040 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets.387.825258551
input datafile file number=00041 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets2.385.825258553
input datafile file number=00042 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets3.373.825258555
input datafile file number=00043 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets4.389.825258557
input datafile file number=00044 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/soets5.374.825258557
input datafile file number=00038 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/sysaux.390.825258559
input datafile file number=00037 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/system.384.825258561
input datafile file number=00039 name=+DATA/<CONT3>/E57CA3F121DFD48AE043F646F40A61EA/DATAFILE/users.272.825258627
channel ORA_DISK_1: starting piece 1 at 06-SEP-13
channel ORA_DISK_1: finished piece 1 at 06-SEP-13
piece handle=+RECO/CONT3/E57CA3F121DFD48AE043F646F40A61EA/BACKUPSET/2013_09_06/nnndf0_tag20130906t152558_0.381.825434759 tag=TAG20130906T152558 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:16
Finished backup at 06-SEP-13

Starting Control File and SPFILE Autobackup at 06-SEP-13
piece handle=+RECO/CONT3/AUTOBACKUP/2013_09_06/s_825434834.358.825434835 comment=NONE
Finished Control File and SPFILE Autobackup at 06-SEP-13
12c Unplug/Plug PDB to new CDB using 12c RMAN Active Database Duplication
At this time, unplug/plug of a PDB can only be done same-endian. If you need to perform a cross-endian migration, you must precreate a PDB in your destination CDB and use transportable tablespaces.
The RMAN Block Change Tracking feature can be used to improve the performance of the incremental roll forward steps.
The following steps were executed using the bash shell.
This method takes advantage of a new feature of RMAN in 12c allowing you to clone database files and apply incrementals to those files from an active database without requiring staging areas. In addition, application activity in the source PDB can continue while the cloning and incremental applies are being performed. To get the datafiles into the correct ASM directory for the destination PDB, you perform an RMAN BACKUP AS COPY from the source PDB while also connected to the destination CDB. When using OMF or ASM, this will ensure that the datafile copies are written to the correct location. The incremental apply is also performed across the network and applied to the datafiles in their destination location.
This operation requires a temporary instance to be used for the incremental apply. When the datafile copies are created, they aren't known to any databases on the new environment. By using the temporary instance, we create a copy of the source CDB's controlfile on the destination environment and catalog the newly created datafile copies in the temporary instance. We then have an instance that is aware of the files and can perform recovery on them. The temporary instance is used only for this purpose and can be discarded once the migration process is complete; it should only be started in nomount or mount mode and should never be opened.
If you have an Active Data Guard physical standby in place, perform the RMAN BACKUP AS COPY and incremental apply process on both the destination primary and standby environments. This places the files in the correct location on the destination standby environment so that media recovery can continue through the plugin operation without stopping. You must run the process separately for each destination environment: e.g., you will run an RMAN process to copy the files to your destination primary site and another to copy the files to your destination standby site, you will create a temporary instance on your destination primary site and on your destination standby site, and so on. Once the final incremental apply is completed on both the destination primary and standby sites, you can create the new PDB on the primary with no further action needed on the standby site.
The example below documents the process with ASM in use for file storage and with an Active Data Guard physical standby database in place.
While performing this process, do NOT add datafiles to the source: the recover commands will not restore new datafiles, they will only recover the datafiles already restored and cataloged.
- On the destination primary and standby database environments, create a TNS entry pointing to the CDB containing the PDB you wish to move. This will be used by RMAN to create the datafile copies on the destination environments and to perform incremental applies across the network with no need for a staging area.
CONT4 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <pdb-scan>)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = <cont4>) ) )
- Ensure that there are TNS entries on both the destination primary and standby environments that point to the respective destination instances. In other words, on the destination primary server, there must be a TNS entry pointing to the destination primary database and on the destination standby server, there must be a TNS entry pointing to the destination standby database. These TNS entries can use SCAN hosts.
- Destination Primary
CONT3 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <target-prim-scan>)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = <cont3>) ) )
- Destination Standby
CONT3STBY = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <target-stby-scan>)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = <cont3stby>) ) )
- Copy the TNS entries pointing to the destination primary and standby environments to the source CDB environment. These are required to allow the source database to connect to the destinations as part of the BACKUP AS COPY process. These TNS entries can use SCAN hosts. If you used SCAN hosts for the TNS entry pointing to the CDB source in the destination environment (first step above) and the source CDB is a Real Application Cluster database, these TNS entries pointing to the destination should be on ALL the nodes where the CDB database has instances running.
- Using RMAN, perform a BACKUP AS COPY of the source PDB to the destination primary and standby environments using the destination primary and standby database instances. This will create datafile copies of the source PDB in the correct ASM directories on the destination sites; these file copies will not be known to the databases at this point. A TNS connection is needed to both the RMAN target (the source 12c CDB) and the destination (12c CDB). In the RMAN session, connect to the ROOT of both the source container database and the destination container database you will be migrating to as the clone.
Create a command file similar to the following. The target is the source CDB containing the PDB to be migrated; the clone is the destination CDB in which the new PDB will be created.
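A sketch of what backup_pdb.cmd might contain for the destination primary, matching the runs shown below (the connect strings are placeholders; the standby version of the file would connect to the standby CDB and use its disk group, '+datad1', in the AUXILIARY FORMAT clause):
connect target sys@<cont4>
connect auxiliary sys@<cont3>
backup as copy pluggable database <PDB NAME> auxiliary format '+data';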
Ensure backup compression on the disk device type is not enabled on the primary database. Backup as copy is not allowed with compression enabled.
RMAN> show device type;
RMAN configuration parameters for database with db_unique_name NONCDB are: CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
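If compression is shown as enabled, the disk device type configuration can be reset to its unconfigured default, for example:
RMAN> configure device type disk clear;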
Run the code blocks just created on both the destination primary site and the destination standby site to copy the files from the source non-CDB to destination environments. Note the directory structure for the output files on both the primary and the standby environments (lines starting with "output file name"). These directories will be needed in a later step.
- Primary
RMAN> @backup_pdb.cmd
RMAN> connect target * target database Password: connected to target database: <CONT4> (DBID=1644992445)
RMAN> connect auxiliary * auxiliary database Password: connected to auxiliary database: <CONT3> (DBID=1783201232)
RMAN> backup as copy pluggable database <PDB NAME> auxiliary format '+data'; Starting backup at 05-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=204 instance=<cont42> device type=DISK allocated channel: ORA_DISK_2 channel ORA_DISK_2: SID=239 instance=<cont42> device type=DISK allocated channel: ORA_DISK_3 channel ORA_DISK_3: SID=371 instance=<cont41> device type=DISK allocated channel: ORA_DISK_4 channel ORA_DISK_4: SID=271 instance=<cont42> device type=DISK allocated channel: ORA_DISK_5 channel ORA_DISK_5: SID=402 instance=<cont41> device type=DISK allocated channel: ORA_DISK_6 channel ORA_DISK_6: SID=306 instance=<cont42> device type=DISK allocated channel: ORA_DISK_7 channel ORA_DISK_7: SID=438 instance=<cont41> device type=DISK allocated channel: ORA_DISK_8 channel ORA_DISK_8: SID=343 instance=<cont42> device type=DISK channel ORA_DISK_1: starting datafile copy input datafile file number=00045 name=+DATAD1/<CONT4>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.531.825347311 channel ORA_DISK_2: starting datafile copy input datafile file number=00043 name=+DATAD1/<CONT4>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.284.825346821 channel ORA_DISK_3: starting datafile copy input datafile file number=00042 name=+DATAD1/<CONT4>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.530.825346821 output file name=+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197 tag=TAG20130905T151946 # Save this directory name to be used later channel ORA_DISK_3: datafile copy complete, elapsed time: 00:00:49 output file name=+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195 tag=TAG20130905T151946 channel ORA_DISK_2: datafile copy complete, elapsed time: 00:01:23 output file name=+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193 tag=TAG20130905T151946 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:06:32 Finished backup at 05-SEP-13
RMAN> **end-of-file**
- Standby
RMAN> @backup_pdb.cmd
RMAN> connect target * target database Password: connected to target database: <CONT4> (DBID=1644992445)
RMAN> connect auxiliary * auxiliary database Password: connected to auxiliary database: <CONT3> (DBID=1783201232)
RMAN> backup as copy pluggable database <PDB NAME> auxiliary format '+datad1'; Starting backup at 05-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=105 instance=<cont42> device type=DISK allocated channel: ORA_DISK_2 channel ORA_DISK_2: SID=207 instance=<cont41> device type=DISK allocated channel: ORA_DISK_3 channel ORA_DISK_3: SID=141 instance=<cont42> device type=DISK allocated channel: ORA_DISK_4 channel ORA_DISK_4: SID=244 instance=<cont41> device type=DISK allocated channel: ORA_DISK_5 channel ORA_DISK_5: SID=275 instance=<cont41> device type=DISK allocated channel: ORA_DISK_6 channel ORA_DISK_6: SID=307 instance=<cont41> device type=DISK allocated channel: ORA_DISK_7 channel ORA_DISK_7: SID=173 instance=<cont42> device type=DISK allocated channel: ORA_DISK_8 channel ORA_DISK_8: SID=346 instance=<cont41> device type=DISK channel ORA_DISK_1: starting datafile copy input datafile file number=00045 name=+DATAD1/<CONT4>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.531.825347311 channel ORA_DISK_2: starting datafile copy input datafile file number=00043 name=+DATAD1/<CONT4>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.284.825346821 channel ORA_DISK_3: starting datafile copy input datafile file number=00042 name=+DATAD1/<CONT4>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.530.825346821 output file name=+DATAD1/CONT3STBY/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897 tag=TAG20130905T151815 # Save this directory name to be used later channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:07 output file name=+DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897 tag=TAG20130905T151815 channel ORA_DISK_3: datafile copy complete, elapsed time: 00:00:07 output file name=+DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897 tag=TAG20130905T151815 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35 Finished backup at 05-SEP-13
RMAN> **end-of-file**
- Copy the password file from the source CDB to the destination sites (both primary and standby) for the temp instances. The following example shows the password file stored in ASM for the source CDB. The password file is extracted from ASM on the source and written to a local filesystem, then remote copied to the destination environment (both primary and standby). For ease of use, the files should be copied to the $ORACLE_HOME/dbs directory (default location) being used by the temporary instance on the destination environments and should be given a default name of orapw<$ORACLE_SID of the temporary instance>.
[oracle@source]$ asmcmd -p ASMCMD [+] > cd datad1/cont4 ASMCMD [+datad1/cont4] > cp orapwcont4 /home/oracle/<temp>/orapwtemp copying +datad1/cont4/orapwcont4 -> /home/oracle/<temp>/orapwtemp
[oracle@source]$ scp /home/oracle/<temp>/orapwtemp oracle@destination-prim:<ORACLE_HOME>/dbs [oracle@source]$ scp /home/oracle/<temp>/orapwtemp oracle@destination-stby:<ORACLE_HOME>/dbs
- Create a small pfile for the temporary instances. Although not required to be different for the destination primary and standby environments, for clarity they are different in this example with each temporary instance getting a different DB_UNIQUE_NAME. Note the comments for each initialization parameter setting.
- Primary
db_name=<cont4> # MUST match db_name of source CDB/non-CDB db_unique_name=junk # ONLY required if the source db is on the same system as the TEMP instance control_files='/home/oracle/demo/control1.ctl','/home/oracle/demo/control2.ctl' # Just use local files compatible='12.1.0.0.0' # Required if compatible is not set to default value enable_pluggable_database=TRUE # Since the source is a CDB, we must specify the enable_pluggable_database parameter
- Standby
db_name=<cont4> # MUST match db_name of source CDB/non-CDB db_unique_name=junk # ONLY required if the source db is on the same system as the TEMP instance control_files='/home/oracle/demo/control1.ctl','/home/oracle/demo/control2.ctl' # Just use local files compatible='12.1.0.0.0' # required if compatible is not set to default value enable_pluggable_database=TRUE # Since the source is a CDB, we must specify the enable_pluggable_database parameter
- Create a TNS alias for the destination temporary instances. This should be done on the environments where you will be running RMAN (in this example, the destination primary and standby environments). If you needed to specify DB_UNIQUE_NAME, the same value must also be specified for the SERVICE_NAME in the TNS address. Do not use the SCAN address for the host, as the temporary instances will not be registered with the SCAN listeners. (UR=A) is required to allow connections to the service while the database is in nomount or mount mode.
- Primary
TEMP = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <primdb01>)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = junk) (UR = A) ) )
- Standby
TEMPSTBY = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = <stbyadm04>)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = junkstby) (UR = A) ) )
- Startup nomount the temporary instances on the destination primary and standby environments
- Primary
$ export ORACLE_SID=<temp> $ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 12:51:52 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/home/oracle/demo/pfile.ora' ORACLE instance started.
Total System Global Area 384200704 bytes Fixed Size 2288536 bytes Variable Size 322962536 bytes Database Buffers 50331648 bytes Redo Buffers 8617984 bytes
- Standby
$ export ORACLE_SID=<tempstby> $ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Wed Sep 4 10:53:06 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/home/oracle/demo/pfile.ora' ORACLE instance started.
Total System Global Area 634732544 bytes Fixed Size 2291424 bytes Variable Size 524290336 bytes Database Buffers 100663296 bytes Redo Buffers 7487488 bytes
- Restore the controlfile from the source CDB to both the destination primary and standby environment temporary instances. Connect to the temporary instance using TNS (e.g. rman target sys@temp).
The FROM SERVICE clause in the RESTORE command provides the connection to the source CDB. The clause in the example uses the TNS alias for the source CDB created above. This is the only connection used and required for the source CDB in these commands.
- Primary
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Sep 5 15:02:34 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@<temp>
target database Password: connected to target database: <CONT4> (not mounted)
RMAN> restore controlfile from service <cont4>;
Starting restore at 05-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=790 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: using network backup set from service cont4 channel ORA_DISK_1: restoring control file channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 output file name=/home/oracle/demo/control1.ctl output file name=/home/oracle/demo/control2.ctl Finished restore at 05-SEP-13
- Standby
$ rman
Recovery Manager: Release 12.1.0.1.0 - Production on Thu Sep 5 13:03:04 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
RMAN> connect target sys@<tempstby>
target database Password: connected to target database: <CONT4> (not mounted)
RMAN> restore controlfile from service <cont4>;
Starting restore at 05-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=514 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: using network backup set from service <cont4> channel ORA_DISK_1: restoring control file channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 output file name=/home/oracle/demo/control1.ctl output file name=/home/oracle/demo/control2.ctl Finished restore at 05-SEP-13
- Mount the temporary instances on the destination primary and standby environments to allow for cataloging the cloned database files.
RMAN> alter database mount;
Statement processed released channel: ORA_DISK_1
- Catalog the datafiles in the temporary instances on the destination primary and standby environments using "START WITH" to locate the datafile copies restored above. The directory location can be found in the BACKUP AS COPY screen output starting with "output file name".
- Primary
RMAN> catalog start with '+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE';
searching for all files that match the pattern +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE
List of Files Unknown to the Database ===================================== File Name: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.396.825355193 File Name: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.395.825355195 File Name: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.394.825355197
Do you really want to catalog the above files (enter YES or NO)? yes cataloging files... cataloging done
List of Cataloged Files ======================= File Name: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.396.825355193 File Name: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.395.825355195 File Name: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.394.825355197
- Standby
RMAN> catalog start with '+DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE';
searching for all files that match the pattern +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE
List of Files Unknown to the Database ===================================== File Name: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.518.825347897 File Name: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.498.825347897 File Name: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.499.825347897
Do you really want to catalog the above files (enter YES or NO)? yes cataloging files... cataloging done
List of Cataloged Files ======================= File Name: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSAUX.518.825347897 File Name: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SOETS2.498.825347897 File Name: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/SYSTEM.499.825347897
- In the temporary instances on the destination primary and standby environments, use SWITCH PLUGGABLE DATABASE ... TO COPY to switch the pluggable database to the datafile copies just cataloged.
- Primary
RMAN> switch pluggable database <PDB NAME> to copy;
datafile 42 switched to datafile copy "+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197" datafile 43 switched to datafile copy "+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195" datafile 45 switched to datafile copy "+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193"
- Standby
RMAN> switch pluggable database <PDB NAME> to copy;
datafile 42 switched to datafile copy "+DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897" datafile 43 switched to datafile copy "+DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897" datafile 45 switched to datafile copy "+DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897"
- Set parallelism on both destination primary and standby temporary instances
RMAN> configure device type disk parallelism 8;
old RMAN configuration parameters: CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; new RMAN configuration parameters: CONFIGURE DEVICE TYPE DISK PARALLELISM 8 BACKUP TYPE TO BACKUPSET; new RMAN configuration parameters are successfully stored
- At periodic intervals, perform incremental applies from the source PDB to the temporary instances on the destination primary and standby environments to keep the destination datafiles reasonably current. This can be done as many times as necessary until you are ready to perform the migration. The incremental process compares the files known to the temporary instance with the files in the source PDB and determines which files need recovery applied; the recovery work done can therefore differ between the primary environment temporary instance and the standby temporary instance, depending on the states of their respective files. Application activity on the source PDB can continue while the incremental applies are occurring.
The FROM SERVICE clause in the RECOVER command provides the connection to the source CDB (there is no need to connect to the source PDB itself). The clause in the example uses the TNS alias for the source CDB created above. This is the only connection to the source used and required in these commands.
- Primary
RMAN> connect target sys@<temp>
target database Password: connected to target database: <CONT4> (mounted)
RMAN> recover pluggable database <PDB NAME> noredo from service <cont4>;
Starting recover at 05-SEP-13 using channel ORA_DISK_1 using channel ORA_DISK_2 using channel ORA_DISK_3 using channel ORA_DISK_4 using channel ORA_DISK_5 using channel ORA_DISK_6 using channel ORA_DISK_7 using channel ORA_DISK_8 channel ORA_DISK_1: starting incremental datafile backup set restore channel ORA_DISK_1: using network backup set from service cont4 destination for restore of datafile 00042: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197 channel ORA_DISK_2: starting incremental datafile backup set restore channel ORA_DISK_2: using network backup set from service cont4 destination for restore of datafile 00043: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195 channel ORA_DISK_3: starting incremental datafile backup set restore channel ORA_DISK_3: using network backup set from service cont4 destination for restore of datafile 00045: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193 channel ORA_DISK_1: restore complete, elapsed time: 00:00:08 channel ORA_DISK_2: restore complete, elapsed time: 00:00:08 channel ORA_DISK_3: restore complete, elapsed time: 00:00:45 Finished recover at 05-SEP-13
- Standby
RMAN> connect target sys@<tempstby>
target database Password: connected to target database: <CONT4> (mounted)
RMAN> recover pluggable database <PDB NAME> noredo from service <cont4>;
Starting recover at 05-SEP-13 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=770 device type=DISK allocated channel: ORA_DISK_2 channel ORA_DISK_2: SID=834 device type=DISK allocated channel: ORA_DISK_3 channel ORA_DISK_3: SID=898 device type=DISK allocated channel: ORA_DISK_4 channel ORA_DISK_4: SID=961 device type=DISK allocated channel: ORA_DISK_5 channel ORA_DISK_5: SID=1026 device type=DISK allocated channel: ORA_DISK_6 channel ORA_DISK_6: SID=1090 device type=DISK allocated channel: ORA_DISK_7 channel ORA_DISK_7: SID=1154 device type=DISK allocated channel: ORA_DISK_8 channel ORA_DISK_8: SID=1218 device type=DISK channel ORA_DISK_1: starting incremental datafile backup set restore channel ORA_DISK_1: using network backup set from service cont4 destination for restore of datafile 00042: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897 channel ORA_DISK_2: starting incremental datafile backup set restore channel ORA_DISK_2: using network backup set from service cont4 destination for restore of datafile 00043: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897 channel ORA_DISK_3: starting incremental datafile backup set restore channel ORA_DISK_3: using network backup set from service cont4 destination for restore of datafile 00045: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897 channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 channel ORA_DISK_2: restore complete, elapsed time: 00:00:03 channel ORA_DISK_3: restore complete, elapsed time: 00:00:15 Finished recover at 05-SEP-13
- Build the plugin statement to be executed on the destination primary database. Use the SOURCE_FILE_DIRECTORY clause to point to the location where the files were restored to on the destination CDB primary database environment. Note that the plugin process will search for files in the SOURCE_FILE_DIRECTORY that match the information contained in the XML file created by the unplug operation. Use the NOCOPY clause as the files are already in their intended location. The statement can be pre-created outside of the outage window.
create pluggable database <PDB NAME> using '/home/oracle/demo/<PDB NAME>_unplug.xml' source_file_directory= '+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE' nocopy;
If you encounter ORA-00922 on the SOURCE_FILE_DIRECTORY clause, this indicates that you have not applied the prerequisite patches. To use these steps, fixes 16347068, 16822629, 16874123 and 16888264 must be applied on the 12c Oracle Database homes. These fixes will be available in the 12.1.0.1 Patchset Update 2 or later and are available for 12.1.0.1 base as merge patch 17759598.
- Shutdown the source PDB
SQL> alter pluggable database <PDB NAME> close instances=all;
Pluggable database altered.
- Unplug the source PDB and create the manifest XML file
SQL> alter pluggable database <PDB NAME> unplug into '/home/oracle/cont4/<PDB NAME>_unplug.xml';
Pluggable database altered.
- Perform a final incremental apply through the temporary instances on the destination primary and standby environments. The source CDB must be open during this process (the source PDB remains closed).
- Primary
RMAN> connect target sys@<temp>
target database Password: connected to target database: <CONT4> (DBID=1644992445, not open)
RMAN> recover pluggable database <PDB NAME> noredo from service <cont4>;
Starting recover at 05-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=267 device type=DISK allocated channel: ORA_DISK_2 channel ORA_DISK_2: SID=398 device type=DISK allocated channel: ORA_DISK_3 channel ORA_DISK_3: SID=529 device type=DISK allocated channel: ORA_DISK_4 channel ORA_DISK_4: SID=660 device type=DISK allocated channel: ORA_DISK_5 channel ORA_DISK_5: SID=791 device type=DISK allocated channel: ORA_DISK_6 channel ORA_DISK_6: SID=922 device type=DISK allocated channel: ORA_DISK_7 channel ORA_DISK_7: SID=6 device type=DISK allocated channel: ORA_DISK_8 channel ORA_DISK_8: SID=137 device type=DISK channel ORA_DISK_1: starting incremental datafile backup set restore channel ORA_DISK_1: using network backup set from service cont4 destination for restore of datafile 00042: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197 channel ORA_DISK_2: starting incremental datafile backup set restore channel ORA_DISK_2: using network backup set from service cont4 destination for restore of datafile 00043: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195 channel ORA_DISK_3: starting incremental datafile backup set restore channel ORA_DISK_3: using network backup set from service cont4 destination for restore of datafile 00045: +DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193 channel ORA_DISK_1: restore complete, elapsed time: 00:00:08 channel ORA_DISK_2: restore complete, elapsed time: 00:00:08 channel ORA_DISK_3: restore complete, elapsed time: 00:00:45 Finished recover at 05-SEP-13
- Standby
RMAN> connect target sys@<tempstby>
target database Password: connected to target database: <CONT4> (DBID=1644992445, not open)
RMAN> recover pluggable database <PDB NAME> noredo from service <cont4>;
Starting recover at 05-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=770 device type=DISK allocated channel: ORA_DISK_2 channel ORA_DISK_2: SID=834 device type=DISK allocated channel: ORA_DISK_3 channel ORA_DISK_3: SID=898 device type=DISK allocated channel: ORA_DISK_4 channel ORA_DISK_4: SID=961 device type=DISK allocated channel: ORA_DISK_5 channel ORA_DISK_5: SID=1026 device type=DISK allocated channel: ORA_DISK_6 channel ORA_DISK_6: SID=1090 device type=DISK allocated channel: ORA_DISK_7 channel ORA_DISK_7: SID=1154 device type=DISK allocated channel: ORA_DISK_8 channel ORA_DISK_8: SID=1218 device type=DISK channel ORA_DISK_1: starting incremental datafile backup set restore channel ORA_DISK_1: using network backup set from service cont4 destination for restore of datafile 00042: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.499.825347897 channel ORA_DISK_2: starting incremental datafile backup set restore channel ORA_DISK_2: using network backup set from service cont4 destination for restore of datafile 00043: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.518.825347897 channel ORA_DISK_3: starting incremental datafile backup set restore channel ORA_DISK_3: using network backup set from service cont4 destination for restore of datafile 00045: +DATAD1/<CONT3STBY>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.498.825347897 channel ORA_DISK_1: restore complete, elapsed time: 00:00:01 channel ORA_DISK_2: restore complete, elapsed time: 00:00:01 channel ORA_DISK_3: restore complete, elapsed time: 00:00:07 Finished recover at 05-SEP-13
- Shutdown the temporary instances
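For example, on each environment (since the temporary instances will be discarded, an abort is sufficient):
$ export ORACLE_SID=<temp> $ sqlplus / as sysdba
SQL> shutdown abort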
- Copy the manifest file to the destination primary host
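For example, copying the manifest created by the unplug step to the directory referenced by the plugin statement:
[oracle@source]$ scp /home/oracle/cont4/<PDB NAME>_unplug.xml oracle@destination-prim:/home/oracle/demo/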
- If your diskgroup names are different between your destination primary and standby, ensure that you have db_file_name_convert set on the standby specifying only the diskgroup names.
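For example, if the destination primary uses +DATA and the standby uses +DATAD1, the setting on the standby could be made as follows (a sketch; DB_FILE_NAME_CONVERT is a static parameter, so it must be set in the spfile and takes effect on the next restart):
SQL> alter system set db_file_name_convert='+DATA','+DATAD1' scope=spfile sid='*';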
- Plug the PDB into the new CDB using the manifest file created above. As long as the files already exist on the standby prior to the plugin, are in the correct directory (<dgname>/<dbuniqname>/<guid>/DATAFILE) and are current on the destination standby environment, no additional steps are needed for the standby. Media recovery will find the files and continue processing.
SQL> set echo on SQL> @plugin_<PDB NAME> SQL> create pluggable database <PDB NAME> using '/home/oracle/demo/<PDB NAME>_unplug.xml' 2 source_file_directory= '+DATA/<CONT3>/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE' 3 nocopy;
Pluggable database created.
- Since the source was already a PDB, all we need to do is sync the new PDB
SQL> alter pluggable database <PDB NAME> open restricted;
Pluggable database altered.
SQL> alter session set container=<PDB NAME>;
Session altered.
SQL> exec dbms_pdb.sync_pdb();
PL/SQL procedure successfully completed.
- Open the PDB on the destination primary
The PDB will have a STATUS of NEW until the PDB is opened for the first time at the primary database at which time the status will change to NORMAL. This initial open must be performed on the primary. There is a small amount of redo generated by this initial open which will be transported to the standby database and applied via media recovery. Until this redo is applied on the standby to change the status from NEW to NORMAL, the PDB cannot be opened on the standby database.
SQL> alter session set container=cdb$root;
Session altered.
SQL> alter pluggable database <PDB NAME> close;
Pluggable database altered.
SQL> alter pluggable database <PDB NAME> open instances=all;
Pluggable database altered.
- Open the PDB on destination standby. Ensure the status is NORMAL first. If you did not specify the TEMPFILE REUSE clause on the CREATE PLUGGABLE DATABASE statement, add a file to the TEMP tablespace on the standby.
SQL> col pdb_name format a30 SQL> select pdb_name, status from cdb_pdbs;
PDB_NAME STATUS ------------------------------ ------------- PDB$SEED NORMAL STARK NORMAL BARNEY NORMAL MY_PDB NORMAL
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 STARK MOUNTED 4 MY_PDB READ ONLY NO 5 BARNEY MOUNTED SQL> alter pluggable database <PDB NAME> open instances=all;
Pluggable database altered.
SQL> alter session set container=<PDB NAME>;
Session altered.
SQL> alter tablespace temp add tempfile '+DATAD1' size 10g;
Tablespace altered.
- Back up the PDB. It cannot be recovered in the new container database without a backup.
RMAN> backup pluggable database <PDB NAME>;
Starting backup at 06-SEP-13 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=122 instance=cont31 device type=DISK channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00047 name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/soets2.396.825355193 input datafile file number=00046 name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/sysaux.395.825355195 input datafile file number=00045 name=+DATA/CONT3/E5AA35C304D68CBBE043F646F40AEE75/DATAFILE/system.394.825355197 channel ORA_DISK_1: starting piece 1 at 06-SEP-13 channel ORA_DISK_1: finished piece 1 at 06-SEP-13 piece handle=+RECO/CONT3/E5AA35C304D68CBBE043F646F40AEE75/BACKUPSET/2013_09_06/nnndf0_tag20130906t152412_0.525.825434653 tag=TAG20130906T152412 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 Finished backup at 06-SEP-13
Starting Control File and SPFILE Autobackup at 06-SEP-13 piece handle=+RECO/CONT3/AUTOBACKUP/2013_09_06/s_825434668.769.825434669 comment=NONE Finished Control File and SPFILE Autobackup at 06-SEP-13
Data Pump Full Transportable Using Incremental Backups to Migrate a pre-12c non-CDB to 12c PDB
This method allows you to migrate an entire database from an 11.2.0.3 or 11.2.0.4 non-CDB directly to a 12c pluggable database. If your source is on an earlier release, it is highly recommended that you upgrade to one of these releases to take advantage of full transportable export/import, for the following reasons:
- It simplifies metadata movement significantly. With full transportable export/import, you only need a single export/import step (or network import as we will show in the example) to move the metadata (e.g. users, tablespaces, actual data) to the destination database.
- Extended data for Oracle RDBMS options like Spatial and Multimedia are exported/imported as part of the process.
- The number of error messages is significantly reduced as the process recognizes Oracle objects that don't need to be exported or applied. Going forward in later releases, the messages should be reduced even more.
- Full transportable export/import supports moving encrypted tablespaces.
The time required to perform full transportable export/import is highly dependent on the number of objects to be imported into the destination environment and can be considerable. It is recommended to upgrade your database to 12c and use the first method documented in this note; however, full transportable export/import can be used for smaller, departmental-type databases to avoid performing the upgrade. The method requires a staging area on both the source and destination systems for the incremental backups that are taken to help reduce downtime. The staging area can be a local file system, shared storage (e.g. NFS, CFS) or ASM. A PDB must be pre-created in the 12c container database to accept the Data Pump import of the database metadata.
The method described here can be used for same- or cross-endian migrations. For same-endian migrations, it is recommended to upgrade your source database to Oracle 12c and use the steps defined in the 12c Migrate non-CDB and Convert to a PDB using 12c RMAN Active Database Duplication section of this document. For direct migration of pre-12c non-CDBs, full transportable export/import is available for source databases running Oracle RDBMS 11.2.0.3 or later. If your source database is running Oracle 12c and you wish to do full transportable export/import, we recommend using RMAN to perform the file movement. See the Oracle Database Backup and Recovery User's Guide 12c Release 1, Transporting Data Across Platforms chapter, section Cross-Platform Transport of Tablespaces using Inconsistent Backups, for more information.
The datafiles in the full transportable export/import set will be moved using scripts provided in Note 1389592.1. Although the scripts and MOS document discuss cross-platform transport, they can be used on the same platform to provide minimal-downtime file movement. These scripts support active file copy without a staging area to prepare the destination environments for the incremental apply/roll forward. Using this process, the files can be moved with no downtime to the application except for the final incremental and the full transportable export/import steps. When performing full transportable export/import, all user/application tablespaces should be included in the transportable set; do not include SYSTEM, SYSAUX, TEMP or any UNDO tablespace (a query to list the candidates follows).
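The candidate tablespaces can be listed on the source with a query such as the following (CONTENTS = 'PERMANENT' already excludes TEMP and UNDO):
SQL-SOURCE> select tablespace_name from dba_tablespaces where contents = 'PERMANENT' and tablespace_name not in ('SYSTEM','SYSAUX');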
Please review Note 1389592.1 prior to attempting to use this process. The process documented here will use the recommended dbms_file_transfer method. Although the example output shown here may differ from what is displayed by the current version of script, the steps and setup as documented here are correct.
This process also takes advantage of the NETWORK_LINK command line option for Data Pump. This removes the need to perform a Data Pump export as the Data Pump import will connect directly to the source database to retrieve the metadata necessary for the import process.
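For illustration, the eventual import for this approach takes the general shape shown below. This is a sketch only: the transport_datafiles list is built from the files copied in the steps that follow, and the service, link and directory names are the ones created later in this section.
$ impdp system@<PDB NAME2>serv network_link=sourcedb full=y transportable=always version=12 logfile=mydir:full_imp.log transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_324_878299557',...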
The environment for the following example consists of an 11.2.0.3 RAC database (fred) as the source and 12.1.0.1 RAC CDB (cont3) with a RAC Active Data Guard standby database (cont3stby).
The standby database must have the Active Data Guard option and be open read only; otherwise script execution will fail when run against the destination standby site. Oracle RDBMS 12.1.0.2 introduced the capability to defer recovery of PDBs at the standby site: a new clause, STANDBYS=NONE, can be used when creating or plugging a PDB into a container. This clause allows you to defer migration of datafiles to destination standby CDBs. The standby will create the basic structure of the PDB, but all files will be marked OFFLINE/RECOVER. At some future point, you can copy the files to the standby site and ENABLE RECOVERY of the PDB to add the files to the standby and begin providing Data Guard protection. For the process of migrating directly from a pre-12c non-CDB, Oracle recommends that you use the STANDBYS=NONE clause when creating the empty PDB in step 1 to simplify the migration process. If you use the deferred recovery method, you can skip the steps dealing with the physical standby and, after running the Data Pump import, handle migration of the files to the standby database. See document 1916648.1 for more information on usage of STANDBYS=NONE.
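If you take the deferred recovery approach on 12.1.0.2, the empty PDB in step 1 would be created with the clause, for example:
SQL-DESTPRIM> create pluggable database <PDB NAME2> admin user pdbadmin identified by <admin password> roles=(connect,resource) standbys=none;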
- Create an empty PDB (<PDB NAME2>) in the destination CDB (cont3).
If your disk group names are different on your primary database and standby database environments, ensure you have specified DB_FILE_NAME_CONVERT in your spfile. It is sufficient to include just the disk group name and the database unique name in the parameter values. In the following example, the primary database uses a diskgroup called +DATA and the standby uses +DATAD1. The DB_FILE_NAME_CONVERT setting for the standby database would be:
DB_FILE_NAME_CONVERT='+DATA/<CONT3>/','+DATAD1/<CONT3STBY>/'
and for the primary database it would be:
DB_FILE_NAME_CONVERT='+DATAD1/<CONT3STBY>/','+DATA/<CONT3>/'
SQL-DESTPRIM> create pluggable database <PDB NAME2> admin user pdbadmin identified by <admin password> roles=(connect,resource);
Pluggable database created.
SQL-DESTPRIM> alter pluggable database <PDB NAME2> open instances=all;
Pluggable database altered.
- This PDB will automatically be created on cont3stby when the "create pluggable database..." redo is applied via media recovery. Verify it on the standby, then open it there:
SQL-DESTSTBY> show pdbs CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 STARK MOUNTED 4 <PDB NAME2> MOUNTED
SQL-DESTSTBY> alter pluggable database <PDB NAME2> open instances=all;
Pluggable database altered.
- Get the GUID for the new PDB to be used in the file transfer process (used for the DESTDIR and DESTDIRSTBY directory objects created later in this process)
SQL-DESTPRIM> select name, guid from v$pdbs where name='<PDB NAME2>';
NAME GUID
----------------------------- --------------------------------
<PDB NAME2> E909A2CB24B15CBFE0437628890A510D
- Create the following directory object in the SOURCE database. This will tell the transport scripts where to find the source files for copy. Grant access to the "system" user for this directory.
In some situations when files are stored in ASM, the directory structure will be listed in mixed case (e.g. +DATAD1/fred/datafile) in the database. Prior to creating the SRCDIR directory query v$datafile in the SOURCE database and ensure the path specified in the CREATE DIRECTORY SRCDIR statement matches the format of the directory structure of the existing datafiles. For example, if v$datafile shows the location for all of the datafiles as +DATAD1/fred/datafile, the CREATE DIRECTORY statement required would be
SQL-SOURCE> create directory SRCDIR as '+DATAD1/fred/datafile';
In this example environment, v$datafile reported the datafile paths in uppercase, so the directory was actually created as:
SQL-SOURCE> create directory SRCDIR as '+DATAD1/FRED/DATAFILE';
Directory created.
SQL-SOURCE> grant read, write on directory SRCDIR to system;
Grant succeeded.
- Create the following directory objects in the DESTINATION primary root. Note they MUST be created in the destination primary ROOT, as the transport scripts cannot connect directly to the PDB. Grant access to the "system" user for these directories. The DESTDIR ASM directory is where the files will be transported to on the destination primary environment, and DESTDIRSTBY is the ASM directory where the files will be transported to on the destination standby environment. These directories should use the GUID of the destination PDB (<PDB NAME2>).
SQL-DESTPRIM> create directory DESTDIR as '+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE';
Directory created.
SQL-DESTPRIM> create directory DESTDIRSTBY as '+DATAD1/CONT3STBY/E909A2CB24B15CBFE0437628890A510D/DATAFILE';
Directory created.
SQL-DESTPRIM> grant read, write on directory DESTDIR to system;
Grant succeeded.
SQL-DESTPRIM> grant read, write on directory DESTDIRSTBY to system;
Grant succeeded.
- In the DESTINATION PDB on the primary, create a directory to be used for the Data Pump import log files. These must be created in the destination PDB as the Data Pump import process, which connects to the PDB, will use this directory.
SQL-DESTPRIM-PDB> alter session set container=<PDB NAME2>;
Session altered.
SQL-DESTPRIM-PDB> create directory mydir as '/u01/app/oracle/bkup/pdb';
Directory created.
SQL-DESTPRIM-PDB> grant read, write on directory mydir to system;
Grant succeeded.
- Create a TNS alias on the DESTINATION primary and standby sites pointing back to the source non-CDB database. This alias should connect directly to the source host as opposed to using the SCAN address.
SOURCEDB = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = source1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = fred) ) )
- On the DESTINATION primary, create the following database link pointing back to the source database. Create a link in the root for the incremental transport process and create the same link in the PDB for the Data Pump import.
SQL-DESTPRIM> create public database link sourcedb connect to system identified by <password> using 'sourcedb';
Database link created.
SQL-DESTPRIM> select * from dual@sourcedb;
D - X
SQL-DESTPRIM> alter session set container=<PDB NAME2>;
Session altered.
SQL-DESTPRIM-PDB> create public database link sourcedb connect to system identified by <password> using 'sourcedb';
Database link created.
SQL-DESTPRIM-PDB> select * from dual@sourcedb;
D - X
- On the DESTINATION primary environment, create a service for the new PDB just created above and start it
[oracle@destprim1 transport]$ srvctl add serv -d cont3 -s <PDB NAME2>serv -pdb <PDB NAME2> -r cont31,cont32 [oracle@destprim1 transport]$ srvctl start serv -d cont3 -s <PDB NAME2>serv
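The service can be verified with, for example:
[oracle@destprim1 transport]$ srvctl status service -d cont3 -s <PDB NAME2>serv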
- On the SOURCE environment, create a TNS alias connecting to the PDB service just created. This should connect directly to the destination primary host as opposed to using SCAN address.
<PDB NAME2>SERV = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = destprim1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = <PDB NAME2>serv) ) )
- On the SOURCE database, create a database link to the destination primary using the TNS alias created above.
SQL-SOURCE> create public database link <PDB NAME2>serv connect to system identified by <password> using '<PDB NAME2>serv';
Database link created.
SQL-SOURCE> select * from dual@<PDB NAME2>serv;
D - X
- On the DESTINATION primary, create a TNS alias to connect to the PDB service just created. This alias should connect directly to the destination primary host as opposed to using the SCAN address.
<PDB NAME2>SERV = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = destprim1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = <PDB NAME2>serv) ) )
- On the DESTINATION primary, create a database link in the destination PDB using the TNS alias just created pointing to the destination PDB. This will be used by the Data Pump import process to connect directly to the destination PDB.
SQL-DESTPRIM> alter session set container=<PDB NAME2>;
Session altered.
SQL-DESTPRIM> create public database link <PDB NAME2>serv connect to system identified by <password> using '<PDB NAME2>serv';
Database link created.
SQL-DESTPRIM> select * from dual@<PDB NAME2>serv;
D - X
- The XTTS script zip contains a file named xtt.properties that is used by the scripts to describe the characteristics of the environment. Starting on the SOURCE environment, modify the following entries in the xtt.properties file to be used when running the script on the source (a consolidated example of the resulting source-side file follows this list):
- Set tablespaces= to the list of tablespaces you will be migrating. For full transportable export/import, all tablespaces except SYSTEM, SYSAUX, TEMP and any UNDO should be listed.
tablespaces=USERS,SOETS,SOETS2,SOETS3,SOETS4,SOETS5
- Set platformid= to the platform_id you are running on, e.g. 13 for Linux x86 64-bit. This value can be obtained from V$DATABASE.PLATFORM_ID
platformid=13
- Set dfcopydir= to the directory object created on the DESTINATION in Step 5 to hold the datafile copies for the restore. Note this value will be different from that specified on the SOURCE due to changes in diskgroup name (possibly) and db_unique_name (definitely). In this example it is DESTDIR, which is defined as +DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE and includes the GUID of the <PDB NAME2> PDB that was created. This MUST be set when using the -G option of the XTTS scripts so that dbms_file_transfer.put_file knows where to write the files on the DESTINATION side. It is enough to set this for the primary destination at this time; we will manually modify the files generated by the script execution for the destination standby.
dfcopydir=DESTDIR
- Set backupformat= to the ASM directory where incremental backups will be stored on the SOURCE. Before running the xttdriver.pl script, ensure this directory exists in ASM on the source.
backupformat=+RECOD1/fred/TRANSPORT
- Set stageondest= to the ASM directory where the incremental backups will be stored and applied from on the DESTINATION PRIMARY environment. This just needs to be set to some accessible location on the DESTINATION PRIMARY; it will not be used directly by the SOURCE.
stageondest=+RECO
- Set storageondest= to the ASM directory where the datafiles in the full transportable export/import set will be written on the DESTINATION PRIMARY environment. This just needs to be set to some accessible location on the DESTINATION PRIMARY; it will not be used directly by the SOURCE.
storageondest=+DATA
- Set backupondest= to the location where the incremental backups will be written and applied on the DESTINATION PRIMARY environment. On the SOURCE this just needs to be set to some accessible location.
backupondest=+RECO
- Set asm_home= to the Grid Infrastructure home on the SOURCE system
asm_home=/u01/app/12.1.0.2/grid
- Set asm_sid= to the ORACLE_SID of the SOURCE local ASM instance
asm_sid=+ASM1
- Set srcdir= to the name of the directory created on the SOURCE in Step 4 to tell the process where to find the SOURCE files
srcdir=SRCDIR
- Set dstdir= to the directory object created above on the DESTINATION PRIMARY in Step 5 to show where the files in the full transportable export/import will be written to for the DESTINATION PRIMARY. The xttdriver.pl script allows multiple SRCDIR and DSTDIR specifications and the driver script validates settings of SRCDIR locations and DSTDIR locations specified when the script is run. Adding this entry now avoids an error when the driver script is executed.
dstdir=DESTDIR
- Set srclink= to the database link created on the DESTINATION in Step 8 pointing back to the SOURCE database. This will be used to build statements that pull data from the source
srclink=sourcedb
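Putting the entries above together, the source-side xtt.properties for this example would contain:
tablespaces=USERS,SOETS,SOETS2,SOETS3,SOETS4,SOETS5
platformid=13
dfcopydir=DESTDIR
backupformat=+RECOD1/fred/TRANSPORT
stageondest=+RECO
storageondest=+DATA
backupondest=+RECO
asm_home=/u01/app/12.1.0.2/grid
asm_sid=+ASM1
srcdir=SRCDIR
dstdir=DESTDIR
srclink=sourcedb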
- Save the xtt.properties file and copy it from the SOURCE environment to the DESTINATION PRIMARY and STANDBY hosts where the zip file was extracted.
- Modify the following entries in the xtt.properties file on the DESTINATION PRIMARY host
- Set stageondest= to the ASM directory where the incremental backups will be stored on the DESTINATION PRIMARY environment. This directory must be precreated in the ASM environment on the DESTINATION PRIMARY
stageondest=+RECO/CONT3/TRANSPORT
- Set backupondest= to some local filesystem that the process can write to on the DESTINATION. The rollforward writes a file to this location to keep track of progress. This should not be on ASM and the directory should be precreated before running the driver script.
backupondest=/u01/app/oracle/bkup/transport
- Set asm_home= to the Grid Infrastructure home on the DESTINATION primary system
asm_home=/u01/app/12.1.0.2/grid
- Set asm_sid= to the ORACLE_SID of the DESTINATION PRIMARY local ASM instance
asm_sid=+ASM1
- Modify the following entries in the xtt.properties file on the DESTINATION STANDBY host
- Set dfcopydir= to the directory object created on the DESTINATION PRIMARY in Step 5 to hold the datafile copies for the restore to the DESTINATION STANDBY. Note this value will be different from that specified on the primary due to changes in diskgroup name (possibly) and db_unique_name (definitely). In this example it is DESTDIRSTBY, which is defined as +DATAD1/CONT3STBY/E909A2CB24B15CBFE0437628890A510D/DATAFILE and includes the GUID of the <PDB NAME2> PDB that was created. This MUST be done in the DESTINATION STANDBY copy of the xtt.properties file.
dfcopydir=DESTDIRSTBY
- Set stageondest= to the ASM directory where the incremental backups will be stored on the DESTINATION STANDBY environment. This directory must be precreated in the ASM environment on the DESTINATION STANDBY
stageondest=+RECOD1/CONT3STBY/TRANSPORT
- Set backupondest= to some local filesystem that the process can write to on the DESTINATION STANDBY environment. The rollforward writes a file to this location to keep track of progress. This should not be on ASM
backupondest=/u01/app/oracle/bkup/transport
- Set asm_home= to the Grid Infrastructure home on the DESTINATION STANDBY system
asm_home=/u01/app/12.1.0.2/grid
- Set asm_sid= to the ORACLE_SID of the DESTINATION STANDBY local ASM instance
asm_sid=+ASM2
- Set dstdir= to the directory object created above on the DESTINATION PRIMARY in Step 5 to show where the files included in the full transportable export/import will be written to for the DESTINATION STANDBY
dstdir=DESTDIRSTBY
- Verify that there are no Time Zone issues between the SOURCE database objects and the DESTINATION environment. If the DB Time Zones do not match, tables with date/timestamp columns will not be imported. Compare the DBTIMEZONE values in the SOURCE non-CDB and the new DESTINATION PRIMARY PDB. If they do not match, change the setting in the DESTINATION PRIMARY PDB to match that in the SOURCE non-CDB, then bounce the DESTINATION PRIMARY PDB.
- Source
SQL-SOURCE> select dbtimezone from dual;
DBTIME ------ -06:00
- Destination PDB
SQL-DESTPRIM-PDB> select dbtimezone from dual;
DBTIME ------ -04:00
- Modify <PDB NAME2> timezone to match the SOURCE
SQL-DESTPRIM-PDB> alter database set time_zone='-06:00';
Database altered.
SQL-DESTPRIM-PDB> alter session set container=cdb$root;
Session altered.
SQL-DESTPRIM-PDB> alter pluggable database <PDB NAME2> close instances=all;
Pluggable database altered.
SQL-DESTPRIM-PDB> alter pluggable database <PDB NAME2> open instances=all;
Pluggable database altered.
SQL-DESTPRIM-PDB> alter session set container=<PDB NAME2>;
Session altered.
SQL-DESTPRIM-PDB> select dbtimezone from dual;
DBTIME ------ -06:00
- Use the XTTS scripts to create incremental backups on the SOURCE and roll the cloned files forward on the DESTINATION environments.
The tablespaces to be retrieved from the SOURCE non-CDB MUST be in READ WRITE mode for the initial copy; they cannot be open read only. The script performs an active copy of the files over the network from the SOURCE non-CDB to the DESTINATION environment, using DBMS_FILE_TRANSFER.GET_FILE and the settings in the xtt.properties files. The clone of the files must be performed to both the DESTINATION PRIMARY and the DESTINATION STANDBY site. This process requires staging areas for the incremental backups on all environments. It is possible to use shared storage such as NFS to reduce space utilization or avoid the need to copy files from host to host. The staging area locations are determined by settings in the xtt.properties files on each host. The incremental creation and roll forward can be done as many times as necessary until you are ready to perform the import. These incrementals can be performed with all of the tablespaces in READ WRITE, so applications may continue processing while the incrementals are created and the roll forward is performed; however, the final incremental apply prior to the Data Pump import MUST be done with the SOURCE tablespaces READ ONLY. These steps are also documented in Note 1389592.1 as part of Phase 3.
- Perform initial setup on one node each of the SOURCE environment, the DESTINATION PRIMARY environment and the DESTINATION STANDBY environment by setting the TMPDIR environment variable to a working directory
$ export TMPDIR=/home/oracle/transport/tmp
- Ensure ALL tablespaces on the SOURCE that are included in the xtt.properties files are open READ WRITE
SQL-SOURCE> select name, enabled from v$datafile where enabled != 'READ WRITE';
no rows selected
- On the SOURCE, run xttdriver.pl -S to create the files to be pushed to the DESTINATION sites for their local script execution.
[oracle@source1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -S Prepare source for Tablespaces: 'USERS','SOETS' +RECO xttpreparesrc.sql for 'USERS','SOETS' started at Fri Oct 18 14:42:56 2013 xttpreparesrc.sql for ended at Fri Oct 18 14:42:56 2013 Prepare source for Tablespaces: 'SOETS2','SOETS3' +RECO xttpreparesrc.sql for 'SOETS2','SOETS3' started at Fri Oct 18 14:42:56 2013 xttpreparesrc.sql for ended at Fri Oct 18 14:42:56 2013 Prepare source for Tablespaces: 'SOETS4','SOETS5' +RECO xttpreparesrc.sql for 'SOETS4','SOETS5' started at Fri Oct 18 14:42:56 2013 xttpreparesrc.sql for ended at Fri Oct 18 14:42:56 2013
- Copy the getfile.sql and xttnewdatafiles.txt from the $TMPDIR location on the SOURCE to the $TMPDIR location on both the DESTINATION PRIMARY and the DESTINATION STANDBY
[oracle@source1 transport]$ cat scp.sh scp $TMPDIR/getfile.sql destprim1:~/transport/tmp scp $TMPDIR/getfile.sql deststby1:~/transport/tmp scp $TMPDIR/xttnewdatafiles.txt deststby1:~/transport/tmp scp $TMPDIR/xttnewdatafiles.txt destprim1:~/transport/tmp
[oracle@source1 transport]$ ./scp.sh
getfile.sql 100% 2797 2.7KB/s 00:00 getfile.sql 100% 2797 2.7KB/s 00:00 xttnewdatafiles.txt 100% 243 0.2KB/s 00:00 xttnewdatafiles.txt 100% 243 0.2KB/s 00:00
- On the DESTINATION STANDBY, modify each entry in the getfile.sql and xttnewdatafiles.txt files to use the DESTDIRSTBY directory object created above. This ensures the files will be located in the correct ASM diskgroup/directory structure for the DESTINATION STANDBY (a scripted approach is sketched after the examples below)
getfile.sql:
0,SRCDIR,soets1.319.878309815,DESTDIRSTBY,soets1_319_878309815
xttnewdatafiles.txt:
::USERS
5,DESTDIRSTBY/users_324_878299557
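A sketch of scripting this substitution with sed on the DESTINATION STANDBY (assuming the generated files reference DESTDIR, as produced for the primary):
[oracle@deststby1 transport]$ sed -i 's/,DESTDIR,/,DESTDIRSTBY,/' $TMPDIR/getfile.sql
[oracle@deststby1 transport]$ sed -i 's/,DESTDIR\//,DESTDIRSTBY\//' $TMPDIR/xttnewdatafiles.txt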
- On the DESTINATION PRIMARY and DESTINATION STANDBY, run $ORACLE_HOME/perl/bin/perl xttdriver.pl -G to get the files from the source. This can take some time depending on file size, number of files and distance between SOURCE and DESTINATION sites. Files will be retrieved across the network and stored in the DESTDIR/DESTDIRSTBY directory created above. After executing the script, ensure that the xttnewdatafiles.txt file has the correct path inserted into the file name for each file. This should match the path for either DESTDIR or DESTDIRSTBY as appropriate. If the values are incorrect, the files may have been put in an incorrect location and later steps will fail. This step requires that the destination database be open so it can query ALL_DIRECTORIES to match the directory object name with the directory object path created in Step 5.
- Primary - Note the files are being written to the diskgroup, DB_UNIQUE_NAME and PDB GUID of the DESTINATION PRIMARY
[oracle@destprim1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
-------------------------------------------------------------------- Parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Getting datafiles from source --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets1.319.878309815_0.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets2.268.878309843_1.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets3.356.878309865_2.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets4.354.878309885_3.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets5.384.878309905_4.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_users.324.878299557_5.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Completed getting datafiles from source --------------------------------------------------------------------
- Standby - Note the files are being written to the diskgroup, DB_UNIQUE_NAME and PDB GUID of the DESTINATION STANDBY
[oracle@deststby1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
-------------------------------------------------------------------- Parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Getting datafiles from source --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets1.319.878309815_0.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets2.268.878309843_1.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets3.356.878309865_2.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets4.354.878309885_3.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_soets5.384.878309905_4.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Executing getfile for getfile_srcdir_users.324.878299557_5.sql --------------------------------------------------------------------
-------------------------------------------------------------------- Completed getting datafiles from source --------------------------------------------------------------------
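Once both transfers complete, a quick listing confirms the files landed under the expected path; a sketch (the diskgroup, container name and PDB GUID are taken from this example and will differ in your environment):

[oracle@destprim1 transport]$ asmcmd ls +DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/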
- Repeat the following steps as many times as necessary until you are ready to complete the migration. These steps are also described in note 1389592.1 (Phase 3).
- Run incremental create on the SOURCE
[oracle@source1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i
-------------------------------------------------------------------- Parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Backup incremental --------------------------------------------------------------------
Prepare newscn for Tablespaces: 'USERS','SOETS'
Prepare newscn for Tablespaces: 'SOETS2','SOETS3'
Prepare newscn for Tablespaces: 'SOETS4','SOETS5'
rman target / cmdfile /home/oracle/transport/tmp/rmanincr.cmd
Recovery Manager: Release 11.2.0.3.0 - Production on Fri Oct 18 15:33:34 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: FRED (DBID=1053879784)
RMAN> set nocfau;
2> host 'echo ts::USERS';
3> backup incremental from scn 34888639
4>   tag tts_incr_update tablespace 'USERS' format
5>   '+RECOD1/fred/TRANSPORT/%U';
6> set nocfau;
7> host 'echo ts::SOETS';
8> backup incremental from scn 34888639
9>   tag tts_incr_update tablespace 'SOETS' format
10>   '+RECOD1/fred/TRANSPORT/%U';
11> set nocfau;
12> host 'echo ts::SOETS2';
13> backup incremental from scn 34888639
14>   tag tts_incr_update tablespace 'SOETS2' format
15>   '+RECOD1/fred/TRANSPORT/%U';
16> set nocfau;
17> host 'echo ts::SOETS3';
18> backup incremental from scn 34888639
19>   tag tts_incr_update tablespace 'SOETS3' format
20>   '+RECOD1/fred/TRANSPORT/%U';
21> set nocfau;
22> host 'echo ts::SOETS4';
23> backup incremental from scn 34888639
24>   tag tts_incr_update tablespace 'SOETS4' format
25>   '+RECOD1/fred/TRANSPORT/%U';
26> set nocfau;
27> host 'echo ts::SOETS5';
28> backup incremental from scn 34888639
29>   tag tts_incr_update tablespace 'SOETS5' format
30>   '+RECOD1/fred/TRANSPORT/%U';
31>
executing command: SET NOCFAU
using target database control file instead of recovery catalog
ts::USERS
host command complete
Starting backup at 18-OCT-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=563 instance=fred1 device type=DISK
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATAD1/fred/datafile/users.409.820573579
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/57omnl6f_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/58omnl6h_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-OCT-13
executing command: SET NOCFAU
ts::SOETS
host command complete
Starting backup at 18-OCT-13
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=+DATAD1/fred/datafile/soets.411.820575221
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/59omnl6k_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/5aomnl6r_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-OCT-13
executing command: SET NOCFAU
ts::SOETS2
host command complete
<SNIP>
ts::SOETS5
host command complete
Starting backup at 18-OCT-13
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=+DATAD1/fred/datafile/soets5.536.825789101
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/5homnl8b_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
using channel ORA_DISK_1
backup will be obsolete on date 25-OCT-13
archived logs will not be kept or backed up
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 18-OCT-13
channel ORA_DISK_1: finished piece 1 at 18-OCT-13
piece handle=+RECOD1/fred/transport/5iomnl8q_1_1 tag=TTS_INCR_UPDATE comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 18-OCT-13
Recovery Manager complete.
- Copy the xttplan.txt and tsbkupmap.txt files from the $TMPDIR on the SOURCE to the $TMPDIR on the DESTINATION PRIMARY and DESTINATION STANDBY
[oracle@source1 transport]$ cat scp2.sh
scp $TMPDIR/xttplan.txt destprim1:~/transport/tmp
scp $TMPDIR/xttplan.txt deststby1:~/transport/tmp
scp $TMPDIR/tsbkupmap.txt deststby1:~/transport/tmp
scp $TMPDIR/tsbkupmap.txt destprim1:~/transport/tmp
[oracle@source1 transport]$ ./scp2.sh
xttplan.txt      100%  125   0.1KB/s   00:00
xttplan.txt      100%  125   0.1KB/s   00:00
tsbkupmap.txt    100%  137   0.1KB/s   00:00
tsbkupmap.txt    100%  137   0.1KB/s   00:00
- Copy the incremental backupsets from the SOURCE to the DESTINATION PRIMARY and DESTINATION STANDBY. The following example gathers all of the incremental backup pieces listed in incrbackups.txt and pushes them across. Note that it will ONLY work if the ASM versions are the same; otherwise you will have to copy the files to a filesystem and push them across (a staging sketch follows the example output below). The following code copies from ASM instance to ASM instance; it was put into shell scripts with the appropriate destination host names and directory structure for the primary and standby environments. Run the scripts from your SOURCE system with the environment set to point to the GRID HOME for the following example to work.
for i in `cat ./tmp/incrbackups.txt`; do
  asmcmd cp $i sys/<password>@destprim1.+ASM1:+RECO/CONT3/TRANSPORT/
done
[oracle@sourcehost1 transport]$ ./cp_standby_asm.sh
copying +RECOD1/fred/TRANSPORT/5fomnl7p_1_1 -> +RECOD1/<CONT3STBY>/TRANSPORT/5fomnl7p_1_1
copying +RECOD1/fred/TRANSPORT/57omnl6f_1_1 -> +RECOD1/<CONT3STBY>/TRANSPORT/57omnl6f_1_1
copying +RECOD1/fred/TRANSPORT/5homnl8b_1_1 -> +RECOD1/<CONT3STBY>/TRANSPORT/5homnl8b_1_1
copying +RECOD1/fred/TRANSPORT/5domnl78_1_1 -> +RECOD1/<CONT3STBY>/TRANSPORT/5domnl78_1_1
copying +RECOD1/fred/TRANSPORT/5bomnl6u_1_1 -> +RECOD1/<CONT3STBY>/TRANSPORT/5bomnl6u_1_1
copying +RECOD1/fred/TRANSPORT/59omnl6k_1_1 -> +RECOD1/<CONT3STBY>/TRANSPORT/59omnl6k_1_1
[oracle@sourcehost1 transport]$ ./cp_primary_asm.sh
copying +RECOD1/fred/TRANSPORT/5fomnl7p_1_1 -> nshexdak4db01:+RECO/<CONT3>/TRANSPORT/5fomnl7p_1_1
copying +RECOD1/fred/TRANSPORT/57omnl6f_1_1 -> nshexdak4db01:+RECO/<CONT3>/TRANSPORT/57omnl6f_1_1
copying +RECOD1/fred/TRANSPORT/5homnl8b_1_1 -> nshexdak4db01:+RECO/<CONT3>/TRANSPORT/5homnl8b_1_1
copying +RECOD1/fred/TRANSPORT/5domnl78_1_1 -> nshexdak4db01:+RECO/<CONT3>/TRANSPORT/5domnl78_1_1
copying +RECOD1/fred/TRANSPORT/5bomnl6u_1_1 -> nshexdak4db01:+RECO/<CONT3>/TRANSPORT/5bomnl6u_1_1
copying +RECOD1/fred/TRANSPORT/59omnl6k_1_1 -> nshexdak4db01:+RECO/<CONT3>/TRANSPORT/59omnl6k_1_1
Once the files have been copied to the DESTINATION sites, they can be deleted from the SOURCE site to save space.
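If the ASM versions differ, the pieces can be staged through a local filesystem instead. A minimal sketch, assuming a staging directory /tmp/xtt_stage exists on both hosts (the path is illustrative); run each half with the environment set to the appropriate GRID HOME:

# On the SOURCE: copy each backup piece out of ASM, then push it across
for i in `cat ./tmp/incrbackups.txt`; do
  asmcmd cp $i /tmp/xtt_stage/
done
scp /tmp/xtt_stage/* destprim1:/tmp/xtt_stage/

# On the DESTINATION: copy the staged pieces back into ASM
for f in /tmp/xtt_stage/*; do
  asmcmd cp $f +RECO/CONT3/TRANSPORT/
done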
- Perform a rollforward to apply the changes to the copies on the DESTINATION sites. The environment should be set to point to the destination database $ORACLE_HOME.
The rollforward process requires that the instance being used be started in NOMOUNT mode. You can use your destination instance or, to avoid downtime to other PDBs and their applications, use a temporary instance. In the following example a temporary instance is created on the destination primary and standby environments and used to perform the rollforward.
The ASMCMD-8001 errors below occur because no conversion files are created (the source and destination platforms are the same), so there are no files to be removed. These errors will appear for each datafile included in the migration set and can be ignored.
- DESTINATION PRIMARY
[oracle@destprim1 transport]$ export ORACLE_SID=xtt
[oracle@destprim1 transport]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=xtt
compatible=12.1.0.2.0
EOF
[oracle@destprim1 transport]$ sqlplus / as sysdba
SQL> startup nomount
SQL> exit
[oracle@destprim1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r -L
-------------------------------------------------------------------- Parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Start rollforward --------------------------------------------------------------------
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
-------------------------------------------------------------------- End of rollforward phase --------------------------------------------------------------------
- DESTINATION STANDBY
[oracle@deststby1 transport]$ export ORACLE_SID=xtt
[oracle@deststby1 transport]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
db_name=xtt
compatible=12.1.0.2.0
EOF
[oracle@deststby1 transport]$ sqlplus / as sysdba
SQL> startup nomount
SQL> exit
[oracle@deststby1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r -L
-------------------------------------------------------------------- Parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done parsing properties --------------------------------------------------------------------
-------------------------------------------------------------------- Checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Done checking properties --------------------------------------------------------------------
-------------------------------------------------------------------- Start rollforward --------------------------------------------------------------------
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted
-------------------------------------------------------------------- End of rollforward phase --------------------------------------------------------------------
- On the SOURCE system run the script to determine the new FROM_SCN for the next incremental backup execution.
[oracle@sourcehost1 transport]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s
- When ready to complete the process do the following:
- Perform the final incremental creation on the SOURCE and apply the incrementals to the DESTINATION environments.
- Put all tablespaces to be transported on the SOURCE non-CDB into READ ONLY mode
SQL-SOURCE> alter tablespace soets read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets2 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets3 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets4 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace soets5 read only;
Tablespace altered.
SQL-SOURCE> alter tablespace users read only;
Tablespace altered.
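When many tablespaces are involved, the READ ONLY statements can be generated rather than typed; a sketch (the tablespace list matches this example):

SQL-SOURCE> select 'alter tablespace ' || tablespace_name || ' read only;'
            from dba_tablespaces
            where tablespace_name in ('USERS','SOETS','SOETS2','SOETS3','SOETS4','SOETS5');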
- Perform the final incremental and roll forward using the SAME STEPS AS ABOVE:
- Create the incremental backups on the SOURCE
- Copy the xttplan.txt and tsbkupmap.txt from the SOURCE to the DESTINATION sites
- Copy the incremental backups to the DESTINATION sites (if not using shared storage)
- Run the rollforward on the DESTINATION sites. There is no need to run the step to determine a new FROM_SCN.
- Shut down the temporary instances
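For example, on each DESTINATION site (a sketch; the temporary instance name xtt matches the setup above):

[oracle@destprim1 transport]$ export ORACLE_SID=xtt
[oracle@destprim1 transport]$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit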
- Perform the full transportable export/import on the DESTINATION, connecting to the PDB and using the database link created in Step 8 that points to the SOURCE as the NETWORK_LINK value, and the directory created in the DESTINATION PDB in Step 6 for the Data Pump logfile. Specify all the datafiles that have been copied and rolled forward in the TRANSPORT_DATAFILES clause; these files MUST be the file names on the DESTINATION PRIMARY site. Use FULL=Y to perform the full transportable export/import. VERSION=12 is only required when the source is 11.2.0.3 or 11.2.0.4; for 12c source databases it can be omitted.
[oracle@destprim1 transport]$ impdp \"sys/welcome1@<PDB NAME2>serv as sysdba\" \
network_link=sourcedb version=12 full=y transportable=always metrics=y \
exclude=statistics logfile=mydir:xttspdb.log \
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_409_820573579','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets2_539_825789091','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets3_538_825789093','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets4_537_825789097','+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets5_536_825789101'
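The same operation can also be driven from a Data Pump parameter file for readability; a sketch (the file name xtts_imp.par is illustrative, the values are taken from the example above, and TRANSPORT_DATAFILES may be repeated in a parameter file):

[oracle@destprim1 transport]$ cat xtts_imp.par
network_link=sourcedb
version=12
full=y
transportable=always
metrics=y
exclude=statistics
logfile=mydir:xttspdb.log
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_409_820573579'
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221'
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets2_539_825789091'
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets3_538_825789093'
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets4_537_825789097'
transport_datafiles='+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets5_536_825789101'

[oracle@destprim1 transport]$ impdp \"sys/welcome1@<PDB NAME2>serv as sysdba\" parfile=xtts_imp.par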
Some errors will be seen (e.g. a failed attempt to create the undo tablespace, some failures for APEX object creation, errors for ORDDATA objects that already exist), but these can be ignored.
- Maintain the DESTINATION STANDBY by renaming the datafiles as they are encountered by media recovery.
As discussed earlier, the file names and paths on the DESTINATION STANDBY will not match those of the DESTINATION PRIMARY. As the redo for the files being added is applied by media recovery on the DESTINATION STANDBY, the media recovery process (MRP) will terminate with a file-not-found condition.
As each tablespace is encountered, you must manually rename the files in the DESTINATION STANDBY controlfile and restart the MRP process. Continue this rename/restart-MRP loop until all tablespaces have been processed. The file names that media recovery attempted to assign can be found by querying V$DATAFILE. If multiple files are being added for a single tablespace, ensure you rename them all correctly.
SQL-DESTSTBY> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATAD1/<CONT3STBY>/DATAFILE/system.338.819017389
+DATAD1/<CONT3STBY>/DFEF7419D8C7245AE0437628890A96C3/DATAFILE/system.335.819017397
+DATAD1/<CONT3STBY>/DATAFILE/sysaux.379.819017389
+DATAD1/<CONT3STBY>/DFEF7419D8C7245AE0437628890A96C3/DATAFILE/sysaux.395.819017397
+DATAD1/<CONT3STBY>/DATAFILE/undotbs1.337.819017389
+DATAD1/<CONT3STBY>/DATAFILE/undotbs2.389.819017389
+DATAD1/<CONT3STBY>/DATAFILE/users.392.819017389
+DATAD1/<CONT3STBY>/DFFBB8557B625421E0437628890A461D/DATAFILE/system.427.819029243
+DATAD1/<CONT3STBY>/DFFBB8557B625421E0437628890A461D/DATAFILE/sysaux.426.819029179
+DATAD1/<CONT3STBY>/DFFBB8557B625421E0437628890A461D/DATAFILE/soets1.425.819028813
+DATAD1/<CONT3STBY>/E7C9ABBF31FC0FF3E0437628890A5753/DATAFILE/system.511.827767375
+DATAD1/<CONT3STBY>/E7C9ABBF31FC0FF3E0437628890A5753/DATAFILE/sysaux.528.827767377
+DATAD1/cont3/E7C9ABBF31FC0FF3E0437628890A5753/DATAFILE/soets_411_820575221
/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00059
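To isolate just the files that need renaming, the listing can be filtered; a sketch (note that files created with the DESTINATION PRIMARY's path, such as the +DATAD1/cont3 entry above, must also be renamed even though they are not UNNAMED):

SQL-DESTSTBY> select file#, name from v$datafile where name like '%UNNAMED%';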
To be able to rename the datafiles, you must first set standby_file_management to manual:
SQL-DESTSTBY> alter system set standby_file_management=manual;
System altered.
SQL-DESTSTBY> alter database rename file '+DATAD1/cont3/e909a2cb24b15cbfe0437628890a510d/datafile/soets_411_820575221' to '+DATAD1/CONT3STBY/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00067' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets2_539_825789091';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00068' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets3_538_825789093';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00069' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets4_537_825789097';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00070' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/soets5_536_825789101';
Database altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
SQL-DESTSTBY> alter database rename file '/u01/app/oracle/product/12.1.0/fkhome_1/dbs/UNNAMED00071' to '+DATAD1/CONT3STBY/e909a2cb24b15cbfe0437628890a510d/DATAFILE/users_409_820573579';
Database altered.
SQL-DESTSTBY> alter system set standby_file_management=auto;
System altered.
SQL-DESTSTBY> recover managed standby database using current logfile disconnect;
Media recovery complete.
- Perform a logical check of the application.
- Backup the PDB.
[oracle@destprim1 transport]$ rman target /
Recovery Manager: Release 12.1.0.1.0 - Production on Mon Oct 21 13:09:37 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.
connected to target database: CONT3 (DBID=1783201232)
RMAN> backup as compressed backupset pluggable database <PDB NAME2>;
Starting backup at 21-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=211 instance=cont31 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00066 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets_411_820575221
input datafile file number=00067 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets2_539_825789091
input datafile file number=00068 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets3_538_825789093
input datafile file number=00069 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets4_537_825789097
input datafile file number=00070 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/soets5_536_825789101
input datafile file number=00065 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/sysaux.354.829148807
input datafile file number=00064 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/system.391.829148807
input datafile file number=00071 name=+DATA/CONT3/E909A2CB24B15CBFE0437628890A510D/DATAFILE/users_409_820573579
channel ORA_DISK_1: starting piece 1 at 21-OCT-13 1021T131147 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:08:45
Finished backup at 21-OCT-13
Starting Control File and SPFILE Autobackup at 21-OCT-13
piece handle=+RECO/CONT3/AUTOBACKUP/2013_10_21/s_829401633.479.829401633 comment=NONE
Finished Control File and SPFILE Autobackup at 21-OCT-13
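The backup contents can be confirmed afterwards; a sketch using standard RMAN syntax:

RMAN> list backup of pluggable database <PDB NAME2> summary;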