Friday, May 24, 2019

Exact Steps To Migrate ASM Diskgroups To Another SAN/Disk-Array/DAS/etc Without Downtime. (Doc ID 837308.1)

In this Document

Goal
Solution
References

Applies to:

Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Cloud Exadata Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

Goal


The present document explains in detail the exact steps to migrate ASM diskgroups from one SAN/Disk-Array/DAS/etc. to another SAN/Disk-Array/DAS/etc. without downtime. This procedure also works for diskgroups hosting the OCR, voting files, and the ASM spfile.
Note: These steps are applicable to External, Normal & High redundancy diskgroups.


Solution

If you plan to replace the current disks associated with your diskgroups with disks from a new storage, this operation can be accomplished without any downtime by following the steps below:

1) Back up all your databases and validate the backup (this is always required to protect your data).

2) Add the new path (the new disks from the new storage) to your asm_diskstring so the new disks can be recognized by ASM:

Example:
SQL> alter system set asm_diskstring = '/dev/emcpowerc*', '/dev/emcpowerh*';

Where '/dev/emcpowerc*' matches the current disks and '/dev/emcpowerh*' matches the new disks.
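
In a RAC configuration, you can quickly confirm the new discovery string is in effect on every ASM instance (a minimal sketch, run from any node while connected to ASM):

SQL> select inst_id, value from gv$parameter where name = 'asm_diskstring';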

3) Confirm that the new disks are being detected by ASM:
SQL> select path from v$asm_disk;
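
If the new storage presents many devices, it can help to list only the disks that are not yet members of any diskgroup (a minimal sketch; candidate disks report GROUP_NUMBER = 0):

SQL> select path, header_status, os_mb from v$asm_disk where group_number = 0;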

4) Validate all the new disks as described in the following document:
 
How To Add a New Disk(s) to An Existing Diskgroup on RAC Cluster or Standalone ASM Configuration (Best Practices). (Doc ID 557348.1)
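
As a quick low-level sanity check (not a substitute for the full checklist in that note), you can verify that each new device is readable by the Grid Infrastructure owner; the device name below is only an illustration:

$ dd if=/dev/emcpowerh1 of=/dev/null bs=1M count=10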
 
5) Add the new disks to your desired diskgroup:
SQL> alter diskgroup <diskgroup name> add disk
'<new disk 1>',
'<new disk 2>',
'<new disk 3>',
'<new disk 4>',
.
.
.
'<new disk N>' rebalance power <#>;
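
For example, assuming a diskgroup named DATA and two hypothetical new devices from the new storage:

SQL> alter diskgroup DATA add disk
'/dev/emcpowerh1',
'/dev/emcpowerh2' rebalance power 8;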


6) Then wait until the rebalance operation completes:
SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;
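
While the rebalance is running, the estimated time remaining can also be monitored (a minimal sketch):

SQL> select inst_id, group_number, operation, state, power, est_minutes from gv$asm_operation;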

7) Finally, remove the old disks:
SQL> alter diskgroup <diskgroup name> drop disk
<disk name A>,
<disk name B>,
<disk name D>,
<disk name E>,
.
.
.
<disk name X>  rebalance power <#>;
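
Note that the drop disk command takes the ASM disk name (the NAME column in v$asm_disk), which is not necessarily the same as the device path. A minimal sketch to map disk names to their paths before dropping, assuming a diskgroup named DATA:

SQL> select d.name, d.path
     from v$asm_disk d, v$asm_diskgroup g
     where d.group_number = g.group_number
     and g.name = 'DATA';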

8) Then wait until the rebalance operation completes:
SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;


9) Done, your ASM diskgroups and database have been migrated to the new storage.

Note: Alternatively, we can execute the add disk and drop disk statements in one operation; that way only one rebalance operation is started, as follows:
SQL> alter diskgroup <diskgroup name>
add disk '<new device physical name 1>', .., '<new device physical name N>'
drop disk <old disk logical name 1>, <old disk logical name 2>, ..,<old disk logical name N>
rebalance power <#>;

This is more efficient than running the add disk and drop disk statements as separate commands, because only one rebalance is performed.

Note 1: On 10g, if something goes wrong while a disk is being expelled (e.g. the operation hangs), ASM will not restart the rebalance automatically (this was enhanced in 11g and 12c). Therefore, on 10g you need to start a manual rebalance operation to resume the diskgroup rebalance and expel the disk(s), as follows:
SQL> alter diskgroup <diskgroup name> rebalance power 11;
  

 
Note 2: Disks from the old SAN/Disk-Array/DAS/etc. are finally expelled from the diskgroup(s) once the rebalance operation (from the drop operation) completes and HEADER_STATUS = FORMER is reported through the v$asm_disk view.
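
A quick way to confirm this (a minimal sketch; the discovery path is illustrative and should match your old disks):

SQL> select path, header_status from v$asm_disk where path like '/dev/emcpowerc%';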
 


Exact Steps to Migrate ASM Diskgroups to Another SAN/Disk-Array/DAS/Etc without Downtime (When ASMLIB Devices Are Involved) (Doc ID 1918350.1)

In this Document

Goal
Solution
References

Applies to:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 12.1.0.2 [Release 10.2 to 12.1]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Information in this document applies to any platform.

Goal


The present document explains in detail the exact steps to migrate ASM diskgroups (using ASMLIB devices) from one SAN/Disk-Array/DAS/etc. to another SAN/Disk-Array/DAS/etc. without downtime. This procedure also works for diskgroups hosting the OCR, voting files, and the ASM spfile.

Note: These steps are applicable to External, Normal & High redundancy diskgroups.

Solution


If you need to migrate the current ASMLIB disks associated with your diskgroups to a new storage, you can perform this operation without any downtime by following the steps below:

1) Back up all your databases and validate the backup (this is always required to protect your data).


2) Create new ASMLIB devices on the new storage physical disks as described in the following document/demo:
White Paper: ASMLIB Installation & Configuration On MultiPath Mapper Devices (Step by Step Demo) On RAC Or Standalone Configurations. (Doc ID 1594584.1)

Example:

2.1) Create the new ASMLIB disks using the “/etc/init.d/oracleasm createdisk” command as root OS user:
 

# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_1    /dev/mapper/mpathbp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_2    /dev/mapper/mpathcp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_3    /dev/mapper/mpathdp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_4    /dev/mapper/mpathep1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_5    /dev/mapper/mpathfp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_6    /dev/mapper/mpathgp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_7    /dev/mapper/mpathhp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_8    /dev/mapper/mpathip1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_9    /dev/mapper/mpathjp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_10   /dev/mapper/mpathkp1
 
2.2) Scan the new ASMLIB disks on all the other RAC nodes (on Standalone configurations the disks are implicitly scanned during the ASMLIB disk creation) as follows, as the root OS user:


# /etc/init.d/oracleasm    scandisks
 
2.3) Make sure the new ASMLIB disks and the old ASMLIB disks are present on all the RAC cluster nodes (or on the Standalone configuration) as follows:
 

# /etc/init.d/oracleasm  listdisks

ASMDISK_NEW_SAN_1
ASMDISK_NEW_SAN_2
ASMDISK_NEW_SAN_3
ASMDISK_NEW_SAN_4
ASMDISK_NEW_SAN_5
ASMDISK_NEW_SAN_6
ASMDISK_NEW_SAN_7
ASMDISK_NEW_SAN_8
ASMDISK_NEW_SAN_9
ASMDISK_NEW_SAN_10

ASMDISK_OLD_SAN_1
ASMDISK_OLD_SAN_2
ASMDISK_OLD_SAN_3
ASMDISK_OLD_SAN_4
ASMDISK_OLD_SAN_5
ASMDISK_OLD_SAN_6
ASMDISK_OLD_SAN_7
ASMDISK_OLD_SAN_8
ASMDISK_OLD_SAN_9
ASMDISK_OLD_SAN_10
  
# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]

Discovered disk: ORCL:ASMDISK_NEW_SAN_1 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_2 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_3 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_4 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_5 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_6 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_7 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_8 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_9 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_10 [40017852 blocks (20489140224 bytes), maxio 512]


Discovered disk: ORCL:ASMDISK_OLD_SAN_1 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_2 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_3 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_4 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_5 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_6 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_7 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_8 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_9 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_10 [40017852 blocks (20489140224 bytes), maxio 512]
  

3) Verify the ASM discovery string (“ASM_DISKSTRING”) is correctly pointing to the ASMLIB devices (in all the ASM instances) as follows:

3.1) RAC Cluster configurations:

+ASM1 instance:
 

[grid@dbaasm ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter  ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
 
+ASM2 instance:
 

[grid@dbaasm ~]$ . oraenv
ORACLE_SID = [+ASM2] ? +ASM2
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter  ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
 
3.2) Standalone configurations:

+ASM instance:


[grid@dbaasm ~]$ . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter  ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
  

4) Then confirm that the new ASMLIB disks and old ASMLIB disks are being detected by ASM as follows:
 

ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option


SQL> select path from v$asm_disk;

PATH
--------------------------------------------------------------------------------
ORCL:ASMDISK_NEW_SAN_1
ORCL:ASMDISK_NEW_SAN_2
ORCL:ASMDISK_NEW_SAN_3
ORCL:ASMDISK_NEW_SAN_4
ORCL:ASMDISK_NEW_SAN_5
ORCL:ASMDISK_NEW_SAN_6
ORCL:ASMDISK_NEW_SAN_7
ORCL:ASMDISK_NEW_SAN_8
ORCL:ASMDISK_NEW_SAN_9
ORCL:ASMDISK_NEW_SAN_10

PATH
--------------------------------------------------------------------------------
ORCL:ASMDISK_OLD_SAN_1
ORCL:ASMDISK_OLD_SAN_2
ORCL:ASMDISK_OLD_SAN_3
ORCL:ASMDISK_OLD_SAN_4
ORCL:ASMDISK_OLD_SAN_5
ORCL:ASMDISK_OLD_SAN_6
ORCL:ASMDISK_OLD_SAN_7
ORCL:ASMDISK_OLD_SAN_8
ORCL:ASMDISK_OLD_SAN_9
ORCL:ASMDISK_OLD_SAN_10

20 rows selected.
 
5) Validate all the new disks as described in the following document:
 
How To Add a New Disk(s) to An Existing Diskgroup on RAC Cluster or Standalone ASM Configuration (Best Practices). (Doc ID 557348.1)
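
A quick way to confirm the new ASMLIB disks are visible to ASM with the expected size and a clean header (a minimal sketch using the disk names from the example above):

SQL> select path, header_status, os_mb from v$asm_disk where path like 'ORCL:ASMDISK_NEW_SAN%';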
  

6) Add the new disks to your desired diskgroup as follows:

SQL> alter diskgroup <diskgroup name> add disk
'<new disk 1>',
'<new disk 2>',
'<new disk 3>',
'<new disk 4>',
.
.
.
'<new disk N>' rebalance power <#>;


Example:
SQL> alter diskgroup DATA add disk
'ORCL:ASMDISK_NEW_SAN_1',
'ORCL:ASMDISK_NEW_SAN_2',
'ORCL:ASMDISK_NEW_SAN_3',
'ORCL:ASMDISK_NEW_SAN_4',
'ORCL:ASMDISK_NEW_SAN_5',
'ORCL:ASMDISK_NEW_SAN_6',
'ORCL:ASMDISK_NEW_SAN_7',
'ORCL:ASMDISK_NEW_SAN_8',
'ORCL:ASMDISK_NEW_SAN_9',
'ORCL:ASMDISK_NEW_SAN_10'  rebalance power 11;


7) Then wait until the rebalance operation completes:

SQL> select * from v$asm_operation;

no rows selected

SQL> select * from gv$asm_operation;

no rows selected


8) Finally, remove the old disks:
SQL> alter diskgroup <diskgroup name> drop disk
<disk name A>,
<disk name B>,
<disk name D>,
<disk name E>,
.
.
.
<disk name X>  rebalance power <#>;


Example:
SQL> alter diskgroup DATA drop disk
ASMDISK_OLD_SAN_1,
ASMDISK_OLD_SAN_2,
ASMDISK_OLD_SAN_3,
ASMDISK_OLD_SAN_4,
ASMDISK_OLD_SAN_5,
ASMDISK_OLD_SAN_6,
ASMDISK_OLD_SAN_7,
ASMDISK_OLD_SAN_8,
ASMDISK_OLD_SAN_9,
ASMDISK_OLD_SAN_10  rebalance power 11;


9) Then wait until the rebalance operation completes:
SQL> select * from v$asm_operation;

no rows selected

SQL> select * from gv$asm_operation;

no rows selected



10) After the old disks are completely expelled from the diskgroup(s), your ASM diskgroup(s) and database(s) have been migrated to the new storage.

Note 1: Alternatively, we can execute the add disk and drop disk statements in one operation; that way only one rebalance operation is started, as follows:
SQL> alter diskgroup <diskgroup name>
add disk '<new device physical name 1>', .., '<new device physical name N>'
drop disk <old disk logical name 1>, <old disk logical name 2>, ..,<old disk logical name N>  rebalance power <#>;
 
Example:
alter diskgroup DATA add disk
'ORCL:ASMDISK_NEW_SAN_1',
'ORCL:ASMDISK_NEW_SAN_2',
'ORCL:ASMDISK_NEW_SAN_3',
'ORCL:ASMDISK_NEW_SAN_4',
'ORCL:ASMDISK_NEW_SAN_5',
'ORCL:ASMDISK_NEW_SAN_6',
'ORCL:ASMDISK_NEW_SAN_7',
'ORCL:ASMDISK_NEW_SAN_8',
'ORCL:ASMDISK_NEW_SAN_9',
'ORCL:ASMDISK_NEW_SAN_10'  
drop disk
ASMDISK_OLD_SAN_1,
ASMDISK_OLD_SAN_2,
ASMDISK_OLD_SAN_3,
ASMDISK_OLD_SAN_4,
ASMDISK_OLD_SAN_5,
ASMDISK_OLD_SAN_6,
ASMDISK_OLD_SAN_7,
ASMDISK_OLD_SAN_8,
ASMDISK_OLD_SAN_9,
ASMDISK_OLD_SAN_10  rebalance power 11;
  

This is more efficient than running the add disk and drop disk statements as separate commands, because only one rebalance is performed.

Then wait until the rebalance operation completes:
  

SQL> select * from v$asm_operation;

no rows selected

SQL> select * from gv$asm_operation;

no rows selected
  

Note 2:  As a best practice, never execute the “/etc/init.d/oracleasm deletedisk” command on active ASMLIB disks (those which are currently being used by ASM diskgroups as disks members).

Note 3:  As a best practice, never execute the “/etc/init.d/oracleasm createdisk” command on active ASMLIB disks (those which are currently being used by ASM diskgroups as disks members).

Note 4:  Never remove/format/modify/overlap/resize (at OS level or hardware) the physical associated disk until the corresponding logical ASMLIB/ASM disk is completely dropped and expelled from the ASM diskgroup (in other words, until the rebalance operation completes).
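
Before touching the old ASMLIB devices at the OS level, you can confirm they are no longer diskgroup members (a minimal sketch using the disk names from the example above; fully dropped disks report HEADER_STATUS = FORMER):

SQL> select path, header_status from v$asm_disk where path like 'ORCL:ASMDISK_OLD_SAN%';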

Note 5: On 10g, if something goes wrong while a disk is being expelled (e.g. the operation hangs), ASM will not restart the rebalance automatically (this was enhanced in 11g and 12c). Therefore, on 10g you need to start a manual rebalance operation to resume the diskgroup rebalance and expel the disk(s), as follows:
SQL> alter diskgroup <diskgroup name> rebalance power 11;

Wednesday, May 15, 2019

How to clone PDB ( Remote Clone ) across CDB using Database Link (Doc ID 2297470.1)

In this Document

Goal
Solution
References



Applies to:

Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

Goal

This article provides an example that walks through the steps to clone a PDB from one CDB to another CDB using a database link.

Existing Database information:
Source Multitenant Database: CDB1 with PDB1, PDB2
Destination Multitenant Database: CDB2 with PDB2

 Point to Note:
  During cloning, the source PDB should be in READ ONLY mode. From 12.2 onwards, the source PDB can also be kept in READ WRITE mode.

Solution

Let us see how to clone PDB1 from CDB1 to CDB2 using the remote clone (database link) method.

Source Database: ( CDB1 )
SQL> select CON_ID, dbid, NAME, OPEN_MODE, open_time, create_scn from V$PDBS ORDER BY name;

    CON_ID       DBID NAME       OPEN_MODE  OPEN_TIME                          CREATE_SCN
---------- ---------- ---------- ---------- --------------------------------- ----------
         2 3998669976 PDB$SEED   READ ONLY  08-AUG-17 06.02.49.899 PM +05:30          227
         3 2537019739 PDB1       MOUNTED    08-AUG-17 06.09.57.759 PM +05:30      1965762
         4 2868907633 PDB2       MOUNTED    08-AUG-17 06.11.14.177 PM +05:30      2018955

Destination Database ( CDB2)
SQL> select CON_ID, dbid, NAME, OPEN_MODE, open_time, create_scn from V$PDBS ORDER BY name;

    CON_ID       DBID NAME       OPEN_MODE  OPEN_TIME                          CREATE_SCN
---------- ---------- ---------- ---------- --------------------------------- ----------
         2 3798668876 PDB$SEED   READ ONLY  08-AUG-17 05.55.05.936 PM +05:30          227
         4 2568907633 PDB2       MOUNTED    08-AUG-17 05.55.26.840 PM +05:30      3013955

Phase 1: At Source Database (CDB1)
 1.1 Create a user in the PDB that is going to be cloned for this activity.
 1.2 Grant 'CREATE SESSION and CREATE PLUGGABLE DATABASE' to that user.

 To illustrate this case, I have created a user named 'remote_user_for_clone'.

Example:

At CDB1: 

SQL> ALTER SESSION SET CONTAINER=pdb1;

SQL> ALTER PLUGGABLE DATABASE pdb1 open;

SQL> CREATE USER remote_user_for_clone identified by remote_user_for_clone;

SQL> GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO remote_user_for_clone;

SQL> ALTER PLUGGABLE DATABASE pdb1 close;

SQL> ALTER PLUGGABLE DATABASE pdb1 open read only; 

Phase 2:
Create a net service name (TNS alias) in the tnsnames.ora of the Destination Database (CDB2) server.
tnsnames.ora
~~~~~~~~~~
getpdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <Source_Database_CDB1_host>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )
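
Optionally, you can verify that the new alias resolves and the source listener is reachable before creating the database link (a quick sanity check from the CDB2 server):

$ tnsping getpdb1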

Phase 3:
At Destination Database (CDB2)

From CDB$ROOT:
 3.1 Create a database link using the user created for this activity.
 3.2 Create the pluggable database using the database link. Optionally, if the folder structures of the source and destination databases are different, provide the FILE_NAME_CONVERT parameter appropriately.
 3.3 Open the cloned PDB at the destination database.

SQL> CREATE DATABASE LINK getpdb_link CONNECT TO remote_user_for_clone identified by remote_user_for_clone using 'getpdb1';
SQL> create pluggable database pdb1new from pdb1@getpdb_link file_name_convert=('D:\ORADATA\dbcdb\DATA\pdb1\','D:\ORADATA\dbcdbaux\DATA\pdb1\');
SQL> alter pluggable database pdb1new open;             <======= brings the PDB from MOUNTED to READ WRITE


Note: We can test the database link prior to creating the pluggable database at the Destination database.
      Assuming we have the following object at the source (CDB1) database:
             SQL> select count(*) from usrpdb1.allobj;
                  COUNT(*)
                 ----------
                      78779
      Grant SELECT on the above object to the user created in step 1.1:
             SQL> GRANT SELECT ON usrpdb1.allobj TO remote_user_for_clone;
      Now, after the database link is created, we can test the connectivity at the Destination Database (CDB2) as follows:
             SQL> select count(*) from usrpdb1.allobj@getpdb_link;
                  COUNT(*)
                 ----------
                      78779
      The above output implies that the database link we created works fine.

3.4 Verify and check the objects in PDB1NEW which were cloned from the source database:
        SQL> alter session set container=pdb1new ;
        SQL> select username,password from dba_users;

SQL> select CON_ID, dbid, NAME, OPEN_MODE, open_time, create_scn from V$PDBS ORDER BY name;

    CON_ID       DBID NAME       OPEN_MODE  OPEN_TIME                          CREATE_SCN
---------- ---------- ---------- ---------- --------------------------------- ----------
         2 3798668876 PDB$SEED   READ ONLY  08-AUG-17 05.55.05.936 PM +05:30          227
         3 2156111281 PDB1NEW    READ WRITE 08-AUG-17 07.11.36.419 PM +05:30      2951277   <<<<< Read write
         4 2568907633 PDB2       MOUNTED    08-AUG-17 05.55.26.840 PM +05:30      3013955

SQL> select count(*) from usrpdb1.allobj;

  COUNT(*)
----------
     78779

"OPTION WARNING Database option mismatch: PDB installed version NULL" in PDB_PLUG_IN_VIOLATIONS (Doc ID 2020172.1)

In this Document

Symptoms
Changes
Cause
Solution
References



Applies to:

Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

Symptoms


After converting a non-CDB to a PDB and running the noncdb_to_pdb.sql script, you see messages of the following form in PDB_PLUG_IN_VIOLATIONS. In this example, the PDB name is shown as <PDB>.
<PDB> OPTION WARNING Database option APS mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option CATJAVA mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option DV mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option JAVAVM mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option OLS mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option OWM mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option SDO mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option XML mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> OPTION WARNING Database option XOQ mismatch: PDB installed version NULL. CDB installed version 12.1.0.2.0. PENDING Fix the database option in the PDB or the CDB
<PDB> APEX WARNING APEX mismatch: PDB installed version NULL CDB installed version 4.2.5.00.08 PENDING Please contact Oracle Support.

Changes

 You converted a non-CDB to a PDB.

Cause

 These messages mean that the option is not installed in the PDB, but is installed in the CDB.

Solution

Warnings in PDB_PLUG_IN_VIOLATIONS do not prevent you from actually opening the PDB in READ WRITE mode. You can ignore WARNING messages (you cannot ignore ERROR messages). It is okay for a PDB to have a subset of (fewer) options installed than the CDB into which it is plugged. (The reverse is not true, however: the CDB must always have the same or more options than its PDBs.)
Unpublished Bug 16192980 : NO SIMPLE WAY TO CLEAN ROWS FROM PDB_PLUG_IN_VIOLATIONS AFTER DBMS_PDB CALL has been filed to request a way to clear out no-impact warnings from PDB_PLUG_IN_VIOLATIONS.  This enhancement is implemented in versions > 12.2.0.1.
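
To review which entries are warnings versus errors after the conversion, the view can be queried directly (a minimal sketch; adjust the predicates as needed):

SQL> select name, type, status, message
     from pdb_plug_in_violations
     where status <> 'RESOLVED'
     order by type, name;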


Note: Do not disregard such warnings about PDB$SEED.  PDB$SEED should have all the same options as the CDB.  It should not be opened/modified by users.  Oracle tools, such as datapatch, will open/close it as needed.  You do need to address warnings of this format for PDB$SEED.