Friday, May 24, 2019

Exact Steps to Migrate ASM Diskgroups to Another SAN/Disk-Array/DAS/Etc without Downtime (When ASMLIB Devices Are Involved) (Doc ID 1918350.1)

In this Document

Goal
Solution
 Community Discussions
References

Applies to:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 12.1.0.2 [Release 10.2 to 12.1]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Information in this document applies to any platform.

Goal


This document explains in detail the exact steps to migrate ASM diskgroups (using ASMLIB devices) from one SAN/Disk-Array/DAS/etc. to another without downtime. This procedure also works for diskgroups hosting the OCR, voting files and the ASM spfile.

Note: These steps are applicable to External, Normal & High redundancy diskgroups.

Solution


If you need to replace the ASMLIB disks currently associated with your diskgroups and migrate them to a new storage array, you can perform this operation without any downtime by following the steps below:

1) Back up all your databases and validate the backups (it is always required to protect your data).


2) Create new ASMLIB devices on the new storage physical disks as described in the following document/demo:
White Paper: ASMLIB Installation & Configuration On MultiPath Mapper Devices (Step by Step Demo) On RAC Or Standalone Configurations. (Doc ID 1594584.1)

Example:

2.1) Create the new ASMLIB disks using the “/etc/init.d/oracleasm createdisk” command as root OS user:
 

# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_1    /dev/mapper/mpathbp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_2    /dev/mapper/mpathcp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_3    /dev/mapper/mpathdp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_4    /dev/mapper/mpathep1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_5    /dev/mapper/mpathfp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_6    /dev/mapper/mpathgp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_7    /dev/mapper/mpathhp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_8    /dev/mapper/mpathip1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_9    /dev/mapper/mpathjp1
# /etc/init.d/oracleasm    createdisk       ASMDISK_NEW_SAN_10   /dev/mapper/mpathkp1
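The ten createdisk calls above follow a simple pattern, so they can be generated with a small loop. A minimal sketch (a dry run that only prints the commands; the mpathbp1..mpathkp1 device names come from the example above and will differ on your system):

```shell
# Dry run: print the "oracleasm createdisk" commands for disks 1..10.
# Remove the "echo" to actually run them (as root). Device names are
# illustrative; substitute your own multipath partitions.
for i in $(seq 1 10); do
  # map index 1 -> b, 2 -> c, ..., 10 -> k for the mpath device letter
  letter=$(printf "\\$(printf '%03o' $((97 + i)))")
  echo /etc/init.d/oracleasm createdisk "ASMDISK_NEW_SAN_${i}" "/dev/mapper/mpath${letter}p1"
done
```

Reviewing the printed commands before running them is a cheap way to catch a wrong device-to-name mapping.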
 
2.2) Scan the new ASMLIB disks on all the other RAC nodes (on Standalone configurations the disks are implicitly scanned during ASMLIB disk creation) as follows (as the root OS user):


# /etc/init.d/oracleasm    scandisks
 
2.3) Make sure the new ASMLIB disks and old ASMLIB disks are present on all the RAC Cluster nodes or Standalone configurations as follows:
 

# /etc/init.d/oracleasm  listdisks

ASMDISK_NEW_SAN_1
ASMDISK_NEW_SAN_2
ASMDISK_NEW_SAN_3
ASMDISK_NEW_SAN_4
ASMDISK_NEW_SAN_5
ASMDISK_NEW_SAN_6
ASMDISK_NEW_SAN_7
ASMDISK_NEW_SAN_8
ASMDISK_NEW_SAN_9
ASMDISK_NEW_SAN_10

ASMDISK_OLD_SAN_1
ASMDISK_OLD_SAN_2
ASMDISK_OLD_SAN_3
ASMDISK_OLD_SAN_4
ASMDISK_OLD_SAN_5
ASMDISK_OLD_SAN_6
ASMDISK_OLD_SAN_7
ASMDISK_OLD_SAN_8
ASMDISK_OLD_SAN_9
ASMDISK_OLD_SAN_10
  
# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]

Discovered disk: ORCL:ASMDISK_NEW_SAN_1 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_2 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_3 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_4 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_5 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_6 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_7 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_8 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_9 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_NEW_SAN_10 [40017852 blocks (20489140224 bytes), maxio 512]


Discovered disk: ORCL:ASMDISK_OLD_SAN_1 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_2 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_3 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_4 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_5 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_6 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_7 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_8 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_9 [40017852 blocks (20489140224 bytes), maxio 512]
Discovered disk: ORCL:ASMDISK_OLD_SAN_10 [40017852 blocks (20489140224 bytes), maxio 512]
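On RAC, a quick way to confirm that every node sees the same set of ASMLIB disks is to capture the "oracleasm listdisks" output from each node and diff the sorted lists. A minimal sketch (the node1.txt/node2.txt files and their sample contents are stand-ins for output you would collect from each node, e.g. over ssh):

```shell
# Compare ASMLIB disk lists captured from two nodes; sorting makes the
# comparison order-independent. The sample contents stand in for real
# "oracleasm listdisks" output collected from each node.
printf 'ASMDISK_NEW_SAN_1\nASMDISK_OLD_SAN_1\n' > node1.txt
printf 'ASMDISK_OLD_SAN_1\nASMDISK_NEW_SAN_1\n' > node2.txt
if diff <(sort node1.txt) <(sort node2.txt) >/dev/null; then
  echo "disk lists match"
else
  echo "disk lists differ"
fi
```

Any difference between nodes usually means a scandisks was missed on one of them (step 2.2).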
  

3) Verify the ASM discovery string (“ASM_DISKSTRING”) is correctly pointing to the ASMLIB devices (in all the ASM instances) as follows:

3.1) RAC Cluster configurations:

+ASM1 instance:
 

[grid@dbaasm ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter  ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
 
+ASM2 instance:
 

[grid@dbaasm ~]$ . oraenv
ORACLE_SID = [+ASM2] ? +ASM2
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter  ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
 
3.2) Standalone configurations:

+ASM instance:


[grid@dbaasm ~]$ . oraenv
ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter  ASM_DISKSTRING

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      ORCL:*
  

4) Then confirm that the new ASMLIB disks and old ASMLIB disks are being detected by ASM as follows:
 

ORACLE_SID = [+ASM] ? +ASM
The Oracle base remains unchanged with value /u01/app/grid
[grid@dbaasm ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 18 18:39:28 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Automatic Storage Management option


SQL> select path from v$asm_disk;

PATH
--------------------------------------------------------------------------------
ORCL:ASMDISK_NEW_SAN_1
ORCL:ASMDISK_NEW_SAN_2
ORCL:ASMDISK_NEW_SAN_3
ORCL:ASMDISK_NEW_SAN_4
ORCL:ASMDISK_NEW_SAN_5
ORCL:ASMDISK_NEW_SAN_6
ORCL:ASMDISK_NEW_SAN_7
ORCL:ASMDISK_NEW_SAN_8
ORCL:ASMDISK_NEW_SAN_9
ORCL:ASMDISK_NEW_SAN_10

PATH
--------------------------------------------------------------------------------
ORCL:ASMDISK_OLD_SAN_1
ORCL:ASMDISK_OLD_SAN_2
ORCL:ASMDISK_OLD_SAN_3
ORCL:ASMDISK_OLD_SAN_4
ORCL:ASMDISK_OLD_SAN_5
ORCL:ASMDISK_OLD_SAN_6
ORCL:ASMDISK_OLD_SAN_7
ORCL:ASMDISK_OLD_SAN_8
ORCL:ASMDISK_OLD_SAN_9
ORCL:ASMDISK_OLD_SAN_10

20 rows selected.
 
5) Validate all the new disks as described in the following document:
 
How To Add a New Disk(s) to An Existing Diskgroup on RAC Cluster or Standalone ASM Configuration (Best Practices). (Doc ID 557348.1)
  

6) Add the new disks to your desired diskgroup as follows:

SQL> alter diskgroup <diskgroup name> add disk
'<new disk 1>',
'<new disk 2>',
'<new disk 3>',
'<new disk 4>',
.
.
.
'<new disk N>' rebalance power <#>;


Example:
SQL> alter diskgroup DATA add disk
'ORCL:ASMDISK_NEW_SAN_1',
'ORCL:ASMDISK_NEW_SAN_2',
'ORCL:ASMDISK_NEW_SAN_3',
'ORCL:ASMDISK_NEW_SAN_4',
'ORCL:ASMDISK_NEW_SAN_5',
'ORCL:ASMDISK_NEW_SAN_6',
'ORCL:ASMDISK_NEW_SAN_7',
'ORCL:ASMDISK_NEW_SAN_8',
'ORCL:ASMDISK_NEW_SAN_9',
'ORCL:ASMDISK_NEW_SAN_10'  rebalance power 11;
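With ten disks the ADD DISK clause is tedious to type by hand, so a short loop can build the statement shown above. A sketch that only prints the SQL (assuming the ASMDISK_NEW_SAN_1..10 names and the DATA diskgroup from this example):

```shell
# Build the "alter diskgroup ... add disk" statement for disks 1..10
# and print it; paste the result into SQL*Plus (as SYSASM) to run it.
disks=""
for i in $(seq 1 10); do
  disks="${disks}'ORCL:ASMDISK_NEW_SAN_${i}', "
done
# ${disks%, } strips the trailing comma and space from the list
echo "alter diskgroup DATA add disk ${disks%, } rebalance power 11;"
```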


Then wait until the rebalance operation completes:

SQL> select * from v$asm_operation;

no rows selected

SQL> select * from gv$asm_operation;

no rows selected
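Rather than re-running these queries by hand, you can poll until gv$asm_operation reports no rows. A minimal sketch: wait_for_rebalance is a hypothetical helper that takes any command printing the current row count (in practice a wrapper around sqlplus; here it is mocked so the sketch can run anywhere):

```shell
# Poll until the supplied command reports a zero row count for
# gv$asm_operation, then announce completion. $1 is any command that
# prints the count; a real version would wrap "sqlplus / as sysasm".
wait_for_rebalance() {
  while [ "$($1)" != "0" ]; do
    sleep 60   # check once a minute while the rebalance runs
  done
  echo "rebalance complete"
}

# Dry run with a mock that immediately reports zero pending operations.
wait_for_rebalance "echo 0"
```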


7) Finally, remove the old disks:
SQL> alter diskgroup <diskgroup name> drop disk
<disk name A>,
<disk name B>,
<disk name D>,
<disk name E>,
.
.
.
<disk name X>  rebalance power <#>;


Example:
SQL> alter diskgroup DATA drop disk
ASMDISK_OLD_SAN_1,
ASMDISK_OLD_SAN_2,
ASMDISK_OLD_SAN_3,
ASMDISK_OLD_SAN_4,
ASMDISK_OLD_SAN_5,
ASMDISK_OLD_SAN_6,
ASMDISK_OLD_SAN_7,
ASMDISK_OLD_SAN_8,
ASMDISK_OLD_SAN_9,
ASMDISK_OLD_SAN_10  rebalance power 11;


8) Then wait until the rebalance operation completes:
SQL> select * from v$asm_operation;

no rows selected

SQL> select * from gv$asm_operation;

no rows selected



9) Once the old disks have been completely expelled from the diskgroup(s), your ASM diskgroup(s) and database(s) have been migrated to the new storage.

Note 1: Alternatively, the add disk and drop disk statements can be executed as a single command, so that only one rebalance operation is started, as follows:
SQL> alter diskgroup <diskgroup name>
add disk '<new device physical name 1>', .., '<new device physical name N>'
drop disk <old disk logical name 1>, <old disk logical name 2>, ..,<old disk logical name N>  rebalance power <#>;
 
Example:
alter diskgroup DATA add disk
'ORCL:ASMDISK_NEW_SAN_1',
'ORCL:ASMDISK_NEW_SAN_2',
'ORCL:ASMDISK_NEW_SAN_3',
'ORCL:ASMDISK_NEW_SAN_4',
'ORCL:ASMDISK_NEW_SAN_5',
'ORCL:ASMDISK_NEW_SAN_6',
'ORCL:ASMDISK_NEW_SAN_7',
'ORCL:ASMDISK_NEW_SAN_8',
'ORCL:ASMDISK_NEW_SAN_9',
'ORCL:ASMDISK_NEW_SAN_10'  
drop disk
ASMDISK_OLD_SAN_1,
ASMDISK_OLD_SAN_2,
ASMDISK_OLD_SAN_3,
ASMDISK_OLD_SAN_4,
ASMDISK_OLD_SAN_5,
ASMDISK_OLD_SAN_6,
ASMDISK_OLD_SAN_7,
ASMDISK_OLD_SAN_8,
ASMDISK_OLD_SAN_9,
ASMDISK_OLD_SAN_10  rebalance power 11;
  

This is more efficient than running separate add disk and drop disk commands, since the data is relocated in a single rebalance pass.

Then wait until the rebalance operation completes:
  

SQL> select * from v$asm_operation;

no rows selected

SQL> select * from gv$asm_operation;

no rows selected
  

Note 2:  As a best practice, never execute the “/etc/init.d/oracleasm deletedisk” command on active ASMLIB disks (those currently being used by ASM diskgroups as disk members).

Note 3:  As a best practice, never execute the “/etc/init.d/oracleasm createdisk” command on active ASMLIB disks (those currently being used by ASM diskgroups as disk members).

Note 4:  Never remove/format/modify/overlap/resize (at the OS or hardware level) the associated physical disk until the corresponding logical ASMLIB/ASM disk has been completely dropped and expelled from the ASM diskgroup (in other words, until the rebalance operation completes).

Note 5: On 10g, if something goes wrong while a disk is being expelled (e.g. the operation hangs), ASM will not restart the rebalance automatically (this was enhanced in 11g and 12c), so you must restart the rebalance manually in order to expel the disk(s), as follows:
SQL> alter diskgroup <diskgroup name> rebalance power 11;
