Background
So, the time has come to migrate your Oracle database to another storage array.
There are a number of options available, including the Oracle online datafile move, Oracle Recovery Manager (RMAN) backup/restore, Oracle Data Guard failover, Data Pump, GoldenGate, host-based mirroring and storage replication, to name just a few.
The method selected will be influenced by available hardware, software, skills and acceptable business downtime.
For databases that cannot support any downtime, Oracle Automatic Storage Management (ASM) is an excellent option.
In this post I will show how we can use Oracle ASM rebalancing to migrate volumes between two Pure Storage FlashArrays in my lab.
ASM Volumes
First, using asmcmd or the graphical asmca utility identify the current Oracle ASM disk groups and disks, for example
[oracle@z-rac1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304  10485760   8388424                0         8388424              0             N  ACFS/
MOUNTED  EXTERN  N         512             512   4096  4194304    204800    204668                0          204668              0             N  CONTROL_REDO/
MOUNTED  EXTERN  N         512             512   4096  4194304  10485760   3273476                0         3273476              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304   3145728   3121436                0         3121436              0             N  FRA/
MOUNTED  EXTERN  N         512             512   4096  4194304    102400     45436                0           45436              0             N  MGMT/
MOUNTED  EXTERN  N         512             512   4096  4194304     51200     50864                0           50864              0             Y  OCR/
And ASM disks.
[oracle@z-rac1 ~]$ asmcmd lsdsk -t -g
Inst_ID  Create_Date  Mount_Date  Repair_Timer  Path
      1  30-OCT-20    02-AUG-21              0  /dev/mapper/dg_rac_acfs
      2  30-OCT-20    02-AUG-21              0  /dev/mapper/dg_rac_acfs
      1  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_control_redo
      2  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_control_redo
      1  05-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_crs
      2  05-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_crs
      1  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_data01
      2  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_data01
      1  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_data02
      2  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_data02
      1  14-OCT-20    02-AUG-21              0  /dev/mapper/dg_rac_data03
      2  14-OCT-20    02-AUG-21              0  /dev/mapper/dg_rac_data03
      1  14-OCT-20    02-AUG-21              0  /dev/mapper/dg_rac_data04
      2  14-OCT-20    02-AUG-21              0  /dev/mapper/dg_rac_data04
      1  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_fra
      2  11-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_fra
      1  05-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_gimr
      2  05-JUN-19    02-AUG-21              0  /dev/mapper/dg_rac_gimr
If you have many disk groups, you may want to use the asmcmd lsdsk -G option to limit the output to a specific disk group.
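For example, to restrict the listing to just the DATA disk group:

[oracle@z-rac1 ~]$ asmcmd lsdsk -t -g -G DATA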
Linux Multi-path
From the above I can see that I am using Linux multi-pathing; from /etc/multipath.conf we can see the storage wwid for each volume and its alias, for example.
...
multipaths {
    multipath {
        wwid   3624a937050c939582b0f46c00031e3f4
        alias  dg_rac_data01
    }
    multipath {
        wwid   3624a937050c939582b0f46c00031e418
        alias  dg_rac_data02
    }
...
From my lab FlashArray I can confirm the array ID, which is embedded within the wwid.
pureuser@z-fa420-a> purearray list
Name ID OS Version
z-fa420-a 50c93958-2b0f-46c0-80ee-494974331af9 Purity//FA 5.3.17
In the above, the wwid (World Wide Identifier) for 'dg_rac_data01' is made up of the vendor ID plus the serial number, i.e. '3624a9370' (Pure Storage) + '50c939582b0f46c0' (array) + '0031e3f4' (volume).
Note: the 'wwid' needs to be in lowercase, and the 'alias' name for ASM disks must be fewer than 30 characters, alphanumeric, with the underscore ('_') as the only special character.
Source Storage Array
Using the FlashArray CLI (command line interface) we can see size and space details with purevol list
pureuser@z-fa420-a> purevol list z-rac* --space
Name                   Size   Thin Provisioning  Data Reduction  Total Reduction  Volume   Snapshots  Shared Space  System  Total
z-rac_dg_acfs01        10T    90%                16.1 to 1       >100 to 1        2.82G    3.83K      -             -       2.82G
z-rac_dg_control_redo  200G   100%               16.0 to 1       >100 to 1        157.84K  1.62K      -             -       159.46K
z-rac_dg_crs           50G    100%               9.1 to 1        >100 to 1        10.06M   16.39K     -             -       10.08M
z-rac_dg_data01        2560G  31%                6.3 to 1        9.1 to 1         212.09G  106.14M    -             -       212.19G
z-rac_dg_data02        2560G  31%                6.3 to 1        9.1 to 1         212.39G  111.36M    -             -       212.50G
z-rac_dg_data03        2560G  31%                6.3 to 1        9.1 to 1         212.10G  107.99M    -             -       212.21G
z-rac_dg_data04        2560G  31%                6.3 to 1        9.1 to 1         212.07G  110.85M    -             -       212.18G
z-rac_dg_fra           3T     24%                3.1 to 1        4.1 to 1         753.28G  1010.51M   -             -       754.26G
z-rac_dg_gimr          100G   45%                4.1 to 1        7.4 to 1         13.24G   1.29M      -             -       13.24G
Target Storage Array
I will now log on to my target FlashArray, create corresponding volumes and attach them to both nodes in my RAC cluster.
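For reference, here is a minimal sketch of how one of the data volumes could be created and presented to both RAC nodes, assuming the nodes are already members of a host group on the target array (the host group name 'z-rac' used here is hypothetical):

pureuser@z-m20-c> purevol create --size 2560G z-rac_dg_data01
pureuser@z-m20-c> purevol connect --hgroup z-rac z-rac_dg_data01

Repeating this for each volume gives the listing below.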
pureuser@z-m20-c> purevol list z-rac*
Name Size Source Created Serial
z-rac_dg_acfs01 10T - 2021-09-24 15:51:34 BST 3497650B62BF43AA0002BA8C
z-rac_dg_control_redo 200G - 2021-09-24 15:51:56 BST 3497650B62BF43AA0002BA8D
z-rac_dg_crs 50G - 2021-09-24 15:52:12 BST 3497650B62BF43AA0002BA8E
z-rac_dg_data01 2560G - 2021-09-24 15:52:59 BST 3497650B62BF43AA0002BA8F
z-rac_dg_data02 2560G - 2021-09-24 15:53:21 BST 3497650B62BF43AA0002BA90
z-rac_dg_data03 2560G - 2021-09-24 15:58:50 BST 3497650B62BF43AA0002BA94
z-rac_dg_data04 2560G - 2021-09-24 15:53:57 BST 3497650B62BF43AA0002BA91
z-rac_dg_fra 3T - 2021-09-24 15:54:37 BST 3497650B62BF43AA0002BA92
z-rac_dg_gimr 100G - 2021-09-24 15:54:55 BST 3497650B62BF43AA0002BA93
The Array ID is:
pureuser@z-m20-c> purearray list
Name ID OS Version
z-m20-c 3497650b-62bf-43aa-858e-d34130f449fa Purity//FA 6.2.0
Linux config
As root, rescan the SCSI bus using rescan-scsi-bus.sh. If all has gone well the new devices should now be visible; we can check this with multipath -ll, looking for the new volumes, for example.
[root@z-rac1:~]# multipath -ll | grep 2ba8f
3624a93703497650b62bf43aa0002ba8f dm-18 PURE ,FlashArray
In the above, the wwid (World Wide Identifier) for device 'dm-18' is again the vendor ID plus the serial number, i.e. '3624a9370' (Pure Storage) + '3497650b62bf43aa' (array) + '0002ba8f' (volume).
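A quick way to build the lowercase wwid for a new volume is therefore to prefix '3624a9370' to the lowercased serial reported by purevol list, for example:

[root@z-rac1:~]# echo "3624a9370$(echo 3497650B62BF43AA0002BA8F | tr '[:upper:]' '[:lower:]')"
3624a93703497650b62bf43aa0002ba8f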
Update /etc/multipath.conf to include aliases for the new volumes.
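Following the same wwid layout, the new entries for the first two data volumes would look something like this (wwids built from the '3624a9370' prefix plus the lowercased serials listed earlier):

...
    multipath {
        wwid   3624a93703497650b62bf43aa0002ba8f
        alias  dg_rac_data01_new
    }
    multipath {
        wwid   3624a93703497650b62bf43aa0002ba90
        alias  dg_rac_data02_new
    }
...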
Reload multi-path
[root@z-rac1:~]# service multipathd reload
Redirecting to /bin/systemctl reload multipathd.service
And if we repeat multipath -ll we should now see the new multipath names, for example.
[root@z-rac1:~]# multipath -ll | grep 2ba8f
dg_rac_data01_new (3624a93703497650b62bf43aa0002ba8f) dm-18 PURE ,FlashArray
ASM rebalance
We are now ready to use Oracle ASM to perform an on-line relocation of ASM diskgroups.
The ASM_POWER_LIMIT specifies the maximum power on an Oracle ASM instance for disk rebalancing.
SQL> show parameter power
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_power_limit integer 1
If we have adequate resources we can specify a higher power using the REBALANCE POWER clause; the upper limit is 1024.
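If you prefer to raise the instance-wide default rather than set a power per statement, something like the following should work when connected to the ASM instance, assuming an spfile is in use (the value 10 is just an illustration):

SQL> alter system set asm_power_limit=10 scope=both sid='*';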
Add new Disk
SQL> alter diskgroup OCR add disk '/dev/mapper/dg_rac_crs_new' rebalance power 10 wait;

Diskgroup altered.
The WAIT clause means the statement waits for the rebalance to complete before returning. The default is NOWAIT, which means the statement returns immediately; progress can then be monitored with the GV$ASM_OPERATION view, for example.
SQL> select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION where GROUP_NUMBER=1;
If you’re monitoring your rebalance you can still modify the power using alter diskgroup, for example.
SQL> alter diskgroup OCR rebalance modify power 30;
Once complete, we can drop the disk with alter diskgroup <DiskGroup Name> drop disk <Disk Name>; for example
Drop old Disk
SQL> alter diskgroup OCR drop disk CRS;

Diskgroup altered.
Add new and drop old Disk
A slightly easier approach is to add a new disk to a diskgroup and drop the old disk in one SQL statement, for example.
asm_layout.sql
SET LINESIZE 200
SET PAGESIZE 100
col groupname format a25 heading 'Disk Group'
col path format a35 heading 'LUN Path'
col diskname format a20 heading 'Disk Name'
col sector_size format 9,999 heading 'Sector |Size'
col block_size format 99,999 heading 'Block |Size'
col state format a10 heading 'State'
col au format 9,999,999 heading 'AU Size'
col total_gig format 999,999 heading 'Group |Total |GB'
col dtotal_gig format 999,999 heading 'Disk |Total |GB'
col free_gig format 999,999 heading 'Group |Free |GB'
col dfree_gig format 999,999 heading 'Disk |Free |GB'
SELECT
    g.name groupname,
    d.path,
    d.name diskname,
    g.sector_size,
    g.block_size,
    g.state,
    g.allocation_unit_size au,
    g.total_mb/1024 total_gig,
    d.total_mb/1024 dtotal_gig,
    g.free_mb/1024 free_gig,
    d.free_mb/1024 dfree_gig
FROM
    v$asm_diskgroup g, v$asm_disk d
WHERE
    d.group_number = g.group_number
ORDER BY
    g.name, d.disk_number
/
EXIT
Here is the ASM layout before we move our FRA (Fast Recovery Area) disk group.
[oracle@z-rac1 sql_scripts]$ sqlplus / as sysasm @asm_layout
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Sep 28 12:00:38 2021
Version 19.10.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0
Group Disk Group Disk
Sector Block Total Total Free Free
Disk Group LUN Path Disk Name Size Size State AU Size GB GB GB GB
--------------- ----------------------------------- ------------------ ------- ------- ---------- ---------- -------- -------- -------- --------
ACFS /dev/mapper/dg_rac_acfs_new ACFS_0001 512 4,096 MOUNTED 4,194,304 10,240 10,240 8,192 8,192
CONTROL_REDO /dev/mapper/dg_rac_control_redo_new CONTROL_REDO_0001 512 4,096 MOUNTED 4,194,304 200 200 200 200
DATA /dev/mapper/dg_rac_data01 DATA_0000 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data02 DATA_0001 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data03 DATA_0002 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data04 DATA_0003 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
FRA /dev/mapper/dg_rac_fra FRA 512 4,096 MOUNTED 4,194,304 3,072 3,072 3,048 3,048
MGMT /dev/mapper/dg_rac_gimr_new MGMT_0001 512 4,096 MOUNTED 4,194,304 100 100 44 44
OCR /dev/mapper/dg_rac_crs_new OCR_0001 512 4,096 MOUNTED 4,194,304 50 50 50 50
9 rows selected.
Add and drop disk in one operation.
SQL> alter diskgroup FRA add disk '/dev/mapper/dg_rac_fra_new' drop disk FRA rebalance power 30 wait;

Diskgroup altered.
And after
Group Disk Group Disk
Sector Block Total Total Free Free
Disk Group LUN Path Disk Name Size Size State AU Size GB GB GB GB
--------------- ----------------------------------- ------------------ ------- ------- ---------- ---------- -------- -------- -------- --------
ACFS /dev/mapper/dg_rac_acfs_new ACFS_0001 512 4,096 MOUNTED 4,194,304 10,240 10,240 8,192 8,192
CONTROL_REDO /dev/mapper/dg_rac_control_redo_new CONTROL_REDO_0001 512 4,096 MOUNTED 4,194,304 200 200 200 200
DATA /dev/mapper/dg_rac_data01 DATA_0000 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data02 DATA_0001 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data03 DATA_0002 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data04 DATA_0003 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
FRA /dev/mapper/dg_rac_fra_new FRA_0001 512 4,096 MOUNTED 4,194,304 3,072 3,072 3,048 3,048
MGMT /dev/mapper/dg_rac_gimr_new MGMT_0001 512 4,096 MOUNTED 4,194,304 100 100 44 44
OCR /dev/mapper/dg_rac_crs_new OCR_0001 512 4,096 MOUNTED 4,194,304 50 50 50 50
9 rows selected.
Multi add and drop Disks
We can also add and drop multiple disks within the same operation
SQL> alter diskgroup DATA add disk
  2  '/dev/mapper/dg_rac_data01_new',
  3  '/dev/mapper/dg_rac_data02_new',
  4  '/dev/mapper/dg_rac_data03_new',
  5  '/dev/mapper/dg_rac_data04_new'
  6  drop disk DATA_0000, DATA_0001, DATA_0002, DATA_0003
  7  rebalance power 30 wait;

Diskgroup altered.
Let's check the estimated time remaining by querying GV$ASM_OPERATION.
SQL> select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION;
INST_ID OPERA STAT POWER SOFAR EST_WORK EST_RATE EST_MINUTES
---------- ----- ---- ---------- ---------- ---------- ---------- -----------
1 REBAL WAIT 30 0 0 0 0
1 REBAL RUN 30 56823 1803059 11768 148
1 REBAL DONE 30 0 0 0 0
We can also see the ASM layout during migration using the previous asm_layout.sql script.
Group Disk Group Disk
Sector Block Total Total Free Free
Disk Group LUN Path Disk Name Size Size State AU Size GB GB GB GB
--------------- ----------------------------------- ------------------ ------- ------- ---------- ---------- -------- -------- -------- --------
ACFS /dev/mapper/dg_rac_acfs_new ACFS_0001 512 4,096 MOUNTED 4,194,304 10,240 10,240 8,192 8,192
CONTROL_REDO /dev/mapper/dg_rac_control_redo_new CONTROL_REDO_0001 512 4,096 MOUNTED 4,194,304 200 200 200 200
DATA /dev/mapper/dg_rac_data01 DATA_0000 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 972
DATA /dev/mapper/dg_rac_data02 DATA_0001 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 972
DATA /dev/mapper/dg_rac_data03 DATA_0002 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 972
DATA /dev/mapper/dg_rac_data04 DATA_0003 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 972
DATA /dev/mapper/dg_rac_data01_new DATA_0004 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 2,387
DATA /dev/mapper/dg_rac_data02_new DATA_0005 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 2,387
DATA /dev/mapper/dg_rac_data03_new DATA_0006 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 2,387
DATA /dev/mapper/dg_rac_data04_new DATA_0007 512 4,096 MOUNTED 4,194,304 20,480 2,560 13,437 2,387
FRA /dev/mapper/dg_rac_fra_new FRA_0001 512 4,096 MOUNTED 4,194,304 3,072 3,072 3,048 3,048
MGMT /dev/mapper/dg_rac_gimr_new MGMT_0001 512 4,096 MOUNTED 4,194,304 100 100 44 44
OCR /dev/mapper/dg_rac_crs_new OCR_0001 512 4,096 MOUNTED 4,194,304 50 50 50 50

13 rows selected.
Once complete, re-run the script to confirm the old disks have been removed.
Group Disk Group Disk
Sector Block Total Total Free Free
Disk Group LUN Path Disk Name Size Size State AU Size GB GB GB GB
--------------- ----------------------------------- ------------------ ------- ------- ---------- ---------- -------- -------- -------- --------
ACFS /dev/mapper/dg_rac_acfs_new ACFS_0001 512 4,096 MOUNTED 4,194,304 10,240 10,240 8,192 8,192
CONTROL_REDO /dev/mapper/dg_rac_control_redo_new CONTROL_REDO_0001 512 4,096 MOUNTED 4,194,304 200 200 200 200
DATA /dev/mapper/dg_rac_data01_new DATA_0004 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data02_new DATA_0005 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data03_new DATA_0006 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
DATA /dev/mapper/dg_rac_data04_new DATA_0007 512 4,096 MOUNTED 4,194,304 10,240 2,560 3,197 799
FRA /dev/mapper/dg_rac_fra_new FRA_0001 512 4,096 MOUNTED 4,194,304 3,072 3,072 3,048 3,048
MGMT /dev/mapper/dg_rac_gimr_new MGMT_0001 512 4,096 MOUNTED 4,194,304 100 100 44 44
OCR /dev/mapper/dg_rac_crs_new OCR_0001 512 4,096 MOUNTED 4,194,304 50 50 50 50

9 rows selected.
Or via the asmcmd utility.
[oracle@z-rac1 ~]$ asmcmd lsdsk -t
Create_Date  Mount_Date  Repair_Timer  Path
27-SEP-21    27-SEP-21              0  /dev/mapper/dg_rac_acfs_new
28-SEP-21    28-SEP-21              0  /dev/mapper/dg_rac_control_redo_new
27-SEP-21    27-SEP-21              0  /dev/mapper/dg_rac_crs_new
28-SEP-21    28-SEP-21              0  /dev/mapper/dg_rac_data01_new
28-SEP-21    28-SEP-21              0  /dev/mapper/dg_rac_data02_new
28-SEP-21    28-SEP-21              0  /dev/mapper/dg_rac_data03_new
28-SEP-21    28-SEP-21              0  /dev/mapper/dg_rac_data04_new
28-SEP-21    28-SEP-21              0  /dev/mapper/dg_rac_fra_new
27-SEP-21    27-SEP-21              0  /dev/mapper/dg_rac_gimr_new
Now that our data has fully migrated, we are free to disconnect the old volumes from our Linux servers.
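As a rough sketch of the clean-up, assuming the old volumes were presented through a host group (the host group name 'z-rac' is again hypothetical), the old alias entries can be removed from /etc/multipath.conf, the stale maps flushed on each node, and the volumes disconnected and destroyed on the source array, for example:

[root@z-rac1:~]# multipath -f dg_rac_data01
[root@z-rac1:~]# service multipathd reload

pureuser@z-fa420-a> purevol disconnect --hgroup z-rac z-rac_dg_data01
pureuser@z-fa420-a> purevol destroy z-rac_dg_data01

Repeat for each of the old volumes, and on each node where appropriate; always double-check you are removing the old devices and not the new ones.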
Summary
In this blog I have demonstrated how we can add newly created disks to an ASM disk group, drop disks from an ASM disk group, perform a single add-and-drop, and perform a multi add-and-drop with rebalance to migrate Oracle ASM disk groups between storage arrays.