It has been a while since my last post. My pathetic excuses are all pretty much mentioned here. 🙂
Last month we worked with the storage team to migrate the SAN storage of our Oracle 11gR1 database to a new array. The main driver for the migration was SAN consolidation, which is, of course, ultimately about cost saving. In addition to migrating the ASM disk groups that store the database’s data files, all clusterware files (OCR and voting disks) had to be migrated too. The rebalance feature in Oracle ASM makes data migration easy and seamless, and since the clusterware files have redundancy, they can be migrated seamlessly as well. With 11gR1, all migration tasks can be performed online.
Prerequisites/Assumptions:
– New SAN LUNs/disks are already visible to all RAC nodes. In the case of the disks for ASM disk groups, they are already discoverable by ASM. The minimum number of OCR and voting disks, with the correct ownership and permissions, must be met.
– It is recommended to perform the migration tasks during off-peak hours, or better yet during a planned maintenance window.
Note that the sample shown here is specific to my environment (11.1.0.7 on Solaris 10 with dual-pathing to Hitachi SAN, and OCR and voting disks are on raw devices).
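Before touching the disk groups, it is worth confirming from the ASM instance that the new LUNs have actually been discovered. A minimal check against V$ASM_DISK (a sketch; the candidate disks should show up with GROUP_NUMBER = 0 and a HEADER_STATUS of CANDIDATE or PROVISIONED):

SELECT path, header_status, os_mb
FROM   v$asm_disk
WHERE  group_number = 0;

If a new disk does not appear here, check the ASM_DISKSTRING parameter and the device permissions before proceeding.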
SAN Migration of the ASM diskgroups
If you’re more comfortable with GUI, all tasks here can be accomplished using the Enterprise Manager.
1. Add new disks to ASM diskgroups.
ALTER DISKGROUP PMDW_DG1 ADD DISK
  '/dev/rdsk/c4t60060E80056FB30000006FB300000823d0s6' NAME PMDW_DG1_0003,
  '/dev/rdsk/c4t60060E80056FB30000006FB300000826d0s6' NAME PMDW_DG1_0004,
  '/dev/rdsk/c4t60060E80056FB30000006FB300000829d0s6' NAME PMDW_DG1_0005
  REBALANCE POWER 11;
We go with a rebalance power of 11, which is full throttle (the maximum in 11.1), because this is planned maintenance.
2. Check the rebalance status from Enterprise Manager or V$ASM_OPERATION.
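From SQL*Plus connected to the ASM instance, a quick query to watch the rebalance progress looks like this (the row disappears once the operation completes; EST_MINUTES is only a rough estimate):

SELECT group_number, operation, state, power, sofar, est_work, est_minutes
FROM   v$asm_operation;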
3. When rebalance completes, drop the old disks.
ALTER DISKGROUP PMDW_DG1 DROP DISK
  PMDW_DG1_0000, PMDW_DG1_0001, PMDW_DG1_0002
  REBALANCE POWER 11;
When adding or removing several disks, it is recommended to add or remove all of the disks at once. This reduces the number of rebalance operations needed for the storage change.
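Taking that advice one step further, the add and drop can even be combined into a single statement so that only one rebalance pass runs in total (a sketch reusing the disk names from above; the trade-off is that you cannot verify the new disks under load before the old ones start draining):

ALTER DISKGROUP PMDW_DG1 ADD DISK
  '/dev/rdsk/c4t60060E80056FB30000006FB300000823d0s6' NAME PMDW_DG1_0003,
  '/dev/rdsk/c4t60060E80056FB30000006FB300000826d0s6' NAME PMDW_DG1_0004,
  '/dev/rdsk/c4t60060E80056FB30000006FB300000829d0s6' NAME PMDW_DG1_0005
  DROP DISK PMDW_DG1_0000, PMDW_DG1_0001, PMDW_DG1_0002
  REBALANCE POWER 11;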
SAN Migration of the OCR Files
1. Back up all OCR-related files.
# {CRS_HOME}/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     921332
     Used space (kbytes)      :       4548
     Available space (kbytes) :     916784
     ID                       :  776278942
     Device/File Name         : /dev/rdsk/c4t50060E800000000000002892000003F8d0s6
                                Device/File integrity check succeeded
     Device/File Name         : /dev/rdsk/c4t50060E800000000000002892000003F9d0s6
                                Device/File integrity check succeeded
Back up the /var/opt/oracle/ocr.loc file:
# cp ocr.loc ocr.loc.old
Manually back up the OCR:
# {CRS_HOME}/bin/ocrconfig -manualbackup
2. As root, run the following commands to replace the OCR files. This change can be performed online and is reflected across the entire cluster.
# {CRS_HOME}/bin/ocrconfig -replace ocr /dev/rdsk/c4t60060E80056FB30000006FB300001014d0s6
# {CRS_HOME}/bin/ocrconfig -replace ocrmirror /dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6
3. Verify the new configuration.
Check that the ocr.loc file was updated:
# cat /var/opt/oracle/ocr.loc
#Device/file /dev/rdsk/c4t50060E800000000000002892000003F9d0s6 getting replaced by device /dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6
ocrconfig_loc=/dev/rdsk/c4t60060E80056FB30000006FB300001014d0s6
ocrmirrorconfig_loc=/dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6
Check OCR:
# {CRS_HOME}/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     921332
     Used space (kbytes)      :       4548
     Available space (kbytes) :     916784
     ID                       :  776278942
     Device/File Name         : /dev/rdsk/c4t60060E80056FB30000006FB300001014d0s6
                                Device/File integrity check succeeded
     Device/File Name         : /dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6
                                Device/File integrity check succeeded
     Cluster registry integrity check succeeded
     Logical corruption check succeeded
SAN Migration of the voting disks
1. Back up the voting disks.
Query the original locations:
# /opt/oracrs/bin/crsctl query css votedisk
 0.     0    /dev/rdsk/c4t50060E800000000000002892000003FBd0s6
 1.     0    /dev/rdsk/c4t50060E800000000000002892000003FCd0s6
 2.     0    /dev/rdsk/c4t50060E800000000000002892000003FFd0s6
Back up the voting disks using dd:

dd if={voting_disk_name} of={backup_file_name}

For example:

dd if=/dev/rdsk/c4t50060E800000000000002892000003FBd0s6 of=/tmp/voting1
2. Move voting disks.
Starting with 11.1, voting disk migration can be performed online.
# /opt/oracrs/bin/crsctl delete css votedisk /dev/rdsk/c4t50060E800000000000002892000003FBd0s6
# /opt/oracrs/bin/crsctl add css votedisk /dev/rdsk/c4t60060E80056FB30000006FB300001017d0s6
# /opt/oracrs/bin/crsctl delete css votedisk /dev/rdsk/c4t50060E800000000000002892000003FCd0s6
# /opt/oracrs/bin/crsctl add css votedisk /dev/rdsk/c4t60060E80056FB30000006FB300001018d0s6
# /opt/oracrs/bin/crsctl delete css votedisk /dev/rdsk/c4t50060E800000000000002892000003FFd0s6
# /opt/oracrs/bin/crsctl add css votedisk /dev/rdsk/c4t60060E80056FB30000006FB300001019d0s6
3. Verify the new configuration.
# /opt/oracrs/bin/crsctl query css votedisk
 0.     0    /dev/rdsk/c4t60060E80056FB30000006FB300001017d0s6
 1.     0    /dev/rdsk/c4t60060E80056FB30000006FB300001018d0s6
 2.     0    /dev/rdsk/c4t60060E80056FB30000006FB300001019d0s6
Reference:
Metalink #428681.1: OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices
Nice… and Great to see your post again.
I’m just curious about the add/drop of ASM disks 😉
How much data was in the ASM disk group?
and …
How long did it take to add/drop the new disks on the ASM disk groups?
thank You
Yeah, it is good to be back too.
It took roughly 40 minutes for 600GB of data using a rebalance power of 11. Note that this was done with no user activity (planned maintenance).
Thank You, that’s great idea for me.
Pingback: Blogroll Report 02/10/2009-09/10/2009 « Coskan’s Approach to Oracle
Hello from Russia!
Can I quote a post in your blog with the link to you?
Great information! very helpful!
need to read the article
Great article, does the disk need to be the same size?
It is not required but is highly recommended for ease of storage management.
Can you please explain to me in detail how the OCR and voting disks can be moved to a new SAN disk array?
Is it required to recreate the OCR and voting disks on the new SAN disks?
If not, can you explain the steps in detail, as I am an amateur in Clusterware.
Thank you.
Basically you will need to add the new LUN/disk paths before removing the old ones. The step-by-step instructions are already in the post.
What is your hardware configuration? E.g., what type of Sun server, number of CPUs, CPU speed, RAM, 2/4/8Gb HBAs, etc. What multipathing software are you using? Any tweaking done to the SCSI queue depth? All those sorts of questions are important to understanding the value of the approach. At ~15GB per minute with full application downtime, I’m not moving a 50TB DB using this method. Good post.
The goal of the post is not speed. Yes, you will likely achieve better performance with bigger boxes or storage-related tuning. SAN/disk migration by adding and removing disks is a pretty standard approach here. Note that moving ASM disk groups can be done online, and it can be performed in chunks whenever the system has a light load (e.g., over multiple weekends).
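For doing it in chunks, one option is to pause and resume the rebalance to fit the available windows (a sketch; setting the power to 0 halts the rebalance, and raising it again resumes where it left off):

ALTER DISKGROUP PMDW_DG1 REBALANCE POWER 0;   -- pause before the busy period
ALTER DISKGROUP PMDW_DG1 REBALANCE POWER 11;  -- resume at full speed in the next window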
Please explain the same for a Windows Server 2003 64-bit environment.
The steps are the same regardless of platform. The only difference is the way Windows refers to the shared disks, which is in the format \\.\ORCLDISKprefixn.
See here for more details http://download.oracle.com/docs/cd/B28359_01/install.111/b28250/racstorage.htm#autoId16.
Thanks for the complete note!
I will be performing these actions on our test RAC in the next few days.
All as prep to move our production RAC to new SAN systems.
Glad that it helps. Good luck with the task.
Hi,
I am new to Oracle RAC/ASM….
Just a query on the above procedure: does the ADD/DROP of disks to a disk group need to be done on one of the nodes or on both?
Regards
Rahul
Rahul
Since this is a database operation, it can be done on any one of the nodes.
ittichi:
I’ll migrate a 10g RAC from IBM storage to EMC storage. Can I use this method of ADD/DROP disks on an ASM disk group?
Yes, definitely. It is independent of the underlying storage subsystem, as long as the paths are accessible to the database system.
I’m considering using a method such as this on a storage refresh project.
However, it’s well documented in the ASM best practices to “Use diskgroups with similarly sized and performing disks”.
From a performance perspective, if you’re moving from a different or older technology, this is likely not going to be the case.
This would also require me to allocate LUN sizes (at least initially) that I’d no longer consider optimal given newer technologies.
Care to comment? I’m concerned about not following best practices, then having to go to Oracle support if there are any issues.
Steve,
That statement makes sense. All disks used during normal operation should be the same size and from the same storage system if possible. The goal of this post is to move completely from one storage system to another; at the end, we’re NOT using disks from both storage systems.
Thanks for replying. So you have found that for a short duration (overnight/weekend/off-peak) this doesn’t pose any issue in practice.
My intention would be to drop the “old” disks as soon as the rebalance is complete, essentially swapping the old array for the new.
I’ll look at those other posts.. appreciate your insight.
No issue during our migration.
Is this approach applicable for ASM disks in external redundancy mode?
Yes it will work.
I’m on 11.2.0.2 Clusterware. Any idea if I could just add and remove disks from an OCR_VOTE disk group, leaving it to ASM rebalancing?
In 11gR2, you can move the OCR and voting disks into ASM, which makes things easier. Simply google “oracle move ocr and voting to ASM 11gR2”; there is plenty of good information out there. Once in ASM, the move is simply an add/drop of ASM disks.
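For reference, in 11gR2 the clusterware-file moves look roughly like this (a sketch; +NEWDG and +OLDDG are hypothetical disk group names on the new and old SAN). Note that voting files stored in ASM are not moved by plain disk add/drop; they are relocated with crsctl replace votedisk:

# Move the voting disks to a disk group on the new SAN
crsctl replace votedisk +NEWDG

# Add an OCR location in the new disk group, then drop the old one
ocrconfig -add +NEWDG
ocrconfig -delete +OLDDG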
The OCR and voting files are already in ASM. I just wanted to move the files from one SAN to another. While adding and removing disks worked for the OCR files, it didn’t really work for the voting files. So what I did was create another disk group and move the OCR and voting files into the new disk group. I had thought I could just add and remove disks from the new SAN in the existing disk group, but that didn’t work.
ocrconfig -replace ocr is not valid in 11g, nor is ocrconfig -replace ocrmirror. The 11g construct is:

ocrconfig -replace <current_location> -replacement <new_location>

Furthermore, raw devices for the OCR and voting disks are not supported in 11g except if you migrated from 10g; the OCR and voting disks reside on ASM in 11g.
I think you’re talking about 11gR2 (11.2). My article is about 11gR1 (11.1). There are a lot of changes; in particular, 11.2 allows the OCR and voting disks to be placed on ASM, which is not possible in 11.1.
We have a 50TB DB and are planning an ASM SAN migration. We did not think of figuring out the hardware configuration. Can you help me better understand your thoughts on this, and what would be the best approach for a database of this size?