Online SAN Storage Migration for Oracle 11g RAC database with ASM

It has been a while since my last post. My pathetic excuses are pretty much all mentioned here. :-)

Last month we worked with the storage team to migrate the SAN storage of our Oracle 11gR1 database to a new array. The driver of the migration was mainly SAN consolidation, which is, of course, ultimately about cost saving. In addition to migrating the ASM disk groups storing the database's data files, all clusterware files (OCR and voting disks) had to be migrated too. The rebalance feature in Oracle ASM makes data migration easy and seamless. And since the clusterware files have redundancy, they can be migrated seamlessly as well. With 11gR1, all migration tasks can be performed online.


Prerequisites

– The new SAN LUNs/disks are already visible to all RAC nodes. The disks intended for the ASM disk groups have already been discovered by ASM. The minimum number of OCR and voting disk devices, with the correct ownership and permissions, must be in place.

– It is recommended to perform the migration tasks during off-peak hours, or better yet during a planned maintenance window.

Note that the examples shown here are specific to my environment (Solaris 10 with dual-pathing to Hitachi SAN; the OCR and voting disks are on raw devices).

SAN Migration of the ASM diskgroups

If you're more comfortable with a GUI, all of the tasks here can be accomplished through Enterprise Manager.

1. Add the new disks to the ASM disk groups (the disk group name PMDW_DG1 is implied by the disk names).

  SQL> ALTER DISKGROUP PMDW_DG1 ADD DISK
         '/dev/rdsk/c4t60060E80056FB30000006FB300000823d0s6' NAME PMDW_DG1_0003,
         '/dev/rdsk/c4t60060E80056FB30000006FB300000826d0s6' NAME PMDW_DG1_0004,
         '/dev/rdsk/c4t60060E80056FB30000006FB300000829d0s6' NAME PMDW_DG1_0005
       REBALANCE POWER 11;

We go with a rebalance power of 11, which is full throttle, because this is a planned maintenance window.

2. Check the rebalance status from Enterprise Manager or the V$ASM_OPERATION view.
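The same check can be done from SQL*Plus; EST_MINUTES gives a rough estimate of the time remaining:

  SQL> SELECT group_number, operation, state, power, sofar, est_work, est_minutes
       FROM v$asm_operation;

The rebalance is complete when this query returns no rows.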

3. When the rebalance completes, drop the old disks.
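A sketch of the drop, assuming the old disks carry the hypothetical default names PMDW_DG1_0000 through PMDW_DG1_0002 (query V$ASM_DISK to confirm the actual names in your environment):

  SQL> ALTER DISKGROUP PMDW_DG1 DROP DISK
         PMDW_DG1_0000, PMDW_DG1_0001, PMDW_DG1_0002
       REBALANCE POWER 11;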


When adding or removing several disks, it is recommended to add or remove all of them in a single operation. This reduces the number of rebalance operations needed for the storage change.
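In fact, the add and the drop can be combined into a single statement so that only one rebalance runs for the whole swap. A sketch, reusing one of the new disk paths above and the hypothetical old disk name PMDW_DG1_0000:

  SQL> ALTER DISKGROUP PMDW_DG1
       ADD DISK '/dev/rdsk/c4t60060E80056FB30000006FB300000823d0s6' NAME PMDW_DG1_0003
       DROP DISK PMDW_DG1_0000
       REBALANCE POWER 11;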

SAN Migration of the OCR Files

1. Back up all OCR-related files.

First, check the current OCR locations:

# {CRS_HOME}/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     921332
         Used space (kbytes)      :       4548
         Available space (kbytes) :     916784
         ID                       :  776278942
         Device/File Name         : /dev/rdsk/c4t50060E800000000000002892000003F8d0s6
                      Device/File integrity check succeeded
         Device/File Name         : /dev/rdsk/c4t50060E800000000000002892000003F9d0s6
                       Device/File integrity check succeeded

Back up the /var/opt/oracle/ocr.loc file:

# cp ocr.loc ocr.loc.old

Manually back up the OCR:

# {CRS_HOME}/bin/ocrconfig -manualbackup

2. As root, run the following commands to replace the OCR files. This change can be performed online and will be reflected across the entire cluster.

# {CRS_HOME}/bin/ocrconfig -replace ocr /dev/rdsk/c4t60060E80056FB30000006FB300001014d0s6

# {CRS_HOME}/bin/ocrconfig -replace ocrmirror /dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6

3. Verify the new configuration.

Check that the ocr.loc file has been updated:

# cat /var/opt/oracle/ocr.loc
#Device/file /dev/rdsk/c4t50060E800000000000002892000003F9d0s6 getting replaced by device /dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6

Check OCR:

# {CRS_HOME}/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     921332
         Used space (kbytes)      :       4548
         Available space (kbytes) :     916784
         ID                       :  776278942
         Device/File Name         : /dev/rdsk/c4t60060E80056FB30000006FB300001014d0s6
                      Device/File integrity check succeeded
         Device/File Name         : /dev/rdsk/c4t60060E80056FB30000006FB300001015d0s6
                      Device/File integrity check succeeded

Cluster registry integrity check succeeded

Logical corruption check succeeded

SAN Migration of the voting disks

1. Back up the voting disks.

Query the original locations:

# /opt/oracrs/bin/crsctl query css votedisk
0.     0    /dev/rdsk/c4t50060E800000000000002892000003FBd0s6
1.     0    /dev/rdsk/c4t50060E800000000000002892000003FCd0s6
2.     0    /dev/rdsk/c4t50060E800000000000002892000003FFd0s6

Back up each voting disk using dd:

# dd if={voting_disk_name} of={backup_file_name}

For example:

# dd if=/dev/rdsk/c4t50060E800000000000002892000003FBd0s6 of=/tmp/voting1

2. Move voting disks.

Starting with 11.1, the voting disk migration can be performed online. Delete each old voting disk and add its replacement:

# /opt/oracrs/bin/crsctl delete css votedisk /dev/rdsk/c4t50060E800000000000002892000003FBd0s6
# /opt/oracrs/bin/crsctl add css votedisk /dev/rdsk/c4t60060E80056FB30000006FB300001017d0s6

# /opt/oracrs/bin/crsctl delete css votedisk /dev/rdsk/c4t50060E800000000000002892000003FCd0s6
# /opt/oracrs/bin/crsctl add css votedisk /dev/rdsk/c4t60060E80056FB30000006FB300001018d0s6

# /opt/oracrs/bin/crsctl delete css votedisk /dev/rdsk/c4t50060E800000000000002892000003FFd0s6
# /opt/oracrs/bin/crsctl add css votedisk /dev/rdsk/c4t60060E80056FB30000006FB300001019d0s6

3. Verify the new configuration.

# /opt/oracrs/bin/crsctl query css votedisk
0.     0    /dev/rdsk/c4t60060E80056FB30000006FB300001017d0s6
1.     0    /dev/rdsk/c4t60060E80056FB30000006FB300001018d0s6
2.     0    /dev/rdsk/c4t60060E80056FB30000006FB300001019d0s6

Reference: Metalink Note 428681.1: OCR / Vote Disk Maintenance Operations (ADD/REMOVE/REPLACE/MOVE), including moving from Raw Devices to Block Devices


34 Responses to Online SAN Storage Migration for Oracle 11g RAC database with ASM

  1. Surachart Opun October 5, 2009 at 11:56 am #

    Nice… and Great to see your post again.

    I'm just curious about the add/drop of ASM disks 😉
    How much data is in the ASM disk group?
    And how long did adding/dropping the disks on the ASM disk groups take?

    Thank you

  2. ittichai October 5, 2009 at 1:28 pm #

    Yeah, it is good to be back too.

    It took roughly 40 minutes for 600GB of data using a rebalance power of 11. Note that this was done with no user activity (planned maintenance).

  3. Surachart Opun October 5, 2009 at 1:41 pm #

    Thank you, that's a great idea for me.

  4. Polprav October 21, 2009 at 12:34 pm #

    Hello from Russia!
    Can I quote a post in your blog with the link to you?

  5. Alice Liour February 9, 2010 at 9:27 am #

    Great information! very helpful!

  6. pundarik March 3, 2010 at 8:09 am #

    need to read the article

  7. cchan September 16, 2010 at 3:18 pm #

    Great article. Do the disks need to be the same size?

    • ittichai September 16, 2010 at 3:54 pm #

      It is not required, but it is highly recommended for ease of storage management.

  8. Raghav January 13, 2011 at 4:22 am #

    Can you please explain in detail how the OCR and voting disks can be moved to a new SAN disk array?
    Is it required to recreate the OCR and voting disks on the new SAN disks?
    If not, can you explain the steps in detail, as I am an amateur with Clusterware?
    Thank you.

    • ittichai January 13, 2011 at 8:07 am #

      Basically, you will need to add the new LUN/disk paths before removing the old ones. The step-by-step instructions are already in the post.

  9. 11bangbang February 11, 2011 at 10:46 am #

    What is your hardware configuration? E.g., what type of Sun server, number of CPUs, CPU speed, RAM, 2/4/8Gb HBAs, etc.? What multipathing software are you using? Any tweaking done to the SCSI queue depth? All those sorts of questions are important to understanding the value of the approach. At ~15GB per minute with full application downtime, I'm not moving a 50TB DB using this method. Good post.

    • ittichai February 11, 2011 at 12:55 pm #

      The goal of the post is not speed. Yes, you will likely achieve better performance with bigger boxes or storage-related tuning. SAN/disk migration by adding and removing disks is a pretty standard approach here. Note that moving ASM disk groups can be done online, and it can be performed in chunks whenever the system has a light load (e.g., over multiple weekends).

  11. Poorna August 15, 2011 at 2:14 am #

    Please explain the same for a Windows Server 2003 64-bit environment.

  12. Alfons August 18, 2011 at 10:08 am #

    Thanks for the complete note!
    I will be performing these actions on our test RAC in the next few days.
    All as prep to move our production RAC to new SAN systems.

    • ittichai August 23, 2011 at 8:39 am #

      Glad that it helps. Good luck with the task.

  13. Rahul September 8, 2011 at 12:31 am #


    I am new to Oracle RAC/ASM.

    Just a query on the above procedure: does the ADD/DROP of disks to a disk group need to be done on one of the nodes or on both?
    • ittichai September 8, 2011 at 7:09 am #


      Since this is a database operation, it can be done on any one of the nodes.

  14. qgrape November 21, 2011 at 7:23 pm #

    I'll migrate a 10g RAC from IBM storage to EMC storage. Can I use this method of adding/dropping disks to an ASM disk group?

    • ittichai November 21, 2011 at 8:57 pm #

      Yes, definitely. It is independent of the underlying storage subsystem as long as the paths to the disks are accessible from the database system.

  15. Steve February 8, 2012 at 2:03 pm #

    I'm considering using a method such as this on a storage refresh project.

    However, it's well documented in the ASM best practices to “Use diskgroups with similarly sized and performing disks”.

    From a performance perspective, if you're moving from a different or older technology, this is likely not going to be the case.

    This would also require me to allocate LUN sizes (at least initially) that I'd no longer consider optimal given newer technologies.

    Care to comment? I'm concerned about not following best practices, then having to go to Oracle support if there are any issues.

    • ittichai February 8, 2012 at 3:59 pm #


      That statement makes sense. All disks used during normal operation should be the same size and from the same storage system if possible. The goal of this document is to move completely from one storage system to another. In the end, we're NOT using disks from both storage systems.

      • Steve February 8, 2012 at 4:05 pm #

        Thanks for replying. So you have found in practice that for a short duration (overnight/weekend/off-peak) this doesn't pose any issue.

        My intention would be to drop the “old” disks as soon as the rebalance is complete, essentially swapping the old array for the new.
        I'll look at those other posts. Appreciate your insight.

        • ittichai February 8, 2012 at 5:19 pm #

          No issue during our migration.

  16. ever March 7, 2012 at 12:50 am #

    Is this approach applicable for ASM external redundancy mode?

  17. Imi March 11, 2012 at 9:58 pm #

    I'm on Clusterware. Any idea if I could just add and remove disks from an OCR_VOTE disk group, leaving it to ASM rebalancing?

    • ittichai March 12, 2012 at 2:15 pm #

      In 11gR2, you can move the OCR and voting disks into ASM, which makes things easier. Just google “oracle move ocr and voting to ASM 11gR2”; there is plenty of good information out there. Once in ASM, the move is simply an add/drop of ASM disks.

  18. Imi March 12, 2012 at 4:23 pm #

    The OCR and voting files are already in ASM. I just wanted to move the files from one SAN to another. While adding and removing disks worked for the OCR files, it didn't really work for the voting files. So what I did was create another disk group and move the OCR and voting files into the new disk group. I thought I could just add and remove new disks from the new SAN into the existing disk group. That didn't work, though.

  19. Moe Sourour October 27, 2012 at 12:47 am #

    ocrconfig -replace ocr is not valid in 11g,
    nor is
    ocrconfig -replace ocrmirror

    The 11g construct is

    ocrconfig -replace <current_location> -replacement <new_location>

    Furthermore, raw devices for the OCR and voting disks are not supported in 11g except if you migrated from 10g. The OCR and voting disks are resident on ASM in 11g.

    • ittichai October 27, 2012 at 9:13 am #

      I think you're talking about 11gR2 (11.2). My article is about 11gR1 (11.1). There have been a lot of changes; in particular, 11.2 allows the OCR and voting disks on ASM. That's not possible in 11.1.

  20. lakshmi September 23, 2014 at 7:26 am #

    We have a 50TB DB and are planning an ASM SAN migration. We did not think of figuring out the hardware configuration. Can you help me better understand your thoughts on this, and what would be the best approach for a size like this?


  1. Blogroll Report 02/10/2009-09/10/2009 « Coskan’s Approach to Oracle - October 14, 2009

    […] Ittichai Chammavanijakul-Online SAN Storage Migration for Oracle 11g RAC database with ASM […]
