Oracle VM 3.4.6 : Part 6 – OVM Block Storage

Oracle VM Block Storage

In my last blog post I presented NFS storage to my Oracle VM Server; in this post I will show how block storage can also be configured for use with OVM.

Create Pure FlashArray volume

Our first step is to create a volume and connect it to our Oracle VM Server. In this example I have created a 1TB volume on one of my lab Pure Storage FlashArrays and connected it to my OVM Server.

Create Volume
Volume details
Connected Hosts

Configure OVM – SAN Servers

Before we present our new Pure Storage block device to our Oracle VM Server, we first need to update the /etc/multipath.conf file to include any vendor-specific settings and restart the multipath service. Below is the entry for a Pure Storage FlashArray.

        device {
                vendor                "PURE"
                product               "FlashArray"
                path_selector         "queue-length 0"
                path_grouping_policy  group_by_prio
                path_checker          tur
                fast_io_fail_tmo      10
                dev_loss_tmo          60
                no_path_retry         0
                hardware_handler      "1 alua"
                prio                  alua
                failback              immediate
                user_friendly_names   no
        }

[root@z-ovm ~]# service multipathd reload

You can check that your changes have been applied with the multipathd show config command. You may find it useful to direct the output to a file and then use view to search for your device, e.g.

[root@z-ovm ~]# multipathd show config > r.r
[root@z-ovm ~]# view r.r
        device {
                vendor "PURE"
                product "FlashArray"
                path_grouping_policy group_by_prio
                getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                path_selector "queue-length 0"
                path_checker tur
                features "0"
                hardware_handler "1 alua"
                prio alua
                failback immediate
                rr_weight uniform
                no_path_retry fail
                rr_min_io 1000
                rr_min_io_rq 1
                fast_io_fail_tmo 10
                dev_loss_tmo 60
                user_friendly_names no
        }

Now log on to your Oracle VM Manager and refresh your previously created SAN Server to discover the newly created volume(s).

Refresh SAN Servers

Click OK to confirm the refresh; the new LUN(s) should now be visible in the Oracle VM Manager Storage tab.

As I have presented the block storage over iSCSI, we can see the IQN (iSCSI Qualified Name) in the Storage Targets; this was set up in Part 4 OVM Storage.

ID: the UUID, a universally unique identifier that Oracle VM Manager assigns to a physical disk.

Page83 ID: the unique SCSI identifier for the physical disk. For a Pure Storage FlashArray this will be set as the vendor ID + the lowercase volume serial number, e.g. 3624a9370 + 513519106E354B37002EB1D1.
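As a quick sanity check, you can derive the expected Page83 ID yourself from the volume serial number shown in the FlashArray GUI. The sketch below uses the serial number from this example; the variable names are just for illustration:

```shell
# Derive the Page83 ID: Pure vendor prefix + lowercased volume serial
# (values taken from the example volume above)
VENDOR_PREFIX=3624a9370
SERIAL=513519106E354B37002EB1D1
echo "${VENDOR_PREFIX}$(echo "$SERIAL" | tr '[:upper:]' '[:lower:]')"
# → 3624a9370513519106e354b37002eb1d1
```

The result should match the WWID reported later by multipath -ll on the Oracle VM Server.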

NOTE: Oracle VM does not currently support User Friendly Names (see the extract from /etc/multipath.conf below).

## IMPORTANT for OVS this must be no. OVS does not support user friendly
## names and instead uses the WWIDs as names.
defaults {
        user_friendly_names no
        getuid_callout "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
        no_path_retry 10
}

Therefore the User Friendly Name will be the same as the Page83 ID.

An optional step now is to right-click the volume and select Edit to provide a more meaningful Name and Description. Once updated, right-click again and this time select Refresh.

Updated volume names

Create Oracle VM Repository

From the Repositories tab click on the green plus to create a new Repository. Enter a Repository Name, select Physical Disk, add a Description, for non-clustered deployments select a Server Pool of None, and click on the magnifying glass to select the Physical Disk previously added. Then click Next.

Select the SAN Server and Name, and check that the User Friendly Name is as expected.

Create a Repository: Select Physical Disk

Oracle VM Server

If we now log on to the Oracle VM Server as root we should see our newly created volume(s).

[root@z-ovm ~]# multipath -ll
3624a9370513519106e354b37002eb1d1 dm-1 PURE,FlashArray
size=1.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='queue-length 0' prio=1 status=active
  |- 47:0:0:1 sdb 8:16 active ready running
  |- 48:0:0:1 sdc 8:32 active ready running
  |- 49:0:0:1 sdd 8:48 active ready running
  `- 50:0:0:1 sde 8:64 active ready running
[root@z-ovm ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 Jan  3 15:26 3624a9370513519106e354b37002eb1d1 -> ../dm-1

Each repository contains the following directory structure under /OVS/Repositories/*/

  • Assemblies
    • Contains pre-configured sets of virtual machines
  • ISOs
    • Contains ISO images which can be used by VMs
  • Templates
    • Contains virtual machine templates
  • VirtualDisks
    • Contains dedicated or shared virtual disks
  • VirtualMachines
    • Contains virtual machine configuration files
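For illustration only, the skeleton above can be recreated as a throwaway mock; the repository ID below is a placeholder, not a real OVM-generated one:

```shell
# Recreate the repository directory skeleton under /tmp for illustration
# (0004fb00demo is a placeholder repository ID)
REPO=/tmp/OVS-demo/Repositories/0004fb00demo
mkdir -p "$REPO"/{Assemblies,ISOs,Templates,VirtualDisks,VirtualMachines}
ls "$REPO"
```

On a real Oracle VM Server these directories are created and managed by OVM itself when the repository is presented; you should never need to create them by hand.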

However, before we log off our Oracle VM Server let's use findmnt to check that our NFS and OCFS2 filesystem mounts look OK. In the example below we can see both our NFS export and block devices as expected.

[root@z-ovm mapper]# findmnt -t nfs
TARGET                                             SOURCE                      FSTYPE OPTIONS
/OVS/Repositories/0004fb00000300009e6780edf21a0ee5 nfs    rw,relatime,vers=3,rsize=52

[root@z-ovm mapper]# findmnt -t ocfs2
TARGET                                             SOURCE                                        FSTYPE OPTIONS
/OVS/Repositories/0004fb0000030000b72890166aa11f29 /dev/mapper/3624a9370513519106e354b37002eb1d1 ocfs2  rw,relatime

You can read up on OCFS2 at:
