Background
There are many times when a shared file system is a hard requirement, or simply makes life easier. If you don’t have access to a Pure Storage FlashBlade or another NFS server and need a shared file system, then Oracle Automatic Storage Management Cluster File System (ACFS) may be worth considering.
Introduced in Oracle Automatic Storage Management (ASM) 11g Release 2 (11.2.0.3), Oracle ASM Cluster File System (ACFS) has continued to expand the file types it supports with each release. Oracle 19c now supports database, application and general-purpose files, including RMAN backups; you can check the latest restrictions and guidelines here.
Block Volume Creation
Let’s start by creating a new volume. For this blog I will be using my lab Pure Storage FlashArray to provide the block storage for the ASM Cluster File System.
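For reference, creating and connecting such a volume from the FlashArray CLI looks roughly like the sketch below; the volume size, volume name and host names are illustrative assumptions and should be adapted to your environment.

pureuser@flasharray> purevol create --size 10T dg_rac_acfs
pureuser@flasharray> purehost connect --vol dg_rac_acfs z-rac1
pureuser@flasharray> purehost connect --vol dg_rac_acfs z-rac2

The same can of course be done from the FlashArray GUI or REST API.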

Multipath Configuration
Having created the FlashArray volume and connected it to my Oracle 19c RAC nodes, I will add an entry to ‘/etc/multipath.conf’ on each database server; this allows us to work with more human-readable device names.
In the example below, the wwid (World Wide Identifier) is the Vendor ID plus the serial number, e.g. ‘3624a9370’ (for Pure Storage) + ‘50c939582b0f46c0003b69e4’.
Note that the ‘wwid’ needs to be in lowercase, and the ‘alias’ name for ASM disks must be less than 30 characters, alphanumeric, with underscore (‘_’) as the only permitted special character.
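If you want to sanity-check candidate alias names before editing the configuration, a small shell helper like the one below can enforce the constraints quoted above. The function name and messages are illustrative, not an Oracle-supplied tool.

```shell
# Quick sanity check for a proposed multipath alias against the ASM disk
# naming constraints above: fewer than 30 characters, alphanumeric plus '_'.
check_asm_alias() {
  alias_name="$1"
  # Reject names longer than 30 characters.
  if [ "${#alias_name}" -gt 30 ]; then
    echo "invalid: longer than 30 characters"
    return 1
  fi
  # Reject any character outside A-Z, a-z, 0-9 and underscore.
  case "$alias_name" in
    *[!A-Za-z0-9_]*)
      echo "invalid: only A-Z, a-z, 0-9 and '_' are allowed"
      return 1
      ;;
  esac
  echo "ok"
}

check_asm_alias dg_rac_acfs      # -> ok
check_asm_alias dg-rac-acfs      # hyphen is rejected
```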
multipaths {
    ...
    multipath {
        wwid   3624a937050c939582b0f46c0003b69e4
        alias  dg_rac_acfs
    }
    ...
}
Re-scan SCSI Bus
[root@z-rac1:~]# rescan-scsi-bus.sh -a
Reload multipath configuration
[root@z-rac1:~]# service multipathd reload
Load and display the multipath configuration, device mapper and other components.
[root@z-rac1:~]# multipath -ll
For this post I will use UDEV rules; however, if you want to try the ASM Filter Driver you can read how to use it here. Update your UDEV rules file and reload the rules.
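As an illustration, a minimal UDEV rule for the multipath alias might look like the fragment below; the file name and ownership values are assumptions based on common practice and should match your own Grid Infrastructure installation.

# /etc/udev/rules.d/99-oracle-asmdevices.rules (illustrative path)
# Match the device-mapper alias defined in multipath.conf and hand
# ownership of the device to the Oracle software owner.
ENV{DM_NAME}=="dg_rac_acfs", OWNER:="oracle", GROUP:="oinstall", MODE:="0660"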
[root@z-rac1:~]# udevadm trigger
We are now ready to use our volume alias with Oracle ASM.
Oracle ASM Configuration
Create an ASM disk group using the ASM Configuration Assistant asmca (GUI) or the asmcmd command-line tool.
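The disk group can also be created from SQL*Plus connected to the ASM instance. The statement below is a sketch using the multipath alias created earlier; the external redundancy choice and attribute values are illustrative, and note that the COMPATIBLE.ADVM attribute must be set before ACFS volumes can be created in the disk group.

SQL> CREATE DISKGROUP ACFS EXTERNAL REDUNDANCY
       DISK '/dev/mapper/dg_rac_acfs'
       ATTRIBUTE 'compatible.asm' = '19.0', 'compatible.advm' = '19.0';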

Using asmcmd we can view the Oracle ASM disk group information (lsdg) and list the disks (lsdsk).
[oracle@z-rac1 ~]$ asmcmd lsdg -g ACFS
Inst_ID  State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512             512   4096  4194304  10485760  10485588                0        10485588              0             N  ACFS/
      2  MOUNTED  EXTERN  N         512             512   4096  4194304  10485760  10485588                0        10485588              0             N  ACFS/
[oracle@z-rac1 ~]$ asmcmd lsdsk -t -g -G ACFS
Inst_ID  Create_Date  Mount_Date  Repair_Timer  Path
      2  30-OCT-20    30-OCT-20              0  /dev/mapper/dg_rac_acfs
      1  30-OCT-20    30-OCT-20              0  /dev/mapper/dg_rac_acfs
Creating an ASM File System
Before we start creating an ACFS file system, let’s check the driver version with acfsdriverstate version.
[oracle@z-rac1 ~]$ acfsdriverstate version
ACFS-9325: Driver OS kernel version = 4.14.35-1902.0.9.el7uek.x86_64.
ACFS-9326: Driver build number = RELEASE.
ACFS-9212: Driver build version = 19.0.0.0.0 (19.3.0.0.0).
ACFS-9547: Driver available build number = RELEASE.
ACFS-9548: Driver available build version = 19.0.0.0.0 (19.3.0.0.0).
Now let’s create a 1TB volume called demobck in our mounted ACFS disk group with the asmcmd volcreate command.
[oracle@z-rac1 ~]$ asmcmd volcreate -G ACFS -s 1T demobck
Use the asmcmd volinfo command to check volume device details.
[oracle@z-rac1 ~]$ asmcmd volinfo -G ACFS demobck
Diskgroup Name: ACFS

         Volume Name: DEMOBCK
         Volume Device: /dev/asm/demobck-142
         State: ENABLED
         Size (MB): 1048576
         Resize Unit (MB): 64
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:
Or via SQL*Plus if you prefer.
SQL> SELECT volume_name, volume_device FROM V$ASM_VOLUME
     WHERE volume_name = 'DEMOBCK';

VOLUME_NAME  VOLUME_DEVICE
-----------  -------------
DEMOBCK      /dev/asm/demobck-142
We can use the Linux fdisk command to see the device details of our ACFS volume.
[oracle@z-rac1 ~]$ fdisk -l /dev/asm/demobck-142

Disk /dev/asm/demobck-142: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
OK, now let’s create an ACFS filesystem on our device.
[oracle@z-rac1 ~]$ mkfs -t acfs /dev/asm/demobck-142
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/demobck-142
mkfs.acfs: volume size               = 1099511627776  ( 1.00 TB )
mkfs.acfs: Format complete.
Before we can mount our ACFS file system we need to create a Linux mount point on all nodes within the RAC cluster.
[root@z-rac1:~]# mkdir -p /mnt/demobck
We now mount our ACFS file system as root.
[root@z-rac1:~]# mount -t acfs /dev/asm/demobck-142 /mnt/demobck
Add the Oracle Automatic Storage Management Cluster File System
[root@z-rac1:~]# srvctl add filesystem -m /mnt/demobck -d /dev/asm/demobck-142
Start the Oracle Automatic Storage Management Cluster File System
[root@z-rac1:~]# srvctl start filesystem -d /dev/asm/demobck-142
Oracle ACFS mounts
If we check the status we can see our ACFS file system has been mounted on both nodes of our Oracle 19c RAC cluster.
[root@z-rac1:~]# srvctl status filesystem -d /dev/asm/demobck-142
ACFS file system /mnt/demobck is mounted on nodes z-rac1,z-rac2
[oracle@z-rac1 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/asm/demobck-142  1.0T  2.7G 1022G   1% /mnt/demobck

[oracle@z-rac1 ~]$ mount | grep demobck
/dev/asm/demobck-142 on /mnt/demobck type acfs (rw,relatime,device,rootsuid,ordered)
And again on our second node.
[oracle@z-rac2 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
...
/dev/asm/demobck-142  1.0T  2.7G 1022G   1% /mnt/demobck

[oracle@z-rac2 ~]$ mount | grep demobck
/dev/asm/demobck-142 on /mnt/demobck type acfs (rw,relatime,device,rootsuid,ordered)
Before we perform any tests, let’s change ownership and open up permissions.
[root@z-rac1:~]# chown -R oracle:oinstall /mnt/demobck
[root@z-rac1:~]# chmod -R 775 /mnt/demobck
Quick ACFS test
Let’s perform a quick read/write test on the shared file system.
[oracle@z-rac1 ~]$ echo "test file created by Ron from `hostname`" > /mnt/demobck/test.txt
[oracle@z-rac2 ~]$ echo "test file created by Ron from `hostname`" >> /mnt/demobck/test.txt
[oracle@z-rac2 ~]$ cat /mnt/demobck/test.txt
test file created by Ron from z-rac1.uklab.purestorage.com
test file created by Ron from z-rac2.uklab.purestorage.com
Done! In this blog I have shown how to take a new block volume and configure it for use as an Oracle ASM Cluster File System.
In future posts I plan to look at other topics including resizing ACFS filesystems, taking snapshots, using ACFS for RMAN backups, and configuring and using HA-NFS.