Before we can use DataPump with Oracle dNFS, we need to create and export an NFS filesystem, prepare our Linux environment and configure our database.
OK, let’s begin.
Create File System
For this walkthrough I need an NFS server. Fortunately, I have access to a Pure Storage FlashBlade in my lab, so I will be using it to deliver the NFS filesystem throughout this blog post.
Let’s start by creating a filesystem for our Oracle Direct NFS (dNFS) DataPump export. In this example I have used the FlashBlade UI, but I could also have used the Command Line Interface (CLI), REST API, Python SDK or the Ansible collection available from Ansible Galaxy.

Set up Linux
Now, as root, update your /etc/fstab file, create a mount point, mount the exported filesystem and change ownership to oracle, e.g.
$ mkdir /mnt/dnfs_dump
$ mount /mnt/dnfs_dump
$ chown -R oracle:install /mnt/dnfs_dump
$ df -h
Filesystem                       Size  Used Avail Use% Mounted on
192.168.4.100:/oracle_dnfs_dump  1.0T     0  1.0T   0% /mnt/dnfs_dump
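For reference, the /etc/fstab entry behind the mount above might look something like the following. The FlashBlade data VIP and export name come from the earlier steps; the NFS mount options shown are only illustrative, so adjust them for your environment and Oracle's current recommendations.

192.168.4.100:/oracle_dnfs_dump  /mnt/dnfs_dump  nfs  rw,bg,hard,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600,actimeo=0  0 0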
Prepare Oracle Database
Update your oranfstab file ($ORACLE_HOME/dbs/oranfstab, or /etc/oranfstab if your oranfstab is system-wide) to include the new export, e.g.
server: flashblade1
local: 192.168.4.2 path: 192.168.4.100
local: 192.168.5.2 path: 192.168.5.100
local: 192.168.6.2 path: 192.168.6.100
local: 192.168.7.2 path: 192.168.7.100
nfs_version: nfsv3
export: /oracle_dnfs_dump mount: /mnt/dnfs_dump
If you want to learn more about configuring Oracle dNFS, using multipath and checking your configuration, you may want to check out one of my previous blog posts on this topic.
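As an aside, if Direct NFS is not already enabled in your Oracle home, it can be switched on by relinking with the dNFS ODM library. The following is a minimal sketch, assuming the database instances on this node are shut down and you are logged in as the Oracle software owner (the dnfs_off target reverses the change):

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on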
If you are using Oracle RAC, you will need to repeat all of the above on every node within the cluster.
Now bounce your Oracle database so that it re-reads the updated oranfstab file; again, remember to do this on all nodes if you’re using RAC.
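Once the database is back up, you can confirm that dNFS has picked up the new export by querying the dNFS performance views; a quick, illustrative check is shown below (use the GV$ equivalents on RAC). Note that entries typically only appear once the database has actually opened a file over dNFS, so you may need to re-run this after the export has started.

SQL> select svrname, dirname from v$dnfs_servers;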
OK, we are now ready to log on to our database and create an Oracle directory.
SQL> create directory dNFS_dump as '/mnt/dnfs_dump';

Directory created.
Now grant read and write access to it.
SQL> GRANT READ, WRITE ON DIRECTORY DNFS_DUMP TO PUBLIC;

Grant succeeded.
Check that your Oracle directory has been set up.
set line 200
col OWNER heading 'Owner' for a10
col DIRECTORY_NAME heading 'Directory Name' format a20
col DIRECTORY_PATH heading 'Directory Path' format a20
COL ORIGIN_CON_ID heading 'Container ID' format 9

select * from dba_directories where DIRECTORY_NAME like '%DUMP%';
Owner    Directory Name   Directory Path       Container ID
-------- ---------------- -------------------- ------------
SYS      DNFS_DUMP        /mnt/dnfs_dump                  3
I have created two example DataPump parameter files, one for Single Instance and the other for Oracle RAC (Real Application Clusters).
directory=DNFS_DUMP
dumpfile=EXP_soe_%U.dmp
logfile=EXP_soe_si_dNFS.log
schemas=soe
logtime=all
parallel=32
cluster=no
exclude=statistics
And for Oracle RAC.
directory=DNFS_DUMP
dumpfile=EXP_soe_%U.dmp
logfile=EXP_soe_rac_dNFS.log
schemas=soe
logtime=all
parallel=64
cluster=yes
exclude=statistics
Now kick off your DataPump export, e.g.
expdp <username>/<password>@<connection string> PARFILE=rac_dNFS.par
Whilst the export is running you can use the Oracle modnfs package to monitor database I/O and assist in tuning by reviewing the impact of changing the parallel and filesize parameters.
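If you prefer plain SQL, the dNFS dynamic performance views give a similar picture. The queries below are a minimal sketch (the column choice is illustrative, so check the view definitions for your release and use the GV$ equivalents on RAC): the first lists the dump files currently open over dNFS, the second totals dNFS read and write calls across server processes.

SQL> select filename, round(filesize/1024/1024) as size_mb from v$dnfs_files;
SQL> select sum(nfs_read) as nfs_reads, sum(nfs_write) as nfs_writes from v$dnfs_stats;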
Alternatively, you may be able to use OEM (Oracle Enterprise Manager Cloud Control) or your storage platform to provide filesystem monitoring, e.g.
