Accelerating Oracle DataPump with Oracle dNFS (Direct NFS)

Before we can use DataPump with Oracle dNFS we need to create and export an NFS filesystem, prepare our Linux environment and configure our database.

Ok let’s begin.

Create File System

For this walkthrough I need an NFS server; fortunately, I have access to a Pure Storage FlashBlade in my lab, so I will be using it to deliver my NFS filesystem throughout this blog post.

Let’s start by creating a filesystem for our Oracle Direct NFS (dNFS) DataPump export. In this example I have used the FlashBlade UI, but I could have also used the Command Line Interface (CLI), REST API, Python SDK or the Ansible collection available from Ansible Galaxy.

Create FlashBlade dNFS Filesystem
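As an illustration of the Ansible route, a task using the purefb_fs module from the purestorage.flashblade collection might look something like this. The filesystem name, size and credential variables are assumptions; check the collection documentation for the current parameter names:

```yaml
# Hypothetical sketch: create an NFSv3-exported filesystem on FlashBlade.
# The fb_url/api_token variables and the filesystem name and size are placeholders.
- name: Create dNFS dump filesystem
  purestorage.flashblade.purefb_fs:
    name: oracle_dnfs_dump
    size: 1T
    nfsv3: true
    fb_url: "{{ flashblade_url }}"
    api_token: "{{ flashblade_api_token }}"
```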

Setup Linux

Now as root update your /etc/fstab file, create a mount point, mount the exported filesystem and change ownership to oracle e.g.

$ mkdir /mnt/dnfs_dump
$ mount /mnt/dnfs_dump
$ chown -R oracle:oinstall /mnt/dnfs_dump
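The /etc/fstab entry itself is not shown above; assuming the FlashBlade data VIP resolves as flashblade1 and the export is named /oracle_dnfs_dump (both names used later in this post), it might look like this. The mount options follow common Oracle-on-NFS recommendations; verify them against your platform's documentation:

```
flashblade1:/oracle_dnfs_dump  /mnt/dnfs_dump  nfs  rw,bg,hard,rsize=1048576,wsize=1048576,tcp,vers=3,timeo=600  0 0
```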

$ df -h /mnt/dnfs_dump
Filesystem                     Size  Used Avail Use% Mounted on
flashblade1:/oracle_dnfs_dump  1.0T     0  1.0T   0% /mnt/dnfs_dump

Prepare Oracle Database

Update your oranfstab file ($ORACLE_HOME/dbs/oranfstab, or /etc/oranfstab if your oranfstab is system-wide) to include the new export, e.g.

server: flashblade1
local: <local IP> path: <FlashBlade data VIP>
local: <local IP> path: <FlashBlade data VIP>
local: <local IP> path: <FlashBlade data VIP>
local: <local IP> path: <FlashBlade data VIP>
nfs_version: nfsv3
export: /oracle_dnfs_dump mount: /mnt/dnfs_dump

If you want to learn more about configuring Oracle dNFS, using multipathing and checking your configuration, you may want to check out one of my previous blog posts on this.

If you are using Oracle RAC you will need to repeat all of the above on every node within the cluster.

Now bounce your Oracle database so that it re-reads the updated oranfstab file; again, remember to do this on all nodes if you're using RAC.
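Once the database is back up, you can confirm that dNFS has registered the new server and export by querying v$dnfs_servers (note this view is only populated after the database has performed some IO through dNFS):

```
SQL> select svrname, dirname from v$dnfs_servers;
```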

OK, we are now ready to log on to our database and create an Oracle directory.

SQL> create directory dNFS_dump as '/mnt/dnfs_dump';
Directory created.

Now grant read and write access to it, e.g.

SQL> grant read, write on directory dNFS_dump to <username>;
Grant succeeded.

Check that your Oracle directory has been set up.

set linesize 200
col OWNER heading 'Owner' format a10
col DIRECTORY_NAME heading 'Directory Name' format a20
col DIRECTORY_PATH heading 'Directory Path' format a20
col ORIGIN_CON_ID heading 'Container ID' format 9

select * from dba_directories where DIRECTORY_NAME like '%DUMP%';
Owner    Directory Name   Directory Path       Container ID
-------- ---------------- -------------------- ------------
SYS      DNFS_DUMP        /mnt/dnfs_dump                  3

I have created two example DataPump parameter files, one for Single Instance and the other for Oracle RAC (Real Application Clusters).


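The parameter files themselves appeared as screenshots in the original post and are not reproduced here. A minimal sketch of what the RAC version (rac_dNFS.par, referenced in the export command below) might contain; the dump file name, parallel degree and filesize are assumptions:

```
# rac_dNFS.par - hypothetical example values
directory=DNFS_DUMP
dumpfile=rac_dNFS_%U.dmp
logfile=rac_dNFS.log
schemas=<schema>
parallel=4
filesize=32g
# cluster=y is the RAC-specific setting: it allows DataPump workers to run
# on any node, which is why the filesystem must be mounted on every node
cluster=y
```

For Single Instance the same file works with the cluster parameter removed (or set to n).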
Now kick off your DataPump export, e.g.

expdp <username>/<password>@<connection string> PARFILE=rac_dNFS.par

Whilst the export is running you can use the Oracle dNFS performance views to monitor database IO and assist in tuning by reviewing the impact of changing the parallel and filesize parameters.
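One simple check while the export runs is to query v$dnfs_files, which only lists files opened through Direct NFS, so seeing your dump files here also confirms the export really is using dNFS:

```
SQL> select filename, filesize from v$dnfs_files;
```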

Alternatively, you may be able to use OEM (Oracle Enterprise Manager Cloud Control) or your storage platform to provide filesystem monitoring, e.g.

Monitor FlashBlade performance UI
