
How to Configure NVMe over TCP (NVMe/TCP) on Oracle Linux 9

Background

Over recent months I have had an increasing number of discussions around the use of Non-Volatile Memory Express™ (NVMe™) over TCP (NVMe/TCP) storage with Oracle, so I thought it was time I documented how to configure NVMe/TCP on Oracle Linux 9 for Oracle Database 19c.

Is NVMe over TCP supported?

Yes, Oracle supports NVMe over Fabrics (NVMe-oF) storage devices for Oracle Database 19c binaries, data files and recovery files.

See the supported storage options for Oracle Database 19c for the latest information.

Note: In Oracle Database 19c, NVMe-oF uses the kernel initiator, which exposes the disks as block devices; this changes with Oracle AI Database 26ai, so expect another blog post when on-premises 26ai reaches GA.

For this blog I will use a 2-node Oracle Database 19c RAC cluster running Oracle Linux 9, with NVMe storage presented over TCP from a Pure Storage FlashArray.

Linux Server

Let’s start by confirming the version of Oracle Linux we are using.

# cat /etc/oracle-release
Oracle Linux Server release 9.6

Install NVMe-CLI

On each RAC node, install the nvme-cli utility:

# dnf install nvme-cli

Identify Host NQN

Identify the host NQN (NVMe Qualified Name) for each Oracle Database RAC node using nvme show-hostnqn, for example:

# nvme show-hostnqn
nqn.2014-08.org.nvmexpress:uuid:80cb3a5d-1dd3-ed11-9bc7-a4bf0195e9f8

Alternatively, you can read the NQN from the /etc/nvme/hostnqn file created when you installed nvme-cli.

# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:80cb3a5d-1dd3-ed11-9bc7-a4bf0195e9f8
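
If the /etc/nvme/hostnqn file is missing for any reason, it can be regenerated with nvme gen-hostnqn, for example:

# nvme gen-hostnqn > /etc/nvme/hostnqn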

Loading NVMe-TCP Kernel Module

Load the nvme-tcp module and confirm it is loaded on each RAC node:

# modprobe nvme-tcp

# lsmod | grep nvme
nvme_tcp 57344 0
nvmet_fc 49152 1 lpfc
nvmet 188416 1 nvmet_fc
nvme_fc 65536 1 lpfc
nvme 65536 3
nvme_fabrics 36864 2 nvme_tcp,nvme_fc
nvme_core 208896 10 nvmet,nvme_tcp,nvme,nvme_fc,nvme_fabrics
t10_pi 16384 3 nvmet,sd_mod,nvme_core
nvme_common 24576 2 nvmet,nvme_core

To ensure the nvme_tcp module is loaded automatically after a host reboot, add nvme_tcp to /etc/modules-load.d/nvme-tcp.conf:

# echo nvme_tcp  >> /etc/modules-load.d/nvme-tcp.conf
# cat /etc/modules-load.d/nvme-tcp.conf
nvme_tcp

Pure FlashArray

Log on to the Pure FlashArray Web UI and create a new host by navigating to Storage -> Hosts and then clicking + to Create Host.

Create Host

Now, click on Configure NQNs in the Host Ports panel.

Provide the NQN details previously obtained using nvme show-hostnqn, and repeat for all RAC nodes.

Configure NVMe-oF NQNs

Now create a Host Group and add Member Hosts, in this example the Oracle RAC nodes.

From the above we can see the Interface is reported as NVMe-oF.

Create the required volumes, providing meaningful names and sizes.

Return to Storage -> Hosts, select the Host Group and connect the newly created volumes.

FlashArray NVMe/TCP Volumes

Log on to the Pure FlashArray CLI and use purenetwork eth list --service nvme-tcp to list the FlashArray’s NVMe/TCP addresses, for example:

pureuser@z-x90-a> purenetwork eth list --service nvme-tcp
Name Enabled Type Subnet Address Mask Gateway MTU MAC Speed Services Subinterfaces
ct0.eth14 True physical - 192.168.130.10 255.255.255.0 192.168.130.1 1500 b8:ce:f6:e9:62:0b 25.00 Gb/s nvme-tcp -
ct0.eth15 True physical - 192.168.131.10 255.255.255.0 192.168.131.1 1500 b8:ce:f6:e9:62:0a 25.00 Gb/s nvme-tcp -
ct1.eth14 True physical - 192.168.130.11 255.255.255.0 192.168.130.1 1500 b8:ce:f6:e9:5c:b7 25.00 Gb/s nvme-tcp -
ct1.eth15 True physical - 192.168.131.11 255.255.255.0 192.168.131.1 1500 b8:ce:f6:e9:5c:b6 25.00 Gb/s nvme-tcp -
pureuser@z-x90-a>

Use purevol list <volume name> --human-readable --total to list the volumes and serial numbers, for example:

pureuser@z-x90-a> purevol list dg_*nvme --human-readable --total
Name Size Source Created Serial
dg_data01_nvme 1T - 2025-12-23 13:44:21 GMT 6C1B16CE1C034D1C05569B17
dg_data02_nvme 1T - 2025-12-23 13:44:38 GMT 6C1B16CE1C034D1C05569B25
dg_data03_nvme 1T - 2025-12-23 13:44:55 GMT 6C1B16CE1C034D1C05569B26
dg_data04_nvme 1T - 2025-12-23 13:45:10 GMT 6C1B16CE1C034D1C05569B28
dg_fra01_nvme 4T - 2025-12-23 13:45:34 GMT 6C1B16CE1C034D1C05569B29
dg_redo01_nvme 100G - 2025-12-23 13:45:54 GMT 6C1B16CE1C034D1C05569B2A
(total) 8292G - - -

Linux NVMe Configuration

NVMe Discover

Returning to the Oracle Database server, perform an nvme discover using the IPs obtained with purenetwork eth list --service nvme-tcp to discover the available subsystems on the NVMe controller.

nvme discover
--transport, -t (transport type)
--traddr, -a (transport address)
--trsvcid, -s (transport service id, e.g. IP port)

For example:

# nvme discover --transport tcp --traddr 192.168.130.10 --trsvcid 4420 | grep -E 'traddr|subnqn|Entry'
=====Discovery Log Entry 0======
subnqn: nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d
traddr: 192.168.131.11
=====Discovery Log Entry 1======
subnqn: nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d
traddr: 192.168.130.11
=====Discovery Log Entry 2======
subnqn: nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d
traddr: 192.168.130.10
=====Discovery Log Entry 3======
subnqn: nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d
traddr: 192.168.131.10

NVMe Connect

Now that we have confirmed connectivity, we can connect using nvme connect or nvme connect-all.

Usage: nvme connect <device> [OPTIONS]
Connect to NVMeoF subsystem

Options:
[ --transport=<STR>, -t <STR> ] --- transport type
[ --nqn=<STR>, -n <STR> ] --- subsystem nqn
[ --traddr=<STR>, -a <STR> ] --- transport address
[ --trsvcid=<STR>, -s <STR> ] --- transport service id (e.g. IP port)
[ --keep-alive-tmo=<NUM>, -k <NUM> ] --- keep alive timeout period in seconds
[ --ctrl-loss-tmo=<NUM>, -l <NUM> ] --- controller loss timeout period in seconds

NVMe connect example:

# nvme connect --transport tcp --traddr 192.168.131.11 --trsvcid 4420 --nqn nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d --keep-alive-tmo 15 --ctrl-loss-tmo 3600
connecting to device: nvme2

# nvme connect --transport tcp --traddr 192.168.130.11 --trsvcid 4420 --nqn nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d --keep-alive-tmo 15 --ctrl-loss-tmo 3600
connecting to device: nvme3

# nvme connect --transport tcp --traddr 192.168.130.10 --trsvcid 4420 --nqn nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d --keep-alive-tmo 15 --ctrl-loss-tmo 3600
connecting to device: nvme4

# nvme connect --transport tcp --traddr 192.168.131.10 --trsvcid 4420 --nqn nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d --keep-alive-tmo 15 --ctrl-loss-tmo 3600
connecting to device: nvme5

Alternatively, using nvme connect-all, for example:

# nvme connect-all --transport tcp --traddr 192.168.130.10 --trsvcid 4420 --keep-alive-tmo 15 --ctrl-loss-tmo 3600

Verification

Verify that the expected paths to the array have been established with nvme list-subsys, for example:

# nvme list-subsys -v
nvme-subsys2 - NQN=nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80cb3a5d-1dd3-ed11-9bc7-a4bf0195e9f8
model=Pure Storage FlashArray
firmware=6.7.7
iopolicy=numa
type=nvm
\
+- nvme2 tcp traddr=192.168.131.11,trsvcid=4420,src_addr=192.168.131.70 live
+- nvme3 tcp traddr=192.168.130.11,trsvcid=4420,src_addr=192.168.130.70 live
+- nvme4 tcp traddr=192.168.130.10,trsvcid=4420,src_addr=192.168.130.70 live
+- nvme5 tcp traddr=192.168.131.10,trsvcid=4420,src_addr=192.168.131.70 live

NVMe List Controllers

Verify that the volumes are exposed on each of the paths using nvme list, for example:

# nvme list -v
Subsystem Subsystem-NQN Controllers
---------------- ------------------------------------------------------------------------------------------------ ----------------
nvme-subsys2 nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d nvme2, nvme3, nvme4, nvme5

Device Cntlid SN MN FR TxPort Address Slot Subsystem Namespaces
---------------- ------ -------------------- ---------------------------------------- -------- ------ -------------- ------ ------------ ----------------
nvme2 39 318B69BEFD9F9F2D Pure Storage FlashArray 6.7.7 tcp traddr=192.168.131.11,trsvcid=4420,src_addr=192.168.131.70 nvme-subsys2 nvme2n1, nvme2n2, nvme2n3, nvme2n4, nvme2n5, nvme2n6
nvme3 41 318B69BEFD9F9F2D Pure Storage FlashArray 6.7.7 tcp traddr=192.168.130.11,trsvcid=4420,src_addr=192.168.130.70 nvme-subsys2 nvme2n1, nvme2n2, nvme2n3, nvme2n4, nvme2n5, nvme2n6
nvme4 38 318B69BEFD9F9F2D Pure Storage FlashArray 6.7.7 tcp traddr=192.168.130.10,trsvcid=4420,src_addr=192.168.130.70 nvme-subsys2 nvme2n1, nvme2n2, nvme2n3, nvme2n4, nvme2n5, nvme2n6
nvme5 40 318B69BEFD9F9F2D Pure Storage FlashArray 6.7.7 tcp traddr=192.168.131.10,trsvcid=4420,src_addr=192.168.131.70 nvme-subsys2 nvme2n1, nvme2n2, nvme2n3, nvme2n4, nvme2n5, nvme2n6

Device Generic NSID Usage Format Controllers
----------------- ----------------- ---------- -------------------------- ---------------- ----------------
/dev/nvme2n1 /dev/ng2n1 0xda 1.10 TB / 1.10 TB 512 B + 0 B nvme2, nvme3, nvme4, nvme5
/dev/nvme2n2 /dev/ng2n2 0xe4 1.10 TB / 1.10 TB 512 B + 0 B nvme2, nvme3, nvme4, nvme5
/dev/nvme2n3 /dev/ng2n3 0xec 1.10 TB / 1.10 TB 512 B + 0 B nvme2, nvme3, nvme4, nvme5
/dev/nvme2n4 /dev/ng2n4 0xed 1.10 TB / 1.10 TB 512 B + 0 B nvme2, nvme3, nvme4, nvme5
/dev/nvme2n5 /dev/ng2n5 0xee 4.40 TB / 4.40 TB 512 B + 0 B nvme2, nvme3, nvme4, nvme5
/dev/nvme2n6 /dev/ng2n6 0xf4 107.37 GB / 107.37 GB 512 B + 0 B nvme2, nvme3, nvme4, nvme5

The nvme id-ctrl command sends an Identify Controller command to the specified NVMe device, returning controller capabilities, features, and identification data, e.g. serial number (sn), model number (mn) and firmware revision (fr). For example:

# nvme id-ctrl /dev/nvme2n1
NVME Identify Controller:
vid       : 0x1d00
ssvid     : 0x1d00
sn        : 318B69BEFD9F9F2D
mn        : Pure Storage FlashArray
fr        : 6.7.7

NVMe Persistence

Create a file /opt/nvme_connect.sh containing the nvme connect commands previously used.
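
A minimal sketch of /opt/nvme_connect.sh, reusing the subsystem NQN and portal addresses from the nvme connect examples above (adjust these values for your own array), might look like this:

#!/bin/bash
# Re-establish the NVMe/TCP sessions to the FlashArray.
# NQN and portal addresses taken from the examples above - change to match your environment.
SUBNQN="nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d"
for TRADDR in 192.168.131.11 192.168.130.11 192.168.130.10 192.168.131.10
do
    nvme connect --transport tcp --traddr ${TRADDR} --trsvcid 4420 \
         --nqn ${SUBNQN} --keep-alive-tmo 15 --ctrl-loss-tmo 3600
done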

Now create a file /etc/systemd/system/nvme_fabrics.persistent.service:

[Unit]
Description=NVMe-oF persistent connection
Requires=network.target
After=systemd-modules-load.service network.target

[Service]
Type=oneshot
ExecStart=/opt/nvme_connect.sh
StandardOutput=journal

[Install]
WantedBy=multi-user.target timers.target

Remember to chmod (755) /opt/nvme_connect.sh to make the script executable before moving on.
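
For example, on each node (the systemctl daemon-reload simply makes systemd pick up the newly created unit file):

# chmod 755 /opt/nvme_connect.sh
# systemctl daemon-reload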

NVMe Persistence Test

Disconnect the current nvme devices using the nvme-cli command nvme disconnect, for example:

# nvme disconnect --nqn nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d
NQN:nqn.2010-06.com.purestorage:flasharray.318b69befd9f9f2d disconnected 4 controller(s)

Now try to start the new service with systemctl start:

# systemctl start nvme_fabrics.persistent.service

Confirm the service ran successfully with systemctl status:

# systemctl status nvme_fabrics.persistent.service
○ nvme_fabrics.persistent.service - NVMe-oF persistent connection
Loaded: loaded (/etc/systemd/system/nvme_fabrics.persistent.service; enabled; preset: disabled)
Active: inactive (dead) since Fri 2026-01-09 13:57:50 GMT; 13s ago
Process: 3642903 ExecStart=/opt/nvme_connect.sh (code=exited, status=0/SUCCESS)
Main PID: 3642903 (code=exited, status=0/SUCCESS)
CPU: 94ms

If OK, enable the service with systemctl enable.

# systemctl enable nvme_fabrics.persistent.service
Created symlink /etc/systemd/system/multi-user.target.wants/nvme_fabrics.persistent.service → /etc/systemd/system/nvme_fabrics.persistent.service.
Created symlink /etc/systemd/system/timers.target.wants/nvme_fabrics.persistent.service → /etc/systemd/system/nvme_fabrics.persistent.service.

Multipathing

My Oracle RAC lab servers are currently configured to use Device Mapper (DM) Multipathing rather than native NVMe multipathing, so to avoid any multipathing issues I will disable native NVMe multipathing.

Let’s check whether native NVMe multipathing is enabled by inspecting /sys/module/nvme_core/parameters/multipath; if it returns ‘Y’ it is enabled.

# cat /sys/module/nvme_core/parameters/multipath
Y

If it is enabled (‘Y’), disable it using grubby. The change also needs to be copied to the boot filesystem so that the parameter is set when nvme_core is loaded at boot time.

# grubby --update-kernel=ALL --args="nvme_core.multipath=N"
# dracut -f

Before we reboot, check the above has been successful using grubby --info=DEFAULT.
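
For example, the kernel arguments reported for the default boot entry should now include nvme_core.multipath=N:

# grubby --info=DEFAULT | grep args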

How to Map an EUI to FlashArray UID

We can map an EUI back to a FlashArray NVMe-oF volume using the Namespace Globally Unique Identifier (NGUID) presented for every volume on our database servers.

After a reboot perform a multipath -l and identify the NVMe disks, for example:

# multipath -l | grep eui
eui.006c1b16ce1c034d24a9371c05569b17 dm-27 NVME,Pure Storage FlashArray
eui.006c1b16ce1c034d24a9371c05569b25 dm-28 NVME,Pure Storage FlashArray
eui.006c1b16ce1c034d24a9371c05569b26 dm-29 NVME,Pure Storage FlashArray
eui.006c1b16ce1c034d24a9371c05569b28 dm-30 NVME,Pure Storage FlashArray
eui.006c1b16ce1c034d24a9371c05569b29 dm-31 NVME,Pure Storage FlashArray
eui.006c1b16ce1c034d24a9371c05569b2a dm-32 NVME,Pure Storage FlashArray

From the above we can see that NVMe devices return an EUI-64 (Extended Unique Identifier) value rather than a WWID (World Wide Identifier).

Note: this does not follow the WWID format, which includes the Pure vendor prefix ‘3624a9370’.

For the Pure Storage FlashArray an NGUID is broken into three parts, as per the NVM Express Base Specification.

Breaking down eui.006c1b16ce1c034d24a9371c05569b29:

FlashArray ID

The first 8 bytes contain a leading ‘00’ followed by the first 7 bytes of the FlashArray ID, which can be seen using the Pure CLI purearray list command, for example:

pureuser@z-x90-a> purearray list
Name ID OS Version
z-x90-a 6c1b16ce-1c03-4d1c-b974-33405cf8d565 Purity//FA 6.7.7

Alternatively, use the Pure UI or the purevol list <volume name> command, for example:

pureuser@z-x90-a> purevol list dg_fra01_nvme
Name Size Source Created Serial
dg_fra01_nvme 4T - 2025-12-23 13:45:34 GMT 6C1B16CE1C034D1C05569B29

Pure Company ID

The next 3 bytes of the string are the unique company ID for Pure Storage (24a937).

Volume Identifier

The last 5 bytes of the string are the unique identifier of the individual volume on the FlashArray.

We can obtain this, again, using the purevol list <volume name> command, for example:

pureuser@z-x90-a> purevol list dg_fra01_nvme
Name Size Source Created Serial
dg_fra01_nvme 4T - 2025-12-23 13:45:34 GMT 6C1B16CE1C034D1C05569B29

Giving us:

eui. + 00 + 6c1b16ce1c034d (FlashArray ID) + 24a937 (Pure company ID) + 1c05569b29 (volume) = eui.006c1b16ce1c034d24a9371c05569b29
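
If you want to script this check, the following sketch (assuming the NGUID layout described above) splits an EUI into its three parts and rebuilds the serial reported by purevol list:

#!/bin/bash
# Decompose a FlashArray NGUID/EUI into its parts - a sketch based on the layout above.
EUI="eui.006c1b16ce1c034d24a9371c05569b29"
HEX=${EUI#eui.00}                   # drop the "eui." prefix and the leading 00
ARRAY_ID=${HEX:0:14}                # first 7 bytes of the FlashArray ID
OUI=${HEX:14:6}                     # Pure Storage company ID (24a937)
VOL=${HEX:20:10}                    # volume-unique last 5 bytes
echo "FlashArray ID prefix : ${ARRAY_ID}"
echo "Pure company ID      : ${OUI}"
echo "Volume suffix        : ${VOL}"
echo "purevol serial       : ${ARRAY_ID^^}${VOL^^}"   # matches the Serial column of purevol list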

DM Multipath

Before we label disks for Oracle ASM with ASMLib v3.1, let’s configure multipathing by adding an alias for each eui device, mapping it to its FlashArray volume name, in /etc/multipath.conf.

First, create a device entry within the devices section.

device {
        vendor                      "NVME"
        product                     "Pure Storage FlashArray"
        path_selector               "queue-length 0"
        path_grouping_policy        group_by_prio
        prio                        ana
        failback                    immediate
        fast_io_fail_tmo            10
        user_friendly_names         no
        no_path_retry               0
        features                    0
        dev_loss_tmo                60
    }

Confirm FA volume names and Serial numbers, for example.

pureuser@z-x90-a> purevol list dg_*_nvme
Name Size Source Created Serial
dg_data01_nvme 1T - 2025-12-23 13:44:21 GMT 6C1B16CE1C034D1C05569B17
dg_data02_nvme 1T - 2025-12-23 13:44:38 GMT 6C1B16CE1C034D1C05569B25
dg_data03_nvme 1T - 2025-12-23 13:44:55 GMT 6C1B16CE1C034D1C05569B26
dg_data04_nvme 1T - 2025-12-23 13:45:10 GMT 6C1B16CE1C034D1C05569B28
dg_fra01_nvme 4T - 2025-12-23 13:45:34 GMT 6C1B16CE1C034D1C05569B29
dg_redo01_nvme 100G - 2025-12-23 13:45:54 GMT 6C1B16CE1C034D1C05569B2A

And add them to the multipaths section of /etc/multipath.conf, for example:

     multipath {
            wwid        eui.006c1b16ce1c034d24a9371c05569b17
            alias       dg_data01_nvme
     }
     multipath {
            wwid        eui.006c1b16ce1c034d24a9371c05569b25
            alias       dg_data02_nvme
     }
     multipath {
            wwid        eui.006c1b16ce1c034d24a9371c05569b26
            alias       dg_data03_nvme
     }
     multipath {
            wwid        eui.006c1b16ce1c034d24a9371c05569b28
            alias       dg_data04_nvme
     }
     multipath {
            wwid        eui.006c1b16ce1c034d24a9371c05569b29
            alias       dg_fra01_nvme
     }
     multipath {
            wwid        eui.006c1b16ce1c034d24a9371c05569b2a
            alias       dg_redo01_nvme
     }

Repeat the above on every node in the Oracle RAC cluster, then reload the multipath configuration using multipath -r or systemctl restart multipathd.
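
For example, either of the following will reload the configuration:

# multipath -r
# systemctl restart multipathd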

If we repeat the multipath -l we can now see the aliases:

# multipath -l | grep eui
dg_data01_nvme (eui.006c1b16ce1c034d24a9371c05569b17) dm-27 NVME,Pure Storage FlashArray
dg_data02_nvme (eui.006c1b16ce1c034d24a9371c05569b25) dm-28 NVME,Pure Storage FlashArray
dg_data03_nvme (eui.006c1b16ce1c034d24a9371c05569b26) dm-30 NVME,Pure Storage FlashArray
dg_data04_nvme (eui.006c1b16ce1c034d24a9371c05569b28) dm-29 NVME,Pure Storage FlashArray
dg_fra01_nvme (eui.006c1b16ce1c034d24a9371c05569b29) dm-31 NVME,Pure Storage FlashArray
dg_redo01_nvme (eui.006c1b16ce1c034d24a9371c05569b2a) dm-32 NVME,Pure Storage FlashArray

Summary

In this post I have shared how to configure an Oracle Linux 9.6 server to use NVMe block storage over TCP (NVMe/TCP) presented from a Pure Storage FlashArray.

I have also shared how to map the NVMe devices back to FlashArray volumes and how to configure Device Mapper (DM) Multipath alias names to provide descriptive, meaningful device names.

If you have completed the above and are looking to use your new NVMe FlashArray volumes with Oracle Database 19c, check out my blog post: Installation and Configuration of the new Oracle ASMLib v3.1 on Oracle Linux 9.
