nvme-connect - Man Page

Connect to a Fabrics controller.


nvme connect
                [--transport=<trtype>     | -t <trtype>]
                [--nqn=<subnqn>           | -n <subnqn>]
                [--traddr=<traddr>        | -a <traddr>]
                [--trsvcid=<trsvcid>      | -s <trsvcid>]
                [--host-traddr=<traddr>   | -w <traddr>]
                [--hostnqn=<hostnqn>      | -q <hostnqn>]
                [--hostid=<hostid>        | -I <hostid>]
                [--nr-io-queues=<#>       | -i <#>]
                [--nr-write-queues=<#>    | -W <#>]
                [--nr-poll-queues=<#>     | -P <#>]
                [--queue-size=<#>         | -Q <#>]
                [--keep-alive-tmo=<#>     | -k <#>]
                [--reconnect-delay=<#>    | -c <#>]
                [--ctrl-loss-tmo=<#>      | -l <#>]
                [--duplicate-connect      | -D]
                [--disable-sqflow         | -d]
                [--hdr-digest             | -g]
                [--data-digest            | -G]


Create a transport connection to a remote system (specified by --traddr and --trsvcid) and create an NVMe over Fabrics controller for the NVMe subsystem specified by the --nqn option.
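For example, a connection to an RDMA target might look like the following (the address, port, and subsystem NQN are placeholders):

```shell
# Connect to a remote NVMe-oF subsystem over RDMA (example values).
nvme connect --transport=rdma \
             --traddr=192.168.1.10 \
             --trsvcid=4420 \
             --nqn=nqn.2016-06.io.example:subsystem1
```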


-t <trtype>, --transport=<trtype>

This field specifies the network fabric being used for an NVMe-over-Fabrics network. Current string values include:

rdma
    The network fabric is an rdma network (RoCE, iWARP, InfiniBand, basic rdma, etc.)
fc
    (WIP) The network fabric is a Fibre Channel network.
loop
    Connect to an NVMe over Fabrics target on the local host.
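The loop transport is useful for testing against a target running on the local host; a minimal invocation needs only the subsystem NQN (the NQN below is a placeholder):

```shell
# Connect to a local loopback NVMe-oF target.
nvme connect --transport=loop --nqn=testnqn
```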
-n <subnqn>, --nqn=<subnqn>

This field specifies the name for the NVMe subsystem to connect to.

-a <traddr>, --traddr=<traddr>

This field specifies the network address of the Controller. For transports using IP addressing (e.g. rdma) this should be an IP-based address (ex. IPv4).

-s <trsvcid>, --trsvcid=<trsvcid>

This field specifies the transport service id. For transports using IP addressing (e.g. rdma) this field is the port number. By default, the IP port number for the RDMA transport is 4420.

-w <traddr>, --host-traddr=<traddr>

This field specifies the network address used on the host to connect to the Controller.

-q <hostnqn>, --hostnqn=<hostnqn>

Overrides the default Host NQN that identifies the NVMe Host. If this option is not specified, the default is read from /etc/nvme/hostnqn first. If that does not exist, the autogenerated NQN value from the NVMe Host kernel module is used next. The Host NQN uniquely identifies the NVMe Host.
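The effective default can be inspected before overriding it (the override value below follows the common uuid-based NQN convention and is a placeholder):

```shell
# Show the persistent Host NQN, if one has been configured.
cat /etc/nvme/hostnqn

# Override it for a single connection.
nvme connect -t rdma -a 192.168.1.10 -s 4420 \
             -n nqn.2016-06.io.example:subsystem1 \
             -q nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0000-0000-0000-000000000000
```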

-I <hostid>, --hostid=<hostid>

Overrides the default Host Identifier, a UUID (Universally Unique Identifier) that identifies the NVMe Host. The value should be formatted as a standard UUID string.

-i <#>, --nr-io-queues=<#>

Overrides the default number of I/O queues created by the driver.

-W <#>, --nr-write-queues=<#>

Adds additional queues that will be used for write I/O.

-P <#>, --nr-poll-queues=<#>

Adds additional queues that will be used for polling latency sensitive I/O.

-Q <#>, --queue-size=<#>

Overrides the default number of elements in the I/O queues created by the driver.

-k <#>, --keep-alive-tmo=<#>

Overrides the default keep alive timeout (in seconds).

-c <#>, --reconnect-delay=<#>

Overrides the default delay (in seconds) before reconnect is attempted after a connection loss.

-l <#>, --ctrl-loss-tmo=<#>

Overrides the default controller loss timeout period (in seconds).
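These two options work together: after a connection loss the driver retries every --reconnect-delay seconds until --ctrl-loss-tmo expires. The values below are illustrative:

```shell
# Retry every 5 seconds; give up after 600 seconds of controller loss.
nvme connect -t rdma -a 192.168.1.10 -s 4420 \
             -n nqn.2016-06.io.example:subsystem1 \
             --reconnect-delay=5 --ctrl-loss-tmo=600
```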

-D,  --duplicate-connect

Allows duplicated connections between the same transport host and subsystem port.

-d,  --disable-sqflow

Disables SQ flow control to omit head doorbell update for submission queues when sending NVMe completions.

-g,  --hdr-digest

Generates/verifies header digest (TCP).

-G,  --data-digest

Generates/verifies data digest (TCP).
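The digest options apply to the TCP transport; assuming the kernel provides nvme-tcp support, a connection with both digests enabled might look like this (addresses and NQN are placeholders):

```shell
# NVMe/TCP connection with header and data digests enabled.
nvme connect --transport=tcp \
             --traddr=192.168.1.10 --trsvcid=4420 \
             --nqn=nqn.2016-06.io.example:subsystem1 \
             --hdr-digest --data-digest
```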


See Also

nvme-discover(1), nvme-connect-all(1)


This was co-written by Jay Freyensee[1] and Christoph Hellwig[2].


Part of the nvme-user suite


  1. Jay Freyensee
  2. Christoph Hellwig

Referenced By

nvme(1), nvme-connect-all(1), nvme-disconnect(1), nvme-discover(1).

04/24/2020 NVMe Manual