NVM Express (NVMe), or the Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a specification for accessing solid-state drives (SSDs) attached through the PCI Express (PCIe) bus. NVM is an acronym for non-volatile memory, as used in SSDs. NVMe defines an optimized register interface, command set, and feature set for PCIe SSDs, with the goal of standardizing PCIe SSDs and improving their performance.
PCIe SSD devices designed to the NVMe specification are NVMe-based PCIe SSDs. For more details on NVMe, refer to http://www.nvmexpress.org/. The NVMe devices currently in use are NVMe 1.0c compliant.
Below, we look into RHEL 7 support for NVMe devices.
The following topics are covered: out-of-box driver feature support, device node naming conventions, formatting and mounting an NVMe partition, and managing backplane LEDs with the ledmon/ledctl utilities.
The following table lists the features supported by the RHEL 7 out-of-box NVMe driver on 12G and 13G servers.
Generation | Basic IO | Hot Plug | UEFI Boot | Legacy Boot
-----------|----------|----------|-----------|------------
13G        | Yes      | Yes      | Yes       | No
12G        | Yes      | Yes      | No        | No
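If the driver is built as a loadable module, its presence and load state can be confirmed with the standard module tools. This is a generic check, not part of the original procedure:
[root@localhost ~]# modinfo nvme | head -5
[root@localhost ~]# lsmod | grep nvme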
The figure below [Fig 5] explains the naming convention for the NVMe device nodes.
The number immediately after the string "nvme" is the device number
Example:
nvme0n1 – Here the device number is 0
Partitions are appended after the device name with the prefix ‘p’
Example:
nvme0n1p1 – partition 1 of device 0
nvme0n1p2 – partition 2 of device 0
nvme1n1p1 – partition 1 of device 1
nvme1n1p2 – partition 2 of device 1
Figure 5: Device node naming conventions
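As a quick way to see this naming convention on a live system, the NVMe block devices and their partitions can be listed, for example:
[root@localhost ~]# ls /dev/nvme*
[root@localhost ~]# lsblk | grep -i nvme
The device and partition names shown will of course depend on the drives installed in the system.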
1) The following command formats partition 1 on NVMe device 1 with the XFS file system
[root@localhost ~]# mkfs.xfs /dev/nvme1n1p1
meta-data=/dev/nvme1n1p1         isize=256    agcount=4, agsize=12209667 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=48838667, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=23847, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
2) Mount the partition to a mount point and list the mount
[root@localhost ~]# mount /dev/nvme1n1p1 /mnt/
[root@localhost ~]# mount | grep -i nvme
/dev/nvme1n1p1 on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
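To make the mount persistent across reboots, an entry can be added to /etc/fstab. A minimal sketch, assuming the XFS file system created above; <uuid-from-blkid> is a placeholder for the UUID reported by blkid:
[root@localhost ~]# blkid /dev/nvme1n1p1
[root@localhost ~]# echo "UUID=<uuid-from-blkid> /mnt xfs defaults 0 0" >> /etc/fstab
[root@localhost ~]# mount -a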
Using the ledmon utility to manage backplane LEDs for NVMe devices
Ledmon and ledctl are two Linux utilities that can be used to control the LED status on drive backplanes. Normally, drive backplane LEDs are controlled by a hardware RAID controller (PERC), but when Linux Software RAID (mdadm) is used with NVMe PCIe SSDs, the ledmon daemon monitors the status of the drive array and updates the status of the drive LEDs.
For additional reading, refer to https://www.dell.com/support/article/SLN310523/
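Since ledmon monitors Linux Software RAID (mdadm) arrays, a simple mirrored array across two NVMe devices can be used to exercise it. This is only an illustrative sketch; the device names /dev/nvme0n1 and /dev/nvme1n1 are assumed, and creating the array destroys any data on those devices:
[root@localhost ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
[root@localhost ~]# cat /proc/mdstat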
1) Installing OpenIPMI and ledmon/ledctl utilities:
Execute the following commands to install OpenIPMI and ledmon
[root@localhost ~]# yum install OpenIPMI
[root@localhost ~]# yum install ledmon-0.79-3.el7.x86_64.rpm
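The installation can be verified by querying the installed packages, for example:
[root@localhost ~]# rpm -q OpenIPMI
[root@localhost ~]# rpm -q ledmon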
2) Use the ledmon/ledctl utilities
Note that if ledctl and ledmon are run concurrently, ledmon will eventually override the ledctl settings.
a) Start and check the status of the IPMI service, as shown in [Fig 6], using the following command
[root@localhost ~]# systemctl start ipmi
Figure 6: IPMI start and status
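Optionally, the IPMI service can also be enabled so that it starts automatically on subsequent boots (not shown in the figure):
[root@localhost ~]# systemctl enable ipmi
[root@localhost ~]# systemctl status ipmi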
b) Start ledmon
[root@localhost ~]# ledmon
c) [Fig 7] shows the LED status after running ledmon, with the device in the working state
Figure 7: LED status after running ledmon with the device in the working state (green)
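Because ledmon detaches and runs as a daemon, a simple way to confirm it is still running is to check the process list (an additional check, not part of the original steps):
[root@localhost ~]# ps -ef | grep [l]edmon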
d) The following command blinks the drive LED on the device node /dev/nvme0n1
[root@localhost ~]# ledctl locate=/dev/nvme0n1
The following command blinks both drive LEDs (on the device nodes /dev/nvme0n1 and /dev/nvme1n1)
[root@localhost ~]# ledctl locate={ /dev/nvme0n1 /dev/nvme1n1 }
The following command turns off the locate LED
[root@localhost ~]# ledctl locate_off=/dev/nvme0n1
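When several NVMe devices are installed, the locate LEDs can be toggled for all of them with a small shell loop. A minimal sketch that assumes the whole-namespace device nodes follow the /dev/nvmeXn1 pattern described in the naming section:
[root@localhost ~]# for dev in /dev/nvme*n1; do ledctl locate=$dev; done
[root@localhost ~]# for dev in /dev/nvme*n1; do ledctl locate_off=$dev; done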