How to choose the best NVMe-over-Fabrics solution

NVMe™ was designed as a low-latency alternative to SCSI, and the major server and storage vendors have already adopted it in place of SCSI for accessing solid-state drives (SSDs). By removing the bottlenecks inherent in traditional storage stacks, NVMe-based SSDs greatly improve access speeds for applications across the enterprise data center. This article looks at how NVMe is used with SSDs and at the key issues that arise when deploying it at scale across a network.

Author: Ian Sagan, Marvell Field Technical Support Engineer


Why is NVMe adoption so broad and growing so fast?

NVMe is a protocol designed from the ground up for communicating with high-speed flash memory. It needs only about 30 commands, all developed specifically for SSDs. To exploit the parallel processing capabilities of modern multi-core processors, the protocol also supports deep command queues: up to 64K queues, with each queue holding up to 64K commands. The legacy SCSI, SAS, and SATA protocols were originally designed for mechanical hard disk drives (HDDs); compared with them, NVMe represents a major step forward.
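To put those numbers in perspective, the short Python sketch below contrasts NVMe's theoretical command capacity with a single SATA NCQ queue and, purely as an illustration, counts the hardware queues the Linux block layer has allocated for a local drive. The device name nvme0n1 is an assumption for the example, not something specified in this article.

    #!/usr/bin/env python3
    # Minimal sketch: NVMe queue arithmetic plus a look at the blk-mq
    # hardware queues allocated for an assumed local drive (nvme0n1).
    import glob

    # Protocol maximums quoted above: 64K queues, each 64K commands deep.
    nvme_outstanding = 64 * 1024 * 64 * 1024
    sata_outstanding = 32  # a single SATA/AHCI NCQ queue holds 32 commands
    print(f"NVMe allows up to {nvme_outstanding:,} commands in flight "
          f"versus {sata_outstanding} for SATA NCQ.")

    # In practice the driver allocates roughly one I/O queue per CPU core;
    # the blk-mq hardware contexts for the drive are visible in sysfs.
    hw_queues = glob.glob("/sys/block/nvme0n1/mq/*")
    print(f"Hardware queues allocated for nvme0n1: {len(hw_queues)}")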

In the global market today, sales of NVMe-based SSDs are catching up with sales of SAS and SATA SSDs. That is because NVMe delivers significant performance gains with current and next-generation flash technologies such as 3D XPoint and NVDIMM non-volatile DIMMs.

Why should we consider using NVMe-over-Fabrics?

NVMe was originally built to let the central processing unit (CPU) access NVMe-based SSDs inside a server over the PCIe bus. Storage administrators, however, have come to realize that server-local storage creates serious management problems, particularly the over-provisioning of expensive SSD capacity (buying enough headroom to absorb any spike in demand). The amount of high-performance NVMe SSD storage each server needs varies with the application workload, and while applications can be migrated between physical servers, the total SSD capacity they require stays roughly the same. Rather than letting every server hoard expensive SSDs, a more cost-effective model is a shared pool of NVMe SSD storage from which capacity can be allocated dynamically as workloads demand.

With local storage, all data must also be backed up in case the server fails; on top of that, the approach carries security risks, and replication between sites is often hard to manage. Shared storage avoids these problems. In other words, CIOs can use high-performance flash efficiently across servers, gain the richer availability and security features of modern storage arrays, and still enjoy performance and latency close to that of local NVMe SSD storage.

Where is NVMe-over-Fabrics heading?

To understand this, compare shared storage arrays to car engines. Traditional Fibre Channel (FC) and iSCSI shared storage is the equivalent of the conventional gasoline engine: it has been in use for many years, it is very reliable, and it will remain an excellent way to get around for a long time to come.

Over time, hybrid vehicles have become increasingly common, combining the advantages of electric and gasoline power. Similarly, newer "hybrid" arrays use NVMe SSDs internally but still speak SCSI to the host over FC or Ethernet transports.

Although most people agree that electric vehicles are the future, they are not yet mainstream: they cost more than conventional cars, and the charging infrastructure is not yet widespread enough to support demand. End-to-end NVMe storage arrays are in the same early-growth stage, and NVMe-over-Fabrics is the infrastructure that makes them possible. In time it will become the standard way to connect servers to shared storage arrays, but there is still a long way to go before NVMe-over-Fabrics is widely deployed and its teething problems are solved.

Which NVMe-over-Fabrics solution should you choose?

The biggest challenge for storage administrators is deciding which technology is the right one to invest in. As with any emerging technology, there are several ways to deploy a complete solution, and NVMe-over-Fabrics is no exception: NVMe commands can be carried over Fibre Channel, over RDMA-capable Ethernet, or over standard Ethernet using TCP/IP. The sections below outline the main differences between these options.


Figure 1: Mainstream NVMe-over-Fabrics connection options available today

1. NVMe-over-FC (FC-NVMe)

For users who already have a Fibre Channel storage area network (SAN) infrastructure, FC-NVMe is the natural choice. With 16Gb or 32Gb FC host bus adapters (HBAs) and SAN switches, NVMe commands can be encapsulated in FC frames. FC-NVMe support on Linux servers is available simply by upgrading to the latest HBA firmware and drivers, so investing in new 16Gb or 32Gb FC HBAs and SAN infrastructure today prepares you for the FC-NVMe storage arrays coming to market. It is also worth noting that SCSI (FCP) and NVMe (FC-NVMe) traffic can coexist on the same FC fabric, so older FC-SCSI storage can run alongside new NVMe storage.
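As a rough illustration of what this looks like on a Linux host, the hypothetical Python sketch below lists the FC remote ports the HBA driver has discovered and then runs an nvme-cli discovery against an FC-NVMe target. The WWNN/WWPN values are placeholders, not values from this article, and must be replaced with your own fabric's addresses; nvme-cli and an FC-NVMe capable HBA driver are assumed.

    #!/usr/bin/env python3
    # Minimal sketch: check FC connectivity and discover FC-NVMe subsystems.
    import glob
    import subprocess

    # FC remote ports discovered by the HBA driver appear under sysfs.
    remote_ports = glob.glob("/sys/class/fc_remote_ports/rport-*")
    print(f"FC remote ports visible to the host: {len(remote_ports)}")

    # Placeholder (hypothetical) target and host port addresses.
    TARGET = "nn-0x200000109b123456:pn-0x100000109b123456"
    HOST = "nn-0x200000109b654321:pn-0x100000109b654321"

    # Ask the FC-NVMe target's discovery controller which subsystems it exposes.
    subprocess.run(
        ["nvme", "discover", "--transport=fc",
         f"--traddr={TARGET}", f"--host-traddr={HOST}"],
        check=True,
    )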

2. NVMe over Ethernet fabrics using RDMA (NVMe/RDMA)

This option requires RDMA-capable Ethernet adapters, which are now widely available. RDMA comes in two flavors, RoCE (v1/v2) and iWARP, and unfortunately the two protocols do not interoperate. The advantages and disadvantages of each are outlined below:

a. NVMe-over-RoCE (NVMe/RoCE): If you run an Ethernet-only network, NVMe-over-RoCE is the leading choice for shared storage or hyper-converged infrastructure (HCI) connectivity, and many storage array vendors have announced support for NVMe-over-RoCE connections. RoCE delivers the lowest Ethernet latency and performs very well in small storage networks of no more than two hops. As the name implies, it requires a converged, lossless Ethernet network, which means enabling additional network features such as data center bridging (DCB) and priority flow control (PFC), along with the more complex configuration and congestion management they bring. If low latency is your top priority, NVMe-over-RoCE is probably your best choice despite the added network complexity (a minimal host-side connection sketch follows this list).

b. NVMe-over-iWARP (NVMe/iWARP): The iWARP RDMA protocol runs over standard TCP/IP, so it is easier to deploy. Its latency is not as low as RoCE's, but its simplicity and lower management overhead are attractive. Storage array vendors have so far not built arrays that support iWARP, so today iWARP is best suited to software-defined or HCI solutions such as Microsoft Azure Stack HCI / Storage Spaces Direct (S2D).
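For either RDMA flavor, the host-side attach uses the same nvme-cli workflow; the adapter and driver determine whether RoCE or iWARP is on the wire. The Python sketch below is a minimal, hypothetical example: the portal address and subsystem NQN are placeholders, and an RDMA-capable adapter with the nvme-rdma host driver loaded is assumed.

    #!/usr/bin/env python3
    # Minimal sketch: discover and connect to an NVMe/RDMA target with nvme-cli.
    import subprocess

    TARGET_IP = "192.168.10.20"                       # hypothetical target portal
    SUBSYS_NQN = "nqn.2019-01.com.example:nvme-pool"  # hypothetical subsystem NQN

    # Discover the subsystems exposed by the target's discovery controller...
    subprocess.run(
        ["nvme", "discover", "--transport=rdma",
         f"--traddr={TARGET_IP}", "--trsvcid=4420"],
        check=True,
    )

    # ...then connect; a new /dev/nvmeX controller appears on success.
    subprocess.run(
        ["nvme", "connect", "--transport=rdma",
         f"--traddr={TARGET_IP}", "--trsvcid=4420", f"--nqn={SUBSYS_NQN}"],
        check=True,
    )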

3. NVMe-over-TCP (NVMe/TCP)

NVMe-over-TCP is the newest of the three options. The specification was ratified in November 2018, and because it rides on the ubiquitous TCP/IP stack it runs on existing Ethernet infrastructure without modification. Its performance may not match NVMe-over-RDMA or FC-NVMe, but it can be deployed on standard Ethernet NICs and switches, so you can enjoy the main benefits of NVMe SSD storage without a large hardware investment. Some adapters, such as the Marvell® FastLinQ® 10/25/50/100GbE family, can also use their built-in TCP/IP stack hardware offload to accelerate NVMe/TCP traffic.
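Because NVMe/TCP needs nothing beyond a standard NIC, attaching a remote namespace can be as simple as the hypothetical Python sketch below. The portal address and NQN are placeholders, and a kernel with the NVMe/TCP host driver plus nvme-cli are assumed.

    #!/usr/bin/env python3
    # Minimal sketch: attach an NVMe/TCP namespace on a stock Linux host.
    import subprocess

    TARGET_IP = "192.168.10.30"                      # hypothetical target portal
    SUBSYS_NQN = "nqn.2019-01.com.example:tcp-pool"  # hypothetical subsystem NQN

    # Make sure the NVMe/TCP host driver is loaded (built into recent kernels).
    subprocess.run(["modprobe", "nvme-tcp"], check=True)

    # Connect over plain TCP/IP on the standard NVMe-oF port 4420.
    subprocess.run(
        ["nvme", "connect", "--transport=tcp",
         f"--traddr={TARGET_IP}", "--trsvcid=4420", f"--nqn={SUBSYS_NQN}"],
        check=True,
    )

    # List attached subsystems to confirm the new controller is present.
    subprocess.run(["nvme", "list-subsys"], check=True)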

Summary

Whichever NVMe-over-Fabrics route you choose, Marvell offers a broad, flexible product portfolio and can assist you throughout the deployment. In particular, Marvell QLogic® 16Gb and 32Gb FC host bus adapters (HBAs) support FC-NVMe, while Marvell FastLinQ 41000 and 45000 series 10/25/40/50/100GbE NICs and CNAs, with universal RDMA built in, support NVMe-over-RoCE, NVMe-over-iWARP, and NVMe-over-TCP. Administrators can therefore design the architecture that fits their needs today while ensuring the deployment remains ready for future networks.
