
Dell PowerFlex Manager 4.6.x CLI Reference Guide


Limitations of measuring latency between the host and SDT

In an NVMe over TCP setup, PowerFlex does not install a component on the host. As a result, there is no way of knowing when the host initiates an I/O request, or when it receives the requested data or a response about data it sent. Therefore, it is impossible to measure latency between the host and the storage data target (SDT). The effect this has on the host-related counters is described below. Counters for traffic between the storage data server (SDS) and the storage data target (SDT) are reliable and reflect any network latency between those nodes.

Exception

Latency can be measured in one scenario. The following table describes how network latency affects the counters that measure latency between the host and the storage data target.

Table 1. Network latency effect on counters measuring latency between host and storage data target

I/O type | I/O size | I/O direction | Result
READ | All | Host to and from the storage data target | Values are estimates based on the I/O size. They are not affected by network latency and cannot provide any information about such issues.
WRITE | Less than or equal to 4 KB | Storage data target to the host | Values are estimates based on the I/O size. They are not affected by network latency and cannot provide any information about such issues.
WRITE | More than 4 KB | Host to the storage data target | PowerFlex can accurately measure the network latency between the host and the storage data target. The counter value is affected twice by the network latency: if network latency increases by 100 microseconds, the counter value increases by 200 microseconds.
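To make the table concrete, the following is a minimal Python sketch, not a PowerFlex API. It only encodes which I/O patterns from Table 1 produce a counter value that actually reflects host-to-SDT network latency; the function name and the interpretation of the 4 KB boundary as 4096 bytes are assumptions for illustration.

    # Minimal sketch (not a PowerFlex API): encode which I/O patterns from
    # Table 1 yield a counter that reflects real host<->SDT network latency.
    # Assumption: the 4 KB boundary is interpreted as 4096 bytes.
    def counter_reflects_network_latency(io_type: str, io_size_bytes: int) -> bool:
        # Only writes larger than 4 KB are measured accurately; reads and
        # small writes produce size-based estimates that ignore the network.
        return io_type.upper() == "WRITE" and io_size_bytes > 4 * 1024

    print(counter_reflects_network_latency("READ", 64 * 1024))   # False
    print(counter_reflects_network_latency("WRITE", 4 * 1024))   # False
    print(counter_reflects_network_latency("WRITE", 64 * 1024))  # True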
  

Significance of large write requests

In all host to storage data target flows, the host sends a read or write request to the storage data target.

The flow is more involved when the host writes large I/Os to the storage data target:

  1. The host sends a write request to the storage data target.
  2. The storage data target sends the host a request to send the write data.
  3. The host sends the write data to the storage data target.

Because the system knows when it initiated step 2 and when step 3 completed, it can measure the duration of these steps. Any network latency affects both step 2 and step 3, which is why the latency is counted twice, as described in the table above.
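The arithmetic can be illustrated with a small sketch. The values below are hypothetical; they only show how the measured interval crosses the network twice (once for the request in step 2, once for the data in step 3), so any added network latency shows up doubled in the counter.

    # Illustrative values only: model of the interval measured for a large write.
    one_way_network_latency_us = 100.0  # assumed one-way host<->SDT latency
    host_turnaround_us = 20.0           # host time between steps 2 and 3
    data_transfer_us = 50.0             # wire time for the write payload

    measured_interval_us = (
        one_way_network_latency_us      # step 2: SDT -> host "send the data" request
        + host_turnaround_us
        + one_way_network_latency_us    # step 3: host -> SDT write data
        + data_transfer_us
    )
    print(measured_interval_us)  # 270.0: the 100 us latency contributes 200 us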

Summary

To check for possible effects of network latency between the host and the storage data target in an NVMe over TCP setup, complete the following:

  1. Run a write I/O load with an I/O size larger than 4 KB and examine the SDT_HOST_WRITE_LATENCY counter.
  2. If a baseline value is available for comparison, divide the delta between the two counter values by two to estimate the network latency, as shown in the sketch after this list.
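As a minimal sketch of step 2, assuming both readings of the SDT_HOST_WRITE_LATENCY counter are expressed in the same unit (microseconds here) and were taken under a comparable write load, the estimate is simply half the delta:

    # Minimal sketch: estimate one-way host<->SDT network latency from a
    # baseline and a current SDT_HOST_WRITE_LATENCY reading (both assumed
    # to be in microseconds and taken under a >4 KB write load).
    def estimate_network_latency_us(baseline_us: float, current_us: float) -> float:
        # The counter accrues the added latency twice (steps 2 and 3 of the
        # large-write flow), so the delta is divided by two.
        return (current_us - baseline_us) / 2.0

    # Hypothetical readings: baseline 350 us, current 550 us -> 100 us estimate
    print(estimate_network_latency_us(350.0, 550.0))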
