
Higher than expected All Flash FAS (AFF) Solid State Drive (SSD) latency

Applies to

  • ONTAP 9
  • Clustered Data ONTAP 8

Issue

  • Under certain rare circumstances, All Flash FAS (AFF) storage controllers may experience higher than expected latency at the disk layer.
  • This elevated latency can be observed in the output of the "statit" command or the "qos statistics volume latency show" command.
Examples:

"qos statistics volume latency show" command output showing disk latency:

cluster::> qos statistics volume latency show
Workload            ID  Latency    Network  Cluster       Data     Disk  Qos Max    Qos Min      NVRAM
--------------- ------ --------   -------- --------   -------- -------- -------- ---------- ----------
-total-                 10.35ms     1.35ms      0ms        0us      9ms      0ms        0ms        0ms
vs1vol0            111  17.23ms        0us      0ms   603.00us  16.63ms      0ms        0ms        0ms
vol1              1234  17.76ms        0ms      0ms   150.00us  17.61ms      0ms        0ms        0ms
vol2               999   4.24ms        0us      0ms   190.00us   4.05ms      0ms        0ms        0ms
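If you capture the "qos statistics volume latency show" output to a text file, a short Python sketch like the following can highlight workloads whose latency is dominated by the Disk column. This is illustrative only: the flag_disk_bound helper, the 80% threshold, and the assumption that the column layout matches the example above are hypothetical and not part of ONTAP.

#!/usr/bin/env python3
# Hypothetical helper (not part of ONTAP): flag workloads whose disk latency
# dominates total latency in saved "qos statistics volume latency show" output.
# The column layout is assumed to match the example above and may vary by release.
import re
import sys

def to_ms(value):
    # Convert ONTAP latency strings such as "16.63ms", "603.00us", "0ms" to milliseconds.
    match = re.fullmatch(r"([\d.]+)(us|ms|s)", value)
    if not match:
        return 0.0
    number, unit = float(match.group(1)), match.group(2)
    return {"us": number / 1000.0, "ms": number, "s": number * 1000.0}[unit]

def flag_disk_bound(lines, threshold=0.8):
    # Yield (workload, total_ms, disk_ms) where the Disk column is >= threshold of total latency.
    for line in lines:
        fields = line.split()
        # Data rows have 10 columns: Workload, ID, Latency, Network, Cluster,
        # Data, Disk, QoS Max, QoS Min, NVRAM; skip headers, separators, and the -total- row.
        if len(fields) < 10 or not fields[1].isdigit():
            continue
        workload, total_ms, disk_ms = fields[0], to_ms(fields[2]), to_ms(fields[6])
        if total_ms > 0 and disk_ms / total_ms >= threshold:
            yield workload, total_ms, disk_ms

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for workload, total_ms, disk_ms in flag_disk_bound(f):
            print(f"{workload}: {disk_ms:.2f} ms of {total_ms:.2f} ms total is at the disk layer")

Run against the example above, this would flag vs1vol0, vol1, and vol2, since in each case nearly all of the reported latency comes from the Disk column.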

 

The node level command "statit":

::> node run -node <node>  
node> priv set advanced  
node> statit -b  
node> statit -e  
                       Disk Statistics (per second)  
        ut% is the percent of time the disk was busy.  
        xfers is the number of data-transfer commands issued per second.  
        xfers = ureads + writes + cpreads + greads + gwrites  
        chain is the average number of 4K blocks per command.  
        usecs is the average disk round-trip time per 4K block.  
  disk             ut%  xfers  ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs
/z_nacdot_02_root/plex0/rg0:  
  0b.00.0            5  18.01   15.26   5.11   881   2.04  21.23   112   0.71  14.90   610   0.00   ....     .
  0b.00.2            6  21.06   18.12   4.42   914   2.24  19.31   113   0.69  12.98   769   0.00   ....     .
  0b.00.4           35 950.55  890.12   5.00  2200  40.36  20.00   375   20.07  1.00  6000   0.00   ....     .

  • In the above output, disk 0b.00.4 shows 2200 usecs (2.2 ms) per 4 KB block for user reads; with an average chain length of 5 blocks per command, that is approximately 11 ms per transfer.
  • By running "statistics start -object disk -counter io_pending|io_pending_util|io_queued -sample-id disk_queue", followed by "statistics show -object disk -sample-id disk_queue", you may see output values such as:

io_pending:1.51
io_pending_util:1.53
io_queued:1.66

  • In the above output, io_pending and io_queued values greater than 1 indicate that I/O is waiting to be serviced by the drives, which can point to a bottleneck at the Solid State Drive (SSD) layer, as illustrated in the sketch below.
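As a worked example of the two checks above, the following Python sketch reproduces the arithmetic using the numbers from the sample output. It is illustrative only; the 1.0 threshold for io_pending and io_queued is an assumption for this example, not an official limit.

def per_transfer_latency_ms(chain_blocks, usecs_per_block):
    # statit reports usecs per 4K block; multiplying by the average chain
    # length gives the approximate round-trip time of a whole transfer, in ms.
    return chain_blocks * usecs_per_block / 1000.0

# Disk 0b.00.4 from the statit example: ureads chain of 5.00 blocks at 2200 usecs per block.
print(f"0b.00.4 user reads: ~{per_transfer_latency_ms(5.00, 2200):.1f} ms per transfer")   # ~11.0 ms

# Counter values from the "statistics show" example above.
counters = {"io_pending": 1.51, "io_pending_util": 1.53, "io_queued": 1.66}
for name in ("io_pending", "io_queued"):
    if counters[name] > 1.0:
        print(f"{name} = {counters[name]}: I/O is waiting on the SSDs, a possible drive-layer bottleneck")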

 
