Higher read I/O latency with some bursty workloads
Applies to
- ONTAP 9
- AFF systems
Issue
- The system is experiencing higher read I/O latency on LUNs or volumes
- This can be observed in Grafana or any other reporting tool used for backend read latency monitoring
- The high read I/O latency is especially noticeable at the disk (drive) layer
- However, it can also point to the network/protocol layer, such as FCP
- The latency spikes exceed 15ms, which is higher than the expected normal latency. Example:
::> qos statistics volume latency show -volume <vol_name> -vserver <SVM_name>
Workload  ID  Latency  Network    Cluster  Data       Disk     ...
--------  --  -------  ---------  -------  ---------  -------  ...
-total-   -   17.47ms  1018.00us  0ms      2.04ms     14.41ms  ...
-total-   -   4.43ms   1400.00us  0ms      861.00us   2.17ms   ...
-total-   -   16.98ms  133.00us   0ms      1.93ms     14.92ms  ...
-total-   -   7.49ms   748.00us   0ms      1199.00us  5.54ms   ...
-total-   -   12.77ms  207.00us   0ms      1496.00us  11.06ms  ...
-total-   -   10.02ms  227.00us   0ms      2.03ms     7.76ms   ...
...
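To spot how often the 15ms threshold is breached in output like the above, the `-total-` rows can be parsed and filtered. The sketch below is a hypothetical helper (the function names, column positions, and sample rows are assumptions based on the example output, not part of ONTAP itself); it converts the `ms`/`us` suffixed latency strings to milliseconds and flags samples above the threshold.

```python
# Hypothetical sketch: parse `qos statistics volume latency show` sample rows
# and flag samples whose total latency exceeds a threshold (15 ms here).
# Column order and unit suffixes follow the example output above.

def to_ms(value: str) -> float:
    """Convert an ONTAP latency string like '17.47ms' or '1018.00us' to milliseconds."""
    if value.endswith("us"):
        return float(value[:-2]) / 1000.0
    if value.endswith("ms"):
        return float(value[:-2])
    return float(value)

def flag_spikes(lines, threshold_ms=15.0):
    """Return total-latency values (in ms) of '-total-' rows above the threshold."""
    spikes = []
    for line in lines:
        fields = line.split()
        if fields and fields[0] == "-total-":
            latency = to_ms(fields[2])  # third column: total latency
            if latency > threshold_ms:
                spikes.append(latency)
    return spikes

# Example rows taken from the output above
sample = [
    "-total- - 17.47ms 1018.00us 0ms 2.04ms 14.41ms",
    "-total- - 4.43ms 1400.00us 0ms 861.00us 2.17ms",
    "-total- - 16.98ms 133.00us 0ms 1.93ms 14.92ms",
]
print(flag_spikes(sample))  # → [17.47, 16.98]
```

In this example, two of the three samples exceed 15ms, consistent with the disk column dominating the total latency in those rows.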