
ONTAP Data Processing latency - Resolution Guide

Applies to

ONTAP 9

Description

  • Troubleshoot Data Processing latency in ONTAP.
    • CPU contention
    • High CPU utilization
    • Dblade latency
    • WAFL latency
    • In extreme circumstances, CPU starvation may result in service disruption
  • Note: This is a secondary article on troubleshooting ONTAP 9 performance; please see the main ONTAP 9 performance troubleshooting Resolution Guide.
  • Using Active IQ Unified Manager or commands such as qos statistics volume latency show, latency is observed in the Data (CLI) or Data Processing (Active IQ Unified Manager) component, and latency on the target volume is actively impacted.

Example:

Cluster::>rows 0;date;qos statistics volume latency show
Workload            ID    Latency    Network    Cluster       Data       Disk        QoS      NVRAM
--------------- ------ ---------- ---------- ---------- ----------  ---------  ---------  ---------
-total-              -   136.49ms    99.00us    70.00us   136.17ms   153.00us        0ms        0ms
vserver1_vol1..   4201   206.05ms   130.00us        0ms   205.88ms    44.00us        0ms        0ms
vserver5_vol8..   7704  1309.00us   351.00us     1.00us   834.00us   114.00us        0ms     9.00us
-total-              -   140.29ms   103.00us    75.00us   139.94ms   174.00us        0ms        0ms
vserver1_vol1..   4201   379.03ms   127.00us        0ms   378.73ms   175.00us        0ms        0ms
vserver5_vol8..   7704     2.02ms   309.00us     1.30us  1820.00us   105.00us        0ms     9.00us
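In this example, nearly all of the reported latency for vserver1_vol1 is in the Data column (for instance, 205.88ms of the 206.05ms total), confirming that Data Processing is the dominant latency component for the affected volume.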

Procedure

  1. Determine if high CPU is an issue causing latency (a quick node-level check is sketched below)
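Example (an optional quick check, not shown in the original article): overall node CPU utilization can be confirmed with the nodeshell sysstat command. nodeB is used here only to match the examples below; press Ctrl-C to stop the output.

cluster1::> node run -node nodeB -command sysstat -x 1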
  2. Use the qos statistics workload resource cpu show command to:
    • Determine top workloads
    • See if internal workloads are running
    • Compare utilization to User-Default or user workloads

Note: wafltop may also be used to check this

Example: CPU is highest on User-Default, so a frontend CIFS/FCP/iSCSI/NFS workload is the cause; the specific volume can be identified in step 3

cluster1::> qos statistics workload resource cpu show -node nodeB
Workload               ID   CPU
------------------ ------ -----
-total- (100%)          -   70%
User-Default            -   42%
_SNAPMIRROR             -   20%
_Efficiency_BestEffort  -    8%
  3. Use the qos statistics volume resource cpu show command to:
    • Determine top workloads
    • See if user workloads are running (CIFS/FCP/iSCSI/NFS client work)
    • Compare utilization across volumes
cluster1::> qos statistics volume resource cpu show -node nodeB
Workload            ID   CPU
--------------- ------ -----
-total- (100%)       -   71%
vs0-wid101         101   22%
file-1-wid121      121   11%
vol0-wid1002      1002    8%
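In this example, vs0-wid101 consumes the most CPU of the listed volume workloads, so it would be the first candidate to address in the next step.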
  4. Reduce the top workloads identified above (one possible throttling approach is sketched below)
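One possible way to reduce the impact of a heavy frontend workload is to apply a QoS throughput limit to the offending volume. The sketch below is illustrative only: the vserver, volume, and policy-group names and the 1000 IOPS limit are placeholders, not values or recommendations taken from the full article.

cluster1::> qos policy-group create -vserver vs1 -policy-group limit_busy_vol -max-throughput 1000iops
cluster1::> volume modify -vserver vs1 -volume vol1 -qos-policy-group limit_busy_vol

Moving the busy volume to a less utilized node or aggregate with volume move is another common option.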
  5. If none of the steps above work:

Sign in to view the entire content of this KB article.


NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.