NetApp Knowledge Base

About 7 results
    • https://kb.netapp.com/Legacy/OnCommand_Suite/ActiveIQ_Alert__Storage_VM_Latency_Critical_Threshold_Breached
      Applies to ONTAP 9, OnCommand Unified Manager 7.3 (OCUM), OnCommand Unified Manager 9.4 (OCUM), OnCommand Unified Manager 9.5 (OCUM), Active IQ Unified Manager 9.6 (AIQUM), Active IQ Unified Manager 9.7 (AIQUM). Issue: Offline volume showing high latency in Unified Manager. Source: XXX-nfs. Cluster Name: Mycluster1. Cluster FQDN: XXX.XXX.XXX.XXX. Trigger Condition: Latency value of 273 ms/op on XXX-nfs has triggered a CRITICAL event based on threshold setting of 150 ms/op.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Why_is_a_workloads_latency_high_when_the_IOPS_are_low
      Understand why a workload's latency seems to be high even when the IOPS are low.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_CIFS_latency_on_new_shares_based_on_LOCK_MANAGER
      High operation latency for locking files when the CIFS OPLOCKS feature is enabled:
      -total-         -    136.49ms   99.00us  70.00us 136.17ms 153.00us 0ms 0ms
      vserver1_vol1.. 4201 206.05ms  130.00us  0ms     205.88ms  44.00us 0ms 0ms
      vserver5_vol8.. 7704 1309.00us 351.00us  1.00us  834.00us 114.00us 0ms 9.00us
      -total-         -    140.29ms  103.00us  75.00us 139.94ms 174.00us 0ms 0ms
      vserver1_vol1.. 4201 379.03ms  127.00us  0ms     378.73ms 175.00us 0ms 0ms
      vserver5_vol8.. 7704 2.02ms    309.00us  1.30us 1820.00us 105.00us 0ms 9.00us
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_latency_observed_on_volume_with_quotas_enabled
      High Write or Other latency on a volume with quotas enabled.
      [node1: wafl_exempt13: quota.exceeded:debug]: params: {'ltype': 'hard', 'limit_value': '1024000',
      [node1: wafl_exempt02: wafl.quota.userQtree.exceeded.win:notice]: tid 2: disk quota exceeded on volume
      Additional warnings will be suppressed for approximately 60 minutes or until a 'quota resize' is performed.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Poor_client_performance_but_ONTAP_latency_is_good_-_Resolution_Guide
      KB article provides steps to investigate network latency when high end-user latency is reported by internal monitoring software, end users report low throughput or poor performance, and Data ONTAP and NetApp monitoring tools show low latency.
    • https://kb.netapp.com/Legacy/NetApp_HCI/OS/High_latency_on_SolidFire_storage_volume_due_to_Max_IOPs_QoS_set_too_low
      In SolidFire Active IQ (AIQ), under Volumes > Active Volumes > Actions > View Details > Latency, you will observe higher-than-expected latency values. You will also notice that, during periods of high latency, throughput is peaking at the configured Max QoS. The example below shows a QoS Max of 15,000 IOPS with a max throughput of 100 MB/sec, while overall cluster performance is relatively low.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Experiencing_high_latency_on_FC_LIF_connected_to_SAN
      Applies to ONTAP 9.x, Brocade. Issue: Experiencing high latency on a SAN target port Fibre Channel (FC) LIF, impacting multiple hosts. No indication of back-end disk latency. PerfStats collected on the NetApp nodes show latency in the front-end fabric.