NetApp Knowledge Base

    About 14 results
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Higher_read_I_O_latency_with_some_bursty_workloads
      High LUN or volume read latency with some bursty workloads seen as coming from disks.
    • https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Node_reporting_false_IO_latency_for_partner_owned_disks
      High latency is reported against a disk, but the reporting node does not own the disk. [Node01: disk_latency_monitor:shm.threshold.ioLatency:debug]: Disk 11d.63.9 has exceeded the expected IO latency in the current window with average latency of 580 msecs and average utilization of 29 percent. The node that owns the disk reports no latency issue with the drive.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_VM_latency_or_volume_latency_on_disk_fixed_by_volume_move_or_vMotion
      cluster1::> qos statistics volume latency show -vserver svm1 -volume vol1
      Workload        ID     Latency    Network    Cluster    Data       Disk       QoS Max    QoS Min    NVRAM
      --------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
      -total-         -      4.71ms     185.00us   7.00us     468.00us   4.04ms     0ms        0ms        11.00us
      vol1            32140  122.22ms   86.00us    125.00us   904.00us   122.11ms   0ms        0ms        1.00us
      -total-         -      4.58ms     253.00us   7.00us     991.00us   3.32ms     0ms        0ms        11.00us
      vol1            32140  29.69ms    90.00us    162.00us   2.61ms     26.…
    • https://kb.netapp.com/data-mgmt/AIQUM/AIQUM_Kbs/Why_does_the_volume_latency_or_IOPS_not_match_the_aggregate_in_Active_IQ_Unified_Manager_or_ONTAP
      Backend disk/aggregate IOPS should not be used as a metric for monitoring performance unless Performance Capacity hits 100% on the aggregate in Active IQ Unified Manager or disk latency is seen on user work. A DR or backup filer with minimal frontend work will use all available CPU and disk I/O bandwidth to process SnapMirror/backup workloads as quickly as possible in the absence of frontend work. By prefetching, the reads are in cache (RAM) as the IOP comes in through the network.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_Read_or_Write_Latency_due_to_disk_bottleneck_from_user_workload
      This article describes how to resolve high latency from disk (SATA or SAS disk aggregate) experienced on several volumes.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Sick_disk_causes_performance_impact
      cluster1::> node run -node local -command "priv set -q advanced; statit -e"
      ...
      disk             ut%   xfers  ureads--chain-usecs  writes--chain-usecs  cpreads-chain-usecs  greads--chain-usecs  gwrites-chain-usecs
      /aggr1/plex0/rg0:
      0a.10.10          31   93.15   0.00   ....      .   54.89  26.94   590   38.26  38.85   155   0.00   ....      .   0.00   ....      .
      0a.10.1           33   93.98   0.00   ....      .   55.75  26.55   630   38.23  38.83   183   0.00   ....      .   0.00   ....      .
      0a.10.2           19  118.78   9.53   3.50   8515   56.77  10.57   291   52.49   9.60   543   0.00   ....      .   0.00   ....      .
      0a.10.3           21  120.65  10.11   3.8…
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_response_times_for_volumes_on_SATA_or_SAS_disks_when_workload_increases_Performance_Capacity_beyond_critical_threshold
      The article covers what causes high response times for volumes on SATA or SAS disks when workload increases, and what you need to do to reduce, stagger, or monitor workloads causing high disk latency and utilization.
    • https://kb.netapp.com/hybrid/StorageGRID/Platforms/Virtual_Machine_based_Storage_Nodes_report_high_disk_latency
      Applies to: NetApp StorageGRID Virtual Machine (VM) Storage Nodes. Issue: Multiple VM-based Storage Nodes report high disk latency, CPU load, and/or poor throughput.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Aggregate_has_high_disk_utilization_due_to_large_deletion
      Applies to: FAS systems, ONTAP 9. Issue: Disk utilization suddenly goes to 100%, but no clear workload can be seen.
    • https://kb.netapp.com/Cloud/Cloud_Volumes_ONTAP/Cloud_Volumes_ONTAP_-_Disk_utilization_and_latency_-_Resolution_Guide
      This guide helps troubleshoot high disk utilization or latency in Cloud Volumes ONTAP. Latency is shown in Disk or Aggregate Processing (Active IQ Unified Manager), and latency actively impacts the target volume.
    • https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_utilization_of_disk_model_X342_TA14E1T2A10_with_NA02_firmware
      Applies to: disk model X342_TA14E1T2A10 with NA02 firmware. Issue: Higher latency and disk utilization on aggregate with disk model X342_TA14E1T2A10.
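
Taken together, the commands quoted in the results above outline a common first pass for confirming a disk bottleneck. A minimal sketch, assuming the svm1/vol1 names from the snippets stand in for your own SVM and volume, and that statit sampling is begun with -b before being printed with -e:

      cluster1::> qos statistics volume latency show -vserver svm1 -volume vol1
      (a Disk column that dominates the total Latency points at the disk layer)

      cluster1::> node run -node local -command "priv set -q advanced; statit -b"
      (let the workload run for a sampling interval, then)
      cluster1::> node run -node local -command "priv set -q advanced; statit -e"
      (the per-disk ut% and usecs columns show which disks and RAID groups are busy or slow)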