ONTAP Performance KBs
Other Resources
- Sick disk causes performance impact (Dec 2024)
- How to investigate ONTAP CIFS latency with Nblade.CifsOperationTimedOut and SpinNp error 410 (Dec 2024)
- A FlexGroup sees outliers over 100ms into seconds causing client timeouts (Dec 2024)
- High latency after enabling NVE on several volumes (Dec 2024)
- High Latency from QoS Min on workloads with no QoS Policy or not in Adaptive QoS policy (Dec 2024)
- Sudden latency and CPU utilization from workloads going idle to busy resolved by reducing workload (Dec 2024)
- Performing a lot of locks (NFSv3) causing higher latency (Dec 2024)
- Lower than expected performance while utilizing a single volume (Dec 2024)
- High Write Latency due to Back to Back CP with Small Write Size (Dec 2024)
- How to find workloads that do not meet their min-throughput SLOs (Dec 2024)
- Slow optimized volume move with ONTAP S3 cold tier (Dec 2024)
- High latency from Disk IO even when the disk utilization is still low (Dec 2024)
- Customer monitoring reports High CPU Utilization (Dec 2024)
- Slow writes on FlexCache write-back using ACLs (Dec 2024)
- High Read or Write Latency due to CPU bottleneck from user workload (Dec 2024)
- Adaptive QoS maximum latency for volume when IO is below its expected IO limit (Dec 2024)
- High CPU utilization from TSSE (Dec 2024)
- What are performance archives and how are they triggered? (Dec 2024)
- ONTAP Disk utilization and latency - Resolution Guide (Dec 2024)
- Lower throughput on certain HBA (Dec 2024)
A
- A FlexCache origin volume hangs and stops responding with extremely high latency
- A FlexGroup sees outliers over 100ms into seconds causing client timeouts
- Active IQ Wellness: Up to High Impact - This system is nearing the limits of its performance capacity
- Adaptive QoS maximum latency for volume when IO is below its expected IO limit
- Affinity bottleneck due to a few volumes sharing the same VOL affinity ID
- After Linux kernel update, NFS is slow with many extra denied ACCESS calls
- Aggregate has high disk utilization due to large deletion
- AIQUM 9.7+ or NSLM may enable workload AQoS causing unexpected latency
- AIQUM reports high latency from Cluster Interconnect
- An ONTAP cluster stops serving traffic with FlexCache
- An ONTAP ethernet port is busy, highly utilized, or 100 percent busy
- API calls to query qtrees from Grafana Harvest take a long time
- Are benchmarks appropriate to measure or troubleshoot performance?
- Are long Consistency Points (wafl.cp.toolong) normal?
- Are there Performance implications to using Advanced Drive Partitioning (ADP)?
- Authentication Error when attempting to use NetApp System Performance Modeler
B
- Backup jobs via CIFS are slow while doing indirect access
- Bad LUN to LUN copy performance due to ODX
- Bad Performance on API Calls for multi-admin-verification
- Bad read performance after upgrading to 9.6 and later
- Benchmarks are slow on CrystalDiskMark especially 4k reads or other benchmarks
- Bully write workload causing the performance issue in MetroCluster environment
- Bursty write append workloads impacting write latency on AFF
- Bursty write workload could affect performance
C
- Can adding a second Fiber Connection increase throughput?
- Can CHANGE NOTIFY option set for CIFS Vserver affect NFS clients write performance?
- Can I increase IOPS on a QoS policy or will it cause performance problems?
- Can I modify the IOPS and throughput for GP3 disks after the aggregate has been created in BlueXP?
- Can ONTAP specify a throughput limit at the SMB share level?
- Can Optimal Transfer Length be modified from the default after presenting LUN to OS?
- Can qtree read/write throughput and IOPS be monitored?
- Can READDIR+ heavy workloads impact performance on FlexGroups?
- Can thin provisioning affect performance on a NFS volume?
- CIFS Client Change Notification setting can cause unnecessary load in ONTAP
- CIFS client slow with FPolicy enabled
- CIFS clients access slowly with FPolicy disconnecting repeatedly
- CIFS FlexGroups with ChangeNotify become unresponsive
- CIFS high other latency on Citrix VDI or other workloads
- CIFS inaccessible after SnapMirror Cloud Restore
- CIFS or NFS RENAME operations slow on FlexGroup
- CIFS or NFS responses are very slow caused by CPU bottleneck with Kahuna domain due to CIFS operations
- CIFS performance degrades during system configuration backup
- CIFS performance on shares behind VPN is low
- CIFS Share stuck on calculating when copying a file with Windows Explorer
- CIFS slowness issue observed after receiving Nblade.vscanConnInactive alerts
- Citrix VDI crashing along with high volume latency
- Cloud data retrieval is slow
- Cloud Insights does not return perf data for all expected volumes
- Cloud Volumes ONTAP NVlog transfer bottleneck - Resolution Guide
- Cloud Volumes ONTAP: High CPU utilization on highest-numbered CPU of Azure HA nodes
- Cluster Interconnect delay due to data LIFs not local to node owning disks
- Cluster Interconnect Latency - Resolution Guide
- Cluster Network Latency - Troubleshooting Guide
- CommVault backups using NIC teaming in ONTAP 9.4 and later experiencing low throughput over SMB 3
- Compression enabled volumes won't read from Flash/External cache
- Connection group load imbalance might cause latency problems for clients
- Constant EMS messages "wafl.inode.cache.highLoad" on a SnapMirror destination volume
- Constant High Disk Utilization in the Root Volume
- Copy of data from one volume to another is slower compared to the ODX enabled SVM
- Copy-offload throttle is not being enforced
- Could a storage bottleneck be causing Microsoft Exchange server to be slow?
- CPU over-subscription leads to QoS minimum latency and user slowness
- CPU utilization above 50 percent before upgrading ONTAP
- CPU utilization high due to high nwk_lg domain
- Creating clones in a FlexGroup is slow
- Customer monitoring reports High CPU Utilization
- CVO AWS cluster seeing high NVlog latency
- CVO instance not retaining 28 days of performance archive data
D
- Data aggregate's IOPS increases after upgrading to ONTAP 9.13.1 or later
- Data ONTAP 8: How to accurately determine CPU utilization of a Data ONTAP 8.0 Cluster-Mode cluster
- Data Processing Latency on AFF from workload which switches volumes rapidly
- Days to Retain Performance Archive Data
- Deletion takes longer for large directories when the SMB CLOSE requests are running for a longer duration
- Disk bottleneck may impact VMware LUN performance and timeouts
- Disk latency with extCache.io.writeError and insert_reject_throttle_io
- Disk utilization from read I/O requests might cause an ONTAP internal service failure
- Do QoS throughput policies apply to read+write operations
- Do RAID scrubs hurt performance?
- Does disk reconstruction lead to performance impact?
- Does network high latency affect CPU utilization
- Does ONTAP track SMB2 or SMB3 directory creation latency?
- Does the NetApp Technical Support Center deliver ONTAP performance health checks or evaluations?
- During giveback client experienced latency with disk timeout seen
E
- Early signs of latency on FlexGroups
- Elevated CPU or high cluster latency when using indirect traffic resolved by using referrals
- Elevated CPU or high cluster latency when using indirect traffic using CIFS or NFS
- Elevated CPU or high cluster latency when using indirect traffic using iSCSI or FCP
- Enabling SMB 3 encryption reduces CIFS performance
- End path evaluation for device errors and slow performance on ESXi
- Error creating or moving volume with encryption
- Essential Case Framing - Performance
- ESX device timeout due to high latency
- ESXi bug causes slow virtual machine replication due to error: IO was aborted by VMFS via a virt-reset on the device
- ESXi events report Lost access to volume
- Excessive Deduplication jobs may decrease available IOPS and increase latency in ONTAP systems
- Extreme performance issue observed on one FlexGroup volume
- Extremely high latency from CPU N-blade caused by volume offline
- Extremely high latency on volume from CPU N-blade
- Extremely slow "rm -rf" performance
F
- FabricPool object storage has poor GET throughput with ONTAP firewall services enabled
- FabricPool overutilizes a 100Mb link
- Facing slowness while writing data to a FlexVol
- Fails to set up throughput floor with unsupported platform
- First steps troubleshooting NetApp FAS error wafl_exempt10: wafl.cp.toolong
- Flash Pool Read Hit rate increases with an empty Read Cache
- FlexCache origin node goes unresponsive with undersized cache volumes
- FlexCache reports read performance issues due to write calls
- FlexGroup constituents don't report the correct IOPS when Qtree QoS Policies are attached
- FlexGroup has high latency due to all workload on single constituent
- FlexGroup has high Other latency on one single constituent with a large directory
- FlexGroup performance impact after node reboot
- FlexGroup performance information is not displayed in ONTAP System Manager
- FlexGroup performance is worse after resizing due to different aggregate types
- FlexGroup volume high latency with inode usage exceeding 90 percent
- Fragmentation and Reallocate
H
- Heavy SMB signing pressure causes high SMB3 latency from Network CPU domain
- High await and util while accessing the LUNs on host
- High build time process latency on the TeamCity build agent
- High CIFS latency on FlexGroup constituents from heavy RENAME workload
- High CIFS latency on new shares based on LOCK MANAGER
- High Cluster Interconnect latency for 256KB or larger IO size
- High CPU and latency caused by misconfigured FPolicy
- High CPU and latency is seen despite low OP counts
- High CPU due to user workload causing various issues
- High CPU utilization - Resolution Guide
- High CPU utilization after a storage giveback during an ONTAP upgrade
- High CPU utilization after ONTAP upgrade
- High CPU utilization and performance imbalance alerts
- High CPU Utilization at irregular intervals seen from Grafana Harvest
- High CPU utilization due to branch-cache feature
- High CPU utilization due to deletion workload
- High CPU utilization due to FlexGroup constituents imbalance
- High CPU utilization due to HostOS domain process MGWD
- High CPU utilization due to HostOS domain with counter manager daemon (CMD)
- High CPU utilization due to massive other IOPS
- High CPU utilization due to one specific HostOS domain process
- High CPU utilization due to volume move
- High CPU utilization due to wafl_exempt
- High CPU utilization from TSSE
- High CPU utilization in host domain from MGWD in ONTAP 9.7
- High CPU utilization in HostOS with Docker Swarm
- High CPU utilization on a node due to tiering scanners
- High CPU utilization on a SnapMirror Destination Node
- High CPU utilization when Continuous Segment Cleaning is enabled
- High CPU utilization when IOPS are lower than expected
- High CPU/Disk utilization on SnapMirror source node from Deswizzlers
- High CREATE or OPEN latency when many clients attempt to access the same file
- High CVO nvlog latency due to overutilized disks
- High Data Processing latency due to a bursty workload
- High disk latency after upgrade due to Flash Cache options being disabled
- High disk latency due to SnapMirror forward sync
- High disk latency on FlexArray system
- High disk latency with low IOPS in Azure CVO
- High Disk or CPU utilization due to excessive Container Block Reclamation scanners
- High disk utilization at night between 1 and 5 AM
- High Disk Utilization caused by deswizzling on SnapMirror destinations
- High disk utilization causing applications to freeze
- High disk utilization due to aggregate reallocation
- High disk utilization is due to improper RG configuration
- High disk utilization on a SnapMirror Source/Destination
- High FlexGroup or FlexVol latency due to CIFS ChangeNotify before ONTAP 9.10
- High HOST CPU domain utilization from nphmd
- High HostOS CPU utilization due to problems with secd
- High HostOS domain CPU utilization after deletion of LS mirror
- High HostOS Utilization due to process ndo_manager
- High IO wait due to bursty sequential write operations
- High latency after enabling NVE on several volumes
- High latency after ONTAP reboot due to malfunctioning Flash Cache
- High latency after upgrading ONTAP Select to version 9.11.1 when using OpenShift 3
- High latency and disk errors with high utilization on a single or a few disks
- High latency and slow throughput when creating volumes via OnCommand System Manager's "Application and Tiers" function
- High latency and unreachable ESXi virtual machine
- High latency caused by massive symlink creation
- High latency due to deletion workload
- High latency due to FCVI error
- High latency due to IOM6 bottleneck
- High latency due to QoS Min throughput
- High latency due to single LIF serving data to each node in cluster
- High Latency for CREATE and RENAME operations
- High latency from cluster interconnect caused by faulty SFPs/cable
- High latency from cluster interconnect with port errors
- High latency from Disk IO even when the disk utilization is still low
- High latency from excessive SMB TRIM operations
- High latency from NVLOG transfer on Azure Cloud Volume ONTAP
- High Latency from QoS Min on workloads with no QoS Policy or not in Adaptive QoS policy
- High latency from SM Sync on SnapMirror Sync(SM-S) primary volumes
- High latency from SSD disks
- High latency in CPU Dblade in ONTAP due to large IO sizes
- High latency is seen during SVM DR initialization
- High latency lasting for 20s after re-inserting the NVMe disks
- High latency observed with qtree quota enabled
- High latency observed on volume with quotas enabled
- High latency observed when trying to list directories from FlexCache volume
- High Latency on CIFS SVM with Home Folder
- High Latency on cluster interconnect due to connection imbalance
- High latency on database log volumes
- High latency on FlexCache destination volume with no IOPS
- High latency on FlexCache due to AIR
- High latency on scratch volumes with high file counts or inodes
- High latency on SnapMirror-Sync volumes due to SQL Server IO Bursts
- High latency on volume after file or snapshot deletion
- High latency on volumes with indirect access due to RCF file
- High latency reported from monitoring tools such as OCUM and AIQ
- High latency reported on SSD-NVM drives in MetroCluster IP using Cisco 9336C-FX2 shared switches
- High latency resolved by disabling FPolicy
- High latency seen in ONTAP on FlexGroup constituents
- High latency seen when using volume move in foreground
- High latency spikes during switch maintenance
- High latency using LogicMonitor despite lower ONTAP latency
- High latency when adding shelf
- High latency when FlexCache origin has stale entries after deleting cache volumes
- High latency when using ONTAP native file auditing
- High Latency while reading snapshot from the archive
- High latency with high nwk_exempt CPU utilization after FlexVol to FlexGroup conversion
- High Latency with SM-BC
- High LUN latency and increased read throughput after an ESXi upgrade
- High memory consumption by VSCAN causing poor performance
- High metadata latency with FlexGroup
- High N-blade latency due to extensive export policy configuration changes
- High N-blade latency with available headroom
- High network latency in an iSCSI SAN environment
- High network LIF bandwidth utilization or S3 target load due to FabricPool
- High NFS latency due to single disk IO medium error
- High Node utilization alert due to heavy CIFS SMB signing
- High NVRAM transfer latency on MetroCluster IP
- High nwk_exempt CPU consumption on A800 driven by heavy Single Volume Workload
- High ONTAP latency reported but not in qos statistics volume latency show
- High or fluctuating latency after turning on NetApp ONTAP File System Analytics
- High other latency and high other IOPS seen
- High other latency when tiering set to all
- High port utilization leading to performance degradation
- High read latency due to high disk utilization
- High read latency from CPU D-blade after volume move
- High read latency on guest VM
- High read latency seen after upgrading from AFF-A400 controller
- High READ latency when implementing copy-offload between flexgroup constituents
- High Read or Write Latency due to CPU bottleneck from user workload
- High Read or Write Latency due to disk bottleneck from user workload
- High Read/Write latency at SVM level due to single or multiple volumes
- High response times for volumes on SATA or SAS disks when workload increases (Performance Capacity beyond critical threshold)
- High SSD disk utilization with APDv2
- High utilization of disk model X342_TA14E1T2A10 with NA02 firmware
- High VM latency or volume latency on disk fixed by volume move or vMotion
- High VMware or application latency but ONTAP and UM show low latency due to bad upstream SFP
- High write latency and suspends with sequential append workload
- High Write Latency due to Back to Back CP with Small Write Size
- High write latency due to NVLog Transfer on MetroCluster SDS
- High write latency from Back to Back CPs caused by disk pressure
- High write latency on FlexCache
- High write latency on MetroCluster IP with Nexus
- High write latency on MetroCluster IP during heavy peaks of write workload
- High write latency only during the SM-Sync snapshot
- Higher latency after volume move
- Higher than expected SSD latency due to 6Gb SAS shelves and cables
- Higher than expected SSD latency due to missing 12Gb SAS cables
- Higher than expected SSD latency due to needing disk firmware updates
- How concurrency can impact throughput performance with ONTAP 9
- How do Dynamic Home Directories improve performance?
- How do I know if CPU is causing a performance issue?
- How do I search for the top file in the volume?
- How do I tell what "other" NFS iops are being processed within my volume?
- How do you identify the source of reads for a workload in ONTAP 9?
- How does ONTAP allocate CPU resource to volumes
- How does QoS policy group associated with a file work?
- How many IOPS can a FAS or AFF controller handle?
- How to access and display CLI and ZAPI performance data in ONTAP 9
- How to address network latency in a SAN environment - Resolution Guide
- How to adjust the SnapMirror-Synchronous Common Snapshot Schedule for performance tuning
- How to Analyze Oracle Database Performance Issues
- How to Assess Disk Response Times in ONTAP
- How to calculate SAP HANA's "alter system reclaim datavolume" command percentage after hitting bug 865444
- How to change and stagger the deduplication schedule in ONTAP 9
- How to change the Performance Metadata Size?
- How to check for background CPU utilization in ONTAP 9
- How to Check SnapMirror Operations in ONTAP
- How to check what "other" IOPS are
- How to collect a Perfstat from a clustered Data ONTAP system in Mac OS X
- How to collect a Perfstat from a clustered Data ONTAP using the CLI in Windows
- How to collect a Perfstat from clustered Data ONTAP systems using the Perfstat GUI
- How to collect a Perfstat from ONTAP 9 using the CLI in Linux
- How to collect a Perfstat7 in Windows
- How to collect a Perfstat7 on Windows 7 or Server 2008 or newer via GUI
- How to collect performance data when Perfstat8 fails by using preset files and manual data collection
- How to collect Performance Statistics for intermittent issues
- How to collect Perfstat data over RSH
- How to collect WAFLTOP output from CLI
- How to compare CSS schedule to SM-S latency occurrences
- How to confirm volume latency and IOPS
- How to create a custom Performance Preset for Performance Archives
- How to detect lock storms faster
- How to determine the number of RENAME requests for CIFS and NFS access
- How to disable or modify cache settings for a FlexVol resident on a Flash Pool
- How to disable QoS policy on a LUN
- How to disable QoS policy on a volume
- How to enable the option "disk.latency_check_ssd.fail_enable"
- How to find workloads that do not meet their min-throughput SLOs
- How to fix indirect volume access
- How to get full output of raw data with full precision and not SI prefixes in the statistics command
- How to get Support for Harvest
- How to identify a bursty or oversubscribed workload causing QoS latency
- How to identify a disk with high IQ_QUEUED latency
- How to identify and resolve top workloads
- How to identify bully workloads from Linux
- How to identify bully workloads from VMware
- How to identify bully workloads from Windows
- How to identify bully workloads using packet traces in CIFS or NFS environments
- How to identify full volume name from qos statistics output
- How to identify high-latency volumes with truncated volume names
- How to identify if workload is using indirect access
- How to identify transient workload bursts using "sysstat" command
- How to increase priority for disk reconstruction
- How to interpret the QoS ratebucket ID
- How to investigate ONTAP CIFS latency with Nblade.CifsOperationTimedOut and SpinNp error 410
- How to limit the maximum number of post-process compression and deduplication processes running in parallel
- How to manually generate and upload performance archives
- How to map an ocs_vserver workload to which volume it came from
- How to measure CPU utilization
- How to measure CPU utilization in a 7-Mode device
- How to measure the Ops and Latency reported from Volume layer
- How to monitor LUN statistics from the CLI in Clustered Data ONTAP 8.3 and later
- How to perform volume reallocation in ONTAP 9
- How to rectify performance issues using monitoring tools
- How to reduce Data Processing latency
- How to reduce the disk latency or data processing latency by volume move
- How to review aggregate performance via Active IQ website
- How to Show Throughput and Operations/s per File in ONTAP 9.x
- How to stagger dedupe automatically based on thresholds
- How to throttle volume move in ONTAP 9.10 or later
- How to troubleshoot and resolve FabricPool Performance issues or "measured latency from cloud" alerts
- How to troubleshoot FlexGroup performance issues
- How to troubleshoot performance issues related to SnapMirror throughput and link utilization
- How to tune iSCSI Qdepth on Microsoft iSCSI SW initiator
- How to use and understand qos statistics commands to monitor volume latency in real time
- How To Use OneCollect 2.0 or Greater to Collect and Upload Performance Archive Data
- How to validate splitter vol queue time for SM-S
- How to verify expected latency and network RTT with SM-S
- How to view past throughput
- How is volume read/write latency measured in Grafana/NABox graphs?
- Hyper-V over SMB3 logs StorageVSP latency multiple times a day
I
- IBM FileNet intermittently reporting an error "CONTENT_FCA_FILE_DOES_NOT_EXIST"
- Identify Performance Degradation due to Misbehaving Disk in ONTAP
- Improving volume throughput for NFSv3 with nconnect mount option
- Incorrect network stats reported in ifstat and sysstat
- Increase in CPU utilization after S3 copy
- Increase in CPU utilization when SnapMirror is in progress
- Increase in network traffic following update from ONTAP 9.13.1P7 to 9.13.1P8
- Increased CPU after changing SnapMirror DP to XDP
- Increased Disk utilization during and after ONTAP upgrade
- Increased latency after converting FlexVol to FlexGroup
- Increased latency due to high QOS minimum latency after upgrade to ONTAP 9.7
- Increased latency when Oracle DB is configured with logbias=latency
- Increased read latency on a node due to increased FabricPool cold tier reads
- Inode Cache - file operations suspend heavily creating massive latency
- Intermittent latency spikes and latency on remote disk
- Intermittent performance issue on the cluster
- Intermittent write latency spikes when using VAAI clones
- Intermittently receiving "NFS not responding" errors on the UNIX client
- IO size throttle on workload from Windows Server 2022, 2025
- Is my controller overloaded?
- Is the high memory utilization trackable by SNMP?
- Is there any impact on performance after enabling Security Audit Log?
- Is there any way to monitor the memory utilization of ONTAP?
- It takes longer to run a batch job when the client NFS mount option actimeo is set to 0
L
- Large file access is slow while reading from cold tier
- Latency and disk utilization increase during events such as ONTAP shutdown
- Latency from CPU_protocol for extremely bursty SAN workloads
- Latency higher than expected for AFF system
- Latency in QoS before limit on ONTAP 9.1
- Latency increase using Dell Quest Change Auditor FPolicy server after ONTAP upgrade
- Latency increases during volume move cutover phase
- Latency increases suddenly while using QoS - Resolution Guide
- Latency induced by CPU N-Blade/Network on SAN datastores
- Latency issues with Oracle databases on a C-Series AFF system
- Latency observed on Application as well as AIQUM when CA is enabled on the share
- Latency occurs when snapshots are triggered for volumes created through System Manager's "Applications" feature
- Latency reported as ICMP packets dropped by storage interface (LIF)
- Latency seen while copying files from local disk to NetApp LUN due to incompatible host HBA and switch SFP
- Latency when FlexShare is enabled
- Linux clients using SMBv3 share report slow performance
- Logging into cluster using Active Directory domain account times out when using Kerberos for domain-tunnel authentication
- Long Consistency Point "wafl.cp.toolong" when disk is about to fail or being failed
- Long CP after SnapMirror Transfer on AFF using XDP
- Long CP occurred when the NSM module was replaced
- Long CP on aggregate for CVO setup
- Long CPs dominated by P2_FLUSH during SnapMirror Finalizing phase
- Long CPs from multiple aggregates when target disk for SDC is also sick
- Long CPs with high CPU utilization on FlexClone volumes
- Low disk IO impacting the network throughput
- Low file transfer throughput from one ONTAP cluster compared to another
- Low latency and High CPU or DISK utilization due to deduplication
- Low NFS read throughput on ESXi due to tuning
- Low Space in FabricPool Performance tier leads to poor performance
- Low throughput when using CIFS
- Low throughput when using Windows File Explorer
- Low throughput while restoring a file
- Lower read throughput in a FabricPool environment via NFSv3 on ONTAP 9.10.1
- Lower SnapMirror throughput due to disk bottleneck
- Lower than expected performance while utilizing a single volume
- Lower throughput on certain HBA
M
- Managing Cluster IOPS Warning Threshold Breached
- Memory overutilization causes slow opening of directories on a CIFS share
- Microsoft SQL disk overwhelmed causing high response times
- Monitoring system is triggering alert for a CVO volume crossing the threshold limits
- Multiple aggregates long CPs caused by failed Flash Cache
N
- NAS access slow or hung caused by CRC errors on network port
- Natus NeuroWorks Acquisition Station randomly drops network connection to NetApp Storage
- Nblade.CifsOperationTimedOut errors due to latency outliers
- nblade.cvo.remote.access
- nconnect modification causes degraded NFS performance
- Network latency error message in EMS logs for SnapMirror-Synchronous
- Network latency for FC SAN due to noisy neighbour
- Network latency in a Fibre Channel SAN environment
- Network outage during ONTAP upgrade
- NFS performance is slow without any latency increasing in ONTAP
- NFS resource not performing as quickly as expected due to high workload on LIF
- NFS traffic over UDP is slow
- NFS, iSCSI, or FCP performance problems on Cisco UCS platforms
- No performance archives are available when attempting to invoke them
- Node takes a long time to complete the takeover
- NVRAM purpose during ONTAP outage
O
- Observing high latency while modifying an Export policy
- OCUM Alert: Max Data Disk Utilization value
- ODX copy-offload is slower than host based copy for files with more extents
- On FlexGroup/FlexVol with tiering policy All, data access is very slow
- OneCollect 1.9 for Mac OS Performance Archives Fail to Collect
- Only "-total-" when filtering for volume in "qos statistics volume latency show"
- ONTAP 9 - Per Aggregate CP
- ONTAP 9 - Slow Space Reclamation when deleting Files via NAS Protocol
- ONTAP 9 Performance - Resolution Guide
- ONTAP Data Processing latency - Resolution Guide
- ONTAP Disk utilization and latency - Resolution Guide
- ONTAP LUN with VMware and SQL latency spikes
- ONTAP running at 100 percent CPU due to SnapMirror
- ONTAP single LUN latency at network layer since ONTAP 9.3
- Opening a specific directory is very slow
- Operations time out due to AQoS throttling
- Outages and high latency during ONTAP upgrade caused by high CPU
P
- Performance archive (PA) is not enabled for datastore 'opm' when PA is enabled
- Performance Archive failed and took longer than 25 seconds to complete
- Performance degradation after moving LIFs between nodes in a cluster
- Performance impact due to low aggregate free space
- Performance impact when browsing snapshot directory
- Performance issue accompanied with the storage errors
- Performance issue due to PCIE error
- Performance issue during the NDMP backup
- Performance issue where the latency is not on NetApp storage
- Performance Issues due to Consistency Point (CP) long wait time messages
- Performance violation alert seen in OnCommand Insight
- Performing a lot of locks (NFSv3) causing higher latency
- Periodic write/read performance degradation while processing huge numbers of small files
- Poor client performance but ONTAP latency is good - Resolution Guide
- Poor performance and high CPU usage in a single node due to a degraded DIMM
- Poor performance and service disruption on CIFS vFilers
- Poor Performance due to MTU size differences
- Poor performance during RAID scrubs
- Poor performance on VMware Oracle DB
- Poor performance on Windows CIFS clients using Adobe Premiere
- Poor performance with CIFS and Microsoft Defender for Endpoint
- Poor performance (low throughput) with iSCSI during intravolume copy offload
- Poor reading performance due to COS4
- Poor SQL Workload Performance on New AFF-A250 Compared to Old E-Series System
Q
- QoS Limits causing high volume latency
- QoS policy with duplicate PGIDs will not be effective in MCC cluster
- QoS throughput floor is not met on workloads with low concurrency and low latency
- QoS Workload exceeding configured limit
- qos.VioDet.Mintput.Throttle seen after upgrading to 9.13
- Quality of Service limit is not enforced
R
- Read Latency due to slow SATA disks from user workload
- Read latency on Clone volume
- Recurring High CPU with Back-to-Back CPs
- Reducing cluster interconnect latency with DFS node referrals
- Resolving excessive FabricPool object store network latency
- Robocopy stops making progress during a copy, using close to 100 percent CPU
S
- SAN increased latency with low throughput from VAAI Copy Offload
- Shared QoS Policy causing high latency and performance issues
- Sick disk causes performance impact
- Single client with large number of open file requests inducing latency
- Single LUN has latency in ONTAP 9
- Slow backup to cloud for large volume
- Slow copy operations, latency, or loss of access due to outage in a FabricPool volume
- Slow directory deletion on FlexCache Origin
- Slow file metadata queries via REST API
- Slow file moves across directories in FlexGroups
- Slow FlexCache operations on the origin volume on workloads like SVN
- Slow Java application performance pulling data from a CIFS share
- Slow LUN access with high latency from Network subsystem
- Slow LUN moves with all LUNs on a single volume
- Slow LUN read performance for ONTAP 9.3 to 9.9
- Slow NDMP restore Performance with insufficient TCP packet window size
- Slow NDMP when Media Server is located in different subnet range
- Slow NFS read throughput with Oracle 12 db
- Slow optimized volume move for FabricPool attached volume due to BRE throttling
- Slow optimized volume move with ONTAP S3 cold tier
- Slow Oracle database on VMware ESXi using VMware's NFS
- Slow performance due to SnapMirror Synchronous
- Slow performance on SMB shares when SMB encryption is enabled
- Slow performance using NFS with FlexCache with a larger number of NOENTs
- Slow performance when running vMotion to local vSAN datastore
- Slow reads from database volumes
- Slow restore with many or small files via BlueXP backup and recovery
- Slow Rubrik backups in tiered volumes
- Slow SnapMirror or volume move performance and low throughput due to BRE/LRSE Throttling
- Slow SQL performance, disconnects, and high utilization in AIQUM
- Slow Storage vMotion for thin provisioned disks on NFS
- Slow tiering during high workload
- Slow Tiering to an Object Store
- Slow VM Clone using SCE caused by long suspensions on COPY_WAIT
- Slow VMware vMotions on NFS due to default NFS TCP transfer size
- Slow VMware vMotions on NFS due to Spanning Tree issues
- Slow VMware vMotions on NFS due to VMKernel Interfaces not having vMotion configured
- Slow VMware vMotions on ONTAP 9
- Slow volume move in Cloud Volumes ONTAP
- Slow WAFL scanner causes delays in freeing blocks from snapshots
- Slow writes on FlexCache write-back using ACLs
- Slow writes to a FlexCache when writing small files
- Slower than expected file volume transfer over NFS
- Slowness and poor performance due to mismatched MTU
- Slowness doing additional workload such as volume moves on FAS9000 or larger systems
- Slowness during data copy from On-Prem to NetApp CVO
- Slowness in ONTAP Data Processing when repeat Zone Identifiers don't exist
- Slowness in renaming a large number of files concurrently with FPolicy enabled
- Slowness on FC LUNs on new ONTAP Cluster
- Slowness redirecting Documents and Desktops to ONTAP after Windows restart
- Slowness reported for volumes with QOS min configured
- Slowness while saving files in a particular directory of CIFS share using FlexGroup
- Slowness with ls command and reading/writing random files inside an NFS mount
- Slowness with Synopsys StarXT or other NFSv3 applications with NLM
- SnapMirror Sync (SM-S) primary volumes have high latency due to bursty volume workload
- SnapMirror takes a long time
- SnapMirror throughput limited by CPU
- SnapMirror tiered destination backup restores time out using CommVault or other backup software
- SnapMirror transfer speed is slow to a FabricPool destination
- SNMP traps not recognized by SNMP monitoring tool
- SPINNP_WAFFI_PUNCH_HOLES extreme suspend latency on LOAD_BUF_DISK_LOCK_CHILD
- Sporadic high latency for NFS Other operations on FlexGroup
- SQL latency error "Logical disk transfer (read and writes) latency is too high"
- Statistics show-periodic does not show port throughput
- Storage Performance Impact due to NFS Client Configuration
- Storage VM Latency Event Threshold Breached
- Sudden delay in Veeam backup job completion with FabricPool
- Sudden high IOPS seen in AIQUM tool at Node level
- Sudden latency and CPU utilization from workloads going idle to busy resolved by QoS
- Sudden latency and CPU utilization from workloads going idle to busy resolved by reducing workload
- System Manager loading very slowly
T
- Tape/NDMP or backup jobs can cause latency or utilization due to too few disks
- The SyncMirror resync takes a long time on the SyncMirror Aggregate
- Transfer speeds not hitting 10 Gigabit maximum
- Traversing a large sparse directory is extremely slow with long running READDIR requests
- Troubleshooting CIFS Latency - Single volume readdir workloads with high latency
U
- UDP connections are single threaded
- Unable to access System Manager using the cluster-mgmt IP
- Unable to connect to Storage Account leading to high disk latency in CVO
- Unable to give back or SFO data aggregates after takeover/giveback operations
- Unresponsive D-Blade leading to NAS protocol outage
- Unzipping a file using tar xvf is slow
- User workload leading to latency from the disk layer
- Users complain of performance on a volume impacted by other nodes in the cluster
- Users report slowness on volumes with TSSE enabled
V
- VDI slowness and latency during high logins or load such as between 8-9 AM due to high ONTAP load
- VDI slowness and latency during high logins or load such as between 8-9 AM due to ONTAP configuration
- VDI slowness and latency during high logins or load such as between 8-9 AM due to VDI configuration
- VDI slowness during batch desktop creations on ONTAP NFS export
- Veeam Backup of Windows VM fails during Shadow copies commit due to background write handling
- Very slow/low throughput reads vs. writes on CVO
- VM freezing when deleting VM snapshot in ESXi 7.0U2
- VMware datastore disconnects after Splunk delete/create jobs
- VMware reports "Lost access to volume, recovery attempt in progress" error
- Volume high latency caused by misconfigured FPolicy
- Volume latency induced by CPU N-Blade
- Volume Latency Warning Threshold Breached Alert from AIQUM
- Volume tiering very slow in ONTAP 9.8
- VSS Backups failing with error message Failed to create VSS snapshot
- VSS_E_HOLD_WRITES_TIMEOUT with SnapCenter due to slow ONTAP response
W
- wafl.cp.toolong errors seen due to high workload and high CPU
- wafl.cp.toolong or high disk utilization on archive, backup, or disaster recovery nodes
- What are Calculate Parity reads (CP reads) and why do they make extra disk reads?
- What are common performance terms?
- What are CPU as a compute resource and the CPU domains in ONTAP 9?
- What are IOPS?
- What are performance archives and how are they triggered?
- What are some internal ONTAP workloads in qos statistics commands?
- What are the benefits of Consistency Points versus Direct Writes?
- What are the best practices for adding disks to an existing aggregate?
- What are the counters from the flexscale-access output and how does it work?
- What are the Delay Centers from different performance monitoring tools?
- What are the different Consistency Point types and how are they measured in Data ONTAP 8?
- What are the methods of perfstat collection for ONTAP 9?
- What are the metrics used to analyze system performance of CPU?
- What are the performance considerations of deduplication in ONTAP systems?
- What are the throughput values for common LAN and WAN connections?
- What commands are useful to monitor performance in ONTAP 9?
- What does "Spin_ops" mean in show-periodic command output?
- What does ANY mean in "sysstat" command output?
- What does the "CP time" displayed in the output of sysstat mean?
- What does the 'cluster' displayed in the 'statistics system show' command indicate?
- What does the wafltop output mean and how can it be interpreted?
- What is "PEAK_PERFORMANCE" line in Active IQ Digital Advisor or the headroom in ONTAP?
- What is a PIT file in Data ONTAP?
- What is Adaptive QoS and how does it work?
- What is Consistency Point, and why does NetApp use it?
- What is CPU utilization in Data ONTAP: Scheduling and Monitoring?
- What is database logging due to misaligned I/O?
- What is Quality of Service (QoS) in ONTAP?
- What is the "_ocs_vserver" workload in qos statistics?
- What is the available bandwidth for a shelf stack using IOM12 and SAS3 cards?
- What is the Back-to-Back (B2B) Consistency Point Scenario?
- What is the difference in latency of the same volume between "statistics volume show" and "qos statistics volume latency show" commands?
- What is the Directory Indexing Scanner and what does directory indexing accomplish?
- What is the DPO license and how does it affect QoS policies and latency?
- What is the FlexVol read or write path?
- What is the ONTAP inode cache?
- What is the server's name causing high IOPS and latency?
- What is the theoretical maximum throughput of a Gigabit Ethernet interface?
- What operations are categorized under Other IOPS in ONTAP?
- What tool can be used to check ONTAP's performance?
- When do ONTAP metrics measure round trip time to clients or only internal ONTAP time?
- When changing the QoS policy of a large number of volumes from shared mode to non-shared at the same time, is there a performance impact on the system?
- Where can I learn more about Consistency Points?
- Why are IOPS in System Manager different than per protocol IOP counts in CLI?
- Why are there qos.VioDet.Mintput.Throttle alerts after upgrading ONTAP to 9.13.1 or newer?
- Why does a single disk have higher utilization than the others in the same RAID group?
- Why does FabricPool generate more cloud storage load on ONTAP reboot?
- Why Does NetApp Harvest Report High Usage Times for WAFL Cleaning?
- Why does qos statistics volume latency show have disk instead of cloud latency?
- Why does the statistics volume show command show the total ops which do not match the read, write, and other ops?
- Why is a workload's latency high when the IOPS are low?
- Why is CPU high when FabricPool is enabled?
- Why is ONTAP setting Optimal Transfer Length to 0x80?
- Why is the reported LUN latency higher than the volume latency?
- Why is there a front-end latency spike without a corresponding user workload?
- Why is there lower throughput with increased nconnect threads with IPsec enabled?
- Why is there QoS latency after upgrading to ONTAP 9.3+?
- Why is vol0 showing a high number of IOPS?
- Why is Volume write throughput much higher than Workload write throughput when SCSI WRITE SAME is present?
- Why Flash Cache reads may be very low during performance issues
- Why would AIQUM report a higher delay than other performance monitoring tools?
- Will enabling FlashCache mbuf_inserts and rewarm options negatively impact performance?
- Will high delays from ONTAP processing layer impact Oracle Redo workload performance?
- Will there be any impact from collecting statistics for the client object for about a week using the statistics command?
- Workload exceeding QoS limit on non-MCC cluster
- Workload exceeds the QoS limit on All SAN Array
- Workload lun latency threshold breached defined by performance service level policy
- Write latency and low throughput in a MetroCluster environment
- Write latency due to imbalanced Raid group
- Write Performance Impacted by Back to Back Consistency Points
- Wrong output for the command statistics top client