ONTAP Performance
Featured
Performance Resolution Guide
Start here to troubleshoot ONTAP 9 performance
Data Processing latency Resolution Guide
Troubleshoot Data Processing latency
Disk utilization and latency Resolution Guide
Troubleshoot FAS high disk latency or utilization
Other Resources
- What are performance archives and how are they triggered? (Mar 2024)
- QoS Limits causing high volume latency (Mar 2024)
- What are the best practices for adding disks to an existing aggregate? (Mar 2024)
- Does network high latency affect CPU utilization (Mar 2024)
- ONTAP Data Processing latency - Resolution Guide (Mar 2024)
- Slow NDMP when Media Server is located in different subnet range (Mar 2024)
- High latency reported on SSD-NVM drives in MetroCluster IP using Cisco 9336C-FX2 shared switches (Mar 2024)
- QoS policy with duplicate PGIDs will not be effective in MCC cluster (Mar 2024)
- What are some internal ONTAP workloads in qos statistics commands? (Mar 2024)
- Slow copy operations, latency, or loss of access due to outage in a FabricPool volume (Mar 2024)
- How to manually generate and upload performance archives (Mar 2024)
- Slow Rubrik backups in tiered volumes (Mar 2024)
- High Read or Write Latency due to CPU bottleneck from user workload (Mar 2024)
- Latency increases suddenly while using QoS - Resolution Guide (Mar 2024)
- How to measure CPU utilization (Mar 2024)
- Unresponsive D-Blade leading to NAS protocol outage (Mar 2024)
- High write latency due to NVLog Transfer on MetroCluster SDS (Mar 2024)
- High latency observed on volume with quotas enabled (Mar 2024)
- High Data Processing latency due to a bursty workload (Mar 2024)
- Sick disk causes performance impact (Mar 2024)
A
- A FlexCache origin volume hangs and stops responding with extremely high latency
- Active IQ Wellness: Up to High Impact - This system is nearing the limits of its performance capacity
- Adaptive QoS maximum latency for volume when IO is below its expected IO limit
- Affinity bottleneck due to a few volumes sharing the same VOL affinity ID
- After headswap iSCSI latency is seen
- After Linux kernel update, NFS is slow with many extra denied ACCESS calls
- Aggregate has high disk utilization due to large deletion
- AIQUM 9.7+ or NSLM may enable workload AQoS causing unexpected latency
- AIQUM reported about 4.5 million IOPS for a node
- AIQUM reports high latency from Cluster Interconnect
- An ONTAP cluster stops serving traffic with FlexCache
- Are benchmarks appropriate to measure or troubleshoot performance?
- Are long Consistency Points (wafl.cp.toolong) normal?
- Are there Performance implications to using Advanced Drive Partitioning (ADP)?
- Authentication Error when attempting to use NetApp System Performance Modeler
B
- Backup jobs via CIFS are slow while doing indirect access
- Bad LUN to LUN copy performance due to ODX
- Bad Performance on API Calls for multi-admin-verification
- Bad read performance for reads after upgrading to 9.6 and later
- Benchmarks are slow on CrystalDiskMark especially 4k reads or other benchmarks
- Bully write workload causing the performance issue in MetroCluster environment
- Bursty write append workloads impacting write latency on AFF
- Bursty write workload could affect performance
C
- Can adding a second Fiber Connection increase throughput?
- Can CHANGE NOTIFY option set for CIFS Vserver affect NFS clients write performance?
- Can I increase IOPS on a QoS policy or will it cause performance problems?
- Can thin provisioning affect performance on a NFS volume?
- CIFS Client Change Notification setting can cause unnecessary load in ONTAP
- CIFS client slow with FPolicy enabled
- CIFS clients access slowly with FPolicy disconnecting repeatedly
- CIFS FlexGroups with ChangeNotify become unresponsive
- CIFS high other latency on Citrix VDI or other workloads
- CIFS inaccessible after Snapmirror Cloud Restore
- CIFS or NFS RENAME operations slow on FlexGroup
- CIFS or NFS responses are very slow caused by CPU bottleneck with Kahuna domain due to CIFS operations
- CIFS performance degrades during system configuration backup
- CIFS Share stuck on calculating when copying a file with Windows Explorer
- CIFS slowness issue observed after receiving Nblade.vscanConnInactive alerts
- Citrix VDI crashing along with high volume latency
- Cloud data retrieval is slow
- Cloud Insights does not return perf data for all expected volumes
- Cloud Volumes ONTAP NVlog transfer bottleneck - Resolution Guide
- Cloud Volumes ONTAP: High CPU utilization on highest-numbered CPU of Azure HA nodes
- Cluster Interconnect delay due to data LIFs not local to node owning disks
- Cluster Interconnect Latency - Resolution Guide
- Cluster Network Latency - Troubleshooting Guide
- CommVault backups using NIC teaming in ONTAP 9.4 and later experiencing low throughput over SMB 3
- Compression enabled volumes won't read from Flash/External cache
- Connection group load imbalance might cause latency problems for clients
- Consistency Point taking too long during ONTAP upgrade
- Constant EMS messages "wafl.inode.cache.highLoad" on a SnapMirror destination volume
- Constant High Disk Utilization in the Root Volume
- Copy of data from one volume to another is slower compared to an ODX-enabled SVM
- Could a storage bottleneck be causing Microsoft Exchange server to be slow?
- CPU Load constantly at 100% caused by vldb services
- CPU over-subscription leads to QoS minimum latency and user slowness
- CPU utilization above 50 percent before upgrading ONTAP
- CPU utilization high due to high nwk_lg domain
- CVO AWS cluster seeing high NVlog latency
- CVO instance not retaining 28 days of performance archive data
D
- Data ONTAP 8: How to accurately determine CPU utilization of a Data ONTAP 8.0 Cluster-Mode cluster
- Data Processing Latency on AFF from workload which switches volumes rapidly
- Days to Retain Performance Archive Data
- Deduplication for AFF is unable to keep up with incoming write workload
- Deletion takes longer for large directories when the SMB CLOSE requests are running for a longer duration
- Disk bottleneck may impact VMware LUN performance and timeouts
- Disk I/O performance issue in the Azure single node instances
- Disk latency with extCache.io.writeError and insert_reject_throttle_io
- Disk utilization from read I/O requests might cause an ONTAP internal service failure
- Do QoS throughput policies apply to read+write operations
- Do RAID scrubs hurt performance?
- Does disk reconstruction lead to performance impact?
- Does network high latency affect CPU utilization
- Does ONTAP track SMB2 or SMB3 directory creation latency?
- Does the NetApp Technical Support Center deliver ONTAP performance health checks or evaluations?
- During giveback client experienced latency with disk timeout seen
E
- Early signs of latency on FlexGroups
- Elevated CPU or high cluster latency when using indirect traffic resolved by using referrals
- Elevated CPU or high cluster latency when using indirect traffic using CIFS or NFS
- Elevated CPU or high cluster latency when using indirect traffic using iSCSI or FCP
- Enabling SMB 3 encryption reduces CIFS performance
- Encountering AIQUM alerts while backup performance is slow
- End path evaluation for device errors and slow performance on ESXi
- Error creating flex volume with encryption
- Error: win32_create took 26 (> 15) secs, reported on GFC core logs
- Essential Case Framing - Performance
- ESXi bug causes slow virtual machine replication due to error: IO was aborted by VMFS via a virt-reset on the device
- ESXi events report Lost access to volume
- Excessive Deduplication jobs may increase latency in ONTAP systems
- Extreme performance issue observed on one FlexGroup volume
- Extremely high latency from CPU N-blade caused by volume offline
- Extremely high latency on volume from CPU N-blade
- Extremely slow "rm -rf" performance
F
- FabricPool object storage has poor GET throughput with ONTAP firewall services enabled
- FabricPool overutilizes a 100Mb link
- Facing slowness while writing data to a FlexVol
- Fails to set up throughput floor with unsupported platform
- First steps troubleshooting NetApp FAS error wafl_exempt10: wafl.cp.toolong
- Flash Pool Read Hit rate increases with an empty Read Cache
- FlexCache reports degraded read performance due to write calls
- FlexGroup constituents don't report the correct IOPS when Qtree QoS Policies are attached
- FlexGroup has high latency due to all workload on single constituent
- FlexGroup has high Other latency on one single constituent with a large directory
- FlexGroup IOPS reported from OCUM is zero after applying Qtree QoS Policies
- FlexGroup performance impact after node reboot
- FlexGroup performance information is not displayed in ONTAP System Manager
- FlexGroup performance is worse after resizing due to different aggregate types
- FlexGroup volume high latency with inodes used exceeding 90 percent
- FlexGroup volume reports a lower ops than the sum of ops from all the constituent volumes
- Fragmentation and Reallocate
H
- HCL BigFix Endpoint Management software scans .snapshot directory causing latency
- Heavy SMB signing pressure causes high SMB3 latency from Network CPU domain
- High await and util while accessing the luns on host
- High build time process latency on the TeamCity build agent
- High CIFS latency on new shares based on LOCK MANAGER
- High CIFS latency when volume/qtree or CIFS share have oplocks disabled
- High Cluster Interconnect latency for 256KB or larger IO size
- High CPU and latency caused by misconfigured Fpolicy
- High CPU and latency is seen despite low OP counts
- High CPU due to user workload causing performance issues
- High CPU utilization - Resolution Guide
- High CPU utilization after ONTAP upgrade
- High CPU Utilization at irregular intervals seen from Grafana Harvest
- High CPU utilization due to branch-cache feature
- High CPU utilization due to FlexGroup constituents imbalance
- High CPU utilization due to HostOS domain process MGWD
- High CPU utilization due to HostOS domain with counter manager daemon(CMD)
- High CPU utilization due to massive other IOPS
- High CPU utilization due to one specific HostOS domain process
- High CPU utilization due to volume move
- High CPU utilization due to wafl_exempt
- High CPU utilization in host domain from MGWD in ONTAP 9.7
- High CPU utilization in HostOS with Docker Swarm
- High CPU utilization on a Snapmirror Destination Node
- High CPU utilization when Continuous Segment Cleaning is enabled
- High CPU utilization when IOPS are lower than expected
- High CPU/Disk utilization on Snapmirror source node from Deswizzlers
- High CREATE or OPEN latency when many clients attempt to access the same file
- High CVO nvlog latency due to overutilized disks
- High Data Processing latency due to a bursty workload
- High disk latency after upgrade due to Flash Cache options being disabled
- High disk latency due to Snapmirror forward sync
- High disk latency during migration of data over SVM-DR
- High Disk Latency on a FlexArray/V-Series System
- High disk latency on FlexArray system
- High disk latency with low IOPS in Azure CVO
- High Disk or CPU utilization due to excessive Container Block Reclamation scanners
- High disk utilization at night between 1 to 5 AM
- High Disk Utilization caused by deswizzling on SnapMirror destinations
- High disk utilization causing applications to freeze
- High disk utilization due to aggregate reallocation
- High disk utilization is due to improper RG configuration
- High disk utilization on a SnapMirror Source/Destination
- High FlexGroup or FlexVol CIFS latency due to CIFS ChangeNotify fixed with share properties
- High FlexGroup or FlexVol latency due to CIFS ChangeNotify before ONTAP 9.10
- High HOST CPU domain utilization from nphmd
- High HostOS CPU utilization due to problems with secd
- High HostOS domain CPU utilization after deletion of LS mirror
- High HostOS Utilization due to process ndo_manager
- High IO wait due to bursty sequential write operations
- High latency after ONTAP reboot due to malfunctioning Flash Cache
- High latency and disk errors with single/few disk high utilization
- High latency and slow throughput when creating volumes via OnCommand System Manager's ‘Application and Tiers’ function
- High latency caused by massive symlink creation
- High latency due to deletion workload
- High latency due to FCVI error
- High Latency due to ODX workload
- High Latency for CREATE and RENAME operations
- High latency from cluster interconnect caused by faulty SFPs
- High latency from cluster interconnect with port errors
- High latency from TA14 HDDs
- High latency from excessive SMB TRIM operations
- High latency from NVLOG transfer on Azure Cloud Volume ONTAP
- High Latency from QoS Min on workloads with no QoS Policy or not in Adaptive QoS policy
- High latency from SM Sync on SnapMirror Sync(SM-S) primary volumes
- High latency in CPU Dblade in ONTAP due to large io sizes
- High latency is seen during SVM DR initialization
- High latency lasting for 20s after re-inserting the NVMe disks
- High latency observed on volume with quotas enabled
- High latency observed when trying to list directories from FlexCache volume
- High Latency on CIFS SVM with Home Folder
- High Latency on cluster interconnect due to connection imbalance
- High latency on database log volumes
- High latency on FlexCache destination volume with no IOPs
- High latency on FlexCache due to AIR
- High latency on scratch volumes with high file counts or inodes
- High latency on Snapmirror Synchronous source volume
- High latency on SnapMirror-Sync volumes due to SQL Server IO Bursts
- High latency on volume after file or snapshot deletion
- High latency on volumes with indirect access due to RCF file
- High latency reported from monitoring tools such as OCUM and AIQ
- High latency reported on SSD-NVM drives in MetroCluster IP using Cisco 9336C-FX2 shared switches
- High latency resolved by disabling Fpolicy
- High latency seen in ONTAP on FlexGroup constituents
- High latency using LogicMonitor despite lower ONTAP latency
- High latency when adding shelf
- High latency when on ONTAP 8.3.1
- High Latency while reading snapshot from the archive
- High latency with high nwk_exempt CPU utilization after FlexVol to FlexGroup conversion
- High Latency with SM-BC
- High latency due to QoS Min throughput
- High memory consumption by VSCAN causing poor performance
- High N-blade latency due to extensive export policy configuration changes
- High nblade latency with available headroom
- High network latency in an iSCSI SAN environment
- High network LIF bandwidth utilization or S3 target load due to FabricPool
- High NFS latency due to single disk IO medium error
- High NFSv3 LOOKUP latency on transitioned volumes
- High Node utilization alert due to heavy CIFS SMB signing
- High NVRAM transfer latency on MetroCluster IP
- High nwk_exempt CPU consumption on A800 driven by heavy Single Volume Workload
- High ONTAP latency reported but not in qos statistics volume latency show
- High or fluctuating latency after turning on NetApp ONTAP File System Analytics
- High other latency and high other IOPS seen
- High read latency due to high disk utilization
- High read latency from CPU D-blade after volume move
- High read latency observed on all FlexClone volumes
- High read latency on guest VM
- High read latency seen after upgrading from AFF-A400 controller
- High READ latency when implementing copy-offload between flexgroup constituents
- High Read or Write Latency due to CPU bottleneck from user workload
- High Read or Write Latency due to disk bottleneck from user workload
- High Read/Write latency at SVM level due to single or multiple volumes
- High response times for volumes on SATA or SAS disks when workload increases (Performance Capacity beyond critical threshold)
- High SSD disk utilization with APDv2
- High utilization of disk model X342_TA14E1T2A10 with NA02 firmware
- High VM latency or volume latency on disk fixed by volume move or vMotion
- High VMware or application latency but ONTAP and UM show low due to bad upstream SFP
- High write latency and suspends with sequential append workload
- High Write Latency due to Back to Back CP with Small Write Size
- High write latency due to NVLog Transfer on MetroCluster SDS
- High write latency from Back to Back CPs caused by disk pressure
- High write latency on FlexCache
- High write latency on MCC-IP with Nexus
- High write latency on MetroCluster-IP during heavy peak of write workload
- High write latency only during the SM-Sync snapshot
- Higher latency after volume move
- Higher than expected latency for some operations in AIQUM, OCI and CLI
- Higher than expected FCP latency due to network delay
- Higher than expected SSD latency due to 6Gb SAS shelves and cables
- Higher than expected SSD latency due to missing 12Gb SAS cables
- Higher than expected SSD latency due to needing disk firmware updates
- How concurrency can impact throughput performance with ONTAP 9
- How do Dynamic Home Directories improve performance?
- How do I know if CPU is causing a performance issue?
- How do I search the top file in the volume
- How do I tell what "other" NFS iops are being processed within my volume?
- How do you identify the source of reads for a workload in ONTAP 9?
- How does ONTAP allocate CPU resource to volumes
- How does QoS policy group associated with a file work?
- How many IOPs can a FAS or AFF controller handle?
- How to access and display CLI and ZAPI performance data in ONTAP 9
- How to achieve high throughput
- How to address network latency in a SAN environment - Resolution Guide
- How to adjust the SnapMirror-Synchronous Common Snapshot Schedule for performance tuning
- How to Analyze Oracle Database Performance Issues
- How to Assess Disk Response Times in ONTAP
- How to calculate SAP HANA's "alter system reclaim datavolume" command percentage after hitting bug 865444
- How to change and stagger the deduplication schedule in ONTAP 9
- How to check for background CPU utilization in ONTAP 9
- How to collect a Perfstat from a clustered Data ONTAP system in Mac OS X
- How to collect a Perfstat from a clustered Data ONTAP using the CLI in Windows
- How to collect a Perfstat from clustered Data ONTAP systems using the Perfstat GUI
- How to collect a Perfstat from ONTAP 9 using the CLI in Linux
- How to collect a Perfstat7 in Windows
- How to collect a Perfstat7 on Windows 7 or Server 2008 or newer via GUI
- How to collect performance data when Perfstat8 fails by using preset files and manual data collection
- How to collect Performance Statistics for intermittent issues
- How to collect Perfstat data over RSH
- How to collect WAFLTOP output from CLI
- How to compare CSS schedule to SM-S latency occurrences
- How to create a custom Performance Preset for Performance Archives
- How to detect lock storms faster
- How to disable or modify cache settings for a FlexVol resident on a Flash Pool
- How to disable QoS policy on a volume
- How to enable the option "disk.latency_check_ssd.fail_enable"
- How to fix indirect volume access
- How to get full output of raw data with full precision and not SI prefixes in the statistics command
- How to identify a bursty or oversubscribed workload causing QoS latency
- How to identify a disk with high IQ_QUEUED latency
- How to identify and resolve top workloads
- How to identify bully workloads from Linux
- How to identify bully workloads from VMware
- How to identify bully workloads from Windows
- How to identify bully workloads using packet traces in CIFS or NFS environments
- How to identify full volume name from qos statistics output
- How to identify high-latency volumes with truncated volume names
- How to identify transient workload bursts using "sysstat" command
- How to increase priority for disk reconstruction
- How to interpret the QoS ratebucket ID
- How to investigate ONTAP cifs latency with Nblade.CifsOperationTimedOut and SpinNp error 410
- How to limit the maximum number of post-process compression and deduplication processes running in parallel
- How to manually generate and upload performance archives
- How to map an ocs_vserver workload to which volume it came from
- How to measure CPU utilization
- How to measure CPU utilization on a 7-Mode device
- How to measure the Ops and Latency reported from Volume layer
- How to monitor LUN statistics from the CLI in Clustered Data ONTAP 8.3 and later
- How to perform volume reallocation in ONTAP 9
- How to rectify performance issues using monitoring tools
- How to reduce Data Processing latency
- How to reduce the disk latency or data processing latency by volume move
- How to review aggregate performance via Active IQ website
- How to Show Throughput and Operations/s per File in ONTAP 9.x
- How to stagger dedupe automatically, based on thresholds
- How to throttle volume move in ONTAP 9.10 or later
- How to troubleshoot and resolve FabricPool Performance issues or "measured latency from cloud" alerts
- How to troubleshoot FlexGroup performance issues
- How to troubleshoot performance issue related to Snapmirror throughput and link utilization
- How to tune iSCSI Qdepth on Microsoft iSCSI SW initiator
- How to use and understand qos statistics commands to monitor volume latency in real time
- How To Use OneCollect 2.0 or Greater to Collect and Upload Performance Archive Data
- How to validate splitter vol queue time for SM-S
- How to verify expected latency and network RTT with SM-S
- How is volume read/write latency measured in a Grafana/NABox graph?
I
- IBM FileNet intermittently reporting an error "CONTENT_FCA_FILE_DOES_NOT_EXIST"
- Identify Performance Degradation due to Misbehaving Disk in ONTAP
- Inconsistent read performance from object store
- Incorrect network stats reported in ifstat and sysstat
- Increase of iSCSI throughput affecting VMware environment
- Increased CPU after changing Snapmirror DP to XDP
- Increased latency due to QOS minimum after upgrade to ONTAP 9.7
- Increased latency when Oracle DB is configured with logbias=latency
- Increased read latency on a node due to increased FabricPool cold tier reads
- Inode Cache - file operations suspend heavily creating massive latency
- Intermittent high latency observed on volumes affecting host applications
- Intermittent performance issue on the cluster
- Intermittently receiving "NFS not responding" errors on the unix client
- Is my controller overloaded?
- Is there one available way to monitor memory utilization of ONTAP
- It takes longer to run a batch job when the client NFS mount option actimeo is set to 0
L
- Large file access is slow while reading from cold tier
- Latency and disk utilization increase during events such as ONTAP shutdown
- Latency from CPU_protocol for extremely bursty SAN workloads
- Latency higher than expected for AFF system
- Latency in QoS before limit on ONTAP 9.1
- Latency in volume after consolidating 3 volumes into 1
- Latency increase using Dell Quest Change Auditor Fpolicy server after ONTAP upgrade
- Latency increases during volume move cutover phase
- Latency increases suddenly while using QoS - Resolution Guide
- Latency induced by CPU N-Blade/Network on SAN datastores
- Latency reported as ICMP packets dropped by storage interface (LIF)
- Latency seen while copying files from local disk to NetApp lun due to incompatible host HBA and switch sfp
- Latency when FlexShare is enabled
- LIF shows in Unhomed state
- Linux clients using SMBv3 share report slow performance
- Logging into cluster using Active Directory domain account times out when using Kerberos for domain-tunnel authentication
- Long Consistency Point "wafl.cp.toolong" when disk is about to fail or being failed
- Long CP after Snapmirror Transfer on AFF using XDP
- Long CP on aggregate for CVO setup
- Long CPs dominated by P2_FLUSH during SnapMirror Finalizing phase
- Long CPs from multiple aggregates when target disk for SDC is also sick
- Long CPs with high CPU utilization on FlexClone volumes
- Low latency and High CPU or DISK utilization due to deduplication
- Low NFS read throughput on ESXi due to tuning
- Low NFS throughput from one ONTAP cluster compared to another
- Low Space in FabricPool Performance tier leads to poor performance
- Low throughput for share mounted to Windows clients
- Low throughput when using CIFS
- Low throughput while restoring a file
- Lower than expected performance while utilizing a single volume
M
- Managing Cluster IOPS Warning Threshold Breached
- Memory overutilization causes slow opening of directories on a CIFS share
- Microsoft SQL Disk overwhelmed high response times
- Migrating storage to a new datacenter or cloud provider introduces external WAN delay
- Monitoring system is triggering alert for a CVO volume crossing the threshold limits
- Multiple aggregates long CPs caused by failed Flash Cache
N
- NAS access slow or hung caused by CRC errors on network port
- Natus NeuroWorks Acquisition Station randomly drops network connection to NetApp Storage
- nconnect modification causes degraded NFS performance
- NDMP Backup low Throughput
- NDMP Performance is slower than prior backup
- NDMP workload causes disk bottleneck
- Network latency error message in EMS logs for Snapmirror-Synchronous
- Network latency in a Fibre Channel SAN environment
- NFS resource not performing as quickly as expected due to high workload on LIF
- NFS traffic over UDP is slow
- NFS, iSCSI, or FCP performance problems on Cisco UCS platforms
- NVRAM purpose during ONTAP outage
O
- Observing high latency while modifying an Export policy
- OCUM Alert: Max Data Disk Utilization value
- ODX copy-offload is slower than host based copy for files with more extents
- On FlexGroup/FlexVol Tier All data is very slow
- OnCommand Insight (OCI) reports consistently high read latency for overall cluster
- OneCollect 1.9 for Mac OS Performance Archives Fail to Collect
- Only "-total-" when filtering for volume in "qos statistics volume latency show"
- ONTAP 9 - Per Aggregate CP
- ONTAP 9 - Slow Space Reclamation when deleting Files via NAS Protocol
- ONTAP 9 Performance - Resolution Guide
- ONTAP Data Processing latency - Resolution Guide
- ONTAP Disk utilization and latency - Resolution Guide
- ONTAP LUN with VMware and SQL latency spikes
- ONTAP running at 100 percent CPU due to Snapmirror
- ONTAP single LUN latency at network layer since ONTAP 9.3
- Opening a specific directory is very slow
- Operations time out due to AQoS throttling
- Oracle Database performance latency following a Storage insertion, headswap or ONTAP upgrade
- Oracle databases implemented with single LUN and multiple FlexClones exhibit latency
- Oracle SLOB is not meeting IOP requirements
- Outages and high latency during ONTAP upgrade by high CPU
P
- Performance archive (PA) is not enabled for datastore 'opm', when PA is enabled
- Performance data stopped polling after implementing workaround related to Bug 1505697
- Performance degradation after moving LIFs between nodes in a cluster
- Performance impact due to low aggregate free space
- Performance impact when browsing snapshot directory
- Performance issue accompanied with the storage errors
- Performance issue due to PCIE error
- Performance issue during vMotion due to pcie.stealth.errors
- Performance issue on volumes or LUNs from ESXi after upgrading ESXi versions
- Performing a lot of locks (NFSv3) causing higher latency
- Poor client performance but ONTAP latency is good - Resolution Guide
- Poor performance and high CPU usage in a single node due to a degraded DIMM
- Poor performance and service disruption on CIFS vFilers
- Poor performance during RAID scrubs
- Poor performance on VMware Oracle DB
- Poor performance on Windows CIFS clients using Adobe Premiere
- Poor performance (low throughput) with iSCSI during intravolume copy offload
- Poor reading performance due to COS4
- Possible causes for high NFS latency - Resolution Guide
R
- Read Latency due to slow SATA disks from user workload
- Read latency on Clone volume
- Receiving alerts "Lif Threshold exceeded" intermittently
- Recurring High CPU with BacktoBack CPs
- Reducing cluster interconnect latency with DFS node referrals
- Repeated Node Panics after upgrade to ONTAP 9.8
- Resolving excessive FabricPool object store network latency
- Robocopy stops making progress during a copy, using close to 100 percent CPU
- Root volume usage significantly dropped post reboot
S
- SAN increased latency with low throughput from VAAI Copy Offload
- Scheduled ONTAP storage efficiency takes days to complete
- secd tracing causes high ops on root volume
- Shared QoS Policy causing high latency and performance issues
- Sick disk causes performance impact
- Single client with large number of open file requests inducing latency
- Single LUN has latency in ONTAP 9
- Slow backup to cloud for large volume
- Slow CIFS Performance in ONTAP on clients due to Avecto
- Slow CLI access when using data LIF instead of e0M for management traffic
- Slow copy operations, latency, or loss of access due to outage in a FabricPool volume
- Slow directory deletion on FlexCache Origin
- Slow file metadata queries via REST API
- Slow file moves across directories in FlexGroups
- Slow FlexCache operations on the origin volume on workloads like SVN
- Slow Java application performance pulling data from a CIFS share
- Slow LUN moves with all LUNs on a single volume
- Slow LUN read performance for ONTAP 9.3 to 9.9
- Slow NDMP when Media Server is located in different subnet range
- Slow NFS read throughput with Oracle 12 db
- Slow Oracle database on VMware ESXi using VMware's NFS
- Slow performance due to SnapMirror Synchronous
- Slow performance of workloads repopulating VMware data mart
- Slow performance using NFS with FlexCache with larger number of NOENTs
- Slow reads from database volumes
- Slow restore with many or small files via Cloud Backup Service
- Slow Rubrik backups in tiered volumes
- Slow SnapMirror performance and low throughput due to BRE/LRSE Throttling
- Slow SnapMirror Transfers due to CPU bottleneck at the Source Cluster
- Slow SQL performance and disconnects and high utilization in AIQUM
- Slow storage vmotion for thin provisioned disks on NFS
- Slow tiering during high workload
- Slow Tiering to an Object Store
- Slow VM Clone using SCE caused by long suspensions on COPY_WAIT
- Slow VMware vMotions on NFS due to default NFS TCP transfer size
- Slow VMware vMotions on NFS due to Spanning Tree issues
- Slow VMware vMotions on NFS due to VMKernel Interfaces not having vMotion configured
- Slow VMware vMotions on ONTAP 9
- Slow volume move in Cloud Volumes ONTAP
- Slow WAFL scanner causes delay in freeing blocks from snapshots
- Slower than expected file volume transfer over NFS
- Slowness and poor performance due to mismatched MTU
- Slowness doing additional workload such as volume moves on FAS9000 or larger systems
- Slowness in renaming large number of files concurrently with fpolicy enabled
- Slowness on FC LUNs on new ONTAP Cluster
- Slowness redirecting Documents and Desktops to ONTAP after Windows restart
- Slowness reported for volumes with QOS min configured
- Slowness with Synopsys StarXT or other NFSv3 applications with NLM
- SnapMirror Sync(SM-S) primary volumes has high latency due to volume bursty workload
- Snapmirror takes a long time
- Snapmirror Throughput limited by CPU
- SnapMirror tiered destination backup restores time out using CommVault or other backup software
- SnapMirror transfer speed is slow to a FabricPool destination
- SnapMirror within the same node has a low throughput
- SNMP traps not recognized by SNMP monitoring tool
- SPINNP_WAFFI_PUNCH_HOLES extreme suspend latency on LOAD_BUF_DISK_LOCK_CHILD
- Sporadic high latency for NFS Other operations on FlexGroup
- SQL latency error "Logical disk transfer (read and writes) latency is too high"
- Statistics associated with a Qtree QoS Policy are not supposed to be displayed
- Storage Performance Impact due to NFS Client Configuration
- Storage VM latency due to high inode activity in ONTAP 9 leads to panic
- Sudden delay in Veeam backup job completion with FabricPool
- Sudden latency and CPU utilization from workloads going idle to busy resolved by QoS
- Sudden latency and CPU utilization from workloads going idle to busy resolved by reducing workload
- System Manager loads very slowly
T
- Tape/NDMP or backup jobs can cause latency or utilization due to too few disks
- The SyncMirror resync takes a long time on the SyncMirror Aggregate
- Transfer speeds not hitting 10 Gigabit maximum
- Traversing a large sparse directory is extremely slow with long running READDIR requests
- Troubleshooting CIFS Latency - Single volume readdir workloads with high latency
U
- UDP connections are single threaded
- Unable to connect to Storage Account leading to high disk latency in CVO
- Unresponsive D-Blade leading to NAS protocol outage
- User workload leading to latency from the disk layer
- Users complain of performance on a volume impacted by other nodes in the cluster
- Users report slowness on volumes with TSSE enabled
V
- VDI slowness and latency during high logins or load such as between 8-9 AM due to high ONTAP load
- VDI slowness and latency during high logins or load such as between 8-9 AM due to ONTAP configuration
- VDI slowness and latency during high logins or load such as between 8-9 AM due to VDI configuration
- VDI slowness during batch desktop creations on ONTAP NFS export
- VDIs show high latency after upgrading two applications called NXE and TC
- Veeam Backup of Windows VM fails during Shadow copies commit due to background write handling
- Very high "QoS-Cloud" latency indicated by QoS statistics
- VM freezing when deleting VM snapshot in ESXi 7.0U2
- VM reboot during long CP
- VMware reports "Lost access to volume, recovery attempt in progress" error
- Vol0 at 100 percent utilization with high HostOS
- Volume high latency caused by misconfigured FPolicy
- Volume latency and CRC errors on port
- Volume latency induced by CPU N-Blade
- Volume moves affecting other workloads in ONTAP 9.3
- Volume tiering very slow in ONTAP 9.8
- VSS Backups failing with error message Failed to create VSS snapshot
- VSS_E_HOLD_WRITES_TIMEOUT with SnapCenter due to slow ONTAP response
W
- wafl.cp.toolong errors seen due to high workload and high CPU
- wafl.cp.toolong or high disk utilization on archive, backup, or disaster recovery nodes
- What are Calculate Parity reads (CP reads) and why do they cause extra disk reads?
- What are common performance terms?
- What are CPU as a compute resource and the CPU domains in ONTAP 9?
- What are IOPS?
- What are performance archives and how are they triggered?
- What are some internal ONTAP workloads in qos statistics commands?
- What are the benefits of Consistency Points versus Direct Writes?
- What are the best practices for adding disks to an existing aggregate?
- What are the counters from the flexscale-access output and how does it work?
- What are the Delay Centers from different performance monitoring tools?
- What are the different Consistency Point types and how are they measured in Data ONTAP 8?
- What are the methods of perfstat collection for ONTAP 9?
- What are the metrics used to analyze system performance of CPU?
- What are the performance considerations of deduplication in ONTAP systems?
- What are the throughput values for common LAN and WAN connections?
- What can slow down the volume move operation?
- What commands are useful to monitor performance in ONTAP 9?
- What does "Spin_ops" mean in show-periodic command output?
- What does the "CP time" displayed in the output of sysstat mean?
- What does the wafltop output mean and how can it be interpreted?
- What is "PEAK_PERFORMANCE" line in Active IQ or the headroom in ONTAP?
- What is a PIT file in Data ONTAP?
- What is Adaptive QoS and how does it work?
- What is Consistency Point, and why does NetApp use it?
- What is CPU utilization in Data ONTAP: Scheduling and Monitoring?
- What is database logging due to misaligned I/O?
- What is Quality of Service (QoS) in ONTAP?
- What is the "_ocs_vserver" workload in qos statistics?
- What is the Back-to-Back (B2B) Consistency Point Scenario?
- What is the difference in latency of the same volume between "statistics volume show" and "qos statistics volume latency show" commands?
- What is the Directory Indexing Scanner and what does directory indexing accomplish?
- What is the DPO license and how does it affect QoS policies and latency?
- What is the FlexVol read or write path?
- What is the ONTAP inode cache?
- What is the name of the server causing high IOPS and latency?
- What is the theoretical maximum throughput of a Gigabit Ethernet interface?
- What operations are categorized under Other IOPS in ONTAP?
- When do ONTAP metrics measure round trip time to clients or only internal ONTAP time?
- Where can I learn more about Consistency Points?
- Why are IOPS in System Manager different than per-protocol IOPS counts in CLI?
- Why does disabling the qos.violation.detection policy not work?
- Why do files take long to populate when accessing my S3 bucket?
- Why does FabricPool generate more cloud storage load on ONTAP reboot?
- Why Does NetApp Harvest Report High Usage Times for WAFL Cleaning?
- Why does qos statistics volume latency show have disk instead of cloud latency?
- Why does the statistics volume show command show the total ops which do not match the read, write, and other ops?
- Why is a workload's latency high when the IOPS are low?
- Why is CPU high when FabricPool is enabled?
- Why is the reported LUN latency higher than the volume latency?
- Why is there a front-end latency spike without a corresponding user workload?
- Why is there lower throughput with increased nconnect threads with IPsec enabled?
- Why is there QoS latency after upgrading to ONTAP 9.3+?
- Why is vol0 showing high number of IOPS?
- Why is volume write throughput much higher than workload write throughput when SCSI WRITE SAME is present?
- Why Flash Cache reads may be very low during performance issues
- Why would AIQUM report a higher delay than other performance monitoring tools?
- Will high delays from ONTAP processing layer impact Oracle Redo workload performance?
- Will migrating a SnapMirror destination impact the source performance?
- Windows Photo App shows excessive OPEN/CLOSE ops on CIFS shares
- Workload exceeds the QoS limit on All SAN Array
- Workload lun latency threshold breached defined by performance service level policy
- Write latency and low throughput in a MetroCluster environment
- Write latency due to an imbalanced RAID group
- Write performance impacted by Back-to-Back Consistency Points
- Wrong output for the command "statistics top client"
Z