- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_latency_on_database_log_volumes
  Applies to: ONTAP. Issue: High latency in a database environment reported against the controller (NetApp Unified Manager, Insight, etc.). This article only applies to situations where the controller is reporting high latency for database logging volumes/LUNs.
- https://kb.netapp.com/on-prem/ontap/DP/SnapMirror/SnapMirror-KBs/Snapmirror_transfer_speed_is_extremely_slow
  Applies to: SnapMirror. Issue: Destination clusters that were moved to the new IDC, or initially created in the new IDC, all experienced the SnapMirror performance degradation issue. ONTAP resource usage and network throughput are very low, and the status of the cluster peer is healthy.
- https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/High_CPU_utilization_due_SecD_memory_issue
  High host OS CPU utilization coming from SecD load:
  Thu Nov 19 20:37:06 CET [Node-01: secd: secd.rpc.server.request.dropped:debug]: The RPC secd_rpc_auth_user_id_to_unix_ext_creds sent from NBLADE_NFS was dropped by SecD due to memory pressure.
  00000023.3d2753ed 03357f70 Mon Nov 23 2020 00:17:48 +01:00 [kern_secd:info:14174] [SECD MASTER THREAD] SecD RPC Server: Too many outstanding Generic RPC requests: sending System Error to RPC 217:secd_rpc_auth_user_id_to_unix_ext_creds Request ID:16082.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/Performance_implications_of_using_mixed_40G_and_10G_cluster_interconnects
  Increased latency over the cluster interconnect is observed when accessing data indirectly, along with port errors in netstat and/or ifstat, in a cluster of mixed platform models where one HA pair supports 40G and another supports 10G, and data on the 40G nodes is accessed through the 10G nodes.
- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/General_guidance_for_collecting_data_for_performance_analysis_ONTAP_7Mode_and_Cdot
  For example, if the issue reported is a slow backup process, or execution of some scripts taking longer than usual, data must be collected during the whole backup process and also during a "normal" backup run (a period in which the backup was not taking "much longer"). The decision about the timeframe for a single iteration (the -t parameter), the number of iterations (-i), and the number of runs may vary based on the issue and is provided by the TSE working on the issue via an action plan.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/What_does_ONTAP_do_with_available_IDLE_cycles_on_the_storage_controller
  ONTAP is opportunistic: when there are available idle CPU cycles (idle time), instead of wasting the idle time, ONTAP can and will use as much of the available time as possible to run background and scheduled tasks. This increased latency is expected, since the storage system becomes busy with the work it is doing in the background (by design); when the work is complete, utilization and latency can drop back to normal levels.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/ESX_Hosts_losing_access_to_LUNs_regularly_along_with_Aggregate_reporting_long_CP_s
  Applies to: VMware. Issue: Critical alert for datastore path down; poor performance and alerts for SQL virtual machines. Affected datastores: "Path redundancy to storage device naa.xyz degraded. Path vmhba1:C0:T78:L10 is down." ONTAP reports long or back-to-back consistency points (CPs): wafl_exempt00: wafl.cp.toolong:error]: Aggregate_xyz_sata_aggr1 experienced a long CP. I/O latency increased from an average of 8000 microseconds to 1000008 microseconds.
- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/What_is_the_Directory_Indexing_Scanner_and_what_does_directory_indexing_accomplish
  ONTAP has scaled to meet these needs as it has evolved, and one of those features is directory indexing. A directory lookup will read only the required blocks in the directory instead of the entire directory. Indexing the same 100 directories of 10 MB each in 9.2+ reduces this to around 200 MB of RAM. You will notice in "wafl scan status" a "directory index creation" scan. A frontend call (generally a lookup or create) will cause ONTAP to notice that the directory is greater than 2 MB and index it.
- https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/SIS_scan_is_extremely_slow_at_gathering_phase
  SIS scan is extremely slow at the Gathering phase. The SIS log just stops at Compressing and there is no further record for the slow scan task.
  Tue Dec 15 10:51:35 2020 [Vserver UUID: 60a25723-700f-11e7-96c7-xxxxxxxxxxxx] /vol/volname [sid: 1607997095] Begin (sis start scan)
  Tue Dec 15 10:51:35 2020 [Vserver UUID: 60a25723-700f-11e7-96c7-xxxxxxxxxxxx] /vol/volname [sid: 0] Info (Optimized scan)
- https://kb.netapp.com/hybrid/StorageGRID/Maintenance/Volumes_getting_high_latency_due_to_incomplete_decommission_of_StorageGRID_node
  Decommission of a VM-based node: host volumes have high latency due to an incomplete decommission of a StorageGRID node. The decommission shows as completed under Grid Topology > Site > Primary Admin node > CMN > Tasks, but the Maintenance > Decommission page does not reset. The decommissioned node's logs show that the task failed; although it should retry the decommission on this node, it instead reported a success message (Child process exited with RC=0) due to a known issue.
- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Active_IQ_Unified_Manager_9.7__can_enabled_aQoS_on_workloads_automatially
  Cluster::> qos statistics volume latency show
  Workload         ID     Latency   Network   Cluster  Data      Disk      QoS Max  QoS Min   NVRAM
  ---------------  -----  --------  --------  -------  --------  --------  -------  --------  --------
  ...
  vol1-wid21368    21368  879.73ms  283.00us  9.00us   498.00us  184.00us  0ms      877.99ms  216.00us
  vol2-wid19287    19287  736.25ms  401.00us  8.00us   734.00us  166.00us  0ms      734.91ms  199.00us
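When reading output like the `qos statistics volume latency show` excerpt in the last entry, the useful step is to find which component column (Network, Cluster, Data, Disk, QoS Max, QoS Min, NVRAM) accounts for most of a workload's total latency. As a minimal illustrative sketch (not a NetApp tool; the column order is assumed from the excerpt above), such a row could be parsed like this:

```python
# Sketch: parse one data row from "qos statistics volume latency show" output
# and report the dominant latency component. Column layout is assumed from
# the excerpt above: Workload, ID, Latency, then seven component columns.

COMPONENTS = ["Network", "Cluster", "Data", "Disk", "QoS Max", "QoS Min", "NVRAM"]

def to_ms(field: str) -> float:
    """Convert an ONTAP latency field such as '879.73ms' or '283.00us' to milliseconds."""
    if field.endswith("us"):
        return float(field[:-2]) / 1000.0
    if field.endswith("ms"):
        return float(field[:-2])
    raise ValueError(f"unrecognized latency unit in {field!r}")

def dominant_component(row: str) -> tuple[str, str, float]:
    """Return (workload, component name, latency in ms) for the largest component."""
    parts = row.split()
    workload = parts[0]          # e.g. 'vol1-wid21368'
    values = [to_ms(p) for p in parts[3:]]  # skip Workload, ID, total Latency
    best = max(range(len(COMPONENTS)), key=lambda i: values[i])
    return workload, COMPONENTS[best], values[best]

row = ("vol1-wid21368 21368 879.73ms 283.00us 9.00us "
       "498.00us 184.00us 0ms 877.99ms 216.00us")
print(dominant_component(row))
```

For the excerpted row, nearly all of the 879.73ms total falls in the QoS Min column, which is the pattern the KB article attributes to automatically enabled adaptive QoS policies.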