Search
- https://kb.netapp.com/on-prem/ontap/mc/MC-KBs/Node_reports_StorageFCAdapterFault_Alert_for_down_or_unused_FC_ports
  FC ports in initiator mode. The node sends AutoSupport alerts for StorageFCAdapterFault for down FC ports after every reboot. The reported port is not in use. The Metrocluster_Node subsystem health gets degraded:
  cluster::> system health subsystem show
  Probable Cause: FC initiator adapter 4d is at fault. Ensure that the FC initiator link has not been tampered with. Verify the operational status of the FC initiator adapter by using the command "system node run -node
  (A hedged verification sketch follows this list.)
- https://kb.netapp.com/Support/General_Support/NABox_Systems_Tab_Shows_No_Data
  Applies to: NetApp Harvest, NetApp ONTAP, NetApp ONTAP Select, Grafana, NABox. Issue: When clicking on the Systems page in NABox, the page is blank. Normally, this page shows configured controllers or the option to add a storage system (controller).
- https://kb.netapp.com/on-prem/Switches/Cisco-KBs/Is_the_Network_Port_DCB_configurable_on_NetApp_to_match_the_Cisco_Switch_settings
  Any changes made with regard to QoS/CoS/DCB on the Cisco switch should reflect immediately on NetApp storage. There is no specific configuration change required from ONTAP. Use the dcb show command on ONTAP to verify the settings:
  Interface  PGID  Priority         Application  Bandwidth
  ---------  ----  ---------------  -----------  ---------
  e3a        0     0 1 2 4 5 6 7    unassigned   50%
             1     3                FCoE         50%
  e3b        0     0 1 2 3 4 5 6 7  unassigned   0%
  e4a        0     0 1 2 4 5 6 7    unassigned   50%
             1     3                FCoE         50%
  e4b        0     0 1 2 3 4 5 6 7  unassigned   0%
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Unable_to_create_QTREE_on_FlexCache_volume
  Applies to: ONTAP 9, QTREE. Issue: An error message is returned when attempting to create a new QTREE in a FlexCache volume running 9.7:
  Error: Failed to determine the effective cluster version of all the nodes hosting FlexCache volumes connected to FlexCache origin volume "vol1" in Vserver "vserver1"
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/ONTAP_create_qos_adaptive_policy_group_fails_with_the_effective_cluster_version_is_not_Data_ONTAP_9_3_0_or_later
  Applies to: ONTAP 9. Issue: The CLI command to create a new adaptive QoS policy fails:
  cluster::> qos adaptive-policy-group create -policy-group group -vserver SVM -expected-iops 300IOPS/TB -peak-iops 525IOPS/TB
  Error: command failed: The feature that supports creating a QoS adaptive-policy-group is either disabled during a revert operation or the effective cluster version is not Data ONTAP 9.3.0 or later.
  (A hedged version-check sketch follows this list.)
- https://kb.netapp.com/data-mgmt/OTV/VSC_Kbs/SRM_test_failover_fails_due_to_lack_of_space_on_the_target
  Applies to: ONTAP tools for VMware, NetApp Storage Replication Adapter 9.11, VMware Site Recovery Manager (SRM) 8.5, ONTAP 9.7, NFS datastore. Issue: The SRM test recovery plan fails with error "Module MonitorLoop power on failed". The VMware virtual machine shows event: Failed to extend swap file /vmfs/volumes/<uuid>/<VM>.vswp from 0 KB to <size> KB: No space left on device
- https://kb.netapp.com/on-prem/ontap/DM/Efficiency/Efficiency-KBs/VMWare_NFS_datastore_size_is_lower_than_ONTAP_volume_size
  Applies to: VMware NFS datastore, ONTAP volumes. Issue: The NFS VMware datastore size is lower than the ONTAP volume size.
- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/High_Latency_on_CIFS_SVM_with_Home_Folder
  Applies to: ONTAP 9, CIFS, Home folder, Citrix Profiles. Issue: Sudden high latency after enabling Chrome Cache Mirroring. A sudden increase in latency is observed in ActiveIQ UM or other performance monitoring tools.
- https://kb.netapp.com/Cloud/BlueXP/Cloud_Manager/Cloud_Manager__Unable_to_delete_volume_snapmirror_relationship
  Applies to: Cloud Manager, SnapMirror. Issue: The SnapMirror relationship cannot be destroyed via the Cloud Manager GUI due to error: An error occurred while deleting relationship between Source_Volume and Destination_Volume: A SnapMirror relationship for destination path Destination_SVM:Destination_Volume was not found.
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Volume_Autosize_does_not_automatically_increase_the_volume_size_as_expected
  Applies to: ONTAP, Volume Autosize. Issue: Volume Autosize does not automatically increase the volume size as expected.
- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Wrong_output_for_the_command_statistics_top_client
  Snippet output from the statistics top client command:
  8317 nfs [node name] [vserver name] 172.28.9.15
  8313 nfs [node name] [vserver name] 172.31.179.161
  3558 nfs [node name] [vserver name] 172.28.9.13
  3555 nfs [node name] [vserver name] 172.28.9.15
  3548 nfs [node name] [vserver name] 172.31.169.50
  3548 nfs [node name] [vserver name] 146.106.231.169
  3544 nfs [node name] [vserver name] 146.106.43.217
  3541 nfs [node name] [vserver name] 172.31.110.82
  1658 nfs [node name] [vserver name] 146.106.28.149
  1648 nfs [node name] [vserver name] 146.106.28.210
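
A minimal verification sketch for the StorageFCAdapterFault entry above, assuming a node named node-01 and the adapter 4d from the snippet (both placeholders). The nodeshell command in the snippet is cut off, so fcadmin config is used here only as an assumed way to list FC initiator adapter state, not as the article's exact command:

    cluster::> system health subsystem show
    cluster::> system health alert show
    cluster::> system node run -node node-01 -command "fcadmin config"

The first two commands confirm which subsystem is degraded and which alert the health monitor raised; the nodeshell output then shows whether the flagged initiator adapter is down or unconfigured. Refer to the linked article for the actual guidance on the recurring alert.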
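
A hedged version-check sketch for the adaptive QoS entry above: the error points at the effective cluster version, so a reasonable first step is confirming what release every node is actually running (both commands are cluster-wide and need no arguments):

    cluster::> version
    cluster::> cluster image show

If any node still reports a release older than ONTAP 9.3.0, or a revert is in progress, the create command in the snippet above is expected to fail with this error; once all nodes report 9.3.0 or later it can be retried as written.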