NetApp Knowledge Base

    • https://kb.netapp.com/on-prem/ontap/mc/MC-KBs/How_to_recover_a_failed_System_Volume_from_an_nvfailed_state
      As part of error handling during MetroCluster switchover operations, a volume can end up in an nvfailed state. This article covers how to recover a system volume, such as ONTAP's Metadata Volumes (MDV), from this situation; the instructions for restoring access to data volumes are covered in a separate article. To identify whether a volume is in the nvfailed state, the article provides a command to run (a hedged sketch of such a command follows this list) and shows example output both for the case where no volume is nvfailed and for the case where one volume is.
    • https://kb.netapp.com/on-prem/ontap/DP/SnapMirror/SnapMirror-KBs/Volume_type__MDV_CRS_3339eaa4b2e311e4bf7900a0987xxxx_A_created_by_SVM_DR_is_high_utilized_8_9_GB_from_total_10_GB
      A SnapMirror update for a Metadata Volume of the Configuration Replication Service (MDV_CRS) fails with the following error: Last Transfer Error: Failed to apply the source Vserver configuration. Reason: Context is busy or it is already in cleanup. Execute "snapmirror show -destination-vserver SVM-fs-dr -fields last-transfer-error,unhealthy-reason -expand" to check whether the constituent volumes have encountered errors (a usage sketch follows this list). The volume's used size keeps increasing as long as this error exists.
    • https://kb.netapp.com/on-prem/ontap/da/NAS/NAS-KBs/Vserver_audit_create_generates_EMS_Error__wafl_vol_create_clusterVolOnCFOaggr
      Applies to ONTAP 9. Issue: when enabling CIFS auditing, the following error is seen for MDV volumes hosted in the root aggregate (an example of the triggering command follows this list): wafl.vol.create.clusterVolOnCFOaggr: Creating volume 'MDV_aud_a111aaa1a1a11a11111aaaaa111a111a' on an aggregate with a CFO HA policy. File access delays will occur for a period following giveback.
    • https://kb.netapp.com/on-prem/ontap/mc/MC-KBs/MetroCluster_Aggr_Delete_completed_unsuccessfully_Failed_to_remove_the_location_information
      After deleting the only data aggregate on a node of an MCC (MetroCluster) system, the aggregate name still shows in the output of "aggr show" with an "unknown" state. [MCC2-02: mgwd: mgmtgwd.jobmgr.jobcomplete.failure:info]: Job "Aggr Delete" [id 132] (Delete aggr1_MCC2) completed unsuccessfully: Failed to remove the location information of aggregate aggr1_MCC2 from VLDB due to The MetroCluster or Vserver DR Configuration Replication Service is recovering.
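
For the first result, the identification command itself is elided in the snippet. The following is a minimal sketch of how such a check is commonly done in the ONTAP 9 CLI, assuming the in-nvfailed-state field and illustrative cluster, vserver, and volume names; it is not necessarily the exact command the article uses:

    cluster1::> volume show -fields in-nvfailed-state
    vserver  volume     in-nvfailed-state
    -------- ---------- -----------------
    svm1     svm1_root  false
    svm1     vol1       true

A value of true for a volume indicates it is in the nvfailed state and needs the recovery steps described in the article.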
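For the MDV_CRS result, a minimal usage sketch of the diagnostic command quoted in the snippet (the destination vserver name SVM-fs-dr comes from the snippet; the cluster prompt is illustrative), plus an assumed standard volume query to watch the MDV_CRS constituent's space consumption:

    cluster1::> snapmirror show -destination-vserver SVM-fs-dr -fields last-transfer-error,unhealthy-reason -expand
    cluster1::> volume show -volume MDV_CRS* -fields size,used,percent-used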
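For the CIFS auditing result, the EMS message appears when auditing is enabled and the MDV_aud staging volume is created on an aggregate with a CFO HA policy (the root aggregate). A minimal sketch of the enabling command, assuming an illustrative SVM name and audit log path:

    cluster1::> vserver audit create -vserver svm1 -destination /audit_logs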