Reverse resync failing for FlexGroup SnapMirror in a mirror-vault cascade
Applies to
- ONTAP 9
- FlexGroup SnapMirror (FG SM)
Issue
- Customer performed a DR test of FlexGroup SnapMirror relationships that are cascaded to a SnapVault cluster. During the reverse resync (failback), a brief network outage affected 2 of the FG SM relationships, aborting the transfer (an example resync command is sketched after the log output below):
Sat Nov 9 22:22:09 MST 2024 FlexGroupResync[Nov 9 22:22:09]: Operation-Uuid=bbec6290-9f23-11ef-a70b-d039ea1e0177 Group=flexgroup Operation-Cookie=0 action=Start source=svm01:vol01 destination=svmDR01:vol01
Sat Nov 9 22:22:37 MST 2024 FlexGroupResync[Nov 9 22:22:09]: Operation-Uuid=bbec6290-9f23-11ef-a70b-d039ea1e0177 Group=flexgroup Operation-Cookie=0 action=End source=svm01:vol01 destination=svmDR01:vol01 status=Failure message=Transfer aborted
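The reverse resync is re-driven from the destination cluster with a standard SnapMirror resync; a minimal sketch, assuming the destination path shown in the log above (the actual relationship names and options may differ):
destination_cluster::> snapmirror resync -destination-path svmDR01:vol01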
- After the network outage was resolved, the reverse resync continued to fail after multiple attempts, reporting that the source volume does not have the named Snapshot copy (a status-check sketch follows the error message below):
Failed to set "snapmirror-label" or internal tag for Snapshot copy "snapmirror.daa21bc5-5a79-11ea-9ccc-00a098b3e845_2151093806.2024-11-09_214500" on volume "svm01:vol01". (Volume 1af9548a-1ffa-11ed-a863-d039ea3a8185:vol01 does not have Snapshot copy snapmirror.daa21bc5-5a79-11ea-9ccc-00a098b3e845_2151093806.2024-11-09_214500.)
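The relationship state and the most recent transfer error can be reviewed on the destination cluster; a hedged example, assuming the same svmDR01:vol01 destination path as above:
destination_cluster::> snapmirror show -destination-path svmDR01:vol01 -fields state,status,last-transfer-error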
- Checked snapshot show on all 3 clusters/volumes and confirmed that the Snapshot copy
"snapmirror.daa21bc5-5a79-11ea-9ccc-00a098b3e845_2151093806.2024-11-09_214500"
still exists on all 3 volumes in the cascade, from cluster A > B > C; an example verification command is sketched below
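A minimal sketch of that verification, assuming the vserver, volume, and Snapshot copy names from the error above (the vserver and volume names differ on each cluster in the cascade):
cluster::> volume snapshot show -vserver svm01 -volume vol01 -snapshot snapmirror.daa21bc5-5a79-11ea-9ccc-00a098b3e845_2151093806.2024-11-09_214500
Because the failure concerns setting the "snapmirror-label", the label currently stored on that Snapshot copy can also be displayed by adding -fields snapmirror-label to the same command.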