SnapMirror Synchronous went OutOfSync for a couple of minutes
Applies to
- ONTAP 9
- SnapMirror Synchronous (SM-S)
- out-of-sync (OOS)
Issue
- The SnapMirror Synchronous relationship reports "out-of-sync" status with the reason "Transfer failed".
- The condition occurred only once and resolved automatically: the relationship returned to "in-sync" about two minutes later, as the log entries and the verification commands below show.
Mon Mar 14 14:58:16 +0000 [dest_node_name: sm_logger_main: sms.status.out.of.sync:error]: Source volume "source_svm:vol_name" and destination volume "dest_svm:vol_name" with relationship UUID "rel_uuid" is in "out-of-sync" status due to the following reason: "Transfer failed.".
Mon Mar 14 15:00:17 +0000 [dest_node_name: sm_rpl_admin_main: sms.status.in.sync:notice]: Initialize or resynchronize operation was successful. Source volume "source_svm:vol_name" and destination volume "dest_svm:vol_name" with relationship UUID "rel_uuid" are in "in-sync" status.
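The two sms.status EMS events above can be pulled directly from the destination cluster's event log; a minimal check, assuming dest_node_name is the placeholder node name from the excerpt:

cluster::> event log show -node dest_node_name -message-name sms.status.*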
Mon Mar 14 14:58:16 GMT 2022 REPL-CTRL:cd68e2b6-46b7-11ec-a91c-00a0b8ad11dd Trace_Buf (1 of 5 log entries): repl_granular::GranularSyncWriter::_init():91 : Initializing GranularSyncWriter for SSM with PROTO_SYNC_REPL
Mon Mar 14 14:58:16 GMT 2022 REPL-CTRL:cd68e2b6-46b7-11ec-a91c-00a0b8ad11dd Trace_Buf (2 of 5 log entries): repl_granular::GranularSyncWriter::WriterInSyncOperationCallback():741 : Notifying d-control about DRL resync completion. Bytes Transferred: 241696
Mon Mar 14 14:58:16 GMT 2022 REPL-CTRL:cd68e2b6-46b7-11ec-a91c-00a0b8ad11dd Trace_Buf (3 of 5 log entries): repl_receiver::Node::abort():638 : Replication (ReceiverNode) aborted with transferComplete=0,Status=0
Mon Mar 14 14:58:16 GMT 2022 REPL-CTRL:cd68e2b6-46b7-11ec-a91c-00a0b8ad11dd Trace_Buf (4 of 5 log entries): repl_receiver::LogicalWriter::~LogicalWriter():262 : Unable to determine Datawarehouse file size since data store is not valid
Mon Mar 14 14:58:16 GMT 2022 REPL-CTRL:cd68e2b6-46b7-11ec-a91c-00a0b8ad11dd Trace_Buf (5 of 5 log entries): repl_granular::SyncCache::updateCGState():4953 : CG: cb2a34e8-46b7-11ec-a91c-00a0b8ad11dd terminating wait
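To confirm that the relationship auto-resynchronized and is healthy again, its current state can be checked from the destination cluster; a minimal sketch, assuming dest_svm:vol_name stands in for the actual destination path:

cluster::> snapmirror show -destination-path dest_svm:vol_name -fields state,status,healthy,unhealthy-reason

After a transient event like this one, Status should show InSync and Healthy should be true, so no manual resync is needed.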