Object count delta between source and destination bucket within a Snapmirror-S3 relationship
Applies to
- Snapmirror-S3 (FabricLink)
- ONTAP_S3 object store bucket
Issue
- Within a Snapmirror-S3 FabricLink relationship with RPO=0, the object count of the destination object-store bucket keeps rising, possibly far above the object count of the source bucket.
- The aggregate hosting the affected bucket keeps filling up; space becomes tight or the aggregate runs out of space.
- Alternatively, in a milder form, the object counts of the source and destination buckets within the Snapmirror-S3 relationship merely differ, even though no outstanding work is reported (a client-side cross-check sketch follows the command output below):
::> vserver object-store-server bucket show -fields object-count
vserver bucket object-count
------- ---------- ------------
svm_dst bucket-dst 124080
svm_src bucket-src 124078
2 entries were displayed.
::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_src:/bucket/bucket-src
XDP svm_dst:/bucket/bucket-dst
Snapmirrored
Idle - true -
::*> snapmirror show -instance
Source Path: svm_src:/bucket/bucket-src
Destination Path: svm_dst:/bucket/bucket-dst
Relationship Type: XDP
SnapMirror Policy Type: continuous
SnapMirror Policy: Continuous
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
Relationship ID: 16d619e6-9044-11ed-823e-005056bbfb01
FabricLink Source Role: active-mirror
FabricLink Source Bucket: bucket-src
FabricLink Peer Role: active-mirror
FabricLink Peer Bucket: bucket-dst
FabricLink Topology: 145c48be-9044-11ed-8707-005056bbcabb
FabricLink Pull Byte Count: 0
FabricLink Push Byte Count: 0
FabricLink Pending Work Count: 0
FabricLink Status: ok
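A quick way to narrow down where the extra objects live is to compare the object-count ONTAP reports with the number of objects an S3 client can actually list. The following is only a sketch: the endpoint and the bucket-dst profile are taken from the AWS CLI example further down, and the bucket-src profile is a hypothetical counterpart for the source side; any S3 client capable of a recursive listing works equally well.
$ aws --profile bucket-src --endpoint-url https://ontap_s3.local \
      s3 ls s3://bucket-src --recursive --summarize | grep 'Total Objects'
$ aws --profile bucket-dst --endpoint-url https://ontap_s3.local \
      s3 ls s3://bucket-dst --recursive --summarize | grep 'Total Objects'
If the totals printed by the client are lower than the object-count shown by vserver object-store-server bucket show, the difference may consist of objects that are not visible in a normal listing, such as parts of incomplete multipart uploads.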
- After manually deleting all objects in the source bucket, the object count on the source bucket, the destination bucket, or both does not drop to zero, and some or even most of the occupied space is not released.
- Deeper investigation shows that partial multipart objects are left within the destination bucket when checking from an external S3 client such as the AWS CLI (a sketch for inspecting the leftover parts follows the note below):
$ aws --profile bucket-dst --endpoint-url https://ontap_s3.local s3api list-multipart-uploads --bucket bucket-dst --output text
UPLOADS 2023-01-09T09:57:09.000Z f001 MjE1NTI5NTY5OF8xMDU2XzIzMF80NjgzNzU4Mg
UPLOADS 2023-01-09T10:03:46.000Z f002 MjE1NTI5NTY5N18xMDU1XzE3MV80NjQzNTk2MQ
Note: If the destination bucket has ever been promoted (Snapmirror-S3 was reversed), partial multipart objects can remain in both the source and the destination bucket.
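To gauge how much space a leftover upload still occupies, the parts of an incomplete upload can be inspected from the same client. This is only a sketch, reusing the key and upload ID from the listing above; the JMESPath query is just one way to summarize the result:
$ aws --profile bucket-dst --endpoint-url https://ontap_s3.local \
      s3api list-parts --bucket bucket-dst --key f001 \
      --upload-id MjE1NTI5NTY5OF8xMDU2XzIzMF80NjgzNzU4Mg \
      --query '{PartCount: length(Parts), Bytes: sum(Parts[].Size)}'
Such parts consume space in the bucket even though the upload never appears in a regular object listing, which is consistent with the object-count and space symptoms described above.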
- At the time the Snapmirror-S3 relationship was initialized, air.repaired.vplus:notice events are reported in the ONTAP event log show (EMS) output (a filter sketch follows the sample event below):
Thu Sep 15 09:07:09 +0200 [zbghns4-03: wafl_exempt07: air.repaired.vplus:notice]: AIR vplus rebuild of 1204~0.147527 completed for subtype 6
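To check whether such repair events line up with the initialization of the relationship, the EMS log can be filtered by message name. The command below is a sketch; available filter fields may vary slightly between ONTAP releases, and the output should be compared against the initialization time of the relationship:
::> event log show -message-name air.repaired.vplus -severity notice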