NetApp Knowledge Base

S3 server receives numerous LIST requests from ONTAP after FabricPool aggregate is deleted

Category: ontap-9
Specialty: core
Applies to

  • ONTAP 9
  • Hitachi Content Platform (HCP)

Issue

  • After deleting a FabricPool aggregate, numerous GET (LIST) requests were received on the object store (HCP) from ONTAP, causing significant performance degradation on the object store.

ONTAP began sending a large number of HTTP LIST requests, causing further load issues on the object store:
- ontap [20/Feb/2025:15:29:52 +1300] "GET /art*****/?max-keys=1000&prefix=fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 HTTP/1.1" 500 0 311716061 013 0
- ontap [20/Feb/2025:15:31:42 +1300] "GET /art*****/?max-keys=1000&prefix=fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 HTTP/1.1" 500 0 311604094 013 0
- ontap [20/Feb/2025:15:33:42 +1300] "GET /art*****/?max-keys=1000&prefix=fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 HTTP/1.1" 500 0 268425944 011 0
- ontap [20/Feb/2025:15:36:02 +1300] "GET /art*****/?max-keys=1000&prefix=fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 HTTP/1.1" 500 0 311347027 011 0
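Each of these entries is a paginated bucket LIST (a GET with `max-keys` and `prefix` query parameters) that the object store answered with HTTP 500, so ONTAP retries the same enumeration. A minimal parsing sketch, assuming the field layout shown in the sample lines above (HCP's actual access-log format may differ):

```python
import re

# Assumed layout, based on the sample entries above:
# user [timestamp] "METHOD /bucket/?query HTTP/1.1" status ...
LOG_RE = re.compile(
    r'"(?P<method>\w+) /(?P<bucket>[^/]+)/\?(?P<query>\S+) HTTP/1\.1" (?P<status>\d{3})'
)

def parse_list_request(line):
    """Extract method, bucket, query parameters, and HTTP status from one log line."""
    m = LOG_RE.search(line)
    if not m:
        return None
    params = dict(p.split("=", 1) for p in m.group("query").split("&"))
    return {
        "method": m.group("method"),
        "bucket": m.group("bucket"),
        "params": params,
        "status": int(m.group("status")),
    }

line = ('- ontap [20/Feb/2025:15:29:52 +1300] "GET /art*****/?max-keys=1000'
        '&prefix=fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 HTTP/1.1" 500 0 311716061 013 0')
req = parse_list_request(line)
# status 500 means the LIST never completed, so ONTAP keeps reissuing it
print(req["status"], req["params"]["max-keys"])
```

The `prefix` in every request is the bucket-tier UUID (btuuid) of the deleted aggregate, which ties these LISTs to the bin cleanup described below.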

  • A FlexGroup member which existed under the deleted aggregate had a large amount of data tiered:

Volume : <volume_name>

Feature                                           Used      Used%
--------------------------------      ----------------      -----
Volume Data Footprint                           94.9TB       381%
Footprint in Performance Tier                   1.81TB         2%
Footprint in hcpartnzpontap01                   93.3TB        98%
Flexible Volume Metadata                         562GB         2%
Delayed Frees                                    131GB         1%
File Operation Metadata                         4.00KB         0%
Total                                           95.6TB       383%

Effective Total Footprint                       95.6TB       383%
 

  • Because a non-optimized volume move was performed to relocate the data to a local aggregate before the FabricPool aggregate was deleted, the tiered data for this volume remained in the object store. Once the FabricPool aggregate was deleted, the cleanup started:
 
2025-02-20T02:29:51Z 68296316172029660    [8:0] CLOUD_BIN_ERR:  cloud_bin_cleanup_info_alloc: Bin cleanup started for btuuid fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 of bin-uuid=cxxxxx2-bxxe-1xx9-8xx3-0xxxxxxxxxx6, config-id=1.
2025-02-20T02:33:36Z 68296697564845269    [15:0] CLOUD_BIN_ERR:  cloud_bin_cleanup_info_alloc: Bin cleanup started for btuuid fxxxxxx9-3xx5-4xx2-8xx2-8xxxxxxxxx91 of bin-uuid=cxxxxx2-bxxe-1xx9-8xx3-0xxxxxxxxxx6, config-id=1.
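Conceptually, the bin cleanup enumerates every object written under the deleted aggregate's btuuid prefix and deletes it, one page of up to 1000 keys at a time; with tens of terabytes of orphaned tiered data, this produces the sustained LIST traffic seen above. A sketch of that loop against a hypothetical in-memory object store (the names `list_objects` and `cleanup_bin` are illustrative, not ONTAP internals):

```python
# Hypothetical in-memory object store; keys are "<btuuid>/<sequence>".
store = {f"btuuid-f9/{i:07d}": b"chunk" for i in range(2500)}

MAX_KEYS = 1000  # page size seen in the LIST requests above

def list_objects(prefix, max_keys=MAX_KEYS):
    """Return up to max_keys keys matching prefix, like an S3 ListObjects call."""
    return sorted(k for k in store if k.startswith(prefix))[:max_keys]

def cleanup_bin(prefix):
    """Repeatedly LIST and delete until no objects remain under the prefix."""
    list_calls = 0
    while True:
        page = list_objects(prefix)
        list_calls += 1
        if not page:
            break
        for key in page:
            del store[key]
    return list_calls

calls = cleanup_bin("btuuid-f9/")
# 2500 objects at 1000 keys per page -> 3 delete passes plus 1 final empty LIST
```

Because each LIST in the logs above failed with HTTP 500, the real cleanup could not make forward progress and kept reissuing the same enumeration, which explains the repeated requests with an identical prefix.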


NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.