NetApp Knowledge Base

Inconsistency Due to Missing Files on Azure Blob for FabricPool

Applies to

  • ONTAP 9
  • FabricPool
  • Azure Blob

Issue

  • Data inconsistencies have been observed in a volume's user data.

Thu Feb 16 14:14:15 1100 [PRDXXX: wafl_exempt04: wafl.raid.incons.userdata:error]: WAFL inconsistent: inconsistent user data block at VBN 281474976710654 (vvbn:281474976710646 fbn:2726 level:0) in public inode (fileid:304190 snapid:0 file_type:1 disk_flags:0x8002 error:120 raid_set:1) in volume vol_XYZ(2)@vserver:9b36a1d5-97b8-11e9-a218-000d3a6a918a.
Thu Feb 16 14:14:15 1100 [PRDXXX: wafl_exempt04: wafl.incons.userdata.vol:alert]: WAFL inconsistent: volume vol_XYZ(2)@vserver:9b36a1d5-97b8-11e9-a218-000d3a6a918a has an inconsistent user data block. Note: Any new Snapshot copies might contain this inconsistency.
Thu Feb 16 14:14:15 1100 [PRDXXX: wafl_exempt04: callhome.wafl.inconsistent.user.block:alert]: Call home for WAFL INCONSISTENT USER BLOCK
Thu Feb 16 14:14:15 1100 [PRDXXX: coredump_manager: callhome.micro.core:notice]: Call home for MICRO-CORE: /etc/crash/micro-core.2176910288.2023-02-16.03_14_15
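EMS alerts like the ones above can be scanned for programmatically. A minimal sketch, assuming the message layout shown in the excerpt (field order may vary across ONTAP releases):

```python
import re

# Matches wafl.incons.userdata.vol EMS alerts and captures the affected
# volume and vserver UUID. The regex is derived from the excerpt above
# and is an assumption, not an official EMS schema.
INCONS_RE = re.compile(
    r"wafl\.incons\.userdata\.vol:alert\]: WAFL inconsistent: "
    r"volume (?P<volume>\S+?)@vserver:(?P<vserver>\S+) has an inconsistent"
)

def find_inconsistent_volumes(ems_log: str):
    """Return (volume, vserver-UUID) pairs for inconsistency alerts."""
    return [(m.group("volume"), m.group("vserver"))
            for m in INCONS_RE.finditer(ems_log)]

sample = (
    "Thu Feb 16 14:14:15 1100 [PRDXXX: wafl_exempt04: "
    "wafl.incons.userdata.vol:alert]: WAFL inconsistent: volume "
    "vol_XYZ(2)@vserver:9b36a1d5-97b8-11e9-a218-000d3a6a918a has an "
    "inconsistent user data block."
)
print(find_inconsistent_volumes(sample))
# → [('vol_XYZ(2)', '9b36a1d5-97b8-11e9-a218-000d3a6a918a')]
```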

  • This issue appears to be linked to GET command failures returning OSC error 111, indicating that the referenced objects are missing from the Azure Blob container.
2023-02-16T03:14:15Z 112832354170767121   [8:0] REPL_0:  repl_writer::ReplopBufferWriter::replopBufferWriterCallback(): | [42522fd1-aaa1-11ea-b089-000d3a6a918a] | [0xfffff806dd3d6040] Buffer Writer Ctx : outstandingWriterCtxList size:0, flushPendingList size:0, lastWrittenOffset:159744, lastFlushedOffset:159744
2023-02-16T03:14:15Z 112832354265553347   [3:0] OSC_ERR:  logFailedCmd:345 GET Cmd ID:897 name:e1d3d257-1f5a-49c2-b509-6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-6c88e8c3ee82 failed with OSC error: 111 Offset:2602928, Len:4112
2023-02-16T03:14:15Z 112832354265556905   [3:0] OSC_ERR:  logFailedCmd:367 Stats of GET Cmd ID:897, wait 1us, start-send 135us, 1stbyte 4547us, 1st-lastbyte 50us
2023-02-16T03:14:15Z 112832354265566417   [3:0] CLOUD_BIN_ERR:  cloud_io_objstore_callback: Cloud bin: volume aggr01_vmssd04_PRDXXX, osc_error 111
2023-02-16T03:14:15Z 112832354272458146   [15:0] OSC_ERR:  logFailedCmd:345 GET Cmd ID:897 name:e1d3d257-1f5a-49c2-b509-6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-6c88e8c3ee82 failed with OSC error: 111 Offset:2602928, Len:4112
2023-02-16T03:14:15Z 112832354272461304   [15:0] OSC_ERR:  logFailedCmd:367 Stats of GET Cmd ID:897, wait 0us, start-send 62us, 1stbyte 2789us, 1st-lastbyte 11us
2023-02-16T03:14:15Z 112832354282529401   [15:0] OSC_ERR:  logFailedCmd:345 GET Cmd ID:897 name:e1d3d257-1f5a-49c2-b509-6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-6c88e8c3ee82 failed with OSC error: 111 Offset:2602928, Len:4112
2023-02-16T03:14:15Z 112832354282532195   [15:0] OSC_ERR:  logFailedCmd:367 Stats of GET Cmd ID:897, wait 0us, start-send 72us, 1stbyte 2861us, 1st-lastbyte 10us
2023-02-16T03:14:15Z 112832354288947685   [14:0] OSC_ERR:  logFailedCmd:345 GET Cmd ID:897 name:e1d3d257-1f5a-49c2-b509-6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-6c88e8c3ee82 failed with OSC error: 111 Offset:2602928, Len:4112
2023-02-16T03:14:15Z 112832354288950721   [14:0] OSC_ERR:  logFailedCmd:367 Stats of GET Cmd ID:897, wait 1us, start-send 142us, 1stbyte 2510us, 1st-lastbyte 11us
2023-02-16T03:14:15Z 112832354296523888   [2:0] OSC_ERR:  logFailedCmd:345 GET Cmd ID:897 name:e1d3d257-1f5a-49c2-b509-6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-6c88e8c3ee82 failed with OSC error: 111 Offset:2602928, Len:4112
2023-02-16T03:14:15Z 112832354296526939   [2:0] OSC_ERR:  logFailedCmd:367 Stats of GET Cmd ID:897, wait 1us, start-send 199us, 1stbyte 2687us, 1st-lastbyte 10us
2023-02-16T03:14:15Z 112832354296540439   [2:0] CLOUD_BIN_ERR:  wafl_cloud_read_blkr_retry: Cloud-bin read block 35194375163513 marked pseudobad due to obj not found (blkr state:DONE)
 
  • The sktrace logs further show that a block is marked pseudobad because the object was not found. The volume's [vol_XYZ(2)] buftree ID matches the object name in the sktrace alerts: name:e1d3d257-1f5a-49c2-b509-6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-6c88e8c3ee82
  • The sktrace logs show the same error for another block: Cloud-bin read block 35194364685975 marked pseudobad due to obj not found
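To identify which cloud objects the failed GETs refer to, the sktrace OSC_ERR lines can be parsed for the object name and OSC error code. A minimal sketch, assuming the line format shown in the excerpts above (the regex is an assumption, not an official trace schema):

```python
import re

# Extracts Cmd ID, object name, and OSC error code from failed GET lines.
# Format is based on the sktrace excerpt above; treat it as an assumption.
GET_FAIL_RE = re.compile(
    r"OSC_ERR:\s+logFailedCmd:\d+ GET Cmd ID:(?P<cmd>\d+) "
    r"name:(?P<name>\S+) failed with OSC error: (?P<err>\d+)"
)

def failed_objects(sktrace: str):
    """Return unique, sorted (object-name, osc-error) pairs from failed GETs."""
    return sorted({(m.group("name"), int(m.group("err")))
                   for m in GET_FAIL_RE.finditer(sktrace)})

sample = (
    "2023-02-16T03:14:15Z 112832354265553347   [3:0] OSC_ERR:  "
    "logFailedCmd:345 GET Cmd ID:897 name:e1d3d257-1f5a-49c2-b509-"
    "6c88e8c3ee82/65386df1_0000000000003976_e1d3d257-1f5a-49c2-b509-"
    "6c88e8c3ee82 failed with OSC error: 111 Offset:2602928, Len:4112"
)
print(failed_objects(sample))
```

Each returned name follows the `<buftree-uuid>/<object>` pattern seen above; the object's presence in the Azure storage account can then be checked (for example with the `az storage blob exists` Azure CLI command).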

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.