- https://kb.netapp.com/on-prem/ontap/Perf/Perf-KBs/Slow_volume_move_in_Cloud_volume_ONTAP
  Applies to: Clustered Data ONTAP (cDOT), Cloud Volumes ONTAP (CVO), Amazon Web Services (AWS). Issue: Volume move is slow in CVO.
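  For a move that appears slow, the first check is the move status itself; a minimal clustershell sketch, with hypothetical SVM and volume names:
  ```
  cluster1::> volume move show -vserver svm1 -volume vol1
  ```
  The reported state and percent complete help distinguish a genuinely stalled move from one that is simply still transferring a large volume.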
- https://kb.netapp.com/Cloud/BlueXP/Cloud_Tiering/Deleting_the_FabricPool_NAME_will_result_in_orphaned_objects
  Warning: Deleting the FabricPool "[NAME]" will result in orphaned objects because there is either a pending volume move operation or a pending metadata cleanup after volume move. Use the "volume move show" command to check the status of the volume move operation or wait for the pending metadata cleanup to complete. If the error persists for more than a few minutes after the volume move operation is complete, contact technical support for assistance.
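  As the warning suggests, listing moves before deleting the FabricPool shows whether a move or its post-move metadata cleanup is still pending; a minimal sketch:
  ```
  cluster1::> volume move show
  ```
  If any moves are still listed, wait for them to finish (and for the pending cleanup to complete) before retrying the deletion.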
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Volume_move_job_suspend_at_decision_point_when_NDMP_restore_operation_running
  Symptoms: the move disappears from "volume move show" output and the volume move job suspends at its decision point with the error "Preparing the source of the volume move for transfer: Restore running on the volume being moved." An NDMP restore operation is running on the volume being moved:
  rst Tue Jan 18 16:29:37 CST 2022 /SVM01/volume_prd/ Options (b=0, d, y, H, TCP recv buffer size = 33580, TCP send buffer size = 33580)
  rst Tue Jan 18 16:29:37 CST 2022 /SVM01/volume_prd/ Tape_open (ndmp)
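  One way to confirm the suspension described above is to expand the move's full details; a minimal sketch, reusing the SVM and volume names from the log excerpt and the generic "-instance" view:
  ```
  cluster1::> volume move show -vserver SVM01 -volume volume_prd -instance
  ```
  The detailed output should reflect the job waiting at its decision point while the NDMP restore holds the source volume.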
- https://kb.netapp.com/on-prem/ontap/DM/FabricPool/FabricPool-KBs/Volume_move_fails_with_error__Tiering_not_allowed_due_to_FabricPool_license
  Applies to: ONTAP 9, FabricPool. Issue: Volume move fails with "Replication encountered an unknown error. Volume access error. (Tiering not allowed due to FabricPool license.)"
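  A quick sanity check here is to list the cluster's installed licenses and look for a FabricPool entry; a minimal sketch:
  ```
  cluster1::> system license show
  ```
  A missing or exhausted FabricPool capacity license would be consistent with the error text above.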
- https://kb.netapp.com/on-prem/ontap/da/S3/S3-KBs/Unable_to_connect_to_the_object_store_local_ontap_s3_Reason__Internal_server_error
  Applies to: ONTAP 9, ONTAP S3. Issue: During a volume move of a capacity-tier volume (ONTAP S3), the object store becomes unavailable.
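  The state of the attached cloud tier can be reviewed per aggregate; a minimal sketch (the output is expected to list each FabricPool aggregate with its object store and availability):
  ```
  cluster1::> storage aggregate object-store show
  ```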
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Volume_move_fails_with_error_Not_enough_space_in_the_volume_or_aggregate
  The available size of both the source and destination aggregates exceeds the volume's total size (df shows 0B for /vol/volume_move/.snapshot and aggr_ssd_dst/.snapshot), yet the volume move still fails with "Not enough space in the volume or aggregate" in the EMS log:
  5/17/2021 12:57:29 Netapp-2 ERROR mgmt.vopl.move.nospace: The 'volume move' operation with ID '14748' for volume 'volume_move' present on Vserver 'vserver' cannot proceed (Reason: Not enough space in the volume or aggregate).
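  When free space looks sufficient, comparing the volume's size and footprint against the destination aggregate's available space is a reasonable first step; a minimal sketch reusing the names from the EMS excerpt (the SVM name is hypothetical):
  ```
  cluster1::> volume show -vserver svm1 -volume volume_move -fields size,used
  cluster1::> volume show-footprint -volume volume_move
  cluster1::> storage aggregate show -aggregate aggr_ssd_dst -fields availsize
  ```
  The footprint view is included because a move has to accommodate the volume's metadata and space guarantee, not only its used data.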
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Why_do_volume_names_have_1_appended_to_the_end_of_them_after_they_are_moved
  If an existing volume is moved across aggregates within the same node by running the volume move command, the name of the moved volume will have a number such as (1) appended to the end of the volume name. If you have a CIFS Vserver with an "Engineering" volume and an NFS Vserver with an "Engineering" volume, and they are both on the same node, one of them would have a (1) appended to it at the node level, but at the cluster level (with a volume show) both would show as "Engineering".
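  The cluster-level view mentioned above can be reproduced with a name-scoped volume show; a minimal sketch using the article's "Engineering" example:
  ```
  cluster1::> volume show -volume Engineering -fields aggregate
  ```
  Both SVMs' volumes appear under the same name here; the (1) suffix only surfaces in node-scoped contexts.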
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Why_does_volume_move_copy_cloud_tiered_data_on_releases_9.6_and_above
  However, a few conditions can make the volume move unoptimized, in which case all data in the cloud tier is copied to the performance tier and then copied back to the cloud tier based on the tiering policy. For configurations using AWS S3, if the source FabricPool was created on a release earlier than ONTAP 9.5 and the destination FabricPool was created on ONTAP 9.5 or later, the volume move between these FabricPools will be unoptimized.
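  The AWS S3 condition above depends on when each FabricPool was created, which the CLI does not report directly; a minimal sketch that at least confirms how the source and destination object stores are configured:
  ```
  cluster1::> storage aggregate object-store config show
  ```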
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/FlexClone_parent_volume_becomes_Unavailable_after_volume_move
  The output of "volume clone show" lists unavailable parent snapshots after a volume move of the parent volume:
                                Parent   Parent                                Parent
  Vserver  FlexClone            Vserver  Volume                                Snapshot       State   Type
  vs3      flexclone_volA       vs3      moved_parent_temp__3586__65835__volA  (unavailable)  online  RW
           flexclone_vol1       vs3      moved_parent_temp__3603__66062__vol1  (unavailable)  online  RW
  FlexClone Parent Volume: moved_parent_temp__3586__65835__volA
  FlexClone Parent Snapshot: (unavailable)
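  The condition is easy to spot by asking volume clone show for just the parent fields; a minimal sketch reusing the Vserver from the example output (the field selection is the assumption here):
  ```
  cluster1::> volume clone show -vserver vs3 -fields parent-volume,parent-snapshot
  ```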
- https://kb.netapp.com/on-prem/ontap/Ontap_OS/OS-KBs/Are_Snapshots_moved_with_the_volume
  Applies to: ONTAP 9, Snapshots. Answer: Snapshot copies are part of the volume. When a volume move is initiated with the "vol move" command, the Snapshots are transferred to the new aggregate along with the rest of the volume.
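  To verify this for a particular volume, the Snapshot list can be compared before and after the move; a minimal sketch with hypothetical SVM, volume, and aggregate names:
  ```
  cluster1::> volume snapshot show -vserver svm1 -volume vol1
  cluster1::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr2
  cluster1::> volume snapshot show -vserver svm1 -volume vol1
  ```
  The same Snapshot copies should be listed after the move completes, now hosted on the new aggregate.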
- https://kb.netapp.com/on-prem/ontap/DP/SnapMirror/SnapMirror-KBs/SnapMirror_updates_are_failing_after_moving_the_source_volume_to_another_aggregate
  After a successful volume move operation on the source, SnapMirror fails with the error: Failed to get transfer Snapshot information from source volume "Source_SVM:Source_Volume". (Failed to get volume attributes for 07hf1b4c-z3qc-12e6-a3c6-70f076a67jd8:Source_Volume. (Volume location not found. Either the system is busy or the peer cluster is not Available. If the volume is on a remote cluster, before retrying the operation, use the "cluster peer show" command to check the availability.))
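  The error text itself names the check to run; a minimal sketch, with a hypothetical destination path for the relationship:
  ```
  cluster1::> cluster peer show
  cluster1::> snapmirror show -destination-path Destination_SVM:Destination_Volume
  ```
  Peer availability plus the relationship's current state help narrow down whether the failure is a stale volume location or a peering problem.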