
How is space utilization managed in a Data ONTAP SAN environment?


Applies to

  • Clustered Data ONTAP 8
  • SAN
  • FlexPod

Answer

What are the different layers of storage?
Storage Efficiency features:
What are the Volume Space Guarantee settings?
How is the space in an aggregate allocated?
How does a Snapshot of a LUN work?
What is the purpose of LUN Space Reservation?
What is the impact of snapshot and space reservation?
What are the options if you are running out of space?
How to reclaim space?


What are the different layers of storage?

An aggregate is a collection of physical disk space that acts as a container for one or more RAID groups.
Within each aggregate there are one or more FlexVol volumes. A FlexVol volume is allocated as a portion of the available space within an aggregate, and it contains one or more LUNs for use with the iSCSI, FC, or FCoE protocols.

3014144__en_US__3014144Image_1.png 
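The hierarchy can be illustrated with the same 7-Mode style commands used elsewhere in this article (the aggregate, volume, and LUN names below are hypothetical):

aggr create aggr1 16                         (aggregate built from 16 disks)
vol create vol1 aggr1 100g                   (FlexVol volume carved out of the aggregate)
lun create -s 50g -t linux /vol/vol1/lun1    (LUN created inside the volume)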



Storage Efficiency features:

  • Snapshot: Point-in-time copies use minimal storage space to protect your data without performance impact.
  • Thin Provisioning: Defers storage purchases by keeping a common pool of free storage available to all applications.
  • Data Deduplication: Cuts storage requirements by reducing redundancies in primary, backup, and archival data.
  • Data Compression: Reduces the disk space required, regardless of storage protocol, application, or storage tier.
  • Thin Replication: Protects business-critical data while minimizing storage capacity requirements.

What are the Volume Space Guarantee settings?

  • Volume (default): Space for the entire volume is guaranteed in the aggregate at volume creation.
    When a volume contains one or more LUNs with reservations enabled, operations that require free space, such as the creation of Snapshot copies, are prevented from using the reserved space. If these operations do not have sufficient unreserved free space, they fail. However, writes to the LUNs with reservations enabled continue to succeed.
  • None: No space is reserved; space is allocated on a first-come, first-served basis.
    Thin provisioning enables storage administrators to provision more storage on a LUN than is currently available on the volume. The space is allocated only when the host application needs it, which results in better space utilization. If all the LUNs used all their configured space, the volume would run out of free space, so the storage administrator needs to monitor the storage controller and increase the size of the volume as needed.
  • File: No space is reserved at volume creation; individual files or LUNs are guaranteed space when created.
    A space guarantee of file reserves space in the aggregate so that any file in the volume with space reservation enabled can be completely rewritten, even if its blocks are being retained on disk by a Snapshot copy.
    Note: When the uncommitted space in an aggregate is exhausted, only writes to volumes or files in that aggregate with space guarantees are guaranteed to succeed.
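To check or change the guarantee on an existing volume, the 7-Mode style vol options command can be used (the volume name below is hypothetical):

vol options vol1 guarantee none      (switch the volume to thin provisioning)
vol status -v vol1                   (verify the current guarantee setting)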

How is the space in an aggregate allocated?

  • WAFL reserve: WAFL reserves 10% of the total disk space for aggregate-level metadata and performance. The space used for maintaining the volumes in the aggregate comes out of the WAFL reserve, and it cannot be changed.
  • Aggregate Snap reserve: The amount of space reserved for aggregate Snapshot copies.
  • Snapshot reserve: The percentage of disk space reserved for Snapshot copies in each volume. Reserve space can be used only by Snapshot copies, not by the active file system. The default value is 20% and it can be changed.

3014144__en_US__3014144Image_2.jpg
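The aggregate and volume Snapshot reserves can be inspected or changed with commands like the following (aggregate and volume names are hypothetical):

df -A aggr1               (show aggregate space usage, including the aggregate Snapshot reserve)
snap reserve -A aggr1     (show the aggregate Snapshot reserve percentage)
snap reserve vol1 20      (set the volume Snapshot reserve to 20%)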

How does a Snapshot of a LUN work?

A Snapshot copy is a locally retained, read-only, point-in-time image of the data. When a Snapshot copy of a LUN is taken, it captures pointers to the corresponding LUN blocks on disk; no data is copied or moved. When LUN data changes, the new data is written to new blocks on disk, while the Snapshot copy continues to point to the blocks containing the old data. Snapshot technology is a feature of Data ONTAP.

 

3014144__en_US__3014144snapshot.JPG

Use snap delta to display the rate of change of data between Snapshot copies. When used without any arguments, it displays the rate of change of data between Snapshot copies for all volumes in the system, or for all aggregates in the case of snap delta -A.

snap delta [vol_name [snapshot-name [snapshot-name]]]

If a volume is specified, the rate of change of data is displayed for that particular volume.

snap reclaimable <vol_name> <snapshot-name> [snapshot-name ...] - Displays the amount of space that would be reclaimed if the listed Snapshot copies are deleted from the volume.
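For example (the volume and Snapshot copy names below are hypothetical):

snap delta vol1                               (rate of change between all Snapshot copies on vol1)
snap delta vol1 nightly.1 nightly.0           (rate of change between two specific Snapshot copies)
snap reclaimable vol1 nightly.0 nightly.1     (space freed if both Snapshot copies were deleted)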
 

What is the purpose of LUN Space Reservation?

  • To allocate space equal to the size of the LUN at LUN creation.
  • To reserve space for overwrites (fractional reserve), up to the size of the LUN, once Snapshot copies are taken.

What is the impact of snapshot and space reservation?

Example 1: Without reservation

Thin provisioning allows the creation of a LUN with a larger capacity than the volume.
Create a 110GB LUN (thin provisioned) in a 100GB volume with no space reservation.
Then write 50GB of data and create a Snapshot copy.
Then write another 50GB and take another Snapshot copy.
At this point the 100GB volume is full and any additional write attempt fails.
 

3014144__en_US__3014144Image_7.jpg
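This scenario can be reproduced with commands like the following (the volume, LUN, and Snapshot copy names are hypothetical):

lun create -s 110g -t windows -o noreserve /vol/vol1/lun1    (110GB thin-provisioned LUN in a 100GB volume)
snap create vol1 snap1                                       (Snapshot copy after the first 50GB is written)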

Example 2: With reservation

Create a 50GB LUN in a 100GB volume with space reservation.
Then write 25GB of data.
Now take a Snapshot copy. The 25GB is now read-only in the Snapshot copy. The space reservation for overwrites allows writes to the original LUN to continue.
After an additional 25GB of data is written, 25GB is reserved for future overwrites. Use df -r to see the reservation.
Try to take another Snapshot copy.
The attempt fails because the space reservation limit has been exhausted.
 

3014144__en_US__3014144Image_5.jpg
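To verify the reserved space, the df -r command mentioned above can be run against the volume (the volume name is hypothetical):

df -r vol1      (the reserved column shows the space held for LUN overwrites)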

Delete one or more Snapshot copies to free up space.

3014144__en_US__3014144Image_6.jpg

Check the space occupied by a LUN by running the lun show -v <lun_path> command.

  • LUN creation enables space reservation by default.

To create a LUN:
lun create -s size -t ostype [ -o noreserve ] <lun_path>
A new LUN is created at the given lun_path (which must be at a qtree root). When a LUN is created from an existing file, a hard link is created to that file.

If the -o noreserve option is used, make sure that the file does not have any space reservations enabled using the file reservation command.

  • To change the existing space reservation:
    lun set reservation <lun_path> [enable|disable]
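For example (the LUN paths below are hypothetical):

lun create -s 50g -t linux /vol/vol1/lun2                  (space reservation enabled by default)
lun create -s 50g -t linux -o noreserve /vol/vol1/lun3     (thin-provisioned LUN)
lun set reservation /vol/vol1/lun3 enable                  (enable the reservation later)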

Space Reservation in System Manager.

3014144__en_US__3014144Image_8.jpg

Example 3: With fractional reserve

Create a 50GB LUN in a 100GB volume with space reservation.
50GB of data is written and a Snapshot copy is taken. Now the LUN is completely utilized.
Now 25GB of data is overwritten and another Snapshot copy is attempted.

The Snapshot copy fails because it requires 25GB of space (equal to the data overwritten) and there is not enough space left to reserve.
 

3014144__en_US__3014144Image_9.jpg

Set the fractional reserve to 0 and the reserved space is released, allowing you to take a Snapshot copy.
However, if you then overwrite more than 25GB of data on the Snapshot-protected LUN, the volume runs out of free space and the LUN goes offline.

 

3014144__en_US__3014144Image_12.JPG

To change the fractional overwrite reserve:
vol options <vol_name> fractional_reserve <pct> (default is 100%)

3014144__en_US__3014144Image_10.jpg
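For example (the volume name is hypothetical):

vol options vol1 fractional_reserve 0     (remove the overwrite reserve)
vol status -v vol1                        (verify the fractional_reserve setting)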

What are the options if you are running out of space?

If the volume containing the LUN is full, any subsequent attempt to write data to the LUN fails, and Data ONTAP takes the LUN offline to avoid inconsistencies. When a volume is running out of space, you can choose one of the following options:

  • Free some space by deleting Snapshot copies.
  • Add more space to the volume using the autogrow options.
  1. Snapshot Autodelete:
    snap autodelete vol_name option value
    To define which Snapshot copies to delete, use the following options and their corresponding values in the snap autodelete command.
commitment: Specifies whether a Snapshot copy is linked to data protection utilities (SnapMirror or NDMPcopy) or data backing mechanisms (volume or LUN clones).
  • try: delete only Snapshot copies that are not linked to data protection utilities or data backing mechanisms.
  • disrupt: delete only Snapshot copies that are not linked to data backing mechanisms.
  • destroy: destroy only Snapshot copies that are locked because of data backing and data protection mechanisms.
You can set this value on a volume with the snapshot-clone-dependency option set to on. An error message is returned if you set this option on a volume with the snapshot-clone-dependency option set to off.

trigger: Defines when to automatically begin deleting Snapshot copies.
  • volume: begin deleting Snapshot copies when the volume reaches the capacity threshold percentage and the space for the volume snap reserve is exceeded.
  • snap_reserve: begin deleting Snapshot copies when the Snapshot reserve reaches the capacity threshold percentage.
  • space_reserve: begin deleting Snapshot copies when the space reserved in the volume reaches the capacity threshold percentage and the space for the volume snap reserve is exceeded.

target_free_space: Determines when to stop deleting Snapshot copies. Specify a percentage. For example, if you specify 30, Snapshot copies are deleted until 30 percent of the volume is free.

delete_order: Defines the order in which Snapshot copies are deleted.
  • newest_first: delete the most recent Snapshot copies first.
  • oldest_first: delete the oldest Snapshot copies first.

defer_delete: Delete one of the following types of Snapshot copies last:
  • user_created: Snapshot copies that are not autoscheduled.
  • prefix: Snapshot copies with the specified prefix string.

prefix: Delete Snapshot copies with a specific prefix last. You can specify up to 15 characters (for example, sv_snap_week). Use this option only if you specify prefix for the defer_delete option.

destroy_list: Destroy one of the following types of Snapshot copies. You can specify the following values:
  • lun_clone: allows destroying Snapshot copies locked due to LUN clones.
  • vol_clone: allows destroying Snapshot copies locked due to volume clones.
  • cifs_share: allows destroying Snapshot copies even if they are locked due to CIFS shares.
  • none: the default value. No locked Snapshot copies are destroyed.
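A hypothetical autodelete configuration for a volume named vol1 might look like this:

snap autodelete vol1 on                        (enable Snapshot autodelete)
snap autodelete vol1 trigger volume            (start deleting when the volume is nearly full)
snap autodelete vol1 target_free_space 20      (stop once 20% of the volume is free)
snap autodelete vol1 delete_order oldest_first
snap autodelete vol1 show                      (display the current autodelete settings)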
  2. Volume autosize

Volume autosize allows a flexible volume to automatically grow or shrink in size within an aggregate. Autogrow is useful when a volume is about to run out of available space, but there is space available in the containing aggregate for the volume to grow. Autoshrink is useful in combination with autogrow. It can return unused blocks to the aggregate when the amount of data in a volume drops below a user configurable shrink threshold. Autoshrink can be enabled via the grow_shrink subcommand. Autoshrink without autogrow is not supported.

vol autosize volname [-m size [k|m|g|t]] [-i size [k|m|g|t]] [-minimum-size size [k|m|g|t]] [-grow-threshold-percent <used space %>] [-shrink-threshold-percent <used space %>] [grow (on) | grow_shrink | off | reset]

-m: maximum size the volume is allowed to grow to; -i: increment size for each growth step

The autogrow feature works together with snap autodelete to automatically reclaim space when a volume is about to become full. The volume option try_first controls the order in which these two reclaim policies are used.

vol options <vol_name> try_first [volume_grow | snap_delete]
The value determines which of the two mechanisms Data ONTAP attempts first when the volume is running out of space; with volume_grow, volume autogrow is attempted before Snapshot copies are deleted.
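A hypothetical example for a volume named vol1:

vol autosize vol1 -m 200g -i 10g on      (allow the volume to grow to 200g in 10g increments)
vol options vol1 try_first volume_grow   (try autogrow before Snapshot autodelete)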

How to reclaim space?

Space reclamation complements thin provisioning by reducing inefficiency in your storage: it identifies data blocks that are no longer required by the application but are still consumed on the storage. In other words, if data stored on a LUN is consuming space and is then deleted by the host, without space reclamation the storage controller does not know that those blocks are free to add back to the available pool. A mechanism between the application and the storage is required to tell the storage that blocks freed by the application can be reclaimed by the storage controller for use by other applications.

3014144__en_US__3014144Image_11.jpg

 
