- Data ONTAP 8.2 7-Mode
- ONTAP 9
Before estimating the necessary size of the volume, decide how to manage storage at the volume level. In SAN environments, there are three methods to consider for managing storage at the volume level: Volume Autosize, Snapshot Autodelete, and Fractional Reserve. The method you select helps determine the volume size. By default, Data ONTAP sets fractional reserve to 100 percent and disables Volume Autosize and Snapshot Autodelete. However, in a SAN environment, use the Snapshot Autodelete method or the Volume Autosize method; both are less complicated than the Fractional Reserve method.
- Volume Autosize: Volume Autosize automatically makes more free space available when a FlexVol volume is nearly full by incrementally increasing the volume size.
- Snapshot Autodelete: Snapshot Autodelete automatically reclaims space consumed by Snapshot copies when the volume is low on available space.
- Fractional Reserve: Fractional Reserve is a volume setting that enables you to configure how much space Data ONTAP reserves in the volume for overwrites in space-reserved LUNs and files when Snapshot copies are created.
Volume Autosize is useful if the volume's containing aggregate has enough space to support a larger volume. Volume Autosize allows you to use the free space in the containing aggregate as a pool of available space shared between all the volumes on the aggregate.
Volumes can be configured to automatically grow as needed, as long as the aggregate has free space. When using the Volume Autosize method, you can increase the volume size incrementally and set a maximum size for the volume. Monitor the space usage of both the aggregate and the volumes within that aggregate to ensure volumes are not competing for available space.
Note: The autosize capability is disabled by default. Run the vol autosize command to enable, configure, and view the current autosize settings for a volume.
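For example, for a hypothetical volume named vol1, autosize could be enabled with a 1024-GB maximum and 100-GB increments, and the current settings then displayed:
Filer> vol autosize vol1 -m 1024g -i 100g on
Filer> vol autosize vol1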
Snapshot Autodelete is a volume-level option that allows you to define a policy for automatically deleting Snapshot copies, based on a definable threshold.
You can set the threshold, or trigger, to automatically delete snapshot copies when:
- The volume is nearly full
- The snap reserve space is nearly full
- The overwrite reserved space is full
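These thresholds correspond to the trigger values of the 7-Mode snap autodelete command (volume, snap_reserve, and space_reserve, respectively). As a sketch, for a hypothetical volume vol1, deleting the oldest copies first when the volume is nearly full:
Filer> snap autodelete vol1 trigger volume
Filer> snap autodelete vol1 delete_order oldest_first
Filer> snap autodelete vol1 on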
When SIS (deduplication) is enabled on the volume, fractional reserve behaves as if a Snapshot copy is always present. Fractional reserve is therefore honored, and the volume appears to have less space available. This can be problematic: LUNs can go offline if the volume fills up and no overwrite space is available.
The amount of overwrite reserve can be seen using the -r option of the df command. This reserve area is used only when Data ONTAP reports that the volume is full. Until the volume is full, space for Snapshot overwrites is taken from the volume, and only when the volume is 100% full is this reserved space used. Data ONTAP can also use this reserve for caching.
If you have a 1-TB volume containing a 500-GB LUN and fractional reserve is set to 100%, then after writing 200 GB of data to the LUN, 200 GB of the volume is set aside as overwrite reserve. This space is reserved solely so that a Snapshot copy can be taken without overwrites to the LUN running out of space.
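The arithmetic above can be sketched in plain shell (illustrative numbers only; the overwrite reserve is the space written to the LUN multiplied by the fractional reserve percentage):

```shell
# Illustrative only: overwrite reserve implied by fractional_reserve.
lun_used_gb=200          # data written to the LUN so far
fractional_reserve=100   # percent (the Data ONTAP default)
reserve_gb=$(( lun_used_gb * fractional_reserve / 100 ))
echo "overwrite reserve: ${reserve_gb} GB"   # prints 200
```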
Run the following command to set the fractional reserve percentage:
Filer> vol options <volname> fractional_reserve <pct>
When creating a LUN and its containing volume, it is highly recommended to size the volume as the proposed LUN size plus 5 GB, to allow for buffering and metadata. This rule of thumb applies to LUNs up to 1 TB in size. VMware and SnapDrive already do this automatically.
If you intend to create a LUN larger than 1 TB, make the containing volume 2-3% larger than the LUN it will contain.
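As a rough sketch of this rule of thumb in plain shell (illustrative only; sizes in GB, treating 1 TB as 1024 GB and using 3% headroom for large LUNs):

```shell
# Illustrative sizing helper for a LUN's containing volume.
size_volume_gb() {
  lun_gb=$1
  if [ "$lun_gb" -le 1024 ]; then
    # LUNs up to 1 TB: add 5 GB for buffering and metadata
    echo $(( lun_gb + 5 ))
  else
    # LUNs larger than 1 TB: add ~3% headroom
    echo $(( lun_gb + lun_gb * 3 / 100 ))
  fi
}

size_volume_gb 500    # prints 505
size_volume_gb 2048   # prints 2109
```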
There are many ways to configure the NetApp storage appliance for LUN thin provisioning; each has advantages and disadvantages. Note that it is possible to have thinly provisioned volumes and non-thinly provisioned volumes on the same storage system, or even in the same aggregate. For example, LUNs for critical production applications might be configured without thin provisioning, while LUNs for other types of applications might be thinly provisioned.
The following are considered to be best practice configurations:
Volume Guarantee=None Configuration
guarantee = none
LUN reservation = enabled
fractional_reserve = 0%
snap_reserve = 0%
autodelete = volume / oldest_first
autosize = off
try_first = snap_delete
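As a sketch, these settings could be applied to a hypothetical 500-GB volume vol1 in aggregate aggr1 as follows (LUN reservation is enabled by default when a LUN is created, so no extra step is needed for it):
Filer> vol create vol1 -s none aggr1 500g
Filer> vol options vol1 fractional_reserve 0
Filer> snap reserve vol1 0
Filer> snap autodelete vol1 trigger volume
Filer> snap autodelete vol1 delete_order oldest_first
Filer> snap autodelete vol1 on
Filer> vol autosize vol1 off
Filer> vol options vol1 try_first snap_delete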
This configuration has the advantage that the free space in the aggregate is used as a shared pool of free space. The disadvantages are a high level of dependency between volumes and that the level of thin provisioning cannot easily be tuned on an individual volume basis. When using this configuration, the total size of the volumes is greater than the actual storage available in the host aggregate.
With this configuration, the storage administrator generally sizes volumes generously, so that only the used space in the aggregate needs to be managed and monitored.
Volume Guarantee=Volume Configuration
guarantee = volume
LUN reservation = disabled
fractional_reserve = 0%
snap_reserve = 0%
autodelete = volume / oldest_first
autosize = on
try_first = autogrow
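As a sketch, for a hypothetical volume vol1 with a 500-GB LUN (note that the 7-Mode CLI value corresponding to autogrow is volume_grow, and LUN reservation is disabled with the -o noreserve flag at LUN creation; the sizes and ostype are examples only):
Filer> vol options vol1 fractional_reserve 0
Filer> snap reserve vol1 0
Filer> snap autodelete vol1 trigger volume
Filer> snap autodelete vol1 delete_order oldest_first
Filer> snap autodelete vol1 on
Filer> vol autosize vol1 -m 1200g -i 50g on
Filer> vol options vol1 try_first volume_grow
Filer> lun create -s 500g -t windows -o noreserve /vol/vol1/lun0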
This configuration has the advantage that it is possible, if desired, to finely tune the level of thin provisioning for each application. With this configuration, the volume size defines or guarantees an amount of space that is only available to LUNs within that volume. The aggregate provides a shared storage pool of available space for all the volumes contained within it. If the LUNs or snapshot copies require more space than available in the volume, the volumes will automatically grow, taking more space from the containing aggregate.
The degree of thin provisioning is done on a per-volume level, allowing an administrator to, for example, set the volume size to 95% of the cumulative LUN size for a more critical application and to 80% for a less critical application. It is possible to tune how much of the shared available space in the aggregate a particular application can consume by setting the maximum size to which the volume is allowed to grow as explained in the description of the autogrow feature.
In cases where Snapshot copies are also being used, the volume might be configured larger than the total size of the LUNs it contains. The advantage of disabling LUN space reservation in that case is that Snapshot copies can use the space not needed by the LUNs. The LUNs themselves are not in danger of running out of space, because the autodelete feature removes the Snapshot copies that are consuming space. Note that, currently, Snapshot copies used to create clones will not be deleted by autodelete.
Thick provisioning is the default type. With thick provisioning, all of the space specified for the LUN is allocated from the volume at LUN creation time. Even if the volume fills to 100%, the LUN still has its space allocated and can still be written to.
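For example, a hypothetical 500-GB thick-provisioned (space-reserved) LUN is created simply by omitting the noreserve option:
Filer> lun create -s 500g -t windows /vol/vol1/lun0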
- TR-3483: Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment
- Chapter 2: 'How 100 percent fractional reserve affects available space' of Block Access Management Guide for FCP
- Data ONTAP® 8.1 Storage Efficiency Management Guide For 7-Mode
- Data ONTAP® 8.2 SAN Administration Guide For 7-Mode
- ONTAP 9 Guidelines for working with FlexVol volumes that contain LUNs