What is a Hybrid aggregate?
Applies to
Clustered Data ONTAP 8
Answer
- NetApp Flash Pool is an intelligent storage caching product within the NetApp Virtual Storage Tier (VST) product family.
- A Flash Pool aggregate (or hybrid aggregate) combines Solid-State Drives (SSDs) and Hard Disk Drives (HDDs) in a single storage pool (aggregate). The HDDs can be either performance disk drives (often referred to as SAS or FC) or capacity disk drives (often called SATA). The SSDs provide a fast-response-time cache for volumes that are provisioned on the Flash Pool aggregate.
Provisioning a volume in a Flash Pool aggregate can provide one or more of the following benefits:
- Persistent low read latency for large active datasets: NetApp systems configured with Flash Pool can cache up to 100 times more data than configurations that have no supplemental flash-based cache. The data can be read 2 to 10 times faster from the cache than from HDDs. In addition, data cached in a Flash Pool aggregate is available through planned and unplanned storage controller takeovers, enabling consistent read performance throughout these events.
- More HDD operations for other workloads: Repeated random read and random overwrite operations are served from the SSD cache, enabling the HDDs to handle more reads and writes for other workloads, such as sequential reads and writes.
- Increased system throughput (IOPS): For a system where throughput is limited due to high HDD utilization, adding Flash Pool cache can increase total IOPS by serving random requests from the SSD cache.
- HDD reduction: A storage system that is configured with Flash Pool to support a given set of workloads typically requires fewer HDDs of the same type, and often fewer and lower-cost-per-terabyte HDDs, than a system that is not configured with Flash Pool.
Although configuring a NetApp storage system with Flash Pool can provide significant benefits, there are some things that Flash Pool does not do. For example:
- Accelerate write operations: The NetApp Data ONTAP® operating system is already write-optimized through the use of write cache and nonvolatile memory (NVRAM or NVMEM). Flash Pool caching of overwrite data is done primarily to offload the intensive write operations of rapidly changing data from HDDs.
- Reduce or alleviate high CPU or memory utilization: Adding a caching technology to a storage system results in an incremental increase in CPU and memory consumption. Consequently, adding Flash Pool to a system that is already near maximum CPU or memory utilization increases the consumption of these resources.
- Cache sequential (read or write) or large-block (>16KB) random write operations: HDDs handle sequential read and write operations efficiently. Large-block random write operations are typically reorganized into more sequential write operations by Data ONTAP before being written to disk. For these reasons, and others discussed in TR-4070, Flash Pool does not cache sequential writes or random overwrites larger than 16KB.
- Increase the maximum throughput capability of a storage system: Achieving the maximum throughput (IOPS or MB/sec) of a system is a function of the memory and CPU resources of the storage controller. Maximizing throughput also requires a sufficient number of drives (HDDs or SSDs) to handle the workloads that will result in peak system (controller) performance. Caching technologies do not increase the system memory or CPU cycles available in a system. As a result, the maximum throughput values for NetApp storage systems are not higher for systems configured with caching technology.
Creating a Flash Pool Aggregate:
A Flash Pool aggregate can be created non-disruptively, that is, while the system is operating and serving data. The process of creating a Flash Pool aggregate has three steps: create the HDD aggregate, mark it as hybrid-enabled, and add the SSD capacity.
- Create the 64-bit HDD aggregate (unless it already exists).
Notes:
- When creating an aggregate of multiple HDD RAID groups, NetApp's best practice is to size each RAID group with the same number of drives, or with no more than one drive of difference (for example, one RAID group of 16 HDDs and a second of 15 HDDs is acceptable).
- If an existing aggregate is 32-bit, it must be converted to a 64-bit aggregate before it is eligible to become a Flash Pool aggregate. As noted in section 3.1 of TR-4070, there are situations in which a converted 64-bit aggregate is not eligible to become a Flash Pool aggregate.
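For example, a 64-bit aggregate of 16 HDDs could be created with a command along the following lines (7-Mode syntax; the aggregate name aggr1 and the disk count are hypothetical values):
aggr create aggr1 -B 64 16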
- Set the hybrid_enabled option to on for the aggregate:
Note: A RAID group cannot be removed from an aggregate after the aggregate has been created.
aggr options <aggr_name> hybrid_enabled on
- Add the SSDs to the aggregate, either by disk count and type or by specific disk IDs:
aggr add <aggr_name> -T SSD <number_of_disks>
-Or-
aggr add <aggr_name> -d <diskid1>,<diskid2>
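For example, assuming a hypothetical aggregate named aggr1 and four available spare SSDs:
aggr options aggr1 hybrid_enabled on
aggr add aggr1 -T SSD 4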
Alternatively, in clustered Data ONTAP, the SSD capacity can be provided from a shared SSD storage pool:
- Determine the names of the spare SSDs available to you:
storage aggregate show-spare-disks -disk-type SSD
- Create the storage pool:
storage pool create -storage-pool sp_name -disk-list disk1,disk2,disk3...
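For example, assuming four hypothetical spare SSD names, a storage pool named sp1 could be created as follows:
storage pool create -storage-pool sp1 -disk-list 1.0.22,1.0.23,1.0.24,1.0.25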
- Optional: Show the newly created storage pool:
storage pool show -storage-pool sp_name
- Mark the aggregate as eligible to become a Flash Pool aggregate:
storage aggregate modify -aggregate aggr_name -hybrid-enabled true
- Show the available SSD storage pool allocation units:
storage pool show-available-capacity
- Add the SSD capacity to the aggregate:
storage aggregate add aggr_name -storage-pool sp_name -allocation-units number_of_units
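For example, to add two allocation units from a hypothetical storage pool named sp1 to a hypothetical aggregate named aggr1:
storage aggregate add aggr1 -storage-pool sp1 -allocation-units 2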
Reverting a Flash Pool aggregate back to a standard HDD-only aggregate requires migrating the volumes to an HDD-only aggregate. After all volumes have been moved from a Flash Pool aggregate, the aggregate can be destroyed, and then the SSDs and HDDs are returned to the spares pool, which makes them available for use in other aggregates or Flash Pool aggregates.
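For example, in clustered Data ONTAP each volume can be moved non-disruptively before the Flash Pool aggregate is destroyed; the vserver, volume, and aggregate names below are hypothetical:
volume move start -vserver vs1 -volume vol1 -destination-aggregate hdd_aggr1
storage aggregate delete -aggregate fp_aggr1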
A Flash Pool aggregate that has an SSD RAID group containing only one data drive is supported; however, with such a configuration, the SSD cache can become a bottleneck for some system deployments. Therefore, NetApp recommends configuring Flash Pool aggregates with at least the platform-specific minimum number of data SSDs listed in TR-4070.
Additional Information
For further details, see TR-4070: Flash Pool Design and Implementation Guide.