What is the Back-to-Back (B2B) Consistency Point Scenario?
Applies to
- ONTAP 9
- Data ONTAP 8
- Data ONTAP 7
Answer
- A NetApp Storage Controller has two buffers for accepting and logging write data.
- Because of this double-buffered write design, the storage controller can process only one Consistency Point (CP) per aggregate at a time.
- The Consistency Point process is:
- Global - all writes flow through Consistency Points (one CP per aggregate)
- Atomic - all modified data is considered dirty in RAM and must be cleaned by flushing to disk
- Under certain circumstances, while one CP is being processed and written to disk, the second memory buffer can reach a watermark that triggers a new CP before the previous CP has completed. This condition is known as a back-to-back (B2B) CP (see the sketch following this list).
- If writes arrive faster than the CPU and/or disks can process them:
- The internal limit on NVLOG data is reached
- Write latency increases because user write operations are not acknowledged until a write buffer frees up
- In this scenario, the NVLOG process is not the root cause; it is a secondary bottleneck that results from overwhelmed CPU or disk resources
- In most instances of this scenario, the length of time the storage controller must pause incoming write requests is measured in milliseconds, and the environment is not significantly impacted.
- However, on storage controllers that fall into one or both of the categories below, the impact on overall performance might be undesirable.
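The following Python sketch is a highly simplified model of the double-buffered write behavior described above; it is not ONTAP code, and the buffer size, flush rate, and tick-based loop are illustrative assumptions. It only demonstrates how a second buffer filling before the previous CP finishes produces back-to-back CPs and stalled (higher-latency) writes.

```python
"""Minimal sketch (not ONTAP code) of the double-buffered write/CP model.

All names and numbers are illustrative assumptions, not ONTAP internals
or tunables.
"""

BUFFER_CAPACITY_MB = 100        # assumed size of each of the two write buffers
FLUSH_RATE_MB_PER_TICK = 40     # assumed disk/CPU throughput while a CP runs


def simulate(incoming_mb_per_tick, ticks=20):
    """Model two write buffers: one accepts writes while the other is flushed."""
    active = 0.0          # MB logged in the buffer currently accepting writes
    flushing = 0.0        # MB remaining in the buffer currently being flushed (CP)
    b2b_events = 0
    stalled_mb = 0.0      # writes that had to wait because both buffers were busy

    for _tick in range(ticks):
        # Advance the in-progress CP: drain the flushing buffer to disk.
        if flushing > 0:
            flushing = max(0.0, flushing - FLUSH_RATE_MB_PER_TICK)

        # Accept new writes into the active buffer, up to its remaining space.
        incoming = incoming_mb_per_tick
        space = BUFFER_CAPACITY_MB - active
        accepted = min(incoming, space)
        active += accepted

        # Active buffer is full: a new CP is needed.
        if active >= BUFFER_CAPACITY_MB:
            if flushing > 0:
                # Previous CP not finished -> back-to-back CP condition.
                # Incoming writes must wait (latency increases) until the
                # flushing buffer frees up.
                b2b_events += 1
                stalled_mb += incoming - accepted
            else:
                # Normal case: swap buffers and start the next CP immediately.
                flushing, active = active, 0.0

    return b2b_events, stalled_mb


if __name__ == "__main__":
    for rate in (20, 60, 120):   # MB of writes arriving per tick
        b2b, stalled = simulate(rate)
        print(f"incoming {rate:>3} MB/tick -> B2B CPs: {b2b:>2}, "
              f"stalled writes: {stalled:.0f} MB")
```

Running the sketch with increasing write rates shows no B2B CPs while the flush rate keeps up, then a growing count of B2B events and stalled megabytes once incoming writes outpace what a CP can drain per tick, which is the back-pressure that surfaces as increased write latency.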
Additional Information
- For more information on diagnosing this issue, see this KB: Write Performance Impacted by Back to Back Consistency Points
- What are the different Consistency Point types and how are they measured?
- Where can I learn more about Consistency Points?