What are the NVMe settings regarding io-queue-count and io-queue-depth?
Applies to
- ONTAP SAN
- NVMe-oF
Answer
- The NVMe specification supports up to 64K queues, each with a queue depth of up to 64K outstanding commands. There is no modify command for these parameters.
- Review Implementing and configuring modern SANs with NVMe-oF: for high parallelism, I/O submission and completion queue pairs ( io-queue-count ) are aligned to host CPU cores.
- Each host/controller pair has an independent set of NVMe queues.
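On a Linux host, the number of I/O queues and their depth can be requested at connect time with nvme-cli. The sketch below is illustrative only: the target address, port, and subsystem NQN are placeholders, and the queue-size value is a hypothetical choice, none of which come from this article.

```shell
# Placeholder target details -- replace with your LIF address and subsystem NQN.
NR_IO_QUEUES=$(nproc)   # align I/O queue pairs to host CPU cores
QUEUE_SIZE=128          # requested per-queue depth (hypothetical value)

nvme connect -t tcp \
    -a 192.0.2.10 -s 4420 \
    -n nqn.1992-08.com.netapp:sn.example:subsystem.demo \
    -i "$NR_IO_QUEUES" \
    -Q "$QUEUE_SIZE"
```

The host request is a ceiling, not a guarantee: the controller grants queue counts and depths up to its own limits.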
- The following EMS message is logged if a host tries to create more NVMe controllers than are supported:
  nvmf.spdk.err: NVMe/TCP controller limits reached on this node. Queue slots provisioned 69568, additional queue slots requested:128, currently supported queue slots:69632. No more NVMe connects via NVMe/TCP LIFs on this node are allowed.
- If a large NVMe/TCP or NVMe/FC configuration exceeds the NVMe/TCP or NVMe/FC limits published in the Hardware Universe (HWU), it results in overprovisioning of NVMe objects.
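The EMS message above reflects a simple node-wide queue-slot budget. The following sketch reuses the numbers from that example message to show why the connect attempt was rejected; the check itself is an illustration, not ONTAP's actual code:

```shell
# Values taken from the example EMS message; the check is illustrative only.
provisioned=69568    # queue slots already in use on the node
requested=128        # slots needed by the new connect attempt
supported=69632      # node-wide queue slot limit

if [ $((provisioned + requested)) -gt "$supported" ]; then
    echo "connect rejected: only $((supported - provisioned)) slots free"
else
    echo "connect allowed"
fi
# → connect rejected: only 64 slots free
```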