
What are the recommended settings for VMware ESX/ESXi 5.x, 4.x, and 3.5 when connected to NetApp Storage Systems?


Applies to

  • FlexPod
  • Data ONTAP 8.2 7-Mode
  • Data ONTAP 8.1 7-Mode
  • Data ONTAP 8 7-Mode
  • Data ONTAP 7 and earlier 

Answer

To prevent possible I/O failures and data corruption, it is strongly recommended that you configure the following:

  • The Fibre Channel HBA settings on an ESX/ESXi 3.5 server
  • The preferred active paths for NetApp storage systems
  • The hardware iSCSI HBA settings for ESX 3.5 and 4.x

ESXi 5.x

  1. To configure ESX server multipathing and timeout settings: From the VMware vSphere Web Client Home page, click vCenter > Hosts.
  2. Right-click on a host and select Actions > NetApp VSC > Set Recommended Values.
  3. In the Recommended Settings pop-up box, select the values that work best with your system. The standard, recommended values are set by default.
  4. Click OK.
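If you prefer to check or apply these values from the ESXi shell instead of the VSC pop-up, a host's advanced settings can be read and written with esxcli on ESXi 5.x. The NFS.MaxVolumes option below is only an illustrative example of the kind of value VSC sets; confirm the exact option names and values against the VSC documentation for your release.

```shell
# Inspect an advanced setting on an ESXi 5.x host (illustrative option)
esxcli system settings advanced list --option /NFS/MaxVolumes

# Apply a value manually; 256 is an illustrative value, not a mandate --
# use the recommendation shown by VSC for your environment
esxcli system settings advanced set --option /NFS/MaxVolumes --int-value 256
```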

For more information, see: Configuring ESX server multipathing and timeout settings

ESX host values set by VSC for VMware vSphere

  1. HBA Timeout settings:
    The following HBA settings apply only to ESX/ESXi 3.5. ESX/ESXi 4.x does not need timeouts configured.

    The HBA settings on ESXi can be set using the remote console provided by VMware. To download the remote console, go to the VMware website and search for 'VMware Infrastructure Remote CLI'. Download either the Windows or Linux VI Remote CLI installer; either will work. This article uses the Windows Remote CLI.

    When installing the remote console, ensure that its location is added to your PATH. Otherwise, to run the RCLI commands, change to the Program Files\VMware\VMware VI Remote CLI\bin directory.
    The following settings apply for NetApp storage systems running Data ONTAP 7.2.4 and later with cfmode=single_image. To determine the correct Data ONTAP version and cfmode on all your connected NetApp Storage Systems, run the following commands on each connected controller:

    controller> version
    NetApp Release 7.2.4: [date]

    controller> fcp show cfmode
    fcp show cfmode: single_image
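    To collect these two values from several controllers at once, the commands can be wrapped in a small SSH loop. The controller names below are placeholders for your own systems, and the loop assumes SSH access is enabled on each controller.

    ```shell
    # Placeholder hostnames -- replace with your actual controllers
    for c in controller1 controller2; do
        echo "=== $c ==="
        ssh root@"$c" version          # expect: NetApp Release 7.2.4 or later
        ssh root@"$c" fcp show cfmode  # expect: single_image
    done
    ```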


    After applying the settings, the host will require a reboot for the timeouts to take effect.
  • For QLogic FC HBAs:
    To query the current HBA settings, use vicfg-module.pl with the --get-options option.

    C:\>perl vicfg-module.pl --server <esx_host> --username root --password <password> --get-options qla2300_707_vmw
    qla2300_707_vmw options = ''

    To set the timeout, run the vicfg-module.pl command with the --set-options option. For NetApp storage systems running Data ONTAP 7.2.4 and later with cfmode=single_image, set qlport_down_retry to 5.

    C:\>perl vicfg-module.pl --server <esx_host> --username root --password <password> --set-options "qlport_down_retry=5" qla2300_707_vmw

    To verify the value has been set correctly, run the vicfg-module.pl command again with the --get-options:

    C:\>perl vicfg-module.pl --server <esx_host> --username root --password <password> --get-options qla2300_707_vmw
    qla2300_707_vmw options = 'qlport_down_retry=5'
     
  • For Emulex HBAs:
    To query the current HBA settings, run the vicfg-module.pl command with the --get-options option.

    C:\>perl vicfg-module.pl --server <esx_host> --username root --password <password> --get-options lpfc_740
    lpfc_740 options = ''

    To set the timeout, run the vicfg-module.pl command with the --set-options option. For NetApp storage systems running Data ONTAP 7.2.4 and later with cfmode=single_image, set lpfc_nodev_tmo to 10.

    C:\>perl vicfg-module.pl --server <esx_host> --username root --password <password> --set-options "lpfc_nodev_tmo=10" lpfc_740

    To verify the value has been set correctly, run the vicfg-module.pl command again with the --get-options:

    C:\>perl vicfg-module.pl --server <esx_host> --username root --password <password> --get-options lpfc_740
    lpfc_740 options = 'lpfc_nodev_tmo=10'
  2. Preferred Active Path Selection Settings:

    Every time the SAN topology changes, it is strongly recommended to set the preferred active path for every LUN on every ESX server. The preferred active paths must also be set again after the ESX server reboots.
     
    If the ESX/ESXi 4.x environment is using ALUA with a path selection policy of either Round Robin (recommended) or Most Recently Used, preferred paths do not need to be set. If the path selection policy is Fixed, with or without ALUA, preferred paths must be set.
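    On ESX/ESXi 4.x, the path selection policy can also be checked and changed from the console with esxcli. The sketch below uses a placeholder device identifier; confirm the available policy names against your host's esxcli nmp psp list output.

    ```shell
    # List devices and their current path selection policy (ESX 4.x console)
    esxcli nmp device list

    # Set Round Robin (recommended with ALUA) on one device;
    # <naa_device_id> is a placeholder for a real device identifier
    esxcli nmp device setpolicy --device <naa_device_id> --psp VMW_PSP_RR
    ```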

    Using the Virtual Infrastructure Client (vSphere Client for ESX/ESXi 4.x), select the ESX host in question. Next, select the Configuration tab and then Storage Adapters in the Hardware section. The example shows an ESX host with two Fibre Channel storage adapters, vmhba1 and vmhba2, that are connected to separate fabrics and to an individual LUN.

    1001387-1.jpg

    In this example, vmhba1 is the storage adapter selected in the upper portion of the window, and the SCSI targets connected to it are displayed in the lower portion. The adapter vmhba1 is connected to SCSI Target 0 and SCSI Target 1, which provide the paths vmhba1:0:1 and vmhba1:1:1 to LUN ID 1; the LUN is also accessible through the canonical path vmhba2:0:

    Select a storage adapter:

    1001387-2.jpg

    Select one of the SCSI targets and right click to select Manage Paths. On ESXi 3.5, this looks similar to the following:

    1001387-3.jpg

    On ESX/ESXi 4.x, the screen is slightly different. In this example, ALUA is in use because the Storage Array Type is VMW_SATP_ALUA. Because the path selection policy is Fixed, a preferred path needs to be selected. If the Storage Array Type does not indicate ALUA, then ALUA is not enabled, and the path selection policy should be set to Fixed if it is not already.

    1001387-4.jpg

    From the storage controller that is hosting the LUN, run the fcp show adapters command to display the WWPNs of the FCP target ports. This output can be used to determine whether the Active path is using one of the FCP target ports of the hosting storage controller.

    controller> fcp show adapters
    Slot:            0c
    Description:     Fibre Channel Target Adapter 0c (Dual-channel, QLogic 2322 (2362) rev. 3)
    Adapter Type:    Local
    Status:          ONLINE
    FC Nodename:     50:0a:09:80:86:d7:f5:b8 (500a098086d7f5b8)
    FC Portname:     50:0a:09:81:86:d7:f5:b8 (500a098186d7f5b8)
    Standby:         No

     
    Slot:            0d
    Description:     Fibre Channel Target Adapter 0d (Dual-channel, QLogic 2322 (2362) rev. 3)
    Adapter Type:    Local
    Status:          ONLINE
    FC Nodename:     50:0a:09:80:86:d7:f5:b8 (500a098086d7f5b8)
    FC Portname:     50:0a:09:82:86:d7:f5:b8 (500a098286d7f5b8)
    Standby:         No


    The WWPNs in this output are the values visible in the Manage Paths window. In this example, they confirm that the Active path is not using an FCP target port of the hosting storage controller; the Active path is therefore using a non-primary, non-optimal partner path. This configuration must be corrected so that redundancy and optimal performance are maintained.
    • Verify that the Policy setting is Fixed. Use the preferred path when available.
    • Identify a primary FCP target port WWPN and its associated vmhba device. In this example, WWPN 50:0a:09:81:86:d7:f5:b8 represents a primary FCP target port.
    • Select the path to the primary FCP target port and then check the Preferred box.

      On ESXi 3.5:

      1001387-5.jpg

      On ESX/ESXi 4.x, right-click to open the menu and select Preferred. The preferred path is marked with an asterisk (*).

      1001387-6.jpg

      The selected path should now be displayed as Active and Preferred.

      1001387-7.jpg
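
      On ESX 4.x, the same preferred-path change can be sketched from the console with esxcli. The device and path identifiers below are placeholders; take the exact path name from the esxcli nmp path list output for your device.

      ```shell
      # ESX 4.x console sketch: set and verify the preferred path for a
      # Fixed-policy device. <naa_device_id> and <path_name> are placeholders.
      esxcli nmp path list --device <naa_device_id>
      esxcli nmp fixed setpreferred --device <naa_device_id> --path <path_name>
      esxcli nmp fixed getpreferred --device <naa_device_id>
      ```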
       

Hardware iSCSI HBA settings for ESX 3.5 and 4.x

Hardware iSCSI is not supported on ESXi. To set the hardware iSCSI HBA settings on either ESX 3.5 or ESX 4.x, perform the following steps:

  1. Query the QLogic iSCSI HBA settings by entering the following command on the ESX host console:
    # /usr/sbin/esxcfg-module -q driver
    driver is qla4022 for ESX 3.5 or qla4xxx for ESX 4.0.
    The current settings are displayed.
  2. Run the following command to set a new timeout value:
    # /usr/sbin/esxcfg-module -s "parm_name=value" driver
    parm_name is ql4xportdownretrycount for the qla4022 driver or ka_timeout for the qla4xxx driver.
    value is either 14 (SAN boot) or 60 (non-SAN boot).
    driver is qla4022 for ESX 3.5 or qla4xxx for ESX 4.0.
    For example:
    # /usr/sbin/esxcfg-module -s "ka_timeout=60" qla4xxx
  3. Run the following command to update the boot configuration with the new timeout value:
    # /usr/sbin/esxcfg-boot -b
  4. Reboot the ESX host. 
  5. If iscli is not already installed on the ESX host, download the QLogic SANsurfer iscli from the QLogic support web site. Search the page for 'iscli' to locate the download. 
  6. Run iscli.  
  7. Select Port Level Info & Operations Menu.
  8. Select Edit Configured Port Settings Menu.
  9. Select Edit Port Firmware Settings Menu.
  10. Select Configure Advanced Settings.  
  11. Set IP_ARP_Redirect = ON.
  12. Select Configure Device Settings.  
  13. Set KeepAliveTO = 14 for iSCSI SAN booted systems and KeepAliveTO = 60 for other systems.  
  14. Repeat Steps 7-13 for the next hardware iSCSI HBA port.