NetApp Knowledge Base

How do LS mirrors affect NAS access when new volumes are added?

Applies to

  • ONTAP 9
  • NFS
  • CIFS
  • LS mirror

Answer

  • Why can newly created volumes not be accessed immediately by NFS or CIFS clients, even though they become accessible after the next scheduled LS mirror update completes?
  • A sign that the LS mirror is not yet updated: after the new share is created, the Security tab is missing when checking the share's Properties in File Explorer.
CIFS

The following error message is displayed:

Windows cannot access <share>. Check the spelling of the name. Otherwise, there might be a problem with your network. To try and identify and resolve network problems, click Diagnose.

Clicking "See details" displays:

Error code: 0x80070035
The network path was not found

A packet trace will display: Error: STATUS_BAD_NETWORK_NAME


NFS

For NFS, the newly created export is not even visible.

Cluster::> volume show -vserver vs_nfs -volume nfs1 -fields junction-path
vserver volume junction-path
------- ------ -------------
vs_nfs  nfs1   /nfs1

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/nfs1 /mnt/nfs1
nfs mount: 10.128.141.122:/nfs1: No such file or directory
-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/ /mnt/vs_root
-bash-3.00# cd /mnt/vs_root
-bash-3.00# ls
automount  test.txt   vol_nfs

Note: nfs1 is not listed here.

Explanation

In both cases, the root mount point (/) is LS Mirrored:

cm2244a-cn::> snapmirror show
Source              Destination  Mirror        Relationship  Total
Path          Type  Path         State         Status        Progress   Healthy
------------- ----  ------------ ------------- ------------- ---------- -------
cm2244a-cn://vs_cifs/vs2_root                                 <~~~ Source Path
              LS <~ Type   cm2244a-cn://vs_cifs/vs2_root_ls   <~~ Destination Path
                                 Snapmirrored  Idle           -          true
cm2244a-cn://vs_nfs/vs_nfs_root
              LS   cm2244a-cn://vs_nfs/vs_nfs_root_ls
                                 Snapmirrored  Idle           -          true

After a manual LS Mirror update or the next scheduled LS Mirror update, both NFS and CIFS access work correctly.

When a client accesses a volume over NAS, it uses a file handle. In ONTAP, that file handle contains the volume's MSID. Every volume normally has a unique MSID, but the volumes in an LS mirror set share the same MSID:

Cluster::> volume show -vserver vs_nfs -fields msid
vserver volume         msid
------- -------------- ----------
vs_nfs  automount      2147484731
vs_nfs  nfs1           2147484763
vs_nfs  nfs2           2147484764
vs_nfs  nfs3           2147484765
vs_nfs  nfs4           2147484766
vs_nfs  vol_nfs        2147484685
vs_nfs  vs_nfs_root    2147484684
vs_nfs  vs_nfs_root_ls 2147484684

When a client accesses a volume that is part of an LS mirror set, the request carries the MSID. The request enters the cluster, and the VLDB looks up that MSID. Because multiple volumes share the same MSID, the VLDB resolves the MSID to a data set ID (DSID). DSIDs are unique to each volume in the cluster, regardless of LS mirrors:

cm2244a-cn::> volume show -vserver vs_nfs -fields dsid
vserver volume         dsid
------- -------------- ----
vs_nfs  automount      1082
vs_nfs  nfs1           1111
vs_nfs  nfs2           1112
vs_nfs  nfs3           1113
vs_nfs  nfs4           1114
vs_nfs  vol_nfs        1037
vs_nfs  vs_nfs_root    1036
vs_nfs  vs_nfs_root_ls 1108
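The MSID-to-DSID relationship above can be sketched as a small Python model. This is an illustration only, not ONTAP internals; the values come from the CLI output above, and the `volumes` list and `resolve_msid` function are hypothetical names.

```python
# Every volume has a unique DSID; LS mirror copies share the source's MSID.
# Values taken from the "volume show -fields msid/dsid" output above.
volumes = [
    {"name": "vs_nfs_root",    "msid": 2147484684, "dsid": 1036, "type": "RW"},
    {"name": "vs_nfs_root_ls", "msid": 2147484684, "dsid": 1108, "type": "LS"},
    {"name": "nfs1",           "msid": 2147484763, "dsid": 1111, "type": "RW"},
]

def resolve_msid(msid):
    """Return every DSID that an MSID maps to (one per member of an LS mirror set)."""
    return [v["dsid"] for v in volumes if v["msid"] == msid]

# The root volume's MSID resolves to two DSIDs: the R/W source and its LS mirror.
print(resolve_msid(2147484684))   # [1036, 1108]
# A plain data volume's MSID resolves to exactly one DSID.
print(resolve_msid(2147484763))   # [1111]
```

Because the file handle only carries the MSID, the cluster still has a choice to make when that MSID maps to more than one DSID, which is what the routing rules below describe.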

Note: All volumes are mounted as /volname. Thus, data access results in:
Data Requests --> / --> volname

To a cluster, a volume is a folder. When you create and mount a volume to /, it appears as a folder to the cluster and clients.

When a read or write request comes through that path into the N-blade of a node, the N-blade first determines whether any LS mirrors of the volume exist. If there are none, the request is routed to the R/W volume. If LS mirrors exist, preference is given to an LS mirror on the same node as the N-blade that fielded the request; if that node has no LS mirror, an up-to-date LS mirror on another node is chosen. This is why newly created volumes are invisible: until the LS mirror update, all requests go to an LS mirror destination volume, which is read-only.
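The selection order just described can be sketched in Python. This is an illustrative model, not ONTAP code; the node names, dictionary fields, and `pick_volume` helper are hypothetical.

```python
def pick_volume(copies, local_node):
    """Pick which copy of a volume serves a request, per the order described above:
    a local up-to-date LS mirror, then any up-to-date LS mirror, then the R/W source."""
    mirrors = [c for c in copies if c["type"] == "LS" and c["up_to_date"]]
    local = [c for c in mirrors if c["node"] == local_node]
    if local:
        return local[0]        # LS mirror on the node that fielded the request
    if mirrors:
        return mirrors[0]      # up-to-date LS mirror on another node
    return next(c for c in copies if c["type"] == "RW")  # no mirrors: R/W source

copies = [
    {"name": "vs_nfs_root",    "type": "RW", "node": "node1", "up_to_date": True},
    {"name": "vs_nfs_root_ls", "type": "LS", "node": "node2", "up_to_date": True},
]

# Requests from either node land on the read-only LS mirror, never the R/W source,
# which is why a volume junctioned after the last mirror update is invisible.
print(pick_volume(copies, "node2")["name"])   # vs_nfs_root_ls (local mirror)
print(pick_volume(copies, "node1")["name"])   # vs_nfs_root_ls (remote mirror still preferred)
```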

For example:

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/ /mnt/vs_root
-bash-3.00# cd /mnt/vs_root
-bash-3.00# ls
automount  test.txt   vol_nfs

After updating the LS mirror:

Cluster::> snapmirror update-ls-set -S //vs_nfs/vs_nfs_root

In ONTAP 8.3, the equivalent command is:

Cluster::> snapmirror update-ls-set -source-path cm2244a-cn://vs_cifs/vs2_root

It is now visible and accessible:

-bash-3.00# ls
automount  nfs1 test.txt vol_nfs

Alternatively, when access is attempted through the .admin path, the VLDB sends the request to the source volume's DSID, allowing write access.

For NFS, specify the .admin path:

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/.admin /mnt/vs_root
-bash-3.00# cd /mnt/vs_root
-bash-3.00# ls
automount  nfs1  test.txt   vol_nfs

For CIFS, the difference is not in how a share is accessed, but in which share is accessed. If a share is created for the .admin path, using that share always gives the client R/W access:

Cluster::> vserver cifs share create -vserver vs_cifs -share-name vs2_root_rw -path /.admin
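The .admin behavior for both protocols can be summarized in a short Python sketch. This is an illustration of the described behavior, not ONTAP code; the `resolve_path` function and its parameters are hypothetical, and the DSIDs reuse the values from the tables above.

```python
def resolve_path(path, source_dsid, mirror_dsid):
    """Return (dsid, writable) for a namespace path, per the behavior above:
    paths under /.admin resolve to the R/W source; everything else lands on
    a read-only LS mirror."""
    if path == "/.admin" or path.startswith("/.admin/"):
        return source_dsid, True    # VLDB sends the request to the source volume
    return mirror_dsid, False       # normal access is served by an LS mirror

# DSIDs from the example: 1036 = vs_nfs_root (R/W), 1108 = vs_nfs_root_ls (LS).
print(resolve_path("/.admin/nfs1", 1036, 1108))  # (1036, True)
print(resolve_path("/nfs1", 1036, 1108))         # (1108, False)
```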

If another new volume 'cifs4' is created and a CIFS share is created upon it without updating the LS Mirror manually:

cm2244a-cn::> volume show -vserver vs_cifs -fields junction-path
vserver volume junction-path
------- ------ -------------
vs_cifs cifs4  /cifs4

It still cannot be accessed.

However, it is visible and accessible from the .admin path.


To prevent confusion, ONTAP displays a notice after volume creation reminding users to update the LS mirrors.

For example:

Cluster::> volume create -vserver vs_nfs -volume nfs2 -aggregate aggr0 -state online -junction-path /nfs2

Warning: You are about to create a volume on a root aggregate.  This may cause severe performance or stability problems
         and therefore is not recommended. Do you want to proceed? {y|n}: y
[Job 3074] Job succeeded: Successful

Notice: Volume nfs2 now has a mount point from volume vs_nfs_root.  The load sharing (LS) mirrors of volume vs_nfs_root
        are scheduled to be updated at 9/28/2013 07:05:00.  Volume nfs2 will not be visible in the global namespace until
        the LS mirrors of volume vs_nfs_root have been updated.

Note: This notice is not displayed if the new volume is created from an older version of System Manager.

Additional Information

N/A

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.