
How do LS mirrors affect NAS access when new volumes are added?


Applies to

Clustered Data ONTAP 8.2

Answer

Why can newly created volumes not be accessed by NFS or CIFS clients until the next scheduled LS mirror update completes?

Note: LS Mirror information can be located in the SVM Root Volume Protection Express Guide.
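
For reference, a root volume LS mirror set consists of a DP-type destination volume, an LS-type SnapMirror relationship, and an initial baseline transfer. A minimal sketch, assuming a destination aggregate named aggr1 (the SVM and volume names follow this article's examples):

Cluster::> volume create -vserver vs_nfs -volume vs_nfs_root_ls -aggregate aggr1 -type DP

Cluster::> snapmirror create -source-path //vs_nfs/vs_nfs_root -destination-path //vs_nfs/vs_nfs_root_ls -type LS

Cluster::> snapmirror initialize-ls-set -source-path //vs_nfs/vs_nfs_root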

Another sign that the LS mirror may be out of date: after a new share is created, the Security tab is missing when the share's Properties are viewed in Windows Explorer.
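
Before troubleshooting further, it can help to confirm whether the LS mirror set is current. A quick check (a sketch; these are standard snapmirror show fields in clustered Data ONTAP):

Cluster::> snapmirror show -type LS -fields state,status,lag-time

A large lag-time, or a status other than Idle, suggests the mirrors have not yet picked up the new junction.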

Cluster::> vol show -vserver vs_cifs -volume cifs1 -fields junction-path

  (volume show)

vserver volume junction-path

------- ------ -------------

vs_cifs cifs1  /cifs1

The following error message is displayed:

Windows cannot access <share>. Check the spelling of the name. Otherwise, there might be a problem with your network. To try and identify and resolve network problems, click Diagnose.

[Screenshot: Windows "cannot access" error dialog]
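
Note that the share definition itself is present on the SVM; it is the junction that is missing from the namespace served by the stale LS mirror. This can be confirmed from the cluster shell (the share name cifs1 is assumed here for illustration):

Cluster::> vserver cifs share show -vserver vs_cifs -share-name cifs1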

 

For NFS, the newly created export is not even visible.

Cluster::> vol show -vserver vs_nfs -volume nfs1 -fields junction-path 

  (volume show)

vserver volume junction-path

------- ------ -------------

vs_nfs  nfs1   /nfs1

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/nfs1 /mnt/nfs1

nfs mount: 10.128.141.122:/nfs1: No such file or directory

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/ /mnt/vs_root

-bash-3.00# cd /mnt/vs_root

-bash-3.00# ls

automount  test.txt   vol_nfs     

Note: nfs1 is not listed here.

In both cases, the root volume (mounted at /) is LS mirrored. In the output below, each relationship wraps across three lines: the source path; the type (LS) and destination path; then the mirror state, relationship status, progress, and health:

cm2244a-cn::vserver nfs> snapmirror show

Source              Destination  Mirror        Relationship  Total

Path          Type  Path         State         Status        Progress   Healthy

------------- ---- ------------ ------------- -------------- ---------- -------

cm2244a-cn://vs_cifs/vs2_root

              LS   cm2244a-cn://vs_cifs/vs2_root_ls

                                Snapmirrored  Idle           -          true

cm2244a-cn://vs_nfs/vs_nfs_root

              LS   cm2244a-cn://vs_nfs/vs_nfs_root_ls

                                Snapmirrored  Idle           -          true

After a manual LS Mirror update or the next scheduled LS Mirror update, both NFS and CIFS access work correctly.

When a client accesses a volume over NAS, it uses a file handle. In Clustered Data ONTAP, that file handle contains the master data set ID (MSID). Each volume normally has a unique MSID, but the volumes in an LS mirror set share the MSID of their source volume.

Cluster::vserver nfs> vol show -vserver vs_nfs -fields msid                    

(volume show)

vserver volume    msid      

------- --------- ----------

vs_nfs  automount 2147484731

vs_nfs  nfs1      2147484763

vs_nfs  nfs2      2147484764

vs_nfs  nfs3      2147484765

vs_nfs  nfs4      2147484766

vs_nfs  vol_nfs   2147484685

vs_nfs  vs_nfs_root 2147484684

vs_nfs  vs_nfs_root_ls 2147484684

When a client accesses a volume that is part of an LS mirror set, it presents the MSID. The request enters the cluster, and the volume location database (VLDB) looks up that MSID. Because multiple volumes share the same MSID, the MSID must be resolved to a data set ID (DSID) through VLDB calls. DSIDs are unique to each volume in the cluster, regardless of LS mirrors:

cm2244a-cn::vserver nfs> vol show -vserver vs_nfs -fields dsid

  (volume show)

vserver volume    dsid

------- --------- ----

vs_nfs  automount 1082

vs_nfs  nfs1      1111

vs_nfs  nfs2      1112

vs_nfs  nfs3      1113

vs_nfs  nfs4      1114

vs_nfs  vol_nfs   1037

vs_nfs  vs_nfs_root 1036

vs_nfs  vs_nfs_root_ls 1108

Note: All volumes are mounted as /volname. Thus, data access results in:
Data Requests --> / --> volname

To the cluster, a volume is a folder. When you create a volume and mount it under /, it appears as a folder to both the cluster and its clients.

When a read or write request arrives through that path at a node's N-blade, the N-blade first determines whether the target volume has any LS mirrors. If it has none, the request is routed to the read/write volume. If LS mirrors exist, preference is given to an LS mirror on the same node as the N-blade that fielded the request; if that node has no LS mirror, an up-to-date LS mirror on another node is chosen. This is why newly created volumes are invisible: until the LS mirror set is updated, all requests go to an LS mirror destination volume, which is read-only and does not yet contain the new junction.
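
To see which nodes hold copies of the root volume, and therefore which copy the N-blade on a given node will prefer, the node field can be queried (a sketch following this article's volume names):

Cluster::> volume show -vserver vs_nfs -volume vs_nfs_root* -fields node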

For example:

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/ /mnt/vs_root

-bash-3.00# cd /mnt/vs_root

-bash-3.00# ls

automount  test.txt   vol_nfs     

After updating the LS mirror:

Cluster::> snapmirror update-ls-set -S //vs_nfs/vs_nfs_root

In Clustered Data ONTAP 8.3, the same operation uses the -source-path parameter:

Cluster::> snapmirror update-ls-set -source-path cm2244a-cn://vs_cifs/vs2_root

It is now visible and accessible:

-bash-3.00# ls

automount  nfs1 test.txt vol_nfs

Alternatively, when access is attempted through the .admin path, the VLDB sends the request to the source volume's DSID, which allows write access.

For NFS, specify the .admin path:

-bash-3.00# mount -F nfs -o vers=3 10.128.141.122:/.admin /mnt/vs_root

-bash-3.00# cd /mnt/vs_root

-bash-3.00# ls

automount  nfs1  test.txt   vol_nfs

For CIFS, the difference is not in how a share is accessed but in which share is accessed. If a share is created on the .admin path, clients using that share always have read/write access:

Cluster::vserver nfs> vserver cifs share create -vserver vs_cifs -share-name vs2_root_rw -path /.admin
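
From a Windows client, that share is then mapped like any other. This sketch assumes vs_cifs is also reachable as the CIFS server name:

C:\> net use Z: \\vs_cifs\vs2_root_rw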

If another new volume, cifs4, is created and a CIFS share is created on it without a manual LS mirror update:

cm2244a-cn::vserver nfs> vol show -vserver vs_cifs -fields junction-path

  (volume show)

vserver volume junction-path

vs_cifs cifs4  /cifs4

It still cannot be accessed.

[Screenshot: the new share on cifs4 cannot be accessed from the client]

However, it is visible and accessible from the .admin path.

[Screenshot: cifs4 is visible and accessible through the .admin share]

To prevent confusion, a notice is displayed when a volume is created, reminding users that the volume will not be visible until the LS mirrors of the root volume are updated.

For example:

Cluster::> vol create -vserver vs_nfs -volume nfs2 -aggregate aggr0 -state online -junction-path /nfs2

  (volume create)

Warning: You are about to create a volume on a root aggregate.  This may cause severe performance or stability problems

         and therefore is not recommended. Do you want to proceed? {y|n}: y

[Job 3074] Job succeeded: Successful                                                                                     

Notice: Volume nfs2 now has a mount point from volume vs_nfs_root.  The load sharing (LS) mirrors of volume vs_nfs_root

        are scheduled to be updated at 9/28/2013 07:05:00.  Volume nfs2 will not be visible in the global namespace until

        the LS mirrors of volume vs_nfs_root have been updated.

Note: This notice is not displayed if the new volume is created from an older version of System Manager.
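
If waiting for the next scheduled update is impractical, the LS relationships can be put on a more frequent schedule. A sketch, assuming the built-in 5min cron schedule exists on the cluster (every relationship in an LS mirror set should use the same schedule):

Cluster::> snapmirror modify -source-path //vs_nfs/vs_nfs_root -destination-path //vs_nfs/vs_nfs_root_ls -schedule 5min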

Additional Information

N/A