PeerGFS File Management and Orchestration across Edge, Data Center and Cloud Storage

Enterprise Vision Technologies (EVT) is a Peer Software reseller implementing PeerGFS solutions at several enterprise customers.

High Level Overview

PeerGFS by Peer Software has a unique unstructured data solution for on-prem, multi-site, multi-vendor (hint: not just NetApp), cloud, multi-cloud and hybrid environments. At a high level, the solution enables “Active-Active” multi-site anywhere with global file locking and real-time replication without file system scans. Additionally, highly valuable byproducts are continuous data protection (CDP) and high availability (HA). The solution has no friction since a PeerGFS deployment is non-disruptive and integrates on top of existing storage.

The Why?

  • Data is local to users, applications and compute resources wherever they reside or move
  • Flexible granularity at the share/folder/export level
  • Real-time file system updates with no file system scans after initial sync
  • Active-Active synchronization for near-zero RPO/RTO
  • Cross-platform integration on-prem, cloud and hybrid with no vendor lock-in
    • NetApp ONTAP (FPOLICY)
    • Amazon FSxN (FPOLICY)
    • NetApp Cloud Volumes ONTAP (CVO) (FPOLICY) on all 3 major cloud providers
    • Nutanix Files
    • Dell EMC Isilon, VNX, Unity (CEE)
    • Windows native storage
  • Global distributed file locking and version integrity across sites
  • Efficient delta replication of changed file blocks, not entire files
  • DFS namespace management
  • Object Connector
  • Malicious Event Detection (MED)

How?

The Peer agent runs directly on a Windows File Server or Windows VM fronting a NAS share with native API integration for file events with no file system scans.
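
Peer's own management console handles the FPolicy configuration; as a hedged sketch (the SVM name svm1 is a placeholder), generic ONTAP commands like the following can confirm that an external FPolicy engine and policy are enabled on the monitored SVM:

vserver fpolicy show

vserver fpolicy policy show -vserver svm1

vserver fpolicy policy external-engine show -vserver svm1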

Overlap with NetApp native solutions

There is some overlap with NetApp SnapMirror, NetApp FlexCache and Global File Cache (GFC) that will be compared below. All are great solutions and can co-exist and complement each other as shown in the first case study below.

Case study #1 – customer with engineering data

A customer with engineering data has on-premises ONTAP AFF and FAS clusters that are migrating to Amazon FSxN. The migration is staged so that users can cut over individually rather than all at the same time. The workflow design below leverages SnapMirror, FlexCache and PeerGFS at different points in the workflow to use the best tool for the job at the right time.

  • For deduplication, compression and compaction efficiencies, SnapMirror replication was used to seed the FSxN file system from on-premises ONTAP. The advantage is that storage efficiencies are preserved (files are not rehydrated), and SnapMirror also supports native network compression (a command-level sketch of this seed-and-break step follows this list).
  • After initialization, the SnapMirror relationships were broken and PeerGFS was integrated with FPOLICY real-time file events to keep all sites in sync with file locking. The key value is that users can move to the cloud one at a time with the Active-Active solution. Another advantage of PeerGFS is the ability for users to fail back or fail forward anytime.
    • SnapMirror is a great solution and was used to seed FSxN, but does not provide Active-Active read/write at all sites.
    • FlexCache is also a great solution, but scales out a file system from an origin to caches. Since the end-state is all data in Amazon FSxN, this was not a fit, but see below where FlexCache will be brought back in the solution.
  • There is a requirement for on-premises access to the data after all users are migrated to Amazon FSxN. When on-premises AFF and FAS are decommissioned, there will be latency from on-prem users to cloud storage.
    • FlexCache is now a great fit to cache on-premises from the origin volumes on Amazon FSxN.
    • The on-premises cache can be deployed on existing AFF or FAS equipment or virtually on ONTAP Select.
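
The exact replication layout is customer specific; as a hedged sketch of the seed-and-break step (SVM names, volume names and the policy are illustrative, and the DP destination volume is assumed to already exist on FSxN):

snapmirror create -source-path onprem_svm:eng_vol -destination-path fsx_svm:eng_vol -type XDP -policy MirrorAllSnapshots

snapmirror initialize -destination-path fsx_svm:eng_vol

snapmirror break -destination-path fsx_svm:eng_vol     # run at cutover, before PeerGFS takes over real-time sync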

Note: PeerGFS could also be used on-premises on AFF, FAS, ONTAP Select or even a Windows file server. This would be a full copy per share/folder/export instead of a sparse cache. However, instead of a renewing license, the customer is using a 6-month PeerGFS license for the individual user migration. Long-term, PeerGFS would also be a good fit to work on NetApp and multi-vendor solutions.

Case study #2 – customer with telemetry data and traveling users

A customer with telemetry data and users traveling between sites required Active-Active read and write file access. With the central site in an earthquake zone, another site in a fire zone and a third site in a hurricane zone, having full replication of the Active-Active data set was another decision point for integrating PeerGFS. Additionally, the central site runs ONTAP 9, while the two remote sites were running ONTAP 7-Mode. The remote sites have since been upgraded to ONTAP 9, but PeerGFS supported the mixed versions throughout the transition.

In the current end-state, a user who travels to any site has local access to their files, access to the files if the WAN is down, and also has disaster recovery of their files at two other sites. All three sites have the same data, global file locking and replication.

Collateral and References

  1. Peer Software https://www.peersoftware.com
  2. High level two page PeerGFS datasheet PeerGFS Data Sheet
  3. Product Page for PeerGFS PeerGFS Product Page – Peer Software
  4. Full PDF version of PeerGFS Help Manual Peer Global File Service Help (peersoftware.com)
  5. Playlist for PeerGFS “How To” Videos (Youtube Channel) PeerGFS How-To Videos [EN] – YouTube
  6. NetApp ONTAP 9.8 – FlexCache SMB Overview https://storageexorcist.wordpress.com/2020/11/11/netapp-ontap-9-8-flexcache-smb-overview/
  7. NetApp ONTAP 9.8 – FlexCache Hands-on Setup (NFS and SMB) https://storageexorcist.wordpress.com/2020/11/11/netapp-ontap-9-8-flexcache-hands-on-setup-nfs-and-smb/

ONTAP Junction Path Band-Aids to the FlexGroup Rescue!

A common customer challenge is running out of space in a data volume. Sometimes you need to order additional on-prem or cloud storage. You then grow the volume, delete Snapshot copies and/or delete or tier cold data to gain usable space. Often the space remediation is automated with "Volume Autosize" and "Snapshot Autodelete". However, what do you do when the volume is at maximum capacity? For example, a NetApp FlexVol has a maximum size of 100TB and there is no way to grow that volume beyond that size.
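
As a hedged sketch of that automation (the SVM and volume names svm1 and vol1 are placeholders), autosize and autodelete are typically configured like this:

volume autosize -vserver svm1 -volume vol1 -mode grow -maximum-size 100TB -grow-threshold-percent 85

volume snapshot autodelete modify -vserver svm1 -volume vol1 -enabled true -trigger volume -target-free-space 20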

A preferred solution is to migrate or convert to FlexGroups, which support 20PB+ of space under a single mount point. Another option is to junction path, or stitch together, multiple FlexVols in the namespace. The junction path method takes more work to maintain and is less flexible than a FlexGroup, but it has a good use case as a "band-aid" to alleviate space constraints while migrating to FlexGroups. The example below shows how to use junction paths to free up space while you implement FlexGroups. You can also junction FlexGroups, but most often a FlexGroup is mounted under "/" for one large NAS bucket.

The easy button is the ONTAP feature to convert FlexVols in-place to FlexGroups non-disruptively. However, when a FlexVol is near full at 100TB, that volume is not a good candidate for conversion and should instead be migrated using a host-based method such as XCP, robocopy, rsync or your other favorite migration tool. To explain: with a 100TB FlexVol, you would need to add additional 100TB member volumes after conversion in order to grow the FlexGroup, and the balance would be uneven, leaving the first constituent near full. A better use case for conversion is a 10TB FlexVol at 60% utilization: you can convert it in-place, then add multiple 10TB member volumes after conversion.
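
For the better conversion candidate described above, a hedged sketch of the in-place conversion and follow-on growth (a 10TB FlexVol named vol1 on SVM svm1; all names are placeholders):

volume conversion start -vserver svm1 -volume vol1 -check-only true     # pre-check only

volume conversion start -vserver svm1 -volume vol1

volume expand -vserver svm1 -volume vol1 -aggr-list aggr1,aggr2 -aggr-list-multiplier 1     # add member volumes after conversion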

When implementing the junction-path "band-aid", users do not see any change in the path to their files, and the NFS export and SMB share paths do not change. There is a short downtime window, but only for the specific user being migrated in the example below. This makes host-based migration from FlexVols to FlexGroups easier, since you can continually move data into junction-pathed volumes to alleviate space, without any change for users or migration hosts.

Example

You have a 100TB Flexible volume named “Users” with multiple users. The volume is mounted to the junction path /Users and users have directories in the volume.

Now the challenge and remediation scenario... You are at 95% capacity on this 100TB FlexVol and have an XCP job running to migrate /Users to a FlexGroup named /Users_new, but meanwhile you need to alleviate space in /Users. You see that user5 is taking 20TB of space, so you can migrate user5 to a new volume and then junction that volume under the same /Users/user5 path.

Create a new volume "user5" junction pathed to /Users/user5_new and the new path will show up under the CIFS share. Note that you will likely want to use the same Snapshot policy as the Users volume to keep the same retention. Also note that the arrow on the folder indicates a linked path, but users and processes see no difference from a regular directory.

volume create -vserver Users -volume user5 -aggregate aggr1 -size 25T -space-guarantee none -junction-path /Users/user5_new

Copy the user data from /Users/user5 to /Users/user5_new, then cut over. In this example, we will rename the old user5 directory to user5_old, but you would need to delete user5_old to reclaim space, and note that you also need to wait for Snapshot copies to rotate out to free the deleted space. The user now has a new volume with room for expansion, and other users still have some free space available in /Users. You can repeat this process for other users while you migrate to FlexGroups.

Rename user5 to user5_old after copying the user data
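
The copy and rename can be done from any host that mounts the share; a hedged sketch from a Linux NFS client, assuming the namespace is mounted so the volume appears at /Users (robocopy /MIR /COPYALL from a Windows host would be the SMB-side equivalent and preserves NTFS ACLs):

cp -a /Users/user5/. /Users/user5_new/     # or use rsync/XCP for a restartable copy

mv /Users/user5 /Users/user5_old           # short cutover window, for this user only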

Unmount and mount the new “user5” volume so it appears the same as it did when it was a directory in the Users volume.

volume unmount -volume user5 -vserver Users

volume mount -volume user5 -junction-path /Users/user5 -active true -vserver Users

You have applied a band-aid to free space and provide new room for expansion, all without changing the XCP job that is copying /Users out to a new FlexGroup that you will cut over to for easier growth later.

Dude Where’s my Firewall? ONTAP Logical Interface (LIF) Service Policies

When upgrading ONTAP to 9.5 and later, you may have noticed that some firewall policies are gone. Rest assured they are still there, in a better mechanism called LIF service policies. The firewall policies are translated to Logical Interface (LIF) service policies, which are more granular per interface (IP address). Additionally, starting with ONTAP 9.6, LIF roles are deprecated and replaced with LIF service policies for protocols. Service policies automatically replace, and are translated from, the formerly used firewall rules and LIF roles.

The examples below will set access to allow only the 192.168.150.0/24 network using service policies. You can set policies granularly per LIF, but in the examples below we will apply the same policy. The service policies will be set on the management interfaces of an ONTAP 9.8 cluster named cmode-prod, and we will also set service policies on NAS and iSCSI SAN data LIFs on Storage Virtual Machines (SVMs) named source_ntfs and san1. The NAS and SAN examples will also show how service policies interact with data protocols, which can have overlapping effects. Always check the NetApp Docs site at https://www.netapp.com/support-and-training/documentation/ for additional information on command syntax.

Additional Information

  • Service policies on LIFs were introduced in ONTAP 9.5 with some firewall protocols, but not SSH and HTTPS
  • Service policies migrated additional firewall rules in ONTAP 9.6, including SSH and HTTPS
  • In ONTAP 9.5 and lower, SSH and HTTPS are shown and set in the firewall
    • system services firewall policy
  • In ONTAP 9.6 and higher, SSH and HTTPS are shown and set with the commands below, and the service policy is then applied to individual LIFs.
    • network interface service show
    • network interface service-policy show
  • The Service Processor (SP/BMC) allow firewall is handled with a separate mechanism and the method is also shown below

Cluster Management Interface Service Policies

Show the Firewall Policies and LIF Service Policies

system services firewall show

The firewall is enabled by default with no logging

system services firewall policy show

Note the remaining protocols in the system firewall: dns, http, ndmp, ndmps, ntp and snmp

network interface service show

network interface service-policy show -vserver cmode-prod

Note the default policies and the mapping to the service

Create a clone of “default-management” and modify the clone leaving the default service-policy unchanged. We will then add the management services only allowing the 192.168.150.0/24 subnet. By cloning, we copy over the five management services that we can then modify.

network interface show -vserver cmode-prod -fields service-policy,services

network interface service-policy clone -vserver cmode-prod -policy default-management -target-vserver cmode-prod -target-policy secure-management

network interface service-policy modify-service -vserver cmode-prod -policy secure-management -service management-core -allowed-addresses 192.168.150.0/24

network interface service-policy modify-service -vserver cmode-prod -policy secure-management -service management-autosupport -allowed-addresses 192.168.150.0/24

network interface service-policy modify-service -vserver cmode-prod -policy secure-management -service management-ssh -allowed-addresses 192.168.150.0/24

network interface service-policy modify-service -vserver cmode-prod -policy secure-management -service management-https -allowed-addresses 192.168.150.0/24

network interface service-policy show -vserver cmode-prod

The management-ems service was left at the 0.0.0.0/0 default

Assign the policy to the LIFs (key step) for the cluster and node management LIFs. Note that you could have a different service policy per LIF, or even additional cluster and node management LIFs on different networks. You can also apply a different service policy to the intercluster (SnapMirror/FabricPool/FlexCache) LIFs, but we will leave those at the system default, open to all networks (see the sketch at the end of this section for a way to restrict them later).

network interface modify -vserver cmode-prod -lif cluster_mgmt -service-policy secure-management

network interface modify -vserver cmode-prod -lif cmode-prod-01_mgmt1 -service-policy secure-management

network interface modify -vserver cmode-prod -lif cmode-prod-02_mgmt1 -service-policy secure-management

network interface show -vserver cmode-prod -fields service-policy,services

Cluster and node management LIFs now are assigned the secure LIF Service Policy
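
If you later decide to lock down the intercluster LIFs as well (left at the default above), a hedged sketch using the same clone-and-modify pattern (the policy name and LIF name are placeholders):

network interface service-policy clone -vserver cmode-prod -policy default-intercluster -target-vserver cmode-prod -target-policy secure-intercluster

network interface service-policy modify-service -vserver cmode-prod -policy secure-intercluster -service intercluster-core -allowed-addresses 192.168.150.0/24

network interface modify -vserver cmode-prod -lif <intercluster_lif> -service-policy secure-intercluster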

Service Processor (SP/BMC) Firewall Allow Addresses to Enforce the Same Rules as the Management LIFs

service-processor ssh show

service-processor ssh add-allowed-addresses 192.168.150.0/24

service-processor ssh show

NAS Interface Service Policies

  • If you need to secure the SMB protocol to a specific network, service policies add this capability, which was not previously available for the protocol.
  • NFS export policy rules allow specific networks and hosts separately from LIF service policies.
  • If you want to ensure that a data LIF only serves data locally on a subnet, you can use this method to ensure there is no routing to an SMB share or NFS export.
  • Note that NFS exports may need additional troubleshooting. For example, an NFS export policy rule may allow a subnet that is not allowed in the service policy (see the sketch after this list).
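
As a hedged illustration (the export policy name "data" is an assumption and may differ in your environment), you can check and align the export policy rules with the subnet allowed by the service policy:

vserver export-policy rule show -vserver source_ntfs -policyname data -fields clientmatch,rorule,rwrule

vserver export-policy rule modify -vserver source_ntfs -policyname data -ruleindex 1 -clientmatch 192.168.150.0/24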

network interface service-policy show -vserver source_ntfs

Note the data SVM Service Policies and Services

network interface show -vserver source_ntfs -fields service-policy,services

Create a new data service policy allowing only the 192.168.150.0/24 subnet

network interface service-policy create -policy source_ntfs-secure-data-files -allowed-addresses 192.168.150.0/24 -vserver source_ntfs -services data-cifs,data-core,data-flexcache,data-nfs,data-fpolicy-client

network interface service-policy show -vserver source_ntfs

Apply the service policies to the data LIFs

network interface modify -vserver source_ntfs -lif lif* -service-policy source_ntfs-secure-data-files

network interface show -vserver source_ntfs -fields service-policy,services

iSCSI SAN Interface Service Policies

  • Note that adding a service policy for an iSCSI data LIF affects access with other existing methods.
  • There are five methods in ONTAP that can restrict access to iSCSI LUNs. LUN access issues could be from one to all five of these mechanisms which all can interact together.
    • This is outside of the scope of this blog, but the five methods are listed below. Please comment if there are other methods in ONTAP you have found.

1. LUN mapping to igroups (lun masking)
The Initiator groups will mask the LUN to allowed hosts (iqns)

lun mapping show

2. Selective LUN Mapping (reporting-nodes to ha-pairs)
Enabled by default for ha-pairs (LUNs are available on 2-nodes only in the cluster)

lun mapping show -vserver san1 -fields reporting-nodes

3. Igroup binding to Portsets (port masking)
igroups bound to portsets will limit LIFs allowed to export a LUN
This can work with SLM where specific ports on an ha-pair are used

lun portset show

4. LIF Service Policies (firewall) at the network interface will limit hosts or subnets
LIF Service Policies below

network interface service-policy show

5. iSCSI Access Lists (SendTargets filter)
The iSCSI host SendTargets command can be filtered to a subset of LIFs

iscsi interface accesslist show

Show the Service Policies for the SAN SVM

network interface service-policy show -vserver san1

network interface show -vserver san1 -fields service-policy,services

Note the data SVM Service Policies and Services

Create new service policies allowing only the 192.168.150.0/24 subnet for the SVM management and data LIFs

network interface service-policy create -policy san1-secure-management -allowed-addresses 192.168.150.0/24 -vserver san1 -services data-core,management-ssh,management-https

network interface service-policy create -policy san1-secure-data-blocks -allowed-addresses 192.168.150.0/24 -vserver san1 -services data-core,data-iscsi

network interface service-policy show -vserver san1

Apply the service policies to the management and data LIFs

network interface modify -vserver san1 -lif san1_mgmt -service-policy san1-secure-management

network interface modify -vserver san1 -lif san1_lif* -service-policy san1-secure-data-blocks

network interface show -vserver san1 -fields service-policy,services

NetApp ONTAP – RBAC User Role Sub-command / Query

ONTAP has rich Role-based access control capabilities. One of these extended capabilities is the ability to specify sub-commands and a query within the sub-command allowed for the user. Both the sub-command and query are independent methods shown together below. The example below is on my 2-node cluster named cmode-prod. The user “admin3” is created with a locked down sub-command and a locked down query allowing only the node root (mroot) aggregate on node2. The node root aggregate contains a system volume named vol0. You could set a query for vol0, but in this example we will set the query using the containing aggregate of vol0 with the same result.

  • We will specify a cmddirname "volume show" to enable ONLY the "show" sub-command
  • We will specify a query "-aggregate aggrname" to only allow the query of a specific aggregate, the node mroot aggregate on node2 named "cmode_prod_02_aggr0"

As admin, run “volume show” to see all volumes. Note that the cluster has no data aggregates so only the node vol0 volumes display on each node

ssh admin@cmode-prod

::> volume show

Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cmode-prod-01 vol0     cmode_prod_01_aggr0 online RW 2.50GB   1.48GB   40%
cmode-prod-02 vol0     cmode_prod_02_aggr0 online RW 2.50GB   1.47GB   41%

2 entries were displayed.

Create a new role to allow a sub-command “volume show” and -query on cmode-prod

Create an access-control role named “admin3” for the admin (cluster management) Vserver. The role has all access to the “volume show” command but only within the “aggr0” aggregate on node2.

::> security login role create -role admin3 -cmddirname "volume show" -query "-aggregate cmode_prod_02_aggr0" -access all -vserver cmode-prod

Create a user “admin3” using the locked down “admin3” role

::> security login create -vserver cmode-prod -username admin3 -role admin3 -application ssh -authmethod password

Login as admin3 and run “volume show” and you will ONLY see the one volume in the aggregate allowed in the query

ssh admin3@cmode-prod

::> volume show

Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cmode-prod-02
          vol0         cmode_prod_02_aggr0
                                    online     RW       2.50GB     1.55GB   37%

NetApp ONTAP 9.8 – FlexCache Hands-on Setup (NFS and SMB)

To follow up on my prior blog covering FlexCache features, best practices and use cases, below is a hands-on example in my VSIM lab. The lab creates an origin volume and two caches, one local and one remote. It also shows how to set up a cache from a mirrored read-only volume, along with cache management and reporting examples.

There are two clusters, "cmode-prod" and "cmode-single".

There are two data SVMs:

"source_unix" on cmode-prod, which serves both NFS and CIFS on LIF IPs 192.168.150.110 and .111

"dest_async" on cmode-single, which serves both NFS and CIFS on LIF IP 192.168.150.201

1     FlexCache Origin Configuration (NFS and SMB)

1.1      FlexCache Origin Volume Create

  • FlexCache is a “set and forget” feature
  • Origin can be a FlexVol or FlexGroup
  • We will use the existing "source_unix" SVM for the origin
  • We will create two FlexCache Volumes in the next two sections (one local, one remote)
  • NFS export policies are already created, so we only create a CIFS share for multi-protocol access

cmode-prod

Set nfs 64-bit identifiers for FlexGroups

vserver nfs modify -vserver source_unix -v3-64bit-identifiers enabled   # “y” twice

Create origin (source) volume (we will create a FlexGroup for this origin)

volume create  -vserver source_unix -volume origin -size 50t -space-guarantee none -security-style unix -junction-path /origin -policy data -aggr-list cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD -aggr-list-multiplier 2   # “y” confirm

volume show -vserver source_unix -is-constituent true

To avoid invalidations on files that are cached when there is only a read at the origin, turn off last accessed time updates on the origin volume (when you create the cache in the next section, this is automatic for the cache)

volume modify -vserver source_unix -volume origin -atime-update false

Create a CIFS share

cifs share create -vserver source_unix -share-name origin -path /origin

Enable Block Level Invalidate (disabled by default) on the Origin

flexcache origin config show

flexcache origin config modify -origin-volume origin -is-bli-enabled true

flexcache origin config show

1.2      FlexCache Origin Volume Mount NFS

NFS Client

Mount the source_unix volume

mount 192.168.150.110:/origin origin

LS the mount 

ls -l origin/

Make Directories

mkdir origin/dir1

mkdir origin/dir2

mkdir origin/dir3

Create files

touch origin/file1.txt

echo "new file" >> origin/file1.txt

cat origin/file1.txt

touch origin/dir1/dir1file.txt

touch origin/dir2/dir2file.txt

touch origin/dir3/dir3file.txt

ls -l origin/dir1

ls -l origin/dir2

ls -l origin/dir3

1.3      FlexCache Origin Volume SMB Share

  • The sourceunix (cmode-prod source) and destasync (cmode-single mirror) CIFS servers were already created and joined to the lab2.local domain
    • cifs server create -vserver vserver -cifs-server netbiosname -domain lab2.local

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

net use o: \\sourceunix\origin

PS C:\Users\Administrator> net use o: \\sourceunix\origin

The command completed successfully.

dir o:

PS C:\Users\Administrator> dir o:

    Directory: o:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        11/8/2020   9:53 AM                dir1
d-----        11/8/2020   9:53 AM                dir2
d-----        11/8/2020   9:53 AM                dir3
-a----        11/8/2020  10:32 AM             15 file1.txt

Create a new file

New-Item -ItemType file o:file1SMB.txt

PS C:\Users\Administrator> New-Item -ItemType file o:file1SMB.txt

    Directory: o:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        11/8/2020  10:36 AM              0 file1SMB.txt

dir o:

PS C:\Users\Administrator> dir o:

    Directory: o:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        11/8/2020   9:53 AM                dir1
d-----        11/8/2020   9:53 AM                dir2
d-----        11/8/2020   9:53 AM                dir3
-a----        11/8/2020  10:32 AM             15 file1.txt
-a----        11/8/2020  10:36 AM              0 file1SMB.txt

cmode-prod

vserver cifs session show

vserver locks show

2     FlexCache Cache Configuration (same Cluster, same SVM)

2.1      FlexCache Volume Create (same Cluster, same SVM)

  • No cluster or SVM peering is needed since intra-cluster and intra-SVM
    • A different SVM in the same cluster would require an SVM peer
  • You can create the FlexCache with the -aggr-list option so it creates the prescribed number of constituents (see the sketch after this list)
    • List aggregates for constituents (-aggr-list)
  • You can also let the FlexGroup autoselect aggregates (-auto-provision-as)
  • The -aggr-list-multiplier option determines how many constituent volumes are used per aggregate listed in the -aggr-list option
  • The recommended number of constituents is proportional to the size of the FlexCache volume
    • Less than 100GB = 1 member volume
    • 100GB to 1TB = 2 member volumes
    • 1TB to 10TB = 4 member volumes
    • 10TB to 20TB = 8 member volumes
    • More than 20TB = the default number of member volumes (use -auto-provision-as flexgroup)
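
As an alternative to -auto-provision-as (not run in this lab), a hedged sketch of the -aggr-list form, which here would place two constituents on each of two aggregates for four total (the volume name is illustrative):

flexcache create -vserver source_unix -volume cache_example -origin-vserver source_unix -origin-volume origin -size 5TB -aggr-list cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD -aggr-list-multiplier 2 -junction-path /cache_example -space-guarantee none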

cmode-prod

Create the FlexCache Volume

flexcache create -vserver source_unix -volume cache1 -auto-provision-as flexgroup -origin-volume origin -size 5TB -origin-vserver source_unix -junction-path /cache1 -space-guarantee none

Modify the export policy (this can also be set on create with -policy above)

volume modify -vserver source_unix -volume cache1 -policy data

Create CIFS share

cifs share create -vserver source_unix -share-name cache1 -path /cache1

Show the Cache

volume flexcache show

volume flexcache origin show-caches

Show the cache connection (connected, disconnected or unknown)

volume flexcache connection-status show     # confirm connected

2.2      FlexCache Volume NFS Mount (same Cluster, same SVM)

NFS Client

Mount the source_unix cache volume

mount 192.168.150.111:/cache1 cache1

LS the cache

ls -l cache1/                 # file1.txt is there

Append the file1.txt file to the cache that will write to the origin

echo "new line cache1" >> cache1/file1.txt

Create a new file on the cache that will write to both cache and origin

touch cache1/file2.txt

cat the updated file from both locations

cat cache1/file1.txt

cat origin/file1.txt

LS both origin and cache1 which match

ls -l origin/

ls -l cache1/

2.3      FlexCache Volume SMB Share (same Cluster, same SVM)

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

net use p: \\sourceunix\cache1

PS C:\Users\Administrator> net use p: \\sourceunix\cache1

The command completed successfully.

dir p:

PS C:\Users\Administrator> dir p:

    Directory: p:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        11/8/2020   9:53 AM                dir1
d-----        11/8/2020   9:53 AM                dir2
d-----        11/8/2020   9:53 AM                dir3
-a----        11/8/2020  10:40 AM             31 file1.txt
-a----        11/8/2020  10:36 AM              0 file1SMB.txt
-a----        11/8/2020  10:57 AM              0 file2.txt

Show the file contents of the updated file earlier in Linux origin and cache

gc o:\file1.txt

gc p:\file1.txt

Create a new file in the cache which also writes to origin

New-Item -ItemType file p:file2SMB.txt

PS C:\Users\Administrator> New-Item -ItemType file p:file2SMB.txt

    Directory: p:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        11/8/2020  11:07 AM              0 file2SMB.txt

Show the files to see they are in both cache and origin

dir p:

dir o:

cmode-prod

vserver cifs session show

vserver locks show

2.4      FlexCache Volume SMB File Locking

Run Open Office or Excel or similar to create a file that will be locked

Create and save a spreadsheet in \\sourceunix\cache1

  • Untitled1.ods is created below

Open the file you created on the origin \\sourceunix\origin

  • You will see a message that the file is open for editing on the cache1 share, showing that file locking is enforced

3     FlexCache Cache Configuration (different Cluster, different SVM)

3.1      FlexCache Volume Create (different Cluster, different SVM)

  • Both cluster peering and SVM peering are required and these are already setup between the clusters and SVMs
  • We will use the cmode-single cluster SVM named dest_async which is already peered with the source  cluster cmode-prod SVM named source_unix
  • We would setup Intercluster LIFs, cluster peering and SVM (vserver) peering if not setup already
  • We must modify the existing vserver peer for SnapMirror to allow for FlexCache using the “flexcache” type
  • We will use the “default” export policy which is already open for mount

cmode-single

Confirm Peering

cluster peer show                                            # cmode-prod is peered to cmode-single

vserver peer show                                           # dest_async is peered to source_unix

vserver peer show -vserver dest_async          # FlexCache is not peered

Add the FlexCache to the SVM peer

vserver peer modify -vserver dest_async -peer-vserver source_unix -applications snapmirror,flexcache

vserver peer show -vserver dest_async

Create the FlexCache Volume

flexcache create -vserver dest_async -volume cache2 -auto-provision-as flexgroup -origin-volume origin -size 5TB -origin-vserver source_unix -junction-path /cache2 -space-guarantee none

Create CIFS share

cifs share create -vserver dest_async -share-name cache2 -path /cache2

Show the Cache

volume flexcache show

volume flexcache origin show-caches

Show the cache connection (connected, disconnected or unknown)

volume flexcache connection-status show     # confirm connected

cmode-prod

Show the caches from the origin

volume flexcache origin show-caches

Show the cache connection (connected, disconnected or unknown)

volume flexcache connection-status show     # confirm connected

3.2      FlexCache Volume NFS Mount (different Cluster, different SVM)

NFS Client

Mount the source_unix cache volume

mount 192.168.150.201:/cache2 cache2

LS the cache

ls -l cache2/                 # all origin, cache1 and cache2 files are the same

Append the file

echo "new line cache2" >> cache2/file1.txt

Create a new file on the caches that will write to both cache and origin

touch cache2/file3.txt

Cat the updated file from all 3 locations

cat cache2/file1.txt

cat cache1/file1.txt

cat origin/file1.txt

LS the origin and both caches which are the same in all 3 locations

ls -l origin/

ls -l cache1/

ls -l cache2/

3.3      FlexCache Volume SMB Share (different Cluster, different SVM)

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

Map the drive with the IP

net use q: \\192.168.150.201\cache2

PS C:\Users\Administrator> net use q: \\192.168.150.201\cache2

The command completed successfully.

dir q:

    Directory: q:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        11/8/2020   9:53 AM                dir1
d-----        11/8/2020   9:53 AM                dir2
d-----        11/8/2020   9:53 AM                dir3
-a----        11/8/2020  11:10 AM             47 file1.txt
-a----        11/8/2020  10:36 AM              0 file1SMB.txt
-a----        11/8/2020  10:57 AM              0 file2.txt
-a----        11/8/2020  11:07 AM              0 file2SMB.txt
-a----        11/8/2020  11:11 AM              0 file3.txt

Show the file contents of the updated file earlier in Linux origin and cache

gc o:\file1.txt

gc p:\file1.txt

gc q:\file1.txt

Create a new file in the cache which also writes to origin

New-Item -ItemType file q:file3SMB.txt

PS C:\Users\Administrator> New-Item -ItemType file q:file3SMB.txt

    Directory: q:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        11/8/2020  11:19 AM              0 file3SMB.txt

Show the files to see they are in both cache and origin

dir q:

dir p:

dir o:

cmode-single

vserver cifs session show

vserver locks show

4     Client Cleanup

4.1      Linux NFS Client

NFS Client

Unmount the origin and cache volumes

umount origin

umount cache1

umount cache2

4.2      Windows Client

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

net use O: /DELETE

net use P: /DELETE

net use Q: /DELETE

5     FlexCache from a SnapMirror Secondary Origin (9.8+)

5.1      FlexCache Volume Create (same dest Cluster, same SVM)

cmode-single

Show an existing mirrored volume in the dest_async SVM

snapmirror show -vserver dest_async

                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
source_unix:home XDP dest_async:home_dr_async Snapmirrored Idle - true -

Create the FlexCache Volume

flexcache create -vserver dest_async -volume cachemirror -auto-provision-as flexgroup -origin-volume home_dr_async -size 1TB -origin-vserver dest_async -junction-path /cachemirror -space-guarantee none

Create CIFS share to the origin and cache

cifs share create -vserver dest_async -share-name originmirror -path /home_dr_async

cifs share create -vserver dest_async -share-name cachemirror -path /cachemirror

Show the Cache

volume flexcache show

volume flexcache origin show-caches

Show the cache connection (connected, disconnected or unknown)

volume flexcache connection-status show     # confirm connected

5.2      FlexCache Volume NFS Mount (Origin Mirror)

NFS Client

Mount the origin volume

mount 192.168.150.201:/home_dr_async origin

LS the cache

ls -l origin/                   # 6 files from prior labs

Append a file – this FAILS since it is a readonly mirror

echo "new line origin" >> origin/file1.txt         FAILS

Create a new file  – this FAILS since it is a readonly mirror

touch origin/file7.txt                                        FAILS

5.3      FlexCache Volume NFS Mount (Cache)

NFS Client

Mount the cache volume

mount 192.168.150.201:/cachemirror cache1

LS the cache

ls -l cache1/                 # 6 files from prior labs

Append a file – this FAILS since it is a readonly mirror

echo "new line cache1" >> cache1/file1.txt 2>&1     FAILS

Create a new file  – this FAILS since it is a readonly mirror

touch cache1/file7.txt                                      FAILS

Cat and LS

cat cache1/file1.txt

ls -l cache1/

5.4      FlexCache NFS Cleanup

NFS Client

Unmount

umount origin/

umount cache1/

5.5      FlexCache Volume SMB Share (Origin Mirror)

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

Map the drive with the IP

net use o: \\192.168.150.201\originmirror

PS C:\Users\Administrator> net use o: \\192.168.150.201\originmirror

The command completed successfully.

dir o:

    Directory: o:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        11/6/2020   3:25 PM              0 file1.txt
-a----        11/6/2020   3:25 PM              0 file2.txt
-a----        11/6/2020   3:25 PM              0 file3.txt
-a----        11/6/2020   3:26 PM              0 file4.txt
-a----        11/6/2020   3:26 PM              0 file5.txt
-a----        11/6/2020   3:26 PM              0 file6.txt

5.6      FlexCache Volume SMB Share (Cache)

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

Map the drive with the IP

net use p: \\192.168.150.201\cachemirror

PS C:\Users\Administrator> net use p: \\192.168.150.201\cachemirror

The command completed successfully.

dir p:

    Directory: p:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        11/6/2020   3:25 PM              0 file1.txt
-a----        11/6/2020   3:25 PM              0 file2.txt
-a----        11/6/2020   3:25 PM              0 file3.txt
-a----        11/6/2020   3:26 PM              0 file4.txt
-a----        11/6/2020   3:26 PM              0 file5.txt
-a----        11/6/2020   3:26 PM              0 file6.txt

5.7      FlexCache Volume SMB Cleanup

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

net use O: /DELETE

net use P: /DELETE

6     FlexCache Management

6.1      Synchronizing properties of a FlexCache volume from an origin volume

  • Some of the volume properties of the FlexCache volume must always be synchronized with those of the origin volume. If the volume properties of a FlexCache volume fail to synchronize automatically after the properties are modified at the origin volume, you can manually synchronize the properties.
  • The following volume properties of a FlexCache volume must always be synchronized with those of the origin volume
    • Security style (-security-style)
    • Volume name (-volume-name)
    • Maximum directory size (-maxdir-size)
    • Minimum read ahead (-min-readahead)

cmode-prod

volume flexcache sync-properties -vserver source_unix -volume cache1

cmode-single

volume flexcache sync-properties -vserver dest_async -volume cache2

6.2      Updating the configurations of a FlexCache relationship

  • After events such as volume move, aggregate relocation, or storage failover, the volume configuration information on the origin volume and FlexCache volume is updated automatically. In case the automatic updates fail, an EMS message is generated and then you must manually update the configuration for the FlexCache relationship
  • If you want to update the configurations of a FlexCache volume, you must run the command from the origin volume. If you want to update the configurations of an origin volume, you must run the command from the FlexCache volume
  • Syntax
    • volume flexcache config-refresh -peer-vserver peer_svm -peer-volume peer_volume_to_update -peer-endpoint-type [origin | cache]

cmode-prod

volume flexcache config-refresh -peer-vserver dest_async -peer-volume cache2 -peer-endpoint-type cache

6.3      AutoGrow the Cache

  • Autogrow might be a good option to use on the FlexCache to conserve space. You might consider using autogrow when you don’t know what the working set size is or if you must be conservative with space on the FlexCache cluster
  • Set the maximum autogrow size to between 10% and 15% of the origin
  • Autogrow is only triggered at a certain threshold. By default, this threshold is 85%. When a particular constituent reaches 85% full, then it is grown to a specific number calculated by ONTAP. Also, the eviction threshold is 90%. So, if there is an ingest rate (first read from origin, which writes it to cache) of greater than 5%, then grow and evict loops could result in undesirable behavior

cmode-single

volume autosize -vserver dest_async -volume cache2 -maximum-size 10TB -mode grow

vol autosize -vserver dest_async -volume cache2

6.4      File Locking (origin)

  • File locking is done on the origin volume
  • A lock on a cache is passed back to the origin
  • Locks are not stored on the cache

cmode-prod and cmode-single

vserver locks show                 # we should see no locks

vserver cifs session show

6.5      Pre-populate the cache from ONTAP (9.8+)

  • Prepopulate reads files only and crawls through directories 
  • The is-recursion flag applies to the entire list of directories passed to prepopulate 
  • Syntax
    • volume flexcache prepopulate start -cache-vserver vserver_name -cache-volume volume_name -path-list path_list -is-recursion true|false

cmode-single

Prepopulate a FlexCache volume with a single directory path for prepopulation


flexcache prepopulate start -cache-vserver dest_async -cache-volume cache2 -path-list /dir1



Prepopulate a FlexCache volume with a list of several paths for prepopulation

flexcache prepopulate start -cache-vserver dest_async -cache-volume cache2 -path-list /dir1,/dir2,/dir3

Display the number of files read

job show                      # -id job_ID -ins

97     FlexCache prepopulate job for volume "cache2" in Vserver "dest_async". cmode-single cmode-single-01 Success

       Description: FLEXCACHE PREPOPULATE JOB

98     FlexCache prepopulate job for volume "cache2" in Vserver "dest_async". cmode-single - Queued

       Description: FLEXCACHE PREPOPULATE JOB

6.6      Pre-populate the cache from the NFS client

  • Use ONTAP "flexcache prepopulate start" as the best practice on 9.8 or higher
  • Preload the data using the “find” command in a specific directory. You can run this command with either a dot (.) in the <dir> to run it from the current directory, or you can give it a specific directory
  • Usually, the command also warms the directory listings. If it does not, you can also run ls -R <dir> and replace the <dir> with the same information in the find command
  • ONTAP 9.8 adds pre-populate by specifying a directory which needs to be pre-populated at a cache location

Pre-9.8 from a client

NFS client

Mount source_unix cache volume

mount 192.168.150.201:/cache2 cache2

LS the cache

ls -l cache2/                 

NFS client find command on each cache mount

find cache2 -type f -print -exec sh -c "cat {} > /dev/null" \;

Cleanup

umount cache2/

6.7      Pre-populate the cache from the Windows SMB client

  • Use ONTAP "flexcache prepopulate start" as the best practice on 9.8 or higher
  • Powershell

Pre-9.8 from a client

Windows Server

Task bar search "Windows PowerShell", right click and "Run as administrator"

PS C:>  prompt

Map the drive with the IP

net use q: \\192.168.150.201\cache2

PS C:\Users\Administrator> net use q: \\192.168.150.201\cache2

The command completed successfully.

dir q:

    Directory: q:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----        11/8/2020   9:53 AM                dir1
d-----        11/8/2020   9:53 AM                dir2
d-----        11/8/2020   9:53 AM                dir3
-a----        11/8/2020  11:10 AM             47 file1.txt
-a----        11/8/2020  10:36 AM              0 file1SMB.txt
-a----        11/8/2020  10:57 AM              0 file2.txt
-a----        11/8/2020  11:07 AM              0 file2SMB.txt
-a----        11/8/2020  11:11 AM              0 file3.txt
-a----        11/8/2020  11:19 AM              0 file3SMB.txt

powershell command to read all files to the cache for dir1

Measure-Command {Get-ChildItem -Path Q:\dir1 -Recurse -File | ForEach-Object { $name = $_.FullName; Get-Content $name } > $NUL | Out-Default}

Days              : 0

Hours             : 0

Minutes           : 0

Seconds           : 0

Milliseconds      : 47

Ticks             : 473697

TotalDays         : 5.48260416666667E-07

TotalHours        : 1.315825E-05

TotalMinutes      : 0.000789495

TotalSeconds      : 0.0473697

TotalMilliseconds : 47.3697

Cleanup

net use Q: /DELETE

6.8      Disconnected Mode (Read Only) LS disable

  • Origin behavior
    • Reads are supported
    • Write to new files are supported
    • Writes to existing files, if not cached outbound yet, are written
    • Writes to files after a TTL timeout are allowed 9.6+ (Disconnected mode TTL and resync)
  • Cache behavior
    • Reads are supported
    • Reads for data that has not been cached time out
    • Writes to the FlexCache time out
    • An ls command only works if there was an ls or equivalent command performed on that directory before the disconnection.  A directory is just another inode. If it has been cached, it is served. If it has not been cached, it is not served
      • Changing this behavior so that "ls" does not hang when the cache is disconnected is a best practice; we will set it below

cmode-single

Change the FlexGroup behavior to prevent "ls" from hanging in disconnected mode. Set the following bootarg to revert the RAL/FlexGroup behavior to the previous behavior so that the "ls" command does not hang in disconnected mode (requires a reboot).

node run cmode-single-01 "priv set diag; flexgroup set fast-readdir=false persist"

reboot

6.9      Delete a FlexCache Relationship

cmode-single

Offline and FlexCache Delete (not volume delete)

volume offline -vserver dest_async -volume cache2  # confirm "y"

volume flexcache delete -vserver dest_async -volume cache2

cmode-prod

Clean up on the origin volume

  • Run when the cache is orphaned
    • This command only needs to be run if “volume flexcache delete” fails on the FlexCache cluster and prompts you to run this command. The cache configuration will be deleted and cannot be reestablished for the cache relationship between origin of a FlexCache volume “origin” in Vserver “source_unix” and FlexCache volume “cache2” in Vserver “dest_async”.
    • Running this command unless guided by the “volume flexcache delete” command or NetApp Support can lead to unwanted outcomes.
  • When you run the volume flexcache origin cleanup-cache-relationship command, the FlexCache relationship is deleted and cannot be reestablished
  • This command will fail with "Error: command failed: entry doesn't exist" because the cache was deleted above without issue

set diag                       # confirm “y”

volume flexcache origin cleanup-cache-relationship -origin-volume origin -origin-vserver source_unix -cache-vserver dest_async -cache-volume cache2

confirm “y”

Error: command failed: entry doesn’t exist  (this was expected since already deleted)

set admin

7     FlexCache Performance and Reporting

7.1      FlexCache Statistics

cmode-prod

set diag           # required for waflremote and debug

Show Commands

flexcache show

flexcache origin show

flexcache origin config show

debug smdb table nflexcache_origin_config show

flexcache show -instance

df -aggregates -autosize

df -autosize -V

volume show-footprint 

volume show-space

aggregate show-space

Average latencies per I/O operation for FlexCache

qos statistics workload latency show -iterations 100

FCache Hits with Statistics show-periodic

statistics show-periodic -interval 1

Show inodes showing up and evicting show per constituent

vol explore -format dir -scope cache1__0001./

vol explore -format dir -scope cache1__0002./

Start Statistics

statistics start -sample-id workload_cache -object waflremote

Show WAFL remote

statistics show -object waflremote -sample-id workload_cache

Show spinhi

statistics show -object spinhi -counter spinhi_flexcache* -raw

statistics show -object spinnp_replay_cache -instance spinnp_replay_cache -counter * -raw true

Cache misses (look at “fc_miss”)

statistics show -sample-id workload_cache -counter read_io_type

FlexCache Delays

statistics show -sample-id workload_cache -instance *FLEXCACHE*

Performance Histogram and cache evictions

statistics show -object waflremote -counter fc_retrieve_hist -raw

Show Cache Evictions

statistics show waflremote -counter scrub_need_freespace -raw

Stop and Delete Statistics

statistics stop -sample-id workload_cache

statistics samples delete -sample-id workload_cache

set adv

NetApp ONTAP 9.8 – FlexCache SMB Overview

NetApp FlexCache added SMB support and more in ONTAP 9.8. NFS v3 was already supported; the new capabilities, including SMB, are listed below. This is one of my favorite additions to ONTAP, with the ability to natively scale out a central CIFS share. Some of the information in this blog is consolidated from NetApp Docs at https://docs.netapp.com and the NetApp FlexCache Technical Report (currently at version 9.7) at https://www.netapp.com/pdf.html?item=/media/7336-tr4743pdf.pdf

FlexCache Overview

  • FlexCache is a persistent read/write cache of a volume that can improve performance by providing load distribution, reduced latency by locating data closer to the point of client access, and enhanced availability by serving cached data in a network disconnection situation.  
  • A FlexCache volume is a sparse copy where some files from the origin volume are cached. When a FlexCache volume is created, a FlexGroup volume is created by default with 4x constituent volumes. 
  • The cache is instant with no data transfer to create the cache
  • Similar to SnapMirror, the cache mechanism communicates over InterCluster LIFs and uses cluster and SVM (vserver) peering when caching to a different cluster and/or SVM. The cache can be local or remote across ONTAP clusters, or even on the same cluster and same SVM. InterCluster peering supports TLS for encryption on the wire, and both the source and destination can encrypt at rest with NVE, NAE or NSE.
  • In FlexCache terms, the origin is the source volume, and the caches are the remote volumes.
  • FlexCache works as an origin or cache on any ONTAP cluster, hardware and software, on AFF, FAS, ONTAP Select (OTS) and Cloud Volumes ONTAP (CVO).  
  • You can mix different disk and tier types, for example, the origin/cache can be any mix of HDD, SSD, FlashPool (HDD+SSD), and FabricPool (performance tier + object capacity tier).
  • FlexCache supports disconnected mode which allows reads but not writes to cached files while disconnected.
  • FlexCache enables operational efficiencies for backup and disaster recovery since the origin is the only site that needs backup and replication.
  • FlexCache has been around for many years for NFS, even on legacy 7-Mode systems. FlexCache in ONTAP 9 uses a more efficient and faster Remote Access Layer (RAL) protocol compared to the legacy 7-Mode NetApp Remote Volume (NRV) protocol.

New ONTAP 9.8 Features

  • FlexCache for SMB version 2.x and 3.x shares with file locking which is handled by the origin (source volume) locally and to all FlexCache volumes
  • FlexCache volumes from a mirrored destination DP volume as the origin of the cache
  • Block Level Invalidate for more efficiency in the cache (prior ONTAP was only file level invalidate). Note that BLI is disabled by default on origin volumes.
  • Fan-out from 1x origin volume to 100x cache volumes.  Prior to ONTAP 9.8, the ratio was 1 to 10
  • Pre-populate of directories in the cache.  Prior to ONTAP 9.8, or as another option, you can use the nfs “find” or Windows Powershell “Measure-Command” commands to populate the cache.  I will cover this in my next blog with examples from ONTAP and the clients

Free Licensing!

  • One of the best things about this technology is that the FlexCache feature is free! Starting with ONTAP 9.7, FlexCache no longer requires a license.
  • For ONTAP 9.5 and 9.6 there is a free master license key for up to 400TB, valid through 2099, at https://mysupport.netapp.com/NOW/knowledge/docs/olio/guides/master_lickey/
  • For ONTAP releases prior to 9.5, work with your NetApp sales and support teams for the license, but I highly recommend you upgrade to 9.8 for the new features and the license-free use

FlexCache Limits (check the FlexCache Power Guide in NetApp Docs for the latest)

Best Practices

  • To avoid invalidations on files that are cached when there is only a read at the origin, turn off last accessed time updates on the origin volume
    • volume modify -vserver origin-svm -volume vol_origin -atime-update false
  • Try not to use applications that confirm writes with a read-after-write. The write-around nature of FlexCache can cause delays for such applications
  • Set the following bootarg to revert the RAL or FlexGroup behavior to previous so that the “ls” command does not hang in disconnected mode
    • node run <node> "priv set diag; flexgroup set fast-readdir=false persist"
  • Create the FlexCache with the -aggr-list option so it creates the prescribed number of constituents (the default is 4x constituents)
    • Always use the -size option for the FlexCache create to specify the FlexCache volume size
  • Cache size should be larger than the largest file
    • Because a FlexCache is a FlexGroup, no single constituent should be smaller than the largest file that must be cached. If the cache is created with only one constituent, the FlexCache volume itself should be at least as large as the largest file to be cached.

Sizing – it depends (depends on what?) TR-4743 examples

  • The cache size can be the same or smaller than the origin volume
    • Best practice is at least 10% of the origin size (see sizing examples below)
  • The working set determines the cache size.  Auto Grow may be useful (see setup of Autogrow in my next blog)
    • Working set – If the origin volume has 1TB of data in it, but a particular job only needs 75GB of data, then the optimal size for the FlexCache volume is the working set size (75GB) plus overhead (approximately 25%). 
      • In this case, 75GB + 25% = 93.75GB or 94GB
    • The other method to determine the optimal cache volume size is to take 10% to 15% of the origin volume size and apply it to the cache. For a 50TB origin volume, a cache should be 5TB to 7.5TB in size. You can use this method when the working set is not clearly understood, and then use statistics, used sizes and other indicators to determine the overall optimal cache size (a command-level sketch follows this list)
      • 100GB for 1TB origin for example
    • Read/Write %s – The rule of thumb for FlexCache is a read/write mix of at least 80% reads and 20% writes at the cache. This ratio works because of the write-around nature of FlexCache. Writes incur a latency penalty when forwarding the write operation to the origin. FlexCache does allow a higher write percentage, but it is not optimal for the way FlexCache in ONTAP processes the write
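
A hedged sketch tying the 10% to 15% rule to commands (SVM, volume and junction names are placeholders): for a 50TB origin, start the cache at 5TB and let autogrow take it toward 7.5TB:

flexcache create -vserver cache_svm -volume cache_vol -origin-vserver origin_svm -origin-volume origin_vol -auto-provision-as flexgroup -size 5TB -junction-path /cache_vol -space-guarantee none

volume autosize -vserver cache_svm -volume cache_vol -mode grow -maximum-size 7680GB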

Global File Cache (GFC) Comparison

  • NetApp Global File Cache (GFC) came from the NetApp acquisition of Talon and is another SMB caching option worth discussing, since the two products are similar. Below are some comparisons and my opinion on when one or the other is the best option. GFC is SMB only and is licensed per remote-site virtual Windows 2016 or 2019 Server instance at roughly $4K list price per year.
  • You can mix and match GFC and FlexCache. For example you may have a FAS8700 serving CIFS at a central location with an ONTAP FlexCache at a remote site with a FAS2720, another remote FlexCache with ONTAP Select, and another remote site with a GFC virtual Windows instance. The FAS8700 can Mirror and Vault to another location for DR and Backup without backup needed at the remote sites.
  • When to use ONTAP FlexCache
    • When all sites already have ONTAP storage
    • When you need to cache NFS
  • When to use GFC
    • When remote sites do not have ONTAP storage
    • When the origin (source) is Cloud Volumes Service (CVS) or Azure NetApp Files (ANF)
      • CVS and ANF are storage-as-a-service native cloud offerings that are not supported with FlexCache
  • When it depends
    • At a new remote site, you can use an ONTAP Select (OTS) VM or a Global File Cache (GFC) Windows Server instance. With GFC you also need a VM at the origin (source) site. There is no right answer and your NetApp team can provide budget options to see which is the best fit, and in some cases you may want to use both.

Use cases (credit to the NetApp 9.8 EAP docs for the use cases and images)

  • Global File System using sparse cache volumes to remote sites
  • Large data sets
  • No need for full replication or multiple copies.  Keep 10-15% (sparse copy) cached where needed with a single master copy
  • Best fit for workloads that are at least 80% reads
  • Hot volume performance load balancing
  • Software build (Git)
  • Common tool distribution
  • Cloud bursting, acceleration, caching
  • Stretched NAS on MCC
  • Artificial Intelligence (AI), Machine Learning (ML), Deep Learning (DL)
  • ASIC Electronic Design Automation (EDA)
  • Media and computer generated imagery (CGI) rendering

FlexCache extends the volume namespace beyond the cluster currently serving the volume. This brings the data physically closer to the resources that need it via a sparse mechanism, so only the data being requested is cached in the remote cluster. The remote resource can then bypass the WAN and read the data from the local cluster with the FlexCache volume.

  • Caching to and from Cloud
    • Provides FlexCache in the cloud with the origin volume on-prem, FlexCache on-prem with the origin volume in the cloud, or both the FlexCache and origin volumes in the cloud
  • FlexCache with ONTAP Select
  • FlexCache for Cloud Bursting
  • When using the cloud for compute, you can use FlexCache to bring your data to the cloud immediately. No waiting for replication, no waiting for an initial sync; just create the FlexCache and go. Create FlexCaches in multiple clouds to balance workloads and leverage less expensive resources

FlexCache for Cloud Acceleration

  • For those employing a “cloud forward” strategy, cloud caching and acceleration provide a way to get at data in the cloud faster. Setting up a FlexCache does not require separate DR or backup. Primary data remains in the cloud with its existing backup/DR strategy, making this a zero-touch way to get data to users more quickly
  • Limit your egress charges in the cloud by reading data only once from the cloud, with subsequent reads served from the cache

FlexCache for Cloud Caching

  • Cache from cloud to cloud or region to region

In my next blog, I will demonstrate detailed setup and features of FlexCache for both NFS and SMB in my VSIM lab.

NetApp ONTAP 9.8 – FabricPool Tiering to ONTAP S3

In my prior blog, ONTAP S3 was configured, and we will build on that blog by connecting ONTAP aggregates to the S3 capacity tier with FabricPool. This blog will cover setup of FabricPool to ONTAP S3 with some additional features like tagging, mirroring and tiering policies. The cluster name is “cmode-prod” and the cluster will connect to an SVM named “S3” that resides on the same cluster. The S3 SVM could have been on a different cluster with connectivity from the cluster InterCluster LIF(s) to the S3 SVM data LIF(s). Note that REST, the System Manager GUI and Cloud Manager (Cloud Tiering) are also excellent tools for easy FabricPool configuration. For FabricPool Best Practices, please see John Lantz’s TR at https://www.netapp.com/us/media/tr-4598.pdf. The NetApp TRs and documentation at https://docs.netapp.com were used for the setup below.

1     FabricPool Configuration

1.1      Check Licenses

  • Free use
    • When StorageGRID or ONTAP (up to 300TB) is used
    • When CVO (Cloud Volumes ONTAP) is used to the local cloud vendor object store
  • Per TB use license for on-prem to non-StorageGRID/ONTAP

cmode-prod

license show

license show-status

1.2      Install a Server-CA Certificate for TLS (for https S3 access)

  • In the S3 blog posted earlier, we created a server certificate that matched FQDN name s3.lab2.local.
  • We will use the public key from that S3 cert to create a server-ca certificate on cmode-prod, which acts as an S3 client to the S3 server on the S3 SVM
  • On-prem S3 solutions require certificates to be installed in both ONTAP and the object store
    • StorageGRID
    • IBM Cloud Object Storage (formerly Cleversafe)
  • In the next step below we will create a certificate for ONTAP_S3
  • Parameter to bypass cert validation for private cloud (StorageGRID) NOT RECOMMENDED
    • -is-certificate-validation-enabled false
  • If you do not install the server-ca certificate (the S3 server’s root-ca public key) on cmode-prod, creating the object-store connection fails with the error below

Error: command failed: Cannot verify availability of the object store from node cmode-prod-01. Reason: Cannot verify the certificate given by the object store server. It is possible that the certificate has not been installed on the cluster. Use the ‘security certificate install -type server-ca’ command to install it..

cmode-prod

Show the root-ca public cert (SVM_CA) that signed the S3 server cert we created in the prior blog (we will copy/paste from BEGIN CERTIFICATE to END CERTIFICATE)

security certificate show -vserver S3 -common-name SVM_CA -type root-ca  -instance

Your public key will be different

                             Vserver: S3

                    Certificate Name: SVM_CA_160AA44E38972249_SVM_CA

          FQDN or Custom Common Name: SVM_CA

        Serial Number of Certificate: 160AA44E38972249

               Certificate Authority: SVM_CA

                 Type of Certificate: root-ca

 Size of Requested Certificate(bits): 2048

              Certificate Start Date: Thu Apr 30 09:01:14 2020

         Certificate Expiration Date: Fri Apr 30 09:01:14 2021

              Public Key Certificate: —–BEGIN CERTIFICATE—–

                                      MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

                                      AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

                                      NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

                                      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

                                      tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

                                      q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

                                      s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

                                      Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

                                      3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

                                      gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

                                      ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

                                      +ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

                                      oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

                                      aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

                                      yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

                                      2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

                                      VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

                                      5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

                                      —–END CERTIFICATE—–
        Country Name (2 letter code): US

  State or Province Name (full name):

           Locality Name (e.g. city):

    Organization Name (e.g. company):

    Organization Unit (e.g. section):

        Email Address (Contact Name):

                            Protocol: SSL

                    Hashing Function: SHA256

                             Subtype: –                          

security certificate show -vserver S3 -common-name SVM_CA -type root-ca  -fields public-cert

—–BEGIN CERTIFICATE—–

MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

+ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

—–END CERTIFICATE—–

security certificate show -vserver S3 -common-name s3.lab2.local -fields public-cert

Install the S3 SVM’s CA public certificate as a server-ca on cmode-prod (paste the public cert shown above)

security certificate install -type server-ca -vserver cmode-prod -cert-name s3.lab2.local

Please enter Certificate: Press <Enter> when done

—–BEGIN CERTIFICATE—–

MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

+ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

—–END CERTIFICATE—–

[enter]

You should keep a copy of the CA-signed digital certificate for future reference.

The installed certificate’s CA and serial number for reference:

CA: SVM_CA

serial: 160AA4596B7767A0

security certificate show -cert-name s3.lab2.local

Vserver    Serial Number   Certificate Name                       Type

———- ————— ————————————– ————

S3         160AA4945C51B18E s3.lab2.local                         server

    Certificate Authority: SVM_CA

          Expiration Date: Sun Apr 25 09:06:15 2021

cmode-prod 160AA4596B7767A0 s3.lab2.local                         server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

2 entries were displayed.

security certificate show -common-name SVM_CA

Vserver    Serial Number   Certificate Name                       Type

———- ————— ————————————– ————

S3         160AA4596B7767A0 SVM_CA_160AA4596B7767A0_SVM_CA        root-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

S3         160AA4596B7767A0 SVM_CA_160AA4596B7767A0               client-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

S3         160AA4596B7767A0 SVM_CA                                server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

cmode-prod 160AA4596B7767A0 s3.lab2.local                         server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

4 entries were displayed.
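
With the server-ca installed, you can optionally confirm from any client that the S3 endpoint presents a certificate that chains to SVM_CA. A minimal Python sketch for illustration only; it assumes the SVM_CA public cert shown above was saved locally as svm_ca.pem (a placeholder path):

import socket, ssl

ctx = ssl.create_default_context(cafile="svm_ca.pem")   # SVM_CA public cert saved from the output above
with socket.create_connection(("s3.lab2.local", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="s3.lab2.local") as tls:
        # If the handshake succeeds, the server certificate validates against SVM_CA
        print("TLS OK:", tls.version(), tls.getpeercert()["subject"])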

1.3      Configure a Proxy for S3 (to access Public Cloud)

  • Commands for reference below
  • When configuring the object store with “object-store config create” you will specify “-use-http-proxy true”

REFERENCE

network ipspace show

vserver http-proxy create -ipspace <ipspace> -server <proxy-server-FQDN> -port <port>

vserver http-proxy show 

1.4      S3 Object Store Bucket and Account Information

  • Output from the prior blog
    • We created the s3admin user and two buckets (s3ontap1 and s3ontap2) in the prior blog
    • We will use both buckets to demonstrate the object mirror feature
  • FabricPool tiering to ONTAP S3 is supported for 300TB or less in ONTAP 9.8
    • There is an ONTAP_S3 provider type in the object-store config create command

cmode-prod

Show current S3

network interface show -vserver S3

            Logical    Status     Network            Current       Current Is

Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home

———– ———- ———- —————— ————- ——- —-

S3

            lif1         up/up    192.168.150.141/24 cmode-prod-01 e0c     true

vserver services name-service dns hosts show

Vserver    Address        Hostname        Aliases

———- ————– ————— ———————-

cmode-prod 192.168.150.141 s3.lab2.local  –

object-store-server user show -user s3admin             # your keys will be different

Use the “s3admin” account keys

Vserver     User            ID        Access Key          Secret Key

———– ————— ——— ——————- ——————-

S3          s3admin         1         ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9

rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

object-store-server user show -user s3admin -fields access-key       # to show the access key separately from the secret key

object-store-server bucket show

Vserver     Bucket          Volume            Size       Encryption

———– ————— —————– ———- ———-

S3          s3ontap1        fg_oss_1585321366 100GB      true

S3          s3ontap2        fg_oss_1585321198 100GB      true

2 entries were displayed.
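
Before wiring FabricPool to these buckets, the s3admin keys can be sanity-checked from any S3 client. A minimal sketch using Python and boto3 (my choice of client, not part of the ONTAP setup; the key values and CA path are placeholders):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.lab2.local",
    aws_access_key_id="<s3admin-access-key>",
    aws_secret_access_key="<s3admin-secret-key>",
    verify="svm_ca.pem",   # SVM_CA public cert; verify=False only for throwaway labs
)
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])   # expect s3ontap1 and s3ontap2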

1.5      Verify Intercluster LIF Connectivity to S3

  • We will network ping from each intercluster LIF to the S3 FQDN
  • Cluster peering over InterCluster LIFs is required for FabricPool to connect from the cluster (admin SVM) to the S3 bucket
    • Since we are connecting cmode-prod to one of its own SVMs (S3), no cluster peer is needed for this local cluster connection

cmode-prod

network ping -lif cmode-prod-01_ic1 -vserver cmode-prod -destination s3.lab2.local

network ping -lif cmode-prod-02_ic1 -vserver cmode-prod -destination s3.lab2.local

1.6      Object Store Connect to ONTAP (add a cloud tier)

  • We will add two buckets for mirroring
  • We will use SSL per best practices
  • ONTAP S3 is supported, using the “ONTAP_S3” provider type instead of “S3_Compatible”
  • Multiple aggregates can use the same bucket
  • Configure FabricPool on the cluster with S3 bucket information
    • Server name (MUST be a FQDN)
    • Secret and access keys
    • Bucket / Container name
  • Syntax

object-store config create
-object-store-name <name> 
-provider-type <AWS/SGWS>
-port <443/8082> (AWS/SGWS)
-server <name> 
-container-name <bucket-name> 
-access-key <string> 
-secret-password <string> 
-ssl-enabled true 
-ipspace default
-is-certificate-validation-enabled

cmode-prod

Connect Two Capacity Object Tiers # Update your access key and secret password

  • Using s3admin keys


storage aggregate object-store config create -object-store-name s3ontap1 -provider-type ONTAP_S3 -server s3.lab2.local -container-name s3ontap1 -ssl-enabled true -port 443 -ipspace Default -use-http-proxy false -server-side-encryption none -access-key ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9 -secret-password rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

storage aggregate object-store config create -object-store-name s3ontap2 -provider-type ONTAP_S3 -server s3.lab2.local -container-name s3ontap2 -ssl-enabled true -port 443 -ipspace Default -use-http-proxy false -server-side-encryption none -access-key ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9 -secret-password rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

storage aggregate object-store config show

Name            Server               Container Name Provider Type Ipspace

————— ——————– ————– ————- ————-

s3ontap1        s3.lab2.local        s3ontap1       ONTAP_S3      Default

s3ontap2        s3.lab2.local        s3ontap2       ONTAP_S3      Default

2 entries were displayed.

1.7      Object Store Profiler

  • Performance profiling of the object storage put and get operations
  • FabricPool read latency is a function of connectivity to the cloud tier. LIFs using 10Gbps ports provide adequate performance. NetApp recommends validating the latency and throughput of your specific network environment to determine the impact it has on FabricPool performance (a simple client-side probe is also sketched after this list)
  • Cloud tiers do not provide performance similar to that found on the local tier (typically GB per second)
  • Although cloud tiers can provide SATA-like performance, FabricPool can also tolerate latencies as high as 10 seconds and lower throughput for tiering solutions that do not need SATA-like performance
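
If you want an independent data point alongside the built-in profiler, a rough client-side put/get timing loop works too. A hypothetical sketch (keys, bucket and CA path are placeholders; this measures client round trips, not what ONTAP itself sees):

import time
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.lab2.local",
    aws_access_key_id="<s3admin-access-key>",
    aws_secret_access_key="<s3admin-secret-key>",
    verify="svm_ca.pem",
)
payload = b"x" * (4 * 1024 * 1024)          # 4MiB test object
t0 = time.perf_counter()
s3.put_object(Bucket="s3ontap1", Key="probe", Body=payload)
t1 = time.perf_counter()
s3.get_object(Bucket="s3ontap1", Key="probe")["Body"].read()
t2 = time.perf_counter()
print(f"PUT {t1 - t0:.3f}s  GET {t2 - t1:.3f}s")
s3.delete_object(Bucket="s3ontap1", Key="probe")   # clean up the probe object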

cmode-prod

Start the profiler

storage aggregate object-store profiler start -node cmode-prod-01 -object-store-name s3ontap1 #y

storage aggregate object-store profiler start -node cmode-prod-02 -object-store-name s3ontap2 #y

storage aggregate object-store profiler show

2     Aggregate Configuration

2.1      Object Store Attach to Aggregate (local tier)

  • Attaching a cloud tier to a local tier is a permanent action. A cloud tier cannot be unattached from a local tier after being attached. (Using FabricPool Mirror, a different cloud tier can be attached.)
    • The exception: you can mirror an object store, swap the mirror to primary, then remove the now-mirror original tier; but once a cloud tier is attached, the aggregate always has at least one tier attached
  • Volumes in the aggregate must be thin provisioned (-space-guarantee none) to set a tiering policy that uses the cloud tier
  • Aggregate autobalance must be disabled on the aggregates
  • We will connect both buckets to both SSD aggregates
    • Connecting mirrored buckets to multiple aggregates
  • Syntax

storage aggregate object-store attach
-aggregate <name> 
-object-store-name <name>

-allow-flexgroup <true|false>

cmode-prod

Show the object stores available

storage aggregate object-store config show

Disable autobalance on the SSD aggregates (from prior lab)

aggr modify -aggregate cmode_prod_01_aggr3_SSD -is-autobalance-eligible false

aggr modify -aggregate cmode_prod_02_aggr3_SSD -is-autobalance-eligible false

Attach the first Capacity Tier to the SSD aggregates

storage aggregate object-store attach -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap1 -allow-flexgroup true   # y

storage aggregate object-store attach -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap1 -allow-flexgroup true   # y

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

————– —————– ————-  ———–

cmode_prod_01_aggr3_SSD s3ontap1 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      primary

2 entries were displayed.

storage aggregate object-store show -instance

2.2      Object Store Unreclaimed Space Threshold

  • See default thresholds by object store type above
  • Object defragmentation reduces the amount of physical capacity used by the cloud tier at the expense of additional object store resources (reads and writes)
  • Reducing the Threshold
    • To avoid additional expenses, consider reducing the unreclaimed space thresholds when using object store pricing schemes that reduce the cost of storage but increase the cost of reads. Examples include Amazon’s Standard-IA and Azure Blob Storage’s cool
    • For example, tiering a volume of 10-year-old projects that has been saved for legal reasons might be less expensive when using a pricing scheme such as Standard-IA or cool than it would be when using standard pricing schemes. Although reads are more expensive for such a volume, including reads required by object defragmentation, they are unlikely to occur frequently here.
  • Increasing the Threshold
    • Alternatively, consider increasing unreclaimed space thresholds if object fragmentation is resulting in significantly more object store capacity being used than necessary for the data being referenced by ONTAP. For example, using an unreclaimed space threshold of 20%, in a worst-case scenario where all objects are equally fragmented to the maximum allowable extent, it is possible for 80% of total capacity in the cloud tier to be unreferenced by ONTAP (this arithmetic is reproduced in the short sketch after this list)
    • 2TB referenced by ONTAP + 8TB unreferenced by ONTAP = 10TB total capacity used by the cloud tier.
    • In situations such as these, it might be advantageous to increase the unreclaimed space threshold (or increase volume minimum cooling days) to reduce the capacity being used by unreferenced blocks.
    • To change the default unreclaimed space threshold, run the following command:
      • storage aggregate object-store modify -aggregate <name> -object-store-name <name> -unreclaimed-space-threshold <%> (0%-99%)
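
The worst-case arithmetic from the example above can be expressed in a few lines (a simple helper for illustration only; it just restates the 2TB + 8TB = 10TB example):

def worst_case_cloud_usage(referenced_tb, unreclaimed_threshold):
    # Per the example above: at a threshold t, up to (1 - t) of total cloud-tier
    # capacity can be unreferenced, so the worst-case total = referenced / t
    total = referenced_tb / unreclaimed_threshold
    return total, total - referenced_tb

print(worst_case_cloud_usage(2, 0.20))   # (10.0, 8.0): 2TB referenced + 8TB unreferenced = 10TB total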

cmode-prod

View the current threshold

storage aggregate object-store show -fields unreclaimed-space-threshold

aggregate               object-store-name unreclaimed-space-threshold

———————– —————– —————————

cmode_prod_01_aggr3_SSD s3ontap1          40%

cmode_prod_02_aggr3_SSD s3ontap1          40%

Modify the threshold to 50%

storage aggregate object-store modify -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap1 -unreclaimed-space-threshold 50%

storage aggregate object-store modify -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap1 -unreclaimed-space-threshold 50%

storage aggregate object-store show -fields unreclaimed-space-threshold

aggregate               object-store-name unreclaimed-space-threshold

———————– —————– —————————

cmode_prod_01_aggr3_SSD s3ontap1          50%

cmode_prod_02_aggr3_SSD s3ontap1          50%

2.3      Object Store Attach to Aggregate (local tier) Mirror

  • When using FabricPool Mirror, data is mirrored across two buckets
  • When adding FabricPool Mirror to an existing FabricPool, data previously tiered to the original cloud tier is written to the newly attached cloud tier as well. After both tiers are mirrored, data is synchronously tiered to both cloud tiers
  • Although essential for FabricPool with NetApp MetroCluster, FabricPool Mirror is a stand-alone feature that does not require MetroCluster to use
  • Attach
    • storage aggregate object-store mirror -aggregate <aggregate name> -name <object-store-name-2>
  • Swap
    • storage aggregate object-store modify -aggregate <aggregate name> -name <object-store-name-2> -mirror-type primary
  • Delete
    • storage aggregate object-store unmirror -aggregate <aggregate name>

cmode-prod

Show the current primary tier

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

————– —————– ————-  ———–

cmode_prod_01_aggr3_SSD s3ontap1 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      primary

2 entries were displayed.

Attach the Mirrored Capacity Tier to the SSD aggregates

storage aggregate object-store mirror -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap2

storage aggregate object-store mirror -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap2

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

————– —————– ————-  ———–

cmode_prod_01_aggr3_SSD s3ontap1 available      primary

cmode_prod_01_aggr3_SSD s3ontap2 available      mirror

cmode_prod_02_aggr3_SSD s3ontap1 available      primary

cmode_prod_02_aggr3_SSD s3ontap2 available      mirror

4 entries were displayed.

storage aggregate object-store show -instance

2.4      Object Store Mirror – Swap/Unmirror/Mirror

  • Removing a cloud tier is not supported, with the exception that you can swap to the mirror and then delete the original (now the mirror) tier
  • We will make s3ontap2 primary, delete s3ontap1, then remirror to s3ontap1

cmode-prod

Swap to s3ontap2 as primary

storage aggregate object-store modify -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap2 -mirror-type primary

storage aggregate object-store modify -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap2 -mirror-type primary

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

————– —————– ————-  ———–

cmode_prod_01_aggr3_SSD s3ontap1 available      mirror

cmode_prod_01_aggr3_SSD s3ontap2 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      mirror

cmode_prod_02_aggr3_SSD s3ontap2 available      primary

4 entries were displayed.

Unmirror s3ontap1 to remove a tier

storage aggregate object-store unmirror -aggregate cmode_prod_01_aggr3_SSD

storage aggregate object-store unmirror -aggregate cmode_prod_02_aggr3_SSD

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

————– —————– ————-  ———–

cmode_prod_01_aggr3_SSD s3ontap2 available      primary

cmode_prod_02_aggr3_SSD s3ontap2 available      primary

2 entries were displayed.

Add the mirror back (opposite of before with s3ontap1 mirrored)

storage aggregate object-store mirror -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap1

storage aggregate object-store mirror -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap1

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

————– —————– ————-  ———–

cmode_prod_01_aggr3_SSD s3ontap1 available      mirror

cmode_prod_01_aggr3_SSD s3ontap2 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      mirror

cmode_prod_02_aggr3_SSD s3ontap2 available      primary

4 entries were displayed.

2.5      Aggregate Tiering Fullness Threshold

  • By default, tiering to the cloud tier only happens if the local tier is >50% full. There is little reason to tier cold data to a cloud tier if the local tier is being underutilized
  • Setting the threshold to a lower number reduces the amount of data required to be stored on the local tier before tiering takes place. This may be useful for large local tiers that contain little hot/active data
  • Setting the threshold to a higher number increases the amount of data required to be stored on the local tier before tiering takes place. This may be useful for solutions designed to tier only when local tiers are near maximum capacity
  • This is the same command that also modifies the unreclaimed threshold we increased from 40 to 50% earlier
  • Syntax
    • storage aggregate object-store modify -aggregate <name> -tiering-fullness-threshold <#> (0%-99%)

cmode-prod

Show the current threshold

storage aggregate object-store show -fields tiering-fullness-threshold,unreclaimed-space-threshold,mirror-type

aggregate               object-store-name unreclaimed-space-threshold tiering-fullness-threshold mirror-type

———————– —————– ————————— ————————– ———–

cmode_prod_01_aggr3_SSD s3ontap1          50%                         50%                        mirror

cmode_prod_01_aggr3_SSD s3ontap2          50%                         50%                        primary

cmode_prod_02_aggr3_SSD s3ontap1          50%                         50%                        mirror

cmode_prod_02_aggr3_SSD s3ontap2          50%                         50%                        primary

4 entries were displayed.

Set tiering to 25% and change unreclaimed back to 40% (set on the PRIMARY tier, which is s3ontap2)

storage aggregate object-store modify -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap2 -tiering-fullness-threshold 25% -unreclaimed-space-threshold 40%

storage aggregate object-store modify -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap2 -tiering-fullness-threshold 25% -unreclaimed-space-threshold 40%

storage aggregate object-store show -fields tiering-fullness-threshold,unreclaimed-space-threshold,mirror-type

aggregate               object-store-name unreclaimed-space-threshold tiering-fullness-threshold mirror-type

———————– —————– ————————— ————————– ———–

cmode_prod_01_aggr3_SSD s3ontap1          40%                         25%                        mirror

cmode_prod_01_aggr3_SSD s3ontap2          40%                         25%                        primary

cmode_prod_02_aggr3_SSD s3ontap1          40%                         25%                        mirror

cmode_prod_02_aggr3_SSD s3ontap2          40%                         25%                        primary

4 entries were displayed.

3     Volume Tiering Configuration

3.1      Volume Tiering Policies

  • From the information section
    • Auto – 2-63 (9.7) or 2-183 (9.8) days cooling (default = 31)
    • Snapshot-Only – 2-63 (9.7) or 2-183 (9.8) days cooling (default = 2)
    • All – all but metadata moves to the object store
    • None – default (no tiering)
  • You can set a tiering policy on volumes that are not in tiered aggregates; the setting takes effect when the volume is moved to one

cmode-prod

Show volumes in the SSD aggregates and the tiering policy and minimum cooling days

vol show -aggregate cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD -fields tiering-policy,tiering-minimum-cooling-days

vserver    volume               tiering-policy tiering-minimum-cooling-days

———- ——————– ————– —————————-

source_fg1 source_fg1_root_ls01 none           –

source_fg1 source_fg1_root_ls02 none           –

source_ntfs apps                none           –

source_ntfs apps_clone1         none           –

source_ntfs source_ntfs_root_dp01 none         –

source_ntfs source_ntfs_root_dp02 none         –

source_ntfs source_ntfs_root_ls01 none         –

source_ntfs source_ntfs_root_ls02 none         –

source_ntfs users               none           –

source_test apps                none           –

source_test home                none           –

source_unix apps                none           –

source_unix apps_clone          none           –

source_unix source_unix_root_dp01 none         –

source_unix source_unix_root_dp02 none         –

source_unix source_unix_root_ls01 none         –

source_unix source_unix_root_ls02 none         –

source_unix users               none           –

18 entries were displayed.

Set the All tiering policy on apps in source_ntfs (tiering-minimum-cooling-days is not supported and not needed with All)

volume modify -vserver source_ntfs -volume apps -tiering-policy all

Set a Snapshot-Only tiering policy to users on source_ntfs with 2 cooling days

volume modify -vserver source_ntfs -volume users -tiering-policy snapshot-only -tiering-minimum-cooling-days 2

Set an auto tiering policy to home on source_test with 5 cooling days

volume modify -vserver source_test -volume home -tiering-policy auto -tiering-minimum-cooling-days 5

Show the tiering

vol show -aggregate cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD -fields tiering-policy,tiering-minimum-cooling-days

vserver    volume               tiering-policy tiering-minimum-cooling-days

———- ——————– ————– —————————-

source_fg1 source_fg1_root_ls01 none           –

source_fg1 source_fg1_root_ls02 none           –

source_ntfs apps                all            –

source_ntfs apps_clone1         none           –

source_ntfs source_ntfs_root_dp01 none         –

source_ntfs source_ntfs_root_dp02 none         –

source_ntfs source_ntfs_root_ls01 none         –

source_ntfs source_ntfs_root_ls02 none         –

source_ntfs users               snapshot-only  2

source_test apps                none           –

source_test home                auto           5

source_unix apps                none           –

source_unix apps_clone          none           –

source_unix source_unix_root_dp01 none         –

source_unix source_unix_root_dp02 none         –

source_unix source_unix_root_ls01 none         –

source_unix source_unix_root_ls02 none         –

source_unix users               none           –

18 entries were displayed.

3.2      Volume Move with Tiering

  • Best practice: create a single bucket for all aggregates within a cluster
    • This ensures that vol move does not retrieve data from the capacity tier when moving volumes between aggregates
  • You can set the tiering policy (change policy) on vol move
  • If a volume move’s destination local tier does not have an attached cloud tier, data on the source volume that is stored on the cloud tier is written to the local tier on the destination local tier
  • If a volume move destination local tier uses the same bucket as the source local tier, data on the source volume that is stored in the bucket does not move back to the local tier. This results in significant network efficiencies. (Setting the tiering policy to None will result in cold data being moved to the local tier.)
  • If a volume move’s destination local tier has an attached cloud tier, data on the source volume that is stored on the cloud tier is first written to the local tier on the destination local tier. It is then written to the cloud tier on the destination local tier if this approach is appropriate for the volume’s tiering policy. Moving data to the local tier first improves the performance of the volume move and reduces cutover time
  • If a volume tiering policy is not specified when performing a volume move, the destination volume uses the tiering policy of the source volume. If a different tiering policy is specified when performing the volume move, the destination volume is created with the specified tiering policy.
  • Note: When in an SVM-DR relationship, source and destination volumes must use the same tiering policy

cmode-prod

Move apps on source_test to cmode_prod_02_aggr3_SSD changing the policy from none to snapshot only

vol move start -vserver source_test -volume apps -destination-aggregate cmode_prod_02_aggr3_SSD -tiering-policy snapshot-only

vol move show            #wait until completed

vol show -vserver source_test -volume apps -fields tiering-policy,tiering-minimum-cooling-days

4     Object Store Tagging (ONTAP 9.8)

  • ONTAP 9.8 only feature

4.1      Tagging Metadata Information

  • Starting in ONTAP 9.8, FabricPool supports object tagging using user-created custom tags. If you are a user with the admin privilege level, you can create new object tags, and modify, delete, and view existing tags
  • Supports a maximum of 4 tags per volume and all volume tags must have a unique key
  • Supported on the StorageGRID Webscale object store
  • Keys are specified as a key=value string. For example, type=PDF
  • Keys and values must contain only alphanumeric characters and underscores
  • Volume parameter -tiering-object-tags <key1=value1> [,<key2=value2>,<key3=value3>,<key4=value4>] (a small validation sketch follows this list)
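
A small Python sketch that checks a tag list against the rules above before passing it to -tiering-object-tags (my own helper for illustration; ONTAP performs its own validation):

import re

TAG_RE = re.compile(r"^[A-Za-z0-9_]+=[A-Za-z0-9_]+$")

def validate_tags(tags):
    # Rules from above: at most 4 tags, unique keys, key=value with alphanumerics and underscores only
    if len(tags) > 4:
        raise ValueError("a volume supports at most 4 object tags")
    keys = [t.split("=", 1)[0] for t in tags]
    if len(keys) != len(set(keys)):
        raise ValueError("tag keys must be unique")
    for t in tags:
        if not TAG_RE.match(t):
            raise ValueError(f"invalid tag: {t}")
    return ",".join(tags)    # the comma-separated form used by -tiering-object-tags

print(validate_tags(["labenv=evtlabs", "lab=lab15", "cluster=cmode_prod", "svm=source_unix"]))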

4.2      Create a Volume with Object Tags

cmode-prod

Create a volume with 4 object tags (none or 1-4 is supported)

volume create -vserver source_unix -volume volFP_tagged -aggregate cmode_prod_02_aggr3_SSD -size 1g -space-guarantee none -junction-path /volFP_tagged -state online -tiering-policy auto -tiering-minimum-cooling-days 183 -tiering-object-tags labenv=evtlabs,lab=lab15,cluster=cmode_prod,svm=source_unix

Warning: The export-policy “default” has no rules in it. The volume will therefore be inaccessible over NFS and CIFS protocol.

Do you want to continue? {y|n}: y

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

4.3      Modify Tags

  • The tag list is an absolute setting: to change one key, you must re-specify ALL of the keys you want to keep

cmode-prod

Modify 2 of the tags (repeat the original 2 and 2 new)

volume modify -vserver source_unix -volume volFP_tagged  -tiering-object-tags labenv=evtlabs,lab=lab15,node=cmode_prod_02,aggr=aggr3_SSD

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

4.4      Remove Tags with “”

cmode-prod

Remove all of the object tags by setting an empty string

volume modify -vserver source_unix -volume volFP_tagged -tiering-object-tags “”

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

Add the original tags back

volume modify -vserver source_unix -volume volFP_tagged -tiering-object-tags labenv=evtlabs,lab=lab15,cluster=cmode_prod,svm=source_unix

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

4.5      Check if Tagging is Complete

  • Check whether the object tagging scanner has not yet run or needs to run again for any volumes

cmode-prod

volume show -needs-object-retagging true

volume show -fields needs-object-retagging

5     Promote Data to the Performance Tier from S3 (ONTAP 9.8)

  • ONTAP 9.8 only feature

5.1      Promote Information

  • Starting in ONTAP 9.8, you can proactively promote (pull back) data to the performance tier from the cloud tier using a combination of the tiering-policy and the cloud-retrieval-policy settings. You might do this if you want to stop using FabricPool on a volume, or if you have a snapshot-only tiering policy and you want to bring restored Snapshot copy data back to the performance tier
  • There are four cloud retrieval policies
    • -cloud-retrieval-policy
      • default 
      • on-read 
      • never   
      • promote

5.2      Promote ALL Data to the Performance Tier (from S3)

  • Set the tiering policy to “none” so all data is brought back before the promote

cmode-prod

volume modify -vserver source_unix -volume volFP_tagged -tiering-policy none -cloud-retrieval-policy promote

Warning: The “promote” cloud retrieve policy retrieves all of the cloud data for the specified volume. 

If the tiering policy is “snapshot-only” then only AFS data is retrieved. 

If the tiering policy is “none” then all data is retrieved. It may take a significant amount of time, and may degrade performance during that time. 

The cloud retrieve operation may also result in data charges by your object store provider.

Do you want to continue? {y|n}: y

5.3      Promote Active File System Data to the Performance Tier (from S3)

  • Set the tiering policy to “snapshot-only” so only active file system (non-snapshot) data is brought back before the promote

cmode-prod

volume modify -vserver source_unix -volume volFP_tagged -tiering-policy snapshot-only -cloud-retrieval-policy promote

Warning: The “promote” cloud retrieve policy retrieves all of the cloud data for the specified volume. 

If the tiering policy is “snapshot-only” then only AFS data is retrieved. 

If the tiering policy is “none” then all data is retrieved. It may take a significant amount of time, and may degrade performance during that time. 

The cloud retrieve operation may also result in data charges by your object store provider.

Do you want to continue? {y|n}: y

5.4      Check Migration and Tiering Status

cmode-prod

volume object-store tiering show -vserver source_unix -volume volFP_tagged -instance

5.5      Start Schedule Migration and Tiering

  • You can trigger the tiering scan manually when you prefer not to wait for the default tiering scan.

cmode-prod

volume object-store tiering trigger -vserver source_unix -volume volFP_tagged 

6     Monitoring and Space Reporting

  • Active IQ Unified Manager provides basic capacity and performance insights
  • NetApp Harvest provides detailed performance information

6.2      Show Space in Each Tier (Aggregate, Volume, Object)

  • Volume and aggregate level
  • Aggr show-space breaks out the space used in the performance tier and the object (capacity) tier

cmode-prod

Show space in the aggregate (Performance Tier and Object Tier are shown)

Get information on data tiered per aggregate

aggr show-space -aggregate-name cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD

Show volume space in cloud capacity and performance tiers

Get information on data tiered per volume

vol show-footprint

volume show-footprint -fields bin0-name,volume-blocks-footprint-bin0,bin1-name,volume-blocks-footprint-bin1

vol show-footprint -vserver source_ntfs -volume apps

vol show -vserver source_ntfs -volume apps -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent

Show Object Store Space

storage aggregate object-store show-space
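
For automation, similar space information is also exposed over the ONTAP REST API (available since ONTAP 9.6). A hypothetical Python sketch using the requests library; the management address and credentials are placeholders, and the exact fields returned vary by ONTAP version, so it simply dumps the space section per aggregate:

import requests

CLUSTER = "https://cmode-prod.lab2.local"   # placeholder cluster management address
AUTH = ("admin", "<password>")              # placeholder credentials

resp = requests.get(
    f"{CLUSTER}/api/storage/aggregates",
    params={"fields": "space"},
    auth=AUTH,
    verify=False,                           # lab only; use the cluster CA certificate in production
)
resp.raise_for_status()
for aggr in resp.json()["records"]:
    print(aggr.get("name"), aggr.get("space", {}))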

6.3      Inactive Data Reporting (IDR)

  • Shows how much data will tier; added in ONTAP 9.4
  • Works for all tiering policies
  • Works on HDD for reporting (9.6+)
  • Does not work on Flash Pool aggregates
  • Go to the SINGLE NODE cluster for IDR
  • ONTAP 9.8 – IDR uses the ONTAP cooling period, so you no longer have to wait 31 days
  • Use the XCP method below if you need faster results pre-9.8

cmode-single

Show if enabled

aggr show -is-inactive-data-reporting-enabled true    # not enabled

Enable on both HDD aggregates

storage aggregate modify -aggregate cmode_single_01_aggr1,cmode_single_01_aggr2_mir -is-inactive-data-reporting-enabled true

Show enabled

aggr show -is-inactive-data-reporting-enabled true

Show Inactive Data (zero for our lab – needs 31 days)

storage aggregate show-space -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent

vol show -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent

6.4      XCP (host based) Estimate (IDR at the file level)

  • This isn’t as accurate as IDR (XCP looks at files, not blocks, and has no concept of WAFL metadata) but will get you very close
  • If you do not have 31 days to wait for reporting and want immediate file level results
  • XCP is free (90-day key you can continually renew)
  • See the SVM NAS Lab for more information and XCP command examples



xcp scan -match “((now-x.atime) / 3600) > 31*day” <source>

Windows Host

  • Copy the license file to  C:\NetApp\XCP

Powershell

cd “C:\Users\administrator\Desktop\NetApp Software\xcp\windows”

./xcp activate

XCP SMB 1.6P1; (c) 2020 NetApp, Inc.; Licensed to Scott Gelb [None] until Sun Aug  9 08:07:08 2020

XCP activated

Run a scan for cold files on a NAS share

./xcp scan -match “((now-x.atime) / 3600) > 31*day” \\source_ntfs\apps

Run a scan for cold files on your Windows host

./xcp scan -match “((now-x.atime) / 3600) > 31*day”  “C:\Users\administrator\Desktop\NetApp Software”
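
If XCP is not handy, the same age-by-atime idea can be approximated from any host with a few lines of Python (a rough illustration only; unlike XCP it walks files serially and knows nothing about WAFL block usage):

import os, time

COLD_AFTER = 31 * 24 * 3600      # 31 days in seconds
now = time.time()
cold_bytes = 0

for root, _dirs, files in os.walk(r"C:\Users\administrator\Desktop\NetApp Software"):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except OSError:
            continue
        if now - st.st_atime > COLD_AFTER:   # not accessed in more than 31 days
            cold_bytes += st.st_size

print(f"cold data: {cold_bytes / 1024**3:.2f} GiB")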

6.5      Statistics and Node Shell Diag (Advanced Reporting)

cmode-prod

set diag

Get information on operations issued (this will have no output for the lab)

statistics show -object wafl_comp_aggr_bin -counter cloud_bin_operation -raw

Cloud read and write performance monitored per node 

node run -node cmode-prod-01 “priv set diag;sysstat -d 1”

Detailed IO size and full request latency (this is not the same as frontend latency. Bin 0 refers to the hot tier and Bin 1 to the cold tier)

node run -node cmode-prod-01 “priv set diag;wafl composite stats show cmode_prod_01_aggr3_SSD”

Detailed client to object-store statistics per operation type are available via the following commands. These commands only collect metrics between start and stop

node run -node cmode-prod-01 -command “priv set diag;stats start object_store_client_op”
node run -node cmode-prod-01 -command “priv set diag;stats stop”

Detailed information about the connections to the object store, including TLS handshake latency, is available via the following commands. These commands only collect metrics between start and stop

node run -node cmode-prod-01 -command “priv set diag;stats start object_store_client_conn”
node run -node cmode-prod-01 -command “priv set diag;stats stop”  

For a volume tiered with “all” policy, some blocks may still be on SSD

To flush all blocks of a flexvol from SSD to Cloud, you can use the following diag level command
node run -node cmode-prod-01 “priv set diag; wafl scan redirect apps”

Check completion of the process 

node run -node cmode-prod-01 “priv set diag; wafl scan status”

set adv

NetApp ONTAP 9.8 – S3 is GA!

ONTAP 9.8 now has an S3 front-end available for production use. This solution is complementary to StorageGRID and is a good fit for smaller S3 requirements where you have excess ONTAP capacity and don’t need a full S3 ILM feature set. In ONTAP, you can serve S3 alongside other protocols in the same SVM; however, S3 buckets can only be served over the S3 protocol. In other words, there is no multi-protocol NAS/S3 access to the same data set. Also, S3 buckets are created as FlexGroups behind the scenes. A data logical interface (LIF) can serve NAS and S3 protocols to clients over the same IP address/DNS name.

  • GA in 9.8
    • TLS 1.2 added
    • Adjustable ports
    • Multi-part upload
    • System Manager integration
    • Bucket access policies
    • Multiple buckets per volume
    • S3 can co-exist with other protocols in the same SVM

This blog will show the setup of S3 in ONTAP 9.8 on my 2-node VSIM with https and certificates. The cluster name is “cmode-prod” and the S3 SVM is called “S3”. The example below is all CLI, but most of it can also be done via REST and the System Manager GUI. My next blog will show how to set up FabricPool to tier from ONTAP aggregates (all HDD or all SSD) to ONTAP S3. ONTAP 9.8 also added HDD support for FabricPool, but note that Flash Pool (SSD-accelerated hybrid HDD aggregates) is not supported. FabricPool tiering to ONTAP is available for up to 300TB of tiered capacity with no license needed. For 300TB+ capacity tiering, StorageGRID is the recommended solution for on-prem S3 with no additional licenses. For a great technical report by TME John Lantz, please see https://www.netapp.com/us/media/tr-4814.pdf

1.1      Create an S3 SVM

vserver create -vserver S3 -subtype default -rootvolume S3_root -rootvolume-security-style unix

vserver show -vserver S3

1.2      Create an S3 LIF Service Policy

-Setting allowed addresses wide open, but you could lock down to a subnet or specific hosts

network interface service-policy create -vserver S3 -policy S3 -allowed-addresses 0.0.0.0/0 -services data-core,data-s3-server,data-cifs,data-nfs

network interface service-policy show -services data-s3-server

1.3      Create an S3 LIF with the Service Policy

-the same LIF can also serve NFS and SMB protocols

network interface create -vserver S3 -lif lif1 -service-policy S3 -role data -address 192.168.150.141 -netmask 255.255.255.0 -home-node cmode-prod-01 -home-port e0c

net int show -vserver S3

1.4      Create a host entry for FQDN

  • For FabricPool in the next blog, we need to use an FQDN, so here we make a manual host entry rather than relying on DNS. This is not best practice, but it is easier for a lab to provide name resolution
    • Alternatively, you can create a DNS A record

vserver services name-service dns hosts create -vserver cmode-prod -address 192.168.150.141 -hostname s3.lab2.local

vserver services name-service dns hosts show

vserver services ns-switch show -vserver cmode-prod -database hosts

net ping -node cmode-prod-01 -destination s3.lab2.local

1.5      Default Route

route create -vserver S3 -destination 0.0.0.0/0 -gateway 192.168.150.2

route show -vserver S3

1.6      DNS Client

dns create -vserver S3 -domains lab2.local -name-servers 192.168.150.12

dns show -vserver S3

1.7      Generate and install a Server certificate on the S3 SVM CA

  • Create vserver CA certificate
  • Create a server certificate that matches the name of the FQDN s3.lab2.local
  • On the S3 client, you will need to create a server-ca certificate using the public-key (.crt file if using openssl or public-cert in the ONTAP output) below.  This will be shown next in the FabricPool lab
  • The example below creates the server certificate in ONTAP

cmode-prod

Show Certificates (there is one server certificate)

security certificate show -vserver S3

Vserver    Serial Number   Certificate Name                       Type

———- ————— ————————————– ————

S3         160AA43A5D674CB4 3.cert.1588262389                     server

    Certificate Authority: 3.cert.1588262389

          Expiration Date: Fri Apr 30 08:59:49 2021

Create a CA certificate on the S3 SVM (this will create 3 certs, root-ca, client-ca and server-ca)

security certificate create -vserver S3 -type root-ca -common-name SVM_CA

The certificate’s generated name for reference: SVM_CA_160AA4596B7767A0_SVM_CA 

security certificate show -vserver S3 -common-name SVM_CA

Vserver    Serial Number   Certificate Name                       Type

———- ————— ————————————– ————

S3         160AA4596B7767A0 SVM_CA_160AA44E38972249_SVM_CA        root-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:01:14 2021

S3         160AA4596B7767A0 SVM_CA_160AA44E38972249               client-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:01:14 2021

S3         160AA4596B7767A0 SVM_CA                                server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:01:14 2021

3 entries were displayed.

1.8      Generate a Certificate Signing Request

  • The common name parameter will be the DNS name of the S3 server.
    • Use this name to create the S3 server as well as when configuring the client side
  • Copy the output of this command and save it. The information will be used in subsequent commands

security certificate generate-csr -common-name s3.lab2.local -size 2048 -country US

Certificate Signing Request :

—–BEGIN CERTIFICATE REQUEST—–

MIICqjCCAZICAQAwJTEWMBQGA1UEAxMNczMubGFiMi5sb2NhbDELMAkGA1UEBhMC

VVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC10qaq6uYpxHmSYMB2

WKSNQHeEjH+oE7csQ8/l4Wf7V0HNLHmXigwNXr4T95fCU8xhuX2uR+E9+5lgCSyj

flRVapI1hsD2PNjElkjX6/529HJygwCywKkF3CzkgL/Agg3JwwlpoNB+rMHUTHzJ

YwEV475sdIiVy6z/ISQYYMeURZhe+IWFdo0g7ExboS/eX6s8eqT7KLiD4JAYRpZW

sDr2m/MzAuX8UnNOjbw5Ezi9XxgRNUyNLcFbIFLs81eosBsj3xZ8BC9QlV+IkuEU

2K6nbenil1Mkojbg53Yuvh1OrUq2eCI9Dpd0VSHmlYMvPFslMDZG70xxDFQ7XMrx

qpu5AgMBAAGgQDA+BgkqhkiG9w0BCQ4xMTAvMA4GA1UdDwEB/wQEAwIFoDAdBgNV

HSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwEwDQYJKoZIhvcNAQELBQADggEBAJIN

hF3bazWzcVxU97Ulsj9/QFc9wOnu7iFUUOl83MOfVG34LwJQtZSXZYMMOPIcB3pk

lxatYZ0ePMKsHX3Wkylgx237bDcZUWJgVGk5MpyQ2i2rbtEc+PbMH0Y7gs0mwfnY

+ENo4TiTUt9uj382olYSNvckkXir94uQerqyw9rshzmJsmWZ5QQSOkuLZJJP3gnq

4oGrN0+QcdeA6B0yQSls7ZEgJdAukCMCKgPPFk+6YNk2QWFcgZX306INRwrr2Iob

vWb6D64lCYddM+U4avadmDeSRFFNsXwTDOVl3rlt+0c0FMb9zPo4LFbd5AfsaRgD

bTX4Z9bifBmoSUw/JDY=

—–END CERTIFICATE REQUEST—–

Private Key :

—–BEGIN PRIVATE KEY—–

MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC10qaq6uYpxHmS

YMB2WKSNQHeEjH+oE7csQ8/l4Wf7V0HNLHmXigwNXr4T95fCU8xhuX2uR+E9+5lg

CSyjflRVapI1hsD2PNjElkjX6/529HJygwCywKkF3CzkgL/Agg3JwwlpoNB+rMHU

THzJYwEV475sdIiVy6z/ISQYYMeURZhe+IWFdo0g7ExboS/eX6s8eqT7KLiD4JAY

RpZWsDr2m/MzAuX8UnNOjbw5Ezi9XxgRNUyNLcFbIFLs81eosBsj3xZ8BC9QlV+I

kuEU2K6nbenil1Mkojbg53Yuvh1OrUq2eCI9Dpd0VSHmlYMvPFslMDZG70xxDFQ7

XMrxqpu5AgMBAAECggEAKasDzQmWA55mKfiIQtbfpwtOGI9GNhOGl9tWip0UglIl

30pA90yIpIvAzbyhB8TCgubKeaU5ZkYBiTOxCirKUuTgaund0NBy8OJsASexIju0

+q8w+sYSNiiWFSu4RfrIBCPxRUa4YT9gEDITKufIeOa/XgV6w7FwjOtgZUHQmxbQ

jYKDVQXSA02/lO6z/Ulhm3lBPbHasbSGeL0+3pd+zvzTJmg9dlKIZQisCXF6895v

gosZxbWeRpI7SNpde7WcoxFB519BMpz718exjZcN2iP/LuUzUF15eHF8grkSyQxF

e+u7CRwGM1k8x5gVOZo8pl+TUEjYmUUc8yjbEKYzDQKBgQDuGcvY0nPBlY+162xc

rPuBqIrheAZBik0hx9bKfkwvKP/vdNVn/3mHxjxT2S5M3QaMlpu2u1UV24V/XwEa

kqwp4ISgmCahogyuARMOd51OTUAsesiU51kseolP010CTcljjJQeRbIqgszKW/Jr

BLcSFtHDZwzxF9qoIB0pcUpMQwKBgQDDfcwQ2NTS7Duf2kXjb+Ok895dlWNmXQwX

CfXLVjG9QE/fNhmnZctBNVvFGQvLYep6c4QNSrkwGx2MKxLfzAnEFQJ1D6sokpgP

c6H1VIY7FqGxJuS17cYNUORHQBjk1QhEorPYyzaY5XlohXUtlH/CKwSTS/KAITSz

23gdSo92UwKBgQCTdY51vgDKx2G1fRQjYU5yQnugn8DgHlMetLElv4pXOsEm/+ia

+/G8UN1T4JF4MPq5Xx0Y0nQjkUzgUWpRlrzhQpdhDln+iGnp6ehvcU0PDXDNG03W

SmFD1q/rrC9SGfK7oHirNubcxR0nxkIgXU8z+MX4in3NYsSckyb8X5lwGQKBgA57

aD2rQoDpnTUnX1wM8ulKY6O9KGLx665dP4czuHWTqRcZE+dxxA/tmwHL7DLB6zPt

ENBHQ9bLe3Hh0wEfRW3wPIFdislzqq4iW9In09XWxF2ySukrVyuvXWnl1rJFEdq7

zuT1kPLctRTIJjkdMiW5OBqNWsahLx1P2eMZne0fAoGBAOgwwdQEjqYo8I9bmMIx

F6LXK6Lggm4U12bIyrxkHF3mQBucOnqvCIK6phSgPzoRXGD+BuHpoHTySCzgoAA7

5rd2jkVZcFXZl66Vf6WeEIMVT6DpszCRziW5IgWXmCwctPq4GBYqHhXhjgvhSzlt

LgDpJQSFTrdG48oXLaZ/YcSP

—–END PRIVATE KEY—–

Note: Please keep a copy of your certificate request and private key for future reference.

1.9      Generate the S3 SVM Server Certificate by signing the CSR using the SVM_CA

  • The ca-serial is pasted from the SVM_CA created above and is displayed below so you can copy/paste from your lab
  • You will paste the public key output from the generate-csr output above
    • Take the CSR generated in the previous step and copy it below
  • We will paste the public key to generate the signed certificate

security certificate show -vserver S3 -type root-ca -fields ca,serial,common-name,cert-name

vserver common-name serial           ca     type    subtype cert-name

——- ———– —————- —— ——- ——- ——————————

S3      SVM_CA      160AA4596B7767A0 SVM_CA root-ca –       SVM_CA_160AA44E38972249_SVM_CA

security certificate sign -vserver S3 -ca SVM_CA -ca-serial 160AA4596B7767A0 -expire-days 360

Please enter Certificate Signing Request(CSR): Press <Enter> when done

—–BEGIN CERTIFICATE REQUEST—–

MIICqjCCAZICAQAwJTEWMBQGA1UEAxMNczMubGFiMi5sb2NhbDELMAkGA1UEBhMC

VVMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC10qaq6uYpxHmSYMB2

WKSNQHeEjH+oE7csQ8/l4Wf7V0HNLHmXigwNXr4T95fCU8xhuX2uR+E9+5lgCSyj

flRVapI1hsD2PNjElkjX6/529HJygwCywKkF3CzkgL/Agg3JwwlpoNB+rMHUTHzJ

YwEV475sdIiVy6z/ISQYYMeURZhe+IWFdo0g7ExboS/eX6s8eqT7KLiD4JAYRpZW

sDr2m/MzAuX8UnNOjbw5Ezi9XxgRNUyNLcFbIFLs81eosBsj3xZ8BC9QlV+IkuEU

2K6nbenil1Mkojbg53Yuvh1OrUq2eCI9Dpd0VSHmlYMvPFslMDZG70xxDFQ7XMrx

qpu5AgMBAAGgQDA+BgkqhkiG9w0BCQ4xMTAvMA4GA1UdDwEB/wQEAwIFoDAdBgNV

HSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwEwDQYJKoZIhvcNAQELBQADggEBAJIN

hF3bazWzcVxU97Ulsj9/QFc9wOnu7iFUUOl83MOfVG34LwJQtZSXZYMMOPIcB3pk

lxatYZ0ePMKsHX3Wkylgx237bDcZUWJgVGk5MpyQ2i2rbtEc+PbMH0Y7gs0mwfnY

+ENo4TiTUt9uj382olYSNvckkXir94uQerqyw9rshzmJsmWZ5QQSOkuLZJJP3gnq

4oGrN0+QcdeA6B0yQSls7ZEgJdAukCMCKgPPFk+6YNk2QWFcgZX306INRwrr2Iob

vWb6D64lCYddM+U4avadmDeSRFFNsXwTDOVl3rlt+0c0FMb9zPo4LFbd5AfsaRgD

bTX4Z9bifBmoSUw/JDY=

—–END CERTIFICATE REQUEST—–

Signed Certificate : (SUPPLIED BY ONTAP)

—–BEGIN CERTIFICATE—–

MIIDQTCCAimgAwIBAgIIFgqklFxRsY4wDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjA2MTVaFw0yMTA0MjUx

NjA2MTVaMCUxFjAUBgNVBAMTDXMzLmxhYjIubG9jYWwxCzAJBgNVBAYTAlVTMIIB

IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtdKmqurmKcR5kmDAdlikjUB3

hIx/qBO3LEPP5eFn+1dBzSx5l4oMDV6+E/eXwlPMYbl9rkfhPfuZYAkso35UVWqS

NYbA9jzYxJZI1+v+dvRycoMAssCpBdws5IC/wIINycMJaaDQfqzB1Ex8yWMBFeO+

bHSIlcus/yEkGGDHlEWYXviFhXaNIOxMW6Ev3l+rPHqk+yi4g+CQGEaWVrA69pvz

MwLl/FJzTo28ORM4vV8YETVMjS3BWyBS7PNXqLAbI98WfAQvUJVfiJLhFNiup23p

4pdTJKI24Od2Lr4dTq1KtngiPQ6XdFUh5pWDLzxbJTA2Ru9McQxUO1zK8aqbuQID

AQABo3wwejAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsG

AQUFBwMBMAkGA1UdEwQCMAAwHQYDVR0OBBYEFFk8TwJioU9LKYLky1rcXoZ4XvpY

MB8GA1UdIwQYMBaAFKDPZZJHZJs+VsIz0FT2Dwqf+ZRmMA0GCSqGSIb3DQEBCwUA

A4IBAQB25slAZ+niVDivqJ7ebaqpuzt05Jg75wDiN/J8nDLWaBUjcbqco8YnrAna

wr9CJr+wj0lONtE79gNNI7K2ZVrbUELFgUQIO+sOb6EavvEZnG0HYUnkAI2I/fh5

Oh6U0C1lPX5L501ATVfK190KyWDmYphL6Zee7fzomDQ20G9j5PtSu7dFA1iG7rPD

vyAnywEtEU4k1iu7QPL5I/MRdqnggpmp+wK+OCQ1tm0pHUiZUzJm6N7pJ/IVwToY

zPyTJ13gmc+FF0P8nbeQknQ3kK9K5Q/S88gq1BTExltUxa8V4K7nhsqOrHeC+L9e

xKLSZpc7/V2+h0khcGtycUj0K+mE

—–END CERTIFICATE—–

1.10      Install the S3 Server Certificate on the SVM that will serve S3

  • Install the certificate (generated in the previous step) on the vserver on which the S3 server will be configured
  • The private key is the one that was generated in the ‘Generate a Certificate Signing Request’ step above

security certificate install -type server -vserver S3

Please enter Certificate: Press <Enter> when done

—–BEGIN CERTIFICATE—–

MIIDQTCCAimgAwIBAgIIFgqklFxRsY4wDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjA2MTVaFw0yMTA0MjUx

NjA2MTVaMCUxFjAUBgNVBAMTDXMzLmxhYjIubG9jYWwxCzAJBgNVBAYTAlVTMIIB

IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAtdKmqurmKcR5kmDAdlikjUB3

hIx/qBO3LEPP5eFn+1dBzSx5l4oMDV6+E/eXwlPMYbl9rkfhPfuZYAkso35UVWqS

NYbA9jzYxJZI1+v+dvRycoMAssCpBdws5IC/wIINycMJaaDQfqzB1Ex8yWMBFeO+

bHSIlcus/yEkGGDHlEWYXviFhXaNIOxMW6Ev3l+rPHqk+yi4g+CQGEaWVrA69pvz

MwLl/FJzTo28ORM4vV8YETVMjS3BWyBS7PNXqLAbI98WfAQvUJVfiJLhFNiup23p

4pdTJKI24Od2Lr4dTq1KtngiPQ6XdFUh5pWDLzxbJTA2Ru9McQxUO1zK8aqbuQID

AQABo3wwejAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsG

AQUFBwMBMAkGA1UdEwQCMAAwHQYDVR0OBBYEFFk8TwJioU9LKYLky1rcXoZ4XvpY

MB8GA1UdIwQYMBaAFKDPZZJHZJs+VsIz0FT2Dwqf+ZRmMA0GCSqGSIb3DQEBCwUA

A4IBAQB25slAZ+niVDivqJ7ebaqpuzt05Jg75wDiN/J8nDLWaBUjcbqco8YnrAna

wr9CJr+wj0lONtE79gNNI7K2ZVrbUELFgUQIO+sOb6EavvEZnG0HYUnkAI2I/fh5

Oh6U0C1lPX5L501ATVfK190KyWDmYphL6Zee7fzomDQ20G9j5PtSu7dFA1iG7rPD

vyAnywEtEU4k1iu7QPL5I/MRdqnggpmp+wK+OCQ1tm0pHUiZUzJm6N7pJ/IVwToY

zPyTJ13gmc+FF0P8nbeQknQ3kK9K5Q/S88gq1BTExltUxa8V4K7nhsqOrHeC+L9e

xKLSZpc7/V2+h0khcGtycUj0K+mE

-----END CERTIFICATE-----

Please enter Private Key: Press <Enter> when done

-----BEGIN PRIVATE KEY-----

MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC10qaq6uYpxHmS

YMB2WKSNQHeEjH+oE7csQ8/l4Wf7V0HNLHmXigwNXr4T95fCU8xhuX2uR+E9+5lg

CSyjflRVapI1hsD2PNjElkjX6/529HJygwCywKkF3CzkgL/Agg3JwwlpoNB+rMHU

THzJYwEV475sdIiVy6z/ISQYYMeURZhe+IWFdo0g7ExboS/eX6s8eqT7KLiD4JAY

RpZWsDr2m/MzAuX8UnNOjbw5Ezi9XxgRNUyNLcFbIFLs81eosBsj3xZ8BC9QlV+I

kuEU2K6nbenil1Mkojbg53Yuvh1OrUq2eCI9Dpd0VSHmlYMvPFslMDZG70xxDFQ7

XMrxqpu5AgMBAAECggEAKasDzQmWA55mKfiIQtbfpwtOGI9GNhOGl9tWip0UglIl

30pA90yIpIvAzbyhB8TCgubKeaU5ZkYBiTOxCirKUuTgaund0NBy8OJsASexIju0

+q8w+sYSNiiWFSu4RfrIBCPxRUa4YT9gEDITKufIeOa/XgV6w7FwjOtgZUHQmxbQ

jYKDVQXSA02/lO6z/Ulhm3lBPbHasbSGeL0+3pd+zvzTJmg9dlKIZQisCXF6895v

gosZxbWeRpI7SNpde7WcoxFB519BMpz718exjZcN2iP/LuUzUF15eHF8grkSyQxF

e+u7CRwGM1k8x5gVOZo8pl+TUEjYmUUc8yjbEKYzDQKBgQDuGcvY0nPBlY+162xc

rPuBqIrheAZBik0hx9bKfkwvKP/vdNVn/3mHxjxT2S5M3QaMlpu2u1UV24V/XwEa

kqwp4ISgmCahogyuARMOd51OTUAsesiU51kseolP010CTcljjJQeRbIqgszKW/Jr

BLcSFtHDZwzxF9qoIB0pcUpMQwKBgQDDfcwQ2NTS7Duf2kXjb+Ok895dlWNmXQwX

CfXLVjG9QE/fNhmnZctBNVvFGQvLYep6c4QNSrkwGx2MKxLfzAnEFQJ1D6sokpgP

c6H1VIY7FqGxJuS17cYNUORHQBjk1QhEorPYyzaY5XlohXUtlH/CKwSTS/KAITSz

23gdSo92UwKBgQCTdY51vgDKx2G1fRQjYU5yQnugn8DgHlMetLElv4pXOsEm/+ia

+/G8UN1T4JF4MPq5Xx0Y0nQjkUzgUWpRlrzhQpdhDln+iGnp6ehvcU0PDXDNG03W

SmFD1q/rrC9SGfK7oHirNubcxR0nxkIgXU8z+MX4in3NYsSckyb8X5lwGQKBgA57

aD2rQoDpnTUnX1wM8ulKY6O9KGLx665dP4czuHWTqRcZE+dxxA/tmwHL7DLB6zPt

ENBHQ9bLe3Hh0wEfRW3wPIFdislzqq4iW9In09XWxF2ySukrVyuvXWnl1rJFEdq7

zuT1kPLctRTIJjkdMiW5OBqNWsahLx1P2eMZne0fAoGBAOgwwdQEjqYo8I9bmMIx

F6LXK6Lggm4U12bIyrxkHF3mQBucOnqvCIK6phSgPzoRXGD+BuHpoHTySCzgoAA7

5rd2jkVZcFXZl66Vf6WeEIMVT6DpszCRziW5IgWXmCwctPq4GBYqHhXhjgvhSzlt

LgDpJQSFTrdG48oXLaZ/YcSP

-----END PRIVATE KEY-----

Enter certificates of certification authorities (CA) which form the certificate chain of the server certificate. This starts with the issuing CA

certificate of the server certificate and can range up to the root CA certificate.

Do you want to continue entering root and/or intermediate certificates {y|n}: n

You should keep a copy of the private key and the CA-signed digital certificate for future reference.

The installed certificate’s CA and serial number for reference:

CA: SVM_CA

serial: 160AA4945C51B18E

The certificate’s generated name for reference: s3.lab2.local
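If the install is ever rejected with a certificate/key mismatch, it is quick to confirm on the client side that the CA-signed certificate really belongs to the private key before pasting them in. Below is a minimal sketch using the Python cryptography package; the file names s3_signed_cert.pem and s3_private_key.pem are placeholders for wherever you saved the PEM blocks above.

# Sketch: confirm the CA-signed certificate and the private key are a matching pair
# (hypothetical file names; paste your own PEM blocks into these files first)
from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("s3_signed_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("s3_private_key.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

# Serialize both public keys to PEM; a matching pair produces identical bytes
cert_pub = cert.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
key_pub = key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

print("Certificate matches private key:", cert_pub == key_pub)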

1.11      Get the public certificate of SVM_CA and save it for Client-side configuration

  • This will be installed on the “cmode-prod” admin (cluster) SVM in the next lab for FabricPool connectivity from the cluster to the S3 SVM

security certificate show -vserver S3 -common-name SVM_CA -type root-ca  -instance

                             Vserver: S3

                    Certificate Name: SVM_CA_160AA4596B7767A0_SVM_CA

          FQDN or Custom Common Name: SVM_CA

        Serial Number of Certificate: 160AA4596B7767A0

               Certificate Authority: SVM_CA

                 Type of Certificate: root-ca

 Size of Requested Certificate(bits): 2048

              Certificate Start Date: Thu Apr 30 09:02:02 2020

         Certificate Expiration Date: Fri Apr 30 09:02:02 2021

              Public Key Certificate: -----BEGIN CERTIFICATE-----

                                      MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

                                      AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

                                      NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

                                      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

                                      tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

                                      q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

                                      s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

                                      Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

                                      3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

                                      gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

                                      ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

                                      +ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

                                      oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

                                      aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

                                      yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

                                      2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

                                      VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

                                      5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

                                      -----END CERTIFICATE-----

        Country Name (2 letter code): US

  State or Province Name (full name):

           Locality Name (e.g. city):

    Organization Name (e.g. company):

    Organization Unit (e.g. section):

        Email Address (Contact Name):

                            Protocol: SSL

                    Hashing Function: SHA256

                             Subtype: -

1.12      Create an Object Store Server

Create an HTTPS (and HTTP) Object Store Server

vserver object-store-server create -vserver S3 -object-store-server s3.lab2.local -status-admin up -is-http-enabled true -is-https-enabled true -certificate-name s3.lab2.local

object-store-server show

ONTAP 9.7 (HTTP only)

vserver object-store-server create -vserver S3 -object-store-server s3.lab2.local -status-admin up -listener-port 80 -comment ""

object-store-server show
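Before touching any S3 client, it can be worth a quick check that the new HTTPS listener answers and that its certificate chains to SVM_CA. A minimal sketch with the Python requests package follows; it assumes the SVM_CA public certificate from step 1.11 is saved locally as svm_ca.pem (a placeholder name) and that s3.lab2.local resolves to the S3 data LIF. An unauthenticated request is expected to fail with an S3-style XML error (often 403), which still proves the endpoint and TLS are working; for the ONTAP 9.7 HTTP-only variant, use http:// and drop the verify argument.

# Sketch: verify the S3 endpoint answers over HTTPS with the SVM_CA-signed certificate
# (svm_ca.pem is a placeholder for the CA certificate saved in step 1.11)
import requests

resp = requests.get("https://s3.lab2.local", verify="svm_ca.pem", timeout=10)
print("HTTP status:", resp.status_code)   # an unauthenticated request typically returns an S3 error such as 403
print(resp.text[:200])                    # first part of the XML error body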

1.13      Create Two Buckets (a FlexGroup Volume is created)

ONTAP 9.8 allows multiple buckets per FlexGroup

vserver object-store-server bucket create -vserver S3 -bucket s3ontap1 -size 100GB -aggr-list cmode_prod_01_aggr2_FP,cmode_prod_02_aggr2_FP

vserver object-store-server bucket create -vserver S3 -bucket s3ontap2 -size 100GB -aggr-list cmode_prod_01_aggr2_FP,cmode_prod_02_aggr2_FP

object-store-server bucket show

Vserver     Bucket          Volume            Size       Encryption

----------- --------------- ----------------- ---------- ----------

S3          s3ontap1        fg_oss_1585321366 100GB      true

S3          s3ontap2        fg_oss_1585321198 100GB      true

2 entries were displayed.

vol show -vserver S3

Vserver   Volume       Aggregate    State      Type       Size  Available Used%

--------- ------------ ------------ ---------- ---- ---------- ---------- -----

S3        S3_root      cmode_prod_02_aggr2_FP online RW   20MB    18.55MB    2%

S3        fg_oss_1585321366 -       online     RW        100GB    94.54GB    0%

S3        fg_oss_1585321198 -       online     RW        100GB    94.77GB    0%

3 entries were displayed.

vol show -vserver S3 -is-constituent true

Vserver   Volume       Aggregate    State      Type       Size  Available Used%

--------- ------------ ------------ ---------- ---- ---------- ---------- -----

S3        fg_oss_1585321366__0001 cmode_prod_01_aggr2_FP online RW 12.50GB 11.81GB  0%

S3        fg_oss_1585321366__0002 cmode_prod_02_aggr2_FP online RW 12.50GB 11.81GB  0%

S3        fg_oss_1585321366__0003 cmode_prod_01_aggr2_FP online RW 12.50GB 11.81GB  0%

S3        fg_oss_1585321366__0004 cmode_prod_02_aggr2_FP online RW 12.50GB 11.82GB  0%

S3        fg_oss_1585321366__0005 cmode_prod_01_aggr2_FP online RW 12.50GB 11.82GB  0%

S3        fg_oss_1585321366__0006 cmode_prod_02_aggr2_FP online RW 12.50GB 11.82GB  0%

S3        fg_oss_1585321366__0007 cmode_prod_01_aggr2_FP online RW 12.50GB 11.82GB  0%

S3        fg_oss_1585321366__0008 cmode_prod_02_aggr2_FP online RW 12.50GB 11.82GB  0%

S3        fg_oss_1585321198__0001 cmode_prod_01_aggr2_FP online RW 12.50GB 11.84GB  0%

S3        fg_oss_1585321198__0002 cmode_prod_02_aggr2_FP online RW 12.50GB 11.84GB  0%

S3        fg_oss_1585321198__0003 cmode_prod_01_aggr2_FP online RW 12.50GB 11.84GB  0%

S3        fg_oss_1585321198__0004 cmode_prod_02_aggr2_FP online RW 12.50GB 11.85GB  0%

S3        fg_oss_1585321198__0005 cmode_prod_01_aggr2_FP online RW 12.50GB 11.85GB  0%

S3        fg_oss_1585321198__0006 cmode_prod_02_aggr2_FP online RW 12.50GB 11.85GB  0%

S3        fg_oss_1585321198__0007 cmode_prod_01_aggr2_FP online RW 12.50GB 11.85GB  0%

S3        fg_oss_1585321198__0008 cmode_prod_02_aggr2_FP online RW 12.50GB 11.85GB  0%

16 entries were displayed.

1.14      Create a Bucket Policy with Access to Everyone for the Two Buckets

Create the policy for both Buckets

vserver object-store-server bucket policy show

vserver object-store-server bucket policy add-statement -bucket s3ontap1 -effect allow -action GetObject,PutObject,DeleteObject,ListBucket,GetBucketAcl,GetObjectAcl,ListBucketMultipartUploads,ListMultipartUploadParts -principal - -resource s3ontap1,s3ontap1/*

vserver object-store-server bucket policy add-statement -bucket s3ontap2 -effect allow -action GetObject,PutObject,DeleteObject,ListBucket,GetBucketAcl,GetObjectAcl,ListBucketMultipartUploads,ListMultipartUploadParts -principal - -resource s3ontap2,s3ontap2/*

vserver object-store-server bucket policy show

1.15      Create an s3admin User

  • ONTAP 9.7 starts with no object-store users
  • ONTAP 9.8 has a default root user with no keys
    • Create root keys for S3 connectivity
  • ONTAP 9.8 adds group support
  • Note that S3 clients do not use the user name; they authenticate with only the access and secret keys (see the client sketch at the end of this section)
  • The secret key is visible only in advanced/diagnostic privilege mode

object-store-server user show            # 9.8 adds a default root user with no keys

vserver object-store-server user create -vserver S3 -user s3admin

set diag                                               # answer “y” at the prompt (diagnostic privilege shows the secret key along with the access key)

object-store-server user show -user s3admin

Vserver     User            ID        Access Key          Secret Key

----------- --------------- --------- ------------------- -------------------

S3          s3admin         1         3jj9_wnPs7IG0X1d57o83_193g_SOTsT2QKCg3_Kj27qM7x3JrP_bWUA_A02N8QZPmHc_Xk7PB48Vcg7vWAAtsN5B8P_4P2_5Ln5KAIUxA_S9ry7Xk324PsDZ0DMppME

2APh4pW_3SnY2cAqnsg3d22A4ylG6zx_93vpN6cN4g0sBgSrJ9BfPsgwZ_p93Q8cTBsQ97__e6l6WZql3rfY5V3QMeJT61CZh5f3Jzk0A38Nk4Hz7Hz6AX_5dssA0C0S

set adv
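The keys above can be verified straight away from any S3 SDK. Below is a minimal boto3 sketch; the key strings are placeholders for your own s3admin output, and it assumes s3.lab2.local resolves to the S3 data LIF and that the SVM_CA certificate from step 1.11 is saved as svm_ca.pem.

# Sketch: connect to the ONTAP S3 server with the access/secret keys shown above
# (placeholder keys and CA file name; substitute your own values)
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.lab2.local",
    aws_access_key_id="S3ADMIN_ACCESS_KEY",
    aws_secret_access_key="S3ADMIN_SECRET_KEY",
    config=Config(signature_version="s3v4"),   # ONTAP S3 expects Signature V4
    verify="svm_ca.pem",                       # SVM_CA public certificate from step 1.11
)

# The "everyone" bucket policy from step 1.14 grants ListBucket on s3ontap1
for obj in s3.list_objects_v2(Bucket="s3ontap1").get("Contents", []):
    print(obj["Key"], obj["Size"])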

1.16      Bucket Permissions Examples (Users, Allow/Deny Policies (bucket/group), Groups, Conditions)

Create 4x additional users

vserver object-store-server user create -user s3user1

vserver object-store-server user create -user s3user2

vserver object-store-server user create -user s3user3

vserver object-store-server user create -user s3user4

object-store-server user show

Create a test bucket

vserver object-store-server bucket create -vserver S3 -bucket test-bucket -size 100GB -aggr-list cmode_prod_01_aggr2_FP,cmode_prod_02_aggr2_FP

object-store-server bucket show

Create a bucket policy which provides s3 users “s3user1” and “s3user2” access to all the bucket resources

vserver object-store-server bucket policy add-statement -bucket test-bucket -effect allow -action GetObject,PutObject,DeleteObject,ListBucket,GetBucketAcl,GetObjectAcl,ListBucketMultipartUploads,ListMultipartUploadParts -principal s3user1,s3user2 -resource test-bucket,test-bucket/* -index 1

Create a bucket policy which denies s3 user “s3user4” access to the bucket resources (exercised in the sketch at the end of this section)

vserver object-store-server bucket policy add-statement -bucket test-bucket -effect deny -action GetObject,PutObject,DeleteObject,ListBucket,GetBucketAcl,GetObjectAcl,ListBucketMultipartUploads,ListMultipartUploadParts -principal s3user4 -resource test-bucket,test-bucket/* -index 2

Show Bucket Policies

vserver object-store-server bucket policy show

Create a Group Policy

vserver object-store-server policy create -policy policy1

vserver object-store-server policy show

Associate a policy statement with the policy – ‘policy1’

vserver object-store-server policy add-statement -policy policy1 -effect allow -action GetObject,PutObject,DeleteObject,ListBucket,GetBucketAcl,GetObjectAcl,ListBucketMultipartUploads,ListMultipartUploadParts -resource test-bucket,test-bucket/*

vserver object-store-server policy show

vserver object-store-server policy show-statements

Create a group with 2x s3 users and a policy

vserver object-store-server group create -name group1 -users s3user1,s3user2 -policies policy1

vserver object-store-server group show

Create a bucket policy which allows access to public users (principal = “*”)

vserver object-store-server bucket policy add-statement -bucket test-bucket -effect allow -action GetObject,PutObject,DeleteObject,ListBucket,GetBucketAcl,GetObjectAcl,ListBucketMultipartUploads,ListMultipartUploadParts -principal * -resource test-bucket,test-bucket/* -index 3

vserver object-store-server bucket policy show

Create bucket policy conditions by user and subnet

vserver object-store-server bucket policy-statement-condition create -bucket test-bucket -operator string-equals -index 1 -usernames s3user1 

vserver object-store-server bucket policy-statement-condition create -bucket test-bucket -operator ip-address -index 1 -source-ips 192.168.150.0/24

vserver object-store-server bucket policy-statement-condition show
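A quick way to confirm the allow and deny statements behave as intended is to exercise test-bucket with two of the users. The sketch below assumes the access/secret keys for s3user1 and s3user4 were captured from object-store-server user show (placeholder strings here), that the request is made from the 192.168.150.0/24 subnet used in the condition, and that the endpoint and CA file match the earlier sketches.

# Sketch: s3user1 (allowed, from the permitted subnet) can write; s3user4 (denied) is rejected
# (placeholder keys; substitute your own values from 'object-store-server user show')
import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

def client_for(access_key, secret_key):
    return boto3.client(
        "s3",
        endpoint_url="https://s3.lab2.local",
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        config=Config(signature_version="s3v4"),
        verify="svm_ca.pem",
    )

allowed = client_for("S3USER1_ACCESS_KEY", "S3USER1_SECRET_KEY")
denied = client_for("S3USER4_ACCESS_KEY", "S3USER4_SECRET_KEY")

allowed.put_object(Bucket="test-bucket", Key="probe.txt", Body=b"hello")   # expected to succeed

try:
    denied.get_object(Bucket="test-bucket", Key="probe.txt")               # expected to be rejected
except ClientError as e:
    print("s3user4 blocked as expected:", e.response["Error"]["Code"])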

1.17      Regenerate User Keys

  • To regenerate the access key and secret key for the s3admin user (external clients never use the username, only the access and secret keys)
  • Generate the keys for the root user
  • Note that the FabricPool lab will use root rather than s3admin, since we did not add allow permissions for s3admin and root does not need them

object-store-server user show

object-store-server user regenerate-keys -vserver S3 -user s3admin

object-store-server user regenerate-keys -vserver S3 -user root

object-store-server user show -user s3admin,root

Vserver     User            ID        Access Key          Secret Key

----------- --------------- --------- ------------------- -------------------

S3          root            0         dy5ErHDEqr5pYsldqAsH3_0gQ5T8N98QhC__9N_6TZUrEiq0CEPD8aehNd_YuY_8ipDPQ_t9XDJh3x_O9j2rDb63IE7d_C5n895Hc8p2jj3gPh8T6AnAsUHrO3jHfPg3

DO670LG4xqc__pN9_P66ciCjY__20hsshXqRPZlq5aH_Wb_CC_y_5k93qgQ39CXH_R564_24Nf_C4Ae0Ny6Sd01_Zc3zX4x_H9c0X131JtTo5xBcPOpxXAie6X88zC7a

   Comment: Root User

S3          s3admin         1         ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9

rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

2 entries were displayed.

1.18      S3 Browser

  • You can use the S3 Browser client for Amazon S3 installed on the Windows Desktop VM (a programmatic boto3 equivalent is sketched after the steps below)
  • Use the “root” account to enumerate the buckets

Windows Server  S3 Browser

  • Enter “ONTAP-S3” for the account name
  • Choose “S3 Compatible Storage”
  • REST Endpoint: “192.168.150.141”
  • Access Key ID: paste the root access key from the output above (yours will differ from the example)
  • Secret Access Key: paste the root secret key from the output above (yours will differ from the example)
  • Leave “Use secure transfer (SSL/TLS)” checked (9.8+)
  • Click the “Advanced S3-compatible storage settings” link on the bottom left
  • Change Signature version to “Signature V4” and click the “Close” button
  • Click the “Add new account” button
  • S3 Browser will enumerate the ONTAP buckets s3ontap1 and s3ontap2
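The same enumeration, plus a simple write/read round trip, can be scripted with boto3. A sketch assuming the root access/secret keys from step 1.17 (placeholders below); verify=False mirrors connecting by IP address as S3 Browser does above, or point verify= at the saved svm_ca.pem and use the hostname to validate the certificate.

# Sketch: programmatic equivalent of the S3 Browser setup above, using the root keys
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://192.168.150.141",
    aws_access_key_id="ROOT_ACCESS_KEY",
    aws_secret_access_key="ROOT_SECRET_KEY",
    config=Config(signature_version="s3v4"),   # matches the "Signature V4" setting in S3 Browser
    verify=False,                              # connecting by IP; use the hostname plus svm_ca.pem to verify the cert
)

print([b["Name"] for b in s3.list_buckets()["Buckets"]])    # expect the buckets created earlier

s3.put_object(Bucket="s3ontap1", Key="hello.txt", Body=b"hello from boto3")
print(s3.get_object(Bucket="s3ontap1", Key="hello.txt")["Body"].read())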

ONTAP Null Quotas Tip Revisited: Real-time File Count Reporting for Non-qtree (Volume) Data

This blog is a quick workaround and addition to my earlier blog “NetApp ONTAP Tip – Quick File Count Reporting with Null Quotas”. A customer noticed that null qtree quotas only report file counts and space usage on qtrees, but they also wanted to see file counts for the base volume (non-qtree) data. Below is a demonstration using both user and group null quotas for all users and groups, along with the null qtree quota from the earlier example. Enabling null quotas for either all users or all groups gives the same result, but all three null methods are shown. A null user quota by itself provides all real-time file counts.

Note that user or group quotas are necessary to see file counts in the base volume (non-qtree data). They report all paths (base volume plus qtrees), so you need to subtract the qtree file counts to get the standalone base volume file count. If you do not have qtrees, the total is simply the base volume file count. The real-time file count and usage report is useful when a du command may run for hours or days.

ONTAP continually evolves, and I look forward to the new native analytics features we are testing in our lab. This post will be replaced by that method soon.

The example below has one volume named “quota_vol1” with one qtree named “prod”.

ONTAP

Create a Quota Policy called “null”

quota policy create -vserver quotas -policy-name null

Create Quota Policy Rules using a dash "-" (no limit) to track without enforcement

Create Null Tree, User and Group Quotas for all users/groups/trees

Tree

quota policy rule create -vserver quotas -policy-name null -volume quota_vol1 -type tree -target "" -disk-limit - -file-limit - -threshold -

User

quota policy rule create -vserver quotas -policy-name null -volume quota_vol1 -type user -target "" -disk-limit - -file-limit - -threshold - -qtree ""

Group (you likely would use user or group, not both)

quota policy rule create -vserver quotas -policy-name null -volume quota_vol1 -type group -target "" -disk-limit - -file-limit - -threshold - -qtree ""

Modify the SVM to use the quota policy (only one policy per SVM is active at a time; up to five are supported, one active and four inactive)

vserver modify -vserver quotas -quota-policy null

Enable Quotas on the Volume

quota on -vserver quotas -volume quota_vol1

Show Quotas and Report

quota show

quota show -state on

Quota report to check the file count and space used in real time

quota report -vserver quotas

As seen below, a user null quota alone provides ALL of the information needed; the group and tree quotas are redundant when calculating file counts for the volume and qtrees (the arithmetic is sketched after the list below)

  • Tree: we have 43,328 files
    • 43,327 files plus the parent “.” entry of the “prod” qtree
  • User/Group: we have 67,844 total files
    • 67,842 files plus 2x parent “.” entries
    • For the non-qtree, base volume files
      • 67,844 volume files minus 43,328 qtree files = 24,516
      • 24,515 files plus the base volume’s parent “.” entry
    • Note that with additional qtrees, you would subtract all of the qtree counts from the volume total
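The subtraction generalizes to any number of qtrees; a tiny sketch of the arithmetic, using the counts from the report above:

# Sketch: derive the non-qtree (base volume) file count from the null quota report
qtree_files = {"prod": 43328}      # tree quota file counts (each includes the qtree's own "." entry)
user_quota_total = 67844           # user (or group) null quota total for the whole volume

base_volume_files = user_quota_total - sum(qtree_files.values())
print(base_volume_files)           # 24516 = 24,515 files plus the volume's own "." entry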

ONTAP Multifactor Authentication (MFA) for SSH

In my last blog, 2-Factor GUI authentication with SAML IdP was demonstrated. To complete 2-Factor with the command line, this blog covers multifactor authentication (MFA) for the CLI. 

Many ONTAP users already have a public key set up for passwordless ssh. If you do, MFA is easy, since MFA uses both a password and a public key: just add the secondary method in step 3 below and it is set up. We will cover how to set up a public key for a Linux or MacOS client with ssh-keygen, and for Windows with puttygen.exe. We will use the usernames “admin” for public key and “admin2” for MFA against an ONTAP 9.7P4 cluster with a cluster management IP of 192.168.150.230.

1.    PUBLICKEY password-less SSH (Linux/MacOS)

  • This feature does NOT work when FIPS is enabled
  • Using “admin” for the example below
  • ssh-keygen is used to build the keys

Linux client

ssh-keygen -t rsa

When asked for a ‘passphrase’, do not enter one; press “ENTER” three times

Test Password Connectivity to the NetApp cluster (before public key setup)

ssh admin@192.168.150.230 security login show

“yes” to accept the fingerprint

Enter the password (we won’t need this after we are done)

cat ~/.ssh/id_rsa.pub               # we will paste this into the cluster next

ONTAP

Enable Public/Private SSH Keys for passwordless access for the admin user

security login create -username admin -application ssh -authmethod publickey -role admin

security login show

Create the public key (pasted from above)

security login publickey create -username admin -index 1 -publickey "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOODKTWGiWHrx2CDfblqd4QG7PGqRlb4I9KQ3uSu+mOEuDls7+HffkdRwieiSnG1fM8g2D/HYeSE7vf7ybkCDfbKyGCJKQfot+cmr08ELFiR5f8qi6eQFYvgfQuOUj2G3UzcUby/soDVnupye4eJKKld5JbiWD6zJt8l17trq20s9I8CWX6KTXyOWTYd/TXF9Rt1pDfPWX9cUDZTM3xFPWJUPCfgw3/5IgCm2oBhcXeC6XDNbRIUcxQYT20J1HaK8ER20PU9pzAkH3LDBnLjm62Ow9g9l+2gwGoU/7XAMva3IPj415WiC95JNoel7PnnlXd1G8fxxqBTTcinZaPzTRKG//m+bHWXZcPfHwy/qF3qHO9sJY/0EZlGcJYq1EMriZxJiOpFtcaQzSkKkxcTa/z3QPVpVaw5u+w5lEXZfl0BLPXuyRmatN+BEDnIoUVGL67q/56+ll8yPhStBoxgFe6EDd+k8Eoy8tht3Qa09Y3bQ3fm9U7AN4eFA/lGkqQUM= root@linux1.lab2.local"

NOTE: you can also load the key from a URI

::> security login publickey load-from-uri -username admin -uri file://localhost/mroot/id_rsa.pub [-overwrite false]     (or -uri http://ip/path/id_rsa.pub)

  • For file://, scp the public key file to /mroot on one node
    • OR create the key for the user by copy/paste (the load-from-uri method shown above can be easier)

Confirm user and key

security login publickey show -username admin

Linux client

Test Connectivity from Linux to the NetApp cluster without a password

ssh admin@192.168.150.230 security login show
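The same passwordless test can be scripted, which is handy for automation against the cluster. A minimal sketch with the Python paramiko package; it assumes the key generated by ssh-keygen above lives at /root/.ssh/id_rsa (adjust the path for your user).

# Sketch: key-based (passwordless) login to the cluster and run a CLI command
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # lab convenience; pin host keys in production
client.connect("192.168.150.230", username="admin",
               key_filename="/root/.ssh/id_rsa", look_for_keys=False)

stdin, stdout, stderr = client.exec_command("security login show")
print(stdout.read().decode())
client.close()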

2.    PUBLICKEY password-less SSH (Windows PuTTy puttygen.exe/plink.exe)

  • This feature does NOT work when FIPS is enabled
  • Using “admin” again for this example
  • Windows PuTTY using plink.exe

Windows Client with PuTTy

To generate the keys, use puttygen.exe

  • Open puttygen.exe in C:\Program Files\PuTTY
  • Leave the default “RSA” radio button checked (this is SSH-2 RSA)
  • Use the default 2048-bit key size
    • The key size on the host does not have to match that of the storage system but it does have to be larger.
  • Click Generate. You will be prompted to move the mouse in the key area.
  • DO NOT enter a passphrase when generating the keys.
  • Once the keys have been generated, save them to the C:\Program Files\PuTTY (plink.exe) directory
  • Click “Save public key”        
    • Enter rsa_pub_clientplink_key
  • Click “Save private key”       
    • Click “Yes” to save without a passphrase
    • Enter: rsa_priv_plink_key.ppk
  • Copy the text in the “Public key for pasting into OpenSSH authorized_keys” box, but delete the “rsa-key-CCYYMMDD” comment at the end
  • Open WordPad and paste the key
    • DELETE the “rsa-key-ccyymmdd” comment at the end so the key ends with no trailing spaces
    • The authorized_keys file does not take any line breaks, so do not edit this file with Notepad; use WordPad or TextPad and leave NO spaces or blank lines at the end
  • Save as “authorized_keys” in the PuTTy directory.  Choose “Text Document”
  • Rename the file removing the “.txt” file extension

From a Command Prompt or PowerShell window, test connectivity to the NetApp cluster and confirm that it asks for a password (this verifies non-interactive ssh works before the key is installed)

plink.exe -ssh admin@192.168.150.230 security login show

ONTAP

Create the public key (pasted from above; make sure the PuTTY authorized_keys content matches the key below)

security login publickey create -username admin -publickey "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAiG2YhcxQVTgRT/rLZZvvN+8yYXQwurAodG2Qn6+JHVGRCK1MO5VNbvl0gjZlaX8vsdN3LniH+4fd0v7Iej+e/I1TbPl+p9VLlRVYV1dex6JaDjrgdzYK3GGXQBfkpGpdqaCrKNTtNpEDgg3EJFbrDTW4dym9GuULyyHbiZS0iwtGSkU/qcaaSGHeidwq69UrLm6RH8NJNzCvMzMq2tgm6x3pnDsGd3GduRqKgOTDyScSu38A3HCLKjPSXYP6MJBXhDsczUYiRrFk1EMJFsk0j3k1PsYWPglf8HC4QOKJO5Q31STjKXDtjJfWGnSDELXnWf0XTdMNN2KChqGf0jorTQ=="

Confirm the user and keys (we will have 2 index entries, one for Linux and one for Windows)

security login publickey show -username admin

Windows Client

Test Connectivity from plink.exe to the NetApp cluster without a password (you will see “Access granted” instead of a “Password:” prompt)

plink.exe admin@192.168.150.230 security login show

You can also use PuTTY, which will no longer require a password

3.    2-Factor SSH CLI with MFA (password AND PUBLICKEY)

ONTAP

Create a new user called “admin2” for 2-factor login, using the password “Netapp1!”

security login create -username admin2 -application ssh -authmethod password -role admin -second-authentication-method publickey

Enter the password twice “Netapp1!”

security login show

Create the public keys for admin2, using the RSA public keys from the sections above

For Linux

security login publickey create -username admin2 -index 1 -publickey "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOODKTWGiWHrx2CDfblqd4QG7PGqRlb4I9KQ3uSu+mOEuDls7+HffkdRwieiSnG1fM8g2D/HYeSE7vf7ybkCDfbKyGCJKQfot+cmr08ELFiR5f8qi6eQFYvgfQuOUj2G3UzcUby/soDVnupye4eJKKld5JbiWD6zJt8l17trq20s9I8CWX6KTXyOWTYd/TXF9Rt1pDfPWX9cUDZTM3xFPWJUPCfgw3/5IgCm2oBhcXeC6XDNbRIUcxQYT20J1HaK8ER20PU9pzAkH3LDBnLjm62Ow9g9l+2gwGoU/7XAMva3IPj415WiC95JNoel7PnnlXd1G8fxxqBTTcinZaPzTRKG//m+bHWXZcPfHwy/qF3qHO9sJY/0EZlGcJYq1EMriZxJiOpFtcaQzSkKkxcTa/z3QPVpVaw5u+w5lEXZfl0BLPXuyRmatN+BEDnIoUVGL67q/56+ll8yPhStBoxgFe6EDd+k8Eoy8tht3Qa09Y3bQ3fm9U7AN4eFA/lGkqQUM= root@linux1.lab2.local"

For Windows plink.exe

security login publickey create -username admin2 -index 2 -publickey "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAiG2YhcxQVTgRT/rLZZvvN+8yYXQwurAodG2Qn6+JHVGRCK1MO5VNbvl0gjZlaX8vsdN3LniH+4fd0v7Iej+e/I1TbPl+p9VLlRVYV1dex6JaDjrgdzYK3GGXQBfkpGpdqaCrKNTtNpEDgg3EJFbrDTW4dym9GuULyyHbiZS0iwtGSkU/qcaaSGHeidwq69UrLm6RH8NJNzCvMzMq2tgm6x3pnDsGd3GduRqKgOTDyScSu38A3HCLKjPSXYP6MJBXhDsczUYiRrFk1EMJFsk0j3k1PsYWPglf8HC4QOKJO5Q31STjKXDtjJfWGnSDELXnWf0XTdMNN2KChqGf0jorTQ=="

security login publickey show -username admin2

Test Logins that require both a password and the publickey

Linux

ssh admin2@192.168.150.230 security login show                 # Netapp1!

Windows plink.exe

plink.exe admin2@192.168.150.230 security login show         # Netapp1!
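The 2-factor login can also be scripted. The sketch below passes both the password and the private key to the Python paramiko package and assumes its handling of the SSH partial-success flow (public key, then password) works against your ONTAP release; if it does not in your environment, stay with the interactive ssh/plink tests above. The key path is the Linux key from section 1.

# Sketch: scripted 2-factor login for admin2 (password AND public key)
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.150.230", username="admin2",
               password="Netapp1!",                  # first factor
               key_filename="/root/.ssh/id_rsa",     # second factor (public key created above)
               look_for_keys=False, allow_agent=False)

stdin, stdout, stderr = client.exec_command("security login show")
print(stdout.read().decode())
client.close()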