NetApp ONTAP 9.8 – FabricPool Tiering to ONTAP S3

In my prior blog, ONTAP S3 was configured; this post builds on that work by connecting ONTAP aggregates to the S3 capacity tier with FabricPool. It covers FabricPool setup against ONTAP S3, along with additional features such as tagging, mirroring, and tiering policies. The cluster name is “cmode-prod” and the cluster will connect to an SVM named “S3” that resides on the same cluster. The S3 SVM could instead live on a different cluster, provided there is connectivity from the cluster InterCluster LIF(s) to the S3 SVM data LIF(s). Note that REST, the System Manager GUI, and Cloud Manager (Cloud Tiering) are also excellent tools for easy FabricPool configuration. For FabricPool best practices, please see John Lantz’s TR at https://www.netapp.com/us/media/tr-4598.pdf. The NetApp TRs and documentation at https://docs.netapp.com were used for the setup below.

1     FabricPool Configuration

1.1      Check Licenses

  • Free use
    • When tiering to StorageGRID or to ONTAP S3 (up to 300TB)
    • When CVO (Cloud Volumes ONTAP) tiers to its local cloud vendor’s object store
  • A per-TB license is required for on-prem tiering to non-StorageGRID/ONTAP object stores

cmode-prod

license show

license show-status

1.2      Install a Server-CA Certificate for TLS (for https S3 access)

  • In the S3 blog posted earlier, we created a server certificate that matched FQDN name s3.lab2.local.
  • We will use the public key from that S3 cert to create a server-ca certificate on cmode-prod, which acts as an S3 client of the S3 SVM’s S3 server
  • On-prem S3 solutions require certificates to be installed in ONTAP and on the object store
    • StorageGRID
    • IBM Cloud Object Storage (formerly Cleversafe)
  • In the next step below we will create a certificate for ONTAP_S3
  • Parameter to bypass cert validation for private cloud (StorageGRID) NOT RECOMMENDED
    • -is-certificate-validation-enabled false
  • If you do not install the server-ca certificate (the S3 server’s root-ca public key) on cmode-prod, creating the object-store connection fails with the error below

Error: command failed: Cannot verify availability of the object store from node cmode-prod-01. Reason: Cannot verify the certificate given by the object store server. It is possible that the certificate has not been installed on the cluster. Use the ‘security certificate install -type server-ca’ command to install it.

cmode-prod

Show the root-ca public cert of the S3 default server cert we created in the prior blog (we will copy/paste it from BEGIN CERTIFICATE to END CERTIFICATE)

security certificate show -vserver S3 -common-name SVM_CA -type root-ca  -instance

Your public key will be different

                             Vserver: S3

                    Certificate Name: SVM_CA_160AA44E38972249_SVM_CA

          FQDN or Custom Common Name: SVM_CA

        Serial Number of Certificate: 160AA44E38972249

               Certificate Authority: SVM_CA

                 Type of Certificate: root-ca

 Size of Requested Certificate(bits): 2048

              Certificate Start Date: Thu Apr 30 09:01:14 2020

         Certificate Expiration Date: Fri Apr 30 09:01:14 2021

              Public Key Certificate: -----BEGIN CERTIFICATE-----

                                      MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

                                      AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

                                      NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

                                      SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

                                      tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

                                      q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

                                      s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

                                      Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

                                      3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

                                      gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

                                      ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

                                      +ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

                                      oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

                                      aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

                                      yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

                                      2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

                                      VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

                                      5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

                                      -----END CERTIFICATE-----

        Country Name (2 letter code): US

  State or Province Name (full name):

           Locality Name (e.g. city):

    Organization Name (e.g. company):

    Organization Unit (e.g. section):

        Email Address (Contact Name):

                            Protocol: SSL

                    Hashing Function: SHA256

                             Subtype: -

security certificate show -vserver S3 -common-name SVM_CA -type root-ca  -fields public-cert

-----BEGIN CERTIFICATE-----

MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

+ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

-----END CERTIFICATE-----
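Before pasting the PEM into `security certificate install`, it can help to sanity-check that the copied text decodes cleanly, since a dropped line or stray character is a common cause of install failures. A minimal Python sketch using only the standard library (the PEM is the SVM_CA certificate shown above):

```python
import ssl

# The SVM_CA root-ca public certificate copied from the output above.
PEM = """-----BEGIN CERTIFICATE-----
MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE
AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx
NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG
SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu
tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk
q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A
s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196
Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf
3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw
gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP
ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf
+ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn
oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X
aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6
yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ
2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5
VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM
5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==
-----END CERTIFICATE-----"""

# PEM_cert_to_DER_cert strips the BEGIN/END lines and base64-decodes the
# body; it raises ValueError if the header/footer is damaged and
# binascii.Error if the base64 body is corrupt.
der = ssl.PEM_cert_to_DER_cert(PEM)

# A well-formed X.509 certificate is a DER SEQUENCE (first byte 0x30).
assert der[0] == 0x30, "not a DER-encoded certificate"
print(f"OK: {len(der)} DER bytes")
```

If the check passes, the PEM is safe to paste at the `Please enter Certificate:` prompt.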

security certificate show -vserver S3 -common-name s3.lab2.local -fields public-cert

Install the S3 server’s root-ca certificate as a server-ca on cmode-prod (paste the output from above)

security certificate install -type server-ca -vserver cmode-prod -cert-name s3.lab2.local

Please enter Certificate: Press <Enter> when done

-----BEGIN CERTIFICATE-----

MIIDUTCCAjmgAwIBAgIIFgqkWWt3Z6AwDQYJKoZIhvcNAQELBQAwHjEPMA0GA1UE

AxQGU1ZNX0NBMQswCQYDVQQGEwJVUzAeFw0yMDA0MzAxNjAyMDJaFw0yMTA0MzAx

NjAyMDJaMB4xDzANBgNVBAMUBlNWTV9DQTELMAkGA1UEBhMCVVMwggEiMA0GCSqG

SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDw9WuyZOUUInxU0EZKp34yQpctDFbtHAgu

tvoyhwzCd7rhQjH4WIqmkcl3f8TAkdOe6ExMgq7+fT6B8jHKDWfu6sXrmoXg61Bk

q09uD8TDXzNg07HQPglJV0FWwIhnG5965Dx7/hvkKXas59lk2XwSrIGXbp1/K32A

s1/ywUr3vRYWkMLq/p3RBgIK0bszyXgS26XXIgPSZUdMgCiZxf7ErVfPZMLnT196

Ff0KFrqsjVleGyMQpULt4H8aHtYPnqjhi1ofvng5/8uhl6FhSF66tVVeSE1xjdMf

3xOy5eKteySWn+52fpcLGjvWjea+Z5ZR7MaWw0150fp19uDlGV4PAgMBAAGjgZIw

gY8wDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYEFKDP

ZZJHZJs+VsIz0FT2Dwqf+ZRmME0GA1UdIwRGMESAFKDPZZJHZJs+VsIz0FT2Dwqf

+ZRmoSKkIDAeMQ8wDQYDVQQDFAZTVk1fQ0ExCzAJBgNVBAYTAlVTgggWCqRZa3dn

oDANBgkqhkiG9w0BAQsFAAOCAQEAk7mHgpW4HZcod6DdOua4EB8GdsSM5vQkgP3X

aq7Hie8SRjL8vOgZ2OIGre+LXudpVS1jZMCb0igbD0ncbGn36ycLqoNq+lrAPfj6

yzk9DoTuWZU62/D4gTSieNm3BMB6NMptthFOsApEe08MLQk1/qefDQb9FvfTStSQ

2THEFaHKzIs20UHER+a0B8h8oV2cCu/A7a14k4mkQAIfDK/xfNXW5J/BE8TkKHV5

VXCnuPIQR41PwC8HvUKmKITQpx/KTMxqSLTQomyc3r4xZdKQr7yOQP69Z9XPW2pM

5KMSsJPnCoad+ZGR5mYUtOwFfuM96fvqHC/I+uRbfi3HG2UhtQ==

-----END CERTIFICATE-----

[enter]

You should keep a copy of the CA-signed digital certificate for future reference.

The installed certificate’s CA and serial number for reference:

CA: SVM_CA

serial: 160AA4596B7767A0

security certificate show -cert-name s3.lab2.local

Vserver    Serial Number   Certificate Name                       Type

---------- --------------- -------------------------------------- ------------

S3         160AA4945C51B18E s3.lab2.local                         server

    Certificate Authority: SVM_CA

          Expiration Date: Sun Apr 25 09:06:15 2021

cmode-prod 160AA4596B7767A0 s3.lab2.local                         server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

2 entries were displayed.

security certificate show -common-name SVM_CA

Vserver    Serial Number   Certificate Name                       Type

---------- --------------- -------------------------------------- ------------

S3         160AA4596B7767A0 SVM_CA_160AA4596B7767A0_SVM_CA        root-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

S3         160AA4596B7767A0 SVM_CA_160AA4596B7767A0               client-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

S3         160AA4596B7767A0 SVM_CA                                server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

cmode-prod 160AA4596B7767A0 s3.lab2.local                         server-ca

    Certificate Authority: SVM_CA

          Expiration Date: Fri Apr 30 09:02:02 2021

4 entries were displayed.

1.3      Configure a Proxy for S3 (to access Public Cloud)

  • Commands for reference below
  • When configuring the object store with “object-store config create” you will specify “-use-http-proxy true”

REFERENCE

network ipspace show

vserver http-proxy create -ipspace <ipspace> -server <proxy-server-FQDN> -port <port>

vserver http-proxy show 

1.4      S3 Object Store Bucket and Account Information

  • Output from the prior blog
    • We created the s3admin user and two buckets (s3ontap1 and s3ontap2) in the prior blog
    • We will use both buckets to demonstrate the object mirror feature
  • FabricPool tiering to ONTAP S3 is supported for 300TB or less in ONTAP 9.8
    • There is an ONTAP_S3 provider type in the object-store config create command

cmode-prod

Show current S3

network interface show -vserver S3

            Logical    Status     Network            Current       Current Is

Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home

----------- ---------- ---------- ------------------ ------------- ------- ----

S3

            lif1         up/up    192.168.150.141/24 cmode-prod-01 e0c     true

vserver services name-service dns hosts show

Vserver    Address        Hostname        Aliases

---------- -------------- --------------- ----------------------

cmode-prod 192.168.150.141 s3.lab2.local  -

object-store-server user show -user s3admin             # your keys will be different

Use the “s3admin” account keys

Vserver     User            ID        Access Key          Secret Key

----------- --------------- --------- ------------------- -------------------

S3          s3admin         1         ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9

rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

object-store-server user show -user s3admin -fields access-key       # to show the access key separately from the secret

object-store-server bucket show

Vserver     Bucket          Volume            Size       Encryption

----------- --------------- ----------------- ---------- ----------

S3          s3ontap1        fg_oss_1585321366 100GB      true

S3          s3ontap2        fg_oss_1585321198 100GB      true

2 entries were displayed.

1.5      Verify Intercluster LIF Connectivity to S3

  • We will network ping from each intercluster LIF to the S3 FQDN
  • Cluster peering over InterCluster LIFs is required for FabricPool to connect from the cluster (admin SVM) to the S3 bucket
    • Since we are connecting cmode-prod to one of its own SVMs (S3), no cluster peer is needed for this local connection

cmode-prod

network ping -lif cmode-prod-01_ic1 -vserver cmode-prod -destination s3.lab2.local

network ping -lif cmode-prod-02_ic1 -vserver cmode-prod -destination s3.lab2.local
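Beyond ICMP ping, FabricPool talks to the object store over HTTPS, so TCP port 443 must also be reachable. A quick sketch (the helper is hypothetical; run it from any host that shares the intercluster network, since ONTAP itself has no Python shell):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds.

    Complements the ICMP pings above: an S3 endpoint that answers ping
    but has 443 blocked will still fail object-store config create.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(can_reach("s3.lab2.local", 443))
```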

1.6      Object Store Connect to ONTAP (add a cloud tier)

  • We will add two buckets for mirroring
  • We will use SSL per best practices
  • ONTAP S3 is supported, using the “ONTAP_S3” provider type instead of “S3_Compatible”
  • Multiple aggregates can use the same bucket
  • Configure FabricPool on the cluster with S3 bucket information
    • Server name (MUST be a FQDN)
    • Secret and access keys
    • Bucket / Container name
  • Syntax

object-store config create
-object-store-name <name> 
-provider-type <AWS_S3/SGWS/ONTAP_S3>
-port <port> (443 for AWS_S3 and ONTAP_S3, 8082 for SGWS)
-server <name> 
-container-name <bucket-name> 
-access-key <string> 
-secret-password <string> 
-ssl-enabled true 
-ipspace default
-is-certificate-validation-enabled

cmode-prod

Connect Two Capacity Object Tiers # Update your access key and secret password

  • Using s3admin keys


storage aggregate object-store config create -object-store-name s3ontap1 -provider-type ONTAP_S3 -server s3.lab2.local -container-name s3ontap1 -ssl-enabled true -port 443 -ipspace Default -use-http-proxy false -server-side-encryption none -access-key ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9 -secret-password rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

storage aggregate object-store config create -object-store-name s3ontap2 -provider-type ONTAP_S3 -server s3.lab2.local -container-name s3ontap2 -ssl-enabled true -port 443 -ipspace Default -use-http-proxy false -server-side-encryption none -access-key ggd1DrNc8_uCp_x6B3313_14py_9xx29yrITbej8_fGLNZO0Za6h6pDZgRQ_C__jNsXCk80BdQTwx_2u0pRRZ_h67xZa003aSgNc_P2_sYav74998l95AP14wyAbOXP9 -secret-password rqNFN6tu_6_nLWWrKA_946U_8f3TvpYmt7W15Tt1qA9rGnCBTHZCFCQAqkPXYIv4WX9_szjsLJU_5AcAi9ubs5dVicZ631_zeLPV7yV2tG_ahaSOpK46bccjbmE4nzYr

storage aggregate object-store config show

Name            Server               Container Name Provider Type Ipspace

--------------- -------------------- -------------- ------------- -------------

s3ontap1        s3.lab2.local        s3ontap1       ONTAP_S3      Default

s3ontap2        s3.lab2.local        s3ontap2       ONTAP_S3      Default

2 entries were displayed.
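The same connection can also be made through the ONTAP REST API. A sketch of the request body for POST /api/cloud/targets; the field names here are assumptions based on the 9.8 REST reference, so verify them against your cluster’s built-in documentation at /docs/api before use:

```python
import json
import urllib.request

def cloud_target_payload(name, server, container, access_key, secret_password):
    """Build a POST body for /api/cloud/targets (ONTAP 9.8 REST).

    Field names are assumptions from the REST reference; confirm
    against your cluster's /docs/api.
    """
    return {
        "name": name,
        "owner": "fabricpool",        # this target is for FabricPool tiering
        "provider_type": "ONTAP_S3",
        "server": server,             # MUST be an FQDN, e.g. s3.lab2.local
        "container": container,       # bucket name
        "access_key": access_key,
        "secret_password": secret_password,
        "ssl_enabled": True,
        "port": 443,
        "ipspace": {"name": "Default"},
    }

payload = cloud_target_payload("s3ontap1", "s3.lab2.local", "s3ontap1",
                               "<access-key>", "<secret-password>")

# Build (but do not send) the request against the cluster management LIF.
req = urllib.request.Request(
    "https://cmode-prod/api/cloud/targets",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

Authentication (basic auth or a token) would still need to be added before sending.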

1.7      Object Store Profiler

  • Performance profiling of the object storage put and get operations
  • FabricPool read latency is a function of connectivity to the cloud tier. LIFs using 10Gbps ports provide adequate performance. NetApp recommends validating the latency and throughput of your specific network environment to determine the impact it has on FabricPool performance
  • Cloud tiers do not provide performance similar to that found on the local tier (typically GB per second)
  • Although cloud tiers can easily provide SATA-like performance, tiering solutions that do not need SATA-like performance can tolerate latencies as high as 10 seconds and low throughput

cmode-prod

Start the profiler

storage aggregate object-store profiler start -node cmode-prod-01 -object-store-name s3ontap1 #y

storage aggregate object-store profiler start -node cmode-prod-02 -object-store-name s3ontap2 #y

storage aggregate object-store profiler show

2     Aggregate Configuration

2.1      Object Store Attach to Aggregate (local tier)

  • Attaching a cloud tier to a local tier is a permanent action. A cloud tier cannot be unattached from a local tier after being attached. (Using FabricPool Mirror, a different cloud tier can be attached.)
    • The exception: you can mirror the object store, swap the mirror to primary, and then remove the old mirror, but once a cloud tier is attached, one will always remain attached
  • Volumes in the aggregate must be thin provisioned (-space-guarantee none) in order to set a tiering policy that uses the cloud tier
  • Aggregate autobalance must be disabled on the aggregates
  • We will connect both buckets to both SSD aggregates
    • Connecting mirrored buckets to multiple aggregates
  • Syntax

storage aggregate object-store attach
-aggregate <name> 
-object-store-name <name>

-allow-flexgroup <true|false>

cmode-prod

Show the object stores available

storage aggregate object-store config show

Disable autobalance on the SSD aggregates (from prior lab)

aggr modify -aggregate cmode_prod_01_aggr3_SSD -is-autobalance-eligible false

aggr modify -aggregate cmode_prod_02_aggr3_SSD -is-autobalance-eligible false

Attach the first Capacity Tier to the SSD aggregates

storage aggregate object-store attach -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap1 -allow-flexgroup true   # y

storage aggregate object-store attach -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap1 -allow-flexgroup true   # y

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

-------------- ----------------- -------------  -----------

cmode_prod_01_aggr3_SSD s3ontap1 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      primary

2 entries were displayed.

storage aggregate object-store show -instance

2.2      Object Store Unreclaimed Space Threshold

  • See default thresholds by object store type above
  • Object defragmentation reduces the amount of physical capacity used by the cloud tier at the expense of additional object store resources (reads and writes)
  • Reducing the Threshold
    • To avoid additional expenses, consider reducing the unreclaimed space thresholds when using object store pricing schemes that reduce the cost of storage but increase the cost of reads. Examples include Amazon’s Standard-IA and Azure Blob Storage’s cool
    • For example, tiering a volume of 10-year-old projects that has been saved for legal reasons might be less expensive when using a pricing scheme such as Standard-IA or cool than it would be when using standard pricing schemes. Although reads are more expensive for such a volume, including reads required by object defragmentation, they are unlikely to occur frequently here.
  • Increasing the Threshold
    • Alternatively, consider increasing unreclaimed space thresholds if object fragmentation is resulting in significantly more object store capacity being used than necessary for the data being referenced by ONTAP. For example, using an unreclaimed space threshold of 20%, in a worst-case scenario where all objects are equally fragmented to the maximum allowable extent, it is possible for 80% of total capacity in the cloud tier to be unreferenced by ONTAP
    • 2TB referenced by ONTAP + 8TB unreferenced by ONTAP = 10TB total capacity used by the cloud tier.
    • In situations such as these, it might be advantageous to increase the unreclaimed space threshold—or increasing volume minimum cooling days—to reduce capacity being used by unreferenced blocks.
    • To change the default unreclaimed space threshold, run the following command:
      • storage aggregate object-store modify -aggregate <name> -object-store-name <name> -unreclaimed-space-threshold <%> (0%-99%)
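The worst-case arithmetic above generalizes to a one-line formula; a small Python sketch (the function name is mine, not an ONTAP tool):

```python
def worst_case_cloud_capacity(referenced_tb, unreclaimed_threshold_pct):
    """Worst-case total cloud-tier capacity when every object is
    fragmented to the maximum extent the threshold allows.

    With a threshold of X%, as little as X% of each object may still be
    referenced by ONTAP, so total capacity can reach referenced / (X/100).
    """
    return referenced_tb / (unreclaimed_threshold_pct / 100.0)

# The example from the text: a 20% threshold with 2TB referenced by
# ONTAP can consume up to 10TB in the cloud tier (8TB unreferenced).
total = worst_case_cloud_capacity(2, 20)
print(total)  # 10.0
```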

cmode-prod

View the current threshold

storage aggregate object-store show -fields unreclaimed-space-threshold

aggregate               object-store-name unreclaimed-space-threshold

----------------------- ----------------- ---------------------------

cmode_prod_01_aggr3_SSD s3ontap1          40%

cmode_prod_02_aggr3_SSD s3ontap1          40%

Modify the threshold to 50%

storage aggregate object-store modify -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap1 -unreclaimed-space-threshold 50%

storage aggregate object-store modify -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap1 -unreclaimed-space-threshold 50%

storage aggregate object-store show -fields unreclaimed-space-threshold

aggregate               object-store-name unreclaimed-space-threshold

----------------------- ----------------- ---------------------------

cmode_prod_01_aggr3_SSD s3ontap1          50%

cmode_prod_02_aggr3_SSD s3ontap1          50%

2.3      Object Store Attach to Aggregate (local tier) Mirror

  • When using FabricPool Mirror, data is mirrored across two buckets
  • When adding FabricPool Mirror to an existing FabricPool, data previously tiered to the original cloud tier is written to the newly attached cloud tier as well. After both tiers are mirrored, data is synchronously tiered to both cloud tiers
  • Although essential for FabricPool with NetApp MetroCluster, FabricPool Mirror is a stand-alone feature that does not require MetroCluster to use
  • Attach
    • storage aggregate object-store mirror -aggregate <aggregate name> -object-store-name <object-store-name-2>
  • Swap
    • storage aggregate object-store modify -aggregate <aggregate name> -object-store-name <object-store-name-2> -mirror-type primary
  • Delete
    • storage aggregate object-store unmirror -aggregate <aggregate name>

cmode-prod

Show the current primary tier

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

-------------- ----------------- -------------  -----------

cmode_prod_01_aggr3_SSD s3ontap1 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      primary

2 entries were displayed.

Attach the Mirrored Capacity Tier to the SSD aggregates

storage aggregate object-store mirror -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap2

storage aggregate object-store mirror -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap2

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

-------------- ----------------- -------------  -----------

cmode_prod_01_aggr3_SSD s3ontap1 available      primary

cmode_prod_01_aggr3_SSD s3ontap2 available      mirror

cmode_prod_02_aggr3_SSD s3ontap1 available      primary

cmode_prod_02_aggr3_SSD s3ontap2 available      mirror

4 entries were displayed.

storage aggregate object-store show -instance

2.4      Object Store Mirror – Swap/Unmirror/Mirror

  • Removing a tier is not supported, with one exception: swap the mirror to primary and then delete the old mirror
  • We will make s3ontap2 primary, unmirror s3ontap1, then remirror to s3ontap1

cmode-prod

Swap to s3ontap2 as primary

storage aggregate object-store modify -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap2 -mirror-type primary

storage aggregate object-store modify -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap2 -mirror-type primary

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

-------------- ----------------- -------------  -----------

cmode_prod_01_aggr3_SSD s3ontap1 available      mirror

cmode_prod_01_aggr3_SSD s3ontap2 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      mirror

cmode_prod_02_aggr3_SSD s3ontap2 available      primary

4 entries were displayed.

Unmirror s3ontap1 to remove a tier

storage aggregate object-store unmirror -aggregate cmode_prod_01_aggr3_SSD

storage aggregate object-store unmirror -aggregate cmode_prod_02_aggr3_SSD

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

-------------- ----------------- -------------  -----------

cmode_prod_01_aggr3_SSD s3ontap2 available      primary

cmode_prod_02_aggr3_SSD s3ontap2 available      primary

2 entries were displayed.

Add the mirror back (opposite of before with s3ontap1 mirrored)

storage aggregate object-store mirror -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap1

storage aggregate object-store mirror -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap1

storage aggregate object-store show

Aggregate      Object Store Name Availability   Mirror Type

-------------- ----------------- -------------  -----------

cmode_prod_01_aggr3_SSD s3ontap1 available      mirror

cmode_prod_01_aggr3_SSD s3ontap2 available      primary

cmode_prod_02_aggr3_SSD s3ontap1 available      mirror

cmode_prod_02_aggr3_SSD s3ontap2 available      primary

4 entries were displayed.

2.5      Aggregate Tiering Fullness Threshold

  • By default, tiering to the cloud tier only happens if the local tier is >50% full. There is little reason to tier cold data to a cloud tier if the local tier is being underutilized
  • Setting the threshold to a lower number reduces the amount of data required to be stored on the local tier before tiering takes place. This may be useful for large local tiers that contain little hot/active data
  • Setting the threshold to a higher number increases the amount of data required to be stored on the local tier before tiering takes place. This may be useful for solutions designed to tier only when local tiers are near maximum capacity
  • This is the same command that also modifies the unreclaimed threshold we increased from 40 to 50% earlier
  • Syntax
    • storage aggregate object-store modify -aggregate <name> -tiering-fullness-threshold <#> (0%-99%)
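The fullness check itself is simple to express; a sketch (the helper name is mine):

```python
def tiering_allowed(local_tier_used_pct, fullness_threshold_pct=50):
    """Cold data is tiered only once local tier fullness exceeds the
    tiering-fullness-threshold (default 50%)."""
    return local_tier_used_pct > fullness_threshold_pct

print(tiering_allowed(30))   # False: the local tier is underutilized
print(tiering_allowed(60))   # True
```

Lowering the threshold to 25%, as we do below, simply makes `tiering_allowed(used, 25)` true much earlier.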

cmode-prod

Show the current threshold

storage aggregate object-store show -fields tiering-fullness-threshold,unreclaimed-space-threshold,mirror-type

aggregate               object-store-name unreclaimed-space-threshold tiering-fullness-threshold mirror-type

----------------------- ----------------- --------------------------- -------------------------- -----------

cmode_prod_01_aggr3_SSD s3ontap1          50%                         50%                        mirror

cmode_prod_01_aggr3_SSD s3ontap2          50%                         50%                        primary

cmode_prod_02_aggr3_SSD s3ontap1          50%                         50%                        mirror

cmode_prod_02_aggr3_SSD s3ontap2          50%                         50%                        primary

4 entries were displayed.

Set tiering to 25% and change unreclaimed back to 40% (set on the PRIMARY tier, which is s3ontap2)

storage aggregate object-store modify -aggregate cmode_prod_01_aggr3_SSD -object-store-name s3ontap2 -tiering-fullness-threshold 25% -unreclaimed-space-threshold 40%

storage aggregate object-store modify -aggregate cmode_prod_02_aggr3_SSD -object-store-name s3ontap2 -tiering-fullness-threshold 25% -unreclaimed-space-threshold 40%

storage aggregate object-store show -fields tiering-fullness-threshold,unreclaimed-space-threshold,mirror-type

aggregate               object-store-name unreclaimed-space-threshold tiering-fullness-threshold mirror-type

----------------------- ----------------- --------------------------- -------------------------- -----------

cmode_prod_01_aggr3_SSD s3ontap1          40%                         25%                        mirror

cmode_prod_01_aggr3_SSD s3ontap2          40%                         25%                        primary

cmode_prod_02_aggr3_SSD s3ontap1          40%                         25%                        mirror

cmode_prod_02_aggr3_SSD s3ontap2          40%                         25%                        primary

4 entries were displayed.

3     Volume Tiering Configuration

3.1      Volume Tiering Policies

  • From the information section
    • Auto                  2-63 (9.7) or 2-183 (9.8) days cooling (default = 31)
    • Snapshot-Only         2-63 (9.7) or 2-183 (9.8) days cooling (default = 2)
    • All                   All data except metadata moves to the object store
    • None                  Default; nothing is tiered
  • You can set a tiering policy on volumes that are not in tiered aggregates; the setting takes effect when the volume is moved to a FabricPool aggregate
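The policy table above can be encoded as a small client-side validator; a sketch with hypothetical names (the per-policy defaults and ranges are the 9.8 values listed above):

```python
# Valid tiering policies and their minimum-cooling-days behavior in 9.8.
# "all" and "none" do not accept minimum cooling days.
TIERING_POLICIES = {
    "auto":          {"cooling_range": (2, 183), "default_cooling": 31},
    "snapshot-only": {"cooling_range": (2, 183), "default_cooling": 2},
    "all":           {"cooling_range": None, "default_cooling": None},
    "none":          {"cooling_range": None, "default_cooling": None},
}

def validate_tiering(policy, cooling_days=None):
    """Return True if the policy / cooling-days combination is valid."""
    spec = TIERING_POLICIES.get(policy)
    if spec is None:
        return False
    if cooling_days is None:
        return True                  # the per-policy default applies
    if spec["cooling_range"] is None:
        return False                 # cooling days not supported here
    lo, hi = spec["cooling_range"]
    return lo <= cooling_days <= hi

print(validate_tiering("snapshot-only", 2))   # True
print(validate_tiering("all", 2))             # False: not supported with all
```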

cmode-prod

Show volumes in the SSD aggregates and the tiering policy and minimum cooling days

vol show -aggregate cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD -fields tiering-policy,tiering-minimum-cooling-days

vserver    volume               tiering-policy tiering-minimum-cooling-days

---------- -------------------- -------------- ----------------------------

source_fg1 source_fg1_root_ls01 none           -

source_fg1 source_fg1_root_ls02 none           -

source_ntfs apps                none           -

source_ntfs apps_clone1         none           -

source_ntfs source_ntfs_root_dp01 none         -

source_ntfs source_ntfs_root_dp02 none         -

source_ntfs source_ntfs_root_ls01 none         -

source_ntfs source_ntfs_root_ls02 none         -

source_ntfs users               none           -

source_test apps                none           -

source_test home                none           -

source_unix apps                none           -

source_unix apps_clone          none           -

source_unix source_unix_root_dp01 none         -

source_unix source_unix_root_dp02 none         -

source_unix source_unix_root_ls01 none         -

source_unix source_unix_root_ls02 none         -

source_unix users               none           -

18 entries were displayed.

Set an all tiering policy on apps in source_ntfs (minimum cooling days is not supported, and not needed, with all)

volume modify -vserver source_ntfs -volume apps -tiering-policy all

Set a Snapshot-Only tiering policy to users on source_ntfs with 2 cooling days

volume modify -vserver source_ntfs -volume users -tiering-policy snapshot-only -tiering-minimum-cooling-days 2

Set an auto tiering policy to home on source_test with 5 cooling days

volume modify -vserver source_test -volume home -tiering-policy auto -tiering-minimum-cooling-days 5

Show the tiering

vol show -aggregate cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3_SSD -fields tiering-policy,tiering-minimum-cooling-days

vserver    volume               tiering-policy tiering-minimum-cooling-days

---------- -------------------- -------------- ----------------------------

source_fg1 source_fg1_root_ls01 none           -

source_fg1 source_fg1_root_ls02 none           -

source_ntfs apps                all            -

source_ntfs apps_clone1         none           -

source_ntfs source_ntfs_root_dp01 none         -

source_ntfs source_ntfs_root_dp02 none         -

source_ntfs source_ntfs_root_ls01 none         -

source_ntfs source_ntfs_root_ls02 none         -

source_ntfs users               snapshot-only  2

source_test apps                none           -

source_test home                auto           5

source_unix apps                none           -

source_unix apps_clone          none           -

source_unix source_unix_root_dp01 none         -

source_unix source_unix_root_dp02 none         -

source_unix source_unix_root_ls01 none         -

source_unix source_unix_root_ls02 none         -

source_unix users               none           -

18 entries were displayed.

3.2      Volume Move with Tiering

  • Best Practice: Create a single bucket for all aggregates within a cluster
    • This ensures that vol move does not retrieve data from the capacity tier when moving volumes between aggregates
  • You can set the tiering policy (change policy) on vol move
  • If a volume move’s destination local tier does not have an attached cloud tier, data on the source volume that is stored on the cloud tier is written to the local tier on the destination local tier
  • If a volume move destination local tier uses the same bucket as the source local tier, data on the source volume that is stored in the bucket does not move back to the local tier. This results in significant network efficiencies. (Setting the tiering policy to None will result in cold data being moved to the local tier.)
  • If a volume move’s destination local tier has an attached cloud tier, data on the source volume that is stored on the cloud tier is first written to the local tier on the destination local tier. It is then written to the cloud tier on the destination local tier if this approach is appropriate for the volume’s tiering policy. Moving data to the local tier first improves the performance of the volume move and reduces cutover time
  • If a volume tiering policy is not specified when performing a volume move, the destination volume uses the tiering policy of the source volume. If a different tiering policy is specified when performing the volume move, the destination volume is created with the specified tiering policy.
  • Note: When in an SVM-DR relationship, source and destination volumes must use the same tiering policy
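The vol move behavior described in the bullets above can be summarized in a short sketch (a hypothetical Python helper for illustration only, not a NetApp tool or API):

```python
# Hypothetical helper encoding the vol move rules above (illustration only).
def vol_move_cloud_data_behavior(dest_has_cloud_tier, same_bucket):
    """Return what happens to the source volume's cloud-tier data on vol move."""
    if not dest_has_cloud_tier:
        # No cloud tier at the destination: cloud data lands on the local tier
        return "written to the destination local tier"
    if same_bucket:
        # Single-bucket best practice: no data movement back to the local tier
        return "stays in the bucket; not moved back to the local tier"
    # Different bucket: staged on the local tier first, then re-tiered by policy
    return "written to the destination local tier first, then re-tiered per policy"
```

The single-bucket branch is why the best practice above saves so much network traffic on vol move.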

cmode-prod

Move apps on source_test to cmode_prod_02_aggr3_SSD, changing the policy from none to snapshot-only

vol move start -vserver source_test -volume apps -destination-aggregate cmode_prod_02_aggr3_SSD -tiering-policy snapshot-only

vol move show            #wait until completed

vol show -vserver source_test -volume apps -fields tiering-policy,tiering-minimum-cooling-days

4     Object Store Tagging (ONTAP 9.8)

  • ONTAP 9.8 only feature

4.1      Tagging Metadata Information

  • Starting in ONTAP 9.8, FabricPool supports object tagging using user-created custom tags. If you are a user with the admin privilege level, you can create new object tags, and modify, delete, and view existing tags
  • Supports a maximum of 4 tags per volume and all volume tags must have a unique key
  • Supported on StorageGRID Webscale object stores
  • Keys are specified as a key=value string. For example, type=PDF
  • Keys and values must contain only alphanumeric characters and underscores
  • Volume parameter -tiering-object-tags <key1=value1> [,<key3=value3>,<key4=value4>]
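As a quick sanity check of the tag rules above (maximum of 4 tags, unique keys, alphanumeric characters and underscores only), here is a small illustrative Python validator; the function name is hypothetical and this is not a NetApp tool:

```python
import re

# Keys and values may contain only alphanumeric characters and underscores.
TAG_PART = re.compile(r"^[A-Za-z0-9_]+$")

def validate_tiering_object_tags(tag_string):
    """Validate a comma-separated key=value tag string; return list of (key, value)."""
    tags = [t for t in tag_string.split(",") if t]
    if len(tags) > 4:
        raise ValueError("a maximum of 4 tags per volume is supported")
    pairs = []
    for tag in tags:
        key, sep, value = tag.partition("=")
        if not sep or not TAG_PART.match(key) or not TAG_PART.match(value):
            raise ValueError(f"invalid tag: {tag!r}")
        pairs.append((key, value))
    if len({k for k, _ in pairs}) != len(pairs):
        raise ValueError("all volume tags must have a unique key")
    return pairs
```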

4.2      Create a Volume with Object Tags

cmode-prod

Create a volume with 4 object tags (0-4 tags are supported)

volume create -vserver source_unix -volume volFP_tagged -aggregate cmode_prod_02_aggr3_SSD -size 1g -space-guarantee none -junction-path /volFP_tagged -state online -tiering-policy auto -tiering-minimum-cooling-days 183 -tiering-object-tags labenv=evtlabs,lab=lab15,cluster=cmode_prod,svm=source_unix

Warning: The export-policy “default” has no rules in it. The volume will therefore be inaccessible over NFS and CIFS protocol.

Do you want to continue? {y|n}: y

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

4.3      Modify Tags

  • This parameter is an absolute setting, so to change one key, you must specify ALL keys you want to keep

cmode-prod

Modify 2 of the tags (repeat 2 of the original tags and add 2 new ones)

volume modify -vserver source_unix -volume volFP_tagged  -tiering-object-tags labenv=evtlabs,lab=lab15,node=cmode_prod_02,aggr=aggr3_SSD

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days
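Because the setting is absolute, a convenient pattern is to merge the volume’s current tags with your changes and pass the full string to volume modify. A minimal sketch, assuming you have already read the current tags (hypothetical helper, illustration only):

```python
# Hypothetical helper: build the complete -tiering-object-tags string,
# since "volume modify" replaces ALL tags rather than merging them.
def build_tag_string(current, changes):
    """current/changes are dicts of key -> value; changed/new keys win."""
    merged = {**current, **changes}
    if len(merged) > 4:
        raise ValueError("a maximum of 4 tags per volume is supported")
    return ",".join(f"{k}={v}" for k, v in merged.items())
```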

4.4      Remove Tags with “”

cmode-prod

Remove all of the tags

volume modify -vserver source_unix -volume volFP_tagged -tiering-object-tags ""

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

Add the original tags back

volume modify -vserver source_unix -volume volFP_tagged -tiering-object-tags labenv=evtlabs,lab=lab15,cluster=cmode_prod,svm=source_unix

volume show -vserver source_unix -volume volFP_tagged -fields tiering-object-tags,tiering-policy,tiering-minimum-cooling-days

4.5      Check if Tagging is Complete

  • Check whether the object tagging scanner has not yet run, or needs to run again, for any volumes

cmode-prod

volume show -needs-object-retagging true

volume show -fields needs-object-retagging

5     Promote Data to the Performance Tier from S3 (ONTAP 9.8)

  • ONTAP 9.8 only feature

5.1      Promote Information

  • Starting in ONTAP 9.8, you can proactively promote (pull back) data from the cloud tier to the performance tier using a combination of the tiering-policy and cloud-retrieval-policy settings. You might do this if you want to stop using FabricPool on a volume, or if you have a snapshot-only tiering policy and want to bring restored Snapshot copy data back to the performance tier
  • There are 4 cloud retrieval policies
    • -cloud-retrieval-policy
      • default 
      • on-read 
      • never   
      • promote
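Per the ONTAP warning text shown in the examples that follow, what “promote” actually pulls back depends on the volume’s tiering policy. A tiny illustrative sketch (hypothetical function, not an ONTAP API):

```python
# What the "promote" cloud retrieval policy retrieves, by tiering policy
# (hypothetical lookup for illustration).
def promote_retrieval_scope(tiering_policy):
    scopes = {
        "none": "all data",                                   # everything comes back
        "snapshot-only": "active file system (AFS) data only",
    }
    return scopes.get(tiering_policy, "see ONTAP documentation")
```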

5.2      Promote ALL Data to the Performance Tier (from S3)

  • Set the tiering policy to “none” so that the promote brings back all data

cmode-prod

volume modify -vserver source_unix -volume volFP_tagged -tiering-policy none -cloud-retrieval-policy promote

Warning: The “promote” cloud retrieve policy retrieves all of the cloud data for the specified volume. 

If the tiering policy is “snapshot-only” then only AFS data is retrieved. 

If the tiering policy is “none” then all data is retrieved. It may take a significant amount of time, and may degrade performance during that time. 

The cloud retrieve operation may also result in data charges by your object store provider.

Do you want to continue? {y|n}: y

5.3      Promote Active File System Data to the Performance Tier (from S3)

  • Set the tiering policy to “snapshot-only” so only active file system (non-snapshot) data is brought back before the promote

cmode-prod

volume modify -vserver source_unix -volume volFP_tagged -tiering-policy snapshot-only -cloud-retrieval-policy promote

Warning: The “promote” cloud retrieve policy retrieves all of the cloud data for the specified volume. 

If the tiering policy is “snapshot-only” then only AFS data is retrieved. 

If the tiering policy is “none” then all data is retrieved. It may take a significant amount of time, and may degrade performance during that time. 

The cloud retrieve operation may also result in data charges by your object store provider.

Do you want to continue? {y|n}: y

5.4      Check Migration and Tiering Status

cmode-prod

volume object-store tiering show -vserver source_unix -volume volFP_tagged -instance

5.5      Start Schedule Migration and Tiering

  • You can trigger the tiering scan manually when you prefer not to wait for the default tiering scan.

cmode-prod

volume object-store tiering trigger -vserver source_unix -volume volFP_tagged 

6     Monitoring and Space Reporting

  • Active IQ Unified Manager provides basic capacity and performance insights
  • NetApp Harvest provides detailed performance information

6.2      Show Space in Each Tier (Aggregate, Volume, Object)

  • Volume and aggregate level
  • aggr show-space breaks out the space used in the performance tier and the object (capacity) tier

cmode-prod

Show space in the aggregate (Performance Tier and Object Tier are shown)

Get information on data tiered per aggregate

aggr show-space -aggregate-name cmode_prod_01_aggr3_SSD,cmode_prod_02_aggr3

Show volume space in cloud capacity and performance tiers

Get information on data tiered per volume

vol show-footprint

volume show-footprint -fields bin0-name,volume-blocks-footprint-bin0,bin1-name,volume-blocks-footprint-bin1

vol show-footprint -vserver source_ntfs -volume apps

vol show -vserver source_ntfs -volume apps -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent

Show Object Store Space

storage aggregate object-store show-space

6.3      Inactive Data Reporting (IDR)

  • Shows how much data will tier; added in ONTAP 9.4
  • Works for all tiering policies
  • Works on HDD for reporting (9.6+)
  • Does not work on Flash Pool aggregates
  • Go to the SINGLE NODE cluster for IDR
  • ONTAP 9.8 – IDR uses the ONTAP cooling period, so you no longer have to wait 31 days
  • Use the XCP method below if you need faster results pre-9.8

cmode-single

Show if enabled

aggr show -is-inactive-data-reporting-enabled true    # not enabled

Enable on both HDD aggregates

storage aggregate modify -aggregate cmode_single_01_aggr1,cmode_single_01_aggr2_mir -is-inactive-data-reporting-enabled true

Show enabled

aggr show -is-inactive-data-reporting-enabled true

Show Inactive Data (zero for our lab – needs 31 days)

storage aggregate show-space -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent

vol show -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent

6.4      XCP (host based) Estimate (IDR at the file level)

  • This isn’t as good as IDR (XCP looks at files, not blocks, and has no concept of WAFL metadata) but will get you very close
  • If you do not have 31 days to wait for reporting and want immediate file level results
  • XCP is free (90-day key you can continually renew)
  • See the SVM NAS Lab for more information and XCP command examples



xcp scan -match "((now-x.atime) / 3600) > 31*day" <source>

Windows Host

  • Copy the license file to  C:\NetApp\XCP

Powershell

cd "C:\Users\administrator\Desktop\NetApp Software\xcp\windows"

./xcp activate

XCP SMB 1.6P1; (c) 2020 NetApp, Inc.; Licensed to Scott Gelb [None] until Sun Aug  9 08:07:08 2020

XCP activated

Run a scan for cold files on a NAS share

./xcp scan -match "((now-x.atime) / 3600) > 31*day" \\source_ntfs\apps

Run a scan for cold files on your Windows host

./xcp scan -match "((now-x.atime) / 3600) > 31*day" "C:\Users\administrator\Desktop\NetApp Software"

6.5      Statistics and Node Shell Diag (Advanced Reporting)

cmode-prod

set diag

Get information on operations issued (this will have no output for the lab)

statistics show -object wafl_comp_aggr_bin -counter cloud_bin_operation -raw

Cloud read and write performance monitored per node 

node run -node cmode-prod-01 "priv set diag;sysstat -d 1"

Detailed IO size and full request latency (this is not the same as frontend latency. Bin 0 refers to the hot tier and Bin 1 to the cold tier)

node run -node cmode-prod-01 "priv set diag;wafl composite stats show cmode_prod_01_aggr3_SSD"

Detailed client-to-object-store statistics per operation type are available via the following commands. These commands collect metrics only between start and stop

node run -node cmode-prod-01 -command "priv set diag;stats start object_store_client_op"
node run -node cmode-prod-01 -command "priv set diag;stats stop"

Detailed information about the connections to the object store, including TLS handshake latency, is available via the following commands. These commands collect metrics only between start and stop

node run -node cmode-prod-01 -command "priv set diag;stats start object_store_client_conn"
node run -node cmode-prod-01 -command "priv set diag;stats stop"

For a volume tiered with “all” policy, some blocks may still be on SSD

To flush all blocks of a FlexVol from SSD to the cloud tier, you can use the following diag-level command

node run -node cmode-prod-01 "priv set diag; wafl scan redirect apps"

Check completion of the process 

node run -node cmode-prod-01 "priv set diag; wafl scan status"

set adv
