Hardware Installation
NetApp uses an inordinate amount of packing material to ship what ultimately amounts to 3U of occupied rack space. Better safe than RMA, I guess. If you've assembled other storage arrays or servers, this part won't be much of a challenge. One item of note: the upper shelf controller goes in upside down, which may not be immediately obvious.
Once your shelf is securely installed in the rack with the drives inserted, install your SFPs in the "In" ports on both controllers, keeping in mind that the upper SFP goes in upside down. NetApp ships two fiber pairs with SC connectors; you will only need one pair if you are installing a single shelf. Each pair is labeled to match "1" and "2" on both ends. If you have additional shelves to install, you will also need to install SFPs in the "Out" ports to connect those shelves to the loop. Make sure to set your shelf ID properly; it will be "1" if this is your first shelf.
FC Adapter Configuration
Ok, now the fun begins. Because my FAS2020 previously had no external shelves, I had both FC ports on each controller connected to my Fibre Channel fabrics, providing four paths to each storage target. Unfortunately, I now need two of these ports to connect a loop to my new shelf. Any subsequent shelves added to the stack will attach to a prior shelf via the "Out" ports. The first step is to remove the two controller ports from my fabrics, both physically and in the Brocade switch configuration. I will be using the 0b interfaces on both controllers to connect to my shelf. My FC clients, vSphere and Server 2008 R2 clusters running DSM, are incredibly resilient and adjusted to the dead paths immediately with no data interruption. Perform an HBA rescan in ESX and check the pathing just to be sure everything is ok.

Before the fiber from the shelf can be connected to the controller ports, we need to change the operation mode of the FC ports. Currently they are in "target" mode, as they were being used to serve data via the FC fabric. To talk to an external drive shelf they need to be in "initiator" mode. This is done using the fcadmin command in the console. Fcadmin config will display the current state of a controller's FC adapters; notice that they are in target mode. The syntax to change the mode is fcadmin config -t <adapter mode> <adapter>. You must first offline the adapter to be changed, because Data ONTAP will not allow the change on an active adapter.
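On each controller, the sequence looks roughly like this (a sketch from a 7-Mode console; "0b" is the adapter used in this walkthrough, and the prompt name is hypothetical):

```
fas1> fcadmin config                  # show current adapter states (0b is in target mode)
fas1> fcadmin config -d 0b            # offline/disable the adapter first
fas1> fcadmin config -t initiator 0b  # change the mode from target to initiator
fas1> fcadmin config                  # verify; the change is pending until reboot
```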
Once the adapter mode has been changed, you will need to reboot the controller before it takes effect. If you are running an HA cluster, this can be done easily using the takeover and giveback functions. From the console of the controller that will be taking over the cluster, run cf takeover. This will migrate all operations of the other controller to the node on which you issue the command. As part of this process the node that has been taken over is rebooted. Very clean.
Fas1 taking over the cluster:
Fas2 being gracefully rebooted:
Once the rebooted node is back up, from the console of the node in takeover mode, issue the command cf giveback. This gracefully returns all appropriate functions owned by the taken-over node back under its control. Client connections are completely unaffected by this activity.
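Assuming two nodes named fas1 and fas2 (hypothetical names and prompts), the whole reboot cycle sketches out as:

```
fas1> cf takeover            # fas1 takes over fas2's services; fas2 reboots
fas1(takeover)> cf status    # wait until fas2 reports it is ready for giveback
fas1(takeover)> cf giveback  # return fas2's resources once it is back up
fas1> cf status              # verify the cluster is enabled and both nodes are up
```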
The cluster will resume normal operation after the giveback, which can be verified by issuing the cf status command, or via System Manager if you'd like a more visually descriptive display.

Disk Assignments
Now that Fas2 is back up, you can verify the operation mode of the 0b adapters (fcadmin config) as well as check that the disks in the external shelf can now be seen by the array. Issue the disk show -n command to view any unassigned disks in the array (which should be every disk in the external shelf).

Because I am working with a partially populated shelf (8 of 14 disks), I will configure a 3:3 split (plus 2 spares) between the controllers and create new aggregates on both. Performance is not a huge concern for me on this external shelf; I'm just looking for reserve capacity. Here is the physical disk design layout I'll be working with:
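As a rough sketch of that split (bay positions are hypothetical; only the 3/3/2 division comes from the plan above):

```
Shelf 1 (8 of 14 bays populated):
  Disks 1-3  -> owned by fas1 (new aggregate on fas1)
  Disks 4-6  -> owned by fas2 (new aggregate on fas2)
  Disks 7-8  -> spares, one per controller
```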
*NOTE: make sure that disk auto assignment (the disk.auto_assign option) is turned off if you want complete control over disk assignment. Otherwise the filer will likely assign all disks to a single controller for you. It is enabled by default and needs to be disabled on both nodes.
With auto assign turned off, issue the disk assign -n <disk count> -o <filer owner name> command. Or, if you like, you can assign the disks individually by name.
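Putting that together, the assignment from the fas1 console might look like this (node names, disk counts, and the 1b.16 disk name are illustrative):

```
fas1> options disk.auto_assign off  # repeat on the other node
fas1> disk show -n                  # list the unassigned shelf disks
fas1> disk assign -n 3 -o fas1      # three data disks to fas1
fas1> disk assign -n 3 -o fas2      # three data disks to fas2
fas1> disk assign 1b.16 -o fas1     # or assign a specific disk by name
```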
Don’t worry if you goofed and need to reassign disks between controllers as this can be done rather painlessly. This is what it looks like when the filer auto assigns all disks to a single controller:
To fix this, enter advanced privilege mode on the filer and issue the disk remove_ownership <drive name> command for each drive you want to change. Once ownership has been removed from the drives, run the disk assign command again to put them where they should go. NetApp also recommends that you re-enable auto disk assignment afterward. Run vol status -s on both controllers to verify the newly assigned disks and their pertinent details.
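A sketch of the reassignment, assuming a disk named 1b.17 (hypothetical) ended up on the wrong node:

```
fas1> priv set advanced
fas1*> disk remove_ownership 1b.17  # strip ownership from the misassigned disk
fas1*> priv set admin
fas1> disk assign 1b.17 -o fas2     # hand it to the intended owner
fas1> options disk.auto_assign on   # re-enable auto assignment when done
fas1> vol status -s                 # verify spares and assignments on each node
```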
Aggregates and Spares
Now that the disks are assigned to their respective controllers, we can create aggregates. If the disk type in the external shelf were the same as the internal disks, we could add them to an existing aggregate, but since I am adding a new disk type to my array, I have to create a new aggregate. I'm going to switch over to System Manager for the remaining tasks.

Each controller will need its own aggregate comprised of the disks you just assigned to it (save the spare). I will be using the default NetApp naming standard and creating aggr1. This can be performed from the Disks or Aggregates page and is pretty self-explanatory.
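The same thing can be done from the console if you prefer; a sketch for a three-disk aggregate, where the RAID type and disk names are assumptions (raid4 keeps a single parity disk out of three, which suits a capacity-focused shelf):

```
fas1> aggr create aggr1 -t raid4 -d 1b.16 1b.17 1b.18
fas1> aggr status aggr1 -v   # confirm the new aggregate and its raid group
```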
There you have it. A new shelf added hot to a NetApp array with no disruption to the connected clients. Now you can create your volumes, LUNs, CIFS/NFS shares, etc. If I add another AT shelf at some point at least I won’t have to sacrifice any more disks to spares!