
3Par StoreServ 7000 – Step by step Installation and Configuration (Part 2)

In Part 1 of this three-part series we went over the initial installation and configuration of the 3Par 7000. This included executing the "birth process" of the Virtualized Service Processor and activating the Storage System, which included the "Out of the box" configuration process.

Now we are going to run through the rest of the process to finalize the setup of the 7000 and hook it up to your servers. Here is a high-level list of the remaining tasks:

  • Apply license keys
  • Zone the 3Par and servers to the Fabric
  • Create Common Provisioning Groups
  • Create Virtual Volumes to present to the ESX hosts

First, we are going to move forward with applying the license keys to the array so we can enable the hard drive slots and activate 3Par's full suite of features such as Adaptive Optimization, Dynamic Optimization, etc. You should have received your license keys from the procedure we performed in Part 1. Now, there is a way to add the license via the GUI, but there seems to be a known bug with that process right now, so I had to hop into the CLI for this. You can find the CLI install client on the "System Reporter" CD that came with the system. If you don't have the CD you can register your 7000 here and all available software for your system can be downloaded instantly.

LICENSE

Once you have your CLI client installed, open it up and log into your system by first giving the IP address, then your username and password. You should now be at the "cli%" prompt, so go ahead and type setlicense (without quotes) and hit "y" to agree to the terms.

[Screenshot: the setlicense command at the cli% prompt]

Now, go ahead and enter your license keys by copying and pasting them into the CLI window. The keys are 32 characters long with a hyphen every 4 characters (60R3-60R3-60R3-60R3-60R3-60R3-60R3-60R3). To make it easy, I took the individual keys I got from the HP site and put them into a blank Notepad text file, made sure there were no spaces between them, selected all, and hit copy. I then pasted into the command prompt window to populate all of the keys and hit Enter. After that, your cursor will move down to a blank line; hit Enter one more time and you should see the following lines confirming your features were installed and activated.

**************************************************************************

The system will be licensed for 40 disks instead of 8 disks.

The following features will be enabled:

-Dynamic Optimization (No expiration date)
-Management Plug-In for VMware vCenter (No expiration date)
-Peer Motion (Expires July 16, 2013)
-Recovery Manager for VMware vSphere (No expiration date)
-System Reporter (No expiration date)
-System Tuner (No expiration date)
-Thin Conversion (No expiration date)
-Thin Persistence (No expiration date)
-Thin Provisioning (10240000G) (No expiration date)
-Virtual Copy (No expiration date)
-VSS Provider for Microsoft Windows (No expiration date)

Are these the expected changes? (yes/no) yes
License key successfully set.

cli%

**************************************************************************
To confirm that the expected features are active and loaded, hop into the IMC GUI, click on your storage system, and on the right pane hit the Software tab. (See the picture below.)

[Screenshot: Software tab in the IMC showing the enabled features]
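If you'd rather stay in the CLI, the showlicense command will also print the list of enabled features, so you can verify everything without opening the GUI:

    cli% showlicense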
FABRIC ZONING


In 3Par OS 3.1.2, Persistent Ports was introduced into the 3Par ecosystem. Persistent Ports (built on NPIV) adds a layer of redundancy without depending on any software multi-pathing, so online software updates, HBA firmware upgrades, and controller maintenance can all happen while your host paths stay up and running. Talk about a Tier 1 capable system!
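You can sanity-check which ports will cover for each other by running showport from the CLI; on 3.1.2 the output should include Partner and FailoverState columns (I'm hedging on the exact column layout, since it can vary by release):

    cli% showport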


ZONING AND FABRIC RECOMMENDATIONS FROM HP – Single initiator to multiple targets per zone (zone by HBA, not by switch port). A sample zoning sketch follows the list below.

  • Host ports can be cabled on a random basis
  • Each node should be connected to both Fabrics (switches)
  • Max of 64 initiators per front end port
  • Ports with the same slot:port ID on a pair of nodes should be connected to the same fabric (switch)
      Example:

    • 0:2:3 and 1:2:3 on Fabric 1
    • 0:2:4 and 1:2:4 on Fabric 2
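To make that concrete, here is a rough sketch of what a single-initiator zone for one ESX HBA might look like. I'm assuming Brocade FOS switches here (other fabric vendors use different commands), and the WWNs, alias names, and config name are all placeholders of my own:

    alicreate "esx01_hba0", "10:00:00:00:c9:aa:bb:01"
    alicreate "3par_0_2_3", "20:23:00:02:ac:00:99:11"
    alicreate "3par_1_2_3", "21:23:00:02:ac:00:99:11"
    zonecreate "z_esx01_hba0_3par", "esx01_hba0; 3par_0_2_3; 3par_1_2_3"
    cfgadd "fabric1_cfg", "z_esx01_hba0_3par"
    cfgsave
    cfgenable "fabric1_cfg"

One zone per initiator HBA, with both same-ID target ports from the node pair as members, follows the single-initiator-to-multiple-targets guidance above.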

[Screenshot: array-side host port settings]

[NOTE] – One thing I had to do to get the Fabric to see the 3Par ports was to change the array-side ports to the settings pictured above. Not sure if this is fixed in the latest batch off the assembly line.
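If you'd rather make that port change from the CLI than the GUI, something along these lines should do it. This is a sketch assuming the pictured settings amount to a fabric-attached (point) host port; take the port offline first and substitute your own node:slot:port:

    cli% controlport offline 0:2:3
    cli% controlport config host -ct point 0:2:3
    cli% controlport rst 0:2:3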

COMMON PROVISIONING GROUPS

Now that we have completed our zoning on the Fabric and registered our hosts on the 3Par array, we can go ahead and carve out our initial CPG (Common Provisioning Group). I won't get into the ins and outs of CPGs here since I am focusing this post on initial setup and installation. My initial CPG is easy since I have sixteen 450GB, 10K RPM SAS small form factor drives. This lets me utilize as many spindles as possible and set my availability (HA) levels. Let's go ahead and create the CPG.

[Screenshot: Provisioning pane in the IMC]

In the Provisioning section, right click on CPG and hit "Create CPG", then hit Next past the Welcome screen.

[Screenshot: Create CPG wizard with Advanced Options expanded]

I went ahead and named my CPG and hit "Advanced Options" below to bring up the hidden settings. This is where the customization options are endless, and your settings will vary based on your availability and performance requirements. I went with most of the defaults: for device type I selected FC since that is the only type of drive I have installed, RAID 5 with a 4+1 set size for my chunklets, and a 32KiB step size.

For the availability option, always try to use cage-level availability. This level of protection means that, if you have enough disk shelves (cages), the chunklets for a given RAID set will be taken from separate drive cages, so if a disk cage were to have a power failure and go completely down, your data will still be intact and serviceable to your hosts. This is one of the main features that makes 3Par a Tier 1 capable system and excellent for virtualized environments; you just can't break this thing! Since CPGs can automatically grow in size, you can also set the growth increments and set limits on that CPG growth.

Once you are done customizing your CPG, click Next to select or filter out any disks from participating in this CPG. I made sure all 16 were selected, then hit Next and Finish to create the CPG. This process creates the underlying "Logical Disk" completely transparently, under the hood; it is something the administrator never has to worry about.
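For reference, the same CPG can be carved out in one line from the CLI with createcpg. A sketch, where the CPG name is my own and -ssz 5 expresses the 4+1 set size as five chunklets:

    cli% createcpg -t r5 -ha cage -ssz 5 -ss 32 -p -devtype FC FC_r5_cage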

VIRTUAL VOLUMES

Now, to get our hosts to actually see some storage, we have to create a Virtual Volume and present it to our VMware servers. In the Provisioning section, right click on Virtual Volumes and click "Create Virtual Volume".

[Screenshot: Create Virtual Volume wizard]

At this point we will configure a simple Virtual Volume. Most of the options here are self-explanatory. Configure the VV name, check "Export volume after creation" to assign the volume to your servers, and select whether the VV should be provisioned in Thin or Thick format. Set the size of the Virtual Volume and select the CPG that this Virtual Volume should be pulled from. The "Copy CPG" option lets you select the CPG that should store this VV's snapshots (Virtual Copy). Hit Next and Finish to have the system create the Virtual Volume.
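If you prefer the CLI, createvv builds the same thing. A sketch for a 500GB thin volume, with the volume name, size, and CPG names as my own placeholders (here the same CPG doubles as the Copy CPG via -snp_cpg):

    cli% createvv -tpvv -snp_cpg FC_r5_cage FC_r5_cage vmware-vol01 500g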

Since we selected "Export volume after creation", the Export Virtual Volume wizard comes up once the VV is created.

[Screenshot: Export Virtual Volume wizard]

On the left select the LUN ID you want to export, and on the right highlight the servers that should see this Virtual Volume. Hit Next and Finish to assign the Virtual Volume to the selected servers.
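The CLI equivalent of the export is a VLUN, and if your hosts aren't defined on the array yet you would create those first. A sketch with placeholder host name, WWNs, and LUN ID (persona 11 is the VMware host persona on 3Par OS 3.1.2):

    cli% createhost -persona 11 esx01 10000000C9AABB01 10000000C9AABB02
    cli% createvlun vmware-vol01 1 esx01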

[Screenshot: exported Virtual Volumes visible on the ESX host]

On those VMware hosts, navigate to the Storage Adapters section and hit "Rescan All". Once that is complete you should see the exported Virtual Volumes that your ESX host can now access, so you can create your VMFS datastore. Notice that "Supported" is shown under the Hardware Acceleration column, which tells you that VAAI is currently active and working. Also, 3Par recommends utilizing the "Round Robin" load balancing policy under the "Manage Paths" option in ESX(i).
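Both the rescan and the Round Robin policy can also be handled from the ESXi shell. A sketch assuming ESXi 5.x, with naa.xxx standing in for your actual device identifier:

    # Rescan every storage adapter so the new VLUNs show up
    esxcli storage core adapter rescan --all
    # Switch the exported device to the Round Robin path selection policy
    esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR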

In Part 3 of this series we will go over some of the additional software features of the 7000 including the installation and configuration of the VMware vCenter management plug-in and Recovery Manager for VMware vSphere.

-Justin Vashisht (3cVguy)
