HP Lefthand VSA minimum memory requirements

September 17, 2010 — 4 Comments

This week I’ve been trying to stretch the virtualization resources of my homelab as much as possible. In my obsession to run as many VMs as I can, I decided to lower the memory of some of them, including my storage appliances.

My VSAs are configured with various amounts of RAM ranging from 384MB to 1GB. I took the one I have in my laptop for demo purposes, powered it off, set the RAM to 256MB and fired it up again.

The VSA seemed to start without any problems and from the console everything looked fine.

I started the CMC (Centralized Management Console) and quickly noticed that something was wrong: the status of the storage server was “offline”.

I then looked into the alerts area and found one saying that there was not enough RAM to start the configured features.

OK then, the VSA doesn’t work with 256MB of RAM; so what is the minimum required to run the storage services?

After looking through several docs I found the answer in the P4000 Quick Start VSA user guide: the minimum amount of RAM required is 384MB for the laptop version and 1GB for the ESX version. The VSA Install and Configure Guide, which comes with the VSA, also provides the following values for the ESX version and for the new Hyper-V version (there’s a small sketch encoding these tiers right after the list):

  • <500GB to 4.5TB – 1GB of RAM
  • 4.5TB to 9TB – 2GB of RAM
  • 9TB to 10TB – 3GB of RAM
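
Just to make those tiers easy to reuse in the lab, here is a tiny Python sketch with the values taken straight from the list above; the function name and structure are mine, purely for illustration:

  # Documented VSA RAM tiers for the ESX/Hyper-V versions (storage managed -> minimum RAM).
  # Values copied from the Install and Configure Guide figures listed above.
  RAM_TIERS = [
      (4.5, 1),   # <500GB to 4.5TB -> 1GB of RAM
      (9.0, 2),   # 4.5TB to 9TB    -> 2GB of RAM
      (10.0, 3),  # 9TB to 10TB     -> 3GB of RAM
  ]

  def minimum_ram_gb(storage_tb):
      """Return the documented minimum RAM (GB) for a given amount of storage (TB)."""
      for max_tb, ram_gb in RAM_TIERS:
          if storage_tb <= max_tb:
              return ram_gb
      raise ValueError("More than 10TB is not covered by these tiers")

  print(minimum_ram_gb(0.5))  # 1
  print(minimum_ram_gb(6))    # 2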

After that I configured the VSA again with 384MB, the problem was fixed and the alarm disappeared.

Juanma.


4 responses to HP Lefthand VSA minimum memory requirements

  1. 

     The minimum amount of memory that seems to be required is a function of the ramdisks plus the overhead Hydra requires for managing the storage. The Linux kernel doesn’t require much more than sufficient memory to support the initrd (when used), but the VSA initializes three ramdisks and, of course, without Hydra and the memory overhead it requires, there’s no storage appliance.

     Since three ramdisks are required (the initrd, one for logs, and one for Hydra), the minimum will be the kernel’s minimum plus 3x the ramdisk size compiled into the kernel. LH optimized their initrd to barely fit in their kernel’s default ramdisk size, which I think was 16MB, so when I added debugging extras to mine the ramdisk size had to be made larger. Because I opted for a 32MB initrd to leave room for future growth, my minimum footprint was 96MB just for the ramdisks.
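
     As a rough back-of-the-envelope sketch in Python (the 32MB ramdisk size and the three ramdisks are the figures above; the kernel baseline is just a placeholder assumption, not an LH-published number):

     # Footprint estimate: kernel minimum + 3 ramdisks (initrd, logs, Hydra).
     RAMDISK_MB = 32          # ramdisk size compiled into the rebuilt kernel
     NUM_RAMDISKS = 3         # initrd + logs + Hydra
     KERNEL_BASELINE_MB = 64  # assumption only: bare kernel/userland working set

     ramdisk_total_mb = RAMDISK_MB * NUM_RAMDISKS          # 96MB just for the ramdisks
     estimate_mb = KERNEL_BASELINE_MB + ramdisk_total_mb   # plus whatever Hydra needs on top
     print(ramdisk_total_mb, estimate_mb)                  # 96 160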

     Although LH’s use of ramdisks for both logs and Hydra bodes extremely well for using a pair of USB flash drives for the VSA operating system, I never worried about physical memory since I planned on running it in a physical environment as wannabe P4000s.

     The FOM was a bit more picky about minimum memory requirements, and I’m not sure why. I found it required around 768MB to function properly after using my new kernel. Perhaps the extra memory was only needed when joining the cluster. I later increased it to 1GB just to be safe. From the results below, it doesn’t appear to need all that much for Hydra compared to the physical storage nodes.

     I didn’t tinker with the default kernel and initrd for very long since they didn’t include a complete bonding module.

     Below are some realistic figures from the top command with 1TB of storage in the whole cluster (500GB per node), where most of my volumes are 2-way replicated between the two nodes, with the FOM as a witness. I haven’t really touched them in nearly a year, but with only one power outage they’ve been up nearly 6 months, which should give a pretty realistic picture of long-term memory consumption in a clustered scenario.

    They currently act as cluster-shared storage for a Hyper-V cluster, and also provide storage for 3 actively used Hyper-V virtual desktop machines.

    OptiPlex GX620 Node 1
    19:55:41 up 198 days, 11:17, 0 users, load average: 0.00, 0.00, 0.00
    Mem: 2066148k total, 521096k used, 1545052k free, 57428k buffers
    Swap: 0k total, 0k used, 0k free, 88644k cached

    OptiPlex GX620 Node 2
    03:50:07 up 198 days, 11:12, 0 users, load average: 0.00, 0.00, 0.00
    Mem: 2066148k total, 558920k used, 1507228k free, 58428k buffers
    Swap: 0k total, 0k used, 0k free, 89764k cached

    FOM (can’t capture the text easily since I converted it to Hyper-V)
    Mem: about 156MBytes, so top reports.
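
     For a quick side-by-side, here is a trivial Python conversion of those “used” figures from kilobytes to megabytes (numbers copied from the top output above):

     # Convert the "used" values reported by top from kB to MB for comparison.
     used_kb = {
         "OptiPlex GX620 Node 1": 521096,
         "OptiPlex GX620 Node 2": 558920,
     }
     for node, kb in used_kb.items():
         print(f"{node}: ~{kb / 1024:.0f} MB used")  # ~509 MB and ~546 MB
     print("FOM: ~156 MB used")                      # as top reports on the FOM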

    Also of note, the storage nodes are running 8.1.00.0047.0 and the FOM 8.0.00.1682.0; I’ve been thinking about updating to 8.5 in the next few weeks.

  2. 

     Hi Michael, sorry for the delay in displaying your comment; it seems it fell into the spam folder.

     Thank you so much for the info; you clearly know the LH appliances better than I do, I just use them for labs :-).

     Anyway, I assumed those OptiPlexes were Hyper-V nodes running a pair of VSAs. I only use the VMware version, since my daily work has nothing to do with Hyper-V, but I’m sure it runs as smoothly as the ESX version.

  3. 

     Actually, the OptiPlexes running the VSAs are not Hyper-V hosts; I re-configured the VSA’s initrd to load the device drivers required to run natively (such as the 3ware driver and the Broadcom driver). Running on native hardware led me to learn more about how the VSA works, and I learned that LH supports many different platforms, including IBM servers, Dell PowerEdge, and HP ProLiants, or at least they did until HP bought them out.

     Very soon I’ll explore the EMC Clariion virtual appliance, which is also Linux-based but, unlike the LH VSA, has all the desirable modules pre-compiled. It also has CIFS and NFS support in addition to iSCSI.

     To help you better understand where Hyper-V fits into my situation: my two Hyper-V servers (one Dimension and one OptiPlex) needed some type of clustered shared storage, so they use the other two OptiPlexes as shared storage. It’s a lot easier to manage a VSA when it’s running natively and not on the free version of ESX, but it does take a little bit of effort to get it there.

