Booting up 500 desktops on 11 spindles in under 6mins

 

Pretty cool right? 

A little backstory – One of the things the End User Computing team inside the vSpecialist organization was tasked with was updating some of the tools, demos and various other “rules of thumb” type metrics we use when speaking to customers.  A few weeks ago our team met with VMware, Cisco and Citrix to hear about their End User Computing strategy as well as to hash out various sizing ideas for working with customers on VDI designs.  Out of these meetings and discussions, an idea was brought to the table to help our field teams and partners understand the value of features like FAST Cache when doing VDI designs.  During one of our discussions, we decided that we needed to test a config that would 1) be easy for our teams to understand and recite when speaking with customers, and 2) be something we’ve tested and could stand behind.  Out of that discussion, and in our best “Herman Cain” mantra, we came up with 5, 5, 5, 2 for 500 desktops (non-persistent).  That breaks down to:

  • 5(ea) SSD/EFD Drives
  • 5(ea) 15k RPM SAS drives
  • 5(ea) 7200 RPM NearLine-SAS Drives
  • 2(ea) SSD/EFD set up as FAST Cache (Read/Write)
  • NFS Datastores
  • VMware View 5 on vSphere 5

 

The first 3 drive types would be put into a RAID 5 (4+1) FAST VP pool (1 pool) and the 2(ea) SSD/EFDs would be set up as FAST Cache (You can read more about both of these here).   We were more than confident that this would work, but we wanted to test it to see if we could push the number of desktops above 500.
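To put some rough numbers behind that 5/5/5 pool, here is a back-of-envelope sketch in the "rules of thumb" spirit of the post. The per-drive IOPS figures and the 80/20 write/read mix are my illustrative assumptions, not numbers from the actual test:

```python
# Back-of-envelope pool sizing, the "rules of thumb" way. The per-drive
# IOPS figures below are assumptions for illustration only.
DRIVE_IOPS = {"ssd": 3500, "sas_15k": 180, "nl_sas": 90}
RAID5_WRITE_PENALTY = 4  # each host write costs ~4 back-end IOs on RAID 5

def usable_host_iops(drives, write_ratio=0.8):
    """Host IOPS a RAID 5 drive set can absorb for a given write mix."""
    backend = sum(DRIVE_IOPS[kind] * count for kind, count in drives.items())
    # Reads cost 1 back-end IO; writes cost RAID5_WRITE_PENALTY.
    cost_per_host_io = (1 - write_ratio) + write_ratio * RAID5_WRITE_PENALTY
    return backend / cost_per_host_io

pool = {"ssd": 5, "sas_15k": 5, "nl_sas": 5}  # the original 5/5/5 pool
print(round(usable_host_iops(pool)))  # ≈ 5544 host IOPS at 80% writes
```

Under those assumptions the raw pool handles roughly 11 IOPS per desktop across 500 desktops – in the usual steady-state ballpark – and FAST Cache only improves on that by soaking up writes on SSD before the RAID 5 penalty ever hits spinning disk.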

Well, a funny thing happened on the way to testing.  We were able to adjust our spindle count….DOWN!  The proof is in the pudding, as they say, so the testing can be found on Andre Leibovici’s site (finish reading here, then go to his site – he already gets enough blog hits 🙂 ), but here is the breakdown of the Boot Storm he created and the spindle count used to test against it:

  • 3(ea) SSD/EFD Drives (RAID 5 – 2+1)
  • 3(ea) 15k RPM SAS Drives (RAID 5 – 2+1)
  • 3(ea) 7200 RPM Near-Line SAS (RAID 5 – 2+1)
  • 2(ea) 100GB SSD/EFD for FAST Cache
  • No dedicated SSD/EFD for Replicas – we just tossed the Replica into the FAST VP Pool.
  • VNX with NFS Datastores
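The reduced config above survives a boot storm with room to spare, and some rough math shows why the NL-SAS tier sat idle. The per-desktop boot-storm demand and per-drive IOPS figures here are assumed rules of thumb (and since boot storms are read-heavy, I ignore the RAID 5 write penalty) – not measurements from Andre's test:

```python
# Rough boot-storm math for the reduced 3/3/3 + FAST Cache config.
# All figures are assumed rules of thumb, not measured results.
SSD_IOPS = 3500             # assumed per SSD/EFD
SAS_15K_IOPS = 180          # assumed per 15k RPM SAS drive
BOOT_IOPS_PER_DESKTOP = 26  # assumed per-desktop demand while booting

def boot_storm_headroom(desktops):
    """IOPS left over after the SSD/EFD and 15k tiers absorb the storm."""
    demand = desktops * BOOT_IOPS_PER_DESKTOP
    supply = (2 * SSD_IOPS        # FAST Cache pair
              + 3 * SSD_IOPS      # SSD/EFD tier (RAID 5 2+1)
              + 3 * SAS_15K_IOPS) # 15k SAS tier (RAID 5 2+1)
    return supply - demand

print(boot_storm_headroom(510))  # positive → NL-SAS tier never needed
```

With those assumptions, 510 desktops demand around 13,000 IOPS while the SSD/EFD and 15k tiers alone can serve roughly 18,000 – which lines up with the NL-SAS drives never getting touched.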

 

Now just to be clear, we are not suggesting that if you have a 500 desktop requirement you should run out and buy this 11 drive system (but you certainly could).  There are more things you need to focus on than just the desktop number.  We focused on performance and not capacity, so you will want to pay attention to that.  BUT, C’MON !! How about that for a problem to have when designing for VDI !!  [Screenshot: booting up 531 desktops in 5 minutes]  Also, when I spoke to Andre, who headed up the testing, he mentioned that during the testing the NL-SAS drives never got touched – so technically we ran this config on 6 spindles plus 2 SSD/EFDs in FAST Cache.  If you are adding that up at home, that’s 8 spindles for 510 non-persistent VMware View 5, Windows 7 (32-bit) desktops.

Driving down the cost of virtual desktops is one of the key things we try to do when designing VDI solutions, and storage (specifically spindle count) is usually one of the top line items.  FAST Cache type features that can use SSD/EFDs as a read and write cache can really help drive the spindle count down.  When you drive the spindle count down, you drive down the cost of the array – and that helps drive down the cost of the VDI deployment.

At the end of the day, this is a HUGE win for companies looking to do virtual desktops but were concerned about the storage costs. 

Dare I say, 2012 is the year of VDI 🙂

 

@vTexan
12 thoughts on “Booting up 500 desktops on 11 spindles in under 6mins”

  1. Interesting but perhaps we need a different word from “spindles” when talking about SSDs/EFDs!
    Ideas?

    1. I’m so old school that everything in a drive bay I call a spindle 🙂 I probably should just say “drives” but then I’m sure that’s probably not right either 🙂

  2. This is excellent, as it goes a long way in simplifying the FAST Suite and VDI solution. I did have some questions, and was wondering if you could clarify things for me.
    1) My understanding was that the recommendation for File was not to use FAST VP pools. Seems like you are using NFS on FAST VP. Has the recommendation changed?
    2) NL-SAS use was minimal since most of the IOPS were driven by the other 2 tiers. IOPS to the EFD tier would have minimal benefit from FAST Cache, as they would be redundant. Would it be more cost effective to have a solution that used maybe 10 x 10K SAS drives (4+1, 4+1) and 8 EFD drives (4+4)? All data drives, along with vault and hot spare drives, should be able to fit in the 25 drive shelf. 500 user VDI solution in one shelf, how cool is that!
    Just wondering if such a configuration has been tested in your labs.
    Cheers!
    Patrick

    1. Hey John – thanks for the comment – always love to see my friends from Dell reading my stuff !! So, 2000 desktops on 6 SSD drives is pretty impressive – care to go into any amount of detail around the config? Did you guys do any boot storm testing? What about steady-state workloads? What was the usable capacity per desktop? What size SSD drives were you using? Was this View with Linked Clones or was it something like Citrix PVS? Just curious.

      If you note, we listed all the information and recorded the boot storms and steady-state workloads for all to see – would love to see that from Dell/Compellent !!

      Tommyt

  3. I’m getting ready to test the very same scenario on Compellent, but I’m using Citrix for my broker, hosted on VMware. I’m still deciding on a provisioning method – Citrix Provisioning Server, or a combination with Atlantis Computing to pack in more VDI sessions. I too have a good mix of drives to play with. I’ll be posting the results in a few weeks on Burdweiser.com.

    1. Awesome – but as I’m sure you know, PVS is WAY different than View in sizing (at least on the storage side)…at least it has been in my experience. Let me know when you post it, I would love to look it over. Would love to see a before Atlantis and then after for testing. Also, what will you be using to generate the workload? One of the things that frustrates me the most is that testing tends to focus on more reads than writes, and as I’m sure you know, VDI workload (Citrix or View) is WAY more write heavy than reads. Close to 80% writes and 20% reads !! CRAZY !!

      Again – let me know when you get done !! the more details the better !!

      1. I’m still looking for options to generate workloads (any suggestions?). I have a Compellent test array that has 6 SSDs, so I’m going to see how much I can squeeze on those. I’ve primarily worked with EMC, but we are testing this Dell seed unit to see how well it does. I’m working with Dell engineers and I’ve got a conference with Atlantis tomorrow to talk things over. I might even test with View if time permits. I’ll be detailing everything in a post, possibly in a few weeks.

      2. We’ve used Login VSI for the tests – you just want to make sure you dial in the workload so that it’s heavy on writes and low on reads – like 80% writes and 20% reads – and staggering the workloads so they all aren’t opening up the same things at the same time etc. Andre over at MyVirtualCloud.Net would be a good resource for finding out some ideas for testing.

Leave a comment