I do a lot of work with Hyper-V virtual machines in a test environment. My Hyper-V host is a white box kludged together cheaply, but with reasonable hardware.
The VMs that are used for Microsoft courses use base drives for the core OS and then differencing drives for each individual VM. The bouncing back and forth between the base drive and the differencing drive tends to drag down performance. In addition, I often use snapshots to give me a backout point when developing labs, which reduces performance even more.
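For context, each lab VM sits on a differencing disk that chains back to a shared base disk. A sketch of how one of these chains is created with the Hyper-V PowerShell module that ships with Server 2012 (the paths and VM name are examples, not the actual course layout):

```powershell
# Create a differencing disk for one lab VM on top of a shared base disk.
# Reads fall through to the parent; writes land in the child disk.
# Paths and names here are illustrative only.
New-VHD -Path "D:\VMs\LON-DC1\LON-DC1.vhd" `
        -ParentPath "D:\Base\Server2012Base.vhd" `
        -Differencing
```

Because every read that misses the child disk has to be serviced from the parent, the base disk's storage speed matters for every VM in the chain at once.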
I've got the storage system set up as four 7200 RPM SATA drives in a RAID 10 array using the Intel Rapid Storage Technology enterprise controller built into the motherboard. This gives far better performance than a single drive, but still not enough when I'm running many VMs. Storage speed is the bottleneck in this system.
Today I bought a Kingston 240 GB HyperX SSD to improve performance. Since the drive is not very large, I thought I'd start by moving only the base drives onto the SSD to see what the performance improvement was like. Then I mounted the SSD as the folder that I copied the base files from. This retained the proper association between the differencing drives and the base drives.
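Mounting a volume into an empty NTFS folder instead of giving it a drive letter is what keeps the differencing disks' parent paths valid. A sketch using the Storage module cmdlets (the disk number and folder path are assumptions for my setup; `mountvol.exe` or Disk Management would work just as well):

```powershell
# Mount the SSD volume into the folder where the base disks used to live,
# so the existing differencing disks keep resolving their parent paths.
# Disk/partition numbers and the access path are examples only.
Add-PartitionAccessPath -DiskNumber 2 -PartitionNumber 1 -AccessPath "D:\Base"
```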
The performance improvement was huge! Tasks that were painfully slow before now behave like they do on normal servers. I figure that labs which took me an hour or more to complete will now take about 20 minutes just because of the reduced wait time. As an example, for course 20417C there is a lab where AD FS is configured to authenticate an application. Accessing this application the first time used to take 2-3 minutes. Now it happens in a few seconds.
The differencing drives and the snapshots are still stored on the RAID 10 array. So, all write activity and some read activity is still done there. Even in this configuration the performance difference is amazing.
Altogether I have about 200 GB of base drives. Basically, they filled the SSD. Then I remembered that Windows Server 2012 has data deduplication functionality. I had never tried it before, but thought it might work.
Windows Server 2012 R2 adds support for deduplicating open virtual hard disk files (aimed at VDI deployments), but I didn't require that for these base disks because they are static and don't change.
I thought that deduplication might cut the space usage in half since I had multiple base disks for each operating system, such as Windows 7 and Windows Server 2012. However, when I ran the evaluation tool (DDPEval.exe) it showed that the 200 GB would be reduced to about 63 GB without compression and 35 GB with compression. It was right!
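DDPEval.exe is available on a server once the Data Deduplication role files are present, and you just point it at a volume or folder. A sketch of the run against my base-disk folder (the path is an example for my setup):

```powershell
# Estimate deduplication savings for a path before committing to it.
# The tool reports projected space savings with and without compression.
C:\Windows\System32\DDPEval.exe D:\Base
```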
After enabling deduplication, disk utilization dropped from 200 GB to 34 GB, leaving over 200 GB free on the SSD. I was worried that this might impact performance of the VMs, but I could not see a performance difference at all.
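The short version of what enabling it involves, via PowerShell (the drive letter is an example; by default deduplication only processes files older than a few days, so I lower that to pick up the freshly copied base disks):

```powershell
# Install the deduplication feature and enable it on the SSD volume.
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:"

# Process files immediately instead of waiting for them to age.
Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0

# Kick off an optimization job now rather than waiting for the schedule,
# then check the savings once it completes.
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus -Volume "D:"
```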
Overall, I'm very impressed.
For details about how to implement data deduplication in Windows Server 2012 see here:
Tuesday, December 31, 2013
Friday, December 6, 2013
Free Online Hyper-V Training
I work with Hyper-V a lot for Microsoft training and we've started using it as our standard virtualization platform for clients. I've seen a lot of improvements in it since it was first introduced and honestly think that for smaller organizations it's easier to work with than VMware. For larger organizations, I think it's a contender.
One of the biggest impediments to implementation is knowledge. Microsoft is making online training for Hyper-V in Windows Server 2012 R2 available for free. They're also kicking in a certification exam.
Check it out: