Many customers over-provision their virtual environments, meaning they throw a lot of expensive CPU and memory resources at a problem. Over the last couple of years, I have worked with several different customers, and I can narrow the causes down to three main reasons:
1 – The application originally resided on a physical server, and its CPU and memory must be duplicated in the virtual environment (I call this CYAT: Cover Your A$$ Technology).
2 – Third-party software providers overestimate CPU and memory requirements (typically to compensate for bad code).
3 – LOB owners typically pay for the IT infrastructure when deploying a new solution and therefore want to feel they are getting the most out of their investment, so they over-allocate CPU and memory.
Remember, the three reasons above are not from Gartner; they reflect my personal experience in the field. However, I think most of you reading this article might concur! Overall, the goal is the same across the board: overestimate requirements so there is never an issue. The problem with this approach is that it is VERY EXPENSIVE.
Let's try to understand what happened and why capacity management must be taken seriously by IT now more than ever. Over the last couple of years, companies have been riding the "do more with less" bandwagon. One forecast puts it this way: the "adoption rate of server virtualization in 2012 is predicted to be 14.3% of total new physical x86 servers and will reach 21.3% of total servers in 2016. Total virtual OS instances will contribute 70.2% of total OS instances in 2012 and reach 82.4% of total OSs in 2016."
As companies embrace virtualization, which has been proven to save costs, they are starting to move their Tier-1 and Tier-2 (Exchange/Oracle/SAP) infrastructures into this shared-resource environment. Because these Tier-1/Tier-2 applications typically have SLAs tied to them, performance becomes a concern. How do you overcome this red flag? Over-allocate CPU and memory. So what is the problem, one might ask? The virtualization platform that was chosen to "do more with less," and was supposed to deliver a 30:1 consolidation ratio, is now delivering a 4:1 virtual-to-physical ratio.
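To see why that ratio gap is so expensive, here is a quick back-of-the-envelope calculation. The VM count and per-host cost are made-up illustrative numbers, not figures from any customer engagement:

```python
# Illustrative arithmetic only: the VM count and host price are
# hypothetical examples, not real customer or vendor figures.
import math

vm_count = 120          # VMs to host (hypothetical)
cost_per_host = 10_000  # USD per physical server (hypothetical)

for ratio in (30, 4):   # promised 30:1 vs. actual 4:1 consolidation
    hosts = math.ceil(vm_count / ratio)
    print(f"{ratio}:1 ratio -> {hosts} hosts, ${hosts * cost_per_host:,}")
```

With these example numbers, the promised 30:1 ratio needs 4 hosts while the actual 4:1 ratio needs 30, so the same workload costs roughly seven and a half times as much in hardware alone, before licensing, power, and cooling.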
So how can IT demonstrate to the business that their Tier-1 servers can be downsized and still hit and exceed their performance SLAs? HP vPV. vPV gives VMware and Microsoft Hyper-V administrators and virtualization subject matter experts (vSMEs) the diagnostic tools essential to support their decision processes and problem resolution, including how to identify oversized VMs. Ramkumar Devanathan's HP blog demonstrates how to leverage the HP vPV tool to identify and correct these oversized VMs.
Feel free to download the tool and try it against your VMware or Hyper-V infrastructure. Because the tool is free, it is an excellent way to start collecting these valuable metrics to support your 30:1 virtualization initiatives.
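To give a feel for the kind of rightsizing logic such a tool automates, here is a minimal sketch. This is not vPV's actual algorithm or data model; the metric names, thresholds, and sample numbers are all assumptions for illustration:

```python
# Hypothetical oversized-VM detection sketch. The field names,
# thresholds, and sample data below are illustrative assumptions,
# not HP vPV's real algorithm or schema.

def find_oversized_vms(vms, cpu_threshold=30.0, mem_threshold=40.0):
    """Flag VMs whose peak CPU and memory utilization (as a percent
    of allocated capacity) both stay below the given thresholds."""
    oversized = []
    for vm in vms:
        if (vm["peak_cpu_pct"] < cpu_threshold
                and vm["peak_mem_pct"] < mem_threshold):
            oversized.append(vm["name"])
    return oversized

# Peak utilization a capacity tool might collect over, say, 30 days.
sample = [
    {"name": "exch-01", "peak_cpu_pct": 85.0, "peak_mem_pct": 90.0},
    {"name": "sap-02",  "peak_cpu_pct": 12.0, "peak_mem_pct": 25.0},
    {"name": "web-03",  "peak_cpu_pct": 28.0, "peak_mem_pct": 35.0},
]

print(find_oversized_vms(sample))  # flags sap-02 and web-03 as downsizing candidates
```

The idea is simply that a VM whose observed peak demand never approaches its allocation is a candidate for reclaiming CPU and memory; a real tool would look at longer histories, percentiles, and workload seasonality before recommending a change.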
After all, would you build a stadium for 100 fans? The Montreal Expos did!
Enjoy and be social!
15-Minute Demo Available via http://www.ctcmanage.com