I have to say that the market for virtual private servers is a little different from the other two sides of the hosting business. Plain web hosting doesn’t really offer much in terms of customization. Of course, different services enable different software, such as cURL, ionCube, Ruby, or Python, but in the end you can always say for sure that the leading management panel is cPanel. There are other minor panels chosen through partnerships or for particular needs, plus a few free alternatives like the ever-dying Kloxo and the rather odd zPanel, but that’s it: you won’t find anything fundamentally different between web hosting plans. In the end you can define them as just “folder” offerings.
If you look at dedicated servers instead, what matters is of course the hardware: that’s where you find the leading argument, the basis on which you choose which provider to trust and which one to try.
But when you think about virtual private servers, there is a list of different options and nobody really tries to establish standards. It is clear that the virtualization technologies somehow don’t fully satisfy every need. The most common platforms are OpenVZ, Xen, and KVM, followed by a situational and often poor use of VMware and Hyper-V. The thing about these platforms is that each one often does something the others can’t; if you want something that simply does it all, you are not going to find it, and you will have to fall back on a dedicated server.
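Because each of these platforms leaves different fingerprints on the guest system, you can often tell from inside a VPS which one it runs on. Here is a minimal sketch of that idea; the probed `/proc` paths and the `systemd-detect-virt` tool are assumptions about a typical Linux guest, not an exhaustive check:

```python
import os
import shutil
import subprocess

def detect_virtualization() -> str:
    """Best-effort guess at the virtualization layer (a sketch, not exhaustive)."""
    # systemd-detect-virt reports names like openvz, xen, kvm, vmware, microsoft.
    if shutil.which("systemd-detect-virt"):
        out = subprocess.run(["systemd-detect-virt"],
                             capture_output=True, text=True)
        return out.stdout.strip() or "none"
    # Fallbacks: files that specific platforms expose inside their guests.
    if os.path.exists("/proc/user_beancounters"):  # OpenVZ resource counters
        return "openvz"
    if os.path.exists("/proc/xen"):                # present on Xen guests
        return "xen"
    return "unknown"

print("virtualization:", detect_virtualization())
```

On bare metal or an unrecognized setup this simply prints `none` or `unknown`, which is itself a hint of how fragmented the ecosystem is.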
“Ok, just go for a dedicated server then!”… Well, is it that simple? This layer of virtualization was created to distribute, in a more accessible way, resources that would otherwise be wasted. You virtualize things to avoid the “material” side of the software you are using; it is done for efficiency, to deploy a layer of modularized environments without the hassle of dealing with physical hardware, which often causes more problems and, of course, costs more.
Imagine a company that wants a set of virtual private servers to give each employee a working environment with more resources than the average desktop PC. This could be the purpose of a virtualization setup. Now suppose that, for a particular job, one employee needs a relatively high level of CPU usage for a prolonged time. How is the system going to manage it? This is one common question from the long list an administrator has to evaluate in order to choose the virtualization that fits his needs.

The bad part is that CPU sharing is one of the curses of virtualization technologies, because you are never going to be able to give the best to everyone; and knowing this, you actually NEVER try. In order to keep the system light and fast you never really use it thoroughly, because you have to expect the worst. This leaves you two options: either you deal with it, buy more resources, and spend more, or you go the other way, play it safe, and lower the CPU cap for every user. Either way you are an example of virtualization failure. If you virtualized a set of operating systems to avoid the cost of buying more dedicated servers, but then you buy more resources to keep the server working, that is a failure: you wanted to be cheap, right? And if you virtualized in order to give multiple users access to a larger pool of resources, but then end up limiting the resources for each one, you have achieved nothing: now you have a virtualized desktop PC, and is that really useful?
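The trade-off can be sketched with some back-of-the-envelope numbers; all figures here are hypothetical, just to make the two bad options concrete:

```python
# Hypothetical node: how many guests share how much hardware.
HOST_CORES = 16           # physical cores on the node
GUESTS = 20               # virtual servers sold on it
BURST_CORES_PER_GUEST = 4 # what one employee needs for a heavy, prolonged job

# Option 1: guarantee full burst capacity to everyone -> buy more hardware.
cores_for_full_burst = GUESTS * BURST_CORES_PER_GUEST

# Option 2: stay on the same box -> cap every guest's fair share.
fair_cap_per_guest = HOST_CORES / GUESTS

print("cores needed to guarantee bursts:", cores_for_full_burst)
print("fair CPU cap per guest (cores):", fair_cap_per_guest)
```

With these numbers you would need 80 cores to honor every burst (more hardware, more cost, the thing you virtualized to avoid), or you cap each guest at 0.8 of a core, which is weaker than the desktop PC you meant to replace.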
So what should we do? And how does all this show that the VPS market is messy?
…End part 1…