Virtualisation (2 of 3) - Why should you care?
Everything we implement has to have a great Return on Investment (ROI) and the Total Cost of Ownership (TCO) must always be low. We all want to increase the availability of our systems, and we all want to 'do more with less'. Here's a good one: We all want to 'enable agility' (whatever that means).
I'm not taking a deliberate swipe at marketing departments (I've probably said most of those myself at some time or other), it's just that, apparently, we all spend 70 per cent of our time and money 'fighting fires' (keeping our IT systems up and running) and only 30 per cent adding value to the business (implementing new systems and solving new challenges). Most people I say that to tell me that I don't know the half of it; the mix is closer to 90:10.
So, is virtualisation the 'Holy Grail'? Is it going to solve all of our issues and turn us into super heroes?
It depends!
Let's look at some of the challenges you face and see if virtualisation can help.
I can't remember where I learnt this, but apparently you can spend as much money keeping a server cool as you do keeping it switched on. This means that the fewer physical servers you have, the smaller your electricity bill will be. It also means that with fewer servers, the space you need to house them all can be smaller. Can virtualisation help here? You bet it can. As part of your server consolidation strategy, virtualisation will let you run fewer physical servers, which will address your power and space issues as well as let you run your servers at a much higher utilisation.
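If you want to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. Every figure in it (the wattage, the price per kWh, the consolidation ratio, the 'cooling costs as much as power' rule of thumb) is an assumption for illustration only - plug in your own values.

```python
# Back-of-the-envelope estimate of the power and cooling saving from consolidation.
# Every figure here (wattage, price per kWh, the 'cooling doubles the bill' rule)
# is an illustrative assumption - substitute your own numbers.

HOURS_PER_YEAR = 24 * 365

def annual_cost(servers, watts_per_server, price_per_kwh=0.15):
    """Electricity for the servers, doubled to account for cooling them."""
    power_kwh = servers * watts_per_server * HOURS_PER_YEAR / 1000
    return power_kwh * price_per_kwh * 2   # assumption: cooling costs as much as power

before = annual_cost(servers=50, watts_per_server=400)   # 50 lightly used physical boxes
after = annual_cost(servers=5, watts_per_server=600)     # 5 busier virtualisation hosts
print(f"Before: {before:,.0f} per year; after: {after:,.0f}; saving: {before - after:,.0f}")
```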
How long does it take to provision a new server (including the time taken to get financial approval, the lead time for delivery and the time to build and implement)? How long do you think it would take if the physical server infrastructure was already in place and all you were provisioning was a virtual server (that is already created as a template and is already up to date)? You've got it - the difference is minutes compared to weeks or months.
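To make the 'minutes rather than weeks' point a bit more concrete, here's a minimal sketch of template-based provisioning. The paths and the register_vm helper are invented for illustration - they don't refer to any real product's API - but the shape of the operation (copy a pre-built, patched template, then register the new machine) is the whole story.

```python
import shutil
import time
from pathlib import Path

TEMPLATE_DIR = Path(r"\\fileserver\vm-templates")  # hypothetical library of patched templates
VM_DIR = Path(r"D:\VirtualMachines")               # hypothetical storage on the host

def register_vm(name, disk, memory_mb, cpus):
    """Placeholder for whatever the management tool does to make the VM bootable."""
    print(f"Registered {name}: disk={disk}, {memory_mb} MB RAM, {cpus} virtual CPU(s)")

def provision_vm(name, template="server-base.vhd"):
    """Clone a pre-built, fully patched template disk and register the new machine.
    The physical capacity already exists, so the whole job is a copy plus some metadata."""
    disk = VM_DIR / f"{name}.vhd"
    shutil.copyfile(TEMPLATE_DIR / template, disk)
    register_vm(name, disk, memory_mb=1024, cpus=1)
    return disk

start = time.time()
provision_vm("intranet-web-03")
print(f"Provisioned in {time.time() - start:.0f} seconds, not weeks")
```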
How much do you spend on providing high availability? How much do you spend on back-ups? If you virtualise your operating systems and applications, you can back each one up as a single file, and replicate or move them to other available servers and desktops.
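Because a virtual machine boils down to a handful of files, even a naive script can replicate it somewhere else. This is only a sketch under assumed paths - a real backup tool would snapshot or quiesce the machine first so the copy is consistent.

```python
import shutil
from datetime import date
from pathlib import Path

VM_DIR = Path(r"D:\VirtualMachines")                 # hypothetical: one .vhd per machine
REPLICA_DIR = Path(r"\\standby-host\vm-replicas")    # hypothetical standby location

def replicate_vms():
    """Copy each machine's single disk file to a standby host, date-stamped.
    A real tool would snapshot or pause the machine first so the copy is consistent;
    this sketch skips that step for brevity."""
    stamp = date.today().isoformat()
    for disk in VM_DIR.glob("*.vhd"):
        target = REPLICA_DIR / f"{disk.stem}-{stamp}{disk.suffix}"
        shutil.copyfile(disk, target)
        print(f"Replicated {disk.name} to {target}")

replicate_vms()
```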
Most organisations spend a lot of time testing for application incompatibilities before implementing any infrastructure change (new operating system, service pack or patch). If you used desktop virtualisation (Virtual PC or similar) or application virtualisation, this lengthy testing would disappear. I feel a need to explain myself here. If you have an application that works fine on, say, Windows XP and you know that it currently fails on Windows Vista, then an option would be to run a virtual machine (running XP) within your Windows Vista host. This gives you all of the benefits of the new operating system (better security, easier management, etc.) plus your application 'just works' (because nothing has changed as far as it's concerned - it's still running on XP). Your other option would be to use application virtualisation. Here your application runs in its own little sandbox, and if it ran fine when that sandbox was on XP, it will run fine when it's running on Windows Vista (the sandbox never changed).
Session virtualisation can also help with this scenario (and to be honest, this is one of the few scenarios for which I still see a need for terminal services). If you have an application that has very specific requirements (for example, it won't behave on a new operating system or service pack, or it needs a lot of testing before it can be put into production), then run it on a terminal server and remote the keyboard, video and mouse to the users over the network. You can actually mix application and session virtualisation together and come up with a very neat solution - run your applications in a sandbox on the terminal servers!
All the above makes it sound like virtualisation can solve a lot of issues, but on its own it can introduce almost as many. At the end of part one of this series, I left you with this comment: Every machine you run, either virtually or physically, needs to be managed. Let's imagine that I virtualise everything and enable self-service provisioning of new servers. How long do you think it would take to have a hundred servers? A thousand servers? Tens of thousands of servers? Who is going to back them up? Who is going to keep them up to date? Who is going to monitor them? Who is going to keep them in compliance with corporate policies? Who is going to manage them? Who is going to pay for them all? Without decent management products, you're just making your infrastructure even more complicated.
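To see why the management piece matters, imagine the simplest possible compliance sweep over that ever-growing inventory. The records and thresholds below are invented for illustration; the point is that somebody (or something) has to run this loop over every machine, virtual or physical.

```python
from datetime import date, timedelta

# Hypothetical inventory - in practice this comes from a management tool, and there
# might be thousands of entries rather than three.
inventory = [
    {"name": "intranet-web-03", "owner": "web team", "patched": date(2007, 9, 20), "backed_up": date(2007, 10, 1)},
    {"name": "test-sql-17",     "owner": None,       "patched": date(2007, 6, 1),  "backed_up": None},
    {"name": "dev-build-42",    "owner": "dev team", "patched": date(2007, 10, 2), "backed_up": date(2007, 7, 15)},
]

def compliance_issues(vm, today=date(2007, 10, 8), max_age=timedelta(days=30)):
    """Flag the corporate-policy problems for one machine (thresholds are assumptions)."""
    issues = []
    if vm["owner"] is None:
        issues.append("no owner - who pays for it?")
    if today - vm["patched"] > max_age:
        issues.append("patches out of date")
    if vm["backed_up"] is None or today - vm["backed_up"] > max_age:
        issues.append("no recent backup")
    return issues

for vm in inventory:
    for issue in compliance_issues(vm):
        print(f"{vm['name']}: {issue}")
```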
Ideally, you use management tools that can differentiate between physical assets and virtual ones. If I need to reboot a server after applying an update, I want to know that it is actually a host server for a number of virtual ones - I don't want to 'accidentally' reboot a dozen mission-critical servers that have a 24 by 7 service level agreement. I feel another need for an explanation coming on (or at least a solution to the stated problem). How would you reboot a server that was the host for a dozen servers that can't be taken down? You would have them running on top of a cluster. The management tools would have the knowledge of what to do: the running, mission-critical servers would be failed over to another host (to maintain the SLA) before the server in question was rebooted, and they could be moved back afterwards if required.
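Here's a sketch of that maintenance workflow in Python. The Cluster class and the helper functions are toy stand-ins for what a virtualisation-aware management tool would actually do - the bit that matters is the ordering: drain the host first, then patch and reboot it.

```python
class Cluster:
    """Toy model of a host cluster: which guest VMs run on which physical node."""
    def __init__(self, placement):
        self.placement = placement              # {node_name: [vm_name, ...]}

    def guests_on(self, node):
        return list(self.placement.get(node, []))

    def least_loaded_node(self, exclude):
        candidates = {n: vms for n, vms in self.placement.items() if n != exclude}
        return min(candidates, key=lambda n: len(candidates[n]))

    def live_migrate(self, vm, target):
        for vms in self.placement.values():
            if vm in vms:
                vms.remove(vm)
        self.placement[target].append(vm)
        print(f"Migrated {vm} to {target} (still running)")

def apply_update(node):
    print(f"Patching {node}")

def reboot(node):
    print(f"Rebooting {node}")

def patch_and_reboot(node, cluster):
    """Drain a virtualisation host before rebooting it, so its mission-critical
    guests never go down. If the node hosts no guests, this is just patch + reboot."""
    guests = cluster.guests_on(node)
    for vm in guests:
        cluster.live_migrate(vm, cluster.least_loaded_node(exclude=node))
    apply_update(node)
    reboot(node)
    for vm in guests:                           # optionally move them back afterwards
        cluster.live_migrate(vm, node)

cluster = Cluster({"host-a": ["sql-01", "exchange-01"], "host-b": ["web-01"]})
patch_and_reboot("host-a", cluster)
```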
It would be great if the management tools had the knowledge within them to know what to do next, to do the right thing. Imagine a world where you are installing an application and are asked what SLA you require (99.999 per cent uptime and less than a second response time). The system would know what was required, in terms of architecture, to provide such an SLA (geographically clustered, mirrored databases and multiple, load-balanced servers at every tier) and would implement and monitor that configuration. Over time, if the service level was going to be missed, the system would automatically implement the best practice resolution (put more memory, processors or I/O into a virtual machine; introduce another server into the presentation tier). This might sound a bit 'far-fetched', but it's not that far off.
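As a thought experiment, the 'system that knows what the SLA requires' could be sketched as nothing more than a lookup plus a remediation rule. The thresholds and architecture templates below are purely illustrative assumptions, not anybody's product.

```python
def architecture_for(uptime_percent):
    """Map a requested SLA to an architecture template (illustrative thresholds only)."""
    if uptime_percent >= 99.999:
        return {"database": "geo-clustered, mirrored", "app_tier_servers": 4,
                "web_tier_servers": 4, "load_balanced": True}
    if uptime_percent >= 99.9:
        return {"database": "clustered", "app_tier_servers": 2,
                "web_tier_servers": 2, "load_balanced": True}
    return {"database": "single instance", "app_tier_servers": 1,
            "web_tier_servers": 1, "load_balanced": False}

def remediate(architecture, measured_response_ms, target_response_ms=1000):
    """If the response-time SLA is about to be missed, apply a 'best practice' fix:
    here, simply add another server to the web (presentation) tier."""
    if measured_response_ms > target_response_ms:
        architecture["web_tier_servers"] += 1
        print(f"SLA at risk: scaled web tier to {architecture['web_tier_servers']} servers")
    return architecture

arch = architecture_for(99.999)
print(arch)
remediate(arch, measured_response_ms=1400)
```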
So, it depends. Virtualisation is definitely here to stay. With hardware advances moving as quickly as they are, you'd be hard pressed to maximise the utilisation of a modern server just by running a single workload. With great management solutions and a 'holistic' view of the entire platform, virtualisation may well turn us all into super heroes!
This day fortnight, I will cover Microsoft's offerings in the virtualisation space. I'll explain both the technologies we have now and what's coming (our complete solution). I guess I'll also have to touch on cost and licensing.
Oh, and another last point (to get you thinking): Who would you go to for support with a SuSE Linux Enterprise Server 9 running within Microsoft's Virtual Server?
Dave.
Comments
Anonymous
January 01, 2003
Dave Northey has written a great 3 part article on Virtualization: Virtualisation (1 of 3) - What is

Anonymous
October 08, 2007
Hi Dave, I don't quite get it when you say "You can actually mix application and session virtualisation together and come up with a very neat solution - run your applications in a sandbox on the terminal servers!". The hybrid solution you describe is not a standard one and couldn't be used in an enterprise with a large number of users running lots of mission-critical business applications (e.g. a global bank). To me it appears to be a makeshift arrangement of the existing Terminal Services and other application virtualisation software, and hence I think it is not at all scalable. However, I don't know much about the possibilities here and would like to hear about it from you. Thanks, Arun.PC arunpc.wordpress.com