Virtualisation (1 of 3) - What is it? What's all the fuss?

I get to talk to a lot of people in my role at Microsoft, and the subject of virtualisation seems to pop up more and more often nowadays. Most people talk about virtualisation in the same sentence as server consolidation and see it as a way to reduce the number of servers that they currently manage. It takes a while to explain Microsoft's server consolidation strategy and where virtualisation fits in, so I figured that writing an article about it would get the message out there a bit quicker. I get to publish this in Microsoft Ireland's TechNet Newsflash and have decided to write it as a three-part series to ensure I can get to a sufficient level of detail. To give it some structure, I'm going to use the first instalment to get us all onto the same page - give us all some common words and definitions and maybe dispel a myth or two. The second will be to put a more business-oriented slant onto the subject (to discuss the benefits) and the third will be where I get to explain Microsoft's offerings in this field.

Where to start? The Internet is always a good place, I find, and this definition from Wikipedia is a pretty good one:

'In computing, virtualisation is a broad term that refers to the abstraction of computer resources.'

So, using this definition, I could abstract many physical things (like computers) and have them appear and behave like one logical thing (e.g., a computer cluster), or I could make one physical thing (like a disk) appear to be many logical things (like partitions). I can virtualise anything from an individual component or capability of a system to an entire server or collection of servers (and anything in-between). For the purpose of this document, I don't want to focus on Virtual LANs, Virtual Private Networks or Virtual Storage (SANs) and the like, but would like to talk about virtualising computer resources.
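
To make the 'one physical thing, many logical things' idea a little more concrete, here is a toy Python sketch (the PhysicalDisk and Partition names are mine, purely for illustration - this is not how any real volume manager works) that carves a single disk into partitions, each of which looks like an independent, smaller disk:

# Toy illustration: one physical resource (a disk) presented as
# several independent logical resources (partitions).
class PhysicalDisk:
    def __init__(self, size_gb):
        self.size_gb = size_gb        # total physical capacity
        self.allocated_gb = 0         # how much has been carved off so far

    def carve(self, size_gb):
        # Hand out a slice of the physical disk as a new logical 'disk'.
        if self.allocated_gb + size_gb > self.size_gb:
            raise ValueError("not enough space left on the physical disk")
        partition = Partition(self, self.allocated_gb, size_gb)
        self.allocated_gb += size_gb
        return partition

class Partition:
    # Behaves like a small disk in its own right, but is just a slice of one.
    def __init__(self, disk, start_gb, size_gb):
        self.disk = disk
        self.start_gb = start_gb
        self.size_gb = size_gb

disk = PhysicalDisk(500)
c_drive = disk.carve(200)   # each partition appears to be its own disk
d_drive = disk.carve(300)
print(c_drive.size_gb, d_drive.size_gb)   # 200 300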

Some larger computers allow me to physically partition them into a number of smaller ones (one big, physical box that contains a number of smaller physical computers, the configuration of which can be modified by the system administrator). I don't want to talk about one of these; let me start with a single computer (it doesn't really matter how many processors or disks it has, nor does it matter how much memory it has - let's just assume it has enough) and see what we can virtualise.

I guess the most obvious option is machine virtualisation. This is where I create multiple simulated, virtual computers in software. In this scenario, the physical machine is defined as the 'host' machine and all the other virtual computers are 'guests'. Each guest machine has virtualised system resources available to it - CPU, memory, disk, etc. There are two big flavours of this technology: one for the desktop and one for the server. The desktop variant is designed to let me load up another operating system as an application on my desktop and is mainly targeted at test, development and demonstrations (it is also used as a solution to application compatibility problems - it lets me run older applications that will not run on modern operating systems). The server variant is where I attempt to use the 80 or 90 per cent of the computing power that traditionally goes unused on a modern server. Implementations are designed for production server consolidation (run multiple server workloads on fewer physical servers, thus reducing power, space and cooling, and run each physical server at a much higher utilisation). They are also used for test and development and for business continuity (fire up a virtual machine in the event of a failed server, with no need for duplicate physical environments).

Modern examples of this technology are: Virtual Server, Virtual PC, VMware, Xen.
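
To picture the host/guest relationship described above, here is a deliberately simplified Python sketch (the Host and Guest classes are mine, for illustration only - this is not how Virtual Server, VMware or Xen work internally). The host owns the physical resources and each guest sees only the virtual slice it has been handed:

# Toy model of machine virtualisation: the host owns the physical resources
# and each guest only ever sees the virtual slice it has been given.
class Host:
    def __init__(self, cpus, memory_mb):
        self.cpus = cpus
        self.memory_mb = memory_mb
        self.guests = []

    def create_guest(self, name, vcpus, memory_mb):
        # A real hypervisor does far more (scheduling, device emulation, ...);
        # here we only check that the host has the memory to hand out.
        used_mb = sum(g.memory_mb for g in self.guests)
        if used_mb + memory_mb > self.memory_mb:
            raise ValueError("host does not have enough physical memory")
        guest = Guest(name, vcpus, memory_mb)
        self.guests.append(guest)
        return guest

class Guest:
    # A virtual machine: it believes it has its own CPU, memory and disk.
    def __init__(self, name, vcpus, memory_mb):
        self.name = name
        self.vcpus = vcpus
        self.memory_mb = memory_mb

host = Host(cpus=4, memory_mb=8192)
web = host.create_guest("web-server", vcpus=1, memory_mb=2048)
mail = host.create_guest("mail-server", vcpus=2, memory_mb=4096)
print([g.name for g in host.guests])   # ['web-server', 'mail-server']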

Another virtualisation option, which a lot of us already use, is desktop or session virtualisation. This is where the server computer runs the applications, performs the heavy processing and remotes the user interaction (keyboard, video and mouse) over the network to the user's terminal or PC. The popularity of this grew a while ago due to the management overhead of deploying applications to multiple PCs - the idea was that it was easier to deploy applications centrally to a few servers than to many PCs (and an application update to a few servers was easier than to many PCs). This driver has now gone away, as it is as easy nowadays to deploy an application to a thousand PCs as it is to one. The obvious limitations of this option are that it requires the network to be always present (there is no offline capability) and that it cannot use the local processing power of the terminal or PC (it is not very good at video- or graphics-intensive applications, for example).

Examples of this technology are: Terminal Services, X Windows, Citrix.
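
A rough way to picture the session model in code is the toy Python sketch below (it contains nothing of the real Terminal Services, X Windows or Citrix protocols): the server keeps all of the application state and does all of the processing, while the client merely relays keystrokes in and screen output back.

# Simplified picture of desktop/session virtualisation: the application
# runs entirely on the server; the client only sends input and shows output.
class ServerSession:
    def __init__(self):
        self.document = ""            # all application state lives server-side

    def handle_input(self, keystrokes):
        # The 'heavy' processing happens here, on the server.
        self.document += keystrokes
        return self.render_screen()

    def render_screen(self):
        # Only screen output travels back over the network.
        return "[EDITOR] " + self.document

class ThinClient:
    # Knows nothing about the application; it just relays input and output.
    def __init__(self, session):
        self.session = session

    def type(self, keystrokes):
        screen = self.session.handle_input(keystrokes)
        print(screen)                 # in reality: draw the pixels received

client = ThinClient(ServerSession())
client.type("Hello, ")
client.type("world")                  # prints: [EDITOR] Hello, world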

Yet another virtualisation option is application virtualisation. This is where the system services (file system, registry, etc.) are virtualised on an application-by-application basis. Applications never actually get installed and as such do not interfere with the host operating system (they run within their own little 'sandbox' and use their own DLLs - the end of 'DLL Hell' as we know it). This is becoming a very interesting option for environments with locked-down desktops or where application compatibility is an issue (this option allows multiple versions of the same application to run side by side, with no conflicts). This uses a very similar model to desktop virtualisation, but overcomes its limitations (I can run offline and use the local processing power). Even though this comes across as a client PC solution, it can be used together with Terminal Services to deploy and run applications on a terminal server (applications that never get installed and therefore don't 'mess' with its configuration).

Examples of this technology are: SoftGrid, DataSynapse, Thinstall.
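
The 'sandbox' idea can be sketched as a per-application overlay on top of the shared system settings. The following toy Python model is purely illustrative (it bears no relation to how SoftGrid or the others actually implement it), but it shows how two versions of the same application can each see their own 'registry' without touching the host's:

# Toy model of application virtualisation: each application gets a private
# overlay of 'registry' settings. Reads fall through to the real system,
# writes stay inside the sandbox, so two versions can run side by side.
SYSTEM_REGISTRY = {"Software\\Widget\\Version": "1.0"}

class VirtualisedApp:
    def __init__(self, name):
        self.name = name
        self.overlay = {}             # this app's private view of the registry

    def write(self, key, value):
        self.overlay[key] = value     # never touches the host's settings

    def read(self, key):
        # Check the sandbox first, then fall back to the shared system value.
        return self.overlay.get(key, SYSTEM_REGISTRY.get(key))

old_app = VirtualisedApp("widget-v1")
new_app = VirtualisedApp("widget-v2")
new_app.write("Software\\Widget\\Version", "2.0")

print(old_app.read("Software\\Widget\\Version"))       # 1.0 - the system value
print(new_app.read("Software\\Widget\\Version"))       # 2.0 - its own sandbox
print(SYSTEM_REGISTRY["Software\\Widget\\Version"])    # 1.0 - host untouched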

You may have heard the saying 'when all you have is a hammer, everything looks like a nail' - if virtualisation is the only tool in your kit, it will look like the answer to your server consolidation 'issue'. Inside Microsoft, we see things slightly differently. If you want to consolidate databases, our solution is SQL Server (run all your databases on fewer, clustered SQL Servers). If you want to consolidate messaging, our solution is Exchange (a couple of centralised, clustered Exchange servers would suffice for most organisations). If you want to consolidate any servers running a similar workload, our answer is fewer Windows servers, clustered if you need high availability. Windows Server 2003 Enterprise Edition (and/or Datacenter Edition) can be an answer for consolidating applications that have traditionally required their own dedicated servers - both of these versions of Windows include a technology called Windows System Resource Manager (WSRM), which lets administrators control how CPU resources are allocated to applications, services and processes (the caveat here is that all the applications need to be able to run on that version of the OS). I believe that if you have an application that needs to be performant, you'll want it to run on its own dedicated hardware (not virtualised). So virtualisation, in the context of server consolidation, has a role running multiple servers that do not have large system resource requirements. In other words, take all those servers you have that currently tick over at 10-20 per cent and run them virtually.
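
To put some rough numbers on that last point, here is a back-of-the-envelope calculation in Python (every figure in it is an illustrative assumption, not a measurement) showing how a handful of lightly loaded servers could share far fewer physical hosts:

import math

# Rough consolidation arithmetic: how many physical hosts would be needed
# to absorb a set of servers that tick over at 10-20 per cent utilisation?
candidate_utilisations = [0.10, 0.15, 0.12, 0.20, 0.10, 0.18, 0.15]
target_host_utilisation = 0.80    # run each host much hotter, but keep headroom

total_demand = sum(candidate_utilisations)    # roughly one server's worth of work
hosts_needed = math.ceil(total_demand / target_host_utilisation)

print(len(candidate_utilisations), "lightly loaded servers")
print("Combined demand: %.2f of one server's capacity" % total_demand)
print("Physical hosts needed at %.0f%% target utilisation: %d"
      % (target_host_utilisation * 100, hosts_needed))

In this made-up example, seven servers collapse onto two physical hosts.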

So, just to recap what I intend to cover in the next two parts: in a fortnight I will explore why you would want to embrace virtualisation, and in the third instalment I will explain Microsoft's offerings in this field.

One last point (to get you thinking): Every machine you run, either virtually or physically, needs to be managed - more on this in part two.

Dave.
