Improving Your Image: Sector-Based, File-Based, and Sysprep - What Makes the Most Sense? Part 3: Deploy-time Build Automation and Recommendations

This is the third and final blog in the series. In the last blog, The Pros and Cons, I covered the main benefits and underlying drawbacks of file-based and sector-based core images. I intentionally stuck to the core image files themselves (the .WIM, .V2I, .GHO, .TIB, .VHD, etc.) and purposely left out the automation needed to tailor installations at install time. That automation actually plays a role in the overall recommendation of which type of image to use – file-based or sector-based. I would argue that thin, file-based images would make a lot less sense if the automation weren’t there to support the additional functions.

The Task Sequencer… the Key to Thin and Layered Images

There are always critical innovations that mark the turning point when something goes from painful to usable, and for automated deployment that innovation is the task sequencer. Some might argue drive cloning tools were the turning point, and they were definitely important, but all of the stuff you need to do before and after a cloned drive is applied can take even more work. Think about it: you’re gathering system information, validating that the install itself will work, backing up data that needs to be reapplied later, moving in and out of Windows PE (Preinstallation Environment), joining a domain, adding or removing applications after the OS is laid down, and restoring the data you backed up at the beginning. With only drive cloning tools, this would work well on a factory floor where all hardware and builds are equal, but once you start throwing variables at it, you need task sequence automation. Alternatively, you could keep full control and do all of this manually, spending about 4-5 hours per desktop wherever there is data and personalization… or strip steps out of the process… or keep your control and automate the whole thing and more.

We have quite a few things we can use to automate pre- and post-Windows-installation routines. Some will script it all pretty heavily and just use an unattend.xml file to automate the Windows setup portion; others will seek more extensible ways. Thankfully, the pioneers of task sequencing pooled resources at Microsoft, and the fathers of what used to be Automated Deployment Services (ADS) and Automated Provisioning Framework (APF) got together and built a shared task sequencing engine for System Center Configuration Manager. A free version of this engine is also in the Microsoft Deployment Toolkit (MDT). I can remember a time around 2005 when there were half a dozen extensible task sequencing engines for deployment-related tasks buried in various Microsoft products and Solution Accelerators, but the major ones now share the same core, which makes it easier to move from MDT to Configuration Manager and back. Let’s see what a deployment task sequence normally looks like:

[Image: MDT Deployment Task Sequence]

As you can see from the task sequence, these are all the steps you’d take to migrate a user from an old operating system to a new one. I had to paste together two UI images to show everything in the sequence (which explains the curious-looking scroll bars), but you get an idea of everything that needs to happen. This is a default MDT task sequence, so you don’t need to figure all of this out yourself – just populate the deployment share and go. Scripting everything yourself is possible, but getting it all right would be a nightmare. For those who do figure it out and build it all via scripts, or build their own task sequencing mechanism, the major benefits are typically job security and knowing precisely how everything works, but it will take far longer to reinvent a wheel that has already been invented and is officially supported (even in the free MDT version). If you want to save time and achieve a good, customizable outcome, use the task sequencer in Configuration Manager or MDT.

The reason the task sequencer is the key to thin and layered images is that it can dynamically install packages (such as language packs and updates), inject necessary drivers, and install or uninstall applications to and from the core image. That means you can keep a lot of this material out of your core image and apply only what is needed, just in time, at install time. Since WIM files are file-based and offline-serviceable, you can inject components into a mounted image, or, in the case of deployment automation, steps at the right point in the process can install these components for you.
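To give a feel for what offline servicing looks like outside of a task sequence, here is a minimal sketch using the DISM PowerShell cmdlets (dism.exe offers equivalent switches); the paths and package names are hypothetical placeholders.

```powershell
# Minimal offline-servicing sketch; all paths are hypothetical placeholders.
$mountDir = "C:\Mount"
New-Item -ItemType Directory -Path $mountDir -Force | Out-Null

# Mount the first image inside the WIM file for offline servicing
Mount-WindowsImage -ImagePath "D:\Images\install.wim" -Index 1 -Path $mountDir

# Inject a package (for example, a language pack or update .cab)
Add-WindowsPackage -Path $mountDir -PackagePath "D:\Packages\lp.cab"

# Inject out-of-box drivers, recursing through a driver folder
Add-WindowsDriver -Path $mountDir -Driver "D:\Drivers" -Recurse

# Commit the changes back into the WIM and unmount
Dismount-WindowsImage -Path $mountDir -Save
```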

You can manage drivers and put your faith in plug-and-play IDs (which seem to be getting more reliable in general), or, if you want complete control, you can assign drivers based on hardware make/model queries, as sketched below. The only images you really need to pack additional non-in-box drivers into are Windows PE images and, for the occasional unexpected NIC or mass storage driver requirement, full OS images (tip: look out for in-box RAID requirements on client hardware).
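The make/model logic boils down to something like the following sketch; the driver share layout and server name are assumptions, and in MDT or Configuration Manager this is handled by selection profiles or driver packages rather than a hand-rolled script.

```powershell
# Hypothetical layout: drivers pre-sorted into folders named after each hardware model,
# e.g. \\DeployServer\Drivers$\Latitude E6410
$model = (Get-WmiObject -Class Win32_ComputerSystem).Model
$driverPath = Join-Path "\\DeployServer\Drivers$" $model

if (Test-Path $driverPath) {
    # Inject only that model's drivers into the applied, still-offline image
    Add-WindowsDriver -Path "C:\" -Driver $driverPath -Recurse
}
else {
    Write-Warning "No driver folder for '$model'; relying on in-box plug-and-play drivers."
}
```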

Beyond drivers, we can also change the operating system language before first logon, apply Windows updates, add things like Virtual PC and, importantly, install applications that are not needed by every user. All of these things support the goal and vision of a single, dynamically composable image that respects user- and hardware-specific needs at install time. You might think this is really hard to build, but this is how MDT and Configuration Manager are intended to be used out of the box, in their default configurations and task sequence templates.
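An install-time application step can be as simple as the sketch below; the application names, share paths and the $Region value are hypothetical, and in practice the value would come from task sequence variables or MDT’s rules (CustomSettings.ini).

```powershell
# Hypothetical per-region application logic run near the end of a deployment
$Region = "EMEA"   # would normally come from a task sequence variable

# Everyone gets the corporate baseline application
Start-Process msiexec.exe -ArgumentList '/i "\\DeployServer\Apps$\BaselineApp.msi" /qn /norestart' -Wait

# Only this region gets its specific application
if ($Region -eq "EMEA") {
    Start-Process msiexec.exe -ArgumentList '/i "\\DeployServer\Apps$\EmeaReportingTool.msi" /qn /norestart' -Wait
}
```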

Thick or Thin Images – Which are Better?

If there ever was a religious debate about imaging, the top theme would be “thick vs. thin imaging.” Answer: it depends. There are always instances where thick images make more sense: manufacturing, cases where EVERYONE needs an identical build and set of applications, where one language is spoken, and so on. In almost all other cases, it makes more sense to have something that can be configured at deploy time. If you feel the need for speed and that is your biggest concern, then thicker imaging can be faster, but you can get creative with it: have your build automation uninstall unnecessary applications and migrate user state, as in the sketch below. The tools are there and can do anything you want them to do before and after the core image is applied. Most will use a combination of both thin and thick, a “hybrid image”, and follow the 80/20 rule: pack in the essential and infrequently changing applications that 80% of the user population needs; the other 20% can be applied as needed at install time.
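To make the “get creative” part concrete, here is a rough sketch pairing USMT state capture and restore with an uninstall of an application the thick image carries but a given user doesn’t need; the paths, product GUID and XML choices are placeholders, and MDT/Configuration Manager expose these as dedicated task sequence steps.

```powershell
# 1. Before the image is applied: capture user state with USMT (paths are placeholders)
& "C:\USMT\amd64\scanstate.exe" "\\DeployServer\State$\$env:COMPUTERNAME" /o /c /i:MigDocs.xml /i:MigApp.xml

# ... apply the thick image and boot into the new OS ...

# 2. Remove an application the thick image carried but this user does not need
#    (the GUID below is a placeholder for the real product code)
Start-Process msiexec.exe -ArgumentList '/x "{00000000-0000-0000-0000-000000000000}" /qn /norestart' -Wait

# 3. Restore the user state captured earlier
& "C:\USMT\amd64\loadstate.exe" "\\DeployServer\State$\$env:COMPUTERNAME" /c /i:MigDocs.xml /i:MigApp.xml
```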

If you were paying attention in the last blog, you read that I was thinking about the similarities between thick file-based images and sector-based images. If you equate all thick images with monolithic things that include all drivers, updates, and applications, then the decision criteria around sector-based imaging are roughly equivalent to those of a thick file-based image. (They don’t need to be thought of like this, by the way, because thick file-based images can still be serviced offline, and both sector-based and file-based images can be customized with build automation.) I think a number of people do think like this, because the most frequently asked imaging question I get is “How do I use ImageX and related tools like the drive imaging tools I’m used to using?” Compared to the newer sector-based tools that incorporate the Volume Shadow Copy service and can run online, an imaging tool that requires you to boot into Windows PE or a similar environment to capture an image is admittedly more difficult in general, but tools like MDT will generate the Windows PE environments, run Sysprep and do everything else for you to capture a “golden” or “reference” computer for cloning.
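For reference, the manual version of what MDT’s Sysprep and Capture sequence automates looks roughly like this; the image name and paths are placeholders.

```powershell
# On the reference computer: generalize the installation and shut it down
& "$env:WINDIR\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown

# Then boot the reference computer into Windows PE and capture the volume with
# ImageX from the Windows AIK into a file-based .WIM image
imagex.exe /capture C: D:\Captures\Win7-Reference.wim "Windows 7 Reference" /compress fast
```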

Tool Recommendations for Imaging

With all of the imaging options put forward, I can start to turn them into real recommendations. There is nothing really “new” or “revolutionary” here; a lot of it is common sense. As I talk with more and more desktop administrator friends, I also find that the reason many of these processes exist as they do is tradition and the whole “it’s good enough” syndrome. I’m not buying that excuse, though. Once they tell me the user needs to send their machine to a lab for a day and either be without a computer or use a loaner; that a tech needs to visit the user’s desk with a pre-imaged hard drive and a screwdriver (and maybe ask the user to back up whatever data they want to keep beforehand); or describe one of the other all-too-common taboo deployment practices – I know there is a better, more automated way that doesn’t take someone like Mark Russinovich to learn and master.

If you do need a UI for everything, it’s there in System Center Configuration Manager and MDT. As a comment on the first blog post highlights, some IT folks have an aversion to command line interfaces (CLIs). I tend to do whatever is fastest, and that means I’m in CMD or PowerShell quite a bit. If you want to customize things beyond the in-box functionality, create rules, or run DISM.exe or ImageX.exe manually, you can, and it usually makes sense to know what the tools do by themselves. But to answer that original commenter about the use of CLIs: you don’t need to go into a CLI to get everything working – that is actually the point of MDT in relation to the Windows Automated Installation Kit’s command line tools. Some might argue that people in IT roles having an aversion to CLIs highlights a bigger issue, but that is another topic for another blog post I probably won’t be writing.
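Even if MDT drives everything for you, a couple of read-only commands are worth knowing just to see what is inside a WIM; the image path below is a placeholder.

```powershell
# Inspect the images contained in a WIM file (same information, two tools)
imagex.exe /info D:\Images\install.wim
Dism.exe /Get-WimInfo /WimFile:D:\Images\install.wim
```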

With all that in mind, let’s take a look at when the different imaging methods make sense. I enumerate three common approaches. “Traditional Thick Sector-Based Images with Limited Install-time Customization” essentially means what most people are doing today: building a reference computer with a bunch of applications and available drivers, then (hopefully) running Sysprep and capturing it. “File-based Images with Build Automation for User and Hardware Customization” is the process where a typically thinner, file-based image is used alongside heavy deploy-time build automation to customize the environment as needed by the user or hardware. “Thick File-based Images used like Traditional Sector-Based Images with Limited Install-Time Customization” is related to the first two, where one is comfortable with the first approach and would like to retrofit it to the file-based imaging tools because they are offline-serviceable and free (the frequently asked “How do I use MDT like Ghost?” question). On the vertical axis are the common and up-and-coming desktop service architectures and considerations.

| Desktop Environment Type / Image and Build Type | Traditional Thick Sector-Based Images with Limited Install-time Customization | File-based Images with Build Automation for User and Hardware Customization | Thick File-based Images used like Traditional Sector-Based Images with Limited Install-Time Customization |
| --- | --- | --- | --- |
| Thick local client environment with user and region-specific applications, local user state and variance in hardware | No | Yes | No |
| Completely homogeneous environment for all users (no local user state, one language, draconian hardware standardization) – either physical or hosted | Yes | No | Yes |
| Thin client hardware environment with main user desktop environment hosted in the datacenter | Yes | Yes | Yes |
| Datacenter-hosted user-specific desktop environments | No | Yes | No |
| OEM computer manufacturing process | Yes | No | Yes |
| Any combination of the environment types listed in this column | No | Yes | No |

“It depends” is the right answer again. If you look down the left column and identify your environment type, you can see which imaging types and build automation levels make the most sense. There are shades of grey in everything, and there will also be debates about Sysprep usage in hosted desktop environments, where all hardware is de facto identical as a virtual machine, but that would warrant another complete blog in itself. If your environment looks like a combination of two or more items in the left column, then you should definitely be looking at file-based imaging with build automation.

Basically, anything that might cause unnecessary image sprawl should drive the conversation toward more build automation and file-based images. Image sprawl tends to multiply if you use sector-based images and have hardware, region, language and user-specific customization to contend with. Sometimes these factors are not even in your control; some governments, for example, don’t allow the use of certain wireless frequencies, while others may not allow hard drive encryption or Trusted Platform Modules in hardware. If this sounds like you, and you want to send the same image to your OEM provider that you use with your deployment servers, then you’ll typically need to use composable builds that change dynamically based on environmental variables, as in the sketch below.
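A composable rule of that kind can be as simple as the following sketch; the region flag is a hypothetical deployment variable, while the TPM query uses the standard WMI class.

```powershell
# Hypothetical rule: only attempt drive encryption where a TPM is present and policy allows it
$tpm = Get-WmiObject -Namespace "root\cimv2\Security\MicrosoftTpm" -Class Win32_Tpm -ErrorAction SilentlyContinue
$encryptionAllowed = ($env:DeployRegion -ne "NoEncryptionRegion")   # placeholder policy check

if ($tpm -and $encryptionAllowed) {
    Write-Output "Enable the drive encryption step for this build."
} else {
    Write-Output "Skip drive encryption for this build."
}
```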

I think most people fall into the “Thick local client environment with user and region-specific applications, local user state and variance in hardware” row at the moment. If the visionaries are right and virtual desktop infrastructures, stateless PCs and “the cloud” are indeed the future of desktop computing, then the table covers most of those scenarios as well. My goal here was to get you thinking about your current imaging approaches and considering other options if you are using thick, sector-based images to deploy Windows today. In some cases, sticking with what you have might be the best option, but I think that will be the exception rather than the rule – especially if you want to save time and enjoy automating complex processes with tools that aren’t really that hard to use; but I’ll let you decide.

Thanks for reading,

Jeremy

Windows Deployment