Piloting Windows 7 - Part 3: More Project Planning, Pilot Phases and Timelines

In the last post I started the planning process, but we are far from finished. We’ve outlined what we want and set our quality bars, but we haven’t mapped out the test cases and pilot phases or built a timeline yet. Let’s start by looking at test cases for the pilot.

Initial Testing for the Pilot

As this is just a pilot, you should not expect to be testing with the full rigor of a production deployment. That said, the more committed you are to actually deploying the operating system you are piloting, the stronger the argument for testing as much as possible in the early pilot phases; that investment pays off in the long run and means less testing for the production deployment. Your test cases should cover the following primary categories:

  • Application and hardware compatibility. Required applications and targeted hardware work with Windows 7. As discussed earlier, this may include 64-bit considerations.
  • Unattended operating system and application installation. Windows 7 is installed and configured without prompting users or administrators for more information. Application installations are fully automated or included in the base image. If you are evaluating Application Virtualization or desktop virtualization, considerations should be added to these tests.
  • User State Migration. Users get back their data and profile from the previous operating system – if it exists. This testing also includes any computer backups required for potential rollback to the previous user-specific environment (see the command sketch after this list).
  • Windows 7 base image validation. This ensures that the base image(s) can be installed on the hardware you are targeting and nothing was impacted by the System Preparation (sysprep) process.
  • The deployment process itself and end user outcome. This ensures the end-to-end process is working, infrastructure is delivering as expected and the migration experience of the user is acceptable. If you are evaluating Application Virtualization or desktop virtualization, considerations should be added to these tests.
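
To make the user state migration and base image validation bullets more concrete, here is a minimal command sketch. It assumes USMT 4.0 from the Windows AIK and the Windows 7 Sysprep tool; the store path, log names and option sets shown are placeholders and will vary with your environment.

    :: Capture user state on the old operating system with USMT ScanState (placeholder store path)
    scanstate \\server\migstore\%computername% /i:MigApp.xml /i:MigDocs.xml /o /c /l:scanstate.log

    :: Restore user state with LoadState once Windows 7 is installed
    loadstate \\server\migstore\%computername% /i:MigApp.xml /i:MigDocs.xml /c /l:loadstate.log

    :: Generalize a reference installation before capturing it as your base image
    sysprep /generalize /oobe /shutdown /unattend:unattend.xml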

While this list isn’t exhaustive, it covers the main areas for your initial test cases. Testing can be iterative and results-based, so it may uncover additional areas to test, or let you de-prioritize some of the main categories above if everything simply works as expected.

We’ll define the phases as an addendum to testing. When lab deployment is the first pilot phase, there is little to delineate it from “initial testing” or “initial piloting” – both are early tests of functionality, and operating system and general application testing will often bleed into the testing of automation elements. So let’s define our phases and assume that in Phase 1 a person in IT is actually using the computer for common tasks and we aren’t just looping installations of Windows images.

Phase 1: The Lab and IT Department – aka “Proof of Concept”

A typical pilot rollout has a couple of phases. The first phase involves your IT department gathering common hardware types – or standard hardware for organizations with standards in place – and installing, in a lab, the operating system and applications that comprise the current desktop standard image or standard build. This is often called a Proof of Concept, or POC for short. For the POC, you are performing light validation that everything works and noting what doesn’t. If you are thinking about moving to 64-bit, now is a good time to open up those test matrices as well and build systems using a 64-bit version of Windows 7. Also, if you are considering re-architecting how applications or the entire desktop is delivered (think Application Virtualization or desktop virtualization), you probably want to try out the options you have in mind in the lab – before you test these implementations on unsuspecting users.
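
One lab task that often comes up in this phase is getting out-of-box drivers for the hardware you have gathered into the image. As a sketch of one approach – assuming the Windows 7-era DISM tool and placeholder paths for the image, mount directory and driver folder – you can service the image offline before installing it on lab machines:

    :: Mount the Windows 7 image (index 1 used here as an example)
    dism /Mount-Wim /WimFile:C:\images\install.wim /Index:1 /MountDir:C:\mount

    :: Inject the drivers collected for your lab hardware
    dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse

    :: Commit the changes and unmount the image
    dism /Unmount-Wim /MountDir:C:\mount /Commit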

Phase 2: End Users and Production Hardware

When it comes to end user testing, depending on which operating system you are coming from, you probably want to start small and gradually grow your pilot user base. If your users are on Windows XP or Windows 2000, the transition to Windows 7 will be a significant change for many of them. For organizations with Windows Vista in production, you may be able to be a bit more aggressive with your timelines. If you are using an opt-in approach, you can phase in additional users on a schedule or based on helpdesk, issue and feedback load, ratcheting up the user count as your ability to support them grows.

With end users, you are paying particular attention to most of the primary categories of test cases I listed above: application and hardware compatibility, user state migration, Windows 7 base image validation, and the deployment process itself. As you roll out to end users, pay attention to end user communications, training and tips for using new features. The Enterprise Learning Framework helps the IT department author emails for communicating new Windows and/or Microsoft Office features to end users during the pilot and production deployment phases.

While the content Microsoft provides to aid in end user training may suffice, pay attention to how people use their PCs and make sure they are aware of and using new features. Find out which features resonate with people and observe where people have issues. You can use this information to augment your user training for the production deployment.

Establishing Timelines 

Timelines should follow from your project objectives and organization size. For a single-site, single-language pilot with limited applications, you can probably perform Phase 1 in as little as 2-3 weeks and Phase 2 in as little as 30 days. Once you add more geographies, applications, hardware, or changes to how desktop images or applications are delivered, you’ll need to add time accordingly. I haven’t seen many organizations where the initial pilot exceeds around 4 months in total. Every pilot is different, though, and there will be some that exceed 4 months, or where the “pilot” itself transitions into other phases of the deployment project, continuing as user validation for application compatibility mitigations. There is no one-size-fits-all pilot timeline, but the main objective here is to have a rough schedule in place based on your unique environment and to know what you are testing for and would like to validate.

With that, I will end part three of the series. In the next blog, I will highlight strategies for installing your image builds and harvesting and making sense of user feedback and information once Phase 2 has begun.

Stay tuned and thanks for reading,

Jeremy Chapman