Piloting Windows 7 - Part 2: Initial Project Planning for a Windows 7 Pilot

Continuing the series with our guest blogger, Jeremy Chapman.

As with any IT project, the first step is building a plan. There are several things you’ll want to accomplish with a pilot, and depending on your organization, the importance of each validation area will vary. I think of the pilot as trying to achieve the following key tasks:

  1. Technology validation. This not only validates the desktop environment you are delivering, but also how you deliver it. It covers everything from inventorying current users’ desktops to imaging, applications, and deployment technologies.
  2. Process validation. Process validation ensures that you have covered all the bases for the project and implementation infrastructure, and that you have the right people and resources in place for the production rollout.
  3. User validation. User validation is not about validating users’ abilities; instead, it gauges the impact of the deployment process, the new desktop environment, and especially the application experience on your users.

 
Once you have the project goals in mind, there are many ways to execute the pilot while minimizing user disruption. The idea is to start small and gradually increase the number of pilot seats – ensuring that you have an adequate representation of users, hardware types and sites (or geographic locations). Now is the time to document a plan for rolling out the pilot. You will also want to define success criteria relatively early in the process, along with what should constitute sign-off for each phase. This typically means the number of issues and the issue severity that you are willing to live with during each phase – recognizing that things should improve as you get closer to the production deployment. The concept of severity is important here, as with any testing. You can use the following as a sample guideline for classifying severity:

  • Severity 1. A fatal error, or an issue that needs a critical fix before production
  • Severity 2. Error is non-fatal, but it still needs to be fixed before production
  • Severity 3. A fix isn’t required before production deployment
  • Severity 4. Generally a nitpick, does not affect usage, performance or the average user, but somebody had to log it
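
To make the scale concrete, here is a minimal sketch of how pilot issues could be logged against these severity levels. It is purely illustrative – the blog doesn’t prescribe any tooling, and the class names, fields and machine names below are all hypothetical:

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Severity scale from the guideline above (1 is the most severe)."""
    FATAL = 1      # fatal error, or a critical fix is needed before production
    MUST_FIX = 2   # non-fatal, but still needs to be fixed before production
    CAN_SHIP = 3   # a fix isn't required before production deployment
    NITPICK = 4    # cosmetic; doesn't affect usage, performance or the average user


@dataclass
class PilotIssue:
    """One issue logged against a pilot desktop."""
    machine: str       # hypothetical asset name for the pilot desktop
    summary: str
    severity: Severity


# Example issue log for a small pilot ring (all data is made up)
issue_log = [
    PilotIssue("PC-0012", "Imaging failed on the first attempt", Severity.MUST_FIX),
    PilotIssue("PC-0034", "Desktop wallpaper was not migrated", Severity.NITPICK),
    PilotIssue("PC-0051", "Line-of-business application will not launch", Severity.FATAL),
]
```

Keeping severity as an ordinal value makes it easy to filter the log later when you evaluate a quality gate for a phase.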

Your quality gates should reflect these severity levels and include some count or measure for success. These numbers should get better as you approach production deployment. Here are a couple of examples:

  • System performance. Were the desktop computers successfully migrated? The target for success is that at least 90 percent of all migrated desktop computers logged only severity 3 or 4 issues.
  • User satisfaction. How satisfied are the users with the outcome of the migration? Were all of the authorized data and settings migrated? Are their new installations usable? Did they need to call the help desk? How much downtime did they experience? Severity can be judged individually for each question above – you might want to stick with the 90 percent target here as well.
  • Operations readiness. How satisfied is information technology (IT) Operations with how the pilot went? Were there critical issues that had to be resolved? Success in this case might be that no more than 10% of users were dissatisfied with the delivery, service and issue resolutions.
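
As a rough illustration of turning these gates into pass/fail numbers – again a hedged sketch with hypothetical data and names, not any prescribed tooling – the snippet below checks the 90 percent system-performance target and a 10 percent dissatisfaction ceiling:

```python
def system_performance_gate(issues_by_machine, migrated_machines, target=0.90):
    """Pass if at least `target` of migrated machines logged only severity 3 or 4 issues.

    issues_by_machine maps a (hypothetical) machine name to the list of
    severity numbers (1-4) logged against it during the pilot phase.
    """
    blocked = {m for m, sevs in issues_by_machine.items() if any(s <= 2 for s in sevs)}
    clean = [m for m in migrated_machines if m not in blocked]
    return len(clean) / len(migrated_machines) >= target


def user_satisfaction_gate(dissatisfied_users, surveyed_users, ceiling=0.10):
    """Pass if no more than `ceiling` of surveyed users report dissatisfaction."""
    return dissatisfied_users / surveyed_users <= ceiling


# Hypothetical results for a five-machine pilot ring
issues_by_machine = {"PC-0012": [2, 4], "PC-0034": [4], "PC-0051": [1]}
migrated = ["PC-0012", "PC-0034", "PC-0051", "PC-0077", "PC-0101"]

print(system_performance_gate(issues_by_machine, migrated))  # False: only 3 of 5 machines (60%) are issue-clean
print(user_satisfaction_gate(1, 20))                         # True: 5% dissatisfied, under the 10% ceiling
```

In practice you would pull these numbers from your issue tracker and pilot survey; the point is simply that each gate reduces to a check you can apply before promoting the pilot to its next phase.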

By now you should have project goals, a system for assigning and prioritizing issues, and a few quality gates defined for sign-off when moving from one phase to the next. Now is the time to define the high-level phases for the project, with timelines and who is targeted in each phase. I’ll save that for Part 3 of this blog series though.

Stay tuned and thanks for reading,

Jeremy Chapman