Re-imaging part 2

I got some questions yesterday about what imaging software I'm using. It's an internal Windows tool called "Total Control". The way it works is you boot the machine you want to set up from the disc. By default it creates three partitions (C, D, and E). The C drive holds the OS that you image and restore to. The images are stored on E, and you can keep all your data there too, since that partition is never touched. D contains a safe OS, so if the OS on C is hosed you can still get in. I've never needed the safe OS; I think it's more for developers working on Windows itself, who may hit situations where the OS gets hosed.
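
If you want to sanity-check that layout after a restore, a quick script like this would do it (just a sketch; the drive letters are the Total Control defaults described above, and the role labels are my shorthand):

    # Quick post-restore sanity check: confirm the C/D/E layout exists
    # and report free space. Drive letters are the Total Control
    # defaults; the role labels are shorthand for the description above.
    import shutil

    layout = {
        "C:\\": "OS partition (wiped and restored)",
        "D:\\": "safe OS (rescue boot)",
        "E:\\": "images + data (never touched)",
    }

    for drive, role in layout.items():
        usage = shutil.disk_usage(drive)
        print(f"{drive}  {role}: {usage.free / 2**30:.1f} GB free")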

There is a server piece to Total Control that controls the imaging, and a small client utility that runs on the OS (they talk to each other when running). I have another "email" machine (running XP) that I don't touch much except for email and music, and the server piece is a very small app on it that lets me control any of my machines that I set up with Total Control and that have the client app running. It's as easy as selecting the machine from the list, picking one of the images to restore, and hitting go. Ten minutes later my dev box is wiped clean and I'm at the logon screen.
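
To give a feel for the shape of that interaction, here's a toy sketch; this is NOT Total Control's actual protocol (which is internal), and the machine name, port, and message format are all invented for illustration:

    # Toy sketch of the remote-restore pattern: the server app sends a
    # named client machine a command telling it which image to restore.
    # Host, port, and message format are made up for illustration.
    import socket

    HOST, PORT = "devbox1", 9999  # hypothetical client machine and port

    def send_restore_command(image_name: str) -> str:
        with socket.create_connection((HOST, PORT), timeout=10) as s:
            s.sendall(f"RESTORE {image_name}\n".encode())
            return s.recv(1024).decode()  # e.g. an acknowledgement

    print(send_restore_command("clean-dev-image"))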

So, to continue my imaging process from yesterday: after re-imaging, I found I had to get 11 new Windows patches (which just so happened to be released yesterday). Somebody's been doing some work! It's funny: every time we release patches I read negative articles about them on the web, but I feel good about them, because each one gets us a step closer to being more secure.

After the patches I installed TweakUI from the Windows XP PowerToys, which Jonathan Hardwick recommended in a comment on yesterday's post, to point my Favorites at my E drive. This is great, thanks Jonathan! It looks like I can set a ton of other things with this tool as well, but I was too busy to explore it yesterday. Now I have one less step each time I re-image.
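
As far as I can tell, what TweakUI is doing under the hood is repointing the Favorites shell folder in the registry. Here's a sketch of the same change in Python (the mechanism is my assumption, and E:\Favorites is just the path I chose; back up your registry before poking at it):

    # Repoint the Favorites shell folder to the E drive, which I believe
    # is the change TweakUI makes for you.
    import winreg

    KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Favorites", 0,
                          winreg.REG_EXPAND_SZ, r"E:\Favorites")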

After that I had one last software package to install that wasn't on my previous image. Since I'm new to blogging, I recently installed a blog reader called "SharpReader". It seems pretty decent for a free reader, though I haven't tried any others yet to compare. Any suggestions?

I figured it was time to take a snapshot of this machine, so I went into the Total Control server software and told it to take an image. That took about an hour (must be all the compression).

Once that was done I was ready to install Whidbey. Figuring out a good build to install can be tricky. We have daily builds to choose from, and I knew I needed one built after Sunday, since that's when we took an FI (see yesterday's blog). That left me with Monday's or Tuesday's build (daily builds may not be complete until the afternoon, and it was about 5:00 PM when I got to this step). We have web sites that post whether each build succeeded or was abandoned, and both looked OK. There's another site that gives status on BVTs (build verification tests); Tuesday's weren't complete yet, but I decided to take a chance and install it. It took about an hour to install Whidbey with all the documentation, and the install went well.
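
The decision I just made by hand boils down to a simple filter: take the newest daily build that postdates the FI and wasn't abandoned. Something like this (the build records here are invented for illustration; the real status lives on our internal build and BVT pages):

    # Pick the newest daily build that postdates the FI and succeeded.
    # These records are made up; real status is on internal pages.
    from datetime import date

    builds = [
        {"day": date(2005, 2, 7), "status": "ok", "bvt": "passed"},
        {"day": date(2005, 2, 8), "status": "ok", "bvt": "pending"},
    ]
    fi_date = date(2005, 2, 6)  # Sunday's forward integration

    candidates = [b for b in builds
                  if b["day"] > fi_date and b["status"] == "ok"]
    best = max(candidates, key=lambda b: b["day"])
    print(f"Installing build from {best['day']} (BVT: {best['bvt']})")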

Now it was time to sync up the Team Foundation source code and build it locally. We are currently dogfooding Hatteras, so from the command line I run "ht.exe get", which gives me all the latest source code for our project. We also need some code that isn't in our Hatteras DB, so I synced that up from another internal source control system (to be replaced by Hatteras at some point) as well. I was then ready to build the code. It can take a couple of hours to build everything, so I kicked it off and headed home.
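
Scripted, the sync-and-build sequence looks roughly like this. Only "ht.exe get" is the real command from above; the second sync and the build command are hypothetical placeholders, since those tools are internal:

    # Sync-and-build, sketched as a script. Only "ht.exe get" is real;
    # the other two commands are placeholder names for internal tools.
    import subprocess

    def run(cmd: str) -> None:
        print(">", cmd)
        subprocess.run(cmd, shell=True, check=True)

    run("ht.exe get")         # latest Team Foundation sources from Hatteras
    run("internal_scc.exe sync")  # placeholder: the other source control
    run("build_all.cmd")      # placeholder: full local build (a couple hours)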

When I got home, and after some fighting time with my two-year-old son (when I get home he always asks, "Hey Daddy, wanna fight?" How can I say no to that? I love being a dad!), I RAS'd in. The build was done, so I kicked off a script to install the TFS sources and set up my data tier, then another script to set up my client. I use my dev box as my client and application tier, and a second box as my data tier.

I then ran the project creation wizard as a sanity check to see if things were working. It ran flawlessly, and I was able to enter some bugs and query them.

There was a time, a month or so ago, when we had to do this almost weekly, but that's slowing down now as the product and APIs get much more stable.

Now I'm ready to do some coding. Today I'm adding web method logging to all the work item tracking web methods, which will log entry and exit to an admin DB for statistics gathering. It should be fairly simple, since other teams are using it right now and have provided an API for me to use.
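
The real work is in our .NET web methods and goes through the API those teams provided, but the pattern itself is easy to sketch: wrap each method so entry and exit get recorded to a stats database. Here's the idea in Python, with the table and method names invented for illustration:

    # Entry/exit logging for statistics gathering, sketched as a Python
    # decorator writing to SQLite. Table and method names are invented;
    # the real implementation uses the admin DB API mentioned above.
    import functools, sqlite3, time

    conn = sqlite3.connect("admin_stats.db")
    conn.execute("CREATE TABLE IF NOT EXISTS method_log "
                 "(method TEXT, event TEXT, ts REAL)")

    def logged(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            conn.execute("INSERT INTO method_log VALUES (?, 'enter', ?)",
                         (fn.__name__, time.time()))
            try:
                return fn(*args, **kwargs)
            finally:
                conn.execute("INSERT INTO method_log VALUES (?, 'exit', ?)",
                             (fn.__name__, time.time()))
                conn.commit()
        return wrapper

    @logged
    def get_work_item(item_id):  # stand-in for a real web method
        return {"id": item_id, "title": "example bug"}

    get_work_item(42)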

Have a great day,

Sam

 

This posting is provided "AS IS" with no warranties, and confers no rights.

Comments

  • Anonymous
    February 09, 2005
    That's a really interesting-sounding restore system you have there. I've been planning on refreshing my home system from scratch (hurrah for the MSDN Academic Alliance and good modern software) to eliminate about two years of built-up cruft.

    It would be nice to have a faster method of bringing it back to a known state. I especially like how you can create your own image; most of the system-imaging setups I've seen at work/university are based off standard images, which are OK but very limiting, as you can imagine.

    It seems to me that having all these separate labs working on code and pushing changes out to each other is a potential breeding ground for patch conflicts: different groups fixing the same bugs in different ways, so when you push the changes you have to decide between two versions of the same code. Do you have techniques in place to deal with this? Or does each lab have a specific area of the code it's allowed to work on, so as to prevent this kind of conflict?