Back to working efficiently
A quick recap - I had been intentionally performing a lot of my testing work manually in order to track where my time was going. Once I had that list, I set out to build a tool to take over as much of that manual work as possible.
The tool parses the log files looking for errors. If it finds any, it shows those lines from the log - at most, this might be 10 lines out of a 20MB log. That is much easier to scan to decide whether more investigation is required, and if there are no errors, it saves all the time of finding, opening and searching the log manually. I figure this saves 20 minutes per day, conservatively.
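To make the idea concrete, here is a minimal sketch of that kind of log scan in Python. Everything specific here is my own assumption, not the actual tool: the "logs" directory, the .log extension, and the ERROR/FATAL/EXCEPTION patterns would all depend on what your product's logs actually look like.

```python
import re
from pathlib import Path

# Hypothetical error markers - match whatever your logs actually use.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|EXCEPTION)\b", re.IGNORECASE)

def scan_log(log_path):
    """Return (line_number, line) pairs for every error line in the log."""
    hits = []
    with open(log_path, "r", errors="replace") as log:
        for number, line in enumerate(log, start=1):
            if ERROR_PATTERN.search(line):
                hits.append((number, line.rstrip()))
    return hits

if __name__ == "__main__":
    # Assumed location - point this at wherever your logs land.
    for log_file in Path("logs").glob("*.log"):
        for number, line in scan_log(log_file):
            print(f"{log_file}:{number}: {line}")
```

The point is that a 20MB log collapses down to the handful of lines that actually matter, so the daily check becomes a glance instead of a search.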
Tracing the root cause of an error is still a manual task. Building a tool for that would be very difficult, and I had a secondary goal of minimizing the time I spent on the tool itself.
We can now get a partial call stack from memory dumps, which helps determine whether the error from the tool run has already been reported. Not a huge saving on its own, but my tool also tells you whether you have any memory dumps to investigate at all. Call this 2 minutes per day saved.
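The "do you have any dumps at all" part of that check is simple; a sketch might look like the following. The dump directory and the .dmp extension are assumptions on my part, and extracting a partial call stack from a dump is a separate, platform-specific job I have left out.

```python
from pathlib import Path

def find_memory_dumps(dump_dir="dumps"):
    """List any memory dump files waiting to be investigated.

    The directory name and the .dmp extension are assumptions -
    adjust to wherever your crash dumps actually get written.
    """
    return sorted(Path(dump_dir).glob("*.dmp"))

dumps = find_memory_dumps()
if dumps:
    print(f"{len(dumps)} memory dump(s) to investigate:")
    for dump in dumps:
        print(f"  {dump}")
else:
    print("No memory dumps found.")
```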
The last few items on my list only cost about 2 minutes per day to perform manually, so there is not much savings possible there.
Lastly, having this report consolidated in one location saves a minute or two each day.
So that is 24 minutes saved per day. If 15 other folks use this, that is 15 * 24 = 360 minutes, or 6 hours, saved per day, which works out to a nice 30 hours per week - almost a "full time tester" worth of savings. And if the entire test team starts using the tool (it runs automatically, so there is no real barrier to adoption), the "savings" can exceed a full time tester per week.
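For anyone who wants to check my math, the whole calculation fits in a few lines (the 15 testers and the 5-day week are the assumptions from the paragraph above):

```python
minutes_saved_per_tester_per_day = 24
testers = 15
work_days_per_week = 5

daily_minutes = minutes_saved_per_tester_per_day * testers  # 360 minutes
daily_hours = daily_minutes // 60                           # 6 hours
weekly_hours = daily_hours * work_days_per_week             # 30 hours

print(f"{daily_minutes} minutes/day = {daily_hours} hours/day "
      f"= {weekly_hours} hours/week")
```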
All this for a four- or five-hour investment in getting the tool in place. At 24 minutes per day, it paid for itself in about two weeks of my own use alone. Not a bad deal, if I do say so myself.
Oh, and for astute readers: I was out on Monday, so this follow-up got delayed a bit.
Questions, comments, concerns and criticisms always welcome,
John