Mining Work Items for Opportunities to Improve Your Engineering Process

One of the things I spend a lot of time doing as a test manager is mining for interesting data in our work item tracking system (TFS, of course).

For example, this morning, I went through a query of bugs that had been resolved as “won’t fix” but were also marked as regressions from previous releases.  Most of these were correctly resolved because we’ve rewritten some features and the new features have different/improved functionality.  But a few of them had truly worked in past releases and were now broken and for whatever reason our feature teams had decided not to fix them.  Because we had added the “regression” field to our bug form and because people had correctly filled out these fields, we can easily identify such issues and dig into whether or not these decisions were correct.
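
To make that concrete, here’s a minimal sketch of the kind of check I described above, assuming you export the bug query to CSV (the Excel integration works well for this) and that the export has Resolved Reason and Regression columns.  The column names and allowed values below are placeholders; yours will depend on how you’ve customized your bug form.

    import csv

    # Hypothetical CSV export of a resolved-bug query; column names and values
    # depend on how your bug form is customized.
    with open("resolved_bugs.csv", newline="", encoding="utf-8") as f:
        bugs = list(csv.DictReader(f))

    # Bugs resolved as "Won't Fix" that were also marked as regressions
    # from a previous milestone or release.
    suspects = [
        b for b in bugs
        if b["Resolved Reason"] == "Won't Fix"
        and b["Regression"] in ("Previous Milestone", "Previous Release")
    ]

    print(f"{len(suspects)} 'won't fix' regressions to review:")
    for b in suspects:
        print(f"  Bug {b['ID']}: {b['Title']} ({b['Regression']})")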

Out of the box, the bug forms in the Agile and CMMI process templates are fairly sparse (at least compared to our internal bug form).  If they don’t have fields to track the information you need, you’ll have to customize the forms.  The work item type (WIT) customization documentation walks through how to do this for TFS 2005, TFS 2008, and TFS 2010 Beta.

8 Tips to Help Your Data Mining

Of course, there’s a fine balance between overloading your bug form with tons of fields and keeping it simple enough that people aren’t discouraged from filling out all the fields properly.  Here are some tips I’ve learned to improve your chances of success:

  1. Work with your project stakeholders to decide which engineering processes you want to improve
  2. Prioritize the list
  3. Determine which bug form fields will give you the necessary insight into how those processes are working today
  4. Determine what length of time you’ll need to look for patterns.  Don’t assume a few data points make a trend.  Consider observing data during different phases of your development cycle (e.g. feature development vs. stabilization) and across phases.
  5. Do regular spot-checks to make sure the data your team is supplying is valid.  Are they taking the time to fill in the fields properly?  Are they skipping the fields altogether?  Check in with your team to get their feedback on the bug form.
  6. Don’t make the bug form part of the process problem by overloading it with too many fields.
  7. For critical process questions, mark the associated fields as required on the bug form.
  8. Understand the normal ranges for each of your metrics and drill down to understand anything that varies from the norm.  You could find areas where some teams are working much more efficiently than others and find opportunities to spread those practices to other teams.  Alternatively, you could also find areas in need of improvement.  (See the sketch after this list for one way to spot those outliers.)
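
As a rough illustration of that last tip, here’s a small sketch that computes a per-team regression rate from a CSV export and flags anything more than one standard deviation from the norm.  Area Path is standing in for “team” here, and the Regression values are placeholders from my form; adjust both to match your own.

    import csv
    from collections import defaultdict
    from statistics import mean, pstdev

    # Hypothetical export: one row per bug, with Area Path standing in for the
    # owning team and the custom Regression field from the bug form.
    with open("bugs.csv", newline="", encoding="utf-8") as f:
        bugs = list(csv.DictReader(f))

    totals = defaultdict(int)
    regressions = defaultdict(int)
    for b in bugs:
        team = b["Area Path"]
        totals[team] += 1
        if b["Regression"] != "Not a Regression":
            regressions[team] += 1

    # Regression rate per team, plus the norm across all teams.
    rates = {team: regressions[team] / totals[team] for team in totals}
    norm, spread = mean(rates.values()), pstdev(rates.values())

    # Flag anything more than one standard deviation from the norm -- in either
    # direction, since unusually low rates may be practices worth spreading.
    for team, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        flag = "  <-- worth a closer look" if abs(rate - norm) > spread else ""
        print(f"{team:35s} {rate:6.1%}{flag}")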

Data Mining Examples

Here are some other interesting queries I’ve run recently, the fields I use to gather data, and some follow-up questions I ask for each.

Example 1 – What is the ratio of bugs in automated tests due to product changes vs. bugs due to poorly coded tests?

Fields (values):

  • Issue Type (code defect, test defect)
  • Issue Level 01 (product change, test bug, infrastructure/lab/network issue)

Follow-up Questions:

  • Why do we have product changes getting checked in that break tests? 
  • Could those changes have been caught before check-in?  How? 
  • Did anyone run the tests before the code was checked in?
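
One way to get at that ratio, sketched under the same CSV-export assumption as before (the “Issue Level 01” column mirrors the custom field listed above, but your names and values may differ):

    import csv
    from collections import Counter

    # Hypothetical export of bugs filed against automated test failures,
    # using the custom "Issue Level 01" field to classify the root cause.
    with open("test_failure_bugs.csv", newline="", encoding="utf-8") as f:
        causes = Counter(row["Issue Level 01"] for row in csv.DictReader(f))

    # Product change vs. test bug vs. infrastructure issue, as a ratio.
    total = sum(causes.values())
    for cause, count in causes.most_common():
        print(f"{cause:35s} {count:4d}  ({count / total:.0%})")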

Example 2 - Do we have high priority bugs that are regressions from previous releases (points to test holes or cowboy check-ins)?

Fields (values):

  • Priority
  • Regression (previous milestone, previous release, not a regression)

Follow-up Questions:

  • Do we have unit tests for those scenarios we should have run before check-in?
  • Did anyone run the tests?
  • Why aren’t we fixing these high priority regressions? Is this the right decision?
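
A sketch of the matching drill-down, again with placeholder column names and values:

    import csv
    from collections import Counter

    # Hypothetical export of a bug query that includes Priority and the
    # custom Regression field.
    with open("bugs.csv", newline="", encoding="utf-8") as f:
        bugs = list(csv.DictReader(f))

    # Cross-tab of priority vs. regression type, ignoring non-regressions.
    tally = Counter(
        (b["Priority"], b["Regression"])
        for b in bugs
        if b["Regression"] != "Not a Regression"
    )

    for (priority, regression), count in sorted(tally.items()):
        print(f"P{priority}  {regression:20s} {count:4d}")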

Example 3 - Who is finding our bugs and how are they finding them?

Fields (values):

  • Source (feature tester, feature developer, feature test team, feature developer team, customer, etc.)
  • How Found (exploratory testing, unit test, integration test, dogfooding)

Follow-up Questions:

  • Who is finding the high priority bugs? 
  • Are testers/developers finding more bugs in functional units/end-to-end scenarios?  Why?  Is this the “right” mix?
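
A sketch of how you might slice this one, with the same caveats about placeholder names:

    import csv
    from collections import Counter

    # Hypothetical export with the custom Source and How Found fields plus Priority.
    with open("bugs.csv", newline="", encoding="utf-8") as f:
        bugs = list(csv.DictReader(f))

    # Who is finding bugs, and how are they finding them?
    by_source_and_method = Counter((b["Source"], b["How Found"]) for b in bugs)
    # And who is finding the high priority ones?
    high_pri_by_source = Counter(b["Source"] for b in bugs if b["Priority"] == "1")

    print("All bugs:")
    for (source, how), count in by_source_and_method.most_common():
        print(f"  {source:25s} via {how:22s} {count:4d}")

    print("High priority bugs by source:")
    for source, count in high_pri_by_source.most_common():
        print(f"  {source:25s} {count:4d}")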

Example 4 – Which bugs were not fixed in feature crews and instead became part of our technical debt?

This one might take a little explaining.  We use a parallel development model called “feature crews”.  This means small teams work on new features in branches until the features are “complete”, then they merge those changes up with a reverse-integration into a higher level branch.  We guide teams to fix all their feature bugs before integrating their work back into the main lines to avoid accumulating technical debt.

Fields (values):

  • Bug type (product bug, feature bug)
  • Feature ID (which feature work item was this bug related to?)

Follow-up Questions:

  • What types of feature bugs are getting turned into “product bugs”?  In other words, what technical debt are we accumulating and why?
  • Is one particular feature team accumulating more technical debt on average than others?  Why?
  • When were those bugs moved to product bugs?  Did the team push technical debt out of their feature crew too aggressively?
  • Which teams can I reward for consistently not contributing to technical debt?
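
And a sketch for the technical-debt question, under the same assumptions; Bug Type and Feature ID mirror the custom fields above, and Area Path again stands in for the feature crew that owned the work:

    import csv
    from collections import Counter

    # Hypothetical export of bugs with the custom Bug Type and Feature ID fields,
    # plus Area Path as a stand-in for the owning feature crew.
    with open("bugs.csv", newline="", encoding="utf-8") as f:
        bugs = list(csv.DictReader(f))

    # Feature bugs that were reclassified as product bugs, i.e. technical debt
    # that was pushed out of a feature crew instead of being fixed there.
    debt = [b for b in bugs if b["Bug Type"] == "Product bug" and b["Feature ID"]]

    print("Technical debt carried out of feature crews, by team:")
    for team, count in Counter(b["Area Path"] for b in debt).most_common():
        print(f"  {team:35s} {count:4d}")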

Wrapping Up…

I hope this post helps ignite some conversations around how to identify opportunities for process improvement in your organizations. 

Remember, you get what you measure, so measure what you get!
