Automation maturity - what should a Test Manager focus on?

Automation is frequently the subject of passionate debate, usually around how much to automate and whether it is effective. But are engineering managers prepared for the effects of automation as it grows? Instead of focusing on whether or not to automate, or by how much, let's focus on what having automation on an engineering team means for the manager, assuming the team has already decided the correct balance of what needs to be automated and what doesn't (and in what priority).


The infancy of automation: Initially, a team may say they have automation. When I drill down on this, I often learn that they don't actually have test cases automated; instead, they have only written tools to help with parts of the testing process, such as installation/setup/deployment tools or tools for emulating inputs from an unreliable dependency. There is a difference between writing tools and writing automation (although that line can blur when describing a test harness or execution engine).
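To make that distinction concrete, here is a minimal Python sketch using hypothetical names (deploy_build, checkout_total): the first function is a tool that helps testing happen but renders no verdict, while the second pair is automation because it exercises product behavior and asserts an expected result.

```python
# Sketch only: illustrates "tool" vs "automation"; names and the service are hypothetical.
import shutil
import subprocess


def deploy_build(build_path: str, target_dir: str) -> None:
    """Tool: copies a build to a test machine and restarts the service. No verdict is produced."""
    shutil.copytree(build_path, target_dir, dirs_exist_ok=True)
    subprocess.run(["systemctl", "restart", "myservice"], check=True)  # hypothetical service name


def checkout_total(prices: list[float], tax_rate: float) -> float:
    """Stand-in for the product code under test."""
    return round(sum(prices) * (1 + tax_rate), 2)


def test_checkout_total() -> None:
    """Automation: runs product code and asserts the expected outcome."""
    assert checkout_total([10.00, 5.50], tax_rate=0.10) == 17.05


if __name__ == "__main__":
    test_checkout_total()
    print("test_checkout_total passed")
```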


Establishing the automation report: As teams get better at automation and their automation grows, managers can benefit by pulling reports from it. This reporting is an essential outcome of having automation and one a manager should focus on. At times, I have started generating the reports before the automation is written, just to help the team focus on what needs to be done. The report could be as simple as listing the builds, the BVT (Build Verification Test) pass rate, and the percentage of BVTs that are automated. One can argue that BVTs should always pass 100%, but let's save that discussion for another time. As the team completes automation for BVTs, I start reporting on functional automation results and code coverage numbers.
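As a rough illustration of that kind of report, here is a minimal Python sketch assuming a hypothetical per-build summary (passed/failed BVT counts and how many BVTs are automated). The real data would come from whatever results store the team uses.

```python
# Sketch only: the BuildResult shape and sample numbers are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class BuildResult:
    build_id: str
    bvts_passed: int
    bvts_failed: int
    bvts_automated: int
    bvts_total: int  # total BVTs defined, automated or not


def bvt_report(results: list[BuildResult]) -> str:
    """Produce a simple text report: build, BVT pass rate, and % of BVTs automated."""
    lines = ["build         pass rate   % BVTs automated"]
    for r in results:
        executed = r.bvts_passed + r.bvts_failed
        pass_rate = 100.0 * r.bvts_passed / executed if executed else 0.0
        automated = 100.0 * r.bvts_automated / r.bvts_total if r.bvts_total else 0.0
        lines.append(f"{r.build_id:<13} {pass_rate:7.1f}%   {automated:10.1f}%")
    return "\n".join(lines)


if __name__ == "__main__":
    print(bvt_report([
        BuildResult("1.0.1234.0", bvts_passed=48, bvts_failed=2, bvts_automated=50, bvts_total=80),
        BuildResult("1.0.1235.0", bvts_passed=55, bvts_failed=0, bvts_automated=55, bvts_total=80),
    ]))
```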


A significant location change: As the team continues to write automation, running the growing suite on their office machines or a specific VM starts to become a bottleneck. It is key that the Engineering Manager thinks ahead and plans for this with the beginnings of an automation lab or a group of VMs in the cloud. Continuing to run automation on engineers' office machines takes up machine time and limits the coverage that can be achieved. Using a lab or cloud-hosted VMs allows the automation to run on different software and hardware environments, catching bugs that could never be found by running on the same machine day after day in an engineer's office. The automation lab also makes reproducible results an achievable goal, because the test automation can run on the same type of virtual machines every day.
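A minimal sketch of what planning such lab runs might look like, assuming a hypothetical environment matrix of operating systems, machine sizes, and suites; the submission step is a placeholder for whatever lab or cloud API the team actually uses.

```python
# Sketch only: the OS names, machine sizes, and suite names are illustrative assumptions.
from itertools import product

OPERATING_SYSTEMS = ["Windows Server 2022", "Windows 11", "Ubuntu 22.04"]
MACHINE_SIZES = ["2-core/8GB", "8-core/32GB"]
SUITES = ["BVT", "functional"]


def plan_runs() -> list[dict]:
    """Expand the matrix into one run definition per OS/size/suite combination."""
    return [
        {"os": os_name, "size": size, "suite": suite}
        for os_name, size, suite in product(OPERATING_SYSTEMS, MACHINE_SIZES, SUITES)
    ]


if __name__ == "__main__":
    for run in plan_runs():
        # In a real lab this line would submit a job to a VM pool instead of printing.
        print(f"queue {run['suite']:<10} on {run['os']:<19} ({run['size']})")
```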


The overgrown automation suite: When I have a team that is mature in their processes around writing automation, there are a few issues that need focus, or the efficiency of the automation starts to suffer. The two biggest problems I have seen are legacy test automation and analysis paralysis.


Legacy automation is automation code that was written years ago, probably by someone no longer on the team. It tests key features in the product, or at least that's what everyone thinks. The team is usually afraid to change or affect this automation in any way out of concern that coverage will diminish. But the automation may also make running the whole suite take a very long time. If you're lucky, it always passes, because investigating a failure becomes difficult when nobody on the team knows the code well. And if it always passes, it is questionable whether the automation is truly still testing things correctly. Is it cost effective to investigate this automation, verify its correctness, and modernize it to current technologies? That depends on many factors within the team.


Analysis paralysis is when too much automation is run on too many machines too frequently. Is that really possible? Yes, it is. When that happens and the results come back as anything less than 100% passing (which is always the case), the engineering team has to work out why the automation failed and whether the bug is in the automation code or the product code. Of course, that is what they should do; it is part of the expectation that comes with having automation. The key point is that too much of a good thing can overload the team to the point that they are blocked from doing anything else, because all their time is spent investigating automation failures. But if they don't investigate the failures to understand why the results aren't at 100%, is that OK? If your automation passes at less than 100%, or bounces around a lot, is it still beneficial to run it? Are you missing key bugs? Those are the questions I ask in situations like this. The goal is to have robust automation that has been proven to produce accurate results.
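One lightweight way to put numbers behind the "bounces around a lot" question is to look at the variance of each suite's pass rate over recent runs. The sketch below assumes hypothetical history data and an illustrative 5-point threshold; it is not a standard method, just one way to flag suites whose results should not yet be trusted without investigation.

```python
# Sketch only: the history data and the max_stdev threshold are assumptions for illustration.
from statistics import pstdev


def unstable_suites(history: dict[str, list[float]], max_stdev: float = 5.0) -> list[str]:
    """Return suites whose daily pass rate (%) varies by more than max_stdev points."""
    flagged = []
    for suite, pass_rates in history.items():
        if len(pass_rates) >= 2 and pstdev(pass_rates) > max_stdev:
            flagged.append(suite)
    return flagged


if __name__ == "__main__":
    history = {
        "bvt":        [100.0, 100.0, 98.0, 100.0],  # stable
        "functional": [92.0, 100.0, 71.0, 88.0],    # bounces around -> investigate
    }
    for suite in unstable_suites(history):
        print(f"{suite}: pass rate is unstable; investigate before trusting results")
```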


I have managed teams at each of these levels of automation maturity: teams with no automation, teams with too much automation, and teams that ran automation labs and produced results daily. There is no single solution that works. But as a manager, I found that staying aware of how much automation my team has, and watching closely whether the automation is a benefit or a burden, is key to allowing the automation to be effective in improving product quality.